The focus of this article is the cloud-specific steps rather than repeating steps that are already well documented on the net. Its purpose is only to give you an idea for a use case; it should not be used in a production environment.

The high level setup:

Two nodes (VirtualBox), Oracle Linux 7.
Oracle Grid Infrastructure 12.2.
Oracle RAC 12.2, storage on ASM.

Target (to be created) in Oracle cloud:
Single node, Oracle Linux 7.
Oracle GI/Restart 12.2.
Oracle Database 12.2, storage on ASM.

You can find further details of the instances' configuration (pfiles, tns/listener entries, RMAN scripts, ssh config, etc.) here.

For simplicity's sake all the communication will be based on SSH tunnels. All ports used will be the defaults. Our source database is tiny (about 3 GB), so it should serve the purpose. OCI offers enterprise-level network connectivity: you can connect your premises to the cloud via VPN with Juniper, Cisco and other popular providers, and if you need guaranteed bandwidth, Oracle has agreements with certain ISPs. There are real-world examples of backing up a 1 TB database with a 250 GB backup set size to the cloud within 40 minutes.


+------------+
| RAC NODE1  |
| ASM1       |                                              +-----------------+
+------------+                +-------------+               |  Physical copy  |
              <= SSH tunnel=> |   BASTION   |<= SSH tunnel=>|  Oracle Restart |
+------------+                |   SERVER    |               |       ASM       |
| RAC NODE2  |                +-------------+               +-----------------+
| ASM2       |
+------------+
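A minimal sketch of how the tunnels above could be opened, assuming the bastion is reachable as bastion.example.com and the hostnames shown are placeholders (they are not from the original article):

```shell
# On the cloud host zdb02: forward local port 1521 through the bastion
# to the on-premises listener, so the clone can reach the source database.
ssh -f -N -L 1521:racnode1.example.com:1521 opc@bastion.example.com

# On a RAC node: forward local port 1521 through the bastion to the
# listener on the cloud host.
ssh -f -N -L 1521:zdb02.example.com:1521 opc@bastion.example.com
```

With tunnels like these in place, both sides talk to "localhost:1521" and the traffic rides over SSH.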



OCI offers a friendly interface for defining our objects in the cloud. We are going to create a VCN (virtual cloud network) with standard definitions and we’ll add connectivity on TCP port 1521 - the listener that will be running on the host in the cloud.

Navigate from the top menu Networking then Virtual Cloud Network.

Press Create Virtual Cloud Network and select the compartment where the VCN should be created.
Give it a descriptive name and select Create VCN plus related resources - this way Oracle takes care of subnets, routing, gateways, etc.
Remember to keep all objects created under the same compartment and same availability domain.
Once ready press Create VCN button and you will get a summary of what has been created.
To allow incoming connections on the instance's default listener port 1521 we need to add an ingress rule: navigate Networking, VCNs, Security Lists and select the VCN created above (zVCN01).
Then click on Edit All Rules:
In the ingress rule at the top (TCP port 22), add port 1521 and save the changes.

Compute instance

Here we create the instance; it will be the smallest possible shape, VM.Standard1.1: 7 GB memory and 1 OCPU. You can find further details on the shapes here.
Navigate Compute, Instances:

Press Launch Instance:
Fill in “NAME”, select the availability domain and the Oracle image “Oracle Linux 7.4”, shape VM.Standard1.1, the latest image build, and the VCN created above. Check the box to get a public IP assigned - this way Oracle will connect the instance to the Internet. As mentioned earlier, this is for test purposes only; later on we are going to force Oracle Net encryption between the nodes.
In the last box you can paste or upload the public key of a pair generated earlier. This article shows how to generate and use the keys.
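If you do not have a key pair yet, one way to generate it (the file name here is just an example):

```shell
# Generate a 2048-bit RSA key pair; the private key stays on your
# workstation, the public key goes into the launch dialog.
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/oci_instance_key

# This is the text to paste into the SSH key box:
cat ~/.ssh/oci_instance_key.pub
```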

OS configuration

At this stage we have a box with Oracle Linux 7 installed. Note that the user to connect as is “opc” with no password; use the private key corresponding to the public key used at instance creation.
Please follow the instructions here to install and configure the required OS packages, users, groups, memory parameters and user limits, and to disable SELinux.

Custom setup following the OS config:

[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/grid 
[opc@zdb02 ~]$ sudo chown -R oracle:oinstall /u01/app 
[opc@zdb02 ~]$ sudo chmod 775 /u01/app 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/grid/product/12.2.0/gridhome 
[opc@zdb02 ~]$ sudo chown -R grid:oinstall /u01/app/grid 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oracle 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oracle/product/12.2.0/dbhome_1 
[opc@zdb02 ~]$ sudo chown -R oracle:oinstall /u01/app/oracle 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oraInventory 
[opc@zdb02 ~]$ sudo chown grid:oinstall /u01/app/oraInventory 
[opc@zdb02 ~]$ sudo chmod 770 /u01/app/oraInventory  

[opc@zdb02 ~]$ sudo -u grid vi /home/grid/.bash_profile 
export ORACLE_BASE=/u01/app/grid 
export ORACLE_HOME=/u01/app/grid/product/12.2.0/gridhome 
export PATH=$ORACLE_HOME/bin:$PATH 
export ORACLE_SID="+ASM" 

[opc@zdb02 ~]$ sudo -u oracle vi /home/oracle/.bash_profile 
export ORACLE_BASE=/u01/app/oracle 
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 
export ORACLE_SID=cdb1s1  

Download Oracle grid and database software from .
Unzip grid software with the same user in GRID_HOME:

[opc@zdb02 ~]$sudo su - grid  
[grid@zdb02 ~]$unzip /home/grid/install/ -d /u01/app/grid/product/12.2.0/gridhome  

With the oracle user, unzip the database software to a temporary location:

[opc@zdb02 ~]$sudo su - oracle  
[oracle@zdb02 ~]$unzip -d /home/oracle/install/  

Open listener port:

[opc@zdb02 install]$ sudo firewall-cmd --zone=public --permanent --add-port=1521/tcp 
[opc@zdb02 ~]$ sudo firewall-cmd --reload 
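A quick way to confirm the rule survived the reload (not in the original article, but firewalld provides it out of the box):

```shell
# List the open ports in the public zone; 1521/tcp should appear.
sudo firewall-cmd --zone=public --list-ports
```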

Attaching block storage

I am going to attach 100 GB of block storage to zdb02 that will be used for the ASM instance.
Current state:

[opc@zdb02]$ lsscsi -i
[2:0:0:0] storage IET Controller 0001 - -
[2:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sda 36035e5779ab0470e999f5048f8da5a09

[opc@zdb02 install]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 46.6G  0 disk
├─sda1   8:1    0  512M  0 part /boot/efi
├─sda2   8:2    0    8G  0 part [SWAP]
└─sda3   8:3    0 38.1G  0 part /

Navigate Storage, Block volume and press Create block volume:

Select the same compartment and availability domain where the instance resides. Give it a descriptive name (zdb02_asm_data01) and the required size, then confirm the block volume creation.
Once it is provisioned we can attach it to the instance. Navigate Compute, Instances and select zDB02.
Press Attach Block Volume and select from the drop down menus the one we created earlier and press Attach:
Now we need to configure zdb02 to “see” the attached storage. Oracle provides all the required commands next to each attached volume:
Select iSCSI Commands & Information and copy all the commands from the Attach Commands pane.
Execute the commands as opc user:

[opc@zdb02]$ sudo iscsiadm -m node -o new -T -p 
[opc@zdb02]$ sudo iscsiadm -m node -o update -T -n node.startup -v automatic 
[opc@zdb02]$ sudo iscsiadm -m node -T -p -l  
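Before looking for the new device, one way to confirm the iSCSI login succeeded (a verification step I find useful; it is not part of the OCI-supplied commands):

```shell
# List active iSCSI sessions; the newly attached target should appear.
sudo iscsiadm -m session
```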

For comparison, we can now see the sdb device:

[opc@zdb02]$ lsscsi -i
[2:0:0:0] storage IET Controller 0001 - -
[2:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sda 36035e5779ab0470e999f5048f8da5a09
[3:0:0:0] storage IET Controller 0001 - -
[3:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sdb 360d0c86e73b74b6f826f979ec7a73153
[opc@zdb02]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 46.6G  0 disk
├─sda1   8:1    0  512M  0 part /boot/efi
├─sda2   8:2    0    8G  0 part [SWAP]
└─sda3   8:3    0 38.1G  0 part /
sdb      8:16   0  100G  0 disk
Partition the device:

[opc@zdb02]$ sudo su - 
[root@zdb02 ~]# parted /dev/sdb mklabel gpt  
[root@zdb02 ~]# parted /dev/sdb mkpart primary 0% 100%  
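To double-check the label and the new partition before wiring up udev (an optional sanity check, not in the original steps):

```shell
# Print the partition table; expect a gpt label and one primary partition.
parted /dev/sdb print
```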

Create udev rules, using the id returned from the lsscsi command above:

[root@zdb02 ~]# vi /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="360d0c86e73b74b6f826f979ec7a73153", SYMLINK+="oracleasm/asm-disk01", OWNER="grid", GROUP="dba"
Refresh OS partition information and reload udev rules:

[root@zdb02 ~]#partprobe /dev/sdb 
[root@zdb02 ~]#partprobe /dev/sdb1  
[root@zdb02 ~]#udevadm control --reload-rules  
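On some systems reloading the rules alone does not re-process devices that already exist; triggering a change event makes udev re-evaluate the partition (a sketch, not from the original article):

```shell
# Re-run the udev rules against existing block devices and wait for
# the event queue to drain, then verify the symlink and ownership.
udevadm trigger --type=devices --action=change
udevadm settle
ls -l /dev/oracleasm/
```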

You should get something like this:

[root@zdb02 ~]# ls -l /dev/oracleasm/
total 0
lrwxrwxrwx. 1 root root 7 Dec 8 08:42 asm-disk01 -> ../sdb1
[root@zdb02 ~]# ls -l /dev/sdb1
brw-rw---- 1 grid dba 8, 17 Dec 11 13:14 /dev/sdb1
Make sure the partition created (sdb1 in our case) is owned by the user who will own GI installation.

Grid setup

[grid@zdb02 gridhome]$cd /u01/app/grid/product/12.2.0/gridhome
[grid@zdb02 gridhome]$ ./ -silent -force -responseFile /home/grid/install/grid_standalone.rsp
As root:


As grid user:

[grid@zdb02 gridhome]$ /u01/app/grid/product/12.2.0/gridhome/ -executeConfigTools -responseFile /home/grid/install/grid_standalone.rsp -silent  

[grid@zdb02 gridhome]$ crsctl stat res -t  
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
               ONLINE  ONLINE       zdb02                    STABLE
               ONLINE  ONLINE       zdb02                    STABLE
               ONLINE  ONLINE       zdb02                    Started,STABLE
               OFFLINE OFFLINE      zdb02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
      1        ONLINE  ONLINE       zdb02                    STABLE
      1        OFFLINE OFFLINE                               STABLE
      1        ONLINE  ONLINE       zdb02                    STABLE
--------------------------------------------------------------------------------

Database software only

As oracle user:

[oracle@zdb02]$ cd /home/oracle/install/database
[oracle@zdb02]$ ./runInstaller -silent -force -responseFile /home/oracle/install/12.2db.rsp
As root:


Start the “auxiliary” instance in nomount state using a simple init file:

[oracle@zdb02 dbs]$ vi $ORACLE_HOME/dbs/initcdb1s1.ora 
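The article links the actual pfile; as a minimal sketch, the auxiliary init file only needs the database name, since everything else is set or reset during the RMAN duplicate (an assumption on my part, the linked pfile is authoritative):

```shell
# Minimal init file for the auxiliary instance - a sketch only;
# db_name must match the source database (cdb1 here).
cat > "$ORACLE_HOME/dbs/initcdb1s1.ora" <<'EOF'
db_name='cdb1'
EOF
```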

[oracle@zdb02 dbs]$ sqlplus / as sysdba  

SQL*Plus: Release Production on Sun Dec 99 14:36:52 2017  
Copyright (c) 1982, 2016, Oracle. All rights reserved.  
Connected to an idle instance.  

SQL> startup nomount 
ORACLE instance started.  
Total System Global Area 2147483648 bytes 
Fixed Size 8622776 bytes 
Variable Size 503319880 bytes 
Database Buffers 1627389952 bytes 
Redo Buffers 8151040 bytes  

 SQL> exit  

Oracle Net Services

A reminder that we are using an SSH tunnel from the cloud to our premises. In the other direction a tunnel is not required. The SSH config setup for the zdb02 host can be found here.
The Oracle Net services configuration, including transport encryption, is available here.

In addition, the primary and copy databases share the same password file for authentication. Please note that the listener's static registration is enabled on purpose, as during the RMAN clone the auxiliary database has to be bounced.


[oracle@zdb02 dbs]$ rman target sys@cdb1 auxiliary sys@cdb1s1
RMAN> run {
allocate channel p1 type disk;
allocate auxiliary channel s1 type disk;

duplicate target database
for standby
from active database
dorecover
spfile
set instance_name='cdb1s1'
set db_unique_name='cdb1s1'
set db_domain=''
set sga_target='2g'
set pga_aggregate_target='512m'
reset sga_max_size
reset audit_trail
reset audit_file_dest
reset dispatchers
reset local_listener
reset cluster_database
nofilenamecheck;
}
The complete output log of the above execution is here.
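Once the duplicate finishes, a quick sanity check on the clone could look like this (a sketch; given the "for standby" clause, the reported role should be a standby one):

```shell
# Connect to the freshly cloned instance and check its name,
# unique name and database role.
sqlplus -s / as sysdba <<'EOF'
select name, db_unique_name, database_role from v$database;
EOF
```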

At this stage you have a clone of your source database in the cloud, a few steps away from becoming a primary or a Data Guard standby, depending on your needs.