ROAD for Oracle VM Runbooks Repository

ROAD for Oracle VM - Availability Protection, Disaster Recovery and Operations Automation, Ready for Business Reporting and Operations Toolkit

 
There are many ways to organize runbooks and the files they include. This document describes example runbooks that illustrate many of these techniques.
 
ROAD for Oracle VM runbooks contain the configurations for an automated task or process. Runbooks can declare configurations, as well as orchestrate the steps of an ordered process.
 
Runbooks are designed to be human-readable text files using simple keyword value pairs, i.e. keyword = value. Runbooks are used in conjunction with ROAD for Oracle VM commands that invoke the Oracle VM CLI, and native Linux commands and shell scripts. Runbooks are very flexible, and can be configured to adapt to your Oracle VM automation needs.
 
The Jedi Release of ROAD for Oracle VM has the following built-in automations:
  • Reset the Oracle VM Manager Database to a clean first login state
  • Reset an Oracle VM server’s cluster configurations to a clean state
  • Back up and restore Oracle VM Manager user-friendly names, server pool, network, and vlan descriptions, and tags
  • Orchestrate starting, stopping, and migrating virtual machines with ordered process
  • Orchestrate a complete Oracle VM Manager UUID restore on the same host or a different host using a runbook with the server pools running configurations
  • Orchestrate importing block (OCFS2) and file (NFS) storage repositories including changing Oracle VM Manager UUIDs, virtual machine network IDs, as well as virtual and physical disk (source and target) mappings.
  • Orchestrate service window changes
  • Orchestrate disaster recovery failover testing
  • Orchestrate resetting Oracle VM Managers and servers to a clean first login state
  • Orchestrate migrating Oracle VM Manager 3.2 from an Oracle 11G database to MySQL.
 
ROAD for Oracle VM runbooks have the following minimum keyword value pair requirements:
mokum.log.loc = /tmp/mokum_utils.
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = yourpassword
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = yourpassword
 
The following list describes runbook requirements:
  • Runbooks must be configured prior to running any of the commands
  • Runbooks contain a set of lines representing “keyword = value” pairs
  • Runbooks may also contain blank lines, and comment lines. Comment lines begin with a “#” (hash) and continue until the end of the line.
  • Keywords must begin on the first character of a line
  • Keywords are followed by a space, then an equal sign, then a space, and then a value, i.e. keyword = value
  • Values may be single values, or a comma separated list
  • A value's format is dependent upon the keyword definition
  • A runbook must be selected before commands can be run
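Because the runbook grammar above is so small, a value can be read back with ordinary shell tools. The following sketch is illustrative only; the get_runbook_value helper and the example file path are assumptions made for this example, not part of the toolkit.

```shell
#!/bin/sh
# Illustrative sketch only: look up the value for a keyword in a runbook.
# It honors the rules above: comment lines begin with '#', keywords begin
# on the first character of a line, and pairs are written "keyword = value".
# get_runbook_value is a hypothetical helper, not part of the toolkit.
get_runbook_value() {
    runbook=$1
    keyword=$2
    # Drop comment lines, then match "keyword = " at the start of a line
    # and print everything after the " = " separator.
    grep -v '^#' "$runbook" | sed -n "s/^${keyword} = //p"
}

# Example usage against a throwaway runbook:
cat > /tmp/example.conf <<'EOF'
# comment lines are ignored
cli.host = localhost
cli.port = 10000
EOF
get_runbook_value /tmp/example.conf cli.port   # prints 10000
```

The same pattern extends to comma separated values; the caller splits the returned string on commas as needed.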
 
The following describes the ROAD for Oracle VM Jedi release commands, including a description of each command and the runbook keywords (i.e. keyword = value) it uses.
Command Name: ovm_wipedb.sh
Description: This command wipes an Oracle VM Manager's MySQL database repository, resulting in an empty “like new” state.
Runbook Keywords: This command does not use a runbook.

Command Name: restore_manager.sh
Description: This command automates the process of performing an Oracle VM Manager UUID restore on the same or a different Oracle VM Manager host.
Runbook Keywords:
ovs.servers
ovs.server.names
ovs.nfsadmin.servers
ovs.fcsanadmin.servers
ovs.iscsisanadmin.servers
ovs.nfsrefresh.servers
ovs.storage.type
ovs.storage.plugin
ovs.nfs.plugin
ovs.nfsstorage.name
ovs.scsistorage.name
ovs.nfsstoragename.accessHosts
ovs.scsistoragename.accessHosts
ovm.ntp.servers
ovm.yum.baseURL
ovm.yum.gpgkeycheck
ovm.yum.gpgkey
ovm.repo.server
ovs.nic.network
ovs.host.nic.network

Command Name: save_nice_names.sh
Description: This command creates a text and tar file with the Oracle VM Manager user-friendly names, network, vlan and server pool descriptions, and tags that can be used to restore the objects.
Runbook Keywords:
mokum.util.otypes
mokum.nicenames.path

Command Name: restore_nice_names.sh
Description: This command restores the Oracle VM Manager user-friendly names, network, vlan and server pool descriptions, and tags using the output files from the save_nice_names.sh command.
Runbook Keywords:
mokum.util.otypes
mokum.nicenames.path

Command Name: status_vms.sh
Description: This command checks and prints the status of all virtual machines to the terminal.
Runbook Keywords: Minimum keyword value pair requirements

Command Name: start_vms.sh
Description: This command attempts to start the virtual machines based upon the list of virtual machines specified in the runbook.
Runbook Keywords:
ovs.start.vm

Command Name: shutdown_vms.sh
Description: This command attempts to shut down the list of virtual machines specified in the runbook.
Runbook Keywords:
ovs.stop.vm
ovs.vm.killwait (optional)

Command Name: migrate_vms.sh
Description: This command attempts to migrate virtual machines from the Unassigned Virtual Machines folder, or running or stopped (3.2 only) virtual machines, between Oracle VM server pool members based upon the list of virtual machines specified in the runbook.
Runbook Keywords:
ovs.migrate.assignedvm

3.2+ Only:
ovs.migrate.unassignedvm
Note: Uncomment only one of “ovs.migrate.assignedvm” or “ovs.migrate.unassignedvm” at any one time in a runbook.

3.3+ Only:
ovs.migrate.runningvm
Note: Uncomment only one of “ovs.migrate.runningvm” or “ovs.migrate.unassignedvm” at any one time in a runbook.

Command Name: rename_vdisks.sh
Description: This command renames all virtual disks that are assigned to virtual machines following a standard naming convention: virtual disk names start with the virtual machine name followed by its disk slot number.
Runbook Keywords: Minimum keyword value pair requirements

Command Name: rename_pdisks.sh
Description: This command renames all physical disks that are assigned to virtual machines following a standard naming convention: physical disk names start with the virtual machine name followed by its disk slot number.
Runbook Keywords: Minimum keyword value pair requirements

Command Name: rename_allvmdisks.sh
Description: This command renames all virtual and physical disks that are assigned to virtual machines following a standard naming convention: virtual and physical disk names start with the virtual machine name followed by its disk slot number.
Runbook Keywords: Minimum keyword value pair requirements

Command Name: import_file_repo.sh
Description: This command is used to migrate NFS storage repositories between server pools. The repository migration includes changing source and target bridge IDs, as well as virtual and physical disk mappings in the vm.cfg files.
Runbook Keywords:
ovs.migrate.filerepo

Command Name: .ovs_import_file_repo.sh
Description: This command is not executed by the user directly; it is copied to the target Oracle VM server when the import_file_repo.sh command is run. Bidirectional ssh key based authentication is required between the source Oracle VM Manager host and the target Oracle VM server.
Runbook Keywords: None

Command Name: import_block_repo.sh
Description: This command is used to migrate block (iSCSI or Fibre Channel) storage repositories between server pools. The repository migration includes changing source and target bridge IDs, as well as virtual and physical disk mappings in the vm.cfg files.
Runbook Keywords:
ovs.migrate.blockrepo

Command Name: .ovs_import_block_repo.sh
Description: This command is not executed by the user directly; it is copied to the target Oracle VM server when the import_block_repo.sh command is run. Bidirectional ssh key based authentication is required between the source Oracle VM Manager host and the target Oracle VM server.
Runbook Keywords: None
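The import commands above rewrite source-to-target disk mappings inside vm.cfg files. The sketch below only illustrates that kind of substitution; it is not the toolkit's actual implementation, and the target disk ID and vm.cfg path are fabricated for the example.

```shell
#!/bin/sh
# Illustrative only: remap a source physical disk ID to its target ID
# inside a vm.cfg file, the kind of rewrite a repository import performs.
# The target ID and the vm.cfg path below are fabricated for this sketch.
SRC_DISK=360014052c41ebfcd3c85d343adb806d3
TGT_DISK=36001405ffffffffffffffffffffffff

cat > /tmp/vm.cfg <<EOF
disk = ['phy:/dev/mapper/${SRC_DISK},xvda,w']
EOF

# Rewrite the mapping in place (GNU sed -i).
sed -i "s/${SRC_DISK}/${TGT_DISK}/g" /tmp/vm.cfg
cat /tmp/vm.cfg
```

A real migration would apply one such mapping per disk, which is why the import runbook keywords take source and target pairs.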
 
The following example shows a command, migrate_vms.sh, being run along with the runbook selection prompt. By default, the runbook selection prompt is displayed when a command is run. Select a runbook by entering its full path, then press Enter to run the command with the runbook automation.
# ./migrate_vms.sh
Please select the appropriate runbook from the below list.
/opt/mokum/etc/ovm-prod-3.2-VM-Status.conf
/opt/mokum/etc/ovm-prod-3.2-Save-NiceNames.conf
/opt/mokum/etc/ovm-prod-3.2-Restore-NiceNames.conf
/opt/mokum/etc/ovm-prod-3.2-Full-Restore.conf
/opt/mokum/etc/ovm-prod-3.2-DR-Testing-MigrateStorage-StartVMs.conf
/opt/mokum/etc/ovm-prod-3.2-Stop-MigrateUnassigned-Start-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Migrate-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Sunday-11pm-Full-Restore-Stop-MigrateUnassigned-Start-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Wednesday-11pm-Migrate-VMs.conf
Please enter the full file path here : /opt/mokum/etc/ovm-prod-3.2-Migrate-VMs.conf
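A selection prompt of this shape can be sketched in a few lines of shell. This is an illustration of the behavior only, not the toolkit's own code; RUNBOOK_DIR and select_runbook are names invented for the example.

```shell
#!/bin/sh
# Illustrative sketch of the runbook selection prompt shown above; the
# real commands keep their runbooks under /opt/mokum/etc. RUNBOOK_DIR and
# select_runbook are assumptions made for this sketch.
RUNBOOK_DIR="${RUNBOOK_DIR:-/opt/mokum/etc}"

select_runbook() {
    echo "Please select the appropriate runbook from the below list."
    ls "$RUNBOOK_DIR"/*.conf 2>/dev/null
    printf "Please enter the full file path here : " >&2
    read -r runbook
    # Refuse anything that is not an existing runbook file.
    [ -f "$runbook" ] || { echo "No such runbook: $runbook" >&2; return 1; }
    echo "$runbook"
}
```

A caller would capture the function's last output line as the selected runbook path and abort on a nonzero return.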
 
There are many ways to organize runbooks and the files they include. The following example Oracle VM Release 3.3 runbooks illustrate many of these techniques.
 
# ll /opt/mokum/etc | awk '{print $9;}' | tail -n +2
ovm611-3.3-CLIConnect.conf
ovm611-3.3-FullDRFailover-MigrateStorageUnassignedVMs-StartPinVMs.conf
ovm611-3.3-FullManagerRestore.conf
ovm611-3.3-FullManagerRestoreGlocalYUM.conf
ovm611-3.3-ProdDRTest-MigrateStorageUnassignedVMs-StartVMs.conf
ovm611-3.3-SetPinnedVMs.conf
ovm611-3.3-TestDRTest-MigrateStorageUnassignedVMs-StartVMs.conf
 
# cat /opt/mokum/etc/ovm611-3.3-CLIConnect.conf
mokum.log.loc = /tmp/mokum_utils.
mokum.nicenames.path = /tmp/mokum_nice_names.txt
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = password
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = password
 
# cat /opt/mokum/etc/ovm611-3.3-FullDRFailover-MigrateStorageUnassignedVMs-StartPinVMs.conf
mokum.log.loc = /tmp/mokum_utils.
#mokum.util.otypes = PhysicalDisk,VirtualDisk,VM,ServerPool,Repository,Network,vlanGroup,SanServer,Tag
mokum.util.otypes = ServerPool,Tag
mokum.nicenames.path = /tmp/mokum_nice_names.txt
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = password
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = password
#Prod
ovs.migrate.blockrepo = 360014052c41ebfcd3c85d343adb806d3,0004fb0000020000f7e6003cd92fce71 
ovs.migrate.filerepo = nfs_prod,0004fb0000020000f7e6003cd92fce71
ovs.migrate.unassignedvm = ovs626:proddb02,proddb04,prodebsapp02,prodebsapp04,ovm612
ovs.migrate.unassignedvm = ovs627:prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,ovm613
ovs.migrate.unassignedvm = ovs628:prodrac102,prodrac104,prodsoa02,prodsoa04
#Test/Dev
ovs.migrate.filerepo = nfs_migration1,0004fb00000200003d03d38f43761ba1
ovs.migrate.filerepo = nfs_migration2,0004fb00000200003d03d38f43761ba1
ovs.migrate.unassignedvm = ovs623:devdb02,devdb04,devobidwh02,devrac102,testsoa02
ovs.migrate.unassignedvm = ovs624:devrac104,devsoa02,devsoa04,testdb02,testrac104,testsoa04
ovs.migrate.unassignedvm = ovs625:testdb04,testebsapp04,testhyp02,testobidwh02,testrac102
#Single Manager Operations
ovs.start.vm = proddb02,proddb04,prodebsapp02,prodebsapp04,prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,prodrac102,prodrac104,prodsoa02,prodsoa04,ovm612,ovm613,devdb02,devdb04,devobidwh02,devrac102,devrac104,devsoa02,devsoa04,testdb02,testdb04,testebsapp04,testhyp02,testobidwh02,testrac102,testrac104,testsoa02,testsoa04
ovs.stop.vm = proddb02,proddb04,prodebsapp02,prodebsapp04,prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,prodrac102,prodrac104,prodsoa02,prodsoa04,ovm612,ovm613,devdb02,devdb04,devobidwh02,devrac102,devrac104,devsoa02,devsoa04,testdb02,testdb04,testebsapp04,testhyp02,testobidwh02,testrac102,testrac104,testsoa02,testsoa04
ovs.vm.killwait = 30
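Each ovs.migrate.* VM value above follows a server:vm-list syntax: the target Oracle VM server name, a colon, then a comma separated list of virtual machine names. As an illustrative sketch (not the toolkit's code), such a value splits cleanly with POSIX parameter expansion:

```shell
#!/bin/sh
# Illustrative only: split an "ovs.migrate.unassignedvm" style value
# ("server:vm1,vm2,...") into the target server and its VM list.
# The value is copied from the runbook above.
value="ovs626:proddb02,proddb04,prodebsapp02,prodebsapp04,ovm612"

server=${value%%:*}          # text before the first colon
vmlist=${value#*:}           # text after the first colon

echo "target server: $server"
# Turn the comma separated list into one VM name per line.
echo "$vmlist" | tr ',' '\n'
```

Repeating the keyword, as the runbook above does, assigns a VM list to each server in turn.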
 
# cat /opt/mokum/etc/ovm611-3.3-FullManagerRestore.conf
mokum.log.loc = /tmp/mokum_utils.
#mokum.util.otypes = PhysicalDisk,VirtualDisk,VM,ServerPool,Repository,Network,vlanGroup,SanServer,Tag
mokum.util.otypes = ServerPool,Tag
mokum.nicenames.path = /tmp/mokum_nice_names.txt
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = passw0rD
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = password
ovs.servers = 192.168.20.103,192.168.20.104,192.168.20.105,192.168.20.106,192.168.20.107,192.168.20.108
ovs.server.names = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.nfsadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.fcsanadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.iscsisanadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.nfsrefresh.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.storage.type = iSCSIStorageArray
ovs.storage.type = FibreChannelStorageArray
ovs.storage.plugin = Oracle Generic SCSI Plugin
ovs.nfs.plugin = Oracle Generic Network File System
ovs.nfsstorage.name = nas510
ovs.scsistorage.name = nas510
ovs.nfsstoragename.accessHosts = nas510,192.168.2.100
ovs.scsistoragename.accessHosts = nas510,192.168.2.100:3260
ovm.ntp.servers = 192.168.20.201
ovm.yum.serverupdategroup = serverUpdateConfiguration_0004fb0000020000f7e6003cd92fce71:0004fb0000020000f7e6003cd92fce71,serverUpdateConfiguration_0004fb00000200003d03d38f43761ba1:0004fb00000200003d03d38f43761ba1
ovm.yum.reposname = 3x_03072016,3x_02022016
ovm.yum.rname = ovm611,ovm611
ovm.yum.baseURL = http://192.168.20.201/yum/public/3x_03072016/ovm3x_latest/getPackage/,ht...
ovm.yum.repoenabled = yes,yes
ovm.yum.pkgSignatureType = GPG,GPG
ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
ovm.yum.serverupdategroupId = serverUpdateConfiguration_0004fb0000020000f7e6003cd92fce71,serverUpdateConfiguration_0004fb00000200003d03d38f43761ba1
ovm.repo.server = 1TBJan212016,ovs626,ovs627,ovs628
ovm.repo.server = nfs_prod,ovs626,ovs627,ovs628
ovm.repo.server = nfs_migration1,ovs623,ovs624,ovs625
ovm.repo.server = nfs_migration2,ovs623,ovs624,ovs625
ovs.nic.network = eth1:10ab3f506f,eth2:10cab5fdd8,eth3:10b9ab2785
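The ovs.nic.network value above pairs each NIC with a network ID, using nic:id entries separated by commas. The sketch below only illustrates how such a value decomposes; it is not the toolkit's code.

```shell
#!/bin/sh
# Illustrative only: walk an "ovs.nic.network" style value and print each
# NIC with its network ID. The value is copied from the runbook above.
value="eth1:10ab3f506f,eth2:10cab5fdd8,eth3:10b9ab2785"

# Split on commas, then on the colon inside each nic:id pair.
echo "$value" | tr ',' '\n' | while IFS=: read -r nic netid; do
    echo "NIC $nic is attached to network $netid"
done
```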
 
# cat /opt/mokum/etc/ovm611-3.3-FullManagerRestoreGlocalYUM.conf
mokum.log.loc = /tmp/mokum_utils.
#mokum.util.otypes = PhysicalDisk,VirtualDisk,VM,ServerPool,Repository,Network,vlanGroup,SanServer,Tag
mokum.util.otypes = ServerPool,Repository,Network,vlanGroup,SanServer
mokum.nicenames.path = /tmp/mokum_nice_names.txt
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = passw0rD
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = password
ovs.servers = 192.168.20.103,192.168.20.104,192.168.20.105,192.168.20.106,192.168.20.107,192.168.20.108
ovs.server.names = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.nfsadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.fcsanadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.iscsisanadmin.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.nfsrefresh.servers = ovs623,ovs624,ovs625,ovs626,ovs627,ovs628
ovs.storage.type = iSCSIStorageArray
ovs.storage.type = FibreChannelStorageArray
ovs.storage.plugin = Oracle Generic SCSI Plugin
ovs.nfs.plugin = Oracle Generic Network File System
ovs.nfsstorage.name = nas510
ovs.scsistorage.name = nas510
ovs.nfsstoragename.accessHosts = nas510,192.168.2.100
ovs.scsistoragename.accessHosts = nas510,192.168.2.100:3260
ovm.ntp.servers = 192.168.20.201
ovm.yum.reposname = ovm611
ovm.yum.rname = 3x_02022016
ovm.yum.baseURL = http://192.168.20.201/yum/public/3x_02022016/ovm3x_latest/getPackage/
ovm.yum.repoenabled = yes
ovm.yum.pkgSignatureType = GPG
ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
ovm.yum.serverupdategroupId = GlobalX86ServerUpdateConfiguration
ovm.repo.server = 1TBJan212016,ovs626,ovs627,ovs628
ovm.repo.server = nfs_prod,ovs626,ovs627,ovs628
ovm.repo.server = nfs_migration1,ovs623,ovs624,ovs625
ovm.repo.server = nfs_migration2,ovs623,ovs624,ovs625
ovs.nic.network = eth1:10ab3f506f,eth2:10cab5fdd8,eth3:10b9ab2785
 
# cat /opt/mokum/etc/ovm611-3.3-ProdDRTest-MigrateStorageUnassignedVMs-StartVMs.conf
mokum.log.loc = /tmp/mokum_utils.
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = password
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = passw0rD
ovs.migrate.blockrepo = 360014052c41ebfcd3c85d343adb806d3,0004fb0000020000f7e6003cd92fce71 
ovs.migrate.filerepo = nfs_prod,0004fb0000020000f7e6003cd92fce71
ovs.migrate.unassignedvm = ovs626:proddb02,proddb04,prodebsapp02,prodebsapp04,ovm-crusher
ovs.migrate.unassignedvm = ovs627:prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,ovm-mccoy
ovs.migrate.unassignedvm = ovs628:prodrac102,prodrac104,prodsoa02,prodsoa04,ovm-thedoctor
ovs.start.vm = proddb02,proddb04,prodebsapp02,prodebsapp04,prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,prodrac102,prodrac104,prodsoa02,prodsoa04,ovm-crusher,ovm-mccoy,ovm-thedoctor
ovs.stop.vm = proddb02,proddb04,prodebsapp02,prodebsapp04,prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,prodrac102,prodrac104,prodsoa02,prodsoa04,ovm-crusher,ovm-mccoy,ovm-thedoctor
ovs.vm.killwait = 30
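The ovs.vm.killwait keyword above sets how many seconds a shutdown may take before stronger action is taken. The sketch below demonstrates that stop, wait, then force pattern on a local stand-in process; it illustrates the idea and is not the toolkit's implementation.

```shell
#!/bin/sh
# Illustrative only: ask a process to stop, wait up to KILLWAIT seconds,
# then force-kill it, the behavior ovs.vm.killwait configures for virtual
# machine shutdowns. The stand-in process ignores the polite TERM request
# so the escalation is visible.
KILLWAIT=3    # the runbook above uses 30; shortened for this demo

(trap '' TERM; exec sleep 60) &
pid=$!

kill -TERM "$pid" 2>/dev/null    # polite shutdown request
waited=0
while kill -0 "$pid" 2>/dev/null && [ "$waited" -lt "$KILLWAIT" ]; do
    sleep 1
    waited=$((waited + 1))
done
# Still running after the grace period? Force it down.
kill -0 "$pid" 2>/dev/null && kill -KILL "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
echo "process $pid stopped after ${waited}s grace period"
```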
 
# cat /opt/mokum/etc/ovm611-3.3-SetPinnedVMs.conf
mokum.log.loc = /tmp/mokum_utils.
mokum.nicenames.path = /tmp/mokum_nice_names.txt
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = password
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = passw0rD
ovs.migrate.runningvm = ovs626:proddb02,proddb04,prodebsapp02,prodebsapp04,ovm612
ovs.migrate.runningvm = ovs627:prodhyp02,prodhyp04,prodobidwh02,prodobidwh04,ovm613
ovs.migrate.runningvm = ovs628:prodrac102,prodrac104,prodsoa02,prodsoa04
ovs.migrate.runningvm = ovs623:devdb02,devdb04,devobidwh02,devrac102,testsoa02
ovs.migrate.runningvm = ovs624:devrac104,devsoa02,devsoa04,testdb02,testrac104,testsoa04
ovs.migrate.runningvm = ovs625:testdb04,testebsapp04,testhyp02,testobidwh02,testrac102
 
# cat /opt/mokum/etc/ovm611-3.3-TestDRTest-MigrateStorageUnassignedVMs-StartVMs.conf
mokum.log.loc = /tmp/mokum_utils.
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = password
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = password
ovs.migrate.filerepo = nfs_migration1,0004fb00000200003d03d38f43761ba1
ovs.migrate.filerepo = nfs_migration2,0004fb00000200003d03d38f43761ba1
ovs.migrate.unassignedvm = ovs623:devdb02,devdb04,devobidwh02,devrac102,testsoa02
ovs.migrate.unassignedvm = ovs624:devrac104,devsoa02,devsoa04,testdb02,testrac104,testsoa04
ovs.migrate.unassignedvm = ovs625:testdb04,testebsapp04,testhyp02,testobidwh02,testrac102
ovs.start.vm = devdb02,devdb04,devobidwh02,devrac102,devrac104,devsoa02,devsoa04,testdb02,testdb04,testebsapp04,testhyp02,testobidwh02,testrac102,testrac104,testsoa02,testsoa04
ovs.stop.vm = devdb02,devdb04,devobidwh02,devrac102,devrac104,devsoa02,devsoa04,testdb02,testdb04,testebsapp04,testhyp02,testobidwh02,testrac102,testrac104,testsoa02,testsoa04
ovs.vm.killwait = 30