
ROAD for Oracle VM Jedi Release for Oracle VM for x86 Release 3.3

ROAD for Oracle VM - Availability Protection, Disaster Recovery and Operations Automation, Ready for Business Reporting and Operations Toolkit

 

 

 

 
Table of Contents
ROAD for Oracle VM Introduction
  Oracle VM Availability Protection
  Oracle VM Operations Automation, and ROAD for Oracle VM Runbooks
ROAD for Oracle VM: Oracle VM Manager Installation
  ROAD for Oracle VM Directory Structure
  ROAD for Oracle VM SSH Prerequisites
Oracle VM Manager: ROAD for Oracle VM Command Definitions and Usage
  mokum_utils_lib.sh
  ovm_wipedb.sh
  restore_manager.sh
  start_vms.sh
  shutdown_vms.sh
  status_vms.sh
  migrate_vms.sh
  rename_vdisks.sh
  rename_pdisks.sh
  rename_allvmdisks.sh
  save_nice_names.sh
  restore_nice_names.sh
  import_block_repo.sh
  .ovs_import_block_repo.sh
  import_file_repo.sh
  .ovs_import_file_repo.sh
ROAD for Oracle VM Runbook Keywords
  Runbook Minimum Requirement Keywords
  Runbook Oracle VM Server Keywords
  Runbook Oracle VM Storage Keywords
  Runbook Oracle VM NTP Keywords
  Runbook Oracle VM YUM Update Repository Keywords
    Oracle VM Release 3.3 Yum Repository Keywords
    Oracle VM Release 3.2 Yum Repository Keywords
  Oracle VM Server Network Runbook Keywords
  Virtual Machine Runbook Keywords
  Oracle VM Storage Repository Migration Runbook Keywords
Oracle VM Manager: ROAD for Oracle VM Logging
Oracle VM Manager: ROAD for Oracle VM Command Start-up Processing and Exit Status
Oracle VM Manager Silent Installations and Oracle VM Manager Availability
  Oracle VM Manager Release 3.3 Silent Install and Uninstall Steps
  Oracle VM Manager Release 3.2 Silent Install and Uninstall Steps
Appendix
 

ROAD for Oracle VM Introduction

Mokum offers Oracle VM customers of all sizes availability protection, disaster recovery and operations automation, Ready for Business Reporting, and an operations toolkit for Oracle VM. ROAD for Oracle VM enables IT to comfortably meet service level agreements, RPOs, and RTOs for business critical Oracle workloads running on Oracle VM without interruption.
 
Under day-to-day operations, there are several aspects to providing availability protection, disaster recovery, and operations automation for Oracle VM.
 
  • The first is to provide availability protection for Oracle VM Manager. Oracle VM Manager is the centralized management and operations center for Oracle VM. Without Oracle VM Manager, operations will quickly screech to a halt. ROAD for Oracle VM can recover a corrupt Oracle VM Manager instance on the same host, or onto a different host, with RPOs and RTOs of < 15 minutes without interrupting running Oracle VM servers or virtual machines. ROAD for Oracle VM eliminates the need for high-maintenance, overly complicated Oracle Clusterware, and for time-consuming, often error-prone Oracle VM Manager database recoveries.
  • The second is to provide Oracle VM operations automation to quickly implement change during maintenance windows and disaster recovery events, while mitigating the risk of change. ROAD for Oracle VM combines extensive automation with intuitive workflows to do exactly that.
  • The last is to provide Ready for Business Reporting and an operations toolkit that allow operations to proactively respond to issues before they impact the business.
 
ROAD for Oracle VM consists of two modules. One module is installed on the Oracle VM Manager hosts, and the second module is installed on the Oracle VM servers. The Oracle VM Manager modules utilize the Oracle VM CLI, runbooks, and native Linux commands and shell scripts to provide Oracle VM Manager Availability Protection, and Oracle VM Disaster Recovery and Operations Automation. The Oracle VM server modules provide the Oracle VM Ready for Business Reporting and an Operations Toolkit. Oracle VM Ready for Business Reporting and the Operations Toolkit use native Linux commands and shell scripts to provide in-depth Oracle VM server, cluster, and server pool reporting, along with a toolkit for IT operations to proactively respond to issues before they impact the business.
 
ROAD for Oracle VM should be installed on all Oracle VM Manager and server nodes. We recommend having at least one dedicated Oracle VM Manager hot backup machine with ROAD including all of your runbooks to be able to quickly recover from any Oracle VM Manager outage.
 

Oracle VM Availability Protection

Oracle VM Manager is the centralized management and operations center for Oracle VM. Without Oracle VM Manager, operations will quickly screech to a halt. ROAD for Oracle VM provides availability protection for Oracle VM Manager with RPOs and RTOs of < 15 minutes.
 
Over the years we have learned that Oracle's [1]availability solution for Oracle VM Manager, Oracle Clusterware, is high-maintenance and still does not address the root cause of the majority of Oracle VM Manager outages: database corruption. Oracle's Oracle VM Manager database [2]recovery option is time consuming and error-prone, often resulting in prolonged outages and total cluster rebuilds.
 
ROAD for Oracle VM takes a fundamentally different approach to Oracle VM Manager availability and recoverability by capturing the running Oracle VM configuration in runbooks. The runbooks are completely portable and can be moved between any ROAD enabled Oracle VM Manager hosts to quickly and efficiently recover Oracle VM Manager and its objects without interrupting mission critical Oracle VM servers and virtual machines. ROAD for Oracle VM can recover a corrupt Oracle VM Manager instance on the same host, or recover the running configuration onto a different host, with RPOs and RTOs of < 15 minutes. ROAD for Oracle VM eliminates the need for high-maintenance, overly complicated Oracle Clusterware, and for time-consuming, error-prone Oracle VM Manager database recoveries.
 
ROAD for Oracle VM consolidates, automates, and enhances the entire manual Oracle VM Manager recovery process. One of the manual recovery operations is an Oracle VM Manager UUID restore. Oracle VM Manager UUID restores can be used to partially restore Oracle VM Manager on the same or different Oracle VM Manager host.
 
Each Oracle VM Manager installation has a unique UUID. Oracle VM server pools that are created by an Oracle VM Manager instance are stamped with a unique UUID. Oracle VM server pools can only be managed by one Oracle VM Manager instance at a time with the appropriate UUID. There are two different Oracle VM Manager UUID restore options.
 
The first option is to recover an Oracle VM Manager instance that has had a catastrophic failure. An Oracle VM Manager UUID installation can be performed using the UUID from the crashed Oracle VM Manager instance. After the Oracle VM Manager UUID installation, the recovery process, which consists of many manual steps, can be started.
 
The second Oracle VM Manager UUID restore option is to recover from a corrupt Oracle VM Manager database. Recovering Oracle VM Manager from a corrupt database requires a database wipe that resets the Oracle VM Manager database back to its default first login state while retaining the original UUID. After the Oracle VM Manager database reset, the recovery process, which consists of many manual steps, can be started.
 
Post UUID installation or database reset, the first step is to discover the running Oracle VM servers. Oracle VM Manager UUID restores populate the Oracle VM Manager GUI with the running Oracle VM server pool configurations from raw data in the Oracle VM server’s Berkeley Databases, networking, and storage repository configuration files. Once the Oracle VM servers and server pools have been discovered, various storage, repository, yum, and NTP properties must be manually set. Finally, Oracle VM server and storage repositories must be refreshed. The end result is a partial restore of Oracle VM Manager without the user-friendly names, tags, or server, pool, network and storage descriptions. The user-friendly names, tags and descriptions might be recoverable via the crashed Oracle VM Manager repository database, or from a clean database backup. The challenge with UUID Oracle VM Manager restores, along with Oracle VM Manager database [2]recoveries is that they are time consuming, error-prone, and often result in prolonged outages and total cluster rebuilds. ROAD for Oracle VM automates and streamlines the entire UUID restore process including user-friendly names, tags, and property descriptions with RTOs of < 15 minutes.
 
Note: It's worth mentioning that the missing user-friendly names, tags, and descriptions from Oracle VM Manager UUID restores caused enough customer noise to prompt Oracle to release [3]Doc ID 1981708.1. The solution in Doc ID 1981708.1 recovers only the user-friendly virtual disk names.
 
ROAD for Oracle VM should be installed on all Oracle VM Manager nodes to provide availability protection, and operations automation. We recommend having at least one dedicated Oracle VM Manager hot backup machine with ROAD including all of your runbooks to be able to quickly recover from any Oracle VM Manager outage.
 

Oracle VM Operations Automation, and ROAD for Oracle VM Runbooks

Automation allows you to quickly implement change during maintenance windows, and disaster recovery events, while reducing inconsistencies, errors, and the risk of change. ROAD for Oracle VM combines extensive automation with intuitive workflows to be able to quickly implement change during maintenance windows, and disaster recovery events.
 
ROAD for Oracle VM runbooks contain the configurations for an automated task or process. Runbooks can declare configurations, as well as orchestrate the steps of an ordered process.
 
Runbooks are designed to be human-readable text files using simple keyword value pairs, i.e. keyword = value. Runbooks are used in conjunction with ROAD for Oracle VM commands that invoke the Oracle VM CLI, and native Linux commands and shell scripts. Runbooks are very flexible, and can be configured to adapt to your Oracle VM automation needs.
 
The Jedi Release of ROAD for Oracle VM has the following built-in automations:
  • Reset the Oracle VM Manager Database to a clean first login state
  • Reset an Oracle VM server’s cluster configurations to a clean state
  • Backup and restore Oracle VM Manager user-friendly names, server pool, network, and vlan descriptions, and tags
  • Orchestrate starting, stopping, and migrating virtual machines with ordered process
  • Orchestrate a complete Oracle VM Manager UUID restore on the same host or a different host using a runbook with the server pools running configurations
  • Orchestrate importing block (OCFS2) and file (NFS) storage repositories including changing Oracle VM Manager UUIDs, virtual machine network IDs, as well as virtual and physical disk (source and target) mappings.
  • Orchestrate service window changes
  • Orchestrate disaster recovery failovers
  • Orchestrate resetting Oracle VM Managers and servers to a clean first login state
  • Orchestrate migrating Oracle VM Manager 3.2 from an Oracle 11G database to MySQL.
 
ROAD for Oracle VM runbooks have the following minimum keyword value pair requirements:
mokum.log.loc = /tmp/mokum_utils.
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = yourpassword
cli.host = localhost  
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = yourpassword
 
The following list describes runbook requirements:
  • Runbooks must be configured prior to running any of the commands
  • Runbooks contain a set of lines representing “keyword = value” pairs
  • Runbooks may also contain blank lines, and comment lines. Comment lines begin with a “#” (hash) and continue until the end of the line.
  • Keywords must begin on the first character of a line
  • Keywords are followed by a space, then an equal sign, then a space, and then a value, i.e. keyword = value
  • Values may be single values, or a comma separated list
  • A value's format is dependent upon the keyword definition
  • A runbook must be selected to be able to run commands
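As an illustration only (this is not ROAD's actual parser), a few lines of shell show how a runbook following the rules above can be read, skipping blank lines and “#” comments; the file contents and keywords are examples:

```shell
# Sketch: read "keyword = value" pairs from a runbook, skipping blanks and comments.
runbook=$(mktemp)
cat > "$runbook" <<'EOF'
# example runbook
cli.host = localhost
cli.port = 10000
ovs.start.vm = vm1,vm2
EOF
while IFS= read -r line; do
  case "$line" in ''|'#'*) continue ;; esac   # skip blank and comment lines
  keyword=${line%% = *}                        # text before " = "
  value=${line#* = }                           # text after " = " (may be a comma list)
  printf '%s -> %s\n' "$keyword" "$value"
done < "$runbook"
```

Note that values such as ovs.start.vm above are comma separated lists, as described in the requirements.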
 
The following table describes the ROAD for Oracle VM Jedi release commands, their descriptions, and the runbook keywords (i.e. keyword = value) that each command uses.
 
ovm_wipedb.sh
  Description: Wipes an Oracle VM Manager's MySQL database repository, resulting in an empty “like new” state.
  Runbook Keywords: This command does not use a runbook.
 
restore_manager.sh
  Description: Automates the process of performing an Oracle VM Manager UUID restore on the same or a different Oracle VM Manager host.
  Runbook Keywords: ovs.servers, ovs.server.names, ovs.nfsadmin.servers, ovs.fcsanadmin.servers, ovs.iscsisanadmin.servers, ovs.nfsrefresh.servers, ovs.storage.type, ovs.storage.plugin, ovs.nfs.plugin, ovs.nfsstorage.name, ovs.scsistorage.name, ovs.nfsstoragename.accessHosts, ovs.scsistoragename.accessHosts, ovm.ntp.servers, ovm.yum.baseURL, ovm.yum.gpgkeycheck, ovm.yum.gpgkey, ovm.repo.server, ovs.nic.network, ovs.host.nic.network
 
save_nice_names.sh
  Description: Creates a text and tar file with the Oracle VM Manager user-friendly names, network, VLAN and server pool descriptions, and tags that can be used to restore the objects.
  Runbook Keywords: mokum.util.otypes, mokum.nicenames.path
 
restore_nice_names.sh
  Description: Restores the Oracle VM Manager user-friendly names, network, VLAN and server pool descriptions, and tags using the output files from the save_nice_names.sh command.
  Runbook Keywords: mokum.util.otypes, mokum.nicenames.path
 
status_vms.sh
  Description: Checks and prints the status of all virtual machines to the terminal.
  Runbook Keywords: Minimum keyword value pair requirements
 
start_vms.sh
  Description: Attempts to start the virtual machines based upon the list of virtual machines specified in the runbook.
  Runbook Keywords: ovs.start.vm
 
shutdown_vms.sh
  Description: Attempts to shut down the list of virtual machines specified in the runbook.
  Runbook Keywords: ovs.stop.vm, ovs.vm.killwait (optional)
 
migrate_vms.sh
  Description: Attempts to migrate virtual machines from the Unassigned Virtual Machines folder, or running or stopped (3.2 only) virtual machines, between Oracle VM server pool members based upon the list of virtual machines specified in the runbook.
  Runbook Keywords: ovs.migrate.assignedvm, ovs.migrate.unassignedvm, and ovs.migrate.runningvm
    3.2+ and 3.4+: ovs.migrate.unassignedvm
      Note: Only uncomment "ovs.migrate.assignedvm" or "ovs.migrate.unassignedvm" at any one time in a runbook.
    3.3+ only: ovs.migrate.runningvm
      Note: Only uncomment "ovs.migrate.runningvm" or "ovs.migrate.unassignedvm" at any one time in a runbook.
 
rename_vdisks.sh
  Description: Renames all virtual disks that are assigned to virtual machines following a standard naming convention. Virtual disk names start with the virtual machine name followed by its disk slot number.
  Runbook Keywords: Minimum keyword value pair requirements
 
rename_pdisks.sh
  Description: Renames all physical disks that are assigned to virtual machines following a standard naming convention. Physical disk names start with the virtual machine name followed by its disk slot number.
  Runbook Keywords: Minimum keyword value pair requirements
 
rename_allvmdisks.sh
  Description: Renames all virtual and physical disks that are assigned to virtual machines following a standard naming convention. Virtual and physical disk names start with the virtual machine name followed by its disk slot number.
  Runbook Keywords: Minimum keyword value pair requirements
 
import_file_repo.sh
  Description: Migrates NFS storage repositories between server pools. The repository migration includes changing source and target bridge IDs, as well as virtual and physical disk mappings in the vm.cfg files.
  Runbook Keywords: ovs.migrate.filerepo
 
.ovs_import_file_repo.sh
  Description: Not executed by the user directly; copied to the target Oracle VM server when the import_file_repo.sh command is run. Bidirectional SSH key based authentication is required between the source Oracle VM Manager host and the target Oracle VM server.
  Runbook Keywords: None
 
import_block_repo.sh
  Description: Migrates block (iSCSI or Fibre Channel) storage repositories between server pools. The repository migration includes changing source and target bridge IDs, as well as virtual and physical disk mappings in the vm.cfg files.
  Runbook Keywords: ovs.migrate.blockrepo
 
.ovs_import_block_repo.sh
  Description: Not executed by the user directly; copied to the target Oracle VM server when the import_block_repo.sh command is run. Bidirectional SSH key based authentication is required between the source Oracle VM Manager host and the target Oracle VM server.
  Runbook Keywords: None
The following example shows a command, migrate_vms.sh, being run along with the runbook selection prompt. By default, when a command is run, the runbook selection prompt is displayed. Select a runbook by entering its full path, then press Enter to run the command with the runbook automation.
# ./migrate_vms.sh
Please select the appropriate runbook from the below list.
/opt/mokum/etc/ovm-prod-3.2-VM-Status.conf
/opt/mokum/etc/ovm-prod-3.2-Save-NiceNames.conf
/opt/mokum/etc/ovm-prod-3.2-Restore-NiceNames.conf
/opt/mokum/etc/ovm-prod-3.2-Full-Restore.conf
/opt/mokum/etc/ovm-prod-3.2-DR-Testing-MigrateStorage-StartVMs.conf
/opt/mokum/etc/ovm-prod-3.2-Stop-MigrateUnassigned-Start-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Migrate-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Sunday-11pm-Full-Restore-Stop-MigrateUnassigned-Start-VMs.conf
/opt/mokum/etc/ovm-prod-3.2-Wednesday-11pm-Migrate-VMs.conf
Please enter the full file path here : /opt/mokum/etc/ovm-prod-3.2-Migrate-VMs.conf
 
There are many ways to organize runbooks and the files they include. The ROAD for Oracle VM Runbooks repository contains example runbooks illustrating many of these techniques. We recommend looking at the runbook examples in another browser tab as you go along.
 

ROAD for Oracle VM: Oracle VM Manager Installation

ROAD for Oracle VM consists of two modules. One module is installed on the Oracle VM Manager hosts, and the second module is installed on the Oracle VM servers. The Oracle VM Manager modules described here are installed only on Oracle VM Manager hosts, not on Oracle VM servers.
 
ROAD for Oracle VM is installed by creating the /opt/mokum directory, and copying the files and runbooks into the directories described here.
 
Note: The ROAD for Oracle VM Jedi release does not have an installation program or script.
 
We recommend installing ROAD on all Oracle VM Manager nodes, as well as having at least one dedicated Oracle VM Manager host with ROAD for Oracle VM and all your runbooks to be able to quickly recover any crashed Oracle VM Manager instances.
 
ROAD for Oracle VM: Oracle VM Manager Installation
1) Copy the ROAD for Oracle VM tar file onto the Oracle VM Manager host.
2) As root, on the Oracle VM Manager host create the /opt/mokum directory:
mkdir -p /opt/mokum
3) As root, uncompress the ROAD for Oracle VM tar file into /opt/mokum:
tar xvf road-jedi-OVM.tar -C /opt/mokum
4) Set the permissions on /opt/mokum to 700 for the root user:
chmod 700 -R /opt/mokum

ROAD for Oracle VM Directory Structure

/opt/mokum
  • The /opt/mokum directory may be added to the path of the root user (optional), i.e.
    • Add the following 2 lines in ~/.bash_profile:
      PATH=$PATH:/opt/mokum
      export PATH
    • If /opt/mokum is added to the path variable, the ROAD for Oracle VM commands can be run from any directory.
  • All of the files should be placed in /opt/mokum
    • mokum_utils_lib.sh
    • ovm_wipedb.sh
    • restore_manager.sh
    • save_nice_names.sh
    • restore_nice_names.sh
    • status_vms.sh
    • start_vms.sh
    • shutdown_vms.sh
    • migrate_vms.sh
    • rename_vdisks.sh
    • rename_pdisks.sh
    • rename_allvmdisks.sh
    • import_file_repo.sh
    • .ovs_import_file_repo.sh
    • import_block_repo.sh
    • .ovs_import_block_repo.sh
  • All .sh files should be owned by root and marked as executable, i.e. “ -rwx------ 1 root root”
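The ownership and mode check above can be automated; the following is a minimal sketch demonstrated on a scratch directory (on a real manager host you would point it at /opt/mokum and also confirm root:root ownership, which requires root):

```shell
# Sketch: flag any ROAD script whose mode deviates from 700 (-rwx------).
# ROAD_HOME is a scratch directory here; on a real host use ROAD_HOME=/opt/mokum.
ROAD_HOME=$(mktemp -d)
touch "$ROAD_HOME/restore_manager.sh" "$ROAD_HOME/start_vms.sh"
chmod 700 "$ROAD_HOME"/*.sh
# Prints nothing when every script is compliant:
find "$ROAD_HOME" -maxdepth 1 -name '*.sh' ! -perm 700 -print
```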
 
/opt/mokum/etc
  • This directory contains runbooks each with environment-specific configurations that are used by the commands in /opt/mokum
  • Initially, this directory contains a single commented example runbook called “runbook_example.conf”
 
/opt/mokum/ovm-install
  • This directory contains an answer file for a silent Oracle VM Manager installation

 

ROAD for Oracle VM SSH Prerequisites

ROAD for Oracle VM uses key based SSH authentication for the Oracle VM CLI, and between the Oracle VM Manager hosts and the Oracle VM servers. Add your public key on the Oracle VM Manager host to /home/oracle/.ssh/ovmcli_authorized_keys and to the Oracle VM servers' root user to eliminate password prompts. After the SSH keys are set up, the first Oracle VM Manager CLI login will require entering the Oracle VM Manager admin password. Subsequent logins will use key based SSH authentication. If the admin user's password is changed, the first login after the password reset will require entering the new admin user's password.
 
Configure Key Based SSH Authentication
To set up key based SSH authentication on the Oracle VM Manager host, as root:
1) From the Oracle VM Manager host, ssh to the Oracle VM CLI and accept the identity of the DSA key fingerprint, as shown in the following example:
 
ssh -p 10000 -l admin localhost
The authenticity of host '[localhost]:10000 ([127.0.0.1]:10000)' can't be established.
DSA key fingerprint is 6e:ba:49:61:4c:4f:c7:40:56:f6:71:77:69:9d:f7:6b.
Are you sure you want to continue connecting (yes/no)? yes
As shown above, type yes, then press Enter.
Next, exit the Oracle VM CLI by typing exit.
Next, type the following command, and follow the prompts:
ssh-keygen -t rsa
Next, copy the saved key to the oracle account's authorized keys file:
cat /root/.ssh/id_rsa.pub >> /home/oracle/.ssh/ovmcli_authorized_keys
If the /home/oracle/.ssh directory does not exist, first create it and chown it to oracle:oinstall. For example, as root:
mkdir -p /home/oracle/.ssh
chown oracle:oinstall -R /home/oracle/.ssh
 
Note: The next two steps are only required if using the import_block_repo.sh and import_file_repo.sh commands. The import_block_repo.sh and import_file_repo.sh commands are used to import repositories and virtual machines that were previously owned by a different Oracle VM Manager and pool, as well as to import replicated storage repositories between sites.
 
2) Setup bidirectional key based SSH authentication for the Oracle VM servers that will be used with the import_block_repo.sh and import_file_repo.sh commands. From the Oracle VM Manager host, as root, type the following command for each Oracle VM server. Substitute ovmserver with the hostname of each of your Oracle VM servers.
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmserver
 
3) From the Oracle VM server, as root, type the following commands. 
First generate the RSA key pair on the Oracle VM server. Type the following command, and follow the prompts:
ssh-keygen -t rsa
 
Next copy the key to the Oracle VM manager host. Substitute ovmmanager with the hostname of the Oracle VM manager host.
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmmanager
 
Note: If Oracle VM Manager is reinstalled, the root user's ~/.ssh/known_hosts entry for localhost, as well as the corresponding entry in each Oracle VM server root user's ~/.ssh/known_hosts file, will need to be removed and updated with the new Oracle VM Manager SSH localhost entry.
 
After reinstalling Oracle VM Manager on the same host, while running ROAD commands you may see the following message:
Can't connect to OVM CLI on the specified host and port. Please check error logs...
 
Or while accessing the Oracle VM CLI you may see the following ssh message:
# ssh -l admin -p 10000 localhost
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the DSA host key has just been changed.
The fingerprint for the DSA key sent by the remote host is
7d:24:ad:52:66:f0:bd:d6:45:86:c8:c8:86:c1:11:74.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:12
DSA host key for [localhost]:10000 has changed and you have requested strict checking.
Host key verification failed.
 
Solution: Remove the offending key in /root/.ssh/known_hosts, then authenticate into the Oracle VM Manager CLI. For example, on the Oracle VM Manager host:
 
# ssh-keygen -f /root/.ssh/known_hosts -R "[localhost]:10000"
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old

Oracle VM Manager: ROAD for Oracle VM Command Definitions and Usage

mokum_utils_lib.sh

This file represents the library of variables set from runbook parameters and reusable utility functions that are called by the main logic in the ROAD for Oracle VM commands.  
 
The following is a summary of the function names and their use:
 

choose_conf_file

  • Prompts for and selects the runbook (.conf) file whose parameters are used to set variables.

action_set_variables

  • Reads and applies values in the selected runbook to the global variables defined in mokum_utils_lib.sh which are used by the ROAD for Oracle VM commands.
 
Variables cover the following broad categories:
  • Argument Defaults
    • The global variables shared by the various utilities, including the location of the configuration files and log variables
  • Oracle VM CLI Defaults
    • All variables used to connect to the Oracle VM CLI
    • e.g. the SSHCMD variable, built from the cliHost, cliUser, and cliPort variables whose values are specified via the configuration file, opens an SSH session with the Oracle VM CLI instance
  • ovm.config Defaults
    • Variables that match the keywords present in the Oracle VM Manager's .config file
  • Oracle VM Defaults
    • Variables used to support querying and setting elements managed by the Oracle VM Manager
    • Variables related to Oracle VM storage types, plugins, etc.
  • Oracle VM Other Configurations
    • Variables related to NTP, YUM, etc.
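As a minimal sketch of how the SSHCMD variable described above might be composed (the variable names cliHost, cliUser, and cliPort come from the runbook; the values here are the documented defaults):

```shell
# Sketch: compose the SSH command used to reach the Oracle VM CLI
# from runbook-derived variables (values are the documented defaults).
cliHost=localhost
cliUser=admin
cliPort=10000
SSHCMD="ssh -l ${cliUser} -p ${cliPort} ${cliHost}"
echo "$SSHCMD"
```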
 

fcsarr

  • Assigns the Fibre Channel storage array names as values to the fcStorArrayName array.

list_ovsadmin_server_fs

  • Builds the list of Oracle VM admin servers for file servers.

check_ovm_release

  • Checks whether the current Oracle VM release is supported by ROAD for Oracle VM; if not, stops further script execution.

connect_ovmcli

  • Checks whether ROAD for Oracle VM can connect to the Oracle VM CLI; if not, stops further script execution.

start_task

  • Prints a message marking the start of a new step or command.

mnt_check

  • Makes /mnt available for temporary mounting for a specific command.

task_status

  • Checks the status of a task performed by the command, or by each function called within the command.
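The start_task/task_status pattern can be sketched as follows; this is an illustration of the pattern only, not ROAD's actual implementation, and the function bodies and step name are invented for the example:

```shell
# Sketch of the start_task/task_status pattern (not ROAD's actual code):
# announce a step, run it, then report based on its exit status.
start_task() { printf 'STARTING: %s\n' "$1"; }
task_status() {
  if [ "$1" -eq 0 ]; then
    printf 'OK: %s\n' "$2"
  else
    printf 'FAILED: %s\n' "$2"
  fi
}

start_task "refresh servers"
true                              # stand-in for the real step
task_status $? "refresh servers"
```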
 

ovm_wipedb.sh

This command will wipe the Oracle VM Manager's MySQL database repository to an empty “first login” state. It does this by using the “deletedb” function of the ovm_upgrade utility that is included in an Oracle VM Manager installation.
 
The command assumes the following:
  • /u01/app/oracle/ovm-manager-3 is the installation home of the Oracle VM Manager instance to clear
  • The Oracle VM Manager configuration file (.config) exists in directory /u01/app/oracle/ovm-manager-3/
  • ovm_upgrade.sh exists in directory  /u01/app/oracle/ovm-manager-3/bin/
  • ovm_upgrade.sh exists in directory /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/
  • If any of the above are untrue, then the ovm_wipedb.sh utility will need to be modified to match the directory structure of the existing installation.
  • Oracle VM 3.3: configure_client_cert_login.sh exists in directory /u01/app/oracle/ovm-manager-3/bin/
 
The ovm_wipedb.sh utility does require the password of the OVS schema user as specified in the .config file. If the command is called from an automation utility, the password may be placed into an environment variable called “MYPASSWORD”; its value corresponds to the ovmPw variable, which is set from the ovm.pw parameter in the runbook. If MYPASSWORD is exported and defined in the current environment, its value will be passed into the ovm_upgrade.sh utility as the value for the dbpass argument.
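For example (the password value is a placeholder; the script invocation is commented out because it must be run on a manager host, from the ROAD installation directory):

```shell
# Sketch: export MYPASSWORD so ovm_wipedb.sh can pass it to ovm_upgrade.sh
# as the dbpass value non-interactively ('yourpassword' is a placeholder).
export MYPASSWORD='yourpassword'
# ./ovm_wipedb.sh    # run from /opt/mokum; the exported value is picked up
```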
 
The called script configure_client_cert_login.sh requires the Oracle VM Manager username and password, which must be entered when prompted during script execution.
 
Because this command must stop and start the ovmm and ovmcli services on Oracle VM Manager, it must be run either by the “root” user, or by a user with “sudo” privileges.
 

restore_manager.sh

This command automates the process of performing an Oracle VM Manager UUID restore on the same or different Oracle VM Manager host. This command uses a runbook with the running Oracle VM Manager configuration.
 
At a high level, the command performs the following steps:
  1. Discover the existing Oracle VM servers and server pools.
  2. Discover File servers, and Storage Arrays.
  3. Add Oracle VM servers as Admin Servers to the related File servers.
  4. Add Oracle VM servers as Admin Servers to the ISCSI and Fibre Channel Storage Arrays.
  5. Add refresh servers to File servers
  6. Validate and refresh Storage Arrays
  7. Refresh File Servers
  8. Refresh file systems
  9. Present the Repositories to respective Oracle VM servers
  10. Refresh Storage Layer
  11. Refresh all Oracle VM servers
  12. Add the NTP configuration information to Oracle VM Manager and propagate the NTP settings to the Oracle VM servers.
  13. Add the YUM server configuration information to Oracle VM Manager and propagate it to the Oracle VM servers. With 3.3, server update groups (SUGs) other than the default Global server update group can be added, each with a unique repository
  14. Add access ports to networks (for cluster heartbeat and live migration) if not added during restore
  15. Add access ports if not added during restore, for multiple cluster heartbeat and live migration networks.
  16. Perform a final refresh of all servers
 
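A hypothetical runbook fragment for restore_manager.sh might look like the following; the hostnames and URL are placeholders, and only a few of the command's keywords (listed in the command table above) are shown:

```text
# Hypothetical runbook fragment for restore_manager.sh (placeholder values)
ovs.servers = ovs01.example.com,ovs02.example.com
ovm.ntp.servers = ntp1.example.com,ntp2.example.com
ovm.yum.baseURL = http://yum.example.com/repo
```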
Usage:
./restore_manager.sh  or  sh restore_manager.sh
 
Logic:
  • Process runbook
  • If there is an error opening the session, the command will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the command will exit with an appropriate result code
  • Initialize the list of Oracle VM servers by performing a physical and logical discovery of the server objects identified in the runbook's ovs.servers, ovs.agent.port, ovs.agent.user, and ovs.agent.pw keywords. If any of the servers specified in the runbook cannot be discovered, the command will stop further execution
  • Configure the newly discovered/re-discovered servers to be associated with the SAN storage as identified in the runbook's ovs.storage.type, ovs.storage.plugin, ovs.storage.name, and optionally ovs.scsistoragename.accessHosts (for iSCSI) and ovs.nfsstoragename.accessHosts (for file server) keywords. Add admin servers for File servers and Storage Arrays
  • Refresh File servers and Storage Arrays
  • Refresh file systems other than pool file systems
  • Present any existing Repository objects to the newly discovered/re-discovered Oracle VM servers. Here the runbook keyword ovm.repo.server with its assigned value decides which repository to be presented to which Oracle VM servers
  • Refresh the Oracle VM servers
  • Apply and update the NTP Server configuration to all Oracle VM servers using the ovm.ntp.servers keyword
  • Apply and update the YUM configuration on all Oracle VM servers. Server update groups and multiple or server-pool-specific update repositories can be added, depending on the runbook parameters and values provided
  • Add access ports to the cluster heartbeat and live migration networks if they were not added during the UUID restore
  • Where multiple cluster heartbeat and live migration networks exist, add their access ports if they were not added during the UUID restore
  • Perform a final refresh of all of the Oracle VM server objects
  • Close the Oracle VM CLI ssh session
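
The discovery and storage steps above are driven by runbook keywords. A hypothetical fragment is shown below; all values are illustrative (the plugin and storage names are borrowed from the CLI examples later in this document), and the exact accepted values are described in the ROAD for Oracle VM Runbook Keywords section:

```
ovs.servers = 10.1.2.130,10.2.3.234
ovs.storage.type = iscsi
ovs.storage.plugin = Oracle Generic SCSI Plugin
ovs.storage.name = scsi-stor
ovm.ntp.servers = 10.1.2.5
```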
 

start_vms.sh

This command attempts to start the virtual machines based upon the list of virtual machines specified in the runbook.
  • For each VM, its status is checked.
  • Then the VM is started on a server chosen according to the policy specified in the Server Pool that owns the Repository holding the VM's configuration.
 
Note: The assumption is that the specified VMs are not running on any other server in the current network and VM infrastructure.
 
Usage:
./start_vms.sh or sh start_vms.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the command will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the command will exit with an appropriate result code
  • For all VMs listed in the runbook with the ovs.start.vm parameter
    • Check VM's status
    • Check VM's configuration to get its associated server, server pool, repository, and vmDiskMapping, which are required to start the VM
    • Start VM
  • If a failure occurs, skip that VM and move on to the next one
  • Wait for the command to complete
  • If the command is successful, the VM will start
  • Otherwise log that the VM would not start; see the log file for more details
  • Close the Oracle VM CLI ssh session
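
The first step of the loop above depends on reading the ovs.start.vm list out of the runbook. The following is a hypothetical sketch of that parsing (not the shipped script); the runbook path and VM names are made up for illustration, and the actual start would be issued through the Oracle VM CLI session:

```shell
#!/bin/sh
# Sketch: extract the ovs.start.vm list from a runbook and loop over the VMs.
# The runbook path and VM names below are illustrative only.
runbook=/tmp/demo_runbook.conf
cat > "$runbook" <<'EOF'
ovs.start.vm = vm-web1,vm-db1,vm-app1
EOF

# Take the value after "=", drop spaces, then split on commas
vms=$(sed -n 's/^ovs\.start\.vm[[:space:]]*=[[:space:]]*//p' "$runbook" | tr -d ' ')
for vm in $(printf '%s\n' "$vms" | tr ',' ' '); do
  # A real run would issue the start through the Oracle VM CLI session
  # and log the outcome, rather than just echoing.
  echo "would start: $vm"
done
```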
 

shutdown_vms.sh

This command attempts to shut down the list of virtual machines specified in the runbook.
  • First it checks the status of all VMs.
  • Next, an attempt is made to shut down the running VMs.
  • If a VM does not shut down successfully, the command will, as a last resort, try to “kill” it after the wait time specified by the runbook parameter ovs.vm.killwait.
 
Note: The assumption is that the specified VMs are running in the current network and VM infrastructure.
 
Usage:
./shutdown_vms.sh or sh shutdown_vms.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • For all VMs listed in the runbook with the ovs.stop.vm parameter
    • Check VM's status
    • If the VM's status is running, issue a “Stop VM” command to attempt a clean shutdown of the VM
    • If the command was successful, wait for the VM status to become “Stopped”
    • If the VM cannot be stopped within the time specified by the ovs.vm.killwait parameter, attempt to forcefully shut it down by issuing the “Kill VM” command
  • Wait for the VMs to be stopped.
  • Close the Oracle VM CLI ssh session.
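
The stop-then-kill pattern above can be sketched generically. In this hedged example a background `sleep` stands in for a running VM and `killwait` stands in for the runbook's ovs.vm.killwait value; a real run would issue “Stop VM” and “Kill VM” through the Oracle VM CLI instead of sending signals:

```shell
#!/bin/sh
# Generic sketch of the stop-then-kill pattern used by shutdown_vms.sh.
killwait=2    # stands in for the runbook's ovs.vm.killwait value

sleep 300 &   # stand-in "VM" that ignores the clean-stop request
pid=$!

# Clean-stop attempt: poll until the "VM" exits or the wait time expires
elapsed=0
while kill -0 "$pid" 2>/dev/null && [ "$elapsed" -lt "$killwait" ]; do
  sleep 1
  elapsed=$((elapsed + 1))
done

# Still running after killwait seconds: force it, as a last resort
if kill -0 "$pid" 2>/dev/null; then
  kill -9 "$pid"
  echo "killed after ${killwait}s"
fi
wait "$pid" 2>/dev/null || true
```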
 

status_vms.sh

This command checks and prints the status of all virtual machines to the terminal.
  • For each VM, its status is checked to find whether it is running or stopped. Oracle VM Release 3.3 also includes VM template details.

Note: If virtual machines or templates with the same name exist in multiple repositories, status_vms.sh will not display the status. For example, if an Oracle VM server pool has a template named OVM_OL7U2_x86_64_PVHVM in more than one repository, status_vms.sh displays: Virtual Machine OVM_OL7U2_x86_64_PVHVM is. If only one copy of the template named OVM_OL7U2_x86_64_PVHVM exists, status_vms.sh displays: Virtual Machine OVM_OL7U2_x86_64_PVHVM is Template.

Usage:
./status_vms.sh or sh status_vms.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the command will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the command will exit with an appropriate result code
  • For all VMs returned by the Oracle VM CLI “list vm” command
    • Check VM's status
    • Wait for the command to complete
    • If the command is successful, the VM's status will be displayed
    • See the log file for more details
  • Close the Oracle VM CLI ssh Session
 

migrate_vms.sh

This command attempts to migrate virtual machines from the Unassigned Virtual Machines folder, or to migrate running or stopped (stopped: Oracle VM Release 3.2 only) virtual machines between Oracle VM server pool members, based upon the list of virtual machines specified in the runbook.
  • For each VM, its status is checked.
  • Oracle VM Release 3.3: VMs from the Unassigned Virtual Machines folder are migrated to the server pool of the specified Oracle VM Server (taking into account the policy of the Server Pool that owns the Repository holding the VM's configuration).
  • Oracle VM Release 3.2: VMs from the Unassigned Virtual Machines folder are migrated to the specified Oracle VM Server (taking into account the policy of the Server Pool that owns the Repository holding the VM's configuration).
  • The command also attempts to migrate running or stopped virtual machines from the list specified in the runbook from one Oracle VM server to another in the same server pool.
  • For each VM, its status is checked.
  • Then the VMs are migrated from one Oracle VM server (the source) to another (the target server specified in the runbook) in the same server pool.
 
Note: VMs migrated from the Unassigned Virtual Machines folder to a server pool will be in the stopped state; use the start_vms.sh script to start them.
 
Usage:
./migrate_vms.sh or sh migrate_vms.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the command will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the command will exit with an appropriate result code
  • Oracle VM Release 3.3 and 3.2: For all VMs listed in the runbook with the ovs.migrate.unassignedvm parameter
    • Check VM's status
    • Check Server pool of the specified server to migrate VM to.
    • Migrate VM from Unassigned Virtual Machines to target Oracle VM server
    • If a failure occurs, skip that VM and move on to the next one
    • Wait for the command to complete
    • If the command is successful, the VM will be migrated
    • Otherwise the VM is not migrated; see the log file for more details
  • Oracle VM Release 3.3: For all VMs mentioned in runbook with parameter ovs.migrate.runningvm
    • Check VM's status, to verify if it is running
    • Check server pool of the target Oracle VM server, then verify that both source and target Oracle VM servers are in the same server pool
    • If yes, migrate VM from currently running Oracle VM server to specified Oracle VM server
    • Process all VMs from the list
    • Close the Oracle VM CLI ssh Session
  • Oracle VM Release 3.2: For all VMs mentioned in runbook with parameter ovs.migrate.assignedvm
    • Check VM's status, to see if it is running or stopped
    • Check server pool of the target Oracle VM server, then verify that both source and target Oracle VM servers are in the same server pool
    • If yes, migrate VM from currently running Oracle VM server to specified Oracle VM server
    • A stopped VM that is assigned to a server is migrated to the other Oracle VM server if both servers are in the same server pool
    • Process all VMs from the list
    • Close the Oracle VM CLI ssh session
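
Per VM, the migration itself is a single Oracle VM CLI operation. The Release 3.3 syntax is along these lines (the VM and server names are illustrative; check the CLI help for the exact syntax in your release):

```
OVM> migrate Vm name=myvm destServer=ovs-sulu
```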

 

rename_vdisks.sh

This command renames all virtual disks assigned to virtual machines to follow a standard naming convention. Virtual disk names start with the virtual machine name followed by the disk slot number.
 
Virtual disk renaming is based on values extracted from the Oracle VM CLI command VMdiskmapping.
 
Usage: ./rename_vdisks.sh or ./rename_vdisks.sh -v or ./rename_vdisks.sh -d -v
 
Naming Standard:
vmname-vdisk0 (where 0 is the slot number)
vmname-vdisk1 (where 1 is the slot number)
vmname1-vmname2-svdisk0 (where a disk is shared between 2 VMs)

OPTIONS:
-h   Show this message
-m   OVM 3.2 Manager
-d   Dry Run.  Command will make no changes, but instead provide the output of an actual run
-v   Verbose flag
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • Get the list and details of VMdiskMappings, then extract the virtual disk ID and name, the virtual machine name and ID, and the virtual disk slot number
  • Generate the standard virtual disk name based on the above information
  • If the virtual disk name does not follow the standard, rename it
  • Virtual disk name will be renamed to <VMname>-vdisk<Slotnumber>
  • Shared virtual disks are detected when the virtual disk details have multiple vmdiskmappings pointing to same virtual disk Id.
  • If virtual disk is shared, it will be renamed to <VMname1>-<VMname2-...>-svdisk<Slotnumbers>
  • If virtual disk name follows naming standard, skip it and process next virtual disk until all virtual disks are processed.
  • Close the Oracle VM CLI ssh session.

Note: The command can be run with the dry run option before making any changes, e.g.: ./rename_vdisks.sh -d -v
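
The naming convention itself can be sketched as a small shell function. This is an illustration of the convention only, not the shipped renaming logic; the VM names are made up:

```shell
#!/bin/sh
# Sketch: build the standard virtual disk name from VM name(s) and slot number(s).
vdisk_name() {
  # $1 = comma-separated VM name(s), $2 = slot number(s)
  case $1 in
    *,*) echo "$(echo "$1" | tr ',' '-')-svdisk$2" ;;  # shared between VMs
    *)   echo "$1-vdisk$2" ;;                          # single owner
  esac
}

vdisk_name vmname 0            # vmname-vdisk0
vdisk_name vmname1,vmname2 0   # vmname1-vmname2-svdisk0
```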
 

rename_pdisks.sh

This command renames all physical disks assigned to virtual machines to follow a standard naming convention. Physical disk names start with the virtual machine name followed by the disk slot number.
 
Physical disk renaming is based on values extracted from the Oracle VM CLI command VMdiskmapping.
 
Usage: ./rename_pdisks.sh or ./rename_pdisks.sh -v or ./rename_pdisks.sh -d -v
 
Naming Standard:
vmname-pdisk0 (where 0 is the slot number)
vmname-pdisk1 (where 1 is the slot number)
vmname1-vmname2-spdisk1 (where a disk is shared between 2 VMs)

OPTIONS:
-h   Show this message
-m   OVM 3.2 Manager
-d   Dry Run.  Command will make no changes, but instead provide the output of an actual run
-v   Verbose flag
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • Get the list and details of VMdiskMappings, and extract the physical disk ID and name, the virtual machine name and ID, and the physical disk slot number
  • Generate the standard physical disk names based on the above information
  • If the physical disk name does not follow the standard, rename it based on the above information
  • Physical disk name will be renamed to <VMname>-pdisk<Slotnumber>
  • Shared physical disks are detected when the physical disk details have multiple vmdiskmappings pointing to same physical disk Id.
  • If physical disk is shared, it will be renamed to <VMname1>-<VMname2-...>-spdisk<Slotnumbers>
  • If physical disk name follows naming standard, skip it and process next physical disk until all physical disks are processed
  • Close the Oracle VM CLI ssh session
 
Note: The command can be run with the dry run option before making any changes, e.g.: ./rename_pdisks.sh -d -v

 

rename_allvmdisks.sh

This command renames all virtual and physical disks assigned to virtual machines to follow a standard naming convention. Virtual and physical disk names start with the virtual machine name followed by the disk slot number.
 
Disk renaming is based on values extracted from the Oracle VM CLI command VMdiskmapping.
 
Usage: ./rename_allvmdisks.sh or ./rename_allvmdisks.sh -v or ./rename_allvmdisks.sh -d -v
 
Naming Standard:
vmname-vdisk0 (where 0 is the slot number)
vmname-vdisk1 (where 1 is the slot number)
vmname1-vmname2-svdisk0 (where a disk is shared between 2 VMs)
vmname-pdisk0 (where 0 is the slot number)
vmname-pdisk1 (where 1 is the slot number)
vmname1-vmname2-spdisk01 (where a disk is shared between 2 VMs)

OPTIONS:
-h   Show this message
-m  OVM 3.2 Manager
-d   Dry Run.  Command will make no changes, but instead provide the output of an actual run
-v   Verbose flag
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • Get the list and details of VMdiskMappings, and extract the physical or virtual disk ID and name, the virtual machine name and ID, and the disk slot number
  • Generate the standard VM disk names based on the above information
  • If the VM disk name does not follow the standard, rename it based on the above information
  • Physical disk name will be renamed to <VMname>-pdisk<Slotnumber>
  • Shared physical disks are detected when the physical disk details have multiple vmdiskmappings pointing to same physical disk Id.
  • If physical disk is shared, it will be renamed to <VMname1>-<VMname2-...>-spdisk<Slotnumbers>
  • Virtual disk name will be renamed to <VMname>-vdisk<Slotnumber>
  • Shared virtual disks are detected when the virtual disk details have multiple vmdiskmappings pointing to same virtual disk Id.
  • If virtual disk is shared, it will be renamed to <VMname1>-<VMname2-...>-svdisk<Slotnumbers>
  • If physical or virtual disk name follows naming standard, skip it and process next vmdiskmapping and its associated disk until all vmdiskmappings and associated physical or virtual disks are processed.
  • Close the Oracle VM CLI ssh Session
 
Note: The command can be run with the dry run option before making any changes, e.g.: ./rename_allvmdisks.sh -d -v

 

save_nice_names.sh

This command creates a text file with the Oracle VM Manager user-friendly names; the network, vlan, and server pool descriptions; and the tags, which can be used to recover the objects. The user-friendly names output is captured into a plain text file, /tmp/mokum_nice_names.txt, whose path is defined by the 'mokum.nicenames.path' parameter in the selected runbook. Along with the /tmp/mokum_nice_names.txt file, additional details about tags and tagged objects are saved in the /tmp/tagobjects file. Both files are automatically backed up in /tmp/mokum_nice_names.<YYYYMMDD_HHMISS>.tar.
 
For each instance of an object of the given type, its unique ID is retrieved along with the "nice name" that goes with it. This information is then written out to a plain text file that contains a single header row followed by multiple detail rows.
 
Each row contains the following columns in order, separated by a single pipe ("|") character:
  • object_type (e.g. PhysicalDisk)
  • uid_field (e.g. Page83 ID)
  • uid_value
  • nice_name_field (e.g. Name)
  • nice_name_value
  • object_id (The OVM Object's ID at the time it is saved)
 
All of the above fields are used for the physicaldisk, virtualdisk, VM, repository, and storagearray (SAN server) objects.
 
For the network, vlangroup (3.2 only), serverpool, and tag objects, the description is also saved along with the name and object ID. The generated output files are intended to be used with the restore_nice_names.sh command, which will read and apply the captured values to an existing Oracle VM Manager database repository.
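
The pipe-delimited layout described above is easy to post-process with standard tools. The following sketch fabricates two sample rows following that column order (the IDs and names are made up; a real file is produced by save_nice_names.sh at the path set by mokum.nicenames.path):

```shell
#!/bin/sh
# Sketch: read a nice-names file laid out as described above.
f=/tmp/demo_nice_names.txt
cat > "$f" <<'EOF'
object_type|uid_field|uid_value|nice_name_field|nice_name_value|object_id
PhysicalDisk|Page83 ID|360a98000abc|Name|vmweb1-pdisk0|0004fb00001800001a2b
VirtualMachine|Id|0004fb00000600007c3d|Name|vmweb1|0004fb00000600007c3d
EOF

# Print "object_type: nice_name_value" for each detail row, skipping the header
awk -F'|' 'NR > 1 { print $1 ": " $5 }' "$f"
```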
 
Usage: ./save_nice_names.sh OR sh save_nice_names.sh
 
Logic:
  • Process runbook
    • If there is an error processing runbook, the command will exit with an appropriate result code
  • Open an SSH session to the OVM CLI
    • If there is an error opening the session, the utility will exit with an appropriate result code
  • Initialize the list of Oracle VM Object Types to process
  • For each Object Type specified
    • Get the list of Ids and Names of all instances of objects of the current type
    • For each Object Instance
      • Process the Object Instance by writing its “nice name”, description (for the network, vlangroup (3.2 only), serverpool, and tag objects), and unique identifier to the output file
    • If the object type is tag, save more details about tags and tagged objects in /tmp/tagobjects file
  • Close the Oracle VM CLI ssh Session
 

restore_nice_names.sh

This command restores the Oracle VM Manager user-friendly names; the network, vlan, and server pool descriptions; and the tags, using the output files from the save_nice_names.sh command. For each instance of an object of the given type, its unique ID is used to update/replace the existing object's "nice name". The information is read from a plain text file that contains a single header row followed by multiple detail rows. Each row contains the following columns in order, separated by a single pipe ("|") character:
  • object_type (e.g. PhysicalDisk)
  • uid_field (e.g. Page83 ID)
  • uid_value
  • nice_name_field (e.g. Name)
  • nice_name_value
  • object_id (The OVM Object's ID at the time it was saved)
The input must have been previously captured by the save_nice_names.sh command into a plain text file that is defined in the runbook 'mokum.nicenames.path' keyword pair, along with the tag object details from the /tmp/tagobjects file.
 
Usage: ./restore_nice_names.sh OR sh restore_nice_names.sh
 
Logic:
  • Process runbook
    • If there is an error processing the runbook, the utility will exit with an appropriate result code
  • Open an SSH session to the OVM CLI
    • If there is an error opening the session, the utility will exit with an appropriate result code
  • Initialize the list of Oracle VM Object Types to process
  • Read the existing “nice name” values from the previously specified input file
  • For each Object Type specified
    • Get the list of instance Ids and Names of all objects of the current type from the currently connected Oracle VM CLI
    • For each of the previously saved “nice names” retrieved from the input file for the current Object Type
      • If the previously saved “nice name” has a corresponding object in the existing Object list
        • And if the existing object's name is different than the preserved “nice name”
          • Apply the preserved “nice name” to the existing object.
        • Else
          • Ignore this entry
      • Else
        • If the Object Type is tag and not seen in present Oracle VM Manager repository database
          • Add a new object of this type to the existing Oracle VM Manager repository database
        • Add tag to objects listed in /tmp/tagobjects file, if those objects are in current object list
  • Close the Oracle VM CLI ssh Session
 

import_block_repo.sh

This command is used to migrate block (iSCSI or Fibre Channel) storage repositories between server pools. For example, storage repositories that have been replicated, cloned, or copied (i.e. for disaster recovery) or moved between server pools:
a) must be on-line in read-write mode
b) must be available (shared, zoned, masked, mapped, etc.) to all of the target Oracle VM servers
As part of migrating replicated, cloned, or copied volumes, the “.ovs_import_block_repo.sh” command called from import_block_repo.sh is run on only one of the Oracle VM servers to ensure that the storage is properly associated with the desired OCFS2 Cluster and Oracle VM Manager. If defined in the runbook, the commands will change the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine's vm.cfg file. After “.ovs_import_block_repo.sh” completes on the target Oracle VM server, the last step in bringing cloned volumes on line is to execute the import_block_repo.sh script on Oracle VM Manager. This command refreshes physical disks and filesystems, presents the storage repositories to the pool servers, and then refreshes the repositories to make the virtual machines, templates, and assemblies visible.
 
This command requires bidirectional key-based SSH authentication for the Oracle VM servers that will be selected when running the import_block_repo.sh commands. From the Oracle VM Manager host, as root, type the following command for the target Oracle VM server. Substitute ovmserver with the hostname of each of your Oracle VM servers.
 
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmserver
From the Oracle VM server, as root, type the following command. Substitute ovmmanager with the hostname of the Oracle VM Manager host.
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmmanager
 
Note: Please refer to the ROAD for Oracle VM SSH Prerequisites section for the complete ssh key setup.
 
Usage - From Oracle VM Manager: ./import_block_repo.sh or sh import_block_repo.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • The Oracle VM Manager hostname or IP address, along with the target Oracle VM server hostname or IP address, must be entered when the command is run
  • The repository (block device/LUN) Page 83 ID, vlan (bridge) ID pairs, and optionally physical disk Page 83 ID pairs (old to be replaced by new) are saved in /tmp/repopg83bids.txt
  • The /tmp/repopg83bids.txt file, and .ovs_import_block_repo.sh is sent via ssh to the target Oracle VM server for processing
  • The .ovs_import_block_repo.sh command is executed on the target Oracle VM server, to ensure that the storage is properly associated with desired OCFS2 Cluster and Oracle VM Manager.
  • If defined in the runbook, the commands will change the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine's vm.cfg file. The old vm.cfg files are backed up in the migrated repository in /OVS/Repositories/<LUNPage83ID>/mokum_vmcfg_backup_YYYYMMDD_HH before any vlan ID or physical disk Page 83 ID changes are made
  • Copy log file /tmp/import_blockrepo.<YYYYMMDD_HHMISS>.log to Oracle VM Manager in /tmp/ directory
  • After the above import processing is done by .ovs_import_block_repo.sh, refresh the repository (physical disk) and the underlying filesystem; if the refresh is successful, present the repository to the server pool specified in the runbook, then refresh the repository to make the virtual machines visible
  • Close the Oracle VM CLI ssh session.
 

.ovs_import_block_repo.sh

This command is not executed directly by the user; it is executed when the import_block_repo.sh command is run.
 
Logic:
  • Repository (block device- LUN) Page 83 ID, vlan(bridge) ID pairs and physical disk Page 83 ID pairs (old to be replaced by new) information saved in /tmp/repopg83bids.txt is sent by the import_block_repo.sh command executed on the Oracle VM Manager host to the target Oracle VM server.
  • The command ensures that the storage is properly associated with desired OCFS2 Cluster and Oracle VM Manager UUID, and if necessary changes the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine’s vm.cfg file. The old vm.cfg files are backed up in the migrated repository in /OVS/Repositories/<LUNPage83ID>/mokum_vmcfg_backup_YYYYMMDD_HH before making any vlan ID or Physical disk Page 83 ID changes
  • Copy log file /tmp/import_blockrepo.<YYYYMMDD_HHMISS>.log to Oracle VM Manager in /tmp/ directory
  • The import_block_repo.sh command then continues execution to present and refresh the repositories
 

import_file_repo.sh

This command is used to migrate file (NFS) storage repositories between server pools. For example, storage repositories that have been replicated, cloned, or copied (i.e. for disaster recovery) or moved between server pools:
a) must be on-line in read-write mode
b) must be available (shared, mapped, etc.) to all of the target Oracle VM servers
As part of migrating replicated, cloned, or copied volumes, the “.ovs_import_file_repo.sh” command called from import_file_repo.sh is run on only one of the Oracle VM servers to ensure that the storage is properly associated with the desired OCFS2 Cluster and Oracle VM Manager. If defined in the runbook, the commands will change the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine's vm.cfg file. After “.ovs_import_file_repo.sh” completes on the target Oracle VM server, the last step in bringing cloned volumes on line is to execute the import_file_repo.sh script on Oracle VM Manager. This command refreshes physical disks and filesystems, presents the storage repositories to the pool servers, and then refreshes the repositories to make the virtual machines, templates, and assemblies visible.
 
This command requires bidirectional key-based SSH authentication for the Oracle VM servers that will be selected when running the import_file_repo.sh commands. From the Oracle VM Manager host, as root, type the following command for the target Oracle VM server. Substitute ovmserver with the hostname of each of your Oracle VM servers.
 
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmserver
From the Oracle VM server, as root, type the following command. Substitute ovmmanager with the hostname of the Oracle VM Manager host.
ssh-copy-id -i ~/.ssh/id_rsa.pub ovmmanager
 
Note: Please refer to the ROAD for Oracle VM SSH Prerequisites section for the complete ssh key setup.
 
Usage: ./import_file_repo.sh or sh import_file_repo.sh
 
Logic:
  • Process runbook with the help of sourced utils_lib.sh
  • If there is an error, the utility will exit with an appropriate result code
  • Open an SSH session to the Oracle VM CLI
  • If there is an error opening the session, the utility will exit with an appropriate result code
  • The Oracle VM Manager hostname or IP address, along with the target Oracle VM server hostname or IP address, must be entered when the command is run
  • The repository (block device- LUN) Page 83 ID, vlan (bridge) ID pairs, and optionally physical disk Page 83 ID pairs (old to be replaced by new) are saved in /tmp/repopg83bids.txt
  • The /tmp/repopg83bids.txt file, and .ovs_import_file_repo.sh is sent via ssh to the target Oracle VM server for processing
  • The .ovs_import_file_repo.sh command is executed on the target Oracle VM server, to ensure that the storage is properly associated with desired OCFS2 Cluster and Oracle VM Manager.
  • If defined in the runbook, the commands will change the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine's vm.cfg file. The old vm.cfg files are backed up in the migrated repository in /OVS/Repositories/<LUNPage83ID>/mokum_vmcfg_backup_YYYYMMDD_HH before any vlan ID or physical disk Page 83 ID changes are made
  • Copy log file /tmp/import_filerepo.<YYYYMMDD_HHMISS>.log to Oracle VM Manager in /tmp/ directory
  • After the above import processing is done by .ovs_import_file_repo.sh, refresh the repository (physical disk) and the underlying filesystem; if the refresh is successful, present the repository to the server pool specified in the runbook, then refresh the repository to make the virtual machines visible
  • Close the Oracle VM CLI ssh session.
 

.ovs_import_file_repo.sh

This command is not executed directly by the user; it is executed when the import_file_repo.sh command is run.
 
Logic:
  • Repository (block device- LUN) Page 83 ID, vlan(bridge) ID pairs and physical disk Page 83 ID pairs (old to be replaced by new) information saved in /tmp/repopg83bids.txt is sent by the import_file_repo.sh command executed on the Oracle VM Manager host to the target Oracle VM server.
  • The command ensures that the storage is properly associated with desired OCFS2 Cluster and Oracle VM Manager UUID, and if necessary changes the virtual machine vlan IDs and physical disk Page 83 IDs in each virtual machine’s vm.cfg file. The old vm.cfg files are backed up in the migrated repository in /OVS/Repositories/<LUNPage83ID>/mokum_vmcfg_backup_YYYYMMDD_HH before making any vlan ID or Physical disk Page 83 ID changes
  • Copy log file /tmp/import_filerepo.<YYYYMMDD_HHMISS>.log to Oracle VM Manager in /tmp/ directory
  • The import_file_repo.sh command then continues execution to present and refresh the repositories
 
Note: SSH key-based authentication must be configured between Oracle VM Manager and the Oracle VM servers before running the import_block_repo.sh and import_file_repo.sh scripts.
 

ROAD for Oracle VM Runbook Keywords

This section describes each ROAD for Oracle VM runbook keyword. Any differences between Oracle VM Release 3.3 and 3.2 keywords will be highlighted with the appropriate Oracle VM Release number.
 
NOTE: The Oracle VM 3.2 CLI was updated with the Oracle VM 3.3 Release. There are numerous differences between the Oracle VM 3.2 and 3.3 CLIs. The majority of ROAD for Oracle VM keywords are shared between Oracle VM Release 3.3 and 3.2. There is a small number of Release-specific keywords that address the differences between the Oracle VM 3.2 and 3.3 CLI commands and functionality.

 

Runbook Minimum Requirement Keywords

 
mokum.log.loc
 
  • Must be the first line in the runbook in order to capture all further events in the log file
  • A log file is created whenever any of the ROAD for Oracle VM commands are run
  • Special Handling: To auto-generate a timestamp at the end of the file name, put a period “.” as the last character of the file name. The extension will be of the format YYYYMMDD_HHMMSS.log
  • Default: mokum.log.loc =  /tmp/mokum_utils.
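
The trailing-period convention above can be sketched in shell. This is an illustration of the naming rule only (using the default value), not the shipped logging code:

```shell
#!/bin/sh
# Sketch: when mokum.log.loc ends in ".", append a YYYYMMDD_HHMMSS.log suffix.
logloc="/tmp/mokum_utils."
case $logloc in
  *.) logfile="${logloc}$(date +%Y%m%d_%H%M%S).log" ;;
  *)  logfile="$logloc" ;;
esac
echo "$logfile"
```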
ovm.config.path
 
  • The full path on the Oracle VM Manager server to the Oracle VM Manager's configuration file.
  • Default: ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw
 
  • Password for the “admin” user used to login to the Oracle VM Manager CLI (Command Line Interface)
  • Format/Example: ovm.pw = admin-user-password
    • ovm.pw = mypassword
  • The password must be set in the runbook
cli.host
 
  • Hostname of the server running Oracle VM Manager and the Oracle VM Manager CLI
  • Format: cli.host = ovm-manager-hostname or cli.host = localhost
cli.port
 
  • Port that the Oracle VM Manager CLI daemon is listening on
  • Default/Format: cli.port = 10000
cli.user
 
  • User that logs into the Oracle VM Manager CLI
  • Default/Format: cli.user = admin
ovs.agent.port
 
  • Port that the Oracle VM server agent is listening on
  • Default/Format: ovs.agent.port = 8899
ovs.agent.user
 
  • User associated with the agent user account (ovs-agent) on the Oracle VM servers
  • Default/Format: ovs.agent.user = oracle
ovs.agent.pw
 
  • Password for the ovs.agent.user
  • Format/Example: ovs.agent.pw = my-ovs-agent-password
    • ovs.agent.pw = mypassword
  • The password must be set in the runbook
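
Putting the minimum-requirement keywords together, a minimal runbook might look like the following. All values are the defaults or placeholder examples from the descriptions above; adjust them for your environment:

```
mokum.log.loc = /tmp/mokum_utils.
ovm.config.path = /u01/app/oracle/ovm-manager-3/.config
ovm.pw = mypassword
cli.host = localhost
cli.port = 10000
cli.user = admin
ovs.agent.port = 8899
ovs.agent.user = oracle
ovs.agent.pw = mypassword
```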

Runbook Oracle VM Server Keywords

ovs.servers
 
  • List of the Oracle VM server IP addresses for discovery
  • No override, this setting must be specified in the runbook
  • Multiple servers are specified as ip-address[,ip-address,...]
  • Format/Example: ovs.servers = 10.1.2.130,10.2.3.234
ovs.server.names
  • Oracle VM server hostnames corresponding to the IP addresses listed in ovs.servers above
  • Format/Example: ovs.server.names = ovs-server1,ovs-server2,ovs-server3
  • Use the “hostname” command on the Oracle VM server, or the Oracle VM CLI commands “list server” and “show server name=servername” to map the Oracle VM server names to the IP addresses set above in ovs.servers
e.g. OVM> list server
id:01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  name:ovs-pickard
id:81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  name:ovs-sulu
OVM> show server name=ovs-sulu
Ip Address = 192.168.3.107
 

Runbook Oracle VM Storage Keywords

ovs.nfsadmin.servers
  • Name or ID of the Oracle VM Admin Server(s) for NFS storage
  • Specify comma separated list if more than one Oracle VM Admin Server (Oracle VM servers) exists
  • Format/Example: ovs.nfsadmin.servers = ovs-server1,ovs-server2,ovs-server3
  • Use the Oracle VM CLI commands list fileServer and show fileServer name=... (see the Admin Server lines in the output) to get the admin servers for a file server, or the administrator can decide which Oracle VM servers to add
e.g. OVM> list fileServer
...
id:0004fb0000090000729e74e2a87fbc6b  name:nas-trip
...
OVM> show fileServer name=nas-trip
  Access Host = 192.168.2.100
  Admin Server 1 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
  Admin Server 2 = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]
  Refresh Server 1 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
  Refresh Server 2 = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]
   Id = 0004fb0000090000729e74e2a87fbc6b  [nas-trip]
  • i.e. ovs.nfsadmin.servers = ovs-sulu,ovs-pickard
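 
The same lookup can be scripted; this sketch (not part of ROAD for Oracle VM) pulls the bracketed server names from the "Admin Server" lines of captured "show fileServer" output like the example above:

```shell
# Hypothetical helper: derive ovs.nfsadmin.servers from captured
# "show fileServer" output (sample from above).
show_fileserver_output='Access Host = 192.168.2.100
Admin Server 1 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
Admin Server 2 = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]'

# Split each "Admin Server" line on [ and ] and keep the bracketed field.
nfsadmin=$(printf '%s\n' "$show_fileserver_output" \
  | awk -F'[][]' '/^Admin Server/ {print $2}' | paste -sd, -)
echo "ovs.nfsadmin.servers = $nfsadmin"
```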
ovs.iscsisanadmin.servers
  • Name or ID of the Oracle VM Admin Server(s) for iSCSI storage
  • Specify comma separated list if more than one Oracle VM Admin Server (Oracle VM servers) exists
  • Format/Example: ovs.iscsisanadmin.servers = ovs-server1,ovs-server2,ovs-server3
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list storagearray, show storagearray name=... (see Admin Server line in output)  to get Admin Server(s) for the iSCSI storagearray
e.g. OVM> list storagearray
id:0004fb0000090000bab8c5b7d0445d30  name:scsi-stor
OVM> show storagearray name=scsi-stor
Admin Server 1 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
Admin Server 2 = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]
Storage Type = Iscsi
Access Host 1 - Hostname = 192.168.2.100
Access Host 1 - Port = 3260
Id = 0004fb0000090000bab8c5b7d0445d30  [scsi-stor]
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=... (see Admin Server line in output)  to get Admin Server(s) for the iSCSI storagearray
e.g. OVM> list sanserver
id:0004fb0000090000bab8c5b7d0445d30  name:scsi-stor
OVM> show sanserver name=scsi-stor
 Name = nas-trip
 Id = 0004fb00000900001ae90accf5e879ad
 Storage Plug-in = Oracle Generic SCSI Plugin
 Access Host 1 = 192.168.2.100
 Access Port 1 = 3260
 Storage Type = iSCSI Storage Server
 Admin Server 1 = 81:a4:20:2e:8e:fe:d5:11:a1:df:1c:c1:de:07:2f:1a  [ovs-spock]
 Admin Server 2 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
...
  • i.e. ovs.iscsisanadmin.servers = ovs-sulu,ovs-pickard
 
ovs.fcsanadmin.servers
  • Name or ID of the Oracle VM Admin Server(s) for Fibre Channel storage
  • Specify comma separated list if more than one Oracle VM Admin Server (Oracle VM servers) exists
  • Format/Example: ovs.fcsanadmin.servers = ovs-server1,ovs-server2,ovs-server3
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list storagearray, show storagearray name=... (see Admin Server line in output)  to get admin servers for FC storagearray or the admin can decide which Oracle VM servers to add
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=... (see Admin Server line in output)  to get admin servers for FC storagearray or admin can decide.
 
ovs.nfsrefresh.servers
  • Name or ID of the Oracle VM server(s) to be used as refresh server(s) for NFS storage using either uniform or non-uniform exports.
  • The refresh server is used to refresh file systems on NFS file servers
  • Specify comma separated list if more than one Oracle VM server
  • Format/Example: ovs.nfsrefresh.servers = ovs-server1,ovs-server2,ovs-server3
  • Use Oracle VM CLI commands: list fileServer, show fileserver name=... (see Refresh Server line in output) to get refresh servers for file servers or the admin can decide which Oracle VM servers to add
 
ovs.storage.type
  • The Oracle VM SAN server storage type as configured within Oracle VM Manager.
  • The options are as follows:
    • Fibre Channel: FibreChannelStorageArray (Literal: SAN Storage Server)
    • iSCSI: iSCSIStorageArray (Literal: iSCSI Storage Server)
  • Both can be present in a single environment
  • Format:
    • ovs.storage.type = FibreChannelStorageArray
    • ovs.storage.type = iSCSIStorageArray
 
ovs.storage.plugin
  • Name of the storage plugin that should be used when creating/discovering the storage array
  • There is currently only one option: Oracle Generic SCSI Plugin(1.1.0)
  • Default/Format: ovs.storage.plugin = Oracle Generic SCSI Plugin(1.1.0)
  • Oracle VM Release 3.3: Use Oracle VM CLI command list storagearrayplugin to get the value
e.g. OVM> list storagearrayplugin
id:oracle.generic.SCSIPlugin.GenericPlugin (1.1.0)  name:Oracle Generic SCSI Plugin
...
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=...  to get the value
 
ovs.nfs.plugin
  • Name of the storage plugin that should be used when creating/discovering the file server
  • Default/Format: ovs.nfs.plugin = Oracle Generic Network File System
  • Oracle VM Release 3.3: Use Oracle VM CLI command list fileServerplugin to get the value
e.g. OVM> list fileServerPlugin
id:oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  name:Oracle Generic Network File System
...
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list fileserver, show fileserver name=...  to get the value (see Storage Plug-in line in output)
 
ovs.nfsstorage.name
  • Name of the NFS file server that is being used in Oracle VM Manager
  • Multiple NFS storages are specified as fileserver1, fileserver2
  • Format/Example: ovs.nfsstorage.name = fileserver1
  • Use Oracle VM CLI command list fileServer to get nfs storage (fileserver) names
e.g. OVM> list fileServer
id:0004fb0000090000729e74e2a87fbc6b  name:nas-trip
...
 
ovs.fcstorage.name
  • Name of the SAN Storage that is being used in the Oracle VM Manager
  • Format/Example: ovs.fcstorage.name = Unmanaged FibreChannel Storage Array
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list storagearray, show storagearray name=... (see Storage Type line in output) to get SAN storage name
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=... (see Storage Type line in output) to get SAN storage name.
 
ovs.scsistorage.name
  • Name of the ISCSI storage array that is being used in Oracle VM Manager
  • Multiple iSCSI storage arrays are specified as iscsiserver1, iscsiserver2
  • Format/Example: ovs.scsistorage.name = iscsiserver1
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list storagearray, show storagearray name=... (see Storage Type line in output) to get iSCSI storage name.
e.g. Refer ovs.iscsisanadmin.servers
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=... (check Storage Type line in output) to get iSCSI storage name.
e.g. Refer ovs.iscsisanadmin.servers
 
ovs.nfsstoragename.accessHosts
  • The value is the Oracle VM NFS storage name followed by the name(s) or IP address(es) of the Access Host server(s) to use with that NFS storage
  • Can have multiple occurrences of this keyword/value pair if more than one NFS storage is present.
  • Format/Example: ovs.nfsstorage.accessHosts = fileserver1,10.214.11.1,10.214.12.1
    ovs.nfsstorage.accessHosts = fileserver2,10.214.11.2
  • Use Oracle VM CLI commands: list fileserver, show fileserver name=... (see Access Host line in output) to get Access Hosts for NFS Storage, and/or admin can decide which Access Hosts for NFS Storage entries to add
e.g. Refer ovs.nfsadmin.servers
 
ovs.scsistoragename.accessHosts
  • Oracle VM SAN server Access Hosts and port, for iSCSI storage types only
  • The value is the Oracle VM SAN (iSCSI) storage array name followed by the hostname(s) or IP address(es) of the Access Host server(s) to use with that iSCSI storage
  • The access port (default port number is 3260) follows each IP address/hostname, separated by a colon.
  • Can have multiple occurrences of this keyword/value pair if more than one iSCSI storage array is present.
  • Format/Example: ovs.scsistorage.accessHosts = nas-trip,10.214.11.1:3260
  • Format/Example: ovs.scsistorage.accessHosts = nas-trip,10.214.11.1:3260,10.214.12.1:3260
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list storagearray, show storagearray name=... (see Access Host lines in output) to get Access Hosts and port numbers for iSCSI Storage.
e.g. Refer ovs.iscsisanadmin.servers
  • Oracle VM Release 3.2: Use Oracle VM CLI commands: list sanserver, show sanserver name=... (see Access Host lines in output) to get Access Hosts and port numbers for iSCSI Storage.
e.g. Refer ovs.iscsisanadmin.servers
 
ovm.repo.server
  • Presenting a specific Storage Repository to one or more Oracle VM servers
  • First is the name of the repository followed by comma separated list of Oracle VM server names
  • Can have multiple occurrences of this keyword/value pair if more than one repository is to be presented
  • Format/Example:
    • ovm.repo.server = Tier-1,ovs-server1
    • or, with multiple Oracle VM servers:
    • ovm.repo.server = Tier-1,ovs-server1,ovs-server2,ovs-server3
  • (Tier-1 is the repository name; ovs-server1 through ovs-server3 are Oracle VM server names)
    • ovm.repo.server = Tier-2,ovs-server1,ovs-server2
  • (Tier-2 is the repository name; ovs-server1 and ovs-server2 are Oracle VM server names)
  • Use Oracle VM CLI commands: list repository, show repository name=... ( see Presented Server lines in output) to get repository names and Oracle VM server names to present repository to or admin can decide.
  • e.g. OVM> list repository
id:0004fb00000300000fc9d5a05ede58a0  name:nfs_prod
  id:0004fb0000030000f0847ce439568f7b  name:ocfs2_prod
id:0004fb00000300006bf4c8541f45efc6  name:ocfs2_dr
 
OVM> show repository name=ocfs2_dr
 
...
Presented = Yes
Presented Server 1 = 81:0c:aa:37:8e:fe:d5:11:a6:89:1c:c1:de:07:2f:35  [ovs-sulu]
Presented Server 2 = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]
Id = 0004fb00000300006bf4c8541f45efc6  [ocfs2_dr]
  • i.e. ovm.repo.server = ocfs2_dr,ovs-sulu,ovs-pickard

Runbook Oracle VM NTP Keywords

ovm.ntp.servers
 
  • Comma separated list of NTP servers (IP Address/ Hostname)
  • No override, this setting must be specified in the runbook
  • Format/Example: ovm.ntp.servers = ntp1.nist.gov,ntp2.nist.gov
  • Oracle VM Release 3.3: Use Oracle VM CLI commands: list server, show server name=... (see NTP Server lines in output) to get the NTP server IP addresses, or admin can decide.
e.g. OVM> show server name=ovs-sulu
NTP Server 1 = 192.168.20.101
  • Oracle VM Release 3.2: Use Oracle VM CLI command: showntp to get the NTP server IP addresses, or admin can decide.
e.g. OVM> showntp
192.168.20.101
 

Runbook Oracle VM YUM Update Repository Keywords

Oracle VM Release 3.3 has eight yum runbook keywords. Oracle VM Release 3.3 supports the default-Global server update group as well as server update groups (SUGs). The default-Global server update group is globally assigned to all server pools. Server update groups (SUGs) can be assigned to individual server pools, i.e. each server pool (Sandbox, Test, Dev, and Prod) could have its own time-stamped yum update repository.
 
Oracle VM Release 3.2 has three runbook keywords. Oracle VM Release 3.2 supports only global yum repository configurations.
 

Oracle VM Release 3.3 Yum Repository Keywords

Oracle VM Release 3.3 has eight yum runbook keywords. Oracle VM Release 3.3 supports the default-Global server update group as well as server update groups (SUGs). The default-Global server update group is globally assigned to all server pools. Server update groups (SUGs) can be assigned to individual server pools, i.e. each server pool (Sandbox, Test, Dev, and Prod) could have its own time-stamped yum update repository.
 
  • Keywords for the repositoryName, name, base URL, repository enablement, pkgSignatureType, GPG key, serverupdategroupId
  • All yum parameters mentioned below can have multiple values
  • For multiple yum repositories (i.e. public and server pool specific), assign multiple values: each yum keyword carries a corresponding value for each yum repository
  • Use Oracle VM CLI commands: list serverupdaterepository, show serverupdaterepository name=... to get values for below keywords or admin can decide
 
e.g. OVM> list serverupdaterepository
id:0004fb0000310000ec84dab5c6aaed27 name:3xPublic_Latest
OVM> show serverupdaterepository name=3xPublic_Latest
...
Server Update Group = GlobalX86ServerUpdateConfiguration  
[GlobalX86ServerUpdateConfiguration]
  Repository Name = 3xPublic_Latest
  Enabled = Yes
  Package Signature Type = GPG
Package Signature Key = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
  Id = 0004fb0000310000ec84dab5c6aaed27  [3xPublic_Latest]
Name = 3xPublic_Latest
...
  • Refer above output for yum parameters listed below
 
ovm.yum.serverupdategroup
  • Parameter used to add server update groups (SUGs) other than the default-Global server update group (default-Global SUG: GlobalX86ServerUpdateConfiguration or GlobalSparcServerUpdateConfiguration)
  • Keyword is ovm.yum.serverupdategroup
  • Value is a colon separated pair: the server update group name and the ID of the server pool on which to add the server update group (SUG)
  • NOTE: Specify server pool IDs only, not names
  • Use a comma separated list of such pairs for server update groups specific to multiple server pools
  • Format/Example: ovm.yum.serverupdategroup = serverUpdateConfiguration_0004fb0000020000621b4af3adc02b9e:0004fb0000020000621b4af3adc02b9e
  • Format/Example: ovm.yum.serverupdategroup = serverUpdateConfiguration_0004fb0000020000621b4af3adc02b9e:0004fb0000020000621b4af3adc02b9e,serverUpdateConfiguration_0004fb00000200005f5a87e8663a0739:0004fb00000200005f5a87e8663a0739
  • Use Oracle VM CLI commands: list serverupdategroup, show serverupdategroup name=... ( see Server Pool line in output) to get name of server update group and server pool ID, or admin can decide
e.g. OVM> list serverupdategroup
id:GlobalX86ServerUpdateConfiguration  name:GlobalX86ServerUpdateConfiguration
  id:GlobalSparcServerUpdateConfiguration  name:GlobalSparcServerUpdateConfiguration
id:serverUpdateConfiguration_0004fb0000020000ad6d8645f920bcd2 name:SUG1 …
OVM> show serverupdategroup name=SUG1
Server Pool = 0004fb0000020000ad6d8645f920bcd2  [HQ]
Global = No
...
 
ovm.yum.reposname
  • Keyword value for the name of the server update repository
  • This value can only contain alphanumeric characters and underscores, no spaces are permitted.
  • Format/Example: ovm.yum.reposname = 3xPublic_Latest
    ovm.yum.reposname = 3xPublic_Latest,Private_Repo1
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide
(See Repository Name line from output)
 
ovm.yum.rname
  • Keyword value for the name to identify the yum server update repository
  • Format/Example: ovm.yum.rname = OraclePublicYumRepo
    ovm.yum.rname = OraclePublicYumRepo,custom08062015
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide
(See Name line from output)
 
ovm.yum.baseURL
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide
(See Url  line from output)
 
ovm.yum.repoenabled
  • Keyword value to enable the yum repository
  • Values can be Yes/No
  • Format/Example: ovm.yum.repoenabled = yes
    ovm.yum.repoenabled = yes,no
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide
(See Enabled line from output)
 
ovm.yum.pkgSignatureType
  • Keyword value to select the signature type to verify the validity of the yum repository
  • Values can be either GPG (key), CA or None if there is no verification required
  • Format/Example: ovm.yum.pkgSignatureType = GPG
    ovm.yum.pkgSignatureType = GPG,None
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide
(See Package Signature Type line from output)
 
ovm.yum.gpgkey
  • Keyword value to select the verification signature for the repository
  • The location of the GPG key, specified using the HTTP, FTP, or HTTPS protocol, or locally using a file URI
  • Format/Example: ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide.
(See Package Signature Key line from output)
 
ovm.yum.serverupdategroupId
  • Keyword value to select the ID of server update group
  • For x86-based server pools, the default ID is GlobalX86ServerUpdateConfiguration
  • For SPARC-based server pools, the default ID is GlobalSparcServerUpdateConfiguration.
  • Format/Example: ovm.yum.serverupdategroupId = GlobalX86ServerUpdateConfiguration
    ovm.yum.serverupdategroupId = GlobalX86ServerUpdateConfiguration,serverUpdateConfiguration_0004fb0000020000621b4af3adc02b9e
  • Use Oracle VM CLI command show serverupdaterepository name=... or admin can decide.
(See Server Update Group line from output)
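 
As an illustration of the positional pairing described above, a hypothetical runbook fragment carrying two yum repositories (the public repository names are taken from the example output above; the second repository and both URLs are invented for illustration) might look like this, where the Nth value of each keyword applies to the Nth repository:

```
ovm.yum.reposname = 3xPublic_Latest,Private_Repo1
ovm.yum.rname = OraclePublicYumRepo,custom08062015
ovm.yum.baseURL = http://yum.example.com/public,http://yum.example.com/hq
ovm.yum.repoenabled = yes,yes
ovm.yum.pkgSignatureType = GPG,GPG
ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
ovm.yum.serverupdategroupId = GlobalX86ServerUpdateConfiguration,serverUpdateConfiguration_0004fb0000020000621b4af3adc02b9e
ovm.yum.serverupdategroup = serverUpdateConfiguration_0004fb0000020000621b4af3adc02b9e:0004fb0000020000621b4af3adc02b9e
```
 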
 

Oracle VM Release 3.2 Yum Repository Keywords

Oracle VM Release 3.2 has three runbook keywords. Oracle VM Release 3.2 supports only global yum repository configurations.
  • Keywords for the base URL, GPG Key enablement and the GPG key
  • Use Oracle VM CLI command: show yumconfig to get values for below keywords or admin can decide
e.g. OVM> show yumconfig
Enable GPG Key = Yes
YUM GPG Key = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
 
ovm.yum.baseURL
  • Use Oracle VM CLI command show yumconfig or admin can decide
(See Yum Base URL line from output)
 
ovm.yum.gpgkeycheck
  • Keyword value to enable or disable GPG key authorization for the Yum repository
  • Value can be either Yes or No.
  • Format/Example: ovm.yum.gpgkeycheck = Yes
    ovm.yum.gpgkeycheck = No
  • Use Oracle VM CLI command show yumconfig or admin can decide
(See Enable GPG Key line from output)
 
ovm.yum.gpgkey
  • Keyword value to select the verification signature for the yum repository
  • The location of the GPG key, specified using the HTTP, FTP, or HTTPS protocol, or locally using a file URI.
  • Format/Example: ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
  • Use Oracle VM CLI command show yumconfig or admin can decide.
(See YUM GPG Key line from output)
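 
A hypothetical Release 3.2 fragment using all three keywords (the base URL is invented for illustration; the GPG key path is from the example output above) might look like:

```
ovm.yum.baseURL = http://yum.example.com/ovm32
ovm.yum.gpgkeycheck = Yes
ovm.yum.gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
```
 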
 

Oracle VM Server Network Runbook Keywords

ovs.nic.network
  • Keyword used to add access ports, specifically the cluster heartbeat and live migration ports, to Oracle VM server networks if they were not added during an Oracle VM Manager restore using the restore_manager.sh command
  • Comma separated list of colon separated network interface name and network ID pairs
  • Format/Example: ovs.nic.network = eth0:103er10e8d
    ovs.nic.network = eth0:10163c75fe,eth1:10e8d28156
  • Use Oracle VM CLI command: list network, or admin can decide
 
ovs.host.nic.network
  • Keyword used to add access ports, specifically the cluster heartbeat and live migration ports, to Oracle VM server networks if they were not added during an Oracle VM Manager restore using the restore_manager.sh command
  • The list is made up of multiple occurrences of this keyword/value pair, one per Oracle VM server, mapping network interfaces on that server to network IDs
  • The keyword is: ovs.host.nic.network
  • Format is the keyword followed by an Oracle VM server instance identifier and a list of colon separated interface names and network IDs.
  • Format: ovs.host.nic.network = ovm-server-instance,interfacename1:networkid,interfacename2:networkid
    • i.e ovm-server1,eth0:1033695f01,eth1:1058fd1459
    • i.e ovm-server2,eth0:1033695f01,eth1:1058fd1459
  • ovm-server-instance is formatted as follows: ovs-servername
  • Oracle VM server names are used instead of IDs for simplicity; use names only
  • Example: ovs.host.nic.network = ovshost1,eth0:103er10e8d,eth1:2d156101ae
    •      ovs.host.nic.network = ovshost2,eth0:103er10e8d,eth1:2d156101ae

Virtual Machine Runbook Keywords

ovs.start.vm
  • Keyword to select Virtual Machine Instances on Oracle VM servers to start
  • Format is the keyword followed by comma separated list of VM instance identifiers to start
  • VM instance is formatted as follows: VM instance name (not ID)
  • Format/Example: ovs.start.vm = testvm1,testvm3
  • Use Oracle VM CLI command: list vm to get names of VMs to start
e.g. OVM> list vm
id:0004fb00000600004fc9500e826b6c68  name:ovm-mccoy
  id:0004fb000006000073629389b0b5c8e9  name:migrate4
id:0004fb00000600003eeb0f3e53924f7d  name:ovm-thedoctor
...
 
ovs.stop.vm
  • Keyword to select the Virtual Machine Instances running on Oracle VM servers to stop/shutdown
  • Format is the keyword followed by comma separated list of VM instance identifiers to stop/shutdown
  • VM instance is formatted as follows: VM instance name
  • ovs.stop.vm = testvm1,testvm3
  • Use Oracle VM CLI command: list vm to get names of VMs to stop.
e.g. Refer ovs.start.vm
 
ovs.vm.killwait
  • Keyword to set the wait time before forcibly killing Virtual Machines that do not shut down normally
  • This time is applicable for all VMs mentioned with ovs.stop.vm
  • Format is keyword followed by wait time in seconds
  • Example: ovs.vm.killwait = 120
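 
The general shape of this "wait, then kill" policy can be sketched as follows. This is illustrative only, not ROAD code; every function name here is a stand-in for the real Oracle VM CLI call:

```shell
# Illustrative only: the kill-wait policy that ovs.vm.killwait configures.
killwait=2                                    # ovs.vm.killwait, in seconds
vm_state=running
request_shutdown() { :; }                     # stand-in; this demo VM ignores it
poll_running()     { [ "$vm_state" = running ]; }
force_kill()       { vm_state=killed; }       # stand-in for a hard kill

request_shutdown
deadline=$(( $(date +%s) + killwait ))
while poll_running && [ "$(date +%s)" -lt "$deadline" ]; do
  sleep 1                                     # give the guest time to stop
done
if poll_running; then force_kill; fi          # still up after killwait: kill it
echo "final state: $vm_state"
```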
ovs.migrate.unassignedvm
  • Keyword to select the Virtual Machines to migrate from the Unassigned Virtual Machines folder to the server pools (of specified Oracle VM servers)
  • Format is the keyword followed by colon separated Oracle VM server instance identifier and comma separated VM instance identifiers. Names as identifiers are preferred for simplicity.
  • vm-instance is formatted as follows: vm-name e.g. ol6vm-x86_64_test1
  • ovs-instance is formatted as follows: ovm-server-name e.g. ovs-server1
  • The list is made up of multiple occurrences of the following keyword/value pair
  • Format: ovs.migrate.unassignedvm = ovs-server1:vm-instance1,vm-instance2,vm-instance3
  • Example: ovs.migrate.unassignedvm = ovs-server2:vm-instance4,vm-instance5,vm-instance6
  • Use Oracle VM CLI commands: list vm, show vm name=... ( see Server, Server Pool lines in output) to get server name to migrate VMs to or admin can decide.
e.g. OVM> list vm
id:0004fb00000600004fc9500e826b6c68  name:ovm-mccoy
  id:0004fb000006000073629389b0b5c8e9  name:migrate4
id:0004fb00000600003eeb0f3e53924f7d  name:gle-troi
OVM> show vm name=gle-troi
Server = 01:fb:9a:13:8f:fe:d5:11:a2:72:1c:c1:de:07:9e:49  [ovs-pickard]
  Server Pool = 0004fb0000020000ad6d8645f920bcd2  [HQ]
  Repository = 0004fb00000300006bf4c8541f45efc6  [2TBMigration]
Id = 0004fb00000600003eeb0f3e53924f7d  [gle-troi]
 
Oracle VM Release 3.3
ovs.migrate.runningvm
  • Keyword used to select running VM Instances to migrate from one Oracle VM Server to another
  • Format is the keyword followed by colon separated OVM server instance identifier and comma separated  VM instance identifiers. Names as identifiers are preferred for simplicity.
  • The list is made up of multiple occurrences of the following keyword/value pair
  • Format: ovs.migrate.runningvm = OVS-instance1:vm-instance1,vm-instance2,vm-instance3
  • Example: ovs.migrate.runningvm = ovs-sulu:vdi-yar,migrate.4
  • Use Oracle VM CLI commands: list vm, show vm name=... ( see Server, Server Pool lines in output) to get server name to migrate VMs to or admin can decide.
e.g. Refer ovs.migrate.unassignedvm
 
Oracle VM Release 3.2
ovs.migrate.assignedvm
  • Keyword used to select running or stopped VM Instances (all assigned to servers) to migrate from one Oracle VM Server to another
  • Format is the keyword followed by colon separated OVM server instance identifier and comma separated  VM instance identifiers. Names as identifiers are preferred for simplicity.
  • The list is made up of multiple occurrences of the following keyword/value pair
  • Format: ovs.migrate.assignedvm = OVS-instance1:vm-instance1,vm-instance2,vm-instance3
  • Example: ovs.migrate.assignedvm = ovs-sulu:vdi-yar,migrate.4
  • Use Oracle VM CLI commands: list vm, show vm name=... ( see Server line in output) to get server name to migrate VMs to or admin can decide.
e.g. Refer ovs.migrate.unassignedvm
 

Oracle VM Storage Repository Migration Runbook Keywords

Oracle VM Release 3.3 and Release 3.2 manipulate storage objects differently, using different Oracle VM Manager CLI commands. This section is therefore split into two subsections, one for Oracle VM Release 3.3 and one for Oracle VM Release 3.2.
 
Oracle VM Release 3.3
 
ovs.migrate.blockrepo
  • Keyword used to select block repositories (Fibre Channel and iSCSI) to be migrated/imported to a new server pool
  • Format is Repository-device ID (i.e. LUN Page 83 ID), Server pool ID to present repository to, followed by old:new Bridge ID pairs optionally followed by |old:new physical disk page 83 ID pairs (if VMs in repository are having physical disks attached to them)
  • Example: ovs.migrate.blockrepo = 36001405a51df13ed5e3dd3667d9d08de,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e|360014057fc7ed3dd1eaad3f51d9c3cd0:36001405ad5b37b1d92e2d3f0bda815d7,360014058ac94bc4d259dd3e44dba6cd4:36001405edd88eced2188d364ed92b4d0,360014058f44d33cdcbabd3fe9d82d5dd:36001405f7ab4faad7070d30cada337d1
ovs.migrate.blockrepo = 360014058f2fa80bd6863d38e9d8cacde,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e
  • Use Oracle VM CLI commands: list serverpool (to get the server pool ID), list physicalDisk, show physicalDisk name=... (see Page83 ID line in output) to get the LUN Page83 ID and the destination server pool ID to present the repository to, or admin can decide. Use Oracle VM CLI command: list network to get the bridge (VLAN) IDs on the source and destination server pools, and note the corresponding bridge IDs from the vm.cfg files. Note physical disk Page83 IDs from vm.cfg (refer to lines similar to 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w')
Device (LUN) IDs can be checked with ls -1 /dev/mapper
 
e.g. OVM> list serverpool
id:0004fb0000020000ad6d8645f920bcd2  name:DR
OVM> list physicalDisk
id:0004fb000018000094b8ad997f379340  name:SYNOLOGY (1)
  id:0004fb00001800002bc713c427409e5a  name:SYNOLOGY (2)
  id:0004fb00001800002c5da2d1f45f35ac  name:SYNOLOGY (3)
OVM> show physicalDisk name="SYNOLOGY (1)"
 
Device Name 1 = /dev/mapper/3600140547cce070d7059d3182da9f0de
 
Page83 ID = 3600140547cce070d7059d3182da9f0de
Id = 0004fb000018000094b8ad997f379340  [SYNOLOGY (1)]
 
OVM> list network
 
id:c0a80300 name:192.168.20.0
id:10616940f5  name:192.168.21.0
id:10b365f33e  name:192.168.22.0
# grep "bridge" vm.cfg
 
vif = ['mac=00:21:f6:be:a1:2f,bridge=c0a80300']
 
# grep "phy:/dev/mapper" vm.cfg
disk = ['file:/OVS/Repositories/0004fb0000030000732541521e134825/VirtualDisks/0004fb000012000074df05323fb96e3f.img,xvda,w', 'phy:/dev/mapper/36001405ad5b37b1d92e2d3f0bda815d7,xvdb,w']
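 
The compound value format above can be decomposed mechanically. This sketch (not ROAD code) splits an ovs.migrate.blockrepo value, shortened from the example above, into its four parts: LUN Page83 ID, target server pool ID, bridge-ID remap pairs, and the optional physical-disk Page83 ID remap pairs after the "|":

```shell
# Hypothetical parse of an ovs.migrate.blockrepo value (shortened example).
value='36001405a51df13ed5e3dd3667d9d08de,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5|360014057fc7ed3dd1eaad3f51d9c3cd0:36001405ad5b37b1d92e2d3f0bda815d7'

repo_part=${value%%|*}                         # everything before the optional "|"
disk_pairs=${value#"$repo_part"}; disk_pairs=${disk_pairs#|}
lun_id=${repo_part%%,*}                        # first field: LUN Page83 ID
rest=${repo_part#*,}
pool_id=${rest%%,*}                            # second field: server pool ID
bridge_pairs=${rest#*,}                        # remainder: old:new bridge pairs
echo "LUN:     $lun_id"
echo "Pool:    $pool_id"
echo "Bridges: $bridge_pairs"
echo "Disks:   ${disk_pairs:-(none)}"
```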
 
ovs.migrate.filerepo
  • Keyword used to select file (NFS) repositories to be migrated/imported to a new server pool
  • Format is Repository name, Server pool ID to present repository to, followed by old:new Bridge ID pairs optionally followed by |old:new physical disk page 83 ID pairs (if VMs in repository are having physical disks attached to them)
  • Example: ovs.migrate.filerepo = nfs_migration1,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e|360014057fc7ed3dd1eaad3f51d9c3cd0:36001405ad5b37b1d92e2d3f0bda815d7,360014058ac94bc4d259dd3e44dba6cd4:36001405edd88eced2188d364ed92b4d0,360014058f44d33cdcbabd3fe9d82d5dd:36001405f7ab4faad7070d30cada337d1
ovs.migrate.filerepo = nfs_migration2,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e
  • Use Oracle VM CLI commands: list repository, list serverpool (to get server pool ID) or admin can decide. Use command: list network to get  bridge (vlan) IDs on source and destination server pool and note corresponding  bridge ID from vm.cfg files. Note physical disk page 83 IDs from vm.cfg ( Refer lines similar to 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w' )
e.g. OVM> list repository
id:0004fb0000030000732541521e134825  name:nfs_migration1
  id:0004fb0000030000854f30c7b209e3b6  name:nfs_migration2
id:0004fb00000300000fc9d5a05ede58a0  name:nfs_migration3
OVM> list serverpool
id:0004fb0000020000ad6d8645f920bcd2  name:HQ
OVM> list network
id:c0a80300 name:192.168.20.0
id:10616940f5  name:192.168.21.0
id:10b365f33e  name:192.168.22.0
# grep "bridge" vm.cfg
vif = ['mac=00:21:f6:be:a1:2f,bridge=10616940f5']
# grep "phy:/dev/mapper" vm.cfg
disk = ['file:/OVS/Repositories/0004fb0000030000732541521e134825/VirtualDisks/0004fb000012000074df05323fb96e3f.img,xvda,w', 'phy:/dev/mapper/36001405ad5b37b1d92e2d3f0bda815d7,xvdb,w']
 
 
Oracle VM Release 3.2
ovs.migrate.blockrepo
  • Keyword used to select block repositories (Fibre Channel and iSCSI) to be migrated/imported to a new server pool
  • Format is Repository-device ID (i.e. LUN Page 83 ID), Server pool ID to present repository to, followed by old:new Bridge ID pairs optionally followed by | old:new physical disk page 83 ID pairs (if VMs in repository are having physical disks attached to them)
  • Example: ovs.migrate.blockrepo = 36001405a51df13ed5e3dd3667d9d08de,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e|360014057fc7ed3dd1eaad3f51d9c3cd0:36001405ad5b37b1d92e2d3f0bda815d7,360014058ac94bc4d259dd3e44dba6cd4:36001405edd88eced2188d364ed92b4d0,360014058f44d33cdcbabd3fe9d82d5dd:36001405f7ab4faad7070d30cada337d1
ovs.migrate.blockrepo = 360014058f2fa80bd6863d38e9d8cacde,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e
  • Use Oracle VM CLI commands: list serverpool (to get the server pool ID), list physicalDisk, show physicalDisk name=... (see Page83 ID line in output) to get the LUN Page83 ID and the destination server pool ID to present the repository to, or admin can decide. Use Oracle VM CLI command: list network to get the bridge (VLAN) IDs on the source and destination server pools, and note the corresponding bridge IDs from the vm.cfg files. Note physical disk Page83 IDs from vm.cfg (refer to lines similar to 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w')
Device (LUN) IDs can be checked with ls -1 /dev/mapper
 
e.g. OVM> list serverpool
id:0004fb0000020000ad6d8645f920bcd2  name:DR
OVM> list physicalDisk
id:0004fb000018000094b8ad997f379340  name:SYNOLOGY (1)
  id:0004fb00001800002bc713c427409e5a  name:SYNOLOGY (2)
  id:0004fb00001800002c5da2d1f45f35ac  name:SYNOLOGY (3)
OVM> show physicalDisk name="SYNOLOGY (1)"
Name = SYNOLOGY (1)
Id = 0004fb000018000094b8ad997f379340
...
Page83 ID = 3600140547cce070d7059d3182da9f0de
...
 
OVM> list network
id:c0a80300 name:192.168.20.0
id:10616940f5  name:192.168.21.0
id:10b365f33e  name:192.168.22.0
# grep "bridge" vm.cfg
vif = ['mac=00:21:f6:be:a1:2f,bridge=c0a80300']
# grep "phy:/dev/mapper" vm.cfg
disk = ['file:/OVS/Repositories/0004fb0000030000732541521e134825/VirtualDisks/0004fb000012000074df05323fb96e3f.img,xvda,w', 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w']
 
ovs.migrate.filerepo
  • Keyword used to select file (NFS) repositories to be migrated/imported to a new server pool
  • Format is Repository name, Server pool ID to present repository to, followed by old:new Bridge ID pairs optionally followed by |old:new physical disk page 83 ID pairs (if VMs in repository are having physical disks attached to them)
  • Example: ovs.migrate.filerepo = nfs_migration1,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e|360014057fc7ed3dd1eaad3f51d9c3cd0:36001405ad5b37b1d92e2d3f0bda815d7,360014058ac94bc4d259dd3e44dba6cd4:36001405edd88eced2188d364ed92b4d0,360014058f44d33cdcbabd3fe9d82d5dd:36001405f7ab4faad7070d30cada337d1
ovs.migrate.filerepo = nfs_migration2,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5,10d788509f:10b365f33e
  • Use Oracle VM CLI commands: list repository, list serverpool (to get server pool ID) or admin can decide. Use command: list network to get  bridge (vlan) IDs on source and destination server pool and note corresponding  bridge ID from vm.cfg files. Note physical disk page 83 IDs from vm.cfg ( Refer lines similar to 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w' )
e.g. OVM> list repository
id:0004fb0000030000732541521e134825  name:nfs_migration1
  id:0004fb0000030000854f30c7b209e3b6  name:nfs_migration2
id:0004fb00000300000fc9d5a05ede58a0  name:nfs_migration3
OVM> list serverpool
id:0004fb0000020000ad6d8645f920bcd2  name:HQ
OVM> list network
id:c0a80300 name:192.168.20.0
id:10616940f5  name:192.168.21.0
id:10b365f33e  name:192.168.22.0
# grep "bridge" vm.cfg
vif = ['mac=00:21:f6:be:a1:2f,bridge=10616940f5']
# grep "phy:/dev/mapper" vm.cfg
disk = ['file:/OVS/Repositories/0004fb0000030000732541521e134825/VirtualDisks/0004fb000012000074df05323fb96e3f.img,xvda,w', 'phy:/dev/mapper/3600140512a47d37d2fc7d3db3dbe48d3,xvdb,w']
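The ovs.migrate.filerepo value packs several fields into one line, so it helps to see them split apart. The following is a minimal sketch (the parse_filerepo helper is an illustration, not part of ROAD for Oracle VM) that separates a value into its repository name, server pool ID, bridge ID pairs, and optional physical disk Page83 ID pairs, based on the format described above:

```shell
#!/bin/bash
# Sketch: split an ovs.migrate.filerepo runbook value into its parts.
# Documented format: repo,poolID,oldBridge:newBridge[,...][|oldPage83:newPage83[,...]]
parse_filerepo() {
  local value=$1
  local net=${value%%|*}            # everything before the optional '|'
  local disk_pairs=
  [[ $value == *'|'* ]] && disk_pairs=${value#*|}
  local repo pool bridge_pairs
  # 'read' leaves the remainder of the line (all bridge pairs) in the last variable
  IFS=',' read -r repo pool bridge_pairs <<<"$net"
  printf 'repository=%s\npool=%s\nbridges=%s\ndisks=%s\n' \
    "$repo" "$pool" "$bridge_pairs" "$disk_pairs"
}

parse_filerepo 'nfs_migration2,0004fb0000020000ad6d8645f920bcd2,10cac49c1b:102119a2be,10f042bc49:10616940f5'
```

Checking a runbook value this way before running a migration makes it easy to spot a missing pool ID or a mistyped bridge pair.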

 

Oracle VM Manager: ROAD for Oracle VM Logging

The following log file is automatically created upon execution of any of the ROAD for Oracle VM commands.
 
/tmp/mokum_utils.YYYYMMDD_HHMMSS.log
  • Fine-grained logging of all interaction via the Oracle VM CLI, including all console output, with timestamps
  • Default name/location: see the mokum.log.loc keyword in the runbook examples for more details
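Because each run creates a new timestamped log, locating the most recent one is a common first step when troubleshooting. A small sketch (the newest_log helper is an illustration, assuming the default log naming and location shown above):

```shell
#!/bin/bash
# Sketch: find the newest ROAD for Oracle VM log in a directory and print
# any error lines it contains. Uses the documented naming pattern
# mokum_utils.YYYYMMDD_HHMMSS.log.
newest_log() {
  ls -t "$1"/mokum_utils.*.log 2>/dev/null | head -n 1
}

log=$(newest_log /tmp)
if [ -n "$log" ]; then
  echo "Newest log: $log"
  grep -i 'error' "$log" || echo 'No error lines found.'
else
  echo 'No ROAD for Oracle VM logs found.'
fi
```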
 

Oracle VM Manager: ROAD for Oracle VM Command Start-up Processing and Exit Status

ROAD for Oracle VM commands are written in Linux Bash. All of the scripts begin by "sourcing" (i.e. including) a file called "mokum_utils_lib.sh". The "mokum_utils_lib.sh" file contains the definitions of the global variables used by all commands, as well as a shared library of common procedures for communicating with the Oracle VM CLI, result processing, file handling, logging, and argument processing. (Refer to mokum_utils_lib.sh in the "Oracle VM Manager: ROAD for Oracle VM Command Definitions and Usage" section for more details.)
 
After loading "mokum_utils_lib.sh" and testing for a successful connection with the Oracle VM CLI through the library's connect_ovm_cli function, the utility can be assured that all of its required global variables have been initialized and can begin further execution. Each command's 'main' function calls further functions to perform the command's tasks; 'main' is invoked at the very start of the command and runs through to its end.
 
If there are any errors with the format or presence of required information, whether coming from the runbooks or through functions, an error message is logged and the command ends with one of the following exit statuses:

Exit Status  Description
11           Mokum configuration file was not selected; the command exits without execution.
12           Mokum configuration file processing was incomplete in mokum_utils_lib.sh; the command exits without further execution.
13           The current Oracle VM Manager release (build) is not a supported release; the command exits without further execution.
14           Could not reach Oracle VM Manager and/or the Oracle VM Manager CLI on the specified host and port; the command exits without further execution.
15           Input values required by the script variables were not provided through the runbook; the command exits without further execution.
16           The command exits without further execution.
17           The command exits without further execution because some options were not used properly when running the command or were not passed through the runbook (used mainly in the rename disks scripts).
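When ROAD for Oracle VM commands are driven from cron jobs or monitoring, it can help to translate these statuses into messages. A minimal sketch (the describe_exit wrapper is an assumption for illustration, not part of the toolkit):

```shell
#!/bin/bash
# Sketch: map the documented ROAD for Oracle VM exit statuses to
# human-readable messages, e.g. for use in a monitoring wrapper.
describe_exit() {
  case "$1" in
    0)  echo 'Command completed successfully' ;;
    11) echo 'Mokum configuration file was not selected' ;;
    12) echo 'Mokum configuration file processing incomplete' ;;
    13) echo 'Unsupported Oracle VM Manager release (build)' ;;
    14) echo 'Oracle VM Manager and/or CLI unreachable on the specified host and port' ;;
    15) echo 'Required input values not provided through the runbook' ;;
    16|17) echo 'Exited without further command execution (check options and runbook)' ;;
    *)  echo "Unknown exit status: $1" ;;
  esac
}

# Example usage after running a ROAD for Oracle VM command:
#   ./start_vms.sh; echo "start_vms.sh: $(describe_exit $?)"
describe_exit 13
```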
 

Oracle VM Manager Silent Installations and Oracle VM Manager Availability

The ability to quickly and efficiently uninstall, install, and recover Oracle VM Manager is an essential Oracle VM lifecycle operation. ROAD for Oracle VM includes an answer file for silent Oracle VM Manager installations in the /opt/mokum/ovm-install directory. Silent Oracle VM Manager installations automate the installation process. After installation, use the ROAD for Oracle VM restore_manager.sh and restore_nice_names.sh commands to quickly recover Oracle VM Manager on a standby.
 
Note: The answer file will need to be updated with your Oracle VM Manager installation passwords.
 
We recommend having at least one dedicated Oracle VM Manager standby with ROAD for Oracle VM including all of your runbooks to be able to quickly recover from any Oracle VM Manager outage. If you have only one Oracle VM Manager instance, your standby should have Oracle VM Manager installed with the primary instance UUID. This would allow you to quickly recover Oracle VM Manager on the standby when/if the primary instance crashes. If you have more than one Oracle VM Manager instance the standby may not even have Oracle VM Manager installed.
 
Each Oracle VM Manager installation has a unique UUID. Oracle VM server pools created by an Oracle VM Manager instance are stamped with that UUID, and a server pool can only be managed by one Oracle VM Manager instance at a time: the one with the appropriate UUID. UUID-based installations with ROAD for Oracle VM give you the flexibility to quickly recover any crashed Oracle VM Manager instance without impacting running Oracle VM servers or virtual machines.
 

Oracle VM Manager Release 3.3 Silent Install and Uninstall Steps

  • Change each instance of mypassword in the /opt/mokum/ovm-install/ovm3.3.yml file to your Oracle VM Manager installation password.
  • Download the Oracle VM Manager ISO file (not source files) from the Oracle Software Delivery Cloud - Oracle Linux and Oracle VM portal, or from My Oracle Support.
    • Copy the Oracle VM Manager ISO file to a directory on the Oracle VM Manager host.
    • Log in to the Oracle VM Manager host as root, and mount the ISO file by typing:
    • mount -o loop <FILE NAME>.iso /mnt
    • Change to the /mnt directory, i.e. "cd /mnt".
  • To install Oracle VM Manager using the answer file, as root:
  • cd /mnt
  • ./runInstaller.sh --config=/opt/mokum/ovm-install/ovm3.3.yml -i install --assumeyes -u UUID
    • If you receive strange installation errors while the installer parses the /opt/mokum/ovm-install/ovm3.3.yml file, confirm that no tabs have crept into the file. YAML uses spaces, not tabs. Tabs can be converted to spaces in vim using these commands: :set tabstop=2 expandtab and then :retab.
  • NOTE: The -u UUID option is optional and is only used to specify an Oracle VM Manager UUID. For example, if the installation is done using the -u switch, the Oracle VM server pools owned by that UUID can be quickly recovered using ROAD for Oracle VM.
  • To uninstall Oracle VM Manager, as root, mount the desired ISO file, change to the mount directory, then: ./runInstaller.sh -i Uninstall -y
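Since a stray tab in the answer file is the most common cause of silent-install failures, it can be checked up front. A minimal sketch (the yaml_has_tabs helper is an illustration; the answer file path is the documented location):

```shell
#!/bin/bash
# Sketch: check the silent-install answer file for tab characters before
# running runInstaller.sh, since YAML requires spaces, not tabs.
yaml_has_tabs() {
  # Return 0 (true) if the file contains at least one tab character.
  grep -q "$(printf '\t')" "$1"
}

cfg=/opt/mokum/ovm-install/ovm3.3.yml
if [ -f "$cfg" ] && yaml_has_tabs "$cfg"; then
  echo "Tabs found in $cfg; convert them to spaces before installing."
fi
```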
 
The Oracle VM Manager runInstaller.sh script has several handy options. The options can be listed by typing:
# ./runInstaller.sh --help
Oracle VM Manager Release 3.3.3 Installer
Usage:  runInstaller.sh [options]
options
 -h, --help                       Shows this message
 -c, --config <cfgFile>   Use specified config file to do install
 -u, --uuid <uuid>         Manager UUID (install using the provided manager UUID)
 -i, --installtype <type> Install type : Install, Uninstall, Upgrade
 -y, --assumeyes           Automatically answer yes on Continue? questions
 -n, --noprereq              Ignore prerequisite checks
 -k, --cleanup                Clean up temporary config file after installation
 
Using the runInstaller.sh script with no options will prompt for each required option.
 

Appendix

When running commands I receive: /bin/bash^M: bad interpreter: No such file or directory

The ^M is a carriage return character. Linux uses the line feed character to mark the end of a line, whereas Windows uses the two-character sequence CR LF. The ROAD for Oracle VM command (the file) has ended up with Windows line endings, which makes Bash generate the "/bin/bash^M: bad interpreter: No such file or directory" message.
Solution: Edit the command (the file) with vim and use the 'fileformat' option to set the file format: open the file, enter :set ff=unix, and save to remove the ^Ms. Alternatively, use the dos2unix command, i.e. dos2unix file_name.
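The same fix can be scripted. A minimal sketch (the helper names are illustrations; strip_crlf assumes GNU sed, which is standard on Oracle Linux) that detects and removes the CR characters, equivalent to dos2unix or vim's :set ff=unix:

```shell
#!/bin/bash
# Sketch: detect and strip Windows (CR LF) line endings from a script file.
has_crlf() {
  # Return 0 (true) if the file contains carriage return characters.
  grep -q "$(printf '\r')" "$1"
}
strip_crlf() {
  # Remove trailing carriage returns in place (GNU sed).
  sed -i 's/\r$//' "$1"
}
```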
 
[1] Oracle VM 3: Using Oracle Clusterware to Protect Oracle VM Manager: http://www.oracle.com/technetwork/server-storage/vm/ovm3-clusteredmanage...
[2] Oracle's recommended Oracle VM Manager recovery process: https://docs.oracle.com/cd/E50245_01/E50251/html/vmadm-manager-backup-restore.html
[3] OVM 3.2: How to Dump/Restore VM Manager Information By Using OVMModelDump.py (Doc ID 1981708.1)