
Chapter 6: Oracle VM 2.2 SAN, iSCSI and NFS Back-end Storage Configurations

Last update 11-14-2010
Copyright © 2009 - 2012 Roddy Rodstein. All rights reserved.
 
This chapter outlines how to configure an Oracle VM 2.2 pool with Fibre Channel and iSCSI SANs and NFS back-end storage. The chapter also covers guest front-end storage options and configurations. The chapter starts with an overview of the Oracle VM storage stack, followed by an introduction to Oracle VM back-end storage options, configurations and considerations. Next, we will summarize storage administration with Oracle VM 2.2, followed by example root and extended storage repository (SR) configurations using Fibre Channel and iSCSI SANs and NFS storage arrays. The chapter concludes with a review of Oracle VM guest front-end storage options and configurations.
 
An Oracle VM storage solution consists of three distinct layers. Each layer has its own unique requirements, configurations, dependencies and features. The first layer is the storage array, which is referred to as back-end storage. Oracle VM supports local storage, Fibre Channel and iSCSI SANs and NFS back-end storage. The second layer is the server layer, which consists of the Oracle VM server storage configurations and the virtual machine file system, i.e. the Oracle Cluster File System 2 (OCFS2) or NFS. Oracle VM supports a wide variety of configurations for Fibre Channel and iSCSI SANs and NFS storage arrays. The third layer is the guest front-end storage, which consists of multiple guest storage and driver options.
 
Note: Oracle VM supports both local and shared back-end storage. Local storage refers to a file system that can only be accessed by a single Oracle VM server. This chapter covers shared back-end storage supporting a clustered multi server pool environment, not local storage.
 
Figure 1 shows a high-level overview of the three layers of the storage stack with a virtual machine running on an Oracle VM server, connected to a storage array. At the bottom of the stack is the storage array. The storage array layer is where the physical disks are managed and presented to the Oracle VM pool members as logical disks. Above the storage array is the server layer. The server layer is where the storage configurations and the OCFS2 or NFS virtual machine file system (the cluster stack) are managed. At the top of the stack is the virtual machine layer. The virtual machine layer is where virtual machine storage is presented to the virtual machine by the Oracle VM server.
 
Figure 1 shows the Oracle VM storage stack.
 
When designing an Oracle VM virtual environment, one of the most important considerations is the back-end storage. There are many back-end storage options, ranging from local storage to Fibre Channel and iSCSI SANs and NFS. Each back-end storage option has its own capacity, performance and availability features. The back-end storage is where you store and run the virtual machines. Oracle VM supports the Oracle Cluster File System 2 (OCFS2) and NFS on the back-end storage to store and run the virtual machines. The OCFS2 cluster file system and NFS both have their own unique management, performance and availability features.
 
The next section will review the OCFS2 cluster file system, NFS and the cluster stack. The goal of the OCFS2 section is to provide an overview of the architecture, configurations, dependencies and features of the virtual machine file system at the server layer of the storage stack. Understanding the architecture, configurations, dependencies and features at the server layer will allow you to design a manageable back-end storage solution for Oracle VM.
 
Note: For the remainder of this chapter, the terms “pool” and “cluster” should be considered to be interchangeable.
 
The Oracle Cluster File System 2 (OCFS2) is a general-purpose journaling file system developed by Oracle. Oracle released OCFS2 under the GNU General Public License (GPL), version 2. The OCFS2 source code and its tool set are part of the mainline Linux kernel as of version 2.6. The OCFS2 source code and its tool set can be downloaded from kernel.org and from the Unbreakable Linux Network.
 
Note: OCFS2 is not integrated with or supported on top of any volume manager, such as LVM, to manage the back-end block storage. Fibre Channel and iSCSI partitions must be provisioned at static sizes, i.e. partition sizes cannot be changed once a partition is formatted with OCFS2. Many customers try to use LVM to manage the back-end block storage for OCFS2. LVM is not cluster aware, so changes made to the back-end block storage by LVM will not be propagated to the OCFS2 file system. The Oracle VM pool members would continue to write to the old volume layout, and corruption will occur.
 
OCFS2 has two components, a kernel component and a user-space component. The kernel component consists of the file system and the cluster stack. The user-space component consists of the utilities to manage the file system and the cluster stack.
 
A slightly modified version of OCFS2 (o2dlm) is bundled with Oracle VM. The OCFS2 file system and cluster stack are installed and configured as part of an Oracle VM server installation. The o2cb service manages the cluster stack and the ocfs2 service manages the OCFS2 file system. The o2cb cluster service is a set of modules and in-memory file systems that manage the ocfs2 file system service.
 
Once a server pool is created using Oracle VM Manager, two cluster configuration files are shared across the server pool that maintain the cluster layout and cluster timeout configurations. The /etc/ocfs2/cluster.conf file maintains the cluster layout and the /etc/sysconfig/o2cb file maintains the cluster timeouts. Both configuration files are read when the cluster stack starts and their contents are populated into configfs: the list of nodes in the /etc/ocfs2/cluster.conf file is communicated to the in-kernel node manager, along with the resource used for the heartbeat to the in-kernel heartbeat thread.
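The next example shows a minimal /etc/ocfs2/cluster.conf layout for a hypothetical two node pool named ocfs2; the node names, IP addresses and node numbers are placeholders and will differ in your environment.
 
node:
        ip_port = 7777
        ip_address = 192.168.4.5
        number = 0
        name = ovs1
        cluster = ocfs2
 
node:
        ip_port = 7777
        ip_address = 192.168.4.6
        number = 1
        name = ovs2
        cluster = ocfs2
 
cluster:
        node_count = 2
        name = ocfs2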
 
The ovs-agent, which is also installed and configured by default, is responsible for propagating the /etc/ocfs2/cluster.conf file to all of the pool members. The ovs-agent is an Oracle VM server service that is used for centralized pool management, orchestrated by Oracle VM Manager or the Oracle VM Management Pack. Each time an ovs-agent starts or stops, it updates the pool status, which is managed by the master pool agent. The master pool agent updates the pool membership status and then propagates an up-to-date /etc/ocfs2/cluster.conf file to all of the pool’s ovs-agents.
 
An Oracle VM server must be online to be in an OCFS2 cluster. Once the cluster is online, each pool member starts a process, o2net. The o2net process creates TCP/IP intra-cluster node communication channels on port 7777 and sends regular keepalive packets to each node in the cluster to validate that the nodes are alive. The intra-cluster node communication uses the Oracle VM management network. The Oracle VM management network is selected during the Oracle VM server installation. If a pool member falls off the network and the keepalive connection becomes silent, the server will self-fence. Fencing forcefully removes dead servers from a pool to ensure that active servers are not obstructed from accessing a fenced server’s cluster resources.
 
Along with the keepalive packets that check for node connectivity, the cluster stack also employs a disk heartbeat check. o2hb is the process responsible for the disk heartbeat component of the cluster stack, which actively monitors the status of all pool members. The heartbeat system uses a file on the OCFS2 file system to which each pool member periodically writes a block, along with a time stamp. The time stamps are read by each pool member and are used to check if a pool member is alive or dead. If a pool member’s block stops getting updated, the server is considered dead. When a server dies, the server gets fenced. Fencing forcefully removes the dead pool member from the pool to ensure that active pool members are not obstructed from accessing the fenced pool member’s resources.
 
Another important OCFS2 component is the distributed lock manager. The distributed lock manager (o2dlm) tracks all the locks in the cluster, including lock ownership and lock status. Cluster locking is added at the lowest level, in the xend code. The locking method is defined in the xend-config.sxp file, (xend-domains-lock-path /opt/ovs-agent-2.3/utils/dlm.py). All access methods must take a lock, for example, Oracle VM Manager, xm and the XenAPI. The DLM is also used for Oracle VM HA, which relies on the cluster stack to validate pool member status for HA purposes. For example, as pool members boot, reboot and restart, pool membership status will change across the pool.
 
There is also a virtual filesystem interface (dlmfs) that allows user space processes to access the in-kernel distributed lock manager. dlmfs communicates locking and unlocking for pool wide locks on resources to the in-kernel distributed lock manager. The in-kernel distributed lock manager keeps track of all locks and their owners and status. The o2cb init script mounts the virtual filesystem under /dlm on each Oracle VM server.
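You can confirm from dom0 that the configfs and dlmfs in-memory file systems are mounted; the output below is illustrative and the device names may vary slightly between releases.
 
# mount | grep -E "configfs|dlmfs"
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)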
 
To provide OCFS2 functionality with an NFS storage repository, Oracle VM uses a hidden OCFS2 file-backed block device that facilitates the use of the OCFS2 distributed lock manager (DLM) with NFS. The ability to use the OCFS2 distributed lock manager with OCFS2 and NFS allows Oracle VM to monitor both OCFS2 and NFS storage repositories with the same interface.
 
Table 1 shows the OCFS2 cluster service stack.
 
In Kernel Node Manager (NM): The in-kernel node manager tracks all of the pool members listed in the /etc/ocfs2/cluster.conf file.
Network and Storage Heartbeat (HB): The network and storage heartbeat dispatches up/down notifications when pool members join or leave the cluster.
TCP/IP: The TCP/IP protocols handle the communication between pool members.
Distributed Lock Manager (DLM): The DLM tracks locks in the pool, including lock ownership and lock status.
configfs: configfs communicates the list of pool members to the in-kernel node manager. configfs also communicates the heartbeat resource to the in-kernel heartbeat thread. configfs mounts under /sys/kernel/config.
dlmfs: dlmfs communicates locking and unlocking for pool wide locks on resources to the in-kernel distributed lock manager. The in-kernel distributed lock manager keeps track of all locks and their owners and status. The dlmfs user space virtual filesystem interface mounts under /dlm.
 
Now that we have reviewed the components of the OCFS2 file system and cluster stack, let’s see how OCFS2 works together with Oracle VM. 
 
When an Oracle VM 2.2 server boots, the o2cb and ocfs2 services are started, which brings up the OCFS2 cluster stack. Once the OCFS2 cluster stack is online, the ovs-agent informs the pool master that the node is online, and the pool master updates the nodemap file with the node’s online status. The ovs-agent then queries the pool master and pulls down an up-to-date /etc/ocfs2/cluster.conf configuration. Finally, the ovs-agent mounts the root and any extended repositories and checks that /OVS is symlinked correctly.
 
When a pool member stops, starts or dies, the pool master attempts to take an exclusive (EX) lock for the dead pool member’s resources. The master agent then updates the nodemap file to monitor the aliveness of all active pool members. Whenever the pool membership status changes, the master agent will recreate the cluster.conf file and propagate the changes to all of the pool members.
 
The next section will review the OCFS2 user-space management utilities and commands.
 
OCFS2 has a full suite of utilities to manage the OCFS2 file system and the cluster stack.
 
Table 2 lists the OCFS2 file system utilities that are available in dom0.
 
mkfs.ocfs2: mkfs.ocfs2 is used to format an OCFS2 file system on a device. mkfs.ocfs2 requires the O2CB cluster service to be up.
tunefs.ocfs2: tunefs.ocfs2 is used to manage OCFS2 file system parameters, including the volume label, the number of node slots and the journal size for all node slots.
mounted.ocfs2: mounted.ocfs2 detects and lists all OCFS2 volumes on an Oracle VM server.
fsck.ocfs2: fsck.ocfs2 checks and repairs the OCFS2 file system.
debugfs.ocfs2: debugfs.ocfs2 is used to query the state of the OCFS2 file system for debugging.
Table 3 lists the commands to manage the o2cb services (the cluster stack).
 
/etc/init.d/o2cb status: Reports whether the o2cb services are loaded and mounted.
/etc/init.d/o2cb load: Loads the O2CB modules and in-memory file systems.
/etc/init.d/o2cb online ocfs2: Onlines the cluster named ocfs2. The default name for the Oracle VM OCFS2 cluster is ocfs2 and the cluster name is defined in the cluster.conf file. At least one pool member must be active for the cluster to be online.
/etc/init.d/o2cb offline ocfs2: Offlines the cluster named ocfs2.
/etc/init.d/o2cb unload: Unloads the O2CB modules and in-memory file systems.
/etc/init.d/o2cb start ocfs2: Starts the cluster named ocfs2 by loading o2cb and onlining the cluster. At least one pool member must be active for the cluster to be online.
Next, we will review Oracle VM local storage, Fibre Channel and iSCSI SANs and NFS storage repositories.
 
A default Oracle VM 2.2 server installation creates a “local” OCFS2 virtual machine file system that is mounted under /var/ovs/mount/UUID and linked to /OVS. Using a local storage repository restricts pool membership to “one” Oracle VM server without Live Migration or HA functionality. To increase the capacity of an Oracle VM pool past one Oracle VM server, the addition of a shared back-end storage repository is required.
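The next example, using a hypothetical UUID, shows how a default local repository appears from dom0: the OCFS2 file system is mounted under /var/ovs/mount/UUID and /OVS is a symbolic link to that mount point.
 
# ls -ld /OVS
lrwxrwxrwx 1 root root 47 Jan 23 16:50 /OVS -> /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D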
 
An Oracle VM storage repository can consist of “one” large repository, commonly referred to as “a root repository” or a root repository with multiple extended sub repositories. Oracle VM 2.x does not have volume management, so adding storage to a root repository volume will not grow the root repository. The only option to grow an Oracle VM 2.x storage repository is to add sub repositories beneath the root repository. A best practice is to provision one or more larger repositories to avoid the management overhead of numerous sub repositories.
 
Tip: In general, you should consider provisioning at least 30% to 50% more storage for your Oracle VM storage repositories than the expected size.
 
Configuring an Oracle VM pool’s storage repository is a multi step process. Once the back-end storage is provisioned, the pool master must be connected to the storage from dom0. Next, all of the Oracle VM servers that will be added to the pool should be connected to the storage, again from dom0. Finally, all of the Oracle VM servers should be added to the pool using Oracle VM Manager or the Oracle VM Management Pack. Once the pool has multiple servers, virtual machines can start on and migrate to any server in the pool.
 
To add storage to an Oracle VM storage repository, the first step is to provision the storage. Next, connect the pool master, followed by each pool member, to the storage using the /opt/ovs-agent-2.3/utils/repos.py script with the -n (new) followed by the -i (initialize, aka mount) switches, to add and then mount the sub storage repository. Finally, the new mount point in /var/ovs/mount/UUID needs to be linked to /OVS/UUID, by typing “ln -nsf /var/ovs/mount/<UUID> /OVS/<UUID>”, again from dom0. The end result is a root repository with an “extended” sub repository mounted under /var/ovs/mount/UUID, linked to /OVS/UUID. The Oracle VM agents will automatically place resources such as virtual machines, templates, or ISO files on the root or sub repository with available space.
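The following sketch summarizes those commands as they would be run from dom0; the device path /dev/mapper/mpath1 is hypothetical and the <UUID> placeholders stand for the UUID reported by repos.py.
 
# /opt/ovs-agent-2.3/utils/repos.py -n /dev/mapper/mpath1
[ NEW ] <UUID> => /dev/mapper/mpath1
# /opt/ovs-agent-2.3/utils/repos.py -i
*** Storage repositories initialized.
# ln -nsf /var/ovs/mount/<UUID> /OVS/<UUID>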
 
Figure 2 shows a root storage repository.
 
Figure 3 shows a root storage repository and an extended sub repository.
 
Oracle VM 2.2 uses the /opt/ovs-agent-2.3/utils/repos.py script to configure storage repositories and a local Berkeley DB to save the storage repository configurations. The Oracle VM agent is also responsible for mounting and linking storage repositories when an Oracle VM server boots or restarts. As a result, you will not see entries in /etc/fstab for any Oracle VM storage repositories. Oracle VM storage repository configurations are saved in a local Berkeley DB or in a shared Berkeley DB in the root storage repository.
 
Oracle VM root and extended storage repositories all share the same directory structure. Oracle VM’s OCFS2 file system, clusterstack, repos.py script, Oracle VM agent, Oracle VM Manager as well as the Oracle VM Management Pack are wired to use the default storage repository directory structure.
 
The following example shows the Oracle VM storage repository directory structure including a brief explanation of each directory.
 
/OVS                        (Root directory)
 | B47E850ABA50460882B30645CF051619 (UUID of an extended file system)
 | iso_pool                (ISO files storage, requires VT chip extensions)
 | lost+found             (The lost and found directory)
 | publish_pool         (Public virtual machine storage)
 | running_pool        (Running virtual machine storage)
 | seed_pool             (Virtual Machine template storage)
 | sharedDisk            (Shared virtual disk storage)
 
The next example shows the storage repository directory structure of an extended storage repository.
 
 / B47E850ABA50460882B30645CF051619 (UUID of an extended file system)
 | iso_pool                (ISO files storage, requires VT chip extensions)
 | lost+found             (The lost and found directory)
 | publish_pool         (Public virtual machine storage)
 | running_pool        (Running virtual machine storage)
 | seed_pool             (Virtual Machine template storage)
 | sharedDisk            (Shared virtual disk storage)
 
Now that we know all about the OCFS2 file system, cluster stack and the storage repository directory structure, we will turn our attention to Oracle VM storage administration. As discussed in the Oracle VM Storage Stack section, there are three distinct layers of an Oracle VM storage solution. The first layer is the storage array, which is referred to as back-end storage. The second layer is the server layer, which consists of the Oracle VM server storage configurations and the virtual machine file system. The third layer is the guest front-end storage, which consists of multiple guest storage and driver options.
 
The following sections will review storage administration at each layer of the storage stack. 
 
Oracle VM storage administration is done at the storage array layer. Oracle VM Manager or the Oracle VM Management Pack is responsible for pool creation, not for storage repository management. For example, storage repository provisioning, snapshotting, replication and monitoring, as well as backup and restoration, are performed at the storage array layer.
 
Storage array configurations and storage best practices are vendor specific and outside the scope of this document. Please consult your storage administrators, storage vendor and application owners to help develop a storage solution that meets your business requirements. This section will review Oracle VM specific storage array considerations.
 
Administrators do not always have the luxury of designing the best storage solution for their environment. Ultimately management makes the decisions and administrators make the best of the equipment they get. Ideally, we would like to design a storage solution that allows us to provision tiered storage for different workloads. Each workload, for example RAC, Fusion Middleware or E-Business Suite, has different requirements, so depending on the workload, back-end disk configurations and guest front-end disk configurations will affect the performance of the workloads. The only way to validate the best configuration for a workload is to benchmark the workload using a variety of back-end and front-end configurations. Once you know which configurations provide the best performance for a given workload, it is time to provision and configure the back-end and front-end storage accordingly.
 
Swap is another storage array layer component that requires careful consideration. The best practice is to add enough RAM to a guest, and to tune the database or application workload, to minimize swapping altogether. If some swapping is necessary, placing the guests’ swap files on the Oracle VM server’s local disk will offer better performance than hosting a guest’s swap file on a SAN. Paging over a SAN in parallel with swap traffic from other guests can easily contribute to an I/O bottleneck. Swap traffic from other guests is especially bad when a common set of physical LUNs is provisioned as swap space for many guests. If several guests load up and start swapping heavily, all the guests on that storage will grind to a halt waiting for the saturated LUNs to respond.
 
Please note that placing a guest’s swap file on an Oracle VM server’s local disk will eliminate the ability to use Live Migration.
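To gauge whether a guest actually needs more RAM, watch the si (swap in) and so (swap out) columns inside the guest; sustained non-zero values indicate active swapping. A simple check, run inside the guest, samples every five seconds:
 
# vmstat 5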
 
Identify a backup and restoration strategy for the guests. If the storage array does not offer a suitable guest backup and restoration solution at the storage array layer consider using an OS agent based backup and restoration solution.
 
Configure the Fibre Channel and iSCSI multi-pathing using dm-multipath. Installing 3rd party SAN connectivity software in dom0 is not supported by Oracle.
 
Guest virtual disks stored on Oracle VM 2.2 and later OCFS2 file systems use sparse files and unwritten extents by default. When using sparse files and unwritten extents, a guest virtual disk file grows proportionally to the number of writes to the disk by the guest, so that large portions of the unused disk do not consume space.
 
The advantage of using sparse files is that storage is allocated only when needed, which reduces the time it takes to create the virtual disk files and saves disk space.
 
The disadvantage of using sparse files is that file system free space reports may be misleading. Since storage is allocated only when needed, large portions of unused disk, i.e. the sparse zero sections, have not yet been written to disk, so the reported free space may not reflect the space the virtual disks will eventually consume.
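To see the effect of sparse allocation on a guest virtual disk, compare the apparent size reported by ls with the blocks actually allocated on disk as reported by du; the file path and sizes below are hypothetical.
 
# ls -lh /OVS/running_pool/myguest/System.img
-rw-r--r-- 1 root root 12G Jan 23 17:05 /OVS/running_pool/myguest/System.img
# du -h /OVS/running_pool/myguest/System.img
2.1G    /OVS/running_pool/myguest/System.img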
 
Tip: Some applications do not support copying sparse files and may copy the entire uncompressed size of the file, including the sparse sections.
 
Configuring an Oracle VM pool’s storage repository is a multi step process. Once the back-end storage is provisioned, the pool master must be connected to the storage from dom0. Oracle VM supports SAN, iSCSI and NFS back-end storage. Once the pool master is connected to the storage and an HA pool is created in Oracle VM Manager, each pool member should be configured to access the storage, and then added to the pool using Oracle VM Manager.
 
Next, we walk through example root and extended storage repository (SR) configurations using Fibre Channel and iSCSI SANs and NFS storage arrays.
 
In this section we will walk through the steps to configure Oracle VM servers using a Fibre Channel SAN storage array. All of the steps will be executed on each Oracle VM server from dom0 as root. Once all of the Oracle VM servers are configured, an HA enabled pool will be created in Oracle VM Manager with the pool master server. Next, all of the other configured Oracle VM servers will be added to the HA enabled pool.
 
Tip: An HA enabled pool will automatically add (repos.py -n and -r) and mount (repos.py -i) root and extended storage repositories for all pool members.
 
OCFS2 is not integrated with or supported on top of any volume manager, such as LVM, to manage the back-end block storage. Fibre Channel and iSCSI partitions must be provisioned at static sizes, i.e. partition sizes cannot be changed once a partition is formatted with OCFS2. The challenge with supporting volume management with OCFS2 is that the volume manager needs to be cluster-aware and integrated with the OCFS2 cluster stack. To date there are no supported volume management solutions for OCFS2. For example, many customers use LVM to manage the back-end block storage for OCFS2. LVM is not cluster aware, so changes made to the back-end block storage by LVM will not be propagated to the OCFS2 file system. The Oracle VM pool members would continue to write to the old volume layout, and corruption will occur.
 
SAN connectivity is configured using Fibre Channel HBAs with dm-multipath in dom0 to allow the Oracle VM server to access a Logical Unit (LU) using multiple paths. Oracle VM also supports boot from SAN. The "linux mpath" install option is used to boot an Oracle VM server from a SAN. By using the "linux mpath" install option, the installer will see the multipath devices and allow you to create the boot/root partitions, along with the master boot record (MBR), on the SAN. Please note that this document will not cover boot from SAN.
 
To connect an Oracle VM server to a Fibre Channel storage array, each Oracle VM server’s HBAs must be zoned and masked to the storage. Once the HBAs are zoned and masked, the next step is to configure dm-multipath to detect the LUNs which are recognized as multipath devices. Once the multipath devices are detected, we need to format the devices on the “pool master” using the mkfs.ocfs2 utility. Next, use the repos.py script to configure the storage repository. Finally, create a pool using Oracle VM Manager or the Oracle VM Management Pack by selecting the pool master server. Once the pool is created add all the other Oracle VM servers to the pool.
 
  • Create the LUN(s)
  • The HBAs must be zoned and masked to the storage.
1. All Oracle VM servers must be patched from the Unbreakable Linux Network (ULN) to ensure that the storage configurations will not be hampered by unpatched bugs.
 
2. Select an Oracle VM server that will be used as the Oracle VM pool master. After the Oracle VM pool master and all the other Oracle VM pool members meet the prerequisites outlined in the following steps, access Oracle VM Manager and create an HA enabled pool using the Oracle VM pool master server.
 
Note: An HA enabled pool automatically mounts and links root and extended storage repositories for each Oracle VM pool member that is added to a pool.
 
3. Create a multipath.conf file for the storage array. Please refer to Appendix A for example multipath.conf files.
 
4. Ensure that all the Oracle VM servers’ clocks are synchronized using NTP.
 
First, open the “/etc/ntp.conf” file by typing “vi /etc/ntp.conf” and validate that at least two available NTP server entries are listed. The next example shows two NTP server entries in an ntp.conf file.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server myntp1.com
server myntp2.com
 
Ping each NTP server listed in the ntp.conf file from each Oracle VM server to ensure network connectivity.
 
Next, type "ntpstat" on each Oracle VM server to validate the NTP configuration. The next example shows the output from typing the ntpstat command on an Oracle VM server that has its time synchronized to an NTP server with the IP address of 192.168.4.251.
# ntpstat

synchronized to NTP server (192.168.4.251) at stratum 4 time correct to within 54 ms polling server every 1024 s

Finally, validate that the time, date and time zone on each Oracle VM server as well as on the Oracle VM Manager host is synchronized by typing the "date" command.
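If ntpd is not running or is not configured to start at boot on an Oracle VM server, you can enable it as shown in the next example.
 
# service ntpd restart
# chkconfig ntpd on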
 
5. All Oracle VM servers must have consistent name resolution using DNS with both forward and reverse lookups.
 
First, open the “/etc/resolv.conf” file by typing “vi /etc/resolv.conf” and validate that two available DNS servers are listed. The next example shows two DNS servers listed in a resolv.conf file.
# vi /etc/resolv.conf
nameserver <MY DNS SERVER1 IP ADDRESS>
nameserver <MY DNS SERVER2 IP ADDRESS>
From each Oracle VM server ping each DNS server listed in the resolv.conf file to ensure network connectivity.
 
Next, validate the forward and reverse lookups for each Oracle VM pool member and the Oracle VM Manager host using the “host” command. For example, to validate server2's forward lookup from server1 type “host server2” as shown in the next example.
# host server2
server2 has address 192.168.4.6
Next, to validate server2's reverse lookup from server1 type “host 192.168.4.6” as shown in the next example.
# host 192.168.4.6
6.4.168.192.in-addr.arpa domain name pointer server2
Note: Using hosts files without DNS is not advised and may produce unpredictable results.
 
6. The Oracle VM server’s host name in the /etc/hosts file must be associated with the Oracle VM server's public IP address. If an Oracle VM pool member's host name is associated with 127.0.0.1, the cluster.conf file will be malformed and the Oracle VM pool will not be operational. The next example shows the improper syntax from an Oracle VM server's hosts file entry.
127.0.0.1               servername.com servername localhost.localdomain localhost
192.168.4.8           servername.com servername
The next example shows the proper syntax for an Oracle VM server’s hosts file entry. 
127.0.0.1               localhost.localdomain localhost
192.168.4.8           servername.com servername

7. ocfs2 network connectivity between all Oracle VM server pool members must be operational before creating a multiple server pool. Check the ocfs2 network connectivity between all Oracle VM pool members by typing "nc -zv <myoraclevmserver1> 7777". For example, if you have two Oracle VM servers named ovs1 and ovs2, from ovs1 type "nc -zv ovs2 7777". Typing "nc -zv ovs2 7777" from ovs1 should return "succeeded!". If you receive a "failed: Connection refused" message between any Oracle VM servers, something (firewall, switch, router, cable, etc.) is restricting communication between the hosts.
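The next example shows a successful check from ovs1 to ovs2; the exact wording of the output varies with the installed nc version, but it should contain "succeeded!".
 
# nc -zv ovs2 7777
Connection to ovs2 7777 port [tcp/*] succeeded!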

 
The iptables firewall on an Oracle VM server may be blocking the ocfs2 connectivity. If iptables is disabled and allowing all connections, the output from typing “iptables -L” will look like the next example. 
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
 
Chain FORWARD (policy ACCEPT)
target prot opt source destination
 
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
If typing “iptables -L” lists firewall rules, you can a) disable iptables by typing "service iptables stop && chkconfig iptables off" or b) add the first iptables rule shown below (the --dport 7777 line) to the /etc/sysconfig/iptables file, before the final REJECT rule, on all Oracle VM pool members.
 
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 7777 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
 
After you have added the above iptables rule, restart the iptables service by typing "service iptables restart".
 
8. If an Oracle VM server was originally installed with a local ocfs2 storage repository, it is necessary to remove and unmount the local ocfs2 storage repository before adding the Oracle VM server to a pool. To determine if an Oracle VM server is using a local storage repository type "/opt/ovs-agent-2.3/utils/repos.py -l" to list all configured storage repositories. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the local repository from the Oracle VM server.
 
Next, check if the local storage repository is still mounted under /var/ovs/mount/UUID. Type “mount |grep mount”, as shown in the next example to list the mounts. 
# mount |grep mount
/dev/sda3 on /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D type ocfs2 (rw,heartbeat=none)
The above example shows that the device /dev/sda3 is still mounted as the storage repository /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D. Next, unmount the OCFS2 repository by typing “umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D” as shown in the next example. 
# umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D
Tip: A default Oracle VM server installation dedicates the majority of the local disk to the OVS partition and creates a small root partition. If your Oracle VM server was installed with the default OVS partition and a small root partition, consider rebuilding the server to create a disk layout that allocates the disk space to the root partition.
 
Type “df -h” to list the size of an Oracle VM server's partitions.
 
Another consideration for small root partitions that include /var is the potential for large saves from the xendomains service. When an Oracle VM server shuts down, the xendomains service will save the state (the memory footprint of each guest) of all running guests in the /var/lib/xen/save directory, which could fill up a small root partition. If the xendomains functionality is not needed, disable it. The next example shows how to disable or edit the location of the saved xendomains files.

1. Edit /etc/sysconfig/xendomains
2. find the section:

## Type: string
## Default: /var/lib/xen/save
#
# Directory to save running domains to when the system (dom0) is
# shut down. Will also be used to restore domains from if # XENDOMAINS_RESTORE
# is set (see below). Leave empty to disable domain saving on shutdown
# (e.g. because you rather shut domains down).
# If domain saving does succeed, SHUTDOWN will not be executed.
#
XENDOMAINS_SAVE=/var/lib/xen/save

3. Clear the XENDOMAINS_SAVE path to disable saves, or point the XENDOMAINS_SAVE path to a partition with available space (see the example after these steps).
4. Restart the xendomains service by typing “service xendomains restart” to enable the new configuration.
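For example, to disable saves entirely, or to redirect them to a file system with more space (the /data/xen_save path below is purely illustrative), the XENDOMAINS_SAVE line would look like one of the following.
 
XENDOMAINS_SAVE=
XENDOMAINS_SAVE=/data/xen_save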
 
9. If an Oracle VM server has previously been added to an Oracle VM server pool, the Oracle VM server's cluster configurations will need to be cleaned before it is added to a new Oracle VM server pool. To clean an Oracle VM server's cluster configurations it is necessary to a) empty the /etc/ocfs2/cluster.conf file, b) delete and recreate the local Berkeley DB and c) run the cleanup.py script to stop the o2cb heartbeat, offline o2cb, remove the o2cb configuration file, unmount the ovs-agent storage repositories and clean up the ovs-agent local database.
 
To clear the /etc/ocfs2/cluster.conf file, type “cat /dev/null > /etc/ocfs2/cluster.conf” from dom0, as shown in the next example.
# cat /dev/null > /etc/ocfs2/cluster.conf
To remove the local Berkeley DB, first type “service ovs-agent stop”, which stops the Oracle VM agent. Next, type “rm -fr /etc/ovs-agent/db/*” to delete the Berkeley DB. Finally, type “service ovs-agent start” to start the Oracle VM agent, which also recreates a new local Berkeley DB.
 
To stop the o2cb heartbeat, offline o2cb, remove o2cb configuration file, unmount ovs-agent storage repositories and to cleanup ovs-agent local database, type "/opt/ovs-agent-2.3/utils/cleanup.py" and then type “y” as shown in the next example. 
# /opt/ovs-agent-2.3/utils/cleanup.py
This is a cleanup script for ovs-agent.
It will try to do the following:
 
*) stop o2cb heartbeat
*) offline o2cb
*) remove o2cb configuration file
*) umount ovs-agent storage repositories
*) cleanup ovs-agent local database
 
Would you like to continue? [y/N] y
Cleanup done.
Step 1: The first step is to validate that the HBAs are listed in the /sys/class/fc_host directory. The goal of this step is to record the host adapter ID number(s) and to troubleshoot any SAN connectivity issues. You can skip this step if you’re able to view the HBAs listed in the /sys/class/fc_host directory.
 
Tip: If there are no host adapters listed in the /sys/class/fc_host directory, check if the HBAs are properly zoned and masked.
 
As shown in the next example, type “ll /sys/class/fc_host” to list the host adapters.
 
# ll /sys/class/fc_host
total 0
drwxr-xr-x 3 root root 0 Oct 11 08:24 host6
drwxr-xr-x 3 root root 0 Jul 5 09:28 host7
 
The output from “ll /sys/class/fc_host” shows that there are two host adapters, host6 and host7.
 
Once you’re able to list the host adapters in the /sys/class/fc_host directory, cat each host adapter’s “port_name” file to get the host adapter ID number.
 
#cat /sys/class/fc_host/host6/port_name
0x10000000c0ffee7e
 
The above example shows that 0x10000000c0ffee7e is the host adapter ID number for host6.
 
Next, cat the host7/port_name file to get the host adapter ID number.
 
#cat /sys/class/fc_host/host7/port_name
0x10000000c0ffee7f 
 
The above example shows that 0x10000000c0ffee7f is the host adapter ID number for host7.
 
If you need to rescan the bus, echo the “/sys” filesystem as shown in the next examples.
 
#echo "- - -" > /sys/class/scsi_host/hostH/scan
 
For example.
 
#echo "- - -" > /sys/class/scsi_host/host6/scan
#echo "- - -" > /sys/class/scsi_host/host7/scan
 
We have successfully discovered the host adapter ID numbers from each Oracle VM server.
 
Step 2: Next, validate that multipath daemon is properly configured on each Oracle VM server. From dom0, type “service multipathd status”, as shown in the following example.
 
#service multipathd status
multipathd (pid 10333) is running...
 
If your multipath daemon is running, please skip to Step 3.
 
If your system’s multipath daemon is stopped, use chkconfig to configure it to start at boot, as shown in the following examples.
 
First, type “chkconfig --list multipathd” to view the multipath daemon configuration, as shown in the next example.
 
#chkconfig --list multipathd
multipathd      0:off   1:off   2:off   3:off   4:off   5:off   6:off
 
The output of “chkconfig --list multipathd” shows that the multipath daemon is not configured to run at any system run level i.e. run level 0 through 6.
 
Next, type “chkconfig multipathd on” to automatically start the multipath daemon at run levels 2, 3, 4 and 5, as shown in the next example.
 
#chkconfig multipathd on
 
Next, validate the multipathd startup configuration by typing “chkconfig --list multipathd” as shown in the next example.
 
# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
 
The output of “chkconfig --list multipathd” validates that the multipath daemon is configured to run at run level 2, 3, 4, and 5.
 
Finally, start the multipath daemon by typing “service multipathd start”, as shown in the following example.
 
# service multipathd start
Starting multipathd daemon:                                [ OK ]
 
We have successfully configured and started the multipath daemon on each Oracle VM server.
 
Step 3: Next, we will configure dm-multipath on each Oracle VM server by replacing or modifying the default /etc/multipath.conf file with a multipath.conf file crafted for your storage solution. multipath.conf settings can be vendor specific; please check with your storage vendor for a multipath.conf file for Oracle VM or RHEL 5U3. If you already have a working multipath.conf file, please skip to Step 4.
 
Each Oracle VM server has an example multipath.conf file located at /etc/multipath.conf. The example multipath.conf file should be modified or replaced with a vendor specific multipath.conf file to support your storage array. A multipath.conf file is divided into the four sections listed below.
 
1.      blacklist
2.      defaults
3.      multipaths
4.      devices
 
Next, we will review the four sections of a multipath.conf file.
 
Blacklist
The blacklist section lists devices that are to be excluded from multipath control. For example, if the server boots from local disks, i.e. sda, sdb, hda, hdb, etc., then we need to include those disks in the blacklist. The example multipath.conf file has two blacklist entries.
 
As shown below, the first blacklist entry is uncommented and will blacklist all devices.
 
blacklist {
        devnode "*"
}
 
If you are going to test the example multipath.conf file, comment the blacklist entry to allow devices to be managed by dm-multipath. The next example shows the blacklist entries commented.
 
#blacklist {
#       devnode "*"
#}
 
The second blacklist entry is commented out and shows how to blacklist WWIDs and ram, raw, loop, fd, md, dm-, sr, scd, st, and hd devices. The next example shows the second blacklist entry.
 
#blacklist {
#       wwid 26353900f02796769
#       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#       devnode "^hd[a-z]"
#}
 
The first entry, wwid 26353900f02796769, is an example that shows how to blacklist a WWID. If you need to blacklist WWIDs, add an entry for each WWID, for example.
 
wwid "3600508b40008dc480000500000670000"
wwid "3600508b40008dc480000500000640000"
wwid "3600508b40008dc480000500000610000"
 
The second line, devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*", blacklists ram, raw, loop, fd, md, dm-, sr, scd and st devices.
 
The third line, devnode "^hd[a-z]", blacklists hd (IDE) disks such as hda and hdb.
 
To test the default blacklist entry, first comment out the blacklist entry that blacklists all devices. Next, remove the wwid line, then uncomment the blacklist, devnode and closing } lines, as shown below.
 
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
}
 
You will need to restart the multipath daemon to test any new settings. The next example shows how to restart the multipath daemon.
 
 # service multipathd restart
Stopping multipathd daemon:                                [ OK ]
Starting multipathd daemon:                                [ OK ]
 
If your Oracle VM servers use a SATA or SCSI controller for local disks, the device names will be similar to sda, sdb, etc. To exclude the local disks, i.e. sda and sdb, you need to add a devnode entry for those devices. The example devnode entry below will blacklist the sda and sdb devices.
 
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^sd[a-b]$"
}
 
Defaults
The defaults section allows you to configure default settings for dm-multipath that “may” be supported by your storage array. If you determine that your storage array does not use the settings in the defaults section, use the devices section of the multipath.conf file.
 
The defaults section settings are overridden when the devices and multipaths sections are used. The default “defaults” setting in the example multipath.conf uses the “user_friendly_names yes” entry. The “user_friendly_names yes” entry allows you to use an alias instead of WWID (World Wide Identifier) names.
 
Multipath devices can be identified by a WWID or by an alias. A WWID is a unique identifier for the multipath device that does not change. Device names such as /dev/sdx and /dev/dm-x can change on reboot, so defining multipath devices by their ID is preferred. The multipath device names in the /dev/mapper directory reference LUN IDs that do not change and are user friendly, i.e. mpath0, mpath1, etc.
 
Multipaths
The multipaths section is where you map devices to a user friendly name. Each multipath entry will specify the UUID or wwid and the alias of a LUN along with path_checker variables, which will regularly check the path. The settings in the multipaths section overwrite the settings specified in the defaults and devices sections.
 
Devices
The devices section is used to define vendor specific settings. Consult your storage vendor for the entries for your storage array. If you are using multiple SAN storage systems, several device entries are necessary.
 
After changing settings in a multipath.conf file, administrators must restart the dm multipath daemon by typing service multipathd restart, as shown in the next example.
 
 # service multipathd restart
Stopping multipathd daemon:                                [ OK ]
Starting multipathd daemon:                                [ OK ]
 
To generate detailed return messages, administrators can type “multipath -ll”. The “multipath -ll” command will list all LUNs by WWID with their multipath device names and the individual paths used to create the multipath.
 
The next example shows Oracle VM’s default multipath.conf file.
 
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated
 
 
# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
        devnode "*"
}
 
## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable  multipathing on these devices, uncomment the
## following lines.
#blacklist_exceptions {
#       device {
#               vendor "IBM"
#               product "S/390.*"
#       }
#}
 
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}
##
## Here is an example of how to configure some standard options.
##
#
#defaults {
#       udev_dir                /dev
#       polling_interval        10
#       selector                "round-robin 0"
#       path_grouping_policy    multibus
#       getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#       prio_callout            /bin/true
#       path_checker            readsector0
#       rr_min_io               100
#       max_fds                 8192
#       rr_weight               priorities
#       failback                immediate
#       no_path_retry           fail
#       user_friendly_names     yes
#}
##
## The wwid line in the following blacklist section is shown as an example
## of how to blacklist devices by wwid. The 2 devnode lines are the
## compiled in default blacklist. If you want to blacklist entire types
## of devices, such as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you should use
## a wwid line. Since there is no guarantee that a specific device will
## not change names on reboot (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting specific devices.
##
#blacklist {
#       wwid 26353900f02796769
#       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#       devnode "^hd[a-z]"
#}
#multipaths {
#       multipath {
#               wwid                    3600508b4000156d700012000000b0000
#               alias                   yellow
#               path_grouping_policy    multibus
#               path_checker            readsector0
#               path_selector           "round-robin 0"
#               failback                manual
#               rr_weight               priorities
#               no_path_retry           5
#       }
#       multipath {
#               wwid                    1DEC_____321816758474
#               alias                   red
#       }
#}
#devices {
#       device {
#               vendor                  "COMPAQ "
#               product                 "HSV110 (C)COMPAQ"
#               path_grouping_policy    multibus
#               getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#               path_checker            readsector0
#               path_selector           "round-robin 0"
#               hardware_handler        "0"
#               failback                15
#               rr_weight               priorities
#               no_path_retry           queue
#       }
#       device {
#               vendor                  "COMPAQ "
#               product                 "MSA1000         "
#               path_grouping_policy    multibus
#       }
#}
 
Note: If you do not have a working multipath.conf file please reference Appendix A for example multipath.conf files.
 
Once you have a working multipath.conf file and have restarted the multipath daemon you can list the mapped devices in the /dev/mapper directory. Next we will access dom0 as root and type “ll /dev/mapper” to view the mapped devices, as shown in the following example.
 
# ll /dev/mapper
total 0
crw------- 1 root root  10, 62 Jan 23 16:44 control
brw-rw---- 1 root disk 253,  0 Jan 23 17:01 mpath0
 
The "mpath0" entry validates that the mapped device is available from dom0.
 
We can also list the mapped devices with their major and minor numbers by typing “dmsetup ls”. The minor number corresponds to the dm device name. In the following example the minor number of 0 corresponds to the multipath device /dev/dm-0.
 
#dmsetup ls 
mpath0  (253, 0)
 
We can also validate the mapped devices with the corresponding multipath devices by listing the /dev/mpath/ directory as shown in the next example.
 
# ll /dev/mpath/
total 0
lrwxrwxrwx 1 root root 7 Jan 23 17:01 mpath0 -> ../dm-0
 
To list all the storage devices and the available paths type “multipath -l”, as shown in the next example.
 
# multipath -l
mpath0 (2000b080000002369) dm-0 Pillar,Axiom 600
[size=603G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][enabled]
 \_ 7:0:0:0 sdb 8:16  [active][undef]
 \_ 7:0:1:0 sdc 8:32  [active][undef]
 \_ 8:0:0:0 sdd 8:48  [active][undef]
 
The "multipath -l" command queries sysfs and the device mapper only, it does not invoke path checkers. The "multipath -ll" gets information from all relevant sources, including path checkers.
 
In Step 3 we reviewed the syntax of a multipath.conf file and showed how to restart the multipath daemon to view the mapped devices, and the storage devices, along with the available paths.
 
Step 4: Next we will create an OCFS2 storage repository on a LUN or LUNs from “one” Oracle VM server. We will format the OCFS2 partition from the pool master.
 
The next example shows the syntax to format an OCFS2 volume.
 
#mkfs.ocfs2 -L mylabel -Tdatafiles -N8 <device>
 
In the above example, the mkfs.ocfs2 utility is used to format the device. The “-L” parameter is optional and can be used to add a descriptive label to the OCFS2 volume. The “-Tdatafiles” parameter makes mkfs.ocfs2 choose filesystem parameters optimized for that usage type. The -N parameter selects the number of node slots. The number of slots determines the number of pool members that can concurrently mount the OCFS2 volume. The OCFS2 file system can support up to 255 nodes. For example, if your Oracle VM server pool will have 20 pool members, select -N20. The slot number can later be increased or decreased using the tunefs.ocfs2 utility.
 
Next, from dom0, format an OCFS2 volume by typing “mkfs.ocfs2 -L root-sr -Tdatafiles -N16 /dev/mapper/mpath0”, as shown in the next example.
 
Substitute root-sr with your desired label name and /dev/mapper/mpath0 with the proper device path for your environment.
 
# mkfs.ocfs2 -L root-sr -Tdatafiles -N16 /dev/mapper/mpath0
mkfs.ocfs2 1.4.3
Cluster stack: classic o2cb
Filesystem Type of datafiles
Filesystem label=root-sr
Block size=4096 (bits=12)
Cluster size=1048576 (bits=20)
Volume size=497142464512 (474112 clusters) (121372672 blocks)
15 cluster groups (tail covers 22528 clusters, rest cover 32256 clusters)
Journal size=33554432
Initial number of node slots: 16
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 5 block(s)
 
After the OCFS2 volume has been formatted you can use the mounted.ocfs2 utility to detect and list the OCFS2 volume.
 
To detect and list the OCFS2 volume from the above example, type “mounted.ocfs2 -d” from dom0, as shown in the next example.
 
# mounted.ocfs2 -d
Device                FS       UUID                                                                        Label
/dev/mapper/mpath0       ocfs2 35c601e9-b0da-4950-9a90-ec0193baa205 root-sr
 
The above mounted.ocfs2 -d example lists the device name, the file system type, the UUID and the label.
 
You can also use the full (-f) mode to list the status of each OCFS2 volume.
 
# mounted.ocfs2 -f
Device                              FS     Nodes
/dev/mapper/mpath0       ocfs2 Not mounted
 
The above mounted.ocfs2 -f example lists the device name, the file system type, and the status of the node.
 
Note: Sparse files and unwritten extents are activated by default when using Oracle VM 2.2’s mkfs.ocfs2 utility. If your system was upgraded from 2.1 to 2.2, it’s necessary to enable sparse files and unwritten extents using the following procedure.
 
# umount <device>
# tunefs.ocfs2 --fs-features=sparse,unwritten <device>
 
To validate the enabled OCFS2 features, type “tunefs.ocfs2 -Q "%M %H %O\n" <device>” as shown in the next example.
 
# tunefs.ocfs2 -Q "%M %H %O\n" <device>
backup-super strict-journal-super sparse inline-data unwritten
 
We have successfully formatted the /dev/mapper/mpath0 device with OCFS2 using the mkfs.ocfs2 utility, as well as reviewed mounted.ocfs2 -d, mounted.ocfs2 -f and how to list the OCFS2 features using tunefs.ocfs2.
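If you later need to grow the pool beyond the number of node slots chosen at format time, the slot count can be raised with tunefs.ocfs2 while the volume is unmounted, as sketched below for a hypothetical 20 node pool.
 
# umount /var/ovs/mount/<UUID>
# tunefs.ocfs2 -N 20 /dev/mapper/mpath0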
 
Step 5: Next, on the pool master configure the root repository using the repos.py script. After the pool master is configured, configure all other pool members.
 
1.      From dom0 type "/opt/ovs-agent-2.3/utils/repos.py -l" to list any configured storage repositories, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -l
#
 
Typing "/opt/ovs-agent-2.3/utils/repos.py -l" should result in an empty listing. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the repository. Next, unmount the storage repository in /var/ovs/mount/UUID by typing “umount /var/ovs/mount/UUID”.
 
2.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -n /dev/mapper/mpath0" to add the new device to the list of managed devices, as shown in the next example. Substitute /dev/mapper/mpath0 with the correct device path for your environment.
 
# /opt/ovs-agent-2.3/utils/repos.py -n /dev/mapper/mpath0
[ NEW ] 002463a4-8998-4423-a797-8a1544739409 => /dev/mapper/mpath0
 
3.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -r UUID" to tag the storage repository as the root storage repository, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -r 002463a4-8998-4423-a797-8a1544739409
[ R ] 002463a4-8998-4423-a797-8a1544739409 => /dev/mapper/mpath0
 
Note: The UUID will be listed in step 2 or you can list the UUID by typing repos.py -l.
 
4.      Next, "only" on the pool master type /opt/ovs-agent-2.3/utils/repos.py -i to mount the root storage repository, as shown in the next example. This step only needs to be performed on the pool master.
 
# /opt/ovs-agent-2.3/utils/repos.py -i
*** Storage repositories initialized.
 
Note: When repos.py -i is run, the new storage repository will be mounted under /var/ovs/mount/UUID, although the new storage repository will not be linked to /OVS. The Oracle VM agent is responsible for mounting and linking storage repositories for pool members.
 
Next, validate that the storage repository has been mounted by typing “mounted.ocfs2 -f“, as shown in the next example.
 
# mounted.ocfs2 -f
Device                              FS     Nodes
/dev/mapper/mpath0       ocfs2 ovs1.sf.itnc.com
 
You can also validate the ocfs2 mounts by typing mount|grep ocfs2, as shown in the next example.
 
# mount|grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/mapper/mpath0 on /var/ovs/mount/A85D7145957842F988293FDA43F8754D type ocfs2 (rw,_netdev,heartbeat=local)
 
Typing df -h would also validate that the root storage repository is mounted, as shown in the next example.
 
# df -h
Filesystem            Size Used Avail Use% Mounted on
/dev/sda2             451G 961M 426G   1% /
/dev/sda1              99M   46M   48M 49% /boot
tmpfs                 285M     0 285M   0% /dev/shm
/dev/mapper/mpath0       463G 541M 463G   1% /var/ovs/mount/A33BF3D2E09B45F3931F830CB1A404AA
 
5.      Next, create the pool in Oracle VM Manager. When creating the pool select the Oracle VM server that was used to format the OCFS2 file system as the pool master.
6.      Next, add all of the other configured Oracle VM servers to the pool.
 
To add storage to an Oracle VM storage repository, the first step is to provision the storage. The HBAs must be zoned and masked to the storage to be able to use the LUNs. Next, connect the pool master and format the storage using the steps outlined in Step 4. Subsequently, connect the other pool members to the storage using the /opt/ovs-agent-2.3/utils/repos.py script with the -n (new) followed by the -i (initialize, aka mount) switches, to add and then mount the sub storage repository. Finally, the new mount point in /var/ovs/mount/UUID needs to be linked to /OVS/UUID, by typing “ln -nsf /var/ovs/mount/<UUID> /OVS/<UUID>”, again from dom0. The end result is a root repository with an “extended” sub repository mounted under /var/ovs/mount/UUID which is linked to /OVS/UUID. 
 
Once a pool is configured, the Oracle VM agent will automatically place resources such as virtual machines, templates, or ISO files on the storage repository with available space. The Oracle VM agent is also responsible for mounting and linking storage repositories.
 
This section will review how to configure a root and an extended iSCSI storage repository with Oracle VM 2.2. Network interface bonding as well as dm-multipath may be used with iSCSI storage to provide multiple path support with Oracle VM.
 
  • Create the LUN(s)
  • Create the masking rules
1. All Oracle VM servers must be patched from the Unbreakable Linux Network (ULN) to ensure that the storage configurations will not be hampered by unpatched bugs.
 
2. Select an Oracle VM server that will be used as the Oracle VM pool master. After the Oracle VM pool master and all the other Oracle VM pool members meet the prerequisites outlined in the following steps, access Oracle VM Manager and create an HA enabled pool using the Oracle VM pool master server.
 
Note: An HA enabled pool automatically mounts and links root and extended storage repositories for each Oracle VM pool member that is added to a pool.
 
3. Ensure that all the Oracle VM servers’ clocks are synchronized using NTP.
 
First, open the “/etc/ntp.conf” file by typing “vi /etc/ntp.conf” and validate that at least two available NTP server entries are listed. The next example shows two NTP server entries in an ntp.conf file.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server myntp1.com
server myntp2.com
 
Ping each NTP server listed in the ntp.conf file from each Oracle VM server to ensure network connectivity.
 
Next, type "ntpstat" on each Oracle VM server to validate the NTP configuration. The next example shows the output from typing the ntpstat command on an Oracle VM server that has its time synchronized to an NTP server with the IP address of 192.168.4.251.
# ntpstat

synchronized to NTP server (192.168.4.251) at stratum 4 time correct to within 54 ms polling server every 1024 s

Finally, validate that the time, date and time zone on each Oracle VM server as well as on the Oracle VM Manager host is synchronized by typing the "date" command.
 
4. All Oracle VM servers must have consistent name resolution using DNS with both forward and reverse lookups.
 
First, open the “/etc/resolv.conf” file by typing “vi /etc/resolv.conf” and validate that two available DNS servers are listed. The next example shows two DNS servers listed in a resolv.conf file.
# vi /etc/resolv.conf
nameserver <MY DNS SERVER1 IP ADDRESS>
nameserver <MY DNS SERVER2 IP ADDRESS>
From each Oracle VM server ping each DNS server listed in the resolv.conf file to ensure network connectivity.
 
Next, validate the forward and reverse lookups for each Oracle VM pool member and the Oracle VM Manager host using the “host” command. For example, to validate server2's forward lookup from server1 type “host server2” as shown in the next example.
# host server2
server2 has address 192.168.4.6
Next, to validate server2's reverse lookup from server1 type “host 192.168.4.6” as shown in the next example.
# host 192.168.4.6
6.4.168.192.in-addr.arpa domain name pointer server2
Note: Using hosts files without DNS is not advised and may produce unpredictable results.
 
5. The Oracle VM server’s host name in the /etc/hosts file must be associated with the Oracle VM server's public IP address. If an Oracle VM pool member's host name is associated with 127.0.0.1, the cluster.conf file will be malformed and the Oracle VM pool will not be operational. The next example shows the improper syntax from an Oracle VM server's hosts file entry.
127.0.0.1               servername.com servername localhost.localdomain localhost
192.168.4.8           servername.com servername
The next example shows the proper syntax for an Oracle VM server’s hosts file entry. 
127.0.0.1               localhost.localdomain localhost
192.168.4.8           servername.com servername

6. OCFS2 network connectivity between all Oracle VM server pool members must be operational before creating a multiple server pool. Check the OCFS2 network connectivity between all Oracle VM pool members by typing "nc -zv <myoraclevmserver1> 7777". For example, if you have two Oracle VM servers named ovs1 and ovs2, from ovs1 type "nc -zv ovs2 7777". Typing "nc -zv ovs2 7777" from ovs1 should return "succeeded!". If you receive a "failed: Connection refused" message between any Oracle VM servers, something (firewall, switch, router, cable, etc.) is restricting communication between the hosts.

 
The iptables firewall on an Oracle VM server may be blocking the ocfs2 connectivity. If iptables is disabled and allowing all connections, the output from typing “iptables -L” will look like the next example. 
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
 
Chain FORWARD (policy ACCEPT)
target prot opt source destination
 
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
If typing “iptables -L” lists firewall rules, you can a) disable iptables by typing "service iptables stop && chkconfig iptables off" or b) add the following iptables rule (the first line in the next example) to the /etc/sysconfig/iptables file, before the final REJECT rule, on all Oracle VM pool members.
 
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 7777 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
 
After you have added the above iptables rule, restart the iptables service by typing "service iptables restart".
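To confirm that the new rule is active, you can list the rules and re-run the port check from another pool member, as sketched below; ovs2 is a placeholder host name, and the nc command should again return "succeeded!".
 
# iptables -L -n | grep 7777
# nc -zv ovs2 7777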
 
7. If an Oracle VM server was originally installed with a local ocfs2 storage repository, it is necessary to remove and unmount the local ocfs2 storage repository before adding the Oracle VM server to a pool. To determine if an Oracle VM server is using a local storage repository type "/opt/ovs-agent-2.3/utils/repos.py -l" to list all configured storage repositories. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the local repository from the Oracle VM server.
 
Next, check if the local storage repository is still mounted under /var/ovs/mount/UUID. Type “mount |grep mount”, as shown in the next example to list the mounts. 
# mount |grep mount
/dev/sda3 on /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D type ocfs2 (rw,heartbeat=none)
The above example shows that the device /dev/sda3 is still mounted at /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D. Next, unmount the OCFS2 repository by typing “umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D” as shown in the next example.
# umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D
Tip: A default Oracle VM server installation dedicates the majority of the local disk to the OVS partition and creates a small root partition. If your Oracle VM server was installed with the default OVS partition and a small root partition, consider rebuilding the server with a disk layout that allocates the disk space to the root partition.
 
Type “df -h” to list the size of an Oracle VM servers' partitions.
 
Another consideration for small root partitions that include /var is the potential for large saves from the xendomains service. If an Oracle VM server crashes, the xendomains service will save the state (the memory footprint of each guest) of all running guests in the /var/lib/xen/save directory, which could fill up a small root partition. If the xendomains save functionality is not needed, disable it. The next example shows how to disable the saves or change the location of the saved xendomains files.

1. Edit /etc/sysconfig/xendomains
2. find the section:

## Type: string
## Default: /var/lib/xen/save
#
# Directory to save running domains to when the system (dom0) is
# shut down. Will also be used to restore domains from if # XENDOMAINS_RESTORE
# is set (see below). Leave empty to disable domain saving on shutdown
# (e.g. because you rather shut domains down).
# If domain saving does succeed, SHUTDOWN will not be executed.
#
XENDOMAINS_SAVE=/var/lib/xen/save

3. Clear the XENDOMAINS_SAVE path to disable saves, or point the XENDOMAINS_SAVE path to a partition with available space.
4. Restart the xendomains service by typing “service xendomains restart” to enable the new configuration.
 
8. If an Oracle VM server has previously been added to an Oracle VM server pool, the Oracle VM server's cluster configurations will need to be cleaned before it is added to a new Oracle VM server pool. To clean an Oracle VM server's cluster configurations it is necessary to a) empty the /etc/ocfs2/cluster.conf file, b) delete and recreate the local BerkeleyDB and c) run the cleanup.py script to stop the o2cb heartbeat, offline o2cb, remove the o2cb configuration file, unmount the ovs-agent storage repositories and clean up the ovs-agent local database.
 
To clear the /etc/ocfs2/cluster.conf file, type “cat /dev/null > /etc/ocfs2/cluster.conf” from dom0, as shown in the next example.
# cat /dev/null> /etc/ocfs2/cluster.conf
To remove the local BerkeleyDB, first type “service ovs-agent stop”, which stops the Oracle VM agent. Next, type “rm -fr /etc/ovs-agent/db/*” to delete the BerkeleyDB. Finally, type “service ovs-agent start” to start the Oracle VM agent, which also recreates a new local BerkeleyDB.
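The next example is a consolidated sketch of the agent database reset described above, run from dom0.
 
# service ovs-agent stop
# rm -fr /etc/ovs-agent/db/*
# service ovs-agent start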
 
To stop the o2cb heartbeat, offline o2cb, remove o2cb configuration file, unmount ovs-agent storage repositories and to cleanup ovs-agent local database, type "/opt/ovs-agent-2.3/utils/cleanup.py" and then type “y” as shown in the next example. 
# /opt/ovs-agent-2.3/utils/cleanup.py
This is a cleanup script for ovs-agent.
It will try to do the following:
 
*) stop o2cb heartbeat
*) offline o2cb
*) remove o2cb configuration file
*) umount ovs-agent storage repositories
*) cleanup ovs-agent local database
 
Would you like to continue? [y/N] y
Cleanup done.
Step 1: Validate that the iscsi service is running on each Oracle VM server. Access dom0 as root and check the status of the iscsi service by typing “service iscsi status”, as shown in the next example.
 
# service iscsi status
iscsid (pid 2314 2313) is running...
 
If the iscsi service is not running, start the iscsi service by typing “service iscsi start” as shown in the next example.
 
# service iscsi start
 
We have successfully validated that the iscsi service is running on each Oracle VM pool member.
 
Step 2: Next, on each Oracle VM server discover the iSCSI LUNs using the iscsiadm utility. Once the iSCSI LUNs have been discovered, if necessary, remove any entries that will not be used. Next, we will verify that the unused LUNs are removed.
 
List 1 shows the procedure to discover, remove and validate iSCSI LUNs.
1.      First, from dom0 type “iscsiadm -m discovery -t sendtargets -p iSCSI-Target-IPADDRESS”, to discover the entries from your iSCSI target. Substitute “iSCSI-Target-IPADDRESS” with the IP address or FQDN of your iSCSI target.
2.      Second, remove any unused entries by typing “iscsiadm -m node -p portal -T IQN -o delete”, for example, iscsiadm -m node -p 192.168.4.10:3260,1 -T iqn.2006-01.com.openfiler:tsn.a83c0838952c -o delete.
3.      Finally, validate that only the desired LUNs are discovered by typing “iscsiadm -m node”.
 
As shown in the next example, the output from “iscsiadm -m discovery -t sendtargets -p 192.168.4.10” lists two entries. The first entry, 192.168.4.10:3260,1 iqn.2006-01.com.openfiler:tsn.a83c0838952c will be removed. The second entry, 192.168.4.10:3260,1 iqn.2006-01.com.openfiler:tsn.db29e77712c0 will become the root repository.
 
# iscsiadm -m discovery -t sendtargets -p 192.168.4.10
192.168.4.10:3260,1 iqn.2006-01.com.openfiler:tsn.a83c0838952c
192.168.4.10:3260,1 iqn.2006-01.com.openfiler:tsn.db29e77712c0
 
Note: Discovered LUNs will appear in /proc/partitions only after restarting the iscsi service.
 
In general, if your Oracle VM server lists entries that you will not use, remove all of the unused entries.
 
To remove an unused entry, for example “iqn.2006-01.com.openfiler:tsn.a83c0838952c”, type “iscsiadm -m node -p 192.168.4.10:3260,1 -T iqn.2006-01.com.openfiler:tsn.a83c0838952c -o delete”, as shown in the next example.
 
#iscsiadm -m node -p 192.168.4.10:3260,1 -T iqn.2006-01.com.openfiler:tsn.a83c0838952c -o delete
 
The next example shows how to verify that the unused entry has been removed by typing “iscsiadm -m node".
 
# iscsiadm -m node
192.168.4.10:3260,1 iqn.2006-01.com.openfiler:tsn.aa231ffd6ef2
 
As shown in the above example only one entry is listed.
 
In Step 2 we reviewed how to discover, remove and validate iSCSI LUNs.
 
Step 3: Next, on each Oracle VM server list /proc/partitions to review each Oracle VM server’s devices. After we review the devices listed in /proc/partitions, restart the iscsi service to log in to the discovered iSCSI target. After the iscsi service is restarted, the iSCSI LUN will be listed in /proc/partitions as a new device, i.e. sdb.
 
Before we restart the iscsi service, review the devices listed on each Oracle VM server by typing “cat /proc/partitions”, as shown in the next example.
 
# cat /proc/partitions
major minor #blocks name
 
   8     0 488386584 sda
   8     1     104391 sda1
   8     2 487227352 sda2
   8     3    1052257 sda3
 
Note the sda, sda1, sda2 and the sda3 devices.
 
Next, type “service iscsi restart” to restart the iscsi service, which also logs in to the discovered LUN.
 
# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                                                  [ OK ]
Turning off network shutdown. Starting iSCSI daemon:       [ OK ]
                                                                                                [ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.db29e77712c0, portal: 192.168.4.10,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.db29e77712c0, portal: 192.168.4.10,3260]: successful
                                                                                                [ OK ]
#
 
After the iscsi service is restarted, any discovered iSCSI LUNs will be listed in /proc/partitions as a new device, as shown in the next example.
 
# cat /proc/partitions
major minor #blocks name
 
   8     0 488386584 sda
   8     1     104391 sda1
   8     2 487227352 sda2
   8     3    1052257 sda3
  8    16 485490688 sdb
 
Note that the LUN, i.e. the sdb device, is listed in /proc/partitions. Now the new device can be formatted using the mkfs.ocfs2 utility.
 
In Step 3 we reviewed /proc/partitions on each Oracle VM server. Next, we restarted the iscsi service, which logged in to the discovered iSCSI target. After the login, we validated that the new device was listed in /proc/partitions.
 
Step 4: Next, “only on the pool master” format an OCFS2 volume on the new device (the iSCSI LUN) using the mkfs.ocfs2 utility. The OCFS2 volume should be formatted “only” from one Oracle VM server, i.e. the pool master server.
 
Note: If you have already created a server pool, format the OCFS2 volume “only” on the server pool master. If you have not created a server pool in Oracle VM Manager, use the Oracle VM server that you will select as the pool master to format the OCFS2 volume.
 
The next example shows the syntax to format an OCFS2 volume.
 
#mkfs.ocfs2 -L mylabel -Tdatafiles -N8 /dev/sdx
 
In the above example, the mkfs.ocfs2 utility is used to format the device. The “-L” parameter is optional and can be used to add a descriptive label to the OCFS2 volume. The “-Tdatafiles” parameter makes mkfs.ocfs2 choose the optimal file system parameters for the device. The -N parameter selects the number of node slots. The number of slots determines the number of pool members that can concurrently mount the OCFS2 volume. The OCFS2 file system can support up to 255 nodes. For example, if your Oracle VM server pool will have 20 pool members, select -N20. The slot number can later be increased or decreased using the tunefs.ocfs2 utility. Next, from dom0, format an OCFS2 volume by typing “mkfs.ocfs2 -L root-sr -Tdatafiles -N16 /dev/sdb”, as shown in the next example.
 
Substitute root-sr with your desired label name and /dev/sdb with the proper device path for your environment.
 
# mkfs.ocfs2 -L root-sr -Tdatafiles -N16 /dev/sdb
mkfs.ocfs2 1.4.3
Cluster stack: classic o2cb
Filesystem Type of datafiles
Filesystem label=root-sr
Block size=4096 (bits=12)
Cluster size=1048576 (bits=20)
Volume size=497142464512 (474112 clusters) (121372672 blocks)
15 cluster groups (tail covers 22528 clusters, rest cover 32256 clusters)
Journal size=33554432
Initial number of node slots: 16
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 5 block(s)
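As noted above, the number of node slots can later be changed with the tunefs.ocfs2 utility. The next example is a sketch that raises the slot count to 20; the /dev/sdb device path is a placeholder, and the volume should be unmounted on all nodes before the change.
 
# tunefs.ocfs2 -N 20 /dev/sdb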
 
After the OCFS2 volume has been formatted you can use the mounted.ocfs2 utility to detect and list the OCFS2 volume.
 
To detect and list the OCFS2 volume from the above example, type “mounted.ocfs2 -d” from dom0, as shown in the next example.
 
# mounted.ocfs2 -d
Device                FS       UUID                                                           Label
/dev/sdb              ocfs2 35c601e9-b0da-4950-9a90-ec0193baa205 root-sr
 
The above mounted.ocfs2 -d example lists the device name, the file system type, the UUID and the label.
 
You can also use the full (-f) mode to list the status of each OCFS2 volume.
 
# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb              ocfs2 Not mounted
 
The above mounted.ocfs2 -f example lists the device name, the file system type, and the nodes that have the volume mounted.
 
Note: Sparse files and unwritten extents are activated by default when using Oracle VM 2.2’s mkfs.ocfs2 utility. If your system was upgraded from 2.1 to 2.2, it is necessary to enable sparse files and unwritten extents using the following procedure.
 
# umount <device>
# tunefs.ocfs2 --fs-features=sparse,unwritten <device>
 
To validate the enabled OCFS2 features, type “tunefs.ocfs2 -Q "%M %H %O\n" <device>” as shown in the next example.
 
# tunefs.ocfs2 -Q "%M %H %O\n" <device>
backup-super strict-journal-super sparse inline-data unwritten
 
We have successfully formatted the /dev/sdb volume with OCFS2 using the mkfs.ocfs2 utility, reviewed mounted.ocfs2 -d and mounted.ocfs2 -f, and listed the OCFS2 features using tunefs.ocfs2.
 
Step 5: Next, on the pool master configure the root repository using the repos script. After the pool master is configured, configure all other pool members.
 
1.      From dom0 type "/opt/ovs-agent-2.3/utils/repos.py -l" in order to list any configured storage repositories, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -l
#
 
Typing "/opt/ovs-agent-2.3/utils/repos.py -l" should result with an empty entry. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the repository. Next, unmout the storage repository in /var/ovs/mount/UUID by typing “umount /var/ovs/mount/UUID”.
 
2.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -n /dev/sdb" to add the new device to the list of managed devices, as shown in the next example. Substitute /dev/sdb for the correct device path for your environment.
 
# /opt/ovs-agent-2.3/utils/repos.py -n /dev/sdb
[ NEW ] 002463a4-8998-4423-a797-8a1544739409 => /dev/sdb
 
3.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -r UUID" to tag the storage repository as the root storage repository, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -r 002463a4-8998-4423-a797-8a1544739409
[ R ] 002463a4-8998-4423-a797-8a1544739409 => /dev/sdb
 
Note: The UUID will be listed in step 2 or you can list the UUID by typing repos.py -l.
 
4.      Next, on the pool master only, type /opt/ovs-agent-2.3/utils/repos.py -i to mount the root storage repository, as shown in the next example. This step only needs to be performed on the pool master.
 
# /opt/ovs-agent-2.3/utils/repos.py -i
*** Storage repositories initialized.
 
Note: When repos.py -i is run, the new storage repository will be mounted under /var/ovs/mount/UUID, although the new storage repository will not be linked to /OVS.
 
Next, validate that the storage repository has been mounted by typing “mounted.ocfs2 -f“, as shown in the next example.
 
# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb              ocfs2 ovs2.sf.itnc.com
 
You can also validate the ocfs2 mounts by typing mount|grep ocfs2, as shown in the next example.
 
# mount|grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdb on /var/ovs/mount/A85D7145957842F988293FDA43F8754D type ocfs2 (rw,_netdev,heartbeat=local)
 
Typing df -h will also validate that the root storage repository is mounted, as shown in the next example.
 
# df -h
Filesystem            Size Used Avail Use% Mounted on
/dev/sda2             451G 961M 426G   1% /
/dev/sda1              99M   46M   48M 49% /boot
tmpfs                 285M     0 285M   0% /dev/shm
/dev/sdb              463G 541M 463G   1% /var/ovs/mount/A33BF3D2E09B45F3931F830CB1A404AA
 
5.      Next, create the pool in Oracle VM Manager. When creating the pool select the Oracle VM server that was used to format the OCFS2 file system as the pool master.
6.      Next, add all of the other configured Oracle VM servers to the pool.
 
To add storage to an Oracle VM storage repository, the first step is to provision the storage. The storage will need to be zoned and masked to be able to use the LUNs.
Next, discover the new device by typing “iscsiadm -m node -T target --rescan”. 
 
Note: If you restart the iscsi service to detect a new LUN, the iscsi service will log out of the sessions backing the existing storage repository, which will cause the server to reboot. To avoid a reboot of your Oracle VM server, use the rescan option with the iscsiadm utility, i.e. “iscsiadm -m node -T target --rescan”.
 
Next, connect the pool master and format the storage using the steps outlined in Step 4. Then, connect the other pool members to the storage using the /opt/ovs-agent-2.3/utils/repos.py script with the -n (new) followed by the -i (initialize, aka mount) switches, to add and then mount the sub storage repository. Finally, the new mount point in /var/ovs/mount/UUID needs to be linked to /OVS/UUID by typing “ln -nsf /var/ovs/mount/<UUID>/ /OVS”, again from dom0. The end result is a root repository with an “extended” sub repository mounted under /var/ovs/mount/UUID which is linked to /OVS/UUID.
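The next example is a consolidated sketch of the extended iSCSI repository commands described above, run from dom0 on a pool member. The target IQN, the /dev/sdc device and the UUID are placeholders; substitute the values for your environment.
 
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.xxxxxxxxxxxx --rescan
# /opt/ovs-agent-2.3/utils/repos.py -n /dev/sdc
# /opt/ovs-agent-2.3/utils/repos.py -i
# ln -nsf /var/ovs/mount/<UUID>/ /OVS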
 
Once a pool is configured, the Oracle VM agent will automatically place resources such as virtual machines, templates, or ISO files on the storage repository with available space. The Oracle VM agent is also responsible for mounting and linking storage repositories.
 
This section will review how to configure a root and an extended NFS storage repository with Oracle VM 2.2.
 
  • Provision the volume(s) for the NFS share on the filer
  • Export the NFS share to the Oracle VM servers
  • The NFS share must have the “no_root_squash” option enabled (see the example after this list).
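The export itself is configured on the filer. As an illustration only, on a generic Linux NFS server a share might be exported with an /etc/exports entry similar to the following; the path and network are placeholders, and the equivalent setting on a storage appliance will differ.
 
/mnt/vol1/ovs-root   192.168.4.0/24(rw,sync,no_root_squash)
 
# exportfs -ra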

1. All Oracle VM servers must be patched from the Unbreakable Linux Network (ULN) to ensure that the storage configurations will not be hampered by unpatched bugs. 

2. Select an Oracle VM server that will be used as the Oracle VM pool master. After the Oracle VM pool master and all the other Oracle VM pool members meet the prerequisites outlined in the following steps, access Oracle VM Manager and create an HA enabled pool using the Oracle VM pool master server.
 
Note: An HA enabled pool automatically mounts and links root and extended storage repositories for each Oracle VM pool member that is added to a pool.
 
3. Ensure that all the Oracle VM servers’ clocks are synchronized using NTP.
 
First, open the “/etc/ntp.conf” file by typing “vi /etc/ntp.conf” and validate that at least two available NTP server entries are listed. The next example shows two NTP server entries in an ntp.conf file.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server myntp1.com
server myntp2.com
 
Ping each NTP server listed in the ntp.conf file from each Oracle VM server to ensure network connectivity.
 
Next, type "ntpstat" on each Oracle VM server to validate the NTP configuration. The next example shows the output from typing the ntpstat command on an Oracle VM server that has its time synchronized to an NTP server with the IP address of 192.168.4.251.
# ntpstat

synchronized to NTP server (192.168.4.251) at stratum 4 time correct to within 54 ms polling server every 1024 s

Finally, validate that the time, date and time zone on each Oracle VM server as well as on the Oracle VM Manager host is synchronized by typing the "date" command.
 
4. All Oracle VM servers must have consistent name resolution using DNS with both forward and reverse lookups.
 
First, open the “/etc/resolv.conf” file by typing “vi /etc/resolv.conf” and validate that two available DNS servers are listed. The next example shows two DNS servers listed in a resolv.conf file.
# vi /etc/resolv.conf
nameserver <MY DNS SERVER1 IP ADDRESS>
nameserver <MY DNS SERVER2 IP ADDRESS>
From each Oracle VM server ping each DNS server listed in the resolv.conf file to ensure network connectivity.
 
Next, validate the forward and reverse lookups for each Oracle VM pool member and the Oracle VM Manager host using the “host” command. For example, to validate server2's forward lookup from server1 type “host server2” as shown in the next example.
# host server2
server2 has address 192.168.4.6
Next, to validate server2's reverse lookup from server1 type “host 192.168.4.6” as shown in the next example.
# host 192.168.4.6
6.4.168.192.in-addr.arpa domain name pointer server2
Note: Using hosts files without DNS is not advised and may produce unpredictable results.
 
5. The Oracle VM server’s host name in the /etc/hosts file must be associated with the Oracle VM server's public IP address. If an Oracle VM pool member's host name is associated with 127.0.0.1, the cluster.conf file will be malformed and the Oracle VM pool will not be operational. The next example shows the improper syntax from an Oracle VM server's hosts file entry.
127.0.0.1               servername.com servername localhost.localdomain localhost
192.168.4.8           servername.com servername
The next example shows the proper syntax for an Oracle VM server’s hosts file entry. 
127.0.0.1               localhost.localdomain localhost
192.168.4.8           servername.com servername

6. OCFS2 network connectivity between all Oracle VM server pool members must be operational before creating a multiple server pool. Check the OCFS2 network connectivity between all Oracle VM pool members by typing "nc -zv <myoraclevmserver1> 7777". For example, if you have two Oracle VM servers named ovs1 and ovs2, from ovs1 type "nc -zv ovs2 7777". Typing "nc -zv ovs2 7777" from ovs1 should return "succeeded!". If you receive a "failed: Connection refused" message between any Oracle VM servers, something (firewall, switch, router, cable, etc.) is restricting communication between the hosts.

 
The iptables firewall on an Oracle VM server may be blocking the ocfs2 connectivity. If iptables is disabled and allowing all connections, the output from typing “iptables -L” will look like the next example. 
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
 
Chain FORWARD (policy ACCEPT)
target prot opt source destination
 
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
If typing “iptables -L” lists firewall rules, you can a) disable iptables by typing "service iptables stop && chkconfig iptables off" or b) add the following iptables rule (the first line in the next example) to the /etc/sysconfig/iptables file, before the final REJECT rule, on all Oracle VM pool members.
 
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 7777 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
 
After you have added the above iptables rule, restart the iptables service by typing "service iptables restart".
 
7. If an Oracle VM server was originally installed with a local ocfs2 storage repository, it is necessary to remove and unmount the local ocfs2 storage repository before adding the Oracle VM server to a pool. To determine if an Oracle VM server is using a local storage repository type "/opt/ovs-agent-2.3/utils/repos.py -l" to list all configured storage repositories. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the local repository from the Oracle VM server.
 
Next, check if the local storage repository is still mounted under /var/ovs/mount/UUID. Type “mount |grep mount”, as shown in the next example to list the mounts. 
# mount |grep mount
/dev/sda3 on /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D type ocfs2 (rw,heartbeat=none)
The above example shows that the device /dev/sda3 is still mounted at /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D. Next, unmount the OCFS2 repository by typing “umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D” as shown in the next example.
# umount /var/ovs/mount/62C757BA5E174DF7B5AB01BBAE0F765D
Tip: A default Oracle VM server installation dedicates the majority of the local disk to the OVS partition and creates a small root partition. If your Oracle VM server was installed with the default OVS partition and a small root partition, consider rebuilding the server with a disk layout that allocates the disk space to the root partition.
 
Type “df -h” to list the size of an Oracle VM servers' partitions.
 
Another consideration for small root partitions that include /var is the potential for large saves from the xendomains service. If an Oracle VM server crashes, the xendomains service will save the state (the memory footprint of each guest) of all running guests in the /var/lib/xen/save directory, which could fill up a small root partition. If the xendomains save functionality is not needed, disable it. The next example shows how to disable the saves or change the location of the saved xendomains files.

1. Edit /etc/sysconfig/xendomains
2. find the section:

## Type: string
## Default: /var/lib/xen/save
#
# Directory to save running domains to when the system (dom0) is
# shut down. Will also be used to restore domains from if # XENDOMAINS_RESTORE
# is set (see below). Leave empty to disable domain saving on shutdown
# (e.g. because you rather shut domains down).
# If domain saving does succeed, SHUTDOWN will not be executed.
#
XENDOMAINS_SAVE=/var/lib/xen/save

3. Clear the XENDOMAINS_SAVE path to disable saves, or point the XENDOMAINS_SAVE path to a partition with available space.
4. Restart the xendomains service by typing “service xendomains restart” to enable the new configuration.
 
8. If an Oracle VM server has previously been added to an Oracle VM server pool, the Oracle VM server's cluster configurations will need to be cleaned before it is added to a new Oracle VM server pool. To clean an Oracle VM server's cluster configurations it is necessary to a) empty the /etc/ocfs2/cluster.conf file, b) delete and recreate the local BerkeleyDB and c) run the cleanup.py script to stop the o2cb heartbeat, offline o2cb, remove the o2cb configuration file, unmount the ovs-agent storage repositories and clean up the ovs-agent local database.
 
To clear the /etc/ocfs2/cluster.conf file, type “cat /dev/null > /etc/ocfs2/cluster.conf” from dom0, as shown in the next example.
# cat /dev/null> /etc/ocfs2/cluster.conf
To remove the local BerkeleyDB, first type “service ovs-agent stop”, which stops the Oracle VM agent. Next, type “rm -fr /etc/ovs-agent/db/*” to delete the BerkeleyDB. Finally, type “service ovs-agent start” to start the Oracle VM agent, which also recreates a new local BerkeleyDB.
 
To stop the o2cb heartbeat, offline o2cb, remove o2cb configuration file, unmount ovs-agent storage repositories and to cleanup ovs-agent local database, type "/opt/ovs-agent-2.3/utils/cleanup.py" and then type “y” as shown in the next example. 
# /opt/ovs-agent-2.3/utils/cleanup.py
This is a cleanup script for ovs-agent.
It will try to do the following:
 
*) stop o2cb heartbeat
*) offline o2cb
*) remove o2cb configuration file
*) umount ovs-agent storage repositories
*) cleanup ovs-agent local database
 
Would you like to continue? [y/N] y
Cleanup done.
 
Step 1: On the pool master configure the root repository using the repos script. After the pool master is configured, configure all other pool members.
 
1.      From dom0 type "/opt/ovs-agent-2.3/utils/repos.py -l" to list any configured storage repositories, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -l
#
 
Typing "/opt/ovs-agent-2.3/utils/repos.py -l" should result with an empty entry. If a storage repository is listed, type "/opt/ovs-agent-2.3/utils/repos.py -d UUID" to remove the repository. Next, unmout the storage repository in /var/ovs/mount/UUID by typing “umount /var/ovs/mount/UUID”.
 
2.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -n nfsserver:/mnt/vol1/ovs-root/" to add the new share, as shown in the next example. Substitute nfsserver with the FQDN or IP address of your filer and /mnt/vol1/ovs-root/ with the path to your NFS share.
 
# /opt/ovs-agent-2.3/utils/repos.py -n 192.168.4.10:/mnt/vg-931/nfs-root/nfs-sr/
[ NEW ] ef47b8a9-620f-4ed1-aac1-7ac10f4f7fcf => 192.168.4.10:/mnt/vg-931/nfs-root/nfs-sr/
 
3.      Next, type "/opt/ovs-agent-2.3/utils/repos.py -r UUID" to tag the storage repository as the root storage repository, as shown in the next example.
 
# /opt/ovs-agent-2.3/utils/repos.py -r ef47b8a9-620f-4ed1-aac1-7ac10f4f7fcf
[ R ] ef47b8a9-620f-4ed1-aac1-7ac10f4f7fcf => 192.168.4.10:/mnt/vg-931/nfs-root/nfs-sr/
 
Note: The UUID will be listed in step 2 or you can list the UUID by typing repos.py -l.
 
4.      Next, type /opt/ovs-agent-2.3/utils/repos.py -i to mount the root storage repository, as shown in the next example. This step only needs to be performed on the pool master. When Oracle VM pool members are added to a pool the agent will mount and link the root storage repository.
 
# /opt/ovs-agent-2.3/utils/repos.py -i
*** Storage repositories initialized.
 
Note: When repos.py -i is run, the new storage repository will be mounted under /var/ovs/mount/UUID, although the new storage repository will not be linked to /OVS. Once the pool is created the Oracle VM agent will auto mount and link the storage repository.
 
5.      Next, create the pool in Oracle VM Manager. When creating the pool select the Oracle VM server that was used to configure the root storage repository as the pool master.
6.      Next, add all of the other configured Oracle VM servers to the pool.
 
To add storage to an Oracle VM storage repository, the first step is to provision the storage, i.e. create and export the new NFS share to all the Oracle VM servers. Next, connect the pool master and all the pool members to the share using the /opt/ovs-agent-2.3/utils/repos.py script with the -n (new) switch as outlined in Step 2. Next, type /opt/ovs-agent-2.3/utils/repos.py -i to mount the sub storage repository. Finally, the new mount point in /var/ovs/mount/UUID needs to be linked to /OVS/UUID by typing “ln -nsf /var/ovs/mount/<UUID>/ /OVS”, again from dom0. The end result is a root repository with an “extended” sub repository mounted under /var/ovs/mount/UUID which is linked to /OVS/UUID.
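The next example is a consolidated sketch of the extended NFS repository commands described above, run from dom0. The server name, share path and UUID are placeholders; substitute the values for your environment.
 
# /opt/ovs-agent-2.3/utils/repos.py -n nfsserver:/mnt/vol1/ovs-extended/
# /opt/ovs-agent-2.3/utils/repos.py -i
# ln -nsf /var/ovs/mount/<UUID>/ /OVS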
 
This section will review the virtual machine layer of the storage stack. The virtual machine layer is where the storage is presented to virtual machines as either a flat file, as a LUN, or as a combination of flat files and LUNs. The virtual machine storage layer is referred to as the guest front-end storage.
 
The section starts with a review of file-backed block devices and the file-backed block device driver options. The section concludes with a review of physical backed block devices.
 
A file-backed block device uses a flat file in the storage repository as the guest’s primary storage. By default, Oracle VM Manager and the Oracle VM Management Pack create a file named System.img for each guest. For example, a guest named racnode1 would have a directory such as /OVS/*_pool/xxx_racnode1/ that contains the System.img file.
 
By default, Oracle VM Manager and the Oracle VM Management Pack configure guest storage as a file-backed block device using the fast-loopback driver in dom0. You can validate that a guest is using a file-backed block device by looking at the 'disk =' directive in a guest’s vm.cfg file. Each guest has a vm.cfg file in the /OVS/*_pool/vmname/ directory. A 'file:' reference indicates the use of the fast-loopback driver.
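For example, a quick way to check which driver a guest uses is to list the 'disk =' directive from dom0, as sketched below; the guest name is a placeholder and the output will vary.
 
# grep -A2 "disk =" /OVS/running_pool/myguest/vm.cfg
disk = ['file:/OVS/running_pool/myguest/System.img,xvda,w',
]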
 
File-backed block devices can use one of two drivers: a) the default fast-loopback driver or b) the blktap driver. In certain circumstances, the blktap driver may provide better performance than the fast-loopback driver. Oracle VM Manager and the Oracle VM Management Pack do not support editing the file-backed block device driver settings. To test the blktap driver you must edit the desired guest’s vm.cfg file by hand, changing the 'file:' prefix in the 'disk =' directive to 'tap:aio:'.
 
The next example shows a vm.cfg file from an 11g Oracle VM template that is configured with two virtual disks using file-backed block devices with the fast-loopback driver. The first virtual disk defined in the 'disk =' directive contains the OS, and the second virtual disk is an ASM disk.
 
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/v52x8611g1/System.img,xvda,w',
'file:/OVS/running_pool/v52x8611g1/oracle11g_x86_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x8611g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'f8725f79-c6c8-26d4-a51e-1f32cf010c84'
vcpus = 2
vif = ['bridge=xenbr0,mac=00:16:3E:56:20:63,type=netfront']
vif_other_config = []
 
The next example shows the same vm.cfg file from the above 11g Oracle VM template, configured using file-backed block devices with the blktap driver. The first virtual disk defined in the 'disk =' directive contains the OS, and the second virtual disk is an ASM disk.
 
bootloader = '/usr/bin/pygrub'
disk = ['tap:aio:/OVS/running_pool/v52x8611g1/System.img,xvda,w',
'tap:aio:/OVS/running_pool/v52x8611g1/oracle11g_x86_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x8611g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'f8725f79-c6c8-26d4-a51e-1f32cf010c84'
vcpus = 2
vif = ['bridge=xenbr0,mac=00:16:3E:56:20:63,type=netfront']
vif_other_config = []
 
You can quickly benchmark guest front-end performance by typing “hdparm -tT <device>” within the guest. Replace <device> with the device you would like to benchmark. Type df to list the devices.
 
For example, if a guest is using the default file-backed block device with the fast-loopback driver, as root from the guest console type “hdparm -tT <device>” to gather cached reads and buffered disk reads statistics, as shown in the next example.
 
# hdparm -tT /dev/xvda2
 
/dev/xvda2:
 Timing cached reads:   26968 MB in 1.99 seconds = 13536.94 MB/sec
 Timing buffered disk reads: 148 MB in 3.03 seconds = 48.87 MB/sec
 
Record the data from “hdparm -tT <device>” and power off the guest. Once the guest is powered off, edit the guest’s vm.cfg file and replace the 'file:' prefix with 'tap:aio:'. Power on the guest and run the same “hdparm -tT <device>” command to gather the cached reads and the buffered disk reads statistics with the blktap driver.
 
The second guest storage option is a physical backed block device. A physical backed block device offers the lowest overhead and best performance of the two Oracle VM guest storage options. In most cases, a physical backed block device will be the best option for high I/O workloads.
 
For example, Oracle’s certified Oracle VM RAC configuration uses physical backed block devices to provide the best performance for RAC. To use a physical backed block device, you export a physical block device e.g. a LUN from dom0 to the guest, as a virtual block device.
 
As of this writing, neither Oracle VM Manager nor the Oracle VM Management Pack can manage physical backed block devices. To use physical backed block devices with Oracle VM, you need to edit the guest’s vm.cfg file manually.
 
Note: Oracle VM 2.2 Manager can manage physical multipath devices as Shared Disks.
 
The next example shows a vm.cfg file that uses physical backed block devices. The first disk defined in the 'disk =' directive contains the OS, and the second disk is an ASM disk.
 
bootloader = '/usr/bin/pygrub'
disk = ['phy:/dev/sdu,xvda,w!', 'phy:/dev/sdv,xvdb,w!',
]
memory = '2048'
name = 'v52x8611g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'f8725f79-c6c8-26d4-a51e-1f32cf010c84'
vcpus = 2
vif = ['bridge=xenbr0,mac=00:16:3E:56:20:63,type=netfront']
vif_other_config = []
The next example shows a vm.cfg file from a guest that uses a file backed block device for the OS and eight physical backed block devices.
bootloader = '/usr/bin/pygrub'
disk = ['file:/var/ovs/mount/4C42F6B3FCB841499D595C0CC36D7695/running_pool/266_lax005112pvm07/System.img,xvda,w',
'phy:/dev/sda,/dev/xvda3,w!',
'phy:/dev/sdb,/dev/xvda4,w!',
'phy:/dev/sdc,/dev/xvda5,w!',
'phy:/dev/sdd,/dev/xvda6,w!',
'phy:/dev/sde,/dev/xvda7,w!',
'phy:/dev/sdf,/dev/xvda8,w!',
'phy:/dev/sdg,/dev/xvda9,w!',
'phy:/dev/sdh,/dev/xvda10,w!',
]
keymap = 'en-us'
memory = '4096'
name = 'oel5u5pv07'
on_crash = 'restart'
on_reboot = 'restart'
uuid = '4fa8e2f0-4514-5169-467a-7fd64fe62147'
vcpus = 2
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']
vif = ['bridge=xenbr0,mac=00:16:3E:69:E4:04,type=netfront',
'bridge=xenbr1,mac=00:16:3E:61:DE:5E,type=netfront',
]
vif_other_config = []
 
Configuring a physical backed block device is a manual, multi-step process.
 
List 5 shows the steps to configure a physical backed block device.
  1. The first step is to create a guest using Oracle VM Manager. Please note that after the guest is created, the default file backed block device, i.e. the System.img file, can be used, or deleted and replaced with a physical backed block device.
  2. Provision a disk for the guest, i.e. one or more LUNs. The Oracle VM servers must be zoned and masked to be able to access the storage.
  3. Configure the storage in each dom0. For example, if the guest will run on 4 servers within a pool, the storage must be configured in dom0 on all four servers.
  4. Once the LUN is presented in dom0, export the LUN to the guest by editing the vm.cfg file to use a physical backed block device, as shown in the above examples. A short verification sketch follows this list.
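As a verification sketch, before editing the vm.cfg file you can confirm that the LUN is visible in each dom0 that can host the guest; the exact device names depend on your environment.
 
# cat /proc/partitions
# multipath -ll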
Appendix A lists six multipath.conf examples from production Oracle VM systems. Appendix A starts with two example multipath.conf files for EMC CLARiiON and SYMMETRIX, followed by multipath.conf example files for Pillar Data Axiom 600, HP EVA SAN, IBM 2145 SAN and 3PAR 224MU6 storage arrays.
 
1. EMC CLARiiON and SYMMETRIX
Example 1 – Quick and Simple 
#vi /etc/multipath.conf
devnode_blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
}
defaults {
        user_friendly_names yes
}
 
2. EMC CLARiiON and SYMMETRIX
Example 2 – Verbose  
#vi /etc/multipath.conf
## This is the /etc/multipath.conf file recommended for
## EMC storage devices.
##
## OS : RHEL5
## Arrays : CLARiiON and SYMMETRIX
## Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
}
## The blacklist is the enumeration of all devices that are to be
## excluded from multipath control
devnode_blacklist {
## Replace the wwid with the output of the command
## 'scsi_id -g -u -s /block/internal scsi disk name'
## Enumerate the wwid for all internal scsi disks.
## Optionally, the wwid of VCM database may also be listed here.
# wwid 20010b9fd080b7321
devnode "sd[a]$"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)0-9*"
devnode "^hda-z"
devnode "^cciss!c0-9d0-9*"
}
devices {
## Device attributes for EMC SYMMETRIX
device {
vendor "EMC "
product "SYMMETRIX"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_selector "round-robin 0"
features "0"
hardware_handler "0"
failback immediate
}
## Device attributes for EMC CLARiiON
device {
vendor "DGC"
product "*"
path_grouping_policy group_by_prio
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/sbin/mpath_prio_emc /dev/%n"
hardware_handler "1 emc"
features "1 queue_if_no_path"
no_path_retry 300
path_checker emc_clariion
failback immediate
}
}
 
3. Pillar Data Axiom 600
 
First add the following lines to /etc/modprobe.conf
alias qla2100 qla2xxx
alias qla2200 qla2xxx
alias qla2300 qla2xxx
alias qla2322 qla2xxx
alias qla2400 qla2xxx
 
#vi /etc/multipath.conf
blacklist {
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z]"
       devnode "^cciss!c[0-9]d[0-9]*"
       wwid
}
devices {
        device {
                vendor                  "Pillar"
                product                 "Axiom 600"
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_alua %n"
                features                "0"
                hardware_handler        "0"
                path_grouping_policy    group_by_prio
                rr_weight               priorities
                rr_min_io               1000
                path_checker            tur
        }
        device {
                vendor                  "Pillar"
                product                 "Axiom 500"
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_alua_pillar %n"
                features                "0"
                hardware_handler        "0"
                path_grouping_policy    group_by_prio
                rr_weight               priorities
                rr_min_io               1000
                path_checker            tur
        }
}
 
4. HP EVA SAN
# vi /etc/multipath.conf
defaults {
        user_friendly_names yes
}
 
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z]*"
        wwid "*"
}
 
# Make sure our multipath devices are enabled.
 
blacklist_exceptions {
        wwid "3600508b40008dc480000500000550000"
        wwid "3600508b40008dc4800005000005b0000"
        wwid "3600508b40008dc4800005000005e0000"
        wwid "3600508b40008dc480000500000670000"
        wwid "3600508b40008dc480000500000640000"
        wwid "3600508b40008dc480000500000610000"
}
 
multipaths {
multipath   {
             wwid     3600508b40008dc4800005000005b0000
             alias    mpath1
}
multipath   {
             wwid     3600508b40008dc4800005000005e0000
             alias    mpath2
}
multipath   {
             wwid     3600508b40008dc480000500000610000
             alias    mpath3
}
multipath   {
             wwid     3600508b40008dc480000500000640000
             alias    mpath4
}
multipath   {
             wwid     3600508b40008dc480000500000670000
             alias    mpath5
}
}
 
5. IBM 2145 SAN
# vi /etc/multipath.conf
devnode_blacklist {
    # wwid 26353900f02796769
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*|sda"
    devnode "^hd[a-z]"
}
multipaths {
     multipath {
        wwid            3600507680194011ef000000000000b46    
        alias            oraclevm-lun0
        path_grouping_policy    failover
        path_checker        readsector0
        path_selector        "round-robin 0"
        failback        0
        rr_weight        priorities
#        no_path_retry        5
    }
      multipath {
         wwid            3600507680194011ef000000000000dbd 
         alias            oraclevm-lun1
     }
      multipath {
         wwid            3600507680194011ef000000000000dbe
         alias            oraclevm-lun2
     }
      multipath {
         wwid            3600507680194011ef000000000000dc4
         alias            oraclevm-lun3
     }
      multipath {
         wwid            3600507680194011ef000000000000dbf
        alias            oraclevm-lun4
     }
     multipath {
                wwid                    3600507680194011ef000000000000dc0
                alias                   oraclevm-lun5
         }
    multipath {
                wwid                    3600507680194011ef000000000000eaf
                alias                   oraclevm-lun6
         }
     multipath {
                wwid            3600507680194011ef000000000000eb0                  
                alias                   oraclevm-lun7
         }
    multipath {                     
                wwid                    3600507680194011ef000000000000eb1
                alias                   oraclevm-lun8
         }
    multipath {
                wwid                    3600507680194011ef000000000000eb2
                alias                   oraclevm-lun9
         }
}
devices {
# IBM 2145
    device {
        vendor            "IBM"
        product            "2145"
        path_grouping_policy    group_by_prio
        prio_callout        "/sbin/mpath_prio_alua /dev/%n"
    }
}
 6. 3PAR 224MU6 Storage Array
# Default Blacklist - Ignore Internal Devices
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}
 
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
rr_min_io 100
rr_weight priorities
failback immediate
}
 
devices {
device {
vendor "3PARdata"
product "VV"
path_grouping_policy multibus
path_checker tur
no_path_retry 60
    }
}
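After editing /etc/multipath.conf on an Oracle VM server, one common way to apply and verify the configuration is to restart the multipathd service and list the multipath devices, as sketched below; the output depends on your array and zoning.
 
# service multipathd restart
# multipath -ll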