
Chapter 3: Hard and Soft Partitioning Oracle Technologies with Oracle VM

Last update 12-07-2010
Copyright © 2009 - 2012 Roddy Rodstein. All rights reserved.
 
This chapter will review hard and soft partitioning Oracle technologies with Oracle VM. The goal of this chapter is to clarify how Oracle VM can be used with hard and soft partitioning to help manage your Oracle enterprise technology license costs. The chapter starts with a brief introduction to Oracle licensing. Next, we will review Oracle technology named user plus licensing followed with processor licensing with hard and soft partitioning using Oracle VM. The chapter concludes with hard partitioning examples and virtual CPU binding testing techniques.
 
Note: While Oracle recognizes hard and soft partitioning with Oracle VM, this does not imply that the same treatment applies to other vendors' virtualization technologies. Please refer to the SIG or your Oracle representative if you have questions about the licensing impact of other vendors' virtualization approaches.
 
Oracle Technology Licensing
Oracle segments its product portfolio into two categories: technology and applications. The Oracle technology and applications license models are very different; the only similarity between them is the ability to execute an unlimited license agreement (ULA). Technology products have three forms of licensing: 1) processor, 2) named user plus (NUP), and 3) unlimited license agreement (ULA). Applications licensing also has three forms: 1) component pricing, 2) custom applications suite pricing, and 3) enterprise pricing, also known as an unlimited license agreement (ULA).
 
List 1 shows Oracle’s technology offering.
  • Database
  • Enterprise Managers
  • Application and System Management
  • Application Server
  • Fusion Middleware
  • Business Intelligence
  • Identity Management
  • Tools
  • Enterprise 2.0
  • Collaboration
  • Data Warehousing Products
  • Integration products
List 2 shows Oracle’s application offering.
  • Oracle Analytic and Business Intelligence (BI) Applications
  • Oracle Customer Relationship Management (CRM)
  • Oracle Financial Management
  • Oracle Governance, Risk, and Compliance (GRC) Management
  • Oracle Human Capital Management (HCM)
  • Oracle Procurement
  • Oracle Project Management
  • Oracle Supply Chain Management (SCM)
The only way to determine the most beneficial licensing model for your Oracle software investment is to evaluate your organization’s Oracle software requirements, along with your hardware, operating system and virtualization configurations. Most organizations initially engage their Oracle sales representatives as a first step, in order to help evaluate and quote license options. Customers typically use the initial licensing evaluation and quotes as a starting point to help determine which licensing model and configuration provides the best value.
 
An important Oracle technology licensing consideration is your organization’s hardware, operating system and virtualization configurations. Oracle recognizes a wide variety of hardware, operating system and virtualization configurations that directly affect the CPU count used to calculate Oracle technology processor licenses. For example, Oracle recognizes various hard and soft partitioning configurations for Oracle VM, as well as for the big 3 UNIX platforms, which directly affect how to count Oracle technology CPU licenses.
 
From an Oracle technology licensing perspective, hard partitioning allows customers to license a subset of a server’s CPUs. Conversely, soft partitioning counts the total number of a server’s CPUs.
 
Note: Oracle VM is not a licensed technology product.
 
Of the three Oracle technology licensing options, 1) processor, 2) named user plus (NUP), and 3) unlimited license agreement (ULA), Oracle VM can help manage and reduce processor licensing costs with Oracle enterprise edition technology products. Oracle VM helps manage enterprise edition technology processor licensing by using hard and soft partitioning.
 
Hard partitioning with Oracle VM allows a customer to license a subset of an Oracle VM server’s CPUs. Soft partitioning is used to take advantage of Live Migration, which is not supported with hard partitioning. Along with Live Migration, soft partitioning provides the ability to manage the number of licensed Oracle technology product CPUs within an Oracle VM pool.
 
Hard and soft partitioning with Oracle VM provide the ability to manage the number of licensed Oracle technology CPUs. Named user plus licensing and unlimited license agreements are not CPU regulated, which precludes using Oracle VM as a license management option.
 
The SIG states that Oracle technology standard edition products are limited to 2 or 4 sockets, i.e. they must be installed on a server with no more than 2 or 4 physical CPUs, depending on the product. Standard edition products can run on Oracle VM as long as the Oracle VM server meets the standard edition’s socket requirements. Most contemporary virtualization servers are equipped with 2 or 4 sockets; a 4-socket server would, for example, preclude hosting a standard edition product that is limited to 2 sockets.
 
Understanding how and where to use hard and soft partitioning with Oracle VM can help organizations to better manage their Oracle Enterprise technology licensing costs for development and production environments.
 
Smaller deployments with fewer than 50 users regularly select named user plus licensing. For smaller environments, named user plus licensing can be more cost effective than processor licensing. Along with named user plus licensing, customers regularly select standard edition products over enterprise edition products to further reduce costs.
 
Oracle VM with processor licensing could provide an alternative to named user plus licensing by leveraging hard partitioning to manage the number of licensed CPUs. The ability to manage the number of licensed CPUs can help control licensing costs, which may provide a cost advantage over named user plus licensing. You would need to run the numbers to determine if hard partitioning could provide a cost advantage over named user plus licensing.
 
Processor licensing is when an Oracle customer pays per processor (CPU) to run an Oracle technology product. Larger deployments, with 50 or more users, typically use processor based licensing. Oracle recognizes each CPU core as a separate CPU and assigns each CPU type a processor factor. The processor factor determines the CPU count, and the CPU count determines the number of CPUs required to license the Oracle technology product.
 
Note: Be sure to refer to the latest Oracle Processor Core Factor Table to find out the processor core factor for your hardware.
 
Table 1 lists the processor factors.

Oracle Processor Licensing        Processor Factor
UltraSparc T1                     0.25
AMD/Intel                         0.50
All other Multi-core Servers      0.75
Single Core Servers               1.00
To better understand how to calculate a processor license count, List 1 shows the calculation for a single quad core Intel or AMD, Sun SPARC, and IBM Power CPU.
  • Intel or AMD CPU
    • 1 quad core CPU requires 2 processor licenses (4 cores multiplied by a factor of .50 equals 2 processor licenses).
  • Sun SPARC64 VI CPU (* different models of Sun CPUs may have different core factors)
    • 1 quad core SPARC64 VI CPU requires 3 processor licenses (4 cores multiplied by a factor of .75 equals 3 processor licenses).
  • IBM Power6 CPU (* different models of IBM CPUs may have different core factors)
    • 1 quad core Power6 CPU requires 4 processor licenses (4 cores multiplied by a factor of 1.0 equals 4 processor licenses).
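The per-CPU arithmetic above can be sketched in a few lines of Python. This is a minimal sketch: the core factors are the ones used in the examples above, and fractional results are rounded up to a whole license. Always confirm factors against the current Oracle Processor Core Factor Table.

```python
import math

def processor_licenses(cores: int, core_factor: float) -> int:
    # Licenses required = cores x core factor, rounded up to a whole license.
    return math.ceil(cores * core_factor)

print(processor_licenses(4, 0.50))  # quad core Intel/AMD  -> 2
print(processor_licenses(4, 0.75))  # quad core SPARC64 VI -> 3
print(processor_licenses(4, 1.00))  # quad core Power6     -> 4
```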
Two and four core CPUs are now end of life. New Intel x86 servers ship with six or eight core CPUs. AMD plans to ship their 12 core CPUs in the first half of 2010. Both Sun Sparc and IBM Power servers now ship with eight core CPUs. As the chip vendors add more cores to CPUs, Oracle technology licensing costs can increase.
 
To better understand the impact to Oracle processor licensing with multi-core CPUs, let’s review List 2.  
 
List 2 shows the processor factor for a single eight core Intel, AMD, Sun Sparc, and IBM Power CPU.
 
  • Intel or AMD CPU
    • 1 eight core CPU requires 4 processor licenses (8 cores multiplied by a factor of .50 equals 4 processor licenses).
  • Sun SPARC64 VI CPU (* different models of Sun CPUs may have different core factors)
    • 1 eight core SPARC64 VI CPU requires 6 processor licenses (8 cores multiplied by a factor of .75 equals 6 processor licenses).
  • IBM Power6 CPU (* different models of IBM CPUs may have different core factors)
    • 1 eight core Power6 CPU requires 8 processor licenses (8 cores multiplied by a factor of 1.0 equals 8 processor licenses).
As illustrated in the above examples, a single eight core CPU doubles the Oracle technology license CPU count when compared to the single quad core CPU in List 1. Oracle customers using processor licensing will have to carefully consider the licensing impact of a hardware refresh due to the additional CPU cores.
 
List 3 highlights various options to help manage the Oracle technology license CPU count, with multi-core CPUs.
  • One of the options is to move from processor licensing to an unlimited license agreement. Customers with an unlimited license agreement (ULA) have no CPU restrictions with Oracle technology products.
  • Another option would be to use hard partitioning with processor licensing to control the number of licensed CPUs. Hard partitioning allows a customer to license a subset of a server’s CPUs. However, Live Migration is not supported with hard partitioning.
  • Customers can also use soft partitioning with processor licensing and Oracle VM to limit the number of licensed CPUs within a server pool. Soft partitioning supports Live Migration.
Oracle recognizes both hard and soft partitioning for Oracle technologies with Oracle VM. Hard and soft partitioning with Oracle VM can be used with processor licensing and enterprise edition products to manage the number of licensed CPUs, for development and production environments.
 
Note: Standard edition products can run on an Oracle VM as long as the Oracle VM server meets the standard edition’s CPU requirements. Please refer to the relevant licensing documentation for the Standard Edition product in question to verify if the Standard Edition product can be hosted on your server platform with Oracle VM.  
 
The difference between hard and soft partitioning is how Oracle recognizes the Oracle technology CPU license count, and the supported virtualization feature set. For example, Live Migration is not supported with Oracle VM when used with hard partitioning. Conversely, soft partitioning can be used within an Oracle VM pool to take advantage of Live Migration, along with the ability to manage the Oracle technology CPU count.
 
From an Oracle technology licensing perspective, hard partitioning allows customers to license a subset of a server’s CPUs. Conversely, soft partitioning counts the total number of a server’s CPUs. Soft partitioning with Live Migration requires each Oracle VM server, running a guest with an Oracle technology product to be licensed. We can limit the number of soft partitioned pool members, where a guest can run, by configuring an Oracle VM Manager manual placement policy.
 
Table 2 provides an overview of hard and soft partitioning.

Hard Partitioning
  Overview: Hard partitioning allows a customer to license a subset of a server’s CPUs.
  Requirements:
  1. All hard partitioned guests must pin their virtual CPUs to the Oracle VM server’s physical CPU cores in the guest’s vm.cfg file.
  2. All hard partitioned guests must have an Oracle VM Manager manual placement policy to confine the guests to the pinned Oracle VM server(s).
  3. Hard partitioned guests cannot use Live Migration.

Soft Partitioning
  Overview: Soft partitioning requires the sum of an Oracle VM server’s CPU cores to be licensed.
  Requirements:
  1. Each Oracle VM server running a guest with an Oracle technology product must be licensed. A manual placement policy can be used to license a subset of pool member servers. For example, in a 10 server pool, you could license 2 of the 10 pool members.
List 4 shows three hard and soft partitioning examples with Oracle technology licensing.
  1. A single Intel server with 16 CPU cores, with Linux installed running 11G has a processor factor of 8 CPUs. The Linux server can run one 11G instance.
  2. A single Intel server with 16 CPU cores, with Oracle VM installed using soft partitioning has a processor factor of 8 CPUs. The Oracle VM server could run more than 16 single CPU guests each with 11G.
  3. A single Intel server with 16 CPU cores with Oracle VM installed using hard partitioning. The Oracle VM server is capable of running more than 16 single CPU guests, although only one of the guests is running 11G with 2 virtual CPUs. In this example, using hard partitioning, we could license a subset of the 8 CPUs. For example, we could hard partition 1 of the 8 CPUs.
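The three scenarios in List 4 differ only in which cores count toward the license. A quick sketch of the arithmetic, assuming the Intel/AMD factor of 0.50 from Table 1:

```python
import math

CORES, INTEL_FACTOR = 16, 0.50

# 1) Bare-metal Linux silo: all 16 cores count.
print(math.ceil(CORES * INTEL_FACTOR))   # -> 8 licenses
# 2) Soft partitioning: all of the server's cores still count.
print(math.ceil(CORES * INTEL_FACTOR))   # -> 8 licenses
# 3) Hard partitioning: only the pinned cores count;
#    one guest pinned to 2 cores (2 cores = 1 CPU).
print(math.ceil(2 * INTEL_FACTOR))       # -> 1 license
```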
A single Oracle VM server with 2 Intel eight core CPUs (16 cores) could run 16 one CPU guests without oversubscribing the server’s CPUs. Oracle VM supports both CPU and memory oversubscription, which allows a single Oracle VM server to oversubscribe CPU and memory resources to guests. For example, an oversubscribed host with 2 Intel eight core CPUs (16 cores) could provision more than 16 virtual CPUs to guests.
 
Figure 1 shows three hosts. The first host has two eight core CPUs with Linux installed running 11G. The second host has two eight core CPUs using soft partitioning with Oracle VM installed, hosting 8 guests running 11G. The third host has two eight core CPUs using hard partitioning with Oracle VM installed, hosting 8 guests. Only one of the guests is running 11G.
 
 
As shown in Figure 1, the Linux server requires eight Oracle technology CPU licenses and is hosting one 11G application. Servers that host one application are commonly referred to as application silos. The traditional one application per server deployment methodology, shown in Figure 1, inevitably leads to over-provisioning and underutilization of hardware. Studies show that most servers run at 5-15% of their total capacity: most servers spend the majority of their life idle, consuming electricity and taking up valuable data center space. Underutilized servers can be consolidated using Oracle VM with hard or soft partitioning to provide better license and resource utilization when compared to application silos.
 
The soft partitioning example in Figure 1 shows how an Oracle VM server with eight processor licenses can host multiple isolated guests, running 11G, for the same CPU cost as the application silo. Oracle VM with soft partitioning provides superior license and resource utilization when compared to an application silo. Oracle VM supports CPU and memory oversubscription, which allows you to run even more workloads per server when compared to an application silo.
 
The hard partitioning example in Figure 1 shows how a shared infrastructure can be used to support Oracle technology products along with the ability to license a subset of the server’s CPUs. For example, we can license one of the eight CPUs. Hard partitioning with Oracle VM can be used with processor licensing and enterprise edition products to manage the number of licensed CPUs for development and production environments.
 
Soft partitioning with Oracle VM can be used with processor licensing for both development and production environments. The use case for Oracle VM and soft partitioning with development environments is to consolidate application silos to a shared Oracle VM infrastructure. Migrating from application silos to a shared infrastructure can help reduce the total number of licensed CPUs, reduce electricity consumption, consolidate underutilized servers, and free up data center space.
 
The use case for Oracle VM and soft partitioning with production environments is the ability to use Live Migration, along with the ability to manage the number of licensed CPUs. For example, it is not necessary to license the sum of all Oracle VM pool members’ CPU cores when using Live Migration with soft partitioning. We can configure an Oracle VM Manager manual placement policy to control which pool members a guest can run on. Using a placement policy with soft partitioning allows us to license a subset of an Oracle VM pool’s CPU cores, because a manual placement policy confines a guest to run on the pool members listed in the policy.
 
Figure 2 shows an Oracle VM server pool with six Oracle VM servers.  Each Oracle VM server has two eight core CPUs. The Oracle VM server pool has a total of 96 cores, and a processor factor of 48 CPUs. In Figure 2, there is a total of 8 guests in the pool running 11G, with the ability to run on all 6 Oracle VM pool members. The example shown in Figure 2 would require 48 Oracle technology processor licenses.
 
 
Figure 3 shows the same server pool as in Figure 2, with a total of 96 cores and a processor factor of 48 CPUs. In Figure 3, there is a total of 8 guests in the pool running 11G, with the ability to run on two Oracle VM pool members. The scenario shown in Figure 3 requires 16 Oracle technology processor licenses.
 
 
Figure 4 shows the same server pool as in Figure 2, with 96 cores, and a processor factor of 48 CPUs. In Figure 4, there is a total of 96 guests in the pool, each running 11G, with the ability to run on any of the Oracle VM pool members. The scenario shown in Figure 4 would require 48 Oracle technology processor licenses.
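The license counts in Figures 2 through 4 follow directly from the number of pool members a guest is allowed to run on. A sketch of that calculation, assuming the Intel/AMD factor of 0.50:

```python
import math

def pool_licenses(allowed_servers: int, cores_per_server: int, core_factor: float) -> int:
    # Soft partitioning: every pool member a guest may run on must be licensed.
    return math.ceil(allowed_servers * cores_per_server * core_factor)

print(pool_licenses(6, 16, 0.50))  # Figures 2 and 4: all 6 members -> 48
print(pool_licenses(2, 16, 0.50))  # Figure 3: placement policy limits guests to 2 members -> 16
```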
 
 
We can limit the number of Oracle VM pool members that a guest can run on, by configuring a manual placement policy in Oracle VM Manager. A manual placement policy allows you to limit which Oracle VM pool members a guest is allowed to run on. Once a manual placement policy is configured, HA events and Live Migration will be limited to the Oracle VM pool members listed in the manual placement policy.
 
Tip: An auto placement policy will start a guest on the least busy pool member and does not limit a guest’s ability to HA or Live Migrate to any pool members.
 
Hard partitioning, also referred to as sub-capacity licensing, allows Oracle customers to license a subset of a server’s CPUs. Hard partitioning is a two step process. The first step is to create a manual placement policy, which confines the guest to the pinned Oracle VM server. The second step is to edit the hard partitioned guest’s vm.cfg file to pin the guest’s virtual CPUs to the Oracle VM server’s physical CPU cores.
 
Oracle’s hard partitioning policy states that a hard partitioned guest’s virtual CPUs mapping must be hardcoded in the guest’s vm.cfg file. To confine the hard partitioned guest to the mapped host, a manual placement policy must be configured. Oracle restricts the use of Live Migration with hard partitioning.
 
Our first hard partitioning example in Figure 5 shows an Oracle VM server with two eight core Intel CPUs, with one hard partitioned guest running 11G. The guest is pinned to two of the Oracle VM server’s cores (2 cores = 1 CPU). From an Oracle technology licensing perspective, the server has a processor factor of 8. Using hard partitioning, the pinned guest would require only 1 CPU license. The additional 7 CPUs could be used to license other Oracle technologies or be shared to run other workloads on the same Oracle VM server.
 
 
Another example with the server in Figure 5 would be to hard partition two guests, each running 11G. Each of the two guests is pinned to 1 of the Oracle VM server’s cores, for a total of 2 pinned cores (2 cores = 1 CPU). The additional 7 CPUs could be used to license other Oracle technologies or be shared to run other workloads on the same Oracle VM server.
 
Figure 6 shows an Oracle VM server with two eight core Intel CPUs, with two hard partitioned guests running 11G. Each guest is pinned to 1 of the Oracle VM server’s cores, for a total of 2 pinned cores (2 cores = 1 CPU).
 
 
Hard partitioning an Oracle VM guest is a two step process. The first step is to create a manual placement policy for the hard partitioned guest using Oracle VM Manager. The manual placement policy will confine the guest to the pinned Oracle VM server. The second step is to edit the hard partitioned guest’s vm.cfg file to pin the guest’s virtual CPU to the Oracle VM server’s physical CPU cores.
 
The ability to limit a guest to an Oracle VM server is accomplished by configuring an Oracle VM Manager manual placement policy. A manual placement policy allows you to configure which Oracle VM pool members a guest is allowed to run on. Once a manual placement policy is configured, HA events and Live Migration will be limited to the pool members listed in the manual placement policy. A manual placement policy is a guest property that can be configured during or after guest creation.
 
In the next section, we will walk through the configuration of a manual placement policy. Please note that guests must be powered off to configure a placement policy.
 
The first step is to access Oracle VM Manager and power off the guest. Next, click on the guest’s Virtual Machine Name as shown in Figure 7 to access the guest’s properties.
 
 
From the General Information page, click the Policies link to access the Policies properties page, as shown in Figure 8.
 
 
From the Policies page click the Placement Policy tab, as shown in Figure 9.
 
 
From the Placement Policy page, click the Manual button to access the Preferred Server page, as shown in Figure 10.
 
 
From the Preferred Server page, select the desired Oracle VM server(s) from the preferred server list. When you select an Oracle VM server from the preferred server list, the manual placement policy will limit the guest to the selected server(s). Once you have selected the preferred server, click the Confirm button, as shown in Figure 11.
 
 
After clicking the Confirm button, the page refreshes and displays the Placement Policy page. The new manual placement policy will be displayed, as shown in Figure 12.
 
 
We have successfully configured a manual placement policy for an Oracle VM hard partitioned guest.
 
The next and final step to hard partition an Oracle VM guest is to pin the guest’s virtual CPUs to the Oracle VM server’s CPU cores. Each hard partitioned guest should be pinned to the Oracle VM server that is listed in the guest’s manual placement policy.
 
This section will start with a brief review of the credit scheduler. Next, we walk through the procedure to hard partition an Oracle VM guest by adding the “cpus=” directive to a guest’s vm.cfg file. We conclude the section with CPU pinning examples using the xm and virsh commands.
 
Oracle VM’s default CPU scheduler is the credit scheduler. The credit scheduler uses a credit/debit system to fairly share CPU resources between guests. Credits are assigned to each running guest to determine its fair share of CPU resources, and the scheduler continually increments and decrements credits from running guests, which is how it balances resources. In many ways, the credit scheduler is like the Linux scheduler, which is used as the default CPU scheduler with the KVM hypervisor. Both schedulers can preempt processes as needed while trying to ensure proportional fair share allocations.
 
The default behavior of the credit scheduler is to bind each virtual CPU to a separate physical core. For example, when you create a guest with two virtual CPUs, the credit scheduler will map the two virtual CPUs to two physical cores. So when pinning virtual CPUs, we should follow the credit scheduler’s default behavior of mapping virtual CPUs to a server’s individual CPU cores.
 
Unless you have pinned a guest’s virtual CPUs, virtual CPUs will occasionally bind to different physical cores. This happens because the credit scheduler’s credit/debit system dynamically re-balances CPU resources. For example, if you were to periodically check an unpinned guest’s CPU mapping, you would see a different CPU mapping throughout the day.
 
There are two methods to pin virtual CPUs. We can use the xm command to pin a guest’s virtual CPUs, or we can hardcode the CPU mapping in the guest’s vm.cfg file. The difference between the two methods is persistence: CPUs that are pinned with xm are not persistent between reboots, while a CPU mapping hardcoded in a guest’s vm.cfg file is persistent between reboots. To comply with Oracle’s hard partitioning policy, we must hardcode the CPU mapping in the guest’s vm.cfg file.
 
Please note that hard partitioning can cause guest performance issues. For example, if you pin a guest’s virtual CPUs to a specific subset of cores without considering how the lower-level I/O interrupts are assigned, you can end up hurting performance. I/O interrupts are typically mapped to a specific CPU; if that CPU is not the same as the pinned CPU, the interrupts have to be redirected to the CPU you pinned, which can degrade the guest’s performance. If a hard partitioned guest is experiencing performance issues, the CPU pinning would be an area to investigate.
 
Next, we will review how to hard partition an Oracle VM guest. After the hard partitioning example, we will review pinning an Oracle VM guest using the xm command. Not all CPU cores are equal, so you may need to test various virtual CPU mappings using the xm and virsh commands.
 
In the following example, we will hard partition a guest running Oracle Database 10g enterprise edition with two virtual CPUs. Two virtual Intel or AMD CPUs equal one Oracle technology CPU. The guest will be pinned to an Oracle VM server with two four core CPUs. The Oracle VM server has a processor factor of four Oracle technology CPUs. Using hard partitioning, we will license only one of the Oracle VM server’s four licensable CPUs.
 
To comply with Oracle’s hard partitioning policy, we must hardcode a guest’s virtual CPU mapping by adding the “cpus=” directive to the guest’s vm.cfg file. The “cpus=” directive pins the guest’s virtual CPUs to the Oracle VM server’s cores.
 
Let’s review two different “cpus=” configurations, to help explain how to pin a guest’s virtual CPUs to an Oracle VM server’s CPU cores.
 
In the first vm.cfg example, we add a new line to the vm.cfg file: cpus = '0-3'. The cpus = '0-3' entry pins the guest’s virtual CPUs to the Oracle VM server’s CPU cores 0, 1, 2, and 3.
 
Please note the vcpus = 4 entry, one line above the cpus = '0-3' entry. The vcpus = 4 entry defines the number of virtual CPUs. The vcpus = directive can be edited to select the desired number of virtual CPUs.
#vi /OVS/running_pool/v52x6410g1/vm.cfg
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/v52x6410g1/System.img,xvda,w',
'file:/OVS/running_pool/v52x6410g1/oracle10g_x86_64_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x6410g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'd428ba07-31b9-5667-2085-8753a0342425'
vcpus = 4
cpus = '0-3'
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']
vif = ['bridge=xenbr0,mac=00:16:3E:20:18:19,type=netfront']
vif_other_config = []
The above example vm.cfg file shows a hard partitioned guest with 4 virtual CPUs. The guest’s 4 virtual CPUs are pinned to the Oracle VM server’s CPU cores 0, 1, 2, and 3. The same guest could also be pinned using cpus = '0' in the vm.cfg file, which would pin all 4 virtual CPUs to the same physical core, number 0, on the Oracle VM server. Likewise, using cpus = '0,1' would pin 2 virtual CPUs to core number 0 and 2 virtual CPUs to core number 1.
 
We can also exclude cores with the caret (^) syntax. For example, cpus = '^0-1' means any core except 0 and 1.
 
In the second vm.cfg example, we add a new line to the vm.cfg file: cpus = '0,1'. The cpus = '0,1' entry pins the guest’s virtual CPUs to the Oracle VM server’s CPU cores 0 and 1.
 
Please note the vcpus = 2 entry above the cpus = '0,1' entry. The vcpus = 2 entry defines the number of virtual CPUs. The vcpus = directive can be edited to select the desired number of virtual CPUs.
#vi /OVS/running_pool/v52x6410g1/vm.cfg
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/v52x6410g1/System.img,xvda,w',
'file:/OVS/running_pool/v52x6410g1/oracle10g_x86_64_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x6410g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'd428ba07-31b9-5667-2085-8753a0342425'
vcpus = 2
cpus = '0,1'
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']
vif = ['bridge=xenbr0,mac=00:16:3E:20:18:19,type=netfront']
vif_other_config = []
The above example vm.cfg file shows a hard partitioned guest with 2 virtual CPUs pinned to the Oracle VM server’s CPU cores 0 and 1.
 
Note: We must reboot the virtual machine to enforce any new hard partitioning configurations.
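The cpus= value forms shown above (single cores, comma lists, ranges, and caret exclusions) can be resolved with a small helper. This parser is a sketch following the behavior described in this section, not part of Oracle VM, assuming an 8-core server:

```python
def expand_cpus(spec: str, total_cores: int) -> set:
    """Expand a vm.cfg cpus= string ('0-3', '0,1', '^0-1', ...) into
    the set of physical cores the guest's virtual CPUs may run on."""
    allowed, excluded = set(), set()
    for part in spec.split(","):
        part = part.strip()
        target = allowed
        if part.startswith("^"):      # caret entries exclude cores
            target, part = excluded, part[1:]
        if "-" in part:               # a range such as 0-3
            lo, hi = part.split("-")
            target.update(range(int(lo), int(hi) + 1))
        else:                         # a single core number
            target.add(int(part))
    if not allowed:                   # a pure exclusion means "all cores but..."
        allowed = set(range(total_cores))
    return allowed - excluded

print(sorted(expand_cpus("0-3", 8)))   # [0, 1, 2, 3]
print(sorted(expand_cpus("^0-1", 8)))  # [2, 3, 4, 5, 6, 7]
```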
 
To be able to hard partition a guest, we need to know the number of CPUs and the number of cores on the pinned Oracle VM server. There are a number of commands to list the CPU details of an Oracle VM server. From dom0 we could type “xm info” or “virsh nodeinfo” to list the CPU and core details as shown in the next example.
# virsh nodeinfo
libvir: Remote error : No such file or directory
libvir: warning : Failed to find the network: Is the daemon running ?
CPU model:           i686
CPU(s):              8
CPU frequency:       2992 MHz
CPU socket(s):       2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s):        1
Memory size:         16775168 kB
The “virsh nodeinfo” example shows that the Oracle VM server has two four core CPUs (sockets) with a total of eight cores. The example Oracle VM server has an Oracle technology license processor factor of 4 CPUs.
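The socket and core counts reported by “virsh nodeinfo” are all we need to compute the server’s licensable CPU count. A sketch that parses the sample output above, assuming the Intel/AMD factor of 0.50:

```python
import math

# Trimmed sample of the "virsh nodeinfo" output shown above.
nodeinfo = """\
CPU(s):              8
CPU socket(s):       2
Core(s) per socket:  4
"""

fields = {}
for line in nodeinfo.splitlines():
    key, _, value = line.partition(":")
    fields[key.strip()] = value.strip()

total_cores = int(fields["CPU socket(s)"]) * int(fields["Core(s) per socket"])
print(total_cores)                    # 8 cores
print(math.ceil(total_cores * 0.50))  # 4 licensable CPUs
```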
 
To list the CPU cores, we can type “grep -i processor /proc/cpuinfo”, as shown in the next example.  
# grep -i processor /proc/cpuinfo
processor       : 0
processor       : 1
processor       : 2
processor       : 3
processor       : 4
processor       : 5
processor       : 6
processor       : 7
The “grep -i processor /proc/cpuinfo” example lists all eight CPU cores. To list all of the CPU details, type “cat /proc/cpuinfo”.
 
Once we have the Oracle VM server’s CPU and core details, we can pin the guest’s virtual CPUs to any of the physical cores. We will follow the default behavior of the credit scheduler and bind each virtual CPU to a separate physical core.
 
Before we pin the guest, let’s review the guest’s vm.cfg file. Please note the vcpus = 2 directive, which indicates the number of virtual CPUs for the guest. 
#vi /OVS/running_pool/v52x6410g1/vm.cfg
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/v52x6410g1/System.img,xvda,w',
'file:/OVS/running_pool/v52x6410g1/oracle10g_x86_64_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x6410g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'd428ba07-31b9-5667-2085-8753a0342425'
vcpus = 2
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']
vif = ['bridge=xenbr0,mac=00:16:3E:20:18:19,type=netfront']
vif_other_config = []
Next, we will pin the guest’s two virtual CPUs to cores 7 and 3 on the Oracle VM server by adding the cpus = '7,3' directive to the vm.cfg file.
#vi /OVS/running_pool/v52x6410g1/vm.cfg
bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/v52x6410g1/System.img,xvda,w',
'file:/OVS/running_pool/v52x6410g1/oracle10g_x86_64_asm.img,xvdb,w',
]
memory = '2048'
name = 'v52x6410g1'
on_crash = 'restart'
on_reboot = 'restart'
uuid = 'd428ba07-31b9-5667-2085-8753a0342425'
vcpus = 2
cpus = '7,3'
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']
vif = ['bridge=xenbr0,mac=00:16:3E:20:18:19,type=netfront']
vif_other_config = []
We must reboot the virtual machine to enforce the new hard partitioning configuration.  
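The cpus directive is not limited to a plain comma-separated list; Xen's guest configuration also accepts ranges and exclusions (see the xmdomain.cfg man page). The fragment below is illustrative only and is written to a scratch file so no real guest configuration is modified:

```shell
# Illustrative vm.cfg "cpus" syntaxes accepted by Xen's guest config
# parser; written to a scratch file so no real guest is touched.
cat > /tmp/vm.cfg.example <<'EOF'
cpus = '0-3'      # pin the guest's vCPUs to cores 0 through 3
cpus = '0-7,^5'   # cores 0 through 7, excluding core 5
EOF
grep -c '^cpus' /tmp/vm.cfg.example
```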
 
Note: If a hard partitioned guest uses Live Migration, or has the CPU properties edited with the xm command, the hard coded CPU mapping in the vm.cfg file will be lost. If the CPU mappings get removed by Live Migration, or xm, you will need to re-pin the virtual CPUs in the guest’s vm.cfg file.
 
Once we reboot the guest, we can validate the new hard partition configuration by accessing dom0 as root and typing xm vcpu-list [domain], as shown in the next example.
# xm vcpu-list v52x6410g1
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          19     0     7   -b-      12.8 3,7
v52x6410g1                          19     1     3   -b-       5.6 3,7
The “xm vcpu-list v52x6410g1” command validates that our hard partition configuration is enforced. By adding the cpus = '7,3' directive, we pinned the guest’s two virtual CPUs: one to core 7 and one to core 3.
 
The above hard partition example showed how to license a subset of an Oracle VM server’s CPUs. The example Oracle VM server has a processor factor of four CPUs. Using hard partitioning, we licensed only one of the server’s four licensable CPUs.
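The arithmetic behind the example can be sketched as follows, assuming the 0.5 core factor that Oracle's processor core factor table lists for most x86 CPUs (verify the current table for your hardware):

```shell
# Licensable CPUs = physical cores x core factor (0.5 assumed for x86).
total_cores=8      # all cores in the example Oracle VM server
pinned_cores=2     # cores 7 and 3, bound via the cpus directive
awk -v t="$total_cores" -v p="$pinned_cores" 'BEGIN {
    printf "without pinning: %g licensable CPUs\n", t * 0.5
    printf "with pinning:    %g licensable CPUs\n", p * 0.5
}'
```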
 
We can also manage the number of virtual CPUs for a running guest with the “xm vcpu-set” command, which is useful for testing and troubleshooting virtual CPU mappings. Please note that pinning virtual CPUs with “xm vcpu-set” is not recognized by Oracle for hard partitioning.
 
Note: If you have hard coded the CPU mapping in a guest’s vm.cfg file and use the xm command to change the CPU properties, the hard coded CPU mapping will be lost.  
 
The “xm vcpu-set” command allows us to select any number of virtual CPUs, up to the number set by the vcpus = n directive in the vm.cfg file. For example, if a guest has four virtual CPUs (vcpus = 4), we could use “xm vcpu-set” to reconfigure the guest to use one, two, three or all four virtual CPUs.
 
To view a guest’s virtual CPU statistics, from dom0 as root, type xm vcpu-list [domain], as shown in the next example. If you type “xm vcpu-list”, it will list all of the running guest’s virtual CPU statistics.
# xm vcpu-list v52x6410g1
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          18     0     2   -b-     351.4 2
v52x6410g1                          18     1     6   -b-     220.7 6
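In scripts, the same listing can be parsed with awk. The heredoc below replays the sample output above, so the snippet runs without a dom0 shell; on a live server you would pipe “xm vcpu-list [domain]” into the same awk command:

```shell
# Extract the vCPU-to-core mapping from "xm vcpu-list" output; the
# heredoc stands in for the live command so the sketch runs anywhere.
awk 'NR > 1 { print "vcpu " $3 " -> core " $4 }' <<'EOF'
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          18     0     2   -b-     351.4 2
v52x6410g1                          18     1     6   -b-     220.7 6
EOF
```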
We can also use the virsh command to list a guest’s virtual CPU details by typing virsh vcpuinfo [domain], as shown in the next example. 
# virsh vcpuinfo v52x6410g1
libvir: Remote error : No such file or directory
libvir: warning : Failed to find the network: Is the daemon running ?
VCPU:           0
CPU:            2
State:          blocked
CPU time:       236.2s
CPU Affinity:   ---y---y
 
VCPU:           1
CPU:            6
State:          blocked
CPU time:       178.5s
CPU Affinity:   ---y---y
In the above example, the guest has two virtual CPUs, 0 and 1. Virtual CPU 0 is in the "blocked" state on physical core 2, and virtual CPU 1 is in the "blocked" state on physical core 6. A blocked virtual CPU is waiting on I/O or has gone to sleep. There are six virtual CPU states: r for running, b for blocked, p for paused, s for shutdown, c for crashed and, finally, d for dying.
 
The next example shows how to change the virtual CPU count from two virtual CPUs to one virtual CPU using the “xm vcpu-set” command.
# xm vcpu-set v52x6410g1 1
# xm vcpu-list v52x6410g1
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          18     0     2   -b-     359.5 2
v52x6410g1                          18     1     -   --p     227.0 6
As shown in the above example, typing “xm vcpu-set v52x6410g1 1” paused one of the two virtual CPUs. A paused virtual CPU is not eligible for scheduling by the credit scheduler. The paused virtual CPU will remain paused until resumed, for example, by typing “xm vcpu-set v52x6410g1 2”, as shown in the next example.
# xm vcpu-set v52x6410g1 2
# xm vcpu-list v52x6410g1
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          19     0     2   -b-     266.4 2
v52x6410g1                          19     1     6   -b-     190.5 6
Next, we will pin the guest’s virtual CPUs to the Oracle VM server’s physical cores using the “xm vcpu-pin <domain> <vcpu> <pcpu>” command. In the next example, we will pin the guest’s virtual CPU 0 to core 1, and virtual CPU 1 to core 4.
# xm vcpu-pin v52x6410g1 0 1
# xm vcpu-pin v52x6410g1 1 4
# xm vcpu-list v52x6410g1
Name                                ID VCPU   CPU State   Time(s) CPU Affinity
v52x6410g1                          19     0     1   -b-     268.4 1
v52x6410g1                          19     1     4   -b-     190.5 4
As shown in the above example, typing “xm vcpu-pin v52x6410g1 0 1” followed by typing “xm vcpu-pin v52x6410g1 1 4” pinned the guest’s virtual CPU 0 to core 1 and virtual CPU 1 to core 4.
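For guests with more virtual CPUs, the per-vCPU pin commands can be generated in a loop. The sketch below only prints the commands so it is safe to run anywhere; on dom0 you would drop the echo (or pipe the output to sh) to execute them:

```shell
# Generate "xm vcpu-pin" commands from a list of vcpu:core pairs; the
# echo keeps this a dry run rather than a live reconfiguration.
domain=v52x6410g1
for pair in 0:1 1:4; do
    vcpu=${pair%%:*}      # text before the colon: the virtual CPU
    core=${pair##*:}      # text after the colon: the physical core
    echo xm vcpu-pin "$domain" "$vcpu" "$core"
done
```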