
ROAD for Oracle VM Operations Toolkit for Oracle VM for x86 Release 3.x

The ROAD™ for Oracle® VM Operations Toolkit uses native Linux commands to provide in-depth Oracle VM server, cluster, virtual machine, and server pool details. These details allow IT operations to proactively respond to issues before they impact the business.
 
 
 
The next table describes the ROAD for Oracle VM Operations Toolkit command definitions and usage. The ROAD for Oracle VM Operations Toolkit commands/scripts are installed on the Oracle VM servers and must be executable by root.
Command
Description
fixvnc3.2.sh
This command resolves a known issue where an Oracle VM Manager 3.2 VNC console opens with “Status: Connected to Server” but no VNC console appears. Confirm which Oracle VM server the virtual machine is running on, then run the command on that Oracle VM server to remove the broken ovm-consoled pid and restart the ovm-consoled service.
 
To run the command on an Oracle VM server, as root or using sudo type:
./fixvnc3.2.sh
or
sh fixvnc3.2.sh
Example Run
# ./fixvnc3.2.sh
Deleting broken ovm-consoled pid
Starting ovm-consoled
Starting OVM console server:                               [  OK  ]
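The fix itself is small enough to sketch. The following is not the shipped script, only a minimal illustration of the pattern it describes; the actual pid-file location is an assumption, so it is passed in as a parameter here.

```shell
#!/bin/sh
# Minimal sketch of the fixvnc3.2.sh pattern, not the shipped script.
# The real ovm-consoled pid-file path is an assumption; pass it in.
fix_vnc_console() {
    pidfile="$1"
    if [ -f "$pidfile" ]; then
        echo "Deleting broken ovm-consoled pid"
        rm -f "$pidfile"
    fi
    echo "Starting ovm-consoled"
    # On a real Oracle VM server the script would now run:
    #   service ovm-consoled start
}
```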
clncluster.sh
This command resets an Oracle VM server's cluster configuration to a clean state, so the server can be cleanly removed from, and then added back to, a server pool. The command stops the ovs-agent, deletes the local Berkeley databases, deletes the cluster.conf and o2cb files, with iSCSI stops the iSCSI service and deletes the iSCSI config file, shuts down any running virtual machines, and then shows a reboot prompt.
 
This command preserves the networking setup. Once the Oracle VM server is discovered and its ownership is set, only the storage must be set up before the server can be placed into a server pool.
 
To run the command on an Oracle VM server, as root or using sudo type:
./clncluster.sh
or
sh clncluster.sh
Example Run
# ./clncluster.sh
Shutting down any running VMs
Shutting down VM with id 14
Shutting down VM with id 12
Shutting down VM with id 11
Shutting down VM with id 13
Shutting down VM with id 10
Shutting down VM with id 9
Stopping ovs-agent
Stopping Oracle VM Agent:                                  [  OK  ]
Wiping the ovs-agent db
Cleaning up ocfs2 cluster.conf
Cleaning up ocfs2 o2cb
Starting ovs-agent
Starting Oracle VM Agent:                                  [  OK  ]
Cluster is now clean....
Reboot? [Y/n] y

Broadcast message from root (pts/13) (Thu Feb 18 20:55:38 2016):

The system is going down for reboot NOW!
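The cleanup sequence above can be sketched roughly as follows. This is not the shipped script: the file locations below are assumptions based on Oracle VM Server 3.x defaults, the VM shutdown and iSCSI steps are left as comments, and the sketch defaults to a dry run that only prints each step so the order can be reviewed safely.

```shell
#!/bin/sh
# Sketch of the clncluster.sh sequence, not the shipped script. File
# locations are assumptions based on Oracle VM Server 3.x defaults.
# DRY_RUN=1 (the default here) prints each step instead of executing it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}
cln_cluster() {
    # (the shipped script also shuts down running VMs first, and on
    # iSCSI setups stops the iSCSI service and removes its config)
    run service ovs-agent stop
    run rm -rf /etc/ovs-agent/db           # wipe the local Berkeley DBs
    run rm -f /etc/ocfs2/cluster.conf      # drop the ocfs2 cluster config
    run rm -f /etc/sysconfig/o2cb          # drop the o2cb config
    run service ovs-agent start
    echo "Cluster is now clean; reboot when ready."
}
```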
lscluster.sh
This command is used to troubleshoot Oracle VM cluster issues. The command will generate a status report in the running directory named hostname-date-lscluster.out showing the running virtual machines, the local and shared Berkeley database details, distributed lock management (dlm) details, and virtual machine locks.
 
How to Troubleshoot using the out files:
Confirm that the details in the out files do not contain unexpected entries: for example, incorrect or malformed mount points; incorrect server, manager, and cluster details; unexpected lock files; and inconsistencies between the details of Oracle VM servers in the same pool.
 
To run the command on an Oracle VM server, as root or using sudo type:
./lscluster.sh
or
sh lscluster.sh
Example Run
# ./lscluster.sh
The command will generate a status report in the running directory named hostname-date-lscluster.out listing the local and shared Berkeley database details, distributed lock management (dlm) details, and virtual machine locks. The next example shows a sample out file.
# cat ovs-pickard-2016-02-18-lscluster.sh.out
Thu Feb 18 21:06:09 PST 2016
ovs-pickard
Running Virtual Machines
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600000cbc711b8e54ee80            23  2048     2     -b----   1169.1
0004fb000006000017b9a07fa59d765a            25  2048     2     -b----   1173.0
0004fb00000600003eeb0f3e53924f7d            20  8192     2     -b---- 263877.6
0004fb00000600004fc9500e826b6c68            21  8192     2     -b----  26018.8
0004fb00000600007280bb57d5018de4            24  2048     2     -b----   1159.8
0004fb000006000073b599b5b9fa0656            26  8192     2     -b----    582.6
0004fb0000060000a40a4977b395fe81            22  2048     2     -b----   1171.2
Domain-0                                     0  1976    20     r----- 413941.0
Local Berkeley DB - aproc, exports, repository, server
ovs-agent-db dump_db aproc
{}
ovs-agent-db dump_db exports
{}
ovs-agent-db dump_db repository
{'0004fb00000300004af79ef1fffed38e': {'alias': u'1TBJan212016',
                                     'filesystem': 'ocfs2',
                                     'fs_location': '/dev/mapper/360014052c41ebfcd3c85d343adb806d3',
                                     'manager_uuid': u'0004fb00000100008275c77e34c0765e',
                                     'mount_point': '/OVS/Repositories/0004fb00000300004af79ef1fffed38e',
                                     'version': u'3.0'},
'0004fb00000300006de3dba49e87b8f7': {'alias': u'nfs_prod',
                                     'filesystem': 'nfs',
                                     'fs_location': '192.168.2.100:/volume1/nfs_prod',
                                     'manager_uuid': u'0004fb00000100008275c77e34c0765e',
                                     'mount_point': '/OVS/Repositories/0004fb00000300006de3dba49e87b8f7',
                                     'version': u'3.0'},
'0004fb0000030000732541521e134825': {'alias': u'nfs_migration1',
                                     'filesystem': 'nfs',
                                     'fs_location': '192.168.2.100:/volume1/nfs_migration1',
                                     'manager_uuid': u'0004fb00000100008275c77e34c0765e',
                                     'mount_point': '/OVS/Repositories/0004fb0000030000732541521e134825',
                                     'version': u'3.0'}}
ovs-agent-db dump_db server
{'cluster_state': 'DLM_Ready',
'clustered': True,
'is_master': True,
'manager_uuid': '0004fb00000100008275c77e34c0765e',
'node_number': 0,
'pool_alias': 'Prod',
'pool_member_ip_list': ['192.168.3.108', '192.168.3.106', '192.168.3.107'],
'pool_uuid': '0004fb00000200005284f32e78dc72a4',
'pool_virtual_ip': '192.168.3.99',
'poolfs_nfsbase_uuid': '',
'poolfs_target': '/dev/mapper/36001405616b814edd716d3444dbd68df',
'poolfs_type': 'lun',
'poolfs_uuid': '0004fb00000500005a9b745eb940c981',
'registered_hostname': 'ovs-pickard',
'registered_ip': '192.168.3.108',
'roles': set(['xen', 'utility'])}
Pool FS Berkeley DB
{'auto_remaster': True,
'pool_alias': 'Prod',
'pool_master_hostname': 'ovs-pickard',
'pool_member_ip_list': ['192.168.3.108', '192.168.3.106', '192.168.3.107'],
'pool_uuid': '0004fb00000200005284f32e78dc72a4',
'pool_virtual_ip': '192.168.3.99'}
{'ovs-pickard': {'is_master': True,
                'node_number': 0,
                'registered_ip': '192.168.3.108',
                'roles': set(['xen', 'utility'])},
'ovs-spock': {'is_master': False,
              'node_number': 1,
              'registered_ip': '192.168.3.106',
              'roles': set(['xen', 'utility'])},
'ovs-sulu': {'is_master': False,
             'node_number': 2,
             'registered_ip': '192.168.3.107',
             'roles': set(['xen', 'utility'])}}
ls -ltr /dlm/ovm
total 0
-rwx------ 1 root root 64 Feb  2 08:57 master
-rwxr-xr-x 1 root root 64 Feb  9 05:09 0004fb00000600003eeb0f3e53924f7d
-rwxr-xr-x 1 root root 64 Feb  9 08:28 0004fb00000600004fc9500e826b6c68
-rwxr-xr-x 1 root root 64 Feb  9 08:54 0004fb0000060000a40a4977b395fe81
-rwxr-xr-x 1 root root 64 Feb  9 08:54 0004fb00000600000cbc711b8e54ee80
-rwxr-xr-x 1 root root 64 Feb  9 08:54 0004fb00000600007280bb57d5018de4
-rwxr-xr-x 1 root root 64 Feb  9 08:55 0004fb000006000017b9a07fa59d765a
-rwxr-xr-x 1 root root 64 Feb  9 08:56 0004fb000006000073b599b5b9fa0656
ls -latr /var/run/ovs-agent/
total 36
-rw-------  1 root root    5 Feb  2 08:57 log.pid
-rw-------  1 root root    5 Feb  2 08:57 notification.pid
-rw-------  1 root root    5 Feb  2 08:58 xmlrpc.pid
-rw-------  1 root root    5 Feb  2 08:58 stats.pid
-rw-------  1 root root    5 Feb  2 08:58 remaster.pid
-rw-------  1 root root    5 Feb  2 08:58 monitor.pid
-rw-------  1 root root    5 Feb  2 08:58 ha.pid
-rwxr-xr-x  1 root root    0 Feb  9 05:09 vm-0004fb00000600003eeb0f3e53924f7d.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:28 vm-0004fb00000600004fc9500e826b6c68.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:54 vm-0004fb0000060000a40a4977b395fe81.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:54 vm-0004fb00000600000cbc711b8e54ee80.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:54 vm-0004fb00000600007280bb57d5018de4.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:55 vm-0004fb000006000017b9a07fa59d765a.lock
-rwxr-xr-x  1 root root    0 Feb  9 08:56 vm-0004fb000006000073b599b5b9fa0656.lock
srwx------  1 root root    0 Feb 11 12:55 notification-server.sock
drwxr-xr-x 17 root root 4096 Feb 18 20:49 ..
drwx------  2 root root 4096 Feb 18 21:06 .
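A report of this shape can be assembled with a simple command group. The following is only a sketch of what lscluster.sh appears to collect, not the shipped script; the Oracle-VM-only commands (xm, ovs-agent-db) are skipped when absent so the sketch can be tried on any Linux host.

```shell
#!/bin/sh
# Sketch of the report lscluster.sh appears to assemble; not the shipped
# script. Oracle-VM-only commands (xm, ovs-agent-db) are skipped when
# absent so this can be tried anywhere.
out="$(uname -n)-$(date +%F)-lscluster.out"
{
    date
    uname -n
    echo "Running Virtual Machines"
    command -v xm >/dev/null 2>&1 && xm list
    for db in aproc exports repository server; do
        echo "ovs-agent-db dump_db $db"
        command -v ovs-agent-db >/dev/null 2>&1 && ovs-agent-db dump_db "$db"
    done
    echo "ls -ltr /dlm/ovm"
    ls -ltr /dlm/ovm 2>/dev/null
    echo "ls -latr /var/run/ovs-agent/"
    ls -latr /var/run/ovs-agent/ 2>/dev/null
} > "$out"
echo "Wrote $out"
```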
lsfc.sh
This command is used to troubleshoot Fibre Channel HBA and multipath issues. The command will generate a status report in the running directory named hostname-date-lsfc.out with multipath details and HBA port, node, state, speed, and symbolic name details.
 
To run the command on an Oracle VM server, as root or using sudo type:
./lsfc.sh
or
sh lsfc.sh
Example Run
# ./lsfc.sh
The script will generate a status report in the running directory named hostname-date-lsfc.out listing the multipath details and HBA port, node, state, speed, and symbolic name details. The next example shows a sample out file.
# cat ovs-pickard-2016-02-18-lsfc.sh.out
multipath -ll:
360060160b3933500c3d84f9cc384e311 dm-2 DGC,VRAID
size=400G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:0:4 sdg 8:96   active ready  running
| `- 3:0:0:4 sds 65:32  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:1:4 sdm 8:192  active ready  running
 `- 3:0:1:4 sdy 65:128 active ready  running
360060160b3933500d889ecdaf155e311 dm-1 DGC,VRAID
size=6.1T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:0:1 sdd 8:48   active ready  running
| `- 3:0:0:1 sdp 8:240  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:1:1 sdj 8:144  active ready  running
 `- 3:0:1:1 sdv 65:80  active ready  running
360060160b3933500d271efb4f155e311 dm-0 DGC,VRAID
size=6.1T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:0:0 sdc 8:32   active ready  running
| `- 3:0:0:0 sdo 8:224  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:1:0 sdi 8:128  active ready  running
 `- 3:0:1:0 sdu 65:64  active ready  running
360060160b3933500e1e68e1fcb84e311 dm-4 DGC,VRAID
size=4.4T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:1:5 sdn 8:208  active ready  running
| `- 3:0:1:5 sdz 65:144 active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:0:5 sdh 8:112  active ready  running
 `- 3:0:0:5 sdt 65:48  active ready  running
360060160b39335009dbd56bbdda3e311 dm-3 DGC,VRAID
size=2.2T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:1:2 sdk 8:160  active ready  running
| `- 3:0:1:2 sdw 65:96  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:0:2 sde 8:64   active ready  running
 `- 3:0:0:2 sdq 65:0   active ready  running
360060160b39335009a63420fd056e311 dm-5 DGC,VRAID
size=14G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:0:3 sdf 8:80   active ready  running
| `- 3:0:0:3 sdr 65:16  active ready  running
`-+- policy='round-robin 0' prio=0 status=enabled
 |- 2:0:1:3 sdl 8:176  active ready  running
 `- 3:0:1:3 sdx 65:112 active ready  running
Port_name:
0x10008c7cff97ae01
0x10008c7cff979901
Node_name:
0x20008c7cff97ae01
0x20008c7cff979901
Port_state:
Online
Online
Port speed:
8 Gbit
8 Gbit
Symbolic_name:
Brocade-1860 | 3.0.2.2 
Brocade-1860 | 3.0.2.2 
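The HBA attributes in the report come straight from the fc_host sysfs class, so a rough equivalent can be sketched as below. This is an illustration, not the shipped script; on hosts without Fibre Channel hardware the sysfs reads are simply skipped.

```shell
#!/bin/sh
# Sketch of the data lsfc.sh appears to collect; not the shipped script.
# HBA attributes come from /sys/class/fc_host; missing paths are skipped.
out="$(uname -n)-$(date +%F)-lsfc.out"
{
    echo "multipath -ll:"
    command -v multipath >/dev/null 2>&1 && multipath -ll 2>/dev/null
    echo "Port_name:";     cat /sys/class/fc_host/host*/port_name 2>/dev/null
    echo "Node_name:";     cat /sys/class/fc_host/host*/node_name 2>/dev/null
    echo "Port_state:";    cat /sys/class/fc_host/host*/port_state 2>/dev/null
    echo "Port speed:";    cat /sys/class/fc_host/host*/speed 2>/dev/null
    echo "Symbolic_name:"; cat /sys/class/fc_host/host*/symbolic_name 2>/dev/null
} > "$out"
echo "Wrote $out"
```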
lsvm.sh
This command will print a list to the terminal (stdout) of all of the virtual machines, templates, and assemblies in the storage repositories connected to the Oracle VM server. The list shows the path to each virtual machine's vm.cfg file.
 
To run the command on an Oracle VM server, as root or using sudo type:
./lsvm.sh
or
sh lsvm.sh
Example Run
# ./lsvm.sh
/OVS/Repositories/0004fb0000030000732541521e134825/Templates/0004fb0000140000954181255ae83f19/vm.cfg:OVM_simple_name = OVM_OL7U2_x86_64_PVHVM
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000a65767acfac6a7a1/vm.cfg:OVM_simple_name = devrac104
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb00000600007057ef271c5db5df/vm.cfg:OVM_simple_name = testebsapp04
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000070f9af390ae1586/vm.cfg:OVM_simple_name = devsoa02
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb000006000082b0d16e511330c2/vm.cfg:OVM_simple_name = testobidwh02
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb000006000068c63f14f0115a39/vm.cfg:OVM_simple_name = testrac104
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb00000600008226c4fc7bd3254b/vm.cfg:OVM_simple_name = testrac102
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb00000600003fd1924503182f64/vm.cfg:OVM_simple_name = devdb04
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000517440394a7df613/vm.cfg:OVM_simple_name = devdb02
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000e3f242a0bc2d57b6/vm.cfg:OVM_simple_name = testdb04
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000b4a63f1c601f5dfa/vm.cfg:OVM_simple_name = testdb02
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb00000600006f1324ddbb07b25c/vm.cfg:OVM_simple_name = testsoa02
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb000006000045bccf8ec3906c8e/vm.cfg:OVM_simple_name = devrac102
/OVS/Repositories/0004fb0000030000732541521e134825/VirtualMachines/0004fb0000060000a81a1bb27abfffcc/vm.cfg:OVM_simple_name = testhyp02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/Templates/0004fb00001400001f9e15593161535c/vm.cfg:OVM_simple_name = OVM_OL7U2_x86_64_PVHVM
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000017d298b0ad0a7712/vm.cfg:OVM_simple_name = prodobidwh02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb00000600005e22c6209748c485/vm.cfg:OVM_simple_name = prodobidwh04
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000073b599b5b9fa0656/vm.cfg:OVM_simple_name = ovm-crusher
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb0000060000c4d1b8475a70d244/vm.cfg:OVM_simple_name = prodrac102
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000041ccefec362d5365/vm.cfg:OVM_simple_name = prodrac104
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb0000060000a40a4977b395fe81/vm.cfg:OVM_simple_name = proddb02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb00000600000cbc711b8e54ee80/vm.cfg:OVM_simple_name = proddb04
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb0000060000cb55a31373548385/vm.cfg:OVM_simple_name = prodsoa02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000035a64987d67752f7/vm.cfg:OVM_simple_name = prodsoa04
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000017b9a07fa59d765a/vm.cfg:OVM_simple_name = prodebsapp04
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb00000600007280bb57d5018de4/vm.cfg:OVM_simple_name = prodebsapp02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb000006000022fbdceef56b172a/vm.cfg:OVM_simple_name = prodhyp02
/OVS/Repositories/0004fb00000300004af79ef1fffed38e/VirtualMachines/0004fb00000600001d289504abed055e/vm.cfg:OVM_simple_name = prodhyp04
/OVS/Repositories/0004fb00000300006de3dba49e87b8f7/VirtualMachines/0004fb00000600003eeb0f3e53924f7d/vm.cfg:OVM_simple_name = ovm-thedoctor
/OVS/Repositories/0004fb00000300006de3dba49e87b8f7/VirtualMachines/0004fb00000600004fc9500e826b6c68/vm.cfg:OVM_simple_name = ovm-mccoy
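Listing the simple names amounts to grepping OVM_simple_name out of every vm.cfg under the repository root. A minimal sketch (not the shipped script) follows, with the base directory as a parameter so the logic can be tried against any tree; on a real server it would be called as list_vms /OVS/Repositories.

```shell
#!/bin/sh
# Sketch of the lookup lsvm.sh performs; not the shipped script.
# $1 is the repository base (on a real server: /OVS/Repositories).
list_vms() {
    # print every vm.cfg path with its OVM_simple_name line
    find "$1" -name vm.cfg -exec grep -H 'OVM_simple_name' {} +
}
```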
repobench.sh
This command will allow you to select one OCFS2 or NFS storage repository and run a quick and simple dd benchmark. The command writes its output to a file in the selected repository named hostname-date.repobench.out.
 
This command is used to quickly confirm OCFS2 and NFS repository read and write numbers. If the read and write numbers are very low, confirm that the vendor storage, HBA, multipath and/or PowerPath, networking and NIC settings have been applied.
 
To run the command on an Oracle VM server, as root or using sudo type:
./repobench.sh
or
sh repobench.sh
Example Run
# ./repobench.sh
Available OCFS2 and NFS repositories:
/OVS/Repositories/tmpH5oARI type ocfs2
/OVS/Repositories/tmpy95-GQ type ocfs2
Enter the path from one of the above repositories to be benchmarked, i.e /OVS/Repositories/UUID
/OVS/Repositories/tmpy95-GQ
you have entered /OVS/Repositories/tmpy95-GQ as the source, running benchmark...
------------------------------------------------------
 
real    0m2.193s
user    0m0.003s
sys     0m1.889s
 
real    0m2.344s
user    0m0.004s
sys     0m1.614s
 
real    0m2.035s
user    0m0.002s
sys     0m1.566s
 
real    0m3.332s
user    0m0.004s
sys     0m0.600s
 
real    0m2.891s
user    0m0.006s
sys     0m0.600s
 
real    0m2.905s
user    0m0.000s
sys     0m0.604s
# cat /OVS/Repositories/tmpy95-GQ/ovsdfw01-2016-02-18.repobench.out
ovs-pickard.local.mokumsolutions.com
Thu Feb 18 23:50:29 CST 2016
/OVS/Repositories/tmpy95-GQ
Write:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.15567 seconds, 498 MB/s
Write:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.32997 seconds, 461 MB/s
Write:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.01534 seconds, 533 MB/s
Read:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.32351 seconds, 323 MB/s
Read:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.87934 seconds, 373 MB/s
Read:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.89363 seconds, 371 MB/s
uuid2vm.sh
This command will print a list of the running virtual machines with their UUID to simple name mappings (the name displayed in the GUI) to the terminal (stdout).
 
To run the command on an Oracle VM server, as root or using sudo type:
./uuid2vm.sh
or
sh uuid2vm.sh
This example shows the output from xm list. Note that the names are the machine-generated UUIDs, not the user-friendly names.
# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb000006000006ce39bbe316d5b4            16  2051     4     -b----   2823.3
0004fb0000060000070f9af390ae1586            15  1027     1     -b----   1582.5
0004fb000006000012f5fdc22c8c622d            11  1027     1     -b----   2129.8
0004fb00000600003fd1924503182f64            17  1027     1     -b----   1567.1
0004fb000006000045bccf8ec3906c8e            18  1027     1     -b----   1582.0
0004fb0000060000517440394a7df613            19  1027     1     -b----   1561.2
0004fb000006000068c63f14f0115a39            10  1027     1     -b----   1587.1
0004fb00000600006f1324ddbb07b25c            12  1027     1     -b----   1568.5
0004fb00000600008226c4fc7bd3254b            20  1027     1     -b----   1573.3
0004fb0000060000a65767acfac6a7a1            21  1027     1     -b----   1568.8
0004fb0000060000b4a63f1c601f5dfa            14  1027     1     -b----   1574.5
0004fb0000060000f3334933284eec8d            13  1027     1     -b----   1832.0
Domain-0                                     0  1076     8     r----- 167031.2
 
This example shows the output from uuid2vm.sh with the UUID to user-friendly name mappings.
# ./uuid2vm.sh
0004fb00000600000cbc711b8e54ee80    proddb04
0004fb000006000017b9a07fa59d765a    prodebsapp04
0004fb00000600003eeb0f3e53924f7d    ovm-thedoctor
0004fb00000600004fc9500e826b6c68    ovm-mccoy
0004fb00000600007280bb57d5018de4    prodebsapp02
0004fb000006000073b599b5b9fa0656    ovm-crusher
0004fb0000060000a40a4977b395fe81    proddb02
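The mapping can be recovered from the repositories themselves: each UUID is a directory under VirtualMachines, and its vm.cfg carries the simple name. A minimal sketch (not the shipped script) of a single lookup follows; on a real server the base would be /OVS/Repositories and the UUIDs would come from the xm list output.

```shell
#!/bin/sh
# Sketch of the UUID-to-name lookup uuid2vm.sh performs per VM; not the
# shipped script. $1: repository base, $2: VM UUID.
uuid2name() {
    grep -h 'OVM_simple_name' "$1"/*/VirtualMachines/"$2"/vm.cfg 2>/dev/null |
        sed 's/.*= *//'   # keep only the value after "OVM_simple_name ="
}
```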
 

Appendix

When running commands I receive: /bin/bash^M: bad interpreter: No such file or directory

The ^M is a carriage return character. Linux uses the line feed character to mark the end of a line, whereas Windows uses the two-character sequence CR LF. The ROAD for Oracle VM command (the file) has ended up with Windows line endings, which makes Bash generate the "/bin/bash^M: bad interpreter: No such file or directory" message.
Solution: Edit the command (the file) with vim and use the 'fileformat' option to set the file format: open the file, enter :set ff=unix, and save with :wq to remove the ^Ms. Alternatively, use the dos2unix command, i.e. dos2unix file_name.
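When neither vim nor dos2unix is at hand, the same fix can be done with sed or tr. The demo file below stands in for an affected script.

```shell
#!/bin/sh
# Strip Windows CR line endings from a script without opening an editor.
f=$(mktemp)
printf 'echo hello\r\n' > "$f"   # demo file with a CRLF line ending
sed -i 's/\r$//' "$f"            # GNU sed: delete the trailing CR on each line
# equivalently: tr -d '\r' < "$f" > "$f.fixed"
```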