Virtual machine is inaccessible after vMotion

Issue:

The VM fails to power on, power off, or be modified. Attempting to migrate it to another ESXi host leaves the VM disconnected.

Error from vCenter:

The VM is shown in vCenter as inaccessible.

Snippets from the logs on the ESXi host:

[root@ESXi01:/vmfs/volumes/570248ff-86524429-7f05-848f691451f9/VMname] cat VMname.vmx | grep vmdk
cat: can't open 'VMname.vmx': Device or resource busy


  1. The logs show that the VM's files are locked. Check the lock status on the VMX file using vmkfstools:

[root@ESXi01:/vmfs/volumes/570248ff-86524429-7f05-848f691451f9/VMname] vmkfstools -D VMname.vmx
Lock [type 10c00001 offset 151009280 v 326, hb offset 3198976
gen 61, mode 1, owner 58109c0d-10ae7368-fd7b-848f69156ba8 mtime 15841905
num 0 gblnum 0 gblgen 0 gblbrk 0]
Addr <4, 326, 119>, gen 129, links 1, type reg, flags 0, uid 0, gid 0, mode 100755
len 5230, nb 1 tbz 0, cow 0, newSinceEpoch 1, zla 2, bs 8192
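The mode field in this output indicates the lock type. As a reference, this small helper (a sketch for illustration, not part of any VMware tooling) maps the documented mode values to names:

```shell
# Map the "mode" value from `vmkfstools -D` output to a lock type.
# Mode meanings per VMware's file-lock documentation:
#   0 = unlocked, 1 = exclusive, 2 = read-only, 3 = multi-writer.
lock_mode_name() {
  case "$1" in
    0) echo "unlocked" ;;
    1) echo "exclusive lock" ;;
    2) echo "read-only lock" ;;
    3) echo "multi-writer lock" ;;
    *) echo "unknown mode: $1" ;;
  esac
}

lock_mode_name 1   # the mode seen above: an exclusive lock
```

Mode 1 here tells us some host holds an exclusive lock on the VMX file; the owner field tells us which one.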

  2. The last six bytes of the lock owner ID, 848f69156ba8, are the MAC address of the management NIC on the host holding the lock. In this case it matched another host in the cluster, ESXi02: although the VM had been unregistered from vCenter, it was still registered on that host.
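To avoid eyeballing the owner field, the MAC portion can be cut out with sed. A sketch that assumes the owner line format shown above (lock_owner_mac is a hypothetical helper, not an ESXi command):

```shell
# Extract the last 12 hex digits (the MAC address) from the lock owner
# UUID printed by `vmkfstools -D`. The owner field ends with the MAC of
# the locking host's management NIC.
lock_owner_mac() {
  sed -n 's/.*owner [0-9a-f]\{8\}-[0-9a-f]\{8\}-[0-9a-f]\{4\}-\([0-9a-f]\{12\}\).*/\1/p'
}

# The owner line captured above:
line='gen 61, mode 1, owner 58109c0d-10ae7368-fd7b-848f69156ba8 mtime 15841905'
printf '%s\n' "$line" | lock_owner_mac   # prints 848f69156ba8
```

The extracted MAC can then be compared against each host's vmkernel NICs (for example, via esxcfg-vmknic -l) to find the owner.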
  3. SSH to the other host, ESXi02, and get the VM id:

[root@ESXi02:~] vim-cmd vmsvc/getallvms | grep VMname
346  VMname  [Datastore1] VMname/VMname.vmx  windows8Server64Guest vmx-08 
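Rather than reading the id off by hand, it can be parsed from the getallvms output with awk. A sketch, fed with the sample line captured above:

```shell
# Pull the numeric Vmid (first column) for a named VM out of
# `vim-cmd vmsvc/getallvms` output, matching on the name column.
sample='346  VMname  [Datastore1] VMname/VMname.vmx  windows8Server64Guest vmx-08'
vmid=$(printf '%s\n' "$sample" | awk -v vm='VMname' '$2 == vm {print $1}')
echo "$vmid"   # prints 346
```

On a real host the same filter would be piped directly: vim-cmd vmsvc/getallvms | awk -v vm='VMname' '$2 == vm {print $1}'.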

  4. Check whether there is a running process for that VM on this host:

[root@ESXi02:~]  localcli vm process list | grep ^VMname -B1 -A7
World ID: 10946465
Process ID: 0
VMX Cartel ID: 10946464
UUID: 42 0e e0 75 93 63 00 89-5c 67 4a a4 2a 0f f0 cc
Display Name: VMname
Config File: /vmfs/volumes/570248ff-86524429-7f05-848f691451f9/VMname/VMname.vmx
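The World ID needed for the kill step can also be extracted programmatically. A sketch run against a sample of the listing shown above:

```shell
# Extract the World ID for a VM from `localcli vm process list` output.
# The listing starts with the VM name, followed by key: value lines.
listing='VMname
World ID: 10946465
Process ID: 0
VMX Cartel ID: 10946464'
world_id=$(printf '%s\n' "$listing" | awk -F': ' '/World ID/ {print $2; exit}')
echo "$world_id"   # prints 10946465
```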

  5. Try to stop the process holding the VM (World ID 10946465):

[root@ESXi02:~]  localcli vm process kill --type=force --world-id=10946465
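The kill command supports three types: soft, hard, and force, and the usual guidance is to escalate only as needed. A sketch of that escalation; KILL_CMD is parameterised here as an assumption so the logic can be shown without an ESXi host (on a real host it would stand in for localcli vm process kill):

```shell
# Try a soft kill first, then hard, then force; stop at the first
# type that succeeds. KILL_CMD is a stand-in for
# `localcli vm process kill` so this sketch runs anywhere.
kill_vm_world() {
  wid=$1
  for t in soft hard force; do
    if "$KILL_CMD" --type="$t" --world-id="$wid"; then
      echo "killed with type=$t"
      return 0
    fi
  done
  echo "could not kill world $wid" >&2
  return 1
}
```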

  6. Check whether the lock has been released by re-running cat VMname.vmx | grep vmdk.
  7. If the lock is still in place after stopping the process, restart the hostd management agent with /etc/init.d/hostd restart.
  8. If this doesn't resolve the issue, migrate all other VMs to other hosts in the cluster and reboot the server; the reboot releases the lock.

