Thursday 11 September 2014

Cloning a Virtual Machine with thin-provisioned vmdks is slower than cloning a Virtual Machine with thick-provisioned vmdks

I encountered a very interesting phenomenon while cloning VMs across the cluster: cloning a VM with a thick (eager- or lazy-zeroed) virtual disk (vmdk) is much faster than cloning a VM with a thin virtual disk (vmdk). I did some research to explain this behavior.

I ran cloning experiments using both the GUI and the command-line vmkfstools, which as far as I know go through different code paths in ESXi.


Here we see the network utilization during thin and thick cloning; they are almost the same:



Here is the disk utilization, where we notice some difference: 'thick' cloning utilizes the disks more heavily than 'thin' cloning:



   
Something interesting starts to happen with CPU: 'thin' cloning uses many more CPU cycles than 'thick' cloning:




Let's check whether the results are consistent from the command line. To simplify the experiment I clone on a single datastore and a single ESXi host to bypass the network overhead.

I ran the command below to clone a 30 GB (with data) thin vmdk to a thin vmdk:

# time vmkfstools -v3 -i /vmfs/volumes/datastore1/thin.vmdk -d thin /vmfs/volumes/datastore1/thin_clone.vmdk

 
It took 1m 2.17s, roughly 62 seconds.
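(To prepare a thick source for the next test, the thin disk can be converted, for example by inflating it in place with vmkfstools -j, which produces an eager-zeroed thick disk, or by cloning it to a zeroedthick target. The exact method isn't important for the timing comparison; either of the commands below would do:)

# vmkfstools -j /vmfs/volumes/datastore1/thin.vmdk
# vmkfstools -i /vmfs/volumes/datastore1/thin.vmdk -d zeroedthick /vmfs/volumes/datastore1/thick.vmdk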

I ran the command below to clone the same 30 GB, converted to a thick vmdk, to a thick vmdk:

# time vmkfstools -v3 -i /vmfs/volumes/datastore1/thick.vmdk -d zeroedthick /vmfs/volumes/datastore1/thick_clone.vmdk


It took 26.63 seconds

That is much faster!

In esxtop we see that the thin disk was cloned at 231 MB/s READ and 234 MB/s WRITE:

  

For thick cloning we notice 416 MB/s READ and 407 MB/s WRITE:
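If you want to watch the throughput yourself while the clone is running, esxtop's disk views show it (a rough sketch; the column layout can vary slightly between ESXi versions):

# esxtop

Press 'u' for the disk device view (or 'd' for the adapter view) and watch the MBREAD/s and MBWRTN/s columns.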


How can we explain that?

Cloning is a heavily sequential read workload. Sequential reads want the data to be laid out sequentially on spinning disks in order to minimize head movement. Since a thin-provisioned virtual disk (or thin-provisioned volume) consumes drive space on the backend only as data is written to it, that data tends not to come from contiguous areas, and the address space of the virtual disk or volume ends up fragmented on the array's backend. This can make thin-provisioned virtual disks (or volumes) perform significantly worse than thick-provisioned virtual disks (or volumes) for this particular workload type.
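You can see this on-demand allocation directly from the ESXi shell (a quick sketch using the thin disk from the example above, whose data file is typically named thin-flat.vmdk): ls -lh reports the full provisioned size of the flat file, while du -h reports only the blocks actually allocated on the datastore:

# ls -lh /vmfs/volumes/datastore1/thin-flat.vmdk
# du -h /vmfs/volumes/datastore1/thin-flat.vmdk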

Please bear in mind that this has nothing to do with the workload inside the virtual disk. The performance overhead for the VM guest OS on a thin-provisioned virtual disk (vmdk) is negligible and comparable to a thick-provisioned vmdk.

In other words, we have sequential access versus random access. By definition, random access means read or write I/Os that are distributed throughout the relevant address space; sequential access means read or write I/Os that are logically contiguous within the relevant address space.

The End.
