@Mubaraq , have you looked into doing snapshot backups?
https://documentation.commvault.com/11.25/expert/36282_vmware_intellisnap.html
The doc above should help get you started.
Thank you, @Mike Struening
With this feature, how long should a 32 TB backup take, please?
This isn’t an easy question to answer as it would be dependent on your infrastructure.
What transport mode is being utilized here?
@Aplynx Media agent with a NIC-teamed 40 Gbps network card. SSD DDBs and good throughput from the media agent to the storage. The media agent also serves as proxy.
@Mubaraq - Have you found the answers that you are looking for?
An IntelliSnap backup would most likely manage it within this timeframe, as it only needs to quiesce the VM, snapshot the storage array volume, and consolidate the disks for the VM. For a VM of this size I assume it is running on fast storage, preferably NVMe. When running on slow storage, the disk consolidation will take a while, depending on the delta created during this activity.
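For reference, the guest quiesce and VM snapshot step described above can be driven directly against vCenter. A minimal pyVmomi sketch, purely to illustrate the mechanics; IntelliSnap orchestrates all of this for you, and the vCenter address, credentials, and VM name below are placeholders:

```python
# Minimal sketch of the quiesce + VM snapshot step (pip install pyvmomi).
# vCenter address, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "big-32tb-vm")

    # quiesce=True flushes guest I/O via VMware Tools so the array snapshot
    # that follows captures a consistent point; memory=False keeps it fast.
    vm.CreateSnapshot_Task(name="pre-array-snap",
                           description="consistency point for array snapshot",
                           memory=False, quiesce=True)
    # ...the array-level snapshot is taken here, then the VM snapshot is
    # removed so the delta stays small...
finally:
    Disconnect(si)
```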
But a snapshot is not a backup in my opinion. If you want to run a backup copy of it to disk, then to be honest I don't think you will manage this in the required timeframe if you focus on VM backup via LAN.
So it all depends on the context of your requirements.
A quick thought on a valid solution for backup to disk could be:
- Divide the data over multiple in-guest LUNs, not VMDKs
- VM snapshot with IntelliSnap for the OS
- Make IntelliSnap backups with the File System Agent for the in-guest LUNs
- Offload the backup copy for the VM and LUNs to a proxy or multiple proxies and run them in parallel
If you keep a certain number of snapshots for both the OS and data sections, you can revert both the VM and the data with snapshots, or restore from disk if required.
The questions here would be:
- Do you have an IntelliSnap-supported storage array?
- Does your storage array provide FC connectivity?
- Do you have proxies available with FC connectivity?
- Which FC speeds are available?
- Can/will your customer facilitate this change?
Other solutions might be there, but 32 TB is simply a lot of data for that timeframe...
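To put a rough number on that, a back-of-the-envelope sketch; the sustained throughputs and stream counts below are assumptions for illustration, not measurements:

```python
# Rough backup-window arithmetic for 32 TB. The sustained throughputs and
# stream counts below are illustrative assumptions, not measurements.

def hours(data_tb: float, gbps: float, streams: int = 1) -> float:
    """Hours to move data_tb terabytes at gbps per stream, streams in parallel."""
    bits = data_tb * 1e12 * 8                  # decimal TB -> bits
    return bits / (gbps * 1e9 * streams) / 3600

print(f"single stream @ 40 Gbps (line rate): {hours(32, 40):4.1f} h")
print(f"single stream @ 10 Gbps (sustained): {hours(32, 10):4.1f} h")
print(f"4 parallel streams @ 10 Gbps each  : {hours(32, 10, 4):4.1f} h")
```

Even at an optimistic 10 Gbps sustained, a single-stream backup copy of 32 TB runs to roughly seven hours, which is why the parallel proxies in the outline above matter.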
File restores seem to be the weak spot for IntelliSnap backups of large VMs.
Yes this is true if restoring from the snapshot copy. This is because the snap has to be presented back to the ESXi host, the VM mounted and read using a transport mode to access the file content. If your array supports non-persistent storage, a live mount from the snap and copying the files out may work much faster. Hoping we will see improvements to file restore performance from snap backups in the future; I'll see if there is anything in the pipeline to improve this.
“Yes this is true if restoring from the snapshot copy. This is because the snap has to be presented back to the ESXi host, the VM mounted and read using a transport mode to access the file content.”
The mounting of a snapshot back to an ESXi host or a cluster from NetApp, for example, can be done manually within a minute.
The mounting of a VMDK to another VM can likewise be done manually in under a minute. I'm not sure how presenting the snap to a single host is the issue.
The issue is not the delay or slowness of the mounting operation. The issue is that once the VM is back in vCenter, the disk is accessed in ‘file’ mode through VMware APIs, which allows Commvault to access the contents of the disk. This is an intensive operation and completely different to how the backup is done, which of course is at the block level. Parsing actual file contents out of a VMDK through a VMware transport mode is the cause of the delay.
Yes, if the process were to use a proxy machine to mount the disk and use the OS to read the files, the operation would be faster, but that is not how it is currently implemented, and I can't comment on why that is the case. There is a restore option that allows you to mount the VMDK to a machine; the last leg of the restore would be up to you, copying the files out manually through the proxy machine it is mounted to, and that would likely be more performant.
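If you go down that route, the last leg is just a file copy on the proxy. A minimal sketch of that step, assuming the restored VMDK is already mounted at a hypothetical E:\ on a Windows proxy (in practice a multithreaded tool like robocopy would be the usual choice):

```python
# Minimal sketch of the manual last leg: the restored VMDK is assumed to be
# mounted at E:\ on the proxy; the destination path is a placeholder too.
import shutil
from pathlib import Path

SRC = Path("E:/")                        # mounted VMDK on the proxy
DST = Path("D:/restore/big-vm-files")

for item in SRC.rglob("*"):
    target = DST / item.relative_to(SRC)
    if item.is_dir():
        target.mkdir(parents=True, exist_ok=True)
    else:
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(item, target)       # copy2 preserves timestamps
print("copy complete")
```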