
Hi,

We are trying to select multiple VMs and back them all up to our S3-compatible HCP storage.

Our understanding is that snapshots are taken for the respective VMs and the data is then pushed to the cloud library. Please correct me if I'm wrong.

Even though we have a good amount of space in the datastore, it is not possible to back up multiple VMs at once because it runs the datastore out of space.

We even adjusted the datastore space check.

Is there any particular setting that would help us back up, say, 10 VMs more effectively and reliably?

Hello @Jass Gill 

How much space do you have in the datastore, and how much of it is free? We recommend keeping at least 10% free space available.

Could you please check with your VMware admin and confirm whether there are any old snapshots (that are no longer required) that can be deleted to free up space?
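
If it helps, here is a minimal pyVmomi sketch for pulling both pieces of information (datastore free space and any leftover snapshots) out of vCenter. The hostname and credentials are placeholders you would need to replace.

```python
# Minimal pyVmomi sketch: report datastore free space and list VMs that still
# carry snapshots. Hostname/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Datastore capacity vs. free space (the 10% guideline mentioned above)
ds_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    s = ds.summary
    pct_free = 100.0 * s.freeSpace / s.capacity
    print(f"{s.name}: {s.freeSpace / 2**30:.1f} GiB free of {s.capacity / 2**30:.1f} GiB ({pct_free:.1f}% free)")
ds_view.Destroy()

# VMs that still have snapshots attached
vm_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in vm_view.view:
    if vm.snapshot is not None:
        names = [snap.name for snap in vm.snapshot.rootSnapshotList]
        print(f"{vm.name}: existing snapshots -> {names}")
vm_view.Destroy()

Disconnect(si)
```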

Best,

Rajiv Singal


Hi @Jass Gill 


Are there any particular VMs with a higher rate of change that consume more space while a snapshot is present? And do you see any concerns with backup performance? (A slower backup means a longer snapshot duration, which means larger snapshot sizes.)

In this scenario it may help to either:

  1. Reduce the Subclient/VM Group Streams so that fewer VMs are protected at once.
  2. Split the VMs between subclients and schedule them for different times. (Perhaps isolate any higher change-rate machines into a separate subclient; the sketch below shows one way to spot them.)
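
One way to spot the higher change-rate VMs is to look at how much space their snapshot and delta files occupy while a snapshot is open. Below is a rough pyVmomi heuristic, not Commvault's own measurement; it assumes you already have a connected ServiceInstance (`si`, as in the sketch above) and that delta disks follow the usual "-00000N" naming convention.

```python
# Rough heuristic: sum the sizes of snapshot metadata and delta/redo disk files
# per VM, to spot candidates for a separate, smaller subclient.
from pyVmomi import vim

def snapshot_footprint_gib(vm):
    """Approximate space consumed by one VM's snapshot-related files, in GiB."""
    if vm.layoutEx is None:
        return 0.0
    total = 0
    for f in vm.layoutEx.file:
        # Delta disks are usually named like "vmname-000001-delta.vmdk";
        # matching "-00000" in the name is a convention-based heuristic only.
        if f.type in ("snapshotData", "snapshotList") or "-00000" in f.name:
            total += f.size
    return total / 2**30

content = si.RetrieveContent()  # assumes an existing pyVmomi connection `si`
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for gib, name in sorted(((snapshot_footprint_gib(vm), vm.name) for vm in view.view), reverse=True):
    if gib > 0:
        print(f"{name}: ~{gib:.1f} GiB in snapshot/delta files")
view.Destroy()
```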

Best Regards,

Michael


Thank you for the suggestions.

Isn’t there a way to avoid taking the snapshot on the datastore altogether?


@Jass Gill, we simply use the API to have VMware create a snapshot, and after that we start the backup process. If you want to move these VMs to a datastore where you have plenty of space, you can consult your VM admin.
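
For illustration only, the vSphere side of that flow looks roughly like the pyVmomi sketch below: create a snapshot via the API, back up from it, then remove it. This is not Commvault's actual code, and `backup_vm_disks` is a hypothetical placeholder for whatever reads the disks and writes to the backup target.

```python
# Sketch of an API-driven "snapshot, back up, remove snapshot" flow with pyVmomi.
from pyVim.task import WaitForTask

def backup_with_temporary_snapshot(vm, backup_vm_disks):
    task = vm.CreateSnapshot_Task(
        name="backup-temp",
        description="temporary snapshot for backup",
        memory=False,   # no memory dump needed for a disk-level backup
        quiesce=True,   # ask VMware Tools to quiesce the guest file system
    )
    WaitForTask(task)
    snap = task.info.result              # the vim.vm.Snapshot just created
    try:
        backup_vm_disks(vm)              # placeholder: stream the disks to the backup target
    finally:
        # Remove the snapshot afterwards so deltas stop growing on the datastore.
        WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))
```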

Best,

Rajiv Singal

