I'm not sure how this part, "We need to foresee the backup capacity of VSA backups using hotadd transport mode. The VSA proxies are virtual machines and they send the data to physical media agents," connects with this: "which value do you take into account to make your disk capacity calculations from the VMware RVTools?"
I think they are unrelated, but from reading the RVTools documentation I would say vPartition, as it will give you all the disk info you need.
https://robware.net/download/RVTools.pdf
The “vPartition” tab displays for each virtual machine, if the VMware Tools are active, the
name of the VM, powerstate, template, SRM Placeholder, Disk Key, Disk name, total disk
capacity, consumed disk capacity, total free disk capacity, percentage free disk capacity,
Internal sort column, annotations, custom fields, datacenter name, cluster name, ESX
host name, VM folder name, operating system name according to the config file,
operating system name according to the VMware tools, VM ID, VM UUID, virtual machine
tags, VI SDK Server and VI SDK UUID.
@brucquat
Below is what you should be looking at.
- vInfo Tab -> Column “In Use MiB”
The actual front-end (FET) size in Commvault might be lower than that, as the software is smart enough to exclude pagefile blocks, etc., from the backup.
FYI - Commvault has a System Discovery tool that provides the same information for sizing calculations.
https://cloud.commvault.com/webconsole/cloud/edc/SystemDiscovery.jsp#tabs-SysHYPV
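If you want to turn that export into a rough number quickly, here is a minimal sketch that sums the column. It assumes the vInfo tab was saved as a plain CSV and that the header is literally "In Use MiB"; the filename "vInfo.csv" is just a placeholder, so adjust both to whatever your RVTools version actually writes:

```python
import csv

def front_end_gib(rvtools_vinfo_csv: str) -> float:
    """Sum the 'In Use MiB' column of an RVTools vInfo export and return GiB."""
    total_mib = 0.0
    with open(rvtools_vinfo_csv, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            # Strip thousands separators before parsing, skip empty cells.
            value = row.get("In Use MiB", "").replace(",", "").strip()
            if value:
                total_mib += float(value)
    return total_mib / 1024  # MiB -> GiB

if __name__ == "__main__":
    print(f"Estimated front-end size: {front_end_gib('vInfo.csv'):.1f} GiB")
```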
The question is, in order to estimate the disk capacity required to store the backups:
Do we need to take into account the capacity at the hypervisor level, i.e.,
vInfo Tab -> Column “In Use MiB”
OR
do we need to take into account the capacity at the guest level (VM), i.e.,
vPartition Tab -> Column “Consumed MiB”
Thanks
Enjoy the holiday period.
For the initial landing you could use the vPartition CONSUMED value. This is the amount of data stored on the disk(s) that are mounted by the proxy. However, do note that deduplication will remove duplicates, and follow-up jobs will leverage CBT, so they only pull the changed blocks, which are also compressed and deduplicated; this reduces the amount of data that is sent to the library.
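To put rough numbers on that, here is a back-of-the-envelope sketch. The 5% daily change rate and 50% dedup/compression savings are purely illustrative assumptions, not Commvault figures, so plug in ratios that match your own environment:

```python
def library_footprint_gib(consumed_gib: float,
                          daily_change_rate: float = 0.05,  # assumed: 5% of blocks change per day (CBT)
                          dedup_comp_ratio: float = 0.5,    # assumed: 50% remains after dedup + compression
                          retention_days: int = 30) -> float:
    """Initial full (after reduction) plus reduced CBT incrementals kept for the retention window."""
    initial_full = consumed_gib * dedup_comp_ratio
    incrementals = consumed_gib * daily_change_rate * dedup_comp_ratio * retention_days
    return initial_full + incrementals

# Example: 10 TiB of vPartition "Consumed" data across all VMs
print(f"Estimated library footprint: {library_footprint_gib(10 * 1024):.0f} GiB")
```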