Hi All,
We have an HSX Storage Pool that handles backups for an on-prem VMware environment. In this setup, our HSX environment runs backups through SAN and HotAdd proxies.
When running a subclient backup, VMs use their respective proxies, though one HSX node is always the Media Agent for all of the child VM jobs.
Does the above imply that data written to the Hedvig library lands only on that HSX node's physical disks for the first-point write? I may be fundamentally misunderstanding the storage technology, but my assumption is that each node writes to its local disks, with the hblock process handling distribution from there. In this scenario, are we only using a fraction of the total spindles for direct writes, or is that simply not how Hedvig works, so this isn't a concern?
Along the same line of thinking, does this also apply to the network data flow? Is all traffic routed through the one node, thereby limiting the maximum throughput for a single subclient?
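To make the concern concrete, here is a minimal back-of-the-envelope sketch. All numbers (node count, NIC speed, disks per node) are hypothetical placeholders, not actual HSX specs, and it assumes the worst case where one node both ingests all traffic and lands all first writes locally:

```python
# Hypothetical figures for illustration only -- not actual HSX sizing.
NODES = 4            # HSX nodes in the storage pool (assumed)
NODE_NIC_GBPS = 10   # per-node network bandwidth in Gbps (assumed)
DISKS_PER_NODE = 6   # spindles per node (assumed)

# If one node acts as Media Agent for every child VM job, ingest for a
# single subclient is capped by that node's NIC, regardless of cluster size.
single_node_ingest_cap = NODE_NIC_GBPS            # 10 Gbps

# If ingest were spread across all nodes, the cap would scale with the cluster.
distributed_ingest_cap = NODES * NODE_NIC_GBPS    # 40 Gbps

# Fraction of the pool's total spindles receiving the first-point write
# if only the one node lands incoming data on its local disks.
first_write_spindle_fraction = DISKS_PER_NODE / (NODES * DISKS_PER_NODE)

print(single_node_ingest_cap, distributed_ingest_cap, first_write_spindle_fraction)
```

Under these assumptions, a single-node ingest path would use 1/NODES of the spindles for first writes and cap throughput at one NIC's worth of bandwidth, which is the scenario I'm asking whether Hedvig actually avoids.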
If the above is correct, is the solution to scale at the subclient level (rather than, say, the number of data readers) to improve parallel operations?