Hey @KevinH
There is no maximum mount path size - only what the OS can support.
That being said, historically mount path sizes of 1-4 TB were recommended so you could do maintenance on them individually without affecting the entire library (e.g. defrag). That is not really a thing these days, especially if the underlying paths all point to the same NAS/device. If they do share a device, you can also mess up space reporting as mentioned here, since Commvault won't know multiple paths are on the same storage (e.g. two 4 TB mount paths carved from the same 4 TB device should report at most 4 TB of free space, but will report 8 TB).
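To make that double-counting concrete, here is a rough Python sketch (the directory paths are purely illustrative, and the assumption is that both mount path folders sit on the same underlying filesystem/NAS share):

```python
import shutil

# Hypothetical mount path directories that both live on the same backing device.
mount_paths = ["/mnt/mountpath1", "/mnt/mountpath2"]

# Summing free space per mount path counts the shared device once per path.
naive_total_free = sum(shutil.disk_usage(p).free for p in mount_paths)

# The real ceiling is the free space of the single shared device;
# disk_usage() returns the same figure for every path on it.
actual_free = shutil.disk_usage(mount_paths[0]).free

print(f"Free space summed across paths: {naive_total_free / 1e12:.1f} TB")
print(f"Actual free space on device   : {actual_free / 1e12:.1f} TB")
```

The naive sum is double the real number because both paths report the free space of the same device - which is essentially what the library-level reporting ends up doing.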
In essence, if the underlying storage device is the same, I don't believe there is any benefit to carving it up into smaller mount paths - you could get more control over the streams per path, but there isn't really a point if it's all the same device. But I'll let others chime in and see if there is another opinion based on scenarios observed in the wild!
Hi Kevin,
As @Damian Andre rightly said above, the limit is the OS-supported maximum.
In terms of sizing the mount paths, I think this is one of those "it depends" questions - it depends on what the environment is going to look like and what the requirements are.
Giving one MA one large volume creates a single point of failure. For example, losing disks in the back end could cause performance issues or volume corruption. Having multiple smaller volumes could mean less impact in that scenario, "if" they are created from different RAID groups (it depends on the configuration).
Also, if there was ever a need to perform a "Move Mount Path" operation to relocate the data for whatever reason, all the data in that one large path would need to be moved in one go. With multiple smaller volumes, you could move them separately and still keep some paths accessible in the meantime.
There could also be a need to distribute load between the storage controllers on the array presenting the storage, in which case you could have separate volumes presented from different controllers.
I'm sure there are many other reasons why you "could" have a requirement for multiple smaller paths. Let's see what the rest of the community has to add!
Best Regards,
Michael
Thank you for the feedback. It was very good information.
In our case, all the backend disks are on the same RAID 6 array, so there is no performance benefit to spreading out the mount paths. The only small argument you could make is that the Media Agents might queue disk requests more efficiently across multiple paths to the backend storage than across a single path. I recall seeing this in VMware, where spreading the load over multiple virtual SCSI adapters can improve performance because the OS can manage separate disk queues across the adapters.
With all that said, I think we are going to carve the storage into six 50 TB volumes, so we are not spending a ton of effort creating volumes and mount paths, but we still end up with a more manageable mount path size than two 150 TB volumes would give us.
Thank you again,
Kevin