Maximum size of a mount path on a disk library

  • 27 March 2021
  • 3 replies



We are in the process of migrating to a new disk library: a pair of NAS devices with 300 TB on each NAS.

We can carve out the NAS into multiple volumes with a maximum size of 150 TB per volume.

When we first set up our disk library about 10 years ago, the maximum recommended size of a mount path was 4 TB. I know that is old guidance and I am sure it has increased over the years.

We tried to find something in the documentation, and the closest we found was a reference to a maximum mount path size of 25 TB, though it appears that limitation can be overridden with a registry setting.

So a few questions:

  1. Is there a maximum mount path size in a disk library?
    • If there is, what is it?
    • If there is, what happens if you hit the limit without adjusting the registry, and can the limit be overridden with a registry setting?
  2. Regardless of a maximum mount path size, from a performance and management perspective, are there best practices for sizing the mount paths?
    • We currently have three mount path sizes across our disk libraries: some are carved into 4 TB volumes, some into 8 TB volumes, and some into 50 TB volumes. All have seemed to work fine so far.

Thank You for any feedback!



Best answer by MichaelCapon 29 March 2021, 17:15




Hey @KevinH 

There is no maximum mount path size - only what the OS can support.

That being said, historically mount path sizes of 1-4 TB were recommended so you could do maintenance on them individually (e.g., defrag) without affecting the entire library. That is not really a thing these days, especially if the underlying paths point to the same NAS/device. In the latter case you can skew space reporting, since Commvault won’t know multiple paths are on the same storage (e.g., two 4 TB mount paths on the same device should report at most 4 TB free in total, but will report 8 TB).
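The thread doesn’t show Commvault’s actual reporting logic, but the double-counting effect is easy to illustrate in general terms. Below is a minimal Python sketch (an assumption-labeled illustration, not Commvault code): naively summing per-path free space overstates capacity when several mount paths sit on the same filesystem, while deduplicating by the underlying device ID counts each filesystem only once.

```python
import os
import shutil


def naive_free(paths):
    # Sum free space reported for each path.
    # Double-counts when several paths share one filesystem.
    return sum(shutil.disk_usage(p).free for p in paths)


def dedup_free(paths):
    # Count each underlying filesystem (identified by its
    # device ID from os.stat) only once.
    seen = set()
    total = 0
    for p in paths:
        dev = os.stat(p).st_dev
        if dev not in seen:
            seen.add(dev)
            total += shutil.disk_usage(p).free
    return total


# Hypothetical example: two "mount paths" backed by the same filesystem.
paths = ["/tmp", "/tmp"]
print(naive_free(paths))  # roughly double the real free space
print(dedup_free(paths))  # free space counted once
```

With two paths on one device, `naive_free` reports about twice what `dedup_free` does, which is the same shape of error as two 4 TB mount paths on one NAS showing 8 TB free.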

In essence, if the underlying storage device is the same, I don’t believe there is any benefit to carving it into smaller mount paths. You could get more control over the streams per path, but there isn’t really a point if it’s the same device. But I’ll let others chime in and see if there is another opinion based on scenarios observed in the wild!


Hi Kevin,


As @Damian Andre rightly said above, the limit is the OS-supported maximum.

In terms of sizing the mount paths, I think this is one of those “it depends” questions, depending on what the environment is going to look like and what the requirements are.

Giving one MediaAgent one large volume creates a single point of failure. For example, losing disks in the back end could potentially cause performance issues or volume corruption. Having multiple smaller volumes could mean less impact in that scenario, “if” they are created from different RAID groups (it depends on the configuration).

Also, if there was a need to perform a “Move Mount Path” operation to relocate the data for whatever reason, all the data in that one large path would need to be moved in one go. With multiple smaller volumes, you could move them separately and still have some paths accessible.


There could also be a need to distribute load between the storage controllers on the array presenting the storage, in which case you could present separate volumes from different controllers.


I’m sure there are many other reasons why you “could” have a requirement for multiple smaller paths. Let’s see what the rest of the community has to add!


Best Regards,



Thank you for the feedback.  It was very good information.

In our case, all the back-end disks are on the same RAID 6 array, so there is no performance benefit to spreading out the mount paths. The only small argument you could make is that the MediaAgents might queue disk requests more optimally over multiple paths to the back-end storage than over a single path. I recall seeing this in VMware, where spreading the load over multiple virtual SCSI adapters can improve performance because the OS handles the disk queues better across multiple adapters.

With all that said, I think we are going to carve the mount paths into six 50 TB volumes, so we are not spending a ton of work creating volumes and mount paths but still have a somewhat more manageable mount path size than two 150 TB volumes.

Thank you again,