Background: We have 2 media agents at a secondary site, each with a single “data” mount path to storage that just hit 256 TB (the OS refuses to expand the volumes any further because they were formatted with 64 KB blocks). Each media agent has its mount path mapped to a dedicated storage volume (e.g. X:\), and each is also mapped (in Commvault) to the other media agent, so either media agent can read/write to X:\ on either media agent.
Note: All of the volumes in Commvault point to the same dedicated storage array (which only Commvault uses).
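For anyone checking my math, here is a back-of-the-envelope sketch of where the 256 TB wall comes from as I understand it: NTFS tops out at roughly 2^32 clusters per volume, so the maximum volume size is just the cluster (allocation unit) size times that count. This is only a sanity check of published NTFS limits, not anything Commvault-specific:

```python
# Rough NTFS sizing math: a volume holds at most ~2^32 clusters,
# so max volume size = cluster size * 2^32.
TIB = 1024 ** 4  # bytes in one TiB

for cluster_kib in (64, 128):
    cluster_bytes = cluster_kib * 1024
    max_bytes = cluster_bytes * 2 ** 32
    print(f"{cluster_kib:>4} KB clusters -> max volume ~{max_bytes / TIB:.0f} TB")

# Output:
#   64 KB clusters -> max volume ~256 TB   (where we are stuck now)
#  128 KB clusters -> max volume ~512 TB   (what a reformat would buy us)
```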
Questions: Since we cannot expand the existing mount path storage beyond 256 TB on either media agent, a few options are being explored:
- Make a new volume and format it with 128 KB blocks so it can grow to 512 TB if needed, then do a “move mount path” from the 256 TB-capped volume to the 512 TB-capped one (there's a quick cluster-size check after this list). Result: a single, very large mount path per media agent. From what I have read online about Windows performance with 128 KB blocks, this seems like a “probably don't want to do.” Why do the move at all? I missed that the max was going to be 256 TB, so there are quite a few TBs we allocated that we cannot use; the move would free up all of that extra space once the old mount path is deleted.
- Make several new, “smaller” volumes (say Y:\ and Z:\, each 64 or 128 TB), add them as mount paths, and just let Commvault use them as it sees fit. Leave the large 256 TB mount path alone, since adding more mount paths allows growth on its own.
- ???
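If we do go the reformat-and-move route, here is roughly how I would sanity-check the allocation unit size on the new volume before pointing a move mount path at it. The drive letter is just a placeholder, fsutil needs an elevated prompt, and this is only my own quick check rather than anything from Commvault docs:

```python
# Confirm the allocation unit (cluster) size of a Windows volume via fsutil.
# Run from an elevated prompt; "Y:" is a placeholder for the new volume.
import subprocess

def cluster_size_bytes(drive: str) -> int:
    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.strip().startswith("Bytes Per Cluster"):
            # Value may appear as "131072" or "131072  (128 KB)" depending on OS build
            return int(line.split(":", 1)[1].split()[0].replace(",", ""))
    raise RuntimeError(f"'Bytes Per Cluster' not found in fsutil output for {drive}")

if __name__ == "__main__":
    size = cluster_size_bytes("Y:")
    print(f"Y: cluster size = {size // 1024} KB")  # expect 128 KB after the reformat
```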
Basically: I have seen some posts about this here, and it seems like a single large mount path is generally considered worse than many smaller (dedicated) ones. I'm wondering whether there is a recommended “max mount path size” for practical reasons (not system limits!), i.e. for management, administration, or performance purposes. The only obvious benefits I can see are “smaller mount paths are easier to move” and “if something happens to one, it's less of a disaster than losing the one giant one.”

