
I have recently expanded our NetApp E-Series array with 48 more 12TB drives (it previously had 24 drives of 4TB each). In the past we were told to carve out a large DDP (Dynamic Disk Pool) for all of the storage we had present, then carve out 4 or 6 TB LUNs, create NTFS volumes on our Windows Server MediaAgents (MAs) with a 64K allocation unit size, and set those as our mount paths. I read the following thread, which makes me feel that approach is overkill and unnecessary:


I would like to do the following, and I have some questions about the steps:


  1. Create a DDP using all of the space available (392 TB after parity and spare capacity are taken into account). I've created similarly sized pools on other NetApps for different workflows in the past without issue, so I see no problems here.
     
  2. Create a single volume using all of that space save for 6 TB, which will be carved out separately for the DDB backup of our sister MA in a separate DC. This volume will be 386 TB.
     
  3. Focusing just on the mega volume: I will present it to the Server 2019 MA, format it as NTFS with a 128K allocation unit size, and mount it in an empty NTFS folder to avoid needing a drive letter (see the sketch after this list). We need 128K to support a volume larger than 256 TB, which is the limit with a 64K allocation unit size. I assume there is no value in using ReFS and that NTFS is the more accepted method.
     
  4. Add the new mount path to my existing library, set all 20 of my previous mount paths to not accept new data, wait until everything has bled out to the new mount path (see the monitoring sketch below), and then repeat the process with the older DDP so we can get it down to a single 96TB volume, which we will add as another mount path for a total of almost 500TB.
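Since the formatting and folder-mount in step 3 are the fiddly part, here is a rough sketch of that sequence using the stock Windows tools, wrapped in Python purely for readability. This is my own illustration, not anything NetApp- or Commvault-specific: the drive letter, folder path, volume label, and volume GUID are all placeholders.

```python
import os
import subprocess

# Placeholders: E: is assumed to have been temporarily assigned to the new
# LUN already (Disk Management or diskpart); adjust paths/labels to taste.
TEMP_LETTER = "E:"
MOUNT_DIR = r"C:\CVMountPaths\MP21"   # empty NTFS folder that becomes the mount path

def run(cmd: str) -> None:
    print(f"> {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. Quick-format the volume as NTFS with a 128K allocation unit size.
#    /A:128K is accepted on Server 2019; /Y suppresses the prompts.
run(f"format {TEMP_LETTER} /FS:NTFS /A:128K /Q /V:CV_MP21 /Y")

# 2. Mount the volume in the empty folder so no drive letter is consumed.
#    Run `mountvol` with no arguments to list the \\?\Volume{GUID}\ names.
os.makedirs(MOUNT_DIR, exist_ok=True)
volume_guid = r"\\?\Volume{00000000-0000-0000-0000-000000000000}" + "\\"  # placeholder GUID
run(f'mountvol "{MOUNT_DIR}" {volume_guid}')

# 3. Drop the temporary drive letter now that the folder mount exists.
run(f"mountvol {TEMP_LETTER}\\ /D")

# 4. Verify the cluster size took effect ("Bytes Per Cluster : 131072").
run(f'fsutil fsinfo ntfsinfo "{MOUNT_DIR}"')
```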

Regarding point 3 above, I am referencing this article:  https://learn.microsoft.com/en-us/windows-server/storage/file-server/ntfs-overview
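
For anyone who wants to sanity-check the limit from that article: NTFS can address at most 2^32 - 1 clusters per volume, so the ceiling is simply the allocation unit size times that cluster count. A quick script to tabulate it:

```python
# NTFS addresses at most 2**32 - 1 clusters per volume, so the maximum
# volume size is (2**32 - 1) * allocation_unit_size.
MAX_CLUSTERS = 2**32 - 1

for aus_kib in (4, 64, 128, 256, 2048):
    max_tib = MAX_CLUSTERS * aus_kib * 1024 / 1024**4
    print(f"{aus_kib:>5}K AUS -> max NTFS volume ~{max_tib:,.0f} TiB")
```

That gives 256 TiB at 64K and 512 TiB at 128K, so a 128K AUS comfortably covers the 386 TB volume (and Server 2019 supports clusters up to 2M if you ever need the 8 PiB ceiling).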


My main concerns are whether this workload is fine with a single large mount path, and whether a 128K allocation unit size is fine to accommodate a single 386TB volume/mount path.
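
On the bleed-out in step 4, a trivial way to watch the old mount paths drain at the filesystem level (Commvault's own mount path properties remain the authoritative view). The folder names here are hypothetical examples:

```python
# Rough helper: report space still in use on each retired mount path folder.
# shutil.disk_usage follows NTFS folder mount points to the underlying volume.
import shutil

old_mount_paths = [rf"C:\CVMountPaths\MP{n:02d}" for n in range(1, 21)]  # 20 old paths

for mp in old_mount_paths:
    total, used, _free = shutil.disk_usage(mp)
    print(f"{mp}: {used / 1024**4:7.2f} TiB still allocated of {total / 1024**4:7.2f} TiB")
```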

For reference, I am running Commvault 11.24.60 at this time.


Please let me know if I got any of the above wrong and if my proposed procedure looks sound.  Thank you!

From a Commvault perspective, there is no issue reading and writing to a huge volume/LUN. I have routinely seen PB-scale mount paths, although I'm not sure what the allocation unit size was, as those were typically on Isilon-style NAS storage. 64K is recommended, but 128K should be fine.

Architecturally, in the past there may have been a case for using smaller volumes for 'maintenance': i.e. if you had to take one offline to perform some sort of maintenance, backups could continue on another. But that depends entirely on your storage platform, and it feels like an outdated recommendation for modern disk storage. Still, think about whether it has any relevance to your config.

Agreed on ReFS; I don't think there is a tangible benefit at the moment. NTFS is tried and true!


Thanks for your help, Damian. I don't believe smaller LUNs have any relevance to our config, and I've never brought a mount path down for maintenance. Doesn't seem relevant to us.

