
2 Media Agents simply means failover - not load balancing in any way, correct?

  • July 15, 2025
  • 2 replies
  • 106 views

roc_tor
Byte

According to the sizing grids:
https://documentation.commvault.com/11.20/hardware_specifications_for_deduplication_mode.html

I might need two media agents per site for my storage load. That provides for up to 1000 TB of storage and, assuming just the 1st column, 1 DDB location per node - so two DDB locations. From what I’ve read:

  1. Using Windows NTFS (plain NTFS, no other Microsoft features at the moment), we cannot share the mount paths (SAN...) because NTFS isn’t built that way. Thus we can’t load balance between the two MAs; all we would have is a failover ability, correct?
  2. The DDB location - would we have one DDB on each node? One DDB path for MA#1 and one path for MA#2? If so, MA#2’s DDB would NEVER get used, because (per #1) the second MA is only ever used during failover. The second DDB therefore never sees any data, meaning storage explodes in size while DDB location 2 is being populated after a failover?
  3. I can’t duplicate a DDB to another MA because that functionality just isn’t possible. IF MA#1 crashes along with its DDB, I CAN bring up MA#2 and bring all its storage online. But, if I’m not mistaken, I can’t just restore the DDB from MA#1 to MA#2 and continue with life. It’s the SAME (SAN-shared) storage, but the DDB can’t be replicated.
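To make point 2 concrete, here is a toy deduplication sketch (my own illustration only - not Commvault code or behavior): a MediaAgent that takes over with an empty DDB re-stores data the first MediaAgent had already deduplicated, because none of the learned signatures carry over.

```python
# Toy model of a deduplication database (DDB). Illustration only -
# not Commvault code. It shows why a failover MediaAgent whose DDB
# starts empty ("cold") re-stores data the first MA already deduplicated.
import hashlib

class ToyDDB:
    def __init__(self):
        self.signatures = set()  # block hashes this MA has seen
        self.stored_bytes = 0    # bytes actually written to storage

    def backup(self, block: bytes) -> bool:
        """Store a block; return True if it was written as unique."""
        sig = hashlib.sha256(block).hexdigest()
        if sig in self.signatures:
            return False         # duplicate: only a reference is kept
        self.signatures.add(sig)
        self.stored_bytes += len(block)
        return True

blocks = [b"A" * 128, b"B" * 128, b"A" * 128]  # one block repeats

ma1 = ToyDDB()
for b in blocks:
    ma1.backup(b)                # MA#1 dedups the repeat: 256 bytes stored

ma2 = ToyDDB()                   # failover: MA#2 starts with a cold DDB
for b in blocks:
    ma2.backup(b)                # same data, but no history from MA#1

print(ma1.stored_bytes, ma2.stored_bytes)  # 256 256 - stored twice in total
```

Within its own run MA#2 still dedups the repeated block, but nothing MA#1 learned carries over, so total consumption doubles until MA#2’s DDB is populated.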

Under “resiliency”:
“Resiliency for backups allows for node failover by automatically redirecting the backup process to another node if 1 node is temporarily unavailable.”

Further:

  • Partition mode: In this mode, only 1 storage pool is configured using all the MediaAgents in a grid, with 1, 2, 4, or 6 partitions. Use a 6-partition DDB for 6 or more nodes. (roc_tor: One DDB on one MA? If that MA fails…)
  • Partition extended mode: In this mode, the MediaAgents host partitions from multiple storage pools (up to 20 storage pools per grid). Each storage pool can be configured with 1, 2, 4, or 6 partitions. (roc_tor: If one of the MAs goes down, we lose access to an entire swath of data.)

Am I reading all this correctly? Where is the scenario in which we can load balance between two MAs, for both storage and the DDB? There seems to be no form of clustering for load balancing here.

Thank you for your help in understanding!


2 replies

  • Vaulter
  • July 17, 2025

Hi ​@roc_tor 

 

Correct - we cannot share the mount paths with other media agents, because even if a mount path is shared, the other media agent cannot access it while the media agent hosting it is offline.

If the partitions on each media agent are created at the same time, each DDB partition will hold its own set of unique blocks.

And if the option “allow backups to run even if only 1 partition is online” is enabled in the DDB settings, backups will continue to run while the second partition is online; however, the signatures on the first partition will not be referenced, so data that appears unique will be written again, increasing storage space consumption.
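That effect can be sketched with a toy two-partition DDB (an illustration I put together, not Commvault behavior): each signature is owned by exactly one partition, and while a partition is offline, blocks it already knows get written again as if they were unique.

```python
# Toy 2-partition DDB. Illustration only - not Commvault behavior.
# Each signature is owned by exactly one partition; if that partition
# is offline, its signatures can't be consulted, so blocks it already
# knows get written again as if they were unique.
import hashlib

class PartitionedDDB:
    def __init__(self, n_partitions: int = 2):
        self.parts = [set() for _ in range(n_partitions)]
        self.online = [True] * n_partitions
        self.stored = 0          # bytes written to storage

    def backup(self, block: bytes) -> None:
        sig = hashlib.sha256(block).hexdigest()
        owner = block[0] % len(self.parts)  # toy routing; real DDBs route by signature
        if self.online[owner] and sig in self.parts[owner]:
            return                          # duplicate: only referenced
        if self.online[owner]:
            self.parts[owner].add(sig)      # record the new signature
        self.stored += len(block)           # written as "unique" data

ddb = PartitionedDDB()
data = [bytes([i]) * 64 for i in range(8)]
for b in data:
    ddb.backup(b)
first_pass = ddb.stored          # 8 unique blocks -> 512 bytes

ddb.online[0] = False            # first partition goes offline
for b in data:
    ddb.backup(b)                # rerun the exact same backup

print(first_pass, ddb.stored)    # 512 768: 4 already-known blocks re-stored
```

The rerun re-stores the half of the blocks owned by the offline partition, so consumption grows until that partition comes back.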

Please let me know in case of any queries.

Regards,
Karthik


sbhatia
Vaulter
  • Vaulter
  • Answer
  • July 17, 2025

Hi ​@roc_tor ,

Just to add to what Karthik has already pretty much confirmed.

You're absolutely right, and your understanding aligns with Commvault’s architecture guidelines around NTFS-based storage and MediaAgent deduplication behavior.

NTFS mount paths are inherently single-writer: only the owning MediaAgent can actively read from or write to them. As such, active-active deduplication across multiple MediaAgents isn't feasible with NTFS, since concurrent access is not supported at the filesystem level.

In the event of a failover, a standby MediaAgent can take control of the storage, but its associated DDB will start cold. It won’t have existing deduplication signatures, so you’ll notice a temporary dip in dedup efficiency until new data is processed and the DDB starts learning.

There’s no supported mechanism in Commvault to replicate or sync DDBs across MediaAgents in NTFS setups. Deduplication is always scoped to the MA that owns the DDB and its corresponding mount path.

For scenarios requiring load balancing or active-active deduplication, Commvault recommends shifting to storage platforms that support concurrent access, such as object storage, cloud-based targets, or shared network filesystems (e.g., NFS or SMB). These platforms allow for partitioned or grid deduplication models, which are designed for scalability and availability.
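The partitioned/grid model mentioned above can be pictured with a toy sketch (my illustration, not Commvault's implementation): N MediaAgents each host one DDB partition, and signature lookups are routed by hash, so every node serves a share of the load and a single node failure takes out only about 1/N of the signatures.

```python
# Toy "grid" model: N MediaAgents each host one DDB partition, and
# signature lookups are routed by hash so every node serves a share
# of the load. Illustration only - not Commvault's implementation.
import hashlib
from collections import Counter

N_NODES = 4
lookups = Counter()              # lookups served per node

def route(sig: str) -> int:
    """Pick the node owning this signature's partition."""
    return int(sig, 16) % N_NODES

for i in range(10_000):
    sig = hashlib.sha256(str(i).encode()).hexdigest()
    lookups[route(sig)] += 1

print(dict(lookups))             # roughly 2,500 lookups per node
```

Because the hash spreads signatures roughly evenly, lookup load balances across the grid, and losing one node leaves the other partitions' signatures intact - the trade-off the partition modes quoted in the question describe.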