
Hi, I have a question about DDB assignment.

In my environment I have two File DDBs for the same disk library.

I think one of them was created by mistake when a new storage policy was created. So my question is: can I remove (seal) the DDB that I don't need, and will the storage policy using that DDB switch over to the other DDB? They are both under the same Global DDB.

The reason this is a problem is that this DDB is located on the CommServe nodes, and they don't handle the load well. I could of course move the DDB, but I would prefer to remove it instead.

Hello @J_R 

Thanks for the great question! If you have two partition DDBs in a GDDB used for File, I think this was not an accident but rather horizontal scaling.

https://documentation.commvault.com/2023e/expert/deduplication_building_block_guide.html#horizontal-scaling-of-ddbs

 

Horizontal Scaling of DDBs

When you create a storage pool with deduplication, the software creates a DDB with the name StoragePoolName_DDBStoreID. When you perform a backup operation for a subclient, the software renames the DDB to StoragePoolName_SubclientDataType_DDBStoreID. The value of SubclientDataType is Files for File System agents, VMs for virtual machines and Databases for databases. 
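To make that naming pattern concrete, here is a tiny sketch of how the name is composed (plain Python on my part, not Commvault code; the pool name, store ID, and data type values are made up):

```python
# Illustration only (not Commvault code): how the documented DDB name is built.
def ddb_name(storage_pool, ddb_store_id, subclient_data_type=None):
    # Before the first backup: StoragePoolName_DDBStoreID
    # After a backup runs:     StoragePoolName_SubclientDataType_DDBStoreID
    if subclient_data_type is None:
        return f"{storage_pool}_{ddb_store_id}"
    return f"{storage_pool}_{subclient_data_type}_{ddb_store_id}"

print(ddb_name("Pool01", 42))           # Pool01_42
print(ddb_name("Pool01", 42, "Files"))  # Pool01_Files_42
```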


In this case I suspect your Q&I time was high for a long time, or your primary signature count got over 200 million, so it scaled out for new subclients.
 

DDB horizontal scaling threshold number of primary entries per DDB:

When the average number of primary records available on a DDB partition disk reaches the threshold of 800 million, the software creates a new DDB.

DDB horizontal scaling threshold QI Time:

The Query and Insert (QI) time threshold is 1000 μs, averaged over a period of 30 days, and the average number of primary records available on a DDB partition disk should be 200 million or more.

When the average QI time exceeds the threshold and the average number of primary records is above 200 million per partition, then the software creates a new DDB.
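If it helps, here is how I read those two triggers, written out as a rough sketch (illustrative Python only, not actual Commvault logic; the function and variable names are mine, the numbers come from the documentation above):

```python
# Rough illustration of the documented horizontal-scaling triggers.
PRIMARY_RECORDS_HARD_LIMIT = 800_000_000  # avg primary records per partition disk
QI_TIME_THRESHOLD_US = 1000               # average Query & Insert time, microseconds
QI_AVERAGING_PERIOD_DAYS = 30             # QI time is averaged over 30 days
PRIMARY_RECORDS_SOFT_LIMIT = 200_000_000  # QI trigger applies only above this count

def needs_new_ddb(avg_primary_records, avg_qi_time_us_30d):
    # Trigger 1: the partition disks average 800 million primary records.
    if avg_primary_records >= PRIMARY_RECORDS_HARD_LIMIT:
        return True
    # Trigger 2: 30-day average QI time above 1000 microseconds,
    # but only once the partitions already hold 200 million+ records.
    if (avg_qi_time_us_30d > QI_TIME_THRESHOLD_US
            and avg_primary_records >= PRIMARY_RECORDS_SOFT_LIMIT):
        return True
    return False

print(needs_new_ddb(250_000_000, 1200))  # True  - QI time trigger
print(needs_new_ddb(150_000_000, 1500))  # False - not enough primary records yet
```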


Note that when this happens, new subclients will use the newly created DDB, and all the old subclients will carry on working as if nothing changed.

In your case it does not sound like there are any issues with the function of the environment, apart from potentially underperforming DDB disks. Make sure they are hosted on dedicated SSDs and follow our hardware guidelines:
https://documentation.commvault.com/v11/expert/hardware_specifications_for_deduplication_mode.html

Hope this helps answer your question!

Kind regards

Albert Williams


OK, that could actually explain why this DDB was created, since we had issues with our response times before. Since then we have added more SSDs and moved our other DDB there. So my question is: can I seal this DDB, or will it only create a new one? I don't want my CommServe server to have any DDBs on it.


Hello @J_R 

Sealing the DDB won't change anything. With horizontal scaling you end up with something like this:
 

DDB_1_VM
DDB_1_Database
DDB_1_File

Let's say you have 100 subclients, all doing file system backups. At this point your environment goes over the Q&I time threshold, so the system now looks like this:

DDB_1_VM
DDB_1_Database
DDB_1_File
DDB_1_File2

Now when you create a new subclient it will write and dedupe into File2, but the original File DDB is still active. This is because the original 100 subclients are still using File, and all your new subclients are going to be using File2. If you were to seal DDB_1_File, a new DDB would be created called "DDB_1_File3", and the original 100 subclients would use that one. They would not move into File2.
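To put that bookkeeping in more concrete terms, here is a toy sketch of the association I described (plain Python, nothing Commvault-specific; the names are invented):

```python
# Toy model of the subclient-to-DDB association described above.
ddb_of = {f"subclient_{n}": "DDB_1_File" for n in range(1, 101)}  # the original 100
active_file_ddb = "DDB_1_File2"  # created when the Q&I threshold was crossed

def add_subclient(name):
    # New subclients associate with the newest active File DDB.
    ddb_of[name] = active_file_ddb

def seal(old_ddb, successor):
    # Sealing closes the old store and spawns a successor; the sealed DDB's
    # subclients start a new baseline in the successor, they do NOT move into File2.
    for subclient, ddb in ddb_of.items():
        if ddb == old_ddb:
            ddb_of[subclient] = successor

add_subclient("subclient_101")      # lands on DDB_1_File2
seal("DDB_1_File", "DDB_1_File3")   # the original 100 now write into DDB_1_File3
print(ddb_of["subclient_1"], ddb_of["subclient_101"])  # DDB_1_File3 DDB_1_File2
```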

You can use a workflow to move clients from one DDB to another, but in your case I think you just need to find an SSD that is large enough, hosted on a machine that is suitable for managing what you are protecting, and move the entire DDB onto it.

It sounds like you want to move the entire DDB from the CS to an MA so the load is taken off the CommServe. To do this, move the entire DDB onto the new server once a large enough SSD is set up:
https://documentation.commvault.com/2023e/expert/moving_deduplication_database.html

Please advise if any of the above does not make sense and I am happy to explain further.

Kind regards

Albert Williams


Thank you very much for clearing this up for me, now I think I understand :)

 

/regards Jorgen

