Question

Massive legacy DDBs - what are you all doing with them?

  • 21 June 2023
  • 9 replies
  • 141 views

Userlevel 2
Badge +6

Hi Team,

 

We have a very large, infinite-retention Storage Policy associated with Storage Pool “Pool1”.


It has grown to the point that we will soon create another Storage Pool and Storage Policy; let’s call these Pool2.

All clients will be migrated from Pool1 to Pool2, so Pool1 will stop receiving fresh data; Pool2 will receive it all from then on.

 

The question I have is around the massive leftover DDBs from Pool1. They are 2 × 1.8 TB and are hosted on the two Media Agents associated with Pool1.

 

Since Pool1 will stop receiving data, I am keen to decommission the Pool1 Media Agents. The secondary-copy, cloud-based backup data can be accessed from a number of Media Agents, so it does not have to be the Pool1 Media Agents; any Media Agent will do, provided it is mapped to the relevant Cloud Library mount points.

 

So the questions I have are:

 

1 -  What do we do with these large, legacy DDB’s?

      I understand we need to keep them for Commvault sync operations. Any other reason?

2 -  What activity will continue against these DDB’s once I migrate all clients away from them?

      I’m aware a DDB Backup will likely run. Anything else?

3 -  Should I seal them, once all clients have migrated across?

      This is an infinite-retention SP, but I wonder if sealing would further reduce potential CPU or memory overhead.

4 - Would anyone know the likely CPU overhead of leaving these DDB’s lying around?

      This would factor into which Media Agent I could move them to.

 

Thanks


9 replies

Userlevel 4
Badge +11

Hi @MountainGoat 

You have the scenarios covered correctly, although I suspect the wording is intended to safeguard users from trying to use a retired DDB for additional backups.

It would cause an issue if the retired DDB were used for backups. A retired DDB is not sealed in the traditional sense - it is sealed, but no new DDB is created in its place - so I still believe it would fit your scenario.

It seems that you’ve already addressed the issue without retiring the DDB. 

I will ask internally and confirm this point.

Userlevel 2
Badge +6

Thanks Emils,

 

As per that link you sent:

 

“You can retire a deduplication database (DDB) only when the storage policy copy has another DDB associated to the copy apart from the DDB that you plan to retire.”

 

So this is not for a “full” retirement of the SP or SP copy, but perhaps for cases where we have sealed (for example, due to corruption or sizing).

Or it could also be useful if you want to downsize from, say, a partitioned configuration to a single DDB?

Do I have those use-case scenarios correct?

In the event we have migrated all clients away from a now-legacy SP and it has a single DDB, I’m not sure we can use that retire option, as it says we would need another DDB to remain apart from the one we are attempting to retire.

Have I read that correctly?

Thanks for the feedback so far ...
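As a toy illustration of how I’m reading it (hypothetical names, not a Commvault API), the documented precondition amounts to:

```python
def can_retire_ddb(copy_ddbs, ddb_to_retire):
    """A DDB can be retired only if the storage policy copy has at
    least one other DDB associated with it besides the one being
    retired (per the documentation quoted above)."""
    others = [d for d in copy_ddbs if d != ddb_to_retire]
    return ddb_to_retire in copy_ddbs and len(others) >= 1

# A copy with a single DDB (our legacy scenario) fails the check:
print(can_retire_ddb(["DDB_Pool1"], "DDB_Pool1"))                # False
# A copy that already has another DDB passes:
print(can_retire_ddb(["DDB_Pool1", "DDB_Pool2"], "DDB_Pool1"))   # True
```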

 

Userlevel 4
Badge +11

Hi @MountainGoat 

We have a documented process which I believe fits this query perfectly:

 

Retiring a Deduplication Database

https://documentation.commvault.com/11.24/expert/135146_retiring_deduplication_database.html

 

I would move the DDBs first then retire.

Userlevel 7
Badge +23

 

“My logic around sealing these legacy stores was that it could further minimize any related DDB activity.”

 

Sounds good. Sealed stores are still referenced for data aging but will never be used for other purposes. Of course, if jobs never meet retention, then no data aging will happen anyway 👌
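As a toy sketch (hypothetical names, not Commvault code) of why infinite retention makes aging a no-op even though sealed stores are still consulted:

```python
from datetime import date

INFINITE = None  # sentinel value standing in for "infinite retention"

def jobs_eligible_for_aging(jobs, retention_days, today):
    """Sealed stores are still referenced during data aging, but a job
    is only pruned once it exceeds retention; under infinite retention
    nothing ever qualifies."""
    if retention_days is INFINITE:
        return []
    return [name for (name, completed) in jobs
            if (today - completed).days > retention_days]

jobs = [("job1", date(2020, 1, 1)), ("job2", date(2023, 1, 1))]
print(jobs_eligible_for_aging(jobs, INFINITE, date(2023, 6, 21)))  # []
print(jobs_eligible_for_aging(jobs, 365, date(2023, 6, 21)))       # ['job1']
```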

Userlevel 2
Badge +6

Thanks.

 

Yep, given that these “Pool1” DDBs will be fully legacy, and possibly sealed, I think I will move them to a very low-specced MA - possibly even a purpose-built one, purely for hosting legacy DDBs. We already have a number of other legacy DDBs dotted around the place.

And as they are legacy and inactive, I don’t believe they would feature in any Data Aging operations (all clients will be fully relocated to Pool2, etc.).

 

My logic around sealing these legacy stores was that it could further minimize any related DDB activity.

Userlevel 7
Badge +23

The DDB is needed for data aging - but since it’s infinite retention, I’d do what @Emils mentioned and move it to a low-spec Media Agent for archival. Technically, if you plan never to back up to the associated storage policies or aux copy again, we could figure out a way to remove it. Indeed, we have DDB backups, so it is technically archived anyway. However, that is a little messy, so if you can find a place to preserve it, that would be a better idea.

Userlevel 4
Badge +11

Hi @MountainGoat 

While sealing is not necessary, it would be logical to consider moving to a MediaAgent with lower specifications if the goal is to free up resources on the MediaAgent server.

Userlevel 2
Badge +6

Hi Emils,

 

This is an infinite-retention SP, so I’m not expecting anything to age out.

 

What I am thinking is that I seal them to make sure they are not “active”, and then migrate them to a low-specced MA.

 

 

Userlevel 4
Badge +11

Hi @MountainGoat 

Thank you for reaching out.

Once Pool1’s associated clients have moved off, the only activity that will run against the legacy DDBs will be for data-aging purposes.
