I’m working on a new design which will consist of several ROBOs and one DC that will act as a DRC for the branch offices. Each ROBO should have a local MediaAgent holding the primary copy and replicate to the DRC as a secondary copy. In this design I would like to protect customer data against ransomware using WORM lock on disk libraries.
This will require DDB sealing and periodic Full jobs to be transferred to the target DRC site, which can be a lengthy process consuming a lot of resources. To mitigate that, I’ve been looking into the DDB Priming feature, which seems like a perfect solution for this issue.
I ran a simple test in my lab (a command-line sketch of the sequence follows the list):
Create a primary storage on MA1
Create a secondary storage on MA2
Run Full backup on client
Run Aux copy, note the “Data Transferred on Network”
Seal DDB on secondary target
Enable DDB Priming on secondary target
Run Full backup on client
Run Aux copy, note the “Data Transferred on Network”
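To make this reproducible, here is that rough qcommand sketch. The CommServe host, client, agent type and storage policy/copy names are just placeholders for my lab objects, and the sealing/priming steps I did from the CommCell Console GUI, so treat it as an outline rather than a copy-paste recipe:
qlogin -cs commserve01 -u admin
# 1st Full backup on the client
qoperation backup -c client01 -a Q_WINDOWS_FS -b defaultBackupSet -s default -t Q_FULL
# Aux copy to the secondary copy - note "Data Transferred on Network" in the job details
qoperation auxcopy -sp SP_Pool005 -spc Secondary
# ...seal the DDB and enable DDB Priming on the secondary target (CommCell Console GUI)...
# 2nd Full backup and Aux copy - compare the transferred amount with the first run
qoperation backup -c client01 -a Q_WINDOWS_FS -b defaultBackupSet -s default -t Q_FULL
qoperation auxcopy -sp SP_Pool005 -spc Secondary
qlogout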
Comparing these numbers, it’s obvious that the data is transferred again between the primary and secondary copy.
The documentation barely even mentions the DDB Priming option, and suffice it to say it does not explain a thing about it, but looking through the community I found this topic,
in which @Damian Andre mentions my exact case.
So I am wondering if there are any additional steps required here. Any ideas?
Hello @Robert Horowski
You are correct that there is very little documentation around this feature.
The wording of this feature makes me think that it is only for backup jobs and not Aux copy operations. I can see Damian's post does reference Aux copy jobs, but I have not seen it used that way in practice, and there is a difference between a DASH copy (an Aux copy with dedup on both sides) and a source-side backup job.
I would recommend documenting all your findings and raising a support ticket. They can get an official response from Dev regarding this functionality and its limitations. From that ticket a Doc MR will be raised to get Books Online reflecting the correct details, so people in the future do not hit the same wall as you.
Kind regards
Albert Williams
Hello @Albert Williams
Appreciate your post!
At a high level it seems like both operations - a backup with source-side dedup and a DASH copy - have a lot in common, but I agree that this may be misleading, and in the end only the devs know how it works under the hood.
It is a shame though, because you could really kill two birds with one stone, and even if these processes are more different than they seem, one could think that a simple workaround should be possible, i.e. after sealing the DDB for WORM lock, just Aux copy the last Full into the newly created DDB. All of the pieces are there, but there is no process to handle this.
Anyway, since this is the design phase I cannot really open a support case, because there is no support contract in place yet. It looks like I will just have to figure out how to do things differently.
Hello @Robert Horowski
I have sent an unofficial email to the Dev team asking the question and whether there is a limitation. No guarantee they don't already have a rule sending my emails to the Trash, but I will keep you posted!
It is also Xmas time, so I am sure it won't be a quick response.
Kind regards Albert Williams
Hello @Albert Williams ,
Thanks, it is much appreciated.
Fingers crossed your name is on the devs' whitelist in 2025!
Please let me know once they get back to you.
Have a happy new year!
Kind regards,
Robert
Hello @Robert Horowski
Can you confirm whether you have WORM enabled at all? There is a new feature with WORM where, once the DDB is sealed, it gets purged. If that were to happen, there would be no references available for priming when the new job runs.
Kind regards Albert Williams
Hello @Albert Williams
Yes, WORM is enabled on the secondary copy.
“There is a new feature with WORM where, once the DDB is sealed, it gets purged. If that were to happen, there would be no references available for priming when the new job runs.”
If that is what is happening, then it makes sense that jobs following the DDB seal need to be fully copied to the secondary copy. Unfortunately this new feature defeats the ability to have a central, WORM-locked vault without having to periodically re-send Fulls over the long-distance wire.
Is there maybe an Additional Setting that could disable this new feature?
I've run some additional tests:
Aux copy to a secondary copy with WORM lock disabled.
The sequence here is as follows: Full backup on the client, Aux copy, seal the secondary DDB, another Full backup on the client, Aux copy.
This should allow priming against the sealed DDB, but it is not happening. We still don't have an answer on whether DDB Priming is supported with Aux copy, so this may be the reason.
Full backup to a primary copy with WORM lock enabled.
If this new feature kicked in, I would expect that a Full backup after sealing the DDB would need to re-send data over the wire, just like in the original scenario with an Aux copy to a WORM-enabled secondary copy. This, however, is not happening. After sealing the DDB with WORM lock enabled on the primary storage, the next Full is not sent over the wire, only the metadata from what I can tell.
So either I got this wrong or it does not work as expected.
Currently in my lab I am on 11.36.35
Hello @Robert Horowski
Thanks for all the testing and prompt responses.
I have gone back to the thread that I have with Dev and really hammered the question (Caps Lock on): “Do secondary copies support DDB Priming?”. This is the root of our question and needs to be answered before we look into other things.
I will keep you posted!
Kind regards Albert Williams
Hello @Albert Williams
Of course, since I am the interested party it cannot be otherwise! But I also really do appreciate your time and effort in helping me, so keeping you waiting is the last thing I want to do.
Let me know once you hear from dev or if there is anything else I need to check or test.
Thanks,
Robert Horowski
Hi Robert,
Sorry for the late arrival to the thread - but to answer the basic question: yes, any dedupe store can employ the priming option. As this is an older feature combination, it has not made the transition to the Command Center easy config yet. As you note, if you set this up on the DRC store (assuming this is a WORM object or file lock library type), you can configure the priming option on the destination store.
With the rise of WORM storage pools to secure backups against ransomware risks, we have been enlisting some of the older features that are now operationally relevant again. Priming with WORM is a good example.
It is very effective in delivering the outcome you are looking for. The WORM storage option, when configured on the DRC store, will periodically seal the DDB collections into vaults and lock all the data based on the basic retention days plus the DDB seal period (for example, 30 days of retention plus a 30-day seal interval means data stays locked for roughly 60 days). It forces macro pruning of the collection, which normally eliminates the need for the old DDB; that is where the seal-and-sweep option was used. As Albert pointed out, WORM was setting the sweep of the old sealed DDB to recycle that DDB volume space. But in the priming case you need it to exist a while longer so that references to the data in the prior vault (sealed DDB) can be copied forward into the new DDB; that is the role of the priming option. As the DASH copy runs (especially when it hits the first batch of Full or Synthetic Full jobs into the new DDB/vault), it will trigger the destination MA at the DRC to reach back and copy the required segments from the prior store, avoiding transmission from the ROBO source copies.
This is on the list for migration to Command Center but for the moment you can use the quick option below to set it up on the destination store.
To help illustrate.
You will need to set the options in the DDB advanced settings for the destination DRC dedupe store.
This older BOL link offers some context on setting these options via the command line for the DDB advanced settings.
Disable (set to 0) the sealed Archive DDB rule (DDB setting: enableSIDBArchive), i.e. stop the DDB from being deleted once it is sealed:
qoperation execute -af <download location>\Update_DDB_Settings.xml -storagePolicyName "<StoragePool Name>" -copyName "<Storage Pool copy name>" -enableSIDBArchive 0
The storage pool and copy names are the ones provided in the Data Retention (SP/Copy) report.
Finally, confirm the settings and check that the sealed DDB is still present.
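For example, a minimal usage sketch - the CommServe host, login, XML path and pool/copy names below are only placeholders, so take the real ones from the Data Retention (SP/Copy) report:
qlogin -cs commserve01 -u admin
# stop the sealed DDB on the destination (DRC) pool copy from being deleted
qoperation execute -af C:\Temp\Update_DDB_Settings.xml -storagePolicyName "DRC_WORM_Pool" -copyName "Primary" -enableSIDBArchive 0
qlogout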
We’ve been enlisting this for service again in many new WORM storage deployments.
Cheers
brock
Hi @Brock
Late or not, I am very glad you joined us!
I really appreciate the detailed description of the feature; it makes it easier to understand and follow the whole process.
Not wasting any time, I tried to run this in my lab. To do so, on my existing MediaAgents I created new pools, set up a plan with a secondary location, applied the settings you mentioned in your post and ran new backups. Unfortunately it seems like I still can’t get it to work as expected, and I can still see data flying over the wire whenever I seal the target DDB.
The test was as follows:
Run Full backup on client (JID 371)
Run Auxiliary copy to SPC: Pool006_DR(WORM ON) (JID 372)
Seal Deduplication Database - Pool006_DR-ma2_Files_46 [ID: 46]
Run Full backup on client (JID 373)
Run Auxiliary copy to SPC: Pool006_DR(WORM ON) (JID 374)
You can tell that the DDB was sealed by looking at the “DDB store” column.
You can also tell that data was transferred by looking at the “Transferred” column.
Looking at cv-ma1 and cv-ma2 in real time shows that data is indeed being transferred over the Ethernet interface.
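(On a Linux MediaAgent, a quick way to watch this is sar from the sysstat package - the interface name below is just an example:
# per-second NIC throughput on the MediaAgent, filtered to the relevant interface
sar -n DEV 1 10 | grep -E "IFACE|eth0"
On Windows MAs, Task Manager or Performance Monitor shows the same picture.)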
More about the setup:
The primary copy is part of Pool006-ma1, created on the cv-ma1 server; WORM lock is disabled on this pool.
Pool006_DR(WORM ON) is the secondary copy and is part of Pool006_DR-ma2, created on the cv-ma2 server; WORM lock is enabled on this pool.
I’ve been sealing DDBs like crazy, but data is still flying between the servers.
Not sure if relevant, but I wanted to comment on the thing you mentioned:
“As you note, if you set this up on the DRC store (assuming this is a WORM object or file lock library type), you can configure the priming option on the destination store.”
I do not have any supported WORM lock device at hand; both the source and destination pools are created on regular VMDKs (my MediaAgents are virtualized), with WORM enabled on the destination pool but not on the source. I may be wrong here, but - in the context of the DDB Priming feature - since it’s just a filesystem rather than a protocol (as in the S3 cloud library case), I don’t think it should matter whether the underlying storage is actually WORM-capable or not.