Solved

Secondary Copy on Disk - Implications of WORM setting

  • 20 January 2021
  • 9 replies
  • 895 views

Userlevel 2
Badge +3

Commvault 11.18 (soon to be 11.20). We are on the cusp of eliminating our secondary backups to tape. The benefit of a secondary copy on tape was the built-in air-gapping (and the ability to move it offsite for safekeeping). We plan to move to creating our secondary copies on disk in a different city. Commvault’s built-in ransomware protection is a no-brainer, but WHAT ABOUT WORM? What are the implications of WORM storage for space consumption? Is there any scenario in which a WORM-enabled deduplicated secondary copy that is a true copy of the deduplicated primary copy (and with the same retention) would be any LARGER than the primary copy? Presumably, if 1,000 jobs share one block on the secondary storage, that block will not be removed until the last of those 1,000 jobs ages out.
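That presumption is essentially reference counting. A minimal sketch of the idea (hypothetical names and structures, not Commvault's actual implementation):

```python
# Hypothetical sketch: in a deduplicated store, a block is freed only
# when the LAST job referencing it ages out. This is not Commvault's
# internal code, just the reference-counting idea behind the question.

class DedupStore:
    def __init__(self):
        self.block_refs = {}  # block signature -> set of referencing job IDs

    def write_job(self, job_id, signatures):
        # Each job records a reference to every block it uses,
        # even when the block already exists on disk.
        for sig in signatures:
            self.block_refs.setdefault(sig, set()).add(job_id)

    def age_job(self, job_id):
        # Drop this job's references; a block is physically pruned
        # only once no remaining job references it.
        freed = []
        for sig, jobs in list(self.block_refs.items()):
            jobs.discard(job_id)
            if not jobs:
                freed.append(sig)
                del self.block_refs[sig]
        return freed

store = DedupStore()
for job in range(1000):
    store.write_job(job, ["blockA"])

# Aging 999 of the 1,000 jobs frees nothing...
for job in range(999):
    assert store.age_job(job) == []

# ...only aging the last referencing job frees the block.
assert store.age_job(999) == ["blockA"]
```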

Any info is appreciated.  

 

Thanks!

Best answer by Jordan 21 January 2021, 00:21

9 replies

Userlevel 3
Badge +7

Hi Mike,

 

Wanting to check: are you planning to use CV WORM disk copies with a storage array that is also WORM-enabled, e.g. SmartLock, immutable storage, etc.?

 

If you are using WORM with on-prem disk storage that also has a WORM feature enabled, you will want to disable the CV option “Prevent Accidental Deletion”, as the storage may throw errors when CV tries to modify permissions on a file before deletion.

https://documentation.commvault.com/commvault/v11_sp20/article?p=9319.htm#o109090

 

Additionally, as a general rule to prevent aux copies from causing secondary copies to be larger than the source copy, you will want to have Scalable Resources enabled during the aux copy job.

https://documentation.commvault.com/commvault/v11_sp20/article?p=11492.htm

 

Scalable Resources help copy the jobs in an order that will not cause unnecessary disk utilization on the destination DDB.

 

Lastly, when creating the secondary copy and DDB, if the source copy has large amounts of SQL data, there is a chance of data bloat on the destination copy. To prevent this from happening, follow the steps here before running the first aux copy. This will ensure that even SQL data is aligned on the destination copy and there is no increased disk utilization.

http://kb.commvault.com/article/55258

 

Please note that once you enable a WORM copy, changing retention or disabling WORM will require CV Engineering engagement, and this process takes a little time. It is best to plan out your requirements before implementation to avoid any delays.

 

If you have any further questions to these, feel free to ask.

 

Thank you

 

Userlevel 2
Badge +3

Thank you for the very complete response, Jordan!  

My target storage is NOT WORM-capable.  So this would be a 100% Commvault deal.   

I have much to read about here!   :-)

Cheers!

Userlevel 5
Badge +10

@Jordan Can you describe how Commvault Deduplication maintains the CHUNK_META_DATA, CHUNK_META_DATA.idx, and SFILE_CONTAINER.idx when the destination media is WORM?

Userlevel 3
Badge +7

Hey Anthony, 

 

When using software to do WORM functions, most of the impact is at the admin/GUI level where users would not be able to:

  • change retention
  • disable WORM
  • delete jobs manually

In the back end, the changes are minimal compared to the normal Commvault data management lifecycle. As jobs meet retention, they age off. Once jobs are aged, the CHUNK_META_DATA files are purged from disk during pruning (since these hold the job-related info), and the DDB processes unique blocks to work out which blocks within the SFILE_CONTAINERs can be removed. Because an SFILE_CONTAINER may be referenced by newer jobs, by the time all referencing jobs have aged the file will have long since met the WORM retention of the original job that laid down its blocks, so the Commvault software can then manipulate it (drill holes, truncate, or delete).
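The container-level pruning described above can be sketched roughly like this (hypothetical structures, not Commvault internals): an SFILE container is hole-punched block by block as references age out, and deleted only once every block in it is gone.

```python
# Rough sketch of SFILE_CONTAINER pruning: aged, unreferenced blocks
# are "drilled" (hole-punched) individually, and the container itself
# is a deletion candidate only when no live blocks remain.
# Hypothetical model, not Commvault's actual data structures.

class SfileContainer:
    def __init__(self, blocks):
        self.blocks = dict.fromkeys(blocks, True)  # block -> still present

    def drill_hole(self, block):
        # Reclaim the space of one aged block inside the container.
        self.blocks[block] = False

    def fully_pruned(self):
        # True once no block in the container is still present.
        return not any(self.blocks.values())

container = SfileContainer(["b1", "b2", "b3"])
container.drill_hole("b1")
assert not container.fully_pruned()   # b2, b3 still referenced
container.drill_hole("b2")
container.drill_hole("b3")
assert container.fully_pruned()       # container can now be deleted
```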

 

The main thing to note is what I mentioned earlier: if the storage array is also WORM-enabled, the array may block some of Commvault’s manipulation of the SFILEs, such as the permission change made by the “Prevent Accidental Deletion” feature, so it is best to turn that feature off with storage arrays like Isilon SmartLock, etc.

Userlevel 5
Badge +10

Thanks @Jordan, sorry, I should have been a lot more clear, but I was curious whether deduplication to a WORM target could work if it were a primary copy.

Userlevel 3
Badge +7

No issues, it definitely works :relaxed:

Userlevel 5
Badge +10

How do the various index and metadata files get updated if WORM is built into the hardware target, @Jordan (unless you are just referring to WORM handled at the software layer)?

Userlevel 4
Badge +5

 

@Anthony.Hodges There is a bit of architecture involved here to configure this properly when using a hardware WORM target. Thankfully, we do have a workflow to simplify the setup a bit for cloud. Obviously, as you pointed out, whatever is written to the locked storage target cannot be changed until the WORM lock has expired. But since old blocks of data could be referenced by new backup jobs, you could end up in a scenario where dedupe data is exposed after the WORM lock expires, putting new jobs at risk. For this reason you need to seal the dedupe store within Commvault periodically (to write new baseline data) and turn off micro-pruning. When configured correctly, your old dedupe stores will prune off just as retention is met. Our workflow makes this configuration automatically, so sealing of dedupe stores occurs behind the scenes without manual intervention.
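The seal-and-prune behavior can be modeled in a few lines (a hypothetical sketch of the policy, not Commvault code): with micro-pruning off, a sealed store is only deleted as a whole, once all of its jobs have aged and the hardware WORM lock has expired.

```python
# Hypothetical model of macro-pruning a sealed dedupe store on a
# hardware WORM target: no block-level (micro) pruning is possible,
# so the whole store becomes deletable only when every job in it has
# met retention AND the WORM lock on the store has expired.
from dataclasses import dataclass, field

@dataclass
class SealedStore:
    lock_expires_day: int                       # day the WORM lock lapses
    job_age_days: list = field(default_factory=list)  # day each job ages out

    def can_prune(self, today):
        # Whole-store (macro) pruning only: both conditions must hold.
        all_jobs_aged = all(d <= today for d in self.job_age_days)
        return all_jobs_aged and today >= self.lock_expires_day

store = SealedStore(lock_expires_day=90, job_age_days=[30, 60, 90])
assert store.can_prune(today=60) is False   # jobs remain, lock still active
assert store.can_prune(today=90) is True    # all jobs aged, lock expired
```

Sealing periodically keeps each store's job population finite, so this "all jobs aged" condition is eventually met for every store.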

I understand that may sound a bit complex, but it’s part of the implications of using a hardware WORM feature on the target.

Otherwise you can use our native ransomware locks and build a hardened solution around HyperScale or your own storage to achieve protection without the above implications.  It just depends on your requirements.

 

For more info check out this whitepaper: https://www.commvault.com/resources/greater-data-protection-immutable-backups-to-the-cloud-with-commvault

 

And check out this new workflow to simplify the configuration for Cloud WORM setups above: 
https://documentation.commvault.com/11.22/expert/128636_workflow_for_configuring_worm_storage_mode_on_cloud_storage.html

 

Also, here is info on Commvault’s native locking capability:
https://documentation.commvault.com/11.22/expert/9398_protecting_mount_paths_from_ransomware_01.html

 

Userlevel 7
Badge +15

How do the various index and metadata files get updated if WORM is built into the hardware target, @Jordan (unless you are just referring to WORM handled at the software layer)?

Those metadata files are rarely modified retrospectively, except for data aging purposes and other edge conditions, so it should not be an issue.

Reply