Hi Mike,
I wanted to check whether you are planning to use CV WORM disk copies with a storage array that is also WORM-enabled, e.g. SmartLock, immutable storage, etc.?
If you are using WORM with on-prem disk storage that also has its WORM feature enabled, then you will want to disable the CV option “Prevent Accidental Deletion”, as the storage may throw errors when CV tries to modify permissions on the file before deletion.
https://documentation.commvault.com/commvault/v11_sp20/article?p=9319.htm#o109090
Additionally, as a general rule to prevent aux copies from causing secondary copies to be larger than the source copy, you will want to have Scalable Resources enabled during the aux copy job.
https://documentation.commvault.com/commvault/v11_sp20/article?p=11492.htm
Scalable resources help copy the jobs in an order that will not cause unnecessary disk utilization on the destination DDB.
Lastly, when creating the secondary copy and DDB, if the source copy has large amounts of SQL data, there is a chance of data bloat on the destination copy. To prevent this from happening, follow the steps here before running the first aux copy. This will ensure that even SQL data is aligned on the destination copy and there is no increased disk utilization.
http://kb.commvault.com/article/55258
Please note that once you enable WORM copy, changing retention or disabling WORM will require CV Engineering engagement, and this process takes a little time. It is best to plan the requirements before implementation to avoid any delays.
If you have any further questions on these, feel free to ask.
Thank you
Thank you for the very complete response, Jordan!
My target storage is NOT WORM-capable. So this would be a 100% Commvault deal.
I have much to read about here! :-)
Cheers!
@Jordan Can you describe how Commvault Deduplication maintains the CHUNK_META_DATA, CHUNK_META_DATA.idx, and SFILE_CONTAINER.idx when the destination media is WORM?
Hey Anthony,
When using software to handle the WORM functions, most of the impact is at the admin/GUI level, where users will not be able to:
- change retention
- disable WORM
- delete jobs manually
In the back end, the changes are minimal compared to the normal Commvault data management lifecycle. As jobs meet retention, they age off. Once jobs are aged, the CHUNK_META_DATA files are purged from disk during pruning (since these hold the job-related info), and the DDB processes the unique blocks to work out which blocks within the SFILE_CONTAINERs can be removed. Since SFILE_CONTAINERs may be referenced by newer jobs, these files will always have effectively met the WORM retention of the original job that laid down the blocks, so the Commvault software can manipulate them once retention has been met (for example, drilling holes, truncating, and deleting).
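To make the block reference idea concrete, here is a minimal, purely conceptual Python sketch (not Commvault's actual implementation; all names are made up) of why job metadata can be purged as soon as a job ages, while shared container blocks are only reclaimable once no remaining job references them:

```python
# Conceptual sketch only - illustrates the dedupe pruning order, not Commvault internals.
from collections import defaultdict

class DedupeStore:
    def __init__(self):
        self.jobs = {}                      # job_id -> set of block hashes the job references
        self.block_refs = defaultdict(int)  # block hash -> number of jobs still referencing it

    def backup(self, job_id, block_hashes):
        """A backup job references blocks; previously seen blocks are deduplicated."""
        self.jobs[job_id] = set(block_hashes)
        for h in self.jobs[job_id]:
            self.block_refs[h] += 1

    def age_job(self, job_id):
        """When a job meets retention, its job metadata (think CHUNK_META_DATA) can be
        purged right away, but a block is only reclaimable once no newer job holds it."""
        reclaimable = []
        for h in self.jobs.pop(job_id):
            self.block_refs[h] -= 1
            if self.block_refs[h] == 0:
                del self.block_refs[h]
                reclaimable.append(h)       # candidate for hole-drilling/truncation
        return sorted(reclaimable)

store = DedupeStore()
store.backup("J1", ["a", "b", "c"])
store.backup("J2", ["b", "c", "d"])          # dedupes against J1's blocks
print(store.age_job("J1"))                   # ['a'] - 'b' and 'c' are still held by J2
print(store.age_job("J2"))                   # ['b', 'c', 'd']
```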
The main thing to note is what I mentioned earlier: if the storage array is also WORM-enabled, then during Commvault’s manipulation of the SFILEs the storage may block some of these operations, such as those used by the “Prevent Accidental Deletion” feature, so it is best to turn that feature off when using storage arrays like Isilon SmartLock, etc.
Thanks @Jordan, sorry, I should have been a lot more clear, but I was curious whether deduplication to a WORM target could work if it were a primary copy.
No issues, it definitely works
How do the various index and metadata files get updated if WORM is built into the hardware target @Jordan (that is, unless you are just referring to WORM handled at the software layer)?
@Anthony.Hodges There is a bit of architecture involved here to configure this properly when using a hardware WORM target. Thankfully, we do have a workflow to simplify the setup a bit for cloud. Obviously, as you pointed out, whatever is written to the locked storage target cannot be changed until the WORM lock has expired. But since old blocks of data could be referenced by new backup jobs, you could end up in a scenario wherein dedupe data is exposed after the WORM lock expires, putting new jobs at risk. For this reason you need to seal the dedupe store within Commvault periodically (to write new baseline data) and turn off micro-pruning. When configured correctly, your old dedupe stores will prune off just as retention is met. Our workflow applies this configuration automatically, so sealing of dedupe stores occurs behind the scenes without manual intervention.
I understand that may sound a bit complex, but it’s part of the implications of using a hardware WORM target feature.
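As a rough, hypothetical illustration of why the seal interval matters when sizing the hardware/object lock (the periods below are invented for the example; the actual values come from your copy retention and from what the workflow configures):

```python
# Rough illustration of WORM lock sizing with a sealed, non-micro-pruned DDB.
# Assumption for this sketch: the hardware/object lock must outlive every job
# that can reference a block in a given DDB store, i.e. roughly
# (seal interval + copy retention). Check the workflow docs for the real rule.
from datetime import date, timedelta

copy_retention = timedelta(days=30)   # retention on the storage policy copy (example)
seal_interval  = timedelta(days=30)   # how often the DDB is sealed / new baseline (example)

# A block written on day 0 of a store can still be referenced by a job run just
# before the store is sealed; that job must then survive its full retention.
required_lock = seal_interval + copy_retention
print(f"Minimum WORM lock on the target: {required_lock.days} days")

store_sealed_on   = date(2024, 1, 31)
last_job_in_store = store_sealed_on                      # last job referencing the store
prunable_after    = last_job_in_store + copy_retention   # the whole sealed store ages together
print(f"Sealed store becomes prunable after: {prunable_after}")
```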
Otherwise you can use our native ransomware locks and build a hardened solution around HyperScale or your own storage to achieve protection without the above implications. It just depends on your requirements.
For more info check out this whitepaper: https://www.commvault.com/resources/greater-data-protection-immutable-backups-to-the-cloud-with-commvault
And check out this new workflow to simplify the configuration for Cloud WORM setups above:
https://documentation.commvault.com/11.22/expert/128636_workflow_for_configuring_worm_storage_mode_on_cloud_storage.html
Also, here is info on Commvault’s native locking capability:
https://documentation.commvault.com/11.22/expert/9398_protecting_mount_paths_from_ransomware_01.html
How do the various index and metadata files get updated if WORM is built into the hardware target @Jordan (that is, unless you are just referring to WORM handled at the software layer)?
Those metadata files are rarely modified retrospectively, except for data aging purposes and other edge conditions, so it should not be an issue.
Hello, in order to protect our backup images from ransomware attacks, I want to enable WORM copy (from the Commvault side) for the primary Storage Policy copy.
Are the steps just to check “WORM copy” at the Storage Policy copy level?
https://documentation.commvault.com/11.24/expert/112844_enabling_worm_copy_on_storage_policy_copy.html
Thank you in advance,
Nikos
Hi @Nikos.Kyrm
Yes, this is correct, but one side note.
This is a software-managed WORM option; if your storage has an API, for instance, and ransomware or a hacker gets hold of the API and an authorized account, they can still delete the volume/LUN.
If your storage is shown in this section, you can also use this workflow to configure WORM storage:
https://documentation.commvault.com/2022e/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html
@Jos Meijer Thanks for your quick reply!
For now, I will proceed just with WORM copy (Retention Lock) from the Commvault side for our disk libraries (managed disks & Blob Storage).
About WORM storage, my only concern for now is the extra (roughly double) storage space that will be needed for sealing the DDB...
Currently, all Storage Policies are configured with a DDB.
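Just to illustrate where my “double storage” worry comes from (invented numbers, assuming the previous sealed store has to sit on disk alongside the new baseline until it meets retention):

```python
# Back-of-the-envelope estimate of disk usage with periodic DDB sealing.
# Illustrative assumptions only: 50 TB of unique (post-dedupe) data per baseline,
# sealed store kept until its last jobs meet retention.
baseline_tb  = 50              # unique data rewritten as a new baseline after each seal
active_store = baseline_tb
sealed_store = baseline_tb     # previous store still on disk while it ages off

worst_case_tb = active_store + sealed_store
print(f"Worst-case on-disk footprint: ~{worst_case_tb} TB "
      f"(vs ~{baseline_tb} TB without sealing)")
```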
Looking forward to your feedback,
Nikos