Question

Workflow to Configure WORM Storage Mode on Disk Libraries, not Cloud Libraries


Userlevel 2
Badge +9

Is there a workflow for configuring WORM Storage Mode on disk libraries, not cloud ones? The article here (https://documentation.commvault.com/11.26/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html) describes enabling WORM storage by running a workflow that configures WORM storage mode on disk libraries, not cloud ones.
Is it still available? 

I downloaded the Enable WORM Storage workflow from the Commvault Store, but it's not working here. After execution, the workflow got stuck in a pending state with: Error Code: [19:857] Description: Parse error at line 5, column 1. Encountered: workflow Source: cvault-cs, Process: Workflow

I took a look inside the script, and it seems to me that this workflow is specific to cloud libraries, not traditional disk libraries.

 

Incident 230313-409


11 replies

Userlevel 2
Badge +9

I've run into the same problem today (we're on 11.28) and can confirm your suspicion about why the workflow is not working on disk libraries.

In the SQL statement that looks up the associated mount paths, there is an AND filter hard-coded to MountPathTypeId = 7, which is the type for cloud libraries. Disk libraries have type 4.

Hence it returns no valid mount paths, and the workflow fails/gets stuck there.

If you modify that value in the workflow from 7 to 4, you should be able to run it successfully.
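
For illustration, the lookup inside the workflow is shaped roughly like this (a paraphrased sketch, not the exact workflow code; every name here except MountPathTypeId is an assumption on my part):

    -- Paraphrased sketch of the workflow's mount path lookup (SQL).
    -- MMMountPath and LibraryId are assumed names; MountPathTypeId is the filter to change.
    SELECT MP.MountPathId
    FROM   MMMountPath MP
    WHERE  MP.LibraryId = @libraryId
      AND  MP.MountPathTypeId = 7   -- 7 = cloud mount paths; change to 4 for disk mount paths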

I'm also curious about the correct procedure for this with regard to disk libraries.

 

Thank you, ChrisK. It's too late for us. I decided to run the Enable Retention Lock workflow and apply a set of adjustments manually. Here they are:

  • Hardware WORM lock should be set to twice the copy retention. For example, a 30-day copy retention means a 60-day hardware WORM lock. (OK; see screencaps Tela 1 to Tela 4.)
  • All dependent copies of the DDB must have the same retention. (OK; screencaps Tela 5 to Tela 7.)
  • DDB seal frequency will be set to the same value as the copy retention in days. (OK; screencap Tela 8.)
  • Micro pruning will be disabled on all mount paths. (I don't know how to confirm that; a hypothetical check is sketched after this list.)
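
One way to check the micro-pruning item might be to inspect the mount paths directly in the CommServe database. This is only a hypothetical sketch: the table, the join, and especially which Attribute bit encodes "micro pruning disabled" are assumptions on my part, so confirm them with Commvault support before trusting the result.

    -- Hypothetical check (SQL): list the WORM library's mount paths and their
    -- attribute bitmask. Which bit means "micro pruning disabled" is an
    -- assumption; verify it against your CommServe schema or with support.
    SELECT MP.MountPathId,
           MP.Attribute      -- bitmask of mount path options
    FROM   MMMountPath MP
    JOIN   MMLibrary   L ON L.LibraryId = MP.LibraryId
    WHERE  L.AliasName = '<your disk library name>'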

More here:

  • Disable the CV option "Prevent Accidental Deletion", as the storage may throw errors when CV tries to modify permissions on a file before deletion. (OK; screencap Tela 9.)
  • Additionally, as a general rule, to prevent aux copies from making secondary copies larger than the source copy, you will want Scalable Resources enabled during the aux copy job. (OK; screencap Tela 10.)
  • Lastly, when creating the secondary copy and DDB, if the source copy has large amounts of SQL data, there is a chance of data bloat on the destination copy. To prevent this, follow the steps here before running the first aux copy. This ensures that even SQL data is aligned on the destination copy and there is no increased disk utilization. (I don't know how to do this, and I don't even know whether I still need to, since my version is much more recent than the one described in the documentation.)

Userlevel 3
Badge +12


Hi @Eduardo Braga,

 

Did you face any issues with data pruning while enabling WORM from the storage side? I'd love to have a chat with you to get more details about how WORM is handled by Commvault.

Userlevel 2
Badge +9


Yes, we did.

 

We noticed activity in the SIDBPrune.log and SIDBPhysicalDeletes.log files, with multiple error messages like the one below:

 

5416 f2ac 04/13 00:32:49 ### 16-3 PruneChunk:1598 Cannot delete file [/cvault/hua/nfs/worm1/dat/TYT948_02.17.2023_15.47/CV_MAGNETIC/V_613214/CHUNK_13197104/CHUNK_META_DATA_13197104.idx], error [0xECCC0001:{CQiFile::Delete(1451)/Failed to change the mode of the file [/cvault/hua/nfs/worm1/dat/TYT948_02.17.2023_15.47/CV_MAGNETIC/V_613214/CHUNK_13197104/CHUNK_META_DATA_13197104.idx] to read/write} + {CQiFile::Chmod(642)/ErrNo.1.(Operation not permitted)}]

 

We took all the precautions we knew about, but something didn't work right. We have a disk library (not a cloud one) with 4 mount paths. Each one is a volume that the storage system exposes to application servers (like Commvault) for shared file access over protocols such as Common Internet File System (CIFS) and Network File System (NFS). In our case we're using NFS.

The disk library is the destination for the primary copy of the storage pool STGPOOL_WORM, and it's not shared with any other storage pool or storage policy. We had 3 copies associated with STGPOOL_WORM.

Here are the specs of one copy:

[screenshot]

It appears that Commvault tries to change file attributes when it isn't possible, because the files are already protected on the storage side. The storage starts protecting a file two hours after it is written (the Lockout Wait Time, in hours). We don't know why Commvault tries to change them or when exactly that happens.

 

As the MediaAgent servers are shared with other storage policies, the system suddenly slowed down, and Q&I times increased while Commvault tried to change the attributes of its structures and persistently failed.
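
One way to gauge how large the pruning backlog grows while those deletes keep failing is to count the pending entries in the CommServe database. This assumes, as we understand it, that MMDeletedAF is the table holding archive files waiting for physical pruning; verify that before relying on it.

    -- Count pending physical-prune entries per deduplication store (SQL).
    -- MMDeletedAF as the pending-prune queue is an assumption; verify first.
    SELECT SIDBStoreId, COUNT(*) AS PendingEntries
    FROM   MMDeletedAF
    GROUP  BY SIDBStoreId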

 

Until we find a solution, we will disable these copies; and to avoid unnecessary error messages in the SIDBPrune.log file, we disabled Physical Pruning.

Unfortunately, we have to do this for each new DDB created by Commvault.

 

Incident 230423-108

Userlevel 2
Badge +9

We checked the Automatically Delete option on the storage side. We know this is wrong, because Commvault is supposed to manage this data, but it's a workaround. On the Commvault side we disabled data pruning, and to avoid wasting space on the storage side, the files are removed automatically after the retention period.

Userlevel 2
Badge +9

As a side effect, this workaround messes up reports like DDB Performance and Status.

 

Userlevel 2
Badge +9


Hi Braga,

Can you download the WORM workflow? I can't download it. T^T Can anyone help download it for me? Please...

Sorry, buddy.

Userlevel 3
Badge +5

With Platform Release 2023 (11.30), you can take a look at "Locking Retention and Deletions with Compliance Lock"; otherwise, perhaps the Enable Retention Lock workflow is what you are looking for. You can find it in the Store here.

Userlevel 3
Badge +12


Thanks a lot, @Eduardo Braga, for sharing all those details with us.

 

I was worried about physical pruning, since we had the same issue once: the same error messages appeared in the SIDBPrune.log and SIDBPhysicalDeletes.log files, and our WORM storage was quickly filling up because the retention on the storage side was double Commvault's. We had to open a case with Commvault support so that they could disable WORM on the Commvault side and we could delete some old backup jobs manually to free up space on the storage.

 

It would be nice to have a proper way to implement this without having to resort to a workaround :)

 

Thanks again for all the details you shared previously.

 

Userlevel 2
Badge +9


Thank you, Steven. I need some clarification here.

 

Documentation from ver. 11.30: 

 

Enabling WORM Storage and Retention for Disk Libraries

https://documentation.commvault.com/2023/expert/157131_enabling_worm_storage_and_retention_for_disk_libraries.html

 

Configuring WORM Storage Mode on Disk Libraries

https://documentation.commvault.com/2023/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html

 

What is the difference?

It seems to me that the Configuring WORM Storage Mode on Disk Libraries page is specific to environments where the disk storage vendor supports WORM (Write Once Read Many) functionality, and in those cases Commvault provides the Enable Retention Lock workflow. But how does it work? Do I need to provide the IP address of the disk storage's admin console? Is there some kind of exposed API? Will the workflow set up the WORM parameters on the storage side?

 

Documentation from version 11.28 (the version I'm running now):

 

Configuring WORM Storage Mode on Disk Libraries

https://documentation.commvault.com/2022e/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html

 

This workflow didn't work in my environment. It didn't complete; as I said above, after executing the Enable WORM Storage workflow, it got stuck in a pending state with: Error Code: [19:857] Description: Parse error at line 5, column 1. Encountered: workflow Source: cvault-cs, Process: Workflow

I took a look inside the script, and it seems to me that this workflow is specific to cloud libraries, not traditional disk libraries.

 

What are my options? Update to version 11.30 and download the Enable WORM Storage workflow to configure WORM storage mode for disk storage vendors that support WORM (Write Once Read Many) functionality, or stay on version 11.28 and download the Enable Retention Lock workflow? And do I need to set up anything on the Commvault side?

 

I'm pretty disappointed with the way Commvault documents some of its functionality.

Badge


Hi Braga,

Can you download the WORM workflow? I can't download it. T^T Can anyone help download it for me? Please...
