Question

Commvault sent delete requests for chunk files seven days after retention was over

  • 6 October 2023
  • 4 replies
  • 125 views


We took a backup on September 25th. The storage policy was configured as below.

  • Deduplication: Enabled
  • Retention: set to 1 day and 0 cycles
  • WORM Lock: Disabled

On the 26th, when the retention period had ended, Commvault sent delete requests to the cloud storage for the metadata and indexes. However, no delete requests were sent for the archived data chunks. The cloud storage deleted the metadata and indexes and answered each request with HTTP 204 (No Content).

MediaAgent CloudFileAccess log: 

5916 1760 09/26 12:15:28 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/CHUNK_META_DATA_14571
5916 1760 09/26 12:15:28 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/CHUNK_META_DATA_14571.idx
5916 1760 09/26 12:15:28 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14575/CHUNK_META_DATA_14575

Cloud Storage Log:

default_access_log.2023-09-26.log.1:172.18.105.239 - admin [26/Sep/2023:19:15:26 +0000] - "DELETE /deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/CHUNK_META_DATA_14571.FOLDER/0?versionId=18ad2eaed57 HTTP/1.1" 204 - 4ms

default_access_log.2023-09-26.log.1:172.18.105.239 - admin [26/Sep/2023:19:15:26 +0000] - "DELETE /deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/CHUNK_META_DATA_14571.FOLDER/2?versionId=18ad2eaed57 HTTP/1.1" 204 - 6ms
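
For reference, the access-log entries above follow the standard S3 versioned-delete pattern: a DELETE on a specific versionId that returns 204 (No Content) on success. Below is a minimal sketch of the same call with boto3, assuming HCP-CS is addressed through its S3-compatible API; the endpoint and credentials are placeholders, and only the bucket, key, and versionId are taken from the log.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hcpcs.example.com",  # placeholder HCP-CS endpoint
    aws_access_key_id="ACCESS_KEY",            # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

resp = s3.delete_object(
    Bucket="deleteimmediatezerocycle",
    Key="7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/CHUNK_META_DATA_14571.FOLDER/0",
    VersionId="18ad2eaed57",
)

# A successful versioned delete returns HTTP 204 (No Content), as in the log.
print(resp["ResponseMetadata"]["HTTPStatusCode"])  # 204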


From the 26th onwards, the CommCell Console showed that pruning of these files was complete, and any Browse and Restore operation returned "no backups found for this chosen time range." But the archived data chunks were still present in our cloud storage (HCP-CS).

On October 3rd, Commvault made delete requests for the actual data chunks, and the cloud storage responded by deleting all the archived data chunks.

MediaAgent CloudFileAccess log:

5916 8b0 10/03 12:15:16 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/SFILE_CONTAINER_086
5916 8b0 10/03 12:15:17 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/SFILE_CONTAINER.idx
5916 8b0 10/03 12:15:17 ### [CVMountd] DeleteFolder() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571
5916 8b0 10/03 12:15:17 ### [CVMountd] RemoveFile() - deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNKMAP_TRAILER_14571
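
In case it helps anyone correlating the two logs, here is a small, hypothetical helper that extracts the object paths from the CVMountd RemoveFile()/DeleteFolder() lines so they can be matched against the DELETE entries in the cloud access log; the log file name is a placeholder.

import re

# Matches "[CVMountd] RemoveFile() - <path>" and "[CVMountd] DeleteFolder() - <path>"
PATTERN = re.compile(r"\[CVMountd\]\s+(?:RemoveFile|DeleteFolder)\(\)\s+-\s+(\S+)")

def pruned_paths(lines):
    """Yield the chunk-store paths CVMountd asked the cloud library to delete."""
    for line in lines:
        match = PATTERN.search(line)
        if match:
            yield match.group(1)

with open("CloudFileAccess.log") as log:  # placeholder file name
    for path in pruned_paths(log):
        print(path)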

Cloud Storage log: 

default_access_log.2023-10-03.log.0:172.18.105.239 - admin [03/Oct/2023:19:15:10 +0000] - "DELETE /deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/SFILE_CONTAINER_032 HTTP/1.1" 204 - 6ms
default_access_log.2023-10-03.log.0:172.18.105.239 - admin [03/Oct/2023:19:15:10 +0000] - "GET /deleteimmediatezerocycle?versions&encoding-type=url&delimiter=%2F&max-keys=1000&prefix=7Z8CZ4_09.25.2023_11.54%2FCV_MAGNETIC%2FV_3222%2FCHUNK_14571%2FSFILE_CONTAINER_032 HTTP/1.1" 200 799 7ms
default_access_log.2023-10-03.log.0:172.18.105.239 - admin [03/Oct/2023:19:15:10 +0000] - "DELETE /deleteimmediatezerocycle/7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/SFILE_CONTAINER_032?versionId=18af6f732e7 HTTP/1.1" 204 - 5ms
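
The sequence in these three lines -- a plain DELETE, a GET on ?versions for the same key, then a DELETE with an explicit versionId -- is the usual way to fully remove an object from a versioned bucket: the plain DELETE only adds a delete marker, and the per-version deletes actually free the data. A sketch of that sequence with boto3; the endpoint is a placeholder, the bucket and key are from the log.

import boto3

BUCKET = "deleteimmediatezerocycle"
KEY = "7Z8CZ4_09.25.2023_11.54/CV_MAGNETIC/V_3222/CHUNK_14571/SFILE_CONTAINER_032"

s3 = boto3.client("s3", endpoint_url="https://hcpcs.example.com")  # placeholder

# Plain DELETE: in a versioned bucket this only inserts a delete marker.
s3.delete_object(Bucket=BUCKET, Key=KEY)

# GET ?versions: list the versions that still exist under this key.
listing = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY, MaxKeys=1000)

# DELETE ?versionId=...: remove each version explicitly to free the data.
for version in listing.get("Versions", []):
    s3.delete_object(Bucket=BUCKET, Key=version["Key"], VersionId=version["VersionId"])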

We want to know why Commvault delayed the delete operation for the data chunks by seven days in this case.


4 replies


Good day @Dipayan Sarkar

I apologize for the delay in responding to your query. We are currently arranging a Commvault expert review of your question. Please rest assured that we are working diligently to provide you with a detailed response as soon as possible. Thank you for your patience and understanding.


@Dipayan Sarkar - 

Since these are deduplication-enabled backups, the chunks written on 09/25 may have been referenced by subsequent full/incremental jobs. The CHUNK_META_DATA_* files contain non-dedupable data, so they are deleted as soon as the job ages. The SFILE_CONTAINER_* files, on the other hand, contain dedupable blocks that any subsequent job can reference. As long as a valid reference exists, the SFILE_CONTAINER_* files will not be deleted, as the sketch below illustrates.
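
Here is a toy model (not Commvault code, all names illustrative) of the reference counting described above: a CHUNK_META_DATA file belongs to a single job and ages with it, while an SFILE_CONTAINER only becomes eligible for physical pruning once no job references its blocks any more.

class SfileContainer:
    def __init__(self, name):
        self.name = name
        self.refs = set()  # IDs of jobs that reference blocks in this container

    def add_reference(self, job_id):
        self.refs.add(job_id)

    def age_job(self, job_id):
        # Logical aging drops this job's reference; the container can only be
        # physically pruned (the DELETE sent to cloud storage) at zero refs.
        self.refs.discard(job_id)
        return len(self.refs) == 0  # True -> safe to delete the container

container = SfileContainer("SFILE_CONTAINER_086")
container.add_reference("job_0925_full")  # the Sep 25 backup
container.add_reference("job_0926_incr")  # a later job reusing the same blocks

print(container.age_job("job_0925_full"))  # False: still referenced, kept
print(container.age_job("job_0926_incr"))  # True: zero references, pruned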


@Satya Narayan Mohanty 

Thank you for your explanation. 

This backup was created on a storage policy that did not have WORM lock enabled. I understand that when WORM lock is enabled, Commvault takes an additional 7 days to seal the DDB and create a new one. 

My question is whether the behavior is the same for a backup associated with a storage policy that has WORM lock disabled.

Also, I've been looking for a document that explains this behavior in the context of a non-WORM policy but haven't found one. If you could point me to such a document, it would be very helpful.


Yes @Dipayan Sarkar, this is the behavior of deduplicated data pruning. The behavior is the same whether or not WORM lock is used. As long as an SFILE_CONTAINER has a valid reference from other jobs, it will not be pruned.


This documentation may help you understand the behavior:

https://documentation.commvault.com/2023e/expert/11947_data_aging_for_deduplication.html
