What happens to data in storage after reaching the retention date in Commvault?

  • 10 June 2024
  • 4 replies


What happens to data in storage after it reaches its retention date in Commvault? Does it get deleted from storage, freeing up the space it occupied on day 1?

For example:

I have a job retention policy set to 30 days, and every day 1 GB of data is written to storage.

When this reaches day 31, will that 1 GB of data from day 1 be freed from storage, and will the daily backup job on day 31 write its data into that space?

How can I check from Commvault how much data will be freed when it reaches its retention date?

4 replies


Do I need to run a Data Aging job from Commvault in order to free up space on storage, or will data be deleted automatically when the retention date is reached?


Typically, jobs are written and retained for the specified combination of days and cycles. So if a full job is written on day 1 and retained for 30 days and 1 cycle, the full backup on day 31 will allow the day-1 job to age.
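The days-and-cycles rule above can be sketched as a small check. This is a hypothetical illustration of the logic, not Commvault's actual implementation; the function name and parameters are assumptions for the example:

```python
from datetime import date, timedelta

# Hypothetical sketch: a job is eligible to age only when BOTH conditions
# hold -- it is older than the retention days AND enough newer full cycles
# exist to satisfy the cycle count.
def is_aged(job_date, today, retention_days, retention_cycles, newer_full_cycles):
    past_days = (today - job_date) >= timedelta(days=retention_days)
    past_cycles = newer_full_cycles >= retention_cycles
    return past_days and past_cycles

# Day-1 full with 30-day / 1-cycle retention: once the day-31 full
# completes, both conditions are met and the day-1 job can age.
print(is_aged(date(2024, 6, 1), date(2024, 7, 1), 30, 1, newer_full_cycles=1))  # True
```

The cycle condition is why a job can sit past its retention days without aging: until a newer full backup completes the next cycle, the old cycle is still the most recent recovery point.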

Jobs are aged by a Data Aging job, which the system schedules to run every day. You can check this by right-clicking the CommServe > View > Admin Job History and filtering for Data Aging.

You can also run a Data Retention Forecast and Compliance Report to get more details.

This will generate a report showing all of the jobs on the disk library, along with each job's Retain Until Date and each job's Reason for Not Aging.


Hello @Malik 

I also wanted to add that if the data is deduplicated, the backup jobs will age, but the physical data written will remain as the DDB baseline until all subsequent jobs have aged. That is to say, a block of data won't physically age until all jobs referencing it have also aged.
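The behavior described above is essentially reference counting. Here is a minimal sketch of that idea with hypothetical names (not Commvault's implementation): a block frees space only when its last referencing job ages:

```python
from collections import Counter

refcount = Counter()  # signature -> number of jobs still referencing the block

def backup(job_blocks):
    for sig in job_blocks:
        refcount[sig] += 1          # new or reused block gains a reference

def age_job(job_blocks):
    freed = []
    for sig in job_blocks:
        refcount[sig] -= 1
        if refcount[sig] == 0:      # no surviving job references this block
            freed.append(sig)
            del refcount[sig]
    return freed                    # only these blocks physically free space

day1 = ["a", "b", "c"]
day2 = ["a", "b", "d"]  # day 2 reuses blocks a and b
backup(day1)
backup(day2)
print(age_job(day1))    # ['c'] -- "a" and "b" are still held by day 2
```

This is why aging a deduplicated job often frees far less disk space than the job's logical size: only the blocks unique to that job are reclaimed.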

Optimize Storage Space Using Deduplication

How Deduplication Works

The following is the general workflow for deduplication:

  • Generating signatures for data blocks

    A block of data is read from the source and a unique signature for the block of data is generated by using a hash algorithm.

    Data blocks can be compressed (default), encrypted (optional), or both. Data block compression, signature generation, and encryption are performed in that order on the source or destination host.

  • Comparing signatures

    The new signature is compared against a database of existing signatures for previously backed up data blocks on the destination storage. The database that contains the signatures is called the Deduplication Database (DDB).

    • If the signature exists, the DDB records that the existing data block is being reused. The block location is obtained from the associated MediaAgent and this information is used in creating the object's index entry. The duplicate data block is discarded.

    • If the signature does not exist, the new signature is added to the DDB. The associated MediaAgent writes the data block to the destination storage and uses its location to create the object's index entry.

    Signature comparison is done on a MediaAgent. For improved performance, you can use a locally cached set of signatures on the source host for the comparison. If a signature does not exist in the local cache set, it is sent to the MediaAgent for comparison.
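The workflow above can be sketched in a few lines. This is a hypothetical illustration only; a Python dict stands in for the DDB, a list stands in for the disk library, and zlib/SHA-256 stand in for whatever compression and hash algorithm the product actually uses:

```python
import hashlib
import zlib

ddb = {}        # signature -> location on destination storage (stands in for the DDB)
storage = []    # stands in for the disk library

def dedup_write(block: bytes):
    compressed = zlib.compress(block)                    # compression first (default)
    signature = hashlib.sha256(compressed).hexdigest()   # then signature generation
    if signature in ddb:
        return ddb[signature]          # duplicate: record reuse, discard the block
    storage.append(compressed)         # unique: write the block to storage
    ddb[signature] = len(storage) - 1  # record the new block's location
    return ddb[signature]

loc1 = dedup_write(b"hello world")
loc2 = dedup_write(b"hello world")     # same data -> same location, no second write
print(loc1 == loc2, len(storage))      # True 1
```

Writing the same block twice consumes storage once; only the index entry pointing at the existing location is created the second time, which is exactly why aged jobs may not immediately return space.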


Thank you,


Much appreciated, both of you. That helps a lot in clearing up my understanding :-)