Hi All.
Can someone please explain how data aging works for Indexing jobs?
I am referring to this thread: https://community.commvault.com/self-hosted-q-a-2/why-are-index-server-jobs-agent-type-big-data-apps-being-retained-on-storage-longer-than-the-storage-policy-indicates-5132
I have a storage policy with several copies pointing to different libraries: a mix of disk, S3, and S3 with Object Lock enabled.
There are several Indexing jobs that reside on the S3 library with Object Lock enabled, but their dependent backup jobs are located on a different selective copy. Is there a way to keep the Indexing jobs with the backup jobs, or is this by design?
The issue is that the Object Lock enabled library is configured with 90 days of retention, while the other S3 library is configured with long-term retention ranging from 365 to 1825 days.
This behaviour prevents the sealed DDBs from being removed once all backup jobs have met retention, because the Indexing jobs are still retained.
I have seen the information at https://documentation.commvault.com/2022e/expert/backupset_level_indexing_index_cache_cleanup.html, but I am not sure it applies here.
I do see the section stating "An index backup on the primary copy is deleted only when more than three backed-up versions of an index (per storage policy) exist on the secondary storage". In my case, the index backups also exist on the selective copies that hold the backup data.
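To check that I am reading that rule correctly, here is a toy sketch of the aging condition as I understand it. This is only my interpretation, not Commvault's actual logic: the class and function names are made up, and the assumption that only versions newer than a given index backup count toward the "more than three" threshold is mine.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class IndexBackup:
    backup_date: date
    copies: set      # names of the copies this index version exists on

def prunable_index_versions(versions, secondary_copies, today, retention_days):
    """Toy model of the documented rule: an index version ages off only
    when it has met retention AND more than three versions of the index
    exist on secondary storage (interpreted here as: at least three newer
    versions survive on a secondary copy)."""
    ordered = sorted(versions, key=lambda v: v.backup_date, reverse=True)
    eligible = []
    newer_on_secondary = 0
    for v in ordered:
        past_retention = (today - v.backup_date).days > retention_days
        if past_retention and newer_on_secondary >= 3:
            eligible.append(v)
        if v.copies & secondary_copies:
            newer_on_secondary += 1
    return eligible

# Example: eight monthly index versions on a selective copy, 90-day retention.
today = date(2023, 6, 1)
versions = [IndexBackup(today - timedelta(days=30 * i), {"selective_s3"})
            for i in range(8)]
for v in prunable_index_versions(versions, {"selective_s3"}, today, 90):
    # Only versions that are both past retention and shadowed by at least
    # three newer secondary-copy versions are printed here.
    print(v.backup_date)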
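If that reading is right, an index version on the 90-day Object Lock copy could be held well past its retention whenever fewer than three newer versions exist on secondary storage, which would line up with what I am seeing with the sealed DDBs. Happy to be corrected if the counting works differently.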
Thanks.
Ignes