Hi @Ken_H
Assuming you're using deduplication, from Storage Resources > Deduplication Engines you can compare the source and destination DDBs.
The key things to check are: Number of Jobs, Data size on Disk, and Application Size.
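If it helps to frame the comparison, the dedup savings ratio is just Application Size divided by Data size on Disk. Here's a minimal sketch (made-up numbers, not pulled from any Commvault API) of what you'd be comparing between the two engines:

```python
# Hypothetical numbers for illustration; read the real values from the
# properties of each engine under Storage Resources > Deduplication Engines.
source = {"jobs": 1250, "size_on_disk_tb": 18.4, "application_size_tb": 96.2}
dest   = {"jobs": 1180, "size_on_disk_tb": 24.1, "application_size_tb": 90.7}

def dedup_ratio(ddb):
    # Savings ratio: front-end (application) data per TB actually on disk.
    return ddb["application_size_tb"] / ddb["size_on_disk_tb"]

for name, ddb in (("source", source), ("destination", dest)):
    print(f"{name}: {ddb['jobs']} jobs, "
          f"{ddb['application_size_tb']:.1f} TB app over "
          f"{ddb['size_on_disk_tb']:.1f} TB on disk, "
          f"{dedup_ratio(ddb):.1f}:1 dedup")

# A job-count mismatch suggests copies that have not run or not aged on one
# side; matching job counts with very different sizes on disk suggest a
# difference in dedup efficiency between the two engines.
```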
Let me know the outcome, or send through a screenshot of those numbers, and we'll try to understand the discrepancy better.
Thanks and Kind Regards
Jason
Has your retention matured on your DR site?
Has all data correctly aged-out of your Primary library?
It is not uncommon for backup data to be retained in a library longer than intended.
For example, licenses may not have been released for decommissioned clients.
One-off backups of subclients may not have satisfied the cycle rules, so they can hang around until cleaned up manually.
If you view jobs against your Primary library's mount paths, have a look at the oldest jobs that are hanging around. If they are older than your expected retention, you might find a few quick wins where you can clean up.
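To make that check concrete, here's a rough sketch of flagging jobs that have outlived their retention (the job data and the retention value are hypothetical; substitute what the job view actually shows you):

```python
from datetime import date, timedelta

# Assumption for illustration: a 30-day retention on the primary copy.
RETENTION_DAYS = 30

# Hypothetical jobs; in practice, read these from the job view against the
# primary library's mount paths.
jobs = [
    {"job_id": 101, "client": "fileserver01", "end_date": date(2021, 3, 1)},
    {"job_id": 102, "client": "sql-old", "end_date": date(2020, 6, 30)},  # decommissioned client
    {"job_id": 103, "client": "fileserver01", "end_date": date(2021, 1, 5)},
]

cutoff = date.today() - timedelta(days=RETENTION_DAYS)
stale = sorted((j for j in jobs if j["end_date"] < cutoff),
               key=lambda j: j["end_date"])

for j in stale:
    print(f"Job {j['job_id']} ({j['client']}) ended {j['end_date']}: "
          f"older than {RETENTION_DAYS}-day retention, cleanup candidate")
```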
Hope that helps. There may be other factors in the mix, but checking the actual jobs that are hanging around is a reasonable place to start.
Thanks for the replies. Using the directions from @jgeorges, I've tracked down the issue. My production retention is:
- Daily backups are kept for 7 days
- Weekly backups are kept for 30 days
- Monthly backups are kept for 365 days (one year)
- Yearly backups are kept for 1096 days (three years)
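For a sanity check, I worked out roughly how many fulls each tier holds at steady state (a quick sketch that assumes one full per period and ignores cycles and incrementals):

```python
# Retention tiers from above: (retention in days, backup period in days).
tiers = {
    "daily":   (7, 1),
    "weekly":  (30, 7),
    "monthly": (365, 30),
    "yearly":  (1096, 365),
}

for name, (retention_days, period_days) in tiers.items():
    retained = retention_days // period_days
    print(f"{name}: ~{retained} full backups retained at any time")

# Prints roughly: daily ~7, weekly ~4, monthly ~12, yearly ~3. Sending the
# monthlies to the same target as the yearlies therefore multiplies the job
# count on that target about fivefold, which is how my cloud copy filled up.
```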
My DR site was running low on storage, so we added CommVault cloud storage to hold the yearly backups with the three-year retention. When I checked, I found the cloud storage to be 99% full because the previous administrator had configured CommVault to send both monthly and yearly backups to the cloud. I've updated the configuration so monthly jobs go to my DR site and yearly backups go to the cloud. I'll leave it alone and let things settle out over time.
Ken