
Hello,

2nd Copies to Azure Cool Storage.

In the context of architectural best practices for MediaAgent sizing (page 49), there is a statement advising against enabling data verification for cloud storage libraries.
https://documentation.commvault.com/2022e/expert/assets/pdf/public-cloud-architecture-guide-for-microsoft-azure11-25.pdf

  1. As of October 2023, is this advice still considered a valid practice?
  2. Furthermore, what methods are suggested to avoid issues with corrupted data chunks in cloud storage? Does Commvault propose sealing the DDB every 90 or 180 days for this purpose?

 

thanks

re: Verification - The drawback is cost, which is why it's not advised. You can absolutely do it, but consider the read costs and/or egress charges involved, depending on your storage tier and MediaAgent location.
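To make the cost point above concrete, here is a rough back-of-envelope sketch of what a full verification pass could cost: every stored byte gets read once, so cost scales with stored size times the per-GB read rate, plus egress if the MediaAgent sits outside the storage account's region. The function name and the prices in the example are placeholder assumptions, not Commvault or Azure figures; check your provider's current rate card.

```python
def dv_read_cost_estimate(stored_tb, read_price_per_gb, egress_price_per_gb=0.0):
    """Estimate the cost of reading every stored byte once for a DV job.

    egress_price_per_gb applies only when the MediaAgent is outside the
    storage account's region/network boundary; in-region traffic is
    typically free of egress charges.
    """
    stored_gb = stored_tb * 1024
    return stored_gb * (read_price_per_gb + egress_price_per_gb)

# Example: verifying 100 TB at a hypothetical $0.01/GB read rate,
# with an in-region MediaAgent (no egress charge):
print(round(dv_read_cost_estimate(100, 0.01), 2))  # 1024.0
```

The takeaway is that verification cost is linear in library size, which is why it adds up quickly on large cool-tier libraries.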

 

Sealing is only required with some storage tiers, like Glacier, in certain scenarios. In general it is not required. In some scenarios it may be needed to recoup a lot of fragmented space (S3 data is stored in 8 MB files, so if even one in-use block occupies a file, we can't delete it). That should take years before it becomes an actual problem, if it ever does.
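The fragmentation mechanism described above can be sketched as follows: deduplicated data lands in fixed-size chunk files (about 8 MB each per the reply), and a chunk file can only be deleted once no block inside it is still referenced, so a single live block pins the whole file. This is an illustrative model only, not Commvault's actual storage layout code.

```python
CHUNK_FILE_MB = 8  # approximate chunk-file size mentioned in the reply

def unreclaimable_mb(live_block_counts):
    """live_block_counts: one entry per chunk file, giving how many
    blocks in that file are still referenced by the DDB.

    Returns the MB that cannot be reclaimed, because any file with
    at least one live block must be kept in its entirety.
    """
    return sum(CHUNK_FILE_MB for live in live_block_counts if live > 0)

# Four chunk files: three fully expired, one holding a single
# referenced block -- that lone block pins a full 8 MB file.
print(unreclaimable_mb([0, 0, 1, 0]))  # 8
```

Sealing the DDB forces new baselines and lets such pinned files eventually age out, which is why it is occasionally used as a space-reclamation measure.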


@Damian Andre thanks

 

If the Azure MediaAgent server and the storage container are located in the same region, will there be any additional charges (read/ingress/egress) for DV (Data Verification) jobs?


 
