
Question about how Amazon S3 Intelligent-Tiering actually works

  • August 27, 2025
  • 4 replies
  • 91 views

PatrickDijkgraaf
Bit

Hi all,

My customer is currently using S3 Glacier Deep Archive with Combined Tier: S3 Standard-Infrequent Access. So far, so good.

However, they would like to use S3 Cross Region Replication, which does not replicate data written to the Deep Archive tier. So we are looking for an alternative solution.

One alternative would be to use S3 Intelligent-Tiering (the Combined Tier is not available in that case). We believe that would allow the objects to be replicated before they are moved to the Deep Archive tier.
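For reference, the replication setup we have in mind would look roughly like the sketch below. This is illustrative only: the bucket names, region, and IAM role ARN are placeholders, and both buckets would need versioning enabled for CRR to work.

```python
import boto3

# Sketch of the intended setup: replicate objects while they are still in a
# warm tier, before Intelligent-Tiering ages them into an archive tier.
# Bucket names, region, and the IAM role ARN are placeholders.
s3 = boto3.client("s3", region_name="eu-west-1")

s3.put_bucket_replication(
    Bucket="source-backup-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-backups",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::replica-backup-bucket",
                    # Land the replicas in Intelligent-Tiering as well.
                    "StorageClass": "INTELLIGENT_TIERING",
                },
            }
        ],
    },
)
```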

But I am unable to find a clear description of how that actually works with Commvault.

 

  1. Is all data indeed initially written to the “warm” tier?
  2. Will it allow the data to be replicated by CRR?
  3. Who determines which data can be moved to the Deep Archive Tier? Is that Amazon or Commvault?
  4. If it’s Amazon that determines which data can be moved to the Archive Tier, how do we ensure metadata and indexes are kept in the warm tier, even if they are not accessed for a while?

 

Hope somebody can explain this in a clear manner.

Thanks!

Best answer by Jace Ross • Vaulter • September 11, 2025

Hi Patrick,

Thanks for reaching out. Commvault typically treats S3 Intelligent-Tiering like any other AWS S3 storage class. Intelligent-Tiering automatically ingests data into the Frequent Access tier, analyses its access patterns, and then moves it between the less frequently accessed tiers based on access frequency.

1) Yes. When Commvault writes to Intelligent-Tiering, the data goes into the Frequent Access tier first; once the bucket determines the data isn't being accessed frequently, it moves it into the archive tiers, from warmest to coldest.

2) I cannot see anything stating that Intelligent-Tiering cannot be used with CRR, and given how it works (newly written data sits in the Frequent Access tier, and accessing archived data shifts it back to a warm tier), I don't believe there would be any issue.

3) This is determined purely by Amazon, based on the access patterns it monitors. (A sketch of the bucket-level configuration involved follows after point 4.)

4) Unless there is a specific need, I don't see why this is necessary. When Commvault requires access to metadata/indexing, it will pull it with the backup data. Your indexing lives primarily on your MediaAgent, with index logs only needing to be pulled for pruned data. It might also be worth noting the DDB prerequisites for Intelligent-Tiering (micro-pruning and DDB sealing) here: https://documentation.commvault.com/11.40/commcell-console/supported_cloud_storage_products.html
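To illustrate points 1 and 3 at the S3 API level, here is a minimal boto3 sketch with a placeholder bucket name. Commvault writes the objects itself, and how the bucket's archive configuration gets put in place is up to whoever administers the bucket; this just shows what the underlying calls look like.

```python
import boto3

s3 = boto3.client("s3")

# Point 1: a new object written with the INTELLIGENT_TIERING storage class
# always starts in the Frequent Access tier ("my-backup-bucket" and the key
# are placeholders).
s3.put_object(
    Bucket="my-backup-bucket",
    Key="backups/chunk-0001",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# Point 3: the optional Archive Access / Deep Archive Access tiers only apply
# if an archive configuration like this exists on the bucket, and the tier
# transitions themselves are driven entirely by Amazon's access monitoring.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="my-backup-bucket",
    Id="archive-cold-data",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-data",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```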

Hopefully this provides some clarity for your query.

4 replies

PatrickDijkgraaf
Bit

Hi Jace. Thanks for the clarification; it aligns with my expectations.

 

About item 4.

If for some reason the indexes are no longer available on-prem, we would first need to recall the indexes to allow browsing the backup content, after which a second recall would be required for the data (similar to what we saw when backing up directly to Glacier in the past). Am I correct? This is not something the customer wants, which is why they are currently using the Combined Tier.

The ideal solution for this customer would be if we could use the Combined Tier with “Intelligent-Tiering” as the target for backup data and “Standard-Infrequent Access” as the target for index/metadata. But that combination is currently not available in the CV software as far as I can see.


Jace Ross
  • Vaulter
  • September 14, 2025

Hi Patrick,

With Intelligent-Tiering, manual recalls shouldn't be necessary: AWS handles the pull from the cold tiers back to a warm tier on request (see the sketch below for what this looks like at the S3 API level).
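For what it's worth, this is roughly what the underlying S3 calls look like when an object has aged into one of the archive tiers. The bucket and key are placeholders, and in practice Commvault drives the access itself; this sketch is purely illustrative.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key, purely illustrative.
head = s3.head_object(Bucket="my-backup-bucket", Key="backups/chunk-0001")

# For Intelligent-Tiering, ArchiveStatus is only present once the object has
# moved into the Archive Access or Deep Archive Access tier.
if head.get("ArchiveStatus") in ("ARCHIVE_ACCESS", "DEEP_ARCHIVE_ACCESS"):
    # Asynchronously moves the object back to the Frequent Access tier.
    # Unlike Glacier restores, no 'Days' element is given for
    # Intelligent-Tiering objects: the move back to warm is not temporary.
    s3.restore_object(
        Bucket="my-backup-bucket",
        Key="backups/chunk-0001",
        RestoreRequest={"GlacierJobParameters": {"Tier": "Standard"}},
    )
```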

Regarding the Combined Tier suggestion, you can contact your account manager to ask whether this can be raised as a feature request. It can then be put to our development team to consider for implementation.

Cheers,


PatrickDijkgraaf
Bit

OK, thanks for clarifying!