Solved

Best practices to ensure maximum streams for Aux to MCSS

  • 28 July 2022
  • 2 replies
  • 374 views


Hello Community

 

We are doing a pilot, pushing around 50 TB of data over to MCSS. We have noticed that when the jobs are progressing, there are multiple streams of different sizes at the beginning of the job, and as the job progresses the number of streams drops. Is there a way to ensure that the data being pushed to MCSS is distributed equally among multiple streams, so that we get consistent performance out of the job through to the end?

 

 


Best answer by Jordan 28 July 2022, 08:06


2 replies


Hi @AbdulWajid 

 

By default, Commvault optimizes deduplication space savings by picking one job per subclient to process first, until at least one full backup for each subclient has been processed. This prevents identical data blocks from being sent to the destination DDB concurrently, which would result in two or more copies of the same block being written down (and thus wasted storage).
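To picture why that matters, here is a rough Python sketch (purely illustrative, not how the DDB is actually implemented; the block size and signature names are made up): when two streams look up the same new signature before either has registered it, both end up writing the payload, whereas processing one full per subclient first means the second lookup hits.

```python
# Toy destination "DDB": a set of block signatures that have already been stored.
ddb = set()
bytes_written = 0

def lookup(sig):
    """Phase 1 of a dedup write: ask the DDB if this signature is known."""
    return sig in ddb

def store(sig, size):
    """Phase 2: write the payload and register the signature."""
    global bytes_written
    bytes_written += size
    ddb.add(sig)

# Two subclients whose first full backups carry the same 128 KB block.
sig, size = "sig-of-shared-block", 128 * 1024

# Concurrent case: both streams do their lookup *before* either has stored
# the block, so both see "unknown" and both write the payload.
stream_a_is_new = not lookup(sig)
stream_b_is_new = not lookup(sig)
if stream_a_is_new:
    store(sig, size)
if stream_b_is_new:
    store(sig, size)
print("concurrent first fulls:", bytes_written, "bytes on disk")  # 262144 (two copies)

# Serialized case (roughly what the default behaviour gives you): the second
# subclient's full only runs after the first has registered its blocks,
# so its lookup hits and nothing extra is written.
ddb.clear(); bytes_written = 0
if not lookup(sig):
    store(sig, size)
if not lookup(sig):
    store(sig, size)
print("serialized first fulls:", bytes_written, "bytes on disk")  # 131072 (one copy)
```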

 

Performance-wise, with DASH Copy, it may seem to start off relatively slow; however, as the destination DDB is seeded, performance gets better over time, because fewer unique blocks need to be sent over the wire thanks to deduplication savings.
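As a rough illustration of why the copy speeds up once the DDB is seeded, here is a minimal Python sketch of a signature-based copy (the 128 KB block size, SHA-256 signatures and 10% change rate are assumptions for the example, not Commvault internals): once the destination DDB knows a signature, only a small reference crosses the wire instead of the payload.

```python
import hashlib

def dash_copy_sketch(blocks, dest_ddb):
    """Ship only blocks whose signature the destination DDB has never seen."""
    bytes_over_wire = 0
    for block in blocks:
        sig = hashlib.sha256(block).hexdigest()   # block signature
        if sig not in dest_ddb:                   # destination doesn't have it yet
            bytes_over_wire += len(block)         # ship the payload
            dest_ddb.add(sig)                     # DDB now knows this block
        # else: only the tiny signature reference is sent, payload is skipped
    return bytes_over_wire

dest_ddb = set()
day1 = [bytes([i]) * 128 * 1024 for i in range(50)]                    # 50 unique blocks
day2 = day1[:45] + [bytes([200 + i]) * 128 * 1024 for i in range(5)]   # ~10% changed

print("first cycle :", dash_copy_sketch(day1, dest_ddb), "bytes sent")  # everything
print("second cycle:", dash_copy_sketch(day2, dest_ddb), "bytes sent")  # only the new blocks
```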


Also, if you’re using Synth Fulls, make sure you’re using Indexing v2 and “Multi-Stream” Synth Fulls so the larger jobs are split across multiple streams. Back in the Indexing v1 days, Synth Fulls were single-stream and could be painful to aux copy.

Thanks,
Scott
 
