Hi @AbdulWajid
By default, Commvault will try to optimize deduplication space savings by processing 1 job per subclient first, until at least 1 full backup for each subclient has been processed. This prevents data blocks that may be identical from being sent to the destination DDB concurrently, which would result in 2+ copies of the same data being written down (and thus wasted storage).
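If it helps to see why concurrent seeding wastes space, here's a rough toy sketch in Python (hypothetical names, nothing Commvault-specific) of a dedupe store: when two in-flight jobs each check signatures before the other has committed its blocks, the same block ends up written twice, whereas serializing the first fulls avoids that.

```python
import hashlib

def signature(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def blocks_written(jobs, serialize_first_fulls: bool) -> int:
    """Count how many blocks end up physically written to the dedupe store."""
    ddb = set()      # signatures the destination DDB knows about
    written = 0

    if serialize_first_fulls:
        # One full per subclient at a time: the second job sees the
        # signatures the first job already committed.
        for job in jobs:
            for block in job:
                sig = signature(block)
                if sig not in ddb:
                    ddb.add(sig)
                    written += 1
    else:
        # "Concurrent" seeding: each in-flight job only sees the DDB as it
        # was before any of them committed, so shared blocks are missed.
        snapshot = frozenset(ddb)
        for job in jobs:
            for block in job:
                if signature(block) not in snapshot:
                    written += 1   # same data written once per job
    return written

# Two subclients whose first fulls share most of their data (e.g. OS blocks).
common = [b"os-block-%d" % i for i in range(3)]
jobs = [common + [b"subclient-A-unique"], common + [b"subclient-B-unique"]]

print(blocks_written(jobs, serialize_first_fulls=True))   # 5 blocks on disk
print(blocks_written(jobs, serialize_first_fulls=False))  # 8 blocks on disk
```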
Performance-wise, with DASH Copy, yes, it may seem to start off relatively slow; however, as the destination DDB is seeded, performance will get better over time (only the minimal set of unique blocks needs to be sent over the wire thanks to deduplication savings).
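To illustrate the seeding effect, here's another small hypothetical sketch (again plain Python, not Commvault's actual mechanism): once the destination DDB already holds a block's signature, only the signature check happens and the payload never crosses the wire, so later jobs copy far faster than the first one.

```python
import hashlib

def signature(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def dash_copy(job_blocks, dest_ddb) -> int:
    """Return how many block payloads had to cross the wire for this job."""
    sent = 0
    for block in job_blocks:
        sig = signature(block)
        if sig in dest_ddb:
            continue            # destination already has it: signature only
        dest_ddb.add(sig)       # ship the payload and record it at the dest
        sent += 1
    return sent

dest_ddb = set()
first_full  = [b"block-%d" % i for i in range(1000)]
second_full = first_full[:950] + [b"changed-%d" % i for i in range(50)]

print(dash_copy(first_full,  dest_ddb))  # 1000 -> DDB empty, everything ships
print(dash_copy(second_full, dest_ddb))  # 50   -> only changed blocks ship
```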
Also, if you’re using Synth Fulls, make sure you’re on Indexing v2 and using “Multi-Stream” Synth Fulls so the larger jobs are split across multiple streams. Back in the Indexing v1 days, Synth Fulls were single-stream and could be painful to aux copy.
Thanks,
Scott