Restore speed VMware

  • 17 September 2021
  • 32 replies

Userlevel 2
Badge +6

Hi All.


I am having issues with restore speed when restoring a VMware server.


When a restore is done with “vCenter Client” set to the vCenter, restore speeds are slow.

When “vCenter Client” is set directly to the ESXi host, bypassing the vCenter, we see roughly four times the restore speed.


Can anyone explain this behavior? I thought the vCenter was only used for control traffic, not for data movement.
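To put a hypothetical number on that factor-4 difference, here is a quick back-of-the-envelope sketch; the VM size and throughputs below are made-up illustration values, not measurements from this environment:

```python
def restore_hours(vm_size_gb: float, throughput_mb_s: float) -> float:
    """Return the restore time in hours for a given sustained throughput."""
    seconds = (vm_size_gb * 1024) / throughput_mb_s
    return seconds / 3600

# Hypothetical 2 TB VM restore:
via_vcenter = restore_hours(2000, 100)   # e.g. 100 MB/s going through vCenter
direct_esxi = restore_hours(2000, 400)   # ~4x faster going straight to the host

print(f"via vCenter: {via_vcenter:.1f} h, direct to ESXi: {direct_esxi:.1f} h")
```

At these assumed rates, a fourfold throughput difference turns a multi-hour restore window into a fraction of it, which is why the routing choice matters for RTO.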





Best answer by Mike Struening RETIRED 7 January 2022, 23:12



Userlevel 7
Badge +19

I'm puzzled to hear this setting (DataMoverLookAheadLinkReaderSlots) is still present and that there might be a lot of room for customers to improve their recovery times without knowing it.

What were the specific conditions that led development to come up with this specific additional setting as a possible workaround, e.g. which customer profile should be targeted for these settings to be applied?

Sure, every customer environment is different, but depending on factors like storage type there are still settings that can improve recovery speed, which imho is the most important reason we have Commvault in place. So when can we expect this fine-tuning to be automated? I would envision a capability within Commvault that lets you perform (automated) dummy backup/restore operations. The results could then be used to fine-tune performance-related settings in the background to deliver the most optimal backup and, most importantly, recovery experience. That would be a solution beneficial to all Commvault customers, and it would also give development valuable benchmark information to further optimize the experience.

Userlevel 7
Badge +23

@Onno van den Berg my understanding is that this was specific to this exact issue and not an overall suggestion for all.

I’ll reach out to the engineer who suggested the key and confirm the above.

If not, I’m with you.  Something that can benefit all should be standard (which is something we do request from dev quite regularly).

Userlevel 7
Badge +19

Ok. I'm really keen to learn what specific circumstances were in place in this case that required this setting to be applied.

Hope we get an answer soon!

Userlevel 7
Badge +23

I spoke to my colleague, who explained that it is definitely helpful for some people, but not necessarily everyone. It’s designed to increase the size of the sfiles to reduce read times (compared to reading what would otherwise be many smaller files).

Depending on various factors like retention, it may not be beneficial for everyone, though when it is, it works very well.
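The way I read "increase the size of the sfiles to reduce read times": with a fixed per-file open/seek cost, fewer larger files mean less total overhead. A toy model of that idea — the 5 ms per-file cost and the sizes are assumptions for illustration, not Commvault internals:

```python
def restore_open_overhead_s(data_gb: float, sfile_size_mb: float,
                            open_cost_ms: float = 5.0) -> float:
    """Toy model: total time spent just opening/seeking sfiles during a
    restore, assuming a fixed (hypothetical) cost per file."""
    num_files = (data_gb * 1024) / sfile_size_mb
    return num_files * open_cost_ms / 1000

# Restoring 2 TB: many small sfiles vs fewer, larger sfiles
small = restore_open_overhead_s(2048, 64)    # 64 MB sfiles
large = restore_open_overhead_s(2048, 1024)  # 1 GB sfiles
print(f"64 MB sfiles: {small:.0f} s overhead, 1 GB sfiles: {large:.0f} s")
```

Under these assumptions the per-file overhead shrinks in direct proportion to the sfile size, which would explain why the change only pays off once old, small sfiles age out (or the DDB is sealed).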

Userlevel 7
Badge +19

You are not making it very easy, @Mike Struening, because it's exactly those factors I'm interested in. The issue I have with this is that you only run into these kinds of situations when you need Commvault to be there, because they impact your RTO. As we both know, not many customers run automated, frequent recovery tests. Based on your feedback, and re-reading the whole thread, I came to the following conclusions; please correct me if I'm wrong:

  • The user performed a VM-level backup and tried to recover the VM.
  • Recovery was "slow".
  • Development's advice was to set a key to increase the size of the sfiles and to seal the DDB. The key's value has defaulted to 1 GB since version X, and most likely the user was still running an older version that didn't have this value set.
  • The DDB was sealed.
  • Recovery was much faster after applying DataMoverLookAheadLinkReaderSlots, which changes the number of slots read from the DDB at the same time to reduce overhead.
  • According to support, this only affects HyperScale installations.

Now I'm curious why they needed to apply the DataMoverLookAheadLinkReaderSlots setting, because it should only affect aux copies. We had the setting in place ourselves in the past to speed up tape-out copies.
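For what it's worth, my mental model of why more look-ahead slots would help: per-read latency is paid once per "wave" of concurrent reads, so more slots hide more of that overhead behind the actual data transfer. A purely illustrative sketch with invented latency and transfer numbers — this is not Commvault's actual implementation:

```python
import math

def read_time_s(num_chunks: int, slots: int,
                latency_s: float = 0.01,
                transfer_s_per_chunk: float = 0.002) -> float:
    """Toy model: chunks are read in waves of `slots` concurrent reads.
    Each wave pays the per-read latency once; transfer time is fixed."""
    waves = math.ceil(num_chunks / slots)
    return waves * latency_s + num_chunks * transfer_s_per_chunk

# 100k small reads: 1 look-ahead slot vs 64 slots
print(read_time_s(100_000, 1))   # latency dominates the total
print(read_time_s(100_000, 64))  # most of the latency is hidden
```

In this model the transfer time is identical in both cases; only the exposed latency changes, which matches the idea of the key reducing per-read overhead rather than raw bandwidth.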

The user reported much faster VM-level restores, but SQL restores were much faster as well. My take on this thread is that there is a very good chance that customers who have been running HyperScale for a long time, and who have storage policies with long-term retention, should consider sealing their older DDBs to get much better restore performance (guidance on which DDBs might be affected would be nice).


Userlevel 7
Badge +23

I’m asking similar questions myself.  Ideally, if such a setting can be detected as beneficial, there should be some notice advising you to seal the store after setting the value (automating the latter part, perhaps).

I’ll reach out to the right people and see what we can start working on here.

Userlevel 7
Badge +19

Exactly! And make them proactive instead of reactive. To be clear, I follow the change to the sfile size and the sealing of the DDB; improvements are being made, but I would expect Commvault to be proactive in informing customers.

To be honest, I doubt this is HyperScale-specific, so if you can get some details on why they stated that, it would be great. And last but not least, the DataMoverLookAheadLinkReaderSlots setting: what is the relation between this key and a restore? From what I know, it truly affects reads from the DDB, while a recovery should not use the DDB at all. The only thing I can think of is that, besides DDB lookups, it also affects the number of segments read at the same time from sfiles, but this is not documented.