VMware restore speed - what do you get?

  • 25 November 2021
  • 8 replies
  • 251 views

Userlevel 1
Badge +5

Hi all.

 

Open Question!

 

I would like to know what I can expect from a VMware restore in terms of speed. At the moment I don’t know whether my restore speeds are fast or slow. I’m not talking about reverting from an IntelliSnap snapshot, but a plain restore over the network.

 

What VMware restore speeds do you get when restoring a single VMware server?

I get:

NBD: around 500 GB/h

SAN: around 250 GB/h

HotAdd: around 900 GB/h
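To put those figures in context, here is a quick conversion sketch (my own arithmetic, not from Commvault docs) turning the per-hour throughput above into MB/s, which is easier to compare against link speeds:

```python
# Convert restore throughput from GB/h to MB/s (assumes the figures in
# the post are gigabytes per hour, as Commvault job details report them).
def gb_per_hour_to_mb_per_sec(gb_h: float) -> float:
    return gb_h * 1024 / 3600  # 1 GB = 1024 MB; 3600 seconds per hour

for transport, gb_h in [("NBD", 500), ("SAN", 250), ("HotAdd", 900)]:
    print(f"{transport}: {gb_h} GB/h ≈ {gb_per_hour_to_mb_per_sec(gb_h):.0f} MB/s")
```

So even the fastest figure (~256 MB/s) is well below what a dedicated 10Gb link can carry.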

 

Thanks for helping me to determine if my restores are slow.

 

Regards

-Anders


8 replies

Userlevel 6
Badge +14

Hi Anders,


To get a better understanding here, I have a few questions: 

  • Can you confirm the disk provisioning type of the VM disk? From the results above I am assuming thin provisioned?
  • Was the NBD restore using the physical MA/VSA or a virtual VSA proxy?

Also, do you have any features enabled on the datastore volume, i.e. thin provisioning, deduplication or compression?

 

Best Regards,

Michael

Userlevel 1
Badge +5

Hi @MichaelCapon 

 

Sure.

 

Thin provisioning used.

NBD proxy is a physical HyperScale server.

Datastores are not deduplicated or compressed, and they do use thin provisioning.

 

Regards

-Anders

Userlevel 6
Badge +14

Hi Anders, apologies for the delay in getting back to you.

 

Given you are using thin-provisioned disks, it makes sense that NBD and HotAdd are faster here. Ref: https://documentation.commvault.com/11.24/expert/32040_san_transport_for_vmware.html


HotAdd can be faster than NBD/NBDSSL where there are network limitations between the MA and ESX. On the HyperScale node, does the ESX name resolve to a 1Gb or a 10Gb/faster interface?

 

I can’t give numbers for the “expected” speeds here, since factors such as storage performance, network performance and MA performance all come into play.

If you want to check the performance from the logs, look at the “Stat-” counters in CVD.log (MediaAgent) and vsrst.log on the VSA proxies used for the restore.
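As a small helper for the log check above, here is a sketch that pulls the “Stat-” lines out of those two logs. The log directory path is an assumption; adjust it to wherever your MediaAgent/proxy writes its logs:

```python
# Sketch: list recent "Stat-" performance counter lines from Commvault
# restore logs. LOG_DIR is a hypothetical path, not an official default.
import os
import re

LOG_DIR = "/var/log/commvault/Log_Files"   # assumption; adjust to your install
LOG_FILES = ["CVD.log", "vsrst.log"]       # MA log and VSA restore log

def stat_lines(text: str) -> list:
    """Return all lines containing 'Stat-' counters (case-insensitive)."""
    return [ln for ln in text.splitlines() if re.search(r"stat-", ln, re.I)]

for name in LOG_FILES:
    path = os.path.join(LOG_DIR, name)
    if os.path.exists(path):
        with open(path, errors="replace") as f:
            for ln in stat_lines(f.read())[-10:]:  # last few counters only
                print(f"{name}: {ln}")
```

This just filters lines; interpreting the counters (read/write/network times) still needs the log itself.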

 

Best Regards,

Michael

Userlevel 1
Badge +5

Hi @MichaelCapon .

 

Thanks for your reply.

 

We are using 10Gbit.

When restoring a bundle of 6 servers in one job via HotAdd I now hit 2 to 2.5 TB/h, so that’s great. Do you know if it’s the same HyperScale node doing the restore, or is it balanced across the HyperScale nodes?

 

I just hoped that other customers with live environments would chip in with their restore speeds, so I had something to compare with. You never know if you’re fast or slow if you can’t compare against anything.

 

 

Userlevel 7
Badge +23

@ApK , can you clarify what you mean about the Hyperscale nodes?  You want to see which node is doing the restore?

Userlevel 1
Badge +5

Hi @Mike Struening .

 

My question was whether only one of the HyperScale nodes provides the restore data, or whether it is balanced across all 6 nodes. 6 nodes would give faster restore speeds, since there are 6 times as many disks to restore from.

Regards

-Anders

Userlevel 6
Badge +15

Hi @ApK 

Are you using VSA Indexing V1 or V2?

Indexing V2 performs multi-stream restores, so if your VMs have multiple VMDKs this should increase performance.

I understood from your answers that you’re using HyperScale appliances, which I don’t use in the environments under my scope, so maybe they’re all on V2 already.

Userlevel 6
Badge +14

What @ApK is asking is whether all the nodes in the HyperScale cluster are used to read the data, and the answer is yes and no. Yes, because the data is scattered across the nodes in the cluster; no, because one of them holds the session to the client, so 10Gb will most likely be the limit. But bear in mind it also depends on the client hardware configuration: is the client also equipped with a 10Gb interface, and is it dedicated? Are there other devices in between, like firewalls?
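The point that a single node’s 10Gb link becomes the ceiling can be sanity-checked with quick back-of-the-envelope arithmetic (my own numbers, ignoring protocol overhead):

```python
# Theoretical maximum throughput of one 10 Gbit/s link, expressed in TB/h.
link_gbit_s = 10
gb_per_sec = link_gbit_s / 8             # 10 Gbit/s = 1.25 GB/s
tb_per_hour = gb_per_sec * 3600 / 1000   # convert GB/s to TB/h
print(f"~{tb_per_hour:.1f} TB/h per 10Gb link")
```

The observed 2 to 2.5 TB/h is roughly half that raw line rate, which seems plausible once protocol, storage and session overheads are taken into account.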

Reply