Hi Michael.
I have run some more restore tests, and looking into vsrst.log for the restores shows some crazy differences in readmedia speed, so I might have a different issue than I first thought, back when I suspected vCenter.
From the vsrst.log file:
Same VMware server restore, same HyperScale server, two different dates, very different read speeds.
09/17 10:31:52 10456669 stat- ID [writedisk], Bytes [90393542656], Time [702.082991] Sec(s), Average Speed [122.786054] MB/Sec
09/17 10:31:57 10456669 stat- ID [readmedia], Bytes [83501234239], Time [43.402569] Sec(s), Average Speed [1834.752742] MB/Sec
09/17 10:31:58 10456669 stat- ID [Datastore Write [SN771-D2250-L0009]], Bytes [91067777024], Time [708.095798] Sec(s), Average Speed [122.651483] MB/Sec
09/20 12:05:19 10481089 stat- ID [readmedia], Bytes [152791654482], Time [5328.833140] Sec(s), Average Speed [27.344350] MB/Sec
09/20 12:05:21 10481089 stat- ID [Datastore Write [SN771-D224E-L0008]], Bytes [162756820992], Time [1126.319442] Sec(s), Average Speed [137.809039] MB/Sec
09/20 12:05:21 10481089 stat- ID [writedisk], Bytes [162756820992], Time [1126.369482] Sec(s), Average Speed [137.802917] MB/Sec
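In case it helps anyone comparing their own logs, here is a minimal sketch of how these speeds can be recomputed from the Bytes and Time fields. This is a hypothetical helper, not a Commvault tool, and it assumes the stat- line format shown above:

```python
import re

# Hypothetical helper, not a Commvault tool: parse the "stat-" counter
# lines from vsrst.log and recompute the average speed from the Bytes and
# Time fields. MB here means 1024*1024 bytes, which matches the speeds
# printed in the log excerpt above.
STAT_RE = re.compile(
    r"stat- ID \[(?P<id>.+)\], Bytes \[(?P<bytes>\d+)\], "
    r"Time \[(?P<time>[\d.]+)\] Sec\(s\)"
)

def read_speeds(path):
    """Yield (counter_id, MB/sec) for every stat- line in the log file."""
    with open(path) as log:
        for line in log:
            m = STAT_RE.search(line)
            if m:
                mb_per_sec = int(m["bytes"]) / float(m["time"]) / (1024 ** 2)
                yield m["id"], mb_per_sec

if __name__ == "__main__":
    for counter, speed in read_speeds("vsrst.log"):
        print(f"{counter}: {speed:.2f} MB/Sec")
```

For the lines above this reproduces the logged values, e.g. 152791654482 bytes over 5328.83 seconds works out to the same 27.34 MB/Sec the readmedia counter reports.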
I will create a case to have this investigated.
@Damian Andre, the restores were done via NBD, but thanks for your suggestion :-)
Regards
-Anders
Hey folks,
This sounds like a textbook case of “clear lazy zero” if you are doing SAN restores - article here:
https://documentation.commvault.com/11.24/expert/32721_vmw0074_san_mode_restores_slow_down_and_display_clear_lazy_zero_or_allocate_blocks_vmware.html
I was writing up a description myself, but the KB article sums it up well.
Hi @ApK ,
Yes, you can raise a case for this. We’ll need the logs and the Job IDs to check it further.
Once raised, let us know the case number and we can monitor it internally.
Best Regards,
Michael
Hi Michael.
Would it be better to raise a case for this issue so it can be investigated further?
Thanks
-Anders
Thanks @ApK ,
Would you be able to share the vsrst.log and the Job IDs of the vCenter and ESX restores?
Best Regards,
Michael
Hi Michael.
Thanks for your reply.
That was my own thought as well, that vCenter is only used for control data; that's why I'm wondering what is happening here.
I’m using NBD for the restores and thin-provisioned disks.
I have run 10 tests this morning, and all restores directly via the ESXi host are 3-4 times faster.
I checked the vsrst.log file, and the MediaAgent read speeds are fast, so that is not the issue for sure. The issue is that vCenter is somehow involved in the restore.
Regards
-Anders
Hi @ApK ,
The vCenter should only be used for control data here, such as creating the VM, creating the VM snapshot, etc.
What transport method was used for both jobs here? Was the same disk provisioning used also?
In the vsrst.log on the VSA Proxy used for the restore, you should see counters under stat-. These should give a good indication of the media read and disk write speeds.
I’d suggest reviewing the logs and comparing the two jobs; there may have been an operation that took longer, or a difference in speeds for some reason. Hopefully the log will give more insight into this! See the sketch below for one way to line the jobs up.
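As a rough illustration, a small script along these lines could group the Average Speed values per job so the two restores can be compared side by side. This is a hypothetical sketch, not a supported tool, and it assumes the number after the timestamp in the stat- lines above is the job ID:

```python
import re
import sys
from collections import defaultdict

# Hypothetical sketch: group the Average Speed values in vsrst.log by job,
# assuming the number after the timestamp in each stat- line is the job ID,
# so two restore jobs can be compared counter by counter.
LINE_RE = re.compile(
    r"(?P<job>\d+) stat- ID \[(?P<id>.+)\], Bytes "
    r".*Average Speed \[(?P<speed>[\d.]+)\] MB/Sec"
)

def speeds_by_job(path):
    """Return {job_id: {counter_id: average_speed_in_mb_per_sec}}."""
    jobs = defaultdict(dict)
    with open(path) as log:
        for line in log:
            m = LINE_RE.search(line)
            if m:
                jobs[m["job"]][m["id"]] = float(m["speed"])
    return jobs

if __name__ == "__main__":
    for job, counters in speeds_by_job(sys.argv[1]).items():
        print(f"Job {job}:")
        for counter, speed in sorted(counters.items()):
            print(f"  {counter:<45} {speed:>12.2f} MB/Sec")
```

Run against the excerpt above, a large gap on the readmedia counter between the two jobs would point at the media read side rather than the datastore write.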
Best Regards,
Michael