Question

Slow SQL restores from Dell Isilon libraries

  • September 24, 2025
  • 4 replies
  • 56 views


We have a Dell Isilon storage cluster hosting SMB shares, which are configured as libraries in Commvault.

The SQL backups to these libraries seem okay. We are using DDB, and the stats show a write load of 40%, DDB lookup of 15%, and network load of 40% for backups. The reported throughput of 660 GB is not the relevant figure, though: the 34 GB of unique data took over an hour. There were also 4 objects in the backup, so I suspect it used multiple streams.

 

However, when restoring a single database, the restore took over 31 hours, and the logs show the culprit is slow reads. Can I increase the number of streams when restoring just one DB? Also, given that the issue stems from reads and everything looks fine on the Dell Isilon side (CPU, memory, etc.), and I had paused all other jobs, what should I do next? Maybe a read test?
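If a raw read test is the next step, one minimal way to sanity-check sequential read speed from the share, independent of Commvault, is to time a large cold file read and convert it to the same GBPH units the perf logs use. This is only a sketch; the UNC path in the comment is hypothetical and should be replaced with a real chunk file on the library mount.

```python
import time

CHUNK = 8 * 1024 * 1024  # 8 MiB sequential reads

def read_throughput(path: str) -> float:
    """Sequentially read `path` and return throughput in GB per hour."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against tiny files
    return (total / 1024**3) / (elapsed / 3600)

# Hypothetical UNC path - point this at a large, cold chunk file on the
# Isilon library mount and compare the result with the restore's GBPH:
# print(read_throughput(r"\\isilon\cvlib\CV_MAGNETIC\somechunk"))
```

Running this against a file that is not in any cache gives a rough upper bound on what a single restore stream can pull from the array.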

 


CVPerfMgrRestore.log:19611: |*27612*|*Perf*|11076969|     Perf-Counter                                                             Time(seconds)              Size
 |_SDT: Receive Data.......................................................    111900              212317453809  [197.74 GB]  [Samples - 9690095] [Avg - 0.011548] [6.36 GBPH]

CVPerfMgrRestore.log:19627: |*27612*|*Perf*|11076969|  |_SDT-Head: Network transfer..............................................       184              212317396177  [197.74 GB]  [Samples - 9690094] [Avg - 0.000019] [3868.75 GBPH]
CVPerfMgrRestore.log:19637: |*27612*|*Perf*|11076969|  |_SDT-Tail: Wait to receive data from source..............................    112231              212317454041  [197.74 GB]  [Samples - 9690096] [Avg - 0.011582] [6.34 GBPH]
CVPerfMgrRestore.log:19638: |*27612*|*Perf*|11076969|  |_SDT-Tail: Decryption....................................................        39              212317396409  [197.74 GB]  [Samples - 9690095] [Avg - 0.000004] [18252.55 GBPH]
CVPerfMgrRestore.log:19639: |*27612*|*Perf*|11076969|  |_SDT-Tail: Uncompression.................................................      1505              212347687920  [197.76 GB]  [Samples - 9690095] [Avg - 0.000155] [473.06 GBPH]
CVPerfMgrRestore.log:19643: |*27612*|*Perf*|11076969| Microsoft SQL Server Agent
CVPerfMgrRestore.log:19644: |*27612*|*Perf*|11076969|  |_Writer: DM: Physical Write..............................................       244              423366361088  [394.29 GB] [5817.40 GBPH]
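As a sanity check on those counters, converting the byte counts and seconds from the log excerpt above into GB per hour reproduces the figures the log reports, and shows where the time goes: the network transfer itself is fast, while almost all of the elapsed time is spent waiting to receive data from the source.

```python
def gbph(size_bytes: int, seconds: int) -> float:
    """Convert a perf-counter byte count and elapsed time to GB per hour."""
    return (size_bytes / 1024**3) / (seconds / 3600)

# Values taken from the CVPerfMgrRestore.log lines above
receive = gbph(212317453809, 111900)  # SDT: Receive Data -> ~6.36 GBPH
network = gbph(212317396177, 184)     # SDT-Head: Network transfer -> ~3868.75 GBPH

print(round(receive, 2), round(network, 2))
```

The ~600x gap between the two confirms the bottleneck is upstream of the network pipe, i.e. reading the data off the library.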
 

 

4 replies

Mohammed Ramadan
Explorer

Hello @Rajeev Mehta,
I hope you're having a great day

To better understand and address the slow read issue, I recommend using the Performance Analysis Tool in Commvault. It can help pinpoint where the bottleneck is happening during the restore.

You can find the documentation and usage guide here:
https://documentation.commvault.com/11.42/commcell-console/performance_analysis_tool.html

Quick guide: let the job run for about 10 minutes, then send over the job logs. Also, open the Media Agent (MA) logs and search for the perfanalysis_jobID log file; that should give us more insight into the read performance.

Let me know once you’ve got the logs.
Best Regards,
Mohammed Ramadan
Data Protection Engineer


  • Author
  • Apprentice
  • September 24, 2025

I had a look at the perfanalysis and it says that the bottleneck is the read:

CVPerfMgrRestore.log:19611: |*27612*|*Perf*|11076969|     Perf-Counter                                                             Time(seconds)              Size
 |_SDT: Receive Data.......................................................    111900              212317453809  [197.74 GB]  [Samples - 9690095] [Avg - 0.011548] [6.36 GBPH]


Erase4ndReuseMedia
Community All Star

Unfortunately, that sounds pretty typical for Isilon. 

In previous environments, we had followed every support recommendation and best-practice guide available and were never able to achieve a reasonable level of read performance of deduplicated data from an Isilon array (fortunately, we were only using it for long-term copies that we were unlikely to need to restore from, but it did hurt us when we needed to migrate off it).

SMB Signing was always something to consider - but I’m not sure it is still an issue with modern infrastructure.


  • Author
  • Apprentice
  • September 25, 2025


Actually, I ran another restore of the same DB during the day from the same storage policy, and the restore speed was around 22.21 GBPH compared to only 6.36 GBPH last time; the only difference is that it ran at a different time of day. The whole DB restored in 5 hours compared to 31 hours last time. The only explanation I can think of is that other jobs running in the evening could have impacted the restore. Also, although the whole DB size is 700 GB, roughly 20 GB/hr x 5 hours is only about 100 GB of unique data??
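That arithmetic checks out if the GBPH counters measure data read from media after deduplication rather than the logical database size. A quick check using the figures quoted in this thread (all numbers taken from the posts above):

```python
logical_db_gb = 700                # full logical size of the database

fast_read_gb = 22.21 * 5           # daytime run: ~111 GB read from media
slow_hours   = 111900 / 3600       # ~31.1 h, matching the 31-hour restore
slow_read_gb = 6.36 * slow_hours   # ~197.7 GB, close to the 197.74 GB logged

# Both runs read far less than 700 GB, consistent with restoring
# deduplicated data from media rather than the full logical size.
assert fast_read_gb < logical_db_gb and slow_read_gb < logical_db_gb
```

So the rate times duration lines up with the byte counts in CVPerfMgrRestore.log, and the "missing" volume is simply the dedup savings, not data that was skipped.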

Also, the DDB is not used during restores, so slow Q&I time won’t be a factor.

 

The DB team want to understand what changed. Looking at the CV logs, there is not much we can do from the CV side, as the load is still on the target reads, but it is old hardware, and I have advised them to run a few more restores so we can establish a baseline.