Solved

Read Performance on NetApp iSCSI Disk Library


Userlevel 1
Badge +6

Is anyone using NetApp for a CommVault disk library? We just replaced our HP MSA2050 with a NetApp FAS2720, a 72 x 14.5 SATA disk enclosure. Write performance is great, but reads are super slow; Aux copy is very slow for SQL DB and Exchange backups. I ran Validate Storage on one path, with all backup and aux copy jobs disabled, and the maximum read speed was 31 MB/sec.

[screenshots attached]


Best answer by Mike Struening RETIRED 8 June 2022, 22:41


10 replies

Badge +4

What was the READ/WRITE performance outside of CV?

Did you run IOMETER on the disks involved?
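
If you don't have IOMETER handy, even a rough sequential loop outside of Commvault will tell you whether the LUN itself is slow on reads. A minimal Python sketch (the mount path and file size are placeholders; adjust for your environment):

```python
import os
import time

MOUNT_PATH = r"I:\perf_test.bin"  # placeholder: a file on one of the library mount paths
SIZE_MB = 8192                    # use a file larger than RAM and the controller cache
CHUNK = 1024 * 1024               # 1 MiB per I/O

# Random data so it is incompressible (note: the same block repeats,
# so a dedupe-capable array may still collapse it).
buf = os.urandom(CHUNK)

# Write phase -- expected to look fast if the FAS write cache absorbs it.
start = time.time()
with open(MOUNT_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
print(f"write: {SIZE_MB / (time.time() - start):.1f} MB/s")

# Read phase -- the OS page cache can inflate this number, which is why
# the file should be larger than RAM (or flush caches between phases).
start = time.time()
with open(MOUNT_PATH, "rb") as f:
    while f.read(CHUNK):
        pass
print(f"read:  {SIZE_MB / (time.time() - start):.1f} MB/s")

os.remove(MOUNT_PATH)
```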

Typically, read speed problems like this come down to Anti-Virus software on the Media Agents compromising performance. I have seen 20-30 MB/sec jump to 450-600 MB/sec once the appropriate exceptions were placed in the AV software.

The other avenue is to contact your HW vendor and have them explain the read speeds.

Please let me know your progress.

 

Dwayne Mowers

Userlevel 1
Badge +6

@DMowers I haven’t used IOMeter yet, but with ATTO Disk Benchmark the read performance is about 50 times slower than write.

AV is disabled.

Badge +4

Is the disk 80% full or more?

I think it best at this juncture to get the hardware vendor on the call.

If you see errors in the logs for that library, note them and provide them to the vendor.

 

Userlevel 1
Badge +6

I tested a few of them; read performance is bad regardless of disk space usage. This NetApp is used only for the CommVault library, and each LUN is iSCSI.

Userlevel 6
Badge +15

Hi !

Regarding write performance, I suspect the writes are absorbed by the FAS cache, so of course they look fast, while reads have to come from the spindle disks themselves rather than from the cache.

From your screenshots I notice that you have multiple logical volumes used as mount paths, like I:, G:, K:, M:, Q:, etc.

If you use all of those mount paths on the same Media Agent at the same time, then even though you created just as many volumes on the NetApp device, they are probably all carved from striped slices of the same set of spindle disks in the array. That means it is as if you were writing to all regions of all the disks in the array at once, and in that case you get very bad performance.

I’m not sure this suits your needs, but if you configure the Commvault disk library to Fill and Spill the volumes rather than Spill and Fill, it could improve performance: parallel operations would target only a few disks rather than requiring all of them at the same time.
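
For readers unfamiliar with the two mount path allocation policies, here is a conceptual Python sketch of the difference (illustrative only, not Commvault’s actual implementation; the drive letters are the ones from the screenshots above):

```python
from itertools import cycle, islice

mount_paths = ["I:", "G:", "K:", "M:", "Q:"]  # drive letters from the screenshots

def spill_and_fill(paths):
    # Round-robin: each new write stream moves to the next mount path,
    # so parallel jobs end up touching every LUN (and every spindle set).
    return cycle(paths)

def fill_and_spill(paths, is_full=lambda p: False):
    # Consume one mount path until it is full, then spill to the next,
    # so parallel jobs stay concentrated on a few spindles.
    for p in paths:
        while not is_full(p):
            yield p

print(list(islice(spill_and_fill(mount_paths), 5)))  # ['I:', 'G:', 'K:', 'M:', 'Q:']
print(list(islice(fill_and_spill(mount_paths), 5)))  # ['I:', 'I:', 'I:', 'I:', 'I:']
```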

 

But in any case, that slow performance without any other activity is suspect. If you had a bottleneck on one side, you would see it on the other side too.

That’s why, as the others already said, I would recommend asking the people who set up this NetApp device in your environment, or even NetApp support themselves, to explain and tune those figures...

Userlevel 1
Badge +6

Hi Laurent, the vendor that configured this NetApp is working with support, but over the last 10 days there has been no progress at all. I also have an open case with CommVault support; they ran Validate Storage on one path and the read speed was 37 MB/s, which points to NetApp read performance. If the Validate Storage test is slow on reads, how would Fill and Spill improve read performance? Validate Storage runs on only one path.

Userlevel 7
Badge +23

@MaxJamakovic , can you share the case number so I can track it?

Userlevel 1
Badge +6

Hi Mike I’m working with Matt Medvedeff

Userlevel 6
Badge +15

Hi @MaxJamakovic 

Regarding Fill and Spill, I assumed your NetApp was already in production, with lots of data already on the array.

From your screenshot it looks like it’s almost 50% already used.

There are internal mechanisms in the NetApp device that ‘reorganize’ the data across the aggregates and the LUNs/volumes created. I do not remember exactly which ones, because I stopped using NetApp NAS devices in my environment 3 years ago (mostly because of performance issues like this one), so apologies for the lack of detail from my faraway memories...

Make sure all NetApp background tasks, such as volume reorganization, are suspended/stopped on the array; check the scheduler and the log entries.

Also, if possible, create an NFS/SMB test volume rather than an iSCSI one, mount it locally, and re-run the performance tests (even just with ATTO benchmark) to see whether the issue is protocol-related.
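
That comparison can also be scripted. A minimal sketch, assuming a large test file already exists on both an iSCSI mount path and an SMB test share (both paths below are placeholders, including the hypothetical \\fas2720\testshare name):

```python
import time

def read_throughput(path, chunk=1024 * 1024):
    # Stream an existing large file sequentially and return MB/s.
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (1024 * 1024) / (time.time() - start)

# Placeholder paths: one file on an iSCSI mount path, one on an SMB test share.
for label, path in [("iSCSI", r"I:\perf_test.bin"),
                    ("SMB  ", r"\\fas2720\testshare\perf_test.bin")]:
    print(f"{label}: {read_throughput(path):.1f} MB/s")
```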

Userlevel 7
Badge +23

Sharing the solution from the incident:

We are talking to NetApp about adding more disk spindles.
