Hi Commvault Community,
I am backing up a NetApp cluster. It consists of two logical partitions ("vservers", for those familiar with NetApp), and each partition is domain-joined and provides CIFS/SMB file services. These servers hold thousands of files, ranging roughly from medium to large in size.
Both servers are backed up with the NAS Files agent (we are not using NDMP). Most importantly, the backups are written directly to a tape library: each server's backup goes to its own LTO-9 drive, always as a full backup, since we do not use incrementals.
Over the past few months, we have noticed a drop in backup performance. No infrastructure changes have been made; the only change was updating the Commvault maintenance release.
Here is a comparison of backups of the same server on different dates (both servers hold essentially the same data). In the May backup you can see a clear latency increase, which made the backup take significantly longer.
[Screenshot: April backup job statistics]

[Screenshot: May backup job statistics]
We opened case 250516-205 for this, but support told us it was a storage problem.
We have observed that Commvault backs up very large files using a method called Extent-Based Technology, which, in our particular case and within our infrastructure, does not seem very efficient.
I would like to disable extent-based backups on the client for testing via the additional setting “bEnableFileExtentBackup”, but that setting is not available on a NAS Filer client.

Finally, I was able to add the key at the MediaAgent level.
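In case it helps anyone who prefers scripting this over clicking through the Additional Settings tab, here is a minimal sketch using Commvault's cvpysdk Python SDK. The hostname, credentials, and client name are placeholders, and the category and data type for the key (FileSystemAgent, boolean) are my reading of the documentation, so please verify them against the docs for your maintenance release before relying on this:

```python
# Minimal sketch: add bEnableFileExtentBackup=false on a MediaAgent client
# using cvpysdk. Hostname, credentials, and client name are placeholders.
from cvpysdk.commcell import Commcell

# Log in to the CommServe through the web service (placeholder values)
commcell = Commcell('webconsole.example.com', 'admin', 'password')

# The MediaAgent also appears as a client in the CommCell
media_agent = commcell.clients.get('mediaagent01')

# Add the additional setting; category and data type assumed from the
# documented FileSystemAgent / Boolean definition of this key
media_agent.add_additional_setting(
    category='FileSystemAgent',
    key_name='bEnableFileExtentBackup',
    data_type='BOOLEAN',
    value='false'
)
```

If I read the SDK correctly, the setting can be removed again with delete_additional_setting(category, key_name) once testing is done.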
