I have a subclient with a single 4.6 TB file that will only get larger. Since it's one file, I don't think additional data readers will help. I'm wondering if anyone has come across this challenge and has some tuning suggestions.
The client is Linux, and the filesystem holding this file uses 4K blocks. Based on that, I didn't expect that increasing network agents from 2 to 4 or raising the application read size to, say, 4 MB would help, but I tried anyway and saw no performance increase during testing.
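One thing worth ruling out is whether the source disk itself is the ceiling, independent of any backup-software setting. Below is a minimal sketch (plain Python, nothing Commvault-specific) that times sequential reads of a sample of the file at a few buffer sizes, roughly mirroring the application-read-size values worth testing; the file path and sample size are placeholders for your environment.

```python
#!/usr/bin/env python3
"""Rough sequential-read benchmark: is the source disk the bottleneck?
FILE_PATH and SAMPLE_BYTES are placeholders -- adjust for your setup."""

import time

FILE_PATH = "/data/bigfile.dat"   # hypothetical path to the 4.6 TB file
SAMPLE_BYTES = 8 * 1024**3        # read an 8 GiB sample, not the whole file


def read_throughput(buffer_size: int) -> float:
    """Sequentially read SAMPLE_BYTES using the given buffer size,
    returning throughput in MiB/s."""
    remaining = SAMPLE_BYTES
    start = time.monotonic()
    # buffering=0 gives unbuffered raw I/O, so we measure the disk
    # rather than Python's own buffering layer.
    with open(FILE_PATH, "rb", buffering=0) as f:
        while remaining > 0:
            chunk = f.read(min(buffer_size, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
    elapsed = time.monotonic() - start
    read_bytes = SAMPLE_BYTES - remaining
    return (read_bytes / 1024**2) / elapsed


if __name__ == "__main__":
    # Compare buffer sizes from 64 KiB up to 4 MiB.
    for size in (64 * 1024, 512 * 1024, 1024**2, 4 * 1024**2):
        print(f"{size // 1024:>5} KiB buffer: {read_throughput(size):8.1f} MiB/s")
```

One caveat: the Linux page cache will inflate repeated runs, so either drop caches between runs or use a sample larger than RAM. If throughput stays flat across buffer sizes here too, that would be consistent with what I saw when changing the application read size.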
The target storage does dedup and compression, so both are currently turned off at the client level.