Hello all, I have a question about my aux copy behaviour.
We use S3-type storage on site for our backups and aux copy these to a private cloud provider. I've noticed that certain clients' jobs are consistently skipped in the aux copies. Initially I thought the issue was bandwidth, since the aux copy never completed. We have since improved the bandwidth and throughput is much better now, but the aux copies are still not completing. If I go to the secondary copy and show jobs (unticking the time range and excluding "available"), I see a number of jobs going back weeks, and none of the jobs from that client show as partially available (which would imply they were part-way through copying). Interestingly, today I started the aux copy with "Use scalable resource allocation" UNTICKED, and those old jobs were immediately picked up and started copying.
Does anyone have any ideas why this would be? I'm also curious what impact unticking this option will have on my environment. I just don't understand why most jobs were copying while these ones were somehow not being queued, even though the software clearly knew they had not been copied.
Many thanks!