The goal on our Linux file server is to replace the current /project1/ directory with a new /project1/ directory containing all the same data, but stored on different back-end storage. All of this data is backed up by Commvault with the Linux File Server Agent; we are not doing block-level backups.
However, in a test we discovered that Commvault backs the data up again after it is moved back into the new /project1/test/ directory, even though the data is exactly the same (same file sizes, modified dates, etc.).
I populated a folder on the file server, /project1/test, and created a Commvault sub-client for only that directory. I ran a manual full backup, which backed up 344MB of data.
Then I ran the following commands as root on the Linux file server:
rsync -avHxAPS /project1/test/ ~user/temp/
Removed /project1/test/ using:
rm -fr /project1/test/
Synced the data back from ~user/temp/:
rsync -avHxAPS ~user/temp/ /project1/test/
Reinitialized the XFS project quota:
xfs_quota -x -c 'project -s test' /project1
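For what it's worth, a dry run along these lines (just a sketch; -n is --dry-run and -i is --itemize-changes, both standard rsync options) can confirm that rsync itself sees nothing left to transfer after the round trip:

# Dry run with an itemized change summary; an empty item list
# (beyond rsync's own header/summary lines) means no remaining
# differences in content or in the metadata rsync preserves.
rsync -avHxAPS -n -i ~user/temp/ /project1/test/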
All the mtimes, file sizes, etc. on the replacement data match the original. I then ran an incremental job on the same sub-client; it also backed up 344MB of data. A second incremental job then backed up 0 bytes.
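One way to verify that kind of claim before repeating this at scale (a rough sketch, assuming GNU find; before.txt and after.txt are just placeholder names) is to capture a metadata manifest before the rm and again after the sync back, then diff the two. %C@ (ctime) is included deliberately, since a freshly written copy gets a new ctime even when mtime and size are preserved:

# relative path, size in bytes, mtime and ctime (seconds since epoch)
find /project1/test -printf '%P %s %T@ %C@\n' | sort > before.txt
# ... rm, rsync back, and re-init the project quota as above ...
find /project1/test -printf '%P %s %T@ %C@\n' | sort > after.txt
# any differing lines are files whose recorded metadata changed
diff before.txt after.txt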
We had hoped the first incremental would also back up 0 bytes: even though the data was temporarily housed in a temp location, it was rsynced back to the same directory and nothing in it had changed.
Is this expected behavior? Any suggestions on how to test this differently before we do this with 200TB of data? Or is there a way we can finesse this so Commvault doesn’t think it needs to back this data up again?