Background: I have a few Windows clients that are performing a lot of backups via subclients. Most of these are UNC paths, and some are pointing to large file shares (some are 10-50 TB each). We need to "migrate" the clients from Windows Server 2012 to Windows Server 2019, and "just upgrading the OS" isn't an option; we have to stand up new 2019 servers and retire the old 2012 ones. The storage policies for these subclients are tied to DDBs for the primary copies, plus secondary/aux copies.
For each subclient on the old clients we want to retire: I want to just "deschedule" the old subclient, then create a brand new "identical" one on the new client, with the same name, the same path it's backing up, and all the same storage policies, filters, etc., maybe taking advantage of any "new" default settings on the subclient. At some point I'll run a synthetic full on it so the last backup is a 'full', to clean up storage.
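To make the "identical subclient on the new client" idea concrete, here's a toy sketch of what I mean by carrying over only the settings that matter (name, content path, storage policy, filters) while leaving everything else unset so the new subclient picks up current defaults. This is plain Python dicts, not a real Commvault API; every name and path in it is made up for illustration:

```python
def clone_subclient_settings(old_subclient: dict, new_client: str) -> dict:
    """Build settings for the replacement subclient: same name, content
    paths, storage policy, and filters, but attached to the new client."""
    keep = ("subclient_name", "content", "storage_policy", "filters")
    new_subclient = {key: old_subclient[key] for key in keep}
    new_subclient["client"] = new_client
    # Everything not copied here is deliberately left unset, so the new
    # subclient falls back to whatever the current defaults are.
    return new_subclient

# Hypothetical old subclient on a 2012 client:
old = {
    "client": "FS-2012-01",
    "subclient_name": "BigShare",
    "content": [r"\\filer01\bigshare"],       # UNC path being backed up
    "storage_policy": "SP-Dedupe-Primary",
    "filters": [r"\\filer01\bigshare\temp"],
}

new = clone_subclient_settings(old, "FS-2019-01")
```

(If I end up scripting this for real, Commvault's official Python SDK, cvpysdk, exposes subclients and their properties, but the exact property names there differ from this sketch.)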
I have read up a little on NDMP and other backup types for these (and how some are much more efficient/performant), but I'm not sure I'll have time to get new things set up (I have to complete this in a few months, and I am not the storage admin). It would be more desirable to just "set up the new backups like the old ones" and determine a path forward for "new client/backup types" at a later date, after I'm on 2019 and not under the microscope for OS migrations.
Question: Are there any gotchas for doing this? I don't want something to "back up another full set of data" and not deduplicate it properly, blowing up a DDB or our storage. I was under the impression that as long as the storage policy is the same and the "path being backed up" is the same, it doesn't matter which client (or backup set) backs it up; it looks the same to the DDBs and to what's put down on storage.
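My mental model of why the dedup engine shouldn't care which client sends the data, as a toy illustration (this is not Commvault's actual signature algorithm or block size, just a generic block-hashing stand-in):

```python
import hashlib

def block_signatures(data: bytes, block_size: int = 128 * 1024) -> list[str]:
    """Toy stand-in for a dedup engine: hash fixed-size blocks of the data.
    The signature depends only on the bytes, not on who read them."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

share_contents = b"the same files on the same UNC share" * 10_000

# "Old client" and "new client" both back up the identical share contents...
sigs_from_old_client = block_signatures(share_contents)
sigs_from_new_client = block_signatures(share_contents)

# ...and produce identical block signatures, so the DDB should see duplicate
# blocks rather than a second full copy of the data.
assert sigs_from_old_client == sigs_from_new_client
```

That's the assumption behind my question: if the first full from the new client runs against the same DDB, the blocks should dedup against what's already there, and only the metadata/baseline read is "new", not the stored data.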