Hi All,
We use the File Agent on our Windows file servers to archive data and free up disk space; a stub is left behind so that users can still open a file if they need it. Over time, due to mergers and acquisitions, the client may have changed, so we now have data archived under a number of clients. This means that when we need to restore and rehydrate the data back to the current client, or perform an out-of-place restore to another client, we have to:
- Restore the stub files from the backupset.
- Run gxhsmutility against the stub files to create a mapfile, which contains the corresponding subclient ID numbers.
- Run a restore from each of the corresponding archiveset subclient IDs on the original clients under which the data was archived.
We are finding that when we have to recover a large amount of archived data, the mapfile can be very large, and using it crashes the Commvault software. After raising this with Commvault Support, we were advised to split the mapfile into smaller files, as there is a limit of 100k items per mapfile. Many of our file servers have millions of files, so this is not a workable solution.

I requested that a CCR be raised to increase the mapfile limit, but have been advised that it is not being accepted by the engineering team because it is "not a frequent operation" and there is a "lack of a broader use case". I feel that restoring and rehydrating large amounts of data is a fairly frequent operation, and something an enterprise-level application should be capable of doing without manually creating multiple mapfiles and running multiple restores from a single client.
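For reference, this is roughly how the splitting can be scripted when we are forced to do it. It is only a minimal sketch, assuming the mapfile is plain text with one entry per line and that 100,000 entries per part is the safe limit (the paths and the exact mapfile format here are illustrative, not Commvault-documented):

```python
# Minimal sketch: split a large mapfile into parts of at most 100,000 entries.
# Assumes one entry per line; adjust MAX_ITEMS or the parsing if the real
# mapfile format differs.
from pathlib import Path

MAX_ITEMS = 100_000


def _write_part(out_dir: Path, stem: str, index: int, lines: list[str]) -> Path:
    part = out_dir / f"{stem}.part{index:03d}.map"
    part.write_text("".join(lines), encoding="utf-8")
    return part


def split_mapfile(mapfile: str, out_dir: str) -> list[Path]:
    """Write <mapfile>.partNNN.map files containing at most MAX_ITEMS lines each."""
    src = Path(mapfile)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    parts: list[Path] = []
    chunk: list[str] = []
    with src.open("r", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            chunk.append(line)
            if len(chunk) == MAX_ITEMS:
                parts.append(_write_part(out, src.stem, len(parts), chunk))
                chunk = []
    if chunk:
        parts.append(_write_part(out, src.stem, len(parts), chunk))
    return parts


if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    for part in split_mapfile(r"C:\temp\restore.map", r"C:\temp\mapfile_parts"):
        print(part)
```

Even with something like this, each resulting part still has to be fed into its own restore job, which is exactly the manual overhead we are trying to avoid.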
I am aware of another company that has the same problem, and Commvault Support are splitting the mapfiles for them when required, which suggests there is a broader use case. Does anyone else have this issue with mapfiles?
Regards,
Mark Shaw