
We have a request to back up and archive a share on a NetApp device; we have a Windows MediaAgent with direct-attached storage onsite. I was wondering what the best approach is. Also, does anyone know how much Index Cache 3 TB of data would consume?

I can configure the backup to occur either via NDMP or via a Windows share (using a Windows proxy), and then configure archiving of the network share. Should I configure archiving first and then the backup, or the other way around? Both would go to a library on the onsite MediaAgent.

The benefit of archiving first would be smaller backups.

The benefit of backing up first would be that if something happens to the data on the library, we have an aux copy of a full backup at our end.

 

Also, say we set up archiving to the libraries attached to the local MediaAgent, and these libraries fail for some reason (hardware error): how would stub recalls work? Or say we need to replace the existing MediaAgent with a new one. Usually, since we have an aux copy, we just ship a new MA and take new full backups; however, it appears that would not work in this scenario, because the MediaAgent is also used to store the archived data. Would we be required to copy that data to the new MediaAgent? And since there is no way to differentiate between backup and archived data, wouldn't we be required to copy all the data from one MediaAgent to the other?


I’m curious about your definition of archive. Is the share being removed permanently, or do you want users to be able to recall files automatically upon access (i.e., HSM, hierarchical storage management)? If the latter, we have NetApp archiving using FPolicy, which will initiate a recall of the files upon user access; no need to install anything client-side.

https://documentation.commvault.com/11.24/expert/27123_netapp_archiving.html

 

The benefit of backing up or archiving over a share is that you tend to get better deduplication, and you retain compatibility for restores. NDMP does a great job of backing up millions of files with much less ‘scan’ time than a traditional SMB/share backup.

The general rule of thumb used to be an index size of about 2% of the amount of data being protected. Indexing V2 is a little different, but it’s still a reasonable estimate to use; it also depends on the number of objects versus raw size.
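As a quick sanity check for the 3 TB question above, the 2% rule works out like this (a rough sketch only; the 2% figure is a historical guideline, not a guarantee, and Indexing V2 usage varies with object count):

```python
def index_cache_estimate(data_bytes, pct=0.02):
    """Estimate index cache size using the ~2% rule of thumb.

    This is a rough planning figure only; actual usage depends
    heavily on the number of objects, not just raw data size.
    """
    return data_bytes * pct

TB = 1024 ** 4
GB = 1024 ** 3

estimate = index_cache_estimate(3 * TB)
print(f"~{estimate / GB:.0f} GB of index cache for 3 TB of data")  # ~61 GB
```

So for the 3 TB share in question, plan for roughly 60 GB of index cache, plus headroom if the share holds millions of small files.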

 

The benefit of backing up first would be that if something happens to the data on the library, we have an aux copy of a full backup at our end.

 

You can still aux copy an archive job too, so either way works. I would certainly recommend keeping at least two copies of your archive data.

