Solved

Delete Mount Path from de-dupe library and decommission Media agent


Mohit Chordia

Team,

We are using Windows servers as backup Media Agents. I want to decommission one of the Media Agents, “x”, which is part of 3 libraries and their dedupe storage policies. I have disabled the mount paths on all 3 libraries associated with Media Agent “x”, and View Content shows that there is no data present on the mount path. When I try to delete the mount path associated with Media Agent “x”, I get the error below:

Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies.

 

If I unshare the mount paths associated with Media Agent “x” from the other mount paths of the same library and remove Media Agent “x” from the Data Paths tab in the dedupe storage policy, restore jobs start failing. Some of my backup jobs in one of the libraries have infinite retention. Can anyone suggest how to gracefully delete a mount path from an existing library and dedupe storage policy without affecting restore and backup operations?

Best answer by Mike Struening RETIRED

@Mohit Chordia , looks like the case was closed: We found that the top level folder did not have the system account added. I went ahead and added it and now the mount path is back online.


14 replies

Mike Struening

Great question, @Mohit Chordia, and one that comes up often. For another example, see this thread:

Keep in mind that “view jobs” is misleading with deduplication, since the actual blocks that make up a job are spread all over the library. The logical jobs no longer exist on that mount path, but the original blocks are still there and are being referenced by newer jobs. For the same reason, jobs listed in other mount paths are referencing blocks in this mount path as well.

 

I’m going to share the very detailed answer that @Jordan gave on that thread, as it should help you:

 

In recent FR versions, the CSDB will automatically check for volume folders that have been removed from disk and remove their references from the DB.

 

This means that after you have enabled the “Prevent data block references for new backups” option, it will take approximately the retention period plus a few days before you can easily remove this mount path. The reason is that once you enable this option, jobs written before it was enabled may still reference the blocks residing in the volumes and chunks on this mount path. Once the retention period is over (make sure all jobs written prior to enabling this option have actually aged off), it will still take 1-2 days for the CS to receive all the volume check information from the respective MA(s) before it removes the references in the CSDB itself. For example, with 30-day retention on the associated copies you should expect roughly 30 + 2 ≈ 32 days in total before the mount path is free to delete.

 

Once all data volume references in the CSDB for this mount path have been removed, it will also remove the DDB association and thus allow you to delete the mount path. 

 

Another easy way to check if the mount path can be removed is to check the “Deduplication DBs” tab and see if the association still exists or not.

https://documentation.commvault.com/11.22/expert/9319_disk_libraries_advanced.html#view-deduplication-databases-ddbs-associated-with-mount-path

If the association still exists and you feel the retention period should already have expired, run a “Data Retention Forecast and Compliance Report” to determine which jobs have not aged yet and why.

https://documentation.commvault.com/11.22/expert/39786_data_retention_forecast_and_compliance_report_overview.html
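If you would also like a rough check from the file-system side, something like the sketch below can list whatever volume folders are still physically present under the mount path. This is only an illustration: the CV_MAGNETIC / V_* folder layout is the usual convention for a disk mount path, and the path used here is a placeholder, so adjust both to match your environment.

# Sketch only: list volume folders still physically present under a disk mount path.
# Assumptions: the mount path uses the usual CV_MAGNETIC\V_<id> layout, and the
# path below is a placeholder for your real mount path.
from pathlib import Path

MOUNT_PATH = Path(r"D:\DiskLibrary\MountPath1")      # placeholder path
magnetic_root = MOUNT_PATH / "CV_MAGNETIC"           # data folder on the mount path

if not magnetic_root.is_dir():
    print(f"No CV_MAGNETIC folder found under {MOUNT_PATH}")
else:
    volume_dirs = sorted(p for p in magnetic_root.iterdir()
                         if p.is_dir() and p.name.startswith("V_"))
    print(f"{len(volume_dirs)} volume folder(s) still on disk")
    for vol in volume_dirs[:20]:                     # show the first 20 only
        size_mib = sum(f.stat().st_size for f in vol.rglob("*") if f.is_file()) / 1024**2
        print(f"  {vol.name}: {size_mib:.1f} MiB")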


Mohit Chordia

Thank you for sharing this. I have a new replacement Media Agent ready to be added to production.

Also, I have a few jobs associated with one of the 3 libraries that have infinite retention, so I am not sure whether the actual data on Media Agent “x”’s mount path will ever expire on its own. Can I follow the steps below to remove the mount paths associated with Media Agent “x” gracefully from all 3 libraries without affecting backup and restore operations?

  • Create a mount path on the new Media Agent that was just added to the CommServe (not yet part of any library or storage policy).
  • Perform the Move Mount Path operation (a rough free-space pre-check sketch follows below this list).
  • Once the Move Mount Path operation completes, the existing mount path entry is updated to point to the new mount path on the new Media Agent.
  • The old Media Agent can then be decommissioned gracefully.
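The free-space pre-check mentioned in the second step could look roughly like the sketch below. It assumes the new mount path needs at least as much free space as the old one currently holds; both paths are placeholders.

# Sketch only: confirm the target volume has room for the data on the old mount
# path before running the Move Mount Path operation. Both paths are placeholders.
import shutil
from pathlib import Path

OLD_MOUNT_PATH = Path(r"D:\OldLibrary\MountPath1")   # mount path on media agent "x"
NEW_MOUNT_PATH = Path(r"E:\NewLibrary\MountPath1")   # mount path on the new media agent

used_on_old = sum(f.stat().st_size for f in OLD_MOUNT_PATH.rglob("*") if f.is_file())
free_on_new = shutil.disk_usage(NEW_MOUNT_PATH).free

print(f"Old mount path currently holds: {used_on_old / 1024**3:.1f} GiB")
print(f"Free space on the new volume  : {free_on_new / 1024**3:.1f} GiB")
print("Enough room for the move" if free_on_new > used_on_old else "Not enough free space")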

Mike Struening

That should work. The challenge is having infinite retention on disk. If you can get the files onto another physical device, then you should be good to remove the older physical device once the move completes.


Mohit Chordia

I have completed the Move Mount Path operation successfully for one of the libraries.

I removed the old Media Agent from the de-dupe storage policy and also removed it from sharing on all other mount paths of the same library.

I am now facing an issue with the new mount path: it shows as not accessible in the Sharing tab.

CVMA logs:

Failed to calculate the space usage for path [D:\xxxxxxxx\8SKOP8_Folder1] in function [ProcessMagneticConfigRequest] exists or not in func, error=0x80070005:{CQiDirectory::GetSpaceUsage(494)/W32.5.(Access is denied. (ERROR_ACCESS_DENIED.5))}

 


Mike Struening

@Mohit Chordia , can you confirm what that folder actually is?  i.e. is it a local drive, a mapped drive, a LUN, etc.?

The error is saying that the account the Media Agent is using has no permission to the folder.

Can you confirm whether that full path is actually accessible on the Media Agent you found that log on (assuming multiple MAs have access)?

It’s either not in the path expected, or there’s a permissions issue.
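One quick way to narrow it down is to walk the folder with a small script run under the same account the Commvault services use; if that account lacks rights, it will hit the same access-denied errors the space-usage calculation hit. This is only a sketch, and the path is copied from the (partly redacted) log line, so substitute the real folder.

# Sketch only: walk the mount path folder and report anything the current account
# cannot read. Run it under the same account the Commvault services run as.
# The path is taken from the CVMA log line; the x's are the redacted portion.
import os

MOUNT_PATH = r"D:\xxxxxxxx\8SKOP8_Folder1"

denied = []

def on_error(err):
    # os.walk calls this for directories it cannot list (e.g. access denied)
    denied.append(err.filename)

total_bytes = 0
for root, dirs, files in os.walk(MOUNT_PATH, onerror=on_error):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError as exc:
            denied.append(exc.filename)

print(f"Readable data: {total_bytes / 1024**2:.1f} MiB")
print(f"Paths with access errors: {len(denied)}")
for path in denied[:10]:
    print(f"  {path}")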


Mohit Chordia

It's a local drive on a Windows server. I am able to access it when logged into the Media Agent.


Mike Struening

That certainly simplifies things. Let me ask around.

One thing I did find is that cycling services after this move often resolves the issue. I am going to see if I can get anyone here involved, but you could try that in the meantime.


Mohit Chordia

I have recycled services on the Media Agent but I am still seeing the same issue.


Mike Struening

@Mohit Chordia , I see you have a case in development for this issue.  I’ll keep an eye on it, though feel free to come back and share the solution before I get the chance (and credit yourself with the Best Answer)!


Jordan

@Mohit Chordia - can you advise what the underlying storage is here and how it is presented to the MA for the D:\ drive?

 

Thank you


Mohit Chordia

@Jordan:

The Media Agent is a Dell server with local disks.

We are using RAID 6 across multiple NL-SAS disks to create one large drive, which is used as the backup storage library.

 


Mike Struening

@Mohit Chordia , looks like the case was closed: We found that the top level folder did not have the system account added. I went ahead and added it and now the mount path is back online.
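For anyone hitting the same thing: one way to check and add the local SYSTEM account on the top-level mount path folder is with icacls. A rough sketch is below; the folder path is a placeholder, and the exact accounts your environment needs may differ.

# Sketch only: check whether NT AUTHORITY\SYSTEM is on the top-level mount path
# folder and grant it full control with inheritance via icacls if it is missing.
# The folder path is a placeholder; adjust it to your top-level mount path folder.
import subprocess

TOP_LEVEL_FOLDER = r"D:\DiskLibrary\MountPath1"   # placeholder

# Show the current ACL for a quick visual check.
current_acl = subprocess.run(["icacls", TOP_LEVEL_FOLDER],
                             capture_output=True, text=True).stdout
print(current_acl)

if "NT AUTHORITY\\SYSTEM" not in current_acl:
    # Grant SYSTEM full control, inherited by new files and subfolders.
    subprocess.run(
        ["icacls", TOP_LEVEL_FOLDER, "/grant", r"NT AUTHORITY\SYSTEM:(OI)(CI)F"],
        check=True,
    )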

Shall I mark this as the best answer?

Thanks!!


Mohit Chordia

Yes, definitely!

Thank you, Mike. 


Mike Struening

Always, @Mohit Chordia !!

