Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,674 Replies
Hi, I tried to run a data verification, but the job fails with this error:

Error Code: [13:138] Description: Error occurred while processing chunk in media [V_], at the time of error in library [LibStorage] and mount path [[LibStorage] R:\], for storage policy [SP_BackupSystem] copy [Aux_Disk] MediaAgent : Backup job . Mount path inaccessible. Source: , Process: AuxCopyMgr
Hello, I need a little clarification about long-running auxiliary copies. On a daily basis we run a Primary Copy schedule, which is followed by three auxiliary copies. Quite often the auxiliaries take a very long time to complete, enough to overlap with the primary copy schedule of the following day. This causes the jobs from day 2 to be picked up by the still-running auxiliary copies from day 1. Is there any way to avoid this? Is it possible to set a boundary on the latest job to be included in an auxiliary copy?

To my understanding, the Copy Policy > Backup Period > End Time setting cannot be used, as it would provide a fixed date rather than a moving one.

Sorry if this sounds a bit dumb, and thanks for your support.
Gaetano
I want to move jobs from disk to tape. When I run the auxiliary copy, it reports an error on a chunk, and some of the jobs are marked as bad. When I try to mark them as good, I get the following error: "The selected job cannot be marked good, since it has been marked bad by the system during a read operation."

I have already validated at the operating system level that the chunks exist at the corresponding path. If I browse the backup data for those bad jobs through a restore, I can view and navigate to the files to restore. Is there any way to return the jobs to a good status?
Hi all, if I can just confirm something that I think makes sense. In my case, I have a MediaAgent that only backed itself up and hosted its own DDB. This client no longer backs itself up. Its retention periods have all been met, and no data resides on the storage policy or the DDB. I want to remove the alert in Command Center that notifies me the DDB isn't being backed up. If I understand correctly, I just do the below, in this order:

1. Retire the DDB using the ddb_sealing.xml
2. Delete the Storage Policy
3. Delete the Media Agent and associated Library?

Thanks.
Mauro
Hello All, I have to replicate an S3 cloud library (backed-up data) to another S3 bucket in a different region, which can be achieved by Configuring Replication for Cloud Storage (https://documentation.commvault.com/11.25/expert/9279_configuring_replication_for_cloud_storage.html), although I need some clarity on how it works.

Step 1: As per my understanding, I have to create two buckets with versioning enabled, and I should configure two cloud libraries, each accessible by a MediaAgent specific to the region where the S3 bucket is hosted.

Step 2: We need to request the AWS admins to use native replication to copy the backed-up data to the new S3 bucket. Post-copy, we can use the other MediaAgent to read and restore the data using Commvault.

Am I correct with my understanding? Also, the documentation gives additional information: for Amazon S3 replication, enable versioning in both the source and destination buckets in Amazon S3. Commvault only uses the current version of an object. Hence, when Commvault sends a de…
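For reference, here is a minimal boto3 sketch of the AWS-side setup described in Steps 1 and 2. The bucket names and IAM role ARN are hypothetical placeholders, and in practice your AWS admins would configure this outside of Commvault:

```python
# Sketch only: bucket names and role ARN are hypothetical.
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on BOTH buckets before S3 replication works.
for bucket in ("source-cv-library", "dest-cv-library"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replication rule on the source bucket; S3 then copies new objects to
# the destination region natively, outside of Commvault's control.
s3.put_bucket_replication(
    Bucket="source-cv-library",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::dest-cv-library"},
        }],
    },
)
```

Note that native replication only copies objects written after the rule exists, so pre-existing library data would need a separate copy step.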
Hi Commvault Community, I would like to see if anyone else is facing the same issue. We have a few customers that are still using tapes to move weekly/monthly backups to a safe location. With the change to "forever incremental," they are now facing a big problem collecting all of an agent's backups onto the exported tape; there is no process to generate a synthetic full or anything similar. One customer is still holding on to the "old" Exchange OnePass classic agent so he doesn't lose the synthetic full option. Another customer is manually creating a new storage policy copy every month to get all the backups onto the same tape. We raised this problem a few months ago; we even had a talk with dev at GO 2019. Back then we were told a solution was on its way and that we could expect it for SP20. Now we are at SP26 and there is still no option for these customers.
Getting the following when Commvault attempts to reconstruct the DDB:

User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.

Is there any way around this error so I can get backups going again? It has tried a couple of times over multiple days.
Problem with the DDB: "One or more active DDB partitions for the storage policy copy are not available for use." Could anyone direct me to a fix for this problem?

2736 664 02/18 09:26:07 ### RecvAnyMsg: Unexpected message received. Waiting [4F000019], I have , Group
2736 664 02/18 09:26:07 ### SendAndRecvMsg: RecvMsg returned failure. iRet [-1]
2736 664 02/18 09:26:07 ### PruneRecords: SendAndRecvMsg failed. iRet [-1]
2736 664 02/18 09:26:07 ### 5-3 PruneZRRecInt:2551 Failed to purge primary SIDB records. Error [The network module failed to send/receive data.]
2736 664 02/18 09:26:07 ### 5-3 PruneZRRec:2293 Finishing zero reference record pruning. Attempts , iRet
2736 664 02/18 09:26:07 ### 5-3 DedupPrnPhase3:5247 Unable to remove unreferenced primary records from SIDB. Error
2736 664 02/18 09:26:07 ### stat-ID [Avg GetDirContents], Samples , Time [0.095575] Sec(s), Average [0.001991] Sec/Sample
2736 664 02/18 09:26:07 ### stat-ID [Avg CanPruneVolume], Samples
Hello team, I noticed two sealed DDBs have a space warning under the DDB Disk Space Utilization section of the Web Console health report. We have long-term retention for mailbox backups that prevents the DDB store from reclaiming space. I'm looking to see if there is a way to exclude sealed DDBs from the DDB disk space utilization strike count (I have searched the Commvault documentation, but it doesn't return any results). Or will I need to contact support to manually free up the sealed DDB space? Do I need to upload the CS DB for Commvault staging? Thank you.
Hello Commvault Community! I would like to ask if there is any option to do a "tape mirror", i.e. a 1:1 copy of the data from one tape to another while keeping the data on both tapes. I am aware that there is a "tape to tape copy" option, but if I understand correctly, it deletes the data from the source tape and copies it to a new one available in Spare Media. The reason the customer wants two copies of the data on two different tapes is their company's security rules, which require one tape in the safe and the other in the active tape library. I suggested making another storage policy copy backed up to tape with the source copy from disk, but then we can't be sure that the tapes will be 1:1. Is there any chance we can make a tape mirror?

Move Contents of Media from One Tape to Another: https://documentation.commvault.com/11.24/expert/10538_move_contents_of_media_from_one_tape_to_another.html

Thanks & Regards,
Kamil
Hi all, over the years we have been creating multiple partitions and using each partition as a mount path in the disk library. This method gives the exact size of the disk library. However, if you create mount paths using different folders within the same partition, the Commvault disk library size is multiplied by the number of folders you create. For example, if the E: partition is 10 TB and we create 5 mount path folders as Folder1 through Folder5, we expect the total size to be seen as 10 TB, but Commvault calculates it as 5 * 10 and shows the disk library with a wrong size of 50 TB. Any ideas on how to fix this?

Regards,
Jithendra Krishnakumar
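To illustrate the arithmetic (a sketch of how the inflated figure can arise when per-mount-path capacities are summed naively, not Commvault's actual internals; the paths are hypothetical):

```python
# Sketch only: paths are hypothetical, and the summing logic is an
# assumption about where the double counting comes from.
import os
import shutil

mount_paths = [rf"E:\Folder{i}" for i in range(1, 6)]

# Naive sum: each folder reports the full capacity of volume E:,
# so 5 folders on a 10 TB partition yield a 50 TB total.
naive_total = sum(shutil.disk_usage(p).total for p in mount_paths)

# Grouping mount paths by their underlying volume first avoids the
# double counting and yields the real 10 TB.
volumes = {os.path.splitdrive(os.path.abspath(p))[0] for p in mount_paths}
true_total = sum(shutil.disk_usage(v + os.sep).total for v in volumes)

print(naive_total, true_total)
```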
In the Commvault client application, I can view "Media in Library" through Storage Resources > Libraries > QUANTUM Scalar i3-i6 3 > Media By Location > Media In Library. This window shows all the tapes in the library, whether they are in tape drives or in regular slots. I've looked for a window like this in Command Center, but I've only been able to view either the tapes outside the drives or the tapes inside the drives, not both at the same time. Is there a place where I can view all of them at once? Or perhaps a view that I can custom-configure to show this information?

Along with this, the "Slot View" in Command Center shows all the tapes, but on multiple pages. This makes finding several tapes across all the slots very slow, because one must click between the different pages of tapes. Is there a way to expand the list length per page, or to disable pagination completely? Solutions and/or suggestions would be greatly appreciated.
Hello All, I would like to use Amazon KMS for encryption; how do I achieve this? Do I need to register the Amazon KMS in our CommCell and use it in our policies? As per the documentation below, we were asked to add additional keys to enable encryption. How does that work? Can anyone explain?

https://documentation.commvault.com/11.24/expert/9263_enabling_server_side_encryption_with_amazon_s3_managed_keys_sse_s3.html

What is the difference between the above documentation and registering the Amazon KMS in the CommCell?
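For context: the linked page covers SSE-S3, where Amazon owns and manages the keys entirely on the storage side, while registering a KMS in the CommCell is a different mechanism in which Commvault uses a key you control for its own key management. A minimal boto3 sketch of the storage-side difference (the bucket name and key alias are hypothetical placeholders):

```python
# Sketch of the two S3-side encryption modes; the bucket name and
# KMS key alias below are hypothetical.
import boto3

s3 = boto3.client("s3")

# SSE-S3 (what the linked documentation covers): Amazon manages the
# keys; nothing to register in the CommCell.
s3.put_bucket_encryption(
    Bucket="cv-cloud-library",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},
        }],
    },
)

# SSE-KMS: objects are encrypted under a KMS key that you create,
# control, and can audit the usage of.
s3.put_bucket_encryption(
    Bucket="cv-cloud-library",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/commvault-backup",
            },
        }],
    },
)
```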
Error Code: [62:1419]
Description: The required media is currently in a different library.
Source: hq-vm-commserv, Process: MediaManager

This is urgent, please: the tape is in the same library as the copy that needs it, so why is it saying it is in DR?
Hi, the current tape type is V7M8. I also changed the type to V7, V7M8, and V8, but I keep getting the following error message. Please advise if there is a solution.

Error Code: [62:1174]
Description: Failed to mount media with barcode [AUS000L8], side [A_2087], into drive [HPE Ultrium 8-SCSI_2], in library [HP MSL G3 Series 70] on MediaAgent [KRDJMASP01]. SCSI Operation: Move Media From Slot To Drive. Reason: The device reported an illegal request error during the execution of the command. Advice: If this error is persistent, check if there are any visible hardware errors reported by the device or Operating System logs. Please contact your hardware vendor.
Source: glbcomcel, Process: MediaManager

Kind Regards
Hello Community, has anyone implemented, or does anyone have an idea how to implement, an offline backup copy in Commvault? Backup is being written to commodity servers with local storage attached. The goal is that once a backup job or aux copy is completed, the target MediaAgent should go offline, and it should come online automatically during a recovery request or the next backup/aux copy schedule. Can we control power-on or power-down of a MediaAgent through any workflow or Commvault feature? What are the best practices or procedures to implement an offline backup copy?

Regards, Mohit
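Not a built-in Commvault answer, but for the power-on half, one common building block is Wake-on-LAN: a pre-backup script or workflow step could send the magic packet before the job and pair it with an OS shutdown afterwards. A minimal sketch, with the MAC address as a hypothetical placeholder:

```python
# Generic Wake-on-LAN magic packet; the MAC address is hypothetical.
# A pre-backup script or workflow step could call wake() before the
# job and issue an OS shutdown on the MediaAgent afterwards.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Magic packet = 6 bytes of 0xFF followed by the MAC repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("00:11:22:33:44:55")
```

This assumes the MediaAgent's NIC and BIOS have Wake-on-LAN enabled and that the sender is on a network segment that can reach the broadcast address.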
We are struggling to commission an HPE MSL6480 which has 6 drives. Each drive has 2 FC ports, connected such that port 1 connects to the Fabric A switch and port 2 to the Fabric B switch, and the HSX hosts are connected respectively. I've built and installed the lin_tape control path failover driver, and the OS sees a device for each path. Does multipath need to be configured? If so, can someone provide the process, as Linux is not my strength? Has anyone else installed and configured an MSL6480 and used it with Commvault?
I have a storage policy with 7-day, 1-cycle retention and two copies: Primary and DR. It appears that none of the incremental backups are replicated to DR. Does Commvault somehow treat incremental backups differently than full backups when it comes to replication to the DR storage?

Second question: I just ran a full backup of a small production server, verified that it's only on the primary storage, right-clicked the storage policy > All Tasks > Run Aux Copy > selected Copy: 2-DASH-to-DR > OK, and got "No data needs to be copied". I don't understand how I can have so many backups, both incremental and full, that show as only being on the primary storage. Does anyone have any idea why my aux copy doesn't seem to be working?

Thanks in advance for any help.
Ken