Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 675 Topics
- 3,383 Replies
Error Code: [13:138] - Data Verification DDBs Fails
Hi, I tried to run a data verification, but the job fails with this error:

Error Code: [13:138] Description: Error occurred while processing chunk in media [V_], at the time of error in library [LibStorage] and mount path [[LibStorage] R:\], for storage policy [SP_BackupSystem] copy [Aux_Disk]. MediaAgent : Backup job . Mount path inaccessible. Source: , Process: AuxCopyMgr
Auxiliary copy is stuck at 30%
Our weekly secondary aux copy has been stuck at 30% since this weekend (blocking all of the primary disk-to-disk incremental copies), with the two error messages below. Thinking it might be a port communication issue between the media server (S01190), where the tape library is attached, and the CommCell server (S02116), I ran the following port checks between the two servers:

Telnet from the media server (S01190) to the CommCell server (S02116):
- Port 8400: OK
- Port 8401: OK
- Port 8403: OK

Telnet from the CommCell server (S02116) to the media server (S01190):
- Port 8400: OK
- Port 8401: Not OK
- Port 8403: Not OK

Before I speak to our network/security administrator, who recently installed SentinelOne AV on both of the above servers, I'm wondering whether I'm heading in the right direction and whether I have done all the necessary port checks. Thanks, Kelvin
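The manual telnet checks above can be scripted so both directions are tested the same way. A minimal Python sketch, where the hostnames are the ones from the post and 8400/8401/8403 are simply the ports the poster checked (run it from each server toward the other):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hosts and ports taken from the post; adjust for your environment.
for host in ("S01190", "S02116"):
    for port in (8400, 8401, 8403):
        print(f"{host}:{port} -> {'OK' if port_open(host, port) else 'Not OK'}")
```

An asymmetric result like the one in the post (OK one way, Not OK the other) usually points at a host-based firewall or endpoint-security rule on the destination side rather than the network path itself.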
Badly marked LTO8 tapes
Hi all, we have the following issue. Some LTO8 tapes were badly marked with LTO6 barcodes. The tapes have since been remarked with the right barcodes. However, Commvault cannot recognise the newly remarked tapes: it says the barcodes have already been used, and the tapes were moved to the retired group. We tried Discover, Full Scan (inventory) and "Update barcode" for the affected tapes, without success. Is there any workaround for this issue, or is there only one way to fix the tapes within the tape library?
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively.

Issue: After upgrading to 11.20 or higher, NFS mount paths show as offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options".

CVMA.log on the MediaAgent will show:

102415 1901f 01/13 19:06:53 ### WORKER [96/0/0 ] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=126.96.36.199,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]

Cause: Checking the NFS mount options by running mount -v will reveal that the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm. However, 11.20 has enforced the check. This was done due to issues where
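To check your own mount paths against this condition, the option string from mount -v (or from the CVMA.log line above) can be parsed directly. A small sketch of that check, assuming (per nfs(5)) that an omitted local_lock defaults to "none" — this mimics the described 11.20 behaviour, not Commvault's actual code:

```python
from typing import Optional

def offending_local_lock(mount_options: str) -> Optional[str]:
    """Return the local_lock value if it is set to anything other than 'none',
    else None. Per nfs(5), an absent local_lock option means local_lock=none."""
    opts = dict(
        opt.split("=", 1) if "=" in opt else (opt, "")
        for opt in mount_options.split(",")
    )
    value = opts.get("local_lock", "none")
    return value if value != "none" else None

# Option string trimmed from the CVMA.log line in the post:
opts = "rw,relatime,vers=3,hard,nolock,proto=tcp,local_lock=all"
print(offending_local_lock(opts))  # prints: all  (mount path would be marked offline)
```

Remounting with local_lock=none (and updating /etc/fstab accordingly) should clear the offline state once the MediaAgent re-validates the path.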
Media stuck in drive
Hi, after a power failure the tape library is showing offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, the drive shows a medium loaded when in reality it is empty. I can't mount a tape, since Commvault thinks there is still a tape inside the drive. How can I resolve this?
How to move data from LTO4 (from one Tape library) to LTO7 on another Tape library?
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes, and I would like to copy data from the LTO4 tapes to LTO7. I worked according to the Media Refresh documentation (commvault.com): I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full and Pick for Refresh, then ran a Media Refresh job and chose the "Start new media" option. Now I get the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the Media Refresh operation? Best regards, Elizabeta
Stream allocation for Auxcopy
Could someone explain the process flow for allocating readers for an aux copy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and the amount of RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (e.g. for VSA). My VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers assigned to the aux copy. My goal is to increase the readers for aux copy jobs to improve performance. My auxcopy with 38 rea
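The sizing rule quoted in the post (10 streams per CPU, 100 MB of memory per stream) implies a simple upper bound on how many streams a proxy can host. A sketch of that arithmetic — this is only the rule of thumb from the post, not Commvault's actual allocator, which also factors in job streams, drive availability and multiplexing:

```python
def max_streams(cpu_count: int, free_ram_mb: int,
                streams_per_cpu: int = 10, mb_per_stream: int = 100) -> int:
    """Upper bound on concurrent streams for a proxy, per the rule of thumb:
    10 streams per CPU, 100 MB of memory per stream. Whichever resource
    runs out first caps the stream count."""
    return min(cpu_count * streams_per_cpu, free_ram_mb // mb_per_stream)

print(max_streams(8, 16384))  # CPU-bound: min(80, 163) -> 80
print(max_streams(4, 2048))   # RAM-bound: min(40, 20) -> 20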
Mount path is showing offline
Hi team, we have a MediaAgent with four mount paths, all coming from backend SAN. One of the four mount paths shows an error while writing data. I checked Disk Management and the disk was marked read-only, so I rebooted the MA and made the disk read/write. I created a test folder on the disk, which works fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
Data Written vs Size on disk (HyperScale)
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the storage policy, disk library, or MediaAgent properties. The backup jobs are also fine; I counted 10,800 jobs manually just to be sure the size is correct. 24.77 TB of data is written, but how can the size on disk be 55.74 TB? Has anyone had the same situation?
Auxiliary copy did not copy some jobs
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another disk library. Despite the successful completion of the aux copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the primary copy for those 4 jobs and it completed successfully. I tried Re-Copy, but the status stays the same; I also tried "Do Not Copy" followed by "Pick for Copy", but it's still the same. "All Backups" is selected in the copy policy. What should I check? Best regards.
Data aging, deleting old jobs
Hello, we have some issues with freeing up storage space. When I run the Forecast report I don't see anything wrong; most jobs are retained under basic days or last of the week/month. On the Web Console, under Storage > Data Retention, I see 80 TB older than a year, but I cannot find that data under the storage policy. Last week, under SP > Summary > Storage Policy / Copy Space storage recovery prediction, I saw 35 TB on 16-4 and 30 TB on 17-4; below that are the predictions for this week. After data aging I saw a lot of prunable records in the DDB, so I ran DDB verification, but afterwards I didn't see any space actually freed on storage. I then ran space reclamation at level 1 with "Clear orphan data", and still nothing changed on storage. I will upload the data aging log; if you need the SIDBPrune log, I can upload it as well.
Mount path to Mount path Data Migration
I need to check whether there is any option to move data from one mount path to another mount path in the same library. I need this done to mitigate an overcommit issue on the backend storage. I have 3-4 mount paths, one of which contains only a single job; I want to move that job to any other mount path within the library and then delete the source mount path, so the overcommit issue is resolved. Current version: V11 SP26.23. Backend storage: NetApp.
S3 Compatible Storage Untrusted Certificate
Hi! I'm trying to add S3-compatible storage as a cloud library, but I get this error:

3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK

I already troubleshot it and was able to successfully add the storage as a cloud library using "nCloudServerCertificateNameCheck", mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate, issued by:
CN = R3, O = Let's Encrypt, C = US
with root cert:
CN = ISRG Root X1, O = Internet Security Research Group, C = US

So I am wondering if, instead of ignoring all possible certificates, I could just add this one valid certificate to Commvault so it trusts this provider and allows me to configure the disk library. Is this possible? Also, not sure if that's related, since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh in
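Since the failure is a missing root of trust rather than a bad certificate, one generic approach is to append the provider's root-CA PEM (here ISRG Root X1) to the CA bundle the client reads. A sketch of that idea in Python — the bundle path and whether Commvault's cvd honours an edited curl-ca-bundle.crt are assumptions, not a documented Commvault procedure:

```python
from pathlib import Path

def append_ca_if_missing(bundle_path: str, root_pem: str) -> bool:
    """Append a root-CA PEM block to a CA bundle file if it isn't already
    present. Returns True if the bundle was modified."""
    bundle = Path(bundle_path)
    existing = bundle.read_text() if bundle.exists() else ""
    if root_pem.strip() in existing:
        return False  # already trusted, nothing to do
    with bundle.open("a") as f:
        f.write("\n" + root_pem.strip() + "\n")
    return True

# Hypothetical usage: isrgrootx1.pem downloaded from letsencrypt.org, and the
# MA's curl-ca-bundle.crt path substituted for your installation.
# append_ca_if_missing("curl-ca-bundle.crt", Path("isrgrootx1.pem").read_text())
```

A 2016-era bundle would indeed predate ISRG Root X1's broad distribution, which is consistent with the CURLcode 60 error; back up the original bundle before editing it.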
System Created DDB Space Reclamation schedule policy
Hello, after upgrading from V11 FR20 to V11 FR24, I noticed a new schedule policy named "System Created DDB Space Reclamation schedule policy", which was disabled by default. I basically know what the Space Reclamation functionality is about, and the policy has all our deduplication engines assigned. But when I initialized this policy, it finished in less than a quarter of an hour, and from the logs only one dedup engine was processed. Another manual start just gives the error message below. Can anybody explain to me what this schedule policy is for and how it is supposed to work?
Mount Path move failed
We are trying to move a mount path to a new MediaAgent, but it has kept failing ever since we stopped the job. The disk became full, leaving the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks

The error that we get is: "Move Mount Path Job Failed, Reason: The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation". Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Chunk and Block Size for Cloud Libraries
Hello all, I am trying to find specific guidelines for block and chunk settings as they relate to cloud storage. The information I have found is generally related to disk and tape media. I have been reviewing an environment that uses a chunk setting of 'Application setting' for the primary copy and a mixture of 'Application copy' and 4096 for secondary copies. Block size has been set to 1024 in some cases and to 'use media type setting' in others. I am wondering what the best practices for these settings are, and whether any of these user-set values are overridden?
Remove DDB with no associated jobs
Hi, I have an old sealed DDB with no more jobs associated with it; it only shows some number of unique blocks left (for a size of 1.18 TB), while secondary blocks is 0 and application size is already 0. How can I do what you proposed: "then remove ALL of the blocks for that store in one big macro prune"? I'd like to get rid of that sealed DDB entirely.
Dedup DB reconstruction job failed
Getting the following when Commvault attempts to reconstruct the DDB:

User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.

Is there any way around this error so I can get backups going again? It has tried a couple of times over multiple days.
Create a tape job for new full backup
Hello, I need to create a backup job that writes all full backups to tape in a new physical library the customer has purchased. Today, all backups are on disk (a disk library). After the new jobs to tape have executed successfully (3 backup jobs to tape will be done), I need to delete the old full backup on disk, which is by now very old, and run a new full backup, because the new one will be smaller and deleting the old data will free up disk space. I'm waiting for your best practices on how to do this. Another question: should I create one storage policy for the library, with permanent retention, and include all tapes in the same SP, or create separate ones, for example for Database, Exchange, etc.? @Mike Struening
Auxiliary copy slow and architecture review
Hi everyone, I have a customer with this exact problem. After the Commvault refresh/reconfiguration, which was concluded some months back, we had issues backing up to tape, which we finally traced to the tape drives we were using at the time. We have resolved the issue with the drives, but copy to tape is still running at a very low speed (as low as 13 GB/hr). Kindly assist us with the below:
- We have sister companies running the same Commvault setup, and we want to know how their setup differs from ours in a way that makes it perform better.
- We need to review our architecture to be sure the copy to disk and copy to tape can happen at the same time from the primary source.
- The difference in storage, in terms of I/O and disk RPM, between what we have here and what our sister companies have.
Is there any way you can help, please?
Disk Library IOPS requirement
Hi all, do the IOPS numbers in the second table below correspond to the test conditions specified here?

Excerpt from: https://documentation.commvault.com/11.24/expert/8852_testing_iops_for_disk_library_mount_path_with_iometer.html

Access Specification Settings:
- Percent Read: 50
- Percent Write: 50
- Percent Random Distribution: 50
- Percent Sequential Distribution: 50
- Transfer Request Size: 64K

The minimum IOPS required for each mount path of the disk library for extra large, large and medium MediaAgents:
- Disk Library: Extra Large 1000 IOPS, Large 1000 IOPS, Medium 800 IOPS