Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 764 Topics
- 3,636 Replies
Hi, I tried to run a data verification, but the job fails with this error:

Error Code: [13:138] Description: Error occurred while processing chunk  in media [V_], at the time of error in library [LibStorage] and mount path [[LibStorage] R:\], for storage policy [SP_BackupSystem] copy [Aux_Disk] MediaAgent : Backup job . Mount path inaccessible. Source: , Process: AuxCopyMgr
We are working on a Commvault with FalconStor VTL POC. Our CommServe is running SP20.17. At first we were provided with an emulated HP tape library with LTO4 drives; when we initiated the backup, it failed with the error below. We then tried an emulated HP tape library with LTO7 drives and received almost the same error. We have already updated the tape drive driver on Windows, but the result is the same. FYI, we are using Windows Server 2016 with Commvault SP20.17 on the CommServe server, which runs on a VM. We need a second opinion. Please help. Thanks.
Hello, we have some issues with freeing up storage space. When I run the Forecast report I don't see anything wrong; most jobs are held under basic retention days or last-full-of-the-week/month rules. On the Web Console under Storage > Data Retention I see 80 TB retained for over a year, but I really cannot find that data under the storage policy. Last week, under SP > Summary > Storage Policy / Copy Space storage recovery prediction, I saw 35 TB on 16-4 and 30 TB on 17-4; below are the predictions for this week. After aging I saw a lot of prunable records in the DDB, so I ran a DDB verification, but afterwards I really didn't see any space freed on the storage. I then ran Space Reclamation at level 1 with "Clear orphan data"; nothing changed on the storage. I will upload the DataAging log, and if you need the SIDBPrune log I can upload it as well.
Hello, during a DDB reconstruction process, how does it reconstruct the missing data? In our case, it first performed a restore of the last backup, which was from early that morning. Does it then access the library storage directly to reconstruct the missing data, or does it access the CommServe? I'm asking because I've been told conflicting information by different Commvault technicians. I just want to make sure that I understand this process clearly. Thank you. Bill
Good day, please advise.

Error Code: [7:314] The job has failed because the VSS snapshot could not be created.

I have checked vssadmin list writers: no errors found (State: Stable, Last error: No error). The DDB backup disk has 260 GB of free space. In Process Manager, the VSS Provider service is running.

Warm regards,
Glenn Ngobeni
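For re-checking all writer states in one pass (rather than eyeballing the vssadmin output), here is a minimal sketch that shells out to the real vssadmin list writers command; the parsing is an assumption about its text output format, and the script must run from an elevated prompt:

```python
import re
import subprocess

def unstable_vss_writers():
    """Run 'vssadmin list writers' (elevated prompt required) and return
    (writer_name, state) pairs for any writer that is not Stable."""
    out = subprocess.run(
        ["vssadmin", "list", "writers"],
        capture_output=True, text=True, check=True,
    ).stdout
    problems = []
    # Each writer block starts with "Writer name: '...'" and contains
    # a line like "State: [1] Stable".
    for block in out.split("Writer name:")[1:]:
        name = block.split("'")[1] if "'" in block else "<unknown>"
        m = re.search(r"State:\s*\[\d+\]\s*(\S+)", block)
        state = m.group(1) if m else "<unparsed>"
        if state != "Stable":
            problems.append((name, state))
    return problems

if __name__ == "__main__":
    for name, state in unstable_vss_writers():
        print(f"Writer {name!r} is in state {state}")
```

If everything really is Stable, the next places to look for error 7:314 are usually the provider and shadow storage sides (vssadmin list providers / vssadmin list shadowstorage).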
Hello, we have created a test Azure Blob storage library which should be used for a deduplicated secondary copy. There is an immutability policy set on the container. According to the Commvault documentation, we set the container retention to twice the storage policy copy retention, and then set "Create new DDB every N days" in the DDB properties to the value of the storage policy copy retention. Across the backup cycles there remain sealed DDBs which don't reference any jobs (all expired). Then, at some point, they are automatically removed (and then their baselines are removed from the cloud storage). These baselines consume a very large amount of cloud space (and cost): there are 3 to 4 baselines in the cloud at any time during the backup cycles. Does anybody have experience with cloud library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backups? Which process in Commvault decides when a sealed DDB will be removed? After the test we would like to give a realisti…
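On the "why 3 to 4 baselines" question, the overlap follows from the arithmetic of sealing plus immutability, independently of which Commvault process does the pruning. A back-of-the-envelope sketch under stated assumptions (seal interval equal to copy retention R, container lock of 2R, and a baseline only deletable once its last job has aged off and its blob locks have expired — my reading, not confirmed Commvault internals):

```python
import math

# Assumptions (illustrative, not Commvault internals):
R = 30               # copy retention in days (example value)
seal_every = R       # "create new DDB every N days" set to the retention
lock = 2 * R         # container immutability set to twice the retention

# A store sealed at day t holds jobs written during [t - seal_every, t].
# Its last job ages off around t + R, and blobs written just before the
# seal stay locked until roughly t + lock, so the baseline survives for:
baseline_lifetime = seal_every + max(R, lock)  # ~90 days from store creation

# Baselines alive at once ~= lifetime / seal interval (this count already
# includes the active store currently being written):
print(math.ceil(baseline_lifetime / seal_every))  # -> 3
```

That lands on 3 full baselines coexisting in steady state, with a 4th briefly present around each seal boundary, which would match the 3-4x space observation above.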
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively.

Issue: After upgrading to 11.20 or higher, NFS mount paths show as offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options".

CVMA.log on the MediaAgent will show:

102415 1901f 01/13 19:06:53 ### WORKER [96/0/0 ] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=18.104.22.168,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]

Cause: Checking the NFS mount options by running mount -v will reveal that the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm; however, 11.20 has enforced the check. This was done due to issues where…
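For anyone who wants to sweep a Linux MediaAgent for this condition ahead of the upgrade, a small illustrative sketch (not a Commvault tool) that reads /proc/mounts and flags NFS mounts whose local_lock option is anything other than none:

```python
def nfs_mounts_with_bad_local_lock(mounts_file="/proc/mounts"):
    """Return (mount_point, local_lock_value) for NFS mounts whose
    local_lock option is set to something other than 'none'."""
    flagged = []
    with open(mounts_file) as f:
        for line in f:
            # /proc/mounts fields: device mountpoint fstype options dump pass
            device, mountpoint, fstype, options = line.split()[:4]
            if not fstype.startswith("nfs"):
                continue
            opts = dict(
                opt.split("=", 1) if "=" in opt else (opt, "")
                for opt in options.split(",")
            )
            if opts.get("local_lock", "none") != "none":
                flagged.append((mountpoint, opts["local_lock"]))
    return flagged

if __name__ == "__main__":
    for mountpoint, value in nfs_mounts_with_bad_local_lock():
        print(f"{mountpoint}: local_lock={value} (expected local_lock=none)")
```

The fix itself is the documented one: remount (or update fstab / the automated mount options) with local_lock=none.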
Hi all, we have the following issue. Some LTO8 tapes were mistakenly labelled with LTO6 barcodes. The tapes have since been relabelled with the right barcodes. However, Commvault cannot recognise the newly relabelled tapes; it says the barcodes have already been used, and the tapes were moved to the Retired media group. We tried Discover, Full Scan (inventory), and Update Barcode for the given tapes without success. Is there a workaround for this issue, or is the only way to somehow fix the tapes within the tape library?
Could someone explain the process flow for allocating readers for an aux copy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and the RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (for VSA, for example). E.g. my VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers that will be assigned to the aux copy. My goal is to increase the readers for the aux copy jobs to improve performance. My aux copy with 38 rea…
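On the CPU/RAM rule of thumb quoted above (10 streams per CPU, 100 MB of memory per stream), here is a worked sketch of the ceiling it implies for a proxy, before any per-copy stream limits, multiplexing factors, or drive counts come into play; the helper itself is only illustrative:

```python
def max_streams(cpus, ram_gb, streams_per_cpu=10, mb_per_stream=100):
    """Upper bound on concurrent streams a proxy can host under the
    '10 streams per CPU, 100 MB of RAM per stream' rule of thumb."""
    by_cpu = cpus * streams_per_cpu
    by_ram = int(ram_gb * 1024 // mb_per_stream)
    return min(by_cpu, by_ram)

# An 8-CPU / 32 GB proxy: CPU allows 80 streams, RAM allows 327,
# so CPU is the binding limit at 80 streams.
print(max_streams(cpus=8, ram_gb=32))  # -> 80
```

In practice the reader count observed (12-38) would presumably be capped further by the copy's configured stream count and the stream layout of the source jobs being copied.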
Hi! I'm trying to add S3-compatible storage as a cloud library, but I get this error:

3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK

I already troubleshot it and was able to successfully add the storage as a cloud library using "nCloudServerCertificateNameCheck", mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate, issued by CN = R3, O = Let's Encrypt, C = US, with root cert CN = ISRG Root X1, O = Internet Security Research Group, C = US. So I am wondering if, instead of ignoring all possible certificates, I could just add this one valid certificate to Commvault so that it trusts this provider and lets me configure the library. Is this possible? Also, not sure if this is related, since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh in…
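One way to check, before touching the MediaAgent, whether a given CA bundle would actually validate that provider's chain is a quick TLS handshake with Python's ssl module; verification failure raises an exception, mirroring curl's CURLcode 60. The endpoint host and bundle path below are placeholders:

```python
import socket
import ssl

def check_tls(host, port=443, ca_bundle=None):
    """TLS-connect to host:port, verifying the server certificate against
    ca_bundle (or the system trust store if None). Raises ssl.SSLError
    if verification fails."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(x[0] for x in tls.getpeercert()["issuer"])
            print(f"Verified OK, issuer: {issuer.get('organizationName')}")

# Placeholder endpoint and bundle -- e.g. a copy of curl-ca-bundle.crt
# with the ISRG Root X1 PEM appended:
check_tls("s3.example-provider.com", ca_bundle="test-ca-bundle.crt")
```

If the handshake verifies with a bundle that includes ISRG Root X1 but fails with the 2016-dated one, that points at the stale curl-ca-bundle.crt rather than at the provider's certificate.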
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes, and I would like to copy the data from the LTO4 to the LTO7 tapes. I worked according to the Media Refresh documentation (commvault.com): I enabled Media Refresh on the storage policy copy, marked the full media for refresh ("Pick for Refresh"), then ran a Media Refresh job and chose "Start new media". Now I get the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the media refresh operation? Best regards, Elizabeta
Hi, after a power failure the tape library showed as offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, one drive shows a medium loaded when in actuality it is empty. I can't mount a tape, since Commvault thinks there's still a tape inside the drive. How do I resolve this issue?
Our weekly secondary aux copy has been stuck at 30% since this weekend (so it is blocking all the primary disk-to-disk incremental copies), with the two error messages below. Thinking it might be a port communication issue between the media server (S01190), where the tape library is attached, and the CommCell server (S02116), I did the following port checks between the two servers:

Telnet from the media server (S01190) to the CommCell server (S02116): port 8400 OK, port 8401 OK, port 8403 OK.
Telnet from the CommCell server (S02116) to the media server (S01190): port 8400 OK, port 8401 not OK, port 8403 not OK.

Now, before I speak to our network/security administrators, who have recently installed SentinelOne AV on both of the above servers, I'm wondering if I'm heading in the right direction and whether I have done all the port checks. Thanks, Kelvin
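For repeating that matrix of checks without telnet (often not installed on hardened servers), a small sketch that attempts a TCP connect to each port; the host names are the ones from the post and the port list is just the three tested above:

```python
import socket

def check_ports(host, ports, timeout=5):
    """Try a TCP connection to each port and report reachability,
    roughly equivalent to 'telnet <host> <port>'."""
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} OK")
        except OSError as exc:
            print(f"{host}:{port} NOT OK ({exc})")

for server in ("S01190", "S02116"):
    check_ports(server, [8400, 8401, 8403])
```

Given that 8401/8403 fail in only one direction and an AV/EDR agent was just installed, checking SentinelOne's network control policies on the media server does look like the right direction.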
Hello team, I am getting the error below on the majority of running jobs. I have checked the storage end and also the MediaAgents (the LUNs are attached to the MediaAgent), and everything looks good. What could be the cause of the error?

Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following: 1. Library and drive is functioning correctly. 2. Library and Drive management services are running. 3. All other MediaAgent services are running. 4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console. 5. Cleaning media in Assigned Media Group. Source: hq-vm-commserv, Process: MediaManager
Hello, after upgrading from V11 FR20 to V11 FR24 I noticed a new schedule policy named "System Created DDB Space Reclamation schedule policy", which was disabled by default. I basically know what the Space Reclamation functionality is about, and the policy has all our deduplication engines assigned. But when I first ran this policy, it finished in less than a quarter of an hour, and from the logs only one dedupe engine was processed. Another manual start just gives the error message below. Can anybody explain to me what this schedule policy is about and how it is supposed to work?
Hello all, I am trying to find specific guidelines for block and chunk size settings as they relate to cloud storage; the information I have found generally relates to disk and tape media. I have been reviewing an environment that uses a chunk size of "Application setting" for the primary copy and a mixture of "Application setting" and 4096 for secondary copies. Block size has been set to 1024 in some cases and to "Use media type setting" in others. I am wondering what the best practices for these settings are, and whether any of these user-set values get overridden?
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the storage policy, disk library, or MediaAgent properties... The backup jobs are also fine. I have counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. But how can it be that the size on disk is 55.74 TB? Has anyone had the same situation?
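One thing worth ruling out when "data written" and "size on disk" disagree is what the filesystem itself reports for the mount paths, since allocated space (sparse or preallocated chunk files, data still awaiting pruning) can exceed the logical bytes written. A hedged sketch that walks a mount path and compares apparent size with allocated size — POSIX only, since it relies on st_blocks, and the path is a placeholder:

```python
import os

def apparent_vs_allocated(root):
    """Total logical size (st_size) vs allocated size (st_blocks * 512,
    POSIX only) for every file under root."""
    apparent = allocated = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue  # file pruned mid-walk, dangling link, etc.
            apparent += st.st_size
            allocated += st.st_blocks * 512  # st_blocks counts 512-byte units
    return apparent, allocated

a, b = apparent_vs_allocated("/mnt/disklib/MP1")  # placeholder mount path
print(f"apparent {a / 1e12:.2f} TB, allocated {b / 1e12:.2f} TB")
```

If the filesystem numbers agree with the 55.74 TB, the gap is real data on disk (e.g. chunks pending pruning or orphaned data) rather than a reporting issue.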
Hello, I want to use Commvault to back up 10 laptops. The file types used are: video (.MOV, .MP4, .RAW, .BRAW, AVCHD, BOO, DOO, TBL), editing files (.FCP or .SRT), and audio (.MP3, WAVE, AAC). Could I use deduplication and compression on these files? If yes, what would the ratio be? Thanks. Best regards, Ben
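Most of those formats (MOV/MP4, BRAW, MP3, AAC) are already compressed, so software compression typically gains little on them, while WAVE audio and text-like editing files (.SRT) compress well; dedupe savings depend mainly on how often the same footage is copied between the laptops. As a rough pre-check per file type, here is a hedged sketch that compresses a sample of a file with a generic compressor — this is zlib, not Commvault's actual compression, so treat the numbers as indicative only:

```python
import zlib

def est_compression_ratio(path, sample_bytes=64 * 1024 * 1024):
    """Compress the first sample_bytes of a file with zlib and return
    original/compressed size as a rough compressibility indicator.
    Already-compressed media (MP4, MP3, AAC, ...) lands near 1.0."""
    with open(path, "rb") as f:
        data = f.read(sample_bytes)
    return len(data) / len(zlib.compress(data, level=6))

# Placeholder file names:
for name in ("clip.mov", "track.wav", "subs.srt"):
    print(name, round(est_compression_ratio(name), 2))
```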
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another disk library. Despite the aux copy completing successfully, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the primary copy for the 4 jobs and it completed successfully. I did a Re-Copy, but the status stays the same; I did "Do Not Copy" followed by "Pick for Copy", but it's still the same. "All Backups" is selected in the copy policy. What should I check? Best regards.
Hi all, do the IOPS numbers in the second table below correspond to the test conditions specified here?

Excerpt from: https://documentation.commvault.com/11.24/expert/8852_testing_iops_for_disk_library_mount_path_with_iometer.html

Access Specification Settings:
- Percent Read: 50
- Percent Write: 50
- Percent Random Distribution: 50
- Percent Sequential Distribution: 50
- Transfer Request Size: 64K

The minimum IOPS required for each mount path of the disk library of extra large, large, and medium MediaAgents:
- Extra Large: 1000 IOPS
- Large: 1000 IOPS
- Medium: 800 IOPS
Hi, I'm having issues with throttling network utilisation for aux copies. I have about 10 storage policies, all with secondary copies, and I've configured "Throttle Network Bandwidth (MB/HR)" to 25000 on the secondary copy of every storage policy. If my maths is correct, that would be approx. 50 Mbps per aux copy, so even with all 10 running at the same time utilisation should only be approx. 500 Mbps. However, through network monitoring I can see that when these aux copies are running they use well over the 50 Mbps configured (and saturate the network). Is my maths wrong, or is the throttling configuration not working the way I expect it to? Thanks in advance for any responses.
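For what it's worth, the conversion itself can be checked directly: MB/HR to Mbps is megabytes × 8 (to bits) ÷ 3600 (to seconds), so 25000 MB/HR works out to about 55.6 Mbps per copy, i.e. roughly 556 Mbps across 10 concurrent copies — already a bit above the 500 Mbps estimate before any throttle misbehaviour. A quick worked check:

```python
def mb_per_hr_to_mbps(mb_per_hr):
    """Convert a Commvault-style MB/HR throttle value to megabits/second."""
    return mb_per_hr * 8 / 3600  # MB -> megabits, hours -> seconds

per_copy = mb_per_hr_to_mbps(25000)
print(f"{per_copy:.1f} Mbps per aux copy")        # -> 55.6 Mbps
print(f"{per_copy * 10:.0f} Mbps for 10 copies")  # -> 556 Mbps
```

Note also that an hourly throttle is commonly enforced as an average over an interval rather than an instantaneous cap, which can explain short bursts well above the configured rate.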
We are trying to move a mount path to a new MediaAgent, but it has failed ever since we stopped the original job. The disk became full, leaving the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks

The error that we get is: "Move Mount Path Job Failed, Reason : The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation". Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Getting the following when Commvault attempts to reconstruct the DDB:

User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.

Is there any way around this error so I can get backups going again? It has retried a couple of times over multiple days.
Hi, I have an old sealed DDB with no more jobs associated with it; it only shows some number of unique blocks left (for a size of 1.18 TB), secondary blocks are 0, and the application size is already 0. How can I then do what you proposed, i.e. "then remove ALL of the blocks for that store in one big macro prune"? I'd like to get rid of that sealed DDB entirely.