Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,252 Replies
Hi all, after some time we are facing another serious issue: there is no available space on the disk library. Ayayay. We tried to find out whether there are any unprunable jobs. There were some, so we set the option to ignore cycle retention for disabled subclients. Unfortunately, only a small number of GBs were aged. Now the question is what to do next. I have no idea how to find out what can be deleted in order to free up more space for the backups. Moreover, there is quite a high deduplication ratio, so even manual deletion of some jobs may not help. One possibly useful detail: over the last month the data grew by circa 10 TB, which is a 10 percent increase. Is there a way to figure out which data caused this increase? Is there any general rule or useful tool within Commvault for fighting this issue?
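A short, hedged sketch of why pruning individual jobs may barely move the needle here: with a high deduplication ratio, deleting a job only frees the blocks that no other job still references. The function and numbers below are purely illustrative, not a Commvault API:

```python
# Back-of-envelope estimate (illustrative, not a Commvault API) of the space
# actually freed when pruning jobs from a deduplicated disk library.

def reclaimable_tb(app_size_tb: float, dedup_ratio: float, unique_fraction: float) -> float:
    """Disk space freed by pruning jobs totalling app_size_tb (application size).

    dedup_ratio     -- e.g. 10.0 for a 10:1 ratio of application size to size on disk
    unique_fraction -- fraction of the jobs' blocks that no surviving job references
    """
    size_on_disk_tb = app_size_tb / dedup_ratio
    return size_on_disk_tb * unique_fraction

# Deleting 10 TB of application data at a 10:1 ratio, where only 20% of the
# blocks are unique to those jobs, frees roughly 0.2 TB -- not 10 TB:
print(f"{reclaimable_tb(10.0, 10.0, 0.2):.1f} TB")  # -> 0.2 TB
```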
Hi all, we have the following issue. LTO8 tapes were incorrectly labelled with LTO6 barcodes. The tapes were then relabelled with the correct barcodes. However, Commvault cannot recognise the relabelled tapes: it reports that the barcodes have already been used, and the tapes were moved to the retired media group. We tried Discover, Full Scan (inventory), and Update Barcode for the given tapes, without success. Is there a workaround for this issue, or is the only option to somehow fix the tapes within the tape library itself?
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes. I would like to copy data from the LTO4 to the LTO7 tapes. I followed the Media Refresh documentation (commvault.com): I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full, and selected Pick for Refresh. I ran the Media Refresh job and chose the Start New Media tab. Now I get the error: "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the Media Refresh operation? Best regards, Elizabeta
Hi, after a power failure the tape library showed offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, the drive now shows a mounted media although it is actually empty. I can't mount a tape since Commvault thinks there is still a tape inside the drive. How can I resolve this?
We are working on a Commvault with FalconStor VTL POC. Our CommServe runs SP20.17. At first we were provided with an emulated HP tape library with LTO4 drives. When we initiated the backup, it failed with the error below. We then tried an emulated HP tape library with LTO7 drives and received almost the same error. We also updated the tape drive driver on Windows, but the result is the same. FYI, we are using Windows 2016 with Commvault SP20.17 on the CommServe server, which runs on a VM. We need someone's opinion. Please help. Thanks.
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the storage policy, disk library, or MediaAgent properties. The backup jobs are also fine. I counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. But how can it be that the size on disk is 55.74 TB? Has anyone had the same situation?
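One common explanation, offered as an assumption rather than a diagnosis: "data written" is reported per storage policy copy, while "size on disk" is measured at the mount path, so the two only match if exactly one copy writes there and all aged data has been physically pruned. A toy reconciliation with made-up numbers:

```python
# Hypothetical breakdown of "size on disk" vs "data written" -- every figure
# below is invented for illustration; check which copies, DDB stores, and
# non-Commvault data actually share the volume in your environment.

data_written_per_copy_tb = {
    "SP_Main/Primary": 24.77,   # the copy being examined
    "SP_VSA/Primary":  18.20,   # other copies pointing at the same mount path
    "SP_SQL/Primary":   7.90,
}
aged_awaiting_prune_tb = 3.37   # aged logically, not yet pruned physically
ddb_index_overhead_tb = 1.50    # DDB, index cache, chunk metadata on the volume

size_on_disk_tb = sum(data_written_per_copy_tb.values()) \
                  + aged_awaiting_prune_tb + ddb_index_overhead_tb
print(f"{size_on_disk_tb:.2f} TB")  # -> 55.74 TB in this invented breakdown
```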
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another. Despite the successful completion of the aux copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the primary copy for those 4 jobs and it completed successfully. I tried Re-Copy, and then Do Not Copy followed by Pick for Copy, but the status stays the same. "All Backups" is selected in the copy policy. What should I check? Best regards.
Hi, I'm trying to run a data verification, but the job fails with this error:
Error Code: [13:138] Description: Error occurred while processing chunk  in media [V_], at the time of error in library [LibStorage] and mount path [[LibStorage] R:\], for storage policy [SP_BackupSystem] copy [Aux_Disk] MediaAgent : Backup job . Mount path inaccessible. Source: , Process: AuxCopyMgr
We are trying to move a mount path to a new MediaAgent, but it has been failing ever since we stopped the job. The disk became full, which left the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks
The error we get is: "Move Mount Path Job Failed, Reason : The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation". Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Could someone explain the process flow for allocating readers for an aux copy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and the amount of RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (e.g. for VSA). For example, my VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst, and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers assigned to the aux copy. My goal is to increase the readers for the aux copy jobs to improve performance. My aux copy with 38 rea…
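To make the sizing rule quoted above concrete, here is a minimal sketch that applies it, assuming the effective stream count is simply the smallest of the CPU, RAM, and writer-side limits; the real allocation inside CVJobReplicatorODS is more involved:

```python
# Sketch of the rule of thumb quoted above: 10 streams per CPU, 100 MB of RAM
# per stream, capped by the write target (drives x multiplexing factor).

def max_aux_copy_streams(cpu_count: int, ram_mb: int,
                         tape_drives: int = 2, multiplexing: int = 25) -> int:
    by_cpu = cpu_count * 10                 # 10 streams per CPU
    by_ram = ram_mb // 100                  # 100 MB of RAM per stream
    by_target = tape_drives * multiplexing  # writer-side ceiling
    return min(by_cpu, by_ram, by_target)

# An 8-CPU / 32 GB proxy copying to 2 drives with MUX 25:
print(max_aux_copy_streams(8, 32 * 1024))   # -> min(80, 327, 50) = 50
```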
Hi all, I will try here. We have 2 MediaAgents in Azure that act as proxies as well. When we try to back up a VM from Azure (to cloud storage), the job completes if we configure MA 1 as the proxy, but when we configure MA 2 as the proxy it fails with a "failed to fetch a valid sas token" error. Does anyone have a clue what causes this error? Both MAs have the same OS, disks, permissions, and version. There are no drops on the firewall, and the network settings are configured (client/CS).
Hi, I have an old sealed DDB with no more jobs associated to it; it only shows some number of unique blocks left (for a size of 1.18 TB), secondary blocks is 0, and the application size is already at 0. How can I then do what you proposed: "then remove ALL of the blocks for that store in one big macro prune"? I'd like to get rid of that sealed DDB entirely.
Hello, I need to create a backup job that sends all full backups to tape in a new physical library the customer has purchased. Today all backups are on disk (file library). After the new jobs to tape have executed successfully (3 backup jobs to tape will be done), I need to erase the old full backup, which is very old, and run a new full backup, because it will be smaller and will free up more disk space. I'm waiting, guys, for the best practices to do this. Another question: should I create the storage policy for the library with permanent retention and include all tapes in the same SP, or create separate ones, for example for Database, Exchange, etc.? @Mike Struening
Hi there! There is a System Created DDB Verification schedule policy (Data Verification). In our case it starts every day at 6 AM. Is it possible to decrease the frequency of the schedule to, e.g., once a week without any risk? What is the optimal frequency for the System Created DDB Verification schedule policy? I am asking because there is quite a big amount of data to process during this task, which can reduce the performance of other tasks.
Hi everyone. I have a customer with this exact problem. After the Commvault refresh/reconfiguration, which was concluded some months back, we had issues backing up to tape, which we finally understood were related to the tape drives we were using at the time. We have resolved the issue with the drives, but copy to tape is still running at a very low speed (as low as 13 GB/hr). Kindly assist us with the following. We have sister companies running the same Commvault, and we want to know how their setup differs from ours to make theirs perform better. We need to review our architecture to be sure the copy to disk and copy to tape can happen at the same time from the primary source. We would also like to compare the storage, in terms of I/O and disk RPM, between what we have here and what our sister companies have. Is there any way you can help us, please?
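For context, some quick arithmetic on that 13 GB/hr figure (illustrative assumptions only): modern LTO drives stream at hundreds of MB/s natively, so a rate this low almost always means the drive is being starved by the source side (disk reads, dedup rehydration, or the network) rather than by the tape hardware itself:

```python
# Framing the problem with rough numbers; the 300 MB/s native LTO speed is a
# typical ballpark figure, not a measurement from this environment.

observed_gb_per_hr = 13
observed_mb_per_s = observed_gb_per_hr * 1024 / 3600
print(f"observed: {observed_mb_per_s:.1f} MB/s")           # ~3.7 MB/s

lto_native_mb_per_s = 300
print(f"drive utilisation: {observed_mb_per_s / lto_native_mb_per_s:.1%}")  # ~1.2%

# Copying 10 TB at the observed rate:
print(f"{10 * 1024 / observed_gb_per_hr / 24:.0f} days")   # ~33 days
```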
Getting the following when Commvault attempts to reconstruct the DDB:
User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.
Is there any way around this error so I can get backups going again? It has retried a couple of times over multiple days.
Hello, after upgrading from V11 FR20 to V11 FR24 I noticed a new schedule policy named "System Created DDB Space Reclamation schedule policy", which was disabled by default. I basically know what the Space Reclamation functionality is about, and the policy has all our deduplication engines assigned. But when I initialized this policy it finished in less than a quarter of an hour, and from the logs only one dedup engine was processed. Another manual start just gives the error message below. Can anybody explain to me what this schedule policy is about and how it is supposed to work?
Hi! I'm trying to add S3 Compatible Storage as a cloud library, but I get this error:
3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK
I already troubleshot it and was able to successfully add the storage as a cloud library using "nCloudServerCertificateNameCheck", mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate, issued by CN = R3, O = Let's Encrypt, C = US, with root cert CN = ISRG Root X1, O = Internet Security Research Group, C = US. So I am wondering whether, instead of ignoring all certificates, I could just add this one valid certificate to Commvault so it trusts this provider and allows me to configure the disk library. Is this possible? Also, not sure if it's related since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh install.
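In principle yes: PEM CA bundles are just concatenated certificates, so appending the ISRG Root X1 PEM to the bundle the MediaAgent's curl uses should let the chain validate without disabling checks. A hedged sketch; the bundle path below is an assumption, so verify where curl-ca-bundle.crt actually lives on your MA and keep a backup:

```python
# Append the provider's root CA to the curl CA bundle (sketch only).
# Both paths are assumptions for illustration -- adjust to your install.

import shutil

BUNDLE = r"C:\Program Files\Commvault\ContentStore\Base\curl-ca-bundle.crt"  # assumed path
ROOT_CA = "isrg-root-x1.pem"  # ISRG Root X1 PEM, downloaded from letsencrypt.org

shutil.copy2(BUNDLE, BUNDLE + ".bak")      # keep a backup of the original bundle
with open(ROOT_CA) as src, open(BUNDLE, "a") as dst:
    dst.write("\n" + src.read())           # PEM bundles are simply concatenated certs
```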
Hello, we have some issues with freeing up storage space. When I run the Forecast report I don't see anything wrong; most jobs are retained under basic days or the last of the week/month. On the web console, under Storage - Data Retention, I see 80 TB older than a year, but I really cannot find that data under the SP. Last week, under SP > Summary > Storage Policy / Copy Space Recovery Prediction, I saw 35 TB on 16-4 and 30 TB on 17-4; below are the predictions for this week. After aging I saw a lot of prunable records in the DDB; I ran DDB verification and afterwards really didn't see any space freed on storage. I ran Space Reclamation at level 1 with "Clear orphan data", and nothing changed on storage. I will upload the DataAging log; if you need to see the SIDBPrune log I can upload it as well.
Hi! Commvault FR 11.20 here. We'll be upgrading to FR 11.24 in the fall. We'll also be standing up new MediaAgents with new DDBs. Is anyone aware of a write-up on DDB version 5? All I can find are references to upgrading from DDB version 4 to version 5 (though they don't tell you how to identify which version you're currently using): https://documentation.commvault.com/11.24/expert/134342_upgrading_deduplication_database_to_v5.html I also can't find any references to it in the release notes, though I think it first shows up in FR 11.20. It appears to be far more scalable than previous versions. I just want to read an explanation of how it works. Thanks!
Hello, we have multiple sites, and all these sites have different WAN bandwidths. All are DASH copying to a single location, and all these locations have different working hours. We want to create multiple bandwidth throttling rules. What would be the best way to approach this? Should we create the rules on the source MediaAgent, throttling the send traffic? Thank you.