Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 731 Topics
- 3,543 Replies
Do you see this error for your jobs? When we updated from 11.20.32 to 11.20.60 we started getting Cache-Database errors on various backup types: FileSystem iDataAgent, NDMP backups and others. The exact error we get is:

Error Code: [40:110]
Description: Client-side deduplication enabled job found Cache-Database and Deduplication-Database are out of sync.

I have a ticket open with support, but I am wondering if the issue is unique to us or if it is happening to other customers as well. Thank you,
Hello, I need to create backup jobs writing all full backups to tape in a new physical library that the customer has purchased. Today all backups are on disk (a File Library). After the new jobs to tape have run successfully (3 backup jobs to tape will be done), I need to erase the old full backup on disk and run a new full backup, because the new full will be smaller and will free up more space on disk. I'm waiting on your best practices for doing this. Another question: I will create the Storage Policy for the library with permanent retention - should I include all tapes in the same storage policy, or create separate ones, for example: Database, Exchange, etc.? @Mike Struening
Hi there, could you please advise what to do when all drives within the tape library go offline? The offline reason seems very strange:

[Cannot communicate with Media Mount Manager Service. Please ensure that:
a. The MediaAgent is reachable from the CommServe.
b. All MediaAgent services are running.]

a - Checked that the MA is reachable
b - Checked that all services are running

What can help us bring the drives back online? We also verified in Windows Device Manager that the drives are present and visible.
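For step (a) in the error text, a quick scripted reachability check can rule out network problems before digging into services. This is a generic sketch, not a Commvault tool; the port number and hostname are assumptions to substitute with your environment's actual values.

```python
import socket

def tcp_reachable(host: str, port: int = 8400, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Port 8400 is used here only as an assumed CommCell communication port;
    replace it with whatever port your MediaAgent actually listens on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical MediaAgent hostname):
# print(tcp_reachable("ma01.example.com"))
```

If this returns False from the CommServe, the problem is network-level rather than the Media Mount Manager Service itself.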
Hi, I have a question regarding the implementation of a cloud library with Scality RING. We can create two types of mount path: S3 Compatible Storage or Scality Ring. Which one is required? (I have some cloud libraries already created with the S3 Compatible Storage type instead of the Scality Ring type.) Is there a difference between them? Kind regards, Christophe
I'm a little confused about multiple partitions and/or DDB limitations. Limits are mentioned in several places, such as:

1) Hardware Specifications for Deduplication Mode - https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html - which mentions "2 DDB Disks" per MA, and

2) Configuring Additional Partitions for a Deduplication Database - https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html - which mentions "30 DDB partitions" per MA.

Also, in a recent discussion with a PS member I was told that a partition should be treated as a DDB itself. All of this creates a lot of confusion:

a) Is a "DDB Disk" the same as a DDB?
b) Is a "DDB partition" the same as a DDB?

If you look at 1), under Scaling and Resiliency there is this information: "The back-end size of the data. For example: Each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud extra large MediaAgent." That means
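The quoted sizing ratios can be turned into simple back-end capacity arithmetic. This is a sketch using only the figures quoted from the linked hardware-spec page (2 TiB per DDB disk, 250 TiB disk / 500 TiB cloud back-end per DDB disk on an extra-large MediaAgent); treat it as illustrative, not authoritative sizing guidance.

```python
# Back-end capacity per MediaAgent, from the ratios quoted in the post:
# each 2 TiB DDB disk addresses up to 250 TiB of disk back-end, or
# 500 TiB of cloud back-end (extra-large MediaAgent).
BACKEND_PER_DDB_DISK_TIB = {"disk": 250, "cloud": 500}

def max_backend_tib(num_ddb_disks: int, target: str = "disk") -> int:
    """Maximum back-end TiB addressable with this many DDB disks."""
    return num_ddb_disks * BACKEND_PER_DDB_DISK_TIB[target]

# With the documented maximum of 2 DDB disks per MediaAgent:
print(max_backend_tib(2, "disk"))   # 500 TiB of disk back-end
print(max_backend_tib(2, "cloud"))  # 1000 TiB of cloud back-end
```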
Hello community, we are trying to migrate a SAN storage library to an S3 cloud library. Per suggestions, we followed these steps:

1. Configured a new global dedupe storage policy using the new S3 bucket and MA
2. Configured new secondary copies in the existing storage policies pointing to the new S3 dedupe storage
3. Ran aux copy

We have a huge amount of data, and we contacted Commvault support to estimate when the aux copy will complete. Currently the aux copy has been running for more than 4 months. Support mentioned the following points:

- Your current configuration allows the selection and prioritization of new backups over older data
- You are also configured to copy all data to cloud, and dedupe is not being used for the aux copy

How do we make sure we have an optimal aux copy configuration? Please share your inputs. Thanks in advance, Spartan9
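For a rough completion estimate while waiting on support, the remaining-data-over-throughput arithmetic is straightforward. Every number below is a hypothetical placeholder; substitute the actual figures from your Job Controller before drawing conclusions.

```python
# Back-of-the-envelope ETA for a large aux copy to cloud.
# Both inputs are hypothetical placeholders for illustration only.
remaining_data_tib = 500        # data still to be copied (hypothetical)
throughput_gb_per_hr = 150      # sustained aux-copy throughput (hypothetical)

remaining_gb = remaining_data_tib * 1024
hours_left = remaining_gb / throughput_gb_per_hr
days_left = round(hours_left / 24, 1)
print(days_left)  # days remaining at that sustained rate
```

Comparing the computed rate against the elapsed four months quickly shows whether the copy is throughput-bound or being starved by the prioritization of new backups that support described.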
Hi all, some days ago we temporarily disabled deduplication only on a storage policy copy (Storage Policy tab). Now we want to enable deduplication again, but when we uncheck the option (Temporarily disable deduplication) we see the message: "Deduplication cannot be enabled on dependent copy when disabled on Storage Pool." This is very strange. No changes have been made to the Storage Pool; moreover, there is no option to uncheck Temporarily disable deduplication on the Storage Pool tab. The only thing that might be in play is that the DDB is in a Maintenance state because a verification is in progress. In the Storage Pool window there is no option to switch the disabled deduplication back on.
Is it possible to restrict Commvault to use media from designated slots only (not to scan/use all library slots)?
I am running a migration from another vendor to CV. For the time being, is it possible to restrict Commvault to use media from designated slots only (not to scan/use all library slots)? Just to avoid conflict, as the tape library would contain both CV and other-vendor media.
Hello, the customer bought new MS SQL servers and is migrating to the 2 new servers.

Scenario today: 2 MS SQL servers (server 1: production, server 2: copy)
New scenario: 2 MS SQL servers with a new version of the OS and database (server 1: production, server 2: copy)

I need to copy the backup job configuration so the new servers use the same configuration as the current backups: retention, backup jobs, DDB, schedules. Does anyone have a procedure or best practices for doing this?
Scenario: we're creating an aux copy for all existing backup jobs from a particular storage policy. During the process we faced the following error message:

Error Code: 13:187
Description: Some backup jobs are skipped because they are aged on destination copy or marked do not copy.
Source: commserve, Process: JobManager

The aux copy was configured to take all existing jobs from the primary, but it only ran and moved 10% of the total data on the primary copy and skipped the rest. Any idea how to move the skipped jobs? Regards, Ramon.
Hi, we are in the middle of a CV deployment. Initially we built a single MA with a 4-partition DDB in the Azure cloud. As data grew, we moved two partitions to a new MediaAgent, so it now runs on two MAs with two DDB disks each. Now both MAs have reached their bottleneck and we are planning to scale up further, but management has only allowed me to add one more MA. So one possibility is running the backups with the four-partition DDB split across three MAs as shown below:

MA1 - single DDB disk
MA2 - single DDB disk
MA3 - two DDB disks

I am a bit worried about doing that, as I think it may cause some instability between the MAs, but I couldn't find relevant CV documentation. Can you suggest whether the above design makes sense, and will it cause any issues in the future? Thanks, Mani
I have a shared tape library used by the "old" and the "new" Commvault environments. The "new" one is HyperScale. I want the "new" environment to use only tapes with barcodes prefixed with 'NP', and the "old" one to use everything except 'NP'. Hopefully that makes sense! Question: how is this achievable? Cheers
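The selection rule being asked for is a simple barcode-prefix partition. In Commvault this would be achieved through media-management configuration (e.g. barcode patterns on scratch pools), not user code; the sketch below only illustrates the matching rule with invented barcodes.

```python
def belongs_to_new_cell(barcode: str) -> bool:
    """'NP'-prefixed tapes go to the new (HyperScale) cell; the rest stay old."""
    return barcode.startswith("NP")

# Hypothetical barcodes for illustration:
tapes = ["NP0001", "NP0002", "AB0001", "CD0042"]
new_cell = [t for t in tapes if belongs_to_new_cell(t)]
old_cell = [t for t in tapes if not belongs_to_new_cell(t)]
print(new_cell)  # ['NP0001', 'NP0002']
print(old_cell)  # ['AB0001', 'CD0042']
```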
We have a partitioned DDB that uses a disk library with 12 mount paths. Spill and fill has been configured. An Oracle DB is backed up with 4 streams/channels. The backup allocates 4 streams, but these streams are all allocated to one mount path via one MA. How can the streams be spread across multiple mount paths such that 2 go via MA1 and 2 via MA2?

A second Oracle DB is being backed up. It takes the same partition as the job above and uses another mount path, again with all 4 streams going to the same mount path. Any ideas how to make Commvault distribute the streams evenly?
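To make the question concrete, the difference between streams rotating across mount paths and streams piling onto one path can be modeled in a few lines. This is purely an illustration of the two allocation behaviors; the (MA, mount path) names are invented, and Commvault's actual allocator is not public.

```python
from itertools import cycle

# Hypothetical (MediaAgent, mount path) pairs for illustration:
mount_paths = [("MA1", "mp1"), ("MA1", "mp2"), ("MA2", "mp3"), ("MA2", "mp4")]

def round_robin(streams: int):
    """Desired behavior: rotate each stream onto the next (MA, mount path)."""
    rr = cycle(mount_paths)
    return [next(rr) for _ in range(streams)]

def single_path(streams: int):
    """Observed behavior: all streams land on one mount path via one MA."""
    return [mount_paths[0]] * streams

print(round_robin(4))  # 2 streams via MA1, 2 via MA2
print(single_path(4))  # all 4 streams via MA1/mp1
```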
Came across this setting and was wondering when it should be used. What is a possible use case for tweaking it?

Configure Disk Volume Size: disk volumes are created based on the volume size. When the size of a volume reaches the maximum, a new volume is created. The maximum size of a disk volume is 25 GB by default, and this value can be modified:

1. On the ribbon in the CommCell Console, click the Storage tab, and then click Media Management.
2. Click the Resource Manager Configuration tab.
3. In the Disk volume physical size high watermark in GB box, enter the disk volume size.
4. Click OK.

https://documentation.commvault.com/11.24/expert/9319_disk_libraries_advanced.html#b9365_use_unbuffered_io
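The rollover behavior the setting controls can be sketched as a simple model: writes accumulate in the current volume until they would cross the watermark, then a new volume is started. This is only a conceptual illustration of the documented behavior, not Commvault's actual implementation.

```python
HIGH_WATERMARK_GB = 25  # default "Disk volume physical size high watermark"

class DiskVolumeWriter:
    """Toy model: roll over to a new volume once the watermark is crossed."""

    def __init__(self, watermark_gb: int = HIGH_WATERMARK_GB):
        self.watermark = watermark_gb
        self.volumes = [0]  # GB written to each volume; start with one volume

    def write(self, size_gb: int) -> None:
        if self.volumes[-1] + size_gb > self.watermark:
            self.volumes.append(0)  # create a new volume
        self.volumes[-1] += size_gb

w = DiskVolumeWriter()
for _ in range(4):
    w.write(10)      # four 10 GB writes against a 25 GB watermark
print(len(w.volumes))  # 2 volumes: the third write triggers a rollover
```

A smaller watermark means more, smaller volumes (more metadata and volume churn); a larger one means fewer, bigger volumes, which is one reason the default is occasionally tuned.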
Hi there, I want to share my experience with DDB reconstruction. My colleagues started a DDB reconstruction because the DDB was not in good condition. There were no current backups of the DDB (the last backup was 14 days old) and possibly some other error messages were active. They decided to start a full reconstruction; maybe there was no option to take a manual backup of the DDB - though the database had not been updated with new records anyway. The thing is that the reconstruction is very time-consuming, and moreover, while the DDB is down, backup jobs are not possible. The workaround we used was to temporarily disable deduplication. Another caveat is that we are running out of disk space. And that is what a disaster looks like. To finish, my hypothetical questions: Is the DDB needed for restoring data? I wouldn't say so. And what would happen if the broken DDB was deleted and a completely new one was built?
Hi there, most likely we will need to temporarily change the disk library for a couple of storage policies because of a slow DDB reconstruction. My question is: how do I change the disk library for the primary storage policy copy? There is no drop-down menu to change the disk library in the Default Destination field. Does that mean I need to create a new secondary copy and then promote the newly created secondary copy to be the primary one? Is there any potential data loss? Do you have any hints or caveats for this task?
Hi, after a power failure the tape library is showing offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, a drive is showing a tape loaded when in actuality it is empty. I can't mount a tape since Commvault thinks there is still a tape inside the drive. How do I resolve this issue?
I’m new to CV and still trying to sort out Commcell Console versus Command Center, so I appreciate your patience. After 4 years with Veeam and two decades with Data Protector, Commvault is proving to be quite a different animal.My first concern is why it takes googling some arcane code (ActivateHPECatalyst) to enter in the Commcell Console properties to make visible the StoreOnce option for library creation in the UI. What is the rationale for hiding the StoreOnce option in the first place?Now that I’ve added a Catalyst-backed disk library in Storage Resources > Libraries via Commcell Console, I go back over to Command Center, look at Storage > Disk, and I do not see my new disk library. How then am I supposed to add it to anything as a backup destination?
Hi, do I have an option to import media from the Catalogic app? I have a customer that migrated to Commvault from Catalogic, and he wants to know if he can import the Catalogic tapes into a Commvault library. He has tapes with the last backups from Catalogic. I think he will have to maintain his old backup system, but I'm not 100% sure.
The auditors want to see whether my backups are encrypted, and I'm not sure where to go in the Commvault GUI to show that. I don't see anything about encryption in the properties for my storage libraries or my storage policies. Where do I show whether or not my backups are encrypted? Ken