Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 764 Topics
- 3,634 Replies
Hello everyone,

All my backups are set to replicate from my primary site to my DR site, and all copies have the same retention. Weirdly, the Disk Library Growth report shows the media agent at my primary site holding 178 TB of data while my DR site holds only 132 TB, so I'm wondering where the 46 TB difference comes from.

Question: Is there a way to compare the contents of two media agents to see where the discrepancies are coming from?

Thanks,
Ken
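One low-tech way to run the comparison Ken asks about: export the list of jobs held on each copy (for example, a jobs-on-media or disk-library report saved to CSV from each side) and diff the job IDs. A minimal sketch, assuming you already have the two ID lists; the helper name and sample IDs below are made up:

```python
# Hedged sketch: find jobs present on the primary copy but missing
# from the DR copy, given two job-ID lists exported from reports.
# The function name and the sample job IDs are illustrative only.

def missing_from_dr(primary_jobs, dr_jobs):
    """Job IDs that exist on the primary copy but not on the DR copy."""
    return sorted(set(primary_jobs) - set(dr_jobs))

primary = [8101, 8102, 8103, 8104, 8105]   # made-up job IDs
dr      = [8101, 8102, 8104]
print(missing_from_dr(primary, dr))        # jobs to investigate
```

The set difference only explains jobs that are entirely absent on DR; a size gap can also come from differing deduplication ratios or unpruned aged data, so the job-level diff is a first pass, not a full answer.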
Hi, I have an old sealed DDB with no more jobs associated with it. It only shows some number of unique blocks left (1.18 TB in size), secondary blocks is 0, and application size is already at 0. How can I then do what you proposed: "then remove ALL of the blocks for that store in one big macro prune"? I'd like to get rid of that sealed DDB entirely.
Hi all,

I would like to ask about the utilization of tape drives in our tape library. Although there are 4 tape drives in the master pool, Commvault is choosing only 2 of them; the rest sit idle. Is there any way to force a given storage policy to use a specific tape drive? Would it help to associate a tape drive with only one media agent, so that only that media agent can use the drive? Is the number of streams a variable in this case? Thanks for any suggestions!
Hello! In the next few days I have the opportunity to replace an existing Windows MA (with VSA) with a Linux MA (also with VSA).

The Windows MA has a local (internal) disk library hosting the deduplicated backups of a single storage policy. The target Linux MA also has an internal disk library, plus an LTO8 tape library. Both MAs are online and can communicate over the IP network.

It looks possible to move the DDB from the Windows MA to the Linux MA, but it is not possible to move the Windows mount path to a Linux server. I would like to keep my existing storage policy (which can of course be edited for this purpose) and my existing backups, so that they age normally, ideally without manual action at the expected end of retention.

Has anyone actually performed such a move? How have you done it, or how would you? Of course I could create a new deduplicated storage policy targeting the Linux MA and attach all the clients to it, but that is what I wish to avoid.
Hello,

I have a problem with an auxiliary copy between two media agents with HP StoreOnce. There is a 2x100 MB WAN link between the two MAs. I am getting 13:138 errors, but the job is still running. See attachments for details.

SP 11.22. No firewalling; the firewall is off on both MAs.

If someone could help me. Regards
I'm a little confused about the limits on multiple partitions and/or DDBs. Limits are mentioned in several places, including the one and only Hardware Specifications for Deduplication Mode:

1) https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html, which mentions "2 DDB Disks" per MA, and

2) Configuring Additional Partitions for a Deduplication Database, https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html, which mentions "30 DDB partitions" per MA.

Also, in a recent discussion, a Professional Services member told me that a partition should be treated as a DDB in itself. All of this creates a lot of confusion:

a) Is a "DDB disk" the same as a DDB?
b) Is a "DDB partition" the same as a DDB?

If you look at 1), under Scaling and Resiliency, there is this information about the back-end size of the data: each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud on an extra-large MediaAgent. That means
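To make the quoted sizing figures concrete, here is a back-of-envelope sketch using only the numbers cited from the 11.24 documentation above (each 2 TiB DDB disk supporting up to 250 TiB of back-end data for disk storage, or 500 TiB for cloud, on an extra-large MediaAgent). The function name is mine, and this is arithmetic on the quoted figures, not an official sizing tool:

```python
# Back-of-envelope using figures quoted from the 11.24 docs: each
# 2 TiB DDB disk on an extra-large MediaAgent supports up to 250 TiB
# of back-end data for disk storage (500 TiB for cloud storage).
# Helper name is hypothetical; verify sizing against the docs.

def max_backend_tib(ddb_disks, per_disk_tib=250):
    """Upper bound on back-end data for a given number of DDB disks."""
    return ddb_disks * per_disk_tib

print(max_backend_tib(2))        # disk storage at the documented 2-disk limit
print(max_backend_tib(2, 500))   # cloud storage at the same limit
```

So under the documented two-disk limit, one extra-large MA tops out around 500 TiB back-end for disk (1 PiB for cloud), which is presumably where the per-MA ceiling comes from.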
Hello Community! I am trying to add an HP MSL G3 Series tape library to Commvault using the Expert Storage Configuration. I selected the two media agents (they are already zoned with the tape library) and followed the procedure. Now it asks whether the library has a barcode reader, and I don't know. :) Can you help me please? Thanks!
Hello,

Can someone interpret, or confirm my assumptions about, the switches in the command below?

-j = job ID
-i = deduplication partition
-c = ??
-in = CV instance
-cn = client
-group = ??

"C:\Program Files\Commvault\ContentStore\Base\SIDB2.exe -j 0 -i 74 -c 178 -in Instance001 -cn client-server 1 -group 0"

Also, does anyone know why the command above doesn't have a job ID, or rather why it's 0? Is that just something internal to the SIDB2 process? I've seen the command with -j 0 as well as with a -j xyz job ID that is searchable.

BR,
Henrik
Hey Community, two questions:

Is it possible to change the AWS storage tier from Glacier to Deep Archive on copies that already exist, other than by running an aux copy? We ran 7 months of aux copies to Glacier before we were told to send the data to Deep Archive, so now we have 500 TB sitting in the Glacier class and, of course, costing money. I've started the process of just aux copying them, but it is tedious to do one job ID at a time with the Cloud Storage Archive Recall workflow. Is there a way to add multiple job IDs to the workflow?

Thanks in advance,
John
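This doesn't answer the workflow question, but for context: at the S3 level, outside Commvault entirely, a bucket lifecycle rule can transition existing GLACIER objects to DEEP_ARCHIVE without rewriting keys. Whether doing that underneath a Commvault cloud library is supported is a question for Commvault support, so treat this as a sketch only; the bucket name and rule ID are made up, and the actual API call is shown commented out:

```python
# Hypothetical sketch: an S3 lifecycle rule transitioning existing
# GLACIER objects to DEEP_ARCHIVE. Bucket name and rule ID are made
# up; check with Commvault support before touching a library bucket
# directly, since this happens outside Commvault's control.

lifecycle = {
    "Rules": [{
        "ID": "glacier-to-deep-archive",                 # hypothetical rule ID
        "Filter": {"Prefix": ""},                        # whole bucket
        "Status": "Enabled",
        "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
    }]
}

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-commvault-library-bucket",              # hypothetical bucket
#     LifecycleConfiguration=lifecycle,
# )

print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```

The advantage over per-job aux copies is that the transition is a metadata operation on S3's side, with no recall and re-upload of 500 TB.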
Hi,

does anyone have experience with self-deduplicating VTL backup appliances, like the Quantum DXi4800 (smaller models, ~27 TB)? The devices serve as non-deduped backup targets for Commvault; an aux copy to a physical tape library is performed later.

I got a performance best-practice guide from Quantum. It recommends a small cartridge size of 50/100 GB (like Ultrium 1), but says nothing about how many tape devices to run in parallel or how that affects backup and restore performance. Too many tape drives might reduce throughput, but I have no idea what throughput per drive stream is possible compared to a physical tape drive, which normally receives only 100-200 MB/s.

It would be nice if someone else using this "old" stuff could explain how it behaves with Commvault. My last VTL implementation was 10 years ago.

Thanks,
Christoph
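The drive-count trade-off Christoph describes can be sketched with simple arithmetic: if the appliance has a fixed ingest ceiling, every additional concurrent virtual drive divides the per-stream rate. All numbers below are illustrative assumptions, not Quantum or Commvault guidance:

```python
# Illustrative only: split a hypothetical appliance ingest ceiling
# evenly across N concurrent virtual tape drives and compare the
# per-stream rate with a physical LTO drive's typical 100-200 MB/s.

def per_drive_mbs(ingest_mbs, drives):
    """Even split of total ingest across concurrent virtual drives."""
    return ingest_mbs / drives

INGEST = 1200  # assumed appliance ingest ceiling in MB/s (made up)
for drives in (4, 8, 16):
    print(drives, per_drive_mbs(INGEST, drives))
```

In this toy model, 16 drives leaves each stream at 75 MB/s, below what a single physical LTO drive sustains, which matches the intuition that more virtual drives eventually just slices the same pipe thinner.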
Hi all, I have a question for the experts. :) We use Metallic storage for a secondary copy. We back up SQL instances there, with T-logs (lots of jobs) and a daily full. Can I somehow exclude only the T-logs from going to the secondary copy? I want only the daily full to go to the auxiliary Metallic storage. Is that possible (without creating a new policy, of course)? Thanks in advance.
I think there are several deduplication-related subjects that require thorough documentation by Commvault.

1.) There is a long-standing recommendation to limit deduplication database disks to TWO PER MEDIA AGENT. I was able to track this recommendation as far back as Simpana 9.0, but I'm not sure whether it pre-dates that. The limit feels rather arbitrary, and it doesn't account for the performance capabilities of the host platform, or for advances in computing kit since the original recommendation was made. I can't find any references showing WHY such a limit should exist.

In my case, I have three deduplication disks on my media agents (all NVMe SSD), and that runs with zero issues. The media agents are spec'd above the recommended Extra Large spec, and they don't even sweat. I would like to issue a call to Commvault to really explain why this limitation is in place. The Deduplication Building Block section of the documentation would be an ideal place for this information.
I am running a migration from another vendor to Commvault. For the time being, is it possible to restrict Commvault to using media from designated slots (not scanning/using all library slots)? This is just to avoid conflicts, as the tape library will hold both Commvault media and the other vendor's media.
Hi there,

I would like to ask, in general, which elements are in play during DDB verification. Is there communication between the DDB and the disk library during the DDB verification job? The documentation states that "Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database." So that means there is communication between the media agent and the master server... The thing is, in our case this verification job, even an incremental one, takes a very long time, although the performance of the disk holding the DDB looks quite good.
Hello,

I need to create backup jobs sending all full backups to tape in a new physical library that the customer has purchased. Today all backups are on disk (file library). After the new jobs to tape have completed successfully (3 backup jobs to tape will be done), I need to erase the existing full backup on disk, which is very old, and run a new full, because the new one will be smaller and will free up more space on disk. I'm waiting, guys, for your best practices on how to do this.

Another question: should I create the storage policy for the library with permanent retention and include all tapes in the same SP, or create separate ones, for example for Database, Exchange, etc.?

@Mike Struening
Hi all, I need your help understanding the table architecture of the deduplication database (v4 gen2): the table structure and how the tables function. There is no information in the documentation explaining the current DDB table structure. Please help with this information if possible.
Hi there, could you please help me with what to do when all drives within the tape library go offline? The offline reason seems very strange:

[Cannot communicate with Media Mount Manager Service. Please ensure that:
a. The MediaAgent is reachable from the CommServe.
b. All MediaAgent services are running.]

A - Checked that the MA is reachable.
B - Checked that all services are running.

What can help us bring the drives back online? I also verified in Windows Device Manager that the drives are present and visible.
Hello everyone,

At my DR site I use HP MSA disk devices for Commvault backup storage, and they are currently over 86% full. I have asked my manager to add the cost of another MSA to the budget, and she is asking for growth rates to ensure that one MSA will be sufficient. I've been searching for that information and can't find it, even though (I would have thought) this is pretty basic information. I did find the "Disk Library Growth Trend" report, but I don't trust its numbers: the used space and free space increase at the same time, despite the fact that the total storage has remained constant. I'm not sure where it gets its information, but it just doesn't make sense. Screen capture below.

So my question is: how can I find a storage growth rate to use for media agent capacity planning?

Ken
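As a stopgap until the report can be trusted, a growth rate can be estimated from a handful of manual used-capacity readings taken over time (for example, the library's used-space figure recorded once a month). A minimal sketch with made-up numbers, averaging the month-over-month delta and projecting months until full:

```python
# Hedged sketch: estimate monthly growth from used-capacity samples
# and project how long until the array fills. All numbers are made up.

def monthly_growth_tb(samples):
    """Average month-over-month growth across consecutive used-TB readings."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

used_tb  = [60.0, 64.5, 68.0, 73.5, 78.0]   # last five monthly readings
capacity = 92.0                              # usable MSA capacity, TB

growth = monthly_growth_tb(used_tb)
months_left = (capacity - used_tb[-1]) / growth
print(round(growth, 1), round(months_left, 1))
```

A simple average like this ignores deduplication-ratio drift and retention changes, so it's a planning floor rather than a forecast, but it gives the budget conversation a defensible number.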
I have several cloud libraries where the storage and DDB are controlled by an on-premises MA. I would like to switch several of them to a different on-premises MA, but I can't find anything in the docs on how to switch MAs for an existing cloud library.
We have a partitioned DDB that uses a disk library with 12 mount points; spill-and-fill has been configured. An Oracle DB is backed up with 4 streams/channels. The backup allocates 4 streams, but they all go to one mount point via one MA. How can the streams be spread across multiple mount points, so that 2 go via MA1 and 2 via MA2?

A second Oracle DB is also being backed up. It uses the same partition as the job above and another mount point, again with all 4 streams going to that single mount point. Any ideas how to make Commvault distribute the streams evenly?
Hi,

I have a question regarding the implementation of a cloud library with Scality RING. We can create two types of mount path: S3 Compatible Storage or Scality Ring. Which is required? (I have some cloud libraries already created with the S3 Compatible Storage type instead of the Scality Ring type.) Is there a difference between them?

Kind regards,
Christophe