Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 725 Topics
- 3,531 Replies
Hi All. I have a question regarding restoring data that resides on an offsite private cloud which clients cannot access. The setup is:
- On-prem client network with servers in backup
- Firewall in between, with a port open for backup traffic
- On-prem backup network with MediaAgents and storage
- Firewall in between, with a port open for aux copy data
- MediaAgents with offsite private cloud storage containing aux copies of the client data
Clients do not have access to the offsite MediaAgents/copy, so my question is: how do I restore data from the offsite cloud when the on-prem clients and the offsite MediaAgents cannot talk to each other directly? Is it possible to route restore traffic via the on-prem MediaAgents acting as a proxy somehow? Would a network gateway via outgoing routes be the solution? Any suggestion would be appreciated. Thanks for helping.
-Anders
Hi there. IHAC that will use ExaGrid as backup storage with Commvault. ExaGrid states that they can add to Commvault deduplication to obtain a higher dedupe ratio (up to 20:1 for long-term retention data). I couldn't find any information on ExaGrid in BoL, and my understanding was that we do not use CV deduplication when using a deduplication appliance as the primary target. Has anyone implemented CV with ExaGrid? If so, are there any specifics, caveats, or best practices? Thanks
Abdel
Hello All, I come from a TSM background. When VTL support was introduced in TSM, scratch tapes were not automatically deleted in the VTL. Later, the RELABELSCRATCH parameter was introduced, which allows volumes to be automatically relabeled when they are returned to scratch. I remember there are similar settings to configure in Backup Exec and HP Data Protector as well.
I want to know whether any similar setting exists in Commvault.
More details from the TSM perspective: Virtual Tape Libraries (VTLs) maintain volume space allocation after Tivoli Storage Manager has deleted a volume and returned it to a scratch state. The VTL has no knowledge that the volume was deleted and keeps the full size of the volume allocated. This can be extremely large depending on the devices being emulated. As multiple volumes return to scratch, the VTL can maintain their allocation size and run out of storage space. Relabel processing on the Tivoli Storage Manager server is started for libraries (V
Hi all, we have the following issue. Some LTO8 tapes were mistakenly marked with LTO6 barcodes. The tapes have since been remarked with the correct barcodes. However, Commvault cannot recognise the newly remarked tapes: Commvault says the barcodes have already been used, and the tapes were moved to the Retired group. We tried performing Discover, Full Scan (inventory), and Update Barcode for the affected tapes without success. Is there any workaround for this issue, or is there some other way to fix the tapes within the tape library?
Hello community, we are trying to migrate a SAN storage library to an S3 cloud library. Per suggestions, we followed these steps:
1. Configured a new global deduplication storage policy using the new S3 bucket and MA
2. Configured new secondary copies in the existing storage policies, pointing to the new S3 dedupe storage
3. Ran the aux copy
We have a huge amount of data and contacted Commvault support to determine when the aux copy will complete. The aux copy has currently been running for more than 4 months. Support mentioned the points below:
- Your current configuration allows the selection and prioritization of new backups over older data
- You are also configured to copy all data to the cloud, and you are not using dedupe for the aux copy
How can we make sure we have an optimal aux copy configuration? Please share your inputs. Thanks in advance,
Spartan9
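For sanity-checking whether a multi-month aux copy is even plausible, a rough completion estimate from remaining data size and sustained throughput can help frame the conversation with support. The sketch below is back-of-the-envelope arithmetic only; the 500 TiB and 150 MB/s figures are made-up placeholders, not values from this environment:

```python
def aux_copy_eta_days(remaining_tib, effective_mbps):
    """Days to move the remaining back-end data at a sustained rate.

    remaining_tib  - data still to be copied, in TiB
    effective_mbps - sustained end-to-end throughput, in MB/s (decimal MB)
    """
    remaining_mb = remaining_tib * (2 ** 40) / 10 ** 6  # TiB -> MB
    seconds = remaining_mb / effective_mbps
    return seconds / 86400

# Hypothetical: 500 TiB left at a sustained 150 MB/s to S3.
print(round(aux_copy_eta_days(500, 150), 1))  # about 42.4 days
```

If the measured rate implies far more than the observed four months, the bottleneck is likely configuration (stream counts, dedupe settings, prioritization) rather than raw bandwidth.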
Hello, the customer bought new MS SQL servers and is migrating to the 2 new servers.
Scenario today:
- 2 MS SQL servers (1 = production, 2 = copy)
New scenario:
- 2 MS SQL servers (new OS and database versions; 1 = production, 2 = copy)
I need to copy the configuration of the backup jobs with the same settings as the current backups: retention, backup jobs, DDB, and schedules. Does anyone have a procedure or best practices for doing this?
Hello everyone, all my backups are set to replicate from my primary site to my DR site, and all copies have the same retention. Oddly, the Disk Library Growth report shows the media agent at my primary site holding 178 TB of data, but my DR site holds only 132 TB, so I'm wondering where the 46 TB difference comes from.
Question: Is there a way to compare the contents of two media agents to see where the discrepancies are coming from?
Thanks,
Ken
Hi, I have an old sealed DDB with no more jobs associated with it; it only shows some number of unique blocks left (for a size of 1.18 TB), secondary blocks is 0, and application size is already at 0. How can I then do what you proposed: “then remove ALL of the blocks for that store in one big macro prune”? I'd like to get rid of that sealed DDB entirely.
Hi all, I would like to ask about the utilization of tape drives in the tape library. In our situation, although there are 4 tape drives in the master pool, Commvault is choosing only 2 of them; the rest sit idle. Is there any way to force the use of a given tape drive with a given storage policy? Would it help to associate a tape drive with only one media agent, so that only that media agent can use the drive? Is the number of streams a factor in this case? Thanks for any suggestions!
Hello! In the next few days, I have the opportunity to replace an existing Windows MA (with VSA) with a Linux MA (also with VSA). The Windows MA has a local (internal) disk library hosting deduplicated backups of a single storage policy. The target Linux MA also has an internal disk library plus an LTO8 tape library. Both MAs are online and can communicate over the IP network. It looks possible to move the DDB of the Windows MA to the Linux MA, but it is not possible to move the Windows mount path to a Linux server. I would like to keep my existing storage policy (which of course can be edited for this purpose) and existing backups, so they could ideally age normally, without manual action, at the expected end of retention. Has anyone already performed such an operation for real? How did you do it, or how would you? Of course, I can create a new storage policy with deduplication targeting the Linux MA and attach all the clients to this new SP, but this is what I wish to avoid.
Hello, I have a problem with an auxiliary copy between two media agents with HP StoreOnce. There is a 2×100 MB WAN link between the two MAs. I get 13:138 errors, but the job is still running; see the attachments for details. If someone could help me:
- SP 11.22
- No firewalling; the firewall is off on both MAs
Regards
I'm a little confused about the multiple-partition and/or DDB limitations. Limits are mentioned in several places, such as:
1) Hardware Specifications for Deduplication Mode
https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html
which mentions “2 DDB Disks” per MA, and
2) Configuring Additional Partitions for a Deduplication Database
https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html
which mentions “30 DDB partitions” per MA.
Also, in a recent discussion, a PS member told me that a partition should be treated as a DDB itself. All of this creates a lot of confusion:
a) Is a “DDB Disk” the same as a DDB?
b) Is a “DDB partition” the same as a DDB?
If you look at 1), under Scaling and Resiliency, there is this information: “The back-end size of the data. For example: Each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud extra large MediaAgent.” That means
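Taking the quoted documentation figures at face value, the implied back-end capacity per MediaAgent is simple multiplication. This is a back-of-the-envelope illustration of those two numbers only, not an official Commvault sizing formula:

```python
# Figures quoted above for an extra-large MediaAgent:
# each 2 TiB DDB disk covers up to 250 TiB of back-end data for a
# disk library, or 500 TiB for a cloud library.
PER_DISK_BACKEND_TIB = {"disk": 250, "cloud": 500}

def max_backend_tib(ddb_disks, target):
    """Implied maximum back-end TiB for one MA with N DDB disks."""
    return ddb_disks * PER_DISK_BACKEND_TIB[target]

print(max_backend_tib(2, "disk"))   # 2 DDB disks, disk library -> 500
print(max_backend_tib(2, "cloud"))  # 2 DDB disks, cloud library -> 1000
```

By this reading, the “2 DDB disks” figure is a hardware sizing unit, while the “30 partitions” figure counts logical DDB partitions, which is part of what the question above is asking Commvault to clarify.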
Hello Community! I am trying to add an HP MSL G3 Series tape library to Commvault using the Expert Storage Configuration. I have selected the two media agents (they are already zoned with the tape library) and followed the procedure. Now it asks whether the library has a barcode reader, and I don't know. :) Can you help me please? Thanks!
Is it possible to install the BoostFS library on a HyperScale X MediaAgent, and if so, is it supported? I'm looking to write an additional backup copy to storage outside of the appliance, and I hope I can do it in a deduplicated manner rather than having to send a full copy to the Data Domain. The topic of BoostFS is discussed at https://documentation.commvault.com/commvault/v11/article?p=9404.htm#o132281, but there is nothing specific to HyperScale X and BoostFS at https://documentation.commvault.com/commvault/v11/article?p=128105.htm.
Thanks
Hello, can someone interpret the switches in the command below and let me know whether my assumptions are correct?
-j = job ID
-i = deduplication partition
-c = ??
-in = CV instance
-cn = client
-group = ??
“C:\Program Files\Commvault\ContentStore\Base\SIDB2.exe -j 0 -i 74 -c 178 -in Instance001 -cn client-server 1 -group 0”
Also, does anyone know why the above doesn't have a job ID, or rather why it's 0? Is it just something internal to the SIDB2 process? I've seen the command with -j 0 as well as with a -j xyz job ID that is searchable.
BR, Henrik
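When collecting these command lines from several MediaAgents, it can help to split them into switch/value pairs mechanically before trying to interpret them. The sketch below is purely lexical string parsing of the observed command format; what each switch actually means inside SIDB2 is exactly the open question above:

```python
import re

def parse_sidb2_args(cmdline):
    """Extract '-switch value' pairs from a SIDB2.exe command line.

    Purely lexical: pairs each '-switch' token with the token that
    follows it. It assigns no meaning to any switch.
    """
    return dict(re.findall(r"-(\w+)\s+(\S+)", cmdline))

cmd = (r'"C:\Program Files\Commvault\ContentStore\Base\SIDB2.exe" '
       r'-j 0 -i 74 -c 178 -in Instance001 -cn client-server -group 0')
args = parse_sidb2_args(cmd)
print(args["j"], args["i"], args["in"])  # 0 74 Instance001
```

Grouping many such lines by the `-i` value would at least show which partitions the processes belong to, even before the `-c` and `-group` switches are explained.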
Hey Community, 2 questions:
1. Is it possible to change the AWS storage tier from Glacier to Deep Archive on copies that already exist, other than by running an aux copy? We ran 7 months of aux copies to Glacier before we were told to send the data to Deep Archive. Now we have 500 TB sitting in the Glacier class and, of course, it is costing money.
2. I've started the process of just aux copying the jobs, but it is tedious to do one job ID at a time in the Cloud Storage Archive Recall workflow. Is there a way to add multiple job IDs to the workflow?
Thanks in advance,
~John
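Until someone confirms whether the workflow input accepts more than one job ID, one small time-saver is to pre-build the job ID lists to submit in batches. The sketch below only formats the batches; the batch size and comma-separated format are assumptions about what the workflow field might accept, not documented behaviour:

```python
def batch_job_ids(job_ids, batch_size=25):
    """Yield comma-separated job ID strings, one string per batch."""
    for i in range(0, len(job_ids), batch_size):
        yield ",".join(str(j) for j in job_ids[i:i + batch_size])

# Hypothetical run of 55 job IDs, prepared 25 at a time.
for line in batch_job_ids(list(range(1000, 1055)), batch_size=25):
    print(line)
```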
Hi, does anyone have experience with self-deduplicating VTL backup appliances, such as the Quantum DXi4800 (the smaller models, ~27 TB)? The devices are non-deduped backup targets for Commvault; an aux copy to a physical tape library is performed later. I got a performance best-practice guide from Quantum. They recommend a small cartridge size of 50/100 GB (like Ultrium 1), but say nothing about how many tape devices to run in parallel and how this relates to backup and restore performance. Too many tape drives may slow down the throughput, but I have no idea what throughput per drive stream is possible compared to a physical tape drive, which normally only receives 100-200 MB/s. It would be nice if someone else uses this “old” stuff and can explain how it behaves in combination with Commvault. My last VTL implementation was 10 years ago.
Thanks,
Christoph
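One rough way to reason about the drive-count trade-off is that aggregate throughput scales with the number of virtual drives only until the appliance's ingest limit becomes the bottleneck; beyond that, extra drives just dilute the per-stream rate. All numbers below are made-up placeholders, not Quantum or Commvault figures:

```python
def aggregate_throughput(drives, per_stream_mbps, appliance_cap_mbps):
    """Aggregate MB/s: drive count times per-stream rate,
    capped by the appliance's ingest limit."""
    return min(drives * per_stream_mbps, appliance_cap_mbps)

# Hypothetical appliance: 150 MB/s per stream, 1200 MB/s ingest cap.
for drives in (2, 4, 8, 16):
    total = aggregate_throughput(drives, 150, 1200)
    print(drives, "drives:", total, "MB/s total,",
          total / drives, "MB/s per drive")
```

In this toy model the appliance saturates at 8 drives; at 16 drives each stream only gets 75 MB/s, which would also stretch out restores that read from a single virtual cartridge.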
Hi all, I have a question for the experts. :) We are using Metallic storage for a secondary copy. We back up SQL instances there, with T-logs (lots of jobs) and a daily full. I want to ask whether I can somehow exclude only the T-logs from going to the secondary copy; I want only the daily full to go to the auxiliary Metallic storage. Is that possible (without creating a new policy, of course)? Thanks in advance.
I think there are several deduplication-related subjects that require thorough documentation by Commvault.
1) There is a long-standing recommendation to limit deduplication database disks to TWO PER MEDIA AGENT. I was able to track this recommendation as far back as Simpana 9.0, but I'm not sure whether it pre-dates that. This limit feels rather arbitrary, and it doesn't take into account the performance capabilities of the host platform, or advances in computing kit since the original recommendation was made. I can't find any references showing WHY such a limit should exist. In my case, I have three deduplication disks on my media agents (all NVMe SSD), and that runs with zero issues. The media agents are spec'd over the recommended Extra Large spec, and they don't even sweat. I would like to issue a call to Commvault to really explain why this limitation is in place. The Deduplication Building Block section of the documentation would be an ideal place for this information. Th
I am running a migration from another vendor to CV. For the time being, is it possible to restrict Commvault to using media from designated slots only (i.e., not to scan/use all library slots)? This is just to avoid conflicts, as the tape library would contain both CV media and the other vendor's media.
Hi there, I would like to ask, in general, which elements are in play during DDB verification? Is there communication between the DDB and the disk library during the DDB verification job? The documentation says that “Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database.” So that means there is communication between the media agent and the master server. The thing is, in our case this verification job, even an incremental one, takes a very long time, even though the performance of the disk holding the DDB looks quite good.
Hello, I need to create a backup job writing all full backups to tape in a physical library (a new library) that the customer has purchased. Today all backups are on disk (a file library). After the new jobs to tape have been successfully executed (3 backup jobs to tape will be done), I need to erase the existing full backup on disk, which is very old, and run a new full backup, because the new one will be smaller and will free up more space on disk. I'm waiting, guys, for your best practices on how to do this.
Another question: should I create the storage policy for the library with permanent retention and include all tapes in the same SP, or create separate ones, for example: Database, Exchange, etc.?
@Mike Struening