Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 652 Topics
- 3,323 Replies
Is it possible to create a storage policy to back up data to a tape library on a weekly basis with manual barcode/media labels?
I am looking for documentation on how to set up a storage policy that backs up data to tape on a schedule, e.g. tape #1 for week 1 of the month, tape #2 for week 2 of the month, and so on, in a round-robin fashion, all from a single backup job. Thank you
Low deduplication ratio on Oracle RAC DBs: why?
Hello everyone, and thanks in advance for any hint. I’m running a Commvault V11 SP20 installation and recently had a 6 TB Oracle database migrated from a stand-alone Oracle 12 DB to a three-node Oracle RAC 19 installation. Backups work properly, and the three nodes are involved “randomly” in backup jobs. The problem is that the dedupe ratio is low. While the stand-alone data reached a 95–97% dedupe ratio, backups of the Oracle RAC instances reach only 40–65%. That is, a single full backup takes something like 3 TB of disk library (the very same disk library hosting the GDP previously used for storing the stand-alone Oracle DB backups). Any hints about why the dedupe ratio is so low? Thanks in advance for your kind opinion. Regards
Read Performance on NetApp iSCSI Disk Library
Is anyone using NetApp for a CommVault disk library? We just replaced an HP MSA2050 with a NetApp FAS2720; this is a 72 SATA 14.5 disk enclosure. The write performance is great, but read is super slow. The aux copy is very slow for SQL DB and Exchange backups. I ran Validate Storage on one path, with all backup and aux copy activity disabled, and the maximum read was 31 MB/sec.
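As a sanity check independent of Commvault, a rough sequential-read test against a file on the mount path can tell you whether the slow reads are storage-side or Commvault-side. A minimal Python sketch (note the OS page cache can inflate the number; for a truer figure, use a test file larger than RAM or direct I/O):

```python
import os
import time
import tempfile

def measure_read_mbps(path, size_mb=256, block_kb=1024):
    """Write a test file under `path`, then time a sequential read of it.
    Rough storage-side check, independent of Commvault. Beware: the OS
    page cache can make this read back far faster than the disks."""
    data = os.urandom(1024 * block_kb)  # one block of random (incompressible) data
    fname = os.path.join(path, "cv_read_test.bin")
    with open(fname, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the writes actually hit storage
    start = time.monotonic()
    total = 0
    with open(fname, "rb") as f:
        while chunk := f.read(1024 * block_kb):
            total += len(chunk)
    elapsed = time.monotonic() - start
    os.remove(fname)
    return total / (1024 * 1024) / elapsed  # MB/s

print(f"{measure_read_mbps(tempfile.gettempdir(), size_mb=64):.0f} MB/s")
```

If this reads far faster than 31 MB/sec on the same mount path, the bottleneck is more likely in the read pattern (small random reads against deduplicated chunks) than in raw throughput.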
Sealing a DDB - what happens to the old DDB?
Hi Commvault people. I have a large partitioned DDB which has been writing to a cloud-based library for some time. The DDB partitions are roughly 2 TB in size. As is recommended when you have been writing to cloud libraries, it should at some point be sealed, and I would like to go ahead. We are also on the cusp of the maximum threshold for Q&I times. However, I need to make sure I have enough space for my DDB on the current volumes. So the question is: what happens to the old DDB? I am assuming it will remain at 2 TB in size until there is a corresponding reference for the blocks in the new DDB, or the blocks eventually age and are therefore no longer required. Maybe that will take months. Quite probably. As it ages out old blocks, will the old DDB shrink? And what can I expect from the new DDB? If I only have a 3 TB volume, and 2 TB is taken up by the old DDB, then I really only have 1 TB available. If anyone has recently been through this scenario, I'd love to hear about it.
Mount Path does not have enough space/Disk Library running out of space
Hi all, after some time we are facing another serious issue: there is no available space on the disk library. Ayayay. We have tried to find whether there are any unprunable jobs. There were some, so we set the option to ignore cycle retention for disabled subclients. Unfortunately, only a small amount of GBs has been aged. Now the question is what to do next. I have no idea how to find what can be deleted in order to free space for the backups. Moreover, with quite a big deduplication ratio, even manual deletion of some jobs may not be useful. One possibly useful piece of information: during the last month data grew by circa 10 TB, which is a 10 percent increase. Is there a possibility to figure out what data caused this increase? Is there any general rule or useful tool within Commvault to fight this issue?
AuxCopy between 2 HPE StoreOnce Catalyst
We have 2 data centers, each with an HPE StoreOnce Catalyst appliance configured to a media agent in that data center. DC1: dc1mediaagent ↔ DC1 HPE Store. DC2: dc2mediaagent ↔ DC2 HPE Store. Now we want to get an AuxCopy going from the DC1 HPE Store to the DC2 HPE Store, and another from the DC2 HPE Store to the DC1 HPE Store. We have tried configuring the libraries and sharing them with the other media agent, but the AuxCopy constantly fails with chunk errors. We had no issues using Synology arrays between DC1 and DC2. Any ideas on best practice with HPE StoreOnce appliances? Thanks, Larry
S3-IA early deletion
I am doing this, sort of. I have an S3-IA bucket and send my aux copies to it. I am just not tiering to Glacier or anything else; everything stays in S3-IA. My aux copy retention is 91 days, but I am getting blasted with AWS charges for early deletes. Does anyone know how I can find out what is deleting early? CV support verified I have the proper retention set. Thanks, Stephanie
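For context on where those charges come from: S3 Standard-IA bills a 30-day minimum storage duration, so any object deleted or overwritten before 30 days is charged pro rata for the remaining days. With deduplicated aux copies, pruning of short-lived chunk files is a common culprit even when job retention is 91 days. A toy calculation (the per-GB price is an illustrative example, not your actual bill):

```python
# Sketch: S3 Standard-IA has a 30-day minimum storage duration, so an
# object deleted after N days (N < 30) is still billed for 30 - N days.
IA_MIN_DAYS = 30
IA_PRICE_PER_GB_MONTH = 0.0125  # example us-east-1 price; check your region

def early_delete_charge(size_gb, age_days):
    """Prorated charge for deleting an S3-IA object before the 30-day minimum."""
    remaining = max(0, IA_MIN_DAYS - age_days)
    return size_gb * IA_PRICE_PER_GB_MONTH * remaining / 30

# A 100 GB chunk file pruned after 10 days still pays for 20 days:
print(round(early_delete_charge(100, 10), 4))  # → 0.8333
print(early_delete_charge(100, 30))            # → 0.0  (past the minimum)
```

To see exactly what is being deleted early, S3 server access logs or CloudTrail data events on the bucket will show the DeleteObject calls and the object keys involved.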
HPE StoreOnce disk libraries and Command Center
I’m new to CV and still trying to sort out the CommCell Console versus Command Center, so I appreciate your patience. After 4 years with Veeam and two decades with Data Protector, Commvault is proving to be quite a different animal. My first concern is why it takes googling some arcane code (ActivateHPECatalyst) to enter in the CommCell Console properties just to make the StoreOnce option visible for library creation in the UI. What is the rationale for hiding the StoreOnce option in the first place? Now that I’ve added a Catalyst-backed disk library under Storage Resources > Libraries via the CommCell Console, I go back over to Command Center, look at Storage > Disk, and I do not see my new disk library. How, then, am I supposed to add it to anything as a backup destination?
Hedvig umount: 'device is busy'
When following the documented procedure for stopping/starting a HyperScale X appliance node (https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html), I get to the step to unmount the CDS vdisk. After getting the proper vdisk name and running the command `umount /ws/hedvig/<vdiskname>`, I get the message 'device is busy'. What's the proper way to remediate this and continue? Thanks, G
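'device is busy' generally means some process still holds files open under the mount point; on the node you would typically run `lsof +D /ws/hedvig/<vdiskname>` or `fuser -vm` against the mount to identify it before retrying the umount. A small helper (with invented sample output) that pulls the offending PIDs out of lsof's columnar output:

```python
def pids_holding(lsof_output):
    """Return sorted PIDs from `lsof` output (PID is the 2nd column)."""
    pids = set()
    for line in lsof_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1].isdigit():
            pids.add(int(fields[1]))
    return sorted(pids)

# Invented sample lsof output for illustration:
sample = """COMMAND  PID USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
hedvigfs 2144 root  cwd  DIR  253,0     4096    2 /ws/hedvig/vd01
java     3301 root   5r  REG  253,0     8192   17 /ws/hedvig/vd01/log
"""
print(pids_holding(sample))  # → [2144, 3301]
```

Once the holders are known, stop those services cleanly (per the same procedure document) rather than killing them, then retry the umount.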
Aux Copy performance
Hello, we would like to tier out the data which is stored on the disk library to a Huawei object storage. I created a secondary copy and configured an aux copy schedule. The problem is that the disk library space is running low because the job is not as fast as I was hoping. The amount of data for the copy job can be up to 10 TB. Is there a solution to speed up the aux copy job? The media agents have 2x10 Gbit cards. Regards, Thomas
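Before tuning, it helps to put numbers on the gap between wire speed and the sustained rate actually achieved. A quick back-of-the-envelope in Python (the 200 MB/s figure is an illustrative assumption, not a measurement):

```python
def hours_to_copy(data_tb, throughput_mb_s):
    """How long an aux copy of data_tb takes at a sustained MB/s rate."""
    total_mb = data_tb * 1024 * 1024  # TB -> MB (binary units)
    return total_mb / throughput_mb_s / 3600

# A single 10 Gbit link is ~1250 MB/s raw; sustained aux copy throughput
# is usually far lower (reader streams, dedupe lookups, object-store latency).
print(round(hours_to_copy(10, 1250), 1))  # → 2.3  (wire-speed best case)
print(round(hours_to_copy(10, 200), 1))   # → 14.6 (assumed sustained rate)
```

If the sustained rate is far below the link speed, the bottleneck is usually on the reader or destination side rather than the network, so increasing parallelism (more streams/readers) tends to help more than adding bandwidth.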
Audit Reporting: Confirm data exists in all storage locations
Hey everyone, I’ve got a bit of a puzzler. We have several years of data on prem and in a 3rd-party S3 bucket. We’re looking to reduce the footprint of the on-prem and 3rd-party S3 storage somewhat, and are moving the data to AWS and Azure combined-storage-tier libraries, since it’s long-term data that we need to keep per SLA but do not expect to recover unless a project is resurrected or a legal search request comes in; as such, we can lower some costs by storing it on the lower-cost AWS and Azure offerings. The test aux copies worked quite well: I can see that both my AWS and Azure libraries have the same number of jobs and the same total data. But if an auditor asks me to show that during this work, for client X, the data was on prem, at the 3rd-party S3 site, and at AWS and Azure before I cleared it from on prem and the 3rd-party S3, I have no idea how to get a report showing that there are 4 copies of the data. Alternately, an auditor could say: show me for job XXX
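Absent a built-in report, one workable approach is to export the job list from each storage policy copy and diff them yourself. A sketch with made-up copy names and job IDs:

```python
def audit_copies(jobs_by_copy):
    """Given {copy_name: [job IDs]}, return (jobs present everywhere,
    {copy_name: jobs missing from that copy})."""
    all_jobs = set().union(*jobs_by_copy.values())
    everywhere = set.intersection(*map(set, jobs_by_copy.values()))
    missing = {
        copy: sorted(all_jobs - set(jobs))
        for copy, jobs in jobs_by_copy.items()
        if all_jobs - set(jobs)
    }
    return sorted(everywhere), missing

# Illustrative data: job ID lists exported per storage location.
copies = {
    "on-prem":      [101, 102, 103],
    "3rd-party-s3": [101, 102, 103],
    "aws":          [101, 102, 103],
    "azure":        [101, 102],       # job 103 not yet aux-copied
}
ok, gaps = audit_copies(copies)
print(ok)    # → [101, 102]
print(gaps)  # → {'azure': [103]}
```

A timestamped run of such a comparison, kept alongside the exported job lists, gives the auditor point-in-time evidence that a given job existed in all four locations before the source copies were cleared.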
Maglib status implementation in Grafana
Hi all, has anybody worked on bringing Commvault maglib status and tape media usage status into a Grafana dashboard? Is there any way to showcase the usage trend and capacity reporting based on maglib utilization and publish it in Grafana? Possibly, if we can pull real-time data from Commvault, we can pretty much show this metric in Grafana. Any leads?
Isilon SmartLock as WORM-Enabled Library
Hi Team, I am testing the Isilon SmartLock feature for storage-level immutability with Commvault DASH aux copies. My dedupe database is hosted on a local (non-WORM) drive on a Windows media agent. The library is created from a SmartLock directory on the Isilon. My question is: how does CV write to WORM-enabled storage? Per the current SmartLock settings on the Isilon, there is an autocommit period of 5 minutes: files that have been in a SmartLock directory for 5 minutes without being modified are automatically committed to a WORM state. Can Commvault handle this? I guess there will be chunks which are not modified for 5 minutes but need to be updated later during the aux copy process. Will the aux copy fail, or will it create new chunks/folders? And what about the DDB? Is keeping the DDB on non-WORM storage and the data on WORM the recommended way? Regards, Mohit
Showing library storage use over time
Hello everyone, at my DR site I use HP MSA disk devices for CommVault backup storage, and they are currently over 86% full. I have asked my manager to add the cost of another MSA to the budget, and she’s asking for growth rates to ensure that one MSA will be sufficient. I’m searching for that information and am not able to find it, even though (I would have thought) this is pretty basic information. I did find the “Disk Library Growth Trend” report but don’t trust its numbers, as the used space and free space increase at the same time despite the fact that the total storage has remained constant. I’m not sure where it’s getting its information, but it just doesn’t make sense. Screen capture below. So my question is: how can I find a storage growth rate to use for media agent capacity planning? Ken
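If you can collect the used-capacity figure at regular intervals (even manually, from the library properties), a simple least-squares slope gives a defensible growth rate for the budget request. A sketch with invented sample numbers:

```python
def growth_rate(samples):
    """TB/month slope over evenly spaced samples, via simple least squares."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def months_until_full(samples, capacity_tb):
    """Project months until capacity is reached at the current growth rate."""
    return (capacity_tb - samples[-1]) / growth_rate(samples)

used = [70.0, 72.5, 75.0, 77.5, 80.0]   # invented monthly samples, TB
print(growth_rate(used))                 # → 2.5  (TB/month)
print(months_until_full(used, 92.0))     # → 4.8  (months of headroom)
```

Least squares smooths out month-to-month noise better than just differencing the first and last samples, and the same numbers feed directly into a "one more MSA buys us N months" statement for the budget.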
Enabling encryption on dedupe storage - does it create a new set of dedupe data?
Is this assumption correct? If you start CV encrypting data sent to dedupe storage, my guess would be that it creates a completely new set of dedupe data. Once encryption is turned on, the dedupe engine will see it as new data rather than an encrypted version of the old. While the unencrypted and encrypted data from the same servers remain in the same dedupe storage, storage usage could be higher than usual.
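As a toy illustration of that assumption (this is not Commvault's actual pipeline, and whether signatures are computed before or after encryption determines the real answer): deduplication keys off a hash of each block's content, and hashes taken over ciphertext with differing IVs can never match, even for identical plaintext.

```python
import hashlib

def signature(block):
    """Dedupe-style content signature of a block."""
    return hashlib.sha256(block).hexdigest()

def toy_encrypt(block, iv, key=0x5A):
    """Toy XOR 'cipher' with a per-block IV. Illustration only, NOT real crypto."""
    return bytes([iv]) + bytes(b ^ key ^ iv for b in block)

blk1 = b"same 128 KB block" * 10
blk2 = b"same 128 KB block" * 10

# Identical plaintext blocks share a signature, so they dedupe:
print(signature(blk1) == signature(blk2))  # → True

# Encrypted with different IVs, the same content yields different signatures:
print(signature(toy_encrypt(blk1, iv=7)) == signature(toy_encrypt(blk2, iv=201)))  # → False
```

So if signatures were taken over ciphertext, turning encryption on would indeed look like an entirely new data set to the dedupe engine; if they are taken over plaintext before encryption, existing blocks can still be matched.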
Failed to read media during restore
Hi everyone, this is my first post. I was trying to restore some files I lost from an earlier backup. I could view the files and folders, and I selected copy precedence 1 (my primary copy). However, whenever the operation starts, it gives the error “Failed to read media during restore” and stops at 5%. Everything seemed fine until I tried to skip the errors, and then I found that the folders are restored but there are no files in them. Please help out. Is there something in the settings I have to change?
Where to configure the "preferred setting" in the "Select mount path for MediaAgent according to the preferred setting" option of the library properties
I’d like to ask where to configure the “preferred setting” mentioned in the “Select mount path for MediaAgent according to the preferred setting” option of the library properties. Thank you in advance. Best regards
Running Auxiliary Copy overlaps with Primary schedule
Hello, I need a little clarification about long-running auxiliary copies. On a daily basis we run a primary copy schedule which is followed by three auxiliary copies. Quite often the auxiliaries take a very long time to complete, enough to overlap with the primary copy schedule of the following day. This makes the jobs from day 2 “enter” the still-running auxiliary copies from day 1. Is there any way to avoid this? Is it possible to set a boundary on the latest job to be included in an auxiliary copy? To my understanding, the Copy Policy > Backup Period > End Time setting cannot be used, as it would provide a fixed date rather than a moving one. Sorry if this sounds a bit dumb, and thanks for your support. Gaetano
Hello Commvault Community! I would like to ask if there is any option to do a "tape mirror", meaning a 1:1 copy of the data from one tape to another while keeping the data on both tapes. I am aware that there is a "Tape to tape copy" option, but if I understand correctly, it deletes the data from the source tape and copies it to a new one available in spare media. The reason the customer wants two copies of the data on two different tapes is their company's security rules, which require one tape in the safe and the other in the active tape library. I suggested making another tape copy in the storage policy with the source copy from disk, but then we can't be sure that the tapes will be 1:1. Is there any chance we can make a tape mirror? Move Contents of Media from One Tape to Another: https://documentation.commvault.com/11.24/expert/10538_move_contents_of_media_from_one_tape_to_another.html Thanks & Regards, Kamil
Aux copy jobs are stuck on 99%
Hi there, have you ever seen in the Job Controller view some aux copy jobs sitting at 99% (progress bar) for a couple of days? Moreover, the Estimated Completion Time says Not Applicable. However, the Application Size and Total Data Processed numbers are slightly increasing over time, and of course the aux copy jobs are in the running state. My assumption is that there are still some jobs that need to be copied from the primary to the secondary policy copy. Is that possible? And is it worth waiting for the job to complete, or is it better to kill it?
Commvault Official Deduplication Limits
I think there are several deduplication-related subjects that require thorough documentation by Commvault. 1.) There is a long-standing recommendation to limit deduplication database disks to TWO PER MEDIA AGENT. I was able to track this recommendation as far back as Simpana 9.0, but I'm not sure if it pre-dates that. This limit feels rather arbitrary, and it doesn't take into account the performance capabilities of the host platform, or advances in computing kit since the original recommendation was made. I can't find any references showing WHY such a limit should exist. In my case, I have three deduplication disks on my media agents (all NVMe SSD), and that runs with zero issues. The media agents are spec'd over the recommended Extra Large spec, and they don't even sweat. I would like to issue a call to Commvault to really explain why this limitation is in place. The Deduplication Building Block section of the documentation would be an ideal place for this information. Thanks
DDB v4 gen 2 table structure
Hi all, I need your help to understand the table architecture of the Deduplication Database v4 gen2 and how its tables function. There is no information available in the documentation explaining the current DDB table structure. Please help with this information if possible.