Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 588 Topics
- 3,129 Replies
Hello all, we are using the Azure Cool tier to store our off-site copies and have been for the last several years. Recently we decided to use Azure combined storage tiers and planned to move/copy data from the Cool tier to Archive storage. After a discussion with Commvault we implemented what was suggested, but the process is really slow and the case has now been escalated to Dev. Honestly, we are seeing terrible delays from their side too. My question now is: instead of using an aux copy to copy the jobs from the Cool blob library to the combined-tier library, what if we changed the tier of the Cool blob library from Cool to the combined tier? If we did that, would the existing data convert to Archive, or would that only affect new data written to that storage?
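On the Azure side of that question: changing a storage account's default access tier never moves anything to Archive, because Archive cannot be an account-level default; a blob only lands in Archive through an explicit per-blob Set Blob Tier call or a lifecycle management rule. How Commvault's combined-tier library then treats those blobs is exactly the part support/Dev would need to confirm. A minimal sketch of the per-blob route, with hypothetical account, container, and blob names:

```bash
# Archive one existing blob explicitly (all names are hypothetical).
# Caution: Commvault keeps metadata it must read back frequently, so blindly
# archiving every blob in a Commvault container can break restores and pruning.
az storage blob set-tier \
  --account-name mystorageacct \
  --container-name cv-offsite \
  --name V_12345/CHUNK_67890 \
  --tier Archive \
  --auth-mode login
```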
Hello everyone, and thanks in advance for any hint. I'm running a Commvault v11 SP20 installation and recently had a 6 TB Oracle database migrated from a stand-alone Oracle 12 DB to a three-node Oracle RAC 19 installation. Backups work properly, and the three nodes are involved in backup jobs more or less randomly. The problem is that the dedupe ratio is low. While the stand-alone data reached 95-97% dedupe savings, the backups for the Oracle RAC instances reach only 40-65%. That is, a single full backup takes something like 3 TB of disk library (the very same disk library, hosting the same global dedupe policy previously used for storing the stand-alone Oracle DB backups). Any hints about why the dedupe ratio is so low? Thanks in advance for your kind opinion. Regards
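The numbers are at least internally consistent: a 6 TB database writing roughly 3 TB per full works out to about 50% savings, inside the 40-65% reported. A quick check, plus the usual suspects after a RAC migration (to be verified with SHOW ALL in RMAN): compression or encryption enabled on the new 19c channels, or a changed channel/filesperset layout interleaving datafiles differently on each run; any of these defeats downstream deduplication.

```bash
# Savings = (application size - size on disk) / application size.
echo "scale=1; (6 - 3) / 6 * 100" | bc    # -> 50.0 (percent)
```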
Is anyone using NetApp for a Commvault disk library? We just replaced an HP MSA2050 with a NetApp FAS2720 (a 72 SATA 14.5 disk enclosure). Write performance is great, but reads are super slow: the aux copies are very slow for the SQL DB and Exchange backups. I ran Validate Storage on one path with all backup and aux copy activity disabled, and the maximum read was 31 MB/sec.
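One way to separate a storage problem from a Commvault problem is to benchmark the mount path directly with a raw sequential-read test while Commvault activity is paused. A sketch with fio, assuming a Linux media agent and a hypothetical mount point (on Windows, diskspd plays the same role):

```bash
# Sequential read test against the library mount path (path is hypothetical).
fio --name=seqread --filename=/mnt/netapp_lib/fio.test \
    --rw=read --bs=512k --size=10g --ioengine=libaio \
    --direct=1 --iodepth=16 --runtime=120 --time_based
```

If fio reads at hundreds of MB/sec where Validate Storage reads at 31 MB/sec, the bottleneck is in the data path or configuration above the array rather than the disks themselves.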
Hi Commvault people. I have a large partitioned DDB that has been writing to a cloud-based library for some time; the DDB partitions are roughly 2 TB in size. As is recommended when you have been writing to cloud libraries, the DDB should at some point be sealed, and I would like to go ahead. We are also on the cusp of the maximum threshold for Q&I times. However, I need to make sure I have enough space for my DDB on the current volumes. So the question is: what happens to the old DDB? I am assuming it will remain at 2 TB per partition until there is a corresponding reference for the blocks in the new DDB, or until the blocks eventually age and are no longer required. Maybe that will take months; quite probably. As it ages out old blocks, will the old DDB reduce in size? And what can I expect from the new DDB? If I only have a 3 TB volume and 2 TB is taken up by the old DDB, then I really only have 1 TB available. If anyone has recently been through this scenario, I would love to hear how it went.
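While waiting for answers, one practical step is to log the on-disk size of both stores daily, so the sealed store's decay and the new store's growth are measured rather than guessed. A minimal sketch, with hypothetical partition paths:

```bash
# Daily cron entry, e.g.: 0 6 * * * /usr/local/bin/ddb_sizes.sh
# Paths are hypothetical; point them at the sealed and active DDB partitions.
echo "$(date +%F) $(du -sm /ddb/SealedStore /ddb/ActiveStore | tr '\n' ' ')" \
  >> /var/log/ddb_growth.log
```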
Hi all, after some time we are facing another serious issue: there is no available space on the disk library. We tried to find any unprunable jobs. There were some, so we set the option to ignore cycle retention for disabled subclients; unfortunately, only a small number of GBs aged. Now the question is what to do next. I have no idea how to find what can be deleted in order to free space for the backups. Moreover, with a high deduplication ratio, even manual deletion of some jobs may not help. One possibly useful piece of information: during the last month the data grew by circa 10 TB, which is a 10 percent increase. Is there a way to figure out what caused this increase? Is there a general rule or a useful tool within Commvault to fight this issue?
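On the "what grew" question: the Data Retention Forecast and Compliance report is the usual Commvault-side answer to what is holding space. At the filesystem level, a crude but effective trick is to snapshot per-folder sizes on the mount path and diff the snapshots over time; a sketch with hypothetical paths and dates:

```bash
# Snapshot per-folder sizes on the mount path (path hypothetical), largest first.
du -sm /library/mp1/* | sort -rn > /tmp/mp_sizes_$(date +%F).txt
# A week later, take another snapshot and compare:
diff /tmp/mp_sizes_2024-05-01.txt /tmp/mp_sizes_2024-05-08.txt | head -20
```

Chunk folders do not map one-to-one to clients, so the diff mainly shows when and where on disk the 10 TB landed; mapping it back to subclients is a job for the report.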
We have 2 data centers, each with an HPE StoreOnce Catalyst appliance configured to a media agent in that data center:
- DC1: dc1mediaagent <-> DC1 HPE StoreOnce
- DC2: dc2mediaagent <-> DC2 HPE StoreOnce
Now we want to get an AuxCopy going from the DC1 HPE store to the DC2 HPE store, and another from the DC2 HPE store to the DC1 HPE store. We have tried configuring the libraries and sharing them to the other media agent, but the AuxCopy constantly fails with chunk errors. We had no issues using Synology arrays between DC1 and DC2. Any ideas on best practices with HPE StoreOnce appliances? Thanks, Larry
I am doing this, sort of. I have an S3-IA bucket and send my aux copies to it. I am just not tiering to Glacier or anything else; everything stays in S3-IA. My aux copy retention is 91 days, but I am getting blasted with AWS charges for early deletes. Does anyone know how I can find out what is deleting early? CV support verified I have the proper retention set. Thanks, Stephanie
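One common cause worth ruling out: S3-IA charges an early-delete fee for any object removed before 30 days, and a deduplicated copy both prunes chunks as soon as their last referencing job ages and rewrites small metadata objects far more often than every 30 days, regardless of the 91-day job retention. S3 server access logs will show exactly which keys are deleted and when; a sketch with hypothetical bucket names (the target bucket needs log-delivery permissions):

```bash
# Enable server access logging on the aux copy bucket (names hypothetical).
aws s3api put-bucket-logging --bucket cv-auxcopy-bucket \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"cv-log-bucket","TargetPrefix":"s3access/"}}'
# After a few days, pull the logs and list the delete operations:
aws s3 sync s3://cv-log-bucket/s3access/ ./logs/
grep -h "REST.DELETE.OBJECT" ./logs/* | awk '{print $3, $9}' | head   # rough time + object key
```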
I’m new to CV and still trying to sort out CommCell Console versus Command Center, so I appreciate your patience. After 4 years with Veeam and two decades with Data Protector, Commvault is proving to be quite a different animal. My first concern is why it takes googling an arcane additional setting (ActivateHPECatalyst) to enter in the CommCell Console properties just to make the StoreOnce option visible for library creation in the UI. What is the rationale for hiding the StoreOnce option in the first place? Now that I’ve added a Catalyst-backed disk library in Storage Resources > Libraries via the CommCell Console, I go back over to Command Center, look at Storage > Disk, and I do not see my new disk library. How then am I supposed to add it to anything as a backup destination?
When following the document on how to stop/start a HyperScale X appliance node (https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html), I get to the step to unmount the CDS vdisk. After getting the proper vdisk name and running # umount /ws/hedvig/<vdiskname> I get the message "device is busy". What's the proper way to remediate this and continue? Thanks, G
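"Device is busy" just means some process still has files open under the mount. A quick way to see who, before retrying the unmount (keeping the <vdiskname> placeholder from the doc):

```bash
# List processes holding the mount open (run as root):
fuser -vm /ws/hedvig/<vdiskname>
lsof +D /ws/hedvig/<vdiskname> | head    # alternative view; can be slow
# If the holders are Commvault/Hedvig services, stop them per the doc first,
# then retry the umount. A lazy unmount is a last resort:
# umount -l /ws/hedvig/<vdiskname>
```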
Hi all, has anybody worked on bringing the Commvault maglib status and tape media usage status into a Grafana dashboard? Is there any way to showcase the usage trend and capacity reporting based on maglib utilization and publish it in Grafana? If we can pull near-real-time data from Commvault, we can pretty much show these metrics in Grafana. Any leads?
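A minimal sketch of one common pattern, assuming the Commvault REST API and a Prometheus pushgateway that Grafana already scrapes. Hostnames, the service account, and the response parsing are all hypothetical; inspect the JSON your version actually returns before wiring anything up:

```bash
# Authenticate to the Commvault REST API (password is sent Base64-encoded).
PW=$(echo -n 'secret' | base64)
TOKEN=$(curl -s -X POST "http://cvwebserver/webconsole/api/Login" \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d "{\"username\":\"grafana_svc\",\"password\":\"$PW\"}" | jq -r '.token')
# Pull library details (endpoint per the Commvault REST API docs):
curl -s "http://cvwebserver/webconsole/api/Library" \
  -H "Authtoken: $TOKEN" -H "Accept: application/json" > /tmp/libs.json
# Parse the capacity fields out of /tmp/libs.json (field names vary by
# version), then push whatever you extracted; 86 here is an example value:
echo "commvault_maglib_used_percent 86" \
  | curl -s --data-binary @- http://pushgateway:9091/metrics/job/commvault
```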
Hi team, I am testing the Isilon SmartLock feature for storage-level immutability with Commvault DASH aux copies. My dedupe database is hosted on a local (non-WORM) drive on a Windows media agent, and the library is created from a SmartLock directory on the Isilon. My question is: how does Commvault write to WORM-enabled storage? With the current SmartLock settings there is an autocommit period of 5 minutes, meaning files that sit in a SmartLock directory for 5 minutes without being modified are automatically committed to a WORM state. Can Commvault handle this, given that there will be chunks that go unmodified for 5 minutes but need to be updated later during the aux copy process? Will the aux copy fail, or create new chunks/folders? And what about the DDB: is keeping the DDB on non-WORM storage and the data on WORM the recommended approach? Regards, Mohit
Hello everyone, at my DR site I use HP MSA disk devices for Commvault backup storage, and they are currently over 86% full. I have asked my manager to add the cost of another MSA to the budget, and she's asking for growth rates to ensure that one MSA will be sufficient. I'm searching for that information and am not able to find it, even though (I would have thought) this is pretty basic information. I did find the "Disk Library Growth Trend" report, but I don't trust its numbers: the used space and the free space increase at the same time despite the fact that the total storage has remained constant. I'm not sure where it gets its information, but it just doesn't make sense. Screen capture below. So my question is: how can I find a storage growth rate to use for media agent capacity planning? Ken
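Failing a trustworthy report, two used-space readings taken a few months apart give a serviceable rate; the arithmetic, with hypothetical numbers:

```bash
# Two used-space readings n months apart; all numbers are hypothetical (GB).
used_then=40000; used_now=43000; free_now=7000; months=3
rate=$(( (used_now - used_then) / months ))
echo "growth: ${rate} GB/month; full in ~$(( free_now / rate )) months"
# -> growth: 1000 GB/month; full in ~7 months
```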
Hi everyone, this is my first post. I was trying to restore some files I lost, from an earlier backup. I could view the files and folders, and I selected copy precedence 1 (my primary copy). However, whenever the operation starts it gives the error "Failed to read media during restore" and stops at 5%. Everything seemed fine until I tried to skip the errors, and then I found that the folders are restored but there are no files in them. Please help out: is there something in the settings I have to change?
Where do I configure the "preferred setting" mentioned in the "Select mount path for MediaAgent according to the preferred setting" option of the library properties? Thank you in advance. Best regards
Hello, I need a little clarification about long-running auxiliary copies. On a daily basis we run a primary copy schedule followed by three auxiliary copies. Quite often the auxiliaries take a very long time to complete, enough to overlap with the primary copy schedule of the following day. This makes the jobs from day 2 get picked up by the still-running auxiliary copies from day 1. Is there any way to avoid this? Is it possible to set a boundary on the latest job to be included in an auxiliary copy? To my understanding, the Copy Policy > Backup Period > End Time setting cannot be used, as it provides a fixed date rather than a moving one. Sorry if this sounds a bit dumb, and thanks for your support. Gaetano
Hello Commvault Community! I would like to ask if there is any option to do a "tape mirror", meaning a 1:1 copy of the data from one tape to another while keeping the data on both tapes. I am aware of the "Tape to tape copy" option, but if I understand correctly it deletes the data from the source tape and copies it to a new tape from Spare Media. The reason the customer wants two copies of the data on two different tapes is their company's security rules: they must keep one tape in the safe and the other in the active tape library. I suggested making another tape copy in the storage policy with the disk copy as the source, but then we can't be sure the tapes will be 1:1. Is there any chance we can make a tape mirror? Move Contents of Media from One Tape to Another: https://documentation.commvault.com/11.24/expert/10538_move_contents_of_media_from_one_tape_to_another.html Thanks & Regards, Kamil
I think there are several deduplication-related subjects that require thorough documentation by Commvault. 1.) There is a long-standing recommendation to limit deduplication database disks to TWO PER MEDIA AGENT. I was able to track this recommendation as far back as Simpana 9.0, but I'm not sure if it pre-dates that. The limit feels rather arbitrary: it doesn't take into account the performance capabilities of the host platform, or advances in computing kit since the original recommendation was made, and I can't find any references showing WHY such a limit should exist. In my case, I have three deduplication disks on my media agents (all NVMe SSD), and that runs with zero issues. The media agents are spec'd over the recommended Extra Large spec, and they don't even sweat. I would like to issue a call to Commvault to really explain why this limitation is in place. The Deduplication Building Block section of the documentation would be an ideal place for this information.
Hi all, I need your help understanding the table architecture of the deduplication database (v4, gen2): the table structure and how the tables function. There is no information in the documentation explaining the current DDB table structure. Please help with this information if possible.
Since noon on Saturday (May 15), my disaster recovery backup admin jobs have been failing with this error:
Error Code: [34:85]
Description: CommServeDR: Error Performing Transfer: Error : [Failed to initialize with Commvault cloud service, The service may be down for maintenance.]
Source: inf-srv57, Process: commserveDR
Is anyone else having issues with the Commvault cloud service? Ken
We are trying to move a mount path to a new media agent, but it has been failing ever since we stopped the job. The disk became full, leaving the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks
The error we get is: Move Mount Path Job Failed, Reason: The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation. Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Hi all, I will try here. We have 2 MAs in Azure that act as proxies as well. When we back up a VM from Azure (to cloud storage), the job completes if we configure MA1 as the proxy, but fails with a "failed to fetch a valid SAS token" error when we configure MA2 as the proxy. Anyone have a clue what causes this error? Both MAs have the same OS, disks, permissions, and version. There are no drops from the firewall, and network settings are configured (client/CS).
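If the media agents reach storage via an Azure managed identity (an assumption; storage access keys are the other common route), a quick differential test is to ask the instance metadata service for a storage token on each MA. A failure on MA2 only would point at a missing identity or role assignment rather than at Commvault:

```bash
# Run on each MA VM; IMDS is only reachable from inside the VM.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://storage.azure.com/" \
  | head -c 300; echo
# A token on MA1 but an error on MA2 suggests MA2's VM identity lacks a
# role assignment (e.g. Storage Blob Data Contributor) on the account.
```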