Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
I am attempting to replicate an existing Storage Policy with some differences in the Media Agents assigned to the copies. The existing policy has a setting called Archiver Data Retention set to 63 days for certain copies, but looking at the Retention tab of the Copy Properties, I am unable to find that setting. As a result, in my replicated Storage Policy the Archiver Data Retention value defaults to Infinite. Does anyone know how to configure Archiver Data Retention?
Hi all, wondering if anyone has experience running DDB space reclamation with orphan data cleanup against a cloud library. We have a cloud library in the Azure cool tier which we suspect contains data that was not pruned successfully, inflating our storage consumption in Azure. The deduplication database for this data lives on local storage. We would love to run a space reclamation with orphan data cleanup against this cloud library, but we are concerned about the possible cost of the storage transactions it would generate against the Azure cool tier. Has anyone performed this operation before and observed the related cloud storage costs? For reference, we have just under 100 million blobs and a total of about 400 TB of storage utilization in Azure. Many thanks for any input folks may have!
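For what it's worth, a back-of-the-envelope transaction-cost sketch may help size the risk before running the job. Every number below is an illustrative assumption, not Commvault's actual access pattern: check current Azure Blob Storage pricing for your region, and note that cool-tier reads also carry a per-GB retrieval fee on top of per-operation charges.

```python
# Back-of-the-envelope estimate of Azure cool-tier transaction costs for a
# space-reclamation pass over ~100 million blobs. All prices and the read
# fraction are ILLUSTRATIVE assumptions; verify against current Azure pricing.

BLOB_COUNT = 100_000_000

# Assumed cool-tier prices per 10,000 operations (example region):
PRICE_LIST_PER_10K = 0.065   # list/write-class operations
PRICE_READ_PER_10K = 0.01    # read-class operations

LIST_PAGE_SIZE = 5_000       # List Blobs returns up to 5,000 blobs per call
READ_FRACTION = 0.25         # assumption: share of blobs actually read back

list_ops = BLOB_COUNT / LIST_PAGE_SIZE
read_ops = BLOB_COUNT * READ_FRACTION

list_cost = list_ops / 10_000 * PRICE_LIST_PER_10K
read_cost = read_ops / 10_000 * PRICE_READ_PER_10K

print(f"list calls: {list_ops:12,.0f} -> ~${list_cost:,.2f}")
print(f"read calls: {read_ops:12,.0f} -> ~${read_cost:,.2f}")
print(f"transaction cost estimate: ~${list_cost + read_cost:,.2f}")
# The per-GB retrieval fee can dominate if the cleanup reads a large share
# of the 400 TB, so a small-scale test run is the safest way to calibrate.
```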
Good day! I would like to know how the HyperScale X cluster is selected as the VSA access node for VMware backups, and how agent-based backups can be pointed at the HSX cluster in the policy/plan. This is our first time planning an HSX deployment. Will a virtual name be created automatically that can be selected in the backup plan/policy?
Hi, good morning. I installed Commvault Version 11.28.70. Meanwhile, on my vCenter infrastructure I created and installed a Quantum DXi 5000 Community Edition VTL appliance (the appliance runs on CentOS). Both servers are on the same network, and I can ping the VTL without problems. I would like to know how to connect and configure the VTL in my Commvault environment. I tried, without success, to configure it in the Commvault Command Center via Storage > Disk > Add, etc. I entered the server information (user and password), but when I try to enter the path (the VTL appliance runs on CentOS), I cannot configure the path correctly. Thanks in advance. Best regards, Ricardo
Hello! A customer has a partitioned tape library with a partition dedicated to Commvault. Originally it had 11 slots and some drives. The customer added 30 slots, but they (and the tapes in them) are not recognized by Commvault. A full scan does not update the slot count. What must be done to get the newly added slots recognized? Regards, Pedro
Greetings, we have some aux copies that go to our AWS S3 bucket. The storage policy has a 30-day on-prem retention and a 365-day cloud retention. The 30-day on-prem (primary) copy has data aging turned on and seems to be pruning jobs past 30 days. Looking at the properties of the aux copy, though, I noticed that the data aging checkbox was not selected, and when I view all jobs for this aux copy, it unfortunately shows jobs from years ago. So nothing is aging out or getting cleaned up, and our S3 bucket is getting very large; we need to clean up all of these old jobs to bring it down to a reasonable size. My question is how best to do this cleanup. Can I view the jobs under the aux copy, select all of them past our retention, and delete them? Would that also delete data out of the S3 bucket? I have now selected the data aging checkbox, hit OK, and then ran a data aging job from the CommCell root against it.
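One way to sanity-check whether aging and pruning are actually shrinking the bucket is to snapshot its object count and total size before and after the cleanup. A minimal sketch, assuming boto3 and configured AWS credentials; the bucket name is a hypothetical placeholder:

```python
# Count objects and total bytes in the aux-copy bucket so before/after runs
# can be compared. Requires boto3 and AWS credentials configured locally.
import boto3

BUCKET = "my-commvault-aux-bucket"  # hypothetical placeholder name

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

object_count = 0
total_bytes = 0
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        object_count += 1
        total_bytes += obj["Size"]

print(f"{object_count:,} objects, {total_bytes / 1024**4:.2f} TiB")
# Re-run after data aging and pruning have had time to complete and compare;
# pruning on cloud libraries is deferred, so allow for the pruning interval.
```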
Hi Community, with reference to the following post, where there is no longer an option to select: https://community.commvault.com/topic/show?tid=5967&fid=49 Is there a way to confirm which option is being used for an aux copy operation? I would like to confirm because I assume that when copying between two cloud libraries, a disk-read-optimized copy would be more efficient and cheaper, since each block does not need to be read to generate the signature (or at least that's how I read the documentation). Would there be a log entry to determine this? Thanks in advance for the advice/assistance.
New to HyperScale nodes and trying to figure out how to increase the space available for the DDB paths. We have multiple GDSPs using the HyperScale nodes for their DDBs and are receiving warnings that free space on the DDB MediaAgent is very low. Looking at the disk space, it looks as though there is 1.4 TB left on the mount path. I'm a Windows person, so maybe I'm not understanding the layout. Is there a way to give more space to the DDBs? Thanks
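While investigating, free space on the suspect mount can be checked directly on the Linux node. A minimal sketch; the path is a hypothetical example, so substitute the actual DDB partition shown in the DDB properties:

```python
# Report free space on a DDB mount point from the Linux MediaAgent.
# DDB_PATH is a hypothetical example; use the real path from DDB properties.
import shutil

DDB_PATH = "/ws/ddb"  # hypothetical mount point

usage = shutil.disk_usage(DDB_PATH)
pct_free = usage.free / usage.total * 100
print(f"total {usage.total / 1024**4:.2f} TiB, "
      f"free {usage.free / 1024**4:.2f} TiB ({pct_free:.1f}%)")
if pct_free < 10:
    print("Warning: DDB volume below 10% free; plan to extend the volume.")
```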
Hello Community, I am new to Commvault. I am trying to check the status of a failed DDB reconstruction job. I checked the storage policy, but I don't see the job that triggered the internal alert:

Type: Job Management - DeDup DB Reconstruction
Detected Criteria: Job Started

Thanks.
Hi Team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB verification schedule that the system creates automatically. Is this the default behavior? Is DDB verification not recommended for cloud libraries? Is DDB verification recommended even if our private cloud data is still in our own data center? Best Regards.
Hello, we have some auxiliary copy jobs configured that run just once a week, copying only the full backups to tape. All seems to be working, but we received the attached alarm and, to be honest, we don't fully understand it. Can you kindly tell us why we are seeing this behavior? Thank you
We noticed on a RHEL 8.7 build with a local disk library and ransomware protection enabled that the xfs utilities break. We opened a case with Red Hat support. For the mount options they said to try context=unconfined_u:object_r:user_home_dir_t:s0 instead of the recommended context=system_u:object_r:cvstorage_t:s0, which does fix the issue with the xfs commands. Now they are asking me to ask you all: do other customers have the same experience? They also suggested using whatever context "allows xfs to work", to which I replied that I have no idea; this is just what the script sets up. Basically, if you set up a disk library using xfs (not NFS or S3, obviously), simply try to run xfs_info or xfs_growfs; it can't be done. The workaround I found when I need to grow the filesystem is to stop services, remount without the ransomware context in place (or with their options), run the xfs tools, then remount with the proper context. I'm looking for a fix rather than a workaround, but Red Hat seems firm that it's not a problem with SELinux or xfs; it's the way the filesystem is mounted. Thanks
May I know if anyone has encountered this error before when adding cloud storage in Command Center, and what the resolution was? I tried searching various sites but had no luck. The error is: "Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
I have a library with 2x LTO drives that is used for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally, I'd like both LTO drives to be free for aux copy data, but if another job runs that needs a drive, the aux copy should throttle back down to using one drive. For example, right now I have 50 TB of data to aux copy that is going to a single drive when it could go to both drives (I've got them, so why not use them), except that if I set the aux copy to do that, it seems any new backup jobs to tape pause with "no resources available". Thanks 😀
Hi, we are running 11.30 and we want to start testing WORM storage capabilities on our Data Domain. We have configured the retention-lock feature on the Data Domain and activated the WORM storage lock in Commvault through the Command Center. Talking to the Dell specialist, he told us there is a setting that could affect Commvault: the "automatic-lock-delay" value. That is the time a file remains "open" while being written to the DD by the backup application (in this case, Commvault), until the file closure is confirmed and the file is locked with the retention set earlier. As we don't know how much time Commvault needs, we have set it to 120 minutes on the DD. Does anyone have experience with WORM on Data Domain with Commvault? Do you know how long Commvault keeps files open on the DD before they are closed?
Hello community, we have a Storage Policy which keeps all backups for 30 days and monthly backups for 18 months, all on the same storage. To free up some space, we could create another copy pointing to the cloud to keep our monthly backups for 18 months; then we would have 30 days on-prem and all monthly backups in the cloud. Now a new idea has come up: keep 30 days on-prem as well as 6 months of monthly backups, and when a monthly backup is 6 months old, it should be copied to the cloud until it is 18 months old and removed on-prem. Is there a way to delay the copy of the monthly backups for 6 months?
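To make the proposed windows concrete, here is a small worked sketch of the lifecycle of one monthly full under that scheme. The dates and the day-count approximations of "6 months" and "18 months" are illustrative only; Commvault itself would enforce the actual behavior through copy retention rules, not a script:

```python
# Lifecycle of one monthly full under the proposed scheme (illustrative
# dates; 6 and 18 months are approximated in days for the sketch).
from datetime import date, timedelta

backup_date = date(2024, 1, 31)                        # example monthly full

aged_off_primary = backup_date + timedelta(days=30)    # 30-day copy (all jobs)
copy_to_cloud_at = backup_date + timedelta(days=183)   # deferred copy (~6 months)
aged_off_cloud = backup_date + timedelta(days=548)     # cloud copy (~18 months)

print(f"backed up:             {backup_date}")
print(f"ages off 30-day copy:  {aged_off_primary}")
print(f"copies to cloud after: {copy_to_cloud_at}, then ages off on-prem")
print(f"ages off cloud copy:   {aged_off_cloud}")
```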
Hi team, I am trying to add a network mount path to a Commvault storage library. I assign the MediaAgent, then choose Network, then pick the credential and input the path. When I click OK, it takes a long time to load and then gives the error: "Failed to read db". This is Commvault version 11.24.94, recently upgraded from SP16. I tried looking at the logs but I can't seem to find the relevant ones. Anyone have an idea?
Currently our client is using the built-in KMS, which stores encryption keys in the Commvault database. As far as I can find, there is no way to extract these keys. We are looking to transition to Azure Key Vault for storing them. It is very easy to change the KMS, but in theory this would leave us unable to access previous backups, as we would technically no longer have access to the keys needed for decryption. I have searched extensively and there is no documentation for this (confirmed via a Commvault support phone call). What is the proper process for changing the KMS on a backup location, particularly from the built-in KMS to a third party, without losing access to backups? I did find one forum post stating this "just works", but I need to provide some kind of concrete answer to keep my higher-ups happy. Thank you in advance!
Hi, my backups seem to get to 70%, then stop and sit in a pending state with Error Code 62:468 displayed in the Job Controller. I have confirmed that the MediaAgent and the CommServe can communicate (I have run CVPing and CVIPInfo; both return success). I have also tried searching on Error Code 62:468 and get no results, so I'm at a bit of a loss as to what is going on.
Hello, the customer has installed a tape library and the CommCell does not detect any cleaning tape in the console. When I select Discover Cleaning Tape, I get the message "There are no new media to discover", even though my setting is configured to discover media automatically. I can see the cleaning tape in the library, but not in the CommCell Console. Do you have a solution for me? Thank you very much!