Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 763 Topics
- 3,630 Replies
Hi all,

Wondering if anyone has experience running DDB space reclamation with orphan data cleanup against a cloud library. We have a cloud library in the Azure cool tier which we suspect contains some data that was not pruned successfully, thus increasing our storage consumption in Azure. The deduplication database for this data lives on local storage.

We'd love to run a space reclamation with orphan data cleanup against this cloud library, but we're concerned about the possible cost of storage transactions against the Azure cool library.

Has anyone performed this operation before and observed the related cloud storage costs? For reference, we have just under 100 million blobs and a total of about 400 TB of storage utilization in Azure. Many thanks for any input folks may have!
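One way to bound the worry before running anything: a back-of-envelope transaction-cost estimate. The prices and the rewrite fraction below are placeholder assumptions for illustration, not current Azure rates — check the Azure Blob Storage pricing page for your region and tier before relying on any numbers.

```python
# Back-of-envelope bound on Azure transaction charges for a space
# reclamation pass. All prices and fractions are ASSUMED placeholders.
blobs = 100_000_000            # ~100 million blobs in the cool-tier library
read_price_per_10k = 0.01      # assumed cool-tier read price, USD per 10k ops
write_price_per_10k = 0.10     # assumed cool-tier write price, USD per 10k ops
rewrite_fraction = 0.05        # assumed share of blobs rewritten/deleted by cleanup

# Pessimistic case: one read-class operation per blob for verification,
# plus write-class operations for the blobs the cleanup actually touches.
read_cost = blobs / 10_000 * read_price_per_10k
write_cost = blobs * rewrite_fraction / 10_000 * write_price_per_10k
print(f"Estimated read ops cost:  ${read_cost:,.2f}")
print(f"Estimated write ops cost: ${write_cost:,.2f}")
print(f"Rough total:              ${read_cost + write_cost:,.2f}")
```

Even if the per-operation prices are off by a factor of a few, this kind of sketch tells you whether you are looking at tens, hundreds, or thousands of dollars before you commit to the job.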
Hi, good morning.

I installed Commvault Version 11.28.70. On my vCenter infrastructure I also created and installed a Quantum VTL DXi 5000 Community Edition appliance (this appliance runs on CentOS). Both servers are on the same network, and I can ping the VTL without problems.

I would like to know how to connect and configure the VTL in my Commvault environment. I tried, without success, to configure it in the Commvault Command Center via Storage > Disk > Add, etc. I entered the server information (user and password), but when I try to enter the path (the VTL appliance runs on CentOS), I can't configure the path correctly.

Thanks in advance. Best regards,
Ricardo
Hello!

A customer has a tape library that is partitioned, with one partition dedicated to Commvault. Originally it had 11 slots and some drives. The customer added 30 slots, but they (and the tapes in them) are not recognized by Commvault. A full scan does not update the slot count.

What must be done for the newly added slots to be recognized?

Regards,
Pedro
Hi Community,

With reference to the following post, where there is no longer an option to select: https://community.commvault.com/topic/show?tid=5967&fid=49

Is there a way to confirm which option is being selected for an auxiliary copy operation? I would like to confirm because I would assume that when copying between two cloud libraries, a disk-read-optimized copy would be more efficient and cheaper, since each block does not need to be read to generate its signature (or at least that's how I read the documentation). Would there be a log entry that shows which mode was used?

Thanks in advance for the advice/assistance.
New to HyperScale nodes and trying to figure out how to increase the space available for the DDB paths. We have multiple GDSPs using the HyperScale nodes for their DDBs, and we are receiving warnings that free space on the DDB MediaAgent is very low. Looking at the disk space, it appears there is 1.4 TB left on the mount path. I'm a Windows person, so maybe I'm not understanding something. Is there a way to give more space to the DDBs? Thanks
We're using Commvault v11.28.48 and have several jobs in a "waiting" status, sitting at 10%-20%. The reason is simple: the mount path does not have enough space.

I'm new to Commvault (our backup admin left recently and I wasn't involved in the initial config or daily use), so I may be missing something obvious, but I thought aged data would be deleted automatically. I've run the "Data Retention Forecast and Compliance Report" and see the various estimated aging dates. The only thing that sticks out is the last line under the Disk Media Summary:

Estimated Size to Free: 9406 GB (this box is green: "prunable job / recyclable media")
Delay reason for physical space cleanup: Archive file has been queued for pruning from the DDB

Our NetApp library is 24.7 TB with a size on disk of 24.46 TB and reserve space set to 100 GB, so an extra 9 TB would be great. "Enable pruning of aged data" is checked on the library.

Am I correct in thinking that 9406 GB is data that should be automatically removed (pru…
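For what it's worth, a quick sanity check of the figures quoted above (my arithmetic only; I'm treating TB as binary, ×1024, which may not match how the report counts):

```python
# Rough headroom calculation from the report figures in the post above.
library_tb = 24.7       # NetApp library capacity
used_tb = 24.46         # current size on disk
prunable_gb = 9406      # "Estimated Size to Free" from the report
reserve_gb = 100        # reserve space configured on the library

free_now_gb = (library_tb - used_tb) * 1024
free_after_gb = free_now_gb + prunable_gb
usable_after_gb = free_after_gb - reserve_gb
print(f"Free today:         ~{free_now_gb:,.0f} GB")
print(f"Free after pruning: ~{free_after_gb:,.0f} GB")
print(f"Above reserve:      ~{usable_after_gb:,.0f} GB")
```

So if the queued archive files actually prune, the library goes from a few hundred GB free to roughly 9.5 TB free, which is consistent with the post's expectation that the waiting jobs would then have room to run.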
Hello,

I'm not sure whether the compression/dedupe savings for this DB are acceptable. Actually, they seem poor… any ideas as to what the cause could be? It's not a special DB. The online fulls are below 30% in savings; incrementals and archive logs are generally above 50%.

Regards,
Pedro Rocha
Hello Team. I have a storage policy with a snap primary copy that snaps a NetApp A700, and a vault/replica copy (not a mirror) for which Commvault orchestrates the replication between the A700 and a FAS8700. There is a primary copy as well, but I don't want to use it as the source for the StorageGRID copy; I want the vault copy to be the source for the StorageGRID copy. When I create a new StorageGRID synchronous aux copy, the only options offered for the source copy are the primary and the StorageGRID copy. The only configuration I can get to work is to have a primary copy, which copies from the snap primary, and to choose that primary copy as the source for the StorageGRID copy. Is this expected behavior? Is there any way to configure things so that the StorageGRID copy can use the replica/vault copy as its source? Thank you!
Hi, good day!

I'm planning to deploy new HSX servers and have a question about the DNS record requirement. Please help.

Environment details:
- Bond1
  - VLAN1.bond1 (CS registration): used for connecting to the CommServe, VMware backup, DNS, and management.
  - VLAN2.bond1 (data protection): used for agent-based backup.
- Bond2: used for the storage pool; no DNS record required.

DNS records planned for the bond1 interfaces:
- VLAN1.bond1: hsxcsreg01.sample.com, hsxcsreg02.sample.com, hsxcsreg03.sample.com
- VLAN2.bond1: hsxdp01.sample.com, hsxdp02.sample.com, hsxdp03.sample.com?

Do I need to create a DNS record for the agent-based backup sub-interface? In this context, I doubt I can have a DNS entry for a sub-interface other than the CS registration one. Please help.
Hi,

I have so far replaced three drives in our Dell EMC ML3 tape library and am still experiencing the same problem. Currently there is one valid drive that works fine; the other one is mounted and then immediately unmounted after a couple of seconds, one try after another in an endless loop.

According to Dell, there is nothing in the library log that would indicate the root of the problem (the log is clean). They send me another drive, and then the case is closed until the issue arises again. I tried swapping the drives between slots, but the problem follows the same drive no matter the slot. At this point, I'd like to rule out a software-related cause.

Currently, the number of errors associated with mounting and unmounting the drive is way over its threshold; that might be why Commvault is ignoring the drive completely. How can I reset the counter? Is there a particular procedure I should follow when replacing a drive? I can see there's an option 'Mark drive a…
Hello,

We have some auxiliary copy jobs configured that run just once a week, copying only the full backups to tape. Everything seems to be working fine, but we received the attached alarm and, to be honest, we don't fully understand it. Can you kindly tell us why we are seeing this behavior?

Thank you.
Has anyone encountered this error before when adding cloud storage in Command Center, and does anyone know the resolution? I tried searching various sites but had no luck. The error is:

"Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
Hi, we are running 11.30 and want to start testing WORM storage capabilities on our Data Domain. We have configured the retention-lock feature on the Data Domain and activated the WORM storage lock in Commvault through Command Center.

Talking to the Dell specialist, he told us there is one setting that could affect Commvault: the "automatic-lock-delay" value. That is the time a file remains "open" while it is being written to the Data Domain by the backup application (in this case, Commvault), until the application confirms the file closure and the Data Domain locks the file with the configured retention. As we don't know how much time Commvault needs, we have set it to 120 minutes on the Data Domain.

Does anyone have experience with WORM on Data Domain with Commvault? Do you know how long Commvault keeps files open on the Data Domain before they are closed?
Hello community,

We have a storage policy that keeps all backups for 30 days and monthly backups for 18 months, all on the same storage. To free up some space, we could create another copy pointing to the cloud to keep our monthly backups for 18 months; we would then have 30 days on prem and all monthly backups in the cloud.

Now a new idea has come up: keep 30 days on prem as well as 6 months of monthly backups. When the monthly backups are 6 months old, they should be copied to the cloud, kept there until they are 18 months old, and removed on prem.

Is there a way to delay the copy of the monthly backups for 6 months?
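To make sure I understand the proposed lifecycle for a single monthly full, here is the timeline being asked about, sketched with purely illustrative dates (whether Commvault can defer the aux copy this way is exactly the open question):

```python
# Illustrative lifecycle of one monthly full under the proposed scheme.
# Dates and the 30-day month approximation are assumptions for the sketch.
from datetime import date, timedelta

monthly_full = date(2024, 1, 1)                         # example backup date
copy_to_cloud = monthly_full + timedelta(days=6 * 30)   # copied ~6 months later
on_prem_expiry = copy_to_cloud                          # pruned on prem once copied
cloud_expiry = monthly_full + timedelta(days=18 * 30)   # expires ~18 months after backup

print(f"Backed up on prem:   {monthly_full}")
print(f"Copied to cloud:     ~{copy_to_cloud} (on-prem copy then prunable)")
print(f"Expires in cloud:    ~{cloud_expiry}")
print(f"Time held in cloud:  {(cloud_expiry - copy_to_cloud).days} days (~12 months)")
```

So the cloud copy only ever needs to hold roughly the trailing 12 months of each job's 18-month lifetime; the question is purely whether the aux copy operation itself can be deferred until the 6-month mark.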
Currently our client is using the built-in KMS, which stores encryption keys in the Commvault database. As far as I can find, there is no way to extract these keys.

We are looking to transition to Azure Key Vault for storing these keys. It is very easy to change the KMS, but in theory this would leave us unable to access previous backups, as we technically would not have access to the keys needed for decryption.

I have searched extensively and there is no documentation for this (confirmed via a Commvault support phone call). What is the proper process for changing the KMS on a backup location (particularly from the built-in KMS to a third party) without losing access to backups? I did find one forum post stating this "just works", but I need to provide some kind of concrete answer to make my higher-ups happy. Thank you in advance!
Hi, good day!

I'd like to know whether two HyperScale X clusters located in different DCs (25 km apart) can register with a single CommServe sitting in one of the DCs. Is this workable, and is there anything to consider before setting it up?

I'd also like to know whether an auxiliary copy can be sent to the opposing DC's HSX devices managed by the single CommServe.
Hi, good day!

Planned initial environment details:
- CommServe: dual-homed, with a 10.x.x.x network (agent backup) and 172.x.x.x (VMware).
- Network topology: bonded LACP.
- Total ports on each HSX node: 2 dual-port 25 Gb cards plus 1 iLO/IPMI, i.e. 5 ports per node including iLO.
- Number of nodes: 3.
- Plan to define only two networks: DP (with CS registration) and SP.
- DP/CS registration: 172.x.x.x; SP: 192.x.x.x.

Present issue: the customer has a non-routable network (10.x.x.x, for agent-based DB/SQL backup) which cannot be routed to the DP network mentioned above. This is a reference architecture, and HSX can have only one routable network.

Will the topology below work? Looking for the best suggestion:
- Topology: bonded VLAN LACP. DP bond1 carries VLANs 100 and 200; SP is 192.x.x.x with no VLAN.
- bond1.100 is the DP VLAN for CS registration, and the default gateway will be set on this network (DNS, NTP, SSH, VMware, and replication aux copy should all work over this routable 172.x.x.x network). Note: DNS and VMware are on different…
Hello fellow Vaulters. Is there anyone out there with experience using the Dell ECS storage system as an on-prem S3 library for aux copy data? Or does anyone have other on-prem S3 storage systems to recommend? I am in the process of evaluating a new S3 target storage system to host aux copy data.

Thanks,
Anders
Hi,

My backups seem to get to 70%, then stop and sit in a pending state with an error displayed in the job controller. I have confirmed that the MediaAgent and the CommServe can communicate (I have run CVPing and CVIPInfo; both return success). I have also tried searching on Error Code 62:468 and found no results, so I'm at a bit of a loss as to what is going on.
Hello,

A customer has installed a tape library, and the CommCell Console does not detect any cleaning tape. When I select Discover Cleaning Tape, I get the message "There are no new media to discover", and my setting is configured to discover media automatically. I can see the cleaning tape in the library, but not in the CommCell Console. Do you have a solution for me? Thank you very much!