Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Hello, I am looking for the best solution for an immutable copy of data built on Commvault in Azure. We have a configuration already, but I would like to see what the Commvault Community can propose. I am looking at a few approaches:
1. First copy of data is local; the Azure copy acts as the DR site, with dedicated immutable storage in Azure per (local) site.
2. First copy of data is local; the Azure copy acts as the DR site, with shared immutable storage in Azure across the (local) sites.
For both approaches I would like to know about the cost, and how to estimate the reduction in storage usage between dedicated and shared storage. What would be the best storage configuration for this solution: deduplicated or non-deduplicated? Retention for the immutable copy is 14 days and 2 cycles. Which storage tier for the immutable copy (Hot or Cool)? And, I think most importantly, how do I calculate the cost of these solutions when I currently have only a local backup solution? Regards, Michal
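A rough sizing and cost sketch for a requirement like this, with the caveat that every number below is a placeholder assumption (the front-end size, daily change rate, dedup ratio, and per-GB Azure Blob prices are illustrative, not quotes). Dedicated vs shared immutable storage can be compared by running the same estimate per site versus once for the pooled front-end size, since a shared store lets sites deduplicate against each other.

```python
# Rough immutable-copy sizing/cost sketch. Every number here is a placeholder
# assumption; substitute real front-end sizes and current Azure Blob pricing.
front_end_tb = 100          # hypothetical protected front-end size per site (TB)
daily_change_rate = 0.05    # hypothetical daily change rate
retention_days = 14         # from the requirement: 14 days
cycles = 2                  # from the requirement: 2 cycles
dedup_ratio = 4.0           # hypothetical overall dedup/compression ratio

# Retained back-end data: roughly one full baseline per cycle plus daily
# incrementals over the retention window, reduced by deduplication.
logical_tb = front_end_tb * cycles + front_end_tb * daily_change_rate * retention_days
backend_tb = logical_tb / dedup_ratio

# Hypothetical per-GB-month prices; check the current Azure price list.
price_per_gb_month = {"hot": 0.018, "cool": 0.010}

for tier, price in price_per_gb_month.items():
    monthly = backend_tb * 1000 * price
    print(f"{tier:>4} tier: ~{backend_tb:.1f} TB back-end, ~${monthly:,.0f}/month")
```

Note that the Cool tier typically adds per-GB read and early-deletion charges, which matter for a 14-day retention and for DR restores, so the per-GB-month price alone understates its cost.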
Hello, fellow Vaulters. Is there anyone out there who has experience with the Dell ECS storage system used as an on-prem S3 library for AuxCopy data? Or does anyone have other on-prem S3 storage systems to recommend? I am in the process of evaluating a new S3 target storage system to host AuxCopy data. Thanks, Anders
Hi all, we have introduced an Exagrid storage system as our backup storage solution. After the implementation we have noticed that restore jobs are taking longer than expected. We used NetApp as our storage solution before Exagrid, and in the current setup we have deduplication enabled both in CommVault and on the Exagrid. How can we identify where the bottleneck is? We have also enabled encryption and compression on the Exagrid system.
Hi, good day! Planned initial environment details:
- CommServe: dual-homed, with network 10.x.x.x (agent backup) and 172.x.x.x (VMware).
- Network topology: bonded LACP.
- Ports on HSX: 2 dual-port 25 Gb cards plus 1 iLO/IPMI, so 5 available ports per node including iLO.
- Number of nodes: 3.
- Plan to define only two networks: DP (with CS and SP). DP/CS registration: 172.x.x.x; SP: 192.x.x.x.
Present issue: the customer has a non-routable network (10.x.x.x, for agent backup of DB/SQL) which cannot be routed to the DP network mentioned above. This is a reference architecture, and HSX can have only one routable network. Will the topology below work? I am looking for the best suggestion.
Topology: bonded VLAN LACP. DP bond1 carries VLANs 100 and 200, and SP is 192.x.x.x with no VLAN. bond1.100 is the VLAN on DP for CS registration, and the default gateway will be on this network; DNS, NTP, SSH, VMware, and replication/aux copy should all work over this 172.x.x.x network, which is routable. Note: DNS and VMware are on different
Here is the situation: primary copy at the main site, secondary copy at the DR site, tape copy at the DR site. We are doing aux copies from Primary > Secondary and Secondary > Tape. We got a lot of chunk read errors when doing the aux copy to tape, and we noticed that those jobs also fail to restore from the secondary copy; on the primary site, the jobs restore fine. Can I mark the chunks/jobs as bad on the secondary copy and have Commvault re-copy those jobs from the primary copy, including the bad chunks, rather than keeping the bad blocks because of deduplication references?
Hi, good day. I would like to know whether two HyperScale X clusters located in different DCs (25 km between the DCs) can register with a single CommServe sitting in one of the DCs. Is this workable, and is there anything to consider before setting it up? I would also like to know whether an auxiliary copy can be sent to the opposing DC's HSX appliance managed by the single CommServe.
We noticed on a RHEL 8.7 build with a local disk library and ransomware protection enabled that it breaks the xfs utilities. We opened a case with RH support. They said for the mount options to try "context=unconfined_u:object_r:user_home_dir_t:s0" instead of the recommended "context=system_u:object_r:cvstorage_t:s0", which does fix the issue with the xfs commands. Now they are asking me to ask you all: do other customers have the same experience? They also suggested using the context that "allows xfs to work", to which I replied that I have no idea, this is just what the script sets up. Basically, if you set up a disk library using xfs (not NFS or S3, obviously), simply try to run xfs_info or xfs_growfs: it can't be done. The workaround I found, if I need to grow the FS, is to stop services, remount without the ransomware context in place (or with their suggested options), fiddle with xfs, then remount with the proper context. I am looking for an actual fix rather than a workaround, but they seem firm that it's not a problem with SELinux or xfs, it's the way it's mounted. Thanks.
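A minimal sketch of the grow-filesystem workaround described above, assuming a hypothetical device /dev/sdb1 and mount point /disklib, root privileges, and that Commvault services are already stopped; the SELinux context string is the one quoted in the post, not something verified independently.

```python
import subprocess

# Sketch of the manual workaround described above. DEVICE and MOUNTPOINT are
# hypothetical; run as root with Commvault services already stopped.
DEVICE = "/dev/sdb1"          # hypothetical block device backing the disk library
MOUNTPOINT = "/disklib"       # hypothetical disk library mount point
CV_CONTEXT = "system_u:object_r:cvstorage_t:s0"  # context quoted in the post

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Unmount so the SELinux context mount option can be changed.
run(["umount", MOUNTPOINT])

# 2. Remount without the ransomware-protection context so xfs utilities work.
run(["mount", DEVICE, MOUNTPOINT])

# 3. Grow the filesystem (assumes the underlying LUN/partition was already extended).
run(["xfs_growfs", MOUNTPOINT])

# 4. Remount with the protected context before restarting Commvault services.
run(["umount", MOUNTPOINT])
run(["mount", "-o", f"context={CV_CONTEXT}", DEVICE, MOUNTPOINT])
```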
Hi Team, we have a very large, infinite-retention Storage Policy associated with Storage Pool "Pool1". It has grown to the point that we will soon be creating another Storage Pool and Storage Policy; let's call these Pool2. All clients from Pool1 will be migrated to Pool2, so Pool1 will stop receiving any fresh data and Pool2 will start receiving it all. The question I have is around the massive leftover DDBs from Pool1. They are 2x 1.8 TB and are hosted on the two Media Agents associated with Pool1. Since Pool1 will stop receiving data, I am keen to decommission the Pool1 Media Agents, noting that the secondary-copy cloud-based backup data can be accessed from a number of Media Agents, so it does not necessarily have to be the Pool1 Media Agents; it can be any Media Agents, provided they are mapped to the relevant cloud library mount points. So the questions I have are: 1. What do we do with these large, legacy DDBs? I understand we need to keep for Commvault Sync
The auditors want to see if my backups are encrypted and I'm not sure where to go in the CommVault GUI to show that. I don't see anything about encryption in the properties for my storage libraries or my storage policies. Where do I show whether or not my backups are encrypted? Ken
I have a library with 2x LTO drives that is used for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally, I'd like both LTO drives to be free to aux copy data, but if another job runs that needs a drive, I'd like the aux copy to throttle back down to using one drive. For example, right now I have 50 TB of data to aux copy that is going to a single drive when it could go to both drives (I've got them, so why not use them), except that if I set the aux copy to use both drives, any new backup jobs to tape pause with "no resources available". Thanks 😀
Currently our client is using the built-in KMS, which stores encryption keys in the Commvault database. As far as I can find, there is no way to extract these keys. We are looking to transition to Azure Key Vault for storing them. It is very easy to change the KMS server, but in theory this would leave us unable to access the previous backups, as we technically do not have access to those keys for decryption. I have searched this extensively and there is no documentation for it (confirmed via a Commvault support phone call). What is the proper process for changing the KMS server on a backup location, particularly from the built-in KMS to a third party, without losing access to backups? I did find one forum post stating this "just works", but I need to provide some kind of concrete answer for my higher-ups to be happy. Thank you in advance!
Hello, we have some auxiliary copy jobs configured that run just once a week, copying only the full backups to tape. All seems to be working OK, but we received the attached alarm and, to be honest, we don't fully understand it. Can you kindly tell us why this behavior occurs? Thank you.
Hi, my backups seem to get to 70% and then stop and sit in a pending state, with the following message displayed in the job controller. I have confirmed that the media agent and the CommServe can communicate (I have run CVPing and CVIPInfo; both return success). I have also tried searching on Error Code 62:468 and get no results for it, so I am at a bit of a loss as to what is going on.
We are trying to use CommVault to back up an Oracle 19c database straight to our HyperScale servers (we were backing up to a separate SAN disk and then to HyperScale, and hope to remove the extra step). The backup is working, but a full backup picks up all the white space on the data drive instead of only the used space and takes about 13 hours to complete. Currently we think the SBT switch in the RMAN script is what is doing this to us. Is anyone else doing this? The length of time to back up has the DBAs worried.
Hi Community, with reference to the following post, where there is no longer an option to select: https://community.commvault.com/topic/show?tid=5967&fid=49 Is there a way to confirm which option is being selected for an aux copy operation? I would like to confirm this because I would assume that when copying between 2 cloud libraries, a Disk Read Optimized copy would be more efficient and cheaper, since each block does not need to be read to generate the signature (or at least that's the way I read the documentation). Would there be an entry in the logs to determine this? Thanks in advance for the advice/assistance.
Hello, recently we had to build a configuration with: 1x CommServe, 1x MediaAgent, 2x StoreOnce Catalyst. The MediaAgent and the 2 StoreOnce appliances are connected through fiber. Backups to the first StoreOnce Catalyst work fine without any errors, but when we initiate an aux copy from the first StoreOnce to the second StoreOnce, it fails due to chunk errors. Any ideas about the configuration we need to follow in order for the aux copy to succeed?
I am testing Commvault's connection to Wasabi. My Wasabi test bucket is object-locked, so Commvault can't delete older data. To test a loss of the Commvault database, I didn't configure my Commvault jobs to be WORM protected. Consequently, I was able to delete some jobs, although the data in the Wasabi bucket remains. I can't seem to find the option in Commvault to scan the bucket for existing backups to reimport. Is this not available?
Hi, good morning. I installed Commvault version 11.28.70. Meanwhile, on my vCenter infrastructure, I created and installed a Quantum VTL DXi 5000 Community Edition appliance (this appliance is built on CentOS). Both servers are on the same network and I can ping the VTL without problems. I would like to know how to connect/configure the VTL in my Commvault environment. I tried without success to configure it in the Commvault Command Center via Storage/Disk/Add, etc. I entered the server information (user and password), but when I try to enter the path (the VTL appliance is on CentOS), I can't configure the path correctly. Thanks in advance. Best regards, Ricardo
Hi all, we are implementing a new NetApp infrastructure. It will be composed of 2 clusters with SM-BC (SnapMirror Business Continuity), and we will use it to present LUNs to VMware. I can't find any information about compatibility with IntelliSnap. Any ideas? Thanks a lot.
How do I properly delete/decommission mount points associated with old storage? DDBs still appear to be associated with the mount paths.
We have added new storage to Commvault and set the old mount paths to "disabled for write" via the mount path "Allocation Policy" → "Disable mount path for new data" plus "Prevent data block references for new backups". All mount paths that are "disabled for write" show no data on them via the "mount path" → "view contents" option. We have waited several months for all the data to age off. BUT... I see information on the forums/docs that data may still be on the storage, and there are references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our "disabled for write" mount paths have DDBs listed in them, so it appears Commvault is still using the storage in some way. I saw a post that indicated: "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount pat
Hello, we would like to tier the data that is stored on the disk library out to Huawei Object Storage. I created a secondary copy and configured an aux copy schedule. The problem is that the disk library's disk space is running low, because the job is not as fast as I was hoping. The amount of data for the copy job can be up to 10 TB. Is there a way to speed up the aux copy job? The Media Agents have 2x 10 Gbit cards. Regards, Thomas
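As a rough sanity check on where the bottleneck sits, a sketch like the one below (purely illustrative numbers, not measurements from this environment) estimates how long a 10 TB aux copy would take at different effective throughputs. If the observed runtime is far above the estimate for 2x 10 Gbit at reasonable efficiency, the limit is probably read concurrency on the disk library or the ingest rate of the object storage rather than the network itself.

```python
# Back-of-envelope estimate: how long does a 10 TB aux copy take at a given
# effective throughput? All numbers below are illustrative assumptions.
TOTAL_TB = 10                          # size of the aux copy job
TB = 1000 ** 4                         # bytes per TB (decimal)

def hours_to_copy(total_bytes: int, gbit_per_s: float, efficiency: float) -> float:
    """Hours to move total_bytes at gbit_per_s link speed, scaled by an
    efficiency factor (protocol overhead, read concurrency, target ingest)."""
    bytes_per_s = gbit_per_s * 1e9 / 8 * efficiency
    return total_bytes / bytes_per_s / 3600

for gbit, eff in [(10, 0.8), (20, 0.8), (20, 0.3)]:
    print(f"{gbit:>2} Gbit/s at {eff:.0%} efficiency: "
          f"{hours_to_copy(TOTAL_TB * TB, gbit, eff):.1f} h")
```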