Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 725 Topics
- 3,531 Replies
Hi, good day!

Environment details (planned initial):
- CommServe: dual-homed, with network 10.x.x.x (agent backup) and 172.x.x.x (VMware).
- Network topology: bonded LACP.
- Total ports on HSX: 2 dual-port 25 Gb cards and 1 iLO/IPMI, so 5 ports available per node including iLO.
- Number of nodes: 3.
- Plan is to define only two networks: DP with CS, and SP. DP/CS registration: 172.x.x.x. SP: 192.x.x.x.

Present issue: the customer has a non-routable network (10.x.x.x, used for agent backups of DB/SQL) which cannot be routed to the DP network mentioned above. This is a reference architecture, and HSX can have only one routable network.

Will the topology below work? Looking for the best suggestion:
- Topology: bonded VLAN LACP.
- DP bond1 carries VLANs 100 and 200; SP is 192.x.x.x with no VLAN.
- bond1.100 is the DP VLAN used for CS registration and carries the default gateway. DNS, NTP, SSH, VMware, and replication/aux copy should all work over this 172.x.x.x network, which is routable. Note: DNS and VMware are on different
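For illustration only, a rough nmcli sketch of the bond1.100 / bond1.200 split described above on a RHEL-based node; interface names and addresses are placeholders, not a validated HyperScale X reference configuration (HSX normally configures its networking through its own setup tooling):

```bash
# Hypothetical layout of the bonded VLAN design from the post.
# Interface names and IPs are examples only.

# LACP bond for the DP networks
nmcli con add type bond ifname bond1 con-name bond1 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname ens1f0 master bond1 slave-type bond
nmcli con add type ethernet ifname ens1f1 master bond1 slave-type bond

# VLAN 100 on bond1: CS registration network (172.x.x.x), carries the default gateway
nmcli con add type vlan ifname bond1.100 dev bond1 id 100 \
    ipv4.addresses 172.16.10.11/24 ipv4.gateway 172.16.10.1 ipv4.method manual

# VLAN 200 on bond1: non-routable agent backup network (10.x.x.x), no gateway
nmcli con add type vlan ifname bond1.200 dev bond1 id 200 \
    ipv4.addresses 10.10.10.11/24 ipv4.method manual
```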
Hi, good day. I would like to know whether two HyperScale X clusters located in different DCs (25 km between them) can register with a single CommServe sitting in one of the DCs. Is this workable, and is there anything to consider before setting it up? I would also like to know whether an auxiliary copy can be sent to the opposing DC's HSX devices managed by the single CommServe.
Here is the situation: primary copy at the main site, secondary copy at the DR site, tape copy at the DR site. We run aux copies from Primary > Secondary and Secondary > Tape. We got a lot of chunk read errors when running the aux copy to tape, and we noticed those jobs also fail to restore from the secondary copy; from the primary copy the jobs restore fine. Can I mark the chunks/jobs as bad on the secondary copy and have Commvault recopy the jobs, including the bad chunks, from the primary copy again, rather than skipping the bad chunks due to dedupe?
Hello, I am looking for the best solution for an immutable copy of data built on Commvault in Azure. We have a configuration in mind, but I would like to see what the Commvault Community can propose. I am comparing two approaches:
1. First copy of data is local, the Azure copy acts as the DR site, with dedicated immutable storage in Azure per site.
2. First copy of data is local, the Azure copy acts as the DR site, with shared immutable storage in Azure across sites.
For both approaches I would like to understand the cost, and how to estimate the difference in storage usage between dedicated and shared storage. What is the best storage configuration for this solution: deduplicated or non-deduplicated? Retention for the immutable copy is 14 days and 2 cycles. Which storage tier should the immutable copy use (Hot or Cool)? And, most importantly, how do I calculate the cost of these solutions given that I currently have only a local backup solution? Regards, Michal
We noticed on a RHEL 8.7 build with a local disk library and ransomware protection enabled that it breaks the xfs utilities. We opened a case with RH support. They said to try the mount option "context=unconfined_u:object_r:user_home_dir_t:s0" instead of the recommended "context=system_u:object_r:cvstorage_t:s0", which does fix the issue with the xfs commands. Now they are asking me to ask you all: do other customers have the same experience? They also suggested using the context that "allows xfs to work", to which I replied "I have no idea, but this is what the script sets up." Basically, if you set up a disk library using xfs (not NFS or S3, obviously), simply try to run xfs_info or xfs_growfs; it can't be done. The workaround I found, if I need to grow the filesystem, is to stop the services, remount without the ransomware context in place (or with their suggested options), fiddle with xfs, then remount with the proper context. I'm looking for a fix rather than a workaround, but they seem firm that it's not a problem with SELinux or xfs, it's the way it's mounted.
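For anyone hitting the same thing, this is roughly the workaround sequence I use; the device and mount point below are placeholders for your actual disk library path:

```bash
# Rough sketch of the grow-filesystem workaround described above.
# /dev/sdb1 and /mnt/disklib are placeholders; the SELinux context is the
# one the ransomware-protection script configures.

# Stop Commvault services so nothing writes to the library
commvault stop

# Remount without the cvstorage_t ransomware context
umount /mnt/disklib
mount /dev/sdb1 /mnt/disklib

# The xfs tools work again at this point
xfs_info /mnt/disklib
xfs_growfs /mnt/disklib

# Remount with the context the protection script expects, then restart
umount /mnt/disklib
mount -o context=system_u:object_r:cvstorage_t:s0 /dev/sdb1 /mnt/disklib
commvault start
```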
Hello, we have some auxiliary copy jobs configured that run just once a week, copying only the full backups to tape. Everything seems to be working OK, but we received the attached alert and, to be honest, we don't fully understand it. Can you kindly tell us why we are seeing this behavior? Thank you.
Hi Community, with reference to the following post, where there is no longer an option to select: https://community.commvault.com/topic/show?tid=5967&fid=49. Is there a way to confirm which option is being used for an aux copy operation? I would like to confirm because I would assume that, when copying between two cloud libraries, a Disk Read Optimized copy would be more efficient and cheaper, since each block does not need to be read to generate the signature, or at least that's the way I read the documentation. Would there be an entry in the logs to determine this? Thanks in advance for the advice/assistance.
We are trying to use Commvault to back up an Oracle 19c database straight to our HyperScale servers (we were backing up to a separate SAN disk and then to HyperScale, and hope to remove the extra step). The backup is working, but a full backup picks up all of the white space on the data drive instead of only the used space, and takes about 13 hours to complete. Currently we think the SBT channel in RMAN is what is causing this. Is anyone else doing this? The length of the backup has the DBAs worried.
Hi, good morning. I installed Commvault version 11.28.70. On my vCenter infrastructure I also created and installed a Quantum DXi 5000 Community Edition VTL appliance (the appliance is built on CentOS). Both servers are on the same network and I can ping the VTL without problems. I would like to know how to connect/configure the VTL in my Commvault environment. I tried, without success, to configure it in the Commvault Command Center under Storage > Disk > Add, etc. I entered the server information (user and password), but when I try to enter the path (the VTL appliance runs on CentOS) I cannot configure the path correctly. Thanks in advance. Best regards, Ricardo
Hello, recently we had to build a configuration with 1x CommServe, 1x MediaAgent, and 2x StoreOnce Catalyst. The MediaAgent and the two StoreOnce units are connected through Fibre Channel. Backups to the first StoreOnce Catalyst work fine without any errors, but when we initiate an aux copy from the first StoreOnce to the second StoreOnce it fails with chunk errors. Any ideas about the configuration we need to follow for the aux copy to succeed?
Hi, my backups seem to get to 70%, then stop and sit in a pending state with the following message displayed in the Job Controller. I have confirmed that the MediaAgent and the CommServe can communicate (I have run CVPing and CVIPInfo; both return success). I have also tried searching on error code 62:468 and get no results for it, so I'm at a bit of a loss as to what is going on.
Hi guys, does anyone have experience with iSCSI-attached tape libraries and sharing the drives? It should work, but it is not really documented, and I am concerned about performance, stability, and functionality, even more so if only one network is available for both regular network traffic and the iSCSI connection. Any information/experiences would be appreciated. Thanks a lot!
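For context, this is roughly how I would attach such a library on a Linux MediaAgent to start testing; the portal address and target IQN are placeholders, not a setup I have validated:

```bash
# Minimal sketch of attaching an iSCSI tape library to a Linux MediaAgent.
# Portal IP and IQN below are examples only.

# Discover targets exposed by the library's iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.50.20

# Log in to the tape library target
iscsiadm -m node -T iqn.2005-10.com.example:tapelibrary01 -p 192.168.50.20 --login

# Confirm the changer and drives show up as SCSI devices
lsscsi -g                  # expect 'mediumx' (changer) and 'tape' entries
ls -l /dev/st* /dev/sg*
```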
Hi all, we have a Linux cluster with shared disks. Both servers reside in a VMware environment. Due to a VMware issue we cannot take snapshots of the VMs, so we want to back these VMs up with Commvault; we are currently backing up a few file systems on them. I tried to enable 1-Touch recovery and initiate a full backup. The backup fails in the Backup phase with the following error:

Error Code: [82:172]
Description: Could not connect to the DeDuplication Database process for Store Id.

The deduplication database in question is online and shows no errors. With 1-Touch removed, the backup runs smoothly.
Currently our client is using the built-in KMS server, which stores encryption keys in the Commvault database. As far as I can find, there is no way to extract these keys. We are looking to transition to Azure Key Vault for storing these keys. It is very easy to change the KMS server, but in theory this would leave us unable to access the previous backups, as we technically do not have access to those keys for decryption. I have searched this extensively and there is no documentation for it (confirmed via a Commvault support phone call). What is the proper process for changing the KMS server on a backup location, particularly from the built-in KMS server to a third party, without losing access to backups? I did find one forum post stating this "just works", but I need to provide some kind of concrete answer for my higher-ups to be happy. Thank you in advance!
Hi all, can I just check what the expected status of the Object Lock 'Default retention period' on an AWS S3 bucket is after running the 'Enable WORM Storage' workflow? The documentation states to disable default retention when Object Lock is enabled prior to running the workflow. When running the workflow on a test bucket (it completes successfully), it does not update the default retention to 60 days (my copy retention is 30 days), and it remains disabled. However, when copying data up to the bucket by running a backup to the locked storage policy copy, objects are correctly locked by Commvault to the required retention period (60 days) in compliance mode. So the question is: should the default retention be set to the specified retention lock period, or should it remain disabled for Commvault to set retention on a per-object basis? Thanks in advance. G.
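For what it's worth, this is how I compare the bucket-level setting against what gets stamped on individual objects; the bucket name and object key below are placeholders:

```bash
# Sketch: compare bucket-level Object Lock settings with per-object retention.
# Bucket name and object key are examples only.

# Bucket level: shows whether Object Lock is enabled and whether a
# default retention rule (days/years + mode) is configured
aws s3api get-object-lock-configuration --bucket my-commvault-worm-bucket

# Object level: shows the retention mode and retain-until date applied
# to a specific object written by the backup
aws s3api get-object-retention \
    --bucket my-commvault-worm-bucket \
    --key CV_MAGNETIC/V_12345/CHUNK_67890/SFILE_CONTAINER_001
```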
I'm new to HyperScale nodes and trying to figure out how to increase the space available for the DDB paths. We have multiple GDSPs using the HyperScale nodes for their DDBs and are receiving warnings that free space on the DDB MediaAgent is very low. Looking at the disk space, it looks as though there is 1.4 TB left on the mount path. I'm a Windows person, so maybe I'm not understanding? Is there a way to give more space to the DDBs? Thanks
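In case it helps to see it from the Linux side, this is the check I would start with; /ws/ddb is a typical DDB mount point on HyperScale nodes, but verify the actual path shown in the MediaAgent's DDB partition settings:

```bash
# Quick look at DDB space on a HyperScale node from the shell.
# /ws/ddb is assumed here; confirm the real DDB path on your nodes.

df -h /ws/ddb               # free space on the DDB volume
du -sh /ws/ddb/* | sort -h  # which DDB/partition directories use the space
```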
I am testing Commvault's connection to Wasabi. My Wasabi test bucket is object locked, so Commvault can't delete older data. To test a loss of the Commvault database, I didn't configure my Commvault jobs to be WORM protected. Consequently, I was able to delete some jobs, although the data in the Wasabi bucket remains. I can't seem to find an option in Commvault to scan the bucket for existing backups and reimport them. Is this not available?
Hi team, we have a very large, infinite-retention storage policy associated with storage pool "Pool1". It has grown to the point that we will soon be creating another storage pool and storage policy; let's call these Pool2. All clients from Pool1 will be migrated to Pool2, so Pool1 will stop receiving any fresh data once Pool2 starts receiving it all. My question is about the massive leftover DDBs from Pool1. They are 2 x 1.8 TB and are hosted on the two MediaAgents associated with Pool1. Since Pool1 will stop receiving data, I am keen to decommission the Pool1 MediaAgents, noting that the secondary-copy, cloud-based backup data can be accessed from a number of MediaAgents, so it does not necessarily have to be the Pool1 MediaAgents; it can be any MediaAgents, provided they are mapped to the relevant cloud library mount points. So the questions I have are: 1 - What do we do with these large, legacy DDBs? I understand we need to keep for Commvault Sync
Hi folks, I've hit a problem seeding data to Azure using a Data Box. The copy has been created and the DDB is ready to be shipped as well. I've followed this procedure: https://documentation.commvault.com/v11/expert/97276_migrating_data_to_microsoft_azure_using_azure_data_box.html and this has also helped: https://commvaultondemand.atlassian.net/wiki/spaces/ODLL/pages/351142608/Deduplication+Database+Seeding#DeduplicationDatabaseSeeding-DDBSeedingusingDeduplicatedStorage. I'm on step 4, "Once the jobs associated with the initial seeding are complete, shut down the Data Box using the recommended shutdown process for Azure Data Box." Running the validation I get this error: https://aka.ms/dberr5 - "Large file shares are not enabled on your storage account(s). To disregard this error…" The CV_MAGNETIC folder is 36 TB and so easily hits the 5 TB limit stipulated here: https://learn.microsoft.com/en-us/azure/databox/data-box-disk-limits under "Object size limits and Azure Files". So the only thing I can do is drop the storag
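In case it's useful to others, large file shares can be enabled on the target storage account with the Azure CLI; the account and resource group names below are placeholders, and it's worth checking first that the account's SKU/redundancy supports the setting:

```bash
# Sketch: enable large file shares on the storage account used for the
# Data Box copy. Account and resource group names are examples only.
az storage account update \
    --resource-group rg-commvault-seeding \
    --name stcommvaultseed01 \
    --enable-large-file-share

# Confirm the setting took effect
az storage account show \
    --resource-group rg-commvault-seeding \
    --name stcommvaultseed01 \
    --query largeFileSharesState
```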
Hi everyone, I am planning a MediaAgent that has enough storage capacity to serve as my disk storage. Deduplication will be active; the sizing is the Small back-end size for disk storage (up to 50 TB). Are there any recommendations or requirements regarding the RPM of the backup disks? I did not find anything in the Commvault documentation, only the RPM recommendation for the OS/software disk. Thanks in advance for your help!
I have 3 LTO-7 tapes that were previously used by Micro Focus Data Protector, and we wanted to reuse them for Commvault backups. But strange things started to happen with those tapes:
1. Commvault recognized them as completely empty tapes, despite them holding data from the other backup tool.
2. When I launched the copies, the first two were filled with 6 TB and 300 GB.
3. The third tape fills up to 2 TB, goes into an append state, and won't let me use it further, asking for a new tape instead.
I have already formatted it twice, but the same thing keeps happening. It also looks like the format Commvault performs never reaches the drive; it only happens at a logical level with Commvault's own data. Is there a way in Commvault to format the tape and purge all the data so the tapes can be used to their full capacity?
Hello! A customer has a tape library that is partitioned, with one partition for Commvault. Originally it had 11 slots and some drives. The customer added 30 slots, but they are not recognized by Commvault (nor are the tapes in them). A full scan does not update the slot count. What must be done for the newly added slots to be recognized? Regards, Pedro
Hi all, NetApp has mentioned to a customer that Commvault can leverage NetApp's SnapLock feature on a FAS CIFS disk library to provide immutability of backup data. I found the following two articles regarding this: https://documentation.commvault.com/2023/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html and https://documentation.commvault.com/2023/essential/155629_enabling_worm_storage_and_retention_for_disk_storage.html. These docs seem to describe two different features: one is enabled using a workflow and mentions DDB sealing, while the other is enabled using a slider in Command Center and mentions nothing about DDB sealing. Can anyone tell me the difference between the two (and what they do exactly)? And which feature should I use to leverage SnapLock on a FAS CIFS disk library for immutability of backup data? Thanks!