Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,252 Replies
Upgrading my DDBs to V5, trying to avoid fully stopping backups while I squeeze in the upgrade. Is it possible to check “Temporarily Disable Deduplication” under Dedupe Engines > [DDB name] > Properties > Deduplication > Advanced tab, and perform the upgrade? The DDB needs to come offline for compaction. Thanks in advance, Joel Bates
Because we are changing the backup storage infrastructure, I only want to change the storage policy. Our former strategy was to have a spool copy and two aux copies: one to NAS and one to LTO. This was done because we had a performance/backup-time issue, which we solved with SSDs directly attached to the backup server. Now I want to set a retention on the primary copy and delete the no-longer-needed aux copy to NAS, which also points to the same storage (the local SSDs) and wastes the needed space on it. If I try to change the retention on the primary storage policy, I get the error shown in the screenshot and the changes are discarded. I don’t know where to find these settings (archiver retention rule and “OnePass”). I have never set an archiver retention rule; we are not using archiving.
Hi Community - We have a circa 2 PB cloud library currently writing to AWS S3. Our customer is asking if we can migrate this data into Azure. Has anyone out there had experience with the best way to accomplish this? I am thinking that doing this using aux copies is going to take rather a long time to copy 2 PB!!
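If Commvault confirms that an object-level copy plus repointing the library is even a supported path for your data (check that first - nothing here guarantees Commvault will accept copied chunks), AzCopy can read directly from S3 into Blob storage, which avoids a second pass through the MediaAgents. A minimal sketch, with bucket, account, and container names as placeholders:

  # AzCopy supports AWS S3 as a source; credentials come from the environment
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  azcopy copy \
    'https://s3.amazonaws.com/<source-bucket>' \
    'https://<account>.blob.core.windows.net/<container>?<SAS>' \
    --recursive

The alternative, an aux copy through Commvault, is slower but keeps the software fully aware of the data at every step.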
I have 3 filer clients and a media agent, all on the same switch, no firewall or router. All filers have subclients. One filer has 8 subclients, and backup copies are not working on two of the subclients, giving the error below. Error Code: [39:501] Description: Client [XYZ03] was unable to connect to the tape server [ABC123] on IP(s) [10.0.0.34] port . Please check the network connectivity. Source: ABC123, Process: NasBackup
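A quick first check is whether the client can reach the tape server's IP and port at all, independent of Commvault. A minimal sketch, assuming the default Commvault CVD port 8400 (substitute the port from your own error message, which is blank above):

  # From the client that logs the 39:501 error:
  ping -c 3 10.0.0.34          # basic reachability
  nc -zv 10.0.0.34 8400        # TCP probe of the assumed Commvault port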
We seem to run into multiple problems in our new HSX environment. Metadata disk d2 silently filled to 100% on one of three nodes, data disks are all at 90-95%, but in the GUI only 550 of 720 TB are shown as used. Not a single alert for this; all green in the GUI. And then there is disk d22 / sdv on one node that failed a few weeks ago and was replaced together with support. In the GUI it’s shown as mounted, but in reality it’s not:
sdu 65:64 0 16.4T 0 disk /hedvig/d21
sdv 65:80 0 16.4T 0 disk
sdw 65:96 0 16.4T 0 disk /hedvig/d23
I followed Replacing Disks in a HyperScale X Reference Architecture Node (commvault.com) but the disk is not mounted.
Nov 9 11:22:38 sdes1701-dp systemd: Dependency failed for /hedvig/d22.
Nov 9 11:22:38 sdes1701-dp systemd: Job hedvig-d22.mount/start failed with result 'dependency'.
Nov 9 11:22:38 sdes1701-dp systemd: Job dev-disk-by\x2duuid-dfcc3e6c\x2d8152\x2d42b2\x2db0a1\x2d6742d4748d3c.d
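The "Dependency failed" on a by-uuid device is the classic symptom of the mount definition still referencing the old disk's UUID after a replacement. A minimal check, assuming the replacement really is /dev/sdv (adjust to your node):

  # UUID the d22 mount is waiting for vs. the UUID on the new disk:
  grep d22 /etc/fstab
  blkid /dev/sdv
  # If they differ, or blkid returns nothing because the disk was never
  # formatted/registered, rerun the Commvault disk-replacement procedure
  # rather than editing fstab by hand on an HSX node.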
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL and I need to decom the old one. Is there a way, with CommVault down, to uninstall the medium changer and drives and install the new medium changer and have CommVault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Afternoon all. I have a few questions - hopefully nothing too complicated! I have recently configured a tape library in our Commvault environment which appears to be working OK; I just had a few questions about configuring the tape library to suit our needs. The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like to have a tape for each day. I’ve configured the auxiliary copy job and the related schedules to back up at a suitable time, which I think is fine so far. Where I appear to be struggling is with configuring Commvault to back up to a new tape each day. We will have 21 tapes, and as an example tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, etc.; tapes 007, 008 and 009 will be Friday, Saturday and Sunday respectively. There will be two SPs backing up to tape. My question is: is it possible to configure CV so that once the backup job from SP2 is c
We are setting up an aux copy to MCSS cold storage for one of my customers and have used an on-premises DDB to send a DASH copy to cloud storage. I have some confusion here. Suppose our on-premises site/server goes down. Will I be able to recover my data from the cloud after rebuilding the CV server? Because both DDBs were in local storage only. Or does it copy the DDB to MCSS as well?
I am creating a partitioned DDB with two media agents. Which interface of the media agents should I add? I have a dedicated NIC available. Do I need to add the IP address of the media agent? What happens if I leave it at the default? I have implemented this in the past but I cannot remember this part.
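If the intent is to keep DDB traffic on the dedicated NIC, one sanity check is that whatever name you enter resolves to that NIC's address and not the default one. A minimal sketch, with ma01-ddb as a hypothetical hostname bound to the dedicated interface:

  # Does the name resolve to the dedicated NIC's IP?
  getent hosts ma01-ddb       # ma01-ddb is an example name, not a real default
  ip -br addr show            # compare against the address on each interface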
HQ-VM-CommServ - 32:162 - Replacing the active media for job from Mount Path [[dr_media_svr2] V:\DRMA02AR_LUN06] to [[dr_media_svr2] U:\DRMA02AR_LUN05]. What does this mean? And it is making the job run very slowly, with a low Current Throughput of 0.3:
In the CommVault client application, I can view the “Media in Library” through Storage Resources/Libraries\QUANTUM Scalar i3-i6 3/Media By Location/Media In Library. This window shows all the tapes in the library, whether they are in tape drives or in regular slots. I’ve looked for a window like this in Command Center, but I’ve only been able to view either the tapes outside the drives or the tapes inside the drives -- but not both at the same time. Is there a place where I can view all of them at once? Or perhaps a view that I can custom-configure to show this information? Along with this, the “Slot View” in Command Center shows all the tapes, but on multiple pages. This makes finding several tapes across all the slots very slow, because one must click through the different pages of tapes. Is there a way to expand the list length per page? Or disable pagination completely? Solutions and/or suggestions will be greatly appreciated.
Hi guys, We are implementing an air-gapped DR site, so the DR site only receives an offline copy from the main one through aux copies. For aux copies, we disabled their schedule in the “System Created Autocopy Schedule” and created a new specific schedule which starts the aux copies to the DR site after the blackout window. Since the DR MAs and library are only accessible during the air-gap window, I was wondering whether rescheduling the remaining system-created jobs (DDB Backup, DDB Verification, DDB Space Reclamation, Data Aging) to the air-gap period would cause any issue when running at the same time as the aux copies (our aux copies use deduplication). I know that running DDB Backup jobs may cause issues with aux copies, as shown in the screenshot below: https://documentation.commvault.com/11.24/expert/12504_deduplication_database_backup.html So my question is, is it possible to launch all the previously listed system jobs at the same time as aux copies without an
Dear colleagues, We have been using CommVault Simpana for many years, and currently we are looking for a way to back up our data to the cloud with Simpana. I mean that the target of the backup would be AWS, Azure, or any other cloud provider. I found in the documentation that backup to the cloud is possible for CommServe DR, but what about production data? Note: we are looking for options for BACKUP to CLOUD (where the backup destination is the CLOUD), NOT backup of the CLOUD (where the CLOUD is the backup source)! I would appreciate any information on this topic. Thanks!
When following the document on how to stop/start a HyperScale X appliance node https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html I get to the step to unmount the CDS vdisk, and after getting the proper vdisk name and running the command # umount /ws/hedvig/<vdiskname> I get the message ‘device is busy’. What’s the proper way to remediate this and continue? Thanks, G
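"Device is busy" just means something still has files open on the mount, and standard tooling will show what. A minimal sketch, keeping the placeholder path from the post:

  # What is still holding the mount open?
  fuser -vm /ws/hedvig/<vdiskname>
  lsof +f -- /ws/hedvig/<vdiskname>
  # Stop or relocate those processes first. A lazy unmount (umount -l) can
  # mask real problems on an HSX node, so check with support before using it.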
Hello, let me ask your opinion about the following situation. When I check the details of the System Created DDB Space Reclamation schedule policy, it looks corrupted. As you can see in the attached image, the summary screen shows Type “Data Verification” while the dialog shows Type “Data Protection”. Moreover, the Associations tab shows the list of clients instead of the DDB list. Is this normal? How can I get rid of this? Thank you in advance, Gaetano
Hello All, We have configured an aux copy with an AWS cloud storage library destination, and we are facing aux copy slowness only for the China location. The other locations (US & UK) are working fine. Is anyone facing the same issue in the China location? Is Commvault aware of any issues with routing Commvault traffic through the local internet in China?
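One way to separate Commvault behaviour from raw network behaviour is to time a request to the S3 endpoint directly from the China MediaAgent and compare it with the same probe from a US/UK MA. A minimal sketch, assuming a bucket in the cn-north-1 AWS China region (substitute your actual endpoint):

  # Connection setup time vs. total time from the MA in China:
  curl -o /dev/null -s -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    https://s3.cn-north-1.amazonaws.com.cn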
We initiated a move mount path operation in our Commvault environment, but it seems to be stuck somehow. There are no other jobs running at the moment for this media agent. Also, all the data seems to have been copied already; Estimated Total Data Size: 1.65 TB and Size of Data Copied: 1.65 TB are the same. It has been like this for a day and some hours.
We have an offsite facility we send our aux copies to for DR purposes - PBs of data for Commvault to go through. We do not have a firewall, and port 8400 can reach the offsite location OK. Some of us in house think that using a network topology to create a persistent connection and then opening 8 routes will speed up the process compared to letting Commvault handle the traffic automatically. Does anyone have any insight into which approach is better for us? Or, more technically, how the network routes work, or a recommended setup?
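Before adding routes, it can help to baseline what the WAN link delivers with one stream versus several in parallel, since parallel streams are roughly what extra routes buy you. A minimal sketch with iperf3, where dr-ma01 is a hypothetical DR-side host running iperf3 -s:

  iperf3 -c dr-ma01 -t 30          # one TCP stream
  iperf3 -c dr-ma01 -t 30 -P 8     # eight parallel streams
  # If -P 8 is dramatically faster, more routes/readers will likely help;
  # if not, the bottleneck is somewhere other than per-stream TCP throughput.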
I see this all the time and I never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies there is a 'Data Transferred over Network' number, which I think should be 0, but there is usually a number there. For example, this aux job (still running) has these numbers: Total Data Processed: 3.23 TB; Data Transferred Over Network: 107.95 GB; Total Data to Process: 4.7 TB.
Hello all, I’m trying to configure my OCI in Commvault to test this tool (I’m using a trial licence) but I’m dealing with some errors as shown below: What certificate is this? How can I install it? And where? Then I use the CloudTestTool and it shows this: Look at the log file:
4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
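"The required information to complete authentication was not provided or was incorrect" usually points at the API-key credential set (tenancy/user OCIDs, key fingerprint, PEM key) rather than the network. A minimal sanity check using the standard OCI key pieces (all paths below are placeholders):

  # The fingerprint of your PEM key must match the one uploaded for the
  # user in the OCI console; this is the documented OCI fingerprint command:
  openssl rsa -pubout -outform DER -in /path/to/api_key.pem | openssl md5 -c
  # Also re-check the tenancy OCID, user OCID, and region you gave Commvault.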