Storage and Deduplication
MA GRID setup | potential
Hi, do you have a best practice for how to utilize MediaAgents in a GRID format? For example, we have 4 MAs and, say, 3 subclients. One subclient always uses one MA (with more VSA proxies within the job for VMs) for VMware backups. So there is some sorting mechanism for the other 2 subclients to either receive idling MAs to start the backup, or use the one that is already in use by subclient 1. But is there an option to utilize the potential of a GRID solution where we can use more than one MA for the backup of a single subclient? I hope it's clear what I am trying to achieve.
Migrate a Data Path
Hi, we just started using object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the config and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? After migrating the data I would like to delete the second data path and then remove the bucket in the object storage.
Regards
Thomas
VTL to VTL migration
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL, and I need to decommission the old one. Is there a way, with Commvault down, to uninstall the old medium changer and drives, install the new medium changer, and have Commvault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Mount path is showing offline
Hi team, we have a MA with four mount paths, all presented from a backend SAN. One of the four mount paths shows an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk, which worked fine, but when I run the storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
Unable to add network disk storage
We have CV 11.24.56 and are using dedupe. We already have this disk library set up and were using 60% of it, but now we want to add another mount path and use the other 40%, because that space was never used. I cannot send screenshots, so I will try to explain.

We have a primary site (Pri-1) and an alternate site (Alt-1). Commvault copies the data to the Pri-1 disk library, and then we DASH copy the data to the Alt-1 disk library, so we have two online disk libraries, one at each site. Each library has its own folder/mount path, and we have global dedupe policies, one set up for the Pri-1 library and another for the Alt-1 library (this is how it was set up by the installer).

My issue is that when I went to add the extra storage, I was able to add it as a new disk library with a new mount path successfully. The libraries show up in Commvault as Alt-2 and Pri-2 under the library. I then was able to add this second Primary Data site
The signature does not match. Message: The required information to complete authentication was not provided or was incorrect.
Hello. I'm trying to configure an Oracle Cloud Infrastructure Object Storage library, but it's showing the error in the title. I have already entered all the information required to configure it: Service Host, Tenancy OCID, User OCID, Key's Fingerprint, PEM Key Filename, and Bucket. What do I have to do to solve this problem?
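A common cause of this signature error is a fingerprint that does not match the API key uploaded to OCI, or clock skew on the machine signing the requests. As a general sketch (not an official Commvault procedure), you can recompute the fingerprint OCI expects, which is the colon-separated MD5 of the DER-encoded public key, and compare it with the value you entered; the key path below is a placeholder:

```shell
# Recompute the OCI API key fingerprint from a PEM private key and
# compare it with the fingerprint configured in Commvault.
# "$1" is the path to your API signing key (placeholder, not from this post).
oci_fingerprint() {
  openssl rsa -in "$1" -pubout -outform DER 2>/dev/null | openssl md5 -c | awk '{print $NF}'
}
# Example: oci_fingerprint ~/.oci/oci_api_key.pem
```

If the fingerprint matches, also verify the MediaAgent's clock is NTP-synced, since OCI request signing is timestamp-sensitive.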
What type of storage array do you use?
We currently use IBM V5000 arrays as our Commvault backup target to land our deduped backups. We are starting to review other options to see what fast, cost-effective alternatives are out there. I prefer Fibre Channel connections, but I'm open to options. Since Commvault is really the brains in our scenario, the storage array does not need many features, just good speed. What vendor storage arrays do you use? Are you happy with them?
Moving a mount path hangs, stuck at 96%
We initiated a Move Mount Path operation in our Commvault environment, but it seems to be stuck somehow. There are no other jobs running at the moment for this MediaAgent. Also, all the data seems to have been copied already; we see:
Estimated Total Data Size: 1.65 TB
Size of Data Copied: 1.65 TB
They are the same. It has been like this for a day and some hours.
Enabling Ransomware Protection on a MediaAgent for disk library
Hi Community, does enabling the Ransomware Protection feature on a Windows MediaAgent make my disk library and backup copies immutable? Do we still need WORM enabled on primary or secondary copies even after enabling this native CV feature, for foolproof ransomware protection? If yes, what is the use of the Ransomware Protection feature?
Regards, Mohit
Setting up a Proxy Server to Access the Cloud Storage Library
Hi, we are configuring a cloud library as the export destination for Disaster Recovery (DR) backups, so whenever we take a DR backup, the metadata is exported to our cloud library. However, the CommServe has no direct access to the cloud library; it must connect to the cloud storage through a proxy server, as explained here: https://documentation.commvault.com/v11/expert/9171_setting_up_proxy_server_to_access_cloud_storage_library.html I am wondering which port we should use in step 8, because using a random port number doesn't work. Do you have any idea? Best regards
OK to kill long running DDBBackup job?
I'm having problems with DDBBackup jobs at my DR site. I changed the schedule from every 6 hours to once per day, but the backup from yesterday is still running and I'm at the 22-hour point. I'd like to kill this job and let a fresh one start, but I understand that the DDBBackup uses snapshots, and I'm afraid that if I kill it there won't be a proper snapshot cleanup. Is it OK to kill a DDBBackup job that has been running this long? Ken
Auxiliary Copy - restart schedule
Afternoon folks, I have an auxiliary copy that backs up to tape. I have deleted all of the existing jobs on the tape media with the hope of starting the backup chain from scratch. However, since deleting the backup jobs, if I go back to the storage policy, right-click the tape auxiliary copy, and view "media not copied", it is blank. I was expecting to see all backup jobs for the backup period I selected. Is it possible for me to "restart the schedule", so to speak, without deleting the auxiliary copy? TIA
Hedvig umount 'device is busy'
When following the document on how to stop/start a HyperScale X appliance node (https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html), I get to the step to unmount the CDS vdisk. After getting the proper vdisk name and running:
# umount /ws/hedvig/<vdiskname>
I get the message 'device is busy'. What's the proper way to remediate this and continue?
Thanks
G
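As a general Linux sketch (not an official HyperScale procedure), you can first list which processes are still holding files open under the vdisk path before retrying the unmount. This pure /proc walk works even when `fuser` or `lsof` are not installed; the vdisk path in the usage comment is a placeholder:

```shell
# find_holders MOUNTPOINT - print "PID PATH" for each process whose cwd or
# an open file descriptor points under MOUNTPOINT. One line per process.
find_holders() {
  mp=$1
  for pid in /proc/[0-9]*; do
    for link in "$pid"/cwd "$pid"/fd/*; do
      t=$(readlink "$link" 2>/dev/null) || continue
      case "$t" in
        "$mp"|"$mp"/*) echo "${pid#/proc/} $t"; break ;;
      esac
    done
  done
}
# Typical use before retrying the unmount (path is a placeholder):
#   find_holders /ws/hedvig/myvdisk
```

If the holders turn out to be Commvault or Hedvig services, stop them per the documented shutdown order rather than killing them; `umount -l` (lazy unmount) is a last resort.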
Copy backup data from tape library to Cloud Library (Cloudian)
We have a storage policy with aux copies that was sending disk backups to a tape library. The library in question that contained all these tapes has been decommissioned. A new library was stood up and all tapes were put into this new tape library. However the aux copy that represents this data belonged to a different media server and physical library. We are trying to figure out how to take the data sent to the AUX copy in the old storage policy and move it to a new Cloudian array that has been configured as a Cloud library.
Failed to verify the device from MediaAgent - Failed to check cloud server status Error: The certificate file is not found. Error = 44336
Hello. I'm trying to configure an Oracle Cloud Infrastructure Object Storage library, but it's showing the error in the title. I have already entered all the information required to configure it: Service Host, Tenancy OCID, User OCID, Key's Fingerprint, PEM Key Filename, and Bucket. I also created a config file in the .oci folder. What do I have to do to solve this problem?
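For reference, a minimal `~/.oci/config` has the shape below; every OCID, the fingerprint, the region, and the key path are placeholders, not values from this post:

```ini
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=9e:10:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
```

The fingerprint must match the one shown in the OCI console for the uploaded public key, and `key_file` must point to the PEM private key and be readable by the account running the Commvault services; the certificate-not-found error typically means that path cannot be resolved.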
Tapes with jobs that have multiple retentions - what happens?
Greetings! I've been involved in backups for quite a while, but have mercifully been using drives, not tapes. I'm now having to consider tapes. We have multiple SLAs, including:
- A monthly full backup to tape, retention of 62 days, lasts 1 month only
- A monthly full backup to tape, retention of 365 days - so 12 tape backups
- A quarterly full backup to tape, retention of 365 days - so 4 tape backups
The last full backup of the month goes to tape. So I foresee a single tape (or a group of tapes with a mess of…) consisting of mixed 1-month and 12-month retention times. Some of these tapes will have jobs that last a year as well as jobs that expired months earlier. What is everyone's experience with such a thing? We have over 750 servers involved here. CV has but one tape drive currently and an operations group to rotate tapes. Thank you in advance for any experience you can lend me. Mike Rucker
I have an auxiliary disk-to-disk copy and the throughput is very low; I see a lot of intermittent reading from the disk where the data lives.
From the CVJobReplicatorODS log, the job number is 177027:

346796 56e20 09/02 18:11:14 177027 Target copy is single instanced
346796 56e20 09/02 18:11:14 177027 Block level SI is set. Going to set minimum single instanceable size to block size
346796 56e20 09/02 18:11:14 177027 Min SI Data Size [128 KB], SI Block Size [128 KB]
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU for target copy:
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU(NOENCRYPTION) for target copy: as there are no encrypted src copy files.
346796 56e20 09/02 18:11:14 177027 N/w agents configured before/after firewall check = [2/2]. Firewalled = 1
346796 56e20 09/02 18:11:14 177027 CVArchive::StartPipeline() - StartPipeline SI configuration -[srcClientName - commvault-shf] Block Level [true], Block Size , File Level [false], Min Signature Size
346796 56e20 09/02 18:11:14 177027 CPipelayer::InitiatePipeline Initiating SDT connection [000000D50C41C7E0] from 10.10.165.221:8400(commvault-shf) to
Description: Error occurred while processing chunk - error Code: [13:138]
Hi all! Could you advise me how to troubleshoot the following type of error?

Error Code: [13:138] Description: Error occurred while processing chunk [xxx] in media [xxx], at the time of error in library [disklib01] and mount path [[xxx] /srv/commvault/disklib01/xxx], for storage policy [XXX] copy [Xxx] MediaAgent [svma1]: Backup Job [xxx]. Unable to setup the copy pipeline. Please check connectivity between Source MA [svma1] and Destination MA [svma1].

At a glance, it seems that CV cannot process a chunk from the (index?)/disk library... However, the issue is connected with the storage policy copy that moves data from the disk library to the tape library (secondary copy). The main problem for us is that it is not possible to copy data to the tapes; hence it may say "Unable to setup the copy pipeline". The MediaAgent is one server/device that communicates with both the disk and the tape library. Lastly, the files in the related directories don't seem to be corrupted... Any suggestions?
Tape library drives with mount status "Reservation Stuck"
I am executing a database tape backup with RMAN. The backup fails with RMAN errors, and in the tape library the drives are observed with "Reservation Stuck" mount status, while the drive status itself is "Drive Fully Accessible". What does "Reservation Stuck" indicate?

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of uncatalog command on ch1 channel at 08/29/2022 16:13:07
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
RMAN> Recovery Manager complete.
ORACLE error from target database:
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
What is the Impact on DDB after WORM option is enabled at Primary Copy level of storage policies.
Hi, for data security I was asked to research the WORM option at the primary copy level of our storage policies. A bit of background on our environment: we have short data retention set on our primary copy - 35 days, 1 cycle. My understanding is that this WORM option works within the Commvault software: no admin (or anyone else) can delete backup jobs after it is enabled, and we have to wait for jobs to age out and then be pruned by Commvault automatically. Then I was advised that if I enable WORM, the DDB for that storage policy will be sealed, and a new DDB will be created automatically and rebaselined. So I have some questions:
1. Is DDB sealing an automatic process? I enabled the WORM option more than a month ago on a storage policy for testing, but I cannot see a sealed DDB under 'Deduplication engines'.
2. Our disk libraries are quite big (from 300 TB to 800 TB). If we need to do the rebaseline every time, will this take a long time and have an impact on performance? With only 35