Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 557 Topics
- 2,976 Replies
Hello all, is there a way to move saved data from one library to another? We have two full backups on which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data. Kind regards,
Thomas
Hello CV community! I see that from 11.24 you can add snapshot copies to server plans: https://documentation.commvault.com/v11/essential/139040_new_features_for_snapshot_management_in_1124.html
I'm not sure whether this snap copy is supported only with specific types of storage. Does anyone actually use it? Thanks in advance for your feedback,
Nikos
The other day I noticed a Critical item on the health dashboard under the DDB backup section stating that one of the stores isn't protected. I noticed the DDB store itself had been auto-created a day earlier, with the old one still active. I have seen this across various environments: multiple DDB stores with different IDs get created and all of them are actively used. I couldn't find any documentation that explains it, so it would be helpful if someone could shed some light here. Thanks. Below is the case I was referring to, where DDB store 72 was auto-created; notice that for the FS and DB agent stores we have multiple IDs present.
I have installed a Linux MA with RHEL 8 and attached a 2.9 TB NVMe disk, formatted using LVM and divided equally into two partitions (1.4 TB each). When trying to add a storage pool and specify the DDB path, I get the error "The path doesn't have sufficient space to perform a DDB backup", although no data has been written to the NVMe partition yet.
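In case it helps anyone comparing notes: a minimal sketch of the kind of check that produces this error, assuming Commvault simply compares free space on the candidate DDB path against a required minimum. The threshold and path below are hypothetical placeholders, not documented Commvault values:

```python
import shutil

def has_sufficient_space(path: str, required_bytes: int) -> bool:
    """Return True if the filesystem behind `path` reports enough free space."""
    usage = shutil.disk_usage(path)
    return usage.free >= required_bytes

# Example (hypothetical mount point and minimum):
# GiB = 1024 ** 3
# print(has_sufficient_space("/mnt/ddb", 500 * GiB))
```

Running `shutil.disk_usage` (or `df -h`) directly on the LVM partition is a quick way to confirm the OS really reports the free space you expect before adding it as a DDB path.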
Hi, I'm working on an issue with DDB verification in a HyperScale X environment. I talked to someone at Commvault Support who explained that scheduled DDB verifications were removed from the best practices for HyperScale X because they could impact space reclamation jobs as well as overall performance. However, our customer needs to be able to produce reports to document and prove compliance with ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on admin jobs. However, they are noticing that the DDB verifications take a very long time and never actually complete, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how to get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions.
Jeremy
I'm getting the following alert email roughly once an hour:
> Anomaly Notification
> The system detected an unusual drop in the pruning performance for the following databases in commcell <CommServer_Host>
> Deduplication Database Reason
> HyperScale_Primary increase in (CommServe Job Records to be Deleted)
> CV Cloud Storage increase in (CommServe Job Records to be Deleted)
> Please click here for more details.
When I follow the "click here" link, I see:
> 1 CommServe Job Records to be Deleted
This has been going on for a couple of weeks. I don't think an annoying email is a big enough deal to open a ticket for, but I'd still like to clean this up. Does anyone know what the problem is and how to fix it? I'm sorry to say the Commvault help pages on this are not very useful.
Ken
Hello, I have some older data on one mount path and want to move it to a different host where new storage has already been configured and is up and running. Is there a way to merge the data from the old mount path into the new one? I want to move data from Host1 "D:\MP1" to Host2 "D:\New MP" to keep everything in a single place. Is a simple move enough?
Hi, do you have a best practice for how to utilize media agents in a GRID configuration? For example, we have 4 MAs and, say, 3 subclients. One subclient always uses one MA (with additional VSA proxies within the job for VMs) for VMware backups. So there is some sorting mechanism for the other 2 subclients to either pick up idle MAs to start their backups, or use the one already in use by subclient 1. But is there an option to exploit the potential of a GRID solution and use more than one MA for the backup of a single subclient? I hope it's clear what I am trying to achieve.
Hi, we just started using object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the config and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store the data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? Once the data is migrated I would like to delete the second data path and then remove the bucket in the object storage.
Regards,
Thomas
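Not an answer to the migration itself, but while sorting this out it can help to quantify how much data currently sits in each bucket. A minimal sketch using boto3 against an S3-compatible endpoint (endpoint and bucket names are placeholders; the actual data move should go through Commvault so its index stays consistent, not through raw object copies):

```python
def summarize(objects):
    """Given an iterable of dicts with a 'Size' key (the shape returned by
    S3 list_objects_v2 'Contents'), return (object_count, total_bytes)."""
    count = 0
    total = 0
    for obj in objects:
        count += 1
        total += obj["Size"]
    return count, total

def print_bucket_usage(endpoint_url, buckets):
    """Print object count and size per bucket. Requires boto3 and
    credentials configured in the environment."""
    import boto3  # third-party; pip install boto3
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    for bucket in buckets:
        count = total = 0
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            c, t = summarize(page.get("Contents", []))
            count += c
            total += t
        print(f"{bucket}: {count} objects, {total / 1024**3:.1f} GiB")

# Example (hypothetical endpoint and bucket names):
# print_bucket_usage("https://objectstore.example.com", ["cv-bucket-1", "cv-bucket-2"])
```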
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL and I need to decom the old one. Is there a way, with CommVault down, to uninstall the medium changer and drives and install the new medium changer and have CommVault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Hi team, we have an MA with four mount points, all from a backend SAN. One of the four mount points shows an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk, which worked fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?
7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
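For anyone debugging something similar: a minimal sketch of the kind of write probe the validation presumably performs (create, write, and delete a small file under the mount path). This is an assumption about the validation's behavior, and the path in the comment is just the one quoted in the log:

```python
import os
import uuid

def probe_mount_path(path: str) -> bool:
    """Try to create, write, and delete a small file under `path`.
    Returns True if the path exists and is writable, False otherwise."""
    probe = os.path.join(path, f".cv_probe_{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:
        return False

# Example (path taken from the log message above):
# print(probe_mount_path(r"C:\CommVaulttLibrary\503"))
```

If this kind of probe succeeds under the MediaAgent's own service account but validation still fails, the problem is more likely the path registered in the library configuration than the disk itself.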
We have CV 11.24.56 and use dedupe. We already have a disk library set up and were using 60% of it, but now we want to add another mount path and use the other 40%, since that space was never used. I cannot send screenshots, so I will try to explain. We have a primary site (Pri-1) and an alternate site (Alt-1). Commvault copies the data to the Pri-1 disk library, and then we DASH copy the data to the Alt-1 disk library, so we have two online disk libraries, one at each site. Each library has its own folder/mount path, and we have two global dedupe policies, one for the Pri-1 library and another for the Alt-1 library (this is how it was set up by the installer). My issue is that when I went to add the extra storage, I was able to add it as a new disk library with a new mount path successfully. The libraries show up in Commvault as Alt-2 and Pri-2 under the library. I then was able to add this second primary data site
Hello. I'm trying to configure Oracle Cloud Infrastructure Object Storage, but it's showing this error: "The signature does not match. Message: The required information to complete authentication was not provided or was incorrect." I have already entered all the information required for the configuration: Service Host, Tenancy OCID, User OCID, Key's Fingerprint, PEM Key Filename, and Bucket. What do I have to do to solve this problem?
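For what it's worth, OCI's "signature does not match" error generally means the request-signing inputs (tenancy OCID, user OCID, key fingerprint, private key) don't line up with the API key registered on the user. One way to validate the same credentials outside Commvault is to put them in an OCI CLI/SDK config file and run a simple call such as `oci os ns get`. All values below are placeholders:

```
; ~/.oci/config -- placeholder values, replace with your own
[DEFAULT]
user=ocid1.user.oc1..<placeholder>
fingerprint=<fingerprint of the uploaded API public key>
tenancy=ocid1.tenancy.oc1..<placeholder>
region=<your region, e.g. the one matching the Service Host>
key_file=~/.oci/oci_api_key.pem
```

If the CLI authenticates with these values but Commvault does not, compare them field by field with what was entered in the library configuration, including whether the PEM key has a passphrase.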
We currently use IBM V5000 arrays as our Commvault backup target to land our deduped backups. We are starting to review other options to see what other fast, cost-effective choices are out there. I prefer Fibre Channel connections, but I'm open to alternatives. Since Commvault is really the brain in our scenario, the storage array does not need many features, just good speed. What vendor storage arrays do you use? Are you happy with them?
We initiated a Move Mount Path operation in our Commvault environment, but it seems to be stuck somehow. There are no other jobs running at the moment for this media agent, and all the data seems to have already been copied. We see:
Estimated Total Data Size: 1.65 TB
Size of Data Copied: 1.65 TB
They are the same, and it has been like this for a day and some hours.
Hi community, does enabling the Ransomware Protection feature on a Windows MediaAgent make my disk library and backup copies immutable? Do we also need WORM enabled on primary or secondary copies, even after enabling this native CV feature, for foolproof ransomware protection? If yes, what is the use of the Ransomware Protection feature?
Regards,
Mohit
Hi, we are configuring a cloud library as the export destination for Disaster Recovery (DR) backups, so whenever we take a DR backup, the metadata is exported to our cloud library. However, the CommServe has no direct access to the cloud library; it must connect to the cloud storage through a proxy server, as explained here: https://documentation.commvault.com/v11/expert/9171_setting_up_proxy_server_to_access_cloud_storage_library.html
I am wondering which port we should use in step 8, because a random port number doesn't work. Do you have any idea?
Best regards
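While waiting for a definitive answer on the correct port, a small sketch for checking whether a candidate proxy port is reachable at all from the CommServe side. The host and port in the comment are assumptions; this only proves basic TCP connectivity, not that the Commvault tunnel is listening there:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical proxy host and port):
# print(port_reachable("cloud-proxy.example.com", 8403))
```

If the port is not reachable, the issue is firewalling or the proxy service itself rather than the value entered in the Commvault configuration step.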
Hi all! My company uses six MAs to create and store backups: one MA with separate storage for long-term retention off-site, and another for local backups at a branch office site. At the main site there are four MAs in two two-node grids: MA1 & MA2 form one grid and MA3 & MA4 another, sharing their libraries and DDBs. From the branch office, local backups are copied to the main site, and main-site backups are copied to the long-term site as DR backups. The MAs are physical at the main site and virtual elsewhere, and disk storage is used at all sites. We are now planning to replace our disk storage and the physical MAs at the main site, and of course it is a good opportunity to upgrade the OS on the MAs from Win2012R2 to Win2019. During the process, the library content should be moved from the old disk storage to the new one, and the DDBs from the old MAs to the new ones. Each MA stores 40-60 TB of backup data, and of course I would like to do this with minimum downtime. I have found descriptions about library mov
Hey good people at CV! We are getting hourly "Unusual performance drop detected in pruning" Event Viewer events. I noticed there was a hotfix (https://kb.commvault.com/article/77146), but 11.28.10 is the currently installed version. At first glance, and after reading a few threads here about the same issue before posting a repeat topic, I figured it might work itself out. Is there anything else that can be done? Just trying to be proactive. Here is the message, and there seems to be a single job record to be deleted when viewing the DDB Pruning Performance Anomaly report:
Unusual performance drop detected in pruning for following deduplication databases due to increase in (CommServe Job Records to be Deleted)
Thanks in advance!
I'm having problems with DDB backup jobs at my DR site. I changed the schedule from every 6 hours to once per day, but yesterday's backup is still running and it's now at the 22-hour point. I'd like to kill this job and let a fresh one start, but I understand that the DDB backup uses snapshots, and I'm afraid that if I kill it there won't be a proper snapshot cleanup. Is it OK to kill a DDB backup job that's been running this long?
Ken
Afternoon folks. I have an auxiliary copy that backs up to tape, and I deleted all of the existing jobs on the tape media in the hope of starting the backup chain from scratch. However, since deleting the backup jobs, if I go back to the storage policy, right-click the tape auxiliary copy, and view "media not copied", it is blank. I was expecting to see all backup jobs for the backup period I selected. Is it possible to "restart the schedule", so to speak, without deleting the auxiliary copy? TIA
When following the document on how to stop/start a HyperScale X appliance node (https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html), I get to the step to unmount the CDS vdisk. After getting the proper vdisk name and running
# umount /ws/hedvig/<vdiskname>
I get the message "device is busy". What's the proper way to remediate this and continue? Thanks,
G