Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 619 Topics
- 3,243 Replies
Hi, I came into work and noticed dozens of jobs in a waiting state because the mount path did not have enough free space. I know we need to add more storage, and we are going to, but in the interim I tried lowering the reserve space from 6 TB to 2 TB so the jobs can finish while I see what can be cleared. It won't let me change it; it will only go down to 5960 GB. I currently have a ticket open with CV (221017-401). Is there a way to fix this?
I am creating a partitioned DDB with two MediaAgents. Which interface of the MediaAgent should I add? I have a dedicated NIC available. Do I need to add the IP address of the MediaAgent? What happens if I leave it at the default? I have implemented this in the past, but I cannot remember this part.
Hello all, I’m trying to configure OCI in Commvault to test the tool (I’m using a trial licence), but I’m dealing with some errors, shown below. What certificate is this? How can I install it, and where? When I use the CloudTestTool, the log file shows:

4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
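Authentication failures like the ones above are often a credential mismatch rather than a certificate problem. OCI API key fingerprints are the colon-separated MD5 digest of the public key in DER form, so one quick sanity check is to recompute the fingerprint from the PEM public key and compare it with the value entered in Commvault. A minimal stdlib sketch (the helper name and workflow are illustrative, not part of any Commvault tooling):

```python
import base64
import hashlib
import re

def oci_fingerprint(pem_public_key: str) -> str:
    """Compute an OCI-style API key fingerprint from a PEM public key.

    OCI fingerprints are the MD5 of the DER-encoded public key,
    rendered as colon-separated hex byte pairs.
    """
    # Strip the PEM armor and all whitespace, leaving the base64 body
    body = re.sub(r"-----(BEGIN|END) PUBLIC KEY-----|\s", "", pem_public_key)
    der = base64.b64decode(body)  # base64 PEM body decodes to DER bytes
    digest = hashlib.md5(der).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

If the value this returns for your uploaded public key differs from the fingerprint pasted into the Commvault configuration, the "required information ... was not provided or was incorrect" error is expected.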
I’m getting the following alert email roughly once an hour:

> Anomaly Notification
> The system detected an unusual drop in the pruning performance for the following databases in commcell <CommServer_Host>
> Deduplication Database — Reason
> HyperScale_Primary — increase in (CommServe Job Records to be Deleted)
> CV Cloud Storage — increase in (CommServe Job Records to be Deleted)
> Please click here for more details.

When I follow the “click here” link, I see:

> 1 CommServe Job Records to be Deleted

This has been going on for a couple of weeks. I don’t think an annoying email is a big enough deal to open a ticket for, but I’d still like to clean this up. Does anyone know what the problem is and how to fix it? I’m sorry to say the Commvault help pages for this are not very useful. Ken
Hello, we are seeing a very large random read load on our Hitachi G350 backup storage arrays with NL-SAS disks. These random reads are completely consuming our backup storage performance. We have two G350s on campus and a third at a remote site; Commvault runs copy jobs between these three G350s. The DDB is on local NVMe in the MediaAgent, as is the index cache disk. We ran several analyses, and Live Optics showed us a daily change rate of 334.9%, which is mainly due to the Windows File System policy, for which we see a 2485.1% daily change rate. Does anyone know how the random read load could be reduced? Our disk backup is otherwise unusable. What steps could we take to optimize the Commvault configuration? Thanks for your help!
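To put those Live Optics figures in perspective: a daily change rate far above 100% means the file-system policy rewrites its entire front-end size many times per day, which fragments deduplicated data and tends to turn copy jobs into scattered random reads. A back-of-the-envelope calculation (the 10 TB front-end size below is a made-up figure for illustration, not from the post):

```python
def daily_changed_tb(front_end_tb: float, change_rate_pct: float) -> float:
    """Data rewritten per day implied by a Live Optics daily change rate."""
    return front_end_tb * change_rate_pct / 100.0

# Hypothetical 10 TB front end at the observed 2485.1% daily change rate:
print(daily_changed_tb(10.0, 2485.1))  # ~248.5 TB rewritten per day
```

At that scale, the copy jobs between the three G350s have to read back far more than the front-end size every day, which matches the observed random-read saturation.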
Hello team, I am getting the error below on the majority of running jobs. I have checked the storage end and also the MediaAgents (the LUNs are attached to the MediaAgent); everything looks good. What could be the cause of the error?

Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following:
1. Library and drive is functioning correctly.
2. Library and Drive management services are running.
3. All other MediaAgent services are running.
4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console.
5. Cleaning media in Assigned Media Group.
Source: hq-vm-commserv, Process: MediaManager
We have a tape library with two drives. Is it possible to have two secondary copies pointed at this same tape library, and to start two auxiliary copies at the same time? My goal is to end up with two tapes containing the same data, so that one of them can be kept at a safe offsite location.
Hello experts, recently customers have wanted to apply WORM to VTL storage in response to ransomware concerns. I searched BOL and the Commvault community, but apart from the WORM media configuration page I could not find a detailed guide on how to configure and operate it:
https://documentation.commvault.com/2022e/expert/10496_worm_media_configuration.html
I am hoping for detailed guidance on implementing WORM in a VTL or tape library. For example, once WORM media is fully used, it is moved automatically to the Retired Media pool:
https://documentation.commvault.com/2022e/expert/10493_worm_media.html
Is this media then reusable? If so, through what procedure can it be reused?
Regards, Kim KK
Hey good people at CV! We are getting hourly “Unusual performance drop detected in pruning” Event Viewer events. I noticed there is a hotfix (https://kb.commvault.com/article/77146), but 11.28.10 is already the installed version. At first glance, and after reading a few threads here on the same issue before posting a repeat topic, I figured it might work itself out. Is there anything else that can be done? Just trying to be proactive. The message follows, and there seems to be a single Job Record to be Deleted when viewing the DDB Pruning Performance Anomaly Report.

Unusual performance drop detected in pruning for following deduplication databases due to increase in (CommServe Job Records to be Deleted)

Thanks in advance!
Hello all, is there a way to move saved data from one library to another? We have two full backups with retention set until next year; the rest of the backups continue with the normal retention of 30 days and 4 cycles. I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data.
Kind regards, Thomas
Hello CV community! I see that from 11.24 you can add snapshot copies to server plans:
https://documentation.commvault.com/v11/essential/139040_new_features_for_snapshot_management_in_1124.html
I’m not sure: is this snap copy supported only with specific types of storage? Does anyone actually use it? I’d appreciate your feedback.
Nikos
The other day I noticed a Critical item on the health dashboard, under the DDB backup section, stating that one of the stores isn’t protected. The DDB store itself was auto-created a day earlier, while keeping the old one active. I have noticed this across various environments: multiple DDB stores with different IDs get created, and all of them are actively used. I couldn’t find any documentation that explains this, so it would be helpful if someone could shed some light here. Thanks. Below is the case I was referring to, where DDB store 72 was auto-created; notice that for the FS and DB agent stores we have multiple IDs present.
I have installed a Linux MA on RHEL 8 and attached a 2.9 TB NVMe disk, formatted using LVM and divided equally into two partitions (1.4 TB each). When trying to add a storage pool and specify the DDB path, I get an error (“The path doesn’t have sufficient space to perform a DDB backup”), although no data has been written yet to the NVMe partition.
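One thing worth ruling out before blaming Commvault’s free-space check: if the LVM logical volume isn’t actually mounted at the path given as the DDB location, that path silently reports the free space of the (much smaller) root filesystem instead. A stdlib sketch to see what free space a given path really reports (the `/ddb` path below is an assumed example, not from the post):

```python
import shutil

def free_gib(path: str) -> float:
    """Free space, in GiB, of the filesystem backing `path`."""
    return shutil.disk_usage(path).free / 1024**3

# For a correctly mounted 1.4 TB partition, something like
# free_gib("/ddb") should report on the order of 1300 GiB;
# a few tens of GiB instead suggests the path is on the root filesystem.
```

If the reported figure matches the root volume rather than the NVMe partition, fixing the mount (and its /etc/fstab entry) should clear the error.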
Hi, I’m working on an issue with DDB verification in a HyperScale X environment. I talked to someone at Commvault Support who explained that planned DDB verifications were removed from the best practices for HyperScale X because they could impact space reclamation jobs as well as performance. However, our customer needs to produce reports to document and prove compliance with ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on admin jobs. However, they are noticing that DDB verifications take a very long time and don’t actually complete, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how to get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions. Jeremy
Hello, I have some older data on one mount path and want to move it to a different host where new storage has already been configured and is up and running. Is there a way to merge the data from the old mount path into the new one? I want to move data from Host1 “D:\MP1” to Host2 “D:\New MP” to keep everything in a single place. Is a simple move enough?
Hi, do you have a best practice for utilizing MediaAgents in a GRID configuration? For example, we have 4 MAs and, say, 3 subclients. One subclient always uses one MA (with multiple VSA proxies within the job) for VMware backups. So there is some sorting mechanism for the other two subclients to either pick up idle MAs to start the backup, or use the one already in use by subclient 1. But is there an option to use the full potential of a GRID solution, where more than one MA handles the backup of a single subclient? I hope it’s clear what I am trying to achieve.
Hi, we just started using object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the configuration and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? After migrating the data, I would like to delete the second data path and then remove the bucket from the object storage.
Regards, Thomas
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL, and I need to decommission the old one. Is there a way, with Commvault down, to uninstall the medium changer and drives, install the new medium changer, and have Commvault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Hi team, we have an MA with four mount points, all backed by a SAN. One of the four was showing an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and made the disk read/write. I created a test folder on the disk, which worked fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
We have CV 11.24.56 and use dedupe. We already had this disk library set up and were using 60% of it, but now we want to add another mount path and use the other 40%, since no one ever used that storage space. I cannot send screenshots, so I will try to explain. We have a primary site (Pri-1) and an alternate site (Alt-1). Commvault copies the data to the Pri-1 disk library, and then we DASH copy the data to the Alt-1 disk library, so we have two online disk libraries, one at each site. Each library has its own folder or mount path, and we have global dedupe policies, one for the Pri-1 library and another for the Alt-1 library (this is how it was set up by the installer). My issue is that when I went to add the extra storage, I was able to add it as a new disk library with a new mount path successfully. The libraries show up in Commvault as Alt-2 and Pri-2 under the library. I then was able to add this second Primary Data site
The signature does not match. Message: The required information to complete authentication was not provided or was incorrect.
Hello. I’m trying to configure Oracle Cloud Infrastructure Object Storage, but it’s showing an error. I have already entered all the information required for the configuration: Service Host, Tenancy OCID, User OCID, Key Fingerprint, PEM Key Filename, and Bucket. What do I have to do to solve this problem?
We currently use IBM V5000 arrays as our Commvault backup target to land our deduped backups. We are starting to review other options to see what other fast, cost-effective choices are out there. I prefer Fibre Channel connections, but I’m open to alternatives. Since Commvault is really the brains in our scenario, the storage array does not need many features, just good speed. What vendor storage arrays do you use? Are you happy with them?
We initiated a move mount path operation in our Commvault environment, but it seems to be stuck somehow. There were no other jobs running at that moment for this MediaAgent. We can also see that all the data seems to have already been copied: Estimated Total Data Size (1.65 TB) and Size of Data Copied (1.65 TB) are the same. It has been like this for a day and some hours.
Hi community, does enabling the Ransomware Protection feature on a Windows MediaAgent make my disk library and backup copies immutable? Do we also need WORM enabled on primary or secondary copies, even after enabling this native CV feature, for foolproof ransomware protection? If so, what is the use of the Ransomware Protection feature?
Regards, Mohit