Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 618 Topics
- 3,243 Replies
Hi All. I have a DDB on a Linux Media Agent running in Azure. The “CommServe Job Records to be Deleted” count is very high and reporting as Critical in the Command Center Health Report. I have confirmed that the Storage Policies associated with this DDB are enabled for Data Aging. Physical pruning is also enabled on the DDB.

When running Data Aging against this specific DDB/Copy, there is no entry in the MediaManagerPrune log on the CommServe for this DDB ID; all the other DDBs are listed. There is also no SIDBPhysicalDeletes log file on the MA. I have checked the jobs and no jobs are retained past the retention period.

Any idea what would cause the records to remain on this DDB? Let me know if you require any additional information that could assist. Thank you. Ignes
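One way to sanity-check whether the CommServe is issuing prune requests for a given store is to scan the MediaManagerPrune log for its store ID. A minimal Python sketch — the `SIDBStoreId[...]` token format below is an assumption for illustration, so match the pattern against the real log syntax before relying on it:

```python
import re

def stores_with_prune_activity(log_lines):
    """Collect SIDB store IDs mentioned in pruning log lines.

    Assumption: store IDs appear as 'SIDBStoreId[<number>]' in the log.
    Adjust the regex to the actual MediaManagerPrune.log format.
    """
    ids = set()
    for line in log_lines:
        m = re.search(r"SIDBStoreId\[(\d+)\]", line)
        if m:
            ids.add(int(m.group(1)))
    return ids

# Fabricated sample lines to show the idea; a store ID absent from the
# result set is one the CommServe never requested pruning for:
sample = [
    "1234 5678 10/14 02:00:01 Pruning request sent for SIDBStoreId[17]",
    "1234 5678 10/14 02:00:02 Pruning request sent for SIDBStoreId[23]",
]
print(stores_with_prune_activity(sample))
```

Comparing the resulting set against the full list of store IDs quickly isolates the one DDB that is never showing up.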
I have a media agent installed on SPARC with the SunOS 5.11 operating system, and it does not have the DDB MediaAgent role assigned, so I cannot create a local deduplication database partition. Is there a way to assign the DDB MediaAgent role to this media agent?
Someone on my team wanted to try adding a new Tape Library to the environment using Command Center. I had never done this in Command Center before, so I watched the user go through the steps. We found that it created the library and the storage pools, but we could not find any way to create barcode patterns or to create another scratch pool from Command Center. We are an MSP and this is an essential step in being able to use the library.

We then tried to create the barcode patterns and other scratch pools from the CommCell Console. We created the entities but then found we could not associate them with the pool/plan. It appears that the option to change the scratch pool is greyed out in this case. Is it expected that you cannot edit the scratch pool when the library/storage policy/plan is created from Command Center? Or are we missing something here?

Furthermore, I was expecting a much more user-friendly approach to adding tape to the environment. For example, the user had to s
Hi, I came into work and noticed dozens of jobs in a waiting state because the mount path does not have enough free space. I am aware that we need to add more storage, and we are going to, but in the interim I tried lowering the reserve space from 6 TB to 2 TB so that the jobs can finish and I can also see what can be cleared. It's not letting me change it; it will only go down to 5960 GB. I currently have a ticket open with CV (221017-401). Is there a way to fix this?
Hi, please help me find where the problem is and how I can solve it. Problem:

Error Code: [62:2855] Description: Error occurred in Disk Media, Path [Test_cat\UE3QA4_10.14.2022_15.56\CV_MAGNETIC\V_1] [-1451 OSCLT_ERR_MAXIMUM_DEVICE_LOCKS]. For more help, please call your vendor's support hotline. Source: CDBL-CS-MA, Process: cv
I am creating a partitioned DDB with two media agents. Which interface of the media agent should I add? I have a dedicated NIC available. Do I need to add the IP address of the media agent? What happens if I leave it at the default? I have implemented this in the past but I cannot remember this part.
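Before registering the partition, it can be worth confirming that the media agent's hostname actually resolves to the dedicated NIC's address; if it resolves to the default interface, partition traffic will ride that instead. A minimal sketch — the hostname and IP in the example are placeholders:

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True if `hostname` resolves (IPv4) to `expected_ip`.

    Useful sanity check before adding a DDB partition by hostname:
    the name should resolve to the dedicated NIC's address.
    """
    return socket.gethostbyname(hostname) == expected_ip

# Swap in your MA hostname and dedicated-NIC IP (placeholders here):
print(resolves_to("localhost", "127.0.0.1"))
```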
Hello all, I’m trying to configure my OCI into Commvault to test this tool (I’m using a trial licence) but I’m dealing with some errors as shown below. What certificate is this? How can I install it? And where? Then I used the CloudTestTool and it shows this. Look at the log file:

4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
We have a tape library with two drives. Is it possible to have two secondary copies aimed at this same tape library and to start two auxiliary copies at the same time? My goal is to end up with two tapes holding the same data, so that one of them can be kept at a safe offsite location.
Hello Experts, recently our customers have wanted to apply the WORM function to VTL storage in order to respond to the ransomware issue. I searched BOL and the Commvault community, but I could not find a detailed guide on how to configure and operate it, apart from the WORM media configuration page: https://documentation.commvault.com/2022e/expert/10496_worm_media_configuration.html

I hope to get detailed guidance for implementing the WORM function in a VTL or Tape Library. For example, once WORM media is fully used, it is moved automatically to the Retired Media pool: https://documentation.commvault.com/2022e/expert/10493_worm_media.html

Is this media then reusable? If so, through what procedure can it be reused? Regards, Kim KK
Hello CV community! I see that from 11.24 you can add snapshot copies to server plans: https://documentation.commvault.com/v11/essential/139040_new_features_for_snapshot_management_in_1124.html

I'm not sure whether this snap copy is supported only with specific types of storage? Does anyone actually use it? Thanks for your feedback, Nikos
Hello all, is there a possibility to move the saved data from one library to another? We have two full backups for which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. However, I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data. Kind Regards, Thomas
I have installed a Linux MA with the RHEL 8 OS and attached a 2.9 TB NVMe disk, formatted using LVM and divided equally into two partitions (1.4 TB each). When trying to add a storage pool and specify the DDB path, I get an error (“The path doesn't have sufficient space to perform a DDB backup”), although no data has been written yet to the NVMe partition.
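Independently of Commvault, it is easy to confirm how much free space the OS actually reports on the DDB path; if the partition genuinely shows ~1.4 TB free, the error likely relates to Commvault's own headroom check for DDB backups rather than to real usage. A minimal sketch — the 500 GB threshold is only a placeholder, not Commvault's documented requirement:

```python
import shutil

def has_free_space(path, required_bytes):
    """Return True if the filesystem holding `path` reports at least
    `required_bytes` free (the same numbers `df` shows)."""
    return shutil.disk_usage(path).free >= required_bytes

# Placeholder threshold -- substitute the headroom your version of
# Commvault actually requires for a DDB backup:
required = 500 * 1024**3
print(has_free_space(".", required))
```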
Hello, I’ve got some older data on one mount path and want to move it to a different host where new storage has already been configured and is up and running. Is there a way to somehow merge the data from the old MP into the new one? I want to move data from Host1 “D:\MP1” to Host2 “D:\New MP” to keep everything in a single place. Is a simple move enough?
Hi, I’m working on an issue with DDB Verification in a HyperScale X environment. I talked to someone at Commvault Support who explained that planned DDB verifications were removed from the best practices for HyperScale X because they could impact Space Reclamation jobs as well as performance. However, our customer needs to be able to produce reports to document and prove compliance with regard to ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on Admin jobs.

However, they are noticing that DDB verifications are taking a very long time and aren't actually completing, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how we could solve this issue and actually get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions. Jeremy
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL and I need to decom the old one. Is there a way, with CommVault down, to uninstall the medium changer and drives and install the new medium changer and have CommVault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
The signature does not match. Message: The required information to complete authentication was not provided or was incorrect.
Hello. I’m trying to configure an Oracle Cloud Infrastructure Object Storage library, but it’s showing this error. I have already entered all the information required to configure it: Service Host, Tenancy OCID, User OCID, Key’s Fingerprint, PEM Key Filename and Bucket. What do I have to do to solve this problem?
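One of these fields that is easy to get subtly wrong is the key fingerprint: OCI API-key fingerprints are the MD5 of the public key, written as 16 colon-separated lowercase hex pairs. A minimal format check — a sketch only, not a substitute for verifying the value against the OCI console:

```python
import re

# 16 colon-separated lowercase hex byte pairs, e.g. "12:34:...:f0"
FINGERPRINT_RE = re.compile(r"^([0-9a-f]{2}:){15}[0-9a-f]{2}$")

def looks_like_oci_fingerprint(fp):
    """Return True if `fp` matches the OCI API-key fingerprint shape.

    A malformed fingerprint is one common cause of 'The required
    information to complete authentication was not provided or was
    incorrect', but a well-formed value can still be the wrong key.
    """
    return bool(FINGERPRINT_RE.match(fp))

print(looks_like_oci_fingerprint("12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0"))
```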
Hey good people at CV! We are getting hourly “Unusual performance drop detected in pruning” Event Viewer events. I noticed there was a hotfix here: https://kb.commvault.com/article/77146, however 11.28.10 is the currently installed version. At first glance, and after reading a few threads here concerning the same issue before posting a repeat topic, I figured it might work itself out. Is there anything else that can be done? Just trying to be proactive.

Here is the message, and there seems to be a single Job Record to be Deleted when viewing the DDB Pruning Performance Anomaly Report:

Unusual performance drop detected in pruning for following deduplication databases due to increase in (CommServe Job Records to be Deleted)

Thanks in advance!
We have CV 11.24.56 and are using Dedupe. We already have this Disk Library set up and were using 60% of the library, but now we want to add another mount path and use the other 40%, because no one ever used that storage space! I cannot send screenshots, so I will try to explain.

We have a Primary site (Pri-1) and an Alternate site (Alt-1). Commvault copies the data to the Pri-1 Disk Library, and then we do a DASH Copy of the data to the Alt-1 Disk Library, so we have two online disk libraries, one at each site. The libraries each have their own Folder or Mount Path, and we have Global Dedupe policies: one set up for the Pri-1 Library and another Global Dedupe set up for the Alt-1 Library. (This is how it was set up by the installer.)

My issue is that when I went to add the extra storage, I was able to add it as a new Disk Library with a new Mount Path successfully. The libraries show up in Commvault as Alt-2 and Pri-2 under the library. I then was able to add this second Primary Data site
We initiated a Move Mount Path operation in our Commvault environment, but it seems to be stuck somehow. There were no other jobs running at that moment for this media agent. Also, we can see that all the data seems to have already been copied: the Estimated Total Data Size (1.65 TB) and the Size of Data Copied (1.65 TB) are the same. It has been like this for a day and some hours.
Hi Team, we have an MA which has four mount points, all coming from a backend SAN. One of the four mount points was showing an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and made the disk read/write. I created a test folder on the disk, which worked fine, but when I run the storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
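A scripted version of the "create a test folder" check can be run against each mount path to distinguish a path that is missing from one that exists yet rejects writes (e.g. flipped back to read-only). A minimal sketch — the paths passed in are placeholders:

```python
import os
import tempfile

def mount_path_writable(path):
    """Try to create and remove a scratch file under `path`.

    Returns True on success, False if the path is missing or
    refuses writes (permission denied, read-only filesystem, ...).
    """
    if not os.path.isdir(path):
        return False
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return True
    except OSError:
        return False

# Replace with the actual mount paths, e.g. r"C:\CommVaulttLibrary\503":
print(mount_path_writable(tempfile.gettempdir()))
```

Running this for all four mount points right after a reboot can catch a disk that silently reverts to read-only before the next validation job fails.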