Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
How to properly delete/decommission mount paths associated with old storage: DDBs still appear associated with the mount paths.
We have added new storage to Commvault and set the old mount paths to "Disabled for Write" via the mount path "Allocation Policy" → "Disable mount path for new data" + "Prevent data block references for new backups". All mount paths that are "disabled for write" show no data via the mount path → "View Contents" option. We have waited several months for all the data to age off. BUT… I see information on the forums/docs that data may still be on the storage, and there are references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our "disabled for write" mount paths have DDBs listed against them. So it appears Commvault is still using the storage in some way. I saw a post that indicated "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount path."
Hello, I have some older data on one mount path and want to move it to a different host where new storage has already been configured and is up and running. Is there a way to merge the data from the old MP into the new one? I want to move data from Host1 - "D:\MP1" to Host2 - "D:\New MP" to keep everything in a single place. Is a simple move enough?
I have the following problem. The alert we receive is:
Alert: Aux copy job Failed
Type: Job Management - Auxiliary Copy
Detected Criteria: Job Failed
Is escalated:
Detected Time: Wed Dec 28 23:42:51 2022
CommCell: CommServe
User: Administrator
Job ID: 63139
Status: Failed
Storage Policy Name: CommServeDR
Copy Name: Secondary
Start Time: Wed Dec 28 23:00:11 2022
Scheduled Time: Wed Dec 28 23:00:08 2022
End Time: Wed Dec 28 23:42:51 2022
Error Code: [13:138] [40:91] [40:65]
Failure Reason: Error occurred while processing chunk in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job . Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M
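The "Cannot impersonate user" portion of the failure usually points at the credentials stored for the UNC mount path, so one quick sanity check is to try the same share with the same account outside of Commvault. Below is a minimal sketch (Python on the MediaAgent); the account name and password are placeholders, and the path is the one from the alert:

```python
import os
import subprocess

# Placeholders: substitute the real UNC path and the account configured
# for the mount path in the library properties.
SHARE = r"\\192.168.99.51\RezervnaKopija"
USER = r"DOMAIN\backupsvc"   # hypothetical service account
PASSWORD = "********"

# Map the share with the same credentials Commvault would use.
result = subprocess.run(
    ["net", "use", SHARE, PASSWORD, f"/user:{USER}"],
    capture_output=True, text=True,
)
print(result.stdout, result.stderr)

if result.returncode == 0:
    # If impersonation works, listing the top-level chunk folders should succeed.
    print(os.listdir(SHARE)[:10])
    # Remove the temporary mapping afterwards.
    subprocess.run(["net", "use", SHARE, "/delete"], check=False)
else:
    print("Share access failed - fix the account/password before re-running the aux copy.")
```

If this fails with the same account, the fix is on the credential/NTFS side rather than in the aux copy itself.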
Hello, let me ask your opinion about the following situation. When I check the details of the System Created DDB Space Reclamation schedule policy, it looks "corrupted". As you can see in the attached image, the summary screen shows Type "Data Verification" while the dialog shows Type "Data Protection". Moreover, the Associations tab shows a list of clients instead of the DDB list. Is this normal? How can I get rid of this? Thank you in advance. Gaetano
Hi Team, we have an MA with four mount paths, all coming from a backend SAN. Out of the four, one mount path shows an error while writing data. I checked Disk Management; the disk was showing as read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk, which works fine, but when I run the storage validation it gives the error below. Has anyone had this kind of issue? 7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
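Since the validation error complains about the path not being found, it can help to confirm from the MediaAgent itself that the exact path in the log both exists and is writable before re-running the check. A small sketch (the path below is the one from the error message; adjust as needed):

```python
import os
import tempfile

# Path taken from the storage validation error message.
MOUNT_PATH = r"C:\CommVaulttLibrary\503"

if not os.path.isdir(MOUNT_PATH):
    # Matches the "could not find the path specified" symptom:
    # the folder is missing or the volume is not mounted.
    print(f"{MOUNT_PATH} does not exist or is not mounted")
else:
    try:
        # Write and remove a small test file to prove the volume is read/write.
        with tempfile.NamedTemporaryFile(dir=MOUNT_PATH, delete=True) as f:
            f.write(b"write test")
        print(f"{MOUNT_PATH} exists and is writable")
    except OSError as exc:
        print(f"{MOUNT_PATH} exists but writing failed: {exc}")
```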
Hello, we are seeing a very large random read load on our Hitachi G350 backup storage with NL-SAS disks. These random reads are completely consuming our backup storage performance. We have two G350s on campus and a third at a remote site; Commvault runs copy jobs between these three G350s. The DDB is on NVMe locally in the Media Agent, as is the index cache disk. We ran several analyses, and Live Optics showed us that the daily change rate is 334.9%, which is mainly due to the Windows File System policy, for which we see a 2485.1% daily change rate. Does anyone know how the random read load could be reduced, since our disk backup is otherwise unusable? What steps could we take to optimize the Commvault configuration? Screenshot attached. Thanks for your help!
Do you see these errors for your jobs? When we updated from 11.20.32 to 11.20.60 we started getting Cache-Database errors on various backup types: the FileSystem iDataAgent, NDMP backups and others. The exact error we get is: Error Code: [40:110] Description: Client-side deduplication enabled job found Cache-Database and Deduplication-Database are out of sync. I have a ticket open with support, but I am wondering if the issue is unique to us or if it is happening to other customers as well. Thank you.
I have read a couple of articles on Commvault Online that say defragmentation of the magnetic libraries is a good idea. Diskeeper, now DymaxIO, was listed as a certified product for online volumes. I am wondering if others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used, and that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks
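For reference, the native Windows tool can analyse a library volume without moving any blocks, which is a low-risk first step before deciding whether a full defrag pass is worth scheduling outside the backup window. A minimal sketch wrapping it from Python (the drive letter is a placeholder for the volume hosting the mount paths):

```python
import subprocess

# Placeholder: the volume hosting the disk library mount paths.
VOLUME = "E:"

# "defrag <volume> /A /V" only analyses fragmentation and prints a report;
# it does not defragment anything, so it is safe to run as a quick check.
report = subprocess.run(
    ["defrag", VOLUME, "/A", "/V"],
    capture_output=True, text=True,
)
print(report.stdout)

# An actual defragmentation pass ("defrag E: /V", or "/O" on newer Windows
# versions) should only be scheduled outside backup and aux copy hours.
```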
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I have been trying to figure this out for a few hours and I still have not found anything wrong in the Storage Policy, disk library, or Media Agent properties... The backup jobs are also fine. I counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. BUT how can the size on disk be 55.74 TB? Has anyone had the same situation?
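A gap like this often comes from data the copy's own job list does not account for: blocks awaiting pruning, sealed DDB baselines, or data from other copies and DDBs sharing the same mount paths. One way to ground the discussion is to measure what is physically on the mount path and compare it with the GUI figure. A rough sketch, with the mount path root as a placeholder:

```python
import os

# Placeholder: the root of one disk library mount path (e.g. its CV_MAGNETIC folder).
MOUNT_ROOT = r"E:\DiskLib\CV_MAGNETIC"

total = 0
for dirpath, _dirnames, filenames in os.walk(MOUNT_ROOT):
    for name in filenames:
        try:
            total += os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            pass  # skip files that disappear mid-walk (pruning in progress)

print(f"Physical data under {MOUNT_ROOT}: {total / 1024**4:.2f} TB")
```

If the physical total matches the 55.74 TB figure, the difference is real data still referenced or awaiting pruning rather than a reporting error.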
Hi, I would like to ask how we can perform recovery of data from our offsite copy. We have a CV failover setup in our environment, and below are the scenarios we need to cover. 1. From the CommCell Console in Site A, how do we recover our data from the offsite copy, which is on the secondary storage in Site B? 2. When the CommServe in Site A becomes unavailable and the CommServe server in Site B becomes active after a successful CV failover, how do we recover our data from the primary and secondary storage in Site A from the CommCell in Site B? 3. How do we recover our data from the secondary storage in Site B when the CommCell servers and storage arrays in Site A are also unavailable? Refer to the attached photo for the CommCell environment. Thanks a lot to anyone who can provide a response. rolan.
Hi there! I have VMware VMs backed up on-premises, with an auxiliary copy to an Azure cloud library. When I try to recover a VM whose data I assumed had already been transferred to Azure, I can see bandwidth on the firewall ports increase, so I think this scenario is reporting a local data recovery rather than one from Azure. I would like to recover the data that is already in Azure (transferred by the auxiliary copy). Could someone help me with these steps? Thanks!
Our Media Agent needs replacing. The replacement is probably going to be a Dell PowerEdge R5xx or R7xx with dual CPUs and 128 GB RAM, and the intention is to use M.2/NVMe for the OS/Commvault binaries and the DDB/indexes, but I would appreciate any guidance and best practice on the build and storage. We do a lot of synthetic fulls with small nightly incremental backups, and we aux copy about 70 TB to LTO-8 tape every week. The disk library we have right now is approximately 50 TB. Is there any best practice that would favour NAS/network storage for the disk library over filling the PowerEdge with large SAS disks? With local disk on Windows Server, is there a preference between NTFS and ReFS, and is there a best practice for mount path size? We are currently using 4-5 TB mount paths carved out as separate Windows volumes on a single underlying hardware RAID virtual disk. Given modern hardware performance, can anyone see a definite reason to do any more than buy a single PowerEdge for this, other than redundancy/availability?
Good day all. This one is a bit complicated, so I will try to keep it as brief as I can. The customer has a faulty storage device with the backup data on it. They have received a loan VAST unit, which uses its own deduplication engine in addition to the Commvault dedupe in place. We would like to turn Commvault deduplication off, as we are having errors on the DDB which may require us to seal it. My concerns are below, and I am hoping to get some clarity on them. The faulty storage and the VAST storage are in the same data centre (DC1). We want to turn the DDB off in this DC and do a Move Mount Path from the faulty storage to the VAST. At the same time, new backups are running to the VAST device on different mount paths. When this is all complete and the faulty storage is replaced (it will not have its own dedupe capabilities, so Commvault will handle that), we will move the VAST mount paths back to this storage. With Commvault DDB off, does non-deduplicated data get deduplicated during a "Move Mount Path" operation?
Hi Community, I want to know what strategy we can take for data protection of cloud workloads using Commvault. Do we need to deploy the CS in the cloud, or can we use an on-prem CS for backup of cloud as well as on-prem workloads? If so, how? Please share a sample reference architecture diagram for backup of cloud workloads if one exists. What type of backup library should be used for cloud workload backups?
Hello team, I noticed two sealed DDBs have a space warning under the DDB Disk Space Utilization section in the Web Console health report. We have long-term retention for mailbox backups that prevents the DDB store from being reclaimed. I am looking to see if there is a way to exclude sealed DDBs from the DDB disk space utilization / strike count (I have searched Commvault BOL but it does not return any results), or whether I will need to contact support to manually free up the sealed DDB space. Would I need to upload the CS DB for CV staging? Thank you.
Hi there, I have a customer that will use ExaGrid as backup storage with Commvault. ExaGrid states that they can add to Commvault deduplication to obtain a higher dedupe ratio (up to 20:1 for long-term retention data). I could not find any information on ExaGrid in BOL, and my understanding was that we do not use CV deduplication when a deduplicating appliance is the primary target. Has anyone implemented CV with ExaGrid? If so, are there any specifics, caveats or best practices? Thanks, Abdel
Hi, we just started using object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the config and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store the data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? Once the data is migrated, I would like to delete the second data path and then remove the bucket in the object storage. Regards, Thomas
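Whatever route is taken on the Commvault side (disabling the extra data path for new data, or moving the mount path), it can help to first measure how much Commvault data actually sits in each bucket. A hedged boto3 sketch for that; the endpoint URL and bucket names are placeholders for the two data paths in the cloud library:

```python
import boto3

# Placeholders: your S3-compatible endpoint and the two buckets configured
# as data paths in the Commvault cloud library.
s3 = boto3.client("s3", endpoint_url="https://s3.example.local")
BUCKETS = ["cv-bucket-1", "cv-bucket-2"]

for bucket in BUCKETS:
    size = 0
    count = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            size += obj["Size"]
            count += 1
    print(f"{bucket}: {count} objects, {size / 1024**4:.2f} TB")
```

Note this only reports sizes; moving Commvault chunk data between buckets by copying objects directly is not implied here, since Commvault tracks where its chunks live.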
Hello Commvault Community, I need help with a data pruning issue after sealing the DDB (every 3 months). Micro pruning is disabled, but jobs should not be held for that long anyway (screen1.png). Micro pruning is disabled due to the cost of operations on the cloud storage, and as I remember correctly, Commvault also recommended disabling micro pruning on the cloud library. We have generated a Forecast report, where some of the described jobs are indeed being kept because they have not been copied to the secondary copy, but that accounts for about 30 TB, while for example (screen2.png) it says the data is 126 TB within one of the sealed DDBs. There are several such databases, and a total of about 600 TB of data remains in the cloud which should probably be deleted. I am asking for help in solving the problem. Screenshots and the Forecast report are attached. Thanks & Regards, Kamil
Hello, the customer bought new MS SQL servers and is migrating to the two new servers.
Scenario today: 2 MS SQL servers
1. production
2. copy
New scenario: 2 MS SQL servers (new version of OS and database)
1. production
2. copy
I need to copy the backup job configuration, with the same settings as the current backups: retention, backup jobs, DDB, schedules. Does anyone have a procedure or best practices for doing this?
Is it possible to install the BoostFS library on a HyperScale X Media Agent, and if so, is it supported? I am looking to write an additional backup copy to storage outside of the appliance, and hope I can do it in a deduplicated manner rather than having to send a full copy to the Data Domain. The topic of BoostFS is discussed at https://documentation.commvault.com/commvault/v11/article?p=9404.htm#o132281, but there is nothing specific to HyperScale X and BoostFS at https://documentation.commvault.com/commvault/v11/article?p=128105.htm. Thanks
Hello community, we are trying to migrate from SAN storage to an S3 cloud library. Per suggestions, we followed these steps: 1. configured a new global dedupe storage policy using the new S3 bucket and MA; 2. configured new secondary copies in the existing storage policies pointing to the new S3 dedupe storage; 3. ran the aux copy. We have a huge amount of data and contacted Commvault support to determine when the aux copy will complete. Currently the aux copy has been running for more than 4 months. Support mentioned the points below: your current configuration allows the selection and prioritization of new backups over older data; you are also configured to copy all data to the cloud, and they mentioned we are not using dedupe for the aux copy. How can we make sure we have an optimal aux copy configuration? Please share your inputs. Thanks in advance, Spartan9
Hi! I am trying to add S3 Compatible Storage as a cloud library, but I get this error:
3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK
I already troubleshot it and was able to successfully add the storage as a cloud library using "nCloudServerCertificateNameCheck" mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate issued by: CN = R3, O = Let's Encrypt, C = US, with root cert: CN = ISRG Root X1, O = Internet Security Research Group, C = US. So I am wondering if, instead of ignoring all possible certificates, I could just add this one valid certificate to Commvault so it trusts this provider and allows me to configure the disk library. Is this possible? Also, not sure if it is related, since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh install.
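If the goal is to trust only the Let's Encrypt chain rather than disabling certificate checking, a useful first step is to confirm that the endpoint's chain actually validates against a CA bundle containing ISRG Root X1. A small sketch using Python's standard ssl module; the host name and bundle path are placeholders, and appending the root PEM to a copy of the MA's curl-ca-bundle.crt is the idea being tested here, not a documented Commvault procedure:

```python
import socket
import ssl

# Placeholders: the S3-compatible endpoint and a CA bundle file, e.g. a copy of
# the MA's curl-ca-bundle.crt with the ISRG Root X1 PEM appended to the end.
HOST = "s3.example-provider.com"
PORT = 443
CA_BUNDLE = r"C:\temp\curl-ca-bundle-with-isrg.crt"

# Build a context that trusts only the certificates in the given bundle.
context = ssl.create_default_context(cafile=CA_BUNDLE)

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        # A successful handshake means the chain validates against the bundle.
        print("Handshake OK, issuer:", dict(x[0] for x in cert["issuer"]))
```

If this validates but cvd still throws CURLcode 60, the bundle the Commvault services are actually reading (the 2016-dated curl-ca-bundle.crt) is the likely place the root needs to land.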