Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 617 Topics
- 3,237 Replies
Hello, I have a matter that could require some help.

We have a primary copy on a disk library, a secondary copy on another disk library, and a third copy on an MCSS cloud library. All copies have dedup enabled. Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can't achieve acceptable performance. What we see is that the process generates excessive read operations on the source library.

Dedup block size is 128 KB on the disk libraries and 512 KB on MCSS. Commvault version is 11.24.

Any help would be appreciated. Regards, Jean-xavier
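One hedged reading of the excessive reads (an assumption, not confirmed Commvault behavior): because the target dedup block size (512 KB) differs from the source (128 KB), the aux copy cannot ship deduplicated blocks as-is; it has to rehydrate the data and re-deduplicate it, and each 512 KB target block is assembled from several 128 KB source blocks that may be scattered across the source library. A back-of-the-envelope sketch of the resulting read count, with a hypothetical copy size:

```python
# Rough sketch (assumption, not Commvault internals): estimate how many
# source-side reads a cross-block-size aux copy may generate.
SOURCE_BLOCK_KB = 128   # dedup block size on the disk libraries
TARGET_BLOCK_KB = 512   # dedup block size on the MCSS cloud library
DATA_TO_COPY_GB = 1024  # hypothetical amount of logical data to aux-copy

logical_kb = DATA_TO_COPY_GB * 1024 * 1024

# Each 512 KB target block is assembled from four 128 KB source blocks.
# Because deduplicated source blocks are scattered across the library,
# each of those reads can be a separate random I/O.
reads_per_target_block = TARGET_BLOCK_KB // SOURCE_BLOCK_KB
target_blocks = logical_kb // TARGET_BLOCK_KB
source_reads = target_blocks * reads_per_target_block

print(f"{target_blocks} target blocks -> up to {source_reads} random source reads")
```

If this reading is right, aligning the block sizes (or accepting the rehydration cost) would be the trade-off to investigate.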
So, I'm working on a project with Commvault and an HPE Apollo 4200 with 24x 12 TB NL-SAS drives.

What would be a best-practice approach when creating the array (and logical volume) in the server's RAID controller? I read some people saying that the max volume should be 4 TB per mount point; does this still make sense these days? If yes, that means I need to create 66 volumes of 4 TB each within the RAID controller. Are there any benefits to doing this versus a single volume of 264 TB?

Thanks
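For reference, the numbers in the question line up if you assume a single RAID6 group across all 24 drives with no hot spares (both assumptions, since the post doesn't say):

```python
# Back-of-the-envelope capacity math for the question above (assumptions:
# one RAID6 group spanning all 24 drives, no hot spares).
drives = 24
drive_tb = 12
raid6_parity_drives = 2

usable_tb = (drives - raid6_parity_drives) * drive_tb  # 264 TB usable
volumes_at_4tb = usable_tb // 4                        # 66 mount paths

print(usable_tb, volumes_at_4tb)  # prints: 264 66
```

So the "66 volumes" figure comes directly from dividing the 264 TB of RAID6-usable capacity into 4 TB mount points.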
I understand the DEVICE_LABEL file is used for pruning, so its absence could be an issue. I have a customer who has deleted the file off a volume.

Will this affect pruning, and would there be any other potential issues that could arise? Is there a way to recreate the file if it is required, without recreating the volume? The volume is configured as a WORM device for compliance.

Thanks so much.
Hi! We've recently moved media agents to new servers, and I'm now in the process of cleaning up. For 3 of the 4 media agents the report showed nothing, and I could delete the media agent without any issue. However, for the last media agent, it complains about "Proxy Server to Perform IntelliSnap Backup Operations" and lists a client name, a backupset name, and a subclient name.

This was "correct", because there was a subclient with that name configured for IntelliSnap (but not in use, and with no backups). So I deleted the subclient... but even so, several days later, when I try to delete the media agent, the report still complains about the same "issue". I could of course just ignore the warning and proceed with the deletion, but I'd much prefer for the report to show that there is no association before proceeding with the deletion.
Unusual performance drop detected in pruning for following deduplication databases: [GDSP_ALL_<Servername_8]: due to increase in (CommServe Job Records to be Deleted) - Check details in: http://backupserver:80/webconsole/reportsplus/reportViewer.jsp?idstring

If I check the report, it has grown from 0 to 12,399 CommServe job records to be deleted between January 9th and today, the 18th. However, this media agent spends half its life completely offline and shut down; it has been shut down since the 9th and was powered on again today. So I assume this is expected behavior, but because I have not seen this message before, I figured I'd check with the hivemind first :-)

Software version is 11.24.78.
How do I properly delete/decommission mount paths associated with old storage? DDBs still appear to be associated with the mount paths.
We have added new storage to Commvault, and set the old mount paths to "Disabled for Write" via the mount path "Allocation Policy" → "Disable mount path for new data" + "Prevent data block references for new backups". All mount paths that are disabled for write show no data on them via the "mount path" → "View Contents" option. We have waited several months for all the data to age off.

BUT... I see information in the forums/docs that data may still be on the storage, and there are references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our disabled-for-write mount paths have DDBs listed in them. So it appears Commvault is still using the storage in some way. I saw a post that indicated: "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount path."
Oracle archive log backups are backed up to a storage policy. The primary copy's retention is configured for 30 days with 0 cycles. The Data Retention Forecast and Compliance Report shows the archive log is required by the RMAN full backup, which has been aux copied from the primary copy (disk library) to a secondary copy (tape library). The secondary copy retention is 7 years.

I would like the log backup on disk to follow the primary copy retention of 30 days and expire once 30 days have been met; however, it is not expiring, since it is required by the full backup on tape with the longer retention. I had a similar issue with SQL backups, where the log backups were not expiring because they were required for the SQL full that had been aux copied to tape with the long 7-year retention. The setting to correct that behavior is under Storage → Media Management Configuration → Data Aging → disabling <Honor SQL Chaining for Full jobs on Selective copy>. Does anyone know if
Hello, I'm planning to add a new partition to an existing DDB. I went through the documentation and have a doubt about the passage below.

Will the additional 0.5 MB of data be added only to the magnetic disk that holds the DDB, or will it also be added to the disk library mount paths?

"After running the Backup1, you add Partition2 and run Backup2 of the same 1 MB of data. After the second backup, 4 signatures of 128 KB size will be added to Partition2 (even though the same signatures exist in the original store) and for the other 4 signatures only the reference will be added in the original store (as the signatures already exist). The magnetic disk will have 1.5 MB of data (1 MB from the first Backup + 500 KB from Partition2 from Backup2). On running data aging, if Backup1 is aged, then from the first partition the first 4 signatures will be aged and also 500 KB of data will be pruned from the magnetic disk."

https://documentation.commvault.com/2022e/expert/12455_configuring_additional_partitions_for_ded
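My hedged reading of that example (not an official answer): the extra 0.5 MB is data blocks written again to the disk library mount path, because signatures hashing to the new partition are treated as new even when the blocks already exist; the DDB disk itself only grows by the (small) signature entries. The accounting in the quoted example works out like this:

```python
# Worked version of the documentation example quoted above; the numbers
# are straight from that example, not from a live system.
BLOCK_KB = 128
backup_mb = 1.0  # each backup writes the same 1 MB of data

signatures = int(backup_mb * 1024 / BLOCK_KB)  # 8 signatures of 128 KB each

# Backup1, partition 1: all 8 signatures are new -> 1 MB lands on the
# disk library.
disk_mb = backup_mb

# Backup2 after adding partition 2: signatures now hash across both
# partitions, so 4 land on the new partition as "new" entries (500 KB of
# data written again to the library), while the other 4 only add
# references in the original store.
new_on_partition2 = signatures // 2              # 4 signatures
disk_mb += new_on_partition2 * BLOCK_KB / 1024   # +0.5 MB -> 1.5 MB total

# Aging Backup1 prunes partition 1's 4 now-unreferenced signatures and
# 500 KB of data blocks from the library.
disk_mb_after_aging = disk_mb - 0.5

print(disk_mb, disk_mb_after_aging)
```

So under this reading, the 1.5 MB / 1.0 MB figures in the example refer to the disk library, and the DDB volume only holds the signature metadata.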
Hello community, I have a customer who is backing up to a disk library, then aux copying to an Azure cloud library.

However, when the customer looks at their Azure costs for the last 5 days, they are spending over 100 dollars a day on iterative read operations. I'm trying to figure out what is reading so much from the cloud library. The total written size for a day is 200 GB; why 41 million read requests? DDB verification is disabled for this library.

Is there anything else I should look into? Thanks in advance.
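The cost figure is at least internally consistent: Azure bills blob reads per 10,000 operations, and at a plausible (assumed, illustrative) rate the 41 million daily reads land in the ballpark of the reported spend. A quick sanity check, with the price as an explicit assumption to replace with the real rate card:

```python
# Sanity check of the numbers in the post. The price is an assumption --
# Azure bills reads per 10,000 operations and the rate varies by access
# tier and region; substitute your actual rate card value.
reads_per_day = 41_000_000
price_per_10k_reads = 0.02  # hypothetical $/10k read ops

cost_per_day = reads_per_day / 10_000 * price_per_10k_reads
print(f"${cost_per_day:.2f} per day")  # roughly $82/day at this assumed rate
```

Which suggests the question to chase is not the pricing but the source of the 41 million reads themselves (e.g. some process enumerating or re-reading chunks in the cloud library).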
Is it possible that high Q&I times can cause the following error message: Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]
Hello, I have Oracle backups scheduled through RMAN which occasionally fail with the following error: "Error Code: [82:177] Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]"

Steps I have taken to troubleshoot the issue:
- Verified the dedupe engine is active and running
- Created a one-way persistent tunnel from the client to the CommServe and media agents, as suggested by another thread about the same issue: "ERROR CODE [82:172]: Could not connect to the DeDuplication Database process for Store Id [xx]. Source: xxxx, Process: backint_oracle | Community (commvault.com)"

Our Q&I times across the board for our media agents are quite high. For this particular engine the time is 6,416 microseconds (307%). I believe the DDB is running on regular disk, not SSD. Is it possible that this could be causing the above error? If not, I'm not sure what else the reason could be.
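Reading the quoted figure (on the assumption that the percentage is measured against a healthy Q&I threshold), the engine is running at roughly three times its target lookup latency, which is consistent with a DDB on spinning disk timing out under load:

```python
# Rough interpretation of the Q&I figure quoted above. Assumption: the
# reported percentage is relative to a healthy-time threshold.
qi_microseconds = 6416
percent_of_threshold = 307

implied_threshold = qi_microseconds / (percent_of_threshold / 100)
print(f"implied healthy threshold ~{implied_threshold:.0f} us")
```

That implied ~2,000 µs target matches the common guidance that the DDB belongs on SSD/NVMe; sustained lookups at 3x the threshold would plausibly surface as the [82:177] timeout.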
Hi, and happy new year to all of you!

I would like to know if some of you have already implemented LTO9 drives / tape libraries, and would love to get your feedback about using them with Commvault. My experience with LTO9 media, using dual-drive tape libraries, is quite bad.

The media calibration / optimization / characterization phase that any new LTO9 media has to go through is a pain on my side. On the first mount of a media -- let me reword it in my 'old guy' words -- it has to be somehow formatted before it can be used by your favourite backup software. Below is a link to Quantum's FAQ about this: https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf

Short calculation: 50 brand-new LTO9 tapes may require up to 2 hours each of 'calibration' before they can be used. So that equals 100 hours of 'calibration' before you could use the full 50-tape pool.. 😱

My first issue was that I had to adjust all the mount timeouts in that LT
Because we are changing the backup storage infrastructure, I only want to change the storage policy. Our former strategy was to have a spool copy and two aux copies -- one to NAS and one to LTO. This was done because we had a performance/backup-time issue, which we solved with SSDs directly attached to the backup server.

Now I want to set a retention on the primary copy and delete the no-longer-needed aux copy to NAS. That copy also points to the same storage (the local SSDs) and wastes the needed space on it. If I try to change the retention on the primary storage policy, I get the error shown in the screenshot and the changes are discarded.

I don't know where to find these settings (archiver retention rule and "OnePass"). I have never set an archiver retention rule; we are not using archiving.
There is a related topic regarding my question, but I don't feel it was adequately answered. Essentially, there is a conflict in documentation regarding versioning and how object locking works.

Firstly, the public cloud architecture guide for AWS states that bucket and object versioning is not supported in Commvault (p. 85 of AWS Cloud Architecture Guide - Feature Release 11.25 (commvault.com)). However, when enabling object locking in AWS, versioning is enabled on the bucket and objects by default -- it is by design. The 'Enabling S3 Object Lock' section (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html) states that versioning is enabled.

So if Commvault doesn't support versioning (and it will result in orphaned objects), how can Commvault support object locking, which also enables versioning? We have also tested this in the lab and can confirm that when object locking is enabled, we do not see any data prune from the cloud library. We have run the workflow, store
I have the following problem. Alert: Aux copy job Failed

- Type: Job Management - Auxiliary Copy
- Detected Criteria: Job Failed
- Is escalated:
- Detected Time: Wed Dec 28 23:42:51 2022
- CommCell: CommServe
- User: Administrator
- Job ID: 63139
- Status: Failed
- Storage Policy Name: CommServeDR
- Copy Name: Secondary
- Start Time: Wed Dec 28 23:00:11 2022
- Scheduled Time: Wed Dec 28 23:00:08 2022
- End Time: Wed Dec 28 23:42:51 2022
- Error Code: [13:138] [40:91] [40:65]
- Failure Reason: Error occurred while processing chunk in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job . Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M
Hello community! I'm trying to find a way to send only weekly and monthly backups to secondary storage (Azure Blob Storage) from a primary storage policy that has only daily backups (7-day retention).

I see from the aux copy wizard that the only way is to create a selective copy with only full backup jobs -- so, basically, to copy only synthetic fulls once a week in my case. But I can't figure out how to also copy the monthly jobs!

Please share your feedback! Best regards, Nikos
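If I recall correctly, selective copy rules can pick fulls on a time basis (e.g. the first full of each week or month), so one approach worth checking is two selective copies, one weekly and one monthly. Just to make the intended selection concrete, here is an illustration (not Commvault code) of which jobs such rules would pick from a run of daily synthetic fulls:

```python
# Illustration only: the selection a "weekly" plus "monthly" selective
# rule would effectively apply to a run of daily synthetic fulls.
from datetime import date, timedelta
from itertools import groupby

# One synthetic full per day for January 2023 (hypothetical job dates).
fulls = [date(2023, 1, 1) + timedelta(days=i) for i in range(31)]

# First full of each ISO week (a "weekly" selective rule).
weekly = [min(g) for _, g in groupby(fulls, key=lambda d: d.isocalendar()[:2])]

# First full of each month (a "monthly" selective rule).
monthly = [min(g) for _, g in groupby(fulls, key=lambda d: (d.year, d.month))]

# Union of both rules = the jobs that actually get aux copied.
to_copy = sorted(set(weekly) | set(monthly))
print(len(weekly), len(monthly), len(to_copy))
```

The monthly picks are a subset of the weekly picks here only by coincidence of the dates; in general the two rules select different jobs, which is why a single weekly selective copy cannot cover the monthly requirement.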
My customer has configured a dedup disk library. The shared mount path is a directory in a filesystem sitting on a Ceph layer. One night he found a lot of jobs waiting because the library was open for jobs but did not write data to disk. There was an unknown error on the Ceph layer, and a reboot of the nodes solved the issue. The point is that the library stayed online and did not go offline during the "Ceph error". The customer reached the maximum limit of jobs, and new, partly important backup jobs could not start.

Now we are analyzing what happened. My question here: is Ceph supported as the target of a disk library? In BOL I found entries about Kubernetes, S3 connections, etc., but nothing about using it as a mount path in a filesystem. I know that Ceph supports block, file, and object level.

Thanks in advance for your answers, Joerg
I see this all the time and I have never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies the job shows a "Data Transferred over Network" number, which I think should be 0, but there is usually a number there. For example, this aux job (still running) has these numbers:

- Total Data Processed: 3.23 TB
- Data Transferred Over Network: 107.95 GB
- Total Data to Process: 4.7 TB
How and where can I identify the traditional DDB version on the media agent? In the SIDBEngine.log on the media agent we find this:

6292 126c 12/16 13:00:19 ### 1-0-1-0 LoadConfig 313 Use MemDB [false]

But we can't see a version number there. Does a version number exist, and where would it be?
I'm upgrading my DDBs to V5 and trying to avoid fully stopping backups while I squeeze in the upgrade. Is it possible to check "Temporarily Disable Deduplication" under Dedupe Engines > [DDB name] > Properties > Deduplication > Advanced tab, and then perform the upgrade? The DDB needs to come offline for compaction. Thanks in advance, Joel Bates
I have a new Dell PowerEdge R750 sitting in a box, with an SSD RAID BOSS and 12x 18 TB NL-SAS on an H755 RAID controller. It's going to run Windows Server 2019 or 2022 and will be a backup server holding both Commvault and Veeam data. The server will boot off the SSD BOSS, and the NL-SAS is purely backup storage. At the PERC level I expect I'll go with RAID6.

At the Windows level, I'm not clear whether there are I/O benefits to splitting it into lots of small (5-10 TB) volumes (which would all sit on the same physical RAID) or whether to keep it simple and have one or two large volumes, segmenting data using folders. The current disk library sits on a pair of older Windows PowerEdges with around 12x 4 TB mount paths which need migrating to the new hardware. Unless there's a real benefit, I'd sooner have a single large, flexible Windows volume with 12 subfolders, one per mount path, than be messing around creating 12 volumes. I presume there is no way to consolidate down the number of mount p
Hi, a quick question on the ISO 2.3 for reference architecture deployment, dvd_10072022_113351.iso. I don't know if I'm in the right place for this question, but: which FR is this ISO based on? FR 11.24?

My customer is currently at 11.24.60; should I upgrade the environment to 2022E before starting to deploy the HSX cluster? I've seen a lot of new features for monitoring and securing nodes!

Thank you,
Hi guys, what is better for streaming, agent-based file system backups: a full backup or a synthetic full? I get the point of the synthetic full being better in that it does not use the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?