Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 619 Topics
- 3,243 Replies
Hi, I’m working on a project where a large HyperScale environment needs to be migrated to Azure. We’re looking at using either Azure storage or Metallic Recovery Reserve, or possibly a mix of the two. There’s short-term retention of 30 days as well as LTR of 5 to 7 years, so we will use Hot/Cool tiers (possibly combined storage tiers if Metallic is not used). HyperScale is set up using the default DDB block size of 128 KB. In an ideal world, one could just set up a secondary copy for the cloud storage (either Metallic or Azure), kick off the aux copy to the cloud, let it run to get the data over into Azure, and eventually promote it to the primary copy… however… as the cloud storage will then be used as a primary copy, we ideally want to configure it with a 512 KB DDB block size. Media agents will be set up in Azure, as they will eventually become the production MAs once things get cut over. Some key questions on the above: copying between storage policies with different DDB block sizes – how will this affect overall deduplication…
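A rough back-of-the-envelope sketch of what the block-size mismatch implies, assuming signatures hashed at 128 KB can never match signatures hashed at 512 KB (so the first aux copy to the new copy effectively writes a brand-new baseline). The numbers are purely illustrative:

```python
# Estimate signature counts per TB of baseline at each DDB block size.
# A 512 KB signature can never match a 128 KB one, so copying between
# these policies rehydrates and re-deduplicates everything once.
KB = 1024
GB = 1024**3

def signatures_per_tb(block_size_kb: int) -> int:
    """Number of unique-block signatures needed to index 1 TB of data."""
    return (1024 * GB) // (block_size_kb * KB)

for block_kb in (128, 512):
    print(f"{block_kb} KB blocks -> ~{signatures_per_tb(block_kb):,} signatures per TB")
# 128 KB -> ~8.4M signatures/TB; 512 KB -> ~2.1M signatures/TB,
# i.e. the 512 KB cloud DDB stays roughly 4x smaller per TB of baseline.
```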
How to properly delete/decommission mount paths associated with old storage: DDBs still appear to be associated with the mount paths.
We have added new storage to Commvault and set the old mount paths to “Disabled for Write” via the mount path “Allocation Policy” → “Disable mount path for new data” + “Prevent data block references for new backups”. All mount paths that are “disabled for write” show no data on them via the “mount path” → “View Contents” option. We have waited several months for all the data to age off. BUT… I see info on the forums/docs that data may still be on the storage, and there are references to “baseline data” in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our “disabled for write” mount paths have DDBs listed in them. So it appears Commvault is still using the storage in some way. I saw a post that indicated: “The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount path…
Hello, I have a matter that could require some help. We have a primary copy on a disk library, a secondary on another disk library, and a third copy on an MCSS cloud library. All copies have dedupe enabled. Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can’t achieve suitable performance. What we see is that the process generates excessive read operations on the source library. Dedupe block size is 128 KB on the disk libraries and 512 KB on MCSS. Commvault version 11.24. Any help would be appreciated. Regards, Jean-xavier
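One plausible explanation, worth confirming with support: with mismatched block sizes (128 KB source, 512 KB target), the aux copy cannot ship already-deduplicated blocks and must rehydrate at the source. A hypothetical illustration with made-up example numbers:

```python
# Why the source library may see heavy reads during the aux copy to MCSS:
# rehydration means reads scale with logical application size, not with
# the deduped size on disk. All figures below are assumed examples.
app_size_tb = 40        # logical application size to copy (assumed)
dedupe_ratio = 8        # source dedupe ratio (assumed)

stored_tb = app_size_tb / dedupe_ratio
print(f"Deduped size on disk: {stored_tb:.1f} TB")
print(f"Reads if block sizes matched (new blocks only): ~{stored_tb:.1f} TB")
print(f"Reads with rehydration (mismatched block sizes): ~{app_size_tb:.1f} TB")
# An ~8x read amplification at the source would be consistent with the
# 'excessive read operations' observed here.
```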
Hi Team, is there any API or report with which we can monitor individual library mount paths? CV alerts cover the complete library, but I need to configure an alert specific to mount paths (coming from different media agents or servers) that are part of the same disk library. We receive the alert below when one or a few of the library’s mount paths hit their reserved space: “Failure Reason: Insufficient disk space. Available mount paths are not enabled for write or have met reserved space limit. Enable/add more mount paths or add more disk space to existing mount paths. Please check mount paths on the library.” We need to configure alerts at the mount path level so that we can disable mount paths in advance. Regards, Mohit
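In the absence of a built-in mount-path alert, one option is polling the Commvault REST API. A minimal sketch: POST /Login is the documented v11 authentication call, but the per-library detail endpoint and the JSON field names below are assumptions to verify against the API documentation for your feature release:

```python
# Hedged sketch: poll per-mount-path free space via the Commvault REST API
# and flag paths nearing their reserved-space limit. Host, library ID, and
# the response field names are hypothetical placeholders.
import base64
import requests

BASE = "http://backupserver/webconsole/api"   # adjust to your Web Server
THRESHOLD_GB = 100                            # alert below this much free space

def login(user: str, password: str) -> dict:
    body = {"username": user,
            "password": base64.b64encode(password.encode()).decode()}
    r = requests.post(f"{BASE}/Login", json=body,
                      headers={"Accept": "application/json"})
    r.raise_for_status()
    return {"Authtoken": r.json()["token"], "Accept": "application/json"}

def check_mount_paths(headers: dict, library_id: int) -> None:
    # "MountPathList", "mountPathName", "freeSpaceGB" are illustrative
    # names -- inspect the real response for your release.
    r = requests.get(f"{BASE}/Library/{library_id}", headers=headers)
    r.raise_for_status()
    for mp in r.json().get("MountPathList", []):
        free_gb = mp.get("freeSpaceGB", 0)
        if free_gb < THRESHOLD_GB:
            print(f"ALERT: {mp.get('mountPathName')} has only {free_gb} GB free")

if __name__ == "__main__":
    hdrs = login("admin", "secret")
    check_mount_paths(hdrs, library_id=42)    # hypothetical library ID
```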
Hello all, we are using Azure Cool storage for our off-site copies and have been for the last several years. Lately, we decided to use Azure combined storage and planned to move/copy data from Azure Cool to Archive storage. After a discussion with Commvault, we implemented what was suggested, but the process seems really slow, and the case is now escalated to Dev. Being honest, we are seeing terrible delays from their side too. My question now is: instead of using an aux copy to copy the jobs from the cool blob to the combined-tier library, what if we changed the tier of the cool blob storage from Cool to the combined tier? If we did that, would the data convert to Archive, or would that only affect new data written to that storage?
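On the Azure side of that question: the account-level default access tier can only be Hot or Cool, so there is no account setting that converts existing blobs to Archive; Archive is applied per blob. A minimal sketch of the Azure-side mechanics (connection string and container name are placeholders) — note that re-tiering blobs underneath Commvault is not a substitute for the supported aux copy to a combined-tier library:

```python
# Demonstrates per-blob archiving with the azure-storage-blob SDK.
# Archived blobs must be rehydrated before they can be read again.
from azure.storage.blob import BlobServiceClient

svc = BlobServiceClient.from_connection_string("<connection-string>")
container = svc.get_container_client("commvault-offsite")  # hypothetical name

for blob in container.list_blobs():
    bc = container.get_blob_client(blob.name)
    bc.set_standard_blob_tier("Archive")  # Archive is a per-blob tier only
```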
So, I’m working on a project with Commvault and an HPE Apollo 4200 with 24x 12 TB NL-SAS drives. What would be a best-practice approach when creating the array (and logical volume) in the server’s RAID controller? I’ve read some people saying that the max volume should be 4 TB per mount point – does this still make sense these days? If yes, that means I need to create 66 volumes of 4 TB each within the RAID controller. Any benefits to doing this versus a single volume of 264 TB? Thanks
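The arithmetic behind the 66-volume figure, assuming a single RAID-6 group across all 24 drives (two drives’ worth of parity is an assumption about the layout):

```python
# Usable capacity and 4 TB mount path count for 24x 12 TB NL-SAS in RAID-6.
drives = 24
drive_tb = 12
parity_drives = 2                                # RAID-6 assumption

usable_tb = (drives - parity_drives) * drive_tb
print(f"Usable capacity: {usable_tb} TB")        # 264 TB
print(f"4 TB mount paths needed: {usable_tb // 4}")  # 66
```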
Hi all, currently ransomware protection can only be used on RHEL/CentOS: https://documentation.commvault.com/11.23/expert/126625_system_requirements_for_ransomware_protection.html. The main reason for this is its use of SELinux modules. Are there plans to bring this feature to Ubuntu Linux? Or are there other ways to achieve this?
Hi, and happy new year to all of you! I would like to know if some of you have already implemented LTO9 drives / tape libraries and would love to get your feedback about using them with Commvault. My experience with LTO9 media, using dual-tape-drive libraries, is quite bad. The media calibration / optimization / characterization phase that every new LTO9 tape has to go through is a pain on my side. It looks like on the first mount of a tape -- let me reword it in my ‘old guy’ words -- it has to be somehow formatted before it can be used by your favourite backup software. Below is a link to Quantum’s FAQ about this: https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf Short calculation: 50 brand-new LTO9 tapes may require up to 2 hours each of ‘calibration’ before they can be used, which equals 100 hours of ‘calibration’ before you could use the full 50-tape pool. 😱 My first issue was that I had to adjust all the mount timeouts in that LTO9 library…
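The poster’s 100-hour figure, plus how it scales with drive count (whether the library can calibrate on both drives in parallel is an assumption about the scheduler):

```python
# Back-of-the-envelope LTO9 calibration cost from the post above.
tapes = 50
hours_per_tape = 2          # worst case per Quantum's FAQ

for drives in (1, 2):
    wall_clock = tapes * hours_per_tape / drives
    print(f"{drives} drive(s): ~{wall_clock:.0f} hours before the pool is usable")
# 1 drive:  ~100 hours
# 2 drives: ~50 hours (if calibration runs on both drives in parallel)
```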
Hi! We’ve recently moved media agents to new servers, and I’m now in the process of cleaning up. For 3 of the 4 media agents the report showed nothing, and I could delete the media agent without any issue. However, for the last media agent, it complains about “Proxy Server to Perform IntelliSnap Backup Operations” and lists a client name, a backup set name, and a subclient name. This was “correct”, because there was a subclient with that name configured for IntelliSnap (but not in use, and with no backups). So I deleted the subclient… but even so, several days later, when I try to delete the media agent, the report still complains about the same “issue”. I could of course just ignore the warning and proceed with the deletion, but I’d much prefer the report to show there is no association before proceeding.
I understand the DEVICE_LABEL file is used for pruning, so its absence could be an issue. I have a customer who has deleted the file off a volume. Will this affect pruning, and could any other potential issues arise? Is there a way to recreate the file, if it is required, without recreating the volume? The volume is configured as a WORM device for compliance. Thanks so much.
Unusual performance drop detected in pruning for the following deduplication databases: [GDSP_ALL_<Servername_8]: due to increase in (CommServe Job Records to be Deleted) - Check details in: http://backupserver:80/webconsole/reportsplus/reportViewer.jsp?idstring If I check the report, it has grown from 0 to 12,399 CommServe job records to be deleted between January 9th and today, the 18th. However, this media agent spends half its life completely offline and shut down; it has been shut down since the 9th and was powered on again today. So I assume this is expected behavior, but because I have not seen this message before, I figured I’d check with the hivemind first :-) Software version: 11.24.78
I have a new Dell PowerEdge R750 sitting in a box with an SSD RAID BOSS and 12x 18 TB NL-SAS on an H755 RAID controller. It will run Windows Server 2019 or 2022 and will be a backup server holding both Commvault and Veeam data. The server will boot off the SSD BOSS, and the NL-SAS is purely backup storage. At the PERC level I expect I’ll go with RAID6. At the Windows level I’m not clear whether there are IO benefits to splitting it out into lots of small (5-10 TB) volumes (which all sit on the same physical RAID) or whether to keep it simple and have one or two large volumes and segment data using folders. The current disk library sits on a pair of older Windows PowerEdges with around 12x 4 TB mount paths which need migrating to the new hardware. Unless there’s a real benefit, I’d sooner have a single large flexible Windows volume with 12 subfolders for each mount path than be messing around creating 12 volumes. I presume there is no way to consolidate down the number of mount paths…
Oracle archive log backups are backed up to a storage policy. The primary copy’s retention is configured for 30 days with 0 cycles. The Data Retention Forecast and Compliance Report shows the archive log is required by the RMAN full backup, which has been aux copied from the primary copy (disk library) to a secondary copy (tape library). The secondary copy retention is 7 years. I would like the log backup on disk to follow the primary copy retention of 30 days and expire once 30 days have been met; however, it is not expiring since it is required by the full backup on tape with the longer retention. I had a similar issue with SQL backups, where the log backups were not expiring because they were required for the SQL full that had been aux copied to tape with a long 7-year retention. The setting to correct that behavior is under Storage → Media Management Configuration → Data Aging → disabling <Honor SQL Chaining for Full jobs on Selective copy>. Does anyone know if…
Hello, I’m planning to add a new partition to an existing DDB. I’ve gone through the documentation and have a doubt about the below: will the additional 0.5 MB of data be added only to the magnetic disk that holds the DDB, or will it also be added to the disk library mount paths? “After running the Backup1, you add Partition2 and run Backup2 of the same 1 MB of data. After the second backup, 4 signatures of 128 KB size will be added to Partition2 (even though the same signatures exists in the original store) and for the other 4 signatures only the reference will be added in the original store (as the signatures already exists). The magnetic disk will have 1.5 MB of data (1 MB from the first Backup + 500 KB from Partition2 from Backup2). On running data aging, if Backup1 is aged, then from the first partition the first 4 signatures will be aged and also 500 KB of data will be pruned from the magnetic disk.” https://documentation.commvault.com/2022e/expert/12455_configuring_additional_partitions_for_ded…
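A worked version of the quoted documentation example, on my reading of it (where “the magnetic disk” in the quote means the disk library, so the extra ~0.5 MB is duplicate block data on the mount paths, not on the DDB disk):

```python
# 1 MB backed up with a 128 KB block size produces 8 signatures.
BLOCK_KB = 128
backup_kb = 1024                                 # 1 MB in KB

signatures_total = backup_kb // BLOCK_KB         # 8 signatures, all in Partition1

# Backup2 after adding Partition2: signatures route across both partitions,
# so ~4 land in Partition2 as NEW entries (their 4 x 128 KB blocks written
# again) and ~4 remain as references in Partition1.
new_sigs_partition2 = signatures_total // 2      # 4
extra_data_kb = new_sigs_partition2 * BLOCK_KB   # 512 KB (the docs round to 500 KB)

print(f"Backup1: {signatures_total} signatures, {backup_kb} KB on the library")
print(f"Backup2: {new_sigs_partition2} new signatures in Partition2, "
      f"{extra_data_kb} KB of additional blocks on the library")
# Library total after both backups: 1024 + 512 KB ~= 1.5 MB, matching the quote.
```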
Hello community, I have a customer who is backing up to a disk library and then aux copying to an Azure cloud library. However, when the customer looks at their Azure costs for the last 5 days, they are spending over 100 dollars a day on iterative read operations. I’m trying to figure out what is reading so much from the cloud library. The total written size for a day is 200 GB – why 41 million read requests? DDB verification is disabled for this library. Is there anything else I should look into? Thanks in advance.
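A sanity check on the numbers in the post; Azure read-operation prices vary by tier and region, so the implied rate is derived purely from the figures given:

```python
# What the post's own numbers imply about read volume vs. data written.
read_requests = 41_000_000
cost_per_day = 100.0                      # USD, from the post
written_gb = 200

cost_per_10k = cost_per_day / (read_requests / 10_000)
print(f"Implied price: ~${cost_per_10k:.4f} per 10,000 reads")
print(f"Reads per GB written: {read_requests / written_gb:,.0f}")
# ~205,000 read ops per GB written is far beyond what writing new data needs,
# so something (pruning/micro-pruning, space reclamation, or an aux copy
# rehydrating from the cloud copy) is likely scanning existing objects.
```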
Is it possible that high Q&I times can cause the following error message: Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]
Hello, I have Oracle backups scheduled through RMAN which occasionally fail with the following error code: “Error Code: [82:177] Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]”. Steps I have taken to troubleshoot the issue: verified the dedupe engine is active and running; created a one-way persistent tunnel from the client to the CommServe and media agents, as suggested by another thread pertaining to the same issue (“ERROR CODE [82:172]: Could not connect to the DeDuplication Database process for Store Id [xx]. Source: xxxx, Process: backint_oracle | Community (commvault.com)”). Our Q&I times across the board for our media agents are quite high. For this particular engine the time is 6,416 microseconds (307%). I believe the DDB is not running on SSD, but on regular disk. Is it possible that this could be causing the above error? If not, I’m not sure what else could be the reason.
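The alert’s own numbers let us infer the baseline Commvault is comparing against; the “slow lookups eventually trip the DDB engine’s reply timeout” reading is a plausible interpretation rather than a confirmed diagnosis:

```python
# Infer the Q&I baseline from the reported figures: 6,416 us shown as 307%.
measured_us = 6_416
reported_pct = 307

baseline_us = measured_us / (reported_pct / 100)
print(f"Implied Q&I baseline: ~{baseline_us:.0f} us (~2 ms)")
# At roughly 3x the baseline, moving the DDB to SSD is the usual first fix
# before chasing network or tunnel settings.
```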
Because we are changing the backup storage infrastructure, I only want to change the storage policy. Our former strategy was to have a spool copy and two aux copies – one to NAS and one to LTO. This was done because we had a performance/backup-time issue, which we solved with SSDs directly attached to the backup server. Now I want to set a retention on the primary copy and delete the no-longer-needed aux copy to NAS, which also points to the same storage (the local SSDs) and wastes needed space on it. If I try to change the retention on the primary storage policy, I get the error shown in the screenshot and the changes are discarded. I don’t know where to find these settings (archiver retention rule and “OnePass”) – I have never set an archiver retention rule, and we are not using archiving.
There is a related topic regarding my question, but I don’t feel it was adequately answered. Essentially, there is a conflict in documentation regarding versioning and how object locking works. Firstly, the public cloud architecture guide for AWS states that bucket and object versioning is not supported in Commvault (p. 85 of AWS Cloud Architecture Guide - Feature Release 11.25 (commvault.com)). However, when enabling object locking in AWS, versioning is enabled by default on the bucket and objects – it is by design. Under the ‘Enabling S3 Object Lock’ section (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html) it states that versioning is enabled. So if Commvault doesn’t support versioning (which will result in orphaned objects), how can Commvault support object locking, which also enables versioning? We have also tested this in the lab and can confirm that when object locking is enabled, we do not see any data prune from the cloud library. We have run the workflow, store…
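A small boto3 demonstration of the AWS-side behavior described above: creating a bucket with Object Lock enabled forces versioning on, which is exactly the tension the poster raises. Bucket name and region are placeholders:

```python
# S3 Object Lock can only be enabled on a versioned bucket; S3 turns
# versioning on automatically when the bucket is created with lock enabled.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(
    Bucket="cv-worm-lab-bucket",          # hypothetical bucket name
    ObjectLockEnabledForBucket=True,      # versioning enabled as a side effect
)

versioning = s3.get_bucket_versioning(Bucket="cv-worm-lab-bucket")
print(versioning.get("Status"))           # prints "Enabled" -- not optional
```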
Hi there, have you ever seen in the Job Controller view that some aux copy jobs sit at 99% (progress bar) for a couple of days? Moreover, the Estimated Completion Time says Not Applicable. However, the Application Size and Total Data Processed numbers are slightly increasing over time, and of course the aux copy jobs are in the running state. My assumption is that there are still some jobs that need to be copied from the primary to the secondary policy copy. Is that possible? And is it worth waiting for the job to complete, or is it better to kill the job?
I have the following problem:

Alert: Aux copy job Failed
Type: Job Management - Auxiliary Copy
Detected Criteria: Job Failed
Is escalated:
Detected Time: Wed Dec 28 23:42:51 2022
CommCell: CommServe
User: Administrator
Job ID: 63139
Status: Failed
Storage Policy Name: CommServeDR
Copy Name: Secondary
Start Time: Wed Dec 28 23:00:11 2022
Scheduled Time: Wed Dec 28 23:00:08 2022
End Time: Wed Dec 28 23:42:51 2022
Error Code: [13:138] [40:91] [40:65]
Failure Reason: Error occurred while processing chunk in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job . Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M…
Hello community! I’m trying to find a way to send only weekly and monthly backups to secondary storage (Azure Blob Storage) from a primary storage policy that has only daily backups (7-day retention). I see from the aux copy wizard that the only way is to create a selective copy with only full backup jobs – so basically to copy only synthetic fulls, once a week in my case. But I can’t figure out how to also copy monthly jobs! I’d appreciate your feedback! Best regards, Nikos
My customer has configured a deduplicated disk library. The shared mount path is a directory in a filesystem running on a Ceph layer. One night he found a lot of jobs waiting because the library was open for jobs but did not write data to disk. There was an unknown error on the Ceph layer, and a reboot of the nodes solved the issue. The point is that the library stayed online and did not go offline during the “Ceph error”. The customer reached the maximum job limit, and new, partly important, backup jobs could not start. Now we are analyzing what happened. My question: is Ceph supported as the target of a disk library? In BOL I found entries about Kubernetes, S3 connections, etc., but nothing about using it as a mount path in a filesystem. I know that Ceph supports block, file, and object level. Thanks in advance for your answers, Joerg
I have 3 filer clients and a media agent, all three on the same switch with no firewall or router. All filers have subclients. One filer has 8 subclients, and backup copy is not working on two of them, giving the error below:

Error Code: [39:501]
Description: Client [XYZ03] was unable to connect to the tape server [ABC123] on IP(s) [10.0.0.34] port . Please check the network connectivity.
Source: ABC123, Process: NasBackup