Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Adding new DDB partition
Hello, I am planning to add a new partition to an existing DDB. I've gone through the documentation and have a doubt about the passage below: will the additional 0.5 MB of data be added only to the magnetic disk that holds the DDB, or will it also be added to the disk library mount paths?

"After running Backup1, you add Partition2 and run Backup2 of the same 1 MB of data. After the second backup, 4 signatures of 128 KB size will be added to Partition2 (even though the same signatures exist in the original store), and for the other 4 signatures only a reference will be added in the original store (as the signatures already exist). The magnetic disk will have 1.5 MB of data (1 MB from the first Backup + 500 KB from Partition2 from Backup2). On running data aging, if Backup1 is aged, then from the first partition the first 4 signatures will be aged and also 500 KB of data will be pruned from the magnetic disk."

https://documentation.commvault.com/2022e/expert/12455_configuring_additional_partitions_for_ded
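For what it's worth, here is a tiny model of the quoted example, reading "the magnetic disk" as the disk library mount path (which the later pruning statement seems to imply). This is only an illustration of the arithmetic, not Commvault internals; how signatures are routed between partitions is simplified here, and the DDB partitions themselves hold only signature entries and references, not the block data.

```python
# Back-of-the-envelope model of the documentation example: 1 MB of data split
# into 8 signatures of 128 KB each, backed up twice around adding Partition2.
SIGNATURE_KB = 128
signatures = [f"sig{i}" for i in range(8)]   # 8 x 128 KB = 1 MB

partition1 = {}        # signature -> reference count (DDB entries only)
partition2 = {}
disk_library_kb = 0    # block data written to the mount path

# Backup1: only Partition1 exists, so all 8 signatures are new and all blocks
# are written to the disk library.
for sig in signatures:
    partition1[sig] = 1
    disk_library_kb += SIGNATURE_KB

# Backup2: same data, Partition2 now present. Assume half the signatures are
# routed to the new partition (simplified routing).
for i, sig in enumerate(signatures):
    if i % 2 == 0:
        # Routed to Partition2: no entry exists there yet, so the signature is
        # treated as new and its block is written to the disk library again.
        partition2[sig] = 1
        disk_library_kb += SIGNATURE_KB
    else:
        # Routed to Partition1: the signature already exists, so only a
        # reference is added; nothing new lands on the mount path.
        partition1[sig] += 1

print(f"Disk library usage: {disk_library_kb / 1024:.1f} MB")   # ~1.5 MB
print(f"Partition1 entries: {len(partition1)}, Partition2 entries: {len(partition2)}")
```

Under this reading, the extra ~0.5 MB is block data on the disk library mount path, while the new partition only grows by the signature entries.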
Is it possible that high Q&I times can cause the following error message: Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]
Hello, I have Oracle backups scheduled through RMAN which occasionally fail with the following error:

"Error Code: [82:177] Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]"

Steps I have taken to troubleshoot the issue:
- Confirmed the dedupe engine is active and running
- Created a one-way persistent tunnel from the client to the CommServe and media agents, as suggested by another thread about the same issue: "ERROR CODE [82:172]: Could not connect to the DeDuplication Database process for Store Id [xx]. Source: xxxx, Process: backint_oracle | Community (commvault.com)"

Our Q&I times across the board for our media agents are quite high. For this particular engine the time is 6,416 microseconds (307%). I believe the DDB is not running on SSD but on regular disk. Is it possible that this is causing the above error? If not, I'm not sure what else the reason could be.
Can't change storage policy
Because we are changing the backup storage infrastructure, I only want to change the storage policy. Our former strategy was to have a spool copy and two aux copies, one to NAS and one to LTO. This was done because we had a performance/backup-time issue, which we solved with SSDs directly attached to the backup server. Now I want to set a retention on the primary copy and delete the no-longer-needed aux copy to NAS, which points to the same storage (the local SSDs) and wastes the needed space there. If I try to change the retention on the primary copy of the storage policy, I get the error shown in the screenshot and the changes are discarded. I don't know where to find these settings (archiver retention rule and "OnePass"); I have never set an archiver retention rule and we are not using archiving.
S3 Object Locking and Versioning
There is a related topic regarding my question, but I don't feel it was adequately answered. Essentially there is a conflict in the documentation regarding versioning and how object locking works. Firstly, the public cloud architecture guide for AWS states that bucket and object versioning is not supported in Commvault (p. 85 of AWS Cloud Architecture Guide - Feature Release 11.25 (commvault.com)). However, when enabling object locking in AWS, versioning is enabled on the bucket and objects by default; that is by design. The 'Enabling S3 Object Lock' section (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html) states that versioning is enabled. So if Commvault doesn't support versioning (and it results in orphaned objects), how can Commvault support object locking, which also enables versioning? We have also tested this in the lab and can confirm that when object locking is enabled, we do not see any data prune from the cloud library. We have run the workflow, store
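For anyone reproducing this in the lab, the coupling on the AWS side is easy to confirm directly: creating a bucket with Object Lock switches versioning on as a side effect. A minimal boto3 sketch follows (the bucket name and region are placeholders, and setting a default lock retention mode/period is omitted):

```python
# Sketch: S3 Object Lock can only be enabled at bucket creation, and doing so
# automatically enables versioning on the bucket, which is the crux of the
# documentation conflict described above.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(
    Bucket="example-worm-bucket",        # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

versioning = s3.get_bucket_versioning(Bucket="example-worm-bucket")
lock_config = s3.get_object_lock_configuration(Bucket="example-worm-bucket")

print(versioning.get("Status"))                                      # "Enabled"
print(lock_config["ObjectLockConfiguration"]["ObjectLockEnabled"])   # "Enabled"
```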
Aux copy jobs are stuck on 99%
Hi there, have you ever seen in the Job Controller view some aux copy jobs sitting at 99% (progress bar) for a couple of days? Moreover, the Estimated Completion Time says Not Applicable. However, the Application Size and Total Data Processed numbers are slowly increasing over time, and of course the aux copy jobs are in the running state. My assumption is that there are still some jobs that need to be copied from the primary to the secondary policy copy; is that possible? And is it worth waiting for the job to complete, or better to kill it?
Secondary Copy from Primary Storage Policy only with daily backups
Hello community! I am trying to find a way to send only weekly and monthly backups to secondary storage (Azure Blob Storage) from a primary storage policy that has only daily backups (7-day retention). I see from the Aux Copy wizard that the only way is to create a selective copy with only full backup jobs, so basically to copy only synthetic fulls once a week in my case. But I can't figure out how to also copy monthly jobs! Thanks in advance for your feedback! Best regards, Nikos
Is Ceph supported as a mount path of a dedup disklib?
My customer has configured a dedup disklib. The shared mount path is a directory in a filesystem running on a Ceph layer. One night he found a lot of jobs waiting because the library was open for jobs but did not write data to disk. There was an unknown error on the Ceph layer, and a reboot of the nodes solved the issue. The point is that the library stayed online and did not go offline during the "Ceph error". The customer reached the maximum limit of jobs, and new backup jobs, some of them important, could not start. Now we are analyzing what happened. My question here: is Ceph supported as a target of a disklib? In BOL I found entries about Kubernetes, S3 connections, etc., but nothing about using it as a mount path in a filesystem. I know that Ceph supports block, file, and object level. Thanks in advance for your answers, Joerg
Client [XYZ] was unable to connect to the tape server
I have 3 filer clients and a media agent, all three on the same switch, with no firewall or router in between. All filers have subclients. One filer has 8 subclients, and backup copy is not working on two of those subclients, giving the error below.

Error Code: [39:501]
Description: Client [XYZ03] was unable to connect to the tape server [ABC123] on IP(s) [10.0.0.34] port . Please check the network connectivity.
Source: ABC123, Process: NasBackup
Data Transferred over network
I see this all the time and have never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies there is a Data Transferred Over Network number, which I would expect to be 0, but there is usually a value there. For example, this aux copy job, still running, shows:
Total Data Processed: 3.23 TB
Data Transferred Over Network: 107.95 GB
Total Data to Process: 4.7 TB
DDB Update to V5 question - Stop backups -vs.- Temporarily Disable Dedupe
I am upgrading my DDBs to V5 and trying to avoid fully stopping backups while I squeeze in the upgrade. Is it possible to check "Temporarily Disable Deduplication" under Dedupe Engines > [DDB name] > Properties > Deduplication > Advanced tab and then perform the upgrade? The DDB needs to come offline for compaction. Thanks in advance, Joel Bates
Identifying the traditional DDB version in the logs on the MA
How and where can I identify the traditional DDB version from the logs on the media agent? In SIDBEngine.log on the media agent we find this:
6292 126c 12/16 13:00:19 ### 1-0-1-0 LoadConfig 313 Use MemDB [false]
But we can't see a version number. Is there a version number logged anywhere? Does it exist?
My customer is currently at 11.24.60, should I upgrade the environment to 2022E prior to start deploying the HSX Cluster ?
Hi, a quick question about the ISO 2.3 for reference architecture deployment (dvd_10072022_113351.iso). I don't know if I'm in the right place for this question, but: which FR is this ISO based on? FR 24? My customer is currently at 11.24.60; should I upgrade the environment to 2022E before starting to deploy the HSX cluster? I've seen a lot of new features for monitoring and securing nodes! Thank you,
S3 with WORM and extended retention
Situation:
- Primary backup on site A onto S3 with deduplication, retention of 30 days / 4 cycles, no WORM
- Backup copy (synchronous) to another site onto S3 with deduplication, retention of 30 days / 4 cycles, extended retention of 365 days (monthly fulls) and 10 years (yearly fulls), WORM

Question: if we enable (object-level) WORM with the workflow on the backup copy storage pool, then by default the WORM lock period is twice the retention, meaning in the above example the WORM lock would be 60 days (2x 30 days). However, how does that affect the extended retention for the monthly / yearly fulls? To what value would the WORM lock have to be set to guarantee that the monthly and yearly fulls are WORM-protected for 365 days / 10 years, while all other backups follow the "normal" WORM lock of 30 days? If we set the WORM lock to 10 years (how long the yearly fulls need to be WORM-protected), then all backups that get copied would get that 10-year lock, even all the incrementals that get copi
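To make the trade-off in the question concrete, here is a small sketch. The only rule it encodes is the one stated in the post (the default lock period is twice the copy retention); the assumption that a single pool-level lock period would also apply to every job copied to that pool is the premise of the question, not something verified here.

```python
from datetime import timedelta

# Rule stated in the post: the default object-level WORM lock is twice the
# storage policy copy retention.
copy_retention = timedelta(days=30)
default_lock = 2 * copy_retention
print(f"Default WORM lock: {default_lock.days} days")          # 60 days

# Extended retention tiers configured on the same backup copy.
monthly_retention = timedelta(days=365)
yearly_retention = timedelta(days=3650)                         # ~10 years

# If the lock period is a single value for the whole pool, covering the yearly
# fulls means every object written to that pool gets the same lock, including
# the 30-day incrementals, which is exactly the concern raised above.
lock_for_yearly = yearly_retention
extra_days_for_short_jobs = (lock_for_yearly - copy_retention).days
print(f"Lock needed to cover yearly fulls: {lock_for_yearly.days} days")
print(f"30-day jobs would be pinned roughly {extra_days_for_short_jobs} extra days")
```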
Error Code: [62:2855] Description: Error occurred in Disk Media
Hi, please help me find where the problem is and how I can solve it.

Problem:
Error Code: [62:2855] Description: Error occurred in Disk Media, Path [Test_cat\UE3QA4_10.14.2022_15.56\CV_MAGNETIC\V_1] [-1451 OSCLT_ERR_MAXIMUM_DEVICE_LOCKS]. For more help, please call your vendor's support hotline.
Source: CDBL-CS-MA, Process: cv
Windows media agent BoostFS to DD6900
Is anyone using a Data Domain (we are using a DD6900)? It is attached to a Windows media agent using BoostFS. It's new, and we started doing backups about two weeks ago. All the backups completed successfully. Then we tried to do a tape aux copy and started getting a couple of data verification errors on some of the jobs. I'd like to compare settings with someone, because I can't figure out what's causing it.
How to increase streams for DAG backup and restore jobs?
Hi all, I have a DAG cluster with about 7 TB of data. The backup job completes in about 1 day. The restore job duration is worse than the backup duration (about 1.5 days). How can I reduce these times (both backup and restore, but restore matters more than backup)? I tried increasing the stream count, but it is not working.
Waiting for send queue to get emptied.
Hi all, one question about DDB verification jobs. The verification job for one of our DDBs takes quite a long time, usually a matter of days. When I check the ScalableDDBVerf.log file I see many messages like this: WARNING - Waiting for send queue to get emptied. Curr Size  Should I consider this a symptom of a problem? Thank you in advance, Gaetano
Auxiliary copy to use and fill same tapes info
Hello, we have bought a new tape library and want to create auxiliary copy jobs that run on a weekly basis for 4 storage policies, only for the full backup jobs (the client schedule is one full per day and incrementals every hour). The auxiliary copy jobs must all run during the weekend and occupy no more than 2 tapes (we have high-capacity 16 TB tapes), because we have only 8 tapes. The question is: how can we make the copy jobs write to and fill up one tape, and only move on to another tape when the first one is full? For example, after the first copy job finishes, the second one should use the same tape as the first (if there is still space available, of course), and when that tape is full, write to another one, and so on. Best regards, Stefan
Proper disk configuration for new CV server
Hi, we have recently acquired a new server and storage as part of our hardware refresh for the Commvault server. The new server has the following:
- 2x 480 GB SATA SSD configured as RAID 1 (OS installed)
- 2x 1.6 TB PCIe SSD; still deciding whether to use host-based mirroring or leave them as standalone disks. The intended use is for the SQL database, DDB, and index.

The old server has the following disk configuration:
- OS - 558 GB - 173 GB used
- SQL - 278 GB - 866 MB used
- DDB - 418 GB - 25.6 GB used
- Commvault V11 SP16 HPK17

Any recommendations for the new server's disk configuration? If I use host-based mirroring, will it impact the server's performance? Which Commvault version should I use, 2022E or 11.26? Thank you in advance.
Weird System Created DDB Space Reclamation schedule policy
Hello, let me ask your opinion about the following situation. When I check the details of the System Created DDB Space Reclamation schedule policy, it looks "corrupted". As you can see in the attached image, the summary screen shows type "Data Verification" while the dialog shows type "Data Protection". Moreover, the Associations tab shows the list of clients instead of the list of DDBs. Is this normal? How can I get rid of it? Thank you in advance, Gaetano
Configuring WORM on cloud storage
Hi all, I was documenting the WORM activation on our cloud storage, using different threads here and the documentation. I came across several questions, which I hope will get answers through this topic.

1 - The link that follows states: "Note: Once applied, the WORM functionality is irreversible." Does that mean that once we activate WORM on the storage through the workflow, we cannot change the retention? As a first test of WORM, we wanted to set the retention of one storage policy copy on the storage pool to 1 day only. Does that mean we cannot change the retention in the workflow to something else afterwards, let's say 15 days?

2 - From the same link, since our storage pool uses deduplication, it is stated that the retention on the storage will be set to twice the retention on the storage pool. Our copies on the storage pool will be set to 15 days; does that mean the data will remain on the storage for 30 days without being deleted, af