Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,252 Replies
My customer has configured a dedupe disk library. The shared mount path is a directory in a filesystem running on a Ceph layer. One night he found a lot of jobs waiting because the library was open for jobs but did not write any data to disk. There was an unknown error on the Ceph layer, and a reboot of the nodes solved the issue. The point is that the library stayed online and did not go offline during the “Ceph error”. The customer reached the maximum job limit, and new backup jobs, some of them important, could not start. We have now analyzed what happened. My question here: is Ceph supported as the target of a disk library? In BOL I found entries about Kubernetes, S3 connections, etc., but nothing about using it as a mount path in a filesystem. I know that Ceph supports block, file, and object level. Thanks in advance for your answers, Joerg
I have 3 filer clients and a media agent, all three on the same switch, with no firewall or router in between. All filers have subclients. One filer has 8 subclients, and backup copies are not working on two of those subclients, giving the error below.

Error Code: [39:501] Description: Client [XYZ03] was unable to connect to the tape server [ABC123] on IP(s) [10.0.0.34] port . Please check the network connectivity. Source: ABC123, Process: NasBackup
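In the meantime, a quick way to confirm basic reachability from the client host to the MediaAgent is a plain TCP probe. A minimal sketch in Python, assuming the commonly used Commvault ports 8400-8403 (the actual port is blank in the error above, so verify it against your environment's network configuration):

```python
# Minimal reachability probe toward the MediaAgent. The candidate ports
# are common Commvault defaults, not taken from the error above (the
# port number there is blank) - adjust them to your setup.
import socket

MEDIA_AGENT_IP = "10.0.0.34"
CANDIDATE_PORTS = [8400, 8401, 8403]

for port in CANDIDATE_PORTS:
    try:
        # connect with a short timeout so a silently dropped SYN fails fast
        with socket.create_connection((MEDIA_AGENT_IP, port), timeout=5):
            print(f"{MEDIA_AGENT_IP}:{port} reachable")
    except OSError as exc:
        print(f"{MEDIA_AGENT_IP}:{port} NOT reachable ({exc})")
```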
I see this all the time and I have never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies the job shows a Data Transferred Over Network number, which I think should be 0, but there is usually a value there. For example, this aux job (still running) shows: Total Data Processed: 3.23 TB; Data Transferred Over Network: 107.95 GB; Total Data to Process: 4.7 TB.
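For a sense of scale, the network counter here is only a few percent of the data processed so far, which would be consistent with control/metadata traffic running alongside the local disk-to-tape stream (an interpretation, not a documented breakdown). The quick arithmetic:

```python
# Sanity check of the counters from the running aux copy job.
# Assumes the GUI reports binary units (1 TB = 1024 GB); with decimal
# units the percentage barely changes.
processed_gb = 3.23 * 1024      # Total Data Processed: 3.23 TB
network_gb = 107.95             # Data Transferred Over Network

share = network_gb / processed_gb
print(f"Network transfer is {share:.1%} of data processed so far")
# -> roughly 3.3%, i.e. a small side channel next to the main copy stream
```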
Upgrading my DDBs to V5, trying to avoid fully stopping backups while I squeeze in the upgrade. Is it possible to check “Temporarily Disable Deduplication” under Dedupe Engines > [DDB name] > Properties > Deduplication > Advanced tab and then perform the upgrade? The DDB needs to come offline for compaction. Thanks in advance, Joel Bates
How and where can I identify the traditional DDB version? Looking at the SIDBEngine.log on the media agent, we find this: 6292 126c 12/16 13:00:19 ### 1-0-1-0 LoadConfig 313 Use MemDB [false] But you can't see a version number there. Is there a version number at all? Does it exist?
Hi, a quick question on the ISO 2.3 for reference architecture deployment, dvd_10072022_113351.iso. I don't know if I’m in the right place for this question, but: which Feature Release is this ISO based on? FR24? My customer is currently at 11.24.60; should I upgrade the environment to 2022E prior to starting to deploy the HSX cluster? I’ve seen a lot of new features for monitoring and securing nodes. Thank you,
Situation: primary backup on site A onto S3 with deduplication, retention of 30 days / 4 cycles, no WORM. Backup copy (synchronous) to another site onto S3 with deduplication, retention of 30 days / 4 cycles, extended retention for 365 days (monthly fulls) and 10 years (yearly fulls), WORM.

Question: if we enable (object-level) WORM with the workflow on the backup copy storage pool, then by default the WORM lock period is twice the retention, meaning in the above example the WORM lock would be 60 days (2x 30 days). However, how does that affect the extended retention for the monthly/yearly fulls? To what value would the WORM lock have to be set to guarantee that the monthly and yearly fulls are WORM protected for 365 days / 10 years, while all the other backups follow the “normal” WORM lock of 30 days? If we set the WORM lock to 10 years (how long the yearly fulls need to be WORM protected), then all backups that get copied would get that 10-year lock, even the incrementals that get copied…
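To make the what-ifs concrete, here is a small sketch that simply applies the "lock = 2 × retention" default from the post to each retention tier separately. Whether the extended-retention tiers actually receive their own lock, or inherit the pool-level one, is exactly the open question here, so treat these as illustrative numbers only:

```python
# Toy arithmetic: the "lock = 2 x copy retention" default applied to
# each retention tier from the post. Not documented behaviour for the
# extended tiers - just the numbers the doubling rule would produce.
retention_tiers_days = {
    "base copy retention": 30,
    "monthly fulls (extended)": 365,
    "yearly fulls (extended)": 10 * 365,
}

for tier, days in retention_tiers_days.items():
    print(f"{tier}: retention {days}d -> 2x lock would be {2 * days}d")
```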
Hi, please help me: where is the problem, and how can I solve it?

Problem: Error Code: [62:2855] Description: Error occurred in Disk Media, Path [Test_cat\UE3QA4_10.14.2022_15.56\CV_MAGNETIC\V_1] [-1451 OSCLT_ERR_MAXIMUM_DEVICE_LOCKS]. For more help, please call your vendor's support hotline. Source: CDBL-CS-MA, Process: cv
Hi guys, what is better: a full backup or a synthetic full for streaming, agent-based file system backups? I get the point that a synthetic full is better in that it does not use the client machine’s resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
Is anyone using a Data Domain (we are using a 6900)? It is attached to a Windows media agent using BoostFS. It is new, and we started doing backups about two weeks ago. All the backups completed successfully. Then we tried to do a tape aux copy and started getting data verification errors on a couple of the jobs. I’d like to compare settings with someone, because I can’t figure out what is causing it.
Hi all, I have a DAG cluster with about 7 TB of data. A backup job completes in about 1 day. Restore jobs take even longer than backup jobs (about 1.5 days). How can I reduce these times (both backup and restore, but restore matters more than backup)? I tried increasing the number of streams, but it did not help.
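For reference, the raw figures in the post translate into these sustained rates (assuming 7 TB = 7 × 1024 GB and round durations of 24 h for backup and 36 h for restore):

```python
# Back-of-the-envelope throughput from the figures in the post.
data_mb = 7 * 1024 * 1024  # 7 TB in MB (binary units assumed)

for phase, hours in [("backup", 24), ("restore", 36)]:
    mb_per_s = data_mb / (hours * 3600)
    print(f"{phase}: ~{mb_per_s:.0f} MB/s sustained")
# backup  -> ~85 MB/s
# restore -> ~57 MB/s; if a single stream tops out near this rate,
# adding streams only helps when the restore can actually be split
```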
Hi all, one question about DDB verification jobs. The verification job for one of our DDBs takes quite a long time, usually a matter of days. When I check the ScalableDDBVerf.log file, I see many messages like this: WARNING - Waiting for send queue to get emptied. Curr Size Should I consider this a symptom of a problem? Thank you in advance, Gaetano
Hello, we have bought a new tape library and we want to create auxiliary copy jobs that run on a weekly basis for 4 storage policies, only for the full backups (the client schedule is a full once a day and an incremental every hour). The auxiliary copy jobs must all run during the weekend and occupy no more than 2 tapes (we have high-capacity 16 TB tapes), because we have only 8 tapes. The question is: how can we make the copy jobs write to and fill up one tape, and only move on to another tape when the first one is full? For example: after the first copy job has finished, the second one should use the same tape as the first (if there is still space available, of course), and when that tape is full, write to another one, and so on. Best regards, Stefan
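A toy simulation of the desired fill-then-spill behaviour, with hypothetical job sizes; in practice Commvault decides tape selection through media allocation settings rather than anything like a script, so this only illustrates the intended outcome and the capacity math:

```python
# Greedy "fill one tape, then spill to the next" model. Job sizes and
# the usable-capacity figure are made-up placeholders.
TAPE_CAPACITY_TB = 16.0

def assign_jobs_to_tapes(job_sizes_tb):
    """Append each copy job to the current tape, opening a new tape
    only when the current one cannot hold the job."""
    tapes, used = [[]], [0.0]
    for size in job_sizes_tb:
        if used[-1] + size > TAPE_CAPACITY_TB:
            tapes.append([])
            used.append(0.0)
        tapes[-1].append(size)
        used[-1] += size
    return tapes, used

weekly_fulls = [5.2, 4.8, 3.9, 6.1]  # hypothetical per-policy full sizes in TB
tapes, used = assign_jobs_to_tapes(weekly_fulls)
for i, (jobs, u) in enumerate(zip(tapes, used), start=1):
    print(f"tape {i}: jobs {jobs} -> {u:.1f}/{TAPE_CAPACITY_TB} TB used")
```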
Hi, we have recently acquired a new server and storage as part of our hardware refresh for the Commvault server. The new server has the following:
- 2x 480 GB SATA SSD configured as RAID 1 (OS installed)
- 2x 1.6 TB PCIe SSD - still deciding whether to use host-based mirroring or leave them as standalone disks; the intended use is for the SQL database, DDB, and index

The old server has the following disk configuration:
- OS - 558 GB - 173 GB used
- SQL - 278 GB - 866 MB used
- DDB - 418 GB - 25.6 GB used
- Commvault V11 SP16 HPK17

Any recommendations for the new server’s disk configuration? If I use host-based mirroring, will it impact the server’s performance? Which Commvault version should I use, 2022E or 11.26? Thank you in advance.
Hello, let me ask your opinion about the following situation. When I check the details of the system-created DDB Space Reclamation schedule policy, it looks “corrupted”. As you can see in the attached image, the summary screen shows Type “Data Verification”, while the dialog shows type “Data Protection”. Moreover, the Associations tab shows the list of clients instead of the DDB list. Is this normal? How can I get rid of it? Thank you in advance, Gaetano
Hi all, I was documenting the WORM activation on our cloud storage, from different threads here and from the documentation, and came across several questions which I hope will get answers through this topic.

1 - The link that follows states: “Note: Once applied, the WORM functionality is irreversible”. Does that mean that once we activate WORM on the storage through the workflow, we cannot change the retention? As a first test of WORM, we wanted to set the retention of one storage policy copy on the storage pool to 1 day only. Does that mean we cannot later change the retention of the workflow to something else, let's say 15 days?

2 - From the same link: since our storage pool uses deduplication, the retention on the storage is set to twice the retention of the storage pool. Our copies on the storage pool will be set to 15 days; does that mean the data will remain on the storage for 30 days without being deleted, af…
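The doubling rule applied to the retention values under discussion, as plain arithmetic (it does not answer whether the workflow's retention can be raised after the fact):

```python
# "Storage retention = 2 x copy retention" for deduplicated WORM copies,
# applied to the 1-day test and the 15-day target from the post.
for copy_retention_days in (1, 15):
    storage_lock_days = 2 * copy_retention_days
    print(f"copy retention {copy_retention_days}d "
          f"-> object lock / storage retention {storage_lock_days}d")
```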
My company is planning a tech refresh of our aging Data Domain to a newer model. We have also highlighted that we’re having backup slowness issues on some of our large Oracle databases and some NDMP backups. Our current configuration backs up to a VTL located on our Data Domain, with no compression or deduplication enabled at the Commvault layer. Dell’s sales team advised us to purchase an additional DD Boost license for the new Data Domain, because DD Boost can achieve a very good dedupe and compression rate at the source before transferring the data to the Data Domain, thus saving network transfer time. However, I’ve been checking Commvault’s KB, and it looks like Commvault only works with BoostFS, not DD Boost. I haven’t checked with Dell about this yet. Has anyone implemented DD Boost in their environment for backing up databases/VMs and NDMP?
The auditors want to see whether my backups are encrypted, and I’m not sure where to go in the Commvault GUI to show that. I don’t see anything about encryption in the properties of my storage libraries or my storage policies. Where do I show whether or not my backups are encrypted? Ken
Hello, I am trying to fully understand restore point retention so I can achieve my goals. I also have Incident 221117-488 currently open about this. Here’s what I am trying to achieve: a low-priority backup plan that runs on a daily schedule and consistently retains as close to 3 restore points as possible. Here are the settings I currently have for my base and derived plan.

Base plan: WIN_SYS_STD_BASE_LOW
- SLA: 1 week, inherited from CommCell
- Backup destinations: Primary - 3 days retention period; Secondary - 3 days retention period
- Database options: Log backup RPO - 4 hour(s)
- Run full backup every: 1 week
- Storage pool: override not allowed; RPO: override required; Folders to back up: override optional

Derived plan (defines scheduling only): WIN_SYS_STD_BASE_LOW_10PM
- Defined in Java GUI: run synthetic full every 3 days
- Backup frequency: run incremental every day at 10:00 PM
- Backup destinations (inheriting from base plan): Primary - 3 days retention period; Secondary - 3 days retention period
- Database options: Log backup RPO…
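As a thought experiment, a deliberately simplified model of this schedule (daily incrementals, a synthetic full every 3 days, a flat 3-day retention, and keeping whatever full a retained incremental depends on) suggests the count would hover around 3-4 points. This ignores Commvault's real cycle-based aging rules, so treat it as a sketch of the intuition, not plan behaviour:

```python
# Toy retention model: daily jobs, synthetic full every 3rd day,
# 3-day retention, plus the full that each retained incremental needs.
RETENTION_DAYS = 3
SYNTH_FULL_EVERY = 3

def restore_points_on(day):
    jobs = []
    for d in range(day + 1):
        kind = "synthetic full" if d % SYNTH_FULL_EVERY == 0 else "incremental"
        jobs.append((d, kind))
    # keep jobs within retention...
    retained = {(d, k) for d, k in jobs if day - d < RETENTION_DAYS}
    # ...plus the anchor full for every retained incremental
    for d, k in list(retained):
        if k == "incremental":
            anchor = d - (d % SYNTH_FULL_EVERY)
            retained.add((anchor, "synthetic full"))
    return sorted(retained)

for day in range(6, 10):
    pts = restore_points_on(day)
    print(f"day {day}: {len(pts)} restore points -> {pts}")
```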
Hi all, has anybody worked on bringing Commvault maglib status and tape media usage status into a Grafana dashboard? Is there any way to showcase the usage trend and capacity reporting based on maglib utilization and publish it in Grafana? If we can pull real-time data from Commvault, we can pretty much show these metrics in Grafana. Any leads?
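One possible approach, sketched below: poll the Commvault REST API on a schedule and let a Grafana JSON datasource (or a Prometheus exporter) read the result. The POST /Login call is part of the documented REST API; the GET /Library endpoint, the host name, and the response fields here are assumptions from memory, so verify everything against your Command Center's API documentation before building on it:

```python
# Hedged sketch: pull library data from the Commvault REST API so a
# Grafana datasource can consume it. Endpoint paths and field names
# below are placeholders to verify against the official API docs.
import base64
import requests

BASE = "http://commserve.example.com/webconsole/api"  # hypothetical host

def login(user: str, password: str) -> str:
    """Authenticate and return the auth token for subsequent calls."""
    body = {"username": user,
            "password": base64.b64encode(password.encode()).decode()}
    r = requests.post(f"{BASE}/Login", json=body, timeout=30)
    r.raise_for_status()
    return r.json()["token"]

def library_status(token: str):
    """Fetch library information. Endpoint name is an assumption."""
    headers = {"Authtoken": token, "Accept": "application/json"}
    r = requests.get(f"{BASE}/Library", headers=headers, timeout=30)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    token = login("grafana_reader", "secret")
    print(library_status(token))
    # point Grafana's JSON datasource at a cached copy of this output,
    # or reshape it into Prometheus metrics with prometheus_client
```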
Hello, a customer asked whether it would be possible to make all their primary copies (for disk libraries) WORM-protected, and what the implications would be. Up until now our standard has been n days/1 cycle retention on primary copies and n days/0 cycles retention on a secondary copy. We basically use the 1 cycle as a safety net: if, for whatever reason, the backup of a client does not work for a long time, they always have one backup available without setting manual retention on those jobs. Now we are having an internal discussion about how retention works with WORM, specifically whether the cycle retention is also relevant for manual deletion of the jobs/clients that hold those jobs. For example: a client is using a WORM storage policy with 14 days/1 cycle retention. Data aging will not age out and delete the jobs automatically until both conditions are met. But is it possible to manually delete the jobs on day 15? I would say it is not possible, because the data is still retained by the cycles. If that is the case…
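The AND rule for automatic aging, encoded as a tiny model for clarity. Whether WORM additionally blocks manual deletion while the cycle condition still holds is exactly the open question, and this sketch does not answer it; it just pins down the two-condition logic being discussed:

```python
# Minimal model of the days-AND-cycles aging rule from the post.
def eligible_for_aging(age_days: int, newer_full_cycles: int,
                       retention_days: int = 14,
                       retention_cycles: int = 1) -> bool:
    """True only when both retention conditions are satisfied."""
    return age_days > retention_days and newer_full_cycles >= retention_cycles

# Day 15, but no newer full has completed (client stopped backing up):
print(eligible_for_aging(age_days=15, newer_full_cycles=0))  # False
# Day 15 with one newer completed full cycle:
print(eligible_for_aging(age_days=15, newer_full_cycles=1))  # True
```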