Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 778 Topics
- 3,676 Replies
Do you see these errors for your jobs? When we updated from 11.20.32 to 11.20.60 we started getting Cache-Database errors on various backup types: the File System iDataAgent, NDMP backups, and others. The exact error we get is:

Error Code: [40:110]
Description: Client-side deduplication enabled job found Cache-Database and Deduplication-Database are out of sync.

I have a ticket open with support, but I am wondering whether the issue is unique to us or whether it is happening to other customers as well. Thank you,
Hello, I am enabling encryption for my backup data per a new requirement, by turning it on through the storage policy copy encryption setting. After subsequent backup jobs completed, I verified from the storage policy copy report that encryption is set on the backup data. However, I do not see any indication that the DDB backups are encrypted. Do they need to be? This is a requirement from our auditors; they will see this in the report, as I did, and may ask why the DDB backups are excluded. Thanks.
How do I read this job status data? What are my deduplication savings, if any, or is it compression alone? Where do I look for the actual deduplication savings for the job? Can I leave deduplication enabled for transaction logs, or will that affect performance? Once the job is completed I only see this:

Could it be that my dedupe savings are nil? Or am I mixing up the terms, and what is reported as compression is actually deduplication?
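A quick way to sanity-check the numbers is to compare the job's application (front-end) size against what was actually written to media. The sketch below uses hypothetical figures, since the post's screenshot is not available; read the real values from your job details. Note that the first backup into a fresh deduplication store has no prior blocks to match, so its savings are mostly compression; dedupe savings typically show up from the second cycle onward.

```python
# Back-of-the-envelope savings check: compare the application (front-end)
# size of a job against what actually landed on media. The figures below
# are hypothetical placeholders; substitute the values from the job details.
app_size_gb = 500.0    # application size reported for the job (assumed figure)
media_size_gb = 120.0  # size written to media after dedupe + compression (assumed figure)

savings = 1 - media_size_gb / app_size_gb
print(f"Combined dedupe + compression savings: {savings:.1%}")
# The job summary typically reports one combined figure, so a non-zero
# saving on a first (baseline) job may well be compression alone.
```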
Hi All, I was documenting the WORM activation on our cloud storage, from different threads here and from the documentation. I came across a few questions, which I hope will get answers through this topic.

1 - The link that follows states: "Note: Once applied, the WORM functionality is irreversible." Does that mean that once we activate WORM on the storage through the workflow, we cannot change the retention? As a first test of WORM, we wanted to set the retention of one storage policy copy on the storage pool to just 1 day. Does that mean we cannot later change the retention in the workflow to something else, say 15 days?

2 - From the same link: since our storage pool uses deduplication, it is stated that the retention on the storage will be set to twice that of the storage pool. If our copies on the storage pool are set to 15 days, does that mean the data will remain on the storage for 30 days without being deleted…
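To make the arithmetic in question 2 concrete, here is a minimal sketch of the doubling rule the post quotes for deduplicated storage pools. The 15-day figure is the post's own example; the rationale in the comment is a paraphrase of how the documentation explains the 2x lock, not a verified formula for every configuration.

```python
# WORM lock sizing as described in the post: with deduplication, the
# storage-level lock is set to twice the copy retention, because a block
# written early in a DDB cycle can stay referenced by later jobs until
# the store is sealed and fully aged (rationale paraphrased from the docs).
copy_retention_days = 15                  # storage policy copy retention (post's example)
worm_lock_days = 2 * copy_retention_days  # lock applied at the storage layer

print(f"Copy retention:          {copy_retention_days} days")
print(f"Storage-level WORM lock: {worm_lock_days} days")
```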
Hi, I would like to ask how we can perform recovery of data from our offsite copy. We have CVfailover set up in our environment; below are the scenarios we need to cover.

1. From the CommCell Console in Site A, how do we recover our data from the offsite copy, which is on secondary storage in Site B?
2. When the CommServe in Site A becomes unavailable and the CommServe in Site B becomes active after a successful CVfailover: from the CommCell in Site B, how do we recover our data from the primary and secondary storage in Site A?
3. How do we recover our data from the secondary storage in Site B when the CommServe servers and storage arrays in Site A are also unavailable?

Refer to the attached photo for the CommCell environment. Many thanks to anyone who can provide a response. rolan.
Hi guys, is there a way to catalog jobs from a bucket within a cloud storage library, like below? The tool offers only Tape or Disk as the media type. How do we retrieve our DR backups from cloud storage, in case we lose everything, in order to perform a disaster recovery? I found the link below, however it doesn't show how to retrieve the DR DB:

https://documentation.commvault.com/11.24/expert/43588_retrieving_disaster_recovery_dr_backups_from_cloud_storage_using_cloud_test_tool.html

I've also found the note below. Does this mean that if deduplication is enabled, there is no way to retrieve the DR DB? Thanks a lot. Best regards
DDB Backups: Is the media agent that hosts a DDB partition supposed to back up that partition itself (rather than another media agent)?
We have several DDBs, all partitioned across several media agents. When the DDB backups run, I'm seeing most of the media agents doing a backup for "themselves", meaning the client and the media agent are the same when the DDB backup runs. But for one of them, I cannot seem to get the DDB backup to choose the primary/default media agent (in the copy → data paths settings) for any of the DDB backups; it always chooses the alternate data path for both DDB backup partitions.

I have *not* yet enabled the "use preferred data path" setting (where it should use only the primary media agent and no alternates), as I feel it should choose the primary anyway and auto-choose the secondary media agent for the other partition only if it needs to.

Also: I want the DDB backups to be split over the 2 media agents, because one media agent is very overpowered (lots of CPU/memory) relative to the other (older, with few CPUs). The media agent the DDB backups keep choosing is this underpowered media agent.
Our media agent needs replacing. The replacement is probably going to be a Dell PowerEdge R5xx or R7xx with dual CPUs and 128GB RAM, and the intention is to use M.2/NVMe for the OS/Commvault binaries and the DDB/indexes, but I'd appreciate any guidance and best practice on the build and storage.

We're doing a lot of synthetic fulls with small nightly incremental backups, and we aux about 70TB to LTO8 tape every week. The disk library we have right now is approximately 50TB.

Is there any best practice that would favour NAS/network storage for the disk library over filling the PowerEdge with large SAS disks? With local disk on Windows Server, is there a preference between NTFS and ReFS? And is there a best practice on mount path size? We're currently using 4-5TB mount paths carved out as separate Windows volumes on a single underlying hardware RAID virtual disk.

Given modern hardware performance, can anyone see a definite reason to do more than buy a single PowerEdge for this, other than redundancy/availability?
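As a rough feasibility check on the single-server question, the sketch below works out the tape-side arithmetic for the weekly 70TB aux copy using published LTO-8 native specs (about 12TB per cartridge and about 360MB/s streaming). Actual rates depend on how fast the disk library and network can feed the drive, so treat the result as a lower bound on drive-hours, not a prediction.

```python
# Tape-throughput floor for the weekly aux copy described in the post.
# LTO-8 native specs: ~12 TB/cartridge, ~360 MB/s streaming.
weekly_aux_tb = 70
lto8_native_mb_s = 360
lto8_native_tb = 12

seconds = weekly_aux_tb * 1e12 / (lto8_native_mb_s * 1e6)
print(f"Single LTO-8 drive at native speed: {seconds / 3600:.0f} hours/week")
print(f"Cartridges needed (native capacity): {weekly_aux_tb / lto8_native_tb:.1f}")
# ~54 drive-hours/week means one drive can keep up only if it streams
# near native speed most of the week; a second drive buys headroom.
```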
Good day all. This one is a bit complicated, so I will try to keep it as brief as I can.

The customer has a faulty storage device with the backup data on it. They've received a loan VAST unit, which uses its own deduplication engine in addition to the Commvault dedupe in place. We would like to turn the Commvault deduplication off, as we're having errors on the DDB which may require us to seal it.

My concerns are below, and I'm hoping to get some clarity on them. The faulty storage and the VAST storage are in the same data centre (DC1). We want to turn the DDB off in this DC and do a Move Mount Path(s) from the faulty storage to the VAST. At the same time, new backups are running to the VAST device on different mount paths. When this is all complete and the faulty storage is replaced (it won't have its own dedupe capabilities, so Commvault will handle that), we will move the VAST mount paths back to this storage.

With Commvault DDB off, does non-deduplicated data get deduplicated during a 'Move Mount Path' operation?
Hi Community, I want to know about the strategy we can take for data protection of cloud workloads using Commvault.

Do we need to deploy the CommServe in the cloud, or can we use an on-prem CommServe for backup of cloud as well as on-prem workloads? If yes, how? Please share any sample reference architecture diagram for backup of cloud workloads. What type of backup library should be used for cloud workload backups?
Hi, one of our media agents is down. It runs Windows Server, and we are unable to bring the server back up; the MA is currently offline. The server also has over 10TB of critical backed-up data on it. Our OS team has failed to bring up the server. Please suggest how we can recover from this situation.
Hi, we have data to back up with a retention period of 4 weeks. The challenge is the following:

- the data within the first week of the retention period must be kept on SSD disks
- the data from the 2nd to the 4th week of retention must go to NL-SAS disks

So the goal is to not keep the first week's data on the NL-SAS disks, to reduce the space used. Is there a way to reach this goal? Thanks. Regards,
Hello team, I noticed two sealed DDBs have a space warning under the DDB Disk Space Utilization section in the Web Console health report. We have long-term retention for mailbox backups that prevents the DDB store from reclaiming space. I'm looking to see if there is a way to exclude sealed DDBs from the DDB disk space utilization / strike count (I have searched CV BOL, but it doesn't bring up any results). Or will I need to contact support to manually free up the sealed DDB space? Do I need to upload the CS DB for CV staging? Thank you
Hi Community, we are using a disk library as our primary backup storage and would like to configure an immutable secondary DASH copy on Pure FlashBlade. I would like to understand:

- Can I create a disk library from Pure FlashBlade with hardware immutability?
- Can I use a Pure FlashBlade array for a DASH aux copy, with the disk library primary copy as the source and a library created on the FlashBlade as the target?

The backups are streaming and VSA, not IntelliSnap. In the CV documentation and videos, I only see Pure configured as primary backup storage with IntelliSnap backups. Can we use it as a library target for aux copies?
Hi there, I have a customer that will use ExaGrid as backup storage with Commvault. ExaGrid states that it can add to Commvault deduplication to obtain a higher dedupe ratio (up to 20:1 for long-term retention data). I couldn't find any information on ExaGrid in BOL, and my understanding was that we do not use CV deduplication when using deduplicating storage as the primary target. Has anyone implemented CV with ExaGrid? If so, any specifics, gotchas, or best practices? Thanks, Abdel
Hi Community, can we take a backup of an S3 bucket that is 80 TB in size using Commvault? Consider a 10-15% daily change rate. How does Commvault take a backup of S3? Is it a streaming backup, reading objects one by one, which I expect would be very slow, or is some sort of IntelliSnap-like capability available for S3 backups? Regards, Mohit
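For scoping, the arithmetic implied by the post's own figures is worth making explicit. The sketch below computes the daily changed volume and the sustained read rate needed to cover it; the 12-hour backup window is an assumption, not something from the post.

```python
# Daily change volume and required throughput for the bucket in the post:
# 80 TB total, 10-15% daily change (the post's figures), assumed 12 h window.
bucket_tb = 80
window_hours = 12  # hypothetical nightly backup window

for change_rate in (0.10, 0.15):
    daily_tb = bucket_tb * change_rate
    mb_s = daily_tb * 1e12 / (window_hours * 3600 * 1e6)
    print(f"{change_rate:.0%} daily change: {daily_tb:.0f} TB/day, "
          f"~{mb_s:.0f} MB/s sustained over {window_hours} h")
# Roughly 185-280 MB/s sustained object reads: achievable, but small-object
# overhead can dominate, so object count matters as much as total size.
```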
Hello, one of our clients has archive backups on tapes, configured in one location, with that infrastructure managed by a local CommServe. The question is about the possibility of migrating those archive backups to Azure. In the end, the client would like to have all archive backups in Azure, with the ability to manage them from a CommServe in Azure, which is a different machine (a global solution for the organization). My question: what would be the best approach for that migration, and how do we calculate the Azure storage usage for the migration process? Regards, Michal
Hello Commvault Community, I need help with a data pruning issue after Seal DDB (run every 3 months). Micro pruning is disabled, but jobs shouldn't be held for that long anyway (screen1.png). Micro pruning is disabled due to the cost of the operations on the cloud storage; if I remember correctly, Commvault also recommended disabling micro pruning on cloud libraries. We have generated a Forecast report, in which some of the jobs in question are indeed being held because they have not been copied to the secondary copy, but that accounts for only about 30TB, while for example (screen2.png) it says the data is 126TB within one of the sealed DDBs. There are several such databases, and in total about 600TB of data remains in the cloud that should probably have been deleted. I am asking for help in solving the problem. Screenshots and the Forecast report are attached. Thanks & Regards, Kamil
Hello, when using plans we would like to extend the retention time for some clients using selective copies, while reusing the same global deduplication policy. Most of our backups are IntelliSnap snapshots. Is this possible, and how would we best do it? Are there any best practices for this?
Hi! I'm trying to add S3-compatible storage as a cloud library, but I get this error:

3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK

I already troubleshot it and was able to successfully add the storage as a cloud library using "nCloudServerCertificateNameCheck", mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate, issued by:

CN = R3, O = Let's Encrypt, C = US

with root cert:

CN = ISRG Root X1, O = Internet Security Research Group, C = US

So I am wondering if, instead of ignoring all possible certificates, I could just add this one valid certificate to Commvault so it trusts this provider and allows me to configure the DiskLib. Is this possible? Also, not sure if it's related, since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh install.
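In case it helps others hitting CURLcode 60: rather than disabling verification, appending the missing root to the CA bundle that libcurl reads is the usual fix for an out-of-date bundle. The sketch below does exactly that; the bundle path and PEM filename are assumptions (locate the actual curl-ca-bundle.crt on your MediaAgent first), and Commvault services may need a restart afterwards. Back the file up, since an upgrade may replace it.

```python
# Hedged sketch: append a trusted root (e.g. ISRG Root X1) to the CA bundle
# used by libcurl, instead of disabling certificate name checks globally.
from pathlib import Path
import shutil

# Both paths are assumptions; adjust to the real locations on your MA.
bundle = Path(r"C:\Program Files\Commvault\ContentStore\Base\curl-ca-bundle.crt")
new_root = Path(r"C:\temp\isrg-root-x1.pem")  # PEM downloaded from letsencrypt.org

shutil.copy2(bundle, bundle.with_suffix(".bak"))  # keep a backup before editing
with bundle.open("a", encoding="ascii") as f:
    f.write("\nISRG Root X1 (Let's Encrypt)\n")  # label line, matching bundle style
    f.write(new_root.read_text(encoding="ascii"))
```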
We're running CV 11.24.25 with a two-node grid (physical) using CIFS mount paths from a Nexsan Unity, which takes secondary copies from the MAs that perform the backups (no direct backups other than DDB), with a partition on each MA. We decided to replace this with a four-node (virtual) grid with S3 (NetApp) storage. The four-node grid was set up with a global dedupe policy using a 512KB dedupe block size, with a partition on each node. The two-node grid uses the standard 128KB dedupe block size.

We had ~600TB of back-end storage (~3.3PB front-end) and have ~1.75PB of front-end data left to process after about two months of copying. There were 105 storage policies (multi-tenant environment) with retentions ranging from 30 days to 12 years (DB, file, VM, O365 apps), with anything longer than 30 days being extended retention (normally 30 days/1 cycle and then monthly/yearly with extended retention).

We do not seem able to maintain any reasonably high copy rates. Having looked at other conversations here, we've tried…