Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 675 Topics
- 3,384 Replies
I want to move my data from one mount point to other in a DDB setup
I want to move my data from one mount path to another in a DDB setup. These mount paths are on different storage. Basically, we are decommissioning an old storage array that was used to store infinite-retention backups, so we will have to move all that data to new storage. Please suggest the best approach for this.
S3 Compatible Storage Untrusted Certificate
Hi! I’m trying to add S3 Compatible Storage as a Cloud Library, but I get this error:

3292 1194 12/20 10:31:44 ### [cvd] CVRFAMZS3::SendRequest() - Error: Error = 44037
3292 1194 12/20 10:31:48 ### [cvd] CURL error, CURLcode= 60, SSL peer certificate or SSH remote key was not OK

I already troubleshot it and was able to successfully add the storage as a Cloud Library using “nCloudServerCertificateNameCheck”, mentioned in another thread (thanks @Damian Andre!). The thing is that the provider has a valid certificate:

Issued by: CN = R3, O = Let's Encrypt, C = US
Root cert: CN = ISRG Root X1, O = Internet Security Research Group, C = US

So I am wondering if, instead of ignoring all possible certificates, I could just add this one valid certificate to Commvault so it trusts this provider and allows me to configure DiskLib. Is this possible? Also, not sure if that’s related since certificate administration is not my cup of tea, but curl-ca-bundle.crt is dated FEB 2016 on this MA, which is a fresh in
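A possible middle ground between disabling certificate checks and trusting everything is to append just the provider's root certificate (PEM format) to the CA bundle that curl consults. The sketch below shows the general idea only; the bundle path, the helper name, and whether Commvault's curl picks up an edited curl-ca-bundle.crt are assumptions, not verified Commvault behaviour.

```python
# Sketch: append one trusted root certificate (PEM) to a curl CA bundle,
# so only this provider's chain is added instead of disabling checks.
# Paths and the PEM content are placeholders, not Commvault specifics.
from pathlib import Path

def add_cert_to_bundle(bundle_path: str, pem_cert: str) -> bool:
    """Append pem_cert to the bundle unless it is already present.
    Returns True if the bundle was modified."""
    bundle = Path(bundle_path)
    existing = bundle.read_text() if bundle.exists() else ""
    if pem_cert.strip() in existing:
        return False  # already trusted, nothing to do
    with bundle.open("a") as f:
        f.write("\n" + pem_cert.strip() + "\n")
    return True
```

Re-running the helper is a no-op once the certificate is in the bundle, so it is safe to script idempotently; back up the original bundle first.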
What to do after losing backup storage
Hello everyone,

My secondary site has a Windows server connected to HPE MSA disk shelves. Earlier this year we added another disk shelf, but my storage admin now tells me it’s improperly configured, and he wants to correct it, which will result in the loss of all the files on that individual shelf. The storage appears to CommVault as J: (51TB), K: (51TB), L: (51TB), and M: (18TB). It is just the M: allocation that needs to be corrected.

All backups have two copies, with one at my primary and one at my secondary site, so I *should* be able to re-copy the affected backups over to the secondary site once the storage is fixed. Does anyone know if there’s a procedure or web site that shows how to do this? It would be the equivalent of recovering from a disk failure.

Any help would be appreciated.
Ken
Commvault Backing up to Cloud - Reservation lookup
Hi Team,

I am running a job to cloud and the job comes up with “Reservation lookup failure” and:

Error Code: [62:1539]
Description: [Cloud] There is a name lookup error.
Source: NZVMCVSA01, Process: CVD

Any thoughts on the origins of this error? It looks like something in the cloud is the issue. Cheers.

4884 59a0 12/20 14:04:49 206728 Servant [---- IMMEDIATE BACKUP REQUEST ----], taskid  Clnt[NZVMCVSA01] AppType[Windows File System] BkpSet[defaultBackupSet] SubClnt[DDBBackup] BkpLevel[Full]
4884 e0 12/20 14:04:50 206728 Scheduler Phase [4-Scan] (0,0) started on [NZVMCVSA01] in  second(s) - ifind.exe -j 206728 -a 2:1136 -t 1 -d NZVMCVSA01*NZVMCVSA01*8400 -r 1639710108 -ab 0 -i 1 -cs AWSSYD-COMMCELL -s "DDBBackup" -jt 206728:4:1:0:51234 -systemFiles -mountPath -seb -lf 205120 -li 0 -ls 0 -attrEx 0
4884 2d18 12/20 14:04:52 206728 Servant Reg [Control] received. Client [NZVMCVSA01] plattype = 4. Token [206728:4:1:0:51234]
4884 38f8 12/20 14:04:54 206728 Schedule
Optimized tape utilization
We make daily backups of our servers. Every month we write the backup data to tapes. The retention period for these tapes is one year. For December, we create an end-of-year backup with unlimited retention.

Our goal is to have an annual backup for December and reuse the remaining months' tapes after one year. Due to server retirement, there are monthly tapes that we cannot reuse without first deleting their contents (Retain time until: infinite). If we do so, we lose data. Is there a solution to this problem?
Certain tapes not written to after initial writing and before appendable date setting
In my organization, I have two HP MSL4048 tape libraries, each containing two Ultrium drives which use LTO7 tapes with a capacity of 5.47 TB. The firmware for both libraries is the same. The Use Appendable Media setting is 14 days.

For the past two years, on one of the tape libraries, about once every two months one of the empty tapes gets written to but doesn’t fill up to capacity. And if within three days the tape is not written to again, then it will not be written to at all and a different tape will be chosen, even though it’s before the 14-day appendable setting, and its media info properties clearly state Yes, it’s appendable. The other tape library, on the other hand, will always fill tapes to capacity.

The attached file no_writing_prematurely.jpg shows such an example. The last time it was written to was on 12/6/21, and the tape is not full and there are 7 more days for it to be appendable. But it will not be written to at all. As mentioned, this occurs about once a month with a random tape. I won
DDB Data Verification on cloud environment
Hello,

I’m wondering what’s the proper approach to data verification in native cloud environments. The environment is built within the cloud; the CS, MA, cloud libraries, etc. are placed in the same cloud solution, so the infrastructure traffic basically stays within the cloud. The only reference that I’ve found in the docs was:

Tip: By default, the data verification schedule policy that is created by the system is not configured with data mover MediaAgents that use a cloud storage product, because the read operations from the cloud are very slow and are performed on low latency media. If necessary, you can perform the data verification on the cloud storage manually. To run data verification on data that is stored on archive cloud storage, first recall the data to the main cloud storage location. Then you can run the data verification job on the recalled data.

https://documentation.commvault.com/11.24/expert/12567_verification_of_deduplicated_data.html

But in this case the MediaAgents are not data
VSS DDB Backup Failures
Hey all - we are having a VSS failure for our DDB backups. Providers look fine, but it seems like there is some sort of issue with it.

The Windows Application Log error says the provider could not be started. The provider referenced is the Galaxy VSS Provider Service. It is telling us there are no associated devices.

This has been working just fine and then randomly started failing on Nov 22nd. Nothing has changed on the media agent. When running the VSS creation manually, the error is VSS_E_UNEXPECTED_PROVIDER_ERROR.

The customer will most likely open a ticket, but wondering if anyone has any ideas?

PS - we rebooted just for the heck of it...
Cleaning tape deprecated
Hello,

I would like to discuss an issue with a cleaning tape. Our cleaning tape is marked as deprecated and it is not possible to use it for cleaning anymore. Is it possible to somehow reset the counter for the number of uses? We tried to delete the cleaning tape and rediscover it, but Commvault remembers the number of uses. The cleaning tape seems to be new, but the tape library reports that it has been used almost 60 times.

Any help much appreciated!
Reducing data retention (Aux copy)
Hi all, I'm new to this community, so thanks for any help you can give me.

Nearly 2 years ago, we made a misconfiguration in our infrastructure. The result: we have data on tape with the wrong retention date. When I modify the retention date of jobs, I see 2 different behaviors:

- When I increase the retention date: Commvault shows the right date.
- When I reduce the retention date: Commvault doesn't make any changes.

Why? How can I make sure that data expires on the right date? My CommServe is on 11.24.21.

Thanks in advance
Workflow: Cloud Upload and Download Throttling Control
I’ve used the workflow named in the subject a few times and it works. However, when I looked at it today, it seems it was changed at some point in time. The documentation doesn’t correspond to what’s shown in the GUI. It seems that it’s no longer possible to target a specific cloud library. Is it just my installation where this has changed, or has anyone else seen this? Screenshots provided below.

//Henke
how to create metallic license library capacity dashboard widget
We have CommVault Complete licensing and also Metallic Cloud Storage Service (MCSS) licensing. However, the Command Center dashboard lacks a widget for Metallic. Can someone share how to add a widget that would display the % used of a library? This would effectively show MCSS as a % of licensed use. The existing Current Capacity widget only shows the % of licensed Complete usage, not the % of storage used; the MCSS library falls into the latter category, since MCSS is not a “Complete” accounting of backup potential but an actual % of backup used.

From the dashboard:

From Storage > Cloud:

So it would be nice to see the cloud library as a pie chart, similar to the Complete licensing in the Current Capacity widget. How do you make widgets and put them on the dashboard?
Replacing Media Agent Hardware
Our Media Agent needs replacing. The replacement is probably going to be a Dell PowerEdge R5xx or R7xx with dual CPUs and 128GB RAM, and the intention is to use M.2/NVMe for the OS/Commvault binaries and the DDB/indexes, but I’d appreciate any guidance and best practice on the build and storage.

We’re doing a lot of synthetic fulls with small nightly incremental backups, and we aux about 70TB to LTO8 tape every week. The disk library we have right now is approx 50TB.

Is there any best practice that would favour NAS/network storage for the disk library over filling the PowerEdge with large SAS disks? With local disk on Windows Server, is there a preference between NTFS and ReFS, and is there a best practice on mount path size? We’re currently using 4-5TB mount paths carved out as separate Windows volumes on a single underlying hardware RAID virtual disk.

Given modern hardware performance, can anyone see a definite reason to do any more than buy a single PowerEdge for this, other than redundancy/avail
Synthetic Full & Aux Copy Performance
I do several 10-20TB synthetic full jobs each week, which are then aux copied off to tape on a global tape copy policy with a single stream (one LTO drive) with multiplexing enabled.

I noticed that under the advanced settings on the synthetic full job, the “Use Multiple Streams” option is NOT selected. What I think this means is that each 10-20TB synthetic full backup gets written to disk as a single stream, which means it gets aux copied to tape as a single stream. If so, am I right in thinking I want to enable this setting so the synthetic full jobs get written as multiple streams, meaning they’ll get aux copied as multiple streams?

I’m going to be moving the DDB to a new NVMe volume, which seems a good time to review things. We’re on 11.20.73.

Thanks in advance.
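As a rough sanity check on why stream count matters here, one can model the single tape drive as limited by whichever is smaller: its native rate, or the combined read rate of the source streams feeding it via multiplexing. All throughput figures below are illustrative assumptions, not measured Commvault numbers.

```python
# Illustration: if each disk read stream sustains less than the tape
# drive's native rate, multiplexing N streams raises effective drive
# throughput up to the drive's limit. Rates are illustrative only.

def effective_tape_mb_s(stream_read_mb_s: float, n_streams: int,
                        drive_native_mb_s: float = 300.0) -> float:
    """Effective tape write rate, capped at the drive's native rate."""
    return min(stream_read_mb_s * n_streams, drive_native_mb_s)

def hours_to_copy(data_tb: float, rate_mb_s: float) -> float:
    """Hours to copy data_tb terabytes at rate_mb_s megabytes/second."""
    return data_tb * 1e6 / rate_mb_s / 3600

for n in (1, 4):
    rate = effective_tape_mb_s(120, n)
    print(f"{n} stream(s): {hours_to_copy(20, rate):.1f} h for 20 TB")
```

Under these assumed rates, one 120 MB/s stream takes roughly 46 hours per 20 TB job, while four streams saturate the drive and finish in about 18.5 hours.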
Aux Copy size and throughput summary for a given period
Hi Team,

We are in the process of evaluating our recent Aux Copy performance to our DR site, as we are scoping future capacity and bandwidth requirements. It would be very useful if there was a suitable report to show me the rate of throughput and the amount of data copied in a summary format, similar to what you see when running Job reports and aligning by Storage Policy (as opposed to client). Surprisingly, I can’t see a summary of the Aux Copy results in any reports.

Although the Aux Copy entry in the report shows the data copied and the overall throughput, it is only for each individual Aux Copy job. You can see below in this example, where we have a few days' worth of Aux Copies. What I really need is a report with a summary line at the top of all data copied and the overall throughput figure. At the moment, I need to export to CSV and then reorganize the fields so I can add things up manually.

There is one other report which might also be of use, which is the Jobs in Storage Policy Copies
Hello,

I want to use Commvault to back up 10 laptops. The file types used are:

- Video: .MOV, .MP4, .RAW, .BRAW, AVCHD, BOO, DOO, TBL
- Editing files: .FCP or .SRT
- Audio: .MP3, WAVE, AAC

Could I use deduplication and compression on these files? If yes, what will the ratio be?

Thanks.
Best Regards,
Ben
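On the ratio question: formats like MP3, MP4, and AAC are already compressed, so further compression typically saves very little, whereas uncompressed formats such as WAVE may fare better. A quick generic illustration of this using zlib (this is not Commvault's actual compression algorithm, just a way to see the effect):

```python
# Rough check of how much further compression helps different data:
# already-compressed media behaves like random bytes (ratio near 1.0),
# while repetitive/text-like data compresses well.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower is better)."""
    return len(zlib.compress(data, 6)) / len(data)

text_like = b"the quick brown fox jumps over the lazy dog " * 1000
media_like = os.urandom(44000)  # stands in for MP3/MP4 payload bytes

print(f"text-like:  {compression_ratio(text_like):.2f}")
print(f"media-like: {compression_ratio(media_like):.2f}")
```

Deduplication can still help across repeated full backups of the same unchanged files, even when per-file compression gains are minimal.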
Recovery from offsite copy
Hi,

I would like to ask how we can perform recovery of data from our offsite copy. We have a CVfailover setup in our environment; below are the scenarios we need to cover.

- From the CommCell console in Site A, how do we recover our data from the offsite copy, which is on secondary storage in Site B?
- When the CommServe in Site A becomes unavailable and the CommServe server in Site B becomes active after a successful CVfailover: from the CommCell in Site B, how do we recover our data from primary and secondary storage in Site A?
- And how do we recover our data from secondary storage in Site B when the CommCell servers and storage arrays in Site A are also unavailable?

Refer to the attached photo for the CommCell environment. Much appreciated to anyone who can provide a response.

rolan.
Office 365 Backup Storage Calculator
Is there a calculator (or formula) to estimate the required backup storage capacity for Office 365 backups (Exchange Online, OneDrive for Business, SharePoint Online & Teams)? For “normal” backups (files, VMs, databases) there is such a calculator (Link), but I haven’t found anything regarding Office 365. How do others run such capacity calculations for Office 365?

Regards
Chris
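In the absence of an official calculator, a back-of-envelope model is sometimes used: baseline size plus daily change over the retention window, reduced by an assumed data-reduction factor. All the rates and the model itself below are assumptions for illustration, not official Commvault sizing guidance.

```python
# Back-of-envelope Office 365 backup storage estimate. The change rate,
# retention model and reduction factor are illustrative assumptions.

def estimate_o365_storage_gb(source_gb: float,
                             daily_change_rate: float = 0.02,
                             retention_days: int = 30,
                             dedup_compress_factor: float = 0.5) -> float:
    """Full baseline plus daily incrementals over the retention window,
    reduced by an assumed deduplication/compression factor."""
    baseline = source_gb
    incrementals = source_gb * daily_change_rate * retention_days
    return (baseline + incrementals) * dedup_compress_factor

# e.g. 500 GB of Exchange Online data, 2% daily change, 30-day retention
print(round(estimate_o365_storage_gb(500), 1))  # 400.0 GB under these assumptions
```

Real mailbox and SharePoint change rates vary a lot between organisations, so measuring actual daily change for a pilot set of users is usually more reliable than any fixed rate.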
Implementing S3 combined tiers on AWS
Hey all,

We are creating a secondary copy and writing to a combined-tier library. When we create this, do we need 1 bucket or 2? I only created 1 bucket and specified the combined tiers. Does an additional bucket get created, or is a different folder created within the bucket which then gets changed to Glacier? We are planning on using S3-IA/Glacier for the 2nd copy. I guess I just want more specifics on what exactly happens in this process.

Thanks
Melissa
Migrate Cloudlib from AWS to Azure
Hi Community - We have a circa 2PB cloud library currently writing to AWS S3. Our customer is asking if we can migrate this data into Azure. Has anyone out there had experience of the best way to accomplish this? I am thinking that doing this using aux copies is going to take rather a long time for 2 PB!
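The intuition that 2 PB takes "rather a long time" is easy to put numbers on: divide the data volume by the sustained egress/ingress bandwidth. The link speed and efficiency figures below are illustrative assumptions only.

```python
# Quick feasibility math: how long does copying a given volume take at
# a given sustained throughput? All figures are illustrative.

def transfer_days(data_tb: float, gbit_per_s: float,
                  efficiency: float = 0.7) -> float:
    """Days to move data_tb terabytes over a gbit_per_s link that
    sustains `efficiency` of its nominal rate."""
    bytes_total = data_tb * 10**12
    bytes_per_s = gbit_per_s * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_s / 86400

# 2 PB over a 10 Gbit/s link at 70% sustained efficiency
print(f"{transfer_days(2000, 10):.0f} days")  # roughly a month
```

That is the raw transfer time alone, before AWS egress costs, restore-from-archive delays, and rehydration of deduplicated data are considered, which is why bulk cloud-to-cloud migrations are often scoped carefully first.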
DR backup to CV cloud
We are currently performing DR backups to on-prem (multiple locations) and CV Cloud. As viewed in cloud.commvault.com, it appears that we are limited to only 5 iterations of the CV DBs. In the “DR backup” options we have 20 set, but that seems to apply only to on-prem.

Question: Is there a way to expand the number of CV Cloud backups retained?