Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Hi,
Is there a way to use Ransomware Protection on Windows MediaAgents with a disk library on Cluster Shared Volumes? Once Ransomware Protection is activated, the filter driver “CVDLP” with altitude 145180 (encryption) is added to the file system filter stack. This results in redirected I/O on all Cluster Shared Volumes:

BlockRedirectedIOReason      : NotBlockRedirected
FileSystemRedirectedIOReason : IncompatibleFileSystemFilter
Name                         : volume21
Node                         : node1
StateInfo                    : FileSystemRedirected

As a result, the cluster events are flooded with warnings:

Cluster Shared Volume 'volume21' ('volume21') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes.
Hi all,
I'm having a performance issue with an auxiliary copy: the data transfer rate is low. There are two MediaAgents installed at this site; one is working fine, the other has the issue. This log capture is from the affected MediaAgent:

|*5850951*|*Perf*|592185| =======================================================================================
|*5850951*|*Perf*|592185| Job-ID: 592185 [Pipe-ID: 5850951] [App-Type: 0] [Data-Type: 1]
|*5850951*|*Perf*|592185| Stream Source: xxxxx
|*5850951*|*Perf*|592185| Simpana Network medium: SDT
|*5850951*|*Perf*|592185| Head duration (Local): [29,June,21 11:58:57 ~ 30,June,21 05:31:55] 17:32:58 (63178)
|*5850951*|*Perf*|592185| Tail duration (Local): [29,June,21 11:58:57 ~ 30,June,21 05:32:31] 17:33:34 (63214)
|*5850951*|*Perf*|592185| ----------------------------------------------------------------------------------------------------------------------------------------
|*5850951*|*Perf*|592185| Perf-Counter
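For a rough sanity check, the head duration in the header above can be converted into an effective transfer rate. A minimal sketch in Python; the job size is a hypothetical placeholder, since the perf counter table is truncated in this excerpt:

    # Rough effective-throughput check from the CVPerfMgr head duration above.
    # The job size is a hypothetical placeholder -- the counter table in this
    # excerpt is truncated, so substitute the job's real transferred size.
    head_seconds = 63178      # from "Head duration ... (63178)"
    job_size_gb = 500.0       # hypothetical; replace with the actual job size in GB

    throughput_gb_per_hr = job_size_gb / (head_seconds / 3600.0)
    print(f"Effective rate: {throughput_gb_per_hr:.1f} GB/hr")

Comparing this figure between the healthy and the slow MediaAgent gives a concrete baseline before digging into the individual counters.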
Hi everyone,
I need to provide figures on the rate of change for our backup data, as we are looking to send data to another location with two weeks' retention. I have a mature, deduplicated environment, so the figures I am seeing on reports and the like are not much use at the moment. I really need to factor in two things:

1 - Expected size of the baseline (I will be creating a new copy targeting the new location)
2 - The rate of change of future backups.

So I have two main questions:

1 - How do I calculate my expected dedupe and compression savings for my first aux copy? I realize this will effectively be copying over an equivalent full backup, since it will be seeding the new library. I am thinking along the lines of assuming a 50% saving (compression and some dedupe combined), but I am wondering whether there is a better or more accurate way of doing this? My data is largely filesystem, so OS and server, but I may need to look at application data too (SQL/Oracle).
2 - How do I calculate my rate of change?
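A minimal sketch of that estimate in Python, for anyone who wants to plug in their own figures; the front-end size, savings ratio, and daily change rate below are all assumptions, not measured values:

    # Back-of-envelope seeding estimate for a new aux copy destination.
    # All inputs are assumptions -- substitute values from your capacity reports.
    front_end_tb = 40.0       # hypothetical full-backup-equivalent application size
    combined_savings = 0.50   # assumed compression + dedupe saving (the 50% above)
    daily_change = 0.02       # assumed 2% daily change against the front-end size

    baseline_tb = front_end_tb * (1 - combined_savings)
    daily_copy_tb = front_end_tb * daily_change * (1 - combined_savings)

    print(f"Estimated seeding baseline: {baseline_tb:.1f} TB")
    print(f"Estimated daily aux copy  : {daily_copy_tb:.2f} TB/day")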
Hi all,
Sorry for yet another question, but I love the quick feedback I get on this platform! :-)
I have a customer running DDBv4 who wants to leverage DDBv5 with Horizontal Scaling. I know that we have the ConvertDDBToV5 workflow, or alternatively we could convert the DDB manually. But would that also enable Horizontal Scaling automatically? Or is there a separate procedure required to get horizontal scaling?
Thanks in advance for your reply!
We are looking into adding an additional layer of offsite backup storage: Amazon S3. The current idea would be to add a third copy to an existing storage policy; this copy would point to the AWS S3 library. Can I aux copy on-premises backup data directly to S3 using physical on-premises MediaAgents? Any suggestions would be appreciated. Thank you.
Hi all,
I have created some new GDSPs and I notice that the DDBs are split into 3 IDs by backup type (Files, Databases and VMs). I created this CommServe (CS2) on FR20.11.22 (currently the CommServe is on FR20.11.55). I have another CommServe, CS1, on the same version 11.20.55, but it was created on SP16. On CS1, when I create a new GDSP, there is only one ID for the DDB. Why can't I create a DDB on CS1 like on CS2? How can I have the same type of DDB on all my CommServes? Do they need a conversion script, or a patch?
Kind regards,
Christophe
Hello,
I have two tape libraries: an old one with LTO4 tapes and a new one with LTO7 tapes. I created an aux copy separately for each storage policy and successfully copied the content from the LTO4 tapes in the old library to the LTO7 tapes in the new library. We plan to shut down the old tape library soon. When that happens, will I still be able to restore normally from the LTO7 tapes to which the content from the old LTO4 tapes was copied?
Best regards,
Elizabeta
Hi there! There is a system-created DDB Verification schedule policy (Data Verification). In our case it starts every day at 6 AM. Is it possible to decrease the frequency of the schedule to, e.g., once a week without any risk? What is the optimum value for the system-created DDB Verification schedule policy? I am asking because there is quite a big amount of data to be processed during this task, which can reduce the performance of other tasks.
Good morning, family. I am fairly new to Commvault. My question: once an aux copy has been copied to tape, will the data still remain on the MediaAgent's disk storage? And if it remains, can I manually delete that data? Most of the mount path LUNs have filled up while we wait for data aging to take effect, and the rate at which our data grows is very fast because this is a big environment. I checked our VM retention policy and it is set to 30 days, so I am wondering whether it is less risky to manually delete some data off the disk. Note: we have 5 HQ MediaAgents with 80 TB each (10 TB per LUN) and 2 DR MediaAgents as well.
We are using a tape environment for our backups. One of the issues I have with Commvault's storage policies is that they don't really differentiate how full backups are handled. Our retention settings are: daily backups (differentials) at 7 days / 1 cycle, weekly fulls at 28 days / 1 cycle, and monthly fulls at 1095+ days. With the storage policy currently running at its defaults, all the data is dumped to one tape, which usually ends up holding monthly, daily and weekly data together. When I pull my monthlies, I sometimes see that the weekly fulls and dailies are eating up space that could be used to fit more monthly fulls onto the tape. What would be the best practice to separate the monthly fulls from the other backup retentions in the storage policy? Would it be something along the lines of creating an aux copy in the storage policy and copying just the last weekly full to the aux copy, to separate the monthly fulls from the other data? Or is
Hi all,
I have a client still running v10 (yes, I know ... their call), and last year additional storage was configured in their CommCell (2 sites). A new library and mount paths were created at each site, and new Primary and DASH copies were created with data paths to the new storage in the respective sites. I'm finding that the storage for the Primary copy at their main site is filling up, and there is nothing obvious in the forecast report. Is there a setting in v10 which corresponds to the Enable Physical Pruning setting in v11? Thanks in advance.
JamieK
Hello,
I'm trying to do some math on the effects of changing our secondary copy to be stored on immutable volumes in Azure. I have a retention of 365 days and currently no DDB sealing, but the recommendation seems to be 180 days. According to the formula in another post, that gives 545 days as the immutability period on the storage volume. Our baseline is 18 TB. Since the baseline is 18 TB and there will be 3 seals of the DDB, does that mean I will add 3 times 18 TB to the volume, since no data pruning will happen for 545 days?
//Henke
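For reference, the arithmetic works out as below. The 545-day figure follows from the formula referenced in the post (retention plus seal interval); the worst-case capacity line encodes the assumption in the question, namely that each sealed baseline stays fully on disk until the immutability window lets it prune:

    # Worked example of the immutability math above.
    # Formula from the referenced post (assumed): immutable days = retention + seal interval.
    retention_days = 365
    seal_interval_days = 180
    baseline_tb = 18.0

    immutable_days = retention_days + seal_interval_days         # 365 + 180 = 545
    sealed_baselines = retention_days // seal_interval_days + 1  # ~3 sealed DDBs co-resident

    # Assumption: each sealed DDB holds a full, unprunable baseline copy.
    worst_case_tb = sealed_baselines * baseline_tb               # 3 * 18 = 54 TB

    print(f"Immutable period : {immutable_days} days")
    print(f"Sealed baselines : {sealed_baselines}")
    print(f"Worst-case usage : {worst_case_tb:.0f} TB")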
Hello,
Scenario: 2 MAs in a CommCell. The customer has bought 1 more MA and wants to add its SSD disk space to the existing DDB. I was reading the manual (Commvault site), and the “before you start” section raises a question: where can I get this “Authenticate Code”?
https://documentation.commvault.com/m/commvault/v11_sp5/article?p=features/deduplication/t_configuring_additional_partitions.htm
Hello,
Sorry, I can't correct the title ("are/or" :)).
On a storage policy that has deduplication enabled I get this message, and I understand why. My question is: can I unselect Extended Retention Rule 1, which is set for 90 days, and instead increase the basic retention to, for example, 90 days? If yes, how many cycles is it good to have for 90 days? Thanks
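As a rough illustration of the cycle question, assuming one full backup per week (an assumption; use your actual schedule), 90 days corresponds to about 13 cycles:

    # Rough cycle count for a 90-day basic retention, assuming weekly fulls.
    retention_days = 90
    days_per_cycle = 7                              # one full per week assumed

    cycles = -(-retention_days // days_per_cycle)   # ceiling division -> 13
    print(f"~{cycles} cycles to span {retention_days} days")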
The tape storage compatibility matrix is getting quite “interesting” over time. I am not sure why there are still references to outdated, unsupported OSes and architectures, but that's more of a comment than a question. I'm trying to verify whether a drive is on the matrix and not coming up with anything. Under the filter section there's no Windows OS. Hmm. If I search for the T50e, Windows 2008 R2 is the newest “supported” OS for it? What? Does this mean that if I open a ticket for a newer OS on that autoloader, support might tell me “sorry, Charlie”? The latest firmware for that unit is BlueScale12.7.07.03-20180817F, and it's not in the matrix at all. Not sure what to make of this. Has anyone seen a similar situation? There is plenty of life left in these units and they are still fully supported by Spectra. Will LTO-8 work?
Thanks
Hi,
Background: our backups are stored on a Data Domain, and the Data Domain replicates itself to a remote site. Additionally, we have daily snapshots on the Data Domain itself to prevent backup data deletion (these snapshots can only be deleted using the sysadmin account). Because we use Data Domain, we store our backups without compression or deduplication. But my question applies equally to the following situation: let's assume I have a MediaAgent with a data disk attached to it (where the disk library lives). The building catches fire and I am only able to unplug the data drive; I lose everything except the data drive. If I install a new CommCell, can I then import the backups from that data drive somehow?
Hi there! Could you please explain why there is a discrepancy between Application Size and Total Data to Process in the Job Controller view? My assumption is that there are some leftovers from previous backups that still need to be backed up or copied. Or is there another reason?
Hi there!
Is there any way to investigate very poor reader time for an NDMP backup in Commvault? A quick look at part of the log suggests slow read time is the culprit behind the poor backup performance:

|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353| Perf-Counter                                      Time(seconds)              Size
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353|
|*1292266*|*Perf*|696353| Replicator DashCopy
|*1292266*|*Perf*|696353| |_Buffer allocation............................ - [Samples - 477421] [Avg - 0.000000]
|*1292266*|*Perf*|696353| |_Media Open................................... 20 [Samples - 5] [Avg - 4.000000]
|*1292266*|*Perf*|696353| |_Chunk Recv...................................
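One way to dig into this is to pull the per-stage counters out of the CVPerfMgr lines and rank them by total time, so the slowest stage stands out. A minimal Python sketch; the regex is an assumption inferred from the log format visible above:

    import re

    # Parse CVPerfMgr-style counter lines, e.g.
    # "|_Media Open................................... 20 [Samples - 5] [Avg - 4.000000]"
    line_re = re.compile(
        r"\|_(?P<name>[^.]+)\.+\s+(?P<secs>[-\d]+)\s+"
        r"\[Samples - (?P<samples>\d+)\] \[Avg - (?P<avg>[\d.]+)\]"
    )

    def slowest_counters(log_lines):
        """Return (counter, total seconds) pairs, slowest first."""
        rows = []
        for line in log_lines:
            m = line_re.search(line)
            if m and m.group("secs") != "-":   # "-" means no time recorded
                rows.append((m.group("name").strip(), int(m.group("secs"))))
        return sorted(rows, key=lambda r: r[1], reverse=True)

    sample = "|*1292266*|*Perf*|696353| |_Media Open................................... 20 [Samples - 5] [Avg - 4.000000]"
    print(slowest_counters([sample]))          # [('Media Open', 20)]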
Hi folks,
I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another. Despite the successful completion of the aux copy, 4 jobs remain in “Partially Copied” status. I ran Data Verification on the Primary copy for those 4 jobs and it completed successfully. I tried Re-Copy, but the status stays the same; I also tried “Do Not Copy” followed by “Pick for Copy”, but it's still the same. “All Backups” is selected in the copy policy. What should I check?
Best regards.