Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 617 Topics
- 3,237 Replies
We currently use IBM V5000 arrays as our Commvault backup target to land our deduped backups. We are starting to review other options to see what other fast, cost-effective options are out there. I prefer to use Fibre Channel connections, but I am open to alternatives. Since Commvault is really the brain in our scenario, the storage array does not need any special features, just good speed. What vendor storage arrays do you use? Are you happy with them?
Our weekly secondary AuxCopy has been stuck at 30% since this weekend (so it is blocking all the primary disk-to-disk incremental copies), with the two error messages below. Thinking it might be a port communication issue between the Media Server (S01190), where the tape library is attached, and the CommCell Server (S02116), I ran the following port checks between the two servers:

Telnet from the Media Server (S01190) to the CommCell Server (S02116):
- Port 8400: OK
- Port 8401: OK
- Port 8403: OK

Telnet from the CommCell Server (S02116) to the Media Server (S01190):
- Port 8400: OK
- Port 8401: Not OK
- Port 8403: Not OK

Now, before I speak to our network/security administrator, who recently installed SentinelOne AV on both of the above servers, I'm wondering if I'm heading in the right direction, and if I have done all the port checks I should. Thanks, Kelvin
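For scripting these checks, below is a minimal sketch (plain Python, not Commvault tooling) that reproduces the telnet tests with TCP connects. The hostnames and ports are the ones from the post; run it once from each server against the other, since it only tests the outbound direction.

```python
# Minimal port-check sketch (plain TCP connects, not Commvault tooling).
import socket

PORTS = (8400, 8401, 8403)   # the ports tested via telnet in the post

def check_port(host: str, port: int, timeout: float = 5.0) -> str:
    """Attempt a TCP connection; return 'OK' or the failure reason."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except OSError as exc:
        return f"Not OK ({exc})"

if __name__ == "__main__":
    target = "S01190"   # run from S02116; swap to "S02116" when run from S01190
    for port in PORTS:
        print(f"{target}:{port} -> {check_port(target, port)}")
```

A firewall or an endpoint agent such as SentinelOne blocking inbound connections on the MediaAgent would show up here exactly as the telnet results did: OK in one direction, Not OK in the other.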
Hi @Jordan @Mike Struening, I have a question on this topic. I tried to delete the mount path following your advice, but it also sent an authorization mail to the admin, and I get the error below. ERROR CODE [19:857]: waiting on user input [Delete Mount Path [ [cvbackup] H:\P_QNAP (MQNWX2_02.08.2021_13.16) from Library - DiskLibQnap ] requested by [ UMO\mjosko.domadm ]]. View Contents returns an empty list, yet there seems to be data on the disk, as Size on Disk indicates several to several hundred GB (similar to the size of the folder on the disk). Despite the empty View Contents list, the data on the disk was only deleted after some time, and as far as I can see there is still something left. What does the data-erasure mechanism depend on?
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes. I would like to copy data from the LTO4 tapes to the LTO7 tapes. I followed the Media Refresh documentation (commvault.com). I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full, and chose Pick for Refresh. I ran a Media Refresh job and chose the Start new media tab. Now I get the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the Media Refresh operation? Best regards, Elizabeta
Hello, I need to create backup jobs that write all full backups to tape in a new physical library the customer has purchased. Today all backups are on disk (File Library). After the new jobs to tape have executed successfully (3 backup jobs to tape will be done), I need to erase the old full backups on disk and run a new full backup, because the new size will be smaller and will free up more disk space. I'm hoping for your best practices on how to do this. Another question: for the tape library, should I create one storage policy with permanent retention and include all tapes in the same storage policy, or create separate ones, for example for Database, Exchange, etc.? @Mike Struening
Hello, we have created a test Azure Blob storage library to be used for a deduplicated secondary copy. There is an immutability policy set on the container. Per the Commvault documentation, we set the container retention to twice the storage policy copy retention, and set the DDB property "Create new DDB every … days" to the storage policy copy retention. During the backup cycles, sealed DDBs remain that no longer reference any job (all expired). Then, at some point, they are automatically removed (and their baseline is then removed from the cloud storage). These baselines consume a great deal of cloud space (and cost); there are 3 to 4 baselines in the cloud during the backup cycles. Does anybody have experience with cloud library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backup? Which process in Commvault decides when a sealed DDB will be removed? After the test we would like to give a realisti…
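The 3 to 4 baselines observed are roughly what the retention math in the post predicts. Below is a back-of-the-envelope sketch (assumed numbers, not Commvault internals) of how many sealed-DDB baselines can coexist when the container immutability period is twice the copy retention:

```python
# Back-of-the-envelope sketch of baseline overlap (assumptions from the post,
# not Commvault internals): copy retention R days, container immutability 2*R,
# DDB sealed every R days.
R = 30                      # hypothetical copy retention in days
immutability = 2 * R        # container lock period, per the documentation
seal_interval = R           # "Create new DDB every ... days"

# A sealed DDB's baseline cannot be pruned until its newest immutable blocks
# unlock, i.e. up to `immutability` days after sealing. New baselines start
# every `seal_interval` days, so roughly:
coexisting = immutability // seal_interval + 1   # +1 for the active baseline
print(f"~{coexisting} baselines can coexist in the container")   # ~3 here
```

With these settings ~3 concurrent baselines is the expected steady state, and timing skew between seal and unlock can briefly push it to 4.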
My company is planning a tech refresh of our aging Data Domain to a newer model. We have also highlighted backup slowness on some of our large Oracle databases and some NDMP backups. Our current configuration backs up to a VTL on the Data Domain, with no compression or deduplication enabled at the Commvault layer. Dell's sales team advised us to purchase an additional DD Boost license for the new Data Domain, because DD Boost can achieve a very good dedupe and compression rate at the source before transferring to the Data Domain, thus saving network transfer time. However, I've been checking Commvault's KB, and it looks like Commvault only works with BoostFS, not DD Boost. I haven't checked with Dell about this yet. Has anyone implemented DD Boost in your environment for backing up databases/VMs and NDMP?
Hello guys, I have a question: my Commvault license is 3 TB, and I'm using 1.8 TB. I bought a Fujitsu LT20 tape library; the library license I acquired doesn't count against the Commvault File Library licensing, is that correct? If anyone has a page that explains this, thanks.
Good morning. I have a customer backing up in-car videos (local sheriff's department), and he is deduplicating this data. We have created a new standalone dedupe database for this data and are using a selective copy to copy weekly fulls to another target, with a second standalone dedupe database set up for that copy. The customer is having trouble getting this data copied over; it's taking multiple days. The full backup is about 40 TB. We have recommended not using dedupe for this copy, but the customer does not have enough capacity for non-deduped copies. Are there any optimizations that can speed up this aux copy? They currently have only one subclient backing up all the data; if they split this out, would that help with using multiple streams for the aux copy (see the sketch below)? Any other suggestions? Thanks
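For a rough sense of scale on the 40 TB full mentioned above, here is a quick arithmetic sketch of how stream count changes the copy window. The 40 TB figure is from the post; the per-stream rate is a hypothetical placeholder, so substitute the throughput seen in the actual job details:

```python
# Quick arithmetic sketch: elapsed time for a 40 TB aux copy at various
# stream counts. PER_STREAM_GB_HR is a placeholder assumption, not a measured
# or vendor-published rate.
FULL_TB = 40
PER_STREAM_GB_HR = 200      # hypothetical per-stream copy throughput

for streams in (1, 2, 4, 8):
    hours = FULL_TB * 1024 / (streams * PER_STREAM_GB_HR)
    print(f"{streams} stream(s): {hours:6.0f} h (~{hours / 24:.1f} days)")
```

At a single stream the copy window is measured in days no matter the per-stream rate, which is why splitting the data across subclients (and thus streams) is usually the first lever to pull.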
Hello Commvault Community, today I come with a question about the Commvault deduplication mechanism. We noticed that there are two deduplication database engines with identical values except for one parameter: unique blocks. (engine1.png) (engine2.png) The difference between these engines is close to 1 billion unique blocks, while the other values are almost identical. Where could this difference come from? Is there an explainable reason for such a difference, considering the rest of the parameters? DASH copy is enabled between the two deduplication database engines, which are managed by different MediaAgents. Below are examples from two other DDB engines where the situation looks correct; the DASH copy mechanism is enabled there as well. (otherengine1.png) (otherengine2.png) I would appreciate help in explaining what may cause such differences in the number of unique blocks between DDB engines.
---
Another issue is whether, in the case of this deduplication databa…
Hi, I restored the CommServe DR backup to another server. This is the first time I'm doing this, and after the restore I noticed that the configured disk and tape libraries on the destination server have changed to reflect the disk and tape libraries from the original server. I tried to add the destination server as another MediaAgent, but it is not allowed. How do I proceed to use the tape and disk libraries on the destination server after the restore? Please advise.
Hi guys, I have a customer currently using standard MediaAgents with Commvault: copy 1 to disk (PureFlash with NVMe drives), copy 2 to another PureFlash, and copy 3 to tape, around 350 TB per week (weekly fulls) sent to LTO7 tapes across 4 tape drives at a sustained throughput of 700 GB/hour per drive (4 drives in parallel). We are looking to replace both MediaAgents with two HyperScale X clusters. The questions: how do we need to configure the HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB per week at the same throughput, knowing that we are going to use nearline SAS drives in the HyperScale X cluster? Or can we use SSDs for the storage pool drives in a HyperScale X?
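As a quick sanity check on the figures above (only the numbers from the post, no vendor specifics): 4 LTO7 drives at 700 GB/hour each must stream for roughly 128 hours to move 350 TB, so the HyperScale X storage pool has to sustain significant reads on top of its ingest load:

```python
# Sanity-check arithmetic using only the figures quoted in the post.
DRIVES = 4
PER_DRIVE_GB_HR = 700        # sustained throughput per LTO7 drive
WEEKLY_TB = 350

aggregate_tb_hr = DRIVES * PER_DRIVE_GB_HR / 1024    # ~2.73 TB/hour
hours = WEEKLY_TB / aggregate_tb_hr                  # ~128 hours
gb_per_sec = DRIVES * PER_DRIVE_GB_HR / 3600         # ~0.78 GB/s read load
print(f"{aggregate_tb_hr:.2f} TB/h aggregate -> {hours:.0f} h (~{hours/24:.1f} days/week)")
print(f"storage pool must sustain ~{gb_per_sec:.2f} GB/s reads while drives stream")
```

In other words, the tape copy runs for about 5.3 of 7 days each week, so the cluster's read path (nearline SAS or SSD) needs to hold roughly 0.78 GB/s continuously, concurrent with backups landing on the same pool.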
Hello, I have an issue related to the DDB: as shown below, the Q&I time is very high. The MediaAgent serves Oracle and SAP databases only, with daily full backups, around 23 Oracle RAC and 18 SAP clients. The library is on flash storage. The DDB disks are SSD and were moved to Pure Storage (NVMe disks) due to insufficient space on the local disks. Any idea how to remedy this?
Hi, is there a way to use Ransomware Protection on Windows MediaAgents using a disk library on Cluster Shared Volumes? Once Ransomware Protection is activated, the filter driver "CVDLP", at altitude 145180 (encryption), is added to the file system filter stack. This results in redirected I/O on all Cluster Shared Volumes:

BlockRedirectedIOReason : NotBlockRedirected
FileSystemRedirectedIOReason : IncompatibleFileSystemFilter
Name : volume21
Node : node1
StateInfo : FileSystemRedirected

As a result, the cluster events are flooded with warnings: "Cluster Shared Volume 'volume21' ('volume21') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared…"
Hi guys, I finally found the exact article that describes a solution I want to implement, and I am seeking opinions on whether to do this or not. Basically, I want to create multiple selective copies under a storage policy and associate them with different subclients/computers to meet a client's tiering model and different retentions. Because I also want to implement deduplication between the primary copy and each selective copy (weekly and monthly), I'm weighing the option of creating the selective copies using a library instead of a storage pool. In the article below:

Article: https://documentation.commvault.com/commvault/v11/article?p=119730.htm

In step 17b, can I select the partition path as a normal Windows folder, e.g. D:\<randomFolder>? If I create additional selective copies under the same storage policy, can I use the same D:\<randomFolder> to deduplicate data between the primary copy and the additional selective copies, or do I have to create D:\<randomFolder1> and so forth for each copy? I ask because I do n…
Hi, everyone. I have a customer with this exact problem. After a Commvault refresh/reconfiguration that was concluded some months back, we had issues backing up to tape, which we finally traced to the tape drives we were using at the time. We have resolved the drive issue, but copy to tape is still running at a very low speed (as low as 13 GB/hr). Kindly assist us with the below:
- We have sister companies running this same Commvault, and we want to know how their setup differs from ours in ways that make theirs perform better.
- We need to review our architecture to be sure the copy to disk and copy to tape can happen at the same time from the primary source.
- The difference in storage, in terms of I/O and disk RPM, between what we have here and what our sister companies have.
Is there any way I can help them, please?
Hi all! Could you advise me how to troubleshoot the following type of error?

Error Code: [13:138] Description: Error occurred while processing chunk [xxx] in media [xxx], at the time of error in library [disklib01] and mount path [[xxx] /srv/commvault/disklib01/xxx], for storage policy [XXX] copy [Xxx] MediaAgent [svma1]: Backup Job [xxx]. Unable to setup the copy pipeline. Please check connectivity between Source MA [svma1] and Destination MA [svma1].

At a glance, it seems that CV is unable to process a chunk from the (index?)/disk library. However, the issue is connected with the storage policy copy that moves data from the disk library to the tape library (secondary copy). The main problem for us is that it is not possible to copy data to the tapes; hence, perhaps, the "Unable to setup the copy pipeline" message. The MediaAgent is one server/device that communicates with both the disk and the tape library. Lastly, the files in the related directories don't seem to be corrupted. Any suggesti…
Hello, I need to remove a sealed DDB partition in Commvault. This DDB isn't in use; the last write was in 2015, just for a test. Does someone have a procedure to remove it? I also have another DDB partition in the same DDB that is still actively writing. I need to remove just this one DDB partition because its server will be turned off.
I have several cloud libraries where the storage and DDB are controlled by an on-premises MA. I would like to switch several of them to a different on-premises MA, but I cannot seem to find anything in the docs on how to switch MAs for an existing cloud library.
Hi, is there a way to "reset" the 'Picked for Refresh / Prevented' status of media? Some media was marked as Prevented, and the auto-refresh criteria were changed afterwards. How do I see which of these "Prevented" media should now be auto-picked? I can only flip the manually "Prevented" media back to manually "Picked". Thanks.
Hello, I have two tape libraries: an old one with LTO4 tapes and a new one with LTO7 tapes. I created an aux copy separately for each storage policy and successfully copied the content of the LTO4 tapes in the old library to LTO7 tapes in the new library. We plan to shut down the old tape library soon. When that happens, will I be able to restore normally from the LTO7 tapes to which the content of the old LTO4 tapes was copied? Best regards, Elizabeta
The auditors want to see whether my backups are encrypted, and I'm not sure where to go in the CommVault GUI to show that. I don't see anything about encryption in the properties of my storage libraries or my storage policies. Where do I show whether or not my backups are encrypted? Ken