Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,252 Replies
Hi Team, my DDB backup operations are failing with the following error: "Snap creation failed on volumes holding DDB paths." A quick review of the job logs points to insufficient space (0 extents). What could this mean? Regards, Winston
Hello Team, just a topic for general discussion: we have a lot of media marked "Deprecated" and moved to the Retire Pool with very little usage. When I look at the Media Properties it shows the message below, and the Side information also shows very low usage. The default Commvault Media Hardware Maintenance settings are shown below as well. My question: is it advisable to mark this media as good and re-use it in the future? Can someone from MM clarify? @Christian Kubik
Hi all! Could you advise me how to troubleshoot the following type of error: Error Code: [13:138] Description: Error occurred while processing chunk [xxx] in media [xxx], at the time of error in library [disklib01] and mount path [[xxx] /srv/commvault/disklib01/xxx], for storage policy [XXX] copy [Xxx] MediaAgent [svma1]: Backup Job [xxx]. Unable to setup the copy pipeline. Please check connectivity between Source MA [svma1] and Destination MA [svma1]. At a glance, it seems Commvault cannot process the chunk from the (index?)/disk library... However, the issue is connected with the storage policy copy that moves data from the disk library to the tape library (secondary copy). The main problem for us is that it is not possible to copy data to tape, which may be why it says "Unable to setup the copy pipeline". The MediaAgent is a single server/device that communicates with both the disk and tape libraries. Lastly, the files in the related directories don't seem to be corrupted... Any suggestions?
I’m a little confused about the limits on multiple partitions and/or DDBs. Limits are mentioned in several places, such as: 1) Hardware Specifications for Deduplication Mode (https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html), which mentions "2 DDB Disks" per MA, and 2) Configuring Additional Partitions for a Deduplication Database (https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html), which mentions "30 DDB partitions" per MA. Also, in a recent discussion, a PS member told me a partition should be treated as a DDB in its own right. All of this creates a lot of confusion: a) is a "DDB Disk" the same as a DDB? b) is a "DDB partition" the same as a DDB? If you look at 1) under Scaling and Resiliency, there is this information on the back-end size of the data, for example: each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud on an extra large MediaAgent. That means
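Taking the quoted figures at face value, the implied back-end capacity per MA is just DDB disks multiplied by the per-disk figure. A minimal sketch of that arithmetic (the 250/500 TiB numbers are the ones quoted from the doc for an extra large MediaAgent, not values I have verified independently):

```python
# Figures quoted in the post for an extra large MediaAgent (assumption:
# taken as-is from the linked doc, not independently verified).
PER_DISK_BACKEND_TIB = {"disk": 250, "cloud": 500}

def max_backend_tib(ddb_disks: int, target: str) -> int:
    """Implied back-end capacity for a MediaAgent with N 2 TiB DDB disks."""
    return ddb_disks * PER_DISK_BACKEND_TIB[target]

print(max_backend_tib(2, "disk"))   # 500
print(max_backend_tib(2, "cloud"))  # 1000
```

So two DDB disks would imply up to 500 TiB of disk back-end (or 1 PiB for cloud) on that MA size, which is a different axis from the 30-partition limit.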
Hi Community, does enabling the Ransomware Protection feature on a Windows MediaAgent make my disk library and backup copies immutable? Do we still need WORM-enabled primary or secondary copies even after enabling this native CV feature for foolproof ransomware protection? If so, what is the use of the Ransomware Protection feature? Regards, Mohit
Hi Commvault people. I have a large partitioned DDB that has been writing to a cloud-based library for some time. The DDB partitions are roughly 2 TB in size. As is recommended when writing to cloud libraries, it should be sealed at some point, and I would like to go ahead; we are also on the cusp of the maximum threshold for Q&I times. However, I need to make sure I have enough space for the DDB on the current volumes. So the question is: what happens to the old DDB? I assume it will remain at 2 TB until there are corresponding references for its blocks in the new DDB, or the blocks eventually age out and are no longer required. That could take months, quite probably. As it ages out old blocks, will the old DDB shrink? And what can I expect from the new DDB? If I only have a 3 TB volume and 2 TB is taken up by the old DDB, then I really only have 1 TB available. If anyone has recently been through this scenario, it
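The space question above is really just headroom arithmetic: while the sealed DDB and the growing new DDB coexist on the same volume, the free space is the volume size minus both. A trivial sketch (the projected new-DDB size is an assumption you would have to supply, e.g. from the current DDB's growth rate):

```python
def ddb_headroom_tb(volume_tb: float, old_ddb_tb: float, projected_new_tb: float) -> float:
    """Free space on the DDB volume while the sealed (old) DDB and the
    growing new DDB coexist. Negative means the volume is too small."""
    return volume_tb - old_ddb_tb - projected_new_tb

# Numbers from the post: 3 TB volume, 2 TB sealed DDB.
print(ddb_headroom_tb(3.0, 2.0, 0.5))  # 0.5 TB left if new DDB reaches 0.5 TB
print(ddb_headroom_tb(3.0, 2.0, 1.5))  # -0.5: volume exhausted
```

This is only bookkeeping, not a model of how fast the sealed DDB actually shrinks as blocks age out; that rate depends on retention and reference counts.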
Hello to all! For job replication (secondary copies), I see that these two options are available: Auxiliary Copy and DASH Copy. What are the pros and cons of each? Is DASH Copy for deduplicated data (with the Commvault dedupe engine) and Auxiliary Copy for the remaining "transitional data"/jobs? Let's share our thoughts. Nikos
Hello, I need to remove a sealed DDB partition in Commvault. This DDB isn't in use; the last write was in 2015, just for a test. Does anyone have a procedure to remove it? There is another partition in the same DDB that is still recording. I need to remove just this one partition because the server will be turned off.
So the ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for "No Index Backup in the last 30 days". I don't know why the index information is not being stored on the MediaAgents but is instead being written to tape. How do I get a client on this list to back up its index to the MediaAgent so I can clear it from the list?
Yesterday our environment did an automatic upgrade from 11.20.xx to 11.24.29. Previously I had absolutely zero issues with data verification, backups, and aux copies; all jobs showed as passing data verification. Now I'm seeing errors on aux copies and data verification along the lines of: "Failed to process chunk in media [V_98073], at the time of error in library [Disk Library] and mount path [[ma02] C:\Mount\maglib04], for storage policy [Backup to Disk] copy [Primary] MediaAgent: Data read from media appears to be invalid." We're using global dedupe and lots of synthetic full backups, so I'm trying a proper full backup on a problem job to see what happens. Before I raise this with support, does anyone have any suggestions or thoughts? This is a Windows MA environment where there had been zero issues; this literally started to happen after the 11.24.29 install. Obviously this doesn't seem good :(
We currently have a dual-site scenario, each site with 2 MediaAgents attached to a Dell/EMC ME4084 disk library. Commvault is configured with a CommServe in each site, with failover enabled. Backup images are secured in each local site, and a secondary copy is replicated to the alternate site. As I'm sure is common, questions are being raised about immutable backups in this environment. I have seen documentation on immutability for cloud-based backups, and discussions of WORM technology, but I'm unsure what applies to our Commvault/disk library configuration. V11 SP20. Any input appreciated.
Hi everyone, I need to provide figures on the rate of change for our backup data, as we are looking to send data to another location with two weeks' retention. I have a mature, deduplicated environment, so the figures I see on reports are not much use at the moment. I really need to factor in two things: 1) the expected size of the baseline (I will be creating a new copy targeting the new location), and 2) the rate of change of future backups. So I have two main questions. 1) How do I calculate my expected dedupe and compression savings for the first AuxCopy? I realize this will effectively copy over the equivalent of a full backup, since it will be seeding the new library. I am thinking of assuming a 50% saving (compression and some dedupe combined), but is there a better or more accurate way? My data is largely filesystem (OS and server), but I may need to look at application data too (SQL/Oracle). 2) How do I calculate my
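The estimate described above can be sketched in a few lines. This is only the poster's own assumption formalized: a combined compression + dedupe savings ratio (0.5 = the 50% guess from the post, not a measured value) applied to the front-end full size for seeding, and to the daily changed data thereafter:

```python
def seeded_baseline_tb(front_end_full_tb: float, combined_savings: float = 0.5) -> float:
    """Rough size of the first aux copy into an empty library, assuming a
    combined compression+dedupe savings ratio (0.5 is the post's 50% guess)."""
    return front_end_full_tb * (1.0 - combined_savings)

def daily_copy_tb(daily_change_tb: float, combined_savings: float = 0.5) -> float:
    """Rough size of each subsequent incremental aux copy."""
    return daily_change_tb * (1.0 - combined_savings)

print(seeded_baseline_tb(100.0))   # 50.0 TB to seed a 100 TB front end
print(daily_copy_tb(2.0))          # 1.0 TB per day of changed data
```

A more accurate ratio could be read off an existing copy (application size vs. size on disk for the current DDB) rather than assumed.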
Our weekly secondary AuxCopy has been stuck at 30% since this weekend (so it is blocking all the primary disk-to-disk incremental copies), with the two error messages below. Thinking it might be a port communication issue between the Media Server (S01190), where the tape library is attached, and the CommCell Server (S02116), I ran the following port checks between the two servers. Telnet from the Media Server (S01190) to the CommCell Server (S02116): port 8400 OK, port 8401 OK, port 8403 OK. Telnet from the CommCell Server (S02116) to the Media Server (S01190): port 8400 OK, port 8401 Not OK, port 8403 Not OK. Now, before I speak to our Network/Security administrators, who recently installed SentinelOne AV on both of these servers, I'm wondering whether I'm heading in the right direction and whether I have done all the port checks. Thanks, Kelvin
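The telnet checks above can be scripted so both directions are tested the same way every time. A minimal sketch using only a TCP connect (the hostnames and the 8400/8401/8403 port list are taken from the post; run it from each server toward the other):

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hostname and ports from the post; adjust for the direction being tested.
    for port in (8400, 8401, 8403):
        print(port, "OK" if check_port("S01190", port) else "Not OK")
```

Given that 8401/8403 fail only in one direction after an AV install, checking the new endpoint agent's firewall/network-control rules does look like the right next step.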
Hi Team, hope we are all doing fine. I would like to get ideas on a topic. I have a client who uses Dell EMC Data Domain as their backup repository and does not use Commvault deduplication at all; they rely on dedupe on the storage. The backups do not seem to be flying at the moment. I have done a CVDisk test and the storage is not doing badly; I have also done a CVNetwork test and the throughputs are great. I suspect the storage deduplication is the culprit. I am not able to check Q&I times because it is not a CV DDB, and the storage team cannot really help, as they have no idea what is going on. I proposed a new MediaAgent to load-balance the workload, and I would like to know if they can use CV dedupe for the new storage policies with the same Data Domain storage. The point man says they have 3 TB of 10k SAS, which is a bit manageable for CV dedupe. Please advise.
Hello, we have created a test Azure Blob library to be used for a deduplicated secondary copy. An immutability policy is set on the container. Per the Commvault documentation, we set the container retention to twice the storage policy copy retention, and set the DDB property "create new DDB every n days" to the copy retention value. During backup cycles, sealed DDBs remain that no longer reference any jobs (all expired). Then, at some point, they are automatically removed (and then their baselines are removed from the cloud storage). These baselines consume a great deal of cloud space (and cost); there are 3 to 4 baselines in the cloud during backup cycles. Does anybody have experience with cloud library deduplication with immutable blobs? Is more than 3x the space really necessary for the backups? Which Commvault process decides when a sealed DDB will be removed? After testing we would like to give a realisti
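The 3-to-4-baselines observation is roughly what a back-of-the-envelope model predicts. This is a toy model, not Commvault's actual pruning logic: assume store i holds data written during days [i·S, (i+1)·S), is sealed at (i+1)·S, and can only be deleted once its last job has aged out ((i+1)·S + R) and its last blob's immutability lock has lapsed ((i+1)·S + L):

```python
def max_concurrent_baselines(seal_days: int, job_retention_days: int,
                             lock_days: int, horizon_days: int = 1800) -> int:
    """Toy model (an assumption, not Commvault's documented behavior):
    store i is alive from day i*S until (i+1)*S + max(R, L); count the
    peak number of coexisting stores (i.e. baselines in the cloud)."""
    S, hold = seal_days, max(job_retention_days, lock_days)
    peak = 0
    for day in range(horizon_days):
        alive = sum(1 for i in range(day // S + 1)
                    if i * S <= day < (i + 1) * S + hold)
        peak = max(peak, alive)
    return peak

# Post's setup: seal every R days, container lock = 2R. With R = 30:
print(max_concurrent_baselines(seal_days=30, job_retention_days=30, lock_days=60))  # 3
```

Under these assumptions, 3 concurrent baselines is the steady state, so roughly 3x the space for the deduplicated copy is expected, consistent with what you are seeing.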
I’m looking to migrate to new Server Hardware for one of my Media Agents and looking for best approach and minimal downtime.Was thinking I could setup the new hardware with the media agent role/software and start to move mount paths to the new server. Once all mount paths have been moved to the new MA I can then update my storage policies to point to the new MA. Is there anything else I need to look out for or would need to do?Any advice or knowledge on this would be great.Thanks
Since the other thread was marked as solved, I'll start a new one. From what I understand, Network Throttling on the MediaAgent only affects backup jobs, not AuxCopy bandwidth. There is an option to limit AuxCopy bandwidth by setting the advanced option "Throttle Network Bandwidth (MB/HR)" on the storage policy copy. Unfortunately, that applies all day, as there is no way to set it per time interval. I have two AuxCopy jobs running for two different storage policy copies, each with the bandwidth limit set to 5000 MB/HR, which is roughly 11 Mbit/s. So running two of them shouldn't use more than 22 Mbit/s. Looking at the current throughput in the CommCell Console, they show 3.92 GB/hr and 2.97 GB/hr, which combined gives 6.89 GB/hr, roughly 15 Mbit/s. I understand it's not 100% accurate since the data is deduplicated. But looking at our monitoring, I see two streams going to Azure, one using 30 Mb/s and the other 20 Mb/s. In fact the AuxCopies tend to
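For reference, the unit conversion used above can be checked directly (assuming decimal units throughout, i.e. 1 MB = 8 Mbit and 1 GB/hr = 1000 MB/hr):

```python
def mb_per_hr_to_mbit_per_s(mb_per_hr: float) -> float:
    # Decimal units assumed: 1 MB = 8 Mbit, 1 hour = 3600 s.
    return mb_per_hr * 8 / 3600

print(round(mb_per_hr_to_mbit_per_s(5000), 1))  # 11.1  -> the throttle setting
print(round(mb_per_hr_to_mbit_per_s(6890), 1))  # 15.3  -> combined 6.89 GB/hr
```

So the console throughput figures are consistent with the throttle, while the 30 + 20 Mb/s seen on the wire is not; that gap would be the rehydration/overhead side of the question.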
Hi everyone, this is my first post. I was trying to restore some files I lost from an earlier backup. I could browse the files and folders, and I selected copy precedence 1 (my primary copy). However, whenever the operation starts, it gives the error "Failed to read media during restore" and stops at 5%. Everything seemed fine until I tried to skip the errors, at which point I found the folders were restored but with no files in them. Please help. Is there something in the settings I have to change?
We’re setting up a POC using a cloud MA to copy longer-term retention copies (1 and 7 year) from Azure cool blob storage to the archive tier, and would like to use combined tier storage for the library where the long-term copies will be kept. This is our first time configuring combined tier, and so far I have not been able to find documentation describing how to configure it. One question I'm hoping to answer: do we need to (or can we) pre-create the cool and archive storage accounts that will be used when configuring the new library, or is this done some other way?
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively. Issue: after upgrading to 11.20 or higher, NFS mount paths show as offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options". CVMA.log on the MediaAgent will show: 102415 1901f 01/13 19:06:53 ### WORKER [96/0/0 ] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=126.96.36.199,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]. Cause: checking the NFS mount options by running mount -v will reveal the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm. However, 11.20 has enforced the check. This was done due to issues where
Hi, I am facing an error connecting to the Deduplication Database. It can have many causes, but my specific problem is that the directory holding the DDB is locked for writes. This was confirmed by SIDBEngine.log, and indeed you simply cannot create a folder inside the DDB folder. The DDB status is ONLINE. I can freely read from it, freely traverse it, and copy the folder to a different place, where I can then create files/folders inside without limit. There is no way to manage it from the CommCell: backup, restore, seal, verify, etc. are not possible, and I cannot move the partition. It looks like a Windows/hardware problem, but... I checked the Windows logs: no events from disk, SCSI, etc. I checked the filesystem: all looks fine. I checked hardware events (it is an HPE server, so iLO log, IML log, and Smart Array status). The problem seems limited to this one DDB folder; on the same drive, same volume, a different directory works as usual. So there is no write protection at the drive or volume level (checked by dis
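A quick way to reproduce and narrow down the symptom described above is a write probe: attempt the same create-a-file operation SIDBEngine fails on, per directory, and compare the DDB folder against its siblings. A minimal sketch (the probe filename is arbitrary):

```python
import os
import uuid

def dir_is_writable(path: str) -> bool:
    """Probe whether we can actually create (and remove) a file in `path`.
    Returns False on any OS-level failure (permissions, locks, missing dir)."""
    probe = os.path.join(path, f".cv_write_probe_{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    # Compare the DDB folder against a sibling directory on the same volume.
    for d in (r"D:\DDB\Partition1", r"D:\Other"):  # hypothetical paths
        print(d, "writable" if dir_is_writable(d) else "WRITE BLOCKED")
```

If the probe fails only in the DDB directory while siblings pass, the next suspects are filesystem ACLs/attributes on that directory and filter drivers (AV, or Commvault's own ransomware protection driver, which locks DDB and mount paths against non-Commvault writers).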