Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Hi, I am facing an error regarding connection to the Deduplication Database. It can be caused by many reasons, but my specific problem is that the directory with the DDB is locked for write. This was confirmed by SIDBEngine.log, and you simply cannot create a folder in the DDB folder. The status of the DDB is ONLINE. I can freely read from it, I can freely traverse through it, and I can copy the folder to a different place, where I can create files/folders inside without limit. There is no way to manage it from CMV - it is not possible to backup, restore, seal, verify, etc. I cannot move the partition. It seems like a Windows/hardware problem, but:
- I checked Windows logs - no events from disk, SCSI, etc.
- I checked the filesystem - all looks fine.
- I checked hardware events (it is an HPE server, so the iLO log, the IML log, and the Smart Array status).
It seems that it is limited only to this DDB folder. On the same drive, same volume, a different directory all works as usual. So there is no write protection at the drive or volume level (checked by dis…
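A scripted version of the write test the poster did by hand can make the symptom easy to reproduce and compare: probe the DDB directory and a sibling directory on the same volume. This is a minimal generic sketch, not a Commvault tool; the two paths are placeholders to substitute.

```python
import os
import tempfile

# Placeholder paths: substitute the real DDB folder and a sibling
# directory on the same volume for comparison.
PATHS = [r"E:\DDB\Partition1", r"E:\OtherFolder"]

def write_test(path: str) -> None:
    """Try to create/delete a temp file and a subfolder in `path`."""
    try:
        # Temp file test
        fd, tmp = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(tmp)
        # Subfolder test (mirrors the "cannot create folder" symptom)
        sub = os.path.join(path, "_write_probe")
        os.mkdir(sub)
        os.rmdir(sub)
        print(f"[OK]   {path} is writable")
    except OSError as exc:
        print(f"[FAIL] {path}: {exc}")

for p in PATHS:
    write_test(p)
```

If the DDB folder fails while its sibling passes, the OSError text (access denied vs. path not found vs. media write-protected) narrows down whether this is ACLs, a filter driver, or something lower in the stack.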
I’m looking to migrate one of my Media Agents to new server hardware and am looking for the best approach with minimal downtime. I was thinking I could set up the new hardware with the media agent role/software and start to move mount paths to the new server. Once all mount paths have been moved to the new MA, I can then update my storage policies to point to the new MA. Is there anything else I need to look out for or would need to do? Any advice or knowledge on this would be great. Thanks
Hello Team, Just thought to discuss with everyone: we have lots of media that have turned “Deprecated” and moved to the Retire Pool, with very little usage. When I look at the information in Media Properties, it shows the message below. When I check the side information, it shows very little usage info. The default CommVault Media Hardware Maintenance settings are shown below. Here is my question: is it advisable to mark those media as good and re-use them in the future? Can someone from MM clarify? @Christian Kubik
I’m a little confused about the limits on multiple partitions and/or DDBs. Limits are mentioned in several places, like:

1) Hardware Specifications for Deduplication Mode
https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html
which mentions “2 DDB Disks” per MA, and

2) Configuring Additional Partitions for a Deduplication Database
https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html
which mentions “30 DDB partitions” per MA.

Also, in a recent discussion with a PS member I was told that a partition should be treated as a DDB itself. All of this creates a lot of confusion, like:
a) Is a “DDB Disk” the same as a DDB?
b) Is a “DDB partition” the same as a DDB?

If you look at 1), under Scaling and Resiliency there is this information about the back-end size of the data. For example: each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud on an extra large MediaAgent. That means…
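Putting the quoted figures side by side as arithmetic may help frame the question. The sketch below assumes back-end capacity scales linearly with the number of 2 TiB DDB disks, which is exactly the ambiguity the poster is asking about, so treat the output as an illustration of the quoted numbers, not a confirmed sizing rule.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
# Assumption (the poster's open question): that back-end capacity
# scales linearly with the number of 2 TiB DDB disks per MediaAgent.

TIB_PER_DDB_DISK_DISK_TARGET = 250   # TiB back-end per 2 TiB DDB disk (disk library)
TIB_PER_DDB_DISK_CLOUD_TARGET = 500  # TiB back-end per 2 TiB DDB disk (cloud library)
DDB_DISKS_PER_MA = 2                 # limit quoted for an extra large MediaAgent

print("Max back-end per extra large MA (disk): "
      f"{DDB_DISKS_PER_MA * TIB_PER_DDB_DISK_DISK_TARGET} TiB")
print("Max back-end per extra large MA (cloud): "
      f"{DDB_DISKS_PER_MA * TIB_PER_DDB_DISK_CLOUD_TARGET} TiB")
```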
Hi there, I would like to ask, in general, which elements are in play during DDB verification? Is there communication between the DDB and the disk library during the DDB verification job? The documentation says that “Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database.” So that means there is communication between the media agent and the master server… The thing is, in our case this verification job, even an Incremental one, takes a very long time. However, the performance of the disk holding the DDB looks quite good.
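Conceptually, cross-verification has to read the data blocks back from the library and recompute their signatures against what the DDB recorded, so the library’s read throughput matters as much as the DDB disk itself. A toy sketch of that idea follows; this is purely illustrative and not Commvault’s actual on-disk format or algorithm.

```python
import hashlib

# Toy model: the "DDB" maps block signatures to the chunk files that
# claim to hold them; the "library" holds the raw blocks. Verification
# re-reads each block and recomputes its signature.

ddb = {}       # signature -> chunk path (stand-in for DDB records)
library = {}   # chunk path -> raw bytes (stand-in for the disk library)

def ingest(path: str, block: bytes) -> None:
    sig = hashlib.sha256(block).hexdigest()
    ddb[sig] = path
    library[path] = block

def verify() -> None:
    for sig, path in ddb.items():
        block = library.get(path)           # read back from the library
        ok = block is not None and hashlib.sha256(block).hexdigest() == sig
        print(f"{path}: {'OK' if ok else 'MISMATCH'}")

ingest("chunk_001", b"unique block A")
ingest("chunk_002", b"unique block B")
library["chunk_002"] = b"corrupted data"    # simulate bad media
verify()
```

The point of the sketch: even an incremental verification must read blocks back from the library, so a slow verification job can be a library-read bottleneck even when the DDB volume benchmarks well.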
Hi Team, My DDB Backup operations are failing with the following error message: “The snapshot of the Dedupe database from the previous attempt of this job is not available and a new one could not be created; as the job cannot continue under this condition, it will fail.” I can’t really find anything in the Commvault logs outside of a few VSS-related errors. What could this mean? Regards, Winston
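Since the failure points at VSS, one first check is whether any VSS writers are in a failed state on the MediaAgent. The sketch below is a small wrapper around the standard Windows `vssadmin list writers` command (run from an elevated prompt); it is a generic OS-side check, not a Commvault utility.

```python
import subprocess

# Runs the built-in Windows VSS writer inventory and flags writers
# whose state or last error looks unhealthy. Requires elevation.
out = subprocess.run(
    ["vssadmin", "list", "writers"],
    capture_output=True, text=True, check=True,
).stdout

writer = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Writer name:"):
        writer = line.split(":", 1)[1].strip()
    elif line.startswith(("State:", "Last error:")):
        # Anything other than a Stable state / "No error" deserves a look.
        if "No error" not in line and "Stable" not in line:
            print(f"{writer}: {line}")
```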
Hi Team, Hope we are all doing fine. I would like to get ideas on a topic. I have a client who uses Dell EMC Data Domain as their backup repository, and they do not use Commvault deduplication at all; they rely on dedup on the storage. The backups do not seem to be flying at the moment. I have done a CVDisk test and the storage is not doing badly. I have also done a CVNetwork test and the throughputs are great. I suspect the storage deduplication is the culprit… I am not able to check the Q&I time because it is not a CV DDB, and the storage team cannot really help check, as they have no idea what is going on. I proposed a new Media Agent to load balance the workload, and I would like to know if they can use CV dedupe for the new storage policies with the same Data Domain storage. The point man says they have 3 TB of 10k SAS, which is a bit manageable for CV dedupe. Please advise.
Hi Community, Does enabling the Ransomware Protection feature on a Windows MediaAgent make my disk library and backup copies immutable? Do we also need WORM enabled on primary or secondary copies, even after enabling this CV native feature, for foolproof ransomware protection? If yes, what is the use of the Ransomware Protection feature? Regards, Mohit
Hi Team, My DDB Backup operations are failing with the following error message: “Snap creation failed on volumes holding DDB paths.” A quick review of the job logs points to insufficient space (0 extents). What could this mean? Regards, Winston
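“0 extents” during snap creation usually suggests the shadow copy storage area on the volume could not grow. One quick way to inspect it is the built-in Windows `vssadmin list shadowstorage` command; the sketch below just surfaces the relevant lines of its output (a generic Windows check, not a Commvault tool).

```python
import subprocess

# Shows how much shadow-copy storage each volume has used/allocated and
# its ceiling; a maxed-out or tiny allocation can starve snapshot
# creation. Run from an elevated prompt on the MediaAgent holding the
# DDB volumes.
out = subprocess.run(
    ["vssadmin", "list", "shadowstorage"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("For volume:", "Used", "Allocated", "Maximum")):
        print(line)
```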
Yesterday our environment did an automatic upgrade from 11.20.xx to 11.24.29. Previously I had absolutely zero issues with data verification, backups, and aux copies, and all jobs showed as passing data verification. Now I’m seeing errors/issues on aux copies and data verification along the lines of:

Failed to process chunk in media [V_98073], at the time of error in library [Disk Library] and mount path [[ma02] C:\Mount\maglib04], for storage policy [Backup to Disk] copy [Primary] MediaAgent: Data read from media appears to be invalid.

We’re using global dedupe and lots of synthetic full backups, so I’m trying a proper full backup on a problem job to see what happens. Before I raise this with support, does anyone have any suggestions or thoughts, please? This is a Windows MA environment where there have been zero issues; this literally started to happen after the 11.24.29 install. Obviously this doesn’t seem good :(
Hi Team, we have an MA which has four mount points, all coming from a backend SAN. Out of the four, one mount point was showing an error while writing data. I checked Disk Management, and the disk was showing as read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk, which worked fine, but when I run the storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
Hi Everyone, I need to provide figures on the rate of change for our backup data, as we are looking to send data to another location with two weeks’ retention. I have a mature, deduplicated environment, so the figures I am seeing on reports and the like are not much use at the moment. I really need to factor in two things:
1 - The expected size of the baseline (I will be creating a new copy, targeting the new location).
2 - The rate of change of future backups.

So I have two main questions:
1 - How do I calculate my expected dedupe and compression savings for my first aux copy? I realize this will effectively be copying over an equivalent full backup, since it will be seeding the new library. I am thinking along the lines of assuming a 50% saving (compression and some dedupe combined), but I am wondering if there is a better or more accurate way of doing this? My data is largely filesystem, so OS and server, but I may need to look at application data too (SQL/Oracle).
2 - How do I calculate my…
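As a starting point, the 50% assumption can at least be turned into a transfer-size and seeding-time estimate. A back-of-the-envelope sketch follows; the front-end size, savings ratio, and link speed are all assumed placeholder values to be replaced with real figures.

```python
# Back-of-the-envelope seeding estimate. All inputs are assumptions;
# substitute real front-end sizes and link speed before relying on it.

front_end_tb = 100          # assumed size of an equivalent full backup
savings = 0.50              # assumed combined compression + dedupe saving
link_mbit_s = 500           # assumed usable WAN bandwidth, megabits/sec

transfer_tb = front_end_tb * (1 - savings)
transfer_bits = transfer_tb * 1e12 * 8          # decimal TB -> bits
seconds = transfer_bits / (link_mbit_s * 1e6)

print(f"Estimated baseline to transfer: {transfer_tb:.1f} TB")
print(f"Estimated seeding time: {seconds / 86400:.1f} days")
```

With these placeholder inputs the answer is 50 TB and roughly 9 days of continuous transfer, which is the kind of sanity figure the seeding plan needs before the savings ratio is refined.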
Hi everyone, This is my first post. I was trying to restore some files I lost, from an earlier backup. I could view the files and folders, and I selected the copy precedence to be 1 (my primary copy). However, whenever the operation starts, it gives the error “Failed to read media during restore” and stops at 5%. Everything seemed fine until I tried to skip the errors, and I found out that the folders are restored, but there are no files in them. Please help out. Is there something in the settings I have to change?
Hello, I need to remove a sealed DDB partition in Commvault. This DDB isn’t in use; the last write was in 2015, just for a test. Does someone have a procedure to remove it? I also have another DDB partition in the same DDB, and that one is still recording. I need to remove just this one DDB partition, because the server will be turned off.
We currently have a ‘dual-site’ scenario, each site with 2 media agents attached to a Dell/EMC ME4084 disk library. Commvault is configured with a CommCell in each site, with failover enabled. Backup images are secured in each local site and then a secondary copy is replicated to the alternate site. As I am sure is common, questions are being raised about immutable backups in this CV environment. I have seen documentation regarding immutability of cloud-based backups, and discussions of WORM technology, but I am unsure what applies to us here with our Commvault / disk library configuration. V11 SP20. Any input appreciated…
Since the other thread was marked as solved, I’ll start a new one. From what I understand, Network Throttling on the media agent only affects backup jobs, not the bandwidth of aux copy jobs. There is an option to limit aux copy bandwidth usage by setting the advanced setting “Throttle Network Bandwidth (MB/HR)” on the storage policy copy. Unfortunately, that affects usage all day, as there is no possibility to set it per time interval. I have two aux copy jobs running for two different storage policy copies, with the bandwidth limitation set to 5000 MB/HR, which is roughly 11 Mbit/s. So running two of those shouldn’t use more than 22 Mbit/s. Looking at the current throughput, they are at 3.92 GB/hr and 2.97 GB/hr in the CommCell console; combined, that gives 6.89 GB/hr, roughly 15 Mbit/s. I understand that it’s not 100% accurate, as the data is deduplicated. But looking at our monitoring, I see two streams going to Azure, one using 30 Mbit/s and the other at 20 Mbit/s. In fact, the aux copies tend to…
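The unit conversions in the post check out. A small helper makes the MB/HR to Mbit/s arithmetic explicit (assuming decimal megabytes, which matches the poster’s own figures):

```python
# Convert a Commvault-style MB/HR throughput figure to Mbit/s.
# Assumes decimal units (1 MB = 10^6 bytes), which matches the
# poster's own arithmetic (5000 MB/HR ~= 11 Mbit/s).

def mb_per_hour_to_mbit_s(mb_per_hour: float) -> float:
    return mb_per_hour * 8 / 3600  # MB/hr -> Mbit/s

for label, value in [("Throttle setting", 5000),
                     ("Job 1 (3.92 GB/hr)", 3920),
                     ("Job 2 (2.97 GB/hr)", 2970)]:
    print(f"{label}: {mb_per_hour_to_mbit_s(value):.1f} Mbit/s")
```

That yields about 11.1 Mbit/s for the throttle setting and 8.7 + 6.6 ≈ 15.3 Mbit/s for the two jobs combined, so the console figures are indeed inside the configured cap, while the 30 + 20 Mbit/s seen at the network level is roughly what the deduplicated-versus-wire-data gap would have to explain.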
We’re setting up a POC to use a cloud MA to copy longer-term retention copies (1 and 7 year) from Azure cool blob storage to archive, and we would like to use combined tier storage in the library where the long-term copies will be kept. This being our first time configuring combined tier, I tried to find documentation describing how to configure it, but as of yet I have not been able to. One question I’m hoping to answer: do we need to (or can we) pre-create the cool and archive storage accounts that will be used when configuring the new library, or is there some other way this gets done?
So the ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for “No Index Backup in the last 30 days”. I don’t know why they are not storing the index information on the media agents, but are instead storing the index data to tape. How do I get a client that is on this list to back up its index to the media agent, so I can get it off this list?
Hi all, after some time we are facing another serious issue: there is no available space on the disk library. Ayayay. We have tried to find out whether there are any unprunable jobs. There were some, and therefore we set the option to ignore cycle retention for disabled subclients. Unfortunately, only a small amount of GBs has been aged. Now the question is what to do next. I have no idea how to find what can be deleted in order to make more free space for the backups. Moreover, there will be quite a big deduplication ratio, so even manual deletion of some jobs may not be useful. One possibly useful piece of information: during the last month there was an increase in data of circa 10 TB, which is a 10 percent increase. Is there a way to figure out what data caused this increase? Is there any general rule or useful tool within Commvault to fight this issue?
Hi, I try to run a data verification, but the job gives this error:

Error Code: [13:138] Description: Error occurred while processing chunk in media [V_], at the time of error in library [LibStorage] and mount path [[LibStorage] R:\], for storage policy [SP_BackupSystem] copy [Aux_Disk] MediaAgent: Backup job . Mount path inaccessible. Source: , Process: AuxCopyMgr
We are working on a Commvault with FalconStor VTL POC. Our CommServe is running SP20.17. At first we were provided with an emulated HP tape library with LTO4 drives. When we initiated the backup, it failed with the error below. Then we also tried an emulated HP tape library with LTO7 drives, and received almost the same error as before. We have also already updated the tape drive driver on Windows, but it is still the same. FYI, we are using Windows 2016 with Commvault SP20.17 on the CommServe server, which runs on a VM. Need someone’s opinion. Please help. Thanks.