Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 777 Topics
- 3,676 Replies
New to HyperScale nodes and trying to figure out how to increase the size available for the DDB paths. We have multiple GDSPs using the HyperScale nodes for their DDBs and are receiving warnings that free space on the DDB MediaAgent is very low. Looking at the disk space, it appears there is 1.4 TB left on the mount path. I'm a Windows person, so maybe I'm not understanding? Is there a way to give more space to the DDBs? Thanks
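For a quick check of how much space is actually left on a DDB mount path, here is a minimal Python sketch (HyperScale nodes are Linux-based; the path below is a placeholder for whatever is configured as the DDB partition path):

```python
import shutil

# Placeholder path -- substitute the DDB path shown in the MediaAgent's
# deduplication database properties.
ddb_path = "/ws/ddb"

usage = shutil.disk_usage(ddb_path)
tib = 1024 ** 4
print(f"Total: {usage.total / tib:.2f} TiB")
print(f"Used:  {usage.used / tib:.2f} TiB")
print(f"Free:  {usage.free / tib:.2f} TiB")
```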
I am testing Commvault's connection to Wasabi. My Wasabi test bucket is object locked, so Commvault can't delete older data. To test the loss of the Commvault database, I didn't configure my Commvault jobs to be WORM protected. Consequently, I was able to delete some jobs, although the data in the Wasabi bucket remains. I can't seem to find the option in Commvault to scan the bucket for existing backups to reimport. Is this not available?
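Before looking at re-import options, it can help to confirm what is still physically present in the bucket. A minimal boto3 sketch, assuming a generic Wasabi endpoint and placeholder credentials and bucket name:

```python
import boto3

# Placeholder credentials/bucket; the endpoint URL depends on your Wasabi region.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="WASABI_ACCESS_KEY",
    aws_secret_access_key="WASABI_SECRET_KEY",
)

total_bytes = 0
count = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="commvault-test-bucket"):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

print(f"{count} objects, {total_bytes / 1024**4:.2f} TiB still in the bucket")
```

This only enumerates the objects that the object lock preserved; it does not by itself make the deleted jobs visible to Commvault again.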
Hi Team, we have a very large, infinite-retention Storage Policy associated with Storage Pool "Pool1". It has grown to the point that we will soon be creating another Storage Pool and Storage Policy; let's call these Pool2. All clients from Pool1 will be migrated to Pool2, so Pool1 will stop receiving any fresh data once Pool2 starts receiving it all. The question I have is around the massive leftover DDBs from Pool1. They are 2 x 1.8 TB and are hosted on the two Media Agents associated with Pool1. Since Pool1 will stop receiving data, I am keen to decommission the Pool1 Media Agents, noting that the secondary-copy cloud-based backup data can be accessed from a number of Media Agents, so it does not necessarily have to be the Pool1 Media Agents; it can be any Media Agents, provided they are mapped to the relevant cloud library mount points. So the questions I have are: 1 - What do we do with these large, legacy DDBs? I understand we need to keep for Commvault Sync
Hi folks, I've hit a problem seeding data to Azure using a Data Box. The copy had been created and the DDB is ready to be shipped as well. I've followed this procedure: https://documentation.commvault.com/v11/expert/97276_migrating_data_to_microsoft_azure_using_azure_data_box.html and this has also helped: https://commvaultondemand.atlassian.net/wiki/spaces/ODLL/pages/351142608/Deduplication+Database+Seeding#DeduplicationDatabaseSeeding-DDBSeedingusingDeduplicatedStorage. I'm on step 4: "Once the jobs associated with the initial seeding is complete, shutdown the data box using the recommended shut down process for Azure Data Box." Running the validation I get this error: https://aka.ms/dberr5 - Large file shares are not enabled on your storage account(s). To disregard this error… The CV_Magnetic folder is 36 TB and so easily hits the 5 TB limits stipulated here: https://learn.microsoft.com/en-us/azure/databox/data-box-disk-limits under "Object size limits and Azure Files". So the only thing I can do is drop the storag
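As a rough sanity check before deciding how to split the data, a small Python sketch can total the staged mount path against the 5 TB per-share limit the Data Box docs mention. The path is a placeholder, and splitting across multiple shares is an assumption here, not part of the documented seeding procedure:

```python
import os

# Placeholder local path of the mount path staged for the Data Box copy.
mount_path = r"E:\CV_Magnetic"
SHARE_LIMIT_TB = 5  # per-share limit without large file shares (see the Data Box limits page)

def folder_size(path: str) -> int:
    """Sum the size of every file under path, skipping anything unreadable."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

total_tb = folder_size(mount_path) / 1000**4
print(f"{mount_path}: {total_tb:.1f} TB")
print(f"Roughly {total_tb / SHARE_LIMIT_TB:.0f} shares would be needed at {SHARE_LIMIT_TB} TB each")
```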
Hi all, I am planning a MediaAgent that has enough storage capacity to serve as my disk storage. Deduplication should be active, and the back-end size falls under "Disk Storage - Small (up to 50 TB)". Are there any recommendations or requirements regarding the RPM of the backup disks? I did not find anything in the Commvault documentation, only the RPM recommendation for the OS/software disk. Thanks in advance for your help!
I have 3 LTO7 tapes that were previously used for Micro Focus Data Protector, and we now want to use them for Commvault backups. But strange things started to happen with those tapes. 1. Commvault recognized them as completely empty tapes, despite them containing data from the other backup tool. 2. When I launched the copies, the first 2 were filled with 6 TB and 300 GB. 3. The third tape fills up to 2 TB, goes into an append state, and won't let me use it further, asking for a new tape instead. I have already formatted it twice but the same thing keeps happening. I also see that the format it performs does not mount the tape in a drive; it only happens at a logical level against Commvault's own data. In Commvault, is there a way to format a tape and purge all the data so the tapes can be used to their full capacity?
Hello! A customer has a tape library that is partitioned, with one partition dedicated to Commvault. Originally it had 11 slots and some drives. The customer added 30 slots, but they (and the tapes in them) are not recognized by Commvault. A full scan does not update the slot count. What must be done to recognize the newly added slots? Regards, Pedro
Hi all, NetApp has mentioned to a customer that Commvault can leverage NetApp's SnapLock feature on a FAS CIFS disk library to provide immutability of backup data. I found the following 2 articles regarding this: https://documentation.commvault.com/2023/expert/146623_configuring_worm_storage_mode_on_disk_libraries.html and https://documentation.commvault.com/2023/essential/155629_enabling_worm_storage_and_retention_for_disk_storage.html. These docs seem to be about two different features? One is enabled using a workflow and mentions DDB sealing. The other is enabled using a slider in Command Center and mentions nothing about DDB sealing. Can anyone tell me the difference between the two (and what they do exactly)? And which feature should I best use to leverage SnapLock on a FAS CIFS disk library to provide immutability of backup data? Thanks!
Hi, I have a customer who has migrated their backups to another solution. Commvault will stay in place for recovery purposes. They currently have 6 physical media servers (3 per data centre) with the following configuration: 1 x short-term DDB spread across 6 Media Agents, and 1 x long-term DDB spread across 6 Media Agents. Effectively each Media Agent hosts a portion of each DDB (short and long term). These Media Agents will be decommissioned and the intention is to consolidate the DDBs onto 1 x Media Agent in Azure. The short-term DDBs won't be migrated as they will age off. I know CVLT only supports 2 partitions per Media Agent, but can I run 3 partitions on 1 Media Agent without affecting recovery operations? First prize would be having the 6 partitions hosted on a single Media Agent. Can I go this route? Alternatively, is there a way to consolidate all 5 partitions into 1 partition, hosted on 1 Media Agent? Thanks.
Hello, the customer has installed a tape library and the CommCell does not detect any cleaning tape in the console. When I select Discover Cleaning Tape, I get the message "There are no new media to discover", even though my setting is configured to discover media automatically. I can see the cleaning tape in the library, but not in my CommCell Console. Do you have a solution for me? Thank you very much!
Dear Team, we observed that the full and incremental aux copy backup sizes are almost the same. The aux copy schedules are: Weekly Full - 03:00 AM on Saturday; Daily Incremental - 07:00 PM on Saturday. Note: on other days the incremental aux copies complete normally; the problem is only with the Saturday incremental aux copy. (I think the Saturday incremental aux copy is picking up INCR + FULL, because we checked job ID XYZ and it appears on both tapes.)
Hello there CV community, we are in a situation where we expect there may be some data in one of our disk libraries that is not associated with retained backup jobs. The job data stored on this disk library uses deduplication, so the folder data primarily contains deduplication chunks. We would like to validate that the content on the disk library is "current", i.e. associated with retained, deduplicated backup data. In this case, the storage is Azure object storage. Based on Damian's post here: Clean Orphan Data, what is it? | Community (commvault.com), our hope was that we could use the DDB space reclamation feature alongside the 'Clean orphaned data' option to do this automatically. To test this, I ran the operation in a test environment after having stored some arbitrary files alongside legitimate backup data within the storage. This wasn't successful, and I expect I have either misunderstood his description of the functionality or perhaps Commvault is specifically looking fo
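As a side note, a quick way to spot objects that clearly do not belong to the library (such as arbitrary test files dropped into the container) is to list blobs and flag anything outside the expected folder prefix. This is only a sketch using the azure-storage-blob SDK: the connection string and container name are placeholders, the `CV_MAGNETIC/` prefix is an assumption about the library layout, and this says nothing about whether chunks inside that prefix are still referenced by the DDB, which is what the clean-orphaned-data option addresses:

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name.
container = ContainerClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;",
    container_name="cv-library",
)

EXPECTED_PREFIX = "CV_MAGNETIC/"  # assumed library folder prefix

# Collect blobs whose names fall outside the assumed library prefix.
unexpected = [
    (blob.name, blob.size)
    for blob in container.list_blobs()
    if not blob.name.startswith(EXPECTED_PREFIX)
]

for name, size in unexpected[:50]:
    print(f"{name}  ({size / 1024**2:.1f} MiB)")
print(f"{len(unexpected)} blobs found outside {EXPECTED_PREFIX}")
```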
Hello community, we have a Storage Policy which keeps all backups for 30 days and monthly backups for 18 months, all on the same storage. To free up some space, we could create another copy pointing to the cloud to keep our monthly backups for 18 months. Then we would have 30 days on-prem and all monthly backups in the cloud. Now a new idea has come up: keep 30 days on-prem, plus 6 months of monthly backups. Once the monthly backups are 6 months old, they should be copied to the cloud (and kept until they are 18 months old) and removed on-prem. Is there a way to delay the copy of the monthly backups by 6 months?
Hi, we are running 11.30 and we want to start testing WORM storage capabilities on our Data Domain. We have configured the retention-lock feature on the Data Domain and activated the WORM storage lock in Commvault through Command Center. Talking to the Dell specialist, he told us there is a setting that could affect Commvault: the "automatic-lock-delay" value. That is the time a file remains "open" while it is being written to the DD by the backup application (in this case, Commvault); once the file closure is confirmed, the DD locks the file with the retention set earlier. As we don't know how much time Commvault needs, we have set it to 120 minutes on the DD. Does anyone have experience with WORM on Data Domain with Commvault? Do you know how long Commvault keeps files open on the DD before they are closed?
May I know if anyone has encountered this error before when adding cloud storage in Command Center, and what the resolution is? I have searched several sites but had no luck. Error below: "Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
Hello all, I'm trying to find a report that will output something I think is super basic: list my 5 storage policies and tell me how much space each one is using. I already know about the Client Storage Utilization by Storage Policy Copy report, but it shows the client backup sizes within each policy, and I don't see how to total it easily or customize it to show what I want. If that report can do it, can someone point me in the right direction to tweak it so I can get the info I'm trying to pull? Or if someone knows of an easier way, that would be great. I've checked the built-in reports in the console, Command Center and Store. Thanks!
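If the existing report can be exported to CSV, totalling it per policy is a short pandas exercise. The file name and column names below are assumptions; adjust them to the actual headers in the export:

```python
import pandas as pd

# Placeholder file and column names from a CSV export of the
# "Client Storage Utilization by Storage Policy Copy" report.
df = pd.read_csv("client_storage_utilization.csv")

# Sum the per-client sizes within each storage policy.
totals = (
    df.groupby("Storage Policy")["Size on Media (GB)"]
      .sum()
      .sort_values(ascending=False)
)
print(totals)
```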
Hello, what are the benefits of enabling the horizontal DDB? BOL explains how to enable this feature, but says nothing about the real benefits except that it splits the DDB into 3 sections: one for file systems, another for databases, and the last one for VMs. Can I expect to see an improvement in backup performance, or an increase in deduplication efficiency that would further reduce on-disk consumption? Thanks,
Hi all, wondering if by chance anyone has experience running DDB space reclamation with orphan data cleanup against a cloud library. We have a cloud library in the Azure cool tier which we expect may contain some data that was not pruned successfully, thus increasing our storage consumption in Azure. The deduplication database for this data lives on local storage. We'd love to run a space reclamation with orphan data cleanup against this cloud library, but we're concerned about the possible cost of the storage transactions against the Azure cool library. Has anyone performed this operation before and observed the related cloud storage costs? For reference, we have just under 100 million blobs and a total of about 400 TB of storage utilization in Azure. Many thanks for any input folks may have!
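A back-of-the-envelope estimate of the blob-enumeration cost alone can be scripted. The prices below are placeholders (verify against the current Azure Blob pricing for your region and redundancy), and this does not model any per-blob reads or deletes the cleanup itself might issue, which would likely dominate the bill:

```python
# Rough estimate of the cost to enumerate the library's blobs once.
blob_count = 100_000_000          # ~100 million blobs (from the post)
blobs_per_list_call = 5_000       # max results returned per List Blobs operation
price_per_10k_list_ops = 0.065    # USD, placeholder -- check current cool-tier pricing

list_calls = blob_count / blobs_per_list_call
list_cost = list_calls / 10_000 * price_per_10k_list_ops

print(f"List calls needed: {list_calls:,.0f}")
print(f"Estimated enumeration cost: ${list_cost:,.2f}")
```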