Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 667 Topics
- 3,370 Replies
Deduplication block level factor
Hello, in the documentation Deduplication Building Block Guide (commvault.com) it is mentioned that: “The DDBs created for Windows Media Agent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period.” Is the 32 KB format referring to the block size of the disk itself (the NTFS block size), or to the “Block Level Deduplication Factor” parameter of the Storage Policy as shown below? Thanks
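That wording is generally understood to mean the NTFS allocation unit size of the volume hosting the DDB, not the storage policy's deduplication block factor. A hedged way to check how a volume is currently formatted, assuming Python on the MediaAgent, an elevated prompt, and E: as a placeholder for the DDB drive:

    import subprocess

    # "fsutil fsinfo ntfsinfo" reports the volume's NTFS details; the
    # "Bytes Per Cluster" line is the allocation unit size.
    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", "E:"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Bytes Per Cluster" in line:
            print(line.strip())  # 32768 means the volume is formatted at 32 KB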
Cloud library migration from one Azure tenant to another
We are running Commvault 11.20. Backup jobs currently use an Azure Blob cloud disk library with the default container settings (on Azure the container type is Cool). We would like to move this storage to another tenant with a different storage account and a Cool/Archive container type. We are looking for the best approach to migrate the storage, ideally done from Commvault and not from Azure.
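Not a Commvault-level answer, but if the move does end up happening on the Azure side, a server-side copy avoids routing the data through your own network. A minimal sketch with the azure-storage-blob Python SDK; the account names, container name, and SAS tokens are placeholders, and the destination layout must mirror the source exactly if Commvault is to keep recognizing the library contents:

    from azure.storage.blob import ContainerClient

    # Source and destination containers live in different tenants; the SAS
    # tokens are placeholders (source needs read/list, destination write).
    src = ContainerClient.from_container_url(
        "https://srcaccount.blob.core.windows.net/cvlib?<src-sas>")
    dst = ContainerClient.from_container_url(
        "https://dstaccount.blob.core.windows.net/cvlib?<dst-sas>")

    for blob in src.list_blobs():
        # start_copy_from_url triggers an asynchronous server-side copy.
        source_url = f"{src.url}/{blob.name}?<src-sas>"
        dst.get_blob_client(blob.name).start_copy_from_url(source_url)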
Is it possible to change an Azure Cloud Archive library into an Azure Cloud Combined Storage (Archive/Cool) library?
Hello Commvault Community! I have a question on behalf of one of our clients. We created a Cloud Library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over two months to transfer this amount of data, and when it completed we realized that there is a problem with the Cloud Recall workflow. When we try to “Browse and Restore” from the Copy Precedence (Azure Archive), it tries to reach an index in this archive cloud: it runs an “Index Restore” job, cannot find the index data because it is archive storage, and so runs an Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds and we find an error in the Browse and Restore window: “The index cannot be accessed. Try again later. If the issue persists, contact support.” We decided that restoring an index from archive cloud isn’t a good idea, because even if it worked it would take too much time (a few hours just to list backup content (index res
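For background, a blob in the Archive tier has to be rehydrated to an online tier before anything (index data included) can be read from it, which is what the recall workflow is attempting. A hedged sketch of the underlying operation using the azure-storage-blob Python SDK (the blob URL and SAS are placeholders); rehydration is asynchronous and can take hours at standard priority:

    from azure.storage.blob import BlobClient

    # Placeholder URL; this must point at an Archive-tier blob.
    blob = BlobClient.from_blob_url(
        "https://account.blob.core.windows.net/cvlib/<chunk-path>?<sas>")

    # Ask Azure to bring the blob back to the Cool tier.
    blob.set_standard_blob_tier("Cool", rehydrate_priority="High")

    # archive_status reads "rehydrate-pending-to-cool" until it is online.
    print(blob.get_blob_properties().archive_status)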
Huawei Ocean Store 5500 and deduplication
When performing a backup with the NAS agent with deduplication enabled in Commvault, is it advisable to also enable the source storage's native compression and deduplication functions at the same time, specifically on a Huawei Ocean Store 5500 device? Please advise whether we can enable compression and deduplication on both the Commvault and the Huawei Ocean Store end.
DDB - Windows cluster - questions
Hello, I have a question about setting up DDB creation in a Windows cluster environment: two physical servers with the media agent installed and virtual storage (StarWind).
1) Can the DDB be on the storage shared by the nodes of the cluster?
2) Can I use only one DDB for all nodes (MAs), or should I have a DDB for each MA?
Separate T-Logs from the Auxiliary Copy
Hi all, I have a question for the experts :) We use Metallic storage for our secondary copy. We back up SQL instances there, with T-Logs (lots of jobs) and a daily full. Can I somehow exclude only the T-Logs from going to the secondary copy? I want only the daily full to go to the auxiliary Metallic storage. Is it possible (without creating a new policy, of course)? Thanks in advance
WORM on VTL
Hello Experts, recently customers have wanted to apply the WORM function to VTL storage in order to respond to the ransomware issue. I searched BOL and the Commvault community, but I could not find a detailed guide on how to configure and operate it beyond the WORM media configuration page: https://documentation.commvault.com/2022e/expert/10496_worm_media_configuration.html I am hoping for detailed guidance on implementing the WORM function on a VTL or tape library. For example, once WORM media is fully used, it is moved automatically to the Retired Media pool: https://documentation.commvault.com/2022e/expert/10493_worm_media.html Is this media then reusable? If so, through what procedure can it be reused? Regards, Kim KK
DDB Verification not completing in time
Hi, I'm working on an issue with DDB verification in a HyperScale X environment. I talked to someone at Commvault Support who explained that planned DDB verifications were removed from the best practices for HyperScale X because they could impact space reclamation jobs as well as performance. However, our customer needs to be able to produce reports to document and prove compliance with ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on admin jobs. But they are noticing that DDB verifications take a very long time and don't actually complete, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how we could solve this issue and actually get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions. Jeremy
Synthetic Full on secondary copy of a Storage Policy
Hi, we back up a file server in China with incrementals; on Saturday there is a synthetic full. The aux copy of the synthetic full (1.8 TB) to Germany is really slow. Can I schedule the synthetic full on the secondary storage policy copy? In China always incremental, aux copy to Germany, and then the synthetic full in Germany. We need a full for the tape backups. Regards, Peter Rupp
HSX: failed disk was not correctly replaced
We seem to be running into multiple problems in our new HSX environment. Metadata disk d2 silently filled to 100% on one of three nodes, and the data disks are all at 90-95%, but the GUI shows only 550 of 720 TB as used. Not a single alert for this; everything is green in the GUI. And then there is disk d22 / sdv on one node, which failed a few weeks ago and was replaced together with support. In the GUI it is shown as mounted, but in reality it is not:

sdu 65:64 0 16.4T 0 disk /hedvig/d21
sdv 65:80 0 16.4T 0 disk
sdw 65:96 0 16.4T 0 disk /hedvig/d23

I followed Replacing Disks in a HyperScale X Reference Architecture Node (commvault.com), but the disk is not mounted:

Nov 9 11:22:38 sdes1701-dp systemd: Dependency failed for /hedvig/d22.
Nov 9 11:22:38 sdes1701-dp systemd: Job hedvig-d22.mount/start failed with result 'dependency'.
Nov 9 11:22:38 sdes1701-dp systemd: Job dev-disk-by\x2duuid-dfcc3e6c\x2d8152\x2d42b2\x2db0a1\x2d6742d4748d3c.d
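The “Dependency failed” lines together with the dev-disk-by\x2duuid unit suggest the mount still references the UUID of the old disk, which no longer exists after the replacement. A hedged check, assuming the replacement device really is /dev/sdv and Python is available on the node:

    import subprocess

    # What UUID does /etc/fstab expect for /hedvig/d22?
    with open("/etc/fstab") as f:
        for line in f:
            if "/hedvig/d22" in line:
                print("fstab entry:", line.strip())

    # What UUID does the replacement disk actually carry?
    result = subprocess.run(["blkid", "/dev/sdv"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
    # If the two UUIDs differ, the mount unit can never find its device.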
Configuring two different NAS disk libraries with a single DDB or two DDB partitions
Hello Team, we are planning to configure two different NAS disk libraries with three physical media agent servers. Each media agent server has two dedicated SSD drives configured as RAID 1. We want to know whether each disk library will use a separate DDB or a single DDB; there is only one DDB disk in each media agent server. Thanks and regards, Anand
Full backup or synthetic full for streaming agent-based backups?
Hi guys, which is better: a full backup or a synthetic full for streaming agent-based file system backups? I get the point that a synthetic full is better in that it doesn't use the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
Question about MA architecture
I will soon have to implement Commvault on a site consisting of the following elements: two MediaAgents and a NetApp storage array presenting block storage via iSCSI. The idea is that the MAs work on the same storage, so that when one goes down the other continues to provide availability for both backup and restore operations. After a lot of reading, I think the best fit for my case would be GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html From my knowledge and experience it reads as a somewhat complicated configuration, and I don't know whether it fits what I am looking for. This is the procedure that is hardest for me, because I don't understand it very well: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html I thought that presenting the LUN to the MA, formatting it, and sharing the data path with the other MA would be enough, but I see that it is n
Adding an FC tape library
Hello Community! I am trying to add an HP MSL G3 Series tape library to Commvault using the Expert Storage Configuration. I selected the two media agents (they are already zoned with the tape library) and followed the procedure. Now it asks whether the library has a barcode reader, and I don't know :) Can you help me, please? Thanks!
DDB Network interface
I am creating a partitioned DDB with two media agents. Which interface of each media agent should I add? I have a dedicated NIC available. Do I need to add the IP address of the media agent? What happens if I leave it at the default? I have implemented this in the past, but I cannot remember this part.
MA hardware refresh and library mount path sharing
Hi, I have to perform an OS refresh of some physical MAs (two grids of four MAs); we will upgrade from Windows Server 2012 to 2019. My concern is data availability, because our disk libraries are configured with local LUNs on each MA, shared with the others via the DataServer-IP option. The problem is that while one MA is unavailable for the upgrade, its mount paths are not readable by the other MAs. I have searched the documentation and cannot find any use case for doing an MA refresh with shared mount paths. Please advise. Kind regards, Christophe
Data Transferred over network
I see this all the time and have never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies there is a “Data Transferred Over Network” number, which I would expect to be 0, but there is usually a value there. For example, this aux job (still running) shows:

Total Data Processed: 3.23 TB
Data Transferred Over Network: 107.95 GB
Total Data to Process: 4.7 TB
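Worth noting how small that figure is relative to the job, which hints at control or metadata traffic rather than the backup payload itself (an inference, not a confirmed explanation):

    # Ratio of the reported network transfer to the data processed so far.
    processed_gb = 3.23 * 1024   # 3.23 TB expressed in GB
    network_gb = 107.95
    print(f"{network_gb / processed_gb:.1%}")   # ~3.3% of the processed data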
Cloud Storage Configuration
Hello All, we are configuring new cloud-based storage, and we want to know whether the CommServe has to have access to the cloud library. The storage and the MA will share a private network in which the storage presents its buckets to the MA, while the CommServe and the MA communicate through our backup network. In the configuration steps, we came across the following: So we wondered whether this means the CS has to have some sort of access to the storage (which is not the case on our platform, since the storage is only seen by the MAs through their private network), or whether it is just information related to the MA accessing the storage? Regards.
CloudTestTool errors with OCI object storage
Hello all, I'm trying to configure my OCI object storage in Commvault to test this tool (I'm using a trial licence), but I'm dealing with some errors as shown below. What certificate is this? How can I install it, and where? Then I run the CloudTestTool and it shows this. Looking at the log file:

4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
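Every line fails with the same authentication message, which points at the credential set (tenancy OCID, user OCID, key fingerprint, region, private key) rather than at the tool itself. A hedged way to validate the same credentials outside Commvault, using the official oci Python SDK and a standard ~/.oci/config file:

    import oci

    # Load and sanity-check the credentials; path and profile are defaults.
    config = oci.config.from_file("~/.oci/config", "DEFAULT")
    oci.config.validate_config(config)

    # The same namespace lookup the log shows failing; with valid
    # credentials this prints the tenancy's Object Storage namespace.
    client = oci.object_storage.ObjectStorageClient(config)
    print(client.get_namespace().data)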
DDB and BitLocker
If we have an existing DDB on a drive of a media agent and that drive gets encrypted with BitLocker, does that cause a problem? My thought is that it shouldn't, since all reads and writes happen inside the server. There might be a performance penalty, though. Or am I totally wrong? //Henke
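A quick way to see what BitLocker is actually doing on the DDB volume is the built-in manage-bde CLI (the drive letter is a placeholder; run from an elevated prompt):

    import subprocess

    # Shows conversion status, percentage encrypted, and the encryption
    # method (e.g. XTS-AES 128, software vs. hardware) for the volume.
    print(subprocess.run(
        ["manage-bde", "-status", "E:"],
        capture_output=True, text=True,
    ).stdout)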
Egress Charges on AWS Aux copy
Hey all - can someone explain this statement in the Cloud Architecture Guide for AWS (page 29)? “Do not utilize an on-premises MediaAgent to perform Auxiliary copy as Cloud egress costs (S3 Pricing >) will be incurred.” We have customers doing this very thing, and one is having an issue with high egress charges. We are not sure the charges are from Commvault, but we want to make sure. The customer turned off data verification, but that did not make a difference. They are writing Oracle logs directly to the cloud library (instead of to the primary onsite copy), are very tight on hard drive space, and have a lot of index restores happening, but I don't see either of those things causing large egress; the index restores would come from the primary onsite copy. Can someone point me in the right direction on whether there are other things with this aux copy that could cause egress? Thanks! Melissa
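For a sense of scale, a back-of-the-envelope estimate (the ~$0.09/GB figure is AWS's published internet egress rate for the first 10 TB per month at the time of writing; check current S3 pricing for the region in question):

    def egress_cost_usd(gb_out, rate_per_gb=0.09):
        """Rough internet-egress cost for data read out of S3."""
        return gb_out * rate_per_gb

    # e.g. an on-prem MediaAgent pulling a 5 TB aux-copy source from S3:
    print(f"${egress_cost_usd(5 * 1024):,.2f}")   # ~$460.80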
Aux Copy jobs missing?
Hi, I have a customer with two copies: 1) primary, dedup on disk, with 66 jobs; 2) secondary, dedup on disk, with 133 jobs. He created a copy #3 to replace the secondary, but chose copy #1 as the source, and some jobs exist only in the secondary copy. Is there a way to pick up the missing jobs by changing the source of copy #3 to copy #2? If I change it and run an aux copy, the missing jobs are not picked up. Or do I have to delete copy #3 and start over?
Catalog jobs from a cloud storage object
Hi guys, is there a way to catalog jobs from a bucket within a cloud storage library, like below? The tool offers only a tape or a disk as media. How do we retrieve our DR backups from cloud storage, in case we lose everything, in order to perform a disaster recovery? I found the link below; however, it doesn't show how to retrieve the DR DB: https://documentation.commvault.com/11.24/expert/43588_retrieving_disaster_recovery_dr_backups_from_cloud_storage_using_cloud_test_tool.html I've also found the note below. Does this mean that if deduplication is enabled, there is no way to retrieve the DR DB? Thanks a lot. Best regards