Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 620 Topics
- 3,252 Replies
Hi. We have a complicated setup where we are using a Topology Group to send data between Media Agents through a firewall and proxy. Once the data hits the firewall, all data is forced into 2 tunnel ports. In addition to this control, we would like to reduce the number of source ports being used so that these can be monitored for backup flow. Currently, using the dynamic port range 49152-65535 does not allow us to do this. Is it as simple as forcing all data traffic into the tunnel (8403 by default), and if so, will this create a bottleneck? Thanks, Andy
Hi guys, I'm struggling with encryption in a mixed environment. On the Global Dedupe Policy Copy, I did not activate encryption. On the Client Advanced Properties I enabled encryption, and on the subclient properties I enabled encryption on Network & Media. Executed jobs are listed as encryption enabled. Does this mean that the backups have been encrypted? Are the backups deduplicated against unencrypted backups within the same Storage Policy (which might result in a mix of encrypted and unencrypted data for the same job)? Since encryption is defined in the GDP and I already have two DDB partitions per MediaAgent, do I have to deploy additional MediaAgents to host dedicated encrypted backups, in case I want to enable that on the Storage Policy? Best regards, Klaus
Hi Guys, I finally found the exact article that describes a solution I wanted to implement and am seeking opinions on whether to do this or not. Basically, I want to create multiple Selective Copies under a storage policy and associate them with different subclients/computers to meet a client's tiering model and different retention. Because I also want to implement deduplication between the Primary and each Selective Copy (Weekly and Monthly), I'm weighing an option to create the Selective Copy using a Library instead of a Storage Pool. In the article below: https://documentation.commvault.com/commvault/v11/article?p=119730.htm, Step 17b, can I select the Partition Path as a normal Windows folder, e.g. D:\<randomFolder>? If I create additional Selective Copies under the same storage policy, can I use the same D:\<randomFolder> to deduplicate data between the Primary Copy and the additional Selective Copies, or do I have to create a D:\<randomFolder1> and so forth for each copy? I ask because I do n…
Hi, we just started to use an object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the config and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? Once the data is migrated, I would like to delete the second data path and then remove the bucket in the object storage. Regards, Thomas
Hello, sorry, I can't correct the title (are/or) :) On a storage policy that has deduplication enabled I have this message, and I understand why. My question is: can I unselect the Extended Retention Rule 1 set for 90 days and increase the basic retention to, for example, 90 days? If yes, how many cycles is good to have for 90 days? Thanks
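A rough worked example, not from the original post: assuming one full backup per week (so one cycle per week, which is purely an assumption), 90 days of basic retention would span roughly 13 cycles.

```python
import math

retention_days = 90   # desired basic retention from the question
days_per_cycle = 7    # assumption: one full backup (one cycle) per week

# Cycles needed to cover the retention window under that assumption
print(math.ceil(retention_days / days_per_cycle))  # 13
```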
Hi Team, my DDB backup operations are failing with the following error message: "Snap creation failed on volumes holding DDB paths." A quick review of the job logs points to some sort of free space threshold being reached. What could this mean? Regards, Winston
Hello all, I am trying to find some specific guidelines for block and chunk settings related to cloud storage. The information I have found generally relates to disk and tape media. I have been reviewing an environment that uses a chunk size of 'Application setting' for the Primary Copy and a mixture of 'Application setting' and 4096 for secondary copies. Block size has been set to 1024 in some cases and to 'Use media type setting' in others. I am wondering what the best practices for these settings are, and whether any of these user-set values are overridden?
Hello, we are in the process of migrating to a new disk library. This disk library is a pair of NAS devices with 300 TB on each NAS. We can carve the NAS into multiple volumes with a maximum size of 150 TB per volume. When we first set up our disk library about 10 years ago, the maximum recommended size of a mount path was 4 TB. I know that is old guidance and I am sure it has increased over the years. We tried to find something in the documentation, and the closest we found was a reference to the maximum mount path size being 25 TB, but it appears that the limitation can be overridden with a registry setting. So a few questions: Is there a maximum mount path size in a disk library? If there is, what is it? If there is, what happens if you hit the limit without adjusting the registry, and can it be overridden with a registry setting? Regardless of a maximum mount path size, from a performance and management perspective, are there best practices on sizing the mount paths? We have thre…
Hello Commvault Community, today I come with a question about the Commvault deduplication mechanism. We noticed that there are two deduplication database engines with identical values but differing in one parameter: unique blocks. (engine1.png) (engine2.png) The difference between these engines is close to 1 billion unique blocks, while the other values are almost identical. Where could this difference come from? Is there any explainable reason for such a difference, considering the rest of the parameters? DASH Copy is enabled between the two deduplication database engines, which are managed by different Media Agents. Below I am attaching examples from two other DDB engines where the situation looks correct; the DASH Copy mechanism is enabled there as well. (otherengine1.png) (otherengine2.png) I am asking for help in explaining what may cause such differences in the number of unique blocks between DDB engines. --- Another issue is whether, in the case of this deduplication databa…
Hello, we would like to tier out the data that is stored on the disk library to a Huawei Object Storage. I created a secondary copy and configured an aux copy schedule. The problem is that the disk library's disk space is running low because the job is not as fast as I was hoping. The amount of data for the copy job can be up to 10 TB. Is there a solution to speed up the aux copy job? The Media Agents have 2x10 Gbit cards. Regards, Thomas
Hi all, I'm having a performance issue with the auxiliary copy. This log was captured from the media agent and shows a low data transfer rate. There are 2 media agents installed at this site; one is working fine, the other one has the issue.

|*5850951*|*Perf*|592185| =======================================================================================
|*5850951*|*Perf*|592185| Job-ID: 592185 [Pipe-ID: 5850951] [App-Type: 0] [Data-Type: 1]
|*5850951*|*Perf*|592185| Stream Source: xxxxx
|*5850951*|*Perf*|592185| Simpana Network medium: SDT
|*5850951*|*Perf*|592185| Head duration (Local): [29,June,21 11:58:57 ~ 30,June,21 05:31:55] 17:32:58 (63178)
|*5850951*|*Perf*|592185| Tail duration (Local): [29,June,21 11:58:57 ~ 30,June,21 05:32:31] 17:33:34 (63214)
|*5850951*|*Perf*|592185| ----------------------------------------------------------------------------------------------------------------------------------------
|*5850951*|*Perf*|592185| Perf-Counter…
Hi all, do the IOPS numbers in the second table below correspond to the test conditions specified here?

Excerpt from: https://documentation.commvault.com/11.24/expert/8852_testing_iops_for_disk_library_mount_path_with_iometer.html

Access Specification Settings:
- Percent Read: 50
- Percent Write: 50
- Percent Random Distribution: 50
- Percent Sequential Distribution: 50
- Transfer Request Size: 64K

The minimum IOPS required for each mount path of the disk library, for extra large, large, and medium MediaAgents:
- Disk Library: Extra Large 1000 IOPS, Large 1000 IOPS, Medium 800 IOPS
Hello, I want to use Commvault to back up 10 laptops. The file types used are: Video: .MOV, .MP4, .RAW, .BRAW, AVCHD, BOO, DOO, TBL; Editing files: .FCP or .SRT; Audio: .MP3, WAVE, AAC. Could I use deduplication and compression on these files? If yes, what ratio should I expect? Thanks. Best regards, Ben
We suddenly encountered low throughput and high DDB lookup times (~99%) for all backup jobs. We removed an obsolete media server this week. We also deleted some storage policies and aux copies with no subclients associated with them. I would like to ask if anyone has encountered a similar situation. Is our dedup database corrupted? Please help. Many thanks.
To connect to an S3 bucket as a cloud library from an on-premises Media Agent, we can use the options below:
1. AWS Direct Connect
2. VPN Gateway
3. Internet
My query is: if we are using option 3 (Internet) to connect to the S3 bucket, how can we protect/secure the S3 bucket from outside attackers or any unauthorized users accessing it over the Internet?
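Not an answer from the thread, just a minimal sketch of one common hardening approach when a bucket is reachable over the Internet: an S3 bucket policy that denies requests not sent over TLS and requests from outside a known source IP range. The bucket name, CIDR, and the use of boto3 below are assumptions for illustration only; in practice this would sit alongside IAM policies, S3 Block Public Access, and encryption settings.

```python
import json
import boto3  # assumes AWS credentials are already configured

bucket = "my-commvault-cloud-library"  # hypothetical bucket name
allowed_cidr = "203.0.113.0/24"        # hypothetical MediaAgent egress range

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny any request that is not sent over TLS
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Deny requests from outside the known egress range
            # (careful: this also blocks console/API access from other IPs)
            "Sid": "DenyUnknownSourceIp",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidr}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```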
Hi there, IHAC that will use ExaGrid as backup storage with Commvault. ExaGrid states that they can add to Commvault deduplication to obtain a higher dedup ratio (up to 20:1 for long-term retention data). I couldn't find any information on ExaGrid in BOL, and my understanding was that we do not use Commvault deduplication when using a deduplication appliance as the primary target. Has anyone implemented Commvault with ExaGrid? If so, are there any specifics, caveats, or best practices? Thanks, Abdel
Hi Team, I have a query. If a storage library has 8 mount paths, all configured from different media agents and shared with each other, should we create a DDB partition on all 8 media agents or only on 1 media agent? Which will give better backup job performance: the DDB hosted on only 1 media agent, or distributed across multiple media agents? I am thinking that if the DDB is hosted on only 1 MA, the backup job only has to look at 1 MA for duplicate blocks and signatures; if the DDB is distributed, wouldn't that make the backup job slower, since it would have to check for duplicate blocks and signatures across multiple configured DDB partitions? Let me know if my understanding is incorrect.
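Not part of the original question, but a conceptual sketch may help frame it: in a partitioned deduplication store, each signature is typically routed to exactly one partition (for example by hashing the signature), so a given lookup still touches a single partition no matter how many partitions exist; partitioning mainly spreads the lookup load and DDB size across MediaAgents. The routing function below is an illustrative assumption, not Commvault's actual implementation.

```python
import hashlib

def partition_for(signature: bytes, num_partitions: int) -> int:
    """Route a block signature to exactly one DDB partition (illustrative only)."""
    digest = hashlib.sha256(signature).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# Each lookup goes to a single partition, whether there is 1 partition or 4.
for sig in [b"block-a", b"block-b", b"block-c"]:
    print(sig, "-> partition", partition_for(sig, 4))
```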
Hello World, I've recently replaced my media server and noticed my auxiliary copy jobs get this error whenever I try running the backup: "User specified a data path which is not part of the data paths in the storage policy copy. Advice: Please specify a job data path which is part of the Storage Policy copy." I can see that the new media server has access to the library, but I'm not sure what else to check.
Hi @Jordan @Mike Struening, I have a question on this topic. I tried to delete the MP, working with your advice, but I also sent a mail for authorization to the admin and I got the error below. ERROR CODE [19:857]: waiting on user input [Delete Mount Path [ [cvbackup] H:\P_QNAP (MQNWX2_02.08.2021_13.16) from Library - DiskLibQnap ] requested by [ UMO\mjosko.domadm ]]. View Contents returns an empty list. There seems to be data on the disk, as Size on Disk indicates several / several hundred GB (similar to the size of the folder on the disk). Despite the empty View Contents list, the data on the disk was only deleted after some time. As I can see, there is something else left. What does the data erasure mechanism depend on?
Hi, is there a way to use Ransomware Protection on Windows MediaAgents that use a disk library on Cluster Shared Volumes? Once Ransomware Protection is activated, the filter driver "CVDLP" with an altitude of 145180 (encryption) is added to the file system filters. This results in redirected I/O on all Cluster Shared Volumes:

BlockRedirectedIOReason : NotBlockRedirected
FileSystemRedirectedIOReason : IncompatibleFileSystemFilter
Name : volume21
Node : node1
StateInfo : FileSystemRedirected

As a result, the cluster events are flooded with warnings: Cluster Shared Volume 'volume21' ('volume21') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared…
Hi, I'm having issues with throttling network utilisation for aux copies. I have about 10 storage policies, all with secondary copies. I've configured "Throttle Network Bandwidth (MB/HR)" to 25000 for the secondary copy of every storage policy. If my maths is correct, that would be approximately 50 Mbps per aux copy. Even with all 10 running at the same time, utilisation should only be approximately 500 Mbps. However, through network monitoring I can see that when these aux copies are running they are using well over the 50 Mbps configured (and saturating the network). Is my maths wrong, or is the throttling configuration not working the way I expect it to? Thanks in advance for any responses.
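For reference, a quick conversion of the stated throttle value (25,000 MB/hr) into megabits per second, assuming decimal megabytes; the poster's estimate of roughly 50 Mbps per copy is in the right ballpark:

```python
throttle_mb_per_hr = 25_000  # "Throttle Network Bandwidth (MB/HR)" value from the post

mbps = throttle_mb_per_hr * 8 / 3600
print(f"{mbps:.1f} Mbps per aux copy")         # ~55.6 Mbps
print(f"{mbps * 10:.0f} Mbps for 10 copies")   # ~556 Mbps
```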
Hello, please, I have an issue with my DDB reconstruction. Not long ago I moved my DDB to another folder, e.g. from folder1 to folder2, but on the same server. 3 days later my colleague wanted to restart the server and force-killed the SIDB process from Process Manager, and then the server went into DDB recovery. For one week now, the file system recovery completes but adding records fails. Today I found out the recovery process is reading from folder1, because after the DDB move the new path had not yet had a DDB weekly backup. Now my question, referring to https://documentation.commvault.com/11.24/expert/12582_moving_deduplication_database_to_another_location.html: the file system recovery is pointing to folder1 instead of folder2, so what do you suggest? Can I move the DDB back to its previous folder1, and what could happen, since it keeps doing reconstruction and failing? I logged a ticket with support and someone came to help, but it is still failing. What other way can I move this DDB folder ba…
I need to check if there is any option to move data from one mount path to another mount path in the same library. I need this done to mitigate an over-commit issue on the back-end storage. I have 3-4 mount paths where only one job resides; I want to move that one job to any other MP within the library and have that mount path deleted, so the over-commit issue is resolved. Current Version: V11 SP26.23. Backend Storage: NetApp