Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 620 Topics
- 3,250 Replies
Hello all, I ran into a bit of an issue. Yesterday, one of the disk libraries filled up and the backups went into a waiting status. After having a look at the utilization, it did indeed turn out to be 99.7% full.

The main culprit was SQL Server backups: there were some backup jobs with extended retention, so I deleted those, along with some more of the old backup jobs, to make space. I also ran Data Aging and could clearly see data chunks being deleted in SIDBPhysicalDeletes.log, so I assume quite a bit of data was deleted. The Primary copy went from 52.95 TB down to 19.81 TB.

However, when I check the free space on the library, very little has been freed. So I checked the Mount Paths Space Usage for that disk library: Data Written corresponds to the amount of space used by the Primary copy, 19.8 TB. However, Size on Disk, which takes into account Data Written plus aged jobs that are still referenced by valid jobs, is still very high, almost unchanged. I am quite confused by this.
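The behavior described above usually comes down to deduplication reference counting: aging a job removes that job's references, but a chunk is only physically pruned once no remaining job points at it. A minimal sketch of the idea, with made-up chunk names and sizes (this is an illustration, not Commvault's actual pruning code):

```python
# Hypothetical reference-counting model: deduplicated chunks are only
# physically deleted once *no* remaining job references them.
chunks = {
    "chunk_A": {"size_tb": 10, "referenced_by": {"job1", "job2"}},
    "chunk_B": {"size_tb": 15, "referenced_by": {"job1"}},
    "chunk_C": {"size_tb": 20, "referenced_by": {"job2", "job3"}},
}

def age_job(job_id: str) -> int:
    """Remove a job's references; a chunk is prunable only at zero refs."""
    freed = 0
    for name, chunk in list(chunks.items()):
        chunk["referenced_by"].discard(job_id)
        if not chunk["referenced_by"]:      # no valid job needs this chunk
            freed += chunk["size_tb"]
            del chunks[name]                # physical delete (SIDBPhysicalDeletes)
    return freed

print("Freed by aging job1:", age_job("job1"), "TB")   # 15 TB: only chunk_B
```

In this toy example, aging job1 frees only the 15 TB chunk it alone referenced, while the chunks it shared with surviving jobs stay on disk. The logical size of the copy drops by far more than the physical space returned, which is exactly the Data Written vs. Size on Disk gap described above.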
Hello, we are in the process of migrating to a new disk library. This disk library is a pair of NAS devices with 300 TB on each NAS. We can carve out the NAS into multiple volumes with a maximum size of 150 TB per volume. When we first set up our disk library about 10 years ago, the maximum recommended size of a mount path was 4 TB. I know that is old guidance and I am sure this has increased over the years. We tried to find something in the documentation, and the closest we found was a reference to the maximum mount path being 25 TB, but it appears that the limitation can be overridden with a registry setting. So a few questions:
- Is there a maximum mount path size in a disk library? If there is, what is it?
- If there is, what happens if you hit the limit without adjusting the registry, and can it be overridden with a registry setting?
- Regardless of a maximum mount path size, from a performance and management perspective, is there a best practice on sizing the mount paths? We have thre…
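For what it's worth, the carve-up math for the scenario above is simple enough to sanity-check in a few lines. The 25 TB per-mount-path figure is the one the poster found in the documentation; treat it as an assumption rather than a confirmed limit for your Commvault version:

```python
# Back-of-the-envelope mount path layout for the scenario above.
nas_capacity_tb = 300        # per NAS device (from the post)
max_volume_tb   = 150        # NAS-side volume limit (from the post)
mount_path_tb   = 25         # assumed per-mount-path target

volumes_per_nas     = nas_capacity_tb // max_volume_tb       # 2
mount_paths_per_vol = max_volume_tb // mount_path_tb         # 6
mount_paths_per_nas = volumes_per_nas * mount_paths_per_vol  # 12

print(f"{volumes_per_nas} volumes x {mount_paths_per_vol} mount paths "
      f"= {mount_paths_per_nas} mount paths of {mount_path_tb} TB per NAS")
```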
Hi, I've needed to delete and re-create the DDB, and I need to use the same partition paths as were used before (the disk was formatted). Currently I cannot create a DDB, as I get the message: "The specified mount path is already in use". I know that it's described here: http://kb.commvault.com/DD0051 But does anybody know if there is a way to force the cleanup and not wait 24 hours? Data Aging is not doing the trick. Thanks
Hello, I have a question about setting up DDB creation in a Windows cluster environment: 2 physical servers with the media agent installed, plus virtual storage (StarWind).
1) Can the DDB be on the storage shared by the nodes of the cluster?
2) Can I use only 1 DDB for all nodes (MAs), or should I have a DDB for each MA?
Hi all, could someone explain the process of committing chunks to the library and signatures to the DDB? I read somewhere that a chunk will keep recording all data blocks even if they are repeated. I'm confused by that statement. My understanding might be wrong, but I need help to understand exactly when DDB signatures are committed to the DDB and when a chunk gets committed.
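To frame the question, here is a deliberately simplified model of signature-based deduplication: each block is hashed, the signature is looked up in the DDB, and only blocks with new signatures are written to the library, with the new signature then committed to the DDB. Real Commvault behavior (chunk commit boundaries, DDB transaction batching) is more involved; this only illustrates the lookup/insert flow:

```python
# Simplified signature lookup/insert flow; not Commvault internals.
import hashlib

ddb = {}          # signature -> chunk location (the dedup database)
library = []      # unique data blocks actually written to the disk library

def backup_block(data: bytes) -> str:
    sig = hashlib.sha256(data).hexdigest()   # signature for the block
    if sig in ddb:
        # Duplicate: only a reference is recorded; no new data is written.
        return f"ref -> {ddb[sig]}"
    # New signature: write the block to the library, then commit the
    # signature to the DDB so later blocks can deduplicate against it.
    library.append(data)
    ddb[sig] = f"chunk_{len(library) - 1}"
    return f"wrote {ddb[sig]}"

print(backup_block(b"hello"))   # wrote chunk_0
print(backup_block(b"hello"))   # ref -> chunk_0  (duplicate, not rewritten)
print(backup_block(b"world"))   # wrote chunk_1
```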
Hello everyone, hoping to get some recommendations and possibly some help. We are currently shopping to replace our HP LTO-6 SAS tape libraries, and we are looking at the Quantum i3 to support LTO-8. First off, let me explain our setup: 2 media agents and 4 tape libraries (HP LTO-6), with 2 of them directly connected over 6 Gb SAS to one media agent and the other 2 directly connected over 6 Gb SAS to the other media agent. We are looking at the i3, but we would want both media agents to be able to utilize the same drives in the unit. As we currently have it set up, 2 drives belong to one media agent and the other 2 to the other. So some quick questions:
- Any recommendations on other brands we should look at?
- Does anyone recommend the Quantum tape drives?
- What would the SAS-to-media-agent setup look like? Right now one media agent obviously can't use the other's tape drives, and we don't want to be in this position in the new setup. If we have 4 drives, we want all of our medi…
Team, we are using Windows servers as backup media agents. I want to decommission one of the media agents, "x", which is part of 3 libraries and dedupe storage policies. I have disabled the mount paths on all 3 libraries associated with media agent "x", and View Content shows that there is no data present on the mount path. When I try to delete the mount path associated with media agent "x", I get the error below:

"Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See the Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies."

If I unshare the mount paths associated with media agent "x" from the other mount paths of the same library and remove media agent "x" from the Data Paths tab in the dedupe storage policy, the restore jobs start f…
Hi, for a few weeks now, two media agents have been getting very high load during SIDBPrune. The media agent and all library paths went offline. The OS load goes above 70 because of the number of concurrent operations against the disk array. What I found in SIDBPrune.log:

59123 e6f3 03/08 17:38:05 ### PRNCTLR GetThreadPoolSize:4140 Found  processor cores. Setting pruning thread pool size to  for [Disk] media.

The DedupPrunerThreadPoolSizeDisk registry key is needed to limit the threads and fix the issue. Does anyone know what is behind that? Setting the thread pool to the number of CPUs for a background task like pruning probably isn't a good idea, certainly not for a media agent with a high number of CPUs and a SATA array attached to it. Thanks.
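The tuning idea behind the DedupPrunerThreadPoolSizeDisk key mentioned above can be pictured as capping a worker pool rather than sizing it to the CPU count, since physical deletes are bound by array IOPS, not CPU. A generic sketch (not Commvault code; the file-per-chunk layout is an assumption for illustration):

```python
# Generic illustration: a fixed-size pool of prune workers instead of one
# sized to os.cpu_count(), which on a 32+ core MA would issue far more
# parallel deletes than a SATA array can absorb.
import os
from concurrent.futures import ThreadPoolExecutor

def prune_chunk(path: str) -> None:
    """Physically delete one aged chunk file (hypothetical layout)."""
    os.remove(path)

def prune(paths, max_workers=4):
    # A small fixed pool keeps concurrent IO against the array modest.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(prune_chunk, paths))  # consume results to surface errors
```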
Does anyone know if this will work and be supported? It's not listed as a supported platform for IntelliSnap in the 11.22 documentation at https://documentation.commvault.com/11.22/essential/106164_supported_arrays_and_agents.html, but it does get a mention as generically supported for NDMP. Thank you.
Hello, we have multiple sites, and all these sites have different WAN bandwidths. All are DASH copying to a single location, and all these locations have different working hours. We want to create multiple bandwidth throttling rules. What would be the best way to approach this? Should we create the rules at the source media agent, throttling the send traffic? Thank you.
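One way to picture the rule set being asked about: a per-site table of working hours and send limits, evaluated at the source before DASH traffic is sent. Commvault's actual network throttling is configured on MediaAgent/client groups; the sketch below only models the rule-selection logic, with made-up site names and numbers:

```python
# Per-site, time-of-day send-limit selection; illustrative values only.
from datetime import datetime

rules = {
    # site: (work_start_hour, work_end_hour, day_limit_mbps, night_limit_mbps)
    "siteA": (8, 18, 100, 1000),
    "siteB": (6, 16,  50,  400),
}

def send_limit_mbps(site: str, now: datetime) -> int:
    start, end, day, night = rules[site]
    # Throttle harder during the site's local working hours.
    return day if start <= now.hour < end else night

print(send_limit_mbps("siteA", datetime(2024, 1, 10, 12)))  # 100 (working hours)
print(send_limit_mbps("siteA", datetime(2024, 1, 10, 22)))  # 1000 (off hours)
```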
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the Storage Policy, disk library, or media agent properties. The backup jobs are also fine. I counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. But how can it be possible that size on disk takes up 55.74 TB? Has anyone had the same situation?
Hi all, I'm looking for some steer with regards to moving a disk library and mount paths between MediaAgents. I may be over-thinking this, but I'm just looking for clarification. My client has a MediaAgent which is to be decommissioned. The mount paths are volumes presented from the SAN, which have also now been presented to the new MediaAgent (offline in Disk Management on the new MA, awaiting action). Is this just as simple as following the Migrate Shared Disk Libraries option under Disk Libraries > Advanced to move the mount path configuration to the new MediaAgent, or are there any gotchas to be aware of? Normally I'd just go through a mount path move process, but I can't in this case. Thanks in advance.
I have a cloud library in Azure configured with three Cool blobs (three mount paths). Commvault reports an Application Size of 30 TB and Data on Disk of 50 TB, yet in Azure the storage reports as holding 12 TB. We have verified that WORM is disabled on the volumes in Azure. It seems that Commvault messes up the statistics for some reason. Has anyone seen this before on Azure cloud libraries?
We currently have a dual-site scenario, each site with 2 media agents attached to a Dell EMC ME4084 disk library. Commvault is configured with a CommCell in each site, with failover enabled. Backup images are secured in each local site, and then a secondary copy is replicated to the alternate site. As I'm sure is common, questions are being raised about immutable backups in this Commvault environment. I have seen documentation regarding immutability of cloud-based backups, and discussions of WORM technology, but I am unsure what applies to us here with our Commvault / disk library configuration. V11 SP20. Any input appreciated.
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively.

Issue: After upgrading to 11.20 or higher, NFS mount paths show offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options". CVMA.log on the media agent will show:

102415 1901f 01/13 19:06:53 ### WORKER [96/0/0 ] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=220.127.116.11,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]

Cause: Checking the NFS mount options by running mount -v will reveal the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm. However, 11.20 has enforced the check. This was done due to issues where…
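On Linux, the offending mounts can be spotted quickly by scanning /proc/mounts for NFS entries whose options lack local_lock=none. A small sketch (the output format is illustrative; follow the documentation link above for the supported fix):

```python
# Flag NFS mounts that will trip the 11.20+ local_lock check.
def find_bad_nfs_mounts(mounts_file="/proc/mounts"):
    """Return NFS mounts whose options do not include local_lock=none."""
    flagged = []
    with open(mounts_file) as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            if fstype.startswith("nfs") and "local_lock=none" not in options:
                flagged.append((mountpoint, options))
    return flagged

for mountpoint, options in find_bad_nfs_mounts():
    print(f"{mountpoint}: {options}")   # remount these with local_lock=none
```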
Hi there! I have VMware VMs backed up on-premises with an auxiliary copy to an Azure cloud library. When I try to recover a VM whose data I assumed had already been transferred to Azure, I can see the bandwidth on the firewall ports increase, so I think this scenario is recovering from the local data rather than from Azure. I'd like to recover data that is already in the Azure cloud (transferred there by the auxiliary copy). Could someone help me with these steps? Thanks!
I've been trying to figure out what my costs would be if I discontinued my off-site backup service (they physically come and take the tapes to an off-site location) and moved to S3 Glacier Deep Archive. We maintain on-premises backups as well, and in the past 20 years we've never had to do a restore from off-site tapes, so I'm definitely not concerned about that "100-year" event. The pricing of GDA is straightforward: data is billed for a minimum of 6 months, at $1/TB per month, but then there are also costs for PUTs and GETs (currently $0.05 per 1,000). I'm very unsure how many requests I would consume per month when uploading data to the cloud. I'm trying to sell my boss on this, but I need an idea of how GETs/PUTs work in Commvault.
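Since the request count is the unknown here, a rough model helps: each PUT uploads one object, so requests scale with monthly upload volume divided by object size. The 64 MB object size and the volumes below are assumptions for illustration; actual Commvault cloud object sizes depend on the library and dedupe settings:

```python
# Rough GDA cost model; all volumes and the object size are assumptions.
upload_tb_per_month = 10       # assumed monthly upload volume
object_size_mb      = 64       # assumed average object size
storage_tb          = 120      # assumed total data retained

put_cost_per_1000   = 0.05     # $/1,000 requests, from the post
storage_cost_per_tb = 1.00     # $/TB-month, from the post

objects      = upload_tb_per_month * 1024 * 1024 / object_size_mb
put_cost     = objects / 1000 * put_cost_per_1000
storage_cost = storage_tb * storage_cost_per_tb

print(f"~{objects:,.0f} PUTs -> ${put_cost:,.2f}/month in requests")
print(f"${storage_cost:,.2f}/month in storage")
```

Note that GETs add retrieval costs on top of this, and the minimum billed duration means data deleted before 6 months is still charged for the full period.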
Hey guys, I'm currently using S3 IA for my cloud libraries (dedupe is used) and looking to reduce costs. The combined storage tiers look promising, in particular Intelligent-Tiering/Glacier. Has anyone got any experience using this who can offer some insight into its suitability? Cheers, Steve
Hi team, I have a query. If a storage library has 8 mount paths, all configured from different media agents and shared with each other, should we create a DDB partition on all 8 media agents or only on 1 media agent? What will help to increase the performance of backup jobs: a DDB hosted on only 1 media agent, or one distributed across multiple media agents? I am thinking that if the DDB is hosted on only 1 MA, the backup job has to look at only 1 MA every time for duplicate blocks and signatures; if the DDB is distributed, wouldn't that make the backup job slower, since the job would have to check for duplicate blocks and signatures across multiple configured DDB partitions? Let me know if my understanding is incorrect.
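A useful mental model for the partitioning question above: each signature is routed to exactly one partition (for example, by hashing the signature), so a lookup still touches a single MA whether there is one partition or eight. Partitioning mainly adds DDB capacity and spreads query load; it does not mean every backup checks every partition. A simplified sketch, not Commvault's actual routing algorithm:

```python
# Stable hash routing of signatures to DDB partitions; illustrative only.
import hashlib

partitions = ["MA1", "MA2", "MA3", "MA4"]   # MAs hosting one partition each

def partition_for(signature: str) -> str:
    # A given signature always maps to the same partition, so a duplicate
    # is still detected with a single lookup on one MA.
    digest = int(hashlib.sha1(signature.encode()).hexdigest(), 16)
    return partitions[digest % len(partitions)]

print(partition_for("sig-abc"))   # same MA every time for this signature
```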
Could someone explain the process flow for allocating readers for an aux copy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (for VSA, for example). E.g., my VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst, and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers that will be assigned to the aux copy. My goal is to increase the readers for the aux copy jobs to improve performance. My aux copy with 38 rea…
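The sizing rule quoted above (10 streams per CPU, 100 MB of RAM per stream) gives a ceiling that is the smaller of the two limits; whether the aux copy actually reaches it also depends on the source copy's stream layout and the tape copy's combine-to-N-streams setting. A worked version of just the ceiling, with example proxy sizes:

```python
# Stream ceiling from the quoted rule of thumb: min(CPU limit, RAM limit).
def max_streams(cpus: int, ram_mb: int) -> int:
    return min(cpus * 10,       # 10 streams per CPU
               ram_mb // 100)   # 100 MB of RAM per stream

print(max_streams(8, 16384))    # 80 by CPU, 163 by RAM -> 80
print(max_streams(4,  2048))    # 40 by CPU,  20 by RAM -> 20
```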