Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 647 Topics
- 3,299 Replies
MA hardware refresh and library mount paths move
Hi all, I'm looking for some steer with regards to a disk library and mount path move between MediaAgents. I may be over-thinking this, but I'm just looking for clarification. My client has a MediaAgent which is to be decommissioned. The mount paths are volumes presented from the SAN which have also now been presented to the new MediaAgent (currently offline in Disk Management on the new MA, awaiting action). Is this as simple as following the Migrate Shared Disk Libraries option under Disk Libraries - Advanced to move the mount path configuration to the new MediaAgent, or are there any gotchas to be aware of? Normally I'd just go through a mount path move process, but I can't in this case. Thanks in advance.
Azure CloudLib Data Written does not match Dedupe statistics
I have a cloud library in Azure configured with three Cool Blob containers (three mount paths). Commvault reports an Application Size of 30 TB and Data on Disk of 50 TB, but in Azure the storage reports only 12 TB used. We have verified that WORM is disabled on the volumes in Azure. It seems that Commvault is getting the statistics wrong for some reason. Has anyone seen this before with Azure cloud libraries?
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively.

Issue: After upgrading to 11.20 or higher, NFS mount paths show offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options".

CVMA.log on the MediaAgent will show:
102415 1901f 01/13 19:06:53 ### WORKER [96/0/0 ] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=22.214.171.124,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]

Cause: Checking the NFS mount options by running mount -v will reveal the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm. However, 11.20 enforces the check. This was done due to issues where…
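If it helps anyone triaging this, here is a minimal sketch of the kind of check involved, assuming a Linux MediaAgent and taking the local_lock=none requirement from the error above; it is not Commvault's actual implementation, just a quick way to spot offending mounts:

```python
# Sketch: flag NFS mount paths whose options would trip the 11.20+ check.
# The option name (local_lock=none) comes from the post above; everything
# else here is illustrative.

def offending_nfs_mounts(mounts_file="/proc/mounts"):
    """Return (mount point, options) pairs for NFS mounts not using local_lock=none."""
    offenders = []
    with open(mounts_file) as fh:
        for line in fh:
            device, mount_point, fstype, options = line.split()[:4]
            if fstype.startswith("nfs") and "local_lock=none" not in options.split(","):
                offenders.append((mount_point, options))
    return offenders

if __name__ == "__main__":
    for mount_point, options in offending_nfs_mounts():
        print(f"{mount_point}: {options}")
```

Remounting the path with local_lock=none in the NFS mount options, as the linked documentation advised, should let the mount path be marked online again.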
Immutable Backup Images
We currently have a 'dual-site' scenario, each site with 2 MediaAgents attached to a Dell/EMC ME4084 disk library. Commvault is configured with a CommCell in each site, with failover enabled. Backup images are secured in each local site and then a secondary copy is replicated to the alternate site. As I am sure is common, questions are being raised about immutable backups in this Commvault environment. I have seen documentation regarding immutability of cloud-based backups, and discussions of WORM technology, but I am unsure what applies to us here with our Commvault / disk library configuration. V11 SP20. Any input appreciated…
The real cost of AWS Glacier Deep Archive
I've been trying to figure out what my costs would be if I discontinued my off-site backup service (they physically come and take the tapes to an off-site location) and moved to S3 Glacier Deep Archive. We maintain on-premises backups as well, and in the past 20 years we've never had to do a restore from the off-site tapes, so I'm definitely not concerned about that "100-year" event. The pricing of GDA is straightforward: data is billed for a minimum of 6 months at roughly $1/TB per month, but there are also costs for PUTs and GETs (currently $0.05 per 1,000 requests). I'm very unsure how many requests I would consume per month when uploading data to the cloud. I'm trying to sell my boss on this, but I need an idea of how GETs/PUTs work in Commvault.
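For a rough estimate, the request count is driven by how many objects the cloud library writes, so the per-object (chunk) size matters. A back-of-the-envelope sketch, using the prices quoted above and an assumed 32 MB object size (a placeholder; check what your cloud library actually writes):

```python
# Back-of-the-envelope GDA cost sketch. The $1/TB-month and $0.05 per 1,000
# requests figures come from the post; the 32 MB object size is an assumption,
# since it is what drives the PUT count.

def monthly_cost(stored_tb, uploaded_tb_per_month, object_size_mb=32,
                 storage_per_tb=1.00, per_1000_requests=0.05):
    puts = (uploaded_tb_per_month * 1024 * 1024) / object_size_mb  # objects written
    request_cost = puts / 1000 * per_1000_requests
    storage_cost = stored_tb * storage_per_tb
    return storage_cost, request_cost

storage, requests = monthly_cost(stored_tb=200, uploaded_tb_per_month=10)
print(f"storage ~${storage:.2f}/month, requests ~${requests:.2f}/month")
```

With numbers in that ballpark the request charges tend to come out as a rounding error next to the per-TB storage cost, which is usually the point worth making to the boss.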
Cloud Libraries and AWS Combined Storage Tiers
Hey guys, I'm currently using S3 IA for my cloud libraries (with dedupe) and looking to reduce costs. The combined storage tiers look promising, in particular Intelligent-Tiering/Glacier. Has anyone got any experience using this, and can you offer some insight into its suitability? Cheers, Steve
Preferred restore location: recovering data from an Azure cloud library
Hi there! I have VMware VMs backed up on-premises with an auxiliary copy to an Azure cloud library. When I try to recover a VM whose data I assumed had already been transferred to Azure, I can see bandwidth increase on the firewall ports, so I believe the restore is running against the local copy rather than the Azure copy. I'd like to recover data that is already in the Azure cloud library (transferred there by the auxiliary copy). Could someone help me with the steps? Thanks!
Multiple DDB Partitions or Single DDB Partition
Hi Team, I have a query. If a storage library has 8 mount paths, all configured from different MediaAgents and shared with each other, should we create a DDB partition on all 8 MediaAgents or only on 1 MediaAgent? What will help increase backup job performance: a DDB hosted on only 1 MediaAgent, or one distributed across multiple MediaAgents? I am thinking that if the DDB is hosted on only 1 MA, the backup job only has to look at 1 MA every time for duplicate blocks and signatures; if the DDB is distributed, wouldn't that make the backup job slower, since the job would have to check for duplicate blocks and signatures across multiple DDB partitions? Let me know if my understanding is incorrect.
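For what it's worth, the usual mental model for a partitioned DDB (a conceptual sketch, not Commvault's actual routing code) is that each block signature maps to exactly one partition, so a lookup does not fan out to every MA:

```python
import hashlib

# Conceptual sketch: each block signature is deterministically routed to one
# partition, so adding partitions spreads lookups across MAs instead of
# multiplying them. Partition names are made up for illustration.

PARTITIONS = ["MA1", "MA2", "MA3", "MA4"]

def partition_for(signature: bytes) -> str:
    """Pick the single partition responsible for this signature."""
    digest = hashlib.sha256(signature).digest()
    return PARTITIONS[int.from_bytes(digest[:4], "big") % len(PARTITIONS)]

# Every lookup for a given block lands on the same, single partition:
print(partition_for(b"block-signature-123"))
```

Under that model a distributed DDB doesn't mean every lookup hits every MA; each signature is owned by one partition, and the extra partitions mainly buy parallelism and resilience rather than extra lookups.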
Stream allocation for Auxcopy
Could someone explain the flow of the process for allocating readers for an aux copy job? How exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and the amount of RAM; by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (e.g. for VSA). My VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst, and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers that will be assigned to the aux copy. My goal is to increase the readers for the aux copy jobs to improve performance. My aux copy with 38 rea…
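As a side note, the CPU/RAM rule quoted above works out like this (a rough arithmetic sketch, not the actual CVJobReplicatorODS logic):

```python
# Quick arithmetic on the sizing rule quoted above (10 streams per CPU,
# 100 MB RAM per stream). The rule itself is from the post; the proxy
# numbers below are just an example.

def max_streams(cpus: int, ram_gb: float, streams_per_cpu=10, mb_per_stream=100) -> int:
    by_cpu = cpus * streams_per_cpu
    by_ram = int(ram_gb * 1024 // mb_per_stream)
    return min(by_cpu, by_ram)

# e.g. an 8-CPU / 32 GB proxy:
print(max_streams(cpus=8, ram_gb=32))   # -> 80 (CPU-bound: 80 vs 327 by RAM)
```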
Problem copying media from LTO4 (IBM tape library) to LTO7 (HPE tape library)
Hello, I have an IBM TS3200 tape library with LTO4 media. Now we have a new HPE tape library with LTO7 media. How can we copy data from the LTO4 media (old tape library) to the LTO7 tape library? Which way is recommended? Maybe Media Refresh? Thank you! Best regards, Elizabeta
Data Written vs Size on disk (HyperScale)
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the storage policy, disk library, or MediaAgent properties. Backup jobs are also fine. I counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. But how can it be that size on disk takes up 55.74 TB? Has anyone had the same situation?
HyperScale X performance when it comes time to aux copy to tape?
Hi guys, I have a customer who is currently using standard MAs with Commvault, backing up copy 1 to disk (Pure FlashArray with NVMe drives), copy 2 to another Pure array, and copy 3 to tape: around 350 TB per week of weekly fulls sent to 4 LTO7 tape drives at a sustained throughput of 700 GB/hour per drive (4 drives in parallel). We are looking to replace both MAs with two HyperScale X clusters. The questions are: how do we need to configure the HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB per week at the same throughput, knowing that we are going to use nearline SAS in the HyperScale X cluster? Or can we use SSDs for the storage pool drives in a HyperScale X?
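For context, the figures in the post imply the aux copy has to feed the drives at roughly 2.8 TB/h more or less continuously. A quick arithmetic sketch (assuming 1 TB = 1000 GB):

```python
# Sanity-check of the tape window implied by the figures in the post
# (4 x LTO7 drives at a sustained 700 GB/hour each, 350 TB of weekly fulls).

drives = 4
per_drive_gb_per_hour = 700
weekly_tb = 350

aggregate_tb_per_hour = drives * per_drive_gb_per_hour / 1000   # 2.8 TB/h
hours_needed = weekly_tb / aggregate_tb_per_hour                # 125 h
print(f"{aggregate_tb_per_hour:.1f} TB/h aggregate -> {hours_needed:.0f} h "
      f"(~{hours_needed/24:.1f} days) of continuous tape writing per week")
```

That ~5-day window is the sustained read rate the HyperScale X storage pool would need to deliver alongside its normal backup load, which is the number to check against the reference architecture sizing.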
Deduplication requirement for Long term retention copy in Cloud
Hi All, I came across Commvault documentation mentioning that deduplication won't make much of an impact when I keep my long-term retention copy in the cloud as a tape replacement. Can anyone share more details, from your own experience or from Commvault documentation, on the pros/cons of keeping the long-term copy in the cloud with or without dedup? Thanks, Mani
Secondary Copy To Tape - Content Indexing
Hello, I am hoping someone can point me in the right direction. I have a secondary copy to tape with an infinite retention period. The tapes will be stored offsite for safekeeping; currently the tapes are onsite. I noticed that during restores of older data I would need to insert the tape media containing the index in order to browse content to restore. Is there a way/method to keep this index on local storage so the tape is not needed to browse the contents?
Delete Mount Path from de-dupe library and decommission Media agent
Team, we are using Windows servers as backup MediaAgents. I want to decommission one of the MediaAgents, "x", which is part of 3 libraries and dedupe storage policies. I have disabled the mount paths on all 3 libraries associated with MediaAgent "x"; View Content shows that there is no data present on the mount path. When I try to delete the mount path associated with MediaAgent "x", I get the error below:

Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See the Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies.

If I unshare the mount paths associated with MediaAgent "x" from the other mount paths of the same library and remove MediaAgent "x" from the Data Paths tab in the dedupe storage policy, the restore jobs start f…
Network Throttling for Aux Copies
Hello, we have multiple sites and all of these sites have different WAN bandwidths. All are DASH copying to a single location, and all of these locations have different working hours. We want to create multiple bandwidth throttling rules. What would be the best way to approach this? Should we create the rules at the source MediaAgent, throttling the send traffic? Thank you.
Magnetic Library Defragmentation
I have read a couple of articles on Commvault Online that say defragmentation of magnetic libraries is a good idea. Diskeeper, now DymaxIO, was listed as a certified product for online volumes. I am wondering whether others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used, and that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks
Regarding Dedup path for cloud Storage pool creation
Hi Team, I have a 6-node HyperScale cluster on-prem for the primary copy. For the secondary copy I need to move the data to the cloud (archive storage). My doubt is: when I am creating the cloud storage pool, do I select the existing (on-prem) dedup path (/ws/ddb/P_1/Copy/_21/Files/31), or do I need to create a dedicated dedup path on the on-prem MA? If it's option 2, what is the recommended number of dedup partitions and the reason behind it? Please also share your best practices for hybrid data protection, if any. Thanks, Manikandan
DDB v4 gen 2 table structure
Hi all, I need your help understanding the table architecture of the deduplication database v4 gen 2: the table structure and how the tables function. There is no information available in the documentation explaining the current DDB table structure. Please help with this information if possible.
Auxiliary copy did not copy some jobs
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another disk library. Despite the successful completion of the aux copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the primary copy for the 4 jobs and it completed successfully. I did a Re-Copy but it stays the same; I did Do Not Copy → Pick for Copy but it's still the same. "All Backups" is selected in the copy policy. What should I check? Best regards.
Delete Mount Path associated to DDB
Hello there, I have a minor issue: I cannot delete an unused mount path, since it's used by a DDB. There are a few MPs under the disk library dedicated to this DDB. In the DDB properties I can only remove the whole disk library, which is not the point. CommCell says that in order to delete this MP, I need to delete each storage policy copy which references this disk library. That isn't an option either. The logs say something similar:

EvMMConfigMgr::onMsgConfigStorageLibrary() - Error [470, Mount path is used by a Deduplication database.] occurred while deleting the mountPath[xx]
###### MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:6170: Failed to delete mountpath [xx] due to error [470, Mount path is used by a Deduplication database.].
###### MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:5593: Failed to delete MountPath from database for Id [xx] due to error Mount path is used by a Deduplication database.:470

Do you have any ideas or workarounds to delete a single MP in this situation?
Failed to fetch valid SAS token
Hi all, I'll try here. We have 2 MAs in Azure that act as proxies as well. When we try to back up a VM from Azure (to cloud storage), the job completes if we configure MA 1 as the proxy, but when we configure MA 2 as the proxy it fails with a "failed to fetch a valid sas token" error. Does anyone have a clue what causes this error? Both MAs have the same OS, disks, permissions, and version. There are no drops on the firewall, and the network settings are configured (client/CS).
Using Server 2008 R2 as a MediaAgent after the FR22.3 update
This is a conversation post following my initial post about FR22.3; it is more of a findings topic/conversation. I had three older Windows Server 2008 R2 MediaAgents (now replaced) that experienced widespread issues after going to FR22.3. NOTE: none of these issues are/were recorded with Commvault as actual issues. The decision to replace/migrate the OS was made at the 11th hour after working on these issues for weeks. The basic application appears to work just fine with 2008 R2 on FR22.3: readiness passes, services run, jobs can run, etc. The issue we were running into was consistent across all three, and they were the only 2008 R2 MediaAgents in our environments, so I knew it was an issue; it seemed too coincidental not to be. Within 4 hours of the FR22.3 update, our jobs started experiencing all or some of the following errors:
- Pipeline errors
- Media mount services / device not ready
- Library full
Even when attempting to select new snap mount hosts for jobs I was getting connection refused messages in the GXTail event logs. The mos…