Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 671 Topics
- 3,371 Replies
Stream allocation for Auxcopy
Could someone explain the process flow for allocating readers for an auxcopy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and the amount of RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory. For example, for VSA: my VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (auxcopy) decide and allocate readers when copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers that will be assigned in the auxcopy. My goal is to increase the readers for the auxcopy jobs to improve performance. My auxcopy with 38 rea
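Based purely on the rule of thumb quoted in the post (10 streams per CPU, 100 MB of RAM per stream), a rough stream ceiling for a Linux proxy can be sanity-checked as below. This is only a back-of-the-envelope sketch; the figures come from the post rather than official sizing guidance, and the result is an upper bound, not what the auxcopy will actually allocate.

    # Rough stream-ceiling estimate for a Linux proxy, using the figures from the post
    CPUS=$(nproc)                                                   # logical CPUs on the proxy
    MEM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)   # total RAM in MB
    CPU_LIMIT=$((CPUS * 10))                                        # 10 streams per CPU
    MEM_LIMIT=$((MEM_MB / 100))                                     # 100 MB per stream
    echo "CPU-based limit:    $CPU_LIMIT streams"
    echo "Memory-based limit: $MEM_LIMIT streams"
    # The effective ceiling is the lower of the two values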
Deduplication requirement for Long term retention copy in Cloud
Hi All, I came across Commvault documentation mentioning that deduplication won't make much of an impact when I keep my long-term retention copy in the cloud as a tape replacement. Can anyone share more details, from your own experience or from Commvault documentation, regarding the pros/cons of keeping the long-term copy in the cloud with/without dedup? Thanks, Mani
Failing DDB Backup on a Linux-based MA due to extent exhaustion
Hi Team, My DDB Backup operations are failing with the following error message: Snap creation failed on volumes holding DDB paths. A quick review of the job logs points to insufficient space (0 extents). What could this mean? Regards, Winston
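Assuming the DDB volume is an LVM logical volume (the usual layout on a Linux MediaAgent, where the DDB backup takes an LVM snapshot), "0 extents" normally means the volume group behind that logical volume has no free physical extents left for the snapshot. A generic LVM check, as a sketch (/path/to/ddb is a placeholder for the actual DDB mount point):

    # Which filesystem/logical volume holds the DDB
    df /path/to/ddb
    # Free space per volume group; the VG behind the DDB LV needs unallocated
    # extents before a snapshot can be created
    vgs
    # "Free  PE / Size" in the detailed output shows the free extent count
    vgdisplay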
Secondary Copy To Tape - Content Indexing
Hello, I am hoping someone can point me in the right direction. I have a secondary copy to tape with an infinite retention period. The tapes will be stored offsite for safekeeping; currently the tapes are onsite. I noticed during restores of older data that I would need to insert the tape media containing the index in order to browse content to restore. Is there a way/method to keep this index on local storage so the tape is not needed to browse the contents?
DDB Maintenance Mode (Resync required/Resync in progress) explanation
Uncommonly, you may find your DDB partitions held in Maintenance Mode citing a need for a resync, as per below: Status: Maintenance Mode (Resync required). There are also instances where the CS DB is aware the resync is taking place and as such will output the following status: Status: Maintenance (Resync in progress). This means there is currently a flag marked against the DDB within our CommServe DB that requires us to validate the DDB state, because an inconsistency was detected and/or a CommServe DR (Disaster Recovery) restore has recently taken place. A DDB in this state is not usable for backup operations and will not prune data. As per our documentation, a resynchronisation encompasses the following process: the data in the DDB is validated against the CS DB to ensure that both databases are synchronised; if the CS DB and the DDB are not synchronized, the resynchronisation process removes the additional data entries in the DDB to reconcile inconsistencies in the CS DB W
Regarding Dedup path for cloud Storage pool creation
Hi Team, I have a HyperScale 6-node cluster on-prem for the primary copy. For the secondary copy I need to move the data to cloud (archive storage). My question is: when I am creating the cloud storage pool, do I select the existing (on-prem) dedup path (/ws/ddb/P_1/Copy/_21/Files/31), or do I need to create a dedicated dedup path on the on-prem MA? If it is option 2, what is the recommended dedup partition value, and what is the reason behind it? Please also share your best practices for hybrid data protection, if any. Thanks, Manikandan
Using server 2008r2 as a media agent after FR22 .3 update
This is a conversation post following my initial post about FR22.3; it is more of a findings topic/conversation. I had three older 2k8r2 media agents (now replaced) that experienced widespread issues after going to FR22.3. NOTE: none of these issues are/were recorded with Commvault as actual issues. The decision to replace/migrate the OS was made at the 11th hour after working for weeks on these issues. The basic application appears to work just fine with 2k8r2 on FR22.3 - readiness, services running, can run jobs, etc. The issue we were running into was consistent across all three, and they were the only 2k8r2 media agents in our environments, so I knew it was an issue; it seemed too coincidental not to be. After the FR22.3 update - within 4 hours - our jobs started experiencing all or some of the following errors: pipeline errors, media mount services device not ready, library full. Even when attempting to select new snap mount hosts for jobs I was getting connection refused messages in the GxTail event logs. The mos
Cloud storage https connection
Hi there, I have successfully added cloud storage (S3 compatible). However, for the time being I am only able to set up the connection over the http protocol. When I try to add a new cloud storage library using https, I get the error message "failed to do verification". To move forward, I would like to use the https protocol. I have a self-signed certificate on my NetApp S3-compatible cloud storage - is it possible to allow using it, since I don't have a CA-issued cert? Can Commvault be forced to accept a self-signed certificate? What I did try was the additional setting described as "Use this additional setting and set its value to 0 to skip the checking of the server's certificate claimed identity for the cloud libraries", but it didn't help. Is it possible to check whether this setting is being applied? Do you have any suggestions for such a situation? Thanks for your ideas.
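Independent of any Commvault setting, it is worth confirming exactly which certificate the S3 endpoint presents. The commands below are generic TLS diagnostics with openssl rather than anything Commvault-specific; s3.example.local is a placeholder for the NetApp S3 endpoint.

    # Show the full certificate chain the endpoint presents on port 443
    openssl s_client -connect s3.example.local:443 -showcerts </dev/null
    # Print just the subject, issuer and validity dates of the leaf certificate
    openssl s_client -connect s3.example.local:443 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates

If the subject and issuer are identical (self-signed), importing that certificate into the MediaAgent's OS trust store is the usual alternative to disabling verification, though whether Commvault honours the OS trust store for cloud libraries is something to confirm against the documentation or with support.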
Aux copy error: may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon
In our CIFN environment, we would like to take SnapVaults from our primary (snap copy) under our storage policy. We are currently using Open Replication (not OCUM). Our initial (copy) snapshot works fine, but the aux copies (SnapVaults) are not. Here is the message under the Progress tab when the job initiates: Error: Data to Storage Policy [storage-xxx] Copy [snap_vault] may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon. Any ideas?
Failing DDB Backup on a Linux-based MA due to thin pool/volume threshold
Hi Team, My DDB Backup operations are failing with the following error message: Snap creation failed on volumes holding DDB paths. A quick review of the job logs points to some sort of free space threshold being reached. What could this mean? Regards, Winston
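Assuming the DDB sits on an LVM thin volume (a common layout on Linux MediaAgents), the threshold being hit is usually the thin pool's data or metadata usage, or an autoextend threshold in lvm.conf; LVM refuses new snapshots once the pool is too full. A generic check, as a sketch:

    # Data and metadata usage of thin pools and thin volumes
    lvs -a -o lv_name,vg_name,lv_size,data_percent,metadata_percent
    # Autoextend thresholds configured for thin pools and snapshots
    grep -E 'thin_pool_autoextend_threshold|snapshot_autoextend_threshold' /etc/lvm/lvm.conf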
Failing DDB Backup on a Windows-based MA due to shadow copy space allocation exhaustion
Hi Team, My DDB Backup operations are failing with the following error message: The snapshot of the Dedupe database from the previous attempt of this job is not available and a new one could not be created, as the job cannot continue under this condition, it will fail. I can't really find anything in the Commvault logs outside of a few VSS-related errors. What could this mean? Regards, Winston
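Assuming the Windows MediaAgent snapshots the DDB volume through VSS, a common cause is the shadow copy storage area on that volume being full or capped too low. The commands below are standard Windows VSS diagnostics run from an elevated prompt, not a Commvault procedure; X: and 20GB are placeholders for the DDB volume and the desired cap.

    rem Shadow copy storage allocation per volume
    vssadmin list shadowstorage
    rem Existing shadow copies
    vssadmin list shadows
    rem Raise the shadow storage cap on the DDB volume (X: and 20GB are placeholders)
    vssadmin resize shadowstorage /for=X: /on=X: /maxsize=20GB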
Changing iRMC (HyperScale Appliance HS1300 & HS3300) Password using IPMITool
The following procedure allows you to safely update the Fujitsu Appliance iRMC password without impacting operations such as RHEV-M hardware failure alerting. Important Note - for the HyperScale Appliance, Commvault leverages the IPMI protocol by design to monitor the physical hardware, and reports back to Command Center if there is a fault. IPMI (Intelligent Platform Management Interface) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware and operating system. The procedure is applicable to the following use cases: updating the iRMC password for security purposes, and resetting the iRMC password if it is forgotten or lost. IPMITool is installed at the Guest OS level (Red Hat OS). Updating the iRMC password: first, establish an SSH session to the Guest OS (HyperScale RedHat 7.#), then input the following command: # ipmitool user set password
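The command in the preview is cut off; as a generic ipmitool reference (standard ipmitool usage, not taken from the Commvault procedure - confirm the correct user ID and channel on your own appliance first), the usual syntax looks like this:

    # List BMC users on channel 1 to find the user ID to change
    ipmitool user list 1
    # Set a new password for that user ID (2 is only an example ID)
    ipmitool user set password 2 'NewStrongPassword'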