Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
Our DB2 backup size is 25 TB, spread across 16 disks. I saw the article below.

Q1: Any suggestion on the maximum number of concurrent parallelism queries for my 25 TB DB2 restore?

"You can improve restore operation performance by using parallelism. If the database contains a large number of table spaces and indexes, you can perform a restore operation more quickly when you set a maximum number of concurrent parallelism queries. This takes advantage of the available input/output bandwidth and processor power of the DB2 server."
https://documentation.commvault.com/2023e/expert/using_parallelism_for_enhancing_db2_restore_performance_01.html

Q2: Any suggestion on the number of buffers and buffer size for a 25 TB DB2 restore?

"You can improve the performance of restore operations of backup images by increasing the number of buffers."
https://documentation.commvault.com/v11/expert/setting_db2_buffers_for_restore_01.html
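For anyone sizing these options, below is a minimal Python sketch that derives starting values and hands them to the DB2 command line. The database name, backup path, and sizing heuristics are all illustrative assumptions, not Commvault or IBM recommendations; in a Commvault-driven restore the same knobs are exposed as the parallelism, buffer count, and buffer size restore options.

# Minimal sketch: derive starting RESTORE options for a large DB2 database
# and pass them to the DB2 CLP. The heuristics are assumptions meant to
# illustrate the tuning knobs; validate them against your own I/O bandwidth
# and CPU count before a real DR run.
import shutil
import subprocess

DB_NAME = "PRODDB"            # hypothetical database name
BACKUP_DIR = "/db2/backups"   # hypothetical backup image location
DISKS = 16                    # the backup image is spread across 16 disks
CPUS = 8                      # CPUs available on the DB2 server

# Heuristic: one reader per disk, capped by CPU count, for PARALLELISM.
parallelism = min(DISKS, CPUS)

# Heuristic: two buffers per parallel stream; BUFFER is sized in 4 KB pages,
# so 4096 pages = 16 MB per buffer.
num_buffers = 2 * parallelism
buffer_pages = 4096

restore_cmd = (
    f"RESTORE DATABASE {DB_NAME} FROM {BACKUP_DIR} "
    f"WITH {num_buffers} BUFFERS BUFFER {buffer_pages} "
    f"PARALLELISM {parallelism}"
)

if shutil.which("db2"):
    subprocess.run(["db2", restore_cmd], check=True)
else:
    print("db2 CLP not found; command would be:", restore_cmd)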
We are trying to perform a DR test by installing Commvault and restoring the CommServe database from a DR backup. The installation went fine and the CommServe is running, but now all of the storage libraries are offline. I have a copy of the data from a mount path that is unavailable, but I am unable to perform a move mount path operation or import the backups. The mount path used in production is a CIFS share, and in the DR environment I also have a CIFS share that holds all of the data copied from production. So my question is: how do I move the mount path from production (which is offline in DR) to the new CIFS share (which has all the data)? I also tried changing the device name to the one used for the primary mount path, but that did not work.
Hi Team, greetings!

In our Commvault infrastructure we are getting the error below:

"The disk used to store the index on the analytics engine is running at 80 percent of capacity. Please add more disk space to prevent failures during analytics operations."
File: C:\Program Files\Commvault\ContentStore\IndexCache\AnalyticsIndex

We have already changed the existing index path to another drive (E:\), but we are still getting the above error daily. Could you please suggest a solution for this issue?
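One quick sanity check is whether the alert is still watching the old location rather than the new drive. A minimal sketch to compare utilization on both volumes (the E:\ path is an assumption; substitute your actual new index location):

import os
import shutil

# Check utilization of the old and new analytics index locations to see
# which volume the 80% alert is actually firing on.
paths = [
    r"C:\Program Files\Commvault\ContentStore\IndexCache\AnalyticsIndex",
    r"E:\AnalyticsIndex",  # hypothetical new location; adjust as needed
]
for path in paths:
    if not os.path.exists(path):
        print(f"{path}: not found")
        continue
    total, used, free = shutil.disk_usage(path)
    print(f"{path}: {used / total:.0%} used, {free / 2**30:.1f} GiB free")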
I need to figure out how to set up a subclient to back up vVol disks in our VMware environment. When I google this, it brings up everything but vVols, and I am having difficulty finding it in the Commvault documentation. Our level is 11.28.83. Is this the same as setting up any other subclient, or does it need a plugin to work? Any help you guys can give me on this is appreciated.
I want to configure extended retention for monthly fulls. I would prefer the extended fulls to be written to a separate set of tapes from the regular tape copies, so that a large number of tapes filled with mostly aged data are not tied up. I know I can accomplish this with a second tape copy for just the extended fulls, but that method writes the extended-retention full to both tape copies. Is there a way to send the extended-retention fulls to a different set of tapes using only one secondary tape copy, thereby reducing the total number of tapes while copying each extended full to tape only once?
I write backups to an on-prem SAN. I have a small amount of data on one client that needs to be encrypted to meet a data-at-rest encryption requirement. Is there a way to encrypt only the data backed up from a single directory on a Windows client, or from a single drive on that client? If not, what is the best option for encrypting my data at rest with the least impact on backup times? I do not need to encrypt it in transit, just at rest.

Thanks,
Larry Upton
Hi Team, our current environment is Commvault V11 SP32, and we are in the process of deploying an HPE MSL3040 Scalable Base Module (up to 3 half-height drives) for tape-out copies, attached to a physical server running AlmaLinux 8. Will this OS support the tape library and the Commvault MediaAgent software?

Regards,
ManiD
I am having issues with my Weekly Tape Copy storage policy. I already opened a case with Commvault, but they still couldn't find a solution. The Monthly policy works fine, so I don't think the issue is our tape library. Has anyone here had the same issue?

Error Code: [13:138]
Description: Error occurred while processing chunk in media [V_174502], at the time of error in library [XXX_MA1_DISKLIB] and mount path [[XXXX-backup-ma1] H:\XXXXX_DEDUP], for storage policy [XXXXX-FS-DEDUP] copy [3 - Weekly Tape Copy] MediaAgent [XXXXX]: Backup Job. Unable to set up the copy pipeline. Please check connectivity between Source MA [XXXX] and Destination MA [XXXXX]. Source: XXXXX, Process: CVJobReplicatorODS
Hello Team, is there a way to determine storage allocation per server on a primary disk copy? I've tried running reports and using Command Center, but I can't find any option to select particular servers and see how much space their backups consume on our primary disk copy. Thanks!
I recently configured Catalyst over Ethernet and didn't have any issues during the integration; even a single-VM backup test was successful. However, when I scheduled a subclient with multiple VMs (5), the job goes into a waiting state with the error below:

Error Code: [62:2910]
Description: Error occurred in Disk Media, Path [CATVMsManagement\1NDD43_11.24.2023_02.03\CV_MAGNETIC\V_3] [-1404 OSCLT_ERR_SERVER_OFFLINE]. For more help, please call your vendor's support

I validated the store in Commvault: it is in status Ready with Read/Write access, and the Mount Path Allocation Policy is set to Maximum Allowed Writers. I also ran a mount path validation on the store, which was successful. The error is only thrown when there is more than one VM in the subclient. Is there something else to configure to allow multiple data streams on the StoreOnce side? Here is a snippet of JobManager.log:

19296 66c8 11/23 22:31:12 19 Servant Remote Stream Allocation request for  streams of type  received. Req
Hello, does anyone in the community have experience with offsite backup to an S3 bucket in AWS?

We created a bucket in AWS with the Object Lock option in compliance mode and a retention of one day. We configured the matching storage policy in Commvault the same way: compliance lock and one-day retention.

We then sent a VSA backup to this storage policy, and everything worked immediately without any errors. However, after the retention expired, the data in the AWS bucket was not deleted. To find out whether something was stuck, I manually deleted the data in the storage policy. That was yesterday. Today, the data that Commvault wrote is still in the AWS bucket: there is no delete marker on it, the bucket is unchanged at 7.7 GB, and the number of files and folders has not changed.

Regards,
Thomas
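For anyone comparing notes: with versioning and Object Lock enabled, a delete only places a delete marker, and Commvault prunes asynchronously after data aging, so objects can legitimately outlive both retention settings for a while. Here is a minimal boto3 sketch (bucket and prefix names are placeholder assumptions) to inspect what is actually holding the objects:

from botocore.exceptions import ClientError
import boto3

BUCKET = "my-commvault-offsite"   # hypothetical bucket name
PREFIX = "CV_MAGNETIC/"           # hypothetical Commvault data prefix

s3 = boto3.client("s3")

# Bucket-level Object Lock configuration (mode and default retention).
lock = s3.get_object_lock_configuration(Bucket=BUCKET)
print("Object Lock config:", lock.get("ObjectLockConfiguration"))

# Walk object versions: a delete marker shows a soft delete happened;
# a future RetainUntilDate shows the version is still locked.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for marker in page.get("DeleteMarkers", []):
        print("delete marker:", marker["Key"], marker["LastModified"])
    for version in page.get("Versions", []):
        try:
            ret = s3.get_object_retention(
                Bucket=BUCKET, Key=version["Key"], VersionId=version["VersionId"]
            )
            print(version["Key"], "retained until",
                  ret["Retention"]["RetainUntilDate"])
        except ClientError:
            print(version["Key"], "no retention set")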
AUX copy jobs are failing with the error: "Error Code: [62:294] Error occurred in Disk Media, Path [\\CVDisklib02\...\], cannot impersonate user."

The environment is running version 11.32.23 and we are in the process of refreshing it. We have one old MediaAgent running backups to network storage. We added a new MediaAgent to the environment and configured new mount paths to the same storage in a grid configuration. Backups and restores to and from this storage work fine. We have a DR setup with the same configuration. When we try to run an aux copy from the new MediaAgent in the DC to the DR storage, the aux copies fail with this error. I am attaching the log and a screenshot for reference. Any urgent help would be appreciated; going through support is taking a long time to get an answer, and we have already been waiting four days on another issue.
Hello everyone, is there a way to get a list of all jobs that are protected by a specific index job in a storage policy? I have several tapes that are not being exported. Cross-checking the data between the Index Backup Retention report and the Forecast report, these tapes are not being released because of an index job that is protecting backup jobs. The problem is that I cannot find the jobs indicated as protected anywhere in the history of the whole storage policy. Thanks and regards
What will happen if a restore from tape is in progress and I suspend the job, place Commvault in maintenance mode, and reboot the tape library? Once the tape library comes back up and I resume the job, will the restore be corrupted?
Hi Team, we are looking to configure MediaAgents in an active-active or active-standby arrangement with a SAN-attached disk library. The MediaAgents are Windows. We would also like to know whether Commvault offers active-active or active-standby options for this.
We have recently expanded our MSL6480 tape library with six new LTO-9 drives and initialized 10 new LTO-9 tapes from the library console. However, when checking the properties of the tapes in the CommVault console, under the Media Info tab the format & type shows as Ultrium-V7. Has anyone faced this problem? Please help me fix it.
Our ANF SAP HANA system will generate a 12 TB archive file daily, which we will save on the Azure cool tier with one-year retention. Does Commvault support lifecycle management? We would prefer to keep the first month of archive files in the cool tier for quick restores, and move the remaining 11 months to the archive tier for cost savings.
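Azure can also handle this natively with a storage-account lifecycle management rule (tierToArchive after N days), independent of Commvault; just note that restores from the archive tier require rehydration first. As a rough illustration of the same rule, here is a sketch using the azure-storage-blob SDK, where the account URL, container name, and 30-day threshold are all assumptions:

from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, StandardBlobTier

ACCOUNT_URL = "https://mybackupacct.blob.core.windows.net"  # hypothetical
CONTAINER = "hana-archive-logs"                             # hypothetical

service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
container = service.get_container_client(CONTAINER)

# Demote blobs that have sat in the Cool tier for more than 30 days,
# mirroring a lifecycle rule's daysAfterModificationGreaterThan: 30.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

for blob in container.list_blobs():
    if blob.last_modified < cutoff and blob.blob_tier == "Cool":
        container.get_blob_client(blob.name).set_standard_blob_tier(
            StandardBlobTier.Archive
        )
        print("moved to archive tier:", blob.name)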
Hello, we have a Commvault Simpana server, version 11.28.68. The tape library shows an error for one tape: "Remove the cartridge and inspect it for damage. Retry operation with another cartridge." However, in the Commvault console I do not see any error for this tape, and it is still being used for backups. I plan to mark that tape somehow (perhaps prevent reuse) so that it is no longer written to but can still be used for restores until its retention expires.

Best regards,
Elizabeta
Loved Ghostbusters. One of the best quips: Spengler: "I'm fuzzy on the good/bad thing. Define 'bad'." Egon: "Imagine all life as you know it ending instantly as every molecule in your body explodes at the speed of light."

I'm looking to replace our MediaAgents, one of which has a deduplication database whose query times are getting up into the 1.5 µs range, and the dedupe drive/path has ~2.2 TB in it (per the properties of our dedupe storage policy). For the "Back-end size for disk storage" column in the "Hardware Specifications for Deduplication Mode" table (https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html), I'm trying to choose between the "Extra large" spec with 2 x DDB disks (1), as possibly referenced by the "Total Data Size on Disk" figure, and simply the "Extra small" spec (2). My thought is that #1, "up to 1000 TB", refers to the "total size on disk for all DBs" from the policy. I'd just like to be clear on that before crossing the streams. Thank you for your help ~