Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,673 Replies
Our DB2 backup is 25 TB, spread across 16 disks. I saw the article below.

Q1: Any suggestion on the maximum number of concurrent parallelism queries for my 25 TB DB2 restore?

"You can improve restore operation performance by using parallelism. If the database contains a large number of table spaces and indexes, you can perform a restore operation more quickly when you set a maximum number of concurrent parallelism queries. This takes advantage of the available input/output bandwidth and processor power of the DB2 server."
https://documentation.commvault.com/2023e/expert/using_parallelism_for_enhancing_db2_restore_performance_01.html

Q2: Any suggestion on the number of buffers and the buffer size for a 25 TB DB2 restore?

"You can improve the performance of restore operations of backup images by increasing the number of buffers."
https://documentation.commvault.com/v11/expert/setting_db2_buffers_for_restore_01.html
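As a starting point for the sizing questions above, here is an illustrative back-of-the-envelope calculation (not official Commvault or IBM guidance). It assumes one parallel stream per physical disk/stripe and that the restore buffer size is expressed in 4 KB pages; tune from there based on observed throughput.

```python
# Illustrative sizing sketch for a DB2 restore. Assumptions: one stream per
# disk/stripe is a reasonable starting parallelism, and buffer size is given
# in 4 KB pages. Actual values should be tuned against I/O and CPU headroom.

def restore_sizing(total_tb: int, disks: int, buffers: int, buffer_pages: int):
    """Return (suggested_parallelism, total_buffer_memory_gb)."""
    # Common starting point: one concurrent stream per physical disk/stripe.
    parallelism = disks
    # Memory consumed by restore buffers: buffers * pages-per-buffer * 4 KB.
    buffer_bytes = buffers * buffer_pages * 4096
    return parallelism, buffer_bytes / 1024**3

p, mem_gb = restore_sizing(total_tb=25, disks=16, buffers=16, buffer_pages=1024)
print(f"parallelism={p}, buffer memory={mem_gb:.3f} GB")
```

For a 16-disk layout this suggests 16 streams; 16 buffers of 1024 pages costs only 64 MiB, so buffer count/size can usually be raised well beyond this on a server with free memory.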
We are trying to perform a DR test by installing Commvault and restoring the CommServe database from a DR backup. The installation went fine and the CommServe is running, but now all of the storage libraries are offline. I have a copy of the data from a mount path that is unavailable, but I am unable to perform a move mount path operation or import the backups. The mount path used in production is a CIFS share, and in the DR environment I also have a CIFS share that holds all of the data copied from production. So my question is: how do I move the mount path from production (which is offline in DR) to the new CIFS share (which has all the data)? I have also tried changing the device name to the one used for the primary mount path, but that did not work.
I want to configure extended retention for monthly fulls. I would prefer the extended fulls to be written to a separate set of tapes from the regular tape copies, so that a large number of tapes filled with mostly aged data are not tied up. I know I can accomplish this with a second tape copy for just the extended fulls, but that method writes the full with extended retention to both tape copies. Is there a way to send the extended retention copies to a different set of tapes using only one secondary tape copy, thereby reducing the total number of tapes while performing only one copy to tape of each extended full backup?
Hi Team, greetings! In our Commvault infrastructure we are getting the error below:

"The Disk used to store the index on the analytics engine is running at 80 percent of capacity. Please add more disk space to prevent failures during analytics operations."
File: C:\Program Files\Commvault\ContentStore\IndexCache\AnalyticsIndex

We have already changed the existing index path to another drive (E:\), but we are still getting the above error daily. Could you please suggest a solution for this issue?
I need to figure out how to set up a subclient to back up vVol disks in our VMware environment. When I google this, it brings up everything but vVols, and I am having difficulty finding it in the Commvault documentation. Our version is 11.28.83. Is this just the same as setting up any subclient, or does it need a plugin to work? Any help you can give me on this is appreciated.
Hi Commvaulters, can someone advise me on the network ports that need to be open in order to perform an aux copy between two MAs? We installed a new MA at a remote site, and we want to open only the needed ports between it and the CS (for communication) and between it and the MAs located at the main site. Please note that we are running CV version 11.24. I know that the main ports are 8400 for communication and 8403 for data transfer. Are there any other ports that need to be opened? We want to minimize port openings in order to fully secure the remote MA. Regards.
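Once the firewall rules are in place, a quick TCP probe of the two known ports (8400 for communication, 8403 for data transfer) can confirm reachability between the MAs before kicking off an aux copy. A minimal sketch, assuming a placeholder hostname for the remote MA:

```python
# Minimal TCP reachability probe (a generic sketch, not a Commvault tool) to
# verify the known Commvault ports are open between two MediaAgents.
# "remote-ma.example.com" is a placeholder hostname.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

for port in (8400, 8403):
    state = "open" if port_open("remote-ma.example.com", port) else "blocked"
    print(f"port {port}: {state}")
```

A probe like this only proves the TCP path; additional ports may still be required depending on the topology, so checking the deployment's network requirements documentation is still advisable.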
Hello, does anyone in the community have experience with offsite backup to an S3 bucket in AWS? We created a bucket in AWS with the Object Lock option in compliance mode and a retention of one day. We did the same with a storage policy in Commvault: compliance lock, retention of one day. We then sent a VSA backup to this storage policy, and everything worked immediately without any errors. However, after the retention expired, the data in the AWS bucket was not deleted. To find out whether something was stuck, I manually deleted the data in the storage policy. That was yesterday. Today, the data that Commvault wrote is still available in the AWS bucket. There is no delete marker on it, and the bucket has not changed from its size of 7.7 GB or its number of files and folders. Regards, Thomas
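One detail of Object Lock semantics worth keeping in mind here: a compliance-mode lock only *blocks* deletion until the retain-until date passes; S3 never deletes the data by itself once the lock expires. The writer (e.g. Commvault's data aging/pruning, which runs on its own schedule) or a bucket lifecycle rule must still issue the delete. A small sketch of that logic, under the stated assumption of a one-day compliance retention:

```python
# Sketch of compliance-mode Object Lock behavior (illustrative only).
# Object Lock prevents deletes until retain-until; it never performs deletes.
from datetime import datetime, timedelta, timezone

def delete_allowed(written_at: datetime, retention_days: int, now: datetime) -> bool:
    """True once the compliance-mode retain-until date has passed."""
    retain_until = written_at + timedelta(days=retention_days)
    return now >= retain_until

written = datetime(2024, 1, 1, tzinfo=timezone.utc)
# During retention: even the root account cannot delete the object version.
print(delete_allowed(written, 1, datetime(2024, 1, 1, 12, tzinfo=timezone.utc)))  # False
# After retention: deletion is *permitted*, but the object remains until a
# DELETE request actually arrives (e.g. from the backup software's pruning).
print(delete_allowed(written, 1, datetime(2024, 1, 3, tzinfo=timezone.utc)))  # True
```

So data lingering in the bucket after the lock expires is not necessarily an error; it may simply be waiting for the next pruning pass from the application that wrote it.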
Hi community, we have a lot of LTO9 tapes with only a few index jobs on them (the jobs are in the megabyte range), and I would like to re-use these LTO9 tapes (17.58 TB) for other tape-out copies. When these jobs will expire, nobody knows. Is there any way to copy still-valid index jobs to another tape? Pick for refresh, tape-to-tape copy…? Thanks
I write backups to an on-prem SAN. I have a small amount of data on one client that needs to be encrypted to meet a data-at-rest encryption requirement. Is there a way to encrypt only the data backed up from a single directory on a Windows client? Or a single drive on that client? If not, what is the best option for encrypting my data at rest with the least impact on backup times? I do not need to encrypt it in transit, just at rest. Thanks, Larry Upton
Hi Team, our current environment is running Commvault V11 SP32, and we are in the process of deploying an HPE MSL3040 Scalable Base Module (up to 3 half-height drives) for tape-out copies, with a physical server running the AlmaLinux 8 operating system. Will this OS support the tape library and the Commvault MediaAgent software? Regards, ManiD
Hi Team, we are looking to configure media agents in active-active or active-standby mode with a SAN-attached disk library. The media agents are Windows. We would also like to learn whether Commvault offers active-active or active-standby options.
Hello, I'm pretty new to Commvault and taking my first steps. My problem is that I have 8 TB of local storage in my media agent, formatted with a 64k allocation unit size. I created a mount path, but it shows only a couple of GB usable. What can I do to have the capacity recognized correctly? The old hardware is connected the same way. Thanks in advance for your help. Greetings from a newbie
I am having issues with my Weekly Tape Copy storage policy. I already opened a case with Commvault, but they still couldn't find a solution. The Monthly policy works fine, so I don't think the issue is our tape library. Has anyone here had the same issue?

Error Code: [13:138]
Description: Error occurred while processing chunk in media [V_174502], at the time of error in library [XXX_MA1_DISKLIB] and mount path [[XXXX-backup-ma1] H:\XXXXX_DEDUP], for storage policy [XXXXX-FS-DEDUP] copy [3 - Weekly Tape Copy] MediaAgent [XXXXX]: Backup Job. Unable to setup the copy pipeline. Please check connectivity between Source MA [XXXX] and Destination MA [XXXXX]. Source: XXXXX, Process: CVJobReplicatorODS
Hello Team, is there a way to determine storage allocation per server on a primary disk copy? I've tried running reports and using Command Center, but I can't seem to find any option to select particular servers and then see how much space their backups are consuming on our primary disk copy. Thanks!
I recently configured Catalyst over Ethernet and didn't have any issues during the integration; even a single-VM backup test was successful. However, when I scheduled a subclient with multiple VMs (5), the job goes into a waiting state with the error below:

Error Code: [62:2910]
Description: Error occurred in Disk Media, Path [CATVMsManagement\1NDD43_11.24.2023_02.03\CV_MAGNETIC\V_3] [-1404 OSCLT_ERR_SERVER_OFFLINE]. For more help, please call your vendor's support

I validated the store in Commvault: it is in status Ready with Read/Write access, and the Mount Path Allocation Policy is set to Maximum Allowed Writers. I also ran a mount path validation on the store, and it was successful. The error is only thrown when there is more than one VM in the subclient. Is there something else to configure to allow multiple data streams on the StoreOnce side? Here is a snippet of JobManager.log:

19296 66c8 11/23 22:31:12 19 Servant Remote Stream Allocation request for  streams of type  received. Req
Hi, for data security I was told to look into the WORM option at the primary copy level of our storage policies. A bit of background on our environment: we have a short retention set on our primary copy of 35 days and 1 cycle. My understanding is that this WORM option in storage policies works within the Commvault software: once it is enabled, no admin (or anyone else) can delete backup jobs, and we have to wait for jobs to age out and be pruned by Commvault automatically. I was then advised that if I enable WORM, the DDB for that storage policy will be sealed, and a new DDB will be created automatically and rebaselined. So I have some questions:

- Is DDB sealing an automated process? I enabled the WORM option more than a month ago on a storage policy for testing, but I cannot see a sealed DDB under 'Deduplication engines'.
- Our disk libraries are quite big (from 300 TB to 800 TB). If we need to rebaseline every time, will this take a long time and impact performance? With only 35
Hello everyone, is there some way to get a list of all jobs that are protected by a specific index job in a storage policy? I have several tapes that are not being exported. Cross-checking the data between the Index Backup Retention report and the Forecast report, these tapes are not being released because of an index job that is protecting backup jobs. The problem is that I cannot find, in the history of the whole storage policy, the jobs indicated as protected. Thanks and regards
Aux copy jobs are failing with the error: "Error Code: [62:294] Error occurred to disk media, path [\\CVDisklib02\...\]. Cannot impersonate user."
Our environment is running version 11.32.23, and we are in the process of refreshing it. We have one old media agent running backups to network storage. We added a new media server to the environment and configured new mount paths to the same storage in a grid configuration. Backups and restores to and from this storage work fine. We have a DR setup with the same configuration. When we try to run an aux copy from the new media agent in the DC to the DR storage, the aux copies fail with this error. I am attaching the log and a screenshot for reference. Any urgent help would be appreciated; going through support is taking a long time to get an answer. We have been waiting four days for an answer on another issue.
I'm looking to migrate one of my media agents to new server hardware and am looking for the best approach with minimal downtime. I was thinking I could set up the new hardware with the MediaAgent role/software and start to move mount paths to the new server. Once all mount paths have been moved to the new MA, I can then update my storage policies to point to it. Is there anything else I need to look out for or would need to do? Any advice or knowledge on this would be great. Thanks