Commvault Cloud Topics
Q&A. Technical expertise. Configuration tips. And more!
I am getting "Internal error occurred -5" on Readiness Check on several clients after the 11.32.35 upgrade. Backups still seem to be working, but we can't have this "false positive" alert every day. (The upgrade was done 4 days ago, so I don't think it is something that will go away "by itself".) We have tried restarting the Commvault services on the clients, and also the Commvault repair option. No improvement. Does anyone know what the cause could be? 🖖
Hi community, we have a lot of LTO9 tapes with only a few index jobs on them (the jobs are in the megabyte range), and I would like to re-use these LTO9 tapes (17.58 TB) for other tape-out copies. Nobody knows when these jobs will expire. Is there any way to copy the still-valid index jobs to another tape? Pick for refresh, tape-to-tape copy…? Thanks
Raw device backup and restore was successful for the file system, but validation is failing. Please find the error: an unrecoverable error was found on the same device. The file system state is not clean for one of the file systems; the state is "clean with errors".
We are running 11.28.36 in an AWS environment. The CommServe sits in a Management VPC and Couchbase is running in our Production VPC. In the Production VPC we have a media server (proxy server) that runs VM and Cassandra backups to a local S3 bucket. I have the URL for the Couchbase API server and an admin userid/password. I am looking for the ports that are required from the proxy (media server) to the Couchbase server(s). We have a tunnel set up so that we can load the Couchbase software on the required servers. Thanks, Chuck
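Once candidate firewall rules are drawn up, a quick reachability test from the proxy can confirm them. A minimal sketch in Python, assuming the commonly used Couchbase ports (8091 for the REST/admin API, 8093 for N1QL queries, 11210 for the data/KV service; the TLS equivalents are 18091/18093/11207) — confirm the exact list required by the Commvault Couchbase agent against the documentation. The hostname is a placeholder:

```python
import socket

# Placeholder hostname; replace with your Couchbase node(s).
COUCHBASE_HOST = "couchbase01.prod.internal"

# Assumed ports: 8091 (REST/admin), 8093 (N1QL), 11210 (data/KV).
# Add 18091/18093/11207 if TLS is enforced on the cluster.
PORTS = [8091, 8093, 11210]

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    status = "open" if check_port(COUCHBASE_HOST, port) else "blocked/unreachable"
    print(f"{COUCHBASE_HOST}:{port} -> {status}")
```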
Hello, I need to migrate an existing CommServe with the MediaAgent role to a new site/server. During the migration both the existing and the new CommServe should remain active for backups for about three months. Then the old CommServe will just be preserved for legacy restores. How can this best be achieved without buying a separate license for the new CommServe? Thanks
We have a storage account with a private endpoint connection configured. We tried to use an IAM role to back up this storage account, but got errors and were not able to configure the backup.
3984 1e88 12/08 16:46:13 ### [cvd] == cURL Info: OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to xxxxxxxx.blob.core.windows.net:443
3984 1e88 12/08 16:46:13 ### [cvd] == cURL Info: Closing connection 0
Other storage accounts without a private endpoint are working fine.
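SSL_ERROR_SYSCALL against the blob endpoint often just means the connection was cut off at the network layer, for example when the access node does not resolve the private endpoint's private IP (private DNS zone not linked to its VNet) and instead hits the locked-down public endpoint. A minimal check to run from the MediaAgent/access node, using only the Python standard library; the hostname is the masked one from the log:

```python
import socket
import ssl

# Masked account name from the log; replace with the real storage account FQDN.
HOST = "xxxxxxxx.blob.core.windows.net"

# 1) What does the access node resolve the endpoint to?
#    With a working private endpoint + private DNS zone this should be a
#    private address (10.x / 172.16-31.x / 192.168.x), not a public one.
addrs = {info[4][0] for info in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)}
print("Resolved addresses:", addrs)

# 2) Can a TLS handshake be completed on 443 at all?
ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS handshake OK, negotiated", tls.version())
except OSError as exc:
    print("Connection/TLS failed:", exc)
```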
Good day @everyone, quick question: we're trying to move the s3dfsRootDir for Live Mount operations to a dedicated drive. When specifying the key https://documentation.commvault.com/additionalsetting/details?name=s3dfsRootDir as an additional setting on the MediaAgent, we are not able to specify the location. If we put a value like D:\3dfscache into the key, the GUI displays that the string is outside of the dictionary values. So I left the value as "string"… but I am completely unsure where to define the actual location. Has anyone else struggled with this behaviour? Best regards
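If the Command Center dialog will not accept a free-form path for the value, one workaround is to push the additional setting programmatically and then verify it in the GUI. A hedged sketch using the cvpysdk Python SDK — the MediaAgent name, category and data type shown here are assumptions; take the exact category/type from the documentation page linked above before applying anything:

```python
from cvpysdk.commcell import Commcell

# Placeholders: web console host and credentials.
commcell = Commcell("webconsole.example.com", "admin", "password")

# The MediaAgent is also a client object in the SDK.
ma_client = commcell.clients.get("ma01")  # placeholder MediaAgent name

# Assumed category and data type; verify both against the s3dfsRootDir
# documentation page before running this.
ma_client.add_additional_setting(
    category="MediaAgent",
    key_name="s3dfsRootDir",
    data_type="STRING",
    value=r"D:\3dfscache",
)
```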
Hello guys! We know that working (especially a lot) in web frontends is terrible compared to a native software/program/app installation. Every click is a kind of reload. Clicks in the Java GUI are often instant; on the web there is always some waiting time. One way to avoid Command Center as much as you can is to write your own little software with API calls, CV cmdlets, or qcommands. But if you have to use Command Center, and Commvault is pushing us more and more into it, I want to improve the experience as much as possible. So I'm asking whether someone has already done that. What's currently on my mind: - Find the fastest browser. There are benchmark websites/tools available. - Customize this browser (i.e. tune it for performance) for just one special topic: Command Center (use another browser for all the internet stuff not related to Command Center). Besides quick reload/refresh times, a good feature (if Command Center supports it) would be a smart preca…
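For the "write your own little software with API calls" route, the Commvault REST API is one option for the read-only lookups that otherwise cost many Command Center clicks. A minimal sketch in Python; the web server hostname and credentials are placeholders, and the endpoints should be checked against the API documentation for your Feature Release:

```python
import base64
import requests

BASE = "http://webserver.example.com/webconsole/api"  # placeholder web server

# Login: the documented call takes the password base64-encoded and returns a
# token that is sent as the Authtoken header on subsequent requests.
resp = requests.post(
    f"{BASE}/Login",
    json={
        "username": "admin",
        "password": base64.b64encode(b"password").decode(),
    },
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
token = resp.json()["token"]

# Example read-only call: list jobs instead of clicking through the
# Command Center job history pages.
jobs = requests.get(
    f"{BASE}/Job",
    headers={"Authtoken": token, "Accept": "application/json"},
)
jobs.raise_for_status()
for job in jobs.json().get("jobs", []):
    summary = job.get("jobSummary", {})
    print(summary.get("jobId"), summary.get("status"))
```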
The DB2 backup size is 25 TB, spread across 16 disks. I saw the articles below.
Q1: Any suggestion on the maximum number of concurrent parallelism queries for my 25 TB DB2 restore? "You can improve restore operation performance by using parallelism. If the database contains a large number of table spaces and indexes, you can perform a restore operation more quickly when you set a maximum number of concurrent parallelism queries. This takes advantage of the available input/output bandwidth and processor power of the DB2 server." https://documentation.commvault.com/2023e/expert/using_parallelism_for_enhancing_db2_restore_performance_01.html
Q2: Any suggestion on the number of buffers and the buffer size for a 25 TB DB2 restore? "You can improve the performance of restore operations of backup images by increasing the number of buffers." https://documentation.commvault.com/v11/expert/setting_db2_buffers_for_restore_01.html
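There is no single correct number for a 25 TB restore, but the usual sizing logic is: parallelism bounded by the number of table spaces, CPU cores and disks, and enough buffers that every session and parallel agent always has data to work on (BUFFER is specified in 4 KB pages). A rough sketch of how the values fit together if the restore is driven from the DB2 CLI — the database name, timestamp, vendor-library path and starting numbers are assumptions, not Commvault guidance:

```python
# Rough sizing sketch for a large DB2 restore (assumed starting values only).
num_disks = 16          # from the post: data spread over 16 disks
cpu_cores = 16          # placeholder: cores on the DB2 server
num_sessions = 8        # placeholder: streams configured on the Commvault side

# Common starting point: parallelism bounded by disks and cores
# (and by the number of table spaces, which is not known here).
parallelism = min(num_disks, cpu_cores)

# Common starting point: at least two buffers per session/parallel agent
# so that readers and writers never sit idle waiting for memory.
num_buffers = 2 * max(parallelism, num_sessions)
buffer_pages = 4096     # BUFFER is given in 4 KB pages -> 4096 pages = 16 MB each

print(
    "RESTORE DATABASE PRODDB "                    # placeholder database name
    "LOAD '<commvault-vendor-library>' "          # placeholder library path
    f"OPEN {num_sessions} SESSIONS "
    "TAKEN AT <timestamp> "                       # placeholder backup timestamp
    f"WITH {num_buffers} BUFFERS BUFFER {buffer_pages} "
    f"PARALLELISM {parallelism} WITHOUT PROMPTING"
)
```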
Hello Community. We want to back up a VM (Commvault version 11.28.73, a virtual Windows 11 VM under Nutanix AHV version el7.nutanix.20220304.423), but unfortunately that doesn't work. A snapshot cannot be created ("Internal server error") and this error message appears: Description: Unable to create a virtual machine snapshot of [windows11-testvm]. [Internal Server Error.] Please check the virtual machine snapshot tree. Source: cvcs, Process: JobManager
Hello community, we are trying to configure SAP MaxDB backup. We have completed the client installation and instance configuration, but we are facing errors related to pipes/streams. Backint, the param file and the streams are configured according to the following guide: https://documentation.commvault.com/2022e/expert/22205_configuring_multiple_streams_for_backups_and_restores.html During a backup (run from the Workflow) we are getting the following error: Error Code: [19:857] Description: OK ERR -24925,ERR_PREPARE: Preparation of backup operation failed Can not create pipe '\\.\pipe\pipe_mem1'. (System error 13; Permission denied) Source: cv*****, Process: Workflow Looking forward to your feedback, Nikos
Requesting your assistance as we are experiencing an error in the Commvault console when attempting to query devices that do not have Commvault installed. This process was previously done through Active Directory. I am attaching the error message for your reference. If any of you have insights on how to resolve this problem or have encountered similar situations, I would greatly appreciate your guidance.
Good morning. We're considering MS SQL Server Always On using the new 2022 feature "Contained Availability Groups". This feature adds an msdb and master database to the AG, which allows jobs and users to be synced between machines. Is this functionality currently supported? How might it work for backups/restores? Thanks - Chris
We are trying to perform a DR test by installing Commvault and restoring the CommServe databases from the DR backup. The installation went fine and the CommServe is running, but now all of the storage libraries are offline. I have a copy of the data from a mount path that is unavailable, but I am unable to do a move mount path operation or import the backups. The mount path used in production is a CIFS share, and in the DR environment I also have a CIFS share that holds all of the data copied from production. So my question is: how do I move the mount path from production (which is offline in DR) to the new CIFS share (that has all the data)? I have also tried changing the device name to the one used for the primary mount path, but that did not work.
We have several remote offices in the Asia-Pacific region and are interested in deploying a MediaAgent in AWS to service those locations. These locations don't have VPN access to AWS, so we would need to send data securely from the on-premises servers in the various countries to the MediaAgent for long-term storage. The servers are small Windows servers, generally with less than 2 TB of file data and small rates of change. No VMs. I am looking for some help configuring the networking from the on-premises servers to the MediaAgent in AWS.
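Whichever Commvault network topology is chosen (typically a one-way route from the branch clients towards the cloud MediaAgent, or via a network gateway), the branch servers only need outbound access to the MediaAgent's tunnel port. A small sketch to verify that reachability from one of the on-premises servers — the hostname is a placeholder and 8403 is Commvault's default tunnel port; confirm the port actually configured on your network route:

```python
import socket

# Placeholder public DNS name of the AWS MediaAgent / network gateway.
MEDIA_AGENT = "cv-ma.apac.example.com"
TUNNEL_PORT = 8403  # Commvault default tunnel port; adjust if customised

try:
    with socket.create_connection((MEDIA_AGENT, TUNNEL_PORT), timeout=5):
        print(f"{MEDIA_AGENT}:{TUNNEL_PORT} reachable - outbound rule looks OK")
except OSError as exc:
    print(f"{MEDIA_AGENT}:{TUNNEL_PORT} NOT reachable: {exc}")
```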
Hi Community, in addition to the normal CV backup, I have the challenge of backing up the "virtual machine files" (*.vmdk, *.vmx, ...) to a stand-alone NAS, in a format independent of Commvault. My idea was to restore existing backups via a dedicated proxy on which the NAS NFS shares are also mounted. Manually, via Command Center, this works fine, but I would like to schedule it periodically as a monthly backup/restore procedure. Do any of you already have experience with how this could work? I would be happy about any answer…
We have an OVM Manager with 3 nodes added. We created a proxy on one of the nodes, and we are able to back up VMs on that same node, but for VMs on the other nodes we are getting the error below: Unable to attach disks for virtual machine [arc-orauat-db] to the proxy. Target Repository: is not presented on server. Do we need a proxy on each node? Do we need to configure OVM as one entity via the OVM Manager, or add the hypervisor IPs as well? We also have an Oracle RAC server with a shared disk — how do we take a full VM backup of both nodes?
We have 1 FREL for each VMware cluster we're using on CV 11.20. We've noticed the FRELs trying to reach public IP addresses with NTP packets, but we cannot allow that through our firewall. I logged onto the Linux console and found that the ntpd service was not active, and then it occurred to me that, as an appliance, the configuration should be applied by the Commvault client. We have a highly available NTP service that I'd like to point these clients at, and would appreciate some guidance on how to configure them.
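Before repointing the appliances, it is worth confirming that the internal NTP service is reachable from the FREL itself. A minimal SNTP query sketch using only the Python standard library — the server name is a placeholder, and this only tests UDP/123 reachability; it does not change how the appliance manages time sync:

```python
import socket
import struct
import time

NTP_SERVER = "ntp.internal.example.com"  # placeholder internal NTP service
NTP_EPOCH_OFFSET = 2208988800            # seconds between the 1900 and 1970 epochs

# Minimal SNTP client-mode request: LI=0, VN=3, Mode=3 in the first byte.
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(512)

# The transmit timestamp's seconds field starts at byte 40 of the 48-byte reply.
transmit_secs = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
print("NTP server time:", time.ctime(transmit_secs))
print("Local time     :", time.ctime())
```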
I want to configure extended retention for monthly fulls. I would prefer to have the extended fulls written to separate tapes from the regular tape copies, so that a large number of tapes filled with mostly aged data are not tied up. I know I can accomplish this with a second tape copy for just the extended fulls, but that method writes the full with extended retention to both tape copies. Is there a way to send the extended-retention copies to a different set of tapes using only one secondary tape copy, thereby reducing the total number of tapes while performing only one copy to tape of the extended full backups?
My team is responsible for delivering Commvault as a platform and we manage multiple environments. All environments leverage cloud storage as their primary storage target, and we are now deploying Storage Accelerator to maximize usage towards our on-premises S3 target, which is a FlashBlade/S target. The challenge we run into is the lack of visibility into whether Storage Accelerator is actually being used by the client or by the access node. According to the documentation, you should be able to see on the job details of a running backup that Storage Accelerator is in use, because the MediaAgent field shows the client name. However, this didn't happen, and we knew for a fact that the client was able to connect. We opened a ticket more than 12 months ago, and the investigation led to the resolution that the product contains a "cosmetic" bug resulting in the job controller detail not showing the change. The only way to verify whether the client is working and using Storage Accelerator is by going into the logs…