Commvault Q&A, release updates, and best practices
I am looking to raise the debug level so my cloud logs show more detail. I remember that in the past, creating an empty file in the root of the media agent install did this for Windows media agents. Is there a similar method for Linux? I want to see the curl errors and TLS handshakes.
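One approach worth checking (a sketch under assumptions, not a confirmed procedure): on Linux, instance settings live under /etc/CommVaultRegistry/Galaxy/Instance001, and per-module log verbosity can reportedly be set as a <MODULE>_DEBUGLEVEL entry in the EventManager section's .properties file. The module name used below (CloudActivity) is a guess; confirm it against the header of the log file you want to make verbose, and expect a service restart to be needed. The demo runs against a temporary file so it is safe to execute anywhere:

```python
# Sketch: add or raise a per-module debug level in a Commvault-style
# EventManager .properties file. The real path and the exact key name
# (CLOUDACTIVITY_DEBUGLEVEL here) are assumptions to verify.
import os
import tempfile

def set_debug_level(props_path: str, module: str, level: int) -> None:
    """Add or update '<MODULE>_DEBUGLEVEL <level>' in a properties file."""
    key = f"{module.upper()}_DEBUGLEVEL"
    lines = []
    if os.path.exists(props_path):
        with open(props_path) as f:
            # drop any existing entry for this key so we don't duplicate it
            lines = [l for l in f.read().splitlines()
                     if not l.startswith(key + " ")]
    lines.append(f"{key} {level}")
    with open(props_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Demo on a temp file instead of the real
# /etc/CommVaultRegistry/Galaxy/Instance001/EventManager/.properties
demo = os.path.join(tempfile.mkdtemp(), ".properties")
set_debug_level(demo, "CloudActivity", 3)
print(open(demo).read().strip())
```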
Hello community, I have a question regarding an error message from a DDB Verification job. Please see the error marked in red in this abstract of the logs:

8120 1010 09/04 19:42:54 22835333 [Controller] Getting Chunks for Reader_173
8120 1098 09/04 19:42:54 22835333 [Reader_171] Failed to get chunk integrity record from SIDB
8120 1098 09/04 19:42:54 22835333 [Reader_171] Cannot process data for the chunk.
8120 1098 09/04 19:42:54 22835333 41087620-# [DM_BASE ] removeAllArchFilesMap(): Removing all the archive file ids from archive file map.
8120 1098 09/04 19:42:54 22835333 [Reader_171] Discarding queued chunk list
8120 1010 09/04 19:42:54 22835333 [Controller] Getting Chunks for Reader_173 Last ArchiveFileId  CommCellId  controller will wait for this call before proceeding to get next message for other workers

I checked the documentation and the knowledge base to find out what the error is telling me. From my point of view, the DDB doesn't have any information about a certain…
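Not an answer to the SIDB error itself, but since excerpts like the one above are hard to read inline, here is a small sketch that splits lines of this format into fields so messages can be grouped per worker. The field meanings (process id, thread id, job id) are my reading of the layout, not official documentation:

```python
# Sketch: split Commvault-style log lines into named fields.
# Layout assumed: <pid> <tid> <MM/DD> <HH:MM:SS> <jobid> <message>.
import re

LOG_RE = re.compile(
    r"^(?P<pid>\d+)\s+(?P<tid>\d+)\s+"
    r"(?P<date>\d{2}/\d{2})\s+(?P<time>\d{2}:\d{2}:\d{2})\s+"
    r"(?P<jobid>\d+)\s+(?P<rest>.*)$"
)

def parse(line: str) -> dict:
    """Return the named fields of one log line, or {} if it doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else {}

sample = ("8120 1098 09/04 19:42:54 22835333 "
          "[Reader_171] Failed to get chunk integrity record from SIDB")
print(parse(sample)["jobid"], parse(sample)["rest"])
```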
Hello, I'm searching for a way to automate CommCell Migration operations, to migrate LTO-5 tape metadata to a new CommCell so that restores remain possible. I didn't find any REST API or qcommand to do this. Is there a way to automate these actions, or are they manual-only operations? Regards, Christophe
Hello, I have a Commvault (Simpana) server, version 11.28.68, and a few VMware virtual machines. I have a problem browsing a specific file from these virtual machines from a backup that is on a tape library. I read that live browse is not allowed for version 11.28.68 with Indexing Version 2. Is there any solution for this problem? Best regards, Elizabeta
We currently use IBM V5000 arrays as our Commvault backup target to land our deduplicated backups. We are starting to review other options to see what fast, cost-effective alternatives are out there. I prefer Fibre Channel connections but am open to other options. Since Commvault is really the brains in our scenario, the storage array does not need many features, just good speed. Which vendors' storage arrays do you use? Are you happy with them?
We have a “file server” that is a macOS 12 Monterey machine. I have built 2 separate custom installers, and both fail at the end with no error output, just an "installation failed" message. Is there something special about macOS 12? I notice they do not have it listed here: https://documentation.commvault.com/2022e/essential/142144_system_requirements_for_mac.html
Hello all, some doubts make me want to ask the community before acting, please. I've never set up such an MSSQL backup. As I'm not an MSSQL expert, I'll try to explain and understand as well as possible, so please be kind! I have an MSSQL failover cluster composed of 2 nodes. Only one node is active at a time, and it has the network share hosting the database mapped (it is not double-mapped on the passive node during this). There is no special availability group (that's mostly my point). I have experience backing up an Exchange DAG, so I would assume the same backup configuration philosophy: install the MSSQL agent on both nodes with no config, then create a virtual client ("MSSQL AG client"). But this is my doubt: when I select this, I'm asked for SQL clients (OK, the nodes) and SQL instances (OK, the instances), but also for an availability group. I haven't done that yet; I'm trying to understand this first. Maybe I can just leave the availability group list empty? Can someone tell me if I'm on the right track, or what else to do? Thanks in advance, Co
Hello, how can I check whether the Big Data Apps cleanup is running properly? I see Big Data Apps jobs on the VM backup SP 142. We are using Indexing V2, so the Big Data Apps (index backup) jobs are created not at the subclient level but at the VM level. For example, here is a screenshot of our VM SP: there are 139 Big Data Apps jobs. Now I want to know whether the cleanup is working or not. We are using the system-created schedule for index backups, which runs once a day, but I don't see a scheduler for the cleanup. How can I check it?
Hi all, reaching out to you with regards to a customer query that I am dealing with. It was verified earlier with Dev (via 230614-889) that support and certification for RHEL 8.8 with RWP (Ransomware Protection for a RHEL 8.8 media agent) was expected to be completed by the end of August, based on the timeline at the time. However, on reviewing https://documentation.commvault.com/2022e/expert/126625_system_requirements_for_ransomware_protection.html, it does not currently list RHEL 8.8 as a supported OS. Could you please confirm whether RHEL 8.8 is supported for RWP? PS: the current environment is based on SP version 11.28. Looking forward to hearing from you,
We want to be able to restore all the data that we have archived for a customer back to the same server. They have archiving set up in both an FS and a NAS location, and they want to decommission archiving. I can trigger a restore from the most recent synthetic full archive and restore all the data back to the production servers; however, I just wanted to make sure it would completely replace all the stubs that were ever created on the source server. As I understand the documentation, retention of the stubbed data only applies to copies that contain the job. At the moment the storage policy sends data to a local MA, one aux copy to another location, and a third to tape, so hopefully the SP retention policy won't apply to the local MediaAgent. "Retention is only considered for copies that contain the job." From <https://documentation.commvault.com/11.24/expert/111875_creating_new_archiving_subclient_for_windows_file_system.html>
I have a slow-running aux copy job that I wish to analyse; this job has been running for several days. I suspended and resumed the job and then looked at the destination media agent's CvPerfLogAnalyze.log for the performance analysis. There are no entries in this log since July last year. Is this a bug, or has logging been inadvertently disabled? CV version 11.28.73
Hi, we are running standard streaming backups (configured in Command Center) for virtual machines. We have 3 backup copy destinations for virtual machines:
- primary site - disks
- primary site - tapes (monthly fulls) - extended retention
- secondary site - disks
We would like to configure and run Disaster Recovery and Replication jobs for some of our protected virtual machines, to replicate them from the primary site and create replicas at the secondary site. We would like to run this as periodic (daily) replication of the VM hot-site type. What is best practice in a scenario where we need traditional backup and also a replica at the secondary site: only DR and replication jobs with longer retention, or coexistence of backup jobs and DR/replication jobs at the same time? Regards, Przemek
I get “Failed to start backups as no eligible subclients were found” when I attempt to run a synthetic full backup schedule that contains MongoDB Big Data clients. I can, however, run the incremental schedule with no problems. There are no blackout windows defined for the clients. I did have these clients associated with a plan, but have now disassociated them.
Hi, my client changed the port number of his SQL instance to a specific number; his reason was to hide the instance for security purposes. However, it caused Commvault to fail to back up that SQL database, with the error "Failed to validate the credentials for instance...". The client told me that everything that needs to point to that instance has to use a name of the form SQLDB01,port-number\DB01. I wonder, what could I do in Commvault to back up this SQL database? Thank you.
How do I generate the command "qlist backupfiles" for an MS SQL Server client? Like this link, but for an MS SQL Server database: https://documentation.commvault.com/commvault/v11_sp20/article?p=45143.htm. Especially, how do I indicate the parameter <paths path=.…? What should be indicated for a SQL Server database? Thanks, Gustavo
Details below, but in a nutshell: I've successfully set up two VSA proxies for the HotAdd transport so far, one at each of two sites. One site, at least, needs more. I "swear" that I've set this proxy up the same as the others, but for the life of me, it's not appearing in the access node list. Both the existing proxy and the new one have the following software installed. They're on the same subnet (sequential IPs, for that matter), and both have established communication with the Command Center on port 8403. The working VSA also has a session on port 8400, although since it's working, that may be the reason why. I can successfully cvping ports 8400 and 8403 to the Command Center and our MediaAgent from the new proxy (call it vmProxy2). The VMware Tools versions are the same. Mono 6.8 is installed. Oracle Linux 8.8. What on Earth could I be missing at this point? Any ideas? Thank you for any help!
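As a quick sanity check alongside cvping, a plain TCP probe of the tunnel ports (8400/8403, as mentioned above) can rule out basic reachability problems. This is a generic stdlib sketch, not a Commvault tool; the demo probes a local listener so it runs anywhere:

```python
# Sketch: check whether a host accepts TCP connections on a given port,
# e.g. the CommServe tunnel ports 8400/8403 mentioned in the post.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: open a local listener on an ephemeral port and probe it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
probe_port = srv.getsockname()[1]
print(port_open("127.0.0.1", probe_port))  # True while the listener is up
srv.close()
```

In the real scenario you would call port_open("vmProxy2", 8403) and so on from each side of the connection.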
Does anyone have ideas or any documentation regarding security hardening for Apache Tomcat on a Windows machine? I can find it for Apache on a Linux server. Are there security precautions already applied in the Commvault package (not the secured OVA package)? Any help please.
Good morning, I have the following scenario: 2 tape libraries, A (primary) and B (secondary). 1 tape in library A went bad; we're marking it as bad and removing it from the library. How do I trigger Commvault to copy the missing jobs from the secondary library B to a new tape in A? Best regards, Daniel
Hello, yesterday I upgraded the CommServe server from FR28 to FR32. Unfortunately, the replication process for the CommServe DB between the servers is not working properly after the upgrade. The issue that appeared is: Query Result [Microsoft.SqlServer.Management.Common.ExecutionFailureException: The database was backed up on a server running version 15.00.4280. That version is incompatible with this server, which is running version 13.00.5216. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server. RESTORE LOG is terminated.] Please let me know a possible solution. Regards, Michal
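The error itself is a plain SQL Server rule: backups restore to the same or a higher major version, never a lower one, so a database backed up on 15.x (SQL Server 2019) cannot be restored onto 13.x (SQL Server 2016). The standby server's SQL instance has to be upgraded to match before log shipping can resume. A tiny illustration of the version check:

```python
# Sketch: SQL Server restore compatibility is one-directional.
# A backup from a higher major version can never be restored on a lower one.
def major(version: str) -> int:
    """Extract the major version from a string like '15.00.4280'."""
    return int(version.split(".")[0])

def can_restore(backup_ver: str, target_ver: str) -> bool:
    """True if a backup taken on backup_ver can be restored on target_ver."""
    return major(target_ver) >= major(backup_ver)

print(can_restore("15.00.4280", "13.00.5216"))  # False: the case in the error
```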
We have an AIX client where we haven't configured Commvault, but our server team cloned this AIX client from another server. When I run the Commvault status command on this cloned server, it displays the name of the server that is in backup. I want to uninstall Commvault from the cloned server. If I select "Remove all packages locally", will it uninstall Commvault from the server that we are backing up, given that the cloned server displays that server's name? I understand that "Remove all packages locally" does a decoupled uninstallation, where it doesn't communicate with the CommServe computer. I just want to make sure that my uninstalling the packages from the cloned server won't affect the original one.
Hi all! I'm trying to figure out the cause of a bottleneck during my backups/restores through Commvault; some jobs come up "abnormal", so I would like to investigate further. I saw that you can use the PerfAnalysis.log files found on the media agent, but when I try to open one, this is its content: 06/27 15:45:53 --- main(196) - Could not read log file or Performance counters for JobID 5155311 not found in the file. Is the file perhaps only generated correctly in some situations?
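For what it's worth, the line pasted above is itself a diagnostic ("counters for JobID ... not found"), so the file was generated but contained nothing for that job. A small sketch to scan a perf log for a given job ID, to tell "counters missing for this job" apart from "log never written"; the sample line is taken from the post, and the interpretation of the message is mine:

```python
# Sketch: find lines in a PerfAnalysis-style log that mention a job ID.
def lines_for_job(log_text: str, job_id: int) -> list:
    """Return all log lines containing the given job ID."""
    return [l for l in log_text.splitlines() if str(job_id) in l]

sample_log = ("06/27 15:45:53 --- main(196) - Could not read log file or "
              "Performance counters for JobID 5155311 not found in the file\n")
print(len(lines_for_job(sample_log, 5155311)))  # 1
print(len(lines_for_job(sample_log, 9999999)))  # 0
```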