Commvault Q&A, release updates, and best practices
Hi, we are performing several Sybase and SAP HANA restores. One thing I noticed: the restore progress shown in Commvault is completely wrong, or at least misleading. It stays at 5% while the HANA and Sybase logs in Commvault show that, for example, 30% has already been loaded. Because the progress remains at 5% (and can stay there for hours with bigger databases), people think the restore is stuck, and sometimes they kill the restore because of that. Why can't Commvault show the real status of the restore? For Sybase, for example, the clsybagent.log from Commvault shows the exact status, so Commvault knows it... so why not show it in the GUI / Command Center? Even if it isn't an exact figure, having that as the job status would remove some frustration for users. Only a question, not a request to change something :)
We do automatic rollouts of Active Directory in Azure for our customers with Ansible. We want to install the AD iDataAgent with the unattended package and a custom install.xml file. We are already doing this successfully for the MA, proxy, and restore agent. For an unattended AD installation, the XML requires a password for the AD agent, but it expects a hash, and I cannot find out which hash (of the password) the installer expects. The plain password does not work, and a SHA-256 hash of the password does not work either. This is the line in the XML:
<activeDirectory>
  <userAccount password="3820c5c6992b6774aced93ecd88e04e35b4398d99e3f0a7d8" domainName="q1xyz.local" userName="svc-adbackup" />
</activeDirectory>
So it would be important to know how the setup encrypts the password when it creates the package, because during the automated rollout we fetch it from our central key vault at installation time.
The documentation for Workflows mentions some functions for Java to interact with or access other resources during execution: https://documentation.commvault.com/commvault/v11_sp20/article?p=49729.htm. But just from browsing through the other pages it becomes clear there are many more options, like activity.exitCode or workflow.setFailed(). Dissecting built-in workflows reveals other treasures such as csdb.execute("sqlquery"). Is there a complete list of all available functions and how/when to use them? Trial and error and reverse engineering by customers seems error-prone and inefficient.
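For what it's worth, this is roughly how I have seen these helpers used when dissecting the built-in workflows. It is only my own reconstruction of a script activity, not anything the documentation confirms, and the query and the exact signatures are illustrative guesses:

// Sketch pieced together from built-in workflows; nothing here is documented API.
var rows = csdb.execute("SELECT name FROM APP_Client");  // run a query against the CommServe database
if (activity.exitCode != 0) {                            // exit code reported by an activity
    workflow.setFailed();                                 // mark the whole workflow as failed
}

A complete, documented reference would make it unnecessary to guess at details like these.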
Hello, in our global infrastructure we have a backup solution managed by one global CommServe. In the current situation we are worried about connectivity to some MediaAgents in Russia, and about the possibility that in the future the connection from Europe to those MediaAgents could be blocked. My question is how to prepare the infrastructure for that, so that we do not lose the ability to back up and restore the servers in Russia. What type of solution could be implemented? I can imagine it would be difficult to split the CommServe across two machines, and once the situation is more stable we would like to merge both CommServes again. The situation between Russia and the rest of the world is very dynamic, so we don't know when such a solution will be needed. Regards, Michal
Hello everyone,
Environment: SP11.23 behind a web proxy.
I deployed the Office 365 apps through the CVO365CustomConfigHelper app. In the admin console I added the apps that were created, but their status remains 'Not available', so it is impossible to add mailboxes to protect. The service account is correct (the account is a Global Administrator). If you have any idea about the problem, please let me know.
Regards
Hi, my customer is having an issue with Hyper-V backups. A lot of VMs are left with snapshots/checkpoints after the backup completes, so a lot of AVHDX disks accumulate and fill up the storage. Microsoft said the backup should do the merge/delete after the backup completes. From my understanding it should be Hyper-V doing the merge/delete, since it is a Hyper-V checkpoint; Commvault only instructs Hyper-V to delete/merge it. Is there any way we can do the backup without a checkpoint?
I noticed a new alert was added to my CommServe, and I believe this happened after we recently upgraded to a new Feature Release. The alert in question is 'Data Verification Failure Detected', and it is briefly discussed at the link below.
https://documentation.commvault.com/11.20/expert/12395_data_verification_faq.html
A few things come to mind that are not completely clear to me. The alert says it detected corrupted data on the backup disk, but it does not tell me the job ID or the path. Looking at the alert in the GUI, it also does not have any additional options to select, such as job ID, storage policy, etc. How come? Job history for the subclient or storage policy shows everything green, with no issues with the jobs themselves. My understanding is that I would potentially have issues trying to restore that particular subclient up to the specific transaction log, so why would job history still show the job as successfully completed? The alert gives you two options, one to convert the subclient to full duri…
We are starting a new project with everything in AWS and are planning to use Commvault as our backup system in AWS. I'm still a novice with Commvault and I'm wondering: when deploying the Commvault image to an instance, is everything wrapped up in one instance, such as the CommServe, MediaAgent, etc.? Or do I need to deploy separate instances for each, i.e. one server for the CommServe, another for the MediaAgent, and so on?
Hi, I'm getting this error:
Error Code: [30:385]
Description: Failed to choose agent for backup for backup type [Transaction Log], reason [Failed to update AG information of MSSQLAGInstance.].
Source: hsapsqldb9, Process: SQLBackupMaster
I tried restarting the services on the cluster members, but that didn't help.
Hello, we are running a HANA database in a cluster setup, with a primary and a secondary Linux machine. I have a problem when doing a cross-platform restore of a HANA database. We are trying to restore a production HANA database from the production Linux machine (host) to a stage/development host. The restore starts, but it restores the database onto the secondary machine (host) in the cluster instead of the primary machine. I can describe this further and attach screenshots to explain better.
Best regards,
Fredrik Andreasson
Axians Linux team
Hi guys, I am creating a RHEV pseudo-client with the customer. We have noticed that while creating the RHEV pseudo-client, Commvault uses port 80 (HTTP) to communicate with the RHEV Manager. Port 80 (HTTP) is blocked by the firewall; they only use HTTPS (port 443) in the environment due to security restrictions. Is there any way to force Commvault (VSA) to use port 443 when creating the RHEV pseudo-client?
Regards,
Kamil
Hello, the hypervisor is VMware 6.7, the VMs are Windows Server 2019, and the MediaAgent is a Linux one with VSA installed. Full VM backup/restore works without issue. But for granular restore (live browse) from the VSA backup of a VM, a Windows VM is required, and I am having difficulties finding documentation (or best practices) on the deployment/configuration of this VM, even for simple questions like which packages to deploy on it. Surely MediaAgent, but only that one? A lot of documentation exists for the case where the MediaAgent runs Windows and the VM is Linux, since the FREL is available for download, and there are even instructions for installing a MediaAgent on a Linux VM and converting it to a FREL. But when the OSes are reversed, I'm lost. Does anyone have information to share (at least with me)?
Hi all, has anyone had any experience with protecting an on-site APS PDW appliance? At the moment the appliance dumps massive .bak files to a UNC path and Commvault backs those up with no dedupe or compression. It would obviously be preferable to back up the appliance directly, to take advantage of dedupe and compression and skip the .bak step altogether. Thanks in anticipation.
Hi guys, I have a customer who currently uses standard MediaAgents with Commvault: copy 1 goes to disk (a Pure FlashArray with NVMe drives), copy 2 to another Pure FlashArray, and copy 3 to tape, around 350 TB per week written to LTO-7 across 4 tape drives (weekly fulls) at a sustained throughput of 700 GB/hour per drive (4 drives in parallel). We are looking to replace both MediaAgents with two HyperScale X clusters. The questions are: how do we need to configure the HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB at the same throughput, knowing that we will be using nearline SAS drives in the HyperScale X cluster? Or can we use SSDs for the storage pool drives in a HyperScale X?
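As a rough sanity check on the figures above (my own back-of-the-envelope maths, so treat it as such): 4 drives × 700 GB/hour ≈ 2.8 TB/hour of sustained read from the storage pool, and 350 TB ÷ 2.8 TB/hour ≈ 125 hours, so the cluster would have to feed the tape drives more or less continuously for about five days out of every seven, on top of the normal backup ingest.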
We are currently testing restores using a Linux VSA for several Windows servers (2012 R2, 2016, 2019). After the first backup and restore we had no issues restoring specific files to the same server or to other servers (all Windows). Unfortunately, after last night's backup, we tried again today to restore some files, with no success. Every time we select the destination server we are prompted for a user/password for that server, and when we then try to browse to the specific path we keep getting the same error: 'Something went wrong. Server may be under maintenance. Trace ID: cvtZsQyx9Y'. Did something happen during the backup process so that the servers are no longer visible? This is a first-time issue and I can't find a solution for it. Any advice is appreciated. Thank you.
Hi community, I am very interested in your thoughts on customizing the following components, whether you do this, and how you currently go about it: Web Console, Command Center, Alerts, CommCell Console GUI. I am very excited about your answers and your experience with this topic.
Cheers,
Philipp
Hi all,
We have a customer that has an Isilon for disk storage. Backup speeds are OK, but DASH copies to DR and copies to tape are terrible. Last night's backups copy to DR at no more than 500 GB/hr if we're lucky, and copy-to-tape speeds do not exceed 200 GB/hr. The index and DDB are on NVMe and tested fine. The fallen-behind copies are literally years behind. Both Commvault and Isilon have checked it out and cannot do anything about it. We did spec HyperScale before implementation, but we were overruled, and now I sit with this issue. Very frustrating. Has anyone experienced dog-slow Isilon restores, DASH copies, or tape copies? How were you able to overcome this? It has gotten so bad that we are going to ask Commvault to either help us fix it or reconsider certifying Isilon as a disk library destination.
We have the following scenario: a cloud library shared between MA1 (a Windows VM on premises) and MA2 (a Windows VM in Azure). A VMware VM (backed up using VSA) is backed up by MA1. Browse and restore from MA1 works. With the services stopped on MA1, browse and restore from MA2 fails with 'Browse failed due to error while mounting virtual machine.' We see that the cvvd driver is not loaded on MA2, whereas it is loaded on MA1. (Per https://documentation.commvault.com/11.24/expert/30807_live_browse_and_block_level_browse.html: block-level browse uses a block-level driver to mount a pseudo disk on the MediaAgent being used for browse and restore operations; and under 'Cloud Library Support for Live Browse': for all VSA hypervisors that support live browse, you can perform live browse operations with the following cloud libraries: Microsoft Azure Storage: default container storage using hot or cold access tiers with General Purpose v1 (GPv1) or General Purpose v2 (GPv2) storage accounts.) The question is: is live browse supported for MediaAgents in Azure?
Hi, is there a way to use ransomware protection on Windows MediaAgents with a disk library on Cluster Shared Volumes? Once ransomware protection is activated, the filter driver 'CVDLP' with an altitude of 145180 (encryption) is added to the file system filter stack. This results in redirected I/O on all Cluster Shared Volumes:
BlockRedirectedIOReason      : NotBlockRedirected
FileSystemRedirectedIOReason : IncompatibleFileSystemFilter
Name                         : volume21
Node                         : node1
StateInfo                    : FileSystemRedirected
As a result, the cluster events are flooded with warnings: Cluster Shared Volume 'volume21' ('volume21') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared…
Hi, I have some strange behavior with Commvault and IntelliSnap. We trigger NetApp snapshots with Commvault jobs. These snaps are not copied to a Commvault library. The reason: we have a 2-node NetApp MetroCluster in place and a third NetApp with asynchronous SnapMirror replication to keep the history of snapshots there (so it is a kind of backup storage for the MetroCluster). We use Commvault only for triggering snaps and for restores from those snaps, but the data is kept on NetApp as snapshots. This way our service desk staff need only one GUI to restore backups, both from NetApp and from all the other kinds of backups we store in a Commvault library. For about three months I have observed the following problem: when you browse a NetApp snapshot job with Commvault, you don't see all the folders. When you browse the same snapshot with Windows Explorer, using ~snapshot after the UNC path, everything is there and you can restore it with no problem. Any idea why the folders are not visible when browsing the job in Commvault? I do…