Share Commvault best practices
Share use cases, tips & ideas with others
- 88 Topics
- 388 Replies
Since we launched a few months ago, we’ve had thousands of members sharing their tips and tricks, as well as helping each other out. Each and every one of you has helped a peer empower themselves through this amazing community. Take a moment to introduce yourself, share what project or challenge you are currently focused on, and let us know how we can help. Let’s use the power of our awesome community to boost each other ever upwards!
Hi all, I have two CommCells (primary and disaster recovery) and four media agents (two primary, two DR), all running Windows Server 2012. I will be spinning up a new 2019 environment that I plan to run in parallel until I have everything running on 2019 with no issues. Question: is this the best approach? I’ve seen a couple of nicely detailed posts here about in-place upgrades, but since I am also setting up new media agents in both locations, I figured I’d just spin up a new 2019 server for the CommServe in both locations. Does anyone have pointers or a guide on how to do this in the least painful way possible? I’ve been reading various KB articles as well, but I would love to hear from anyone who’s done this so I can get an idea of what’s ahead of me. Thanks, everyone.
Hi there, I would like to ask whether it is better to have a dedicated VSA proxy VM, or whether the VSA role should be taken by the CommServe server itself. What should be taken into account? Let’s say there is a smaller environment, roughly 20 TB of data and only VMs. Unfortunately, I didn’t find any recommendations or best practices.
Hi all, I would like to discuss the location and sizing of the fundamental installation directories:
- Installation Path - it should be on the local system disk; the required space will be used from the system disk
- Index Cache Path - it shouldn't be on the local system disk; at least 200 GB of free space is needed
- Database Engine Installation Path - could it be the local system drive? The space is allocated by the installer itself
- CommServe Database Installation Path - at least 200 GB of free space for a Medium environment; should it be on a different drive than the local one?
- Disaster Recovery Path - it has to be available if the local system crashes
Cheers!
Hi, we’re planning to use Commvault with on-prem disk storage that has the WORM feature enabled. At this moment, we know that when using WORM on the storage side, Commvault WORM must also be configured (the WORM Copy property). I had an open incident, 221229-281, asking about additional settings. The support reply was to run a workflow, but that is specific to Storage Pools, which is not our case. We’re planning to use WORM with Secondary Copies with deduplication. We can’t use Storage Pools right now, because we already have some copies bound to a Global Deduplication Policy. What other settings do we need? Below is a list of things that need to be done. Could you confirm whether this is correct, and add anything if necessary?
- Check the WORM Copy property
- Disable the CV option “Prevent Accidental Deletion”
- Source: have Scalable Resources enabled during the aux copy job (same source)
- Follow the steps here (https://kb.commvault.com/article/55258) before running the first aux copy (same sou
Hi guys, we are using partitioned DDBs running on two media agents on Azure VMs. Our current deduplication setting at the GDSP level is to seal and start a new DDB in case of any DDB corruption, instead of pausing and recovering the current DDB. Also, the option “Allow jobs to run to this copy while at least 1 partition is online” is not selected. I just wanted to check what the best practice for partitioned DDBs is as per Commvault.
CS: Azure VM
Primary library: Azure Blob
Secondary: Metallic
We have 9 HyperScale appliances and an offline NetApp NAS storage. The offline storage is used on Mondays only for aux copies, and the path is mounted on the HyperScale appliances using NFS. Now the offline storage library is 90% full and we are in the process of expanding the storage. As per the existing design, the NetApp NAS is mounted on all 9 HyperScale appliances. The new storage is an HP MSA SAN supporting iSCSI. We have allocated the new disks to the appliances using iSCSI and mounted them on all 9 HyperScale appliances as per the existing design. I manually tested by creating a file on one of the appliances and then tried to remount the mount path on another. While doing so, the mount point showed corruption and I had to clean the file system. My concern is whether we have taken the right approach, or whether there should be a change in design, like adding an additional server to serve NFS on a 10G network. Recommendations are much appreciated.
With all of the cool features coming in our Feature Releases, I thought it would be handy to create a list of best practices for planning and performing a CommServe update to a higher Feature Release. There’s definitely a lot of proactive work that can go into your process that will save you from potential headaches or even nightmare scenarios!
- Check out the Planning documentation first
- Plan the timing out in advance - make sure any key stakeholders know the CS will be down during this planned upgrade
- Do you use any type of CommServe Failover? If so, note that the instructions for those differ accordingly.
- Upgrading hardware as well? Review the documentation on Hardware Refreshes.
- Suspending jobs - ideally, have no jobs running; some jobs can be suspended and resumed afterwards, while others cannot be suspended and must be killed.
- Check and double-check that you have the media downloaded, or, if you are using the CommServe Software Cache, that it is updated with your required Maintenance Release.
Hi fellas, has anyone used the additional setting below? Is it safe to use? Will it have a negative impact on the server or the application running on it? I'm considering it because, while taking a backup of one of our servers, the clbackup and cvfwd processes are using high CPU. Compression and dedupe are on the media agent. By using this key, I aim to limit the maximum CPU usage of these processes; would the correct usage be as follows?
https://documentation.commvault.com/additionalsetting/details?name=sSDTHeadMaxCPUUsage
Best regards.
We need to stage our customers' databases. On a Windows CS we use the CSRecoveryAssistant and it is easy. For staging a CommServe database on Linux I tried this document: https://documentation.commvault.com/2022e/expert/142005_staging_commserve_database_on_another_commserve_linux_host.html
The staging works as described, with one exception: the admin password was not reset, which means I cannot open the CommCell Console. CSRecoveryAssistant.log shows “skipping admin password update as per registry hook”. I set up a standard CommServe installation on Linux with no modifications in the registry. What am I doing wrong? Can you help?
Important note: do not modify CSDB data or modules; use READ operations only. You can refer to the official documentation for an explanation of part of the CSDB, the CommCell Views, but there is no further information available as of now. So it is somewhat hard these days to generate your own reports and/or workflows that refer directly to the CSDB tables, which contain all of the CommCell information. This article provides simple examples for retrieving the data on your own. First of all, you can run SQL queries from Microsoft SQL Server Management Studio or DBeaver as you like (this works on recent Linux CommServe environments too, though there are some tricks to gaining access). To keep your activities safe, make sure every read from any table is an uncommitted read, using one of the following techniques:
-- place the following at the top of the query
set transaction isolation level read uncommitted;
-- or place a with(nolock) hint on each table reference
select * from APP_Application with(nolock)
Most of you would like to get the list of subclients along with
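To make the read-only techniques above concrete, here is a minimal sketch that lists subclients with their client names. The schema details (APP_Application, APP_Client, the subclientName and clientId columns) are assumptions drawn from commonly shared community examples, not from the official CommCell Views documentation, so verify them against your own Feature Release before relying on this.

```sql
-- Read-only sketch: list subclients with their owning client names.
-- Assumption: APP_Application / APP_Client schema as seen in community
-- examples; verify the column names against your own CSDB first.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT c.name          AS clientName,
       a.subclientName AS subclientName
FROM APP_Application a WITH (NOLOCK)
JOIN APP_Client c WITH (NOLOCK)
  ON c.id = a.clientId
ORDER BY c.name, a.subclientName;
```

Both the session-level isolation setting and the per-table NOLOCK hints are shown together here for illustration; in practice either one on its own is enough to keep the reads uncommitted.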
The web console was created to solve a number of issues imposed by the Java console. It also slots right into the paradigm of web and REST API communication. That said, the early console was… bad… like, really bad. But over the years it has gotten better, and it keeps getting better. So my suggestion for people who are new to Commvault is to use the web console, as it is intended to replace the Java GUI. The Java GUI has to drag along all the cruft from the inception of the product, so some things don’t even make logical sense as presented there, and many new features simply do not get back-ported to it. At one point the web console was a poor substitute for the Java GUI, but I would strongly suggest that you foster the habit of using the web console. There will always be the graybeards who stick to the old ways, and while there are definite benefits to doing so, there are also downsides: the graybeards are often not aware of the new functionality b
Hi community, I have a question about the compatibility of BoostFS 7.8 on Red Hat Linux 8.x. BoostFS for Data Domain in Commvault is supported up to 7.4, but on the Dell side 7.4 has become an old version of DDBoost. Do you plan to update this compatibility matrix? (In comparison, BoostFS on Windows Server 2012/2019 is compatible up to 7.7.) Please advise. Regards,
How do I resolve event ID 1526006 for a file anomaly alert? Below is the alert:
File Activity Anomaly
Alert Type: Operation - Event Viewer Events
CommCell: commvaultcls
Detected Criteria: Event Viewer Events
Event ID: 1526006
Monitoring Criteria: (Event Code equals to 7:211|7:212|7:293|7:269)
Severity: Critical
Event Date: Mon Aug 15 22:26:47 2022
Program: cvd
Client: 10.204.7.209-DR
Description: A suspicious file [D:\Inetpub\wwwroot\Accounts\AccModules\AccountsNewPrintingPayout\Z554PBEA-GI6X-KPYA-8AE5-C8B264369D24.odin] is detected on the machine [10.204.7.209]. Please alert your administrator.
Generated At: Mon Aug 15 22:26:59 2022
Hi guys, I’m planning an upgrade and move of a CommServe, version 11.20 on Windows Server 2012 R2, to a new server on Windows Server 2019 installed with CommServe 11.28 (2022E), to get MSSQL upgraded as well. I will use the DR Assistant Tool as I usually do, and as far as I can see in the docs this should work, according to this statement: “Verify that the destination CommServe host installed with the same (or higher) service pack and hotfix pack as the database that is available in the DR backup that you plan to restore.” Has anyone done a similar upgrade/move between versions? Regards, Patrik
Hi fellas, we have a BI server that we use in our own environment, and we want to build a report on it. The report will show information such as client name, hostname, the type of agent installed, content, the last backup date per agent, whether there is an Aux Copy, and the next backup date. Which tables should we read in the CommServ DB for this information? Or do you know of a SQL query for it? Best regards.
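As a hedged starting point for a report like this, the documented CommCell Views are usually a safer surface than the raw CSDB tables. The view names below are assumptions based on the CommCell Views documentation, and the exact columns vary by Feature Release, so inspect them first before wiring them into a BI report:

```sql
-- Inspect the documented CommCell Views to find the columns you need
-- (client name, agent type, last backup time, aux copy status, etc.).
-- Assumption: these view names exist in your Feature Release; check the
-- CommCell Views documentation for your version before relying on them.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT TOP (10) * FROM CommCellClientConfig WITH (NOLOCK);
SELECT TOP (10) * FROM CommCellBkJobSummary WITH (NOLOCK);
```

Running a TOP (10) sample like this shows each view's column set without loading the CommServe, after which you can pick out and join the specific columns your report needs.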
SAP HANA log (command line) backup is invoked from the HANA side (HANA Studio configuration); it is automatically converted into Commvault backup jobs, and normally there is nothing we need to do: it runs every 15 minutes by default, per HANA's setting. But this job is slightly different from other "normal" jobs: on any interim error, like a network disconnection or a shortage of Commvault resources (typically the number of streams at the library or storage policy copy level), the job will fail. As mentioned above, this job repeats every 15 minutes (by default), so any failure is usually recovered quickly and the end user typically won't lose any data. But sometimes there are other issues on the CS/MA side, which are hard to keep watching even with alerts, job monitoring, etc. set up. This query quickly lists all failure reasons from the CSDB (written for a Japanese customer); if any suspicious errors are identified, you can dig into the specific job for detailed research:
use CommServ
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTE
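Since the query above is cut off, here is a hedged sketch of the general idea only: pull recent backup job history from the CSDB and eyeball it for failures. JMBkpStats is the backup job history table commonly referenced in community examples; the exact columns (and where the failure-reason text lives) vary by Feature Release, so treat this purely as a starting point, not as the original query.

```sql
-- Hedged sketch: recent backup job history, newest first.
-- Assumption: JMBkpStats exists with a jobId column, as in common
-- community examples; adjust columns and filters for your Feature
-- Release, and keep every read uncommitted as described above.
USE CommServ;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT TOP (50) *
FROM JMBkpStats WITH (NOLOCK)
ORDER BY jobId DESC;
```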
Introduction
As part of a backup infrastructure implementation, we used GCP (Google Cloud Platform) shared VPCs (Virtual Private Clouds) to build a multi-NIC media agent setup for the Commvault infrastructure. This opened the door to backing up data from one tier and restoring it in another. It reduced the overall time and cost of migrating data to a separate client over the GCP network, and it also ensured network isolation between the production and non-production tiers, as no additional firewall ports had to be opened to transfer the backups.
Media agent in GCP with two VPCs
Below is a screenshot from the GCP console showing that the Commvault media agent has two NICs associated with different subnets from two separate shared VPCs.
Restore using a Commvault multi-NIC media agent
Below is a high-level diagram showing the refresh between production and non-production tiers (segregated across VPCs) using the multi-NIC media agent. The media agent will read data from the storage bucket by