Share Commvault best practices
Share use cases, tips & ideas with others
- 97 Topics
- 420 Replies
Upgrade FR24 to FR28
Hello, we are planning a Commvault upgrade from FR11.24.xx to FR11.28.5x. We still have a lot of Windows 2012 servers and SQL 2012 servers. What would be better: leave the agents on Windows 2012 at FR11.24.9x, or upgrade those clients to 11.26.xx? Commvault 11.26.xx is the last Commvault version that supports Windows 2012 and SQL 2012. If we use an additional setting we could upgrade the clients to FR11.26.xx. Thanks in advance!
srcClientId, destClientId and client name
Hello, I take srcClientId and destClientId from the command: qoperation execscript -sn DataInterfacePairConfig -si listByClient -si clientName. Is there any way to associate srcClientId with a client name (how do I get the client name using these IDs)? Thank you very much for your help. Dorothy
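Not part of the original question, but one way to resolve these IDs is a direct CSDB query. A minimal sketch, assuming the commonly documented APP_Client table (verify the table and column names against your CS version):

    USE CommServ
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- Map client IDs (e.g. the srcClientId / destClientId values returned
    -- by DataInterfacePairConfig) back to client names and hostnames.
    SELECT id, name, net_hostname
    FROM APP_Client
    WHERE id IN (2, 5)   -- substitute your own IDs here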
Retention Lock on MRR Storage Best Practice
I’m deploying MRR storage and planning to enable immutability via the “Enable Retention Lock” workflow at the storage pool level. I’m keen to hear about anyone’s experience with this, because once it’s enabled it’s not reversible. Typically, when using Retention Lock on a policy copy, we wouldn’t use extended retention on a copy, and would set the basic retention to the requirement, e.g.:
- One Selective Copy for the monthly backups with the required basic retention
- Another Selective Copy for the yearlies with the required retention
Both Selective Copies would point to the MRR storage pool with WORM enabled as a dependent copy in the storage pool via the workflow. Is this the best way to set up MRR with immutability?
Best practices for a large Linux file server besides the File Agent?
Our Linux file server is a large VMware VM - 48 GB RAM, 8 virtual CPUs running on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz - and it has about 500 TB of storage on it, which our Linux admin has set up as 12x 60 TB (max size) volumes on our SAN, communicating back to VMware over iSCSI. We get reasonable performance for our Linux users’ day-to-day usage of this VM, but certain customers and projects are reaching crazy levels of unstructured file storage. We have one customer with a folder that consumes 16 TB of data across 41 million files. And while that’s our worst offender, the top 5 projects are all pretty similar. We’ve been using the Linux File Agent installed in this VM since we started using Commvault in 2018. We typically see about 3 hours for an incremental backup to run across this file server, with the majority of the time spent just scanning the entire volume, and the backup phase running relatively quickly. We run 3x incremental backups per day, at 6a…
Commvault port requirements for site-to-site DR and LiveSync
I have a customer that is looking for clarity on a port requirement question. Scenario: 2 sites, site A primary and site B DR. Data backups go from site A to B (aux/dash copy, synchronous), and CommServe LiveSync is enabled. The requirement is that if site A goes down, site B will need to recover all the VSA VM backups from site B’s copies of site A’s backups. The question is what ports need to be opened to accomplish this, since there will be a firewall between the two sites. Will all traffic need to be bidirectional, or can some of the port connections be established solely with a one-way connection? I did point them to the Commvault documentation: https://documentation.commvault.com/v11/essential/7102_port_requirements_for_commvault.html but there was some confusion on their part, and they want to validate the required ports for:
- LiveSync
- Aux copy data traffic from site A to B
- MediaAgent data from site A to site B
- Communication ports from the CommServe to the secondary site
Cloud Architecture Guide for AWS - 2022e Edition Now Available
Commvault is pleased to announce the availability of the Cloud Architecture Guide for AWS - 2022e Edition:
https://documentation.commvault.com/2022e/expert/others/pdf/AWSCloudArchitectureGuide_2022e_Edition.pdf
https://documentation.commvault.com/2023/expert/others/pdf/AWSCloudArchitectureGuide_2022e_Edition.pdf
This edition represents a major re-write of the CAG and includes improvements in:
- Coverage of all new features and functions in Commvault 11.26 - CPR2022e (11.28)
- Details of Commvault protection coverage for AWS services (300+ services)
- A new Zero Trust Security section for staying secure and protected with Commvault + AWS
- A new Well-Architected section to help detail how Commvault can be built and operated in a well-architected manner
- Removal of T-shirt sizing (Small, Medium, Large): Commvault no longer requires customers to build specific-sized MediaAgents and Access Nodes - start with our minimum recommended specifications and scale as needed
Migrating an Index Cache from One Media Agent to another?
We had a media agent that was recently decommissioned; the location where it was housed was shut down. The server was shipped to our site, where it is ready to be cabled up and redeployed. My colleagues’ initial thought was that we had to get this media agent back online in order for the index cache on that server to be used again. But my management is asking whether we can just transfer that media agent’s index cache to another media agent. So instead of having to hook this media agent back up in order for the tapes that came from this location to be read, can we move the index cache that was on this server to another media agent, so that any tapes that came from this site would then just be read by a currently active media agent using its index cache?
SQL Database Backup best practice
We are currently looking to go tapeless, and I realize that some of our backups are not set up as well as they could be. One of those areas is the backup of SQL databases. Currently we have the SQL databases backed up by SQL Server itself, and then we make a backup of the .BAK files. This adds a lot of extra time for restores, as we have to restore the BAK file and then restore it again inside SQL. I have been told that when Commvault was first installed years ago we tried using Commvault for the full backup of the DBs, but it was very slow and had issues. I am hoping that with newer technology things will be better. My question is: what are the best practices? I have been reading through the Commvault docs for SQL Server and it seems straightforward, but I am not sure if the backup window needs to be when no one is using the DB, or if the DB needs to be locked for a few hours while it backs up. We normally have processes running against the DBs most of the day and night. I am sure many comp…
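For context (not from the original post): the Commvault SQL agent performs online backups through the SQL Server VDI interface, so databases stay available during full and log backups; no lock window is required. A quick pre-cutover check, sketched below, is each database’s recovery model, since transaction-log backups for point-in-time restores require the FULL model (sys.databases is a standard SQL Server catalog view):

    -- List each database and its recovery model; databases that need
    -- point-in-time restore via log backups should report FULL.
    SELECT name, recovery_model_desc
    FROM sys.databases
    ORDER BY name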
VSA proxy server as part of the Commserve or dedicated VM
Hi there, I would like to ask whether it is better to have a dedicated VSA proxy VM, or whether the VSA role should be taken by the CommServe server? What should be taken into account? Let’s say there is a smaller environment, roughly 20 TB of data and only VMs. Unfortunately, I didn’t find any recommendations or best practices.
Commcell installation - installation paths
Hi all, I would like to discuss the location and sizing of the fundamental installation directories:
- Installation Path - it should be the local system disk; the required space will be used from the system disk
- Index Cache Path - it shouldn’t be the local system disk; at least 200 GB of free space is needed
- Database Engine Installation Path - could it be the local system drive? The space is allocated by the installer itself
- CommServe Database Installation Path - at least 200 GB of free space for a Medium environment; should it be on a drive other than the local one?
- Disaster Recovery Path - it has to be available if the local system crashes
Cheers!
CommVault with Storage WORM enabled
Hi, we’re planning to use Commvault with on-prem disk storage that has the WORM feature enabled. At this moment, we know that when using WORM on the storage side, Commvault WORM must also be configured (the WORM Copy property). I had an open incident (221229-281) asking for the additional settings. The support reply was to run a workflow, but that is specific to Storage Pools, which is not our case. We’re planning to use WORM with secondary copies with deduplication. We can’t use Storage Pools right now, because we already have some copies bound to a Global Deduplication Policy. What other settings do we need? Below is my list of things that need to be done; could you confirm whether it is correct, and add anything if necessary?
- Check the WORM Copy property
- Disable the CV option “Prevent Accidental Deletion”
- Source: have Scalable Resources enabled during the aux copy job (same source)
- Follow the steps here (https://kb.commvault.com/article/55258) before running the first aux copy (same sou…
Partitioned DDBs Best Practice
Hi guys, we are using partitioned DDBs running on 2 media agents in the environment on Azure VMs. Our current deduplication setting at the GDSP level is to seal and start a new DDB in case of any DDB corruption, instead of “Recover, Pause and Recover current DDB”. Also, the option “Allow jobs to run to this copy while at least 1 partition is online” is not selected. I just wanted to check what the best practice for partitioned DDBs is as per Commvault.
- CS: Azure VM
- Primary library: Azure Blob
- Secondary: Metallic
Expanding Offline storage with new ISCSI disks
We have 9 HyperScale appliances and an offline NetApp NAS storage. The offline storage is used on Mondays only, for aux copy, and the path is mounted on the HyperScale appliances using NFS. Now the offline storage library is 90% full and we are in the process of expanding the storage. As per the existing design, the NetApp NAS is mounted on all 9 HyperScale appliances. The new storage is an HP MSA SAN supporting iSCSI. We have allocated the new disks to the appliances using iSCSI and mounted them on all 9 HyperScale appliances, as per the existing design. I manually tested by creating a file on one of the appliances and then tried to remount the mount path on another; while doing so, the mount point showed corruption and I had to clean the filesystem. My concern is whether we have taken the right approach, or whether there should be a change in design, like adding an additional server to present the storage over NFS on a 10G network. Recommendations are much appreciated.
Linux staging of CommServe DB: admin password not reset
We need to stage our customers’ databases. On a Windows CS we use the CSRecoveryAssistant and it is easy. For staging a CommServe database on Linux I followed this document: https://documentation.commvault.com/2022e/expert/142005_staging_commserve_database_on_another_commserve_linux_host.html. The staging works as described, with one exception: the admin password was not reset, which means I cannot open the CommCell Console. CSRecoveryAssistant.log shows “skipping admin password update as per registry hook”. I set up a standard CommServe installation on Linux with no modifications in the registry. What am I doing wrong? Can you help?
Recover a CSDB to a higher level version?
Hi guys, I’m planning an upgrade and move of a CommServe version 11.20 on Windows Server 2012 R2 to a new server on Windows Server 2019 installed with CommServe 11.28 (2022E), to get MSSQL upgraded as well. I will use the DR Assistant tool as I usually do, and as far as I can see in the docs this should work, according to this statement: “Verify that the destination CommServe host is installed with the same (or a higher) service pack and hotfix pack as the database that is available in the DR backup that you plan to restore.” Has anyone done a similar upgrade/move between versions? Regards, Patrik
How to resolve event ID 1526006 for a file anomaly alert
How do I resolve event ID 1526006 for a file anomaly alert? Below is the alert:
File Activity Anomaly
- Alert type: Operation - Event Viewer Events
- CommCell: commvaultcls
- Detected criteria: Event Viewer Events
- Event ID: 1526006
- Monitoring criteria: (Event Code equals 7:211|7:212|7:293|7:269)
- Severity: Critical
- Event date: Mon Aug 15 22:26:47 2022
- Program: cvd
- Client: 10.204.7.209-DR
- Description: A suspicious file [D:\Inetpub\wwwroot\Accounts\AccModules\AccountsNewPrintingPayout\Z554PBEA-GI6X-KPYA-8AE5-C8B264369D24.odin] is detected on the machine [10.204.7.209]. Please alert your administrator.
- Generated at: Mon Aug 15 22:26:59 2022
HANA Logcommandline backup errors at a single glance
SAP HANA Logcommandline backups are invoked from the HANA side (HANA Studio configuration) and automatically converted into Commvault backup jobs; normally there’s nothing we need to do, as they run every 15 minutes by default per HANA’s settings. But this job type is slightly different from other “normal” jobs: on any interim error, like a network disconnection or a shortage of Commvault resources (typically the number of streams at the library or storage policy copy level), the job fails. As mentioned above, the job repeats every 15 minutes (by default), so any failure is recovered quickly and the end user typically won’t lose any data. But sometimes there are other issues on the CS/MA side that are hard to keep watching, even with alerts, job monitoring, etc. This is a way to quickly list all failure reasons from the CSDB (written for a Japanese customer); if any suspicious errors are identified you can dig into the specific job for detailed research: use CommServ SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED…
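The query itself is cut off above; a minimal sketch of the idea follows. The table and column names (JMBkpStats, JMFailureReasonMsg, servStartDate as Unix time) are assumptions based on commonly referenced CommServ schema and must be verified against your CS version:

    USE CommServ
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- List backup jobs from the last 24 hours that recorded a failure
    -- reason, so recurring HANA log backup errors show up at a glance.
    SELECT s.jobId, s.status, f.failureReasonMsg
    FROM JMBkpStats s
    JOIN JMFailureReasonMsg f ON f.jobId = s.jobId
    WHERE s.servStartDate > DATEDIFF(second, '1970-01-01', GETUTCDATE()) - 86400
    ORDER BY s.jobId DESC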
GCP CROSS REGION/ZONE BACKUP RESTORES USING MULTI-NIC MEDIA AGENT
Introduction
As part of a backup infrastructure implementation, we used GCP (Google Cloud Platform) shared VPCs (Virtual Private Clouds) to build a multi-NIC media agent setup for the Commvault infrastructure. This opened the door to backing up data from one tier and restoring it across another tier, which reduced the overall time and cost of migrating data to a separate client over the GCP network. It also ensured network isolation between the production and non-production tiers, as no additional firewall ports had to be opened to transfer the backups.
Media agent in GCP with two VPCs
Below is the screenshot from the GCP console which shows the Commvault media agent has two NICs associated with different subnets from two separate shared VPCs.
Restore using the Commvault multi-NIC media agent
Below is a high-level diagram showing the refresh between the production and non-production tiers (segregated across VPCs) using the multi-NIC media agent. The media agent will read data from the storage bucket by…
What should I use, the Java Console or the Web Console? A rant for users new to CommVault
The web console was created to solve a number of issues imposed by the Java console, and it also slots right into the paradigm of web and REST API communication. That said, the early console was… bad… like, really bad. But over the years it has gotten better, and it keeps getting better. So my suggestion for people who are new to Commvault is to use the web console, as it is intended to replace the Java GUI. The Java GUI has to drag along all the cruft from the inception of the product, making it so that some things don’t even make logical sense as presented there; also, many new features simply do not get back-ported to the Java GUI. At one point the web console was a poor substitute for the Java GUI, but I would strongly suggest that you foster the habit of using the web console. There will always be the graybeards that stick to the old ways, and while there are definite benefits to doing so, there are also downsides; the graybeards are often not aware of the new functionality b…
Slow performance of Intellisnap VM backup with NetApp
Hello, we are getting slow throughput for VMware backups using IntelliSnap: around 150 GB per hour with 5 readers. The datastore is on NetApp and the VSA/media agent is a VM. It is currently using the NBD transport mode. Can HotAdd or another transport mode be used, and would that be faster? Are there any other best practices?