Share Commvault best practices
Share use cases, tips & ideas with others
Cloud Architecture Guide for AWS - 2022e Edition Now Available
Commvault is pleased to announce the availability of the Cloud Architecture Guide for AWS - 2022e Edition: https://documentation.commvault.com/2023/expert/others/pdf/AWSCloudArchitectureGuide_2022e_Edition.pdf

This edition represents a major rewrite of the CAG and includes improvements in:
- Coverage of all new features and functions in Commvault 11.26 - CPR2022e (11.28)
- Detailing Commvault protection coverage for AWS services (300+ services)
- A new Zero Trust Security section for staying secure and protected with Commvault + AWS
- A new Well-Architected section to help detail how Commvault can be built 'well-architected' and operated in a well-architected manner
- Removal of T-shirt sizing (Small, Medium, Large). Commvault no longer requires customers to build specific-sized MediaAgents and Access Nodes; start with our minimum recommended specifications and scale as needed.
Commserve Upgrade - Best Practices
With all of the cool features coming in our Feature Releases, I thought it would be handy to create a list of best practices for planning and performing a CommServe update to a higher Feature Release. There's definitely a lot of proactive work that can go into your process that will save you from potential headaches or even nightmare scenarios!
- Check out the planning documentation first.
- Plan the timing out in advance - make sure any key stakeholders know the CS will be down during this planned upgrade.
- Do you use any type of CommServe failover? If so, note that the instructions for those differ accordingly.
- Upgrading hardware as well? Review the documentation on hardware refreshes.
- Suspending jobs - ideally, have no jobs running; some jobs can be suspended and resumed afterwards, while others cannot be suspended and must be killed. (A quick way to enumerate active jobs is sketched below.)
- Check and double-check that you have the media downloaded or, if you are using the CommServe software cache, that it is updated with your required Maintenance Release.
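For the job-suspension step, here is a minimal sketch of enumerating active jobs over the REST API so nothing is missed before the CommServe goes down. The base URL and credentials are placeholders, and the endpoint paths, `jobCategory` filter, and response field names follow the public v11 REST documentation - treat them as assumptions to verify against your Feature Release:

```python
# Hedged pre-upgrade check: list active jobs via the Commvault REST API so
# they can be suspended or killed before the maintenance window.
import base64
import requests

BASE = "http://webserver.example.com/webconsole/api"  # assumption: your Web Server host

def login(user: str, password: str) -> str:
    # The Login API expects the password Base64-encoded and returns a token.
    body = {"username": user, "password": base64.b64encode(password.encode()).decode()}
    r = requests.post(f"{BASE}/Login", json=body, headers={"Accept": "application/json"})
    r.raise_for_status()
    return r.json()["token"]

def active_jobs(token: str) -> list:
    # GET /Job with jobCategory=Active returns currently running jobs.
    r = requests.get(f"{BASE}/Job", params={"jobCategory": "Active"},
                     headers={"Accept": "application/json", "Authtoken": token})
    r.raise_for_status()
    return r.json().get("jobs", [])

if __name__ == "__main__":
    token = login("admin", "secret")  # use a least-privileged account in practice
    jobs = active_jobs(token)
    print(f"{len(jobs)} active job(s) before the upgrade window")
    for j in jobs:
        s = j["jobSummary"]
        print(s["jobId"], s.get("jobType"), s.get("status"))
```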
Best practices for a large linux file server besides File Agent?
Our Linux file server is a large VMware VM - 48GB RAM, 8 virtual CPUs on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz - with about 500TB of storage, which our Linux admin has set up as 12x 60TB (max size) volumes on our SAN, presented to VMware over iSCSI.

We get reasonable performance for our Linux users' day-to-day usage, but certain customers and projects are reaching crazy levels of unstructured file storage. We have one customer whose folder consumes 16TB of data across 41 million files, and while that's our worst offender, the top 5 projects are all pretty similar.

We've been using the Linux File Agent installed in this VM since we started using Commvault in 2018. We typically see about 3 hours for an incremental backup across this file server, with the majority of the time spent just scanning the entire volume; the backup phase itself runs relatively quickly. We run 3x incremental backups per day, at 6a
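One common mitigation for scan-bound incrementals is splitting the volume across several subclients so the scan phases run in parallel. The sketch below is not Commvault-specific: it just profiles file counts per top-level directory (the `ROOT` path is a placeholder) so a split of roughly equal object counts can be planned:

```python
# Profiling sketch: count files per top-level directory so a 41M-file volume
# can be divided into subclients of roughly equal object count.
import os
from collections import Counter

ROOT = "/exports/projects"  # assumption: mount point of one of the volumes

counts = Counter()
for dirpath, dirnames, filenames in os.walk(ROOT):
    # Attribute every file to the first path component under ROOT.
    rel = os.path.relpath(dirpath, ROOT)
    top = rel.split(os.sep)[0] if rel != "." else "."
    counts[top] += len(filenames)

for top, n in counts.most_common():
    print(f"{n:>12,}  {top}")
```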
Adding a vCloud Director client - Failed to connect to virtual hosts [cloud not connected to vcloud]
I am working on the integration of Commvault and vCloud Director. When finishing the plugin configuration, at the step where I select the hypervisor, none are shown, so I am trying to add a vCloud client in my CommServe. However, it won't let me, as it fails with the following error: In the vCloud Director parameter I am entering the vCloud hostname without https://, and for testing purposes the user being tried has all privileges. The access node is the CommServe, on which I installed the VSA. Can anyone help me figure out what I am entering incorrectly? This is the part where I am trying to configure the new vCloud client from the CommCell Console. Regards, Dulce J Rico
Purge/Expire CVMedia folder on clients?
I’m looking for functionality similar to the old nExpireUpdates key, but built for how things work in v11 - where the CVMedia folder could be programmatically removed from clients following a successful install (or after X days).

We are an MSP and many of our client machines are very tight on free space, so I’m looking for a way to prune/expire/delete the CVMedia folder from clients. Is there a way to do this?

I realize why this folder is kept on the clients, but we do not frequently install service packs/hotfix packs due to change control. We have some clients where we don’t have enough free space to write job results, so anything we can do to reduce the size of the footprint on clients would certainly help in our environment. Freeing up just 350MB in /opt would actually make backups work for some of our clients, which are currently failing due to low disk space.

We don’t have access to most of our clients, so I'm looking for a way to do this without logging into them.
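Absent a built-in expiry key, one workaround is pushing a small cleanup script to the clients (for example via a Commvault workflow or whatever remote-execution channel you already have). The CVMedia path below is an assumption - confirm where it lives in your installation before scheduling anything like this:

```python
# Hypothetical cleanup sketch: remove CVMedia payloads older than N days.
# The path is an assumption; adjust it per installation before use.
import shutil
import time
from pathlib import Path

CVMEDIA = Path("/opt/commvault/Base/CVMedia")  # assumption: verify per install
MAX_AGE_DAYS = 30

cutoff = time.time() - MAX_AGE_DAYS * 86400
if CVMEDIA.is_dir():
    for entry in CVMEDIA.iterdir():
        if entry.stat().st_mtime < cutoff:
            print(f"removing {entry}")
            if entry.is_dir():
                shutil.rmtree(entry, ignore_errors=True)
            else:
                entry.unlink()
```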
If an application database is already configured to be backed up by an application agent, it will not be backed up using the application-aware VM group.
Can someone please help me understand the point below?

"If an application database (Oracle, SQL) is already configured to be backed up by an application agent, it will not be backed up using the application-aware VM group."

If I have application-aware backup, can't it back the database up at the application level? Previously I was taking VM backups (incrementals, daily) and database-level backups every few hours. If I am planning to change to application-aware backup, how can I achieve that?
Migrating access node and index server to a new server
Hope someone can help with the query below. I am taking Exchange on-premises backups, and currently my access node and index server are on old physical machines. I am planning to move the access node to a VM and the index server to a new physical machine. What would be the best approach without affecting the backups?
OS drivers or NIC firmware matrix - supported or recommended drivers etc.
Is there a matrix or similar of the NIC drivers Commvault supports for certain hardware in their IntelliSnap workflow? We are seeing PDLs (permanent device loss) on the hosts during snap volume unmount, and it's causing issues with my virtual host. Anyone else see this? There are some firmware updates available, but I'm not sure whether they are recommended. Thoughts?
Deconfigure Clients with no Backup History
I am trying to find a solution where we can automatically deconfigure clients which have no backup history, while also making sure we don't mistakenly deconfigure any MediaAgents/proxies/CS/etc. We're trying to implement a set of policies/settings so that decommissioning a server is triggered simply by disabling backup activity at the client level.

I want to enable the Data Aging MM parameter to ignore cycles when backup activity is disabled, so that if we disable backup activity the data is pruned after 35 days. Then I want the client to remain for some time and, if it has no backup history, be automatically deconfigured after X days. Then we would turn on the option to delete deconfigured clients that have no protected data, so that things would eventually clean up automatically.

I am trying to find a setting to deconfigure clients which have no backup history - the closest thing I was able to engineer would be a Smart Client Group with the Deconfigure Client workflow scheduled against it. (A dry-run sketch for finding such clients is below.)
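As a starting point, here is a hedged dry-run sketch over the REST API that only reports clients with no finished job history, leaving the actual deconfigure step to the workflow. Endpoints and field names follow the public v11 docs but should be verified against your release; the base URL, credentials, and exclusion list are hypothetical:

```python
# Dry-run sketch: flag clients that have no finished job history so they can
# be reviewed (not deconfigured!) before any automation acts on them.
import base64
import requests

BASE = "http://webserver.example.com/webconsole/api"        # assumption
EXCLUDE = {"commserve01", "mediaagent01", "mediaagent02"}   # hypothetical infra names

def login(user, password):
    body = {"username": user, "password": base64.b64encode(password.encode()).decode()}
    r = requests.post(f"{BASE}/Login", json=body, headers={"Accept": "application/json"})
    r.raise_for_status()
    return r.json()["token"]

def get(path, token, **params):
    r = requests.get(f"{BASE}{path}", params=params,
                     headers={"Accept": "application/json", "Authtoken": token})
    r.raise_for_status()
    return r.json()

token = login("admin", "secret")
for cp in get("/Client", token).get("clientProperties", []):
    ent = cp["client"]["clientEntity"]
    name, cid = ent["clientName"], ent["clientId"]
    if name.lower() in EXCLUDE:
        continue  # never touch infrastructure clients
    jobs = get("/Job", token, clientId=cid, jobCategory="Finished").get("jobs", [])
    if not jobs:
        print(f"candidate for deconfigure: {name} (id {cid})")
```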
Upgrade FR24 to FR28
Hello, we are planning a Commvault upgrade from FR 11.24.xx to FR 11.28.5x. We still have a lot of Windows 2012 servers and SQL 2012 servers. What would be better: leave the agents on Windows 2012 at FR 11.24.9x, or upgrade those clients to 11.26.xx? Commvault 11.26.xx is the last Commvault version that supports Windows 2012 and SQL 2012; if we use an additional setting we could upgrade the clients to FR 11.26.xx. Thanks in advance!
srcClientID, destClientID and clientName
Hello, I get srcClientID and destClientID from the command:

qoperation execscript -sn DataInterfacePairConfig -si listByClient -si clientName

Is there any way to associate srcClientID with a client name (i.e., how do I get the client name from these IDs)? Thank you very much for the help. Dorothy
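One way to resolve these IDs is to pull the client list once over the REST API and build an ID-to-name map. A sketch; the base URL and credentials are placeholders, and the response field names follow the public v11 documentation, so double-check them against your release:

```python
# Build a clientId -> clientName lookup from GET /Client, then resolve the
# IDs returned by DataInterfacePairConfig against it.
import base64
import requests

BASE = "http://webserver.example.com/webconsole/api"  # assumption
body = {"username": "admin", "password": base64.b64encode(b"secret").decode()}
token = requests.post(f"{BASE}/Login", json=body,
                      headers={"Accept": "application/json"}).json()["token"]

resp = requests.get(f"{BASE}/Client",
                    headers={"Accept": "application/json", "Authtoken": token}).json()
id_to_name = {
    cp["client"]["clientEntity"]["clientId"]: cp["client"]["clientEntity"]["clientName"]
    for cp in resp.get("clientProperties", [])
}
print(id_to_name.get(42, "unknown"))  # hypothetical srcClientID
```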
Retention Lock on MRR Storage Best Practice
I’m deploying MRR storage and planning to enable immutability via the “Enable Retention Lock” workflow at the storage pool level. I’m keen to hear anyone’s experience with this, because once it’s enabled it’s not reversible.

Typically, when using Retention Lock on a policy copy, we wouldn’t use extended retention on a copy; instead we set the basic retention to the requirement, e.g.:
- One Selective Copy for monthly backups with the required basic retention
- Another Selective Copy for the yearlies with the required retention

Both Selective Copies would point to the MRR storage pool with WORM enabled, as dependent copies in the storage pool via the workflow. Is this the best way to set up MRR with immutability?
Commvault port requirements for site to site DR and Livesync
I have a customer that is looking for clarity on a port requirement question. Scenario:
- 2 sites: site A primary, site B DR
- Data backups from site A to B (aux/DASH copy, synchronous)
- CommServe LiveSync enabled
- Requirement: if site A goes down, site B will need to recover all the VSA VM backups from site B's copies of site A's backups

The question is what ports need to be opened to accomplish this, since there will be a firewall between the two sites. Will all traffic need to be bidirectional, or can some of the port connections be established with a one-way connection only? I did point them to the Commvault documentation (https://documentation.commvault.com/v11/essential/7102_port_requirements_for_commvault.html), but there was some confusion on their part, and they want to validate the required ports for:
- LiveSync
- Aux copy data traffic from site A to B
- MediaAgent data from site A to site B
- Communication ports from the CommServe to the secondary site

(A quick connectivity probe is sketched below.)
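While the authoritative port list has to come from the documentation page above, a quick reachability probe between the sites can confirm what the firewall actually permits. The ports below are only the common Commvault defaults (8400-8403, an assumption to verify) and the hostnames are hypothetical; note also that with a network gateway/tunnel configured, much of the traffic can be funneled one-way over a single tunnel port:

```python
# Simple TCP reachability probe from a site-A host toward site-B endpoints.
import socket

TARGETS = ["ma-siteb.example.com", "cs-siteb.example.com"]  # hypothetical hosts
PORTS = [8400, 8401, 8402, 8403]  # assumption: confirm against the port docs

for host in TARGETS:
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} open")
        except OSError as exc:
            print(f"{host}:{port} blocked ({exc})")
```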
Migrating an Index Cache from One Media Agent to another?
We had a MediaAgent that was recently decommissioned; the location where it lived was shut down. The server was shipped to our site, where we have it ready to be cabled up and redeployed. The initial thought from my colleagues was that we had to get this MediaAgent back online in order for the index cache on that server to be used again. But my management is asking whether we can just transfer that MediaAgent's index cache to another MediaAgent.

So instead of having to reconnect this MediaAgent in order for the tapes that came from this location to be read, can we move the index cache that was on this server to another MediaAgent, so that any tapes from that site would then just use a currently active MediaAgent for their index cache?
Customization of Webconsole, Command Center, Alerts and CommCell Console
Hi Community, I am very interested in your thinking on customization of the following topics, and whether and how you currently do this to achieve that goal:
- Webconsole
- Command Center
- Alerts
- CommCell Console GUI

I am very excited about your answers and your experience with this topic. Cheers, Philipp
SQL Database Backup best practice
We are currently looking to go tapeless, and I realize that some of our backups are not set up as well as they could be. One of those areas is backup of SQL databases. Currently the SQL databases are backed up by SQL Server itself, and we then back up the .BAK files. This adds a lot of extra time to restores, as we have to restore the BAK file and then restore it again inside SQL. I have been told that when Commvault was first installed years ago we tried using Commvault for full backups of the DBs, but it was very slow and had issues. I am hoping that with newer technology things will be better.

My question is: what are the best practices? I have been reading through the Commvault docs for SQL Server and it seems straightforward, but I am not sure whether the backup window needs to be when no one is using the DB, or whether the DB needs to be locked for a few hours while it backs up. We normally have processes running against the DBs most of the day and night. I am sure many comp
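On the backup-window question: SQL Server backups taken through agents are online operations, so the database stays available and no lock or outage is needed. One practical pre-check before moving to agent-based backups is that transaction-log backups require the FULL (or BULK_LOGGED) recovery model, while SIMPLE databases only support full/differential. A sketch using pyodbc to inventory recovery models; the server name and ODBC driver are assumptions:

```python
# Inventory recovery models so log-backup schedules are only applied to
# databases that can actually take log backups.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlprod01.example.com;"  # hypothetical server
    "Trusted_Connection=yes;"
)
for name, model in conn.execute(
        "SELECT name, recovery_model_desc FROM sys.databases ORDER BY name"):
    print(f"{name:<40} {model}")
```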
Upgrading (not in-place) existing 2012 environment to 2019
Hi all, I have two CommCells (primary and disaster recovery) and 4 MediaAgents (2 primary, 2 DR), all running Windows 2012. I will be spinning up a new 2019 environment that I plan to run in parallel until I have everything running on 2019 with no issues. Question: is this the best approach? I've seen a couple of nicely detailed posts here about in-place upgrades, but since I am also setting up new MediaAgents in both locations, I figured I'd just spin up a new 2019 server for the CommServe in both locations. Does anyone have pointers or a guide on how to do this in the least painful way possible? I've been reading various KB articles as well, but I would love to hear from anyone who's done this so I can get an idea of what's ahead of me. Thanks, everyone.
VSA proxy server as part of the Commserve or dedicated VM
Hi there, I would like to ask whether it is better to have a dedicated VSA proxy VM, or whether the VSA role should be taken on by the CommServe server. What should be taken into account? Let's say there is a smaller environment, roughly 20TB of data and only VMs. Unfortunately, I didn't find any recommendations or best practices.
Commcell installation - installation paths
Hi all, I would like to discuss the location and sizing of the fundamental installation directories:
- Installation Path - it should be on the local system disk; the required space will be used from the system disk
- Index Cache Path - it shouldn't be on the local system disk; at least 200GB free space is needed
- Database Engine Installation Path - could it be on the local system drive? The space is allocated by the installer itself
- CommServe Database Installation Path - at least 200GB free space for a Medium environment; should it be on a drive other than the local one?
- Disaster Recovery Path - it has to remain available if the local system crashes

Cheers!
CommVault with Storage WORM enabled
Hi, we’re planning to use Commvault with on-prem disk storage that has the WORM feature enabled. At this point we know that when using WORM on the storage side, Commvault WORM must also be configured (the WORM Copy property). I had an open incident, 221229-281, asking about additional settings. The support reply was to run a workflow, but that is specific to Storage Pools, which is not our case: we're planning to use WORM with Secondary Copies with deduplication, and we can't use Storage Pools right now because we already have some copies bound to a Global Deduplication Policy.

What other settings do we need? Below is my list of things to be done - could you confirm whether it is correct, and add anything if necessary?
- Check the WORM Copy property
- Disable the CV option "Prevent Accidental Deletion"
- Have Scalable Resources enabled during the aux copy job (same source)
- Follow the steps here (https://kb.commvault.com/article/55258) before running the first aux copy (same sou
Partitioned DDBs Best Practice
Hi guys, we are using partitioned DDBs running on 2 MediaAgents on Azure VMs. Our current deduplication setting at the GDSP level is to seal and start a new DDB in case of DDB corruption, rather than "Recover pause and recover current DDB". Also, the option to "Allow jobs to run to this copy while at least 1 partition is online" is not selected. I just wanted to check what the best practice is for partitioned DDBs as per Commvault.
CS: Azure VM
Primary library: Azure Blob
Secondary: Metallic