Share Commvault best practices
Share use cases, tips & ideas with others
Hi. With MS SQL we can set the account used for connecting to SQL at the group level. Any idea how to do the same for DB2? This seems to involve some scripting, which I'm not good at (unless it starts with an @echo off) :-) Our DB2 team is considering creating a single DB2 user on all DB2 servers/instances to be used by Commvault. This user will have its password changed frequently, but we do not want to traverse every single DB2 server/instance to change the password (we've got quite a few). Any idea/solution would be highly appreciated :-) Thank you. Kind regards, Rubeck
This is a simple but working trick for maintaining Commvault. Via the Workflow built-in activity "ExecuteScript", you can call arbitrary shells (on both Windows and Linux) remotely. If you have full access to the remote server and can place scripts there, or if the script can be called via Workflow, there are no issues. But if you'd like to modify a script remotely for OS-side scheduled jobs (Task Scheduler or crontab), it is slightly difficult to control this process remotely, since Commvault can restore the script but cannot easily modify its content. If the script contains only text data, you can use the echo command to put the contents in place remotely, though one trick is required since arbitrary… To achieve this, first prepare any script you want to put remotely (this is a modified version of the .bat file generated via Save as Script). Next, pass the generated script to the following logic, which "escapes" all strings per OS type: String text = &lt;original script&gt;; String osType = &lt;Windows or Linux&gt;; // Generate ech…
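The escaping step described above (turning an arbitrary script into a series of echo commands, one rule set per OS) can be sketched outside the Workflow engine as well. The following Python sketch is an illustration only, with deliberately simplified escaping rules and hypothetical target file names (script.bat / script.sh); a real Workflow would have to cover every metacharacter of cmd.exe and the POSIX shell:

```python
def to_echo_commands(text, os_type):
    """Turn a script's text into echo commands that recreate it remotely.

    Simplified sketch: only common special characters are escaped; a
    production version must handle every metacharacter of the target shell.
    """
    commands = []
    for i, line in enumerate(text.splitlines()):
        # First line creates the file, later lines append to it.
        redirect = ">" if i == 0 else ">>"
        if os_type == "Windows":
            # Escape cmd.exe metacharacters with ^ (simplified; %-variables
            # and delayed expansion need extra care in a real Workflow).
            escaped = line
            for ch in "^&<>|":
                escaped = escaped.replace(ch, "^" + ch)
            # "echo." prints an empty line in cmd.exe.
            payload = "echo." if not escaped else "echo " + escaped
            commands.append(payload + " " + redirect + " script.bat")
        else:
            # POSIX shells: single-quote the whole line, escaping
            # embedded single quotes as '\'' .
            escaped = line.replace("'", "'\\''")
            commands.append("echo '" + escaped + "' " + redirect + " script.sh")
    return commands
```

Running the generated list in order on the remote side recreates the file line by line, which is exactly the trick the post relies on.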
This is a simple but working trick for maintaining Commvault. Backup jobs might fail at night, and to find the cause of the errors you need to collect log bundles in the first place. But if you only notice the errors after a while (say, a couple of days later), the job logs might have rolled over, and the important information containing the error messages is gone. To avoid this situation you can set up various alerts and collect logs immediately after you receive one. This is also cumbersome, so you can introduce a simple workflow that is called at the same time as the alert and collects the logs automatically. The rough process is as follows: generate an answer file for "Send Log Files". This procedure uses Save as Script, which can save most user operations together with their parameters and generates a .bat file and an XML file. The latter is called the answer file and contains the actual operation parameters in a single file. To export this, start the "Send Log Files" process from the CommCell Console. Then you're getti…
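Once the answer file exists, replaying it amounts to building a qoperation execute call. The Python sketch below only assembles the command string; passing the answer file with -af follows the Save as Script output described above, while the override parameter names are hypothetical and depend on the tags inside your generated XML:

```python
import shlex

def build_sendlog_command(answer_file, overrides=None):
    """Build the CLI call that replays a saved 'Send Log Files' operation.

    answer_file: XML produced by 'Save as Script' in the CommCell Console.
    overrides:   optional {name: value} pairs appended as '-name value';
                 the names here are placeholders, not confirmed tags.
    """
    cmd = ["qoperation", "execute", "-af", answer_file]
    for name, value in (overrides or {}).items():
        cmd += ["-" + name, str(value)]
    return shlex.join(cmd)
```

A workflow triggered by the alert could then hand this string to its shell-execution activity, so the log bundle is collected before the logs roll over.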
This is a simple but working trick for maintaining Commvault. When you want to start (typically backup) jobs via the CLI instead of Schedule Policies, it involves qlogin in the first place to log in to the CommCell. This command is mostly straightforward to use, but when trying to invoke multiple jobs from one server it raises errors such as Error 0x10b: User not logged in, Error 0x208: Token file is corrupted, or related errors. BOL explains the -f parameter for using a token file as follows: when qlogin is used without the -f option, it generates a file named "qsessions.&lt;OS user&gt;" directly under the Commvault installation directory. This file must be created with administrative privilege on the server (to modify the installation directory) and needs to exist until any qoperation is called in the shell (like qlist job); it is then removed when the shell calls qlogout. This default token file is generated per OS user, not per Commvault user, so if multiple shells run simultaneously they use the same qsessions file. So when…
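The collision can be avoided by giving every script its own token file. The Python sketch below just assembles the command sequence rather than executing it; the -f / -tf flag spellings follow the pattern described above, but treat them as assumptions to verify against BOL for your service pack:

```python
import shlex

def qsession_commands(job_name, username, ops):
    """Build a qlogin/qlogout-wrapped command sequence with a private
    token file, so parallel scripts don't clobber the shared qsessions
    file.  Flag names (-f on qlogin, -tf on later commands) are taken
    from the token-file pattern discussed above; verify before use."""
    token = "/tmp/qtoken_" + job_name          # one token file per script
    lines = ["qlogin -u " + shlex.quote(username)
             + " -f " + shlex.quote(token)]
    for op in ops:
        # Every subsequent qcommand points at the same private token.
        lines.append(op + " -tf " + shlex.quote(token))
    # Logging out removes the token so it cannot leak between runs.
    lines.append("qlogout -tf " + shlex.quote(token))
    return lines
```

Because each invocation gets a distinct token path, two backup scripts started at the same minute no longer race on one qsessions file.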
Hello, what is the best solution for VMware VSA backup: incremental or differential backups? Currently we use incrementals and synthetic fulls. Would switching from incrementals to differentials improve restores? Regards, Juergen
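One way to reason about the trade-off: a restore must read the last full plus every incremental taken since it, but only the last full plus the single most recent differential. A small sketch (ignoring synthetic fulls, which reset the incremental chain) makes the difference concrete:

```python
def restore_chain(days_since_full, mode):
    """Number of backup jobs a restore must read, given daily jobs.

    incremental:  the full plus one incremental per elapsed day
    differential: the full plus only the most recent differential
    Synthetic fulls are ignored here; they shorten the incremental chain.
    """
    if mode == "incremental":
        return 1 + days_since_full
    if mode == "differential":
        return 1 if days_since_full == 0 else 2
    raise ValueError("unknown mode: " + mode)
```

With daily jobs and a weekly full, the incremental chain peaks at 7 jobs versus a constant 2 for differentials, at the cost of larger daily differential jobs; synthetic fulls keep the incremental chain short, which is why incremental plus synthetic full is a common default.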
Hi all, it's always exciting to start with something new. This year we are starting to deploy a new disk library. In our scenario we have two disk libraries (two storage arrays), each serving as the secondary copy target for the other. There are two options: either migrate all data from the old storage arrays/disk libraries to the new ones, or just deploy two new disk libraries without worrying about the old backed-up data (which is unlikely). In that case, how do we physically migrate data from the old storage array to the new one? What to keep in mind when deploying a new disk library: with a new disk library, a new deduplication database has to be created. I will be more than happy to hear about your experience, and once we are done I will share our knowledge as well.
The problem: Have you ever needed to know how much data you need to back up incrementally from your VMware environment? Because you need to design new backup storage? Because you need to know whether your WAN is capable of transferring everything into the cloud, or from your branch office? Because you need to know the change rates of your VMs? The solution: Now you can track these data changes with a simple script! Let me introduce you to GetChangedBlocksV2! It is a PowerShell script which uses VMware PowerCLI to read the changes from your VMware disks each time it is run and saves them as CSV. It keeps track of the changes between each run, between each day, and between each week. In order to get good results, you need to run this tool on a regular basis, e.g., with the Task Scheduler. There is even a basic Excel file included to analyze the results for you. But if you have better tools, feel free to use them. Where to get it: https://github.com/turboPasqual/GetChangedBlocksV2 Other stuff: Wh…
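If you would rather post-process the CSV without the bundled Excel file, aggregating change rates is straightforward. In this Python sketch the column names VMName and ChangedBytes are placeholders, not the script's actual headers; map them to whatever GetChangedBlocksV2 really emits:

```python
import csv
import io

def daily_change_gib(csv_text, vm_column="VMName", bytes_column="ChangedBytes"):
    """Sum changed bytes per VM from a change-tracking CSV and return GiB.

    The default column names are hypothetical placeholders; pass the
    real header names from your CSV output.
    """
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        vm = row[vm_column]
        totals[vm] = totals.get(vm, 0) + int(row[bytes_column])
    # Convert byte totals to GiB for sizing WAN links or backup storage.
    return {vm: b / 2**30 for vm, b in totals.items()}
```

Summed per day or per week, these figures feed directly into the WAN- and storage-sizing questions the post opens with.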
Hi, looking for some advice on backing up Nutanix Files. Nutanix version 5.20.2, 30 TB of capacity, Commvault HyperScale 3-node cluster, 10 GbE connectivity, CIFS and NFS enabled. The documentation states the backups run through access nodes, with at minimum one access node per protocol. Is there any IntelliSnap integration for Nutanix Files? I did not see this covered in the documentation, but maybe I'm not looking in the right place.
Hey all, I have a question about the following scenario: a Windows MA with an NVMe flash card. Currently the flash card is formatted with a 4K block size, and we are planning to reformat the disk with a 32K block size as described in BOL (https://documentation.commvault.com/11.24/expert/12411_deduplication_building_block_guide.html). On the flash card we are hosting 3 DDBs: one for backups to a Cloudian backup device over the S3 protocol in the local datacenter, one for backups to SAN-attached storage, and one for backups to S3 storage outside the datacenter. My first question: can we use one Windows partition with a 32K block size, or should we create 3 partitions with different Windows block sizes? The second question is which block size we need for the DDBs (block-level deduplication factor) for all 3 DDBs.
Hi, I wanted to check what people's thoughts were about keeping the Commvault components outside of the Active Directory domain to reduce the risk of their being compromised in case of a security breach, e.g. compromised AD Domain Admin credentials. I've had some customers ask me about this and wanted to check with the community. Regards, Jeremy
Our database team, which handles SQL Servers and SQL databases, has sent us a notice that they want to upgrade the SQL database from 2014 to 2019: "Hello Team, BKR-BKCOM-01/COMMVAULT is still running on an old version of SQL 2014, hence we need to upgrade the server to the latest version, 2019. Can you help us on how to proceed? This is a physical server, so we need a new physical server to host the 2019 version and then move the databases. Once tested, we can take the old 2014 version offline and bring the new server online in production." I don't know if they are saying they want a new server for the Commvault CommServe or if they are hinting at their own server; I don't see why he added that in the email. Our specs for the CommServe easily match the SQL recommendations for 2019. The only thing I need help with, besides this article: https://documentation.commvault.com/11.24/expert/142607_upgrading_microsoft_sql_server_2016_express_to_microsoft_sql_server_2019_standard_edition.html Is…
Hello, I'm wondering what the proper approach is to data verification in native cloud environments. The environment is built within the cloud; the CS, MA, cloud libraries, etc. are placed in the same cloud solution, so the infrastructure traffic basically stays within the cloud. The only reference that I've found in the docs was: "Tip: By default, the data verification schedule policy that is created by the system is not configured with data mover MediaAgents that use a cloud storage product, because the read operations from the cloud are very slow and are performed on low latency media. If necessary, you can perform the data verification on the cloud storage manually. To run data verification on data that is stored on archive cloud storage, first recall the data to the main cloud storage location. Then you can run the data verification job on the recalled data." (https://documentation.commvault.com/11.24/expert/12567_verification_of_deduplicated_data.html) But in this case the MediaAgents are not data…
Hello guys, I would appreciate it if you could help me fill in the XML file, as I'm not good with XML. I'm migrating 300+ subclients from an old vCSA to a new one. I've used the XML file below; however, I don't know where to put the destination vCenter. What other values should be filled in for VM subclient cloning?

<App_CloneSubClientRequest>
  <cloneEntity>
    <test1/>
    <defaultBackupSet/>
    <VMware/>
    <Virtual Server/>
    <vcenter-01/>
  </cloneEntity>
  <subClientProperties>
    <subClientEntity>
      <test1/>
    </subClientEntity>
    <content>
      <path/>
    </content>
    <vmContent>
      <children equalsOrNotEquals="1" name="test1" displayName="test1" type=""/>
    </vmContent>
  </subClientProperties>
</App_CloneSubClientRequest>

Thank you!
We had to back up some Exchange DAGs with short-term retention to a 3PAR disk library. The mount path is on one of our media agents, where I can see what's stored on the 3PAR. My boss wants me to back up the month of September that we have on disk to tape, so we can clear it off the 3PAR for another month of storage. Would the best way to get this to tape be to create a subclient on the media agent the drive is mounted to, target the data chunks for the part of the month he wants backed up, and assign it a storage policy that backs it up to tape? Or would that not capture the data we want from the 3PAR?
Hi Community, I want to know what strategy we can take for data protection of cloud workloads using Commvault. Do we need to deploy a CS in the cloud, or can we use an on-prem CS for backup of both cloud and on-prem workloads? If so, how? Please share any sample reference architecture diagram for backup of cloud workloads. What type of backup library should be used for cloud workload backups?
Report to find whether a specific username is being used in multiple locations in Commvault. We recently changed the password of a username that was being used to back up different environments. Is there a report in Commvault that I can use to search for a username such as domain\username, to find out whether it is being used to back up, say, 10 different environments?
Hi there! Could you please point me to the right procedure for deploying a standby CommServe server? I have found this page (https://documentation.commvault.com/11.24/essential/128066_deploying_standby_commserve_server.html); however, we have no HyperScale Appliance, just an ordinary CommServe server. So, if I want to deploy a standby CommServe as a DR site for fast failover, should I deploy the standby CommServe as described here (https://documentation.commvault.com/11.24/essential/106129_installing_standby_commserve_host.html)? In general, is deploying the standby CommServe host a time-consuming process, or is it a fairly intuitive task?
Hello, I have a question regarding this procedure (https://documentation.commvault.com/11.24/essential/105913_installing_feature_releases_on_standby_commserve_host_and_sql_clients.html). Should point 2 ("From the Command Center, install the feature release or the maintenance release on the standby CommServe host and the SQL clients in both the production and standby CommServe hosts as follows") be run entirely from the standby host?
Commvault's new licensing model is very straightforward, but compared with it, the earlier model was more flexible in matching customer requirements. The new licensing is good for someone running a hypervisor environment and planning to move to the cloud. But this new licensing model is not well suited to an on-prem setup where physical servers are in use and a separate NAS server etc. has to be backed up, because there it becomes a very costly backup solution.
Hi all, my task is to deploy multiple VSA proxies. Since I am doing this for the first time, I cannot really imagine all the caveats and pitfalls that may be hidden. The right procedure should be: install a Windows or Linux machine (physical or virtual); install the VSA module during Commvault package installation (also specifying the CommServe server during installation); make sure the required ports are open on the newly created VSA proxy. There are a couple of questions about what to do next: is it necessary to create any vCenter permissions/roles (as per the video https://kb.commvault.com/article/63239)? How do I enable use of this newly created VSA proxy? Any contributions will be much appreciated!