Share Commvault best practices
Share use cases, tips & ideas with others
- 114 Topics
- 454 Replies
There seems to be some question surrounding VAULT TRACKER and how to manage PENDING ACTIONS. This is the correct process for managing pending Vault Tracker actions: https://documentation.commvault.com/11.26/essential/111089_managing_pending_vault_tracker_actions.html Since some organizations are retaining their tape footprint for archival and for data protection against ransomware, Vault Tracker is an excellent tool for tape management. Dwayne
Hi all. Just wanted to share here that although Okta is not officially supported, we have been able to get it to work for 2FA as a basic "Other" standalone time-based PIN. We set 2FA against a group and then place users in it one at a time. I wrote a couple of docs about it if anyone would like the details.
Hello, Team. I have a customer I am deploying Commvault for. They are looking to back up Amazon EC2 instances and also some Hyper-V and ESXi VMs (on-prem). I have given them requirements for a proxy in AWS that will also serve as a MediaAgent for the cloud VMs (the VMs are scattered across different regions, so ideally they would have one proxy per region), and also a proxy on-prem for the ESXi and Hyper-V VMs. The customer is saying the cost of another EC2 instance is high, and they want to use the on-prem server as the proxy and MediaAgent for the EC2 instances. Is that ideal? Looking forward to your responses.
I am backing up a fairly large NFS volume off a NetApp using the network share for NAS method. There is a mix of large files and many small files. Is it normal for a full backup to take several days (3-4) to complete? How can I improve the speed? Currently I have a Linux MediaAgent that is configured as the data access node. Would performance increase if I added additional access nodes?
Hi, I am still learning about Commvault and I am trying to fix an issue with a Commvault > Cloud Apps job which has three buckets as subclients. Two S3 buckets are backing up OK, and the other one gets errors in its job history, sometimes [82:129] and sometimes [19:583] Description: Another backup is running for client [Custom_AWS_S3], iDataAgent [Cloud Apps], Backup Set [defaultBackupSet], Subclient. I don't know if it is anything to do with the credentials used for the S3_EU region, but the client readiness check says "ready". I am still struggling to understand why two buckets back up fine using the same IAM credentials, yet just one bucket's backup task is failing for the job. Please give me some insight into it. Thanks, Suj
CommVault has the classic problem of a product with such wide-ranging utility that it is practically impossible to keep track of every aspect of the product. That said, I am sure we have all had the experience of finding a CommVault tool or website that is a game changer for your use case. Please share these links here:
- VSA Feature Compatibility Matrix
- Tape Storage Matrix
- Additional Settings database
When creating workflows or reports you sometimes need to refer to the CSDB directly. If you can find appropriate views or examples (in this community, for instance), that is helpful. But in most cases there is no clue to the whereabouts of the data you need to retrieve, so this is a technique for finding data in the CSDB. Suppose you want to identify which table a subclient entry is stored in. First, prepare a "full-text search" across the CSDB using a technique like the one in "Search all tables, all columns for a specific value SQL Server [duplicate]"; that query can search all tables for specific text. I created it as a stored procedure for convenience and pass it some literals to search for. Next, create some easy-to-search text, then search the entire CSDB using the query above. The results look like this: bingo, the subclient content must be stored in a table named APP_ScFilterFile. Some knowledge is needed to exclude unnecessary information; for instance, the 3rd one indicates Au
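Not from the original post, but as a rough illustration of the search-all-tables approach it points to, a sketch like the following could work. The marker text, the restriction to string columns, and the dynamic-SQL shape are assumptions; treat it as read-only exploration only.

-- Hedged sketch of a "search all tables for a text value" query, in the spirit of
-- the Stack Overflow technique referenced above. Nothing here is a Commvault-provided object.
use CommServ;
set transaction isolation level read uncommitted;

declare @SearchText nvarchar(200) = N'%EASY_TO_FIND_SUBCLIENT_CONTENT%';  -- your marker text
declare @Sql nvarchar(max) = N'';

-- Build one SELECT per string column; each result row names a table.column holding matches.
select @Sql = @Sql +
    'select ''' + s.name + '.' + t.name + '.' + c.name + ''' as [Location], count(*) as [Hits] '
  + 'from ' + quotename(s.name) + '.' + quotename(t.name) + ' with(nolock) '
  + 'where ' + quotename(c.name) + ' like @p having count(*) > 0;'
from sys.tables t
join sys.schemas s on s.schema_id = t.schema_id
join sys.columns c on c.object_id = t.object_id
join sys.types ty  on ty.user_type_id = c.user_type_id
where ty.name in ('varchar', 'nvarchar', 'char', 'nchar');

exec sp_executesql @Sql, N'@p nvarchar(200)', @p = @SearchText;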
Similar to this article, I'd like to show simple queries to retrieve DDB information from the CSDB. Important note: do not modify CSDB data or modules; use READ operations only. To keep your activities safe, keep to uncommitted reads from any table using one of the following techniques:

use CommServ -- just for convenience
-- place the following at the top of any query
set transaction isolation level read uncommitted;
-- or place a with(nolock) hint on each table reference
select * from APP_Application with(nolock)

Most of the DDB information is stored in tables whose names start with Idx. The DDB configuration is stored mainly in the following 3 tables; the first one holds the DDB information, the latter 2 the partitions:

select * from IdxSIDBStore
select * from IdxSIDBSubStore
select * from IdxAccessPath

To combine these, including which MA is in use for each partition, something like the following:

select store.SIDBStoreName as 'DDB Name'
      ,apc.name as 'MediaAgent'
      ,ap.Path as 'Partition path'
from Id
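The combined query in the excerpt is cut off; a hedged sketch of how such a join might be completed is below. The join columns (SIDBStoreId, IdxAccessPathId, ClientId) and the use of APP_Client for the MediaAgent name are assumptions about the CSDB schema, not taken from the post.

-- Hedged sketch only: join the three DDB tables and resolve the MediaAgent name.
-- All join column names below are assumed, not verified.
set transaction isolation level read uncommitted;

select store.SIDBStoreName as 'DDB Name'
      ,apc.name            as 'MediaAgent'
      ,ap.Path             as 'Partition path'
from IdxSIDBStore store with(nolock)
join IdxSIDBSubStore sub with(nolock) on sub.SIDBStoreId    = store.SIDBStoreId    -- assumed key
join IdxAccessPath ap    with(nolock) on ap.IdxAccessPathId = sub.IdxAccessPathId  -- assumed key
join APP_Client apc      with(nolock) on apc.id             = sub.ClientId         -- assumed key
order by store.SIDBStoreName;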
Important note: do not modify CSDB data or modules; use READ operations only. The official documentation explains part of the CSDB information as CommCell Views, but no more than that as of now, so it is slightly hard to build your own reports and/or workflows that refer directly to the CSDB tables containing all the CommCell information. This article provides simple examples for retrieving the data on your own. First of all, you can run SQL queries from Microsoft SQL Server Management Studio or DBeaver as you like (and on recent Linux CommServe environments too, though there are some tricks to gain access). To keep your activities safe, keep to uncommitted reads from any table using one of the following techniques:

-- place the following at the top of the query
set transaction isolation level read uncommitted;
-- or place a with(nolock) hint on each table reference
select * from APP_Application with(nolock)

Most of you would like to get the list of subclients along with
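The excerpt above ends mid-sentence; purely as an illustration of the kind of subclient listing it appears to be building toward, a sketch might look like the following. The APP_Client join and the clientId/subclientName column names are assumptions about the CSDB schema, not confirmed by the post.

-- Hedged sketch: list subclients with their owning client.
-- APP_Application is referenced in the post; the join and column names are assumed.
set transaction isolation level read uncommitted;

select c.name            as 'Client'
      ,app.subclientName as 'Subclient'
from APP_Application app with(nolock)
join APP_Client c with(nolock) on c.id = app.clientId
order by c.name, app.subclientName;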
Hi all! My company uses six MAs to create and store backups. One MA with separate storage handles long-term retention offsite, and another one creates local backups at a branch office site. On the main site there are four MAs in two-node grids: MA1 & MA2 form one grid and MA3 & MA4 another, and they share their libraries and DDBs. From the branch office, local backups are copied to the main site, and main-site backups are copied to the long-term site as DR backups. MAs are physical on the main site and virtual on the others, and disk storage is used on all sites. Currently, we are planning to change our disk storage and the physical MAs on the main site, and of course it is a good chance to upgrade the OS on the MAs from Win2012R2 to Win2019. During the process, library content should be moved from the old disk storage to the new one, and DDBs from the old MAs to the new ones. One MA stores 40 - 60 TB of backup data, and of course I would like to do it with minimum downtime. I have found descriptions about library mov
Hi Community, I have a question about the compatibility of BoostFS 7.8 on Red Hat Linux 8.x. BoostFS for Data Domain in Commvault is supported only up to 7.4, but for Dell, 7.4 has become an old version of DDBoost. Do you plan to update this compatibility matrix? (In comparison, BoostFS on Windows Server 2012/2019 is compatible up to 7.7.) Please advise. Regards,
Hello to all! For job replication (secondary copy), I see that these 2 options are available: Auxiliary Copy & DASH Copy. What are the pros and cons of each? Is DASH Copy for deduplicated data (with the Commvault dedup engine) and Auxiliary Copy for the rest, "transitional data" / jobs? Let's share your thoughts. Nikos
8332 18a0 05/05 11:06:42 ####### GetFromUsersPropDB() - Enter
8332 18a0 05/05 11:06:42 ####### GetFromUsersPropDB() - Exit
8332 18a0 05/05 11:06:42 ####### ::processAdUser() - Blobsize returned from processSSORequest = , dwErr=[0xc000019b]
8332 18a0 05/05 11:06:42 ####### ::processAdUser() - Unexpected return code [0xc000019b] from processSSORequest
8332 18a0 05/05 11:06:42 ####### ::processAdUser() - error not retrieved from formatMessageA(..)
8332 18a0 05/05 11:06:42 ####### EvSecurityMgr::userLogin() - processAdUser returned [-1], "error not retrieved from formatMessageA(..)"
8332 18a0 05/05 11:06:42 ####### EvSecurityMgr::userLogin() - Socket [0x0000000000004028]: Database error [-1/].
8332 18a0 05/05 11:06:42 ####### ::sendResponse() - FAILED [DataBase Error.]
8332 18a0 05/05 11:06:42 ####### handleLoginOperations() - Encrypted Login Failed. Browser Session Id
8332 6310 05/05 11:06:43 ####### dropConnection() - Socket [0x0000000000004028]: Closing Browser Sess
Hello Team, we have configured Huawei FusionCompute VRM for virtual machine backup. There are physical MediaAgent servers, and the running backup job shows the error "Unable to read the virtual disks". Please advise: can we use a physical MediaAgent server for the virtual machine backup?
Does Commvault always need to create an AMI for virtual server backups in AWS?
If we turn off CBT, will the AMI be deleted after the virtual server backup completes?
If we turn off CBT, what are the pros and cons?
Are HotAdd and EBS direct the only two available transport modes?
If there is a 3 TB EC2 instance, how long will it take to create an AMI for the backup?
Thanks
V11.25 SAN-attached tape library: how many drives are supported in one library? We use a big VTL and have no problem configuring 500 or more drives against one tape library. Now we have unclear failures and I want to check in general whether we have reached a hard-coded limit; we work with 600 or more drives. I looked on documentation.commvault.com but cannot find anything about limits. Can you help?
This is a simple but working trick to maintain Commvault. Via the Workflow built-in activity "ExecuteScript", you can call arbitrary shells (on both Windows and Linux) remotely. If you have full access to the remote server and are able to place scripts there, or if the script can be called via a Workflow, there is no issue. But if you'd like to modify a script remotely for OS-side scheduled jobs (Task Scheduler or crontab), it is slightly difficult to control this remotely, since Commvault can restore the script but cannot so easily modify the content itself. If the script contains only text data, you can utilize the echo command to put its contents on the remote side; one trick is required though, since arbitrary content has to be escaped for the shell. To achieve this, first prepare any script you want to put remotely (this is a modified version of the .bat file generated via Save as Script). Next, pass the generated script to the following logic, which "escapes" all strings per OS type:

String text = <original script>;
String osType = <Windows or Linux>;
// Generate ech
This is a simple but working trick to maintain Commvault. Backup jobs might fail at night, and to find out the cause of the errors you need to collect log bundles in the first place. But when you only notice the errors after a while (say a couple of days later), the job logs might have rolled over and the important information containing the error messages is gone. To avoid this situation, you can set up various alerts and collect logs immediately after you receive one. This is also cumbersome, so you can introduce a simple workflow that is called at the same time as the alert and collects the logs automatically. The rough process is as follows: Generate an answer file for "Send Log Files". This procedure utilizes Save as Script, which can save most user operations with their parameters and generates a .bat file and an XML file. The latter is called an answer file and contains the actual operation parameters in a single file. To export this, you can start the "Send Log Files" process from the CommCell Console: Then you're getti
This is a simple but working trick to maintain Commvault. When you want to start (typically backup) jobs via the CLI instead of Schedule Policies, it involves qlogin in the first place to log into the CommCell. This command is mostly straightforward to use, but when trying to invoke multiple jobs from one server it raises errors such as Error 0x10b: User not logged in, Error 0x208: Token file is corrupted, or related errors. BOL explains the -f parameter for using a token file as follows: when using qlogin without the -f option, it generates a file named "qsessions.OS-User" directly under the Commvault installation directory. This file must be created with administrative privileges on the server in order to modify the installation directory, and it also needs to exist until any qoperations are called in the shell (like qlist job); it is then removed when the shell calls qlogout. This default token file is generated per OS user, not per Commvault user, so if multiple shells are called simultaneously they use the same qsession file. So when
Hello, what is the best solution for VMware VSA backup: incremental or differential backups? Currently we use incrementals and synthetic fulls; is there any improvement for restores if we use differential backups instead of incrementals? Regards, Juergen