Commvault Q&A, release updates, and best practices
We are low on space, so we are going through old jobs and anything still being held past our retention period. We have plenty of infinite and long-term retained items mixed with our regular data in our primary DDB. My theory is that the infinite-retention and long-term jobs are holding reference blocks, and that this is why the data size on disk looks reasonable while the library is full: for example, 800 TB size on disk, but the 1.5 PB library is full. As we go through jobs, some are dismissed as not "big fish" because data written may show, say, 85 GB for a server with a 1.5 TB application size. We skip those because, I'm told, 85 GB is all the space that would come back, and we look for 1 TB+ written instead. I'm thinking that even as we size our future library, we should plan for a pool of space that will always sit there holding these reference blocks and be effectively "unusable". Hopefully this rant makes sense?
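For what it's worth, the numbers in the post can be turned into a crude sizing rule. Here is a sketch of that arithmetic; the 1,000 TB target and the idea of carrying the pinned fraction forward are both assumptions for illustration, not an official Commvault sizing formula:

```java
// Back-of-the-envelope version of the poster's reasoning. The 800 TB /
// 1.5 PB figures come from the post; budgeting a fixed "reference reserve"
// is the poster's proposal, not Commvault guidance.
public class DedupLibrarySizing {
    public static void main(String[] args) {
        double libraryCapacityTb = 1536.0; // 1.5 PB library
        double sizeOnDiskTb      = 800.0;  // size on disk reported across jobs

        // Space the library holds beyond what jobs report: reference blocks
        // pinned by infinite/long-term jobs, DDB overhead, unpruned data.
        double unaccountedTb   = libraryCapacityTb - sizeOnDiskTb;
        double reserveFraction = unaccountedTb / libraryCapacityTb;
        System.out.printf("Unaccounted space: %.0f TB (%.0f%% of library)%n",
                unaccountedTb, reserveFraction * 100);

        // Future sizing: if a similar fraction stays pinned, a library meant
        // to hold N TB of reported job data needs roughly N / (1 - fraction).
        double targetJobDataTb = 1000.0; // hypothetical growth target
        System.out.printf("To hold %.0f TB of job data, size for ~%.0f TB%n",
                targetJobDataTb, targetJobDataTb / (1 - reserveFraction));
    }
}
```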
This is a simple but working trick for maintaining Commvault. Backup jobs might fail at night and you'd like to find out the cause of the errors, so you need to collect log bundles in the first place. But if you only notice the errors after a while (say, a couple of days later), the job logs might have rolled over, and the important information containing the error messages is gone. To avoid this situation, you can set up various alerts and collect logs immediately after you receive one. That is also cumbersome, so you can introduce a simple workflow that is called at the same time as the alert and collects the logs automatically. The rough process is as follows: generate an answer file for "Send Log Files". This procedure uses Save as Script, which can save most user operations with their parameters and generates a .bat file and an XML file. The latter is called the answer file, and it contains the actual operation parameters in a single file. To export this, start the "Send Log Files" process from the CommCell Console and choose Save as Script; that gives you the .bat file together with the XML answer file.
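For reference, the .bat file that Save as Script generates typically runs the saved operation through the qoperation command-line tool against the XML answer file. A minimal sketch of calling that from your own automation, assuming a hypothetical path C:\scripts\sendlogs.xml, the Commvault CLI on PATH, and an existing qlogin session:

```java
import java.io.IOException;

// Runs "qoperation execute" against the saved answer file and surfaces its
// output. The path and file name are placeholders for illustration.
public class RunSendLogs {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "qoperation", "execute", "-af", "C:\\scripts\\sendlogs.xml");
        pb.inheritIO(); // show qoperation's stdout/stderr in our console
        int exit = pb.start().waitFor();
        System.out.println("qoperation exited with code " + exit);
    }
}
```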
This is a simple but working trick for maintaining Commvault. Via the Workflow built-in activity "ExecuteScript", you can call arbitrary shells (both on Windows and Linux) remotely. If you have full access to the remote server and are able to place scripts, or if the script can be called via Workflow, there are no issues. But if you'd like to modify a script remotely for OS-side scheduled jobs (Task Scheduler or crontab), it's slightly difficult to control this process remotely, since Commvault can restore the script but cannot easily modify the content itself. If the script contains only text data, you can utilize the echo command to put the contents in place remotely, though one trick is required, since special characters have to be escaped. To achieve this, first prepare any script you want to put remotely (this is a modified version of the .bat file generated via Save as Script). Next, pass the generated script to the following logic, which "escapes" all strings per OS type:

String text = <original script>;
String osType = <Windows or Linux>;
// Generate echo commands that write the script line by line
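The post's snippet cuts off there, so here is one way the escape-and-echo logic could look. This is a sketch under simplified assumptions (caret-escaping for cmd.exe, single-quote wrapping for POSIX shells); the name toEchoCommands is made up for illustration, and real batch files need more care (empty lines, percent signs, and so on):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "escape and echo" idea. The escaping rules are simplified:
// cmd.exe metacharacters get a ^ prefix on Windows; on Linux each line is
// wrapped in single quotes with embedded quotes rewritten as '\''.
public class EchoScriptGenerator {

    static String escapeWindows(String line) {
        return line.replaceAll("([&|<>^()])", "^$1");
    }

    static String escapeLinux(String line) {
        return "'" + line.replace("'", "'\\''") + "'";
    }

    // Turn a multi-line script into echo commands that rebuild it remotely.
    static List<String> toEchoCommands(String text, String osType, String remotePath) {
        List<String> cmds = new ArrayList<>();
        boolean windows = osType.equalsIgnoreCase("Windows");
        for (String line : text.split("\r?\n")) {
            String escaped = windows ? escapeWindows(line) : escapeLinux(line);
            cmds.add("echo " + escaped + " >> " + remotePath);
        }
        return cmds;
    }

    public static void main(String[] args) {
        String script = "echo hello & dir C:\\";
        toEchoCommands(script, "Windows", "C:\\temp\\job.bat")
                .forEach(System.out::println);
    }
}
```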
Hi, we are gradually automating our infrastructure (IaC) and need to automate the installation and configuration of Commvault MediaAgents. Any good resources on that subject, here or elsewhere? We also want to deploy a MediaAgent on a Windows device in Azure in order to back up a data lake (ADLS Gen2), so the MediaAgent needs to be configured with ADLS Gen2 storage for its library. Any automation/scripting tips on that?
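Not a full answer, but most Commvault scripting ends up going through the REST API (documented at api.commvault.com), which any IaC pipeline can drive. A minimal login sketch, assuming a placeholder web-service URL and credentials; the follow-up calls (package install, cloud library creation for ADLS Gen2) would come from the API documentation for your feature release:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Login sketch for the Commvault REST API. The web-service URL and
// credentials here are placeholders, not real values.
public class CommvaultApiLogin {
    public static void main(String[] args) throws Exception {
        String base = "http://webserver:81/SearchSvc/CVWebService.svc"; // placeholder
        String password = Base64.getEncoder()
                .encodeToString("MyPassword".getBytes(StandardCharsets.UTF_8));
        String body = "{\"username\":\"admin\",\"password\":\"" + password + "\"}";

        HttpRequest login = HttpRequest.newBuilder()
                .uri(URI.create(base + "/Login"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(login, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // contains the token for the Authtoken header
    }
}
```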
We have an offsite facility we send our aux copies to for DR purposes, with PBs of data for Commvault to go through. We do not have a firewall, and port 8400 can reach the offsite facility fine. Some of us in-house think that using a network topology to create a persistent connection and then opening 8 routes will speed up the process over letting Commvault handle the traffic automatically. Does anyone have any insight into which approach is better for us? Or, more technically, how do the network routes work, and is there a recommended setup?
I would like to describe a problem with Commvault. The problem concerns the configuration of the Virtual Lab functionality. When attempting to configure a policy, Commvault is not able to "see" CV_VLAB_GATEWAY or any network configured in the VMware environment, even though such a machine exists in the client's VMware environment. I would very much appreciate any help.
I need assistance estimating the storage size for backup. We have nearly 140+ new VM servers which need to be backed up daily, with weekly fulls and monthly copies to tape. In order to procure the license from Commvault I need to provide the details below, but I'm unable to get all of them because I'm confused between application size, total VM size, and backup size. When I downloaded the VM list from vCenter I got all the clients including each VM's size, but when I calculate the total based on provisioned storage, it comes to nearly 200 TB; when I log in to each VM and look at used space, it is hardly 20-30 TB, and a few VMs are growing by nearly 20 GB daily. So in my case, which size should I consider, and how do I calculate the total capacity and build a solution for this?
- Total capacity of data to be protected
- Total capacity of physical data
- Total capacity of virtual machines
- Total VMs – 220?
- Total physical – 12?
- Disk retention on both sites
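As a rough illustration of the distinction being asked about: front-end sizing is normally based on used (application) data, not provisioned disk, since empty provisioned space is never read or backed up. A back-of-the-envelope sketch using the post's numbers, with made-up change rate, dedup savings, and retention figures; this is not Commvault's official sizing calculator:

```java
// Generic sizing back-of-envelope. Only frontEndTb and dailyGrowthTb come
// from the post; everything else is an assumption to make it concrete.
public class BackupSizing {
    public static void main(String[] args) {
        double frontEndTb    = 30.0;  // used space across VMs (post: 20-30 TB)
        double dailyGrowthTb = 0.02;  // post: ~20 GB/day growth
        double changeRate    = 0.05;  // assumed daily change rate
        double dedupSavings  = 0.60;  // assumed 60% savings on fulls
        int weeklyFullsKept  = 4;     // assumed disk retention: 4 weekly fulls
        int dailyIncrsKept   = 28;    // assumed: 4 weeks of daily incrementals

        double fullsTb = frontEndTb * (1 - dedupSavings) * weeklyFullsKept;
        double incrsTb = frontEndTb * changeRate * dailyIncrsKept;
        double yearTb  = dailyGrowthTb * 365; // organic growth per year

        System.out.printf("Estimated backend disk: %.1f TB (+%.1f TB front-end/yr)%n",
                fullsTb + incrsTb, yearTb);
    }
}
```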
Hi, I'm setting up a backup for a Solaris Cluster. Quick question: can we have the Job Results folder on an NFS-mounted volume in the cluster group? I have had the Job Results folder on network-shared volumes before, but never in a cluster group setup. Is it supported? /Patrik
Hi all, I have Commvault Version 11.2 installed on Oracle Linux 7. From time to time, a few Oracle databases show as UNKNOWN in the Commvault dialog box. I have checked the permissions of the connecting user (C##DBAAS_BACKUP, granted SYSDBA), as well as the catalog user and the RMAN connection. Both are fine, but for some strange reason my connection sometimes shows as UNKNOWN. However, if I change C##DBAAS_BACKUP to SYS, the connection comes back to OPEN. This is weird behavior since, as I mentioned before, this specific user has all the required permissions. Any ideas? Regards, Laura
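One sanity check worth adding here: SYSDBA is held via the password file, so Oracle's V$PWFILE_USERS view shows whether C##DBAAS_BACKUP actually carries the grant at connect time. A small JDBC sketch; the connection details are placeholders, and it needs the Oracle ojdbc driver on the classpath:

```java
import java.sql.*;

// Lists users in the password file and their administrative privileges,
// to confirm the SYSDBA grant the post describes. URL/credentials are
// placeholders for illustration.
public class CheckSysdbaGrant {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLCDB"; // placeholder
        try (Connection c = DriverManager.getConnection(url, "sys as sysdba", "password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                 "SELECT username, sysdba, sysbackup FROM v$pwfile_users")) {
            while (rs.next()) {
                System.out.printf("%s SYSDBA=%s SYSBACKUP=%s%n",
                        rs.getString(1), rs.getString(2), rs.getString(3));
            }
        }
    }
}
```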
Hello all, I would like to use Amazon KMS for encryption; how do I achieve this? Do I need to register the Amazon KMS in our CommCell and use it in our policies? As per the documentation below, we were asked to add additional keys to enable encryption. How does that work, can anyone explain? https://documentation.commvault.com/11.24/expert/9263_enabling_server_side_encryption_with_amazon_s3_managed_keys_sse_s3.html What is the difference between the above documentation and registering the Amazon KMS in the CommCell?
If I add a VM to the filter list in a VM subclient today to stop it from being backed up, and we have 365-day monthly retention on the subclient's storage policy, it will still be available for restore in June, right? Or do I need to create a separate subclient with its own storage policy and retention settings, add the VM to that subclient, run a full backup, and finally disable activity?