Commvault Q&A, release updates, and best practices
Total Application Size is smaller than Total Data Size on Disk
I recently carried out a large delete from a Storage Policy copy, and now the Total Application Size is smaller than the Total Data Size on Disk. I waited a day before rechecking these stats. I can see there are no pending deletes, so I am assuming the Commvault pruning cycle has completed. My object storage carries out the physical deletes once a week, but I would not think that would affect how Commvault displays the data-on-disk size. Any ideas why my Total Application Size is smaller than the Total Data Size on Disk? (Running 2022E, or 11.28 for us old farts.)
Size of Application/Backup and Data Written Terminology Understanding
Hi there, I would like to understand the difference between Size of Application, Size of Backup, and Data Written. Below is my understanding so far; please correct me if I am wrong.
Size of Backup - total size of the data that should be backed up (including index files)
Data Written - total size of the data physically stored to the library/disk
Size of Application - I found the definition "The original size of the data before compression", but then what is the difference between Size of Backup and Size of Application? I would be grateful for your ideas.
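As a rough illustration of how these three counters commonly relate (my own assumption, not an official Commvault formula): Size of Application is the raw client-side data, Size of Backup adds index data on top of it, and Data Written is what actually lands on the library after compression and deduplication savings. A toy Python sketch:

```python
def backup_metrics(app_size_gb, index_gb, compression_ratio, dedup_ratio):
    """Toy model (assumed relationship, not an official Commvault formula):
    - size_of_application: raw data as seen on the client
    - size_of_backup: application data plus index files
    - data_written: what physically lands on the library after
      compression and deduplication savings are applied."""
    size_of_application = app_size_gb
    size_of_backup = app_size_gb + index_gb
    data_written = size_of_backup / (compression_ratio * dedup_ratio)
    return size_of_application, size_of_backup, data_written

# 1 TB of client data, 10 GB of index, 2:1 compression, 5:1 dedup
app, bkp, written = backup_metrics(1000, 10, compression_ratio=2.0, dedup_ratio=5.0)
print(app, bkp, written)  # 1000 1010 101.0
```

This is why Data Written is normally much smaller than Size of Application; when the relationship inverts (as in the post above), something like pruning lag or index overhead is usually involved.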
MediaAgent crash after guest file restore
Hello, I have an issue with guest file restore: after starting the restore, the MA throws an error and crashes. I am trying to restore Windows files using an RHEL 8 MediaAgent. After starting the restore there is high CPU usage and a warning on the MA; at some point the MA stops responding and you need to restart it. When I do a file restore using a Windows MediaAgent, it works perfectly. I remember that in the past we always used the RHEL 8 MA for restores of all kinds of files. We are now on 11.28.48. I opened Commvault case 230228-312 as well, but they couldn't help.
Cloud library migration from Azure one tenant to another
We are running Commvault 11.20. Currently backup jobs use an Azure Blob cloud disk library with the default container setting (on Azure the container type is Cool). We would like to move this storage to another tenant, with a different storage account and a Cool/Archive-type container. I'm looking for the best approach to migrate the storage, ideally driven from Commvault rather than from Azure.
Doubt about MA architecture
I will soon have to implement Commvault on a site consisting of the following elements:
- 2 MediaAgents
- A NetApp storage array presenting block storage via iSCSI
The idea is that the MAs work on the same storage and that when one goes down, the other continues to provide availability to the infrastructure for both backup and restore operations. From a lot of reading, I think the best fit for my case would be to implement GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html
Given my knowledge and experience, it reads as a somewhat complicated configuration, and I don't know if it fits the needs I have. This is the procedure that is most difficult for me, because I do not understand it very well: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html
I thought that presenting the LUN to the MA, formatting it, and sharing the data path with the other MA would be enough, but I see that it is n
Kubernetes Backup Failed : Error mounting snap volumes
Hi, I am unable to complete the backup job for Kubernetes. I'm getting the error "unable to mount snap volumes". I tried following the Commvault community solution but was unable to rectify the error. Looking forward to a solution ASAP. Here are some logs:
9388 1f84 03/15 21:00:10 1189 AppManager Found proxy clients on VSA subclient  with SP20 or higher service packs.
9388 1f84 03/15 21:00:11 1189 JobSvr Obj Combined dynamic priority for the job is 
9388 1f84 03/15 21:00:11 1189 Servant [---- SCHEDULED BACKUP REQUEST ----], taskid  Clnt[longhorn-cvk01] AppType[Virtual Server] BkpSet[defaultBackupSet] SubClnt[volume-grp] BkpLevel[Incremental]
9388 2af4 03/15 21:00:11 1189 Scheduler Starting jobs only on proxies with service pack SP20 or higher
9388 2af4 03/15 21:00:11 1189 AppManager Failed to unserialize vcenters Info XML: 
9388 1614 03/15 21:00:11 1189 Scheduler Phase [2-Discover] (0,0) started on [WIN-63OL3L5JFPN] in  second(s) - vsdisc.exe
azure file share exclude folders
Hi, we are backing up an Azure file share (iDataAgent: Cloud Apps) that contains several folders. Do you know how I can exclude particular folders from the backup? I didn't find any option for it. The only thing I found in the admin documentation was this: https://documentation.commvault.com/2022e/expert/148009_performing_azure_file_share_data_backups.html ("Files with a trailing slash (/) in the file name are not backed up"), but I need to exclude folders, not only files. Thank you! br, Stano
Retention Lock on MRR Storage Best Practice
I'm deploying MRR storage and planning to enable immutability via the "Enable Retention Lock" workflow at the storage pool level. I'm keen to hear anyone's experience with this, because once it's enabled it's not reversible. Typically, when using Retention Lock on a policy copy, we wouldn't use extended retention on a copy; instead we set the basic retention to the requirement, e.g.:
- One Selective Copy for the monthly backups with the required basic retention
- Another Selective Copy for the yearlies with the required retention
Both Selective Copies would point to the MRR storage pool with WORM enabled as a dependent copy in the storage pool via the workflow. Is this the best way to set up MRR with immutability?
Create custom alert for long-running jobs
Related to my previous topic, this can be used as a custom alert to detect jobs running longer than their average. Please refer to the following for creating custom alerts in general; this is for SP16 (my customers are still on that release) but it is basically applicable to newer releases: https://documentation.commvault.com/commvault/v11_sp16/article?p=5308.htm
- Start adding a new alert rule
- Name it as you like
- Put in the query below
The actual query is as follows:
set nocount on
set transaction isolation level read uncommitted
select bkji.jobId
 ,bkji.applicationId
 ,apc.name as 'clientname'
 ,apap.subclientName
 ,bkji.bkpLevel
 ,1.0 * (dbo.GetUnixTime(GETUTCDATE()) - ji.jobStartTime) / grp.avg_duration as 'exceeded'
 ,grp.avg_duration
 ,grp.count_job
from jmbkpjobinfo bkji
inner join JMJobInfo ji on bkji.jobid = ji.jobid
inner join APP_Application apap on apap.id = bkji.applicationId
inner join APP_Client apc on apc.id = apap.clientId
inner join (
 select appId, bkpLevel, avg(duration) as avg_duration, avg(total
Detect longer jobs than usual
There is some alert configuration to detect jobs running longer than usual, and you can also check jobs for the same condition in the Job Controller (a small icon appears). Still, there's no easy way to detect delays with custom criteria, such as for a specific client, or a duration two or three times longer than usual. This query is intended to address such requests, listing the difference between running jobs' durations and the "average" per subclient and backup level.
use commserv
set transaction isolation level read uncommitted
select bkji.jobId
 ,bkji.applicationId
 ,apc.name as 'clientname'
 ,apap.subclientName
 ,bkji.bkpLevel
 ,1.0 * (dbo.GetUnixTime(GETUTCDATE()) - ji.jobStartTime) / grp.avg_duration as 'exceeded'
 ,grp.avg_duration
 ,grp.count_job
from jmbkpjobinfo bkji
inner join JMJobInfo ji on bkji.jobid = ji.jobid
inner join APP_Application apap on apap.id = bkji.applicationId
inner join APP_Client apc on apc.id = apap.clientId
inner join (
 select appId, bkpLevel, avg(du
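The core of the query above is a simple ratio: a running job's elapsed time divided by the historical average duration for that subclient and backup level (the `exceeded` column). A minimal Python sketch of the same logic, with hypothetical field names chosen to mirror the SQL columns:

```python
import time

def exceeded_ratio(job_start_time, avg_duration, now=None):
    """Ratio of a running job's elapsed seconds to the historical average
    duration (mirrors the SQL 'exceeded' column). A value > 1.0 means
    the job has already run longer than average."""
    now = time.time() if now is None else now
    return (now - job_start_time) / avg_duration

def flag_long_runners(jobs, threshold=2.0, now=None):
    """Return the job ids whose elapsed time is at least `threshold`
    times the average. `jobs` is a list of dicts with jobId,
    jobStartTime, and avg_duration (illustrative field names)."""
    return [j["jobId"] for j in jobs
            if exceeded_ratio(j["jobStartTime"], j["avg_duration"], now) >= threshold]

jobs = [
    {"jobId": 101, "jobStartTime": 0, "avg_duration": 3600},    # elapsed 2x average
    {"jobId": 102, "jobStartTime": 3600, "avg_duration": 3600}, # elapsed 1x average
]
print(flag_long_runners(jobs, threshold=2.0, now=7200))  # [101]
```

The `threshold` parameter covers the "two or three times longer than usual" criterion mentioned above; the SQL version leaves the filtering to the alert rule consuming the `exceeded` column.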
Retrieve all Job Phase failures
For an MSP customer there are a lot of alerts that send out mails whenever any of them detects phase errors, but sometimes the job itself succeeds. In that case it's quite difficult to look into the phase errors, since the information is scattered across detailed job results, events, alerts, etc. This query was created at a customer's request to track job phase errors in one place, combining information from various tables including event and error parameters. There is some verbose information in there, but you can easily detect when the phase failures happened, for example when one of the MAs is having a connection issue and affecting multiple jobs. Here's the query; you can run it on the CommServ DB, or creating custom reports in your Metrics Reporting could make your life easier.
use commserv
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
declare @tmp table (
 jobid int
 ,starttime bigint
 ,endtime bigint
 ,messageid int
 ,occurred nvarchar(max)
 ,message nvarchar(max)
)
insert into @tmp
select f.jo
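Once the phase errors are collected in one place, the "one MA affecting multiple jobs" pattern is just a frequency count over the failure rows. A small Python sketch of that post-processing step, using hypothetical row fields for illustration (the real query's output columns are partly cut off above):

```python
from collections import Counter

def top_failure_sources(phase_errors, n=3):
    """Count which MediaAgent names appear across phase-failure rows
    to surface a shared culprit (e.g. one MA with connection issues
    affecting many jobs). Each row is a dict with 'jobid' and a
    'media_agents' list; both field names are illustrative."""
    counts = Counter()
    for row in phase_errors:
        for ma in row.get("media_agents", []):
            counts[ma] += 1
    return counts.most_common(n)

errors = [
    {"jobid": 1, "media_agents": ["ma01"]},
    {"jobid": 2, "media_agents": ["ma01"]},
    {"jobid": 3, "media_agents": ["ma02"]},
]
print(top_failure_sources(errors))  # [('ma01', 2), ('ma02', 1)]
```

In practice the same grouping can of course be done directly in SQL with a `GROUP BY` over the temp table; this is just the idea in miniature.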
Error loading Environment Panel - Command Center
Hello guys, it's been a long time since I posted anything here 😅. I noticed something weird occurring these days in the Environment panel of Command Center: suddenly it doesn't show the information it should, only "0s" instead of the quantity of VMs, file servers, and so on. When this happens I try restarting the Web Server service, but it has no effect. So before I open a ticket about this, has anyone seen this error before? I'm using 11.28.52, btw 😊. Thank you.
Requirements for commvault java console, MacOS?
This is a first for me. A user wants to log in to the Java CommCell Console from a macOS computer. Are there any special requirements for this? As far as I know, they download the JNLP file and run it, but a message appears that reads "Connection to CommServe is lost." In their screenshot the Java console is also showing, and it says "GUI_JNI.dll not loaded". Should it require a DLL file on macOS, or is this informational only and not actually an error message?
LTO9 - Media calibration / Characterization
Hi, and happy new year to all of you! I would like to know if some of you have already implemented LTO9 drives / tape libraries, and would love to get your feedback about using them with Commvault. My experience with LTO9 media, using dual-drive tape libraries, is quite bad. The media calibration / optimization / characterization phase that any new LTO9 medium has to go through is a pain on my side. It looks like on the first mount of a medium -- let me reword it in my 'old guy' words -- it has to be somehow formatted before it can be used by your favourite backup software. Below is a link to Quantum's FAQ about this: https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf
A short calculation: 50 brand-new LTO9 tapes may require up to 2 hours each of 'calibration' before they can be used. So that equals 100 hours of 'calibration' before you could use the full 50-tape pool.. 😱 My first issue was that I had to adjust all the mount timeouts in that LT
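The back-of-the-envelope calculation above generalizes easily: with more than one drive, calibrations can overlap, so wall-clock time scales with the number of tape batches rather than the number of tapes. A small sketch, assuming the worst-case 2 hours per tape cited in Quantum's FAQ and perfectly parallel drives (both are assumptions for rough planning only):

```python
import math

def calibration_hours(tapes, hours_per_tape=2.0, drives=1):
    """Rough wall-clock hours to calibrate a pool of new LTO9 tapes,
    assuming each tape takes up to `hours_per_tape` (worst case) and
    calibrations run in parallel across `drives`. Planning estimate
    only; real libraries add mount/unmount overhead on top."""
    batches = math.ceil(tapes / drives)
    return batches * hours_per_tape

# 50 tapes on a single drive: the 100 hours from the post above
print(calibration_hours(50, drives=1))  # 100.0
# The same pool on a dual-drive library roughly halves the wall-clock time
print(calibration_hours(50, drives=2))  # 50.0
```

This is also why the mount timeouts mentioned above matter: a single mount can legitimately take hours while a new tape calibrates.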