Commvault Q&A, release updates, and best practices
"Missed SLA" reports showing very old clients and VM's that have been removed or deconfigured years ago.
I have several Commvault reports (notably the “APSS - Missed SLA Report”) that keep referring to clients and virtual servers (backed up with the Virtual Server Agent) that are either deconfigured (if a client) or removed from the VM group (if a VM). Is there something I need to do when removing a VM from virtual server backups, or when deconfiguring a client, so these do not show up on reports? I’m also not sure why only “some” of the deconfigured/removed VMs show up on these reports rather than 100% of them. Note: some of these were “last backed up” in 2015, 2019, 2020, etc. They all indicate “No Schedule” or “No Job within SLA Period” as the “reason”, which is 100% true, as they were removed/deconfigured a long time ago.
Hello everyone, I’m about to do a Mine with Commvault deployment. The documentation says to download the Mine plug-in from the Commvault Store, but it is missing: https://cloud.commvault.com/webconsole/softwarestore/store.do#!/133/724?fl=%7B%22qfType%22:1,%22qfVal%22:%22%22%7D Does anyone know where to get it? Thanks
The vCloud Director plug-in does not load when logging in as a tenant user. When I log in as a provider, the plug-in does open, and I can even configure the user it will use to connect to the CommServe. Checking the error thrown when I try to open the plug-in as a tenant user, the developer view shows a 403 Forbidden error. Does anyone have an idea what might be happening? Could it be that I need to add something on the CommServe so it lets me open the plug-in from a tenant? CommServe version 11.28.53. Regards, Dulce J Rico
Hey everyone, I have a DR site that replicates a VM from the prod vCenter to vc-dr. The backup works from the NetApp machine at both sites and the snapshot is working well, but the machine’s Live Sync gets this error: Error Code: [23:10] Description: Error while reading pipeline buffer from MediaAgent [commvault-ma-dr.dordom.local]. Source: commvault-ma-dr, Process: vsrst
Hi team, could you please help me with a problem? I need to know how to change the user a backup job runs as, because my boss is going to delete one user and I need to reconfigure each task under another user. For example, I need to change this user to a different one, but I can’t find the option for doing it.
Hi folks, I’ve hit a problem seeding data to Azure using a Data Box. A copy has been created and the DDB is ready to be shipped as well. I’ve followed this procedure: https://documentation.commvault.com/v11/expert/97276_migrating_data_to_microsoft_azure_using_azure_data_box.html and this has also helped: https://commvaultondemand.atlassian.net/wiki/spaces/ODLL/pages/351142608/Deduplication+Database+Seeding#DeduplicationDatabaseSeeding-DDBSeedingusingDeduplicatedStorage I’m on step 4: “Once the jobs associated with the initial seeding is complete, shutdown the data box using the recommended shut down process for Azure Data Box.” Running the validation I get this error: “https://aka.ms/dberr5 - Large file shares are not enabled on your storage account(s). To disregard this error…” The CV_Magnetic folder is 36 TB and so easily hits the 5 TB limit stipulated here: https://learn.microsoft.com/en-us/azure/databox/data-box-disk-limits under “Object size limits and Azure Files”. So the only thing I can do is drop the storag…
Hello, I take srcClientID and destClientID from the command: qoperation execscript -sn DataInterfacePairConfig -si listByClient -si clientName Is there any way to associate srcClientID with a client name (i.e., how do I get the client name using these IDs)? Thank you very much for the help. Dorothy
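One hedged way to resolve those IDs, assuming read access to the CommServe SQL database and that the APP_Client table holds the ID-to-name mapping (both the table and column names here are assumptions to verify against your CommServe version, not a confirmed API):

```python
# Sketch: resolve Commvault client IDs to names straight from the CommServe DB.
# Assumption: APP_Client exposes 'id' and 'name' columns; verify in your
# CommServe database before relying on this.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=commserve\\commvault;DATABASE=CommServ;Trusted_Connection=yes;"
)

def client_name(client_id: int):
    row = conn.cursor().execute(
        "SELECT name FROM APP_Client WHERE id = ?", client_id
    ).fetchone()
    return row[0] if row else None

# e.g. a srcClientID returned by DataInterfacePairConfig
print(client_name(42))
```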
We’re using Commvault v11.28.48 and have several jobs in a “waiting” status, sitting at 10%-20%. The reason is simple: “Mount path does not have enough space.” I’m new to Commvault (our backup admin left recently and I wasn’t involved in the initial config or daily use), so I may be missing something obvious, but I thought aged data would be deleted automatically. I’ve run the “Data Retention Forecast and Compliance Report” and see the various estimated aging dates; the only thing that sticks out is the last line under the Disk Media Summary: Estimated Size to Free: 9406 GB (this box is green: “prunable job / recyclable media”). Delay reason for physical space cleanup: “Archive file has been queued for pruning from the DDB.” Our NetApp library is 24.7 TB with a size on disk of 24.46 TB and reserve space set to 100 GB, so an extra 9 TB would be great. “Enable pruning of aged data” is checked on the library. Am I correct in thinking that 9406 GB is data that should be automatically removed (pruned)?
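If you want to watch that pruning backlog drain, the sketch below counts entries in the MMDeletedAF table, which community posts commonly cite as the queue of archive files awaiting physical pruning; the table name and the read-access assumption should be verified against your CommServe version:

```python
# Sketch: count archive-file entries still queued for physical pruning.
# Assumption: MMDeletedAF is the pending-prune queue table (a common
# community reference); verify the name against your CommServe version.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=commserve\\commvault;DATABASE=CommServ;Trusted_Connection=yes;"
)
(pending,) = conn.cursor().execute("SELECT COUNT(*) FROM MMDeletedAF").fetchone()
print(f"Archive files queued for pruning: {pending}")
```

A shrinking count over time would suggest physical pruning is progressing, just slowly.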
We are currently in the process of going tapeless with Commvault. Our CommServe disk library is onsite and will be staying onsite; the only change is that we will be sending our aux copies to Metallic Cloud Storage Service, or whatever it is called now. My question is: can we run our CommCell from the cloud and connect to our in-house media to do all our backups and aux copies, without having to go full Metallic with hot storage? Our security officer is asking for this as an added layer of security: if a “bad actor” accesses our onsite systems, they would also have access to our CommCell, since it’s onsite; if it were housed in the cloud, we would be slightly more secure. Thank you for any info. We have a meeting next week with Commvault to further our services with them, but I figured I would ask here so I don’t sound like a complete idiot in the meeting.
Hi all. I have a question regarding Workflow Configurations (see this post:). In general, I know how to use these and they work like a charm. Unfortunately, I don’t have any way to see the configured values after setting them. E.g., I have a configuration named “TestConfig” of type String. I can set it to “Whatever” in the properties of the workflow, and the workflow will use it. But if I look in the properties again, it appears as if “TestConfig” is empty and not set. Am I missing something, or is there no way to view the configured values?
Hello Community,
#1 VMs are linked to the same storage pool and the DDB is enabled; will moving a VM between subclients initiate a re-baseline?
#2 Could you confirm whether Commvault generates the DDB signature based on the server name, the UUID, or something else?
#3 If we enable Commvault encryption at the subclient level, will this trigger a DDB re-baseline when the scheduled job starts for the existing subclients? If so, will the job convert to a full automatically, or does a full have to be run manually?
Thank you
Hello, Error: “Failed to update storage policy copy. Cannot update copy with override retention when storage pool is using Storage WORM lock property set.” Is it only possible to add/remove associations when we remove the WORM storage lock option? I know the procedure. @Commvault feedback: it might be better to make it possible to add associations (but not remove them).
Which CommServ table or view has the information about the current throughput and average throughput for an actively running job, which we can find in the Job Controller? And what are the column names for the average throughput and current throughput of all the respective jobs in the Job Controller? For example, the following CommServ view, CommCellJobController, only lists certain information like …
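One safe way to see exactly what the view exposes before hunting for throughput columns is to ask SQL Server itself (assuming read access to the CommServ database; no column names are assumed here, they are discovered at runtime):

```python
# Sketch: list the columns the CommCellJobController view actually exposes,
# so you can check for throughput-related fields yourself.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=commserve\\commvault;DATABASE=CommServ;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE TABLE_NAME = 'CommCellJobController' ORDER BY ORDINAL_POSITION"
)
for (col,) in cur.fetchall():
    print(col)
```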
So, in an effort to clean up our environment, we are trying to get our SLA pushed to 97% and above. One of the issues is that the clients that are part of the Exchange DAG have no scheduled backups. My concern is that if I start backing up these clients, they would be backing up the DAGs, which are already covered by our Exchange backups. Is the software smart enough to notice that the Exchange client is part of an Exchange cluster and not back up the DAGs that the Exchange job handles, or would I be double-backing-up the DAGs if I started backing up the nodes as well?
We have traditionally pulled chargeback from our Metrics server through a SQL connection. We have a new CommCell that does not have an internal connection to the corporate network. How can I get chargeback from it? Is there an API connection we can make to the report on cloud.commvault.com?
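For programmatic access on the CommCell side, a minimal sketch of authenticating to the Commvault REST API with Python (the /Login endpoint, Base64-encoded password, and Authtoken header follow the documented REST API; the host name is a placeholder, and the exact chargeback/report endpoint to call afterwards is an assumption to check against your version’s API documentation):

```python
# Sketch: authenticate to the Commvault REST API and reuse the token for
# follow-up requests. BASE is a placeholder for your Web Server URL.
import base64
import requests

BASE = "http://webconsole.example.com:81/SearchSvc/CVWebService.svc"  # placeholder

resp = requests.post(
    f"{BASE}/Login",
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={
        "username": "reportuser",
        "password": base64.b64encode(b"secret").decode(),  # API expects Base64
    },
)
resp.raise_for_status()
token = resp.json()["token"]

# Subsequent calls carry the token in the Authtoken header, e.g.:
jobs = requests.get(
    f"{BASE}/Job",
    headers={"Accept": "application/json", "Authtoken": token},
)
print(jobs.status_code)
```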
It’s been a while since I installed a Linux FSA on a client, so my question is: I want to install the Linux FSA on a client, but I don’t want the client to have access to edit or change other user groups; I just want it to perform backups of the Linux client. With that said, what items should I uncheck in the permission details?
Hello, I want to automatically write data from reports into a Word document. For example, the number of failed jobs from the Backup Job Summary report should be written to a Word document. Is there already a workflow for this, or does anyone have an idea how to implement it?
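In the absence of a ready-made workflow, here is a minimal sketch of the Word-writing half using the python-docx library; the failed-job count is hard-coded as a stand-in for whatever you extract from the Backup Job Summary report (e.g. via the REST API or a CSV export), and that extraction step is assumed, not shown:

```python
# Sketch: write a report value into a Word document with python-docx.
# 'failed_jobs' is a placeholder for the number pulled from the
# Backup Job Summary report (REST API, CSV export, etc.).
from datetime import date
from docx import Document

failed_jobs = 7  # placeholder value from the report extraction step

doc = Document()
doc.add_heading("Backup Job Summary", level=1)
doc.add_paragraph(f"Date: {date.today().isoformat()}")
doc.add_paragraph(f"Failed jobs: {failed_jobs}")
doc.save("backup_summary.docx")
```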