Recently active topics
Hello, sometimes after a VM reboot the Commvault services do not come up, even though they are set to start automatically when the OS starts. If this happens on any client, how can we create a workflow to check the Commvault services on our SQL Server VMs? Today we check them manually in the Commvault Process Manager. Ideally the workflow would try to start the services twice and send an email alert if they are still not running.
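The check/restart/alert logic asked for above can be sketched as plain Python, independent of Commvault's workflow engine. The helpers here are injectable placeholders (in practice they might wrap `sc query`/`sc start` or a workflow activity); the service name and helper names are assumptions for illustration, not Commvault APIs:

```python
# Sketch of the "check, try to start twice, alert if still down" logic.
# check_service and start_service are injectable so the same logic can be
# driven by sc.exe, WMI, or a Commvault workflow activity.
from typing import Callable

def ensure_service_running(
    name: str,
    check_service: Callable[[str], bool],
    start_service: Callable[[str], None],
    send_alert: Callable[[str], None],
    attempts: int = 2,
) -> bool:
    """Return True if the service ends up running; alert on failure."""
    for _ in range(attempts):
        if check_service(name):
            return True
        start_service(name)  # attempt a start, then re-check on next pass
    if check_service(name):
        return True
    send_alert(f"Service {name} is still down after {attempts} start attempts")
    return False
```

For example, a service that only comes up on the second start attempt would return True with no alert, while a service that never starts would trigger exactly one alert email.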
Hello. We have just gone into production with Commvault and are starting to look in more detail at the various reports available. I’ve created a couple, and so has a colleague, but we are wondering how we can share these and/or simply make them available to the team. I cannot seem to export my reports (to XML) in the Java console, and there also does not appear to be a corresponding “My Reports” in the web console. Thanks in advance.
Hi all, a customer has recently brought to our attention that their WebConsole is using an outdated version of jQuery. They are currently running 11.20.85, and the jQuery version in use is 2.1.3, which has known vulnerabilities. I am aware of another topic open for a similar, if not the same, issue: https://community.commvault.com/topic/show?tid=988&fid=2 However, that thread does not look to have a promising resolution. Is there a way to mitigate these vulnerabilities or force the WebConsole to use a more up-to-date version of jQuery? Kind regards, Jonathan
Hi, let me explain my case. We have configured a snap policy, with a subsequent backup copy, for an NFS share protected through NDMP. When we take the snap, its size is 0 bytes even though the NFS content has changed, so the backup copy is not made. On some occasions the same task has finished correctly but without data; at other times this error appears: Subclient [default] is protected by snapshots on storage array [NFSUNITY] - backup job client [win-backup] snap engine [Dell EMC Unity Snap]. And this is the summary: Backup job completed. Client [192.168.24.47], Agent Type [NDMP], Subclient [default], Backup Level [Full], Objects [Not Applicable], Failed, Duration [00:00:23], Total Size [Not Applicable], Media or Mount Path Used. Thank you very much.
I’m interested in getting this workflow from the Software Store, but both in the CommServe’s browser (MS Edge) and on my workstation (Firefox), when I click the install button while signed in with my Commvault account - via this link here: https://commserve.mycompany.edu/webconsole/softwarestore/#!/136/671/14567 from my CommServe - I see these messages in the lower right corner: Installing "Enable Subclient Index", followed seconds later by: Something went wrong installing Enable Subclient Index. What logs can I examine to figure out why this is happening? Following the prereqs here, I definitely know I’m logged in as part of the master group, and obviously the CommServe has the Web Console installed. I also used this successfully once, a couple of years ago, to install a workflow, so I know it has worked in the past.
What this does is run discovery on a hypervisor subclient and create a VM pseudo-client for any discovered virtual machines. The functionality is outlined here: https://documentation.commvault.com/11.26/expert/104558_creating_vm_clients_before_performing_backup_operations.html In practice it looks like this. Thanks! Chris
Does anyone have a workflow for the steps below? We would run the workflow as the first step, and it would pop up a box asking for the name of the client we want to decommission, where we input the client name. Then we want the workflow to remove the client from all the groups it is part of (with the exception of automated client groups). Next, we want the workflow to turn off backup activity for the client. Finally, we want the workflow to add the client to the decommissioned group.
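The decision logic of those steps can be sketched in plain Python (outside Commvault's workflow engine). The group records are modeled as simple dicts, and the idea of marking automated (smart) groups with a flag, plus the group name "Decommissioned", are assumptions for illustration; in a real workflow each action would be an activity or API call:

```python
# Sketch of the decommission steps: remove the client from all non-automated
# groups it belongs to, disable backup activity, add it to a decommissioned
# group. Returns the ordered list of actions the workflow would perform.
def plan_decommission(client: str, groups: list[dict]) -> list[str]:
    actions = []
    for g in groups:
        # Skip automatic (smart) client groups - their membership is
        # rule-driven, so the client should not be removed manually.
        if client in g["members"] and not g["automated"]:
            actions.append(f"remove {client} from group {g['name']}")
    actions.append(f"disable backup activity on {client}")
    actions.append(f"add {client} to group Decommissioned")
    return actions
```

For example, a client in one manual group and one automated group yields three actions: one group removal, the activity-control change, and the move into the decommissioned group.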
Hello team, we have two different Commvault services on our two different CommServe servers. These services did not start even after we rebooted the servers, even though their startup type is automatic. We want to know what these services do and whether they are safe to start manually. This one exists on the active node: This one is on the passive node:
This is a new installation. We are installing a new CommServe server on the Linux platform, and also planning to install the Web Server, CommCell Console, and Command Center on a dedicated Windows server. However, when installing the CS on Linux, the Web Server is installed by default on the Linux CS machine, so we need to move the Web Server role to the Windows server. What steps need to be taken so that the Web Server installed on the Windows machine acts as the default one?
We are trying to back up PostgreSQL files, but after the scan phase of the backup job some temp files were deleted by the PostgreSQL system. Commvault then couldn't find the scanned files, and we got an error. We tried the sPGDirIgnoreList additional setting (link below), but it didn't fix the issue: https://documentation.commvault.com/11.24/expert/21723_backup_troubleshooting.html Can you help me with this issue?
I’m trying to archive an Exchange mailbox, but the job is “completed with errors”. I can browse and recover; the messages still exist, as they have been backed up but not archived. Check Readiness shows that I should add at least one system account (I have already added 3 service accounts). The Archive and Archive Index phases complete, but the Finalize phase fails. Check the attached file for the job logs.
Hello, we are trying to perform Oracle backups from a script on the client. We have tested running the script from RMAN and the backup works perfectly. But when we execute it from the CommCell via the pre-backup option, it always fails with the following errors: Failed with Oracle DB/RMAN error [RMAN-00558: error encountered while parsing input commands RMAN-01006: error signaled during parse RMAN-02001: unrecognized punctuation symbol "/" RMAN-00558: error encountered while parsing input commands RMAN-01009: syntax error: found "}": expecting one of: "advise, allocate, alter, analyze, associate statistics, audit, backup, begin, @, call, catalog, change, comment, commit, configure, connect, convert, copy, create, create catalog, create global, create script, create virtual, crosscheck, declare, delete, delete from, describe, describe catalog, disassociate statistics, drop, drop catalog, drop database, duplicate, exit, explain plan, flashback, flashback table, grant, grant catalog, gran
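RMAN-02001 on "/" typically means RMAN was handed text it cannot parse as RMAN commands - for example, the `rman target /` invocation line itself or a shell shebang from the calling script. Since the pre-backup option executes the file as an OS script, a common pattern is to keep the OS wrapper and the RMAN commands in separate files. All paths, the SID, and file names below are hypothetical, a sketch rather than a verified fix:

```
#!/bin/sh
# wrapper.sh - the file the pre-backup option points at (runs as an OS script)
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
$ORACLE_HOME/bin/rman target / cmdfile=/scripts/backup.rman log=/tmp/rman_pre.log

# /scripts/backup.rman would then contain only RMAN commands, e.g.:
#   run {
#     backup database plus archivelog;
#   }
```

If the script works when pasted into an interactive RMAN session but fails from the CommCell, checking which of the two files the pre-backup option actually points at is a reasonable first step.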
This is probably a really basic question but I just need validation that this is working the way I suspect it is. I have some huge Exchange mailboxes, some of them containing over 3 million messages. I also set limits on how long my mailbox archive jobs can run so they are not active during the day and adding load onto our Exchange system. My question is, if a mailbox archive job is stopped before finishing, does the next job treat the messages that were done as completed? For example, a mailbox has 100,000 items in it and the archive Monday night completes 50,000 of them before being stopped. Does the next job try to archive all 100,000 messages again, or just the 50,000 that were not finished?
Hi team, is there a Commvault diagram library that we can import and use in Gliffy for Confluence, to create backup estate architecture diagrams easily? I have a very old infrastructure architecture diagram that was created years ago, and I now need to update it with the Commvault devices and their connectivity. Since I use Gliffy for Confluence, a predefined Commvault image library to import would greatly reduce the time spent recreating the diagrams.
Hello, I’m getting some issues with some VMware backups: Error with Virtual Machine change tracking. It may be necessary to power cycle the vm, or contact VMware support regarding the QueryChangedDiskAreas API. Has anyone faced this issue before and have an idea how to solve it?
Morning, we’ve recently updated from 11.24.34 to 11.24.52 and are receiving “Unable to start SnapDiff V2 session” events. I see in the patch notes for 11.24.49 that SnapDiff will now attempt v2 before falling back to v1. Our infrastructure is set up for SnapDiff v1; is there a global setting I can modify to stop the CommServe from attempting SnapDiff v2? I was unable to find anything with a search of the documentation. Thanks.
Hello CC, I have an issue with a single-node SQL Server backup. I’m getting the following error: Error Code: [30:403] Description: Failed to register job with Job Manager. Please check the job results directory as it is not accessible. Source: lom1door01, Process: SQLiDA. I have checked the job results location and its permissions, and I opened it with full access to All Users, including our Commvault admin Galaxy account, and I still get this error. No results come up when I click on the error code. Does anyone have any ideas as to what's going on? I'm stuck, and any help would be highly appreciated.
Hello community members, I am doing a 1-Touch bare-metal restore test of a Linux machine with UEFI. The whole procedure from the documentation was followed, and all goes well until I have to select the backup set for the restore. All the available backups are in the list, but every backup I select gives the message “job could not be selected”. These jobs all completed successfully, so they should be OK. Now I want to look in the log files to see if I can find a clue as to why this is happening. So I have two questions: Has anyone run into the same issue? Can someone tell me the username/password for the 1-Touch ISOs so I can log in to check the log files? Thanks in advance, Rob Sonneveld
Is it possible to change an Azure Cloud Archive library into an Azure Cloud Combined Storage (Archive/Cool) library?
Hello Commvault community! I have a question on behalf of one of our clients. We created a cloud library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over 2 months to transfer this amount of data, and when it completed we realized that there is a problem with the Cloud Recall workflow. When we try to “Browse and Restore” from the Azure Archive copy precedence, it tries to reach an index from this archive cloud: it runs an “Index Restore” job, and because the index data sits on archive storage it cannot be read, so it runs an Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds, and we see an error in the Browse and Restore window: “The index cannot be accessed. Try again later. If the issue persists, contact support.”. We decided that restoring an index from archive cloud isn’t a good idea, because even if it worked it would take too much time (a few hours just to list backup content (index res
I have a customer running Azure SQL backups. The backup process triggers the export of the database to a bacpac file in a dedicated storage account and then backs this up to a cloud library. I was under the impression that the data in the export location would only be transient as part of the backup process; however, looking at the storage account, there are bacpac files going back for as long as we have had backups running for these Azure SQL accounts. Is this the expected behaviour, or should the backup process be cleaning up the staging location?
We have a global secondary tape policy that we use for sending aux copies to tape. Currently it is set to use only 1 stream, to keep the number of tapes being written to a minimum. I would like to switch the full backups to 2 device streams, though. I assume there is no way to do this with the schedules? Do I need to make a new storage policy that is associated with the full schedules? Just checking that I’m going about this the right way, thanks! Nico
Hello everyone, I’m facing a weird issue with all the jobs related to a specific MediaAgent. I have a MediaAgent for SQL backups, and log backups run hourly. I recently upgraded the CommCell to 11.25.11. Since then, all log backups start in a waiting status with no errors; they stay in the wait state for hours, and all log backups fail to start because of this behavior. I don’t understand why this is happening, or whether the upgrade may be related. Any tips?