<?xml version="1.0"?>
<rss version="2.0">
    
                    <channel>
        <title>Join the conversation</title>
        <link>https://community.commvault.com</link>
        <description>On the Forum you can ask questions or take part in discussions.</description>
                <item>
            <title>Commvault VSA proxy (server name) is failing to initialize because it cannot locate its own AWS instance ID in any AWS region</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-vsa-proxy-server-name-is-failing-to-initialize-because-it-cannot-locate-its-own-aws-instance-id-in-any-aws-region-11797</link>
            <description>We have 2 AWS accounts where EC2 backups are saved through Commvault, but for the last 2 weeks we have been getting multiple failure alerts. I have checked the logs:

vsbkp.log
========================
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - Proxy is Amazon Instance. BIOS Manufacturer : [Xen] and Version : [4.11.amazon]
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - Instance Id : i-0726b3f933369834c
1528  179c  04/13 04:16:55 45270647 AmazonCompute::GetOutpostAccountARNForInstanceId() - Exception - The instance ID &#039;i-…...&#039; does not exist
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - BIOS UUID : EC25B4D9-F00D-3912-C757-1F22221F6CC2
1528  179c  04/13 04:16:56 45270647 AmazonCompute::_GetVMInfoForVM() - Non Fatal Exception The instance ID &#039;i-…….&#039; does not exist in region ap-south-2
1528  2620  04/13 04:17:23 45270647 VSBkpCoordinator::OnIdle_Starting() - Waiting for [1] agents to initialize. [server name]
1528  22fc  04/13 04:17:31 45270647 VSBkpController::MonitorProgress() - [0] VMs are being processed
1528  2620  04/13 04:19:13 45270647 VSBkpCoordinator::OnIdle_Starting() - Timeout waiting for agent [server name] to initialize.
1528  2620  04/13 04:19:13 45270647 VSBkpCoordinator::OnIdle_Starting() - No Agents are running, stopping</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 17 Apr 2026 12:04:43 +0200</pubDate>
        </item>
                <item>
            <title>Meaning of &quot;Snapshot options&quot; in Server Plan for M365 backups</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/meaning-of-snapshot-options-in-server-plan-for-m365-backups-11787</link>
            <description>Hi all, I’m a bit confused about the &quot;Snapshot options&quot; section in the Server Plan settings, specifically when the plan is used for Microsoft 365 backups (Exchange, SharePoint, etc.). Since M365 backup is about mailboxes and sites, not VMs or LUNs, what is the actual purpose of this section? Does &quot;Enable backup copy&quot; or the snapshot schedule have any real effect on how M365 data is processed or moved to storage? Is this just a generic UI section shown for all Server Plans, or should I be aware of some specific logic here for M365 apps? Thanks for the clarification!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 17 Apr 2026 10:32:17 +0200</pubDate>
        </item>
                <item>
            <title>Web Console Deprecation: What Customers Need to Know</title>
            <link>https://community.commvault.com/platform-updates-47/web-console-deprecation-what-customers-need-to-know-11679</link>
            <description>Web Console Deprecation – What Customers Need to Know

Commvault is continuing its move toward a single, modern management experience through the Command Center. As part of this transition, the legacy Web Console has been officially deprecated and is no longer included by default in newer Commvault platform releases. This article explains what the deprecation means for you, when the Web Console may still be required, and how to request access if your environment depends on it.

Why is the Web Console being deprecated?
The Web Console is being retired as Commvault standardizes on the Command Center as its primary management and reporting interface. The Command Center provides equivalent or enhanced capabilities, improved security alignment, and is the strategic platform for all future feature development.

Deprecation timeline
• Version 11.38: Web Console interface marked as deprecated
• Version 11.40: Web Console removed from the default installation media
• Version 11.42 and later: Web Console not available on new or upgraded installations

When is the Web Console still needed?
Although deprecated, Commvault recognizes that some customer environments still rely on Web Console functionality in limited scenarios, such as:
• Editing or maintaining reports using Report Builder
In these cases, access to the Web Console package is still supported through a controlled request process.

How to request access to the Web Console package
Commvault provides an automated workflow to request access to the Web Console package when there is a valid business or technical requirement. To submit a request:
1. Sign in to the Commvault Cloud Services Portal
2. Navigate to Workflows
3. Select the workflow titled “Request WebConsole Package Access”
4. Enter your CommCell ID and contact email address
5. Provide a brief explanation of why the Web Console is required
6. Submit the request

What happens next?
After submission, you will receive a confirmation notification. If additional information is required, a Commvault representative will contact you. Once approved, you will receive a secure download link for the Web Console package.

Additional information
For detailed eligibility criteria and workflow information, refer to the Commvault Knowledge Base article: How to obtain the Web Console Package (KB89689)

Moving forward
While access to the Web Console remains available for specific scenarios, customers are strongly encouraged to adopt the Command Center as their primary interface to take advantage of ongoing enhancements, security improvements, and future innovations.</description>
            <category>Platform Updates</category>
            <pubDate>Fri, 17 Apr 2026 09:30:34 +0200</pubDate>
        </item>
                <item>
            <title>Backups of VMs from one subclient may be subject to longer retention periods across different weekends</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/backups-of-vms-from-one-subclient-may-be-subject-to-longer-retention-periods-across-different-weekends-11791</link>
            <description>Hi, After migrating subclients from Index v1 to Index v2, VM backups within application clusters are losing time consistency on Selective Copies (e.g., First Full of the Month). The issue arises because Index v2 splits the subclient backup into individual child jobs for each VM; when these jobs finish after midnight, the system assigns them to different retention periods. Due to a fixed backup window, we cannot delay the start times, and manually selecting jobs for tape marks is not scalable. How can we configure the Selective Copy to treat all child jobs of a given subclient as a single time unit, ensuring a consistent point-in-time recovery for the entire application cluster? Kind regards,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 17 Apr 2026 05:54:02 +0200</pubDate>
        </item>
                <item>
            <title>Full vm In-place restore not restoring original vm UUID, categories, or vm description</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/full-vm-in-place-restore-not-restoring-original-vm-uuid-categories-or-vm-description-11793</link>
            <description>Hi all. When I tried to do a full VM restore of a Nutanix VM with inPlaceRestore and overwriteVM set to True (tried from both the Command Center UI and the API), the expected behavior in my mind was that the VM’s metadata (description, categories, and UUID) would be kept after the restore, just like how a VMware full VM restore has been working for us. In our testing, the VM is deleted and then re-created, causing the VM UUID to change; at the same time, all the other VM metadata fields like description and categories are not restored. The Guest Tools connection is broken because of the new UUID as well. Is this expected behavior? Could this have been caused by a misconfiguration on the VM Group/Hypervisor side? Commvault 11.40, Nutanix AHV 7.5. Thanks in advance!</description>
            <category>Virtualization and Containers</category>
            <pubDate>Thu, 16 Apr 2026 23:51:48 +0200</pubDate>
        </item>
                <item>
            <title>O365 Plan Retention: Infinite vs. Retain Deleted Items logic</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/o365-plan-retention-infinite-vs-retain-deleted-items-logic-11777</link>
            <description>Hi all, As I understand it, there are basically two possibilities regarding Office 365 plan retention: Infinite, and Retain deleted items for X period of time.
Infinite: All items (e.g., emails, sites, folders) remain in the backups forever.
Retain deleted items for X period: Only data that has been deleted from the source (M365) is removed from the backups after the given period. All other &quot;live&quot; (non-deleted) data remains in the backup forever, just like with the infinite option.
Am I correct in thinking that once Office 365 data is backed up, it stays in the storage libraries forever, with the only exception being deleted data (when that option is selected)? Even though Server Plans have their own retention settings, do they affect Office 365 data? It seems to me that M365 data follows an &quot;incremental forever&quot; logic and there is no way for retention to delete &quot;live&quot; data from the storage. Please confirm my understanding or correct me if I am mistaken. Thanks in advance for your contributions!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 16 Apr 2026 15:59:05 +0200</pubDate>
        </item>
                <item>
            <title>Exchange Online No valid Azure app found</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/exchange-online-no-valid-azure-app-found-11776</link>
            <description>Hi everyone, For about a week now, we have had 2 failed mailboxes in the Exchange Online backups with the failure reason &quot;No valid Azure app found. Please run Check Readiness&quot;. In the same backups we have 346 mailboxes that show no issues. Only these 2 mailboxes have issues, and I cannot find what is causing this. The mailboxes still exist and are no different from other shared mailboxes. The logs show no different errors for these mailboxes, so I am a bit at a loss. I also see the same issue for multiple Exchange applications. Does anyone have an idea how to resolve this? Kind regards, Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 16 Apr 2026 13:40:03 +0200</pubDate>
        </item>
                <item>
            <title>Add Certificate on Helm deployed commvault</title>
            <link>https://community.commvault.com/getting-started-51/add-certificate-on-helm-deployed-commvault-11794</link>
            <description>I have deployed Commvault components using this procedure, but I’m not sure how to make the SSL root certificate of my S3-compatible storage available on the deployed pods. Can you tell me if there is a procedure to add the certificate to the MediaAgent pods? Thanks</description>
            <category>Getting Started</category>
            <pubDate>Thu, 16 Apr 2026 04:45:14 +0200</pubDate>
        </item>
                <item>
            <title>Backup failing VM with Error Code: [91:300] Description: Error creating virtual machine [VMNAME] snapshot</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/backup-failing-vm-with-error-code-91-300-description-error-creating-virtual-machine-vmname-snapshot-11790</link>
            <description>Hi Team, We are backing up a VM (hosted on MS Hyper-V) whose data disk is located on an SMB NAS file share. We are getting the error below.

Error Code: [91:300] Description: Error creating virtual machine [VMNAME] snapshot. Please check the virtual machine snapshot tree and the hosting volume for free space. [Checkpoint operation for &#039;VMNAME&#039; failed. (Virtual machine ID 83A8CX27-…….A4) Checkpoint operation for &#039;VMNAME&#039; was cancelled. (Virtual machine ID 83A8CX27-…….A4) &#039;VMNAME&#039; could not initiate a checkpoint operation: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4) &#039;VMNAME&#039; could not create auto virtual hard disk \\IP\Host\VM_Disk_3714DCEE-3E97-4364-A678-897EF9FE93F0.avhdx: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4)]. Source: commserve, Process: JobManager

I have checked the Hyper-V host Event Viewer; it is throwing error “19100”: &#039;VMNAME&#039; background disk merge failed to complete: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4).

So, my plan is to take downtime and merge the .avhdx disk into the .vhdx. Could you please check and confirm whether there is any alternate way, or whether I can go ahead with my plan of action. Thank you.</description>
            <category>Virtualization and Containers</category>
            <pubDate>Thu, 16 Apr 2026 02:53:34 +0200</pubDate>
        </item>
                <item>
            <title>New installation Commserver on Linux</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/new-installation-commserver-on-linux-11744</link>
            <description>I would like to deploy a new CommServe server on Linux using SP 11.40. My current CommServe is running on Windows with SP 11.36. I plan to set up the new CommServe as a standby node in advance. Could you advise on the best practices for this setup?

The operating system I am using is Rocky Linux. I have provisioned an additional disk for the Commvault installation and software cache, which is mounted at /opt/CommvaultData. What are the initial steps I should follow to ensure a secure and reliable installation? For the installation of SP 11.40, I would like to first deploy two new passive nodes on Linux, and only then perform the failover to Linux.

I also have a question regarding the software cache. With the new CommServe on Linux, I have configured a new software cache for Windows and enabled it as a remote software cache. How can I ensure that, during update installations, clients automatically use the correct remote cache? Specifically, I want Windows clients to use the Windows software cache and Linux clients to use the Linux software cache. I have already created two client groups, one for Linux clients and one for Windows clients.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 14:09:54 +0200</pubDate>
        </item>
                <item>
            <title>Commvault 11.40.47 – Backup of Nutanix AHV VMs Fails (Unable to Mount NFS Share)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-11-40-47-backup-of-nutanix-ahv-vms-fails-unable-to-mount-nfs-share-11789</link>
            <description>Hello everyone, I’m facing an issue while backing up production virtual machines hosted on Nutanix AHV using Commvault version 11.40.47, and I would appreciate your guidance.

Environment
• Commvault version: 11.40.47
• Hypervisor: Nutanix AHV
• Prism Element IP: 10.10.100.120
• Commvault access node / MediaAgent: hostname srv-CV-backup, IP 10.10.100.24

Issue description
When starting a backup job:
• Commvault successfully discovers and lists all the VMs
• The backup job starts but fails before backing up any disks
• The issue occurs for all VMs

Error message
Error opening the disks for virtual machine [Win-SRV1]. Access node [srv-CV-backup] is unable to mount NFS share [10.10.100.120:/PROD]. Source: srv-backup, Process: JobManager

Notes
• The latest Commvault version available to us is installed (11.40.47)
• VM discovery works correctly
• The failure seems related to mounting the NFS datastore from Nutanix

Has anyone encountered a similar issue with Commvault and Nutanix AHV? Are there any specific NFS permissions, firewall rules, or configuration settings that should be checked on either the Nutanix or Commvault side?

PS: The last successful backup was before upgrading to AOS version 7.5. This AOS version is listed as compatible with Commvault in the documentation.

Thank you in advance for your support. Best regards,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 13:22:52 +0200</pubDate>
        </item>
                <item>
            <title>Restore -&gt; Insufficient space for caching</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/restore-insufficient-space-for-caching-11792</link>
            <description>Hi, I am trying to restore guest files and folders from a VM and I get this message. If I try to restore from a more recent date, it works fine. What can I do? Thanks, Dennis</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 13:10:09 +0200</pubDate>
        </item>
                <item>
            <title>Manual Upgrade of Visual Studio Tools for Applications</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/manual-upgrade-of-visual-studio-tools-for-applications-11785</link>
            <description>Hi. Is there a supported way to manually upgrade Visual Studio Tools for Applications? I do not see any mention of it. The current release is 16.0.31110. Our customer advised that CVE-2025-29803 was detected in the current release; an upgrade should be done to version 16.0.35907. The current Commvault release is 11.40.42, and I also don’t see any mention in the Commvault maintenance release notes for this version. Any advice? Thanks.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 06:15:10 +0200</pubDate>
        </item>
                <item>
            <title>Pre-register for SHIFT 2026 in Nashville, Nov. 10-12</title>
            <link>https://community.commvault.com/news-events-7/pre-register-for-shift-2026-in-nashville-nov-10-12-11784</link>
            <description>We&#039;re thrilled to announce that SHIFT 2026 will be coming to Nashville November 10-12, and the agenda has expanded to encompass three full days with distinct tracks for executives and practitioners. For our community of practitioners and admins, we’ll host hands-on labs, full product certifications, as well as deep-dive sessions on the latest Commvault technology, data protection, cyber resilience, and optimizing how you work in the ever-evolving world of AI. Pre-register here to save your spot.</description>
            <category>News &amp; Events</category>
            <pubDate>Tue, 14 Apr 2026 21:09:13 +0200</pubDate>
        </item>
                <item>
            <title>Gmail Archive/Backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/gmail-archive-backup-11788</link>
            <description>Hello, Can Commvault support Gmail Backup or Archive? </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 14 Apr 2026 13:05:02 +0200</pubDate>
        </item>
                <item>
            <title>Auto Discover DB instance options</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/auto-discover-db-instance-options-6151</link>
            <description>This is a quick one. When applying the key below to disable auto discovery of instances, is there a way to limit it to only one client instead of at the global level? https://documentation.commvault.com/additionalsetting/details?name=nDisableAutoDiscoverInstancePostInstall In my case, I have MySQL DBAs who want Auto Discover on and Oracle DBAs who want it off. They don’t want to manually turn it off every time an instance is added.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 14 Apr 2026 11:22:20 +0200</pubDate>
        </item>
                <item>
            <title>Commvault MS SQL plug-in user permission / role</title>
            <link>https://community.commvault.com/share-best-practices-3/commvault-ms-sql-plug-in-user-permission-role-11786</link>
            <description>Hi everyone, I&#039;m working on a custom set of permissions for our MSSQL admins to allow them to perform on-demand backups. Despite granting what I believe to be more than sufficient privileges, they are still encountering errors. The process works perfectly when the user has master rights. Current permissions: Error with those: Has anyone successfully configured a least-privilege role for this scenario and could share it?</description>
            <category>Share Best Practices</category>
            <pubDate>Tue, 14 Apr 2026 09:59:27 +0200</pubDate>
        </item>
                <item>
            <title>Recovery Point not visible after Synthetic Full Backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/recovery-point-not-visible-after-synthetic-full-backup-11781</link>
            <description>Hello, After running a Synthetic Full Backup in our Commvault environment, the job completes successfully. However, we are unable to see any new Recovery Point in the CommCell Console / Command Center after the backup. Version: 11.40.38, Nis = April</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 13 Apr 2026 22:13:30 +0200</pubDate>
        </item>
                <item>
            <title>Verify Connection status not updating for M365 Exchange app despite successful mailbox browsing</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/verify-connection-status-not-updating-for-m365-exchange-app-despite-successful-mailbox-browsing-11758</link>
            <description>Hi everyone, We are facing a confusing issue with the M365 Exchange Online agent in Commvault version 11.32. According to the documentation, under the Exchange connection settings: https://documentation.commvault.com/2023e/software/complete_microsoft_365_service_catalog_for_exchange_online_using_custom_configuration_option.html?_gl=1*hjd47t*_gcl_au*NjQ0NDk4ODY5LjE3Njk1OTU0OTI. I should click &quot;Verify Connection&quot; to validate the Azure app registration and update the app status. However, when I click this button, the status doesn&#039;t update to &quot;Successful&quot;; it remains blank and shows no confirmation of a successful binding. The strange part is that even though the verification seems to fail or does not show any status, I am able to browse mailbox items and users within the Office 365 app in the Commvault Command Center. Have you experienced this behavior before? Is it a known bug, or what is your take on this issue? Any input would be greatly appreciated!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 13 Apr 2026 15:37:34 +0200</pubDate>
        </item>
                <item>
            <title>Improved Navigation in the Commvault Documentation Portal</title>
            <link>https://community.commvault.com/platform-updates-47/improved-navigation-in-the-commvault-documentation-portal-11778</link>
            <description>We’re pleased to share an important update to the Commvault documentation portal for Innovation release 11.42: a redesigned left-hand navigation panel that is now aligned with the structure of the Commvault Command Center interface.

What’s Changing
In the past, the organization of our documentation differed from the navigation experience within the product. This sometimes made it harder to move between the Command Center and supporting documentation. With this update, the documentation structure now mirrors the layout and logic of the Command Center. This creates a more intuitive and consistent experience when navigating between the product and its documentation.

What This Means for You
• Easier navigation between the Command Center and documentation
• Faster access to relevant information
• A more streamlined and consistent user experience
This change is designed to help you find answers more quickly and reduce friction when using documentation alongside the product.

We Value Your Feedback
As you explore the updated navigation, we encourage you to share your feedback so we can continue improving your experience.
1. Click the &quot;Give Feedback&quot; link within the documentation portal
2. Select &quot;I have feedback about the documentation site&quot; under Feedback Type
3. Share your thoughts on the new navigation — what works well and what could be improved
Your input directly helps us refine and enhance the documentation experience.</description>
            <category>Platform Updates</category>
            <pubDate>Sun, 12 Apr 2026 22:19:48 +0200</pubDate>
        </item>
                <item>
            <title>Log commit jobs after disabling log caching</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/log-commit-jobs-after-disabling-log-caching-11764</link>
            <description>Hi, After updating our environment to 11.40.42, I saw repeated log commit jobs failing with the following error.

Sweep job has failed on Media Agent [xxxxxxxx]. Error [0xECCC000D:{I2::FileSweepScanner::Scan(684)} + {I2::SweepScanner::GenerateCollectFileForMountPath(269)} + {CQiFile::Open(95)} + {CQiUTFOSAPI::open(77)/ErrNo.13.(Permission denied)-Open failed, File=\\?\UNC\XX.XX.XX.XX\MountPoints\XXXXXXXX\Disk08\63WA3P_12.04.2024_11.32\CV_MAGNETIC\CV_APP_DUMPS\SDT\00DF80E8-D0C5-47B8-939E-E5BBC4908E99\collecttot2.cvf, OperationFlag=0x810A, PermissionMode=0x80}]

I found that log caching was turned on in the plan that is used for the client. As we do not want to use log caching, I disabled the option on the t-log schedule in the plan. I now see that normal transaction log backups are running for the client instead of the log commit jobs. However, I still get a log commit job every 2 hours which keeps failing with this error. How do I get rid of the log commit job? A full backup has been made since I disabled the log caching option. I don&#039;t mind losing the logs from the few days this was still enabled and failing. Thanks in advance. Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 10 Apr 2026 15:25:11 +0200</pubDate>
        </item>
                <item>
            <title>Problem with filtering by time in views in Command Center</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/problem-with-filtering-by-time-in-views-in-command-center-11713</link>
            <description>Hi all, I have a problem with the filters in the views under Jobs - Job history when I want to filter by Start or End.
When I use the condition &quot;Relative&quot; (1 day), the list of jobs is displayed.
When I use the condition &quot;Between&quot; (yesterday-today), the error &quot;Something went wrong&quot; appears.
When I use the condition &quot;Older than&quot; (anything), &quot;No results found&quot; appears.
There is a similar problem when using a time condition in the view under Protect - Virtualization - Virtual machines. It behaves the same in 11.40 and 11.42. Am I doing something wrong? Or is this a known issue? Regards, Lubos</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 10 Apr 2026 15:12:02 +0200</pubDate>
        </item>
                <item>
            <title>Commvault version 11.36 cannot back up virtual machines in the PVE 9.1 + Ceph 19.2 cluster.</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/commvault-version-11-36-cannot-back-up-virtual-machines-in-the-pve-9-1-ceph-19-2-cluster-11775</link>
            <description>We are running a relatively new version of Ceph. In this newer Ceph release, the RBD commands used to create VM snapshots require specifying both the namespace and the keyring file to locate the VM disks. By contrast, Commvault locates VM disks using only the pool in its RBD command parameters. Commvault version 11.36 only supports older versions of Ceph.  </description>
            <category>Virtualization and Containers</category>
            <pubDate>Fri, 10 Apr 2026 06:54:54 +0200</pubDate>
        </item>
                <item>
            <title>Move index directory on Index Server node</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/move-index-directory-on-index-server-node-11733</link>
            <description>Hi, I need to move the index directory on the Index Server node. The documentation and Arlie point to the workflow IndexServerDirectoryMove, but this does not exist in the current environment and it is not available on the Store. It is also not possible to manually change the directory location, as the path in the edit node screen is greyed out. I hope someone can help me with this issue.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 09 Apr 2026 15:31:18 +0200</pubDate>
        </item>
                <item>
            <title>Incorrectly created Server Groups for CommServe LiveSync configuration</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/incorrectly-created-server-groups-for-commserve-livesync-configuration-11765</link>
            <description>Hello Commvault Community, While troubleshooting another topic, the Failover configuration was disabled, and when I wanted to enable it again, I noticed something disturbing. I would like to ask whether this has changed in recent versions or whether it is a bug that needs to be fixed. For clarity, I&#039;ve checked this from both consoles (CommCell Console and Command Center), and both have the same problem.

When we changed the CS Failover setting from &quot;Existing Configuration&quot; to &quot;Use Network Gateway,&quot; changes were made to the Client Computer Groups and Network Topologies created by the CS Failover configuration. What concerned us was that nothing was displayed for the &quot;External Clients for Failover&quot; group and the &quot;Proxy Clients for Failover&quot; group. We repeatedly refreshed, restarted the console, and tried the &quot;Push Network Configuration&quot; option, but nothing helped. Then we noticed that the Server Group/Client Computer Group association settings were set by default to search for clients/servers within the scope &quot;Clients of user,&quot; and the user itself was set to &quot;undefined.&quot;

We found a workaround, but it shouldn&#039;t behave like this by default. The group creation process for failover configurations doesn&#039;t work correctly in versions SP40 and SP42. We confirmed this in two different environments, and the behavior was the same: the association value is incorrectly set to &quot;Clients of user&quot; instead of &quot;Clients of this CommCell.&quot; If this has changed in recent versions, please let us know; we couldn&#039;t find any information in the documentation. We need to know whether this is the intended behavior of the system or a bug that needs to be fixed. Thanks in advance! Kind Regards, Kamil</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 09 Apr 2026 08:28:49 +0200</pubDate>
        </item>
                <item>
            <title>Oracle database name longer than 8 letters</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-database-name-longer-than-8-letters-8796</link>
            <description>Hello,

we have a problem configuring a RAC instance whose database name is longer than 8 letters. When we try to configure it, we see in the logs that the database name is truncated to 8 letters. The database name is AKJFP14LOG.

size=[0]
28841 70a9 05/03 13:05:24 ### RacBrowser::LocalInstanceBrowse() - Oracle SbtLibrary:[/opt/commvault/Base/libobk.so]
28841 70a9 05/03 13:05:24 ### RacBrowser::FillInstanceStatus(1034) - m_oraInfoReturn  size=[0]
28841 70a9 05/03 13:05:24 ### RacBrowser::ValidateDBProps() - Received DBStatus=READ WRITE

Is it a problem with /opt/commvault/Base/libobk.so?

Thank you.
Thomas</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 09 Apr 2026 07:33:55 +0200</pubDate>
        </item>
                <item>
            <title>CV certification</title>
            <link>https://community.commvault.com/readiverse-academy-60/cv-certification-11768</link>
            <description>I don&#039;t have a work email ID, as I lost my job, and I want to take the CV certification with my personal email. How do I register for the exam?</description>
            <category>Readiverse Academy</category>
            <pubDate>Thu, 09 Apr 2026 01:23:29 +0200</pubDate>
        </item>
                <item>
            <title>🚀 (Updated) Proactive Advisory: Transition from Exchange Web Services (EWS) to Microsoft Graph API</title>
            <link>https://community.commvault.com/platform-updates-47/updated-proactive-advisory-transition-from-exchange-web-services-ews-to-microsoft-graph-api-11349</link>
            <description>Updated advisory as of April 10, 2026. We will continue to provide new updates as we see further developments and updates from Microsoft around their transition. - Commvault Support &amp; Community team

At Commvault, we are committed to ensuring your data remains resilient and your transitions remain seamless. We are closely monitoring Microsoft&#039;s retirement of Exchange Web Services (EWS) in Exchange Online and are already engineering the path forward for your backup environment.

What is Changing?
Microsoft is transitioning from the 20-year-old EWS architecture to the modern, REST-based Microsoft Graph API. This shift enhances security via OAuth-based authentication and provides a more unified architecture across Microsoft 365.

The Timeline
Per Microsoft, this is the planned transition timeline for EWS:
- June 30, 2026: EWS access blocked for F1, F3, and Kiosk licenses.
- October 1, 2026: Phased disablement of EWS begins (tenant-controlled).
- April 1, 2027: EWS permanently retired for all Exchange Online tenants.
IMPORTANT: This change applies only to Exchange Online. On-premises Exchange Server environments are not impacted by this deprecation.

Commvault Milestone Details
We understand that your backup continuity is non-negotiable. Commvault is developing a robust, Graph API-based Exchange backup solution designed to bridge existing API gaps while maintaining the high standards of recovery you expect.
- Planned Availability: Summer 2026
- Target Release: 11.44 LTS
- Deployment models supported for both SaaS (Metallic) and Software
Our timeline ensures your environment is &quot;Graph-ready&quot; well before Microsoft begins the phased disablement in October 2026.

Dependency on the Microsoft Ecosystem
As a Microsoft partner, Commvault&#039;s development timeline is inextricably linked to Microsoft&#039;s API release schedule. While Graph is the future, Microsoft has acknowledged that it does not yet have full feature parity with EWS.

What this means for you:
- Shared Timelines: Our delivery of 11.44 LTS depends on Microsoft releasing stable, production-ready versions of the necessary Graph endpoints.
- Gap Closure: We are working directly with Microsoft to ensure they address &quot;feature gaps&quot; (such as Archive Mailboxes, Public Folders, and advanced metadata) so that your backup fidelity is not compromised.
- Proactive Adaptation: If Microsoft shifts their roadmap or delays critical API features, Commvault will adjust our engineering efforts to keep your data protected using the most secure methods available.

Recommended Actions for Administrators
While the transition is still a ways off, there are two key areas to keep on your radar:
- Licensing Audit: Users on F1, F3, or Kiosk licenses will lose EWS access by June 2026. If these users require backup via EWS until our Summer 2026 release, consider temporary license upgrades (Plan 1/2 or E3/E5).
- Permissions Prep: Moving to Graph will require reauthorizing Azure AD applications and granting new OAuth scopes. We will provide detailed &quot;how-to&quot; documentation closer to the 11.44 LTS release.

Stay Tuned
We will continue to share updates, timelines, and guidance here and through our KB. For full Microsoft guidance, take a look at this blog post from their Exchange team and this Microsoft Learn article. As always, don&#039;t hesitate to post questions here or reach out to your account team.</description>
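            The licensing-audit step above can be sketched programmatically. The following is a minimal illustration only: the milestone dates are taken from the advisory, but the SKU labels (`F1`, `F3`, `KIOSK`, `E5`) and the helper names are hypothetical, not a Commvault or Microsoft API.

```python
from datetime import date

# Milestone dates taken from the advisory above.
EWS_BLOCK_FRONTLINE = date(2026, 6, 30)   # F1/F3/Kiosk lose EWS access
EWS_RETIRED = date(2027, 4, 1)            # EWS retired for all Exchange Online tenants

# Hypothetical license labels, for illustration only.
FRONTLINE_SKUS = {"F1", "F3", "KIOSK"}

def ews_cutoff(sku: str) -> date:
    """Return the date after which this license can no longer use EWS."""
    if sku.upper() in FRONTLINE_SKUS:
        return EWS_BLOCK_FRONTLINE
    return EWS_RETIRED

def needs_action(sku: str, graph_ready: date) -> bool:
    """True if the mailbox loses EWS before the environment is Graph-ready,
    i.e. a candidate for the temporary license upgrade mentioned above."""
    return ews_cutoff(sku) < graph_ready

# Example: an environment expected to be Graph-ready in Summer 2026.
print(needs_action("F3", date(2026, 8, 1)))  # frontline SKU, cutoff is June 30, 2026
print(needs_action("E5", date(2026, 8, 1)))  # EWS retires April 1, 2027
```

Under these assumptions, only the frontline SKUs need attention before a Summer 2026 release; everything else is covered until the April 2027 retirement.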
            <category>Platform Updates</category>
            <pubDate>Wed, 08 Apr 2026 21:51:13 +0200</pubDate>
        </item>
                <item>
            <title>Guidance Required – Promoting LiveSync DR CommServe (Azure) to Production During On-prem Decommission</title>
            <link>https://community.commvault.com/share-best-practices-3/guidance-required-promoting-livesync-dr-commserve-azure-to-production-during-on-prem-decommission-11769</link>
            <description>Description: We are currently planning a transition of our Commvault environment and need guidance and best practices for our scenario.

Current Setup:
- Primary CommServe (Production) hosted in a colo data center
- LiveSync DR CommServe already configured and running in Azure (Cloud)
- Media Agents and backups currently aligned with the colo-based production CommServe

Planned Change:
- The colo site is being decommissioned
- All workloads and servers are being migrated to Azure
- The existing DR CommServe in Azure will be promoted to Production
- Post transition, backups will be fully aligned to Azure workloads and infrastructure

Objective: We want to:
- Promote the Azure DR CommServe to act as the new Production CommServe
- Reconfigure clients, Media Agents, and storage policies to align with Azure
- Safely decommission the existing colo-based CommServe

Key Questions / Clarifications Required:
- LiveSync Failover: What is the recommended approach to promote the DR CommServe to active Production?
- Client Connectivity: Best practice for updating client communication; any tools/scripts for bulk client updates?
- Database Consistency: Any specific checks recommended before promoting DR?
- Media Agents &amp; Storage: Considerations when switching storage from colo-based to Azure-based storage; any impact on existing storage policies or deduplication databases?
- Licensing / Configuration: Any licensing or configuration implications when moving fully to Azure?
- Best Practices: Recommended sequence of steps for this migration</description>
            <category>Share Best Practices</category>
            <pubDate>Wed, 08 Apr 2026 11:24:35 +0200</pubDate>
        </item>
                <item>
            <title>HSX went read only state</title>
            <link>https://community.commvault.com/hyperscale-x-q-a-54/hsx-went-read-only-state-11773</link>
            <description>Good day to all,

In my environment, HSX went into read-only mode after reaching the 85% threshold. I changed the retention policy from 30 days / 1 cycle to 30 days / 0 cycles. After that I observed the DDB at 0% (previously it was 33%), and space is still not reclaimed after the space reclamation and data aging jobs. Kindly suggest.

Regards,
Robert</description>
            <category>HyperScale X Q&amp;A</category>
            <pubDate>Wed, 08 Apr 2026 08:57:54 +0200</pubDate>
        </item>
                <item>
            <title>Get to Know Arlie (Better): An Update from Commvault Support</title>
            <link>https://community.commvault.com/getting-started-51/get-to-know-arlie-better-an-update-from-commvault-support-11610</link>
            <description>Our support ecosystem brings together skilled engineers, shared global knowledge, and continuous training. It also showcases the use of innovative technological advancements. One such innovation is Arlie, our AI-enabled support assistant.

Arlie reads logs, identifies patterns, and surfaces insights before small issues turn into major problems, helping our engineers deliver faster, smarter resolutions. This lets the team focus more on understanding the customer&#039;s situation and less on searching through raw data.

Arlie is enabled for all SaaS customers by default in Command Center today and is available in Command Center starting with SP40 for software customers. Read more about enabling Arlie in the documentation here. You can also access it via arlie.commvault.com or through our Support portal. Having Arlie in our customers&#039; hands enables you to search for solutions directly without having to wait on Commvault. This can be a lifesaver while handling critical, time-sensitive issues. Arlie can be your accelerated, &quot;always-on&quot; problem solver, and our support engineers are always available to provide clarity and deeper assistance.

Read more about Arlie here:
- Arlie&#039;s Latest Enhancements: Your New and Improved AI Assistant
- Arlie: What AI in IT Was Meant to Be
- The Agentic Revolution: Commvault, Arlie, and the Future

If you&#039;ve recently experienced AI-assisted support within our ecosystem, I&#039;d love to hear your thoughts through this brief Arlie feedback survey. Your feedback directly helps us refine our approach. Every insight you share will help strengthen our capabilities. I look forward to sharing more about Arlie, and we&#039;d love to hear from you about your questions, ideas, and how you&#039;re testing and using Arlie.</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 21:30:14 +0200</pubDate>
        </item>
                <item>
            <title>SAP Hana agent not available for download from cloud.commvault.com</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/sap-hana-agent-not-available-for-download-from-cloud-commvault-com-11519</link>
            <description>I cannot see SAP HANA as part of the list for database agent downloads from cloud.commvault.com.</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Tue, 07 Apr 2026 21:27:53 +0200</pubDate>
        </item>
                <item>
            <title>upgrade from 11.32.x to 11.40.x</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/upgrade-from-11-32-x-to-11-40-x-11757</link>
            <description>I&#039;m planning on upgrading our Commvault environment from 11.32.x to 11.40.x (long-term support) in the very near future. We are currently on Windows Server 2019 Datacenter. Will Commvault 11.40.x run on Windows Server 2019?

We are also using Microsoft SQL Server 2016 with our current 11.32.x. Does 11.40.x support this, or does it need to be upgraded as well using the Commvault 11.40.x installer? Please advise.

Any input or links would be appreciated.
BC</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Tue, 07 Apr 2026 21:14:16 +0200</pubDate>
        </item>
                <item>
            <title>How to earn badges</title>
            <link>https://community.commvault.com/community-guides-30/how-to-earn-badges-138</link>
            <description>Participation of course!  🤩 🤩 🤩

For all the actions you take in the community, including…
- asking questions
- starting conversations
- answering questions
- participating in conversations
- sharing best practices
… you are awarded points, and those eventually add up into automatically applied badges that appear on your avatar and profile.

There are also some special manually applied badges that the community team can use to recognize members. For example, you&#039;ll see that some early participants in our community received Founding Member badges.

And finally, there are external programs like Commvault&#039;s Expert Certification Program, by which Commvault administrators and engineers have earned a &quot;Certified Expert&quot; rank that is reflected in the community.

As we grow our community, more opportunities for recognition will be announced. Stay tuned!</description>
            <category>Community Guides</category>
            <pubDate>Tue, 07 Apr 2026 16:57:27 +0200</pubDate>
        </item>
                <item>
            <title>VSA with License</title>
            <link>https://community.commvault.com/getting-started-51/vsa-with-lisens-11419</link>
            <description>Dear all,

I have a vCenter environment, and I created a Red Hat 8 virtual machine to be used as a VSA proxy. I installed the Virtual Server package and the File System package successfully. However, after completing the installation, this node does not appear as an Access Node in the Virtualization configuration. Kindly advise why this is happening.

Note: I am currently using a trial license.</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 15:11:03 +0200</pubDate>
        </item>
                <item>
            <title>VM Backup Issue on Huawei FusionStorage HCI with commvault</title>
            <link>https://community.commvault.com/getting-started-51/vm-backup-issue-on-huawei-fusionstorage-hci-with-commvault-11766</link>
            <description>We are currently deploying a Commvault solution to protect VMs hosted on a Huawei FusionStorage HCI platform. We have encountered a specific issue regarding shared storage backups and would like to request your technical guidance.

While backup and restore operations for VMs hosted on the local disks of the FusionCompute cluster are working successfully, we encounter an error when attempting to back up VMs hosted on FusionStorage shared storage (please see the attached screenshot for error details).

Error code: 91:327
Commvault Version: v11.36.90 (2024E)
Deployment: Commvault All-in-One installed on a Windows VM (no additional VSA proxy)
Environment Versions:
- FusionCompute: 8.9.0
- FusionCube: 8.3.0
- FusionStorage: 8.3.0

Based on the Commvault documentation, we understand that the FusionCompute SDK does not support direct FusionStorage operations, requiring Commvault to use a Java SDK and command-line utilities. The documentation also specifies that the Virtual Server Agent (VSA) must be installed on the Domain 0 host (Dom0).

Could you please provide clarification on the following points:
- Compatibility: Is Commvault v11.36.90 (2024E) fully compatible with the Huawei versions listed above (FusionCompute 8.9.0 / FusionCube 8.3.0)?
- Dom0 Identification: How can we correctly identify the Dom0 host within this HCI architecture?
- VSA Installation: What is the specific procedure for installing the VSA agent directly on the Dom0?</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 15:07:10 +0200</pubDate>
        </item>
                <item>
            <title>How to install and configure a FREL (File Recovery Enabler for Linux)</title>
            <link>https://community.commvault.com/getting-started-51/how-to-install-and-configure-a-frel-file-recovery-enabler-for-linux-11716</link>
            <description>I need to install a FREL on an existing Linux VM with the operating system already in place. I want to use this FREL for the Huawei FusionCompute hypervisor. I have installed the MediaAgent, File System, and VSA packages on this Linux machine. So, how do I install and configure a FREL, and what is the next step to make it operational?</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 15:03:03 +0200</pubDate>
        </item>
                <item>
            <title>From the Lab:  10 Commvault Configuration Mistakes I See All the Time</title>
            <link>https://community.commvault.com/share-best-practices-3/from-the-lab-10-commvault-configuration-mistakes-i-see-all-the-time-11770</link>
            <description>Commvault is incredibly powerful. But that power comes with complexity, and complexity is where mistakes creep in. I&#039;ve worked in enough environments to see the same patterns repeat:
- Backups are &quot;working&quot;… until they aren&#039;t
- Performance slowly degrades
- Restores become harder than they should be
And most of the time, it&#039;s not a product issue. It&#039;s configuration. Here are 10 Commvault configuration mistakes I see all the time, and how to avoid them.

1. Building Everything Around a Single MediaAgent
It works at first. One MediaAgent. One place for everything. Simple. Until:
- Jobs stack up
- Throughput drops
- That one system becomes your bottleneck
Fix: Start with at least two MediaAgents and distribute workloads. If everything depends on one MediaAgent, you don&#039;t have resilience; you have risk.

2. Undersizing the CommServe
The CommServe is the brain of your environment. And yet, it&#039;s often treated like an afterthought. What happens:
- Slow job scheduling
- UI lag
- Reporting delays
Fix:
- Proper CPU/RAM sizing
- Fast storage for the database
- Regular DB maintenance
If the CommServe struggles, everything struggles.

3. Poor Deduplication Database (DDB) Placement
This one causes more long-term pain than almost anything else. The mistake: putting the DDB on slow or shared storage. The result:
- Slower backups
- Longer job times
- Painful rebuilds
Fix:
- Place the DDB on fast, dedicated storage (SSD if possible)
- Monitor DDB health regularly
Dedupe performance lives and dies by storage speed.

4. Overloading a Single Storage Pool
It&#039;s easy to just keep adding workloads to the same storage pool. Until it can&#039;t keep up. Symptoms:
- Increased job duration
- Resource contention
- Inconsistent performance
Fix:
- Split workloads across multiple storage pools
- Align pools with workload types or performance tiers
One pool for everything = one problem for everything.

5. Ignoring Network Throughput
Backup traffic is still network traffic. And it&#039;s often underestimated. What happens:
- Bottlenecks between clients and MediaAgents
- Slower backups and restores
- Unpredictable performance
Fix:
- Validate bandwidth between components
- Use dedicated backup networks where possible
- Monitor throughput, not just job status
You can&#039;t out-configure a slow network.

6. No Clear Retention Strategy
Retention tends to evolve organically, and that&#039;s the problem. What I see:
- Different retention policies everywhere
- No alignment with business needs
- Storage filling up unexpectedly
Fix:
- Define retention by workload type
- Align with compliance and business requirements
- Plan for growth, not just current usage
Retention isn&#039;t a setting; it&#039;s a strategy.

7. Mixing All Workloads Together
Databases. VMs. File servers. Archive jobs. All in the same policies, same schedules, same infrastructure. What happens:
- Performance conflicts
- Harder troubleshooting
- Unpredictable job behavior
Fix: Segment workloads by type, by priority, and by performance requirements. Separation creates control, and control creates stability.

8. Skipping Restore Testing
This is the big one. Everything looks fine… because nobody has tried to restore anything. Reality:
- Backups can succeed but still be unusable
- Application restores may fail due to misconfigurations
- Recovery times are often unknown
Fix: Test regularly:
- Full VM restores
- File-level recovery
- Application-level recovery
Backup success means nothing without recovery success.

9. Too Many Exceptions (No Standardization)
&quot;I&#039;ll just configure this one differently…&quot; That adds up quickly. What I see:
- Inconsistent policies
- Confusing configurations
- Hard-to-manage environments
Fix:
- Standardize naming, policies, and schedules
- Limit exceptions
- Document anything that deviates
Consistency is what makes environments scalable.

10. Ignoring Alerts and Warning States
Commvault will tell you when something isn&#039;t right. The problem is… people stop listening. What happens:
- Warnings pile up
- Real issues get buried
- Small problems become big ones
Fix:
- Clean up existing warnings
- Tune alerts so they&#039;re meaningful
- Treat warnings as early signals, not noise
If everything is a warning, nothing is.

Bringing It All Together
Most Commvault issues don&#039;t come from dramatic failures. They come from:
- Small misconfigurations
- Overlooked details
- Decisions that made sense at the time
Until scale exposes them.

Final Thought
You don&#039;t need a perfect environment. But you do need a predictable one. Good configuration isn&#039;t about making things work today. It&#039;s about making sure they still work when everything grows. Fix these early, and your environment will feel stable. Ignore them, and you&#039;ll spend your time chasing problems that didn&#039;t have to exist.</description>
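            The "retention is a strategy, plan for growth" point lends itself to a quick back-of-the-envelope calculation. The following is an illustrative sketch only: the function name and the environment numbers (front-end size, change rate, dedupe ratio) are hypothetical and not derived from any Commvault sizing tool.

```python
def steady_state_tb(full_tb, daily_change_tb, retention_days, dedupe_ratio):
    """Rough steady-state back-end footprint: one full plus the retained
    daily incrementals, reduced by the deduplication ratio."""
    logical = full_tb + daily_change_tb * retention_days
    return logical / dedupe_ratio

# Hypothetical environment: 100 TB front end, 2 TB daily change, 4:1 dedupe.
for days in (30, 90, 365):
    print(days, round(steady_state_tb(100, 2.0, days, 4.0), 1))
```

Even with these made-up numbers, the point of mistake #6 shows up immediately: moving from 30-day to compliance-driven 365-day retention roughly quintuples the footprint, which is why retention needs to be planned rather than allowed to evolve organically.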
            <category>Share Best Practices</category>
            <pubDate>Tue, 07 Apr 2026 15:00:57 +0200</pubDate>
        </item>
                <item>
            <title>Unable to delete old Storage Policy and DDB after CommCell Migration using MongoDB Recovery Assistant Tool</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/unable-to-delete-old-storage-policy-and-ddb-after-commcell-migration-using-mongodb-recovery-assistant-tool-11727</link>
            <description>Dear Team,

We recently migrated our CommCell server using the &quot;Recovering the MongoDB Database Using the Mongo Recovery Assistant Tool for Windows&quot; document. After the migration, the new CommCell server (Commsafe) cloned all configurations from the old server, including the old Disk Library, Mount Paths, and DDB entries.

We have since configured a new Disk Library, new Data Path, and new DDB on the Commsafe server and successfully re-associated all subclients to the new Storage Policy. The old CommCell server has been fully decommissioned, and the old client win-i3vamh7fvl3 has been retired from the CommCell Console.

However, when we attempt to delete the old Storage Policy, we receive an error stating that DDB Backups are still associated with it. Additionally, when attempting deletion, the Job Controller shows pending/queued jobs tied to the old Storage Policy, which is blocking the deletion.

What we need help with:
- How to clear the stale/orphaned DDB association left from the cloned configuration
- How to remove or kill the queued jobs in the Job Controller that are linked to the old Storage Policy
- The correct sequence to fully delete the old Storage Policy, DDB, and Disk Library safely</description>
            <category>Storage and Deduplication</category>
            <pubDate>Tue, 07 Apr 2026 07:25:03 +0200</pubDate>
        </item>
                <item>
            <title>Lab Install: Designing a Commvault Environment That Won’t Break at Scale</title>
            <link>https://community.commvault.com/share-best-practices-3/lab-install-designing-a-commvault-environment-that-won-t-break-at-scale-11767</link>
            <description>In a lab setting, I&#039;ve installed several Commvault servers for various use cases, including customer demos. Below is my take on designing a Commvault environment.

It&#039;s easy to build a Commvault environment that works. It&#039;s much harder to build one that still works a year later, under more load, more data, and more expectations. Because scale doesn&#039;t break things immediately; it exposes the shortcuts you took early on. I&#039;ve seen environments that looked perfectly fine at deployment:
- Backups running
- Jobs completing
- Storage holding up
Then growth hits. More VMs. More data. More retention. And suddenly:
- Jobs start missing windows
- Deduplication performance drops
- MediaAgents get overloaded
- Restore times creep up
That&#039;s not a failure of Commvault. That&#039;s a design problem. Let&#039;s talk about how to build it right from the start.

1. Start with Architecture, Not Jobs
One of the most common mistakes is jumping straight into creating backup jobs. But jobs don&#039;t define your environment; architecture does. Core components to design properly:
- CommServe (your brain)
- MediaAgents (your workhorses)
- Storage (disk, object, cloud)
What scalable design looks like:
- Dedicated CommServe with proper sizing
- Multiple MediaAgents (not just one doing everything)
- Storage designed for throughput, not just capacity
If your architecture is weak, no amount of tuning will fix it later.

2. Don&#039;t Underestimate the CommServe
Everything flows through the CommServe:
- Job scheduling
- Metadata
- Database operations
When it struggles, everything struggles. Best practices:
- Use proper CPU and RAM sizing (don&#039;t go minimal)
- Place the database on high-performance storage
- Regularly maintain and monitor the CommServe DB
What happens if you don&#039;t:
- Slow job initiation
- Delays across the environment
- Reporting and UI lag
Scaling starts with a healthy control plane.

3. Scale Out MediaAgents Early
A single MediaAgent might work today, but it becomes your bottleneck tomorrow. What scalable looks like:
- Multiple MediaAgents distributing load
- Workloads balanced across them
- Separation by function if needed (e.g., production vs archive)
Key considerations:
- CPU and RAM for deduplication
- Network throughput
- Disk I/O performance
Common mistake: &quot;We&#039;ll add another MediaAgent later.&quot; By the time you need it, you&#039;re already dealing with performance issues. Design for distribution from day one.

4. Get Deduplication Right (Or Pay for It Later)
Commvault&#039;s deduplication is powerful, but it&#039;s also one of the most misunderstood areas. What to plan:
- Proper sizing of the Deduplication Database (DDB)
- Fast storage for the DDB (this is critical)
- Logical storage pool design
Best practices:
- Avoid overloading a single DDB
- Monitor dedupe ratios and performance
- Scale out instead of overloading
What goes wrong at scale:
- Slow backups
- Increased job runtimes
- DDB rebuild pain (and downtime risk)
Bad dedupe design doesn&#039;t fail fast; it degrades slowly.

5. Design for Throughput, Not Just Capacity
Storage conversations often focus on &quot;How much data do we need to store?&quot; The better question is &quot;How fast can we move data?&quot; What scalable storage looks like:
- High IOPS and throughput
- Parallel write capability
- Integration with object storage where appropriate
What causes problems:
- Cheap storage that can&#039;t keep up
- Single repositories handling too much load
- Ignoring network throughput between components
Backups don&#039;t fail because of size; they fail because of speed.

6. Separate Workloads Intentionally
Not all backups are equal. Mixing everything together leads to contention. Examples of smart separation:
- Production vs dev/test
- Large databases vs small VMs
- Short retention vs long-term archive
Why it matters:
- Predictable performance
- Easier troubleshooting
- Better resource allocation
Segmentation brings control. Control brings stability.

7. Plan Retention Like It&#039;s a Growth Problem (Because It Is)
Retention is where scale quietly explodes. What starts as 30 days of backups becomes 90 days, then a year, then compliance-driven retention. What to plan:
- Storage growth over time
- Archive tiers (object/cloud)
- Lifecycle policies
Common mistake: designing for today&#039;s retention, not tomorrow&#039;s requirements. Retention is the silent driver of scale.

8. Build for Recovery, Not Just Backup
This is where most designs fall short. They optimize for backup success and storage efficiency but ignore restore performance and recovery workflows. What scalable recovery looks like:
- Fast access to recent backups
- Tested restore scenarios
- Clear prioritization of critical systems
What breaks at scale:
- Restores taking too long
- Difficulty finding the right data
- Bottlenecks during large recoveries
If recovery doesn&#039;t scale, your design doesn&#039;t scale.

9. Monitor Before It Hurts
At scale, issues don&#039;t appear suddenly; they build. What to monitor:
- Job duration trends
- MediaAgent load
- DDB health
- Storage latency
- Capacity growth
What boring (good) looks like:
- Predictable job completion
- No surprise slowdowns
- No last-minute capacity issues
If you&#039;re only reacting to alerts, you&#039;re already behind.

10. Keep It Simple (Seriously)
Over-engineering is just as dangerous as under-designing. Too many storage pools, policies, and exceptions lead to complexity, confusion, and operational mistakes. What scalable simplicity looks like:
- Standardized policies
- Clear naming conventions
- Minimal exceptions
Complex environments don&#039;t scale; they collapse under their own weight.

Bringing It All Together
Designing a Commvault environment that scales isn&#039;t about adding more later. It&#039;s about making the right decisions early:
- Strong architecture
- Distributed load
- Thoughtful storage design
- Realistic retention planning
- Recovery-focused thinking

Final Thought
Scale doesn&#039;t break systems. It reveals them. If your design is solid, scale feels predictable. If it&#039;s not, scale feels like failure. Build it right the first time, and your future self won&#039;t be firefighting later.</description>
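            The "monitor job duration trends" advice above can be made concrete with a tiny sketch. This is an illustration only, not a Commvault feature or API: the function, the window/factor parameters, and the sample durations are all hypothetical.

```python
from statistics import mean

def flag_slowdowns(durations_min, window=5, factor=1.5):
    """Return indices of runs that took more than `factor` times the mean
    of the preceding `window` runs -- a simple duration-trend signal."""
    flagged = []
    for i in range(window, len(durations_min)):
        baseline = mean(durations_min[i - window:i])
        if durations_min[i] > factor * baseline:
            flagged.append(i)
    return flagged

# Hypothetical nightly job durations in minutes; the 95-minute run stands out
# long before the job actually misses its window.
history = [42, 45, 40, 44, 43, 41, 95, 44]
print(flag_slowdowns(history))  # → [6]
```

Trend checks like this catch the "issues build, they don't appear suddenly" failure mode: the job still completes successfully, so a pass/fail alert stays green while the duration quietly doubles.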
            <category>Share Best Practices</category>
            <pubDate>Mon, 06 Apr 2026 15:38:18 +0200</pubDate>
        </item>
                <item>
            <title>Oracle credentials &quot;vault&quot;</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-credentials-vault-11731</link>
            <description>Hi all,

We&#039;re currently looking at migrating Oracle backups to Commvault, and the scale is quite big; we&#039;re talking hundreds of databases. One of the challenges is credential management. Right now we&#039;re using connection strings that are specific per Oracle instance, which isn&#039;t very practical at this scale.

So I&#039;m wondering: is there any way in Commvault to handle Oracle credentials centrally? Some kind of credential vault, maybe? Or could this be solved with a workflow or another native Commvault feature? Any tips or best practices would be really helpful.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 06 Apr 2026 05:44:57 +0200</pubDate>
        </item>
                <item>
            <title>error getting running post backup script execution on backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/error-getting-running-post-backup-script-execution-on-backup-11763</link>
            <description>I am getting an error executing the post-backup script on an Oracle database backup:

Error Code: [7:78] ... Executable file (/opt/commvault/Base/post_scripts/db_rman_listarch_remote_vlsbi.sh) not found in installation directory, or executing outside of installation directory is disabled

However, I executed the script manually on the server and it runs fine, and the script was copied into the Commvault installation directory under the /opt/commvault/Base/post_scripts folder.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Sat, 04 Apr 2026 00:13:45 +0200</pubDate>
        </item>
                <item>
            <title>One Off Backups and Licence Usage</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/one-off-backups-and-licence-usage-11760</link>
            <description>Hi all,

We&#039;ve been using our Commvault setup to take one-off NDMP backups of critical research data from our central storage, then copying to two tapes, all that good stuff. As these are one-off backups, I&#039;ve been disabling the client activity and removing the licence afterwards, as they&#039;ll only be needed for a potential restore down the line (retention is a minimum of 5 years).

I&#039;ve noticed that this doesn&#039;t count towards licence usage, which in essence means that we can do as many &quot;archives&quot; as we have storage for, as long as we have sufficient licence headroom for the initial one-off backup. My worry is that this would be seen as circumventing licence costs and against the fair-usage idea, and that if we were ever audited it would result in additional charges. I should add we&#039;re not a massive setup: 1000 VMs and ~250TB of capacity. The archives in total would be 10s of TB, maybe ranging into 100s.

Anybody got any relevant info or thoughts?

TIA
Ian</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 03 Apr 2026 17:47:56 +0200</pubDate>
        </item>
                <item>
            <title>Office 365 Configuration Helper fails with AAD Graph error</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/office-365-configuration-helper-fails-with-aad-graph-error-11762</link>
            <description>Hi all, when registering an Office 365 Exchange app in the Commvault Command Center (version 11.32), we attempted to use the Office 365 Configuration Helper tool. However, after running the helper, the following error occurs: &quot;Access blocked to AAD Graph API for this application.&quot; Although the provided link (https://learn.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) describes the general API migration, it does not offer specific advice for resolving this within the Commvault tool. We have already tried installing the Microsoft Graph PowerShell SDK (https://learn.microsoft.com/en-us/powershell/microsoftgraph/installation), but the error persists and the configuration tool still fails. Have you had any similar experiences with the Office 365 Configuration Helper tool? Thanks in advance for sharing your insights!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 03 Apr 2026 06:19:59 +0200</pubDate>
        </item>
                <item>
            <title>1-touch QSDK communication error - network topology problem?</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/1-touch-qsdk-communication-error-network-topology-problem-11111</link>
            <description>Hi, I’m trying to test a 1-touch restore for Windows. When I try to connect to the Commserve, there is a “failed to setup a communication (QSDK) session” error. You can observe that the initial connection was successful (“connected to commserve”), but then this QSDK problem appears. I checked the logs and there is info that there was no direct tunnel and it couldn’t bring up any automatic tunnels to the Commserve. I think it could be a problem with network routes / topologies on the Commserve, but here is the question: how can I set up a network route or topology to a client which does not exist yet? I mean, when I start the 1-touch wizard and configure the network connection to the Commserve, it is not a real client but just some host with a random hostname, so how could the Commserve know which network settings to apply to this client? I can set up some topology for some group, but which client should be in there? At this initial step of 1-touch I haven’t entered any client information yet. PS: I restarted the Commands Manager service and tried to play with groups, routes and pseudoclients, and ONCE I somehow successfully connected to the Commserve and went to the next step, but I cannot reproduce it. PS 2: 8403/TCP is open bidirectionally between the Commserve and this client.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 23:12:36 +0200</pubDate>
        </item>
                <item>
            <title>Report Builder no longer available with 11.40?</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/report-builder-no-longer-available-with-11-40-11291</link>
            <description>Hi, I just learned that the WebConsole is no longer available with 11.40. So far, so good. But what I really miss is the Report Builder, which is no longer available with new installations of Commvault. Are there any plans to make it available in the Command Center as well in the near future? And if so, when will this happen? Rgds, Klaus</description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Thu, 02 Apr 2026 21:07:16 +0200</pubDate>
        </item>
                <item>
            <title>DDB Performance and Status</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/ddb-performance-and-status-11756</link>
            <description>Hi, is there an option to adjust the values for Average Q&amp;amp;I Time and Primary Records? Current values in SP40: Good: &amp;lt; 600 μs, or &amp;gt;= 600 μs and &amp;lt; 160 Million primary records; Warning: 600-800 μs and &amp;gt;= 160 Million, or DDB needs an upgrade; Critical: &amp;gt;= 800 μs and &amp;gt;= 160 Million. I need to change these values a bit. Arlie says (How to adjust DDB Q&amp;amp;I Time thresholds): Open the CommCell Console, go to Control Panel &amp;gt; Media Management, select the Deduplication tab, then locate and adjust the following parameters: DDB horizontal scaling threshold QI Time; Warning Threshold for Average Q&amp;amp;I Time; Critical Threshold for Average Q&amp;amp;I Time. But these options aren’t available in SP40.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 16:37:25 +0200</pubDate>
        </item>
                <item>
            <title>Is SharePoint Online backup still possible on CV 11.32 after Azure ACS retirement (KB 84896)?</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/is-sharepoint-online-backup-still-possible-on-cv-11-32-after-azure-acs-retirement-kb-84896-11761</link>
            <description>Hi all, I am currently running Commvault 11.32 and I’m looking for clarification regarding KB 84896 (&quot;SharePoint Online Azure ACS Retirement in Microsoft 365&quot;). https://kb.commvault.com/article/?id=84896 Today is April 2nd, 2026, which according to the article is the final deadline for Azure ACS functionality. The KB specifically mentions that due to Microsoft&#039;s restrictions on DisableCustomAppAuthentication and the retirement of ACS, users must upgrade to Commvault Platform Release 11.36.28 or later. Is it recommended to upgrade Commvault from version 11.32 rather than disabling authentication to allow Office 365 SharePoint backup and restore? Finally, could registration of the SharePoint app and backup somehow work in 11.32 as well (when disabling authentication in the SharePoint portal)? Your opinions would be much appreciated.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 16:25:29 +0200</pubDate>
        </item>
                <item>
            <title>One of the system schedules policy interfered with the execution of the other schedules</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/one-of-the-system-schedules-policy-interfered-with-the-execution-of-the-other-schedules-11759</link>
            <description>Hello, I have Simpana 11.36.68 and we have a problem because one system schedule interfered with the execution of another schedule policy. For a specific client, I configured the schedule so that a Full backup runs on Fridays, while Incremental backups run on the other days. However, last night both a Full and an Incremental backup started at the same time, even though the Full backup should not have been triggered on Wednesday. This indicates that a system schedule may have overridden the configured backup schedule. The schedule policy for this client does not provide the option to include Wednesday or Saturday, which is causing scheduling issues. What might be causing the issue? Best regards, Elizabeta</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 12:52:00 +0200</pubDate>
        </item>
                <item>
            <title>M365 Custom Config: Certificate creation &amp; Azure upload workflow</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/m365-custom-config-certificate-creation-azure-upload-workflow-11735</link>
            <description>Hi all, I’m setting up M365 backup using the Custom Configuration, which is now the recommended approach over the Express option. I have already registered the app in Azure and have my Application ID and Tenant ID ready. However, the documentation is a bit unclear regarding the certificate workflow: I can’t find the specific section explaining where and how to generate the certificate required for the Azure portal. Is the certificate generated directly within the Command Center wizard? Also, once it’s generated, is the correct next step to upload it to the Azure app registration? Could someone please clarify these steps or point me to the exact documentation page covering this procedure? https://documentation.commvault.com/2023e/software/complete_microsoft_365_service_catalog_for_exchange_online_using_custom_configuration_option.html?_gl=1*hjd47t*_gcl_au*NjQ0NDk4ODY5LjE3Njk1OTU0OTI.  </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 09:52:40 +0200</pubDate>
        </item>
                <item>
            <title>Reports not viewable by other admins</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/reports-not-viewable-by-other-admins-11748</link>
            <description>Hello, we are creating multiple reports in Command Center using the backup summary report, but we have an issue where only the user who created a report can see it with its filters. How can we configure these reports so that other admins are able to see, adjust and edit them?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 01 Apr 2026 12:47:52 +0200</pubDate>
        </item>
                <item>
            <title>On cloned server with same IP can not start backup Commvault</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/on-cloned-server-with-same-ip-can-not-start-backup-commvault-11692</link>
            <description>Hello, I have installed a cloned machine (keeping the same IP address and the same name) in place of the original. On the Commvault backup server there were no changes. After the server was cloned, the backup from Commvault stopped working. Description: Failed to start phase [Backup] on [mailserver] due to network error [Remote machine [mailserver]. The socket connect failed. Error returned -1=Connection to mailserver:cvd(8400/8400) failed on bck: Can&#039;t unambiguously route connection to mailserver because there are multiple tunnels from different instances of CVFWD pretending to be mailserver]. Check Network Connectivity.]. Will attempt to restart. Please check if this product&#039;s services are running on the remote host. Is there a procedure that needs to be performed on the Commvault backup server for a cloned machine? Best regards, Elizabeta</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 01 Apr 2026 12:14:42 +0200</pubDate>
        </item>
                <item>
            <title>upgrade client agent</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/upgrade-client-agent-11745</link>
            <description>Good morning. I would like to know if it is possible to find out the reasons for a client upgrade failure after the job has completed. Thanks</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Wed, 01 Apr 2026 11:21:41 +0200</pubDate>
        </item>
                <item>
            <title>DevOps Resilience webinar (on-demand): Protecting the Data that Powers Delivery</title>
            <link>https://community.commvault.com/news-events-7/devops-resilience-webinar-on-demand-protecting-the-data-that-powers-delivery-11747</link>
            <description>Join us April 9 in your region for an in-depth look at protecting and recovering the critical data that drives innovation and delivery. DevOps platforms like Azure DevOps, GitHub, GitLab, and Jira house valuable source code, pipelines, work items, and metadata that form the core of an organization’s intellectual property and innovation engine. Yet protection and recovery are often overlooked or managed through native tools or homegrown scripts, approaches that can create gaps in protection, compliance, and recovery readiness. Commvault Senior Product Manager Aashray Augur and Product Marketing Manager Katharine Colucci will demo Commvault Cloud Backup &amp;amp; Recovery for DevOps and discuss real-world scenarios where our enterprise-grade solution can help automate protection, streamline compliance readiness, and accelerate recovery across distributed DevOps environments. Click here for details and to register.  </description>
            <category>News &amp; Events</category>
            <pubDate>Wed, 01 Apr 2026 06:22:04 +0200</pubDate>
        </item>
                <item>
            <title>Join Commvault at Google Cloud Next | April 22 - 24, 2026</title>
            <link>https://community.commvault.com/news-events-7/join-commvault-at-google-cloud-next-april-22-24-2026-11693</link>
            <description>Join us at Booth #3617 at Mandalay Bay, Las Vegas, April 22 - 24, 2026. Click here for details or to book a meeting with our team. We’d love to see you if you plan to be there! You’ll find out much more about Commvault’s expanding collaboration with Google Cloud, including: comprehensive protection for Google Cloud services; resilience for AI datasets in BigQuery; application-consistent protection and granular recovery for GKE workloads; and single-pane protection and recoverability across Google Cloud services and Workspace productivity apps.</description>
            <category>News &amp; Events</category>
            <pubDate>Wed, 01 Apr 2026 01:38:53 +0200</pubDate>
        </item>
                <item>
            <title>Exchange Online Delete: Backend Data or Just Index? (Infinite Retention)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/exchange-online-delete-backend-data-or-just-index-infinite-retention-11740</link>
            <description>Hey Commvault Community, I just deleted an Exchange Online mailbox backup from Command Center, kicking off Job ID 1032081. The screenshot shows the &quot;Index Delete&quot; phase done at 100%. For Exchange Online (and other O365 services like Teams/SharePoint), does deleting the backup actually remove backend data from cloud storage, or does it just kill the index (no more browse/restore) while emails/files stay forever? From what I know, Commvault uses only forever / infinite retention for O365 services (retention rules only apply to deleted-data backups). That’s why I’m asking: in the case of deleting an Exchange Online mailbox backup, does the backend backup data also get deleted from cloud storage, or just the index? Thanks in advance for your feedback. Best regards, Nikos</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 31 Mar 2026 15:53:19 +0200</pubDate>
        </item>
                <item>
            <title>ReFS Support for Disk Libraries on Windows 2022?</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/refs-support-for-disklibraries-on-windows-2022-11737</link>
            <description>Hi, according to BOL (ReFS for Windows 2012 and Windows 2016), ReFS is supported for disk libraries up to Windows Server 2019. Is there a reason not to extend the support to the recent versions of Windows (2022, 2025)? Is it required to copy out, reformat using NTFS, and copy back all disklib paths/volumes if the server is upgraded from Windows 2019 to a more recent version of Windows? This can become challenging if the disklib does not have enough free space available to temporarily evacuate one of its paths. BR, Klaus </description>
            <category>Storage and Deduplication</category>
            <pubDate>Tue, 31 Mar 2026 14:29:28 +0200</pubDate>
        </item>
                <item>
            <title>Sharepoint Backup On Prem</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/sharepoint-backup-on-prem-11743</link>
            <description>Hi all, we have a very big SharePoint farm on-prem. Currently we are using the SharePoint agent for documents and also for the database. Now we are facing 2 problems: the logs on the SQL server are not getting truncated, and the performance is very, very bad. What is the risk if we only back up the documents via the SharePoint agent and back up the database via the SQL Server agent? </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 31 Mar 2026 14:20:50 +0200</pubDate>
        </item>
                <item>
            <title>Nutanix AHV support</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/nutanix-ahv-support-11739</link>
            <description>Hi, when will Nutanix AHV versions 10.3.0.2 and later be supported by Commvault? According to the documentation, the latest supported version is currently 7.5.x. https://documentation.commvault.com/11.42/software/nutanix_ahv_system_requirements.html The Commvault software is supported with Nutanix AHV versions 4.6, 4.7, 5.0, 5.1.x.x, 5.5.x.x, 5.6.x, 5.8.x, 5.9.x, 5.10.x, 5.11, 5.15, 5.16, 5.17.x, 5.18.x, 5.20.x, 6.0, 6.5, 6.7, 6.8, 6.10, 7.0.x, 7.3.x, and 7.5. Regards, Lukasz</description>
            <category>Virtualization and Containers</category>
            <pubDate>Mon, 30 Mar 2026 23:33:39 +0200</pubDate>
        </item>
                <item>
            <title>Commvault 11.36  Hyperscale X 3.x - memory running low</title>
            <link>https://community.commvault.com/hyperscale-x-q-a-54/commvault-11-36-hyperscale-x-3-x-memory-running-low-11742</link>
            <description>Is it normal behavior for all nodes in the HyperScale cluster to run out of memory after a few months without a reboot? We had the issue with 11.32, and there was no change with 11.36. Migrating HyperScale from 2.x to 3.x didn’t improve the situation. The only fix is to reboot each node. They each have 512GB; after a reboot each has about 300GB free, most of it buff/cache. When they have less than 10% free, the dashboard memory indicator turns red.</description>
            <category>HyperScale X Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 19:27:56 +0200</pubDate>
        </item>
                <item>
            <title>SQL Database Backup Fails with Error</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/sql-database-backup-is-failed-with-error-5013</link>
            <description>Hi Team, I am facing an issue in our Commvault environment. While running the SQL database backup we are getting the error below; please suggest a solution. Error Code: [30:324] Description: &quot;Error encountered when closing the virtual device, Please see SQL server vdi.log for more details&quot; </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 18:12:20 +0200</pubDate>
        </item>
                <item>
            <title>Issue with WebConsole – Reports and SLA failing after failover</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/issue-with-webconsole-reports-and-sla-failing-after-failover-11741</link>
            <description>Hello, we are experiencing an issue with the WebConsole in our Commvault environment following a failover (Windows CS to Linux CS). Environment: Commvault version 11.40.26; CommServe OS: Linux; WebConsole / Command Center hosted on Linux. Issue description: the WebConsole is not functioning correctly; scheduled reports and SLA reports are failing; jobs are working without any issue. Observed behavior: access to Command Center (/commandcenter) is working (HTTP 200); access to WebConsole (/webconsole) returns HTTP 404 (Application Not Found); reports fail with the following errors: &quot;Failed to get the list of command centers&quot; and &quot;There is no WebConsole available to proceed further&quot;. Logs (relevant excerpts): scheduleReport.log: GetWebConsoleURLList() - There is no WebConsole available to proceed further; JobManager.log: Failed to get the list of command centers for web server id [0]; Apache logs: /commandcenter → 200, /webconsole → 404. Actions already performed: verified that the WebConsole and Command Center components are installed; restarted Commvault services; performed a repair via cvpkgadd; verified Apache and WebServer services are running; confirmed WebConsole binaries are present on disk. Current assessment: Command Center is operational; the WebConsole (legacy UI) is not deployed or not registered properly; no Web Server entry seems available from the CommServe perspective (web server id = 0). Request: could you please assist with verifying the Web Server / WebConsole registration in the CommServe database, providing the correct procedure to redeploy or re-register the WebConsole, and confirming whether this behavior is expected after failover and whether additional steps are required? Please let us know if additional logs or diagnostics are needed. </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 15:47:29 +0200</pubDate>
        </item>
                <item>
            <title>Re-image HyperScale X</title>
            <link>https://community.commvault.com/hyperscale-x-q-a-54/re-image-hyperscall-x-11726</link>
            <description>Hello, I am currently attempting to reinstall three HyperScale X nodes that were previously configured with immutability (WORM enabled). Each node is equipped with 24 × 18 TB disks. During the fresh installation, the process consistently stalls at around 68%. After more than 2 hours there is no further progress, whereas the expected installation time is typically between 45 and 90 minutes. Additionally, I am not seeing any relevant logs generated on the nodes during this stage, which makes troubleshooting difficult. Given the previous immutability configuration, I would like to confirm: Is there any specific cleanup or prerequisite required before reinstalling these nodes? Could residual data or disk metadata from the previous configuration cause this behavior? At this stage, would you recommend proceeding with a full reimage of the nodes, or is there a supported cleanup procedure to resolve this? Thank you for your assistance. Best regards,</description>
            <category>HyperScale X Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 15:27:32 +0200</pubDate>
        </item>
                <item>
            <title>Azure Application API Permissions for M365 Services</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/azure-application-api-permissions-for-m365-services-5639</link>
            <description>Hi CommVault Community, I deployed the Azure applications for M365 services with the Express Setup. The Azure applications have some permissions that are somewhat concerning from a security perspective. Therefore the question: are the following permissions really necessary? Why do the Azure apps need &quot;Application.ReadWrite.All&quot; permissions? Why does the Azure application for SharePoint Online need &quot;RoleManagement.ReadWrite.Directory&quot; permissions? They are not listed in the official documentation! https://documentation.commvault.com/2023/essential/142507_request_and_grant_permissions_to_azure_apis_for_azure_app_for_sharepoint_online.html The affected apps are CVSPBackupApp, CVTeamsBackupApp, CVODBackupApp, and CVExBackupApp. Best regards, Andreas</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 13:39:07 +0200</pubDate>
        </item>
                <item>
            <title>Migrating tapes without tape library</title>
            <link>https://community.commvault.com/share-best-practices-3/migrating-tapes-without-tape-library-11689</link>
            <description>Hello, one of our clients has an on-prem CommCell with an IBM 3576-MTL6 library with 6 drives and over a thousand tapes. Considering the customer is planning workload/client migration to a new CommCell, is it possible to migrate the indexes of these tapes and lift and shift the actual tapes, without migrating the actual library and tape drive hardware? What is the best option? Thanks, Tomas </description>
            <category>Share Best Practices</category>
            <pubDate>Mon, 30 Mar 2026 10:34:24 +0200</pubDate>
        </item>
                <item>
            <title>Support for Netapp ONTAP 9.17.1</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/support-for-netapp-ontap-9-17-1-11522</link>
            <description>Hello all, our customers are increasingly requesting that we update their NetApp storage to ONTAP 9.17.1. In addition, new systems are increasingly being delivered with 9.17, and we need to perform a downgrade (if still supported by the hardware). Are there any plans to support ONTAP 9.17.1 or even 9.18.1? Grateful for any help! </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 30 Mar 2026 08:13:27 +0200</pubDate>
        </item>
                <item>
            <title>Upgrade from 11.28 to 11.40</title>
            <link>https://community.commvault.com/share-best-practices-3/upgrade-from-11-28-to-11-40-11738</link>
            <description>Hi, we would like to upgrade our Commvault platform from v11.28 to v11.40. I would like to know if someone did this lately and how you did it. If you have a detailed procedure, please share it with me; it would be very helpful :) Thanks</description>
            <category>Share Best Practices</category>
            <pubDate>Fri, 27 Mar 2026 01:19:14 +0100</pubDate>
        </item>
                <item>
            <title>Automatic restart of pending jobs</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/automatic-restart-of-pending-jobs-11734</link>
            <description>Good morning everyone, I hope you are well. Let me give some context: in the environment of one of my clients we perform weekly &quot;Synthetic Full&quot; backups for one of their user groups. The users in this group work from home (&quot;Home Office&quot;), which is why backups were scheduled weekly, so as not to affect the group’s SLA. For this reason, some backup jobs are not executed and remain in pending status. My question: for those backlogged jobs, is there a way for Commvault to detect the connected computers in the group and automatically retry the pending jobs, without having to run them manually? I would greatly appreciate your support. Thank you.</description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Thu, 26 Mar 2026 17:26:21 +0100</pubDate>
        </item>
                <item>
            <title>Azure workloads and access nodes requirement</title>
            <link>https://community.commvault.com/share-best-practices-3/azure-workloads-and-access-nodes-requirement-11718</link>
            <description>A design consideration for Azure workloads spread across three Azure regions (A, B, C). Additionally, we have a MediaAgent (a bare-metal server with disk storage for primary backups) that we want to host in Azure in one of these regions. Our CommServe is hosted on-premises. We are looking for advice on the best deployment strategy for access nodes / proxy servers: 1. Should we deploy separate access/proxy nodes for each workload in every region? 2. Can we combine access nodes / proxy servers for multiple workloads? If yes, what would be the best possible combinations? The workloads include: MySQL Flexible Servers; PostgreSQL Flexible Servers; SQL Managed Instances; SQL Databases; Storage Accounts (File &amp;amp; Blob); Virtual Machines; VM Scale Sets. Goal: compatibility with multi-region expansion, and optimized latency, cost, and operational efficiency while ensuring secure and reliable access to each workload. Any insights, architectural patterns, or references to best practices would be greatly appreciated. </description>
            <category>Share Best Practices</category>
            <pubDate>Thu, 26 Mar 2026 12:26:05 +0100</pubDate>
        </item>
                <item>
            <title>Commvault SaaS (Metallic) Secondary Copy to IBM COS – Feasibility, Setup, and Recovery Considerations</title>
            <link>https://community.commvault.com/saas-metallic-q-a-42/commvault-saas-metallic-secondary-copy-to-ibm-cos-feasibility-setup-and-recovery-considerations-11736</link>
            <description>Since we do not have control over MediaAgents in Commvault SaaS (Metallic): Is it possible to configure a secondary copy from Commvault SaaS (Metallic) to IBM Cloud Object Storage (COS)? If yes, what are the high-level steps, prerequisites, and system requirements? In the event that recovery from the secondary copy is required, what is the recovery procedure, and are there any limitations or considerations? Backup workloads in Commvault SaaS (Metallic): Azure MS365, Azure Entra ID. </description>
            <category>SaaS (Metallic) Q&amp;A</category>
            <pubDate>Thu, 26 Mar 2026 07:30:20 +0100</pubDate>
        </item>
                <item>
            <title>Failed to authenticate with Netapp. Please validate the credentials || NDMP NAS Backup - metallic</title>
            <link>https://community.commvault.com/saas-metallic-q-a-42/failed-to-authenticate-with-netapp-please-validate-the-credentials-ndmp-nas-backup-metallic-11730</link>
            <description>Hey everyone, we are configuring a backup of a file server hosted on NAS using Commvault Metallic. We are configuring the NAS backup using NDMP, and we are able to preview (browse) the file server content in Metallic. Issue: while performing the “test connection”, we see the error: Failed to authenticate with Netapp. Please validate the credentials. We have been working with Commvault for the last 10 days and still no resolution has been provided, so I thought I would check with the community. We have validated all the prechecks (user account, access node, and ports); all are in place. We can also see that the SSL certificate is valid. Please let us know your view on this. Best regards.</description>
            <category>SaaS (Metallic) Q&amp;A</category>
            <pubDate>Wed, 25 Mar 2026 12:46:32 +0100</pubDate>
        </item>
                <item>
            <title>Exclude SQL databases from VM backups</title>
            <link>https://community.commvault.com/getting-started-51/exclude-sql-databases-from-vm-backups-8948</link>
            <description>Hi all, if I have a VM client being backed up with the VSA agent and it also has the SQL agent installed on it, how would I go about excluding the SQL databases from the VM backup? Thanks!</description>
            <category>Getting Started</category>
            <pubDate>Wed, 25 Mar 2026 09:10:59 +0100</pubDate>
        </item>
                <item>
            <title>Anyone upgrade from 11.32 to 11.40?</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/anyone-upgrade-from-11-32-to-11-40-11352</link>
            <description>We are looking to upgrade our environment from 11.32 to 11.40 LTS. Was wondering if anyone out there has been through an upgrade to 11.40 and how it went. Thanks for the input.</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Wed, 25 Mar 2026 07:11:51 +0100</pubDate>
        </item>
                <item>
            <title>Backup plans and Primary Copy to Tape</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/backup-plans-and-primary-copy-to-tape-11722</link>
            <description>I have an implementation where the customer is writing backup jobs directly to tape. They are LTO-9, so 18TB in capacity; total Full backups are 6TB. I have created multiple plans for VM and SQL backups. They run at different times and have different retentions for daily backups. What I’ve noticed is that each plan (Storage Policy) writes to its own tape, which is far from ideal for tape usage. The only thing I could think of is to not use plans, create 2 Storage Policies (to satisfy the different retentions), and then create schedules in the Commvault Java Console as we did many years ago. Is this still the best method? Thanks in advance. Mauro</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 24 Mar 2026 19:08:09 +0100</pubDate>
        </item>
                <item>
            <title>Community Terms &amp; Conditions</title>
            <link>https://community.commvault.com/community-guides-30/community-terms-conditions-11728</link>
            <description>Welcome to the Commvault Community! As a member, we encourage you to review our Terms of Use, which outline the guidelines for participation and use of this community. You can find the full Terms of Use here: https://community.commvault.com/site/terms If you have any questions, feel free to reach out to the Community team.</description>
            <category>Community Guides</category>
            <pubDate>Tue, 24 Mar 2026 13:45:22 +0100</pubDate>
        </item>
                <item>
            <title>Support for OpenNebula (KVM) Hypervisor</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/support-for-opennebula-kvm-hypervisor-10238</link>
            <description>Hi there, My name is Jim and I&#039;m a Product Manager at OpenNebula, one of the most popular cloud management platforms on the market. We are an open-source product and have a huge community of users; however, in recent months we&#039;ve had a huge influx of requests from our Enterprise Edition customers for Commvault VM backup support. I understand that this native integration will need work from the Commvault team, and perhaps you have some API standards we can implement on our side as well. We are willing to commit resources to assist and work on such an implementation, and it would be great to open a discussion with the Commvault team on this. It would also be great to hear whether Commvault has had requests from its side as well; we would love to partner on this and share information on the Commvault / OpenNebula customers who have requested such an integration. Thanks in advance, Jim</description>
            <category>Virtualization and Containers</category>
            <pubDate>Tue, 24 Mar 2026 10:27:09 +0100</pubDate>
        </item>
                <item>
            <title>Slow pruning of deleted storage pool on S3 bucket</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/slow-pruning-of-deleted-storage-pool-on-s3-bucket-11714</link>
            <description>Hi community, we recently deleted a deduplicated storage pool which was using an S3 bucket on our NetApp StorageGRID and had WORM/object lock enabled. This was nearly a week ago, and we still see almost no space being cleared on the bucket. We are seeing delete markers being set (sometimes even multiple for some reason), but older versions with actual data are not being deleted. Also, Commvault is still issuing S3 delete commands to the storage, and the delete marker count is already higher than the count of actual object versions, which seems weird. We noticed a recommendation to set up a lifecycle policy on the storage to delete non-current versions, which we did, and it is now clearing up some more space than before, but still very slowly; this is more of a storage-side issue though. Source: Configuring WORM Storage Lock. We noticed correct deletion behaviour (physical pruning) in the past on other active storage pools with S3 object lock, so my questions are: Is it necessary to set up these lifecycle policies or not (especially for our NetApp StorageGRID)? Would there be no physical pruning besides the delete markers otherwise? Also, we would be interested to know why Commvault is issuing multiple delete requests for the same object. Is this expected behaviour? We are currently on 11.40.42. Thanks in advance!</description>
            <category>Storage and Deduplication</category>
            <pubDate>Tue, 24 Mar 2026 08:57:19 +0100</pubDate>
        </item>
                <item>
            <title>Upgrade OS on file server with archiving configured</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/upgrade-os-on-file-server-with-archiving-configured-11725</link>
            <description>Hello everyone, I have a WS2016 file server with data archiving configured, and I plan to upgrade it to Windows Server 2025. Could you please recommend the best way to do the OS upgrade without stub recalls? 1) Is a Windows in-place upgrade supported? 2) Is it supported to move the virtual disks with stubs to a new, cleanly installed VM with the same name and redeploy the agent on the new VM? If yes, how do I perform it correctly? Br, Andrejs</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 24 Mar 2026 05:27:53 +0100</pubDate>
        </item>
                <item>
            <title>&quot;I am experiencing challenges locating the available guidelines for creating a CommVault plan and enabling file-search functionality for VM backups.&quot; Hoping anyone can guide me on this</title>
            <link>https://community.commvault.com/getting-started-51/i-am-experiencing-challenges-locating-the-available-guidelines-for-creating-a-commvault-plan-and-enabling-file-search-functionality-for-vm-backups-oping-anyone-can-guide-me-on-this-11721</link>
            <description>We are running CommVault version 11.40.30 with HyperScale and AGP. We would like to confirm whether individual virtual machine files can be backed up and restored at a granular file level. If so, what is the most efficient method to locate and restore internal VM files (for example, AppRecovery_ver2_2025.pdf) directly, rather than restoring the entire VM and searching for the file post-restore? Does achieving this require activation of a specific feature? If yes, please share your guidance.</description>
            <category>Getting Started</category>
            <pubDate>Mon, 23 Mar 2026 17:02:55 +0100</pubDate>
        </item>
                <item>
            <title>Credly badge?</title>
            <link>https://community.commvault.com/readiverse-academy-60/credly-badge-11724</link>
            <description>Hello, Last week I refreshed the Engineer and Expert certifications. I received my Engineer Credly badge nearly instantly but haven&#039;t received the Expert one so far. Is there any delay I should expect?</description>
            <category>Readiverse Academy</category>
            <pubDate>Mon, 23 Mar 2026 14:30:20 +0100</pubDate>
        </item>
                <item>
            <title>LiveSync Failover Failure - Error 30:436 (SYSADMIN Role Validation Failed)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/livesync-failover-failure-error-30-436-sysadmin-role-validation-failed-11639</link>
            <description>Hello Team, We are facing an issue during LiveSync failover in our Commvault Linux installation. Commvault version: 11.40.34. When we attempt to perform the failover, the CvFailoverLogShipping job gets triggered but fails with the following error: Error code: [30:436]. Description: Instance validation failed: Insufficient SQL Server role. To configure the instance, the user [sqladmin_cv] must be a member of the SYSADMIN SQL Server role. Source: rhel1. Process: JobManager. Troubleshooting performed: Verified the SQL login configured in Commvault. Assigned the required permissions, including the SYSADMIN role, to the SQL user. Revalidated credentials after updating permissions. Verified permissions from the standby node as well; permissions appear consistent on both sides. Created a new user and assigned all required permissions. Despite this, the instance validation continues to fail with the same error. Requesting assistance to identify why SYSADMIN role validation is failing during LiveSync failover. Please let us know what additional logs or diagnostic information are required. Thanks,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 23 Mar 2026 06:41:16 +0100</pubDate>
        </item>
                <item>
            <title>RPM Group field for native packages</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/rpm-group-field-for-native-packages-11715</link>
            <description>Hi guys, we ran into an issue deploying an RPM created from the 11.40.42 base, and I&#039;m wondering if anyone else has hit it. If you run “rpm -qip ournewcvlt.rpm” it returns the following: Name: ournewcvlt.11.40.42.Instance001; Version: 11.4000.1372.42; Release: 1.el8; Architecture: x86_64; Install Date: (not installed); Group: Applications/Archiving. The last field there, “Group”, is not a Unix group in the traditional sense, but it appears to conflict with other aspects of our repo environment. The question is: there doesn&#039;t seem to be any control over this, and we wonder if it is possible to modify it. I found “howto: repack RPM for nicer name or add extra files” on the Community, but that doesn&#039;t exactly speak to it. Does anyone know whether, if we grab the blah.src.rpm file before it gets cleaned up, we can modify the Group? Essentially it affects dependencies for a couple of other packages. Thanks</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 20 Mar 2026 19:36:45 +0100</pubDate>
        </item>
                <item>
            <title>After upgrade to 11.40: how to migrate SharePoint Online Classic agent to Command Center</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/after-upgrade-to-11-40-how-to-migrate-sharepoint-online-classic-agent-to-command-center-11719</link>
            <description>Hi everyone, after upgrading a customer environment from Commvault 11.32 to 11.40 (LTS), we ran into an issue with the SharePoint Online Classic agent. Environment &amp; situation: Commvault upgraded from 11.32 → 11.40. The customer is still using the SharePoint Online (Classic) agent, configured via the Java / CommCell Console. SharePoint Online is not yet configured in Command Center. After the upgrade, the Classic agent can no longer be used for new protection jobs, so we are planning to configure SharePoint Online in Command Center going forward. Before doing so, I would like to clarify the impact on existing (legacy) SharePoint Online backups created with the Classic agent: Are those legacy backups still restorable as-is using the Classic client / Java Console after new protection is moved to Command Center? Or is there any migration or conversion step required to make the legacy data restorable or visible in Command Center? Thanks and best regards, Andreas</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 20 Mar 2026 13:25:09 +0100</pubDate>
        </item>
                <item>
            <title>Auto-start services when OS starts and LiveSync</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/auto-start-services-when-os-starts-and-livesync-11699</link>
            <description>Hi, In the CommServe LiveSync (11.40) environment, the “Auto-start services when OS starts” option is grayed out on the active server. Only after entering the following commands can the checkbox be selected: gxadmin.exe -consoleMode -enableautostartoption true; gxadmin.exe -consoleMode -setsvcstartmode Automatic ... but after some time the checkbox becomes inactive again. The following article states that this option is managed by failover: https://documentation.commvault.com/11.40/software/enabling_or_disabling_commserve_livesync.html “After you enable the CommServe LiveSync feature, the Auto-start services when OS starts option in process manager is disabled and controlled by the CSLiveSync feature.” During failover, it disables all services on one CommServe and enables them on the other, then does the opposite during failback. It also reverses LiveSync replications and a few other things. However, shouldn&#039;t this checkbox be checked in such a normal scenario? Shouldn&#039;t it be checked and grayed out (preventing manual changes)? The description from the documentation refers more to the fact that this checkbox is “grayed out”, not to whether it should be checked or unchecked. Is this a bug or intentional behavior? Regards, Michał</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 20 Mar 2026 13:09:44 +0100</pubDate>
        </item>
                <item>
            <title>Question about the &quot;Index Server - Active and Inactive Clients association&quot; Report</title>
            <link>https://community.commvault.com/share-best-practices-3/question-to-index-server-active-and-inactive-clients-association-report-11707</link>
            <description>The name and description suggest that the output will display all index servers and their associated clients. My tests have shown that the report shows index servers for Solr clients, but not for &quot;regular&quot; clients (v1 or v2). I expected to see all active &quot;non-Solr index servers&quot; listed under &quot;Active Standalone Clients&quot;, but this group always shows up as empty for me. What am I doing wrong?</description>
            <category>Share Best Practices</category>
            <pubDate>Fri, 20 Mar 2026 09:03:43 +0100</pubDate>
        </item>
                <item>
            <title>Move Mount Path</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/move-mount-path-11648</link>
            <description>Hello, I am in a situation where I need to move a mount path to another mount path, and I would like the destination to contain only the backup jobs from the moved path. Is there a way to prevent new backup jobs from being written to the new mount path while the move is running? Something like the steps below: Set &quot;Disable mount path for new data&quot; on the destination before initiating the move. Leave the setting untouched while the job runs. Manually clear the flag after the move completes (since the job only auto-clears it on the source, not the destination). BR, Henke</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 20 Mar 2026 08:02:30 +0100</pubDate>
        </item>
                <item>
            <title>CommServe needs at least 10 minutes to reboot due to D:\Commvault\ContentStore\Base\StopProc.vbs</title>
            <link>https://community.commvault.com/share-best-practices-3/commserve-needs-at-least-10-minutes-to-reboot-due-to-d-commvault-contentstore-base-stopproc-vbs-10587</link>
            <description>Hello, Commvault 11.32.x is running on a Windows 2019 server. We noticed that when we need to shut down or reboot our CommServe, for Windows Updates for example, the whole process takes at least 10 minutes, more often 15 to 20 minutes… It is difficult to justify that much time for every reboot. The shutdown process stays “stuck” for several minutes on the step: “Shutting down of service: Group Policy Client”. A case was opened with Microsoft support, and their investigation pointed out that this huge amount of time is due to the execution of Commvault&#039;s D:\Commvault\ContentStore\Base\StopProc.vbs. If I am not wrong, this file is supposed to give current jobs enough time to complete smoothly. MS support asked if it would be possible to temporarily “disable” the execution of this file, in order to test whether that solves the issue. → Is somebody else facing the same Commvault behavior at shutdown? → Knowing that each time we shutdown/reboot the CommServe we make sure that no jobs are running and the scheduler is disabled, what do you think about disabling the execution of this file? Thanks for your thoughts, ideas, and points of view...</description>
            <category>Share Best Practices</category>
            <pubDate>Fri, 20 Mar 2026 06:30:03 +0100</pubDate>
        </item>
                <item>
            <title>Webinar | ResOps: The Next Evolution of Enterprise Resilience (on demand)</title>
            <link>https://community.commvault.com/news-events-7/webinar-resops-the-next-evolution-of-enterprise-resilience-on-demand-11710</link>
            <description>Join Phil Goodwin (IDC) and Chris Mierzwa (Commvault) on April 8 as they introduce ResOps (Resilience Operations), a modern leadership framework for operationalizing resilience as an ongoing, enterprise discipline. Rather than treating resilience as fragmented tools or isolated recovery plans, ResOps reframes it as a repeatable, accountable operating model that aligns security, IT, and business stakeholders around shared risk and continuity objectives. Learn more about: Why operational resilience is emerging as a defining indicator of enterprise risk maturity. How cross-functional execution strengthens business continuity as a competitive differentiator. How modern CISOs are earning and sustaining board credibility through resilience leadership. What measurable executive impact looks like in a ResOps-driven organization. Click here to register for the webinar.</description>
            <category>News &amp; Events</category>
            <pubDate>Fri, 20 Mar 2026 04:40:40 +0100</pubDate>
        </item>
                <item>
            <title>AWS - Cross-DSP Backup Copies (using IntelliSnap)</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/aws-cross-dps-backup-copies-using-intellisnap-11717</link>
            <description>Hi Commvault people, I am looking for feedback on Backup Copy throughput. We have a fairly new Commvault-AWS implementation. We have a few different DSPs, with clients spread across several of them. There is only one Media Agent, but we use proxies in the other DSPs. For example, DSP1 has the MA and its attached library. DSP2 has a bunch of servers but only a VSA proxy; same for DSP3 and DSP4: clients and a VSA proxy but no Media Agent. So what we basically have is a cross-DSP configuration. The main Media Agent in DSP1 hosts the library. Clients are snapped in all of the other DSPs using the VSA proxy, which then streams the contents to the MA and its associated library. This is fully supported, and it seems to work well. However, I have noticed that although the Snap works fairly quickly (15 mins for a few servers), the Backup Copy is slower. It will take a couple of hours to transfer a few hundred GB, averaging only 330-ish GB/hr. So my question: is anyone else running a cross-DSP backup solution, and what rates of throughput are you seeing on your Backup Copy? I am hoping to avoid dropping in any additional hardware (Media Agents), but I am concerned that as we add a lot more workloads into the DSPs, we will run into problems with slow Backup Copies, which would introduce risk for the customer.</description>
            <category>Virtualization and Containers</category>
            <pubDate>Fri, 20 Mar 2026 03:00:13 +0100</pubDate>
        </item>
                <item>
            <title>System State – Active Directory component appears empty during restore (why?)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/system-state-active-directory-component-appears-empty-during-restore-why-11708</link>
            <description>Hi everyone, When restoring a Domain Controller&#039;s System State backup (taken with the Windows File System agent), the Active Directory component appears empty and shows the message “Not allowed to browse system state components.” I&#039;m aware that these days granular AD object recovery requires the licensed Active Directory agent, so this is not about licensing. My question is more technical and focused on System State: Why does the System State AD component show as empty: is this purely by design, or does Commvault handle AD data differently? During a full System State restore, is the AD database automatically restored even though it&#039;s not visible in the browse view? Has this behavior always been like this, or did older Commvault versions allow limited browsing of AD components within System State? I&#039;m trying to understand whether this is an intentional design constraint or something that regressed over time. Thank you in advance, Nikos</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 19 Mar 2026 23:50:31 +0100</pubDate>
        </item>
                <item>
            <title>Commvault only using one of 4 available drives</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/commvault-only-using-one-of-4-available-drives-11698</link>
            <description>I recently came to a client that had a large backlog of aux copy data and problematic tape libraries. After some remediation I got the tape copy running on 3 drives while awaiting replacements for a couple of broken ones. The replacements were done, but since the job was resumed the tape copy will only select one drive. There are 2 Media Agents, each with a tape library attached; one has 3 drives and the other has 2, so 5 in total. The storage policy has the 2 libraries in the path and 5 streams with multiplexing x 25. The drive pools on both libraries are set to use all drives. I cannot understand why it will only use a single drive, since the backup job was resumed from a point where it was using 3 drives. It now has 5 working drives, which can all be seen in Commvault and all work fine in the tape libraries&#039; GUI tests. The Commvault version is v11.32.115.</description>
            <category>Storage and Deduplication</category>
            <pubDate>Wed, 18 Mar 2026 15:41:28 +0100</pubDate>
        </item>
                <item>
            <title>Are the free certificates the same as from ILT?</title>
            <link>https://community.commvault.com/readiverse-academy-60/are-the-free-certificates-the-same-as-from-ilt-11711</link>
            <description>Hello, Just a simple question: do the free e-learning courses with exams end with exactly the same certificate as the paid instructor-led ones? For example: Commvault Engineer – CommCell Console ILT - $3,000, and Certified Engineer - CommCell Console (Exam plus Course Materials) - free. I mean, it would be great, but a bit surprising.</description>
            <category>Readiverse Academy</category>
            <pubDate>Wed, 18 Mar 2026 15:30:31 +0100</pubDate>
        </item>
                <item>
            <title>Switching from on-premises backup storage to AWS S3</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/switching-from-on-premises-backup-storage-to-aws-s3-11635</link>
            <description>Hello everyone, I have a question. Since we don&#039;t want to renew the maintenance contract for our on-premises backup storage, we&#039;re now wondering whether we can run the entire backup data set, which is 89 TB, on AWS S3. Has anyone had any experience with this? What about read accesses during data verification and data aging? What kind of costs should we expect? Commvault mentions a value of 20 TB for data verification jobs, for example. However, I don&#039;t think that&#039;s realistic, since the data verification job only reads blocks. To reduce read costs from S3, there&#039;s supposedly the option of deploying a Media Agent in AWS to eliminate these costs. Does anyone have any experience with this? I&#039;d like to get a rough idea of the ongoing costs we can expect if we were to go this route. I would really appreciate any feedback. Regards, Thomas</description>
            <category>Storage and Deduplication</category>
            <pubDate>Wed, 18 Mar 2026 13:42:39 +0100</pubDate>
        </item>
                <item>
            <title>Quicker application of network config</title>
            <link>https://community.commvault.com/share-best-practices-3/quicker-application-of-network-config-11712</link>
            <description>I’ve noticed that when installing the File and MSSQL agents on a client, the network configuration is applied more quickly if you add the default file subclient to a storage policy before adding the SQL subclients.</description>
            <category>Share Best Practices</category>
            <pubDate>Wed, 18 Mar 2026 12:40:09 +0100</pubDate>
        </item>
                <item>
            <title>Classic Reports (JAVA Console) in CV11.40?</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/classic-reports-java-console-in-cv11-40-11709</link>
            <description>CV 11.32.132. Hi, we currently use Classic Reports (Java Console) for billing our customers and have not yet been able to create a replacement using Web Reports (e.g., the Chargeback Report). We need to perform a CommCell upgrade from 11.32 (LTS) to 11.40 (LTS) with a hardware replacement of the CommServe (to Windows Server 2025 Std). Can any of you, based on your experience, tell us whether we will still be able to use the Classic Reports and the Java Console as usual after upgrading the CommCell to 11.40? Regards, Alex</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Wed, 18 Mar 2026 08:02:53 +0100</pubDate>
        </item>
                <item>
            <title>Commvault and MS Fabric protection</title>
            <link>https://community.commvault.com/saas-metallic-q-a-42/commvault-and-ms-fabric-protection-11706</link>
            <description>Looking to find out whether CV can protect MS Fabric as an app, the way it protects EXO, SPO, OneDrive, Teams, etc., or whether it has to do things differently, or cannot protect it at all?</description>
            <category>SaaS (Metallic) Q&amp;A</category>
            <pubDate>Tue, 17 Mar 2026 07:31:47 +0100</pubDate>
        </item>
                <item>
            <title>VMWare Conversion to HPE VME</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/vmware-conversion-to-hpe-vme-11704</link>
            <description>Hello Team, Do we have any roadmap or update regarding VMware supportability on HPE VME? I reviewed the compatibility list for version 11.32, and I do not see VMware listed as supported. Could you please share any updates on the internal development roadmap for VMware support on HPE VME? Regards, Srini</description>
            <category>Virtualization and Containers</category>
            <pubDate>Tue, 17 Mar 2026 06:28:25 +0100</pubDate>
        </item>
                <item>
            <title>Promote the Synchronous Copy to the Primary Copy</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/promote-the-synchronous-copy-to-the-primary-copy-11703</link>
            <description>Hi, Could you possibly assist me with this? Commvault 11.32.119. Is there a way to promote a Synchronous Copy to the Primary Copy using Command Center?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 17 Mar 2026 05:43:41 +0100</pubDate>
        </item>
                <item>
            <title>How to change the Backup Destination using Plans</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/how-change-backup-destination-using-plans-11702</link>
            <description>Hi, Can you help me with this real quick? Commvault 11.32.119. How can I change the Backup Destination of Plans using Command Center?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 17 Mar 2026 05:15:36 +0100</pubDate>
        </item>
                <item>
            <title>No finished job within SLA period</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/no-finished-job-within-sla-period-11700</link>
            <description>Hello, our SLA status is at only 60% after upgrading to 11.40 back in January. That means about 40% show as “missed”, and when I check the SLA report it says “no finished job within SLA period” for about 500 clients. After checking them, I can say that there were indeed jobs running in the past few days for those clients (including full backups). The majority of those clients don&#039;t have a file system backup configured, but rather SQL, EDB, or VM backups. Does anybody know where this issue may come from? Thanks in advance</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 16 Mar 2026 12:03:21 +0100</pubDate>
        </item>
            </channel>
</rss>
