<?xml version="1.0"?>
<rss version="2.0">
    
    <channel>
        <title>Join the conversation</title>
        <link>https://community.commvault.com</link>
        <description>On the Forum you can ask questions or take part in discussions.</description>
                <item>
            <title>New Readiverse Academy Certification Tiers Are Here</title>
            <link>https://community.commvault.com/readiverse-academy-60/new-readiverse-academy-certification-tiers-are-here-11844</link>
            <description>We’re excited to introduce a new, tiered certification path in Readiverse Academy, designed to help Commvault learners build the skills they need to operate, secure, and recover with confidence. As Commvault Cloud environments continue to evolve, teams need clear, role-based learning paths that support real-world responsibilities — from platform administration and workload protection to cyber resilience and recovery readiness. The new Readiverse Academy certification structure gives learners a guided path from foundational knowledge to advanced cloud engineering expertise. This is not a reset; it is a clearer roadmap for what to learn next.

What’s New
The programme now includes four certification tiers:
- Commvault Cloud Practitioner – foundational platform and cyber resilience knowledge
- Commvault Cloud Specialist – expanded operational, workload, and security depth
- Commvault Cloud Professional – advanced recovery and workload expertise, including Cloud Rewind or Cleanroom Recovery coursework
- Commvault Cloud Expert – advanced cloud engineering and resilience leadership, including Cloud Engineer coursework, advanced feature courses, Cloud Rewind, and Cleanroom Recovery

Each tier builds on core skills across platform knowledge, cyber resilience, and workload or feature expertise. Learners can complete individual courses or follow the required path toward certification.

Already Completed Readiverse Academy Courses?
If you have already completed courses or earned certifications in Readiverse Academy, your progress still matters. Existing certifications continue to reflect your accomplishments with previous Commvault product releases. The new certification tiers are aligned to Commvault’s expanded cyber resilience portfolio across Commvault Software, Commvault SaaS, and hybrid environments. There is no direct progression from previous certification tracks into the new programme, but learners who are already familiar with Readiverse Academy are well positioned to continue building toward the new tiers.

Who Should Explore the New Courses?
These certifications are designed for learners working across Commvault environments, including:
- Platform administrators
- Security specialists
- Cloud engineers
- Workload owners
- Teams responsible for data protection, recovery, and cyber resilience

Courses are available for self-paced learning, with select courses also offered in instructor-led formats.

How to Get Started
Visit readiverse.commvault.com to explore the course catalogue and certification paths. A few recommended starting points:
- New to Commvault Cloud? Start with the Commvault Cloud Administrator course.
- Focused on recovery readiness? Explore Cyber Resilience, Cloud Rewind, or Cleanroom Recovery.
- Supporting specific workloads? Browse the available workload and feature courses.

For Our Partners
As a Commvault partner, you’ll build the same validated skills across platform administration, cyber resilience, and recovery, so you can confidently deliver, secure, and recover environments at scale. Your certification path aligns to Administrator and Cyber Resilience Engineer roles, with consistent course content across workloads, security, Cloud Rewind, and Cleanroom. Course titles may vary slightly, and partner-exclusive modules are included to support your role. As always, access Readiverse Academy through the Partner Portal to start or continue your certification journey and track progress.</description>
            <category>Readiverse Academy</category>
            <pubDate>Wed, 13 May 2026 19:31:01 +0200</pubDate>
        </item>
                <item>
            <title>Commvault backed-up data migration from on-premises to cloud</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-backed-up-data-migration-from-on-premises-to-cloud-11846</link>
            <description>Hi Team,

We have a requirement where a company is separating from its parent organization and establishing a newly created Commvault environment. As part of this transition, they would like to migrate their long-term retention backup data from the existing on-premises environment to the cloud within the new Commvault setup.

Please confirm whether this migration is supported and advise on the recommended approach to perform it. Additionally, please provide the following details:
- Prerequisites and requirements
- Supported migration methods
- High-level implementation steps
- Any limitations or considerations that should be taken into account

Your guidance on this would be appreciated.

Thanks,
Rahul Raina</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 13 May 2026 18:41:53 +0200</pubDate>
        </item>
                <item>
            <title>How to ADD (implement) Commvault Cloud (Metallic)</title>
            <link>https://community.commvault.com/saas-metallic-q-a-42/how-to-add-implement-commvault-cloud-metallic-11819</link>
            <description>Hello,

I have a stable working environment on Commvault 11.40. Now I want to send a few of my aux copies directly into the cloud (Commvault Cloud SaaS). I should add that I am not completely new to the cloud area, as we already use Azure Blob as a library.

Back to the subject: so far I have found that this works on a capacity license. If I understood correctly, I pay for the license and Commvault provides the space/storage, MediaAgents, etc. What I have not found is how this looks from the current on-premises CV environment: is there any instruction on how to add the cloud environment to the existing on-prem environment (how to add the library and MediaAgents, and then storage policy copies based on the previous ones, etc.)? Also, is there any documentation on how that SaaS environment is managed from the Commvault side? Is there a web page I connect to, or is it available in Command Center / the CommCell Console?

In short: how do I move from an on-prem environment to a hybrid environment?</description>
            <category>SaaS (Metallic) Q&amp;A</category>
            <pubDate>Wed, 13 May 2026 17:41:07 +0200</pubDate>
        </item>
                <item>
            <title>Digital Sovereignty Decoded: A Framework for Strategy, Solutions, and Smart Tradeoffs | Webinar, May 27</title>
            <link>https://community.commvault.com/news-events-7/digital-sovereignty-decoded-a-framework-for-strategy-solutions-and-smart-tradeoffs-webinar-may-27-11849</link>
            <description>Digital sovereignty is one of the most talked-about — and least understood — concepts in enterprise IT today. Organizations face mounting pressure from regulators, boards, and customers to demonstrate control over their data, infrastructure, and supply chains. But where do you start, and how far do you need to go?Join Commvault’s Pranay Ahlawat, Alex Zinin, Jakub Lewandowski and Darren Thomson on May 27 in your region to cut through the confusion with a practical framework for building a digital sovereignty strategy that fits your organization&#039;s reality. They’ll explore the core pillars of sovereignty, why it&#039;s not a binary switch but a sliding scale of decisions, and how industry regulations and business context shape the right tradeoffs. We&#039;ll also show how Commvault and our partner ecosystem are putting these principles into practice. Click here to register for the webinar  </description>
            <category>News &amp; Events</category>
            <pubDate>Wed, 13 May 2026 17:25:47 +0200</pubDate>
        </item>
                <item>
            <title>Automatic synthetic full scheduling with multiple schedules</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/automatic-synthetic-full-scheduling-with-multiple-schedules-11847</link>
            <description>Hello community, we currently have the following situation with the synthetic fulls in our plans.

Copies:
- Primary - 15 days / deduped
- Sync copy to tape / 30 days
- Selective copy to tape (monthly fulls) / 1 year

Synthetic fulls:
- Automatic every 15 days
- On the 1st of every month for extended retention

Previously I reduced the automatic synthetic full interval to 15 days to better align with the retention and therefore get better aging regarding the cycles. We will also enable WORM on the primary in the near future, so I want to leave it this way to also align with DDB sealing.

My question is about the synthetic full scheduling: because of the 15-day schedule, I have now encountered a situation where a synthetic full ran on the 29th and then shortly after on the 1st because of extended retention. This of course causes an extensive amount of data on the sync copy, as the full backups get expanded onto the tapes.

Should I sacrifice the automatic schedule in favor of a manual synthetic full backup (e.g. every month on the 15th), or are there other ways to avoid these situations with the automatic schedule?

Thanks in advance for any suggestions.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 13 May 2026 14:14:08 +0200</pubDate>
        </item>
                <item>
            <title>Commvault integration with Morpheus</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/commvault-integration-with-morpheus-9618</link>
            <description>Hi All,

I&#039;m looking for technical resources and experience in integrating Commvault with Morpheus. What is the integration architecture? Is it possible to match tenants between Morpheus and Commvault? If so, how?

So far, I&#039;ve found only one link on the subject at Morpheus: Commvault — Morpheus Docs documentation (morpheusdata.com). In the integration section of this link, it says to add the “IP or Hostname of the CommServe”, but I&#039;m under the impression that it refers to the Web Server (in our architecture, the Web Server is not hosted on the CommServe).

Thanks,
Luc</description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Tue, 12 May 2026 16:50:34 +0200</pubDate>
        </item>
                <item>
            <title>Set-up of a CommandCenter in DMZ</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/set-up-of-a-commandcenter-in-dmz-11824</link>
            <description>Hi Vaulters,

Hope everyone is doing well. We currently have an Exchange On-Premises user mailbox archiving solution implemented within our environment using Commvault. All components of our CommCell infrastructure (CommServe, MediaAgents, Web Server, etc.) are hosted within the internal network, and the solution is operating as expected. End-users connected internally are able to seamlessly access and retrieve their archived emails (Recall functionality) without any issues.

However, we now have a requirement to provide external access to this archived email content for a subset of users. These users connect through a VPN, which places them within the DMZ network segment rather than the internal network. We understand that it is possible to deploy a dedicated Command Center instance within the DMZ to handle external access use cases. That said, we are unclear about the complete architecture and configuration required to make this setup fully functional. Specifically, we would like clarification on the following points:
- How should the Command Center deployed in the DMZ be integrated with the existing Web Server used for Recall operations located in the internal network?
- Is there a need to configure any specific URL mappings, reverse proxy rules, or redirection mechanisms between the DMZ and internal components?
- Are there any additional settings, network requirements (ports, firewall rules), or security considerations that must be addressed to enable this communication securely?
- Does this setup require a dedicated Web Server in the DMZ, or can the internal Web Server be reused via appropriate network configuration? From my understanding, there is no need for a Web Server in the DMZ; the Command Center is sufficient and will be linked with the internal one.

Despite reviewing the available documentation, we have not found clear guidance on this particular architecture or implementation scenario. Any detailed explanation, best practices, or reference architecture would be greatly appreciated.

Kind regards</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 12 May 2026 16:20:16 +0200</pubDate>
        </item>
                <item>
            <title>Issue about the number of partitions created automatically by Commvault 11.40</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/issue-about-the-number-of-partitions-created-automatically-by-commvault-11-40-11842</link>
            <description>The number of partitions created is determined by the number of DDB MediaAgents selected during configuration:
- 1 MediaAgent selected: two partitions are created on the single MediaAgent.
- 2 MediaAgents selected: two partitions are created on each MediaAgent, for a total of four partitions.
- 3 to 6 MediaAgents selected: a total of six partitions are created and distributed evenly across the selected MediaAgents.
- More than 6 MediaAgents selected: the first six MediaAgents are used to create the six partitions; the remaining MediaAgents are assigned the Deduplication Database (DDB) role only.

The software automatically assigns the deduplication database (DDB) path on the MediaAgent without requiring user input. (Link: Configuring Disk Storage)

I couldn&#039;t quite understand the final part, where more than 6 MediaAgents are selected. Can someone explain it in other words?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 12 May 2026 15:14:40 +0200</pubDate>
        </item>
                <item>
            <title>Adding additional Media Agent servers</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/additing-additional-media-agent-servers-11843</link>
            <description>Hello Team,

Currently we have two physical media agent servers, and they hold only the index cache; the deduplication feature is enabled on the target end (a dedup appliance). Now we want to add two new media agent servers, and after a few days we will remove the old media agent servers. Should we move the indexing data from the old media agents to the new media agent servers?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 11 May 2026 18:35:40 +0200</pubDate>
        </item>
                <item>
            <title>Active Directory Permissions to Back Up Group Policy Objects</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/active-directory-permissions-to-back-up-group-policy-objects-10524</link>
            <description>Hello!

Having just upgraded to 2024E, specifically 11.36.41, I notice the Active Directory agent now supports backups for GPOs, as documented here: Changes in Commvault Platform Release 2024E. Great. However, the permissions required are not particularly helpful:

“Permissions to Back Up Group Policy Objects via PowerShell: The account must have the necessary permissions to back up GPOs using PowerShell cmdlets. By default, members of the Remote Management Users group possess these permissions.”

My question is: if you don’t want the account to be a member of “Remote Management Users” or admin groups, what granular permissions can be set on the account to still achieve the backup?

Full error:
-----
Currently, whilst the backup is completing for AD as it always has, it is now completing but with the error “Failed to process group policy object”.
Error Code: [28:548]
Description: Failed to process group policy object. Please verify following: (1) User account configured in Active Directory connection settings is member of Remote Management Users group or has administrator permissions. (2) User account configured in Active Directory connection settings has read and write permission to job results directory.
-----</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Mon, 11 May 2026 16:19:21 +0200</pubDate>
        </item>
                <item>
            <title>Enabling Arlie</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/enabling-arlie-11841</link>
            <description>Team,

We have Commvault on-prem at version 11.40.6 and are planning to enable Arlie. Please let me know whether we need to buy a separate license or whether it is included in the capacity-based license. I also want to understand the steps to enable Arlie; the documentation I checked (Enabling Arlie) is very high-level.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 11 May 2026 13:34:53 +0200</pubDate>
        </item>
                <item>
            <title>ErrorCode 9517 &quot;ClientInfo is not Complete&quot;. REST API restore, appTypeId 13</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/errorcode-9517-clientinfo-is-not-complete-restapi-restore-apptypeid-13-11833</link>
            <description>I&#039;m able to restore data between Windows machines using the API.

--- XML WINDOWS ---
&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;
&amp;lt;DM2ContentIndexing_RetrieveToClientReq mode=&quot;2&quot; serviceType=&quot;1&quot;&amp;gt;
  &amp;lt;userInfo userGuid=&quot;8a06f467-b2d2-4fbc-905f-c8033d8f83de&quot;/&amp;gt;
  &amp;lt;header&amp;gt;
    &amp;lt;destination clientId=&quot;6749&quot; subclientId=&quot;9151&quot; inPlace=&quot;0&quot;&amp;gt;
      &amp;lt;destPath val=&quot;V:\Arquivos&quot;/&amp;gt;
    &amp;lt;/destination&amp;gt;
    &amp;lt;filePaths val=&quot;F:\temp\teste.txt&quot;/&amp;gt;
    &amp;lt;srcContent subclientId=&quot;9150&quot; clientId=&quot;6748&quot; instanceId=&quot;1&quot; backupSetId=&quot;7440&quot; appTypeId=&quot;33&quot;/&amp;gt;
  &amp;lt;/header&amp;gt;
  &amp;lt;advanced restoreDataAndACL=&quot;1&quot; restoreDeletedFiles=&quot;0&quot; unconditionalOverwrite=&quot;1&quot;/&amp;gt;
&amp;lt;/DM2ContentIndexing_RetrieveToClientReq&amp;gt;

But I am not having the same success with the NAS (appTypeId 13):

&amp;lt;header&amp;gt;
  &amp;lt;srcContent subclientId=&quot;488&quot; clientId=&quot;131&quot; instanceId=&quot;1&quot; backupSetId=&quot;141&quot; appTypeId=&quot;13&quot;/&amp;gt;
  &amp;lt;filePaths val=&quot;/__VOLUME__/PFS02/Teste/teste.csv&quot;/&amp;gt;
  &amp;lt;destination clientId=&quot;2181&quot; subclientId=&quot;3340&quot; inPlace=&quot;0&quot;&amp;gt;
    &amp;lt;destPath val=&quot;/svm_nasrs/Arquivo/teste&quot;/&amp;gt;
  &amp;lt;/destination&amp;gt;</description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Sun, 10 May 2026 08:45:47 +0200</pubDate>
        </item>
                <item>
            <title>Detect newly registered clients on CommServe</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/detect-newly-registered-clients-on-commserve-11839</link>
            <description>Hi All, I&#039;m looking for a way to detect newly registered clients on CommServe that haven&#039;t had policy configuration done.Perhaps a REST API, qcommand, or a database query? What criteria should I use for detection? Should they not have had a backup for a few days, or should they not have policy assignments, etc.? I want to use this to feed an automation system, so sharing use cases or best practices would be helpful.Best Regards.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Sat, 09 May 2026 05:50:37 +0200</pubDate>
        </item>
                <item>
            <title>Storage Policy Copy type</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/storage-policy-copy-type-11840</link>
            <description>What fields/tables are used to decide the copy type of a StoragePolicyCopy: “snap primary”, “primary”, “selective”, “synchronous”, …</description>
            <category>Storage and Deduplication</category>
            <pubDate>Sat, 09 May 2026 03:06:41 +0200</pubDate>
        </item>
                <item>
            <title>Restore Teams planner items</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/restore-teams-planner-items-11837</link>
            <description>Hi everyone,

I have a question regarding the restore of Teams planners. A customer tried to restore the planner of a Teams channel, but the restored planner is empty. When restoring to disk, the folder structure is created to mirror the teams\channel\planner path, but the resulting folders are empty.

I already found that they did not have planner items enabled in the Office 365 plan, which was why the original restore was empty. I enabled this option last week, and since that time backups have been running without issue. I have just browsed the latest backup of multiple teams with planners, but all planners are showing as empty.

Is there anything else I need to do to be able to restore the planner items? The Commvault environment is running 11.40.47.

Kind regards,
Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 08 May 2026 12:06:54 +0200</pubDate>
        </item>
                <item>
            <title>CommServe rebuild/upgrade path from 11.32 - best practice?</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commserve-rebuild-upgrade-path-from-11-32-best-practise-11311</link>
            <description>I have a small CommCell with a CommServe on 11.32.115 running on a build of Windows Server that needs a refresh. I could stand up a new VM with the same IP and hostname and do a DR DB restore, and I guess I&#039;d be back up and running quickly with a clean build?

But I noticed Commvault now offers a Linux CommServe and a pre-configured OVA appliance, so I&#039;m wondering if this is something I should be looking at instead? The current environment doesn&#039;t use “plans”; it&#039;s all traditional storage policies and copies, with schedule policies to run jobs out of business hours. I don&#039;t tend to use Command Center; I tend to use the Java UI, but I guess that&#039;s out of habit.

It&#039;s a pretty small and simple environment with just a few Windows file agents and a NAS agent pulling some UNC paths in. What would people suggest, please?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 08 May 2026 12:00:39 +0200</pubDate>
        </item>
                <item>
            <title>Openstack VM restore with API not supported</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/openstack-vm-restore-with-api-not-supported-11835</link>
            <description>When I try to make a complete restore of an OpenStack instance, the response is: &quot;errorMessage&quot;: &quot;The particular hypervisor restore is not supported in API&quot;.

The CommServe version is 11 SP36. The request is:

POST {{serverurl}}/V4/vm/the-correct-guid/restore
accept: application/json
content-type: application/json
Authtoken: {{auth_token}}

{
  &quot;powerOnVmAfterRestore&quot;: true,
  &quot;overwriteVM&quot;: true,
  &quot;inPlaceRestore&quot;: true,
  &quot;fromtime&quot;: &quot;2026-05-04T23:30:36&quot;,
  &quot;totime&quot;: &quot;2026-05-04T23:36:26&quot;,
  &quot;vmDestinationInfo&quot;: {
    &quot;openstack&quot;: {
      &quot;openstackInstanceInfoList&quot;: [
        {
          &quot;sourceInstanceGuid&quot;: &quot;the-correct-guid&quot;,
          &quot;InstanceName&quot;: &quot;realname&quot;
        }
      ]
    }
  }
}

This instance is backed up successfully, and I can ask for details:

GET {{serverurl}}/v4/virtualmachines/the-correct-guid
accept: application/json
Authtoken: {{auth_token}}

{
  &quot;vmDetails&quot;: {
    &quot;displayName&quot;: &quot;realname&quot;,
    &quot;summary&quot;: {
      &quot;hypervisor&quot;: {
        &quot;id&quot;: 747,
        &quot;name&quot;: &quot;hypervisorname&quot;
      },
      &quot;vmGroup&quot;: {
        &quot;id&quot;: 931,
        &quot;name&quot;: &quot;Test-OpenStack&quot;
      },
      &quot;host&quot;: &quot;nova&quot;,
      &quot;os&quot;: &quot;Any&quot;,
      &quot;vendor&quot;: &quot;OPENSTACK&quot;,
      &quot;vmSize&quot;: 53687091200,
../..</description>
            <category>Virtualization and Containers</category>
            <pubDate>Fri, 08 May 2026 10:52:49 +0200</pubDate>
        </item>
                <item>
            <title>Gmail Archive/Backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/gmail-archive-backup-11788</link>
            <description>Hello, Can Commvault support Gmail Backup or Archive? </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 08 May 2026 09:57:15 +0200</pubDate>
        </item>
                <item>
            <title>Exchange Online backup, degraded performance, multiple tenants</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/exchange-online-backup-degraded-performance-multiple-tenants-11838</link>
            <description>(CV 11.40.47) We have several Exchange Online backups running for our customers, and each has their own little slice of heaven dedicated to them in our datacenter. They have their own dedicated access nodes, so they are isolated from each other on our end.

We&#039;re seeing Exchange Online backup performance degrading across the board. Because the customers all have their own independent MS365 tenants, it would be strange if this was a Microsoft throttling issue hitting all of them at the same time, no? I am not seeing this performance hit on their Teams, SharePoint, or OneDrive backups.

Anyone else seeing degraded performance with Exchange?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 08 May 2026 06:02:28 +0200</pubDate>
        </item>
                <item>
            <title>Problem with filtering by time in views in Command Center</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/problem-with-filtering-by-time-in-views-in-command-center-11713</link>
            <description>Hi all,

I have a problem with the functionality of the filters in the views in Jobs - Job history, when I want to filter by Start or End.
- When I use the condition &quot;Relative&quot; (1 day), the list of jobs is displayed.
- When I use the condition &quot;Between&quot; (yesterday-today), the error &quot;Something went wrong&quot; appears.
- When I use the condition &quot;Older than&quot; (anything), &quot;No results found&quot; appears.

There is a similar problem when using the time condition in the view in Protect - Virtualization - Virtual machines. It behaves the same in 11.40 and 11.42. Am I doing something wrong? Or is this a known issue?

Regards,
Lubos</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 07 May 2026 17:39:37 +0200</pubDate>
        </item>
                <item>
            <title>Check backup consistency</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/check-backup-consistency-11836</link>
            <description>Hi,

Is it possible to create a report where I can see that my backups are consistent? In Nagios, we already check the jobs for success or failure.

Regards,
Dennis</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 07 May 2026 14:26:21 +0200</pubDate>
        </item>
                <item>
            <title>Course not found</title>
            <link>https://community.commvault.com/readiverse-academy-60/course-not-found-11830</link>
            <description>Hi, I have been selected for an internship, and the company requires me to do the following courses, which I cannot seem to find. Did the course names change?

Courses:
- Metallic MSP Portal Training
- Metallic MSP Portal API
- Metallic Specialist Training for MSPs
- Metallic Specialist PLUS for MSP
- Commvault Support Foundations 2024
- Service Advantage - Specialization - Advanced Architecture Design</description>
            <category>Readiverse Academy</category>
            <pubDate>Thu, 07 May 2026 11:53:45 +0200</pubDate>
        </item>
                <item>
            <title>Release 11.40 Unable to Download</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/release-11-40-unable-to-download-11808</link>
            <description>Even after trying from various systems, all with different OS and connectivity, the download of the version 11.40 software has not completed in the last two days. The error reported is this:

9936 13 04/23 16:19:27 ### ### ### - Download URL:[https://downloadcenter.commvault.com/CVMedia/11.0.0/BUILD80/SP40_7482319_R1372/Windows/], NumberOfDownloadStreams:[3]
9936 21 04/23 16:20:25 ### ### ### - Checksum [1414247112] of the file [C:\pub\Download\WinX64\ThirdParty\CVInstallThirdParty\CVInstallPrereqPackages.exe] doesn&#039;t match with the expected checksum [1099018047].
9936 21 04/23 16:20:25 ### ### ### - Checksum [1414247112] of the file [C:\pub\Download\WinX64\ThirdParty\CVInstallThirdParty\CVInstallPrereqPackages.exe] doesn&#039;t match with the expected checksum [1099018047].
9936 21 04/23 16:20:25 ### ### ### - Checksum [1414247112] of the file [C:\pub\Download\WinX64\ThirdParty\CVInstallThirdParty\CVInstallPrereqPackages.exe] doesn&#039;t match with the expected checksum [1099018047].
9936 21 04/23 16:20:25 ### ### ### - Error: Failed to download file: [C:\pub\Download\WinX64\ThirdParty\CVInstallThirdParty\CVInstallPrereqPackages.exe], download URL: [https://downloadcenter.commvault.com/CVMedia/11.0.0/BUILD80/SP40_7482319_R1372/Windows/ThirdParty/CVInstallThirdParty/CVInstallPrereqPackages.exe?__cv__=1776955825_359efe5aaf1ab1df32894e46b257f8cc]

Has this happened to others?</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Thu, 07 May 2026 10:19:58 +0200</pubDate>
        </item>
                <item>
            <title>Oracle Linux VSA OS Security Updates</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/oracle-linux-vsa-os-security-updates-11829</link>
            <description>Hi,

We get an error message when trying to do OS security updates on VSAs running Oracle Linux 9. Has anyone else encountered this issue?

Command: sudo dnf makecache
Result:
Oracle Linux 9 BaseOS Latest (x86_64)
Errors during downloading metadata for repository &#039;ol9_baseos_latest&#039;:
  - Curl error (60): SSL peer certificate or SSH remote key was not OK for https://yum.oracle.com/repo/OracleLinux/OL9/baseos/latest/x86_64/repodata/repomd.xml [SSL certificate problem: unable to get local issuer certificate]
Error: Failed to download metadata for repo &#039;ol9_baseos_latest&#039;: Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Thanks,
PPS</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Thu, 07 May 2026 09:00:31 +0200</pubDate>
        </item>
                <item>
            <title>Cannot join existing commserve hyperscale edge</title>
            <link>https://community.commvault.com/hyperscale-x-q-a-54/cannot-join-existing-commserve-hyperscale-edge-11834</link>
            <description>Hi team,

I ran into trouble when creating a lab for HyperScale Edge: setup failed to register with the CommServe. My question is: do we need to set up the CommServe separately from the appliance first?

Thanks</description>
            <category>HyperScale X Q&amp;A</category>
            <pubDate>Thu, 07 May 2026 08:07:44 +0200</pubDate>
        </item>
                <item>
            <title>Oracle Archive Log subclient not running on schedule</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-archive-log-subclient-not-running-on-schedule-11828</link>
            <description>Hi, Has anyone experienced issues with the ArchiveLog_standby subclient not running on schedule even though it is assigned to a Plan? From what we can see, this behaviour occurs after database maintenance has occurred. There are no failures logged; it just does not run. To resolve it, you temporarily change the backup plan to something else and then revert, which “enables” the backup again. Cheers. </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 07 May 2026 08:06:51 +0200</pubDate>
        </item>
                <item>
            <title>Readiness check fails even though cvping from CS to client is working using 8400, 8403 and 8600 in both direction.</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/readiness-check-fails-even-though-cvping-from-cs-to-client-is-working-using-8400-8403-and-8600-in-both-direction-10736</link>
            <description>cvfwd.log from the CS displays the message below:
ERROR: Client connection to ed*wb**** failed on ***t00*: Can&#039;t unambiguously route connection to ed*wb**** because there are multiple tunnels from different instances of CVFWD pretending to be ed*wb****
I have uninstalled and reinstalled the CV client, but still no go. Not sure what else to check. Why does the system think there are multiple tunnels? The CS and MA are running on Windows, and the client is running on Linux.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 07 May 2026 01:31:45 +0200</pubDate>
        </item>
                <item>
            <title>Issue backing up client through a media agent without a DDB</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/issue-backing-up-client-through-a-media-agent-without-a-ddb-11832</link>
            <description>We have a client sitting in a DMZ that can successfully connect to media agent 01. However, media agent 01 does not have a DDB. In the job logs we can see the scan phase for the job complete on MA01 but then it fails on the backup phase. We noticed in the logs an attempt to connect to the CVD service on media agent 02 which houses the DDB. The client does not have a path to MA02. In this situation, does commvault require the client to have comms to the DDB media agent MA02? I thought that MA01 would handle connecting to MA02 not the client. How does Commvault handle the data flow in this setup?  Thanks!</description>
            <category>Storage and Deduplication</category>
            <pubDate>Wed, 06 May 2026 21:15:40 +0200</pubDate>
        </item>
                <item>
            <title>ResAPI for restore responding with error &quot;ClientInfo is not complete&quot;,&quot;errorCode&quot;:9517</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/resapi-for-restore-responding-with-error-clientinfo-is-not-complete-errorcode-9517-8051</link>
            <description>Hello, I am trying to test the Retrieve to Client (Restore) API following the document Retrieve to Client (Restore) | Commvault. I tested the API with both Postman and Golang, and both times I received an error in response. The error is pasted below:
&quot;ClientInfo is not complete&quot;,&quot;errorCode&quot;:9517
Please see the sample GoLang code below. I followed the example in the Commvault REST API docs, but I can&#039;t figure out what&#039;s missing here.
package main

import (
	&quot;crypto/tls&quot;
	&quot;fmt&quot;
	&quot;io/ioutil&quot;
	&quot;net/http&quot;
	&quot;strings&quot;
)

func main() {
	url := &quot;https://hostname/commandcenter/api/retrieveToClient&quot;
	method := &quot;POST&quot;
	payload := strings.NewReader(`{
  &quot;mode&quot;: 2,
  &quot;serviceType&quot;: 1,
  &quot;userInfo&quot;: {
    &quot;userGuid&quot;: &quot;da752adf-79f0-47d6-8be5-d3dadc9abc5e&quot;
  },
  &quot;advanced&quot;: {
    &quot;restoreDataAndACL&quot;: true,
    &quot;restoreDeletedFiles&quot;: true
  },
  &quot;header&quot;: {
    &quot;destination&quot;: {
      &quot;clientId&quot;: 171,
      &quot;clientName&quot;: &quot;&amp;lt;&amp;lt;clientname&amp;gt;&amp;gt;&quot;,
      &quot;inPlace&quot;: false,
      &quot;destPath&quot;: [
        &quot;C://Users//Administrator//Downloads&quot;
      ]
    },
    &quot;filePaths&quot;: [
      &quot;//C://Users//Administrator//Desktop//800gb&quot;
    ],
    &quot;srcContent&quot;: {
      &quot;subclientId&quot;: 399,
      &quot;clientId&quot;: 171,
      &quot;instanceId&quot;: 1,
      &quot;backupSetId&quot;: 331,
      &quot;appTypeId&quot;: 33
    }
  }
}`)
	client := &amp;amp;http.Client{}
	http.DefaultTransport.(*http.Transport).TLSClientConfig = &amp;amp;tls.Config{InsecureSkipVerify: true}
	req, err := http.NewRequest(method, url, payload)
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Add(&quot;Content-Type&quot;, &quot;application/json&quot;)
	req.Header.Add(&quot;Accept&quot;, &quot;application/json&quot;)
req.Header.Add(&quot;Authtoken&quot;, &quot;QSDK 3b83bf1066977563d72cc4d5de464a5b4602376898a02ae6d0bb71b2203d48c1d76041e15edda9950eeb20242519e7c51a2a33c8a90d6c921d532e1f9b71c104f22fca2b87e0566fb6fa4aa12b6a3eb26036f77470d15f8468ab08ce41ce4b14d8017a898127d930021c4d9e37626fb2f8b025ad070dd3f8e998d0fdc8a0ae2ec865e1879aef4ee20081d6932753a8d9171c04870d64c090d4db4cf5e2696e5d8885d32b3db00d1d489d1774c9d0811aa83f2ec97b4639f64d45de8dc813d7967e5cf7f7304bb314a65920e0cb2b22e545ada376f28de827d65dc20ea48eb0dda3fbef4e598ed68fa&quot;)	res, err := client.Do(req)	if err != nil {		fmt.Println(err)		return	}	defer res.Body.Close()	body, err := ioutil.ReadAll(res.Body)	if err != nil {		fmt.Println(err)		return	}	fmt.Println(string(body))} </description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Wed, 06 May 2026 15:55:42 +0200</pubDate>
        </item>
                <item>
            <title>RHE Linux Requirements for IBM Spectrum Scale (GPFS)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/rhe-linux-requirements-for-ibm-spectrum-scale-gpfs-11825</link>
            <description>Regarding the Red Hat Linux requirements for IBM Spectrum Scale (GPFS): does it support Red Hat Enterprise Linux 9.x? The documentation online says Red Hat Enterprise Linux 9.0, but that could be a typo. System Requirements for IBM Spectrum Scale (GPFS)</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 06 May 2026 14:53:52 +0200</pubDate>
        </item>
                <item>
            <title>Has anyone used volume replication to migrate their Commserve</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/has-anyone-used-volume-replication-to-migrate-their-commserve-11827</link>
            <description>Hi Commvault Team, We are looking at migration scenarios where we will be moving our Commserve and standby Commserve to another hosted environment (we are moving into AWS). I have observed that some of our SQL clients have been migrated into AWS by making use of what is referred to as an MGN agent. This agent is basically a replication agent, and it has been successfully replicating SQL databases. So this got me thinking - would this be suitable for a Commvault Commserve migration? Because at the end of the day it’s SQL-based, and the replication side of things could save a lot of time with the migration into AWS. Obviously, we need to keep an eye on supportability, but I just thought I would throw this scenario out there as it may be worth further discussion. So, if anyone has used volume-based replication to assist with a Commserve migration, it would be good to hear of your experience in doing so. Thanks </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 05 May 2026 14:26:28 +0200</pubDate>
        </item>
                <item>
            <title>Hadoop configuration || Unable to get index server information for client</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/hadoop-configuration-unable-to-get-index-server-information-for-client-11823</link>
            <description>I have configured a Hadoop backup in Commvault. The scan phase completes successfully and files are being detected correctly. However, before the backup starts, the job fails with the following error:
&quot;Unable to get index server information for client [120]&quot;
Because of this, the backup never starts. I have waited for a long time, but no events or additional logs are being generated.
Environment details:
Hadoop client installed with required packages
MediaAgents are already configured
Communication services appear to be running fine
Has anyone faced this issue before? Any suggestions on what to check or how to resolve it would be really helpful.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Sun, 03 May 2026 17:01:52 +0200</pubDate>
        </item>
                <item>
            <title>Evaluation Copy of Commvault to build monitoring tools</title>
            <link>https://community.commvault.com/developer-tools-integration-and-automation-50/evaluation-copy-of-commvault-to-build-monitoring-tools-11821</link>
            <description>Hello, Is it possible to obtain an evaluation copy of Commvault?We are a Commvault partner, and our Development Team is requesting a copy to build out our monitoring tools.Thanks,Charles Weeks</description>
            <category>Developer Tools, Integration and Automation</category>
            <pubDate>Fri, 01 May 2026 20:07:18 +0200</pubDate>
        </item>
                <item>
            <title>PostgreSQL 17 - incremental backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/postgresql-17-increment-backup-11820</link>
            <description>Hi all. In PostgreSQL 17, incremental backups have been introduced. Is this supported by Commvault? https://pgdash.io/blog/incremental-backup-in-postgresql-17.html Thanks-Anders</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 01 May 2026 13:34:50 +0200</pubDate>
        </item>
                <item>
            <title>Commvault VSA proxy (server name) is failing to initialize because it cannot locate its own AWS instance ID in any AWS region</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-vsa-proxy-server-name-is-failing-to-initialize-because-it-cannot-locate-its-own-aws-instance-id-in-any-aws-region-11797</link>
            <description>We have 2 AWS accounts where EC2 backups are being saved through Commvault, but for the last 2 weeks we have been getting multiple failure alerts. I have checked the logs:
vsbkp.log
========================
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - Proxy is Amazon Instance. BIOS Manufacturer : [Xen] and Version : [4.11.amazon]
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - Instance Id : i-0726b3f933369834c
1528  179c  04/13 04:16:55 45270647 AmazonCompute::GetOutpostAccountARNForInstanceId() - Exception - The instance ID &#039;i-…...&#039; does not exist
1528  179c  04/13 04:16:55 45270647 CAmazonInfo::DetectLocalProxyInformation() - BIOS UUID : EC25B4D9-F00D-3912-C757-1F22221F6CC2
1528  179c  04/13 04:16:56 45270647 AmazonCompute::_GetVMInfoForVM() - Non Fatal Exception The instance ID &#039;i-…….&#039; does not exist in region ap-south-2
1528  2620  04/13 04:17:23 45270647 VSBkpCoordinator::OnIdle_Starting() - Waiting for [1] agents to initialize.  [server name]
1528  22fc  04/13 04:17:31 45270647 VSBkpController::MonitorProgress() - [0] VMs are being processed
1528  2620  04/13 04:19:13 45270647 VSBkpCoordinator::OnIdle_Starting() - Timeout waiting for agent [server name] to initialize.
1528  2620  04/13 04:19:13 45270647 VSBkpCoordinator::OnIdle_Starting() - No Agents are running, stopping  </description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 01 May 2026 05:22:14 +0200</pubDate>
        </item>
                <item>
            <title>Deploying Azure Access Node</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/deploying-azure-access-node-11818</link>
            <description>Hello, I’m playing around with backing up Azure VMs to on-prem Commvault. I set up all the Azure pieces: created the application, assigned custom roles etc. So the next step is to configure the access node. I deployed the Commvault Cloud BYOL Access Node VM as described here: https://documentation.commvault.com/11.40/software/deploying_commvault_cloud_access_node_from_azure_marketplace.html (selected 11.36 as there is no 11.40 version on Azure). In Azure I set up network rules to open incoming connections on ports 22 and 8400-8403 from my on-premise public IP. Checked network connectivity from the Commserve (telnet) on these ports - it&#039;s ok. So according to the documentation, the next step is to register the node with the commvaultRegistration.sh script. Entered values:
client name: whatever
client hostname: public IP of the Azure VM
CS Name: commserve
CS hostname: local IP of commserve
Is CS behind firewall: yes - option 2 (CS can connect to client)
Port number: 8403
HTTP proxy? no
And here are the first problems. The script is not finishing; it just stays on &quot;executing given operation&quot;:
Do you wish to proceed with registration using the above information? 
(yes/no) yes
cat: /sys/class/net/: Is a directory
cat: eth0/address: No such file or directory
cat: /etc/sysconfig/network-scripts/ifcfg-: No such file or directory
cat: eth0: No such file or directory
mv: target &#039;eth0&#039; is not a directory
Configured of network interfaces...
Redirecting to /bin/systemctl restart network.service
Failed to restart network.service: Unit network.service not found.
Restarted networking services ...
error reading information on service yum-cron: No such file or directory
Redirecting to /bin/systemctl start yum-cron.service
Failed to start yum-cron.service: Unit yum-cron.service not found.
Redirecting stopping service for Instance001 to systemd ...
Running &quot;systemctl stop commvault.Instance001.service&quot; ...
Stopping Commvault services for Instance001 ...
Cleaning up /var/log/commvault/Log_Files/locks ...
All services stopped.
check if cvfwd(sa) is running..
root 11802 11027 0 14:41 pts/0 00:00:00 grep cvfwd
Executing given operation...
No output was generated
SimCallWrapper failed. Please check SimCallWrapper.log under Simpana Log Files. Error [2]
And that&#039;s all - the script is stuck in some loop; nothing more happened. So I checked the SimCallWrapper log, but I don&#039;t know what the problem is - there are only some generic errors like:
SIM Call ProcessClientSetupRequest failed with error = -1
No output was generated
SimCallWrapper failed. Error [2]
I tried to finish the installation from the Commserve side (as in the documentation), but with no luck. Just to mention, the Commserve only has a local IP; of course it&#039;s not public, so the Azure VM cannot reach it. But as far as I know, that&#039;s not a problem, because the Commserve can reach the Azure VM and I can set up a one-way network topology (but how, since I don&#039;t have the client registered yet?). How can I register an Azure Access Node in on-prem Commvault then? I tried to create a pseudoclient, then apply a one-way network configuration and install the software, but it doesn&#039;t work. 
I tried with the 11.42 node from the Marketplace, but it seems the registration script is bugged there: it stops just after the last answer, and in the logs there is info about empty credentials (which was of course not true). I discovered that the only way I managed to register/install the client was to just deploy the 11.36 Access Node (even without the commvaultRegistration.sh script), create a pseudoclient in the CommCell console, provide the public IP, check &quot;Fetch configuration information from the client that is already in decoupled mode&quot; and provide port 8400. Then configure the network topology, update the client etc. It was successful, but it seems more like a workaround than the official way described in the documentation. Any thoughts?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 01 May 2026 00:20:59 +0200</pubDate>
        </item>
                <item>
            <title>Upgrade from 11.32 to 11.40/11.42 what happens to index version?</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/upgrade-from-11-32-to-11-40-11-42-what-happens-to-index-version-11817</link>
            <description>Hi all. I’m planning to upgrade from 11.32 to 11.40/11.42.Many of my agents (Oracle, MSSQL, file) and virtual machines are old installations and are still running index version 1. What will happen when I upgrade to 11.40/11.42, will they continue to use index version 1 or will they be auto “upgraded” to index version 2, or is this still a manual process so I can upgrade the index when I find fit? Cheers-Anders</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 30 Apr 2026 23:03:35 +0200</pubDate>
        </item>
                <item>
            <title>Recruiting volunteers for two new product design studies: Restore experience and Data Security features</title>
            <link>https://community.commvault.com/design-feedback-welcome-68/recruiting-volunteers-for-two-new-product-design-studies-restore-experience-and-data-security-features-11807</link>
            <description>Join us for a prototype review session!Expanding on the pilot we conducted in December and January, our Product and UX Design team is recruiting for two new design review studies — one on the Restore experience and workflows and the other on new data security capabilities and managing sensitive data across cloud and hybrid environments. Find details and instructions for how to sign up for both of these in our updated “Active Design Studies” hub here. (You’ll need to log in using your customer or partner credentials to access these, unlike most areas in the community.)Feedback from our user community is a critical component of the design process and helps shape workflow and usability priorities. And unlike formal EAs or betas, nothing to install and no more involved commitment, just an hour or less on Zoom reviewing prototypes and providing feedback on future design direction and workflow assumptions.​@Sougato Roy and I look forward to sharing more insights and feedback from the earlier pilot studies soon. Stay tuned for more!</description>
            <category>Design Feedback: Welcome</category>
            <pubDate>Thu, 30 Apr 2026 20:27:58 +0200</pubDate>
        </item>
                <item>
            <title>Gotchas or issues wiping Aux Copied Storage to reconfigure mount points</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/gotchas-or-issues-wiping-aux-copied-storage-to-reconfigure-mount-points-11814</link>
            <description>Background:We have Aux copies moving data to a second location, and from that location Tapes are written. We have 2 Media Agents there, each with 1 mount point to storage (and they share the mount points).  The mount points have gotten so large we cannot allocate more space to them (a Windows 256 TB limitation).We *could* just add more mount points to add more storage but we would like to just “delete the existing mount points” from Commvault, wipe the storage, and then rebuild the mount points so we have maybe 4-5 smaller mount points per server (say 75 TB each). Then we would Aux copy all the data back to that location (which would prob. take 4-7 days). Questions:If we were to do this, would there be any “gotchas” or things to consider (we have a global DDB on this secondary Aux copied location, and several on the primary location)? I assume its not as simple as “turn off aux copies, delete mount points, wipe storage, and then reallocate storage and mount points, share the mount points between the two servers”…or is it?	We write tapes from this secondary Aux copy location… any considerations for tapes (have to reconfigure something to get them to work again, etc?)</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 30 Apr 2026 16:17:28 +0200</pubDate>
        </item>
                <item>
            <title>How Commvault’s Design Feedback Program Helps Us Build a Better Product Experience</title>
            <link>https://community.commvault.com/design-feedback-welcome-68/how-commvault-s-design-feedback-program-helps-us-build-a-better-product-experience-11782</link>
            <description>Product and UX Design Feedback Pilot Readout - Improving our Product with Your Insights In late 2025, we launched a low-key pilot series of Product Design and UX feedback studies exclusively within our Community. The goal was focused: bring hands-on operators in for informal sessions to review product design at the prototype stage, before features ship, validate workflows against lived experience, and close the gap between product usability and what end users actually need. We&#039;re excited to share that the pilot phase is complete and take a moment to report what we heard and learned along the way. These sessions helped our UX Design leads catch points of confusion early, validate what’s working, and prioritize improvements before code is committed. And just as importantly, participants confirmed the format was genuinely useful and easy to fit into a busy week. Thanks to our pioneering Community members and All Stars for diving in with us on the pilot! What are the Design Feedback Studies and Sessions? Not a beta or &quot;Tech Preview.&quot; Not a survey and not a focus group either. These are moderated, one-on-one usability testing sessions and interviews between a Commvault UX researcher and an operator who has close knowledge of the use case, product, and workflows. During sessions, customers discuss different aspects of the use case or workflow, navigate and react to prototypes, and provide input that directly informs product design decisions before release. Participants gain visibility and a voice into proposed enhancements and assumptions driving design and UX decisions. Commvault gains crucial feedback grounded in real operational context. Our Early Adopter and beta programs are of course incredibly valuable too, but we know they often require installs or upgrades, troubleshooting, and time commitments over an extended period. And the outcomes are very different. 
Design Feedback sessions are intentionally lighter weight and more accessible for any of our end users: no setup, dependencies or longer commitment—just one hour on Zoom to review and discuss designs. What We Learned from the Pilot StudiesAll participants completed the workflow—but a few moments still felt “uncertain”In the cloud connection workflow we tested, participants all completed the task. That’s a win, of course. However, our facilitators also saw moments where users could move forward without being 100% confident about what an option meant or what a label indicated. In data protection environments, ambiguity is more than a UX nit: it can lead to misconfiguration, extra support cycles, and worse. The recurring feedback themes centered on permission scope, how connections behave as subscriptions change over time, and whether status labels communicate the right level of assurance.Based on what you shared, we’re focusing our next design iterations in three areas: Clearer guidance during authenticationMore in-context explanation of permission scope and recommended choices—right when you’re making the decision.More control (and clearer rules) for subscription scopeExploring more granular scope controls (including tag- and naming-based filtering) and making the “what happens when new subscriptions appear” behavior explicit.Status labels that read the way you expectCleaning up terminology gaps (for example, “Azure managed”) so protection states align with how you interpret them and what action, if any, you should take. And... We’re Moving Forward with the Feedback Program!Because the pilot studies produced strong insights and the experience was positive for participants, we’re already moving forward with a new set of Design Feedback studies. We&#039;re actively recruiting in April for two new studies--one covering the Restore experience in Command Center and the other new Data Security feature design. 
You can read more in Jenn&#039;s post here and follow the individual links to details on the respective studies and to sign up.As with the pilot, Design Feedback sessions will continue to be approximately one-hour structured Zoom sessions where you review prototypes or proposed workflows and tell us what’s clear, what’s missing, and what would make the experience more reliable in practice. As always, thanks to our amazing Community for the great energy and collaborative insights!Thank you!Roy, Jenn, Damian &amp;amp; The Community Team</description>
            <category>Design Feedback: Welcome</category>
            <pubDate>Thu, 30 Apr 2026 15:27:37 +0200</pubDate>
        </item>
                <item>
            <title>From the Lab:  10 Commvault Configuration Mistakes I See All the Time</title>
            <link>https://community.commvault.com/share-best-practices-3/from-the-lab-10-commvault-configuration-mistakes-i-see-all-the-time-11770</link>
            <description>Commvault is incredibly powerful.But that power comes with complexity—and complexity is where mistakes creep in.I’ve worked in enough environments to see the same patterns repeat:Backups are “working”… until they aren’t	Performance slowly degrades	Restores become harder than they should beAnd most of the time, it’s not a product issue.It’s configuration.Here are 10 Commvault configuration mistakes I see all the time—and how to avoid them. 1. Building Everything Around a Single MediaAgentIt works at first.One MediaAgent. One place for everything. Simple.Until:Jobs stack up	Throughput drops	That one system becomes your bottleneckFix:Start with at least two MediaAgents and distribute workloads.If everything depends on one MediaAgent, you don’t have resilience—you have risk. 2. Undersizing the CommServeThe CommServe is the brain of your environment.And yet, it’s often treated like an afterthought.What happens:Slow job scheduling	UI lag	Reporting delaysFix:Proper CPU/RAM sizing	Fast storage for the database	Regular DB maintenanceIf the CommServe struggles, everything struggles. 3. Poor Deduplication Database (DDB) PlacementThis one causes more long-term pain than almost anything else.The mistake:Putting the DDB on slow or shared storage.The result:Slower backups	Longer job times	Painful rebuildsFix:Place DDB on fast, dedicated storage (SSD if possible)	Monitor DDB health regularlyDedupe performance lives and dies by storage speed. 4. Overloading a Single Storage PoolIt’s easy to just keep adding workloads to the same storage pool.Until it can’t keep up.Symptoms:Increased job duration	Resource contention	Inconsistent performanceFix:Split workloads across multiple storage pools	Align pools with workload types or performance tiersOne pool for everything = one problem for everything. 5. 
Ignoring Network ThroughputBackup traffic is still network traffic.And it’s often underestimated.What happens:Bottlenecks between clients and MediaAgents	Slower backups and restores	Unpredictable performanceFix:Validate bandwidth between components	Use dedicated backup networks where possible	Monitor throughput, not just job statusYou can’t out-configure a slow network. 6. No Clear Retention StrategyRetention tends to evolve organically—and that’s the problem.What I see:Different retention policies everywhere	No alignment with business needs	Storage filling up unexpectedlyFix:Define retention by workload type	Align with compliance and business requirements	Plan for growth, not just current usageRetention isn’t a setting—it’s a strategy. 7. Mixing All Workloads TogetherDatabases. VMs. File servers. Archive jobs.All in the same policies, same schedules, same infrastructure.What happens:Performance conflicts	Harder troubleshooting	Unpredictable job behaviorFix:Segment workloads:By type	By priority	By performance requirementsSeparation creates control—and control creates stability. 8. Skipping Restore TestingThis is the big one.Everything looks fine… because nobody has tried to restore anything.Reality:Backups can succeed but still be unusable	Application restores may fail due to misconfigurations	Recovery times are often unknownFix:Test regularly:Full VM restores	File-level recovery	Application-level recoveryBackup success means nothing without recovery success. 9. Too Many Exceptions (No Standardization)“I’ll just configure this one differently…”That adds up quickly.What I see:Inconsistent policies	Confusing configurations	Hard-to-manage environmentsFix:Standardize naming, policies, and schedules	Limit exceptions	Document anything that deviatesConsistency is what makes environments scalable. 10. 
Ignoring Alerts and Warning StatesCommvault will tell you when something isn’t right.The problem is… people stop listening.What happens:Warnings pile up	Real issues get buried	Small problems become big onesFix:Clean up existing warnings	Tune alerts so they’re meaningful	Treat warnings as early signals—not noiseIf everything is a warning, nothing is. Bringing It All TogetherMost Commvault issues don’t come from dramatic failures.They come from:Small misconfigurations	Overlooked details	Decisions that made sense at the timeUntil scale exposes them. Final ThoughtYou don’t need a perfect environment.But you do need a predictable one.Good configuration isn’t about making things work today.It’s about making sure they still work when everything grows.Fix these early, and your environment will feel stable.Ignore them, and you’ll spend your time chasing problems that didn’t have to exist.</description>
            <category>Share Best Practices</category>
            <pubDate>Thu, 30 Apr 2026 07:19:30 +0200</pubDate>
        </item>
                <item>
            <title>Convert Netapp Snapprotect to a regular Commvault environment</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/convert-netapp-snapprotect-to-a-regular-commvault-environment-11816</link>
            <description>I&#039;m planning to upgrade/convert my v11 SnapProtect Commvault environment to the generic Commvault. What are the steps for doing it? From what I know, there are the following differences:
1. The OEM ID of SnapProtect is different from the usual Commvault software
2. The SnapProtect licence is also different
How do I convert this licence into a regular Commvault capacity-based licence? How do I convert it such that a regular Commvault installer downloadable from the Commvault website can be used to upgrade to newer versions of Commvault? Any ideas, suggestions, recommendations?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 30 Apr 2026 00:59:50 +0200</pubDate>
        </item>
                <item>
            <title>Arlie won&#039;t go away</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/arlie-won-t-go-away-11809</link>
            <description>Every time we install a Commvault patch it re-enables Arlie. Commvault - stop trying to make Arlie happen. It’s horrible, we want it disabled - it isn’t going to happen. I have disabled Arlie in our two CommServes after every update, and yet it re-enables itself and causes annoyances such as:
Job 2373079* (F) - Workflow Management - 04/25/2026 8:00:01 to 04/25/2026 8:05:02 (0:05:01) - Job Type: AI-JOB-FAILURE-INSIGHTS
I normally don’t comment like this, but it really does suck. Fix it. Honestly, AI shouldn’t be a part of every platform - I think integrating it into your software is a waste of time.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 29 Apr 2026 15:59:30 +0200</pubDate>
        </item>
                <item>
            <title>Health Report Errors</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/health-report-errors-11631</link>
            <description>I’ve got a new installation that’s just completed and am having an issue with the Health Report in the Dashboard. The error I’m seeing states ‘Unable to load the data from the server’. I’ve checked all services, restarted Tomcat/IIS, and confirmed the CVCloud DB is online. When I click on the Health Report, it then loads an error stating ‘User does not have access to the page’. The user is part of the ‘master’ role with associated entities configured correctly. Any ideas where I could start troubleshooting from as the next step?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 29 Apr 2026 14:25:25 +0200</pubDate>
        </item>
                <item>
            <title>Catch Up with Commvault at Red Hat Summit 2026 | May 11 - 14, 2026</title>
            <link>https://community.commvault.com/news-events-7/catch-up-with-commvault-at-red-hat-summit-2026-may-11-14-2026-11806</link>
            <description>Join us at Booth #329 at Georgia World Congress Center, May 11 - 14, 2026. Click here for details or to book a meeting with our team. We’d love to see you if you plan to be there! Check out the latest tech and demos, talk cloud-native resilience, and enter to win some fun prizes. See how Commvault helps protect virtual machines and containers running together on Red Hat OpenShift.	Stop by, get some swag, and enter to win a pair of Ray-Ban Meta Wayfarer glasses.	Learn how we can help accelerate your migration to OpenShift or enable production-ready recovery across VMs and containers once you’re there.</description>
            <category>News &amp; Events</category>
            <pubDate>Tue, 28 Apr 2026 20:54:10 +0200</pubDate>
        </item>
                <item>
            <title>Optimize for table level restore</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/optimize-for-table-level-restore-11810</link>
            <description>Hi All, a short question regarding the backup feature Optimize for table level restore for the SQL iDataAgent. The documentation (Performing a Block-Level Backup Operation on SQL Databases) states: 	Unless otherwise noted for a particular feature, block-level browse is not supported from backups to tape libraries or virtual tape libraries. If you want to perform a block-level restore by using a secondary copy on tape libraries or virtual tape libraries, move the copy to disk storage before performing the restore.	 So what happens if we have this feature active and have tape as the sync copy? Is it like the guest files and folders restore for VMware, where that is not possible but a whole VM/DB restore is possible? Or do we have to bring the backup jobs back from tape to the disk library to run a normal DB restore, or is only the table browse not possible?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 28 Apr 2026 16:43:30 +0200</pubDate>
        </item>
                <item>
            <title>Proxmox VE Live Sync</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/proxmox-ve-livesync-11811</link>
            <description>Is Live Sync possible between two Proxmox VE hosts?</description>
            <category>Virtualization and Containers</category>
            <pubDate>Tue, 28 Apr 2026 06:17:34 +0200</pubDate>
        </item>
                <item>
            <title>Issue with Commvault Plans and schedule policies</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/issue-with-commvault-plans-and-schedule-policies-11681</link>
            <description>Hi, if anyone can explain, I would appreciate it. Commvault 11.32.119. Our company is gradually adopting Plans, and a question has come up. According to my research, Commvault creates two schedule policies to satisfy the RPO: one is generic and the other is more specific to database agents. If the more specific database policy has only one task, which is precisely the task that drives the Incremental backup, where is Commvault configuring the task that runs a Full backup? Has Commvault configured the Full backup task in the generic policy (Test_Plan)? It seems to me that is what happened, but examining the policy, it&#039;s not possible to find anything in the Full backup task settings stating that a Full backup will run every hour, for example. I modified the rule &quot;run incremental every 1 day(s) at 9:00 PM&quot; by clicking on the pen icon and adding a rule. See the screen below. I checked the &quot;Run full backup on databases every&quot; option. When examining the policies created using the CommCell Console, I cannot find in which task, or in which schedule policy, this Full backup was configured.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 28 Apr 2026 00:57:17 +0200</pubDate>
        </item>
                <item>
            <title>Oracle iDataAgent (command line) structure</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-idataagent-command-line-sctructure-11812</link>
            <description>During a review of some settings, we noticed a structure that was previously unknown to us: the (command line) subclient. In fact, this is strange, because this structure doesn&#039;t appear as a subclient on other screens. What is it for?</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 27 Apr 2026 20:39:26 +0200</pubDate>
        </item>
                <item>
            <title>Incorrectly created Server Groups for CommServe LiveSync configuration</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/incorrectly-created-server-groups-for-commserve-livesync-configuration-11765</link>
            <description>Hello Commvault Community,  While troubleshooting another topic, the failover configuration was disabled, and when I wanted to enable it again, I noticed something disturbing. I would like to ask whether this has changed in recent versions or whether it is a bug that needs to be fixed. For clarity, I&#039;ve checked this from both consoles (CommCell Console and Command Center), and both have the same problem. When we changed the CS Failover setting from &quot;Existing Configuration&quot; to &quot;Use Network Gateway,&quot; changes were made to the Client Computer Groups and Network Topologies created by the CS Failover configuration. What concerned us was that nothing was displayed for the &quot;External Clients for Failover&quot; group and the &quot;Proxy Clients for Failover&quot; group. We repeatedly refreshed, restarted the console, and tried the &quot;Push Network Configuration&quot; option, but nothing helped. Then we noticed that the Server Group/Client Computer Group association settings were set by default to search for clients/servers within the scope &quot;Clients of user,&quot; with the user itself set to &quot;undefined.&quot; We found a workaround for this problem, but it should not behave this way by default. The group creation process for failover configurations doesn&#039;t work correctly in versions SP40 and SP42. We confirmed this in two different environments, and the behavior was the same - the association value is incorrectly set to &quot;Clients of user&quot; instead of &quot;Clients of this CommCell.&quot; If this has changed in recent versions, please let us know - we couldn&#039;t find any information in the documentation. We need to know whether this is the intended behavior of the system or whether it should work as described above and it’s a bug that needs to be fixed. Thanks in advance! Kind Regards, Kamil</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Sun, 26 Apr 2026 03:06:02 +0200</pubDate>
        </item>
                <item>
            <title>Enable client-side deduplication on VSA Windows proxy</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/enable-client-side-deduplication-on-vsa-windows-proxy-11656</link>
            <description>Hi all, I would like to enable client-side deduplication on a Windows VSA proxy so that only unique data is sent directly from the VSA to the MediaAgent DDB/backup library, instead of sending all read data from the VSA and deduplicating only on the MediaAgent side. We are considering this approach because the VSA proxy will be a VM inside an AVS environment, while the MediaAgent will be outside of AVS (as an Azure VM).  Could you please help with: Required packages on the Windows VSA proxy: what Commvault packages are needed on the Windows server acting as VSA proxy to support client-side deduplication (e.g., Virtual Server, File System Core, File System, or any additional components)? How to enable client-side deduplication for VSA backups: what are the step-by-step configuration steps (storage policy, subclient properties, etc.) to enable source-side/client dedup specifically for VSA workloads? How to verify client-side dedup is working: what are the step-by-step checks (job details fields, reports, DDB verification, etc.) to confirm dedupe is processing on the VSA proxy and reduced data is sent to the MediaAgent? Thanks in advance! Best regards, Nikos</description>
            <category>Storage and Deduplication</category>
            <pubDate>Fri, 24 Apr 2026 13:09:41 +0200</pubDate>
        </item>
                <item>
            <title>Live Sync Configurations on combined CommServe &amp; Media Agent deployment</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/live-sync-configurations-on-combined-commserve-media-agent-deployment-11591</link>
            <description>Considerations for Live Sync configurations on a combined CommServe &amp;amp; MediaAgent deployment. Hi All, I am planning a deployment of Commvault on a single physical server at a remote site, which will require Live Sync to a secondary site. There are a couple of things that I am not sure about. If a failover is initiated, how is the MediaAgent component accessed, given that the services in Instance001 will be stopped? The same applies to the MediaAgent component in Instance001 on the standby CS/MA. Is it an option to have the MediaAgent installed into Instance002 on both the Primary and Standby CommServe computers? Doing it this way would mean that the services remain online. The Failover component would then be deployed into Instance003. Is this a workable approach, or is there a different method that would be better? Thanks. Ignes</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 24 Apr 2026 07:58:33 +0200</pubDate>
        </item>
                <item>
            <title>Pre-register for SHIFT 2026 in Nashville, Nov. 10-12</title>
            <link>https://community.commvault.com/news-events-7/pre-register-for-shift-2026-in-nashville-nov-10-12-11784</link>
            <description>We&#039;re thrilled to announce that SHIFT 2026 will be coming to Nashville November 10-12, and the agenda has expanded to encompass three full days with distinct tracks for executives and practitioners. For our community of practitioners and admins, we’ll host hands-on labs, full product certifications, as well as deep-dive sessions on the latest Commvault technology, data protection, cyber resilience, and optimizing how you work in the ever-evolving world of AI. Pre-register here to save your spot.</description>
            <category>News &amp; Events</category>
            <pubDate>Thu, 23 Apr 2026 18:55:46 +0200</pubDate>
        </item>
                <item>
            <title>Commvault Stencils and Templates</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-stencils-and-templatyes-9395</link>
            <description>Team, please assist me with the latest Commvault stencils or templates.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 23 Apr 2026 16:57:38 +0200</pubDate>
        </item>
                <item>
            <title>Prevent unnecessary 2FA mail when logging into Command Center</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/prevent-unnecessary-2fa-mail-when-logging-into-command-center-11800</link>
            <description>Hi everyone, I have an 11.40 environment. The login nowadays has 3 steps, and before I get to fill in the 2FA PIN, the email with the 2FA code has already been triggered. I first get the screen to fill in my username.	The second screen allows me to put in my password.	After this, I get the third screen where I can fill in the 2FA code if I have not appended a 2FA PIN to my password on the second screen (the same way as in the CommCell Console). If I did not fill in the 2FA PIN with my password, I get a mail with a 2FA PIN. I want to configure the logic so that a 2FA code is only sent out via mail when I don&#039;t fill in a PIN on the third screen. This is because I don&#039;t want to get a 2FA PIN mail when I have already stored the secret in a TOTP app. It is also not logical for end users that have a TOTP app that they still get the mail when they have not yet had the chance to fill in the 2FA PIN. I would also rather not entirely disable the 2FA mail via the additional setting DisableTFAEmail, because this can cause issues for end users that did not set up a TOTP app. I hope my explanation of the issue is clear. I cannot find a solution in the documentation for how to change this logic. It seems to me the login steps changed, but the logic did not change from 11.32, where you had the 2FA PIN field beneath the password field on the second login screen. Kind regards, Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 23 Apr 2026 16:46:58 +0200</pubDate>
        </item>
                <item>
            <title>Cleanroom Recovery in Action (On-demand)</title>
            <link>https://community.commvault.com/news-events-7/cleanroom-recovery-in-action-on-demand-11805</link>
            <description>Join Commvault’s ​@Dinesh Reddy, Director of Product Management, and ​@Toussaint Brock, Product Marketing Manager, for an in-depth view into the latest innovation around Cleanroom Recovery. They’ll walk through how organizations are operationalizing cyber recovery with Cleanroom Recovery for Air Gap Protect and HyperScale X. See how organizations can automate cyber recovery, customize the creation of a Cleanroom, and automatically scan data, VMs, and applications for threats as they are recovered into the Cleanroom.  The webinar will cover: Why confident, clean recovery of data, apps, and infrastructure is mission critical for all organizations.  	The latest generally available innovations from Cleanroom Recovery, including Cloud Cleanroom creation automation, the runbook experience, and support for on-premises Cleanroom Recovery.   	A demonstration of the latest enhancements. Click here for more information and to register! 9:30 AM BST | 11 AM SGT | 1 PM EDT</description>
            <category>News &amp; Events</category>
            <pubDate>Thu, 23 Apr 2026 16:45:04 +0200</pubDate>
        </item>
                <item>
            <title>Commvault OLVM Keycloak Integration</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/commvault-olvm-keycloak-integration-11475</link>
            <description>Hi All, oVirt, the upstream source project for OLVM, is defaulting to Keycloak as the authentication provider. So far Commvault doesn’t officially support it. Is there any workaround, or is support coming soon? Thanks.</description>
            <category>Virtualization and Containers</category>
            <pubDate>Thu, 23 Apr 2026 10:53:40 +0200</pubDate>
        </item>
                <item>
            <title>New installation of CommServe on Linux</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/new-installation-commserver-on-linux-11744</link>
            <description>I would like to deploy a new CommServe server on Linux using SP 11.40. My current CommServe is running on Windows with SP 11.36. I plan to set up the new CommServe as a standby node in advance.Could you advise on the best practices for this setup?The operating system I am using is Rocky Linux. I have provisioned an additional disk for the Commvault installation and software cache, which is mounted at /opt/CommvaultData.What are the initial steps I should follow to ensure a secure and reliable installation? For the installation of SP 11.40, I would like to first deploy two new passive nodes on Linux, and only then perform the failover to Linux.I also have a question regarding the software cache. With the new CommServe on Linux, I have configured a new software cache for Windows and enabled it as a remote software cache.How can I ensure that, during update installations, clients automatically use the correct remote cache? Specifically, I want Windows clients to use the Windows software cache and Linux clients to use the Linux software cache. I have already created two new client groups, categorized into Linux clients and Windows clients.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 22 Apr 2026 16:29:25 +0200</pubDate>
        </item>
                <item>
            <title>VSA with License</title>
            <link>https://community.commvault.com/getting-started-51/vsa-with-lisens-11419</link>
            <description>Dears,I have a vCenter environment, and I created a Red Hat 8 virtual machine to be used as a VSA proxy.I installed the Virtual Server package and the File System package successfully.However, after completing the installation, this node does not appear as an Access Node in the Virtualization configuration.Kindly advise why this is happening.Note: I am currently using a trial license.</description>
            <category>Getting Started</category>
            <pubDate>Wed, 22 Apr 2026 10:40:27 +0200</pubDate>
        </item>
                <item>
            <title>License Application Confirmation - Permanent License with Renewed Support/Maintenance</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/license-application-confirmation-permanent-license-with-renewed-support-maintenance-11804</link>
            <description>Hello, we have renewed the support/maintenance contract for the CommCell via the Maintenance Advantage portal and received a new license XML file. Current license configuration: License type: Permanent	Mode: Production	Support/Maintenance: Expired ==&amp;gt; so we renewed this one for 1 year.  Question: before applying the new license XML file, will applying it maintain the permanent license type, or will it change the license structure? The customer is concerned that applying the renewed support license might: remove or modify the existing permanent license entitlements,	convert from permanent to term/subscription licensing,	or impact production backups. My expectation is that this license file only updates the support/maintenance expiration date, but I need to prove it to the customer. Can you help me, please? Thanks. Best regards,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 22 Apr 2026 06:26:25 +0200</pubDate>
        </item>
                <item>
            <title>Storage Migration for Backup</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/storage-migration-for-backup-11803</link>
            <description>We have decided to move our backup data to the NetApp AFF A90 and allow the existing NetApp system to age off in line with the retention policy.Could you please advise on how best to present the volumes to the MediaAgents—whether as block storage or via NFS? We would like to understand which option would be more suitable for our use case.Additionally, could you confirm whether NVMe is supported by Commvault version 11.40?</description>
            <category>Storage and Deduplication</category>
            <pubDate>Wed, 22 Apr 2026 04:22:05 +0200</pubDate>
        </item>
                <item>
            <title>Commvault 11.40.47 – Backup of Nutanix AHV VMs Fails (Unable to Mount NFS Share)</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/commvault-11-40-47-backup-of-nutanix-ahv-vms-fails-unable-to-mount-nfs-share-11789</link>
            <description>Hello everyone, I’m facing an issue while backing up production virtual machines hosted on Nutanix AHV using Commvault version 11.40.47, and I would appreciate your guidance. Environment: Commvault Version: 11.40.47	Hypervisor: Nutanix AHV	Prism Element IP: 10.10.100.120	Commvault Access Node / MediaAgent:	Hostname: srv-CV-backup		IP: 10.10.100.24	Issue description: when starting a backup job, Commvault successfully discovers and lists all the VMs,	the backup job starts but fails before backing up any disks,	and the issue occurs for all VMs. Error message: Error opening the disks for virtual machine [Win-SRV1]. Access node [srv-CV-backup] is unable to mount NFS share [10.10.100.120:/PROD]. Source: srv-backup, Process: JobManager. Notes: the latest Commvault version available to us is installed (11.40.47),	VM discovery works correctly,	and the failure seems related to mounting the NFS datastore from Nutanix. Has anyone encountered a similar issue with Commvault and Nutanix AHV? Are there any specific NFS permissions, firewall rules, or configuration settings that should be checked on either the Nutanix or Commvault side? PS: the last successful backup was before upgrading to this AOS version, 7.5. Also, this version is compatible with Commvault; it’s mentioned in the docs. Thank you in advance for your support. Best regards,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 21 Apr 2026 12:58:34 +0200</pubDate>
        </item>
                <item>
            <title>Practical Briefing: Resilience Planning in Uncertain Times (On-demand)</title>
            <link>https://community.commvault.com/news-events-7/practical-briefing-resilience-planning-in-uncertain-times-on-demand-11801</link>
            <description>On April 29, join experts from Commvault, Deloitte, and IDC for a structured, 45-minute session providing a practical framework for assessing gaps in your organization’s recovery posture and a checklist to drive operational resilience. Most organizations won&#039;t discover the gaps in their recovery posture until they&#039;re already in a crisis. This briefing is designed to change that. The agenda includes: a framework for defining your minimum viability and Survival Time Objective, making operational recovery requirements a factor decided in advance;	a clear understanding of why cyber recovery and disaster recovery require different architectures, and what that means for your current posture;  	practical blueprints for air-gapped data protection, cleanroom recovery, and cross-region resilience;  	an understanding of how sovereignty issues can complicate recovery options;  	and a resilience checklist infrastructure teams can act on immediately. Find out more and register here. 9:30 AM BST | 11 AM SGT | 1 PM EDT </description>
            <category>News &amp; Events</category>
            <pubDate>Mon, 20 Apr 2026 23:44:45 +0200</pubDate>
        </item>
                <item>
            <title>Reset secret key not sending new QR to end user</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/reset-secret-key-not-sending-new-qr-to-end-user-8573</link>
            <description>Hello, I am trying to reset the secret key of an end user, but the end user is not getting a new QR code. I tested with a test user without MFA, and I do get the e-mail to reset the password. In the webserver log I see: WEBAPI-STARTED processing [PUT]:[/User/Secret/Reset] request. : /User/Secret/Reset : AdditionalInfo [ConsoleType [AdminConsole]] 11336 91    04/16 13:07:25 91  e.skepko Invoke - WEBAPI-FINISHED processing [PUT]:[/CVWebService.svc/User/Secret/Reset] in [25] ms;  HTTP code &#039;OK&#039;</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 20 Apr 2026 16:38:32 +0200</pubDate>
        </item>
                <item>
            <title>Meaning of &quot;Snapshot options&quot; in Server Plan for M365 backups</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/meaning-of-snapshot-options-in-server-plan-for-m365-backups-11787</link>
            <description>Hi all, I’m a bit confused about the &quot;Snapshot options&quot; section in the Server Plan settings, specifically when the plan is used for Microsoft 365 backups (Exchange, SharePoint, etc.).  Since M365 backup is about mailboxes and sites, not VMs or LUNs, what is the actual purpose of this section? Does &quot;Enable backup copy&quot; or the snapshot schedule have any real effect on how M365 data is processed or moved to the storage? Or is this just a generic UI section for all Server Plans, or should I be aware of some specific logic here for M365 apps? Thanks for the clarification!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 17 Apr 2026 10:32:17 +0200</pubDate>
        </item>
                <item>
            <title>Web Console Deprecation: What Customers Need to Know</title>
            <link>https://community.commvault.com/platform-updates-47/web-console-deprecation-what-customers-need-to-know-11679</link>
            <description>Web Console Deprecation – What Customers Need to Know  Commvault is continuing its move toward a single, modern management experience through the Command Center. As part of this transition, the legacy Web Console has been officially deprecated and is no longer included by default in newer Commvault platform releases. This article explains what the deprecation means for you, when the Web Console may still be required, and how to request access if your environment depends on it. Why is the Web Console being deprecated? The Web Console is being retired as Commvault standardizes on the Command Center as its primary management and reporting interface. The Command Center provides equivalent or enhanced capabilities, improved security alignment, and is the strategic platform for all future feature development. Deprecation timeline  • Version 11.38: Web Console interface marked as deprecated • Version 11.40: Web Console removed from the default installation media • Version 11.42 and later: Web Console not available on new or upgraded installations  When is the Web Console still needed?  Although deprecated, Commvault recognizes that some customer environments still rely on Web Console functionality in limited scenarios, such as: • Editing or maintaining reports using Report Builder In these cases, access to the Web Console package is still supported through a controlled request process. How to request access to the Web Console package  Commvault provides an automated workflow to request access to the Web Console package when there is a valid business or technical requirement. To submit a request: 1. Sign in to the Commvault Cloud Services Portal 2. Navigate to Workflows 3. Select the workflow titled “Request WebConsole Package Access” 4. Enter your CommCell ID and contact email address 5. Provide a brief explanation of why the Web Console is required 6. Submit the request What happens next?  
After submission, you will receive a confirmation notification. If additional information is required, a Commvault representative will contact you. Once approved, you will receive a secure download link for the Web Console package. Additional information  For detailed eligibility criteria and workflow information, refer to the Commvault Knowledge Base article: How to obtain the Web Console Package (KB89689) Moving forward  While access to the Web Console remains available for specific scenarios, customers are strongly encouraged to adopt the Command Center as their primary interface to take advantage of ongoing enhancements, security improvements, and future innovations.</description>
            <category>Platform Updates</category>
            <pubDate>Fri, 17 Apr 2026 09:30:34 +0200</pubDate>
        </item>
                <item>
            <title>Backups of VMs from one subclient may be subject to longer retention periods across different weekends</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/backups-of-vms-from-one-subclient-may-be-subject-to-longer-retention-periods-across-different-weekends-11791</link>
            <description>Hi,After migrating subclients from Index v1 to Index v2, VM backups within application clusters are losing time consistency on Selective Copies (e.g., First Full of the Month). The issue arises because Index v2 splits the subclient backup into individual child jobs for each VM; when these jobs finish after midnight, the system assigns them to different retention periods. Due to a fixed backup window, we cannot delay the start times, and manually selecting jobs for tape marks is not scalable.How to configure it so that Selective Copy treats all child jobs of a given subclient as a single time unit, ensuring a consistent point-in-time recovery for the entire application cluster? Kind regards,</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 17 Apr 2026 05:54:02 +0200</pubDate>
        </item>
                <item>
            <title>Full vm In-place restore not restoring original vm UUID, categories, or vm description</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/full-vm-in-place-restore-not-restoring-original-vm-uuid-categories-or-vm-description-11793</link>
            <description>Hi all.  I tried to do a full VM restore of a Nutanix VM with inPlaceRestore and overwriteVM set to True (tried from both the Command Center UI and the API). The expected behavior in my mind is that the VM’s metadata (VM description, categories, and UUID) is kept after the restore, just like how a VMware full VM restore has been working for us. In our testing, the VM is deleted and then re-created, causing the VM UUID to change; at the same time, all the other VM metadata fields like description and categories are not restored. The Guest Tools connection is broken because of the new UUID as well. Is this expected behavior? Could this have been caused by misconfiguration on the VM Group/Hypervisor side?  Commvault 11.40, Nutanix AHV 7.5. Thanks in advance!</description>
            <category>Virtualization and Containers</category>
            <pubDate>Thu, 16 Apr 2026 23:51:48 +0200</pubDate>
        </item>
                <item>
            <title>O365 Plan Retention: Infinite vs. Retain Deleted Items logic</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/o365-plan-retention-infinite-vs-retain-deleted-items-logic-11777</link>
            <description>Hi all, As I understand it, there are basically two possibilities regarding the Office 365 plan retention: Infinite and Retain deleted items for X period of time.Infinite: All items (e.g., emails, sites, folders) remain in the backups forever.Retain deleted items for X period: Only data that has been deleted from the source (M365) is removed from the backups after the given period. All other &quot;live&quot; (non-deleted) data remains in the backup forever, just like with the infinite option. Am I correct in thinking that once Office 365 data is backed up, it stays in the storage libraries forever, with the only exception being deleted data (when that option is selected)?Even though Server Plans have their own retention settings, do they affect Office 365 data? It seems to me that M365 data follows an &quot;incremental forever&quot; logic and there is no way for retention to delete &quot;live&quot; data from the storage.Please confirm my understanding or correct me if I am mistaken.Thanks in advance for your contributions!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 16 Apr 2026 15:59:05 +0200</pubDate>
        </item>
                <item>
            <title>Exchange Online No valid Azure app found</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/exchange-online-no-valid-azure-app-found-11776</link>
            <description>Hi everyone, For about a week now, we have had 2 failed mailboxes in our Exchange Online backups with the failure reason &quot;No valid Azure app found. Please run Check Readiness&quot;. In the same backups, 346 mailboxes show no issues. Only these 2 mailboxes are affected, and I cannot find what is causing this. The mailboxes still exist and are no different from other shared mailboxes. The logs show no distinct errors for these mailboxes, so I am a bit at a loss. I also see the same issue across multiple Exchange applications. Does anyone have an idea how to resolve this? Kind regards, Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 16 Apr 2026 13:40:03 +0200</pubDate>
        </item>
                <item>
            <title>Add Certificate on Helm deployed commvault</title>
            <link>https://community.commvault.com/getting-started-51/add-certificate-on-helm-deployed-commvault-11794</link>
            <description>I have deployed Commvault components using this procedure, but I’m not sure how to add the SSL root certificate for my S3-compatible storage so that it is available on the deployed pods. Is there a procedure for adding certificates to the MediaAgent pods? Thanks</description>
            <category>Getting Started</category>
            <pubDate>Thu, 16 Apr 2026 04:45:14 +0200</pubDate>
        </item>
                <item>
            <title>Backup failing VM with Error Code: [91:300] Description: Error creating virtual machine [VMNAME] snapshot</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/backup-failing-vm-with-error-code-91-300-description-error-creating-virtual-machine-vmname-snapshot-11790</link>
            <description>Hi Team, We are backing up a VM (hosted on MS Hyper-V) whose data disk is located on an SMB NAS file share. We are getting the error below. Error Code: [91:300] Description: Error creating virtual machine [VMNAME] snapshot. Please check the virtual machine snapshot tree and the hosting volume for free space. Checkpoint operation for &#039;VMNAME&#039; failed. (Virtual machine ID 83A8CX27-…….A4) Checkpoint operation for &#039;VMNAME&#039; was cancelled. (Virtual machine ID 83A8CX27-…….A4) &#039;VMNAME&#039; could not initiate a checkpoint operation: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4) &#039;VMNAME&#039; could not create auto virtual hard disk \\IP\Host\VM_Disk_3714DCEE-3E97-4364-A678-897EF9FE93F0.avhdx: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4)]. Source: commserve, Process: JobManager. I have verified in the Hyper-V host event viewer that it is throwing error 19100: &#039;VMNAME&#039; background disk merge failed to complete: The process cannot access the file because it is being used by another process. (0x80070020). (Virtual machine ID 83A8CX27-…….A4). So, my plan is to take downtime and merge the .avhdx disk into the .vhdx. Could you please confirm whether there is an alternative, or whether I can go ahead with my plan of action? Thank you.</description>
            <category>Virtualization and Containers</category>
            <pubDate>Thu, 16 Apr 2026 02:53:34 +0200</pubDate>
        </item>
                <item>
            <title>Restore -&gt; Insufficient space for caching</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/restore-insufficient-space-for-caching-11792</link>
            <description>Hi, I am trying to restore guest files and folders from a VM, and I get an &quot;insufficient space for caching&quot; message. If I try to restore from a more recent date, it works fine. What can I do? Thanks, Dennis</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 13:10:09 +0200</pubDate>
        </item>
                <item>
            <title>Manual Upgrade of Visual Studio Tools for Applications</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/manual-upgrade-of-visual-studio-tools-for-applications-11785</link>
            <description>Hi. Is there a supported way to manually upgrade Visual Studio Tools for Applications? I do not see any mention of it. The current release is 16.0.31110. Our customer advised that CVE-2025-29803 was detected in the current release, and an upgrade should be done to version 16.0.35907. The current Commvault release is 11.40.42; I also don’t see any mention in the Commvault maintenance release notes for this version. Any advice? Thanks.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Wed, 15 Apr 2026 06:15:10 +0200</pubDate>
        </item>
                <item>
            <title>Auto Discover DB instance options</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/auto-discover-db-instance-options-6151</link>
            <description>This is a quick one. When applying the key below to disable auto discovery of instances, is there a way to limit it to only one client instead of applying it at a global level? https://documentation.commvault.com/additionalsetting/details?name=nDisableAutoDiscoverInstancePostInstall In my case, I have MySQL DBAs who want auto discovery on and Oracle DBAs who want it off. They don’t want to manually turn it off every time an instance is added.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Tue, 14 Apr 2026 11:22:20 +0200</pubDate>
        </item>
                <item>
            <title>Commvault MS SQL plug-in user permission / role</title>
            <link>https://community.commvault.com/share-best-practices-3/commvault-ms-sql-plug-in-user-permission-role-11786</link>
            <description>Hi everyone, I&#039;m working on a custom set of permissions for our MSSQL admins to allow them to perform on-demand backups. Despite granting what I believe to be more than sufficient privileges, they are still encountering errors. The process works perfectly when the user has master rights. The current permissions and the resulting error were attached as screenshots. Has anyone successfully configured a least-privilege role for this scenario and could share it?</description>
            <category>Share Best Practices</category>
            <pubDate>Tue, 14 Apr 2026 09:59:27 +0200</pubDate>
        </item>
                <item>
            <title>Recovery Point not visible after Synthetic Full Backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/recovery-point-not-visible-after-synthetic-full-backup-11781</link>
            <description>Hello, After running a synthetic full backup in our Commvault environment, the job completes successfully. However, we are unable to see any new recovery point in the CommCell Console / Command Center after the backup. Version: 11.40.38 (April).</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 13 Apr 2026 22:13:30 +0200</pubDate>
        </item>
                <item>
            <title>Verify Connection status not updating for M365 Exchange app despite successful mailbox browsing</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/verify-connection-status-not-updating-for-m365-exchange-app-despite-successful-mailbox-browsing-11758</link>
            <description>Hi everyone, We are facing a confusing issue with the M365 Exchange Online agent in Commvault version 11.32. According to the documentation on the Exchange connection settings (https://documentation.commvault.com/2023e/software/complete_microsoft_365_service_catalog_for_exchange_online_using_custom_configuration_option.html), I should click &quot;Verify Connection&quot; to validate the Azure app registration and update the app status. However, when I click this button, the status doesn&#039;t update to &quot;Successful&quot;; it remains blank and shows no confirmation of a successful binding. The strange part is that even though the verification seems to fail or shows no status, I am able to browse mailbox items and users within the Office 365 app in the Commvault Command Center. Have you experienced this behavior before? Is it a known bug, or what is your take on this issue? Any input would be greatly appreciated!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 13 Apr 2026 15:37:34 +0200</pubDate>
        </item>
                <item>
            <title>Improved Navigation in the Commvault Documentation Portal</title>
            <link>https://community.commvault.com/platform-updates-47/improved-navigation-in-the-commvault-documentation-portal-11778</link>
            <description>We’re pleased to share an important update to the Commvault documentation portal for Innovation Release 11.42: a redesigned left-hand navigation panel that is now aligned with the structure of the Commvault Command Center interface. What’s changing: in the past, the organization of our documentation differed from the navigation experience within the product, which sometimes made it harder to move between the Command Center and supporting documentation. With this update, the documentation structure mirrors the layout and logic of the Command Center, creating a more intuitive and consistent experience when navigating between the product and its documentation. What this means for you: easier navigation between the Command Center and documentation, faster access to relevant information, and a more streamlined and consistent user experience. This change is designed to help you find answers more quickly and reduce friction when using documentation alongside the product. We value your feedback: as you explore the updated navigation, we encourage you to share your feedback so we can continue improving your experience. Click the &quot;Give Feedback&quot; link within the documentation portal, select &quot;I have feedback about the documentation site&quot; under Feedback Type, and share your thoughts on the new navigation: what works well and what could be improved. Your input directly helps us refine and enhance the documentation experience.</description>
            <category>Platform Updates</category>
            <pubDate>Sun, 12 Apr 2026 22:19:48 +0200</pubDate>
        </item>
                <item>
            <title>Log commit jobs after disabling log caching</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/log-commit-jobs-after-disabling-log-caching-11764</link>
            <description>Hi, After updating our environment to 11.40.42, I saw repeated log commit jobs failing with the following error: Sweep job has failed on Media Agent [xxxxxxxx]. Error [0xECCC000D:{I2::FileSweepScanner::Scan(684)} + {I2::SweepScanner::GenerateCollectFileForMountPath(269)} + {CQiFile::Open(95)} + {CQiUTFOSAPI::open(77)/ErrNo.13.(Permission denied)-Open failed, File=\\?\UNC\XX.XX.XX.XX\MountPoints\XXXXXXXX\Disk08\63WA3P_12.04.2024_11.32\CV_MAGNETIC\CV_APP_DUMPS\SDT\00DF80E8-D0C5-47B8-939E-E5BBC4908E99\collecttot2.cvf, OperationFlag=0x810A, PermissionMode=0x80}] I found that log caching was turned on in the plan used for the client. As we do not want to use log caching, I disabled the option on the t-log schedule in the plan. I now see that normal transaction log backups are running for the client instead of the log commit jobs. However, I still get a log commit job every 2 hours which keeps failing with this error. How do I get rid of the log commit job? A full backup has been made since I disabled the log caching option. I don&#039;t mind losing the logs from the few days this was still enabled and failing. Thanks in advance. Paul</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 10 Apr 2026 15:25:11 +0200</pubDate>
        </item>
                <item>
            <title>Commvault version 11.36 cannot back up virtual machines in the PVE 9.1 + Ceph 19.2 cluster.</title>
            <link>https://community.commvault.com/virtualization-and-containers-48/commvault-version-11-36-cannot-back-up-virtual-machines-in-the-pve-9-1-ceph-19-2-cluster-11775</link>
            <description>We are running a relatively new version of Ceph. In this newer Ceph release, the RBD commands used to create VM snapshots require specifying both the namespace and the keyring file to locate the VM disks. By contrast, Commvault locates VM disks using only the pool in its RBD command parameters. Commvault version 11.36 only supports older versions of Ceph.  </description>
            <category>Virtualization and Containers</category>
            <pubDate>Fri, 10 Apr 2026 06:54:54 +0200</pubDate>
        </item>
                <item>
            <title>Move index directory on Index Server node</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/move-index-directory-on-index-server-node-11733</link>
            <description>Hi, I need to move the index directory on the Index Server node. Documentation and Arlie point to the IndexServerDirectoryMove workflow, but it does not exist in the current environment and is not available on the Store. It is also not possible to manually change the directory location, as the path in the edit node screen is greyed out. I hope someone can help me with this issue.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 09 Apr 2026 15:31:18 +0200</pubDate>
        </item>
                <item>
            <title>Oracle database name longer than 8 letters</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-database-name-longer-than-8-letters-8796</link>
            <description>Hello, we have a problem configuring a RAC instance where the database name is longer than 8 characters. When we try to configure it, we see in the logs that the database name is truncated to 8 characters. The database name is AKJFP14LOG. [size=e0]28841 70a9 05/03 13:05:24 ### RacBrowser::LocalInstanceBrowse() - Oracle SbtLibrary:[/opt/commvault/Base/libobk.so]28841 70a9 05/03 13:05:24 ### RacBrowser::FillInstanceStatus(1034) - m_oraInfoReturn [size=e0]28841 70a9 05/03 13:05:24 ### RacBrowser::ValidateDBProps() - Received DBStatus=READ WRITE. Is it a problem with /opt/commvault/Base/libobk.so? Thank you. Thomas</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 09 Apr 2026 07:33:55 +0200</pubDate>
        </item>
                <item>
            <title>CV certification</title>
            <link>https://community.commvault.com/readiverse-academy-60/cv-certification-11768</link>
            <description>I don’t have a work email ID, as I lost my job, and I want to take up CV certification with my personal email. How do I register for the exam?</description>
            <category>Readiverse Academy</category>
            <pubDate>Thu, 09 Apr 2026 01:23:29 +0200</pubDate>
        </item>
                <item>
            <title>🚀 (Updated) Proactive Advisory: Transition from Exchange Web Services (EWS) to Microsoft Graph API</title>
            <link>https://community.commvault.com/platform-updates-47/updated-proactive-advisory-transition-from-exchange-web-services-ews-to-microsoft-graph-api-11349</link>
            <description>Updated advisory as of April 10, 2026. We will continue to provide new updates as we see further developments from Microsoft around their transition. - Commvault Support &amp; Community team. At Commvault, we are committed to ensuring your data remains resilient and your transitions remain seamless. We are closely monitoring Microsoft’s retirement of Exchange Web Services (EWS) in Exchange Online and are already engineering the path forward for your backup environment. What is changing? Microsoft is transitioning from the 20-year-old EWS architecture to the modern, REST-based Microsoft Graph API. This shift enhances security via OAuth-based authentication and provides a more unified architecture across Microsoft 365. The timeline, per Microsoft, for the EWS transition: June 30, 2026: EWS access blocked for F1, F3, and Kiosk licenses. October 1, 2026: phased disablement of EWS begins (tenant-controlled). April 1, 2027: EWS permanently retired for all Exchange Online tenants. IMPORTANT: This change applies only to Exchange Online. On-premises Exchange Server environments are not impacted by this deprecation. Commvault milestone details: We understand that your backup continuity is non-negotiable. Commvault is developing a robust, Graph API-based Exchange backup solution designed to bridge existing API gaps while maintaining the high standards of recovery you expect. Planned availability: Summer 2026, targeting release 11.44 LTS, with deployment models supported for both SaaS (Metallic) and Software. Our timeline ensures your environment is &quot;Graph-ready&quot; well before Microsoft begins the phased disablement in October 2026. Dependency on the Microsoft ecosystem: As a Microsoft partner, Commvault’s development timeline is inextricably linked to Microsoft’s API release schedule. While Graph is the future, Microsoft has acknowledged that it does not yet have full feature parity with EWS.
What this means for you: Shared timelines: our delivery of 11.44 LTS depends on Microsoft releasing stable, production-ready versions of the necessary Graph endpoints. Gap closure: we are working directly with Microsoft to ensure they address feature gaps (such as archive mailboxes, public folders, and advanced metadata) so that your backup fidelity is not compromised. Proactive adaptation: if Microsoft shifts their roadmap or delays critical API features, Commvault will adjust our engineering efforts to keep your data protected using the most secure methods available. Recommended actions for administrators: While the transition is still a ways off, there are two key areas to keep on your radar. Licensing audit: users on F1, F3, or Kiosk licenses will lose EWS access by June 2026; if these users require backup via EWS until our Summer 2026 release, consider temporary license upgrades (Plan 1/2 or E3/E5). Permissions prep: moving to Graph will require reauthorizing Azure AD applications and granting new OAuth scopes; we will provide detailed &quot;how-to&quot; documentation closer to the 11.44 LTS release. Stay tuned: We will continue to share updates, timelines, and guidance here and through our KB. For full Microsoft guidance, take a look at this blog post from their Exchange team and this Microsoft Learn article. As always, don’t hesitate to post questions here or reach out to your account team.</description>
            <category>Platform Updates</category>
            <pubDate>Wed, 08 Apr 2026 21:51:13 +0200</pubDate>
        </item>
                <item>
            <title>Guidance Required – Promoting LiveSync DR CommServe (Azure) to Production During On-prem Decommission</title>
            <link>https://community.commvault.com/share-best-practices-3/guidance-required-promoting-livesync-dr-commserve-azure-to-production-during-on-prem-decommission-11769</link>
            <description>We are currently planning a transition of our Commvault environment and need guidance and best practices for our scenario. Current setup: the primary CommServe (production) is hosted in a colo data center; a LiveSync DR CommServe is already configured and running in Azure; Media Agents and backups are currently aligned with the colo-based production CommServe. Planned change: the colo site is being decommissioned; all workloads and servers are being migrated to Azure; the existing DR CommServe in Azure will be promoted to production; post transition, backups will be fully aligned to Azure workloads and infrastructure. Objective: promote the Azure DR CommServe to act as the new production CommServe; reconfigure clients, Media Agents, and storage policies to align with Azure; and safely decommission the existing colo-based CommServe. Key questions: LiveSync failover: what is the recommended approach to promote the DR CommServe to active production? Client connectivity: what is the best practice for updating client communication, and are there any tools/scripts for bulk client updates? Database consistency: are any specific checks recommended before promoting the DR? Media Agents &amp; storage: what should be considered when switching from colo-based to Azure-based storage, and is there any impact on existing storage policies or deduplication databases? Licensing / configuration: are there any licensing or configuration implications when moving fully to Azure? Best practices: what is the recommended sequence of steps for this migration?</description>
            <category>Share Best Practices</category>
            <pubDate>Wed, 08 Apr 2026 11:24:35 +0200</pubDate>
        </item>
                <item>
            <title>HSX went read only state</title>
            <link>https://community.commvault.com/hyperscale-x-q-a-54/hsx-went-read-only-state-11773</link>
            <description>Good day to all. In my environment, HSX went into read-only mode after reaching the 85% threshold. I changed the retention policy from 30 days and 1 cycle to 30 days and 0 cycles. After that, I observed that the DDB shows 0%, where previously it was 33%; also, space is not reclaimed after the space reclamation and data aging jobs. Kindly suggest. Regards, Robert</description>
            <category>HyperScale X Q&amp;A</category>
            <pubDate>Wed, 08 Apr 2026 08:57:54 +0200</pubDate>
        </item>
                <item>
            <title>Get to Know Arlie (Better): An Update from Commvault Support</title>
            <link>https://community.commvault.com/getting-started-51/get-to-know-arlie-better-an-update-from-commvault-support-11610</link>
            <description>Our support ecosystem brings together skilled engineers, shared global knowledge, and continuous training. It also showcases the use of innovative technological advancements. One such innovation is Arlie, our AI-enabled support assistant. Arlie reads logs, identifies patterns, and surfaces insights before small issues turn into major problems, helping our engineers deliver faster, smarter resolutions. This lets the team focus more on understanding the customer’s situation and less on searching through raw data. Arlie is enabled for all SaaS customers by default in Command Center today and is available in Command Center starting with SP40 for software customers. Read more about enabling Arlie in the documentation here. You can also access it via arlie.commvault.com or through our Support portal. Having Arlie in our customers’ hands enables you to search for solutions directly without having to wait on Commvault. This can be a lifesaver while handling critical, time-sensitive issues. Arlie can be your accelerated, “always-on” problem solver, and our support engineers are always available to provide clarity and deeper assistance. Read more about Arlie here: Arlie’s Latest Enhancements: Your New and Improved AI Assistant; Arlie: What AI in IT Was Meant to Be; The Agentic Revolution: Commvault, Arlie, and the Future. If you’ve recently experienced AI-assisted support within our ecosystem, I’d love to hear your thoughts through this brief Arlie feedback survey. Your feedback directly helps us refine our approach. Every insight you share will help strengthen our capabilities. I look forward to sharing more about Arlie, and we’d love to hear from you about your questions, ideas, and how you’re testing and using Arlie.</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 21:30:14 +0200</pubDate>
        </item>
                <item>
            <title>SAP Hana agent not available for download from cloud.commvault.com</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/sap-hana-agent-not-available-for-download-from-cloud-commvault-com-11519</link>
            <description>I cannot see SAP HANA in the list of database agents available for download from cloud.commvault.com.</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Tue, 07 Apr 2026 21:27:53 +0200</pubDate>
        </item>
                <item>
            <title>upgrade from 11.32.x to 11.40.x</title>
            <link>https://community.commvault.com/software-upgrades-and-updates-55/upgrade-from-11-32-x-to-11-40-x-11757</link>
            <description>I’m planning to upgrade our Commvault environment from 11.32.x to 11.40.x (long-term support) in the very near future. We are currently on Windows Server 2019 Datacenter. Will Commvault 11.40.x run on Windows Server 2019? We are also using MS SQL Server 2016 with our current 11.32.x. Does 11.40.x support this, or does it need to be upgraded as well using the Commvault 11.40.x installer? Please advise. Any input or links would be appreciated. BC</description>
            <category>Software Upgrades and Updates</category>
            <pubDate>Tue, 07 Apr 2026 21:14:16 +0200</pubDate>
        </item>
                <item>
            <title>How to earn badges</title>
            <link>https://community.commvault.com/community-guides-30/how-to-earn-badges-138</link>
            <description>Participation, of course! 🤩 🤩 🤩 For all the actions you take in the community (asking questions, starting conversations, answering questions, participating in conversations, and sharing best practices), you are awarded points, and those eventually add up into automatically applied badges that appear on your avatar and profile. There are also some special manually applied badges that the community team can use to recognize members. For example, you’ll see that some early participants in our community received Founding Member badges. And finally, there are external programs like Commvault’s Expert Certification Program, through which Commvault administrators and engineers have earned the “Certified Expert” rank that is reflected in the community. As we grow our community, more opportunities for recognition will be announced. Stay tuned!</description>
            <category>Community Guides</category>
            <pubDate>Tue, 07 Apr 2026 16:57:27 +0200</pubDate>
        </item>
                <item>
            <title>VM Backup Issue on Huawei FusionStorage HCI with commvault</title>
            <link>https://community.commvault.com/getting-started-51/vm-backup-issue-on-huawei-fusionstorage-hci-with-commvault-11766</link>
            <description>We are currently deploying a Commvault solution to protect VMs hosted on a Huawei FusionStorage HCI platform. We have encountered a specific issue with shared storage backups and would like to request your technical guidance. While backup and restore operations for VMs hosted on the local disks of the FusionCompute cluster are working successfully, we encounter an error when attempting to back up VMs hosted on FusionStorage shared storage (please see the attached screenshot for error details). Error code: 91:327. Commvault version: v11.36.90 (2024E). Deployment: Commvault All-in-One installed on a Windows VM (no additional VSA proxy). Environment versions: FusionCompute 8.9.0, FusionCube 8.3.0, FusionStorage 8.3.0. Based on the Commvault documentation, we understand that the FusionCompute SDK does not support direct FusionStorage operations, requiring Commvault to use a Java SDK and command-line utilities. The documentation also specifies that the Virtual Server Agent (VSA) must be installed on the Domain 0 host (Dom0). Could you please provide clarification on the following points? Compatibility: is Commvault v11.36.90 (2024E) fully compatible with the Huawei versions listed above (FusionCompute 8.9.0 / FusionCube 8.3.0)? Dom0 identification: how can we correctly identify the Dom0 host within this HCI architecture? VSA installation: what is the specific procedure for installing the VSA agent directly on the Dom0?</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 15:07:10 +0200</pubDate>
        </item>
                <item>
            <title>How to install and configure a FREL (File Recovery Enabler for Linux)</title>
            <link>https://community.commvault.com/getting-started-51/how-to-install-and-configure-a-frel-file-recovery-enabler-for-linux-11716</link>
            <description>I need to install a FREL on an existing Linux VM with the operating system already in place. I want to use this FREL with the Huawei FusionCompute hypervisor, and I have already installed the MediaAgent, File Server, and VSA packages on this Linux machine. How do I install and configure a FREL, and what is the next step to make it operational?</description>
            <category>Getting Started</category>
            <pubDate>Tue, 07 Apr 2026 15:03:03 +0200</pubDate>
        </item>
                <item>
            <title>Unable to delete old Storage Policy and DDB after CommCell Migration using MongoDB Recovery Assistant Tool</title>
            <link>https://community.commvault.com/storage-and-deduplication-49/unable-to-delete-old-storage-policy-and-ddb-after-commcell-migration-using-mongodb-recovery-assistant-tool-11727</link>
            <description>Dear Team, We recently migrated our CommCell server using the &quot;Recovering the MongoDB Database Using the Mongo Recovery Assistant Tool for Windows&quot; document. After the migration, the new CommCell server (Commsafe) cloned all configurations from the old server, including the old disk library, mount paths, and DDB entries. We have since configured a new disk library, new data path, and new DDB on the Commsafe server and successfully re-associated all subclients to the new storage policy. The old CommCell server has been fully decommissioned, and the old client win-i3vamh7fvl3 has been retired from the CommCell Console. However, when we attempt to delete the old storage policy, we receive an error stating that DDB backups are still associated with it. Additionally, when attempting deletion, the Job Controller shows pending/queued jobs tied to the old storage policy, which is blocking the deletion. What we need help with: how to clear the stale/orphaned DDB association left from the cloned configuration; how to remove or kill the queued jobs in the Job Controller that are linked to the old storage policy; and the correct sequence to fully delete the old storage policy, DDB, and disk library safely.</description>
            <category>Storage and Deduplication</category>
            <pubDate>Tue, 07 Apr 2026 07:25:03 +0200</pubDate>
        </item>
                <item>
            <title>Lab Install: Designing a Commvault Environment That Won’t Break at Scale</title>
            <link>https://community.commvault.com/share-best-practices-3/lab-install-designing-a-commvault-environment-that-won-t-break-at-scale-11767</link>
            <description>In a lab setting, I&#039;ve installed several Commvault servers for various use cases, including customer demos. Below is my take on designing a Commvault environment.

It’s easy to build a Commvault environment that works. It’s much harder to build one that still works a year later, under more load, more data, and more expectations. Scale doesn’t break things immediately—it exposes the shortcuts you took early on.

I’ve seen environments that looked perfectly fine at deployment:
- Backups running
- Jobs completing
- Storage holding up

Then growth hits. More VMs. More data. More retention. And suddenly:
- Jobs start missing windows
- Deduplication performance drops
- MediaAgents get overloaded
- Restore times creep up

That’s not a failure of Commvault. That’s a design problem. Let’s talk about how to build it right from the start.

1. Start with Architecture, Not Jobs
One of the most common mistakes is jumping straight into creating backup jobs. But jobs don’t define your environment—architecture does.
Core components to design properly:
- CommServe (your brain)
- MediaAgents (your workhorses)
- Storage (disk, object, cloud)
What scalable design looks like:
- Dedicated CommServe with proper sizing
- Multiple MediaAgents (not just one doing everything)
- Storage designed for throughput, not just capacity
If your architecture is weak, no amount of tuning will fix it later.

2. Don’t Underestimate the CommServe
Everything flows through the CommServe:
- Job scheduling
- Metadata
- Database operations
When it struggles, everything struggles.
Best practices:
- Use proper CPU and RAM sizing (don’t go minimal)
- Place the database on high-performance storage
- Regularly maintain and monitor the CommServe DB
What happens if you don’t:
- Slow job initiation
- Delays across the environment
- Reporting and UI lag
Scaling starts with a healthy control plane.

3. Scale Out MediaAgents Early
A single MediaAgent might work today—but it becomes your bottleneck tomorrow.
What scalable looks like:
- Multiple MediaAgents distributing load
- Workloads balanced across them
- Separation by function if needed (e.g., production vs archive)
Key considerations:
- CPU and RAM for deduplication
- Network throughput
- Disk I/O performance
Common mistake: “We’ll add another MediaAgent later.” By the time you need it, you’re already dealing with performance issues. Design for distribution from day one.

4. Get Deduplication Right (Or Pay for It Later)
Commvault’s deduplication is powerful—but it’s also one of the most misunderstood areas.
What to plan:
- Proper sizing of the Deduplication Database (DDB)
- Fast storage for the DDB (this is critical)
- Logical storage pool design
Best practices:
- Avoid overloading a single DDB
- Monitor dedupe ratios and performance
- Scale out instead of overloading
What goes wrong at scale:
- Slow backups
- Increased job runtimes
- DDB rebuild pain (and downtime risk)
Bad dedupe design doesn’t fail fast—it degrades slowly.

5. Design for Throughput, Not Just Capacity
Storage conversations often focus on: “How much data do we need to store?” The better question is: “How fast can we move data?”
What scalable storage looks like:
- High IOPS and throughput
- Parallel write capability
- Integration with object storage where appropriate
What causes problems:
- Cheap storage that can’t keep up
- Single repositories handling too much load
- Ignoring network throughput between components
Backups don’t fail because of size—they fail because of speed.

6. Separate Workloads Intentionally
Not all backups are equal. Mixing everything together leads to contention.
Examples of smart separation:
- Production vs dev/test
- Large databases vs small VMs
- Short retention vs long-term archive
Why it matters:
- Predictable performance
- Easier troubleshooting
- Better resource allocation
Segmentation brings control. Control brings stability.

7. Plan Retention Like It’s a Growth Problem (Because It Is)
Retention is where scale quietly explodes. What starts as 30 days of backups becomes 90 days, then a year, then compliance-driven retention.
What to plan:
- Storage growth over time
- Archive tiers (object/cloud)
- Lifecycle policies
Common mistake: designing for today’s retention, not tomorrow’s requirements. Retention is the silent driver of scale.

8. Build for Recovery, Not Just Backup
This is where most designs fall short. They optimize for backup success and storage efficiency, but ignore restore performance and recovery workflows.
What scalable recovery looks like:
- Fast access to recent backups
- Tested restore scenarios
- Clear prioritization of critical systems
What breaks at scale:
- Restores taking too long
- Difficulty finding the right data
- Bottlenecks during large recoveries
If recovery doesn’t scale, your design doesn’t scale.

9. Monitor Before It Hurts
At scale, issues don’t appear suddenly—they build.
What to monitor:
- Job duration trends
- MediaAgent load
- DDB health
- Storage latency
- Capacity growth
What boring (good) looks like:
- Predictable job completion
- No surprise slowdowns
- No last-minute capacity issues
If you’re only reacting to alerts, you’re already behind.

10. Keep It Simple (Seriously)
Over-engineering is just as dangerous as under-designing. Too many storage pools, policies, and exceptions lead to complexity, confusion, and operational mistakes.
What scalable simplicity looks like:
- Standardized policies
- Clear naming conventions
- Minimal exceptions
Complex environments don’t scale—they collapse under their own weight.
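The retention point above (section 7) can be made concrete with a rough back-of-the-envelope sketch. All numbers here (full-backup size, daily change rate, dedupe ratio) are illustrative assumptions for the sketch, not Commvault sizing guidance.

```python
# Rough illustration of how retention, not daily ingest, dominates capacity.
# Every constant below is a made-up assumption, not a Commvault sizing rule.
FULL_TB = 50.0        # size of one full backup, in TB
DAILY_CHANGE = 0.03   # 3% daily change rate feeding incrementals
DEDUPE_RATIO = 4.0    # assumed effective deduplication ratio

def stored_tb(retention_days: int) -> float:
    """Approximate on-disk capacity: one full plus daily incrementals,
    reduced by the assumed dedupe ratio."""
    logical = FULL_TB + FULL_TB * DAILY_CHANGE * retention_days
    return logical / DEDUPE_RATIO

for days in (30, 90, 365):
    print(f"{days:>3} days retention -> ~{stored_tb(days):.1f} TB stored")
```

The absolute figures are meaningless; the shape is the point: stored capacity grows roughly linearly with retention, so moving from 30 days to compliance-driven multi-year retention multiplies the footprint rather than nudging it.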
Bringing It All Together
Designing a Commvault environment that scales isn’t about adding more later. It’s about making the right decisions early:
- Strong architecture
- Distributed load
- Thoughtful storage design
- Realistic retention planning
- Recovery-focused thinking

Final Thought
Scale doesn’t break systems. It reveals them. If your design is solid, scale feels predictable. If it’s not, scale feels like failure. Build it right the first time—and your future self won’t be firefighting later.</description>
            <category>Share Best Practices</category>
            <pubDate>Mon, 06 Apr 2026 15:38:18 +0200</pubDate>
        </item>
                <item>
            <title>Oracle credentials &quot;vault&quot;</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/oracle-credentials-vault-11731</link>
            <description>Hi all, We’re currently looking at migrating Oracle backups to Commvault, and the scale is quite big — we’re talking hundreds of databases. One of the challenges is credential management. Right now we’re using connection strings that are specific to each Oracle instance, which isn’t very practical at this scale. So I’m wondering: is there any way in Commvault to handle Oracle credentials centrally? Some kind of credential vault, maybe? Or could this be solved with a workflow or another native Commvault feature? Any tips or best practices would be really helpful.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Mon, 06 Apr 2026 05:44:57 +0200</pubDate>
        </item>
                <item>
            <title>error getting running post backup script execution on backup</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/error-getting-running-post-backup-script-execution-on-backup-11763</link>
            <description>I am getting an error when executing a post-backup script on an Oracle database backup. The job fails with the following error: Error Code: [7:78] ... Executable file (/opt/commvault/Base/post_scripts/db_rman_listarch_remote_vlsbi.sh) not found in installation directory or executing outside of installation directory is disabled. However, I executed the script manually on the server and it runs fine, and the script has been copied into the Commvault installation directory, in the /opt/commvault/Base/post_scripts folder.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Sat, 04 Apr 2026 00:13:45 +0200</pubDate>
        </item>
                <item>
            <title>One Off Backups and Licence Usage</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/one-off-backups-and-licence-usage-11760</link>
            <description>Hi all. We’ve been using our Commvault setup to take one-off NDMP backups of critical research data from our central storage, then copying them to two tapes, all that good stuff. As these are one-off backups, I’ve been disabling the client activity and removing the licence afterwards, as they’ll only be needed for a potential restore down the line (retention is a minimum of 5 years). I’ve noticed that this doesn’t count towards licence usage, which in essence means that we can do as many “archives” as we have storage for, as long as we have sufficient licence headroom for the initial one-off backup. My worry is that this would be seen as circumventing licence costs and going against the fair-usage idea, and that if we were ever audited it would result in additional charges. I should add we’re not a massive setup: 1000 VMs and ~250TB of capacity. The archives in total would be 10s of TB, maybe ranging into 100s. Anybody got any relevant info or thoughts?
TIA
Ian</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 03 Apr 2026 17:47:56 +0200</pubDate>
        </item>
                <item>
            <title>Office 365 Configuration Helper fails with AAD Graph error</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/office-365-configuration-helper-fails-with-aad-graph-error-11762</link>
            <description>Hi all, when registering an Office 365 Exchange app in the Commvault Command Center (version 11.32), we attempted to use the Office 365 Configuration Helper tool. However, after running the helper, the following error occurs: &quot;Access blocked to AAD Graph API for this application.&quot; Although the provided link (https://learn.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) describes the general API migration, it does not offer specific advice for resolving this within the Commvault tool. We have already tried installing the Microsoft Graph PowerShell SDK (https://learn.microsoft.com/en-us/powershell/microsoftgraph/installation), but the error persists and the configuration tool still fails. Have you had any similar experiences with the Office 365 Configuration Helper tool? Thanks in advance for sharing your insights!</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Fri, 03 Apr 2026 06:19:59 +0200</pubDate>
        </item>
                <item>
            <title>1-touch QSDK communication error - network topology problem?</title>
            <link>https://community.commvault.com/self-hosted-q-a-2/1-touch-qsdk-communication-error-network-topology-problem-11111</link>
            <description>Hi, I’m trying to test 1-touch restore for Windows. When I try to connect to the CommServe, there is a “failed to setup a communication (QSDK) session” error. You can observe that the initial connection was successful (“connected to comserve”), but then this QSDK problem appears. I checked the logs, and there is info that there is no direct tunnel and that it couldn’t bring up any automatic tunnels to the CommServe. I think it could be a problem with network routes/topologies on the CommServe, but here is the question: how can I set up a network route or topology to a client which does not exist yet? I mean, when I start the 1-touch wizard and configure the network connection to the CommServe, it is not a real client but just some host with a random hostname, so how could the CommServe know which network settings apply to this client? I can set up a topology for some group, but which client should be in it? In this initial step of 1-touch I haven’t entered any client information yet. PS: I restarted the Commands Manager service and tried to play with groups, routes and pseudoclients, and ONCE I somehow successfully connected to the CommServe and went to the next step, but I cannot reproduce it. PS 2: 8403/TCP is open bidirectionally between the CommServe and this client.</description>
            <category>Self-Hosted Q&amp;A</category>
            <pubDate>Thu, 02 Apr 2026 23:12:36 +0200</pubDate>
        </item>
            </channel>
</rss>
