Recently active topics
Hello Commvault Community, when we try to restore data out of place from Microsoft DFS servers, we get a message that some files cannot be restored: "Reparse points failed to be restored as this is an out-of-place restore. Please check the restore result for more details." We want to find out which logs show which files were not restored when restoring DFS data to a server that is not covered by DFS replication. If we need to increase the logging level, for which logs? Thanks & Regards, Kamil
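To compare the restore errors against what actually exists on disk, it can help to enumerate which items in the source tree are reparse points (DFS folder targets, junctions, symlinks). Windows flags these with FILE_ATTRIBUTE_REPARSE_POINT (0x400) in the file-attribute bitmask. A minimal sketch, assuming it runs on Windows where os.stat() populates st_file_attributes:

```python
import os
import stat

# Documented Windows file-attribute flag (also stat.FILE_ATTRIBUTE_REPARSE_POINT)
FILE_ATTRIBUTE_REPARSE_POINT = 0x400

def is_reparse_point(attributes: int) -> bool:
    """Return True if the Windows attribute bitmask marks a reparse point."""
    return bool(attributes & FILE_ATTRIBUTE_REPARSE_POINT)

def find_reparse_points(root: str):
    """Walk `root` and yield paths whose attributes flag a reparse point.

    Relies on st_file_attributes, which os.stat() only populates on Windows;
    on other platforms every entry reports attribute 0 and nothing is yielded.
    """
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            attrs = getattr(os.stat(path, follow_symlinks=False),
                            "st_file_attributes", 0)
            if is_reparse_point(attrs):
                yield path
```

Running this against the DFS source and diffing the result with the restored tree shows exactly which links were skipped, independent of whatever the Commvault logs report.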
Hello Community, we would like some clarification about the application sizes reported in Command Center for the O365 apps. When looking directly at an Exchange application (see screenshot 1), we can find a value for associated entities and a value for application size. It seems that the associated entities relate to the number of mailboxes protected. But what is the exact definition of the value reported under "Application size"? Is it the cumulative total size protected since backups began, or the size of the Exchange data on the tenant at instant t? When looking at the chargeback details report for the same client (which has only one Exchange Online client configured) (see screenshot 2), we don't find the same values. There is an FET size which is less than the application size reported at the client level, and the mailbox count is only 46, while there are 67 mailboxes backed up. Does someone know the exact definitions of the values provided at these 2 levels?
Hello Community, I keep wondering about one thing. There's a feature called Ransomware Protection which allows MediaAgents to protect libraries and mount paths. For local storage, a kind of lock is provided. What if cloud storage, e.g. Azure or GCP, is connected to the MediaAgent? Is there still some kind of lock in place preventing objects from being modified by non-Commvault tasks?
Currently we are protecting a customer's SharePoint farm using the SQL agent on the SQL server and the File System agent on the rest of the servers; we can't get hypervisor access, hence the File System agent. We tested the recovery, restored all the servers, and used SQL database restore for the databases. However, the search functionality does not work on the recovered environment. We are thinking of using the SharePoint farm backup; the documentation says to install the SharePoint agent, so as I understand it:
> Install the SharePoint agent on all the servers in the farm
> Because the SQL agent is already installed on the SQL server, do I need to install the SharePoint agent on it as well? Also, do I need to stop SQL backups, since the SharePoint backups would protect the same set of data again?
> Create a pseudo-client and select all these servers as member servers
> Expand the pseudo-client, create a user-defined subclient, and on the content tab include web-front-end data
> Perform the backup
> Use SharePoi…
Hi everyone, we are currently deploying a Commvault environment to protect a large number of VMs (almost 6000) via VSA. The VSA proxies are Windows Server 2019 SE VMs that reside on the same cluster as the VMs we want to protect, so the transport mode will be HotAdd. We are currently in a test phase, gathering results to determine the number of VSA proxies we will finally deploy. The environment is the following:
Commvault version: v11.24.7
VCSA: vCenter 7.0u2
ESXi version: 7.0uw
VSA proxy OS: Windows Server 2019 SE
VDDK version used for backups: 7.0.1
As the VMware documentation says, on VMware 7.0u2 each PVSCSI adapter on the VSA proxy should be able to mount 64 VMDKs of the backed-up VMs. The issue is that, monitoring during the backup, each proxy is only able to mount 15 disks via HotAdd (stuck on the previous vSphere 6.5 limitation). We have manually added more than 20 VMDK disks to the virtual machine, so we understand this is not a VMware issue. The limitation arises…
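The gap between the observed 15-disk cap and the documented per-controller limit changes the proxy count dramatically. A back-of-the-envelope sizing sketch (the numbers are illustrative only, assuming one HotAdd slot per concurrently protected disk; 4 PVSCSI controllers x 64 disks is the theoretical 7.0u2 ceiling):

```python
import math

def proxies_needed(total_disks: int, slots_per_proxy: int) -> int:
    """Proxies required so every disk gets a HotAdd slot in a single wave."""
    return math.ceil(total_disks / slots_per_proxy)

# Observed: each proxy only mounts 15 disks concurrently (the old 6.5-era cap)
observed = proxies_needed(6000, 15)          # -> 400 proxies
# Expected on vSphere 7.0u2: up to 4 PVSCSI controllers x 64 disks = 256 slots
expected = proxies_needed(6000, 4 * 64)      # -> 24 proxies
```

In practice jobs run in scheduled streams rather than one giant wave, so the real proxy count sits well below these figures, but the 15-vs-256 ratio carries through to any sizing model.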
Hi all, I'm looking to configure Disaster Recovery for VMware. The documentation and the configuration are quite clear, but I am perplexed by the configuration of the VM network, in particular the mapping of the source network to the destination network. When I configure the recovery target (destination vCenter), it is possible to configure only one target network, instead of having the possibility to configure a complete mapping of all existing networks. The only way I found to configure the network mapping is to do it VM by VM, network adapter by network adapter, which makes the configuration impractical with hundreds or thousands of VMs. Is there any way to configure it globally? Is there any possibility of implementing this functionality in a future release? Thanks, Fabrizio
Hi Team, what kind of information do the CacheDB and ResourceMgrDB databases store, and what are their duties? The BOL page below states that they are non-critical, but it does not say what kind of information they store. https://documentation.commvault.com/11.24/expert/96200_commserve_recovery_frequently_asked_questions.html
Hi all, one of our customers (running CV 11.24.43) is interested in configuring a selective copy to Amazon S3 Standard-IA/Deep Archive (combined storage tier). So far, no problem. We intend to copy monthly fulls. Basic retention will be 365 days, and yearly fulls will get extended retention set to 10 years. They also want to lock the objects to prevent them from being deleted before the retention is met; the "Enable WORM Storage" workflow should take care of that. But it does raise a few questions:
- Would you recommend using deduplication in this scenario, or not?
- If we use dedupe, I suppose a DDB seal will take place automatically every 365 days, right?
- In this combined storage tier, metadata is written to Standard-IA and actual backup data is stored in the Deep Archive tier, right? Do we set the object-level retention on both tiers?
- Retention of Index V2 backups does not follow the storage policy settings and might be pruned earlier than the retention configured on the storage policy. What…
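On the dedupe-plus-WORM interaction, our understanding (worth verifying against the Commvault WORM storage documentation) is that with deduplication the object lock has to span roughly the DDB seal interval plus the copy retention, i.e. about twice the retention, because chunks in a just-sealed store can still be referenced by jobs written right before the seal. Without dedupe, each object only needs to outlive its own job's retention. A rough sketch of that arithmetic (`worm_lock_days` is a hypothetical helper name, not a Commvault setting):

```python
def worm_lock_days(copy_retention_days: int, dedupe: bool) -> int:
    """Approximate object-lock duration for a WORM-enabled cloud copy.

    Assumption: with dedupe, the DDB seals once per retention period, so
    the lock must cover seal interval + retention (~2x retention).
    Without dedupe, the lock only needs to match the copy retention.
    """
    return 2 * copy_retention_days if dedupe else copy_retention_days

with_dedupe = worm_lock_days(365, dedupe=True)      # -> 730 days locked
without_dedupe = worm_lock_days(365, dedupe=False)  # -> 365 days locked
```

Note that the 10-year extended retention on the yearly fulls would stretch the lock much further under dedupe, which is one argument for a non-deduplicated copy on a deep-archive tier.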
Hello All, we are configuring new cloud-based storage, and we wanted to know whether the CommServe has to have access to the cloud library. The storage and the MA will share a private network in which the storage presents its buckets to the MA, while the CommServe communicates with the MA through our backup network. In the configuration steps, we came across the following: So we wondered whether that means the CS has to have some sort of access to the storage (which is not the case on our platform, since the storage is only seen by the MA through their private network), or whether it is just the information related to the MA accessing the storage? Regards.
Hi All, I imported the "Client monthly growth report" from the Store in Command Center. When I open the report, it says that no records/data are available. In settings I'm keeping client metadata for 84 days, so it should show me a 2-3 month comparison for each client. Any suggestions?
Good morning all, I wanted to confirm that initiating a restore using Command Centre won't give you the option to change the data path (MediaAgent and library)? We're allowing business units to manage their own operations, and this is one requirement that has come up; it looks like we will have to get them to use the Java console instead? Thanks, Mauro
Hello everyone, how are you doing? We currently have a client using Commvault software who is somewhat lost in the alerts he receives via e-mail (even with the tokens configured in the alert, the information is not shown as he expects: the correct names of the VMs, which VMs are getting errors and why, etc.). He has asked whether there is an option to receive the alerts in Zabbix or something similar. I took a look at the alert configurations and I see options to use SNMP and webhooks. We already have a Zabbix server that we use to receive some specific alerts, such as a server being down or rebooted. I noticed in the documentation that Commvault can use SNMPv3: https://documentation.commvault.com/v11/essential/97609_setting_up_snmpv3_alert_notifications.html but I didn't find anything on how to properly configure this so we can use…
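If the webhook route is chosen, a small receiver only needs to translate whatever fields the alert template emits into a Zabbix-style event. A sketch with hypothetical payload field names (`client`, `severity`, `description` are assumptions for illustration, not the documented Commvault webhook schema):

```python
# Maps alert severity labels onto Zabbix's 0-5 severity scale.
# The label set here is an assumption; match it to your alert templates.
SEVERITY_MAP = {
    "Critical": 5,      # Zabbix "Disaster"
    "Major": 4,         # Zabbix "High"
    "Minor": 3,         # Zabbix "Average"
    "Warning": 2,       # Zabbix "Warning"
    "Information": 1,   # Zabbix "Information"
}

def to_zabbix_event(alert: dict) -> dict:
    """Translate a parsed alert payload into a Zabbix-style event dict."""
    return {
        "host": alert.get("client", "unknown"),
        "severity": SEVERITY_MAP.get(alert.get("severity"), 0),
        "message": alert.get("description", ""),
    }
```

The resulting dict could then be pushed to Zabbix via a trapper item (e.g. with zabbix_sender), keeping all the mapping logic in one place instead of in the alert templates.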
Hi Everyone, greetings. We are planning to migrate the CommServe from a Windows Server 2012 OS to 2019. The CommServe service pack is 11 SP9+, and we are planning to upgrade to 11.24 with the latest maintenance release. The question is: since the CommServe is running on the older 2012 OS, can we upgrade the CommServe to the latest version on the same OS before migrating, or do we need to migrate the database to the new hardware first and then upgrade the service pack? Thanks and regards.
Hi Team, I have several servers which have the File System and SQL agents installed. I grouped them by agent into a SQL client computer group. Now I want to give the SQL admins permission to back up and restore SQL instances. Unfortunately, the client computer group association counts for the whole client, so when I add the SQL admins as backup/restore users on that group, they can also restore and back up via the File System agent (as the whole client is added, not only the SQL agent). Is there a way to allow just the SQL agent and exclude the other agents with an automatic association? (The only way I found is to manually grant the right on each client's agent itself.)
Hi Team, we are getting the error below when the SQL job scheduler runs:
Description: Another backup is running for client [zucnlifdbmfoctn], iDataAgent [SQL Server], Instance [zucnlifdbmfoctn], Subclient [default].
Source: zucpgtssvcvcsm1, Process: JobManager
Hi everyone, we enabled 2FA on our customer's environment and disabled SSO. The customer is using one specific domain user for Delphix, and it connects to the Commvault environment with SSO. https://docs.delphix.com/docs537/delphix-administration/sql-server-environments-and-data-sources/virtualizing-databases-using-delphix-with-sql-server/managing-sql-server-dsources/additional-dsource-topics/linking-a-dsource-from-a-commvault-sql-server-backup We don't want to enable SSO because of the 2FA. https://documentation.commvault.com/11.24/expert/7907_enabling_two_factor_authentication_at_commcell_level_in_commcell_console_administrator.html Is it possible to enable SSO only for this specific domain user with additional settings or something?
I am seeing the same type of errors and I am running 11.23.47. We use the same RMAN backup script across 60+ clients without an issue; only one client continues to get this error, and even then it sometimes runs successfully. For example, yesterday it ran fine in the morning but it always fails during the backup window at 6pm. These are some of the errors logged:
3303: RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 03/30/2022 18:03:02
3304: ORA-19506: failed to create sequential file, name="040pn4gb_4_1_1", parms=""
3305: ORA-27028: skgfqcre: sbtbackup returned error
3306: ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
Starting backup at 30-mar-2022 18:01:41
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=2613 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: CommVault Systems for Oracle: Version 11.0.0(BUILD80)
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=1340 device type=SBT_TAPE
channel ORA_SBT_TA…
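When a log excerpt like this gets passed around in pieces, it can help to tally the ORA-/RMAN- error codes first, to see which failure dominates across the time window. A small sketch:

```python
import re
from collections import Counter

# ORA- and RMAN- codes are always followed by five digits
ERROR_CODE = re.compile(r"\b(?:ORA|RMAN)-\d{5}\b")

def count_error_codes(log_text: str) -> Counter:
    """Tally ORA-/RMAN- error codes appearing in an RMAN log excerpt."""
    return Counter(ERROR_CODE.findall(log_text))
```

Run over the full RMAN output for the failing 6pm window versus the successful morning run, the counts quickly show whether the pattern is the same ORA-19506/ORA-27028 media-manager chain each time or something intermittent.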
Our customer has a monitoring tool called Zabbix and uses it for alerts and monitoring on all their IT devices. They want to include the Commvault infrastructure servers (CS, MA) and HyperScale X in Zabbix monitoring, which requires the Zabbix agent to be installed on the local system. Can the Zabbix agent be installed on HyperScale nodes? Is there any document available from Commvault for this?