Adjust the block size for cloud lib
Hello Community,

I am currently troubleshooting high I/O during SQL backups. Upon investigation, I found the I/O wait type to be BACKUPIO. Further analysis of the Commvault performance log revealed that writing to the media is slow, which is consistent with what we found using a PowerShell script to determine the I/O wait type (BACKUPIO).

Currently, our storage lives in Azure, with three copies (primary, 2nd, and 3rd). The primary and 2nd copies have deduplication enabled, with a block-level DDB factor of 512 on each. To achieve a better balance between write performance and recoverability, the performance log recommends adjusting the block size in the data path properties to 128 KB or 256 KB.

My question is whether I need to change the block-level DDB factor to 128 KB on both the source and destination cloud libraries, and if so, what is the proper procedure to adjust the value? Do I need to put the MA in maintenance mode, ensure that no jobs are running on the DDB/MA, and then reboot the server?
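As background for why the block size matters here, a larger block means fewer, larger write operations for the same amount of backup data, which usually suits cloud object storage better. A quick illustrative calculation (this is generic arithmetic, not Commvault code, and 1 TiB is just an example workload):

```python
# Illustrative only: how the configured block size changes the number of
# write operations needed for a fixed amount of backup data.
import math

def write_ops(data_bytes: int, block_kb: int) -> int:
    """Number of block-sized writes needed for data_bytes of backup data."""
    return math.ceil(data_bytes / (block_kb * 1024))

one_tb = 1024**4  # 1 TiB of backup data, as an example
for block_kb in (128, 256, 512):
    print(f"{block_kb} KB blocks -> {write_ops(one_tb, block_kb):,} writes")
```

Halving the block size doubles the number of writes to the library, which is the trade-off the performance log recommendation is balancing against recoverability.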
Setting job precedence/stagger times
Good day all,

I guess this question is two-fold, and I've not really found a way to do this using a Plan. We're running SQL Plans that back up the DBs in the traditional Full/Diff/TLOG format.

We have one SQL instance that must not have its TLOGs backed up. My assumption is that it would need to go into its own Plan with the TLOG slider switched off?

The second part, for this same instance, is that the backups of the various DBs don't all happen at the same time because of their size. Is it possible to configure the instance so that a DB will not start backing up until the prior one is complete, and they keep cascading in this manner?

I understand you could create multiple subclients within each instance and assign different schedules. However, this could mean that if a DB finishes sooner than the next one is scheduled to start, there's an unused portion of the backup window. Conversely, if one schedule overruns into the next, then we have two jobs running, which we're trying to avoid.
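Outside of Plans, the cascading behaviour described above can be approximated by a wrapper that only starts each database backup once the previous one has finished. A minimal sketch of that chaining logic (the `run_backup` callable is a hypothetical placeholder, not a Commvault API call):

```python
# Minimal sketch: run backups strictly one after another, so the next job
# starts the moment the previous one completes and jobs never overlap.
# The run_backup function is a hypothetical placeholder, not a Commvault API.
from typing import Callable, Iterable

def run_chain(databases: Iterable[str],
              run_backup: Callable[[str], bool]) -> list[str]:
    """Back up each database in order; stop the chain on the first failure."""
    completed = []
    for db in databases:
        if not run_backup(db):      # blocks until this job finishes
            break                   # don't let a failed chain keep running
        completed.append(db)
    return completed

# Example with a stand-in backup function:
order = run_chain(["Sales", "HR", "Archive"], lambda db: True)
print(order)
```

This closes the "unused window" gap (the next job starts immediately) and the overrun problem (only one job runs at a time), at the cost of maintaining the script yourself.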
Outlook 2016 support for Exchange on-premises
Hi, a customer of ours wants to move off all of their Office 2013 installations, but the access node they use cannot be updated, as per Commvault documentation: https://documentation.commvault.com/2023/essential/93789_microsoft_outlook_requirements_for_exchange_mailbox_for_on_premises_exchange_servers.html

I know this is due to limitations in the newer Outlook versions, but as Microsoft itself will stop support, I see this becoming a problem for multiple customers: https://learn.microsoft.com/en-us/lifecycle/announcements/office-2013-ends-support-one-year

Can you let me know whether this is on the roadmap, or what Commvault's solution for this will be?

Kind regards, Thos Gieskes.
MySQL Backup using Percona XtraBackup
Hi Team,

We took a hot backup of our MySQL DBs with the Percona XtraBackup utility, following the document below. We will perform restore tests: can we restore any individual DB under the MySQL instance that we want, or is a full-instance restore the only option? https://documentation.commvault.com/2022e/expert/93111_mysql_backup_using_percona_xtrabackup.html

Best regards.
PostgreSQL with compression
Hi Community,

Is it safe to use software compression with DumpBased PostgreSQL backups? We've had a client complaining about jobs related to PostgreSQL backups taking too long. After a first check, we found that all the subclients created under DumpBased had compression turned off. We know that DBs have their own compression, but we wanted to be sure it is safe to enable it on the Commvault side.

Regards.
Cloud DR Backup now requires an access request to download
Hi, there seems to be a recent change: you now need to request access to download the DR sets. I can see some of the pros and cons; maybe you can tell us a little more about the reason, and about the process the request goes through on the Commvault side.

I tried a test run on Friday: I requested the download at 7:48 am (CEST) and got the request approved after 8 hours (at 3:32 pm CEST). The request was also sent to the mail address of the account holder. If there were a real disaster, I would have had to raise a ticket and hopefully could get the DR set faster, but that is another thing you would need to do, and if you need the DR set, you will already have your hands full.

To improve the process, I would like to suggest the following additions:

1.) Give a timeframe for how long the request will take. At the moment there is no information, and it would be good to have one. In a restore test or other non-vital operation, the 8-hour response time can be worked around.

2.) Give us the option to add another mail add…
Issues with vCD snapshot backups
Hello,

In the meantime, I found the following timestamp and also a warning about snapshot creation for the mentioned VMs. However, the logs give no reason why snapshot creation is disabled.

Error Code: [91:482]
Description: Snapshot creation is disabled on (VM Name).

Please help me find the solution for this error.
SQL Query for Disabled for Write
Hi Team,

I was developing a SQL query for library space details, in which I need to get the library name, capacity, and free space. I got the LibName, Capacity, and FreeSpace columns from the CSDB tables, but I also need a column through which I can exclude mount paths with the "Disabled for Write" option set. Can someone please let me know in which table the "disabled for write" status is available?

Thanks,
Harshavardhan.
Migrating files between OS types (Linux & Windows)
I am migrating a server from Windows to Linux and have a lot of files that need to be moved, and will use Live Sync replication for this: https://documentation.commvault.com/11.24/expert/92967_configuring_replication_for_file_system_agents_01.html

Each time we try to replicate a file directory from Windows to Linux, all the files get skipped. I can't find any documentation saying this is not supported; can anyone confirm or deny this? If I do it the other way, from Linux to Windows, it works fine.
Multi-Stream support for Vmware Full VM Restore
Does Commvault support, or plan to add support for, multi-streaming during a full VM restore in VMware? For example, a VM with 4 vmdk files could use 4 streams during restoration. We are running v11.24 and see that a single stream is used during the restore, which slows restores down, especially for larger VMs. We could of course use Live VM recovery, but that does not help if the VM is running a resource-heavy app.
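Conceptually, multi-streaming a full VM restore means copying the VM's disks concurrently rather than one after another. A generic thread-pool sketch of that idea (illustrative only, with placeholder disk names; this is not how Commvault schedules restore streams):

```python
# Illustrative only: restoring four disks with four concurrent workers
# instead of a single sequential stream. Not Commvault's implementation.
from concurrent.futures import ThreadPoolExecutor

def restore_disk(vmdk: str) -> str:
    # placeholder for the per-disk copy work
    return f"{vmdk}: restored"

vmdks = ["disk0.vmdk", "disk1.vmdk", "disk2.vmdk", "disk3.vmdk"]
with ThreadPoolExecutor(max_workers=4) as pool:  # one worker per disk
    results = list(pool.map(restore_disk, vmdks))
print(results)
```

With one stream, total restore time is roughly the sum of all disk copy times; with per-disk streams it approaches the time of the largest disk, which is why single-stream restores hurt most on large multi-disk VMs.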
Is there a way to make a subclient eligible for subclient policy association?
Good afternoon,

I have a few clients where, when I try to associate them with the subclient policy, their backup set does not show up. This is presumably because they are somehow not eligible to use a subclient policy. Is there a way to make these clients eligible without losing data in the existing subclients?

Thanks,
Chris.
Upgrading away from SQL 2012, what to select for ROLES?
Hello everyone,

I have an old Commvault installation with servers other than my CommServe that are still running SQL Server 2012. I'm following the documentation to uninstall all the packages, uninstall SQL Server, and do the reinstallation. As per the documentation, I took a screen capture of all the Commvault packages that were installed. When I run the installer, it asks me to select roles, and I have no idea what to check off here. Is there a way to find out what roles I need to select after all the Commvault software has been uninstalled?

Thanks,
Ken
Activate - Exchange System Governance
We started using Activate late last year (amazing, for those who have not deployed it yet), and while the file system projects are useful and abundant, I want to add our Exchange servers. Does anyone know if you can use just the DB of the Exchange server, or does each mailbox have to be collected within Commvault for the crawl/scan to work?
Tomcat Certificate expiry period
Hey all, I have a question: I just replaced a certificate and set the validity period to 397 days. The certificate works, but it expires after 87 days. How can that be, and why are only 397 days allowed? Here is the command:

keytool -certreq -keyalg RSA -alias tomcat -file "D:\Program Files\Commvault\ContentStore\Apache\conf\cvcert22.csr" -keystore "D:\Program Files\Commvault\ContentStore\Apache\conf\cvcert22.jks" -validity 397 -ext SAN=dns:XXXX
Vault tracker policy action stuck in running state
Hi all,

We have successfully set up a Vault Tracker policy, and there are a couple of tapes to be exported in the View Media review. However, if I click Run Now and check Actions under the VaultTracker option, I can see the action with a Running status, but nothing happens after that. Do you have any idea what I am missing? Of course, we have marked Auto Acknowledge under the Vault Tracker policy properties. Is there any log or additional setting that needs to be adjusted?
Azure SQL bacpac exports not cleaning up
I have a customer running Azure SQL backups. The backup process triggers an export of the database to a bacpac file in a dedicated storage account and then backs this up to a cloud library. I was under the impression that the data in the export location would only be transient as part of the backup process; however, looking at the storage account, there are bacpac files going back as long as we have had backups running for these Azure SQL accounts. Is this the expected behaviour, or should the backup process be cleaning up the staging location?
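If the staging location does turn out to need manual housekeeping, an interim workaround is a scheduled cleanup that deletes .bacpac files older than a retention window. A local-filesystem sketch of that (the path and retention value are assumptions, and a real Azure blob container would need the Azure SDK rather than `pathlib`):

```python
# Sketch: remove .bacpac staging files older than RETENTION_DAYS.
# Works on a local/mounted path; a blob container would need the Azure SDK.
import time
from pathlib import Path

RETENTION_DAYS = 7                    # assumed retention window
STAGING = Path("/mnt/sql-staging")    # hypothetical staging path

def purge_bacpacs(root: Path, retention_days: int) -> list[Path]:
    """Delete .bacpac files older than the retention window; return them."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for f in root.glob("*.bacpac"):
        if f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f)
    return removed
```

Keeping a retention window (rather than deleting everything) leaves the most recent export available for quick re-runs while stopping unbounded growth in the storage account.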
Failed to Find Tunnel to Server
After building out a server and attempting to add it into Commvault, I'm running into error flags. This is what I get within the EVMgrS log:

10344 3308 12/06 09:50:21 ### ERROR: CvFwClient::connect(): Connect to <IP Address>:8400 failed: Connection timed out
10344 3308 12/06 09:50:46 ### ERROR: CvFwClient::connect(): Failed to find tunnel to <server>
10344 3308 12/06 09:51:07 ### ERROR: CvFwClient::connect(): Connect to <IP address>:8400 failed: Connection timed out
10344 3308 12/06 09:51:07 ### LibConfigAppClientProp::onMsgPreconfigureClient() - Failure in ConfigurePreImagedClient. Check for CvInstallManager.log
10344 3308 12/06 09:51:08 ### LibConfigAppClientProp::onMsgPreconfigureClient() - Registration of Client: is failed.
10344 3308 12/06 09:51:08 ### operateEntityClientProperties() - Unable to register client
10344 248c 12/06 09:51:21 ### CWorkQueueAdmin::ProcessEndJobRequest() - Received request for job end token with parameters : [<?xml version="1.0" encodin…
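When triaging this kind of registration failure, it can help to pull just the ERROR lines and their timestamps out of the log. A small parsing sketch, with the line format inferred only from the snippet in this post and hypothetical sample values substituted for the placeholders:

```python
# Sketch: extract ERROR entries from EVMgrS-style log lines. The line
# format is inferred only from the snippet above; sample values are made up.
import re

LOG_RE = re.compile(
    r"^(?P<pid>\w+)\s+(?P<tid>\w+)\s+(?P<date>\d\d/\d\d)\s+"
    r"(?P<time>\d\d:\d\d:\d\d)\s+###\s+ERROR:\s+(?P<message>.*)$"
)

def errors(lines):
    """Return (time, message) pairs for every ERROR line that matches."""
    return [m.group("time", "message")
            for line in lines if (m := LOG_RE.match(line))]

sample = [
    "10344 3308 12/06 09:50:21 ### ERROR: CvFwClient::connect(): "
    "Connect to 10.0.0.5:8400 failed: Connection timed out",
    "10344 3308 12/06 09:50:46 ### ERROR: CvFwClient::connect(): "
    "Failed to find tunnel to myserver",
]
for time_, msg in errors(sample):
    print(time_, msg)
```

Filtering the timeouts out this way makes it easier to see that everything downstream (tunnel, registration) fails because the initial connection to port 8400 never succeeds.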
Office 365 & AzureAD authentication methods
We want to configure backups for Office 365 and Azure AD. According to the documentation, the most secure method currently available is registering (multiple) apps, which can be configured manually, or automatically when using a global admin with MFA temporarily disabled. The only possible authentication method seems to be client secrets, which is certainly better than service accounts, but I wouldn't call it the most secure method either. I'm not sure whether (Azure) conditional access can be applied to the app registrations. Is there any other way possible, such as certificates or even managed identities? It seems Commvault has possibilities in that direction, but only for Azure VMs.

Kind regards,
Tom
Need to add new MP for load balancing
I reviewed the client's forecast report and need to add a new MA for load balancing, and I have pulled the related SP information. Can you help me identify in which SP I need to add the new MP, or whether to add it in all SPs for load balancing? What do you think? (I use round robin in the SPs.) I also need to create a new DDB for the new MA for load balancing.

Storage policies, for example:
CommServeDRSP_LX_BD_FS\Primary
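For context on the round-robin setting mentioned above: with round robin, writes are expected to rotate evenly across all writeable mount paths in a data path, which is why a new MP only helps the storage policies whose data paths actually include it. A toy illustration of round-robin selection (names are made up; this is not Commvault's allocator):

```python
# Toy illustration of round-robin mount-path selection; the names are
# made up and this is not Commvault's allocator.
from itertools import cycle

mount_paths = ["MP1", "MP2", "MP3"]  # hypothetical writeable mount paths
picker = cycle(mount_paths)

# Six consecutive writes rotate evenly across the three paths:
assignments = [next(picker) for _ in range(6)]
print(assignments)
```
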