Commvault Q&A, release updates, and best practices
Using Server 2008 R2 as a media agent after the FR22.3 update
This is a follow-up conversation to my initial post about FR22.3; it is more of a findings topic than a question. I had three older Windows Server 2008 R2 media agents (since replaced) that experienced widespread issues after going to FR22.3.

NOTE: none of these issues are or were recorded with Commvault as known issues. The decision to replace/migrate the OS was made at the 11th hour, after weeks of working on these problems.

The basic application appears to work just fine on 2008 R2 with FR22.3: check readiness passes, services run, jobs can be run, and so on. The issue we ran into was consistent across all three machines, and since they were the only 2008 R2 media agents in our environments, it seemed too coincidental not to be related.

Within 4 hours of the FR22.3 update, our jobs started experiencing some or all of the following errors:
- Pipeline errors
- Media mount services: device not ready
- Library full

Even when attempting to select new snap mount hosts for jobs, I was getting connection-refused messages in the GXTail event logs. The mos
How to take advantage of a modern backup solution in the COVID-19 crisis and help employees work from home
When the pandemic hit, we were forced to adapt quickly and answer a bunch of questions we'd never asked ourselves: how can we keep in touch with our colleagues when we're not in the office? And how can we make sure we are still efficient while working from home?

It quickly became apparent that one seemingly small issue could prove catastrophic: our information system is designed so that all business documents are located on our servers and on our work computers. But this also meant that many people wouldn't be able to access these documents from home.

Since employees were already familiar with the backup system, this new situation only confirmed our desire to modernize the complete system and to digitize our business processes. Everyone was aware that these changes were necessary, that they would help them work through the emergency, and, most importantly, that working from home wouldn't be an insurmountable problem.

A new short manual was compiled to help employees adapt to working from home, with a
Cloud storage HTTPS connection
Hi there, I have successfully added cloud storage (S3 compatible). However, for the time being I am only able to set up the connection over HTTP. When I try to add a new cloud storage library using HTTPS, I get the error message "failed to do verification". To move forward, I would like to use the HTTPS protocol. I have a self-signed certificate from my NetApp S3-compatible cloud storage; is it possible to allow using it, since I don't have a CA-issued cert? Could Commvault be forced to use a self-signed certificate? What I did try was the additional setting described as "Use this additional setting and set its value to 0 to skip the checking of the server's certificate claimed identity for the cloud libraries", but it didn't help. Is it possible to verify the use of this setting? Do you have any suggestions for such a situation? Thanks for your ideas.
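Independent of whatever Commvault does internally, the usual TLS answer to a self-signed S3 endpoint is to trust that specific certificate rather than disable verification entirely. A minimal Python sketch of that idea; the function name and the PEM path are my own illustration, not anything Commvault ships:

```python
import ssl
from typing import Optional

def s3_tls_context(ca_pem: Optional[str] = None) -> ssl.SSLContext:
    """TLS context for talking to an S3-compatible endpoint.

    ca_pem: optional path to a PEM file containing the endpoint's
    exported self-signed certificate (hypothetical path).
    """
    # The default context verifies the hostname and the certificate
    # chain against the system CA store.
    ctx = ssl.create_default_context()
    if ca_pem:
        # Trust the exported self-signed cert as an additional CA,
        # instead of switching verification off altogether.
        ctx.load_verify_locations(cafile=ca_pem)
    return ctx
```

In practice the analogous step on the backup-server side is usually to export the NetApp endpoint's certificate and import it into the server's OS (or Java) trust store, so verification can succeed without a public CA.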
NAS transport mode backs up independent disks
Hello, I have implemented NAS transport mode for backups of VMware virtual machines residing on a NAS. In Books Online (https://documentation.commvault.com/11.21/expert/32585_frequently_asked_questions_for_virtual_server_agent_with_vmware.html#how-do-backup-and-restore-operations-handle-independentrdm-disks) I read that independent disks and RDMs are skipped during backup. That is what I'm used to with NBD, HotAdd, or SAN transport mode. With NAS transport mode, however, it seems the independent disk does get backed up; I can even browse it to do restores (VM files or guest files). We are not using IntelliSnap, so no hardware snapshots are made, and the options to include RDM and independent disks are not selected. I double-checked in VMware, and the disk of the VM is configured as independent; the moment the backup starts, I can see that the other disks of the VM get snapshotted while the independent disk does not, as expected. But it is still included in the backup job. Do
Automating restores for routine validation of library integrity
I am looking for a way to automatically perform a sample of VM restores on a daily/weekly basis from their most recent backups. Long story short, I ran into a data integrity issue on one of my libraries and had no idea: VM baselines were referencing a chunk on the library that did not exist, so any restore that referenced those baselines kept failing. My backups are set to full instead of synthetic full, so subsequent backups never indicated a failure. Since this incident, I would like to start routinely validating that VM restores actually work. (Plus, this was on a slide in a Commvault ransomware webinar about validation earlier this month 😊.) It does not look like this can be done natively within the CommCell, or can it? It looks like the road to go down is the Commvault PowerShell module and calling the APIs. Does anyone perform this kind of validation today? How do you accomplish it?
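Commcell-side specifics aside, the skeleton of such a validation job is small: sample a few VMs from the latest backups, trigger a restore for each (for example via the Commvault PowerShell module or REST API), and record pass/fail. A hedged Python sketch of that loop; `restore_fn` stands in for whatever restore call you wire up, and every name here is illustrative, not a Commvault API:

```python
import random
from typing import Callable, Dict, Iterable, List, Optional

def pick_validation_sample(vm_names: Iterable[str], k: int,
                           seed: Optional[int] = None) -> List[str]:
    """Choose up to k distinct VMs at random for a restore test."""
    pool = sorted(set(vm_names))   # deterministic pool order
    rng = random.Random(seed)      # a fixed seed makes runs reproducible
    return rng.sample(pool, min(k, len(pool)))

def validate_restores(vm_names: Iterable[str], k: int,
                      restore_fn: Callable[[str], bool]) -> Dict[str, bool]:
    """Run restore_fn on a random sample of VMs and collect pass/fail.

    restore_fn wraps the actual restore trigger (PowerShell module,
    REST API call, ...) and returns True when the restore succeeds.
    """
    return {vm: restore_fn(vm) for vm in pick_validation_sample(vm_names, k)}
```

Scheduled daily, any `False` in the result dictionary is your early warning that a baseline references missing data, long before a real recovery is needed.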
Why are the Download Center and Commvault Store not synchronized?
Hello CC, is there a reason why not all reports are listed in the Commvault Store? I was trying to download the File Anomaly report from the Store; however, it is not available there. I have to download it from the Download Center (cloud.commvault.com) instead.
Aux copy error: may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon
In our CIFN environment, we would like to take SnapVaults from our primary (snap copy) under our storage policy. We are currently using open replication (not OCUM). Our initial (copy) snapshot works fine, but the aux copies (SnapVaults) do not. Here is the message under the Progress tab when the job initiates: Error: Data to Storage Policy [storage-xxx] Copy [snap_vault] may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon. Any ideas?
Exporting backup history from a VSA client to a new VSA client (VMware)
Hello, my configuration:

Detected configuration from the CommServe:
- VSA client (VMware): vcenter1
- OS: Windows Server 2012 R2
- Indexing version: V1
- Virtual Server: working behind a proxy media agent
- Check Readiness: not OK, because the OS version isn't Windows
- Scheduled backup tasks: working fine

Actual configuration:
- vcenter1 OS: Linux (VMware)

Any idea how to update the OS version on the CommServe client vcenter1 while keeping the backup history and subclient configuration? I tried the solution provided by Commvault, Clone Client Dataset (https://documentation.commvault.com/commvault/v11/article?p=110180.htm), but it didn't work. Am I forced to deploy a new VSA client, reconfigure everything, and lose the unified backup history just to get a Check Readiness status of OK?
Windows Firewall Rules
Hello, does anyone know why the old way of adding firewall rules with the script no longer runs on new deployments? As you can see, only two firewall rules are deployed during installation. Can someone explain why these changes were made? Many thanks and best regards, Philipp
Kubernetes Node OS Requirement
The SP20 documentation says that Windows Server 2012 R2 is supported, but when you jump to the SP21 documentation it says that only 2016 and 2019 are supported. Is this correct?

SP20: https://documentation.commvault.com/commvault/v11/article?p=123637.htm
SP21: https://documentation.commvault.com/11.21/essential/124720_hardware_specifications_for_access_node_for_kubernetes.html

Thanks
Changing User Account for a backup?
We have certain servers/VMs in a controlled environment that only certain accounts have permission to access. The primary user account we use for our normal backups does not have access to this environment inside the virtual machine client for VM backups. Is there an easy way to change the user account the VM backups use, so that I can have them use my credentials, which do have access, in order to get a backup through a VM backup?
Finding and accessing private Teams Chats
On or around October 5, 2020, Microsoft changed the mailbox location where Teams stores the compliance records captured for personal chat and channel conversation messages. Microsoft didn't communicate this change, presumably because they considered compliance processing a background task that tenants cannot control.

What changed? Compliance messages for private Teams chats were previously stored in the \Conversation History\Teams Chat folder but are now stored in a non-IPM-subtree folder called TeamsMessagesData. For about a month after the change, Teams compliance records existed in both the old and new locations in Exchange. As of Nov. 6th, in accordance with their 'Plan of Record', Microsoft ran a background process that moved records from the Exchange Teams Chat folder to the new TeamsMessagesData folder.

What's the impact? In terms of programmatic access to these messages, this change affects the ability of the Graph (and Outlook REST) endpoints to access them. While you could never acce
VSA Backup job failure alert not showing failed VM name in mail body
Hi, we have a requirement to generate automatic tickets in SNOW for backup failures, using SNMP alerting. We have enabled and configured the alerts and are receiving them. When a file system backup fails, we receive an alert with the client name and the details below, as expected:

Error Code: [24:42] Failure Reason: Machine Name:

However, when a VSA backup fails with an error, this information is not shown in the alert (mail body); instead, an attachment is sent that contains the machine name and error. The attachment contains the following:

ServerName Failed Unable to quiesce guest file system during snapshot creation

Can anyone please help me with how to get the failed machine name for a VSA backup into the mail body using an alert, or any other method that gives the intended output over email or SNMP? Thanks in advance.
Postgres: pipe log files directly to Commvault
Hello, I'm new to Commvault; we are just starting to set it up for production. We currently run our backups on TSM but are preparing to switch to Commvault. Is there a way to send completed log files directly to Commvault via a pipe or another method? That is how we do it with TSM: when a log file has been filled completely, it gets piped directly into TSM, with no waiting at all. The suggested method of placing the files into a separate archive with the Postgres archive command doesn't appeal to us much; it would be nice to have the completed files go directly to a log backup instead of parking them while waiting for Commvault to pull them. Any suggestion would be much appreciated! We are running on Red Hat 7 with Postgres 10 and 12. Thanks!
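For context on the timing concern: Postgres invokes `archive_command` synchronously as each 16 MB WAL segment is completed, so nothing has to sit "parked" waiting to be polled; the command can hand the finished segment straight to whatever uploader you choose. A minimal sketch of the relevant settings (the script path is hypothetical, standing in for whatever pushes the file to the backup tool):

```ini
# postgresql.conf — WAL archiving fires once per completed segment
archive_mode = on
# %p = path to the finished segment, %f = its file name;
# the command must exit 0 only after the segment is safely stored
archive_command = '/usr/local/bin/push_wal_to_backup.sh %p %f'
```

Whether Commvault's Postgres agent can be the receiving end of such a push, rather than pulling from an archive directory, is the open question for the thread.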
Hello, is there any technical person at Commvault available for MSP-related topics and questions? I have been told that this could be a little difficult, but for us as an MSP partner that answer is itself a little difficult to accept, so does anyone know someone who would be able to discuss some MSP-related points with us? Many thanks and best regards, Philipp
MS365 - Onedrive Backup
Hello, we are deploying our MSP environment for Microsoft 365, and we found some issues with features that are documented but, for some reason, not present in the console. Specifically, we are looking to add users via AD groups for OneDrive for Business, which is documented at the link below but is not in the console. We can only add users individually, which in my view is not usable for big tenants: with a tenant of something like 1,000 users, you cannot add them manually, and it is a big source of error. The feature exists for Microsoft 365 mailboxes, which is great, but not for OneDrive for Business.

https://documentation.commvault.com/11.22/essential/93693_creating_user_defined_user_group[…]p_specific_user_accounts_for_onedrive_for_business.html

Thanks for your replies and your support with this question!
RAC clients and multiple CommCells
I have Commvault clients installed on the nodes of an Exadata cluster. Both the RAC clients and the FS clients for this cluster currently back up to CommCell A. What do I need to do if I want to back up one RAC client to CommCell A and one RAC client to CommCell B? Will I have to install a second instance of the Commvault client on the Exadata nodes first? Thanks