Commvault Q&A, release updates, and best practices
Hi, I have two Eternus CS200 S1 appliances, one in the main site and one in DR. Previously both were standalone appliances, each with its own license. During the implementation the requirements changed and the DR appliance was installed in cold standby mode. The licenses for both appliances were merged into one license, with support for dual IPs for DR purposes. Now there is a new requirement to separate the two CS200s as standalone appliances again. My questions are: Is there a way to get the license back for the DR appliance? What are the procedures to split the sites? Are there links for reinstalling the Commvault software on the DR appliance, and for configuring the DR appliance in Restore Mode? Regards,
We have a Linux client with the File System Agent installed. The defaultBackupSet has 5 subclients, all of which have been disabled for a long time. One of those subclients has 60 incremental backup jobs without a base full backup; there is no full backup at all. I double-checked the backup history (Backup Type: All / Job Status: All / Time Range unchecked / All operations). Do these jobs have any use? Would it be possible to restore some data from them even without a full backup?
Does your organization have a mandatory password change policy, and the Commvault SQL accounts (sqladmin_cv and sqlexec_cv) are not exempt from it? No worries, we have you covered. There is now a “Refresh Commserve Database Access Key” workflow available in your CommServe, and it can be scheduled to change the passwords of these accounts. Happy meeting your security/audit requirements. Have fun.
Good evening folks. I have a customer who experienced an issue with one of their servers, and it seems it broke the Commvault application. The CommServe can no longer communicate with the client (DIPs are defined). We tried to repair the installation with no success. We then uninstalled and reinstalled Commvault successfully in interactive mode (the remote option doesn’t work at the moment). The interactive installation takes a long time to register with the CommServe, but it does eventually complete. The CommServe does not reflect the change, however. We noticed that there is a locked ‘cvd’ process on the client and suspect that it may be the problem. The Unix admin suggests we reboot the client and try again. Before we go that route, is it possible to kill this process manually? It’s the one that has been flagged with 24Apr. Thanks so much. Regards, Mauro
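Not an official answer, but as a hedged sketch of the manual route before rebooting: on Unix clients the bundled `commvault stop` service-control command should bring the services down gracefully first, and any leftover `cvd` can then be killed by PID. The helper below only parses `ps -eo pid,comm` style output; the `commvault` binary being on the PATH is an assumption about your install.

```python
import shutil
import subprocess


def find_pids(ps_output: str, name: str = "cvd") -> list:
    """Parse `ps -eo pid,comm` style output and return PIDs whose command equals `name`."""
    pids = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the PID/COMM header row
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[1].strip() == name:
            pids.append(int(parts[0]))
    return pids


if __name__ == "__main__" and shutil.which("commvault"):
    # Stop Commvault services gracefully first (install path may differ per site).
    subprocess.run(["commvault", "stop"], check=False)
    # Then hunt for any straggling cvd process and kill it as a last resort.
    ps = subprocess.run(["ps", "-eo", "pid,comm"], capture_output=True, text=True).stdout
    for pid in find_pids(ps, "cvd"):
        subprocess.run(["kill", "-9", str(pid)], check=False)
```

If the process is stuck in uninterruptible I/O (state `D` in `ps`), even `kill -9` won’t clear it and the admin’s reboot suggestion is likely the only option.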
Hi there, I want to share my experience with DDB reconstruction. My colleagues started a DDB reconstruction because the DDB was not in good condition. There were no current backups of the DDB (the last backup was 14 days old), and maybe some other error messages were active. They decided to start a full reconstruction; maybe there was no option to take a manual backup of the DDB, though the database had not been updated with new records anyway. The thing is that the reconstruction is very time-consuming, and moreover, while the DDB is down, backup jobs are not possible. The workaround we used was to temporarily disable deduplication. Another caveat is that we are running out of disk space. And that is what a disaster looks like. At the end I will put my hypothetical questions: Is the DDB needed for restoring data? I wouldn’t say so. And what would happen if the broken DDB were deleted and a completely new one were built?
Hi, our customer recently upgraded from SP18 to FR20. Afterwards, they noticed they can’t reach the CommCell Console anymore using the URL http://”commserve”:81/console, which they always used in the past. Connecting to it over port 80 seems to work fine. We checked the IIS bindings and they are still there, so we’re unsure what happened. Has this been removed in FR20? We restarted the Commvault processes and ran an iisreset, but that didn’t help. Thanks. Jeremy
Hello, I was working on a document restore for a customer’s SharePoint environment. We noticed that when selecting an out-of-place location in the restore, we did not have the option to select a subdirectory in the browse; we can only select the root of a site (see screenshot). The customer wants to know if there will be an option available to restore to a different subsite or subfolder of a site. I was not able to find details in the BOL: https://documentation.commvault.com/11.24/essential/136058_restoring_sharepoint_online_document_to_another_sharepoint_site.html Kind regards, Thos Gieskes.
I have a client that is looking to migrate to plans. I have built out their required classes and retentions in a lab; the only thing I have a question about is the synthetic full scheduling, as they have a requirement to run a weekly synthetic full. When you create a plan, it creates the schedule for a full in line with the primary retention, as an automatic schedule every 30 days. Is it still acceptable to edit the schedule policy in the Java GUI for the synthetic full from “Every 30 days” to a shorter period (e.g. 7 days) and let the full backup window handle the window, or should I change from the automatic schedule to a weekly schedule with a set time? I want to achieve the flexibility of plans for them but also meet their requirements.
Our database team, which handles the SQL servers and databases, has sent us a notice that they want to upgrade the SQL database from 2014 to 2019: “Hello Team, BKR-BKCOM-01/COMMVAULT is still running on an old version of SQL 2014, hence we need to upgrade the server to the latest version, 2019. Can you help us on how to proceed? This is a physical server, so we need a new physical server to host the 2019 version and then move the databases. Once tested, we can take the old 2014 version offline and bring the new server online in production.” I don’t know if they are saying they want a new server for the Commvault CommServe or if they are hinting at their own server; it makes no sense why he added that in the email. Our specs for the CommServe easily match the SQL recommendations for 2019. The only thing I need help on, besides this article: https://documentation.commvault.com/11.24/expert/142607_upgrading_microsoft_sql_server_2016_express_to_microsoft_sql_server_2019_standard_edition.html Is …
I’m getting this error when I try to add a new Azure Stack Hub client to Commvault: Error Code: [91:139] Description: Unable to connect to virtual machine host as user. [Failed to get AzureStack URLs. Please make sure Azure PowerShell is installed on the proxy] Azure PowerShell is installed. Does anyone know how to get past this issue?
Using Simpana 11.20.46. I have done a last backup of an old OES server that has been decommissioned. This server has two subclients: the Linux file system and the OES file system. The storage policy takes a backup of both (the LX agent and the OES agent). This backup was finally written to tape using a synchronous storage policy; after that I checked that the data on these tapes was valid and write-protected the tapes. I could see both the LX and the OES backup. Then I removed this client completely from Simpana, and now I want to check whether it is possible to restore from these write-protected tapes. So I put the tapes back into the library and catalogued them, with no errors at all. I can see the catalogued contents, and a dummy client has been created, but with only one subclient: the Linux file system. So now when I try to see the jobs, I only see the Linux jobs, but I would like to see the OES ones too. Any tips? Thank you in advance, Ricard Malvesi Saguer
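One way to cross-check whether the OES jobs came back at all is to list jobs for the recatalogued dummy client from the command line. A minimal sketch; the `qlist job` flag names (`-c` for client, `-a` for agent type) and the `Q_LINUX_FS` value are my reading of the qcommand reference, so confirm them with `qlist job -help` on your CommServe:

```python
import shutil
import subprocess


def build_qlist_cmd(client: str, agent: str = "") -> list:
    """Build a `qlist job` command line for one client; the agent-type filter is optional.

    Flag names are an assumption from the qcommand reference, not verified here.
    """
    cmd = ["qlist", "job", "-c", client]
    if agent:
        cmd += ["-a", agent]
    return cmd


if __name__ == "__main__" and shutil.which("qlist"):
    # Run only where the qcommands are installed and a qlogin session exists.
    out = subprocess.run(build_qlist_cmd("dummyclient"), capture_output=True, text=True)
    print(out.stdout)
```

If the OES jobs show up here but not in the GUI browse, the problem is more likely the missing OES subclient/agent on the dummy client than the catalogue itself.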
Hi there, most likely we will need to temporarily change the disk library for a couple of storage policies because of a slow DDB reconstruction. My question is how to change the disk library for a primary storage policy copy. There is no drop-down menu to change the disk library in the Default Destination field. Does that mean I need to create a new secondary copy and then promote this newly created secondary copy to be the primary one? Is there any potential data loss? Do you have any hints or caveats for this task?
Hi, I have a strange behavior with Commvault and IntelliSnap. We trigger NetApp snapshots with Commvault jobs. These snaps are not copied to a Commvault library. The reason: we have a 2-node NetApp MetroCluster in place and a third NetApp with async SnapMirror synchronization to keep the history of snapshots there (so it is a kind of backup storage for the MetroCluster). We use Commvault only for triggering snaps and for restores from these snaps, but the data is kept on NetApp as snapshots. This way our service desk personnel need only one GUI to restore backups, both from NetApp and from all the other kinds of backups we store in a Commvault library. For about three months I have observed the following problem: when you browse a NetApp snapshot job with Commvault, you don’t see all folders. When you browse the same snapshot with Windows Explorer using ~snapshot after the UNC path, everything is there and you can restore it with no problem. Any idea why the folders are not visible in Commvault job browsing? I do …
Hi all, the QScript in the link here started to fail randomly for me yesterday, in both of our commcells. I could not understand why I have this issue; for fun’s sake I tried to copy-paste the command from Commvault’s documentation and from the -help section of qcommand. Maybe after 5 months of successful everyday use the script somehow decided on its own to mistype the command... lol, laughing through tears here, because the example in the -help section is itself faulty :) The same goes for the API call: the HTTP response code is ‘200’, but the response body contains one of Commvault’s generic response codes, which is ‘2’.

    def get_commvault_clients(self):
        """Return all servers with backups in the past 24 hours from the DK1 and DK2 commcells."""
        job_history = []
        for api in self.api_url:
            jobs_request = api["url"] + "ExecuteQCommand"
            jobs_response = requests.post(
                jobs_request,
                headers=api["headers"],
                timeout=3600,
                data="command=qoperation execscript -s
Hi folks, I’m looking for a guide or BOL pages about Commvault network hardening. I saw that there are some guides about ransomware protection and securing the CommServe, but I can’t find anything related to the network. Basically, what can be done for network hardening? Things like port restriction, certificate authentication, SSL handshakes, encrypted tunnels? Best regards.
Hello, I'm currently testing the HyperScale X 2.1 Reference Architecture deployment and I would like to know if it's possible to configure my network interfaces on the Data Protection Network at 1 GbE, because when I try, it tells me “Encountered error: Interface should have at least 10GB”. When I tested the HyperScale 1.5 Reference Architecture deployment, I was able to configure my interfaces at 1 GbE for the Data Protection Network without any problem. So I would like to confirm whether it's possible to use my 1 GbE interfaces instead of 10 GbE with HyperScale X 2.1 for the DPN. Please note that the other interfaces, for the SPN and replication, will be on a 10 GbE link.
Hello guys, I am trying to upgrade VMware and Hyper-V pseudo-clients to Indexing Version 2. I have found the following procedure: https://documentation.commvault.com/commvault/v11/article?p=10812.htm However, the procedure refers to clients (agents, i.e. the File System Agent). I cannot find any procedure for pseudo-clients. Any ideas how to upgrade them to Indexing Version 2? Rgds, Kamil
Hello guys, I need your help. The customer has an Exchange DAG pseudo-client with a ‘databases’ subclient configured to contain all DAG databases. They have removed some databases from the Exchange DAG nodes (at the server level). However, after clicking the ‘Discover’ button, the deleted databases are still visible as content. The CS and Exchange nodes are running 11.20.46; the Exchange version is 2016. I cannot find any hotfix for this issue. As a workaround, the customer moved the deleted databases to the default subclient and disabled activity at the subclient level. Any ideas how to fix it? Rgds, Kamil
Hello, I have a simple question. Is it safe to change the storage policy for a subclient? Is there any possibility of data loss? Does the old data remain in the old storage policy? I am asking because I need to temporarily change from a disk library to a tape library, and because the data is not synchronized it is not possible to promote the secondary copy to primary.
Hi, has anyone else from the community needed to automatically restart a “Completed with errors” VM job for the failed VMs only? There is an option in the GUI, but it would be much easier to restart it using the API or a workflow. If someone has found a way to do that, it would be great if you could share it! Thank you!
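In case it helps, here is a minimal sketch of scripting that restart over the REST API. The `/Job/{jobId}/action/resubmit` route, and the behavior that resubmitting a VSA job that completed with errors retries only the failed VMs, are my assumptions from the REST API documentation rather than something confirmed in this thread; please verify both on your feature release:

```python
import urllib.request


def build_resubmit_url(webservice_url: str, job_id: int) -> str:
    """Build the resubmit-job endpoint URL (route is an assumption; confirm in the docs)."""
    return f"{webservice_url.rstrip('/')}/Job/{job_id}/action/resubmit"


def resubmit_job(webservice_url: str, token: str, job_id: int) -> int:
    """POST the resubmit request with a QSDK auth token; returns the HTTP status code.

    Note: the API can return HTTP 200 while embedding an error code in the body,
    so the JSON response should be inspected as well, not just the status.
    """
    req = urllib.request.Request(
        build_resubmit_url(webservice_url, job_id),
        data=b"",  # empty body; the job id travels in the URL
        method="POST",
        headers={"Authtoken": token, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return resp.status
```

A workflow or script would first query job history for VSA jobs with a "Completed with errors" status, then call `resubmit_job` for each; pairing this with an alert on job completion would automate the whole loop.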
Hello, is it possible to get an alert when the automatic SQL instance discovery discovers a new SQL instance? I would like to get some sort of alert when this happens, to ensure that no new instances go unnoticed and unconfigured. If it is possible, how do you set it up?