Ask and answer questions about your self-managed and on-prem software
Hello, I have two MySQL clusters running on the same version (8.0.20). The configuration has been applied to both clusters and everything checks out: check readiness is OK on both, file system backups of both clusters completed fine, and a file system out-of-place restore between the two clusters also worked. The issue is restoring a database from one cluster to the other: the restore job starts but shows no progress on the CommCell side. No data has moved since the job started, and the job's last-update time is still the same as its start time, although the database itself does get created on the MySQL end. Commvault version 11.30.48.
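To see whether anything reaches the destination cluster at all, I was thinking of polling a server-side counter; a minimal sketch (hostname and credentials are placeholders, requires mysql-connector-python):

```python
# Minimal diagnostic sketch (not a fix): poll the destination MySQL cluster to see
# whether the restore is actually writing data even though the CommCell job shows
# no progress. Host and credentials below are placeholders.
import time
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="dest-cluster.example.com",
                               user="monitor", password="***")
cur = conn.cursor()
for _ in range(10):
    cur.execute("SHOW GLOBAL STATUS LIKE 'Bytes_received'")
    name, value = cur.fetchone()
    print(name, value)          # if this counter never grows, no data is arriving
    time.sleep(30)
cur.close()
conn.close()
```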
In the past few months, "file activity anomaly alerts" have been exploding in my Commvault warnings. Consider this thread a continuation of my previous thread here. For instance, we have Microsoft Configuration Manager use its built-in backup function to back its files/databases up twice a week to a location on our file server, and then later Commvault backs up the whole file server. Here are a few of the files it always sticks on, some related to Config Manager, some random other ones:
Description: A suspicious file [P:\Project\Staff\DISASTER\SCCM\SSCBackup\CD.Latest\SMSSETUP\AI\LUTables.enc] is detected on the machine
Description: A suspicious file [P:\Project\Staff\DISASTER\SCCM\SSCBackup\CD.Latest\splash.hta] is detected on the machine
Description: A suspicious file [K:\Users\Choi2\R\4.2.1\file3ea85e644f6c\spatstat.random\R\spatstat.random] is detected on the machine
Description: A suspicious file [M:\TSProfiles\Buetz.v6\.conda\pkgs\tornado-6.2-py39hb82d6ee_0\Lib\site-packages\torn
Hello community, we are trying to configure SAP MaxDB backup. We have completed the client installation and instance configuration, but we are facing errors related to pipes/streams. Backint, the parameter file, and streams are configured according to the following guide: https://documentation.commvault.com/2022e/expert/22205_configuring_multiple_streams_for_backups_and_restores.html During backup (run from a Workflow) we get the following error: Error Code: [19:857] Description: OK ERR -24925,ERR_PREPARE: Preparation of backup operation failed Can not create pipe '\\.\pipe\pipe_mem1'. (System error 13; Permission denied) Source: cv*****, Process: Workflow. Please share your feedback. Nikos
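To narrow down whether this is a plain permission problem, I plan to test creating the same pipe under the account the services run as; a minimal sketch (requires pywin32; the pipe name is taken from the error message):

```python
# Try to create the same named pipe the backup failed on, to see whether the
# current account is allowed to. Requires pywin32 (pip install pywin32).
import win32pipe, win32file

PIPE_NAME = r'\\.\pipe\pipe_mem1'  # name taken from the error message
try:
    handle = win32pipe.CreateNamedPipe(
        PIPE_NAME,
        win32pipe.PIPE_ACCESS_DUPLEX,
        win32pipe.PIPE_TYPE_BYTE | win32pipe.PIPE_WAIT,
        1,          # max instances
        65536,      # out buffer size
        65536,      # in buffer size
        0,          # default timeout
        None)       # default security attributes
    print("Pipe created OK, permissions look fine for this account")
    win32file.CloseHandle(handle)
except Exception as exc:
    print("Pipe creation failed:", exc)
```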
The jobs are seemingly still running fine, without any failed items. I restarted the services on the access node and also tried verifying in another browser; maybe it could be a browser-cache problem, but no. Discovery and index timestamps are up to date. Running 11.32.23. Any comments on this? What should be done? Can I trust the backup?
Hi All, I've been using Commvault for several years. I'm used to working with the CommCell Console, but the Command Center has become very useful and simple for CommCell administration. For new customers (not on HyperScale X) I would like them to use only the Command Center, because it is easier to use than the CommCell Console, and also, compared to other products, Commvault's "complexity" should no longer be an argument. However, some advanced configurations seem to be missing in the Command Center. How do you configure MediaAgent GridStor in the Command Center? Thanks in advance, Christophe
For a few months now we have had the problem that the archiving function no longer works. Files that were previously stubbed without any problems are suddenly no longer stubbed, and files that were recalled are not stubbed again. The logs say: "Reason: [Skipping as file has recall on data access or open attribute ON]". The files are, for example, .jpg and .bak. We are already in contact with support, but they are not getting any further.
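In case it's relevant, here is a small sketch we use to inspect the Windows attribute bits on the skipped files (Windows-only; which bits the archiver actually evaluates is internal to Commvault, this just shows what the files carry; paths are placeholders):

```python
# Inspect Windows file-attribute bits on files that are being skipped.
# st_file_attributes is only available on Windows.
import os, stat

for path in [r"P:\data\example1.jpg", r"P:\data\example2.bak"]:  # placeholder paths
    attrs = os.stat(path).st_file_attributes
    print(path,
          "OFFLINE" if attrs & stat.FILE_ATTRIBUTE_OFFLINE else "-",
          "ARCHIVE" if attrs & stat.FILE_ATTRIBUTE_ARCHIVE else "-",
          "SPARSE"  if attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE else "-",
          "REPARSE" if attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT else "-")
```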
Hi all, I'm about to renew my license and intend to increase it for the first time. I'd be happy if anyone can help with these questions or direct me to a thread where they were already answered:
- I have VM Socket licensing, which I understand is deprecated and at some point will be converted to 10-VM-pack licenses, even though it is a perpetual license. Is that true? When will existing licenses be converted to VM packs? I heard around 2025.
- If a socket license is converted to VM packs, what ratio should I expect? For example, one perpetual socket license becomes how many VM packs?
- I have two SKU sets that are almost the same; can someone explain the difference?
Commvault Complete Backup & Recovery for Endpoints, Per User CV-BR-EP
Commvault Complete Backup & Recovery - Per Front-End TB CV-BR-FT
and:
Commvault Complete Backup & Recovery for Endpoints, Per User CV-BKRC-EP-11
Commvault Complete Backup & Recovery - Per Front-End TB CV-BKRC-FT-11
Thank you all in advance!
Hi, we would like to have an environment consisting of one active CommServe host in the production site and one standby CommServe host in a DR site, with automatic failover enabled. The CommServe hosts will also act as proxies for the clients. The Commvault documentation, at least for me, is not clear about all the ports we need to open; could you please help me figure it out? I understand that ports 8405 and 8408, and also 8090, 8091 and 8097 for HAC, need to be open between the two CommServes, in both directions. But I'm not sure which ports I need to open from the clients to both proxy CommServes. The documentation says: "If the SQL clients are used as the proxy, verify that all the clients in the CommCell environment can communicate with the port and hostname of the SQL clients in both the active host and the passive CommServe hosts." https://documentation.commvault.com/v11/essential/106081_planning_for_commserve_livesync_setup.html Thanks
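Once the list is settled, I plan to verify reachability between the two CommServes with a plain TCP connect test like this sketch (hostname is a placeholder, and the port list is only the ports mentioned above, not an authoritative LiveSync list):

```python
# TCP reachability check for the LiveSync/HAC ports mentioned above.
# Run from each CommServe against the other; "peer-cs.example.com" is a placeholder.
import socket

PEER = "peer-cs.example.com"
PORTS = [8405, 8408, 8090, 8091, 8097]   # ports from this post; confirm against the docs

for port in PORTS:
    try:
        with socket.create_connection((PEER, port), timeout=5):
            print(f"{PEER}:{port} reachable")
    except OSError as exc:
        print(f"{PEER}:{port} NOT reachable: {exc}")
```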
Hi guys, one quick question. NetApp offers a feature to lock snapshots, preventing their deletion without the need for a SnapLock aggregate; the feature is called "tamperproof snapshots". https://docs.netapp.com/us-en/ontap/snaplock/snapshot-lock-concept.html?q=Tamperproof Is this feature supported by Commvault? Could Commvault handle/manage the retention/lock of the snapshots at the ONTAP level, for primary snap and SnapVault copies? If not, what would be the best solution to keep the snapshots safe from deletion - SnapLock? Regards, Julian
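In case it helps frame the question: NetApp's docs describe locking a snapshot by setting an expiry time on it through the ONTAP REST API. A rough sketch of what I believe that call looks like (the endpoint path, the "expiry_time" field, UUIDs and credentials are unverified assumptions to check against your ONTAP version's API reference):

```python
# Rough sketch: lock an existing snapshot by patching an expiry time onto it via
# ONTAP REST. The URL path and the "expiry_time" field are assumptions taken from
# NetApp's snapshot-locking docs; verify against the API reference before use.
import requests

ONTAP = "https://cluster-mgmt.example.com"
VOL_UUID = "<volume-uuid>"       # placeholder
SNAP_UUID = "<snapshot-uuid>"    # placeholder

resp = requests.patch(
    f"{ONTAP}/api/storage/volumes/{VOL_UUID}/snapshots/{SNAP_UUID}",
    json={"expiry_time": "2025-12-31T00:00:00Z"},
    auth=("admin", "***"),
    verify=False)  # lab only; use proper CA verification in production
print(resp.status_code, resp.text)
```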
Some background: I've worked off and on with Commvault over the past several years, and we are currently migrating data from a cloud vendor back to on-prem. My question is about a behavior I've never noticed before, probably from a lack of experience staring this closely at jobs as they run rather than anything going wrong; I just want to understand it. The aux copy that is currently running has two streams remaining. Last Friday (11/22), those two streams showed the following:
Stream 1: Data to be Copied 4.06 TB, Data Copied 263.63 GB
Stream 2: Data to be Copied 926.14 GB, Data Copied 221.9 GB
Then this morning I logged in to check on the job, and those same streams now show:
Stream 1: Data to be Copied 3.43 TB, Data Copied 575.65 GB
Stream 2: Data to be Copied 842.33 GB, Data Copied 298.26 GB
Over the past week of monitoring these two streams, I've noticed the numbers shift in a way where the "Data to be Copied" amount shrinks while the "Data Copied" amount grows.
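To make the pattern concrete, here are the deltas between the two check-ins, computed in a quick sketch; note that on stream 1 the to-be-copied figure dropped by far more than was actually copied, which is the part I find odd:

```python
# Compare the two snapshots of the aux-copy stream counters quoted above.
# All figures are the ones from the post, converted to GB (1 TB = 1024 GB).
friday = {"stream1": {"to_copy": 4.06 * 1024, "copied": 263.63},
          "stream2": {"to_copy": 926.14,      "copied": 221.90}}
today  = {"stream1": {"to_copy": 3.43 * 1024, "copied": 575.65},
          "stream2": {"to_copy": 842.33,      "copied": 298.26}}

for s in friday:
    copied_delta = today[s]["copied"]  - friday[s]["copied"]
    to_copy_drop = friday[s]["to_copy"] - today[s]["to_copy"]
    print(f"{s}: copied +{copied_delta:.2f} GB, "
          f"'to be copied' shrank by {to_copy_drop:.2f} GB "
          f"({to_copy_drop - copied_delta:.2f} GB not explained by copying alone)")
```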
Hi Community, I can't find anything about this in the Commvault documentation or anywhere else: does Commvault support the Oracle Easy Connect naming method? https://docs.oracle.com/en/database/oracle/oracle-database/18/ntcli/specifying-a-connection-by-using-the-easy-connect-naming-method.html#GUID-1035ABB3-5ADE-4697-A5F8-28F9F79A7504 Thank you for any answers. Tobi
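For clarity, by Easy Connect I mean addressing the database as host:port/service_name with no tnsnames.ora entry. This sketch is how I verify the string itself works from the client host, independent of Commvault (uses the python-oracledb driver; all connection details are placeholders):

```python
# Test an Easy Connect string (host:port/service_name) from the backup host.
# Requires python-oracledb (pip install oracledb). Details below are placeholders.
import oracledb

dsn = "dbhost.example.com:1521/orclpdb1"   # Easy Connect: host:port/service_name
with oracledb.connect(user="system", password="***", dsn=dsn) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT sysdate FROM dual")
        print("Connected via Easy Connect, DB time:", cur.fetchone()[0])
```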
Hi, as my name states, I'm quite a beginner, so please bear with me. I'm writing this after finding a similar post, answered more than a year ago, that didn't quite answer my own question: https://community.commvault.com/self-hosted-q-a-2/average-throughput-information-read-write-network-ddb-lookup-meaning-2854 I have a job that backs up an SMB file share running on my MediaAgent. The file share contains RDS profiles and home directories, so it holds millions of relatively small files. The job has been running for 3 days and its ETA is at least 7 more days. The Job details / Progress tab / Load section says: Read 98%, Write 0.07%, Network 0.47%, DDB Lookup 1.30%, with a current throughput of 0.001 GB/hr (I have no idea where it gets the average throughput from, because I've never seen it over 1 GB/hr). The subclient job setting / Advanced settings / Performance tab / Number of Data Readers is fixed at 10 data readers. My question: with Read being that high and the others so low, does that mean the bottleneck is on the read side?
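My gut says Read at 98% means the job spends nearly all its time reading and enumerating the millions of small files, where per-file overhead dominates. A back-of-the-envelope sketch (every number here is a made-up assumption, just to show the shape of the problem; real per-file overhead over SMB can be far higher with deep profile trees or antivirus scanning):

```python
# Back-of-the-envelope estimate of why millions of small files over SMB crawl.
# Every number here is an assumption for illustration, not a measurement.
n_files     = 5_000_000   # assumed file count
avg_file_kb = 50          # assumed average file size
per_file_ms = 40          # assumed per-file overhead (open/attrs/close over SMB)
readers     = 10          # data readers, as configured on the subclient

total_hours = n_files * per_file_ms / 1000 / 3600 / readers
total_gb    = n_files * avg_file_kb / 1024 / 1024
print(f"~{total_gb:.0f} GB of data, ~{total_hours:.1f} hours of per-file overhead "
      f"alone -> effective throughput capped near {total_gb / total_hours:.1f} GB/hr")
```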
Does Commvault back up NFS mount paths inside the VM during a VM backup, or do we need to add any additional settings? And if the backup fails with the error "scasi-0-nfs share is not accessible", how would we troubleshoot it? What are the basic steps to check first?
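As a first step, I assume it's worth confirming from inside the guest that the NFS path is actually mounted and readable; something like this sketch (the path is a placeholder):

```python
# Quick sanity check that an NFS path is mounted and readable from this host.
# "/mnt/nfs_share" is a placeholder for the actual mount path.
import os

path = "/mnt/nfs_share"
print("Is a mount point:", os.path.ismount(path))
try:
    entries = os.listdir(path)
    print(f"Readable, {len(entries)} entries at top level")
except OSError as exc:
    print("Cannot read the share:", exc)  # e.g. stale handle, permission, server down
```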
So… this is probably a stupid question, but: I'm a long-time Commvault client and I'm slowly switching from the Java GUI to the Command Center. This week we installed a new AD server, so I went into Command Center, created a backup plan, and installed the Commvault AD client on the new server. Now I want to run a full backup, but there doesn't seem to be any way to do that. When I go to Protect > Active Directory, I see the new server, but there's no option in the Action menu to run a backup. Am I missing something?
We want to turn on the immutability feature on our Azure Blob containers. Setup:
- Primary, incremental, 30 days retention, on AzureBlobContainer1
- Monthly full with extended retention of 365 days, also on AzureBlobContainer1
- Commvault deduplication, with the DDB residing on the MediaAgent
Questions:
- How do we handle the DDB and the extended retention of incrementals and fulls on Container1? Will the data on Container1 expire once the incrementals reach their 30-day retention?
- Currently we are not sealing our DDB. Do we need to seal the DDB when using Azure immutable storage?
- Can we have one immutable Azure blob container and, inside it, multiple folders with multiple retentions?
- How is Commvault retention linked with the Azure blob immutability access policy's retention interval?
- What if the Azure retention interval on AzureBlobContainer1 is set to 30 days and my Commvault retention is 365 days? What happens to that data on day 31?
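On the last question, my current understanding (please correct me) is that the two clocks are independent: Azure's immutability interval only guarantees a minimum lock window, while Commvault decides when deletion is actually attempted. A tiny sketch of that interaction with example dates (whether DDB sealing is required for a WORM setup is a separate point I'd confirm with the docs or support):

```python
# Toy model of the two independent retention clocks on a backup written on day 0:
# Azure's immutability interval is a *minimum* lock (nothing can delete earlier),
# Commvault's retention decides when pruning is actually attempted.
from datetime import date, timedelta

written            = date(2024, 1, 1)    # example write date
azure_lock_days    = 30                  # Azure immutability retention interval
commvault_ret_days = 365                 # Commvault retention

azure_unlock  = written + timedelta(days=azure_lock_days)
cv_prune      = written + timedelta(days=commvault_ret_days)
earliest_gone = max(azure_unlock, cv_prune)   # deletion needs both clocks expired

print("Azure lock expires:    ", azure_unlock)  # day 31: deletable, but Commvault won't
print("Commvault prunes:      ", cv_prune)
print("Data actually removable:", earliest_gone)
```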