Ask and answer questions about your self-managed and on-prem software
Hi guys, one quick question. NetApp offers a feature to lock Snapshots and prevent their deletion without the need for a SnapLock aggregate. The feature is called "Tamperproof Snapshots": https://docs.netapp.com/us-en/ontap/snaplock/snapshot-lock-concept.html?q=Tamperproof

Is this feature supported by Commvault? Could Commvault handle/manage the retention/lock of the snapshots on the ONTAP level, for primary snaps and SnapVault copies? If not, what would be the best solution to keep the snapshots safe from deletion → SnapLock?

Regards
Julian
Any idea on the error below?

Error Code: [18:183] Description: Failed with Oracle DB/RMAN error [RMAN-03009: failure of allocate command on ch1 channel at 12/01/2023 13:13:35 ORA-03113: end-of-file on communication channel] Source: grb-srv-opr, Process: ClOraAgent

We restarted the server and services; 4-5 backups then ran successfully, but now we are facing the same issue again.
IntelliSnap failed on an Azure Premium SSD v2 disk with the error below. Incremental snapshots are still in preview status; is there any chance to create a full snapshot for the Azure disk?

20164 5584 04/10 06:59:07 182052 CreateManagedDiskSnapshot() - Failed to create Incremental Snapshot for disk [vaz-wpc-002_DataDisk3]. Attempting normal snapshot...
20164 5584 04/10 06:59:07 182052 CreateManagedDiskSnapshot() - Exception [System.AggregateException: One or more errors occurred. ---> Microsoft.Rest.Azure.CloudException: Only incremental snapshots are supported for disks of Sku PremiumV2_LRS.
In the past few months, "file activity anomaly alerts" have been exploding in my Commvault warnings. Consider this thread a continuation of my previous thread here. For instance, we have Microsoft Configuration Manager use its built-in backup function to back its files/databases up twice a week to a location on our file server, and then later Commvault backs up the whole file server.

Here are a few of the files it always sticks on, some related to Config Manager, some random other ones:

Description: A suspicious file [P:\Project\Staff\DISASTER\SCCM\SSCBackup\CD.Latest\SMSSETUP\AI\LUTables.enc] is detected on the machine
Description: A suspicious file [P:\Project\Staff\DISASTER\SCCM\SSCBackup\CD.Latest\splash.hta] is detected on the machine
Description: A suspicious file [K:\Users\Choi2\R\4.2.1\file3ea85e644f6c\spatstat.random\R\spatstat.random] is detected on the machine
Description: A suspicious file [M:\TSProfiles\Buetz.v6\.conda\pkgs\tornado-6.2-py39hb82d6ee_0\Lib\site-packages\torn
Hello,
in my environment we have many shares with more than 10 million files and directories to back up. Is it normal that on the access nodes the ifind process can consume more than 20 GB of RAM? Is there any indication of the relation between the number of directories and files and the memory usage of ifind?
Thanks in advance
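For a rough sense of whether 20 GB is plausible, here is a back-of-envelope sketch. It assumes the scanner keeps one in-memory record per file/directory; the per-entry overhead (`bytes_per_entry = 2048`) is a hypothetical assumption for illustration, not a documented Commvault figure for ifind.

```python
# Rough estimate of scan-process memory, assuming one in-memory record per
# filesystem entry. bytes_per_entry is an assumed figure, NOT a documented
# Commvault/ifind value.

def estimate_scan_memory_gb(num_entries: int, bytes_per_entry: int = 2048) -> float:
    """Estimated resident memory in GB for num_entries files/directories."""
    return num_entries * bytes_per_entry / 1024 ** 3

# 10 million files/directories at ~2 KB of bookkeeping per entry:
print(round(estimate_scan_memory_gb(10_000_000), 1))  # ~19.1
```

Under that assumption the usage would grow roughly linearly with entry count, and 10 million entries lands right around the 20 GB observed — but only support or the documentation can confirm the real per-entry cost.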
Hello,
every time I run a Maintenance Failover, I receive the "CommServe Time Shift Detected!" alert:

Detected a change in the system time, it is now forwarded by 30 days 22 hours 32 minutes 10 seconds.

But there is no time shift between the active and standby CommServe servers…
Regards,
Pedro
Hello,
I have two MySQL clusters running on the same version (8.0.20). The check configuration has been applied to both clusters and everything is fine; check readiness is OK for both. FS backups for both clusters were fine, and an FS out-of-place restore between the two clusters also worked.

The issue is in restoring a DB from one cluster to the other: the restore starts, but there has been no progress on the job since it started. No data is moved, although the DB is created on the MySQL end. There is no progress from the CommCell side, and the job's last update time is the same as the job start time.

Commvault version: 11.30.48.
Hello community,
we are trying to configure SAP MaxDB backup. We have completed the client installation and instance configuration, but we are facing errors related to pipes/streams. Backint, param and streams are configured according to the following guide:
https://documentation.commvault.com/2022e/expert/22205_configuring_multiple_streams_for_backups_and_restores.html

During backup (from Workflow) we are getting the following error:

Error Code: [19:857] Description: OK ERR -24925,ERR_PREPARE: Preparation of backup operation failed Can not create pipe '\\.\pipe\pipe_mem1'. (System error 13; Permission denied) Source: cv*****, Process: Workflow

Looking forward to your feedback,
Nikos
The jobs are seemingly still running fine, without any failed items. I restarted the services on the access node and also tried verifying in another browser; maybe it could be a problem with browser cache, but no. Discovery and index timestamps are up to date. Running 11.32.23.

Any comments on this? What should be done? Can I trust the backup?
Hi All,
I have been using Commvault for several years. I am used to working with the CommCell Console, but the Command Center has become very useful and simple for CommCell administration. For new customers (not on HyperScale X) I would like to have them use only the Command Center, because it is easier to use than the CommCell Console; also, compared to other products, the Commvault "complexity" should not be an argument anymore. However, in the Command Center some advanced configurations seem to be missing. How do you configure a MediaAgent GridStor in the Command Center?
Thanks in advance,
Christophe
For a few months we have had the problem that the archiving function no longer works. Files that were previously stubbed without any problems are suddenly no longer stubbed. Files that had a recall are also not stubbed again. In the logs it says: "Reason: [Skipping as file has recall on data access or open attribute ON]". The files are, for example, jpg and bak. We are already in contact with support, but they haven't gotten any further.
Hi all,
I'm about to renew my license and intend to increase it for the first time. I'd be happy if anyone can help with these questions or direct me to a thread where they were already answered.

- I have VM Socket licensing, which I understand is deprecated and at some point will be converted to 10-VM-pack licenses, even though it is a perpetual license. Is that true? When will existing licenses be converted to VM packs? I heard around 2025.
- If a socket license is converted to VM packs, what ratio should I expect? For example, one perpetual socket license converts to how many VM packs?
- I have two sets of SKUs that are almost the same. Can someone explain the difference?
  Commvault Complete Backup & Recovery for Endpoints, Per User CV-BR-EP
  Commvault Complete Backup & Recovery - Per Front-End TB CV-BR-FT
  and:
  Commvault Complete Backup & Recovery for Endpoints, Per User CV-BKRC-EP-11
  Commvault Complete Backup & Recovery - Per Front-End TB CV-BKRC-FT-11

Thank you all in advance!
Hi,
we would like to have an environment consisting of one active CS host in the production site and one standby CS host in a DR site, with automatic failover enabled. The CS hosts will also act as proxies for the clients. The Commvault documentation, at least for me, is not clear about all the ports that we need to open. Could you please help me figure it out?

I understand that ports 8405 and 8408, plus 8090, 8091 and 8097 for HAC, need to be open between the two CS hosts in both directions. But I'm not sure which ports I need to open from the clients to both proxy CS hosts. The documentation says: "If the SQL clients are used as the proxy, verify that all the clients in the CommCell environment can communicate with the port and hostname of the SQL clients in both the active host and the passive CommServe hosts."
https://documentation.commvault.com/v11/essential/106081_planning_for_commserve_livesync_setup.html

Thanks
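Whatever the final port list turns out to be, a quick TCP reachability check from a client can confirm which of the ports mentioned above are actually open toward each CS host. A minimal sketch; "cs-active" and "cs-standby" are placeholder hostnames, not names from the post.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# Ports from the post: 8405/8408 between the CS hosts, 8090/8091/8097 for HAC.
# Replace the placeholder hostnames with the real active/standby CS hosts.
for host in ("cs-active", "cs-standby"):
    for port in (8405, 8408, 8090, 8091, 8097):
        state = "open" if port_open(host, port) else "closed/unreachable"
        print(f"{host}:{port} -> {state}")
```

Note this only tests TCP reachability; it says nothing about which service answers on the port, and firewalls that silently drop packets will show up as a timeout rather than an immediate refusal.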
Some background: I've worked off and on with Commvault over the past several years, and we are currently migrating data from a cloud vendor back to on-prem. The question I have is about a behavior I've never noticed before, but it is probably down to a lack of experience staring this closely at jobs as they run rather than something going wrong. I just want to understand it.

The aux copy that is currently running has 2 streams remaining. Last Friday, 11/22, those two streams showed the following information:

STREAM 1: Data to be Copied - 4.06 TB, Data Copied - 263.63 GB
STREAM 2: Data to be Copied - 926.14 GB, Data Copied - 221.9 GB

Then this morning I logged in to check up on the job, and those same streams now show this:

STREAM 1: Data to be Copied - 3.43 TB, Data Copied - 575.65 GB
STREAM 2: Data to be Copied - 842.33 GB, Data Copied - 298.26 GB

Over the past week, as I've been monitoring these two streams, I've noticed these numbers shift in a way that the "Data to be Copied" amount shrinks and the "Data Copi
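One thing worth noting in the Stream 1 figures above: the two deltas don't match, which suggests the "Data to be Copied" value is a running estimate that gets revised, not just a counter decremented by bytes written. A quick worked calculation (assuming 1 TB = 1024 GB in these displays):

```python
# Worked arithmetic on the Stream 1 figures from the post (1 TB = 1024 GB).
# "Data to be Copied" shrank by far more than "Data Copied" grew, so the
# to-copy estimate itself is being revised (e.g. by deduplication savings
# or pruning -- a hypothesis, not a confirmed explanation), not merely
# decremented by the bytes actually written.

to_copy_before_gb = 4.06 * 1024   # 4.06 TB on Friday
to_copy_after_gb  = 3.43 * 1024   # 3.43 TB this morning
copied_before_gb  = 263.63
copied_after_gb   = 575.65

shrink_gb = to_copy_before_gb - to_copy_after_gb  # drop in the estimate
copied_gb = copied_after_gb - copied_before_gb    # data written meanwhile

print(round(shrink_gb, 2))  # 645.12
print(round(copied_gb, 2))  # 312.02
```

So roughly 645 GB disappeared from the estimate while only about 312 GB was actually copied in the same interval — the gap is what makes the counters look like they are "shrinking" faster than the copy progresses.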