Hello, we would like to rebuild our environment. In parallel with the current environment, we plan to set up new VMs (CommServe, Web Server / Command Center). Since we currently have only one license, the question is whether we can simply transfer it to the new VM and, with the help of a DR backup, also carry over all of the settings made so far. We need to replace the VMs because the operating system is too old.

Kind regards,
Thomas
Hi all,

I have reviewed two environments, FR24 and FR28/2022E (see screenshots), both showing the same inconsistency. On a storage policy that has a snap copy and a primary copy where encryption is used, the following is seen:

Java GUI > Properties on primary copy: shows encrypted (taken over from the GDDB, as expected).

What I expect to see in Command Center when I review my backup copy job: "Encryption Enabled: Yes", as we can see in the Java GUI. What it actually shows: "Encryption enabled: Unavailable".

When reviewing the snap job (which does not use dedupe, and which we do not encrypt), Command Center shows "Encryption enabled: Yes", while the Java GUI shows "Encryption enabled: No" (as expected, because we do not encrypt the snaps).

Is this somehow expected behavior, or should we raise a case to dig into this further?
Hello team,

Azure VM (v2) backups run via a MediaAgent that lives in Azure, and backups are working fine. I just noticed an interesting thing: when a synthetic full runs, during the startsynthfull phase it attaches to the on-premises index server to retrieve the index. I would expect it to pull the index from its own index server (the index server the VM's storage policy is associated with).

972 5fac 09/28 02:15:12 748707 CVOnDemandSvcClient::SubmitTask() - On Demand service CVODS_indexserver_On_premise_MA_01_1 launched at host TOAPVLTMDA01.nbfc.com*toapvltmda01*8400*8402. Unique ID is DCF190B8-BFE8-40DD-9920-08F7503F60A1
972 5fac 09/28 02:15:12 748707 CVOnDemandSvcClient::Attach() - Successfully attached to On Demand service CVODS_indexserver_On_premise_MA_01_1 launched at host On_premise_MA_01.XXX.com*On_premise_MA_01*8400*8402. Unique ID is DCF190B8-BFE8-40DD-9920-08F7503F60A1
972 5fac 09/28 02:15:44 748707 ProcessSynthFullResponse called.
972 5fac 09/28 02:15:44 748707 JM Client CVJobClient::initialize(): Got remote host [azcp-cvser
Hello all,

Has any of you performed a cross-server Sybase restore? I have a task to restore Prod to a sandbox, but the restore fails with a different error each time. If you have succeeded with a cross-server Sybase restore, please share your experience. Thank you.

Description: Loading database failed: [WARNING: In order to LOAD the master database, the ASE must run in single-user mode. If the master database dump uses multiple volumes, you must execute sp_volchanged on another ASE at LOAD time in order to signal volume changes. Can't open a connection to site 'SYB_BACKUP'. See the error log file in the ASE boot directory. Can't open a connection to site 'SYB_BACKUP'. See the error log file in the ASE boot directory. Could not establish communication with Backup Server 'SYB_BACKUP'. Please make sure that there is an entry in Sysservers for this server, and that the correct server is running.]
Source: Loss of control process SrvSybAgent.exe. Possible causes: 1. The control process h
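The quoted error text itself points at the usual cause: on the destination ASE, sysservers has no (or a stale) entry for SYB_BACKUP, so the load cannot reach a Backup Server. A rough sketch of the check and fix, run via isql on the sandbox ASE — the Backup Server name SANDBOX_BS is a placeholder for whatever your destination Backup Server is actually called in the interfaces file:

```sql
-- List the server entries the destination ASE knows about;
-- look for a row whose srvname is SYB_BACKUP.
select srvname, srvnetname from master..sysservers
go

-- If SYB_BACKUP is missing or points at the wrong Backup Server,
-- map it to the destination's Backup Server (placeholder name):
sp_addserver 'SYB_BACKUP', NULL, 'SANDBOX_BS'
go
```

The Backup Server named there must be running and resolvable through the destination's interfaces file. Also note the first warning in your output: loading the master database additionally requires the ASE to be started in single-user mode.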
Hi Team,

I was developing a SQL query for library space details, in which I need to get LibName, Capacity, and FreeSpace. I got the LibName, Capacity, and FreeSpace columns from the CSDB tables, but I still need the column through which I can exclude mount paths that have the "Disabled for Write" option set. Can someone please let me know in which table that "disabled for write" status is stored?

Thanks,
Harshavardhan
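Since the CSDB schema is not formally documented, treat the following only as a sketch: mount path rows live in MMMountPath, and write-enablement is typically encoded as a bit in an attribute/flags column rather than as its own column. The table and column names below are assumptions to verify against your own CSDB (for example with sp_help MMMountPath), and the bit value for "Disabled for Write" is a placeholder, not a confirmed constant:

```sql
-- Sketch only: join library and mount path, filtering out
-- mount paths flagged as disabled for write.
-- @WriteDisabledBit is a PLACEHOLDER; confirm the real bit value
-- in your CSDB before relying on this.
DECLARE @WriteDisabledBit INT = 16;

SELECT  L.AliasName      AS LibName,
        MP.MountPathName,
        MP.Attribute
FROM    MMMountPath MP
JOIN    MMLibrary   L ON L.LibraryId = MP.LibraryId
WHERE   (MP.Attribute & @WriteDisabledBit) = 0;  -- keep only writable paths
```

One reliable way to pin down the exact bit is to toggle "Disable mount path for write" on a test mount path in the GUI and compare the stored Attribute value before and after.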
When I create a reference copy subclient to tier on-prem data to the cloud, it copies the data rather than moving it from on-prem to the cloud. How can we configure retention for tiered cloud storage, and how do we set up those retention settings? Please help me with this process.
Hi Team, greetings!

On the mount path, when I click View Content (snap attached), the last backup time shows as 9/9/2022, but backups are running fine on that mount path. From the storage policy, using View Media, I can see jobs on the mount path. I am on SP11.20.85. It looks like a bug. Does anyone have any idea about it?
Hi team,

I’m conducting a PoC for a MongoDB sharded cluster. The customer runs two shards, and each shard has 3 nodes (1 primary and 2 secondaries). We’ve installed the plug-in on all 6 nodes and all configuration went well. For the first shard, we performed the initial full backup without any additional settings. For the second shard, we performed the initial full backup with the settings below for better performance:

Name: bObjectOpsExtentBackup, Category: FileSystemAgent, Type: Boolean, Value: true
Name: nObjectOpsLargeFileThresoldMB, Category: FileSystemAgent, Type: Integer, Value: 200000000

Backup performance is not bad in the beginning (100-150 MB/sec). The backup size of the shard is 5 TB, and the job should complete within 20 hours, so performance needs to stay above 80-100 MB/sec at least. The backup server and MongoDB node specs are sufficient, and resources are always available. But backup performance gradually degrades as time goes by and drops to 5-6 MB/sec. We’ve never completed the initial backup
I am deleting a number of storage policy copies and have noticed that Big Data Apps index backups do not age like other backups. I manually copied these backups to long-term storage and ran data aging, expecting the backups to disappear from my short-term retention policy. It seems as though these backups do not recognise when there are multiple copies. Do I need to manually delete the extra copies?
I configured a Global Command Center by registering the remote CommCells as outlined in https://documentation.commvault.com/2022e/essential/151227_global_command_center.html. When you register a remote CommCell, it asks for the Service CommCell host name along with a username and password. It then reaches out and synchronizes the service CommCells.

For some reason, when I look at the CommCell names under Service CommCells, they are inconsistent: some show a CommCell ID/registration number, others show the full hostname, and others show the short CommCell name. Where is it pulling the CommCell name from? Can it be edited?

It would be nice if you could edit the CommCell names on the Service CommCells page to input a friendly name so that the other admins would know which one it was. Perhaps that could be an enhancement? Right now, under Actions I only see Refresh and Delete.
I have a customer who has configured SAML within their environment, which is currently on 11.28 (2022E). If users log in via the Web Console/Command Center, SAML authentication is required and local users are unable to access the environment. However, when logging in via the CommCell Console, users are able to authenticate with either local accounts or SAML and access the environment. Is there any way to enforce the use of SAML only on the CommCell Console as well, and not allow local users to authenticate successfully?
Hi,

Today I run auxiliary and backup copies via a Windows batch script and the qoperation command/XML parameters. It works well, but I wanted something more 21st-century: more stable (the qcommand response sometimes changes with new versions) and more resilient, by using the Web Server and the REST API directly. But I can't find how to do it in the documentation. There is an operation to create an auxiliary copy, but not to launch one, which surprises me. Please tell me I'm wrong.

Thank you.
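One direction worth exploring: the REST layer generally accepts the same task XML that qoperation execute consumes, posted to a task-creation endpoint on the Web Server. The sketch below is an assumption-heavy outline, not a confirmed recipe — the /Login and /CreateTask paths, the Authtoken header, the "token" response key, and the exact TMMsg_CreateTaskReq shape should all be verified against the REST API documentation for your service pack:

```python
import base64
import json
import urllib.request

# Endpoint paths, header names, and the XML shape below are assumptions
# sketched from the qoperation task XML; verify against your Web Server's
# REST API documentation before use.

def build_auxcopy_xml(storage_policy: str, copy_name: str) -> str:
    """Build a TMMsg_CreateTaskReq payload requesting an aux copy run."""
    return f"""<TMMsg_CreateTaskReq>
  <taskInfo>
    <associations>
      <storagePolicyName>{storage_policy}</storagePolicyName>
      <copyName>{copy_name}</copyName>
    </associations>
    <task taskType="IMMEDIATE"/>
    <subTasks>
      <subTask subTaskType="ADMIN" operationType="AUX_COPY"/>
    </subTasks>
  </taskInfo>
</TMMsg_CreateTaskReq>"""

def login(base_url: str, user: str, password: str) -> str:
    """POST /Login with a base64-encoded password; return the auth token."""
    body = json.dumps({
        "username": user,
        "password": base64.b64encode(password.encode()).decode(),
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/Login", data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

def run_aux_copy(base_url: str, token: str,
                 storage_policy: str, copy_name: str) -> dict:
    """POST the task XML to /CreateTask; the response should carry a jobId."""
    req = urllib.request.Request(
        f"{base_url}/CreateTask",
        data=build_auxcopy_xml(storage_policy, copy_name).encode(),
        headers={"Content-Type": "application/xml",
                 "Accept": "application/json",
                 "Authtoken": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping the job definition in XML rather than in ad-hoc query parameters should also make the script less sensitive to response-format drift between versions, which was one of the qcommand pain points.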
Hi Team,

We have configured FSO and are trying to perform file server optimization on a file server (size 30 TB). Now we are getting an error regarding the index server:

Error Code: [72:106]
Description: Failed to send data to Index Engine. Please verify that the Index Engine is running.
Source:

Please help here.
Hello,

I’m trying to create a bunch of subclients on a client using the command line. Everything was working fine with no issues, and I have created a lot of subclients this way. Suddenly, I started to get this error:

subclient: Error 0x202: Failed to connect to QSDK Server

Sequence:
QLogin -cs commserver -u user
User logged in successfully.
qcreate subclient -c client -a dataagenttype -b backupset -i instance -n subclient -sp storagepolicy -f content1
subclient: Error 0x202: Failed to connect to QSDK Server

Any idea why this error started to pop up and how I can overcome it? I have restarted the CommServe services and rebooted the server, and the issue still exists.
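One plausible (unconfirmed) cause of a QSDK connection error after a successful QLogin is a session token that has gone stale between the login and the later qcreate calls in the loop. A defensive wrapper sketch that assembles the argument vector explicitly (ruling out quoting mistakes in the batch loop) and refreshes the session once on failure; commserver/user and the subclient parameters are placeholders mirroring the values from the command sequence above:

```python
import subprocess

def qcreate_argv(client, agent, backupset, instance,
                 subclient, storage_policy, content):
    """Assemble the qcreate argument vector exactly as the CLI expects,
    so shell-quoting problems in the batch loop can be ruled out."""
    return ["qcreate", "subclient",
            "-c", client, "-a", agent,
            "-b", backupset, "-i", instance,
            "-n", subclient, "-sp", storage_policy,
            "-f", content]

def create_subclient_with_relogin(**kw):
    """Try qcreate; on failure, refresh the session with qlogin once
    and retry, since a stale token is one plausible cause of Error 0x202."""
    result = subprocess.run(qcreate_argv(**kw), capture_output=True, text=True)
    if result.returncode != 0:
        # Placeholder credentials; prompts for the password interactively.
        subprocess.run(["qlogin", "-cs", "commserver", "-u", "user"], check=True)
        result = subprocess.run(qcreate_argv(**kw), capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(result.stderr)
    return result.stdout
```

If a fresh QLogin immediately before each qcreate still fails, the problem is more likely on the server side (QSDK/Web services), which would be worth checking next.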
My administrative Job Summary Report has started to show:

ERROR CODE [34:53]: CommServeDR: Destination Directory [\\<server_216>\D$\DR_Dump_Prod] does not exist or is inaccessible
Source: <server_57>, Process: commserveDR

When I check <server_216>, I see there is plenty of disk space and that the D:\DR_Dump_Prod folder exists and contains several folders named SET_99999, with the most recent folder being two days old. Does anyone know how to check what's gone wrong?

Ken
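Since the folder looks fine when browsed interactively, one thing worth separating out is whether the account that runs the DR dump (rather than your own interactive account) can still reach and write to the UNC path — a changed service-account password or revoked permission on the D$ admin share would produce exactly this error while the path still appears healthy to you. A minimal sketch for checking reachability and writability of a destination directory under whichever account runs it; the path is a placeholder:

```python
import os
import tempfile

def check_dr_destination(path: str) -> str:
    """Return a short diagnosis of whether `path` exists and is writable
    for the account running this script."""
    if not os.path.isdir(path):
        return "missing-or-unreachable"
    try:
        # Creating and removing a scratch file proves write access,
        # which the DR dump needs; existence alone is not enough.
        with tempfile.NamedTemporaryFile(dir=path, delete=True):
            pass
    except OSError:
        return "read-only-or-denied"
    return "ok"
```

Running this (or an equivalent file-create test) as a scheduled task under the same service account that runs the CommServe services distinguishes a permissions problem from a genuine path or network problem.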
Hello all,

I am getting "UpdateIndex initialization failed" for a SAP for Oracle database: the scheduled incremental backup runs and then fails. When we retrigger it, the incremental backup is converted to a full backup and completes. Please help me figure out how to fix this issue.