Ask questions, give answers, get good karma
Hello All, I am getting "UpdateIndex initialization failed" for SAP for Oracle DB when the scheduled incremental backup runs, and the job fails. When we retrigger it, the incremental backup is converted to a full backup and completes. Please help: how can we fix this issue?
Hello, we have a very strange problem with Commvault RMAN (archive log) backups. Sometimes an RMAN archive log backup job stays "running" forever and hangs. It is not related to a specific Oracle instance; the problem occurs with all instances. At random, sometimes the jobs run and finish fine in a couple of minutes, and sometimes they stay running forever. Host (Windows 2016) memory and CPU performance is OK. ClOragent.log logging stops at:

OraObject::GetOraMode() - oraMode = READ WRITE

So the hang occurs before the RMAN script is called. There is no entry in the database alert.log at that timestamp; it is simply skipped. It looks like communication with the ClOragent.exe process is lost, although it is still running on the server. I hope you have an idea how to solve this problem.
Hi Team, we saw that one of our file backups detected a stale mount path in the scan phase. I found the BOL article below, but it does not explain how to get this stale mount path backed up. What can we check from the OS side? How can we take a backup of this stale mount path; what are the suggestions?

35041 88e1 09/21 01:01:20 15332611 MountPathInfo::expandContentPathByVolumes(491) - /data of type [xfs] is skipped due to stale

https://documentation.commvault.com/2022e/expert/24174_frequently_asked_questions_linux_file_system_01.html#how-are-stale-nfs-mount-points-detected-during-scan-phase
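As a starting point for the OS-side check, here is a minimal Python sketch of the kind of test the linked FAQ describes: stat() the mount point and treat a failure with ESTALE (typical for stale NFS mounts) or EIO as the stale condition the scan phase reports. The /data path is taken from the log line above; the rest is an assumption on my part, not Commvault's actual implementation.

```python
import errno
import os

def looks_stale(mount_point: str) -> bool:
    """Return True if stat() on the mount point fails with ESTALE or EIO,
    which is roughly the condition the scan phase flags as 'stale'."""
    try:
        os.stat(mount_point)
        return False
    except OSError as exc:
        return exc.errno in (errno.ESTALE, errno.EIO)

if __name__ == "__main__":
    print(looks_stale("/data"))  # path taken from the scan log above
```

If the check fails, remounting the path and re-running the scan is the usual first step before digging into Commvault itself.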
We are using MySQL 8.0 on CentOS 7 with Commvault v11.28.14 installed. We enabled XtraBackup (full-instance XtraBackup) and are trying to run a full backup, but the job goes to pending status. cvfwd.log has these errors (192.168.179.3 is the client IP):

2640 11bc 08/23 12:30:08 DT:00128 ######## ERROR: cvfwd_ssl_ext_parse_cb(): Peer 192.168.179.3 didn't provide a common CA certificate
2640 11bc 08/23 12:30:08 ######## ######## ERROR: cvfwd_ssl_log_and_clear_errors(): error:1422B06E:SSL routines:custom_ext_parse:bad extension: (ssl\statem\extensions_cust.c:162)
2640 1878 08/23 12:31:03 DT:00129 ######## ERROR: cvfwd_ssl_ext_parse_cb(): Peer 192.168.179.3 didn't provide a common CA certificate
2640 1878 08/23 12:31:03 ######## ######## ERROR: cvfwd_ssl_log_and_clear_errors(): error:1422B06E:SSL routines:custom_ext_parse:bad extension: (ssl\statem\extensions_cust.c:162)
Hi Team, we have configured FSO and are trying to run file server optimization on a file server (size 30 TB). Now we are getting an error regarding the index server:

Error Code: [72:106]
Description: Failed to send data to Index Engine. Please verify that the Index Engine is running.
Source:

Please help here.
Hi, there seems to have been a recent change: you now need to request access to download the DR sets. I can see some of the pros and cons; maybe you can tell us a bit more about the reason, and about the process the request goes through on the Commvault side. I tried a test run on Friday: I requested the download at 7:48 am (CEST) and got the request approved after 8 hours (at 3:32 pm CEST). The request was also sent to the mail address of the account holder. If there were a real disaster, I would have had to raise a ticket and hopefully could get the DR set faster, but that is one more thing you would need to do, and if you need the DR set, you will already have your hands full. To improve the process, I would like to suggest the following additions:

1.) Give a timeframe for how long the request will take. At the moment there is no information, and it would be good to have a timeframe. In a restore test or other non-vital operation, the 8-hour response time can be worked around.

2.) Give us the option to add another mail address
Hi, I am thinking of using an aux copy to move the backup jobs of a storage policy's backup copy from one disk library to another, as the original disk library is getting full. I have several questions about aux copy and hope I can get answers here.

Since the two disk libraries use different DDBs, does that mean a full copy of the backup jobs will be transferred over the network without deduplication? When I looked into the aux copy job that I ran for testing, the size of the backup data transferred across was much smaller, and very close to the actual data written size of the original copy. Did I miss anything, or is this expected behavior? If so, what is the theory behind it?

How can I tell for sure that all backup jobs have been copied across? I am planning to delete the original copy to free up space, but I want to make sure that all jobs have been copied before doing it.

Kind regards, Boyi
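As a toy illustration of one possible explanation (an assumption on my part: that the aux copy ran as a signature-based DASH copy, where the source sends block signatures and only ships blocks the destination DDB has not already seen), with entirely made-up numbers:

```python
# Toy arithmetic, made-up numbers: a signature-based (DASH) aux copy skips
# blocks the target DDB already holds, so the bytes on the wire track the
# physically written (deduplicated) size, not the logical application size.
app_size_tb = 100.0   # logical size of the jobs to copy (assumed)
dedup_ratio = 10.0    # source-side deduplication ratio (assumed)

data_written_tb = app_size_tb / dedup_ratio  # physical size on disk
naive_transfer_tb = app_size_tb              # copy without deduplication
dash_transfer_tb = data_written_tb           # unique blocks only

print(f"non-dedup transfer ~ {naive_transfer_tb:.0f} TB")
print(f"DASH-style transfer ~ {dash_transfer_tb:.0f} TB")
```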
Hello community! I recently switched OneDrive backup from v1 to v2 and have a question. In the Users tab I can add an extra column called "OneDrive enabled", and I see that my users are mixed in this column: some show Yes and others No, yet all of the users are backed up (and show an actual size). Could you please help me understand what "OneDrive enabled" is for? Thank you in advance, Nikos
Hi Team, I cannot understand what the scheduler task option "Run Incremental Backup - Before Synthetic Full" is for, since the job later gives this failure reason: "Synthetic Full cannot run because no Incremental or Differential backup was run after last Full backup for the subclient." Can someone explain this? Regards, Piotr Grzegorek.
commvault.Instance001.service has suddenly stopped on all the Linux clients. The next backup will run on Saturday. I believe this is the Commvault communication service, which needs to be running on every client in order to back it up. Could you please let me know how to restart it? I cannot have backup failures this Saturday; it is crucial to the business.
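A minimal sketch of restarting the unit on one client, assuming systemd manages it and you have root (or sudo) access; the unit name is the one from the post:

```python
# Sketch: restart the Commvault instance service via systemd, then verify
# it came back. Assumes systemd and sufficient privileges; the unit name
# is the one reported in the post.
import subprocess

UNIT = "commvault.Instance001.service"

def restart_and_verify(unit: str = UNIT) -> None:
    subprocess.run(["systemctl", "restart", unit], check=True)
    # Raises CalledProcessError if the unit did not come back active.
    subprocess.run(["systemctl", "is-active", unit], check=True)

if __name__ == "__main__":
    restart_and_verify()
```

For many clients, you would wrap the same two commands in whatever fleet tooling you already have (an SSH loop, Ansible, etc.).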
Hello, we have to recall the stubs of about 2,000 mailboxes. I want to use the mass recall tool to recall the stubs. My question: is there an option to specify the complete mailbox of a user and define exclusions for specific folders (Calendar, Deleted Items, etc.)? At the moment I don't know which folders users have created besides the Inbox. Or is it necessary to report on all mailboxes and folders first, in order to create an input file that contains the folders that should be included in the mass recall? Thanks and regards, Thomas
Hi,

RMAN log:

Recovery Manager: Release 188.8.131.52.0 - Production on Mon Sep 19 22:34:13 2022
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
RMAN>
connected to target database: ORAKO16 (DBID=2846671649)
using target database control file instead of recovery catalog
RMAN>
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of allocate command on ch1 channel at 09/19/2022 22:34:17
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Fail
There has been a data loss in a customer environment. The customer wants me to check whether anything was backed up that could possibly be restored. There are two incremental backups from which data needs to be restored, but browsing the incremental data gets stuck at "loading data…" and never actually loads. Earlier incremental backups can be browsed successfully.
Hi, we are looking for a way to fetch "Jobs Copied By Aux Copy Job" from the CommServe. Currently we use this call:

qoperation execscript -sn QS_JobsCopiedByAuxCopyJob

but the response we get includes details that are not aligned with what we see in the Console GUI: jobs that are marked as "Copied" in the GUI do not show up in the response. Is there another way to fetch that data and get the latest status? Thanks
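For automation around the same call, a small sketch that wraps the exact qscript named above and captures its output for parsing; it assumes qoperation is on PATH and that an authenticated qlogin session already exists. No extra parameters are shown because any would be guesses.

```python
# Sketch: run the qscript from the post and capture its output so it can
# be parsed or diffed against what the Console GUI shows.
import subprocess

result = subprocess.run(
    ["qoperation", "execscript", "-sn", "QS_JobsCopiedByAuxCopyJob"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```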
Hi,

Problem description: a VMware subclient (with NetApp datastores via NFS) runs with SnapShot (primary snap) -> SnapVault -> backup to disk. First a full backup was taken, and since then always incrementals plus one synthetic full per week. The primary target (default disk library) must be changed, and this is not possible in the storage policy :( I can set up a new primary target, but the default library (for metadata, snapshots, etc.) remains the original library, so I have to set up a new storage policy with a new default library. How does this behave with:

a.) the synthetic fulls? Do I have to start again with a real full, or does CV simply take the data assigned to the client from whatever library for the next synthetic full? And if I delete the old storage policy, do I still have functional backups?

b.) if I move the subclient to the new storage policy, does it keep the CBT/SnapShot/SnapVault information?

Unfortunately I can't test this easily at the client, because there is some "unrest" about CV there. Best regards
Hi guys, we are planning to set up LiveSync, and our Web Server is installed on the primary CommServe, Instance001. I assume that if we don't separate the Web Server, we will only be able to operate from the CommCell Console if a failover occurs (since LiveSync works at the SQL Server agent level). I did not see in the documentation that having a separate Web Server is a requirement for LiveSync, and I was wondering whether it is absolutely necessary, or whether it would simply be better practice to set it up as a separate server. Thanks for your help!
Hi Team, greetings! Instead of using the Connect-CVServer command from PowerShell to connect to the CommServe, is there another way to connect to Commvault from PowerShell without providing credentials? I want to connect to the CommServe from PowerShell with a token or something similar.
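One possibility, as a sketch under assumptions rather than the documented Connect-CVServer flow: Commvault's REST API accepts a previously issued token in the Authtoken header, so a saved token can be used without typing credentials. The host name and token below are placeholders (the token would come from an earlier Login call, or from `qlogin -gt` on the CommServe if that fits your setup); the same two calls can be made from PowerShell with Invoke-RestMethod.

```python
# Sketch: token-based call against the Commvault REST API. BASE and TOKEN
# are placeholders you would replace with your own web service URL and a
# previously saved token.
import requests

BASE = "http://webconsole.example.com:81/SearchSvc/CVWebService.svc"
TOKEN = "QSDK <previously-saved-token>"  # placeholder

resp = requests.get(
    f"{BASE}/Client",  # list clients as a simple smoke test
    headers={"Accept": "application/json", "Authtoken": TOKEN},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```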
Hi all, our task is to implement the Quantum ActiveVault feature in our Commvault environment. ActiveVault creates a special area (for tapes) inside the tape library that is not visible to any backup application. As I understand it, in CV we should adjust a schedule policy or create a VaultTracker policy that moves full tapes to I/E (and then to a defined export location). Once a tape is in the I/E slot, the Quantum tape library can mark it as part of the special area, and from the CV point of view the tape looks like it is outside the library. Then, once the data has aged, an administrator has to manually move the tapes to I/E by clicking in the tape library's GUI. The question is how to find out which tapes should be moved back to the library because they have aged. I think it is necessary to have a "Due back for reuse" policy type, isn't it? So: first run a report to find the tapes due for return, then manually move the tapes in the library's management console, and lastly run the VaultTracker policy? Am I right?