As more and more companies move their databases to cloud environments, we would like to learn more about how you create copies of your cloud database for purposes other than restore and disaster recovery, such as dev/test, reporting, and analytics. Your feedback will help our Product team improve our cloud database dev/test solution. If you are a backup admin, DB admin, or DevOps engineer, please take 4-5 minutes to complete the brief survey here. Please submit responses by February 20, and feel free to share with colleagues who have insights or feedback on the topic. As always, let us know here if you have any questions about the survey.
Hello all, I'd like to understand the steps taken by Commvault when a VSA backup from a vSphere environment uses an HSX node as the access node/proxy. Consider that the vSphere datastores are FC SAN-based LUNs, so the best performance would come from using SAN transport mode on the Commvault side. During the backup operation, how are the datastore LUNs presented to the appliance? I saw the following in the docs: "For Linux proxies, you might need to rescan the SCSI or iSCSI bus after attaching devices. For MediaAgents on the HyperScale 1.5 Appliance or HyperScale X Appliance, you can use the rescan-scsi-bus.sh script to force a rescan." So when is that rescan procedure required? Every time a new datastore is presented to the virtual environment? And if there is no new datastore, is it done only once during HSX deployment? Regards, Pedro
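For reference, the rescan step the docs mention can be scripted. A minimal sketch, assuming root access on the node and the sg3_utils package (which provides rescan-scsi-bus.sh on HyperScale X nodes); the sysfs fallback is generic Linux, not Commvault-specific:

```shell
# Sketch only: force a SCSI bus rescan after new FC LUNs are zoned to the node.
rescan_fc_luns() {
    if command -v rescan-scsi-bus.sh >/dev/null 2>&1; then
        rescan-scsi-bus.sh -a            # -a: scan all targets for new devices
    else
        # Fallback: wildcard rescan (channel/target/LUN) on every HBA host
        for scan in /sys/class/scsi_host/host*/scan; do
            echo "- - -" > "$scan"
        done
    fi
}
```

Afterwards, `lsscsi` or `multipath -ll` can confirm the new LUNs are visible to the MediaAgent.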
Hello, I am trying to P2V a Windows machine with the Virtualize Me feature. The P2V works well, but the machine doesn't keep its name. 1. How should I proceed to keep the machine's name instead of getting WIN-xxxxxxxx? I don't want to lose the SID and domain connectivity. 2. Also, I need to bring each disk online manually; is that normal behavior? Thanks!
Hello Commvault Community, is it possible to change or hide the name of the service on the CommServe that appears when scanning ports with a network scanner? The attached screenshot (scanningportbynmaptool.png) shows exactly what we mean. Can we rename or hide it instead of showing "Commvault Webserver"? And would doing so have an impact on the operation of the WebServer? Presumably Commvault systems refer to it, hence the question. Thanks for the help, Kamil
Hi, in the documentation it is written: "The DDBs created for Windows MediaAgent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period. The DDBs created for LINUX MediaAgent should be formatted at 4 KB block size." Questions: Why is the default deduplication block size 128 KB, when that is optimal for neither Windows nor Linux MediaAgents, and in addition not optimal for a cloud library? Also, I cannot find in the documentation where to set the block size of the DDB. It is in the storage pool properties, but can someone provide me the link in the Commvault documentation? Thanks, Regards,
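Note that two different "block sizes" are in play here: the 128 KB deduplication block size is the unit at which data is hashed for dedup, while the 32 KB / 4 KB figures in the quoted doc are the filesystem allocation unit of the volume hosting the DDB. The formatting step can be sketched like this (device letter and the choice of XFS on Linux are assumptions; check your appliance's own docs):

```shell
# Sketch only: /dev/sdX is a placeholder for the dedicated DDB volume.
format_ddb_volume() {
    local dev="$1"
    mkfs.xfs -b size=4096 "$dev"    # Linux MediaAgent: 4 KB filesystem block size
}
# Windows MediaAgent equivalent, from an elevated prompt:
#   format D: /FS:NTFS /A:32K /Q    (32 KB allocation unit size)
```

The deduplication block size itself is a separate setting on the storage pool/policy, not something set when formatting the DDB volume.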
Hello, we are planning to migrate the CommServe from a Windows Server 2012 server to new hardware running Windows Server 2019. We are also planning to upgrade Simpana; ours is at 11.20. Which upgrade method is suitable for us? Should I upgrade Simpana on the old server before migrating the CommServe to the new server, or install Windows on the new server and upgrade there? Thank you! Best regards, Elizabeta
Hello. Is this a valid configuration in a non-dedup scenario? A library with two paths: MediaAgent A on-prem with a disk path, and MediaAgent B in the cloud with a disk path. Neither MediaAgent can reach the other. The clients using a storage policy with the above library are configured to override the default path and use an explicit path within their zone: on-prem clients use MediaAgent A, and clients in the cloud use MediaAgent B. We are in the midst of a migration, and the above would reduce the number of storage policies created. //Henke
Two SAP HANA servers have problems with the Commvault client. The commvault.service is not starting: "no such file or directory". I ran a Readiness Check and Restart All Services from the CommCell console for the two clients, but it is not working. I tried to repair the software on one of the servers, but the issue is the same. Any advice?
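A "no such file or directory" from systemd usually means the unit file or the binary it points at is missing. A sketch of the checks worth running on the affected nodes (the /opt/commvault path is an assumption; adjust for your install directory):

```shell
# Sketch: diagnose "commvault.service ... no such file or directory".
diagnose_cv_service() {
    systemctl status commvault.service --no-pager   # exact failure reason
    systemctl cat commvault.service                 # which ExecStart path does the unit reference?
    ls -l /opt/commvault/Base                       # does the referenced install path exist?
    systemctl daemon-reload                         # pick up a repaired unit file
}
```

If the unit file itself is gone after a failed repair, Commvault's own `commvault start` service-control command on the client can also show whether the base install is intact.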
I have a couple of questions related to VM UUIDs. We have a huge number of VM backups, and we were cross-checking whether the UUIDs match on both the Commvault and VMware ends. Some of the UUIDs are the same, but some are different. The VMs with the same UUID are not restored VMs, so I'm not sure why this difference is happening. Is it possible to pull a report, using the API or something similar, that shows the VM details with UUIDs from the Commvault end?
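On the API question: the Commvault REST API can return the VM inventory the CommServe knows about. A hypothetical sketch; the endpoint path and the JSON field names (`vmStatusInfoList`, `strGUID`) are assumptions from one API version, so verify them against your release's REST API documentation:

```shell
# Hypothetical sketch: list VM name + GUID via the Commvault REST API.
# $1 = webconsole host, $2 = Authtoken obtained from the Login endpoint.
list_vm_guids() {
    local host="$1" token="$2"
    curl -s -H "Authtoken: $token" -H "Accept: application/json" \
         "http://$host/webconsole/api/VM" |
    python3 -c '
import json, sys
data = json.load(sys.stdin)
for vm in data.get("vmStatusInfoList", []):
    print(vm.get("name"), vm.get("strGUID"))
'
}
```

The GUIDs from this output could then be diffed against an export from vCenter to find the mismatches.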
Hi guys, I have a client whose files are failing on their SharePoint Online backup. Can anyone assist me with a root cause for this?
https://verimarksa.sharepoint.com/sites/PODREPORT/Contents/sites/PODREPORT/Shared Documents/General/Pod check (New).xlsx FAILED Prior version 602.0 download failed.
https://verimarksa.sharepoint.com/sites/PODREPORT/Contents/sites/PODREPORT/Shared Documents/General/Pod check (New).xlsx FAILED Prior version 602.0 download failed.
SharePoint DB backup fails due to service accounts not having access to the Job Results/Log Files directories
Hi! We're currently testing in our dev environment with SharePoint Subscription Edition. I've configured it like I normally do, and when I do a "Validate Account" it says everything is fine. However, when I tested a full DB backup, it "Completed with errors". So I did a little digging and found out that there are a lot (~15) of service accounts in use for this SharePoint installation. Just for testing purposes, I gave all the service accounts access to the Job Results and Log Files directories, and then the backup completed successfully. So it seems to me that more accounts need to be given access to those folders. I then did some more testing, removing one account at a time, but I ran into an issue where some of the account info seems to be cached: the job would fail, I would add the account back, and it would still fail the next couple of times, but if I waited some hours and then retried, without having do
Hello dears, I'm facing an issue while performing an out-of-place restore of an Oracle database. The restore starts transferring 3 or more files, then goes into pending status with the error below:
Error Code: [18:182] Description: Failed with SBT library error [ORA-27199: skgfpst: sbtpcstatus returned error ORA-19511: non RMAN, but media manager or vendor specific failure, error text: ] Source: R12_DB, Process: ClOraAgent
MediaAgent JobManager log:
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Going to stop client with SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. Type
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Cannot find SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. JobId . Calling JM unregister here on its behalf
26501 59f7 02/01 10:28:38 #### deinitializeSDTpipeline CALLED for pipelineID [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0] 5255
Good day all, I am in the process of onboarding a MongoDB cluster into the backup environment. It is a 3-node replica set. During the initial configuration and testing, all the settings were left at their defaults, including the Native snap engine. The backup fails due to insufficient free capacity/extents on the volume group; I understand how this needs to be fixed. What are the considerations for selecting a different snap engine? These MongoDB instances are located on VMware. Or is the Native engine the only supported one? https://community.commvault.com/commvault-q-a-2/backup-mongodb-without-intellisnap-4327 Since mongodump can only be used on standalone instances as part of a pre/post script, according to the following link "https://documentation.commvault.com/2022e/expert/43131_performing_full_backup_of_standalone_mongodb_deployment.html", what are the other options when using the File System agent? Am I correct in assuming that mongodump is not supported on replica sets? I have setup VM ba
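For what it's worth, the standalone-only mongodump approach the linked doc describes boils down to a pre/post script around something like the sketch below (host, port, and output path are placeholders, not values from the doc):

```shell
# Sketch: dump a standalone mongod instance to a directory the
# File System agent then backs up. Placeholders throughout.
dump_standalone_mongo() {
    local port="${1:-27017}" outdir="${2:-/backup/mongodump}"
    mongodump --host localhost --port "$port" --out "$outdir"
}
```

For a replica set, running this against a secondary is technically possible with mongodump itself, but per the linked doc the supported Commvault path for replica sets is the MongoDB agent with IntelliSnap rather than a dump script.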
nFilterApplicationFiles setting for Filesystem subclient backups and HANA using the HANA pseudoclient
I have a Commvault client that appears to be backing up the HANA database via the file system subclient. I set the nFilterApplicationFiles setting, a full ran over the weekend, and there was no change in the size or number of files backed up; looking at the content of the backup, there is > 2 TB of data backed up in the file system subclient under the /hana/shared/[db] path. I was under the impression that nFilterApplicationFiles would prevent that... but the "database discovered" is located in the pseudoclient, not the SAP for HANA agent associated with the server-level Commvault client. Questions: Does the nFilterApplicationFiles setting "work like it should" by not backing up the HANA database when the HANA pseudoclient is used as the backup method and backups are "pushed" to Commvault via the SAP HANA side (using backint)? Are there other settings/configs needed to make it work, like something I would have to do in the "SAP for HANA" subclient? I could manually add in filesy
A customer requests a temporary blackout window between two dates: a period of 4 days without any activity. When I try to create the blackout window, I am able to select the start and end dates, but the console won't allow me to configure the start/end times in such a way that it stops all activity in that period. The customer would like to stop backups at 2 PM on one date and start again at 3 AM 4 days later. I can't save this setting because the console says "Start time should be less than end time". Any workaround for this?
Hello, I want to schedule a 14-day daily retention, with weekly and monthly GFS extended retention, and I'm looking for best practices regarding cycles. I'm thinking of scheduling a synthetic full every 7 days, but what about cycles: 1 or 2? Thank you in advance, Nikos
Hello, I have an issue with restoring a SQL DB to another VM: it takes too long, sometimes almost a day for 2.9 TB. The SQL backup used 4 streams, and I am using 5 streams for the restore. Here are some logs for the restore, which is still running. Any idea why it's taking so long?
Hello all. We have 6 HS2300s and 1 RO1200, all the same Supermicro servers underneath, with 10 Gb fibre cards in them. The BIOS sees the cards, the OS sees them as physical interfaces, and we believe the SFPs, cabling, and switch config are correct. However, on both Windows and Linux the link shows as network cable unplugged/no link detected, and the switch shows no presence on the line. We have a call raised with Commvault, but has anyone else seen this? Is there some setting in the BIOS we need to apply?
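Before digging into the BIOS, it may be worth checking what the NIC itself reports on the Linux nodes; a sketch using ethtool (the interface name is a placeholder):

```shell
# Sketch: check link state and the inserted SFP module on a 10 Gb NIC.
check_nic() {
    local nic="${1:-eth0}"
    ethtool "$nic" | grep -E 'Speed|Link detected'
    # Dump the SFP module EEPROM; this often errors out when the NIC
    # refuses a module it does not recognise.
    ethtool -m "$nic" || echo "module EEPROM not readable (unsupported/locked SFP?)"
}
```

One common cause of "no link detected" despite correct cabling is the NIC rejecting third-party SFP+ modules; for example, Intel's ixgbe driver refuses non-validated modules unless its `allow_unsupported_sfp` module parameter is set.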
I recently upgraded to 11.28.44, and since then my OneDrive backups have been failing with this error:
Error Code: [19:599] Description: Loss of control process CvCloudBkp.exe. Possible causes: 1. The control process has unexpectedly died. Check crash dump or core file. 2. The communication to the control process machine might have gone down due to network errors. 3. If the machine is a cluster, it may have failed over. 4. The machine may have rebooted. 5. Antivirus/malware software may be blocking the process. Please ensure that all exclusions are in place: http://documentation.commvault.com/commvault/v11/article?p=8665.htm Source: tiicommvault, Process: JobManager
Apart from the Commvault upgrade nothing else has changed, and AV exceptions are applying correctly. Any ideas on how to troubleshoot this?
Hello Commvault Community! The .NET Core vulnerability topic has come up several times, but I want to make sure about it. We have an environment that has already gone through many updates across various FRs, and there are remnants of previous .NET Core versions (currently the environment runs on FR24). The documentation says that version 4.6 is required, so can we remove all of the packages below on all CommServes (active and passive) and install version 4.6?
Client: xyz1 QID-106105 EOL/Obsolete Software: Microsoft .NET Core Version 3.1 Detected
Client: xyz1 QID-38794 Secure Sockets Layer/Transport Layer Security (SSL/TLS) Server Supports Transport Layer Security (TLSv1.1)
Client: xyz2 QID-106105 EOL/Obsolete Software: Microsoft .NET Core Version 3.1 Detected
TLSv1.1 is supported
Versions: OS - Windows 2016; SQL Server - 13.0.5893.48; Commvault environment version - 11.24.48
".NET Core 3.1 End of life: .NET Core 3.1 will reach end of life on December 13, 2022, as described in .NET Releases and per .
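Before removing anything, it can help to enumerate exactly which runtimes and SDKs are still installed on each CommServe. A quick sketch, assuming the dotnet CLI is on PATH (runtimes installed without the CLI would only show up in the installed-programs list, not here):

```shell
# Sketch: list every .NET runtime and SDK the dotnet CLI knows about.
list_dotnet() {
    dotnet --list-runtimes
    dotnet --list-sdks
}
```

Anything reporting 3.1.x in that output corresponds to the EOL .NET Core 3.1 hits flagged by the scanner.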
Hello team, I have 4 MediaAgents on AWS that are used to back up SAP databases; scheduling is done through a third-party application. I need to update the CommServe and the 4 MediaAgents installed as a GridStor. I remember that last time I put each MediaAgent into maintenance mode one by one, but I have the impression that the jobs failed with errors instead of terminating gracefully. Any advice please? Thanks!