Commvault Q&A, release updates, and best practices
Hi,

In the documentation it is written: “The DDBs created for Windows MediaAgent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period. The DDBs created for LINUX MediaAgent should be formatted at 4 KB block size.”

Questions: Why is the default deduplication block size 128 KB when it is optimal for neither the Windows nor the Linux MediaAgent, and in addition not optimal for a cloud library? Also, I cannot find in the documentation where to set the block size of the DDB. It is in the storage pool properties, but can someone provide me the link in Commvault?

Thanks,
Regards,
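For reference, those block sizes are set at the OS level when the DDB volume is formatted, before Commvault uses it. Below is a minimal sketch (Python wrapping the standard OS tools); the drive letter and device name are placeholders for your environment:

```python
import subprocess

# Hypothetical examples of formatting a dedicated DDB volume at the
# block sizes the documentation recommends. The drive letter and the
# device name below are placeholders; adjust for your environment.

def format_windows_ddb(drive_letter: str = "D") -> None:
    """Format an NTFS volume with a 32 KB allocation unit (Windows MA)."""
    subprocess.run(
        [
            "powershell", "-Command",
            f"Format-Volume -DriveLetter {drive_letter} "
            "-FileSystem NTFS -AllocationUnitSize 32768",
        ],
        check=True,
    )

def format_linux_ddb(device: str = "/dev/sdb1") -> None:
    """Create an ext4 filesystem with a 4 KB block size (Linux MA)."""
    subprocess.run(["mkfs.ext4", "-b", "4096", device], check=True)
```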
Hello,

We are planning to migrate the CommServe from a Windows Server 2012 server to new hardware running Windows Server 2019. We are also planning to upgrade Simpana; we are currently on 11.20. Which upgrade method is suitable for us? Should I upgrade Simpana on the old server before migrating the CommServe to the new server, or install Windows on the new server and upgrade there?

Thank you!
Best regards,
Elizabeta
Hello. Is this a valid configuration in a non-dedup scenario?

A library with two paths:
- MediaAgent A, on-prem, with a disk path
- MediaAgent B, cloud, with a disk path

Neither MediaAgent can reach the other. The clients using a storage policy with the above library are configured to override the default path and use an explicit path within their zone: on-prem clients use MediaAgent A and cloud clients use MediaAgent B. We are in the midst of a migration, and the above would reduce the number of storage policies created.

//Henke
Hello Commvault Community,

Is it possible to change or hide the name of the service visible on the CommServe when scanning ports with a network scanner? The attached screenshot (scanningportbynmaptool.png) shows exactly what we mean. Can we rename or hide "Commvault Webserver"? Will doing so have an impact on the operation of the WebServer? Presumably Commvault systems refer to it, hence the question.

Thanks for the help,
Kamil
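For context, a scanner such as nmap with -sV identifies the service from the response the port returns to its probes. A minimal sketch of that kind of probe, with host and port as placeholders, to see what a scanner sees:

```python
import socket

# Minimal illustration of what a scanner like `nmap -sV` does: connect,
# send a probe, and read the banner/response the service returns.
# HOST and PORT are placeholders for your CommServe and web server port.
HOST, PORT = "commserve.example.com", 81

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
    print(sock.recv(4096).decode(errors="replace"))
```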
Hi,

I’m working on a project where a large HyperScale environment needs to be migrated to Azure. We are looking at using either Azure storage or Metallic Recovery Reserve, or possibly a mix of the two. There is short-term retention of 30 days as well as LTR of 5 to 7 years, so we will use Hot/Cool tiers (possibly combined storage tiers if Metallic is not used).

HyperScale is set up using the default DDB block size of 128 KB. In an ideal world, one could just set up a secondary copy for the cloud storage (either Metallic or Azure), kick off the Aux Copy to the cloud, let it run to get the data over into Azure, and eventually promote it to the primary copy. However, as the cloud storage will then be used as a primary copy, we ideally want to configure it with a 512 KB DDB block size. MediaAgents will be set up in Azure, as they will eventually become the production MAs once things are cut over.

Some key questions on the above: copying between storage policies with different DDB block sizes – how will this affect overall dedupl
FYI: while moving some customers from legacy schedules/storage policies to server plans, we noticed the plan was not creating the daily full (we have it set on the plan for databases). We performed an assessment and noticed this was the case in several customer environments. It appears a fix already exists and will be added in the upcoming maintenance release for FR28. I'm not sure whether this issue also persists in older/newer versions, but be aware this might be the case. Relevant fixes are available as well!
Two SAP HANA servers have problems with the Commvault client: commvault.service is not starting ("no such file or directory"). I ran a Readiness Check and Restart All Services from the CommCell Console for the two clients, but it is not working. I tried to repair the software on one of the servers, but the issue is the same. Any advice?
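A minimal diagnostic sketch for the "no such file or directory" symptom: inspect the systemd unit and recent journal output to see which binary path the unit points at. The unit name is taken from the post; the commands are standard systemd tooling:

```python
import subprocess

# Show the service status, the unit file contents (to see ExecStart and
# whether the referenced path exists), and the last journal entries.
for cmd in (
    ["systemctl", "status", "commvault.service"],
    ["systemctl", "cat", "commvault.service"],
    ["journalctl", "-u", "commvault.service", "-n", "50", "--no-pager"],
):
    print("$", " ".join(cmd))
    subprocess.run(cmd)
```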
I have a couple of questions related to VM UUIDs. We have huge amounts of VM backups, and we were cross-checking whether the UUIDs match on both the Commvault and VMware ends. Some of the UUIDs are the same but some are different; the VMs with matching UUIDs are not restored VMs, so I'm not sure why this difference is happening. Is it possible to pull a report, using the API or something similar, that shows the VM details with UUIDs from the Commvault end?
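On the API question, something along these lines may work against the v11 REST API. The endpoint and field names used here (/Login, /VM, vmStatusInfoList, strGUID) are assumptions to verify against the REST API documentation for your service pack, not a confirmed recipe:

```python
import base64
import requests

# Hypothetical sketch of pulling protected-VM details (including GUIDs)
# from the Commvault REST API. Host, credentials, endpoint, and field
# names are assumptions -- verify against the REST API documentation
# for your service pack before relying on them.
BASE = "http://webconsole.example.com/webconsole/api"

def login(user: str, password: str) -> str:
    """Authenticate and return the auth token."""
    body = {"username": user,
            "password": base64.b64encode(password.encode()).decode()}
    resp = requests.post(f"{BASE}/Login", json=body,
                         headers={"Accept": "application/json"})
    resp.raise_for_status()
    return resp.json()["token"]

def list_vms(token: str) -> None:
    """Print name and GUID for each protected VM."""
    resp = requests.get(f"{BASE}/VM",
                        headers={"Authtoken": token,
                                 "Accept": "application/json"})
    resp.raise_for_status()
    for vm in resp.json().get("vmStatusInfoList", []):
        print(vm.get("name"), vm.get("strGUID"))

if __name__ == "__main__":
    list_vms(login("admin", "password"))
```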
Hi guys, I have a client whose files are failing in their SharePoint Online backup. Can anyone assist me with the root cause of this?

https://verimarksa.sharepoint.com/sites/PODREPORT/Contents/sites/PODREPORT/Shared Documents/General/Pod check (New).xlsx FAILED Prior version 602.0 download failed.
SharePoint DB backup fails due to service accounts not having access to the Job Results/Log Files directories
Hi!

We’re currently testing SharePoint Subscription Edition in our Dev environment. I configured it like I normally do, and when I run a “Validate Account” it says everything is fine. However, when I tested a full DB backup, it “Completed with errors”. So I did a little digging and found that a lot (~15) of service accounts are in use for this SharePoint installation. Just for testing purposes, I gave all the service accounts access to the Job Results and Log Files directories, and then the backup completed successfully.

So it seems to me that more accounts need to be given access to those folders. I then did some more testing, removing one account at a time, but I ran into an issue where some of the account info seems to be cached: the job would fail, I would add the account back, and it would still fail the next couple of times, but if I waited some hours and then retried, without having do
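For anyone reproducing the workaround described above, here is a hypothetical sketch that grants a list of service accounts Modify rights on both directories via icacls. The paths and account names are placeholders for your environment:

```python
import subprocess

# Hypothetical sketch of the workaround described above: grant each
# SharePoint service account Modify rights on the Job Results and Log
# Files directories. Paths and account names are placeholders.
DIRS = [
    r"C:\Program Files\Commvault\ContentStore\iDataAgent\JobResults",
    r"C:\Program Files\Commvault\ContentStore\Log Files",
]
ACCOUNTS = ["DOMAIN\\svc_sp_farm", "DOMAIN\\svc_sp_search"]  # ...and so on

for path in DIRS:
    for account in ACCOUNTS:
        # (OI)(CI)M = inheritable Modify rights on files and subfolders
        subprocess.run(["icacls", path, "/grant", f"{account}:(OI)(CI)M"],
                       check=True)
```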
Hello, we are using Commvault to back up some VMs, but we get this error:

"There are too many existing backup snapshots on the virtual machine. Backup snapshots are being cleaned up, however manual intervention may be required to ensure that all snapshots are removed. This may indicate that the storage is unable to keep up with the IO activity of the virtual machine during a backup."

Any help please.
Hello dears, I'm facing an issue while performing an out-of-place restore of an Oracle database. The restore starts to transfer 3 or more files, then goes into pending status with the below error:

Error Code: [18:182]
Description: Failed with SBT library error [ORA-27199: skgfpst: sbtpcstatus returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text: ]
Source: R12_DB, Process: ClOraAgent

MediaAgent Job Manager log:
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Going to stop client with SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. Type
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Cannot find SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. JobId . Calling JM unregister here on its behalf
26501 59f7 02/01 10:28:38 #### deinitializeSDTpipeline CALLED for pipelineID [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0] 5255
Hello,

I’ve tried to create a workflow that: runs a saved report, then sends that report in an email. I have little experience with workflows and I haven’t been successful in my tries; I don’t even know if it is possible to do. Has anyone done this, or does anyone know how it can be accomplished? Attached is a view of my current attempts, though not successful. Thanks for any help.
Good day all,

I am in the process of onboarding a MongoDB cluster into the backup environment. It is a 3-node replica set. During the initial configuration and testing, all settings were left at default, which includes using the Native snap engine. The backup fails due to insufficient free capacity/extents on the Volume Group; I understand how this needs to be fixed.

What are the considerations for selecting a different snap engine? These MongoDB instances are located on VMware. Or is the Native engine the only supported one? https://community.commvault.com/commvault-q-a-2/backup-mongodb-without-intellisnap-4327

Since mongodump can only be used on standalone instances as part of a pre/post script, according to https://documentation.commvault.com/2022e/expert/43131_performing_full_backup_of_standalone_mongodb_deployment.html, what are the other options when using the File System agent? Am I correct in assuming that mongodump is not supported on replica sets? I have setup VM ba
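For the standalone-style approach mentioned above, a pre-backup script would dump the database to a directory that a File System subclient then protects. A minimal sketch, with host, port, and dump path as placeholders (mongodump itself must be installed on the client):

```python
import subprocess
from datetime import datetime

# Hypothetical pre-backup script along the lines of the documented
# standalone-MongoDB approach: dump the database to a directory that a
# File System subclient then backs up. Host, port, and dump path are
# placeholders for your environment.
DUMP_DIR = f"/backup/mongodump/{datetime.now():%Y%m%d_%H%M%S}"

subprocess.run(
    ["mongodump", "--host", "localhost", "--port", "27017",
     "--gzip", "--out", DUMP_DIR],
    check=True,
)
print(f"Dump written to {DUMP_DIR}")
```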
How to properly delete/decommission mount paths associated with old storage: DDBs still appear associated with the mount paths.
We have added new storage to Commvault and set the old mount paths to “Disabled for Write” via the mount path “Allocation Policy” → “Disable mount path for new data” + “Prevent data block references for new backups”. All mount paths that are disabled for write show no data via the “Mount Path” → “View Contents” option. We have waited several months for all the data to age off.

BUT… I see info in the forums/docs that data may still be on the storage, with references to “baseline data” in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our disabled-for-write mount paths have DDBs listed in them. So it appears Commvault is still using the storage in some way. I saw a post that indicated: “The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount pat
Hello,

We are soon going to deploy a HyperScale X Reference Architecture with 3 nodes. Since I didn't find this anywhere else, I would like to know whether HSX can be the proxy/access host for my VSA-based backups. Can each node be a proxy? Can it perform the File Recovery Enabler (Linux VM file restore) role? Can it perform Windows VM file recovery? My virtual environment is VMware vSphere 7.

Kind regards,
Pedro Rocha
nFilterApplicationFiles setting for Filesystem subclient backups and HANA using the HANA pseudoclient
I have a Commvault client that appears to be backing up the HANA database via the file system subclient. I set the nFilterApplicationFiles setting, a full ran over the weekend, and there was no change in the size or number of files backed up. Looking at the content of the backup, there is > 2 TB of data backed up in the file system subclient under the /hana/shared/[db] path. I was under the impression that nFilterApplicationFiles would prevent that… but the “database discovered” is located in the pseudoclient, not in the SAP for HANA agent associated with the server-level Commvault client….

Questions: Does the nFilterApplicationFiles setting “work like it should”, i.e. not back up the HANA database, when the HANA pseudoclient is used as the backup method and backups are “pushed” to Commvault from the SAP HANA side (using backint)? Are there other settings/configs to make it work, like something I would have to do in the “SAP for HANA” subclient? I could manually add in filesy
A customer requests a temporary blackout window between two dates: a period of 4 days without any activity. When I try to create the blackout window, I am able to select the start and end date, but the console won’t allow me to configure the start/end time in such a way that it stops all activity in that period. The customer would like to stop backups at 2 PM on one date and start again at 3 AM four days later. I can’t save this setting because the console says “Start time should be less than end time”. Any workaround for this?
We are currently looking to go tapeless, and I realize that some of our backups are not set up as well as they could be. One of those areas is the backup of SQL databases. Currently the SQL databases are backed up by SQL Server itself, and then we make a backup of the .BAK files. This adds a LOT of extra time to restores, as we have to restore the BAK file and then restore it again inside SQL. I have been told that when Commvault was first installed years ago, we tried using Commvault for the full backup of the DBs, but it was very slow and had issues. I am hoping that with newer technology things will be better. My question is: what are the best practices? I have been reading through the Commvault docs for SQL Server and it seems straightforward, but I am not sure whether the backup window needs to be when no one is using the DB, or whether the DB needs to be locked for a few hours while it backs up. We normally have processes running against the DBs most of the day and night. I am sure many comp
Hello,

I want to schedule 14 days of daily retention, with weekly and monthly GFS extended retention, and I'm looking for best practices regarding cycles. I'm thinking of scheduling a synthetic full every 7 days, but what about cycles: 1 or 2?

Thank you in advance,
Nikos
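As a rough illustration of how days and cycles interact (a job ages off only when both criteria are met), here is a simplified model assuming a synthetic full every 7 days. This is a sketch for intuition, not Commvault's actual pruning logic:

```python
# Simplified model: a full becomes prunable once it is at least
# RETENTION_DAYS old AND at least RETENTION_CYCLES newer complete
# cycles exist. Values below match the scenario in the post.
RETENTION_DAYS, RETENTION_CYCLES, FULL_INTERVAL = 14, 2, 7

def is_prunable(age_days: int) -> bool:
    newer_cycles = age_days // FULL_INTERVAL  # complete cycles since this full
    return age_days >= RETENTION_DAYS and newer_cycles >= RETENTION_CYCLES

for age in (7, 13, 14, 21):
    print(f"full backup aged {age} days -> prunable: {is_prunable(age)}")
```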
Hello, I have an issue with restoring a SQL DB to another VM: it takes too long, sometimes almost 1 day for 2.9 TB. The SQL backup used 4 streams, and the restore is using 5 streams. Here are some logs for the restore, which is still running. Any idea why it's taking so long?