Commvault Q&A, release updates, and best practices
We just upgraded to 2022E. We would like to know how to reduce false positives from our roaming Terminal Services profiles stored on our main file server, for users running Microsoft Visual Studio Code. I'm sure Commvault knows that one of the file extensions you constantly alert on is anything ending in ".code" - but the problem is that VS Code saves temp files with the .code extension in users' AppData folders. Here's an example from this morning:

Description: A suspicious file [M:\LabTSProfiles\jsmith.v6\AppData\Roaming\Code\CachedData\6261075646f055b99068d3688932416f2346dd3b\polyfills-3e1ee7640a5aae80b3466bca7f4bdf90.code] is detected on the machine

Anyway, as you can see, this is John Smith's roaming AppData folder, with VS Code's "Code" folder inside it. I'd like to know if I can use a wildcard of some kind to specifically exclude any .code file that falls under that Code subfolder. I do not want to use the sExcludeExtensions additional setting to block every .code file.
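For what it's worth, a candidate wildcard pattern can be sanity-checked before it goes into a filter. The sketch below uses Python's fnmatch, whose `*` matches across path separators, to test a hypothetical exclusion pattern against the alerted path; the exact wildcard syntax Commvault's filters accept should still be verified against the documentation.

```python
import fnmatch

# Hypothetical exclusion pattern - verify the real wildcard syntax against
# Commvault's subclient filter documentation before using anything like it.
pattern = r"*\AppData\Roaming\Code\*.code"

paths = [
    # The path from the alert (fnmatch's "*" spans backslashes, so one
    # pattern covers arbitrarily deep subfolders under \Code\)
    r"M:\LabTSProfiles\jsmith.v6\AppData\Roaming\Code\CachedData"
    r"\6261075646f055b99068d3688932416f2346dd3b"
    r"\polyfills-3e1ee7640a5aae80b3466bca7f4bdf90.code",
    # A .code file outside the Code folder - should NOT be excluded
    r"M:\LabTSProfiles\jsmith.v6\Documents\project\main.code",
]

for p in paths:
    verdict = "EXCLUDE" if fnmatch.fnmatch(p, pattern) else "keep"
    print(verdict, "->", p)
```

The point is that anchoring the pattern on the `\AppData\Roaming\Code\` segment keeps the exclusion from swallowing .code files elsewhere in the profile.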
Is there a way to create an alert or email notification specifically for whenever a backup job fails due to "insufficient free space available to create a snapshot of virtual machine"? I see this from time to time whenever a datastore in vCenter is low on free space, which is easy to resolve. However, I may not always notice the error right away.
Hi All, We are still new to the CV/AWS implementation and have a few questions on how CV handles tagging.

AWS – Commvault Install – 11.28.36

Issue: Our AWS/applications support groups are working on multiple automation efforts and have run into an issue with the tags that Commvault creates. Some of the automation doesn't know what to do with the tags Commvault leaves behind.

Questions: What are these tags used for during backup? How long are these tags kept? Does Commvault delete these tags? If these tags are deleted by the automation, will there be any issues?

Any thoughts on this are greatly appreciated. Thanks, Chuck
Hello, I'm trying to enable ransomware protection on a Linux MA. I have followed the guide below and everything went fine: https://documentation.commvault.com/v11/expert/126093_configuring_ransomware_protection_for_linux_mediaagent.html But after I finished the configuration and rebooted the MA, when I check the MA properties the ransomware protection is still disabled... any ideas?
Management has found the Commvault Cleanup Report and is reviewing it monthly to ensure I'm looking after Commvault properly. One of the sections shows the Disabled Subclients, which contains 39 entries, 29 of which are "default". When I go in through the Java GUI, there's no option to delete these even though there are no backups and they are not part of a schedule. Question: Is there a way to delete the "default" subclient? Ken. P.S. Neither the client nor its default subclient has a Plan within Command Center.
We have been in a seemingly never-ending tail chase regarding the required field permissions for a Salesforce account. The CommCell is on 11.28.32, but it's been like this since day one (a year ago). Before anyone suggests it: yes, the list of required permissions has been confirmed to be correct for the profile; user permissions are not the issue. We have downloaded the Salesforce-compatible profile over and over, they feed it into the org, however they do it, the field permissions are fine for a while, and then sooner or later there's a new list of fields the profile apparently can't read. It seems like inheritance or something isn't working for newly created fields, but I'm no Salesforce guy. Does anyone know if there's a way to schedule a regular dump of those "lacking" field permissions to XML so I can send it to the app team automatically via a workflow? Or do you feel this should not be necessary once the profile is correctly configured? Then, similarly: is dealing with field permissions handled d…
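On the automated-dump idea: once a list of "lacking" fields is extracted (how you get that list, and how a Commvault workflow would deliver the result, are left open here), turning it into a Metadata API-style profile fragment is straightforward with the standard library. A sketch with hypothetical field names:

```python
import xml.etree.ElementTree as ET

def field_permissions_xml(fields):
    """Build a Salesforce profile-style <fieldPermissions> fragment from a
    list of "Object.Field" names, all marked readable. Sketch only: the
    field list is assumed to come from whatever reports the failures."""
    root = ET.Element("Profile",
                      xmlns="http://soap.sforce.com/2006/04/metadata")
    for name in sorted(fields):
        fp = ET.SubElement(root, "fieldPermissions")
        ET.SubElement(fp, "editable").text = "false"
        ET.SubElement(fp, "field").text = name
        ET.SubElement(fp, "readable").text = "true"
    return ET.tostring(root, encoding="unicode")

# Hypothetical fields reported as unreadable by the backup profile
missing = ["Account.Custom_Score__c", "Account.Region__c"]
print(field_permissions_xml(missing))
```

Handing the app team a ready-to-merge fragment like this at least removes the manual transcription step, even if it doesn't fix the underlying inheritance question.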
Good day Community, for the second customer this month, when I install the standby CommServe it hangs and spins. The first customer took three full reinstalls to finally get it through, but for the current client it never works and always hangs. In the install log file, it hangs on this line:

Adding environment variable [CV_Instance001] = [C:\Program Files\Commvault\ContentStore\Base]

When I look at the environment variables, it is there. I'm using version 11.28.44. Is this something common? Thanks in advance.
Hello all, I am new to the RHV/OLVM environment, so what does this mean: "You must have an admin user account that can connect to Red Hat VMs", plus "The backup/restore user must have several additional permissions"? So the service account used to connect to the hypervisor must be an admin account, able to connect to Red Hat VMs, and also have the permissions listed in the doc? Why would an admin account need additional permissions? Isn't admin sufficient? Regards,
Is this assumption correct? If you start CV encrypting data sent to dedupe storage, my guess would be that this creates a completely new set of dedupe data. Once encryption is turned on, the dedupe engine will see it as new data rather than an encrypted version of the old. While the unencrypted and encrypted data from the same servers remain in the same dedupe storage, storage usage could be higher than usual.
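The reasoning behind the assumption can be illustrated directly: a signature computed over ciphertext can never match one computed over the plaintext, so if signatures were generated after encryption, every encrypted block would look new to the dedupe engine. Whether that actually applies depends on where Commvault generates signatures relative to encryption, which is worth confirming in the deduplication documentation. A toy sketch (SHA-256 and XOR as stand-ins, not Commvault's actual signature or cipher):

```python
import hashlib

def xor_stream(data: bytes, key: int) -> bytes:
    # Toy stand-in for encryption; real software encryption (e.g. AES)
    # likewise turns identical plaintext into different-looking bytes.
    return bytes(b ^ key for b in data)

block = b"identical backup data block" * 4

# Signature of the existing, unencrypted copy in the store
plain_sig = hashlib.sha256(block).hexdigest()
# Signature of the same block after encryption is switched on
cipher_sig = hashlib.sha256(xor_stream(block, 0x5A)).hexdigest()

# Different signatures -> no dedupe match against the old baseline,
# so a second full set of blocks would be stored.
print(plain_sig == cipher_sig)  # prints: False
```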
Windows 2016 server. The client is installed manually and is visible on the Commvault side, and all the connectivity and readiness checks are okay. But when we try to configure file system subclients, it gives the following error message: "Client configuration failed. There is no license available for File System Core." As this CommCell is configured with host-based licensing, does this require purchasing an additional license from Commvault? Kindly help us verify this, as we couldn't find documentation proof. We installed the File System and SQL Server agents with the installation.
Hi, in the documentation it is written: "The DDBs created for Windows MediaAgent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period. The DDBs created for Linux MediaAgent should be formatted at 4 KB block size." Questions: Why is the default deduplication block size 128 KB, when that is optimal for neither the Windows nor the Linux MediaAgent, and in addition is not optimal for a cloud library? Also, I cannot find in the documentation where to set the block size of the DDB. It is in the storage pool properties, but can someone provide me the link in Commvault? Thanks, regards,
Hi, I have a short question in the context of Exchange policy assignments. We are using auto-discovery with AD groups to assign Exchange policies to user mailboxes in the Mailbox Agent. The question is: what happens if I add a specific user to a different AD group which has a different set of Exchange policies configured? Will the Exchange policies get changed for the user mailbox, or will it keep the original Exchange policy assignment? Kind regards, Florian
Hi, let me ask four questions here from Japan. (1) Can we use Salesforce backup in multiple environments without any problems? If there are issues or things we need to consider, please point them out. Also, is it possible to restore by linking with a system other than Salesforce? (2) I would like detailed information on the object comparison and metadata comparison features of Salesforce backup. Could you point me to the relevant document or web page? (3) What is the maximum length of time that can be specified as a backup retention period? Also, is it possible to store backups that have exceeded their retention period separately so that they are not lost? (4) What industries does Metallic sell to? And what industries are Salesforce Backup licenses sold to? Kind regards
Has anyone ever transferred their initial baseline backup between two sites using an available removable disk drive (seeding a deduplicated storage policy)? It looks like we may have to use this method to get a good aux copy for some remote sites. How did it go? Did you have any issues? Thanks for your input.
Hello all, I'm facing an issue while performing an out-of-place restore for an Oracle database. The restore starts to transfer 3 or more files, then goes into pending status with the error below:

Error Code: [18:182]
Description: Failed with SBT library error [ORA-27199: skgfpst: sbtpcstatus returned error ORA-19511: non RMAN, but media manager or vendor specific failure, error text: ]
Source: R12_DB, Process: ClOraAgent

MediaAgent job manager log:
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Going to stop client with SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. Type
26501 59f7 02/01 10:28:38 #### SdtTailServerPool::StopPipe: Cannot find SDT pipe [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0]. JobId . Calling JM unregister here on its behalf
26501 59f7 02/01 10:28:38 #### deinitializeSDTpipeline CALLED for pipelineID [SDTPipe_R12_DBCRP_COLD_jd-mediaagent_5255_1675236023_20203_1_11b7691e0] 5255
Hello everyone - I'm looking for the Commvault doc that says you need to upgrade from 11.20.* to 11.24 and then to 11.28, etc. I do see this doc: https://documentation.commvault.com/2022e/essential/2619_platform_release_schedule_and_lifecycles.html But that just tells me about each release schedule. I want the doc that describes the upgrade paths for each version. For example: do I need to upgrade to 11.24 to get to 11.28 from 11.20? Does this doc exist? I don't think you can go straight from 11.20 to the latest and greatest, correct? Thanks, BC
Hi, we are new to the AWS-Commvault installation. The designed VPC architecture and our CV configuration are as follows: CV = 11.28.36. We have 1 CommServe serving 6 different VPCs. Each VPC has its own MediaAgent and S3 bucket. However, in 1 VPC there are 2 accounts, and each account has its own MA and S3 bucket. We are finding that VM discovery from each MA is discovering VMs associated with the other account. These accounts are divided by subnet ranges. We tried to limit the discovery of the VMs in the client group by choosing the following CIDR filter parameter: "Client IPV4 CIDR is 192.168.32.0,19". This only yielded the EC2s that have CV integration, i.e. MediaAgent, Cassandra, etc. Is there a way to limit the VMs discovered by IP address ranges? Thanks, Chuck
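One observation on the result described above: a client-group rule on client IP can only evaluate machines that are registered Commvault clients reporting an IP, which may explain why only the agent-installed EC2s matched while agentless discovered VMs did not. Assuming the ",19" in the rule denotes a /19 prefix length, this sketch shows which addresses such a filter would actually cover:

```python
import ipaddress

# Assumption: "192.168.32.0,19" in the rule means the 192.168.32.0/19
# network, i.e. addresses 192.168.32.0 through 192.168.63.255.
subnet = ipaddress.ip_network("192.168.32.0/19")

candidates = ["192.168.40.17", "192.168.63.254", "192.168.64.1", "10.0.5.9"]
for ip in candidates:
    inside = ipaddress.ip_address(ip) in subnet
    print(ip, "->", "in scope" if inside else "filtered out")
```

Checking the two accounts' subnet ranges against the intended prefix this way confirms whether the CIDR itself, or the rule's matching scope, is what lets the other account's VMs through.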