Commvault Q&A, release updates, and best practices
Hello Team, I'm configuring 2,500 endpoint devices for protection and content indexing. So far I have created the package, created a laptop plan, created a content indexing policy, and installed the package on one laptop. How can I enable content indexing on the device, or include it in the CI plan? There is no button in the association tab. Also, how can I assign access to each device owner so that he can only see his own device from the Web Console? I went to Manage > System > Access control and activated automatic owner assignment with the permissions as shown below. Appreciate a fast response.
DDB Verification for Private Cloud Library
Hi Team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB verification schedule created automatically by the system. Is this the default behavior? Is DDB verification not recommended for cloud libraries? Is DDB verification recommended even if our private cloud data is still in our own data center? Best regards.
Full backup or synthetic for streaming on agent-based backups?
Hi guys, which is better: a full backup or a synthetic full for streaming on agent-based backups of file systems? I get the point that a synthetic full is better in the sense that we are not using the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
Hi Team, I can see the .NET-related vulnerabilities below. Can you please let me know whether a .NET update will impact the Commvault backup server and media agents?

Microsoft CVE-2020-0605: .NET Framework Remote Code Execution Vulnerability
Microsoft CVE-2020-1046: .NET Framework Remote Code Execution Vulnerability
Microsoft CVE-2020-1147: .NET Framework, SharePoint Server, and Visual Studio Remote Code Execution Vulnerability
Microsoft CVE-2021-24111: .NET Framework Denial of Service Vulnerability
Microsoft CVE-2022-26832: .NET Framework Denial of Service Vulnerability
Microsoft CVE-2020-1108: .NET Core & .NET Framework Denial of Service Vulnerability
Microsoft CVE-2022-21911: .NET Framework Denial of Service Vulnerability
Microsoft CVE-2022-41064: .NET Framework Information Disclosure Vulnerability
Microsoft CVE-2023-21722: .NET Framework Denial of Service Vulnerability
Microsoft CVE-2023-21808: .NET and Visual Studio Remote Code Execution Vulnerability
Microsoft CVE-2022-41089: .NET Fr
Full backup capacity?
Hi, I have a customer who has multiple sites, and now we are looking to replicate those sites to the cloud and need to evaluate the bandwidth required for each site. Is there a way to generate a report for each site telling us what the equivalent of a full backup would be for that site? I have the Total Application Size and Total Data Size on Disk, but I cannot really rely on these since some sites have 15 days of retention and others have 60 days. Is there a report or another way to find this kind of information?
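Whatever report you end up using, once you have a full-equivalent front-end size per site, the seeding-time math itself is simple. A minimal sketch; the 80% efficiency factor is purely an assumed derating for protocol overhead, not a Commvault figure:

```python
def transfer_hours(size_tb, link_mbps, efficiency=0.8):
    """Hours needed to push size_tb terabytes over a link_mbps link,
    derated by an assumed protocol-overhead efficiency factor."""
    bits = size_tb * 8 * 1e12            # decimal TB -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600

# e.g. a 10 TB full-equivalent over a 1 Gbps link at 80% efficiency:
# transfer_hours(10, 1000)  ->  ~27.8 hours
```

Running this per site against each site's full-equivalent size gives a first-order bandwidth plan before deduplication or DASH copy savings are factored in.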
Best practices for a large linux file server besides File Agent?
Our Linux file server is a large VMware VM (48 GB RAM, 8 virtual CPUs on an Intel Xeon Gold 6248R @ 3.00 GHz) with about 500 TB of storage, which our Linux admin has set up as 12x 60 TB (max size) volumes on our SAN, presented back to VMware over iSCSI.

We get reasonable performance for our Linux users' day-to-day usage, but certain customers and projects are reaching crazy levels of unstructured file storage. One customer has a folder that consumes 16 TB of data across 41 million files, and while that's our worst offender, the top 5 projects are all pretty similar.

We've been using the Linux File Agent installed in this VM since we started using Commvault in 2018. We typically see about 3 hours for an incremental backup across this file server, with the majority of the time spent scanning the entire volume; the backup phase itself runs relatively quickly. We run 3x incremental backups per day, at 6a
Manually deleting old files from S3. DDB and Barcode Question.
Greetings, I have a single S3 bucket that has held all of our aux copy backups for many years; it looks like it goes back to 2017. This bucket is gigantic, sitting at about 1.5 PB, with hundreds of millions of files in it. I inherited this setup when I came on board a while back, so I don't know much about its history. I do know that a couple of years back Commvault was rebuilt/recreated and everything was set up new. Our current storage policies have one year of retention going out to this S3 bucket, and I can see in Commvault that jobs do get aged off after a year. This goes through a DDB, which I sealed a few weeks ago, and at the same time I cleaned up (data aging / space reclamation) as much as I could in order to shrink the bucket. The bucket size dropped only very slightly when I did this; I was hoping for a lot more. That started to make me think there is a discrepancy between what the bucket holds and what Commvault believes it holds. I think there is a lot more
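To quantify the discrepancy, it can help to measure what is actually in the bucket independently of Commvault. A minimal boto3 sketch (the bucket name below is a placeholder, and this assumes S3 credentials are configured for the environment):

```python
def summarize_pages(pages):
    """Sum object count and total bytes across list_objects_v2 pages."""
    count, total_bytes = 0, 0
    for page in pages:
        for obj in page.get("Contents", []):
            count += 1
            total_bytes += obj["Size"]
    return count, total_bytes

def bucket_usage(bucket, prefix=""):
    """Walk the whole bucket (or a prefix) with the S3 paginator."""
    import boto3  # third-party AWS SDK: pip install boto3
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    return summarize_pages(paginator.paginate(Bucket=bucket, Prefix=prefix))

# Example (requires credentials and access to the real bucket):
# count, total = bucket_usage("my-aux-copy-bucket")
# print(f"{count} objects, {total / 1024**4:.1f} TiB")
```

Comparing that total against the size Commvault reports for the cloud library would confirm whether orphaned data from the pre-rebuild era is still sitting in the bucket. Note that a full listing of hundreds of millions of objects takes a long time; an S3 Inventory report is often a cheaper way to get the same numbers at that scale.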
Issue enabling ransomware protection on a new mount for a Linux MA
Hello Community, before making the change in the prod environment, I mounted a new iSCSI LUN (no multipath) on a MediaAgent (RHEL 8.7) in a lab environment and configured it for a Commvault library. I also completed a first backup to this mount. I'm now attempting to enable Commvault ransomware protection for it.

During this process, I received a message stating that the operation would be disruptive and required updating the fstab configuration file for both local and network file systems. After confirming the operation by entering 'y', a policy was created/added in the cvstorage module.

Issue: even though the process was carried out, fstab was not updated for either mount. I expected the process to unmount the mount and update fstab for it, as indicated in the following output:

2023-xx-xx 22:18:05,387 - __main__ - INFO - unmounting 'XXX_mount_name'
2023-xx-xx 22:18:05,412 - __main__ - INFO - updating fstab with security 'XXX_mount_name'

However, the process doesn't run umount/update fsta
Multi-Stream support for Vmware Full VM Restore
Does Commvault support, or plan to add support for, multi-streaming during a full VM restore in VMware? For example, a VM with 4 VMDK files could use 4 streams during restoration. We are running v11.24 and see that a single stream is used during the restore, which slows restores down, especially for larger VMs. We could of course use Live VM Recovery, but that does not help if the VM is running a resource-heavy app.
Usage of HPE StoreOnce as a disk library
Hi, I need some info about using HPE StoreOnce as a disk library, something like this: https://documentation.commvault.com/11.24/expert/102869_add_hpe_catalyst_storage.html

Currently we're using one HPE StoreOnce Catalyst store for a disk library. This gives us a single point of failure, and we need to try to remediate it, so I would like to know what options there are. Could we have a disk library with Catalyst stores from multiple HPE StoreOnce boxes? For example, a grid consisting of 4 MAs with storage from 4 HPE StoreOnce boxes, so that if there is an issue or maintenance we can take down one HPE StoreOnce box at a time and backups will continue to run.
Aux Copy performance: Does CommVault "throttle" or "pace" itself, in terms of total throughput for a job?
Just a curiosity. Basically: does Commvault increase/decrease the throughput (of a job, or per stream) of an aux copy based on internal factors like how much data has to be copied and/or when the next job is due to run, or anything similar?

Essentially: if no external bottlenecks existed for an aux copy, would Commvault go as fast as it can, or would it internally throttle because there is no need to go as fast as possible? If so, is there a way to see these metrics or know when it's occurring?

Put another way: I'm wondering whether Commvault, having "a few TB to copy" for a job, decides "I can easily copy this in x hours before the next job, so no need to run at max performance", and then on another day, with say 100 TB to copy, determines "I need to ramp up the throughput for this job if I'm going to finish in time."

Another way to put this: I'm not looking for a "yes/no" on whether "job time or data amount to copy" is literally the exact
Is there a CV REST API to get the properties/details of a scheduled report?
I have some customized reports scheduled to run once every 90 days, and I would like to update certain properties of that schedule once in a while using a workflow. It would be easy to do if I could run a GET API call on the report schedule using the report schedule name, and then use that to update the properties I want. However, I'm unable to find any API specific to report scheduling: https://api.commvault.com/#416fa5bf-c150-4cbd-8f05-4bdc867d2719

Is it possible to get the details of a specific scheduled report? Or how do I list the properties of a particular report schedule, or find the taskId for a report schedule? https://api.commvault.com/#8de6b19c-2815-4d50-b89e-99654a090db0
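While the report-schedule-specific endpoint is the open question here, the generic pattern of authenticating and then matching a schedule by name can be sketched as below. This assumes the documented POST /Login endpoint (base64-encoded password, token in the response) and a task-list JSON with a taskDetail/task shape; those field names are assumptions to verify against your own GET output:

```python
import base64

def login(base_url, user, password):
    """Get an auth token from POST /Login.
    base_url is a placeholder, e.g. https://server/webconsole/api."""
    import requests  # third-party: pip install requests
    resp = requests.post(
        f"{base_url}/Login",
        json={"username": user,
              "password": base64.b64encode(password.encode()).decode()},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["token"]

def find_task_id(task_list_json, schedule_name):
    """Pick the taskId whose taskName matches the schedule name.
    The taskDetail/task field names are an assumption based on the
    generic task JSON shape."""
    for detail in task_list_json.get("taskDetail", []):
        task = detail.get("task", {})
        if task.get("taskName") == schedule_name:
            return task.get("taskId")
    return None
```

Once the taskId is known, it can be fed into whichever task-modification call the workflow uses; the matching logic above stays the same regardless of which listing endpoint turns out to cover report schedules.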
vSphere 8, Virtual Machine Hardware Version 20
Hi everyone. When will VMs with virtual hardware version 20 be supported? I see that vSphere 8 is now listed as supported, but VMs with Virtual Machine Hardware Version 20 are not: https://documentation.commvault.com/2022e/essential/107803_vmware_system_requirements.html

Any idea? Thanks. Kind regards, Kim Rubeck, Solstar
API: get full schedules
Hello everyone, is it possible to get full schedules via an API call? I am able to get the schedules and the subtasks inside each schedule, but I could not find a way to differentiate full vs. incremental vs. synthetic full schedules. Granted, synthetic full has a display name that lets you tell it apart from full and incremental, but full and incremental schedules do not have any name. Is there any other property that I can key off? Thank you!
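One property worth checking in the subtask JSON is a backup-level field under the subtask options. A minimal sketch of classifying subtasks that way; the options/backupOpts/backupLevel path and the numeric codes (as used by cvpysdk) are assumptions to verify against your own schedule output:

```python
# Backup-level codes as used in cvpysdk (assumption: confirm against
# your own schedule JSON before relying on them).
LEVELS = {1: "Full", 2: "Incremental", 3: "Differential", 4: "Synthetic Full"}

def backup_level(subtask_json):
    """Classify one schedule subtask by options.backupOpts.backupLevel."""
    code = (subtask_json.get("options", {})
                        .get("backupOpts", {})
                        .get("backupLevel"))
    return LEVELS.get(code, "Unknown")
```

Applied over each subtask returned by the schedules listing, this would let you split schedules by backup type even when the display name is empty.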
Replacing a VMware access node via Command Center might impact pruning of snapshot
Following a recent experience, I thought I'd share this with the community. We replaced all the access nodes in one of our environments with the Commvault-provided FREL image. At some point we noticed that older snapshots were not being pruned, and we found that snapshot aging, when deleting them directly from the hypervisor configuration, was failing with the error: No Host found with HostId.

The logs revealed the following:

14852 149c 03/17 09:36:01 741958 CVSnapClientAPIInternal::deleteVolumeSnaps() - Failed to delete Volume Snaps. Err [60204:Unknown Volume Mount Host]. VolSnapInfo Status- Err [60204:No Host found with HostId .].

In the end we managed to fix the issue by selecting the newly added access nodes in the Array Access Nodes section of the Array Management module for the particular hypervisor within the CommCell Console. Of course I hope development will fix this in the future so you do not have to go back to t
Moving clients to new storage policy question
Greetings, we changed some retention and aux copy locations, and I pointed a bunch of clients over to a new storage policy we created. The new policy has a primary and an aux copy. The primary uses the same storage location the clients wrote to before, with the same DDB. The aux copy is new, though, with a new pool and DDB.

My question is whether I should run a full backup of each of these clients now that they are repointed to the new storage policy. I think I have to, right? The clients have been running a synthetic full every two weeks and incrementals every other day, and the original full backups don't exist anymore. They won't just pick up their normal chain (synthetic fulls and incrementals) after moving to the new policy, right? Thanks
How to change or modify the current credential for a Linux client from the CommCell Console?
Suppose we installed the file system agent on a Linux client using the root password Abc123!. Since then the root password has been changed, and there is a new requirement to install the MySQL agent on that client and update it to the current MR version. So, how do we change or modify the stored root credential for the Linux client from the CommCell Console?
VM Backup Definition Suggestions
Hi team, I need a recommendation for our VMware backups. Our situation is as follows:

Definition 1: With the DefaultBackupSet, we back up approximately 1,000 VMs across a large number of subclients, every day with incrementals and on weekends with a synthetic full.

Definition 2: With another backup set, we back up VMs carrying a specific vCenter tag every Friday. There are about 2,500 VMs. The jobs run as incrementals on Friday; we can't run fulls because the backup takes too long, so we shorten the time by taking incrementals, and a synthetic full runs after the subclient finishes.

Both types of backup use NBD transport; our environment is not suitable for SAN or HotAdd. We use 10 virtual proxies in total.

Since backups under Definition 2 start as incrementals, a VM with a newly added tag automatically converts to a full. If a VM is backed up by both definitions, it continues as incremental.

The config here may be wrong, but this is what I need; what do you recommend?
- As in Definition 1, I have to back up about 1,000 VMs every day.
- As in Definit
Issue with Ingesting events, audit, alerts to splunk - TCP connector - Syslog configuration
We are unable to find the events in Splunk. We have confirmed that the host is reachable from the CommServe, both by hostname and by IP. Is there anything else we should verify? We also tried deploying a syslog server separately (enabling rsyslog); even there, we were unable to see the logmonitor.log entries being sent.
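One way to rule out framing and connectivity issues independently of Commvault is to hand-send a single syslog message over TCP and then search for it in Splunk. A minimal sketch; the server name, port, and RFC 3164-style framing are assumptions to adapt to your receiver's configuration:

```python
import socket

def syslog_frame(pri, timestamp, host, tag, msg):
    """Build an RFC 3164-style syslog line. Newline termination is one
    common TCP framing; some receivers expect octet counting instead."""
    return f"<{pri}>{timestamp} {host} {tag}: {msg}\n"

def send_tcp(server, port, frame):
    """Push one frame to the TCP syslog listener (placeholder target)."""
    with socket.create_connection((server, port), timeout=5) as sock:
        sock.sendall(frame.encode())

# Example (placeholder hostname/port; PRI 134 = facility local0, severity info):
# send_tcp("splunk.example.com", 514,
#          syslog_frame(134, "Oct  5 12:00:00", "commserve01", "cvtest", "hello splunk"))
```

If a hand-sent message like this shows up in Splunk but the Commvault events do not, the problem is on the forwarding side (e.g. the syslog configuration in the CommCell) rather than the network path or the Splunk TCP input.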