Commvault Q&A, release updates, and best practices
Hi, I've noticed something about Index Server ports. A customer reported that one of his Index Servers was not working. He has two Index Server instances: Instance001 uses port 20010 and Instance002 uses port 20000. The Instance001 service didn't start, and I noticed that the Instance002 datacube.exe process was listening on port 20010 (Instance002's datacube.exe was also listening on 20000, which is correct). So I assume the Index Server also uses some dynamic ports. When I stopped Instance002 I was able to start Instance001, then restarted Instance002. After that both instances were running: Instance001 on port 20010 (correct), and Instance002 on 20000 (correct) plus 20020. I didn't find anything about this in the documentation. It looks like the Index Server sometimes needs additional ports and checks whether a port is free; if not, it tries port number + 10, and so on, but I am not sure about that. Is it possible to set up a range of ports for the Index Server? Regards, Lukasz
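The probing behavior observed above (try the configured port; if busy, try port + 10, and so on) can be sketched in a few lines. Note the step of 10 is only an inference from the observation, not documented Commvault behavior:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the TCP port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def next_free_port(base: int, step: int = 10, attempts: int = 10) -> int:
    """Probe base, base+step, base+2*step, ... and return the first free port.
    The step of 10 mirrors the observed 20000 -> 20010 -> 20020 pattern."""
    for i in range(attempts):
        candidate = base + i * step
        if port_is_free(candidate):
            return candidate
    raise RuntimeError(f"no free port found in {attempts} attempts starting at {base}")
```

Running this kind of check before starting a service would tell you in advance which instance is going to collide with which port.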
Hi, I am using the Commvault tool for the first time. I have added my Windows machine to the CommCell and am now trying to install the agent on that machine, but I get the error below: "Failed to install Base Package. Failed to access the remote registry." The machine runs Windows 10, there is no AD (the system is on a workgroup network), and the machine is hosted on the Azure portal. The username is: workgroup\commvault. For more information please see the screenshot. Also, how can I download the agent software from the CommCell and install it manually on Windows and Linux machines?
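"Failed to access the remote registry" push installs commonly come down to the RemoteRegistry service being stopped on the target, or the usual Windows admin ports (TCP 445/135) being blocked, which is typical for workgroup Azure VMs behind a network security group. Exactly which ports Commvault needs should be confirmed against the firewall documentation; as a quick reachability check from the CommServe side, a plain TCP connect test like this sketch can rule out the network layer:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect to host:port.
    True means the port accepted a connection; False means refused,
    filtered, or timed out (e.g. blocked by an Azure NSG)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical target host): check SMB and RPC endpoint mapper.
# for p in (445, 135):
#     print(p, tcp_reachable("10.0.0.5", p))
```

If the ports are reachable but the error persists, check that the RemoteRegistry service is running on the target and that the workgroup account has local administrator rights.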
Good day all. I guess this question is two-fold, and I've not really found a way to do this using a Plan. We're running SQL Plans that back up the databases in the traditional Full/Diff/TLOG format. We have one SQL instance that must not have its TLOGs backed up. My assumption is that it would need to go into its own Plan with the TLOG slider switched off? The second part concerns the same instance: the backups of the various databases shouldn't run at the same time due to their size. Is it possible to configure the instance so that a database does not start backing up until the previous one has completed, cascading in this manner? I understand you could create multiple subclients within the instance and assign different schedules. However, that would potentially mean that if a database finishes sooner than the next one is scheduled to start, part of the backup window goes unused. Conversely, if one schedule overruns into the next, we have two jobs running, which is what we're trying to avoid.
With a tape library of 8 drives, I want to force the tape Aux Copy to a single drive. This is for only one Storage Policy, XYZ (out of 30 or so SPs), and for only a single Storage Policy Copy, ABC (within SP XYZ). There are multiple tape storage pools set up, e.g. POOL-N, POOL-O, POOL-P, … POOL-X; Storage Policy Copy ABC uses POOL-X. I had hoped that setting the Device Streams to 1 would do it, but this still results in multiple drives being used. If I run a manual Aux Copy for Storage Policy XYZ, copy ABC, and set the number of readers to 1, that works fine, as I've limited the stream to 1 'at the source', meaning there can never be more than 1 stream at the target, and therefore 1 tape used. I could create new Aux Copy schedules, change the job options for just those schedules, and apply them only to the Aux Copies where I want to use a single drive, but that would be messy. I could also limit the streams to 1 (combine streams) on the disk source copy that feeds it.
Hi all, this is more a wish for Commvault than a big problem :) Let me try to explain the point: in 02.22 you have a problem and open a support case; the case is escalated to the dev team and you get a diag patch. You are happy that after the installation the backup or restore runs fine. The case is closed and everything is forgotten :) Then, in 05.22, you update the environment to the latest version and the diag patch is overwritten because it wasn't implemented in the new release. What happens? You run into the same problem, open a support case, and get a diag patch again. So my wish is that the installer checks whether a diag patch is installed and fails the update, or at least gives me a hint: "a diag patch is installed; if you install this update now, the diag patch will be gone", or in the best case, "the diag patch will be replaced by hotfix 4567". A trigger like that would be good. Do you have an idea how to prevent updating clients that have diag patches?
Hello the community, since last week I've been trying to download the latest platform release with the latest HPK. When I download from the CommServe I get an error about a .py file that doesn't have the right checksum: [\SP28_4489590_R1081\Windows\ThirdParty\Python\x64\Lib\installpythonmodules.py] has a different checksum than the expected one. I also tried to build a package from the installer on my computer, and Kaspersky detected a virus in a file:
Name: UDS:DangerousObject.Multi.Generic
Precision: Exact
Threat level: High
Object type: File
Object name: CvEdgeMonitor.exe
Object path: ...\Downloads\Commvault_Maintenance_11_28_32_WinX64\BinaryPayload\CvEdgeMonitor.exe.zip
SHA256: A869B4A2B2559E00A2D773D7FA987AC0DBF0653506EA6667791AAC21949919F5
MD5: F0D0133E1735F5F48A9BC00CF6857403
Reason: Cloud Protection
Is anyone else seeing these problems?
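When a download reports a checksum mismatch, it can help to hash the file yourself and compare against the expected value (for example the SHA-256 Kaspersky reported) to distinguish a corrupt download from a false positive. A generic sketch using only the standard library:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks
    so large payloads don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def checksum_matches(path: str, expected_hex: str) -> bool:
    """Case-insensitive comparison against an expected hex digest."""
    return sha256_of(path).lower() == expected_hex.lower()
```

If the locally computed hash differs between two downloads of the same file, the transfer itself is corrupting the payload; if it is stable but flagged, it is more likely an AV false positive to raise with support.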
Hi all, I have a customer whose dashboard doesn't report on a number of fields. Running the reports for each component also returns empty results, as expected. I looked into the possibility of Branch Cache running, as per another post here on a similar issue, but it isn't installed. I staged the CommServe in my lab and reporting is correct there. The customer is on FR 11.21 and the lab version is FR 11.28.xx. I don't think a version upgrade would be the fix, as I believe the issue has been around for some time. Below is the client view: Below is the view in my lab:
Hi all, has anybody worked on bringing the Commvault maglib status and tape media usage status into a Grafana dashboard? Is there any way to showcase the usage trend and capacity reporting based on maglib utilization and publish it in Grafana? If we can pull real-time data from Commvault, we can pretty much show this metric in Grafana. Any leads?
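Commvault does expose data through a REST API (endpoints and response fields vary by version, so treat the field names below as placeholders you would map to the real response). Once the raw capacity figures are in hand, the Grafana side is mostly shaping them into rows a table panel or a Prometheus exporter can consume, e.g.:

```python
def utilization_pct(used_bytes: int, capacity_bytes: int) -> float:
    """Percentage of library capacity in use, guarded against zero capacity."""
    if capacity_bytes <= 0:
        return 0.0
    return round(100.0 * used_bytes / capacity_bytes, 2)

def to_grafana_rows(libraries):
    """Shape a list of {'name', 'usedBytes', 'capacityBytes'} dicts
    (hypothetical field names, not the actual API schema) into rows
    suitable for a Grafana table panel or JSON datasource."""
    return [
        {"library": lib["name"],
         "used_pct": utilization_pct(lib["usedBytes"], lib["capacityBytes"])}
        for lib in libraries
    ]
```

For trending, the same numbers pushed into Prometheus (or any time-series store Grafana reads) at a regular interval would give you the usage-over-time graph.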
Hi all, I wanted to get some opinions on the links visible to the general internet user. I'm getting a few customers asking for access to the Expert links. My understanding (correct me if I'm wrong) is that they're for Commvault partners and vendors only? If that's the case, could the Expert link be removed from a general user's view? It's an awkward conversation to have when a customer feels they should have access to it. When a partner signs in, they'd see the full documentation suite. What are the thoughts on this? Regards, Mauro
Hello, a customer asked whether it would be possible to make all their primary copies (for disk libraries) WORM-protected and what the implications would be. Up until now our standard is n days/1 cycle retention on primary copies and n days/0 cycles retention on a secondary copy. We basically use the 1 cycle as a safety net: if for whatever reason the backup of a client does not run for a long time, they always have one backup available without setting a manual retention on those jobs. Now we have an internal discussion about how retention works with WORM, specifically whether the cycle retention is also relevant for manual deletion of jobs (or of the clients that hold those jobs). For example: a client uses a WORM storage policy with 14 days/1 cycle retention. Data aging will not age out and delete the jobs automatically until both conditions are met. But is it possible to manually delete the jobs on day 15? I would say it is not, because they are still retained by the cycle rule. If that is the case, can someone confirm?
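The automatic aging rule discussed above (a job only ages when BOTH the days criterion and the cycles criterion are satisfied) can be written down as a tiny predicate. This sketch only models data aging; whether WORM additionally blocks a manual delete on day 15 is exactly the open question and is not answered by this logic:

```python
def job_can_age(age_days: int, newer_full_cycles: int,
                retention_days: int, retention_cycles: int) -> bool:
    """A job is eligible for data aging only when it is older than the
    days retention AND enough newer full cycles exist to satisfy the
    cycles retention. Both conditions must hold (logical AND)."""
    return age_days >= retention_days and newer_full_cycles >= retention_cycles
```

With 14 days/1 cycle retention: a job at day 15 with no newer full cycle is still retained (the safety-net case from the post), while a job at day 15 with one newer full cycle is eligible to age.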
Hi, I'm setting up a new Linux (Red Hat) MediaAgent right now and I have a "what would be better" question; maybe someone would like to share experiences :) On this new MediaAgent I plan to create a new disk library. The MediaAgent will have resources available via SAN from an array (several volumes of 8 TB each). Is it better to use LVM on these volumes (create VGs, create LVs, and finally create a filesystem, for example ext4), or to create GPT (parted) partitions and make ext4 directly without VGs and LVs? I am very curious about your opinions on which would be the better solution. Greetings
When MA and VSA binaries are installed on an AWS instance of RHEL 8.7 with kernel version Linux 4.18.0-425.3.1.el8.x86_64, both instances crash with the error: kernel: watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [rmmod:9167]. The message from the OS shows:
cvblk: loading out-of-tree module taints kernel.
cvblk: module license 'CommVault Systems' taints kernel.
Disabling lock debugging due to kernel taint
cvblk: module verification failed: signature and/or required key missing - tainting kernel
As per the Commvault documentation (https://documentation.commvault.com/2022e/expert/3515_block_level_backup_for_unix_system_requirements_01.html), the supported system requirement for block-level backup is Red Hat Enterprise Linux 8.6 with kernel 4.18.0-372. A similar issue was reported earlier for the above kernel version, but that one now seems to be supported. Could you please advise when the 4.18.0-425 kernel version will be supported? Note: kernel 4.18.0-425 works for the CommServe binaries, as that instance did not crash.
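Before rolling the cvblk driver onto a fleet, it can be worth gating the install on the running kernel. A small sketch that compares a `uname -r` string against the documented maximum; the "4.18.0-372" ceiling is taken from the post above and should be re-checked against the current system-requirements page, since support moves with releases:

```python
import re

def kernel_tuple(release: str):
    """Parse the leading 'x.y.z-NNN' from a uname -r string
    (e.g. '4.18.0-425.3.1.el8.x86_64') into a comparable tuple."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)-(\d+)", release)
    if not m:
        raise ValueError(f"unrecognized kernel release string: {release}")
    return tuple(int(g) for g in m.groups())

def block_level_supported(release: str, max_supported: str = "4.18.0-372") -> bool:
    """True if the running kernel is at or below the documented maximum
    for block-level backup. The default ceiling is the RHEL 8.6 kernel
    cited in the documentation at the time of the post."""
    return kernel_tuple(release) <= kernel_tuple(max_supported)
```

On the 4.18.0-425 kernel from the report, this check would flag the host as unsupported before the module load triggers the soft lockup.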
Hi, we have recently acquired a new server and storage as part of our hardware refresh for the Commvault server. The new server has the following:
2x 480 GB SATA SSD configured as RAID 1 (OS installed)
2x 1.6 TB PCIe SSD - still deciding whether to use host-based mirroring or leave them as standalone disks; intended use is for the SQL database, DDB, and index.
The old server has the following disk configuration:
OS - 558 GB - 173 GB used
SQL - 278 GB - 866 MB used
DDB - 418 GB - 25.6 GB used
Commvault V11 SP16 HPK17
Any recommendation for the new server's disk configuration? If I use host-based mirroring, will it impact the server's performance? Which Commvault version should I use, 2022E or 11.26? Thank you in advance.
Hello, we have many VSA backup jobs running and are now planning to change the names of the VMs on the vCenter side. Currently the VMs carry only their short name in vCenter, and that should be changed to the FQDN. My question is whether the change to the FQDN could affect the VSA backups, or whether there is anything we need to consider. Kind regards, Thomas
What is the best practice for updating the snmpd.conf file with the list of processes for the CommServe and Media Server?
Hi, I would like to update the snmpd.conf file with the list of Commvault processes (for the CommServe and for MA & VSA); this is just for monitoring purposes. Is there any documentation on how to set these processes with max and min values in snmpd.conf?
#COMMVAULT CommServe
cvlaunchd
cvd
QSDK
Tomcat
cvmongodb
CvMessageService
CvWorkflowEngine
cvfwd
ClMgrS
CvMountd
WebServerCore
EvMgrS
JobMgr
AppMgrSvc
MediaManager
#COMMVAULT Media Server/VSA
cvlaunchd
cvd
cvfwd
ClMgrS
CvMountd
IndexingService_cleanup
Thanks & Regards, Leena
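For process monitoring, Net-SNMP's own `proc` directive covers this: `proc NAME [MAX [MIN]]` makes snmpd report an error flag in UCD-SNMP-MIB::prTable when the process count falls outside the range, where a MAX of 0 means "no upper limit". A sketch for a few of the processes listed above (the min/max values here are assumptions for illustration; how many instances of each daemon should run depends on your environment):

```
# /etc/snmp/snmpd.conf - process monitoring sketch (Net-SNMP "proc" directive)
# Syntax: proc NAME [MAX [MIN]]; MAX of 0 means no upper limit.
# Status is exposed via UCD-SNMP-MIB::prTable (prNames, prCount, prErrorFlag).

# CommServe
proc cvlaunchd 0 1
proc cvd       0 1
proc cvfwd     0 1
proc JobMgr    0 1

# MediaAgent / VSA
proc CvMountd  0 1
proc ClMgrS    0 1
```

After editing, restart snmpd and walk UCD-SNMP-MIB::prTable to confirm the entries are picked up.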
With all of the cool features coming in our Feature Releases, I thought it would be handy to create a list of best practices for planning and performing a CommServe update to a higher Feature Release. There's definitely a lot of proactive work that can go into your process and save you from potential headaches or even nightmare scenarios!
- Check out the Planning documentation first.
- Plan the timing out in advance - make sure any key stakeholders know the CS will be down during this planned upgrade.
- Do you use any type of CommServe Failover? If so, note that the instructions differ accordingly.
- Upgrading hardware as well? Review the documentation on Hardware Refreshes.
- Suspending jobs - ideally, have no jobs running; some jobs can be suspended and resumed, while others cannot be suspended and must be killed.
- Check and double-check that you have the media downloaded, or, if you are using the CommServe Software Cache, that it is updated with your required Maintenance Release.
I am unable to manually enter the AMI ID in Command Center, whereas I am able to do so from the CommCell Console. I followed the links below to define the additional settings for restoring Marketplace AMI IDs: https://documentation.commvault.com/11.26/expert/145983_enabling_specific_ami_ids_for_restores_and_replication.html and https://documentation.commvault.com/11.26/expert/8681_adding_additional_setting_from_commcell_console.html, and restarted the services, but I am still unable to manually add the AMI ID in the restore options in Command Center. In the CommCell Console it does allow me to enter the AMI ID manually.