Get help with all things HSX
Recently active
I am currently looking to automate some of the tedious tasks we do manually in Commvault. One of these is removing user DBs from backup via Commvault, and deconfiguring a server from backup if there are no active DBs on that server, and vice versa. I started looking at this page here for available APIs I could use in the PowerShell function I am looking to create. I was wondering whether this is doable, since among the APIs I saw on that page I didn’t find one to remove a user DB from backup when it is configured to be backed up. Any guidance with this would be greatly appreciated.
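For context, here is a rough sketch of the sort of calls I have in mind. The endpoint paths, property names, IDs, and host below are my own guesses for illustration and are not confirmed against the Commvault REST API documentation:

```powershell
# Rough sketch only -- endpoint paths, payload shapes, and IDs are assumptions,
# not verified against the Commvault REST API documentation.

$apiBase = "http://commserve.example.com:81/SearchSvc/CVWebService.svc"   # hypothetical host

# 1. Log in and capture the auth token (assuming /Login accepts a base64-encoded password).
$loginBody = @{
    username = "backupadmin"
    password = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("MyPassword"))
} | ConvertTo-Json

$login   = Invoke-RestMethod -Uri "$apiBase/Login" -Method Post -Body $loginBody `
    -ContentType "application/json" -Headers @{ Accept = "application/json" }
$headers = @{ Authtoken = $login.token; Accept = "application/json" }

# 2. Hypothetical call: list subclients for a client, then filter for the user DBs
#    we want to stop protecting (exact endpoint and query parameter need checking).
$subclients = Invoke-RestMethod -Uri "$apiBase/Subclient?clientName=SQLSERVER01" `
    -Method Get -Headers $headers

# 3. Hypothetical call: update the subclient so the user DB is no longer backed up.
#    Whether this is a content update or a "disable backup" flag is exactly what I'm unsure about.
$updateBody = @{
    subClientProperties = @{ commonProperties = @{ enableBackup = $false } }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "$apiBase/Subclient/12345" -Method Post -Body $updateBody `
    -ContentType "application/json" -Headers $headers
```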
We recently received a fresh HSX 2.2212 ISO download from Commvault support here. Whenever we try to mount the ISO and image servers after we have configured them, the installation fails. We have had success installing on 3 of 12 servers with the same configuration and ISO, but the remaining servers always fail, either timing out or failing immediately. We are using iDRAC to mount the ISO and run the install.
Hi Team, one of my customers has a seventeen-node HSX 3.x cluster. They have a 12-drive tape library and plan to connect 6 of the 12 drives to HSX to perform aux copies. Here is my doubt while outlining the solution: do I need to zone all the MAs to the tape library, or only six MAs? If only six, how do I choose which six? Can you share your thoughts/inputs? TIA, Mani
Dear Community, have a nice day. I got a question from the security team and need to know your thoughts: can we install FireEye EDR on HSX nodes, since they are Linux-based servers? Thanks in advance.
Dear Community, I am trying to configure an air gap interval on an HSX cluster from 7 AM to 4 AM the next day, to allow aux copies for 3 hours only. When I run the command “./cvsds -c 7:00 4:00” I get “Failed to get storage services status”. I kindly need your support.
We have several HyperScale X clusters. Has anyone done the OS migration from RHEL to Rocky Linux? https://documentation.commvault.com/2024e/expert/migrating_hyperscale_x_version_on_existing_nodes.html What are your experiences with this?
Hi All, a customer asked me about the size of the Ethernet frames used between two HSX clusters during Auxiliary Copies. From what I understand, the customer is concerned that the frames are too small, and that with certain network components the throughput obtained is not very high if the frames are small. He asked me to put the question to Commvault. As I understand it, this is independent of Commvault, but I'd like to check my understanding with the community. Ethernet has a minimum frame size of 64 bytes and a maximum frame size of 1518 bytes (if jumbo frames are not configured). For Auxiliary Copy jobs between HSX clusters, Commvault uses 128 KB data blocks. These data blocks are handed to the Rocky Linux kernel, which splits them into 1500-byte segments (if jumbo frames are not configured) and sends them to the Ethernet interface driver. Can you please confirm that my understanding is correct? Is there any reason why the Ethernet frames would not be the maximum size? Regards, Luc
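To make the concern concrete, here is the back-of-the-envelope arithmetic I did. Nothing here is Commvault-specific, and the ~40-byte TCP/IP header overhead and 9000-byte jumbo MTU are my own assumptions for illustration:

```powershell
# Sanity check: how many frames does one 128 KB aux copy block become?
$blockSize    = 128KB        # Commvault aux copy data block (131072 bytes)
$payloadStd   = 1500 - 40    # assumed usable payload per frame, standard MTU
$payloadJumbo = 9000 - 40    # assumed usable payload per frame, jumbo MTU

"Frames per 128 KB block, standard MTU: {0}" -f [math]::Ceiling($blockSize / $payloadStd)
"Frames per 128 KB block, jumbo MTU:    {0}" -f [math]::Ceiling($blockSize / $payloadJumbo)
```

Under those assumptions one block becomes roughly 90 standard-size frames versus roughly 15 jumbo frames, which is why I want to confirm whether anything on the Commvault side would prevent the frames from reaching the maximum size.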
Hi all, I’m checking the configuration on an HSX node and I can see that we have an SSD disk (2.9 TB) not in use, an NVMe disk (6 TB) with the DDB and index cache, and a last SSD disk (2.9 TB) for Hedvig metadata. Is it normal to have a disk not in use? Maybe it could be useful in case of a disk fault? Thanks in advance, Vincenzo
Hello team, I have a question about HSX RA disks for boot and metadata. For boot I saw the recommendation to create a RAID 1 volume, but for metadata I didn’t see anything, just 2 dedicated NVMe/SSD disks. I’m trying to find some information; see my questions below. For cache (CVFS/Hedvig), as I understand it we use EC 4+2, so for this disk we don’t need RAID 1 - yes or no? For logs (index cache/DDB), can we use RAID for these disks, or are they just dedicated NVMe/SSD? And one more question: what happens if a metadata disk goes offline or crashes in one or more nodes?
Hi, we are adding an HSX node to a cluster of 4 nodes with 18 TB drives. 18 TB drives are no longer available for Dell servers, so we have to go with 20 TB drives, which according to the documentation will be formatted down to 18 TB usable volumes. Is the requirement for 768 GB RAM based on the usable drive size or the actual size? Are we able to run these servers with 512 GB RAM, since the drives will present 18 TB usable to the storage pool? /Patrik
Is there a command you can run to check the status of all Hedvig nodes in a cluster? In Gluster it was “gluster peer status”. Thanks
Hi, I have a question where the answer might just be trial and error and the worst-case scenario is performance loss, but other teams experienced a crash of their ESX systems after modifying this setting, so I thought I would check here first. Due to a legal obligation to take measures to save energy (in a data center), we are required to set servers to so-called balanced mode where possible. We have a HyperScale environment with 3 Dell PowerEdge R740xd servers. If we enable the Power Factor Correction (PFC) setting on all 3 servers, could that possibly cause problems, also in combination with Commvault? Regards, Danny
Can we use a HyperScale X node as an access node for Hyper-V backups? I don’t see it in the list of access nodes.
Are there any rough timelines for when the HyperScale X Reference Architecture will support the HPE Alletra Storage Server 4120? The HPE Alletra Storage Server 4120 is HPE’s next generation of the currently supported Apollo 4200 Gen 10 Plus model. https://documentation.commvault.com/2023e/essential/design_specifications_for_hyperscale_x_reference_architecture.html Thanks