Get help with all things HSX
Recently active
Does Commvault support 100 GbE, 200 GbE, and 400 GbE networks for the HyperScale X reference architecture and the CommServe? On a 400G network, what is the maximum backup speed achievable? Can we reach 40 GB/sec? Would using multi-streaming or multiplexing across the mount points help in achieving this speed?
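For rough sizing, the arithmetic behind a 40 GB/sec target looks like this; a back-of-the-envelope sketch only, where the per-stream rate is an illustrative assumption rather than a Commvault figure:

```python
# Back-of-the-envelope throughput sizing for a 400 GbE link (illustrative only).
link_gbit = 400                      # nominal line rate, gigabits per second
line_rate_gb_per_s = link_gbit / 8   # = 50 GB/s theoretical ceiling, before protocol overhead

target_gb_per_s = 40                 # desired aggregate backup throughput
per_stream_gb_per_s = 1.0            # assumed sustained rate of one backup stream (hypothetical)

streams_needed = target_gb_per_s / per_stream_gb_per_s
print(f"Theoretical line rate: {line_rate_gb_per_s:.0f} GB/s")
print(f"Streams needed at {per_stream_gb_per_s} GB/s each: {streams_needed:.0f}")
```

The point of the sketch: 40 GB/s is 80% of the theoretical 50 GB/s line rate, so a single stream will not get there on its own; whether multi-streaming across mount points closes the gap depends on how many streams the nodes and disks can sustain in parallel.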
After shifting from HyperScale 1.5 to HyperScale X, our DB2 restores are taking longer. We opened a ticket with support; they reviewed it and found it is a known issue where read fragmentation on CVFS causes throughput drops, so they requested a CVFS upgrade. We upgraded the CVFS version along with an OS patch upgrade, but the issue was not resolved and the ticket has been moved to development. Has anyone faced a similar issue reading data from HyperScale X? I ran the mount path validation to check throughput and got 70 MB/s read throughput, and the same figure when I ran the CVPerformance tool.
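One thing that might help narrow it down while the ticket is with development: a plain sequential read from the mount path, outside the Commvault tools, shows whether ~70 MB/s is what the filesystem itself delivers. This is only a generic sketch, not a replacement for mount path validation, and the file path is a placeholder:

```python
import time

# Placeholder path: point this at a large, previously written file on the CVFS mount path.
path = "/path/to/mountpath/largefile"
chunk_size = 1024 * 1024  # read in 1 MiB chunks

start = time.time()
total_bytes = 0
with open(path, "rb", buffering=0) as f:
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.time() - start

# Note: a recently written file may be served from page cache, inflating the result.
print(f"Read {total_bytes / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"-> {total_bytes / 2**20 / elapsed:.0f} MB/s")
```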
Does this option also work for HyperScale X?
Hello all, can you please help with this issue? The A and PTR records exist. Thanks, Pavel
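While waiting for replies, a quick way to confirm that the forward (A) and reverse (PTR) lookups agree from a node's point of view; this is a generic DNS check, not an HSX-specific tool, and the hostname is a placeholder:

```python
import socket

hostname = "node1.example.com"   # placeholder: use the node's FQDN

# Forward lookup (A record): hostname -> IP address.
ip = socket.gethostbyname(hostname)
print(f"A record:   {hostname} -> {ip}")

# Reverse lookup (PTR record): IP address -> hostname.
ptr_name, _, _ = socket.gethostbyaddr(ip)
print(f"PTR record: {ip} -> {ptr_name}")

# Name resolution issues often come from the two directions disagreeing.
if ptr_name.rstrip(".").lower() != hostname.rstrip(".").lower():
    print("Warning: forward and reverse lookups do not match.")
```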
Hi, I am currently going through the documentation to plan the update, but some points are not clear to me: https://documentation.commvault.com/2023e/expert/migrating_hyperscale_x_version_on_existing_nodes.html Does this document apply to both HyperScale Appliance and Reference Architecture? The migration process migrates and reboots each node in the cluster from RHEL 7 to Rocky Linux 8 sequentially, and may take several hours to complete. Can I continue to perform backups while the cluster is being updated? (I'm thinking of the database log backups.) We have a 6-node cluster; does anyone have experience with the duration? Thanks, Michael
I am currently looking to automate some of the tedious tasks we do manually in Commvault. One of these is removing user DBs from being backed up via Commvault, and deconfiguring a server from being backed up if there are no active DBs on that server (and vice versa). I started looking at this page here for the available APIs that could help with the PowerShell function I am looking to create. I was wondering whether this is doable, as among the APIs I saw on that page I didn't see one to remove a user DB from backup if it is configured to be backed up. Any guidance with this would be greatly appreciated.
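A rough sketch of what the automation flow could look like against the Commvault REST API. The endpoint paths, parameters, and response fields below are assumptions used only to illustrate the sequence (authenticate, enumerate a client's database instances, then disable backup or retire the client); take the exact resources from the REST API documentation, and the same calls translate directly to Invoke-RestMethod in PowerShell:

```python
import base64
import requests

# Placeholder Command Center / web server URL.
BASE = "https://webserver.example.com/webconsole/api"


def login(user: str, password: str) -> str:
    """Authenticate and return the Authtoken used on subsequent calls.
    The /Login endpoint and base64-encoded password follow the documented
    REST API pattern, but verify against your Commvault version."""
    body = {"username": user, "password": base64.b64encode(password.encode()).decode()}
    resp = requests.post(f"{BASE}/Login", json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]


def list_db_instances(token: str, client_name: str) -> list:
    """List database instances on a client (endpoint and response shape are assumptions)."""
    headers = {"Authtoken": token, "Accept": "application/json"}
    resp = requests.get(f"{BASE}/Instance", params={"clientName": client_name},
                        headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("instanceProperties", [])


# Removing a user DB from backup or deconfiguring an idle client would follow the
# same pattern: a POST/PUT against the instance or client resource with the
# relevant "enable backup" / retire properties, looped over the servers you manage.
```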
Hi Team, one of my customers has a seventeen-node HSX 3.x cluster. They have a 12-drive tape library and are planning to connect 6 of the 12 drives to HSX to perform aux copies. Here is my doubt while outlining the solution: do I need to zone all the MediaAgents with the tape library, or only six MAs? If only six, how do I choose which six? Can you share your thoughts/inputs? TIA, Mani
We recently received a fresh HSX 2.2212 ISO download from Commvault Support here. Whenever we try to mount the ISO and image the servers after we have configured them, the installation fails. We have had some success installing on 3 of 12 servers with the same configuration and ISO, but the remaining servers always fail, either timing out or failing immediately. We are using iDRAC to mount the ISO and run the install.
Dear Community, have a nice day. I got a question from our security team and I need to know your thoughts: can we install FireEye EDR on HSX nodes, since they are Linux-based servers? Thanks in advance.
Dear Community, I am trying to configure the air gap interval on an HSX cluster from 7 AM to 4 AM the next day, to allow aux copies for 3 hours only. When I run the command “./cvsds -c 7:00 4:00” I get “Failed to get storage services status”. Kindly need your support.
Hi All, a customer asked me about the size of the Ethernet frames used between two HSX clusters during Auxiliary Copies. From what I understand, the customer is concerned that the frames are too small, and that with certain network components the throughput obtained is not very high if the frames are small. He has asked me to put the question to Commvault. As I understand it, this is independent of Commvault, but I'd like to check my understanding with the community. Ethernet has a minimum frame size of 64 bytes and a maximum frame size of 1518 bytes (if jumbo frames are not configured). For Auxiliary Copy jobs between HSX clusters, Commvault uses 128 KB data blocks. These data blocks are handed to the Rocky Linux kernel, which splits them into segments that fit the 1500-byte MTU (if jumbo frames are not configured) and sends them to the Ethernet interface driver. Can you please confirm my understanding is correct? Is there any reason why the Ethernet frames would not be the maximum size? Regards, Luc
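For what it's worth, the segmentation arithmetic behind the question, using the 128 KB block size from the post and standard TCP/IP header sizes; a simple illustration, not a statement about what Commvault does on the wire:

```python
# How a 128 KB block handed to the kernel maps onto TCP segments / Ethernet frames.
block_bytes = 128 * 1024            # application-level data block

mss_standard = 1500 - 40            # 1500-byte MTU minus 20 B IP + 20 B TCP headers
mss_jumbo = 9000 - 40               # 9000-byte jumbo MTU minus the same headers

frames_standard = -(-block_bytes // mss_standard)   # ceiling division
frames_jumbo = -(-block_bytes // mss_jumbo)

print(f"128 KB block -> {frames_standard} full-size frames at MTU 1500")   # ~90
print(f"128 KB block -> {frames_jumbo} full-size frames at MTU 9000")      # ~15
```

The kernel generally fills each segment up to the MSS when enough data is queued, so with 128 KB blocks the frames carrying bulk data should already be close to the maximum size for the configured MTU; the practical question is usually whether jumbo frames are enabled end to end.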
Hi all, I'm checking the configuration on an HSX node and I can see that we have one SSD disk (2.9 TB) not in use, an NVMe disk (6 TB) with the DDB and index cache, and a last SSD disk (2.9 TB) for Hedvig metadata. Is it normal to have a disk not in use? Maybe it could be useful in case of a disk fault? Thanks in advance, Vincenzo
Is there a command you can run to check the status of all Hedvig nodes in a cluster? In Gluster it was “gluster peer status”. Thanks
Hi, I have a question where the answer might just be trial and error, and the worst-case scenario is performance loss. But other teams experienced crashes of their ESX systems after modifying this setting, so I thought I would first check here. Due to a legal obligation to take measures to save energy (in a data center), we are obliged to set servers to so-called balanced mode where possible. We have a HyperScale environment with 3 Dell PowerEdge R740xd servers. If we enable the Power Factor Correction (PFC) setting on all 3 servers, could that possibly cause problems, also in combination with Commvault? Regards, Danny