Hyperscale-X node deployment: Inappropriate ioctl for device

Userlevel 1
Badge +4


I’m trying to set up my first HS-X node on an HPE Apollo 4200 server. The server has 2 SSDs for the OS (configured as a hardware RAID 1), 2 SSDs for metadata, and 24 disks for data. I downloaded the HS-X media ISO 2.2 from 2021-08 (which seems to be the latest one).

First, I’m not able to use the “Multi Node” option on the first install screen; I guess that is because networking is not configured yet.

I then go through the config, select all the disks according to their purpose, and start the installation. After reaching 100% it shows that the installation failed with “cannot set terminal process group (-1): Inappropriate ioctl for device”. I can see that the ESP, boot, and LVM partitions were created on /dev/sdc (the RAID 1), that the LVM logical volumes were created on sdc, and that the OS was installed.

If I boot the installer ISO again, it tells me that an existing HS-X installation is detected. But when I try to boot from the OS RAID 1 device, it does not boot. So I guess that the GRUB boot manager was not installed correctly, and that this is what the error shown before was about.

Are there any known issues with HPE servers, or anything that needs to be done in addition? The documentation for Reference Architecture installs is rather thin, and I did not find dedicated install documentation for HS-X on HPE.

We will do the final install together with Commvault in a few weeks, but I need to run some tests beforehand, especially regarding the network configuration.




Best answer by pirx 7 May 2022, 09:17


6 replies

Userlevel 7
Badge +23

Hi @pirx , and thanks for the post!

I was able to find some information from previous incidents with this issue (the article refers to a re-image, but the effect looks to be the same).

I’m pasting the entire resolution as it was written so I don’t miss any details.  Normally, I’d be hesitant about deleting anything, but in this case, you are just testing.


In Reference Architecture installs, the OS logical volume sits on a volume group spanning two physical volumes. Because only one of the OS drives was replaced, the remaining drive still had an existing volume group configured, which caused the install failure. If this error occurs, use the following steps to remove the existing volume group and re-run the installer. Development is working on resolving this issue in a future version of the installer.

NOTE: These steps should only be followed when the node is being refreshed and there is no data on the OS volumes that needs to be recovered! Additionally, this only applies to Reference Architecture; these steps will not work for the HyperScale Appliance.

1. Use the vgdisplay command to list volumes groups on the system:

# vgdisplay
--- Volume group ---
VG Name               raidvg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  5
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                4
Open LV               4
Max PV                0
Cur PV                2
Act PV                2
VG Size               299.06 GiB
PE Size               4.00 MiB
Total PE              76560
Alloc PE / Size       53408 / 208.62 GiB
Free  PE / Size       23152 / <90.44 GiB

2. The raidvg is the volume group for the OS, and will need to be removed. Before doing so, confirm which PVs are part of the Volume Group using pvdisplay. NOTE: The metadatavg is for the DDB and index cache volumes, do not make any changes to this volume group!

# pvdisplay
--- Physical volume ---
PV Name               /dev/sdb
VG Name               raidvg
PV Size               150.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              38399
Free PE               19106
Allocated PE          19293
PV UUID               XlL2Q0-j4aI-y1iO-JtbV-hlno-afni-sfGwqs

3. Remove the volume group using the force option:

vgremove -ff raidvg

4. Remove the associated physical volumes using the following command:

pvremove -ff <PV Name>

5. Reboot the server and restart the HyperScale installer, which should complete successfully.
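Taken together, steps 1–5 can be sketched as a small shell script. This is a dry-run sketch only: it prints the destructive commands instead of running them, so you can review each one before executing it by hand. The volume group name `raidvg` and the PV `/dev/sdb` come from the example output above; substitute whatever `vgdisplay` and `pvdisplay` report on your node, and never touch `metadatavg`.

```shell
#!/bin/sh
# Dry-run sketch of the OS volume-group cleanup (Reference Architecture only).
# Prints the commands it would run; execute them manually once you are sure
# there is no data on the OS volumes that needs to be recovered.

hsx_vg_cleanup_dryrun() {
    vg="raidvg"            # OS volume group from `vgdisplay` -- NOT metadatavg

    # Step 3: force-remove the OS volume group
    echo "vgremove -ff $vg"

    # Step 4: force-remove each PV that belonged to the VG
    # (PV names taken from `pvdisplay`; /dev/sdb is the example above)
    for pv in /dev/sdb; do
        echo "pvremove -ff $pv"
    done

    # Step 5: reboot so the installer starts against clean disks
    echo "reboot"
}

hsx_vg_cleanup_dryrun
```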

Userlevel 1
Badge +4

I can try it, but it doesn’t really match my setup: no disk was replaced, everything is fresh and new. And it happened on all 3 new nodes I tested this on. Where can I find a more detailed log after the install? I only found a file in /tmp… that contained the same error that is displayed after the install.

Userlevel 7
Badge +23

Let me see if I can get someone internal to reply and advise.

Userlevel 7
Badge +23

I already heard back from the manager of Hyperscale support himself!

He looked at this and said it’s definitely too complex an issue to be solved over community replies and he suggests you open a support case.

Can you open an incident and share the number here?

Userlevel 1
Badge +4


The problem is solved now. It was pretty easy in the end: the md5sum of the ISO was not correct, the ISO was corrupt. It was fine on the computer I downloaded it to, but something happened during the copy to the network share from where I mounted it in iLO.
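For anyone hitting the same thing: checking the checksum at each hop (download machine, network share, the path iLO mounts) catches this kind of silent corruption before you waste an install attempt. A minimal sketch, assuming the vendor publishes an MD5 for the ISO; the filename and hash you pass in are your own values:

```shell
#!/bin/sh
# Verify an ISO against its published MD5 before mounting it in iLO.
# Usage: verify_iso /path/to/hsx-2.2.iso <expected-md5>

verify_iso() {
    iso="$1"
    expected="$2"
    actual=$(md5sum "$iso" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $iso"
    else
        echo "MISMATCH: $iso (got $actual, expected $expected)"
        return 1
    fi
}
```

Running this once on the download machine and again against the copy on the network share would have flagged the corruption immediately.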


Sorry for the confusion.

Userlevel 7
Badge +23

I figured that was a long shot, but it ended up being the issue.  Should have known better 🤣

Glad to hear it was a simple fix.