Hi @pirx , and thanks for the post!
I was able to find some information from previous incidents with this issue (the article refers to a re-image, but the effect looks to be the same).
I’m pasting the entire resolution as it was written so I don’t miss any details. Normally, I’d be hesitant about deleting anything, but in this case, you are just testing.
Resolution
In Reference Architecture installs, the OS logical volume sits on a volume group that spans two physical volumes. Because only one of the OS drives was replaced, the remaining drive still carried the existing volume group configuration, which caused the install failure. If this error occurs, use the following steps to remove the existing volume group and re-run the installer. Development is working on resolving this issue in a future version of the installer.
NOTE: These steps should only be followed when the node is being refreshed and there is no data on the OS volumes that needs to be recovered! Additionally, this only applies to Reference Architecture; these steps will not work for the HyperScale Appliance.
1. Use the vgdisplay command to list volume groups on the system:
# vgdisplay
--- Volume group ---
VG Name raidvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 2
Act PV 2
VG Size 299.06 GiB
PE Size 4.00 MiB
Total PE 76560
Alloc PE / Size 53408 / 208.62 GiB
Free PE / Size 23152 / <90.44 GiB
2. The raidvg is the volume group for the OS and will need to be removed. Before doing so, confirm which PVs are part of the volume group using pvdisplay. NOTE: metadatavg holds the DDB and index cache volumes; do not make any changes to that volume group!
# pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name raidvg
PV Size 150.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 38399
Free PE 19106
Allocated PE 19293
PV UUID XlL2Q0-j4aI-y1iO-JtbV-hlno-afni-sfGwqs
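A quick way to double-check the PV-to-VG mapping in one view before deleting anything is the stock lvm2 reporting command below (this is a general LVM suggestion, not part of the pasted article; nothing HyperScale-specific is assumed):

# pvs -o pv_name,vg_name

Every device listed with raidvg in the VG column is a candidate for step 4; anything listed under metadatavg must be left untouched.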
3. Remove the volume group using the force option:
# vgremove -ff raidvg
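To confirm the group is actually gone before moving on, a cheap sanity check (again just stock lvm2, offered as a suggestion rather than part of the original resolution) is:

# vgs

raidvg should no longer appear in the output, while metadatavg should still be listed.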
4. Remove the associated physical volumes with the following command, repeating it for each PV that was part of raidvg:
# pvremove -ff <PV Name>
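As a worked example based on the pvdisplay output above, where /dev/sdb was the PV shown (the second OS PV will have a different device name on your system, so check your own pvdisplay output rather than copying this verbatim):

# pvremove -ff /dev/sdb
# pvs

After both OS PVs are removed, pvs should only list the devices belonging to metadatavg.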
5. Reboot the server and restart the HyperScale installer, which should complete successfully.
I can try it, but it doesn’t really match my setup: no disk was replaced, everything is fresh and new. It happened on all 3 new nodes I tested this on. Where can I find a more detailed log after the install? I only found a file in /tmp… that contained the same error that is displayed after the install.
Let me see if I can get someone internal to reply and advise.
I already heard back from the manager of Hyperscale support himself!
He looked at this and said it’s definitely too complex an issue to be solved over community replies, and he suggests you open a support case.
Can you open an incident and share the number here?
The problem has been solved. It was pretty easy in the end: the md5sum of the ISO was not correct, the ISO was corrupt. It was fine on the computer I downloaded it to, but something happened during the copy to the network share from which I mounted it in iLO.
Sorry for the confusion.
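In case anyone else runs into this, comparing checksums at both ends catches it immediately. The filename and share path below are just placeholders for wherever your ISO lives:

# md5sum hyperscale.iso
# md5sum /mnt/share/hyperscale.iso

The two sums should match each other and the checksum published on the download page; any mismatch means the copy is corrupt.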
I figured that was a long shot, but it ended up being the issue. Should have known better 
Glad to hear it was a simple fix.