Issue
-
There is a known issue with Red Hat Gluster where the capacity is reported incorrectly
-
This is due to an incorrect value of the Gluster parameter known as shared-brick-count, which records how many bricks share a file system and determines how the disk space is divided between them (within the Gluster)
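For context, the value is stored in the brick volume files under /var/lib/glusterd/vols/(storage pool name). An affected brick shows an entry similar to the following, where the value 3 is purely illustrative; any value other than 1 causes the reported size to be divided and therefore incorrect:
-
option shared-brick-count 3
-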
Methods:
-
Method 1 - if you are on one of the following Service Pack & Hot Fix Pack combinations
-
SP15 & Hot Fix Pack 35
-
SP16 & Hot Fix Pack 27
-
SP17 & Hot Fix Pack 12
-
Method 2 - if you are on a Service Pack & Hot Fix Pack combination prior to the above
IMPORTANT NOTE - this only needs to be done once and will be applied across the Gluster; no further remediation is required.
Method 1
-
Commvault introduced an additional feature under /opt/commvault/MediaAgent which generates the shared-brick-count script in /usr/lib64/glusterfs/3.12.2/filter/ on all the nodes
-
Navigate to the following location
-
# cd /opt/commvault/MediaAgent
-
Execute the following script
-
# ./cvavahi.py fix_gluster_volsize
-
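Optionally, confirm on each node that the script was generated (the path assumes the Gluster 3.12.2 layout used elsewhere in this article):
-
# ls -l /usr/lib64/glusterfs/3.12.2/filter/
-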
Then execute the following command (only on one node) and this should resolve the size mismatch
-
# gluster volume set (storage pool name) cluster.min-free-inodes 5%
-
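For example, if the storage pool is named HyperScale_Pool (as in the verification step below), the command would be:
-
# gluster volume set HyperScale_Pool cluster.min-free-inodes 5%
-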
On each of the HyperScale Nodes, log in and verify all the Bricks have "shared-brick-count" set to "1" using the following command
-
# grep -r "shared-brick-count" /var/lib/glusterd/vols/HyperScale_Pool
-
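Optionally, the following variation lists only the bricks whose value is not 1 (assuming the standard 'option shared-brick-count N' line format in the volume files); it should produce no output once the fix has taken effect:
-
# grep -r "shared-brick-count" /var/lib/glusterd/vols/HyperScale_Pool | awk '$NF != 1'
-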
Then re-run df -h and confirm that the volume shows the correct size.
Method 2
Do the following on all Nodes
-
Navigate to the following location
-
# cd /usr/lib64/glusterfs/3.12.2/filter/
-
Note - You may need to create the 'filter' folder if it doesn't already exist; run the mkdir below from /usr/lib64/glusterfs/3.12.2/ (an alternative one-step command is shown after this step).
-
Use the following command
-
# mkdir filter
-
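Alternatively, assuming the same Gluster 3.12.2 path, the folder can be created in one step regardless of the current directory:
-
# mkdir -p /usr/lib64/glusterfs/3.12.2/filter
-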
Inside the 'filter' folder, create a file named "shared-brick-count.sh" and paste the following content into it:
-
# vi shared-brick-count.sh
-
Input the following content:
#!/bin/bash
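# Gluster passes the path of a generated brick volume file as $1; force its shared-brick-count option to 1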
sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"
-
Give execute permissions to the shared-brick-count.sh file on all nodes by executing the following command
-
# chmod 755 shared-brick-count.sh
-
Do this on one node only in the Gluster
-
Run the following command to force Gluster to update the volume.
-
# gluster volume set HyperScale_Pool cluster.min-free-inodes 5%
-
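Optionally, confirm the option was applied (this assumes your Gluster release supports 'gluster volume get', which GlusterFS 3.12.2 does):
-
# gluster volume get HyperScale_Pool cluster.min-free-inodes
-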
Verify the content
-
On each of the HyperScale Nodes, log in and verify all the Bricks have "shared-brick-count" set to "1" using the following command
-
# grep -r "shared-brick-count" /var/lib/glusterd/vols/HyperScale_Pool
-
Then re-run df -h and confirm that the volume shows the correct size.