
[Media write operation failure] [Dedupe disk media] [The request could not be performed because of an I/O device error.]

Hello Faisal,


Could you please share the output of the df -kh command? Our goal is to check disk and storage pool utilization. If utilization is above 85%, I/O errors are expected in such cases, and the recommended solution is to reduce utilization by running a level 4 space reclamation job.

Additionally, there may be other contributing factors that should be reviewed. For a more thorough investigation, I would recommend raising a case with Commvault Support to address any potential underlying issues.
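A quick way to spot filesystems above the 85% threshold mentioned above is a one-liner over the df output (a generic sketch; the threshold variable and filtering are illustrative, not a Commvault tool):

```shell
# Flag any mount whose utilization exceeds the threshold (illustrative value).
THRESHOLD=85
df -kh | awk -v t="$THRESHOLD" 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 > t) print $6, $5 "%" }'
```

Any mount point it prints is a candidate for cleanup or space reclamation.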



[root@dc-cv-ma-01 ~]# df -HT
Filesystem                                  Type      Size  Used Avail Use% Mounted on
devtmpfs                                    devtmpfs  271G     0  271G   0% /dev
tmpfs                                       tmpfs     271G     0  271G   0% /dev/shm
tmpfs                                       tmpfs     271G  2.4M  271G   1% /run
tmpfs                                       tmpfs     271G     0  271G   0% /sys/fs/cgroup
/dev/mapper/raidvg-root                     xfs       144G  9.8G  134G   7% /
/dev/mapper/metadatavg-ddb                  xfs       2.0T   21G  1.9T   2% /ws/ddb
/dev/mapper/raidvg-var                      xfs        24G  946M   23G   4% /var
/dev/mapper/hedvigmetavg-hedvighpoddata     xfs        38G  297M   38G   1% /hedvig/hpod/data
/dev/sdp2                                   xfs       995M  379M  616M  39% /boot
/dev/mapper/raidvg-var_log                  xfs        72G   15G   58G  21% /var/log
/dev/mapper/raidvg-opt                      xfs       192G  9.9G  182G   6% /opt
/dev/mapper/hedvigmetavg-hedvighpodlog      xfs        38G  327M   38G   1% /hedvig/hpod/log
/dev/mapper/hedvigmetavg-mntd2              xfs        77G  701M   76G   1% /mnt/d2
/dev/mapper/hedvigmetavg-hedvigd2           xfs       322G  9.9G  313G   4% /hedvig/d2
/dev/mapper/hedvigmetavg-mntd4              xfs       1.1T  7.6G  1.1T   1% /mnt/d4
/dev/sdp1                                   vfat      998M  2.3M  996M   1% /boot/efi
/dev/mapper/hedvigmetavg-flachemetadata     xfs        38G  296M   38G   1% /flache/metadatadir
/dev/mapper/hedvigmetavg-mntd3              xfs       1.1T  7.7G  1.1T   1% /mnt/d3
/dev/mapper/hedvigmetavg-mntf1              xfs        38G  297M   38G   1% /mnt/f1
/dev/mapper/hedvigmetavg-mntd5              xfs       1.1T  7.7G  1.1T   1% /mnt/d5
/dev/mapper/metadatavg-index                xfs       1.8T   15G  1.8T   1% /opt/commvault/MediaAgent64/IndexCache
/dev/sdc                                    xfs       8.0T  185G  7.9T   3% /hedvig/d5
/dev/sdg                                    xfs       8.0T  185G  7.9T   3% /hedvig/d9
/dev/sde                                    xfs       8.0T  184G  7.9T   3% /hedvig/d7
/dev/sdd                                    xfs       8.0T  185G  7.9T   3% /hedvig/d6
/dev/sdm                                    xfs       8.0T  178G  7.9T   3% /hedvig/d14
/dev/sdf                                    xfs       8.0T  184G  7.9T   3% /hedvig/d8
/dev/sdb                                    xfs       8.0T  182G  7.9T   3% /hedvig/d4
/dev/sdl                                    xfs       8.0T  186G  7.9T   3% /hedvig/d13
/dev/sdh                                    xfs       8.0T  185G  7.9T   3% /hedvig/d10
/dev/sdk                                    xfs       8.0T  184G  7.9T   3% /hedvig/d12
/dev/sda                                    xfs       8.0T  190G  7.9T   3% /hedvig/d3
/dev/sdj                                    xfs       8.0T  183G  7.9T   3% /hedvig/d11
127.0.0.1:/exports/CVLTBackupCVStoragePool5 nfs4      169T  2.9T  166T   2% /ws/hedvig/CVLTBackupCVStoragePool5
127.0.0.1:/exports/CVLTBackupCVStoragePool5 nfs4      169T  2.9T  166T   2% /ws/hedvig/CVLTBackupCVStoragePool5-r
tmpfs                                       tmpfs      55G     0   55G   0% /run/user/0


Based on the provided information, it appears this may be a newly deployed node. If that is the case, please verify whether the system is running Rocky Linux. If so, check the status of the firewall services; if they are active, disable them and then re-test the behavior.

If the issue persists after disabling the firewall, I recommend raising a case with Commvault Support for further investigation.
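On a systemd-based Rocky Linux host, the firewall check described above typically looks like this (a sketch of the suggested steps; re-test the backup job afterwards):

```shell
# If firewalld is running, stop it and prevent it from starting at boot.
if systemctl is-active --quiet firewalld; then
    systemctl stop firewalld
    systemctl disable firewalld
fi
systemctl is-enabled firewalld   # expected to report "disabled" afterwards
```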


The system is running Rocky Linux. It is a newly deployed HyperScale X, but it had been working fine for the previous two months.


Hi @Sk Faisal,


Please run the command below and share its output:


/usr/local/hedvig/scripts/showmembers.exp


root@dc-cv-ma-01.dwasa.org.bd's password:
Last login: Thu Jul  3 14:58:01 2025
[root@dc-cv-ma-01 ~]# /usr/local/hedvig/scripts/showmembers.exp
spawn /usr/local/hedvig/scripts/authorize-cli.sh
OpenJDK 64-Bit Server VM warning: Options -Xverify:none and -noverify were deprecated in JDK 13 and will likely be removed in a future release.
Welcome to Hedvig Duro CLI.

Type 'help' or '?' for help. Type 'quit' or 'exit' to quit.
hedvigduro> connect -h dc-cv-ma-01.dwasa.org.bd -p 7000
hedvigduro> showmembers
LIVE MEMBERS:3
dc-cv-ma-01.dwasa.org.bd:7000
dc-cv-ma-02.dwasa.org.bd:7000
dc-cv-ma-03.dwasa.org.bd:7000

UNREACHABLE MEMBERS:3
dc-cv-ma-01.dwasa.org.bd:7010
dc-cv-ma-02.dwasa.org.bd:7010
dc-cv-ma-03.dwasa.org.bd:7010

Total members that form this cluster: 6
hedvigduro> exit


Problem solved using the procedure below:

# stop commvault
# export HV_PUBKEY=1
# hv_deploy
# show_all_clusters
# login_to_cluster <cluster name>
# stop_cluster
# exit

# export HV_PUBKEY=1
# hv_deploy
# show_all_clusters
# login_to_cluster <cluster name>
# start_cluster
# exit

# start commvault
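Once the cluster and Commvault services are back up, membership can be re-verified with the same script used earlier; the endpoints previously listed as UNREACHABLE (the :7010 members) would be expected to report as LIVE:

```shell
# Re-check cluster membership after the restart (same script as above).
/usr/local/hedvig/scripts/showmembers.exp
```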


