Question

DDB verification fails with NFS (DataServer-IP) share on Linux MA

  • May 5, 2025
  • 5 replies
  • 130 views


Hi,

 

We have a full Linux CommCell (RHEL 9.5) on v11.36 with a 2-node grid MA and a partitioned DDB.

The firewall is enabled, and ports 2049, 2050, and 111 are open.
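For reference, the ports were opened with firewalld along these lines (just a sketch; the zone name "public" is assumed):

# Open the NFS-related ports in the active firewalld zone
firewall-cmd --permanent --zone=public --add-port=2049/tcp --add-port=2050/tcp
firewall-cmd --permanent --zone=public --add-port=111/tcp --add-port=111/udp
# Apply the permanent rules to the running firewall
firewall-cmd --reload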

When I start a DDB verification, it runs to 50% successfully (the local verification), then goes pending when it tries to validate the network path on the other node over NFS.

If I stop the firewall on the MA, the DDB verification finishes successfully:

BEFORE (firewall enabled):


20679 51f2 04/23 17:43:04 #### initiateSession: 3dfs server @ [MA002] started successfully!!
20679 51f2 04/23 17:43:04 #### initiateSession: Share MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC exported successfully as /ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC_c1e0710d-b14e-46aa-b840-2e833e4a4d60_21 for session [22]
20679 51f2 04/23 17:43:05 #### waitForNfsReply: Error: nfs_service failed for Session [22], revents: 0x1C err: -1->rpc_service: socket error No route to host(113).
20679 51f2 04/23 17:43:05 #### didMountSucceed: Failed to mount MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC fd:24,port:51880 error- rpc_service: socket error No route to host(113). - will retry:0
20679 51f2 04/23 17:43:07 #### waitForNfsReply: Error: nfs_service failed for Session [22], revents: 0x1C err: -1->rpc_service: socket error No route to host(113).
20679 51f2 04/23 17:43:07 #### didMountSucceed: Failed to mount MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC fd:24,port:51884 error- rpc_service: socket error No route to host(113). - will retry:1
20679 51f2 04/23 17:43:08 #### waitForNfsReply: Error: nfs_service failed for Session [22], revents: 0x1C err: -1->rpc_service: socket error No route to host(113).
20679 51f2 04/23 17:43:08 #### didMountSucceed: Failed to mount MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC fd:24,port:51896 error- rpc_service: socket error No route to host(113). - will retry:2
20679 51f2 04/23 17:43:08 #### doNfsMount: Failed to mount MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC for session[22], fd:24,port:51896 error- rpc_service: socket error No route to host(113).
20679 51f2 04/23 17:43:08 #### stopSession: Stop Session [22]:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC_c1e0710d-b14e-46aa-b840-2e833e4a4d60_21
20679 51f2 04/23 17:43:08 #### dumpSessionInfo: Session [22]: Files: 0, WorkQ: 0, Pending: 0, Total: 0, Cmplt: 0, Cnctd:False, FrcLgt:False, MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC
20679 51f2 04/23 17:43:08 #### ~CNfsSession: Unexport MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC_c1e0710d-b14e-46aa-b840-2e833e4a4d60_21 for session [22]
20679 51f2 04/23 17:43:08 2077 NfsFile: Failed to add file 0x7f1c881bf4a0:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC to session, Err: 0xEC02CD19:{DNFS::CNfsSessionPool::addNfsFiletoSession(360)} + {DNFS::CNfsSession::initiateSession(1024)} + {DNFS::CNfsSession::doNfsMount(979)/MM.52505-Failed to mount MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC for session[22], fd:24,port:51896 error- rpc_service: socket error No route to host(113).}

 

AFTER (firewall stopped):


21650 54bb 04/23 17:51:32 #### initiateSession: Share MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC exported successfully as /ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC_541c6557-2dd9-441a-92b8-f390480ec069_25 for session [2]
21650 54bb 04/23 17:51:32 #### didMountSucceed: Mounted Successfully Mountpath [/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC] Server [MA002].
21650 54bb 04/23 17:51:32 #### doNfsMount: Path MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC mounted for session[2], socket fd: 2147483647, port: 0
21650 54bb 04/23 17:51:32 #### addNfsFiletoSession: Added NfsSession [2] to pool 0x7fae24001010:2
21650 54d0 04/23 17:51:32 #### doWork: Session[2]: Worker thread started -->> Files: 1, WorkQ: 0
21650 54bb 04/23 17:51:32 2077 NfsFile: 0x7fae242d64e0 Server Name [MA002] client Name [ZPICC101] mount path [/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC]
21650 54bb 04/23 17:51:32 #### Tdfs::TdfsConfigClient::Check3dfsStatus(1272) - 3dfs Service status [8: TDFS_STARTED].
21650 54bb 04/23 17:51:32 #### Tdfs::TdfsConfigClient::GetNetworkPorts(3155) - TdfsGetNetworkPorts status [0: SUCCESS].
21650 54bb 04/23 17:51:32 #### start3dfsServer: Using Mount port 2049, NFS Port 2049. err: Success
21650 54bb 04/23 17:51:32 #### initiateSession: 3dfs server @ [MA002] started successfully!!
21650 54bb 04/23 17:51:32 #### initiateSession: Share MA002:/ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC exported successfully as /ma/MA002/vol01/JKOLPE_04.23.2025_11.14/CV_MAGNETIC_d4d9bba0-5e9b-4219-89b2-05d4a61008d0_26 for session [3]
 

It looks like it's trying to use random ports that are not open (51880, 51884, 51896).
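A quick way to confirm the block from the other node (MA002 and the sample port are taken from the log above; nc here is RHEL's nmap-ncat):

# Test whether one of the failing ports on MA002 is reachable
nc -zv MA002 51880
# On MA002, list what firewalld currently allows
firewall-cmd --list-ports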

I really need to keep the firewall enabled, so I would like to know whether there is a specific range I need to open, or whether I can pin these ports to fixed values.

Thx 

5 replies

Jon Vengust
  • Vaulter
  • May 6, 2025

Hi JohnX,

 

Hope you’re doing well.

 

The ports referenced are part of our dynamic range between 49152 and 65535.
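If the firewall has to stay on, opening that range through firewalld should unblock the mounts; a minimal sketch, assuming the default "public" zone:

# Allow the Commvault dynamic port range (TCP; add the same range for UDP if mounts still fail)
firewall-cmd --permanent --zone=public --add-port=49152-65535/tcp
firewall-cmd --reload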

 

Source: https://kb.commvault.com/article/64047


venkatesh.R
  • Novice
  • July 29, 2025

Hi JohnX.

Has this issue been fixed? We are also facing NFS port communication issues for restores and aux copies on a Linux MediaAgent. If there is a fix, please share the details.

 

Thanks,

venkatesh.R


mateusznitka

@JohnX @Venky I think 2049 and 2050 are already used by the nfs-server service, so Commvault's 3dnfs has to fall back to random ports. You can bind these services to static ports using the additional settings mentioned here: https://documentation.commvault.com/2024e/expert/entering_required_firewall_settings_to_configure_dataserver_ip.html
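A quick way to check whether the kernel NFS server really owns those ports, which would explain 3dnfs falling back to dynamic ones (standard RHEL tooling):

# Show which process is listening on 2049/2050
ss -tlnp '( sport = :2049 or sport = :2050 )'
# Confirm whether the stock kernel NFS server is running
systemctl status nfs-server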

 

Anyway, I couldn't configure everything properly on RHEL 9, so I went back to RHEL 8.

 

Now I'm testing regular NFS sharing instead of the Commvault built-in services, because I had some problems with DataServer-IP and support told me it will be end-of-life.
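For anyone following along, the regular NFS setup I'm testing is just the stock RHEL server; a minimal sketch, with the export path and client name invented for illustration:

# /etc/exports -- hypothetical path and client, adjust to your environment:
#   /ma/MA002/vol01  MA001(rw,sync,no_root_squash)
systemctl enable --now nfs-server
# Re-export everything in /etc/exports
exportfs -ra
# Open the standard NFS services in firewalld
firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
firewall-cmd --reload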


  • Novice
  • August 20, 2025

@mateusznitka, when you mention switching to regular NFS sharing, how did it impact aux copy throughput? We are considering changing our Commvault built-in storage to an NFS share type, as our current aux copies to tape are running very slowly. Did you notice any improvement in tape backup and aux copy speeds after making this change?

 


mateusznitka

Hi, I'm just in the middle of configuring it, so I will let you know later. Note that I'm using a disk library, not tape.