Question

Dedicated Interface Pairing (DIP) for VMware backup, and HyperScale X as target

  • 6 October 2023
  • 5 replies
  • 271 views

Badge +3

Good day!

We have a requirement to route VMware backups over a dedicated VMkernel network on the ESXi hosts. Unfortunately, the Commvault solution does not have a proper method to route VMware backup traffic over port 902 (NFC) to HyperScale X. If I'm correct, Dedicated Interface Pairing (DIP) will not work for the VMware backup type. Please let me know if any improvements have been made in the latest version to manage VMware backup traffic using DIP.

Below are the environment details (IPs and names are dummies). I am curious to know whether it will work if I use a hosts file entry to route traffic over the dedicated VMkernel network.

 

CommServe: IP and name

MGMT → 192.168.10.10 cs01.test.com

Backup → 10.0.8.10

HyperScale X: network label, IP, and name. Three-node HSX cluster; bond1 has sub-interfaces.

Bond1.1 → CS-Reg → 192.168.10.11 HSX01.test.com, 192.168.10.12 HSX02.test.com, 192.168.10.13 HSX03.test.com → These have DNS entries.

Bond1.2 → Data protection → 10.0.8.11, 10.0.8.12, 10.0.8.13

Bond2 → Storage pool network → 192.168.20.10, 192.168.20.11, 192.168.20.12

 

VMware network: IP and name

MGMT → 192.168.10.x

vCenter → 192.168.10.14 vcenter1.test.com

ESXi → 192.168.10.15 ESX01.test.com, 192.168.10.16 ESX02.test.com → These have DNS entries.

Dedicated VMkernel network → ESX01 → 10.0.8.14, ESX02 → 10.0.8.15

 

I will add a hosts file entry on the HyperScale X servers as follows:

10.0.8.14 ESX01.test.com 

10.0.8.15 ESX02.test.com 

 

What I do not know is this: by default, ESXi/vCenter will resolve the HyperScale X nodes via DNS (e.g., HSX01.test.com → 192.168.10.11), so I expect the traffic to flow from the ESXi management network (192.168.10.15) to HyperScale X (192.168.10.12).

In this context, how will the ESXi server move traffic from its 10.0.8.14 interface to the HyperScale X 10.0.8.11 interface? Please help.


5 replies

Userlevel 7
Badge +23

Hi @Comtech,

With the exception of NDMP, DIPs are only used to determine which interfaces are used between two Commvault machines (i.e., machines running Commvault software). They do not affect Commvault-to-application traffic (again, with the exception of NDMP 😉).

During the VM backup, Commvault queries vCenter for the ESXi host that hosts the VM. vCenter returns the FQDN of the ESXi host, and based on the operating system's route table, the OS picks the interface to use. As you have discovered, you can usually solve this by putting in a hosts file entry that maps the ESXi server's FQDN to the VMkernel IP address you want to connect to for NFC (port 902).

I'm not sure of an alternative method to override the IP address of an ESXi server. Putting in a hosts file entry for the ESXi hosts using the backup VMkernel IP address should solve it.

Userlevel 1
Badge +13

@Damian Andre  hello ,

 

As you said, during the VM backup Commvault will query vCenter for the ESXi host which hosts the VM. Does the CommServe perform this query, or the access node?

 

We have the HyperScale X CommServe registration/data protection network on a VLAN, and all of our VMs have a second interface on the same VLAN as HyperScale X.

During the configuration I used the CommServe as the access node, and I was able to browse the VMs. But when I removed the CommServe and added HyperScale X as the access node, I was not able to browse any of the VMs.

 

In this case, the HyperScale X nodes should have access to vCenter and the ESXi hosts on ports 443 and 902 so that discovery can happen and we can browse the content.

But when the backup starts, since all the VMs have a backup network, if we configure an OS-level route to the HyperScale X nodes, will that work?

 

 

The vCenter, ESX servers, and Virtual Server Agent must be able to communicate with each other. To ensure that all components can communicate through the firewall, ensure that the ports for web services (default: 443) and TCP/IP (default: 902) are opened for bidirectional communication on each of these machines.
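The port requirements above can be verified from each access node with a simple TCP reachability check. A minimal sketch (hostnames are taken from this thread; this is not a Commvault utility):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from each HyperScale X node: vCenter and the ESXi hosts must be
# reachable on 443 (web services) and 902 (NFC) for discovery and backup.
for host in ("vcenter1.test.com", "ESX01.test.com", "ESX02.test.com"):
    for port in (443, 902):
        state = "open" if tcp_reachable(host, port) else "blocked"
        print(f"{host}:{port} {state}")
```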

Userlevel 7
Badge +23

But when I removed the CommServe and added HyperScale X as the access node, I was not able to browse any of the VMs.

 

I don't think it's a comms problem here. You need a Windows VSA to be able to perform the browse operation on Windows guests, so it's likely failing to mount the file system to browse the VM. You can try browsing a Linux VM to confirm whether it works.

 

For browse and restore operations, use a Linux access node and MediaAgent to browse Linux guests, and a Windows access node and MediaAgent to browse Windows guests.
Source

Userlevel 1
Badge +13

@Damian Andre hello, what I meant was browsing the vCenter client content to add VMs,

not live browse.

So basically the CommServe and VSA need to have access to vCenter and the hosts.

Userlevel 7
Badge +23

@Damian Andre hello, what I meant was browsing the vCenter client content to add VMs,

not live browse.

So basically the CommServe and VSA need to have access to vCenter and the hosts.

Only a single VSA needs access to vCenter to perform discovery/browse for content, and that does not need to be the CommServe. There is a setting to configure VSA/proxy/access nodes in the properties of the virtualization client and also at the subclient level, so you may want to check that both are correct here.
