3-way NDMP backups - ONTAP

  • 3 December 2022
  • 8 replies

Userlevel 2
Badge +7

I read that only EMC Celerra/VNX is supported for 3-way NDMP, and we have been doing this for years. We are now moving to begin backing up multiprotocol shares off of ONTAP in cluster mode, so I’m wondering if NDMP backups can be done similarly - tapeless, but backing up volumes via NDMP to capture both sets of permissions. Or is this simply not an option with ONTAP, and the only way to do NDMP is via tape (direct)?

If tapeless NDMP isn’t supported, then I guess that means the NAS client is the only way and we need to pick NFS or SMB, correct? The MA in this particular case is Linux, so NFS it will be if 3-way direct to disk isn’t an option.


8 replies

Userlevel 2
Badge +4

Hello @downhill 

I guess your question is whether 3-way NDMP can back up to another NDMP target on disk, and what other options exist for backing up these volumes.

If you have the same storage backend, you can configure Snapshot (IntelliSnap for backup) and Snap copy (IntelliSnap for DR). You can explore this option.

Answer to your question:

NDMP is independent of the backend storage (tape or disk). That being said, the destination target should be configured as a disk library or a VTL, not as an NDMP device.

Other ways of backing up NDMP data, such as a CIFS/NFS/SMB share mounted on a server, can cause the traditional problems (slow, long-running backups, ...) due to the various other infrastructure components involved.


Userlevel 2
Badge +7


I’m not sure I follow. The BoL specifically says something like 3-way is only supported on Celerra/VNX.

3-way AFAIK essentially means you -don’t- have tape devices directly zoned to the NAS. But the docs for setting up NDMP for ONTAP imply you must have tape devices visible and attached. 

Or maybe my interpretation of 3-way is incorrect; if someone can explain, great.


Userlevel 2
Badge +4

Hello @downhill 

3-way NDMP basically means the backup destination is not attached to the NAS device we are trying to back up; instead, it is attached either to the MA or to another NDMP device.

In Commvault, I have tested the MA option and it works, i.e., the target is configured as a disk library (an MA with a CIFS share to store backup data), which is eventually used to store the NDMP data via a streaming NDMP agent.

Is this what you are looking for?



Userlevel 2
Badge +7

No, not really. In NetApp-speak they call it “indirect”. Take the library out of the equation; it doesn’t matter what the MA is using for a library.

It appears this will work, but Commvault apparently calls this remote NDMP, not 3-way, if I’m understanding correctly. However, ONTAP has this very annoying “feature” where the array decides which interface to use for communications, and for complex environments this doesn’t seem to work. I assumed this was some limitation on the CV side, but in fact my connection issues are all due to ONTAP.

Can you or some other vaulter confirm that remote/indirect NDMP is supported? It seems to be, but the documentation on each vendor’s site appears to conflict.

My main objective was to have an NDMP client per SVM on the NetApp, but the way they do things this seems nearly impossible. Cluster-aware backup forces traffic down the e0M port incorrectly.


Userlevel 2
Badge +4

Received the below update on SVM NDMP backups:

We do support an NDMP client per SVM. With tunneling turned off for the SVM on the Snap Configuration tab, network traffic will use the interfaces/ports configured for the SVM.


Use Tunneling

If you select this option, then the snapshot operations for a Storage Virtual Machine (SVM) are executed using the cluster array instead of directly using the SVM. To use Use Tunneling, the SVM client must be associated with the cluster client using the client properties.

On the network-attached storage (NAS) cluster client, set the Client Type to Cluster File Server. On the SVM client, set the Client Type to Storage Virtual Machine, and then select the proper cluster client.

Use Tunneling is useful for configurations in which the client that issues the snapshot operations cannot directly access the SVM.
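
As a quick sanity check of the above: with tunneling off, the SVM’s own data LIFs should carry the traffic. A rough sketch from the ONTAP CLI (svm1 is a placeholder, and exact fields can vary by ONTAP release):

```
# Confirm NDMP is enabled on the data SVM:
::> vserver services ndmp show -vserver svm1

# See which LIFs (and addresses) the SVM actually owns:
::> network interface show -vserver svm1 -fields address
```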


Userlevel 2
Badge +7

I don’t think my point is making it across, but I have new info relating to the topic:

  1. If the array cluster interconnects have NDMP enabled on them, they will listen on port 10000. I am doing CAB with this and am no longer concerned about per-SVM clients...
  2. If, and only if, I add the “array” using one of those cluster interconnect IPs can I move all traffic off the cluster management LIF (e0M). CAB is sweet for sure.
  3. ONTAP, at least 9.11 in our case, does not honor -data-port-range at all. Regardless of what start/stop ports are set at the cluster (and propagated down to the SVMs), ALL traffic goes over 10000. I think this is a bug in ONTAP and am pursuing it.
  4. It is not necessary to “move” cluster management to other well-connected LIFs if, and only if, the interconnect LIFs are connected in some way to the same network as the “remote” MA. I set up DIPs between the NAS IC LIFs and dedicated backup interfaces on the MA, and voila - everything flows like I want, except the array isn’t known by its usual FQDN; it’s only known via the IC LIF.
  5. IntelliSnap still seems to work on this later ONTAP version. I thought I saw a blurb somewhere about this no longer working (maybe I’m confusing topics, sadly), but in the arrangement noted above, the snap copy is done instantly and the backup copy runs with no problem.

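
For anyone following along, the settings in the list above map to roughly these ONTAP CLI commands (a sketch; the SVM name and port range are placeholders, and syntax can vary by release):

```
# SVM-scoped, CAB-capable NDMP instead of node-scoped:
::> system services ndmp node-scope-mode off
::> vserver services ndmp on -vserver svm1

# The range that point 3 says is not being honored on 9.11:
::> vserver services ndmp modify -vserver svm1 -data-port-range 55001-55100
::> vserver services ndmp show -fields data-port-range
```
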
Userlevel 7
Badge +16

I have recently configured remote NDMP for ONTAP.

Configured a Media Agent which is able to connect to the LIF in question, in my case the mgmt LIF.
To configure the correct NDMP data transfer port, I went to the properties of the Media Agent object, clicked the Network Route Configuration tab, clicked Incoming Ports, and defined the port range used for NDMP traffic.

Then created the NAS client where I assigned the media agent as described above.

NDMP logging on the MA showed the correct port usage.

Hope this helps.

Userlevel 2
Badge +7

The network topology settings were applied at the MA group. I checked the box, added the data port range that matches what is set in ONTAP, and watched ports on the MA while the backup ran: it still uses port 10000 alone. I searched the job log for data port 55100 and found nothing.
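
For what it’s worth, here is the kind of check I use to see which peer ports the filer actually negotiates: a small helper that pulls the peer port out of `ss -tn` style output. This is a sketch; the sample lines and addresses below are made up stand-ins for live output.

```shell
# Print the peer port of every established TCP connection in ss -tn output.
# With a working -data-port-range you would expect ports in 55001-55100;
# the symptom described above is that everything stays on 10000.
extract_peer_ports() {
  awk '/ESTAB/ {split($5, a, ":"); print a[length(a)]}'
}

# Sample input standing in for live `ss -tn` output:
extract_peer_ports <<'EOF'
State Recv-Q Send-Q Local_Address:Port Peer_Address:Port
ESTAB 0      0      10.0.0.5:42312    10.0.1.9:10000
ESTAB 0      0      10.0.0.5:42318    10.0.1.9:10000
EOF
```

On a live MA you would run something like `ss -tn | grep <intercluster-LIF-ip> | extract_peer_ports | sort | uniq -c` while the job is running.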