Question

Air Gap Solution (On-premise)

  • 27 September 2023
  • 7 replies
  • 256 views


Hello,

We currently have our primary copies stored on Hyperscale-X, with secondary AUX copies residing at our opposing datacenter on a traditional Windows MA attached to Nexsan storage. We have a requirement to implement some sort of air gap solution.

I know it's not fool-proof, but at the moment I'm considering a tertiary copy to move the data from either Hyperscale-X (local) or from the AUX copy to another MA/storage located on a locked-down network with a firewall in between. The concern, however, is the length of time we'd need to leave ports open to create that air gap copy. We could be talking dozens of TBs to copy, and if we need to leave the ports open for several hours, that's going to increase our exposure.

Can anyone speak to how they're maintaining an on-premise air gap solution and provide me with any suggestions? I was thinking that if we could somehow move that data to the air gap location via some sort of native hardware solution, with Commvault initiating it, that could be much faster and limit our exposure. Thank you for any suggestions and guidance.


7 replies


@BillK - could you give more details on the storage/MAs to be used for the tertiary copy that needs to be air-gapped?

 

Please check if this helps -

https://documentation.commvault.com/2023e/essential/147278_starting_or_stopping_network_gateway_to_create_air_gap.html



Hello Satya,

 

I'm open to suggestions, but right now I'm just considering a conventional Windows MA with internal storage for the tertiary copy, behind a firewall. We're open to spending money on infrastructure if it makes sense. My only concern with using conventional methods for the tertiary copy is that it could take a significant amount of time to copy our data and thus increase our exposure window. Thank you for your reply.


@BillK - we can achieve an on-premises air gap in one of these three ways:

  1. Configure the storage on VMware vCenter VMs (acting as storage MAs) and enable power management on those MAs. The VMs are powered on for replication to the tertiary copy, and as soon as replication finishes the MAs are brought down again to create the air gap.

    https://documentation.commvault.com/2023e/essential/101313_cloud_mediaagent_power_management.html
     
  2. Create a network topology in which the tertiary copy environment is reachable only via a network gateway (proxy). The proxy must be hosted on a VMware vCenter hypervisor. Configure blackout windows: the proxy is powered down during the blackout window, so the replication jobs must be scheduled to run outside it.

    https://documentation.commvault.com/2023e/essential/147278_starting_or_stopping_network_gateway_to_create_air_gap.html
     
  3. Have an HSX cluster at the tertiary site. There are cluster commands to create an air gap window (see the attached PDF for the commands). The cluster will not accept any connections during the air gap window, so the replication jobs must be scheduled to run outside it.
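All three approaches share the same scheduling constraint: replication must run only outside the air gap/blackout window. A minimal sketch of that gating check (the 20:00-06:00 window is an illustrative assumption, not a Commvault default; in practice the product's blackout-window configuration does this for you):

```shell
#!/bin/sh
# Sketch: only kick off replication when we are outside the air gap
# (blackout) window. Window boundaries are illustrative assumptions.
AIRGAP_START=20   # hour the air gap window opens (inclusive)
AIRGAP_END=6      # hour it closes (exclusive)

in_airgap_window() {
    hour=${1#0}   # strip a leading zero so "08" compares as decimal 8
    # Window wraps past midnight: inside if hour >= start OR hour < end
    [ "$hour" -ge "$AIRGAP_START" ] || [ "$hour" -lt "$AIRGAP_END" ]
}

if in_airgap_window "$(date +%H)"; then
    echo "Inside air gap window - not starting replication"
else
    echo "Outside air gap window - safe to start replication"
    # (start the aux copy here via your own automation - placeholder only)
fi
```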

Hello,

#1 is not an option, as we've discovered that virtual MAs perform very poorly; all our MAs are currently physical nodes. With regards to #2, how does powering off this network gateway prevent an intruder from accessing the target air gap MA/storage? It seems like this could prevent Commvault from accessing the storage, but not an intruder. I think we would need more details (possibly a diagram) regarding the network proxy option. Commvault Support has actually suggested another option, in which we lock down the tertiary copy behind a firewall on a segmented network and then implement a one-way firewall that only allows one outbound connection each to the source MA and the CommServe. Thank you.
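The one-way firewall option Support described could be sketched as a default-deny ruleset like the one below. This is only an illustration of the idea, not a vetted policy: the IP addresses are placeholders, and while 8403 is Commvault's default tunnel port, you should verify the ports your own topology actually uses.

```shell
# Config sketch: default-deny firewall on the air gap MA. Only outbound
# tunnel connections that the air gap MA itself initiates toward the
# source MA and the CommServe are permitted; nothing inbound can start
# a new connection. IPs are placeholders; 8403 is Commvault's default
# tunnel port, but verify against your environment.
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP
# Allow replies to connections the air gap MA initiated
iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# One outbound tunnel to the source MA (placeholder IP)
iptables -A OUTPUT -p tcp -d 10.0.1.10 --dport 8403 \
         -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# One outbound tunnel to the CommServe (placeholder IP)
iptables -A OUTPUT -p tcp -d 10.0.2.10 --dport 8403 \
         -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
```

Because the tunnels are initiated from inside the segmented network, an intruder on the outside never gets a listening port to attack, which is what distinguishes this from simply leaving ports open during the copy.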


Hi @BillK - option #2 suggests setting up a "one-way forwarding" or "cascading" network topology and placing the servers (MAs) of the air gap site behind a firewall. Check the diagram in the doc page below. By bringing down the network proxy, the site is effectively air-gapped.

 

https://documentation.commvault.com/2023e/essential/111616_configuring_cascading_network_gateway_connections_using_predefined_network_topologies.html
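If the network gateway is a vCenter VM, opening and closing the gap can be scripted; here is a sketch using VMware's `govc` CLI. The VM name `nw-proxy-01` is an assumption, and the helper only constructs the command string (a dry-run pattern), so actually powering the proxy off stays a deliberate, explicit step.

```shell
#!/bin/sh
# Sketch: toggle the network gateway VM that fronts the air-gapped site.
# Assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are
# set; the VM name "nw-proxy-01" is an illustrative assumption.
GATEWAY_VM="nw-proxy-01"

gateway_cmd() {
    # $1 = "open" (power the proxy on) or "close" (power it off);
    # emits the govc command that would be run, without running it
    case "$1" in
        open)  echo "govc vm.power -on $GATEWAY_VM" ;;
        close) echo "govc vm.power -off -force $GATEWAY_VM" ;;
        *)     return 1 ;;
    esac
}

# To actually run the generated command:  eval "$(gateway_cmd close)"
```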


Hi @Satya Narayan Mohanty,

 

Regarding air gap on Commvault, how is data pruning managed in this case? If the MA VMs are only brought up during AUX copies and shut down when the jobs are done, when does data pruning happen?

 

Thanks in advance for any provided guidance.


Data pruning is triggered as soon as the MAs are reachable. It's a periodic activity that keeps checking the MA status, and once the MA is online (reachable), it sends batches of pruning records to that MA in cycles. Hence, there is nothing extra to be done for pruning; it will kick off automatically as soon as the MAs are up for the auxiliary copy operation.

When using the power-management-based approaches (#1 and #2 above), power management can additionally bring up the required MAs when the pruning backlog threshold is reached (pre-defined and configurable).
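The backlog-threshold behaviour described above can be pictured with a small check like this. The threshold value and the way the backlog count would be obtained are illustrative assumptions; in the product the threshold is configured in power management, not scripted.

```shell
#!/bin/sh
# Sketch of the pruning-backlog idea: if the count of pending prune
# records for a powered-off MA exceeds a threshold, wake the MA so
# pruning can run. The threshold value is an illustrative assumption.
PRUNE_THRESHOLD=100000

should_wake_ma() {
    backlog=$1   # pending prune-record count (source of this number
                 # is environment-specific; placeholder input here)
    [ "$backlog" -gt "$PRUNE_THRESHOLD" ]
}

if should_wake_ma 250000; then
    echo "Backlog over threshold - power the MA on so pruning can run"
fi
```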
