Solved

Firewall implementation and test failover

  • 30 June 2021
  • 6 replies
  • 514 views

Userlevel 4
Badge +14

Hello community,

I am testing a CommServe failover.

I already ran a test when no firewall was in place, and everything worked.

Now the environment is in production and a firewall has been implemented.

There is an on-premise site and an AWS site connected through a Direct Connect VPN.

Each site is backed up locally, meaning there is currently no backup copy from on-premise to AWS.

I am not sure about the communication and ports used when the DR CommServe becomes the active one.

I think that when the CommServe located in AWS becomes active, it should communicate with the MediaAgents and clients located both in AWS and on-premise.

Here is a little diagram; any help would be much appreciated.

I need some help identifying which ports to use, and the source and destination for each rule (bidirectional or not), so I can ask the network team to add them.

 

Thank you !



Best answer by MichaelCapon 2 July 2021, 11:16


6 replies

Userlevel 5
Badge +8

Hi Bloopa,

Have a look through the documentation here for the ports used by the services.

https://documentation.commvault.com/commvault/v11/article?p=8572.htm

Here is also the Live Sync documentation: https://documentation.commvault.com/commvault/v11/article?p=102842.htm

Firewall and Network Requirements

By default, this solution requires that all clients communicate with the CommServe server using a proxy. (By default, the SQL clients installed on the CommServe hosts are used as the proxy.)

Note: A default topology, Firewall Topology created for failover clients, is created for communication between the production and standby CommServe hosts using port 8408. This default topology routes requests between the CommServe and the SQL instances and is created irrespective of the option selected for communication.
Hence, it is important to verify that the production and standby CommServe hosts are able to reach each other using port 8408.

To facilitate communication between the clients and the CommServe, verify the following conditions:

  • Both the active and the passive CommServe hosts can reach the tunnel ports associated with the SQL clients on both hosts.
  • If the SQL clients are used as the proxy, verify that all clients in the CommCell environment can communicate with the port and hostname of the SQL clients on both the active and passive CommServe hosts.
  • If a dedicated proxy is already set up and available in your environment, use this dedicated proxy to communicate with the CommServe server.

    Note: A dedicated proxy is recommended in environments with a large number of clients (approximately 5,000 or more).

  • To failover to a CommServe hosted in the cloud, or if there is a firewall between the active host and the passive CommServe hosts, see Setting Up a Standby CommServe Using Port Forwarding Gateway Configuration.

    See Also: Network TCP Port Requirements.
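To sanity-check those rules before the next failover test, a quick TCP probe from each CommServe host can confirm the ports are actually open in both directions. Here is a minimal sketch in Python; the hostnames are placeholders, and the 8403 tunnel port is an assumption to verify against your own topology and the port documentation above:

```python
# Quick TCP reachability check for the CommServe failover ports.
# Run it from each side (production and standby) so both directions
# across the firewall are covered. Hostnames are placeholders.
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 8408 = default failover topology port; 8403 is the usual
    # Commvault tunnel port (confirm yours in the port documentation).
    checks = [
        ("standby-cs.example.com", 8408),  # placeholder hostname
        ("standby-cs.example.com", 8403),  # placeholder hostname
    ]
    for host, port in checks:
        status = "open" if port_reachable(host, port) else "BLOCKED"
        print(f"{host}:{port} -> {status}")
```

Running it from both CommServe hosts (and from a client toward the proxy port) gives the network team a concrete list of which rules are still missing.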

 

Hope this helps

Userlevel 4
Badge +14

Hello, thank you.

 

I am in failover mode now. I think we have resolved all the connectivity issues, but the libraries are offline.

These libraries are located at the on-premise site.

From the failover CommServe located in AWS, cvping to the on-premise MediaAgents is OK.

Check Readiness on the MediaAgent attached to the library is also OK.

Any ideas?


Userlevel 4
Badge +14

Connectivity to the MediaAgents and libraries is OK now, after restarting the CommCell Console.

 

8836  4860  06/30 11:48:08 61291 CPipelayer::InitiatePipeline signatureType [CV_NO_SIGNATURE], signatureWhere [CV_SIGNATURE_NOWHERE]
8836  4860  06/30 11:48:08 61291 CPipelayer::InitiatePipeline Lan-Free - 0
8836  4860  06/30 11:48:08 61291 CPipelayer::InitiatePipeline Num Buffers [90], Buffer size [65536]
8836  4860  06/30 11:48:08 61291 CvFwClient::connect(): Connected to pcpl98a00i.centreapi.com:8400/8400 via network daemon.
8836  4860  06/30 11:48:08 61291 CPipelayer::connectToDest Connected pipeline to (pcpl98a00i)pcpl98a00i.centreapi.com:8400/8400, ConType 4 (Tunneled), Proxy []
8836  4860  06/30 11:48:08 61291 CPipelayer::InitiatePipeline SDT is using socket 2588 for connection 1/1
8836  290c  06/30 11:48:08 61291 JobStatusWorker Setting proxy Connection stats for job: [<?xml version="1.0" encoding="UTF-8" standalone="no" ?><CNSession_cvsJobNetworkInfo jobId="61291" mangledHostname="pcpl98a00i.centreapi.com*pcpl98a00i*8400"><networkInfo connectionType="4" proxy=""/></CNSession_cvsJobNetworkInfo>]
8836  4860  06/30 11:48:08 61291 CPipelayer::InitiatePipeline SDT Pipeline ID is [SDTPipe_awpw99a00b-SQL_pcpl98a00i_61291_1625046488_8836_18528_000001DC1A5D8450]
8836  4860  06/30 11:48:08 61291 SdtNetLink() - WAN Padding is OFF
8836  4860  06/30 11:48:08 61291 SdtNetLink::setSocket() - Switching socket 2588 to non-blocking mode.
8836  4860  06/30 11:48:08 61291 SdtNetLink::authenticate() - Connection authenticated successfully
8836  4860  06/30 11:48:08 61291 Thread pool monitor interval [300] seconds, Refresh interval [60] seconds
8836  4860  06/30 11:48:08 61291 SdtBase::cfgProcs() - Total Bufs=90, allocators=1
8836  4860  06/30 11:48:08 61291 SdtBase::cfgProcs() - Free Bufs [90]
8836  4860  06/30 11:48:08 61291 SdtBase::cfgProcs() - Free Bufs [0]
8836  4860  06/30 11:48:08 61291 SdtBase::cfgProcs() - Free Bufs [0]
8836  4860  06/30 11:48:08 61291 InitializePerfCounters() - Initializing Perf counters message[Pipeline-ID - 112160] [Starttime - 1625046488] [Starttime UTC - 1625042888]
8836  4860  06/30 11:48:08 61291 CCVAPipelayer::SendCommandToDSBackup() - Sent Command 136 to MediaAgent, Waiting for Response…

When the CommCell launches the DR backup to the on-premise CommServe for the failback, it gets stuck at:

Command 136 to MediaAgent, Waiting for Response…

any clues ?

 

Userlevel 7
Badge +23

@Bloopa , what do the logs show on the Media Agent (CVD.log is a good start)?

Can you run cvping in both directions across the port opened in the firewall?

Userlevel 4
Badge +14

Hello, 

 

@Mike Struening 

I ran a test and connectivity was OK.

Check Readiness on the MediaAgent from AWS to on-premise is not stable; sometimes it is OK, sometimes not.

 

But after 15 minutes with this message on the destination MA (Command 136 to MediaAgent, Waiting for Response…), the full backup began. The CommServe tried to back up the database from AWS to the on-premise site (failback mode), and it took a very long time to perform the backup. I opened a case and support was very quick and helpful. I have to discuss with the network team, because log shipping from on-premise to AWS is quick, but for the failback, log shipping from the AWS CommServe is very slow, around 1 KB/s.

I will plan a new failover test with Commvault assistance in two weeks.

 

Thank you for your help !

Userlevel 6
Badge +14

Hi Bloopa,

I have seen some environments where the SP copies are promoted from Secondary/Synchronous to Primary, so that the DR-side MA/storage becomes the Primary for the DR CommServe. (In your case, that would mean making AWS the Primary and on-prem the Secondary while in the failed-over state.)


Have you tried this at all? I would hope that backups running from the DR CS to the DR MAs and S3 storage (over AWS networking) would run faster than running over the VPN to the on-prem MA and disk library.

Then, once the failback is done, you could re-promote the on-prem copy to Primary.

 

Best Regards,

Michael

Reply