My Aux Copy jobs going to Metallic cloud storage have been stopped for the past three days. I’ve asked my network team if there are any issues and they are asking for the connection information. I can see my on-prem media agents easily enough but can’t seem to find any of the details for Metallic. How do I find network details on the Metallic cloud storage?
Hi
May I know whether you are getting any errors in the Aux Copy jobs running to the Metallic cloud storage?
Regards,
Karthik
Hi
Kindly refer to the document below for details on the Metallic Cloud Storage IP and network information. You can use this information to review and verify the connection.
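In case it helps with the verification, here is a minimal Python sketch for spot-checking DNS and TCP reachability to the storage endpoint from a MediaAgent. The hostname below is a placeholder; substitute the actual Metallic Cloud Storage endpoint and port from the document above.

```python
import socket

# Placeholder values; substitute the Metallic Cloud Storage endpoint
# and port from the referenced document.
HOST = "example.blob.core.windows.net"
PORT = 443

# Resolve the current DNS answer so it can be compared against
# what the network team sees at the firewall.
addresses = sorted({info[4][0] for info in
                    socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)})
print(f"{HOST} resolves to: {', '.join(addresses)}")

# Attempt a plain TCP connection to each address. A timeout with no
# corresponding firewall deny usually points at a routing/path problem
# rather than a blocked rule.
for addr in addresses:
    try:
        with socket.create_connection((addr, PORT), timeout=10):
            print(f"{addr}:{PORT} - TCP session established")
    except OSError as exc:
        print(f"{addr}:{PORT} - failed: {exc}")
```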
Yes, I am having issues with copying to cloud storage. Starting four days ago I’ve been getting these errors on Aux Copy jobs going to Metallic:
Error Code: [62:173]
Description: Library [CV Cloud Storage], MediaAgent [inf-srvp110.apacorp.net], Drive Pool [DrivePool(inf-srvp110.apacorp.net)24], Media []: MediaAgent is not ready. Advice: Please check the following: 1. Media Mount Manager is still running on MediaAgent; 2. Connection between MediaAgent and CommServe is in good condition.
Source: inf-srv57, Process: JobManager
Yet when I look in the Storage Resources > Libraries branch of the CommCell Console, everything seems to be OK.
Thanks
I’ve sent that information on to my networking guys to see if they can find the source of the problem.
Ken
Also, I have support ticket 250106-602 open to help investigate this issue.
This is an interesting problem. There are no issues with firewall rules, URL filtering, or decryption, yet traffic leaves the firewall but does not establish a session. We continue to work with CV support.
Update: It appears something changed within either Azure or the Metallic cloud. Changing the network routing to use our Internet connection instead of the ExpressRoute connection has allowed Aux Copies to resume running normally. Investigation will continue to determine what exactly changed and why, but from my perspective, the issue has been resolved.
For future reference, this is what we’ve found:
1) On Jan 3rd, DNS for the cloud storage was changed from 20.150.71.132 to 20.60.242.11.
2) 20.60.242.0/23 is a route advertised by Azure via BGP with the path being ExpressRoute. Traffic flowed via ExpressRoute, but the sessions would not set up. Something in Azure was blocking it.
3) On Jan 7th we bypassed ExpressRoute and forced traffic via the corporate office Internet connection with a combination of PBF (policy-based forwarding) and static routes. Commvault Aux Copy replication started working again.
4) While reviewing the BGP configuration at the DR site, we noticed that a lot of networks are excluded/filtered from BGP. A review of Azure IP blocks shows these excluded networks are storage networks (a script to check a prefix against the Service Tags file is sketched below). See: Download Azure IP Ranges and Service Tags – Public Cloud from the Official Microsoft Download Center
20.60.242.0/23 is also listed as a storage network but is NOT excluded/filtered from BGP on the DR site firewall.
We put a filter in place at the DR site to exclude that network, and now traffic to 20.60.242.0/23 flows out the correct interface (Internet) and is reachable from DR site devices.
We can now remove the PBF and route entries at the corporate office to have the Metallic replication go over the DR site and ExpressRoute.
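If anyone wants to automate the check in step 4, here is a rough Python sketch (not the exact process we used) that tests whether a given prefix falls under a Service Tag in the ServiceTags_Public JSON file downloaded from the Microsoft page linked above:

```python
import ipaddress
import json

# Service Tags file downloaded from the Microsoft page linked above;
# the filename changes with each weekly release.
with open("ServiceTags_Public.json") as f:
    tags = json.load(f)

# The prefix that was being advertised over ExpressRoute.
target = ipaddress.ip_network("20.60.242.0/23")

# Report every tag whose address prefixes cover the target; the
# "Storage" tag and its regional variants are what we filter from BGP.
for value in tags["values"]:
    for prefix in value["properties"]["addressPrefixes"]:
        net = ipaddress.ip_network(prefix)
        if net.version == target.version and target.subnet_of(net):
            print(f"{target} is covered by {value['name']} ({prefix})")
            break
```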
It's always the network! Thanks for sharing.