Cloud VPN
HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN connection in a single region.
When you create an HA VPN gateway, Google Cloud automatically chooses two external IP addresses, one for each of its two fixed interfaces. Each IP address is automatically chosen from a unique address pool to support high availability. Each HA VPN gateway interface supports multiple tunnels, and you can also create multiple HA VPN gateways.
Note: the BGP over LAN (BGPoLAN) feature must be enabled on the Aviatrix Transit Gateway at deployment time in GCP.

Limits
- Each Cloud VPN tunnel can support up to 3 gigabits per second (Gbps) for the sum of ingress and egress traffic.
- Cloud VPN uses a maximum transmission unit (MTU) of 1460 bytes.
- The recommended maximum packet rate for each Cloud VPN tunnel is 250,000 packets per second (pps).
- If a Cloud VPN tunnel goes down, it restarts automatically. If an entire virtual VPN device fails, Cloud VPN automatically instantiates a new one with the same configuration. The new gateway and tunnel connect automatically.
- If one tunnel becomes unavailable, Cloud Router withdraws the learned custom dynamic routes whose next hops are the unavailable tunnel. This withdrawal process can take up to 40 seconds, during which packet loss is expected.
- If Cloud Router receives the same prefix with different MED values through a given Cloud VPN interface, it only imports the route with the highest priority to the VPC network. The other inactive routes are not visible in the Google Cloud Console or through the Google Cloud CLI. If the route with the highest priority becomes unavailable, Cloud Router withdraws it and automatically imports the next best route to the VPC network.
- Cloud VPN only supports one-to-one NAT by using UDP encapsulation for NAT Traversal (NAT-T). NAT-T is required so that IPsec traffic can reach peer devices behind NAT that don't have external (public) IP addresses.
- One-to-many NAT and port-based address translation are not supported. Cloud VPN cannot connect to multiple peer VPN gateways that share a single external IP address.
Cloud Router
Google Cloud Router dynamically exchanges routes between Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP):

I’m going to use it with Cloud VPN to exchange routes between the VPN gateways and the CSRs.
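The Cloud Router can also be created from the CLI. A minimal sketch, assuming a VPC named my-vpc and the us-east1 region (both placeholders for this lab):

# Create a Cloud Router with the private ASN used later for BGP (64512)
gcloud compute routers create gcp-east1-router \
    --network=my-vpc \
    --region=us-east1 \
    --asn=64512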
Cloud VPN
We select VPN from the Hybrid Connectivity section on the left panel:

GCP Cloud VPN supports two VPN options: Classic VPN, where a single tunnel to the remote peer is configured, and HA VPN, where two tunnels are configured.
We need to provide a name, a network, and a region where we are going to deploy the gateway:

Once the HA VPN gateway is created we can see the public IPs associated with the pair of interfaces:
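The equivalent gcloud commands are sketched below; the gateway name matches the one used in the CSR configuration later, while the network and region are placeholders:

# Create the HA VPN gateway (two interfaces, public IPs are auto-assigned)
gcloud compute vpn-gateways create gcp-east1-gw \
    --network=my-vpc \
    --region=us-east1

# Show the two auto-assigned external IP addresses
gcloud compute vpn-gateways describe gcp-east1-gw --region=us-east1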

Peer VPN Gateways. I’m going to create two peer VPN gateways, each with a single interface, to represent locally the 2 x CSRs I have running on AWS:
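A sketch of the same step from the CLI; the peer gateway names are placeholders, 3.232.164.133 is the public IP of the first CSR, and the second IP is left as a placeholder:

# One external (peer) VPN gateway per CSR, each with a single interface
gcloud compute external-vpn-gateways create on-prem-csr-1 \
    --interfaces=0=3.232.164.133
gcloud compute external-vpn-gateways create on-prem-csr-2 \
    --interfaces=0=<csr-2-public-ip>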

I’ll have a total of 4 x VPN tunnels and I’m using IKEv2:
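One of the four tunnels created from the CLI would look roughly like this (tunnel and router names, region, and shared secret are placeholders); the same command is repeated for each gateway interface / peer combination:

gcloud compute vpn-tunnels create gcp-east1-gw-tunnel-0 \
    --vpn-gateway=gcp-east1-gw \
    --interface=0 \
    --peer-external-gateway=on-prem-csr-1 \
    --peer-external-gateway-interface=0 \
    --ike-version=2 \
    --shared-secret=<pre-shared-key> \
    --router=gcp-east1-router \
    --region=us-east1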

On top of those 4 x VPN tunnels I’m running BGP:

I’m using ASN 64512 for GCP and 65513 for on-prem; both ASNs should come from the private range (64512–65534).

The two BGP interface IP addresses must be link-local IP addresses belonging to the same /30 subnet in 169.254.0.0/16.
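For each tunnel, a BGP interface and a BGP peer are added to the Cloud Router. A sketch for the first tunnel, matching the 169.254.0.0/30 addressing used on Tunnel1000 below (interface and peer names are placeholders):

gcloud compute routers add-interface gcp-east1-router \
    --interface-name=if-tunnel-0 \
    --vpn-tunnel=gcp-east1-gw-tunnel-0 \
    --ip-address=169.254.0.1 \
    --mask-length=30 \
    --region=us-east1

gcloud compute routers add-bgp-peer gcp-east1-router \
    --peer-name=on-prem-csr-1-tunnel-0 \
    --interface=if-tunnel-0 \
    --peer-ip-address=169.254.0.2 \
    --peer-asn=65513 \
    --region=us-east1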
On-Prem Configuration
I’m using a pair of Cisco CSR 1000v routers to emulate the on-prem routers; the relevant part of the configuration for one of the CSRs is provided below:
! IKEv2 proposal and policy used towards the GCP HA VPN gateway
crypto ikev2 proposal gcp-east1-gw-on-prem-csr-proposal
 encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
 integrity sha256
 group 16
!
crypto ikev2 policy gcp-east1-gw-on-prem-csr-policy
 proposal gcp-east1-gw-on-prem-csr-proposal
!
! One keyring (pre-shared key) per HA VPN gateway interface
crypto ikev2 keyring gcp-east1-gw-on-prem-csr-key-0
 peer gcp-east1-gw-interface-0
  address 35.242.15.156
  pre-shared-key pRq0yheLiEuAJ25z9aNPhzIdm8mMyFTa
!
!
!
crypto ikev2 keyring gcp-east1-gw-on-prem-csr-key-1
 peer gcp-east1-gw-interface-1
  address 35.220.6.163
  pre-shared-key wcgx9/OjeEcjkM4cJyyn1LoWT8C0LWDO
!
!
!
! IKEv2 profiles: the IKE identity is the CSR public IP (the CSR sits behind NAT on AWS)
crypto ikev2 profile gcp-east1-gw-on-prem-csr-ike-profile-0
 match address local interface GigabitEthernet1
 match identity remote any
 identity local address 3.232.164.133
 authentication remote pre-share
 authentication local pre-share
 keyring local gcp-east1-gw-on-prem-csr-key-0
 lifetime 36000
 dpd 60 5 periodic
!
!
!
crypto ikev2 profile gcp-east1-gw-on-prem-csr-ike-profile-1
 match address local interface GigabitEthernet1
 match identity remote any
 identity local address 3.232.164.133
 authentication remote pre-share
 authentication local pre-share
 keyring local gcp-east1-gw-on-prem-csr-key-1
 lifetime 36000
 dpd 60 5 periodic
!
!
!
! IPsec transform set and one IPsec profile per tunnel
crypto ipsec security-association replay window-size 1024
crypto ipsec transform-set gcp-east1-gw-on-prem-csr-ts esp-aes 256 esp-sha-hmac
 mode tunnel
!
!
!
crypto ipsec profile gcp-east1-gw-on-prem-csr-s-0
 set transform-set gcp-east1-gw-on-prem-csr-ts
 set pfs group16
 set ikev2-profile gcp-east1-gw-on-prem-csr-ike-profile-0
!
!
!
crypto ipsec profile gcp-east1-gw-on-prem-csr-s-1
 set transform-set gcp-east1-gw-on-prem-csr-ts
 set pfs group16
 set ikev2-profile gcp-east1-gw-on-prem-csr-ike-profile-1
!
!
!
! VTI tunnels towards the two HA VPN gateway interfaces; MTU/MSS lowered for the IPsec overhead
interface Tunnel1000
 ip address 169.254.0.2 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet1
 tunnel mode ipsec ipv4
 tunnel destination 35.242.15.156
 tunnel protection ipsec profile gcp-east1-gw-on-prem-csr-s-0
!
!
!
interface Tunnel2000
 ip address 169.254.0.6 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet1
 tunnel mode ipsec ipv4
 tunnel destination 35.220.6.163
 tunnel protection ipsec profile gcp-east1-gw-on-prem-csr-s-1
!
!
!
! eBGP to the two Cloud Router link-local addresses over the tunnels
router bgp 65513
 bgp log-neighbor-changes
 neighbor 169.254.0.1 remote-as 64512
 neighbor 169.254.0.5 remote-as 64512
 !
 address-family ipv4
  network 192.168.0.0 mask 255.255.255.0
  neighbor 169.254.0.1 activate
  neighbor 169.254.0.5 activate
 exit-address-family
!
Testing
From the GCP console Hybrid Connectivity -> VPN -> Cloud VPN Tunnels screen:

From the GCP console VPC Network -> Routes -> Dynamic screen:

Two routes learned from the active VPN gateway are installed in the VPC routing table.
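The same checks can be done from the CLI; tunnel and router names match the placeholders used earlier:

# Tunnel state (status/detailedStatus should report the tunnel as established)
gcloud compute vpn-tunnels describe gcp-east1-gw-tunnel-0 --region=us-east1

# BGP peer status and the best (dynamic) routes learned by the Cloud Router
gcloud compute routers get-status gcp-east1-router --region=us-east1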
BGP over LAN
We will use BGPoLAN to exchange routes between Cloud Router and Aviatrix Gateways.
Transit BGP to LAN allows Aviatrix Transit Gateways to communicate with multiple instances in the same VPC in GCP without running any tunneling protocol such as IPsec or GRE.
Because the Cloud Router does not expose interfaces or an IP address by default, we will need to work with Network Connectivity Center (NCC) first.
NCC provides a single management point to create, connect, and manage heterogeneous on-prem and cloud networks.

The first step is to create a hub:
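A minimal CLI sketch (the hub name is a placeholder):

gcloud network-connectivity hubs create ncc-hub \
    --description="Hub for Aviatrix BGPoLAN"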

The second step is to create a spoke for the Aviatrix transit hub and add the gateway instances to it as router appliances (see the CLI sketch below):
- the VPC network is the one dedicated to BGPoLAN
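Roughly equivalent CLI, assuming placeholder project, zone, instance, and LAN IP values for the two Aviatrix gateways:

gcloud network-connectivity spokes linked-router-appliances create aviatrix-transit-spoke \
    --hub=ncc-hub \
    --region=us-east1 \
    --router-appliance=instance=projects/<project>/zones/<zone-1>/instances/<avx-transit-gw-1>,ip=<gw-1-lan-ip> \
    --router-appliance=instance=projects/<project>/zones/<zone-2>/instances/<avx-transit-gw-2>,ip=<gw-2-lan-ip>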

Once the spoke is created, we need to configure BGP on the Cloud Router:
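For a router appliance spoke, the Cloud Router gets an interface on the BGPoLAN subnet and one BGP peer per Aviatrix gateway. A sketch for one peer (all names, IPs, and the Aviatrix ASN are placeholders); repeating it for the second interface and second gateway yields the four sessions mentioned below:

gcloud compute routers add-interface ncc-router \
    --interface-name=bgpolan-if-0 \
    --subnetwork=bgpolan-subnet \
    --ip-address=<cloud-router-ip> \
    --region=us-east1

gcloud compute routers add-bgp-peer ncc-router \
    --peer-name=avx-gw-1-peer \
    --interface=bgpolan-if-0 \
    --peer-ip-address=<gw-1-lan-ip> \
    --peer-asn=<aviatrix-transit-asn> \
    --instance=<avx-transit-gw-1> \
    --instance-zone=<zone-1> \
    --region=us-east1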

We have four BGP sessions but only two are active:



We also need to create a spoke for the VPN tunnels:
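A CLI sketch for the VPN spoke, reusing the tunnel names assumed earlier:

gcloud network-connectivity spokes linked-vpn-tunnels create vpn-spoke \
    --hub=ncc-hub \
    --region=us-east1 \
    --vpn-tunnels=gcp-east1-gw-tunnel-0,gcp-east1-gw-tunnel-1,gcp-east1-gw-tunnel-2,gcp-east1-gw-tunnel-3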

Once the spokes are created and configured, we move to the controller to create an external connection:


Testing
From the GCP Console Hybrid Connectivity -> Cloud Routers:

From the GCP Console VPC Network -> Routes -> Dynamic Routes:

From the Multi-Cloud Transit -> BGP -> Connections we can check the status of the BGP connections:

From the Aviatrix Transit Gateway Diag we can check whether the gateway has learned the on-prem route:

Aviatrix Site to Cloud (S2C)
Aviatrix supports connectivity between its gateways in the cloud and on-premises routers using a feature called Site2Cloud.
Site2Cloud builds an encrypted connection between two sites over the Internet in an easy-to-use and template-driven manner.
On one end of the tunnel is an Aviatrix Gateway. The other end could be an on-prem router, a firewall, or another public cloud VPC/VNet that the Aviatrix Controller does not manage.

Configuration
From the Controller -> Site2Cloud

Create a New Site2Cloud Connection. I’m going to use my ingress VPC for that:
Unmapped and route-based S2C are not supported from the transit gateways.


Once the configuration is done, we click on the connection and then select edit to download the configuration:

Aviatrix Controller is nice enough to create the required firewall rules:

And the static routes:

Cloud Interconnect
Cloud Interconnect extends your on-premises network to Google’s network through highly available, low-latency connections:

Dedicated Interconnect
Dedicated Interconnect provides direct physical connections between your on-premises network and Google’s network.

The on-prem network must physically meet Google’s network in a supported colocation facility. This facility is where a vendor provisions a circuit between your network and a Google Edge point of presence (PoP).
VLAN attachments determine which VPCs can talk to the on-premises network through a Dedicated Interconnect connection.
A VLAN attachment associates the circuit with a Cloud Router. The Cloud Router creates a BGP session between the VLAN attachment and its corresponding on-premises peer router to exchange routes.
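A minimal sketch of creating a VLAN attachment on an existing Dedicated Interconnect connection (all names are placeholders):

gcloud compute interconnects attachments dedicated create vlan-attachment-1 \
    --interconnect=<dedicated-interconnect-name> \
    --router=gcp-east1-router \
    --region=us-east1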
Limits
- 10-Gbps and 100-Gbps redundant circuits
- Up to 8 x 10-Gbps circuits can be bundled
- Up to 2 x 100-Gbps circuits can be bundled
- Traffic is not encrypted
- Each VLAN attachment supports a maximum bandwidth of 50 Gbps
Pricing
- Hourly charges for both Interconnect connections and VLAN attachments.
- Egress traffic from a VPC network through an Interconnect connection is charged by the number of gigabytes (GB) transferred.
Partner Interconnect
Partner Interconnect connects your on-premises network to Google through a supported service provider.
A Partner Interconnect connection is useful if your data center is in a physical location that can’t reach a Dedicated Interconnect colocation facility, or your data needs don’t warrant an entire 10-Gbps connection.

Partner Interconnect supports Layer 2 and Layer 3 connections:
- With Layer 2 connections, traffic passes through the service provider’s network to reach the VPC network or the on-premises network. BGP is configured between the on-premises router and a Cloud Router in the VPC network.

- With Layer 3 connections, traffic passes to the service provider’s network, which then routes it to the correct destination, either the on-premises network or the VPC network.

Configuration
Once the VLAN attachments are created, the configuration is the same as described above for the VPN tunnels. The main difference is the type of spoke created under Network Connectivity Center:
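For Interconnect, the NCC spoke references the VLAN attachments instead of the VPN tunnels; a sketch with placeholder names:

gcloud network-connectivity spokes linked-interconnect-attachments create interconnect-spoke \
    --hub=ncc-hub \
    --region=us-east1 \
    --interconnect-attachments=vlan-attachment-1,vlan-attachment-2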

CSR Troubleshooting Companion
IKEv2:
csr# show crypto ikev2 sa
csr# debug crypto ikev2
IPsec:
csr# show crypto ipsec sa
csr# debug crypto ipsec
Packet Capture:
csr# show debugging
csr# debug platform condition ipv4 <on-prem VM ip> both
csr# debug platform condition start
csr# debug platform packet-trace packet 1024
csr# show platform packet-trace summary
csr# show platform packet-trace packet <packet number>
Enter this command to clear the trace buffer and reset packet-trace:
csr# clear platform packet-trace statistics
The command to clear both platform conditions and the packet trace configuration is:
csr# clear platform conditions all
References
https://cloud.google.com/network-connectivity/docs/vpn
https://cloud.google.com/network-connectivity/docs/interconnect
https://cloud.google.com/network-connectivity/docs/router
https://cloud.google.com/community/tutorials/using-cloud-vpn-with-cisco-asr