
Google Cloud Interconnect is a service provided by Google Cloud Platform (GCP) that enables customers to establish private, high-performance connections between their on-premises infrastructure and Google Cloud. It offers low-latency, secure connectivity by bypassing the public internet, making it ideal for scenarios like data migration, replication, disaster recovery, or hybrid cloud deployments. There are three main options:
- Dedicated Interconnect: Provides a direct physical connection between your on-premises network and Google’s network, offering high bandwidth (10 Gbps to 200 Gbps) for large-scale data transfers.
- Partner Interconnect: Connects your on-premises network to Google Cloud through a supported service provider, suitable for lower bandwidth needs (50 Mbps to 10 Gbps) or when a direct connection isn’t feasible.
- Cross-Cloud Interconnect: Links your network in another cloud provider (e.g., AWS, Azure) directly to Google’s network.
Key benefits include reduced latency, enhanced security (traffic stays off the public internet), cost savings on egress traffic, and direct access to Google Cloud’s internal IP addresses without needing VPNs or NAT devices. It’s widely used by enterprises in industries like media, healthcare, and global operations for reliable, scalable cloud connectivity.
Prerequisites
- Ensure you have a Megaport account and a physical Port (e.g., 1 Gbps, 10 Gbps, or 100 Gbps) provisioned in a Megaport-enabled data center. If not, order one via the Megaport Portal.
- Confirm you have a Google Cloud project with a Virtual Private Cloud (VPC) network set up. In this design, we use the Aviatrix transit VPC.
- Create at least one Cloud Router per Aviatrix Transit VPC (for Partner Interconnect, the Cloud Router ASN must be 16550); see the gcloud sketch after this list.
- Identify the Google Cloud region where you want to connect (must align with a Megaport Point of Presence).
- Decide on the bandwidth for your Virtual Cross Connect (VXC)—Megaport supports 50 Mbps to 10 Gbps for Partner Interconnect.
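If you prefer the CLI, the Cloud Router can be created with gcloud; the project, network, and region names below are placeholders, and for Partner Interconnect the Cloud Router ASN must be 16550:

gcloud compute routers create avx-transit-cr-1 \
    --project=my-project \
    --network=avx-transit-vpc \
    --region=us-central1 \
    --asn=16550
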
Create a Partner Interconnect Attachment in Google Cloud
Navigate to Network Connectivity > Interconnect from the main menu:

Click Create VLAN Attachments:

Select Partner Interconnect, then click Continue.

Choose I already have a service provider (Megaport in this case).

Configure the Attachment:
- Resiliency (single or redundant VLANs)
- Network (Aviatrix Transit VPC)
- MTU (it must match the MTU of the Aviatrix Transit VPC created earlier)
- VLAN A Cloud Router
- VLAN B Cloud Router

Generate Pairing Key:
- After creating the attachment, Google will provide a pairing key (a UUID-like string, e.g., 123e4567-e89b-12d3-a456-426614174000/us-central1/1).
- Copy this key—you’ll need it in the Megaport Portal.
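Equivalently, the attachment can be created and the pairing key retrieved with gcloud; the attachment, router, and region names below are placeholders:

gcloud compute interconnects attachments partner create avx-attach-ad1 \
    --region=us-central1 \
    --router=avx-transit-cr-1 \
    --edge-availability-domain=availability-domain-1

gcloud compute interconnects attachments describe avx-attach-ad1 \
    --region=us-central1 \
    --format="value(pairingKey)"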

Provision a Virtual Cross Connect (VXC) in Megaport
Go to the Megaport Portal (portal.megaport.com):

In this example, we will create an MVE (Megaport Virtual Edge, a Cisco Catalyst 8000V) and connect it to the Aviatrix Transit Gateways using BGPoIPSec over the Cloud Partner Interconnect.
In the Megaport Portal, click Services -> Create MVE:

Select the region and then the Vendor/Product:

Add additional interfaces (vNICs) to the MVE to align with the design:

Click +Connection and choose Cloud as the connection type.

Select Google Cloud as the Provider:

Paste the pairing key from Google Cloud into the provided field. Megaport will automatically populate the target Google Cloud location based on the key:

Configure VXC Details:

- Name: Give your VXC a name (e.g., gcp-vxc-1).
- Rate Limit: Set the bandwidth to match the capacity chosen in Google Cloud (e.g., 1000 Mbps for 1 Gbps).
- A-End vNIC: The MVE interface (vNIC) to which this connection attaches.
- Preferred A-End VLAN: Specify a VLAN ID if required, or let Megaport auto-assign it.

Deploy the VXC:
- Add the VXC to your cart, proceed to checkout, and deploy it.
- Deployment typically takes a few minutes.


A second VXC is required for the redundant VLAN attachment. The steps are exactly the same.
Activate the Attachment in Google Cloud
Return to Google Cloud Console and check the attachment status:

Activate the Attachment:
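Alternatively, a pre-activation attachment can be activated from the CLI (placeholder names again):

gcloud compute interconnects attachments partner update avx-attach-ad1 \
    --region=us-central1 \
    --admin-enabled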

Configure BGP
Set Up BGP in Google Cloud. In the attachment details, click Edit BGP Session:

Peer ASN: Enter the ASN of your side of the session (a private ASN in the 64512-65534 range; in this design, the Megaport MVE's AS 65501). Google's ASN is always 16550.
Note the BGP IP addresses assigned by Google (e.g., 169.254.1.1/29 for Google, 169.254.1.2/29 for your side).
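The peer ASN can also be set with gcloud; the BGP peer is created automatically on the Cloud Router, so look up its name first with gcloud compute routers describe (router, peer, and region names below are placeholders):

gcloud compute routers update-bgp-peer avx-transit-cr-1 \
    --region=us-central1 \
    --peer-name=auto-ia-bgp-peer-1 \
    --peer-asn=65501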
Configure the Megaport MVE with the information generated above.
Verify Connectivity
- Check BGP status: In the Google Cloud Console, under the attachment details, confirm the BGP session is Established. The same information is available from gcloud, as shown below.
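A minimal check with gcloud (placeholder router and region names):

gcloud compute routers get-status avx-transit-cr-1 \
    --region=us-central1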


This connection is the underlay; the only prefixes exchanged over it should be the Megaport C8000v and Aviatrix Transit Gateway IPs.
The MVE router configuration is under the Cisco C8000v Configuration section.
Configure Aviatrix
From CoPilot, we create an External Connection attached over Private Network:

The connectivity diagram for this solution looks like the following mermaid diagram:
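(Component names in the diagram are illustrative; the ASNs and tunnel endpoints match the configuration later in this document.)

graph LR
  MVE["Megaport MVE C8000v<br/>AS 65501"]
  CR["GCP Cloud Router<br/>AS 16550"]
  GW1["Aviatrix Transit GW<br/>192.168.5.3 / AS 65502"]
  GW2["Aviatrix Transit GW HA<br/>192.168.5.4 / AS 65502"]
  MVE ---|Partner Interconnect VLAN attachments / underlay BGP| CR
  MVE -. Tunnel11 BGPoIPSec overlay .-> GW1
  MVE -. Tunnel12 BGPoIPSec overlay .-> GW2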

The IPSec tunnels on the right really run on top of the Cloud Interconnect, but my mermaid skills are not up to the task :).
Checking the status and prefixes exchanged:

From the Megaport MVE, we see three BGP neighbors: one underlay (Cloud Partner Interconnect VLAN attachment) and two overlay (Aviatrix):
megaport-mve-103456#show ip bgp summary
BGP router identifier 169.254.214.2, local AS number 65501
BGP table version is 32, main routing table version 32
15 network entries using 3720 bytes of memory
23 path entries using 3128 bytes of memory
8 multipath network entries and 16 multipath paths
4/4 BGP path/bestpath attribute entries using 1184 bytes of memory
2 BGP AS-PATH entries using 64 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 8096 total bytes of memory
BGP activity 17/2 prefixes, 27/4 paths, scan interval 60 secs
16 networks peaked at 17:48:39 Apr 2 2025 UTC (4d20h ago)
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
169.254.10.1 4 65502 7220 7953 32 0 0 5d00h 8
169.254.10.5 4 65502 7220 7935 32 0 0 5d00h 8
169.254.214.1 4 16550 24152 26085 32 0 0 5d14h 1
The output below shows a few routes learned from the overlay:
megaport-mve-103456#show ip bgp
BGP table version is 32, local router ID is 169.254.214.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path, L long-lived-stale,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*m 10.0.2.0/24 169.254.10.1 0 65502 64512 ?
*> 169.254.10.5 0 65502 64512 ?
*m 100.64.0.0/21 169.254.10.1 0 65502 64512 ?
*> 169.254.10.5 0 65502 64512 ?
*m 100.64.8.0/21 169.254.10.1 0 65502 64512 ?
*> 169.254.10.5 0 65502 64512 ?
Cisco C8000v Configuration
The default username for the MVE admin is mveadmin.
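! Underlay vNIC facing the Partner Interconnect VLAN attachment (BGP peer 169.254.214.1 on the Cloud Router)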
interface GigabitEthernet2
ip address 169.254.214.2 255.255.255.248
mtu 1460
no shutdown
!
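! Loopback used as the IPSec tunnel source and advertised into BGP so the Aviatrix gateways can reach it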
interface Loopback0
ip address 192.168.255.1 255.255.255.255
no shutdown
!
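! Overlay: BGPoIPSec tunnels to the two Aviatrix Transit Gateways (192.168.5.3 and 192.168.5.4)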
interface Tunnel11
ip address 169.254.10.2 255.255.255.252
ip mtu 1436
ip tcp adjust-mss 1387
tunnel source Loopback0
tunnel destination 192.168.5.3
tunnel mode ipsec ipv4
tunnel protection ipsec profile AVX-IPSEC-5.3
no shutdown
!
interface Tunnel12
ip address 169.254.10.6 255.255.255.252
ip mtu 1436
ip tcp adjust-mss 1387
tunnel source Loopback0
tunnel destination 192.168.5.4
tunnel mode ipsec ipv4
tunnel protection ipsec profile AVX-IPSEC-5.4
no shutdown
!
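! IKEv2/IPSec parameters for the Aviatrix external connections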
crypto ikev2 proposal AVX-PROPOSAL
encryption aes-cbc-256
integrity sha256
group 14
!
crypto ikev2 policy AVX-POLICY
proposal AVX-PROPOSAL
!
crypto ikev2 keyring AVX-KEYRING-5.3
peer AVX-PEER-5.3
address 192.168.5.3
pre-shared-key Avtx2019!
!
!
crypto ikev2 keyring AVX-KEYRING-5.4
peer AVX-PEER-5.4
address 192.168.5.4
pre-shared-key Avtx2019!
!
!
crypto ikev2 profile AVX-PROFILE-5.3
match identity remote address 192.168.5.3 255.255.255.255
identity local address 192.168.255.1
authentication local pre-share
authentication remote pre-share
keyring local AVX-KEYRING-5.3
lifetime 28800
dpd 10 3 periodic
!
crypto ikev2 profile AVX-PROFILE-5.4
match identity remote address 192.168.5.4 255.255.255.255
identity local address 192.168.255.1
authentication local pre-share
authentication remote pre-share
keyring local AVX-KEYRING-5.4
lifetime 28800
dpd 10 3 periodic
!
crypto ipsec transform-set AVX-TS-5.3 esp-aes 256 esp-sha256-hmac
mode tunnel
!
crypto ipsec transform-set AVX-TS-5.4 esp-aes 256 esp-sha256-hmac
mode tunnel
!
crypto ipsec profile AVX-IPSEC-5.3
set security-association lifetime seconds 3600
set transform-set AVX-TS-5.3
set pfs group14
set ikev2-profile AVX-PROFILE-5.3
!
crypto ipsec profile AVX-IPSEC-5.4
set security-association lifetime seconds 3600
set transform-set AVX-TS-5.4
set pfs group14
set ikev2-profile AVX-PROFILE-5.4
!
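! BGP: one underlay session to the Cloud Router (AS 16550) and two overlay sessions to the Aviatrix gateways (AS 65502)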
router bgp 65501
bgp log-neighbor-changes
neighbor 169.254.214.1 remote-as 16550
neighbor 169.254.214.1 update-source GigabitEthernet2
neighbor 169.254.214.1 timers 20 60
neighbor 169.254.10.1 remote-as 65502
neighbor 169.254.10.1 update-source Tunnel11
neighbor 169.254.10.1 timers 60 180
neighbor 169.254.10.5 remote-as 65502
neighbor 169.254.10.5 update-source Tunnel12
neighbor 169.254.10.5 timers 60 180
address-family ipv4
network 172.16.5.0 mask 255.255.255.0
network 192.168.255.1 mask 255.255.255.255
redistribute connected
neighbor 169.254.214.1 activate
neighbor 169.254.10.1 activate
neighbor 169.254.10.1 soft-reconfiguration inbound
neighbor 169.254.10.5 activate
neighbor 169.254.10.5 soft-reconfiguration inbound
maximum-paths 4
exit-address-family
How to generate traffic
configure terminal
ip sla 10
! control disable is needed because the target is a regular TCP service, not an IP SLA responder
tcp-connect 192.168.40.2 443 control disable
! 5 seconds to complete the TCP connect
timeout 5000
exit
ip sla schedule 10 life 10 start-time now
end
show ip sla statistics 10
Reference
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview