
Lab and Configuration Staging
The lab diagram for this exercise is shown below:

- GCP VPC configuration:

- Cloud Router:

- VPN:

- Peer Gateway:

- CSR1000v configuration:
crypto ikev2 keyring KEYRING1
peer 35.242.4.56
address 35.242.4.56
pre-shared-key q6UehAxCDgBm19Cf2Y59BiQoPxGG7AWB
!
peer 35.220.10.92
address 35.220.10.92
pre-shared-key q6UehAxCDgBm19Cf2Y59BiQoPxGG7AWB
!
!
crypto ikev2 profile IKEV2-PROFILE-GCP
match identity remote address 35.242.4.56 255.255.255.255
match identity remote address 35.220.10.92 255.255.255.255
authentication remote pre-share
authentication local pre-share
keyring local KEYRING1
lifetime 28800
dpd 10 10 on-demand
!
interface Tunnel100
ip address 169.254.5.70 255.255.255.252
ip mtu 1400
ip tcp adjust-mss 1360
tunnel source GigabitEthernet1
tunnel mode ipsec ipv4
tunnel destination 35.242.4.56
tunnel protection ipsec profile ipsec-vpn-gcp
ip virtual-reassembly
!
interface Tunnel110
ip address 169.254.138.178 255.255.255.252
ip mtu 1400
ip tcp adjust-mss 1360
tunnel source GigabitEthernet1
tunnel mode ipsec ipv4
tunnel destination 35.220.10.92
tunnel protection ipsec profile ipsec-vpn-gcp
ip virtual-reassembly
!
router bgp 36180
bgp log-neighbor-changes
bgp graceful-restart
neighbor 169.254.5.69 remote-as 64514
neighbor 169.254.138.177 remote-as 64514
!
address-family ipv4
redistribute connected
redistribute static
neighbor 169.254.5.69 activate
neighbor 169.254.138.177 activate
exit-address-family
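Note that the BGP neighbor for each tunnel is the far end of the /30 link (the Cloud Router side), not the CSR's own tunnel address. A small illustrative helper (the function name is ours) using Python's standard ipaddress module derives the neighbor from the local tunnel IP:

```python
import ipaddress

def link_peer(ip_with_mask):
    """Given one side of a /30 tunnel link (e.g. the CSR tunnel IP),
    return the other usable address: the BGP neighbor."""
    iface = ipaddress.ip_interface(ip_with_mask)
    # A /30 has exactly two usable hosts; the neighbor is the one we are not.
    return str(next(h for h in iface.network.hosts() if h != iface.ip))

link_peer('169.254.5.70/30')     # '169.254.5.69'   -> neighbor on Tunnel100
link_peer('169.254.138.178/30')  # '169.254.138.177' -> neighbor on Tunnel110
```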
- CSR1000v routes:
csr1000v-3#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, m - OMP
n - NAT, Ni - NAT inside, No - NAT outside, Nd - NAT DIA
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
H - NHRP, G - NHRP registered, g - NHRP registration summary
o - ODR, P - periodic downloaded static route, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
& - replicated local route overrides by connected
Gateway of last resort is 172.31.0.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 172.31.0.1, GigabitEthernet1
10.0.0.0/24 is subnetted, 4 subnets
B 10.11.0.0 [20/100] via 169.254.138.177, 00:45:55
[20/100] via 169.254.5.69, 00:45:55
B 10.11.1.0 [20/100] via 169.254.138.177, 00:45:55
[20/100] via 169.254.5.69, 00:45:55
B 10.11.2.0 [20/100] via 169.254.138.177, 00:45:55
[20/100] via 169.254.5.69, 00:45:55
B 10.11.3.0 [20/100] via 169.254.138.177, 02:16:05
[20/100] via 169.254.5.69, 02:16:05
169.254.0.0/16 is variably subnetted, 4 subnets, 2 masks
C 169.254.5.68/30 is directly connected, Tunnel100
L 169.254.5.70/32 is directly connected, Tunnel100
C 169.254.138.176/30 is directly connected, Tunnel110
L 169.254.138.178/32 is directly connected, Tunnel110
172.31.0.0/16 is variably subnetted, 3 subnets, 2 masks
C 172.31.0.0/28 is directly connected, GigabitEthernet1
L 172.31.0.13/32 is directly connected, GigabitEthernet1
S 172.31.0.128/28 [1/0] via 172.31.0.1
- VPC001 routes:

- Custom Cloud Route configuration:

- CSR1000v route table:
csr1000v-3#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, m - OMP
n - NAT, Ni - NAT inside, No - NAT outside, Nd - NAT DIA
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
H - NHRP, G - NHRP registered, g - NHRP registration summary
o - ODR, P - periodic downloaded static route, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
& - replicated local route overrides by connected
Gateway of last resort is 172.31.0.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 172.31.0.1, GigabitEthernet1
10.0.0.0/8 is variably subnetted, 6 subnets, 2 masks
B 10.11.0.0/24 [20/100] via 169.254.138.177, 00:06:35
[20/100] via 169.254.5.69, 00:06:35
B 10.11.1.0/24 [20/100] via 169.254.138.177, 00:06:35
[20/100] via 169.254.5.69, 00:06:35
B 10.11.2.0/24 [20/100] via 169.254.138.177, 00:06:35
[20/100] via 169.254.5.69, 00:06:35
B 10.11.3.0/24 [20/100] via 169.254.138.177, 00:06:35
[20/100] via 169.254.5.69, 00:06:35
B 10.12.0.0/22 [20/100] via 169.254.138.177, 00:00:48
[20/100] via 169.254.5.69, 00:00:48
B 10.13.0.0/22 [20/100] via 169.254.138.177, 00:00:48
[20/100] via 169.254.5.69, 00:00:48
169.254.0.0/16 is variably subnetted, 4 subnets, 2 masks
C 169.254.5.68/30 is directly connected, Tunnel100
L 169.254.5.70/32 is directly connected, Tunnel100
C 169.254.138.176/30 is directly connected, Tunnel110
L 169.254.138.178/32 is directly connected, Tunnel110
172.31.0.0/16 is variably subnetted, 3 subnets, 2 masks
C 172.31.0.0/28 is directly connected, GigabitEthernet1
L 172.31.0.13/32 is directly connected, GigabitEthernet1
S 172.31.0.128/28 [1/0] via 172.31.0.1
- VPC001 route table:

- VPC002 route table:

- VPC003 route table:

Staging Aviatrix
- stage controller (7.1) and copilot (3.10)

- stage transit gateways
- stage AVX Transit to CSR1000v IPSec and BGP

- stage spoke gateways using a new subnetwork

- spokes are not attached, except for gcp-vpc003-gw:


Flows of Interest
- flow 1: native google cloud hub to on-prem
- flow 2: native google cloud spoke to on-prem
- flow 3: native google cloud hub to spoke
- flow 4: avx spoke gateway to native cloud hub

Constraints
- Flow 1 and Flow 2 depend on the Cloud VPN IPSec connection
- Flow 3 depends on the vpc peering
Migration Approaches
The Slicer
- this approach leverages the spoke gateway features Customize Spoke VPC Routing Table and Customize Spoke Advertised VPC CIDRs to attract traffic towards the fabric
- Customize Spoke VPC Routing Table: This feature allows you to customize the Spoke VPC/VNet route table entries by specifying a list of comma-separated CIDRs. When a CIDR is inserted in this field, automatic route propagation to the Spoke(s) VPC/VNet is disabled, overriding propagated CIDRs from other spokes, transit gateways and the on-prem network. One use case for this feature is a Spoke VPC/VNet that is customer facing, where your customer is propagating routes that may conflict with your on-prem routes.
- Customize Spoke Advertised VPC CIDRs: This route policy enables you to selectively exclude some VPC/VNet CIDRs from being advertised to on-prem. When this policy is applied to an Aviatrix Spoke Gateway, the list is an "Include list", meaning only the CIDRs in the input fields are advertised to on-prem
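The "slicing" itself is just longest-prefix-match arithmetic: advertising more-specific subnets of a VPC CIDR so they win over the broader routes still in place during migration. A minimal sketch using Python's ipaddress module (the function name is illustrative):

```python
import ipaddress

def slice_cidr(cidr, new_prefix=None):
    """Split a CIDR into more-specific subnets; by default, into two halves."""
    net = ipaddress.ip_network(cidr)
    if new_prefix is None:
        new_prefix = net.prefixlen + 1
    return [str(s) for s in net.subnets(new_prefix=new_prefix)]

slice_cidr('10.12.65.0/24')  # ['10.12.65.0/25', '10.12.65.128/25']
```

The two /25s match what shows up in the CSR route table once a /24 has been sliced.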

Constraints
- The slicer requires gateways in every VPC for proper routing
- The slicer does not support /32s
- The slicer is limited to the number of custom routes supported in a single Google project (600)
- The slicer requires the VPC peering to be torn down
- The slicer, to keep flows symmetric, requires both sides of a connection to be updated
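The custom-route constraint can be checked before slicing. A hedged sketch (the helper name is ours) that reuses the authenticated `service` object shown in the Discovery section to count routes that consume the per-project quota:

```python
def count_custom_routes(service, project_id, quota=600):
    """Count non-subnet, non-default routes and report headroom against
    the assumed per-project custom-route quota (600 as stated above)."""
    items = service.routes().list(project=project_id).execute().get('items', [])
    custom = [r for r in items
              if 'nextHopNetwork' not in r                           # skip subnet routes
              and not r.get('name', '').startswith('default-route-')]  # skip auto default
    return len(custom), quota - len(custom)
```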
Testing

Flow 1 and Flow 2 Migration using The Slicer (Switch Traffic)

Slicing it:

CSR1000v routes:
csr1000v-3#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, m - OMP
n - NAT, Ni - NAT inside, No - NAT outside, Nd - NAT DIA
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
H - NHRP, G - NHRP registered, g - NHRP registration summary
o - ODR, P - periodic downloaded static route, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
& - replicated local route overrides by connected
Gateway of last resort is 172.31.0.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 172.31.0.1, GigabitEthernet1
10.0.0.0/8 is variably subnetted, 15 subnets, 3 masks
B 10.11.0.0/24 [20/0] via 169.254.132.78, 00:46:41
B 10.11.1.0/24 [20/100] via 169.254.138.177, 00:00:36
[20/100] via 169.254.5.69, 00:00:36
B 10.11.2.0/24 [20/100] via 169.254.138.177, 00:00:36
[20/100] via 169.254.5.69, 00:00:36
B 10.11.3.0/24 [20/100] via 169.254.138.177, 00:00:36
[20/100] via 169.254.5.69, 00:00:36
B 10.12.0.0/16 [20/100] via 169.254.138.177, 00:01:59
[20/100] via 169.254.5.69, 00:01:59
B 10.12.64.0/24 [20/0] via 169.254.132.78, 00:46:17
B 10.12.65.0/25 [20/0] via 169.254.132.78, 00:04:14
B 10.12.65.128/25 [20/0] via 169.254.132.78, 00:04:14
B 10.12.66.0/25 [20/0] via 169.254.132.78, 00:04:14
B 10.12.66.128/25 [20/0] via 169.254.132.78, 00:04:14
B 10.13.0.0/16 [20/100] via 169.254.138.177, 00:01:59
[20/100] via 169.254.5.69, 00:01:59
B 10.13.64.0/24 [20/0] via 169.254.132.78, 03:20:13
B 10.13.65.0/24 [20/0] via 169.254.132.78, 00:00:11
B 10.13.66.0/24 [20/0] via 169.254.132.78, 00:00:11
B 10.14.64.0/24 [20/0] via 169.254.132.78, 03:20:13
169.254.0.0/16 is variably subnetted, 6 subnets, 2 masks
C 169.254.5.68/30 is directly connected, Tunnel100
L 169.254.5.70/32 is directly connected, Tunnel100
C 169.254.132.76/30 is directly connected, Tunnel300
L 169.254.132.77/32 is directly connected, Tunnel300
C 169.254.138.176/30 is directly connected, Tunnel110
L 169.254.138.178/32 is directly connected, Tunnel110
172.31.0.0/16 is variably subnetted, 3 subnets, 2 masks
C 172.31.0.0/28 is directly connected, GigabitEthernet1
L 172.31.0.13/32 is directly connected, GigabitEthernet1
S 172.31.0.128/28 [1/0] via 172.31.0.1
If the Cloud Router custom advertisement is performing route summarization, slicing the routes advertised by the AVX spoke gateway is not required. In this case, we should customize the advertisement to allow only the subnetwork where the AVX gateway was deployed.
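That customization can also be driven through the Compute API. A sketch (the wrapper name is ours) that patches the Cloud Router so its BGP custom advertisement contains only the given CIDRs, e.g. the AVX gateway subnetwork:

```python
def advertise_only(service, project_id, region, router_name, cidrs):
    """Restrict a Cloud Router's custom advertisement to the supplied CIDRs."""
    body = {'bgp': {'advertiseMode': 'CUSTOM',
                    'advertisedIpRanges': [{'range': c} for c in cidrs]}}
    # routers.patch performs a partial update, so only the bgp block changes.
    return service.routers().patch(project=project_id, region=region,
                                   router=router_name, body=body).execute()
```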
Flow 3 and Flow 4 Migration using The Slicer (Switch Traffic)
This step requires that all north-south flows were properly migrated, at least between spoke and hub (the peering will be removed). Deleting the VPC peering after a north-south migration causes the RFC 1918 routes to kick in and concludes the migration:
- initial routes

- peer removed

Once the gateways are reconfigured:

As an example, vpc001 route table without the north-south migration complete:

In this case, the spoke gateway requires customization to avoid an asymmetric flow or a black hole. With this approach, VPCs, or even individual subnets, can be migrated one at a time. More details and tests can be found in the following blog:
BGPoLAN
- Peering the AVX Transit with the Google Cloud Router using NCC (Network Connectivity Center) is a migration approach for customers using Cloud Interconnect who need high throughput
- The architecture shown below is one of many possible options, and it does not require creating new Cloud Interconnect LAN interfaces
- There is a cost associated with NCC: https://cloud.google.com/network-connectivity/docs/network-connectivity-center/pricing. Price is based on a flat utilization fee plus data transfer.
- For this scenario to work properly, do not forget to enable Site-to-site data transfer during the NCC spoke creation.

Flow 1, Flow 2, Flow 3 and Flow 4 Migration using BGPoLAN (Switch Traffic)
Flow 1 requires no migration in this scenario where the cloud native hub is repurposed as a connectivity vpc or avx bgpolan vpc.

The Slicer could once again be used to migrate north-south flows at a different time than east-west. One advantage of this approach is that we can migrate all flows from a VPC in a single operation by breaking the VPC peering:

After a few seconds the routes are withdrawn and the custom RFC 1918 routes programmed by the AVX controller are preferred:

The vpc001 route table is dynamically populated with the routes learned from the transit: workloads on vpc001 reaching vpc002 need to traverse the fabric, ingressing at the AVX transit "bgpolan" interface:

vpc003 talks to vpc001 and vpc002 using the RFC 1918 custom routes programmed by the AVX controller. For more information on Google Cloud Network Connectivity Center and Aviatrix, visit the link below:
Custom Routes
Cleanup
Multiple Regions
References
Terraform Examples
VPC
resource "google_compute_network" "vpc_network" {
for_each = var.vpcs
project = var.project
name = each.value.name
auto_create_subnetworks = false
routing_mode = "GLOBAL"
}
Subnetwork
resource "google_compute_subnetwork" "network" {
depends_on = [
google_compute_network.vpc_network
]
for_each = var.networks
project = var.project
name = each.value.name
ip_cidr_range = each.value.ip_cidr_range
region = each.value.region
network = each.value.network
}
Firewall Rule
resource "google_compute_firewall" "vpc001_compute_firewall" {
project = var.project
name = "fw-${google_compute_network.vpc_network["vpc001"].name}"
network = google_compute_network.vpc_network["vpc001"].name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["22", "80", "443", "53"]
}
allow {
protocol = "udp"
ports = ["53"]
}
source_ranges = ["192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8", "35.191.0.0/16", "130.211.0.0/22", "35.199.192.0/19"]
}
Cloud Router
resource "google_compute_router" "google_compute_router1" {
depends_on = [
google_compute_network.vpc_network
]
project = var.project
name = "cr-east-${google_compute_network.vpc_network["vpc001"].name}"
network = google_compute_network.vpc_network["vpc001"].name
bgp {
asn = 64514
advertise_mode = "DEFAULT"
}
region = "us-east1"
}
Cloud VPN Gateway
resource "google_compute_ha_vpn_gateway" "ha_gateway1" {
depends_on = [
google_compute_router.google_compute_router1
]
project = var.project
region = google_compute_router.google_compute_router1.region
name = "vpn-east-${google_compute_network.vpc_network["vpc001"].name}"
network = google_compute_network.vpc_network["vpc001"].name
}
External Gateway
resource "google_compute_external_vpn_gateway" "external_gateway1" {
project = var.project
name = "peer-${replace(var.remote_ip1, ".", "-")}"
redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
interface {
id = 0
ip_address = var.remote_ip1
}
}
VPN Tunnel
resource "google_compute_vpn_tunnel" "tunnel1" {
depends_on = [
google_compute_router.google_compute_router1,
google_compute_ha_vpn_gateway.ha_gateway1
]
project = var.project
name = "tunnel-1-${google_compute_external_vpn_gateway.external_gateway1.name}"
peer_external_gateway = google_compute_external_vpn_gateway.external_gateway1.self_link
peer_external_gateway_interface = "0"
router = google_compute_router.google_compute_router1.self_link
shared_secret = "Avtx2019!"
vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway1.self_link
vpn_gateway_interface = "0"
region = google_compute_router.google_compute_router1.region
}
BGP Router Interface
resource "google_compute_router_interface" "router1_interface1" {
project = var.project
name = "router1-interface1"
router = google_compute_router.google_compute_router1.name
region = google_compute_router.google_compute_router1.region
ip_range = "169.254.0.1/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}
resource "google_compute_router_interface" "router1_interface2" {
project = var.project
name = "router1-interface2"
router = google_compute_router.google_compute_router1.name
region = google_compute_router.google_compute_router1.region
ip_range = "169.254.0.5/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel2.name
}
BGP Router Peering
resource "google_compute_router_peer" "router1_peer1" {
project = var.project
name = "router1-peer1"
router = google_compute_router.google_compute_router1.name
region = google_compute_router.google_compute_router1.region
peer_ip_address = "169.254.0.2"
peer_asn = var.peer_asn
advertised_route_priority = var.advertised_route_priority
interface = google_compute_router_interface.router1_interface1.name
}
resource "google_compute_router_peer" "router1_peer2" {
project = var.project
name = "router1-peer2"
router = google_compute_router.google_compute_router1.name
region = google_compute_router.google_compute_router1.region
peer_ip_address = "169.254.0.6"
peer_asn = var.peer_asn
advertised_route_priority = var.advertised_route_priority
interface = google_compute_router_interface.router1_interface2.name
}
Discovery
- Get the list of VPCs in the project:
vpcs = service.networks().list(project=project_id).execute()
- Get the list of subnetworks in the project:
subnetworks = service.subnetworks().list(project=project_id, region=region).execute()
- Get the list of routes in the project.
routes = service.routes().list(project=project_id).execute()
- Get the list of Cloud Interconnects in the project.
interconnects = service.interconnects().list(project=project_id).execute()
- Get the list of Cloud Interconnect attachments (LAN interfaces) in the project.
lan_interfaces = service.interconnectAttachments().list(project=project_id, region=region).execute()
- Get the list of Cloud VPN Gateways in the project.
vpn_gateways = service.vpnGateways().list(project=project_id, region=region).execute()
- Get the list of Cloud VPN External Gateways in the project.
external_gateways = service.externalVpnGateways().list(project=project_id).execute()
- Get the list of Cloud VPN Tunnels in the project.
vpn_tunnels = service.vpnTunnels().list(project=project_id, region=region).execute()
- Get the list of Cloud Routers in the project.
routers = service.routers().list(project=project_id, region=region).execute()
Where:
service = get_authenticated_service()
and
import google.auth
import googleapiclient.discovery

def get_authenticated_service():
    """Authenticate and create a service object for the Compute Engine API."""
    credentials, project_id = google.auth.default(scopes=['https://www.googleapis.com/auth/compute'])
    # Create a service object with the authenticated credentials
    service = googleapiclient.discovery.build('compute', 'v1', credentials=credentials)
    return service
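The regional list calls above can be swept across every region of the project. A sketch (the helper name is ours) reusing the same `service` object for subnetworks and Cloud Routers; the other regional collections follow the same pattern:

```python
def discover_regional(service, project_id):
    """Collect subnetworks and Cloud Routers from every region in the project."""
    regions = service.regions().list(project=project_id).execute().get('items', [])
    inventory = {}
    for region in (r['name'] for r in regions):
        inventory[region] = {
            'subnetworks': service.subnetworks().list(
                project=project_id, region=region).execute().get('items', []),
            'routers': service.routers().list(
                project=project_id, region=region).execute().get('items', []),
        }
    return inventory
```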
Saving Routes
A route consists of a destination range and a next hop. The destination range specifies the range of IP addresses that the route applies to. The next hop specifies where matching traffic is sent: a gateway, an instance, an IP address, a VPN tunnel, and so on.
Google Cloud supports the following types of routes:
- Default routes: These routes are created automatically when you create a VPC network. They direct traffic to the internet gateway for your VPC network.
- Subnet routes: These routes are created when you create a subnet in a VPC network. They direct traffic destined to the subnet's IP range to the subnet itself.
- Static routes: These routes are created manually. They can be used to direct traffic to specific destinations, such as an internet gateway or a cloud load balancer.
- Dynamic routes: These routes are created automatically by Cloud Router. They are used to direct traffic to destinations that are connected to your VPC network through a Cloud Router.
For backup and restore operations, we need to store the static routes in a file; in case of fallback, we read that file and apply the static routes back.
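The backup and restore step might look like this, a sketch under the same `service`-object assumption as the Discovery section, keeping the fields of the compute v1 Route resource needed to re-create a static route:

```python
import json

# Route resource fields worth preserving for later re-creation.
STATIC_FIELDS = ('name', 'network', 'destRange', 'priority', 'description',
                 'nextHopGateway', 'nextHopIp', 'nextHopInstance',
                 'nextHopVpnTunnel', 'nextHopIlb', 'tags')

def save_static_routes(service, project_id, path):
    """Back up the project's static routes to a JSON file before migration."""
    items = service.routes().list(project=project_id).execute().get('items', [])
    static = [{k: r[k] for k in STATIC_FIELDS if k in r}
              for r in items
              if 'nextHopNetwork' not in r                           # skip subnet routes
              and not r.get('name', '').startswith('default-route-')]  # skip auto default
    with open(path, 'w') as f:
        json.dump(static, f, indent=2)
    return static

def restore_static_routes(service, project_id, path):
    """Fallback: re-create every static route saved by save_static_routes."""
    with open(path) as f:
        saved = json.load(f)
    for body in saved:
        service.routes().insert(project=project_id, body=body).execute()
    return len(saved)
```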
Migration Steps
Switch Traffic
- Delete vpc peering
- Delete static routes (behavior should be handled by an argument)
- If the scenario includes a new Cloud Router and Cloud LAN interfaces for the Interconnect, we also need to remove the VPC prefix(es) from the Cloud Router custom advertised IP ranges.
- Change avx gateway propagation to advertise all prefixes
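The first step can be scripted against the Compute API; `networks().removePeering` is an existing compute v1 method (the wrapper function name is ours):

```python
def remove_vpc_peering(service, project_id, network_name, peering_name):
    """Switch-traffic step 1: delete a VPC peering so the custom
    RFC 1918 routes programmed by the controller take over."""
    return service.networks().removePeering(
        project=project_id, network=network_name,
        body={'name': peering_name}).execute()
```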