In this document I discuss the design options for ingress traffic to an Aviatrix-managed GCP cloud network.
Design Requirements
- Provide secure and centralized ingress, egress, and east-west traffic for applications running on Google Cloud
- Integrate with Palo Alto Networks Next-Generation Firewalls for advanced security services
- Provide application owners the capability to deploy their own objects, such as, but not restricted to, L4-L7 load balancers
- The solution should be highly available and scalable
- Provide visibility and analytics
Proposed Design
The proposed Aviatrix design is shown in the diagram below:

- Applications are deployed into their own spoke VPCs.
- An HTTP(S) or TCP LB is created for a dedicated ingress VPC. If needed, multiple LBs can be deployed with multiple forwarding rules. Each LB is capable of handling up to 1 million queries per second.
- Users gain access to applications exposed by external L4-L7 LB(s), where advanced features can also be enabled (Cloud CDN, Cloud Armor, SSL Policies, Cloud IDS). The source IP can be added to the X-Forwarded-For header on HTTP(S) load balancers since they work as proxies.
- The LB sends the traffic towards the backend servers through a pair of standalone gateways performing SNAT/DNAT (ingress gateways). This is required so the return traffic follows the same path back. Spoke gateways cannot be used to perform SNAT in this design.
- The NATed traffic is sent to the spoke gateways sitting in the same VPC.
- The spoke gateways forward the traffic to the transit hub, a.k.a. FireNet, for advanced inspection where an internal LB front-ends a set of PAN NGFW VM Series. The ILB provides high-availability and scalability for the firewalls.
- Once the traffic is cleared, the firewall that handled the inspection sends the flow back to the transit gateway towards the VPC where the backend service is running.
- The spoke gateways running in the backend VPC are responsible for forwarding the traffic to the VM.
- Network Segmentation can be used to provide network isolation on the Aviatrix transit through security domains and connection policies for different business units and/or applications.
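To illustrate how the spoke side of this design can be codified, below is a minimal Terraform sketch using the Aviatrix provider. The account name, VPC IDs, zone, and gateway names are illustrative assumptions, not the exact values used in this lab.

```hcl
# Minimal sketch: onboard an application spoke VPC with an Aviatrix spoke
# gateway and attach it to the existing FireNet transit (placeholder values).
resource "aviatrix_spoke_gateway" "app_spoke" {
  cloud_type   = 4                      # GCP
  account_name = "gcp-account"          # assumed Aviatrix access account
  gw_name      = "spoke30-gw"
  vpc_id       = "spoke30-vpc~-~my-project"
  vpc_reg      = "us-east1-b"           # GCP zone
  gw_size      = "n1-standard-1"
  subnet       = "172.21.30.0/24"
}

resource "aviatrix_spoke_transit_attachment" "app_to_transit" {
  spoke_gw_name   = aviatrix_spoke_gateway.app_spoke.gw_name
  transit_gw_name = "transit-firenet"   # assumed name of the existing transit gateway
}
```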
Constraints
- The system-generated routes cannot be overridden, so traffic between workloads in the same VPC cannot be inspected.
- Increased latency and data transfer costs compared to a design with distributed LBs and firewalls deployed with the application inside the same VPC.
- GCP external network load balancers work as pass-through devices, requiring DNAT/SNAT.
- SNAT is required in this solution because load-balanced packets arrive at the backend VMs with the source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged. Without SNAT, responses from the backend VMs would go directly to the clients (direct server return) and the traffic would be dropped due to asymmetric routing.
Centralized Ingress vs. Distributed Ingress
In a typical on-premises environment it is often a lengthy process to provision a new service. Cloud changes this, as it is possible for teams to have more autonomy over their cloud environment. It’s important to put guard rails in place to ensure your organization can move fast but securely.

Centralizing ingress (on the left of the diagram above) can be a great way to separate out roles. This pattern gives centralized control over all the traffic flowing into a service. Service insertion becomes simplified and it’s still possible to integrate with cloud-native services. There are also economies of scale to centralizing a set of services to use a shared security stack.
The distributed approach is depicted on the right of the diagram above: each network construct has its own set of objects.
Centralized
- single point of entry
- simple management
- reduced time to deployment
- security guard rails for roles/personas
- consistent policy and management
- extra hop inserted into the flows, adding latency and possibly increasing costs due to ingress/egress traffic crossing VPCs
Distributed
- multiple points of entry
- complex management
- complex to implement security guard rails
- increased costs associated with deploying and adopting multiple services, their configuration, and management
Aviatrix Advantage
Using the Aviatrix transit brings several benefits to a centralized ingress architecture. From an infrastructure perspective, FireNet takes the heavy lifting out of deploying and integrating next-generation firewalls. With FireNet we can deploy and integrate quickly through an intuitive GUI:
Or codify it in Terraform:
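As an example, a hedged sketch of what that Terraform could look like with the Aviatrix provider is shown below. It is deliberately simplified: the firewall image string, version, sizes, and VPC/subnet identifiers are placeholders (GCP FireNet uses separate management, LAN, and egress VPCs, so the exact identifier formats should come from the provider documentation), and this is not the exact code behind the screenshot.

```hcl
# Simplified sketch: deploy a PAN VM-Series into the FireNet transit VPC,
# associate it with the transit FireNet gateway, and turn on inspection.
# Image/version strings, sizes, and VPC/subnet identifiers are placeholders.
resource "aviatrix_firewall_instance" "pan_fw1" {
  vpc_id                 = "transit-vpc~-~my-project"
  firenet_gw_name        = "transit-firenet"
  firewall_name          = "pan-vmseries-1"
  firewall_image         = "Palo Alto Networks VM-Series Next-Generation Firewall (BUNDLE1)"
  firewall_image_version = "10.1.3"
  firewall_size          = "n1-standard-4"
  zone                   = "us-east1-b"
  management_vpc_id      = "firenet-mgmt-vpc~-~my-project"   # GCP FireNet uses separate VPCs
  management_subnet      = "172.22.0.0/28"
  egress_vpc_id          = "firenet-egress-vpc~-~my-project"
  egress_subnet          = "172.22.0.16/28"
}

resource "aviatrix_firewall_instance_association" "pan_fw1_assoc" {
  vpc_id               = aviatrix_firewall_instance.pan_fw1.vpc_id
  firenet_gw_name      = aviatrix_firewall_instance.pan_fw1.firenet_gw_name
  instance_id          = aviatrix_firewall_instance.pan_fw1.instance_id
  firewall_name        = aviatrix_firewall_instance.pan_fw1.firewall_name
  lan_interface        = aviatrix_firewall_instance.pan_fw1.lan_interface
  management_interface = aviatrix_firewall_instance.pan_fw1.management_interface
  egress_interface     = aviatrix_firewall_instance.pan_fw1.egress_interface
  attached             = true
}

resource "aviatrix_firenet" "transit_firenet" {
  vpc_id             = aviatrix_firewall_instance.pan_fw1.vpc_id
  inspection_enabled = true
  egress_enabled     = false
}
```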
Since all the traffic flows through Aviatrix gateways, there is full visibility and control over it. NetFlow data is exported to CoPilot, which allows for visualization and traffic analysis, both key to day-2 operations and troubleshooting.
Flight Path, a CoPilot platform tool, can be used to troubleshoot traffic flows. Flight Path looks at NACLs, security groups, and route tables to quickly identify any anomalies in traffic patterns.
Check my previous post for more CoPilot capabilities at:
Taking the left seat of your cloud deployment with Aviatrix CoPilot
In this document I’m going to explore a dedicated ingress VPC using GCP native load balancers to expose services to the internet.
GCP Load Balancers Primer
A load balancer distributes flows that arrive on the load balancer’s frontend to target pool instances in the backend, allowing you to scale applications and provide high availability for services.
Load balancers come in two types: internal and external. The difference between the two is the source of the traffic: the internal load balancer only supports traffic originating from within the VPC network or arriving over a VPN terminating within GCP, while the external load balancer is reachable from any device on the internet.


External load balancers are not members of VPC networks. Instead, like public IP addresses attached directly to an instance, GCP translates inbound traffic destined to the public frontend IP address directly to the private IP address of the instance. Because the public network load balancer is not attached to a specific VPC network, any instance in the project and region can be part of the target pool, regardless of the VPC network to which the backend instance is attached.
The backend target pools of the network load balancer are composed of instances within the GCP region. After the network load balancer picks a backend instance from the target pool, it sends the traffic directly to that instance. The network load balancer does not translate the destination IP address to the IP address of the backend instance; it sends the traffic with the original destination address. The backend instance can use both the IP and the port to differentiate between applications, allowing port reuse. This configuration requires that the backend instances listen for the frontend IP address of the network load balancer in addition to their own IP addresses.
The following diagram summarizes the available Cloud Load Balancing products:

We are focusing on two scenarios in this document:
- Using a cloud native HTTP(S) load balancer (Layer 7)
- Using a cloud native TCP/UDP/SSL load balancer (Layer 4)
External HTTP(S) Load Balancing (L7)
Global external HTTP(S) load balancer (classic): This is the classic external HTTP(S) load balancer that is global in Premium Tier but can be configured to be regional in Standard Tier. This load balancer is implemented on Google Front Ends (GFEs). GFEs are distributed globally and operate together using Google’s global network and control plane.

The following resources are required for an HTTP(S) Load Balancing deployment:
- An external forwarding rule specifies an external IP address, port, and target HTTP(S) proxy. Clients use the IP address and port to connect to the load balancer.
- A target HTTP(S) proxy receives a request from the client. The HTTP(S) proxy evaluates the request by using the URL map to make traffic routing decisions. The proxy can also authenticate communications by using SSL certificates.
- For HTTPS load balancing, the target HTTPS proxy uses SSL certificates to prove its identity to clients.
- The HTTP(S) proxy uses a URL map to make a routing determination based on HTTP attributes (such as the request path, cookies, or headers). Based on the routing decision, the proxy forwards client requests to specific backend services.
- A backend service distributes requests to healthy backends.
- A health check periodically monitors the readiness of your backends.
- The HTTP load balancer initiates connections to the backend instances sourced from 130.211.0.0/22 and 35.191.0.0/16.
- The original source IP address of the web client is carried in the X-Forwarded-For (XFF) HTTP header field. The VM-Series firewall logs the XFF information, in addition to other session data, to retain the original source IP address for each session.
- To get the traffic to a private instance, a NAT policy rule on the VM-Series firewall must translate the destination address from the firewall’s public interface IP address to the backend instance IP address for traffic sourced from the HTTP load balancer. The private destination might be a server instance or the frontend IP of an internal load balancer. To ensure traffic symmetry, the VM-Series firewall must also translate the source IP address to the IP address of its private interface.
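To make the chain of objects above concrete, here is a rough Terraform sketch of a classic external HTTP(S) load balancer pointing at an instance group that contains the ingress gateway. All names, ports, and the instance-group variable are illustrative assumptions, not the exact resources created later through the console wizard.

```hcl
# Sketch of the classic external HTTP(S) LB object chain (illustrative values).
variable "ingress_instance_group" {
  description = "Self link of the unmanaged instance group containing the ingress gateway"
  type        = string
}

resource "google_compute_health_check" "ingress_hc" {
  name = "ingress-hc"
  tcp_health_check {
    port = 22
  }
}

resource "google_compute_backend_service" "ingress_backend" {
  name          = "ingress-backend"
  protocol      = "HTTP"
  port_name     = "http"
  health_checks = [google_compute_health_check.ingress_hc.id]
  backend {
    group = var.ingress_instance_group
  }
}

resource "google_compute_url_map" "ingress_url_map" {
  name            = "ingress-url-map"
  default_service = google_compute_backend_service.ingress_backend.id
}

resource "google_compute_target_http_proxy" "ingress_proxy" {
  name    = "ingress-http-proxy"
  url_map = google_compute_url_map.ingress_url_map.id
}

# The external forwarding rule: the IP address and port that clients connect to.
resource "google_compute_global_forwarding_rule" "ingress_fr" {
  name       = "ingress-http-fr"
  target     = google_compute_target_http_proxy.ingress_proxy.id
  port_range = "80"
}
```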
External TCP/UDP Network Load Balancing (L4)
External TCP/UDP Network Load Balancing has the following characteristics:
- Network Load Balancing is implemented by using Andromeda virtual networking and Google Maglev.
- Load-balanced packets are received by backend VMs with the packet’s source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged.
- Load-balanced connections are terminated by the backend VMs.
- Responses from the backend VMs go directly to the clients (green line in the diagram below).
- GCP sends HTTP health checks from the IP ranges 209.85.152.0/22, 209.85.204.0/22, and 35.191.0.0/16.
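As an illustration, a backend-service-based external (pass-through) TCP network load balancer can be sketched in Terraform roughly as follows; the region, names, ports, and the instance-group variable are assumptions rather than the exact lab configuration.

```hcl
# Sketch of a backend-service-based external (pass-through) TCP network LB
# in front of the ingress gateway's instance group (illustrative values).
variable "ingress_instance_group" {
  # Declared again so this sketch stands on its own.
  description = "Self link of the unmanaged instance group containing the ingress gateway"
  type        = string
}

resource "google_compute_region_health_check" "nlb_hc" {
  name   = "nlb-hc"
  region = "us-east1"
  tcp_health_check {
    port = 22
  }
}

resource "google_compute_region_backend_service" "nlb_backend" {
  name                  = "nlb-backend"
  region                = "us-east1"
  protocol              = "TCP"
  load_balancing_scheme = "EXTERNAL"
  health_checks         = [google_compute_region_health_check.nlb_hc.id]
  backend {
    group = var.ingress_instance_group
  }
}

# Each forwarding rule can expose one, multiple, or all ports.
resource "google_compute_forwarding_rule" "nlb_fr" {
  name                  = "nlb-fr"
  region                = "us-east1"
  load_balancing_scheme = "EXTERNAL"
  ip_protocol           = "TCP"
  ports                 = ["80"]
  backend_service       = google_compute_region_backend_service.nlb_backend.id
}
```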

GCP Load Balancer Limits
- Global External Managed Forwarding rules: 375
- Regional External Managed Forwarding Rules: 35
- External network load balancer forwarding rules: 375
https://cloud.google.com/load-balancing/docs/quotas
FireNet Deployment
The FireNet deployment on GCP is covered in detail in the following post:
Deploying an Aviatrix FireNet on GCP with Fortinet FortiGate
Ingress Gateways
Gateway Sizing
Aviatrix gateways are deployed in multiple clouds on cloud-native instances/VMs. Every public cloud service provider (CSP) offers different instance/VM sizes and types. Aviatrix provides recommended gateway/instance sizes per CSP:
https://community.aviatrix.com/t/35hpa6g/aviatrix-multi-cloud-gateway-sizing#aws
Standalone Gateways Deployment
The ingress gateway can be deployed from the Controller GUI:

DNAT Configuration
Once the ingress gateway is deployed, we have to configure a DNAT rule for each LB/backend:
The DNAT configuration fields are listed below, and a Terraform sketch follows the list.
- DST CIDR: the public IP of the NLB (<IP>/32) for NLBs; empty for external HTTP(S) LBs
- INTERFACE: eth0
- DNAT IPS: The IP of the backend service
- DST PORT
- DNAT PORT
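The same rule can also be expressed with the Aviatrix provider’s aviatrix_gateway_dnat resource. The gateway name, CIDR, IPs, and ports below are placeholders that simply map to the fields above:

```hcl
# DNAT policy on the ingress gateway: translate traffic arriving for the NLB
# frontend IP to the backend service IP (placeholder addresses and ports).
resource "aviatrix_gateway_dnat" "ingress_dnat" {
  gw_name = "ingress-gw"
  dnat_policy {
    dst_cidr  = "34.123.45.67/32"   # public IP of the NLB (left empty for external HTTP(S) LBs)
    protocol  = "tcp"
    dst_port  = "80"
    interface = "eth0"
    dnat_ips  = "172.21.30.10"      # IP of the backend service
    dnat_port = "80"
  }
}
```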
SNAT Configuration
SNAT was enabled during the standalone ingress gateway creation. When Single IP is selected, the gateway’s primary IP address is used as the source address for the source NAT function. This is the simplest mode and the default when you enable NAT at gateway launch time.
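For reference, the standalone ingress gateway with single-IP SNAT can be sketched with the Aviatrix Terraform provider as follows; every value below is a placeholder rather than the actual controller configuration:

```hcl
# Standalone ingress gateway in the ingress VPC with single-IP source NAT
# enabled at launch time (placeholder names and values).
resource "aviatrix_gateway" "ingress_gw" {
  cloud_type     = 4                          # GCP
  account_name   = "gcp-account"              # assumed Aviatrix access account
  gw_name        = "ingress-gw"
  vpc_id         = "ingress-vpc~-~my-project"
  vpc_reg        = "us-east1-b"               # GCP zone
  gw_size        = "n1-standard-1"
  subnet         = "172.21.20.0/24"
  single_ip_snat = true                       # use the gateway's primary IP for SNAT
}
```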
HTTP(S) Applications Load Balancer Configuration
Instance Groups
I’m going to work with an unmanaged instance group with a single instance to keep the number of running instances in my lab low:
- The ingress gateway is added as a member of this group
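A rough Terraform sketch of that unmanaged instance group is below; the project, zone, and instance names are assumptions:

```hcl
# Unmanaged instance group containing only the ingress gateway VM
# (project, zone, and instance names are placeholders).
resource "google_compute_instance_group" "ingress_gw_group" {
  name    = "ingress-gw-group"
  zone    = "us-east1-b"
  network = "projects/my-project/global/networks/ingress-vpc"

  instances = [
    "projects/my-project/zones/us-east1-b/instances/ingress-gw",
  ]

  named_port {
    name = "http"
    port = 80
  }
}
```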
Load Balancer
I’m going to use the Create a Load Balancer wizard to configure a classic HTTP(S) LB:

In the back end configuration we pick the previously created instance group, protocol, and named port:

We can attach advanced services such as Cloud CDN to the backend:

For the HC, I’m doing a TCP check to port 22. Cloud Armor policies can also be attached to a backend under Security:

Cloud Armor configuration is covered in the following post:
Deploying Aviatrix Controller and CoPilot on GCP behind Cloud Armor
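As a rough sketch of how those backend options map to Terraform (the policy, health check, and backend names are illustrative), Cloud CDN and a Cloud Armor policy attach directly to the backend service, and the TCP port-22 health check is defined separately:

```hcl
# Backend service with Cloud CDN enabled, a TCP port-22 health check, and a
# Cloud Armor security policy attached (illustrative names).
resource "google_compute_security_policy" "edge_policy" {
  name = "ingress-armor-policy"   # rules omitted in this sketch
}

resource "google_compute_health_check" "tcp22_hc" {
  name = "tcp-22-hc"
  tcp_health_check {
    port = 22
  }
}

resource "google_compute_backend_service" "ingress_backend_cdn" {
  name            = "ingress-backend-cdn"
  protocol        = "HTTP"
  port_name       = "http"
  enable_cdn      = true
  security_policy = google_compute_security_policy.edge_policy.id
  health_checks   = [google_compute_health_check.tcp22_hc.id]
  backend {
    group = google_compute_instance_group.ingress_gw_group.id  # the unmanaged group sketched earlier
  }
}
```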
Host and path rules:

The next step is to configure the front end:

I’m using HTTP and the Standard network service tier, but Premium and HTTPS are also supported options.
Ingress Gateway DNAT Configuration
Once the ELB is created we have the information required to set up the DNAT:

Testing
Pointing a web browser to the ELB IP address should get us to the nginx default page running on the spoke30 Linux instance:

If you like the CLI (I do too, but I also like a nice and colorful screenshot), you can use a tool like curl to check the deployment.
GCP L7 LBs store the client IP in the X-Forwarded-For header. We can check whether that is the case in our deployment by looking through the nginx log file:
172.21.20.8 — — [15/Apr/2022:16:36:35 +0000] “GET / HTTP/1.1” 200 4057 “-” “curl/7.77.0” “23.1X4.X26.XX, 34.111.161.198”
172.21.20.8 is the standalone gateway doing SNAT/DNAT, while 23.1X4.X26.XX is my Mac’s IP address and 34.111.161.198 is an IP from the GCP reserved range for external HTTP(S) LBs.
TCP/UDP Load Balancer Configuration
Instance Group
I’m going to work with an unmanaged instance group:

Load Balancer
I’m going to use the Create a Load Balancer wizard to configure a TCP/UDP NLB:

Back End Configuration:

Health Check:
- I’m checking port 22

Front end:
- Each forwarding rule can have one, multiple or all ports

Ingress Gateway DNAT Configuration
Once the NLB is created we have the information required to set up the DNAT:

Testing
Pointing a web browser to the NLB IP address should get us to the nginx default page running on the spoke40 Linux instance:

If you like the CLI (I do too, but I also like a nice and colorful screenshot), you can use a tool like curl to check the deployment.
Troubleshooting
CoPilot AppIQ

The report from AppIQ is shown below:

AppIQ checks firewall rules, gateway route tables, and VPC route tables.
CoPilot FlowIQ
FlowIQ can be used to help visualize the flows of interest:

CoPilot Topology Diag Tools
Topology can be used to graphically visualize the deployment. From any gateway, the diag tools can be launched:

From the Diag tool we can use network tools like ping, traceroute, tracepath, and tracelog, test connectivity, check active sessions, check interface stats, and run packet captures:

The results can be seen in the GUI or downloaded as a .cap file:

Firewall Rules
- TCP port 80 in my case needs to be opened for inbound traffic to the HTTP(S) LBs and NLBs.
- For the health check probes for HTTP(S) LBs, the firewall rule must allow the following source ranges: 130.211.0.0/22 and 35.191.0.0/16.
- For the health check probes from TCP/UDP NLBs, the firewall rule must allow the following source ranges: 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22.
- You might want to include your own IP address to troubleshoot any issues with the VM and/or the application running on it.
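A Terraform sketch of those firewall rules follows; the network name and target tag are placeholders:

```hcl
# Open inbound HTTP from the internet and allow Google's health check probe
# ranges to reach the ingress gateway (network name and tag are placeholders).
resource "google_compute_firewall" "allow_ingress_http" {
  name          = "allow-ingress-http"
  network       = "ingress-vpc"
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["avx-ingress-gw"]

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
}

resource "google_compute_firewall" "allow_lb_health_checks" {
  name    = "allow-lb-health-checks"
  network = "ingress-vpc"
  source_ranges = [
    "130.211.0.0/22",  # HTTP(S) LB health checks
    "35.191.0.0/16",   # HTTP(S) LB and NLB health checks
    "209.85.152.0/22", # NLB health checks
    "209.85.204.0/22", # NLB health checks
  ]
  target_tags = ["avx-ingress-gw"]

  allow {
    protocol = "tcp"
    ports    = ["22", "80"]
  }
}
```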