A Day in the Life of an Aviatrix FireNet Ingress Packet

In this document I provide a hop-by-hop analysis, including packet captures, of an ingress flow through a centralized architecture, as shown in the diagram below.

This document can be thought of as a companion to Centralized Ingress with Aviatrix on GCP, adding more depth to it.

1- Client — External HTTP(S) Load Balancer

  • source is:
  • destination is:

curl -vvv
* Trying…
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.77.0
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.14.1
< Date: Fri, 22 Apr 2022 17:46:30 GMT
< Content-Type: text/html
< Content-Length: 4057
< Last-Modified: Tue, 21 Dec 2021 19:41:19 GMT
< ETag: "61c22ddf-fd9"
< Accept-Ranges: bytes
< Via: 1.1 google

Logs Explorer:

2- External HTTP(S) Load Balancer — Aviatrix Standalone Gateway

External HTTP(S) Load Balancer proxies the connection and starts a new one using an IP from its reserved pool.

  • source is: or
  • destination is: (load balancer backend)

3- Aviatrix Standalone Gateway — Aviatrix Gateways

To keep the traffic symmetric, we source NAT (SNAT) using the private address of the standalone gateway. We also DNAT, pointing the traffic toward the Internal Load Balancer fronting the application server.

  • Source ->
  • Destination ->
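The SNAT/DNAT step above can be sketched as follows. This is a minimal illustration, not Aviatrix's implementation, and all IP addresses are hypothetical placeholders (the real values are elided in this document):

```python
from dataclasses import dataclass, replace

# Hypothetical addresses for illustration only; the document elides the real ones.
STANDALONE_GW_PRIVATE_IP = "10.0.1.10"   # assumed private IP of the standalone gateway
INTERNAL_LB_FRONTEND_IP = "10.30.0.100"  # assumed frontend IP of the internal LB

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str

def ingress_nat(pkt: Packet) -> Packet:
    """SNAT to the standalone gateway's private IP (keeps return traffic
    symmetric) and DNAT toward the internal LB fronting the application."""
    return replace(pkt, src=STANDALONE_GW_PRIVATE_IP, dst=INTERNAL_LB_FRONTEND_IP)

# Packet as it arrives from the external HTTP(S) load balancer (source IP assumed)
pkt = Packet(src="203.0.113.7", dst=STANDALONE_GW_PRIVATE_IP)
print(ingress_nat(pkt))
```

Because the source is rewritten to the standalone gateway's own private address, the return traffic from the application has no choice but to come back through the same gateway, which is what keeps the flow symmetric.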

4- Spoke Gateways — Transit Gateways

  • Source ->
  • Destination ->

The VPC routing table has the avx-us-east1-ingress gateway as the next hop for the prefix:

The Ingress gw is attached to the transit gateway:

The next hop for is through the tunnels pointing to the transit:

Here the traffic is encapsulated and marked (0x7) for inspection (inspection policy under Firewall Network):

5- Transit Gateways — Internal Network Load Balancer

  • Source ->
  • Destination ->

Because packets are marked for inspection, they are forced through a routing table called “firewall_rtb”, which sends all traffic to the NLB in front of the firewalls. The “inspect” mark is removed and the packets are de-encapsulated before being put on the “wire”.
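The mark-driven table selection can be sketched like this. It is a simplified model of policy routing, not Aviatrix's actual code, and the table contents and next-hop names are assumptions:

```python
# Minimal sketch of mark-based policy routing: packets carrying the inspect
# mark are looked up in "firewall_rtb", which points everything at the NLB
# facing the firewalls; unmarked packets use the main table.

INSPECT_MARK = 0x7  # mark applied at the encapsulation step (per the document)

route_tables = {
    # table name -> {prefix: next hop}; next-hop names are hypothetical
    "main": {"0.0.0.0/0": "subnet-gateway"},
    "firewall_rtb": {"0.0.0.0/0": "firewall-nlb"},
}

def select_next_hop(mark: int) -> str:
    """Choose the routing table based on the packet mark, then resolve
    the default route in that table."""
    table = "firewall_rtb" if mark == INSPECT_MARK else "main"
    return route_tables[table]["0.0.0.0/0"]

print(select_next_hop(0x7))  # marked for inspection
print(select_next_hop(0x0))  # ordinary traffic
```

On Linux this is what `ip rule add fwmark 0x7 table firewall_rtb` achieves: the mark selects the table, and the table forces the next hop.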

6- Internal Network Load Balancer — VM-Series

  • NLB works as a pass-through device
  • Source ->
  • Destination ->

The NLB chooses a healthy firewall from the backend and forwards the packets to it. By default, Aviatrix Transit FireNet uses a 5-tuple hashing algorithm (source IP, source port, destination IP, destination port, and protocol) to load-balance traffic across the firewalls. However, the user has the option to select a 2-tuple (source IP and destination IP) hashing algorithm instead.

SNAT isn’t required because Google Cloud uses symmetric hashing: packets belonging to the same flow produce the same hash in both directions, so the return traffic is delivered to the same firewall that saw the forward traffic.
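The symmetry property can be demonstrated with a toy hash. This is a sketch of the idea, not Google Cloud's actual algorithm: sorting the two (IP, port) endpoints before hashing makes the result identical for both directions of a flow, so both directions land on the same backend. Firewall names and the flow tuple are hypothetical:

```python
import hashlib

FIREWALLS = ["vm-series-1", "vm-series-2"]  # hypothetical backend firewalls

def symmetric_5tuple_hash(src_ip, src_port, dst_ip, dst_port, proto):
    """Order-independent 5-tuple hash: sorting the endpoints means
    (A->B) and (B->A) produce the same digest."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16)

def pick_firewall(src_ip, src_port, dst_ip, dst_port, proto):
    h = symmetric_5tuple_hash(src_ip, src_port, dst_ip, dst_port, proto)
    return FIREWALLS[h % len(FIREWALLS)]

fwd = pick_firewall("203.0.113.7", 51515, "10.30.0.100", 80, "tcp")
rev = pick_firewall("10.30.0.100", 80, "203.0.113.7", 51515, "tcp")
assert fwd == rev  # both directions of the flow map to the same firewall
```

A 2-tuple variant would simply drop the ports from the key, trading finer-grained distribution for stickiness per IP pair.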

7- VM-Series — Transit Gateways

  • Source ->
  • Destination ->

The firewall inspects the packets, consults its forwarding table, and then sends them back:

8- Transit Gateways — Aviatrix Gateways at Spoke

  • Source ->
  • Destination ->

Spoke 30 is attached to the transit:

Transit routing table:

  • routes through tunnel interfaces

9- Aviatrix Gateways at Spoke — Internal Network Load Balancer

  • Source ->
  • Destination ->

The Spoke GW's default route points to the subnet gateway:

VPC routing table:

  • is the VPC network

10- Internal Network Load Balancer — NGINX at Compute Engine Instance

  • NLB works as a pass-through device
  • Source ->
  • Destination ->

11- NGINX at Compute Engine Instance

  • The NGINX log shows the client IP and the external load balancer IP:
