In this document I provide a hop-by-hop analysis, with packet captures, of an ingress flow through a centralized architecture, as shown in the diagram below.

This document can be thought of as a companion to Centralized Ingress with Aviatrix on GCP, adding more depth to it.
1- Client — External HTTP(S) Load Balancer
- Source -> 23.124.126.28
- Destination -> 34.111.161.198
curl -vvv http://34.111.161.198
* Trying 34.111.161.198:80...
* Connected to 34.111.161.198 (34.111.161.198) port 80 (#0)
> GET / HTTP/1.1
> Host: 34.111.161.198
> User-Agent: curl/7.77.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.14.1
< Date: Fri, 22 Apr 2022 17:46:30 GMT
< Content-Type: text/html
< Content-Length: 4057
< Last-Modified: Tue, 21 Dec 2021 19:41:19 GMT
< ETag: "61c22ddf-fd9"
< Accept-Ranges: bytes
< Via: 1.1 google
Logs Explorer:
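The corresponding entry can also be pulled from Cloud Logging; a minimal sketch of the query, assuming the standard http_load_balancer resource type and using the client IP from the capture above:
gcloud logging read 'resource.type="http_load_balancer" AND httpRequest.remoteIp="23.124.126.28"' --limit=5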

2- External HTTP(S) Load Balancer — Aviatrix Standalone Gateway
The External HTTP(S) Load Balancer proxies the connection and starts a new one towards the backend, sourced from Google's reserved proxy ranges (see the capture sketch below).
- Source -> 130.211.0.0/22 or 35.191.0.0/16
- Destination -> 172.21.20.8 (load balancer backend)
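A capture on the gateway confirms the proxied connection arrives from those ranges; a minimal sketch, assuming the backend interface is eth0:
sudo tcpdump -ni eth0 'tcp port 80 and (src net 130.211.0.0/22 or src net 35.191.0.0/16)'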

3- Aviatrix Standalone Gateway — Aviatrix Gateways
To keep the traffic symmetric, the standalone gateway source NATs using its own private address. It also DNATs the traffic towards the Internal Load Balancer fronting the application server (see the conceptual sketch below).
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
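Conceptually this is equivalent to the Linux NAT rules below. This is only a sketch of the idea; the actual rules are programmed by the Aviatrix Controller, and the interface and port here are assumptions:
# DNAT: steer ingress traffic hitting the standalone gateway to the internal LB fronting the app
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.21.130.5
# SNAT: source the flow from the gateway's private IP so the return path stays symmetric
sudo iptables -t nat -A POSTROUTING -d 172.21.130.5 -p tcp --dport 80 -j SNAT --to-source 172.21.20.8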

4- Spoke Gateways — Transit Gateways
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
The VPC routing table has the avx-us-east1-ingress gateway as the next hop for the 172.16.0.0/12 prefix:
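A quick way to check this from the CLI (the route itself is created by Aviatrix; project and network flags are omitted here):
gcloud compute routes list --filter="destRange=172.16.0.0/12"
# expect the avx-us-east1-ingress gateway instance as the next hop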

The ingress gateway is attached to the transit gateway:

The next hop for 172.21.130.0/23 is through the tunnels pointing to the transit:

Here the traffic is encapsulated and marked (0x7) for inspection (inspection policy under Firewall Network):
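The encapsulation itself can be observed on the gateway, assuming the standard IPsec tunnels between Aviatrix gateways (ESP, or UDP 4500 with NAT-T); the interface name is an assumption:
sudo tcpdump -ni eth0 'udp port 4500 or ip proto 50'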

5- Transit Gateways — Internal Network Load Balancer
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
Because the packets are marked for inspection, they are forced to use a routing table called "firewall_rtb", which steers all traffic to the NLB facing the firewalls. The "inspect" mark is removed and the packets are de-encapsulated before being sent to the wire.
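With shell access to the transit gateway, this can be seen with standard iproute2 commands (output is illustrative):
ip rule show                      # look for a rule like: fwmark 0x7 lookup firewall_rtb
ip route show table firewall_rtb  # default route pointing at the internal NLB in front of the firewalls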
6- Internal Network Load Balancer — VM-Series
- NLB works as a pass-through device
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
The NLB chooses a healthy firewall from the back end and forwards the packets to it. By default, Aviatrix Transit FireNet uses a 5-tuple hashing algorithm (source IP, source port, destination IP, destination port, and protocol) to load balance the traffic across the different firewalls. However, the user has the option to select a two-tuple (source IP and destination IP) hashing algorithm to map traffic to the available firewalls.
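On GCP this choice presumably surfaces as the session affinity of the backend service behind the internal load balancer; it can be checked from the CLI (the backend service name and region below are assumptions):
gcloud compute backend-services describe avx-firenet-lb --region=us-east1 --format="value(sessionAffinity)"
# NONE      -> 5-tuple hash (default)
# CLIENT_IP -> 2-tuple hash (source IP, destination IP)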

SNAT isn't required because Google Cloud uses symmetric hashing: packets belonging to the same flow hash to the same value in both directions, so the return traffic is sent back through the same firewall.
7- VM-Series — Transit Gateways
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
The firewall inspects the packets, consults its forwarding table, and then sends them back:
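On the VM-Series itself the flow can be confirmed from the PAN-OS CLI; a minimal check, with output depending on the deployment:
show session all filter destination 172.21.130.5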

8- Transit Gateways — Aviatrix Gateways at Spoke
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
Spoke 30 is attached to the transit:

Transit routing table:
- 172.21.130.0/23 routes through the tunnel interfaces

9- Aviatrix Gateways at Spoke — Internal Network Load Balancer
- Source -> 172.21.20.8
- Destination -> 172.21.130.5

The spoke gateway's default route points to the subnet gateway:
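A quick check from the spoke gateway's shell (the exact gateway address depends on the subnet):
ip route show default
# expect: default via <subnet gateway IP> dev eth0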

VPC routing table:
- 172.21.130.0/23 is the VPC network

10- Internal Network Load Balancer — NGINX at Compute Engine Instance
- NLB works as a pass-through device
- Source -> 172.21.20.8
- Destination -> 172.21.130.5
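Because the load balancer is pass-through, the backend receives the packet with the forwarding-rule IP still set as the destination; a capture on the NGINX instance shows it (interface name is an assumption):
sudo tcpdump -ni eth0 'tcp port 80 and src host 172.21.20.8'
# expect: IP 172.21.20.8.<port> > 172.21.130.5.80: Flags [S], ...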

11- NGINX at Compute Engine Instance
- The NGINX access log shows the client IP and the External Load Balancer IP, carried in the X-Forwarded-For header added by the HTTP(S) Load Balancer:
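Assuming the access log format includes $http_x_forwarded_for, the entry for this request can be located with:
sudo grep 23.124.126.28 /var/log/nginx/access.log
# expect remote_addr = 172.21.20.8 (the SNAT done by the standalone gateway) and
# X-Forwarded-For = "23.124.126.28, 34.111.161.198" (client IP, external LB IP)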

References
http://rtrentinsworld.com/2022/05/28/centralized-ingress-with-aviatrix-on-gcp/