Deploying an Aviatrix FireNet on GCP with Fortinet FortiGate

Aviatrix Transit FireNet allows the deployment of third-party firewalls onto the Aviatrix transit architecture.

Transit FireNet works the same way as the Firewall Network, where traffic in and out of the specified spoke is forwarded to the firewall instances for inspection or policy application.

FireNet Design

GCP virtual private cloud (VPC) is a logically segmented global network within GCP that allows connected resources to communicate with each other. VPC networks contain one or more private IP subnets, each assigned to a GCP region. A subnet lives in only one region, but all subnets within a VPC network are reachable by the connected resources, regardless of their location within GCP or their project membership.

Communication happens at Layer 3 because GCP forwards traffic using host routes, even when the instances are part of a larger subnet such as a /24. The subnet mask implies that all instances within the VPC network must communicate via a router; however, intra-VPC traffic never actually transits the router. Instead, the router responds with a proxy ARP reply that tells the device how to communicate directly with the destination device.

By default, all instances connected to the VPC network communicate directly, even when they are part of different subnets. When you add a subnet, GCP automatically generates routes that facilitate communication within the VPC network as well as to the internet. These routes are known as system-generated routes.
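If you want to see these system-generated routes for yourself, they can be listed with gcloud; the network name below is a placeholder:

```shell
# List the routes GCP generated for a given VPC network.
# "my-vpc" is a placeholder; system-generated entries are
# named "default-route-<hash>".
gcloud compute routes list \
  --filter="network=my-vpc" \
  --format="table(name,destRange,nextHopGateway)"
```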

The effect of GCP’s traffic forwarding behavior is that a firewall can’t be inserted in the middle of intra-VPC traffic. For that reason, the FireNet design uses three VPCs: the egress VPC, the LAN VPC, and the transit FireNet VPC.

Load balancers are used to provide high availability and scalability to the design. GCP offers several load balancers that distribute traffic to a set of instances based on the type of traffic, but in this design we will focus on the network load balancer.

The load balancer comes in two types: internal and network. The difference between the two types is the source of the traffic. The internal load balancer only supports traffic originating from within the VPC network or coming across a VPN terminating within GCP. The network load balancer is reachable from any device on the internet.

Network load balancers are not members of VPC networks. Instead, like public IP addresses attached directly to an instance, GCP translates inbound traffic to the public frontend IP address directly to the private IP address of the instance. Because the public network load balancer is not attached to a specific VPC network, any instance in the project that is in the region can be part of the target pool (regardless of the VPC network) to which the backend instance is attached.

Traffic from the load balancer to your instances has a source IP address in GCP's fixed health-check ranges (130.211.0.0/22 and 35.191.0.0/16). The destination IP address is the private IP address of the backend instance.

Aviatrix deploys and configures the internal load balancers for a FireNet.

Number of Interfaces

GCP virtual machines support multiple network interfaces; the maximum scales with the instance's vCPU count, up to eight per instance. The number of network interfaces is relevant for GCP deployments because you attach an interface to each VPC network in the design.

A size such as the n1-standard-2 might work if CPU, memory, and disk capacity were the only concerns, but it is limited to two network interfaces.

You cannot add network interfaces to a GCP Compute Engine virtual machine instance after deployment.
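Since interfaces cannot be added later, a multi-NIC firewall instance has to be created with every interface declared up front. A gcloud sketch; all names, zones, and subnets below are placeholders:

```shell
# Create a 4-vCPU instance with three NICs, one per VPC network.
# One --network-interface flag is required per NIC; NICs cannot
# be added after the instance exists.
gcloud compute instances create fw-example \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --network-interface=subnet=egress-subnet \
  --network-interface=subnet=lan-subnet,no-address \
  --network-interface=subnet=mgmt-subnet,no-address
```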

Consumption Models

Fortinet offers its FortiGate NGFW in two consumption models:

  • BYOL (Bring Your Own License): lets you run software on Compute Engine while using licenses purchased directly from the provider. Google only charges you for the infrastructure costs, giving you the flexibility to purchase and manage your own licenses.

You must obtain a license to activate the FortiGate. If you have not activated the license, you see the license upload screen when you log into the FortiGate and cannot proceed to configure the FortiGate.

You can obtain licenses for the BYOL licensing model through any Fortinet partner. After you purchase a license or obtain an evaluation license (60-day term), you receive a PDF with an activation code.

  • PAYG (Pay as You Go): a usage-based, pay-per-use license. You can purchase a PAYG license from the GCP Marketplace. Google bills hourly for GCP PAYG licenses.

FortiGate is priced based on the number of CPU cores (vCPU) of the instance.

System Models

System Performance

Onboard GCP

When you create a cloud account in the Aviatrix Controller for GCloud, you will be asked to upload a GCloud Project Credentials file. Below are the steps to download the credentials file from the Google Developer Console:
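As an alternative to the console steps, the service account and its JSON credentials file can also be produced with gcloud. This is only a sketch; the project and account names are made up, and the exact role Aviatrix needs should be taken from the Aviatrix onboarding documentation:

```shell
# Create a service account, grant it a role, and download a JSON key.
# "my-project" and the account name are placeholders.
gcloud iam service-accounts create aviatrix-controller \
  --display-name="Aviatrix Controller" --project=my-project
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:aviatrix-controller@my-project.iam.gserviceaccount.com" \
  --role="roles/editor"
gcloud iam service-accounts keys create aviatrix-credentials.json \
  --iam-account="aviatrix-controller@my-project.iam.gserviceaccount.com"
```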


I’m going to leverage the mc-firenet and mc-spoke Terraform modules developed by my colleague Dennis Hagens to deploy a FireNet. Here are my provider definitions:

terraform {
  required_providers {
    aviatrix = {
      source  = "AviatrixSystems/aviatrix"
      version = "2.21.2"
    }
    google = {
      source  = "hashicorp/google"
      version = "4.14.0"
    }
    http = {
      source  = "hashicorp/http"
      version = "2.1.0"
    }
  }
}

provider "aviatrix" {
  controller_ip           = var.controller_ip
  username                = var.username
  password                = var.password
  skip_version_validation = true
  verify_ssl_certificate  = false
}

provider "google" {
  project = var.project
  region  = var.region
}

provider "http" {
  # Configuration options
}

data "http" "ip" {
  url = ""
}

The module deploys three VPCs (Transit FireNet, Egress, and LAN), HA transit gateways, and firewall instances. The following inputs are required for the FireNet design:

  • account
  • cloud
  • region
  • cidr (transit)
  • lan_cidr
  • firewall_image
  • egress_cidr
  • egress_enabled
  • fw_amount
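To make the inputs concrete, a terraform.tfvars along these lines would satisfy the module; every value below is an illustrative placeholder (the vpcs map matches the var.vpcs lookups used in the snippets):

```hcl
# Illustrative values only; adjust to your environment.
account        = "gcp-account"
region         = "us-central1"
firewall_image = "<fortigate-image-name-from-marketplace>"
fw_amount      = 2
vpcs = {
  firenet = "10.1.0.0/23"
  lan     = "10.1.2.0/24"
  egress  = "10.1.3.0/24"
  mgmt    = "10.1.4.0/24"
  spoke10 = "10.1.10.0/24"
  spoke20 = "10.1.20.0/24"
  ingress = "10.1.30.0/24"
}
```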

The terraform file that I used in my environment can be downloaded from here:

module "mc_transit" {
  source                 = "terraform-aviatrix-modules/mc-transit/aviatrix"
  version                = "v2.0.0"
  cloud                  = "GCP"
  cidr                   = var.vpcs["firenet"]
  region                 = var.region
  account                = var.account
  enable_transit_firenet = true
  lan_cidr               = var.vpcs["lan"]
}

module "firenet_1" {
  source                 = "terraform-aviatrix-modules/mc-firenet/aviatrix"
  version                = "1.0.0"
  transit_module         = module.mc_transit
  firewall_image         = var.firewall_image
  firewall_image_version = var.firewall_image_version
  # bootstrap_bucket_name_1 = var.storage_bucket_name
  egress_cidr            = var.vpcs["egress"]
  egress_enabled         = true
  inspection_enabled     = true
  instance_size          = var.instance_size
  mgmt_cidr              = var.vpcs["mgmt"]
  password               = var.password
}


For testing purposes I’m going to create spokes:

  • spoke10 vpc
  • spoke20 vpc
  • ingress vpc (ingress will not be used for testing in this document)

module "mc-spoke" {
  for_each = {
    "spoke10" = "spoke10"
    "spoke20" = "spoke20"
    "ingress" = "ingress"
  }
  source        = "terraform-aviatrix-modules/mc-spoke/aviatrix"
  version       = "1.1.2"
  account       = var.account
  cloud         = "GCP"
  name          = "gcp-${each.value}-${var.region}"
  region        = var.region
  cidr          = var.vpcs["${each.value}"]
  inspection    = true
  transit_gw    = module.mc_transit.transit_gateway.gw_name
  ha_gw         = true
  instance_size = var.instance_size
  single_az_ha  = false
}


Once Terraform has been applied, and before testing, I’m going to check and review the applied configuration.

Checking VPCs and Subnets

Looking at the VPC networks in the GCP console:

Checking Aviatrix Gateways

From the Multi-Cloud Transit menu, we check that the transit gateway was created correctly:

  • Connect Transit is enabled

The Transit FireNet function has to be installed at deployment time.

Checking Transit Firenet on Transit Gateway

Transit gateways were deployed with the FireNet functionality enabled (egress is also enabled):

By default, FireNet inspects inbound and east-west traffic but not outbound; outbound (egress) inspection has to be enabled explicitly.

Firewall Configuration

The following APIs should be enabled:

  • Compute Engine API
  • Cloud Deployment Manager V2 API
  • Cloud Runtime Configuration API
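These APIs can be enabled from the console, or in one shot with gcloud:

```shell
# Enable the required GCP APIs for the current project.
gcloud services enable \
  compute.googleapis.com \
  deploymentmanager.googleapis.com \
  runtimeconfig.googleapis.com
```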

I’m going to use the Aviatrix Terraform FireNet module to provision the VMs. If you prefer to deploy them outside of Aviatrix, you can consult the Fortinet Terraform module:

FortiGate Configuration

After the launch is complete, the console displays the FortiGate-VM instance with the public IP address of its management interface and allows us to download the .pem file for SSH access to the instance. Select the instance and click Actions; a drop-down menu appears with the option to download the key:

Change the .pem file's permissions to restrict access to the owner only.
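For example (the key file name is a placeholder for whatever the console download is called):

```shell
# Placeholder stand-in for the downloaded key file.
touch fortigate-vm-key.pem
# Owner read-only; ssh refuses keys readable by group/other.
chmod 400 fortigate-vm-key.pem
stat -c '%a' fortigate-vm-key.pem   # prints 400
```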

Change Password

SSH to the instance and change the password for the admin user:

config system admin
    edit admin
        set password <new-password_str>
    next
end


Once the password is set, we can connect to the FortiGate GUI using a web browser. The first time we log in after the deployment, the Setup wizard loads:

Hostname or Serial Number:


New features video review:



  • To activate a PAYG license, copy the VM’s serial number from the dashboard:
  • In the Registration page, enter the serial number, and select Next to continue registering the product. Enter your details in the other fields.

Interfaces Configuration

  • Port 1 is the management (mgmt) and external/untrusted/WAN interface
  • Port 2 is the internal/trusted/LAN interface

Disable source check:

config system interface
    edit "port2"
        set src-check disable
    next
end


Vendor Firewall Integration

This step automatically configures the RFC 1918 and non-RFC 1918 routes between the Aviatrix gateway and the vendor’s firewall instance, in this case the Fortinet FortiGate-VM. This can also be done manually through the cloud portal and/or the vendor’s management tool.

Create an Administrator profile

The REST API admin should have the minimum permissions required to complete the requests. Click System -> Administrators -> Create New -> REST API Admin:

We need to provide a username and a profile. Because Aviatrix will push configurations to the FortiGate-VM, we need to create an admin profile:

Once the admin profile is created, we create the user:

An API-Key is generated for the user:

The system automatically generates a new API key; this key is displayed only once.

Configure Aviatrix

If you don't see output like the below, click the Sync button:

FortiGate accepts connections via a load balancer (ILB). We additionally have to configure routes to the health probes’ IP ranges on each interface receiving traffic. This prevents the reverse-path-forwarding check from blocking the health probes.

The route on the external interface covers the ranges that the external network LB uses.
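On the FortiGate side this translates to static routes toward the probe ranges on each interface. A sketch for the LAN interface, assuming 35.191.0.0/16 and 130.211.0.0/22 as GCP's health-check source ranges and with the gateway as a placeholder for the LAN subnet's first-hop address:

```
config router static
    edit 0
        set dst 35.191.0.0 255.255.0.0
        set gateway <lan-subnet-gateway-ip>
        set device "port2"
    next
    edit 0
        set dst 130.211.0.0 255.255.252.0
        set gateway <lan-subnet-gateway-ip>
        set device "port2"
    next
end
```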

Health Check

FortiGate requires administrative access enabled on the interface to reply to health probes from an NLB:

  • Check the Administrative Access IPv4 HTTPS and HTTP boxes

DNAT Policy for Health Check

Unlike a device-based or instance-based load balancer, Internal TCP/UDP Load Balancing doesn’t terminate connections from clients; instead, traffic is sent directly to the backends. For that reason, a DNAT policy on the firewall is required to translate the destination of the health check packets to the firewall interface IP address. Go to Policy and Object -> Virtual IPs -> New Virtual IP:

  •  Interface: port2
  • Type: Static NAT
  • External IP address range: ILB address (LAN Subnet .99 and .100)
  • Destination Translation: FortiGate LAN nic2 IP address

Because we use one LB for TCP and another for UDP, the configuration will have two VIPs:
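In CLI form, the pair of VIPs might look like this sketch; the names are my own, and the addresses are placeholders for the ILB frontends (the LAN .99/.100 addresses) and the port2 IP:

```
config firewall vip
    edit "ilb-vip-tcp"
        set extintf "port2"
        set extip <lan-ilb-tcp-frontend-ip>
        set mappedip "<port2-ip>"
    next
    edit "ilb-vip-udp"
        set extintf "port2"
        set extip <lan-ilb-udp-frontend-ip>
        set mappedip "<port2-ip>"
    next
end
```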

Security Policy for Health Check

Create a security policy granting the GCP health-check ranges access to both VIPs created before:

  • Incoming Interface: port2
  • Outgoing Interface: port2
  • Source: 130.211.0.0/22 and 35.191.0.0/16 (GCP fixed health-check ranges)
  • Destination Address: VIP created above
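A CLI sketch of the same policy, assuming address objects for the two GCP ranges and the two VIPs have already been created (all object names below are my own):

```
config firewall policy
    edit 0
        set name "allow-gcp-healthchecks"
        set srcintf "port2"
        set dstintf "port2"
        set srcaddr "gcp-hc-130.211.0.0" "gcp-hc-35.191.0.0"
        set dstaddr "ilb-vip-tcp" "ilb-vip-udp"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end
```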

Firenet Policy

To inspect traffic from spokes 10 and 20, we have to add them to the Inspection Policy using the submenu Policy under the Firewall Network menu:


Checking GWs at Spokes

Using the Gateway menu, we check if the GWs for spokes 10 and 20 were deployed correctly:

  • spokes are attached to the transit gateway

East-West Traffic Flow

FortiGate implicitly denies all traffic. We need to create a security policy for inter-VPC flows:

Two VM instances were deployed to test the traffic flow:

A packet capture on the FortiGate shows the SSH attempts:

The spoke10 instance pings the spoke20 instance:

We can use AppIQ and FlightPath to check and troubleshoot if required:

FlightPath checks the GCP VPC network firewall rules and the spoke and transit gateway route tables:

