Aviatrix Transit FireNet allows the deployment of third-party firewalls onto the Aviatrix transit architecture.
Transit FireNet works the same way as the Firewall Network: traffic in and out of the specified spokes is forwarded to the firewall instances for inspection or policy application.
FireNet Design
The diagram below shows the Aviatrix FireNet design for Azure. When a transit gateway is deployed with the FireNet option checked, the Aviatrix Controller will:
- create subnets
- create UDRs
- create an internal NLB
- configure the internal NLB (front end, back end, health check)

Aviatrix deploys and configures the internal load balancer for a FireNet.
Number of Interfaces
A FortiGate Next-Generation Firewall instance has two interfaces, as described below:
Consumption Models
Fortinet offers its FortiGate NGFW in two consumption models:
- BYOL (Bring Your Own License): lets you run the firewall software in Azure while using licenses purchased directly from Fortinet. Azure only charges you for the infrastructure costs, giving you the flexibility to purchase and manage your own licenses.
You must obtain a license to activate the FortiGate. If you have not activated the license, you will see the license upload screen when you log in to the FortiGate and cannot proceed with its configuration.
You can obtain licenses for the BYOL licensing model through any Fortinet partner. After you purchase a license or obtain an evaluation license (60-day term), you receive a PDF with an activation code.
- PAYG (Pay as You Go): a usage-based, pay-per-use license that you can purchase from the Azure Marketplace.
Under PAYG, FortiGate is priced based on the number of CPU cores (vCPUs) of the instance.
System Models

System Performance
Deployment
There are several plans available for deployment on Azure Marketplace:
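Because the firewall is launched from a Marketplace image, the terms for the chosen plan must be accepted on the subscription before the instance can be deployed. When everything is driven from Terraform, that step can be sketched with the azurerm provider; the publisher/offer/plan strings below are illustrative, take the exact values from the Marketplace listing of the plan you pick:

# Accept the Azure Marketplace terms for the selected FortiGate plan
# (offer and plan names are examples - confirm them against the Marketplace listing)
resource "azurerm_marketplace_agreement" "fortigate" {
  publisher = "fortinet"
  offer     = "fortinet_fortigate-vm_v5"
  plan      = "fortinet_fg-vm_payg_2022"
}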
I’m going to leverage the mc-firenet and mc-spoke modules developed by my colleague Dennis Hagens to deploy a FireNet. Here is my provider configuration:
terraform {
  required_providers {
    aviatrix = {
      source  = "AviatrixSystems/aviatrix"
      version = "2.22.2"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.8.0"
    }
  }
}

provider "aviatrix" {
  controller_ip           = var.controller_ip
  username                = var.username
  password                = var.password
  skip_version_validation = true
  verify_ssl_certificate  = false
}

provider "azurerm" {
  features {}

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}
The modules deploy a single VPC (Transit FireNet), transit gateways (HA), and firewall instances. The Terraform file that I used in my environment can be downloaded from here:
module "mc_transit" { | |
source = "terraform-aviatrix-modules/mc-transit/aviatrix" | |
version = "v2.1.1" | |
cloud = var.cloud | |
cidr = var.vpcs["firenet"] | |
region = var.region | |
account = var.account | |
enable_transit_firenet = true | |
lan_cidr = var.vpcs["lan"] | |
enable_bgp_over_lan = true | |
} | |
module "firenet_1" { | |
source = "terraform-aviatrix-modules/mc-firenet/aviatrix" | |
version = "1.0.2" | |
transit_module = module.mc_transit | |
firewall_image = var.firewall_image | |
firewall_image_version = var.firewall_image_version | |
egress_cidr = var.vpcs["egress"] | |
egress_enabled = true | |
inspection_enabled = true | |
instance_size = var.instance_size | |
mgmt_cidr = var.vpcs["mgmt"] | |
password = var.password | |
} | |
module "mc-spoke" { | |
for_each = { | |
"spoke30" = "spoke30" | |
"spoke40" = "spoke40" | |
"ingress" = "ingress" | |
} | |
source = "terraform-aviatrix-modules/mc-spoke/aviatrix" | |
version = "1.2.1" | |
account = var.account | |
cloud = var.cloud | |
name = "azure-${each.value}-${var.region}" | |
region = var.region | |
cidr = var.vpcs["${each.value}"] | |
inspection = true | |
transit_gw = module.mc_transit.transit_gateway.gw_name | |
ha_gw = true | |
instance_size = var.instance_size | |
single_az_ha = false | |
} |
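For reference, the variables consumed above can be declared along these lines; every value here (CIDRs, region, image name, instance size) is illustrative and should be adapted to your environment:

# Credentials for the providers - supplied via terraform.tfvars or environment variables
variable "controller_ip" {}
variable "username" {}
variable "password" {}
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

variable "cloud"   { default = "Azure" }
variable "region"  { default = "West Europe" }
variable "account" { default = "azure-account" }   # Aviatrix access account name

variable "instance_size"          { default = "Standard_D3_v2" }
variable "firewall_image"         { default = "Fortinet FortiGate Next-Generation Firewall" }
variable "firewall_image_version" { default = "7.0.3" }

# Single map holding every CIDR referenced by the transit, firenet and spoke modules
variable "vpcs" {
  type = map(string)
  default = {
    firenet = "10.255.0.0/23"
    lan     = "10.255.2.0/24"
    egress  = "10.255.3.0/24"
    mgmt    = "10.255.4.0/24"
    spoke30 = "10.255.230.0/24"
    spoke40 = "10.255.240.0/24"
    ingress = "10.255.250.0/24"
  }
}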
Supported firewall images are listed here:
For testing purposes I’m going to create the following spokes:
- spoke30 vpc
- spoke40 vpc
- ingress vpc (ingress will not be used for testing in this document)
Once Terraform has been applied, and before testing, I’m going to review the configuration that was created.
Checking VPCs and Subnets
From the Azure portal, Virtual networks:
Checking Aviatrix Gateways
From the Multi-Cloud Transit menu, we check that the transit gateway was created correctly:
- Connected Transit is enabled
The Transit FireNet function has to be enabled at deployment time.
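The same check can be scripted by exposing the module outputs; a minimal sketch, assuming the transit_gateway output of the mc-transit module (the attribute names mirror the aviatrix_transit_gateway resource):

# outputs.tf - surface the attributes we want to verify after apply
output "transit_gateway_name" {
  value = module.mc_transit.transit_gateway.gw_name
}

output "transit_firenet_enabled" {
  # assumption: the module exports the underlying aviatrix_transit_gateway object,
  # so enable_transit_firenet reflects whether the FireNet function was enabled
  value = module.mc_transit.transit_gateway.enable_transit_firenet
}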
Checking Transit Firenet on Transit Gateway
The transit gateways were deployed with the FireNet function, and it is enabled:
By default, FireNet inspects inbound and east-west traffic but not outbound. Outbound (egress) inspection has to be enabled explicitly.
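With the mc-firenet module this is simply the egress_enabled flag shown earlier; when managing the FireNet object directly instead of through the module, the standalone resource looks roughly like this (a sketch only; do not add it alongside mc-firenet, which already manages the same object):

# Standalone FireNet object with egress inspection turned on
# (illustrative - redundant when the mc-firenet module is in use)
resource "aviatrix_firenet" "transit_firenet" {
  vpc_id             = module.mc_transit.vpc.vpc_id
  inspection_enabled = true
  egress_enabled     = true   # send Internet-bound traffic to the firewalls as well
}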
FortiGate Configuration
After the launch is complete, the console displays the FortiGate-VM instance with the public IP address of its management interface and lets us download the .pem file for SSH access to the instance. Select the instance and click Actions; a drop-down menu appears from which the key can be downloaded:

Change the .pem file permissions so that only the owner can read it.
If you provided a username and password at deployment, you don't need to download the key file or change the password as instructed below.
Change Password
SSH to the instance and change the password for the admin user:
config system admin
    edit admin
        set password <new-password_str>
    next
end
Once the password is set, we can connect to the FortiGate GUI using a web browser. The first time we log in after the deployment, it loads the Setup wizard:

Hostname or Serial Number:

Dashboard:

New features video review:

Dashboard:

License
- To activate a PAYG license, copy the VM’s serial number from the dashboard:

- Go to Customer Service & Support and create a new account or log in with an existing account:

- On the Registration page, enter the serial number and your details in the other fields, then select Next to continue registering the product.

To obtain the VM-ID, run the command below on the CLI console:
Once the VM-ID is provided and the license agreement is accepted, you should see:
Interfaces Configuration
- Port 1 is the management and external/untrusted/WAN interface
- Port 2 is the internal/trusted/LAN interface

Disable source check using the CLI console:
config system interface
    edit "port2"
        set src-check disable
    next
end
Vendor Firewall Integration
This step automatically configures the RFC 1918 and non-RFC 1918 routes between the Aviatrix gateway and the vendor's firewall instance, in this case the Fortinet FortiGate-VM. This can also be done manually through the cloud portal and/or the vendor's management tool.
Create an Administrator profile
The REST API admin should have the minimum permissions required to complete the requests. Click on System -> Administrators -> Create New -> REST API Admin:

We need to provide a username and a profile. Because Aviatrix will push configuration to the FortiGate-VM, we first need to create an admin profile:

Once the admin profile is created, we create the user:

An API-Key is generated for the user:

The system automatically generates a new API key; this key is displayed only once.
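The same objects can also be created from the FortiGate CLI; a rough sketch (the profile name, the permission groups, and the user name are illustrative, so trim them to the minimum Aviatrix actually needs):

config system accprofile
    edit "aviatrix-api-profile"
        set sysgrp read-write
        set netgrp read-write
        set fwgrp read-write
    next
end
config system api-user
    edit "aviatrix-api"
        set accprofile "aviatrix-api-profile"
        set vdom "root"
    next
end
# generate the API key (displayed once, exactly as in the GUI flow)
execute api-user generate-key aviatrix-api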
Vendor Integration
Back to the Controller, we configure the vendor integration:
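For a fully Terraform-driven deployment, the same step can be expressed with the vendor integration data source of the Aviatrix provider. A sketch only: the argument names follow my reading of the provider docs, and the instance ID, public IP, and firewall name are placeholders taken from the deployed firewall instance.

# Vendor integration from Terraform instead of the Controller UI (illustrative)
data "aviatrix_firenet_vendor_integration" "fortigate" {
  vpc_id        = module.mc_transit.vpc.vpc_id
  instance_id   = "<firewall-instance-id>"
  vendor_type   = "Fortinet FortiGate"
  public_ip     = "<firewall-mgmt-public-ip>"
  firewall_name = "<firewall-name>"
  username      = "admin"
  password      = var.password   # or the REST API token, depending on provider version
  save          = true
}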

If you don't see the output shown below, click the Sync button:
The FortiGate accepts connections via the internal load balancer (ILB). We additionally have to configure routes to the health probes' IP range on each interface receiving traffic; this prevents the reverse path forwarding check from dropping the probes.
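In Azure the load balancer probes always originate from the well-known address 168.63.129.16, so a host route back out of the probed interface keeps the RPF check happy. A sketch; the gateway address is illustrative (use the first usable IP of the port2 subnet) and the route should be repeated for any other probed interface:

config router static
    edit 0
        set dst 168.63.129.16 255.255.255.255
        set device "port2"
        set gateway 10.255.2.1
    next
end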
Health Check
The FortiGate needs administrative access enabled on the probed interface to reply to health probes from the NLB, as sketched below:
- Select and edit port2
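A minimal CLI equivalent, assuming the NLB health probe targets TCP 443 (the HTTPS admin port):

config system interface
    edit "port2"
        set allowaccess ping https
    next
end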
Security Policy for Health Check
Create a security policy granting the Azure health check ranges access to both VIPs created earlier:
Once the policy is saved, the health probe status on the NLB changes from 0 to 100%:
Firenet Policy
To inspect traffic from spokes 30 and 40, we have to add them to the Inspection Policy using the Policy submenu under the Firewall Network menu:
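The mc-spoke module already takes care of this when inspection = true; if a spoke is added by hand, the equivalent Terraform is roughly one aviatrix_transit_firenet_policy resource per inspected spoke (resource and attribute names per my reading of the Aviatrix provider):

# Add the spoke30 gateway to the FireNet inspection policy
resource "aviatrix_transit_firenet_policy" "spoke30" {
  transit_firenet_gateway_name = module.mc_transit.transit_gateway.gw_name
  inspected_resource_name      = "SPOKE:${module.mc-spoke["spoke30"].spoke_gateway.gw_name}"
}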

Testing
Checking GWs at Spokes
Using the Gateway menu, we check if the GWs for spokes 30 and 40 were deployed correctly:
- spokes are attached to the transit gateway

East-West Traffic Flow
The FortiGate implicitly denies all communication, so we need to create a security policy for inter-VPC flows, as sketched below:
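In this two-interface design, east-west traffic enters and leaves the FortiGate on the LAN interface (port2), so a minimal permit policy might look like this (the address object, names, and the broad 10.0.0.0/8 range are illustrative; tighten them to your real spoke CIDRs):

config firewall address
    edit "rfc1918-10"
        set subnet 10.0.0.0 255.0.0.0
    next
end
config firewall policy
    edit 0
        set name "spoke-east-west"
        set srcintf "port2"
        set dstintf "port2"
        set srcaddr "rfc1918-10"
        set dstaddr "rfc1918-10"
        set action accept
        set schedule "always"
        set service "ALL"
        set logtraffic all
    next
end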

Two test VMs were deployed into spokes 30 and 40 respectively. The spoke30 instance (10.255.230.36) pings the spoke40 instance (10.255.240.36):
We can use AppIQ and FlightPath to check and troubleshoot if required:

FlightPath will check the Azure network NSGs, NSG rules, and the spoke and transit gateway route tables:
References
https://docs.aviatrix.com/HowTos/config_FortiGateAzure.html