Deploying an Aviatrix FireNet on GCP with CheckPoint

Aviatrix Transit FireNet allows the deployment of 3rd party firewalls onto the Aviatrix transit architecture.

Transit FireNet works the same way as Firewall Network, where traffic in and out of the specified spokes is forwarded to the firewall instances for inspection or policy application.

FireNet Design

GCP virtual private cloud (VPC) is a logically segmented global network within GCP that allows connected resources to communicate with each other. VPC networks contain one or more private IP subnets, each assigned to a GCP region. A subnet lives in only one region, but all subnets within a VPC network are reachable by the connected resources regardless of their region or project membership.
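As a minimal sketch of this model, a custom-mode VPC with subnets in two regions might look like this in Terraform (the names and CIDRs here are hypothetical and not part of the deployment below):

```hcl
# Illustrative only: one global VPC, two regional subnets.
resource "google_compute_network" "demo" {
  name                    = "demo-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "us" {
  name          = "demo-us"
  region        = "us-central1"
  network       = google_compute_network.demo.id
  ip_cidr_range = "10.10.1.0/24"
}

resource "google_compute_subnetwork" "eu" {
  name          = "demo-eu"
  region        = "europe-west1"
  network       = google_compute_network.demo.id
  ip_cidr_range = "10.10.2.0/24"
}
```

Instances attached to demo-us and demo-eu can reach each other without any extra routing because both subnets belong to the same global VPC network.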

Communication happens at Layer 3 because GCP forwards traffic using host routes, even when instances belong to the same subnet, such as a /24. This implies that all instances within the VPC network must communicate via a router. However, intra-VPC traffic never actually transits a router. Instead, the network responds with a proxy ARP (address resolution protocol) reply that tells the device how to reach the destination, and traffic is then forwarded directly to the destination instance.

By default, all instances connected to the VPC network communicate directly, even when they are part of different subnets. When you add a subnet, GCP automatically generates routes that facilitate communication within the VPC network as well as to the internet. These routes are known as system-generated routes.

The effect of GCP’s traffic forwarding behavior is that a firewall can’t be inserted in the middle of intra-VPC traffic. For that reason, the FireNet design uses three VPCs: the egress VPC, the LAN VPC, and the transit FireNet VPC.

Load balancers are used to provide high availability and scalability in the design. GCP offers several load balancers that distribute traffic to a set of instances based on the type of traffic, but in this design we will focus on the network load balancer.

The load balancer comes in two types: internal and network. The difference between the two types is the source of the traffic. The internal load balancer only supports traffic originating from within the VPC network or coming across a VPN terminating within GCP. The network load balancer is reachable from any device on the internet.

Network load balancers are not members of VPC networks. Instead, like public IP addresses attached directly to an instance, GCP translates inbound traffic destined for the public frontend IP address directly to the private IP address of the instance. Because the public network load balancer is not attached to a specific VPC network, any instance in the project that is in the region can be part of the target pool, regardless of the VPC network to which the backend instance is attached.

Health check traffic from the load balancer to your instances has a source IP address in the Google-owned ranges 35.191.0.0/16 and 130.211.0.0/22. The destination IP address is the private IP address of the backend instance.
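Because those probes must reach the backends, the VPC network needs a firewall rule admitting them. A sketch, assuming a hypothetical network name and health check port:

```hcl
# Sketch: admit GCP health check probes to the backend instances.
# The network name and port are placeholders for illustration.
resource "google_compute_firewall" "allow_health_checks" {
  name    = "allow-gcp-health-checks"
  network = "firenet-lan-vpc"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  # Google's documented health check source ranges
  source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
}
```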

Aviatrix deploys and configures the internal load balancers for a FireNet.

Number of Interfaces

GCP provides virtual machines with two, four, or eight network interfaces. The number of network interfaces is relevant for GCP deployments because you might attach an interface to each VPC network in the deployment.

Sizes such as the n1-standard-2 might work if CPU, memory, and disk capacity were the only concerns, but they are limited to two network interfaces.

You cannot add additional interfaces to a GCP compute engine virtual machine instance after deployment.
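Because the interface count is fixed at creation, every VPC network the firewall will touch has to be attached in the initial instance definition. A hypothetical sketch (the names, image, and machine type are illustrative, not the actual CloudGuard deployment):

```hcl
# Sketch: a 4-vCPU instance attached to three VPC networks at creation time.
# Interfaces cannot be added after the instance is deployed.
resource "google_compute_instance" "fw" {
  name         = "fw-example"
  machine_type = "n1-standard-4" # enough vCPUs to support more than two NICs
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface { # nic0: mgmt / external
    subnetwork = "firenet-mgmt-subnet"
  }
  network_interface { # nic1: LAN
    subnetwork = "firenet-lan-subnet"
  }
  network_interface { # nic2: egress
    subnetwork = "firenet-egress-subnet"
  }
}
```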

Consumption Models

CheckPoint offers its NGFW in two consumption models:

  • BYOL (Bring Your Own License): lets you run software on Compute Engine while using licenses purchased directly from CheckPoint. Google only charges you for the infrastructure costs, giving you the flexibility to purchase and manage your own licenses.

You can obtain licenses for the BYOL licensing model through any CheckPoint partner. After you purchase a license or obtain an evaluation license (90-day term), you receive a PDF with an activation code.

  • PAYG (Pay as You Go): a usage-based, pay-per-use license. The image usage fee is charged with a 1-minute minimum and billed by Google. vCPU-based pricing:
  • $0.80/hour for instances with 1–2 vCPU(s).
  • $0.95/hour for 4 vCPU instances.
  • $1.50/hour for instances with 6–8 vCPU(s).
  • $3.00/hour for instances with 10–16 vCPU(s).
  • $3.40/hour for instances with 18 or more vCPUs.

CloudGuard Network Security NGFW is priced based on the number of CPU cores (vCPU) of the instance.

Supported Releases

Onboard GCP

When you create a cloud account in the Aviatrix Controller for Google Cloud, you are asked to upload a GCloud Project Credentials file. Below are the steps to download the credentials file from the Google Developer Console:
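As an alternative to the UI steps, the account can also be onboarded in Terraform with the aviatrix_account resource. A sketch, where the account name and credentials path are placeholders:

```hcl
# Sketch: onboarding the GCP project into the Aviatrix Controller.
resource "aviatrix_account" "gcp" {
  account_name                        = "gcp-account"
  cloud_type                          = 4 # 4 = GCP in the Aviatrix provider
  gcloud_project_id                   = var.project
  gcloud_project_credentials_filepath = "/path/to/credentials.json"
}
```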


I’m going to leverage the mc-firenet and mc-spoke Terraform modules developed by my colleague Dennis Hagens to deploy a FireNet. Here is my provider configuration:

terraform {
  required_providers {
    aviatrix = {
      source  = "AviatrixSystems/aviatrix"
      version = "2.21.2"
    }
    google = {
      source  = "hashicorp/google"
      version = "4.14.0"
    }
    http = {
      source  = "hashicorp/http"
      version = "2.1.0"
    }
  }
}

provider "aviatrix" {
  controller_ip           = var.controller_ip
  username                = var.username
  password                = var.password
  skip_version_validation = true
  verify_ssl_certificate  = false
}

provider "google" {
  project = var.project
  region  = var.region
}

provider "http" {
  # Configuration options
}

data "http" "ip" {
  url = "" # URL of a service that returns your public IP
}


The modules deploy three VPCs (Transit FireNet, Egress, and LAN) and the transit gateways (HA). The Terraform file that I used in my environment can be downloaded from here:

module "mc_transit" {
  source  = "terraform-aviatrix-modules/mc-transit/aviatrix"
  version = "v2.0.0"

  cloud                  = "GCP"
  cidr                   = var.vpcs["firenet"]
  region                 = var.region
  account                = var.account
  enable_transit_firenet = true
  lan_cidr               = var.vpcs["lan"]
}

module "firenet_1" {
  source  = "terraform-aviatrix-modules/mc-firenet/aviatrix"
  version = "1.0.0"

  transit_module         = module.mc_transit
  firewall_image         = var.firewall_image
  firewall_image_version = var.firewall_image_version
  egress_cidr            = var.vpcs["egress"]
  egress_enabled         = true
  inspection_enabled     = true
  instance_size          = var.instance_size
  mgmt_cidr              = var.vpcs["mgmt"]
  password               = var.password
}

module "mc-spoke" {
  for_each = {
    "spoke100" = "spoke100"
    "spoke200" = "spoke200"
  }

  source  = "terraform-aviatrix-modules/mc-spoke/aviatrix"
  version = "1.1.2"

  account       = var.account
  cloud         = "GCP"
  name          = "gcp-${each.value}-${var.region}"
  region        = var.region
  cidr          = var.vpcs["${each.value}"]
  inspection    = true
  transit_gw    = module.mc_transit.transit_gateway.gw_name
  ha_gw         = true
  instance_size = var.instance_size
  single_az_ha  = false
}


For testing purposes I also created two spokes:

  • spoke100 vpc
  • spoke200 vpc

Once Terraform is applied and before testing, I’m going to review the configuration it produced.

Checking VPCs and Subnets

Looking at the GCP console VPC network page:

Checking Aviatrix Gateways

From the Multi-Cloud Transit menu, we check that the transit gateway was created correctly:

  • Connect Transit is enabled

The Transit FireNet function has to be enabled at deployment time.

Checking Transit Firenet on Transit Gateway

The transit gateways were deployed with the FireNet function enabled (egress is also enabled):

By default, FireNet is configured to inspect inbound and east-west traffic but not outbound. To enable internet outbound (egress) inspection, we have to configure it manually:

CheckPoint CloudGuard Configuration

After the launch is complete, the console displays the CheckPoint instance with the public IP address of its management interface and lets us download the .pem file for SSH access to the instance. Select the instance and click Actions; a drop-down menu appears with the option to download the key:

Change the .pem file permissions to restrict access to the owner only.

Admin Password

To change or set the Gaia Portal login password, connect over SSH to the instance and run:

set user admin password

Once the password is set we can connect to the GAIA GUI using a browser. The first time we log in after the deployment, it loads the First Time Configuration wizard:

Click Go! once the info is provided:

Interfaces Configuration

No configuration is required

  • Port 1 is the mgmt and external/untrusted/WAN interface
  • Port 2 is the internal/trusted/LAN interface

Vendor Firewall Integration

This step automatically configures the RFC 1918 and non-RFC 1918 routes between the Aviatrix gateways and the vendor’s firewall instance, in this case CheckPoint CloudGuard NGFW. This can also be done manually through the cloud portal and/or the vendor’s management tool.

Configure Aviatrix

Use the Vendor Integration tool under Firewall Network:

If you don’t see output like the below, click the Sync button:

CheckPoint CloudGuard Management Server

The overall CheckPoint management architecture is shown below:

I’m deploying a standalone CloudGuard instance to act as the centralized management server for my CheckPoint gateways:

Once the information required for the deployment is provided, we can follow the deployment in the Deployment Manager:

CloudGuard PAYG comes with 24×7 support:


SmartConsole is supported only on Windows machines. We can download it from the Gaia console:


Once it is installed, launch it:

Once the install is done we connect to the Management server we deployed earlier:

The next step is to add my gateways to the management server:


Disable Anti-Spoofing on the eth1 (LAN) interface:

Security Policy for Mgmt Access

A security policy is required if you plan or need to access Gaia on your gateways:

  • test-lab-aviatrix-mgmt allows my Mac and my GCP Windows instance to reach Gaia over SSH and/or HTTPS

DNAT Policy for Health Check

Unlike a proxy-based load balancer, Internal TCP/UDP Load Balancing doesn’t terminate connections from clients; instead, traffic is sent directly to the backends. For that reason, a DNAT policy is required on the firewall to translate the destination of the health check packets to the firewall interface IP address.

  • Original Source: 35.191.0.0/16 and 130.211.0.0/22
  • Original Destination: TCP NLB and UDP NLB
  • Translated Destination: CheckPoint CloudGuard NIC 1 IP

Once the policy is installed, the LB backends will become healthy:

An access control policy is also required:

Security Policy for East-West Inspection

Security Policy for Internet Inspection

Besides configuring Aviatrix FireNet for centralized egress, the CheckPoint gateways also require configuration: an access policy and NAT.

  • Egress through Firewall is enabled under Firewall Network -> List -> Details:

Create a policy to allow egress traffic:

  • test-lab-aviatrix-egress

In SmartConsole, set up Network Address Translation (NAT) rules so that internet-bound traffic is hidden behind the CloudGuard gateway’s public address:

GCP CE instances that require egress need an “avx-snat-noip” tag:

  • Before tag
  • After tag:
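In Terraform, the tag is just an entry in the instance’s tags list. A hypothetical spoke instance definition (only the avx-snat-noip tag comes from the text above; everything else is illustrative):

```hcl
# Sketch: tagging a spoke instance so it gets centralized egress via FireNet.
resource "google_compute_instance" "spoke_vm" {
  name         = "spoke100-vm"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  tags         = ["avx-snat-noip"] # tag required for egress without a public IP

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface {
    subnetwork = "spoke100-subnet"
  }
}
```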

Instance access to internet:

Firewall packet capture using tcpdump:

East-West Traffic Flow

Firenet Policy

To inspect traffic from spoke100 and spoke200, we have to add them to the inspection policy using the Policy submenu under the Firewall Network menu:
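The mc-spoke module already does this when inspection = true is set; the standalone Terraform equivalent is the aviatrix_transit_firenet_policy resource. A sketch, with a placeholder spoke gateway name:

```hcl
# Sketch: attach a spoke to the transit FireNet inspection policy.
# The spoke gateway name below is a placeholder matching my lab naming.
resource "aviatrix_transit_firenet_policy" "spoke100" {
  transit_firenet_gateway_name = module.mc_transit.transit_gateway.gw_name
  inspected_resource_name      = "SPOKE:gcp-spoke100-us-east1"
}
```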


CloudGuard implicitly denies all communication. We need to create a security policy for inter-VPC flows:

A GCP CE instance is deployed at each spoke for testing the east-west flow:

The spoke100 instance opens an SSH connection to the spoke200 instance:

A packet capture on the CheckPoint shows the ssh attempts:

We can use AppIQ and FlightPath to check and troubleshoot if required:

FlightPath will check the GCP VPC network firewall rules and the spoke and transit gateway route tables:

