Using F5 and Aviatrix for Ingress Traffic on AWS

Aviatrix Overview

Aviatrix is a cloud network platform that brings multi-cloud networking, security, and operational visibility capabilities that go beyond what any cloud service provider offers. Aviatrix software leverages AWS, Azure, GCP and Oracle Cloud APIs to interact with and directly program native cloud networking constructs, abstracting the unique complexities of each cloud to form one network data plane, and adds advanced networking, security and operational features enterprises require.

F5 Overview

F5 BIG-IP Cloud Edition is currently available for AWS environments and consists of multiple components, including F5 BIG-IQ Centralized Management, the F5 BIG-IQ Data Collection Device, and F5 BIG-IP Per-App Virtual Editions (VEs).

  • BIG-IQ Centralized Management: The BIG-IQ (among other services) centralizes the configuration, deployment, and management of Service Scaling Groups (SSGs). Additionally, BIG-IQ provides the tenant portal and hosts the visibility/analytics dashboard.
  • BIG-IQ Data Collection Device (DCD): The DCD handles data collection, processing, and storage.
  • BIG-IP Per-App VEs: Per-App VEs are deployed into SSGs and provide advanced traffic management and web application firewall security on a per-app basis.

I’ll focus on BIG-IP VEs in this document.

BIG-IP VE Supported Platforms

Licensing Models

There are two ways you can run BIG-IP in AWS:

  • Utility Model: you pay Amazon both for the compute and disk requirements of the instances and for the BIG-IP software license, at an hourly rate.
  • BYOL: you pay Amazon only for the compute and disk footprint, not for the F5 software license. You must license the device after it launches, either manually or through orchestration.
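As a sketch, a BYOL device can be licensed from the command line with tmsh once it boots (the registration key below is a placeholder):

```shell
# License a BYOL BIG-IP from the shell with tmsh (placeholder registration key)
tmsh install sys license registration-key XXXXX-XXXXX-XXXXX-XXXXX-XXXXXXX

# Persist the configuration
tmsh save sys config
```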

In addition to choosing between the utility and BYOL licensing models, you’ll also need to choose the licensed features and the throughput level. When taking a BYOL approach, the license will have a maximum throughput level and will be associated with a Good/Better/Best (GBB) package.

Licenses are available for throughput levels of 20 Gbps, 10 Gbps, 5 Gbps, 3 Gbps, 1 Gbps, 200 Mbps, or 25 Mbps.

Throughput depends on a variety of factors, including AWS instance type and region, the configuration of BIG-IP (whether it’s processing internal or external traffic), whether you have a single NIC or multiple NICs, whether you are using AWS cluster and enhanced networking, etc.

Number of NICs

In AWS, each interface is attached as a layer 3 endpoint, so we must add an interface for each subnet. This contrasts with traditional networks, where you can add VLANs to your trunk for each subnet via tagging. Even though we’re in a virtual world, the number of Elastic Network Interfaces (ENIs) is limited by the EC2 instance size. The following are common cases:

  • 1 NIC: for use in one-armed, simple network environments (the single NIC processes both management and data-plane traffic).
  • 2 NIC: for use in environments with a separate management network.
  • 3 NIC: for use in environments with a traditional enterprise architecture featuring separate management, external, and internal networks. This requires static route management.
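For example, adding a third interface to an existing instance is a pair of AWS CLI calls (the subnet, ENI, and instance IDs below are placeholders), keeping in mind the per-instance-type ENI limit:

```shell
# Create an ENI in the internal subnet (placeholder IDs)
aws ec2 create-network-interface \
    --subnet-id subnet-0abc123 \
    --description "bigip-internal"

# Attach it to the BIG-IP instance as the third interface (device-index 2)
aws ec2 attach-network-interface \
    --network-interface-id eni-0abc123 \
    --instance-id i-0abc123 \
    --device-index 2
```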

Unsupported features and limitations

Directly Connected

The benefit of the directly connected architecture, where BIG-IP can serve as the default gateway, is that each node in a tier can communicate with nodes in other tiers and leverage virtual listeners on BIG-IP without having to be SNATed.

The problem is that as the number of applications and/or tenants increases, so does the number of required interfaces.


Routed

Routed is an architecture where the pool members live on remote networks. In the case below, the route table for all pool members must contain a route that leads back to BIG-IP.

By doing so, you can:

  • leverage BIG-IP for outbound use cases (securing outbound traffic)
  • return internet traffic back through the BIG-IP and avoid SNATing your internet-facing VIPs

These routed architectures allow you to reduce the number of interfaces used to connect internal networks. Two potential drawbacks are the requirement for SNAT (as BIG-IP is no longer inline to intercept response traffic) and the additional network hop.
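In a routed design, pointing the pool members' route table back at the BIG-IP's internal ENI might look like the following sketch (IDs are placeholders); note that source/destination checking must be disabled on the BIG-IP's ENI for it to forward traffic it doesn't own:

```shell
# Disable source/dest check so the BIG-IP ENI can forward transit traffic
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0abc123 \
    --no-source-dest-check

# Point the pool members' default route back through the BIG-IP internal interface
aws ec2 create-route \
    --route-table-id rtb-0abc123 \
    --destination-cidr-block 0.0.0.0/0 \
    --network-interface-id eni-0abc123
```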

Network Configuration

Using Aviatrix, I’ll create a new VPC dedicated to ingress with F5, using Create a VPC under Useful Tools:

The advanced option allows us to set the size of the subnets and the number of AZs. Aviatrix will take care of creating all the required network constructs with the proper configuration. Next, I create spoke gateways for the new ingress VPC:

After the gateways are provisioned, I attach them to the Aviatrix transit hub:

Design Patterns


Standalone

A single instance is deployed in this case. Standalone BIG-IP VEs are primarily used for Dev/Test/Staging, as they lack the high availability required for production environments.


Failover Cluster

Two instances are deployed in this case: one is active while the second is on standby. This design is referred to as a failover cluster and is primarily used to replicate traditional Active/Standby BIG-IP deployments.

In these deployments, an individual BIG-IP VE in the cluster owns (or is Active for) a particular IP address. The BIG-IP VEs fail services over from one instance to another by remapping IP addresses, routes, etc. based on Active/Standby status. In some solutions, failover is implemented via API (API calls to the cloud platform instead of network protocols like Gratuitous ARP, route updates, and so on). In other solutions, failover is implemented via an upstream service (like a native load balancer) that only sends traffic to the Active instance for that service based on a health monitor.

BIG-IP has adapted and replaced the GARP failover method with API calls to Amazon. These API calls toggle ownership of Amazon secondary private IP addresses between devices. Any EIPs that map to these secondary IP addresses will then point to the new active device. Floating IPs in BIG-IP terms are equivalent to secondary IPs in the EC2 world.
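Under the hood, the failover API call is essentially a secondary-IP reassignment; done by hand it would look something like this sketch (the ENI ID and address are placeholders):

```shell
# Move the floating (secondary) private IP to the new active device's ENI;
# --allow-reassignment lets AWS take the address away from the old active ENI
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0newactive \
    --private-ip-addresses 10.0.2.50 \
    --allow-reassignment
```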

The drawback of the API-based failover mechanism is an increase in failover time to roughly 10 seconds per EIP. This is the time it takes for changes to propagate in AWS’s network, though this downtime is still significantly less than a DNS timeout.


Active-Active

BIG-IP VEs are all Active and are primarily used to scale out an individual L7 service. This type of deployment relies on an upstream service, such as Route 53, ELB, or GWLB, to distribute traffic.

F5 Deployment: AWS CloudFormation Templates

F5 provides several CFTs with options for licensing model, high availability, and auto-scaling at:

I’ll deploy a standalone F5 with 3 NICs:

  • mgmt and external interfaces are attached to a public subnet
  • internal is attached to a private subnet

Before launching a CFT, go to the AWS Marketplace and subscribe to the license you are going to use:

The repository has links to the proper CFT template stored in a public bucket:

The network configuration is where we specify the information from the previously created ingress VPC to properly inject the F5 into the VPC and the networks Aviatrix manages:

I’ll pick an entry-level image, and because I decided to adopt a 3-NIC design, I’ll need to pick an m5.xlarge instance type:

Once the information is provided, the stack runs and creates the BIG-IP instance:

Terraform Module

F5 also provides Terraform modules to deploy F5. They are located at:

F5 Configuration

Once the CFT is complete, we need to configure the BIG-IP following the steps detailed below.

Change Admin Password

To change the admin password and access the BIG-IP GUI, we have to SSH to the instance and run:

modify auth password admin

save sys config

Setup Utility

When we connect to the GUI for the first time, the Setup Utility loads if you are using a BYOL license. If not, we can skip to the logical configuration below (starting from static routes).


VLANs

A VLAN is a logical subset of hosts on a local area network (LAN) that operate in the same IP address space. By default, the BIG-IP includes VLANs named internal and external. When we initially ran the Setup utility, we assigned the following to each of these VLANs:

  • A static and a floating self IP address
  • A VLAN tag
  • One or more BIG-IP system interfaces

A typical VLAN configuration is one in which the system has the two VLANs external and internal, and one or more BIG-IP system interfaces assigned to each VLAN:
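On a BIG-IP VE the equivalent tmsh configuration might look like the sketch below (the interface numbers reflect a typical 3-NIC mapping, but verify them against your instance):

```shell
# Map BIG-IP interfaces to the external and internal VLANs
tmsh create net vlan external interfaces add { 1.1 }
tmsh create net vlan internal interfaces add { 1.2 }
```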


Self IPs

A self IP address is an IP address on the BIG-IP system that you associate with a VLAN in order to access hosts in that VLAN. Self IP addresses serve two purposes:

  • First, when sending a message to a destination server, the BIG-IP system uses the self IP addresses of its VLANs to determine the specific VLAN in which the destination server resides.
  • Second, a self IP address can serve as the default route for each destination server in the corresponding VLAN.
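A sketch of self IP creation in tmsh, with placeholder addresses assuming a /24 per data subnet:

```shell
# Static self IPs, one per data-plane VLAN (names and addresses are placeholders)
tmsh create net self external_self address 10.0.2.10/24 vlan external allow-service default
tmsh create net self internal_self address 10.0.3.10/24 vlan internal allow-service default
```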


Static Routes

Because we are using two interfaces for data, we need to manage static routes. I created a default route pointing to the gateway of the external subnet:

And static routes toward the spokes (RFC 1918) pointing to the gateway of the internal interface:
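In tmsh, the default route plus the RFC 1918 statics might look like this sketch (the gateways are the AWS subnet routers; addresses are placeholders):

```shell
# Default route out the external subnet's gateway
tmsh create net route default gw 10.0.2.1

# RFC 1918 summaries toward the spokes via the internal subnet's gateway
tmsh create net route rfc1918_10  network 10.0.0.0/8     gw 10.0.3.1
tmsh create net route rfc1918_172 network 172.16.0.0/12  gw 10.0.3.1
tmsh create net route rfc1918_192 network 192.168.0.0/16 gw 10.0.3.1
```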


Nodes

I’ll add VM2 as a Node:


Pool

I’ll create a pool using http as the health monitor:

VM2 is added as a member of this pool:
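The equivalent tmsh commands might look like this (the node name and address are placeholders):

```shell
# Define VM2 as a node, then build an http-monitored pool with it as a member
tmsh create ltm node VM2 address 10.0.3.20
tmsh create ltm pool app_pool monitor http members add { VM2:80 }
```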

Virtual Server

I created a virtual server listening on port 80 with auto-map source address translation:
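As a tmsh sketch (the virtual server name and destination address are placeholders):

```shell
# HTTP virtual server on port 80 with automap SNAT, fronting the pool above
tmsh create ltm virtual vs_app destination 10.0.2.50:80 ip-protocol tcp \
    profiles add { http } \
    pool app_pool \
    source-address-translation { type automap }
```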


From my Mac, I make an HTTP request to the public IP address of the external BIG-IP interface:

Apache Benchmark numbers (host)

Apache Benchmark numbers (BIG-IP)

Active-Active using an ELB

I created an ELB to front-end the BIG-IP:

Listening on port 80:

I added the BIG-IP to the target group:
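Registering the BIG-IP instances with the target group from the CLI would look roughly like this (the target group ARN and instance IDs are placeholders):

```shell
# Register the BIG-IP instances as targets behind the ELB
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/bigip-tg/abc123 \
    --targets Id=i-0bigip1 Id=i-0bigip2
```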


To test this design, we access the ELB URL, which hits the HTTPD server running on port 80 on the back-end server:

Apache Benchmark numbers (ELB)

Active-Active using Route53

Route 53 can check the health of our BIG-IPs and respond to DNS queries using only the healthy resources. First, we create Health Checks for each BIG-IP (I deployed another standalone called bigip2):

And then I create type A records with multivalue answer routing policy:

Each record has its own health check and Record ID.
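One of these records as an AWS CLI change-batch sketch (the hosted zone ID, health check ID, name, and address are placeholders); the second BIG-IP gets an identical record with its own address, HealthCheckId, and SetIdentifier:

```shell
# Multivalue answer A record tied to a health check
aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "TTL": 60,
      "SetIdentifier": "bigip1",
      "MultiValueAnswer": true,
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}'
```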

Apache Benchmark numbers

Active-Active using GWLB

I’ll cover it in a future post.

Active/Standby using API

The diagram below shows how a failover pair works using the F5 Cloud Failover Extension (CFE). When a failover occurs, the secondary IPs of the active node are moved to the standby node, which is promoted to active.

The failover pair can also be deployed from the F5 CFT templates. Once the stack is created, a pair of BIG-IPs is deployed in failover mode with the proper secondary IPs configured:
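CFE is configured by POSTing a declaration to its REST endpoint on the management plane; a minimal sketch might look like this (credentials, the management IP, and the tag values are placeholders; consult the CFE documentation for the full schema):

```shell
# Post a minimal Cloud Failover Extension declaration to the BIG-IP
curl -sku admin:${BIGIP_PASSWORD} \
    -H "Content-Type: application/json" \
    -X POST https://<mgmt-ip>/mgmt/shared/cloud-failover/declare \
    -d '{
      "class": "Cloud_Failover",
      "environment": "aws",
      "failoverAddresses": {
        "scopingTags": { "f5_cloud_failover_label": "bigip-ha" }
      }
    }'
```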

Copilot AppIQ can be used to detail the communication flow:

