Using F5 and Aviatrix for Ingress Traffic on GCP

Aviatrix Overview

Aviatrix is a cloud network platform that brings multi-cloud networking, security, and operational visibility capabilities that go beyond what any cloud service provider offers. Aviatrix software leverages AWS, Azure, GCP and Oracle Cloud APIs to interact with and directly program native cloud networking constructs, abstracting the unique complexities of each cloud to form one network data plane, and adds advanced networking, security and operational features enterprises require.

F5 Overview

F5 BIG-IP Cloud Edition is currently available for GCP environments and consists of multiple components, including F5 BIG-IQ Centralized Management, F5 BIG-IQ Data Collection, and F5 BIG-IP Per-App Virtual Edition (VE).

  • BIG-IQ Centralized Management: The BIG-IQ (among other services) centralizes the configuration, deployment, and management of Service Scaling Groups (SSGs). Additionally, BIG-IQ provides the tenant portal and hosts the visibility/analytics dashboard.
  • BIG-IQ Data Collection Device (DCD): The DCD handles data collection, processing, and storage.
  • BIG-IP Per-App VEs: Per-App VEs are deployed into SSGs and provide advanced traffic management and web application firewall security on a per-app basis.

I’ll focus on BIG-IP VEs in this document.

BIG-IP VE Supported Platforms

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414

Licensing Models

There are two ways you can run BIG-IP in GCP:

  • Utility Model (PAYG): you pay GCP both for the compute and disk requirements of the instances and for the BIG-IP software license, billed at an hourly rate.
  • BYOL: you pay GCP only for the compute and disk footprint, not for the F5 software license. You must license the device after it launches, either manually or through orchestration.

In addition to choosing between utility and BYOL license models, you’ll also need to choose the licensed features and the throughput level. When taking a BYOL approach, the license will have a maximum throughput level and will be associated with a Good/Better/Best (GBB) package (more info at https://support.f5.com/csp/article/K14810).

Licenses are available for throughput levels of 20 Gbps, 10 Gbps, 5 Gbps, 3 Gbps, 1 Gbps, 200 Mbps, or 25 Mbps.

Throughput depends on a variety of factors, including GCP instance type and region, the configuration of BIG-IP (whether it’s processing internal or external traffic), whether you have a single NIC or multiple NICs, whether you are using GCP cluster and enhanced networking, etc.
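Both licensing models show up as images published by F5 in GCP. As a quick way to see what’s available, here is a hedged sketch that lists those images from the command line; it assumes F5’s public image project is still named f5-7626-networks-public and that PAYG image names encode the GBB package and throughput level:

# list the BIG-IP images published in F5's public GCP project (assumed: f5-7626-networks-public)
gcloud compute images list \
  --project f5-7626-networks-public \
  --no-standard-images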

Number of NICs

In GCP, each interface is attached as a Layer 3 endpoint, so we must add an interface for each subnet. This contrasts with traditional networks, where you can add VLANs to your trunk for each subnet via tagging. Even though we’re in a virtual world, the number of network interfaces (NICs) is still limited by the GCP instance size. The following are common cases (a gcloud sketch follows the list):

  • 1 NIC: for use in one-armed, simple network environment (the single NIC processes both management and data plane traffic).
  • 2 NIC: BIG-IP with 2 NICs for use in environments with a separate management network.
  • 3 NIC: for use in environments with a traditional enterprise architecture of separate management, external, and internal networks. This requires managing static routes.
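As an illustration of the 3-NIC case, the gcloud sketch below attaches one interface per subnet at instance creation time; the subnet names are hypothetical placeholders, and the machine type must be large enough to support three interfaces:

# hypothetical example: deploy a BIG-IP VE with management, external, and internal NICs
gcloud compute instances create bigip-3nic \
  --zone "<<zone>>" \
  --machine-type n1-standard-8 \
  --image "<<f5 big-ip image>>" \
  --image-project f5-7626-networks-public \
  --network-interface subnet=mgmt-subnet \
  --network-interface subnet=external-subnet \
  --network-interface subnet=internal-subnet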

Unsupported Features and Limitations

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414

Network Configuration

The environment I’m going to use is shown in the diagram below:

The FireNet deployment on GCP is covered in detail in the following posts:

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414

Design Patterns

Standalone

A single instance is deployed in this case. Standalone BIG-IP VEs are primarily used for Dev/Test/Staging, as they lack the high availability required for production environments.

Active/Standby

Two instances are deployed in this case: one is active while the second is on standby. This design is referred to as a failover cluster and is primarily used to replicate traditional Active/Standby BIG-IP deployments.
 
In these deployments, an individual BIG-IP VE in the cluster owns (or is Active for) a particular IP address. When failover occurs, the BIG-IP VEs fail over services from one instance to another by remapping IP addresses, routes, etc. based on Active/Standby status. In some solutions, failover is implemented via API (API calls to the cloud platform vs. network protocols like Gratuitous ARP, route updates, and so on). In other solutions, failover is implemented via an upstream service (like a native load balancer) which only sends traffic to the Active instance for that service based on a health monitor.

BIG-IP has adapted and replaced the GARP failover method with API calls to GCP. These API calls toggle ownership of GCP secondary (alias) private IP addresses between devices. Any external IPs (EIs) which map to these secondary IP addresses will then point to the new active device. In other words, floating IPs in BIG-IP terms are equivalent to secondary IPs in the Compute Engine world.
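As a hedged illustration of what such an API call does, the gcloud equivalent below assigns a floating (secondary/alias) IP to the newly active BIG-IP’s NIC (after it has been cleared from the peer); the instance name, zone, and address are hypothetical placeholders:

# hypothetical example: remap the floating IP to the newly active BIG-IP VE
gcloud compute instances network-interfaces update bigip-a \
  --zone "<<zone>>" \
  --network-interface nic0 \
  --aliases "172.21.20.10/32"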

The trade-off of the API-based failover mechanism is an increase in failover time to roughly 10 seconds per EI. This is the time it takes for the changes to propagate in GCP’s network; even so, this downtime is still significantly less than a DNS timeout.

Active/Active

BIG-IP VEs are all Active and are primarily used to scale out an individual L7 service. This type of deployment relies on an upstream service to distribute traffic, such as Cloud DNS, External HTTP(S) Load Balancers, or External TCP/UDP Load Balancers.

GCP Marketplace

I’ll deploy a standalone F5 with a single NIC from Marketplace:

  • the interface is attached to the ingress VPC public subnet

I’ll pick a PAYG 25 Mbps instance:

I’ll accept the suggested machine series and type:

The Networking section is where we specify the information from the previously created ingress VPC, so the F5 is properly injected into the VPC and the networks Aviatrix manages:

Once the information is provided, the deployment runs and creates the BIG-IP instance:

External IP Address

Do not forget to reserve the external IP address allocated during the deployment:
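If you prefer the command line, the sketch below promotes the ephemeral external address to a static (reserved) one; the address name, region, and IP are hypothetical placeholders:

# hypothetical example: promote the ephemeral external IP to a reserved static address
gcloud compute addresses create bigip-ext-ip \
  --region "<<region>>" \
  --addresses "<<external ip allocated at deployment>>"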

F5 Configuration

Once the deployment is complete, we need to configure the BIG-IP following the steps detailed below.

Change Admin password

To change the admin password, connect to the instance from a terminal using gcloud, then run the last two commands below at the BIG-IP (tmsh) prompt:

gcloud compute ssh --zone "<<zone>>" "admin@<<instance name>>" --tunnel-through-iap --project "<<project name>>"
modify auth password admin
save sys config

Once the password is saved, the GUI can be used for the configuration:

With a single-NIC deployment, the F5 BIG-IP GUI listens on port 8443 instead of the default 443.
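If you’d rather not reach the GUI over its public address, the same IAP approach used for SSH also works for the GUI port; a hedged sketch using the same placeholders as before:

# optional: forward the BIG-IP GUI port through IAP, then browse to https://localhost:8443
gcloud compute start-iap-tunnel "<<instance name>>" 8443 \
  --local-host-port=localhost:8443 \
  --zone "<<zone>>" \
  --project "<<project name>>"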

Setup Utility

When we connect to the GUI for the first time, the Setup Utility loads if we are using a BYOL license; otherwise, we can skip to the logical configuration below.

Node

I’ll add the “ce-spoke40-instance” as a Node:

Pools

I’ll create a pool using http as the health monitor:

  • “ce-spoke40-instance” is added as a member of this pool (a tmsh sketch follows):
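For those who prefer the CLI, a minimal tmsh sketch of the node and pool above is shown below; the pool name is hypothetical, and the member address assumes the Compute Engine instance seen later in the firewall logs (172.21.140.3):

create ltm node ce-spoke40-instance address 172.21.140.3
create ltm pool web_pool monitor http members add { ce-spoke40-instance:80 }
save sys config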

Virtual Server

I created a virtual server listening on port 80:

Auto Map or SNAT is not required with a single-NIC design.
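The same virtual server can be sketched in tmsh; the name is hypothetical, and the destination assumes the BIG-IP’s private data-plane address (172.21.20.6), which the external IP maps to:

create ltm virtual vs_http destination 172.21.20.6:80 ip-protocol tcp profiles add { http } pool web_pool
save sys config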

Once the configuration is “finished”, we can see the virtual server status is “green”:

Testing

To test the deployment, we access the EI URL, which hits the NGINX server running on port 80 on the back-end server:
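From a terminal, a quick check could look like the following, with the reserved external IP shown as a placeholder:

# hypothetical check against the BIG-IP's reserved external IP
curl -I http://<<reserved external ip>>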

We can also check the traffic in the firewall logs to make sure FireNet was deployed correctly. The flow of interest is 172.21.20.6 (BIG-IP) -> 172.21.140.3 (Compute Engine instance):

Copilot

FlowIQ can be used to identify/visualize flows ingressing the cloud network:

AppIQ can be used to visualize and/or troubleshoot:

I left the HA gateways down purposely.

Active/Active using GCP NLB

Before creating the NLB, I’ll create an unmanaged instance group and add the BIG-IPs to it:
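The gcloud equivalent is sketched below; the group name, zone, and instance names are hypothetical placeholders:

# hypothetical example: create an unmanaged instance group and add the BIG-IP VEs
gcloud compute instance-groups unmanaged create bigip-ig --zone "<<zone>>"
gcloud compute instance-groups unmanaged add-instances bigip-ig \
  --zone "<<zone>>" \
  --instances "<<bigip-1>>,<<bigip-2>>"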

I’m going to create a single-region NLB to front-end the BIG-IPs:

Back End:

Health Check:

Front End:

  • later, I'll reserve the ephemeral IP allocated to the NLB

Firewall Configuration

GCP sends HTTP health checks from the IP ranges 209.85.152.0/22, 209.85.204.0/22, and 35.191.0.0/16.
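A firewall rule permitting these ranges to reach the BIG-IPs could look like the following sketch; the rule name and network are hypothetical placeholders:

# hypothetical example: allow GCP health-check probes to reach the BIG-IP data port
gcloud compute firewall-rules create allow-gcp-health-checks \
  --network "<<ingress vpc>>" \
  --allow tcp:80 \
  --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16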

F5 Configuration

I create a virtual server using the IP address of the NLB as the destination:

I enabled Auto-Map as well:

The default pool is the same as before:
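Put together, a minimal tmsh sketch of this second virtual server might look as follows; the NLB address is a placeholder and the object names are hypothetical:

create ltm virtual vs_nlb destination <<nlb ip address>>:80 ip-protocol tcp profiles add { http } source-address-translation { type automap } pool web_pool
save sys config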

Testing

To test the deployment, we access the NLB external IP address, which hits the F5; the F5 then forwards the traffic to the NGINX server running on port 80 on the back-end server:

Automation

F5 Google Deployment Manager Templates

F5 provides several GDM templates with options for licensing model, high availability, and auto-scaling at:

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414

Terraform

F5 also provides Terraform modules to deploy BIG-IP. They are located at:

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414

References

https://clouddocs.f5.com/cloud/public/v1/matrix.html#GUID-E24BAA63-87FD-4BAB-BAF3-AEDA4D544414
