-
Aviatrix Notification using WebHooks


NAAMES Photo Essay by NASA Goddard Photo and Video is licensed under CC-BY 2.0
Aviatrix CoPilot
CoPilot leverages the intelligence, advanced network, and security services delivered by Aviatrix’s multi-cloud network platform to provide enterprise cloud network operations teams both familiar day-two operational features such as packet capture, trace route and ping and new operational capabilities specifically built for multi-cloud network environments.
The following previous blog post provides more details:
The following previous posts go into detail on how to deploy Aviatrix:
Aviatrix CoPilot Notifications is where alerts are configured so that you can be notified about changes in your Aviatrix transit network. The alerts can be based on common telemetry data monitored in the network.
The full list of metrics is listed at:
When configuring alerts, you can choose a notification channel of email or Webhook destinations.
Webhooks
CoPilot supports Webhook alerts. Webhooks are user-defined HTTP callbacks. They are usually triggered by some event, such as pushing code to a repository or a comment being posted to a blog. When that event occurs, the source site makes an HTTP request to the URL configured for the webhook. CoPilot alerts expose the following variables (objects):
Name                                    Type
alert                                   object
alert.closed                            bool
alert.metric                            string
alert.name                              string
alert.threshold                         int
alert.unit                              string
event                                   object
event.exceededOrDropped                 string
event.matchingHosts.[(n>=0)]            string
event.matchingHosts.[0]                 string
event.message                           string
event.newlyAffectedHosts                array
event.newlyAffectedHosts.[(n>=0)]       string
event.newlyAffectedHosts.[0]            string
event.receiveSeparateAlert              bool
event.recoveredHosts                    array
event.recoveredHosts.[(n>=0)]           string
event.recoveredHosts.[0]                string
event.timestamp                         string
extra                                   object
extra.hosts                             object
extra.hosts.HOSTNAME                    object
extra.hosts.HOSTNAME.userTags           object
extra.hosts.HOSTNAME.userTags.TAG       string
webhook.name                            string
webhook.secret                          string
webhook.url                             string

You can customize webhooks using the Handlebars templating language. Handlebars uses a template and an input object to generate HTML or other text formats. Handlebars templates look like regular text with embedded Handlebars expressions. The template is located under the Tag/Labels on the recipients configuration:

A Handlebars expression is a {{, some contents, followed by a }}. When the template is executed, these expressions are replaced with values from an input object. More info on Handlebars can be found at https://handlebarsjs.com/
Configuration
I’m going to initially use https://webhook.site to check if my configuration is working properly:
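If you prefer a local target over webhook.site, a minimal Python sketch (standard library only) can receive and print whatever CoPilot posts. The port and the `run` helper are arbitrary choices, not anything CoPilot requires:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the body the webhook sender posts and echo it to the console
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            print(json.dumps(json.loads(body), indent=2))
        except ValueError:
            print(body.decode(errors="replace"))
        self.send_response(200)  # acknowledge delivery
        self.end_headers()

def run(port=8080):
    # Point the CoPilot webhook URL at http://<this-host>:<port>/
    HTTPServer(("", port), WebhookHandler).serve_forever()
```

Any HTTP 200 response is enough to acknowledge the delivery; the handler just dumps the payload so you can inspect the rendered template.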

Webhooks are configured under CoPilot -> Monitor -> Notifications -> Recipients:

We can use the built-in “test” at the end of the Recipient page configuration:

Back to webhook.site we can see the test details:

Customizing the Handlebar Template
I’m going to use the example below:
{
  "status": "{{#if alert.closed}}ok{{else}}critical{{/if}}",
  "check": {{alert.name}},
  "copilotstatus": {{alert.status}},
  "host": {{#if event.receiveSeparateAlert}}
            {{#if event.newlyAffectedHosts}} {{event.newlyAffectedHosts.[0]}} {{else}} {{event.recoveredHosts.[0]}} {{/if}}
          {{else}}
            {{#if event.newlyAffectedHosts}} {{event.newlyAffectedHosts}} {{else}} {{event.recoveredHosts}} {{/if}}
          {{/if}},
  "alert_timestamp": "Received {{alert.metric}} at {{event.timestamp}}"
}
The preview window provides a quick mechanism to visualize the Handlebars template:
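Before pasting a template into CoPilot, it can help to preview plain variable substitutions offline. A rough Python sketch that resolves only simple {{dotted.path}} expressions (block helpers like {{#if}} are not handled, and the sample values below are made up):

```python
import re

def render(template: str, context: dict) -> str:
    """Replace plain {{dotted.path}} expressions with values from context.
    Block helpers ({{#if}}, {{else}}, {{/if}}) are NOT handled here."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]  # walk the dotted path
        return str(value)
    return re.sub(r"\{\{([\w.]+)\}\}", lookup, template)

# Hypothetical sample values, just for previewing
context = {"alert": {"metric": "gw_status", "name": "gw-down"},
           "event": {"timestamp": "2022-10-25T18:41:18Z"}}
print(render("Received {{alert.metric}} at {{event.timestamp}}", context))
```

For full Handlebars semantics, CoPilot's own preview window or the handlebarsjs.com playground remains the reference.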

webhook.site:

Once all the required recipients are created, we can create alerts and associate the webhooks to the alert configuration:

Testing
I shut down a “few” AVX gateways to trigger the gw_status alert:

Because I selected “Receive Separate Notification For Each Host”, we see four requests associated with the “gw_status” down alert:

Creating Severity Levels
Alerts have a status of open or closed, but most enterprises also want to know how severe the incident is. We can accomplish this by creating webhooks with different Handlebars templates:
- alerts are associated with a recipient indicating the severity of the monitored metric, e.g., a gateway down is critical

- while CPU utilization equal to or above 60% generates a warning:
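The severity mapping those templates encode can be sketched as a small lookup. The metric names and thresholds below mirror the examples above but are otherwise arbitrary, not CoPilot defaults:

```python
def severity(metric: str, value: float) -> str:
    """Map a monitored metric/value pair to a severity level.
    Thresholds are illustrative only."""
    if metric == "gw_status" and value == 0:   # gateway down
        return "critical"
    if metric == "cpu_percent" and value >= 60:
        return "warning"
    return "ok"

print(severity("gw_status", 0))      # gateway down is critical
print(severity("cpu_percent", 75))   # high CPU is a warning
```

Each severity level then maps to its own webhook recipient with a matching Handlebars template.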

Thanks Reid for helping me understand CoPilot alerts and webhooks.
References
-
SAP HANA on GCP with Aviatrix


Photo by panumas nikhomkhai on Pexels.com
SAP HANA
SAP HANA is SAP AG’s implementation of in-memory database technology. There are three components within the software group:
- SAP HANA Database (or HANA DB) refers to the database technology itself
- SAP HANA Tools refers to the suite of tools provided by SAP for modeling. It also includes the modeling tools from HANA Studio as well as replication and data transformation tools to move data into HANA DB
- SAP HANA Certified Hardware refers to HANA DB as delivered on certified hardware.
HANA DB takes advantage of the low cost of main memory (RAM), data processing abilities of multi-core processors and the fast data access of solid-state drives relative to traditional hard drives to deliver better performance of analytical and transactional applications. It offers a multi-engine query processing environment
which allows it to support relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semi- and unstructured data management within the same system. HANA DB is 100% ACID (Atomicity, Consistency, Isolation and Durability) compliant. While HANA has variously been called an acronym for Hasso’s New Architecture (a reference to SAP founder Hasso Plattner) and High Performance Analytic Appliance, HANA is a name, not an acronym.
HANA is not SAP’s first in-memory product. Business Warehouse Accelerator (BWA, formerly termed BIA) was designed to accelerate queries by storing BW infocubes in memory. This was followed in 2009 by Explorer Accelerated where SAP combined the Explorer BI tool with BWA as a tool for performing ad-hoc analyses. Other SAP products using in-memory technology were CRM Segmentation, By Design (for analytics) and Enterprise Search (for role based search on structured and unstructured data). All of these were based on the TREX engine.
Aviatrix Overview
Aviatrix is a cloud network platform that brings multi-cloud networking, security, and operational visibility capabilities that go beyond what any cloud service provider offers. Aviatrix software leverages AWS, Azure, GCP and Oracle Cloud APIs to interact with and directly program native cloud networking constructs, abstracting the unique complexities of each cloud to form one network data plane, and adds advanced networking, security and operational features enterprises require.

Aviatrix Configuration
- Create a new vpc (or pick an existing one):

Deploy AVX gws:

And attach to the transit gateway.
As specified in SAP Note 2731110, do not place any network virtual appliance (NVA) in between the application and the database layers for any SAP application stack. Doing so introduces significant data packets processing time and unacceptably slows application performance.
Linux Deployment
The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA, as published in the SAP HANA hardware directory. SAP certifies two Linux distributions for HANA: SuSE and Red Hat:
- I’m going to work with SuSE in this post. This distribution provides SAP-specific capabilities, including pre-set parameters for running SAP effectively.

Certified instance sizes:
- I’m going to install the smallest certified instance for testing (and save a few $)

https://cloud.google.com/solutions/sap/docs/certifications-sap-hana
Instance creation:

- The instance is deployed using the vpc that was created from AVX controller:

For persistent block storage, you can attach Compute Engine persistent disks when you create your VMs or add them to your VMs later.
When you size Compute Engine SSD-based persistent disks for SAP HANA, you need to account not only for the storage requirements of your SAP HANA instance, but also for the performance of the persistent disk. Some of the minimum throughput characteristics that SAP recommends are:
- Read/write on /hana/log of 250 MB/sec with 1 MB I/O sizes
- Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes
- Write activity of at least 250 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes
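A quick sanity check against those minimums can be sketched in Python. The figures are the SAP recommendations quoted above; the disk numbers you feed in would come from your own sizing:

```python
# SAP-recommended minimum throughput (MB/s) from the list above
HANA_MINIMUMS = {
    ("/hana/log", "read"): 250,
    ("/hana/log", "write"): 250,
    ("/hana/data", "read"): 400,
    ("/hana/data", "write"): 250,
}

def shortfalls(disk_throughput: dict) -> list:
    """Return the (volume, operation) pairs that fall short of SAP's minimums."""
    return [key for key, minimum in HANA_MINIMUMS.items()
            if disk_throughput.get(key, 0) < minimum]

# Hypothetical disk: 480 MB/s read, 240 MB/s write on every volume
print(shortfalls({
    ("/hana/log", "read"): 480, ("/hana/log", "write"): 240,
    ("/hana/data", "read"): 480, ("/hana/data", "write"): 240,
}))
```

In this hypothetical case both write figures fall short, which would argue for a larger or faster disk tier.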
SAP support note #2972496 lists the supported file systems for different operating systems and databases, including SAP HANA.
- I’ll create a 1 TB volume, assuming that the /hana/data, /hana/log and /hana/shared volumes are all mapped to the same disk.

https://cloud.google.com/solutions/sap/docs/certifications-sap-hana
Add the disk to the instance:

Create FSs:

Do not forget to add the FSs to /etc/fstab so they mount properly after a reboot.
Install Java (you can download from https://tools.hana.ondemand.com/#cloud):
ricardotrentin@suse-hdb:/var/tmp> sudo rpm -ivh sapjvm-8.1.090-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:sapjvm-8-1.090-1                 ################################# [100%]
Configuring SAP JVM
  JDK Version : 8.1.090
  Directory   : /usr/java/sapjvm_8.1.090
Adapt /usr/java/sapjvm_8_latest pointing to /usr/java/sapjvm_8.1.090
Create symbolic link /opt/sapjvm_8 pointing to /usr/java/sapjvm_8_latest
Done. Have fun :-) ....
Download Software:

If you are downloading the software from the SuSE Linux VM, you will probably need to install VINO. Run yast2 (yast2 is the SuSE management tool) and install VINO:

And enable remote management:

Use the Download Manager in GUI mode to download the SAP HANA 2.0, express edition server only installer:

Install HANA
I’m installing HANA Express, which is slightly different from a full-blown HANA deployment.
- run setup_hdxe.sh as root

Provide a master password:


Install HANA Tools
The SAP HANA studio is a collection of applications for SAP HANA. It enables technical users to manage the SAP HANA database, to create and manage user authorizations, and to create new or modify existing models of data in the SAP HANA database. It is a client tool, which can be used to access local or remote SAP HANA databases. To install SAP HANA Tools, proceed as follows:
- Get an installation of Eclipse:

- In Eclipse, choose in the menu bar Help > Install New Software

- Add the URL https://tools.hana.ondemand.com/2022-06

After restarting Eclipse:

To work with and manage an SAP HANA system in the SAP HANA studio, you must create and configure a connection to the system:

Once the instance is added we can check/monitor its status and manage it:

Checking/Testing
As the HDB admin user, run:
instance-hdb:avxadm> HDB info
USER       PID  PPID %CPU     VSZ     RSS COMMAND
avxadm   16325 16324  0.0  235492    5456 -bash
avxadm   16930 16325  0.0  222784    3416  \_ /bin/sh /usr/sap/AVX/
avxadm   16965 16930  0.0  268556    4176      \_ ps fx -U avxadm -
avxadm    2757     1  0.0   25548    3228 sapstart pf=/usr/sap/AVX/
avxadm    2822  2757  0.0  405716   98064  \_ /usr/sap/AVX/HDB01/in
avxadm    3008  2822  1.8 5090576 3108604      \_ hdbnameserver
avxadm    3379  2822  0.3  845512  178484      \_ hdbcompileserver
avxadm    3402  2822  2.1 5402948 3622292      \_ hdbindexserver -p
avxadm    3554  2822  0.3 1534580  374704      \_ hdbdiserver
avxadm    3557  2822  0.3 1741888  614464      \_ hdbwebdispatcher
avxadm    2151     1  0.0  458096   36440 /usr/sap/AVX/HDB01/exe/sa
avxadm    1980     1  0.0   89548    9908 /usr/lib/systemd/systemd
avxadm    1983  1980  0.0  169956    5180  \_ (sd-pam)
avxadm    2285  1980  0.0   84704    5672  \_ /usr/bin/dbus-daemon

instance-hdb:avxadm> HDB proc
USER       PID  PPID %CPU     VSZ     RSS COMMAND
avxadm   17187 16325  0.0  222784    3396  \_ /bin/sh /usr/sap/AVX/HDB01/HDB proc
avxadm   17224 17187  0.0  222784     520      \_ /bin/sh /usr/sap/AVX/HDB01/HDB proc
avxadm    2757     1  0.0   25548    3228 sapstart pf=/usr/sap/AVX/SYS/profile/AVX_HDB01_instance-hdb.c.rtrentin-01.internal
avxadm    2822  2757  0.0  405716   98064  \_ /usr/sap/AVX/HDB01/instance-hdb.c.rtrentin-01.internal/trace/hdb.sapAVX_HDB01 -d -nw -f /usr/sap/AVX/HDB01/instance-hdb.c.rtrentin-01.internal/daemon.ini pf=/usr/sap/AVX/SYS/profile/AVX_HDB01_instance-hdb.c.rtrentin-01.internal
avxadm    3008  2822  1.8 5090576 3108604      \_ hdbnameserver
avxadm    3379  2822  0.3  845512  178484      \_ hdbcompileserver
avxadm    3402  2822  2.1 5402948 3622292      \_ hdbindexserver -port 30103
avxadm    3554  2822  0.3 1534580  374704      \_ hdbdiserver
avxadm    3557  2822  0.3 1741888  614464      \_ hdbwebdispatcher
avxadm    2151     1  0.0  458096   36440 /usr/sap/AVX/HDB01/exe/sapstartsrv pf=/usr/sap/AVX/SYS/profile/AVX_HDB01_instance-hdb.c.rtrentin-01.internal -D -u avxadm

avxadm@instance-hdb:/usr/sap/AVX/HDB01> /usr/sap/hostctrl/exe//sapcontrol -nr 01 -function GetProcessList

25.10.2022 18:41:18
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2022 10 25 15:12:35, 3:28:43, 4293
hdbcompileserver, HDB Compileserver, GREEN, Running, 2022 10 25 15:13:06, 3:28:12, 4551
hdbdiserver, HDB Deployment Infrastructure Server, GREEN, Running, 2022 10 25 15:18:39, 3:22:39, 7665
hdbnameserver, HDB Nameserver, GREEN, Running, 2022 10 25 15:12:36, 3:28:42, 4315
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2022 10 25 15:13:53, 3:27:25, 4763
hdbindexserver, HDB Indexserver-AVX, GREEN, Running, 2022 10 25 15:13:06, 3:28:12, 4571

CoPilot
AppIQ now discovers potential SAP applications to help SAP Basis engineers quickly diagnose or rule out issues with SAP applications running in their networks.

Terraform
Google has a Terraform module that automates the steps above:
https://cloud.google.com/solutions/sap/docs/sap-hana-deployment-guide-tf?hl=en
References
https://cloud.google.com/solutions/sap/docs/certifications-sap-hana
https://tools.hana.ondemand.com/#hanatools
SAP support note #1944799 – SAP HANA guidelines for SLES operating system installation
SAP support note #2205917 – SAP HANA DB recommended OS settings for SLES 12 for SAP applications
SAP support note #1984787 – SUSE Linux Enterprise Server 12: installation notes
SAP support note #171356 – SAP software on Linux: General information
-
Site-2-Cloud connectivity with FortiGate and Aviatrix

The diagram below shows the environment I’m going to test:

Active-Standby
This option supports connecting AVX transit gateways to on-prem with only one tunnel active and the other as backup. The use case is a deployment scenario where the on-prem device, such as a firewall, does not support asymmetric routing across two tunnels. Aviatrix configuration:
- ECMP is disabled
- Active-Standby is enabled

The active/standby configuration will produce the following configuration:
- transit-us-east-1 is the primary path with a metric of 100
- transit-us-east-1 to transit-us-east-1-hagw has a metric of 200
- transit-us-east-1-ha is the backup path with a metric of 300
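The selection these metrics drive can be sketched simply: the path with the lowest metric carries traffic, and the others take over only when it disappears. A toy illustration (path names and metrics taken from the list above):

```python
def active_path(paths: dict) -> str:
    """Pick the path with the lowest metric (lower wins, like a route metric)."""
    return min(paths, key=paths.get)

paths = {"transit-us-east-1": 100,
         "transit-us-east-1-hagw": 200,
         "transit-us-east-1-ha": 300}
print(active_path(paths))        # primary path wins

del paths["transit-us-east-1"]   # primary fails
print(active_path(paths))        # next-lowest metric takes over
```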

FortiGate config
To align the FortiGate configuration to the AVX gateways, we need to use the BGP Weight attribute to prefer a route received over the AVX primary transit gateway GRE tunnel over the AVX transit gateway HA GRE tunnel.
GRE:
edit "toAVX"
    set vdom "root"
    set ip 169.254.102.229 255.255.255.255
    set allowaccess ping
    set type tunnel
    set remote-ip 169.254.102.230 255.255.255.252
    set interface "port1"
next
edit "toAVX-HA"
    set vdom "root"
    set ip 169.254.62.33 255.255.255.255
    set allowaccess ping
    set type tunnel
    set remote-ip 169.254.62.34 255.255.255.252
    set interface "port1"
end
BGP:
config router bgp
    set as 65500
    set router-id 192.168.1.1
    config neighbor
        edit "169.254.102.230"
            set soft-reconfiguration enable
            set remote-as 65501
            set update-source "192.168.1.1"
        next
        edit "169.254.62.34"
            set soft-reconfiguration enable
            set remote-as 65501
            set update-source "192.168.1.1"
        next
    end
    config redistribute "connected"
        set status enable
    end
end
BGP traffic engineering:
config router prefix-list
    edit "prf-10.0.0.0-8"
        config rule
            edit 2
                set prefix 10.0.0.0 0.0.0.0
                set ge 8
                unset le
            next
        end
    next
end

config router route-map
    edit "rt-map-10.0.0.0-8"
        config rule
            edit 1
                set match-ip-address "prf-10.0.0.0-8"
                set set-weight 40000
            next
        end
    next
end

config router bgp
    set as 65500
    set router-id 192.168.1.1
    config neighbor
        edit "169.254.102.230"
            set soft-reconfiguration enable
            set remote-as 65501
            set route-map-in "rt-map-10.0.0.0-8"
        next
        edit "169.254.62.34"
            set soft-reconfiguration enable
            set remote-as 65501
        next
    end
end
Checking
execute router clear bgp ip 169.254.102.230 soft
get router info bgp network
Testing
Shutting down the primary AVX gateway “promotes” the “surviving” routes from the AVX HA gateway GRE tunnel to the preferred path, and those routes are installed into the FortiGate routing table:

Active/Active
This option supports connecting AVX transit gateways to on-prem with more than one tunnel active. Aviatrix configuration:
- BGP ECMP is enabled
- Active-Standby is disabled

The active/active configuration produces the following:
- transit-us-east-1 is the primary path with a metric of 100
- transit-us-east-1 to transit-us-east-1-hagw has a metric of 200
- transit-us-east-1-ha is also a primary path with a metric of 100

FortiGate config
To align the FortiGate configuration to the AVX gateways, we need to configure ECMP:
config router bgp
    set as 65500
    set router-id 192.168.1.1
    set ebgp-multipath enable
    set ibgp-multipath enable
    set additional-path enable
    config neighbor
        edit "169.254.102.230"
            set soft-reconfiguration enable
            set remote-as 65501
        next
        edit "169.254.62.34"
            set soft-reconfiguration enable
            set remote-as 65501
        next
    end
    config redistribute "connected"
        set status enable
    end
end
Checking


Asymmetric Routing
With ECMP enabled on both AVX and the FortiGate, half of the traffic gets dropped. This happens on flows that take different paths when egressing and ingressing the FortiGate GRE tunnels. If a FortiGate receives the response packets, but not the requests, by default it blocks the packets as invalid. This behavior is known as asymmetric routing.
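The drop behavior can be illustrated with a toy stateful check. A real FortiGate session table tracks far more state; this only mirrors the request-before-reply rule:

```python
class StatefulFirewall:
    """Toy session table: replies pass only if the request was seen here."""
    def __init__(self):
        self.sessions = set()

    def outbound(self, src, dst):
        self.sessions.add((src, dst))  # record the request flow

    def inbound(self, src, dst):
        # A reply (src, dst) matches a session recorded as (dst, src)
        return (dst, src) in self.sessions

fw_a = StatefulFirewall()   # tunnel the request egresses
fw_b = StatefulFirewall()   # tunnel the reply ingresses (ECMP picked another path)

fw_a.outbound("10.1.1.10", "10.3.0.5")
print(fw_a.inbound("10.3.0.5", "10.1.1.10"))  # True: symmetric flow passes
print(fw_b.inbound("10.3.0.5", "10.1.1.10"))  # False: asymmetric reply dropped
```

When ECMP hashes the reply onto a tunnel whose firewall never saw the request, the session lookup fails and the packet is treated as invalid.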

It is not recommended, but if the FortiGate is required to permit asymmetric routing, it can be configured with the following command:
config system settings
    set asymroute enable
end
References
List of all useful BGP debug and verification commands:
show router bgp
get router info bgp summary
get router info bgp network
get router info routing-table bgp
get router info bgp neighbors
get router info bgp neighbors advertised-routes
get router info bgp neighbors routes
get router info bgp neighbors received-routes
diagnose sys tcpsock | grep 179
diagnose ip router bgp level info
diagnose ip router bgp all enable
exec router clear bgp all
https://community.fortinet.com/t5/FortiGate/Technical-Tip-BGP-route-selection-process/ta-p/195932
-
Moving an AWS brownfield to Aviatrix

The brownfield environment is shown in the diagram below (I clicked my way through the deployment, I have to confess):

ASA:
I modified a few items from the configuration generated by AWS, mainly:
interface Management0/0
 no management-only
 nameif management
 security-level 0
 ip address dhcp setroute
!
interface TenGigabitEthernet0/0
 nameif internal
 security-level 100
 ip address dhcp setroute
!
interface Tunnel100
 nameif tunnel100
 ip address 169.254.48.110 255.255.255.252
 tunnel source interface management
 tunnel destination 3.208.50.157
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile PROFILE1
!
interface Tunnel200
 nameif tunnel200
 ip address 169.254.242.78 255.255.255.252
 tunnel source interface management
 tunnel destination 3.220.28.236
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile PROFILE1
!
interface Tunnel300
 nameif tunnel300
 ip address 169.254.16.234 255.255.255.252
 tunnel source interface management
 tunnel destination 52.21.63.18
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile PROFILE1
!
interface Tunnel400
 nameif tunnel400
 ip address 169.254.92.46 255.255.255.252
 tunnel source interface management
 tunnel destination 54.210.195.29
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile PROFILE1
!
router bgp 65000
 bgp log-neighbor-changes
 bgp graceful-restart
 address-family ipv4 unicast
  neighbor 169.254.92.45 remote-as 64512
  neighbor 169.254.92.45 ebgp-multihop 255
  neighbor 169.254.92.45 timers 10 30 30
  neighbor 169.254.92.45 activate
  neighbor 169.254.242.77 remote-as 64512
  neighbor 169.254.242.77 ebgp-multihop 255
  neighbor 169.254.242.77 timers 10 30 30
  neighbor 169.254.242.77 activate
  neighbor 169.254.48.109 remote-as 64512
  neighbor 169.254.48.109 ebgp-multihop 255
  neighbor 169.254.48.109 timers 10 30 30
  neighbor 169.254.48.109 activate
  neighbor 169.254.16.233 remote-as 64512
  neighbor 169.254.16.233 ebgp-multihop 255
  neighbor 169.254.16.233 timers 10 30 30
  neighbor 169.254.16.233 activate
  redistribute connected
  no auto-summary
  no synchronization
 exit-address-family
!
crypto ipsec ikev2 ipsec-proposal SET1
 protocol esp encryption aes
 protocol esp integrity sha-1
crypto ipsec profile PROFILE1
 set ikev2 ipsec-proposal SET1
 set pfs group14
 set security-association lifetime seconds 3600
crypto ipsec security-association replay window-size 128
crypto ipsec security-association pmtu-aging infinite
crypto ipsec df-bit clear-df management
crypto ikev2 policy 200
 encryption aes
 integrity sha
 group 14
 prf sha256
 lifetime seconds 28800
crypto ikev2 policy 201
 encryption aes
 integrity sha
 group 14
 prf sha256
 lifetime seconds 28800
crypto ikev2 enable management
!
group-policy AWS internal
group-policy AWS attributes
 vpn-tunnel-protocol ikev2
tunnel-group 3.208.50.157 type ipsec-l2l
tunnel-group 3.208.50.157 general-attributes
 default-group-policy AWS
tunnel-group 3.208.50.157 ipsec-attributes
 isakmp keepalive threshold 10 retry 10
 ikev2 remote-authentication pre-shared-key *****
 ikev2 local-authentication pre-shared-key *****
tunnel-group 3.220.28.236 type ipsec-l2l
tunnel-group 3.220.28.236 general-attributes
 default-group-policy AWS
tunnel-group 3.220.28.236 ipsec-attributes
 isakmp keepalive threshold 10 retry 10
 ikev2 remote-authentication pre-shared-key *****
 ikev2 local-authentication pre-shared-key *****
tunnel-group 52.21.63.18 type ipsec-l2l
tunnel-group 52.21.63.18 general-attributes
 default-group-policy AWS
tunnel-group 52.21.63.18 ipsec-attributes
 isakmp keepalive threshold 10 retry 10
 ikev2 remote-authentication pre-shared-key *****
 ikev2 local-authentication pre-shared-key *****
tunnel-group 54.210.195.29 type ipsec-l2l
tunnel-group 54.210.195.29 general-attributes
 default-group-policy AWS
tunnel-group 54.210.195.29 ipsec-attributes
 isakmp keepalive threshold 10 retry 10
 ikev2 remote-authentication pre-shared-key *****
 ikev2 local-authentication pre-shared-key *****
Testing
Once the configuration is applied to the ASAv, the Site-to-Site VPN connections come online after a few seconds:

A VM running behind the on-prem ASA firewall can ping the VMs running on AWS:

Aviatrix Deployment
Transit and Firenet can be deployed using the following code:
Site-2-Cloud
Once the AVX transit is deployed, the next step is to connect it to on-prem:

S2C config:


Checking:

The config for the remote device is downloaded and then applied to the “on-prem” ASA:

Once the config is applied we can see the new bgp peers:
- 169.254.38.66 and 169.254.45.178

Because the on-prem ASA actually runs on AWS, I had to tell Aviatrix to use the private IP address, not the default public IP, as the remote identifier.

Deploy Gateways
We will “inject” AVX gateways into the existing VPCs using “empty” subnets. If there is no room for a new subnet, we can add another CIDR to the VPC:

We can also deploy spokes into an existing VPC using Terraform. Once the gateways are deployed, we have the following setup:

Gateways are not attached to the transit at this time.
Disable Spoke Advertisement
Preparing for the spoke attachment, we can disable the VPC CIDR advertisement using “Customize Spoke Advertised VPC CIDRs” and list only the subnet where the gateways were deployed:

Attachment
The environment once the gateways are attached is shown below:

Checking the ASA routes:

Checking vpc routes:
- Existing routes/more specifics point to the VGW
- RFC1918 routes were added to the routing table pointing to the aviatrix gateway

Cut-Over
There are two, maybe three, steps in this phase of the overall migration process:
- advertise all prefixes from Aviatrix spokes
- shutdown ASA tunnels
- disable route propagation
After those steps:
- firewall


- VPC

Clean up
The last step is to remove VPN connections and VGW from the environment:

There are several variations of the migration, and the entire process can also be automated to cause minimal, if any, disruption to the app flows.
References
-
“Terraform-ing” your way towards Secure Multi Cloud Networking with Aviatrix

In this post I’m going to Terraform an entire Aviatrix deployment, mainly the controller and CoPilot. There is always discussion around deploying the controller and CoPilot using automation, but I assume that if you are reading this post you are already convinced.
Management Network
I’m creating a new management network and subnet. This step is not necessary, but it helps validate that the GCP controller Terraform module can deploy a controller into an existing VPC:
resource "google_compute_network" "google_compute_network-aviatrix_mgmt_vpc" {
  project                 = var.project
  name                    = var.aviatrix_mgmt_vpc
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "google_compute_subnetwork-aviatrix_mgmt_network" {
  name          = var.aviatrix_mgmt_network
  ip_cidr_range = var.aviatrix_mgmt_network_cidr
  region        = var.region
  network       = google_compute_network.google_compute_network-aviatrix_mgmt_vpc.id
}
Controller Deployment
The module gcp-controller allows you to launch the Aviatrix Controller and create the Aviatrix access account connecting to the Controller in Google Cloud Platform:
module "aviatrix-controller-gcp" {
  depends_on = [
    google_compute_subnetwork.google_compute_subnetwork-aviatrix_mgmt_network
  ]
  source                              = "AviatrixSystems/gcp-controller/aviatrix"
  access_account_name                 = var.aviatrix_access_account
  aviatrix_controller_admin_email     = var.aviatrix_controller_admin_email
  aviatrix_controller_admin_password  = var.aviatrix_controller_admin_password
  aviatrix_customer_id                = var.aviatrix_customer_id
  gcloud_project_credentials_filepath = var.gcloud_project_credentials_filepath
  incoming_ssl_cidrs                  = var.incoming_ssl_cidrs
  use_existing_network                = true
  network_name                        = google_compute_network.google_compute_network-aviatrix_mgmt_vpc.name
  subnet_name                         = google_compute_subnetwork.google_compute_subnetwork-aviatrix_mgmt_network.name
}
My terraform.tfvars looks like:
aviatrix_mgmt_vpc                   = "aviatrix-mgmt-vpc"
aviatrix_mgmt_network               = "aviatrix-mgmt-network"
aviatrix_mgmt_network_cidr          = "192.168.254.0/24"
aviatrix_controller_admin_email     = "rtrentin@aviatrix.com"
aviatrix_controller_admin_password  = "mytopsecretpassword"
aviatrix_customer_id                = "avx-x-x.x"
aviatrix_access_account             = "test-lab-aviatrix-gcp"
project                             = "rtrentin-01"
region                              = "us-central1"
gcloud_project_credentials_filepath = "/Users/ricardotrentin/.gcp/rtrentin-01-6cxxdcdxxb84.json"
incoming_ssl_cidrs                  = ["0.0.0.0/0"]
I’m using a Google Cloud Service Account to authenticate. The credentials file is located at gcloud_project_credentials_filepath. Before applying the Terraform file, we need to execute a few steps:
gh repo clone AviatrixSystems/terraform-aviatrix-gcp-controller
cd terraform-aviatrix-gcp-controller
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export GOOGLE_APPLICATION_CREDENTIALS="path to credential file"
Once the requirements are satisfied, we can run terraform apply:
(venv) ricardotrentin@RicardontinsMBP controller % terraform state list
google_compute_network.google_compute_network-aviatrix_mgmt_vpc
google_compute_subnetwork.google_compute_subnetwork-aviatrix_mgmt_network
module.aviatrix-controller-gcp.data.google_compute_network.controller_network[0]
module.aviatrix-controller-gcp.data.google_compute_subnetwork.controller_subnet[0]
module.aviatrix-controller-gcp.module.aviatrix-controller-build.google_compute_firewall.controller_firewall
module.aviatrix-controller-gcp.module.aviatrix-controller-build.google_compute_instance.controller
module.aviatrix-controller-gcp.module.aviatrix-controller-initialize.null_resource.run_script
module.aviatrix-controller-gcp.module.aviatrix-controller-ip-address.google_compute_address.ip_address
Checking

Access accounts:

Software version:

References
https://registry.terraform.io/modules/AviatrixSystems/gcp-controller/aviatrix/latest
-
Establishing Multiple External Connectivity using Aviatrix Site-2-Cloud (S2C)

The premise of this design is to establish a backup path over the internet to protect application flows that still leverage on-prem, and/or customers sitting on a main campus accessing apps living in the cloud:

As I don’t have a DX circuit, I’m going to use a Site-to-Site VPN to simulate it; Site-2-Cloud from the AVX transit gateways will provide backup to the DX connection.
Primary Configuration
The primary connection uses DX, and there are a few supported scenarios for integrating it with Aviatrix. I’m going to leverage private interfaces and connect them to a VGW in the same region as my AVX transit:


Once the VPN Gateway is created, we attach it to the transit hub:

From the Multi-cloud Transit External menu we connect to the remote branch device (a CSR1000v running on an Equinix Metal server):

Checking the VPN connections:

Checking on the remote side:
- tunnels are up
- bgp peers are connected and exchanging routes

AWS advertises 172.16.0.0/16 (AVX transit) using different metrics 100 and 200 respectively.
The next step is to build an overlay between the AVX transit gateways and the remote device but forcing it through the previously created tunnel (as it is supposed to work as a DX connection):

A static route helps the AVX transit gateways reach the private interface of the remote device:

Enabling route propagation on the VGW would also satisfy the need, but it would bring other prefixes into the VPC route table (we can always filter what we advertise from on-prem).

Limiting the number of routes received
On the on-prem router we can create a prefix list to limit the routes advertised towards the AWS VGW:
ip prefix-list router-to-vgw seq 10 permit 192.168.11.1/32
The prefix-list is then used on the BGP peer:
router bgp 361XX
 bgp log-neighbor-changes
 bgp graceful-restart
 neighbor 169.254.161.193 remote-as 64512
 neighbor 169.254.161.193 ebgp-multihop 255
 neighbor 169.254.161.193 update-source Tunnel1
 neighbor 169.254.231.253 remote-as 64512
 neighbor 169.254.231.253 ebgp-multihop 255
 neighbor 169.254.231.253 update-source Tunnel2
 !
 address-family ipv4
  network 192.168.100.0
  redistribute connected
  neighbor 169.254.161.193 activate
  neighbor 169.254.161.193 prefix-list router-to-vgw out
  neighbor 169.254.231.253 activate
  neighbor 169.254.231.253 prefix-list router-to-vgw out
  maximum-paths 4
 exit-address-family
On the AWS side we can check that only the prefix permitted by the prefix-list is learned:
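The effect of the outbound prefix-list can be sketched with Python's ipaddress module. The permit entry and candidate routes mirror the config above; a single exact-match permit with an implicit deny for everything else:

```python
import ipaddress

def advertised(routes, permit):
    """Keep only routes that exactly match a permitted prefix,
    mimicking an outbound prefix-list with implicit deny."""
    allowed = {ipaddress.ip_network(p) for p in permit}
    return [r for r in routes if ipaddress.ip_network(r) in allowed]

routes = ["192.168.11.1/32", "192.168.100.0/24", "10.0.0.0/8"]
print(advertised(routes, ["192.168.11.1/32"]))  # only the permitted /32 survives
```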

Site-2-Cloud configuration using private network addresses

I’m using a GRE tunnel in this case. In a real-life scenario we could also go with IPsec, depending on the security requirements for data in transit. After downloading and applying the config:

Tunnels 101 and 102 are up. Those are GRE tunnels using private IP addresses:

At this point we have connectivity between on-prem and the Aviatrix fabric using a “private” circuit.
Secondary Configuration
The secondary connection leverages the internet running BGP on top of a pair of IPSec tunnels:


We can download the configuration from the Site-2-Cloud menu:


Applying the config to the remote device:

Traffic “Engineering”
My spoke (10.3.0.0/24) is reachable through four paths:
- 2 GRE tunnels
- 2 IPSEC tunnels

There are a few parameters under Multi Cloud Transit -> Advanced Config that allow us to customize the BGP process that runs on the AVX gateways:

- Manual Advertise Routes: allows us to manually advertise routes on each BGP connection.
- Preserve AS Path: This field is applicable to both Gateway Manual BGP Advertised Network List and Connection Manual BGP Advertised Network List. When disabled, behavior defaults to the AS path being stripped during BGP route advertisements from transit or spoke gateways to neighbors. When enabled, AS Path is preserved.
- Gateway AS Path Prepend: we can insert BGP AS_PATH on the Aviatrix Transit Gateway to customize the BGP AP_PATH field when it advertises to peer devices.
- Connection AS Path Prepend: we can customize AS_PATH per connection
I’m going to use the Connection AS Path Prepend:

Once the configuration is changed we can observe on the CSR1000v that another AS was added to the path:
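The same prepend can also be driven from Terraform. A hedged sketch, assuming the `prepend_as_path` argument of the Aviatrix provider's external device connection resource; names, IPs, and the remote AS number are placeholders, not the lab values:

```hcl
# Sketch with placeholder names, IPs, and AS numbers - not the exact lab config.
resource "aviatrix_transit_external_device_conn" "to_csr" {
  vpc_id            = "vpc-0123456789abcdef0"
  connection_name   = "to-csr1000v"
  gw_name           = "aws-us-east-1-transit"
  connection_type   = "bgp"
  remote_gateway_ip = "203.0.113.10"
  bgp_local_as_num  = "64512"
  bgp_remote_as_num = "65001" # placeholder remote AS

  # Prepend the local AS twice on this connection so its paths become
  # less preferred than the primary connection's.
  prepend_as_path = ["64512", "64512"]
}
```

The prepend lengthens the AS_PATH only on this connection, which is what makes per-connection traffic engineering possible without touching the remote device.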

The following page details the mechanism BGP uses to select a path over others and how to influence BGP:
https://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13753-25.html
References
https://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13753-25.html
-
Replacing Native NAT Gateways with Aviatrix Spoke Gateways

In this post I’m going to transfer the functionality of a couple of native NAT gateways to Aviatrix while preserving the NAT GWs’ IP addresses. If you need a refresher on AVX egress capabilities, please take a look at:
AVX spoke gateways can be used for egress in a distributed model by customizing the SNAT functionality.
Elastic IP (EIP)
An Elastic IP address is a static IPv4 address which is reachable from the internet. An Elastic IP address is allocated to your AWS account, and is yours until you release it.
NAT Gateways
A NAT gateway is a Network Address Translation (NAT) service providing internet access to instances in a private subnet. The NAT gateway replaces the source IP address of the instances with the IP address of the NAT gateway.
Aviatrix EIPs
Aviatrix Controller creates EIPs when creating gateways and applies tags as shown below:
- Backup
- Aviatrix-Created-Resource
- Type
- Name
- Controller

The gateway EIP is used in several places, such as security rules, syslog, and NetFlow, among others. The controller is responsible for managing it.
Using terraform:
resource "aws_eip" "aws_eip-nat" {
  count = 2
  lifecycle {
    ignore_changes = [tags, ]
  }
}

NAT Gateways Deployment
Two GWs are deployed on different AZs:

Using terraform:
resource "aws_nat_gateway" "aws_nat_gateway" {
  count         = var.aws_nat == true ? 2 : 0
  allocation_id = aws_eip.aws_eip-nat["${count.index}"].id
  subnet_id     = data.aws_subnet.aws_subnet_prefix["${count.index}"].id
}

Route table (private):

Using terraform:
resource "aws_route" "aws_route-nat" {
  count                  = var.aws_nat == true ? 2 : 0
  route_table_id         = data.aws_route_table.aws_route_table-private["${count.index}"].id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.aws_nat_gateway["${count.index}"].id

  depends_on = [aws_nat_gateway.aws_nat_gateway]
}

Aviatrix Gateways Deployment
resource "aviatrix_spoke_gateway" "aviatrix_spoke_gateway-gateway" {
  depends_on = [aws_nat_gateway.aws_nat_gateway]

  count                             = var.aws_nat == true ? 0 : 1
  cloud_type                        = "1"
  account_name                      = var.account
  gw_name                           = var.vpc_name
  vpc_id                            = data.aws_vpc.vpc_name.id
  vpc_reg                           = var.region
  gw_size                           = var.gw_size
  subnet                            = data.aws_subnet.aws_subnet_public-prefix[0].cidr_block
  ha_subnet                         = data.aws_subnet.aws_subnet_public-prefix[1].cidr_block
  ha_gw_size                        = var.gw_size
  manage_transit_gateway_attachment = false
  allocate_new_eip                  = false
  eip                               = aws_eip.aws_eip-nat[0].public_ip
  ha_eip                            = aws_eip.aws_eip-nat[1].public_ip
}

Once the gateways are provisioned we need to create the custom SNAT policy:
resource "aviatrix_gateway_snat" "aviatrix_gateway_snat-egress" {
  depends_on = [aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway]

  count     = var.aws_nat == true ? 0 : 1
  gw_name   = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway[0].gw_name
  snat_mode = "customized_snat"

  snat_policy {
    src_cidr    = data.aws_vpc.vpc_name.cidr_block
    src_port    = ""
    dst_cidr    = ""
    dst_port    = ""
    protocol    = "all"
    interface   = "eth0"
    connection  = "None"
    mark        = ""
    snat_ips    = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway[0].private_ip
    snat_port   = ""
    exclude_rtb = ""
  }
}

resource "aviatrix_gateway_snat" "aviatrix_gateway_snat-egress-ha" {
  depends_on = [aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway]

  count     = var.aws_nat == true ? 0 : 1
  gw_name   = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway[0].ha_gw_name
  snat_mode = "customized_snat"

  snat_policy {
    src_cidr    = data.aws_vpc.vpc_name.cidr_block
    src_port    = ""
    dst_cidr    = ""
    dst_port    = ""
    protocol    = "all"
    interface   = "eth0"
    connection  = "None"
    mark        = ""
    snat_ips    = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway[0].ha_private_ip
    snat_port   = ""
    exclude_rtb = ""
  }
}

The “apply route” flag takes care of creating the proper route to bring the egress traffic to the gateway:

The last step is to attach them to the transit:
resource "aviatrix_spoke_transit_attachment" "aviatrix_spoke_transit_attachment-gateway" {
  depends_on = [aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway]

  count           = var.aws_nat == true ? 0 : 1
  spoke_gw_name   = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway[0].gw_name
  transit_gw_name = var.transit_gw
}

Extras
Data:
data "aws_vpc" "vpc_name" {
  filter {
    name   = "tag:Name"
    values = [var.vpc_name]
  }
}

data "aws_subnets" "aws_subnets_public" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc_name.id]
  }
  tags = {
    Name = "*public*"
  }
}

data "aws_subnet" "aws_subnet_public-prefix" {
  count = length(data.aws_subnets.aws_subnets_public.ids)
  id    = tolist(data.aws_subnets.aws_subnets_public.ids)[count.index]
}

data "aws_subnets" "aws_subnets_private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc_name.id]
  }
  tags = {
    Name = "*private*"
  }
}

data "aws_subnet" "aws_subnet_private-prefix" {
  count = length(data.aws_subnets.aws_subnets_private.ids)
  id    = tolist(data.aws_subnets.aws_subnets_private.ids)[count.index]
}

data "aws_route_table" "aws_route_table-private" {
  count     = length(data.aws_subnets.aws_subnets_private.ids)
  subnet_id = tolist(data.aws_subnets.aws_subnets_private.ids)[count.index]
}

The process is controlled by a variable (aws_nat) defined in the terraform.tfvars file:
controller_ip = ":)"
username      = "admin"
password      = ":)"
account       = ":)"
region        = "us-east-1"
vpc_name      = "other-apps-vpc"
transit_gw    = "aws-us-east-1-transit"
gw_size       = "t3.small"
aws_nat       = false

Setting aws_nat to false triggers the migration, while setting it to true fails back to the native NAT gateways. Because EIPs can only be allocated during gateway creation, setting aws_nat to true will destroy the AVX gateways. If the EIPs don’t need to be repurposed, the spoke gateways do not need to be destroyed:
resource "aviatrix_spoke_gateway" "aviatrix_spoke_gateway-gateway" {
  cloud_type                        = "1"
  account_name                      = var.account
  gw_name                           = var.vpc_name
  vpc_id                            = data.aws_vpc.vpc_name.id
  vpc_reg                           = var.region
  gw_size                           = var.gw_size
  subnet                            = data.aws_subnet.aws_subnet_public-prefix[0].cidr_block
  ha_subnet                         = data.aws_subnet.aws_subnet_public-prefix[1].cidr_block
  ha_gw_size                        = var.gw_size
  manage_transit_gateway_attachment = false
  allocate_new_eip                  = true
}

and can be attached independently of the NAT gateways’ existence:
resource "aviatrix_spoke_transit_attachment" "aviatrix_spoke_transit_attachment-gateway" {
  depends_on = [aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway]

  spoke_gw_name   = aviatrix_spoke_gateway.aviatrix_spoke_gateway-gateway.gw_name
  transit_gw_name = var.transit_gw
}

References
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
-
5 min RTO with Aviatrix and Terraform


Disaster recovery involves a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.
The Recovery Time Objective (RTO) is the targeted duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.
I covered using Aviatrix to address the challenges of DR/BC before:
In this new blog I address a new set of requirements:
- remote branches that do not support multiple tunnels
- remote branches with overlapping IPs
- applications with hard-coded IPs (different app instances must run with the same IP address)
Proposed Design
The proposed solution has the following major points:
- a set of resources is available only to the active region: vnet, gateways, ipsec tunnels, gateway attachments, and route propagation
- a set of resources is in standby on the non-active region
- Mapped NAT to overcome IP overlap
- terraform is used to manually switch over the active and standby regions (thanks to Chris for the idea of using terraform state to take care of that, and thanks to Dennis for always helping me with terraform)

Terraform and Aviatrix Provider to the rescue
Site-2-Cloud terraform:
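A minimal sketch of what such a Site-2-Cloud resource can look like — the arguments, names, CIDRs, and IPs below are illustrative assumptions on top of the Aviatrix provider's `aviatrix_site2cloud` resource, not the exact code:

```hcl
# Illustrative sketch only - names, CIDRs, and IPs are placeholders.
resource "aviatrix_site2cloud" "branch1" {
  # Create the connection only in the active region.
  count = var.region_active == "west" ? 1 : 0

  vpc_id                     = var.west_vnet_id
  connection_name            = "branch-1"
  connection_type            = "mapped" # Mapped NAT to overcome IP overlap
  remote_gateway_type        = "generic"
  tunnel_type                = "policy"
  primary_cloud_gateway_name = "vpn-west"
  remote_gateway_ip          = "198.51.100.20"
  remote_subnet_cidr         = "10.10.0.0/24"
  remote_subnet_virtual      = "10.210.0.0/24"
  local_subnet_cidr          = "10.3.0.0/24"
  local_subnet_virtual       = "10.203.0.0/24"
}
```

The mapped connection type is what translates the overlapping branch prefix into a unique virtual prefix inside the fabric.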
The code above creates a Site-2-Cloud connection to an existing AVX gateway, but the connection is only created in the active region thanks to the expression count = var.region_active == "west" ? 1 : 0. The active region is determined by the value of the variable region_active declared in terraform.tfvars.
The same principle is used to advertise the remote branch prefixes to the AVX fabric from the proper region using the included_advertised_spoke_routes variable:
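As a rough illustration of that principle — the gateway name and variables below are hypothetical, and this assumes the `included_advertised_spoke_routes` argument of the Aviatrix provider's spoke gateway resource:

```hcl
# Hypothetical names; only the active region's VPN spoke gateway
# advertises the remote branch prefix into the fabric.
resource "aviatrix_spoke_gateway" "vpn_west" {
  cloud_type   = "8" # Azure (assumption)
  account_name = var.account
  gw_name      = "vpn-west"
  vpc_id       = var.west_vnet_id
  vpc_reg      = "West US 2"
  gw_size      = var.gw_size
  subnet       = var.west_subnet_cidr

  # Advertise the branch prefix only while "west" is the active region.
  included_advertised_spoke_routes = var.region_active == "west" ? var.branch_cidr : ""
}
```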
Because the applications require the same IP addresses, only one vnet will be attached to the transit:
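Sketched in the same style (hypothetical names), the attachment itself can be made conditional so only the active region's vnet is attached:

```hcl
# Hypothetical names; the attachment exists only in the active region.
resource "aviatrix_spoke_transit_attachment" "west" {
  count           = var.region_active == "west" ? 1 : 0
  spoke_gw_name   = "apps-west-spoke"
  transit_gw_name = "west-transit"
}
```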
If a need exists to switch over from one region to another, the failover is as simple as changing the value of region_active in terraform.tfvars and running terraform apply. Terraform will destroy the site-2-cloud connection in the previously active region, detach the workload vnet from the transit, and withdraw the remote branch prefix from the vpn spoke gateway. Terraform will also create the new objects in the now-active region.
References
-
Using Azure Log Analytics with Aviatrix


Photo by PhotoMIX Company on Pexels.com Special thanks to Jorge, Manny, and Alex!
What is Log Analytics
Log Analytics is a SaaS offering from Microsoft that helps you collect and report against data generated by resources in Azure or from your on-premises environment. It is a very powerful tool that can hold and analyze millions of records using the Kusto query language.
Workspace
Log Analytics is a tool in the Azure portal that’s used to edit and run log queries with data in Azure Monitor Logs.
A Log Analytics workspace is a unique environment for log data from Azure Monitor and other Azure services, such as Microsoft Sentinel and Microsoft Defender for Cloud. Each workspace has its own data repository and configuration but might combine data from multiple services.




Log Forwarder
To ingest syslog into Log Analytics from Aviatrix appliances, on which you can’t install the Log Analytics agent directly as of today, you’ll need a Linux machine that collects the logs from the controller and gateways and forwards them to the Microsoft Log Analytics workspace. This machine has two components that take part in this process:
- A syslog daemon
- The Log Analytics Agent (also known as the OMS Agent)

https://docs.microsoft.com/en-us/azure/azure-monitor/agents/data-sources-syslog

Linux Deployment
I’m going to use Ubuntu 20.04 LTS on an x64 VM:

NSG configuration:
- create an inbound security rule allowing AVX Controller and Gateways to access the VM on tcp port 514
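That NSG rule can also be expressed in Terraform — a sketch assuming the `azurerm` provider, with placeholder resource group, NSG name, and source addresses (in practice the source prefixes would be the controller and gateway EIPs):

```hcl
# Sketch with placeholder names and addresses.
resource "azurerm_network_security_rule" "syslog_in" {
  name                        = "allow-aviatrix-syslog"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "514"
  source_address_prefixes     = ["198.51.100.10/32", "198.51.100.11/32"] # controller/gateway EIPs (placeholders)
  destination_address_prefix  = "*"
  resource_group_name         = "rg-logging"
  network_security_group_name = "nsg-log-forwarder"
}
```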
Agent Installation
We use the Data Collection Rules to install the Azure Monitor Agent:





Checking the Log Forwarder agent:


Syslog Reception
Edit /etc/rsyslog.conf and remove comments from the following lines to enable rsyslog to work as a server:
module(load="imtcp")
input(type="imtcp" port="514")

Do not forget to restart the rsyslog daemon after the changes:
sudo service rsyslog restart

Aviatrix Configuration
The rsyslog configuration is done under Settings -> Logging -> Remote Syslog:

Testing
I ran a query to list 10 syslog events to test the configuration:

References
-
Tech Note: Migrating an Aviatrix Controller from AWS to GCP

Constraints
- AWS Access Account uses access key
- Software version >= 6.8a
Deploy Controller on target CSP



Connect to the controller and initialize it:

Bring the controller to the desired software version:

Create the access accounts:

Controller Security Group Mgmt
Default:


Disabling it:

I created a temporary SG granting inbound access to port 443.
Change AWS to Access/Secret key based

Change the AWS account from IAM role-based to Access and Secret keys:
- this step only works when there are no gateways deployed in AWS

Backup

Shutdown Controller
Before proceeding to the restore, make sure the current controller is down.
Restore
Restore the AWS controller backup by providing the Account Name, Bucket Name, and File Name under Maintenance -> Restore:

Re-Enable Controller Security Group Management
Once the restore is completed and the environment is validated, enable the Controller Security Group Management:

