Connecting Cisco ACI to Aviatrix

Cisco ACI Overview

Cisco ACI is a Software-Defined Networking (SDN) solution from Cisco for data centers. The ACI fabric consists of discrete components connected in a spine-and-leaf switch topology that is provisioned and managed as a single entity:

  • Application Policy Infrastructure Controller (APIC): The APIC is the point of configuration for policies and the place where statistics are archived and processed to provide visibility, telemetry, and application health information and enable overall management of the fabric. The controller is a physical appliance based on a Cisco UCS rack server with two interfaces for connectivity to the leaf switches.
  • Nexus 9000 Series spine and leaf switches that are designed to operate either in NX-OS mode or in ACI mode to take full advantage of ACI application-policy-based services and infrastructure automation features.

All workloads connect to leaf switches. The leaf switches used in an ACI fabric are Top-of-the-Rack (ToR) switches. These devices have ports connected to servers, firewalls, and router ports. Leaf switches are at the edge of the fabric and provide the VXLAN Tunnel Endpoint (VTEP) function. The leaf switches are responsible for routing or bridging tenant packets and for applying network policies.

The spine switches are available in several form factors, both modular and fixed. These devices interconnect leaf switches. They can also be used to build a Cisco ACI Multi-Pod fabric by connecting a Cisco ACI pod to an IP network, or they can connect to a supported WAN device.

The ACI fabric forwards traffic based on host look-ups (when doing routing): all known endpoints in the fabric are programmed in the spine switches.

The endpoints saved in the leaf switch forwarding table are only those that are used by the leaf switch in question, thus preserving hardware resources at the leaf switch.
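The split described above (spines hold the full endpoint table, leaves cache only the endpoints they actively forward to) can be illustrated with a small sketch. This is not Cisco code, just a conceptual simulation; the names and addresses are invented:

```python
# Illustrative sketch of ACI's endpoint learning: the spine proxy holds the
# full endpoint-to-leaf mapping, while each leaf caches only the endpoints
# it is actively forwarding to, preserving leaf hardware resources.

class SpineProxy:
    """Holds the full endpoint-to-leaf mapping for the fabric."""
    def __init__(self):
        self.endpoints = {}          # endpoint IP -> egress leaf

    def register(self, ip, leaf):
        self.endpoints[ip] = leaf

class Leaf:
    """Caches only the remote endpoints it actually talks to."""
    def __init__(self, name, spine):
        self.name = name
        self.spine = spine
        self.cache = {}              # local forwarding table

    def forward(self, dst_ip):
        if dst_ip in self.cache:     # hit: forward directly over VXLAN
            return self.cache[dst_ip]
        # miss: resolve via the spine proxy, then cache the answer
        egress = self.spine.endpoints[dst_ip]
        self.cache[dst_ip] = egress
        return egress

spine = SpineProxy()
spine.register("10.0.1.10", "leaf-101")
spine.register("10.0.2.20", "leaf-102")

leaf = Leaf("leaf-103", spine)
leaf.forward("10.0.1.10")
print(leaf.cache)   # only the endpoint in use is cached, not the whole fabric
```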


The Layer 3 Out (L3Out) in ACI is the set of configuration objects that defines routed connectivity to networks outside the fabric.

L3Out provides the necessary configuration objects for the following functions:

  • Learning external routes through a routing protocol (OSPF, EIGRP, or BGP) or static routing
  • Distributing the learned external routes to the leaf switches of the fabric
  • Advertising internal (bridge domain) subnets to the outside
  • Classifying outside traffic into external EPGs so that contracts can be enforced
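Since L3Outs are usually automated through the APIC REST API, a minimal payload sketch may help. The class names (l3extOut, l3extRsEctx, bgpExtP, l3extInstP) come from the ACI object model, but the tenant, L3Out, VRF, and domain names below are invented examples, and authentication/error handling is omitted:

```python
# Sketch of the JSON body the APIC REST API expects when creating a
# BGP-enabled L3Out. Names are hypothetical; a real deployment would POST
# this to https://<apic>/api/mo/uni.json within an authenticated session.

import json

def build_l3out_payload(tenant, name, vrf, domain):
    """Build an L3Out object bound to a VRF and external routed domain."""
    return {
        "l3extOut": {
            "attributes": {"dn": f"uni/tn-{tenant}/out-{name}", "name": name},
            "children": [
                # bind the L3Out to a VRF
                {"l3extRsEctx": {"attributes": {"tnFvCtxName": vrf}}},
                # bind it to an external routed domain
                {"l3extRsL3DomAtt": {"attributes": {"tDn": f"uni/l3dom-{domain}"}}},
                # enable BGP on this L3Out
                {"bgpExtP": {"attributes": {}}},
                # external EPG used to classify outside prefixes
                {"l3extInstP": {"attributes": {"name": "all-ext"}}},
            ],
        }
    }

payload = build_l3out_payload("Prod", "aviatrix-l3out", "prod-vrf", "ext-dom")
print(json.dumps(payload, indent=2))
```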

Aviatrix Overview

Aviatrix is a cloud network platform that brings multi-cloud networking, security, and operational visibility capabilities that go beyond what any cloud service provider offers. Aviatrix software leverages AWS, Azure, GCP and Oracle Cloud APIs to interact with and directly program native cloud networking constructs, abstracting the unique complexities of each cloud to form one network data plane, and adds advanced networking, security and operational features enterprises require.

Design Patterns

Secure Edge

Aviatrix Secure Edge has a virtual form factor that lets you deploy an Edge Gateway as a standard virtual machine (VM). It is designed to enable enterprises migrating to the cloud to integrate their on-premises footprint as spokes into the enterprise Cloud backbone. The result is secure, seamless connectivity to locations at the Edge of your network such as data centers, remote sites, provider locations, branch offices, and retail stores.

By extending the Aviatrix data plane to the Edge of the network, you can use Aviatrix Controller and Aviatrix CoPilot to manage orchestration, visibility, and operational capabilities.

  • Aviatrix Edge runs on VMware only at this time.

Design Overview

  • Secure Edge is deployed on-prem
  • Edge works as a “remote” gateway and can connect to the transit gateway over the Internet or dedicated circuits
  • ACI connects to the Edge device using an L3Out
  • The L3Out runs BGP to dynamically exchange routes with the Edge device
  • Secure Edge supports High Performance Encryption (HPE), providing high throughput and bandwidth

The diagram above does not show the detailed on-prem physical and logical configuration required for this design to work properly.
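On the ACI side, the BGP peering toward the Edge gateway is defined under the L3Out. A hedged sketch of that object follows; bgpPeerP and bgpAsP are ACI object-model classes, but the peer address and ASN are invented examples:

```python
# Sketch of the BGP peer profile ACI would carry under the L3Out interface
# path facing the Aviatrix Edge gateway. Values are hypothetical.

def build_bgp_peer(peer_ip, remote_asn):
    """BGP peer profile attached under an L3Out interface path."""
    return {
        "bgpPeerP": {
            "attributes": {"addr": peer_ip},
            "children": [
                # remote AS of the Aviatrix Edge gateway
                {"bgpAsP": {"attributes": {"asn": str(remote_asn)}}},
            ],
        }
    }

peer = build_bgp_peer("192.168.50.2", 65010)
print(peer["bgpPeerP"]["attributes"]["addr"])
```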

Site 2 Cloud (S2C)

Site2Cloud builds an encrypted connection between two sites over the Internet in an easy-to-use and template-driven manner. Its workflow is similar to AWS VGW or Azure VPN.

On one end of the tunnel is an Aviatrix Gateway. The other end can be an on-prem router, firewall, or another public cloud VPC/VNet that the Aviatrix Controller does not manage.

Design Overview
  • Aviatrix transit gateways establish IPsec tunnels with the on-prem routers or firewalls
  • ACI connects to the on-prem routers using an L3Out
  • The L3Out runs OSPF/EIGRP/BGP to dynamically exchange routes with the routers, and the routers exchange routes with the transit gateways

The diagram above does not show the detailed on-prem physical and logical configuration required for this design to work properly.
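The template-driven part of Site2Cloud matters because an IPsec tunnel only negotiates when both ends agree on every parameter. The following sketch (with invented parameter values, not the actual Aviatrix template format) illustrates that matching requirement:

```python
# Both ends of a Site2Cloud tunnel must be configured with the same IPsec
# parameters; Aviatrix generates a template so the on-prem side matches.
# The values here are invented examples.

AVIATRIX_END = {
    "ike_version": "IKEv2",
    "phase1_encryption": "AES-256-CBC",
    "phase1_dh_group": "14",
    "phase2_encryption": "AES-256-CBC",
    "pre_shared_key": "example-psk",     # invented example
}

ONPREM_END = dict(AVIATRIX_END)          # built from the downloaded template

def tunnel_can_establish(a, b):
    """An IPsec tunnel negotiates only if every parameter matches."""
    mismatches = {k for k in a if a[k] != b.get(k)}
    return not mismatches

print(tunnel_can_establish(AVIATRIX_END, ONPREM_END))  # True
```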

BGP over LAN

The GOLF feature uses the BGP EVPN protocol over OSPF for WAN routers that are connected to the spine switches, as shown in the diagram below.

All tenant connections to the WAN use a single BGP session on the spine switches. This aggregation of sessions improves control-plane scale by reducing the number of tenant BGP sessions and the amount of configuration they require. The network is extended outward using Layer 3 sub-interfaces configured on spine fabric ports.

  • Connection to the WAN is done physically and logically through the spine switches
  • Transit routing with shared services using GOLF is not supported.
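The scale benefit of that session aggregation can be shown with back-of-the-envelope arithmetic (tenant and router counts below are arbitrary examples):

```python
# Without GOLF, each tenant VRF typically needs its own BGP session to each
# WAN router; with GOLF, the spines share one EVPN session per WAN router
# that carries all tenants. Counts are invented for illustration.

tenants, wan_routers = 50, 2

sessions_per_vrf = tenants * wan_routers   # classic per-tenant L3Out peering
sessions_with_golf = wan_routers           # one EVPN session per WAN router

print(sessions_per_vrf, sessions_with_golf)   # 100 vs 2
```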

Design Overview

  • CSR1000v routers are deployed in the cloud and configured as GOLF (DCI gateway) devices
  • Aviatrix transit gateways connect to the CSR1000v routers using BGP over LAN, exchanging routes dynamically
  • ACI tenants requiring access to the cloud need a dummy L3Out consuming the GOLF label from the infra L3Out
  • An external EPG representing the prefixes learned from the cloud is required to classify the traffic properly for security policy enforcement using contracts

The diagram above does not show the detailed on-prem physical and logical configuration required for this design to work properly.
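The external EPG classification mentioned above boils down to longest-prefix matching of a packet's outside address against the subnets configured under each external EPG. A small sketch, with invented EPG names and prefixes:

```python
# Conceptual sketch of external EPG classification: cloud prefixes learned
# over BGP are configured as subnets under external EPGs, and traffic is
# classified by longest prefix match so contracts can be applied.

import ipaddress

EXT_EPG_SUBNETS = {
    "cloud-epg": ["10.20.0.0/16", "10.30.0.0/16"],   # invented cloud prefixes
    "default-epg": ["0.0.0.0/0"],                    # catch-all
}

def classify(src_ip):
    """Return the external EPG whose longest matching subnet contains src_ip."""
    ip = ipaddress.ip_address(src_ip)
    best = None
    for epg, subnets in EXT_EPG_SUBNETS.items():
        for s in subnets:
            net = ipaddress.ip_network(s)
            if ip in net and (best is None or net.prefixlen > best[1]):
                best = (epg, net.prefixlen)
    return best[0]

print(classify("10.20.1.5"))   # cloud-epg
print(classify("8.8.8.8"))     # default-epg
```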

While the design above is possible, it is not recommended due to its complexity and requirements.

