  • Tech Note: Migrating an Aviatrix Controller from AWS to Azure

    Constraints

    • AWS Access Account uses access key

    AWS Controller

    Change the AWS account from IAM role-based to Access and Secret keys

• Create an access key for a user with permissions to manage Aviatrix.
• Use the access key ID and secret access key to change the Access Account:

    This procedure is only supported on Accounts without Gateways deployed.
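A minimal AWS CLI sketch of the key creation (the user name aviatrix-admin is a placeholder, not from the original procedure):

# Create an access key for the IAM user that manages Aviatrix
# (aviatrix-admin is a hypothetical name; use your own user).
# Note: this prints the secret to the terminal; handle it carefully.
aws iam create-access-key --user-name aviatrix-admin \
    --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text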

    Backup

• Use the Backup Now button to take a backup before shutting down the Controller:

    Shutdown Controller

• Using the AWS Console, shut down the Controller and CoPilot instances:
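If you prefer the CLI, the equivalent is roughly as follows (the instance IDs are placeholders):

# Stop (not terminate) the Controller and CoPilot instances.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0 i-0fedcba9876543210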

    Azure Controller

    Requirements

• An existing or new VNet and subnet with an associated route table that has a default route pointing to the internet
• The subnet should be big enough to host the Aviatrix Cloud Network Controller and Aviatrix Cloud Network CoPilot
• At least two public IP addresses
• Permission to create VNets, subnets, route tables, routes, route table associations, and public IP addresses; deploy VMs; and create service principals, storage accounts, and blobs (see the CLI sketch below)
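A minimal az CLI sketch of these prerequisites (resource group, names, and address ranges are placeholders):

# VNet and a subnet large enough for Controller and CoPilot.
az network vnet create --resource-group avx-rg --name avx-vnet \
    --address-prefixes 10.50.0.0/23 --subnet-name avx-subnet \
    --subnet-prefixes 10.50.0.0/24
# Two public IPs: one for the Controller, one for CoPilot.
az network public-ip create --resource-group avx-rg --name avx-controller-pip --sku Standard
az network public-ip create --resource-group avx-rg --name avx-copilot-pip --sku Standard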

    Aviatrix Cloud Network Controller deploys Controller 7.1.4105 and later. To deploy Controller version 7.1.4101 or earlier, subscribe to Aviatrix Secure Networking Platform BYOL.

    Deploy New Controller

The steps below should be completed before the cutover.

Bring the Controller to the desired software version (7.1.3176).

    Onboard Access Accounts

    • Azure
    • OCI

    Transfer Backup from AWS Bucket to Azure Storage Account

    • Download from the AWS S3 Bucket:
    • Upload to the Azure Storage Account:
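A hedged CLI sketch of the transfer (bucket, storage account, container, and file names are placeholders; credentials are assumed to be configured):

# Download the backup from S3...
aws s3 cp s3://my-avx-backups/CloudN_backup.tar.enc .
# ...and upload it to the Azure Storage Account container.
az storage blob upload --account-name myavxstorage --container-name backups \
    --name CloudN_backup.tar.enc --file CloudN_backup.tar.enc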

    Restore

Go to Controller Settings -> Maintenance -> Backup and Restore to restore a backup from the storage account:

The restore triggers the Controller Public IP migration wizard, which asks you to confirm that the Controller public IP changed:

    After the restore:

    Re-Enable Controller Security Group Management

• Disable and re-enable Controller Security Group Management
• After a few seconds, the Controller NSG will reflect the changes (a security rule is created for each gateway):
• The security rules of the gateways are also updated with the new Controller public IP:

    Patching

• Reapply patches so the patch inventory is updated properly.

    Deploy CoPilot

• The restore brings back the AWS Controller config, including the CoPilot bindings.
• Deploy CoPilot in Azure and reconfigure the association.

    Backup

• The restore brings back the AWS Controller config, including the backup configuration.
• Remove the old backup configuration.
• Reconfigure backup using an Azure Storage Account.

    Tags

    The controller tags the resources it creates. One of those tags is the Controller IP:

After the migration, another tag is created to store the old Controller IP configuration:

    References

    https://docs.aviatrix.com/documentation/latest/controller-platform-administration/controller-backup-restore.html?expand=true

    https://docs.aviatrix.com/documentation/latest/getting-started/getting-started-guide-azure.html

  • Experimenting with GCP PBR

Policy-based routes can route traffic based on destination, protocol, and source.

    How to Configure it

PBR requires an internal pass-through Network Load Balancer as the next hop:

    • Unmanaged Instance Groups for the AVX spokes:
    • NLB:

Do not forget to create the proper firewall rules for the health checks. Health checks are sourced from the following ranges: 130.211.0.0/22 and 35.191.0.0/16 (see the example below).
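A sketch of such a rule (the network name is a placeholder; consider restricting the allowed ports to your health check ports):

# Allow Google health check probes to reach the NLB backends.
gcloud compute firewall-rules create allow-google-health-checks \
    --network my-vpc \
    --allow tcp \
    --source-ranges 130.211.0.0/22,35.191.0.0/16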

Create a route, but select Policy Based Route from the drop-down menu:
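The same can be done with gcloud; a sketch with placeholder project, network, ranges, and ILB IP:

# Policy-based route sending matching traffic to the internal NLB.
gcloud network-connectivity policy-based-routes create pbr-to-avx-nlb \
    --network=projects/my-project/global/networks/my-vpc \
    --next-hop-ilb-ip=10.0.0.10 \
    --source-range=10.0.0.0/24 \
    --destination-range=192.168.200.0/24 \
    --protocol=ALL \
    --priority=100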

    Testing

The test is quite simple. From the test VM, pinging 10.17.60.51 should not go through the standalone gateways, while pinging 192.168.200.3 should flow through them.

    • Pinging 192.168.200.3:
    • Packet capture on the standalone gateway:
    • Pinging 10.17.60.51:
    • Packet capture on the standalone gateway:

    Constraints

    • Policy-based routes don’t support matching traffic based on port
    • Policy-based routes are not exchanged between VPC networks that are connected through VPC Network Peering

You can find more constraints and information in the links listed in the References section.

    References

    https://cloud.google.com/vpc/docs/policy-based-routes

    https://cloud.google.com/load-balancing/docs/internal/setting-up-ilb-next-hop

  • Supernetting

    From Wiki:

    A supernetwork, or supernet, is an Internet Protocol (IP) network that is formed by aggregation of multiple networks (or subnets) into a larger network. The new routing prefix for the aggregate network represents the constituent networks in a single routing table entry. The process of forming a supernet is called supernetting, prefix aggregation, route aggregation, or route summarization.

    https://en.wikipedia.org/wiki/Supernetwork

    Topology

    Prefix Advertised

By default, gateways advertise the subnet prefixes discovered during deployment:

    Supernetting

    Testing

Pinging an existing target:

Pinging a nonexistent target:

• A loop is introduced in the network:

  • A little help from my friend… hacks on how to work with default routes

Most, if not all, GCP customers consume GCP PaaS/SaaS services like GKE, Cloud SQL, and others. Those services have their compute capacity provisioned inside Google-owned VPCs, and VPC peerings are used to establish a data plane for customers to consume them.

    AVX Behavior

    • Routes are created with a fixed priority of 1000
    • Egress through Spoke or Firenet creates routes with tags (avx-snat-noip) and priority 991
    NAME                                  NETWORK  DEST_RANGE      NEXT_HOP                            PRIORITY
    avx-0869709459044dd3ab184f9a7c18c885  vpc003   0.0.0.0/0       us-east1-b/instances/gcp-spoke-003  991
    avx-132f04c21c274870a00d1717fba75421  vpc003   192.168.0.0/16  us-east1-b/instances/gcp-spoke-003  1000
    avx-2250d167f6c44869b619f9338563962d  vpc003   172.16.0.0/12   us-east1-b/instances/gcp-spoke-003  1000
    avx-aaee80e084aa4f9e8255b38efebc5361  vpc003   0.0.0.0/0       default-internet-gateway            1000
    avx-afb0539fe13a4967be78e1e1f4625f21  vpc003   10.0.0.0/8      us-east1-b/instances/gcp-spoke-003  1000
    default-route-0ac78bbc901a6640        vpc003   10.13.64.0/24   vpc003                              0
    default-route-de7040c7b44e831a        vpc003   10.13.65.0/24   vpc003                              0
    default-route-dfafcf4749ec29a9        vpc003   10.13.66.0/24   vpc003                              0
    default-route-fd5dbac6f98fbdc5        vpc003   0.0.0.0/0       default-internet-gateway            1000

    Constraints

• Tagged routes cannot be exported or imported across VPC peerings

    Workarounds

    AVX Gateway Routes

Create routes with a higher priority and the tag avx-<vpc name>-gbl, with the next hop “Default internet gateway”. Those routes are used exclusively by AVX Spoke Gateways.

gcloud compute routes create avx-gateway-0-0-0-0-1 \
    --network vpc003 \
    --destination-range 0.0.0.0/1 \
    --next-hop-gateway default-internet-gateway \
    --tags avx-vpc003-gbl \
    --priority 100
gcloud compute routes create avx-gateway-128-0-0-0-1 \
    --network vpc003 \
    --destination-range 128.0.0.0/1 \
    --next-hop-gateway default-internet-gateway \
    --tags avx-vpc003-gbl \
    --priority 100

    This step is necessary to prevent a route loop when executing the step below.

    0.0.0.0/0 Option 1

    It is possible to use the feature Customize Spoke VPC Routing Table to trigger the creation of 0.0.0.0/1 and 128.0.0.0/1 custom routes pointing to the gateway.

Test this feature first: in some versions, the creation of 0/0, 128/1, and 0/1 routes from the Controller is blocked.

    The routing table looks like the following:

    NAME                                   NETWORK  DEST_RANGE        NEXT_HOP                            PRIORITY
    avx-0-0-0-0-0                          vpc003   0.0.0.0/0         us-east1-b/instances/gcp-spoke-003  900
    avx-41e0705ff87b4117960c65694fdab6ce   vpc003   0.0.0.0/1         us-east1-b/instances/gcp-spoke-003  1000
    avx-aa2190407b7a4b5e9ff134515627d0e5   vpc003   128.0.0.0/1       us-east1-b/instances/gcp-spoke-003  1000
    avx-aaee80e084aa4f9e8255b38efebc5361   vpc003   0.0.0.0/0         default-internet-gateway            1000
    default-route-0ac78bbc901a6640         vpc003   10.13.64.0/24     vpc003                              0
    default-route-de7040c7b44e831a         vpc003   10.13.65.0/24     vpc003                              0
    default-route-dfafcf4749ec29a9         vpc003   10.13.66.0/24     vpc003                              0
    default-route-fd5dbac6f98fbdc5         vpc003   0.0.0.0/0         default-internet-gateway            1000
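Tables like the one above are standard gcloud output; you can reproduce the listing yourself (vpc003 is the lab VPC name used throughout this note):

# List the routes of the lab VPC.
gcloud compute routes list --filter="network=vpc003"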

    There are corner cases where 0/1 and 128/1 are not supported by Google PaaS services.

    0.0.0.0/0 Option 2

Create a 0.0.0.0/0 route pointing to an NLB fronting the AVX gateways, with a priority high enough to bring the traffic to the gateways:

gcloud compute routes create avx-0-0-0-0-0 \
    --network vpc003 \
    --destination-range 0.0.0.0/0 \
    --next-hop-ilb avx-nlb-vpc003-feip \
    --priority 900

    This route is not monitored by the AVX Controller. After executing the command above, the route table looks like:

    NAME                                   NETWORK  DEST_RANGE        NEXT_HOP                                 PRIORITY
    avx-0-0-0-0-0                          vpc003   0.0.0.0/0         10.13.64.9                               900
    avx-0819c846ec4e4b4395d16a31d34bba0f   vpc003   172.16.0.0/12     us-east1-b/instances/gcp-spoke-003       1000
    avx-aaee80e084aa4f9e8255b38efebc5361   vpc003   0.0.0.0/0         default-internet-gateway                 1000
    avx-ba648051a45e41678097dfedc04f2bff   vpc003   192.168.0.0/16    us-east1-b/instances/gcp-spoke-003       1000
    avx-c088391a7e924e00bc3af30ab9df0e0c   vpc003   0.0.0.0/0         us-east1-b/instances/gcp-spoke-003       991
    avx-e03f07b01ee14676ab005c0d8dc1a7cd   vpc003   10.0.0.0/8        us-east1-b/instances/gcp-spoke-003       1000
    default-route-0ac78bbc901a6640         vpc003   10.13.64.0/24     vpc003                                   0
    default-route-de7040c7b44e831a         vpc003   10.13.65.0/24     vpc003                                   0
    default-route-dfafcf4749ec29a9         vpc003   10.13.66.0/24     vpc003                                   0
    default-route-fd5dbac6f98fbdc5         vpc003   0.0.0.0/0         default-internet-gateway                 1000
    

    From the console:

    0.0.0.0/0 Option 3

Create 0.0.0.0/0 routes pointing to the AVX gateways with a priority high enough to bring the traffic to the gateways:

gcloud compute routes create avx-gw-0-0-0-0-0 \
    --network vpc003 \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance gcp-spoke-003 \
    --priority 900
gcloud compute routes create avx-hagw-0-0-0-0-0 \
    --network vpc003 \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance gcp-spoke-003-hagw \
    --priority 900

These routes are not monitored by the AVX Controller. Google checks whether the next-hop compute instance is up or down and sets the route active or inactive accordingly, but it has no visibility into the health of the software running on the instance.

After executing the commands above, the route table looks like:

    NAME                                  NETWORK  DEST_RANGE      NEXT_HOP                            PRIORITY
avx-gw-0-0-0-0-0                      vpc003   0.0.0.0/0       us-east1-b/instances/gcp-spoke-003  900
    avx-0869709459044dd3ab184f9a7c18c885  vpc003   0.0.0.0/0       us-east1-b/instances/gcp-spoke-003  991
    avx-132f04c21c274870a00d1717fba75421  vpc003   192.168.0.0/16  us-east1-b/instances/gcp-spoke-003  1000
    avx-2250d167f6c44869b619f9338563962d  vpc003   172.16.0.0/12   us-east1-b/instances/gcp-spoke-003  1000
    avx-aaee80e084aa4f9e8255b38efebc5361  vpc003   0.0.0.0/0       default-internet-gateway            1000
    avx-afb0539fe13a4967be78e1e1f4625f21  vpc003   10.0.0.0/8      us-east1-b/instances/gcp-spoke-003  1000
    avx-gateway-0-0-0-0-1                 vpc003   0.0.0.0/1       default-internet-gateway            100
    avx-gateway-128-0-0-0-1               vpc003   128.0.0.0/1     default-internet-gateway            100
    default-route-0ac78bbc901a6640        vpc003   10.13.64.0/24   vpc003                              0
    default-route-de7040c7b44e831a        vpc003   10.13.65.0/24   vpc003                              0
    default-route-dfafcf4749ec29a9        vpc003   10.13.66.0/24   vpc003                              0
    default-route-fd5dbac6f98fbdc5        vpc003   0.0.0.0/0       default-internet-gateway            1000

    All the internet traffic is diverted to the AVX gateway, including Google API calls.

    Avoiding Google API calls through the Fabric

    To avoid sending Google API calls through the fabric, add the following route for private.googleapis.com:

gcloud compute routes create google-api-private-199-36-153-8-30 \
    --network vpc003 \
    --destination-range 199.36.153.8/30 \
    --next-hop-gateway default-internet-gateway

    To avoid sending Google API calls through the fabric, add the following route for restricted.googleapis.com:

gcloud compute routes create google-api-restricted-199-36-153-4-30 \
    --network vpc003 \
    --destination-range 199.36.153.4/30 \
    --next-hop-gateway default-internet-gateway

    Google Console SSH Access

gcloud compute routes create google-console-ssh-35-235-240-0-20 \
    --network vpc003 \
    --destination-range 35.235.240.0/20 \
    --next-hop-gateway default-internet-gateway

    Failure or Maintenance Scenarios

While the routes created by the Controller are managed in scenarios like gateway failure or gateway maintenance, Google custom routes provide only limited monitoring. Google checks whether the next-hop compute instance is up or down and sets the route active or inactive accordingly, but it has no visibility into the health of the software running on the instance.

A route becomes inactive when the compute instance is down and active again when the instance is up. The time it takes to mark the route as active or inactive depends on the Google API.

To avoid traffic disruption during scheduled maintenance, the custom 0/0 route can be deleted and then recreated once the maintenance is concluded.

For example, when upgrading the gateway image (see the sketch after this list):

• remove 0/0 pointing to hagw
• upgrade hagw
• create 0/0 pointing to hagw
• delete 0/0 pointing to gw
• upgrade gw
• create 0/0 pointing to gw
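A hedged gcloud sketch of the hagw half of this procedure, reusing the route parameters from Option 3 above:

# Remove the 0/0 pointing to the hagw before maintenance...
gcloud compute routes delete avx-hagw-0-0-0-0-0 --quiet
# ...upgrade the hagw image, then recreate the route:
gcloud compute routes create avx-hagw-0-0-0-0-0 \
    --network vpc003 \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance gcp-spoke-003-hagw \
    --priority 900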

Failover Times for VPC/VNET Egress

• Tests were executed by shutting the compute instance down.
• Ping was used as the measurement tool (see the command sketch below).
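The timestamped outputs below can be reproduced with something like this (an assumption, not necessarily the exact command used):

# Prefix each ping reply with a UTC timestamp.
ping google.com | while read -r line; do echo "$(date -u): $line"; done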

    Option 1: 4 seconds

    Mon Jul 29 22:06:10 UTC 2024: 64 bytes from ua-in-f138.1e100.net (108.177.12.138): icmp_seq=325 ttl=114 time=0.675 ms
    Mon Jul 29 22:06:14 UTC 2024: 64 bytes from ua-in-f138.1e100.net (108.177.12.138): icmp_seq=329 ttl=114 time=3.97 ms

    Option 2: 18 seconds

    Mon Jul 29 22:21:02 UTC 2024: 64 bytes from vl-in-f139.1e100.net (74.125.141.139): icmp_seq=311 ttl=114 time=0.670 ms
    Mon Jul 29 22:21:20 UTC 2024: 64 bytes from vl-in-f139.1e100.net (74.125.141.139): icmp_seq=329 ttl=114 time=4.16 ms

    Option 3: 11 seconds

    Mon Jul 29 22:28:15 UTC 2024: 64 bytes from vl-in-f139.1e100.net (74.125.141.139): icmp_seq=743 ttl=114 time=0.668 ms
    Mon Jul 29 22:28:26 UTC 2024: 64 bytes from vl-in-f139.1e100.net (74.125.141.139): icmp_seq=754 ttl=114 time=4.10 ms

    Next steps

    • Fail over tests with overloaded spokes
    • Firenet

    References

    https://cloud.google.com/vpc/docs/routes

    https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid

    https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity

  • Hello again old friend…

    Distributed Cloud Firewall

    Distributed Cloud Firewall enhances security by enforcing network policies between SmartGroups, which you define to manage applications within single or multiple cloud environments.

    SmartGroups:

    • Logical groupings of applications that can span across various cloud accounts, regions, and VPCs/VNets.

    Network Policy Enforcement:

    • Policies can be defined to filter and control traffic between applications residing in different SmartGroups.

    How to enable it

Once you are logged in to CoPilot, go to Security -> Distributed Cloud Firewall. Click Enable Distributed Cloud Firewall:

Click Begin Using Distributed Cloud Firewall to start configuring it:

A Greenfield Rule will be created to allow traffic and maintain the current state, facilitating the creation of custom rules for specific security needs.

Without it, Distributed Cloud Firewall would deny all previously permitted traffic due to its implicit Deny All rule.

    SmartGroups

SmartGroups are created outside the Security folder. They accept the following resource types as input:

    • virtual machines
    • subnets
• VPCs/VNets
    • IPs/CIDRs

By default, two SmartGroups are created:

    • Anywhere (0.0.0.0/0)
    • Public Internet

    Logical AND and OR operators are supported:

• multiple conditions inside the same block work as an AND
• multiple blocks with conditions work as an OR

    Rules

Rules are created inside Distributed Cloud Firewall. A rule defines whether communication between two SmartGroups is allowed, based on:

    • protocol
    • port
    • Enforcement
    • Logging
    • Action (Permit, Deny)
    • TLS
    • IDS
    • Priority

    How to disable it

Distributed Cloud Firewall can be disabled from Settings -> Configuration -> Add-on Features:

    Local Egress on VPC/VNETs

    Local Egress does the following:

    • Changes the default route on the VPC/VNET to point to the Spoke Gateway
    • Enables SNAT

From Security -> Egress -> Egress VPC/VNets, click Local Egress on VPC/VNets:

    Select the VPCs/VNETs:

    Click Add:

    Private route tables after the change:

    Single IP Source NAT is enabled on the gateways:

    What happens if there is a default route present?

    The default route is replaced:

    If the feature is disabled, the controller restores the previous route:

    If you prefer to work from the controller or if you are using automation, the feature can be controlled using the single_ip_snat parameter.

    Testing

    Internet Access:

    Rule allowing internet access:

Rules are saved as drafts, and a commit is required:

    Monitor

    Use the Monitor tab to visualize traffic.

    References

    https://docs.aviatrix.com/documentation/latest/network-security/secure-networking-configuring.html?expand=true

    https://registry.terraform.io/providers/AviatrixSystems/aviatrix/latest/docs/resources/aviatrix_smart_group

    https://registry.terraform.io/providers/AviatrixSystems/aviatrix/latest/docs/resources/aviatrix_distributed_firewalling_policy_list

  • OCI Notes

    Terraform Provider

    https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm

    https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm#environmentVariables

    OCI CLI

    Requirements:

    https://docs.oracle.com/en-us/iaas/Content/API/Concepts/cliconcepts.htm#Requirements

    Quick start:

    https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm#Quickstart

    Environment variables:

    https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/clienvironmentvariables.htm

    Authenticate:

    oci session authenticate --auth security_token

    To refresh the token:

    oci session refresh --profile <profile_name>

    Examples

    https://github.com/oracle/terraform-provider-oci/tree/master/examples

    AVX Access Account

    https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five

    https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm

    Do not forget to click the add button to add the private key to the user.

    Supported Instance Sizes

    Regions and Availability Domains

    https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm

    Minimal Required OCI IAM Policy Bound by Compartment

    
    Allow group <YOUR GROUP NAME> to manage volume-family in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to manage instance-family in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to manage virtual-network-family in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to inspect all-resources in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to inspect app-catalog-listing in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to read app-catalog-listing in compartment <YOUR COMPARTMENT>
    Allow group <YOUR GROUP NAME> to manage app-catalog-listing in compartment <YOUR COMPARTMENT>

  • Private Mode

Private Mode facilitates Aviatrix deployments without relying on public IPs. Private Mode was introduced in Aviatrix software version 6.8.

    Constraints

    • AWS and Azure are currently supported.
• Private Mode will not work if you already have gateways (with public IPs) deployed from your Controller.
    • BGP over LAN functionality is not available.
    • Features on the Controller Security tab are not supported.
    • FQDN Gateway functionality is unavailable.
    • Creation of VPN or Public Subnet Filtering Gateways is not supported.
    • Enabling internet-bound egress traffic for inspection through Firewall is not possible.
    • Distributed Cloud Firewall (DCF) is not supported.
    • Egress for Transit FireNet is not supported.

    Architecture

Transit-to-Spoke data plane tunnels use orchestrated native peering as an underlay. Cloud instances (Aviatrix Controller, Gateways, and CoPilot) have only private IPs, and management traffic flows through native cloud constructs like Load Balancers, Private Link services, and peering connections, which serve as the foundation for the Aviatrix encrypted Transit network.

    Load Balancers

Elastic Load Balancing (ELB) automatically distributes incoming traffic among multiple targets, including EC2 instances, containers, and IP addresses, across one or more Availability Zones. It continuously monitors the health of registered targets and directs traffic solely to those deemed healthy.

    Private Link

    PrivateLink is a feature that enables communication between customer applications and AWS services using private IP addresses.

    Traffic between a VPC endpoint and an endpoint service stays within the AWS network, without traversing the public internet.

    Traffic from your VPC is sent to an endpoint service using a connection between the VPC endpoint and the endpoint service.

    The following diagram provides a high-level overview of how AWS PrivateLink works:

    https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

    The user of a service is a service consumer. Service consumers create interface VPC endpoints to connect to endpoint services that are hosted by service providers.

    A service provider creates an endpoint service to make their service available in a Region. There are multiple types of VPC endpoints:

    • An endpoint network interface refers to a network interface managed by the requester, serving as an ingress point for traffic directed towards an endpoint service. When you create a VPC endpoint and designate specific subnets, AWS generates an endpoint network interface within each specified subnet.
    • To establish connectivity between a fleet of virtual appliances utilizing private IP addresses and your VPC, you can create a Gateway Load Balancer endpoint. This endpoint serves as a means to direct traffic towards the fleet. The routing of traffic from your VPC to the Gateway Load Balancer endpoint is managed through route tables.
• A Gateway endpoint is established to facilitate the routing of traffic towards Amazon S3 or DynamoDB. Gateway endpoints do not use PrivateLink.

The expected Link Service configuration for AWS is one Link Service in each region where you want to launch gateways.

    It is not possible to launch gateways in the same VPC/VNet as the Link Service VPC/VNet.

    VPC Peering

    A VPC peering connection establishes a network connection between two Virtual Private Clouds (VPCs), allowing the routing of traffic between them using private IPv4 or IPv6 addresses.

    https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

    This enables instances within either VPC to communicate as if they were part of the same network.

The need for a Web Proxy

The Controller requires outbound internet access for updates and licensing. The AVX Controller expects the proxy to listen on port 3128, and this proxy must be in the same VPC as the Controller and CoPilot.

    Squid on Ubuntu Install

    To install Squid on Ubuntu, you can follow these steps:

1. Update the package index:
sudo apt update
2. Install Squid using the following command:
sudo apt install squid
3. Make sure the following is configured in the /etc/squid/squid.conf file:
http_access allow localhost
http_access allow localnet
http_port 3128
4. Start the Squid service using the following command:
sudo systemctl start squid
5. If you want Squid to start automatically every time the system boots, enable it as a systemd service:
sudo systemctl enable squid
6. To verify that Squid is running, use the following command:
sudo systemctl status squid

    By default, Squid listens on port 3128 for incoming proxy connections.
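A quick way to confirm the proxy is forwarding traffic, run from an instance in the Controller subnet (the proxy IP is a placeholder):

# Fetch a page through the proxy; an HTTP status line means it works.
curl -sI -x http://10.0.0.10:3128 https://www.google.com | head -1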

    IAM Policy Requirement

    The following permissions are required for Private Mode:

    {
            "Action": [
                    "elasticloadbalancing:DescribeTargetHealth",
                    "ec2:CreateVpcEndpointServiceConfiguration",
                    "ec2:DeleteVpcEndpointServiceConfigurations",
                    "ec2:CreateVpcEndpoint",
                    "ec2:DeleteVpcEndpoints",
                    "ec2:ModifyVpcEndpointServicePermissions",
                    "ec2:DescribeVpcEndpointServicePermissions",
                    "ec2:DescribeVpcEndpoints"
            ],
            "Resource": "*",
            "Effect": "Allow"
    }

    CoPilot Deployment

CoPilot is deployed from the Marketplace, from the Controller, or using Terraform, as before.

    Configuration

    Proxy

    Proxy can be configured during the controller initialization:

    Private Mode

In the Aviatrix Controller interface, you can access the “Settings” section to configure various features, including “Private Mode.” Change the status from Disabled to Enabled:

    The Controller will display the message below if the feature was enabled successfully.

    The next step is to create the intra-cloud link service. The Intra-Cloud Link Service is configured in the same Cloud as Controller and CoPilot. The Link Service registers the Controller and CoPilot as targets on different ports.

The Controller creates a new subnet (avx-<vpc name>-frontend) in each AZ, as shown below:

An endpoint service is created:

With the following NLB:

    With the following listeners:

    • TCP:31283 (Netflow)
    • TCP:443 (Controller)
    • TCP:5000 (rsyslog)

    CoPilot

    CoPilot association can be done from the Private Mode Workflow (this step adds CoPilot to the target groups created before):

The association from Settings -> CoPilot is also required:

Do not forget that CoPilot also requires the proxy, which is configured under CoPilot Settings -> Configuration -> Proxy Server (the protocol needs to be specified):

    Transit

The Useful Tools “Create a VPC” wizard has an option to create only Private Mode subnets:

The result is shown in the screen capture below:

    Gateways

    I’m going to repeat the process above for a couple of spoke gateways for testing:

The subnets and route tables are shown in the figure below:

    The spoke creation wizard has a new box for the Cloud Link Service VPC:

    Endpoints are created on each VPC in the same subnet where gateways are deployed:

    Attachment

The spoke attachment creates the required VPC peerings for the data plane:

Tunnels are established using private IP addresses.

    Testing

For testing purposes, two EC2 instances were deployed as shown below:

From the Spoke 2 EC2 instance, we can ping the Spoke 3 EC2 instance:

    VPN

If you need help setting up a VPN to the private environment, you can use an AWS Client VPN endpoint, among other possible solutions.

    AWS Client VPN: https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html

    Mutual authentication: https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/mutual.html

    Add the client certificate and key information (mutual authentication): https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-working-endpoint-export.html

    Deployment Flow

    VPN Client VPC -> VPN Client VPC Subnets -> VPN -> Mgmt VPC -> Mgmt VPC Subnets -> VPN Client VPC and Mgmt VPC Peering -> VPN Client VPC and Mgmt VPC Peering routes -> AVX Controller -> Proxy -> Private Mode -> CoPilot

    Troubleshooting

• The Controller requires a public IP address and internet access

    References

    https://docs.aviatrix.com/documentation/latest/controller-platform-administration/private-mode.html

    https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html

    https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html

    https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

    http://www.squid-cache.org/

  • AVX “Global VPC” Tagging

GCP Global VPC creates regional awareness between the VPC and Aviatrix gateways, allowing you to restrict spoke gateway traffic to transit gateways in the same region as the spoke gateway. Without Global VPC, communications between spokes over transit in the same region are routed outside the region.

    Regional awareness is achieved by appending regional network tags to virtual machines and adding regional routes to the gateways in the routing table using tags.

    From Google Cloud documentation:

    “A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and routes applicable to specific VM instances.”

    Network tags allow you to apply firewall rules and routes to a specific instance or set of instances:

    • You make a firewall rule applicable to specific instances by using target tags and source tags.
    • You make a route applicable to specific instances by using a tag.

    You can assign network tags to new VMs at creation time, or you can edit the set of assigned tags at any time later. You can edit network tags without stopping a VM.
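For example, with gcloud (instance, zone, and tag value are placeholders; the actual regional tags are managed by the Controller):

# Add a network tag to a running VM without stopping it.
gcloud compute instances add-tags my-vm \
    --zone us-east1-b \
    --tags avx-us-east1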

    Constraints

    • A network tag only applies to the VPC networks that are directly attached to the instance’s network interfaces.

    Constraints (SW Version 7.1)

    • Spokes using the Global VPC Routing for GCP feature cannot be connected to FireNet transit gateways.

    Design

    Deployment

    East-1:

    West-1:

    VPC Routing

    Before attachment:

    After attachment:

    Bringing up the extra HA gateways, we have:

    Instance Tagging

    There are three methods of tagging GCP spoke gateways:

    • Tag on Changes – Aviatrix recommends this method. Any time there is a configuration change to the gateway or connections to the gateway, Aviatrix reevaluates the tags in your environment and verifies all gateways are regionally aware of the changes and that the regions can communicate with each other.
    • Auto Tag – Aviatrix monitors virtual machines launched in the VPCs and automatically adds tags for newly launched virtual machines in the VPC or removes tags for virtual machines removed from the VPC.
    • Manage Manually – You do all the tagging through the GCP console and Aviatrix becomes regionally aware of those tags.

    We are going to explore Tag on Changes and Auto Tag.

    Tag on Changes

• No tags are added to a newly deployed Compute Engine instance
    • Reapply Tags can be used to apply tags on new resources:

    Whenever new subnets are added, the Reapply Tags operation must be performed to sync VPC subnets to update the routing tables and add the routes to the newly deployed regions. This operation applies tags to new or existing virtual machines in the new region that haven’t been tagged.

    • Reattaching the gateway:

    Auto Tag

After a few minutes, the AVX Controller applies tags to the recently deployed Compute Engine instance:

    AlloyDB

Google Cloud Platform (GCP) offers a fully managed database service called AlloyDB. It is compatible with the popular open-source relational database system PostgreSQL and is built to handle enterprise-level workloads. By providing managed capabilities like scaling and automated backups, it helps businesses run PostgreSQL-based apps in the cloud while taking advantage of Google’s database infrastructure management experience.

    Deployment

AlloyDB can be deployed from the Google Console by creating a cluster:

AlloyDB is a regional service: pick your region and your VPC:

AlloyDB uses Private Services Access:

Once the cluster is deployed, we can see the network peering and the routes imported and exported:

    Networking

Applications connect to an AlloyDB instance using a single, static IP address, even though the instance itself consists of several nodes. The instance exposes no IP addresses to the public internet.

    From Google documentation:

    “Applications running outside the VPC require an intermediary service to connect to the AlloyDB instance. Solutions include running proxy services on a VM within the instance’s VPC, or using other Google Cloud products to establish a permanent connection between your application and your VPC.”

There are a few possible solutions, as discussed before (https://rtrentinsworld.com/2023/04/23/apigee-not-bee/), but not all of them work well with routes that use tagging.

• Tagged routes are not exported to the AlloyDB peering, while a static route without tags is (this can be verified with the command below):
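A hedged way to check what is exported over the peering (peering name, network, and region are placeholders):

# List routes this VPC exports to the service producer's VPC.
gcloud compute networks peerings list-routes my-peering \
    --network my-vpc \
    --region us-east1 \
    --direction OUTGOING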

    Possible solutions:

    • AVX with spoke gateways with global vpc feature disabled
    • Proxy VMs
    • Load Balancer
    • AVX with BGPoLAN peering with Google Cloud Routers deployed in different regions

AVX with BGPoLAN peering with Google Cloud Routers requires the use of Google Network Connectivity Center, which has a cost associated with it:

    https://cloud.google.com/network-connectivity/docs/network-connectivity-center/pricing

    Vertex AI

    Vertex AI is an old friend of mine. I covered Vertex AI previously:

Vertex AI falls into the same category, cloud-networking-wise, as AlloyDB and multiple other services. They are deployed on Google Cloud private VPCs and connected to a customer VPC using Private Services Access:

    https://cloud.google.com/vertex-ai/docs/general/vpc-peering

    What happens when all gateways in one Region fails?

The AVX Controller does not take any re-tagging action in a failure scenario, but an enterprise can create a Cloud Function to re-tag resources. AVX Professional Services can help you with that.

    References

    https://docs.aviatrix.com/documentation/latest/building-your-network/gcp-global-vpc.html

    https://cloud.google.com/vpc/docs/add-remove-network-tags

    https://cloud.google.com/alloydb/docs/configure-connectivity#about_network_connectivity

    https://cloud.google.com/alloydb/docs/connect-external

    https://cloud.google.com/network-connectivity/docs/network-connectivity-center/pricing

  • External Connections Traffic Engineering

    BGP (Border Gateway Protocol) is typically used in wide-area networks (WANs) to exchange routing information between different autonomous systems (ASes) on the internet. It’s not commonly used in local area networks (LANs) because LANs typically use interior gateway protocols (IGPs) like OSPF or RIP for routing within the same network.

    However, there are scenarios where BGP can be used within a LAN, particularly in large-scale data center environments or specialized network setups. One such scenario is when peering with third-party Network Virtual Appliances (NVAs) that are deployed within the LAN. These NVAs might need BGP to exchange routing information with the local network.

    Aviatrix is a cloud networking solution that provides advanced networking and security services for cloud environments, including AWS, Azure, Google Cloud, and others. It does support BGP (Border Gateway Protocol) over LAN in the context of its cloud network orchestration and management capabilities.

    Aviatrix allows organizations to set up and manage BGP peering with various network components, including third-party Network Virtual Appliances (NVAs), within their cloud-based LANs. This support for BGP over LAN enables more advanced routing and connectivity options for cloud-based workloads, making it a valuable feature for organizations looking to optimize their cloud network architecture.

    Design

    This network diagram illustrates an Aviatrix implementation utilizing BGP over LAN to establish robust connections. Aviatrix, a cloud networking solution, powers this architecture, enabling dynamic routing within the LAN. The implementation connects to Cisco Cloud Routers, which serve as gateways to multiple branches, ensuring efficient and secure data flow across the network.

This design creates two external connections instead of a single connection so that traffic can be engineered with AS Path prepending, by specifying the AS path for each BGP connection.

    https://read.docs.aviatrix.com/HowTos/transit_advanced.html#connection-as-path-prepend

    Configuration

    First Connection: Transit Gateway x CSR1000v-1

GUI screenshots are great :). The same configuration can be done with Terraform using the aviatrix_transit_external_device_conn resource:

    https://registry.terraform.io/providers/AviatrixSystems/aviatrix/latest/docs/resources/aviatrix_transit_external_device_conn

    A new interface is created:

Since we have not selected “Enable Remote Gateway HA”, the AVX Transit Gateway HA does not get a second interface.

Every time we specify a different IP for the Local LAN IP, a new interface is created:

One important observation is that the Remote LAN IP should be unique on an AVX Transit Gateway:

Detaching the connection also removes the extra interface created previously:

    The BGP configuration for the CSR1000v can be downloaded from the Controller:

Once the configuration is applied to the CSR1000v, the BGP neighborship is established:

    Second Connection: Transit Gateway x CSR1000v-2

    If no Local LAN IP is specified for the Transit Gateway, the controller will trigger the creation of a new interface/new IP address:

To avoid this situation, we need to specify the IP of the existing interface:

    Keep in mind that using the same IP for multiple connections has a side effect of the Controller removing it if one of those connections is detached.

The way to trick the Controller into creating a single connection from the Transit Gateway HA is to provide a “fake” Remote LAN IP. It is not really a trick, because the connection will be created, but it will always be down.

    A second interface is created on the HA gateway:

    Once the configuration is applied to the CSR1000v-2, the connection is established:

    As expected, one connection is down. From the Multi-Cloud Transit BGP tab we can see the status of connections 1 and 2:

    Traffic Engineering

Now, with multiple connections, we can use AS Path prepending to influence the traffic:

    Configuring Prepend:

    References

    https://docs.aviatrix.com/previous/documentation/latest/planning-secure-networks/s2c-overview.html

    https://docs.aviatrix.com/documentation/latest/building-your-network/external-connection-create-bgp-over-lan.html

  • VPC Peering Security Groups

    A security group serves as a protective barrier, functioning like a firewall to manage the flow of network traffic to and from the resources within your Virtual Private Cloud (VPC). With security groups, you have the flexibility to select the specific ports and communication protocols that are permitted for both incoming (inbound) and outgoing (outbound) network traffic.

    You have the capability to modify the inbound or outbound rules within your VPC’s security groups to make reference to security groups in a peered VPC. This adjustment enables the smooth exchange of network traffic between instances associated with the specified security groups in the peered VPC.

    Testing

    Testing topology:

    SG:

    Result:

    Changing from cross referenced SG to CIDR:

    Results:

    No pings were lost.

    References

    https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html

    https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
