Home

  • Checking Bandwidth Consumption with Athena

    Checking Bandwidth Consumption with Athena

    VPC flow logs capture information about the IP traffic going to and from network interfaces in a VPC. Athena is an interactive query service that makes it easy to analyze data directly in S3 using standard SQL.

    Topology

    Create a (S3) Bucket

    Enable (VPC) Flow Logs

    Apache Parquet is a columnar data format that stores and queries data more efficiently and cost-effectively than a text format. Queries on data stored in Parquet format are 10 to 100 times faster and cheaper than queries on data stored in text format. Flow logs delivered in Parquet format with Gzip compression use about 20 percent less storage space in Amazon S3 than flow logs delivered in text format with Gzip compression, further reducing storage and query costs.
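
    A minimal sketch of enabling flow logs with Parquet delivery to S3 using the AWS CLI (the VPC ID and bucket name below are placeholders):

    aws ec2 create-flow-logs \
      --resource-type VPC \
      --resource-ids vpc-0123456789abcdef0 \
      --traffic-type ALL \
      --log-destination-type s3 \
      --log-destination arn:aws:s3:::my-flow-logs-bucket \
      --destination-options FileFormat=parquet,HiveCompatiblePartitions=true,PerHourPartition=true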

    Athena Integration

    CloudFormation

    Athena

    Testing

    Bandwidth Consumption

    Breaking down total bytes by VPC:
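
    A minimal sketch of such a query, assuming the flow log format (and therefore the Athena table) includes the vpc_id field:

    SELECT vpc_id, SUM(bytes) AS total_bytes
    FROM <table_name>
    GROUP BY vpc_id
    ORDER BY total_bytes DESC;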

    Query Examples

    • daily bandwidth utilization
    SELECT SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ')) = date_parse('2022-02-15', '%Y-%m-%d');
    
    • daily bandwidth utilization (range)
    SELECT DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ')) AS date, SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ')) BETWEEN date_parse('start_date', '%Y-%m-%d') AND date_parse('end_date', '%Y-%m-%d')
    GROUP BY DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ'));
    
    • max daily bandwidth utilization
    SELECT DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ')) AS date, SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ') BETWEEN date_parse('start_date', '%Y-%m-%d') AND date_parse('end_date', '%Y-%m-%d')
    GROUP BY DATE(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ'))
    ORDER BY total_bytes DESC
    LIMIT 1;
    • top talkers
    SELECT srcaddr, SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ') BETWEEN date_parse('start_date', '%Y-%m-%d') AND date_parse('end_date', '%Y-%m-%d')
    GROUP BY srcaddr
    ORDER BY total_bytes DESC;
    • top services/port
    SELECT dstport, SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ') BETWEEN date_parse('start_date', '%Y-%m-%d') AND date_parse('end_date', '%Y-%m-%d')
    GROUP BY dstport
    ORDER BY total_bytes DESC;
    • malicious/denied traffic
    SELECT *
    FROM <table_name>
    WHERE action = 'DENY';
    • trend
    SELECT DATE_FORMAT(date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ'), '%Y-%m-%d %H:00:00') AS hour, SUM(bytes) AS total_bytes
    FROM <table_name>
    WHERE date_parse("@timestamp", '%Y-%m-%dT%H:%i:%S.%fZ') BETWEEN date_parse('start_date', '%Y-%m-%d') AND date_parse('end_date', '%Y-%m-%d')
    GROUP BY hour;

    References

    https://aws.amazon.com/blogs/networking-and-content-delivery/analyze-vpc-flow-logs-with-point-and-click-amazon-athena-integration/

    https://docs.aws.amazon.com/athena/index.html

  • Dedicated Ingress VPC Health Checks

    Dedicated Ingress VPC Health Checks

    Topology

    (VPC003) Workload Configuration

    Instance Group:

    Health check:

    Network Load Balancer:

    (VPC001) Ingress VPC

    SNAT/DNAT using single NAT:

    Another option is to use customized NAT:

    Instance Group:

    Health Check:

    • standalone gateways

    External Global HTTP(S) Load Balancer:

    Testing

    Packet capture from the proxy instance:

    Troubleshooting

    Health check failures:

    “End-to-End” Health Check

    In this scenario, the external load balancer health check probes the internal load balancer:

    New HC on port 80 (service port):

    References

    https://cloud.google.com/load-balancing/docs/health-check-concepts

  • Migrating from GCP… to GCP

    Migrating from GCP… to GCP

    Current and Future Architecture

    Current state:

    Desired state:

    vpc001 is composed of the following subnets:

    ricardotrentin@RicardontinsMBP Downloads % gcloud compute networks subnets list --network vpc001
    
    NAME        REGION    NETWORK  RANGE          STACK_TYPE  IPV6_ACCESS_TYPE  INTERNAL_IPV6_PREFIX  EXTERNAL_IPV6_PREFIX
    network001  us-east1  vpc001   10.11.64.0/24  IPV4_ONLY
    network002  us-east1  vpc001   10.11.65.0/24  IPV4_ONLY
    network003  us-east1  vpc001   10.11.66.0/24  IPV4_ONLY
    network010  us-east1  vpc001   100.64.0.0/21  IPV4_ONLY

    vpc001 routing table (filtering routes of interest):

    On-prem (AS 36180) routing table:

    csr1000v-3#show ip route 
          10.0.0.0/8 is variably subnetted, 7 subnets, 2 masks
    B        10.0.0.0/16 [20/100] via 169.254.201.57, 01:42:20
    B        10.11.64.0/24 [20/100] via 169.254.0.5, 01:42:20
                           [20/100] via 169.254.0.1, 01:42:20
    B        10.11.65.0/24 [20/100] via 169.254.0.5, 01:42:20
                           [20/100] via 169.254.0.1, 01:42:20
    B        10.11.66.0/24 [20/100] via 169.254.0.5, 01:42:20
                           [20/100] via 169.254.0.1, 01:42:20
          100.0.0.0/21 is subnetted, 3 subnets
    B        100.64.0.0 [20/100] via 169.254.0.5, 01:42:20
                        [20/100] via 169.254.0.1, 01:42:20
    B        100.64.8.0 [20/100] via 169.254.0.5, 01:42:20
                        [20/100] via 169.254.0.1, 01:42:20
    B        100.64.16.0 [20/100] via 169.254.0.5, 01:42:20
                         [20/100] via 169.254.0.1, 01:42:20

    Staging

    On-prem route table after staging is complete (avx gateway is not attached):

    csr1000v-3#show ip bgp summary 
    
    Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
    169.254.0.1     4        64514     100     114       34    0    0 00:32:19        6
    169.254.0.5     4        64514     101     115       34    0    0 00:32:20        6
    169.254.2.1     4        64514     101     113       34    0    0 00:32:20        6
    169.254.2.5     4        64514     101     115       34    0    0 00:32:20        6
    169.254.3.1     4        64514     100     115       34    0    0 00:32:18        3
    169.254.3.5     4        64514     101     114       34    0    0 00:32:20        3
    169.254.4.1     4        64514     100     115       34    0    0 00:32:19        3
    169.254.4.5     4        64514     101     114       34    0    0 00:32:20        3
    169.254.5.1     4        64550      17      22       34    0    0 00:13:34        0
    169.254.100.45  4        64512     198     216       34    0    0 00:32:13        1
    169.254.201.57  4        64512     198     213       34    0    0 00:32:14        1

    Attaching the gateway:

    • AVX creates four (4) custom RFC1918 routes

    And it advertises the vpc001 subnets with a better metric (note that RFC 6598 prefixes are not advertised by AVX by default):

    csr1000v-3#show ip bgp summary 
    
    Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
    169.254.0.1     4        64514      81      90       29    0    0 00:25:48        6
    169.254.0.5     4        64514      81      90       29    0    0 00:25:49        6
    169.254.2.1     4        64514      81      90       29    0    0 00:25:49        6
    169.254.2.5     4        64514      81      90       29    0    0 00:25:48        6
    169.254.3.1     4        64514      81      90       29    0    0 00:25:49        3
    169.254.3.5     4        64514      81      91       29    0    0 00:25:48        3
    169.254.4.1     4        64514      81      91       29    0    0 00:25:49        3
    169.254.4.5     4        64514      81      91       29    0    0 00:25:48        3
    169.254.5.1     4        64550      30      36       29    0    0 00:25:42        5
    169.254.100.45  4        64512     158     169       29    0    0 00:25:39        1
    169.254.201.57  4        64512     159     168       29    0    0 00:25:48        1
    
    csr1000v-3#show ip route       
           10.0.0.0/8 is variably subnetted, 7 subnets, 2 masks
    B        10.0.0.0/16 [20/100] via 169.254.201.57, 00:23:00
    B        10.11.64.0/24 [20/0] via 169.254.5.1, 00:01:50
    B        10.11.65.0/24 [20/0] via 169.254.5.1, 00:01:50
    B        10.11.66.0/24 [20/0] via 169.254.5.1, 00:01:50
    B        10.12.64.0/24 [20/100] via 169.254.3.5, 00:23:00
                           [20/100] via 169.254.3.1, 00:23:00
    B        10.12.65.0/24 [20/100] via 169.254.3.5, 00:23:00
                           [20/100] via 169.254.3.1, 00:23:00
    B        10.12.66.0/24 [20/100] via 169.254.3.5, 00:23:00
                           [20/100] via 169.254.3.1, 00:23:00
          100.0.0.0/21 is subnetted, 3 subnets
    B        100.64.0.0 [20/100] via 169.254.0.5, 00:23:00
                        [20/100] via 169.254.0.1, 00:23:00
    B        100.64.8.0 [20/0] via 169.254.5.1, 00:01:50
    B        100.64.16.0 [20/0] via 169.254.5.1, 00:01:50

    To avoid traffic switching over to AVX asymmetrically during staging, we have a few options:

    • advertise only the vpc001 subnet prefix where the gateway is deployed. This only works if the avx gw is deployed in its own/dedicated subnet:
    csr1000v-3#show ip bgp summary 
    
    Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
    169.254.0.1     4        64514     367     398       33    0    0 02:01:02        6
    169.254.0.5     4        64514     367     400       33    0    0 02:01:02        6
    169.254.2.1     4        64514     367     399       33    0    0 02:01:02        6
    169.254.2.5     4        64514     367     400       33    0    0 02:01:02        6
    169.254.3.1     4        64514     367     399       33    0    0 02:01:02        3
    169.254.3.5     4        64514     367     399       33    0    0 02:01:02        3
    169.254.4.1     4        64514     367     401       33    0    0 02:01:02        3
    169.254.4.5     4        64514     367     400       33    0    0 02:01:02        3
    169.254.5.1     4        64550     126     140       33    0    0 02:00:55        1
    169.254.100.45  4        64512     730     767       33    0    0 02:00:52        1
    169.254.201.57  4        64512     731     768       33    0    0 02:01:01        1
    csr1000v-3#show ip route       
           10.0.0.0/8 is variably subnetted, 7 subnets, 2 masks
    B        10.0.0.0/16 [20/100] via 169.254.201.57, 02:02:01
    B        10.11.64.0/24 [20/0] via 169.254.5.1, 01:40:51
    B        10.11.65.0/24 [20/100] via 169.254.0.5, 00:01:35
                           [20/100] via 169.254.0.1, 00:01:35
    B        10.11.66.0/24 [20/100] via 169.254.0.5, 00:01:35
                           [20/100] via 169.254.0.1, 00:01:35
    B        10.12.64.0/24 [20/100] via 169.254.3.5, 02:02:01
                           [20/100] via 169.254.3.1, 02:02:01
    B        10.12.65.0/24 [20/100] via 169.254.3.5, 02:02:01
                           [20/100] via 169.254.3.1, 02:02:01
    B        10.12.66.0/24 [20/100] via 169.254.3.5, 02:02:01
                           [20/100] via 169.254.3.1, 02:02:01
          100.0.0.0/21 is subnetted, 3 subnets
    B        100.64.0.0 [20/100] via 169.254.0.5, 02:02:01
                        [20/100] via 169.254.0.1, 02:02:01
    B        100.64.8.0 [20/100] via 169.254.0.5, 00:01:35
                        [20/100] via 169.254.0.1, 00:01:35
    B        100.64.16.0 [20/100] via 169.254.0.5, 00:01:35
                         [20/100] via 169.254.0.1, 00:01:35
    • AS-Prepend
    csr1000v-3#show ip bgp 
    
         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *    10.11.64.0/24    169.254.5.1              0             0 64550 64550 64550 i
     *m                    169.254.0.1            100             0 64514 ?
     *                     169.254.2.5            333             0 64514 ?
     *>                    169.254.0.5            100             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
    • Change the GCP Cloud Router metric. This option does not work, as AVX sets the advertised priority to 0.
    csr1000v-3#show ip bgp 
    BGP table version is 46, local router ID is 169.254.201.58
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, 
                  r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter, 
                  x best-external, a additional-path, c RIB-compressed, 
                  t secondary path, L long-lived-stale,
    Origin codes: i - IGP, e - EGP, ? - incomplete
    RPKI validation codes: V valid, I invalid, N Not found
    
         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *>   10.11.64.0/24    169.254.5.1              0             0 64550 i
     *                     169.254.0.1              0             0 64514 ?
    • change the local preference using a route-map
    csr1000v-3#show route-map
     
    route-map avx-transit, permit, sequence 10
      Match clauses:
      Set clauses:
        local-preference 1000
      Policy routing matches: 0 packets, 0 bytes
    route-map gcp-vpc001-havpn, permit, sequence 10
      Match clauses:
      Set clauses:
        local-preference 2000
      Policy routing matches: 0 packets, 0 bytes
    csr1000v-3#show ip bgp 
    
    
         Network          Next Hop            Metric LocPrf Weight Path
     *>   10.0.0.0/16      169.254.201.57         100             0 64512 i
     *                     169.254.100.45         200             0 64512 i
     *>   10.11.64.0/24    169.254.0.1            100   2000      0 64514 ?
     *m                    169.254.0.5            100   2000      0 64514 ?
     *                     169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *                     169.254.5.1              0   1000      0 64550 i

    Switching traffic over (East-West)

    • Stage avx gateways

    vpc001 destination routes to vpc002:

    • Remove vpc peering

    When Cloud Routers learn a prefix that exactly matches the destination of an existing subnet or peering subnet route, Google Cloud does not create any custom dynamic route for the conflicting prefix.
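
    One way to verify which routes are actually installed is to inspect the VPC routing table and the routes learned by the Cloud Router (the router name below is a placeholder):

    gcloud compute routes list --filter="network:vpc001"
    gcloud compute routers get-status <cloud-router-name> --region=us-east1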

    Removing the VPC peering and its routes makes the AVX static routes, introduced by the controller when attaching the gateway, the preferred ones:

    For cases where more specific prefixes drag the traffic somewhere other than the avx gateway, like the scenario below:

    We can customize the avx spoke gateway vpc route table:

    Switching traffic over (North-South)

    While the on-prem switchover is fairly simple, as we saw before, the VPC side is not, due to the nature of Cloud Router. Cloud Router strips out any BGP attribute we could otherwise use for traffic engineering.

    Option 1: Most specific destination

    We can customize the avx spoke gateway vpc route table:

    • we replace 172.31.0.0/28 and 172.31.0.128/28 with /27s: 172.31.0.0/27, 172.31.0.32/27, 172.31.0.64/27, 172.31.0.96/27, 172.31.0.128/27, 172.31.0.160/27, 172.31.0.192/27 and 172.31.0.224/27.

    Option 2: withdrawn dynamic routes

    There are multiple ways to accomplish this, and I believe the simplest one is to disable the BGP peer:
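
    A sketch of doing that on the Cloud Router side with gcloud (router and peer names are placeholders):

    gcloud compute routers update-bgp-peer <cloud-router-name> \
      --peer-name=<bgp-peer-name> \
      --region=us-east1 \
      --no-enabled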

    Once the BGP peering is torn down, the static routes programmed by the AVX controller are preferred:

    Other approaches include BGP manipulation on the on-prem router, such as blocking/filtering prefixes out.

    Spoke to Spoke Detailed Steps

    The stage environment diagram is displayed below:

    • Spoke gateway only advertises the network it is deployed into:
    • AVX gws are deployed and attached to the transit gw
    • VPCs vpc002 and vpc003 are peered

    Route table:

    Dealing with “Global VPC”s

    This is the case where subnets inside a VPC are spread across regions:

    • 10.12.64-66.0/24 are on us-east1
    • 10.12.67-69.0/24 are on us-central1

    For a smooth migration in such a case, we need to run AVX software version 7.1, where "Global VPC" is supported:

    A second transit, aligned to the second region (us-central1), is deployed and also connected to on-prem. The Global VPC feature allows the deployment of multiple gateways in different regions, with proper routes to bring the traffic from the workload to the preferred region.

    The migration strategy is the same, but care should be taken about where the route import and export are done to avoid cross-region traffic.

    References

    https://cloud.google.com/vpc/docs/routes#routeselection

    https://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13753-25.html

  • All those (vpc) flow logs… Consolidate vpc flow logs using BigQuery

    All those (vpc) flow logs… Consolidate vpc flow logs using BigQuery

    VPC FlowLogs

    VPC Flow Logs records a sample of network flows. The logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
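
    For example, flow logs can be enabled on a subnet with gcloud (subnet and region from this lab; the sampling and aggregation values are just examples):

    gcloud compute networks subnets update network001 \
      --region=us-east1 \
      --enable-flow-logs \
      --logging-flow-sampling=0.5 \
      --logging-aggregation-interval=interval-5-sec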

    Once the vpc_flows are enabled on all subnets of interest, we can go to Logging to check if we see logs arriving:

    Sample below:

    [
      {
        "insertId": "1gzzhw9g1wrk28h",
        "jsonPayload": {
          "src_vpc": {
            "subnetwork_name": "network001",
            "vpc_name": "vpc001",
            "project_id": "rtrentin-01"
          },
          "src_instance": {
            "vm_name": "ce-vpc001",
            "project_id": "rtrentin-01",
            "zone": "us-east1-b",
            "region": "us-east1"
          },
          "start_time": "2023-01-21T23:22:17.807967667Z",
          "reporter": "SRC",
          "packets_sent": "64",
          "end_time": "2023-01-21T23:22:17.807967667Z",
          "bytes_sent": "0",
          "connection": {
            "src_ip": "10.11.64.2",
            "protocol": 6,
            "dest_ip": "173.194.217.95",
            "dest_port": 443,
            "src_port": 55106
          }
        },
        "resource": {
          "type": "gce_subnetwork",
          "labels": {
            "subnetwork_id": "7135252347660278790",
            "subnetwork_name": "network001",
            "project_id": "rtrentin-01",
            "location": "us-east1-b"
          }
        },
        "timestamp": "2023-01-21T23:22:42.256060936Z",
        "logName": "projects/rtrentin-01/logs/compute.googleapis.com%2Fvpc_flows",
        "receiveTimestamp": "2023-01-21T23:22:42.256060936Z"
      }
    ]

    BigQuery

    Cloud Logging “routes” logs to destinations like buckets, BigQuery or to Pub/Sub:

    https://cloud.google.com/logging/docs/routing/overview

    We want to consolidate all logs in a centralized location where we can consume the ingested data. We can use BigQuery to accomplish that by creating a sink. Sinks control how Cloud Logging routes logs:
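
    A minimal sketch of creating such a sink with gcloud, assuming a BigQuery dataset named vpc_flows_dataset already exists in the project:

    gcloud logging sinks create vpc-flows-sink \
      bigquery.googleapis.com/projects/rtrentin-01/datasets/vpc_flows_dataset \
      --log-filter='resource.type="gce_subnetwork" AND log_id("compute.googleapis.com/vpc_flows")'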

    We can also set the table to expire after a certain number of days:

    We want to filter vpc_flows from all logs:

    Click preview to validate the inclusion filter:

    Checking the sink destination:

    A simple BigQuery query to show the possibilities we now have:
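
    For example, a top-talkers query over the sink table (the dataset and table names below are placeholders; BigQuery names the table after the log, and jsonPayload fields may need casting):

    SELECT
      jsonPayload.connection.src_ip AS src_ip,
      SUM(CAST(jsonPayload.bytes_sent AS INT64)) AS total_bytes
    FROM `rtrentin-01.vpc_flows_dataset.compute_googleapis_com_vpc_flows_*`
    GROUP BY src_ip
    ORDER BY total_bytes DESC
    LIMIT 10;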

    Another simple example:

    Visualization

    We can use Looker Studio to explore and visualize the data:

    Looker Studio is a free, self-service business intelligence platform that lets users build and consume data visualizations, dashboards, and reports. With Looker Studio, you can connect to your data, create visualizations, and share your insights with others.

    Add data:

    I selected a table from “add a chart” and dragged and dropped the src_ip, dest_ip, dest_port, and protocol fields:

    I also added a couple of gauges and a map 🙂

    References

    https://cloud.google.com/vpc/docs/using-flow-logs

    https://cloud.google.com/bigquery/

    https://cloud.google.com/bigquery/docs/visualize-looker-studio?hl=en_US

    https://cloud.google.com/community/tutorials/interconnect-usage-using-vpc-flow-logs

  • Using a GCP LB to provide DNS High-Availability

    Using a GCP LB to provide DNS High-Availability

    DNS uses UDP port 53 for most of its operations but relies on TCP for operations that require the transmission of messages exceeding 512 bytes. When the message size exceeds 512 bytes, the server sets the ‘TC’ (truncation) bit in the DNS header to inform the client that the message length has exceeded the allowed size. The client then needs to retransmit over TCP (size limit is 64,000 bytes).
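
    To see the TCP path in action, you can force a query over TCP with dig (the server address below is a placeholder for the LB front-end VIP):

    dig @<lb-frontend-ip> example.com +tcp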

    Back End Configuration

    • 35.191.0.0/16 and 130.211.0.0/22 are the GCP reserved ranges for NLB HC
    • 35.199.192.0/19 is the GCP reserved range for Cloud DNS type 1 source addresses

    If you happen to run the HC through a device like a router or firewall, you will need to configure DNAT so those devices can properly reply back to the LB’s health check:

    • Source CIDR: 35.191.0.0/16 and 130.211.0.0/22
    • Destination CIDR: LB Front End IP (with mask)
    • Destination IP: router or firewall interface IP address

    You will probably also need to create a DNAT for your backend service:

    • Source CIDR: 35.199.192.0/19
    • Destination CIDR: LB Front End IP (with mask)
    • Destination IP: DNS server IP address

    and a SNAT:

    • Source CIDR: 35.199.192.0/19
    • Destination CIDR: DNS server IP address (with mask)
    • Destination IP: router or firewall interface IP address

    TCP Load Balancer

    We reserve a static IP to be shared between both LBs:
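
    A sketch of reserving an address that multiple forwarding rules can share (name, subnet, and address are placeholders):

    gcloud compute addresses create dns-vip \
      --region=us-east1 \
      --subnet=<subnet-name> \
      --addresses=<internal-ip> \
      --purpose=SHARED_LOADBALANCER_VIP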

    UDP Load Balancer

    We have to select the previously reserved internal IP for this deployment to be successful:

    The end result is two LBs, one TCP and one UDP, sharing the front-end VIP:

    Cloud DNS Configuration

    We create a DNS policy with a rule to forward all queries to the IP of the NLB:

    We also need to attach the networks that will utilize that policy:
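
    A minimal gcloud sketch covering both steps (the NLB front-end IP is a placeholder; vpc001 is used as the network to attach):

    gcloud dns policies create dns-fwd-policy \
      --description="Forward DNS queries to the internal NLB" \
      --networks=vpc001 \
      --alternative-name-servers=<nlb-frontend-ip>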

    References

    https://cloud.google.com/load-balancing/docs/forwarding-rule-concepts

    https://cloud.google.com/load-balancing/docs/internal/internal-tcp-udp-lb-and-other-networks

    https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_forwarding_rule

    https://cloud.google.com/load-balancing/docs/health-check-concepts

  • Playing with GKE

    Playing with GKE

    Architecture

    Terraform

    resource "google_compute_subnetwork" "gke-network" {
    project = var.project
    name = "network010"
    ip_cidr_range = "100.64.0.0/21"
    region = data.google_compute_zones.available.region
    network = google_compute_network.vpc_network["vpc001"].name
    secondary_ip_range {
    range_name = "network010-pods"
    ip_cidr_range = "100.64.8.0/21"
    }
    secondary_ip_range {
    range_name = "network010-services"
    ip_cidr_range = "100.64.16.0/21"
    }
    }
    module "gke" {
    datapath_provider = "ADVANCED_DATAPATH"
    default_max_pods_per_node = 10
    enable_private_nodes = true
    horizontal_pod_autoscaling = true
    http_load_balancing = true
    ip_range_pods = "network010-pods"
    ip_range_services = "network010-services"
    name = "gke-east-${google_compute_network.vpc_network["vpc001"].name}"
    network_policy = false
    network = google_compute_network.vpc_network["vpc001"].name
    region = data.google_compute_zones.available.region
    release_channel = "UNSPECIFIED"
    remove_default_node_pool = true
    sandbox_enabled = true
    source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
    subnetwork = "network010"
    project_id = var.project
    zones = ["${element(data.google_compute_zones.available.names, 0)}"]
    node_pools = [
    {
    name = "node-pool-${google_compute_network.vpc_network["vpc001"].name}"
    machine_type = "e2-small"
    min_count = 2
    max_count = 5
    spot = true
    auto_repair = false
    auto_upgrade = false
    initial_node_count = 2
    }
    ]
    }
    ricardotrentin@RicardontinsMBP gcp-lab % gcloud container clusters list
    
    NAME             LOCATION  MASTER_VERSION   MASTER_IP     MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
    gke-east-vpc001  us-east1  1.25.5-gke.2000  34.73.106.24  e2-small      1.25.5-gke.2000  3          RUNNING
    
    ricardotrentin@RicardontinsMBP ~ % gcloud container clusters get-credentials gke-east-vpc001 --zone us-east1-b --project rtrentin-01
    
    
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for gke-east-vpc001.
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl  cluster-info
    
    Kubernetes control plane is running at https://34.73.106.24
    GLBCDefaultBackend is running at https://34.73.106.24/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
    KubeDNS is running at https://34.73.106.24/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Metrics-server is running at https://34.73.106.24/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl get nodes 
    
    NAME                                                 STATUS   ROLES    AGE     VERSION
    gke-gke-east-vpc001-node-pool-vpc001-3f001dd8-pbmd   Ready    <none>   13m     v1.25.5-gke.2000
    gke-gke-east-vpc001-node-pool-vpc001-3f001dd8-wng9   Ready    <none>   5h9m    v1.25.5-gke.2000
    gke-gke-east-vpc001-node-pool-vpc001-3f001dd8-zvp9   Ready    <none>   4m25s   v1.25.5-gke.2000
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl describe node
    
    

    Example of a deployment:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx 
      template:
        metadata:
          labels: 
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster
      selector:
        app: nginx
      ports:
      - name: tcp-port
        protocol: TCP
        port: 80
        targetPort: 80
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl get svc        
    
    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP      100.64.16.1     <none>        443/TCP        7h27m
    nginx        LoadBalancer   100.64.17.212   100.64.0.19   80:31405/TCP   69s   
    
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl get deploy                         
    
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   2/2     2            2           13m
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl get pods       
    
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-6d666844f6-q8vpk   1/1     Running   0          13m
    nginx-6d666844f6-wv88k   1/1     Running   0          13m

    Using NEG:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-neg
      annotations:
        cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "nginx-neg"}}}'
    spec:
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: nginx
    ricardotrentin@RicardontinsMBP gcp-lab % kubectl get svc   
                             
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   100.64.16.1     <none>        443/TCP   7h43m
    nginx-neg    ClusterIP   100.64.23.184   <none>        80/TCP    87s
    ricardotrentin@RicardontinsMBP gcp-lab % gcloud compute network-endpoint-groups list
    
    NAME       LOCATION    ENDPOINT_TYPE   SIZE
    nginx-neg  us-east1-b  GCE_VM_IP_PORT  2

    References

    https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview

    https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/24.1.0

  • GCP Routing Without Subtitles

    GCP Routing Without Subtitles

    Topology 1

    • VPC001 routes:
    • CSR routes:
         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *m   10.11.64.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.65.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.66.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?

    Metric 100 comes from:

    advertised_route_priority = 100
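
    That value is configured on the Cloud Router BGP peer; a minimal Terraform sketch (resource, router, interface, and peer values are illustrative placeholders, only the relevant arguments shown):

    resource "google_compute_router_peer" "havpn-peer-0" {
      name                      = "havpn-peer-0"
      router                    = "vpc001-router"
      region                    = "us-east1"
      interface                 = "havpn-if-0"
      peer_ip_address           = "169.254.0.2"
      peer_asn                  = 36180
      advertised_route_priority = 100
    }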

    Topology 2

    • VPC001:

    • CSR:
         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *    10.11.64.0/24    169.254.129.1          333             0 64514 ?
     *m                    169.254.0.1            100             0 64514 ?
     *>                    169.254.0.5            100             0 64514 ?
     *    10.11.65.0/24    169.254.129.1          333             0 64514 ?
     *m                    169.254.0.1            100             0 64514 ?
     *>                    169.254.0.5            100             0 64514 ?
     *    10.11.66.0/24    169.254.129.1          333             0 64514 ?
     *m                    169.254.0.1            100             0 64514 ?
     *>                    169.254.0.5            100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.129.0/30 0.0.0.0                  0         32768 ?
     *>   169.254.129.4/30 0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?
    • CSR RIB:
    S*    0.0.0.0/0 [1/0] via 172.31.0.1, GigabitEthernet1
          10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
    B        10.0.0.0/16 [20/100] via 169.254.201.57, 06:57:24
    B        10.11.64.0/24 [20/100] via 169.254.0.5, 00:02:15
                           [20/100] via 169.254.0.1, 00:02:15
    B        10.11.65.0/24 [20/100] via 169.254.0.5, 00:02:15
                           [20/100] via 169.254.0.1, 00:02:15
    B        10.11.66.0/24 [20/100] via 169.254.0.5, 00:02:15
                           [20/100] via 169.254.0.1, 00:02:15
          169.254.0.0/16 is variably subnetted, 12 subnets, 2 masks
    C        169.254.0.0/30 is directly connected, Tunnel10
    L        169.254.0.2/32 is directly connected, Tunnel10
    C        169.254.0.4/30 is directly connected, Tunnel11
    L        169.254.0.6/32 is directly connected, Tunnel11
    C        169.254.100.44/30 is directly connected, Tunnel2
    L        169.254.100.46/32 is directly connected, Tunnel2
    C        169.254.129.0/30 is directly connected, Tunnel20
    L        169.254.129.2/32 is directly connected, Tunnel20
    C        169.254.129.4/30 is directly connected, Tunnel21
    L        169.254.129.6/32 is directly connected, Tunnel21
    C        169.254.201.56/30 is directly connected, Tunnel1
    L        169.254.201.58/32 is directly connected, Tunnel1
          172.31.0.0/16 is variably subnetted, 3 subnets, 2 masks
    C        172.31.0.0/28 is directly connected, GigabitEthernet1
    L        172.31.0.13/32 is directly connected, GigabitEthernet1
    S        172.31.0.128/28 [1/0] via 172.31.0.1

    Topology 3

    routing_mode = "REGIONAL"

    CSR:

     *>   10.0.0.0/16      169.254.100.45         100             0 64512 i
     *                     169.254.201.57         200             0 64512 i
     *>   10.11.64.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   10.11.65.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   10.11.66.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.129.0/30 0.0.0.0                  0         32768 ?
     *>   169.254.129.4/30 0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?

    Subnetworks 10.11.64-66.0/24 are in us-east1. Adding a new subnet to vpc001, but located in us-central1:
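
    For example, with gcloud (the subnet name is a placeholder; the range matches the route that shows up below):

    gcloud compute networks subnets create network004 \
      --network=vpc001 \
      --region=us-central1 \
      --range=100.64.0.0/24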

     *>   10.0.0.0/16      169.254.100.45         100             0 64512 i
     *                     169.254.201.57         200             0 64512 i
     *>   10.11.64.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   10.11.65.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   10.11.66.0/24    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *>   100.64.0.0/24    169.254.129.1          100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.129.0/30 0.0.0.0                  0         32768 ?
     *>   169.254.129.4/30 0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?

    100.64.0.0/24 is advertised from the central gateway.

    Topology 4

    Default Config:

    • vpc001 routes:
    • vpc002 routes:

         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *m   10.11.64.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.65.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.66.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?

    Import/Export:

    • vpc001:
    • vpc002
    • CSR
    
         Network          Next Hop            Metric LocPrf Weight Path
     *    10.0.0.0/16      169.254.100.45         200             0 64512 i
     *>                    169.254.201.57         100             0 64512 i
     *m   10.11.64.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.65.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m   10.11.66.0/24    169.254.0.5            100             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *>   169.254.0.0/30   0.0.0.0                  0         32768 ?
     *>   169.254.0.4/30   0.0.0.0                  0         32768 ?
     *>   169.254.100.44/30
                          0.0.0.0                  0         32768 ?
     *>   169.254.201.56/30
                          0.0.0.0                  0         32768 ?
     *>   172.31.0.0/28    0.0.0.0                  0         32768 ?
     *>   172.31.0.128/28  172.31.0.1               0         32768 ?

    Topology 5

     *>   10.0.0.0/16      169.254.100.45         100             0 64512 i
     *                     169.254.201.57         200             0 64512 i
     *    10.11.64.0/24    169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    10.11.65.0/24    169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    10.11.66.0/24    169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *m   10.12.64.0/24    169.254.3.5            100             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?
     *                     169.254.4.1            333             0 64514 ?
     *m   10.12.65.0/24    169.254.3.5            100             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?
     *                     169.254.4.1            333             0 64514 ?
     *m   10.12.66.0/24    169.254.3.5            100             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?

    CSR1000v RIB:

    S*    0.0.0.0/0 [1/0] via 172.31.0.1, GigabitEthernet1
          10.0.0.0/8 is variably subnetted, 7 subnets, 2 masks
    B        10.0.0.0/16 [20/100] via 169.254.100.45, 01:27:49
    B        10.11.64.0/24 [20/100] via 169.254.0.5, 01:28:10
                           [20/100] via 169.254.0.1, 01:28:10
    B        10.11.65.0/24 [20/100] via 169.254.0.5, 01:28:10
                           [20/100] via 169.254.0.1, 01:28:10
    B        10.11.66.0/24 [20/100] via 169.254.0.5, 01:28:10
                           [20/100] via 169.254.0.1, 01:28:10
    B        10.12.64.0/24 [20/100] via 169.254.3.5, 00:12:47
                           [20/100] via 169.254.3.1, 00:12:47
    B        10.12.65.0/24 [20/100] via 169.254.3.5, 00:12:47
                           [20/100] via 169.254.3.1, 00:12:47
    B        10.12.66.0/24 [20/100] via 169.254.3.5, 00:12:47
                           [20/100] via 169.254.3.1, 00:12:47
          100.0.0.0/24 is subnetted, 1 subnets
    B        100.64.0.0 [20/100] via 169.254.2.1, 00:03:01

    Using the same AS, vpc001 and vpc002 do not exchange routes. If we change the vpc002 Cloud Router to a different AS (64515):

    • vpc002 rt:
    • vpc001 rt:

    Topology 6

    • vpc001 rt:

    Topology 7

    Topology 8

    • CSR rt:
    *    10.11.64.0/24    169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    10.11.65.0/24    169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    10.11.66.0/24    169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    10.12.64.0/24    169.254.4.1            333             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?
     *m                    169.254.3.5            100             0 64514 ?
     *    10.12.65.0/24    169.254.4.1            333             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?
     *m                    169.254.3.5            100             0 64514 ?
     *    10.12.66.0/24    169.254.4.1            333             0 64514 ?
     *>                    169.254.3.1            100             0 64514 ?
     *                     169.254.4.5            333             0 64514 ?
    *m                    169.254.3.5            100             0 64514 ?
     *    100.64.0.0/24    169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    100.64.8.0/21    169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
     *    100.64.16.0/21   169.254.2.5            333             0 64514 ?
     *                     169.254.2.1            333             0 64514 ?
     *>                    169.254.0.1            100             0 64514 ?
     *m                    169.254.0.5            100             0 64514 ?
    • CSR RIB:
    B        10.11.64.0/24 [20/100] via 169.254.0.5, 00:13:49
                           [20/100] via 169.254.0.1, 00:13:49
    B        10.11.65.0/24 [20/100] via 169.254.0.5, 00:13:49
                           [20/100] via 169.254.0.1, 00:13:49
    B        10.11.66.0/24 [20/100] via 169.254.0.5, 00:13:49
                           [20/100] via 169.254.0.1, 00:13:49
    B        10.12.64.0/24 [20/100] via 169.254.3.5, 00:13:49
                           [20/100] via 169.254.3.1, 00:13:49
    B        10.12.65.0/24 [20/100] via 169.254.3.5, 00:13:49
                           [20/100] via 169.254.3.1, 00:13:49
    B        10.12.66.0/24 [20/100] via 169.254.3.5, 00:13:49
                           [20/100] via 169.254.3.1, 00:13:49
          100.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
    B        100.64.0.0/24 [20/100] via 169.254.0.5, 00:13:49
                           [20/100] via 169.254.0.1, 00:13:49
    B        100.64.8.0/21 [20/100] via 169.254.0.5, 00:13:49
                           [20/100] via 169.254.0.1, 00:13:49
    B        100.64.16.0/21 [20/100] via 169.254.0.5, 00:13:49
                            [20/100] via 169.254.0.1, 00:13:49
    • VPC001 rt:

    References

    https://cloud.google.com/vpc/docs/using-routes#gcloud

    https://cloud.google.com/network-connectivity/docs/router/support/troubleshooting

    https://developer.hashicorp.com/terraform/tutorials/kubernetes/gke?in=terraform%2Fkubernetes&utm_offer=ARTICLE_PAGE

    https://cloud.google.com/vpc/docs/routes

  • That “little” AWS Security Group to PAN Migration Project

    That “little” AWS Security Group to PAN Migration Project

    AWS Security Groups filter the traffic for one or more instances. They accomplish this filtering at the transport and IP layers, using ports and source/destination IP addresses.

    At least one Security Group is associated with an instance, and it carries a set of rules that filter traffic entering and leaving the instance.

    Security Group rules filter traffic in two directions: inbound and outbound. An SG has an implicit “deny all,” so data packets are dropped if no rule matches them.
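
    For example, an inbound rule added with the AWS CLI (group ID and CIDR are placeholders):

    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 443 \
      --cidr 10.13.0.0/24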

    The quota for security groups per network interface multiplied by the quota for rules per security group can’t exceed 1,000 (for example, 5 security groups per interface with 200 rules each).

    SGs can and should be combined with NACLs and Next-Generation Firewalls for a layered approach to the overall security posture.

    In this blog/lab I’m going to “transfer” granular SG rules to a Palo Alto NGFW located at a central location and use “generic” SG rules.

    Here comes some help!

    Expedition is the fourth evolution of the Palo Alto Networks Migration Tool. The purpose of this tool is to help reduce the time and effort of migrating a configuration from a supported vendor to Palo Alto Networks.

    https://live.paloaltonetworks.com/t5/expedition/ct-p/migration_tool

    Installation

    The Ubuntu server should have internet access, as the installer script will update the Expedition software by connecting to the Palo Alto Networks update servers and will install additional Ubuntu dependencies such as MariaDB, Apache Web Server, RabbitMQ, JVM 1.8, etc.

    • Log in with the default credentials: username “admin”, password “paloalto”

    Load Devices:

    Create Project

    Converting SGs to Security Policies

    Expedition does not support a direct conversion from AWS SGs (or any other cloud service provider construct) to PAN, as the constructs/policies are different objects. The way we found to do it is to first transform the SGs into CSV (comma-separated values) files and then import them into Expedition.

    Firewall policies can be a lot more complex and support a wider range of use cases; here I’m working with the minimum input required to create a security policy.

    From an SG we can directly extract the protocol, port range, and source:

    We can assume the action (allow) and get the destination (inbound rule) or source (outbound rule) by querying the SG association. To create a PAN policy we need:

    • source and destination addresses
    • service
    • action

    To create a service, we need to combine the protocol and the port range; the way we did it was to create a separate services file containing that information.

    Services file example:

    0-65535-tcp;tcp;0-65535
    25-25-tcp;tcp;25-25
    53-53-udp;udp;53-53
    0-0-tcp;tcp;0-0
    0-65535-udp;udp;0-65535
    79-85-tcp;tcp;79-85

    The service file will be ingested first by Expedition and the security policy file can refer to it as in the example below:

    Internal;Internal;allow;10.13.0.140;0-65535-tcp;10.14.0.132;
    Internal;Internal;allow;10.13.0.84;0-65535-tcp;10.14.0.132;
    Internal;Internal;allow;10.13.0.140;0-65535-tcp;10.14.0.88;
    Internal;Internal;allow;10.13.0.84;0-65535-tcp;10.14.0.88;
    Internal;Internal;allow;10.13.0.84;25-25-tcp;10.14.0.88;
    Internal;Internal;allow;10.13.0.140;25-25-tcp;10.14.0.88;
    Internal;Internal;allow;10.13.0.84;25-25-tcp;10.14.0.132;
    Internal;Internal;allow;10.13.0.140;25-25-tcp;10.14.0.132;
    Internal;Internal;allow;10.12.0.141;53-53-udp;10.14.0.132;
    Internal;Internal;allow;10.12.0.68;53-53-udp;10.14.0.132;
    Internal;Internal;allow;10.12.0.141;53-53-udp;10.14.0.88;
    Internal;Internal;allow;10.12.0.68;53-53-udp;10.14.0.88;
    Internal;Internal;allow;10.13.0.140;0-0-tcp;10.14.0.132;
    Internal;Internal;allow;10.13.0.84;0-0-tcp;10.14.0.132;
    Internal;Internal;allow;10.13.0.140;0-0-tcp;10.14.0.88;
    Internal;Internal;allow;10.13.0.84;0-0-tcp;10.14.0.88;

    Thanks to my buddy Ron for coming up with a tool to generate those files automagically.
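
    A minimal sketch of the kind of extraction such a tool performs, using the AWS CLI (the group ID is a placeholder; the output still has to be massaged into the CSV formats shown above):

    aws ec2 describe-security-groups \
      --group-ids sg-0123456789abcdef0 \
      --query 'SecurityGroups[].IpPermissions[].[IpProtocol,FromPort,ToPort,IpRanges[].CidrIp]' \
      --output json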

    Importing CSV into Expedition

    • importing services

    Once the file is loaded, we need to map columns to the proper values:

    Then “import data”:

    • importing rules

    Once the file is loaded, we need to map columns to the proper values:

    Then “import data”:

    Corner Cases

    • AWS SG uses “-1” while PAN uses “Any”
    • AWS SG uses “icmp” while PAN refers to it as an application “ping”

    Applying Rules to NGFW

    Import base config:

    Go to export and drag and drop from the left to the right the objects for migration:

    And then click “merge”:

    After you have created the final configuration, there are two options to deploy it. One is a manual XML file export that can be deployed on the PAN device we are migrating to; alternatively, if that PAN device is already connected to Expedition, we can use API calls to send parts of the configuration, or the whole configuration, to the device.

    Another option is to use the API Output Manager:

    • Generate API Requests (atomic = single API call; subAtomic = multiple API calls):
    • Send API Requests:

    Checking the result:

    Do not forget to commit 😉

    References

    https://www.paloaltonetworks.com/products/secure-the-network/next-generation-firewall/migration-tool

    https://live.paloaltonetworks.com/t5/blogs/what-are-applications-and-services/ba-p/342508

    https://www.paloaltonetworks.com/products/secure-the-network/next-generation-firewall/migration-tool

  • Using GitHub Actions to deploy Aviatrix

    Using GitHub Actions to deploy Aviatrix

    Automating Terraform with CI/CD enforces configuration best practices, promotes collaboration and automates the Terraform workflow.

    GitHub Actions Primer

    Actions

    An action is a custom application for the GitHub Actions platform that performs a repeated task. GitHub Actions is composed of:

    • events: an event is a specific activity in a repository that triggers a workflow run.
    • workflows: a workflow is a configurable automated process that will run one or more jobs.

    Workflow

    Workflows are defined by a YAML file in a repository and run when triggered by an event or manually. A workflow contains one or more jobs, which can run in sequential order or in parallel.

    Jobs

    A job is a set of steps in a workflow that execute on the same runner. Each step is either a shell script that will be executed or an action that will be run. Each job runs inside its own runner (a VM or a container).

    Runners

    A runner is a server that runs workflows. Each runner can run a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run workflows.

    Creating workflows for Aviatrix

    The architecture we will implement is based on the following diagram:

    The steps are detailed below.

    • Create a github repository
    • Clone to your machine:
    • create a terraform cloud workspace
    • The execution mode is set to local (GitHub runners will take care of that):
    • configure the previously created github repository as the VCS:
    • create a directory called .github and a sub-directory called workflows:
    ricardotrentin@RicardontinsMBP terraform % ls -l .github/workflows
    total 8
    -rw-r--r-- 1 ricardotrentin staff 1874 Nov 10 13:37 avx-deploy-github-actions.yaml

    The yaml file is where the workflow is configured:

    • GitHub Action checkout checks out the repository in the runner VM (ubuntu latest)
    • GitHub Action setup-terraform sets up and configures the Terraform CLI
    name: 'AVX Deploy'
    on:
      push:
        branches: [ main ]
    jobs:
      terraform:
        name: 'Terraform'
        runs-on: ubuntu-latest
        steps:
          - name: Checkout
            uses: actions/checkout@v3
          - name: Setup Terraform
            uses: hashicorp/setup-terraform@v2
            with:
              cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
          - name: Terraform Format
            id: fmt
            run: terraform fmt -check
            continue-on-error: true
          - name: Terraform Init
            id: init
            run: terraform init
          - name: Terraform Validate
            id: validate
            run: terraform validate -no-color
          - name: Terraform Plan
            id: plan
            if: github.event_name == 'pull_request'
            run: terraform plan -no-color
            continue-on-error: true
          - name: Update Pull Request
            uses: actions/github-script@v6
            if: github.event_name == 'pull_request'
            env:
              PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
            with:
              github-token: ${{ secrets.GITHUB_TOKEN }}
              script: |
                const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
                #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
                #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
                <details><summary>Validation Output</summary>
    
                \`\`\`\n
                ${{ steps.validate.outputs.stdout }}
                \`\`\`
    
                </details>
    
               #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
    
               <details><summary>Show Plan</summary>
    
               \`\`\`\n
               ${process.env.PLAN}
               \`\`\`
    
              </details>
    
             *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;
    
             github.rest.issues.createComment({
               issue_number: context.issue.number,
               owner: context.repo.owner,
               repo: context.repo.repo,
               body: output
            })
          - name: Terraform Plan Status
            if: steps.plan.outcome == 'failure'
            run: exit 1
    
          - name: Approval
            uses: trstringer/manual-approval@v1
            with:
              secret: ${{ github.TOKEN }}
              approvers: rtrentin73
              minimum-approvals: 1
              exclude-workflow-initiator-as-approver: false      
    
          - name: Terraform Apply
            if: github.ref == 'refs/heads/main' && github.event_name == 'push'
            run: terraform apply -auto-approve

    Once the Terraform files are ready (writing them is out of scope for this blog):

    ricardotrentin@RicardontinsMBP terraform % ls
    README.md       peering.tf      provider.tf     spoke.tf        transit.tf      variables.tf    vnet.tf
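
    For context, a minimal provider.tf pointing at the Terraform Cloud workspace might look something like this (organization and workspace names are placeholders, and the required providers depend on what the rest of the code deploys):

    terraform {
      cloud {
        organization = "my-org"       # placeholder organization
        workspaces {
          name = "avx-deploy"         # placeholder workspace (local execution mode)
        }
      }
      required_providers {
        aviatrix = {
          source = "AviatrixSystems/aviatrix"
        }
      }
    }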

    Save them and then run the following git commands:

    git add .
    git commit
    git push 

    The git push triggers the workflow, which creates a runner, installs Terraform, downloads the modules, runs the plan, and applies the .tf files:

    The last step before “applying” is a manual approval:

    Once the workflow action is approved, the apply runs:

    References

    https://docs.github.com/en/actions

    https://docs.github.com/en/actions/deployment

    https://developer.hashicorp.com/terraform/tutorials/automation/github-actions

    https://github.com/marketplace/actions/hashicorp-setup-terraform

  • Migrating a Full Mesh VPC Deployment to Aviatrix

    Migrating a Full Mesh VPC Deployment to Aviatrix

    The diagram below shows the initial scenario:

    • VPC peering is used for inter-VPC communication
    • TGW is used for on-prem communication

    This TGW is not “managed” by Aviatrix.

    Private route table looks like:

    Public route tables look like:

    CSR config:

    interface Tunnel1
     ip address 169.254.183.186 255.255.255.252
     ip tcp adjust-mss 1379
     tunnel source GigabitEthernet1
     tunnel mode ipsec ipv4
     tunnel destination 34.207.45.60
     tunnel protection ipsec profile ipsec-vpn-07618b0c580da3adc-0
     ip virtual-reassembly
    !
    interface Tunnel2
     ip address 169.254.138.126 255.255.255.252
     ip tcp adjust-mss 1379
     tunnel source GigabitEthernet1
     tunnel mode ipsec ipv4
     tunnel destination 34.238.14.11
     tunnel protection ipsec profile ipsec-vpn-07618b0c580da3adc-1
     ip virtual-reassembly
    !
    router bgp 36XXX
     bgp log-neighbor-changes
     bgp graceful-restart
     neighbor 169.254.138.125 remote-as 64512
     neighbor 169.254.138.125 ebgp-multihop 255
     neighbor 169.254.183.185 remote-as 64512
     neighbor 169.254.183.185 ebgp-multihop 255
     !
     address-family ipv4
      redistribute connected
      redistribute static
      neighbor 169.254.138.125 activate
      neighbor 169.254.183.185 activate
      maximum-paths 4
     exit-address-family
    !   

    AVX Transit Gateway connection to AWS TGW

    TGW supports the following types of attachment:

    • VPC
    • VPN
    • Peering
    • Connect

    An AWS TGW Connect attachment allows you to establish a connection between a transit gateway and the AVX Transit Gateway using Generic Routing Encapsulation (GRE) and Border Gateway Protocol (BGP).

    You can create up to 4 Transit Gateway Connect peers per Connect attachment (up to 20 Gbps of total bandwidth per Connect attachment).

    GRE is established on top of an attachment:

    • The code below creates two extra subnets in the AVX transit VPC and uses them to attach the VPC to the TGW
    • On top of the attachment, a TGW Connect is created
    
    data "aviatrix_transit_gateway" "avx-transit-gw" {
      gw_name = var.transit_gw
    }
    data "aws_vpc" "avx-transit-gw-vpc-cidr" {
      id = data.aviatrix_transit_gateway.avx-transit-gw.vpc_id
    }
    
    resource "aws_subnet" "tgw-attachment-subnet" {
      for_each   = toset(var.tgw_attachment_subnets_cidrs)
      vpc_id     = data.aviatrix_transit_gateway.avx-transit-gw.vpc_id
      cidr_block = each.value
      tags = {
        Name = "tgw-attachment-subnet"
      }
    }
    
    locals {
      tgw-attachment-subnet_list = [ 
        for subnets in aws_subnet.tgw-attachment-subnet: subnets.id
      ]
    }
    
    resource "aws_ec2_transit_gateway_vpc_attachment" "tgw-attachment-avx" {
      subnet_ids = local.tgw-attachment-subnet_list
      transit_gateway_id = aws_ec2_transit_gateway.tgw.id
      vpc_id             = data.aviatrix_transit_gateway.avx-transit-gw.vpc_id
    }
    
    resource "aws_ec2_transit_gateway_connect" "tgw-connect-avx" {
      transit_gateway_id      = aws_ec2_transit_gateway.tgw.id
      transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.tgw-attachment-avx.id
    }

    A route towards the TGW CIDR block is required in the AVX gateway route table:

    • By default, the AVX route table only has the VPC CIDR and a default route pointing to the VPC IGW
    data "aws_route_table" "avx-tgw-route-table" {
      vpc_id = data.aviatrix_transit_gateway.avx-transit-gw.vpc_id
      filter {
        name   = "tag:Name"
        values = ["*transit-Public-rtb"]
      }
    }
    
    resource "aws_route" "route-avx-tgw-cidr" {
      route_table_id         = data.aws_route_table.avx-tgw-route-table.route_table_id
      destination_cidr_block = element(var.transit_gateway_cidr_blocks, 0)
      transit_gateway_id     = aws_ec2_transit_gateway.tgw.id
    }

    The next step is to create peers:

    resource "aws_ec2_transit_gateway_connect_peer" "connect-avx-primary" {
      bgp_asn                       = data.aviatrix_transit_gateway.avx-transit-gw.local_as_number
      transit_gateway_address       = "10.10.0.1"
      peer_address                  = data.aviatrix_transit_gateway.avx-transit-gw.private_ip
      inside_cidr_blocks            = ["169.254.253.0/29"]
      transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.tgw-connect-avx.id
    }
    
    resource "aws_ec2_transit_gateway_connect_peer" "connect-avx-ha" {
      bgp_asn                       = data.aviatrix_transit_gateway.avx-transit-gw.local_as_number
      transit_gateway_address       = "10.10.0.2"
      peer_address                  = data.aviatrix_transit_gateway.avx-transit-gw.ha_private_ip
      inside_cidr_blocks            = ["169.254.254.0/29"]
      transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.tgw-connect-avx.id
    }

    The IPv4 CIDR block must be a /29 and must be within the 169.254.0.0/16 range, with the exception of: 169.254.0.0/29, 169.254.1.0/29, 169.254.2.0/29, 169.254.3.0/29, 169.254.4.0/29, 169.254.5.0/29, and 169.254.169.248/29.

    The first IP from each CIDR block is assigned to the customer gateway, and the second and third are assigned to the Transit Gateway. For example, from the range 169.254.100.0/29, .1 is assigned to the customer gateway and .2 and .3 are assigned to the Transit Gateway.
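
    If you prefer to derive those addresses rather than hard-code them, Terraform's cidrhost() function can do the arithmetic. The locals below are purely illustrative (the names are mine, not part of the original code):

    locals {
      # Inside CIDR used for the primary Connect peer above
      peer_inside_cidr = "169.254.253.0/29"

      # .1 -> customer (AVX) side of the GRE tunnel
      avx_inside_ip = cidrhost(local.peer_inside_cidr, 1)

      # .2 and .3 -> the two TGW BGP peer addresses
      tgw_inside_ip_1 = cidrhost(local.peer_inside_cidr, 2)
      tgw_inside_ip_2 = cidrhost(local.peer_inside_cidr, 3)
    }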

    AVX Config

    resource "aviatrix_transit_external_device_conn" "avx-aws-connect" {
      vpc_id                   = data.aviatrix_transit_gateway.avx-transit-gw.vpc_id
      connection_name          = "avx-aws-connect"
      gw_name                  = data.aviatrix_transit_gateway.avx-transit-gw.gw_name
      connection_type          = "bgp"
      tunnel_protocol          = "GRE"
      bgp_local_as_num         = data.aviatrix_transit_gateway.avx-transit-gw.local_as_number
      bgp_remote_as_num        = var.amazon_side_asn
      remote_gateway_ip        = "10.10.0.1,10.10.0.2"
      direct_connect           = true
      ha_enabled               = false
      local_tunnel_cidr        = "169.254.253.1/29,169.254.254.1/29"
      remote_tunnel_cidr       = "169.254.253.2/29,169.254.254.2/29"
      enable_edge_segmentation = false
    }

    Checking

    The connectivity model supported by Aviatrix differs from what the TGW expects:

    • AVX creates a single BGP peer per gateway
    • AWS creates two BGP peers per TGW Connect peer

    Terraform

    My buddy Jun created a Terraform module to automate all the steps above:

    https://registry.terraform.io/modules/jye-aviatrix/bgp-over-gre-brownfield-tgw-avx-transit/aviatrix/latest
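
    A minimal sketch of calling it is shown below; the input names are my assumptions, so check the module's registry page for the real variables and required values:

    module "bgp_over_gre_tgw" {
      source = "jye-aviatrix/bgp-over-gre-brownfield-tgw-avx-transit/aviatrix"

      # Hypothetical inputs -- verify against the module documentation
      transit_gw                   = var.transit_gw
      tgw_attachment_subnets_cidrs = var.tgw_attachment_subnets_cidrs
    }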

    Routes Before Migration

    Private:

    Public:

    Aviatrix Gateways Deployment

    You can deploy gateways into the existing VPCs using the AVX Controller or Terraform:

    resource "aviatrix_spoke_gateway" "spoke_gateway" {
      for_each                          = var.deploy_spoke_gateway ? var.vpcs : {}
      cloud_type                        = 1
      account_name                      = var.account
      gw_name                           = each.value.vpc_name
      vpc_id                            = module.vpc[each.value.vpc_name].vpc_id
      vpc_reg                           = data.aws_region.aws_region-current.name
      gw_size                           = var.gw_size
      ha_gw_size                        = var.gw_size
      subnet                            = element(slice(cidrsubnets(each.value.vpc_cidr, 4, 4, 4, 4, 4, 4), 4, 5), 0)
      single_ip_snat                    = false
      manage_transit_gateway_attachment = false
      ha_subnet                         = element(slice(cidrsubnets(each.value.vpc_cidr, 4, 4, 4, 4, 4, 4), 5, 6), 0)
    }
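
    The subnet and ha_subnet expressions above are a bit dense. As an illustration, assuming a sample VPC CIDR of 10.11.0.0/23, they evaluate as follows (the locals below are only here to show the arithmetic):

    locals {
      example_vpc_cidr = "10.11.0.0/23"

      # cidrsubnets() carves six consecutive /27s out of the /23:
      # ["10.11.0.0/27", "10.11.0.32/27", "10.11.0.64/27",
      #  "10.11.0.96/27", "10.11.0.128/27", "10.11.0.160/27"]
      example_subnets = cidrsubnets(local.example_vpc_cidr, 4, 4, 4, 4, 4, 4)

      # The gateway takes the 5th /27 and the HA gateway the 6th
      gw_subnet    = element(slice(local.example_subnets, 4, 5), 0) # 10.11.0.128/27
      ha_gw_subnet = element(slice(local.example_subnets, 5, 6), 0) # 10.11.0.160/27
    }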

    If you are using the Aviatrix Cloud Services Migration Framework toolkit, the discovery step reports the existing routes:

    2022-11-14 17:13:45,310   subnet-060c971c19c12abd7
    2022-11-14 17:13:45,310 - Discover route(s) for rtb-0eb9d0f325efa2214
    2022-11-14 17:13:45,376   ...............................................................
    2022-11-14 17:13:45,376   Prefix                   Next-hop                      Origin
    2022-11-14 17:13:45,376   ...............................................................
    2022-11-14 17:13:45,376   10.11.0.0/23             local                         auto
    2022-11-14 17:13:45,376   10.12.0.0/23             pcx-078a185d033bca1e8         manual
    2022-11-14 17:13:45,376   **Alert** unexpected private IP 10.12.0.0/23 to pcx-078a185d033bca1e8 in rtb-0eb9d0f325efa2214
    2022-11-14 17:13:45,377   10.13.0.0/23             pcx-0014f772c8e2ba725         manual
    2022-11-14 17:13:45,377   **Alert** unexpected private IP 10.13.0.0/23 to pcx-0014f772c8e2ba725 in rtb-0eb9d0f325efa2214
    2022-11-14 17:13:45,377   10.14.0.0/23             pcx-08d3808b36fa6cb3d         manual
    2022-11-14 17:13:45,377   **Alert** unexpected private IP 10.14.0.0/23 to pcx-08d3808b36fa6cb3d in rtb-0eb9d0f325efa2214
    2022-11-14 17:13:45,377   0.0.0.0/0                tgw-05015cee55ad7479a         manual
    2022-11-14 17:13:45,377   **Alert** route 0.0.0.0/0 to unexpected tgw-05015cee55ad7479a in rtb-0eb9d0f325efa2214

    Once the gateways are deployed, the next step is to “attach” them to the transit gateways. Again, you can choose to do it using the GUI or terraform:

    resource "aviatrix_spoke_transit_attachment" "spoke_attachment" {
      depends_on = [
        aviatrix_spoke_gateway.spoke_gateway
      ]
      for_each        = var.attach_spoke_gateway ? var.vpcs : {}
      spoke_gw_name   = each.value.vpc_name
      transit_gw_name = data.aviatrix_transit_gateway.avx-transit-gw.gw_name
    }

    Once gateways are attached, AVX will take ownership and control of VPC route tables:

    • private
    • public

    East-West

    An easy way to do this is to simply delete the pcx routes (10.X.0.0/23), which makes the 10.0.0.0/8 route pointing to the AVX spoke gateway elastic network interface the preferred path. There is a gotcha, though: the routes need to be removed in all VPCs. For example, if we remove the route to 10.12.0.0/23 from the 10.11.0.0/23 VPC, we also need to remove the route towards 10.11.0.0/23 from the 10.12.0.0/23 VPC.

    ACS Migration Toolkit version 0.2.39 introduced the option to delete the pcx routes.

    The dm.delete_peer_route module is responsible for removing the peering routes across VPCs. Its syntax is the following:

    python3.9 -m dm.delete_peer_route --ctrl_user admin --yaml_file test-lab-aviatrix-discovery.yaml 

    Using the migration toolkit we have:

    • local vpc
    • remote vpc

    Using the ACS migration toolkit to switch traffic to AVX:

    python3.9 -m dm.switch_traffic --ctrl_user admin --yaml_file test-lab-aviatrix-discovery.yaml --rm_static_route --rm_propagated_route

    It is important to point out that, because of the TGW attachment, north-south traffic will be forced through AVX while the return traffic comes back through the TGW. For that reason, the flags --rm_static_route and --rm_propagated_route are used to properly remove the TGW attachment route propagation so that the return traffic also uses BGP over GRE (the TGW Connect attachment):

    Internet Egress

    The way to migrate to centralized egress is to remove the 0.0.0.0/0 route from the private subnet route tables and enable centralized egress:

    Using the migration toolkit, we can instruct the tool to ignore TGW routes during the discovery:

    config:
      add_vpc_cidr: false
      filter_vgw_routes: false
      filter_tgw_routes: true
      configure_transit_gw_egress: true

    After a successful migration a private subnet route table will look like:

    Troubleshooting

    • The outer IP addresses of the GRE tunnel assigned to the Transit Gateway are not pingable.
    • If you have the same prefix propagated into your Transit Gateway route table from VPN, Direct Connect, and Transit Gateway Connect attachments, AWS evaluates the best path in the following order:
      • Priority 1 – Direct Connect Gateway attachment
      • Priority 2 – Transit Gateway Connect attachment
      • Priority 3 – VPN attachment
    • The AVX Controller creates an inbound rule for GRE.

    References

    https://aws.amazon.com/blogs/networking-and-content-delivery/simplify-sd-wan-connectivity-with-aws-transit-gateway-connect/

    https://aws.amazon.com/blogs/networking-and-content-delivery/integrate-sd-wan-devices-with-aws-transit-gateway-and-aws-direct-connect/
