Using Cloud Interconnect with Aviatrix

Google Cloud Interconnect is a service provided by Google Cloud Platform (GCP) that enables customers to establish private, high-performance connections between their on-premises infrastructure and Google Cloud. It offers low-latency, secure connectivity by bypassing the public internet, making it ideal for scenarios like data migration, replication, disaster recovery, and hybrid cloud deployments. There are three main options. Key benefits include reduced latency, enhanced security (traffic stays off the public internet), cost savings on egress traffic, and direct access to Google Cloud’s internal IP addresses without needing VPNs or NAT devices. It’s widely used by enterprises in industries like media, healthcare, … Continue reading Using Cloud Interconnect with Aviatrix

A little help from my friend… hacks on how to work with default routes

Most, if not all, GCP customers consume GCP PaaS/SaaS services like GKE, Cloud SQL, and others. Those services have their compute capacity provisioned inside Google-owned VPCs, and VPC peerings are used to establish a data plane for customers to reach them. AVX Behavior Constraints Workarounds AVX Gateway Routes Create routes with a higher priority and with the tag avx-<vpc name>-gbl and the next hop “Default internet gateway”. Those routes are used exclusively by AVX Spoke Gateways. This step is necessary to prevent a routing loop when executing the step below. 0.0.0.0/0 Option 1 It is possible to use the feature … Continue reading A little help from my friend… hacks on how to work with default routes
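Below is a minimal sketch of creating such a higher-priority, tag-scoped route through the Compute Engine API with the google-api-python-client library; the project, network, route name, and priority value are placeholders for illustration, not values from the post.

```python
# Create a 0.0.0.0/0 route that applies only to instances carrying the AVX tag,
# pointing at the default internet gateway (sketch; names and priority are assumptions).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

route_body = {
    "name": "avx-vpc001-gbl-default",   # illustrative route name
    "network": "projects/my-project/global/networks/vpc001",
    "destRange": "0.0.0.0/0",
    "priority": 500,                     # lower value = higher priority than the default 1000
    "tags": ["avx-vpc001-gbl"],          # scoped to the AVX spoke gateways carrying this tag
    "nextHopGateway": "projects/my-project/global/gateways/default-internet-gateway",
}

operation = compute.routes().insert(project="my-project", body=route_body).execute()
print(operation.get("name"), operation.get("status"))
```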

AVX “Global VPC” Tagging

GCP Global VPC creates regional awareness between the VPC and Aviatrix gateways, allowing you to restrict spoke gateway traffic to transit gateways in the same region as the spoke gateway. Without Global VPC, communications between spokes in the same region may be routed over a transit gateway outside that region. Regional awareness is achieved by appending regional network tags to virtual machines and adding regional routes, matched by those tags, to the routing table of the gateways. From Google Cloud documentation: “A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or … Continue reading AVX “Global VPC” Tagging
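The sketch below shows the mechanical part of that approach, appending a regional network tag to an existing VM via the Compute Engine API; the project, zone, instance, and tag names are assumptions, and this is not the exact Aviatrix workflow.

```python
# Append a regional network tag to a VM so that tag-scoped regional routes apply to it.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone, instance = "my-project", "us-east1-b", "spoke-vm-1"   # placeholders

vm = compute.instances().get(project=project, zone=zone, instance=instance).execute()
tags = vm["tags"]
tags.setdefault("items", []).append("avx-us-east1-vpc001-gbl")       # hypothetical regional tag

# The fingerprint read above must be echoed back so concurrent tag updates are not lost.
compute.instances().setTags(
    project=project, zone=zone, instance=instance,
    body={"items": tags["items"], "fingerprint": tags["fingerprint"]},
).execute()
```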

Apigee not bee :)

Apigee is a Google SaaS platform for developing and managing APIs. Apigee provides an abstraction layer to backend service APIs and provides security, rate limiting, quotas, and analytics. Apigee consists of the following components: A more granular, network-friendly diagram is shown below: A more in-depth overview is provided here: https://cloud.google.com/apigee/docs/api-platform/architecture/overview Setting it up There are at least three different ways to provision Apigee: https://cloud.google.com/apigee/docs/api-platform/get-started/provisioning-intro#provisioning-options I’m going to use the free trial wizard to get acquainted with Apigee: The evaluation wizard guides us through the steps: Apigee runtime requires a dedicated /22 range for evaluation: Each Apigee instance requires … Continue reading Apigee not bee 🙂
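As a tiny illustration of the /22 requirement mentioned above, the snippet below sanity-checks a candidate peering range before provisioning; the CIDR itself is a made-up example.

```python
# Verify that a candidate range for the Apigee evaluation runtime is a private /22.
import ipaddress

candidate = ipaddress.ip_network("10.111.0.0/22")   # assumed example range

assert candidate.prefixlen == 22, "the evaluation provisioning expects a /22"
assert candidate.is_private, "the range should come from private (RFC 1918) space"
print(f"{candidate} provides {candidate.num_addresses} addresses for the Apigee instance")
```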

Scaling Out Secure Dedicated Ingress on GCP

Proposed Architecture The architecture presented below satisfies GCP customers’ requirements to use third-party, compute-instance-based appliances in their traffic flows. The design considers HTTP(S) load balancers due to their advanced capabilities. Constraints GCP Load Balancers Decision Chart Update DNS How to Scale Scenario 1 How to Scale Scenario 2 How to Scale Scenario 3 How to Scale Scenario 4 The HC is the same as before, since we are checking the health of the compute instances: References https://research.google/pubs/pub44824/ https://cloud.google.com/load-balancing/docs/load-balancing-overview https://cloud.google.com/load-balancing/docs/backend-service Continue reading Scaling Out Secure Dedicated Ingress on GCP
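Since the load balancer health checks target the appliance instances, a minimal health-check endpoint such as the sketch below could run on each instance; the port and path are assumptions, not values from the design.

```python
# Minimal HTTP health-check responder for an appliance instance behind the LB.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # GCP health-check probes originate from 35.191.0.0/16 and 130.211.0.0/22,
    # so VPC firewall rules must allow those ranges to reach this port.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```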

Dedicated Ingress VPC Health Checks

Topology (VPC003) Workload Configuration Instance Group: Health check: Network Load Balancer: (VPC001) Ingress VPC SNAT/DNAT using a single NAT: Another option is to use customized NAT: Instance Group: Health Check: External Global HTTP(S) Load Balancer: Testing Packet capture from the proxy instance: Troubleshooting Health check failures: “End-to-End” Health Check In this scenario, the external load balancer health check probes the internal load balancer: New HC on port 80 (service port): References https://cloud.google.com/load-balancing/docs/health-check-concepts Continue reading Dedicated Ingress VPC Health Checks
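When troubleshooting health-check failures from a packet capture, a quick classification step is to check whether the probe traffic really comes from Google's documented health-check source ranges; the snippet below only does that, with made-up sample addresses.

```python
# Classify captured source IPs as GCP health-check probes or regular traffic.
import ipaddress

HC_RANGES = [ipaddress.ip_network(r) for r in ("35.191.0.0/16", "130.211.0.0/22")]

def is_health_check_probe(src_ip: str) -> bool:
    ip = ipaddress.ip_address(src_ip)
    return any(ip in net for net in HC_RANGES)

for src in ("35.191.10.4", "130.211.1.9", "203.0.113.7"):   # hypothetical capture entries
    print(src, "health-check probe" if is_health_check_probe(src) else "other traffic")
```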

Migrating from GCP… to GCP

Current and Future Architecture Current state: Desired state: vpc001 is composed of the following subnets: vpc001 routing table (filtering routes of interest): On-prem (AS 36180) routing table: Staging On-prem route table after staging is complete (AVX gateway is not attached): Attaching the gateway: Then advertise the vpc001 subnets with a better metric (please note that RFC 6598 prefixes are not advertised from AVX by default): To avoid traffic switching over to AVX asymmetrically during the staging, we have a few options: Switching traffic over (East-West) vpc001 destination routes to vpc002: When Cloud Routers learn a prefix that exactly matches the … Continue reading Migrating from GCP… to GCP
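To see why advertising the vpc001 subnets with a better metric steers traffic, the toy route-selection sketch below applies longest-prefix match first and then the lowest metric; the prefixes, metrics, and next hops are invented for illustration and are not taken from the migration.

```python
# Toy route selection: longest prefix wins, ties broken by the lowest metric.
import ipaddress

routes = [
    ("10.10.0.0/16", 300, "on-prem"),      # aggregate learned from on-prem (AS 36180)
    ("10.10.1.0/24", 100, "avx-transit"),  # vpc001 subnet advertised with a better metric
    ("10.10.1.0/24", 200, "on-prem"),
]

def best_route(dest: str):
    ip = ipaddress.ip_address(dest)
    candidates = [r for r in routes if ip in ipaddress.ip_network(r[0])]
    return max(candidates, key=lambda r: (ipaddress.ip_network(r[0]).prefixlen, -r[1]))

print(best_route("10.10.1.25"))   # -> ('10.10.1.0/24', 100, 'avx-transit')
```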

All those (vpc) flow logs… Consolidate vpc flow logs using BigQuery

VPC Flow Logs VPC Flow Logs records a sample of network flows. Logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Once vpc_flows are enabled on all subnets of interest, we can go to Logging to check whether logs are arriving: Sample below: BigQuery Cloud Logging “routes” logs to destinations like buckets, BigQuery, or Pub/Sub: We want to consolidate all logs in a centralized location where we can consume the ingested data. We can use BigQuery to accomplish that by creating a sink. Sinks control how Cloud Logging routes logs: We can also set … Continue reading All those (vpc) flow logs… Consolidate vpc flow logs using BigQuery
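Once the sink is exporting into BigQuery, the consolidated logs can be queried with the google-cloud-bigquery client, roughly as sketched below; the project and dataset names are assumptions, and the table wildcard and jsonPayload field names follow the usual VPC Flow Logs export layout, so verify them against your own sink.

```python
# Query the consolidated VPC flow logs for the top talkers by bytes sent (sketch).
from google.cloud import bigquery

client = bigquery.Client(project="my-logging-project")   # assumed central logging project

query = """
SELECT
  jsonPayload.connection.src_ip  AS src_ip,
  jsonPayload.connection.dest_ip AS dest_ip,
  SUM(CAST(jsonPayload.bytes_sent AS INT64)) AS bytes
FROM `my-logging-project.vpc_flows.compute_googleapis_com_vpc_flows_*`
GROUP BY src_ip, dest_ip
ORDER BY bytes DESC
LIMIT 20
"""

for row in client.query(query).result():
    print(row.src_ip, row.dest_ip, row.bytes)
```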

Using a GCP LB to provide DNS High-Availability

DNS uses UDP port 53 for most of its operations but relies on TCP for operations that require the transmission of messages exceeding 512 bytes. When the message size exceeds 512 bytes, the server sets the TC (truncation) bit in the DNS response to inform the client that the message length has exceeded the allowed size. The client then needs to retry the query over TCP (where the size limit is 65,535 bytes). Back End Configuration If you happen to run the HC across devices like routers or firewalls, you will need to configure DNAT for those devices to properly reply back to the HC of … Continue reading Using a GCP LB to provide DNS High-Availability
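The UDP-to-TCP fallback described above can be exercised with dnspython, as in the sketch below; the resolver address and query name are placeholders for the DNS backends behind the load balancer.

```python
# Query over UDP first; if the server sets the TC bit, retry the same question over TCP.
import dns.flags
import dns.message
import dns.query

resolver_ip = "10.20.0.10"                 # assumed internal LB / DNS backend address
query = dns.message.make_query("example.internal.", "TXT")

response = dns.query.udp(query, resolver_ip, timeout=2)
if response.flags & dns.flags.TC:
    # The answer did not fit in the UDP payload, so fall back to TCP (up to 65,535 bytes).
    response = dns.query.tcp(query, resolver_ip, timeout=2)

print(response.answer)
```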