Running GKE on top of an Aviatrix Secure Cloud Network – Part 2

Multi-cluster Services (MCS)

Aviatrix Overview

Aviatrix is a cloud network platform that brings multi-cloud networking, security, and operational visibility capabilities that go beyond what any cloud service provider offers. Aviatrix software leverages AWS, Azure, GCP and Oracle Cloud APIs to interact with and directly program native cloud networking constructs, abstracting the unique complexities of each cloud to form one network data plane, and adds advanced networking, security and operational features enterprises require.

FireNet

Aviatrix Transit FireNet allows the deployment of third-party firewalls onto the Aviatrix transit architecture.

Transit FireNet works the same way as the Firewall Network (FireNet): traffic in and out of the specified Spoke is forwarded to the firewall instances for inspection or policy application.

Aviatrix and GKE Deployment

The diagram below shows the topology I’m going to use in this document:

  • gke-us-east4-cluster-1 nodes are deployed in the gcp-spoke100-us-east4 VPC
  • gke-us-east4-cluster-2 nodes are deployed in the gcp-spoke200-us-east4 VPC

The Aviatrix configuration and the GKE deployment are covered in the following post:

I created the following extra subnet and secondary ranges:

The second cluster is created with the following command:

gcloud beta container clusters create gke-us-east4-cluster-2 \
  --zone "us-east4-b" \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr "192.168.254.16/28" \
  --enable-ip-alias \
  --network "gcp-spoke200-us-east4" \
  --subnetwork "gcp-spoke200-us-east4-nodes" \
  --cluster-secondary-range-name "gcp-spoke200-us-east4-pod" \
  --services-secondary-range-name "gcp-spoke200-us-east4-services" \
  --enable-master-authorized-networks \
  --master-authorized-networks "172.24.140.0/23"

After the creation of the second cluster, I have two GKE clusters running in different spokes and locations:

If you haven’t already, enable Workload Identity (I forgot to include it during cluster creation):
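As a sketch, this can also be done from the CLI; PROJECT_ID and the default-pool node pool name below are placeholders for your environment:

gcloud container clusters update gke-us-east4-cluster-2 \
  --zone "us-east4-b" \
  --workload-pool "PROJECT_ID.svc.id.goog"

gcloud container node-pools update default-pool \
  --cluster gke-us-east4-cluster-2 \
  --zone "us-east4-b" \
  --workload-metadata GKE_METADATA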

Routing

GKE creates a peering connection between the node VPC and the control plane:

If you plan to manage the cluster from outside the VPC, you have to change the peering configuration so that the custom routes from the VPC are exported to the control-plane peered network:
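As a sketch, the peering created by GKE can be located and updated from the CLI; the actual peering name comes from the list command and is specific to your cluster:

gcloud compute networks peerings list --network "gcp-spoke200-us-east4"

gcloud compute networks peerings update PEERING_NAME \
  --network "gcp-spoke200-us-east4" \
  --export-custom-routes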

The control-plane network will learn all the routes from the VPC:

and it exports the control-plane range back to the VPC:

Multi-cluster Services (MCS)

MCS is a cross-cluster Service discovery and invocation mechanism for GKE. Services enabled with this feature are discoverable and accessible across clusters through a virtual IP, behaving like a ClusterIP Service that is accessible within a cluster.

  • MCS configures Cloud DNS zones and records for each exported Service in the fleet clusters.
  • MCS configures firewall rules that let Pods communicate with each other across clusters within the fleet.
  • MCS uses Traffic Director as a control plane to keep track of endpoints and their health across clusters.

Constraints

  • MCS only supports exporting Services from VPC-native GKE clusters.
  • Connectivity between clusters depends on the clusters running within the same VPC network or in peered VPC networks.
  • Services cannot be exported in the default and kube-system namespaces.
  • A single Service can be exported to up to 5 clusters.
  • A single Service supports up to 250 Pods.
  • Up to 50 unique Services are supported.
  • A Service cannot be exported if a Service with the same name and namespace is already being exported by clusters in a different project in the fleet.

MCS Configuration

The following APIs are required:

  • Multi-cluster Service discovery
  • GKE fleet (hub)
  • Resource Manager
  • Traffic Director
  • Cloud DNS

gcloud services enable multiclusterservicediscovery.googleapis.com gkehub.googleapis.com cloudresourcemanager.googleapis.com trafficdirector.googleapis.com dns.googleapis.com

Next, we have to enable MCS for the fleet:

gcloud container hub multi-cluster-services enable

GKE clusters need to be registered to a fleet:

gcloud container hub memberships register <MEMBERSHIP_NAME> --gke-cluster <LOCATION/GKE_CLUSTER> --enable-workload-identity

If you have Anthos enabled, after this step you can see the registered clusters:
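The registered memberships can also be listed from the CLI:

gcloud container hub memberships list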

Finally, we have to create a “common” namespace in each cluster to export services into:

kubectl create ns NAMESPACE
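As a sketch, assuming kubectl contexts named after the clusters (list yours with kubectl config get-contexts), the mcs-common namespace I use later is created in both:

kubectl --context gke-us-east4-cluster-1 create ns mcs-common
kubectl --context gke-us-east4-cluster-2 create ns mcs-common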

Verification

Run the following command to verify that MCS is enabled:

gcloud container hub multi-cluster-services describe

Testing

I’m going to create a Deployment running nginx:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: mcs-common
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

MCS supports ClusterSetIP and headless Services.

I’ll expose nginx using a default service (ClusterIP):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: mcs-common
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Ok… so far so good. To export a Service to other clusters within a fleet, we have to create a ServiceExport:

kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: mcs-common
  name: nginx
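As a sketch of tying this together, assuming the same kubectl context names as before and that the manifests above are saved as nginx.yaml, nginx-svc.yaml and nginx-export.yaml: apply them on cluster 1, and after a few minutes MCS creates a matching ServiceImport in the other fleet clusters:

kubectl --context gke-us-east4-cluster-1 apply -f nginx.yaml -f nginx-svc.yaml -f nginx-export.yaml

kubectl --context gke-us-east4-cluster-2 get serviceimports -n mcs-common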

Once the export is created, we can see that Traffic Director is configured:

As is Cloud DNS:
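To check it end to end, the exported Service should be reachable from the other cluster under the clusterset.local domain (nginx.mcs-common.svc.clusterset.local). A minimal sketch, again assuming the context names used earlier:

kubectl --context gke-us-east4-cluster-2 -n mcs-common run curl-test \
  --image curlimages/curl --rm -it --restart Never -- \
  curl -s http://nginx.mcs-common.svc.clusterset.local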

I’m going to cover Ingress in the next post!

References

https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-services

https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services
