Apigee not bee :)


Apigee is a Google SaaS platform for developing and managing APIs. It provides an abstraction layer in front of backend service APIs, adding security, rate limiting, quotas, and analytics. Apigee consists of the following components:

  • Apigee services: the APIs that you use to create, manage, and deploy API proxies.
  • Apigee runtime: a set of containerized runtime services in a Kubernetes cluster that Google maintains. All API traffic passes through and is processed by these services.
  • GCP services: provide identity management, logging, analytics, metrics, and project management functions.
  • Back-end services: responsible for performing business logic, accessing databases, processing requests, and generating responses. These services can be hosted on the same server as the API proxy or on a separate one, and they communicate with the API proxy over a RESTful API or other protocols.
Source: https://cloud.google.com/apigee/docs/api-platform/get-started/what-apigee

A more granular, network-friendly diagram is shown below:

Source: https://cloud.google.com/apigee/docs/api-platform/get-started/what-apigee

A more in-depth overview is provided here: https://cloud.google.com/apigee/docs/api-platform/architecture/overview

Setting it up

There are at least three different ways to provision Apigee:


I’m going to use the free trial (evaluation) wizard to get acquainted with Apigee:

The evaluation wizard guides us through the steps:

  • Enable APIs
  • Networking
  • Organization
  • Access Routing

Apigee runtime requires a dedicated /22 range for evaluation:

Each Apigee instance requires a non-overlapping CIDR range of /22 and /28. The Apigee runtime plane is assigned IP addresses from within this CIDR range.
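For reference, the wizard reserves these peering ranges for you; doing it by hand with gcloud would look roughly like this (the range names and the vpc001 network are assumptions from this lab):

```shell
# Sketch: reserve the /22 peering range the Apigee runtime will use.
gcloud compute addresses create google-managed-services-vpc001 \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=22 \
  --network=vpc001

# Evaluation orgs also need an additional /28 support range.
gcloud compute addresses create google-managed-services-support-1 \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=28 \
  --network=vpc001
```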

Organization provisioning can take up to 45 minutes.

Client to Apigee traffic is also called “northbound” traffic. Northbound configuration options include the following:

  • internal with VPC peering
  • external with MIG
  • internal with PSC
  • external with PSC

Once the access routing is configured, Apigee is ready.


The network and CIDR provided to the wizard are used to deploy the ingress internal load balancer (instance):

A VPC peering allows communication between the VPCs:

Source: https://cloud.google.com/apigee/docs/api-platform/architecture/overview

The VPC network peering is part of the private service connection configuration:
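The wizard sets this up too; the equivalent gcloud step, assuming the /22 range was reserved as google-managed-services-vpc001, would be:

```shell
# Sketch: peer vpc001 with the Google-managed service networking producer VPC,
# handing it the reserved range for the Apigee runtime.
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --network=vpc001 \
  --ranges=google-managed-services-vpc001
```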

To route traffic from client apps on the internet to Apigee, we can use a global external HTTPS load balancer. An LB can communicate across GCP projects.

We could also provision a MIG of virtual machines as a network bridge. The MIG VMs can communicate bidirectionally across the peered networks.

Apps on the internet talk to the XLB, the XLB talks to the bridge VM, and the bridge VM talks to the Apigee network.

Source: https://cloud.google.com/apigee/docs/api-platform/architecture/overview

Load Balancers

The reason we cannot simply place a load balancer in front of the Apigee ingress:

A compute instance working as a proxy is required to route traffic from outside the customer VPC (vpc001). More on that during the testing.

Using Apigee

I’m going to use the classic console, as not every feature is available in the Google Cloud console:

Create API Proxy



From a VM running in the customer vpc001

From a VM running in the customer vpc001 (the one directly attached to Apigee):
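The call from the vpc001 VM looks roughly like this; the hostname and instance IP below are hypothetical placeholders for my lab values:

```shell
# APIGEE_HOSTNAME: the environment group hostname; APIGEE_INSTANCE_IP: the
# runtime instance "host" IP. Both are placeholder values.
APIGEE_HOSTNAME=test.example.com
APIGEE_INSTANCE_IP=10.0.0.2

# --resolve pins the hostname to the internal instance IP; -k skips
# certificate verification for the eval org's self-signed certificate.
curl -i -k "https://$APIGEE_HOSTNAME/hello-world" \
  --resolve "$APIGEE_HOSTNAME:443:$APIGEE_INSTANCE_IP"
```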

Tracing the API call using the Apigee trace:

From a VM running in the customer vpc002 (vpc peering)

VPC peering is not transitive, and the Apigee CIDR is not exported from vpc001 towards vpc002, which makes a proxy (such as a VM running in vpc001) necessary.
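This can be confirmed by listing the routes vpc002 receives over the peering; the Apigee /22 will be absent (PEERING_NAME is a hypothetical placeholder for the vpc001–vpc002 peering):

```shell
# Sketch: show routes imported by vpc002 from the peering with vpc001.
gcloud compute networks peerings list-routes PEERING_NAME \
  --network=vpc002 \
  --region=us-east1 \
  --direction=INCOMING
```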

From a VM running in the customer vpc002 (MIG)

Enable Private Google Access for a subnet of your VPC network:
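A minimal sketch of that step, assuming the MIG subnet names from this lab:

```shell
# Enable Private Google Access so instances without external IPs can
# reach Google APIs (needed by the MIG startup script).
gcloud compute networks subnets update subnet001 \
  --region=us-east1 \
  --enable-private-ip-google-access
```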

Define variables:
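A plausible set of values for this lab, matching the template command below; everything here is my assumption, so adjust to your own project:

```shell
# All values below are lab-specific assumptions; replace with your own.
MIG_NAME=apigee-mig
PROJECT_ID=rtrentin-01     # GCP project id
REGION=us-east1            # Apigee runtime region
VPC_NAME=vpc001            # customer VPC hosting the MIG
VPC_SUBNET=subnet001       # subnet for the MIG instances
APIGEE_ENDPOINT=10.0.0.2   # Apigee instance "host" IP (placeholder)
```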

Create an instance template:

Please follow the entire procedure to deploy an external or internal load balancer with a MIG for a properly supported solution. The procedure can be found at https://cloud.google.com/apigee/docs/api-platform/get-started/install-cli#externalmig

gcloud compute instance-templates create $MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION \
  --network $VPC_NAME \
  --subnet $VPC_SUBNET \
  --tags=https-server,apigee-mig-proxy,gke-apigee-proxy \
  --machine-type e2-medium --image-family debian-10 \
  --image-project debian-cloud --boot-disk-size 20GB \
  --no-address \
  --metadata ENDPOINT=$APIGEE_ENDPOINT,startup-script-url=gs://apigee-5g-saas/apigee-envoy-proxy-release/latest/conf/startup-script.sh
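The follow-up steps from the same doc page would look roughly like this: create the managed instance group from the template, and open the firewall for Google's load balancer health checks:

```shell
# Create the MIG from the template above (same variables as before).
gcloud compute instance-groups managed create $MIG_NAME \
  --project $PROJECT_ID \
  --base-instance-name apigee-mig \
  --size 2 \
  --template $MIG_NAME \
  --region $REGION

# 130.211.0.0/22 and 35.191.0.0/16 are Google's load balancer /
# health check source ranges.
gcloud compute firewall-rules create k8s-allow-lb-to-apigee-proxy \
  --project $PROJECT_ID \
  --network $VPC_NAME \
  --allow tcp:443 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags gke-apigee-proxy
```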

From an instance running in a second VPC (in my case vpc002) we can access the Apigee proxy via one of the MIG instances:

The debug shows the connection comes from one of the MIG instances:


From a VM running in the customer vpc002 using AVX

For this scenario, I removed the peering connection between vpc001 and vpc002 and custom-advertised the Apigee CIDR range using Customize Spoke Advertised VPC CIDRs:

vpc001 “imports” the Apigee ranges:

vpc001 exports to Apigee the RFC 1918 ranges created and controlled by the AVX (Aviatrix) controller:

VPC002 gateway routing table:

Gateway vpc001 routing table:

From a VM running on vpc002 I can access the Apigee LB without a proxy:

From the debug session we can see the client IP is indeed the IP of the vpc002 compute instance:

Private Service Connect (PSC)

One of the advantages of using PSC is that there is no need to deploy a MIG. Find the Apigee service attachment:

ricardotrentin@RicardontinsMBP workflows % curl -i -H "$AUTH" \
HTTP/2 200 
content-type: application/json; charset=UTF-8
vary: X-Origin
vary: Referer
vary: Origin,Accept-Encoding
date: Sun, 23 Apr 2023 18:08:59 GMT
server: ESF
cache-control: private
x-xss-protection: 0
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
accept-ranges: none

  "instances": [
      "name": "test-lab-apigee-us-east1",
      "location": "us-east1",
      "host": "",
      "port": "443",
      "createdAt": "1682013894059",
      "lastModifiedAt": "1682015454529",
      "diskEncryptionKeyName": "projects/rtrentin-01/locations/us-east1/keyRings/lab-test-apigee-kr/cryptoKeys/lab-test-apigee-key",
      "state": "ACTIVE",
      "peeringCidrRange": "SLASH_22",
      "runtimeVersion": "1-9-0-apigee-25",
      "ipRange": ",",
      "consumerAcceptList": [
      "serviceAttachment": "projects/u86f317c835229a5b-tp/regions/us-east1/serviceAttachments/apigee-us-east1-kt9m"

Create a network endpoint group:

gcloud compute network-endpoint-groups create apigee-neg \
  --network-endpoint-type=private-service-connect \
  --psc-target-service=projects/u86f317c835229a5b-tp/regions/us-east1/serviceAttachments/apigee-us-east1-kt9m \
  --region=$RUNTIME_LOCATION \
  --network=vpc001 \
  --subnet=subnet002

Resuming the load balancer creation we initiated before:

The rest of the configuration is straightforward.
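For reference, the remaining external HTTPS LB pieces in front of the PSC NEG look roughly like this; all resource names here are my placeholders:

```shell
# Backend service pointing at the PSC NEG (no health check: PSC NEGs
# don't take one).
gcloud compute backend-services create apigee-backend \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTPS \
  --global

gcloud compute backend-services add-backend apigee-backend \
  --network-endpoint-group=apigee-neg \
  --network-endpoint-group-region=us-east1 \
  --global

# URL map, HTTPS proxy, and forwarding rule complete the LB.
gcloud compute url-maps create apigee-lb \
  --default-service=apigee-backend

gcloud compute target-https-proxies create apigee-https-proxy \
  --url-map=apigee-lb \
  --ssl-certificates=apigee-ssl-cert

gcloud compute forwarding-rules create apigee-fw \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --target-https-proxy=apigee-https-proxy \
  --global \
  --ports=443
```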

Next Steps







