GCP GKE + Kubernetes

This tutorial walks you through deploying a production-grade GKE cluster with Kubernetes workloads, all defined in TypeScript using two chant lexicons. The GCP lexicon produces Config Connector YAML for infrastructure; the K8s lexicon produces kubectl-ready YAML for workloads.

┌─────────────────────────────────────────────────────────┐
│ GCP Lexicon (Config Connector) (~15 resources)          │
│ ├── VPC + Subnets + Cloud NAT                           │
│ ├── GKE Cluster + Node Pool (Workload Identity)         │
│ ├── 4× GCP Service Accounts                             │
│ ├── IAM Policy Members (WI bindings + role grants)      │
│ └── Cloud DNS Managed Zone                              │
└────────────────────┬────────────────────────────────────┘
                     │ SA emails via .env → config.ts
┌────────────────────▼────────────────────────────────────┐
│ K8s Lexicon (~20 resources)                             │
│ ├── Namespace (quotas, limits, network policy)          │
│ ├── AutoscaledService (Deployment + HPA + PDB)          │
│ ├── WorkloadIdentityServiceAccount (GKE)                │
│ ├── GceIngress + GkeExternalDnsAgent                    │
│ ├── GcePdStorageClass                                   │
│ ├── GkeFluentBitAgent (Cloud Logging)                   │
│ └── GkeOtelCollector (Cloud Trace + Monitoring)         │
└─────────────────────────────────────────────────────────┘
| Source file | What it produces |
| --- | --- |
| src/infra/networking.ts | VPC, 2 subnets, firewall rules, Cloud Router, Cloud NAT |
| src/infra/cluster.ts | GKE cluster, node pool, 4 GCP SAs, IAM bindings |
| src/infra/dns.ts | Cloud DNS ManagedZone |
| src/k8s/namespace.ts | Namespace with quotas, limits, default-deny policy |
| src/k8s/app.ts | AutoscaledService + WorkloadIdentityServiceAccount + ConfigMap |
| src/k8s/ingress.ts | GceIngress + GkeExternalDnsAgent |
| src/k8s/storage.ts | GcePdStorageClass |
| src/k8s/observability.ts | Cross-lexicon observability: GkeFluentBitAgent + GkeOtelCollector |
| src/config.ts | Cross-lexicon config — reads .env for real SA emails |
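To make the cross-lexicon wiring concrete, here is a hypothetical sketch of what src/config.ts might look like. It is not the example's literal source; the variable names (GCP_PROJECT_ID, APP_GSA_EMAIL) and the placeholder fallbacks are assumptions. The idea is that before load-outputs runs, placeholder values keep the K8s build working; afterwards, real values from .env take over.

```typescript
// Hypothetical sketch of src/config.ts: shared config for both lexicons.
// Bun auto-loads .env, so after `npm run load-outputs` the env vars hold
// real service-account emails; before that, placeholders are used so the
// K8s manifests still build. Names here are assumptions, not the real code.
export interface SharedConfig {
  projectId: string;
  appGsaEmail: string; // GCP service account email used for Workload Identity
}

export function loadConfig(
  env: Record<string, string | undefined>,
): SharedConfig {
  const projectId = env.GCP_PROJECT_ID ?? "placeholder-project";
  return {
    projectId,
    // Fall back to a derived placeholder until load-outputs writes .env.
    appGsaEmail:
      env.APP_GSA_EMAIL ?? `app-sa@${projectId}.iam.gserviceaccount.com`,
  };
}

// Usage in a lexicon source file:
// const config = loadConfig(process.env);
```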
Prerequisites

  • GCP account with a project that has billing enabled
  • gcloud CLI authenticated (gcloud auth list should show your account)
  • kubectl installed
  • Bun runtime and chant installed (Installation guide)

“Bootstrap a GKE cluster with Config Connector enabled.”

cd examples/k8s-gke-microservice
export GCP_PROJECT_ID=<your-project>
# Creates GKE cluster, enables Config Connector, sets up Workload Identity
npm run bootstrap

This enables required APIs, creates the GKE cluster with Config Connector as an add-on, sets up a Config Connector service account with editor/IAM/DNS roles, and waits for the controller to be ready. This is a one-time step — subsequent deploys reuse the cluster.
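Enabling the add-on is only half the story: Config Connector is configured through a singleton ConfigConnector resource whose metadata.name is fixed by the operator. A sketch of the kind of resource the bootstrap step ends up applying (the service account name here is an assumption, not the script's literal value):

```yaml
# Singleton config for the Config Connector add-on; metadata.name is
# mandated by the operator. The googleServiceAccount is hypothetical.
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: cnrm-controller@<your-project>.iam.gserviceaccount.com
```

In cluster mode a single GCP service account (bound via Workload Identity) acts for all namespaces, which is why bootstrap grants it editor/IAM/DNS roles.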

“Build the Config Connector resources and deploy them.”

# Build both CC YAML and K8s manifests
npm run build
# Deploy Config Connector resources (~15 GCP resources)
npm run deploy-infra

Config Connector reconciles the GCP resources declaratively. The VPC, subnets, GKE node pool, service accounts, IAM bindings, and DNS zone are created and managed as K8s CRDs.
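For a sense of what "GCP resources as K8s CRDs" means in practice, here is a representative Config Connector manifest of the kind the GCP lexicon emits for the VPC. The resource name and field values are illustrative, not the example's literal output:

```yaml
# Illustrative Config Connector manifest for the VPC (name is hypothetical).
# Applying it creates the VPC; deleting it deletes the VPC.
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: microservice-vpc
spec:
  autoCreateSubnetworks: false
  routingMode: REGIONAL
```

Because the CRD and the GCP resource share a lifecycle, kubectl describe on the CRD shows reconciliation status and errors for the underlying resource.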

Step 2: Configure kubectl and load outputs

“Configure kubectl for the GKE cluster and load the deployment outputs into .env.”

# Update kubeconfig
npm run configure-kubectl
kubectl get nodes # verify — should show 3 nodes
# Write real SA emails from CC resources into .env
npm run load-outputs

The load-outputs target queries Config Connector resource statuses, extracts service account emails and the project ID, and writes them to .env. Bun auto-loads .env at runtime, so the next K8s build uses real values instead of placeholders.
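The mechanics can be sketched in a few lines of TypeScript. This is a hypothetical reduction of the load-outputs step, not the real script: the real target shells out to kubectl to read Config Connector statuses, while this sketch shows only the .env formatting logic, with made-up key names.

```typescript
// Hypothetical sketch of load-outputs: serialize values extracted from
// Config Connector resource statuses into .env lines. The kubectl queries
// that produce `outputs` are omitted; only the formatting is shown.
export function toEnvFile(outputs: Record<string, string>): string {
  return (
    Object.entries(outputs)
      .map(([key, value]) => `${key}=${value}`)
      .join("\n") + "\n"
  );
}

// Usage (values are made up):
// await Bun.write(".env", toEnvFile({
//   GCP_PROJECT_ID: "my-project",
//   APP_GSA_EMAIL: "app-sa@my-project.iam.gserviceaccount.com",
// }));
```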

“Rebuild the K8s manifests with real SA emails and deploy them.”

# Rebuild K8s manifests with real SA emails from .env
npm run build:k8s
# Validate against the live cluster (optional)
npm run validate
# Apply all K8s resources
npm run apply
# Wait for the app deployment to roll out
npm run wait

The K8s output includes ~20 resources across 5 files: namespace with quotas, autoscaled app deployment, GCE ingress with ExternalDNS, PD storage class, and Fluent Bit + OTel observability agents.
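As an illustration of the Workload Identity wiring on the Kubernetes side, the WorkloadIdentityServiceAccount composite boils down to an annotated ServiceAccount along these lines (the names are hypothetical, not the example's literal output):

```yaml
# Illustrative output of WorkloadIdentityServiceAccount (names hypothetical).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: microservice
  annotations:
    # Links this K8s SA to the GCP SA created by the infra build.
    # The email is the real value read from .env via src/config.ts.
    iam.gke.io/gcp-service-account: app-sa@<your-project>.iam.gserviceaccount.com
```

This annotation is why the load-outputs step matters: a placeholder email here would compile fine but break token exchange at runtime.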

“Check that everything is running.”

# Quick status check
npm run status
# Application pods (should show 2+ Running)
kubectl get pods -n microservice
# Ingress (GCE load balancer IP appears after 2-3 min)
kubectl get ingress -n microservice
# Observability agents
kubectl get daemonsets -A
# Test the endpoint (once the load balancer is ready)
LB_IP=$(kubectl get ingress -n microservice microservice-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w "%{http_code}" http://$LB_IP/
  • GCE load balancer takes 2-3 minutes to provision after kubectl apply. Check kubectl describe ingress for events if it’s slow.
  • Delete order matters — always delete K8s resources before Config Connector resources. If CC resources are deleted first, the ingress controller can’t clean up the load balancer, leaving orphaned GCP resources.
  • GKE ships metrics-server — HPA works out of the box. Do not deploy a separate MetricsServer composite.
  • Config Connector reconciliation — CC continuously reconciles resources. Deleting a CC CRD deletes the underlying GCP resource. Do not kubectl delete CC resources unless you intend to destroy the GCP infrastructure.
  • Workload Identity requires IAM binding — the GCP service account must have an IAM policy binding for roles/iam.workloadIdentityUser targeting the K8s service account.
  • K8s-only changes (app config, scaling, new workloads): edit source, run npm run build:k8s && npm run apply. No infra redeploy needed.
  • Infrastructure changes (node count, new SAs, new IAM bindings): edit source, run npm run build && npm run deploy-infra. Then run npm run load-outputs if outputs changed, followed by npm run build:k8s && npm run apply.
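On the GCP side, the Workload Identity binding called out in the gotchas above corresponds to a Config Connector IAMPolicyMember roughly like the following. All names are hypothetical; only the role and the member format are fixed by GKE:

```yaml
# Illustrative IAM binding for Workload Identity (names hypothetical).
# member format: serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: app-sa-workload-identity
spec:
  role: roles/iam.workloadIdentityUser
  member: serviceAccount:<your-project>.svc.id.goog[microservice/app-sa]
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: app-sa
```

If pods log "Unable to impersonate" or fall back to the node's default credentials, a missing or mistyped member string here is the usual cause.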
npm run teardown

This runs: kubectl delete K8s workloads → 30s drain wait → kubectl delete CC resources → delete CC service account → delete the GKE cluster.