# GCP GKE + Kubernetes
This tutorial walks you through deploying a production-grade GKE cluster with Kubernetes workloads, all defined in TypeScript using two chant lexicons. The GCP lexicon produces Config Connector YAML for infrastructure; the K8s lexicon produces kubectl-ready YAML for workloads.
## What you’ll deploy

```
┌─────────────────────────────────────────────────────────┐
│ GCP Lexicon (Config Connector) (~15 resources)          │
│ ├── VPC + Subnets + Cloud NAT                           │
│ ├── GKE Cluster + Node Pool (Workload Identity)         │
│ ├── 4× GCP Service Accounts                             │
│ ├── IAM Policy Members (WI bindings + role grants)      │
│ └── Cloud DNS Managed Zone                              │
└────────────────────┬────────────────────────────────────┘
                     │ SA emails via .env → config.ts
┌────────────────────▼────────────────────────────────────┐
│ K8s Lexicon (~20 resources)                             │
│ ├── Namespace (quotas, limits, network policy)          │
│ ├── AutoscaledService (Deployment + HPA + PDB)          │
│ ├── WorkloadIdentityServiceAccount (GKE)                │
│ ├── GceIngress + GkeExternalDnsAgent                    │
│ ├── GcePdStorageClass                                   │
│ ├── GkeFluentBitAgent (Cloud Logging)                   │
│ └── GkeOtelCollector (Cloud Trace + Monitoring)         │
└─────────────────────────────────────────────────────────┘
```

| Source file | What it produces |
|---|---|
| `src/infra/networking.ts` | VPC, 2 subnets, firewall rules, Cloud Router, Cloud NAT |
| `src/infra/cluster.ts` | GKE cluster, node pool, 4 GCP SAs, IAM bindings |
| `src/infra/dns.ts` | Cloud DNS ManagedZone |
| `src/k8s/namespace.ts` | Namespace with quotas, limits, default-deny policy |
| `src/k8s/app.ts` | AutoscaledService + WorkloadIdentityServiceAccount + ConfigMap |
| `src/k8s/ingress.ts` | GceIngress + GkeExternalDnsAgent |
| `src/k8s/storage.ts` | GcePdStorageClass |
| `src/k8s/observability.ts` | GkeFluentBitAgent + GkeOtelCollector |
| `src/config.ts` | Cross-lexicon config — reads `.env` for real SA emails |
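For a sense of the kind of manifest the K8s lexicon emits, a GCE persistent-disk StorageClass (what `src/k8s/storage.ts` produces) might look roughly like the following. This is an illustrative sketch built on the standard GKE PD CSI driver; the name and parameter choices are assumptions, not the lexicon's actual output:

```yaml
# Sketch — illustrative values; the real GcePdStorageClass output may differ.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-balanced                       # hypothetical name
provisioner: pd.csi.storage.gke.io        # GKE persistent-disk CSI driver
parameters:
  type: pd-balanced                       # pd-standard | pd-balanced | pd-ssd
volumeBindingMode: WaitForFirstConsumer   # bind only once a pod is scheduled
allowVolumeExpansion: true
```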
## Prerequisites

- GCP account with a project that has billing enabled
- `gcloud` CLI authenticated (`gcloud auth list` should show your account)
- `kubectl` installed
- Bun runtime and chant installed (Installation guide)
## Step 0: Bootstrap (one-time)

> “Bootstrap a GKE cluster with Config Connector enabled.”

```sh
cd examples/k8s-gke-microservice
export GCP_PROJECT_ID=<your-project>

# Creates GKE cluster, enables Config Connector, sets up Workload Identity
npm run bootstrap
```

This enables required APIs, creates the GKE cluster with Config Connector as an add-on, sets up a Config Connector service account with editor/IAM/DNS roles, and waits for the controller to be ready. This is a one-time step — subsequent deploys reuse the cluster.
## Step 1: Build and deploy infrastructure

> “Build the Config Connector resources and deploy them.”

```sh
# Build both CC YAML and K8s manifests
npm run build

# Deploy Config Connector resources (~15 GCP resources)
npm run deploy-infra
```

Config Connector reconciles the GCP resources declaratively. The VPC, subnets, GKE node pool, service accounts, IAM bindings, and DNS zone are created and managed as K8s CRDs.
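To make “managed as K8s CRDs” concrete: a Config Connector VPC resource (of the kind `src/infra/networking.ts` emits) looks roughly like this. A sketch only — the name and field values are assumptions, not the example's actual output:

```yaml
# Sketch of a Config Connector ComputeNetwork — values are illustrative.
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: microservice-vpc          # hypothetical name
spec:
  autoCreateSubnetworks: false    # custom-mode VPC; subnets declared separately
  routingMode: REGIONAL
```

Applying this with `kubectl` creates a real VPC in the project; the CC controller then keeps the GCP resource in sync with the CRD's spec.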
## Step 2: Configure kubectl and load outputs

> “Configure kubectl for the GKE cluster and load the deployment outputs into .env.”

```sh
# Update kubeconfig
npm run configure-kubectl
kubectl get nodes   # verify — should show 3 nodes

# Write real SA emails from CC resources into .env
npm run load-outputs
```

The `load-outputs` target queries Config Connector resource statuses, extracts the service account emails and the project ID, and writes them to `.env`. Bun auto-loads `.env` at runtime, so the next K8s build uses real values instead of placeholders.
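The exact keys depend on the example's scripts, but the result is a plain dotenv file along these lines (the variable names and values below are hypothetical):

```sh
# Hypothetical .env contents written by `npm run load-outputs`
GCP_PROJECT_ID=my-project
APP_GSA_EMAIL=app-sa@my-project.iam.gserviceaccount.com
```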
## Step 3: Deploy Kubernetes workloads

> “Rebuild the K8s manifests with real SA emails and deploy them.”

```sh
# Rebuild K8s manifests with real SA emails from .env
npm run build:k8s

# Validate against the live cluster (optional)
npm run validate

# Apply all K8s resources
npm run apply

# Wait for the app deployment to roll out
npm run wait
```

The K8s output includes ~20 resources across 5 files: namespace with quotas, autoscaled app deployment, GCE ingress with ExternalDNS, PD storage class, and Fluent Bit + OTel observability agents.
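On the Kubernetes side, the Workload Identity wiring boils down to one annotation on the ServiceAccount — this is what a `WorkloadIdentityServiceAccount` composite produces, sketched here with hypothetical names and the SA email loaded from `.env`:

```yaml
# Sketch — the annotation is the standard GKE Workload Identity link.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: microservice            # hypothetical KSA name
  namespace: microservice
  annotations:
    # Binds this K8s SA to the GCP SA it should impersonate
    iam.gke.io/gcp-service-account: app-sa@my-project.iam.gserviceaccount.com
```

Pods using this ServiceAccount obtain GCP credentials for the annotated service account, with no exported key files.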
## Step 4: Verify

> “Check that everything is running.”

```sh
# Quick status check
npm run status

# Application pods (should show 2+ Running)
kubectl get pods -n microservice

# Ingress (GCE load balancer IP appears after 2-3 min)
kubectl get ingress -n microservice

# Observability agents
kubectl get daemonsets -A

# Test the endpoint (once the load balancer is ready)
LB_IP=$(kubectl get ingress -n microservice microservice-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w "%{http_code}" http://$LB_IP/
```

## Gotchas
- GCE load balancer takes 2-3 minutes to provision after `kubectl apply`. Check `kubectl describe ingress` for events if it’s slow.
- Delete order matters — always delete K8s resources before Config Connector resources. If CC resources are deleted first, the ingress controller can’t clean up the load balancer, leaving orphaned GCP resources.
- GKE ships metrics-server — HPA works out of the box. Do not deploy a separate MetricsServer composite.
- Config Connector reconciliation — CC continuously reconciles resources. Deleting a CC CRD deletes the underlying GCP resource. Do not `kubectl delete` CC resources unless you intend to destroy the GCP infrastructure.
- Workload Identity requires an IAM binding — the GCP service account must have an IAM policy binding for `roles/iam.workloadIdentityUser` targeting the K8s service account.
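In Config Connector terms, that binding is an `IAMPolicyMember` along these lines (a sketch — the resource names, namespace, and project are illustrative):

```yaml
# Sketch of the Workload Identity binding as a CC IAMPolicyMember.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: app-sa-wi-binding       # hypothetical name
spec:
  role: roles/iam.workloadIdentityUser
  # The K8s SA, expressed as a Workload Identity principal:
  # serviceAccount:PROJECT.svc.id.goog[NAMESPACE/KSA_NAME]
  member: serviceAccount:my-project.svc.id.goog[microservice/microservice]
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: app-sa                # the GCP SA being impersonated
```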
## Subsequent deployments

- K8s-only changes (app config, scaling, new workloads): edit source, run `npm run build:k8s && npm run apply`. No infra redeploy needed.
- Infrastructure changes (node count, new SAs, new IAM bindings): edit source, run `npm run build && npm run deploy-infra`. Then run `npm run load-outputs` if outputs changed, followed by `npm run build:k8s && npm run apply`.
## Clean up

```sh
npm run teardown
```

This runs: kubectl delete K8s workloads → 30s drain wait → kubectl delete CC resources → delete CC service account → delete the GKE cluster.
## Further reading

- Kubernetes lexicon — resource reference, composites, examples
- GKE Composites — WorkloadIdentityServiceAccount, GceIngress, GkeGateway, and more
- GCP Config Connector lexicon — resource reference, composites, examples
- Deploying to GKE — GCP lexicon bridge page
- Agent Integration guide — using chant skills with Claude Code