AWS EKS + Kubernetes
This tutorial walks you through deploying a production-grade EKS cluster with Kubernetes workloads, all defined in TypeScript using two chant lexicons. The AWS lexicon produces a CloudFormation template for infrastructure; the K8s lexicon produces kubectl-ready YAML for workloads.
What you’ll deploy
```
┌─────────────────────────────────────┐
│ AWS CloudFormation (32 resources)   │
│  ┌──────────┐   ┌──────────────┐    │
│  │ VPC/Nets │   │ EKS Cluster  │    │
│  └──────────┘   └──────┬───────┘    │
│  ┌──────────┐          │            │
│  │ IAM Roles│ ←── OIDC Provider     │
│  └────┬─────┘                       │
│  ┌────┴─────────────────────┐       │
│  │ Add-ons: vpc-cni, ebs,   │       │
│  │ coredns, kube-proxy,     │       │
│  │ ALB controller           │       │
│  └──────────────────────────┘       │
└───────┬─────────────────────────────┘
        │ ARNs flow down via .env
┌───────▼─────────────────────────────┐
│ Kubernetes (28 resources)           │
│  ┌────────────┐  ┌──────────────┐   │
│  │ Namespace  │  │ IRSA SA      │   │
│  │ + Quotas   │  │ (role-arn)   │   │
│  └────────────┘  └──────────────┘   │
│  ┌────────────┐  ┌──────────────┐   │
│  │ Autoscaled │  │ ALB Ingress  │   │
│  │ Service    │  │ (cert-arn)   │   │
│  └────────────┘  └──────────────┘   │
│  ┌────────────┐  ┌──────────────┐   │
│  │ EBS Storage│  │ FluentBit    │   │
│  │ Class      │  │ + ADOT       │   │
│  └────────────┘  └──────────────┘   │
└─────────────────────────────────────┘
```

| Source file | What it produces |
|---|---|
| `src/infra/networking.ts` | VPC, subnets, IGW, NAT gateway |
| `src/infra/cluster.ts` | EKS cluster, node group, OIDC, 7 IAM roles |
| `src/infra/addons.ts` | 5 EKS add-ons (including ALB controller) |
| `src/infra/dns.ts` | Route53 hosted zone |
| `src/infra/params.ts` | CF parameters: environment, domain, public access CIDR |
| `src/k8s/namespace.ts` | Namespace with quotas, limits, default-deny policy |
| `src/k8s/app.ts` | Deployment + Service + HPA + PDB + IRSA SA + ConfigMap |
| `src/k8s/ingress.ts` | ALB Ingress + ExternalDNS agent |
| `src/k8s/storage.ts` | gp3 encrypted StorageClass |
| `src/k8s/observability.ts` | Fluent Bit (logging) + ADOT Collector (metrics) |
| `src/config.ts` | Cross-lexicon config; reads `.env` for real ARNs |
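The cross-lexicon wiring in `config.ts` follows a fallback pattern: real ARNs come from `.env` when present, placeholders otherwise. A minimal sketch of that pattern, with illustrative names (the real `config.ts` may differ; `ALB_CERT_ARN` is the only variable name confirmed elsewhere in this page):

```typescript
// Sketch of the cross-lexicon config pattern: read real ARNs from .env
// (written by `npm run load-outputs`), falling back to placeholders so the
// K8s build still succeeds before the CloudFormation stack exists.
// Field and variable names here are illustrative assumptions.
function envOr(name: string, placeholder: string): string {
  const value = process.env[name];
  return value && value.length > 0 ? value : placeholder;
}

export const config = {
  domain: envOr("DOMAIN", "myapp.example.com"),
  albCertArn: envOr(
    "ALB_CERT_ARN",
    "arn:aws:acm:REGION:ACCOUNT:certificate/PLACEHOLDER",
  ),
};
```

Because Bun auto-loads `.env`, the same build command picks up real values once the stack outputs have been exported.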
Prerequisites
- AWS account with permissions for CloudFormation, EKS, EC2, IAM, ELB, Route53, CloudWatch
- AWS CLI >= 2.x, configured (`aws sts get-caller-identity` should work)
- kubectl installed
- jq installed
- Bun runtime and chant installed (see the Installation guide)
- ACM certificate pre-created in the target region for your domain
- Route53 hosted zone for the domain (ExternalDNS writes records here)
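Missing CLI tools surface as confusing failures mid-deploy, so it can be worth checking up front. This is an optional convenience sketch, not part of the example repo:

```typescript
// Optional preflight check: verify the required CLI tools are on PATH
// before starting. Not part of the example repo; a convenience sketch only.
import { execSync } from "node:child_process";

function toolExists(cmd: string): boolean {
  try {
    // `command -v` exits non-zero (throwing here) when the tool is absent.
    execSync(`command -v ${cmd}`, { stdio: "ignore", shell: "/bin/sh" });
    return true;
  } catch {
    return false;
  }
}

for (const tool of ["aws", "kubectl", "jq", "bun"]) {
  console.log(`${tool}: ${toolExists(tool) ? "found" : "MISSING"}`);
}
```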
Step 1: Build and deploy infrastructure
“Build the CloudFormation template for the EKS microservice example and deploy it.”
```sh
cd examples/k8s-eks-microservice

# Build both CF template and K8s manifests
npm run build

# Deploy the CloudFormation stack (35 resources)
# Pass your domain and optionally restrict API endpoint access:
DOMAIN=myapp.example.com CIDR=203.0.113.1/32 npm run deploy-infra
```

The stack creates the VPC, EKS cluster, managed node group, OIDC provider, 8 IAM roles, and 4 EKS add-ons. The ALB controller add-on must be installed separately via Helm; see the chant-eks skill for guidance.
The `CIDR` env var restricts EKS API server public access to your IP. Omit it to allow access from any IP (`0.0.0.0/0`). In production, always restrict access to your corporate CIDR or use private-only endpoint access.
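A malformed CIDR only fails once CloudFormation validates the template, which wastes part of a 15-minute deploy. A rough local sanity check could look like this (illustrative only, not part of the example repo):

```typescript
// Rough sanity check for an IPv4 CIDR such as "203.0.113.1/32".
// Illustrative sketch only; not part of the example repo.
function isValidCidr(cidr: string): boolean {
  const match = /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\/(\d{1,2})$/.exec(cidr);
  if (!match) return false;
  const octets = match.slice(1, 5).map(Number);
  const prefix = Number(match[5]);
  // Every octet must be 0-255 and the prefix length 0-32.
  return octets.every((o) => o <= 255) && prefix <= 32;
}
```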
CloudFormation deployment takes ~15 minutes (EKS cluster creation is the bottleneck).
Step 2: Configure kubectl and load outputs
“Configure kubectl for the EKS cluster and load the stack outputs into .env.”
```sh
# Update kubeconfig
npm run configure-kubectl
kubectl get nodes   # verify: should show 3 nodes

# Write real ARNs from CF outputs into .env
npm run load-outputs
```

The load-outputs target queries `aws cloudformation describe-stacks`, extracts the IAM role ARNs (from stack outputs) and the domain name and ACM certificate ARN (from stack parameters), and writes them all to `.env`. Bun auto-loads `.env` at runtime, so the next K8s build uses real values instead of placeholders.
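The transformation load-outputs performs can be sketched as follows. The key names (`AppRoleArn`, etc.) and the camel-case-to-env-var convention are illustrative assumptions; the real script may use different keys:

```typescript
// Sketch of what `npm run load-outputs` does: turn the JSON returned by
// `aws cloudformation describe-stacks` into .env lines. Key names and the
// naming convention below are illustrative assumptions.
interface Stack {
  Outputs?: { OutputKey: string; OutputValue: string }[];
  Parameters?: { ParameterKey: string; ParameterValue: string }[];
}

// CamelCase -> SCREAMING_SNAKE_CASE, e.g. AppRoleArn -> APP_ROLE_ARN
function toEnvKey(key: string): string {
  return key.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toUpperCase();
}

function stackToEnv(stack: Stack): string {
  const lines: string[] = [];
  for (const o of stack.Outputs ?? []) {
    lines.push(`${toEnvKey(o.OutputKey)}=${o.OutputValue}`);
  }
  for (const p of stack.Parameters ?? []) {
    lines.push(`${toEnvKey(p.ParameterKey)}=${p.ParameterValue}`);
  }
  return lines.join("\n");
}
```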
Step 3: Deploy Kubernetes workloads
“Rebuild the K8s manifests with real ARNs and deploy them.”
```sh
# Rebuild K8s manifests with real ARNs from .env
npm run build:k8s

# Validate against the live cluster (optional)
npm run validate

# Apply all K8s resources
npm run apply

# Wait for the app deployment to roll out
npm run wait
```

The K8s output includes 28 resources across 5 files: a namespace with quotas, an autoscaled app deployment, an ALB ingress with ExternalDNS, an encrypted storage class, and Fluent Bit + ADOT observability agents.
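The IRSA link between the two halves is worth seeing concretely: the ServiceAccount the K8s build emits carries the IAM role ARN (loaded from `.env`) as the well-known EKS annotation. A sketch of that shape, with illustrative names (the lexicon's real output may differ):

```typescript
// Sketch of the IRSA wiring: a ServiceAccount manifest whose annotation
// links pods using it to an IAM role. The annotation key is the standard
// EKS one; the function and argument names are illustrative.
function irsaServiceAccount(name: string, namespace: string, roleArn: string) {
  return {
    apiVersion: "v1",
    kind: "ServiceAccount",
    metadata: {
      name,
      namespace,
      annotations: {
        // This annotation is what ties the pod's SA to the IAM role.
        "eks.amazonaws.com/role-arn": roleArn,
      },
    },
  };
}
```

This is why the build order matters: before `npm run load-outputs`, the annotation would carry a placeholder instead of a real ARN.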
Step 4: Verify
“Check that everything is running.”
```sh
# Quick status check
npm run status

# Application pods (should show 2+ Running)
kubectl get pods -n microservice

# Ingress (ALB address appears after 2-3 min)
kubectl get ingress -n microservice

# Observability agents
kubectl get daemonsets -n amazon-cloudwatch
kubectl get daemonsets -n amazon-metrics

# ALB controller (managed by EKS addon, runs in kube-system)
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

# Test the endpoint (once ALB DNS propagates)
ALB_DNS=$(kubectl get ingress -n microservice microservice-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "%{http_code}" https://$ALB_DNS/
```

Gotchas
- ALB takes 2-3 minutes to provision after `kubectl apply`. Check `kubectl describe ingress` for events if it’s slow.
- Delete order matters: always delete K8s resources before the CF stack. If you delete the CF stack first, the ALB controller addon is removed and can’t clean up the ALB, leaving orphaned resources.
- ACM certificate must match domain: the cert ARN in `config.ts` (or the `ALB_CERT_ARN` env var) must cover the domain used in the Ingress host rule.
- Health check path must match image: the ALB health check hits `/` because `nginx:stable` serves a 200 there. Custom apps need a matching health endpoint.
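The certificate/host mismatch gotcha can be caught before deploying. A rough sketch of wildcard coverage checking (simplified: real ACM certs can carry multiple SANs, and this checks only one domain with single-label wildcard matching, as TLS does):

```typescript
// Rough check that a certificate domain covers an Ingress host.
// Simplified illustration; real certs may have multiple SANs.
function certCovers(certDomain: string, host: string): boolean {
  if (certDomain === host) return true;
  if (certDomain.startsWith("*.")) {
    const suffix = certDomain.slice(2);
    const firstDot = host.indexOf(".");
    // A wildcard matches exactly one leading DNS label.
    return firstDot > 0 && host.slice(firstDot + 1) === suffix;
  }
  return false;
}
```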
Subsequent deployments
- K8s-only changes (app config, scaling, new workloads): edit source, then run `npm run build:k8s && npm run apply`. No infra redeploy needed.
- Infrastructure changes (node count, new IAM roles, new add-ons): edit source, then run `npm run build && npm run deploy-infra`. If outputs changed, run `npm run load-outputs`, followed by `npm run build:k8s && npm run apply`.
Clean up
```sh
npm run teardown
```

This runs: `kubectl delete` → 30s drain wait → `aws cloudformation delete-stack` → wait for completion.
Further reading
- Kubernetes lexicon: resource reference, composites, examples
- AWS CloudFormation lexicon: resource reference, composites, examples
- Composite Resources guide: how `AutoscaledService`, `AlbIngress`, and other composites work
- Agent Integration guide: using chant skills with Claude Code