
AWS EKS + Kubernetes

This tutorial walks you through deploying a production-grade EKS cluster with Kubernetes workloads, all defined in TypeScript using two chant lexicons. The AWS lexicon produces a CloudFormation template for infrastructure; the K8s lexicon produces kubectl-ready YAML for workloads.

┌─────────────────────────────────────┐
│ AWS CloudFormation (32 resources) │
│ ┌──────────┐ ┌──────────────┐ │
│ │ VPC/Nets │ │ EKS Cluster │ │
│ └──────────┘ └──────┬───────┘ │
│ ┌──────────┐ │ │
│ │ IAM Roles│ ←── OIDC Provider │
│ └────┬─────┘ │
│ ┌────┴─────────────────────┐ │
│ │ Add-ons: vpc-cni, ebs, │ │
│ │ coredns, kube-proxy, │ │
│ │ ALB controller │ │
│ └──────────────────────────┘ │
└───────┼─────────────────────────────┘
│ ARNs flow down via .env
┌───────▼─────────────────────────────┐
│ Kubernetes (28 resources) │
│ ┌────────────┐ ┌──────────────┐ │
│ │ Namespace │ │ IRSA SA │ │
│ │ + Quotas │ │ (role-arn) │ │
│ └────────────┘ └──────────────┘ │
│ ┌────────────┐ ┌──────────────┐ │
│ │ Autoscaled │ │ ALB Ingress │ │
│ │ Service │ │ (cert-arn) │ │
│ └────────────┘ └──────────────┘ │
│ ┌────────────┐ ┌──────────────┐ │
│ │ EBS Storage│ │ FluentBit │ │
│ │ Class │ │ + ADOT │ │
│ └────────────┘ └──────────────┘ │
└─────────────────────────────────────┘
Source file                   What it produces
src/infra/networking.ts       VPC, subnets, IGW, NAT gateway
src/infra/cluster.ts          EKS cluster, node group, OIDC, 7 IAM roles
src/infra/addons.ts           5 EKS add-ons (including ALB controller)
src/infra/dns.ts              Route53 hosted zone
src/infra/params.ts           CF parameters: environment, domain, public access CIDR
src/k8s/namespace.ts          Namespace with quotas, limits, default-deny policy
src/k8s/app.ts                Deployment + Service + HPA + PDB + IRSA SA + ConfigMap
src/k8s/ingress.ts            ALB Ingress + ExternalDNS agent
src/k8s/storage.ts            gp3 encrypted StorageClass
src/k8s/observability.ts      Fluent Bit (logging) + ADOT Collector (metrics)
src/config.ts                 Cross-lexicon config (reads .env for real ARNs)

Prerequisites
  • AWS account with permissions for CloudFormation, EKS, EC2, IAM, ELB, Route53, CloudWatch
  • AWS CLI >= 2.x, configured (aws sts get-caller-identity should work)
  • kubectl installed
  • jq installed
  • Bun runtime and chant installed (Installation guide)
  • ACM certificate pre-created in the target region for your domain
  • Route53 hosted zone for the domain (ExternalDNS writes records here)

Step 1: Build and deploy the infrastructure

“Build the CloudFormation template for the EKS microservice example and deploy it.”

Terminal window
cd examples/k8s-eks-microservice
# Build both CF template and K8s manifests
npm run build
# Deploy the CloudFormation stack (32 resources)
# Pass your domain and optionally restrict API endpoint access:
DOMAIN=myapp.example.com CIDR=203.0.113.1/32 npm run deploy-infra

The stack creates the VPC, EKS cluster, managed node group, OIDC provider, 7 IAM roles, and 5 EKS add-ons. The ALB controller ships as one of those add-ons and runs in kube-system, so no separate Helm install is needed.

The CIDR= env var restricts EKS API server public access to your IP. Omit it to allow access from any IP (0.0.0.0/0). In production, always restrict to your corporate CIDR or use private-only endpoint access.
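As a sketch of what this maps to underneath (assuming the lexicon emits standard AWS::EKS::Cluster properties; the actual chant API may differ), the CIDR value ends up in the cluster's public access CIDR list:

```typescript
// Illustrative only: how the CIDR env var could flow into the EKS
// cluster's CloudFormation VPC config. Property names follow the
// AWS::EKS::Cluster ResourcesVpcConfig schema; the chant lexicon's
// own API may look different.
const publicAccessCidr = process.env.CIDR ?? "0.0.0.0/0";

const resourcesVpcConfig = {
  EndpointPublicAccess: true,
  EndpointPrivateAccess: true,
  // Only requests from these CIDRs reach the public API endpoint
  PublicAccessCidrs: [publicAccessCidr],
};

console.log(JSON.stringify(resourcesVpcConfig, null, 2));
```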

CloudFormation deployment takes ~15 minutes (EKS cluster creation is the bottleneck).

Step 2: Configure kubectl and load outputs


“Configure kubectl for the EKS cluster and load the stack outputs into .env.”

Terminal window
# Update kubeconfig
npm run configure-kubectl
kubectl get nodes # verify — should show 3 nodes
# Write real ARNs from CF outputs into .env
npm run load-outputs

The load-outputs script runs aws cloudformation describe-stacks, extracts the IAM role ARNs (from stack outputs) plus the domain name and ACM certificate ARN (from stack parameters), and writes them all to .env. Bun auto-loads .env at runtime, so the next K8s build uses real values instead of placeholders.
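Conceptually, the transformation looks like this. The sample payload, key names, and the casing rule are illustrative, not the script's actual code:

```typescript
// Sketch of what `npm run load-outputs` does: turn CloudFormation
// outputs and parameters into KEY=value lines for .env.
type Stack = {
  Outputs: { OutputKey: string; OutputValue: string }[];
  Parameters: { ParameterKey: string; ParameterValue: string }[];
};

// Hypothetical describe-stacks payload, trimmed for illustration
const sample: Stack = {
  Outputs: [
    { OutputKey: "AppRoleArn", OutputValue: "arn:aws:iam::111122223333:role/app" },
  ],
  Parameters: [
    { ParameterKey: "DomainName", ParameterValue: "myapp.example.com" },
  ],
};

// SCREAMING_SNAKE_CASE the keys so Bun exposes them via process.env
const toEnvKey = (k: string) =>
  k.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toUpperCase();

const envLines = [
  ...sample.Outputs.map((o) => `${toEnvKey(o.OutputKey)}=${o.OutputValue}`),
  ...sample.Parameters.map((p) => `${toEnvKey(p.ParameterKey)}=${p.ParameterValue}`),
].join("\n");

console.log(envLines);
```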

Step 3: Deploy the Kubernetes workloads

“Rebuild the K8s manifests with real ARNs and deploy them.”

Terminal window
# Rebuild K8s manifests with real ARNs from .env
npm run build:k8s
# Validate against the live cluster (optional)
npm run validate
# Apply all K8s resources
npm run apply
# Wait for the app deployment to roll out
npm run wait

The K8s output includes 28 resources across 5 files: namespace with quotas, autoscaled app deployment, ALB ingress with ExternalDNS, encrypted storage class, and Fluent Bit + ADOT observability agents.
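For a sense of the IRSA wiring in that output, here is a minimal sketch of a generated ServiceAccount as a plain object. The names and env var are assumptions; the eks.amazonaws.com/role-arn annotation is the standard EKS IRSA mechanism:

```typescript
// The role ARN comes from .env after `npm run load-outputs`;
// APP_ROLE_ARN is a hypothetical key, with a placeholder fallback.
const appRoleArn =
  process.env.APP_ROLE_ARN ?? "arn:aws:iam::111122223333:role/app";

const serviceAccount = {
  apiVersion: "v1",
  kind: "ServiceAccount",
  metadata: {
    name: "microservice",      // assumed name
    namespace: "microservice",
    annotations: {
      // EKS's pod identity webhook reads this annotation and injects
      // AWS credentials for the role into pods using this SA
      "eks.amazonaws.com/role-arn": appRoleArn,
    },
  },
};

console.log(JSON.stringify(serviceAccount, null, 2));
```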

Step 4: Verify the deployment

“Check that everything is running.”

Terminal window
# Quick status check
npm run status
# Application pods (should show 2+ Running)
kubectl get pods -n microservice
# Ingress (ALB address appears after 2-3 min)
kubectl get ingress -n microservice
# Observability agents
kubectl get daemonsets -n amazon-cloudwatch
kubectl get daemonsets -n amazon-metrics
# ALB controller (managed by EKS addon, runs in kube-system)
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
# Test the endpoint (once the ALB DNS propagates)
ALB_DNS=$(kubectl get ingress -n microservice microservice-alb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# -k skips TLS verification: the ACM cert covers your domain, not the raw ALB hostname
curl -sk -o /dev/null -w "%{http_code}" https://$ALB_DNS/
  • ALB takes 2-3 minutes to provision after kubectl apply. Check kubectl describe ingress for events if it’s slow.
  • Delete order matters — always delete K8s resources before the CF stack. If you delete the CF stack first, the ALB controller addon is removed and can’t clean up the ALB, leaving orphaned resources.
  • ACM certificate must match domain — the cert ARN in config.ts (or ALB_CERT_ARN env var) must cover the domain used in the Ingress host rule.
  • Health check path must match image — the ALB health check hits / because nginx:stable serves a 200 there. Custom apps need a matching health endpoint.
  • K8s-only changes (app config, scaling, new workloads): edit source, run npm run build:k8s && npm run apply. No infra redeploy needed.
  • Infrastructure changes (node count, new IAM roles, new addons): edit source, run npm run build && npm run deploy-infra. Then run npm run load-outputs if outputs changed, followed by npm run build:k8s && npm run apply.
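On the health-check point above: the ALB health-check path and the pod's readiness probe should agree. A minimal sketch, where the path and port are assumptions for a custom app and the annotation is the standard AWS Load Balancer Controller one:

```typescript
// Keep one source of truth for the health path so the ALB target
// group and the kubelet probe can never drift apart.
const healthPath = "/healthz"; // assumed endpoint for a custom app

// Ingress annotation consumed by the AWS Load Balancer Controller
const ingressAnnotations = {
  "alb.ingress.kubernetes.io/healthcheck-path": healthPath,
};

// Readiness probe on the container, hitting the same path
const readinessProbe = {
  httpGet: { path: healthPath, port: 8080 }, // assumed container port
  initialDelaySeconds: 5,
  periodSeconds: 10,
};

console.log(JSON.stringify({ ingressAnnotations, readinessProbe }, null, 2));
```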
Terminal window
npm run teardown

This runs: kubectl delete → 30s drain wait → aws cloudformation delete-stack → wait for completion.