
AWS EKS + Kubernetes

Two chant lexicons, one src/ directory: the AWS lexicon produces a CloudFormation template for the EKS cluster and IAM roles; the K8s lexicon produces kubectl-ready YAML for the workloads. IAM role ARNs flow between them via .env → config.ts.

The deploy is intentionally two-phase: ACM certificate DNS validation requires your domain’s NS records to be delegated before the cert can be issued. The example handles this gracefully — you can deploy and use the cluster without TLS, then add the cert later.

┌─────────────────────────────────────┐
│ AWS CloudFormation (35 resources)   │
│ VPC + EKS Cluster + OIDC Provider   │
│ 8 IAM Roles (IRSA per workload)     │
│ 4 EKS Add-ons + Route53 zone        │
└───────┬─────────────────────────────┘
        │ ARNs via .env → config.ts
┌───────▼─────────────────────────────┐
│ Kubernetes (36 resources)           │
│ Namespace + AutoscaledService       │
│ IrsaServiceAccount (role-arn ann.)  │
│ AlbIngress (cert-arn) + ExternalDNS │
│ EBS StorageClass + FluentBit + ADOT │
└─────────────────────────────────────┘
  • How IRSA (IAM Roles for Service Accounts) works: OIDC provider → trust policy → eks.amazonaws.com/role-arn annotation
  • Why the two-phase deploy exists: ACM cert validation requires NS delegation, which requires the Route53 zone to exist first
  • The cross-lexicon value flow: CF stack outputs → .env → config.ts → K8s composite props
  • EKS-specific nuances: ALB controller is installed as an EKS addon (not a K8s manifest), KMS envelope encryption for secrets
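The cross-lexicon value flow above can be sketched in plain TypeScript. This is a hypothetical reimplementation of what a config.ts might do, not the example's actual code; only `config.appRoleArn` appears in the source, and the env key names and `certArn` field are assumptions:

```typescript
// Hypothetical sketch: load-outputs writes CF stack outputs as KEY=value
// lines into .env; config.ts parses them into typed props for the K8s side.
const parseEnv = (text: string): Record<string, string> =>
  Object.fromEntries(
    text
      .split("\n")
      .filter((line) => line.includes("=") && !line.startsWith("#"))
      .map((line) => {
        const i = line.indexOf("=");
        return [line.slice(0, i).trim(), line.slice(i + 1).trim()];
      }),
  );

// Stand-in for reading the .env file that load-outputs produced.
const env = parseEnv(`
APP_ROLE_ARN=arn:aws:iam::123456789012:role/app-sa-role
CERT_ARN=arn:aws:acm:us-east-1:123456789012:certificate/abc
`);

export const config = {
  appRoleArn: env.APP_ROLE_ARN,
  certArn: env.CERT_ARN ?? "", // empty until deploy-cert has run
};
```

The point is that the K8s lexicon never talks to CloudFormation directly; it only sees whatever load-outputs last wrote to .env.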

EKS control plane ~$73/mo, NAT gateway ~$32/mo, ALB ~$16/mo, 3× t3.medium nodes ~$100/mo. Total ~$221/mo. Teardown after testing.

Deploy the k8s-eks-microservice example to AWS. My domain is myapp.example.com.

See examples/k8s-eks-microservice/ for the full README, deploy workflow, and teardown instructions.

Each workload gets its own IAM role with a trust policy that restricts AssumeRoleWithWebIdentity to a specific K8s service account:

// src/infra/cluster.ts — the IAM role for the app workload
export const appRole = new IAMRole({
  AssumeRolePolicyDocument: {
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: "sts:AssumeRoleWithWebIdentity",
      Principal: { Federated: Ref(oidcProviderArn) },
      Condition: {
        StringEquals: {
          [`${oidcIssuer}:sub`]: "system:serviceaccount:microservice:app-sa",
          [`${oidcIssuer}:aud`]: "sts.amazonaws.com",
        },
      },
    }],
  },
});

The K8s side just needs the annotation:

// src/k8s/app.ts — IrsaServiceAccount sets the annotation
export const appSa = IrsaServiceAccount({
  name: "app-sa",
  namespace: "microservice",
  iamRoleArn: config.appRoleArn, // from .env after load-outputs
});

The StringEquals condition block is critical — without it, any pod in any namespace with any SA name could assume the role using the cluster’s OIDC provider.
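What IrsaServiceAccount ultimately emits is an ordinary ServiceAccount manifest carrying the role-arn annotation. The helper below is a hypothetical reimplementation for illustration, not the example's actual code; the annotation key itself is the real one the EKS pod identity webhook matches against:

```typescript
// A minimal sketch of an IRSA service-account builder (assumed shape).
type ServiceAccountManifest = {
  apiVersion: "v1";
  kind: "ServiceAccount";
  metadata: {
    name: string;
    namespace: string;
    annotations: Record<string, string>;
  };
};

const irsaServiceAccount = (p: {
  name: string;
  namespace: string;
  iamRoleArn: string;
}): ServiceAccountManifest => ({
  apiVersion: "v1",
  kind: "ServiceAccount",
  metadata: {
    name: p.name,
    namespace: p.namespace,
    // The annotation EKS uses to inject web-identity credentials:
    annotations: { "eks.amazonaws.com/role-arn": p.iamRoleArn },
  },
});

const sa = irsaServiceAccount({
  name: "app-sa",
  namespace: "microservice",
  iamRoleArn: "arn:aws:iam::123456789012:role/app-sa-role",
});
```

Note how the namespace and name here ("microservice", "app-sa") are exactly the values pinned by the trust policy's `sub` condition above; if either drifts, AssumeRoleWithWebIdentity fails.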

The deploy sequence has a deliberate pause between infrastructure and workloads:

1. deploy-infra → creates Route53 hosted zone → prints nameservers
2. [manual] Update NS records at your registrar
3. deploy-cert → ACM requests cert → creates validation CNAME → waits for ISSUED
4. load-outputs → writes cert ARN to .env
5. build:k8s && apply → AlbIngress gets the real cert ARN

Without NS delegation, ACM’s DNS validation CNAME can’t be resolved and the cert stays in PENDING_VALIDATION indefinitely. The example’s npm run deploy skips steps 2-4 so you can verify the cluster first; add the cert later with npm run deploy-cert.
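One way the cert-optional behavior could be modeled is a listener list driven by whether config holds a cert ARN yet. This is a sketch under assumptions; `albListeners` and the listener shape are invented names, not the example's AlbIngress API:

```typescript
// Hypothetical: HTTPS listener only appears once deploy-cert + load-outputs
// have put a cert ARN into config; the first no-TLS deploy serves HTTP only.
type Listener = { port: number; protocol: "HTTP" | "HTTPS"; certArn?: string };

const albListeners = (certArn: string): Listener[] =>
  certArn
    ? [
        { port: 80, protocol: "HTTP" },
        { port: 443, protocol: "HTTPS", certArn },
      ]
    : [{ port: 80, protocol: "HTTP" }];

const beforeCert = albListeners("");
const afterCert = albListeners("arn:aws:acm:us-east-1:123456789012:certificate/abc");
```

Rebuilding and re-applying the K8s manifests after load-outputs is then enough to flip the ALB to HTTPS; no infrastructure change is needed on the AWS side.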

Always delete K8s resources before the CloudFormation stack. The ALB controller (installed as an EKS addon) must be running to clean up the ALB it provisioned. Deleting the CF stack first removes the addon → controller → ALB becomes orphaned.

npm run teardown # handles the ordering: kubectl delete → drain → cfn delete-stack