
Fargate + Lucene/Solr + EFS

Three CloudFormation stacks wire together a VPC, an EFS-backed Solr service on Fargate, and Lambda connectors that index data from DynamoDB and S3. A fourth stack — a local Docker Compose file generated by chant — mirrors the same data flow so you can develop and smoke-test queries before deploying to AWS.

┌───────────────────────────────────────────────────────────┐
│ fargate-lucene-efs-infra                                  │
│ VPC · ALB · ECS cluster · EFS filesystem + access point   │
└────────────────────────┬──────────────────────────────────┘
                         │ outputs: clusterArn, listenerArn, efsId, …
┌────────────────────────▼──────────────────────────────────┐
│ fargate-lucene-efs-solr                                   │
│ Fargate task (solr:9.8.0) — /var/solr on EFS access point │
└────────────────────────┬──────────────────────────────────┘
                         │ outputs: solrUrl, collection
┌────────────────────────▼──────────────────────────────────┐
│ fargate-lucene-efs-connectors                             │
│ DynamoDB table + Lambda → Solr                            │
│ S3 bucket + Lambda → Solr                                 │
└───────────────────────────────────────────────────────────┘
Local mirror (lexicons/aws/examples/fargate-lucene-efs/docker/):
DynamoDB Local ──[stream poll]──► relay-dynamo ──┐
MinIO ─────────[webhook]─────────► relay-minio ──┴──► solr:8983

Source layout:

| Stack         | Source                                               |
| ------------- | ---------------------------------------------------- |
| Infra         | lexicons/aws/examples/fargate-lucene-efs/infra/      |
| Solr service  | lexicons/aws/examples/fargate-lucene-efs/solr/       |
| Connectors    | lexicons/aws/examples/fargate-lucene-efs/connectors/ |
| Local compose | lexicons/aws/examples/fargate-lucene-efs/docker/     |
This example demonstrates:

  • How EFS persistent storage connects to a Fargate task — TaskDefinition_EFSVolumeConfiguration, access point POSIX ownership, and the AmazonElasticFileSystemClientReadWriteAccess managed policy
  • How Lambda inline code wires DynamoDB Streams and S3 event notifications into Solr without any build or packaging step
  • Why the three stacks deploy in a fixed order and how outputs flow between them as CloudFormation parameters
  • How to use the Docker lexicon to produce a docker-compose.yml plus companion Dockerfiles from a single chant build invocation
To run the local mirror:
cd lexicons/aws/examples/fargate-lucene-efs
just up && just -f docker/justfile verify

Paste this prompt to Claude Code in the repo root:

Deploy the fargate-lucene-efs example to AWS.
My AWS region is us-east-1.

The agent handles the full sequence:

  • Build — runs chant build in each stack, producing three CloudFormation templates
  • Deploy — deploys infra → solr → connectors in order, reading each stack’s outputs and passing them as --parameter-overrides to the next
  • Verify — polls the ALB health check and confirms Solr is reachable at /solr/
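The verify step can also be reproduced by hand. A minimal polling sketch, assuming only that the ALB serves Solr's UI at /solr/ (function name and timings are illustrative, not part of the example):

```python
import time
import urllib.error
import urllib.request

def wait_for_solr(base_url: str, timeout: float = 300, interval: float = 10) -> bool:
    """Poll until Solr behind the ALB answers 200, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/solr/", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # ALB target not healthy yet; keep retrying
        time.sleep(interval)
    return False
```

A generous timeout matters here: the first Fargate task can take a few minutes to pull the image, mount EFS, and pass the ALB health check.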
To tear everything down, paste this prompt:

Tear down the fargate-lucene-efs stacks in us-east-1.

Or manually, in reverse order:

cd lexicons/aws/examples/fargate-lucene-efs && just teardown-all

Solr’s index lives at /var/solr inside the container. The EFS access point is created with POSIX UID/GID 8983 (the solr user) so the container can write without running as root:

lexicons/aws/examples/fargate-lucene-efs/infra/src/efs.ts
export const { securityGroup: efsSecurityGroup, fs, accessPoint } = EfsWithAccessPoint({
  name: Sub`${AWS.StackName}-${Ref(appName)}`,
  vpcId: network.vpc.VpcId,
  uid: "8983", // ← POSIX owner for /solr-data (the solr user)
  gid: "8983",
  rootPath: "/solr-data",
});

The Fargate task mounts the filesystem via the efsMounts prop on FargateService:

lexicons/aws/examples/fargate-lucene-efs/solr/src/solr.ts
export const solr = FargateService({
  // … cluster, listener, networking …
  image: Ref(solrImage),
  containerPort: 8983,
  command: ["solr-precreate", Ref(solrCollection)],
  efsMounts: [{ fileSystemId: Ref(efsId), accessPointId: Ref(accessPointId), containerPath: "/var/solr" }],
  autoscaling: { minCapacity: 1, maxCapacity: 6, cpuTarget: 60 },
});

Lambda inline code for zero-packaging connectors


Both connectors use Python inline code embedded directly in the CloudFormation template. No build step, no Lambda layer, no S3 upload — the entire handler fits in a ZipFile string:

lexicons/aws/examples/fargate-lucene-efs/connectors/src/sources.ts
export const {
  table: productsTable,
  func: productsRelayFn,
  eventSourceMapping: productsEsm,
} = LambdaDynamoDB({
  name: Sub`${AWS.StackName}-${Ref(appName)}-products-relay`,
  partitionKey: "id",
  access: "None",
  streams: { startingPosition: "TRIM_HORIZON", bisectOnFunctionError: true },
  Runtime: "python3.12",
  Handler: "index.handler",
  Code: { ZipFile: dynamoRelayCode }, // ← inline Python string
  Environment: { Variables: { SOLR_URL: solrUrl, COLLECTION: solrCollection } },
});
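The handler string itself is not reproduced here, but its shape is simple: flatten each stream record into a Solr document and POST the batch to the update endpoint. A hypothetical sketch of such a relay — the actual dynamoRelayCode string may differ, and the attribute flattening below is simplified:

```python
import json
import os
import urllib.request

def to_solr_doc(record: dict) -> dict:
    """Flatten a DynamoDB Streams NewImage ({"id": {"S": "p1"}, …}) into a flat doc."""
    image = record["dynamodb"].get("NewImage", {})
    # Each attribute is a single-key dict like {"S": "p1"} or {"N": "42"};
    # take the inner value and drop the type tag.
    return {key: next(iter(attr.values())) for key, attr in image.items()}

def handler(event, context):
    solr_url = os.environ["SOLR_URL"]
    collection = os.environ["COLLECTION"]
    docs = [to_solr_doc(r) for r in event["Records"] if r["eventName"] in ("INSERT", "MODIFY")]
    if not docs:
        return {"indexed": 0}
    req = urllib.request.Request(
        f"{solr_url}/{collection}/update?commit=true",
        data=json.dumps(docs).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on non-2xx, so a failed POST fails the invocation
    return {"indexed": len(docs)}
```

Sticking to the standard library is what makes the zero-packaging approach work: urllib needs no dependencies, so nothing has to be bundled or uploaded.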

Local relay daemons mirror the Lambda logic


relay-dynamo.ts and relay-minio.ts in lexicons/aws/examples/fargate-lucene-efs/docker/ implement the same logic as the Lambda handlers but as long-running daemons (TypeScript, run with bun). The Dockerfiles are generated by chant from src/relays.ts alongside the compose file:

lexicons/aws/examples/fargate-lucene-efs/docker/src/relays.ts
export const dynamoRelayDockerfile = new Dockerfile({
  stages: [{
    from: RELAY_BASE,
    workdir: "/app",
    run: ["bun add @aws-sdk/client-dynamodb@^3 @aws-sdk/client-dynamodb-streams@^3"],
    copy: ["relay-dynamo.ts ."],
    cmd: `["bun", "relay-dynamo.ts"]`,
  }],
});

export const dynamoRelay = new Service({
  build: { context: "..", dockerfile: "dist/Dockerfile.dynamoRelayDockerfile" },
  environment: { DYNAMO_ENDPOINT: "http://dynamo:8000", SOLR_URL: "http://solr:8983/solr", … },
});

The stacks are not independent — each one reads outputs from the previous as CloudFormation parameters. If you deploy connectors before solr, the solrUrl parameter won’t exist yet.

infra outputs → solr params (clusterArn, listenerArn, vpcId, efsId, accessPointId, …)
solr outputs → connectors params (solrUrl, collection)
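The handoff can be sketched as a small helper that turns the Outputs array from `aws cloudformation describe-stacks` into `--parameter-overrides` pairs. The function and the key-to-parameter mapping below are illustrative, not part of the example:

```python
def outputs_to_overrides(outputs: list, mapping: dict) -> str:
    """Map one stack's outputs to the next stack's parameter names.

    `outputs` is the Outputs array from `aws cloudformation describe-stacks`;
    `mapping` links an upstream OutputKey to the downstream ParameterKey.
    """
    by_key = {o["OutputKey"]: o["OutputValue"] for o in outputs}
    return " ".join(f"{param}={by_key[out]}" for out, param in mapping.items())

# e.g. feeding the solr stack's outputs into the connectors stack
# (assumed output values and parameter names for illustration):
solr_outputs = [
    {"OutputKey": "solrUrl", "OutputValue": "http://alb.example/solr"},
    {"OutputKey": "collection", "OutputValue": "products"},
]
overrides = outputs_to_overrides(solr_outputs, {"solrUrl": "solrUrl", "collection": "solrCollection"})
```

The resulting string is exactly what `aws cloudformation deploy --parameter-overrides` expects, which is why the agent can chain the three deploys mechanically.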