AWS ECS Fargate

Reference implementation for deploying the FDK server on AWS using ECS Fargate, RDS PostgreSQL, and S3. Terraform modules are published on the FSLabs module registry at tf.fsl.dev.

Architecture

  graph TD
    User(["User"])
    Browser["Web Browser"]
    CF["CloudFront<br/>(WASM Frontend)"]
    S3Static["S3<br/>(Static Assets)"]
    Clients["Desktop / CLI / WASM App"]
    OIDC["OIDC Provider"]
    R53["Route53<br/>(DNS)"]
    ALB["Application Load Balancer<br/>(Public Subnets)"]
    ECS["ECS Fargate<br/>(Private Subnets)"]
    RDS["RDS PostgreSQL<br/>(Private, Encrypted)"]
    S3Data["S3<br/>(Blobs, Large Data)"]
    User --> Browser
    User --> Clients
    Browser --> CF
    CF --> S3Static
    Browser --> OIDC
    Clients --> OIDC
    Clients --> R53
    R53 --> ALB
    ALB --> ECS
    ECS --> RDS
    ECS --> S3Data
    ECS -.-> |"token validation"| OIDC

Components

VPC

Multi-AZ layout with public and private subnets. NAT Gateway provides egress for private subnets. Public subnets host the ALB. Private subnets host ECS tasks and RDS.
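
As a purely illustrative sketch, a multi-AZ layout like this is often carved out of root_cidr with Terraform's cidrsubnet; the published module's actual offsets and subnet sizes may differ:

```hcl
# Hypothetical subnet carving from root_cidr = "10.0.0.0/16".
# The fslabs/vpc/aws module's real layout may use different offsets/masks.
locals {
  azs             = ["ap-southeast-2a", "ap-southeast-2b"]
  public_subnets  = [for i, az in local.azs : cidrsubnet(var.root_cidr, 8, i)]      # 10.0.0.0/24, 10.0.1.0/24
  private_subnets = [for i, az in local.azs : cidrsubnet(var.root_cidr, 8, i + 10)] # 10.0.10.0/24, 10.0.11.0/24
}
```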

RDS PostgreSQL

Placed in private subnets with no public access. Encrypted at rest via KMS. Credentials are generated automatically by the module. Default schema: public. Additional schemas (e.g., spatial) can be added via database_schemas.

DNS & TLS

Route53 hosted zone for the application domain. ACM wildcard certificate with DNS validation. Certificate renewal is automatic.

Load Balancer

Application Load Balancer deployed in public subnets. HTTP requests redirect to HTTPS. TLS terminates at the ALB using the ACM certificate.

S3 & CloudFront

The deployment uses S3 for two distinct purposes:

  • The WASM bucket (provisioned by Terraform) for frontend static assets
  • Data/blob buckets (created dynamically by the server at runtime) using the buckets_prefix for project data storage

Optional CloudFront distribution for static asset delivery. CORS is configured for the application domain.

WASM Client Applications

The wasm_clients variable maps subdomains to S3 folder paths. When configured, CloudFront is automatically enabled to serve the WASM frontend over HTTPS.

Variable      Description                   Example
wasm_clients  Map of subdomain → S3 folder  { "app" = "latest", "staging" = "next" }

Each entry creates a CloudFront distribution alias at <subdomain>.<dns_name> serving assets from the corresponding S3 folder. A CI IAM user is provisioned with write access to the S3 bucket for automated deployments.

Building the WASM Viewer

The viewer is configured via the FDK_SERVER_URL environment variable at build time:

cd apps/fdk_viewer
FDK_SERVER_URL=https://<your-server-domain> trunk build --release

The build output is in the dist/ directory. The server URL and auth configuration endpoint are compiled into the build artifacts.

A separate build is required for each environment (staging, production, etc.).

Manual Deployment

Build and upload the WASM application to S3:

# Build the WASM application
cd apps/fdk_viewer
FDK_SERVER_URL=https://<your-server-domain> trunk build --release

# Upload to S3 (folder matches your wasm_clients value)
aws s3 sync dist s3://<name>-wasm/latest/ --delete

CloudFront serves the assets at https://<subdomain>.<dns_name> once the distribution deploys (first deploy may take 10–15 minutes).

The WASM viewer requires these HTTP response headers for SharedArrayBuffer support:

  • Cross-Origin-Embedder-Policy: require-corp
  • Cross-Origin-Opener-Policy: same-origin

Configure these in your CloudFront response headers policy or reverse proxy.
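
As a quick post-deploy check, a small helper like the following (a sketch; the domain in the usage lines is a placeholder for your CloudFront alias) verifies both headers are present on a response:

```shell
# Check that a saved set of HTTP response headers includes the two
# policies required for SharedArrayBuffer. Returns non-zero if either
# header is missing or carries the wrong value.
check_sab_headers() {
  grep -qi '^cross-origin-embedder-policy: *require-corp' "$1" &&
  grep -qi '^cross-origin-opener-policy: *same-origin' "$1"
}

# Usage against a deployed viewer (placeholder domain):
#   curl -sI "https://app.<dns_name>/" > /tmp/headers.txt
#   check_sab_headers /tmp/headers.txt && echo "SAB headers present"
```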

CI/CD

Automate WASM client deployments using GitHub Actions. The S3 bucket name and CloudFront distribution ID are available as Terraform outputs: s3_wasm_bucket and cloudfront_distribution_id.

Using static IAM credentials:

The reference implementation provisions a CI IAM user with S3 write and CloudFront invalidation permissions. Store the Terraform outputs as repository secrets and variables.

- name: Deploy WASM client
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.CI_AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.CI_AWS_SECRET_ACCESS_KEY }}
    AWS_REGION: ap-southeast-2
  run: |
    cd apps/fdk_viewer
    FDK_SERVER_URL=${{ vars.FDK_SERVER_URL }} trunk build --release
    aws s3 sync dist s3://${{ vars.S3_BUCKET }}/latest/ --delete
    aws cloudfront create-invalidation --distribution-id ${{ vars.CF_DISTRIBUTION_ID }} --paths "/*"

Static IAM access keys do not expire and cannot be automatically rotated. For production deployments, use OIDC federation instead.

Using OIDC federation (recommended):

GitHub Actions supports OpenID Connect federation with AWS, eliminating the need for long-lived credentials. This requires an IAM OIDC identity provider and a role with a trust policy scoped to your repository.
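
A minimal sketch of that trust policy, assuming the GitHub OIDC provider (token.actions.githubusercontent.com) is already registered in the account; <account-id>, <org>, <repo>, and the role name are placeholders:

```shell
# Write a trust policy that lets GitHub Actions workflows from one
# repository assume the CI role via OIDC. All angle-bracket values
# are placeholders for your account and repository.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
        "StringLike":   { "token.actions.githubusercontent.com:sub": "repo:<org>/<repo>:*" }
      }
    }
  ]
}
EOF

# Create the role with this trust policy (run with admin credentials):
#   aws iam create-role --role-name <ci-role-name> \
#     --assume-role-policy-document file://trust-policy.json
```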

permissions:
  id-token: write
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::<account-id>:role/<ci-role-name>
      aws-region: ap-southeast-2

  - name: Deploy WASM client
    run: |
      cd apps/fdk_viewer
      FDK_SERVER_URL=${{ vars.FDK_SERVER_URL }} trunk build --release
      aws s3 sync dist s3://${{ vars.S3_BUCKET }}/latest/ --delete
      aws cloudfront create-invalidation --distribution-id ${{ vars.CF_DISTRIBUTION_ID }} --paths "/*"

With OIDC, AWS issues short-lived credentials per workflow run — no secrets to store or rotate. See the AWS documentation for setting up the identity provider and trust policy.

ECS Cluster & Service

Fargate launch type — no EC2 instances to manage. An optional init container can run database migrations before the server starts. Optional monitoring sidecar for metrics and logs (Alloy/OpenTelemetry). Logs ship to CloudWatch with 14-day retention. Health checks target /healthz on port 8080.

Bastion (Optional)

Private EC2 instance accessible via AWS Systems Manager Session Manager. No SSH or public IP required. Used for tunneling to RDS during debugging.
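
A tunnel to RDS through the bastion can be sketched as below, assuming the Session Manager plugin for the AWS CLI is installed; the instance ID, endpoint, and database names are placeholders:

```shell
# Port-forward local 5432 to the RDS endpoint through the bastion via SSM.
# <bastion-instance-id> and <rds-endpoint> are placeholders from your deployment.
PARAMS='{"host":["<rds-endpoint>"],"portNumber":["5432"],"localPortNumber":["5432"]}'

# Start the tunnel (run interactively):
#   aws ssm start-session \
#     --target <bastion-instance-id> \
#     --document-name AWS-StartPortForwardingSessionToRemoteHost \
#     --parameters "$PARAMS"

# Then, in another terminal, connect through the tunnel:
#   psql "host=127.0.0.1 port=5432 dbname=<db> user=<user> sslmode=require"
```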

Module Registry

All infrastructure modules are published on the FSLabs Terraform registry at tf.fsl.dev under the fslabs namespace. The reference implementation uses these modules:

Module                     Purpose
fslabs/vpc/aws             VPC with public/private subnets and NAT
fslabs/rds-instance/aws    Managed PostgreSQL instance
fslabs/rds-databases/aws   Database and user provisioning
fslabs/bastion/aws         Optional SSM-accessible bastion
fslabs/dns/aws             Route53 hosted zone
fslabs/ssl/aws             ACM certificate with DNS validation
fslabs/load-balancer/aws   Application Load Balancer
fslabs/s3/aws              S3 bucket with optional CloudFront
fslabs/iam-user/aws        CI/CD IAM user
fslabs/ecs-cluster/aws     ECS Fargate cluster
fslabs/ecs-service/aws     ECS service with task definition
fslabs/ecs-monitoring/aws  Monitoring sidecar (Alloy/OTel)
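
Modules are consumed with Terraform's standard registry source syntax (host, namespace, name, provider); the version constraint below is illustrative, not a published release number:

```hcl
# Illustrative: consuming a module from the private registry.
module "vpc" {
  source  = "tf.fsl.dev/fslabs/vpc/aws"
  version = "~> 1.0" # example constraint only

  # Module inputs come from the Configuration section, e.g.:
  # root_cidr = var.root_cidr
}
```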

Configuration

Variable                  Description                                                Example
name                      Application name prefix                                    "fdk-demo"
region                    AWS region                                                 "ap-southeast-2"
root_cidr                 VPC CIDR block                                             "10.0.0.0/16"
dns_name                  Root domain                                                "demo.fdk.fsl.dev"
host                      Subdomain for the server (optional)                        ""
docker_image              Container image repository                                 "ghcr.io/your-org/fdk_server"
docker_image_version      Image tag                                                  "0.3.0"
docker_registry_username  Registry credentials (if private)                          null
docker_registry_password  Registry credentials (if private)                          null
database_schemas          PostgreSQL schemas                                         ["public"]
with_bastion              Deploy bastion host                                        false
bastion_ami               AMI ID for bastion (x86_64)                                (region-specific)
buckets_prefix            S3 bucket name prefix for project data storage             "dag-tenant-"
environment               Environment name for tagging                               "production"
environment_variables     Additional environment variables for the server container  {}
monitoring_enabled        Enable Alloy monitoring sidecar                            false
wasm_clients              WASM client applications served via CloudFront             {}
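
Pulling the table's illustrative values together, a minimal terraform.tfvars might look like this (examples, not defaults):

```hcl
# Minimal terraform.tfvars using the illustrative values above.
name                 = "fdk-demo"
region               = "ap-southeast-2"
root_cidr            = "10.0.0.0/16"
dns_name             = "demo.fdk.fsl.dev"
docker_image         = "ghcr.io/your-org/fdk_server"
docker_image_version = "0.3.0"
database_schemas     = ["public"]
buckets_prefix       = "dag-tenant-"
environment          = "production"
wasm_clients         = { "app" = "latest" }
```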

Server Environment Variables

Variable                                    Description
DATABASE__HOST                              RDS endpoint (auto-configured)
DATABASE__PORT                              RDS port (auto-configured)
DATABASE__NAME                              Database name (auto-configured)
DATABASE__SSL_MODE                          TLS mode for database connection
DATABASE__AUTH__USERNAME                    Database user (auto-configured)
DATABASE__AUTH__PASSWORD                    Database password (secret, auto-configured)
STORAGE__BACKEND                            Storage backend type (s3)
STORAGE__REGION                             AWS region for S3
STORAGE__BUCKET_PREFIX                      S3 bucket name prefix
STORAGE__READER_ROLE_ARN                    IAM role ARN for S3 project data access (auto-configured)
AUTH__ISSUER                                OIDC issuer URL
AUTH__CLIENT_ID                             OIDC client ID
AUTH__REFRESH_URL                           OIDC token refresh endpoint (optional)
SENTRY_DSN                                  Sentry error tracking DSN (optional)
OBSERVABILITY__OTEL_EXPORTER_OTLP_ENDPOINT  OpenTelemetry collector endpoint (set automatically when monitoring sidecar is enabled)

Variables marked “auto-configured” are wired from module outputs in the reference implementation. Override them in the environment_variables map for custom setups.

The environment_variables map should include OIDC configuration:

  • AUTH__ISSUER — OIDC provider issuer URL
  • AUTH__CLIENT_ID — OIDC client ID
  • AUTH__REFRESH_URL — OIDC token refresh endpoint
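
For example, assuming a hypothetical OIDC provider at auth.example.com (substitute your own issuer, client ID, and refresh endpoint):

```hcl
# Illustrative OIDC wiring; the issuer and endpoints are hypothetical.
environment_variables = {
  AUTH__ISSUER      = "https://auth.example.com/realms/fdk"
  AUTH__CLIENT_ID   = "fdk-server"
  AUTH__REFRESH_URL = "https://auth.example.com/realms/fdk/protocol/openid-connect/token"
}
```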

Prerequisites

  • Terraform >= 1.10
  • AWS CLI v2
  • AWS credentials with permissions to create VPC, RDS, ECS, ALB, S3, Route53, ACM, IAM, and CloudWatch resources
  • A registered domain with the ability to delegate NS records to Route53
  • An OIDC identity provider (see Authorization)

Module Registry Access

The Terraform modules are hosted on the FSLabs registry at tf.fsl.dev. Authenticate using the Terraform CLI:

terraform login tf.fsl.dev

This opens your browser to authenticate via FSL OAuth2 — the same credentials as the crates registry.

Deployment

  1. Copy the reference implementation from the FDK repository.
  2. Configure terraform.tfvars with your values.
  3. Run terraform init — downloads modules from tf.fsl.dev.
  4. Run terraform plan and review the output.
  5. Run terraform apply.
  6. Delegate NS records from your parent DNS zone to the Route53 hosted zone.
  7. Wait for ACM certificate validation to complete.
  8. Verify the server is reachable at https://<host>.<dns_name> (or https://<dns_name> if host is empty).
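
Step 8's URL rule (host prefixed only when non-empty) can be sketched as a small helper; the domains in the usage lines are example values:

```shell
# Build the server URL from the host and dns_name variables, matching
# step 8: https://<host>.<dns_name>, or https://<dns_name> when host is "".
server_url() {
  host="$1"; dns_name="$2"
  if [ -n "$host" ]; then
    echo "https://$host.$dns_name"
  else
    echo "https://$dns_name"
  fi
}

# Post-deploy checks (example values; run once NS delegation has propagated):
#   dig +short NS demo.fdk.fsl.dev    # should list the Route53 name servers
#   curl -fsS "$(server_url "" demo.fdk.fsl.dev)/healthz"
```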