This repository contains the Terraform and Terragrunt infrastructure-as-code (IaC) for managing the EKB platform on AWS EKS, with stub modules for GKE, AKS, and bare-metal Kubernetes. It also includes all Helm chart deployments and environment configuration templates.
## Documentation

| Document | Description |
|---|---|
| Terragrunt Deployment Guide | Step-by-step guide for creating and deploying a new environment |
| AWS Architecture Overview | Architecture overview, component descriptions, and data flow |
| Disaster Recovery Strategy | DR strategy, RTO/RPO targets, backup systems, and recovery procedures |
| Prerequisites Checklist | Pre-deployment checklist to complete with the customer |
## Repository Structure
```text
ekb-terraform/
├── root.hcl                          # Root Terragrunt config (remote state, provider generation)
│
├── modules/                          # Reusable Terraform modules
│   ├── eks/                          # AWS EKS cluster, VPC, Karpenter, IAM, Helm releases
│   ├── aws-services/                 # ElastiCache Redis, Amazon MQ RabbitMQ
│   ├── helm/                         # Generic Helm release module
│   ├── state/                        # S3 state bucket bootstrap
│   ├── aks/                          # Azure AKS (stub)
│   ├── gke/                          # Google GKE (stub)
│   └── baremetal/                    # Bare-metal Kubernetes (stub)
│
├── terragrunt/
│   ├── .gitignore
│   └── environments/
│       ├── STATE_MANAGEMENT_README.md
│       └── env-template-folder/      # Template for new environments — copy and fill placeholders
│           ├── terragrunt.hcl        # Main environment config (cluster, Helm releases, AWS services)
│           ├── state/
│           │   └── terragrunt.hcl    # State bucket bootstrap for this environment
│           └── values/               # Per-chart Helm values (one file per chart)
│               ├── infrastructure.yaml        # AWS Load Balancer Controller
│               ├── aws-ebs-csi-driver.yaml    # EBS CSI Driver
│               ├── karpenter.yaml             # Karpenter NodePool
│               ├── karpenter-nodeclasses.yaml # Karpenter EC2NodeClass
│               ├── karpenter-values.yaml      # Karpenter controller values
│               ├── keda.yaml                  # KEDA autoscaler
│               ├── odin-services.yaml         # EKB application services
│               ├── cloudnative-pg.yaml        # CloudNativePG operator (ENABLE_CNPG)
│               ├── ha-supabase-db.yaml        # HA Postgres cluster via CNPG (ENABLE_HA_SUPABASE_DB)
│               ├── supabase.yaml              # Supabase application stack (ENABLE_SUPABASE)
│               ├── signoz.yaml                # SigNoz observability platform (ENABLE_SIGNOZ)
│               └── signoz-k8s-infra.yaml      # SigNoz k8s-infra metrics agent (ENABLE_SIGNOZ)
│
└── helm-deployment/                  # Vendored / local Helm charts
    ├── infrastructure/               # ALB Controller wrapper chart
    ├── odin-services/odin-services/  # EKB platform (Web, API, Celery, Automator, Ingress)
    ├── cloudnative-pg/               # CloudNativePG operator chart
    ├── ha-supabase-db/               # HA Supabase DB (CNPG Cluster + PgBouncer + barman backups)
    ├── supabase-kubernetes-ha/       # Full Supabase application stack
    ├── signoz/                       # SigNoz observability platform
    └── k8s-infra/                    # SigNoz k8s-infra cluster metrics agent
```
## Quick Start

### 1. Install Tools

**Terraform**

```bash
# macOS
brew install terraform

# Linux (Debian/Ubuntu — requires the HashiCorp apt repository)
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
```

**Terragrunt**

```bash
# macOS
brew install terragrunt

# Linux / macOS (manual)
sudo curl -Lo /usr/local/bin/terragrunt \
  https://github.com/gruntwork-io/terragrunt/releases/latest/download/terragrunt_linux_amd64
sudo chmod +x /usr/local/bin/terragrunt
```

**kubectl and Helm**

```bash
# macOS
brew install kubectl helm

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

**Verify installation**

```bash
terraform --version
terragrunt --version
kubectl version --client
helm version
aws sts get-caller-identity
```
### 2. Create a New Environment

New environments are created by copying `env-template-folder` and filling in the placeholders.
See the Terragrunt Deployment Guide for the full step-by-step process. The high-level flow is:

```bash
# 1. Copy the template
cp -r terragrunt/environments/env-template-folder terragrunt/environments/your-env-name

# 2. Fill all <YOUR_*> placeholders in:
#    - terragrunt/environments/your-env-name/terragrunt.hcl
#    - terragrunt/environments/your-env-name/state/terragrunt.hcl
#    - terragrunt/environments/your-env-name/values/*.yaml

# 3. Bootstrap the state bucket
cd terragrunt/environments/your-env-name/state
terragrunt apply

# 4. Set environment variables and deploy
cd ../
export ENABLE_ALB_CONTROLLER=true
export WEB_DOMAIN="app.example.com"
export WEB_CERTIFICATE_ARN="arn:aws:acm:<region>:<account>:certificate/<id>"
# ... (see TERRAGRUNT_DEPLOYMENT_GUIDE.md for the full variable list)

terragrunt apply
```
## Service Enable / Disable Flags

All optional services are toggled via environment variables. Set them before running `terragrunt apply`.

| Variable | Default | Service |
|---|---|---|
| `ENABLE_ALB_CONTROLLER` | `true` | AWS Load Balancer Controller |
| `ENABLE_AWS_SERVICES` | `false` | ElastiCache Redis + Amazon MQ RabbitMQ |
| `ENABLE_CNPG` | `false` | CloudNativePG operator |
| `ENABLE_HA_SUPABASE_DB` | `false` | HA PostgreSQL cluster (requires `ENABLE_CNPG=true`) |
| `ENABLE_SUPABASE` | `false` | Full Supabase stack (requires `ENABLE_HA_SUPABASE_DB=true`) |
| `ENABLE_SIGNOZ` | `false` | SigNoz observability + k8s-infra agent |
### Supabase Self-Hosted Deployment Order

Supabase components must be deployed in sequence:

```bash
ENABLE_CNPG=true terragrunt apply --target='helm_release.local["cloudnative-pg"]'
ENABLE_HA_SUPABASE_DB=true terragrunt apply --target='helm_release.local["ha-supabase-db"]'
ENABLE_SUPABASE=true terragrunt apply --target='helm_release.local["supabase"]'
```
## Helm Charts Reference

| Chart (in `helm-deployment/`) | Namespace | Enabled by | Description |
|---|---|---|---|
| `infrastructure` | `infrastructure` | Always | AWS Load Balancer Controller |
| `odin-services/odin-services` | `default` | Always | Web, FastAPI, Celery, Automator, Ingress |
| `aws-ebs-csi-driver` (upstream) | `kube-system` | Always | EBS persistent volume driver |
| `keda` (upstream) | `keda` | Always | Pod autoscaling |
| `cloudnative-pg` | `cnpg-system` | `ENABLE_CNPG` | PostgreSQL operator |
| `ha-supabase-db` | `ha-supabase-db` | `ENABLE_HA_SUPABASE_DB` | HA Postgres + PgBouncer + barman backups |
| `supabase-kubernetes-ha` | `supabase` | `ENABLE_SUPABASE` | Kong, Auth, Storage, Studio, Realtime |
| `signoz` | `monitoring` | `ENABLE_SIGNOZ` | Distributed tracing, metrics, logs |
| `k8s-infra` (upstream) | `monitoring` | `ENABLE_SIGNOZ` | Cluster metrics DaemonSet agent |
## Modules Reference

### modules/eks

The primary module. Provisions:

- VPC with public/private subnets across 3 AZs
- EKS cluster (Kubernetes 1.33) and managed node group for Karpenter
- Karpenter controller + NodePool + EC2NodeClass
- IAM roles (cluster, node group, Karpenter, ALB controller, EBS CSI driver)
- All Helm releases via the `helm_releases` input map
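As a sketch, the `helm_releases` map in an environment's `terragrunt.hcl` might look like the following. The chart names come from this repository, but the entry attributes (`chart`, `namespace`, `values_file`) are illustrative and may not match the module's actual schema:

```hcl
# Hypothetical shape of the helm_releases input passed to modules/eks
inputs = {
  helm_releases = {
    keda = {
      chart       = "keda"
      namespace   = "keda"
      values_file = "values/keda.yaml"
    }
    signoz = {
      chart       = "signoz"
      namespace   = "monitoring"
      values_file = "values/signoz.yaml"
    }
  }
}
```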
### modules/aws-services

Provisions ElastiCache Redis and Amazon MQ RabbitMQ. Only active when `ENABLE_AWS_SERVICES=true`.
### modules/state

Bootstraps the S3 state bucket for a new environment (versioning, encryption, public access block).
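The three bucket settings can be sketched with standard AWS provider resources; this is an illustration of what the module applies, not its actual implementation (resource names and the encryption algorithm are assumptions):

```hcl
resource "aws_s3_bucket" "state" {
  bucket = "ekb-terraform-state-${var.env_name}"
}

# Versioning: keep prior state versions for recovery
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Server-side encryption at rest
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Block all public access to the state bucket
resource "aws_s3_bucket_public_access_block" "state" {
  bucket                  = aws_s3_bucket.state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```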
### modules/helm

Reusable Helm release module supporting both upstream chart repositories and local `chart_path` charts.
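A rough sketch of the two usage modes follows. Only `chart_path` is named in this README; the other input names (`source`, `name`, `repository`, `chart`, `namespace`) are assumptions about the module's interface:

```hcl
# Mode 1: chart pulled from an upstream repository
module "keda" {
  source     = "../../modules/helm"
  name       = "keda"
  repository = "https://kedacore.github.io/charts"
  chart      = "keda"
  namespace  = "keda"
}

# Mode 2: vendored chart referenced by local path
module "odin_services" {
  source     = "../../modules/helm"
  name       = "odin-services"
  chart_path = "../../helm-deployment/odin-services/odin-services"
  namespace  = "default"
}
```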
## Environment Configuration

### Placeholders

Every environment-specific value in `env-template-folder` uses a `<YOUR_*>` placeholder. Run the following to confirm none are left before deploying:

```bash
grep -r "<YOUR_" terragrunt/environments/your-env-name/
```
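For CI, the same check can be wrapped so that a leftover placeholder fails the build. A minimal sketch (the `check_placeholders` helper is ours, not part of the repository):

```shell
# check_placeholders DIR — succeed only if no <YOUR_*> placeholders remain
check_placeholders() {
  if grep -rq "<YOUR_" "$1"; then
    echo "unfilled placeholders found in $1" >&2
    grep -rn "<YOUR_" "$1" >&2
    return 1
  fi
  echo "no placeholders remaining in $1"
}

# Demo against a throwaway directory with one unfilled placeholder
demo_dir="$(mktemp -d)"
echo 'region = "<YOUR_AWS_REGION>"' > "$demo_dir/terragrunt.hcl"
check_placeholders "$demo_dir" || echo "a CI pipeline would stop here"
rm -rf "$demo_dir"
```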
### Remote State

State is stored in S3 per environment. The bucket is bootstrapped by `state/terragrunt.hcl` and referenced by `root.hcl`. The naming pattern is `ekb-terraform-state-<env-name>`.
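The wiring in `root.hcl` presumably uses Terragrunt's `remote_state` block. A sketch under that assumption — only the bucket naming pattern is taken from this README; the region, key layout, and `local.env_name` reference are illustrative:

```hcl
# root.hcl (sketch) — S3 backend shared by all units in an environment
remote_state {
  backend = "s3"
  config = {
    bucket  = "ekb-terraform-state-${local.env_name}"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "eu-west-1"
    encrypt = true
  }
}
```

`path_relative_to_include()` gives each unit (e.g. `state/` vs the environment root) its own state key inside the shared bucket.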
### Secrets

Sensitive values (passwords, API keys, certificate ARNs) are never committed. They are passed either as environment variables consumed by `get_env()` calls in `terragrunt.hcl`, or as `<YOUR_*>` placeholders in values files that must be filled before deployment.
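For illustration, `get_env()` usage in `terragrunt.hcl` might look like this; the input names here are hypothetical, not the repository's actual variables:

```hcl
inputs = {
  # Required: Terragrunt errors out if the variable is unset
  web_certificate_arn = get_env("WEB_CERTIFICATE_ARN")

  # Optional: second argument supplies a fallback default
  redis_auth_token = get_env("REDIS_AUTH_TOKEN", "")
}
```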
## Troubleshooting

### Verify AWS credentials

```bash
aws sts get-caller-identity
```

### Check cluster access

```bash
aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get nodes
kubectl get pods -A
```

### Check ALB Controller

```bash
kubectl get pods -n infrastructure | grep aws-load-balancer-controller
kubectl get ingress -A
```

### Check Karpenter

```bash
kubectl get pods -n kube-system | grep karpenter
kubectl get nodepools
kubectl get nodeclaims
```

### Check state lock

```bash
# Force-unlock a stuck state (use the lock ID from the error message)
cd terragrunt/environments/your-env-name
terragrunt force-unlock <lock-id>
```

### Verify no remaining placeholders

```bash
grep -r "<YOUR_" terragrunt/environments/your-env-name/
```

### Check Helm release status

```bash
helm list -A
helm status <release-name> -n <namespace>
helm get values <release-name> -n <namespace>
```
## Multi-Cloud Support (Planned)

| Platform | Status | Module |
|---|---|---|
| AWS EKS | Active | `modules/eks` |
| Azure AKS | Stub | `modules/aks` |
| Google GKE | Stub | `modules/gke` |
| Bare metal | Stub | `modules/baremetal` |