This walkthrough mirrors the checked-in samples (example/env.yaml, example/service.yaml) and takes you from zero to a working stack on AWS. You'll define an Environment, wire a Service into it, and then plan and apply the Terraform that pltf generates.
1) Prerequisites
- Terraform v1.5+ available locally or in CI
- AWS credentials (via aws configure, env vars, or your preferred provider auth)
- pltf installed (see Installation)
2) Define the Environment (VPC + EKS + DNS)
Use the sample example/env.yaml as-is or copy it to env.yaml:

```yaml
apiVersion: platform.io/v1
kind: Environment
metadata:
  name: example-aws
  org: pltf
  provider: aws
  labels:
    team: platform
    cost_center: shared
environments:
  prod:
    account: "556169302489"
    region: ap-northeast-1
    variables:
      base_domain: prod.pltf.internal
      cluster_name: pltf-data
modules:
  - id: base
    type: aws_base
  - id: dns
    type: aws_dns
    inputs:
      domain: ${{var.base_domain}}
      delegated: false
  - id: eks
    type: aws_eks
    inputs:
      cluster_name: "pltf-app-${layer_name}-${env_name}"
      k8s_version: 1.33
      enable_metrics: false
      max_nodes: 15
  - id: nodegroup1
    type: aws_nodegroup
    inputs:
      max_nodes: 15
      node_disk_size: 20
```
What this sets up:
- AWS account/region for prod
- VPC, subnets, security groups (aws_base)
- DNS zone (aws_dns) using base_domain
- EKS control plane and a nodegroup (aws_eks, aws_nodegroup)
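The environments map is keyed by environment name, and the --env flag used below selects one entry to render. Assuming the schema accepts multiple keys under environments (a reasonable reading, but verify against the pltf reference), a second environment could sit next to prod. This is an illustrative sketch: the staging name, account ID, and domain are placeholders, not part of the checked-in sample.

```yaml
environments:
  prod:
    account: "556169302489"
    region: ap-northeast-1
  # Hypothetical second environment -- account ID and domain are placeholders.
  staging:
    account: "000000000000"
    region: ap-northeast-1
    variables:
      base_domain: staging.pltf.internal
      cluster_name: pltf-data
```

You would then target it with the same commands, swapping --env prod for --env staging.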
Validate, plan, and apply:

```shell
pltf validate -f example/env.yaml
pltf terraform plan -f example/env.yaml --env prod
pltf terraform apply -f example/env.yaml --env prod
```
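Since the prerequisites allow running Terraform in CI, the same commands translate directly into a pipeline. A minimal GitHub Actions sketch, assuming pltf is already installed on the runner (its install step depends on your setup; see Installation) and AWS auth is configured separately:

```yaml
name: env-plan
on: [pull_request]
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "1.5.7"
      # AWS credentials and pltf installation are assumed to be handled by
      # earlier steps specific to your org (e.g., OIDC-based auth).
      - run: pltf validate -f example/env.yaml
      - run: pltf terraform plan -f example/env.yaml --env prod
```

Gating apply behind a merge to your main branch (a second job triggered on push) keeps plans reviewable in pull requests.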
3) Define the Service (DB + S3 + SNS/SQS + IAM)
Use the sample example/service.yaml:

```yaml
apiVersion: platform.io/v1
kind: Service
metadata:
  name: payments-api
  ref: ./env.yaml
envRef:
  prod:
    variables:
      db_name: "testing"
    secrets:
      api_key:
        key: api_key
modules:
  - id: postgres
    type: aws_postgres
    inputs:
      database_name: "${{var.db_name}}"
  - id: s3
    type: aws_s3
    inputs:
      bucket_name: "pltf-app-${layer_name}-${env_name}"
    links:
      readWrite:
        - adminpltfrole
        - userpltfrole
  - id: topic
    type: aws_sns
    inputs:
      sqs_subscribers:
        - "${{module.notifcationsQueue.queue_arn}}"
    links:
      read: adminpltfrole
  - id: notifcationsQueue
    type: aws_sqs
    inputs:
      fifo: false
    links:
      readWrite: adminpltfrole
  - id: schedulesQueue
    type: aws_sqs
    inputs:
      fifo: false
    links:
      readWrite: adminpltfrole
  - id: adminpltfrole
    type: aws_iam_role
    inputs:
      extra_iam_policies:
        - "arn:aws:iam::aws:policy/CloudWatchEventsFullAccess"
      allowed_k8s_services:
        - namespace: "*"
          service_name: "*"
  - id: userpltfrole
    type: aws_iam_role
    inputs:
      extra_iam_policies:
        - "arn:aws:iam::aws:policy/CloudWatchEventsFullAccess"
      allowed_k8s_services:
        - namespace: "*"
          service_name: "*"
  # Add helm chart modules
```

Note: the s3 module's links entry had two readWrite keys, which is invalid YAML (duplicate mapping keys); both roles are expressed here as a list under a single readWrite key.
What this sets up:
- A Postgres instance using a per-service DB name
- An S3 bucket named with layer_name/env_name
- SNS topic + two SQS queues wired as subscribers
- Two IAM roles used by the queue/bucket/topic links
- Variable and secret overrides scoped to prod
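The trailing comment in the sample leaves room for Helm chart modules. Purely as an illustrative sketch: the helm_chart module type, its inputs, and the role_arn output are assumptions, not confirmed pltf schema (only the ${{module.<id>.<output>}} reference pattern appears in the sample); check the module reference before using them.

```yaml
modules:
  # Hypothetical module -- "helm_chart", its inputs, and the role_arn output
  # are assumptions for illustration; consult the pltf module reference.
  - id: payments-chart
    type: helm_chart
    inputs:
      chart: ./charts/payments
      values:
        serviceAccount:
          roleArn: "${{module.userpltfrole.role_arn}}"
```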
Validate, plan, and apply:

```shell
pltf validate -f example/service.yaml --env prod
pltf terraform plan -f example/service.yaml --env prod
pltf terraform apply -f example/service.yaml --env prod
```

Inspect outputs/graphs:

```shell
pltf terraform output -f example/service.yaml --env prod
pltf terraform graph -f example/service.yaml --env prod | dot -Tpng > graph.png
```
4) Cleanup
Destroy the Service before the Environment, so resources that depend on the cluster and VPC are removed first:

```shell
pltf terraform destroy -f example/service.yaml --env prod
pltf terraform destroy -f example/env.yaml --env prod
```
5) Next steps
- Extend the service with Helm charts (e.g., Flyte) using the IAM roles and buckets you created.
- Add more modules (Redis, SES, DocumentDB) and wire them with links.
- Configure remote state backends and profiles to match your AWS org structure.