`pltf terraform` commands target the generated workspace, run the host `terraform` binary, and only use Dagger for building/pushing Docker images declared in the specs. This keeps toolchains transparent while still giving you predictable, k8s-native deployments.
Shared motivations
- Auto generation: Specs always render before Terraform runs. The CLI materializes `.tf`, `.tfvars`, backend config, and provider wiring for every env.
- Terraform runs locally with caching: `terraform init`/`plan`/`apply` execute inside `.pltf/<environment_name>/workspace` (env) or `.pltf/<environment_name>/<service_name>/workspace` (service), so Terraform’s native `.terraform` cache downloads providers there without extra wrappers.
- Pre-plan for summaries: `plan` always runs first (streaming tfsec, cost, and Rover logs if enabled); `apply`/`destroy` then execute directly with `-auto-approve`.
- Image builds via Dagger: plan/apply trigger Dagger builds of the declared images using shared caches; apply pushes the resulting tags with host registry creds, while destroy skips image builds altogether. `platforms` drives multi-arch builds and defaults to the host OS/ARCH.
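The workspace layout described above can be sketched as a small path helper (the `workspace_dir` name is hypothetical; the CLI's internals may differ):

```python
from pathlib import Path
from typing import Optional


def workspace_dir(env: str, service: Optional[str] = None) -> Path:
    """Resolve the rendered Terraform workspace for an environment,
    or for a single service within it, following the .pltf layout."""
    base = Path(".pltf") / env
    if service is not None:
        base = base / service  # service-scoped workspace
    return base / "workspace"
```

For example, `workspace_dir("staging")` resolves to `.pltf/staging/workspace`, while `workspace_dir("staging", "api")` resolves to `.pltf/staging/api/workspace`.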
Runtime steps
- Render the workspace (modules, `.tfvars`, backend, and provider config).
- Build images via Dagger (plan builds locally, apply builds + pushes, destroy skips this step).
- Run `terraform init` locally; provider plugins are cached inside the workspace and reuse host credentials (AWS, GCP, Azure, Docker).
- Run `terraform plan` (and apply/destroy) inside the workspace; the plan writes `.pltf-plan.tfplan`/`.json`, and `apply`/`destroy` proceed after the pre-plan step.
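The Terraform portion of this sequence can be illustrated by listing the invocations the CLI would run inside the workspace. This is a sketch of the documented flow, not the CLI's actual code; only standard Terraform flags are used:

```python
def terraform_commands(action: str) -> list:
    """Build the terraform invocations for a pltf action, mirroring the
    documented flow: init, then the pre-plan; apply/destroy then run
    directly with -auto-approve."""
    if action not in {"plan", "apply", "destroy"}:
        raise ValueError(f"unknown action: {action}")
    cmds = [
        ["terraform", "init"],
        # The pre-plan always runs and writes the plan artifact.
        ["terraform", "plan", "-out=.pltf-plan.tfplan"],
    ]
    if action != "plan":
        cmds.append(["terraform", action, "-auto-approve"])
    return cmds
```

Each inner list could be handed to a process runner with the workspace as its working directory.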
Observability hooks
- Write artifacts (`.pltf-plan.tfplan`, `.pltf-plan.json`) to the workspace so downstream CI or comment bots re-use the data.
- Enable tfsec (`--scan`), Infracost (`--cost`), and Rover (`--rover`) during plan runs; the CLI streams their output alongside Terraform logs.
- `pltf terraform graph` can reuse the generated workspace or an existing plan via `--plan-file`.
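As an example of consuming these artifacts, a downstream bot could summarize `.pltf-plan.json`, which is standard Terraform plan JSON with a `resource_changes` array (the summarizer itself is hypothetical):

```python
import json
from collections import Counter
from pathlib import Path


def summarize_plan(plan_json_path: str) -> Counter:
    """Count planned actions (create/update/delete) from a Terraform
    plan JSON file such as the .pltf-plan.json artifact."""
    plan = json.loads(Path(plan_json_path).read_text())
    actions = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action != "no-op":
                actions[action] += 1
    return actions
```

The resulting counts can be rendered into a PR comment alongside the tfsec and Infracost output.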
For full CLI reference see CLI Usage.