What Is Terraform?
Life Cycle
CLI
Writing Terraform Configuration
Using Terraform with AWS
Understanding the terraform init Command
Understanding the terraform Block
Understanding the provider Block
Understanding the resource Block
Understanding the variable Block
Understanding the output Block
Understanding the locals Block
Understanding the import Block
Understanding the module Block
Where to Put Each Terraform Block & Recommended File Structure
Introduction
Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp.
It allows you to define, provision, and manage infrastructure
across multiple cloud providers using a declarative configuration language (HCL — HashiCorp Configuration Language).
Instead of clicking in cloud dashboards, you write code describing your infrastructure — Terraform takes care of creating, updating, and deleting resources safely.
Key Characteristics
You describe what you want, not how to do it.
Supports many providers:
AWS, Azure, Google Cloud
Alibaba Cloud, Oracle Cloud
Kubernetes
GitHub, Cloudflare, Datadog, etc.
Safe execution plan via terraform plan.
Manages dependencies between resources automatically.
Uses a state file to track real-world infrastructure.
Works well in CI/CD pipelines.
Terraform Workflow
Terraform follows a simple but powerful 4-step workflow:
| Step | Description | Command |
| --- | --- | --- |
| 1. Write | Write .tf files describing infrastructure. | — |
| 2. Init | Download providers and initialize the project. | terraform init |
| 3. Plan | Show what Terraform will create, modify, or destroy. | terraform plan |
| 4. Apply | Perform the actual infrastructure change. | terraform apply |
A Simple Terraform Example
This example creates an AWS EC2 instance.
provider "aws" {
region = "eu-central-1"
}
resource "aws_instance" "example" {
ami = "ami-12345678"
instance_type = "t2.micro"
}
provider block tells Terraform which cloud provider to use.
resource block describes an actual cloud object.
aws_instance.example becomes a managed resource in the Terraform state.
Terraform State
Terraform stores infrastructure details in a file called terraform.tfstate.
This file tracks:
resource IDs
connections between resources
current configuration applied
This allows Terraform to detect changes and update resources safely.
In teams, the state is usually stored remotely (S3, Azure Blob, Terraform Cloud).
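For example, a minimal S3 remote-state configuration might look like this (bucket name and key are placeholders):
terraform {
  backend "s3" {
    bucket = "my-team-tf-state"
    key    = "project/terraform.tfstate"
    region = "eu-central-1"
  }
}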
Terraform Language (HCL)
Terraform uses HCL (HashiCorp Configuration Language), which is:
easy to read
block-based
supports variables, loops, functions, expressions
Examples of HCL blocks:
variable "project_name" {
type = string
}
output "instance_id" {
value = aws_instance.example.id
}
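Loops and functions are expressions too; a small illustrative sketch (names are hypothetical):
locals {
  regions       = ["eu-central-1", "us-east-1"]
  upper_regions = [for r in local.regions : upper(r)]  # ["EU-CENTRAL-1", "US-EAST-1"]
  name          = join("-", ["demo", "app"])           # "demo-app"
}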
Common Terraform Terminology
| Term | Description |
| --- | --- |
| provider | Plugin to manage a service (e.g., AWS, Azure). |
| resource | A cloud object (VM, network, DB, etc.). |
| data source | Read existing infrastructure. |
| module | Reusable grouping of Terraform files. |
| state | The file storing real-world infrastructure details. |
| plan | Preview of actions before applying changes. |
| apply | Run the actual changes. |
Introduction
Every Terraform resource has a lifecycle — a set of rules controlling how Terraform creates, updates, and destroys it.
You can use the lifecycle block inside a resource to modify Terraform's default behavior.
The lifecycle Block Structure
resource "aws_instance" "example" {
ami = "ami-123456"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = false
prevent_destroy = false
ignore_changes = []
}
}
The lifecycle block works inside resource definitions only.
It modifies how Terraform handles changes, updates, and destruction.
create_before_destroy
Normally Terraform does:
destroy the old resource
then create a new one
create_before_destroy = true reverses this order:
create new resource first
destroy the old one after
Useful when:
you want zero downtime
your resource cannot be deleted before a replacement exists
lifecycle {
create_before_destroy = true
}
prevent_destroy
Protects a resource from accidental deletion.
If Terraform needs to destroy it, the apply will abort with an error.
Commonly used for:
production databases
S3 buckets containing critical data
shared networks
lifecycle {
prevent_destroy = true
}
To override, you must temporarily remove prevent_destroy (or set it to false) in the configuration; Terraform refuses to destroy the resource while it is set, even with -target.
ignore_changes
Tells Terraform to ignore drift on certain fields.
Useful when some fields are updated by:
cloud providers
external scripts
manually modified settings
Prevents Terraform from trying to "fix" changes you don’t want overwritten.
lifecycle {
ignore_changes = [
tags,
metadata,
]
}
Supports:
single attribute
multiple attributes
ignore_changes = all (dangerous!)
Full Lifecycle Example
resource "aws_instance" "server" {
ami = "ami-123456"
instance_type = "t3.micro"
lifecycle {
create_before_destroy = true
prevent_destroy = false
ignore_changes = [
tags["last_updated"],
user_data,
]
}
}
This resource:
gets replaced without downtime
can be destroyed normally
does not react to changes in user_data or tags.last_updated
Lifecycle vs Depends On
Lifecycle provides behavior customization.
depends_on enforces ordering between resources.
They work together but serve different purposes.
resource "aws_eip" "ip" {
depends_on = [aws_instance.server]
}
Summary
| Setting | Description | Common Use Case |
| --- | --- | --- |
| create_before_destroy | Create replacement before destroying old resource | Zero-downtime deployments |
| prevent_destroy | Protect resource from deletion | Critical databases, S3 buckets |
| ignore_changes | Ignore drift on specific attributes | Provider-controlled fields, timestamps |
Introduction
The Terraform CLI is the primary way developers interact with Terraform.
All Terraform workflows rely on the CLI, especially when working locally or inside CI/CD pipelines.
Basic Command Structure
$ terraform <command> [options]
$ terraform apply -auto-approve
Essential Terraform CLI Commands
| Command | Description | Usage |
| --- | --- | --- |
| terraform init | Initializes a working directory | Download providers and set up backend |
| terraform plan | Show proposed changes | Preview before apply |
| terraform apply | Apply the plan (create/update/destroy) | Deploy infrastructure |
| terraform destroy | Destroy managed resources | Clean up infrastructure |
| terraform validate | Check configuration syntax | Catch errors before running plan |
| terraform fmt | Format .tf files | Enforce consistent style |
| terraform providers | Show used providers | Dependency inspection |
| terraform version | Show Terraform version | Debug/build info |
terraform init
Must be run once per project (or when providers/backends change).
Downloads provider plugins.
Initializes backend for state storage.
Sets up module dependencies.
$ terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
terraform plan
Shows what Terraform would do.
No resources are changed.
Output includes:
additions
changes
destructions
$ terraform plan
Plan: 1 to add, 0 to change, 0 to destroy.
terraform apply
Executes the plan and modifies infrastructure.
Default behavior requires user confirmation.
Use -auto-approve in automation.
$ terraform apply
Do you want to perform these actions?
Enter a value: yes
terraform destroy
Destroys all resources managed by Terraform.
Useful for cleaning up dev environments.
Confirmation required unless -auto-approve is used.
$ terraform destroy -auto-approve
terraform validate
Checks configuration syntax.
Verifies that the files are internally consistent.
Does not check cloud provider availability.
$ terraform validate
Success! The configuration is valid.
terraform fmt
Formats all Terraform files (.tf / .tfvars) to standard style.
Keeps code clean in team environments.
$ terraform fmt
terraform show
Displays the current state or a saved plan.
Useful for debugging and documentation.
$ terraform show
# aws_instance.example:
resource "aws_instance" "example" {
ami = "ami-123456"
instance_type = "t2.micro"
}
terraform graph
Outputs a dependency graph in DOT format (Graphviz).
Useful for visualizing relationships between resources.
$ terraform graph | dot -Tpng > graph.png
terraform state Commands
The terraform state family manages state files.
| Command | Description |
| --- | --- |
| state list | List resources in state |
| state show | Show details for a resource |
| state pull | Download remote state |
| state push | Upload state manually |
| state rm | Remove a resource from state |
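For example (resource addresses are illustrative):
$ terraform state list
aws_instance.example
aws_s3_bucket.logs
$ terraform state show aws_instance.example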
terraform import
Add existing real-world resources to Terraform state.
Useful when migrating to Terraform-managed infrastructure.
$ terraform import aws_instance.example i-1234567890abcdef0
Useful CLI Options
-auto-approve # Skip yes/no prompt
-refresh=false # Skip refreshing state
-target=<address> # Apply a specific resource
-var name=value # Pass variable
-var-file=file.tfvars
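These options can be combined. For example (file name and resource address are placeholders):
$ terraform apply -var-file=prod.tfvars -target=aws_instance.web -auto-approve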
Summary of Terraform CLI
| Category | Main Commands |
| --- | --- |
| Setup | init, fmt, validate |
| Execution | plan, apply, destroy |
| State Management | state, show, list, pull, push |
| Migration | import |
| Debugging | graph, providers, version |
What Does Terraform Configuration Mean?
A Terraform configuration is a collection of .tf files that describe your desired infrastructure.
Terraform uses a declarative syntax called HCL (HashiCorp Configuration Language).
Configurations describe:
providers (AWS, Azure, GCP, Kubernetes, …),
resources (servers, networks, buckets, etc.),
variables and outputs,
modules,
data sources,
state backend settings.
Terraform reads your .tf files, builds a desired-state graph, and then applies it to real cloud infrastructure.
Terraform Files and Directory Structure
A typical Terraform directory might look like:
.
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
Terraform automatically loads every *.tf file in the working directory.
You don’t need to import them manually; Terraform merges them internally.
The terraform Block
This block configures Terraform itself, especially:
required providers,
backend (where state is stored),
required terraform version.
terraform {
required_version = ">= 1.5"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "my-tf-state"
key = "prod/terraform.tfstate"
region = "eu-central-1"
}
}
The backend configuration (S3 example) is optional; it is mainly needed when collaborating in teams.
Provider Configuration
A provider tells Terraform which cloud platform or service to interact with.
Example for AWS:
provider "aws" {
region = "eu-central-1"
}
Credentials can be set via:
environment variables (AWS_ACCESS_KEY_ID)
AWS CLI config
shared credentials files
instance roles
Do not hardcode secrets inside .tf files.
Resources: The Core of Terraform
A resource is an infrastructure object Terraform manages.
resource "aws_instance" "my_server" {
ami = "ami-05f7491af5eef733a"
instance_type = "t2.micro"
tags = {
Name = "DemoServer"
}
}
The identifier aws_instance.my_server becomes a referenceable object inside your configuration.
You can reference attributes using interpolation:
resource "aws_eip" "my_ip" {
instance = aws_instance.my_server.id
}
Terraform automatically builds dependency graphs based on references.
Variables
Variables make your configuration reusable and configurable.
Defined using:
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t2.micro"
}
instance_type = var.instance_type
Values can be passed via:
terraform.tfvars
-var command flag
environment variables (TF_VAR_...)
Variable Files (.tfvars)
Used to provide external input values:
instance_type = "t3.small"
bucket_name = "prod-assets-2025"
Terraform automatically loads:
terraform.tfvars
*.auto.tfvars
Other files require manual loading:
terraform apply -var-file=prod.tfvars
Outputs
Outputs allow Terraform to display useful information after apply:
output "public_ip" {
description = "Public IP of the instance"
value = aws_instance.my_server.public_ip
}
terraform output
terraform output public_ip
Locals
Local values are internal configuration shortcuts.
locals {
tags = {
Environment = "production"
Owner = "Junzhe"
}
}
resource "aws_s3_bucket" "bucket" {
bucket = "my-demo-bucket-2025"
tags = local.tags
}
Data Sources
Data sources let Terraform query existing resources.
Example: fetch the latest AMI ID:
data "aws_ami" "latest_amazon_linux" {
owners = ["amazon"]
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
resource "aws_instance" "server" {
ami = data.aws_ami.latest_amazon_linux.id
instance_type = "t2.micro"
}
Data sources are read-only.
Modules
Modules allow you to group and reuse resources.
Example using a public module:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.1"
name = "prod-vpc"
cidr = "10.0.0.0/16"
}
You can also write your own module:
modules/
└── ec2/
├── main.tf
├── variables.tf
└── outputs.tf
Expressions and Interpolation Syntax
Terraform uses ${ ... } for interpolation inside strings; outside strings, modern style uses direct references:
ami = data.aws_ami.latest.id
instance_type = var.instance_type
tags = local.tags
A few built-in functions (these can be tried interactively with terraform console):
timestamp()                        # current UTC timestamp
upper("hello")                     # "HELLO"
join("-", ["prod", "eu", "vpc"])   # "prod-eu-vpc"
Lifecycle Rules
Used for fine control over create/update/destroy behavior.
resource "aws_instance" "server" {
ami = var.ami
instance_type = "t2.micro"
lifecycle {
prevent_destroy = true
}
}
Other lifecycle options:
create_before_destroy
ignore_changes
Putting Everything Together: A Complete Example
This small configuration provisions:
An EC2 instance
An elastic IP
Outputs its public IP
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "eu-central-1"
}
variable "instance_type" {
default = "t2.micro"
}
resource "aws_instance" "vm" {
ami = "ami-05f7491af5eef733a"
instance_type = var.instance_type
tags = {
Name = "ExampleVM"
}
}
resource "aws_eip" "ip" {
instance = aws_instance.vm.id
}
output "public_ip" {
value = aws_eip.ip.public_ip
}
terraform init
terraform plan
terraform apply
What Is AWS Terraform?
Terraform can manage nearly all AWS resources using the AWS Provider.
You write .tf files that describe what AWS infrastructure you want:
EC2 instances
VPCs and subnets
IAM users and roles
S3 buckets
RDS databases
Lambda functions
CloudWatch alarms
Load balancers
Terraform then:
calls AWS APIs via the provider,
creates/updates/destroys resources,
tracks them in its state file.
Prerequisites for AWS Terraform
Before using Terraform with AWS, you must have:
an AWS account,
an IAM user with enough permissions,
AWS credentials saved locally.
Store credentials safely using the AWS CLI:
aws configure
Typical files where credentials go:
~/.aws/credentials
~/.aws/config
Never store AWS secret keys directly in Terraform files.
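Alternatively, provide credentials via environment variables (values are placeholders):
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="eu-central-1"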
AWS Provider Block
The AWS provider connects Terraform to AWS.
Basic configuration:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "eu-central-1"
}
Terraform loads credentials automatically from environment variables, the AWS config file, or IAM roles (if on EC2).
Deploying Your First AWS Resource: S3 Bucket
S3 buckets are one of the simplest AWS resources to create:
resource "aws_s3_bucket" "example" {
bucket = "junzhe-demo-bucket-12345"
tags = {
Purpose = "TerraformIntro"
}
}
terraform init
terraform apply
You now created your first AWS resource using Terraform.
Creating an EC2 Instance with Terraform
EC2 deployment includes:
an AMI (machine image)
instance type
optional key pair
security group
resource "aws_instance" "web" {
ami = "ami-05f7491af5eef733a"
instance_type = "t2.micro"
tags = {
Name = "TerraformWebServer"
}
}
Show the server’s public IP:
output "public_ip" {
value = aws_instance.web.public_ip
}
Using Data Sources (AWS Example)
AWS has many dynamic values (latest AMIs, VPC IDs, etc.).
Data sources allow Terraform to look up existing AWS infrastructure.
data "aws_ami" "amazon_linux" {
owners = ["amazon"]
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
resource "aws_instance" "server" {
ami = data.aws_ami.amazon_linux.id
instance_type = "t2.micro"
}
Using a data source ensures your AMI is always up to date.
Managing AWS Networking (VPC, Subnets, SG)
A typical AWS Terraform setup includes:
VPC
subnets
route tables
security groups
A minimal example:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
}
resource "aws_security_group" "web_sg" {
name = "web-sg"
description = "Allow HTTP"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
This is the foundation for most AWS architectures.
IAM with Terraform (Users, Roles, Policies)
IAM is fully manageable through Terraform.
Create a new user:
resource "aws_iam_user" "developer" {
name = "junzhe-dev"
}
resource "aws_iam_access_key" "developer_key" {
user = aws_iam_user.developer.name
}
output "dev_access_key" {
value = aws_iam_access_key.developer_key.id
}
Never output IAM secrets in production environments.
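If a secret must be output at all, mark it as sensitive so Terraform masks it in CLI logs. A minimal sketch based on the example above:
output "dev_secret_key" {
  value     = aws_iam_access_key.developer_key.secret
  sensitive = true
}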
S3 + IAM Example (Realistic Use Case)
This example creates a bucket and a policy that grants read/write access.
resource "aws_s3_bucket" "app_bucket" {
bucket = "terraform-app-bucket-2025"
}
data "aws_iam_policy_document" "rw_access" {
statement {
actions = ["s3:*"]
resources = [
aws_s3_bucket.app_bucket.arn,
"${aws_s3_bucket.app_bucket.arn}/*"
]
}
}
resource "aws_iam_policy" "app_bucket_policy" {
name = "AppBucketRW"
policy = data.aws_iam_policy_document.rw_access.json
}
Using AWS Modules
The easiest way to build AWS infrastructure is by using well-maintained Terraform Registry modules.
Example: AWS VPC module
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.1.0"
name = "production-vpc"
cidr = "10.1.0.0/16"
azs = ["eu-central-1a", "eu-central-1b"]
public_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
private_subnets = ["10.1.3.0/24", "10.1.4.0/24"]
}
Modules drastically reduce effort and errors.
Remote State Backend for AWS Projects (Recommended)
For team projects, store Terraform state in S3 and lock with DynamoDB:
terraform {
backend "s3" {
bucket = "my-terraform-state-prod"
key = "global/state.tfstate"
region = "eu-central-1"
dynamodb_table = "terraform-locks"
encrypt = true
}
}
This prevents:
conflicting changes,
lost state files,
corrupted environments.
Common AWS Terraform Patterns
Almost every AWS Terraform project involves:
using data sources to fetch AMIs
creating VPC + subnets
security groups for inbound/outbound rules
EC2 instances or ECS/Lambda
IAM roles for services
S3 buckets for artifacts
CloudWatch for monitoring
Route53 for DNS
These patterns can scale from tiny personal projects to large enterprise deployments.
What Does terraform init Do?
terraform init is the first command you run in any new or cloned Terraform project directory.
Its main responsibilities:
Initialize the working directory as a Terraform project
Download and install required providers (e.g. AWS, Azure, Google)
Download modules defined in your configuration
Initialize and configure the backend (where the state is stored)
Create internal metadata like .terraform/ and .terraform.lock.hcl
Typical workflow:
terraform init
terraform plan
terraform apply
If you skip terraform init, plan and apply will fail because providers/backends are not set up.
What Happens in the Working Directory?
After running terraform init, Terraform creates/updates:
.terraform/ — internal directory for:
downloaded provider plugins
downloaded modules
backend metadata
.terraform.lock.hcl — dependency lockfile (provider versions, etc.)
This makes your project:
reproducible (same provider versions)
ready to run plans/applies
Basic Usage: terraform init
Inside a folder containing *.tf files:
cd my-terraform-project
terraform init
Terraform then:
Scans your .tf files for:
required_providers
terraform block
backend configuration
module blocks
Downloads missing providers
Fetches modules from:
Terraform Registry
Git repositories
local paths
Configures backend (local or remote state)
You usually run this:
once when starting a new project
again when:
changing providers/backends
upgrading provider versions
adding new modules
How terraform init Handles Providers
Example configuration (main.tf):
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
When you run:
terraform init
Terraform will:
Look up provider "hashicorp/aws" in the Terraform Registry
Download a version compatible with ~> 5.0
Store it under .terraform/providers/...
Record the exact version in .terraform.lock.hcl
This ensures every developer on the project uses exactly the same provider versions.
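A lock file entry looks roughly like this (version and hash are illustrative):
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:...",  # integrity checksums recorded by terraform init
  ]
}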
How terraform init Handles Modules
module "network" {
source = "terraform-aws-modules/vpc/aws"
version = "5.0.0"
name = "example-vpc"
cidr = "10.0.0.0/16"
}
On terraform init, Terraform will:
Locate module terraform-aws-modules/vpc/aws in the Registry
Download it into .terraform/modules/
Use that local copy when running plan/apply
If you update version or source, re-run terraform init to fetch the new module version.
Backends and terraform init
The backend defines where your Terraform state (terraform.tfstate) is stored.
Example local backend (default):
terraform {
backend "local" {
path = "terraform.tfstate"
}
}
Example remote backend (Amazon S3):
terraform {
backend "s3" {
bucket = "my-terraform-state-bucket"
key = "envs/prod/terraform.tfstate"
region = "eu-central-1"
}
}
When you run terraform init with a backend block:
Terraform configures the backend
It may prompt you to migrate existing local state into the remote backend
It writes backend metadata into .terraform/
Useful Options for terraform init
1. -upgrade — Upgrade providers and modules
terraform init -upgrade
What it does:
Checks for newer versions of required providers (within your version constraints)
Redownloads modules to the latest acceptable versions
Updates .terraform.lock.hcl
2. -backend-config — Override backend settings
Instead of putting secrets in .tf files, you can pass them via CLI:
terraform init \
-backend-config="bucket=my-tf-state" \
-backend-config="region=eu-central-1"
Each -backend-config argument sets/overrides a backend argument.
3. -reconfigure — Force backend reconfiguration
terraform init -reconfigure
Use when:
you changed backend type or remote config
you want Terraform to forget previous backend settings and ask again
4. -migrate-state — Move state between backends
terraform init -migrate-state
Use when you:
switch from local → remote backend (e.g., local → S3)
move between two different remote backends
Terraform will ask for confirmation before physically moving the state file.
5. -from-module — Initialize a new project from a module
terraform init -from-module=git::https://github.com/example/infra-module.git
What it does:
Clones/copies the specified module into the current directory
Then initializes providers and submodules as usual
Idempotency: Re-running terraform init
It is safe to run terraform init multiple times.
On repeated runs, Terraform will:
Reuse already-downloaded providers (unless -upgrade is set)
Reuse modules (again, unless -upgrade)
Verify backend configuration
You should re-run init when:
cloning a repo for the first time
changing provider versions
changing backend configs
adding/removing modules
Putting It All Together: Example Session
Imagine you have this setup in main.tf:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "my-tf-state-bucket"
key = "dev/terraform.tfstate"
region = "eu-central-1"
}
}
Typical usage:
# 1. Initialize project, providers, modules, backend
terraform init
# 2. (Later) change versions or modules, then upgrade
terraform init -upgrade
# 3. (Later) migrate from local backend to S3
terraform init -migrate-state -reconfigure
This sequence is extremely common in real-world Terraform projects.
What Is the terraform Block?
The terraform block is a special top-level configuration block in Terraform that controls:
required provider versions
required Terraform CLI version
backend configuration (state storage)
module behavior
dependency lockfile rules
It does not provision infrastructure itself. It only configures Terraform’s behavior.
This block is normally placed at the top of main.tf.
The General Structure of the terraform Block
terraform {
required_version = "~> 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "my-terraform-state"
key = "prod/terraform.tfstate"
region = "eu-central-1"
}
}
This block lets Terraform know:
What version of Terraform is allowed
Which providers are required
Where to store the state
required_version — Terraform CLI Version Constraint
Ensures compatibility across team members and CI pipelines.
Example:
terraform {
required_version = ">= 1.4.0, < 1.6.0"
}
What it means:
Terraform must be at least version 1.4.0
Terraform must be < 1.6.0
If you run an incompatible version, Terraform will refuse to continue.
required_providers — Provider Requirements
This section tells Terraform which providers your configuration needs.
This also defines:
provider source (where to download it from)
provider version constraints
Example:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = ">= 3.5.0"
}
}
}
Explanation:
The aws provider must be downloaded from the HashiCorp registry.
Its version must satisfy ~> 5.0, which means >= 5.0.0 and < 6.0.0.
The random provider must be at least version 3.5.0.
These rules will be saved in .terraform.lock.hcl after terraform init.
backend — Terraform State Storage
Terraform stores its state (terraform.tfstate) externally using backends.
Backends DO NOT support interpolation. Everything must be hard-coded or supplied via -backend-config.
Example (S3 backend):
terraform {
backend "s3" {
bucket = "my-tf-state"
key = "dev/state.tfstate"
region = "eu-central-1"
}
}
What it does:
Stores state in S3 bucket
Key identifies the file path inside the bucket
Supports locking when combined with DynamoDB
You can override backend values:
terraform init \
-backend-config="bucket=my-other-bucket" \
-backend-config="key=new-key.tfstate"
Less Common but Useful Settings
provider_meta (rare)
terraform {
provider_meta "aws" {
module_name = "custom-aws-module"
}
}
experiments (very rare)
terraform {
experiments = [module_variable_optional_attrs]
}
cloud block (Terraform Cloud/Enterprise)
terraform {
cloud {
organization = "mycompany"
workspaces {
name = "production"
}
}
}
This integrates Terraform Cloud as the state backend + runs.
Putting It All Together — Complete Example
terraform {
required_version = "~> 1.5"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.20"
}
}
backend "s3" {
bucket = "my-terraform-state-bucket"
key = "envs/prod/state.tfstate"
region = "eu-central-1"
encrypt = true
}
}
This prepares Terraform to:
Use Terraform CLI 1.5.x
Download AWS and Kubernetes providers
Store state remotely in S3
Use encryption for the state file
What Is the provider Block?
The provider block configures a specific cloud/vendor provider used by Terraform.
A provider is responsible for:
connecting Terraform to an external system (AWS, Azure, GCP, Kubernetes, etc.)
authenticating
defining regions/endpoints
making API requests
creating/updating/deleting resources
Every resource (like aws_instance, google_compute_network, kubernetes_deployment) requires a provider.
The provider block tells Terraform how to configure that provider (not which version—that belongs in the terraform block).
The Structure of a provider Block
provider "aws" {
region = "eu-central-1"
access_key = "AKIA..."
secret_key = "SECRET..."
}
Important points:
provider "aws" → refers to the AWS provider
arguments inside → configure authentication, regions, settings
DO NOT include provider version here
Basic guideline:
terraform.required_providers → WHICH provider + version
provider → HOW the provider connects to the cloud
Example: AWS Provider Configuration
A realistic AWS provider block:
provider "aws" {
region = "eu-central-1"
default_tags {
tags = {
project = "terraform-demo"
owner = "junzhe"
}
}
}
Explanation:
region is where Terraform creates AWS resources
default_tags automatically attaches the given tags to every AWS resource created by this provider
Authentication is normally handled externally:
environment variables
AWS CLI profile
EC2 instance role
AWS provider supports many options such as:
profile (use AWS CLI profile)
assume_role blocks
endpoints (custom AWS endpoints)
Example: Google Cloud Provider
provider "google" {
project = "my-gcp-project"
region = "us-central1"
zone = "us-central1-a"
}
Explanation:
project is your GCP project ID
region is the default region for regional GCP resources
zone is the default zone for zonal resources
Multiple Provider Configurations
You can configure multiple provider blocks for:
multi-region deployments
multi-account deployments
multiple Kubernetes clusters
Example: Two AWS regions
provider "aws" {
alias = "eu"
region = "eu-central-1"
}
provider "aws" {
alias = "us"
region = "us-east-1"
}
Use them in resources:
resource "aws_s3_bucket" "bucket_eu" {
provider = aws.eu
bucket = "bucket-europe"
}
resource "aws_s3_bucket" "bucket_us" {
provider = aws.us
bucket = "bucket-america"
}
Aliases allow multiple configurations for the same provider.
Provider Inheritance Rule
Resources automatically use the default (non-aliased) provider.
Only use provider = aws.aliasname when:
you have multiple provider configs
a resource belongs to a non-default provider
Modules inherit provider configs from the root module unless overridden.
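To hand a non-default configuration to a module, use the providers meta-argument (module name and path are hypothetical):
module "us_resources" {
  source = "./modules/app"
  providers = {
    aws = aws.us  # the module's default "aws" provider uses the us-east-1 alias
  }
}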
provider Blocks vs. required_providers
required_providers (inside terraform block):
States which providers you want
Specifies versions
Dictates where to download the provider from
provider block:
Configures the provider connection
Sets up authentication
Specifies region/endpoints
Passing Variables into Provider Configurations
You can parameterize provider configuration:
variable "aws_region" {
default = "eu-central-1"
}
provider "aws" {
region = var.aws_region
}
Don’t hardcode credentials in provider blocks.
Debugging Provider Issues
Common mistakes include:
Missing authentication environment variables
Using wrong regions
Incorrect aliases
Mismatched provider versions (but Terraform catches this)
Useful debug command:
TF_LOG=DEBUG terraform apply
This prints all API calls and provider loading behavior.
Complete Example
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "eu-central-1"
default_tags {
tags = {
env = "production"
}
}
}
resource "aws_s3_bucket" "example" {
bucket = "my-s3-bucket-demo-12345"
}
Terraform downloads AWS provider version 5.x
Provider connects to AWS Frankfurt region
An S3 bucket is created using that provider configuration
What Is a resource Block?
The resource block is the core building block of Terraform.
It describes:
WHAT infrastructure should be created
HOW it should look
WHICH provider should manage it
WHAT arguments customize the resource
Example resources include:
aws_instance → EC2 VM
google_compute_network → GCP VPC
kubernetes_deployment → K8S Deployment
azurerm_storage_account → Azure storage
Terraform reads the resource blocks, calculates the desired state, and makes provider API calls to achieve it.
The Structure of a resource Block
resource "PROVIDER_RESOURCE_TYPE" "LOCAL_NAME" {
# arguments (config)
}
Breakdown:
PROVIDER_RESOURCE_TYPE → e.g. aws_s3_bucket
LOCAL_NAME → your own identifier (internal to Terraform)
arguments inside block → specify configuration
Example:
resource "aws_s3_bucket" "my_bucket" {
bucket = "junzhe-terraform-demo"
acl = "private"
}
Here:
aws_s3_bucket is the resource type
my_bucket is the name used within Terraform
It creates a real AWS S3 bucket
Understanding Resource Types
Resource types follow this naming pattern:
<PROVIDER_NAME>_<RESOURCE_TYPE>
Examples:
aws_instance
aws_security_group
google_storage_bucket
kubernetes_service
The provider determines which resource types exist.
Check your provider documentation for the full list.
Example 1: AWS EC2 Instance
resource "aws_instance" "web" {
ami = "ami-0ff8a91507f77f867"
instance_type = "t3.micro"
tags = {
Name = "WebServer"
}
}
Arguments:
ami → machine image
instance_type → VM size
tags → metadata
Resource Arguments vs. Attributes
Arguments are input from your configuration
Attributes are output from the infrastructure provider
Example:
resource "aws_s3_bucket" "example" {
bucket = "my-bucket"
}
output "bucket_arn" {
value = aws_s3_bucket.example.arn
}
bucket → argument
arn → computed attribute
Lifecycles for Resources (lifecycle Block)
You can customize how Terraform manages updates:
resource "aws_s3_bucket" "demo" {
bucket = "demo-bucket"
lifecycle {
prevent_destroy = true
ignore_changes = [tags]
create_before_destroy = true
}
}
Key lifecycle options:
prevent_destroy protects critical resources
ignore_changes makes Terraform ignore drift on selected attributes
create_before_destroy avoids downtime during resource replacement
Meta-Arguments in Resource Blocks
1. count — create multiple instances of a resource
resource "aws_s3_bucket" "buckets" {
count = 3
bucket = "bucket-${count.index}"
}
2. for_each
resource "aws_s3_bucket" "buckets" {
for_each = toset(["a", "b", "c"])
bucket = "bucket-${each.key}"
}
3. depends_on — force explicit ordering
resource "aws_iam_role" "role" {
# ...
}
resource "aws_instance" "server" {
depends_on = [aws_iam_role.role]
# ...
}
4. provider — choose a provider configuration
provider "aws" {
alias = "us"
region = "us-east-1"
}
resource "aws_s3_bucket" "us_bucket" {
provider = aws.us
bucket = "bucket-us"
}
Resource Dependencies
Terraform automatically infers ordering via variables and references:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "subnet" {
vpc_id = aws_vpc.main.id # implicit dependency
}
If implicit dependencies fail, use depends_on.
Importing Existing Resources
You can import existing real cloud resources into Terraform state:
terraform import aws_s3_bucket.my_bucket my-real-bucket
After import:
The resource exists in state
You must write the matching resource block manually
Destroying Resources
Delete a resource from config → terraform apply will destroy it.
Or destroy explicitly:
terraform destroy -target=aws_s3_bucket.my_bucket
Complete Example Project
provider "aws" {
region = "eu-central-1"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "subnet" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "eu-central-1a"
}
resource "aws_instance" "web" {
ami = "ami-0ff8a91507f77f867"
instance_type = "t3.micro"
subnet_id = aws_subnet.subnet.id
}
Terraform will automatically create:
VPC
Subnet inside that VPC
EC2 inside that subnet
What Is a variable Block?
The variable block defines an input variable that you can pass into a Terraform module or configuration.
Input variables allow your Terraform code to be:
reusable
parameterized
dynamic for multiple environments
clean and maintainable
Variables are evaluated at runtime and can be supplied from:
terraform.tfvars
*.auto.tfvars
CLI flags (-var)
environment variables
Terraform Cloud variables
module input parameters
Every variable must be declared using a variable block.
Basic Structure of a variable Block
variable "NAME" {
type = TYPE
default = VALUE
description = "Human-readable text"
sensitive = BOOLEAN
}
All attributes inside the block are optional except the variable name.
Example: A Simple String Variable
variable "region" {
type = string
default = "eu-central-1"
description = "AWS region where resources will be created."
}
Here:
type → string
default → optional fallback value
description → optional but recommended
Supported Variable Types
| Type | Description | Example |
| --- | --- | --- |
| string | Text value | "hello" |
| number | Numeric value | 42 |
| bool | true / false | true |
| list(type) | Ordered list | ["a", "b"] |
| map(type) | Key-value map | { env = "prod" } |
| set(type) | Unordered collection of unique values | toset(["a", "b"]) |
| object({...}) | Structured key/value object | { name = "web", size = 2 } |
| tuple([...]) | Fixed-length list of varying types | ["web", 2] |
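A brief sketch of the less common set and tuple types:
variable "allowed_ports" {
  type    = set(number)
  default = [22, 80, 443]  # duplicate values would be collapsed
}
variable "name_and_size" {
  type    = tuple([string, number])
  default = ["web", 2]
}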
Example: List Variable
variable "availability_zones" {
type = list(string)
default = ["eu-central-1a", "eu-central-1b"]
}
availability_zone = var.availability_zones[0]
Example: Map Variable
variable "instance_tags" {
type = map(string)
default = {
project = "terraform-demo"
owner = "junzhe"
}
}
tags = var.instance_tags
Example: Object Variable
variable "server_config" {
type = object({
size = string
count = number
tags = map(string)
})
}
instance_type = var.server_config.size
tags = var.server_config.tags
Using Variables in a Configuration
Reference variables via var.NAME:
provider "aws" {
region = var.region
}
Variables are always prefixed with var..
You can set variable values from many sources:
1. Default file: terraform.tfvars
region = "us-east-1"
2. Auto-loaded files: any file matching *.auto.tfvars
3. CLI flags (-var)
terraform apply -var="region=us-east-1"
4. Variable files (-var-file)
terraform apply -var-file="production.tfvars"
5. Environment variables
export TF_VAR_region=us-east-1
Sensitive Variables
To hide variable values in CLI output, use:
variable "db_password" {
type = string
sensitive = true
}
Terraform will:
avoid printing sensitive values in terminal
mask outputs
Default Values
A variable without a default is required.
Example required variable:
variable "project_id" {
type = string
}
If you don't supply project_id, terraform plan will ask for it.
Validation Rules
Terraform allows validation inside a variable block:
variable "env" {
type = string
validation {
condition = contains(["dev", "prod"], var.env)
error_message = "env must be either 'dev' or 'prod'"
}
}
This ensures correct input values.
Complete Example
variable "region" {
type = string
default = "eu-central-1"
description = "The AWS region"
}
variable "tags" {
type = map(string)
default = {
project = "demo"
owner = "junzhe"
}
}
resource "aws_s3_bucket" "example" {
bucket = "my-demo-bucket"
tags = var.tags
}
provider "aws" {
region = var.region
}
What Is an output Block?
The output block defines values that are displayed after terraform apply and can be used by:
the end user
scripts
other Terraform modules
Outputs are useful for showing:
IP addresses
URLs
resource IDs
credentials
any useful computed value
Outputs reflect data stored in the Terraform state, not live API calls.
Basic Structure of an output Block
output "NAME" {
value = EXPRESSION
description = "Human-readable text"
sensitive = BOOLEAN
}
Example: Simple Output
output "bucket_name" {
value = aws_s3_bucket.my_bucket.id
}
Usage: after terraform apply, Terraform prints:
bucket_name = "my-bucket-123"
When Are Outputs Shown?
Outputs are displayed after terraform apply, or on demand via terraform output.
Example:
terraform output bucket_name
Terraform stores outputs in the state file.
The value Argument
Accepts any Terraform expression:
resource attributes
variables
functions
maps / lists
Example:
output "instance_url" {
value = "https://${aws_instance.web.public_ip}"
}
Output will look like:
instance_url = "https://54.23.111.20"
Marking Outputs as Sensitive
Prevent them from appearing in CLI output:
output "db_password" {
value = random_password.db.result
sensitive = true
}
CLI output becomes masked:
db_password = (sensitive value)
This prevents accidental exposure in CI/CD logs.
Using output in Modules
Outputs from a child module can be referenced by the parent module.
Child module (modules/network/outputs.tf):
output "vpc_id" {
value = aws_vpc.main.id
}
Parent module:
module "network" {
source = "./modules/network"
}
resource "aws_subnet" "example" {
vpc_id = module.network.vpc_id
}
module.network.vpc_id references the child’s output.
Using Outputs for Cross-Project Sharing
Terraform Cloud and some CI setups allow one workspace’s outputs to feed another workspace.
Example use case:
Network workspace outputs VPC ID
Compute workspace uses that VPC ID as input
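With a plain S3 backend, the same pattern works via the terraform_remote_state data source (bucket and key are placeholders):
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "network/terraform.tfstate"
    region = "eu-central-1"
  }
}
# consume the other project's output:
# vpc_id = data.terraform_remote_state.network.outputs.vpc_id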
Output Format Options: terraform output
terraform output
Show one output:
terraform output vpc_id
Show output in JSON (useful for scripts):
terraform output -json
JSON output enables automation with tools such as Ansible, Bash, Python, etc.
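For example, a shell script can extract a single value (assumes jq is installed and a vpc_id output exists):
terraform output -json | jq -r '.vpc_id.value'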
Complex Output Values
You can output maps, lists, and objects.
Example 1: List
output "azs" {
value = var.availability_zones
}
Example 2: Map
output "instance_endpoints" {
value = {
app = aws_instance.app.public_ip
db = aws_instance.db.private_ip
}
}
Example 3: Object
output "info" {
value = {
id = aws_instance.web.id
name = aws_instance.web.tags["Name"]
url = "https://${aws_instance.web.public_ip}"
}
}
Conditional Outputs
output "endpoint" {
value = var.is_prod
? aws_lb.prod_lb.dns_name
: aws_lb.dev_lb.dns_name
}
You can dynamically output different values depending on environment variables.
Complete Example
resource "aws_instance" "web" {
ami = "ami-0ff8a91507f77f867"
instance_type = "t3.micro"
}
output "public_ip" {
value = aws_instance.web.public_ip
description = "The public IP of the web server"
}
output "ssh_command" {
value = "ssh ec2-user@${aws_instance.web.public_ip}"
}
public_ip = "54.22.13.91"
ssh_command = "ssh ec2-user@54.22.13.91"
What Is a locals Block?
The locals block defines local values — named expressions that behave like constants inside a Terraform module.
Locals help you:
avoid repeating long or complex expressions
keep configuration clean and readable
group transformation logic in one place
compute values dynamically
Local values cannot be passed from outside the module and cannot be overridden.
They exist only within the module they're declared in.
Basic Structure of a locals Block
locals {
NAME = EXPRESSION
NAME2 = EXPRESSION
}
A module can contain multiple locals blocks; Terraform merges them.
Define as many locals as you want inside the block.
Referencing Local Values
local.NAME
Example:
locals {
region = "eu-central-1"
}
provider "aws" {
region = local.region
}
Example: Avoiding Duplication
locals {
common_tags = {
project = "demo"
owner = "junzhe"
}
}
resource "aws_s3_bucket" "bucket1" {
bucket = "bucket-one"
tags = local.common_tags
}
resource "aws_s3_bucket" "bucket2" {
bucket = "bucket-two"
tags = local.common_tags
}
The same tag structure is reused with consistency.
Local Values Can Use Expressions
Locals can compute values dynamically.
locals {
full_name = "${var.env}-${var.app_name}-${terraform.workspace}"
}
Usage:
resource "aws_s3_bucket" "app" {
bucket = local.full_name
}
This produces predictable naming conventions.
Example: Complex Expression
Compute local values based on logic:
locals {
is_prod = var.env == "prod"
instance_type = local.is_prod ? "t3.medium" : "t3.micro"
}
This allows declarative environment-based logic.
Locals with Maps
locals {
app_ports = {
http = 80
https = 443
metrics = 9100
}
}
output "http_port" {
value = local.app_ports["http"]
}
Locals help group structured data.
Locals with Lists
locals {
zones = [
"${var.region}a",
"${var.region}b",
"${var.region}c"
]
}
resource "aws_subnet" "subnet1" {
availability_zone = local.zones[0]
}
Local lists simplify multi-AZ deployments.
Locals with Resource References
Locals can include resource attributes:
locals {
web_url = "http://${aws_instance.web.public_ip}"
}
Becomes a reusable reference:
output "website" {
value = local.web_url
}
Example: Combining Map + Logic
locals {
configs = {
dev = { instance = "t3.micro", count = 1 }
prod = { instance = "t3.large", count = 3 }
}
active_config = local.configs[var.env]
}
resource "aws_instance" "web" {
instance_type = local.active_config.instance
count = local.active_config.count
}
This pattern is extremely common in production modules.
Using Multiple locals Blocks
You can declare locals in separate files or separate blocks:
locals {
project = "demo"
}
locals {
region = "eu-central-1"
}
Terraform merges them into a single local namespace.
Complete Practical Example
variable "env" {
type = string
default = "dev"
}
locals {
common_tags = {
project = "terraform-demo"
owner = "junzhe"
}
full_name = "${var.env}-webserver"
}
resource "aws_instance" "web" {
ami = "ami-0ff8a91507f77f867"
instance_type = "t3.micro"
tags = local.common_tags
}
output "name" {
value = local.full_name
}
Locals improve readability, reduce duplication, and centralize logic.
What Is the import Block?
The import block is used to bring existing infrastructure resources (created outside Terraform) under Terraform’s management.
This allows Terraform to “adopt” a resource instead of recreating it.
Basic Structure of an import Block
import {
to = RESOURCE_ADDRESS
id = PROVIDER_RESOURCE_ID
}
to: the Terraform resource to map to (must already exist in code)
id: the resource identifier from the cloud provider
Example: Importing an AWS S3 Bucket
Terraform resource declared:
resource "aws_s3_bucket" "my_bucket" {}
Add an import block:
import {
to = aws_s3_bucket.my_bucket
id = "my-existing-bucket"
}
Explanation:
aws_s3_bucket.my_bucket → the Terraform resource to link
my-existing-bucket → the name of the actual S3 bucket
Now run:
terraform plan
This previews the import; Terraform performs the actual import during terraform apply.
Full Lifecycle of Import Using the import Block
Step 1 — Write the resource block:
resource "aws_security_group" "web" {}
Step 2 — Add the import block:
import {
to = aws_security_group.web
id = "sg-0a1b2c3d4e5f6g7h"
}
Step 3 — run:
terraform apply
Terraform imports the SG and writes its state locally.
Using Terraform to Generate Configuration After Import
You can ask Terraform to generate resource configuration after import:
terraform plan -generate-config-out=generated.tf
This creates a file that contains the attributes of the imported resource.
Useful when importing large or complex resources.
How Import Works Internally
Terraform does NOT modify the real resource.
It simply reads the resource from the provider (AWS, Azure, GCP, etc.) and writes it to terraform.tfstate.
After import:
Terraform manages the resource
Updates are made via Terraform
Destruction happens via Terraform
Importing Multiple Resources
import {
to = aws_iam_role.app_role
id = "my-app-role"
}
import {
to = aws_iam_policy.read_policy
id = "arn:aws:iam::123456789012:policy/ReadPolicy"
}
import {
to = aws_s3_bucket.data
id = "data-bucket"
}
Terraform will import all during apply.
Importing Nested or Child Resources
If a resource exists inside a module:
module "network" {
source = "./network"
}
import {
to = module.network.aws_vpc.main
id = "vpc-123456"
}
Terraform supports importing resources inside modules using full resource addresses.
Example: Importing an Azure Resource
resource "azurerm_resource_group" "main" {
name = "rg-demo"
location = "westeurope"
}
import {
to = azurerm_resource_group.main
id = "/subscriptions/xxx/resourceGroups/rg-demo"
}
Reference: Full Syntax Options
import {
to = RESOURCE_ADDRESS
id = IDENTIFIER
provider = PROVIDER_ALIAS # optional
}
provider: Only required when multiple provider aliases exist.
Common Import Errors
Error: Resource does not exist
The provided id is wrong.
Error: Resource already managed
The resource is already in the Terraform state.
Error: Missing required argument
The resource block must exist before import.
Error: Provider mismatch
Check provider aliases and regions.
Import vs Creating New Resources
Create new resource:
Terraform builds the resource using the provider API.
Import existing resource:
Terraform only maps state → it does NOT modify the resource.
After import, Terraform treats the resource like it created it.
When Should You Use the import Block?
You should use an import block when:
you already have cloud infrastructure created manually
you want to migrate legacy resources under Terraform
a team member created a resource outside Terraform
a CI/CD workflow needs reproducible imports
Import blocks make Terraform ideal for gradual adoption of IaC.
Complete Practical Example
resource "aws_iam_user" "admin" {}
import {
to = aws_iam_user.admin
id = "existing-admin-user"
}
Run:
terraform apply
Terraform imports the IAM user into state automatically.
What Is a module Block?
A module block is how you call or reuse Terraform configurations stored in another directory or registry.
Basic Structure of a module Block
module "NAME" {
source = "SOURCE_LOCATION"
# input variables
variable1 = VALUE
variable2 = VALUE
}
A module must always have a source argument.
Module Source Types
| Source Type | Example | Description |
| --- | --- | --- |
| Local path | ./modules/vpc | Call a module stored locally |
| Git repository | git::https://github.com/user/repo.git | Load modules directly from Git |
| Terraform Registry | terraform-aws-modules/vpc/aws | Official or community modules |
| Private registry | app.terraform.io/org/module/aws | Enterprise patterns |
| HTTP URL | https://example.com/module.zip | Download a module archive |
Example: Local Module
module "network" {
source = "./modules/network"
cidr_block = "10.0.0.0/16"
env = var.env
}
modules/
network/
main.tf
variables.tf
outputs.tf
Example: Using a Public Registry Module
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.5.0"
name = "demo"
cidr = "10.0.0.0/16"
}
Terraform automatically downloads and caches the module.
Passing Input Variables to a Module
Modules accept input variables via attributes inside the block:
module "compute" {
source = "./modules/compute"
instance_count = 3
instance_type = "t3.micro"
}
Inside the module, variables must be declared:
variable "instance_count" { type = number }
variable "instance_type" { type = string }
Consuming Module Outputs
A module can expose outputs:
output "vpc_id" {
value = aws_vpc.main.id
}
Parent module can access them:
resource "aws_subnet" "subnet1" {
vpc_id = module.vpc.vpc_id
}
Modules behave exactly like objects with attributes.
Using Multiple Modules
A real Terraform project may use many modules:
module "network" { ... }
module "database" { ... }
module "frontend" { ... }
module "backend" { ... }
Terraform keeps each module isolated and reusable.
Meta-Arguments in Modules
Modules support these meta-arguments: depends_on, count, for_each
Example using depends_on:
module "app" {
source = "./modules/app"
depends_on = [module.network]
}
Example using for_each:
module "bucket" {
source = "./modules/s3"
for_each = toset(["dev", "test", "prod"])
env = each.key
}
Modules become fully programmable like resources.
Versioning Modules
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.5"
}
Use semantic versioning to control upgrades.
Complete Real-World Example
module "network" {
source = "./modules/network"
cidr = "10.0.0.0/16"
}
module "compute" {
source = "./modules/compute"
subnet_ids = module.network.subnet_ids
instance_type = "t3.micro"
count = 2
}
output "instance_ips" {
value = module.compute.instance_ips
}
This structure keeps the project scalable and clean.
How Terraform Reads Files (Very Important Concept)
Terraform does not care about file names; it only cares about the directory (module root) and all .tf / .tf.json files in it.
This means:
you can technically put everything into a single main.tf
or spread blocks across many files like variables.tf, outputs.tf, etc.
However, to keep your configuration readable and standard, people follow common conventions.
Terraform loads all .tf files in lexicographical order, but you should not rely on this for logic; Terraform builds a dependency graph instead.
Typical Recommended File Layout in a Small Project
A common starting layout:
project-root/
├── main.tf # core resources & module calls
├── providers.tf # provider blocks
├── versions.tf # terraform block (required_version, required_providers)
├── variables.tf # variable blocks
├── outputs.tf # output blocks
├── locals.tf # locals block(s)
├── import.tf # optional: import blocks
└── modules/ # reusable child modules
└── <module-name>/...
You can adjust names, but this structure is widely understood by other Terraform users.
Where to Put the terraform Block
The terraform block configures:
required_version
required_providers
backend
optionally cloud, experiments, etc.
By convention, put it into a dedicated file named versions.tf or terraform.tf.
Example (versions.tf):
terraform {
required_version = "~> 1.6"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "my-tf-state"
key = "envs/dev/state.tfstate"
region = "eu-central-1"
}
}
Put exactly one terraform block per module (per folder).
Where to Put provider Blocks
provider blocks configure how Terraform connects to a vendor (AWS, GCP, Azure, Kubernetes...).
Typical convention: Put all provider blocks into providers.tf.
Example (providers.tf):
provider "aws" {
region = var.region
}
provider "aws" {
alias = "us"
region = "us-east-1"
}
In child modules:
you often omit providers or just write provider requirements
the root module usually owns provider configuration
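A child module typically declares only the providers it needs, without configuring them; a minimal sketch:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}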
Where to Put variable Blocks
variable blocks define input parameters for a module.
Standard practice: Put them into a dedicated variables.tf file.
Root module variables.tf: variables that users must pass via terraform.tfvars, -var, etc.
Child module variables.tf: inputs that the parent module must provide via a module block
Example (variables.tf):
variable "region" {
type = string
default = "eu-central-1"
description = "AWS region for all resources."
}
variable "env" {
type = string
description = "Environment name (e.g. dev, prod)."
}
Where to Put locals Blocks
locals define internal helper values and computed expressions.
Recommended:
Put most of them in locals.tf, especially shared/common ones.
Optionally keep small, context-specific locals close to where they are used, in the same file as the resources.
Example (locals.tf):
locals {
common_tags = {
project = "demo"
owner = "junzhe"
}
full_env_name = "${var.env}-example"
}
Usage (e.g. in main.tf):
resource "aws_s3_bucket" "logs" {
bucket = "${local.full_env_name}-logs"
tags = local.common_tags
}
Where to Put resource Blocks
resource blocks define actual infrastructure (S3, EC2, VPCs, etc.).
For very small configs:
place all resources into main.tf
For medium/large projects:
split by domain or function:
main.tf # high-level modules, maybe some core resources
network.tf # VPCs, subnets, route tables
security.tf # security groups, IAM
compute.tf # EC2 instances, autoscaling groups
storage.tf # S3, EBS, RDS
kubernetes.tf # EKS / K8S resources
This makes it easier to navigate and reason about your infra.
Where to Put output Blocks
output blocks define values Terraform will show or pass to parent modules.
Standard convention: Put all outputs into outputs.tf.
Example (outputs.tf):
output "vpc_id" {
value = aws_vpc.main.id
description = "The ID of the main VPC."
}
output "public_url" {
value = "https://${aws_lb.app_lb.dns_name}"
}
Both root and child modules can have outputs:
root module outputs → for humans / external tooling
child module outputs → used by parent modules via module.<name>.<output>
Where to Put module Blocks (Calling Modules)
module blocks call reusable submodules.
Common approaches:
Keep them at the top-level in main.tf, so that reading main.tf gives you a high-level overview.
When you have many modules, you can also group them:
modules-network.tf
modules-app.tf
Example (main.tf):
module "network" {
source = "./modules/network"
cidr = "10.0.0.0/16"
env = var.env
}
module "app" {
source = "./modules/app"
vpc_id = module.network.vpc_id
subnet_ids = module.network.subnet_ids
}
The actual module implementations live under modules/<name>.
Where to Put import Blocks
import blocks describe how to map existing resources into Terraform state.
Good practice:
Put them in a dedicated import.tf file.
Optionally add comments explaining where the resource came from.
Example (import.tf):
# Import existing S3 bucket created manually in the console
import {
to = aws_s3_bucket.logs
id = "my-existing-logs-bucket"
}
Related resource must exist somewhere (e.g. in storage.tf):
resource "aws_s3_bucket" "logs" {
# configuration that matches the real bucket
}
Recommended Structure for Child Modules
Inside modules/<module-name>/, use a mini version of the same pattern:
modules/
network/
main.tf # resources (vpc, subnets, routes)
variables.tf # module inputs
outputs.tf # module outputs
locals.tf # internal helpers (optional)
app/
main.tf # EC2, ASG, LB, etc.
variables.tf
outputs.tf
locals.tf
Usually child modules do not define provider configurations or backend settings; the root module owns those.
Putting It All Together: Example Full Layout
project-root/
├── versions.tf # terraform { ... }
├── providers.tf # provider "aws" { ... }
├── variables.tf # variable "env" { ... }, etc.
├── locals.tf # locals { ... }
├── main.tf # top-level modules & maybe a few core resources
├── network.tf # VPC, subnets, routes
├── security.tf # security groups, IAM
├── compute.tf # EC2, autoscaling
├── storage.tf # S3, RDS, EBS
├── outputs.tf # output "..." { ... }
├── import.tf # optional import { ... } blocks
└── modules/
├── network/
│ ├── main.tf
│ ├── variables.tf
│ ├── outputs.tf
│ └── locals.tf
└── app/
├── main.tf
├── variables.tf
├── outputs.tf
└── locals.tf
Remember: Terraform itself doesn’t enforce this structure, but following it makes your life (and teamwork) much easier.