Terraform Workflow Revisited
- Whenever terraform plan is run, Terraform builds an execution plan (the configuration is compared with the state file, if present, and only the differences are included). When the apply command is used, this execution plan is executed to create the resources, and the created resources are recorded in the state file
- Create a folder and a main.tf file with one S3 bucket
provider "aws" {
region = "ap-south-1"
access_key = "AKIA3TXJQGAJFH3KKXSR"
secret_key = "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"
}
resource "aws_s3_bucket" "mys3bucket" {
bucket = "qtbucketfromtf.com"
tags = {
Purpose = "terraform"
}
}
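As a side note (not required for this exercise), a required_providers block pins the AWS provider version so plans stay reproducible across machines; the version constraint below is an assumption, adjust it to whatever provider version you are working with:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Assumed constraint for illustration; pin to your actual version
      version = "~> 5.0"
    }
  }
}
```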
- Now change main.tf to add one more bucket and apply again to create the new resource
provider "aws" {
region = "ap-south-1"
access_key = "AKIA3TXJQGAJFH3KKXSR"
secret_key = "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"
}
resource "aws_s3_bucket" "mys3bucket" {
bucket = "qtbucketfromtf.com"
tags = {
Purpose = "terraform"
}
}
resource "aws_s3_bucket" "yours3bucket" {
bucket = "studentbucketfromtf.com"
tags = {
Purpose = "terraform"
}
}
- Now execute the plan again without making any changes; terraform plan should report that no changes are needed
- In the above script, let's get rid of the static credentials (hard-coding keys in .tf files is unsafe) and use environment variables instead
- Linux or Mac
export AWS_ACCESS_KEY_ID="anaccesskey"
export AWS_SECRET_ACCESS_KEY="asecretkey"
export AWS_DEFAULT_REGION="ap-south-1"
- Windows: launch PowerShell
$env:AWS_ACCESS_KEY_ID = "anaccesskey"
$env:AWS_SECRET_ACCESS_KEY = "asecretkey"
$env:AWS_DEFAULT_REGION = "ap-south-1"
- Now change the main.tf
provider "aws" {
}
resource "aws_s3_bucket" "mys3bucket" {
bucket = "qtbucketfromtf.com"
tags = {
Purpose = "terraform",
Topic = "environmental variables"
}
}
resource "aws_s3_bucket" "yours3bucket" {
bucket = "studentbucketfromtf.com"
tags = {
Purpose = "terraform"
}
}
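If you prefer not to rely on AWS_DEFAULT_REGION, the region can stay in the provider block while the credentials still come from the environment; a minimal sketch:

```hcl
# Credentials are read from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY;
# only the region is set statically in the configuration.
provider "aws" {
  region = "ap-south-1"
}
```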
- Splitting Terraform configuration into multiple files
- Create a folder and then create a file provider.tf with the following content
provider "aws" {
}
- Create buckets.tf with the following content
resource "aws_s3_bucket" "mys3bucket" {
bucket = "qtbucketfromtf.com"
tags = {
Purpose = "terraform",
Topic = "environmental variables"
}
}
resource "aws_s3_bucket" "yours3bucket" {
bucket = "studentbucketfromtf.com"
tags = {
Purpose = "terraform"
}
}
- Create vpc.tf and add the following content
resource "aws_vpc" "myvpc" {
count = 3
cidr_block = "10.10.0.0/16"
tags = {
Name = "My VPC ${count.index}"
}
}
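Since count creates three instances of this resource, they are referenced by index (aws_vpc.myvpc[0], [1], [2]); a sketch of an output that collects all their IDs with the [*] splat expression (the output name vpc_ids is our own choice):

```hcl
output "vpc_ids" {
  # The [*] splat gathers the id attribute from every counted instance
  value = aws_vpc.myvpc[*].id
}
```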
- In Terraform, the input for creating an execution plan is a directory, so Terraform combines the contents of all files with a .tf extension in that directory. Let's execute the following commands
cd multifile
terraform init
terraform plan -out=multifile.plan
terraform apply "multifile.plan"
- Arguments: the inputs we give to resources/providers in Terraform
- Attributes: the outputs Terraform gives back to us. Every resource's documentation has a section called Attribute Reference
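For example, in the bucket resource above, bucket and tags are arguments we supply, while attributes such as arn and id are computed by AWS and exposed back to us; a minimal sketch of reading one (the output name mybucket_arn is our own choice):

```hcl
output "mybucket_arn" {
  # arn is listed under Attribute Reference in the aws_s3_bucket docs
  value = aws_s3_bucket.mys3bucket.arn
}
```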
Summary
- Multiple tf files in terraform
- terraform plan => execution plan
- terraform state => represents what has been created
- terraform show => inspects the current state or a saved plan
- arguments and attributes