Terraform for dummies - A complete guide

A complete Terraform tutorial for managing infrastructure as code in AWS


Terraform can provision infrastructure across public cloud providers such as Amazon Web Services (AWS), Azure, and Google Cloud, as well as private cloud and virtualization platforms such as OpenStack and VMware.

The syntax for creating a resource in Terraform is:

resource "<PROVIDER>_<TYPE>" "<NAME>" { [CONFIG ...] }

There are many different kinds of resources that you can create, such as servers, databases, and load balancers. In our case, we will create a server.

PROVIDER is the name of a provider. In our case it is AWS.

TYPE is the type of resource to create in that provider (e.g., instance).

NAME is an identifier used to refer to this resource (e.g., my_instance).

CONFIG consists of one or more arguments that are specific to that resource.

Create a folder and, inside it, a file called main.tf with the following contents to deploy an EC2 instance (virtual server) in AWS:

provider "aws" { region = " ap-south-1" } resource "aws_instance" "myserver" { ami = " ami-0cb0e70f44e1a4bb5" instance_type = "t2.micro" }

This configuration instructs Terraform to use AWS as your provider and to deploy infrastructure into the ap-south-1 region.

We are setting two arguments: the Amazon Machine Image (AMI) to run on the EC2 instance, and the type of EC2 instance to run.

In a terminal, go to the folder where you created main.tf and run the terraform init command

terraform init

The above command detects which provider you're using and downloads the provider code into a .terraform folder.

Next, run the following command.

terraform plan

This command lets you know what Terraform will do before actually making any changes.

Run the command below to create the server instance; when prompted, enter the value "yes".

terraform apply

To remove the server, use the command below.

terraform destroy

After the apply completes, you can see that a server has been created in the AWS console.

Variables in Terraform

Variables make configurations reusable and let you change values dynamically.

I have created a file called vars.tf to store my variables.

Note: you can use any filename, since Terraform loads all files ending in .tf in a directory.

In the vars.tf file, we have the following configuration.
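The original post shows vars.tf only as a screenshot. Here is a hedged reconstruction based on the breakdown below; the AMI IDs for the two extra regions are placeholders:

1.  variable "AWS_REGION" {
2.    default = "ap-south-1"
3.  }
4.
5.  variable "AMIS" {
6.    type = map(string)
7.    default = {
8.      "us-east-1"  = "ami-0aaaabbbbcccc1111"   # placeholder
9.      "us-west-2"  = "ami-0ddddeeeeffff2222"   # placeholder
10.     "ap-south-1" = "ami-0cb0e70f44e1a4bb5"
11.   }
12. }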

In lines 5-6, I have defined a variable named "AMIS" whose type is map. There are other types as well, such as list, string, and bool.

A map value is a lookup table from string keys to string values.

In line 7, default sets a default value for the variable. The default value can be of any data type.

We have defined three default AMIs, and in line 2 the default region is set to "ap-south-1", so the AMI defined for "ap-south-1" will be picked.

In main.tf, we have the following configuration.
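This file was also shown as a screenshot; a likely reconstruction:

1. provider "aws" {
2.   region = var.AWS_REGION
3. }
4. resource "aws_instance" "myserver" {
5.   ami           = var.AMIS[var.AWS_REGION]
6.   instance_type = "t2.micro"
7. }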

In line 5, we have used the square-bracket index notation.

The map var.AMIS is indexed with [var.AWS_REGION], so the AMI value is looked up dynamically based on the configured region.

Lookup function

Now the same "main.tf" can be rewritten with the lookup function, as below.
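Again a sketch reconstructed from the description, not the original screenshot:

1. provider "aws" {
2.   region = var.AWS_REGION
3. }
4. resource "aws_instance" "myserver" {
5.   ami           = lookup(var.AMIS, var.AWS_REGION)
6.   instance_type = "t2.micro"
7. }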

lookup gets the value of a single element from a map by providing its key. If the key does not exist, the given default value is returned instead.

The syntax of lookup is:

lookup(map, key, default)

In line 5, we have provided the map as "var.AMIS" and the key as "var.AWS_REGION".

Terraform State

When you run Terraform, it records information about what infrastructure it created in a Terraform state file.

Terraform creates the file terraform.tfstate.

This file is in JSON format and records a mapping from the Terraform resources in your configuration files to the real resources they represent.

Terraform also keeps a backup of the previous state in terraform.tfstate.backup. This is how Terraform keeps track of what it has deployed.

Attributes in Terraform

Terraform stores many attribute values for all your resources.

For example, the aws_instance resource has attributes such as public_ip and private_ip.

These attribute values can be output in the terminal or fed into other external tools.

Output of attributes

In the main.tf file below,
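(the screenshot is not reproduced here; this is a hedged reconstruction of the file being described)

1.  provider "aws" {
2.    region = var.AWS_REGION
3.  }
4.
5.  resource "aws_instance" "myserver" {
6.    ami           = lookup(var.AMIS, var.AWS_REGION)
7.    instance_type = "t2.micro"
8.  }
9.
10. output "public_ip" {
11.   value = aws_instance.myserver.public_ip
12. }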

In line 10, we have defined the output and referred to the particular attribute by “aws_instance.myserver.public_ip”

i.e., <RESOURCE_TYPE>.<RESOURCE_NAME>.<ATTRIBUTE_NAME>

After running "terraform apply", you will see output in the terminal that provides the IP address.

Provisioners in Terraform

Provisioners perform specific actions on the local machine or on a remote machine in order to prepare servers or other infrastructure objects for service (described below).

Below is the configuration in the main.tf file.
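The configuration appears as a screenshot in the original; here is a hedged reconstruction that matches the line references below. The script name and SSH user are assumptions:

1.  resource "aws_key_pair" "deployer" {
2.    key_name   = "mykey"
3.    public_key = file(var.PATH_TO_PUBLIC_KEY)
4.  }
5.
6.  resource "aws_instance" "myserver" {
7.    ami           = lookup(var.AMIS, var.AWS_REGION)
8.    instance_type = "t2.micro"
9.
10.   # use the key pair created above
11.   key_name = aws_key_pair.deployer.key_name
12.
13.   # copy a local script onto the new instance
14.   provisioner "file" {
15.     source      = "script.sh"        # assumed file name
16.     destination = "/tmp/script.sh"
17.   }
18.   connection {
19.     type        = "ssh"
20.     user        = "ubuntu"           # assumed SSH user
21.     private_key = file(var.PATH_TO_PRIVATE_KEY)
22.     host        = coalesce(self.public_ip, self.private_ip)
23.   }
24. }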

Breakdown of configurations

In line 1, we have created a new key pair resource named "deployer".

In line 11, the value of "key_name" is a reference to "aws_key_pair.deployer.key_name", i.e., <RESOURCE_TYPE>.<NAME>.<ATTRIBUTE>.

In line 14, the provisioner block is nested within the resource block. The file provisioner is used to copy files or directories from the machine executing Terraform to the newly created resource.

In line 18, the connection block gives the provisioner SSH access to the remote resource. Because it is nested directly inside the resource block, it applies to all of the resource's provisioners.

Currently we have only one provisioner.

In line 22, coalesce is a Terraform function which takes any number of arguments and returns the first one that is not null or an empty string.

In our case, it will return the self.public_ip value; if that value is null, it will return self.private_ip instead.

Below is the vars.tf configuration file.
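This, too, is a reconstruction rather than the original screenshot; the first two variables repeat the earlier vars.tf:

1.  variable "AWS_REGION" {
2.    default = "ap-south-1"
3.  }
4.
5.  variable "AMIS" {
6.    type = map(string)
7.    default = {
8.      "us-east-1"  = "ami-0aaaabbbbcccc1111"   # placeholder
9.      "us-west-2"  = "ami-0ddddeeeeffff2222"   # placeholder
10.     "ap-south-1" = "ami-0cb0e70f44e1a4bb5"
11.   }
12. }
13.
14. variable "PATH_TO_PUBLIC_KEY" {
15.   default = "mykey.pub"
16. }
17.
18. variable "PATH_TO_PRIVATE_KEY" {
19.   default = "mykey"
20. }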

In lines 14-20, we have created two variables for the public and private key paths.

Note: you need to generate the public and private key using the ssh-keygen command and name those keys "mykey" and "mykey.pub".

Terraform Remote Backend

As we saw previously, the Terraform state is stored locally in terraform.tfstate.

If you’re using Terraform for a local project, storing state locally on your computer works just fine.

But if you want to use Terraform as a team on an organization's project, you will run into a lot of problems.

One such problem is keeping a history of the updated Terraform state files; another is that if the team shares the same state files, they might end up in a conflict.

One such scenario: one person changes a file and runs "terraform apply", while another person makes other updates to the same file and also runs "terraform apply"; this leads to conflicts and data loss.

So locking of state files is definitely needed to avoid the above scenario.

So you might think of using “Git” to track the changes.

That's a bad idea: Git does not provide locking that would prevent two team members from running "terraform apply" against the same state file, and it also requires everyone to run "git pull" every time; what if someone forgets?

AWS S3 is the best choice for a remote backend, because it supports locking via DynamoDB and is highly durable and available.

It also supports versioning, so every change is stored.

To enable remote state storage with Amazon S3, let's do the following configuration.

Below is the main.tf file.
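The screenshot is not reproduced here; this is a hedged reconstruction matching the breakdown below. It assumes the pre-4.x AWS provider, where versioning is configured inside the bucket resource, and a PAY_PER_REQUEST DynamoDB table:

1.  provider "aws" {
2.    region = "ap-south-1"
3.  }
4.  resource "aws_s3_bucket" "terraform_state" {
5.    bucket = "terraform-bucket-demo-traveldiaries4u"
6.    # keep the full history of the state file
7.    versioning {
8.      enabled = true
9.    }
10. }
11.
12. # DynamoDB table used for state locking: Terraform writes a
13. # lock entry here during plan/apply so that two team members
14. # cannot modify the state at the same time.
15.
16. resource "aws_dynamodb_table" "terraform_locks" {
17.   name         = "terraform-locking-demo"
18.   billing_mode = "PAY_PER_REQUEST"
19.   hash_key     = "LockID"
20.   attribute {
21.     name = "LockID"
22.     type = "S"
23.   }
24. }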

Breakdown of configurations

In lines 4-10, we are creating a new bucket and enabling versioning for S3.

In line 16, we create a DynamoDB table that has a primary key called "LockID".

In lines 20-23, we define the table's attribute. The attribute type must be a scalar type: S, N, or B for (S)tring, (N)umber, or (B)inary data.

Create a new file called backend.tf.

Below is the configuration for backend.tf:

1. terraform {
2.   backend "s3" {
3.     bucket         = "terraform-bucket-demo-traveldiaries4u"
4.     region         = "ap-south-1"
5.     dynamodb_table = "terraform-locking-demo"
6.     encrypt        = false
7.     key            = "terraform.tfstate"  # required by the s3 backend; path of the state file in the bucket (name assumed)
8.   }
9. }

Breakdown of configurations for backend.tf

In line 3, we have provided the name of the bucket.

In line 5, we have provided the table name, and in line 6 we set encryption at rest to false. Line 7 sets the key, the path under which the state file is stored in the bucket.

Note: variables cannot be used in a backend configuration.

Run the "terraform init" command again to download the provider code; this time the remote backend will be set to S3.

After running this command, your Terraform state will be stored in the S3 bucket.

You can check this in the s3 management console.

Modules in Terraform

Modules are helpful for reusing code and configurations.

Let's say we have set up a VPC cluster for the Dev environment by writing all the Terraform configurations.

Now, when we need to set up a Prod cluster, we can reuse the Terraform files of the Dev VPC cluster by putting them in a module and defining input variables to change the resource configuration parameters that differ for Prod.

That’s it. You don’t have to write terraform files again!

The syntax for using a module is:

module "NAME" { source = "SOURCE-OF-MODULE-PATH" [CONFIG ...] }

The source field specifies where to download the module configuration.

Modules can be downloaded from multiple sources, e.g., a local path, GitHub, S3, the Terraform Registry, etc.
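For illustration, a few hedged examples of source values (the module names and paths here are made up):

# local path
module "vpc" {
  source = "./modules/vpc"
}

# GitHub repository
module "network" {
  source = "github.com/example-org/terraform-aws-network"
}

# public Terraform Registry (<NAMESPACE>/<NAME>/<PROVIDER>)
module "consul" {
  source = "hashicorp/consul/aws"
}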

Modules Hands-on

Before proceeding: the complete code can be found in this link.

The scenario is to create an EC2 instance in a VPC. Below are the resources that we need to create.

  1. A VPC has to be created.
  2. An internet gateway should be attached to the VPC for network traffic.
  3. A public subnet has to be created in an availability zone and associated with the VPC.
  4. A route table has to be created for that subnet to route internet traffic to the internet gateway.
  5. A security group is needed to control traffic to and from our EC2 instance. We need to define ingress and egress rules; they must allow port 22 traffic to and from the instance.
  6. A key pair needs to be created and associated with the EC2 instance.
  7. Finally, create a test EC2 instance and associate it with the resources defined above.

In the code which you downloaded, inside the module directory, you will find the network.tf file with the below configuration.
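The screenshots are not reproduced here; this is a condensed, hedged sketch of the resources such a network.tf would create. The CIDR ranges, availability zone, and resource names are assumptions:

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}

# internet gateway attached to the VPC for network traffic
resource "aws_internet_gateway" "demo" {
  vpc_id = aws_vpc.demo.id
}

# public subnet in one availability zone, associated with the VPC
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.demo.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "ap-south-1a"
  map_public_ip_on_launch = true
}

# route table sending internet traffic to the internet gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.demo.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.demo.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# security group allowing SSH in and all traffic out
resource "aws_security_group" "allow_ssh" {
  vpc_id = aws_vpc.demo.id
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}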

In the output.tf file, as seen below, we have defined output variables so that other resources or modules can use those values.
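A sketch of such an output.tf, with output names chosen to match the sketch above (the real names in the downloaded code may differ):

output "public_subnet_id" {
  value = aws_subnet.public.id
}

output "security_group_id" {
  value = aws_security_group.allow_ssh.id
}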

This vpc module is used by our main configuration file (main.tf), where we create an EC2 instance inside the networking layer built by the vpc module.

Below is the main.tf file.
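Again a hedged reconstruction matching the breakdown below; the module path, key file, and resource names are assumptions:

1.  provider "aws" {
2.    region = "ap-south-1"
3.  }
4.
5.  # networking layer from the local vpc module
6.  module "vpc" {
7.    source = "./module"
8.  }
9.
10. resource "aws_key_pair" "deployer" {
11.   key_name   = "deployer-key"       # assumed name
12.   public_key = file("mykey.pub")
13. }
14.
15. resource "aws_instance" "test" {
16.   ami           = "ami-0cb0e70f44e1a4bb5"
17.   instance_type = "t2.micro"
18.   key_name      = aws_key_pair.deployer.key_name
19.   subnet_id              = module.vpc.public_subnet_id
20.   vpc_security_group_ids = [module.vpc.security_group_id]
21. }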

Breakdown of configuration

In line 6, we have defined a module with the source parameter, providing the path of the vpc module.

In lines 19-20, we use the public subnet ID and the security group ID from the outputs of our vpc module inside the main.tf file.

Now run "terraform init" and "terraform apply".

You will now see the EC2 server created with the configuration defined in the Terraform resources.

Packer with Terraform

Packer is a tool that builds custom AMIs from templates.

You can create your own custom AMIs with all your software dependencies baked in, which reduces instance boot time compared to launching a stock AMI and installing all the software afterwards.
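For illustration, a minimal Packer template in HCL. This is a sketch, not part of the original post; the base AMI, SSH user, and installed package are assumptions:

source "amazon-ebs" "custom" {
  region        = "ap-south-1"
  source_ami    = "ami-0cb0e70f44e1a4bb5"   # assumed base AMI
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"                  # assumed SSH user
  ami_name      = "my-custom-ami"           # append a timestamp in practice to keep names unique
}

build {
  sources = ["source.amazon-ebs.custom"]

  # bake dependencies into the image so instances boot ready to serve
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }
}

Running packer build on this template produces an AMI whose ID you can then feed into the ami argument of an aws_instance resource.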

Using Conditionals in terraform

You can use conditionals in Terraform to evaluate a condition: if it is true, the first value is returned; otherwise, the second.

The syntax for conditionals is:

CONDITION ? TRUEVALUE : FALSEVALUE

For example, in an access_logs block, we have made the "enabled" argument conditional.

If environment_name equals "production", then access_logs will be enabled.

access_logs {
  bucket  = "my-bucket"
  prefix  = "${var.environment_name}-alb"
  enabled = "${var.environment_name == "production" ? true : false}"
}

Automating the ECS-EC2-Type Deployments with Terraform

Here we are going to create an ECS cluster with the EC2 launch type. This involves the following resource files (click the link to download the code):

  1. ECS-Cluster.tf
  2. ECS-ec2-instance.tf
  3. ECS-task-definition.tf
  4. ECS-services.tf
  5. ECS-ALB.tf
  6. service-discovery.tf
  7. ECS-Route53.tf
Prerequisite: an existing VPC has to be created; replace the VPC IDs and subnet IDs in the code with your own.

After downloading the code, go inside "terraform-ecs/module" and run "terraform init" and "terraform apply".
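As a taste of what ECS-Cluster.tf contains, a minimal hedged sketch (the cluster name is an assumption):

resource "aws_ecs_cluster" "demo" {
  name = "ecs-ec2-demo"
}

# With the EC2 launch type, the container instances must register
# themselves with this cluster, typically via user data such as:
#   echo "ECS_CLUSTER=ecs-ec2-demo" >> /etc/ecs/ecs.config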