How To Set up Your Terraform Project for Multiple Environments

Even in small to medium-sized projects you’ll want an infrastructure setup that can run multiple instances, or environments, of your software / system. This article describes two ways of achieving this with Terraform, discusses the advantages and disadvantages of each approach, and offers sample code to get you started.

First let’s clarify why you would want to have multiple environments.

Reasons can be manifold but some of the most common are:

  • Separation of your development, staging / quality assurance and production environments,
  • setting up a demo environment for client / customer demos,
  • realizing / supporting multitenancy, and so on.

Like the reasons, the benefits are manifold:

  • It enables you to use different setups / configurations per environment;
  • It ensures a strong separation, e.g. your test data doesn’t interfere with your production data or data from customer A doesn’t touch data from customer B;
  • It can enhance security, e.g. customer A is unable to reach data from customer B;
  • It permits you to use different machine sizes / scaling approaches, e.g. a development environment usually is much smaller than a production environment;
  • It facilitates different workflows for different environments, e.g. on your staging environment you can reset data while deploying.

So let’s dive into it!

 

What does Terraform offer?

A Terraform project is centered around the state, and hence the obvious approach is to duplicate states to create multiple environments. This is described in the section Approach 1 - Multiple folders and multiple backends and relies on Terraform modules to avoid code duplication when defining the infrastructure. You can think of it as a 1:1 relationship between backend and state.

The second solution is to use named workspaces. With workspaces Terraform offers tooling to easily switch between multiple instances of a single configuration within a single backend. This is further explained in Approach 2 - Terraform Workspaces. This ultimately is a 1:N relationship between backend and state, i.e. one backend is associated with multiple states.

 

Creating the backend(s)

Software development teams consist of multiple team members working together, so we will need a remote state for our Terraform project (in contrast to a local state, where the state file is located on the developer’s machine). Terraform supports Amazon S3, Azure Blob Storage, Google Cloud Storage, Alibaba Cloud OSS, and more for storing the terraform.tfstate file.

The following example sets up the necessary infrastructure for AWS. Note that we’re not yet in an Infrastructure as Code mode, so we’ll need to do this manually. Depending on the approach you’ll need one or multiple backends; more on this later.

#!/usr/bin/env bash

aws_region=<your-aws-region>
aws_profile=<your-aws-profile>
project_name=<your-project-name>

tfstate_name=tf-state-${project_name}
tfstate_s3_bucket=${tfstate_name}
tfstate_dynamodb_table=${tfstate_name}
aws s3 mb s3://${tfstate_s3_bucket} --region ${aws_region} --profile ${aws_profile}
aws s3api put-bucket-versioning \
	--region ${aws_region} \
	--profile ${aws_profile} \
	--bucket ${tfstate_s3_bucket} \
	--versioning-configuration "Status=Enabled"
aws s3api put-public-access-block \
	--region ${aws_region} \
	--profile ${aws_profile} \
	--bucket ${tfstate_s3_bucket} \
	--public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
aws dynamodb create-table \
	--region ${aws_region} \
	--profile ${aws_profile} \
	--table-name ${tfstate_dynamodb_table} \
	--attribute-definitions AttributeName=LockID,AttributeType=S \
	--key-schema AttributeName=LockID,KeyType=HASH \
	--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

This will…

  • create an Amazon S3 bucket (aws s3 mb)
  • enable versioning and block public access on the bucket (aws s3api put-bucket-versioning and aws s3api put-public-access-block)
  • create an Amazon DynamoDB table for locking (aws dynamodb create-table)

 

To use the newly created bucket and table in your Terraform project you’ll need to specify them using the backend "s3" block like this (followed by a terraform init to initialize the backend):

terraform {
  backend "s3" {
    bucket         = "tf-state-your-project-name"
    key            = "terraform.tfstate"
    region         = "your-aws-region"
    dynamodb_table = "tf-state-your-project-name"
    profile        = "your-aws-profile"
  }
}

 

Approach 1 - Multiple folders and multiple backends

Let’s look at the folder structure of a Terraform project using multiple folders and multiple backends.

├── env
│   ├── dev
│   │   ├── main.tf
│   │   └── outputs.tf
│   └── prod
│       ├── main.tf
│       └── outputs.tf
└── modules

Each environment becomes a subfolder; in this case we have two environments, dev and prod. Each main.tf can hold custom configuration options and backend configurations. The main Terraform project is kept in the form of a module and submodules in the modules folder, which sits alongside the env folder. Note that we need as many backends as we have environments, i.e. in this example we would create two S3 buckets and two DynamoDB tables, called tf-state-dev-your-project-name and tf-state-prod-your-project-name respectively.
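Since the backend bootstrap from the section Creating the backend(s) is a manual step, it has to be repeated once per environment. A small sketch of deriving the per-environment names (the helper function tfstate_name_for is illustrative):

```shell
#!/usr/bin/env bash

project_name=your-project-name

# Compute the backend name for a given environment, following the
# tf-state-<env>-<project> convention used in this article.
tfstate_name_for() {
	echo "tf-state-${1}-${project_name}"
}

for env in dev prod; do
	name="$(tfstate_name_for "${env}")"
	echo "${name}"
	# Here you would repeat the bucket and table creation from the
	# previous section for each name, e.g.:
	#   aws s3 mb "s3://${name}" --region "${aws_region}" --profile "${aws_profile}"
	#   aws dynamodb create-table --table-name "${name}" ...
done
```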

 

provider "aws" {
  region  = "your-aws-region"
  profile = "your-aws-profile"
}

terraform {
  backend "s3" {
    bucket         = "tf-state-env-your-project-name"
    key            = "terraform.tfstate"
    region         = "your-aws-region"
    dynamodb_table = "tf-state-env-your-project-name"
    profile        = "your-aws-profile"
  }
}

module "your_tf_project_module" {
  source = "../../modules/your_tf_project_module"
  [...]
}

Above you see a lightweight main.tf suitable for this approach: the root module consists only of a backend configuration and your main module declaration. Customize your environments by declaring variables in your main module and setting them here (e.g. different names, numbers of machines, machine sizes, domain names, and so on). If you copy and paste this code, don’t forget to replace the env placeholder in the bucket and lock table definitions.
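As an illustration, an env/dev/main.tf module declaration could pass environment-specific values like this (the variable names environment, instance_count and instance_type are hypothetical examples, not part of the article’s module):

```terraform
# env/dev/main.tf (sketch): pass environment-specific values into the
# shared module; the values differ per environment folder.
module "your_tf_project_module" {
  source = "../../modules/your_tf_project_module"

  environment    = "dev"
  instance_count = 1          # prod might use a larger number
  instance_type  = "t3.micro" # prod might use a larger size
}
```

The prod folder would contain the same declaration with its own values, which is exactly the redundancy this approach trades for strong separation.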

To deploy an environment, change into the directory of the desired environment and issue the familiar Terraform commands:

$ cd env/dev/
$ terraform validate
$ terraform fmt -check
$ terraform apply

To branch on the selected environment in your module, define a variable called environment in your module. Then, if you want to customize a name depending on the environment, for example, use name = "Test VM - ${var.environment}".
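The corresponding declaration in the module could look like this (the file path is an assumption based on the folder layout above):

```terraform
# modules/your_tf_project_module/variables.tf (assumed path)
variable "environment" {
  description = "Name of the environment this deployment belongs to (e.g. dev, prod)"
  type        = string
}

# Then reference it wherever the environment should show up, e.g.:
#   name = "Test VM - ${var.environment}"
```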

 

Approach 2 - Terraform Workspaces

Now let’s turn to using Terraform Workspaces. With workspaces we have a 1:N relationship between backends and states, and hence we only need to create one backend. Workspaces isolate the state: when you switch workspaces, resources you previously created in a different workspace are not visible, but they still exist (in the workspace they were created in).

Here’s what a Terraform project using workspaces would look like:

├── dev.tfvars
├── main.tf
├── prod.tfvars
├── variables.tf
└── modules

Custom configuration for the environments lives in the env.tfvars files; each environment has a separate .tfvars file, here dev.tfvars and prod.tfvars. To initialize a workspace use terraform workspace new env, in our example terraform workspace new dev and terraform workspace new prod. The main.tf remains largely unchanged compared to before. Remember that we have only one backend in this case, so we reference it in the backend definition in main.tf. Note that not all Terraform backends support multiple workspaces; see the Terraform documentation for a list. It’s also worth mentioning that even if you don’t use named workspaces, Terraform still uses a workspace called default.

To deploy, you need to select the correct workspace, either by running terraform workspace select env or by setting the environment variable TF_WORKSPACE. In addition, when running apply we need to specify where to find the variables for the environment using -var-file=env.tfvars. An example deployment for dev would thus look like this:

$ export TF_WORKSPACE=dev
$ terraform validate
$ terraform fmt -check
$ terraform apply -var-file=dev.tfvars

Again, don’t forget to select the workspace. If no workspace is selected, Terraform will use the default workspace. Use terraform workspace list to list all workspaces and verify that the correct workspace is selected (the current workspace is marked with an asterisk).
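To reduce the risk of deploying to the wrong workspace, the selection and apply steps can be wrapped in a small helper. A minimal sketch, assuming the env.tfvars naming convention from this article (the function name deploy_env is illustrative):

```shell
#!/usr/bin/env bash

# Select (or create) the workspace for the given environment and apply
# its variable file. Refuses to run if the .tfvars file is missing, so
# a typo in the environment name fails early instead of deploying.
deploy_env() {
	local env="$1"
	local var_file="${env}.tfvars"
	if [ ! -f "${var_file}" ]; then
		echo "Error: ${var_file} not found" >&2
		return 1
	fi
	terraform workspace select "${env}" || terraform workspace new "${env}"
	terraform apply -var-file="${var_file}"
}
```

Usage would then be e.g. `deploy_env dev` from the project root.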

To branch on the selected workspace in your module, use the ${terraform.workspace} interpolation sequence, e.g. if you want to customize a name depending on the workspace, use name = "Test VM - ${terraform.workspace}".
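As a sketch of how this interpolation can drive per-workspace configuration, the following looks up an instance size keyed by workspace name (the instance types and the var.ami_id variable are illustrative assumptions, not values from this article):

```terraform
# Pick an instance size per workspace via a lookup map.
locals {
  instance_type = {
    dev  = "t3.micro"
    prod = "t3.large"
  }
}

resource "aws_instance" "vm" {
  ami           = var.ami_id # assumed variable
  instance_type = local.instance_type[terraform.workspace]

  tags = {
    Name = "Test VM - ${terraform.workspace}"
  }
}
```

A lookup like this fails fast if you apply from a workspace that has no entry in the map, which is a useful guard against stray workspaces.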

 

Conclusion

Let’s wrap it up by looking at the pros and cons of each approach:

Approach: Multiple folders and multiple backends

Pro:

  • Very strong separation
  • Different backend configurations possible (different credentials and access controls)
  • Multiple backends allow for parallel deployments (no locking)

Con:

  • Redundant main.tf and outputs.tf declarations

Approach: Terraform Workspaces

Pro:

  • Slightly easier to add new environments
  • Perfect for feature branches to quickly test new IaC

Con:

  • Only weak separation
  • One state, so locking problems might happen
  • Easy to deploy to the wrong workspace
  • Need for workspace initialization can lead to problems in CI/CD
  • Different backend configurations not possible (same credentials and access controls)
  • Not suited for larger projects, as each environment should have its own set of workspaces to allow for feature branching, for example

As you might have noticed, you don’t necessarily have to choose between approaches 1 and 2; they also work in tandem. Use multiple folders / backends for a strong separation of your main environments, e.g. development, staging and production, and then use workspaces for feature branches. In essence, if you’re looking for something more lightweight in a smaller one- to two-person project, consider workspaces; otherwise invest in setting up multiple folders / backends, and you will thank yourself later.