Provisioning cloud platform resources is never as simple as AWS and other cloud service providers claim. In hybrid or multi-cloud models in particular, creating cloud resources consistently is not always straightforward. Even when only a single cloud provider is being used, a business's demands for cloud infrastructure are rarely static: new operational requirements may call for frequent and spontaneous changes in cloud architecture. One of our favorite tools for overcoming these challenges is HashiCorp's Terraform.
What is Terraform?
Terraform provides a common workflow to automate provisioning by establishing a uniform language, state management model and life cycle for software-defined infrastructure across single- and multi-cloud environments. To keep things simple, think of Terraform as a provisioning template in which you define the infrastructure parameters and resources required to service a business application. Terraform comes in an open-source flavor and two paid flavors called Enterprise Pro and Enterprise Premium.
Terraform helps DevOps teams achieve infrastructure agility: once orchestration settings have been written into templates, these can be used to quickly re-provision environments whenever they need to be destroyed and rebuilt. When new infrastructure has to be spun up, everything is provisioned automatically in the way the templates dictate, bringing new environments to the state required by the application or service within a matter of minutes. A similar process applies to spinning up new environments with previously unutilised cloud providers when multi-cloud setups are pursued. Because Terraform has a platform-neutral syntax (HCL or JSON), it can be used on almost any cloud platform, and only minor changes to the templates are needed when spinning up environments in a different one.
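As a minimal sketch of what such a template looks like (the provider, region, AMI ID and names below are placeholders, not from any particular environment), a Terraform configuration declares the desired resources rather than scripting their creation:

```hcl
# Illustrative example only -- region, AMI and names are placeholders.
provider "aws" {
  region = "eu-west-1"
}

# A single compute instance, declared as desired state:
resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "app-server"
  }
}
```

Running terraform plan previews what would change, and terraform apply provisions the environment to match the template.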
Finally, Terraform can also be used as a rapid prototyping tool to model what new environments could or should look like, and it can be used in continuous integration and continuous deployment (CI/CD) pipelines as a form of validation, quickly spinning up and testing new environments. Using Terraform as part of your application validation process can help build confidence in your procedures, tools and test suites, as well as in your applications themselves.
Basic Guidelines and Best Practice
While using Terraform is fairly simple once you've picked up the basics, there are some key things we would recommend as best practice when you are building infrastructures that matter. First off, before you start madly scripting away, use the module registry provided by Terraform itself to find the bits of code that you need. The registry offers over 100 reusable modules, provided either by the tool's developers or by other users. Because Terraform is (at least in part) open source, there is a large community of users who have most probably already written, in one form or another, the code that you are looking for. Work smart and avoid doing unnecessary, redundant work.
You can find the Terraform module registry here: https://registry.terraform.io/
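For example, pulling in a community module from the registry is a one-block affair. This sketch uses the publicly available terraform-aws-modules VPC module; the name, version pin and inputs shown are illustrative placeholders:

```hcl
# Reuse a community VPC module from the public registry
# instead of writing the networking resources by hand.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.0.0"  # pin a module version -- placeholder value

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}
```

Pinning the module version keeps your builds reproducible even as the upstream module evolves.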
If you can't find what you are looking for in the registry, or if Terraform, even with its many plugins and integration possibilities, doesn't provide the resources you need, use the terraform import command. Import allows you to take resources that were originally created using a different orchestration or provisioning tool and integrate them into your existing Terraform build. Unfortunately, the command can currently only import one resource at a time. This means you can't point terraform import at an entire collection of resources, such as an AWS VPC, and import all of it in one go.
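Importing is a two-step process: declare a resource block for the existing resource, then map the real-world object onto that address. A sketch, where the resource name and instance ID are placeholders:

```hcl
# 1. Declare a block for the resource that already exists.
#    Its arguments must be filled in by hand to match the real
#    resource, guided by what `terraform plan` reports afterwards.
resource "aws_instance" "legacy_server" {
  ami           = "ami-0123456789abcdef0"  # placeholder, match reality
  instance_type = "t3.micro"               # placeholder, match reality
}

# 2. On the command line, import the live resource into state:
#    terraform import aws_instance.legacy_server i-0abc123def45678
```

After the import, terraform plan should show no pending changes; if it does, the resource block doesn't yet match the real resource.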
Second, due to Terraform's innately modular provisioning setup, the tool can be used to isolate dependencies within cloud infrastructures. In short, cloud practitioners can write individual configuration assets as modules and then layer these into complex infrastructures in the deployment phase. This kind of internal structure not only lets you build a catalog of combinable, essentially cloud-ready configuration resources that can be layered according to the demands of the desired infrastructure, but it also keeps things nice, clean and separated. We recommend keeping one folder containing all your global modules (a "modules" folder), plus one folder per environment, each containing a main.tf file (an "environment" folder). The environment folder is the one you copy and adjust (if necessary) to provision slightly different environments. For more information on this, feel free to read one of our previous blogs discussing the topic.
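A sketch of that layout, with illustrative directory and module names, looks like this; each environment's main.tf references the shared modules by relative path:

```hcl
# Illustrative layout:
#   modules/
#     network/
#       main.tf
#   environments/
#     staging/
#       main.tf
#     production/
#       main.tf
#
# environments/staging/main.tf pulls in the shared module:
module "network" {
  source = "../../modules/network"

  # Environment-specific inputs -- variable names are placeholders.
  environment = "staging"
  cidr_block  = "10.10.0.0/16"
}
```

Copying environments/staging to create a new environment then only requires changing these input values, not the module code itself.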
However, avoid copying Terraform .tfstate files! Your .tfstate file is intended to serve as a reference point and single source of truth should something occur during resource provisioning. Copying your .tfstate file manually and processing it in multiple different places can have very unpredictable results.
In Terraform's open-source version you have the option of storing your Terraform state in remote storage such as HashiCorp Consul or AWS S3 (you can also use your own on-prem server for this). And it's not just your .tfstate file: your other Terraform assets should also be stored somewhere for safe-keeping, ideally in a version control repo such as GitHub. While this is most important for CI/CD pipelines and production deployments, it is an inexpensive way of additionally safeguarding your work, so we highly recommend persisting your .tfstate file to a backend.
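Configuring a remote backend is a small block in your configuration. A sketch using S3, where the bucket name, key and region are placeholders:

```hcl
# Store state remotely in S3 instead of on local disk, so every
# run works from the same single source of truth.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"  # placeholder bucket name
    key    = "environments/staging/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

Running terraform init after adding this block initializes the backend and can migrate existing local state to it.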
However, please be extremely careful not to store your business secrets (passwords or access keys) in a public-facing VCS such as GitHub or Confluence, because there is a risk of your credentials being leaked or corrupted. Instead, use a .tfvars variable definitions file, kept out of version control, as the single place where such values are supplied.
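As a sketch of that pattern (variable and file names are placeholders), the configuration declares the variable without a value, and the secret lives only in a git-ignored .tfvars file:

```hcl
# variables.tf -- declare the secret, but give it no value here.
variable "db_password" {
  type        = string
  description = "Database admin password"
}

# secrets.tfvars -- git-ignored, supplied at run time with:
#   terraform apply -var-file="secrets.tfvars"
#
# db_password = "example-placeholder-value"
```

Adding secrets.tfvars to your .gitignore ensures the checked-in configuration never contains the credential itself.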
Next, we recommend configuring your compute instances with a configuration management tool or configuration automation agent (CCA) such as Chef, Puppet or Ansible, rather than with the built-in Terraform provisioners. For basic ad hoc configs or simple prototyping, Terraform provisioners do the job, but for more in-depth, sophisticated configurations CCAs are much better, because they support profile reuse (for when you want to configure the same instances over and over) and because they grant you much finer control.
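For the prototyping case, a built-in provisioner looks like the following sketch; the AMI, SSH user, key path and install commands are all placeholders:

```hcl
# Quick-prototyping only: inline configuration via a built-in
# provisioner. For anything long-lived, hand off to Chef, Puppet
# or Ansible instead. All values below are placeholders.
resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"                 # placeholder SSH user
      host        = self.public_ip
      private_key = file("~/.ssh/id_rsa")    # placeholder key path
    }
  }
}
```

Note that provisioners only run when the resource is created, which is exactly why a CCA is the better fit for instances that need ongoing reconfiguration.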
Finally, the Enterprise version of Terraform augments the open-source version with options for automating governance and collaboration policies. While the provisioning workflows are the same in the open-source and Enterprise versions, the paid version provides control and access rights over Terraform workspaces, and centralizes execution and .tfstate files. You don't HAVE to use Terraform Enterprise for governance and collaboration policies, because these can also be coded manually in the open-source version, but Enterprise does well in large organizations looking to ensure identically provisioned resources across multiple, highly similar environments.