
Terraform: Powerful Tool to Make It Easy to Manage your Infrastructure

ter·ra·form (verb) (literally, “earth-shaping”), from Latin terra “earth” + forma “shape”; used especially in science fiction for a hypothetical planetary engineering process intended to make a planet habitable for human life.

Build, evolve, and manage your infrastructure with confidence. HashiCorp Terraform provides a common interface to infrastructure management — from servers and networks to email and DNS providers. Terraform is a deployment toolkit that provisions to multiple cloud vendors via a common interface and evolves with you over time, all under programmatic control.

Terraform allows you to stitch together a wide range of cloud providers via high-level interfaces in one common paradigm. For instance, you can launch a VM on one cloud provider, then associate its IP with a DNS record on another provider. Built-in dependency resolution ensures that related resources are operated on in the proper sequence. As the requirements on your infrastructure change, modifications are made to the Terraform configuration and realized for you by the toolchain. This gives you confidence that your infrastructure is orchestrated centrally.
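
As a hedged sketch of that idea, the following snippet wires a server’s address into a DNS record managed by a different provider. The fxtc_server.web resource is the one defined in the first example below; exampledns_record is a hypothetical stand-in for any of the supported DNS providers, so its attribute names are illustrative only.

# Hypothetical sketch: "exampledns" stands in for any supported DNS provider.
# The record's value is a computed attribute of a VM managed elsewhere, so
# Terraform knows to create the server before the record.
resource "exampledns_record" "web" {
  domain = "example.com"
  name   = "web"
  type   = "A"
  value  = "${fxtc_server.web.public_ip_address}"
}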

As an analogy, we’ll use a sample Terraform project. We’ll launch a terraforming payload onto an empty planet, sculpting the environment to sustain life for your team’s applications.

First, an Empty Planet

Typically, provisioning to any given cloud provider entails writing code against proprietary REST-based APIs, often via custom scripts. Those scripts might use vendor-distributed SDKs in the language of your choice or a cloud provisioning library like fog or jclouds, or they might integrate with the provisioning plugins available for configuration engines like Chef, Puppet, or Ansible. Terraform proves to be an outstanding way of managing an evolving deployment: orchestrating inter-related graphs of cloud resources and handling drift and mutation. Let’s get started building our planet.

  1. Design

Terraform works by translating a declarative description of the infrastructure deployment and realizing it through a process of state comparison (what does and does not yet exist in the target infrastructure) and graph traversal (which related resources will be affected by the requested changes).

The following Terraform configuration declares a CenturyLink Cloud server group named frontends and provisions a single virtual machine into it:

resource "fxtc_group" "frontends" {
  location_id      = "SFO1"
  name             = "frontends"
  parent           = "Default Group"
}
resource "fxtc_server" "web" {
  name_template    = "web"
  source_server_id = "UBUNTU-16-64-TEMPLATE"
  group_id         = "${fxtc_group.frontends.id}" 
  cpu              = 2
  memory_mb        = 4096
  count            = 1
}
output "ip" {
  value           = "${fxtc_server.web.private_ip_address}"
}

You can pull the latest binaries for Terraform off the downloads page to get rolling with this example.
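
For example, an illustrative setup on a Linux workstation might look like the following; adjust the archive name to the release you downloaded, and note that current Terraform versions also expect a terraform init in the configuration directory to install provider plugins.

$ unzip terraform_*_linux_amd64.zip -d $HOME/bin
$ export PATH="$PATH:$HOME/bin"
$ terraform version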

Invoking terraform plan will cause Terraform to do the following (a saved-plan example follows the list):

  • Refresh the relevant remote state from the provider.
  • Analyze the configuration and compare it to the refreshed state.
  • Compute the set of steps required to transition into the desired state.
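
The plan can also be written to a file so that a later apply executes exactly the steps that were reviewed (an illustrative run; the plan file name is arbitrary):

$ terraform plan -out=frontends.tfplan
$ terraform apply frontends.tfplan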

Next, terraform apply will analyze the configuration, detect that no such server group exists, and proceed to create it. Only then does it create the child VM under the group. Similarly, if the group already exists but the VM doesn’t, Terraform skips the creation of the group and infers that it only needs to create the VM. Finally, once the group and server are created, their identifiers and attributes are recorded in a local state file (terraform.tfstate by default), keyed by resource so that subsequent runs can operate on the same resources.
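
Once the apply completes, the recorded state can be inspected from the command line: terraform show prints the stored attributes, and terraform state list (available in more recent releases) enumerates the tracked resource addresses, which for the configuration above would be the group and the server.

$ terraform show
$ terraform state list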

As you may have guessed, modifying the count of servers will either create or destroy VMs in order to converge into the desired state.
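
For example, bumping count on the server resource from the first configuration converges the group to three VMs on the next apply (and lowering it again would destroy the extras):

resource "fxtc_server" "web" {
  name_template    = "web"
  source_server_id = "UBUNTU-16-64-TEMPLATE"
  group_id         = "${fxtc_group.frontends.id}"
  cpu              = 2
  memory_mb        = 4096
  count            = 3   # was 1; apply creates the two missing VMs
}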

  2. Load

Launching a VM is generally pretty straightforward with any tooling. Things get more interesting when we scale the number of resources up or down and add dependent resources. Let’s look at a slightly more complicated version of the example above which, after provisioning the VMs, will also:

  • Add a public IP.
  • Specify exposed ports.
  • Install a file onto the box.

# Inputs supplied at plan/apply time, on the command line, or in a
# terraform.tfvars file
variable "n" {}
variable "cpu" {}
variable "mem" {}
variable "password" {}
variable "dc" {}

resource "fxtc_group" "public" {
  location_id      = "${var.dc}"
  name             = "public"
  parent           = "Default Group"
}

resource "fxtc_server" "srv" {
  group_id         = "${fxtc_group.public.id}"
  name_template    = "FXTC12"
  source_server_id = "UBUNTU-16-64-TEMPLATE"
  cpu              = "${var.cpu}"
  memory_mb        = "${var.mem}"
  password         = "${var.password}"
  count            = "${var.n}"
}

resource "fxtc_public_ip" "pub" {
  count               = "${var.n}"
  server_id           = "${element(fxtc_server.srv.*.id, count.index)}"
  internal_ip_address = "${element(fxtc_server.srv.*.private_ip_address, count.index)}"
  ports
    {
      protocol        = "TCP"
      port            = 22
    }
  provisioner "file" {
    source            = "~/.ssh/id_rsa.pub"
    destination       = "/root/.ssh/authorized_keys"
    connection {
      host            = "${self.id}"
      password        = "${element(fxtc_server.srv.*.password, count.index)}"
    }
  }
}

output "public_ips" {
  value = "${join(" ", fxtc_public_ip.pub.*.id)}"
}

Similar to the first example, a group is created. But this time a variable number (N) of VMs are placed into it. In tandem, every server created receives the following:

  • A public IP attached on the private NIC.
  • Port 22 opened.
  • A provisioner run after creation.

The provisioner transfers a local file to each remote box via its generated public IP. As N increases, new VMs are created and additional IPs are allocated according to the same specification. Similarly, if N is decreased, Terraform de-allocates both the VMs and their associated public IPs. Adding or removing ports causes all affected IPs to be mutated into the desired state. Doing all of this the old way meant managing that state by hand, orchestrating the changes through sequences of script invocations, which is a tedious and time-consuming process.
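
Rather than answering the interactive prompts shown in the next section, the variable values can also be supplied in a terraform.tfvars file, which Terraform loads automatically. A sketch matching the answers given below (the password is a placeholder):

# terraform.tfvars
n        = 1
cpu      = 2
mem      = 4096
dc       = "SFO1"
password = "xxxxxxxxx"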

  3. Launch

Fire off the above configuration and, with the next command, watch it mold the infrastructure of your planet into an ecology of resources arranged just as you designed.

$ terraform apply
var.cpu
  Enter a value: 2
var.dc
  Enter a value: SFO1
var.mem
  Enter a value: 4096
var.n
  Enter a value: 1
var.password
  Enter a value: xxxxxxxxx

fxtc_group.public: Creating...
  location_id:     "" => "SFO1"
  name:            "" => "public"
  parent:          "" => "Default Group"
  parent_group_id: "" => "<computed>"
fxtc_group.public: Creation complete

fxtc_server.srv: Creating...
  cpu:                "" => "2"
  created_date:       "" => "<computed>"
  group_id:           "" => "df00230b8329219d53dced8f0a4f"
  memory_mb:          "" => "4096"
  modified_date:      "" => "<computed>"
  name:               "" => "<computed>"
  name_template:      "" => "FXTC12"
  power_state:        "" => "<computed>"
  private_ip_address: "" => "<computed>"
  public_ip_address:  "" => "<computed>"
  source_server_id:   "" => "UBUNTU-16-64-TEMPLATE"
  storage_type:       "" => "standard"
  type:               "" => "standard"

fxtc_server.srv: Still creating... (10s elapsed)
...
fxtc_server.srv: Creation complete

fxtc_public_ip.pub: Creating...
  internal_ip_address: "" => "10.137.12.42"
  ports.#:             "" => "1"
  ports.0.#:           "" => "2"
  ports.0.port:        "" => "22"
  ports.0.protocol:    "" => "TCP"
  server_id:           "" => "SFO12ABFXTC12"

fxtc_public_ip.pub: Still creating... (10s elapsed)
...
fxtc_public_ip.pub: Provisioning with 'file'...
fxtc_public_ip.pub: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path below. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state, use the `terraform show` command.

State path: terraform.tfstate
Outputs:
public_ips = 59.64.47.12

  4. Re-Design

Reverting the planet back to its pristine state is accomplished by using one command: terraform destroy. This isn’t likely to happen on your production deployments, but it’s certainly helpful for disposable environments. All generated resources are tidied up and your planet is restored to its original state.
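
A minimal illustration: the -destroy flag to plan previews the teardown, and terraform destroy asks for confirmation before removing anything.

$ terraform plan -destroy
$ terraform destroy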

In the event that one of your systems appears to have drifted, or is otherwise in an undesirable state, terraform taint can be very helpful. It marks the resource as tainted so that it is destroyed and re-created on the next apply, and any resources that depend on it are pulled into the state transition as well.
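
For instance, tainting the group from the example above forces it, and anything that must be rebuilt along with it, to be re-created on the next run:

$ terraform taint fxtc_group.public
$ terraform apply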

Under ordinary circumstances you’ll be modifying your server resources, adding or removing capacity, scaling compute resources, or setting up new groups of servers as determined by your applications. As you make modifications to your Terraform configuration, terraform apply will re-analyze and formulate the convergence plan to realize the changes.

For instance, exposing an additional port would involve modifying the source config to add the additional port and then applying the changes:

  ports {
    protocol = "TCP"
    port     = 22
  }
  ports {
    protocol = "TCP"
    port     = 25
  }

On the next run, Terraform detects that the resource can be modified in place and simply updates the firewall rules to expose the additional port, as the following plan demonstrates:

$ terraform plan

Refreshing Terraform state prior to plan...
fxtc_group.public: Refreshing state... (ID: df00230b8329219d53dced8f0a4f5a)
fxtc_server.srv: Refreshing state... (ID: SFO12ABFXTC12)
fxtc_public_ip.pub: Refreshing state... (ID: 59.64.47.12)

The Terraform execution plan has been generated and is shown below. Resources are listed in alphabetical order for quick scanning.

  • Green resources will be created (or destroyed and then created if an existing resource exists).
  • Yellow resources are changed in place.
  • Red resources will be destroyed.
  • Cyan entries are data sources to be read.

Note: If you didn’t specify an “-out” parameter to save this plan, when “apply” is called, Terraform can’t guarantee this is what will execute.

~ fxtc_public_ip.pub
    ports.#:          "1" => "2"
    ports.1.#:        "0" => "2"
    ports.1.port:     "" => "25"
    ports.1.protocol: "" => "TCP"
Plan: 0 to add, 1 to change, 0 to destroy.
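
Applying that plan updates the existing public IP in place rather than replacing it, and the run ends with a summary along these lines (abridged, illustrative output):

$ terraform apply
fxtc_public_ip.pub: Modifying...
  ports.#: "1" => "2"
fxtc_public_ip.pub: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.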

  5. Explore

One of the strengths of the Terraform toolkit is the wide range of providers included. You might stitch together a configuration that spans OpenStack, AWS, or GCE, and wire the public IPs from each into a global DNS pool configured through one of the supported DNS providers. You might even extend your build-out to include SSL certificate generation and registration via the list of available providers. Note the following from the Terraform docs on Multi-Cloud Deployment:

Realizing multi-cloud deployments can be very challenging as many existing tools for infrastructure management are cloud-specific. Terraform is cloud-agnostic and allows a single configuration to be used to manage multiple providers, and to even handle cross-cloud dependencies. This simplifies management and orchestration, helping operators build large-scale multi-cloud infrastructures.
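
As a minimal sketch of that idea, a single configuration can declare several providers side by side; the provider names below are real, the arguments are the usual minimum, and credentials are assumed to come from the environment. Resources declared against either provider then participate in the same dependency graph, so a value computed on one cloud can flow straight into a resource on the other.

# Two independent providers in one configuration
provider "aws" {
  region = "us-west-2"
}

provider "google" {
  project = "my-project"    # illustrative project id
  region  = "us-central1"
}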
