Author: Marc LeBlanc


Code reusability is a much-sought-after goal of organizations and developers alike. The idea conjures visions of high efficiency: fewer bugs, a lighter operational support load, and, most importantly, reduced time to market. Yet what I see over and over again is locally cloned repositories, locally cloned Terraform modules, and hardcoded variable values within repos. All of these are the enemy of efficiency and standardization. This blog is a quick look at how you can leverage Terraform Enterprise to set variables at the workspace level, allowing an organization to create a many-to-one relationship between Terraform Enterprise workspaces and GitHub repositories.

The Setup

An organization - call it LeBlancHQ.com, named after a considerably handsome DevOps consultant and blog author - has Operations and Development teams that have agreed on a standard Google Kubernetes Engine (GKE) offering. This offering has been published as a GitHub repo, which the organization has made available through Terraform Enterprise (TFE) via a GitHub connection. App team 1 - Demo Beer - and app team 2 - Demo Wings - both want ownership of a GKE cluster (whether they actually need sole tenancy, or whether this should be a multi-tenant scenario, is irrelevant for the purposes of this blog). The organization has made it clear that there should be no deviation from this standard offering, and any request for a deviation will not be approved.

TFE Setup

Terraform Enterprise has the concept of workspaces: each workspace is associated with a repo. The traditional approach might have been to clone the standardized repo, customize it accordingly, associate it with a workspace, and go from there. But this is 2020; we do things much more efficiently now. In this scenario, both the Beer team and the Wings team get their own workspace in TFE, but each workspace is associated with the same GitHub repo. Each workspace then has specific variables set to ensure the respective teams get their own GKE clusters in their own GCP projects.

Let’s take a look.

TFE Workspaces

First, notice a tale of 2 workspaces: both associated with the same repo, both in different states. We can see that the Wings workspace has a run status, which effectively means a terraform apply has occurred, whereas the Beer workspace has not.

Let’s drill down a bit further and look at one of these workspaces.

TFE Workspace variables

Granted, this scenario is relatively simplistic and created for the purpose of this post, but we can see that the Beer workspace, although based on the exact same GitHub repo as Wings, is easily customized through the use of Terraform variables. When a run against this workspace is triggered - either manually, or via a commit to the branch watched by the workspace - these variables are applied, creating a unique GKE environment from code in a single repo with no impact on the Wings environment.
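Workspace variables can be clicked in through the TFE UI, but they can also be managed as code. Here is a minimal sketch using the hashicorp/tfe provider's tfe_variable resource; the workspace name, organization, and values are hypothetical stand-ins mirroring the Beer workspace in this example:

```hcl
# Sketch: managing TFE workspace variables as code.
# Assumes a TFE API token is available via the TFE_TOKEN environment variable.
provider "tfe" {}

# Look up the (hypothetical) Beer workspace by name.
data "tfe_workspace" "beer" {
  name         = "demo-beer"
  organization = "LeBlancHQ"
}

# The two Terraform variables that make this cluster unique.
resource "tfe_variable" "project_name" {
  key          = "project_name"
  value        = "arc-demo-beer"
  category     = "terraform"
  workspace_id = data.tfe_workspace.beer.id
}

resource "tfe_variable" "cluster_name" {
  key          = "cluster_name"
  value        = "beer-gke-cluster"
  category     = "terraform"
  workspace_id = data.tfe_workspace.beer.id
}
```

Managing the variables this way means even the per-team customization lives in version control rather than in someone's browser session.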

The Code

Ok, fair, let’s take a quick look at the code in this repo. A lot of it is standard GKE stuff, so let’s focus on the important bits. In this case we are not doing a whole lot of customization, so let’s look at the 2 keys that make these clusters unique: var.project_name and var.cluster_name. There are other variables, but for now we don’t care to customize them, and we accept the organization-recommended defaults set in the variable declarations.

resource "google_container_cluster" "kubernetes_cluster" {
  name     = var.cluster_name
  project  = var.project_name
  location = var.location

  initial_node_count = var.initial_node_count
  network            = var.network

  node_config {
    machine_type = var.machine_type

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
  depends_on = [google_compute_network.vpc_network]
}
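For context, the variable declarations backing this resource could look something like the sketch below. Only project_name and cluster_name are left without defaults, forcing each workspace to set them; the defaults shown for the rest are illustrative values consistent with the cluster listings later in this post, not necessarily the repo's actual declarations:

```hcl
# Required per workspace: these make each team's cluster unique.
variable "project_name" {
  description = "GCP project that will own the cluster (set per TFE workspace)"
  type        = string
}

variable "cluster_name" {
  description = "Name of the GKE cluster (set per TFE workspace)"
  type        = string
}

# Organization-recommended defaults; teams accept these as-is.
variable "location" {
  type    = string
  default = "us-central1-a"
}

variable "initial_node_count" {
  type    = number
  default = 1
}

variable "machine_type" {
  type    = string
  default = "n1-standard-1"
}

variable "network" {
  type    = string
  default = "gke-network" # hypothetical; matches the vpc_network resource in the repo
}
```

Because the required variables have no default, a run against a workspace that forgot to set them fails fast at plan time instead of deploying into the wrong project.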

Seems Simple

As I mentioned, this was cooked up with purpose for this blog post, so yes, it is simple. But what I want you to take away from this is important. Terraform variables are critical to success in your journey with IaC. Hardcoding values in your code, even within a .tfvars file in a repo, will quickly leave you with lots of customized little snowflakes - and a million snowflakes is still a heavy bag to lug around. Standardize your offerings, publish them as modules, build them as workspaces, and customize them with workspace variables. Creating a one-to-many relationship between your code repositories and your deployments will quickly send you down a path of efficient standardization, allowing you to move faster.

When we confirm a Terraform plan against these workspaces, we get 2 dedicated GKE clusters in 2 unique GCP projects. Things to notice: the project names and cluster names are unique, while everything else that could differ is identical, because we did not customize it.

$ gcloud container clusters list --project arc-demo-wings
NAME              LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
wings-gke-cluster  us-central1-a  1.14.10-gke.27  34.66.124.80  n1-standard-1  1.14.10-gke.27  1          RUNNING

$ gcloud container clusters list --project arc-demo-beer
NAME              LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
beer-gke-cluster  us-central1-a  1.14.10-gke.27  35.56.114.56  n1-standard-1  1.14.10-gke.27  1          RUNNING

Bonus

Did you notice in the screenshot of the Beer workspace variables that there were Environment variables as well? Let’s talk about that for a minute. This probably deserves a longer, dedicated blog post, but here’s a sneak peek. Traditionally, when we automate infrastructure deployment with Terraform in GCP, we rely on a Service Account key in JSON format. For this project, instead of the traditional approach, we are making use of HashiCorp Vault’s dynamic secrets for GCP. The 2 environment variables set on the workspace are 1) the Vault address and 2) the Vault token. These environment variables are consumed by the Terraform Vault provider to request a Service Account key, which the Google provider then uses to provision the GKE cluster.

Here’s what that looks like in code. There’s some magic that needs to be revealed in a future blog, but you can probably figure it out based on this glimpse.

provider "vault" {}

data "vault_generic_secret" "gcp_auth" {
  path = "gcp/key/${var.project_name}-roleset"
}

provider "google" {
  credentials = base64decode(data.vault_generic_secret.gcp_auth.data.private_key_data)
  project     = var.project_name
}
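The Vault-side "magic" is the subject of that future post, but for a rough idea of the shape of it: a roleset like the one referenced above could itself be defined with the Terraform Vault provider's vault_gcp_secret_roleset resource. This is a sketch under assumptions - that the GCP secrets engine is mounted at gcp/, and the binding and roles shown are illustrative, not the actual configuration used here:

```hcl
# Hypothetical sketch of the Vault GCP roleset behind "gcp/key/<project>-roleset".
# Assumes the GCP secrets engine is mounted at the default path "gcp".
resource "vault_gcp_secret_roleset" "gke" {
  backend     = "gcp"
  roleset     = "${var.project_name}-roleset"
  secret_type = "service_account_key" # Vault mints short-lived SA keys
  project     = var.project_name

  binding {
    resource = "//cloudresourcemanager.googleapis.com/projects/${var.project_name}"
    # Illustrative roles only; scope these to what the run actually needs.
    roles = [
      "roles/container.admin",
      "roles/compute.networkAdmin",
    ]
  }
}
```

With a roleset in place, every Terraform run gets a freshly generated, leased Service Account key instead of a long-lived JSON key sitting in a variable.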

Summary

Making use of Terraform Enterprise variables - Terraform or Environment - gives organizations a way to create standardized offerings. This approach lets you take code and use it again, and again, and again. And if you decide your standard offering should change, a single commit to the base repo will trigger the change across all workspaces (obviously - hopefully obviously - extreme caution should be taken here). The point: stop hardcoding variable values into Terraform code; stop hardcoding variable values into variable declarations; stop cloning repos every time you need to deploy something that has already been solved for your organization. Start being efficient, start reusing code, start using workspace variables.

That’s all for now! Stay tuned for a follow-up post. In the meantime, if you are interested in learning more about TFE variables with GitHub source control, we would love to hear from you.
