Wat?! Ugh Terraform State is Complicated Sometimes! ‘url has no host’

As I’ve been working through this series (parts 1, 2, 3, and 4 so far), a number of issues have come up (like this one), and this seemed like a good one to post too. While working through I started stumbling into this error when destroying images via terraform destroy. Now, I’m not just creating them with terraform apply and then trying to destroy them. I’m creating the images via Packer, then importing the state (see part 4 of the series for details), and then when I clean up the environment with terraform destroy it shows this error.

[…lost image…]

I had taken the default azurerm_image resource configuration from the Hashicorp docs site and tweaked it just a little bit.

[code language="bash"]
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

The thing causing the error is the "{blob_uri}" placeholder. In general, I’d assume this should be pulled or derived from the existing image created by Packer when imported. But the syntax above just doesn’t cut it for actions post-import of the image state.

Time Consuming Troubleshooting

To troubleshoot and confirm this issue takes a long time. Create the image, which is ~15-20 minutes, then run an apply. The apply, even if most of the creation is minimized to imports and the few other things that are created, takes several minutes in Azure. Then a destroy takes several minutes. So all in all, one test cycle is roughly 30 minutes.

The First Tricky Fix

I went through several iterations of attempting to get that part of the import of the state pulled in. That didn’t work out so well. What did work, though, was the simplest of actions: I deleted blob_uri = "{blob_uri}"! Then upon terraform apply or terraform destroy I got a fully cycled application of changes after adding the state, and on destroy Terraform wiped out everything as expected!
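
For reference, here’s the same resource block from above with that one line removed, which is all the fix amounted to:

[code language="bash"]
# Same azurerm_image resource as above, just without the blob_uri attribute.
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    size_gb  = 30
  }
}
[/code]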

Problem Fixed, Problem Created

On to the next things! But oh wait, there is another problem. Now if I setup a VM to be created based off of the image, the state doesn’t have the blob_uri. Great, back to square one, right? Not entirely. Subscribe and keep reading, and I’ll have the next steps for this coming real soon!

Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted setup for my development workspace was a DataStax Enterprise Cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built that we can use to add nodes to our cluster and setup some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I setup in the second part of this series. It looks, with the installation steps taken out, just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked “inline” I setup the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed; I opted for the required version, 1.8. For more information about OpenJDK check out this material:

The next thing I needed to do was to get everything setup so that I could use this Azure Image to build an actual Virtual Machine. Since this process, however, happens outside of the primary Terraform build process, I need to import the various assets that are created for Packer image creation as well as the actual Packer images. By importing these asset resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but check out this process and it might make more sense. If it is still confusing do let me know, ping me on Twitter @adron and I’ll elaborate or edit things so that they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out part 1 of this series I setup Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (Hashicorp Configuration Language) with a little bit of Bash as glue here and there. Then in Part 2 and Part 3 I setup Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process now exist in two entirely separate states that know nothing about each other. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform Infrastructure doesn’t know the Packer Images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the various things that store the images. To import these resources into the Terraform state, before doing an apply, run the terraform import command.
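
The general form is terraform import ADDRESS ID, where ADDRESS is the resource address in the Terraform configuration and ID is the cloud provider’s identifier for the resource, which in Azure’s case is the full resource ID. A rough sketch of what one of these looks like (the subscription id below is just a placeholder):

[code language="bash"]
# General form: terraform import <resource address in config> <provider resource ID>
# For an Azure managed image the ID is the full resource path, roughly like this:
terraform import azurerm_image.basedse /subscriptions/<subscription-id>/resourceGroups/adronsimages/providers/Microsoft.Compute/images/basedse
[/code]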

In order to get all of the resources we need in which to operate and build images, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure Image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq 'map({name: "basecassandra", id})' | jq -r '.[0].id')
BASEDSE=$(az image list | jq 'map({name: "basedse", id})' | jq -r '.[0].id')
[/code]

Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command az image list is issued, which is then piped | into the jq command jq 'map({name: "theimagenamehere", id})'. This takes the results of the Azure CLI command and maps each image into an object with the given name and that image’s id. That result is then piped into another command, jq -r '.[0].id', which returns just the value of the id for the first entry. The -r is a switch that tells jq to return the raw data, without enclosing double quotes.
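
Just as an aside, and not what the script above uses, jq can also filter the list by image name directly with select(). A sketch of that alternative approach:

[code language="bash"]
# Hypothetical alternative: filter the az image list output by name with jq's select().
BASEDSE=$(az image list | jq -r '.[] | select(.name == "basedse") | .id')
[/code]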

I also needed to import the resource group all of these are in. Following a similar style of piping one command’s results into another, I issued this command to get the Resource Group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state.

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images that are needed and the Resource Group. I added the images in an images.tf file and the Resource Group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into Terraform’s state, which we can then issue further Terraform commands and applies against.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

terraform-imports

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be, but it was taking too long for the next steps to get written up. Fear not, they’re on the way! In the coming post I’ll cover more of the other resource elements we’ll need to import, what’s next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra Clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.

Development Workspace with Terraform on Azure: Part 3 – Next Step Kubernetes

In part 1 of this series I setup Terraform and put together a basic setup for ongoing use. In part 2 I setup Packer and got a template started that installs Apache Cassandra 3.11.4.

In this part there’s one more key piece, a really key piece for iterating and moving quickly with development needs on a day to day basis: I need some development love with Kubernetes. Terraform is extremely well suited to spin this up in Azure! Since I setup Terraform in part 1 I’ll leave those specifics linked here.

1: Terraform ❤️ Kubernetes

There are a couple of different, and very important, aspects to how and what can be done with Kubernetes via Terraform. First, which I’ll cover right here, is the creation of a Kubernetes Cluster. Later I’ll cover more material related to working with the cluster itself and managing the resources within the cluster. To get a Kubernetes cluster running with Terraform there’s a single resource we’ll need to use.

I opened up the project, the same one as in the previous two blog articles in this series, went straight into the main.tf file in the root, and added the following Kubernetes resource.

[code language="javascript"]
resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.test.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.test.kube_config_raw}"
}
[/code]

With that I immediately applied the changes to build the environment with terraform apply and confirmed with a yes.

The agent pool profile is what sets up our virtual machines that will make up our Kubernetes Cluster. In this case I’ve selected the Standard_D1_v2 just because that’s the default example, but depending on use case this may need to change. For more information about the various virtual machine sizes in Azure here are a few important links:

The Service Principal as shown is pulled from variables, as setup in part 1 and part 2 of this series.

The two output variables in the section of configuration above will print out the client cert and raw configuration which we’ll need for other uses in the future. Be sure to put both of those somewhere that can be retrieved for future use. Ideally, keep them secure! I’ll speak to this more in future posts, but for now I’m going to focus on this Kubernetes Cluster.

Verification Checklist

  • Based on the infrastructure from part 1 and part 2 plus this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes Cluster to work with.

2: Kubernetes ❤️ Terraform

Alright, with a Kubernetes (K8s) cluster done, we can now add some more elements to it via Terraform to work with. Specifically, let’s add a pod and then get an Nginx container up and running.

The first thing we need, however, is a connection to the cluster that was created in the previous step. In the HCL (HashiCorp Configuration Language) above we setup two output variables. The data in those variables is needed for our ongoing connection to Kubernetes; however, we don’t particularly need to pass them via output variables. The reason is that Terraform can handle the creation of a Kubernetes cluster and the post-creation work of creating pods and related collateral in the order it needs to occur. Since Terraform knows how to do this, there is a way to setup the connection information for the Kubernetes provider that prevents us from needing to post output variables. Before moving on, those outputs should be removed. Once that is done, add the following provider for Kubernetes.

[code language="javascript"]
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}
[/code]

In this section of configuration the host is set from the previously created Kubernetes Cluster, then the username and password, then the certificates and key. Each of these, as you can see, is pulled from the Kubernetes Cluster resource (azurerm_kubernetes_cluster), specifically from the test cluster that was created. With this addition run terraform init to get the provider downloaded and ready for use.

With the connection now set, the provider downloaded, the next step is to add the Kubernetes Pod that we’ll need to run the Nginx container. Hat tip to the HashiCorp docs for this specific example.

[code language="javascript"]
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}
[/code]

This will setup a pod that runs an nginx:1.7.8 container image and makes it available on port 80. To make this container available as a service, however, there is one more step: create a Kubernetes Service that’ll make port 80 available and mapped to the container within Kubernetes.

[code language="javascript"]
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
[/code]

Alright, now the setup is almost 100% complete. The last step is to create another output variable that’ll provide a way for us to navigate to and make requests against the Nginx service; we’ll need either the IP or the hostname. Depending on the cloud provider, one or the other can be retrieved by asking for the load_balancer_ingress[0].ip value or the load_balancer_ingress[0].hostname. For Azure and GCP one can retrieve the IP; for AWS you’d want to specifically get the hostname.

In the end the HCL looks like this.

[code language="javascript"]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

# On Azure the load balancer ingress provides an IP; the hostname output is here
# for providers (like AWS) that return a hostname instead.
output "lb_ip" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].ip
}
output "lb_hostname" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].hostname
}
[/code]

You can also check out this specific iteration of my developer workspace project on Github at the example-nginx-pod-on-kubernetes branch.

Verification Checklist

  • Based on the infrastructure from part 1 and part 2 plus this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes Cluster to work with.
  • With this second segment done, there is now a pod running an nginx container. The container is exposed as a service with a port mapping for port 80.

Summary

At this point in the series there are enough elements to really start to get some work done deploying, building some applications, and getting some databases deployed! So subscribe here to the blog, follow me at @Adron on Twitter, @adronhall on Twitch (for even more coding), and subscribe to my YouTube channel here. More material coming about how to get all of this wired together and running, until next post, cheers!

Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI

Prerequisites before all of this.

Have a basic understanding of how to use Terraform and what it does. This is covered pretty well in the Hashicorp Docs here (single page read <5 minutes) and if you have a LinkedIn Learning account check out my Terraform course “Learning Terraform”.

Beyond that, you’ll need some basic CLI/terminal knowledge, an understanding of where environment variables are set (as I detail here, here, and here for some starters), and miscellaneous related knowledge. You’ll also need knowledge of and user experience with Git. Most of these things I’ll detail explicitly, but otherwise I’ll either link to or provide context for additional information throughout the article.

1: Terraform

Download

You’ll need to first install Terraform and make it available for use on your machine. To do this navigate over to the Hashicorp Terraform site and to the download section. As of this time 0.12.6 is available, and for the foreseeable future this version or the versions coming after it will be just fine.

Install

You’ll need to unzip this into a directory that’s on your path for execution. In my case I’ve setup a directory I call “Apps” and put all of my CLI apps in that directory. Then I add it to my path environment variable, and terraform becomes available to me from any terminal wherever I need it. My path variable export on Linux and Mac looks like this.

[code]export PATH=$PATH:/home/adron/Apps[/code]

Now you can verify that the Terraform CLI is available by typing terraform in any terminal and you should get a read out of the available Terraform commands.
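
For example, a quick sanity check looks something like this; the version output will of course reflect whatever release you downloaded (0.12.6 in my case):

[code language="bash"]
# Typing terraform with no arguments lists the available commands.
terraform
# terraform version prints the installed release, e.g. Terraform v0.12.6.
terraform version
[/code]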

For those of you who might be trying to install this on the WSL (Windows Subsystem for Linux), on Windows itself, or some variant, there are specific instructions for that too. Check out Hashicorp’s installation instructions for more details on several methods and a tutorial video, plus the Microsoft Docs on installing Terraform on the WSL.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.

2: Azure CLI

For this tutorial there are several ways for Terraform to authenticate to Azure; I’ll be using the Azure CLI authentication method as detailed in this tutorial from Hashicorp. There are also some important notes about the Azure CLI. The Azure CLI method, in conjunction with the AzureRM Terraform Provider, is used to build out resources using infrastructure as code paradigms; because of this it is also important to ensure we have the right versions of everything to work together.

The Caveats

For the AzureRM Provider, which will be downloaded automatically when we setup the repository and initialize it with the terraform init command, we’ll want to make sure we have version 1.20 or greater. Previous versions of the AzureRM Provider used a method of authorizing that reset credentials after an hour. A clear issue.

Terraform also only supports authenticating via the az CLI, and it must be available in the system’s path, the same way terraform is available via the path. In other words, if both terraform and az can be executed from anywhere in the terminal, we’re all set. The older methods using PowerShell Cmdlets or the older azure CLI aren’t supported anymore.
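
A quick way to confirm that on Linux or Mac, just as a sanity check:

[code language="bash"]
# Both should print a path; if either comes back empty, the PATH needs fixing.
which terraform
which az
[/code]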

Authenticating via the Azure CLI is only supported when using a “User Account” and not via a Service Principal (ex: az login --service-principal). This works perfectly for me since these environments I’m building are specifically for my development needs. If you’d like to use this example for something more production focused, then something like Service Principals or another systems-level verification, authentication, and authorization model should be used. For other examples check out authentication via a Service Principal and Client Secret, Service Principal and Client Certificate, or Managed Service Identity.

Installing

To get the Azure CLI installed I followed the manual installation on Debian/Ubuntu Linux process. For Windows installation check out these instructions.


[code language="bash"]
# Update the latest packages and make sure certs, curl, https transport, and related packages are updated.
sudo apt-get update
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
# Download and install the Microsoft signing key.
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
    gpg --dearmor | \
    sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null
# Add the software repository of the Azure CLI.
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list
# Update the repository information and install the azure-cli package.
sudo apt-get update
sudo apt-get install azure-cli
[/code]


Login & Setup

az login

It’ll bring up a browser that’ll give you a standard Microsoft auth login for your Azure Account.

login.png

When that completes successfully a response is returned in the terminal as shown.

loggedin.jpg

Pieces of this information will be needed later on so I always copy this to a text file for easy access. I usually put this file in a folder I call “DELETE THIS CUZ SECURITY” so that I remember to delete it shortly thereafter so it doesn’t fall into the hands of evil!

For all the other operating systems and places that the Azure CLI can be installed, check out the docs here.

Once logged in, a list of accounts can be retrieved too. Run az account list to get the list of accounts available. If you only have one account (re: subscriptions) then you’ll just see exactly what was displayed when you logged in. However if you have other subscriptions tied to your login, those accounts will be shown here.

If there is more than one subscription, one needs to be selected and set. To do that, execute the following command and pass the subscription id, which is the second value in that list of values above. Yes, it is kind of odd that they use account and subscription interchangeably in this situation, and the subscription id isn’t exactly obvious when this data is identified as an account and not a subscription, but we’ll give Microsoft a pass for now. Suffice it to say, the account id and subscription id in this data is the standalone id field in the aforementioned data.

Run az account set --subscription="id" where id is the subscription id, or, as shown in az account list, the id field there, whatever one wants to call it.
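
As a quick sketch of that flow (the GUID below is just a placeholder, not a real subscription id):

[code language="bash"]
# List the subscriptions (accounts) available to the logged in user.
az account list
# Set the default subscription using the id field from the list above.
az account set --subscription="00000000-0000-0000-0000-000000000000"
[/code]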

Configuring Terraform Azure CLI Auth

To do this we will go ahead and setup the initial repository and files. What I did specifically for this was navigate on Github to the new repository page https://github.com/new. Then I selected the following options:

  1. Repository Name: terraform-todo-list
  2. Description: This is the infrastructure project that I’ll be using to “turn on” and “turn off” my development environment every day.
  3. Public Repo
  4. I checked Initialize this repository with a README and then added the .gitignore file with the Terraform template, and the Apache License 2.0.

newproject.png

This repository is now available at https://github.com/Adron/terraformer-todo-list. All of the steps and details outlined in this blog entry will be available in this repository plus any of my ongoing work on bastion servers, clusters, kubernetes, or other related items specific to my development needs.

With the repository cloned locally via git clone git@github.com:Adron/terraformer-todo-list.git there needs to be a main.tf file created. Once created, I added the azurerm provider block provider "azurerm" { version = "=1.27.0" } into the file. This enables Terraform to be executed from this repository directory with terraform init. Running this will pull down the azurerm provider dependency. If everything succeeds you’ll get a response from the command.

terraform-init-success
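
For reference, at this stage the main.tf is nothing more than that single provider block:

[code language="javascript"]
# Initial main.tf, just the azurerm provider pinned to 1.27.0.
provider "azurerm" {
  version = "=1.27.0"
}
[/code]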

However if it fails, routinely it’s because I’ve ended up out of sync between the Terraform version and the provider version. As mentioned above we definitely need 1.20.0 or greater for the examples in this post. However, I’m also running Terraform at version 0.12.6, which requires at least version 1.27.0 of the azurerm provider. If you see an error like this, it’s usually informative and you’ll just need to change the version number so the version of Terraform you’re using will pull down the right provider version.

terraform-init-fail

Next I run terraform plan and everything should respond with a response indicating no changes to infrastructure are requested.

terraform-plan-aok.png

At this point I want to verify authentication against my Azure account with my Terraform CLI. To do this there are two additional fields that need to be added to the provider: subscription_id and tenant_id. The configuration will look similar to this, except with the subscription id and tenant id from the az account list data that was retrieved earlier when setting up and finding the Azure account details from the Azure CLI.

terraform-main-auth

Run terraform plan again to see the authentication results, which will look just like the terraform plan results above. With this done there’s just one more thing to do so that we have a good workspace for working with Terraform against Azure. I always, at this point of any project with Terraform and Azure, setup a Resource Group.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.

3: Azure Resource Group

Just for clarity, a few details about the resource group. A Resource Group in Azure is a grouping of resources that should share the same lifecycle, which is exactly what I’m aiming for with all of these resources on a day to day basis for development. Every day I intend to start these resources in this Resource Group and then shut them all down at the end of the day.

There are other specifics about what exactly a Resource Group is, but I’ll leave the documentation to be read to elaborate further, for my mission today I just want to have a Resource Group available for further Terraform work. In Terraform the way I go about creating a Resource Group is by adding the following to my main.tf file.


provider "azurerm" {
version = "=1.27.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
tenant_id = "11111111-1111-1111-1111-111111111111"
}
resource "azurerm_resource_group" "adrons_resource_group_workspace" {
name = "adrons_workspace"
location = "West US 2"
tags = {
environment = "Development"
}
}


Run terraform plan to see the changes. Then run terraform apply to make the changes, which will need a confirmation of yes.

terraform-apply-done

Once I’m done with that I go ahead and issue a terraform destroy command, giving it a yes confirmation when asked, to destroy and wrap up this work for now.

terraform-destroy-cleanup

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.

4: Using Environment Variables

There is one more thing to do before I commit this code to the repository. I need to get the subscription id and tenant id out of the main.tf file. One wouldn’t want to post their cloud access and identification information to a public repository, or ideally to any repository. The easy fix for this is to implement some interpolated variables that pull from environment variables. I can then set the environment variables via my startup script (such as .bash_profile or .bashrc, or even via the IDE I’m running Terraform from, like IntelliJ or WebStorm). In that script, setting the variables would look something like this.


[code language="bash"]
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_tenant_id="11111111-1111-1111-1111-111111111111"
[/code]

Note that each variable is prepended with TF_VAR_. This is the convention so that Terraform will look through and pick up all of the environment variables that it needs to work with. Once these variables are added to the startup script, run source ~/.bashrc (Linux) or source ~/.bash_profile (Mac) to set those variables. For Windows check out this to set the environment variables. With that set there are a few more steps.

In the repository create a file named variables.tf and add the two variables variable "subscription_id" {} and variable "tenant_id" {}. Then in the main.tf file change the subscription_id and tenant_id fields to be assigned variables like subscription_id = var.subscription_id and tenant_id = var.tenant_id. Now run terraform plan and these results should display.

terraform-plan-after-variables
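
Put together, here’s a sketch of the relevant pieces of those two files at this point:

[code language="javascript"]
# variables.tf - declares the variables that map to the TF_VAR_ environment variables.
variable "subscription_id" {}
variable "tenant_id" {}

# main.tf - the provider now reads the ids from variables instead of hard-coded values.
provider "azurerm" {
  version         = "=1.27.0"
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}
[/code]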

Now terraform apply can be run to create the Resource Group, or terraform destroy to destroy it. The last step is to just commit this infrastructure code with the variables now removed from the main.tf file.

git add -A

git commit -m 'First executable resource.'

git rebase to pull in all the remote default files and such and merge those with the local additions.

git push -u origin master then to push the changes and set the master local branch to track with the remote branch master.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.
  • Private sensitive data has been moved from the main.tf file into environment variables so that it isn’t copied to the repository.
  • A variables.tf file has been added for the aforementioned variables that map to environment variables.
  • The code base has been committed to Github at https://github.com/Adron/terraformer-todo-list.
  • Both terraform plan and terraform apply deploy as expected and terraform destroy removes infrastructure cleanly as expected.

Next steps coming soon!

Setting Up Nodes, Firewall, & Instances in Google Cloud Platform

Here’s the run down of what I covered in the latest Thrashing Code Session (go subscribe here to the channel for future sessions or on Twitch). The core focus of this session was getting some further progress on my Terraform Project around getting a basic Cassandra and DataStax Enterprise Apache Cassandra Cluster up and running in Google Cloud Platform.

The code and configuration from the work is available on Github at terraform-fields and a summary of code changes and other actions taken during the session are further along in this blog entry.

Streaming Session Video

In this session I worked toward completing a few key tasks for the Terraform project around creating a Cassandra Cluster. Here’s a run down of the time points where I tackle specific topics.

  • 3:03 – Welcome & objectives list: Working toward DataStax Enterprise Apache Cassandra Cluster and standard Apache Cassandra Cluster.
  • 3:40 – Review of what infrastructure exists from where we left off in the previous episode.
  • 5:00 – Found music to play that is copyright safe! \m/
  • 5:50 – Recap on where the project lives on Github in the terraformed-fields repo.
  • 8:52 – Adding a google_compute_address for use with the instances. Leads into determining static public and private google_compute_address resources. The idea being to know the IP for our cluster to make joining them together easier.
  • 11:44 – Working to get the access_config and related properties set on the instance to assign the google_compute_address resources that I’ve created. I run into a few issues but work through those.
  • 22:28 – Bastion server is set with the IP.
  • 37:05 – I setup some files, following a kind of “bad process” as I note. Which I’ll refactor and clean up in a subsequent episode. But the bad process also limits the amount of resources I have in one file, so it’s a little easier to follow along.
  • 54:27 – Starting to look at provisioners to execute script files and commands before or after the instance creation. Super helpful, with the aim to use this feature to download and install the DataStax Enterprise Apache Cassandra or standard Apache Cassandra software.
  • 1:16:18 – Ah, a need for firewall rule for ssh & port 22. I work through adding those and then end up with an issue that we’ll be resolving next episode!

Session Content

Starting Point: I started this episode from where I left off last session.

Work Done: In this session I added a number of resources to the project and worked through a number of troubleshooting scenarios, as one does.

Added firewall resources to open up port 22 and icmp (ping, etc).

[sourcecode language="bash"]
resource "google_compute_firewall" "bastion-ssh" {
  name    = "gimme-bastion-ssh"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}

resource "google_compute_firewall" "bastion-icmp" {
  name    = "gimme-bastion-icmp"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "icmp"
  }
}
[/sourcecode]

I also broke out the files so that each instance has its own IP addresses defined in the file specific to that instance. Later I’ll add context for why I let the project files bloat this way, by refactoring to use modules.

terraform-files.png

Added each node resource as follows. I just incremented each specific node count by one for each subsequent node, such as making the node1_internal google_compute_address become node2_internal. Everything is also statically defined, adding to my file and configuration bloat.

[sourcecode language="bash"]
resource "google_compute_address" "node1_internal" {
  name         = "node-1-internal"
  subnetwork   = "${google_compute_subnetwork.dev-sub-west1.self_link}"
  address_type = "INTERNAL"
  address      = "10.1.0.5"
}

resource "google_compute_instance" "node_frank" {
  name         = "frank"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    address    = "${google_compute_address.node1_internal.address}"
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]

I also setup the bastion server so it looks like this, specifically designating a public IP so that I can connect via SSH.

[sourcecode language="bash"]
resource "google_compute_address" "bastion_a" {
  name = "bastion-a"
}

resource "google_compute_instance" "a" {
  name         = "a"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  provisioner "file" {
    source      = "install-c.sh"
    destination = "install-c.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.root_password}"
    }
  }

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    access_config {
      nat_ip = "${google_compute_address.bastion_a.address}"
    }
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]

Plans for next session include getting the nodes setup so that the bastion server can work with and deploy or execute commands against them without the nodes being exposed publicly to the internet. We’ll talk more about that then. For now, happy thrashing code!