Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

On the 9th of September I posted “Setup Postgres, and GraphQL API with Hasura on Azure”. In that post I noted a few refactorings that I wanted to make. The following are the top 3 refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment, which left production and development running on different ports. It’s much easier, and more logical, for the port to be the same in both dev and production, at least while the console is available on the production server (it should be disabled eventually, more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under the graphql-engine service, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick with port 8080, so I left the mapping as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. This variable is passed in via the script files I use to execute the Terraform.
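The variable declaration itself is just a plain Terraform variable in the Terraform folder. Something along these lines works; the type and default shown here are my own assumption, since the scripts below always pass the value explicitly with -var.

variable "apiport" {
  # Assumed declaration; the scripts below always pass this via -var 'apiport=...'
  type    = number
  default = 8080
}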

The script file change looks like this now for launching the environment.

cd terraform
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

There are then additional changes in the Terraform file; the first is here, and the next I’ll talk about in refactor 2 below. The changes in the resource, shown below, are in the container ports section and the environment_variables section, which now simply reference var.apiport.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"

  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine:v1.3.2"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = var.apiport
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT    = var.apiport
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With that I now have the port standardized across dev and prod to be 8080. Of course, it could be another port, that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do every time Terraform got production up and going was navigate over to Azure and find the FQDN to open the console at (or make API calls against, etc.). To make this easier, since I’m already running the script, I added an output variable that concatenates the FQDN interpolated from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
  value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have the var.apiport concatenated there at the end of the value. With that, it returns at the end of execution the exact FQDN that I need to navigate to for the Hasura Console!
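As a small variation of my own (not in the repo), you could also prefix the scheme so the output is directly clickable in most terminals:

output "hasura_uri_path" {
  # Same output as above, just with http:// prepended for convenience.
  value = "http://${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}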

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post “Setup Postgres, and GraphQL API with Hasura on Azure” and realized I had made a mistake: dev and prod weren’t using databases with the same name. Dev was using the default database on the server while prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything else basically stays the same; a new step was added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now, here is the script I set up to call it.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics'

There are more refactorings that I made, but these were the top 3 I did right away! Now my infrastructure as code is easier to use, the scripts are a little more seamless, and everything wraps into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/ThrashingCode.

Setup Postgres, and GraphQL API with Hasura on Azure

Key Technologies: Hasura, Postgres, Terraform, Docker, and Azure.

I created a data model to store railroad systems, services, schedules, time points, and related information, detailed in the “Beyond CRUD n’ Cruft Data-Modeling” schema with a few tweaks. The original I’d created for Apache Cassandra; I’ve since switched to Postgres, which gives the option of primary and foreign keys, relations, and the related connections for the model.

In this post I’ll use that schema to build out an infrastructure as code solution with Terraform, utilizing Postgres and Hasura (OSS).

Prerequisites

Docker Compose Development Environment

For the Docker Compose setup I just placed the file in the root of the repository. I added a docker-compose.yaml file and then added services. The first service I set up was the Postgres/PostgreSQL database, using the standard Postgres image on Docker Hub. I opted for version 12, I do want it to always restart if it gets shut down or crashes, and the last of the obvious settings is the port, which maps 5432 to 5432.

For the volume, since I might want to back up or tinker with it, I set the db_data location to my own Codez directory. I tend to set up all my databases like this in case I need to debug things locally.

The POSTGRES_PASSWORD value is pulled from an environment variable, thus the ${PPASSWORD} syntax. This way no passwords go into the repo. Then I can load the environment variable via a standard export PPASSWORD="theSecretPasswordHere!" line in my system startup script or via other means.

services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/Users/adron/Codez/databases
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432

For the db_data volume, toward the bottom I add the key value setting to reference it.

volumes:
  db_data:

Next I added the GraphQL solution with Hasura. The v1.1.0 image probably needs to be updated (I believe we’re on version 1.3.x now) so I’ll do that soon, but I got the example working with v1.1.0. The ports are mapped 8080 to 8080, and this service depends on the postgres service already detailed. Restart is also set to always, just like the postgres service. Finally, there are two environment variables for the container:

  • HASURA_GRAPHQL_DATABASE_URL – this variable is the base postgres URL connection string.
  • HASURA_GRAPHQL_ENABLE_CONSOLE – this is the variable that will set the console user interface to initiate. We’ll definitely want to have this for the development environment. However in production I’d likely want this turned off.
  graphql-engine:
    image: hasura/graphql-engine:v1.1.0
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:logistics@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"

At this point the commands to start this are relatively minimal, but in spite of that I like to create a start and stop shell script. My start script and stop script simply look like this:

Starting the services.

docker-compose up -d

For the first execution of the services you may want to skip the -d and instead watch the startup just to become familiar with the events and connections as they start.

Stopping the services.

docker-compose down

🚀 That’s it for the basic development environment; we’re launched and ready for development. With the services started, navigate to http://localhost:8080/console to start working with the user interface. I’ll have more details on the “Beyond CRUD n’ Cruft Data-Modeling” swap to Hasura and Postgres in an upcoming blog post.

For full syntax of the docker-compose.yaml check out this gist: https://gist.github.com/Adron/0b2ea637b5e00681f4d62404805c3a00

Terraform Production Environment

For the production deployment of this stack I want to deploy to Azure, use Terraform for infrastructure as code, and use the Azure database service for Postgres, while running Hasura for my GraphQL API tier.

For the Terraform files I created a folder and added a main.tf file. I generally create a folder to work in, to keep the state files and initial prototyping of the infrastructure in a single place. Eventually I’ll set up a location to store the state and fully automate the process through a continuous integration (CI) and continuous delivery (CD) process. For now though, it’s just a single folder to keep it all in.

For this I know I’ll need a few variables and add those to the file. These are variables that I’ll use to provide values to multiple resources in the Terraform templating.

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

There’s one other variable I’ll want, an output, so that it is a little easier to verify what my Hasura connection information is. It looks like this.

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

Let’s take this one apart a bit. There are a lot of concatenated and interpolated variables being wedged together here. This is basically the Postgres connection string that Hasura will need to make a connection. It includes the username and password, and all of the pertinent parsed and string-escaped values. Note specifically the %40 (the URL-encoded @ sign) between the ${var.username} and ${azurerm_postgresql_server.logisticsserver.name} variables, since Azure expects the login in the username@servername form, while elsewhere certain characters are not escaped, such as the @ sign before the FQDN. When constructing this connection string it is very important to be mindful of all these specific values being connected together. But, I did the work for you so it’s a pretty easy copy and paste now!
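As an aside, here’s a sketch of my own rather than something from the original repo: Terraform’s built-in urlencode() function can do that %40 escaping for you, which makes the intent a little more obvious. Assuming a reasonably recent Terraform version, the same output could be written like this.

output "hasura_url" {
  # urlencode() turns the @ in "username@servername" into %40 automatically.
  value = "postgres://${urlencode("${var.username}@${azurerm_postgresql_server.logisticsserver.name}")}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}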

Next I’ll need the Azure provider information.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

Note that there is a features block that is just empty; the provider now requires it to be declared even when it’s empty.

Next up is the resource group that everything will be deployed to.

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

Now the Postgres server itself. Note that location and resource_group_name simply map back to the resource group. Another thing I found a little confusing is the “name” key value pair in this resource, as I wasn’t sure if it was a Terraform name, a resource name tag, or the server name itself. It is, however, the server name, which I’ve assigned var.server. The next value assigned, the “B_Gen5_2” sku_name, is the Azure designator, which is a bit cryptic (it breaks down roughly as pricing tier, compute generation, and vCore count: Basic, Gen5, 2 vCores). More on that in a future post.

After that, the storage is set to, if I RTFM’ed correctly, 5 gigs of storage (5120 MB). For what I’m doing this will be fine. The backup is set up for 7 days of retention, which means I’ll be able to fall back to a backup from any of the last seven days; after 7 days the backups roll over and the oldest day is deleted to make space for the newest backup. The geo_redundant_backup_enabled setting is set to false because, with Postgres’ excellent reliability and my desire not to pay for that extra reliability insurance, I don’t need geographic redundancy. Last, I set auto_grow_enabled to true, albeit I do need to determine the exact flow of logic this takes for this particular implementation and deployment of Postgres.

The last chunk of details for this resource is simply the username and password, which are derived from variables, which in turn are derived from environment variables to keep the actual username and password out of the repository. The last two bits set the Postgres version to v9.5 and SSL enforcement to enabled.

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

Since the database server is all set up, now I can confidently add an actual database to that server. Here the resource_group_name pulls from the resource group resource and the server_name pulls from the server resource. The name, being the database name itself, I derive from a variable too. Then the character set is UTF8 and the collation is set to US English, which are generally standard settings for Postgres installed for use within the US.

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

The next thing I discovered, after some trial and error and a good bit of searching, is the Postgres-specific firewall rule. It appears this is specific to the Postgres service in Azure; through a number of trials and many errors I attempted to use the standard firewalls and firewall rules that are available in virtual networks. My understanding now is that the Postgres servers exist outside of that paradigm and consequently have their own firewall rules.

This firewall rule basically attaches the firewall to the resource group, then the server itself, and allows internal access between the Postgres server and the Hasura instance. The 0.0.0.0 start and end addresses are the special-case rule Azure uses to mean “allow access from Azure services”, rather than opening the server up to the whole internet.

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}
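If you also want to connect to the server directly from your own workstation (psql, a GUI client, etc.), a second rule scoped to your public IP does the trick. This one is a sketch of my own with a placeholder address, not something in the repo:

resource "azurerm_postgresql_firewall_rule" "pgdevworkstation" {
  name                = "allow-dev-workstation"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "203.0.113.10" # placeholder, swap in your own public IP
  end_ip_address      = "203.0.113.10"
}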

The last and final step is setting up the Hasura instance to work with the Postgres Server and the designated database now available.

To set up the Hasura instance I decided to go with the container service that Azure has. It provides a relatively inexpensive, easier, and more concise way to set up the server than standing up an entire VM or a full Kubernetes environment just to run a single instance.

The first section sets up a public IP address, which of course I’ll need to change as the application is developed and I need to provide an actual secured front end. But for now, to prove out the deployment, I’ve left it public, set up the DNS label, and set the OS type.

The next section in this resource I then outline the container details. The name of the container can be pretty much whatever you want it to be, it’s your designator. The image however is specifically hasura/graphql-engine. I’ve set the CPU and memory pretty low, at 0.5 and 1.5 respectively as I don’t suspect I’ll need a ton of horsepower just to test things out.

Next I set the available port to port 80. Then the environment variable HASURA_GRAPHQL_SERVER_PORT is set to that port and HASURA_GRAPHQL_ENABLE_CONSOLE to true so the console displays there. Then finally there’s that wild concatenated, interpolated connection string that I also have set up as an output variable – again specifically for testing – assigned to HASURA_GRAPHQL_DATABASE_URL.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With all that setup it’s time to test. But first, just for clarity here’s the entire Terraform file contents.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

To run this, similarly to how I set up the dev environment, I’ve set up a startup and shutdown script. The startup script, named prod-start.sh, has the following commands. Note the $PUSERNAME and $PPASSWORD are derived from environment variables, whereas the other two values are just inline.

cd terraform

terraform apply -auto-approve \
    -var 'server=logisticscoresystemsdb' \
    -var 'username='$PUSERNAME'' \
    -var 'password='$PPASSWORD'' \
    -var 'database=logistics'

For the full Terraform file check out this gist: https://gist.github.com/Adron/6d7cb4be3a22429d0ff8c8bd360f3ce2

Executing that script gives me results that, if everything goes right, look similar to this.

./prod-start.sh 
azurerm_resource_group.adronsrg: Creating...
azurerm_resource_group.adronsrg: Creation complete after 1s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg]
azurerm_postgresql_server.logisticsserver: Creating...
azurerm_postgresql_server.logisticsserver: Still creating... [10s elapsed]
azurerm_postgresql_server.logisticsserver: Still creating... [20s elapsed]


...and it continues.

Do note that this process takes a while; it’s completely normal for it to take ~3 or more minutes. Once the server build is done, a lot of the other activities start to take place very quickly. When it’s all done, toward the end of the output I get my hasura_url output variable so that I can confirm it is indeed put together correctly! Now that this is performed I can take the next steps: remove that output variable, start to tighten security, and more, which I’ll detail in a future blog post once more of the application is built.

... other output here ...


azurerm_container_group.adronshasure: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [50s elapsed]
azurerm_container_group.adronshasure: Still creating... [50s elapsed]
azurerm_postgresql_database.logisticsdb: Creation complete after 51s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.DBforPostgreSQL/servers/logisticscoresystemsdb/databases/logistics]
azurerm_container_group.adronshasure: Still creating... [1m0s elapsed]
azurerm_container_group.adronshasure: Creation complete after 1m4s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.ContainerInstance/containerGroups/adrons-hasura-logistics-data-layer]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

hasura_url = postgres://postgres%40logisticscoresystemsdb:theSecretPassword!@logisticscoresystemsdb.postgres.database.azure.com:5432/logistics
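One thing worth noting from that output: the hasura_url value prints the password in plain text. Before removing the output variable entirely, as mentioned above, an intermediate option I’d consider is marking it sensitive so Terraform redacts it in the CLI output. A minimal sketch of that:

output "hasura_url" {
  value     = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
  sensitive = true
}

The value is still stored in the state file, of course, so this is a stopgap rather than actual security.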

Now if I navigate over to logisticsdatalayer.westus2.azurecontainer.io I can view the Hasura console! But where in the world is this fully qualified domain name (FQDN)? Well, the quickest way to find it is to navigate to the Azure portal and take a look at the details page of the container itself. In the upper right the FQDN is available as well as the IP that has been assigned to the container!

Navigating to that FQDN URI will bring up the Hasura console!

Next Steps

From here I’ll take up next steps in a subsequent post. I’ll get the container secured, get the user interface, CLI, or whatever application I build lined up with the API endpoints, and more!


Sign Up for Thrashing Code

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode.

Wat?! Ugh Terraform State is Complicated Sometimes! ‘url has no host’

As I’ve been working through the series (parts 1, 2, 3, and 4 so far), a number of issues have come up (like this one), and this seemed like a good one to post too. As I worked through, I started stumbling into an error around destroying images via terraform destroy. Now, I’m not just creating them with terraform apply and then trying to destroy them. I’m creating the images via Packer, then importing the state (see part 4 of the series for details), and then, when I clean up the environment with terraform destroy, I get this error.

[…lost image…]

I had taken the default azurerm_image resource configuration from the HashiCorp docs site and tweaked it just a little bit.

[code language="bash"]
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

The thing causing the error is the "{blob_uri}" placeholder, which, in general, I’d assume should be pulled or derived from the existing image created by Packer when it’s imported. But the syntax above just doesn’t cut it for actions post-import of the image state.

Time Consuming Troubleshooting

To troubleshoot and confirm this issue takes a long time. Create the image, which is ~15-20 minutes, then run an apply. The apply, even if most of the creation is minimized to imports and the few other things that are created, takes several minutes in Azure. Then a destroy takes several minutes. So all in all, one test cycle is about ~30 minutes.

The First Tricky Fix

I went through several iterations of attempting to get that part of the state pulled in on import. That didn’t work out so well. What did work, though, was the simplest of actions: I deleted blob_uri = "{blob_uri}"! Then upon terraform apply or terraform destroy I got a fully cycled application of changes after adding the state, and on destroy Terraform wiped out everything as expected!
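For clarity, here’s roughly what that resource looks like after the deletion (reconstructed here, not copied from the repo):

[code language="bash"]
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    size_gb  = 30
  }
}
[/code]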

Problem Fixed, Problem Created

On to the next things! But oh wait, there is another problem. Now if I set up a VM to be created based off of the image, the state doesn’t have the blob_uri. Great, back to square one, right? Not entirely. Subscribe, keep reading, and I’ll have the next steps for this coming real soon!

Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted setup for my development workspace is a DataStax Enterprise Cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built we can use to add nodes to our cluster and setup some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I setup in the second part of this series. It looks, with the installation steps taken out, just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked "inline" I set up the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed, for which I opted for the required version, 1.8. For more information about OpenJDK check out this material.

The next thing I needed to do was to get everything setup so that I could use this Azure Image to build an actual Virtual Machine. Since this process however is built outside of the primary Terraform main build process, I need to import the various assets that are created for Packer image creation and the actual Packer images. By importing these asset resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but check out this process and it might make more sense. If it is still confusing do let me know, ping me on Twitter @adron and I’ll elaborate or edit things so that they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out part 1 of this series, I set up the Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (HashiCorp Configuration Language) with a little bit of Bash as glue here and there. Then in part 2 and part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process are now in two entirely different, unknown states. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform Infrastructure doesn’t know the Packer Images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the various things that store the images. To import these resources into the Terraform state, before doing an apply, run the terraform import command.

In order to get all of the resources we need to operate against and build images from, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq 'map(select(.name == "basecassandra"))' | jq -r '.[0].id')
BASEDSE=$(az image list | jq 'map(select(.name == "basedse"))' | jq -r '.[0].id')
[/code]

Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command az image list is issued, which is then piped | into jq 'map(select(.name == "theimagenamehere"))'. This takes the results of the Azure CLI command and keeps only the entries whose name matches the appropriate image name. That result is then piped into another command, jq -r '.[0].id', which returns just the value of the id for the first matching entry. The -r is a switch that tells jq to return the raw data, without enclosing double quotes.

I also needed to import the resource group all of these are in. Following a similar style of piping one command’s results into another, I issued this command to get the resource group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state.

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images that are needed and the resource group. I added the images in an images.tf file and the resource group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into the Terraform state, which we can then issue further Terraform commands and applies against.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

[…lost image: terraform import output…]

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform, that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be, but it was taking too long for the next steps to get written up – fear not, they’re on the way! In the coming post I’ll cover the other resource elements we’ll need to import, what is next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.

Development Workspace with Terraform on Azure: Part 3 – Next Step Kubernetes

In part 1 of this series I set up Terraform and put together a basic setup for ongoing use. In part 2 I set up Packer and got a template started that installs Apache Cassandra 3.11.4.

In this part there’s one more key piece – a really key piece for iterating and moving quickly with development needs on a day-to-day basis. I need some development love with Kubernetes. Terraform is extremely well suited to spin this up in Azure! Since I set up Terraform in part 1, I’ll leave those specifics linked here.

1: Terraform ❤️ Kubernetes

There are a couple of different, and very important, aspects to how and what can be done with Kubernetes via Terraform. The first, which I’ll cover right here, is the creation of a Kubernetes cluster. Later I’ll cover more material related to working with the cluster itself and managing the resources within it. To get a Kubernetes cluster running with Terraform there’s a single resource we’ll need to use.

Opening up the project – the same as in the previous two blog articles in this series – I went straight into the main.tf file in the root and added the following Kubernetes resource.

[code language="javascript"]
resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.test.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.test.kube_config_raw}"
}
[/code]

With that I immediately applied the changes to build the environment with terraform apply, confirming with a yes.

The agent pool profile is what sets up our virtual machines that will make up our Kubernetes Cluster. In this case I’ve selected the Standard_D1_v2 just because that’s the default example, but depending on use case this may need to change. For more information about the various virtual machine sizes in Azure here are a few important links:

The service principal, as shown, is pulled from variables, as set up in part 1 and part 2 of this series.

The two output variables in the section of configuration above will print out the client cert and raw configuration, which we’ll need for other uses in the future. Be sure to put both of those somewhere they can be retrieved for future use. Ideally, keep them secure! I’ll speak to this more in future posts, but for now I’m going to focus on this Kubernetes cluster.
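If you do want to hang onto the raw kubeconfig for use with kubectl, one option – a sketch of my own, not part of this series’ repo, and assuming the hashicorp/local provider gets pulled in by terraform init – is to write it out to a file with the local_file resource. Just remember the file is effectively a credential, so keep it out of source control.

[code language="javascript"]
resource "local_file" "kube_config" {
  # Writes the raw kubeconfig from the AKS cluster to a local file for kubectl.
  content  = azurerm_kubernetes_cluster.test.kube_config_raw
  filename = "${path.module}/.kube/config"
}
[/code]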

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer image for the Apache Cassandra 3.11.4 image, in Azure the Service Principal, Resource Group, and related collateral, and now a Kubernetes Cluster to work with.

2: Kubernetes ❤️ Terraform

Alright, with a Kubernetes (K8s) cluster done, we can now add some more elements to it via Terraform to work with. Specifically, let’s add a pod and then get an Nginx container up and running.

The first thing we need, however, is a connection to the cluster that was created in the previous step. In the HCL (HashiCorp Configuration Language) above we set up two output variables. The data in those variables is needed for our ongoing connection to Kubernetes, however we don’t particularly need to pass them via output variables. The reason we don’t need to pass them into another phase of deployment is that Terraform can handle the creation of a Kubernetes cluster and the post-creation processing of creating pods and related collateral in the order it needs to occur. Since Terraform knows how to do this, there is a way to set up the connection information for the Kubernetes provider that prevents us from needing to post output variables. Before moving on, those should be removed. Once that is done, add the following provider for Kubernetes.

[code language="javascript"]
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}
[/code]

In this section of configuration the host is set from the previously created Kubernetes Cluster, then the username and password, then the certificates, and key. Each of these, as you can see, is pulled from the Kubernetes Cluster resource (azurerm_kubernetes_cluster) and then from the test cluster that was created. With this addition run terraform init to get the provider downloaded and ready for use.

With the connection now set, the provider downloaded, the next step is to add the Kubernetes Pod that we’ll need to run the Nginx container. Hat tip to the HashiCorp docs for this specific example.

[code language="javascript"]
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}
[/code]

This will set up a pod that runs the nginx:1.7.8 container image and makes it available on port 80. To make this container available as a service, however, there is one more step: create a Kubernetes Service that’ll make port 80 available and mapped to the container within Kubernetes.

[code language="javascript"]
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
[/code]

Alright, now the setup is almost 100% complete. The last step is to create another output variable that’ll provide a way for us to navigate to and make requests against the Nginx service; we’ll need either the IP or the hostname. Depending on the cloud provider, one or the other can be retrieved by asking for the load_balancer_ingress[0].ip value or the load_balancer_ingress[0].hostname. For Azure and GCP one can retrieve the IP; for AWS you’d want to specifically get the hostname.

In the end the HCL looks like this.

[code language="javascript"]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

output "lb_ip" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].ip
}
output "lb_hostname" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].hostname
}
[/code]

You can also check out this specific iteration of my developer workspace project on Github at the example-nginx-pod-on-kubernetes branch.

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer image for the Apache Cassandra 3.11.4 image, in Azure the Service Principal, Resource Group, and related collateral, and now a Kubernetes Cluster to work with.
  • With this second segment done, there is now a pod running an nginx container. The container is then running as a service with a port mapping for port 80.

Summary

At this point in the series there are enough elements to really start to get some work done deploying, building some applications, and getting some databases deployed! So subscribe here to the blog, follow me at @Adron on Twitter, @adronhall on Twitch (for even more coding), and subscribe to my YouTube channel here. More material coming about how to get all of this wired together and running, until next post, cheers!