Terrazura – A Build Out of an Azure-based, Hasura GraphQL API on Postgres

I created this repo https://github.com/Adron/terrazura during a live stream on my Twitch Thrashing Code Channel 🤘 at 10am on the 30th of December, 2020. The VOD is now available on my YouTube Thrashing Code Channel https://youtube.com/thrashingcode. A rough as hell year, but I wanted to wrap it up with some solid content. In this stream I tackled a ton of specifics, in detail, about getting Hasura deployed in Azure, backed by Postgres, designing and creating a database schema, using database schema migrations, and all sorts of tips n’ tricks along the way. 3 hours of solid how-to-get-shit-done material!

For live streams, check out and follow at https://www.twitch.tv/thrashingcode​ 👊🏻 or for VOD viewing check out https://youtube.com/thrashingcode

A point in coding during the video!

02:49​ – Shout out to the stream sponsor, Azure, and links to some collateral material.
14:50​ – In this first segment, I start but run into some troubleshooting needs around the provider versions for Terraform in regards to Azure. You can skip this part unless you want to see what issue I ran into.
18:24​ – Since I ran into issues with the current version of Terraform I had installed, at this time I show a quick upgrade to the latest version.
27:22 – After upgrading, I fight through trial-and-error execution of Terraform until I finally get the right combination of provider and Terraform versions.
27:53​ – Adding the first Terraform resource, the Azure resource group.
29:47 – Azure Portal oddness, just to take note of if/when you’re working through this. A workaround comes later in the stream.
32:00​ – Adding the Postgres server resource.
44:43 – In this segment I switched over to JetBrains’ IntelliJ to do the rest of the work. I also tweak the IDE to re-add the plugin for the Material Design themes and icons. If you use this IDE, it’s very much IMHO worth getting this plugin to switch between themes.
59:32 – After getting leveled-up with the IDE, I wrap up the #Postgres server resource and #terraform apply the overall set of resources. At this point I also move forward with the infrastructure as code, emphasizing additive changes to the immutable infrastructure by favoring terraform apply and minimizing any terraform destroy use.
1:02:07 – At this time, I try figuring out the portal issue with az logout and logging back in with az login to Azure. Still no resources shown, but…
1:08:47 – eventually I realize I have to use the hack solution of pasting the subscription ID into the Azure portal to get resources for the particular subscription account, which seems highly counterintuitive since it’s the ONLY account. 🧐
1:22:54 – The next thing I set up, now that I have variables that need to be passed in on every terraform execution, is a script to do this for me.
1:29:35​ – Next up is adding the database to the database server and firewall rule. Also we get to see Jetbrains #Intellij​ HCL plugin introspection at work adding required properties to the firewall resource! A really useful feature.
1:38:24​ – Next up, creating the Azure container to deploy our Hasura GraphQL API for #Postgres​ to!
1:51:42​ – BAM! API Server is done and launched! I’ve got a live #GraphQL​ API up and running in Azure and we’re ready to start building a data model!
1:56:22 – In this segment I show how to turn off the public facing console and shift one’s development workflow to the local Hasura console, working against either a local OR your live dev environment.
1:58:29 – Next segment I get into schema migrations, initializing a directory structure for Hasura CLI use, and metadata, migrations, and related data. Including an update to the latest CLI so you can see how to do that, after running into a slight glitch. 😬 (There’s a rough sketch of that CLI workflow just after this list.)
2:23:02​ – I also shift over to dbdiagram to graphically build out some of the schema via their markdown, then use the SQL export option for #postgres​ combined with Hasura’s option to execute plain ole SQL via migrations…
2:31:48​ – Getting a bit more in depth in this segment, I delve through – via the Hasura console – to build out relationships between the tables and data so the graphql queries can introspect accordingly.
2:40:30​ – Next segment, graphql time! I show some of the options of what is available immediately for queries and mutations via the console.
2:50:36​ – Then some more details about metadata. I’m going to do a stream with further details, since I was a little fuzzy on some of those details myself, in the very very near future. However a good introduction to what the metadata does for the #graphql​ API.
2:59:07​ – Then as a wrap up to all of this… I nuke EVERYTHING and deploy it all out to Azure again inclusive of schema migrations, metadata, etc. 🤘🏻
3:16:30​ – Final segment, I add some data to the database and get into a few basic queries and mutations in #graphql​ via the #graphiql​ console interface in #Hasura​.
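
For reference, the Hasura CLI workflow touched on in the timestamps above roughly follows this shape. This is only a sketch; the project directory name and endpoint are placeholders, not the ones used in the stream.

[code language="bash"]
# Rough shape of the Hasura CLI migration/metadata workflow (placeholder names).
hasura init terrazura-app --endpoint https://my-hasura-instance.example.com
cd terrazura-app
hasura migrate apply     # apply SQL schema migrations
hasura metadata apply    # apply tracked tables, relationships, permissions
hasura console           # local console, working against the configured endpoint
[/code]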

Wat?! Ugh Terraform State is Complicated Sometimes! ‘url has no host’

As I’ve been working through the series (parts 1, 2, 3, and 4 so far), a number of issues have come up (like this one), and this seemed like a good one to post too. As I’ve been working through, I started stumbling into this error around destroying images via terraform destroy. Now, I’m not just creating them with terraform apply and then trying to destroy them. I’m creating the images via Packer, then importing the state (see part 4 of the series for details), and then when I clean up the environment with terraform destroy it shows this error.

[…lost image…]

I had taken the default azurerm_image resource configuration from the Hashicorp docs site that I tweaked just a little bit.

[code language="bash"]
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

The thing causing the error is the "{blob_uri}", which, in general, I’d assume should be pulled or derived from the existing image created by Packer when imported. But the syntax above just doesn’t cut it for actions post-import of the image state.
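
To compare what the import actually captured against what Azure has for the image, a couple of checks like these can help. This is a sketch using the resource names from the example; note that for a Packer-built managed image the blob URI may well come back empty:

[code language="bash"]
# What Terraform recorded for the imported image.
terraform state show azurerm_image.basedse

# What Azure reports for the image's OS disk blob URI, if it has one.
az image show --resource-group adronsimages --name basedse \
  --query "storageProfile.osDisk.blobUri" --output tsv
[/code]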

Time Consuming Troubleshooting

To troubleshoot and confirm this issue takes a long time. Create the image, which takes ~15-20 minutes, then run an apply. The apply, even if most of the creation is minimized to imports and the few other things that are created, takes several minutes in Azure. Then a destroy takes several minutes. So all in all, one test cycle is roughly 30 minutes.

The First Tricky Fix

I went through several iterations of attempting to get that part of the import of the state pulled in. That didn’t work out so well. What did, though, was the simplest of actions: I deleted the blob_uri = "{blob_uri}" line!  Then upon terraform apply or terraform destroy I got a full cycle of applied changes after adding the state, and on destroy Terraform wiped out everything as expected!
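
With the blob_uri line gone, a quick pass like this confirms the cycle works again (a sketch):

[code language="bash"]
terraform plan      # review what an apply would change now that blob_uri is gone
terraform destroy   # tears the imported resources down without the 'url has no host' error
[/code]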

Problem Fixed, Problem Created

On to the next things! But oh wait, there is another problem. Now if I set up a VM to be created based off of the image, the state doesn’t have the blob_uri. Great, back to square one, right? Not entirely. Subscribe, keep reading, and I’ll have the next steps for this coming real soon!

Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted setup for my development workspace is a DataStax Enterprise Cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built we can use to add nodes to our cluster and setup some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I setup in the second part of this series. It looks, with the installation steps taken out, just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked “inline” I setup the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install -y libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed; I opted for the required version, 1.8. For more information about OpenJDK, check out the project’s documentation.

The next thing I needed to do was to get everything set up so that I could use this Azure Image to build an actual Virtual Machine. Since this process, however, is built outside of the primary Terraform build process, I need to import the various assets that are created for Packer image creation along with the actual Packer images. By importing these asset resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but check out this process and it might make more sense. If it is still confusing do let me know, ping me on Twitter @adron and I’ll elaborate or edit things so that they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out part 1 of this series, I set up Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (HashiCorp Configuration Language) with a little bit of Bash as glue here and there. Then in Part 2 and Part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process are now in two entirely different, disconnected states. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform Infrastructure doesn’t know the Packer Images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the various things that store the images. To import these resources into the Terraform state, before doing an apply, run the terraform import command.
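
The general shape of that command is the Terraform resource address followed by the full Azure resource ID, roughly like this sketch (the address and ID here are placeholders):

[code language="bash"]
# terraform import <resource address> <azure resource id>
terraform import azurerm_image.example \
  /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/images/<image-name>
[/code]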

In order to get all of the resources we need in which to operate and build images, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure Image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq 'map(select(.name == "basecassandra"))' | jq -r '.[0].id')
BASEDSE=$(az image list | jq 'map(select(.name == "basedse"))' | jq -r '.[0].id')
[/code]

Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command is issued, az image list, which is then piped | into the jq command jq 'map(select(.name == "theimagenamehere"))'. This command takes the results of the Azure CLI command and keeps only the image entries whose name matches the appropriate image name. That result is then piped into another command, jq -r '.[0].id', which returns just the value of the id for the first matching image. The -r is a switch that tells jq to just return the raw data, without enclosing double quotes.
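
If it helps to see what that filter is operating on, a quick sanity check like this (assuming az login has already been done) shows the names and ids available:

[code language="bash"]
# List just the name and id of every image the CLI can see.
az image list | jq 'map({name, id})'
[/code]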

I also needed to import the resource group all of these are in. Following a similar style of piping one command’s results into another, I issued this command to get the Resource Group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state.

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images that are needed and the Resource Group. I added the images in an images.tf file and the Resource Group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into the Terraform state, which we can then issue further Terraform commands and applies from.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

[Image: terraform import command output]
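
A quick way to double check that the imports landed is to list what’s now in state; the three imported addresses should show up.

[code language="bash"]
# Optional verification after the imports complete.
terraform state list
[/code]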

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform, that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be. But it was taking too long for the next steps to get written up – but fear not, they’re on the way! In the coming post I’ll cover more of the other resource elements we’ll need to import, what is next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra Clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.

Development Workspace with Terraform on Azure: Part 3 – Next Step Kubernetes

In part 1 of this series I setup Terraform and put together a basic setup for ongoing use. In part 2 I setup Packer and got a template started that installs Apache Cassandra 3.11.4.

In this part there’s one more key piece – a really key piece for iterating and moving quickly with development needs on a day-to-day basis. I need some development love with Kubernetes. Terraform is extremely well suited to spin this up in Azure! Since I set up Terraform in part 1, I’ll leave those specifics linked here.

1: Terraform ❤️ Kubernetes

There are a couple of different, and very important, aspects to how and what can be done with Kubernetes with Terraform. First, which I’ll cover right here, is the creation of a Kubernetes Cluster. Later I’ll cover more material related to working with the cluster itself and managing the resources within the cluster. To get a Kubernetes cluster running with Terraform there’s a single resource we’ll need to use.

Opening up the project – the same as in the previous two blog articles in this series – I went straight into the main.tf file in the root and added the following Kubernetes resource.

[code language="javascript"]
resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.test.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.test.kube_config_raw}"
}
[/code]

With that I immediately applied the changes to build the environment with terraform apply, confirming with a yes.

The agent pool profile is what sets up the virtual machines that will make up our Kubernetes Cluster. In this case I’ve selected the Standard_D1_v2 just because that’s the default example, but depending on use case this may need to change. For more information about the various virtual machine sizes in Azure, check the Azure documentation on VM sizes.

The Service Principal as shown is pulled from variables, as set up in part 1 and part 2 of this series.

The two output variables in the section of configuration above will print out the client cert and raw configuration which we’ll need for other uses in the future. Be sure to put both of those somewhere that can be retrieved for future use. Ideally, keep them secure! I’ll speak to this more in future posts, but for now I’m going to focus on this Kubernetes Cluster.
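
For example, one rough way to capture them right after the apply finishes (the file names here are just placeholders, stash them wherever suits your setup):

[code language="bash"]
# Save the two Terraform outputs for later use; keep these files out of source control.
terraform output kube_config > azurek8s_kube_config
terraform output client_certificate > azurek8s_client_cert
chmod 600 azurek8s_kube_config azurek8s_client_cert
[/code]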

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, in Azure the Service Principal, Resource Group, and related collateral, and now a Kubernetes Cluster to work with.

2: Kubernetes ❤️ Terraform

Alright, with a Kubernetes (K8s) cluster done, we can now add some more elements to it via Terraform to work with. Specifically, let’s add a pod and then get an Nginx container up and running.

The first thing we need, however, is a connection to the cluster that was created in the previous step. In the HCL (HashiCorp Configuration Language) above we set up two output variables. The data in those variables is needed for our ongoing connection to Kubernetes; however, we don’t particularly need to pass them via output variables. The reason we don’t need to pass them into another phase of deployment is that Terraform can handle the creation of a Kubernetes cluster and the post-creation processing of creating pods and related collateral in the order it needs to occur. Since Terraform knows how to do this, there is a way to set up the connection information for the Kubernetes provider that prevents us from needing those output variables. Before moving on, those should be removed. Once that is done, add the following provider for Kubernetes.

[code language="javascript"]
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}
[/code]

In this section of configuration the host is set from the previously created Kubernetes Cluster, then the username and password, then the certificates, and key. Each of these, as you can see, is pulled from the Kubernetes Cluster resource (azurerm_kubernetes_cluster) and then from the test cluster that was created. With this addition run terraform init to get the provider downloaded and ready for use.

With the connection now set, the provider downloaded, the next step is to add the Kubernetes Pod that we’ll need to run the Nginx container. Hat tip to the HashiCorp docs for this specific example.

[code language="javascript"]
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}
[/code]

This will set up a pod that runs an nginx:1.7.8 container image and makes it available on port 80. To make this container available as a service, however, there is one more step: create a Kubernetes Service that’ll make port 80 available and mapped to the container within Kubernetes.

[code language="javascript"]
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
[/code]

Alright, now the setup is almost 100% complete. The last step is to create another output variable that’ll provide a way for us to navigate to and make requests against the Nginx service; we’ll need either the IP or the hostname. Depending on the cloud provider, one or the other can be retrieved by asking for the load_balancer_ingress[0].ip value or the load_balancer_ingress[0].hostname. For Azure and GCP one can retrieve the IP; for AWS you’d want to specifically get the hostname.

In the end the HCL looks like this.

[code language="javascript"]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

output "lb_ip" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].ip
}
output "lb_hostname" {
  # On Azure this is typically empty; AWS load balancers report a hostname instead of an IP.
  value = kubernetes_service.nginx.load_balancer_ingress[0].hostname
}
[/code]

You can also check out this specific iteration of my developer workspace project on Github at the example-nginx-pod-on-kubernetes branch.
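
Once the apply completes, a quick end-to-end check against the service is worth doing. This is a sketch; it just pulls the lb_ip output and makes a request against it:

[code language="bash"]
# Grab the load balancer IP from the Terraform output and hit nginx on port 80.
LB_IP=$(terraform output lb_ip)
curl -I "http://${LB_IP}"
[/code]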

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, in Azure the Service Principal, Resource Group, and related collateral, and now a Kubernetes Cluster to work with.
  • With this second segment done, there is now a pod running an nginx container. The container is then running as a service with a port mapping for port 80.

Summary

At this point in the series there are enough elements to really start to get some work done deploying, building some applications, and getting some databases deployed! So subscribe here to the blog, follow me at @Adron on Twitter, @adronhall on Twitch (for even more coding), and subscribe to my YouTube channel here. More material coming about how to get all of this wired together and running, until next post, cheers!

Development Workspace with Terraform on Azure: Part 2 – Packer Images

Series Links: 1, 2 (this entry)

Today I’m going to walk through the install and setup of Packer, and get our first builds running for Azure. I started this work in the previous article in this series, “Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI“. This is needed to continue and simplify what I’ll be elaborating on in subsequent articles.

1: Packer

Packer is another great HashiCorp tool that is available to build virtual machine and related images for a wide range of platforms. Specifically, this article is going to cover Azure, but the list is long: AWS, GCP, Alicloud, CloudStack, DigitalOcean, Docker, Hyper-V, VirtualBox, VMware, and others!

Download & Setup

To download Packer navigate over to the HashiCorp download page. Currently it is at version 1.4.2. I installed this executable the same way I installed Terraform, by untarring or unzipping the executable into a path I have set up for all my executable CLIs. As shown, I simply opened up the compressed file and pulled the executable over into my Apps directory, making it immediately executable, just like Terraform, from anywhere and any path on my system. If you want it somewhere specific and need to set up a path, refer to the previous blog entry for details and links to where and how to set that up.
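
Roughly, the manual install looks like the following on Linux; the paths and archive name here are from my setup and the version mentioned above, so adjust to taste:

[code language="bash"]
# Unpack the Packer binary into a directory that's on the PATH.
cd ~/Downloads
unzip packer_1.4.2_linux_amd64.zip -d ~/Apps
export PATH=$PATH:~/Apps    # or add this line to ~/.bashrc / ~/.bash_profile
packer version
[/code]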

[Image: packer-apps.png]

If the packer command is executed, which I’ve done in the following image, it shows the basic commands as a good CLI should (cuz HashiCorp makes really good CLI tools!).

[Image: packer default command output]

The command we’ll be working with mostly to get images set up is the build command. The inspect, validate, and version commands, however, will become mainstays once one really gets into Packer; they’re worth checking out further.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.

2: Packer Azure Setup

The next thing that needs to be done is to setup Packer to work with Azure. To get these first few images building the Device Login will be used for authorization. Eventually, upon further automation I’ll change this to a Service Principal and likely recreate images accordingly, but for now Device Login is going to be fine for these examples. Most of the directions are available via HashiCorp Docs located here. As always, I’m going to elaborate a bit beyond the docs.

The Device Login will need three pieces of information: a SubscriptionID, a Resource Group, and a Storage Account. The Resource Group is used in Azure to group the resources that share a particular life-cycle of work. The Storage Account is just a place to store the images that will be created. The SubscriptionID is the ID of your account with Azure.

NOTE: To enable this mode, simply don’t set client_id and client_secret in the Packer build provider.

To get your SubscriptionID just type az login at the terminal. The response will include this value designated simply as “id”. Another value that you’ll routinely need is displayed here too: the “tenant_id”. Once logged in you can also get a list of accounts and respective SubscriptionID values for each of the accounts you might have, as I’ve done here.

[Image: az login output listing the authorized accounts]

Here you can see I’ve authenticated against and been authorized in two Azure Accounts. Location also needs to be determined, and can be done so with az account list-locations.

[Image: az account list-locations output]

This list will continue and continue and continue. From it, a singular location will be chosen for the storage.
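
For reference, the handful of az commands used to gather these values looks roughly like this:

[code language="bash"]
az login                                  # "id" is the SubscriptionID, "tenantId" the tenant
az account list --output table            # all subscriptions this login can see
az account list-locations --output table  # pick a location for the resources
[/code]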

In a bash script, I go ahead and assign these values that I’ll need to build this particular image.

[code]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
[/code]

Now the next step is to create the Resource Group and Storage. With a few echo commands to print out what is going on to the console, I add these commands as shown.

[code]
echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage
[/code]

The final command here is another echo just to identify what is going to be built and then the Packer build command itself.

[code]
echo 'Building Apache cluster node image.'

packer build node.json
[/code]

The script is now ready to be run. I’ve placed the script inside my packer phase of my build project (see previous post for overall build project and the repository here for details). The last thing needed is a Packer template to build.
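
Assuming the snippets above are saved into a single script file, running it is just the following; the file name here is a placeholder:

[code language="bash"]
# Make the build script executable and run it.
chmod +x build-node-image.sh
./build-node-image.sh
[/code]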

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute Packer for a build without needing to pass parameters every single time and simplifies assurances that the respective storage and resource groups are available for creation of the image.

3: Azure Images

The next step is getting an image or some images built that are needed for further work. For my use case I want several key images built around various servers and their respective services that I want to use to deploy via Terraform. Here’s an immediate shortlist of images we’ll create:

  1. Apache Cassandra Node – An image that is built with the latest Apache Cassandra installed and ready for deployment into a cluster. In this particular case that would be Apache Cassandra v4, albeit I’m going to go with 3.11.4 first and then work on getting v4 installed in a subsequent post. The installation instructions we’ll mostly be following can be found here.
  2. Gitlab Server – This is a product I like to use, especially for pre-rolled build services and all of those needs. For this it takes care of all source control, build services, and any related work that needs to be from inside the workspace itself that I’m building. i.e. it’s a necessary component for an internal corporate style continuous build or even continuous integration setup. It just happens this is all getting setup for use internally but via a public cloud provider on Azure! It can be done in parallel to other environments that one would prospectively control and manage autonomous of any cloud provider. The installation instructions we’ll largely be following are also available via Gitlab here.
  3. DataStax Enterprise 6.7 – DataStax Enterprise is built on Apache Cassandra and extends the capabilities of that database with multi-model options for graph, analytics, search, and many other capabilities and security. For download and installation most of the instructions I’ll be using are located here.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute Packer for a build without needing to pass parameters every single time and simplifies assurances that the respective storage and resource groups are available for creation of the image.
  • Now there is a list of images we need to create, in which we can work from to create the images in Azure.

4: Building an Azure Image with Packer

The first image I want to create is going to be used for an Apache Cassandra 3.11.4 Cluster. First a basic image test is a good idea. For that I’ve used the example below to build a basic Ubuntu 16.04 image.

In the code below there are also two environment variables set up, which I’ve included in my bash profile so they’re available on my machine whenever I run this Packer build or any of the Terraform builds. You can see they’re set up in the variables section with a "{{env `TF_VAR_tenant_id`}}". Note that TF_VAR_tenant_id is prefaced with TF_VAR per Terraform convention, which in Terraform makes the variable just “tenant_id” when used. Also note that what might look like single quotes around TF_VAR_tenant_id are indeed backticks, not single quotes. Sometimes the blog formats those oddly so I wanted to call that out! (For an example of the environment variables, I set up all of them for the Service Principal setup below, just scroll further down.)


[code language="javascript"]
{
  "variables": {
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "base_ubuntu_image",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'does this even work?'"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

During this build, when Packer begins, there will be several prompts to authorize the resources being built. Because Device Login was used earlier in the post instead of a Service Principal, this step is necessary. It looks something like this.

[Image: Packer device login prompt]

You’ll need to then select, copy, and paste that code into the page at https://microsoft.com/devicelogin.

[Image: entering the device login code]

This will happen a few times and eventually the build will complete and the image will be available. What we really want to do however is get a Service Principal setup and use that so the process can be entirely automated.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute Packer for a build without needing to pass parameters every single time and simplifies assurances that the respective storage and resource groups are available for creation of the image.
  • We have one base image building, to prove out that our build template we’ll start with is indeed working. It is always a good idea to get a base build image template working to provide something in which to work from.

5: Azure Service Principal for Automation

Ok, a Service Principal is needed now. This is a single command, but it has to be very specifically this command, from what I can tell. Before running it though, know where you are going to store, or how you will pass, the client id and client secret that will be provided when the principal is created.

The command for this is az ad sp create-for-rbac -n "Packer" --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000 where all those zeros are the Subscription ID. When this executes and completes, it returns all the peripheral values that are needed for authorization via the Service Principal.
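
For clarity, here’s that same command as a runnable block (the zeros stand in for the real Subscription ID); the JSON that comes back includes the appId (client id), password (client secret), and tenant values needed below:

[code language="bash"]
# Create a Service Principal scoped to the subscription, with the contributor role.
az ad sp create-for-rbac -n "Packer" --role contributor \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000
[/code]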

[Image: az ad sp create-for-rbac output]

One of the easiest ways to keep all of the bits out of your repositories is to setup environment variables. If there’s a secrets vault or something like that then it would be a good idea to use that, but for this example I’m going to setup use of environment variables in the template.

Another thing to notice, which is important when building these systems, is the “Retrying role assignment creation: 1/36” message, which points to the fact there are 36 retries built into this because of timing and other irregularities in working with cloud systems. For various reasons, this means when coding against such systems we routinely have to put in timeouts, waits, and other errata to ensure we get the messages we want or mark things disabled as needed.

After running that, just for clarity, here’s what my .bashrc/bash_profile file looks like with the added variables.

[code]
export TF_VAR_clientid="00000000-0000-0000-0000-000000000"
export TF_VAR_clientsecret="00000000-0000-0000-0000-000000000"
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000"
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000"
[/code]

With that set, a quick source ~/.bashrc or source ~/.bash_profile and the variables are all set for use.
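
A quick way to confirm the exports took effect in the current shell:

[code language="bash"]
# Re-source the profile and check that the TF_VAR_* variables are present.
source ~/.bash_profile
env | grep TF_VAR_
[/code]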

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute Packer for a build without needing to pass parameters every single time and simplifies assurances that the respective storage and resource groups are available for creation of the image.
  • We have one base image building, to prove out that our build template we’ll start with is indeed working. It is always a good idea to get a base build image template working to provide something in which to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.

6: Apache Cassandra 3.11.4 Image

Ok, all the pieces are in place. With confirmation that the image builds, that Packer is installed correctly, with Azure Service Principal, Managed Resource Group, and related collateral setup now, building an actual image with installation steps for Apache Cassandra 3.11.4 can now begin!

First add the client_id and client_secret environment variables to the variables section of the template.

[code]
"client_id": "{{env `TF_VAR_clientid`}}",
"client_secret": "{{env `TF_VAR_clientsecret`}}",
"tenant_id": "{{env `TF_VAR_tenant_id`}}",
"subscription_id": "{{env `TF_VAR_subscription_id`}}",
[/code]

Next add those same variables to the builder for the image in the template.

[code]
"client_id": "{{user `client_id`}}",
"client_secret": "{{user `client_secret`}}",
"tenant_id": "{{user `tenant_id`}}",
"subscription_id": "{{user `subscription_id`}}",
[/code]

That whole top section of template configuration looks like this now.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "something",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
[/code]

Now the image can be built, but let’s streamline the process a little bit more. Since I only want one image at any particular time from this template, and I want to use the template in a way where I can create images and pass in a few more pertinent pieces of information, I’ll tweak that in the Packer build script.

Below I’ve added the variable name for the image, dubbed Cassandra, so that I can specifically reference this image in the bash script with IMAGECASSANDRA="basecassandra". Next I added a command to delete any existing image of that name with the az image delete -g $GROUPNAME -n $IMAGECASSANDRA line of script. Finally, toward the end of the file I’ve added the variable to be passed into the template with packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json. Note the odd way to concatenate imagename and the variable passed in from the bash script. It isn’t super clear which way to do this, but after some troubleshooting this at least works on Linux! I’m assuming it works on MacOS; if anybody tries it and it doesn’t, please let me know.

[code language="bash"]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"

echo 'Deleting existing image.'

az image delete -g $GROUPNAME -n $IMAGECASSANDRA

echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage

echo 'Building Apache cluster node image.'

packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]

With that done the build can be run without needing to manually delete the image each time, since deletion is part of the script now. The next part to add to the template is more of the needed installation steps for Apache Cassandra. These steps can be found on the Apache Cassandra site here.

Under the provisioners section of the Packer template I’ve added the installation steps and removed the sudo part of the commands. Since this runs as root there’s really no need for sudo. The inline part of the provisioner when I finished looks like this.

[code]
"inline": [
"echo 'Starting Cassandra Repo Add & Installation.'",
"echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
"curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
"apt-get update",
"apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
"apt-get -y install cassandra"
],
[/code]

With that completed we now have the full workable template to build a node for use in starting or using as a node within an Apache Cassandra cluster. All the key pieces are there. The finished template is below, with the build script just below that.


[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'Starting Cassandra Repo Add & Installation.'",
      "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
      "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
      "apt-get update",
      "apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
      "apt-get -y install cassandra"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]


[code language="bash"]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"

echo 'Deleting existing image.'
az image delete -g $GROUPNAME -n $IMAGECASSANDRA

echo 'Creating the managed resource group for images.'
az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'
az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage

echo 'Building Apache cluster node image.'
packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]


Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute Packer for a build without needing to pass parameters every single time and simplifies assurances that the respective storage and resource groups are available for creation of the image.
  • We have one base image building, to prove out that our build template we’ll start with is indeed working. It is always a good idea to get a base build image template working to provide something in which to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.
  • Packer adds the package repository and installs Cassandra 3.11.4 on Ubuntu 18.04 LTS in Azure.

I’ll get to the next images real soon, but for now, go enjoy the weekend and the next post will be up in this series in about a week and a half!