Development Workspace with Terraform on Azure: Part 3 – Next Step Kubernetes

In part 1 of this series I set up Terraform and put together a basic setup for ongoing use. In part 2 I set up Packer and got a template started that installs Apache Cassandra 3.11.4.

In this part there's one more key piece, and it's a really key piece for iterating and moving quickly with day-to-day development needs: I need some development love with Kubernetes. Terraform is extremely well suited to spin this up in Azure! Since I set up Terraform in part 1, I'll leave those specifics linked here.

1: Terraform ❤️ Kubernetes

There are a couple of different, and very important, aspects to how and what can be done with Kubernetes via Terraform. First, which I'll cover right here, is the creation of a Kubernetes Cluster. Later I'll cover more material related to working with the cluster itself and managing the resources within the cluster. To get a Kubernetes cluster running with Terraform there's a single resource we'll need to use.

I opened up the project, the same one as in the previous two blog articles in this series, went straight into the main.tf file in the root, and added the following Kubernetes resource.

[code language="javascript"]
resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.test.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.test.kube_config_raw}"
}
[/code]

With that in place I immediately applied the changes to build the environment with terraform apply, confirming with a yes when prompted.

The agent pool profile is what sets up the virtual machines that will make up our Kubernetes Cluster. In this case I've selected the Standard_D1_v2 just because that's the default example, but depending on use case this may need to change. For more information about the various virtual machine sizes available, check Azure's virtual machine sizes documentation.
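If the Azure CLI from part 1 is handy, a quick way to survey what sizes are available in a region is the az vm list-sizes command; a minimal sketch, assuming the West US 2 region used throughout this series:

[code language=bash]
# List the VM sizes available in the region used for this workspace.
# Assumes the Azure CLI is already logged in (az login), per part 1 of this series.
az vm list-sizes --location westus2 --output table
[/code]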

The Service Principal as shown is pulled from variables, as set up in part 1 and part 2 of this series.

The two output variables in the section of configuration above will print out the client cert and the raw kube configuration, which we'll need for other uses in the future. Be sure to put both of those somewhere they can be retrieved for future use. Ideally, keep them secure! I'll speak to this more in future posts, but for now I'm going to focus on this Kubernetes Cluster.
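As a sketch of retrieving those values after the apply completes, the terraform output command can pull either one on demand; the destination file name here is just an example, not a requirement:

[code language=bash]
# Print the client certificate output to the terminal.
terraform output client_certificate

# Write the raw kubeconfig out to a file for later use; keep it somewhere secure.
terraform output kube_config > aks-workspace-kubeconfig
[/code]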

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer image for Apache Cassandra 3.11.4, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes Cluster to work with.

2: Kubernetes ❤️ Terraform

Alright, with a Kubernetes (K8s) cluster created, we can now add some more elements to it via Terraform to work with. Specifically, let's add a pod and then get an Nginx container up and running.

The first thing we need, however, is a connection to the cluster that was created in the previous step. In the HCL (HashiCorp Configuration Language) above we set up two output variables. The data in those variables is needed for our ongoing connection to Kubernetes, however we don't particularly need to pass them via output variables. The reason is that Terraform can handle the creation of a Kubernetes cluster and the post-creation work of creating pods and related collateral in the order it needs to occur. Since Terraform knows how to do this, there is a way to set up the connection information for the Kubernetes provider that removes the need for those output variables. Before moving on, those outputs should be removed. Once that is done, add the following provider for Kubernetes.

[code language="javascript"]
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}
[/code]

In this section of configuration the host is set from the previously created Kubernetes Cluster, then the username and password, then the certificates and key. Each of these, as you can see, is pulled from the Kubernetes Cluster resource (azurerm_kubernetes_cluster), specifically from the test cluster that was created. With this addition run terraform init to get the provider downloaded and ready for use.

With the connection now set and the provider downloaded, the next step is to add the Kubernetes Pod that we'll need to run the Nginx container. Hat tip to the HashiCorp docs for this specific example.

[code language="javascript"]
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}
[/code]

This will set up a pod that runs the nginx:1.7.8 container image and makes it available on port 80. To make this container available as a service, however, there is one more step: create a Kubernetes Service that makes port 80 available and maps it to the container within Kubernetes.

[code language="javascript"]
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
[/code]

Alright, now the setup is almost 100% complete. The last step is to create another output variable that'll provide a way for us to navigate to and make requests against the Nginx service; for that we'll need either the IP or the hostname. Depending on the cloud provider, one or the other can be retrieved by asking for the load_balancer_ingress[0].ip value or the load_balancer_ingress[0].hostname value. For Azure and GCP one can retrieve the IP; for AWS you'd want to specifically get the hostname.
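Once the full configuration below has been applied, a quick sanity check is to pull that output and make a request against it; a minimal sketch, assuming the lb_ip output defined below:

[code language=bash]
# Grab the load balancer IP from the Terraform outputs and hit the Nginx default page.
LB_IP=$(terraform output lb_ip)
curl -I http://$LB_IP
[/code]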

In the end the HCL looks like this.

[code language="javascript"]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.adrons_resource_group_workspace.location}"
  resource_group_name = "${azurerm_resource_group.adrons_resource_group_workspace.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = var.clientid
    client_secret = var.clientsecret
  }

  tags = {
    Environment = "Production"
  }
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"

      port {
        container_port = 80
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

output "lb_ip" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].ip
}

# Note: on Azure the ingress typically provides an IP rather than a hostname;
# the hostname attribute below is what you'd read on AWS.
output "lb_hostname" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].hostname
}
[/code]

You can also check out this specific iteration of my developer workspace project on Github at the example-nginx-pod-on-kubernetes branch.

Verification Checklist

  • Based on the infrastructure in part 1 and part 2 and this addition, there is now the Packer image for Apache Cassandra 3.11.4, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes Cluster to work with.
  • With this second segment done, there is now a pod running an nginx container. The container is then exposed as a service with a port mapping for port 80.

Summary

At this point in the series there are enough elements to really start to get some work done deploying, building some applications, and getting some databases deployed! So subscribe here to the blog, follow me at @Adron on Twitter, @adronhall on Twitch (for even more coding), and subscribe to my YouTube channel here. More material coming about how to get all of this wired together and running, until next post, cheers!

Development Workspace with Terraform on Azure: Part 2 – Packer Images

Series Links: 1, 2 (this entry)

Today I’m going to walk through the install and setup of Packer, and get our first builds running for Azure. I started this work in the previous article in this series, “Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI“. This is needed to continue and simplify what I’ll be elaborating on in subsequent articles.

1: Packer

Packer is another great Hashicorp tool that is available to build virtual machine and related images for a wide range of platforms. Specifically this article is going to cover Azure but the list is long with AWS, GCP, Alicloud, Cloudstack, Digital Ocean, Docker, Hyper-V, Virtual Box, VMWare, and others!

Download & Setup

To download Packer navigate over to the Hashicorp download page. Currently it is at version 1.4.2. I installed this executable the same way I installed Terraform, by untarring or unzipping the executable into a path I have set up for all my executable CLIs. As shown, I simply opened up the compressed file and pulled the executable over into my Apps directory, making it immediately executable, just like Terraform, from any path on my system. If you want it somewhere specific and need to set up a path, refer to the previous blog entry for details and links to where and how to set that up.
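For reference, a minimal sketch of that unzip-and-place step on Linux, assuming the same ~/Apps directory from the Terraform install and the 1.4.2 release archive:

[code language=bash]
# Unzip the Packer executable into the Apps directory that is already on the PATH.
# The archive name and destination are from my setup; adjust for your own.
unzip packer_1.4.2_linux_amd64.zip -d ~/Apps
chmod +x ~/Apps/packer
packer version
[/code]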

packer-apps.png

If the packer command is executed, which I’ve done in the following image, it shows the basic commands as a good CLI would do (cuz Hashicorp makes really good CLI tools!)

packer-default-response

The command we'll be working with mostly to get images set up is the build command. The inspect, validate, and version commands, however, will become mainstays once one really gets into Packer; they're worth checking out further.
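As a quick usage sketch against a template file (node.json is the template built later in this post):

[code language=bash]
# Check the template for syntax and configuration errors before building.
packer validate node.json

# See the variables, builders, and provisioners the template defines.
packer inspect node.json
[/code]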

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.

2: Packer Azure Setup

The next thing that needs to be done is to setup Packer to work with Azure. To get these first few images building the Device Login will be used for authorization. Eventually, upon further automation I’ll change this to a Service Principal and likely recreate images accordingly, but for now Device Login is going to be fine for these examples. Most of the directions are available via HashiCorp Docs located here. As always, I’m going to elaborate a bit beyond the docs.

The Device Login will need three pieces of information: a SubscriptionID, a Resource Group, and a Storage Account. The Resource Group is what Azure uses to group the resources created here, which live around a particular life-cycle of work. The Storage Account is just a place to store the images that will be created. The SubscriptionID is the ID of your account with Azure.

NOTE: To enable this mode, simply don't set client_id and client_secret in the Packer builder configuration.

To get your SubscriptionID just type az login at the terminal. The response will include this value, designated simply as "id". Another value that you'll routinely need is displayed here too: the "tenantId". Once logged in you can also get a list of accounts and the respective SubscriptionID values for each of the accounts you might have, as I've done here.
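A minimal sketch of pulling just those values back out with the Azure CLI; the --query expression here is only one way to trim the output down:

[code language=bash]
# Log in, then list the subscriptions with just the fields needed here.
az login
az account list --query "[].{name:name, id:id, tenantId:tenantId}" --output table
[/code]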

cleanup.png

Here you can see I've authenticated against and been authorized in two Azure accounts. A location also needs to be determined, and that can be done with az account list-locations.

locations.png

This list will continue and continue and continue. From it, a single location will be chosen for the storage.

In a bash script, I go ahead and assign these values that I’ll need to build this particular image.

[code]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
[/code]

Now the next step is to create the Resource Group and Storage. With a few echo commands to print out what is going on to the console, I add these commands as shown.

[code]
echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage
[/code]

The final command here is another echo just to identify what is going to be built and then the Packer build command itself.

[code]
echo 'Building Apache cluster node image.'

packer build node.json
[/code]

The script is now ready to be run. I’ve placed the script inside my packer phase of my build project (see previous post for overall build project and the repository here for details). The last thing needed is a Packer template to build.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available that executes Packer for a build without needing to pass parameters every single time, and that ensures the respective storage and resource group are available for creation of the image.

3: Azure Images

The next step is getting an image or some images built that are needed for further work. For my use case I want several key images built around various servers and their respective services that I want to use to deploy via Terraform. Here’s an immediate shortlist of images we’ll create:

  1. Apache Cassandra Node – An image that is built with the latest Apache Cassandra installed and ready for deployment into a cluster. In this particular case that would be Apache Cassandra v4, albeit I’m going to go with 3.11.4 first and then work on getting v4 installed in a subsequent post. The installation instructions we’ll mostly be following can be found here.
  2. Gitlab Server – This is a product I like to use, especially for pre-rolled build services and all of those needs. It takes care of all source control, build services, and any related work that needs to happen inside the workspace itself that I'm building; i.e. it's a necessary component for an internal, corporate-style continuous build or even continuous integration setup. It just happens this is all getting set up for use internally, but via a public cloud provider on Azure! It can be done in parallel to other environments that one would prospectively control and manage autonomously of any cloud provider. The installation instructions we'll largely be following are also available via Gitlab here.
  3. DataStax Enterprise 6.7 – DataStax Enterprise is built on Apache Cassandra and extends the capabilities of that database with multi-model options for graph, analytics, search, and many other capabilities and security. For download and installation most of the instructions I’ll be using are located here.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available that executes Packer for a build without needing to pass parameters every single time, and that ensures the respective storage and resource group are available for creation of the image.
  • Now there is a list of images we need to create, which we can work from to create the images in Azure.

4: Building an Azure Image with Packer

The first image I want to create is going to be used for an Apache Cassandra 3.11.4 Cluster. First a basic image test is a good idea. For that I’ve used the example below to build a basic Ubuntu 16.04 image.

In the code below there are also two environment variables set up, which I've included in my bash profile so they're available on my machine whenever I run this Packer build or any of the Terraform builds. You can see they're set up in the variables section with a "{{env `TF_VAR_tenant_id`}}". Note that TF_VAR_tenant_id is prefaced with TF_VAR per Terraform convention, which in Terraform makes the variable just "tenant_id" when used. Also note that what might look like single quotes around TF_VAR_tenant_id are indeed backticks, not single quotes. Sometimes the blog formats those oddly so I wanted to call that out! (For an example of the environment variables, I set up all of them for the Service Principal setup below, just scroll further down.)
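For clarity, here's a sketch of what those two exports look like in a bash profile; the values are placeholders only, and the full set, including the Service Principal values, shows up further down:

[code language=bash]
# Placeholder values only - substitute your own tenant and subscription IDs.
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
[/code]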


[code language=javascript]
{
  "variables": {
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "base_ubuntu_image",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'does this even work?'"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

When Packer begins this build there will be several prompts to authorize the resources being built. Because Device Login was used earlier in the post instead of a Service Principal, this step is necessary. It looks something like this.

device-login

You’ll need to then select, copy, and paste that code into the page at https://microsoft.com/devicelogin.

logging-in-code-ddevice-login

This will happen a few times and eventually the build will complete and the image will be available. What we really want to do, however, is get a Service Principal set up and use that so the process can be entirely automated.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available that executes Packer for a build without needing to pass parameters every single time, and that ensures the respective storage and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.

5: Azure Service Principal for Automation

Ok, a Service Principal is needed now. This is a single command, but from what I can tell it has to be very specifically this command. Before running it though, know where you are going to store, or how you will pass, the client id and client secret that will be provided when the principal is created.

The command for this is az ad sp create-for-rbac -n "Packer" --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000, where all those zeros are the Subscription ID. When this executes and completes, it returns all the peripheral values that are needed for authorization via Service Principal.

the-rbac
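The command returns a small JSON document; the exact fields can vary a bit between Azure CLI versions, but it generally looks something like the sketch below, with appId mapping to the client id and password to the client secret (values here are placeholders):

[code]
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "Packer",
  "password": "00000000-0000-0000-0000-000000000000",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
[/code]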

One of the easiest ways to keep all of the bits out of your repositories is to setup environment variables. If there’s a secrets vault or something like that then it would be a good idea to use that, but for this example I’m going to setup use of environment variables in the template.

Another thing to notice, which is important when building these systems, is the "Retrying role assignment creation: 1/36" message. It points to the fact that there are 36 retries built in because of timing and other irregularities in working with cloud systems. This means that when coding against such systems we routinely have to put in timeouts, waits, and other handling to ensure we get the results we want or mark things as failed as needed.

After running that, just for clarity, here’s what my .bashrc/bash_profile file looks like with the added variables.

[code]
export TF_VAR_clientid="00000000-0000-0000-0000-000000000"
export TF_VAR_clientsecret="00000000-0000-0000-0000-000000000"
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000"
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000"
[/code]

With that set, a quick source ~/.bashrc or source ~/.bash_profile and the variables are all set for use.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available that executes Packer for a build without needing to pass parameters every single time, and that ensures the respective storage and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.

6: Apache Cassandra 3.11.4 Image

Ok, all the pieces are in place. With confirmation that the image builds, that Packer is installed correctly, and with the Azure Service Principal, Managed Resource Group, and related collateral set up, building an actual image with installation steps for Apache Cassandra 3.11.4 can begin!

First add the client_id and client_secret environment variables to the variables section of the template.

[code]
"client_id": "{{env `TF_VAR_clientid`}}",
"client_secret": "{{env `TF_VAR_clientsecret`}}",
"tenant_id": "{{env `TF_VAR_tenant_id`}}",
"subscription_id": "{{env `TF_VAR_subscription_id`}}",
[/code]

Next add those same variables to the builder for the image in the template.

[code]
"client_id": "{{user `client_id`}}",
"client_secret": "{{user `client_secret`}}",
"tenant_id": "{{user `tenant_id`}}",
"subscription_id": "{{user `subscription_id`}}",
[/code]

That whole top section of template configuration looks like this now.

[code language=javascript]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "something",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
[/code]

Now the image build can be executed, but let's streamline the process a little bit more. Since I only want one image at any particular time from this template, and I want to use the template in a way where I can create images and pass in a few more pertinent pieces of information, I'll tweak that in the Packer build script.

Below I've added the variable name for the image, dubbed Cassandra, so that I can specifically reference this image in the bash script with IMAGECASSANDRA="basecassandra". Next I added a command to delete an existing image by that name with the az image delete -g $GROUPNAME -n $IMAGECASSANDRA line of script. Finally, toward the end of the file, I've added the variable to be passed into the template with packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json. Note the odd way to concatenate imagename and the value passed in from the bash script. It isn't super clear which way to do this, but after some troubleshooting this at least works on Linux! I'm assuming it works on MacOS; if anybody else tries it and it doesn't, please let me know.

[code language=bash]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"

echo 'Deleting existing image.'

az image delete -g $GROUPNAME -n $IMAGECASSANDRA

echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage

echo 'Building Apache cluster node image.'

packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]

With that done the build can be run without needing to manually delete the image each time, since that is now part of the script. The next part to add to the template is the needed installation steps for Apache Cassandra. These steps can be found on the Apache Cassandra site here.

Under the provisioners section of the Packer template I’ve added the installation steps and removed the sudo part of the commands. Since this runs as root there’s really no need for sudo. The inline part of the provisioner when I finished looks like this.

[code]
"inline": [
  "echo 'Starting Cassandra Repo Add & Installation.'",
  "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
  "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
  "apt-get update",
  "apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
  "apt-get -y install cassandra"
],
[/code]

With that completed we now have the full workable template to build a node for use in starting an Apache Cassandra cluster, or for use as a node within one. All the key pieces are there. The finished template is below, with the build script just below that.


[code language=javascript]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'Starting Cassandra Repo Add & Installation.'",
      "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
      "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
      "apt-get update",
      "apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
      "apt-get -y install cassandra"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]


[code language=bash]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"
echo 'Deleting existing image.'
az image delete -g $GROUPNAME -n $IMAGECASSANDRA
echo 'Creating the managed resource group for images.'
az group create --name $GROUPNAME --location $LOCATION
echo 'Creating the storage account for image storage.'
az storage account create \
  --name $STORAGENAME --resource-group $GROUPNAME \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind Storage
echo 'Building Apache cluster node image.'
packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]


Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available that executes Packer for a build without needing to pass parameters every single time, and that ensures the respective storage and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.
  • Packer adds the package repository and installs Cassandra 3.11.4 on Ubuntu 18.04 LTS in Azure.

I’ll get to the next images real soon, but for now, go enjoy the weekend and the next post will be up in this series in about a week and a half!

Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI

Prerequisites before all of this.

Have a basic understanding of how to use Terraform and what it does. This is covered pretty well in the Hashicorp Docs here (single page read <5 minutes) and if you have a LinkedIn Learning account check out my Terraform course “Learning Terraform“.

Beyond that, you'll need some basic CLI/terminal knowledge, an understanding of where environment variables live (as I detail here, here, and here for some starters), and miscellaneous related knowledge. You'll also need knowledge of and user experience with Git. Most of these things I'll detail explicitly; otherwise I'll either link to or provide context for additional information throughout the article.

1: Terraform

Download

You'll need to first install Terraform and make it available for use on your machine. To do this navigate over to the Hashicorp Terraform site and to the download section. As of this time 0.12.6 is available, and for the foreseeable future this version or a newer one will be just fine.

Install

You'll need to unzip this into a directory that you've got mapped in your path for execution. In my case I've set up a directory I call "Apps" and put all of my CLI apps in that directory, then added it to my path environment variable so that terraform becomes available to me from any terminal wherever I need it. My path variable export on Linux and Mac looks like this.

[code]export PATH=$PATH:/home/adron/Apps[/code]

Now you can verify that the Terraform CLI is available by typing terraform in any terminal and you should get a read out of the available Terraform commands.

For those of you who might be trying to install this on the WSL (Windows Subsystem for Linux), on Windows itself, or some variance thereof, there are specific instructions for that too. Check out Hashicorp's installation instructions for more details on several methods and a tutorial video, plus the Microsoft Docs on installing Terraform on the WSL.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.

2: Azure CLI

For this tutorial there are several ways for Terraform to authenticate to Azure; I'll be using the Azure CLI authentication method as detailed in this tutorial from Hashicorp. There are also some important notes about the Azure CLI. The Azure CLI method, in conjunction with the AzureRM Terraform Provider, is used to build out resources using infrastructure-as-code paradigms; because of this it is important to ensure we have the right versions of everything working together.

The Caveats

For the AzureRM provider, which will be downloaded automatically when we set up the repository and initialize it with the terraform init command, we'll want to make sure we have version 1.20 or greater. Previous versions of the AzureRM Provider used a method of authorizing that reset credentials after an hour. A clear issue.

Terraform also only supports authenticating using the az CLI, and it must be available in the system path the same way terraform is available via the path. In other words, if both terraform and az can be executed from anywhere in the terminal we're all set. The older methods of using PowerShell Cmdlets or the classic azure CLI aren't supported anymore.
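A quick sanity check that both are on the path and at workable versions:

[code language=bash]
# Both commands should resolve and print version information.
terraform -version
az --version
[/code]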

Authenticating via the Azure CLI is only supported when using a “User Account” and not via Service Principal (ex: az login --service-principal). This works perfectly since these environments I’m building are specifically for my development needs. If you’d like to use this example as a more production focused example, then using something like Service Principals or another systems level verification, authentication, and authorization model should be used. For other examples check out authentication via a Service Principal and Client Secret, Service Principal and Client Certificate, or Managed Service Identity.

Installing

To get the Azure CLI installed I followed the manual installation on Debian/Ubuntu Linux process. For Windows installation check out these instructions.


[code language=bash]
# Update the latest packages and make sure certs, curl, https transport, and related packages are updated.
sudo apt-get update
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
# Download and install the Microsoft signing key.
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
  gpg --dearmor | \
  sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null
# Add the software repository of the Azure CLI.
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
  sudo tee /etc/apt/sources.list.d/azure-cli.list
# Update the repository information and install the azure-cli package.
sudo apt-get update
sudo apt-get install azure-cli
[/code]


Login & Setup

az login

It’ll bring up a browser that’ll give you a standard Microsoft auth login for your Azure Account.

login.png

When that completes successfully a response is returned in the terminal as shown.

loggedin.jpg

Pieces of this information will be needed later on so I always copy this to a text file for easy access. I usually put this file in a folder I call “DELETE THIS CUZ SECURITY” so that I remember to delete it shortly thereafter so it doesn’t fall into the hands of evil!

For all the other operating systems and places that the Azure CLI can be installed, check out the docs here.

Once logged in, a list of accounts can be retrieved too. Run az account list to get the list of accounts available. If you only have one account (re: subscription) then you'll just see exactly what was displayed when you logged in. However, if you have other subscriptions, those accounts will be shown here too.

If there is more than one subscription, one needs to be selected and set. To do that, execute the following command, passing the subscription id (the second value in that list of values above). Yes, it is kind of odd that they use account and subscription interchangeably in this situation, and the subscription id isn't exactly obvious when this data is identified as an account and not a subscription, but we'll give Microsoft a pass for now. Suffice it to say, the account id and subscription id in this data is the standalone id field in the aforementioned data.

az account set --subscription="id" where id is the subscription id, or as shown in az account list, the id field there, whatever one wants to call it.
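Put together, the list-then-set flow looks something like this; the id shown is a placeholder:

[code language=bash]
# List the available subscriptions, then set the one to use by its id.
az account list --output table
az account set --subscription="00000000-0000-0000-0000-000000000000"
[/code]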

Configuring Terraform Azure CLI Auth

To do this we will go ahead and setup the initial repository and files. What I’ve done specifically for this is to navigate to Github to the new repository path https://github.com/new. Then I selected the following options:

  1. Repository Name: terraform-todo-list
  2. Description: This is the infrastructure project that I’ll be using to “turn on” and “turn off” my development environment every day.
  3. Public Repo
  4. I checked Initialize this repository with a README and then added the .gitignore file with the Terraform template, and the Apache License 2.0.

newproject.png

This repository is now available at https://github.com/Adron/terraformer-todo-list. All of the steps and details outlined in this blog entry will be available in this repository plus any of my ongoing work on bastion servers, clusters, kubernetes, or other related items specific to my development needs.

With the repository cloned locally via git clone git@github.com:Adron/terraformer-todo-list.git there needs to be a main.tf file created. Once created I’ve added the azurerm provider block provider "azurerm" { version = "=1.27.0" } into the file. This enables Terraform to be executed from this repository directory with terraform init. Running this will pull down the azurerm provider dependency. If everything succeeds you’ll get a response from the command.

terraform-init-success

However, if it fails, it's routinely because I've ended up out of sync between the Terraform version and the provider version. As mentioned above we definitely need 1.20.0 or greater for the examples in this post. However, I'm also running Terraform at version 0.12.6, which requires at least 1.27.0 of the azurerm provider or better. If you see an error like this, it's usually informative and you'll just need to change the version number so the version of Terraform you're using will pull down the right provider version.

terraform-init-fail

Next I run terraform plan and everything should respond with a no-changes-to-infrastructure response.

terraform-plan-aok.png

At this point I want to verify authentication against my Azure account with my Terraform CLI. To do this there are two additional fields that need to be added to the provider: subscription_id and tenant_id. The configuration will look similar to this, except with the subscription id and tenant id from the az account list data that was retrieved earlier when setting up and finding the Azure account details from the Azure CLI.

terraform-main-auth
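Since that configuration is only shown as a screenshot, here's a sketch of what the provider block looks like with the two fields added; the ids are placeholders:

[code language=javascript]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"
}
[/code]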

Run terraform plan again to see the authentication results, which will look just like the terraform plan results above. With this done there's just one more thing to do so that we have a good workspace in which to work with Terraform against Azure. At this point of any project with Terraform and Azure, I always set up a Resource Group.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been set up on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.

3: Azure Resource Group

Just for clarity, a few details about the resource group. A Resource Group in Azure is a grouping of resources that should share the same lifecycle, which is exactly what I'm aiming for with all of these resources on a day-to-day basis for development. Every day I intend to start the resources in this Resource Group and then shut them all down at the end of the day.

There are other specifics about what exactly a Resource Group is, but I'll leave the documentation to elaborate further; for my mission today I just want to have a Resource Group available for further Terraform work. In Terraform, the way I go about creating a Resource Group is by adding the following to my main.tf file.


[code language=javascript]
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}
[/code]


Run terraform plan to see the changes. Then run terraform apply to make the changes, which will need a confirmation of yes.

terraform-apply-done

Once I’m done with that I go ahead and issue a terraform destroy command, giving it a yes confirmation when asked, to destroy and wrap up this work for now.

terraform-destroy-cleanup

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been set up on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.

4: Using Environment Variables

There is one more thing to do before I commit this code to the repository. I need to get the subscription id and tenant id out of the main.tf file. One wouldn't want to post their cloud access and identification information to a public repository, or ideally to any repository. The easy fix for this is to implement some interpolated variables that pull from environment variables. I can then set the environment variables via my startup script (such as .bash_profile or .bashrc, or even the IDE I'm running Terraform from, like Intellij or Webstorm for example). In that script, setting the variables would look something like this.


[code]
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_tenant_id="11111111-1111-1111-1111-111111111111"
[/code]

Note that each variable is prefixed with TF_VAR. This is the convention so that Terraform will look through and pick up all of the variables that it needs to work with. Once these variables are added to the startup script, run a source ~/.bashrc (Linux) or source ~/.bash_profile (on Mac) to set those variables. For Windows, check out this to set the environment variables. With that set there are a few more steps.

In the repository create a file named variables.tf and add the two variables variable "subscription_id" {} and variable "tenant_id" {}. Then in the main.tf file change the subscription_id and tenant_id fields to be assigned from those variables, like subscription_id = var.subscription_id and tenant_id = var.tenant_id.
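A sketch of what that looks like once both files are updated:

[code language=javascript]
# variables.tf - declarations that map to the TF_VAR_ environment variables.
variable "subscription_id" {}
variable "tenant_id" {}

# main.tf - the provider now reads from those variables instead of literal values.
provider "azurerm" {
  version = "=1.27.0"

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}
[/code]

Now run terraform plan and these results should display.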

terraform-plan-after-variables

Now terraform apply can be run to create the Resource Group, or terraform destroy to destroy it. The last step is to just commit this infrastructure code, with the sensitive values now removed from the main.tf file.

git add -A

git commit -m 'First executable resource.'

git rebase to pull in all the remote default files and such and merge those with the local additions.

git push -u origin master then to push the changes and set the master local branch to track with the remote branch master.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been set up on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.
  • Private sensitive data has been moved from the main.tf file into environment variables so that it isn’t copied to the repository.
  • A variables.tf file has been added for the aforementioned variables that map to environment variables.
  • The code base has been committed to Github at https://github.com/Adron/terraformer-todo-list.
  • Both terraform plan and terraform apply deploy as expected and terraform destroy removes infrastructure cleanly as expected.

Next steps coming soon!

Coding, WTF Twitter, Twitch FTW, Getting Shit Done, Twitch Hacks, Tips, Tricks, and One Excellent Jazz Influenced Tune

WTF Twitter

I've been doing a lot more coding, thanks largely to the discipline that Twitch has brought to my day. It seems almost surprising to me at this point, because Twitch started similarly to the way Twitter did for me. You see, I thought at first Twitter was the dumbest thing that had happened in ages. Arguably, it's come full circle and I kind of feel the same way about Twitter now, but during the decade in between (yes, Twitter is over 10 years old!) Twitter has brought me connection, opportunities, and so much more. I couldn't have imagined a lot of what I've been able to pull together because of Twitter. It's still useful in many ways for this, albeit I, like all of us, am at risk of suffering the idiocy of today's politics and political cronies, and the dog-piling trash pile that follows them onto Twitter.

I'm not leaving Twitter any time soon but I've definitely put it on a very short leash, and limited what impact it does or doesn't have in my day-to-day flow.

Twitch FTW

Amazingly, however, a new social and productive tool, not that it intended to be both, has come into being: coding on Twitch. Don't get me wrong, I game, I just don't game socially or on Twitch; what I do is code on Twitch, with a fair dose of hacking, breaking things, and then figuring out how to make them work. All the while, I along with others have created a pretty excellent developers community there on Twitch, and it seems to be growing all the time. Twitch, at this point, has become a focal point that has the benefits without all the annoying garbage that Twitter has these days, while adding the vast and hugely important fact that I can do things, be productive, chit chat, and generally get shit done all while I'm Twitch streaming.

VidStreamHacking

@ https://github.com/Adron/VidStreamHacking

With that, let’s talk about some of the recent notes and information I’ve been working on putting together to make Twitch even more useful. My first motive with this was to keep track of all the things I was doing, hardware I was putting together, and related things, but then another purpose grew out of all this note taking. It became obvious that this repository of information could be useful for other people. Here’s a survey of the things that I’ve added so far, hope they’re helpful to those of you digging into streaming out there!

I added some badges to identify various elements of information about the repo in the README.md.

badges

Is it maintained, yup, contributors, so far just me, zero issues filed but please feel free to add an issue or two, markdown yup, and there is indeed a Trello Board! The Trello Board is a key to insight, inspection, and what I’ve got going on in a number of my repositories. It’s where I’m keeping track of all the projects, what’s next, and what’s up in queue for the blog (this one right here). At least, in the context of the big code heavy or video reviews of sessions with code, extra commentary, and related content. If you want to get involved in any of the repos just let me know and I’m happy to walk through whatever and even get you added to the Trello board so we can work together on code.

Streaming Gear

https://github.com/Adron/VidStreamHacking/blob/master/hardware.md

My main machine is now a Dell XPS 15, which I fought through getting Linux running on, and now that I have, it's been an absolutely stellar machine. I've also added additional monitor & port replicator/docking station gear to get it even more usable. The actual page I've got the details listed on is in the repo, on the Dell XPS 15 item on the hardware page.

Along with the XPS 15 I wrote up coverage of the unboxing via video and blog entry. After a few weeks I also wrote up the conflict I had getting Linux running and removing Windows 10. In addition to the XPS 15 though I do use a MacBook from 2015 as my primary Mac machine, with an iMac from 2013 available as backup. Both machines are still resoundingly solid and performant enough to get the job done. Rounding out my fleet of machines is a Dell XPS 13 (covered here and here with the re-review).

For screens I have one at my office and one at home. They’re almost the same thing, ultra-widescreen monitors, curved displays, running 3880-1440 resolution from LG. These make keeping an eye on chat, OBS, and all sorts of other monitoring while coding, gaming, or whatever a breeze!

shotone.png
Ex 1: Just viewing a giant OBS view to get everything sorted out before starting a stream.
shottwo
Ex 2: OBS w/ VM running w/ Twitch chat, dashboard etc to the right. This way I can work, see the stream, and see chat and such all at the same time.

The docking stations and/or port replicators, or whatever one calls these things these days, also bring all of this tech together for me. There are a couple I have tried and retired already (unfortunately, cuz dammit that cost some money!) and others that I use in some scenarios but not in others.

My main docking station contraption, with a shout out to James & others for suggesting the CalDigit TS3. I got to this docking station through the Dell TB16, which for Linux, and kind of for Windows, is an unstable mess. Awesome potential if it worked, but it doesn't, so I tried out this USB-C pluggable option (in the tweet) which had HDMI that was unfortunately limited in resolution. Having a wide screen made this unusable too, albeit it being super compatible with Linux. So I finally upgraded to the CalDigit TS3 and WOW, the CalDigit is super seriously wickedly bad ass. Extra USB-C ports, USB 2/3 ports, power, and more all rolled into one. It even supplies some power to the laptop, however I keep it plugged in since it's kind of a power hog when the processors start chomping!

After trying out this USB-C pluggable (the tweet) I got the CalDigit into play. It’s really really good, here’s a shot of that from various angles with the extensive cables that I don’t have to plug into my laptop anymore. Out of this also runs a 28 port USB powered hub too, no picture, but just know I’ve got a crazy number of devices I routinely like to use!

That’s my main configuration when using the ultra widescreens and all. Good setup there, very usable, and the 32GB of memory in the laptop really get put to use in this regard. As for storage, that’s another thing. I’ve got 1 TB in my laptop but another 1 TB in a USB-C Thunderbolt Samsung Drive which is practically as fast for most things. So much so I attach it via the TS3 via USB-C and it’s screaming fast and adds that extra storage. So far, primarily I’ve been using it to store all of my virtual machines or use it as video storage while I do edits.

There’s other gear too, check out the list, like the Rode Podcoster and other things. But that gear I’ll elaborate on some other time.

Meetup Streaming Gear

https://github.com/Adron/VidStreamHacking/blob/master/meetup-streaming-kit-gear.md

Another effort I've undertaken is recording meetups. To do this one needs to be able to stream things with several screens combined, i.e. picture in picture and all. One needs a camera that can focus on the speaker, ideally at least 1080p, with at least some ability to work in less than ideal light. Then, next to that, a splitter and capture card to get the slides! Once all those pieces come together, with a little OBS finesse, one can get a pretty solid single-pass recording of a meetup. An example of one of my better attempts was the last meetup "Does the Cloud Kill Open Source" with Richard Seroter. If you take a look at past talks in the Meetups Playlist you can see my iterative progress from one meetup to another!

Here’s the specific gear I’m using to get this done. At least, so far, and if and when it becomes financially reasonable I might upgrade some of the gear. It largely depends on what I can get more use out of beyond just streaming meetups.

Cords and Splitter – I picked up a selection of lengths and types so that I'd have wiring options for the particular environments the meetups would be located in. Generally speaking 25ft seems to be a safe maximum for HDMI. I've been meaning to check out the actual specifications on it but for now it's more than enough regardless.

The splitter wasn’t expensive at all ($16.99), and kind of surprised me considering the costs of the cables. Picture to the right, or above, or somewhere depending on mobile layout.

I needed capture cards for this, one for the line out of the splitter that would capture the slides. The first I had picked up based on suggestions focusing on quality, and that was the Avermedia Extreme Cap HDMI to USB 3 Capture Card. It's really solid for higher resolution and related capabilities. The USB 3.0 HDMI HD Game Video Capture Card I picked up based on price (it's almost a 1/3rd of the price), not particularly focused on quality. However, now that I've used both they are capable and seem fine, so I might have been able to just buy two of the cheaper option.

The camera: ideally I'd have a much higher quality one, but the Canon VIXIA HF R800 Camcorder has actually worked excellently. A little less feature rich for audio out and related things, but it zooms in well and can record at the same time I'm getting the cam feed into the stream. So it's always a nice way to have a backup of the talk.

The last, and one of the most important, aspects is getting good audio.

Streaming Meetups

https://github.com/Adron/VidStreamHacking/blob/master/meetup.md

At first thought, I made the mistake of assuming that just the gear would be enough, but holy smokes there were about a million other things I needed to write down. I created meetup.md to get the list going.

Jazz Influence Amidst the Heaviness!

As promised. Some music, not actually jazz, but heavily influenced by some jazz, progressive instrumentation, and esoteric, expansive, exquisite playing skills by the band. As always, be prepared. My music referrals aren’t always gentle! Happy code streaming!

Bunches of Databases in Bunches of Weeks – PostgreSQL Day 1

May the database deluge begin; it's time for "Bunches of Databases in Bunches of Weeks". We'll get into looking at databases similar to how they're approached in "7 Databases in 7 Weeks". In this session I took a hard look at PostgreSQL, or as some refer to it, just Postgres. This is the first of a few sessions on PostgreSQL, in which I get the database installed locally on Ubuntu, which is transferable to any other operating system really; PostgreSQL is awesome like that. Then, after installing and getting pgAdmin 4, the user interface for PostgreSQL, working against that, I go the Docker route, again pointing pgAdmin 4 at it and creating a database and an initial table.

Below the video here I’ve added the timeline and other details, links, and other pertinent information about this series.

0:00 – The intro image splice and metal intro with tunes..
3:34 – Start of the video database content.
4:34 – Beginning the local installation of Postgres/PostgreSQL on the local machine.
20:30 – Getting pgAdmin 4 installed on local machine.
24:20 – Taking a look at pgAdmin 4, a stroll through setting up a table, getting some basic SQL from and executing with pgAdmin 4.
1:00:05 – Installing Docker and getting PostgreSQL setup as a container!
1:00:36 – Added the link to the stellar post at Digital Ocean’s Blog.
1:00:55 – My declaration that if Digital Ocean just provided documentation I’d happily pay for it, their blog entries, tutorials, and docs are hands down some of the best on the web!
1:01:10 – Installing PostgreSQL on Ubuntu 18.04.
1:06:44 – Signing in to Docker hub and finding the official Postgresql Docker Image.
1:09:28 – Starting the container with Docker.
1:10:24 – Connecting to the Docker Postgresql Container with pgadmin4.
1:13:00 – Creating a database and working with SQL, tables, and other resources with pgAdmin4 against the Docker container.
1:16:03 – The hacker escape outtro. Happy thrashing code!

For each of these sessions in the "Bunches of Databases in Bunches of Weeks" series I'll follow the sequence below. I'll go through each database in this list of my top 7 databases for day 1 (see below), then go through each database and work through day 2, and so on, accumulating additional days similarly to "7 Databases in 7 Weeks".

“Day 1” of the respective database: I'll work toward building a development installation of the particular database. For example, in this session I set up PostgreSQL by installing it to the local machine and also pulled a Docker image to run PostgreSQL.

“Day 2” of the respective database: I'll get into working against the database with CQL, SQL, or whatever one would use to work specifically with the database directly. At this point I'll also get more deeply into the types, inserting, and storing data in the respective database.

“Day 3” of the respective database: I'll get into connecting an application with C#, Node.js, and Go, implementing a simple connection, prospectively a test of the connection, and doing a simple insert, update, and delete of some sort against the respective database built on the previous day 2 of the same database.

“Day 4” and onward: I'll determine the path and layout of the topic later, so subscribe on YouTube and Twitch, and tune in. The events are scheduled, with the option to be notified when a particular episode is coming on that you'd like to watch here on Twitch.

Next Events for “Bunches of Databases in Bunches of Days”