Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

On the 9th of September I posted “Setup Postgres, and GraphQL API with Hasura on Azure”. In that post I noted a few refactorings I wanted to make. The following are the top 3 refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment. That left production and dev running on different ports. It’s much easier, and more logical, for the port to be the same in both dev and production, at least while the console is available on the production server (i.e. it should be disabled, but more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under graphql-engine, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick with port 8080, so I left the mapping as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. I set this variable to be passed in via the script files I use to execute the Terraform.
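The declaration for that variable isn’t shown in the post itself; a minimal sketch of what it could look like in the Terraform files (the actual declaration in the repo may differ) is simply:

variable "apiport" {
  type = number
}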

The script file change looks like this now for launching the environment.

cd terraform
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
-var 'server="logisticscoresystemsdb"' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database="logistics"' \
-var 'apiport=8080'

There are then three additional sections in the Terraform file; the first is here, and the next I’ll talk about in refactor 2 below. The changes in the resource, shown below in the container ports section and the environment_variables section, simply reference var.apiport.

resource "azurerm_container_group" "adronshasure" {
name = "adrons-hasura-logistics-data-layer"
location = azurerm_resource_group.adronsrg.location
resource_group_name = azurerm_resource_group.adronsrg.name
ip_address_type = "public"
dns_name_label = "logisticsdatalayer"
os_type = "Linux"
  container {
name = "hasura-data-layer"
image = "hasura/graphql-engine:v1.3.2"
cpu = "0.5"
memory = "1.5"
    ports {
port = var.apiport
protocol = "TCP"
}
    environment_variables = {
HASURA_GRAPHQL_SERVER_PORT = var.apiport
HASURA_GRAPHQL_ENABLE_CONSOLE = true
}
secure_environment_variables = {
HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}
}
  tags = {
environment = "datalayer"
}
}

With that I now have the port standardized across dev and prod as 8080. Of course, it could be another port; that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do every time Terraform got production up and going was navigate over to Azure and find the FQDN to open the console (or make API calls, etc.). To make this easier, since I’m obviously running the script anyway, I added an output variable that concatenates the interpolated FQDN from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have var.apiport concatenated there at the end of the value. With that, execution ends by returning the exact FQDN that I need to navigate to for the Hasura Console!
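As a quick usage note: if you need that value again after the apply has finished, the terraform output command will print it without re-running anything.

terraform output hasura_uri_path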

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post, “Setup Postgres, and GraphQL API with Hasura on Azure”, and realized I had made a mistake: dev and prod weren’t actually using databases with the same name on their respective servers. Dev was using the default database and prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything basically stays the same; a new part was simply added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now the script as I set it up to call it.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics'
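
To double check that the database actually landed on the local Postgres container, a quick psql query works. The container name below is hypothetical; docker ps will show the actual name docker-compose generated.

# List databases on the local Postgres container; "logistics" should appear.
docker exec -it dev_postgres_1 psql -U postgres -c '\l'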

There are more refactorings that I made, but these were the top 3 I did right away! Now my infrastructure as code is easier to use, the scripts are a little more seamless, and everything wraps into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/ThrashingCode.

Name Casing Conventions, The Quick Comparison

There are a number of name casing practices for variables and other objects when programming, which I’ll cover in quick fashion here.

The Quickie Synopsis

  • camelCaseLikeThis
  • PascalCaseLikeThis
  • snake_case_like_this
  • kebab-case-like-this
  • MixAnd-match_like-this

Camel Case

With Camel Case the first letter of the first word is lower cased, then the first letter of every subsequent word is upper cased. For example, thisIsCamelCased, or for a single word, something, or somethingElse.

My 2 ¢ – I’m a big fan of this case when I’m naming private variables or related objects. I’m also a fan of using Camel Casing for certain types of functions, especially in JavaScript, such as this.doesStuff() or somethingAnother.doesOtherStuff().

Pascal Case

In usage of Pascal Case, the first letter of each word within the name is upper cased. Examples include ThisObjectThatThing, or WhateverElse().

My 2 ¢ – This is something I like to use for class declarations (i.e. public class ThisClassThing) and then for functions or methods on that class, or similarly for functions on functions in JavaScript, or the like. Basically, anything that will be used similar to MyThing.DoesThisThing().

Snake Case

With Snake Case, the empty spaces are replaced with underscores. So a file name like “this is my filename.md” would become this_is_my_filename.md.

My 2 ¢ – Snake case is great when I need variables that would otherwise have spaces, but also won’t regularly be displayed by anything that might obfuscate the underscore, such as underlined HTML links.

Kebab Case

Kebabs are tasty; the case, however, is similar to Snake Case, except dashes are used instead of underscores. Each space in the name is replaced with a - character, such as my-table-name-is-this or this-is-a-variable.

My 2 ¢ – Kebab case is excellent for filenames, URIs, or anything of that sort where spacing between the words of a file or path is important to read, but an underscore would be obfuscated by the font rendering, such as with a URI.
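
To make the distinctions concrete, here’s a small sketch in Go (the language is my choice for the example; the casings themselves are the ones listed above) showing where each convention typically lands:

// naming.go – illustrates the casing conventions above in Go.
package main

import "fmt"

// ShippingCrate is Pascal cased, which Go requires for exported types.
type ShippingCrate struct {
    itemCount int // camel cased, the Go norm for unexported fields.
}

// ReportContents is Pascal cased, matching the MyThing.DoesThisThing() style.
func (s ShippingCrate) ReportContents() string {
    return fmt.Sprintf("crate holds %d items", s.itemCount)
}

func main() {
    // Snake case compiles in Go but is unidiomatic; it's shown here only to
    // illustrate the convention. Kebab case (my-file-name.md) can't appear
    // in identifiers at all, so it stays in filenames and URIs.
    crate_for_demo := ShippingCrate{itemCount: 3}
    fmt.Println(crate_for_demo.ReportContents())
}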

Language Stack Installation for Python & Go

Previously I’ve gone through the steps I take to get a solid development machine set up, from the base operating system load to the browser and basic IDEs I install. Now I’ve got more videos and the respective notes and details about the two language stacks I set up next: Python and Go.

Python

In regards to the Python stack, this one can often be somewhat confusing. Depending on the operating system, I set up the stack a little differently.

MacOS

For MacOS I’ve written two posts about this previously, one titled “Getting Started with Python Right!” and one “Unbreaking Python Through Virtual Environments”. Those two posts cover most of the nuance of getting a base Python stack installed on MacOS and then using virtual environments to manage project-specific versions per repository.

Linux

For the Linux OS, usually a Debian variant, the systems tend to have Python 3 installed by default. I then take the next step of installing pip3 and work from there. The IDE I use, PyCharm from JetBrains, uses virtualenv to set up virtual environments per repository from that point forward.
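
On a Debian variant, getting pip3 is usually a single package away; something like this works on Ubuntu 20.04 (package names can vary by distro):

# Install pip for Python 3 and confirm both are on the path.
sudo apt update
sudo apt install -y python3-pip
python3 --version
pip3 --version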

Python Setup && Reasons

For more details on the specific walk-through, I’ve created this video covering setting up Python 3 on Ubuntu and verifying it; by using PyCharm to set up a small verification app, it also shows how virtualenv creates a specific environment for the new verification project.

The reasons for installing Python first are numerous. One of the first reasons is that Python is required for installing and using numerous CLIs, such as AWS’s CLI, among many others. It pays off to just have a good install at the system level (i.e. not just in a particular virtual environment, but executable at the terminal on the system) to ensure it is available for any and all CLIs that would need it. If you’re into data science work, that’s a huge second reason, because Python is used in about every aspect of data science, machine learning, and related endeavors.
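
As one concrete example of a Python-dependent CLI, the AWS CLI (v1) installs straight from pip; a minimal sketch:

# Install the AWS CLI into the user's local site-packages via pip.
pip3 install --user awscli
# If the aws command isn't found, ensure ~/.local/bin is on the PATH.
aws --version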

Go Setup && Reasons

The reason I go for Go as my second language stack install is driven by two primary reasons:

  1. I like writing Go and use it myself for a number of reasons, such as how ridiculously quick and minimal the work is to build a CLI that ships as a single binary executable.
  2. I use Go for a lot of other work-related efforts, around Kubernetes, Docker, Terraform, and others.

With that, here’s the quick install and initial project for verification setup.
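
In rough terms the install follows the standard tarball steps from https://golang.org/dl; here’s a sketch, with the version number only as an example (grab whatever is current):

# Download and unpack Go into /usr/local.
wget https://golang.org/dl/go1.15.6.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.15.6.linux-amd64.tar.gz

# Put the Go toolchain on the PATH and verify.
export PATH=$PATH:/usr/local/go/bin
go version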

If you’d like to take a few other quick tours of Go, here are some posts, with videos: putting together a Go module project and writing an initial test in under 3 minutes, and setting up an HTTP server in about 15 minutes.

That’s it for now. However, if you’re interested in joining me for next steps, language stack setups, and more, in addition to writing some JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and all sorts of coding, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

For more on my open source efforts and related projects, sign up for the Thrashing Code Newsletter!

My Top 2 Reading List for Go & Data

Right now I’ve got a couple of books in the queue.

“Black Hat Go: Go Programming for Hackers and Pentesters” by Tom Steele, Chris Patten, and Dan Kottmann.


I picked this book up after a good look at the index. There is a basic intro to Go at the beginning of the book to cover language fundamentals, but immediately after that it dives into the meat of the topic: understanding the TCP handshake, TCP itself, writing a scanner, and a proxy. There are then some basics about HTTP servers, routers, and middleware in chapters 3 and 4, before it returns to topics of interest in chapter 5 around exploiting DNS and writing DNS clients. I perused the index a bit further and noted it covers SMB and NTLM, a chapter on “Abusing Databases and Filesystems”, then raw packet processing, writing and porting exploit code, and a host of other topics. This is going to be an interesting book to dig into and write about in the coming weeks. I’m pretty excited about it and hope to write a thorough review upon completion.

“Designing Data-Intensive Applications” by Martin Kleppmann


This book is familiar territory, as I’ve spent much of my career working on the kinds of topics this book covers. What I suspect is that I’ll enjoy reading the material presented in an orderly and concise way versus the chaotic and disruptive way I acquired my knowledge of these topics.

From the index, the book starts off with the foundations of data systems and the ideas around building reliable, scalable, and maintainable applications. This provides a good basis from which to dive into the other topics. From there it looks like we’ll get a run through the birth of NoSQL, object-relational mismatches, and the related insanity that this has bred in the industry! Then a solid dive into graph-like, traditional, and multi-model data modeling. The first quarter of the book then covers everything from hash indexes, SSTables (another familiar topic), LSM-trees, B-trees, and related indexing structures, before wrapping up with star and snowflake schema topics for analytics and column-oriented storage, compression, sort orders in column storage, and related material on aggregation in data cubes and materialized views.

That’s just the first 25%! From there Martin covers a wide range of topics that, if you’re in the industry and plan to deal with large-scale data-intensive applications, you need to be intimately familiar with!

Reading

Over the next few months, while I read through these books, I hope to provide summaries and related notes on the material. Who knows, maybe you’ll want to dive into the material yourself! Until then, happy thrashing code, and may you have high retention and comprehension in your reading!

Development Machine Environment Build and Base OS Load

For the next post in this series on building a development machine environment, check out the browser & IDEs post.

Development Machine Environment Build & Language Stack Installations – Getting a Base OS Load

Operating System Notes

When choosing an operating system to work with there are a number of factors to take into account. Some of these factors include:

  1. What language stack and tooling will you need to use?
  2. What focus beyond software development will the machine have?
  3. What apps and tooling do you already have?

There are many other questions, around costs and licensing and other errata, that you’ll need to take into account. One thing, however, regardless of what you choose: the bulk of software development with most languages and their respective tooling stacks works on almost any operating system you’d choose.

In this particular example and the following material I’ve chosen Ubuntu 20.04 LTS (Long Term Support) as my operating system. I’ll build a virtualization image of the operating system so that I can reuse it time and time again. The virtualization software I’m using is VMware Fusion, but just like the operating system, you can use most virtualization software and it will work just fine.

Getting a Base OS Loaded

It is often important to load a new operating system installation onto a machine for use in application (re: web/software/IoT/mobile/etc.) development. This can help us better understand what is installed and what effects it has on the machine itself, it can ensure that we have known assets that aren’t influencing server or other manipulations, and there is a long list of other positives over taking whatever one is given from the various computer system manufacturers.

This post is the first in a series (here on Dev.to and on my core blog https://compositecode.blog). I’m going through precise steps, but I’m keeping it general enough with pointers and related tips for MacOS and Windows that the series will be useful for anyone building their first or tenth or twentieth development machine regardless of operating system or hardware.

My Installation Specifics

I’m using a MacBook Pro 16″ with enough RAM and compute cores that I split off a solid 4-8GB of RAM dedicated to the VM that runs the guest OS. In addition, I set the compute cores dedicated to the VM.

To do this, in VMware Fusion pull up the settings for the virtual machine that was created. To note, when the virtual machine is shut down you can change these settings to add or decrease memory, compute cores, etc. But do note, sometimes this will make an operating system unstable. The rule to follow is: set the resources at time of creation and leave them set.

Once the settings are open, click on the processors and memory option.

Settings Screen for the Virtual Machine

That will display this screen where the values can be set. I also show stepping through this process in the video.


That’s it for now. However, if you’re interested in joining me for next steps, language stack setups, and more, in addition to writing some JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and all sorts of coding, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

For more details about the open source projects and related efforts I work on, sign up for the Thrashing Code Newsletter here!