Setting Up Postgres and a GraphQL API with Hasura on Azure

Key Technologies: Hasura, Postgres, Terraform, Docker, and Azure.

I created a data model to store railroad systems, services, schedules, time points, and related information, detailing the schema in “Beyond CRUD n’ Cruft Data-Modeling” with a few tweaks. I originally created it for Apache Cassandra, and have since switched to Postgres, which gives the option of primary and foreign keys, relations, and the related connections for the model.

In this post I’ll use that schema to build out an infrastructure as code solution with Terraform, utilizing Postgres and Hasura (OSS).

Docker Compose Development Environment

For the Docker Compose file I just placed it in the root of the repository, adding a docker-compose.yaml file and then the services. The first service I set up was the Postgres/PostgreSQL database. This uses the standard Postgres image on Docker Hub. I opted for version 12; I do want it to always restart if it gets shut down or crashes; and the last of the obvious settings is the port, which maps 5432 to 5432.

For the volume, since I might want to back up or tinker with the data, I mount the db_data volume at the Postgres data directory; a bind mount to my own Codez directory also works if I want the files directly on disk. I tend to set up all my databases like this in case I need to debug things locally.

The POSTGRES_PASSWORD is pulled from an environment variable, thus the syntax ${PPASSWORD}. This way no passwords go into the repo. Then I can load the environment variable via a standard export PPASSWORD="theSecretPasswordHere!" line in my system startup script or via other means.
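As an aside, docker-compose also reads variables automatically from a .env file placed next to the docker-compose.yaml, so a minimal sketch of an alternative, assuming you keep that file out of version control, looks like this:

# .env - read automatically by docker-compose; never commit this file.
PPASSWORD=theSecretPasswordHere!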

services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      # Named volume mounted at the Postgres data directory so data persists.
      # A host bind mount (e.g. /Users/adron/Codez/databases:/var/lib/postgresql/data)
      # also works if you want the files directly on disk.
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432

For the db_data volume, toward the bottom of the file I add the named volume declaration to reference it.

volumes:
  db_data:

Next I added the GraphQL solution with Hasura. The image for v1.1.0 probably needs to be updated (I believe we’re on version 1.3.x now) so I’ll do that soon, but the example works with v1.1.0. The ports map 8080 to 8080, and this service depends on the postgres service already detailed. Restart is also set to always, just as with the postgres service. Finally there are two environment variables for the container:

  • HASURA_GRAPHQL_DATABASE_URL – this variable holds the Postgres connection string Hasura uses to reach the database.
  • HASURA_GRAPHQL_ENABLE_CONSOLE – this variable enables the console user interface. We’ll definitely want this for the development environment; in production, however, I’d likely want it turned off.
  graphql-engine:
    image: hasura/graphql-engine:v1.1.0
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"

At this point the commands to start this are relatively minimal, but even so I like to create start and stop shell scripts. They simply look like this:

Starting the services.

docker-compose up -d

For the first execution of the services you may want to skip the -d and instead watch the startup just to become familiar with the events and connections as they start.
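Either way, once the services are up, a quick sanity check confirms Postgres is listening. This assumes the psql client is installed locally; the password prompt takes the value exported as PPASSWORD.

# List the running services and their mapped ports.
docker-compose ps

# Connect to the database and ask for its version.
psql -h localhost -p 5432 -U postgres -c "SELECT version();"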

Stopping the services.

docker-compose down

🚀 That’s it for the basic development environment; we’re launched and ready for development. With the services started, navigate to http://localhost:8080/console to start working with the user interface. I’ll have more details on swapping the “Beyond CRUD n’ Cruft Data-Modeling” schema over to Hasura and Postgres in an upcoming blog post.

For full syntax of the docker-compose.yaml check out this gist: https://gist.github.com/Adron/0b2ea637b5e00681f4d62404805c3a00

Terraform Production Environment

For the production deployment of this stack I want to deploy to Azure, use Terraform for infrastructure as code, and use the Azure database service for Postgres while running Hasura in a container for my GraphQL API tier.

For the Terraform files I created a folder and added a main.tf file. I generally create a folder to work in to keep the state files and initial prototyping of the infrastructure in a singular place. Eventually I’ll set up a location to store the state and fully automate the process through a continuous integration (CI) and continuous delivery (CD) process. For now though, just a singular folder to keep it all in.

For this I know I’ll need a few variables and add those to the file. These are variables that I’ll use to provide values to multiple resources in the Terraform templating.

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

I’ll also want an output so it’s a little easier to verify the Hasura connection information. It looks like this.

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

Let’s take this one apart a bit. There are a lot of concatenated and interpolated variables being wedged together here. This is basically the Postgres connection string that Hasura will need to make a connection. It includes the username and password, and all of the pertinent parsed and string-escaped values. Note specifically the %40 between ${var.username} and ${azurerm_postgresql_server.logisticsserver.name}: Azure Postgres usernames take the form user@servername, and that @ has to be URL-encoded so it isn’t confused with the unescaped @ that separates the credentials from the host. When constructing this connection string, it is very important to be mindful of all these specific values being connected together. But, I did the work for you so it’s a pretty easy copy and paste now!
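For example, with hypothetical values of adminuser for the username, logisticsserver for the server, pass123 for the password, and logistics for the database, the output resolves to a string shaped like this (a password containing special characters such as @ would need the same URL-encoding treatment):

postgres://adminuser%40logisticsserver:pass123@logisticsserver.postgres.database.azure.com:5432/logistics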

Next I’ll need the Azure provider information.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

Note that there is an empty features block; the provider now requires this block to be declared even if it’s empty.

Next up is the resource group that everything will be deployed to.

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

Now the Postgres server itself. Note the location and resource_group_name simply map back to the resource group. Another thing I found a little confusing, as I wasn’t sure if it was a Terraform name, a resource name tag, or the server name itself, is the “name” key value pair in this resource. It is in fact the server name, which I’ve assigned var.server. The next value assigned, “B_Gen5_2”, is the Azure SKU designator, which is a bit cryptic; it breaks down as pricing tier (B for Basic), compute generation (Gen5), and vCore count (2). More on that in a future post.

After that, the storage is set to 5120 MB, which works out to 5 gigs of storage. For what I’m doing this will be fine. The backup is set up for 7 days of retention. This means I’ll be able to fall back to a backup from any of the last seven days, but after 7 days the backups are rolled and the oldest day is deleted to make space for the newest backup. The geo_redundant_backup_enabled setting is set to false, because with Postgres’ excellent reliability and my desire not to pay for that extra reliability insurance, I don’t need geographic redundancy. Last I set auto_grow_enabled to true, albeit I do need to determine the exact flow of logic this takes for this particular implementation and deployment of Postgres.

The last chunk of details for this resource is simply the username and password, which are derived from variables, which are in turn derived from environment variables to keep the actual username and password out of the repository. The last two bits set the Postgres version to 9.5 and turn SSL enforcement on.

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

Since the database server is all set up, now I can confidently add an actual database to that server. Here the resource_group_name pulls from the resource group resource and the server_name pulls from the server resource. The name, being the database name itself, I derive from a variable too. Then the character set is UTF8 and the collation is set to US English, which are generally standard settings for Postgres installs used within the US.

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

The next thing I discovered, after some trial and error and a good bit of searching, is the Postgres-specific firewall rule. This appears to be particular to the Postgres service in Azure; through a number of trials and many errors I had attempted to use the standard firewalls and firewall rules available in virtual networks. My understanding now is that the Postgres servers exist outside of that paradigm and consequently have their own firewall rules.

This firewall rule attaches to the resource group and then the server itself, and allows internal access between the Postgres server and the Hasura instance. Setting both start_ip_address and end_ip_address to 0.0.0.0 is Azure’s convention for the special rule that permits access from Azure services, which is what lets the Hasura container reach the database server.

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

The final step is setting up the Hasura instance to work with the Postgres server and the designated database now available.

To set up the Hasura instance I decided to go with the container service that Azure has. It provides a relatively inexpensive, easier, and more concise way to run the server than standing up an entire VM or a full Kubernetes environment just to run a singular instance.

The first section sets up a public IP address, which of course I’ll need to change as the application is developed, since I’ll need to provide an actual secured front end. But for now, to prove out the deployment, I’ve left it public, set up the DNS label, and set the OS type.

In the next section of this resource I outline the container details. The name of the container can be pretty much whatever you want; it’s your designator. The image however is specifically hasura/graphql-engine. I’ve set the CPU and memory pretty low, at 0.5 cores and 1.5 GB respectively, as I don’t suspect I’ll need a ton of horsepower just to test things out.

Next I make port 80 available, and set the environment variable HASURA_GRAPHQL_SERVER_PORT to that port and HASURA_GRAPHQL_ENABLE_CONSOLE to true so the console is served there. Then finally that wild concatenated, interpolated connection string that I also have set up as an output variable (again, specifically for testing) goes into HASURA_GRAPHQL_DATABASE_URL.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With all that setup it’s time to test. But first, just for clarity here’s the entire Terraform file contents.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

To run this, similarly to how I set up the dev environment, I’ve set up startup and shutdown scripts. The startup script, named prod-start.sh, has the following commands (it assumes terraform init has already been run in the folder). Note the $PUSERNAME and $PPASSWORD are derived from environment variables, whereas the other two values are just inline.

cd terraform

terraform apply -auto-approve \
    -var "server=logisticscoresystemsdb" \
    -var "username=${PUSERNAME}" \
    -var "password=${PPASSWORD}" \
    -var "database=logistics"
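The shutdown script is just the inverse. A sketch of a hypothetical prod-stop.sh, passing the same variables to terraform destroy so the plan can resolve, looks like this:

cd terraform

# Tear down everything the apply created; the same -var values are required.
terraform destroy -auto-approve \
    -var "server=logisticscoresystemsdb" \
    -var "username=${PUSERNAME}" \
    -var "password=${PPASSWORD}" \
    -var "database=logistics"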

For the full Terraform file check out this gist: https://gist.github.com/Adron/6d7cb4be3a22429d0ff8c8bd360f3ce2

Executing that script gives me results that, if everything goes right, look similar to this.

./prod-start.sh 
azurerm_resource_group.adronsrg: Creating...
azurerm_resource_group.adronsrg: Creation complete after 1s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg]
azurerm_postgresql_server.logisticsserver: Creating...
azurerm_postgresql_server.logisticsserver: Still creating... [10s elapsed]
azurerm_postgresql_server.logisticsserver: Still creating... [20s elapsed]


...and it continues.

Do note that the timing varies; it’s completely normal for this process to take three or more minutes. Once the server finishes its build process a lot of the other activities start to happen very quickly. Once it’s all done, toward the end of the output I get my hasura_url output variable so that I can confirm that it is indeed put together correctly! Now that this is done I can take next steps: remove that output variable, start to tighten security, and more, which I’ll detail in a future blog post once more of the application is built.

... other output here ...


azurerm_container_group.adronshasure: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [50s elapsed]
azurerm_container_group.adronshasure: Still creating... [50s elapsed]
azurerm_postgresql_database.logisticsdb: Creation complete after 51s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.DBforPostgreSQL/servers/logisticscoresystemsdb/databases/logistics]
azurerm_container_group.adronshasure: Still creating... [1m0s elapsed]
azurerm_container_group.adronshasure: Creation complete after 1m4s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.ContainerInstance/containerGroups/adrons-hasura-logistics-data-layer]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

hasura_url = postgres://postgres%40logisticscoresystemsdb:theSecretPassword!@logisticscoresystemsdb.postgres.database.azure.com:5432/logistics

Now if I navigate over to logisticsdatalayer.westus2.azurecontainer.io I can view the Hasura console! But where did this fully qualified domain name (FQDN) come from? Well, the quickest way to find it is to navigate to the Azure portal and take a look at the details page of the container itself. In the upper right the FQDN is available, as well as the IP that has been assigned to the container!
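If digging through the portal isn’t appealing, the Azure CLI can pull the FQDN directly, assuming the az CLI is installed and logged in, and the same value is exposed in Terraform as the container group’s fqdn attribute. A sketch of both options:

# Query the container group for its fully qualified domain name.
az container show \
    --resource-group adrons-rg \
    --name adrons-hasura-logistics-data-layer \
    --query ipAddress.fqdn --output tsv

# Or expose it from Terraform as an additional output.
output "hasura_fqdn" {
  value = azurerm_container_group.adronshasure.fqdn
}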

Navigating to that FQDN URI will bring up the Hasura console!

Next Steps

From here I’ll take up next steps in a subsequent post. I’ll get the container secured, line up the user interface or CLI or whatever application I end up building with the API endpoints, and more!

Sign Up for Thrashing Code

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general I stream regularly on Twitch at https://twitch.tv/adronhall, and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode.

My Top 2 Reading List for Go & Data

Right now I’ve got a couple books in queue.

“Black Hat Go” Go Programming for Hackers and Pentesters by Tom Steele, Chris Patten, and Dan Kottmann.


I picked this book up after a good look at the index. There is a basic intro to Go at the beginning of the book to cover language fundamentals, but immediately after that it dives into the meat of the topic: understanding the TCP handshake, TCP itself, writing a scanner, and building a proxy. There are then some basics about HTTP servers, routers, and middleware in chapters 3 and 4, before returning to topics of interest in chapter 5 around exploiting DNS and writing DNS clients. I perused the index a bit further to note it covers SMB and NTLM, a chapter on “Abusing Databases and Filesystems”, then raw packet processing, writing and porting exploit code, and a host of other topics. This is going to be an interesting book to dig into and write about in the coming weeks. I’m pretty excited about it and hope to write a thorough review upon completion.

“Designing Data-Intensive Applications” by Martin Kleppmann


This book is familiar territory, as I’ve spent much of my career working on similar topics that this book covers. What I suspect is that I’ll enjoy reading the material presented in an orderly and concise way versus the chaos and disruptive way I’ve acquired the knowledge on these topics.

From the index, the book starts off with foundations of data systems and the ideas around building reliable, scalable, and maintainable applications. This provides a good basis from which to dive into the other topics. From there it looks like we’ll get a run into the birth of NoSQL, object-relational mismatches, and the related insanity that this has bred in the industry! Then a solid dive into graph-like, traditional, and multi-model data modeling. The beginning quarter of the book then covers everything from hash indexes, SSTables (another familiar topic), LSM-trees, B-trees, and related indexing structures, before wrapping up with star and snowflake schemas for analytics, column-oriented storage, compression, sort orders in column storage, and related material on aggregation in data cubes and materialized views.

That’s just the first 25%! From there Martin covers a wide range of topics, that if you’re in the industry and plan to deal with large scale data-intensive applications these are topics you need to be intimately familiar with!

Reading

Over the next few months while I read through these books I hope to provide summaries and related notes on the material. Who knows, maybe you’ll want to dive into the material yourself! Until then happy thrashing code and may you have high retention and comprehension in reading!

Simple Go HTTP Server Starter in 15 Minutes

This is a quick starter to show some of the features of Go’s http library. (Sample repo here; the docs for the library are located here if you want to dig in deeper.) In this post I cover putting together a simple HTTP server and setting a status code, which provides the basic elements you need to further build out a fully featured server. This post is paired with a video I’ve put together, included below.

00:15 Reference to the GitHub repo where I’ve put the code written in this sample.
https://github.com/Adron/coro-era-coding-go-sample
00:18 Using the Goland IDE (JetBrains) to clone the repository from GitHub.
00:40 Creating code files.
01:00 Pasted in some sample code, and review the code and what is being done.
02:06 First run of the web server.
02:24 First function handler to handle the request and response of an HTTP request and response cycle.
04:56 Checking out the response in the browser.
05:40 Checking out the same interaction with Postman. Also adding a header value and seeing it returned via the browser & related header information.
09:28 Starting the next function to provide further HTTP handler functionality.
10:08 Setting the status code to 200.
13:28 Changing the status code to 500 to display an error.

Getting a Server Running

I start off the project by pulling an empty repository that I had created before starting the video. In this I use the JetBrains Goland IDE to pull the repository from GitHub.


Next I create two files: main.go and main_test.go. We won’t use the main_test.go file right now, but in a subsequent post I’ll put together some unit tests specifically to test out the HTTP handlers we’ll create. Once those are created I pasted in some code that has a basic handler using an anonymous function, provides some static file hosting, and then sets up the server and starts listening. I made a few tweaks, outlined in the video, and executed a first go with the following code.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func (w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Welcome to my website!")
    })

    http.ListenAndServe(":8080", nil)
}

When that executes, opening a browser to localhost:8080 will bring up the website which then prints out “Welcome to my website!”.
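If you’d rather stay on the command line, a quick curl from another terminal shows the same response:

curl http://localhost:8080/
# Prints: Welcome to my website!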


Adding a Function as an HTTP Handler

The next thing I want to add is a function that can act as an HTTP handler for the server. To do this, create a function just like we’d create any function in Go. For this example, the function I built includes several print line calls to the ResponseWriter, writing a passed-in string and some Request properties.

func RootHandler(w http.ResponseWriter, r *http.Request){
    fmt.Fprintln(w, "This is my content.")
    fmt.Fprintln(w, r.Header)
    fmt.Fprintln(w, r.Body)
}

In the func main I changed out the root handler to use this newly created handler instead of the anonymous function that it currently has in place. So swap out this…

http.HandleFunc("/", func (w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Welcome to my website!")
})

with this…

http.HandleFunc("/", RootHandler)

Now the full file reads as shown.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", RootHandler)
    http.ListenAndServe(":8080", nil)
}

Now executing this and navigating to localhost:8080 will display the following.


The string “This is my content.” is displayed first, then the header and body respectively. The body, we can see, is empty: just the two braces {}. The header is more interesting. It is returned as a map type, between the brackets [], showing accept, accept-encoding, accept-language, user-agent, and other header information that was passed.

This is a good thing to explore further: check out how to view or set the values in the headers of HTTP requests and responses and their related metadata. To go a step further and get into this metadata, a tool like Postman comes in handy. I open this tool up, set up a GET request, and add an extra header value just to test things out.
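As a quick sketch of that kind of exploration, reading a request header and setting a response header inside a handler looks something like this; the handler and the X-Example header name are just hypothetical examples.

func HeaderExampleHandler(w http.ResponseWriter, r *http.Request) {
    // Read a single request header value (empty string if it isn't present).
    agent := r.Header.Get("User-Agent")

    // Response headers must be set before anything is written to the body.
    w.Header().Set("X-Example", "some-value")

    fmt.Fprintln(w, "User-Agent was:", agent)
}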


Printing Readable Body Contents

For the next change I wanted a better print out of the body contents, as the previous display was actually just attempting to print out the body in an unreadable way. In this next section I use the ioutil.ReadAll function to get the body to print out in a readable format (this requires "io/ioutil" in the import block). It takes the body and, pending no error, the body variable is converted to a string and printed out to the ResponseWriter on the last line. The RootHandler function then reads like this with the changes.

func RootHandler(w http.ResponseWriter, r *http.Request){
    fmt.Fprintln(w, "This is my content.")
    fmt.Fprintln(w, r.Header)

    defer r.Body.Close()

    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        fmt.Fprintln(w, err)
    }
    fmt.Fprintln(w, string(body))
}

If the result is then requested using Postman again, the results now display appropriately!
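The same check works from curl as well, for example posting an arbitrary JSON body; the content here is just an example.

curl -X POST -d '{"testing":"body contents"}' http://localhost:8080/
# The handler prints the string, the headers, and then the body contents.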


Response Status Codes!

HTTP status codes fall into a set of ranges for various categories of responses. The most common code is of course the success code, which is 200 “Status OK”. Another common one is status code 500, which is a generic catch-all for “Server Error”. The ranges are as follows:

  • Informational responses (100–199)
  • Successful responses (200–299)
  • Redirects (300–399)
  • Client errors (400–499)
  • and Server errors (500–599)

For the next function, to get an example working of how to set this status code, I added the following function.

func ResponseExampleHandler(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(200)
    fmt.Fprintln(w, "Testing status code. Manually added a 200 Status OK.")
    fmt.Fprintln(w, "Another line.")
}

With that, add a handler to the main function.

http.HandleFunc("/response", ResponseExampleHandler)

Now we’re ready to try that out. In the upper right of Postman, the 200 status is displayed. The other data is shown in the respective body & header details of the response.


Next up, let’s just write a function specifically to return an error. We’ll use the standard old default 500 status code.

func ErrorResponseHandler(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(500)
    fmt.Fprintln(w, "Server error.")
}

Then in main, as before, we’ll add the handler registration for the function.

http.HandleFunc("/errexample", ErrorResponseHandler)

Now if the server is run again and an HTTP request is sent to the endpoint, the status code changes to 500 and the message “Server error.” displays on the page.


Summary

That’s a quick intro to writing an HTTP server with Go. From here, we can take many next steps such as writing tests to verify the function handlers, or setup a Docker image in which to deploy the server itself. In subsequent blog entries I’ll write up just those and many other step by step procedures. For now, a great next step is to expand on trying out the different functions and features of the http library.
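As a sketch of that first next step, here is a hypothetical unit test for the RootHandler function using the standard library’s net/http/httptest package; it would live in the main_test.go file created earlier.

package main

import (
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"
)

func TestRootHandler(t *testing.T) {
    // Build a fake request and a recorder that captures the handler's response.
    request := httptest.NewRequest(http.MethodGet, "/", nil)
    recorder := httptest.NewRecorder()

    RootHandler(recorder, request)

    if recorder.Code != http.StatusOK {
        t.Errorf("expected status 200, got %d", recorder.Code)
    }
    if !strings.Contains(recorder.Body.String(), "This is my content.") {
        t.Errorf("expected body to contain the content string, got %q", recorder.Body.String())
    }
}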

That’s it for now. However, if you’re interested in joining me to write some JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

Creating a Go Module & Writing Tests in Less Than 3 Minutes

In this short (less than 4 minutes) video I put together a Go module library, then set up some initial tests calling against functions in the module.


00:12 – Writing unit tests.

00:24 – Create the first go file.

00:40 – Creating the Go go.mod file. When creating the go.mod file, note the command is go mod init [repopath] where repopath is the actual URI to the repo. The go.mod file that is generated would look like this.

module github.com/adron/awesomeLib

go 1.13

00:56 – Enabling Go mod integration for the Goland IDE. This dialog has an option to turn off the proxy or go directly to the repo, bypassing the proxy. For more details on the Go proxy, check out goproxy.io and the Go docs.

01:09 – Instead of TDD, first I cover a simple function implementation. Note the casing is very important in setting up a test in Go. The test function needs to be public, thus capitalized, and the function under test needs to be accessible, thus also capitalized. The function in the awesomeLib.go file, just to have a function that returns some value, looks like this.

package awesomeLib

func GetKnownResult() string {
    return "known"
}

01:30 – Creating the test file, awesomeLib_test.go, then creating the test. In this I also show some of the features of the Goland IDE, which extracts and offers the function names of a module prefaced with Test and other naming conventions that indicate they perform a test, a benchmark, or related testing functionality.

package awesomeLib

import "testing"

func TestGetKnownResult(t *testing.T) {
    got := GetKnownResult()
    if got != "known" {
        t.Error("Known result not received, test failed.")
    }
}

02:26 – Running the standard go test to execute the test. To run tests for all files within this module, such as if we’ve added multiple directories and other files, the command would be go test ./... instead.

02:36 – Using Goland to run the tests with singular or multiple tests being executed. With Goland there are other capabilities to show test coverage, what percentage of functionality is covered by tests, and various other code metrics around testing, performance, and other telemetry.

That’s it; now the project is ready for elaboration and is set up for a TDD, BDD, or implement-and-test style approach to development.
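A common elaboration from here is the table-driven style that is idiomatic in Go testing. A hypothetical expansion of the same test might look like this:

func TestGetKnownResultTableDriven(t *testing.T) {
    // Each case names an expectation; more rows get added as the function grows.
    cases := []struct {
        name string
        want string
    }{
        {name: "returns known value", want: "known"},
    }

    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            if got := GetKnownResult(); got != tc.want {
                t.Errorf("got %q, want %q", got, tc.want)
            }
        })
    }
}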

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general I stream regularly on Twitch at https://twitch.tv/adronhall, and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode.

5 Ways to Learn & Improve Your Programming Language Use

I was posed the question,

What do you write when starting to learn a programming language?

I started an answer to the question in this video. But read on for a more extensive list and details!

I’ve written up this article. It started last week when I wrote up “Top 5 Ways to get the coding basics down!” This week I wanted to delve deeper into a specific item on that list and provide what I came up with, which is this list of things to do in order to effectively learn a programming language.

1. Find a Known Domain

What I mean by this is to find a domain that you personally know. It seemed the 80’s generation of programmer dorks wrote role playing game character generators for Dungeons & Dragons; I did myself, but for Robotech. But there are a million other domains that you would know, and those domains are what you should pick, finding data and actions in them that you could implement with code.

Two examples I’ve used recently include the made-up e-commerce store Better Botz, which works with the domain of robots and mecha, and railroads such as Union Pacific, Norfolk Southern, and Amtrak, along with the respective domain around them. Many other domains you could use might be transportation, automotive, health care, or more specific universes like Harry Potter, Star Wars, Tumble Leaf, or Sesame Street, style and fashion trends, and so on.

2. Model a Domain According to the Language

Let’s say you’re learning JavaScript. Whatever the domain, you’ll want to find actions and data to implement specifically in the way the language is best used. With JavaScript that would be to use the language features and design to build a working application: performing mathematical calculations, such as with BigInt; using dynamic import, chaining, promises, async and await, global variables, arrow functions, and inheritance through prototyping; and learning how the properties and methods, or functions, work best for the application domain.

In C#, Java, Rust, Go, Erlang, F#, and other languages these features and characteristics would be different. Whatever the language, make a list of the features and characteristics that are widely used and work with those to best implement the application domain.

List Interrupt: Refactoring

I’ve written this list in the order in which I personally have found the easiest path to gaining a high level of proficiency with a programming language. You could however mix the list to your preferred style. At this point, though, between the efforts of the first two items and the next, a new skill will be learned both out of necessity and intent: refactoring.

“Code Refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior. Refactoring is intended to improve the design, structure, and/or implementation of the software (its non-functional attributes), while preserving the functionality of the software. Potential advantages of refactoring may include improved code readability and reduced complexity; these can improve source code maintainability and create a simpler, cleaner, or more expressive internal architecture or object model to improve extensibility.”

3. Learn to Test

Learning to test an application will help you tremendously in writing better code. If you’ve committed to the actions of the first two items in this list, this is an opportunity to significantly refactor the application you’ve built around your known domain. You’ll need to break apart existing functions so that they’re bite-sized enough to test. Sometimes you’ll need to differentiate between making something testable at a small unit size versus testing across an application’s hard boundaries, like an integration test between a database and the application’s data access code.

Pushing for testing the core domain functionality of the application is the best thing to focus on. Don’t worry about the specifics of testing whether data is written to or retrieved from the database as much as testing that a function returns the specific answer you expect based on passing it the specific parameters you assume it will be given. This isn’t to say you won’t sometimes want to test what you’ve written to the database; sure, that is something you can test. But focusing on the core application domain functionality will ensure the best return on your investment of time!

4. Clean Coding Practices

Once you’ve gained familiarity with the language by building an application or two using the features, rooted in a known domain that you understand well, and have some test coverage of the application, refactor that application using clean coding practices. Clean code is often described using the SOLID principles:

  • Single Responsibility Principle
  • Open / Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

See also: KISS (Keep It Simple Stupid), DRY (Don’t Repeat Yourself), YAGNI (You Ain’t Gonna Need It), and others.

The idea behind these is to keep the application code, regardless of language, as simple as it can possibly be, with a secondary effect that once software becomes painful to modify, it should be changed so as to make modification simpler. Thus the idea of refactoring to the SOLID principles listed, and doing so continuously, will keep software in a usable, maintainable, and one could arguably say even performant state.

5. The Hardest Suggestion: Naming & Thinking

It is often stated that “the hardest problem in computer science is naming things,” to which I, not in jest, wholeheartedly agree. Elements in programming, programming design, programming implementation, and every permeable space around programming are littered with poorly named things. I don’t blame anybody, because it is damned hard to come up with a good name for features and functionality, and to have the truth in the naming of those features and functionality remain throughout the course of their lifetime.

Naming is hard, to which I’d add: stop programming regularly and think about your naming. Think about refactoring to names that are more accurate to the intent. Stop and ponder what a good name would be, and whether the function or feature will live up to what that name implies. Add in some metaphysical thought to the whole effort, talk it through with someone you know (or a stranger; it’s a good skill for debugging!), but think, think, ponder, analyze, and think some more.

The short of the long part of this suggestion can simply be summarized as: practice naming things, think a bunch, then rename things, and repeat the cycle.

In Summary

To really get a programming language down by heart, and be able to readily implement pretty much anything, this is my go-to list of actions and practices to follow. Hopefully it’ll provide you with some ideas for exploring how you would best like to go about learning a programming language. I’m always open to suggestions, comments, thoughts, and discussions on the topic, so feel free to ping me via the Twitters @Adron.

Search Words

Type in any of the following, in pretty much any search engine, like DuckDuckGo, to dig up the latest and greatest on this topic.

  • Single Responsibility Principle
  • Open / Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle
  • How to Model a Domain
  • Clean Coding Practices
  • Unit Testing
  • Unit Testing Practices
  • SOLID Principles
  • Package Principles (REP, CRP, CCP, ADP, SDP, SAP)
  • Don’t Repeat Yourself (DRY)
  • Keep It Simple Stupid (KISS)
  • You Ain’t Gonna Need It (YAGNI)