Terrazura – A Build-Out of an Azure-based Hasura GraphQL API on Postgres

I created this repo https://github.com/Adron/terrazura during a live stream on my Twitch Thrashing Code Channel 🤘 at 10am on the 30th of December, 2020. The VOD is now available on my YouTube Thrashing Code Channel https://youtube.com/thrashingcode. A rough as hell year, but I wanted to wrap it up with some solid content. In this stream I tackled a ton of specifics, in detail, about getting Hasura deployed in Azure with Postgres backing it, designing and creating a database schema, using database schema migrations, and all sorts of tips n’ tricks along the way. Three hours of solid how-to-get-shit-done material!

For live streams, check out and follow at https://www.twitch.tv/thrashingcode 👊🏻 or for VOD viewing check out https://youtube.com/thrashingcode

A point in coding during the video!

02:49 – Shout out to the stream sponsor, Azure, and links to some collateral material.
14:50 – In this first segment, I start but run into some troubleshooting needs around the Terraform provider versions for Azure. You can skip this part unless you want to see what issue I ran into.
18:24 – Since I ran into issues with the version of Terraform I had installed, at this point I show a quick upgrade to the latest version.
27:22 – After upgrading, I fight through trial-and-error Terraform executions until I finally get the right combination of provider and Terraform versions.
27:53 – Adding the first Terraform resource, the Azure resource group.
29:47 – Azure Portal oddness, just to take note of if/when you’re working through this. Workaround later in the stream.
32:00 – Adding the Postgres server resource.
44:43 – In this segment I switch over to JetBrains’ IntelliJ to do the rest of the work. I also tweak the IDE to re-add the plugin for the Material Design themes and icons. If you use this IDE, it’s very much – IMHO – worth getting this to switch between themes.
59:32 – After getting leveled up with the IDE, I wrap up the #Postgres server resource and #terraform apply the overall set of resources. At this point I also move forward with the infrastructure as code, emphasizing additive changes to the immutable infrastructure through terraform apply and minimizing any terraform destroy use.
1:02:07 – At this point, I try figuring out the portal issue by running az logout and logging back in to Azure with az login. Still no resources shown, but…
1:08:47 – eventually I realize I have to use the hack solution of pasting the subscription ID into the Azure portal to get resources for the particular subscription account, which seems highly counterintuitive since it’s the ONLY account. 🧐
1:22:54 – The next thing I set up, now that I have variables that need to be passed in on every Terraform execution, is a script to do this for me.
1:29:35 – Next up is adding the database to the database server, along with a firewall rule. We also get to see the JetBrains #Intellij HCL plugin introspection at work, adding required properties to the firewall resource! A really useful feature.
1:38:24 – Next up, creating the Azure container to deploy our Hasura GraphQL API for #Postgres to!
1:51:42 – BAM! API server is done and launched! I’ve got a live #GraphQL API up and running in Azure and we’re ready to start building a data model!
1:56:22 – In this segment I show how to turn off the public-facing console and shift one’s development workflow to the local Hasura console, working against either the local or your live dev environment.
1:58:29 – In the next segment I get into schema migrations, initializing a directory structure for Hasura CLI use, and metadata, migrations, and related data. This includes an update to the latest CLI so you can see how to do that, after running into a slight glitch. 😬
2:23:02 – I also shift over to dbdiagram to graphically build out some of the schema via their markdown, then use the SQL export option for #postgres combined with Hasura’s option to execute plain ole SQL via migrations…
2:31:48 – Getting a bit more in depth in this segment, I delve in – via the Hasura console – to build out relationships between the tables and data so the GraphQL queries can introspect accordingly.
2:40:30 – Next segment, GraphQL time! I show some of the options of what is available immediately for queries and mutations via the console.
2:50:36 – Then some more details about metadata. I’m going to do a stream with further details in the very, very near future, since I was a little fuzzy on some of those details myself. However, this is a good introduction to what the metadata does for the #graphql API.
2:59:07 – Then as a wrap-up to all of this… I nuke EVERYTHING and deploy it all out to Azure again, inclusive of schema migrations, metadata, etc. 🤘🏻
3:16:30 – Final segment, I add some data to the database and get into a few basic queries and mutations in #graphql via the #graphiql console interface in #Hasura.

Pragmatic Database Schema Naming Conventions, Practices, and Patterns

In this post I’ve put together some of the naming conventions, rules, and ideas that I tend to follow when creating database schemas. This also applies to schema-less databases, distributed systems databases, graph, time series, or whatever else I am working with. It’s always good to have some conventions to work with, and the descriptions and ideas in this post are a solid starting point.

What We’re Working With, Some of The Database Rules

Let’s start with a SQL Server rule: table names must be no more than 128 characters. To force SQL Server to accept non-standard table names one can use brackets, and then in scripts those names have to be delimited the same way (with brackets, or with double quotes when QUOTED_IDENTIFIER is on).

There are many other rules about naming things in SQL Server, but let’s look at some rules specific to other databases.
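
As a quick illustration of that bracket delimiting – a hypothetical sketch, with made-up table and column names rather than anything from this post’s schema:

-- SQL Server: a non-standard (space-containing) table name needs delimiters.
CREATE TABLE [Order Details] (
  [Id] int PRIMARY KEY,
  [Unit Price] money
);

-- Any script referencing the table must delimit the name the same way.
SELECT [Unit Price] FROM [Order Details];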

Postgres folds unquoted names to lowercase rather than uppercase, which is different from many other databases. Throw in some double quotes, however, and you can use names like MyTable, MYTABLE, and mytable. These would all be the same without double quotes; add the double quotes around those names and “MYTABLE” becomes different than “MyTable” and different than “mytable”.

SQL identifiers and key words in Postgres must begin with a letter (a-z, which also includes letters with diacritical marks and non-Latin letters) or an underscore. After the first character, an identifier can have letters, underscores, digits, or dollar signs. If an identifier is double quoted you can also use keywords as names, albeit I would very strongly recommend against this practice.
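
Here’s a minimal sketch of how that folding and quoting plays out in Postgres – the table names are hypothetical:

-- Unquoted identifiers fold to lowercase, so these all refer to the same table.
CREATE TABLE MyTable ("Id" uuid PRIMARY KEY);
SELECT * FROM MYTABLE;
SELECT * FROM mytable;

-- Quoted identifiers are case-sensitive, distinct from the table above and from each other.
CREATE TABLE "MyTable" ("Id" uuid PRIMARY KEY);
CREATE TABLE "MYTABLE" ("Id" uuid PRIMARY KEY);

-- A double-quoted keyword is technically legal as a name, but best avoided.
CREATE TABLE "select" ("Id" uuid PRIMARY KEY);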

As these examples show, the rules are just different enough from one database to another that it is often very helpful to use a naming convention that works across databases. I’m often working with a variety of databases, so this post covers naming convention ideas and the respective patterns and practices around them that would work with every conceivable database I know to exist!

Table, Column, Tuple, or Related Naming Conventions

  • Table, column, and related object names should contain only letters and numbers, with numbers in the body of the name rather than as the leading characters – no underscores, and absolutely no spaces or special characters. In summary, use a casing scheme like Camel or Pascal Case, and do not use Snake or Kebab Case.
  • Use meaningful names from the business or organizational domain being modeled, such as “BankingUsers”, “Transactions”, “railroads”, or “railroad_Systems”.
  • Use singular word names if at all possible, only moving to compound word naming if absolutely necessary. Ideally names would be single words like “User”, “Transactions”, “railroad”, or “system” and exclude compound names like “railroadSystem” until it is needed to prevent confusion or naming collisions.
  • Columns that are primary or foreign keys should be prefaced with PK_ and FK_ respectively – or, in my moderately humble opinion, stick to just PK or FK using Camel or Pascal Case. For other metadata, indexes, and related names, use a respective prefix or postfix convention (see the sketch after this list).
  • It’s also a good idea to choose either plural or singular for table names. Be sure to pick one or the other so that frameworks, Object-Relational Mappers (ORMs), and other tools can name things consistently when used. For example, when table names are singular, many ORM frameworks generating code will take a singular table name like Customer, name the collection Customers, and have objects of Customer.
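
To make the key and index naming concrete, here’s a hedged, hypothetical sketch – the table and column names are made up for illustration and intentionally use the PK/FK prefix style from the bullets above, whereas the example schema later in this post keeps plain Id columns:

CREATE TABLE "Customer" (
  "PkCustomerId" uuid PRIMARY KEY,
  "Name" text
);

CREATE TABLE "Purchase" (
  "PkPurchaseId" uuid PRIMARY KEY,
  "FkCustomerId" uuid REFERENCES "Customer" ("PkCustomerId"),
  "Stamp" timestamp
);

-- An index gets its own prefix so it sorts and reads distinctly in object lists.
CREATE INDEX "IxPurchaseStamp" ON "Purchase" ("Stamp");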

Schema/Domain Naming Conventions

In many databases there are additional organizational and related structures that help us set up tables, functions, stored procedures, compiled SQL/queries, and other objects in groupings. Naming these objects is easiest when following the same convention as the table naming convention.

For example, if the table naming convention follows Camel Case then continue that:

Table Names for Returns in an E-commerce Domain:

  • Purchase
  • Return
  • Shipped

Table Names for Pricing in an E-commerce Domain:

  • Price
  • Cost

Table Names for Core Tables, for multiple schemas, within the E-commerce Domain:

  • Item

This could be split out into three schemas:

  • Core
  • Prices
  • Returns

The names would then look like this:

Core.Item
Prices.Purchase
Prices.Return
Prices.Shipped
Returns.Purchase
Returns.Return
Returns.ShippedItem
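
In Postgres, for example, those schema-qualified names could be created like this – a minimal sketch, with the column definitions trimmed down to just an Id for brevity:

CREATE SCHEMA "Core";
CREATE SCHEMA "Prices";
CREATE SCHEMA "Returns";

CREATE TABLE "Core"."Item" ("Id" uuid PRIMARY KEY);
CREATE TABLE "Prices"."Purchase" ("Id" uuid PRIMARY KEY);
CREATE TABLE "Returns"."ShippedItem" ("Id" uuid PRIMARY KEY);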

An Example

Here’s an example I put together with some naming conventions I’ve found useful; they reduce confusion and manage to convey a reasonable amount of information about the domain space and schema of the database without conflicts.

The Database Schema

Throughout this schema I’ve used Camel Casing, with mostly single-word column and single-word table names. Keeping it simple, Id and Stamp are some of the recurring columns that are useful for retrieval, relationships, and determining the origins of data over time. In production settings there are default columns that are often needed, and one can rest assured a time stamp is most likely one of those needed everywhere for audits!

For my foreign key columns, one can determine the relationship by the name of the column itself. For example, in the Connection table there are two foreign keys, one to the Action table and one to the Source table, pointing to the respective Id columns in each of those tables. The one exception is NoteJot, which I named that way because Note tends to conflict in certain systems. In that table I’ve added a relationship, for recursive data, back to itself with the use of the NoteId foreign key pointing back to the table’s primary key Id.

The diagram above, without any further details, can be used to create the schema with SQL code that would look like the following.

CREATE TABLE "Source" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Name" text,
  "Uri" text,
  "Details" text
);

CREATE TABLE "SourceNotes" (
  "SourceId" uuid,
  "NotesId" uuid,
  "Details" text,
  "Stamp" timestamp
);

CREATE TABLE "NoteJot" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "NoteId" uuid,
  "Details" text
);

CREATE TABLE "Action" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Action" json
);

CREATE TABLE "Connection" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ActionId" uuid,
  "SourceId" uuid
);

CREATE TABLE "Formatter" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "FormatterMap" json
);

CREATE TABLE "Schema" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "SchemaMap" json
);

ALTER TABLE "SourceNotes" ADD FOREIGN KEY ("SourceId") REFERENCES "Source" ("Id");

ALTER TABLE "SourceNotes" ADD FOREIGN KEY ("NotesId") REFERENCES "NoteJot" ("Id");

ALTER TABLE "NoteJot" ADD FOREIGN KEY ("NoteId") REFERENCES "NoteJot" ("Id");

ALTER TABLE "Connection" ADD FOREIGN KEY ("ActionId") REFERENCES "Action" ("Id");

ALTER TABLE "Connection" ADD FOREIGN KEY ("SourceId") REFERENCES "Source" ("Id");

ALTER TABLE "Formatter" ADD FOREIGN KEY ("ConnectionId") REFERENCES "Connection" ("Id");

ALTER TABLE "Schema" ADD FOREIGN KEY ("ConnectionId") REFERENCES "Connection" ("Id");

We have the tables and keys, all following the Camel Case standard. Much of this could be switched – if you preferred – to Pascal Case; switching to Snake or Kebab Case, however, would cause a number of issues depending on the database.

Do NOT Use These

Unless you want to spend tons of time with errors, debugging, and related issues, skip these practices.

  • Generally, throughout databases it is best to skip Snake or Kebab Case. In various situations they’re fine, but overall they’re likely to run into conflicts, naming limitations, or other concerns. It’s best to just skip them and remove the concern.
  • When you’re using data that has variance in how it is represented, do not use a multitude of formats. For dates, location, geographic, or related data it is best to stick to a particular format that is repeated throughout the database. If you use mixed representations – for example, dates in an MMDDYYYY format in one place and a DD-MM-YYYY format in another – the methods, functions, or other elements that consume or process this data will need to account for it, creating more time-consuming and error-prone code (see the sketch after this list).
  • Once you pick a naming scheme for any particular database object type, stick to that naming scheme. For example, if you go with Camel Casing for your tables, use Camel Casing for all of the tables and don’t switch to Pascal for some of them. The one nuance to this guidance: if you switch conventions between object types – for example, naming tables in Pascal Case but switching to Camel Case for indexes – that’s perfectly fine. Then, when reviewing a list of objects irrespective of the kinds of objects, one can differentiate them merely by the conventions used.
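
As a small illustration of the date-format point in Postgres – a hypothetical sketch, with a made-up Shipment table – store the value in a proper date or timestamp column so there is exactly one internal representation, and convert to a single agreed-upon text format only at the edges:

CREATE TABLE "Shipment" (
  "Id" uuid PRIMARY KEY,
  "ShippedOn" date
);

-- One consistent text format on the way in...
INSERT INTO "Shipment" ("Id", "ShippedOn")
VALUES ('2c2f9b8a-0000-0000-0000-000000000001', to_date('2020-12-30', 'YYYY-MM-DD'));

-- ...and one consistent format on the way out.
SELECT to_char("ShippedOn", 'YYYY-MM-DD') AS "Shipped" FROM "Shipment";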

Summary & Caveat

Working up a set of patterns, practices, rules, and general conventions to work with on the database side of things is immensely useful. It helps the Database Administrators, Data Scientists, Software Developers, and others that need to utilize the database, and it helps them communicate with each other in reference to the database and its data.


That’s it for the database schema topic for now. However, if you’re interested in joining me for more database and data oriented things – language stack setup, software development, patterns, practices, and more, in addition to writing some JavaScript, Go, Python, Terraform, infrastructure, web dev, and all sorts of coding – I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

Setup Postgres, and GraphQL API with Hasura on Azure

Key Technologies: Hasura, Postgres, Terraform, Docker, and Azure.

I created a data model to store railroad systems, services, schedules, time points, and related information, detailed in the schema from “Beyond CRUD n’ Cruft Data-Modeling” with a few tweaks. The original I’d created for Apache Cassandra; I have since switched to Postgres, which gives the option of primary and foreign keys, relations, and the related connections for the model.

In this post I’ll use that schema to build out an infrastructure as code solution with Terraform, utilizing Postgres and Hasura (OSS).

Prerequisites

Docker Compose Development Environment

For the Docker Compose file, I just placed it in the root of the repository: add a docker-compose.yaml file and then add services. The first service I set up was the Postgres/PostgreSQL database. This uses the standard Postgres image on Docker Hub. I opted for version 12, I do want it to always restart if it gets shut down or crashes, and the last of the obvious settings is the port, which maps from 5432 to 5432.

For the volume, since I might want to back up or tinker with it, I set the db_data location to my own Codez directory. I tend to set up all my databases like this in case I need to debug things locally.

The POSTGRES_PASSWORD is pulled from an environment variable, thus the syntax ${PPASSWORD}. This way no passwords go into the repo. Then I can load the environment variable via a standard export PPASSWORD="theSecretPasswordHere!" line in my system startup script or via other means.

services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/Users/adron/Codez/databases
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432

For the db_data volume, toward the bottom I add the key value setting to reference it.

volumes:
  db_data:

Next I added the GraphQL solution with Hasura. The image for v1.1.0 probably needs to be updated (I believe we’re on version 1.3.x now), so I’ll do that soon, but I got the example working with v1.1.0. Next I’ve got the ports mapped to open 8080 to 8080. This service will depend on the postgres service already detailed. Restart is also set to always, just as with the postgres service. Finally, two environment variables for the container:

  • HASURA_GRAPHQL_DATABASE_URL – this variable is the base postgres URL connection string.
  • HASURA_GRAPHQL_ENABLE_CONSOLE – this is the variable that enables the console user interface. We’ll definitely want to have this for the development environment; however, in production I’d likely want it turned off.

  graphql-engine:
    image: hasura/graphql-engine:v1.1.0
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:logistics@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"

At this point the commands to start this are relatively minimal, but in spite of that I like to create a start and stop shell script. My start script and stop script simply look like this:

Starting the services.

docker-compose up -d

For the first execution of the services you may want to skip the -d and instead watch the startup just to become familiar with the events and connections as they start.

Stopping the services.

docker-compose down

🚀 That’s it for the basic development environment; we’re launched and ready for development. With the services started, navigate to http://localhost:8080/console to start working with the user interface. I’ll have more details on the “Beyond CRUD n’ Cruft Data-Modeling” swap to Hasura and Postgres in an upcoming blog post.

For full syntax of the docker-compose.yaml check out this gist: https://gist.github.com/Adron/0b2ea637b5e00681f4d62404805c3a00

Terraform Production Environment

For the production deployment of this stack I want to deploy to Azure, use Terraform for infrastructure as code, and use the Azure database service for Postgres while running Hasura for my GraphQL API tier.

For the Terraform files I created a folder and added a main.tf file. I always create a folder to work in, generally, to keep the state files and initial prototyping of the infrastructure in a singular place. Eventually I’ll set up a location to store the state and fully automate the process through a continuous integration (CI) and continuous delivery (CD) process. For now though, just a singular folder to keep it all in.

For this I know I’ll need a few variables and add those to the file. These are variables that I’ll use to provide values to multiple resources in the Terraform templating.

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

One other item I’ll want, so that it is a little easier to verify what my Hasura connection information is, is an output that looks like this.

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

Let’s take this one apart a bit. There are a lot of concatenated and interpolated variables being wedged together here. This is basically the Postgres connection string that Hasura will need to make a connection. It includes the username and password, and all of the pertinent parsed and string-escaped values. Note specifically the %40 – a URL-encoded @ sign – between the ${var.username} and ${azurerm_postgresql_server.logisticsserver.name} values, while elsewhere the @ sign, such as the one before the host name, is left unescaped. When constructing this connection string, it is very important to be aware of exactly which of these specific values get connected together and how. But, I did the work for you so it’s a pretty easy copy and paste now!

Next I’ll need the Azure provider information.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

Note that there is a features block that is just empty; the provider now requires this block to be declared even if it is empty.

Next up is the resource group that everything will be deployed to.

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

Now the Postgres server itself. Note the location and resource_group_name simply map back to the resource group. Another thing I found a little confusing, as I wasn’t sure if it was a Terraform name, a resource name tag, or the server name itself, is the “name” key value pair in this resource. It is, however, the server name, which I’ve assigned var.server. The next value assigned, “B_Gen5_2”, is the Azure SKU designator, which is a bit cryptic. More on that in a future post.

After that, the storage is set to – if I RTFM’ed correctly – 5 gigs of storage. For what I’m doing this will be fine. The backup is set up for 7 days of retention. This means I’ll be able to fall back to a backup from any of the last seven days, but after 7 days the backups are rolled and the oldest day is deleted to make space for the newest backup. The geo_redundant_backup_enabled setting is set to false because, with Postgres’ excellent reliability and my desire to not pay for that extra reliability insurance, I don’t need geographic redundancy. Last, I set auto_grow_enabled to true, albeit I do need to determine the exact flow of logic this takes for this particular implementation and deployment of Postgres.

The last chunk of details for this resource are simply the username and password, which are derived from variables, which in turn are derived from environment variables to keep the actual username and password out of the repository. The last two bits enable SSL enforcement and set the version of Postgres to v9.5.

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

Since the database server is all set up, now I can confidently add an actual database to that server. Here the resource_group_name pulls from the resource group resource and the server_name pulls from the server resource. The name, being the database name itself, I derive from a variable too. Then the character set is UTF8 and the collation is set to US English, which are generally standard settings for Postgres installed for use within the US.

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

The next thing I discovered, after some trial and error and a good bit of searching, is the Postgres-specific firewall rule. It appears this is related to the Postgres service in Azure specifically, as through a number of trials and many errors I attempted to use the standard firewalls and firewall rules that are available in virtual networks. My understanding now is that the Postgres servers exist outside of that paradigm and, by relation, have their own firewall rules.

This firewall rule basically attaches the firewall to the resource group, then the server itself, and allows internal access between the Postgres Server and the Hasura instance.

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

The last and final step is setting up the Hasura instance to work with the Postgres Server and the designated database now available.

To set up the Hasura instance I decided to go with the container service that Azure has. It provides a relatively inexpensive, easier to set up, and more concise way to stand up the server than building out an entire VM or full Kubernetes environment just to run a singular instance.

The first section sets up a public IP address, which of course I’ll need to change as the application is developed and I’ll need to provide an actual secured front end. But for now, to prove out the deployment, I’ve left it public, setup the DNS label, and set the OS type.

The next section in this resource outlines the container details. The name of the container can be pretty much whatever you want it to be; it’s your designator. The image, however, is specifically hasura/graphql-engine. I’ve set the CPU and memory pretty low, at 0.5 and 1.5 respectively, as I don’t suspect I’ll need a ton of horsepower just to test things out.

Next I set the available port to 80, then set the environment variable HASURA_GRAPHQL_SERVER_PORT to that port and HASURA_GRAPHQL_ENABLE_CONSOLE to true to display the console there. Then finally that wild concatenated, interpolated connection string that I have set up as an output variable – again, specifically for testing – goes into HASURA_GRAPHQL_DATABASE_URL.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With all that setup it’s time to test. But first, just for clarity here’s the entire Terraform file contents.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

To run this, similarly to how I set up the dev environment, I’ve set up a startup and shutdown script. The startup script, named prod-start.sh, has the following commands. Note the $PUSERNAME and $PPASSWORD are derived from environment variables, whereas the other two values are just inline.

cd terraform

terraform apply -auto-approve \
    -var 'server=logisticscoresystemsdb' \
    -var 'username='$PUSERNAME'' \
    -var 'password='$PPASSWORD'' \
    -var 'database=logistics'

For the full Terraform file check out this gist: https://gist.github.com/Adron/6d7cb4be3a22429d0ff8c8bd360f3ce2

Executing that script gives me results that, if everything goes right, look similar to this.

./prod-start.sh 
azurerm_resource_group.adronsrg: Creating...
azurerm_resource_group.adronsrg: Creation complete after 1s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg]
azurerm_postgresql_server.logisticsserver: Creating...
azurerm_postgresql_server.logisticsserver: Still creating... [10s elapsed]
azurerm_postgresql_server.logisticsserver: Still creating... [20s elapsed]


...and it continues.

Do note that the time this process takes will vary, and it is completely normal for it to take ~3 or more minutes. Once the server is done in the build process, a lot of the other activities start to take place very quickly. Once it’s all done, toward the end of the output I get my hasura_url output variable so that I can confirm that it is indeed put together correctly! Now that this is performed I can take the next steps: remove that output variable, start to tighten security, and more, which I’ll detail in a future blog post once more of the application is built.

... other output here ...


azurerm_container_group.adronshasure: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [50s elapsed]
azurerm_container_group.adronshasure: Still creating... [50s elapsed]
azurerm_postgresql_database.logisticsdb: Creation complete after 51s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.DBforPostgreSQL/servers/logisticscoresystemsdb/databases/logistics]
azurerm_container_group.adronshasure: Still creating... [1m0s elapsed]
azurerm_container_group.adronshasure: Creation complete after 1m4s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.ContainerInstance/containerGroups/adrons-hasura-logistics-data-layer]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

hasura_url = postgres://postgres%40logisticscoresystemsdb:theSecretPassword!@logisticscoresystemsdb.postgres.database.azure.com:5432/logistics

Now if I navigate over to logisticsdatalayer.westus2.azurecontainer.io I can view the Hasura console! But where in the world is this fully qualified domain name (FQDN)? Well, the quickest way to find it is to navigate to the Azure portal and take a look at the details page of the container itself. In the upper right the FQDN is available as well as the IP that has been assigned to the container!

Navigating to that FQDN URI will bring up the Hasura console!

Next Steps

From here I’ll take up next steps in a subsequent post. I’ll get the container secured, map the user interface or CLI or whatever the application is that I build lined up to the API end points, and more!

Sign Up for Thrashing Code

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode.

My Top 2 Reading List for Go & Data

Right now I’ve got a couple books in queue.

“Black Hat Go” Go Programming for Hackers and Pentesters by Tom Steele, Chris Patten, and Dan Kottmann.


I picked this book up after a good look at the index. There is a basic intro to Go at the beginning of the book to cover language fundamentals, but immediately after that it dives into the meat of the topic: understanding the TCP handshake, TCP itself, writing a scanner, and a proxy. There are then some basics about HTTP servers, routers, and middleware in chapters 3 and 4, but it returns immediately to topics of interest in chapter 5, around exploiting DNS and writing DNS clients. I perused the index a bit further to note it covers SMB and NTLM, a chapter on “Abusing Databases and Filesystems”, then raw packet processing, writing and porting exploit code, and a host of other topics. This is going to be an interesting book to dig into and write about in the coming weeks. I’m pretty excited about it and hope to write a thorough review upon completion.

“Designing Data-Intensive Applications” by Martin Kleppmann


This book is familiar territory, as I’ve spent much of my career working on similar topics that this book covers. What I suspect is that I’ll enjoy reading the material presented in an orderly and concise way versus the chaos and disruptive way I’ve acquired the knowledge on these topics.

From the index, the book starts off with the foundations of data systems and the ideas around building reliable, scalable, and maintainable applications. This provides a good basis from which to dive into the other topics. From there it looks like we’ll get a run into the birth of NoSQL, object-relational mismatches, and the related insanity that this has bred in the industry! Then a solid dive into graph-like, traditional, and multi-model data modeling. The first quarter of the book then covers everything from hash indexes, SSTables (another familiar topic), LSM-trees, B-trees, and related indexing structures before wrapping up with star and snowflake schema topics for analytics, column-oriented storage, compression, sort orders in column storage, and related material on aggregation in data cubes and materialized views.

That’s just the first 25%! From there Martin covers a wide range of topics that, if you’re in the industry and plan to deal with large-scale data-intensive applications, you need to be intimately familiar with!

Reading

Over the next few months while I read through these books I hope to provide summaries and related notes on the material. Who knows, maybe you’ll want to dive into the material yourself! Until then happy thrashing code and may you have high retention and comprehension in reading!

Beyond CRUD n’ Cruft Data-Modeling

I dig through a lot of internet results and blog entries that show CRUD data modeling all the time. A lot of these blog entries and documentation are pretty solid. Unfortunately, rarely do we end up with data that is accurately or precisely modeled the way it ought to be or the way we would ideally use it. In this post I’m going to take some sample elements of data and model them out for various uses, then reconstitute that data into different structures for various uses within microservices, loading, and reading, in both normalized and denormalized form.

The Domain: Railroad Systems & Services

The domain I chose for this particular example is the entire global spectrum of rail services. Imagine, if you would, a system that can track all the trains in the world, or even just the trains in a particular area of the world, like the United States. In the United States the trains can be broken down into logical structures of data for various things like freight trains and passenger trains. Trains operate under a particular operator like Amtrak, Union Pacific, or Norfolk Southern, with their respective consists that each train is made up of. Let’s get into some particular word definitions to fully detail this domain. Continue reading “Beyond CRUD n’ Cruft Data-Modeling”