Hasura 2.0 – A Short Story of v1.3.3 to v2.0 Upgrades

Today at Hasura we released Hasura v2.0! This is a pretty major release with a number of new features that will dramatically increase the capabilities of Hasura. For several of my projects, specifically the infrastructure as code projects terrazura (check out the previous blog post w/ video time points and more) and tenancy-bydata, I was able to get the upgrade to Hasura v2.0 done in moments! Since I don’t have to pull backups or anything for these projects, the upgrade merely involved the following steps.

  1. Upgrade the Hasura CLI. This is super easy: just issue the command hasura update-cli --version v2.0.0-alpha.1. This command will then download and update the CLI.
  2. Next I updated the Terraform file so the container pulls the latest version: image = "hasura/graphql-engine:v2.0.0-alpha.1". (A quick sketch of where that line sits is shown just below.)
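For context, that image line lives in the container block of the azurerm_container_group resource in the Terraform. A minimal sketch of roughly where it sits after the bump; the resource and container names here are illustrative, not copied from the terrazura repo:

  container {
    # Illustrative names; the surrounding azurerm_container_group resource stays as-is.
    name   = "hasura-api"
    image  = "hasura/graphql-engine:v2.0.0-alpha.1" # bumped from the previous v1.3.x tag
    cpu    = "0.5"
    memory = "1.5"
  }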

Next, run an updated terraform apply command; in my case, for the terrazura project for example, that looks like this.

terraform init

terraform apply -auto-approve \
  -var 'server=terrazuraserver' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=terrazuradb' \
  -var 'apiport=8080'

cd migrations

hasura migrate apply

Boom! Everything is now updated to v2.0 and we’re ready for all the upcoming Twitch streams related to these particular projects!

For more, be sure to subscribe to the HasuraHQ Twitch Channel and my Twitch Channel Thrashing Code as I’ll be covering more of the new features in the coming days!

Terrazura – A Build Out of an Azure-based Hasura GraphQL API on Postgres

I created this repo https://github.com/Adron/terrazura during a live stream on my Twitch Thrashing Code Channel 🤘 at 10am on the 30th of December, 2020. The VOD is now available on my YouTube Thrashing Code Channel https://youtube.com/thrashingcode. A rough as hell year, but I wanted to wrap it up with some solid content. In this stream I tackled a ton of specifics in detail: getting Hasura deployed in Azure, backed by Postgres, a database schema designed and created, database schema migrations, and all sorts of tips n’ tricks along the way. Three hours of solid how-to-get-shit-done material!

For live streams, check out and follow at https://www.twitch.tv/thrashingcode 👊🏻 or for VOD viewing check out https://youtube.com/thrashingcode

Here are some key points in coding during the video:

02:49 – Shout out to the stream sponsor, Azure, and links to some collateral material.
14:50 – In this first segment, I start but run into some troubleshooting needs around the provider versions for Terraform in regard to Azure. You can skip this part unless you want to see what issue I ran into.
18:24 – Since I ran into issues with the current version of Terraform I had installed, at this time I show a quick upgrade to the latest version.
27:22 – After upgrading, I fight through trial-and-error execution of Terraform until I finally get the right combination of provider and Terraform versions.
27:53 – Adding the first Terraform resource, the Azure resource group.
29:47 – Azure Portal oddness, just to take note of if/when you’re working through this. Workaround later in the stream.
32:00 – Adding the Postgres server resource.
44:43 – In this segment I switch over to JetBrains’ IntelliJ to do the rest of the work. I also tweak the IDE to re-add the plugin for the Material Design themes and icons. If you use this IDE, it’s IMHO very much worth getting this plugin to switch between themes.
59:32 – After getting leveled up with the IDE, I wrap up the #Postgres server resource and terraform apply the overall set of resources. At this point I also move forward with the infrastructure as code, favoring additive changes to the immutable infrastructure by using terraform apply and minimizing any terraform destroy use.
1:02:07 – At this time, I try figuring out the portal issue by running az logout and logging back in to Azure with az login. Still no resources shown, but…
1:08:47 – eventually I realize I have to use the hack solution of pasting the subscription ID into the Azure portal to get resources for the particular subscription account, which seems highly counterintuitive since it’s the ONLY account. 🧐
1:22:54 – Now that I have variables that need to be passed in on every Terraform execution, the next thing I set up is a script to do this for me.
1:29:35 – Next up is adding the database to the database server, plus a firewall rule. We also get to see JetBrains’ IntelliJ HCL plugin introspection at work, adding required properties to the firewall resource! A really useful feature.
1:38:24 – Next up, creating the Azure container to deploy our Hasura GraphQL API for #Postgres to!
1:51:42 – BAM! API Server is done and launched! I’ve got a live #GraphQL API up and running in Azure and we’re ready to start building a data model!
1:56:22 – In this segment I show how to turn off the public-facing console and shift one’s development workflow to the local Hasura console, working against either a local or your live dev environment. (There’s a quick sketch of the relevant setting just after this timestamp list.)
1:58:29 – Next segment I get into schema migrations, initializing a directory structure for Hasura CLI use, and metadata, migrations, and related data. Including an update to the latest CLI so you can see how to do that, after running into a slight glitch. 😬
2:23:02 – I also shift over to dbdiagram to graphically build out some of the schema via their markdown, then use the SQL export option for #postgres combined with Hasura’s option to execute plain ole SQL via migrations…
2:31:48 – Getting a bit more in depth in this segment, I work through – via the Hasura console – building out relationships between the tables and data so the GraphQL queries can introspect accordingly.
2:40:30 – Next segment, GraphQL time! I show some of the options of what is available immediately for queries and mutations via the console.
2:50:36 – Then some more details about metadata. I’m going to do a stream with further details in the very, very near future, since I was a little fuzzy on some of those details myself. However, this is a good introduction to what the metadata does for the #graphql API.
2:59:07 – Then as a wrap-up to all of this… I nuke EVERYTHING and deploy it all out to Azure again, inclusive of schema migrations, metadata, etc. 🤘🏻
3:16:30 – Final segment, I add some data to the database and get into a few basic queries and mutations in #graphql via the #graphiql console interface in #Hasura.
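On that 1:56:22 note, turning off the public-facing console in the Terraform deploy comes down to one environment variable on the container group. A minimal sketch, assuming the same azurerm_container_group environment block layout shown later in this post (so treat the exact values as illustrative):

    environment_variables = {
      # Illustrative: same environment block style as the container group resource later in this post.
      # Disable the hosted, public-facing console; work from the local Hasura CLI console instead.
      HASURA_GRAPHQL_ENABLE_CONSOLE = "false"
      HASURA_GRAPHQL_SERVER_PORT    = "8080"
    }

With that applied, the local hasura console command from the Hasura CLI, pointed at the deployed endpoint, becomes the day-to-day workflow instead of the hosted console.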

Another VMWare Error Resolved with Hackiness!

I use VMWare Fusion for almost all of my VMs these days. Especially for the VMs I use for streaming via Twitch when I code. I always like to have good, clean, out of the box loads for the operating systems I’m going to use. That way, whatever I’m doing is in the easiest state for anyone viewing and trying to follow along to replicate.

With this great power, however, comes great hackiness in keeping these images in a good state. Recently I’d been coding away on a Go project and boom, I killed the VM, likely because I was also running multiple videos processing with Adobe products, including Photoshop, and I had HOI4 running in the background. That’s a strategy game if you’re curious. The machine was getting a bit overloaded, but that’s fairly standard practice for me. Anyway, the VM just keeled over.

I tried restarting it and got this message.

Something something proprietary to the whatever and,

“Cannot open the disk ‘xxxxx.vmdk’ or one of the snapshot disks it depends on.”

I started searching and didn’t find anything useful. So I went about trying some random things. It’s really crazy too, because usually I’d just forget it and trash the image. But ya see, I had code I’d actually NOT COMMITTED YET!!! I kept trying, and finally came to a solution, oddly enough, that seemed to work as if nothing odd had happened at all!

I opened up the contents of the virtual machine file and deleted the *.lck files. I wasn’t sure what they were at the time (they turn out to be the lock files VMWare uses to mark a VM as in use, which a crashed VM can leave behind stale), and I was kind of just frustrated, but hey, it worked like a charm!

So if you run into this problem, you may want to just nuke the *.lck files and try to kick start it that way.

First Quarter Workshops, Code Sessions, & Twitch Streaming Schedule

I present the details for upcoming workshops, sessions, and streams for the first quarter of 2021. This quarter includes January through March. In late March I’ll post an updated list of new content coming and existing content continuing.

Thrashing Code Channel on YouTube for the VODs and VLOGs.

Thrashing Code Channel on Twitch for the live streams.

Hasura HQ on Twitch and YouTube too.

Composite Thrashing Code Blog where I put together articles about the above content plus list events, metal, and a lot more, which I also mirror on Medium and on Dev.to for those who like to read there.

Workshops

Workshops are 1+ hour long, have breaks, and mostly follow a set curriculum. They’ll often have collateral available before and after the workshop such as slide decks, documentation, and often a code repository or two. The following are the scheduled workshops I’ve got for the first quarter of 2021.

January 28th @ 12:00-14:00 PT Relational Data Modeling for GraphQL – This will be a data modeling workshop focused on getting a GraphQL API up and running, built around a relational data model. In this workshop I’ll be showing how to do this using dbdiagram, JetBrains DataGrip, and the Hasura API & CLI tooling. The ideas, concepts, and axioms I lay out in this workshop aren’t limited or tightly coupled to these tools; I use them simply as a quick and effective way to get further into the concepts and then move on to actual implementation within the workshop. Thus, the tools aren’t must-haves, but they will help you follow along with the workshop.

February 17th @ 14:00-16:00 Relational Data Modeling for GraphQL – See above description. This will be a live rerun of the workshop, so a new group can join in live, ask questions, and work through the material.

February 18th @ 12:00-14:00 PT Introduction to GraphQL – In this introduction to GraphQL, I’ll cover specifically the client-side usage of GraphQL queries, mutations, and subscriptions. Even with a focus on the client-side queries, I will provide a tutorial during this workshop on setting up our sample server-side GraphQL API using Hasura. From this, an example will be provided of basic data modeling, database schema and design creation in relation to our GraphQL entities, and how all of this connects. From there I’ll add some data, discuss the pros and cons of various structures in relation to GraphQL, and then get into the semantic details of queries, mutations, and subscriptions in GraphQL with our Hasura API.

March 23rd @ 14:00-16:00 Introduction to GraphQL – See above, a rerun of the workshop.

March 24th @ 12:00-14:00 PT Relational Data Modeling for GraphQL – See above, a rerun of the workshop.

One Offs

These sessions will be on a set list of topics that will be provided at the beginning of the event. They’ll also include various collateral like a GitHub repository and pertinent notes that detail what I’m showing in the video.

January 11th @ 10~Noon Join me as I show you how I set up my Hasura workflow for a Go language stack. In this session I’ll delve into the specifics, the IDE, and work toward building out a CLI application that uses Hasura and GraphQL as the data store. Join me for some coding, environment setup, workflow, and more. This is a session where I’ll be happy to field questions too (as with most of my sessions), so if you’ve got questions about Hasura, my workflow, Go, CLI development, or anything else I’m working on, join in and toss a question into chat!

February 1st @ 10am~Noon Join me as I broach the GraphQL coding topic, similar to the one-off coding session on January 11th, but this time with JavaScript – a more direct and native way to access, use, and benefit from GraphQL! This session will range from server-side Node.js coding to some client-side coding too; we’ll talk about the various ways to make calls and a number of other subjects on this topic. As always, dive in and AMA.

Coding Session Series

These sessions may vary from day to day, but will be centered around a particular project or effort, or just around learning something new about a bit of code, how something works, or other technologically related explorations. An example: in March I’m kick-starting #100DaysOfCode, which will be a blast! 1 hour a day, for 100 days. What will we learn? Who knows, but it’ll be a blast hacking through it all!

March TBD @ TBD, but in March and repeating daily on weekdays! Day 1 and ongoing of #100DaysOfCode! Yup, I’ve decided to jump on the 100 Days of Code train! The very general scheduling of topics I intend to cover so far goes like this: algorithms, data/storage, vuejs, and then into building a project. The project isn’t decided yet, nor the algorithms, nor the specific data and storage topics, but that’ll be the general flow. More details to come in late February and early March!

Tuesday, January 12th @ 10:00 PT on Hasura HQ, repeating weekly on Tuesdays, I’ll be putting together a full stack app, learning new parts of doing so, and more, using the Hasura tooling along with the Go language and stack. Join me for some full stack app dev; we’ll be getting – over time – deep into all the things!

Wrap Up TLDR; && Full Schedule

That’s the material I’m putting together for the Thrashing Code and Hasura channels on Twitch & YouTube. I hope you’ll enjoy it, get value out of it all, or just join me to hang out on stream and in workshops. Give the channel a follow on Twitch, and if you ever miss a live session on Twitch, within ~24 hours or shortly thereafter I’ll have the stream posted for VOD on the Thrashing Code YouTube Channel, which you can subscribe to for all the latest updates, the above videos, and more!

Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

I posted “Setup Postgres, and GraphQL API with Hasura on Azure” on the 9th of September. In that post I noted a few refactorings that I wanted to make. The following are the top 3 refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment. This led to production and developer ports that were different. It’s much easier, and more logical, for the port to be the same on both dev and production, at least while we have the console available on the production server (i.e. it should be disabled; more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under graphql-engine, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick with port 8080, so I left this as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. I set this variable to be passed in via the script files I use to execute the Terraform.
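The declaration of that variable isn’t shown elsewhere in this post, so here’s a minimal sketch of it; the default value is just an assumption for convenience, since the scripts below pass the value explicitly anyway.

variable "apiport" {
  type    = string
  default = "8080" # assumed default; overridden by -var 'apiport=...' in the scripts below
}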

The script file change looks like this now for launching the environment.

cd terraform
terraform apply -auto-approve \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics' \
  -var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics' \
  -var 'apiport=8080'

There are then three additional sections in the Terraform file: the first is here, and the next I’ll talk about in refactor 2 below. The changes in the resource, shown below, are in the container ports section and the environment_variables section, simply as var.apiport.

resource "azurerm_container_group" "adronshasure" {
name = "adrons-hasura-logistics-data-layer"
location = azurerm_resource_group.adronsrg.location
resource_group_name = azurerm_resource_group.adronsrg.name
ip_address_type = "public"
dns_name_label = "logisticsdatalayer"
os_type = "Linux"
  container {
name = "hasura-data-layer"
image = "hasura/graphql-engine:v1.3.2"
cpu = "0.5"
memory = "1.5"
    ports {
port = var.apiport
protocol = "TCP"
}
    environment_variables = {
HASURA_GRAPHQL_SERVER_PORT = var.apiport
HASURA_GRAPHQL_ENABLE_CONSOLE = true
}
secure_environment_variables = {
HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}
}
  tags = {
environment = "datalayer"
}
}

With that I now have the port standardized across dev and prod to be 8080. Of course, it could be another port; that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do every time Terraform got production up and going was navigating over to Azure and finding the FQDN to open the console at (or make API calls against, etc). To make this easier, since I’m obviously running the script, I added an output variable that concatenates the interpolated FQDN from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have the var.apiport concatenated there at the end of the value. With that, it returns at the end of execution the exact FQDN that I need to navigate to for the Hasura Console!

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post “Setup Postgres, and GraphQL API with Hasura on Azure” and realized I had made a mistake: dev and prod weren’t using databases with the same name on the database server. Dev was using the default database and prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything basically stays the same; a new part was just added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now the script, as I set it up to call this.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics'

There are more refactorings that I made, but these were the top 3 I did right away! Now my infrastructure as code is easier to use, the scripts are a little bit more seamless, and everything fits into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/ThrashingCode.