Next Week is HasuraCon 2021

Next week is HasuraCon 2021, which you can register for here and just attend instead of reading any further. But if you want some reasons to attend, read on, and I’ll provide a few in this blog entry!

First Reason – What People Have Built w/ Hasura

You’re curious to learn about what people have implemented with Hasura’s API and tooling. We’ve got several people who will be talking about what they’ve built with Hasura, including:

Second Reason – Curious About GraphQL

You’re still curious about GraphQL but haven’t really delved into what it is or what it can do. This is a chance, for just a little of your time, to check out some of its features and capabilities in specific detail. The following are a few talks I’d suggest to get an idea of what GraphQL can do and what its various aspects provide.

Third Reason – Minimal Time, Maximum Benefit

Attending the conference, which is online, will only require whatever amount of time you’d like to put into it! Registration is free, so join for the talks you want, or even join me for one of the topic tables or workshops that I’ll be hosting and teaching!

Hope to see you in the chat rooms! If you’ve got any questions, feel free to reach out and ask me; my DMs are open on Twitter @Adron, and you can always just leave a comment here too!

Dynamic Data Generation with JavaScript

This video shows the process detailed below in this blog entry, so you have the choice of video or a quick read! 👍🏻😁

I recently coded up some JavaScript to generate data for a table, and it seemed useful enough to share, so here it is, ready to use as you wish. (The complete js file is below the descriptions of the individual code segments.) This simple data generation file is something I put together to create a csv for quick data imports into a database (Postgres, SQL Server, or anything you may want). With that in mind, I installed the library I would need and required it, along with Node.js’s built-in fs module, in the file.

npm install faker

const faker = require('faker');
const fs = require('fs'); // fs ships with Node.js, so no npm install is needed for it

Next up I included the column row of data for the csv. I decided to go ahead and set up the variable at this point, as it would be needed when I add the rest of the csv data to the variable itself. There is probably a faster way to do this, but this was the quickest path to getting something working right now.

After the column row, I also set up the base 8 UUIDs that would relate to the project_id values, to randomly use throughout data generation. The idea behind this is that the project_id values are the range of values that would be in the data that Subhendu would have, and all the ip and other recorded data would be recorded with and related to a specific project_id. I used a UUID generation site to generate these first 8 values; that site is available here.

After that I went ahead and added the for loop that would be used to step through and generate each record.

var data = "id,country,ip,created_at,updated_at,project_id\n";
let project_ids = [
    'c16f6dd8-facb-406f-90d9-45529f4c8eb7',
    'b6dcbc07-e237-402a-bf11-12bf2226c243',
    '33f45cab-0e14-4830-a51c-fd44a62d1adc',
    '5d390c9e-2cfa-471d-953d-f6727972aeba',
    'd6ef3dfd-9596-4391-b0ef-3d7a8a1a6d10',
    'e72c0ed8-d649-4c53-97c5-da793d7a8228',
    'bf020fd2-2514-4709-8108-a2810e61c503',
    'ead66a4a-968a-448c-a796-51c6a1da0c20'];

for (var i = 0; i < 500000; i++) {
    // TODO: Generation will go here.
}

The next thing I wanted to sort out was the two dates. One would be the created_at value and the other the updated_at value. The updated_at date needed to occur after the created_at date, for obvious reasons. To calculate this I added functions to perform the randomization: first two functions to add days and hours, then the random values to add for each, then the calculated dates.

function addDays(datetime, days) {
    let date = new Date(datetime.valueOf());
    date.setDate(date.getDate() + days);
    return date;
}

function addHours(datetime, hours) {
    let time = new Date(datetime.valueOf());
    time.setTime(time.getTime() + (hours*60*60*1000));
    return time;
}

var days = faker.datatype.number({min:0, max:7});
var hours = faker.datatype.number({min:0, max:24});

var updated_at = new Date(faker.date.past());
var created_at = addHours(addDays(updated_at, -days), -hours);

With the date timestamps set up for the row data generation, I moved on to selecting the specific project_id for the row.

var proj_id = project_ids[faker.datatype.number({min:0, max: 7})]

One other thing I knew I’d need to do is filter out the ' and , characters found in some of the country names that would be selected. The way I clean that data to ensure it doesn’t break the SQL bulk import process is kind of cheap, and with production data I wouldn’t do this, but it works great for generated data like this.

var cleanCountry = faker.address.country().replace(/,/g, " ").replace(/'/g, " ") // global regexes so every occurrence is replaced, not just the first

If you’re curious why I’m calculating these before I do the general data generation and set up the row, it’s because I like to keep the row of actual data calls to either a set variable assignment or at most one dot level deep. You’ll see that in the row-level data being generated below.

data +=
    faker.datatype.uuid() + "," +
    cleanCountry + "," +
    faker.internet.ip() + "," +
    created_at.toISOString() + "," +
    updated_at.toISOString() + "," +
    proj_id + "\n"

Now the last step is to create the file for all these csv rows.

fs.writeFile('kundu_table_data.csv', data, function (err) {
  if (err) return console.log(err);
  console.log('Data file written.');
});
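
Putting the segments together, here’s the complete js file promised above. The one adjustment from the individual segments is that the per-row values (dates, project_id, country) are generated inside the loop, so every row gets its own values.

// Generates a csv of random data for import into a database.
const faker = require('faker');
const fs = require('fs');

function addDays(datetime, days) {
    let date = new Date(datetime.valueOf());
    date.setDate(date.getDate() + days);
    return date;
}

function addHours(datetime, hours) {
    let time = new Date(datetime.valueOf());
    time.setTime(time.getTime() + (hours*60*60*1000));
    return time;
}

var data = "id,country,ip,created_at,updated_at,project_id\n";
let project_ids = [
    'c16f6dd8-facb-406f-90d9-45529f4c8eb7',
    'b6dcbc07-e237-402a-bf11-12bf2226c243',
    '33f45cab-0e14-4830-a51c-fd44a62d1adc',
    '5d390c9e-2cfa-471d-953d-f6727972aeba',
    'd6ef3dfd-9596-4391-b0ef-3d7a8a1a6d10',
    'e72c0ed8-d649-4c53-97c5-da793d7a8228',
    'bf020fd2-2514-4709-8108-a2810e61c503',
    'ead66a4a-968a-448c-a796-51c6a1da0c20'];

for (var i = 0; i < 500000; i++) {
    // Random offsets so created_at lands before updated_at.
    var days = faker.datatype.number({min:0, max:7});
    var hours = faker.datatype.number({min:0, max:24});

    var updated_at = new Date(faker.date.past());
    var created_at = addHours(addDays(updated_at, -days), -hours);

    var proj_id = project_ids[faker.datatype.number({min:0, max: 7})];
    var cleanCountry = faker.address.country().replace(/,/g, " ").replace(/'/g, " ");

    data +=
        faker.datatype.uuid() + "," +
        cleanCountry + "," +
        faker.internet.ip() + "," +
        created_at.toISOString() + "," +
        updated_at.toISOString() + "," +
        proj_id + "\n";
}

fs.writeFile('kundu_table_data.csv', data, function (err) {
    if (err) return console.log(err);
    console.log('Data file written.');
});

Run it with node (assuming you saved it as generate.js, a name I’m making up here): node generate.js. As for the “faster way” mentioned earlier, pushing each row into an array and doing a single join at the end would likely beat 500,000 string concatenations.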

The results: a kundu_table_data.csv file with 500,000 rows of generated data, ready for import.

Hasura 2.0 – A Short Story of v1.3.3 to v2.0 Upgrades

Today at Hasura we released Hasura v2.0! This is a pretty major release with a number of new features that will dramatically increase Hasura’s capabilities. For several of my projects, specifically the infrastructure as code projects terrazura (check out the previous blog post w/ video time points and more) and tenancy-bydata, I was able to get the upgrade to Hasura v2.0 done in moments! Since I don’t have to pull backups or anything for these projects, it merely involved the following steps.

  1. Upgrade the Hasura CLI. This is super easy: just issue the command hasura update-cli --version v2.0.0-alpha.1, which will download and update the CLI.
  2. Update the Terraform file so the container pulls the latest version: image = "hasura/graphql-engine:v2.0.0-alpha.1".

Next, run an updated terraform apply command; in my case, for the terrazura project, that’s the following.

terraform init

terraform apply -auto-approve \
  -var 'server=terrazuraserver' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=terrazuradb' \
  -var 'apiport=8080'

cd migrations

hasura migrate apply
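
If you want to double-check that the migrations landed, the CLI’s status command will list applied versions; run it from the same place you ran hasura migrate apply.

hasura migrate status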

Boom! Everything is now updated to v2.0 and we’re ready for all the upcoming Twitch streams relating back to these particular projects!

For more, be sure to subscribe to the HasuraHQ Twitch Channel and my Twitch Channel Thrashing Code as I’ll be covering more of the new features in the coming days!

Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

On the 9th of September I posted “Setup Postgres, and GraphQL API with Hasura on Azure”. In that post I mentioned a few refactorings I wanted to make. The following are the top 3 refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment. This led to the production and developer ports being different. It’s much easier, and more logical, for the port to be the same on both dev and production, at least while we have the console available on the production server (i.e. it should be disabled, more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under graphql-engine, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick to port 8080, so I left this as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. This variable is passed in via the script files I use to execute the Terraform.
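
Since the post doesn’t show the declaration itself, here’s a hedged sketch of what that variable block would look like (the number type is my assumption; a string would work with these scripts too).

variable "apiport" {
  type = number
}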

The script file change looks like this now for launching the environment.

cd terraform
terraform apply -auto-approve \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics' \
  -var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics' \
  -var 'apiport=8080'

There are then three additional changes in the Terraform file; the first two are here, and the next I’ll talk about in refactor 2 below. The changes in the resource, shown below in the container ports section and the environment_variables section, are simply var.apiport.

resource "azurerm_container_group" "adronshasure" {
name = "adrons-hasura-logistics-data-layer"
location = azurerm_resource_group.adronsrg.location
resource_group_name = azurerm_resource_group.adronsrg.name
ip_address_type = "public"
dns_name_label = "logisticsdatalayer"
os_type = "Linux"
  container {
name = "hasura-data-layer"
image = "hasura/graphql-engine:v1.3.2"
cpu = "0.5"
memory = "1.5"
    ports {
port = var.apiport
protocol = "TCP"
}
    environment_variables = {
HASURA_GRAPHQL_SERVER_PORT = var.apiport
HASURA_GRAPHQL_ENABLE_CONSOLE = true
}
secure_environment_variables = {
HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}
}
  tags = {
environment = "datalayer"
}
}

With that, I now have the port standardized across dev and prod as 8080. Of course, it could be another port; that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do after Terraform got production up and going, every time, was navigating over to Azure and finding the FQDN to open the console (or make API calls, etc.). To make this easier, since I’m obviously running the script, I added an output variable that concatenates the FQDN from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have var.apiport concatenated at the end of the value. With that, at the end of execution it returns the exact FQDN and port that I need to navigate to for the Hasura Console!
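
And since it’s a standard output variable, you can also pull the value back out any time after the apply, without re-running anything, with Terraform’s built-in output command.

terraform output hasura_uri_path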

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post, “Setup Postgres, and GraphQL API with Hasura on Azure”, and realized I had made a mistake: I wasn’t using databases with the same name in both environments. Dev was using the default database and prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything basically stays the same; a new part was added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now the script as I set it up to call it.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics'
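
For symmetry, a teardown counterpart could mirror the destroy script from refactor 1; this is just a sketch following that same pattern (minus apiport, which this local Terraform doesn’t declare).

terraform destroy -auto-approve \
  -var 'server=logisticscoresystemsdb' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=logistics'
docker-compose down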

There are more refactorings that I made, but these were the top 3 I did right away! Now my infrastructure as code is easier to use, the scripts are a little more seamless, and everything wraps into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/ThrashingCode.

Hasura CLI Installation & Notes

This post just elaborates on the existing documentation here with a few extra notes and details about installation. This can be helpful when determining how to install, deploy, or use the Hasura CLI for development, continuous integration, or continuous deployment purposes.

Installation

There are three primary operating system binary installations: Windows, MacOS, and Linux.

Linux

In the shell run the following curl command to download and install the binary.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

This command installs the CLI to /usr/local/bin, but if you’d like to install it to another location you can add the INSTALL_PATH variable and set it to the path that you want.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | INSTALL_PATH=$HOME/bin bash

What this script does, in short, is the following: it determines the latest version, downloads it with curl, and then assigns execution permissions to the binary. It also does some checks to determine OS version, type, distro, whether the CLI already exists, and other validations. To download the binary, or the source if you want it, for a particular version, check out the manual steps listed later in this post.

MacOS

For MacOS the same steps are followed as on Linux, as the installation script steps through the same procedures.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

As with the previous Linux installation, the INSTALL_PATH variable can also be set if you want the binary installed to another path.

Windows

Windows is a different installation, as one might imagine. The executable (cli-hasura-windows-amd64.exe) is available on the releases page. This leaves it up to you to determine how exactly you want the executable to be called. It’s ideal, in my opinion, to download it and then put it into a directory that you’ll add to your path. You’ll also want to rename the executable from cli-hasura-windows-amd64.exe to hasura.exe, simply because it’s easier to type and it’ll then match the general examples provided in the docs.

To set up a path on Windows pointing to the directory where you have the executable, you’ll want to open up the environment variables dialog: Start > System > System Settings > Environment Variables. Scroll down until the PATH is viewable, click the edit button to edit that path, and set it up like c:\pathWhatevsAlreadyHere;c:\newPath\directory\where\hasura\executable\is\. Save that, launch a new console, and that new console should have the executable available.
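
If you’d rather script that than click through the dialog, the setx command can append a directory to your user PATH from a console. One caveat: %PATH% expands to the combined machine and user paths, so this flattens both into your user PATH. The directory here is hypothetical; use wherever you put hasura.exe.

setx PATH "%PATH%;c:\tools\hasura"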

Manual Download & Installation

You can also navigate directly to the releases page and get the CLI at https://github.com/hasura/graphql-engine/releases. All of the binaries are compiled and ready for download along with source code zip file of the particular builds for those binaries.
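
For example, here’s a sketch of pinning a specific version manually on Linux. The release assets follow the cli-hasura-<os>-<arch> naming shown on that page; the version below is just an example.

curl -L -o hasura https://github.com/hasura/graphql-engine/releases/download/v1.3.3/cli-hasura-linux-amd64
chmod +x hasura
sudo mv hasura /usr/local/bin/
hasura version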

Installation via npm

The CLI is available via an npm package too. The package that wraps the executable is independently maintained by members of the community. If you want to pin a set Hasura CLI version to a project, using npm is a great way to do so.

For example, if you want to install the Hasura CLI in your project as a development dependency, use the following command to get version 1.3.0.

npm install --save-dev hasura-cli@1.3.0

For version 1.3.1 it would be npm install --save-dev hasura-cli@1.3.1 for example.

The dev dependencies in the package.json file of your project would then look like the following.

"devDependencies": {
  "hasura-cli": "^1.3.0"
}
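
With the dev dependency in place, the executable lands in node_modules/.bin, so you can invoke it without any global install via npx.

npx hasura version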

Another way to install the CLI with npm is to install it globally, using the same format but swapping --save-dev for --global. The following installs the latest version. You can add @version to the end to get a specific version installed globally too, just as shown with the dev install previously.

npm install --global hasura-cli

Notes

Using the npm option is great if you’re installing, using, writing, or otherwise working with JavaScript, have Node.js installed on the dev and other machines, and need the CLI available on those particular machine instances. If not, I’d suggest installing via one of the binary options, especially if you’re creating something like a slimmed-down Alpine Linux container to automate Hasura CLI executions during a build process or similar. There’s a lot of variance in how you’d want to install and use the CLI, beyond just installing it to run commands manually.
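
As a sketch of that container idea: assuming the install script’s dependencies are added to the image, something like the following could bake the CLI into a slim image for build automation.

# Hypothetical build image with the Hasura CLI baked in.
FROM alpine:3.13
# The install script needs curl and bash, which Alpine doesn't include by default.
RUN apk add --no-cache curl bash
RUN curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash
ENTRYPOINT ["hasura"]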

If you’re curious about any particular installation scenarios, ask me @Adron and I’ll answer there and elaborate here on this post!

Happy Hasura CLI Hacking! 🤘
