Database Naming Convention Ideas

A while ago I wrote a post titled “Pragmatic Database Schema Naming Conventions, Practices, and Patterns”. In that post I outlined the basics of naming the elements of your database, and the database itself, using general naming conventions like Camel, Pascal, Kebab, and the like. For a quick lookup, I also wrote a post titled “Name Casing Conventions, The Quick Comparison” that simply lists out the specific naming conventions.

Recently Jens Dahl Møllerhøj wrote to me about a post he wrote on the topic of many-to-many tables and a convention for how they could be named. I read the post, and indeed it makes a very good observation about language and how we ought to name many-to-many relationship tables, or junction tables. Give it a read at “How to Name Your Junction Tables”.

Also, in the original post a few comments and notes were made about database naming conventions. Here is a short list of those:

  • Generally, many if not most database administrators prefer singular table names, like “user”, “address”, or “item”. Some prefer plural, like “users”, “addresses”, or “items”. Pick one, stick to it.
  • More than a few tools take singular table names and pluralize them, or pluralize the object arrays/entities they return. Be aware of this, and know the effect your naming strategy may have on that feature and practice.
  • Most would prefer lowercase naming for database objects, as it prevents the need for quotes and other extra syntax in SQL statements. Statements are much easier to write when one doesn’t need to quote every table, schema (namespace), column, and related item because it has uppercase characters (see the example just after this list). Yes, this does make picking a convention a bit tricky, but see the next item to simplify things.
  • Compound names should largely be avoided for column and table names. This keeps the naming convention from conflicting with the need for lowercase naming.
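
To make the quoting point concrete, here’s a minimal sketch using psql against a hypothetical local Postgres database; the table and column names are made up purely for illustration.

# Mixed-case identifiers have to be double-quoted in every statement.
psql -c 'CREATE TABLE "UserAccount" ("UserName" text);'
psql -c 'SELECT "UserName" FROM "UserAccount";'

# Lowercase names keep statements quote-free.
psql -c 'create table user_account (user_name text);'
psql -c 'select user_name from user_account;'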

While developing your schemas and naming conventions, here’s another addendum of dos and do-nots for Postgres.

That’s it for now. More comments, ideas, thoughts, complaints, or opinions about naming things in databases, or naming objects and such in programming? Let me know @Adron or leave a comment below.

Terrazura – A Build Out of an Azure based, Hasura GraphQL API on Postgres

I created this repo https://github.com/Adron/terrazura during a live stream on my Twitch Thrashing Code Channel 🤘 at 10am on the 30th of December, 2020. The VOD is now available on my YouTube Thrashing Code Channel https://youtube.com/thrashingcode. A rough as hell year, but I wanted to wrap it up with some solid content. In this stream I tackled a ton of specifics, in detail, about getting Hasura deployed in Azure, backed by Postgres, with a database schema designed and created using database schema migrations, and all sorts of tips n’ tricks along the way. Three hours of solid how-to-get-shit-done material!

For live streams, check out and follow https://www.twitch.tv/thrashingcode 👊🏻 or for VOD viewing check out https://youtube.com/thrashingcode

A point in coding during the video!

02:49 – Shout out to the stream sponsor, Azure, and links to some collateral material.
14:50 – In this first segment I start, but run into some troubleshooting around the Terraform provider versions for Azure. You can skip this part unless you want to see what issue I ran into.
18:24 – Since I ran into issues with the version of Terraform I had installed, I show a quick upgrade to the latest version.
27:22 – After upgrading, I fight through trial-and-error executions of Terraform until I finally get the right combination of provider and Terraform versions.
27:53 – Adding the first Terraform resource, the Azure resource group.
29:47 – Azure Portal oddness, just to take note of if/when you’re working through this. Workaround later in the stream.
32:00 – Adding the Postgres server resource.
44:43 – In this segment I switch over to JetBrains IntelliJ to do the rest of the work. I also tweak the IDE to re-add the plugin for the material design themes and icons. If you use this IDE, it’s IMHO very much worth getting this to switch between themes.
59:32 – After getting leveled up with the IDE, I wrap up the #Postgres server resource and terraform apply the overall set of resources. At this point I also move forward with the infrastructure as code, emphasizing additive changes to the immutable infrastructure: leaning on terraform apply and minimizing any terraform destroy use.
1:02:07 – I try figuring out the portal issue with az logout and logging back in with az login. Still no resources shown, but…
1:08:47 – …eventually I realize I have to use the hack solution of pasting the subscription ID into the Azure portal to get resources for the particular subscription account, which seems highly counterintuitive since it’s the ONLY account. 🧐
1:22:54 – Now that I have variables that need to be passed in on every Terraform execution, I add a script to do this for me.
1:29:35 – Next up is adding the database to the database server, plus a firewall rule. We also get to see the JetBrains IntelliJ HCL plugin’s introspection at work, adding required properties to the firewall resource! A really useful feature.
1:38:24 – Next up, creating the Azure container to deploy our Hasura GraphQL API for #Postgres to!
1:51:42 – BAM! The API server is done and launched! I’ve got a live #GraphQL API up and running in Azure, and we’re ready to start building a data model!
1:56:22 – In this segment I show how to turn off the public-facing console and shift one’s development workflow to the local Hasura console, working against local OR your live dev environment.
1:58:29 – Next segment I get into schema migrations, initializing a directory structure for Hasura CLI use, and metadata, migrations, and related data. Including an update to the latest CLI so you can see how to do that, after running into a slight glitch. 😬
2:23:02 – I shift over to dbdiagram to graphically build out some of the schema via their markdown, then use the SQL export option for #postgres combined with Hasura’s option to execute plain ole SQL via migrations…
2:31:48 – Getting a bit more in depth in this segment, I work through the Hasura console to build out relationships between the tables and data so the GraphQL queries can introspect accordingly.
2:40:30 – Next segment, GraphQL time! I show some of the options of what is available immediately for queries and mutations via the console.
2:50:36 – Then some more details about metadata. I’m going to do a stream with further details in the very near future, since I was a little fuzzy on some of those details myself. This is a good introduction, however, to what the metadata does for the #graphql API.
2:59:07 – Then, as a wrap-up to all of this, I nuke EVERYTHING and deploy it all out to Azure again, inclusive of schema migrations, metadata, etc. 🤘🏻
3:16:30 – Final segment: I add some data to the database and get into a few basic queries and mutations in #graphql via the #graphiql console interface in #Hasura.

Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

On the 9th of September I posted “Setup Postgres, and GraphQL API with Hasura on Azure”. In that post I listed a few refactorings I wanted to make. The following are the top three refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment. This led to the production and developer ports being different. It’s much easier, and more logical, for the port to be the same in both dev and production, at least while we have the console available on the production server (i.e. it should be disabled, more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under graphql-engine, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick with port 8080, so I left it as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. I set this variable to be passed in via the script files I use to execute the Terraform.
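
The declaration itself isn’t shown in this post’s snippets, so here’s a minimal sketch of what it could look like; the number type and the 8080 default are my assumptions, and the default is optional since the scripts below always pass the value in.

variable "apiport" {
  # Port shared by dev and prod; the scripts pass this in explicitly.
  type    = number
  default = 8080
}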

The script file for launching the environment now looks like this.

cd terraform
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

There are then three additional sections in the Terraform file. The first is shown here; the next I’ll talk about in refactor 2 below. The changes sit in the resource as shown below: in the container ports block and in the environment_variables map, the port is now referenced simply as var.apiport.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"

  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine:v1.3.2"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = var.apiport
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT    = var.apiport
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }

    secure_environment_variables = {
      # The %40 is a URL-encoded @, since Azure Postgres logins take the user@servername form.
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With that, I now have the port standardized across dev and prod as 8080. Of course it could be another port; that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do, every time Terraform got production up and going, was navigate over to Azure and find the FQDN to open the console at (or for API calls, etc.). To make this easier, since I’m obviously running the script anyway, I added an output variable that concatenates the port onto the interpolated FQDN from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
  value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have var.apiport concatenated there at the end of the value. With that, execution ends by returning the exact FQDN and port I need to navigate to for the Hasura console!
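
As a handy aside, if you need that value again later without re-running an apply, terraform output can re-print it from state; run from the terraform directory, as in the scripts above, something like this should do it.

cd terraform
terraform output hasura_uri_path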

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post, “Setup Postgres, and GraphQL API with Hasura on Azure”, and realized I had made a mistake: dev and prod weren’t using databases with the same name. Dev was using the default database while prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything basically stays the same; a new part was simply added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now the script, as I set it up to call it.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics'

There were more refactorings I made, but these were the top three I did right away! Now my infrastructure as code is easier to use, the scripts are a little more seamless, and everything wraps into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/ThrashingCode.

Name Casing Conventions, The Quick Comparison

There are a number of name casing practices for variables and other objects when programming, which I’ll cover in quick fashion here.

The Quickie Synopsis

  • camelCaseLikeThis
  • PascalCaseLikeThis
  • snake_case_like_this
  • kebab-case-like-this
  • MixAnd-match_like-this

Camel Case

With Camel Case the first letter of the first word is lowercase, then the first letter of every subsequent word is uppercased. For example, thisIsCamelCased, or for a singular word simply something, or somethingElse.

My 2 ¢ – I’m a big fan of this case when I’m naming private variables or related objects. I’m also a fan of using Camel Casing for certain types of functions, especially in JavaScript, such as this.doesStuff() or somethingAnother.doesOtherStuff().

Pascal Case

In usage of Pascal Case, the first letter of each word within the name is uppercased. Examples include ThisObjectThatThing, or WhateverElse().

My 2 ¢ – This is something I like to use for class declarations (i.e. public class ThisClassThing) and then for functions or methods on that class, or similarly for functions on functions in JavaScript, or the like. Basically, anything that will be used similar to MyThing.DoesThisThing().

Snake Case

Using Snake Case, the empty spaces are replaced with underscores. So a file name like this is my filename.md would become this_is_my_filename.md.

My 2 ¢ – Snake case is great when I need variables that would otherwise have spaces, but that also won’t regularly be displayed with anything that might obfuscate the underscore, such as an underlined HTML link.

Kebab Case

Kebabs are tasty; the case, however, is similar to Snake Case, except that dashes are used instead of underscores. Each space in the name is replaced with a - character, such as my-table-name-is-this or this-is-a-variable.

My 2 ¢ – Kebab case is excellent for filenames, URIs, or anything of that sort where the spacing between the words is important to read, but where an underscore would be obfuscated by the font rendering, such as with a URI.

Hasura CLI Installation & Notes

This post just elaborates on the existing documentation here with a few extra notes and details about installation. This can be helpful when determining how to install, deploy, or use the Hasura CLI for development, continuous integration, or continuous deployment purposes.

Installation

There are three primary operating system binary installations: Windows, MacOS, and Linux.

Linux

In the shell run the following curl command to download and install the binary.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

This command installs the binary to /usr/local/bin, but if you’d like to install it to another location you can add the INSTALL_PATH variable, set to the path you want.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | INSTALL_PATH=$HOME/bin bash

What this script does, in short, follows these steps: it determines the latest version, downloads it with curl, and then assigns execute permissions to the binary. It also does some checks to determine the OS version, type, and distro, whether the CLI already exists, and other validations. To download the binary, and the source if you’d want it for a particular version, check out the manual steps listed later in this post.

MacOS

For MacOS the same steps are followed as Linux, as the installation script steps through the same procedures.

curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

As with the previous Linux installation, the INSTALL_PATH variable can also be set if you’d like the binary installed to another path.

Windows

Windows is a different installation, as one might imagine. The executable (cli-hasura-windows-amd64.exe) is available on the releases page. This leaves it up to you to determine how exactly you want this executable to be called. It’s ideal, in my opinion, to download it and put it into a directory you’ll map into your path. You’ll also want to rename the executable from cli-hasura-windows-amd64.exe to hasura.exe, if only because it’s easier to type and it’ll then match the general examples provided in the docs.

To set up a path on Windows pointing to the directory where you have the executable, you’ll want to open the environment variables dialog. That’s under Start > System > System Settings > Environment Variables. Scroll down until PATH is viewable, and click the edit button to edit that path. Be sure to set it up like c:\pathWhatevsAlreadyHere;c:\newPath\directory\where\hasura\executable\is\. Save that, launch a new console, and that new console should have the executable available.

Manual Download & Installation

You can also navigate directly to the releases page and get the CLI at https://github.com/hasura/graphql-engine/releases. All of the binaries are compiled and ready for download, along with a source code zip file for each particular build.
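
If you’d rather script that manual route yourself, a minimal sketch for Linux might look like the following. The version tag and asset name here are assumptions, modeled on the Windows executable name above, so double-check both against the releases page.

# Hypothetical manual install for Linux; verify the version and asset name first.
VERSION=v1.3.3
curl -L -o hasura \
  "https://github.com/hasura/graphql-engine/releases/download/${VERSION}/cli-hasura-linux-amd64"
chmod +x hasura
sudo mv hasura /usr/local/bin/hasura
hasura version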

Installation via npm

The CLI is available via an npm package too. The package, which wraps the executable, is independently maintained by members of the community. If you want to pin a specific Hasura CLI version to a project, npm is a great way to do so.

For example, to install the Hasura CLI in your project as a development dependency, use the following command; this example gets version 1.3.0.

npm install --save-dev hasura-cli@1.3.0

For version 1.3.1 it would be npm install --save-dev hasura-cli@1.3.1 for example.

The dev dependencies in the package.json file of your project would then look like the following.

"devDependencies": {
  "hasura-cli": "^1.3.0"
}

Another way to install the CLI with npm is to install it globally, using the same format but swapping --save-dev for --global. The following installs the latest version. You can add @version to the end to get a specific version installed globally too, just as shown with the dev install previously.

npm install --global hasura-cli

Notes

Using the npm option is great if you’re installing, using, writing, or otherwise working with JavaScript, have Node.js installed on the dev and other machines, and need the CLI available on those particular machine instances. If not, I’d suggest installing via one of the binary options, especially if you’re creating something like a slimmed-down Alpine Linux container to automate some Hasura CLI executions during a build process; a sketch of that kind of automation follows below. There is a lot of variance in how you might want to install and use the CLI, beyond just installing it to run commands manually.
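
For instance, a build step that applies migrations and metadata against a running Hasura instance might look roughly like this; the endpoint URL and the secret variable are placeholders, and it assumes you run it from a Hasura project directory created with hasura init.

# Hypothetical CI/build step; the endpoint and admin secret here are placeholders.
hasura migrate apply \
  --endpoint https://my-hasura-instance.example.com \
  --admin-secret "$HASURA_ADMIN_SECRET"
hasura metadata apply \
  --endpoint https://my-hasura-instance.example.com \
  --admin-secret "$HASURA_ADMIN_SECRET"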

If you’re curious about any particular installation scenario, ask me @Adron and I’ll answer there and elaborate here on this post!

Happy Hasura CLI Hacking! 🤘
