Shortlist of Database as a Service Providers

Here is a shortlist of top Database-as-a-Service providers for various open source databases, including MariaDB, PostgreSQL, MongoDB, Apache Cassandra, Redis, Elasticsearch, and Neo4j:

  1. MariaDB:
  2. PostgreSQL:
  3. MongoDB:
  4. Apache Cassandra:
  5. Redis:
  6. Elasticsearch:
  7. Neo4j (Graph Database):

Splitting a Postgres Timestamp with Generated Columns & GraphQL Query with Hasura

Recently I created a video short on how to split out a timestamp column for Hasura. This included the SQL for Postgres via a schema migration and also details on how this appears in the Hasura user interface. You can check out the video here.

The breakout of what I show in the video is also available in a GitHub repository.

https://github.com/Adron/graphql-relational-concept-mapping

Postgres Table Creation SQL

Here is the specific SQL that creates the table, with the timestamp broken out into year, month, and day as generated column data.

create table standard_relational_model.users_data
(
    user_id uuid PRIMARY KEY,
    address_id uuid,
    signup_date timestamp DEFAULT now(),
    -- generated columns derived from signup_date via date_part()
    year int GENERATED ALWAYS AS (date_part('year', signup_date)) STORED,
    month int GENERATED ALWAYS AS (date_part('month', signup_date)) STORED,
    day int GENERATED ALWAYS AS (date_part('day', signup_date)) STORED,
    points int,
    details jsonb
);

In this SQL the signup_date column is the timestamp column that I want split out into year, month, and day. I’ve set it up with a default of now() just to seed the column and not require a value when inserting a new row. With that seed, the generated columns year, month, and day use the date_part() function to extract their particular value from the signup_date column and store it in the respective column.

The other columns are just there for reference.
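
To see the generated columns do their thing, a quick insert and select is enough. This is just a minimal sketch against the table above; the values are placeholders, and the gen_random_uuid() calls assume Postgres 13+ (or the pgcrypto extension on older versions).

-- Insert a row with only the non-generated columns; signup_date defaults to now()
-- and year, month, and day are computed automatically.
insert into standard_relational_model.users_data (user_id, address_id, points, details)
values (gen_random_uuid(), gen_random_uuid(), 100, '{"source": "example"}');

-- The generated columns come back populated from the timestamp.
select signup_date, year, month, day
from standard_relational_model.users_data;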

The Hasura Console

In the Hasura Console those columns would look something like this.

Notice the syntax displayed for these is different from the migration that created them.

date_part('day'::text, signup_date)

The above, of course, is for day; the month and year columns follow the same pattern with their respective part names.
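
If that cast looks like a change, it isn’t; 'day' in the migration is already a text literal, so Postgres is just showing the implicit cast when it displays the generated expression. A quick check, runnable in psql or any SQL client, shows both forms return the same value.

-- Both expressions are equivalent; the console simply renders the implicit ::text cast.
select date_part('day', now()) as without_cast,
       date_part('day'::text, now()) as with_cast;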

When data is added to the table, the results come back as follows, first via GraphQL and then via SQL.

GraphQL

The query.

query MyQuery {
  users_data {
    signup_date
    year
    month
    day
  }
}

The results.

{
  "data": {
    "users_data": [
      {
        "signup_date": "1999-04-21T00:00:00",
        "year": 1999,
        "month": 4,
        "day": 21
      },
      ... etc ...
      {
        "signup_date": "2007-01-02T00:00:00",
        "year": 2007,
        "month": 1,
        "day": 2
      },
      {
        "signup_date": "2021-06-29T00:09:48.359247",
        "year": 2021,
        "month": 6,
        "day": 29
      }
    ]
  }
}

SQL

The query.

select signup_date, year, month, day
from standard_relational_model.users_data;

The results.

1999-04-21 00:00:00.000000,1999,4,21
2012-07-04 00:00:00.000000,2012,7,4
2019-06-24 00:00:00.000000,2019,6,24
2013-03-07 00:00:00.000000,2013,3,7
2007-01-02 00:00:00.000000,2007,1,2
2021-06-29 00:09:48.359247,2021,6,29

That is how to build generated columns in Postgres and how they’re available via Hasura to expose via GraphQL!

Coding Effort Introspection, 2nd Quarter Workshops, Code Sessions, & Twitch Streaming Schedule

I began learning Vue.js in earnest a few months ago. But I also started several other #100DaysOfCode efforts (Database Dev Work and GraphQL Design & Dev) at the same time. In hindsight, this wasn’t the best idea. I was going to work on some of the #100DaysOfCode tracks outside of my regular day’s workload while tackling multiple language stacks, and even with the experience and familiarity I have with so many existing stacks, that didn’t put me in a position to succeed.

But I digress; even in failure, lessons have been learned, and I’ll be starting fresh with a different plan next week. Hopefully, not only will this plan work better, but it could also be tangibly more useful for anybody who wants to learn these things too!

With that, I present, the new plan!

Starting the week of the 27th, I’ll stream on Wednesdays at 5pm Pacific on Thrashing Code. On the Hasura HQ channel I’ll be streaming on Tuesdays at 9am Pacific and Thursdays at 9pm Pacific, scheduled at two time points to cover more of the globe. Instead of daily hour-long segments, these will likely run for a few hours, and it’ll be easier to join, ask questions, hang out, chat, and all the things that make a stream entertaining and useful. The lagniappe of this schedule is that it will let me more easily cut shorter segments for those who find them useful but can’t really join the longer sessions. It’ll be a win-win for me and the audience.

Thrashing Code Guests

On the 14th of July, Russell Spitzer will be joining me to talk about tech stack, dev environment, and very likely a few things about ole’ New Orleans! Join us for that conversation and let’s dig into all the topics!

The Music of Thrashing Code

The music streams, alas, are again pushed further into the future. I’ve decided I’m going to put together a little bit more before I start streaming that, plus I’d like to get a little bit more into practice before shredding live on stream. For your sake and mine! 🤘🏻

To join in on live sessions;

Aside from the regularly scheduled things above, I’ve scheduled some workshops again, as people found them useful. These workshops are listed below. It is important to note that these are in addition to the workshops I will provide in the coming weeks and months through Hasura.

  • Getting Started with Hasura GraphQL API and Postgres (Click for tickets)
    • Short introduction to GraphQL
    • Server
    • Client
    • Architectural Overview of Hasura API Server and Tooling
    • Instant GraphQL API
    • CLI Tooling
    • Building a GraphQL Schema with the Hasura Console
    • Database Schema (vs GraphQL)
    • Tables
    • Data Types
    • Relationships
    • Overview of Migrations
    • Using Postgres Functions
    • Short identifiers
    • Default columns (functions & triggers)
  • Full Migrations, Metadata, and Seeds Workflow with Hasura (Click for tickets)
    • Migrations
    • Setup for migrations workflow
    • Versioning migrations.
    • Metadata
    • Setup metadata for workflow
    • Versioning metadata
    • Seeds
    • Setup seeds for initial data loads
    • Versioning seeds
    • Peripheral Workflow Tools & Practices
    • Docker & Local Database Environment
    • Additional Tooling
      • Visual Studio Code
      • JetBrains DataGrip
      • JetBrains Database Plugin
      • Postgres pgAdmin
      • SQL Server Enterprise Manager

Future Workshops

Quick Link to Poll for Priority: https://docs.google.com/forms/d/e/1FAIpQLSdrxPEjPqDLn8GhG1pVOvrhMXP_0LEBqiJirIOUsLkQWA1_jw/viewform?usp=sf_link

Finally, a little help from you, dear readers: below I’ve added a poll of several presentations and workshops that I’d like to give in the coming months, and I would like your suggestions on prioritization, i.e. which would most of you find useful for me to focus on first?

  • SQL Coding (Click to vote) – An introduction to SQL coding. Covering the following material:
    • Introduction to what SQL is and its history. Including why it’s pronounced “sequel”, not S, Q, L, and that it does not stand for “Standard Query Language” because nerds are funny about their naming of things, and naming is hard!
    • The basic structure of SQL statements. How they’re built from object, predicate, and verb formation.
    • Putting together a database with SQL. Including creating a database, schema, table, columns, and how to alter these elements.
    • How to go about editing and dropping the elements we created.
    • A quick overview of database migrations.
    • Query writing, joins, inner and outer, and the deluge of Cartesian products.
  • Data Modeling with Relational Databases (Click to vote) – A dive into the 3rd normal form, and the normalization and denormalization of data, including nuanced tips n’ tricks to types and modeling.
    • Basic data modeling, normal forms, and the implications of building schema around normalized forms.
    • Denormalizing schema.
    • Data types and their usage around data modeling.
    • Data types and their implications within data modeling.
    • Common tips n’ tricks for using data types to build effective normalized or denormalized schema.
  • Advanced SQL Coding (Click to vote) – Going beyond the introduction material and delving into the depths of query writing, batch processing, transactions, and other advanced features of SQL.
    • Writing a basic query and growing this complexity to advanced joins, views, and query options to make data available.
    • Getting Cartesian products, and ensuring we don’t get them unintentionally.
    • Denormalizing data with SQL and some of the complexities of doing so once you have data, and especially with lots of data.
    • Writing loops in SQL and why not to do this.
    • Other SQL tips n’ tricks to awesome SQL coding!
  • GraphQL Servers (Click to vote) – Need a custom GraphQL server? Not sure where to start? In this workshop I’ll provide an introduction to writing GraphQL servers. This is a somewhat language-agnostic workshop, but I will pick one language to implement a reference server in. Ideally we’ll pick it before the workshop based on what the students in the workshop would prefer.
    • Introduction to GraphQL Servers and what they do and how they work.
    • Elements of a GraphQL Server
    • Schema
    • Data Set
    • Resolvers
    • Query Operations
    • GraphQL Types
    • Aliases and Fragments
    • Variables
    • Query Nested Objects
    • Directives
  • GraphQL Clients (Click to vote) – This workshop assumes you’ve got your GraphQL Server all setup and ready for use. Now we just need to ensure our clients are getting, and using the data from the server effectively.
    • Client options for the various language stacks: JavaScript, C#, Java, Go, and possibly other languages.
    • Implementation of queries and mutations in:
    • JavaScript via client and Node.js Server calls (server acting as client).
    • C# and/or Java calls as clients.
    • Go calls as systems client.
    • How to deal with JSON results with JavaScript, C#, Java, and Go.
    • JavaScript with JavaScript Object Notation.
    • C#/Java options for managing JSON.
    • Go options for managing JSON.

Top 3 Refactors for My Hasura GraphQL API Terraform Deploy on Azure

On the 9th of September I posted “Setup Postgres, and GraphQL API with Hasura on Azure”. In that post I mentioned a few refactorings I wanted to make. The following are the top 3 refactorings that make the project in that repo easier to use!

1 Changed the Port Used to a Variable

In the docker-compose file and the Terraform automation, the port used was the default for each particular type of deployment. This led to the production and development ports being different. It’s much easier, and more logical, for the port to be the same in both dev and production, at least while the console is available on the production server (i.e. it should be disabled, but more on that in a subsequent post). Here are the details of that change.

In the docker-compose file, under graphql-engine, I ensured the ports were set to the specific port mapping I’d want. For this, the local dev version, I wanted to stick to port 8080, so I left this as 8080:8080.

version: '3.6'
services:
  postgres:
    image: library/postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/logistics
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
volumes:
  db_data:

For the production version, or whichever version this may be in your build, I added a Terraform variable called apiport. This variable is passed in via the script files I use to execute the Terraform.

The script file change looks like this now for launching the environment.

cd terraform
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics' \
-var 'apiport=8080'

The destroy script now looks like this.

cd terraform
terraform destroy \
-var 'server="logisticscoresystemsdb"' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database="logistics"' \
-var 'apiport=8080'

There are then three additional sections in the Terraform file; the first is here, and the next I’ll talk about in refactor 2 below. The changes in the resource, shown below, are in the container ports section and the environment_variables section, which now simply use var.apiport.

resource "azurerm_container_group" "adronshasure" {
name = "adrons-hasura-logistics-data-layer"
location = azurerm_resource_group.adronsrg.location
resource_group_name = azurerm_resource_group.adronsrg.name
ip_address_type = "public"
dns_name_label = "logisticsdatalayer"
os_type = "Linux"
  container {
name = "hasura-data-layer"
image = "hasura/graphql-engine:v1.3.2"
cpu = "0.5"
memory = "1.5"
    ports {
port = var.apiport
protocol = "TCP"
}
    environment_variables = {
HASURA_GRAPHQL_SERVER_PORT = var.apiport
HASURA_GRAPHQL_ENABLE_CONSOLE = true
}
secure_environment_variables = {
HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}
}
  tags = {
environment = "datalayer"
}
}

With that I now have the port standardized across dev and prod as 8080. Of course, it could be another port; that’s just the one I decided to go with.

2 Get the Fully Qualified Domain Name (FQDN) via a Terraform Output Variable

One thing I kept needing to do every time Terraform got production up and going was navigate over to Azure and find the FQDN to open the console (or make API calls, etc.). To make this easier, since I’m obviously running the script, I added an output variable that concatenates the interpolated FQDN from the results of execution. The output variable looks like this.

output "hasura_uri_path" {
value = "${azurerm_container_group.adronshasure.fqdn}:${var.apiport}"
}

Again, you’ll notice I have the var.apiport concatenated there at the end of the value. With that, it returns at the end of execution the exact FQDN that I need to navigate to for the Hasura Console!

3 Have Terraform Create the Local “Dev” Database on the Postgres Server

I started working with what I had from the previous post, “Setup Postgres, and GraphQL API with Hasura on Azure”, and realized I had made a mistake: I wasn’t using a database on the database server that actually had the same name in both environments. Dev was using the default database and prod was using a newly created, named database! Egads, this could cause problems down the road, so I added some Terraform just for creating a new Postgres database for the local deployment. Everything basically stays the same; a new part was simply added to the local script to execute this Terraform along with the docker-compose command.

First, the Terraform for creating a default logistics database.

terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"
}

provider "postgresql" {
  host            = "localhost"
  port            = 5432
  username        = var.username
  password        = var.password
  sslmode         = "disable"
  connect_timeout = 15
}

resource "postgresql_database" "db" {
  name              = var.database
  owner             = "postgres"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

Now the script as I setup to call it.

docker-compose up -d
terraform init
sleep 1
terraform apply -auto-approve \
-var 'server=logisticscoresystemsdb' \
-var 'username='$PUSERNAME'' \
-var 'password='$PPASSWORD'' \
-var 'database=logistics'

There are more refactorings that I made, but these were the top 3 I did right away! Now my infrastructure as code is easier to use, the scripts are a bit more seamless, and everything wraps into a good development workflow a bit better.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/ThrashingCode.

Pragmatic Database Schema Naming Conventions, Practices, and Patterns

In this post I’ve put together some of the naming conventions, rules, and ideas that I tend to follow when creating database schemas to work with. This also applies to schema-less databases, distributed systems databases, graph, time series, or whatever else I am working with. It’s always good to have some good conventions to work with, and the descriptions and ideas in this post are a solid starting point.

What We’re Working With, Some of The Database Rules

Let’s start with a SQL Server rule: table names must be no more than 128 characters. To force SQL Server to use non-standard table names one can use brackets, and then in scripts these names have to be single quoted.
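
As a quick illustration of those brackets, here is a small T-SQL sketch with hypothetical table and column names; the bracket delimiters are what let a name with a space or a reserved word through.

-- Brackets allow otherwise non-standard names in SQL Server,
-- such as identifiers containing spaces.
CREATE TABLE [Order Details] (
    [Id] int PRIMARY KEY,
    [Unit Price] money
);

SELECT [Unit Price]
FROM [Order Details];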

There are many other rules about naming things in SQL Server, but let’s talk about some database-specific rules for other databases.

Postgres folds names to lowercase rather than uppercase, which is different from many other databases. Throw in some double quotes, however, and you can use names like MyTable, MYTABLE, and mytable. These would all be the same without double quotes; add the double quotes around those names and “MYTABLE” becomes different from “MyTable” and different from “mytable”.
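
A minimal sketch of that folding behavior; the table names here are hypothetical.

-- Unquoted identifiers fold to lowercase, so all three of these hit the same table.
CREATE TABLE MyTable (id int);
SELECT * FROM mytable;   -- works
SELECT * FROM MYTABLE;   -- also works, folds to mytable

-- Double quotes preserve case, so this creates a second, distinct table.
CREATE TABLE "MyTable" (id int);
SELECT * FROM "MyTable"; -- must be quoted and cased exactly to reach this one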

SQL identifiers and key words in Postgres must begin with a letter (a-z, which includes letters with diacritical marks and non-Latin letters) or an underscore. After the first character, an identifier can have letters, underscores, digits, or dollar signs. If an identifier is double quoted, you can also use keywords as identifiers, albeit I would very strongly recommend against this practice.
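
And the keyword escape hatch that double quoting allows, shown only to demonstrate why I’d recommend against it:

-- Legal, because the keywords are double quoted as identifiers, but painful to read.
CREATE TABLE "select" ("from" int);
SELECT "from" FROM "select";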

As these examples show, the rules are just different enough from one database to another that it is often very helpful to use a naming convention that works across databases. I’m often working with a variety of databases, so this post covers naming convention ideas and the respective patterns and practices around them that will work with every conceivable database I know to exist!

Table, Column, Tuple, or Related Naming Conventions

  • Table, column, and related object names should contain only letters and numbers, with numbers in the body of the name rather than as the leading character, and no underscores, spaces, or special characters. In summary, use a case scheme like Camel or Pascal Case, but do not use Snake or Kebab Case.
  • Use meaningful names from the business or organizational domain being modeled. Such as “BankingUsers”, “Transactions”, “railroads”, or “railroad_Systems”.
  • Use singular word names if at all possible, only moving to compound word naming if absolutely necessary. Ideally names would be single words like “User”, “Transactions”, “railroad”, or “system” and exclude compound names like “railroadSystem” until it is needed to prevent confusion or naming collisions.
  • Columns that are primary or foreign keys should be prefaced with PK_ and FK_ respectively, and in my moderately humble opinion, stick to just PK or FK using Camel or Pascal Case. For other metadata, indexes, and related names use a respective preface or postfix conventions.
  • It’s also a good idea to choose plural or singular for table names. However, be sure to choose one or the other so that frameworks, Object Relationship/Relational Mappers/Mapping (ORM), and other tools can effectively name things when used. For example, when table names are singular, many ORM frameworks when generating code would take a singular table name like Customer and make it Customers and have objects of Customer.

Schema/Domain Naming Conventions

In many databases there are additional organizational and related structures that help us set up tables, functions, stored procedures, compiled SQL/queries, and other objects in groupings. Naming these objects accordingly is easiest by following the same convention as the table naming convention.

For example, if the table naming convention follows Camel Case then continue that:

Table Names for Returns in an E-commerce Domain:

  • Purchase
  • Return
  • Shipped

Table Names for Pricing in an E-commerce Domain:

  • Price
  • Cost

Table Names for Core Tables, for multiple schemas, within the E-commerce Domain:

  • Item

This could be split out to three schemas;

  • Core
  • Prices
  • Returns

The names would then look like this:

Core.Item
Prices.Purchase
Prices.Return
Prices.Shipped
Returns.Purchase
Returns.Return
Returns.ShippedItem
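
Creating those schemas and schema-qualified tables in Postgres would look something like the following sketch; the Id and Stamp columns are just placeholders to keep the statements complete, and the double quotes preserve the casing per the Postgres folding rules covered earlier.

CREATE SCHEMA "Core";
CREATE SCHEMA "Prices";
CREATE SCHEMA "Returns";

-- The schema carries the domain grouping; the table names stay single words.
CREATE TABLE "Core"."Item" ("Id" uuid PRIMARY KEY, "Stamp" timestamp);
CREATE TABLE "Prices"."Purchase" ("Id" uuid PRIMARY KEY, "Stamp" timestamp);
CREATE TABLE "Returns"."Return" ("Id" uuid PRIMARY KEY, "Stamp" timestamp);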

An Example

Here’s an example I put together with some naming conventions I’ve found useful, that reduce confusion, and that manage to tell a reasonable amount about the domain space and schema of the database without conflicts.

The Database Schema

Throughout this schema I’ve used Camel Casing, with mostly single word column and single word table names. Keeping it simple, Id and Stamp are some of the recurring columns that are useful for retrieval, relationships, and determining the origins of data over time. In production settings there are default columns that are often needed, and one can rest assured a time stamp is most likely one that is needed everywhere for audits!

For my foreign key columns, one can determine the relationship by the name of the column itself. For example, in the Connection table there are two foreign keys, one to the Action table and one to the Source table, pointing to the respective Id columns in each of those tables. The one exception is NoteJot, which I named that way because Note tends to conflict in certain systems. In that table I’ve added a relationship, for recursive data, back to itself with the NoteId foreign key pointing back to the table’s primary key Id.

The diagram above, without any further details, can be used to create the schema with SQL code that would look like the following.

CREATE TABLE "Source" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Name" text,
  "Uri" text,
  "Details" text
);

CREATE TABLE "SourceNotes" (
  "SourceId" uuid,
  "NotesId" uuid,
  "Details" text,
  "Stamp" timestamp
);

CREATE TABLE "NoteJot" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "NoteId" uuid,
  "Details" text
);

CREATE TABLE "Action" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Action" json
);

CREATE TABLE "Connection" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ActionId" uuid,
  "SourceId" uuid
);

CREATE TABLE "Formatter" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "FormatterMap" json
);

CREATE TABLE "Schema" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "SchemaMap" json
);

ALTER TABLE "SourceNotes" ADD FOREIGN KEY ("SourceId") REFERENCES "Source" ("Id");

ALTER TABLE "SourceNotes" ADD FOREIGN KEY ("NotesId") REFERENCES "NoteJot" ("Id");

ALTER TABLE "NoteJot" ADD FOREIGN KEY ("NoteId") REFERENCES "NoteJot" ("Id");

ALTER TABLE "Connection" ADD FOREIGN KEY ("ActionId") REFERENCES "Action" ("Id");

ALTER TABLE "Connection" ADD FOREIGN KEY ("SourceId") REFERENCES "Source" ("Id");

ALTER TABLE "Formatter" ADD FOREIGN KEY ("ConnectionId") REFERENCES "Connection" ("Id");

ALTER TABLE "Schema" ADD FOREIGN KEY ("ConnectionId") REFERENCES "Connection" ("Id");

We have the tables and keys, all following the Camel Case standard. Much of this could, if you preferred, be switched to Pascal Case; however, switching to Snake or Kebab Case would cause a number of issues depending on the database.

Do NOT Use These

Unless you want to spend tons of time with errors, debugging, and related issues skip these practices.

  • Generally, throughout databases it is best to skip Snake or Kebab Case. In various situations they’re fine, but overall they’re likely to run into conflicts, naming limitations, or other concerns. It’s best to just skip them and remove the concern.
  • When you’re using data that has variance in how it is represented, do not use a multitude of formats. For dates, location, geographic, or related data, it is best to stick to a particular format that is repeated throughout the database. If mixed representations are used, for example a MMDDYYYY date format in one place and a DD-MM-YYYY format in another, the methods, functions, or other elements that consume or process this data will need to account for it, creating more time-consuming and error-prone code (see the sketch after this list).
  • Once you pick a naming scheme for any particular database object type, stick to that naming scheme. For example, if you go with Camel Casing for your tables, use Camel Casing for all of the tables and don’t switch to Pascal for some of them. The one nuance to this guidance is that switching between object types is fine: for example, if the tables are Pascal Cased but the indexes are Camel Cased, that’s perfect. Then, when reviewing a list of objects irrespective of kind, one can differentiate merely by the conventions used.
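
For the date formatting point above, the simplest way to keep one representation in the database is to store a real timestamp or date type and only format on the way out. A minimal Postgres sketch with a hypothetical Event table:

-- Store the value once, as a timestamp, not as formatted text.
CREATE TABLE "Event" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp
);

-- Format only at query time, so every consumer reads the same stored representation
-- and renders it however it needs.
SELECT to_char("Stamp", 'YYYY-MM-DD') AS "IsoDay",
       to_char("Stamp", 'DD-MM-YYYY') AS "AltDay"
FROM "Event";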

Summary & Caveat

Working up a set of patterns, practices, rules, and general conventions to work with on the database side of things is immensely useful. It helps the Database Administrators, Data Scientists, Software Developers, and others that need to utilize the database, and it helps everyone communicate with each other in reference to the database and its data.


That’s it for the database schema topic for now. However, if you’re interested in joining me for more database and data oriented things, language stack setup, software development, patterns, practices, and more, in addition to writing some JavaScript, Go, Python, Terraform, and infrastructure, web dev, and all sorts of coding, I stream regularly on Twitch at https://twitch.tv/thrashingcode and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!