Splitting a Postgres Timestamp with Generated Columns & GraphQL Query with Hasura

Recently I created a video short on how to split out a timestamp column for Hasura. It covers the SQL for Postgres via a schema migration, along with how the result appears in the Hasura user interface. You can check out the video here.

A breakdown of what I show in the video is also available in a GitHub repository.

https://github.com/Adron/graphql-relational-concept-mapping

Postgres Table Creation SQL

Here is the specific SQL that creates the table, with the timestamp broken out into year, month, and day as generated columns.

create table standard_relational_model.users_data
(
    user_id uuid PRIMARY KEY,
    address_id uuid,
    signup_date timestamp DEFAULT now(),
    year int GENERATED ALWAYS AS (date_part('year', signup_date)) STORED,
    month int GENERATED ALWAYS AS (date_part('month', signup_date)) STORED,
    day int GENERATED ALWAYS AS (date_part('day', signup_date)) STORED,
    points int,
    details jsonb
);

In this SQL, signup_date is the timestamp column that I want to split out into year, month, and day. I’ve given it a default of now() so the column is seeded automatically and doesn’t require a value when inserting a new row. From that value, the generated columns year, month, and day use the date_part() function to extract their particular part of signup_date and store it in the respective column.

The other columns are just there as additional reference data.
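
To see the generated columns in action, here’s a minimal sketch of an insert and query. Note that gen_random_uuid() is built into Postgres 13 and later; earlier versions need the pgcrypto extension.

insert into standard_relational_model.users_data (user_id, points, details)
values (gen_random_uuid(), 100, '{"source": "example"}');

-- year, month, and day are computed automatically from signup_date.
select signup_date, year, month, day
from standard_relational_model.users_data;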

The Hasura Console

In the Hasura Console those columns would look something like this.

Notice that the syntax displayed for these is different from that of the migration that created them.

date_part('day'::text, signup_date)

The above, of course, is for day; the month and year columns follow the same pattern with their respective field names.
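
For reference, the three generated columns display in the console as:

date_part('year'::text, signup_date)
date_part('month'::text, signup_date)
date_part('day'::text, signup_date)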

When data is added to the table, the results return as follows, via both GraphQL and SQL.

GraphQL

The query.

query MyQuery {
  users_data {
    signup_date
    year
    month
    day
  }
}

The results.

{
  "data": {
    "users_data": [
      {
        "signup_date": "1999-04-21T00:00:00",
        "year": 1999,
        "month": 4,
        "day": 21
      },

      ... etc ...

      {
        "signup_date": "2007-01-02T00:00:00",
        "year": 2007,
        "month": 1,
        "day": 2
      },
      {
        "signup_date": "2021-06-29T00:09:48.359247",
        "year": 2021,
        "month": 6,
        "day": 29
      }
    ]
  }
}

SQL

The query.

select signup_date, year, month, day
from standard_relational_model.users_data;

The results.

1999-04-21 00:00:00.000000,1999,4,21
2012-07-04 00:00:00.000000,2012,7,4
2019-06-24 00:00:00.000000,2019,6,24
2013-03-07 00:00:00.000000,2013,3,7
2007-01-02 00:00:00.000000,2007,1,2
2021-06-29 00:09:48.359247,2021,6,29

That is how to build generated columns in Postgres, and how Hasura makes them available to expose via GraphQL!

Coding Effort Introspection, 2nd Quarter Workshops, Code Sessions, & Twitch Streaming Schedule

I began learning Vue.js in earnest a few months ago, but I also started several other #100DaysOfCode efforts (Database Dev Work and GraphQL Design & Dev) at the same time. In hindsight, this wasn’t the best idea. Tackling multiple language stacks outside my regular day’s workload, even with the experience and familiarity I have across so many existing stacks, didn’t put me in a position to succeed.

But I digress; even in failure lessons have been learned, and I’ll be beginning anew with a different plan just next week. Hopefully, not only will this plan work better, but it could tangibly be of much better use to anybody who wants to learn these things too!

With that, I present, the new plan!

Starting the week of the 27th, I’ll be streaming on Wednesdays at 5pm Pacific on Thrashing Code. On the Hasura HQ channel I’ll be streaming on Tuesdays at 9am Pacific and Thursdays at 9pm Pacific, two time slots scheduled to cover more of the globe. Instead of daily hour-long segments, these will likely run for a few hours, and it’ll be easier to join, ask questions, hang out, chat, and all the things that make a stream entertaining and useful. The lagniappe of this schedule is that it will let me more easily cut shorter segments for those who find those useful but can’t join the longer sessions. It’ll be a win-win for me and the audience.

Thrashing Code Guests

On the 14th of July, Russell Spitzer will be joining me to talk about tech stacks, dev environments, and very likely a few things about ol’ New Orleans! Join us for that conversation and let’s dig into all the topics!

The Music of Thrashing Code

The music streams, alas, are again pushed further into the future. I’ve decided to put together a bit more material before I start streaming that, plus I’d like to get a bit more practice in before shredding live on stream. For your sake and mine! 🤘🏻

To join in on live sessions, check out the channels above.

Aside from the regularly scheduled streams, I’ve also scheduled some workshops again, as people found those useful; they’re listed below. Note that these are in addition to the workshops I’ll provide in the coming weeks and months through Hasura.

  • Getting Started with Hasura GraphQL API and Postgres (Click for tickets)
    • Short introduction to GraphQL
      • Server
      • Client
    • Architectural Overview of Hasura API Server and Tooling
      • Instant GraphQL API
      • CLI Tooling
    • Building a GraphQL Schema with the Hasura Console
      • Database Schema (vs GraphQL)
      • Tables
      • Data Types
      • Relationships
      • Overview of Migrations
    • Using Postgres Functions
      • Short identifiers
      • Default columns (functions & triggers)
  • Full Migrations, Metadata, and Seeds Workflow with Hasura (Click for tickets)
    • Migrations
      • Setup for migrations workflow
      • Versioning migrations
    • Metadata
      • Setup metadata for workflow
      • Versioning metadata
    • Seeds
      • Setup seeds for initial data loads
      • Versioning seeds
    • Peripheral Workflow Tools & Practices
      • Docker & Local Database Environment
      • Additional Tooling
        • Visual Studio Code
        • JetBrains DataGrip
        • JetBrains Database Plugin
        • Postgres pgAdmin
        • SQL Server Enterprise Manager

Future Workshops

Quick Link to Poll for Priority: https://docs.google.com/forms/d/e/1FAIpQLSdrxPEjPqDLn8GhG1pVOvrhMXP_0LEBqiJirIOUsLkQWA1_jw/viewform?usp=sf_link

Finally, a little help from you, dear readers. Below I’ve added a poll of several presentations and workshops that I’d like to give in the coming months, and I’d like your suggestions on prioritization – i.e. which would most of you find useful for me to focus on first?

  • SQL Coding (Click to vote) – An introduction to SQL coding. Covering the following material:
    • Introduction to what SQL is and its history. Including why it’s pronounced sequel, not S-Q-L, and that it does not stand for “Standard Query Language”, because nerds are funny about naming things, and naming is hard!
    • The basic structure of SQL statements. How they’re built from object, predicate, and verb formation.
    • Putting together a database with SQL. Including creating a database, schema, table, columns, and how to alter these elements.
    • How to go about editing and dropping the elements we created.
    • A quick overview of database migrations.
    • Query writing, joins, inner and outer, and the deluge of Cartesian products.
  • Data Modeling with Relational Databases (Click to vote) – A dive into the 3rd normal form, and the normalization and denormalization of data, including nuanced tips n’ tricks to types and modeling.
    • Basic data modeling, normal forms, and the implications of building schema around normalized forms.
    • Denormalizing schema.
    • Data types and their usage around data modeling.
    • Data types and their implications within data modeling.
    • Common tips n’ tricks for using data types to build effective normalized or denormalized schema.
  • Advanced SQL Coding (Click to vote) – Going beyond the introduction material and delving into the depths of query writing, batch processing, transactions, and other advanced features of SQL.
    • Writing a basic query and growing this complexity to advanced joins, views, and query options to make data available.
    • Cartesian products: how we get them and how to ensure we don’t.
    • Denormalizing data with SQL and some of the complexities of doing so once you have data, and especially with lots of data.
    • Writing loops in SQL and why not to do this.
    • Other SQL tips n’ tricks to awesome SQL coding!
  • GraphQL Servers (Click to vote) – Need a custom GraphQL server? Not sure where to start? In this workshop I’ll provide an introduction to writing GraphQL servers. It’s a somewhat language-agnostic workshop, but I’ll pick one language to implement a reference server in, ideally chosen before the workshop based on what the students would prefer.
    • Introduction to GraphQL Servers and what they do and how they work.
    • Elements of a GraphQL Server
      • Schema
      • Data Set
      • Resolvers
    • Query Operations
      • GraphQL Types
      • Aliases and Fragments
      • Variables
      • Query Nested Objects
      • Directives
  • GraphQL Clients (Click to vote) – This workshop assumes you’ve got your GraphQL server all set up and ready for use. Now we just need to ensure our clients are getting, and using, the data from the server effectively.
    • Client options for the various language stacks: JavaScript, C#, Java, Go, and possibly other languages.
    • Implementation of queries and mutations in:
      • JavaScript via client and Node.js server calls (server acting as client).
      • C# and/or Java calls as clients.
      • Go calls as systems client.
    • How to deal with JSON results with JavaScript, C#, Java, and Go:
      • JavaScript with JavaScript Object Notation.
      • C#/Java options for managing JSON.
      • Go options for managing JSON.

Next Week is Hasura Con 2021

Next week is Hasura Con 2021, which you can register for here, and just attend instead of reading any further. But if you want some reasons to attend, read on; I’ll provide a few in this blog entry!

First Reason – What Have People Built w/ Hasura

You’re curious to learn about what’s been implemented with Hasura’s API and tooling. We’ve got several people who will be talking about what they’ve built with Hasura.

Second Reason – Curious About GraphQL

You’re still curious about GraphQL but haven’t really delved into what it is or what it can do. This is a chance, for just a little of your time, to check out some of its features and capabilities in specific detail. Several of the talks will give you an idea of what GraphQL can do and what its various aspects provide.

Third Reason – Minimal Time, Maximum Benefit

Attending the conference, which is online, only requires however much time you’d like to put into it! Registration is free, so join for the talks you want, or even join me for one of the topic tables or workshops that I’ll be hosting and teaching!

Hope to see you in the chat rooms! If you’ve got any questions feel free to reach out and ask me, my DMs are open on Twitter @Adron and you can always just leave a comment here too!

Quick Answers: What is the UUID column type good for and what exactly is a UUID?

The question has come up a few times recently: what are UUIDs, and what are they good for? In this quick answers post I explain what they are and provide reference links to further material, as well as a Hasura Short video on their implementation and use in Postgres & Hasura.

A Hasura Bit

“A Hasura Bit – What is the UUID column type good for and what exactly is a UUID?” video.

UUID

UUID stands for universally unique identifier. Another common term for a UUID is GUID, or globally unique identifier, which is the term Microsoft created for their UUIDs. Just know that a GUID is a UUID; it’s simply a company naming convention versus the standard industry naming convention.

UUIDs are standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE). Specifically, UUIDs are designed and used per an ISO/IEC spec; if you’d like to know more about the standards they’re based on, check out the Wikipedia page @ https://en.wikipedia.org/wiki/Universally_unique_identifier

UUID Format

In the canonical textual representation, the 16 octets of a UUID are represented as 32 hexadecimal (base-16) digits, displayed in five groups separated by hyphens in an 8-4-4-4-12 format. Even though there are only 32 hexadecimal digits, the hyphens bring the total to 36 characters for display. If storing a UUID as a string, for example, the field needs to be able to hold 36 characters.
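
To see the format, here’s a quick sketch of generating one in Postgres (gen_random_uuid() is built in as of Postgres 13; earlier versions need the pgcrypto extension):

select gen_random_uuid();

-- Example output, in the 8-4-4-4-12 format (36 characters with hyphens):
-- a81bc81b-dead-4e5d-abff-90865d1e13b1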

Uses for a UUID

The first key use case for a UUID is to have something generate it as a completely unique value identifying a set of related data. UUIDs are perfect for primary keys in a database, or for any type of key where uniqueness across a system must be guaranteed.

UUIDs can be generated from many different origin points without any significant concern for collision (i.e. duplicate UUIDs). For example, the database itself has functions that can generate a UUID as a default value at the time a row is inserted, which means a client inserting data wouldn’t need to generate one. This responsibility can also be flipped over to the client side, where the development stack (e.g. Go UUID generation) generates the UUID. The client can then create a primary key entity with that UUID, use it to create the related foreign key items, and so on down the chain of a relationship. Once all of these are created on the client side, they can be inserted as a batch, and if ordered appropriately the batch can even be made transactional to ensure the integrity of the data.
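
Here’s a minimal sketch of both approaches. The customers and orders tables and their columns are hypothetical, and gen_random_uuid() again assumes Postgres 13+ (or the pgcrypto extension):

-- Database-side generation: the key defaults to a freshly generated UUID.
create table customers (
    customer_id uuid primary key default gen_random_uuid(),
    name text
);

create table orders (
    order_id uuid primary key default gen_random_uuid(),
    customer_id uuid references customers (customer_id),
    total numeric(10, 2)
);

-- Client-side generation: create the UUID ahead of time, use it for the
-- parent row and its related rows, then insert the batch in one transaction.
begin;
insert into customers (customer_id, name)
values ('a81bc81b-dead-4e5d-abff-90865d1e13b1', 'Example Customer');
insert into orders (customer_id, total)
values ('a81bc81b-dead-4e5d-abff-90865d1e13b1', 42.00);
commit;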

Hasura 2.0 – A Short Story of v1.3.3 to v2.0 Upgrades

Today at Hasura we released Hasura v2.0! This is a pretty major release with a number of new features that will dramatically increase Hasura’s capabilities. For several of my projects, specifically the infrastructure-as-code projects terrazura (check out the previous blog post w/ video time points and more) and tenancy-bydata, I was able to get the upgrade to Hasura v2.0 done in moments! Since I don’t have to pull backups or anything for these projects, it merely involved the following steps.

  1. Upgrade the Hasura CLI. This is super easy; just issue the command hasura update-cli --version v2.0.0-alpha.1, which will download and update the CLI.
  2. Update the Terraform file so the container pulls the latest version: image = "hasura/graphql-engine:v2.0.0-alpha.1".

Next, run an updated terraform apply command, which in the case of the terrazura project looks like this.

terraform init

terraform apply -auto-approve \
  -var 'server=terrazuraserver' \
  -var 'username='$PUSERNAME'' \
  -var 'password='$PPASSWORD'' \
  -var 'database=terrazuradb' \
  -var 'apiport=8080'

cd migrations

hasura migrate apply

Boom! Everything is now updated to v2.0 and we’re ready for all the upcoming Twitch streams related to these particular projects!

For more, be sure to subscribe to the HasuraHQ Twitch Channel and my Twitch Channel Thrashing Code as I’ll be covering more of the new features in the coming days!