Big changes are coming for me in the weeks and months ahead, but in this moment I want to dedicate this post to a giant, huge, magnificent farewell to the absolutely stellar crew at Hasura! This has, without question, been one of my favorite companies to work for, and its mission has been extremely enjoyable to help further! A huge shout out to Rajoshi and Tanmai (they're the awesome co-founders!) for bringing me on board to help advocate for more GraphQL goodness and Postgres + SQL Server + GraphQL successes!
Cheers to all and thanks for a great time!
No worries either; I'll keep in touch with you all, and ALL y'all Hasurians know where I am! Here on the ole' blog, over email, on the Twitch streams, or on ole' Twitter (@Adron) itself!
…and as always, stay tuned (i.e. subscribe to the blog, the Twitch stream, or Twitter) as my adventure continues; I'll share all the gory details right here!
A parting shout out: if you're using GraphQL and are into a Postgres + SQL Server instant GraphQL option, absolutely check out Hasura's tech. It is seriously impressive!
That ends my #100DaysOfCode && AMA with Hasura’s GraphQL!
Recently I created a video short on how to split out a timestamp column for Hasura. It covers the SQL for Postgres via a schema migration, plus details on how this appears in the Hasura user interface. You can check out the video here.
The breakdown of what I show in the video is also available in a GitHub repository.
Here is the SQL that creates the table, with the timestamp broken out into year, month, and day as generated columns.
create table standard_relational_model.users_data
(
    user_id uuid PRIMARY KEY,
    address_id uuid,
    signup_date timestamp DEFAULT now(),
    year int GENERATED ALWAYS AS (date_part('year', signup_date)) STORED,
    month int GENERATED ALWAYS AS (date_part('month', signup_date)) STORED,
    day int GENERATED ALWAYS AS (date_part('day', signup_date)) STORED,
    points int,
    details jsonb
);
In this SQL, the signup_date column is the timestamp column that I want split out into year, month, and day. I've set it up with a default function call of now() just to seed the column and not require a value when inserting a new row. With that seed, the generated columns year, month, and day use the date_part() function to extract their particular value out of the signup_date column and store it in the respective column.
The other columns are just there for reference.
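To see the generated columns populate, a quick sanity check might look like the following. Note that gen_random_uuid() is built into Postgres 13+ (earlier versions need the pgcrypto extension), and the values here are just placeholders.

insert into standard_relational_model.users_data (user_id, address_id, points, details)
values (gen_random_uuid(), gen_random_uuid(), 100, '{"example": true}');

select signup_date, year, month, day
from standard_relational_model.users_data;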
The Hasura Console
In the Hasura Console those columns would look something like this.
Notice the syntax displayed for these is different from that of the migration that created them.
date_part('day'::text, signup_date)
The above, of course, is for day; month and year follow the same pattern with their respective date parts.
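For completeness, the month and year columns display the same way in the console:

date_part('month'::text, signup_date)
date_part('year'::text, signup_date)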
When data is added to the table, querying it with GraphQL returns results like the following.
GraphQL
The query.
query MyQuery {
  users_data {
    signup_date
    year
    month
    day
  }
}
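The response comes back as standard GraphQL JSON. For a single row it would look something like this (the values are illustrative, not actual results):

{
  "data": {
    "users_data": [
      {
        "signup_date": "2021-06-21T16:20:00",
        "year": 2021,
        "month": 6,
        "day": 21
      }
    ]
  }
}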
Next week is HasuraCon 2021, which you can register for here, and honestly you could just attend instead of reading any further. But if you want some reasons to attend, read on; I'll provide a few in this blog entry!
First Reason – What Have People Built w/ Hasura
You're curious to learn about what people have implemented with Hasura's API and tooling. We've got several speakers who will be talking about what they've built with Hasura.
Second Reason – What GraphQL Can Do
You're still curious about GraphQL but haven't really delved into what it is or what it can do. This is a chance, with just a little of your time, to check out some of its features and capabilities in specific detail. The following are a few talks I'd suggest to get an idea of what GraphQL can do and what its various aspects provide.
Attending the conference, which is online, only requires whatever amount of time you'd like to put into it! Registration is free, so join for the talks you want, or even join me for one of the topic tables or workshops that I'll be hosting and teaching!
Hope to see you in the chat rooms! If you’ve got any questions feel free to reach out and ask me, my DMs are open on Twitter @Adron and you can always just leave a comment here too!
This video shows the process detailed below in this blog entry, to provide the choice of video or a quick read! 👍🏻😁
I coded up some JavaScript to generate some data for a table recently, and it seemed relatively useful, so here it is, ready to use as you may. (The complete js file is below the description of the individual code segments.) This simple data generation file is something I put together to create a csv for some quick data imports into a database (Postgres, SQL Server, or anything you may want). With that in mind, I initialized the repo and added the one library I would need; fs ships with Node.js, so it just needs to be required.
npm install faker
const faker = require('faker');
const fs = require('fs');
Next up, I included the column header row for the csv. I went ahead and set up the variable at this point, since the rest of the csv data would be appended to it as it gets generated. There is probably a faster way to do this, but this was the quickest path to getting something working right now.
After the column header row, I also set up the base 8 UUIDs that would serve as the project_id values to use randomly throughout data generation. The idea behind this is that the project_id values represent the range of values that would be in the data Subhendu would have, and all the ip and other recorded data would be recorded with, and related to, a specific project_id. I used a UUID generation site to generate these first 8 values; that site is available here.
After that I went ahead and added the for loop that would be used to step through and generate each record.
var data = "id,country,ip,created_at,updated_at,project_id\n";
let project_ids = [
    'c16f6dd8-facb-406f-90d9-45529f4c8eb7',
    'b6dcbc07-e237-402a-bf11-12bf2226c243',
    '33f45cab-0e14-4830-a51c-fd44a62d1adc',
    '5d390c9e-2cfa-471d-953d-f6727972aeba',
    'd6ef3dfd-9596-4391-b0ef-3d7a8a1a6d10',
    'e72c0ed8-d649-4c53-97c5-da793d7a8228',
    'bf020fd2-2514-4709-8108-a2810e61c503',
    'ead66a4a-968a-448c-a796-51c6a1da0c20'
];
for (var i = 0; i < 500000; i++) {
    // TODO: Generation will go here.
}
The next things to sort out were the two dates. One would be the created_at value and the other the updated_at value, and the updated_at date needed to occur after the created_at date, for obvious reasons. To get this calculated I added functions to handle the randomization: first, two functions to add days and hours to a date; then random values to add for each; then the calculated dates themselves.
// Add days to a date (negative values subtract).
function addDays(datetime, days) {
    let date = new Date(datetime.valueOf());
    date.setDate(date.getDate() + days);
    return date;
}

// Add hours to a date (negative values subtract).
function addHours(datetime, hours) {
    let time = new Date(datetime.valueOf());
    time.setTime(time.getTime() + (hours * 60 * 60 * 1000));
    return time;
}

var days = faker.datatype.number({min: 0, max: 7});
var hours = faker.datatype.number({min: 0, max: 24});
var updated_at = new Date(faker.date.past());
var created_at = addHours(addDays(updated_at, -days), -hours);
With the date timestamps set up for the row data generation, I moved on to selecting the specific project_id for the row.
var proj_id = project_ids[faker.datatype.number({min: 0, max: 7})];
One other thing I knew I'd need to do is filter out the ' and , characters that appear in some of the country names that would be selected. The way I clean that data to ensure it doesn't break the SQL bulk import process is kind of cheap, and I wouldn't do this with production data, but it works great for generated data like this.
var cleanCountry = faker.address.country().replace(/,/g, " ").replace(/'/g, " ");
If you're curious why I'm calculating these before the general data generation and row setup: I like to keep the actual row-building calls to either a set variable assignment or at most one dot level deep. You'll see that in the row-level data sketched out below.
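Since the row-assembly code itself isn't shown above, here is a minimal sketch of how the loop body and the final file write might come together, using the variables and helper functions already covered. The faker.datatype.uuid() and faker.internet.ip() calls and the rows.csv filename are my assumptions for illustration.

for (var i = 0; i < 500000; i++) {
    var days = faker.datatype.number({min: 0, max: 7});
    var hours = faker.datatype.number({min: 0, max: 24});
    var updated_at = new Date(faker.date.past());
    var created_at = addHours(addDays(updated_at, -days), -hours);
    var proj_id = project_ids[faker.datatype.number({min: 0, max: 7})];
    var cleanCountry = faker.address.country().replace(/,/g, " ").replace(/'/g, " ");

    // One csv row: id,country,ip,created_at,updated_at,project_id
    data += faker.datatype.uuid() + "," + cleanCountry + "," + faker.internet.ip() + "," +
        created_at.toISOString() + "," + updated_at.toISOString() + "," + proj_id + "\n";
}

// Write the accumulated csv out to disk; rows.csv is just an example name.
fs.writeFileSync('rows.csv', data);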
Today at Hasura we released Hasura v2.0! This is a pretty major release with a number of new features that dramatically increase Hasura's capabilities. For several of my projects, specifically the infrastructure as code projects terrazura (check out the previous blog post w/ video time points and more) and tenancy-bydata, I was able to get the upgrade to Hasura v2.0 done in moments! Since I don't have to pull backups or anything for these projects, it merely involved the following steps.
Upgrade the Hasura CLI. This is super easy: just issue the command hasura update-cli --version v2.0.0-alpha.1, which will download and update the CLI.
Next I updated the Terraform file so the container pulls the latest version: image = "hasura/graphql-engine:v2.0.0-alpha.1".
Next, run an updated terraform apply, which for the terrazura project, for example, is the sequence shown below.
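Pulled together, the whole upgrade sequence for a project like terrazura amounts to something like this (the working directory and any Terraform variables depend on your own project layout):

hasura update-cli --version v2.0.0-alpha.1
# ...then set image = "hasura/graphql-engine:v2.0.0-alpha.1" in the Terraform file...
terraform apply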
Boom! Everything is now updated to v2.0 and we’re ready for all the upcoming Twitch streams relating back to these particular projects!
For more, be sure to subscribe to the HasuraHQ Twitch Channel and my Twitch Channel Thrashing Code as I’ll be covering more of the new features in the coming days!