A Hasura Quick Start with Remote Schema, Remote Joins

I’ve been building GraphQL APIs for a number of years now, alongside RESTful, gRPC, XML, and other API styles I won’t even bring up right now, and so far GraphQL APIs have been great to work with. The libraries in different languages, from .NET’s Hot Chocolate and Go’s graphql-go to Apollo’s JavaScript based tooling and servers and Java’s GraphQL for Spring, have all worked great.

Sometimes you’re in the fortunate situation where you’re using PostgreSQL, SQL Server, or another database supported by a tool like Hasura. Being able to get a full GraphQL API (with REST options too) running in seconds is pretty impressive. From a development perspective it is a massive boost. As Hasura adds more database connectors, as it has with Snowflake and Amazon Athena, the server and tooling become even more powerful.

With that I wanted to show an N+1 demo, where N is day 1 with Hasura. The idea is: what do you do immediately after you get a sample service running with Hasura? How do you integrate it with other services, or more specifically, how do you integrate your Hasura API alongside APIs you’ve written yourself, such as an enterprise GraphQL for Spring based API running against Mongo or another data source? This repo is the basis for several demonstration repositories I am building that will show how you can set up, generally for local development, Hasura + X API with Y language stack.

This is the Hasura quick start repository here, with migrations and metadata for a local setup. The first demonstration repo for a peripheral GraphQL API will be a Spring based API in this repository. The following steps will get the quick start repository up and running.

  1. Clone this repo git clone git@github.com:Adron/hasura-quick-start.git.
  2. From the root (where the docker-compose.yml file is located) execute docker compose up -d.
  3. Navigate into the hasura directory.
  4. Execute hasura metadata apply, then hasura migrate apply, and then hasura metadata apply. Just do it, it’s a strange workflow thing.
  5. Still in the `hasura` directory, execute hasura console.

These steps are demonstrated in this video starting at the 48 second mark.
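
Put together, the terminal session looks roughly like this (a quick sketch assuming Docker and the hasura CLI are already installed):

git clone git@github.com:Adron/hasura-quick-start.git
cd hasura-quick-start
docker compose up -d
cd hasura
hasura metadata apply
hasura migrate apply
hasura metadata apply
hasura console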

What do you get once deployed?

The following are some of the core capabilities of Hasura and showcase what you can get up and running in a matter of seconds, even when you start from a completely empty database! First off you’ll find the database now has 3 tables along with their pertinent schema built out in PostgreSQL and available via Hasura, as shown here under the Data tab of the console.

I also created a schema diagram just to provide a visual of how these tables are designed.

For the remote schema, the Spring API, the following steps will get it cloned and running locally.

  1. Clone this repo git clone git@github.com:Adron/hasura-spring-boot-graphql.git.
  2. Execute ./gradlew build to get the jar file built. It will then be located in the build/libs directory of the project.
  3. Next, build the Docker image locally with docker build -t adron/hasura-spring-boot-graphql .
  4. Now you can either start this container with docker compose up -d using the docker-compose.yml in the project, or run the image directly with docker run -p 8081:8080 adron/hasura-spring-boot-graphql.

For a walkthrough of getting the Spring API running, check out 2:28 onward in this video.
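
Consolidated into a rough shell session, with the image tag and ports pulled from the steps above:

git clone git@github.com:Adron/hasura-spring-boot-graphql.git
cd hasura-spring-boot-graphql
./gradlew build
docker build -t adron/hasura-spring-boot-graphql .
docker run -p 8081:8080 adron/hasura-spring-boot-graphql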

Now both of these instances are running locally and you can test each out respectively, but not specifically together. I’ll probably write up another post on how to get services that spin up separately to run together for localized development. However, with the way things are set up in the two repos, it’s as if one team is the Hasura team building a GraphQL API and another is a Spring Java GraphQL API team, and they’re working autonomously of each other based just on the contract of the APIs themselves.
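
That post will dig into the details, but one common approach is to put both containers on a shared Docker network so they can reach each other by name; a rough sketch, with the network and container names made up purely for illustration:

# create a shared network and attach both running containers to it
docker network create local-graphql-dev
docker network connect local-graphql-dev hasura-graphql-engine
docker network connect local-graphql-dev hasura-spring-boot-graphql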

Remote Schema

With that being the scenario, I’ve deployed the Spring API out remotely so that I could show how to put together a remote schema connection and then a remote join query, i.e. a nested query in GraphQL speak, across these two APIs.

To add the remote schema, click on the remote schemas tab on the console. Add a name (1), then the URI (2), and optionally if needed add appropriate headers (3) or forward all headers from client requests.
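
If you’d rather script this than click through the console, Hasura also exposes the same operation through its metadata API as add_remote_schema; a hedged sketch, where the local URL, admin secret, remote schema name, and endpoint are all placeholders for whatever your setup uses:

curl --request POST \
  --url http://localhost:8080/v1/metadata \
  --header 'content-type: application/json' \
  --header 'x-hasura-admin-secret: <your-admin-secret>' \
  --data '{
    "type": "add_remote_schema",
    "args": {
      "name": "spring-graphql",
      "definition": {
        "url": "<your-remote-graphql-endpoint>",
        "forward_client_headers": true
      }
    }
  }'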

Once that’s added, navigate to the relationships tab of the new remote schema and click Add. Then, for this example, select remote database (1), add a name (4) (Customer in the example), and for the type choose object (3) (per the example).

Then scroll down on that console screen and choose sales_data (1) and default, public, and users (2) under the reference database, schema, and table. Next up choose the source field (3) and reference column (4).

Once added it will look like this in the console.

This creates a relationship that makes it possible to write nested GraphQL queries across these sources. If it were a single contiguous database, the schema would look like this. I’ve color coded the sales_data table red to signify that it is the table we know lives in another database (or, specifically, is provided via another hosted API). However, as stated, in a single database the relationships would look like this. The relationship, however, isn’t in a database at all; it’s stored in the Hasura metadata, tying users and sales_data together.

Now writing a query across this data would shape up like this. Because of the way the relationship was drawn via the remote schema, the path to get the nested Customer object (2) for the sales data is to start with the sales_data (1) entity, as shown.

query {
  sales_data {
    sales_number
    updated_at
    Customer {
      name
    }
  }
}

Now we want to add more details about the particular customer like their email and details. To do this we’ll utilize another nesting level within this query that delves into relationships that are in the PostgreSQL database itself.

query {
  sales_data {
    sales_number
    updated_at
    Customer {
      name
      emails {
        email
      }
      details {
        details
      }
    }
  }
}

With this, the nested emails (3) and details (4) will be provided; these are foreign key relationships to the primary key table users in the underlying database, made available by Hasura’s relationships in metadata.
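
To run the same query outside the console, POST it to Hasura’s GraphQL endpoint; a sketch assuming the local docker-compose setup exposes the engine on port 8080 and has an admin secret configured (adjust or drop that header to match your setup):

curl --request POST \
  --url http://localhost:8080/v1/graphql \
  --header 'content-type: application/json' \
  --header 'x-hasura-admin-secret: <your-admin-secret>' \
  --data '{"query":"query { sales_data { sales_number updated_at Customer { name emails { email } details { details } } } }"}'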

Boom! That’s it. It’s a pretty easy setup if the databases and APIs involved have Hasura available to connect them in this way. Otherwise, this is a huge challenge to develop against if you’re solely using a tech stack like Apollo, Spring Boot, or Hot Chocolate; often something along the lines of federation, with more complexity, would come into play. But more on that later. I’ve got a piece coming on federation, stitching, remote schemas, and gateways, among the various ways to bring multiple GraphQL APIs, or GraphQL and RESTful APIs, together into a single, or singularly managed, API endpoint.

Hope that was useful, if you’ve got comments, questions, or curiosities let me know in the comments here, or pop over to the video and leave a comment there.

References:

The full video of setup and how the remote schema & joins work in Hasura.

Gradle Build Tool

A few helpful links and details on where to find information about the Gradle Build Tool.

Installation

Via SDKMAN sdk install gradle x.y.z where x.y.z is the version, like 8.0.2.

Via Brew with brew install gradle.

To install manually, check out the instructions here.
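
Whichever route you take, a quick sanity check that the install worked:

# confirm Gradle is on the path and which version you ended up with
gradle --version

# if you installed via SDKMAN, this lists the versions available and installed
sdk list gradle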

Building a Java Library (or application, Gradle plugin, etc)

Use the init task. From inside the directory for the pertinent project, run:

gradle init

You’ll be prompted for options.

With the project initialized, this is what the folder structure looks like.

At this point add the Java code for the library, similar to this example, and execute a build like this.

./gradlew build

Build Collateral

View the test report via the HTML output file at lib/build/reports/tests/test/index.html.

The JAR file is available in lib/build/libs with the name lib.jar. Verify the archive is valid with jar tf lib/build/libs/lib.jar.

Add the version by setting the version = '0.1.1' in the build.gradle file.

Run the jar task ./gradlew jar and the build will create a lib/build/libs/lib-0.1.1.jar with the expected version.

To include this version information in the jar’s manifest, add the following to the build.gradle file:

tasks.named('jar') {
    manifest {
        // Write the project name and version into META-INF/MANIFEST.MF
        attributes('Implementation-Title': project.name,
                   'Implementation-Version': project.version)
    }
}

To verify this all works, execute ./gradlew jar and then extract the MANIFEST.MF via jar xf lib/build/libs/lib-0.1.1.jar META-INF/MANIFEST.MF.
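
For a quick eyeball of the result, extract and print the manifest; the expected contents are sketched in the comments (exact attributes and ordering can vary by Gradle version):

# pull the manifest out of the built jar and print it
jar xf lib/build/libs/lib-0.1.1.jar META-INF/MANIFEST.MF
cat META-INF/MANIFEST.MF
# Expect something along the lines of:
#   Manifest-Version: 1.0
#   Implementation-Title: lib
#   Implementation-Version: 0.1.1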

Adding API Docs

In the Library.java source file, replace the /* at the start of the comment with /** so that we get javadoc markup.

Run the ./gradlew javadoc task. The generated javadoc files are located at lib/build/docs/javadoc/index.html.

To add this as a build task, in build.gradle add a section with the following:

java {
    withJavadocJar()
}
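
With that in place, my understanding is the build also gains a javadocJar task that packages the generated docs alongside the main jar; a quick sketch to try:

./gradlew javadocJar
# the docs archive should land next to the other jars, e.g. lib/build/libs/lib-0.1.1-javadoc.jar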

Publish a Build Scan

Execute a build scan with ./gradlew build --scan.

Common Issues + Tips n’ Tricks

gradlew – Permission Denied issue

Let’s say you execute Gradle with ./gradlew with whatever parameter and immediately get a response of “Permission Denied”. The most common solution, especially for gradlew executables included in repositories, is to just give the file permission to execute. This is done with a simple chmod +x gradlew and you should now be ready to execute!

WE DID IT! DataStax Astra is GA

Yesterday we finally went full GA (General Availability) with DataStax Astra. For the quick TLDR, think of it as Apache Cassandra that you can spin up as a service and use in about a minute. As I wrote about some months ago, I joined the engineering team to help build out the system! I quickly got to reconnoitering the role and working toward the build out of features, which are now available to you!

With Astra, if you’ve used Apache Cassandra or DataStax Enterprise you can use the same drivers or CQL you’re familiar with. But with Astra there are two additional capabilities we’ve just released to use in connecting to and working with your databases:

  • Astra REST API
  • Astra GraphQL API

With the REST API there are a number of capabilities: adding a table, returning a list of all the tables, returning the contents of a table, and deleting a table. In addition to tables, there is functionality to retrieve, retrieve all, add, update, and delete columns. All of the standard CRUD (Create, Read, Update, and Delete) commands can also be performed.

The GraphQL API gives you the ability to perform CRUD actions and query with filters using GraphQL syntax.

Authorization Token

To use either of these services, the first thing you’ll need is to create one of Astra’s time based authorization tokens. These tokens work until 30 minutes after the last call made with the token; once expired, a new token must be created. To create a token, make an HTTP POST to the API, passing several header values and the username and password in the body of the request.

For an example of retrieving an authorization token I’ve put together a cURL request below. To get the URL for your database, navigate to the Astra dashboard; on the summary screen of any database the API Access URLs are listed.

curl --request POST \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/auth \
  --header 'accept: */*' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: 24cc6f6f-c1d9-4d4e-a4d3-e34c7d8b148a' \
  --data '{"username":"betterbot","password":"betterbot"}'

A successful request will return a result with the auth token that looks like this.

{"authToken":"9a38437f-7e03-49a8-bc5d-b4e305d7c1e8"}

With that authorization token we can now call actions against the REST or GraphQL APIs.
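
A convenient pattern, if you have jq installed, is to capture the token into an environment variable and reuse it on later requests; a sketch using the placeholder URL and credentials from above:

# grab the authToken field from the auth response and stash it for later calls
export ASTRA_TOKEN=$(curl --silent --request POST \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/auth \
  --header 'content-type: application/json' \
  --data '{"username":"betterbot","password":"betterbot"}' | jq -r '.authToken')

# then pass it as the x-cassandra-token header on subsequent requests
echo $ASTRA_TOKEN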

Creating a Table via the Astra REST API

To create a table, we need a few key elements: the table name, whether it should be created only if it doesn’t already exist, and column definitions with at least one column as a primary key. This is done by passing the schema as JSON to the REST API. Here’s an example of some JSON that can be used to create a table.

'{"name":"products","ifNotExists":true,"columnDefinitions":
  [ {"name":"id","typeDefinition":"uuid","static":false},
    {"name":"name","typeDefinition":"text","static":false},
    {"name":"description","typeDefinition":"text","static":false},
    {"name":"price","typeDefinition":"decimal","static":false},
    {"name":"created","typeDefinition":"timestamp","static":false}],"primaryKey":
    {"partitionKey":["id"]},"tableOptions":{"defaultTimeToLive":0}}'

To use this JSON to create a table, just add the pertinent headers, insert your keyspace into the URL and your token into the x-cassandra-token header, and POST the data to the REST API endpoint. A cURL request to create the table would look like this.

curl --request POST \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/keyspaces/betterbotz/tables \
  --header 'accept: */*' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: 07e37064-b265-4618-94ce-1c4606f584f9' \
  --header 'x-cassandra-token: ' \
  --data '{"name":"products","ifNotExists":true,"columnDefinitions":
  [ {"name":"id","typeDefinition":"uuid","static":false},
    {"name":"name","typeDefinition":"text","static":false},
    {"name":"description","typeDefinition":"text","static":false},
    {"name":"price","typeDefinition":"decimal","static":false},
    {"name":"created","typeDefinition":"timestamp","static":false}],"primaryKey":
    {"partitionKey":["id"]},"tableOptions":{"defaultTimeToLive":0}}'

Adding data via a GraphQL Mutation

At this point, with a table created, we can add, update, or delete data. The sample cURL statement I’ve put together here issues a GraphQL mutation to add a record to the products table.

curl --request POST \
  --url https://ba965c97-86f1-4d38-8cne-58qa1d2209a1-us-east1.apps.astra.datastax.com/api/graphql \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: xyzaa27b-de8e-4afc-8431-8f06a326047d' \
  --header 'x-cassandra-token: 3ad1ca6a-62pq-4e1b-b273-4c08ea334909' \
  --data-raw '{"query":"mutation {superarms: insertProducts(value:{id:\"65cad0df-4fc8-42df-90e5-4effcd221ef7\" name:\"Arm Spec A1\" description:\"Powerful Robot Arm Spec A.\" price: \"9999.99\" created: \"2012-04-23T18:25:43.511Z\"}){value {name description price created}}}","variables":{}}'

Here are a few other examples of issuing a GraphQL mutation, this time an update, in some other languages just for good measure.

Go

package main

import (
  "fmt"
  "io/ioutil"
  "net/http"
  "strings"
)

func main() {

  url := "https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql"
  method := "POST"

  // GraphQL mutation that updates the product record, sent as a JSON payload.
  payload := strings.NewReader("{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}")

  client := &http.Client{}

  req, err := http.NewRequest(method, url, payload)
  if err != nil {
    fmt.Println(err)
    return
  }
  req.Header.Add("accept", "*/*")
  req.Header.Add("Content-Type", "application/json")
  req.Header.Add("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68")

  res, err := client.Do(req)
  if err != nil {
    fmt.Println(err)
    return
  }
  defer res.Body.Close()

  body, err := ioutil.ReadAll(res.Body)
  if err != nil {
    fmt.Println(err)
    return
  }

  fmt.Println(string(body))
}

Python

import requests

url = "https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql"

# GraphQL mutation that updates the product record, sent as a JSON payload.
payload = "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}"
headers = {
  'accept': '*/*',
  'Content-Type': 'application/json',
  'X-Cassandra-Token': 'e85b3021-fb89-4f43-9ba6-a64a49ba5f68'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)

Java

OkHttpClient client = new OkHttpClient().newBuilder()
  .build();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}");
Request request = new Request.Builder()
  .url("https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql")
  .method("POST", body)
  .addHeader("accept", "*/*")
  .addHeader("content-type", "application/json")
  .addHeader("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68")
  .addHeader("Content-Type", "application/json")
  .build();
Response response = client.newCall(request).execute();

and C#!

var client = new RestClient("https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql");
client.Timeout = -1;
var request = new RestRequest(Method.POST);
request.AddHeader("accept", "*/*");
request.AddHeader("content-type", "application/json");
request.AddHeader("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}",
           ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
Console.WriteLine(response.Content);

With that short tour, check out your free database today @ https://astra.datastax.com/register! Feel free to ping me on Twitter @Adron or here in comments, I’m open to and would love to discuss your experience!

Do Java Code Streams Exist?

Recently while doing some coding on Twitch I was posed a question, “Are there any people streaming Java?”

It’s an interesting question, as I’ve seen a lot of people streaming a lot of languages. The bulk of streamers seem to be streaming JavaScript and the related frameworks and tools like React, Node.js, Vue.js, and others. I’ve also seen a lot of people using Python, and a few doing things like Rust, C++, and a few others but barely anybody using Java.

Continue reading “Do Java Code Streams Exist?”

Glassfish, Java, JSF and Explorations of IntelliJ IDEA 13

I recently dove into working with some tooling at JetBrains. The first thing I needed to take a dive into was the latest EAP of IntelliJ IDEA 13. If you do any sort of Java development you’ve definitely heard of IntelliJ, and if you work in other realms like C#, Node.js & JavaScript, Python, or other languages you’ve likely heard of other JetBrains tools like ReSharper, WebStorm, PHPStorm, PyCharm, or a host of the other IDEs that they produce. A few ways to describe their product line: solid, quality, useful, and kick ass. But I digress; here’s a run down of the look into IntelliJ IDEA 13.

Getting Something to Serve These Pages

Glassfish 4.0 is the latest and greatest of the Glassfish Server. I downloaded it and got the server up and running to use as a base for local development. I had installed Glassfish 3.1.2 but rapidly realized that it just wasn’t a good idea. Thus, a piece of advice: stick to Glassfish 4.

When installing Glassfish 4, I ran into a recurring problem, one that obviously recurs far beyond my own use. The installer has the error trapped with an intelligent response.

So Glassfish doesn’t understand where the default installation location is for the JRE. Thus you’ll likely have to help it out and provide the path via the -j switch. Your path will likely look like this:

[sourcecode language="bash"]
"C:\Users\you\Downloads\glassfish-4.0-windows.exe" -j "C:\Program Files\Java\jre7"
[/sourcecode]

Once that was taken care of, Glassfish 4.0 installed just fine and I was on my way to some sample app building. More on that sample app building shortly; for now let’s talk about configuring IntelliJ IDEA 13 to work with Glassfish as an application server and the respective bits to get going.

IntelliJ IDEA 13 Configuration

Once you have IntelliJ IDEA 13 open, create a new project.

Welcome Screen for IntelliJ IDEA 13.

Once you click that, provided you’ve already configured IntelliJ IDEA, you’ll see a screen like the one shown below. If you haven’t configured it already, I’ll go through how to configure things after project creation. That way you can double back and configure applications you might have already started or pulled from git without Glassfish or other servers being set up.

Glassfish and other things, already set up, available as options during project creation.

Note the JavaEE Web Module selection, then set a project name, the project location (or leave the default; it fills in when you enter the project name), and the Project SDK. If one isn’t available you’ll need to set up the SDK via the New button. This brings up a dialog that lets you point to the SDK path so IntelliJ knows which version is available. You can do this for additional versions too, but for this example I’ve just installed 1.7 and run with it. Next set the Application server to Glassfish 4.0.0, click Finish, and you’re all set.

When setting up Glassfish there are a few other options I like to set, shown in the dialog below. This is my default Glassfish setup for local development. I’ve set the default browser to launch after a build to Chrome, and set the application server to GlassFish 4.0.0. It might seem silly to have this selection on the GlassFish Server dialog, but when there are multiple versions available you may need to set up a different version for different configurations. Next, I often run the server locally without a security username and password, so be sure to remove the admin username that is there in the default configuration.

Glassfish local setup.

One last thing I do to myself when setting up JSF is forget to add the appropriate option to GlassFish to run JSF workflow apps. To run these apps, open up the application server configuration for GlassFish itself and select the JSF framework library.

Selecting JSF.

Now with the application created there is a list of files & libraries.

To the left you can view a number of files & libraries listed.

Just to make sure everything is wired up right and the application is able to run, double click on the index.jsp file and add some HTML content. I’ve set up my file like this just to get started.

[sourcecode language="html"]
<%-- Created by IntelliJ IDEA. --%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
<title>Octo Bear!</title>
</head>
<body>
<h1>Bears!</h1>

<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla at iaculis eros. Cras eget aliquam mi, ut suscipit
arcu. Suspendisse accumsan auctor tellus in condimentum. Integer interdum a neque eget pharetra. Mauris nec dolor
ipsum. Quisque quis fermentum lorem. Sed tempor egestas dui, sed bibendum justo vulputate eget. Pellentesque tempus
auctor tellus id aliquam. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In erat leo, pharetra eget
convallis ut, interdum et eros. Morbi in tortor id tellus tristique aliquet at non urna. Pellentesque habitant morbi
tristique senectus et netus et malesuada fames ac turpis egestas.</p>
</body>
</html>
[/sourcecode]

Now click on the run button in the top right corner of IntelliJ. You might hit this error if you’ve just installed Glassfish and moved straight to creating the project.

Running the new project immediately after installing Glassfish.

Open up a browser and navigate to http://localhost:4848/ to bring up the administration page for Glassfish 4.0.

Glassfish 4.0 Administration site.

Click on the server (Admin Server) section that’s pointed to in the image above. A page will display with a button to stop the Glassfish server. Click Stop, which lets IntelliJ take over and run the server instead. You’ll want IntelliJ to do this, as it manages the restart, redeploy, and updating of classes and resources while coding and running the web application. This is much easier than attempting to attach to or otherwise manage the server during development.
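
If you’d rather do this from the terminal than the admin site, GlassFish ships with the asadmin tool; a quick sketch, assuming the default domain name of domain1 and that asadmin (in the GlassFish bin directory) is on your path:

[sourcecode language="bash"]
# stop the domain the installer started, so IntelliJ can manage the server itself
asadmin stop-domain domain1

# check what is (or isn't) running
asadmin list-domains
[/sourcecode]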

That’s just a few of the tips and tricks to getting started with the latest IntelliJ IDEA 13 EAP IDE for Java. In my next article I’m going to dive into a few of IntelliJ IDEA 13’s newest features for JSF Workflow Faces Applications.