A Hasura Quick Start with Remote Schema, Remote Joins

I've been building GraphQL APIs for a number of years now – alongside RESTful, gRPC, XML, and other API styles I won't even bring up right now – and so far GraphQL APIs have been great to work with. The libraries in different languages, from .NET's Hot Chocolate, Go's graphql-go, and Apollo's JavaScript-based tooling and servers, to Java's GraphQL for Spring, have worked great.

Sometimes you're in the fortunate situation where you're using PostgreSQL, SQL Server, or another database supported by a tool like Hasura. Being able to get a full GraphQL (with REST options too) API running in seconds is pretty impressive. From a development perspective it is a massive boost. As Hasura adds more database connectors, as it has with Snowflake and Amazon Athena, the server and tooling become even more powerful.

With that I wanted to show an N+1 demo, where N is day 1 with Hasura. The idea is: what do you do immediately after you get a sample service running with Hasura? How do you integrate it with other services – or more specifically, how do you integrate your Hasura API alongside APIs you've written yourself, such as an enterprise GraphQL for Spring based API running against Mongo or another data source? This repo is the basis for several demonstration repositories I am building that will show how you can set up – generally for local development – Hasura + X API with Y language stack.

This is the Hasura quick start repository here, with migrations and metadata for a local setup. The first demonstration repo for a peripheral GraphQL API will be a Spring based API in this repository. The following steps will get the quick start repository up and running.

  1. Clone this repo: git clone git@github.com:Adron/hasura-quick-start.git.
  2. From the root (where the docker-compose.yml file is located) execute docker compose up -d.
  3. Navigate into the hasura directory.
  4. Execute hasura metadata apply, then hasura migrate apply, and then hasura metadata apply again. Just do it, it's a strange workflow thing.
  5. Still in the hasura directory, execute hasura console. (The whole sequence is collected as a single shell run below.)
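Collected into one pass, those steps look roughly like this – a sketch assuming Docker and the Hasura CLI are installed, and that the clone lands in a hasura-quick-start directory:

# clone the quick start repo and bring up the containers
git clone git@github.com:Adron/hasura-quick-start.git
cd hasura-quick-start
docker compose up -d

# apply metadata, run migrations, then apply metadata again
cd hasura
hasura metadata apply
hasura migrate apply
hasura metadata apply

# open the Hasura console
hasura console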

These steps are demonstrated in this video, starting at 48 seconds.

What do you get once deployed?

The following are some of the core capabilities of Hasura and showcase what you can get up and running in a matter of seconds, even when you start from a completely empty database! First off you’ll find the database now has 3 tables along with their pertinent schema built out in PostgreSQL and available via Hasura, as shown here under the Data tab of the console.

I also created a schema diagram just to provide a visual of how these tables are designed.

For the remote schema, the Spring API, the following steps will get it cloned and running locally.

  1. Clone this repo: git clone git@github.com:Adron/hasura-spring-boot-graphql.git.
  2. Execute ./gradlew build to get the jar file built. It will then be located in the build/libs directory of the project.
  3. Next, build the Docker image locally with docker build -t adron/hasura-spring-boot-graphql .
  4. Now you can either start the container with docker compose up -d using the docker-compose.yml in the project, or run the image directly with docker run -p 8081:8080 adron/hasura-spring-boot-graphql. (These steps are collected as a single run below.)
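Put together, those steps look roughly like this – a sketch assuming Git, a JDK for the Gradle build, and Docker are installed locally:

# clone and build the Spring GraphQL API
git clone git@github.com:Adron/hasura-spring-boot-graphql.git
cd hasura-spring-boot-graphql
./gradlew build

# build the Docker image
docker build -t adron/hasura-spring-boot-graphql .

# either bring it up with the included docker-compose.yml
docker compose up -d
# or run the image directly, mapping host port 8081 to the container's 8080
docker run -p 8081:8080 adron/hasura-spring-boot-graphql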

For a walkthrough of getting the Spring API running, check out 2:28 onward in this video.

Now both of these instances are running locally and you can test each out respectively, but not specifically together. I'll probably write up another post on how to get services that spin up separately to run together for local development. However, with the way things are set up in the two repos, it's as if one team is the Hasura team building a GraphQL API and another is a Spring Java GraphQL API team, and they're working autonomously of each other based solely on the contracts of the APIs themselves.

Remote Schema

With that being the scenario, I’ve deployed the Spring API out remotely so that I could show how to put together a remote schema connection and then a remote join query, i.e. nested query in GraphQL speak, across these two APIs.

To add the remote schema, click on the remote schemas tab on the console. Add a name (1), then the URI (2), and optionally if needed add appropriate headers (3) or forward all headers from client requests.

Once that's added, navigate to the relationships tab of the new remote schema and click on add. Then, for this example, select remote database (1), add a name (4) (Customer in the example), and for the type choose object (3) (per the example).

Then scroll down on that console screen and choose sales_data (1) and default, public, and users (2) under the reference database, schema, and table. Next up choose the source field (3) and reference column (4).

Once added it will look like this in the console.
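As an aside not shown in the video: since the quick start repo keeps migrations and metadata under version control, the console changes above can be pulled back down into those files with the Hasura CLI. A minimal sketch, run from the quick start repo's root:

# export the current server metadata (including the remote schema
# and its relationships) into the hasura/metadata directory
cd hasura
hasura metadata export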

This creates a relationship that allows nested queries against these sources with GraphQL. If it were a single contiguous database, the schema would look like this. I've color coded the sales_data table red to signify that it is the table we know is in another database (or, specifically, provided via another hosted API). However, as stated, in a single database the relationships would look like this. The relationship, however, isn't in a database; it's stored in the Hasura metadata, between users and sales_data.

Now writing a query across this data would shape up like this. Because of the way the relationship was drawn via the remote schema, the path to get the nested object Customer (2) for the sales data is to start with the sales_data (1) entity. As shown.

query {
  sales_data {
    sales_number
    updated_at
    Customer {
      name
    }
  }
}

Now we want to add more details about the particular customer like their email and details. To do this we’ll utilize another nesting level within this query that delves into relationships that are in the PostgreSQL database itself.

query {
  sales_data {
    sales_number
    updated_at
    Customer {
      name
      emails {
        email
      }
      details {
        details
      }
    }
  }
}

With this, the nested email (3) and details (4) will be provided via foreign key relationships to the primary key table users in the underlying database, made available by Hasura's relationships in metadata.

Boom! That's it. Pretty easy setup if the databases and APIs have Hasura available to connect them in this way. Otherwise, this is a huge challenge to develop against if you're using solely a tech stack like Apollo, Spring Boot, or Hot Chocolate. Often something along the lines of federation, and more complexity, would come into play. But more on that later; I've got a piece coming on federation, stitching, remote schemas, and gateways – among various ways – to get multiple GraphQL, or GraphQL and RESTful, APIs together into a singular, or singularly managed, API endpoint.

Hope that was useful, if you’ve got comments, questions, or curiosities let me know in the comments here, or pop over to the video and leave a comment there.

References:

The full video of setup and how the remote schema & joins work in Hasura.

Gradle Build Tool

A few helpful links and details to where information is on the Gradle Build Tool.

Installation

Via SDKMAN: sdk install gradle x.y.z, where x.y.z is the version, like 8.0.2.

Via Brew with brew install gradle.

Manually check out the instructions here.
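For quick reference, those install options boil down to one of these commands:

# via SDKMAN, substituting the version you want
sdk install gradle 8.0.2
# or via Homebrew
brew install gradle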

Building a Java Library (or application, Gradle plugin, etc)

Use the init task from inside a directory for the pertinent project.

gradle init

You’ll be prompted for options.
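If you'd rather skip the interactive prompts, the init task can also be driven with flags; a rough sketch (the exact flags available vary a bit by Gradle version):

gradle init --type java-library --dsl groovy --project-name lib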

With the project initialized, this is what the folder structure looks like.

At this point add the Java code for the library, similar to this example, and execute a build like this.

./gradlew build

Build Collateral

View the test report via the HTML output file at lib/build/reports/tests/test/index.html.

The JAR file is available in lib/build/libs with the name lib.jar. Verify the archive is valid with jar tf lib/build/libs/lib.jar.

Add the version by setting version = '0.1.1' in the build.gradle file.

Run the jar task ./gradlew jar and the build will create a lib/build/libs/lib-0.1.1.jar with the expected version.

Add all this to the build by adding the following to the build.gradle file:

tasks.named('jar') {
    manifest {
        attributes('Implementation-Title': project.name,
                   'Implementation-Version': project.version)
    }
}

Verifying this all works, execute ./gradlew jar and then extract the MANIFEST.MF via jar xf lib/build/libs/lib-0.1.1.jar META-INF/MANIFEST.MF.
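Put together, that verification pass looks like this, assuming the 0.1.1 version set above:

# rebuild the jar, pull the manifest out of it, and inspect it
./gradlew jar
jar xf lib/build/libs/lib-0.1.1.jar META-INF/MANIFEST.MF
cat META-INF/MANIFEST.MF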

Adding API Docs

In the */Library.java file, replace the /* at the start of the comment with /** so that we get javadoc markup.

Run the ./gradlew javadoc task. The generated javadoc files are located at lib/build/docs/javadoc/index.html.

To add this as a build task, in build.gradle add a section with the following:

java {
    withJavadocJar()
}

Publish a Build Scan

Execute a build scan with ./gradlew build --scan.

Common Issues + Tips n’ Tricks

gradlew – Permission Denied issue

Let's say you execute Gradle with ./gradlew with whatever parameter and immediately get a response of "Permission Denied". The most common solution, especially for gradlew executables included in repositories, is to just give the executable permission to execute. This is done with a simple chmod +x gradlew and you should now be ready to execute!

Do Java Code Streams Exist?

Recently while doing some coding on Twitch I was posed a question, “Are there any people streaming Java?”

It’s an interesting question, as I’ve seen a lot of people streaming a lot of languages. The bulk of streamers seem to be streaming JavaScript and the related frameworks and tools like React, Node.js, Vue.js, and others. I’ve also seen a lot of people using Python, and a few doing things like Rust, C++, and a few others but barely anybody using Java.


Starting an Ubuntu Dev Tools List

I've recently set up a completely clean virtual machine for doing web, system, and related development on Ubuntu. Here's the shortlist of what I've installed after a default installation. The ongoing list of tools and related items I have installed on my Linux dev box is kept here as a living doc, so I'll change it as I add new tools, apps, and related changes. So lemme know what I ought to add to that list and I'll add it to my docs page here. Here's what I have so far…

Other To-dos

  • Always run sudo apt-get update once the system is installed. It never hurts to have the latest updates.
  • I always install Chrome as my first app. Sometimes the Ubuntu Software Center flakes out on this, but just try again and it’ll work. I use the 64-bit Chrome btw, as I’ve noticed that the 32-bit often flakes out when attempting installation on my virtual machines. Your mileage may vary.

What this enables…

At this point I can launch into about any language: Java, JavaScript, and a few others, with a minimal amount of headache. Since it's a Linux instance, it gives me a full range of Linuxy things at my disposal.


Default Java Installation

  1. Run a ‘sudo apt-get update’.
  2. To install the default Java JRE and the JDK run the following commands.

    sudo apt-get install default-jre
    sudo apt-get install default-jdk


Oracle Java v8 Installation

  1. sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer


WebStorm Installation

  1. I download the application tarball from JetBrains and then run

    tar xfz WebStorm-*.tar.gz

  2. Next I always move the unzipped content to the directory in which I’d like to have the application stored. It’s good practice to not keep things in the download directory, just sayin’. Generally I put these in my usr/bin directory.

    mv /downloads/WebStorm-* your/desired/spot

  3. Now at your terminal, navigate to the path where the application is stored and run the webstorm.sh executable.

    ./bin/webstorm.sh

  4. To add WebStorm to the Quicklaunch, just right click on the icon and select to Lock to Launcher.


IDEA IntelliJ Installation

  1. Follow all the steps listed under WebStorm, it’s the exact same process.


Sublime 3

  1. Go download the latest v3.
  2. Run the package and it should launch the actual Ubuntu installer, set up Sublime for bash use, and get it installed.

(NOTE UPDATED 1/18/2016 > The installer doesn’t seem to get it installed, so I went with this link http://olivierlacan.com/posts/launch-sublime-text-3-from-the-command-line/ which has a good solution.)

In-memory Orchestrate Local Development Database

I was talking with Tory Adams @BEZEI2K about working with Orchestrate‘s Services. We’re totally sold on what they offer and are looking forward to a lot of the technology that is in the works. The day to day building against Orchestrate is super easy, and setting up collections for dev or test or whatever are so easy nothing has stood in our way. Except one thing…

Every once in a while we have to work disconnected. For whatever the reason might be: Comcast cable goes out, we decide to jump on a train, or one of us ends up on one of those Q400 puddle jumpers that doesn't have wifi! But regardless of being disconnected from wifi, cable, or internet connectivity, we still want to be able to code and test!

In Memory Orchestrate Wrapper

Enter the idea of creating an in memory Orchestrate database wrapper. Using something like convict.js one could easily redirect all the connections as necessary when developing locally. That way development continues right along and when the application is pushed live, it’s redirected to the appropriate Orchestrate connections and keys!

This in memory “fake” or “mock” would need to have the key value, events, and graph store setup just like Orchestrate. With the possibility of having this in memory one could also easily write tests against a real fake and be able to test connected or disconnected without mocking. Not to say that’s a good or bad idea, but just one more tool in the tool chest doesn’t hurt!

If something like this doesn't pop up in the next week or three, I might just have to kick off this project myself! If anybody is interested please reach out to me and let's discuss! I'm open to writing it in JavaScript, C#, Java, or whatever poison pill you'd prefer. (I'm polyglot, so I'm not going to limit my options!!)

Other Ideas, Development Shop Swap

Another idea that I’ve been pondering is setting up a development shop swap. I’ll leave the reader to determine what that means!  😉  Feel free to throw down ideas that this might bring up and I’ll incorporate that into the soon to be implementation. I’ll have more information about that idea right here once the project gets rolling. In the meantime, happy coding!