GraphQL Pagination with Java Spring Boot’s “GraphQL for Spring”

There are several different approaches for implementing pagination in GraphQL, and specifically with Java Spring Boot. Here are the commonly used patterns for paging in APIs:

  1. Offset Pagination:
    • This pattern uses an offset and limit approach, where you specify the starting offset (number of records to skip) and the maximum number of records to return.
    • Example parameters: offset=0 and limit=10
  2. Cursor-based Pagination:
    • This pattern uses a cursor (typically an encoded value representing a record) to determine the position in the dataset.
    • The cursor can be an ID, a timestamp, or any other value that uniquely identifies a record.
    • Example parameters: cursor=eyJpZCI6MX0= and limit=10 (see the cursor encoding sketch after this list)
  3. Page-based Pagination:
    • This pattern divides the dataset into pages, each containing a fixed number of records.
    • It uses page numbers to navigate through the dataset, typically with links or metadata indicating the previous, next, and current pages.
    • Example parameters: page=1 and size=10
  4. Time-based Pagination:
    • This pattern uses time-based boundaries, such as start and end timestamps, to fetch records within a specific time range.
    • It is commonly used in scenarios where the dataset is time-ordered, such as logs or social media posts.
    • Example parameters: start_time=1621234567 and end_time=1622345678
  5. Keyset Pagination:
    • This pattern relies on ordering the dataset by one or more columns and using the column values as the paging keys.
    • Each page request includes the last record’s key from the previous page, and the API returns records greater than that key.
    • It provides efficient pagination for large datasets with indexed columns.
    • Example parameters: last_key=12345 and limit=10
  6. Combination of Patterns:
    • You can also combine different pagination patterns based on the requirements of your API and the nature of the data being paginated.
    • For example, you might use cursor-based pagination for real-time updates and keyset pagination for efficient retrieval of large datasets.
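
To make the cursor idea in pattern 2 concrete, here’s a minimal sketch (a hypothetical CursorExample class, just plain Base64 over a tiny JSON payload for illustration) of how a cursor like the one above could be encoded and decoded:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CursorExample {

    // Encodes a record's id into an opaque, Base64-encoded cursor token.
    static String encodeCursor(long id) {
        String raw = "{\"id\":" + id + "}";
        return Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Decodes a cursor token back into the position it represents.
    static String decodeCursor(String cursor) {
        return new String(Base64.getDecoder().decode(cursor), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(encodeCursor(1));              // eyJpZCI6MX0=
        System.out.println(decodeCursor("eyJpZCI6MX0=")); // {"id":1}
    }
}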

The type of pattern to use depends on numerous factors like the size of the dataset, ordering requirements, and related performance characteristics. This post doesn’t cover the logic or details needed to determine the type of paging to use, just the options that are available. With that, time to get into paging! 👊🏻

Here’s an example of how you can write – generally – a Java Spring Boot GraphQL API with paging for a “Customer” object:

  1. Set up the project:
    • Create a new Spring Boot project in your preferred IDE.
    • Add the necessary dependencies to your pom.xml file (a sample snippet follows this list):
      • Spring Boot Starter Web
      • Spring Boot Starter Data JPA
      • GraphQL Java Tools
      • GraphQL Java Spring Boot Starter
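
For reference, a pom.xml dependency block along these lines would cover those four (the versions here are illustrative and should be checked against whatever is current; the coordinates match the com.coxautodev imports used in the resolver code below):

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>com.coxautodev</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>5.2.4</version>
  </dependency>
  <dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>5.0.2</version>
  </dependency>
</dependencies>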
  2. Define the Customer entity:
    • Create a new class named Customer with the following fields:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long customerId;
    private String firstName;
    private String lastName;
    private String customerDetails;
    private Integer customerAccountId;
    private Integer customerSalesId;
    private Long engId;
    private Long forgoId;

    // Constructors, getters, and setters
}
  3. Set up the Customer repository:
  • Create a new interface named CustomerRepository that extends JpaRepository<Customer, Long>.
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CustomerRepository extends JpaRepository<Customer, Long> {
}
  4. Create the GraphQL schema:
  • Create a new file named schema.graphqls under the resources directory.
  • Define the GraphQL schema with the required types, queries, and mutations:
type Customer {
  customerId: ID!
  firstName: String!
  lastName: String!
  customerDetails: String!
  customerAccountId: Int!
  customerSalesId: Int!
  engId: ID!
  forgoId: ID!
}

type Query {
  getCustomers(page: Int!): [Customer!]!
}

schema {
  query: Query
}
  5. Implement the GraphQL resolver:
  • Create a new class named GraphQLResolver and define the resolver methods.
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public List<Customer> getCustomers(int page) {
        int pageSize = 42;
        // PageRequest.of expects a zero-based page index, not a record offset.
        return customerRepository.findAll(PageRequest.of(page - 1, pageSize)).getContent();
    }
}
  6. Run the application:
    • Run the Spring Boot application.
    • Navigate to http://localhost:8080/graphql to access the GraphQL Playground.
  7. Testing the API:
    • Use the following query in the GraphQL Playground to fetch customers with pagination:
query {
  getCustomers(page: 1) {
    customerId
    firstName
    lastName
    customerDetails
    customerAccountId
    customerSalesId
    engId
    forgoId
  }
}

Replace page: 1 with the desired page number to retrieve different sets of customers.


Page-based + Caching

Previous Page, Next Page, and Current Page Model

  1. Modify the GraphQL schema:
    • Update the getCustomers query in the schema.graphqls file to include the new pagination fields:
type CustomerConnection {
  pageInfo: PageInfo!
  edges: [CustomerEdge!]!
}

type CustomerEdge {
  cursor: ID!
  node: Customer!
}

type PageInfo {
  startCursor: ID
  endCursor: ID
  hasPreviousPage: Boolean!
  hasNextPage: Boolean!
}

type Query {
  getCustomers(page: Int!): CustomerConnection!
}

schema {
  query: Query
}
  2. Update the GraphQL resolver:
    • Modify the GraphQLResolver class to include the new pagination logic and return the CustomerConnection type:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.stream.Collectors;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public CustomerConnection getCustomers(int page) {
        int pageSize = 42;

        // PageRequest.of expects a zero-based page index, not a record offset.
        List<Customer> customers = customerRepository.findAll(PageRequest.of(page - 1, pageSize)).getContent();
        List<CustomerEdge> customerEdges = customers.stream()
                .map(customer -> new CustomerEdge(String.valueOf(customer.getCustomerId()), customer))
                .collect(Collectors.toList());

        boolean hasPreviousPage = page > 1;
        boolean hasNextPage = customers.size() == pageSize;

        String startCursor = customerEdges.isEmpty() ? null : customerEdges.get(0).getCursor();
        String endCursor = customerEdges.isEmpty() ? null : customerEdges.get(customerEdges.size() - 1).getCursor();

        PageInfo pageInfo = new PageInfo(startCursor, endCursor, hasPreviousPage, hasNextPage);
        return new CustomerConnection(pageInfo, customerEdges);
    }
}
  3. Define additional classes:
    • Create the following additional classes to support the new pagination model:
public class CustomerConnection {
    private final PageInfo pageInfo;
    private final List<CustomerEdge> edges;

    public CustomerConnection(PageInfo pageInfo, List<CustomerEdge> edges) {
        this.pageInfo = pageInfo;
        this.edges = edges;
    }

    public PageInfo getPageInfo() {
        return pageInfo;
    }

    public List<CustomerEdge> getEdges() {
        return edges;
    }
}

public class CustomerEdge {
    private final String cursor;
    private final Customer node;

    public CustomerEdge(String cursor, Customer node) {
        this.cursor = cursor;
        this.node = node;
    }

    public String getCursor() {
        return cursor;
    }

    public Customer getNode() {
        return node;
    }
}

public class PageInfo {
    private final String startCursor;
    private final String endCursor;
    private final boolean hasPreviousPage;
    private final boolean hasNextPage;

    public PageInfo(String startCursor, String endCursor, boolean hasPreviousPage, boolean hasNextPage) {
        this.startCursor = startCursor;
        this.endCursor = endCursor;
        this.hasPreviousPage = hasPreviousPage;
        this.hasNextPage = hasNextPage;
    }

    public String getStartCursor() {
        return startCursor;
    }

    public String getEndCursor() {
        return endCursor;
    }

    public boolean isHasPreviousPage() {
        return hasPreviousPage;
    }

    public boolean isHasNextPage() {
        return hasNextPage;
    }
}
  • Inspect the pageInfo field to access the pagination information.
  • The edges field contains the list of customers with their respective cursors.
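
The heading above mentions caching, but the examples don’t wire any in. A minimal sketch of how one might cache pages with Spring’s cache abstraction is below; the class names CachingConfig and CachedCustomerPages and the cache name customersByPage are made up for illustration, and a cache provider (Caffeine, Redis, or Boot’s simple in-memory default) is assumed to be available.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;

@Configuration
@EnableCaching
class CachingConfig {
    // Enables Spring's cache abstraction; without another provider configured,
    // Spring Boot falls back to a simple in-memory cache.
}

@Component
class CachedCustomerPages {
    private final CustomerRepository customerRepository;

    CachedCustomerPages(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // Caches each page of customers by page number, so repeat requests for the
    // same page skip the database until the cache entry is evicted.
    @Cacheable(value = "customersByPage", key = "#page")
    public List<Customer> findPage(int page, int pageSize) {
        return customerRepository.findAll(PageRequest.of(page - 1, pageSize)).getContent();
    }
}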

Offset Pagination

  1. Modify the GraphQL resolver:
  • Update the getCustomers method in the GraphQLResolver class to accept an additional parameter for the page size and offset:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public List<Customer> getCustomers(int pageSize, int offset) {
        // PageRequest.of expects a zero-based page index, so this simple approach
        // assumes the offset is a multiple of the page size.
        return customerRepository.findAll(PageRequest.of(offset / pageSize, pageSize)).getContent();
    }
}
  2. Update the GraphQL schema:
  • Modify the getCustomers query in the schema.graphqls file to include the additional parameters for page size and offset:
type Query {
  getCustomers(pageSize: Int!, offset: Int!): [Customer!]!
}

schema {
  query: Query
}
  3. Run the application and test the API:
  • Run the Spring Boot application.
  • Use the following query in the GraphQL Playground to fetch customers with offset pagination:
query {
  getCustomers(pageSize: 42, offset: 0) {
    customerId
    firstName
    lastName
    customerDetails
    customerAccountId
    customerSalesId
    engId
    forgoId
  }
}
  • Adjust the values of pageSize and offset as needed to navigate through the dataset.
  • For example, to retrieve the next page, set offset to 42 (assuming pageSize is 42).
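
Note that PageRequest is page-indexed rather than offset-indexed, so the resolver above only lines up cleanly when offset is a multiple of pageSize. If arbitrary offsets are needed, one option is a native query on the repository; this is a sketch that assumes a database supporting LIMIT/OFFSET and the default customer table and customer_id column names, and the OffsetCustomerRepository name is just for illustration.

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.List;

public interface OffsetCustomerRepository extends JpaRepository<Customer, Long> {

    // Native SQL gives direct control over OFFSET, independent of page-size multiples.
    @Query(value = "SELECT * FROM customer ORDER BY customer_id LIMIT :limit OFFSET :offset",
           nativeQuery = true)
    List<Customer> findPage(@Param("limit") int limit, @Param("offset") int offset);
}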

Page-based Pagination

  1. Modify the GraphQL resolver:
  • Update the getCustomers method in the GraphQLResolver class to accept an additional parameter for the page number and page size:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.stream.Collectors;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public CustomerConnection getCustomers(int pageNumber, int pageSize) {
        // PageRequest is zero-indexed, so subtract one from the requested page number.
        Page<Customer> page = customerRepository.findAll(PageRequest.of(pageNumber - 1, pageSize));

        // Map the page into the CustomerConnection shape the schema expects.
        List<CustomerEdge> edges = page.getContent().stream()
                .map(customer -> new CustomerEdge(String.valueOf(customer.getCustomerId()), customer))
                .collect(Collectors.toList());

        String startCursor = edges.isEmpty() ? null : edges.get(0).getCursor();
        String endCursor = edges.isEmpty() ? null : edges.get(edges.size() - 1).getCursor();

        PageInfo pageInfo = new PageInfo(startCursor, endCursor, page.hasPrevious(), page.hasNext());
        return new CustomerConnection(pageInfo, edges);
    }
}
  2. Update the GraphQL schema:
  • Modify the getCustomers query in the schema.graphqls file to include the additional parameters for page number and page size:
type CustomerConnection {
  pageInfo: PageInfo!
  edges: [CustomerEdge!]!
}

type CustomerEdge {
  cursor: ID!
  node: Customer!
}

type PageInfo {
  startCursor: ID
  endCursor: ID
  hasPreviousPage: Boolean!
  hasNextPage: Boolean!
}

type Query {
  getCustomers(pageNumber: Int!, pageSize: Int!): CustomerConnection!
}

schema {
  query: Query
}
  3. Update the CustomerConnection and PageInfo classes:
  • Modify the CustomerConnection and PageInfo classes to match the updated schema:
import java.util.List;

public class CustomerConnection {
    private final PageInfo pageInfo;
    private final List<CustomerEdge> edges;

    public CustomerConnection(PageInfo pageInfo, List<CustomerEdge> edges) {
        this.pageInfo = pageInfo;
        this.edges = edges;
    }

    public PageInfo getPageInfo() {
        return pageInfo;
    }

    public List<CustomerEdge> getEdges() {
        return edges;
    }
}

public class PageInfo {
    private final String startCursor;
    private final String endCursor;
    private final boolean hasPreviousPage;
    private final boolean hasNextPage;

    public PageInfo(String startCursor, String endCursor, boolean hasPreviousPage, boolean hasNextPage) {
        this.startCursor = startCursor;
        this.endCursor = endCursor;
        this.hasPreviousPage = hasPreviousPage;
        this.hasNextPage = hasNextPage;
    }

    public String getStartCursor() {
        return startCursor;
    }

    public String getEndCursor() {
        return endCursor;
    }

    public boolean isHasPreviousPage() {
        return hasPreviousPage;
    }

    public boolean isHasNextPage() {
        return hasNextPage;
    }
}
  4. Run the application and test the API:
  • Run the Spring Boot application.
  • Use the following query in the GraphQL Playground to fetch customers with page-based pagination:
query {
  getCustomers(pageNumber: 1, pageSize: 42) {
    pageInfo {
      startCursor
      endCursor
      hasPreviousPage
      hasNextPage
    }
    edges {
      cursor
      node {
        customerId
        firstName
        lastName
        customerDetails
        customerAccountId
        customerSalesId
        engId
        forgoId
      }
    }
  }
}
  • Adjust the values of pageNumber and pageSize as needed to navigate through the dataset.
  • The response includes the pageInfo object, which provides information about the current page and pagination state.
  • The edges field contains the list of customers with their respective cursors.

Time-based Pagination

  1. Modify the GraphQL resolver:
  • Update the getCustomers method in the GraphQLResolver class to accept additional parameters for the start time, end time, and page size:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.time.LocalDateTime;
import java.util.List;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public List<Customer> getCustomers(String startTime, String endTime, int pageSize) {
        // The schema passes the boundaries as ISO 8601 strings, so parse them here.
        return customerRepository.findByTimeRange(
                LocalDateTime.parse(startTime), LocalDateTime.parse(endTime), PageRequest.of(0, pageSize));
    }
}
  2. Update the GraphQL schema:
  • Modify the getCustomers query in the schema.graphqls file to include the additional parameters for the start time, end time, and page size:
type Query {
  getCustomers(startTime: String!, endTime: String!, pageSize: Int!): [Customer!]!
}

schema {
  query: Query
}
  3. Update the CustomerRepository:
  • Update the CustomerRepository interface to include a method that queries customers within a specified time range:
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.time.LocalDateTime;
import java.util.List;

public interface CustomerRepository extends JpaRepository<Customer, Long> {

    // Assumes the Customer entity carries a timestamp column to filter on.
    @Query("SELECT c FROM Customer c WHERE c.timestamp >= :startTime AND c.timestamp <= :endTime")
    List<Customer> findByTimeRange(@Param("startTime") LocalDateTime startTime,
                                   @Param("endTime") LocalDateTime endTime,
                                   Pageable pageable);
}
  4. Run the application and test the API:
  • Run the Spring Boot application.
  • Use the following query in the GraphQL Playground to fetch customers with time-based pagination:
query {
  getCustomers(startTime: "2023-05-01T00:00:00", endTime: "2023-05-17T23:59:59", pageSize: 42) {
    customerId
    firstName
    lastName
    customerDetails
    customerAccountId
    customerSalesId
    engId
    forgoId
  }
}
  • Adjust the values of startTime, endTime, and pageSize as needed to fetch customers within the desired time range.
  • Make sure to provide valid time range values in ISO 8601 format.
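
One catch: the Customer entity defined earlier has no timestamp field, so the JPQL above assumes one exists. A sketch of the assumed addition follows; the field name timestamp is illustrative and the rest of the entity stays as shown earlier.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import java.time.LocalDateTime;

@Entity
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long customerId;

    // The column findByTimeRange filters on; it must be populated when records are written.
    @Column(nullable = false)
    private LocalDateTime timestamp;

    // ...the remaining fields, constructors, getters, and setters from the original entity
}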

Keyset Pagination

  1. Modify the GraphQL resolver:
  • Update the getCustomers method in the GraphQLResolver class to accept additional parameters for the last key and page size:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class GraphQLResolver implements GraphQLQueryResolver {
    private final CustomerRepository customerRepository;

    @Autowired
    public GraphQLResolver(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public List<Customer> getCustomers(String lastKey, int pageSize) {
        // Treat a missing or empty lastKey as "start from the beginning".
        long key = (lastKey == null || lastKey.isEmpty()) ? 0L : Long.parseLong(lastKey);
        return customerRepository.findNextCustomers(key, PageRequest.of(0, pageSize));
    }
}
  2. Update the GraphQL schema:
  • Modify the getCustomers query in the schema.graphqls file to include the additional parameters for the last key and page size:
type Query {
  getCustomers(lastKey: String, pageSize: Int!): [Customer!]!
}

schema {
  query: Query
}
  3. Update the CustomerRepository:
  • Update the CustomerRepository interface to include a method that queries the next set of customers based on the last key:
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.List;

public interface CustomerRepository extends JpaRepository<Customer, Long> {

    // Uses the primary key (customerId) as the paging key; the Pageable caps the page size
    // since JPQL has no LIMIT clause of its own.
    @Query("SELECT c FROM Customer c WHERE c.customerId > :lastKey ORDER BY c.customerId ASC")
    List<Customer> findNextCustomers(@Param("lastKey") Long lastKey, Pageable pageable);
}
  4. Run the application and test the API:
  • Run the Spring Boot application.
  • Use the following query in the GraphQL Playground to fetch customers with keyset pagination:
query {
  getCustomers(lastKey: "", pageSize: 42) {
    customerId
    firstName
    lastName
    customerDetails
    customerAccountId
    customerSalesId
    engId
    forgoId
  }
}
  • Adjust the value of pageSize as needed to control the number of records per page.
  • The lastKey parameter is used to retrieve the next set of customers based on the provided key. Initially, use an empty string as the lastKey.
  • Subsequent requests can use the last key value received from the previous response to fetch the next set of customers.
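
For example, if the last record on the previous page came back with customerId 12345 (an illustrative value), the next page would be requested like this:

query {
  getCustomers(lastKey: "12345", pageSize: 42) {
    customerId
    firstName
    lastName
  }
}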

Alright, with all those covered – which I mostly just put together as quickly as possible as examples – I had little time to research any of the latest or greatest ways to put these pagination patterns together specifically with Java Spring Boot. If you’ve got pointers, suggestions, or otherwise, I’d love a critique of my general code slinging in this post. Cheers!

Other GraphQL Standards, Practices, Patterns, & Related Posts

DataLoader for GraphQL Implementations

A popular library used in GraphQL implementations is called DataLoader, and in many ways the name is descriptive of its purpose. As described in the repo for the JavaScript/Node.js implementation:

“DataLoader is a generic utility to be used as part of your application’s data fetching layer to provide a simplified and consistent API over various remote data sources such as databases or web services via batching and caching.”

DataLoader solves the N+1 problem that otherwise requires a resolver to make multiple individual requests to a database (or data source, i.e. another API), resulting in inefficient and slow data retrieval.

A DataLoader serves as a batching and caching layer that combines multiple requests into a single request. It groups together identical requests and executes them more efficiently, minimizing the number of database or API round trips.

DataLoader Operation:

  1. Create a new instance of DataLoader, specifying a batch loading function. This function would define how to load the data for a given set of keys.
  2. The resolver iterates through the collection and, instead of fetching the related data directly, adds the keys for the data to be fetched to the DataLoader instance.
  3. The DataLoader collects the keys, deduplicates them, and executes the batch loading function.
  4. Once the batch is executed, DataLoader returns the results, associating them with their respective keys.
  5. The resolver can then access the response data and resolve the field or relationships as needed.

DataLoader also caches the results of previous requests, so if the same key is requested again, DataLoader retrieves it from the cache instead of making another request. This caching further improves performance and reduces redundant fetching.

DataLoader Implementation Examples

JavaScript & Node.js

The following is a basic implementation of DataLoader for GraphQL using Apollo Server.

const { ApolloServer, gql } = require("apollo-server");
const DataLoader = require("dataloader");

// Simulated data source
const db = {
  users: [
    { id: 1, name: "John" },
    { id: 2, name: "Jane" },
  ],
  posts: [
    { id: 1, userId: 1, title: "Post 1" },
    { id: 2, userId: 2, title: "Post 2" },
    { id: 3, userId: 1, title: "Post 3" },
  ],
};

// Simulated asynchronous data loader function
const batchPostsByUserIds = async (userIds) => {
  console.log("Fetching posts for user ids:", userIds);
  const posts = db.posts.filter((post) => userIds.includes(post.userId));
  return userIds.map((userId) => posts.filter((post) => post.userId === userId));
};

// Create a DataLoader instance
const postsLoader = new DataLoader(batchPostsByUserIds);

const resolvers = {
  Query: {
    getUserById: (_, { id }) => {
      // GraphQL ID arguments arrive as strings, so coerce before comparing.
      return db.users.find((user) => user.id === Number(id));
    },
  },
  User: {
    posts: (user) => {
      // Use DataLoader to load posts for the user
      return postsLoader.load(user.id);
    },
  },
};

// Define the GraphQL schema
const typeDefs = gql`
  type User {
    id: ID!
    name: String!
    posts: [Post]
  }

  type Post {
    id: ID!
    title: String!
  }

  type Query {
    getUserById(id: ID!): User
  }
`;

// Create Apollo Server instance
const server = new ApolloServer({ typeDefs, resolvers });

// Start the server
server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});

In this example I created a DataLoader instance, postsLoader, using the DataLoader class from the dataloader package. I defined a batch loading function batchPostsByUserIds that takes an array of user IDs and retrieves the corresponding posts for each user from the db.posts array. The function returns an array of arrays, where each sub-array contains the posts for a specific user.

In the User resolver I use the load method of DataLoader to load the posts for a user. The load method handles batching and caching behind the scenes, ensuring that redundant requests are minimized and results are cached for subsequent requests.

When the GraphQL server receives a query for the posts field of a User, the DataLoader automatically batches the requests for multiple users and executes the batch loading function to retrieve the posts.

This example demonstrates a very basic implementation of DataLoader in a GraphQL server. In a real-world scenario there would of course be a number of additional capabilities and implementation details that you’d need to work on for your particular situation.

Spring Boot Java Implementation

Just furthering the kinds of examples, the following is a Spring Boot example.

First add the dependencies.

<dependencies>
  <!-- GraphQL for Spring Boot -->
  <dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>5.0.2</version>
  </dependency>
  
  <!-- DataLoader -->
  <dependency>
    <groupId>org.dataloader</groupId>
    <artifactId>dataloader</artifactId>
    <version>3.4.0</version>
  </dependency>
</dependencies>

Next create the components and configure DataLoader.

import com.coxautodev.graphql.tools.GraphQLResolver;
import com.graphql.spring.boot.context.GraphQLContext;
import graphql.Scalars;
import graphql.schema.GraphQLList;
import graphql.schema.GraphQLObjectType;
import org.dataloader.BatchLoader;
import org.dataloader.DataLoader;
import org.dataloader.DataLoaderRegistry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

@SpringBootApplication
public class DataLoaderExampleApplication {

  // Simulated data source
  private static class Db {
    List<User> users = List.of(
        new User(1, "John"),
        new User(2, "Jane")
    );

    List<Post> posts = List.of(
        new Post(1, 1, "Post 1"),
        new Post(2, 2, "Post 2"),
        new Post(3, 1, "Post 3")
    );
  }

  // User class
  private static class User {
    private final int id;
    private final String name;

    User(int id, String name) {
      this.id = id;
      this.name = name;
    }

    int getId() {
      return id;
    }

    String getName() {
      return name;
    }
  }

  // Post class
  private static class Post {
    private final int id;
    private final int userId;
    private final String title;

    Post(int id, int userId, String title) {
      this.id = id;
      this.userId = userId;
      this.title = title;
    }

    int getId() {
      return id;
    }

    int getUserId() {
      return userId;
    }

    String getTitle() {
      return title;
    }
  }

  // DataLoader batch loading function
  private static class BatchPostsByUserIds implements BatchLoader<Integer, List<Post>> {
    private final Db db;

    BatchPostsByUserIds(Db db) {
      this.db = db;
    }

    @Override
    public CompletionStage<List<List<Post>>> load(List<Integer> userIds) {
      System.out.println("Fetching posts for user ids: " + userIds);
      List<List<Post>> result = userIds.stream()
          .map(userId -> db.posts.stream()
              .filter(post -> post.getUserId() == userId)
              .collect(Collectors.toList()))
          .collect(Collectors.toList());
      return CompletableFuture.completedFuture(result);
    }
  }

  // GraphQL resolver
  private static class UserResolver implements GraphQLResolver<User> {
    private final DataLoader<Integer, List<Post>> postsDataLoader;

    UserResolver(DataLoader<Integer, List<Post>> postsDataLoader) {
      this.postsDataLoader = postsDataLoader;
    }

    List<Post> getPosts(User user) {
      return postsDataLoader.load(user.getId()).join();
    }
  }

  // GraphQL configuration
  @Bean
  public GraphQLSchemaProvider graphQLSchemaProvider() {
    return (graphQLSchemaBuilder, environment) -> {
      // Define the GraphQL schema. The Post type is declared first because the
      // User type's posts field references it.
      GraphQLObjectType postObjectType = GraphQLObjectType.newObject()
          .name("Post")
          .field(field -> field.name("id").type(Scalars.GraphQLInt))
          .field(field -> field.name("title").type(Scalars.GraphQLString))
          .build();

      GraphQLObjectType userObjectType = GraphQLObjectType.newObject()
          .name("User")
          .field(field -> field.name("id").type(Scalars.GraphQLInt))
          .field(field -> field.name("name").type(Scalars.GraphQLString))
          .field(field -> field.name("posts").type(new GraphQLList(postObjectType)))
          .build();

      GraphQLObjectType queryObjectType = GraphQLObjectType.newObject()
          .name("Query")
          .field(field -> field.name("getUserById")
              .type(userObjectType)
              .argument(arg -> arg.name("id").type(Scalars.GraphQLInt))
              .dataFetcher(environment -> {
                // Retrieve the requested user ID
                int userId = environment.getArgument("id");
                // Fetch the user by ID from the data source
                Db db = new Db();
                return db.users.stream()
                    .filter(user -> user.getId() == userId)
                    .findFirst()
                    .orElse(null);
              }))
          .build();

      return graphQLSchemaBuilder.query(queryObjectType).build();
    };
  }

  // DataLoader registry bean
  @Bean
  public DataLoaderRegistry dataLoaderRegistry() {
    DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
    Db db = new Db();
    dataLoaderRegistry.register("postsDataLoader", DataLoader.newDataLoader(new BatchPostsByUserIds(db)));
    return dataLoaderRegistry;
  }

  // GraphQL context builder
  @Bean
  public GraphQLContext.Builder graphQLContextBuilder(DataLoaderRegistry dataLoaderRegistry) {
    return new GraphQLContext.Builder().dataLoaderRegistry(dataLoaderRegistry);
  }

  public static void main(String[] args) {
    SpringApplication.run(DataLoaderExampleApplication.class, args);
  }
}

In this example I define the Db class as a simulated data source with users and posts lists. I create a BatchPostsByUserIds class that implements DataLoader’s BatchLoader interface for batch loading posts by user ID.

The UserResolver class is a GraphQL resolver that uses the postsDataLoader to load posts for a specific user.

For the configuration, I define the schema using GraphQLSchemaProvider, creating GraphQLObjectType definitions for User and Post and a Query object type with a data fetcher for the getUserById field.

The dataLoaderRegistry bean registers the postsDataLoader with the DataLoader registry.

This implementation will efficiently batch and cache requests for loading posts based on user IDs.
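
As a related sketch (not part of the example above): with plain graphql-java, a field-level DataFetcher can also pull the registered loader off the current execution’s DataLoaderRegistry through the DataFetchingEnvironment, instead of holding a DataLoader reference directly. This assumes a graphql-java version where DataFetchingEnvironment.getDataLoader is available, that User and Post are top-level classes rather than the nested ones above, and the UserPostsDataFetcher name is made up for illustration.

import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import org.dataloader.DataLoader;

import java.util.List;
import java.util.concurrent.CompletableFuture;

public class UserPostsDataFetcher implements DataFetcher<CompletableFuture<List<Post>>> {

    @Override
    public CompletableFuture<List<Post>> get(DataFetchingEnvironment environment) {
        // Look up the loader registered under "postsDataLoader" for this execution,
        // so batching stays scoped to the current request.
        DataLoader<Integer, List<Post>> postsLoader = environment.getDataLoader("postsDataLoader");
        User user = environment.getSource();
        return postsLoader.load(user.getId());
    }
}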

References

Other GraphQL Standards, Practices, Patterns, & Related Posts

A Hasura Quick Start with Remote Schema, Remote Joins

I’ve been building GraphQL APIs for a number of years now – alongside RESTful, gRPC, XML, and other API styles I won’t even bring up right now – and so far GraphQL APIs have been great to work with. The libraries in different languages, from .NET’s Hot Chocolate, Go’s graphql-go, and Apollo’s JavaScript-based tooling and servers, to Java’s GraphQL for Spring, have worked great.

Sometimes you’re in the fortunate situation where you’re using PostgreSQL, SQL Server, or another database supported by a tool like Hasura. Being able to get a full GraphQL (with REST options too) API running in seconds is pretty impressive. From a development perspective it is a massive boost. As Hasura adds more database connectors, as they have with Snowflake and Amazon Athena, the server and tooling become even more powerful.

With that I wanted to show an N+1 demo where N is day 1 with Hasura. The idea is: what do you do immediately after you get a sample service running with Hasura? How do you integrate it with other services, or more specifically, how do you integrate your Hasura API alongside APIs you’ve written yourself, such as an enterprise GraphQL for Spring based API running against Mongo or another data source? This repo is the basis for several demonstration repositories I am building that will show how you can set up – generally for local development – Hasura + X API with Y Language stack.

This is the Hasura quick start repository here, with migrations and metadata for a local setup. The first demonstration repo for a peripheral GraphQL API will be a Spring based API in this repository. The following steps will get the quick start repository up and running.

  1. Clone this repo git clone git@github.com:Adron/hasura-quick-start.git.
  2. From the root (where the docker-compose.yml file is located) execute docker compose up -d.
  3. Navigate into the hasura directory.
  4. Execute hasura metadata apply, then hasura migrate apply, and then hasura metadata apply. Just do it, it’s a strange workflow thing.
  5. Still in the `hasura` directory, execute hasura console.

These steps are demonstrated in this video from 48 seconds.

What do you get once deployed?

The following are some of the core capabilities of Hasura and showcase what you can get up and running in a matter of seconds, even when you start from a completely empty database! First off you’ll find the database now has 3 tables along with their pertinent schema built out in PostgreSQL and available via Hasura, as shown here under the Data tab of the console.

I also created a schema diagram just to provide a visual of how these tables are designed.

For the remote schema, the Spring API, the following steps will get it cloned and running locally.

  1. Clone this repo git clone git@github.com:Adron/hasura-spring-boot-graphql.git.
  2. Execute ./gradlew build to get the jar file built. It will then be located in the build/libs directory of the project.
  3. Next build the Docker image locally with docker build -t adron/hasura-spring-boot-graphql .
  4. Now you can either start this container with docker compose up -d using the docker-compose.yml in the project or you can run the image with Docker specifically with docker run -p 8081:8080 adron/hasura-spring-boot-graphql.

For a walkthrough of getting the Spring API running, check out 2:28 onward in this video.

Now both of these instances are running locally and you can test each out respectively, but not specifically together. I’ll probably write up another post on how to get services that spin up separately to run together for localized development. However, with the way things are set up in the two repos, it’s as if one team is the Hasura team building a GraphQL API and another is a Spring Java GraphQL API team, and they’re working autonomously of each other based just on the contracts of the APIs themselves.

Remote Schema

With that being the scenario, I’ve deployed the Spring API out remotely so that I could show how to put together a remote schema connection and then a remote join query, i.e. nested query in GraphQL speak, across these two APIs.

To add the remote schema, click on the remote schemas tab on the console. Add a name (1), then the URI (2), and optionally if needed add appropriate headers (3) or forward all headers from client requests.

Once that’s added, navigate to the relationships tab of the new remote schema and click on add. Then for this example, select remote database (1), then add a name (4) (Customer in the example) and then for type choose object (3) (per the example).

Then scroll down on that console screen and choose sales_data (1) and default, public, and users (2) under the reference database, schema, and table. Next up choose the source field (3) and reference column (4).

Once added it will look like this in the console.

This creates a relationship to be able to make nested queries against these sources with GraphQL. If it were a single contiguous database the schema would look like this. I’ve color coded the sales_data table as red, to signify it is the table we know is in another database (or, specifically, provided via another hosted API). However, as stated, in a single database the relationships would now look like this. The relationship however, isn’t in a database, but stored in the Hasura metadata between users and sales_data.

Now writing a query across this data would shape up like this. Because of the way the relationship was drawn via the remote schema, the path to get the nested object Customer (2) for the sales data is to start with the sales_data (1) entity. As shown.

sales_data {
  sales_number
  updated_at
  Customer {
    name
  }
}

Now we want to add more details about the particular customer like their email and details. To do this we’ll utilize another nesting level within this query that delves into relationships that are in the PostgreSQL database itself.

sales_data {
  sales_number
  updated_at
  Customer {
    name
    emails {
      email
    }
    details {
      details
    }
  }
}

With this, the nested email (3) and details (4) will be provided, which are foreign key relationships to the primary key table users in the underlying database, made available by Hasura’s relationships in metadata.

Boom! That’s it. Pretty easy setup if the databases and APIs have Hasura available to connect them in this way. Otherwise, this is a huge challenge to develop against if you’re using solely a tech stack like Apollo, Spring Boot, or Hot Chocolate. Often something along the lines of federation and more complexity would come into play. But more on that later; I’ve got a piece coming on federation, stitching, remote schemas, and gateways – among various ways – to get multiple GraphQL, or GraphQL and RESTful, APIs together into a singular, or singularly managed, API endpoint.

Hope that was useful, if you’ve got comments, questions, or curiosities let me know in the comments here, or pop over to the video and leave a comment there.

References:

The full video of setup and how the remote schema & joins work in Hasura.

Gradle Build Tool

A few helpful links and details to where information is on the Gradle Build Tool.

Installation

Via SDKMAN sdk install gradle x.y.z where x.y.z is the version, like 8.0.2.

Via Brew with brew install gradle.

Manually check out the instructions here.

Building a Java Library (or application, Gradle plugin, etc)

Use the init task from inside a directory with the pertinent project:

gradle init

You’ll be prompted for options.

With the project initialized, this is what the folder structure looks like.

At this point add the Java code for the library, similar to this example, and execute a build like this.

./gradlew build
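
If you don’t have library code handy, a placeholder along the lines of the stock sample that gradle init generates (class and method names below are just that sample) is enough to exercise the build:

// lib/src/main/java/<package>/Library.java, matching the layout gradle init creates
public class Library {
    /**
     * A stand-in method so the build, test, and javadoc tasks have something to work with.
     */
    public boolean someLibraryMethod() {
        return true;
    }
}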

Build Collateral

View the test report via the HTML output file at lib/build/reports/tests/test/index.html.

The JAR file is available in lib/build/libs with the name lib.jar. Verify the archive is valid with jar tf lib/build/libs/lib.jar.

Add the version by setting the version = '0.1.1' in the build.gradle file.

Run the jar task ./gradlew jar and the build will create a lib/build/libs/lib-0.1.1.jar with the expected version.

Add all this to the build by adding the following to the build.gradle file:

tasks.named('jar') {
    manifest {
        attributes('Implementation-Title': project.name,
                   'Implementation-Version': project.version)
    }
}

Verifying this all works, execute a ./gradlew jar and then extract the MANIFEST.MF via jar xf lib/build/libs/lib-0.1.1.jar META-INF/MANIFEST.MF.

Adding API Docs

In the */Library.java file, replace the /* comment delimiter with /** so that we get javadoc markup.

Run the ./gradlew javadoc task. The generated javadoc files are located at lib/build/docs/javadoc/index.html.

To add this as a build task, in build.gradle add a section with the following:

java {
    withJavadocJar()
}

Publish a Build Scan

Execute a build scan with ./gradlew build --scan.

Common Issues + Tips n’ Tricks

gradlew – Permission Denied issue

Let’s say you execute Gradle with ./gradlew with whatever parameter and immediately get a response of “Permission Denied”. The most common solution, especially for gradlew executables included in repositories, is to just give the executable permission to execute. This is done with a simple chmod +x gradlew and you should now be ready to execute!

Do Java Code Streams Exist?

Recently while doing some coding on Twitch I was posed a question, “Are there any people streaming Java?”

It’s an interesting question, as I’ve seen a lot of people streaming a lot of languages. The bulk of streamers seem to be streaming JavaScript and the related frameworks and tools like React, Node.js, Vue.js, and others. I’ve also seen a lot of people using Python, and a few doing things like Rust, C++, and a few others but barely anybody using Java.

Continue reading “Do Java Code Streams Exist?”