Building Content: Corporate Channels or Personal

Before getting into this topic, let's establish clear definitions of Corporate Channel and Personal Channel for the context of this article, along with a specific, detailed definition of "advocate" and "advocacy".

Corporate Channel: This is the channel of communications that falls within the realm of a corporate blog (DigitalOcean's, one of the best, plus HashiCorp's and New Relic's are all examples), a corporate Twitter account (the best, of course, are ones like Wendy's), and the normal slew of material on LinkedIn and Facebook. In all honesty, developers largely, and rightfully, just ignore the huge bulk of junk on both LinkedIn and Facebook. The other part of this channel is of course the plethora of ads that rain down from corporations, but those have nothing to do with advocacy, as you know, except in a disingenuous way.

Personal Channel: This is the channel that advocates most often work with. This is advocacy that they build up within the community, largely autonomous from the corporate channel. There is always a corporate entity, their employer or otherwise related, that will of course benefit as well. But first and foremost, the personal channel is one an advocate builds for themselves, the community, and a particular technology, language, or other thing they're interested in. Above all, from an external point of view, this is where the people who follow or consume an advocate's content gain trust in that particular individual.

Advocate/Advocacy: In this article I'm using an expanded notion: simply put, if you're on Twitter or GitHub or somewhere public in even the slightest way, you are indeed an advocate, providing advocacy for some technology product or platform. This includes people with titles like Developer, Engineer, Architect, or whatever else who have a professional presence online.

What’s What

Of these two channels, it is often difficult as an advocate to determine which to use for various messages out to the prospective community. Do I push content for a corporate entity? Do I push content via my personal blog? Will one damage the other or damage my future prospects at reaching the audience I hope to gain? What advantages does one have over the other? Let’s get into and answer these and other questions.

Advantage Goes To… Neither

Each channel has advantages over the other. For content that you are specifically curious about, that you've spent time learning, and whose personal journey you want to share, personal channels are dramatically better than a corporate channel. If something is more of a manual or how-to, akin to documentation, or an update about specific products or services, it rightfully fits on a corporate channel.

Things do become unclear when you want to provide release notes for a product along with commentary about what was built, which issues were resolved, and why. That content could add positive character and a human touch to a corporate channel, while on a personal channel it may add specific technical know-how that becomes associated with the owner of the personal channel. In this case there isn't a win-lose but a kind of win-win for either channel, except for the one that doesn't get the content.

For the best integrity, and to focus content where people can trust it most, post the following on…

Corporate Channels

  • Product or Service Release or Release Notes
  • Announcements of Corporate Events, News, etc.
  • How-to Articles
  • Documentation, new documentation
  • Manuals or walk through information

Personal Channels

  • Any of the above, but with personal details, thoughts, ideas, hopes, or whatever else has come to mind about what is being released or posted.
  • How-to articles that detail specific personal experiences using products or services and especially anything positive or negative related to those products or services.
  • Personal adventures in coding, conferences, meetups, events, or other human elements of one’s advocacy around something.
  • Other interesting, but generally tangentially related, content that you're interested in and that informs some idea of who you are, e.g. I post stuff about music (metal) and biking. If you're going to be active and advocate, you might as well be you (more on this below).

Building Channels

Another major point of contention is how, and in what way, a corporate channel should be built up from an advocacy perspective versus a personal channel. Should either relate to the other, reference each other, or otherwise interconnect? In many ways, my not-so-humble opinion is yes. A prime example: at DataStax I worked diligently to provide content for the corporate channels, but I also very specifically aimed to build my own channels, which I use to have a voice and provide the community with information and details about the things I've learned and am working on. These things often don't fit on the corporate presence, but they are exactly the types of things that build integrity in the corporate channels.

Take, for example, a personal anecdote: I undertook recon around Twitch streaming. This medium has been growing significantly among developers as a way to hang out online together and teach, learn, and generally build cool things that help with what we do day to day. Sometimes it's game development; other times it might be setting up infrastructure as code for something that will host a site, or pulling data into Grafana, for example. This is a very personal realm, and that personal interaction is a key underpinning of the experience. That makes going in as a corporate channel difficult; a personal presence needs to be built first to establish and maintain some integrity. Then building the corporate presence becomes a bit easier, since there is solid familiarity with the platform.

The same goes for Twitter. Personal channels, especially today, can gain far greater impact than a lot of the corporate channels. Many corporate channels just get outright blocked as people grow tired of having ads and other things force-pushed at them. Twitter, after all and in spite of Twitter Corp itself, is still largely about and for a personal experience. The company shoving news into it has made it a faux commons, distracted from many things, and created a toxic environment for many, but for those of us who can make use of it, it still stands out as one of the more valuable social media platforms. As mentioned, it's a personal environment that needs a human touch, and the best corporate channels are those like Wendy's!

Sharing Content

Now there is also the realm where content starts to be shared. Sometimes things are cross-posted, but whatever the case, shared content needs to be proofed, kept in a consistent voice for ease of reading, and maintained over time. Search engines and the other ways content is found organically on the internet go through a kind of auto-updating process to remove cruft and old links, but some links really stay put and continuously – even when horrifically outdated – show up in search results. For advocates it is of key importance to maintain relevancy and, in many cases, update the content they produce. It's rarely done, but it's worth working toward!

Advocacy for Advocates

When working through building content, every advocate and every company hiring advocates should work to build up their individual advocates. If the company isn’t, and the advocates aren’t, then it does a disservice to the advocates and the company – as it is of vital importance to not forget that we build software for people. Ideologically the connections that advocates have are with people, and it’s important that this is focused on, built up, and maintained as a core element of developer, or any kind of advocacy.

My take on the situation, and general modus operandi: "Advocates should first and foremost ensure they advocate for and build up their own work and presence, which itself is built on core relationships throughout the industry. Corporate content comes second, and in doing so the corporate content can and will become much more valuable, usable, and important."

On that note, good luck on your advocacy efforts!

A Shiny New Vuejs v3 Web App Using & Deployed to Amplify/AppSync/Cognito

No cruft, let’s just start.

Prerequisites

These details, plus yarn and a few other notes, are available in and derived from the Amplify Docs located here. What I've done is take those docs and add specific details and information for this happy path. It includes additional references for the steps I took, and specifically what I'm running for this particular tutorial. As noted below, there is a section where this deviates from those steps and I get into next steps beyond the initial setup of the app, Amplify, and AppSync. I'll note that part of the tutorial, or you can navigate directly to it with this anchor: thatPartWhereiDeviate.

You'll need the following for this specific tutorial. If you're acclimated to various OSes and their respective needs around this software, you can get this sorted yourself; it's mostly the same for each OS, but for this tutorial I'm rolling with MacOS Big Sur v11.6.2.

  • Your OS, as stated mine is Big Sur for this tutorial.
  • git. Probably any version released in the last decade will work just fine.
  • Node.js. Probably anything since v14 would work great but this tutorial is written against v16.11.1. As of this writing the LTS is 16.13.1 and current is 17.3.0.
  • Vue.js v3. For this tutorial I’m on a version of the v3 Preview. For the CLI a quick yarn global add @vue/cli does the job.
  • Amplify CLI. Version for this tutorial is 7.6.5. One can NPM install it with 🤙🏻 npm install -g @aws-amplify/cli or get it via cURL 👍🏻 curl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELL and of course, Windows has gotta be Windowsy with 😑 curl -sL https://aws-amplify.github.io/amplify-cli/install-win -o install.cmd && install.cmd.

A few first steps only need to be done once. If you've already set up your Amplify CLI, this isn't needed a second time.

First, get the Vue.js v3 base app skeleton running.

vue create mywhateverproject

Issuing this command will provide prompts to select Vue.js v3 Preview (or likely just v3 when fully released, which will come along with other tooling as needed). Once this is done, follow the standard steps: navigate into the directory with cd mywhateverproject, execute the yarn command, and finally yarn serve --open to open the running web app in your default browser.

Next initialize the Vue.js App as an Amplify Project and get some defaults set and accepted. Executing amplify init and accepting the defaults will get this done. As displayed when done the Vue.js v3 App will now have multiple defaults and respective items selected.

Amplify Init

With the core Amplify folder and settings set, adding the Amplify libraries for use in user interface components is next up.

yarn add aws-amplify @aws-amplify/ui-components

Now navigate into the src/main.js file and add Amplify and the initial configure call, which will do the actual initialization when the app launches.

This is replacing this code…

import { createApp } from 'vue'
import App from './App.vue'

createApp(App).mount('#app')

with this code.

import { createApp } from 'vue'
import App from './App.vue'
import Amplify from 'aws-amplify';
import aws_exports from './aws-exports';
import {
	applyPolyfills,
	defineCustomElements,
} from '@aws-amplify/ui-components/loader';

Amplify.configure(aws_exports);
applyPolyfills().then(() => {
	defineCustomElements(window);
});
createApp(App).mount('#app')

This completes the steps we need for a running application. To cover the full stack, let's get into the back end build out and schema construction. Then after that I'll delve into thatPartWhereiDeviate.

Before even launching I went ahead and added the back end and database, GraphQL API, and related collateral.

amplify add api

Notice in the screenshot: once I selected to edit the schema now, it simply opened the file in the editor of my choice, which is Visual Studio Code for this tutorial. Since I'm executing this from the terminal in Visual Studio Code, it opened the file right in the active editor. Win-win! The file opened by default for the schema includes the following GraphQL schema code.

# This "input" configures a global authorization rule to enable public access to
# all models in this schema. Learn more about authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules

input AMPLIFY { globalAuthRule: AuthRule = { allow: public } } # FOR TESTING ONLY!

type Todo @model {
	id: ID!
	name: String!
	description: String
}

For now, I'll just leave the comment, the input AMPLIFY, and the Todo type just as they are. It's important to note that this schema.graphql file is located under the amplify/backend/api/ directory. I'll come back to this later in thatPartWhereiDeviate.

Next I want to push the app, api, and backend to Amplify and AppSync.

amplify push

During this phase a lot of things happen: the GraphQL schema is turned into an API and deployed to AppSync, and the database is deployed to DynamoDB. Again, follow through with the default choices. If amplify console is issued just after this, a review of the API can be made.

Ok, now it's auth time. Adding that is mind-bogglingly minimal: just amplify add auth. For this I chose Default configuration, then Username for the way users sign in, and then the No, I am done option, followed by issuing another amplify push. I confirmed that and let it go through its process.

After this, the next steps included adding the following code to the App.vue file to get the initial interactions, security, and related things into place for the todo app. Again, I feel it's important to note that I'll be changing all of this later in the post. But it's a solid way to start building an application: get it up and running and deployed, then loop back around before moving on to next steps.

<template>
  <amplify-authenticator>
    <div id="app">
      <h1>Todo App</h1>
      <input type="text" v-model="name" placeholder="Todo name">
      <input type="text" v-model="description" placeholder="Todo description">
      <button v-on:click="createTodo">Create Todo</button>
      <div v-for="item in todos" :key="item.id">
        <h3>{{ item.name }}</h3>
        <p>{{ item.description }}</p>
      </div>
    </div>
    <amplify-sign-out></amplify-sign-out>
  </amplify-authenticator>
</template>

<script>
import { API } from 'aws-amplify';
import { createTodo } from './graphql/mutations';
import { listTodos } from './graphql/queries';
import { onCreateTodo } from './graphql/subscriptions';

export default {
  name: 'App',
  async created() {
    this.getTodos();
    this.subscribe();
  },
  data() {
    return {
      name: '',
      description: '',
      todos: []
    }
  },
  methods: {
    async createTodo() {
      const { name, description } = this;
      if (!name || !description) return;
      const todo = { name, description };
      this.todos = [...this.todos, todo];
      await API.graphql({
        query: createTodo,
        variables: {input: todo},
      });
      this.name = '';
      this.description = '';
    },
    async getTodos() {
      const todos = await API.graphql({
        query: listTodos
      });
      this.todos = todos.data.listTodos.items;
    },
    subscribe() {
      API.graphql({ query: onCreateTodo })
        .subscribe({
          next: (eventData) => {
            let todo = eventData.value.data.onCreateTodo;
            if (this.todos.some(item => item.name === todo.name)) return; // remove duplications
            this.todos = [...this.todos, todo];
          }
        });
    }
  }
}
</script>

With this added now I could run yarn serve and check out the site. At this point I signed up just to have an account to use and added a todo item. Everything worked swimmingly at this point!

The final step before getting into a proper deviation from this todo example involves now getting the app properly published to Amplify. This is done by executing amplify add hosting. Accept Hosting with Amplify Console (Managed hosting with custom domains, Continuous deployment) and Manual deployment when prompted. Now, finally, issue the command amplify publish.

Boom, the todo app site is live!

thatPartWhereiDeviate

Now it’s time to get into the nitty gritty of deviations from the easy path!

New GraphQL Schema!

My schema that I want to add is around building out collections for a number of data sets. The first one is a data set I routinely talk about, and yes, it is indeed centered around trains! If you're uninterested in the trains part and schema, and more interested in the changes, skip down to the "Deploying The Changes" section of the post.

Alright, describing the data model that I want to have and use starts with the minimal part of just having a list of railroads. This would be a list, or more specifically a table, of railroads that we can add railroads to while collecting peripheral information about them. For this table I'll add the following fields, AKA columns of data to store. For a railroad I would want to collect the following:

  1. railroad name
  2. wikipedia URI
  3. map URI
  4. peripheral details of an unstructured nature
  5. founding year, month, and day of the railroad
  6. record stamp

In addition, I want to keep a list of trains – specifically named trains – that each railroad operates. This data would include:

  1. train name
  2. active – yes / no
  3. peripheral details of an unstructured type
  4. wikipedia URI
  5. route map URI
  6. time table URI
  7. train URI – i.e. like a website or something that might be dedicated to the particular train.
  8. record stamp
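
Taken together, a first sketch of those two tables as an Amplify GraphQL schema might look something like the following. This is my own illustrative pass at this stage (field names and types here are assumptions), not the schema Amplify Studio generates:

```graphql
# Sketch only: a first pass at the two tables described above,
# before modeling them in Amplify Studio. Field names are illustrative.
type Railroad @model {
	id: ID!
	railroad: String!
	wikipedia_uri: AWSURL
	map_uri: AWSURL
	peripheral_details: AWSJSON
	founding_year: Int
	founding_month: Int
	founding_day: Int
	record_stamp: AWSTimestamp
}

type Train @model {
	id: ID!
	train_name: String!
	active: Boolean!
	peripheral_details: AWSJSON
	wikipedia_uri: AWSURL
	route_map_uri: AWSURL
	timetable_uri: AWSURL
	train_uri: AWSURL
	record_stamp: AWSTimestamp
}
```

The unstructured "peripheral details" map naturally to AWSJSON, the URIs to AWSURL, and the record stamp to AWSTimestamp; the railroad-to-trains relationship is left out here since it gets modeled in Amplify Studio below.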

Deploying The Changes

Now it is time to deploy these additional database and schema changes. One of the easiest ways to make them is to use Amplify Studio, which has a great data modeling section that puts your schema together and ensures it is usable. It will then enable you to deploy that new schema, with its changes, to the database and the active service!

Navigate to the interface from here.

Opening Amplify Studio with the Launch Studio button.

Once I navigated to the interface I built out the additional tables like this.

Building a Schema with Amplify Studio

Then just click on Save and Deploy and then Deploy on the following modal dialog and Amplify will deploy the AppSync schema changes.

Amplify Studio Save & Deploy

With that deployed, in the same Amplify Studio interface I then clicked on the GraphQL API tab and then on the Resource name for mywhateverproject to open up the AppSync Console.

Opening an AppSync Schema.

Further down in the schema toward the bottom I can then find and confirm my types are in and ready for use. The Todo type is still there, since I didn’t need to really remove it yet and it acts as a good working reference. But more importantly you can see the other types and the correlative relationship that was added via the Amplify data modeling interface.

...more schema

type Todo @aws_iam
@aws_api_key {
	id: ID!
	name: String!
	description: String
	_version: Int!
	_deleted: Boolean
	_lastChangedAt: AWSTimestamp!
	createdAt: AWSDateTime!
	updatedAt: AWSDateTime!
}

type Train @aws_iam
@aws_api_key {
	id: ID!
	train_name: String!
	active: Boolean!
	peripheral_details: AWSJSON
	wikipedia_uri: AWSURL
	route_map_uri: AWSURL
	timetable_uri: AWSURL
	train_uri: AWSJSON
	record_stamp: AWSTimestamp
	_version: Int!
	_deleted: Boolean
	_lastChangedAt: AWSTimestamp!
	createdAt: AWSDateTime!
	updatedAt: AWSDateTime!
	railroads(
		railroadID: ModelIDKeyConditionInput,
		filter: ModelRailroadTrainFilterInput,
		sortDirection: ModelSortDirection,
		limit: Int,
		nextToken: String
	): ModelRailroadTrainConnection
		@aws_iam
@aws_api_key
}

type Railroad @aws_iam
@aws_api_key {
	id: ID!
	railroad: String!
	wikipedia_ur: AWSURL
	map_uri: AWSURL
	peripheral_details: AWSJSON
	founding_year: Int
	founding_month: Int
	founding_day: Int
	record_stamp: AWSTimestamp
	_version: Int!
	_deleted: Boolean
	_lastChangedAt: AWSTimestamp!
	createdAt: AWSDateTime!
	updatedAt: AWSDateTime!
	RailroadTrains(
		trainID: ModelIDKeyConditionInput,
		filter: ModelRailroadTrainFilterInput,
		sortDirection: ModelSortDirection,
		limit: Int,
		nextToken: String
	): ModelRailroadTrainConnection
		@aws_iam
@aws_api_key
}

...more schema

The relationship can be seen via the object connections here in the ModelRailroadTrainConnection and the keys associated.

Next, to get these changes (made out of band via the Amplify Studio interface) into the local repository requires two quick commands, both of which are displayed on screen in the studio's GraphQL interface. It's best to grab the command there, as it'll have the appId already included in a copypasta-ready option on screen, which looks like this.

amplify pull --appId app-id-which-is-in-studio --envName dev

Executing that will get everything updated and pull the remote GraphQL schema into the local schema.graphql file located under amplify/backend/api/. Next, run this command.

amplify update api

This will synchronize everything, and it will also prompt for code generation so that the client-side code is ready for use when I build out the user interface later.

Next Up

Some of the things I'll cover in the next article, as I continue this effort, are what all these steps have done from a project perspective. As one can see, some things might be a little confusing at this point, for example the schema shown above in AppSync; after the synchronization, the local schema.graphql file shows this.

type Train @model @auth(rules: [{allow: public}]) {
  id: ID!
  train_name: String!
  active: Boolean!
  peripheral_details: AWSJSON
  wikipedia_uri: AWSURL
  route_map_uri: AWSURL
  timetable_uri: AWSURL
  train_uri: AWSJSON
  railroads: [RailroadTrain] @connection(keyName: "byTrain", fields: ["id"])
  record_stamp: AWSTimestamp
}

type Railroad @model @auth(rules: [{allow: public}]) {
  id: ID!
  railroad: String!
  wikipedia_ur: AWSURL
  map_uri: AWSURL
  peripheral_details: AWSJSON
  founding_year: Int
  founding_month: Int
  founding_day: Int
  record_stamp: AWSTimestamp
  RailroadTrains: [RailroadTrain] @connection(keyName: "byRailroad", fields: ["id"])
}

type Todo @model @auth(rules: [{allow: public}]) {
  id: ID!
  name: String!
  description: String
}

type RailroadTrain @model(queries: null) @key(name: "byRailroad", fields: ["railroadID", "trainID"]) @key(name: "byTrain", fields: ["trainID", "railroadID"]) @auth(rules: [{allow: public}]) {
  id: ID!
  railroadID: ID!
  trainID: ID!
  railroad: Railroad! @connection(fields: ["railroadID"])
  train: Train! @connection(fields: ["trainID"])
}

Obviously this is very different from what is shown from one place to another, so I'll discuss this and other things there. Subscribe (over on the right side of the blog) and follow (@Adron), and you'll be updated when the next post is published.

SITREP (Situational Report)

Alright, what have I wrapped up so far? Here’s a bullet list of the things finished:

  • Vue.js App created.
  • Vue.js Form put together for todo entries.
  • Authentication added with Cognito.
  • An AppSync GraphQL API created and published.
  • Additional types added to the AppSync GraphQL API.
  • Updates and code regenerated for our API.

What's next to do? This is the short list; beyond it there will be much more to do!

  • Get the Vue.js app spiffed up: put some nice design together for it, and add reasonable CSS, graphics, etc. to make the interface pop. But above all, it needs to feel usable and be usable.
  • Add the forms for each of the respective interfaces to manipulate the data. This could amount to lots of different things: adding navigation, routing, and other menus and the like.
  • Add screens that can provide some nice reports on the data that I’m putting together. For example, it’d be nice to get a list of the actual named trains or the railroads and have their images, maps, and other respective elements shown.

…and the list goes on. Until next session, enjoy your thrashing code! 🤘🏻

References

The Best Collected Details on the GraphQL Specification – Overview & Language

Reference https://spec.graphql.org

GraphQL, a query language and execution engine, is described in this specification in terms of the capabilities and requirements of data models for client-server applications. This article details and elaborates on the specification, the features and capabilities of GraphQL, and implementations. I hope this collection of details around the GraphQL Specification can be used as a reference and a launch point into learning about GraphQL use, implementation (server and client side), and as an ongoing reference during future specification additions or changes!

The Humans

Every aspect of languages and specifications is created in the context of an end user, a human. The specification is a project of the Joint Development Foundation, with a current Working Group charter that includes the IP policy governing all working group deliverables (i.e. new features, changes to the spec, source code, datasets, etc.). To join the Working Group, there are details for membership and an agreement covering joining the efforts of the group.

Licensing, Notation, Grammar, & Syntax

Current licensing for the GraphQL Specification and related Working Group deliverables falls under the Open Web Foundation Agreement 1.0 (Patent and Copyright).

The syntax grammar and related specifics are laid out in the document, and for this article it isn't necessary to dig through them. The research done and collected for this article has that covered for you, dear reader; however, if you do dig into the specification itself, I strongly suggest reading those sections to ensure you know exactly what is represented by what.

Description of GraphQL

The specification starts off with a detailed description of GraphQL. More detailed than a lot of descriptions that one would find in articles on the topic, which makes it extremely valuable for anyone who wants to really get a rich and thorough understanding of GraphQL. The first sentence of the October 2021 Edition of the specification provides a great high level definition,

…a query language and execution engine originally created at Facebook in 2012 for describing the capabilities and requirements of data models for client-server applications.

A few things you'll often read outside of the spec are claims like "GraphQL is a query language similar to SQL", which is true, but not entirely. I've even seen descriptions like "GraphQL is a programming language", which is a hard no. The specification's description provides clarity around these simplified definitions that could otherwise leave one confused.

GraphQL, as defined, is not a programming language and not capable of arbitrary computation. This is important to note, as many of the platforms and services that provide GraphQL APIs could lead one to think that GraphQL is providing much of the functionality in these platforms, when really it is merely the facade and presentation via API of the capabilities of the underlying systems and platforms (re: Hasura, AppSync, Astra, Atlas, Dgraph, Contentful, GraphCMS, etc).

Enough about what GraphQL isn't per the spec; what does define GraphQL? Reading the design principles behind the specification provides a much clearer idea of what GraphQL is intended to do.

  • Product-centric – The idea behind GraphQL is focused on the product first, with emphasis on what the user interface, and specifically the front-end engineers, want and need for display of and interaction with an application's data. Extending this, it behooves one to design GraphQL APIs around data storage mechanisms that encourage this user-interface-first, and arguably even user-experience-first, design practice. This often includes databases like DynamoDB, Apache Cassandra, or AWS Neptune, systems that necessitate designing from the front end into the data. It draws conflict with those who try to follow the tightly coupled database-first design practices common with relational systems. Note, however, that this doesn't preclude design-first practices with relational databases; it just provides an avenue of conflict for those who want data-first design, since that is an entrenched practice with relational databases.
  • Hierarchical – GraphQL is oriented toward the creation of and manipulation of hierarchical views. So much that GraphQL requests are structured as such.
  • Strong-typing – Every GraphQL service defines an application-specific type system and requests are made in that context. This design principle is also why one will find regular use of TypeScript with GraphQL, specifically in the JavaScript web world. The two are matched very well to manage and extend strong-types to the systems using the GraphQL API. This also extends well, albeit with more mapping specifics needed to ensure types match. This design principle provides a solid level of type safety for GraphQL use within application development.
  • Client-specified response – Based on this design principle, GraphQL publishes a capability describing how clients can access the API. Requests provide field-level granularity, and with that the client can specify exactly what it needs to retrieve. This particular characteristic is what gives GraphQL its famed ability to fetch exactly the data requested and nothing more.
  • Introspective – The ability to introspect against an API and derive what is available, and in many cases derive how or what to do with what is available, is a very powerful feature of GraphQL APIs. All of the intricate power of SOA Architectures without the conflagrations of XML, SOAP, and WSDLs. Arguably, one could say that GraphQL is SOA right? Ok, getting off in the weeds here, let’s keep rolling!

Language

Clients accessing the GraphQL API use the GraphQL query language. These requests are referred to as documents. Documents can contain any of the operations available from a GraphQL API (queries, mutations, or subscriptions), as well as fragments that allow reuse of various data requirements.

The GraphQL document follows a particular processing paradigm. First, the document is converted into tokens and ignored tokens, scanning left to right and repeatedly taking the next possible sequence of code points allowed by the lexical grammar as the next token. Parsing these tokens produces the AST (Abstract Syntax Tree). There are other specifics to how the document is processed, but from a usage perspective the primary paradigm of tokens, ignored tokens, and processing order is helpful to know about the processing of a GraphQL document.
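
To illustrate that scan, here's a toy sketch in JavaScript (my own simplification, not the reference implementation) that emits lexical tokens while skipping ignored tokens:

```javascript
// A minimal sketch of the lexing pass: scan left to right, emitting lexical
// tokens (punctuators, names, numeric values) and skipping ignored tokens
// (white space, line terminators, commas, comments). String values, block
// strings, signs, and exponents are omitted for brevity.
function lex(source) {
  const punctuators = new Set(['!', '$', '&', '(', ')', ':', '=', '@', '[', ']', '{', '|', '}']);
  const tokens = [];
  let i = 0;
  while (i < source.length) {
    const ch = source[i];
    // Ignored tokens: white space, line terminators, commas
    if (ch === ' ' || ch === '\t' || ch === '\n' || ch === '\r' || ch === ',') { i++; continue; }
    // Ignored token: a comment runs from # to the end of the line
    if (ch === '#') { while (i < source.length && source[i] !== '\n') i++; continue; }
    // The three-dot spread punctuator "..."
    if (source.startsWith('...', i)) { tokens.push({ kind: 'Punctuator', value: '...' }); i += 3; continue; }
    if (punctuators.has(ch)) { tokens.push({ kind: 'Punctuator', value: ch }); i++; continue; }
    // Name :: /[_A-Za-z][_0-9A-Za-z]*/
    if (/[_A-Za-z]/.test(ch)) {
      let j = i + 1;
      while (j < source.length && /[_0-9A-Za-z]/.test(source[j])) j++;
      tokens.push({ kind: 'Name', value: source.slice(i, j) });
      i = j;
      continue;
    }
    // Int and Float values (simplified: digits and a decimal point only)
    if (/[0-9]/.test(ch)) {
      let j = i + 1;
      while (j < source.length && /[0-9.]/.test(source[j])) j++;
      const value = source.slice(i, j);
      tokens.push({ kind: value.includes('.') ? 'FloatValue' : 'IntValue', value });
      i = j;
      continue;
    }
    throw new Error(`Unexpected character: ${ch}`);
  }
  return tokens;
}

const example = lex('{ widget { widgetValues } } # a comment');
console.log(example.map(t => t.value).join(' ')); // "{ widget { widgetValues } }"
```

Note how the comment and the white space never reach the token stream; that is exactly the lexical/ignored split the spec describes.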

So far, this covers section 1 and the start of section 2. The other parts of section 2.x cover a wide range of what the document can use and be made of from a source text perspective: it must be Unicode, it can use white space and line terminators to improve legibility, and other characteristics that can mostly be assumed, since almost every text-formatted document type in the industry works this way today.

2.1.4 covers comments; the important note here is that the comment character is the # sign. 2.1.5 describes the role of insignificant commas, which exist purely for readability, such as the stylistic use of trailing commas or line terminators as list delimiters.
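A quick sketch showing both, reusing the train fields that appear in examples later in this post:

# Comments run from the # sign to the end of the line.
{
	train {
		namedTrain,  # this trailing comma is insignificant
		railroad     # ...and this field has no comma at all
	}
}

Both fields parse identically; commas and line terminators are interchangeable separators here.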

2.1.6 is about Lexical Tokens, where we get into one of the two key elements of the overall GraphQL document. A Lexical Token is one of several kinds of indivisible lexical grammar. Tokens can be separated by Ignored Tokens. The Lexical Tokens consist of the following:

Token :: Punctuator Name IntValue FloatValue StringValue

2.1.7 is about Ignored Tokens, the elements that can be used to improve readability and to separate Lexical Tokens. Ignored Tokens are the Unicode BOM, white space, line terminators, comments, and commas.

Within a token, there are punctuators (2.1.8), made up of one of the following:

! $ & ( ) ... : = @ [ ] { | }

Names in 2.1.9 are defined as alphanumeric characters and the underscore. Names are case-sensitive, thus word, Word, and WORD are entirely different names.

The next key element of the language is Operations (defined in 2.3). There are three specific operations:

  1. query
  2. mutation
  3. subscription

An example, inclusive of additional tokens, would look something like this.

mutation {
  getThisWidget(widgetId: 666) {
	widget {
	  widgetValues
	}
  }
}

A special case is shorthand, provided for the query operation. If the only operation in a GraphQL document is a query, the query operation keyword can be left out. So this

query {
	widget {
		widgetValues
	}
}

would end up looking like this.

{
	widget {
		widgetValues
	}
}

In 2.4, Selection Sets are defined as "An operation selects the set of information it needs, and will receive exactly that information and nothing more, avoiding over-fetching and under-fetching data", which is of course one of the key features of GraphQL. The idea of minimizing or eliminating over- or under-fetching of data is a very strong selling point! A query, for example

{
	id
	train
	railroad
}

would only return exactly the data shown, eliminating excess across the wire to the client. Elaborating on this, imagine the underlying table or database storing not just the id, train, and railroad, but the inception of the railroad, extra peripheral details, maybe some extra key codes, or other information. Querying all of the data would look like this.

{
	id
	train
	railroad
	inceptionDate
	details
	peripheralDetails
	keyCodeA
	keyCodeB
	keyCodeC
	information
}

This, of course, would get all of the data, but provided we don’t need all of it, fetching only the key fields we need with the absolute minimum of language syntax is a feature, and strength, of GraphQL.

Each of the Selection Sets, as in the examples above, is made up of Fields (2.5 in the spec). Each field is either a discrete piece of data, complex data, or a relationship to other data.

This example shows a discrete piece of data that is being requested.

{
	train {
		namedTrain
	}
}

This discrete request would return a value that would provide the named trains of the train type.

Then a complex type in a query might look like this.

{
	train {
		startDate {
			day
			month
			year
		}
	}
}

Even though one could use a date field as a singular discrete piece of data, in this example startDate is a complex type with the parts of the starting date for the train type being broken out to day, month, and year.

Another might have a correlative relationship that looks similar to the above discrete data example, except with nested values of the related element.

{
	train {
		namedTrain
		startDate {
			year
		}
		railroads {
			foundingYear
			history
		}
	}
}

In the above example, we are specifically fetching only the year of the complex type startDate, and returning the related object railroads with its correlative values foundingYear and history.

From a conceptual point of view, fields are functions that return a value. GraphQL doesn’t dictate what or how that function executes to return the value, only that the value is returned. The underlying function will often need an argument passed in to identify the field value to return; for this, Arguments (2.6) are implemented through an argument list in parentheses attached to the field identifier.

{
	train(id: 1) {
		namedTrain
	}
}

In this example the train retrieved has an id equal to 1, which will return a singular train with the field namedTrain returned. Let’s say the train had a certain seat type that could be returned based on various parameters.

{
	train(id: 1, seatClass: 1) {
		namedTrain
		seats {
			passengerCar
			number
		}
	}
}

The returned list of seats for the train would consist of the seat number and the passenger car the seat is in, based on seatClass equaling 1.

Another way to build results is with the Field Alias specification (2.7). Imagine you want to return a field with a picture of the train at thumbnail size and display size.

{
	train(id: 1) {
		smallTrainImage: trainPic(imageType: "thumbnail")
		fullsizeTrainImage: trainPic(imageType: "display")
	}
}

This example would return the thumbnail-size image, stored in field trainPic, in the smallTrainImage field alias, with the fullsizeTrainImage field alias providing the return field for the trainPic matched to the display imageType.

Another example, similar in focus to the above, might be to return the types of seats available for a particular train, divided into 1st, 2nd, and 3rd class and named firstClass, businessClass, and coachClass accordingly.

{
	train(id: 1) {
		namedTrain
		firstClass: seats(seatClass: 1) {
			passengerCar
			number
		}
		businessClass: seats(seatClass: 2) {
			passengerCar
			number
		}
		coachClass: seats(seatClass: 3) {
			passengerCar
			number
		}
	}
}

The above also displays the duplication addressed by the concept described in 2.8, Fragments. Fragments allow for the reuse of common repeated selections of fields, reducing duplicated text in the document.

This also further accentuates the fetching specificity of the aforementioned Selection Sets. Most specifically stated: more options to prevent needless round-trips, to avoid excess data per request, and to prevent getting too little data and requiring those extra round trips. Fetching problems mitigated!
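As a sketch, the repeated seat selections from the alias example above could be pulled into a fragment, assuming a hypothetical Seat type in the schema:

{
	train(id: 1) {
		namedTrain
		firstClass: seats(seatClass: 1) { ...seatFields }
		businessClass: seats(seatClass: 2) { ...seatFields }
		coachClass: seats(seatClass: 3) { ...seatFields }
	}
}

fragment seatFields on Seat {
	passengerCar
	number
}

The selection of passengerCar and number is written once and reused three times.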

A subsection of a subsection for the language section of the specification covers Type Conditions (2.8.1) and Inline Fragments (2.8.2). Fragments must specify the type they apply to, cannot be specified on any input value, and only return values when the concrete type of the object matches the fragment’s type. Fragments can also be defined inline to a selection set, which conditionally includes fields at runtime based on their type.

query FragmentTyping {
	trainConsist(handles: ["baggage", "passenger"]) {
		handle
		...baggageFragment
		...passengerFragment
	}
}

fragment baggageFragment on BaggageUnit {
	baggageUnits {
		count
	}
}

fragment passengerFragment on PassengerUnit {
	passengerUnits {
		count
	}
}

With a result that would look like this.

{
  "trainConsist": [
    {
      "handle": "baggage",
      "baggageUnits": { "count": 1 }
    },
    {
      "handle": "passenger",
      "passengerUnits": { "count": 11 }
    }
  ]
}

Something similar could be done with inline fragments too. Additionally, Inline Fragments can be used to apply a directive. More on that later!

Input Values, starting in section 2.9, have a number of subsections defining the characteristics and features of Input Values. Field and directive arguments accept input values with literal primitives. Input Values can include scalars, enumeration values, lists, or input objects. Input Values can also be defined as variables. For each of these there are numerous semantic details; the following breakdown covers the specific core details of note for the values.

  • 2.9.1 Int Value – This value is specified without a decimal point or exponent, has no leading zeros, and can be negative.
  • 2.9.2 Float Value – Floats include a decimal point, an exponent, or both; have no leading zeros; and can be negative.
  • 2.9.3 Boolean Value – Simple, either true or false.
  • 2.9.4 String Value – Strings are sequences of characters wrapped in quotation marks (i.e. "This is a string value that is a sentence."). There can also be block strings across multiple lines, using three quotes to start and end on the lines before and after the string text, as shown here:

"""
The text goes here just after the starting quotes.

then some more text.

last line… then followed by the three quotes.
"""

  • 2.9.5 Null Value – null which is kind of nuff’ said. Sometimes, just like in databases, I’m not entirely sure how I feel about null being included in so many things.
  • 2.9.6 Enum Value – These values are represented as unquoted names, and recommended to be all caps.
  • 2.9.7 List Value – Wrapped by square-brackets (i.e. brackets vs. braces) [ ]. Commas are optional for separation and readability. Both [1, 2, 3] and [1 2 3] are the same.
  • 2.9.8 Input Object Value – These are unordered lists of keyed input values wrapped in curly-braces (i.e. braces, vs brackets) { }. These are referred to as object literals and might look like { name: "Benjamin" } or { price: 4.39 }.
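A sketch pulling several of these literal input value kinds together in one request (the argument names here are hypothetical):

{
	train(
		id: 1                     # Int
		maxPrice: 4.39            # Float
		activeOnly: true          # Boolean
		namedTrain: "Empire"      # String
		seatClasses: [1 2 3]      # List, commas optional
		filter: { price: 4.39 }   # Input Object
	) {
		namedTrain
	}
}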

Variables allow Input Values to be parameterized for reuse. An example would look like this.

query getTrainsList($inceptionYear: Int) {
	train(inceptionYear: $inceptionYear) {
		id
		namedTrain
		details
	}
}

Type references (2.11) are the types of data used for arguments and variables; a type reference can be a named input type, a list of another input type, or a non-null variant of any other input type.

Even though 2.12 is a minimal section of the specification, it covers a hugely powerful feature used extensively in various GraphQL service options: Directives. Directives provide a way to define runtime execution and type validation behavior in a GraphQL document beyond the specification-based behaviors. A directive has a name and accepts arguments of any input type. Directives can also describe additional information for types, fields, fragments, and operations. New configuration options, for example, could be set up via Directives.
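For a concrete sketch, the built-in @include directive conditionally includes a field at runtime based on a Boolean variable (the train fields echo the earlier examples):

query getTrain($withDetails: Boolean!) {
	train(id: 1) {
		namedTrain
		details @include(if: $withDetails)
	}
}

With { "withDetails": false } passed as the variable, the details field is skipped entirely.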

Note that Directive order is significant. For example, these two examples could have different resolutions:

type Freight
	@addFreight(source: "farmSystems")
	@excludeFreight(source: "toxicities") {
	name: String
}

type Freight
	@excludeFreight(source: "toxicities")
	@addFreight(source: "farmSystems") {
	name: String
}

That wraps up GraphQL section 1 and 2, covering the core language. Next up is the type system, schema, and related topics in section 3 of the specification. Notes coming soon!

Apollo GraphQL Federation Schema Validation Error [Solved!]

This is an error I bumped into while working through the example for Apollo’s GraphQL Federation when setting up a subgraph API. I’ve tried several things to resolve this error, including changing versions of the GraphQL library in use, but that hasn’t fixed it. I’ve also reproduced it on macOS, Linux, and Windows, so it isn’t something odd about the environment.

 ~/Codez/AppoloFederationCore-v2/ [main] node index.js
/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/apollo-graphql/lib/schema/buildSchemaFromSDL.js:50
        throw new GraphQLSchemaValidationError_1.GraphQLSchemaValidationError(errors);
        ^

GraphQLSchemaValidationError: Unknown directive "@entity".
    at buildSchemaFromSDL (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/apollo-graphql/lib/schema/buildSchemaFromSDL.js:50:15)
    at buildSubgraphSchema (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/@apollo/subgraph/dist/buildSubgraphSchema.js:26:58)
    at Object.<anonymous> (/Users/adronhall/Codez/AppoloFederationCore-v2/index.js:29:11)
    at Module._compile (node:internal/modules/cjs/loader:1095:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1124:10)
    at Module.load (node:internal/modules/cjs/loader:975:32)
    at Function.Module._load (node:internal/modules/cjs/loader:816:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12)
    at node:internal/main/run_main_module:17:47 {
  errors: [
    GraphQLError [Object]: Unknown directive "@entity".
        at Object.Directive (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/graphql/validation/rules/KnownDirectivesRule.js:56:29)
        at Object.enter (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/graphql/language/visitor.js:323:29)
        at visit (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/graphql/language/visitor.js:243:26)
        at Object.validateSDL (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/graphql/validation/validate.js:92:22)
        at buildSchemaFromSDL (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/apollo-graphql/lib/schema/buildSchemaFromSDL.js:48:31)
        at buildSubgraphSchema (/Users/adronhall/Codez/AppoloFederationCore-v2/node_modules/@apollo/subgraph/dist/buildSubgraphSchema.js:26:58)
        at Object.<anonymous> (/Users/adronhall/Codez/AppoloFederationCore-v2/index.js:29:11)
        at Module._compile (node:internal/modules/cjs/loader:1095:14)
        at Object.Module._extensions..js (node:internal/modules/cjs/loader:1124:10)
        at Module.load (node:internal/modules/cjs/loader:975:32)
  ]
}

The code I’m running is the happy path code offered in the docs.

const { ApolloServer, gql } = require('apollo-server');
const { buildSubgraphSchema } = require('@apollo/subgraph');

const typeDefs = gql`
  type Query {
    me: User
  }

  type User @entity @key(fields: "id") {
    id: ID!
    username: String
  }
`;

const resolvers = {
  Query: {
    me() {
      return { id: "1", username: "@ava" }
    }
  },
  User: {
    __resolveReference(user, { fetchUserById }){
      return fetchUserById(user.id)
    }
  }
}

const server = new ApolloServer({
  schema: buildSubgraphSchema([{ typeDefs, resolvers }])
});

server.listen(4001).then(({ url }) => {
    console.log(`🚀 Server ready at ${url}`);
});

Anybody seen this? Got a fix? Do I need to add the @entity directive myself? Whatever the case, I think the documentation should point out the dependency needed if it’s just a dependency, but this reliably breaks just following the happy path.

Thoughts?

Will update this post with the resolution as I dig through troubleshooting. 👍🏻


UPDATE: Dec 3rd, 2021.

It looks like the docs were just updated incorrectly when posted and the @entity directive isn’t ready for v2 just yet. Removing the directive resolves the error; we’ll need to wait until it’s added in a subsequent iteration for that capability.
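For reference, here are the corrected type definitions from the example above, with only the @entity directive removed:

type Query {
  me: User
}

type User @key(fields: "id") {
  id: ID!
  username: String
}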

Stephen Barlow (@barlow_vo) from Apollo responded via the Twitters, but also added a link to the roadmap for federation.

AWS Amplify Release, GraphQL, and Recent Curated Links

This release kicked off this week in time for re:Invent and I put together a quick write up. Any questions, feel free to ping me via my contact form or better yet, just pop a question at me via the Twitters @Adron.

Authenticator

Amplify’s new Authenticator

Docs here

The new authenticator is a component that adds a full authentication flow to your app with coordinated boilerplate. This covers the vue.js, angular, and react frameworks. The component has a customizable UI (as you’d expect), and includes social logins for Google, Facebook, Apple, and Amazon. The initial setup is zero-configuration, and there is password manager support, along with show/hide confirm password forms.

The zero configuration works based on your Amplify CLI setup. To use this component, grab the latest version of the Amplify CLI, 6.4.0.

npm

npm install -g @aws-amplify/cli@latest

yarn

yarn global add @aws-amplify/cli@latest

Once that runs it will update your aws-exports.js with the latest backend configuration for the Authenticator. So, zero configuration you have to add, but there’s configuration back there if you need to or want to dig in.

There is then an initial state for the component that sets a user up for sign in, sign up, or resetting their password. You might start with some code to get the component in your page like this.

// assuming the Amplify UI React package for the Authenticator component
import { Authenticator } from '@aws-amplify/ui-react';

export default function App() {
  return (
    <Authenticator>
      {({ signOut, user }) => (
        <main>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </main>
      )}
    </Authenticator>
  );
}

Then to set the initial state add the following parameter.

export default function App() {
  return (
    <Authenticator initialState="signUp">
      {({ signOut, user }) => (
        <main>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </main>
      )}
    </Authenticator>
  );
}

Setting many of the other options to your needs involves adding additional parameters to the Authenticator component, like:

Social providers

export default function App() {
  return (
    <Authenticator socialProviders={['amazon', 'apple', 'facebook', 'google']}>
      {({ signOut, user }) => (
        <main>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </main>
      )}
    </Authenticator>
  );
}

Sign up attributes

export default function App() {
  return (
    <Authenticator signUpAttributes={[]}>
      {({ signOut, user }) => (
        <main>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </main>
      )}
    </Authenticator>
  );
}

Login mechanisms

export default function App() {
  return (
    <Authenticator loginMechanisms={['username']}>
      {({ signOut, user }) => (
        <main>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </main>
      )}
    </Authenticator>
  );
}

There are lots of other features too; give the docs a quick read for the full details. For more details on the overall authentication workflow, check out these docs.

In-App Messaging


This library is, sadly for my vue.js app, only available for react native. A quick install will get you started.

npm install -E aws-amplify@in-app-messaging aws-amplify-react-native@in-app-messaging amazon-cognito-identity-js @react-native-community/netinfo @react-native-async-storage/async-storage @react-native-picker/picker react-native-get-random-values react-native-url-polyfill

Then install pod dependencies for iOS.

pod install

An example looks like this.

import 'react-native-get-random-values';
import 'react-native-url-polyfill/auto';

import { AppRegistry } from 'react-native';
import App from './App';
import { name as appName } from './app.json';

AppRegistry.registerComponent(appName, () => App);

Then import the awsconfig via aws-exports.js.

import Amplify from 'aws-amplify';
import awsconfig from './src/aws-exports';

Amplify.configure(awsconfig);

Then integrate the Amplify React Native UI component into your app’s root component.

import {
  InAppMessagingProvider,
  InAppMessageDisplay
} from 'aws-amplify-react-native';

const App = () => (
  <InAppMessagingProvider>
    {/* Your application */}
    <InAppMessageDisplay />
  </InAppMessagingProvider>
);

Here, from the docs, is an app.jsx example.

import React, { useEffect } from 'react';
import { SafeAreaView, Button } from 'react-native';
import { Analytics, Notifications } from 'aws-amplify';
import {
  InAppMessagingProvider,
  InAppMessageDisplay
} from 'aws-amplify-react-native';

const { InAppMessaging } = Notifications;

// To display your in-app message, make sure this event name matches one you created
// in an In-App Messaging campaign!
const myFirstEvent = { name: 'my_first_event' };

const App = () => {
  useEffect(() => {
    // Messages from your campaigns need to be synced from the backend before they
    // can be displayed. You can trigger this anywhere in your app. Here we are
    // syncing just once when this component (your app) renders for the first time.
    InAppMessaging.syncMessages();
  }, []);

  return (
    <SafeAreaView>
      <InAppMessagingProvider>
        {/* This button has an example of an analytics event triggering the in-app message. */}
        <Button
          onPress={() => {
            Analytics.record(myFirstEvent);
          }}
          title="Record Analytics Event"
        />

        {/* This button has an example of an In-app Messaging event triggering the in-app message.*/}
        <Button
          onPress={() => {
            InAppMessaging.dispatchEvent(myFirstEvent);
          }}
          title="Send In-App Messaging Event"
        />

        <InAppMessageDisplay />
      </InAppMessagingProvider>
    </SafeAreaView>
  );
};

export default App;

Custom Resources w/ AWS CDK or Cloudformation

René (@renebrandel) wrote a blog post on extending the Amplify backend with custom AWS resources using AWS CDK or CloudFormation. The post is available here, but I’ll give you a quick summary.

The new CLI command amplify add custom will get almost any AWS service added to an Amplify backend. The core tech here is backed by the AWS Cloud Development Kit (CDK) or CloudFormation. For example, if you want to pull in AWS SNS as a custom resource, this is a very quick and concise way to do just that.

Again, check out René’s post to really get into it and check out some of the possibilities.

Overriding Amplify Backend Resources with CDK

Amplify sets up various capabilities out of the box in many situations, such as project-level IAM roles, Cognito Auth, or S3 resources. As with the previous section, I’ll keep this one short, as René (@renebrandel) has written a great blog post about this capability too, titled “Override Amplify-generated backend resources using CDK“. If you’re interested in nixing (overriding) any of these features and using another choice, this is your go-to.

Prototype a Fullstack App without an AWS Account

Not specifically related to the release, this capability, which Christian Nwamba AKA Codebeast wrote up in a blog post, will show you how to do just that. The docs focused on what he details in the post are available here.

GraphQL Transformer v2

This I saved for last; it’s in my wheelhouse after all. Some of the features of the new v2 drop include: deny-by-default auth, a lambda authorizer, customizable pipeline resolvers, and OpenSearch aggregations and use-scoped queries. The accomplished blogger, as mentioned in this very blog post, blogging legend René continues with “AWS Amplify announces the new GraphQL Transformer v2. More feature-rich, flexible, and extensible.“

The first thing René brings up in the post is more explicit data modeling. I’ve stolen two of the screen shots from that post as examples and motivation to click through and check out the post. But I’ll also elaborate.

Adding primary and secondary indexes.

With the new explicit data modeling options, we’ve got @primaryKey and @index added as directives to configure primary and secondary indexes from the schema for your DynamoDB database. The directives in AppSync GraphQL make for a powerful schema definition capability, whether pushing code-first or mapping from database to GraphQL schema.
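A sketch of what that schema-driven indexing might look like, using a hypothetical rail-themed model (field and index names are mine, not from the release notes):

type Train @model {
  id: ID! @primaryKey
  namedTrain: String! @index(name: "byNamedTrain", queryField: "trainsByName")
  railroad: String
}

Here @primaryKey marks the partition key and @index generates a secondary index along with a query field for looking trains up by name.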

Adding relational directives.

The other hugely important part is the ability in the schema to draw relationships, adding referential integrity to your schema and the inherent data. There are now @hasOne, @hasMany, @belongsTo, and @manyToMany directives. I’m really looking forward to some data schema and modeling build-outs in the near future. I’ll be sure to put together some tutorials and content detailing design considerations, and where and how to get all the schema hacks working best for your particular app data builds.
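A sketch of those relational directives, again with hypothetical rail-themed models of my own invention:

type Railroad @model {
  id: ID! @primaryKey
  name: String!
  trains: [Train] @hasMany
}

type Train @model {
  id: ID! @primaryKey
  namedTrain: String!
  railroad: Railroad @belongsTo
}

The @hasMany and @belongsTo pair wires up the one-to-many relationship between a railroad and its trains in both directions.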

Thanks & Curated Follows

That’s it for this post. I’m always endeavoring to bring interesting tech and blog about it, but another way to get the quickest updates, links, details, and information about these releases is to follow the following people in the Twittersphere. They’re all in the cohort I run with at AWS with the Amplify team. I owe thanks to each for helping me find the following information and details included in this blog entry.