A Shiny New Vuejs v3 Web App Using & Deployed to Amplify/AppSync/Cognito

No cruft, let’s just start.

Prerequisites

These details, plus yarn and a few other notes, are derived from the Amplify Docs. What I’ve done is take those docs and add specific details and information for this happy path, including additional references for the steps I took and exactly what I’m running this on for this particular tutorial. As noted below, there is a section where I deviate from those steps and get into next steps beyond the initial setup of the app, Amplify, and AppSync. I’ll call out that part of the tutorial when I get there, or you can jump directly to it via the anchor thatPartWhereiDeviate.

You’ll need the following for this specific tutorial. If you’re acclimated to various OSes and their respective needs around this software you can sort this out yourself, and it’s mostly the same for each OS, but for this tutorial I’m rolling with macOS Big Sur v11.6.2.

  • Your OS, as stated mine is Big Sur for this tutorial.
  • git. Probably any version released in the last decade will work just fine.
  • Node.js. Probably anything since v14 would work great but this tutorial is written against v16.11.1. As of this writing the LTS is 16.13.1 and current is 17.3.0.
  • Vue.js v3. For this tutorial I’m on a version of the v3 Preview. For the CLI a quick yarn global add @vue/cli does the job.
  • Amplify CLI. Version for this tutorial is 7.6.5. One can NPM install it with 🤙🏻 npm install -g @aws-amplify/cli or get it via cURL 👍🏻 curl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELL and of course, Windows has gotta be Windowsy with 😑 curl -sL https://aws-amplify.github.io/amplify-cli/install-win -o install.cmd && install.cmd.

A few first steps only need to be done once. If you’ve already set up your Amplify CLI, this isn’t needed a second time.
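
If the Amplify CLI hasn’t been configured against an AWS account yet, that one-time setup is a single command that walks through sign-in, region selection, and creating an IAM user for the CLI (roughly like this; the prompts vary a bit by CLI version):

amplify configure   # one-time: signs in to AWS, picks a region, creates an IAM user/profile for the CLI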

First, get the Vue.js v3 base app skeleton running.

vue create mywhateverproject

Issuing this command will provide prompts; select Vue.js v3 Preview (or likely just v3 once it is fully released, along with whatever other tooling is needed). Once this is done, follow the standard steps: navigate into the directory with cd mywhateverproject, execute yarn to install dependencies, and finally run yarn serve --open to open the running web app in your default browser, as shown below.
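
Condensed, those post-scaffold steps look like this:

cd mywhateverproject
yarn                 # install dependencies
yarn serve --open    # start the dev server and open it in the default browser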

Next, initialize the Vue.js app as an Amplify project and get some defaults set and accepted. Executing amplify init and accepting the defaults gets this done. As displayed when it completes, the Vue.js v3 app now has the various defaults and respective items selected.

Amplify Init

With the core Amplify folder and settings set, adding the Amplify libraries for use in user interface components is next up.

yarn add aws-amplify @aws-amplify/ui-components

Now navigate into the src/main.js file and add the Amplify import and initial configure call to the code, which performs the actual initialization when the app launches.

This is replacing this code…

import { createApp } from 'vue'
import App from './App.vue'

createApp(App).mount('#app')

with this code.

import { createApp } from 'vue'
import App from './App.vue'
import Amplify from 'aws-amplify';
import aws_exports from './aws-exports';
import {
	applyPolyfills,
	defineCustomElements,
} from '@aws-amplify/ui-components/loader';

Amplify.configure(aws_exports);
applyPolyfills().then(() => {
	defineCustomElements(window);
});
createApp(App).mount('#app')

This completes the steps needed for a running application. To make it full stack, next comes the back-end build out and schema construction, and after that I’ll delve into thatPartWhereiDeviate. First, make sure the Amplify libraries are in place; if you’re using npm rather than yarn, the equivalent of the earlier yarn add is:

npm install aws-amplify @aws-amplify/ui-components

Before even launching I went ahead and added the back end and database, GraphQL API, and related collateral.

amplify add api

Notice in the screenshot that once I opted to edit the schema now, it simply opened the file in the editor of my choice, which is Visual Studio Code for this tutorial. Since I’m executing this from the terminal inside Visual Studio Code, the file opened right in the active editor I’m already in, win win! The schema file that opens by default contains the following GraphQL code.

# This "input" configures a global authorization rule to enable public access to
# all models in this schema. Learn more about authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules

input AMPLIFY { globalAuthRule: AuthRule = { allow: public } } # FOR TESTING ONLY!

type Todo @model {
	id: ID!
	name: String!
	description: String
}

For now, I’ll leave the comment, the input AMPLIFY, and the Todo type just as they are. It’s important to note that this schema.graphql file is located under the app at amplify/backend/api/&lt;api-name&gt;/schema.graphql. I’ll come back to this later in thatPartWhereiDeviate.

Next I want to push the app, api, and backend to Amplify and AppSync.

amplify push

During this phase a lot of things happen: the GraphQL schema is turned into an AppSync API, and the backing database tables are deployed to DynamoDB.

To get the back end shipped, i.e. deployed to AppSync, issue the amplify push command and follow through with the default choices. If amplify console is issued just after this, the API can be reviewed in the console.

amplify push

Ok, now it’s auth time. Adding that is mind-bogglingly minimal: just amplify add auth. For this I chose Default configuration, then Username for the way users sign in, and then the No, I am done option, followed by another amplify push, which I confirmed and let run through its process.
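
In command form, that whole auth step is just the following (the prompt answers are the choices listed above):

amplify add auth    # Default configuration, sign in with Username, then "No, I am done."
amplify push        # deploy the Cognito-backed auth resources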

After this, the next steps included adding the following code to the App.vue file to get the initial interactions, security, and related pieces in place for the todo app. Again, I feel it important to note that I’ll be changing all of this later in the post. But it’s a solid way to start building an application, get it up and running and deployed, and then loop back around before moving on to next steps.

<template>
  <amplify-authenticator>
    <div id="app">
      <h1>Todo App</h1>
      <input type="text" v-model="name" placeholder="Todo name">
      <input type="text" v-model="description" placeholder="Todo description">
      <button v-on:click="createTodo">Create Todo</button>
      <div v-for="item in todos" :key="item.id">
        <h3>{{ item.name }}</h3>
        <p>{{ item.description }}</p>
      </div>
    </div>
    <amplify-sign-out></amplify-sign-out>
  </amplify-authenticator>
</template>

<script>
import { API } from 'aws-amplify';
import { createTodo } from './graphql/mutations';
import { listTodos } from './graphql/queries';
import { onCreateTodo } from './graphql/subscriptions';

export default {
  name: 'App',
  async created() {
    this.getTodos();
    this.subscribe();
  },
  data() {
    return {
      name: '',
      description: '',
      todos: []
    }
  },
  methods: {
    async createTodo() {
      const { name, description } = this;
      if (!name || !description) return;
      const todo = { name, description };
      this.todos = [...this.todos, todo];
      await API.graphql({
        query: createTodo,
        variables: {input: todo},
      });
      this.name = '';
      this.description = '';
    },
    async getTodos() {
      const todos = await API.graphql({
        query: listTodos
      });
      this.todos = todos.data.listTodos.items;
    },
    subscribe() {
      API.graphql({ query: onCreateTodo })
        .subscribe({
          next: (eventData) => {
            let todo = eventData.value.data.onCreateTodo;
            if (this.todos.some(item => item.name === todo.name)) return; // remove duplications
            this.todos = [...this.todos, todo];
          }
        });
    }
  }
}
</script>

With this added now I could run yarn serve and check out the site. At this point I signed up just to have an account to use and added a todo item. Everything worked swimmingly at this point!

The final step before getting into a proper deviation from this todo example is getting the app properly published to Amplify. This is done by executing amplify add hosting. Accept Hosting with Amplify Console (Managed hosting with custom domains, Continuous deployment) and Manual deployment when prompted. Now, finally, issue the command amplify publish.
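
Those two commands together:

amplify add hosting    # choose "Hosting with Amplify Console", then "Manual deployment"
amplify publish        # build and deploy the front end along with the back end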

Boom, the todo app site is live!

thatPartWhereiDeviate

Now it’s time to get into the nitty gritty of deviations from the easy path!

New GraphQL Schema!

The schema I want to add is about building out collections for a number of data sets. The first one is a data set I routinely talk about, and yes, it is indeed centered around trains! If you’re uninterested in the trains part of the schema and more interested in the changes themselves, skip down to the "Deploying The Changes" section of the post.

Alright, describing the data model that I want starts with the minimal part: a list of railroads. This would be a list, or more specifically a table, of railroads that we can add to and collect peripheral information about. For this table I’ll add the following fields, AKA columns of data to store. I want to collect the following for a railroad:

  1. railroad name
  2. wikipedia URI
  3. map URI
  4. peripheral details of an unstructured nature
  5. founding year, month, and day of the railroad
  6. record stamp

In addition, I want to keep a list of trains – specifically named trains – that each railroad operates. This data would include:

  1. train name
  2. active – yes / no
  3. peripheral details of an unstructured type
  4. wikipedia URI
  5. route map URI
  6. time table URI
  7. train URI – i.e. like a website or something that might be dedicated to the particular train.
  8. record stamp

Deploying The Changes

Now it is time to deploy these additional database and schema changes. One of the easiest ways to make them is Amplify Studio, which has a great data modeling section that assembles the schema and ensures it is usable, and then lets you deploy that new schema, with its database changes, to the active service!

Navigate to the interface from the Amplify Console.

Opening Amplify Studio with the Launch Studio button.

Once I navigated to the interface I built out the additional tables like this.

Building a Schema with Amplify Studio

Then just click on Save and Deploy and then Deploy on the following modal dialog and Amplify will deploy the AppSync schema changes.

Amplify Studio Save & Deploy

With that deployed, in the same Amplify Studio interface I then clicked on the GraphQL API tab section and then on the Resource name for mywhateverproject to open up the AppSync Console.

Opening an AppSync Schema.

Further down toward the bottom of the schema I can find and confirm my types are in and ready for use. The Todo type is still there, since I didn’t really need to remove it yet and it acts as a good working reference. More importantly, you can see the other types and the correlative relationship that was added via the Amplify data modeling interface.

...more schema

type Todo @aws_iam @aws_api_key {
  id: ID!
  name: String!
  description: String
  _version: Int!
  _deleted: Boolean
  _lastChangedAt: AWSTimestamp!
  createdAt: AWSDateTime!
  updatedAt: AWSDateTime!
}

type Train @aws_iam @aws_api_key {
  id: ID!
  train_name: String!
  active: Boolean!
  peripheral_details: AWSJSON
  wikipedia_uri: AWSURL
  route_map_uri: AWSURL
  timetable_uri: AWSURL
  train_uri: AWSJSON
  record_stamp: AWSTimestamp
  _version: Int!
  _deleted: Boolean
  _lastChangedAt: AWSTimestamp!
  createdAt: AWSDateTime!
  updatedAt: AWSDateTime!
  railroads(
    railroadID: ModelIDKeyConditionInput,
    filter: ModelRailroadTrainFilterInput,
    sortDirection: ModelSortDirection,
    limit: Int,
    nextToken: String
  ): ModelRailroadTrainConnection @aws_iam @aws_api_key
}

type Railroad @aws_iam @aws_api_key {
  id: ID!
  railroad: String!
  wikipedia_ur: AWSURL
  map_uri: AWSURL
  peripheral_details: AWSJSON
  founding_year: Int
  founding_month: Int
  founding_day: Int
  record_stamp: AWSTimestamp
  _version: Int!
  _deleted: Boolean
  _lastChangedAt: AWSTimestamp!
  createdAt: AWSDateTime!
  updatedAt: AWSDateTime!
  RailroadTrains(
    trainID: ModelIDKeyConditionInput,
    filter: ModelRailroadTrainFilterInput,
    sortDirection: ModelSortDirection,
    limit: Int,
    nextToken: String
  ): ModelRailroadTrainConnection @aws_iam @aws_api_key
}

...more schema

The relationship can be seen via the object connections here, in the ModelRailroadTrainConnection and the keys associated with it.

Getting the local repository updated with these changes, which were just made out of band via the Amplify Studio interface, requires two quick commands, both of which are displayed on the GraphQL screen in the studio. It’s best to grab the command from there, since it has the appId already included as a copypasta option on the screen, and it looks like this.

amplify pull --appId app-id-which-is-in-studio --envName dev

Executing that pulls the remote GraphQL schema down into the local schema.graphql file under amplify/backend/api/. Next, run this command.

amplify update api

This synchronizes everything and also prompts for code generation, so the client-side GraphQL code is ready for use whenever I build out the user interface later.
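
If that codegen prompt gets skipped, my understanding is it can also be run on its own later with the CLI’s codegen command:

amplify codegen    # regenerate the client-side queries, mutations, and subscriptions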

Next Up

Some of the things I’ll cover in the next article, as I continue this effort, are what all of these steps have done from a project perspective. As one can see, some things might be a little confusing at this point, for example the schema shown above in AppSync, because after the synchronization the local schema.graphql file shows this.

type Train @model @auth(rules: [{allow: public}]) {
  id: ID!
  train_name: String!
  active: Boolean!
  peripheral_details: AWSJSON
  wikipedia_uri: AWSURL
  route_map_uri: AWSURL
  timetable_uri: AWSURL
  train_uri: AWSJSON
  railroads: [RailroadTrain] @connection(keyName: "byTrain", fields: ["id"])
  record_stamp: AWSTimestamp
}

type Railroad @model @auth(rules: [{allow: public}]) {
  id: ID!
  railroad: String!
  wikipedia_ur: AWSURL
  map_uri: AWSURL
  peripheral_details: AWSJSON
  founding_year: Int
  founding_month: Int
  founding_day: Int
  record_stamp: AWSTimestamp
  RailroadTrains: [RailroadTrain] @connection(keyName: "byRailroad", fields: ["id"])
}

type Todo @model @auth(rules: [{allow: public}]) {
  id: ID!
  name: String!
  description: String
}

type RailroadTrain @model(queries: null) @key(name: "byRailroad", fields: ["railroadID", "trainID"]) @key(name: "byTrain", fields: ["trainID", "railroadID"]) @auth(rules: [{allow: public}]) {
  id: ID!
  railroadID: ID!
  trainID: ID!
  railroad: Railroad! @connection(fields: ["railroadID"])
  train: Train! @connection(fields: ["trainID"])
}

Obviously this looks very different from one place to the other, so I’ll discuss that and other things. Subscribe (over on the right side of the blog), follow (@Adron), and you’ll be updated when the next post is published.

SITREP (Situational Report)

Alright, what have I wrapped up so far? Here’s a bullet list of the things finished:

  • Vue.js App created.
  • Vue.js Form put together for todo entries.
  • Authentication added with Cognito.
  • An AppSync GraphQL API created and published.
  • Additional types added to the AppSync GraphQL API.
  • Updates and code regenerated for our API.

What’s next to do? This is the short list; after that there will be much more to do!

  • Get the Vue.js app spiffed up: put some nice design together for it, add some reasonable CSS, graphics, etc. to make the interface pop. Above all, it needs to feel usable and be usable.
  • Add the forms for each of the respective interfaces to manipulate the data. This could amount to lots of different things: adding navigation, routing, other menus, and the like.
  • Add screens that can provide some nice reports on the data that I’m putting together. For example, it’d be nice to get a list of the actual named trains or the railroads and have their images, maps, and other respective elements shown.

…and the list goes on. Until next session, enjoy your thrashing code! 🤘🏻

References

Setup Postgres, and GraphQL API with Hasura on Azure

Key Technologies: Hasura, Postgres, Terraform, Docker, and Azure.

I created a data model to store railroad systems, services, schedules, time points, and related information, detailed in the schema from “Beyond CRUD n’ Cruft Data-Modeling” with a few tweaks. The original I’d created for Apache Cassandra; I have since switched to Postgres, which gives me the option of primary and foreign keys, relations, and the related connections for the model.

In this post I’ll use that schema to build out an infrastructure as code solution with Terraform, utilizing Postgres and Hasura (OSS).

Prerequisites

Docker Compose Development Environment

For the Docker Compose setup I just placed the files in the root of the repository: add a docker-compose.yaml file and then add services. The first service I set up was the Postgres/PostgreSQL database, using the standard Postgres image on Docker Hub. I opted for version 12, I want it to always restart if it gets shut down or crashes, and the last of the obvious settings is the port, which maps 5432 to 5432.

For the volume, since I might want to back up or tinker with the data, I set the db_data location to my own Codez directory. I tend to set up all my local databases like this in case I need to debug things locally.

The POSTGRES_PASSWORD is fed from an environment variable, thus the ${PPASSWORD} syntax. This way no passwords go into the repo. Then I can load the environment variable via a standard export PPASSWORD="theSecretPasswordHere!" line in my system startup script or via other means.

services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/Users/adron/Codez/databases
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}
    ports:
      - 5432:5432

For the db_data volume, toward the bottom I add the key value setting to reference it.

volumes:
  db_data:

Next I added the GraphQL solution with Hasura. The v1.1.0 image probably needs to be updated (I believe we’re on version 1.3.x now), so I’ll do that soon, but I got the example working with v1.1.0. The ports are mapped 8080 to 8080, the service depends on the postgres service already detailed, and restart is also set to always, just like the postgres service. Finally, there are two environment variables for the container:

  • HASURA_GRAPHQL_DATABASE_URL – this variable is the base Postgres connection string.
  • HASURA_GRAPHQL_ENABLE_CONSOLE – this variable enables the console user interface. We’ll definitely want this for the development environment; in production I’d likely want it turned off.
  graphql-engine:
    image: hasura/graphql-engine:v1.1.0
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:logistics@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"

At this point the commands to start this are relatively minimal, but in spite of that I like to create start and stop shell scripts. They simply look like this:

Starting the services.

docker-compose up -d

For the first execution of the services you may want to skip the -d and instead watch the startup just to become familiar with the events and connections as they start.
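
That is simply:

docker-compose up    # foreground mode: Ctrl+C stops the services and the logs stream live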

Stopping the services.

docker-compose down

🚀 That’s it for the basic development environment; we’re launched and ready for development. With the services started, navigate to http://localhost:8080/console to start working with the user interface. I’ll have more details on the “Beyond CRUD n’ Cruft Data-Modeling” swap to Hasura and Postgres in an upcoming blog post.
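
A quick sanity check that the engine is up before opening the console, using the health endpoint Hasura exposes:

curl http://localhost:8080/healthz    # should return OK once graphql-engine is ready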

For full syntax of the docker-compose.yaml check out this gist: https://gist.github.com/Adron/0b2ea637b5e00681f4d62404805c3a00

Terraform Production Environment

For the production deployment of this stack I want to deploy to Azure, use Terraform for infrastructure as code, and the Azure database service for Postgres while running Hasura for my API GraphQL tier.

For the Terraform files I created a folder and added a main.tf file. I generally create a folder to work in, to keep the state files and initial prototyping of the infrastructure in a singular place. Eventually I’ll set up a location to store the state and fully automate the process through a continuous integration (CI) and continuous delivery (CD) pipeline. For now though, it’s just a singular folder to keep it all in.

For this I know I’ll need a few variables and add those to the file. These are variables that I’ll use to provide values to multiple resources in the Terraform templating.

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

One other item I’ll want, so that it’s a little easier to verify my Hasura connection information, is an output that looks like this.

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

Let’s take this one apart a bit. There are a lot of concatenated and interpolated variables being wedged together here. This is basically the Postgres connection string that Hasura will need to make a connection. It includes the username and password and all of the pertinent parsed and string-escaped values. Note specifically the %40 between the ${var.username} and ${azurerm_postgresql_server.logisticsserver.name} variables: that is the URL-encoded @ sign required inside the Azure Postgres username (user@servername), while the later @ sign, separating the credentials from the host, is left unescaped. When constructing this connection string it is very important to be mindful of all these specific values being connected together. But, I did the work for you, so it’s a pretty easy copy and paste now!

Next I’ll need the Azure provider information.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

Note that there is a features block that is just empty; it is now required for the provider to declare this block even when it’s empty.

Next up is the resource group that everything will be deployed to.

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

Now the Postgres server itself. Note the location and resource_group_name simply map back to the resource group. One thing I found a little confusing is the "name" key value pair in this resource, as I wasn’t sure if it was a Terraform name, a resource name tag, or the server name itself. It is the server name, which I’ve assigned var.server. The next value, sku_name = "B_Gen5_2", is the Azure SKU designator (Basic tier, Gen5 hardware, 2 vCores), which is a bit cryptic. More on that in a future post.

After that, the storage is set to 5120 MB, which if I RTFM’ed correctly is 5 gigs of storage. For what I’m doing this will be fine. The backup is set up for 7 days of retention. This means I’ll be able to fall back to a backup from any of the last seven days, but after 7 days the oldest backup is rolled off and deleted to make space for the newest one. The geo_redundant_backup_enabled setting is set to false because, with Postgres’ excellent reliability and my desire not to pay for that extra reliability insurance, I don’t need geographic redundancy. Last, I set auto_grow_enabled to true, albeit I do need to determine the exact flow of logic this takes for this particular implementation and deployment of Postgres.

The last chunk of details for this resource is simply the username and password, which come from variables that in turn come from environment variables, to keep the actual username and password out of the repository. The last two bits enable SSL enforcement and set the version of Postgres to 9.5.

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

Since the database server is all set up, now I can confidently add an actual database to that server. Here the resource_group_name pulls from the resource group resource and the server_name pulls from the server resource. The name, being the database name itself, I derive from a variable too. Then the character set is UTF8 and collation is set to US English, which are generally standard settings for Postgres installed for use within the US.

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

The next thing I discovered, after some trial and error and a good bit of searching, is the Postgres-specific firewall rule. This appears to be specific to the Postgres service in Azure; through a number of trials and many errors I attempted to use the standard firewalls and firewall rules available in virtual networks. My understanding now is that the Postgres servers exist outside of that paradigm and consequently have their own firewall rules.

This firewall rule basically attaches the firewall to the resource group and then the server itself, and allows internal access between the Postgres server and the Hasura instance. Setting both the start and end IP address to 0.0.0.0 is the Azure convention for allowing access from Azure services only, rather than opening an actual public IP range.

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

The final step is setting up the Hasura instance to work with the Postgres server and the now-available designated database.

To set up the Hasura instance I decided to go with Azure’s container service. It provides a relatively inexpensive, easier, and more concise way to stand up the server than building an entire VM or full Kubernetes environment just to run a singular instance.

The first section sets up a public IP address, which of course I’ll need to change as the application is developed and I’ll need to provide an actual secured front end. But for now, to prove out the deployment, I’ve left it public, set up the DNS label, and set the OS type.

In the next section of this resource I outline the container details. The name of the container can be pretty much whatever you want; it’s your designator. The image however is specifically hasura/graphql-engine. I’ve set the CPU and memory pretty low, at 0.5 and 1.5 respectively, as I don’t suspect I’ll need a ton of horsepower just to test things out.

Next I set the available port to port 80. Then the environment variable HASURA_GRAPHQL_SERVER_PORT is set to that port, and HASURA_GRAPHQL_ENABLE_CONSOLE to true so the console is served there. Then finally, as a secure environment variable, comes that wild concatenated, interpolated connection string that I also have set up as an output variable (again specifically for testing): HASURA_GRAPHQL_DATABASE_URL.

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

With all that setup it’s time to test. But first, just for clarity here’s the entire Terraform file contents.

provider "azurerm" {
  version = "=2.20.0"
  features {}
}

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"
  location = "westus2"
}

resource "azurerm_postgresql_server" "logisticsserver" {
  name = var.server
  location = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  sku_name = "B_Gen5_2"

  storage_mb                   = 5120
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = true

  administrator_login          = var.username
  administrator_login_password = var.password
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
}

resource "azurerm_postgresql_firewall_rule" "pgfirewallrule" {
  name                = "allow-azure-internal"
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"
  dns_name_label      = "logisticsdatalayer"
  os_type             = "Linux"


  container {
    name   = "hasura-data-layer"
    image  = "hasura/graphql-engine"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT = 80
      HASURA_GRAPHQL_ENABLE_CONSOLE = true
    }
    secure_environment_variables = {
      HASURA_GRAPHQL_DATABASE_URL = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }

  tags = {
    environment = "datalayer"
  }
}

variable "database" {
  type = string
}

variable "server" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type = string
}

output "hasura_url" {
  value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}

To run this, similarly to how I set up the dev environment, I’ve set up startup and shutdown scripts. The startup script, named prod-start.sh, has the following commands. Note that $PUSERNAME and $PPASSWORD are derived from environment variables, whereas the other two values are just inline.

cd terraform

terraform apply -auto-approve \
    -var 'server=logisticscoresystemsdb' \
    -var 'username='$PUSERNAME'' \
    -var 'password='$PPASSWORD'' \
    -var 'database=logistics'
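
The matching shutdown script (I’ll assume it’s named prod-stop.sh) is just the destroy counterpart, passing the same variables:

cd terraform

terraform destroy -auto-approve \
    -var 'server=logisticscoresystemsdb' \
    -var 'username='$PUSERNAME'' \
    -var 'password='$PPASSWORD'' \
    -var 'database=logistics'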

For the full Terraform file check out this gist: https://gist.github.com/Adron/6d7cb4be3a22429d0ff8c8bd360f3ce2

Executing that script gives me results that, if everything goes right, look something like this.

./prod-start.sh 
azurerm_resource_group.adronsrg: Creating...
azurerm_resource_group.adronsrg: Creation complete after 1s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg]
azurerm_postgresql_server.logisticsserver: Creating...
azurerm_postgresql_server.logisticsserver: Still creating... [10s elapsed]
azurerm_postgresql_server.logisticsserver: Still creating... [20s elapsed]


...and it continues.

Do note that this process takes a while; it’s completely normal for it to take ~3 or more minutes. Once the server build is done, a lot of the other activities start to take place very quickly. When it’s all done, toward the end of the output I get my hasura_url output variable so I can confirm it is indeed put together correctly! Now that this is performed I can take the next steps: remove that output variable, start to tighten security, and more, which I’ll detail in a future blog post once more of the application is built.

... other output here ...


azurerm_container_group.adronshasure: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [50s elapsed]
azurerm_container_group.adronshasure: Still creating... [50s elapsed]
azurerm_postgresql_database.logisticsdb: Creation complete after 51s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.DBforPostgreSQL/servers/logisticscoresystemsdb/databases/logistics]
azurerm_container_group.adronshasure: Still creating... [1m0s elapsed]
azurerm_container_group.adronshasure: Creation complete after 1m4s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.ContainerInstance/containerGroups/adrons-hasura-logistics-data-layer]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

hasura_url = postgres://postgres%40logisticscoresystemsdb:theSecretPassword!@logisticscoresystemsdb.postgres.database.azure.com:5432/logistics

Now if I navigate over to logisticsdatalayer.westus2.azurecontainer.io I can view the Hasura console! But where in the world is this fully qualified domain name (FQDN)? Well, the quickest way to find it is to navigate to the Azure portal and take a look at the details page of the container itself. In the upper right the FQDN is available as well as the IP that has been assigned to the container!
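
Alternatively, a rough Azure CLI sketch can pull the same details without leaving the terminal, using the container group and resource group names from the Terraform above:

az container show \
  --resource-group adrons-rg \
  --name adrons-hasura-logistics-data-layer \
  --query "{fqdn: ipAddress.fqdn, ip: ipAddress.ip}" -o table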

Navigating to that FQDN URI will bring up the Hasura console!

Next Steps

From here I’ll take up next steps in a subsequent post. I’ll get the container secured, map the user interface or CLI or whatever the application is that I build lined up to the API end points, and more!

References

Sign Up for Thrashing Code

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general I stream regularly on Twitch at https://twitch.tv/adronhall, post the VOD’s to YouTube along with entirely new tech and metal content at https://youtube.com/c/ThrashingCode.

Development Machine Environment Build & Language Stack Installations – Browsers & IDE’s

In this video I put together some basic IDE’s and browsers I install. In the case of browsers that includes more than a few. For the case of IDE’s it’s my standard arsenal of Jetbrains IDE’s and Visual Studio Code. For the previous step in this series, check out the Base OS Load post.

Additional Notes

Here’s the full list of IDE’s I installed.

IDEs

Browsers

That’s it for now. However, if you’re interested in joining me for next steps, language stack setup, and more, in addition to writing some JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and all sorts of coding, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/c/ThrashingCode. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

For more blogging, I write on https://compositecode.blog, and for more details about open source projects and related efforts I work on, sign up for the Thrashing Code Newsletter!

Wat?! Ugh Terraform State is Complicated Sometimes! ‘url has no host’

As I’ve been working through the series (parts 1, 2, 3, and 4 so far), a number of issues have come up (like this one), and this seemed like a good one to post too. I started stumbling into this error around destroying images via terraform destroy. Now, I’m not just creating them with terraform apply and then trying to destroy them. I’m creating the images via Packer, then importing the state (see part 4 of the series for details), and then, when I clean up the environment with terraform destroy, it shows this error.

[…lost image…]

I had taken the default azurerm_image resource configuration from the Hashicorp docs site and tweaked it just a little bit.

[code language="bash"]
resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

The thing causing the error is the "{blob_uri}" value. In general, I’d assume this should be pulled or derived from the existing image created by Packer when it is imported, but the syntax above just doesn’t cut it for actions post-import of the image state.

Time Consuming Troubleshooting

Troubleshooting and confirming this issue takes a long time. Create the image, which is ~15-20 minutes, then run an apply. The apply, even when most of the creation is minimized to imports and the few other things that get created, takes several minutes in Azure. Then a destroy takes several minutes. So all in all, one test cycle is about ~30 minutes.
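
Roughly, one full verification cycle looks like this (the Packer template file name here is just a placeholder for illustration):

[code language="bash"]
packer build basedse.json            # bake the image, ~15-20 minutes
terraform apply -auto-approve        # imports plus the remaining creates, several minutes
terraform destroy -auto-approve      # tear it back down, several more minutes
[/code]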

The First Tricky Fix

I went through several iterations of attempting to get that part of the state pulled in during the import. That didn’t work out so well. What did work, though, was the simplest of actions: I deleted the blob_uri = "{blob_uri}" line! Then, after adding the state, terraform apply gave me a full cycle of applied changes, and on terraform destroy everything was wiped out as expected!

Problem Fixed, Problem Created

On to the next things! But oh wait, there is another problem. Now if I set up a VM to be created based off of the image, the state doesn’t have the blob_uri. Great, back to square one, right? Not entirely. Subscribe and keep reading, and I’ll have the next steps for this coming real soon!

Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted set up for my development workspace is a DataStax Enterprise cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built we can use to add nodes to our cluster and set up some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I set up in the second part of this series. It looks, with the installation steps taken out, just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked "inline" I set up the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed; I opted for the required version, 1.8. For more information about OpenJDK check out this material:

The next thing I needed to do was get everything set up so that I could use this Azure image to build an actual virtual machine. Since this process is built outside of the main Terraform build process, I need to import the various assets created for Packer image creation, along with the actual Packer images. By importing these resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but walk through the process below and it should make more sense. If it is still confusing do let me know: ping me on Twitter @adron and I’ll elaborate or edit things so they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out part 1 of this series, I set up the Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (Hashicorp Configuration Language), with a little bit of Bash as glue here and there. Then in part 2 and part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process are now in two entirely different, disconnected states. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform Infrastructure doesn’t know the Packer Images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the various things that store the images. To import these resources into the Terraform state, before doing an apply, run the terraform import command.

In order to get all of the resources we need to operate on and build images from, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq 'map({name: "basecassandra", id})' | jq -r '.[0].id')
BASEDSE=$(az image list | jq 'map({name: "basedse", id})' | jq -r '.[0].id')
[/code]

Breaking down the jq commands above, the following actions are taken. First, the Azure CLI command az image list is issued, which is then piped | into the jq command jq 'map({name: "theimagenamehere", id})'. This takes the results of the Azure CLI command and pairs the given image name with an id for each entry. That is then piped into another command, jq -r '.[0].id', which returns just the value of the first id. The -r switch tells jq to return the raw data, without enclosing double quotes.

I also needed to import the resource group that all of these are in. Following a similar style of piping one command’s results into another, I issued this command to get the resource group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state.
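
Pulled out of the prose, that third lookup looks like this:

[code language="bash"]
# Grab the ID of the resource group the Packer-built images live in
RG_IMPORT=$(az group show --name adronsimages | jq -r '.id')
[/code]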

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images and the resource group. I added the images in an images.tf file, and the resource group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into Terraform’s state, which we can then issue further Terraform commands and applies from.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

terraform-imports

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform, that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be, but it was taking too long for the next steps to get written up. Fear not, they’re on the way! In the coming post I’ll cover more of the other resource elements we’ll need to import, what is next for getting a virtual machine built off of the now-available image, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.