Another VMware Error Resolved with Hackiness!

I use VMware Fusion for almost all of my VMs these days, especially the VMs I use for streaming on Twitch while I code. I always like to have good, clean, out-of-the-box loads of the operating systems I’m going to use. That way, whatever I’m doing is in the easiest state for anyone viewing to follow along and replicate.

With this great power, however, comes great hackiness in keeping these images in a good state. Recently I’d been coding away on a Go project and boom, I killed the VM, likely because I was also running multiple videos processing in Adobe products, including Photoshop, and had HOI4 running in the background. That’s a strategy game, if you’re curious. The machine was getting a bit overloaded, but that’s fairly standard practice for me. Anyway, the VM just keeled over.

I tried restarting it and got this message.

Something something proprietary to the whatever, and then:

“Cannot open the disk ‘xxxxx.vmdk’ or one of the snapshot disks it depends on.”

I started searching and didn’t find anything useful, so I went about trying some random things. It’s really crazy too, because usually I’d just forget it and trash the image. But ya see, I had code I’d actually NOT COMMITTED YET!!! I kept trying, and finally came to a solution that, oddly enough, left things working as if nothing odd had happened at all!

I opened up the contents of the virtual machine file and deleted the *.lck files, which are lock files the VM leaves behind when it dies without releasing them. I was kind of just frustrated at that point, but hey, it worked like a charm!

So if you run into this problem, you may want to just nuke the *.lck files and try to kick-start the VM that way.

First Quarter Workshops, Code Sessions, & Twitch Streaming Schedule

I present the details for upcoming workshops, sessions, and streams for the first quarter of 2021, covering January through March. In late March I’ll post an updated list of new content coming and existing content continuing.

Thrashing Code Channel on YouTube for the VODs and VLOGs.

Thrashing Code Channel on Twitch for the live streams.

Hasura HQ on Twitch and YouTube too.

Composite Thrashing Code Blog, where I put together articles about the above content plus list events, metal, and a lot more. I also mirror it here for those who like to read on Medium.


Workshops are an hour or more long, have breaks, and follow a mostly set curriculum. They’ll often have collateral available before and after the workshop, such as slide decks, documentation, and often a code repository or two. The following are the workshops I’ve got scheduled for the first quarter of 2021.

January 28th @ 12:00-14:00 PT Relational Data Modeling for GraphQL – This will be a data modeling workshop focused on getting a GraphQL API up and running, built around a relational data model. In this workshop I’ll show how to do this using dbdiagram, Jetbrains DataGrip, and the Hasura API & CLI tooling. The ideas, concepts, and axioms I lay out in this workshop are not limited or tightly coupled to these tools; I use them simply to provide a quick and effective way to get deep into the concepts and ideas and move beyond them to actual implementation within the workshop. Thus, the tools aren’t must-haves, but they will help you follow along with the workshop.

February 17th @ 14:00-16:00 Relational Data Modeling for GraphQL – See above description. This will be a live rerun of the workshop, so a new group can join in live, ask questions, and work through the material.

February 18th @ 12:00-14:00 PT Introduction to GraphQL – In this introduction to GraphQL, I’ll specifically cover the client-side usage of GraphQL queries, mutations, and subscriptions. Even with the focus on client-side queries, I will provide a tutorial during this workshop on setting up our sample server-side GraphQL API using Hasura. From this, an example will be provided of basic data modeling, database schema and design creation in relation to our GraphQL entities, and how all of this connects. From there I’ll add some data, discuss the pros and cons of various structures in relation to GraphQL, and then get into the semantic details of queries, mutations, and subscriptions in GraphQL with our Hasura API.

March 23rd @ 14:00-16:00 Introduction to GraphQL – See above, a rerun of the workshop.

March 24th @ 12:00-14:00 PT Relational Data Modeling for GraphQL – See above, a rerun of the workshop.

One Offs

These sessions will be on a set list of topics provided at the beginning of the event. They’ll also include various collateral, such as a GitHub repository and pertinent notes that detail what I’m showing in the video.

January 11th @ 10am~Noon Join me as I show you how I set up my Hasura workflow for the Go language stack. In this session I’ll delve into the specifics and the IDE, and work toward building out a CLI application that uses Hasura and GraphQL as the data store. Join me for some coding, environment setup, workflow, and more. This is a session where I’ll be happy to field questions too (as with most of my sessions), so if you’ve got questions about Hasura, my workflow, Go, CLI development, or anything else I’m working on, join in and toss a question into chat!

February 1st @ 10am~Noon Join me as I broach the GraphQL coding topic, similar to the one-off coding session on January 11th, but this time with JavaScript – a more direct and native way to access, use, and benefit from GraphQL! This session will range from server-side Node.js coding to some client-side coding too. We’ll talk about the various ways to make calls and a number of other subjects on this topic. As always, dive in and AMA.

Coding Session Series

These sessions may vary from day to day, but will be centered around a particular project or effort, or just around learning something new about a bit of code, how something works, or other technologically related explorations. For example, in March I’m kick-starting #100DaysOfCode: 1 hour a day, for 100 days. What will we learn? Who knows, but it’ll be a blast hacking through it all!

March TBD @ TBD, but in March and repeating daily on weekdays! Day 1 and ongoing of #100DaysOfCode! Yup, I’ve decided to jump on the 100 Days of Code train! The very general scheduling of topics I intend to cover so far goes like this: algorithms, data/storage, vuejs, and then building a project. The project isn’t decided yet, nor the algorithms, nor the specific data and storage topics, but that’ll be the general flow. More details to come in late February and early March!

Tuesday, January 12th @ 10:00 PT on Hasura HQ, repeating weekly on Tuesdays, I’ll be putting together a full stack app, learning new parts of doing so, and more, using the Hasura tooling along with the Go language and stack. Join me for some full stack app dev; we’ll be getting, over time, deep into all the things!

Wrap Up TLDR; && Full Schedule

That’s the material I’m putting together for the Thrashing Code and Hasura channels on Twitch & YouTube. I hope you’ll enjoy it, get value out of it all, or just join me to hang out on stream and in workshops. Give the channel a follow on Twitch, and if you ever miss a live session there, within ~24 hours or shortly thereafter I’ll have the stream posted as a VOD on the Thrashing Code YouTube Channel, which you can subscribe to for all the latest updates on the above videos and more!

Hasura CLI Installation & Notes

This post just elaborates on the existing documentation here with a few extra notes and details about installation. This can be helpful when determining how to install, deploy, or use the Hasura CLI for development, continuous integration, or continuous deployment purposes.


There are three primary operating system binary installations: Windows, MacOS, and Linux.


Linux

In the shell, run the following curl command to download and install the binary.

curl -L | bash

This command installs the CLI to /usr/local/bin, but if you’d like to install it to another location you can add the INSTALL_PATH variable and set it to the path that you want.

curl -L | INSTALL_PATH=$HOME/bin bash

What this script does, in short, follows these steps: the latest version is determined, then downloaded with curl, and then execution permissions are assigned to the binary. It also does some checks for OS version, type, and distro, whether the CLI already exists, and other validations. To download the binary, and the source if you want a particular version of the binary, check out the manual steps listed later in this post.


MacOS

For MacOS the same steps are followed as for Linux, as the installation script steps through the same procedures.

curl -L | bash

As with the previous Linux installation, the INSTALL_PATH variable can also be set if you want to install the binary to another path.


Windows

Windows is a different installation, as one might imagine. The executable (cli-hasura-windows-amd64.exe) is available on the releases page. This leaves it up to you to determine how exactly you want the executable to be called. It’s ideal, in my opinion, to download it and put it into a directory that you’ll add to your path. You’ll also want to rename the executable from cli-hasura-windows-amd64.exe to hasura.exe, if for no other reason than it’s easier to type and it’ll match the general examples provided in the docs.

To set up a path on Windows pointing to the directory with the executable, open up the environment variables dialog by following Start > System > System Settings > Environment Variables. Scroll down until the PATH is viewable and click the edit button to edit it. Be sure to append the new entry like c:\pathWhatevsAlreadyHere;c:\newPath\directory\where\hasura\executable\is\. Save that and launch a new console, and that new console should have the executable available.

Manual Download & Installation

You can also navigate directly to the releases page and get the CLI there. All of the binaries are compiled and ready for download, along with a source code zip file of the particular builds for those binaries.

Installation via npm

The CLI is available via an npm package as well. The package that wraps the executable is independently maintained by members of the community. If you want to pin a set Hasura CLI version to a project, using npm is a great way to do so.

For example, if you want to install the Hasura CLI in your project as a development dependency at a specific version, use the following command to get version 1.3.0.

npm install --save-dev hasura-cli@1.3.0

For version 1.3.1 it would be npm install --save-dev hasura-cli@1.3.1 for example.

The dev dependencies in the package.json file of your project would then look like the following.

"devDependencies": {
  "hasura-cli": "^1.3.0"
}

Another way to install the CLI with npm is to install it globally, with the same format but swapping --save-dev for --global. The following installs the latest version. You can add @version to the end to get a specific version installed globally too, just as shown with the dev install previously.

npm install --global hasura-cli


Using the npm option is great if you’re working with JavaScript, have Node.js installed on the dev and other machines, and need the CLI available on those particular machine instances. If not, I’d suggest installing via one of the binary options, especially if you’re creating something like a slimmed-down Alpine Linux container to automate some Hasura CLI executions during a build process. There’s a lot of variance in how you’d want to install and use the CLI, beyond just installing it to run the commands manually.

If you’re curious about any particular installation scenarios, ask me @Adron and I’ll answer there and elaborate here on this post!

Happy Hasura CLI Hacking! 🤘


Simple Go HTTP Server Starter in 15 Minutes

This is a quick starter to show some of the features of Go’s http library (sample repo here). The docs for the library are located here if you want to dig in deeper. In this post I cover putting together a simple HTTP server and setting a status code, which provides the basic elements you need to further build out a fully featured server. This post is paired with a video I’ve put together, included below.

00:15 Reference to the Github Repo where I’ve put the code written in this sample.
00:18 Using Goland IDE (Jetbrains) to clone the repository from Github.
00:40 Creating code files.
01:00 Pasted in some sample code, and review the code and what is being done.
02:06 First run of the web server.
02:24 First function handler to handle the request and response of an HTTP request and response cycle.
04:56 Checking out the response in the browser.
05:40 Checking out the same interaction with Postman. Also adding a header value and seeing it returned via the browser & related header information.
09:28 Starting the next function to provide further HTTP handler functionality.
10:08 Setting the status code to 200.
13:28 Changing the status code to 500 to display an error.

Getting a Server Running

I start off the project by pulling an empty repository that I had created before starting the video. In this I use the Jetbrains Goland IDE to pull this repository from Github.


Next I create two files: main.go and main_test.go. We won’t use the main_test.go file right now, but in a subsequent post I’ll put together some unit tests specifically to test the HTTP handlers we’ll create. Once those files are created, I paste in some code that has a basic handler using an anonymous function, provides some static file hosting, and then sets up the server and starts listening. I make a few tweaks, outlined in the video, and execute a first go with the following code.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Welcome to my website!")
    })

    http.ListenAndServe(":8080", nil)
}

When that executes, opening a browser to localhost:8080 will bring up the website which then prints out “Welcome to my website!”.


Adding a Function as an HTTP Handler

The next thing I want to add is a function that can act as an HTTP handler for the server. To do this, create a function just like any other function in Go. For this example, the function I built includes several print line calls to the ResponseWriter, with request properties and a string passed in.

func RootHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "This is my content.")
    fmt.Fprintln(w, r.Header)
    fmt.Fprintln(w, r.Body)
}

In the func main I changed out the root handler to use this newly created handler instead of the anonymous function that it currently has in place. So swap out this…

http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Welcome to my website!")
})

with this…

http.HandleFunc("/", RootHandler)

Now the full file reads as shown.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", RootHandler)
    http.ListenAndServe(":8080", nil)
}

Now executing this and navigating to localhost:8080 will display the following.


The string “This is my content.” is displayed first, then the header and body respectively. The body, we can see, is empty, just enclosed in two braces {}. The header is more interesting. It is returned as a map type, and between the brackets [] it shows accept, accept-encoding, accept-language, user-agent, and other header information that was passed.

This is a good thing to explore further: check out how to view or set the values associated with headers in HTTP responses, requests, and their related metadata. To go a step further and get into this metadata, a tool like Postman comes in handy. I open the tool up, set up a GET request, and add an extra header value just to test things out.


Printing Readable Body Contents

For the next change I wanted a better printout of the body contents, as the previous display was just attempting to print the body in an unreadable way. In this section I use the ioutil.ReadAll function to read the body into a readable format. It takes the body and returns the contents into a body variable and, pending no error, the body variable is cast as a string and printed out to the ResponseWriter on the last line. With the changes, the RootHandler function reads like this.

func RootHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "This is my content.")
    fmt.Fprintln(w, r.Header)

    defer r.Body.Close()

    // ioutil.ReadAll requires "io/ioutil" in the imports.
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        fmt.Fprintln(w, err)
    }
    fmt.Fprintln(w, string(body))
}

If the result is then requested using Postman again, the results now display appropriately!


Response Status Codes!

HTTP status codes fit into a set of ranges for various categories of responses. The most common code is of course the success code, which is 200 “Status OK”. Another common one is status code 500, which is a generic catch-all for “Server Error”. The ranges are as follows:

  • Informational responses (100–199)
  • Successful responses (200–299)
  • Redirects (300–399)
  • Client errors (400–499)
  • and Server errors (500–599)

For the next function, to get a working example of how to set this status code, I added the following.

func ResponseExampleHandler(w http.ResponseWriter, r *http.Request) {
    // Manually set the 200 Status OK code.
    w.WriteHeader(http.StatusOK)
    fmt.Fprintln(w, "Testing status code. Manually added a 200 Status OK.")
    fmt.Fprintln(w, "Another line.")
}

With that, add a handler to the main function.

http.HandleFunc("/response", ResponseExampleHandler)

Now we’re ready to try that out. In the upper right of Postman, the 200 status is displayed. The other data is shown in the respective body & header details of the response.


Next up, let’s just write a function specifically to return an error. We’ll use the standard old default 500 status code.

func ErrorResponseHandler(w http.ResponseWriter, r *http.Request) {
    // Set the generic 500 Internal Server Error status code.
    w.WriteHeader(http.StatusInternalServerError)
    fmt.Fprintln(w, "Server error.")
}

Then in main, as before, we’ll add the http handle for the function handler.

http.HandleFunc("/errexample", ErrorResponseHandler)

Now if the server is run again and an HTTP request is sent to the endpoint, the status code changes to 500 and the message “Server error.” displays on the page.



That’s a quick intro to writing an HTTP server with Go. From here, we can take many next steps, such as writing tests to verify the function handlers or setting up a Docker image in which to deploy the server itself. In subsequent blog entries I’ll write up just those and many other step-by-step procedures. For now, a great next step is to expand on this by trying out the different functions and features of the http library.

That’s it for now. However, if you’re interested in joining me to write some JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch and post the VODs to YouTube along with entirely new tech and metal content. Feel free to check out a coding session, ask questions, interject, or just come and enjoy the tunes!

Creating a Go Module & Writing Tests in Less Than 3 Minutes

In this short (less than 4 minutes) video I put together a Go module library, then setup some first initial tests calling against functions in the module.


00:12 – Writing unit tests.

00:24 – Create the first go file.

00:40 – Creating the go.mod file. When creating the go.mod file, note the command is go mod init [repopath], where repopath is the actual URI to the repo. The go.mod file that is generated would look like this.


module [repopath]

go 1.13

00:56 – Enabling Go mod integration in the Goland IDE. This dialog has an option to turn off the proxy or to go directly to the repo, bypassing the proxy. For more details on the Go proxy, check out the Go docs.

01:09 – Instead of TDD, first I cover a simple function implementation. Note that casing is very important when setting up a test in Go. The test function needs to be public, thus capitalized, and the function under test needs to be accessible, thus also capitalized. The function in the awesomeLib.go file, just to have a function that returns some value, looks like this.

package awesomeLib

func GetKnownResult() string {
    return "known"
}

01:30 – Creating the test file, awesomeLib_test.go, then creating the test. In this I also show some features of the Goland IDE that offer to generate function names prefaced with Test, along with other naming conventions that designate functions as tests, benchmarks, or related testing functionality.

package awesomeLib

import "testing"

func TestGetKnownResult(t *testing.T) {
    got := GetKnownResult()
    if got != "known" {
        t.Error("Known result not received, test failed.")
    }
}
02:26 – Running the standard go test command to execute the test. To run tests for all files within this module, such as if we’ve added multiple directories and other files, the command would be go test ./... instead.

02:36 – Using Goland to run singular or multiple tests. Goland also has capabilities to show test coverage, what percentage of functionality is covered by tests, and other code metrics around testing, performance, and telemetry.

That’s it. Now the project is ready for elaboration and is set up for a TDD, BDD, or implement-and-test style approach to development.

For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general, I stream regularly on Twitch and post the VODs to YouTube along with entirely new tech and metal content.