Terraform “Invalid JWT Signature.”

I ran into this issue recently: the “Invalid JWT Signature.” error while running some Terraform. It occurred whenever I was setting up a bucket in Google Cloud Platform to use as a back end for storing Terraform’s state. Here’s the exact error in the console.

(Screenshot: terraform-jwt-invalid.png)

My first quick searches uncovered some GitHub issues that looked curiously familiar. The first, Invalid JWT Token when using Service Account JSON #3100, was closed without any particular resolution, so it didn’t help much, though I’d be curious what the fix actually was. The second, Creating GCP project in terraform #13109, sounded much more on point. It was closer to my situation, but it looked like I should probably just start from scratch, since this setup already worked on one machine and only failed on the machine I’d shifted to. (Grumble grumble, what’d I miss?)

The Solution(s)?

In the end the message is this: if you work on multiple machines with multiple cloud accounts, you might get the keys mixed up. In this particular case I reset my NIC (you can also just reboot, which on Windows is usually the easier route), and then everything started working. In other cases, the JSON file with the gcloud/GCP keys needs to be regenerated because the old key was rolled or otherwise invalidated.

 

Terraform Changes, Provisioners, Connections, and Static Nodes

NOTE: There’s a video toward the bottom of the post with a timeline so you can navigate to the specific points in the video where I speak to different topics.

Cluster Nodes Currently

First I start out this session with a simple issue I ran into the previous night: a variable in the name of the instance resource causes the instance to be destroyed and recreated on every single execution of terraform. That won’t do, so I needed to declare each node individually, at least the first three, maybe five (cuz’ it’s best to go with 2n+1 instances in most distributed systems). So I removed the module I created in the last session and created two nodes to work with, for creating and setting up the provisioners that I’d use to set up Cassandra.

In addition to declaring each of the initial nodes statically, I also, at least for now, added public IPs for troubleshooting purposes. Once things are set up I’ll likely remove those and ensure the cluster communicates only on the private IPs, or at least use firewall settings to keep traffic from hitting the individual nodes via their public IPs.

The nodes, public and private IP addresses, now look like this.

resource "google_compute_address" "node_zero_address" {
  name         = "node-0-address"
  subnetwork   = "${module.network_development.subnetwork_west}"
  address_type = "INTERNAL"
  address      = "10.1.0.2"
}

resource "google_compute_address" "node_zero_public_address" {
  name = "node-0-public"
}

resource "google_compute_instance" "node_zero" {
  name         = "node-0"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${module.network_development.subnetwork_west}"
    address    = "${google_compute_address.node_zero_address.address}"

    access_config {
      nat_ip = "${google_compute_address.node_zero_public_address.address}"
    }
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}

Provisioners

When using Terraform provisioners there are a few things to take into account. First, and this is kind of obvious but nonetheless worth mentioning: know which user, SSH keys, and related authentication details you’re using to actually log in to the instances you create. It can get a little confusing sometimes, at least it’s gotten me more than once, when gcloud generates SSH keys for me while I’ve already got my own keys created. Then when I set up a provisioner I routinely forget that I don’t want to use id_rsa but instead the Google-created keys. I talk more about this in the video, along with making sure the user on the machine is the right one, that it’s set up for CentOS or Ubuntu or whatever distro you’re using, and all of those specifics.

Once that’s all figured out and sorted, it’s just a few steps to get a provisioner set up and working. The first thing is to decide which provisioner we want to use. There are several options: file, local-exec, remote-exec, and some others for chef, salt-masterless, null_resource, and habitat. Here I’m just going to cover file, local-exec, and remote-exec. First let’s take a look at the file provisioner.

resource "google_compute_instance" "someserver" {
  # ... some other fields for the particular instance being created.

  # Copies the myapp.conf file to /etc/myapp.conf
  provisioner "file" {
    source      = "directory/relative/to/repo/path/thefile.json"
    destination = "/the/path/that/will/be/created/andthefile.json"
  }
}

This example of a provisioner shows, especially with my fancy naming, several key things:

  • The provisioner goes into an instance resource, as a block within the instance resource itself.
  • The provisioner, for which I’ve selected the “file” type here, then has several key fields of its own; for the file provisioner the two required fields are source and destination.
  • The source takes the file to be copied to the instance, which lives locally in this repository (or I suppose could be outside of it), given as a path plus the file name. You could put just the filename if the file sits in the root of where this is being executed.
  • The destination is where the file lands on the remote server, relative to the connecting user’s folder. So if I set folder/filename.json it’ll create a folder called “folder” and put the file in it, possibly renamed if the destination name isn’t the same as the source, here as filename.json.

But of course, one can’t simply copy a file to a remote instance without a user and the respective authentication, which is where the connection block comes into play within the provisioner. That makes the provisioner look like this.

resource "google_compute_instance" "someserver" {
  # ... some other fields for the particular instance being created.

  # Copies the myapp.conf file to /etc/myapp.conf
  provisioner "file" {
    source      = "directory/relative/to/repo/path/thefile.json"
    destination = "/the/path/that/will/be/created/andthefile.json"
    connection {
      type        = "ssh"
      user        = "useraccount"
      private_key = "${file("~/.ssh/id_rsa")}"
      agent       = false
      timeout     = "30s"
    }
  }
}

Here the connection block adds several new fields that are used to make the connection. This is the same for the file or remote-exec provisioner. As mentioned earlier, and shown through troubleshooting in the video, I’ve put id_rsa in the example since that’s the common SSH private key. However, with GCP I needed to be using the google_compute_engine SSH private key.

The other fields are pretty self-explanatory, but let’s discuss them real quick. The type simply indicates that an SSH connection will be used; in the case of Windows instances one uses the winrm connection type instead.

The user field is simply the name of the user that will be used to authenticate to the remote instance upon creation. The private_key is the private SSH key of the user that needs to connect to that particular instance and do whatever the provisioner is going to do.

The agent field, if set to false, skips using ssh-agent for authentication; if set to true, it uses it. On Windows the only supported SSH agent is Pageant.

Then the timeout field is how long, here “30s” for thirty seconds, the provisioner will wait for the connection before an error is thrown in the terraform execution.

After all of those things were setup I created a simple script to install Cassandra which I called install-c.sh.

#!/usr/bin/env bash

# Installing Cassandra 3.11.3

echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
sudo apt-get update
sudo apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA
sudo apt-get install -y cassandra

Now with the script done I can use the file provisioner to move it over to the server, which I did in the video, and then started putting together the remote-exec provisioner. I messed around with the local-exec provisioner too, but then realized I’d gotten them turned around in my head. The local-exec provisioner actually executes a script here on the local machine that the terraform execution starts from, whereas I needed Terraform to execute the script out on the remote instance that was just created. That provisioner looks like this.

provisioner "remote-exec" {
  inline = [
    "chmod +x install-c.sh",
    "install-c.sh",
  ]

  connection {
    type        = "ssh"
    user        = "adron"
    private_key = "${file("~/.ssh/google_compute_engine")}"
    agent       = false
    timeout     = "30s"
  }
}

In this provisioner the inline field takes an array of strings which will be executed on the remote server. I could have just entered the installation steps for installing Cassandra here, but considering I’ll actually want to take additional steps and actions, that would make the provisioner exceedingly messy. So what I do here is set up the connection, and then in the inline field I mark the file as executable and then execute the script on the server. The script then goes through the steps of adding the apt repository, taking the respective key verification steps, running an update, and then installing Cassandra.

After that I wrap up for this session: static instances created, provisioners set up, and connections made and verified for the provisioners, with Cassandra installed. So for the next session I’m all set to start connecting the nodes together into a cluster and taking whatever next steps I need.

  • 0:57 Thrashing Code Intro
  • 3:43 Episode Intro, stepping into Terraform for the session.
  • 4:25 Starting to take notes during session!
  • 6:12 Describing the situation where variables in key values of resources causes the resources to recreate every `terraform apply` so I broke out each individual node.
  • 9:36 I start working with the provisioner to get a script file on machine and also execute that script file. It’s tricky, because one has to figure out what the user for the particular image is. Also some oddball mess with my mic starts. Troubleshooting this, but if you know what’s up with OBS streaming software and why this is happening, lemme know I’m all ears! 🙂
  • 31:25 Upon getting the file provisioner to finally work, setting extra fields for time out and such, I moved on to take next steps with IntelliJ cuz the IDE is kind of awesome. In addition, the plugins for HCL (HashiCorp Configuration Language) and related elements including features built into the actual IDE provide some extra autocomplete and such for Terraform work. At this time I go through actually setting up the IDE (installed it earlier while using Visual Studio Code).
  • 34:15 Setup of IntelliJ plugins specific to Terraform and the work at hand.
  • 35:50 I show some of the additional fields that the autocomplete in Terraform actually show, which helps a lot during troubleshooting or just building out various resources.
  • 37:05 Checking out the Apache Cassandra (http://cassandra.apache.org/) download and install commands and adding those to the install script for the Terraform provisioner file copy to copy over to the instance.
  • 43:18 I add some configurations for terraform builds/execution via IntelliJ. There are terraform specific build options and also bash file execution options which I cover in video.
  • 45:32 I take a look at IntelliJ settings to determine where to designate the Terraform executable file. This is important for getting the other terraform build configuration options to work.
  • 48:00 Start of debugging and final steps between now and…
  • 1:07:16 …finish of successful Cassandra install on nodes using provisioners.
  • 1:07:18 Finishing up by committing the latest changes to repository.
  • 1:09:04 Hacker outtro!


 

New Trio of Series: Go, Bash, and Distributed Databases

I’m starting three new series and only making this post now that I’ve got the first round posted and live. Each of these I’ll be back posting so that I have a kind of linked list of blog entries for each of the series. So if you’re into learning Go, I’ve got that going, or you’re into bash hacking, got that too, and of course if you’re digging into distributed systems and databases, I’m tackling that too. It’s kind of the trio of core technologies I’m working on these days, enjoy:

The Nuances of Go – This is going to be a series where I go through some of the details of Go. It’s going to be kind of all over the board, but drill into usage of the language, the why, and what for of various features, capabilities, and related topics.

‘bash’ A.K.A. The Solution for Everything – Bash has been around for a while. But let’s not talk about how old it is; the shell has been used, and is being used, on about every single operating system on the planet. It’s hugely popular and it isn’t exactly being replaced. You can also do basically anything on a computer that you would want or need to do with it. However, there are lots of features and commands one ought to know, so this series is going to tackle a new command every post and go into the details of how to use it, what it can be used for, and related tips n’ tricks.

Distributed Database Things to Know – This series is going to cover various features and nuances of the Cassandra distributed database cluster technology. I’ll be diving into a whole host of capabilities, code, and, when relevant, a pointer or three back to the white papers that helped bring these distributed databases into existence.

My goal with each of these is to rise early Monday morning, and time box each article to about 30 minutes. 30 minutes for Go, 30 minutes for bash, and 30 for Cassandra. Then I’ll publish those throughout the week. I set this goal with the hilarious fact that I’m taking time to go record some LinkedIn learning courses on Go and Terraform. With that it might be a week before you see the next trio published. But they’re halfway written already so I might surprise myself. Happy thrashing code!

‘bash’ A.K.A. The Solution for Everything – Bourne Shell as per v7 Unix to Today’s

In 1979 Unix v7 started being distributed with the original Bourne Shell. Simply put, it’s a program that sits at /bin/sh and runs in the terminal. You may ask, “what’s the difference between a shell and the terminal?” Let’s cover that right now, because it routinely isn’t common knowledge, but it really ought to be, as it sets the basis for understanding a lot of what goes on in Unix-based systems (that includes almost every practical system on a PC or server, in the cloud, on your phone, and more; probably easiest to describe it simply as everything that isn’t the Microsoft Windows OS).

A Shell and the Terminal

Terminal – A terminal is the text input and output environment on the system.

Shell – This is the command line interpreter that is run at the terminal.

Another point of context is that terminal, shell, and the word console are all used in various ways and sometimes interchangeably. However, these words do not mean the same thing at all; they are distinct individual parts of the system. For example, the console, which is used in a strangely disingenuous way all over Microsoft phrasing, is historically the physical terminal of the system, the place where the system terminal, i.e. the thing I’ve described above, actually runs so that we can type and interface with it as humans.

Albeit, as English does, these definitions aren’t always taken as the exact, appropriate, and pedantically correct definitions today. For example, many at Microsoft argue that the console is just the terminal, that the terminal is the console. Sure, ok, that’s fine I can still follow along in the conversation, and this adds context, for when someone steps out of line and uses the more historically specific definition in context of a conversation.

Alright, that’s all groovy, so now we can get back to just talking about the shell, all the power it gives the Unix/Linux/POSIX System user, and touch on the terminal or console as we need to with full context of what these things actually are!

Introducing Bash!

Alright, with that little bit of context around Bourne Shell, let’s talk about what we’ve actually got today running as our shell in our terminal on our console on the computers we work with! The Bourne Shell, years later had a replacement written for it by Brian Fox. He released it in 1989 and over the years it became a kind of defacto replacement of Bourne Shell. The term ‘BASH’ stands for Bourne Again SHell.

The Bash command syntax is a superset of the Bourne Shell syntax. It provides a wide range of features that draw ideas from the Korn shell (ksh) and the C shell (csh), such as command line editing, command history, the directory stack, the $RANDOM and $PPID variables, and POSIX command substitution syntax. If many of those things make you think, “WTF are these variables and such?” have no worries, I’ll get to em’ soon enough in this series!

But with that, this is the beginning of many short entries on tips, tricks, tutorials, syntax, history, and context of bash so until next time, cheers!

References & Collected Materials

Consistent Hashing

I wrote about consistent hashing in 2013 when I worked at Basho, where I had started a series called “Learning About Distributed Databases”, and today I’m kicking that back off after a few years (ok, after 5 or so years!) with this post on consistent hashing.

As with Riak, which I wrote about in 2013, Cassandra remains one of the core active distributed database projects alive today that provides an effective and reliable consistent hash ring for the clustered distributed database system. A hash function is an algorithm that maps data of variable length to data of a fixed length. Consistent hashing is a kind of hashing that uses this mapping to assign keys to particular nodes around the ring in Cassandra. One can think of it as a kind of Dewey Decimal Classification system where the cluster nodes are the various bookshelves in the library.

Ok, so maybe the Dewey Decimal system isn’t the best analogy. Does anybody even learn about that any more? If you don’t know what it is, please read up and support your local library.

Consistent hashing allows the data distributed across a cluster to be reorganized minimally when nodes are added or removed. The partitions are based on a particular partition key. The partition key shouldn’t be confused with a primary key either; it’s more like a unique identifier controlled by the system that makes up part of a primary key, for example one part of a composite primary key made up of multiple columns.

For an example, let’s take a look at sample data from the DataStax docs on consistent hashing.

For example, if you have the following data:

 
name    age  car      gender
jim     36   camaro   M
carol   37   345s     F
johnny  12   supra    M
suzy    10   mustang  F

The database assigns a hash value to each partition key:

 
Partition key  Murmur3 hash value
jim            -2245462676723223822
carol          7723358927203680754
johnny         -6723372854036780875
suzy           1168604627387940318
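
Just to make the variable-length-in, fixed-length-out idea concrete, here’s a tiny Go sketch that hashes those same partition keys. I’m using the standard library’s FNV-1a hash here rather than Murmur3, so the values won’t match the DataStax table above, but the fixed-width result is the point; the function and variable names are just my own for illustration.

package main

import (
    "fmt"
    "hash/fnv"
)

// hashKey maps a variable length key to a fixed size 64-bit value.
func hashKey(key string) uint64 {
    hasher := fnv.New64a()
    hasher.Write([]byte(key))
    return hasher.Sum64()
}

func main() {
    for _, key := range []string{"jim", "carol", "johnny", "suzy"} {
        fmt.Printf("%-8s -> %d\n", key, hashKey(key))
    }
}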

Each node in the cluster is responsible for a range of data based on the hash value.

Hash values in a four node cluster

DataStax Enterprise places the data on each node according to the value of the partition key and the range that the node is responsible for. For example, in a four node cluster, the data in this example is distributed as follows:

Node  Start range           End range             Partition key  Hash value
1     -9223372036854775808  -4611686018427387904  johnny         -6723372854036780875
2     -4611686018427387903  -1                    jim            -2245462676723223822
3     0                     4611686018427387903   suzy           1168604627387940318
4     4611686018427387904   9223372036854775807   carol          7723358927203680754
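
To tie the table together, here’s a hedged little Go sketch of the range lookup it describes: given each node’s start and end range from the four node example, find which node owns a given hash value. The type and function names are just mine for illustration, not anything from Cassandra or DataStax.

package main

import "fmt"

// nodeRange mirrors one row of the four node cluster table above.
type nodeRange struct {
    node  int
    start int64
    end   int64
}

var ring = []nodeRange{
    {1, -9223372036854775808, -4611686018427387904},
    {2, -4611686018427387903, -1},
    {3, 0, 4611686018427387903},
    {4, 4611686018427387904, 9223372036854775807},
}

// ownerOf returns the node responsible for the range containing the hash value.
func ownerOf(hash int64) int {
    for _, r := range ring {
        if hash >= r.start && hash <= r.end {
            return r.node
        }
    }
    return -1 // unreachable, the ranges above cover the whole int64 space
}

func main() {
    keys := map[string]int64{
        "jim":    -2245462676723223822,
        "carol":  7723358927203680754,
        "johnny": -6723372854036780875,
        "suzy":   1168604627387940318,
    }
    for name, hash := range keys {
        fmt.Printf("%-8s hash %20d -> node %d\n", name, hash, ownerOf(hash))
    }
}

Run it and each key lands on the same node as in the table.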

So there ya go, that’s consistent hashing and how it works in a distributed database like Apache Cassandra, the derived distributed database DataStax Enterprise, or the mostly defunct RIP Riak. If you’d like to dig in further, I’ve also found Distributed Hash Tables interesting, along with a host of other articles that delve into coding up a consistent hash table, the respective ring, and the whole enchilada. Check out these articles for more information and details:

    • Simple Magic Consistent by Mathias Meyer @roidrage CTO of Travis CI. Mathias’s post is well written and drives home some good points.
    • Consistent Hashing: Algorithmic Tradeoffs by Damien Gryski @dgryski. This post from Damien is pretty intense, and if you want code, he’s got code for ya.
    • How Ably Efficiently Implemented Consistent Hashing by Srushtika Neelakantam. Srushtika does a great job not only of describing what consistent hashing is but also has drawn up diagrams, charts, and more to visualize what is going on. But that isn’t all, she also wrote up some code to show nodes coming and going. A really great post.

 

A Go Repo & Some Go Code

Wrapped up another Twitch stream (follow here) and streamed live via YouTube, where the video is now (subscribe here, video here). However, the more interesting part IMHO was where I broke down a few key parts of the application, building features around file writes, reads, JSON marshalling, and some other functionality. I also put together a repo on Github of the code (shown and explained below). I dove into this topic in the later part of the stream, which I’ve time-tagged below. For the full timeline and the rest of the video just watch from the beginning. I do make some progress working on Colligere, but I’ll cover that topic in a subsequent blog entry and Twitch stream.

  • 1:32:40 – Creating a new application with Jetbrains Goland to show off how to use the Go core libraries around the user, JSON, and some basic file creation and writing.
  • 1:36:40 – At this point I start adding some basic code to pull the current user and collect some information about that user.
  • 1:40:42 – I extract the error code using the Goland refactoring feature and setup a func check() for error checking. That cleans up the inline code a bit.
  • 1:43:10 – Here I add to the user data retrieved some environment variables to that list of collected data. I also cover again, as I have a number of times, how the environment variables are pulled in IDE versus user session versus out of IDE.
  • 1:47:46 – Now I add a file exists check and start working on that logic.
  • 1:48:56 – Props to Edd Turtle on a solid site on Go. Here’s the blog entry, and respectively the path to more of Edd’s material @ https://golangcode.com/. Also, Edd seems like a good guy to follow @eddturtle.
  • 2:01:33 – Starting the JSON Work here to marshal and unmarshall.
  • 2:26:33 – At this point I push the code up to Github (repo here) using the built in Goland VCS features. I realize I’ve named the repo “adron” by accident so I rename it, close Goland, and then clone the code back down locally with Goland’s VCS features. It’s kind of interesting to see Goland go through the 2-factor auth for this too.
  • 2:58:56 – The Seattle Thrashing Code outtro!

There were a few notes I took during the session with my collected links and references for things I looked up. Those included the following:

The code for the Github repo ended up in this state: just a single main.go file, which shows how to use several features of Go and capabilities of the core libraries.

package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"os"
	"os/user"
	"strconv"
)

type UserInformation struct {
	Name     string   `json:"name"`
	UserId   int64    `json:"userid"`
	GroupId  int64    `json:"groupid"`
	HomeDir  string   `json:"homedir"`
	UserName string   `json:"username"`
	GroupIds []string `json:"groupids"`
	GoPath   string   `json:"gopath"`
	EnvVar   string   `json:"environmentvariable"`
}

func main() {
	currentUser, err := user.Current()
	check(err)
	changeDataForMarshalling(currentUser)
}

func changeDataForMarshalling(currentUser *user.User) {
	groupsIds, err := currentUser.GroupIds()
	check(err)
	// Parse the numeric uid and gid strings as base 10 integers.
	userId, err := strconv.ParseInt(currentUser.Uid, 10, 64)
	check(err)
	groupId, err := strconv.ParseInt(currentUser.Gid, 10, 64)
	check(err)

	workingUserInformation := &UserInformation{
		Name:     currentUser.Name,
		UserId:   userId,
		GroupId:  groupId,
		HomeDir:  currentUser.HomeDir,
		UserName: currentUser.Username,
		GroupIds: groupsIds,
		GoPath:   os.Getenv("GOPATH"),
		EnvVar:   os.Getenv("NEW_STRING_ENVIRONMENT_VARIABLE"),
	}

	resultingUserInformation, _ := json.Marshal(workingUserInformation)

	filename := "collected_values.json"
	if _, err := os.Stat(filename); os.IsNotExist(err) {
		writeFileContents(err, filename, string(resultingUserInformation))
	} else {
		newUserInformation, err := openFileMakeChanges(filename)
		writeFileContents(err, filename, newUserInformation)
	}
}

func openFileMakeChanges(filename string) (string, error) {
	jsonFile, err := os.Open(filename)
	check(err)
	defer jsonFile.Close()
	var changingUserInformation UserInformation
	byteValue, _ := ioutil.ReadAll(jsonFile)
	json.Unmarshal(byteValue, &changingUserInformation)

	changingUserInformation.Name = "Adron Hall"
	changingUserInformation.GoPath = "/where/is/the/goland"
	newUserInformation, _ := json.Marshal(changingUserInformation)

	return string(newUserInformation), err
}

func writeFileContents(err error, filename string, text string) {
	// Truncate on open so rewrites don't leave stale bytes behind.
	f, err := os.OpenFile(filename, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0600)
	check(err)
	defer f.Close()

	if _, err = f.WriteString(text); err != nil {
		panic(err)
	}
}

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

The import section includes several core libraries for the application: “encoding/json”, “io/ioutil”, “log”, “os”, “os/user”, and “strconv”. Then I’ve got a structure declared, with various fields, that I use throughout the code.

Then just for the heck of it I created a changeDataForMarshalling function, passed in the current user info, and parsed the string gid and uid results into int data types. From there the remaining values are filled in, marshalled to JSON, and written to a file, with the behavior depending on whether the file is new or already exists. The writing, however, I broke out into a function dubbed writeFileContents. There’s definitely some more refactoring and tweaking to do to really make it functional and usable, but it easily shows what strconv, marshalling, and other features of these core libraries do. This code provides some examples of the functionality Colligere will use to read in schema configuration, edit it, and save that schema. More about that in the next few Twitch streams.

The Nuances of Go: Go Program Structure

Talking About Types, Variables, Pointers, and More

In my endeavor to get more of the specifics of Go figured out, I’ve been diving a bit more in depth to each of the specific features, capabilities, and characteristics of the language. I’ll be doing the same for Cassandra over the coming months and producing a blog entry series about it. This new series however is about Go, and this first in the series is about types, variables, pointers, and naming of these things. Enjoy.

Reserved Words, Keywords & Related Systemic Parts of Go

First off let’s talk about the reserved words in Go. There are 25 keywords that can’t be used as names of variables, functions, or anything like that. The list includes:

if import interface map
select return range package
struct switch type var
for func go goto
default defer else fallthrough
case const chan continue
break

Beyond these 25 keywords there are also some predeclared names that can be but usually shouldn’t be used for naming. These include:

Constants

true false iota nil

Types

int int8 int16 int32
int64 uint uint8 uint16
uint32 uint64 uintptr float32
float64 complex128 complex64 bool
byte rune string error

Functions

make len cap new
append copy close delete
complex real imag panic
recover

When declaring an entity within a function, its scope is limited to that function. When declaring it outside of a function, it is available to all the contents of the package it is declared in. The case of the first letter also determines whether access is limited to the package or allowed across package boundaries: if the name begins with an upper case character it is exported for other packages to use. Package names themselves are always lower case, however.
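
Here’s a minimal sketch of those scoping and export rules, with package and identifier names I’ve made up purely for illustration:

// Package stats is just an example package name.
package stats

// maxSamples is declared outside any function, so it is visible anywhere
// inside the stats package, but its lower case first letter means it is
// not exported to other packages.
var maxSamples = 100

// Mean starts with an upper case letter, so it is exported and other
// packages can call stats.Mean.
func Mean(values []float64) float64 {
    if len(values) == 0 {
        return 0
    }
    // sum is declared inside the function, so its scope is limited to
    // this function body.
    sum := 0.0
    for _, v := range values {
        sum += v
    }
    return sum / float64(len(values))
}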

Camel Casing

It’s also important to know about the emphasis and standard within Go of using camel casing. The exception is acronyms like HTML, XML, or JSON, where it is routine to keep the acronym’s case intact in names like HTMLthingy, XMLjunkHere, or rockStarJSON. Go is somewhat particular about these types of things, so it is extremely important to follow the Go format standards. To help with this there’s also the gofmt tool, but it’s always good to just use the proper formatting while writing the code anyway.
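
A quick sketch of what that looks like in practice (the names here are invented just to show the convention):

package naming

// Unexported names use camelCase, and acronyms keep a consistent case.
var jsonPayload = []byte(`{"name": "adron"}`)

// Exported names start with an upper case letter, and the acronym stays
// upper case, e.g. ParseJSON rather than ParseJson.
func ParseJSON(raw []byte) string {
    return string(raw)
}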

The Four Declarations

The four declarations are kind of like the four horsemen, except not really. Forget that analogy as it was already going nowhere. The four declarations include:

var const type func

Here are some code samples of the various declarations. First, use of func: in any new Go program there’s always the main() function, declared like this.

func main() {
    // Stuff goes here.
}

Here’s another example of a function declaration, with parameters passed in and a result expected back. The signature works like this: func, followed by the name of the function, then in parentheses each parameter name followed by its type, for as many parameters as are needed. After the parentheses comes the type of the expected result of the function. Of course, if no result is expected to be returned, no type is specified there. Likewise, if there are no parameters, the parentheses are still used but left empty.

func doSomeComputation(someValue float64, anotherFloaty float64) float64 {
    return someValue * anotherFloaty
}

Using the var for declaration of variables looks like this.

var f = 212.0
var result = 2 + 2
var celsius = (f - 32) * 5 / 9

A variable’s type can also be given explicitly: the var keyword, followed by the variable name, then its type, and then, if desired, an assignment via = of a specific value or of the result of a function.

var name type = expression

Without an assignment, the variable is simply created with the intent of assigning a value later. Such as these variable declarations.

var f float32
var result float64
var celsius float64
var someVariable int
var junk string
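
One nuance worth adding to that: declared-but-unassigned variables aren’t left as garbage, they start out holding the zero value for their type, which a quick print makes obvious.

package main

import "fmt"

func main() {
    var f float32
    var result float64
    var someVariable int
    var junk string

    // Prints the zero values: 0 0 0 and an empty string.
    fmt.Println(f, result, someVariable, junk)
}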

A shorter version of the creation and assignment of variables looks like this. It however can only be used within a function.

name := expression

Actual use of this shorter version would look like this.

f := 212.0
randomStuff := rand.Float64()
wordContent := "This is some character string nonsense."

One can also do some other tricks with this shorter method of declaration. Let’s say some variables are needed that will have integers assigned to them. You could set up the variables with values like this. I’ve noticed some people get a bit confused between this and tuple assignment; it is not the same thing. This merely gives you three variables, k, l, and m, and assigns the values 0, 1, and 2 to the respective variables.

k, l, m := 0, 1, 2

It’s also important to note that a short declaration doesn’t always declare all of the variables on its left-hand side. For any that were already declared in the same lexical block, it acts as an assignment instead.
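
Here’s a small sketch of that nuance; the file names are made up just for the example.

package main

import (
    "fmt"
    "os"
)

func main() {
    // Both in and err are declared here.
    in, err := os.Open("infile.txt")
    fmt.Println(in, err)

    // err already exists in this lexical block, so this short declaration
    // only declares out; err is simply assigned a new value.
    out, err := os.Create("outfile.txt")
    fmt.Println(out, err)
}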

Declaring a constant value with const would look like this.

const fahrenheitBoiling = 212.0

Pointers

A variable holds a particular value, whereas a pointer holds the address in memory of a particular value. Sometimes it gets a bit difficult to mentally keep track of pointers, so let’s look at a few examples. First I’ll start with a plain old variable.

someVariable := "The original value of this variable."

Then I’ll add another variable, which will point at the address and respective value of someVariable.

someVariable := "The original value of this variable."
anotherVariable := &someVariable
fmt.Println(*anotherVariable)

In this situation, what does the fmt.Println(*anotherVariable) call print out? The result would be “The original value of this variable.” because anotherVariable points to someVariable, and the * dereferences the pointer to get the value stored there. That also means anotherVariable doesn’t take up the resources that someVariable does; it merely points to it.

The reasoning here is that you may have a variable, for example, that stores a very large value in memory. One can merely point to that data with a pointer, and the large value doesn’t have to change its memory space. Imagine if that variable were a gigabyte of data and it had to be copied every time something needed to reference it or calculate with it; with a pointer we don’t have to move it. It can be used in various ways without getting shifted around.
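
As a rough sketch of that idea, with the struct and its size invented purely for illustration, passing a pointer hands the function an address rather than a copy of the whole value:

package main

import "fmt"

// bigBlob stands in for some large chunk of data.
type bigBlob struct {
    payload [1 << 20]byte // roughly a megabyte
}

// stamp takes a pointer, so the megabyte payload is not copied when the
// function is called; only the address is passed along.
func stamp(blob *bigBlob) {
    blob.payload[0] = 42
}

func main() {
    var blob bigBlob
    stamp(&blob)
    fmt.Println(blob.payload[0]) // prints 42
}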

Now if I take the pointer anotherVariable and assign a value through it, like this.

*anotherVariable = "42"

Then what happens? The assignment goes through the pointer to the address it references, so the value stored there, i.e. someVariable itself, is now the string “42”. The pointer anotherVariable still points at the same address; only the value at that address changed.
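
To make that concrete, here’s the same example in a runnable form:

package main

import "fmt"

func main() {
    someVariable := "The original value of this variable."
    anotherVariable := &someVariable

    // Assigning through the dereferenced pointer updates the value stored
    // at that address, which is someVariable itself.
    *anotherVariable = "42"

    fmt.Println(someVariable)     // prints: 42
    fmt.Println(*anotherVariable) // prints: 42
}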

The Fancy new Function

There’s also another function, a built-in one, that creates a variable. The expression new(type) creates an unnamed variable of that type, initializes it to the type’s zero value, and returns its address, a value of type *type. Here are some examples.

pinto := new(int)
fmt.Println(*pinto)

This code will create a variable of type int and initialize it to zero, since that’s the zero value of an int. Let’s say we want to actually give it a value after it’s created though. We could assign through pinto like this.

*pinto = 42
fmt.Println(*pinto)

This will then print out the value 42, of type int. Note that new is really just sugar, a syntactic convenience feature. It can be useful, however, and there are a few caveats that need to be noted. For example, check out this gotcha.

If I declare two new values using new.

firstThing := new(int)
secondThing := new(int)
fmt.Println(firstThing == secondThing)

In this case, firstThing and secondThing don’t actually match, because the comparison is of the addresses the pointers hold, and each call to new returned a different address!
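
And circling back to the earlier point that new is just sugar, these two little functions end up behaving the same way; a minimal sketch:

package main

import "fmt"

// newInt uses the built-in new function.
func newInt() *int {
    return new(int)
}

// newIntTheLongWay does the same thing without new.
func newIntTheLongWay() *int {
    var i int
    return &i
}

func main() {
    fmt.Println(*newInt(), *newIntTheLongWay()) // both print 0
}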

Alright, that’s it for now, just a few important nuances around how variables, pointers, and Go features work.