Setting Up Nodes, Firewall, & Instances in Google Cloud Platform

Here’s the rundown of what I covered in the latest Thrashing Code Session (go subscribe to the channel here for future sessions, or catch them on Twitch). The core focus of this session was making further progress on my Terraform project for getting a basic Apache Cassandra and DataStax Enterprise Cassandra cluster up and running in Google Cloud Platform.

The code and configuration from the work are available on Github at terraform-fields, and a summary of the code changes and other actions taken during the session is further along in this blog entry.

Streaming Session Video

In this session I worked toward completing a few key tasks for the Terraform project around creating a Cassandra Cluster. Here’s a rundown of the time points where I tackle specific topics.

  • 3:03 – Welcome & objectives list: Working toward DataStax Enterprise Apache Cassandra Cluster and standard Apache Cassandra Cluster.
  • 3:40 – Review of what infrastructure exists from where we left off in the previous episode.
  • 5:00 – Found music to play that is copyright safe! \m/
  • 5:50 – Recap of where the project lives on Github in the terraformed-fields repo.
  • 8:52 – Adding a google_compute_address for use with the instances. This leads into setting up static public and private google_compute_address resources, the idea being to know the IPs for our cluster ahead of time to make joining the nodes together easier.
  • 11:44 – Working to get the access_config and related properties set on the instance to assign the google_compute_address resources that I’ve created. I run into a few issues but work through those.
  • 22:28 – Bastion server is set with the IP.
  • 37:05 – I set up some files, following a kind of “bad process” as I note, which I’ll refactor and clean up in a subsequent episode. The bad process does limit the number of resources I have in any one file, though, so it’s a little easier to follow along.
  • 54:27 – Starting to look at provisioners to execute script files and commands before or after the instance creation. Super helpful, with the aim to use this feature to download and install the DataStax Enterprise Apache Cassandra or standard Apache Cassandra software.
  • 1:16:18 – Ah, a need for a firewall rule for SSH on port 22. I work through adding those rules and then end up with an issue that we’ll be resolving next episode!

Session Content

Starting Point: I started this episode from where I left off last session.

Work Done: In this session I added a number of resources to the project and worked through a number of troubleshooting scenarios, as one does.

Added firewall resources to open up port 22 and ICMP (ping, etc.).

[sourcecode language="bash"]
resource "google_compute_firewall" "bastion-ssh" {
  name    = "gimme-bastion-ssh"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}

resource "google_compute_firewall" "bastion-icmp" {
  name    = "gimme-bastion-icmp"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "icmp"
  }
}
[/sourcecode]
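As written, these rules open SSH and ICMP broadly across the network. A tightening I may apply later (this isn’t in the current code; the CIDR and tag name below are placeholders) would be to scope the SSH rule to a known source range and to tagged instances only:

[sourcecode language="bash"]
resource "google_compute_firewall" "bastion-ssh" {
  name    = "gimme-bastion-ssh"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  # Hypothetical tightening: only allow SSH from a known admin CIDR block...
  source_ranges = ["203.0.113.0/24"]

  # ...and only to instances carrying the "bastion" network tag.
  target_tags = ["bastion"]
}
[/sourcecode]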

I also broke out the files so that each instance has its own IP addresses defined in the file specific to that instance. Later I’ll add context for why I gave the project this file bloat, and refactor it to use modules instead.

(Image: terraform-files.png – the per-instance Terraform file layout)
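As a rough preview of that module refactor (purely hypothetical at this point; the module path and variable names are made up and nothing like this exists in the repo yet), the per-node boilerplate could eventually collapse into something like this:

[sourcecode language="bash"]
# Hypothetical module call - a sketch of the planned refactor, not current code.
module "node_frank" {
  source      = "./modules/cassandra-node"
  node_name   = "frank"
  internal_ip = "10.1.0.5"
  subnetwork  = "${google_compute_subnetwork.dev-sub-west1.self_link}"
  zone        = "us-west1-a"
}
[/sourcecode]

For now, though, each node is defined statically as shown next.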

Added each node resource as follows. For each subsequent node I just incremented the number, so the node1_internal google_compute_address becomes node2_internal, and so on (a sketch of that follows the block below). Everything is also statically defined, which adds to the file and configuration bloat.

[sourcecode language="bash"]
resource "google_compute_address" "node1_internal" {
  name         = "node-1-internal"
  subnetwork   = "${google_compute_subnetwork.dev-sub-west1.self_link}"
  address_type = "INTERNAL"
  address      = "10.1.0.5"
}

resource "google_compute_instance" "node_frank" {
  name         = "frank"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    address    = "${google_compute_address.node1_internal.address}"
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]
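Following that pattern, the second node’s address resource is the same block with the numbers bumped. The IP here is an assumption of the next sequential address in the subnet rather than something taken from the repo:

[sourcecode language="bash"]
resource "google_compute_address" "node2_internal" {
  name         = "node-2-internal"
  subnetwork   = "${google_compute_subnetwork.dev-sub-west1.self_link}"
  address_type = "INTERNAL"

  # Assumed next sequential internal address; adjust to match the actual subnet layout.
  address = "10.1.0.6"
}
[/sourcecode]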

I also set up the bastion server so it looks like this, specifically designating a public IP so that I can connect to it via SSH.

[sourcecode language="bash"]
resource "google_compute_address" "bastion_a" {
  name = "bastion-a"
}

resource "google_compute_instance" "a" {
  name         = "a"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  provisioner "file" {
    source      = "install-c.sh"
    destination = "install-c.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.root_password}"
    }
  }

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    access_config {
      nat_ip = "${google_compute_address.bastion_a.address}"
    }
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]
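The file provisioner above only uploads the install script; the next step is actually running it. A minimal sketch of how that might look with a remote-exec provisioner placed inside the same google_compute_instance resource (this isn’t in the repo yet, the connection details simply mirror the file provisioner, and the chmod/execute steps are assumptions about install-c.sh):

[sourcecode language="bash"]
provisioner "remote-exec" {
  # Run the script the file provisioner uploaded (assumed to land in the user's home directory).
  inline = [
    "chmod +x install-c.sh",
    "./install-c.sh",
  ]

  connection {
    type     = "ssh"
    user     = "root"
    password = "${var.root_password}"
  }
}
[/sourcecode]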

Plans for the next session include getting the nodes set up so that the bastion server can deploy to and execute commands against them without the nodes being exposed publicly to the internet. We’ll talk more about that then. For now, happy thrashing code!

Thrashing Code Twitch Schedule September 19th-October 2nd

I’ve got everything queued back up with some extra Thrashing Code Sessions, and will have some on-the-rails travel streams too. Here’s what the schedule looks like so far.

Today at 3pm PST (UPDATED: Sep 19th 2018)

UPDATED: Video available at https://youtu.be/NmlzGKUnln4

I’m going to get back into the swing of things this session after the travels last week. In this session I’m aiming to do several things:

  1. Complete the next steps toward getting a DataStax Enterprise Apache Cassandra cluster up and running via Terraform in Google Cloud Platform. My estimate is I’ll get to the point where three instances launch and the installation of Cassandra on them is automated. Later I’ll aim to expand this, but for now I’m just going to deploy 3 nodes and take it from there. Another future option is to bake the installation into a Packer-built image and use that for the Terraform execution. Tune in to find out the steps and what I decide to go with.
  2. I’m going to pull up the InteroperabilityBlackBox and start to flesh out some objects for our model. The idea is based on something I stumbled into last week during travels; the thread on that is here.

Friday (Today) @ 3pm PST

This Friday I’m aiming to cover some Go basics before moving further into the Colligere CLI app. Here are the highlights of the plan.

  1. I’m going to cover some of the topics around program structure, including type declarations, tuple assignment, variable lifetime, pointers, and other variable topics.
  2.  I’m going to cover some basics on packages, initialization of packages, imports, and scope. This is an important aspect of ongoing development with Colligere since we’ll be pulling in a number of packages for generation of the data.
  3. Setting up configuration and schema for the Colligere application using Viper and related tooling.

Tuesday, October 2nd @ 3pm PST

This session I’m aiming to get some more Terraform work done around the spin-up and shutdown of the cluster. I’ll dig into more specific points depending on where I get to in the sessions before this one. But it’s on the schedule, so I’ll update this one in the coming days.


Thrashing Code Sessions via Twitch & Kick Ass Dis-Sys Meetup

Got some excellent coding and systems setup coming up in the next few days. There’s also a meetup on the 28th with Tim Kellogg and Alena Hall presenting on some interesting topics around distributed database work on Kubernetes and the hot topic of WebAssembly. On top of that, a new surprise guest is scheduled to swing into Valhalla on my Twitch channel and help build out a cluster plus the DHCP, DNS, and related configuration needed for a setup on the metal!

Schedule


__2 “Starting a Basic Loopback API & Continuous Integration”

In this article Keartida is going to dive into setting up a basic Loopback API project and get a build of that project running on a continuous integration service. In this example she’s going to get the project set up with Codeship.

Prerequisites:

  • Be sure, whichever system you are using, to have a C++ compiler installed. For Windows that usually means installing Visual Studio; on OS-X install Xcode and the Developer Tools; on Ubuntu the GCC compiler and other options exist. For instructions on OS-X and Linux check out installing compiler tools.
  • Ubuntu
  • OS-X
  • For Windows, I’d highly suggest setting up an Ubuntu VM to do any work with Loopback or Node.js, or to follow along with this material. It’s possible on Windows, but a number of things are lacking. If you still want to make a go of using Windows, there are some initial setup steps here.

Nice to Haves:

  • git-flow – works on any bash, handles the branching and merging. Very nice scripts to have.
  • bashit – Adds more information to the bash prompt (works on OS-X, not Ubuntu or Windows Bash).


__1 “Getting Started, Kanban & First Steps for a Sharing App”

This is the first entry (the precursor, of course, was the zero day team introduction article) of an ongoing series I’m going to put together. I’m going to write this series from the perspective of a team building a product. I’ll have code samples and more as I work through the material.

The first step was Oi Elffaw having a discussion with the team to set up the first week’s working effort. Oi decided to call it a sprint and the rest of the team decided that was cool too. This was week one after all, and there wasn’t going to be much else besides testing, research, and setup taking place.

Prerequisites

Before starting everything I went ahead and created a project repository on Github for Oi to use waffle.io with. Waffle.io is an online service that works with Github issues to provide a kanban-style interface to the issues. This gives an easier view, especially for leads and management, into where things are and what’s on the team’s plate for the week. I included the default Node.js .gitignore file and an Apache 2.0 license when I created the repository, so Github seeded the project with the .gitignore, README.md, and license files.

After setting up the repository in Github I pinged Oi, and he set to work after the team’s initial meeting to discuss what week one would include.