Schedule Updates: Coding, Meetups, Twitch Streams, Etc. Starting Monday 7th

Another updated schedule for the coming weeks starting September 26th onward.

Series: Thrashing Code General Calamity

This is going to be a mix of tech this week. Probably some database hacking, code hacking, samples, and setup of even more examples for use throughout your coding week!

Series: Meetups!

Series: Emerald City Tech Conversations

Join me and several guests, on several different days, to get into the particulars of programming, coding, hacking, ops, DevOps, or whatever comes up. Ask questions, inquire, infer, or inform; we're here for the conversation. Join my guests and me!


Series: DataStax’s Apache Cassandra and DSE C# Hour

I’ll probably have a few additions over the next couple of days, but for now, this is the bulk of content and coding coming up in the next couple of weeks! See ya in the code!

 

New Live Coding Streams and Episodes!

I've been working away in Valhalla on the next episodes of Thrashing Code TV and subsequent content for upcoming Thrashing Code Sessions on Twitch (follow) and YouTube (subscribe). Below I've broken out the main streams and shows I'll be putting together over the coming days, weeks, and months, along with links to sessions and shows already recorded. If you've got any ideas, questions, or thoughts, just send them my way.

Colligere (Next Session)

Coding has been going a little slow, in light of other priorities and all, but Colligere will still be one of the featured projects I'll be working on. Past episodes are available here; however, join in on Friday and I'll catch everybody up, so you can skip the past episodes if you aren't after specific details and just want to join in on the future work and sessions.

In this next session, this Friday the 9th at 3:33pm PST, I'm going to be working on reading in JSON, determining what type of structure the JSON should be unmarshaled into, and how best to make that determination through logic and flow.

Since Go needs a specific structure to unmarshal JSON into, I'll be working on a good way to pre-read information in the schema configuration files (detailed in the issue listed below) so that a logic flow can be implemented that then kicks off the standard Go JSON unmarshaling of the object. This will likely end up including some hackery around reading in JSON without the assistance of the Go JSON library. Join in and check out what solution(s) I come up with.
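If you want a rough sense of the kind of approach I'm talking about, here's a minimal sketch. It isn't the Colligere implementation; the "type" field, the KeyspaceConfig and TableConfig structs, and the decodeConfig function are all hypothetical stand-ins. The idea is simply to peek at one field first, then let the standard library unmarshal the full document into whichever struct matches.

[sourcecode language="go"]
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical config structures; the real schema types in Colligere differ.
type KeyspaceConfig struct {
	Type string `json:"type"`
	Name string `json:"name"`
}

type TableConfig struct {
	Type    string   `json:"type"`
	Name    string   `json:"name"`
	Columns []string `json:"columns"`
}

// decodeConfig pre-reads only the "type" field, then unmarshals the whole
// document into the structure that matches.
func decodeConfig(data []byte) (interface{}, error) {
	var peek struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(data, &peek); err != nil {
		return nil, err
	}

	switch peek.Type {
	case "keyspace":
		var k KeyspaceConfig
		err := json.Unmarshal(data, &k)
		return k, err
	case "table":
		var t TableConfig
		err := json.Unmarshal(data, &t)
		return t, err
	default:
		return nil, fmt.Errorf("unknown config type %q", peek.Type)
	}
}

func main() {
	cfg, err := decodeConfig([]byte(`{"type":"table","name":"users","columns":["id","email"]}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%#v\n", cfg)
}
[/sourcecode]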

The specific issue I'll be working on is located on GitHub here. I'm going to keep these sessions going, albeit on a somewhat loose schedule, and will start working on the Colligere CLI primarily on Saturdays at 10am, so you can put that on your schedule and join me then for hacks. If you'd like to contribute, as always, reach out via here, @Adron, or via the GitHub Colligere repository and let's discuss what you'd like to add.

Getting Started with Go

This set of sessions, which I've detailed in "Getting Started with Go", I'll be starting on January 12th at 4pm PST. You can get the full outline and further details of what I'll be covering on my "Getting Started with Go" page, and of course I've posted details for the first of these sessions on the Twitch event page here. A quick sketch of the kind of ground we'll cover follows the outline below.

  • Packages & the Go Tool – import paths, package declarations, blank imports, naming, and more.
  • Structure – names, declarations, variables, assignments, scope, etc., etc.
  • Basic Types – integers, floats, complex numbers, booleans, strings, and constants.
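To give a sense of the territory the outline above covers, here's a tiny, self-contained example (not pulled from the course material, just an illustrative sketch) touching on package declarations, imports, variables, constants, and a few of the basic types:

[sourcecode language="go"]
// Every Go file begins with a package declaration; "main" marks an executable.
package main

// Imports are referenced by package name; unused imports are a compile error.
import (
	"fmt"
	"math"
)

// Package-level constant and variable declarations.
const Pi = 3.14159

var greeting = "Hello, Gophers!"

func main() {
	// Short variable declarations infer their types.
	count := 42        // int
	ratio := 1.618     // float64
	c := complex(2, 3) // complex128
	ok := true         // bool

	fmt.Println(greeting, count, ratio, c, ok, math.Sqrt(2), Pi)
}
[/sourcecode]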

Infrastructure as Code with Terraform and Apache Cassandra

I'll be continuing the Terraform, Bash, and related configuration and coding around using infrastructure-as-code practices to build out, maintain, and operate Apache Cassandra distributed database clusters. At some point I'll likely add Kubernetes and some additional on-the-metal cluster systems, and start looking at Kubernetes Operators and how one can manage distributed systems on Kubernetes in that on-the-metal environment. But for now, these sessions will continue real soon, as we've got some systems to build!

You can check out the existing episodes of this series here.

Getting Started with Multi-model Databases

This set of sessions I've detailed in "Getting Started with a Multi-model Database", and I'll be starting this one in the new year as well. Here's a short rundown of the next several streams, so stay tuned, follow my Twitch, subscribe on YouTube, and of course subscribe to the Composite Code blog (the subscription should be to the left, or if you're on mobile, click the little vertical ellipses button).

  1. An introduction to a range of databases: Apache Cassandra, PostgreSQL and SQL Server, Neo4j, and … an in-memory database. Kind of like 7 Databases in 7 Weeks, but a bunch of databases in just a short session!
  2. An Introduction – Apache Cassandra and what it is, how to get a minimal cluster started, options for deploying something quickly to try it out.
  3. Adding to Apache Cassandra with DataStax Enterprise, gaining analytics, graph, and search. In this session I’ll dive into what each of these capabilities within DataStax Enterprise give us and how the architecture all fits together.
  4. Deployment of Apache Cassandra and getting a cluster built. Options around ways to effectively deploy and maintain Apache Cassandra in production.
  5. Moving to DataStax Enterprise (DSE) from Apache Cassandra. Getting a DSE Cluster up and running with OpsCenter, Lifecycle Manager (LCM), and getting some queries tried out with Studio.

Setting Up Nodes, Firewall, & Instances in Google Cloud Platform

Here's the rundown of what I covered in the latest Thrashing Code Session (go subscribe here to the channel for future sessions, or on Twitch). The core focus of this session was making some further progress on my Terraform project around getting a basic Cassandra and DataStax Enterprise Apache Cassandra cluster up and running in Google Cloud Platform.

The code and configuration from the work are available on GitHub at terraform-fields, and a summary of code changes and other actions taken during the session is further along in this blog entry.

Streaming Session Video

In this session I worked toward completing a few key tasks for the Terraform project around creating a Cassandra cluster. Here's a rundown of the time points where I tackle specific topics.

  • 3:03 – Welcome & objectives list: Working toward DataStax Enterprise Apache Cassandra Cluster and standard Apache Cassandra Cluster.
  • 3:40 – Review of what infrastructure exists from where we left off in the previous episode.
  • 5:00 – Found music to play that is copyright safe! \m/
  • 5:50 – Recap on where the project lives on Github in the terraformed-fields repo.
  • 8:52 – Adding a google_compute_address for use with the instances. Leads into determining static public and private google_compute_address resources. The idea being to know the IP for our cluster to make joining them together easier.
  • 11:44 – Working to get the access_config and related properties set on the instance to assign the google_compute_address resources that I’ve created. I run into a few issues but work through those.
  • 22:28 – Bastion server is set with the IP.
  • 37:05 – I set up some files, following a kind of "bad process" as I note, which I'll refactor and clean up in a subsequent episode. But the bad process also limits the number of resources in any one file, so it's a little easier to follow along.
  • 54:27 – Starting to look at provisioners to execute script files and commands before or after the instance creation. Super helpful, with the aim to use this feature to download and install the DataStax Enterprise Apache Cassandra or standard Apache Cassandra software.
  • 1:16:18 – Ah, a need for a firewall rule for SSH and port 22. I work through adding those and then end up with an issue that we'll be resolving next episode!

Session Content

Starting Point: I started this episode from where I left off last session.

Work Done: In this session I added a number of resources to the project and worked through a number of troubleshooting scenarios, as one does.

I added firewall resources to open up port 22 and ICMP (ping, etc.).

[sourcecode language="bash"]
# Firewall rule allowing SSH (TCP port 22) to reach the bastion.
resource "google_compute_firewall" "bastion-ssh" {
  name    = "gimme-bastion-ssh"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}

# Firewall rule allowing ICMP (ping, etc.) for basic connectivity checks.
resource "google_compute_firewall" "bastion-icmp" {
  name    = "gimme-bastion-icmp"
  network = "${google_compute_network.dev-network.name}"

  allow {
    protocol = "icmp"
  }
}
[/sourcecode]

I also broke out the files so that each instance has its own IP addresses with it, in a file specific to that instance. Later, when I refactor to use modules, I'll add context for why I gave the project this file bloat.

(Image: terraform-files.png)

I added each node resource as follows. I just incremented the node number by one for each subsequent node, for example taking this node1_internal google_compute_address and incrementing it to node2_internal. Everything is also statically defined, adding to my file and configuration bloat.

[sourcecode language="bash"]
# Static internal IP for the first node; node2_internal, node3_internal, and
# so on follow the same pattern with the address incremented.
resource "google_compute_address" "node1_internal" {
  name         = "node-1-internal"
  subnetwork   = "${google_compute_subnetwork.dev-sub-west1.self_link}"
  address_type = "INTERNAL"
  address      = "10.1.0.5"
}

# The node instance itself, attached to the subnetwork with the static
# internal address assigned above.
resource "google_compute_instance" "node_frank" {
  name         = "frank"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    address    = "${google_compute_address.node1_internal.address}"
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]

I also set up the bastion server so it looks like this, specifically designating a public IP so that I can connect via SSH.

[sourcecode language="bash"]
# Public (external) IP reserved for the bastion host.
resource "google_compute_address" "bastion_a" {
  name = "bastion-a"
}

resource "google_compute_instance" "a" {
  name         = "a"
  machine_type = "n1-standard-1"
  zone         = "us-west1-a"

  # Copy the install script onto the instance over SSH once it's created.
  provisioner "file" {
    source      = "install-c.sh"
    destination = "install-c.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.root_password}"
    }
  }

  boot_disk {
    initialize_params {
      image = "ubuntu-minimal-1804-bionic-v20180814"
    }
  }

  # Attach to the subnetwork and assign the reserved public IP via NAT.
  network_interface {
    subnetwork = "${google_compute_subnetwork.dev-sub-west1.self_link}"
    access_config {
      nat_ip = "${google_compute_address.bastion_a.address}"
    }
  }

  service_account {
    scopes = [
      "userinfo-email",
      "compute-ro",
      "storage-ro",
    ]
  }
}
[/sourcecode]

Plans for the next session include getting the nodes set up so that the bastion server can work with them and deploy or execute commands against them, without the nodes being exposed publicly to the internet. We'll talk more about that then. For now, happy thrashing code!

Wrap Up for August of 2018