KubeCon 2018 Mission Objectives: Is the developer story any better?

This is my second year at KubeCon, and this year I had a very different mission than last year. Last year I wanted to learn more about service meshes, find out the status of various Kubernetes features like stateful sets, and generally get a better idea of the overall shape of the project and where it was headed.

This year, I wanted to find out two specific things.

  1. Developer Story: Has the developer story gotten any better?
  2. Database Story: Do any databases, and their respective need for storage, have a good story on Kubernetes yet?

Well, I took my trusty GoPro camera, mounted it on my shoulder like the cannon the Predator carries, and departed. I was going to attend this conference with a slightly different plan of attack: capture video instead of only taking a few notes, attending some sessions, and trying to grok the information that way. My thinking was that with the additional material I'd be able to recall and use even more of the information available. With that, here's the video I shot while perusing the showroom floor, along with some general observations. Below the fold and the video I'll have additional commentary, links, and updates, along with more separating of the cruft and garbage from the good news!

Gloo – Ok, this looks kind of interesting. I stopped to look just because the booth had interesting characters. When I strolled up I listened for a few minutes, but eventually realized I needed to just dig into what docs and collateral existed on the web. Here's what's out there, with some quick summaries.

Gloo exists as an application gateway. Kind of a mesh of meshes or something; it wasn't immediately clear. But I RTFMed the GitHub repo here and snagged this high-level architecture diagram, which makes it interesting and prospectively offers insight into its use.

[Image: Gloo high-level architecture diagram]

Gloo also has some, I suppose, "sub" projects. Here's a screenshot of the set of 'em. Click it to navigate to solo.io, which appears to be the parent organization. Some pretty cool software there, lots of open source goodness. It also leads me to think that maybe this is part of that first point above that I'm looking for: where is the improved developer story?

[Image: screenshot of the Gloo-related projects on solo.io]

More on that later. For now, I want to touch on one more thing before moving on to the next blog posts about the KubeCon details I'm keen to tell you about.

[Image: Ballerina logo]

Ballerina – Ok, when I approached the booth I wasn't 100% sure this was going to be what I wanted it to be. After getting a demo (in the video too), returning to the web – as ya do – and digging into the details and RTFMing a bit, I have become hopeful. This stack of technology looks good. Let's review!

The website describes Ballerina as,

A compiled, transactional, statically and strongly typed programming language with textual and graphical syntaxes. Ballerina incorporates fundamental concepts of distributed system integration and offers a type safe, concurrent environment to implement microservices with distributed transactions, reliable messaging, stream processing, and workflows.

That sounds like something pretty solid that could really help developers build on – let's say Kubernetes – in a very meaningful way. It could also expand far beyond just Kubernetes, which is something I've wanted to see, and help developers expedite their processes and development around line-of-business applications. That space is still the same old schtick, from the now ancient RAD tools all the way to today's React and web tooling, without a good way to develop those apps with an understanding of, or integration with, modern tooling like Kubernetes. It's always: jam it on top, config a bunch of YAML, and toss it over the wall.

A few more key points on how Ballerina is described on the website. Here's the stated philosophy:

Ballerina makes it easy to write cloud native applications while maintaining reliability, scalability, observability, and security.

Observability and security eh? Ok, I’ll be checking into this further, along with finally diving into Rust in the coming weeks. It looks like 2019 is going to be the year I delve into more than one new language! Yikes, that’s gonna be intense!

TiDB – Clearly the team at PingCap didn’t listen to the repeated advice of “don’t write your own database, it’ll…” from Charity Majors and a zillion other people who have written their own databases! But hey, this one, regardless of the advice being unheeded, looks kind of interesting. Right in the TiDB repo they’ve got an architecture diagram which is…  well, check out the diagram first.

[Image: TiDB architecture diagram]

So there's a MySQL application protocol in front, which connects to the TiDB cluster. That, in turn, has a DistSQL API (??) and a KV API connecting to TiKV (which stands for Ti Key Value), also a cluster, which then uses the DistSQL API to connect the other direction to a Spark cluster, where Spark SQL can be used. It appears the running theme is SQL all the things.

Above all this, to manage the clusters and such, there's a "PD Cluster" which I also need to read about. If you watched the video above, you'll notice the reference to it being the ZooKeeper of the system. This "PD Cluster ZooKeeper" thing manages the metadata, TSO, and data location, including the data location pertinent to the Spark cluster. Overall, four clusters to manage the system.

Just for good measure (also in the video): TiDB is built in Go, while TiKV is built in Rust, and some of the data location or part of the Spark comms are handled on the Java Virtual Machine – I think – I might have misunderstood some of the explanation. So how does all this work? I've no idea at this point, but I'm curious to find out. Before that, though, in the next few weeks and months I'm going to be delving into building applications in Node.js, Java, and C# against Cassandra and DataStax Enterprise, so I might add some cross-comparisons against TiDB.
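Since the front door is the MySQL wire protocol, the practical upshot is that any MySQL driver should be able to talk to TiDB. Here's a minimal, untested sketch of what that looks like from Go, assuming a local TiDB instance on its default port 4000, a passwordless root user, and a test database – those are all assumptions on my part, not something I verified at the booth.

// A minimal sketch of connecting to TiDB over the MySQL wire protocol.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // standard MySQL protocol driver
)

func main() {
	// Assumed: TiDB listening locally on its default port 4000.
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected, server reports:", version)
}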

Also, even though I didn't get to have a conversation with anybody from FoundationDB, I'm interested in how it's working these days too, especially considering its somewhat storied history. But hey, what project doesn't have a storied history these days, right? Stay tuned, subscribe here on the blog, and I'll have updates when that work and other videos, Twitch streams, and the like are published.

After all those conversations and running around the floor, at this point I had to take a coffee break. So with that, enjoy this video on how to appropriately grab good coffee real quick and an amazing cookie treat. Cheers!

Yes, I misspelled "dummy" – it's ok, I don't want to re-render it. I also know that the cookie name is kind of vulgar, LOLz, but you know what, welcome to Seattle; we love ya even when your mind is in the gutter!

Property Graph Modeling with an FU Towards Supernodes – Jonathan Lacefield

Some notes along with this talk, which is about ways to mitigate super nodes, partitioning strategies, and related efforts. Jonathan's talk is vendor neutral, even though he works at DataStax. That's not odd to me, since that's how we roll at DataStax anyway. We take pride in working with DSE but also in knowing the various products out there; we're all database nerds after all. (more below video)

In the video, I found the definition slide for super nodes to be perfect.

[Image: slide defining super nodes]

See that super node? Wow, Florida is just covered up by the explosive nature of that super node! YIKES!

In the talk Jonathan also delves deeper into vertices, adjacent vertices, and their respective neighbors, with definitions along the way, so it's a great talk to watch even if you're not up to speed on graph databases, graph math, and all that related knowledge.

[Image: slide on super node impacts]

[Image: slide on traversal behavior]

The super node problem he goes on to describe has two specific impacts that he details: query performance on traversals, and storage retrieval. For example, a Gremlin traversal (one's query) moves along creating traversers until it hits a super node, where a computational explosion occurs.
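To make that "computational explosion" a bit more concrete, here's a toy sketch – plain Go, nothing to do with Gremlin or DSE Graph internals – that just counts traversers per hop when one vertex along the path fans out to 100,000 neighbors.

// A toy illustration of why a traversal "explodes" crossing a super node:
// the traverser count at each hop is the sum of the neighbors of everything
// in the current frontier.
package main

import "fmt"

func main() {
	// Adjacency list: vertex -> neighbors. The "super" vertex stands in
	// for a super node and fans out to 100,000 neighbors.
	graph := map[string][]string{
		"start": {"a", "b"},
		"a":     {"super"},
		"b":     {"super"},
	}
	for i := 0; i < 100000; i++ {
		graph["super"] = append(graph["super"], fmt.Sprintf("v%d", i))
	}

	// Walk three hops out from "start", counting traversers at each hop.
	frontier := []string{"start"}
	for hop := 1; hop <= 3; hop++ {
		var next []string
		for _, v := range frontier {
			next = append(next, graph[v]...)
		}
		frontier = next
		fmt.Printf("hop %d: %d traversers\n", hop, len(frontier))
	}
	// Prints 2, 2, then 200000 - the frontier stays tiny until the
	// traversal crosses the super node, then jumps by orders of magnitude.
}

The frontier stays small until the traversal crosses the super node, then blows up, which is exactly the query performance impact Jonathan describes.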

Whatever your experience, this talk has some great knowledge to expand your ideas on how to query, design, and set up the data in your graph databases, along with more than a few pointers about what not to do when designing a schema for your graph data. Give it a listen, it's worth your time.

 

Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files

Part 1 of 3 – Coding Session in Go – Cobra + Viper CLI for Parsing Text Files, Retrieval of Twitter Data, Exports to various file formats, and export to Apache Cassandra.

Updated links to each part will be posted at the bottom of this post when I publish them. For code, a written walk-through, and the like, scroll down; I'll have code samples toward the bottom, after the timestamps under the video, so scroll, scroll, scroll.

3:40 Stated goals of the application. I go through a number of features that I intend to build into this application, typing them up in a text doc. The following are the items I added to the list.

  1. The ability to import a text file of Twitter handles that are mixed in among a bunch of other text.
  2. The ability to clean up that list.
  3. The ability to export the cleaned-up list of handles to a plain text, CSV, JSON, or XML file (a rough sketch of what that could look like follows this list).
  4. The ability, using the Twitter API, to pull data from the bio, latest tweets, and other related information.
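Just to sketch where that export feature could head – this is purely an assumption of mine at this point, not code from the session, and the exportHandles helper is a hypothetical name – something like the following, switching on the requested format:

// A rough sketch (assumption, not the session's final code) of exporting a
// slice of handles to JSON, CSV, XML, or plain text based on a format string.
package cmd

import (
	"encoding/csv"
	"encoding/json"
	"encoding/xml"
	"fmt"
	"os"
)

// exportHandles writes handles to path in the requested format.
func exportHandles(handles []string, path, format string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	switch format {
	case "json":
		return json.NewEncoder(f).Encode(handles)
	case "csv":
		w := csv.NewWriter(f)
		for _, h := range handles {
			if err := w.Write([]string{h}); err != nil {
				return err
			}
		}
		w.Flush()
		return w.Error()
	case "xml":
		type handleDoc struct {
			XMLName xml.Name `xml:"handles"`
			Handle  []string `xml:"handle"`
		}
		return xml.NewEncoder(f).Encode(handleDoc{Handle: handles})
	case "txt":
		for _, h := range handles {
			if _, err := fmt.Fprintln(f, h); err != nil {
				return err
			}
		}
		return nil
	default:
		return fmt.Errorf("unknown export format: %s", format)
	}
}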

8:26 Creating a new GitHub repo. A few tips and tricks for creating a new repo with GitHub.

9:46 Twitz is brought into existence! Woo
~~ Have to re-set up my SSH key real quick. This is a quick tutorial if you aren't sure how to do it; otherwise skip forward to the part where I continue with project setup and initial coding.

12:40 Call out for @justForFunc and Francesc’s episode on Cobra!

Check out @Francesc's "Just for Func". It's a great video/podcast series with lots and lots of Go!

13:02 Back to project setup. Cloning the repo, getting the initial README.md and .gitignore file set up, and related collateral for the basic project. I also add GitHub's "issues" files via the GitHub interface and rebase those changes in later.
14:20 Adding some options to the .gitignore file.
15:20 Set dates and copyright in license file.
16:00 Further setup of project, removing WIKIs, projects, and reasoning for keeping issues.
16:53 Opening GoLand up after an update. Here I start going through the specific details of how I set up a Go project in GoLand, setting configuration and related collateral.
17:14 Setup of GoLand's golang Command Line Launcher.
25:45 Introduction to Cobra (and first mention of Viper by association).
26:43 Installation of Cobra with go get github.com/spf13/cobra/cobra and the gotchas (remember to have your paths set, etc.).
29:50 Using Cobra CLI to build out the command line interface app’s various commands.
35:03 Looking over the generated code and commenting on the comments and their usefulness.
36:00 Wiring up the last bits of Cobra with some code, via main func, for CLI execution.
48:07 I start wiring up the Viper configuration at this point. Onward from here it's coding, coding, configuration, and more coding. (A rough sketch of where this wiring ends up is just below.)
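For anybody following along without watching, here's roughly where that main-func and Viper wiring lands. This is a sketch close to what the cobra generator emits; the twitz command name, the --config flag, and the config keys here are my shorthand for this session rather than the finished code.

// cmd/root.go - a rough sketch of the root command and Viper wiring.
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

var cfgFile string

// rootCmd is the base command the twitz subcommands hang off of.
var rootCmd = &cobra.Command{
	Use:   "twitz",
	Short: "A CLI for parsing Twitter handles out of text files",
}

// Execute is called by main() and kicks off Cobra's command handling.
func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

func init() {
	cobra.OnInitialize(initConfig)
	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is .twitz.yaml in the working directory)")
}

// initConfig points Viper at the config file so keys like "file",
// "fileExport", and "fileFormat" resolve in the subcommands.
func initConfig() {
	if cfgFile != "" {
		viper.SetConfigFile(cfgFile)
	} else {
		viper.AddConfigPath(".")
		viper.SetConfigName(".twitz")
	}
	viper.AutomaticEnv()
	if err := viper.ReadInConfig(); err == nil {
		fmt.Println("Using config file:", viper.ConfigFileUsed())
	}
}

main.go then does nothing more than call cmd.Execute().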

Implementing the `twitz config` Command

1:07:20 Confirming config and working up the twitz config command implementation.
1:10:40 First execution of twitz config. I describe where and how Cobra's documentation strings surface through the --help flag and get placed into the help output based on whether the short or long description is needed.

The basic config code, as implemented, looked something like this at the end of the session. This snippet doesn't do much beyond display the configuration variables from the configuration file. Eventually I'll add more to this file so the CLI can be easier to debug when setting it up.

package cmd

import (
	"fmt"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// configCmd represents the config command
var configCmd = &cobra.Command{
	Use:   "config",
	Short: "A brief description of your command",
	Long: `A longer description that spans multiple lines and likely contains examples
and usage of using your command. For the custom example:
Cobra is a CLI library for Go that empowers applications.
This application is a tool to generate the needed files
to quickly create a Cobra application.`,
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Printf("Twitterers File: %s\n", viper.GetString("file"))
		fmt.Printf("Export File: %s\n", viper.GetString("fileExport"))
		fmt.Printf("Export Format: %s\n", viper.GetString("fileFormat"))
	},
}

func init() {
	rootCmd.AddCommand(configCmd)
}

Implementing the `twitz parse` Command

1:14:10 Starting on the implementation of the twitz parse command.
1:16:24 Inception of my “helpers.go” file to consolidate and clean up some of the code.
1:26:22 REGEX implementation time! (A sketch of where this heads is below the timestamps.)
1:32:12 Trying out the REGEX101.com site.
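In the meantime, here's the rough shape of the handle-parsing helper I'm working toward in helpers.go. The parseHandles name and the exact pattern here are placeholders of mine, not the session's final code.

// helpers.go - a rough sketch of pulling Twitter handles out of arbitrary
// text with a regular expression.
package cmd

import (
	"regexp"
	"strings"
)

// handlePattern matches an @ followed by 1-15 word characters, which is
// roughly Twitter's handle format.
var handlePattern = regexp.MustCompile(`@(\w{1,15})`)

// parseHandles returns the unique handles found in the passed text,
// lowercased and stripped of the leading @.
func parseHandles(text string) []string {
	seen := make(map[string]bool)
	var handles []string
	for _, match := range handlePattern.FindAllStringSubmatch(text, -1) {
		handle := strings.ToLower(match[1])
		if !seen[handle] {
			seen[handle] = true
			handles = append(handles, handle)
		}
	}
	return handles
}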

I'll post the finished code and write up some details on the thinking behind it when I post video two of this application development series.

That’s really it for this last session. Hope the summary helps for anybody working through building CLI apps with Go and Cobra. Until next session, happy thrashing code.

UPDATED SERIES PARTS

  1. Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files (this post)

How to Become a Data Scientist in 6 Months: A Hacker's Approach to Career Planning

I’ve been digging around for some good presentations, giving a listen to some videos from PyData London 2016. Out of the videos I watched, I chose this talk by Tetiana Ivanova. This is a good talk for those looking to get into working as a data scientist. Here’s a few notes I made while watching the video.

Why did Tetiana hack her career? Well, first off, she was a mathematician and she wanted to get out of the world of academia and make a change.

…more, including references and links, below the video…

Why does our society support higher education? Tetiana emphasizes a number of things that intertwine in a person's life – social pressure, historical changes, prestige, and status signaling – to dictate their career options and provide choices in direction. I found this fascinating, as I've read about, and routinely notice, that higher education at an established and known school is largely about prestige and status signaling more than the actual education itself. As we see all the time in society, people all too often gain a position of status and power based on the prestige arbitrarily associated with them because they went to this school or that school or some Ivy League fancy pants school.

But wrap all that up, and Tetiana outlines an important detail, “Prestige is exploitable!”

There are also some hard realities Tetiana bullet-points in a slide that I found worthy of note.

  • Set a realistic time frame.
  • Don’t trust yourself with sticking to deadlines.
  • Make a study plan.
  • Prepare for uncertainty.

I can't put enough emphasis on this list. There are also two points that I'd like to call out in even more detail.

First is "don't trust yourself with sticking to deadlines". The second is "prepare for uncertainty". If you expect to meet all your deadlines, you are going to dramatically increase your need to prepare for uncertainty, because you will simply not meet all of your deadlines. Many of us won't meet most of our deadlines, let alone all of them.

Tetiana continues to cover a number of details around willpower, self-management and self-organization, and an insightful take on the topic of nerd networking. Watch the talk; it's a good one and will provide a lot of insight, from a clearly introspective and intelligent individual, into charting and stepping into a data science – or possibly other – career path!

Jonathan Ellis talks about Five Lessons in Distributed Databases

Notes on the talk…

  1. If it’s not SQL it’s not a database. Watch, you’ll get to hear why… ha!

Then Jonathan covers the recent history (sort of recent, the last ~20ish years) of the industry and how we’ve gotten to this point in database technology.

  2. It takes 5+ years to build a database.

Add to that the tens of millions of dollars it takes over that period of time. Both are needed in droves: time and money.

…more below the video.

  3. The customer is always right.

Even when they’re clearly wrong, they’re largely right.

For numbers 4 and 5 you'll have to watch the video. Lots of good stuff in this video, including comparisons of Cosmos DB, DynamoDB, Apache Cassandra, and DataStax Enterprise, how these distributed databases work, their performance (third-party metrics are shown), and more details!

7 Tips for Creating Technical Content on The Open Source Show

I’m in a video with the rad Christina Warren!

In this video we talk about 7 tips for creating technical content in a kind of rapid-fire back and forth of ideas. Recording this was great, as the way we did it gave us a chance to put these ideas together like this. Since both of us have presented, and helped people with presenting, more times than we've been able to keep count of, the crew leading this endeavor basically said "start brainstorming" as if the show were live. We did that, and it worked rather well. Then the artists, animators, and crew went to work splicing and dicing the video into this watchable format! Lotsa fun, enjoy.

I've elaborated on each of these in the past.

Those 7 Tips for Creating Technical Content

  1. Always be learning!
  2. Know your audience.
  3. Bring ALL the connectors.
  4. Backup, backup, backup your presentations.
  5. Write continuously, regularly, and tell yourself a story.
  6. Try tutorials on a fresh machine.
  7. Observe how people create and present content.

Even more details on all these in the future.


New Live Coding Streams and Episodes!

I've been working away in Valhalla on the next episodes of Thrashing Code TV and subsequent content for upcoming Thrashing Code Sessions on Twitch (follow) and YouTube (subscribe). Below I've broken out the main streams and shows that I'll be putting together over the next days, weeks, and months, with links to sessions and shows already recorded. If you've got any ideas, questions, or thoughts, just send them my way.

Colligere (Next Session)

Coding has been going a little slow, in light of other priorities and all, but it'll still be one of the featured projects I'll be working on. Past episodes are available here; however, join in on Friday and I'll catch everybody up, so you can skip the past episodes if you aren't after specific details and just want to join in on future work and sessions.

In this next session, this Friday the 9th at 3:33pm PST, I’m going to be working on reading in JSON, determining what type of structure the JSON should be unmarshalled into, and how best to make that determination through logic and flow.

Since Go needs something specific – a structure – to unmarshal JSON into, I'll be working on determining a good way to pre-read information in the schema configuration files (detailed in the issue listed below) so that a logic flow can be implemented that then kicks off the standard Go JSON unmarshalling of the object. This will likely end up including some hackery around reading in JSON without the assistance of the Go JSON library. Join in and check out what solution(s) I come up with.
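One way this could go – just a sketch ahead of the session, not the approach I've committed to, and one that sticks to the standard library rather than the hackery mentioned above – is a two-pass read: unmarshal only a probe field first, then unmarshal the whole document into whichever struct that field indicates. The "type" field and the schema structs below are made up for illustration and aren't the actual Colligere configuration format.

// A rough sketch of pre-reading a JSON config to decide which struct to
// unmarshal it into. The "type" field and the Row/Column structs here are
// assumptions for illustration, not the actual Colligere schema.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type RowSchema struct {
	Type string   `json:"type"`
	Rows []string `json:"rows"`
}

type ColumnSchema struct {
	Type    string   `json:"type"`
	Columns []string `json:"columns"`
}

func main() {
	raw := []byte(`{"type": "columns", "columns": ["id", "name", "created"]}`)

	// First pass: unmarshal just enough to branch on the "type" field.
	var probe struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(raw, &probe); err != nil {
		log.Fatal(err)
	}

	// Second pass: unmarshal into the structure the type indicates.
	switch probe.Type {
	case "rows":
		var s RowSchema
		if err := json.Unmarshal(raw, &s); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("row schema: %+v\n", s)
	case "columns":
		var s ColumnSchema
		if err := json.Unmarshal(raw, &s); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("column schema: %+v\n", s)
	default:
		fmt.Println("unknown schema type:", probe.Type)
	}
}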

The specific issue I'll be working on is located on GitHub here. I'm going to continue working on these sessions, though the schedule will be a little loose, and I'll start working on the Colligere CLI primarily on Saturdays at 10am. So you can put that on your schedule and join me then for hacks. If you'd like to contribute, as always, reach out via here, @Adron, or via the GitHub Colligere repository and let's discuss what you'd like to add.

Getting Started with Go

This set of sessions, which I've detailed in "Getting Started with Go", I'll be starting on January 12th at 4pm PST. You can get the full outline and further details of what I'll be covering on my "Getting Started with Go" page, and of course I've posted details for the first of these sessions on the Twitch event page here.

  • Packages & the Go Tool – import paths, package declarations, blank imports, naming, and more.
  • Structure – names, declarations, variables, assignments, scope, etc., etc.
  • Basic Types – integers, floats, complex numbers, booleans, strings, and constants (a tiny sample follows the list).
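To give a flavor of the ground those first sessions will cover, here's a tiny sample touching package declarations, imports, variables, constants, and the basic types listed above.

// A tiny sketch of the ground the first sessions will cover: package
// declarations, imports, variables, constants, and a few basic types.
package main

import (
	"fmt"
	"math"
)

const greeting = "hello, gophers" // an untyped string constant

func main() {
	var count int = 42 // explicit type
	ratio := 3.14      // type inferred as float64
	c := complex(1, 2) // complex128
	ok := count > 0    // boolean

	fmt.Println(greeting, count, ratio, c, ok, math.Pi)
}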

Infrastructure as Code with Terraform and Apache Cassandra

I'll be continuing the Terraform, Bash, and related configuration and coding of using infrastructure-as-code practices to build out, maintain, and operate Apache Cassandra distributed database clusters. At some point I'll likely add Kubernetes and some additional on-the-metal cluster systems, and start looking at Kubernetes Operators and how one can manage distributed systems on Kubernetes in that on-the-metal environment. But for now, these sessions will continue real soon, as we've got some systems to build!

Existing episodes of this series you can check out here.

Getting Started with Multi-model Databases

This set of sessions I've detailed in "Getting Started with a Multi-model Database", and this one I'll also be starting in the new year. Here's the short rundown of the next several streams. So stay tuned, subscribe or follow my Twitch and YouTube, and of course subscribe to the Composite Code blog (should be to the left, or if on mobile click the little vertical ellipses button).

  1. An introduction to a range of databases: Apache Cassandra, PostgreSQL and SQL Server, Neo4j, and … an in-memory database. Kind of like 7 Databases in 7 Weeks, but a bunch of databases in just a short session!
  2. An Introduction – Apache Cassandra and what it is, how to get a minimal cluster started, options for deploying something quickly to try it out.
  3. Adding to Apache Cassandra with DataStax Enterprise, gaining analytics, graph, and search. In this session I’ll dive into what each of these capabilities within DataStax Enterprise give us and how the architecture all fits together.
  4. Deployment of Apache Cassandra and getting a cluster built. Options around ways to effectively deploy and maintain Apache Cassandra in production.
  5. Moving to DataStax Enterprise (DSE) from Apache Cassandra. Getting a DSE Cluster up and running with OpsCenter, Lifecycle Manager (LCM), and getting some queries tried out with Studio.