Twitz Coding Session in Go – Cobra + Viper CLI with Initial Twitter + Cassandra Installation

Part 2 of 3 – Coding Session in Go – Cobra + Viper CLI for Parsing Text Files, Retrieval of Twitter Data, Exports to various file formats, and export to Apache Cassandra.

Part 1 is available here: “Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files”.

Updated links to each part will be posted at the bottom of this post when I publish them. For code, the written walkthrough, and the like, scroll down below the video and timestamps.

Hacking Together a CLI Installing Cassandra, Setting Up the Twitter API, ENV Vars, etc.

0:04 Kick ass intro. Just the standard rocking tune.

3:40 A quick recap. Check out the previous write-up of this series, “Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files”.

4:30 Beginning the completion of the twitz parse command for exporting out to XML, JSON, and CSV (the text export was already done in the previous session). This segment also includes a number of refactorings to clean up the functions, break out the control structures, and make the code more readable.

At the end of refactoring, twitz parse came out like this. The completed list is put together by calling the buildTwitterList() function, which actually lives in the helpers.go file. It then prints that list out as is and checks whether a file export should be done. If there is a configuration setting set for file export, that process starts with a call to exportParsedTwitterList(exportFilename string, exportFormat string, ... etc ... ). Then a simple single-level if/else control structure determines which format to export the data to, with a call to the respective export function to do the actual export and write the file to the underlying system. There’s more refactoring that could be done, but for now this is cleaned up pretty nicely considering the splattering of code I started with at first.
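To give a rough idea of that single-level dispatch, here’s a minimal sketch. The TwitterList type and the function internals here are illustrative stand-ins I’ve made up, not the exact code from the repo:

```go
package main

import (
	"encoding/csv"
	"encoding/json"
	"encoding/xml"
	"fmt"
	"strings"
)

// TwitterList is an illustrative stand-in for the parsed handle list.
type TwitterList struct {
	XMLName xml.Name `xml:"twitterers" json:"-"`
	Handles []string `xml:"handle" json:"handles"`
}

// exportParsedTwitterList dispatches on the export format with a simple
// single-level control structure, then serializes the list accordingly.
func exportParsedTwitterList(list TwitterList, format string) (string, error) {
	switch strings.ToLower(format) {
	case "json":
		b, err := json.Marshal(list)
		return string(b), err
	case "xml":
		b, err := xml.Marshal(list)
		return string(b), err
	case "csv":
		var sb strings.Builder
		w := csv.NewWriter(&sb)
		for _, h := range list.Handles {
			if err := w.Write([]string{h}); err != nil {
				return "", err
			}
		}
		w.Flush()
		return sb.String(), w.Error()
	default:
		return "", fmt.Errorf("unknown export format: %s", format)
	}
}

func main() {
	list := TwitterList{Handles: []string{"adron", "thrashingcode"}}
	out, err := exportParsedTwitterList(list, "json")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints {"handles":["adron","thrashingcode"]}
}
```

In the actual project the result gets written to a file instead of returned as a string, but the shape of the dispatch is the same.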

50:00 I walk through a quick install of an Apache Cassandra single node that I’ll use for development later. I also quickly show how to start and stop it post-installation.

Reference: Apache Cassandra, Download Page, and Installation Instructions.

53:50 Choosing the go-twitter API library for Go. I look at a few real quick just to ensure that’s the library I want to use.

Reference: go-twitter library

56:35 At this point I go through how I set up a Twitter App within the API interface. This is a key part of the series where I take a look at the consumer keys, access token, and access token secret, where they’re at in the Twitter interface, and how one needs to reset them if they’ve just shown the keys on a stream (like I just did, shockers!)

57:55 Here I discuss and show where to set up the environment variables inside of the Goland IDE for building and executing the CLI. Once these are set up they’ll be the main mechanism I use in the IDE to test the CLI as I go through building out further features.

1:00:18 Updating the twitz config command to show the keys that we just added as environment variables. I also set these up with some string parsing that cuts off the end of the secrets, so the whole variable value isn’t shown but just enough to confirm that it is indeed a set configuration or environment variable.
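The cutting itself is just simple string slicing. A minimal sketch of that kind of helper, with a function name of my own choosing rather than whatever ended up in the repo:

```go
package main

import "fmt"

// maskSecret shows only the first few characters of a configuration
// value: enough to confirm it's set without leaking the whole secret.
func maskSecret(secret string, visible int) string {
	if secret == "" {
		return "(not set)"
	}
	if len(secret) <= visible {
		return secret
	}
	return secret[:visible] + "..."
}

func main() {
	fmt.Println(maskSecret("SUPERSECRETACCESSTOKEN", 4)) // prints SUPE...
	fmt.Println(maskSecret("", 4))                       // prints (not set)
}
```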

1:16:53 At this point I work through some additional refactoring of functions to clean up some of the code mess that exists. Using Goland’s extract method feature and other tooling I work through several refactoring efforts that clean up the code.

1:23:17 Copying a build configuration in Goland. A handy little thing to know you can do when you have a bunch of build configuration options.

1:37:32 At this part of the video I look at the app-auth example in the code library, but I gotta add the caveat that I run into problems using the exact example. I work through it and get to the first error messages that anybody would get to, assuming they’re using the same examples. I get them fixed in the next session; this segment of the video, however, provides a basis for my pending PRs and related work I’ll submit to the repo.

The remainder of the video is trying to figure out what is or isn’t exactly happening with the error.

I’ll include the working findem code in the next post on this series. Until then, watch the wrap up and enjoy!

1:59:20 Wrap up of video and upcoming stream schedule on Twitch.


    1. Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files
    2. Twitz Coding Session in Go – Cobra + Viper CLI with Initial Twitter + Cassandra Installation (this post)


Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files

Part 1 of 3 – Coding Session in Go – Cobra + Viper CLI for Parsing Text Files, Retrieval of Twitter Data, Exports to various file formats, and export to Apache Cassandra.

Updated links to each part will be posted at the bottom of this post when I publish them. For code, the written walkthrough, and the like, scroll down; I’ll have code samples toward the bottom of the timestamps under the video, so scroll, scroll, scroll.

3:40 Stated goals of the application. I go through a number of features that I intend to build into this application, typing them up in a text doc. The following are the items I added to the list.

  1. The ability to import a text file of Twitter handles that are mixed in among a bunch of other text.
  2. The ability to clean up that list.
  3. The ability to export the cleaned up list of handles to a CSV, JSON, XML, or plain text file.
  4. The ability, using the Twitter API, to pull data from the bio, latest tweets, and other related information.

8:26 Creating a new Github repo. A few tips and tricks for creating a new repo on Github.

9:46 Twitz is brought into existence! Woo
~~ I have to re-set up my SSH key real quick. This is a quick tutorial if you aren’t sure how to do it; otherwise skip forward to the part where I continue with project setup and initial coding.

12:40 Call out for @justForFunc and Francesc’s episode on Cobra!

Check out @Francesc’s “Just for Func”. It’s a great video/podcast series of lots and lots of Go!

13:02 Back to project setup. Cloning the repo, getting the initial README.md, .gitignore file setup, and related collateral for the basic project. I also add Github’s “issues” files via the Github interface and rebase those changes in later.
14:20 Adding some options to the .gitignore file.
15:20 Set dates and copyright in license file.
16:00 Further setup of project, removing WIKIs, projects, and reasoning for keeping issues.
16:53 Opening Goland up after an update. Here I start going through the specific details of how I set up a Go project in Goland, setting configuration and related collateral.
17:14 Setup of Goland’s golang Command Line Launcher.
25:45 Introduction to Cobra (and first mention of Viper by association).
26:43 Installation of Cobra with go get and the gotchas (remember to have your paths set, etc).
29:50 Using Cobra CLI to build out the command line interface app’s various commands.
35:03 Looking over the generated code and commenting on the comments and their usefulness.
36:00 Wiring up the last bits of Cobra with some code, via main func, for CLI execution.
48:07 I start wiring up the Viper configuration at this point. Onward from here it’s coding, coding, and configuration, and more coding.

Implementing the `twitz config` Command

1:07:20 Confirming config and working up the twitz config command implementation.
1:10:40 First execution of twitz config, where I describe how Cobra’s documentation strings work through the --help flag and get pulled into other help output based on the need for the short or long description.

The basic config code, when implemented, looked something like this at the end of the session. This snippet of code doesn’t do much beyond display the configuration variables from the configuration file. Eventually I’ll add more to this file so the CLI can be easier to debug when setting it up.

package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// configCmd represents the config command
var configCmd = &cobra.Command{
	Use:   "config",
	Short: "A brief description of your command",
	Long: `A longer description that spans multiple lines and likely contains examples
and usage of using your command. For the custom example:
Cobra is a CLI library for Go that empowers applications.
This application is a tool to generate the needed files
to quickly create a Cobra application.`,
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Printf("Twitterers File: %s\n", viper.GetString("file"))
		fmt.Printf("Export File: %s\n", viper.GetString("fileExport"))
		fmt.Printf("Export Format: %s\n", viper.GetString("fileFormat"))
	},
}

func init() {
	// Registration of the command, as generated by the cobra CLI tool.
	rootCmd.AddCommand(configCmd)
}

Implementing the `twitz parse` Command

1:14:10 Starting on the implementation of the twitz parse command.
1:16:24 Inception of my “helpers.go” file to consolidate and clean up some of the code.
1:26:22 REGEX implementation time!
1:32:12 Trying out the site.
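The regex work from 1:26:22, pulling handles out of mixed text, can be sketched roughly like so. The exact pattern and function names in the repo may differ; this is an illustrative version:

```go
package main

import (
	"fmt"
	"regexp"
)

// handlePattern matches Twitter-style handles: an @ followed by up to
// fifteen word characters (letters, digits, underscore).
var handlePattern = regexp.MustCompile(`@(\w{1,15})`)

// extractHandles pulls the unique handles out of a blob of mixed text,
// preserving the order in which they first appear.
func extractHandles(text string) []string {
	seen := make(map[string]bool)
	var handles []string
	for _, match := range handlePattern.FindAllStringSubmatch(text, -1) {
		handle := match[1]
		if !seen[handle] {
			seen[handle] = true
			handles = append(handles, handle)
		}
	}
	return handles
}

func main() {
	text := "Follow @Adron and @ThrashingCode, also @Adron again."
	fmt.Println(extractHandles(text)) // prints [Adron ThrashingCode]
}
```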

I’ll post the finished code and write up some details on the thinking behind it when I post video two of these application development sessions.

That’s really it for this last session. Hope the summary helps for anybody working through building CLI apps with Go and Cobra. Until next session, happy thrashing code.


  1. Twitz Coding Session in Go – Cobra + Viper CLI for Parsing Text Files (this post)

Chapter 2 in My Twitch Streaming

A while back I started down the path of getting a Twitch channel started. At this point I’ve gotten a channel set up, which I’ve dubbed Thrashing Code, though it still just has “adronhall” all over it. I’ll get those details further refined as I work on it more.

Today I recorded a new Twitch stream about doing a Twitch stream and created an edited video of all the pieces and cameras and angles. It could prospectively help people get started; it’s just my experiences so far and my efforts to get everything connected right. The actual video stream recording is available, and I’ll leave it on the channel. The video I edited will also be available, and I’ll post a link here.

Tomorrow will be my first official Twitch stream at 3pm PST. If you’re interested in watching, check out my Twitch profile, give it a follow, and it’ll ping you when I go live. This first streaming session, or episode, or whatever you want to call it, will include a couple topics. I’ll be approaching these topics from the perspective of someone just starting, so if you join, help hold me to that! Don’t let me skip ahead, and if you notice I left out something key, please join and chat at me during the process. I want to make sure I’m covering all the bases as I step through toward achieving the key objectives. Which speaking of…

Tomorrow’s Mission Objectives

  1. Create a DataStax Enterprise Cassandra Cluster in Google Cloud Platform.
  2. Create a .NET project using the latest cross-platform magical bits that will have a library for abstracting the data source(s), a console interface for using the application, and of course a test project.
  3. Configure & connect to the distributed database cluster.

Mission Stretch Objectives

  1. Start a github repo to share the project with others.
  2. Setup some .github templates for feature request issues or related issues.
  3. Write up some Github issue feature requests and maybe even add some extra features to the CLI for…??? no idea ??? determine 2-3 during the Twitch stream.

If you’d like to follow along, here’s what I have installed. You’re welcome to follow along with the same tooling I’ve got here or a variation of other things. Feel free to bring up tooling if you’re curious about it via chat and I’ll answer questions where and when I can.

  • Ubuntu v18.04
  • .NET core v2.1
  • DataStax Enterprise v6

Quick San Francisco Trip, Multi-Cloud Conversations, and A Network of Excellent People to Follow

A Quick Trip

Boarded the 17x King County Metro bus at 6:11am yesterday. The trip was entirely uneventful until we arrived in Belltown in downtown Seattle. A guy with only his underwear on boarded the bus. Yup, you got that right, just like those underwear nightmares people have! That guy was living this situation for whatever unknown reasons. He didn’t make too much fuss though and eventually settled into one of the back seats on our packed bus.

Downtown, as I got off, I was pleasantly surprised that Victrola was open in the newly leased Amazon building near Pine & Pike on 3rd. I remember it being built and opening, but for some reason had completely forgotten it was there until today. Having a Victrola at this location sets it up perfectly for the mornings one needs to get to the airport. It’s located in a way that you can get off any connecting bus, grab an espresso at Victrola, and then enter the downtown tunnel to board LINK to the airport. Previously, the only option was really to wait until you get to the airport and buy some of the lackluster espresso coffee trash options they have at SEATAC.

Once I boarded the LINK I got into some notes for the upcoming Distributed Data Show that I would be recording while in San Francisco. But also got into a little review of some Node.js JavaScript code that I’d pulled down previously. I’ve been hard pressed to get into the code base and add some updates, logging, and minor logic for Killrvideo. Hopefully today, or maybe even this evening while I’m on the rest of this adventure.

All the things in between then occurred. The flight to San Jose, the oddball Taxi ride to Santa Clara DataStax HQ, then the Lyft ride to Caltrain to ride into downtown San Francisco to my hotel. Twas’ an excellently chill ride into the city and even a little productive. A good night of sleep and then up and at em’. Headed to the nearest Blue Bottle, which was on the way to my next effort of the trip. Blue Bottle was solid as always, and into the studios I went.

Multi-Cloud Conversations

At this point of this quick trip I’ve just finished shooting a few new episodes of the Distributed Data Show (subscribe) with Jeff Carpenter (@jscarp) and Amanda Moran (@amandadatastax). Just recently I also got to shoot an episode with Patrick McFadin (@patrickmcfadin) too. We’ve been enjoying a number of conversations around multi-cloud considerations, what going multi-cloud means, and even what exactly multi-cloud really means in the first place. It’s been fun, and now I’m putting together some additional blog posts and related material to follow the episodes with more detail. So keep an eye out for the Distributed Data Shows. You can also take the lazy route and subscribe to be notified when new episodes are released, or you could even set your calendar for every Tuesday, because we follow an old school, traditional schedule approach to episode releases.

I’ve included a recap on a few of the recent episodes below, which you may want to check out just to get an idea of what’s to come. Great as a listen while commuting, something to put on while you’re relaxing a bit but want to think about or listen to a conversation on a topic, or just curious what’s up!

In this last episode DataStax Vanguard Lead Chelsea Navo (@mazdagirluk) joined in to talk about how the team helps enterprises meet the challenges posed by disrupting technology and competitors. Some of the highlights of the conversation include:

1:57 Defining what exactly enterprise transformation is.

4:00 Trends including retooling batch workloads around real-time requirements, handling larger data sets, and the uber popular use of streaming data we see today.

7:37 Techniques on getting teams trained up on cutting-edge technology.

9:46 There’s an argument for “skateboard solutions“! I now want to write up a whole practice around this!

16:27 Data modeling. Challenges of. Challenges around. Data modeling, it’s a good topic of exploration.

25:56 The data layer is still typically the hardest part of your application! (The assertion is made; agree, disagree, or ??)

Continue reading “Quick San Francisco Trip, Multi-Cloud Conversations, and A Network of Excellent People to Follow”

Getting Started with Twitch | Twitch Thrashing Code Stream

I’d been meaning to get started for some time. I even tweeted, just trying to get further insight into what and why people watch Twitch streams and of course why and who produces their own Twitch streams.

With that I set out to figure out how to get the right tooling setup for Twitch streaming on Linux and MacOS. Here’s what I managed to dig up.

First things first, and somewhat obviously, go create a Twitch account. The sign up link is up in the top right corner. Continue reading “Getting Started with Twitch | Twitch Thrashing Code Stream”

A Day in The Life of

So I sat down and hacked up a new version of my site. I snagged a site theme and skin from Theme Forest and ran with it. Broke apart each of a few sections to get a minimally viable site up within 24 hours. I got interrupted a few times with a few other things I needed to wrap up, more about those things later. For now I’ve put the site together, so check it out. I also put together a video of the hack session during various stages of getting the site live.

During the video I also have a few excursions away from the code to help stay focused on the code. At one point I’m actually working on the Junction app too. Also, keep an eye on it and you can see my Sublime 2 usage, iMac, Lenovo Carbon X1, Ubuntu, and a whole slew of other tech. More on all those things too, for now… here’s the video.

…and yeah, no real code complexities or such, mostly an excuse to make a video to some oddball dubstep from the scraps of video I put together while building the site. Hope it was entertaining, cheers – Adron.

Conference Recap – The awe inspiring quality & number of conferences in Cascadia!

Rails 2013 Conf (April 29th-May 1st)

The Rails 2013 Conference kicked off for me with a short bike ride through town to the conference center. The Portland conference center is one of the most connected conference centers I’ve seen; light rail, streetcar, bus, bicycle boulevards, trails, and of course pedestrian access are all available. I personally have no idea if you can drive to it, but I hear there is parking & such for drivers.

Rails Conf however clearly places itself in the category of a conference of people that give a shit! This is evident in so many things among the community, from the inclusive nature creating one of the most diverse groups of developers, to the fact that they handed out 7-day transit passes upon picking up your Rails Conf pass!

The keynote was by DHH (obviously, right?). He laid out where the Rails stack is, covered some roadmap topics, and drew out how much the community had grown. Overall, Rails is now in a state of maintaining and growing the ideal. Considering its inclusive nature I hope to see it continue to grow and to increase options out there for people getting into software development.

Railsconf 2013

I also met a number of people while at the conference. One person I ran into again was Travis, who lives out yonder in Jacksonville, Florida and works with Hashrocket. Travis & I, besides the pure metal, have Jacksonville as common stomping ground. Last year I’d met him while the Hash Rocket Crew were in town. We discussed Portland, where to go and how to get there, plus what Hashrocket has been up to in regards to use around Mongo, other databases and how Ruby on Rails was treating them. The conclusion, all good on the dev front!

One of these days though, the Hashrocket crew is just gonna have to move to Portland. Sorry Jacksonville, we’ll visit one day. 😉

For the latter half of the conference I actually dove out and headed down for some client discussions in the country of Southern California. Nathan Aschbacher headed up Basho attendance at the conference from this point on. Which reminds me, I’ve gotta get a sitrep with Nathan…

RICON East (May 13th & 14th)

Ok, so I didn’t actually attend RICON East (sad face); I had far too many things to handle over here in Portlandia – but I watched over 1/3rd of the talks via the 1080p live stream. The basic idea of the RICON conferences is a conference series focused on distributed systems. Riak is of course a distributed database, falling into that category, but RICON is by no means merely about Riak. At RICON the talks range from competing products to academic heavy hitting talks about how, where, and why distributed systems are the future of computing. They may touch on things you may be familiar with, such as:

  • PaaS (Platform as a Service)
  • Existing databases and how they may fit into the fabric of distributed systems (such as Postgresql)
  • How to scale distributed across AWS Cloud Services, Azure or other cloud providers


As the videos are posted online I’ll be providing some blog entries around the talks. It will however be extremely difficult to choose the first to review; just as with RICON back in October of 2012, every single talk was far above the median!

Two talks immediately stand out. The first was Christopher Meiklejohn’s (@cmeik) talk, doing a bit o’ proofs and all, in realtime and off the cuff. It was merely a 5 minute lightning talk, but holy shit this guy can roll through and hand off intelligence via a talk so fast it blew my mind!

The other talk was Kyle’s, AKA @aphyr, who went through network partitions with databases. Basically destroying any comfort you might have with your database being effective at getting reads in a partition event. Kyle knows his stuff, that is without doubt.

There are many others, so subscribe, keep reading, and I’ll be posting them in the coming weeks.

Node PDX 2013 (May 16th & 17th)

Horse_js and other characters, planning some JavaScript hacking!

Holy moley we did it, again! Thanks to EVERYBODY out there in the community for helping us pull together another kick ass Node PDX event! That’s two years in a row now! My fellow cohorts Troy Howard (@thoward37) and Luc Perkins (@lucperkins) hustled like some crazed worker bees to get everything together and ready – as always a lot comes together at the last minute, and we don’t get a wink of sleep until it’s all done and everybody has had a good time!

Node PDX Sticker Selection was WICKED COOL!

Node PDX, it’s pretty self descriptive. It’s a Node.js conference that also includes topics on hardware, JavaScript on the client side, and a host of other topics. It’s also Portland specific. We have Portland local roasted coffee (thanks Ristretto for the pour over & Coava for the custom roast!), Portland beer (thanks brew capital of the world!), Portland food (thanks Nicolas’!), Portland DJs (thanks Monika Mhz!), Portland bands, and tons of Portland weirdness all over the place. It’s always a good time! We get the notion that Node PDX, with all the Portlandia spread all over it, is one of the reasons that 8-12 people move to and get hired in Portland after this conference every year (it might become a larger range, as there are a few people planning to make the move in the coming months!).

A wide angle view of Holocene where Node PDX magic happened!

The talks this year increased in number, but maintained a solid range of topics. We had a node.js disco talk, client side JavaScript, sensors and node.js, and even heard about people’s personal stories of how they got into programming JavaScript. Excellent talks, and as with RICON, I’ll be posting a blog entry and adding a few penny thoughts of my own to each talk.

Polyglot Conference 2013 (May 24th Workshops, 25th Conference)

Tea & Chris kick off Polyglot Conference 2013!

A smiling crowd!

Polyglot Conference was held in Vancouver again this year, with clear intent to expand to Portland and Seattle in the coming year or two. I’m super stoked about this and will definitely be looking to help out – if you’re interested in helping let me know and I’ll get you in contact with the entire crew that’s been handling things so far!

Polyglot Conference itself is a yearly conference held as an open spaces event. The way open space conferences work is described well on Wikipedia, where it is referred to as Open Space Technology.

The crowds amass to order the chaos of tracks.

The biggest problem with this conference is that it’s technically only one day. I hope that we can extend it to two days for next year – and hopefully even have the Seattle and Portland branches go with an extended two day itinerary.

A counting system...

This year the breakout sessions that I attended included “Dev Tools”, “How to Be a Better Programmer”, “Go (Language) Noises”, and other great sessions, and I threw down a session of my own on “Distributed Systems”. Overall, great time and great sessions! I had a blast and am looking forward to next year.

By the way, I’m not sure if I’ve mentioned this at the beginning of this blog entry, but this is only THE BEGINNING OF SUMMER IN CASCADIA! I’ll have more coverage of these events and others coming up, the roadmap includes OS Bridge (where I’m also speaking) and Portland’s notorious OSCON.

Until the next conference, keep hacking on that next bad ass piece of software, cheers!