It’s happening! It’s really happening y’all! People have opinions and things to say!
I’m starting a new segment for my Twitch channel – and by extension the prospective podcast that will go along with it as lagniappe – and I’m looking for people who have something they’d like to converse about!
If you’re in the Seattle area – visiting, living, or otherwise – and would like to join me on a live stream sometime, this is your invite! If you’ve just gotten into programming, started handling infrastructure, or are dealing with big data in those databases, let’s talk. I want to hear about your interest in what you do, what use cases you have, what the mission is, and what innovative ways you aim to use to solve the problems you and/or your organization are working on.
As I was saying, programming, infrastructure, and databases are all open topics for the live stream, though there are a few topics I have an extra interest in. Come join me and tell me – and by proxy have a conversation with the audience – about database tech, databases, and how your company is using databases and managing all of its data, especially if that database happens to be Apache Cassandra, DataStax Enterprise, or some other large-scale distributed database or multi-model database system. I want to hear about what you’re building, so let’s get together, have a conversation, and let our audience pull up a chair to the table for questions, comments, and more!
I’ve been doing a lot more coding, thanks largely to the discipline that Twitch has brought to my day. It seems almost surprising to me at this point, because Twitch started similarly to the way Twitter did for me. You see, I thought at first Twitter was the dumbest thing that had happened in ages. Arguably, it’s come full circle and I kind of feel the same way about Twitter now, but in the decade in between (yes, Twitter is over 10 years old!) Twitter has brought me connection, opportunities, and so much more. I couldn’t have imagined a lot of what I’ve been able to pull together because of Twitter. It’s still useful in many ways for this, albeit I, like all of us, am at risk of suffering the idiocy of today’s politics and political cronies, and the dog-piling trash pile that follows them onto Twitter.
I’m not leaving Twitter any time soon, but I’ve definitely put it on a very short leash, and limited what impact it does or doesn’t have on my day-to-day flow.
Twitch FTW
Amazingly, however, a new tool has come into being that turns out to be both social and productive, not that it intended to be either: coding on Twitch. Don’t get me wrong, I game, I just don’t game socially or on Twitch; what I do is code on Twitch, with a fair dose of hacking, breaking things, and then figuring out how to make them work. All the while, I along with others have created a pretty excellent developers’ community there on Twitch, and it seems to be growing all the time. Twitch at this point has become a focal point that has the benefits without all the annoying garbage that Twitter has these days, while adding the vast and hugely important fact that I can do things, be productive, chit chat, and generally get shit done all while I’m streaming.
With that, let’s talk about some of the recent notes and information I’ve been putting together to make Twitch even more useful. My first motive was to keep track of all the things I was doing, the hardware I was putting together, and related items, but then another purpose grew out of all that note taking: it became obvious that this repository of information could be useful for other people. Here’s a survey of the things I’ve added so far – hope they’re helpful to those of you digging into streaming out there!
I added some badges to identify various elements of information about the repo in the README.md.
Is it maintained? Yup. Contributors? So far just me. Zero issues filed, but please feel free to add an issue or two. Markdown? Yup. And there is indeed a Trello Board! The Trello Board is the key to insight, inspection, and what I’ve got going on in a number of my repositories. It’s where I’m keeping track of all the projects, what’s next, and what’s up in the queue for the blog (this one right here) – at least in the context of the big code-heavy posts or video reviews of sessions with code, extra commentary, and related content. If you want to get involved in any of the repos, just let me know and I’m happy to walk through whatever, and even get you added to the Trello board so we can work together on code.
My main machine is now a Dell XPS 15, which I fought through getting Linux running on, and now that I have, it’s been an absolutely stellar machine. I’ve also added additional monitor and port replicator/docking station gear to make it even more usable. The actual page where I’ve got the details listed is the Dell XPS 15 item on the hardware page of the repo.
Along with the XPS 15 I wrote up coverage of the unboxing via video and blog entry. After a few weeks I also wrote up the conflict I had getting Linux running and removing Windows 10. In addition to the XPS 15 though I do use a MacBook from 2015 as my primary Mac machine, with an iMac from 2013 available as backup. Both machines are still resoundingly solid and performant enough to get the job done. Rounding out my fleet of machines is a Dell XPS 13 (covered here and here with the re-review).
For screens I have one at my office and one at home. They’re almost identical: ultra-widescreen curved displays from LG running 3880×1440 resolution. These make keeping an eye on chat, OBS, and all sorts of other monitoring while coding, gaming, or whatever a breeze!
Ex 1: Just viewing a giant OBS view to get everything sorted out before starting a stream.
Ex 2: OBS with a VM running, and Twitch chat, dashboard, etc. to the right. This way I can work, see the stream, and see chat all at the same time.
The docking stations and/or port replicators – whatever one calls these things these days – also bring all of this tech together for me. There are a couple I’ve tried and retired already (unfortunately, cuz dammit that cost some money!), and others that I use in some scenarios but not others.
My main docking station contraption is the CalDigit TS3 – shout out to James and others for suggesting it. I got to this docking station by way of the Dell TB16, which for Linux, and kind of for Windows, is an unstable mess. Awesome potential if it worked, but it doesn’t, so I tried out a USB-C pluggable option (in the tweet below) which had HDMI that was unfortunately limited in resolution. Having a wide screen made it – albeit super compatible with Linux – unusable too. So I finally upgraded to the CalDigit TS3 and WOW, the CalDigit is seriously, wickedly badass. Extra USB-C ports, USB 2/3 ports, power, and more all rolled into one. It even supplies some power to the laptop, though I keep the laptop plugged in since it’s kind of a power hog when the processors start chomping!
I’ve replaced it with this. Much smaller, lighter, and slightly easier to use. Albeit the USB-C can’t power the laptop. pic.twitter.com/sy3L1fbUnH
After trying out that USB-C pluggable option (the tweet above), I got the CalDigit into play. It’s really, really good – here’s a shot of it from various angles, with the extensive cables that I no longer have to plug directly into my laptop. Out of this also runs a 28-port powered USB hub (no picture), but just know I’ve got a crazy number of devices I routinely like to use!
That’s my main configuration when using the ultra-widescreens. It’s a good setup, very usable, and the 32 GB of memory in the laptop really gets put to use in this regard. Storage is another thing: I’ve got 1 TB in my laptop and another 1 TB in a USB-C Thunderbolt Samsung drive, which is practically as fast for most things – so much so that I attach it via the TS3’s USB-C and it’s screaming fast while adding that extra storage. So far I’ve primarily been using it to store all of my virtual machines or as video storage while I do edits.
There’s other gear too – check out the list – like the Rode Podcaster and other things. But that gear I’ll elaborate on some other time.
Another effort I’ve undertaken is recording meetups. Doing this means streaming several sources combined – i.e. picture-in-picture and all. One needs a camera that can focus on the speaker, ideally at least 1080p with at least some ability to work in less-than-ideal light. Next to that, a splitter and capture card to grab the slides! Once all those pieces come together, with a little OBS finesse one can get a pretty solid single-pass recording of a meetup. An example of one of my better attempts was the last meetup, “Does the Cloud Kill Open Source” with Richard Seroter. If you take a look at past talks in the Meetups Playlist you can see my iterative progress from one meetup to another!
Here’s the specific gear I’m using to get this done – at least so far; if and when it becomes financially reasonable I might upgrade some of it. That largely depends on what I can get more use out of beyond just streaming meetups.
Cords and Splitter – I picked up a selection of lengths and types so that I’d have wiring options for the particular environments the meetups would be located in. Generally speaking, 25 ft seems to be a safe maximum for HDMI. I’ve been meaning to check the actual specification on that, but for now it’s more than enough regardless.
The splitter wasn’t expensive at all ($16.99), and kind of surprised me considering the costs of the cables. Picture to the right, or above, or somewhere depending on mobile layout.
I needed capture cards for this – one for the line out of the splitter to capture the slides. The first I picked up based on suggestions focused on quality: the Avermedia ExtremeCap HDMI to USB 3 capture card. It’s really solid for higher resolutions and related capabilities. The second, a USB 3.0 HDMI HD game video capture card, I picked up based on price (it’s almost a third of the price), not particularly focused on quality. However, now that I’ve used both, they’re both capable and seem fine, so I might have been able to just buy two of the cheaper option.
The camera: ideally I’d have a much higher quality one, but the Canon VIXIA HF R800 camcorder has actually worked excellently. It’s a little less feature-rich for audio out and related things, but it zooms in well and can record at the same time I’m pulling the cam feed into the stream, so it’s always a nice way to have a backup of the talk.
At first I made the mistake of thinking that just the gear would be enough, but holy smokes, there were about a million other things I needed to note down. I created meetup.md to get the list going.
Jazz Influence Amidst the Heaviness!
As promised, some music. Not actually jazz, but heavily influenced by jazz, with progressive instrumentation and esoteric, expansive, exquisite playing by the band. As always, be prepared – my music referrals aren’t always gentle! Happy code streaming!
Updated links to each part will be posted at the bottom of this post when I publish them. For code, the written walkthrough, and the like, scroll down below the video and timestamps.
0:54 The thrashing introduction.
3:40 Getting started, with a recap of the previous sessions – but I’ve not got the sound on, so skip ahead to 5:20.
5:20 I notice and turn on the volume. Now I manage the recap, talking about some of the issues with the Twitter API. I step through setup of the app and getting the appropriate IDs for the Twitter API keys and secrets.
9:12 I open up the code base and review where the previous sessions got us: using Cobra with Go, plus the parsing and refactoring that was previously done.
10:30 Here I talk about configuration again and the specifics of getting it set up for running the application.
12:50 Talking about the Go fatal panic I was getting. The dependency reference to GitHub for the application was different than what was in the application, so it wasn’t showing the code that was actually executing. I show a quick fix and move on.
17:12 Back to the Twitter API, using the go-twitter library. Here I review the issue I was having the previous session with getting the active token, and what the fix was. I thought the library handled it, but that wasn’t the case!
19:26 Now I step through creating a function to get the active OAuth bearer token to use.
28:30 After deleting much of the code that didn’t work from the last session, I go about writing the code for retrieving Twitter results for the various passed-in Twitter accounts.
The bulk of the next section is where I work through a number of functions, do a little refactoring, answer some questions from the audience/Twitch chat (working on a way to get it into the video!), fight with some dependency tree issues, and get through a whole slew of silliness. Once that wraps up I get some things committed into the GitHub repo and wrap up the core functionality of the Twitz application.
58:00 Reviewing some of the other examples in the go-twitter library repo. I also do a quick review of the other function calls from the library that take action against the Twitter API.
59:40 I review and merge one of the PRs I submitted to the project itself, which adds documentation and a build badge to the README.md.
1:02:48 Here I add some more information about the configuration settings to the README.md file.
1:05:48 The Twitz page is now updated: https://adron.github.io/twitz/
1:06:48 Setup of the continuous integration for the project on Travis CI itself: https://travis-ci.org/Adron/twitz
1:08:58 Setup of the actual travis.yml file for Go. After this I go through a few stages of troubleshooting getting the build going, with some whitespace issues in the ole’ YAML file and such. Including, also, the famous casing issue! Ugh!
1:26:20 Here I start a wrap-up of what was accomplished in this session.
NOTE: Yes, I realize I spaced and forgot the feature where I export it out to Apache Cassandra. Yes, I will indeed have a future stream where I build out the part that exports the responses to Apache Cassandra! So subscribe, stay tuned, and I’ll get that one done ASAP!
1:31:10 Further CI troubleshooting, as one build is green and one build is yellow. More CI troubleshooting! Learn about the Travis YAML here.
1:34:32 Finished, just the badass outro now!
The Codez
In the previous posts I outlined two specific functions that were built out:
Part 1 – The config function for the twitz config command.
Part 2 – The parse function for the twitz parse command.
In this post I focus on updating both of these and adding additional functions: the bearer token retrieval for auth and identification against the Twitter API, plus other functionality. Let’s take a look at what the functions look like and read like after this last session’s wrap-up.
The config command basically ended up being five lines of fmt.Printf calls that print out the pertinent configuration values and environment variables needed for the CLI to be used.
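The gist embed that used to live here didn’t survive the page, so here’s a rough reconstruction of what a command like that looks like. The viper key names are my assumptions, not the exact settings from the stream, and wiring the command into Cobra’s root command is omitted:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// configCmd prints the pertinent configuration values and environment
// variables the CLI needs. The viper key names here are illustrative
// guesses, not the actual twitz settings.
var configCmd = &cobra.Command{
	Use:   "config",
	Short: "Prints the current twitz configuration.",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Printf("File of accounts to parse: %s\n", viper.GetString("file"))
		fmt.Printf("Export file: %s\n", viper.GetString("export"))
		fmt.Printf("Export format: %s\n", viper.GetString("format"))
		fmt.Printf("Consumer key set: %t\n", viper.GetString("consumer_key") != "")
		fmt.Printf("Consumer secret set: %t\n", viper.GetString("consumer_secret") != "")
	},
}
```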
The parse command changed a small bit. A fair amount of the functionality I refactored out into the buildTwitterList(), exportFile, and rebuildForExport functions. The buildTwitterList() function I put in the helpers.go file, which I’ll cover a little later. But in this file – which could still use some refactoring that I’ll get to – I have several pieces of functionality: the export-to-format functions, and the if/else-if logic of the exportParsedTwitterList function.
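Again, the embedded gist is gone, so here’s a minimal sketch of that if/else-if dispatch. The txt, json, and csv format names and the helper bodies are illustrative stand-ins, not the stream’s actual code:

```go
package main

import (
	"io/ioutil"
	"strings"
)

// rebuildForExport turns the parsed account list into writable content.
// Purely illustrative — the real function may build per-format structures.
func rebuildForExport(twitterList []string) string {
	return strings.Join(twitterList, "\n")
}

// exportFile writes the export content out to the named file.
func exportFile(filename string, content string) error {
	return ioutil.WriteFile(filename, []byte(content), 0644)
}

// exportParsedTwitterList dispatches on the configured format, per the
// if/else-if logic mentioned above. The format names are assumptions.
func exportParsedTwitterList(exportFilename string, format string, twitterList []string) error {
	content := rebuildForExport(twitterList)
	if format == "txt" {
		return exportFile(exportFilename+".txt", content)
	} else if format == "json" {
		return exportFile(exportFilename+".json", content)
	} else if format == "csv" {
		return exportFile(exportFilename+".csv", content)
	}
	return exportFile(exportFilename, content)
}
```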
Next up after parse, it seems fitting to cover the helpers.go file. First I have the check function, which simply wraps the routinely copied error-handling code snippet; check out the file directly for that. Below that I have the buildTwitterList() function, which gets the config setting for the file name to open and parse for Twitter accounts. The code reads the file, splits the contents of the text file into fields, then steps through and parses out the Twitter accounts. This is done with a regex (I know, I know, now I have two problems, but hey, this is super simple!). It basically finds fields that start with an @, verifies they’re alphanumeric with a possible underscore, and strips unnecessary characters off those fields. All of that wraps up by putting the fields into a string slice and returning it to the calling code.
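The gist is missing here too, so here’s a sketch of check and buildTwitterList as described. The “file” config key and the exact regex are assumptions:

```go
package main

import (
	"io/ioutil"
	"regexp"
	"strings"

	"github.com/spf13/viper"
)

// check wraps the routinely copied error handling snippet.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

// buildTwitterList reads the configured file and parses out anything that
// looks like an @handle. A sketch of the described logic; the "file"
// config key and this particular regex are assumptions.
func buildTwitterList() []string {
	contents, err := ioutil.ReadFile(viper.GetString("file"))
	check(err)

	// Fields starting with @, alphanumeric plus possible underscores.
	handle := regexp.MustCompile(`^@[A-Za-z0-9_]+`)

	var accounts []string
	for _, field := range strings.Fields(string(contents)) {
		if match := handle.FindString(field); match != "" {
			// Strip the @ prefix; trailing junk was excluded by the regex.
			accounts = append(accounts, strings.TrimPrefix(match, "@"))
		}
	}
	return accounts
}
```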
The next function in the helpers.go file is the getBearerToken function. This was a tricky bit of code. The function takes in the consumer key and secret from the Twitter app (check out the video at 5:20 for where to set that up). It returns a string and an error – an empty string if there’s an error – as shown below.
The code starts out by establishing a POST request against the Twitter API, asking for a token and passing the client credentials, and catches an error if that doesn’t work out. If it succeeds, the code then sets up the b64Token variable with the standard encoding functionality applied to the key-and-secret byte array (lines 9 and 10 of the file). After that the request gets its Authorization and Content-Type header fields set, then the request is made with http.DefaultClient.Do(req). The response is returned, or on failure an error with a nil response. Next up is the defer to ensure the response is closed when everything is done.
Next up, the JSON result is parsed (unmarshalled) into the v struct – which, as I write this, I realize I probably ought to rename to something that isn’t a single letter. But it works for now, and v has the pertinent AccessToken field, which is then returned.
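With the gist gone, here’s a reconstruction of that flow against Twitter’s oauth2/token endpoint. It follows the description above closely, but treat it as a sketch rather than the stream’s exact code:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// getBearerToken trades the consumer key and secret for an application
// bearer token via Twitter's OAuth2 client credentials flow. A sketch
// reconstructed from the writeup, not the original file.
func getBearerToken(consumerKey string, consumerSecret string) (string, error) {
	req, err := http.NewRequest("POST", "https://api.twitter.com/oauth2/token",
		strings.NewReader("grant_type=client_credentials"))
	if err != nil {
		return "", err
	}

	// Base64 encode the key:secret pair for the Basic authorization header.
	b64Token := base64.StdEncoding.EncodeToString(
		[]byte(fmt.Sprintf("%s:%s", consumerKey, consumerSecret)))
	req.Header.Add("Authorization", "Basic "+b64Token)
	req.Header.Add("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	// The defer ensures the response body is closed when everything is done.
	defer resp.Body.Close()

	// Unmarshal the JSON result, grabbing the pertinent access token field.
	var v struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.AccessToken, nil
}
```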
Wow, ok, that’s a fair bit of work. Up next, the findem.go file and its related function for twitz. Here I start off with a few informative prints to the console, just to know where the CLI has gotten to at certain points. The Twitter list is put together, reusing that same function – yay, code reuse, right! Then the access token is retrieved. Next the HTTP client is built, the Twitter client is initialized with it, and the user lookup request is sent. Finally the users are printed out, followed by a count of the users found.
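One more reconstruction in place of the missing gist, tying the earlier sketches together with the go-twitter library. The prints and wiring here are illustrative; the real findem.go differed in detail:

```go
package main

import (
	"fmt"

	"github.com/dghubble/go-twitter/twitter"
	"golang.org/x/oauth2"
)

// findem builds the account list, gets the bearer token, and looks the
// users up via go-twitter. It reuses buildTwitterList, getBearerToken,
// and check from the sketches above.
func findem(consumerKey string, consumerSecret string) {
	fmt.Println("Building the Twitter account list...")
	accounts := buildTwitterList()

	token, err := getBearerToken(consumerKey, consumerSecret)
	check(err)

	// Wrap the bearer token in an OAuth2-aware http client, then hand
	// that client to go-twitter.
	config := &oauth2.Config{}
	httpClient := config.Client(oauth2.NoContext, &oauth2.Token{AccessToken: token})
	client := twitter.NewClient(httpClient)

	// Send the user lookup request for the parsed accounts.
	users, _, err := client.Users.Lookup(&twitter.UserLookupParams{ScreenName: accounts})
	check(err)

	for _, user := range users {
		fmt.Printf("%s (@%s)\n", user.Name, user.ScreenName)
	}
	fmt.Printf("Found %d users.\n", len(users))
}
```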
I realized, just as I wrapped this up, that I completely spaced on the Apache Cassandra export. I’ll have those posts coming soon and will likely do another refactor to get the output into a more usable state before I call this one done. But the core functionality – setup of the systemic environment needed for the tool, the pertinent data and API access, and the other elements – is done. For now, that’s a wrap. If you’re curious about the final refactor and the Apache Cassandra export, subscribe to my Twitch @adronhall and/or my YouTube channel ThrashingCode.
However, I’m still starting with a todo app anyway – it’s just going to turn into something that is much more than a mere todo app. In this post I’m going to write up some of those larger plans and what complexities lie in wait – dragons are indeed there – for this more extensive real-world app.
Modernizing Real World US Passenger Rail Ticket Sales!
Ok, I picked this topic since it’s one of the things I find frustrating about the United States: the passenger rail systems, pretty much all of them, are barely better than those of many third-world countries, let alone the developed nations’. One of the elements the United States falls far behind on is an effective, efficient, accurate, and useful ticketing and seat assignment system. Let’s talk about this particular problem for a moment and you’ll start to visualize the problems that exist with the current system.
The Problem(s): Train Seating Options
Siemens Charger engine waiting with Talgo train.
Getting people on and off of a transport system like a train, airplane, ferry, or other mode of transport isn’t a simple process. However, many times it doesn’t have to be as complex, fraught with error, confusing, or disorganized as it often is in the United States. Let’s step back and focus on one particular set of trains: the four that leave from King Street Station in Seattle, Washington on an almost daily basis.
Sounder South Line – Seattle to Tacoma, then onward to Lakewood.
Amtrak Cascades – [Wikipedia] Seattle is one of the major stops on the Cascades route, which starts down in Eugene, Oregon and runs all the way into Canada to Vancouver.
Amtrak Empire Builder – [Wikipedia] This is one of the two Superliner cross-country overnight trains that leave Seattle. It connects in Spokane every day with a sister train from Portland, and the combined train then travels all the way to Chicago!
Amtrak Coast Starlight – [Wikipedia] This is the other Superliner cross-country overnight train. It departs from Seattle, travels south with a number of stops, and eventually ends in Los Angeles.
These four trains use specific train equipment with particular accommodations for ticket sales.
One of the Amtrak Superliner Coach Car’s seating layout. (Images found here)
The Sounder provides tickets via the Sound Transit system in the area; it’s a relatively cheap, non-reserved-seat, heavily used train. Often there’s standing room only. It’s one of those things where, if one could purchase a ticket and know whether they’re getting a seat, or whether the train is full, that would encourage or discourage use accordingly. Currently, you buy a ticket and just get on. Rarely are tickets even checked; there is no gated entry; it’s basically a free-for-all.
The Amtrak Cascades are a reserved-seat system. You purchase a ticket with the contract agreement that you will be provided a seat – either business class or regular – upon boarding. Emphasis on upon boarding, as this can cause great confusion when entering the station and attempting to determine how to pick up these seat assignments even though you’ve already purchased a ticket. It adds time to boarding, requires the train to sit waiting longer, and means passengers have to arrive much earlier than the train’s departure. Albeit, just for context, this earlier arrival (~20-30 minutes before) is nothing compared to the horrors of airports (2-hour suggested arrival before departure); it’s still unnecessary if modern systems were used to provide a streamlined and more efficient boarding process.
Amtrak Empire Builder
The Amtrak Empire Builder and Coast Starlight are currently an interesting mix. Both trains have sleeping accommodations that come with a reserved room number before boarding – a very efficient process indeed, something to aim for. Since one knows the car number and room number, one could theoretically just board without even being guided. The rest of the seats, however – some 200-300 or more of them depending on the train – are reserved seats, albeit one doesn’t receive the seat assignment until arriving at the station. Again, causing unnecessary chaos.
The Problem(s): Technology Deeper Dive
Problem: Passenger Navigation to Seat Reservation
Amtrak Cascades Bistro
Every single one of the trains listed above – Amtrak Empire Builder, Amtrak Coast Starlight, Amtrak Cascades, and Sound Transit Sounder – has some similar characteristics that would make it cheap and relatively easy to implement a ticketing and seat reservation system. All of the train equipment, whether Sounder Bombardier, Superliner, or Talgo Amtrak Cascades, has seat numbers and car numbers. This provides a core basis from which to work, making all of this processing much easier.
At each station where these trains stop, each car of each train stops at a particular point – or could be made to stop at a particular point. The Sounder trains, for example, all have floor mats at the station that read “Welcome Aboard”! This is another element we could use to navigate to a particular seat reservation: automating the process of not just assigning a seat, but providing the information on each ticket for where and exactly when each passenger should arrive at a particular point at the station.
Since the cars and stations all have known characteristics – where to be, where the train will arrive and depart from, and what car number and door position will be where – this can all be automated per train. It’s a repeatable process, something that easily meets the exact definition of why we build computer systems and automate things with them!
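To make that concrete, here’s a hypothetical sketch of the core entities. All names here are illustrative assumptions, not any real Amtrak or Sound Transit schema:

```go
package rail

// Seat is a single numbered seat in a numbered car.
type Seat struct {
	Number   string // e.g. "14A"
	Reserved bool
}

// Car carries its seats plus the known platform position where its door
// lines up at a station — the floor-mat idea from above.
type Car struct {
	Number           int
	Seats            []Seat
	PlatformPosition string
}

// Train is a named consist of cars.
type Train struct {
	Name string // e.g. "Coast Starlight"
	Cars []Car
}

// BoardingPoint resolves a ticketed car number to the exact spot on the
// platform a passenger should stand at.
func BoardingPoint(t Train, carNumber int) (string, bool) {
	for _, car := range t.Cars {
		if car.Number == carNumber {
			return car.PlatformPosition, true
		}
	}
	return "", false
}
```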
Problem: Equipment Changes, Modifiable Trains
Sometimes I’ve had conversations about what might change within the system. Almost all changes within a rail system are well known, from a disaster all the way to a simple everyday equipment change. For example, the arriving Coast Starlight may have an extra coach car or sleeper car for some reason. Since we can build the system to model the specific vehicles, and the vehicle numbers on a train can easily be set, these changes can extrapolate out to tickets so they can be accurately reassigned by a computer the day of. Changing equipment may take multiple minutes in the rail yard, but in the computer it’s a few keystrokes and it’s done – all tickets reassigned, everything rebalanced. It’s almost as magical as a distributed database.
Problem: Common Concurrency, Purchasing, and Related Issues
There are also a number of issues a proper ticketing and reservation system would have to cover, such as managing multiple people attempting to buy the same seat at relatively the same time. A locking and concurrency mechanism will be needed – something that’s been solved before, so appropriate planning around it will solve the issue (see the sketch after the next paragraph).
There are of course timing issues too: once a ticket is locked, eventing within the system should unlock it appropriately. These event-based timers will be an interesting challenge too. Solved already, but fun that they’ll need solving again specifically for this system!
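As a rough illustration of the locking and timed-unlock ideas from the last two paragraphs, here’s a minimal in-process sketch. A real system would push this into the database or a distributed lock; everything here is illustrative, not a reference design:

```go
package rail

import (
	"errors"
	"sync"
	"time"
)

// ErrSeatHeld signals a concurrent buyer already has the seat locked.
var ErrSeatHeld = errors.New("seat is already held by another buyer")

// SeatLocker holds short-lived locks on seats while purchases are in
// flight, so two buyers can't take the same seat at the same time.
type SeatLocker struct {
	mu      sync.Mutex
	holds   map[string]string // seat ID -> session ID currently holding it
	holdTTL time.Duration
}

func NewSeatLocker(ttl time.Duration) *SeatLocker {
	return &SeatLocker{holds: make(map[string]string), holdTTL: ttl}
}

// Hold locks a seat for a purchasing session, and schedules the
// event-based unlock so an abandoned purchase frees the seat.
func (l *SeatLocker) Hold(seatID, sessionID string) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if holder, ok := l.holds[seatID]; ok && holder != sessionID {
		return ErrSeatHeld
	}
	l.holds[seatID] = sessionID
	time.AfterFunc(l.holdTTL, func() { l.Release(seatID, sessionID) })
	return nil
}

// Release frees the seat if the given session still holds it, e.g. on
// purchase completion, cancellation, or the TTL timer firing.
func (l *SeatLocker) Release(seatID, sessionID string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.holds[seatID] == sessionID {
		delete(l.holds, seatID)
	}
}
```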
Problem, or Feature: “See a Mountain”?
Aerial view of Mount Rainier
Some other things I’ve pondered include selling some seats as choice preferences. For example, the Empire Builder, Coast Starlight, and Cascades each have specific views that are easier or harder to see depending on which side of the train the accommodations are on. If you’re facing west on the Coast Starlight, you get all of the ocean views in southern California; if you’re on the east side, you get views of mountains like Rainier (see above picture!) and even Shasta if there’s a full moon. Depending on these views and related characteristics, I’d happily pay a few bucks more to ensure I get a specific assignment, or get to pick one, so why not offer the ability to choose the seat for a specific fare?
The Puget Sound, traveling north out of Seattle on the Amtrak Cascades or Sound Transit Sounder north line.
Summary & Next Steps
Summary – This is post one of many about the very distributed nature of purchasing tickets for the trains into and out of the city. Compared with my todo app, this will definitely provide a very real-world application option indeed! As soon as I wrap up the initial todo app samples – just to get started and provide details on how to get started – I’m going to move on to building a real, real-world application sample; so real that it could be implemented by Sound Transit, Brightline, Virgin Rail, SNCF’s TGV, Germany’s ICE, or even good ole’ Amtrak here in the United States.
Next Steps – Next up, I’m going to finish the todo applications, with the notion that they provide some starting points for people, and also for this more complex real-world application. I’ll also add some more details and thoughts, and would love to converse, discuss, take contributions, or co-hack on this project. Maybe you’ll join me – onward, and may you enjoy this flanged-wheel ride and code-slinging adventure!