Setting Up a Dev Machine for Node.js

It seems every few months the setup for whatever tech stack I'm using gets tweaked a bit. This is a collection of information I used to set up my box recently. First off, for a development box I always use nvm, since it's routine to need a different version of Node.js installed for various repositories and such. The best article I've found that is super up to date for Ubuntu 18.04 is Digital Ocean's article (kind of typical for them to have the best article, as their blog is exceptionally good). In it I noticed the specific installation of nvm has changed since I last worked with it many months ago. Continue reading “Setting Up a Dev Machine for Node.js”

Quick San Francisco Trip, Multi-Cloud Conversations, and A Network of Excellent People to Follow

A Quick Trip

Boarded the 17x King County Metro bus at 6:11am yesterday. The trip was entirely uneventful until we arrived in Belltown in downtown Seattle. A guy wearing only his underwear boarded the bus. Yup, you got that right, just like those underwear nightmares people have! That guy was living the situation for whatever unknown reason. He didn't make too much fuss though, and eventually settled into one of the back seats on our packed bus.

Downtown, as I got off, I was pleasantly surprised to find Victrola open in the newly leased Amazon building near Pine & Pike on 3rd. I remember it being built and opening, but for some reason had completely forgotten it was there until today. Having a Victrola at this location sets it up perfectly for the mornings one needs to get to the airport. It's located so that you can get off any connecting bus, grab an espresso at Victrola, and then enter the downtown tunnel to board LINK to the airport. Previously, the only real option was to wait until you got to the airport and buy one of the lackluster espresso options they have at SEATAC.

Once I boarded the LINK I got into some notes for the upcoming Distributed Data Show that I would be recording while in San Francisco, and also into a little review of some Node.js JavaScript code that I'd pulled down previously. I've been hard pressed to get into the code base and add some updates, logging, and minor logic for KillrVideo. Hopefully I'll get to it today, or maybe even this evening during the rest of this adventure.

All the things in between then occurred: the flight to San Jose, the oddball taxi ride to the Santa Clara DataStax HQ, then the Lyft ride to Caltrain to ride into downtown San Francisco to my hotel. 'Twas an excellently chill ride into the city, and even a little productive. A good night of sleep and then up and at 'em. I headed to the nearest Blue Bottle, which was on the way to my next effort of the trip. Blue Bottle was solid as always, and into the studios I went.

Multi-Cloud Conversations

At this point of this quick trip I've just finished shooting a few new episodes of the Distributed Data Show (subscribe) with Jeff Carpenter (@jscarp) and Amanda Moran (@amandadatastax). Just recently I also got to shoot an episode with Patrick McFadin (@patrickmcfadin). We've been enjoying a number of conversations around multi-cloud considerations: what going multi-cloud means, and what exactly multi-cloud even really means. It's been fun, and now I'm putting together some additional blog posts and related material to follow the episodes with more detail. So keep an eye out for the Distributed Data Show. You can take the lazy route and subscribe to be notified when new episodes are released, or you could even set your calendar for every Tuesday, because we follow an old-school, traditional scheduled approach to episode releases.

I've included a recap of a few of the recent episodes below, which you may want to check out just to get an idea of what's to come. They're great as a listen while commuting, something to put on while you're relaxing a bit but still want to think through a topic, or if you're just curious what's up!

In this last episode, DataStax Vanguard Lead Chelsea Navo (@mazdagirluk) joined in to talk about how the team helps enterprises meet the challenges posed by disruptive technology and competitors. Some of the highlights of the conversation include:

1:57 Defining what exactly enterprise transformation is.

4:00 Trends, including retooling batch workloads around real-time requirements, handling larger data sets, and the uber-popular use of streaming data we see today.

7:37 Techniques on getting teams trained up on cutting-edge technology.

9:46 There's an argument for “skateboard solutions”! I now want to write up a whole practice around this!

16:27 Data modeling: the challenges of it, the challenges around it. Data modeling is a good topic of exploration.

25:56 The data layer is still typically the hardest part of your application! (The assertion is made; agree, disagree, or ??)

Continue reading “Quick San Francisco Trip, Multi-Cloud Conversations, and A Network of Excellent People to Follow”

Collecting Terraform Resources

I just finished a LinkedIn Learning (AKA Lynda.com) course, published last month, on Learning Terraform (LinkedIn Learning Course & Lynda.com Course). Immediately after posting that I spoke with my editor at LinkedIn Learning and agreed on the next two courses I'll record: Terraform Essentials and Go for Site Reliability Engineers. Consider me stoked to be putting this material together and recording more video courses. This is a solid win; as the internet doge would say, “much excite, very wow”!

The following are some materials I've recently dug up regarding Terraform, Go, and site reliability work, some of which will very likely find its way into my courses. There's good material here if you're looking for some solid, and arguably more advanced, approaches to your Terraform work.

Advanced Terraform Materials

The HashiCorp Documentation Material

Writing custom providers (a minimal skeleton sketch follows this list)

Running Terraform in Automation
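
As a taste of what the custom providers guide gets into, below is a minimal sketch of a provider plugin's entry point in Go. The import paths and types are assumptions based on the shape of HashiCorp's guide at the time of writing, so check them against the Terraform release you're building against.

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/plugin"
	"github.com/hashicorp/terraform/terraform"
)

// Provider defines the plugin's schema. A real provider would fill
// ResourcesMap with its resource definitions; it's left empty here
// to keep the skeleton minimal.
func Provider() *schema.Provider {
	return &schema.Provider{
		ResourcesMap: map[string]*schema.Resource{},
	}
}

func main() {
	// Terraform runs providers as separate plugin processes;
	// plugin.Serve wires this provider into that protocol.
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() terraform.ResourceProvider {
			return Provider()
		},
	})
}
```

Build the binary following the terraform-provider-&lt;name&gt; naming convention and Terraform can pick it up locally; the guide walks through implementing the actual resource CRUD functions from there.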

Getting Started with Twitch | Twitch Thrashing Code Stream

I'd been meaning to get started with Twitch for some time. I'd even tweeted, just trying to get further insight into what people watch on Twitch streams and why, and of course who produces their own Twitch streams and why.

With that I set out to figure out how to get the right tooling set up for Twitch streaming on Linux and MacOS. Here's what I managed to dig up.

First things first, and somewhat obviously, go create a Twitch account. The sign-up link is up in the top right corner.

https://www.twitch.tv/

Continue reading “Getting Started with Twitch | Twitch Thrashing Code Stream”

Dell XPS 13 Re-review of Existing Laptop

Some years ago, while working with a whole crew of great people at CDK in Portland, Oregon, I picked up a new Dell XPS 13. I even wrote a review about it titled “The Latest 5th Generation Dell XPS 13 Developer Edition”. I had been inspired to pick up this laptop after checking out my friend's XPS 13. Joe (@joeinglish) gave me a good sell on the thing, and I purchased my own less than 24 hours after checking his out!

Joe's laptop, my laptop, or is it my laptop and Joe's laptop?

Continue reading “Dell XPS 13 Re-review of Existing Laptop”

Go Library Data Generation Timings

Recently I put together some quick code to get some timings on the various data generation libraries available for Go. For each library there were a few key pieces of data generation I wanted to time (a minimal sketch of the timing approach follows this list):

  • First Name – basically a first name of some sort, like Adam, Nancy, or Frank.
  • Full Name – something like Jason McCormick or Sally Smith.
  • Address – A basic street address, or whatever the generator might provide.
  • User Agent – Such as what's sent along with a browser request.
  • Color – Something like red, blue, green, or another color beyond the basics.
  • Email – A fully formed, albeit faked, email address.
  • Phone – A phone number, ideally with area code and prefix too.
  • Credit Card Number – Ideally a properly formed one, which many of the generators seem to provide based on VISA, Mastercard, or related company specifications.
  • Sentence – A standard multi-word lorem-ipsum-based sentence would be perfect.
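
To show the shape of that quick code, here's a minimal timing sketch using only the standard library. The timeGenerator helper and the placeholder generator are hypothetical, purely to illustrate the technique; the real runs swapped in the generator calls from the libraries named below.

```go
package main

import (
	"fmt"
	"time"
)

// timeGenerator is a hypothetical helper: it invokes the given
// generator function n times and reports total and per-call
// elapsed time.
func timeGenerator(label string, n int, generate func() string) {
	start := time.Now()
	for i := 0; i < n; i++ {
		_ = generate()
	}
	elapsed := time.Since(start)
	fmt.Printf("%s: %d calls in %v (%v per call)\n",
		label, n, elapsed, elapsed/time.Duration(n))
}

func main() {
	// Swap in any generator call from the libraries below, e.g. a
	// first-name or full-name call; the static string here is just
	// a placeholder so the sketch runs on its own.
	timeGenerator("placeholder", 100000, func() string { return "Adam" })
}
```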

I went through and searched for libraries I wanted to try out, and of all the libraries I found I narrowed it down to three. When I add the imports for these libraries, the import paths, by way of how Go works, also give you the repo locations:

  • “github.com/bxcodec/faker” – faker – Faker generates data based on a struct, which is a pretty cool way to determine what type of data you want and to get it returned in a particularly useful format (see the struct-based sketch after this list).
  • “github.com/icrowley/fake” – fake – Fake is a library inspired by the ffaker and forgery Ruby gems. Not that you'd necessarily be familiar with those, but if you are, you have instant insight into how this library works.
  • “github.com/malisit/kolpa” – kolpa – This is another data generator that creates fake data for various types of data, structures, strings, sentences, and more.
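
As a taste of that struct-based approach, here's a minimal sketch using faker. The struct tag names (first_name, email, and so on) are assumptions based on the library's documentation, so verify them against the version you pull down.

```go
package main

import (
	"fmt"

	"github.com/bxcodec/faker"
)

// Profile covers a few of the data points listed above. Each struct
// tag tells faker which kind of fake value to generate for the field
// (tag names assumed from the library's docs).
type Profile struct {
	FirstName string `faker:"first_name"`
	Email     string `faker:"email"`
	Phone     string `faker:"phone_number"`
	Sentence  string `faker:"sentence"`
}

func main() {
	var p Profile
	// FakeData populates the struct in place based on the tags.
	if err := faker.FakeData(&p); err != nil {
		fmt.Println("generation failed:", err)
		return
	}
	fmt.Printf("%+v\n", p)
}
```

The nice part of this style is that the shape of the generated data lives right on your own types, so test fixtures stay close to the structs they exercise.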

Continue reading “Go Library Data Generation Timings”

A Collection of Links & Tour of DataStax Docker Images

Another way to get up and running with a DataStax Enterprise 6 setup on your local machine is to use the available (and supported by DataStax) Docker images. For additional description of what each of the images is and what's contained in them, I read up via Kathryn Erickson's (@012345) blog entry “DataStax Now Offering Docker Images for Development”. There's also a video Jeff Carpenter (@jscarp) put together, which covers the 5.x version of the release (since v6 wasn't released at the time).

Continue reading “A Collection of Links & Tour of DataStax Docker Images”