Development Workspace with Terraform on Azure: Part 2 – Packer Images

Series Links: 1, 2 (this entry)

Today I’m going to walk through the install and setup of Packer, and get our first builds running for Azure. I started this work in the previous article in this series, “Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI“. This is needed to continue and simplify what I’ll be elaborating on in subsequent articles.

1: Packer

Packer is another great HashiCorp tool that is available to build virtual machine and related images for a wide range of platforms. Specifically this article is going to cover Azure, but the list is long: AWS, GCP, Alicloud, CloudStack, DigitalOcean, Docker, Hyper-V, VirtualBox, VMware, and others!

Download & Setup

To download Packer navigate over to the HashiCorp download page. Currently it is at version 1.4.2. I installed this executable the same way I installed Terraform, by untarring or unzipping the executable into a path I have set up for all my executable CLIs. As shown, I simply opened up the compressed file and pulled the executable over into my Apps directory, making it immediately executable just like Terraform from any path on my system. If you want it somewhere specific and need to set up a path, refer to the previous blog entry for details and links on where and how to do that.

packer-apps.png

If the packer command is executed, which I’ve done in the following image, it shows the basic commands as a good CLI should (cuz HashiCorp makes really good CLI tools!).

packer-default-response

The command we’ll mostly be working with to get images set up is the build command. The inspect, validate, and version commands, however, become mainstays once one really gets into Packer, so they’re worth checking out further.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.

2: Packer Azure Setup

The next thing that needs to be done is to set up Packer to work with Azure. To get these first few images building, Device Login will be used for authorization. Eventually, upon further automation, I’ll change this to a Service Principal and likely recreate the images accordingly, but for now Device Login is going to be fine for these examples. Most of the directions are available via the HashiCorp docs located here. As always, I’m going to elaborate a bit beyond the docs.

The Device Login will need three pieces of information: a SubscriptionID, a Resource Group, and a Storage Account. The Resource Group is what Azure uses to keep together the group of resources created around a particular life-cycle of work. The Storage Account is just a place to store the images that will be created. The SubscriptionID is the ID of your account with Azure.

NOTE: To enable this mode, simply don’t set client_id and client_secret in the Packer builder configuration.

To get your SubscriptionID just type az login at the terminal. The response will include this value, designated simply as “id”. Another value that you’ll routinely need, the “tenantId”, is displayed here too. Once logged in you can also get a list of accounts and the respective SubscriptionID values for each of the accounts you might have, as I’ve done here.

cleanup.png

Here you can see I’ve authenticated against and been authorized in two Azure accounts. The location also needs to be determined, which can be done with az account list-locations.

locations.png

This list will continue and continue and continue. From it, a singular location will be chosen for the storage.
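If you’d rather script these lookups than copy the values out of the console output, the Azure CLI’s JMESPath queries can pull them directly. This is just a sketch of how I’d expect it to look with a current az CLI, so double check the field names against your own output:

[code]
# Log in interactively; the JSON response includes "id" (the SubscriptionID) and "tenantId".
az login

# Pull the values for the currently selected subscription into shell variables.
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
TENANT_ID=$(az account show --query tenantId -o tsv)

# List just the location names instead of paging through the full JSON.
az account list-locations --query "[].name" -o table
[/code]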

In a bash script, I go ahead and assign these values that I’ll need to build this particular image.

[code]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
[/code]

Now the next step is to create the Resource Group and Storage. With a few echo commands to print out what is going on to the console, I add these commands as shown.

[code]
echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
--name $STORAGENAME --resource-group $GROUPNAME \
--location $LOCATION \
--sku Standard_LRS \
--kind Storage
[/code]
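The az commands above are safe to re-run, but if you want the script to verify that the resource group and storage account actually exist before Packer kicks off, a couple of quick checks can be bolted on. A sketch using the same variables from the script:

[code]
# az group exists prints the literal string "true" or "false".
if [ "$(az group exists --name $GROUPNAME)" != "true" ]; then
  echo "Resource group $GROUPNAME is missing!" && exit 1
fi

# az storage account show exits non-zero if the account isn't there.
az storage account show --name $STORAGENAME --resource-group $GROUPNAME > /dev/null \
  || { echo "Storage account $STORAGENAME is missing!"; exit 1; }
[/code]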

The final command here is another echo just to identify what is going to be built and then the Packer build command itself.

[code]
echo 'Building Apache cluster node image.'

packer build node.json
[/code]

The script is now ready to be run. I’ve placed the script inside my packer phase of my build project (see previous post for overall build project and the repository here for details). The last thing needed is a Packer template to build.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute a Packer build without needing to pass parameters every single time, and it also ensures the respective storage account and resource group are available for creation of the image.

3: Azure Images

The next step is getting an image or some images built that are needed for further work. For my use case I want several key images built around various servers and their respective services that I want to use to deploy via Terraform. Here’s an immediate shortlist of images we’ll create:

  1. Apache Cassandra Node – An image that is built with the latest Apache Cassandra installed and ready for deployment into a cluster. In this particular case that would be Apache Cassandra v4, albeit I’m going to go with 3.11.4 first and then work on getting v4 installed in a subsequent post. The installation instructions we’ll mostly be following can be found here.
  2. Gitlab Server – This is a product I like to use, especially for pre-rolled build services and all of those needs. Here it takes care of all source control, build services, and any related work that needs to happen from inside the workspace I’m building; i.e. it’s a necessary component for an internal, corporate-style continuous build or even continuous integration setup. It just happens this is all getting set up for internal use, but via a public cloud provider on Azure! It can be run in parallel to other environments that one would prospectively control and manage autonomously of any cloud provider. The installation instructions we’ll largely be following are also available via Gitlab here.
  3. DataStax Enterprise 6.7 – DataStax Enterprise is built on Apache Cassandra and extends the capabilities of that database with multi-model options for graph, analytics, search, and many other capabilities and security. For download and installation most of the instructions I’ll be using are located here.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute a Packer build without needing to pass parameters every single time, and it also ensures the respective storage account and resource group are available for creation of the image.
  • Now there is a list of images we need to create, which we can work from to create the images in Azure.

4: Building an Azure Image with Packer

The first image I want to create is going to be used for an Apache Cassandra 3.11.4 Cluster. First a basic image test is a good idea. For that I’ve used the example below to build a basic Ubuntu 16.04 image.

In the code below there are also two environment variables set up, which I’ve included in my bash profile so they’re available on my machine whenever I run this Packer build or any of the Terraform builds. You can see they’re set up in the variables section with a "{{env `TF_VAR_tenant_id`}}". Note that TF_VAR_tenant_id is prefaced with TF_VAR per Terraform convention, which in Terraform makes the variable just “tenant_id” when used. Also note that what might look like single quotes around TF_VAR_tenant_id are indeed backticks, not single quotes. Sometimes the blog formats those oddly so I wanted to call that out! (For an example of the environment variables themselves, I set them all up for the Service Principal setup below, just scroll further down.)


{
  "variables": {
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "base_ubuntu_image",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'does this even work?'"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
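Before kicking off a full build it’s worth letting Packer sanity check the template. Assuming the JSON above is saved as node.json, the flow looks roughly like this:

[code]
# Check the template for syntax and configuration errors without building anything.
packer validate node.json

# Show the variables, builders, and provisioners the template defines.
packer inspect node.json

# Run the actual build (this is the step that prompts for Device Login).
packer build node.json
[/code]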

When Packer begins this build there will be several prompts to authorize the resources being built. Because Device Login was used earlier in the post instead of a Service Principal, this step is necessary. It looks something like this.

device-login

You’ll need to then select, copy, and paste that code into the page at https://microsoft.com/devicelogin.

logging-in-code-ddevice-login

This will happen a few times and eventually the build will complete and the image will be available. What we really want to do however is get a Service Principal setup and use that so the process can be entirely automated.

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute a Packer build without needing to pass parameters every single time, and it also ensures the respective storage account and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we’ll start with is indeed working. It is always a good idea to get a base image template working to provide something to work from.

5: Azure Service Principal for Automation

Ok, a Service Principal is needed now. This is a single command, but from what I can tell it has to be very specifically this command. Before running it though, know where you are going to store, or how you will pass along, the client id and client secret that will be provided when the principal is created.

The command for this is az ad sp create-for-rbac -n "Packer" --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000 where all those zeros are the Subscription ID. When this executes and completes it returns all the peripheral values that are needed for authorization via the Service Principal.

the-rbac

One of the easiest ways to keep all of the bits out of your repositories is to set up environment variables. If there’s a secrets vault or something like that then it would be a good idea to use it, but for this example I’m going to set up environment variables for use in the template.

Another thing to notice, which is important when building these systems, is the “Retrying role assignment creation: 1/36” message. It points to the fact that there are 36 retries built into this because of timing and other irregularities in working with cloud systems. For various reasons, this means that when coding against such systems we routinely have to put in timeouts, waits, and other handling to ensure we get the messages we want, or mark things disabled as needed.
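If you’d rather not copy the values out of the console by hand, the output can be captured with a query at creation time. This is a hedged sketch; the field names appId, password, and tenant are what the az versions I’ve used return, so verify against your own output before relying on it:

[code]
# Capture the Service Principal values; appId maps to client_id and password to client_secret.
az ad sp create-for-rbac -n "Packer" --role contributor \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000 \
  --query "{client_id: appId, client_secret: password, tenant_id: tenant}" -o json
[/code]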

After running that, just for clarity, here’s what my .bashrc/bash_profile file looks like with the added variables.

[code]
export TF_VAR_clientid="00000000-0000-0000-0000-000000000"
export TF_VAR_clientsecret="00000000-0000-0000-0000-000000000"
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000"
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000"
[/code]

With that set, a quick source ~/.bashrc or source ~/.bash_profile and the variables are all set for use.
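A quick way to confirm the variables actually landed in the shell environment, and will therefore be visible to Packer’s env function, is to grep for the prefix:

[code]
# Should list all four TF_VAR_ values just exported.
env | grep TF_VAR_
[/code]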

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute a Packer build without needing to pass parameters every single time, and it also ensures the respective storage account and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we’ll start with is indeed working. It is always a good idea to get a base image template working to provide something to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.

6: Apache Cassandra 3.11.4 Image

Ok, all the pieces are in place. With confirmation that the image builds, that Packer is installed correctly, and with the Azure Service Principal, managed resource group, and related collateral set up, building an actual image with installation steps for Apache Cassandra 3.11.4 can now begin!

First add the client_id and client_secret environment variables to the variables section of the template.

[code]
"client_id": "{{env `TF_VAR_clientid`}}",
"client_secret": "{{env `TF_VAR_clientsecret`}}",
"tenant_id": "{{env `TF_VAR_tenant_id`}}",
"subscription_id": "{{env `TF_VAR_subscription_id`}}",
[/code]

Next add those same variables to the builder for the image in the template.

[code]
"client_id": "{{user `client_id`}}",
"client_secret": "{{user `client_secret`}}",
"tenant_id": "{{user `tenant_id`}}",
"subscription_id": "{{user `subscription_id`}}",
[/code]

That whole top section of template configuration looks like this now.

[code language=javascript]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "something",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
[/code]

Now the image build can be executed, but let’s streamline the process a little bit more. Since I only want one image at any particular time from this template, and I want to use the template in a way where I can create images while passing in a few more pertinent pieces of information, I’ll tweak that in the Packer build script.

Below I’ve added a variable name for the image, dubbed Cassandra, so that I can specifically reference this image in the bash script with IMAGECASSANDRA="basecassandra". Next I added a command to delete any existing image of that name with the az image delete -g $GROUPNAME -n $IMAGECASSANDRA line of script. Finally, toward the end of the file I’ve added the variable to be passed into the template with packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json. Note the odd way the imagename key is concatenated with the variable passed in from the bash script. It isn’t super clear which way to do this, but after some troubleshooting this at least works on Linux! I’m assuming it works on MacOS; if anybody else tries it and it doesn’t, please let me know.

[code language=bash]
GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"

echo 'Deleting existing image.'

az image delete -g $GROUPNAME -n $IMAGECASSANDRA

echo 'Creating the managed resource group for images.'

az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'

az storage account create \
--name $STORAGENAME --resource-group $GROUPNAME \
--location $LOCATION \
--sku Standard_LRS \
--kind Storage

echo 'Building Apache cluster node image.'

packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]
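As an aside on the quoting trick above: from what I can tell the same result can be had with ordinary double quotes, which let the shell expand the variable inside a single argument. A sketch of that alternative, in case the concatenated form reads oddly:

[code]
# Double quotes allow $IMAGECASSANDRA to expand inside the -var argument.
packer build -var "imagename=${IMAGECASSANDRA}" node-cassandra.json
[/code]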

With that done, the build can be run without needing to manually delete the image each time, since that is part of the script now. The next part to add to the template is more of the needed installation steps for Apache Cassandra. These steps can be found on the Apache Cassandra site here.

Under the provisioners section of the Packer template I’ve added the installation steps and removed the sudo part of the commands. Since this runs as root there’s really no need for sudo. The inline part of the provisioner when I finished looks like this.

[code]
"inline": [
  "echo 'Starting Cassandra Repo Add & Installation.'",
  "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
  "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
  "apt-get update",
  "apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
  "apt-get install cassandra"
],
[/code]

With that completed we now have the full workable template to build a node for use in starting or using as a node within an Apache Cassandra cluster. All the key pieces are there. The finished template is below, with the build script just below that.


{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },
    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "echo 'Starting Cassandra Repo Add & Installation.'",
      "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
      "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
      "apt-get update",
      "apt-key adv --keyserver pool.sks-keyservers.net --recv-key A278B781FE4B2BDA",
      "apt-get -y install cassandra"
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}


GROUPNAME="adrons-images"
LOCATION="westus2"
STORAGENAME="adronsimagestorage"
IMAGECASSANDRA="basecassandra"
echo 'Deleting existing image.'
az image delete -g $GROUPNAME -n $IMAGECASSANDRA
echo 'Creating the managed resource group for images.'
az group create --name $GROUPNAME --location $LOCATION
echo 'Creating the storage account for image storage.'
az storage account create \
--name $STORAGENAME --resource-group $GROUPNAME \
--location $LOCATION \
--sku Standard_LRS \
--kind Storage
echo 'Building Apache cluster node image.'
packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json

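Once the build finishes, the managed image lands in the resource group and can be referenced by name when creating a VM. Here’s a quick sketch of smoke testing the result with the Azure CLI; the VM name is just a placeholder I made up:

[code]
# Confirm the new managed image exists in the resource group.
az image list -g adrons-images -o table

# Spin up a throwaway VM from the image to verify it boots (placeholder VM name).
az vm create \
  --resource-group adrons-images \
  --name cassandra-smoke-test \
  --image basecassandra \
  --admin-username azureuser \
  --generate-ssh-keys
[/code]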

Verification Checklist

  • Packer is now setup and can be executed from any path on the system.
  • The build has been setup for use with Device Login for this first build.
  • A script is now available to execute a Packer build without needing to pass parameters every single time, and it also ensures the respective storage account and resource group are available for creation of the image.
  • We have one base image building, to prove out that the build template we’ll start with is indeed working. It is always a good idea to get a base image template working to provide something to work from.
  • The Service Principal is now setup so the image can be built with full automation.
  • Environment variables are setup so that they won’t be checked in to the code and configuration repository.
  • Packer adds the package repository and installs Cassandra 3.11.4 on Ubuntu 18.04 LTS in Azure.

I’ll get to the next images real soon, but for now, go enjoy the weekend and the next post will be up in this series in about a week and a half!

Bunches of Databases in Bunches of Weeks – PostgreSQL Day 1

May the database deluge begin, it’s time for “Bunches of Databases in Bunches of Weeks”. We’ll get into looking at databases similar to how they’re approached in “7 Databases in 7 Weeks“. In this session I took a hard look at PostgreSQL, or as some refer to it, just Postgres. This is the first of a few sessions on PostgreSQL in which I get the database installed locally on Ubuntu, which is transferable to any other operating system really; PostgreSQL is awesome like that. Then, after installing PostgreSQL and getting pgAdmin 4, the user interface for PostgreSQL, working against it, I go the Docker route, again pointing pgAdmin 4 at that and creating a database and an initial table.

Below the video here I’ve added the timeline and other details, links, and other pertinent information about this series.

0:00 – The intro image splice and metal intro with tunes..
3:34 – Start of the video database content.
4:34 – Beginning the local installation of Postgres/PostgreSQL on the local machine.
20:30 – Getting pgAdmin 4 installed on local machine.
24:20 – Taking a look at pgAdmin 4, a stroll through setting up a table, getting some basic SQL from and executing with pgAdmin 4.
1:00:05 – Installing Docker and getting PostgreSQL setup as a container!
1:00:36 – Added the link to the stellar post at Digital Ocean’s Blog.
1:00:55 – My declaration that if Digital Ocean just provided documentation I’d happily pay for it, their blog entries, tutorials, and docs are hands down some of the best on the web!
1:01:10 – Installing PostgreSQL on Ubuntu 18.04.
1:06:44 – Signing in to Docker Hub and finding the official PostgreSQL Docker image.
1:09:28 – Starting the container with Docker.
1:10:24 – Connecting to the Docker PostgreSQL container with pgAdmin 4.
1:13:00 – Creating a database and working with SQL, tables, and other resources with pgAdmin4 against the Docker container.
1:16:03 – The hacker escape outtro. Happy thrashing code!

For each of these sessions in the “Bunches of Databases in Bunches of Weeks” series I’ll follow the following sequence. I’ll go through each database in my list of top 7 databases for day 1 (see below), then go through each database again and work through day 2, and so on, accumulating additional days similarly to the “7 Databases in 7 Weeks” approach.

“Day 1” of the respective database, I’ll work toward building a development installation of the particular database. For example, in this session I set up PostgreSQL by installing it to the local machine and also pulled a Docker image to run PostgreSQL.
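For reference, the two install paths from this session boil down to roughly the following on Ubuntu. This is a sketch rather than the exact commands from the video, and the container password is just a made-up placeholder:

# Local install from the Ubuntu repositories.
sudo apt update
sudo apt install -y postgresql postgresql-contrib

# Or run the official PostgreSQL image as a container instead.
docker run --name local-postgres -e POSTGRES_PASSWORD=changeme -p 5432:5432 -d postgres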

“Day 2” of the respective database, I’ll get into working against the database with CQL, SQL, or whatever one would use to work with that database directly. At this point I’ll also get more deeply into the types, inserting, and storing data in the respective database.

“Day 3” of the respective database, I’ll get into connecting an application with C#, Node.js, and Go. I’ll implement a simple connection, prospectively a test of the connection, and do a simple insert, update, and delete of some sort against the respective database built on the previous day 2 of the same database.

“Day 4” and onward I’ll determine the path and layout of the topic later, so subscribe on YouTube and Twitch, and tune in. The events are scheduled, with the option to be notified here on Twitch when a particular episode you’d like to watch is coming on.

Next Events for “Bunches of Databases in Bunches of Days”

Oh, exFAT Doesn’t Work on Linux

But to the rescue comes the search engine. I found some material on the matter and, as I’ve learned frequently, don’t count out Linux when it comes to support of nearly everything on Earth. Sure enough, there’s support for exFAT (really, why wouldn’t there be?)

Check out this repo: https://github.com/relan/exfat

There’s of course the git clone, make, and make install path, or there’s also the apt install path.

git clone https://github.com/relan/exfat.git
cd exfat
autoreconf --install
./configure
make

Then make install.

make install

Of course, as with things on Linux, no reboot needed just use it now to mount a drive.

mount.exfat-fuse /dev/spec /mnt/exfat
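In that command /dev/spec is just a stand-in for the actual device node of the drive. If you’re not sure which one that is, lsblk will list the attached block devices so you can pick the right partition (the device name below is only an example):

# List block devices and any filesystems detected on them.
lsblk -f

# Then mount the exFAT partition you identified, e.g. /dev/sdb1.
sudo mkdir -p /mnt/exfat
sudo mount.exfat-fuse /dev/sdb1 /mnt/exfat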

To note, if you’re using Ubuntu 18.04 the support will just be available once it’s installed, so re-click on the drive or memory device you’ve attached and it will now appear. Pretty sweet. If you want to use apt instead, just run this command.

apt install exfat-fuse

That’s it. Now you’ve got exFAT support working on Linux.

Buying a Leopard!

January 24th, 2017 UPDATE: After I wrote this, I spoke with the System76 team and I’m getting the chance to go out and tour their Denver headquarters. This happened well after I made my purchase, which all of the following was written after. But just for full transparency, I’ve added this note. Also, I’m aiming to get a full write-up of my System76 trip put together with Denver tidbits and more! Until then, here’s the review…

In the trailing days of 2016, after having moved to Redmond, Washington, I sat working at my desktop workstation. This workstation, which still exists, is an iMac with an i7, 16GB RAM, 256 GB SSD, and a 1GB video card with a 1TB secondary drive. The machine is a 27” all-in-one style design, and the screen is rather beautiful. But as I did a build and tried to run Transport Tycoon at the same time in the background, the machine sputtered a bit. It was definitely maxed out doing this Go code build, putting together a Docker image build, and spinning it up for go-live at the same time my game ran in the background. I thought, this machine has served me extremely well; at over 5 years old it had surpassed the standard 5 year lifespan of peak Apple oomf. At that moment, I thought, maybe it’s time to dig into a serious machine with some premium hardware again.

In that moment I thought about the last dedicated, custom built, super powerful workstation machine I had. It was a powerful machine, nice form factor, and easily drove two giant 27” screens. However that machine had lived and finished its useful life over 6 years before 2016 had even started. But it was a sweet machine that offered a lot of productive gaming and code writing efficiencies. It was thus time to get in gear and get a machine again.

Immediately I thought through a few of the key features I wanted and other prerequisites of purchase.

  1. Enough RAM and processor power to drive my aforementioned gaming, docker, and code building scenario with ease.
  2. SSD drive of at least 1TB with at least a beefy 8GB Video Card.
  3. It needed to run, with full support, not-Windows. Ubuntu would be fine, but if any Linux was installed from factory or at least fully supported on the hardware I put together, that would suffice.
  4. If I were to buy it from a company, it had to be a company that wasn’t some myopic afterthought of 50s era suburbia (i.e. I didn’t really want to deal with Dell or Alienware again after the XPS 13 situation). This definitely narrowed down the options.

I started digging into hardware specifications and looking into form factors, cases, and all the various parts I’d need for a solid machine. In parallel I started checking out several companies.

  • System76 – Located in Denver, I was curious about this company and had been following them for some time. I had seen a few of the laptops over the years but had never seen or used any of their desktops.
  • Los Alamos Computers which is now LAC Portland! – Holy smokes, I had not realized this company moved. They definitely meet the 4th criteria above.
  • Puget Systems is a company located somewhere in the Puget Sound area and used to be called Puget Sound Systems. After digging I found they are located in a suburb of Seattle, in a town called Auburn. I didn’t want to rule them out so I kept them on the list and started researching.
  • Penguin Computing is another one of the companies, and kind of a mainstay of Linux machines. They were a must-have in the run-up.
  • Think Penguin is another I dove into.
  • Emperor Linux is another company I found specializing in Linux machines.
  • Zareason was another, that specialized in Linux machines.

First Decision > Build or Buy?

I wrangled hardware specifications and the idea of building my own machine for some time. I came to the conclusion that the time versus money investment for me was on the side of buying a built machine. This first decision was pretty easy, but educating myself on the latest hardware was eye opening and a lot of fun. In the end however, better to let a builder get it done right instead of me creating a catastrophe for myself and nuking a whole weekend!

Decision Buy!

Second Decision > Who should I buy from?

I dug through each of the computer builders previously mentioned. I scouted out where they were located, what the general process was they used to build the machines, what testing, what involvement in the community they have, and finally a cost and parts review.

Each of the builders has a lot of positives in regards to Linux; the only one I was hesitant about at first was Puget Systems, because by default their machines come with Windows 10. However, after asking around and reviewing other reviews online, I came to find they do offer Linux and have a solid skill set around it. Puget remained a leader in the selection process.

pugetsystems

I went through Los Alamos Computers, which I realized is now LAC Portland (win for Portland!), then Penguin, Think Penguin, and Emperor Linux. All had great skills and ethos around Linux. LAC definitely had the preferable choice in physical location (I mean, I do love Portland!), but each fell short in their customer-facing desktop options. For a company purchase or another reason I’d likely buy a Thinkpad or other computing platform running Linux from them, but for this scenario each was disqualified for my personal workstation.

The last two I started checking out were Zareason and System76. I had been following System76 for a while and a few things had caught my eye on their site, which led me to realize that they’re located out of Denver. Being a transit nerd, I noticed one of their website’s coffee shop photo scenes had the RTD Light Rail passing in the background. But all things aside, I started checking out the cases and hardware that each builder puts in a box.

  • Berkeley BART – Zareason had several cases as shown below. With each of these I checked out the hardware options.

  • Sounder – Next up I checked out a number of Puget Systems machines.

  • RTD Light Rail – Next I started looking at System76 machines.

 

Challenge: Extra nerd credit if you guess why I used each of those pictures for each of those companies!

After working through and reviewing prices, features, hardware, and options, things were close. I started reviewing location and what I could derive about each company’s community involvement in Linux, how they’re involved locally, and what the word is about those companies in their respective communities. Out of the three, I ended up not finding any customers to talk to about Zareason. For Puget, I found one friend that had purchased a box a few years ago, and for System76 I actually found 2 different bits of feedback from users within an hour or so of digging around.

Kenny Spence @tekjava – Kenny and I have known each other for more years than I’m going to count. We got to meetup here in Seattle recently and he showed me his System76 laptop. The build quality was good and the overall review he gave me was a +1. Before this he’d mentioned in Twitter DM convo that this was the case, and I’d taken his word for it back then.

Dev Shop X – A group of individuals I reached out to that I had met 3 years ago at the Portland @OSBridge Conference. I spoke to them again and found they were still using the System76 machines with no real complaints. They’d also bought the XPS 13 laptops well before the model I did and had a few complaints. With a short conversation we ended with them offering a +1 for System76.

With the reviews from trusted sources, seeing the involvement and related culture of System76 I decided that they would be the builder of choice.

Decision System76 Leopard WS!

Leopard Workstation

With the decision made, I pulled the trigger on the purchase. In spite of the holiday season, I still received the machine in short order. It arrived at my door via UPS in a box, ya know, like a computer does when it’s shipped somewhere. 😉

system76-leopard-01

I cleared off the desk next, and dug into the box.

system76-leopard-02

system76-leopard-03

The computer was packaged cleanly and neatly with minimal waste compared to some I’ve seen. So far so good. I pulled pieces gently from the box. The first thing I extracted was the static bag which had all of the extra cords and respective attachments that had come with various parts of the computer hardware but weren’t needed for the build. Another plus in my opinion; many would likely not notice this, having not built computers themselves, nor even care, but I’m glad to have the extra pieces for this or other things I might need them for.

system76-leopard-04

The next thing I pulled out of the box was a thank you letter envelope with cool sticker and related swag.

system76-leopard-05

Stickers!

system76-leopard-07

That was it for peripheral things just floating around in the box. Next, out came the computer itself.

system76-leopard-06

It was wrapped in a static free bag itself. As it should be. I did notice a strange ink like bit of dusted debris in and around the box. I’m not really sure, and still am not sure today what exactly it was. I cleaned it up immediately. It wasn’t excessive, but was leaving slight marks on the white table which required a little scrubbing to remove.

After all things were removed from the box I removed them from envelopes and static free bags and placed them on the desk for a simple shot of all the parts in the box.

system76-leopard-08

Next I went through the steps of desk cleanup again and then connected my 28 port USB hub, Razer mouse, and a keyboard to the machine. It was finally time to boot this machine up!

system76-leopard-09

As for the screen which you see, it’s an LG 34” extra-wide monitor with a slight curve to it. Yes, it’s awesome, and yes it actually makes it relatively easy to not need dual monitors.

BOOTING!

system76-leopard-10

Ubuntu started, monitor fussing.

system76-leopard-11

I toyed around and had, for whatever reason, plugged in the HDMI when I should have used the other monitor connection. It immediately provided more resolution options when I changed the connection, and the monitor and related elements were detected appropriately!

On the side of the machine is a clear window cut through the case to view the internals. The cords were managed well and overall build was very clean. Upon boot up the graphics card immediately lit up too. The nice blue tone provided a nice light within the room.

system76-leopard-12

Ubuntu booted up cleanly, and I might add, crazy bloody fast.

system76-leopard-13

Here’s a non-flash shot of the machine and monitor side by side.

system76-leopard-14

I then changed the respective positioning, and the lighting, as you can see, actually changed dramatically just by repositioning the hardware and the rear light I was shooting with.

system76-leopard-15

Lights off shot. The window is beautiful!

system76-leopard-16

A slightly closer shot of the GTX 1080 humming away inside.

system76-leopard-17

The Ubuntu on Leopard WS Review

So far I’ve done a ton of coding & game playing on the machine. Here’s a breakdown of some specifics and respective comments, along with a full read on the specifications of the machine.

  • Ubuntu 16.10 (64-bit)
  • 4.0 GHz i7-6850K (3.6 up to 4.0 GHz – 15 MB Cache – 6 Cores – 12 threads)
  • High Performance Self-Contained Liquid Cooler
  • 32 GB Quad Channel DDR4 at 2400MHz (2× 16 GB)
  • 8 GB GTX 1080 with 2560 CUDA Cores
  • Chipset Intel® X99
  • Front: 2× USB 3.0 Type-A, 1× USB 2.0 Type-A, 1× eSATA
  • Rear: 3× USB 3.0 Type-A, 1× USB 3.1 Type-A, 1× USB 3.1 Type-C, 4× USB 2.0 Type-A, 1× PS/2
  • Gigabit Ethernet, optional Intel® Wireless-AC (a/b/g/n/ac)
  • GTX 1080: DVI-D, HDMI, 3× Display Port
  • Audio Front: Headphone Jack, Mic Jack
  • Audio Rear: 8 channel (HDMI, S/PDIF), Mic Jack, Line In, Line Out
  • Power Supply 750 W 80+ Certified (80% or greater power efficiency)
  • Dimensions 15.8″ × 8.3″ × 19.5″ (40.13 × 21.08 × 49.53cm)

Gaming

Using Steam I downloaded several games including my latest addiction Transport Tycoon. The others included Warhammer 40k: Dawn of War, Stronghold 3, Stellaris, Sid Meier’s Civ V, Master of Orion, and Cities: Skylines. Each of these games I loaded up and played for at least 20-30 minutes, with every graphics detail maxed out and full audio feature enabled. Where the option existed to run it at full resolution of 3440×1440 I ran the game at that resolution.

Not a blip, stir, or flake out of any sort. The color was solid (which obviously is also largely the monitor) and being able to move around these games in their respective 3D worlds was exceptional. All the while the speed of elapsed time in games like Transport Tycoon and Cities: Skylines barely slowed at all no matter how massive the city or layout was.

At this point I’ve also added about 16 hours of Transport Tycoon play to this, and I’ve built absurdly extensive layouts (100s of trains plus massively grown cities) and this processor and video card handle it. The aforementioned previous desktop easily choked to 1/10th the speed of this beast while running the game.

More on the gaming elements of this machine in the coming days.

Coding

I used Jetbrains Toolbox to download IntelliJ, WebStorm, CLion, DataGrip, Project Rider, and RubyMine. I dug around for some sample projects and slung together some basic “hello world!” apps to build with each of the IDEs. All built at absurd rates, but nothing real specific as I didn’t load any large projects just yet.

One of the things I did do was load Go so that I could continue work on the Data Diluvium Project that I’ve started (Repo on Github). To hack around with Go I also installed Atom and Visual Studio Code. Both editors on this particular machine were screaming fast and with the 34” display, I could easily have both to test out features side by side. Albeit, that makes shortcut combos a nightmare! DON’T DO THIS AT HOME!

Build time for the C, Go, and C# Projects I tried out were all crazy fast, but I’m holding off posting any results as I want to get some more apples to apples comparisons put together before posting. I’m also aiming to post versus some other hardware just so there are some baselines in which to compare the build times against.

More on the coding and related projects in the coming days too.

Important Software

You may think, if you’re not an Ubuntu or Linux user, what about all the other stuff like office software and … big long list goes here. Well, most of the software that we use is either available or a comparable product is available on Linux these days. There really aren’t many things that keep me – or would keep anybody – tied to OS-X/MacOS or Windows. Here are a few that I’ve tried out and am using regularly that are 1 to 1 across Windows, OS-X, and Linux.

  • Jetbrains – as mentioned before these work across all the platforms. They’re excellent developer tools.
  • Spotify – even though it states that there hasn’t been support or what not for the app for many months, it still works seamlessly on Linux. That’s what you get when you build an app for a solid platform – one doesn’t have to fix shit every week like on OS-X or Windows.
  • Slack – Slack is available on Linux too. After all the native app (or pseudo native) is built on Electron, which at its core runs on Node.js. So thus, feature parity is pretty much 100%. If you’re going to use slack, it’s not an excuse to be stuck on Windows or OS-X. The choice of platform is yours.

Summary

me-horns-up

NOTE: Nobody paid me a damned penny to write any of this btw, I reviewed all of these things because I love writing about my nerd adventures. No shill shit here. With that stated…

I have more things to review across all of these platforms and much more to write about this mean machine from System76. However, this review has gotten long enough. The TLDR; of this is, if you’re looking for a machine then System76 definitely gets the horns from me! Highly recommended!

Nagios on Ubuntu 14.04 LTS Setup and Configuration

1st – The Virtual Machine

First I created a virtual machine for use with VMware Fusion on OS-X. Once I got a nice clean Ubuntu 14.04 image set up I installed SSH on it so I could manage it as if it were a headless (i.e. no monitor attached) machine (instructions).

In addition to installing openssh, these steps also include build-essential, make, and gcc, along with instructions for installing VMware Tools, but don’t worry about those. The VMware Tools instructions are cumbersome and in parts just wrong, so skip that. The virtual machine is up and running with ssh and a good C compiler at this point, so we’re all set.

2nd – The LAMP Stack

sudo apt-get install apache2

Once installed the default page will be available on the server, so navigate over to 192.168.x.x and view the page to ensure it is up and running.

lampsetup

Next install mysql-server and php5-mysql.

sudo apt-get install mysql-server php5-mysql

During this installation you will be prompted for the mysql root account password. It is advisable to set one.
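The security related prompts described next come from MySQL’s secure installation script. On some Ubuntu 14.04 installs it doesn’t run automatically as part of the package setup, so if you don’t see these prompts it can likely be kicked off by hand.

sudo mysql_secure_installation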

Then you will be asked to enter the password (the one you just set about 2 seconds ago) for the MySQL root account. Next, it will ask you if you want to change that password. Select ‘n’ so as not to create another password for the root account since you’ve already created the password just a few seconds before.

For the rest of the questions, you should simply hit the enter key for each prompt. This will accept the default values. This will remove some sample users and databases, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

Next up is to install PHP. No grumbling, just install PHP.

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt

Next let’s open up dir.conf and change a small section that controls which files Apache will serve upon request. Here’s what the edit should look like.

Open up the file to edit. (in vi, to insert or edit, hit the ‘i’ key. To save hit escape and ‘:w’, and to exit vi after saving hit escape and then ‘:q’. To force exit without saving hit escape and ‘:q!’)

sudo vi /etc/apache2/mods-enabled/dir.conf

This is what the file will likely look like once opened.

<IfModule mod_dir.c>
DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>

Move the index.php file to the beginning of the DirectoryIndex list.

<IfModule mod_dir.c>
DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

Now restart apache so the changes will take effect.

sudo service apache2 restart
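With PHP installed and the directory index updated, a quick throwaway test page is an easy way to confirm Apache is actually handing .php requests to PHP. Drop the file in, browse to http://192.168.x.x/info.php, and check that the PHP info page renders.

echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php

Once you’ve confirmed it works, clean the test page up.

sudo rm /var/www/html/info.php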

Next let’s setup some public key for authentication. On your local box complete the following.

ssh-keygen

If you don’t enter a passphrase, you will be able to use the private key for auth without entering a passphrase. If you’ve entered one, you’ll need it and the private key to log in. Securing your keys with passphrases is more secure, but either way the system is more secure this way than with basic password authentication. For this particular situation, I’m skipping the passphrase.

What is generated is id_rsa, the private key and the id_rsa.pub the public key. They’re put in a directory called .ssh of the local user.

At this point copy the public key to the remote server. On OS-X grab the easy to use ssh-copy-id script with this command.

brew install ssh-copy-id

or

curl -L https://raw.githubusercontent.com/beautifulcode/ssh-copy-id-for-OSX/master/install.sh | sh

Then use the script to copy the ssh key to the server.

ssh-copy-id adron@192.168.x.x

That should give you the ability to log into the machine without a password every time. Give it a try.

Ok, so now on to the meat of this entry, Nagios itself.

Nagios Installation

Create a user and group that will be used to run the Nagios process.

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Install these other essentials.

sudo apt-get install libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

Download the source and extract it, then change into the directory.

curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz
tar xvf nagios-*.tar.gz
cd nagios-*

Next run the command to configure Nagios with the appropriate user and group.

./configure --with-nagios-group=nagios --with-command-group=nagcmd

When the configuration is done you’ll see a display like this.

Creating sample config files in sample-config/ ...

*** Configuration summary for nagios 4.1.1 08-19-2015 ***:

General Options:
-------------------------
Nagios executable: nagios
Nagios user/group: nagios,nagios
Command user/group: nagios,nagcmd
Event Broker: yes
Install ${prefix}: /usr/local/nagios
Install ${includedir}: /usr/local/nagios/include/nagios
Lock file: ${prefix}/var/nagios.lock
Check result directory: ${prefix}/var/spool/checkresults
Init directory: /etc/init.d
Apache conf.d directory: /etc/httpd/conf.d
Mail program: /bin/mail
Host OS: linux-gnu
IOBroker Method: epoll

Web Interface Options:
------------------------
HTML URL: http://localhost/nagios/
CGI URL: http://localhost/nagios/cgi-bin/
Traceroute (used by WAP):

Review the options above for accuracy. If they look okay,
type 'make all' to compile the main program and CGIs.

Now run the following make commands. First run make all as shown.

make all

Once that runs the following will be displayed upon success. I’ve included it here as there are a few useful commands in it.

*** Compile finished ***

If the main program and CGIs compiled without any errors, you
can continue with installing Nagios as follows (type 'make'
without any arguments for a list of all possible options):

make install
- This installs the main program, CGIs, and HTML files

make install-init
- This installs the init script in /etc/init.d

make install-commandmode
- This installs and configures permissions on the
directory for holding the external command file

make install-config
- This installs *SAMPLE* config files in /usr/local/nagios/etc
You'll have to modify these sample files before you can
use Nagios. Read the HTML documentation for more info
on doing this. Pay particular attention to the docs on
object configuration files, as they determine what/how
things get monitored!

make install-webconf
- This installs the Apache config file for the Nagios
web interface

make install-exfoliation
- This installs the Exfoliation theme for the Nagios
web interface

make install-classicui
- This installs the classic theme for the Nagios
web interface

*** Support Notes *******************************************

If you have questions about configuring or running Nagios,
please make sure that you:

- Look at the sample config files
- Read the documentation on the Nagios Library at:
https://library.nagios.com

before you post a question to one of the mailing lists.
Also make sure to include pertinent information that could
help others help you. This might include:

- What version of Nagios you are using
- What version of the plugins you are using
- Relevant snippets from your config files
- Relevant error messages from the Nagios log file

For more information on obtaining support for Nagios, visit:

https://support.nagios.com

*************************************************************

Enjoy.

After that successfully finishes, then execute the following.

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

Now some tinkering to add the web server user, www-data, to the nagcmd group.

sudo usermod -G nagcmd www-data

Now some Nagios plugins. You can find the plugins listed for download here: http://nagios-plugins.org/download/ The following are based on the 2.1.1 release of plugins.

Change back out to the user directory on the server and download, untar, and change into the newly extracted files.

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
tar xvf nagios-plugins-*.tar.gz
cd nagios-plugins-*
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now for some ole compilation magic.

make
sudo make install

Now pretty much the same thing for NRPE. Look here to ensure that 2.15 is the latest version.

cd ~
curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
tar xvf nrpe-*.tar.gz
cd nrpe-*

Then configure the NRPE bits.

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Then get to making it all.

make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config

Then a little file editing.

sudo vi /etc/xinetd.d/nrpe

Edit the only_from line in the file to include the following, where 192.x.x.x is the IP of the Nagios server.

only_from = 127.0.0.1 192.x.x.x

Save the file, and restart the xinetd service.

sudo service xinetd restart
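Before moving on to the server side configuration, it’s worth confirming the NRPE daemon is actually answering locally. The check_nrpe plugin that was just compiled should respond with the NRPE version if everything is wired up; the path below is the default install location used in this walkthrough.

/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1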

Now begins the Nagios Server configuration. Edit the Nagios configuration file.

sudo vi /usr/local/nagios/etc/nagios.cfg

Find this line and uncomment the line.

#cfg_dir=/usr/local/nagios/etc/servers

Save it and exit.

Next create the directory that will hold the configuration files for the servers to monitor.

sudo mkdir /usr/local/nagios/etc/servers

Next configure the contacts config file.

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find this line and set the email address to one you’ll be using.

email adronsemail@compositecode.com

Now add a Nagios service definition for the check_nrpe command.

sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add this to the end of the file.

define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit the file.
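Before restarting anything, Nagios can pre-flight the entire configuration, which catches typos in any of the object files that were just edited.

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg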

Now a few last touches for configuration in Apache. We’ll want the Apache rewrite and cgi modules enabled.

sudo a2enmod rewrite
sudo a2enmod cgi

Now create an admin user, we’ll call them ‘nagiosadmin’.

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Create a symbolic link of nagios.conf to the sites-enabled directory and then start the Nagios server and restart apache2.

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/
sudo service nagios start
sudo service apache2 restart

Enable Nagios to start on server boot (because, ya know, that’s what this server is going to be used for).

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios

Now navigate to the server and you’ll be prompted to login to the web user interface.

nagioslogin

Now begins the process of setting up servers you want to monitor… stay tuned, more to come.