Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted set up for my development workspace is a DataStax Enterprise cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built we can use to add nodes to our cluster and set up some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I set up in the second part of this series. It looks, with the installation steps taken out, just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],

  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked “inline” I set up the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install -y libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed; I opted for version 1.8, which is the required version. For more information about OpenJDK, check out the project’s documentation.
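With the provisioner steps in place, baking the image is a single packer build away. Here’s a minimal sketch; the template filename (dse.json) and the image name passed in are my assumptions, not fixed names from this series:

[code language="bash"]
# Bake the image, passing in the managed image name the template's
# "imagename" variable expects. (dse.json is a stand-in filename.)
packer build -var 'imagename=basedse' dse.json
[/code]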

The next thing I needed to do was to get everything set up so that I could use this Azure image to build an actual virtual machine. Since this process is built outside of the primary Terraform build, I need to import the various assets that get created for Packer image creation, along with the actual Packer images. By importing these resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but check out this process and it might make more sense. If it is still confusing do let me know, ping me on Twitter @adron and I’ll elaborate or edit things so that they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out part 1 of this series, I set up Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (Hashicorp Configuration Language), with a little bit of Bash as glue here and there. Then in Part 2 and Part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process are now in two entirely separate, disconnected states. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform infrastructure doesn’t know the Packer images exist, even though they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the resources that store the images into Terraform’s state. To do that, before doing an apply, run the terraform import command.
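The command’s general shape is terraform import &lt;resource_address&gt; &lt;id&gt;. As a hedged sketch, with a placeholder Azure resource ID (the real IDs get retrieved with jq in a moment):

[code language="bash"]
# General form: terraform import <resource_address> <azure_resource_id>
# The address must match a resource block declared in the HCL.
terraform import azurerm_image.basedse \
  "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/adronsimages/providers/Microsoft.Compute/images/basedse"
[/code]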

In order to get all of the resources we need to operate on and build images from, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq -r '.[] | select(.name == "basecassandra") | .id')
BASEDSE=$(az image list | jq -r '.[] | select(.name == "basedse") | .id')
[/code]

Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command az image list is issued, which is then piped | into jq. The select(.name == "theimagenamehere") filter keeps only the entry whose name field matches the appropriate image name, and .id then extracts the id from that matching entry. The -r switch tells jq to return just the raw value, without enclosing double quotes.
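To make that concrete, here’s a hedged sketch of roughly what the pipeline operates on; the JSON is trimmed to the relevant fields and the id values are placeholders:

[code language="bash"]
# az image list returns a JSON array roughly shaped like this (trimmed):
# [
#   { "name": "basecassandra", "id": "/subscriptions/.../images/basecassandra" },
#   { "name": "basedse",       "id": "/subscriptions/.../images/basedse" }
# ]
# select() keeps the entry whose name matches, .id pulls out its id,
# and -r emits the raw string without enclosing double quotes.
az image list | jq -r '.[] | select(.name == "basedse") | .id'
[/code]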

I also needed to import the resource group all of these are in. Following a similar jq style of piping one command’s results into another, I issued this command to get the Resource Group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state, but first, here’s the retrieval script consolidated in one place.
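A minimal sketch, assuming the resource group is named adronsimages as above:

[code language="bash"]
#!/usr/bin/env bash
# Gather the IDs of the Packer-built images and their resource group
# so they can be fed to terraform import.
BASECASSANDRA=$(az image list | jq -r '.[] | select(.name == "basecassandra") | .id')
BASEDSE=$(az image list | jq -r '.[] | select(.name == "basedse") | .id')
RG_IMPORT=$(az group show --name adronsimages | jq -r '.id')

# Echo them out as a sanity check before importing.
echo "$BASECASSANDRA"
echo "$BASEDSE"
echo "$RG_IMPORT"
[/code]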

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images and the Resource Group. I added the images in an images.tf file, and the Resource Group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into Terraform’s state, and from there we can issue further Terraform commands and applies against them.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

terraform-imports

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform, that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be. But it was taking too long for the next steps to get written up – but fear not, they’re on the way! In the coming posts I’ll cover the other resource elements we’ll need to import, what’s next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.

Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI

Prerequisites before all of this.

Have a basic understanding of how to use Terraform and what it does. This is covered pretty well in the Hashicorp Docs here (single page read <5 minutes), and if you have a LinkedIn Learning account check out my Terraform course “Learning Terraform”.

Beyond that, some basic CLI/terminal knowledge, an understanding of where environment variables are (as I detail here, here, and here for some starters), and miscellaneous related knowledge. You’ll also need knowledge of and user experience with Git. Most of these things I’ll detail explicitly, but otherwise I’ll either link to or provide context for additional information throughout the article.

1: Terraform

Download

You’ll need to first install Terraform and make it available for use on your machine. To do this navigate over to the Hashicorp Terraform site and to the download section. As of this time 0.12.6 is available, and for the foreseeable future this version or newer will be just fine.

Install

You’ll need to unzip this somewhere in a directory that you’ve got mapped in your path for execution. In my case I’ve set up a directory I call “Apps” and put all of my CLI apps in that directory, then added it to my path environment variable, and then terraform becomes available to me from any terminal wherever I need it. My path variable export on Linux and Mac looks like this.

[code]export PATH=$PATH:/home/adron/Apps[/code]

Now you can verify that the Terraform CLI is available by typing terraform in any terminal; you should get a readout of the available Terraform commands.
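A quick sanity check from any terminal:

[code language="bash"]
# Prints the installed version if the binary resolves from the path.
terraform -version
[/code]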

For those of you who might be trying to install this on the WSL (Windows Subsystem for Linux), on Windows itself, or some variance, there are specific instructions for that too. Check out Hashicorp’s installation instructions for more details on several methods and a tutorial video, plus the Microsoft Docs on installing Terraform on the WSL.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.

2: Azure CLI

For this tutorial, there are several ways for Terraform to authenticate to Azure; I’ll be using the Azure CLI authentication method, as detailed in this tutorial from Hashicorp. There are also some important notes about the Azure CLI. The Azure CLI method, in conjunction with the AzureRM Terraform Provider, is used to build out resources using infrastructure-as-code paradigms. Because of this it is important to ensure we have the right versions of everything to work together.

The Caveats

For the AzureRM Provider, which will be downloaded automatically when we set up the repository and initialize it with the terraform init command, we’ll want to make sure we have version 1.20 or greater. Previous versions of the AzureRM Provider used a method of authorizing that reset credentials after an hour. A clear issue.

Terraform also only supports authenticating via the az CLI, and it must be available in the path of the system, the same way terraform is available via the path. In other words, if both terraform and az can be executed from anywhere in the terminal, we’re all set. The older methods of Powershell Cmdlets or the azure CLI aren’t supported anymore.
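A quick way to confirm both resolve from the path:

[code language="bash"]
# Both commands should print a path; if either comes back empty,
# Terraform's Azure CLI authentication won't work.
which terraform
which az
[/code]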

Authenticating via the Azure CLI is only supported when using a “User Account” and not via Service Principal (ex: az login --service-principal). This works perfectly since these environments I’m building are specifically for my development needs. If you’d like to use this example as a more production-focused example, then something like Service Principals or another systems-level verification, authentication, and authorization model should be used. For other examples check out authentication via a Service Principal and Client Secret, Service Principal and Client Certificate, or Managed Service Identity.

Installing

To get the Azure CLI installed I followed the manual installation on Debian/Ubuntu Linux process. For Windows installation check out these instructions.


[code language="bash"]
# Update the latest packages and make sure certs, curl, https transport, and related packages are updated.
sudo apt-get update
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg

# Download and install the Microsoft signing key.
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
  gpg --dearmor | \
  sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null

# Add the software repository of the Azure CLI.
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
  sudo tee /etc/apt/sources.list.d/azure-cli.list

# Update the repository information and install the azure-cli package.
sudo apt-get update
sudo apt-get install azure-cli
[/code]


Login & Setup

az login

It’ll bring up a browser that’ll give you a standard Microsoft auth login for your Azure Account.

login.png

When that completes successfully a response is returned in the terminal as shown.

loggedin.jpg

Pieces of this information will be needed later on so I always copy this to a text file for easy access. I usually put this file in a folder I call “DELETE THIS CUZ SECURITY” so that I remember to delete it shortly thereafter so it doesn’t fall into the hands of evil!

For all the other operating systems and places that the Azure CLI can be installed, check out the docs here.

Once logged in, a list of accounts can be retrieved too. Run az account list to get the list of accounts available. If you only have one account (re: subscription) then you’ll just see exactly what was displayed when you logged in. However, if you have access to other subscriptions, those accounts will be shown here too.

If there is more than one subscription, one needs to be selected and set. To do that, execute the following command, passing it the subscription id, which is the standalone id field in the az account list output. Yes, it is kind of odd that account and subscription get used interchangeably in this situation, and that the subscription id isn’t exactly obvious when the data is identified as an account rather than a subscription, but we’ll give Microsoft a pass for now.

az account set --subscription="id" where id is the subscription id; or, as shown in az account list, the id there, whatever one wants to call it.
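As a sketch, the list-then-set flow looks like this; the id value is a placeholder:

[code language="bash"]
# List the subscriptions/accounts available to the logged-in user.
az account list --output table

# Set the default subscription by its id (placeholder shown here).
az account set --subscription="00000000-0000-0000-0000-000000000000"
[/code]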

Configuring Terraform Azure CLI Auth

To do this we will go ahead and setup the initial repository and files. What I’ve done specifically for this is to navigate to Github to the new repository path https://github.com/new. Then I selected the following options:

  1. Repository Name: terraform-todo-list
  2. Description: This is the infrastructure project that I’ll be using to “turn on” and “turn off” my development environment every day.
  3. Public Repo
  4. I checked Initialize this repository with a README and then added the .gitignore file with the Terraform template, and the Apache License 2.0.

newproject.png

This repository is now available at https://github.com/Adron/terraformer-todo-list. All of the steps and details outlined in this blog entry will be available in this repository plus any of my ongoing work on bastion servers, clusters, kubernetes, or other related items specific to my development needs.

With the repository cloned locally via git clone git@github.com:Adron/terraformer-todo-list.git there needs to be a main.tf file created. Once created I’ve added the azurerm provider block provider "azurerm" { version = "=1.27.0" } into the file. This enables Terraform to be executed from this repository directory with terraform init. Running this will pull down the azurerm provider dependency. If everything succeeds you’ll get a response from the command.
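As a minimal sketch of getting to that first successful init (the heredoc is just one way to write the file out):

[code language="bash"]
# Write a main.tf containing only the pinned azurerm provider block,
# then initialize to pull the provider down.
cat > main.tf <<'EOF'
provider "azurerm" {
  version = "=1.27.0"
}
EOF

terraform init
[/code]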

terraform-init-success

However if it fails, routinely it’s because I’ve ended up out of sync between the Terraform version and the provider version. As mentioned above we definitely need 1.20.0 or greater for the examples in this post. However, I’m also running Terraform at version 0.12.6, which requires at least 1.27.0 of the azurerm provider. If you see an error like this, it’s usually informative and you’ll just need to change the version number so the version of Terraform you’re using will pull down the right provider version.

terraform-init-fail

Next I run terraform plan, and everything should respond with a “no changes to infrastructure requested” response.

terraform-plan-aok.png

At this point I want to verify authentication against my Azure account with my Terraform CLI. To do this there are two additional fields that need to be added to the provider: subscription_id and tenant_id. The configuration will look similar to this, except with the subscription id and tenant id from the az account list data that was retrieved earlier when setting up and finding the Azure account details from the Azure CLI.

terraform-main-auth

Run terraform plan again to see the authentication results, which will look just like the terraform plan results above. With this done there’s just one more thing to do so that we have a good workspace in which to work with Terraform against Azure: I always, at this point of any project with Terraform and Azure, set up a Resource Group.

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work within.

3: Azure Resource Group

Just for clarity, a few details about the resource group. A Resource Group in Azure is a grouping of resources that should share the same lifecycle, which is exactly what I’m aiming for with all of these resources on a day-to-day basis for development. Every day I intend to start these resources in this Resource Group and then shut them all down at the end of the day.

There are other specifics about what exactly a Resource Group is, but I’ll leave the documentation to elaborate further; for my mission today I just want to have a Resource Group available for further Terraform work. In Terraform, the way I go about creating a Resource Group is by adding the following to my main.tf file.


[code language="javascript"]
provider "azurerm" {
  version         = "=1.27.0"
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"
}

resource "azurerm_resource_group" "adrons_resource_group_workspace" {
  name     = "adrons_workspace"
  location = "West US 2"

  tags = {
    environment = "Development"
  }
}
[/code]


Run terraform plan to see the changes. Then run terraform apply to make the changes, which will need a confirmation of yes.

terraform-apply-done

Once I’m done with that I go ahead and issue a terraform destroy command, giving it a yes confirmation when asked, to destroy and wrap up this work for now.

terraform-destroy-cleanup

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work within.
  • I ran terraform destroy to clean up for this set of work.

4: Using Environment Variables

There is one more thing to do before I commit this code to the repository: I need to get the subscription id and tenant id out of the main.tf file. One wouldn’t want to post their cloud access and identification information to a public repository, or ideally to any repository. The easy fix for this is to implement some interpolated variables that pull from environment variables. I can then set the environment variables via my startup script (such as .bash_profile or .bashrc, or even the IDE I’m running Terraform from, like Intellij or Webstorm for example). In that script, setting the variables would look something like this.


[code language="bash"]
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_tenant_id="11111111-1111-1111-1111-111111111111"
[/code]

Note that each variable is prefixed with TF_VAR_. This is the convention so that Terraform will look through the environment and pick up all of the variables that it needs to work with. Once these variables are added to the startup script, run source ~/.bashrc (Linux) or source ~/.bash_profile (Mac) to set those variables. For Windows check out this to set the environment variables. With that set there are a few more steps.

In the repository create a file named variables.tf and add the two variables variable "subscription_id" {} and variable "tenant_id" {}. Then in the main.tf file change the subscription_id and tenant_id fields to be assigned from the variables, i.e. subscription_id = var.subscription_id and tenant_id = var.tenant_id. Now run terraform plan and these results should display.

terraform-plan-after-variables
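For reference, a sketch of what the two files look like at this point; the heredocs are just for illustration:

[code language="bash"]
# variables.tf declares the two variables Terraform will populate
# from the TF_VAR_-prefixed environment variables.
cat > variables.tf <<'EOF'
variable "subscription_id" {}
variable "tenant_id" {}
EOF

# In main.tf the provider fields then read from the variables:
#   subscription_id = var.subscription_id
#   tenant_id       = var.tenant_id
terraform plan
[/code]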

Now terraform apply can be run to create the Resource Group, or terraform destroy to tear it down. The last step now is to just commit this infrastructure code, with the sensitive values now removed from the main.tf file.

[code language="bash"]
# Stage and commit the local changes.
git add -A
git commit -m 'First executable resource.'

# Pull in the remote default files (README, .gitignore, license) and rebase the local commit on top of them.
git pull --rebase origin master

# Push the changes and set the local master branch to track the remote master branch.
git push -u origin master
[/code]

Verification Checklist

  • Terraform is installed and executable from the terminal in whichever folder on the system.
  • Azure CLI is installed and executable from the terminal in whichever folder on the system.
  • The Azure CLI has been used to login to the Azure account and the subscription/account set for use as the default subscription/account for the Azure CLI commands.
  • A repository has been setup on Github (here) that has a main.tf file that I used to create a single Azure Resource Group in which to do future work within.
  • I ran terraform destroy to clean up for this set of work.
  • Private sensitive data has been moved from the main.tf file into environment variables so that it isn’t copied to the repository.
  • A variables.tf file has been added for the aforementioned variables that map to environment variables.
  • The code base has been committed to Github at https://github.com/Adron/terraformer-todo-list.
  • Both terraform plan and terraform apply deploy as expected and terraform destroy removes infrastructure cleanly as expected.

Next steps coming soon!

The Developer Advocates – Observations on Microsoft’s New Competence

Recently a whole slew of people got hired at Microsoft. Many of us have taken notice. It’s left a lot of people with questions like:

  • Why would Erik, Ashley, or Jesse work at Microsoft?
  • Doesn’t it seem suspicious?
  • I wonder what kind of cash they allocated to that payroll budget?
  • Does Microsoft hire anybody that actually uses Microsoft tooling anymore?
  • I’m confused, what is even going on?

The answers may be more obvious for those of us that have kept an eye on Microsoft. There has been this grand upheaval and cultural change that has occurred. CEO Satya Nadella has legitimately shifted the culture in a way that much of the company has wanted to go. Somehow, he’s also managed to start changing the culture even for those that weren’t sure or didn’t want to go.

Satya has taken what core individuals like Scott Guthrie, Scott Hanselman, and many others have hoped for and pushed for over the years and started to enable the people within Microsoft to make this happen. You can read plenty about how Microsoft has gotten its groove back, and about the work the Scotts and others have done to get that groove going. I’m not particularly writing about that, but it has inspired this article in a big way. I’m going to elaborate on what I’ve observed and what I know makes a strong, effective, useful, and community-focused developer advocate and developer advocates team.

The developer advocate team over at Microsoft is led by Jeff Sandquist, Brian Liston, and a few others. They’re solid individuals with good ideas about how to build an advocate team and have it contribute effectively to the community in which it works. Here are the top three obvious things they’ve done that have made the team effective, relevant, intelligent, and useful.

  1. The team is diverse. I’m not even going to play around: diverse teams, with many ideas and a range of people, do better. End of story, it really ought not to be complicated these days. But one can’t just start a team and say “I want my team to be diverse”. That’s a start, but the important part is: does one know how to build a diverse team? In technology, if one doesn’t have insight into actual human beings this doesn’t pan out so well. One has to have the ability to communicate effectively with people outside the tech nerd guy stereotype trope in order to actually build this type of team. Jeff, Brian, and crew appear to have this ability. I’ll write more on this later, but suffice it to say, this is a top skillset of a developer advocate team’s leadership.
  2. The team has to be skilled at a variety of complementing technologies. If someone knows X, and the next person knows X, and nobody knows Y, then the team is going to be fairly weak and likely broken in representing and providing value around Y, and in some sense even around X. At this point the Developer Advocates that have been introduced have some pretty extensive skillsets around key technologies that the Microsoft Technical Evangelists have traditionally been extremely weak in. This current team has some skills in the Windows space, but there’s been a big focus on filling the massive skills gap around Linux, cloud technology (ironic there’s traditionally been such a gap on the cloud team), non-MS languages like Go, distributed systems, data analysis and intelligence (or data science, or whatever one may call these roles), and more. The Advocate team (also not called evangelists anymore, finally) is finally in a good position to actually start doing advocacy around actual cloud technology. I’m excited for the potential of the prospects!
  3. The third thing that has stood out is that they’ve hired people that know how to do the advocacy thing already. They’re not trying to define or redefine it on Microsoft’s terms, but instead have brought people onboard that are already natural advocates of things they find interesting. Take Erik St Martin (@erikstmartin) for example: he co-authored the book “Go in Action” with Brian Ketelsen (@bketelsen) and co-hosts Go Time FM. That brings up another great example with Brian Ketelsen. Both of these guys are huge advocates in their own right, without connection to any specific big company or what not. These are the types of people that bring huge strength to a team, with already proven ability to deliver. Then there’s Jesse Frazelle, but seriously, I really don’t even need to mention the work she’s done with containers (cough cough, Docker, etc). Another person you should be watching is Anthony Chu (@nthonyChu), who’s been a steady advocate of Azure and great technologies over the years, and also joined up. You can read more about the individual team members here, and I hear through the secret grapevine that there are more en route to join. Simply put, Microsoft isn’t pulling their punches!

Before a lot of the Microsoft team had been formed into the epic legion it is today, there were a number of articles pointing to this rebirth into a newly relevant organization. One solid one is Ars Technica’s “Microsoft’s renewed embrace of developers, developers, developers, developers“ by Peter Bright (@drpizza). One of the first, as anyone who reads & subscribes to RedMonk analysis would suspect, was published by James Governor (@monkchips) with “On Hiring Jesse Frazelle: Microsoft’s Developer Advocacy Hot Streak Continues“. The writing has been on the proverbial wall.

So now what?

Now the thing to wait and see is whether the team and the team’s leadership can direct all of this energy into their respective efforts. The team is big: lots of people, lots of focus points. How will they use each other’s strengths while building up their core competencies? How will they provide value without detracting from product, and push product without losing community value? There are a lot of questions to be answered, and I’ll be keeping a close eye on their efforts, as I do with all of the advocacy teams I find fellow interests in. The advocacy, the effectiveness, and the reasons for it all have been an interest of mine for some time. So much so that you can expect more than a few more articles on this topic. Until then, cheers!

 

Truly Excellent People and Coding Inspiration…

.NET Fringe took place this last week. It’s been a rather long time since the last conference I actually got to attend, meet people, and talk with them about all the different projects, aspirations, goals, and ideas about what’s next for the future. This conference was perfect to jump into. First and foremost, I knew it was an effort in being inclusive of the existing community and newcomers. We’d reached out to many brave souls to come and attend this conference about pushing technology into the future.

I met some truly excellent people. Smart, focused, intent, and a whole lot of great conversations followed meeting these people. Here are a few people you’ll want to keep an eye on based on the technology they’re working on. I got to sit down and talk to every one of these coders, and they’re in top form: smart, inventive, witty, and full of great humor to boot!

Maria Naggaga @Twitter

I met Maria, and one of the first things I saw was her crafty and most excellent art sketches around lifestyles, heroes, and more. I love art like this, and was really impressed with what Maria had done with hers.

Maria giving us the info.

I was able to hang out with Maria a bit more and had some good conversation time talking about evangelism, tech fun, and nonsense all around. I also was able to attend her talk “Legacy… What?”, which was excellent. The description poses a common question: “When students think about .NET they think: legacy, enterprise, retired, and what is that?”, which I too find to be a valid thought. Is .NET purely legacy these days? For many getting into the field it generally isn’t the landscape of greenfield applications, and is far more commonly associated with legacy applications. Hearing her vantage point on this as an evangelist was eye opening. I gained more ideas and thoughts, and was pushed to really get that question answered for students in a different way… which I’ll add to sometime in the future in another blog entry.

Kathleen Dollard @Twitter && @Github

I spoke to Kathleen while we took a break across the street from the conference at Grendal’s Coffee Shop. We talked a lot about education and what is effective training, diving heavily into what works around video, samples, and related things. You see, we’re both authors at Pluralsight too and spend a lot of time thinking about these things. It was great to be able to sit down and really discuss these topics face to face.

We also dived into a discussion about city livability and how Portland’s transit system works, what is and isn’t working in the city and what it’s like to live here. I was, of course, more than happy to provide as much information as I could.

We also discussed her interest in taking legacy shops (i.e. pre-C# even, maybe Delphi or whatever might exist) and helping them modernize their shop. I found this interesting, as it could be a lot of fun figuring out large gaps in technology like that and helping a company to step forward into the future.

Kathleen gave two presentations at the conference – both excellent. One was the “Your Code, Your Brain” presentation, talking about exactly the topic of legacy shops moving forward without disruption.

If you’re interested in Kathleen’s courses, give a look here.

Amy Palamountain @Twitter && @Github

Amy had wicked great slides and samples that were probably the most flawless I’ve seen in a while. Matter of fact, a short while after the conference Amy put together a blog entry about those great slides and samples: “Super Smooth Technical Demoes“.

An intent and listening audience.

Amy’s talk at the conference was titled “Space, Time, and State“. It almost sounds like we could just turn that into an acronym. The talk was great, touching on the aspects of reactiveness and the battle with state that we developers fight every day while building solutions.

We also got to talk a little after the presentation about the horror of time zones, and had a slew of good conversation.

Tomasz Janczuk @Twitter && @Github

AAAAAaggghhhhhh! I missed half of Tomasz’s talk! It always happens at every conference, right? You get to talking to people, excited about this topic or that topic, and BOOM, you’ve missed half of a talk that you fully intended to attend. But hey, the good part is I still got to see half the talk!

If you’re not familiar with Tomasz’ work and you do anything with Node.js, you should pay close attention. Tomasz has been largely responsible for the great work behind Edge.js and for influencing the effort to get Node.js running (and running damn well, might I add) on Windows. For more on Edge.js check out Act I and Act II and the Github repository.

The Big Hit for Me, Distributed Systems

First some context. About 4 years ago I left the .NET community almost entirely. Even though I was still doing a little work with C#, I primarily switched stacks to other things to push forward with Riak, distributed systems usage, devops deployment of client apps, and a whole host of other things. At the time I had basically gotten really burned out on where the .NET community had ended up worldwide; while some pushed onward with the technologies I loved to work with, I was tired of waiting. I dived into some esoteric stuff, learned strange programming techniques in JavaScript, Ruby, and Erlang, and dived deeper into distributed technologies for use in application construction.

However, some in the community didn’t stop moving the ball forward, and at this conference I got a great view into some of that progress! I’m stoked to see this technology and where it is now, because there is a LOT of potential for a number of things. Here are the two talks and two more great people I got to see speak. One I knew already (great to see you again and hang out, Aaron!) and one I had the privilege & honor to meet (it was most excellent hanging out and seeing your presentation, Lena).

Aaron Stannard @Twitter && @Github

Aaron I’d met back when Troy & I put together the first Node PDX. Aaron had swung into Portland to present on “Building Node.js Applications on Windows Azure“. At .NET Fringe, however, Aaron was diving into a topic that was super exciting to me. The first line of the talk description really says it all: “Distributed computing in .NET isn’t something you often hear about, but it’s becoming an increasingly important area for growing .NET businesses around the globe. And frankly it’s an area where .NET has lagged behind other runtimes and platforms for years – but this is changing!“ Yup, that’s my exact pain point; it’s awesome to know Aaron & Petabridge are kicking ass in this space now.

Aaron’s presentation was solid, as to be expected. We also had some good conversations before and after the presentation about the state of distributed compute and systems within the Microsoft and Windows ecosystem. To check out more about Akka .NET that Aaron & Andrew Skotzko … follow @AkkaDotNet, @aaronontheweb, @petabridge, and @askotzko.

Akka .NET

Alena Dzenisenka @Twitter && @Github


Lena traveled all the way from Kiev in the Ukraine to provide the .NET Fringe crowd with some serious F# distributed and parallel compute knowledge in “Embracing the Cloud“! (Slides here)

Here’s a short dive into F# if you’re unfamiliar, which you can install on OS X, Windows, or whatever. So don’t use the “well, I don’t use Windows” excuse to not give it a try! Here’s info about MBrace that Lena also used in her demo. Also dive into brisk from elastacloud…

In addition to the excellent talk that Lena gave I also got to hang out with her, Phil Haack, Ryan Riley, and others over food at Biwa on the last day of the conference. After speaking with Lena about the Ukraine, computing, coding and other topics around hacking and the OSS Community she really inspired me to take a dive into these tools for some of the work that I’m working on now and what I’ll be doing in the near future.

All The Things

Now of course, there were a ton of other people I got to meet, people I got to catch up with whom I haven’t seen in ages, and others I didn’t get to write about. It was a really great conference with great content. I’m looking forward to round 2 and spending more time with everybody in the future!

The whole bunch of us at the end of the conference!

Cheers everybody!   \m/

An Aside of Blog Entries on .NET Fringe

Here are some additional blog entries that others wrote about the event. In addition to these, I’ll be updating this entry with any additional entries that I see pop up – so if you post one let me know, and I’ll also update the talks discussed above with videos when they’re posted live.

RSVP for the Geek Train to .NET Fringe

Cascadian Flag

The .NET Fringe Conference guests coming from northern Cascadia (north of Portland) will have the excellent benefit of taking the Geek Train to the conference. It’s also only $10 friggin’ bucks!

RSVP link here

Departure

We’ll depart Saturday, April 11th at 2pm, with an ETA into Portland at 5:50pm.

Itinerary

  • 1:40pm Arrive at train station in Seattle to join group for boarding. **
  • 2:00pm Departing Seattle King Street Station (i.e. you better be on the train)
  • 2:10pm We’ll be seated and get set up for…
  • 2:15pm We’ll break into teams of ~4 or so people (or however many of us there are, we’ll break out into reasonably sized groups).
  • 2:17pm I’ll announce hacking goals and ideas for the teams and we’ll launch into coding. More information will be announced soon, but suffice it to say we’ll be planning a hack around geo and logistics based solutions! The solutions hacking begins!
  • – – – much hacking and enjoying of the trip occurs here! 🙂 – – –
  • 5:00pm We announce who’s completed what and we’ll demo and discuss the app awesomeness of what we’ve managed to come up with.
  • 5:50pm or before we arrive in Portland and the fringe fun shall begin.

I’ll have more information posted here along with some other ideas about what the hackfest will include, so stay tuned and also be sure to follow @dotnetfringe, and check out all the speakers to start figuring out your plans!