Development Workspace with Terraform on Azure: Part 1 – Install and Setup Terraform and Azure CLI

Before diving in, a few prerequisites.

Have a basic understanding of what Terraform does and how to use it. This is covered pretty well in the Hashicorp Docs here (a single-page read, under 5 minutes), and if you have a LinkedIn Learning account check out my Terraform course "Learning Terraform".

Beyond that you'll want some basic CLI/terminal knowledge, an understanding of how environment variables work (I detail some starters here, here, and here), and working experience with Git. Most of these things I'll detail explicitly; otherwise I'll link to or provide context for additional information throughout the article.

1: Terraform

Download

You'll need to first install Terraform and make it available for use on your machine. To do this navigate over to the Hashicorp Terraform site and to the download section. As of this writing 0.12.6 is available, and that version or any newer release will be just fine for the foreseeable future.

Install

You'll need to unzip this into a directory that is on your execution path. In my case I've set up a directory I call "Apps" and put all of my CLI apps in it, then added that directory to my PATH environment variable so that terraform becomes available from any terminal wherever I need it. My PATH export on Linux and Mac looks like this.

export PATH=$PATH:/home/adron/Apps

Now you can verify that the Terraform CLI is available by typing terraform in any terminal; you should get a readout of the available Terraform commands.
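If you want a quick sanity check, something like the following works (assuming a Unix-style shell; the Apps path is just my own convention from above):

# Confirm the binary resolves via the PATH and print its version.
which terraform
terraform -version

# Running terraform with no arguments lists the available subcommands.
terraform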

For those of you who might be trying to install this on the WSL (Windows Subsystem for Linux), on Windows itself, or some variant, there are specific instructions for that too. Check out Hashicorp's installation instructions for more details on several methods and a tutorial video, plus the Microsoft Docs on installing Terraform on the WSL.

Verification Checklist

  • Terraform is installed and executable from the terminal, from any folder on the system.

2: Azure CLI

There are several ways for Terraform to authenticate to Azure; for this tutorial I'll be using the Azure CLI authentication method as detailed in this tutorial from Hashicorp. There are also some important notes about the Azure CLI. The Azure CLI method, in conjunction with the AzureRM Terraform Provider, is used to build out resources using infrastructure-as-code paradigms, so it is important to ensure we have the right versions of everything working together.

The Caveats

For the AzureRM Provider, which will be downloaded automatically when we set up the repository and initialize it with the terraform init command, we'll want to make sure we have version 1.20 or greater. Previous versions of the AzureRM Provider used an authorization method that reset credentials after an hour, which is a clear issue.

Terraform also only supports authenticating via the az CLI, and it must be available in the system path, just as terraform is available via the path. In other words, if both terraform and az can be executed from anywhere in the terminal, we're all set. The older PowerShell cmdlets and the legacy azure CLI are no longer supported.

Authenticating via the Azure CLI is only supported when using a "User Account" and not via a Service Principal (ex: az login --service-principal). This works perfectly since the environments I'm building are specifically for my development needs. If you'd like to adapt this to a more production-focused scenario, then something like Service Principals or another system-level verification, authentication, and authorization model should be used. For other examples check out authentication via a Service Principal and Client Secret, Service Principal and Client Certificate, or Managed Service Identity.

Installing

To get the Azure CLI installed I followed the manual installation process for Debian/Ubuntu Linux. For Windows installation check out these instructions.


# Update the latest packages and make sure certs, curl, https transport, and related packages are updated.
sudo apt-get update
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
# Download and install the Microsoft signing key.
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
gpg --dearmor | \
sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null
# Add the software repository of the Azure CLI.
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
sudo tee /etc/apt/sources.list.d/azure-cli.list
# Update the repository information and install the azure-cli package.
sudo apt-get update
sudo apt-get install azure-cli

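Once that completes, a quick check that the CLI actually landed on the path doesn't hurt (the version output will vary by release):

# Verify the az CLI resolves via the PATH and report its version details.
command -v az
az --version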

Login & Setup

az login

It’ll bring up a browser that’ll give you a standard Microsoft auth login for your Azure Account.

[Image: login.png]

When that completes successfully a response is returned in the terminal as shown.

[Image: loggedin.jpg]

Pieces of this information will be needed later on so I always copy this to a text file for easy access. I usually put this file in a folder I call “DELETE THIS CUZ SECURITY” so that I remember to delete it shortly thereafter so it doesn’t fall into the hands of evil!

For all the other operating systems and places that the Azure CLI can be installed, check out the docs here.

Once logged in, a list of accounts can be retrieved too. Run az account list to get the list of accounts available. If you only have one account (i.e. subscription) then you'll see exactly what was displayed when you logged in. However, if you have access to other subscriptions, those accounts will be shown here as well.

If there is more than one subscription, one needs to be selected and set as the default. To do that, execute the following command, passing the subscription id, which is the id field in the list above. Yes, it is kind of odd that account and subscription are used interchangeably in this situation, and the subscription id isn't exactly obvious when the data is labeled as an account rather than a subscription, but we'll give Microsoft a pass for now. Suffice it to say, the account id and subscription id in this data are the same standalone id field.

az account set --subscription="id", where id is the subscription id shown in the az account list output.
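If the raw JSON from az account list is noisy, the CLI's JMESPath --query support can trim it down to just the fields needed. The query string below is just one way I might shape it, not the only way:

# List subscriptions showing only name, id, and which one is the default.
az account list --query "[].{name:name, id:id, isDefault:isDefault}" --output table

# Set the subscription to use by its id, then confirm the default took effect.
az account set --subscription="00000000-0000-0000-0000-000000000000"
az account show --output table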

Configuring Terraform Azure CLI Auth

To do this we will go ahead and set up the initial repository and files. What I've done specifically for this is navigate to Github's new repository page at https://github.com/new. Then I selected the following options:

  1. Repository Name: terraform-todo-list
  2. Description: This is the infrastructure project that I’ll be using to “turn on” and “turn off” my development environment every day.
  3. Public Repo
  4. I checked Initialize this repository with a README and then added the .gitignore file with the Terraform template, and the Apache License 2.0.

[Image: newproject.png]

This repository is now available at https://github.com/Adron/terraformer-todo-list. All of the steps and details outlined in this blog entry will be available in this repository plus any of my ongoing work on bastion servers, clusters, kubernetes, or other related items specific to my development needs.

With the repository cloned locally via git clone git@github.com:Adron/terraformer-todo-list.git, a main.tf file needs to be created. Once created, I added the azurerm provider block provider "azurerm" { version = "=1.27.0" } to the file. This enables Terraform to be initialized from this repository directory with terraform init, which will pull down the azurerm provider dependency. If everything succeeds you'll get a response from the command as shown below.
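For reference, the entirety of main.tf at this point is just that single provider block:

provider "azurerm" {
  version = "=1.27.0"
}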

[Image: terraform-init-success]

However, if it fails, it's routinely because I've ended up out of sync between the Terraform version and the provider version. As mentioned above, we definitely need 1.20.0 or greater of the AzureRM Provider for the examples in this post. However, I'm also running Terraform at version 0.12.6, which requires at least 1.27.0 of the azurerm provider. If you see an error like this, it's usually informative and you'll just need to change the version number so the version of Terraform you're using will pull down the right provider version.

[Image: terraform-init-fail]

Next I run terraform plan, which should respond that no changes to infrastructure are requested.

[Image: terraform-plan-aok.png]

At this point I want to verify authentication against my Azure account with the Terraform CLI. To do this there are two additional fields that need to be added to the provider block: subscription_id and tenant_id. The configuration will look similar to this, except with the subscription id and tenant id from the az account list data retrieved earlier via the Azure CLI.

[Image: terraform-main-auth]
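Reconstructed here with placeholder values (substitute the id and tenantId from your own az account list output), the provider block ends up like this:

provider "azurerm" {
  version         = "=1.27.0"
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"
}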

Run terraform plan again to see the authentication results, which will look just like the terraform plan results above. With this done there's just one more thing needed for a good workspace for Terraform against Azure. At this point of any project with Terraform and Azure, I always set up a Resource Group.

Verification Checklist

  • Terraform is installed and executable from the terminal, from any folder on the system.
  • Azure CLI is installed and executable from the terminal, from any folder on the system.
  • The Azure CLI has been used to log in to the Azure account, and the subscription/account has been set as the default for Azure CLI commands.
  • A repository has been set up on Github (here) with a main.tf file that I used to create a single Azure Resource Group in which to do future work.

3: Azure Resource Group

Just for clarity, a few details about the resource group. A Resource Group in Azure is a grouping of resources that should share the same lifecycle, which is exactly what I'm aiming for with all of these resources on a day-to-day basis for development. Every day I intend to start the resources in this Resource Group and then shut them all down at the end of the day.

There are other specifics about what exactly a Resource Group is, but I'll leave those to the documentation; for my mission today I just want to have a Resource Group available for further Terraform work. In Terraform, the way I go about creating a Resource Group is by adding the following to my main.tf file.


provider "azurerm" {
version = "=1.27.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
tenant_id = "11111111-1111-1111-1111-111111111111"
}
resource "azurerm_resource_group" "adrons_resource_group_workspace" {
name = "adrons_workspace"
location = "West US 2"
tags = {
environment = "Development"
}
}


Run terraform plan to see the changes. Then run terraform apply to make the changes, which will need a confirmation of yes.

[Image: terraform-apply-done]

Once I’m done with that I go ahead and issue a terraform destroy command, giving it a yes confirmation when asked, to destroy and wrap up this work for now.

[Image: terraform-destroy-cleanup]

Verification Checklist

  • Terraform is installed and executable from the terminal, from any folder on the system.
  • Azure CLI is installed and executable from the terminal, from any folder on the system.
  • The Azure CLI has been used to log in to the Azure account, and the subscription/account has been set as the default for Azure CLI commands.
  • A repository has been set up on Github (here) with a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.

4: Using Environment Variables

There is one more thing to do before I commit this code to the repository. I need to get the subscription id and tenant id out of the main.tf file. One wouldn't want to post their cloud access and identification information to a public repository, or ideally to any repository. The easy fix for this is to implement some interpolated variables that pull from environment variables. I can then set the environment variables via my startup script (such as .bash_profile or .bashrc, or even via the IDE I'm running Terraform from, like IntelliJ or WebStorm for example). In that script, setting the variables would look something like this.


export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_tenant_id="11111111-1111-1111-1111-111111111111"

Note that each variable is prefixed with TF_VAR_. This is the convention Terraform uses to pick up the environment variables it needs to work with. Once these variables are added to the startup script, run source ~/.bashrc (Linux) or source ~/.bash_profile (Mac) to set them. For Windows check out this to set the environment variables. With that set there are a few more steps.

In the repository create a file named variables.tf and add the two variable declarations variable "subscription_id" {} and variable "tenant_id" {}. Then in the main.tf file change the subscription_id and tenant_id fields to reference those variables: subscription_id = var.subscription_id and tenant_id = var.tenant_id.
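Put together, the relevant pieces of the two files end up roughly like this:

# variables.tf – declarations that map to the TF_VAR_ prefixed environment variables.
variable "subscription_id" {}
variable "tenant_id" {}

# main.tf – the provider block now references the variables instead of literal ids.
provider "azurerm" {
  version         = "=1.27.0"
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

Now run terraform plan and the results should display as shown.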

[Image: terraform-plan-after-variables]

Now terraform apply can be run to create the Resource Group, or terraform destroy to destroy it. The last step is to commit this infrastructure code, with the sensitive values now removed from the main.tf file.

git add -A

git commit -m 'First executable resource.'

git pull --rebase to pull in all the remote default files and such and rebase the local additions on top of them.

git push -u origin master to push the changes and set the local master branch to track the remote master branch.

Verification Checklist

  • Terraform is installed and executable from the terminal, from any folder on the system.
  • Azure CLI is installed and executable from the terminal, from any folder on the system.
  • The Azure CLI has been used to log in to the Azure account, and the subscription/account has been set as the default for Azure CLI commands.
  • A repository has been set up on Github (here) with a main.tf file that I used to create a single Azure Resource Group in which to do future work.
  • I ran terraform destroy to clean up for this set of work.
  • Private sensitive data has been moved from the main.tf file into environment variables so that it isn't copied to the repository.
  • A variables.tf file has been added for the aforementioned variables that map to environment variables.
  • The code base has been committed to Github at https://github.com/Adron/terraformer-todo-list.
  • Both terraform plan and terraform apply deploy as expected, and terraform destroy removes infrastructure cleanly as expected.

Next steps coming soon!

New Relic, The King Makers, MS Open Tech, Riak VMs and Life Gets Easier Today

Today Microsoft released the VM Depot, in partnership with a number of companies including Basho, Hupstream, and Bitnami. I've always followed Bitnami, so it's really cool to see their VM releases for Jenkins (CI build server), WordPress, the Ruby 1.9.3 stack, Node.js, and about everything else you can imagine out there alongside our Basho Riak CentOS image. If you want a great way to get kick-started with Riak and you're set up with Windows Azure, now there is an even easier way to get rolling.

Over on the Basho blog we've announced the MS Open Tech and Basho Collaboration. I won't repeat what was stated there, but want to point out two important things:

  1. Once you get a Riak image going, remember there's the whole community and the Basho team itself there to help you get things rolling via the mailing list. If you're looking for answers, you'll be able to get them there. Even if you get everything running smoothly, join in anyway and at least just lurk. 🙂
  2. The RTFM value factor is absolutely huge for Riak. Basho has a superb documentation site here. So definitely, when jumping into or researching Riak as software you may want to build on, use for your distributed systems or the Riak Key Value Databases, check out the documentation. Super easy to find things, super easy to read, and really easy to get going with.

So give Riak a try on Windows Azure via the VM Depot. It gets easier by the day, and gives you even more data storage options, distribution capabilities and high availability that is hard to imagine.

New Relic & The Rise of the New Kingmakers

In other news, my good friends at New Relic, in partnership with Redmonk analyst Stephen O'Grady, have released a book he's written titled The New Kingmakers: How Developers Conquered the World. You may know New Relic as the huge developer advocates that they are, with the great analytics tools they provide. Either way, give it a look and read the book. It's not a giant thousand-page tome, so it just takes a nice lunch break and you'll get the pleasure of flipping the pages of the book Stephen has put together. You might have read the blog entry that started the whole "Kingmakers" statement; if you haven't, give that a read first.

I personally love the statement, and have used it a few times myself. In relation to the saying and the book, I’ll have a short review and more to say in the very near future. Until then…

Cheers, enjoy the read, the virtual images and happy hacking.

PaaS Help! Know any PaaS Providers?

I've been diligent and started a search of Platform as a Service providers; so far my list includes:

  • EngineYard
  • Heroku
  • AWS Beanstalk
  • Windows Azure
  • AppFog
  • Tier3
  • CloudFoundry
  • OpenShift
  • IBM PaaS
  • Google App Engine
  • CloudBees

Who else is there? Help me out in creating a list of every possible offering we can find!  Cheers! Please leave a comment or three below with any I’ve missed.  Thanks!

Reality Distortion Field : 17 Companies’ Sitrep

I'm sitting on the bus this morning, as happens almost every day of the week. I'm flipping pages, sort of, since it's an eBook in my Kindle app. I'm reading about Steve Jobs taking over the Macintosh program at Apple, how things started to fall into place for Apple and the Macintosh, and how Jobs saw what could be and pushed for it. Everybody else, Microsoft, Xerox, Canon, and practically every single other company, was missing it. Xerox PARC had it right in front of them, the GUI, the mouse, an object-oriented language, and about every single thing we assume for computer use and development today, but wasn't doing anything with it. They were all missing it, except Jobs. The eccentric, crazed, reality-distortion-field-generating Jobs pushed forward and found those that agreed that this was absolutely the future. Today's computers owe so much to Jobs' efforts to pull these people together toward what he saw as the future, and our modern computing world will forever be indebted to Steve Jobs.

Howard Hughes had done this 50 years earlier. He simply stated, "nobody wants to fly on a plane at 10k feet and get shaken to pieces, planes need to fly at 30,000 feet or more where the air is smooth!" He then went about working to get a plane built that could do this! The Government was in his way, the industry was fighting him, everybody said this wasn't the way to go. Nobody could build a plane that would do that right now! It's absurd. He did it, and bought every single one of them he could, putting the airline (TWA) in hock at the same time! But it paid off, and his airline easily had the nicest planes and the best flights in the world. Today's airlines are all modeled after this ideal; our modern travel owes a huge debt to what Howard Hughes pushed forward.

The competition, the fighting, pushed the envelope, but in both cases a visionary could see the future. To them it was as plain as an image on a clear sunny day. To them, the future didn't need to be tomorrow, it was ready right now. The future just needed to be dragged kicking and screaming directly into today! They did this, they pulled together people who could make these changes, and they and their teams yanked the future right into humanity's grasp.

Utility Computing / Cloud Computing

With those thoughts flying around at Warp 10 in my mind, everywhere, at every moment, it occurred to me: we're merely putting the motherboards and cassette tape drives together right now in cloud computing. We have no Macintosh of cloud computing, we have no clear direction, and there has to be something bigger, much bigger. At this point we're merely making small steps, slight little strides toward the future. What we need to do is create the future and pull it directly into now!

There could be more though. Some of these things are being put together by individuals at various companies, oriented toward the platform level. There is, somewhere, a growing movement toward that next big shift in the way things are done. The gap between big architectures, big ideas, and launching these things is decreasing by the day – literally!

With these big ideas and big architectures and all the small steps and small pieces the industry is moving in the right direction. We’ve experienced shifts over the years and some more are definitely coming up very soon!

The Playing Field : Sitrep

With these thoughts racing around I felt compelled to look at where the industry stands right now. These are in no particular order; they all provide some type of building block for the next big thing, each in some aspect of the industry.

Amazon Web Services : This one should not need explaining. They’re probably the most utilized, nearly the most advanced, robust, price conscious utility storage, compute, and services provider in existence today. They continue to defeat the innovator’s dilemma over and over again, this company, and the departments in the company are hungry, very hungry and they fight the fight to stay in the lead.

Cloudability : This company is about keeping utility/cloud computing costs in check, and knowing where and when you're pushing the pricing limits among all the various building blocks. There have been more than a few issues with billing and people blowing through budgets by inadvertently leaving their 1000-node EC2 instances on, and Cloudability helps devops keep these types of things under control, stopping overages cold!

New Relic : The key to this offering is monitoring of everything, everywhere, all the time. New Relic offers absolutely beautiful charting and information displays around services, compute, storage, and a zillion other metrics among Ruby, PHP, Python, .NET, and about everything else available.

Puppet Labs : Imagine operations, IT, and systems administration all rolled into a single bad ass company's product efforts. Imagine ways to automate and monitor virtualized machines, get them deployed, all with elegant and extremely powerful tools. Imagine that power now, and you'll know what Puppet Labs provides.

Opscode : The cloud needs management, hard core powerful management. Opscode and their Chef product do just that. Chef's influence has gone so far as to lead Amazon Web Services (and others) to design their systems automation in a way that enables Chef usage. The devops community around Opscode is growing, and the inroads to systems agility they're making are getting to the point of being considered a disruptive market force!

Joyent : The birthplace of node.js, do I need to add more? Well, ok, I will. Joyent has a host of amazing devs and amazing ops goals. The advances coming out of Joyent aren't always associated back to the company (maybe they should be), but rest assured there is some heavy duty research and dev going on over there. Things to check out would be their SmartDataCenter and of course the JoyentCloud.

MongoHQ : MongoHQ is one of the distributed cloud hosting providers for MongoDB. MongoHQ is also a supported provider in several of the other PaaS providers such as Heroku and AppHarbor.

MongoLabs : MongoLabs is another distributed cloud hosting provider for MongoDB. MongoLabs is also a supported provider in several of the other PaaS providers such as Heroku and AppHarbor.

Nodester : Nodester is a hosting solution for node.js applications beautifully distributed in a horizontal way.

Nodejitsu : One of the leading node.js hosting providers and a very active participant in the community in and around New York.

AppFog : AppFog is a Platform as a Service (PaaS Provider) that is working on providing a cloud based horizontally distributed platform for creating applications with a wide variety of frameworks and languages. Some of those include .NET, Ruby on Rails, Java, and many others.

PhpFog : This is the PHP root of the PaaS Provider AppFog. They have a good history and an absolutely spectacular architecture for PHP Applications with a screaming simple and fast deployment model to cloud/utility based systems. They have a really great product.

Heroku : Deploy Ruby, Node.js, Clojure, Java, Python, and Scala. Probably the leader in PaaS based deployment right now. Got git, get Heroku, and git push heroku master is about all the gettin' needed for your application to be running there.

EngineYard : Think Ruby, Ruby on Rails, Rubinius, or any other aspect of Ruby and you’ll probably arrive at EngineYard in short order. The teams at EngineYard are heavily active in the cloud & Ruby scene. They are easily one of the leaders in PaaS based git workflow deployment in the Ruby & Ruby on Rails Community. They also, however, support tons of other technologies so don’t think they’re limited to just Ruby & Rails.

AppHarbor : The .NET Framework, often thought to be left completely out in the cold when it comes to serious cloud computing git based agile work flows, finally got included with AppHarbor! With the release of AppHarbor the trifecta of IaaS and a solid PaaS offering were finally available for the .NET stack.

Windows Azure : Windows Azure is Microsoft’s official cloud service, which supports a host of capabilities centered around a mostly PaaS based service. Windows Azure has however spread into SaaS and IaaS also. Some of the frameworks and tools they support include Ruby on Rails, Java, PHP, .NET (of course), node.js, Hadoop and others.

CloudFoundry : Cloud Foundry is an open source PaaS Solution that serves to link up various back end and front end architectures. Currently it is supported by a host of companies including VMWare, AppFog, and others.

Putting the Pieces Together

That's where we stand in the industry today. We have all the pieces and they need to fit together to create something great, something awesome, something truly remarkable. I fully intend to create part of the future, will I see you there? I'd hope so!

High Availability From Non-High Availability: OpenStack, Dell, Crowbar, Private Clouds, and Moving the Enterprise Forward…

The Environment

Recently a conversation came up about high availability in a traditional Enterprise environment. Let me paint the picture for this environment:

“This environment has several hundred servers and several hundred applications. These applications range from simple client-server applications to n-tier applications strung across multiple services and machines. Some are resilient, some are not so resilient. These applications have administration needs that range from being rebooted on a daily basis to not being touched for months at a time. Needless to say, the range of applications is vast.

In addition to all these applications the data center had a mix of hardware concerns that directly affected how applications were built.”

With that basic idea, one can imagine that planning for high availability is by no means a simple thing. However there are opportunities now available, that Enterprises have never had previously. In the past an Enterprise would usually have some big heavy hitter come in, such as EMC. The Enterprise would then pay them hundreds of thousands or even millions of dollars to do an analysis. Then the Enterprise would probably fork over another couple hundred grand here and there. This would happen time and time again, until some level of high availability would be achieved.

Well, to put it simply, a lot of that effort is unneeded today. The effort that is needed, with the right team, is in the hands of the Enterprise itself. Some people who know me would immediately think I'm about to say "just set up an account at AWS and build to the cloud…" which is obviously the easiest, most secure, and most progressive route to go. But no, I'm going to step in with some other solutions that can be provided on-premise. I'll elaborate at a later time on the reasons behind this.

I’m going to now step through some key technologies available today. These can be used to provide high availability from the software architecture points of view. In your enterprise, if you have off-shored, outsourced, or otherwise attained your Enterprise Software, these functionalities and capabilities will be up to the creating provider. You’ll have to go to them for further information on how to change or adapt the architecture.

Software Architecture

For in-house software here are some APIs, SDKs, and tools to help attain the much sought after high availability (always aiming for that mythical five 9s).

Dell’s Cloud Solutions

So without significant research time, the Dell Solutions can be thoroughly confusing at first glance. They don’t offer anything related to actual “cloud services” such as AWS, Windows Azure, or Rackspace. What they’re simply offering is hardware to build out resilient data centers and contributing actively to open source software solutions.

The Dell Cloud Edge Software is available on Github at dellcloudedge. The best places to start researching what is available are on two key blogs; JBGeorge Tech Blog and Rob Hirschfeld’s Blog.

Another key part of the Dell Solution is Crowbar. Dell open sourced Crowbar at the 2011 OSCON Conference. Even though most of the sample configurations revolve around Dell Poweredge Servers and Rackspace Cloud Builder Solutions, the software is available for use on systems that are completely unrelated to Rackspace or Dell Solutions. Crowbar, simply put, is the software used to get servers up and running. As quoted in the Dell announcement released during OSCON,

“Bringing up a cloud can be no mean feat, as a result a couple of our guys began working on a software framework that could be used to quickly (typically before coffee break!) bring up a multi-node OpenStack cloud on bare metal. That framework became Crowbar. What Crowbar does is manage the OpenStack deployment from the initial server boot to the configuration of the primary OpenStack components, allowing users to complete bare metal deployment of multi-node OpenStack clouds in a matter of hours (or even minutes) instead of days.”

That quote now brings up the next piece of software, OpenStack. When building out a data center it is a solid idea to begin by building a platform on which things will operate. OpenStack enables just that. There are two major elements of OpenStack that are key: OpenStack Compute and OpenStack Storage. This is where the architectural paradigm begins to change dramatically for traditional software. This is also where there will be a major sticking point for traditional Enterprise Software that relies primarily on a database on a server, with a web server on a server, and maybe some middleware or a service bus on another server. The massive problem is that applications need to focus on horizontal scalability, with compute and storage being the two key elements.

In many enterprises this is unfortunate, because a safe estimate would be that 95% or more of enterprise applications don't scale horizontally, or scale at all. If you're an SOA shop, you're much farther along than most. Most enterprises simply rely on the traditional vertical stack. This is a major problem. So how do we bridge this gap between the compute-plus-storage architectural design goal and traditional architecture? That's where the following software comes to the rescue.

Windows Server AppFabric

[Image: Windows Azure AppFabric (Click to visit the MS Azure AppFabric Site)]

(Not to be confused with Windows Azure AppFabric; for the differences review this article)

[Image: Windows Server AppFabric (Click for the MS Site)]

Windows Server AppFabric has several capabilities that help an enterprise application leap forward into the modern era of horizontal scalability, with a clearer way to focus on compute and storage. The feature set of AppFabric includes these key functions that enable this leap forward (more information is available in this article):

  • Workflow Instance Management
  • Scaling Out Distributed Applications

These by no means are the only features of AppFabric. For a thorough description of scenarios and applications around AppFabric check out Introducing Windows Server AppFabric.

In Summary

Where does this leave Enterprise environments? The simple answer is, "A really long way away from achieving the scalability, cost savings, integrity, agility, and capabilities of public cloud computing." You can quote me on that. The ability to achieve the data integrity and uptime needed to perform standard business is already here for Enterprises, but going beyond that to extend hours of operation, achieve five 9s of uptime, and decrease costs in a dramatic way is generally cost prohibitive in private cloud infrastructure and especially in traditional data center operations. The fact is, things will still go down. Applications are a long way from being resilient, idempotent, or designed with an architecture that embraces the public cloud "design for failure" concept.

So what to do about this? The best thing for an Enterprise Application Environment is simply to start building applications with horizontal scalability in mind. Build with the concept of systems being nodes, with idempotent messaging and clear, redundant message queues, thinking (even while limited by traditional data centers and limited virtualization technologies) in a resilient architectural style instead of the traditional vertical mindset.

These tools I’ve outlined can help your Enterprise move forward in a traditional data center environment, a private cloud infrastructure environment, and be prepared for public cloud scale and capabilities.