Red Hat, OpenShift PaaS and Cartridges for Riak

Today I participated in the OpenShift Community Day here in Portland at the Doubletree Hotel. One of the things I wanted to research was the possibility of putting together an OpenShift Origin Cartridge for Riak. As with most PaaS systems this isn’t the most straightforward process. The reason is simple: OpenShift and CloudFoundry have a deployment model based around certain conventions that don’t fit with the multi-node deployment of a distributed database. But there are ways around this, and my intent was to come up with a plan for a Cartridge that implements these work-arounds.

After reading the “New OpenShift Cartridge Format – Part 1” by Mike McGrath @Michael_Mcgrath I set out to get a Red Hat Enterprise Linux image up and running. The quickest route to that was to spool up an AWS EC2 instance. 30 seconds later I had exactly that up and running. The next goal was to get Riak installed and running on this instance. I wasn’t going to actually build a cluster right off, but I wanted at least a single running Riak node to use for trying this out.

In the article “New OpenShift Cartridge Format – Part 1” Mike skips the specifics of the cartridge and focuses on getting a service up and running that will be turned into a Cartridge. As Mike writes,

What do we really need to do to create a new cartridge? Step one is to pick something to create a cartridge for.

…to which my answer is, “alright, creating a Cartridge for Riak!” 😉

However, even though I have the RHEL instance up and running already, with Riak installed, I decided I’d follow along with his exact example too. So I dove in with

[sourcecode language="bash"]
sudo yum install httpd
[/sourcecode]

to install Apache. With that done I now have Riak & Apache installed on the RHEL EC2 instance. The goal with both of these services is to get them running as the regular local Unix user in a home directory.
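Before wiring either service into a cartridge, a quick sanity check helps confirm both binaries actually landed on the PATH. This loop is just an illustrative sketch I’m adding here; `riak start` and `riak ping` are the standard Riak CLI commands for booting and checking a node.

```shell
# Sketch: confirm the riak and httpd binaries are installed and on the PATH.
status=""
for svc in riak httpd; do
  if command -v "$svc" >/dev/null 2>&1; then
    status="$status $svc=present"
  else
    status="$status $svc=missing"
  fi
done
echo "install check:$status"
# With riak present, "riak start" followed by "riak ping" (which answers
# "pong") verifies the single node is actually up and responding.
```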

With both Riak and Apache installed, it’s time to create a local user directory for each of the respective tools. However, before that, with this version of Linux on AWS we’ll need to create a local user account.

[sourcecode language="bash"]
useradd -c "Adron Hall" adron
passwd adron

Changing password for user adron.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[/sourcecode]

Next I switched to the user I created (‘su adron’) and created directories in the home path for attempting to get Apache and Riak up and running locally. I reviewed the rest of the steps for making the Cartridge with Apache, then immediately started running into a few issues getting Riak set up the way I need it in order to build a cartridge around it. At least, with my first idea of how I should build a cartridge.

At this point I decided we needed to have a conversation around the strategy here, so I grabbed Bill Decoste, Ryan, and some of the other Red Hat team on hand today. After a discussion with Bill it sounds like there are some possibilities for getting Riak running via OpenShift Origin Cartridges.

The Strategy

The plan now is to set up a cartridge that launches a single Riak instance. That instance can then, with post-launch scripts, join itself to the overall Riak cluster. The routing can be done via the internal routing and some other capabilities that are inherent to OpenShift itself. It sounds like it’ll take a little more tweaking, but the possibility is there for the near future.
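As a rough sketch of that post-launch idea (the seed node name here is hypothetical; `riak-admin cluster join`, `plan`, and `commit` are the standard Riak clustering commands), the cartridge hook might look something like this:

```shell
# Hypothetical post-launch hook the cartridge could run after boot.
# The seed node address is made up for illustration.
cat > join_cluster.sh <<'EOF'
#!/bin/sh
# Join this freshly launched node to the existing cluster via a seed node.
riak-admin cluster join riak@seed-node.example.com
# Review the staged membership change, then apply it.
riak-admin cluster plan
riak-admin cluster commit
EOF
chmod +x join_cluster.sh
```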

At this point I sat down and read up on the Cartridge a little more before taking off for the day. Overall a good start and interesting to get an overview of the latest around OpenShift.

Thanks to the Red Hat Team, have a great time at the OpenStack Conference, and I’ll be hacking on this Cartridge strategy!


Using Bosh to Bootstrap Cloud Foundry via Stark & Wayne Consulting

I finally sat down and really started to take a stab at Cloud Foundry Bosh. Here’s the quick lowdown on installing the necessary bits and getting an initial environment built. Big thanks out to Dr Nic @drnic, Luke Bakken & Brian McClain @brianmmcclain for initial pointers to where the good content is. With their guidance and help I’ve put together this how-to. Enjoy… boshing.


Step: Get an instance/machine up and running.

To make sure I had a totally clean starting point I started out with an AWS EC2 instance to work from. I chose a micro instance loaded with Ubuntu. You can use your local workstation if you want, it really doesn’t matter. The one catch, of course, is that you’ll need a supported *nix based operating system.

Step: Get things updated for Ubuntu.

[sourcecode language="bash"]
sudo apt-get update
[/sourcecode]

Step: Get cURL to make life easy.

[sourcecode language="bash"]
sudo apt-get install curl
[/sourcecode]

Step: Get Ruby, in a proper way.

[sourcecode language="bash"]
\curl -L https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm autolibs enable
rvm requirements
[/sourcecode]

Enabling autolibs sets things up so that rvm will install all the requirements with the ‘rvm requirements’ command. It used to just show you what you needed; then you’d have to go through and install them yourself. This requirements phase includes some specifics, such as git, gcc, sqlite, and other tools needed to build, execute, and work with Ruby via rvm. Really helpful things overall, which will come in handy later when using this instance for whatever purposes.

Finish up the Ruby install and set it as our default ruby to use.

[sourcecode language="bash"]
rvm install 1.9.3
rvm use 1.9.3 --default
rvm rubygems current
[/sourcecode]

Step: Get bosh-bootstrap.

bosh-bootstrap is the easiest way to get started with a sample bosh deployment. For more information check out Dr Nic’s Stark and Wayne repo on Github. (also check out the Cloud Foundry Bosh repo.)

[sourcecode language="bash"]
gem install bosh-bootstrap
gem update --system
[/sourcecode]

Git was installed a little earlier in the process, so now set the default user name and email so that when we use bosh it knows what to use for the repositories it clones.

[sourcecode language="bash"]
git config --global user.name "Adron Hall"
git config --global user.email plzdont@spamme.bro
[/sourcecode]

Step: Launch a bosh deploy with the bootstrap.

[sourcecode language="bash"]
bosh-bootstrap deploy
[/sourcecode]

You’ll receive a prompt, and here’s what to hit to get a good first deploy.

Stage 1: I select AWS, simply as I’ve no OpenStack environment. One day maybe I can try out the other option. Until then I went with the tried and true AWS. Here you’ll need to enter your access & secret key from the AWS security settings for your AWS account.

For the region, I selected #7, which is us-west-2. That translates to the data center in Oregon. Why did I select Oregon? Because I live in Portland and that data center is about 50 miles away. Otherwise it doesn’t matter which region you select; any region can spool up almost any type of bosh environment.

Stage 2: In this stage, select default by hitting enter. This will choose the default bosh settings. The default uses a medium instance to spool up a good default Cloud Foundry environment. It also sets up a security group specifically for Cloud Foundry.

Stage 3: At this point you’ll be prompted to select what to do, choose to create an inception virtual machine. After a while, sometimes a few minutes, sometimes an hour or two – depending on internal and external connections – you should receive the “Stage 6: Setup bosh” results.

Stage 6: Setup bosh

[sourcecode language="bash"]
setup bosh user
uploading /tmp/remote_script_setup_bosh_user to Inception VM
Initially targeting micro-bosh...
Target set to `microbosh-aws-us-west-2'
Creating initial user adron...
Logged in as `admin'
User `adron' has been created
Login as adron...
Logged in as `adron'
Successfully setup bosh user
cleanup permissions
uploading /tmp/remote_script_cleanup_permissions to Inception VM
Successfully cleanup permissions
Locally targeting and login to new BOSH...
bosh -u adron -p cheesewhiz target
Target set to `microbosh-aws-us-west-2'
bosh login adron cheesewhiz
Logged in as `adron'
Confirming: You are now targeting and logged in to your BOSH
[/sourcecode]


If you look in your AWS Console you should also see an instance with a key pair named “inception” and another under the “microbosh-aws-us-west-2” name. The inception instance is an m1.small while the microbosh instance is an m1.medium.
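If you’d rather double-check from a terminal than the console, here’s a sketch assuming the AWS CLI is installed and configured (bosh-bootstrap itself doesn’t need this; the filter values are the key pair names above):

```shell
# Sketch: list the instances bosh-bootstrap created, filtered by the
# key pair names it used. Requires a configured AWS CLI; otherwise we
# just print the command that would be run.
list_cmd='aws ec2 describe-instances --filters "Name=key-name,Values=inception,microbosh-aws-us-west-2"'
if command -v aws >/dev/null 2>&1; then
  eval "$list_cmd"
else
  echo "aws CLI not found; would run: $list_cmd"
fi
```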

That should get you going with bosh. In my next entry around bosh I’ll dive into some of Dr Nic & Brian McClain’s work before digging into what exactly Bosh actually is. As one may expect from Stark & Wayne, there’s some pretty cool stuff coming, so keep an eye over there.

Spotlight on HP Open Source

While at OSCON 2011 I spoke to Phil Robb, Bryan Gartner, and Terri Molini of HP. Phil heads up the Open Source Program Office for HP, which is what we spoke about.

Context and Clarity: I knew HP was involved in cloud computing to some degree, knew they make tons of devices, hardware, and printers, and knew they are involved in open source. Beyond that I did not know too much about any particular aspect of HP, nor have I ever worked for them. So if I swoon in response to any of their products or open source efforts, don’t think I’m just being a shill, because if you know me, you know better! With that, let’s hit on this discussion and exploration of HP.

The first BIG thing that HP announced, which we all learned about at once via OSCON, is HP’s signing up to support the OpenStack Project! This is pretty big news, as OpenStack is a big deal for the future of cloud computing development, focusing on enabling a company rather than locking them into a single provider. For those that don’t know much about OpenStack, I’ll be publishing a Spotlight on OpenStack in the near future!

Cloud Computing, Not Just OpenStack

During our conversation, one of the things I really wanted to know about was HP’s efforts around cloud computing without any specific focus. I wanted to know where they are headed, what their plans are, and how they’re currently involved. Of course many of those questions can be answered just by looking at HP’s signing on with OpenStack! Me being the curious type, I wanted more though.

Phil laid out the focus for me with a great quote, “Open Source & Mobile is exceedingly important, and we’re right there with cloud technologies as well.” As our conversation progressed it became evident that HP has many current inroads into cloud computing. Some of those include Linux (of course, that’s a no brainer! :)), the LinuxCOE, and other deployment and management software.

Talking to Phil, Bryan, and some other HP devs and evangelists, we discussed the various approaches HP is taking to get people “cloud enabled”. Their approach is open, as one might expect, and encompasses a wide breadth of capabilities. One of the approaches they have is the distribution of virtual images, regardless of your virtualization software. They’ve worked to provide additional ways to expand and distribute images as necessary.

Web OS, Webkit, and V8

HP also contributes or works with several technologies within the JavaScript Tech Stack including Node.js, V8, and Webkit. They also use these tools extensively in putting together solutions for WebOS or other tool stacks internally. I’m always stoked to hear about more companies and individuals stepping in and contributing even more to Node.js, V8, and that whole echelon of server side js technology.

Other tools, technologies, and efforts they’re actively contributing to in some way or another include jQuery, PhoneGap applications, and others. HP reviews several thousand projects monthly and makes decisions about getting involved or contributing in other ways.


HP is a major contributor to several major open source projects. They contribute and are involved actively, making a positive impact on the community and the projects themselves. HP’s ongoing efforts with cloud computing continue to grow, and with the recent boarding of the OpenStack train they’re in line to make some major steps into the cloud computing world. Overall, I’m impressed. To HP & the teams there: keep up the good work. You guys and gals are kicking ass!

OSCON: Talking Shop With HP, Heroku, ForgeRock, Open Source For America, and More!

Today and yesterday I specifically aimed to meet and interview a number of sponsors and companies attending OSCON. My big quest I’d assigned myself was to determine who was doing what, where, when, and why in the Open Source Community. Of course I wasn’t going to get to every company, but I was going to try. Here’s what I got accomplished:

Hewlett Packard (AKA HP)

The big news from the HP Crew, in addition to the zillion other open source efforts they have going on, is that they’ve signed on as a partner with OpenStack! So more great news for that effort and for bringing a standardized software stack to cloud computing! Getting HP signed on is one more big step toward this goal.

Even though I’ve mentioned HP first, I’m actually going to have a follow up dedicated entirely to HP’s efforts in open source software. Stay tuned for that this weekend!

Heroku's New Laptop Location!


Heroku was there handing out the swag, which won them the much coveted space on my laptop! I spoke with the team there, and there are rumblings of some great things: additional tooling stacks and other ideas. Keep an eye on Heroku; not too much to mention right now, but they have some awesome things coming in the near future.

ForgeRock, Simon Phipps, and Open Source for America

Hanging Out With the OSFA Crew (I'm the 2nd one from the right, ok, I'm actually the one on the right ;))

After speaking with HP I was introduced to the Open Source for America attendees. Open Source for America, or OSFA, is set up to advocate, educate, and encourage open source software use within government. They have the very important goal of educating political leaders and decision makers that open source, not closed source, is much better aligned with their mission of liberty, freedom, and return for the citizens of the United States. The ideas, the free market of software, and the parallels of knowledge transfer within this software industry more closely match the values intended within most civil representative governments. I totally agree with this group’s efforts!

Simon Phipps

While talking to the OSFA team I was also introduced to Simon Phipps, who writes for Computer World UK, tweets as @webmink, blogs as webmink, and works as CSO (Chief Strategy Officer) at ForgeRock; for full creds check out his LinkedIn profile. As he identifies himself: “Software freedom activist, transparency activist, blogger, photographer, writer”. I only spoke with Simon for a few minutes, but we covered some good ground, and I must say Simon is one interesting character and a good person to know!

ForgeRock, being a company I’ll admit I knew nothing about until Simon told me about them, is doing some absolutely great work. Their lines include:

  • OpenAM – OpenAM is the market-leading open source Authentication, Authorization, Entitlement and Federation product. ForgeRock provides the community with a new home for Sun Microsystems’ OpenSSO product.
  • OpenDJ – OpenDJ is a new LDAPv3-compliant directory service, developed for the Java platform, providing a high performance, highly available and secure store for the identities managed by enterprises.
  • OpenIDM – OpenIDM is an open standards based Identity Management, Provisioning and Compliance solution.

Stay tuned for further write-ups regarding these companies and other information related to OSCON 2011.

OSCON: The Web, It’s HUGE! Cloud Computing More Realistically…

It is day 3 of OSCON, following OSCON Data & Java, and the kickoff to the main keynotes and core conference. There are a few repeating topics throughout the conference:

The Web, It’s Still HUGE! Imagine that!

HTML5, CSS3, JavaScript/jQuery/Node.js – this is starting to look like it will be the development stack of the web. If you use ASP.NET MVC, Ruby on Rails, PHP, Java, or some other web stack, these core technologies are here to augment or, in some cases, completely replace traditional web stacks.

Node.js can replace web servers in some situations where core APIs or other fundamentally simple services are needed. In addition, the Node server will eventually, I have no doubt, be able to completely replace traditional web servers like Apache, Tomcat, or IIS for almost any web site. Beyond web sites, Node provides a very valuable engine for developing and testing hardcore JavaScript, building reusable libraries, and other server oriented needs. The other huge boost for Node.js is the ability for a dev shop to centralize development around a single language, something Java and .NET have tried in the past yet failed to ever achieve. The big irony is JavaScript never started out with this intent, but here it is!
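To make that concrete, here’s a hedged sketch of the kind of tiny service Node.js excels at (the filename and port are arbitrary choices of mine, written in era-appropriate JavaScript):

```shell
# Write out a minimal Node.js HTTP service; run it with "node hello.js"
# and hit it with "curl localhost:8000" (port picked arbitrarily).
cat > hello.js <<'EOF'
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
}).listen(8000);
EOF
```

No Apache, no Tomcat, no IIS; a handful of lines and a single process serving HTTP, which is exactly the appeal for simple API backends.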

In addition to Node.js making inroads into server environments worldwide, JavaScript in general is starting to be used for all sorts of tools, stacks, and frameworks outside of just the browser. It can be used to submit a request against Hadoop, and it can create a way to access and manipulate CouchDB, MongoDB, and other databases. JavaScript is becoming the one language to rule them all (please excuse my Tolkienism 😉).

Cloud Computing or More Realistically, “Distributed, Geographically Dispersed, Highly Available, Resilient, Compute and Storage Segmented Functionality, and not to forget, Business Agility Oriented Utility Computing”.

Long enough title? There are numerous open source cloud platforms and infrastructure offerings available. At OSCON there were discussions and multiple sessions about OpenStack, the Open Cloud Initiative, Stratos, and other open software solutions for cloud computing. This is great news for developers working with cloud computing technologies, especially for ongoing efforts and pushes to gain adoption of cloud computing within the enterprise.

Companies will continue to push their own proprietary capabilities and features, but it would behoove the industry to standardize on an open platform such as OpenStack. Currently most major cloud/utility computing providers, such as Amazon Web Services and Windows Azure, lock a company into their specific APIs, SDKs, and custom ways of doing things. A savvy development team can prevent that, but if the core feature sets around compute, storage, and the rest were standardized, this lock-in issue could be resolved.

Half Way Mark, Check

So far the conference has provided lots of insight into the open source community. Announcements have been made that keep the open source community moving forward into the future. With that, some of the things to look forward to:

  • I’ll have some in depth coverage of products, product releases, and services for some of the top open source companies.
  • I will hopefully win a Github T-shirt, to go along with my score of t-shirts for Heroku and others that I’ve received!
  • I’ll dig into some of the bleeding edge technologies around cloud computing including the likes of DotCloud!
So stay tuned, I’ll be back with the action packed details shortly. Cheers!