Vagrant for VMware Fusion with plugin issues… Part 2

After the previous blog entry, in which I worked through getting Vagrant to spool up a VMware image, I got a few other suggestions via Twitter.

One was to delete the hidden .vagrant directory. With that quick delete done, I gave it another shot with the provider flag.

[sourcecode language="bash"]
$ rm -rf .vagrant/
$ vagrant up --provider=vmware_fusion
Bringing machine 'default' up with 'vmware_fusion' provider...
[default] VMware requires root privileges to make many of the changes
necessary for Vagrant to control it. In a moment, Vagrant will ask for
your administrator password in order to install a helper that will have
permissions to make these changes. Note that Vagrant itself continues
to run without administrative privileges.
Password:
[default] Box 'bosh-solo-0.6.4' was not found. Fetching box from specified URL for
the provider 'vmware_fusion'. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading or copying the box...
[/sourcecode]

This seemed to work. The download of the helpers and such started, and I waited patiently.

A Few Thoughts…

Needing to delete a hidden directory struck me as one of those completely arbitrary and random solutions. It worked, which is awesome, but it working is completely counterintuitive. I had previously done a ‘destroy’ along with a number of other steps that were similarly unintuitive. In the end the steps were fine, I had to ask for help, and I got help really fast. That’s awesome, but needing to go through those steps was unfortunate, and it ties back around to @jeffsussna’s tweet earlier.
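For reference, the full reset sequence, pieced together from those suggestions, looks roughly like this (a sketch; the -f flag on destroy is my addition to skip the confirmation prompt):

```shell
# Tear down any half-created machine, clear Vagrant's cached project
# state, then retry with the VMware Fusion provider. Guarded so the
# snippet is harmless on a machine without Vagrant installed.
if command -v vagrant >/dev/null 2>&1; then
  vagrant destroy -f
fi
rm -rf .vagrant
if command -v vagrant >/dev/null 2>&1; then
  vagrant up --provider=vmware_fusion
fi
```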

Anyway, it then went through a 40 minute download of an image (??), finished, and displayed…

[sourcecode language="bash"]
$ vagrant up --provider=vmware_fusion
Bringing machine 'default' up with 'vmware_fusion' provider...
[default] VMware requires root privileges to make many of the changes
necessary for Vagrant to control it. In a moment, Vagrant will ask for
your administrator password in order to install a helper that will have
permissions to make these changes. Note that Vagrant itself continues
to run without administrative privileges.
Password:
[default] Box 'bosh-solo-0.6.4' was not found. Fetching box from specified URL for
the provider 'vmware_fusion'. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading or copying the box...
Extracting box... (Rate: 119k/s, Estimated time remaining: --:--:--)
The box you attempted to add doesn't match the provider you specified.

Provider expected: vmware_fusion
Provider of box: virtualbox
[/sourcecode]

… because, according to Vagrant, the box isn’t available for VMware. So for now, with some solutions and more questions, I’m just going to go with the VirtualBox solution and get back on track with the larger blog entry I’m writing. Thanks to @brianmmclain, @mitchelh, @jeffsussna and @thoward37.
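The fallback itself is just the same up with the provider pinned (assuming the same Vagrantfile; destroying first clears the half-created machine):

```shell
# Fall back to the VirtualBox provider for the same Vagrantfile.
# Guarded so the snippet is safe to run without Vagrant installed.
if command -v vagrant >/dev/null 2>&1; then
  vagrant destroy -f
  vagrant up --provider=virtualbox
fi
```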

Vagrant for VMware Fusion with plugin issues…

I’ve started using Vagrant pretty regularly. I downloaded VirtualBox and have been tinkering away with some of the vagrant packages. The one huge bummer was that I was under the delusion that Vagrant only worked with VirtualBox. Then a fellow coder told me I needed to check out the VMware Fusion plugin. I was immediately stoked! Simply, have you…

[sourcecode language="bash"]
$ vagrant plugin install vagrant-vmware-fusion
Installing the 'vagrant-vmware-fusion' plugin. This can take a few minutes...
Installed the plugin 'vagrant-vmware-fusion (0.6.1)'!
$ vagrant plugin license vagrant-vmware-fusion license.lic
Installing license for 'vagrant-vmware-fusion'...
The license for 'vagrant-vmware-fusion' was successfully installed!
$
[/sourcecode]

Now mind you, you’ll need to go out and buy the VMware Fusion plugin from HashiCorp. From my perspective the purchase was worth it just for the stability improvements of VMware Fusion.

Vagrant & Riak

For my first example of the new plugin I forked and then cloned the Bosh Riak Repository. Once that was cloned I simply opened a terminal and navigated to the path of the repository and tried…

[sourcecode language="bash"]
vagrant up
[/sourcecode]

…and immediately got the message, "Failed to load the 'vagrant-vmware-fusion' plugin. View logs for more details." Noooooooooooooooez! 🙁 I was sad, but I dove into the logs by executing vagrant up with…

[sourcecode language="bash"]
VAGRANT_LOG=INFO vagrant up
[/sourcecode]

…and there amid the log was the error: something about the VMware plugin requiring the latest version of Vagrant.
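A quick way to confirm what’s actually installed before retrying (the version numbers in the comments are the ones reported later in this post):

```shell
# Check the Vagrant binary version and the installed plugins.
# Guarded so the snippet is harmless on machines without Vagrant.
if command -v vagrant >/dev/null 2>&1; then
  vagrant -v            # e.g. Vagrant version 1.2.2
  vagrant plugin list   # should list vagrant-vmware-fusion (0.6.1)
fi
```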

Doh! I’d forgotten to upgrade first. I installed the upgrade via the downloads for 1.2.2. Once I upgraded I ran into the same error. A quick ‘vagrant -v’ to check the version showed 1.2.2 was installed. Not knowing the special secret sauce at this point, I figured I’d just reboot. It had been about 20-30 days since I had, so who knew what weird service needed to be restarted. I guessed correctly, and after a restart vagrant kicked off the download of the image for the Bosh Riak deploy. It went by fast, and since it didn’t spit out an error about not loading the plugin I thought everything had worked…

[sourcecode language="bash"]
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.

...more stuff here...
[/sourcecode]

…or so I had thought. Scrolling back up through the log I realized Vagrant had NOT used the plugin for VMware Fusion. I was still stuck with VirtualBox. I went through the plugin install again to see if it just needed to be re-applied. At this point, since I’d already installed VirtualBox previously, I figured maybe I’d just keep plunging forward and mess with the VMware plugin later. However, I’d REALLY like to have all my virtualized images running via Fusion. Not sure what I’d missed, I decided to give it one more try…

[sourcecode language="bash"]
$ vagrant plugin install vagrant-vmware-fusion
Installing the 'vagrant-vmware-fusion' plugin. This can take a few minutes...
Installed the plugin 'vagrant-vmware-fusion (0.6.1)'!
Adrons-MacBook-Air-3:Downloads adronhall$ vagrant plugin license vagrant-vmware-fusion license.lic
Installing license for 'vagrant-vmware-fusion'...
The license for 'vagrant-vmware-fusion' was successfully installed!
$ cd ~/Codez/riak-release/
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[/sourcecode]

Ugh… so I almost gave up at this point. I read this, and it seemed ironic.

Since this isn’t the best user experience, I stumbled forward and tried something else, on a suggestion from a fellow coder (thx Brian).

[sourcecode language="bash"]
$ vagrant up --provider=vmware_fusion
An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.

Machine name: default
Active provider: virtualbox
Requested provider: vmware_fusion
$
[/sourcecode]

Well, that didn’t work either. It looked like maybe if I just blew away my VirtualBox images it would work? Funny thing, though: I don’t have any VirtualBox images running or installed. Just to make sure, I checked the VirtualBox directory. Nothing. I went ahead and deleted the entire VirtualBox application. Tried again.

[sourcecode language="bash"]
$ vagrant up --provider=vmware_fusion
An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.

Machine name: default
Active provider: virtualbox
Requested provider: vmware_fusion
$
[/sourcecode]

Ok, I’ve no idea now. Any help would be greatly appreciated! This same thing works fine under VirtualBox, but with the plugin added and VirtualBox removed I’m still not able to get it to work. Help!

Red Hat, OpenShift PaaS and Cartridges for Riak

Today I participated in the OpenShift Community Day here in Portland at the Doubletree Hotel. One of the things I wanted to research was the possibility of putting together an OpenShift Origin cartridge for Riak. As with most PaaS systems, this isn’t the most straightforward process. The reason is simple: OpenShift and Cloud Foundry have deployment models based around certain conventions that don’t fit the multi-node deployment of a distributed database. But there are ways around this, and my intent was to create, or at least come up with a plan for, a cartridge that implements those workarounds.

After reading “New OpenShift Cartridge Format – Part 1” by Mike McGrath @Michael_Mcgrath, I set out to get a Red Hat Enterprise Linux image up and running. The quickest route to that was to spool up an AWS EC2 instance. 30 seconds later I had exactly that. The next goal was to get Riak installed and running on the instance. I wasn’t going to build a full cluster right off, but I wanted at least a single running Riak node to use for trying this out.
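The Riak install itself was just the standard package route (a sketch; I’m assuming the Basho RHEL package source has already been set up, so check Basho’s docs for the exact repository or RPM to use):

```shell
# Install Riak via yum. Guarded so this only runs on a Red Hat
# style system where yum is available.
if [ -f /etc/redhat-release ] && command -v yum >/dev/null 2>&1; then
  sudo yum -y install riak
fi
```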

In the article “New OpenShift Cartridge Format – Part 1” Mike skips the specifics of the cartridge and focuses on getting a service up and running that will be turned into a Cartridge. As Mike writes,

What do we really need to do to create an new cartridge? Step one is to pick something to create a cartridge for.

…to which my answer is, “alright, creating a Cartridge for Riak!”  😉

However, even though I already had the RHEL instance up and running with Riak installed, I decided I’d follow along with his exact example too. So I dove in with

[sourcecode language="bash"]
sudo yum install httpd
[/sourcecode]

to install Apache. With that done I now had Riak & Apache installed on the RHEL EC2 instance. The goal with both of these services is to get them running as a regular local Unix user in a home directory.

With both Riak and Apache installed, it was time to create a local user directory for each of the respective tools. Before that, though, with this version of Linux on AWS we need to create a local user account.

[sourcecode language="bash"]
$ useradd -c "Adron Hall" adron
$ passwd adron
Changing password for user adron.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[/sourcecode]

Next I switched to the user I created (‘su adron’) and created directories in the home path for getting Apache and Riak up and running locally. I reviewed the rest of the steps in making the cartridge w/ Apache and then immediately started running into a few issues getting Riak set up just the way I needed to be able to build a cartridge around it. At least, with my first idea of how I should build a cartridge.
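The layout I was aiming for was roughly this (the directory names are my own choice for the experiment, not an OpenShift convention):

```shell
# Give each service its own directory under the local user's home so
# both can run without root privileges.
mkdir -p "$HOME/apache/conf" "$HOME/apache/logs"
mkdir -p "$HOME/riak/data" "$HOME/riak/log"
ls -d "$HOME/apache" "$HOME/riak"
```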

At this point I decided we needed to have a conversation around strategy, so I grabbed Bill Decoste, Ryan and some of the other Red Hat team on hand today. After a discussion with Bill it sounds like there are some possibilities for getting Riak running via OpenShift Origin cartridges.

The Strategy

The plan now is to set up a cartridge that launches a single Riak instance. That instance, with post-launch scripts, can then join itself to the overall Riak cluster. The routing can be done via the internal routing and some other capabilities inherent to OpenShift itself. It sounds like it’ll take a little more tweaking, but the possibility is there for the near future.
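As a rough sketch, the post-launch piece might boil down to the standard cluster-join commands (the SEED_NODE variable is a placeholder of my own for whatever OpenShift’s internal routing would hand the instance):

```shell
# Hypothetical post-launch hook: join this instance's Riak node to
# the existing cluster, then plan and commit the membership change.
# Guarded so the sketch is a no-op where riak-admin isn't installed.
SEED_NODE="${SEED_NODE:-riak@seed.internal}"
if command -v riak-admin >/dev/null 2>&1; then
  riak-admin cluster join "$SEED_NODE"
  riak-admin cluster plan
  riak-admin cluster commit
fi
```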

At this point I sat down and read up on the Cartridge a little more before taking off for the day. Overall a good start and interesting to get an overview of the latest around OpenShift.

Thanks to the Red Hat Team, have a great time at the OpenStack Conference and I’ll be hacking on this Cartridge strategy!

Write the Docs, Proper Portland Brew, Hack n’ Bike and Polyglot Conference 2013

I just wrapped up a long weekend of staycation. Write the Docs kicked off the week on Monday, and today, Tuesday, I’m getting back into the saddle.

Write the Docs

The Write the Docs conference this week, a two day affair, has kicked off an expanding community around documentation. The conference is about what documentation is and how we create it as technical writers, writers, coders and others in the field.

It’s also about how people interact and why documentation is needed in projects. This is one of the things I find interesting: it seems obvious, but it’s entirely not obvious, thanks to the battle between good documentation, bad documentation and a complete lack of documentation. The latter being the worst situation.

The Bloody War of Documentation!

At the conference it has been suggested that in the ideal scenario, documentation starts before any software is even built. I do and don’t agree with this, because I know we must avoid BDUF (Big Design Up Front). But we should take this idea of documentation first in the context of how we’re speaking about documentation at the conference. Just as tests & behaviors identified up front, before the actual implementation, are vital to solid, reliable, consistent, testable & high quality production software, good documentation is absolutely necessary.

There are some situations, the exceptions, such as with agencies that create throwaway software. I’m not, and I don’t think much of the conference is, talking about those types of systems. What we’ve been speaking about at the conference are the systems, or ecosystems, in which software is built, maintained and used for many years. We’re talking about the APIs that are built and then used by dozens, hundreds or thousands of people. Think of Facebook, Github and Twitter. All of these have APIs that thousands upon thousands use every day. They’re successful in large part, extremely so, because of stellar documentation. In the case of Facebook there’s some love and hate to go around, because they’ve gone back and forth between good documentation and bad documentation. However, whenever it has been reliable, developers have moved forward with these APIs and built billion dollar empires that employ hundreds of people and benefit thousands more beyond that.

As the developers speaking at the conference, the developers in the audience, and this developer too all tend to agree: build that README file before you build a single other thing within the project. Keep that README updated, keep it marked up and easy to read, and make sure people know what your intent is as best you can. Simply put, document!

You might also have snarkily asked: does Write the Docs have docs? Why yes, it does:

http://docs.writethedocs.org/ <- Give ’em a read, they’re solid docs.

Proper Portland Brew

Today, while using my iPhone to catch up on the news & events from my staycation, I took a photo. On that photo I used Stitch to put together some arrows: kind of a Proper Portland Brew (PPB) with documentation. (See what I did there?) It exemplifies a great way to start the day.

Every day I bike (or ride the train or bus) into downtown Portland, anywhere from 5-9 kilometers, and swing into Barista on 3rd. Barista is one of the finest coffee shops in Portland & the world. If you don’t believe me, drag your butt up here and check it out. Absolutely stellar baristas, the best coffee (Coava, Ritual, Sightglass, Stumptown & others), and pretty sweet digs to get going in the morning.

I’ll have more information soon on a new project I’ve kicked off. Right now it’s called Bike n’ Hack, which will be a scavenger-style code hacking & bicycle riding urban awesome game. If you’re interested in hearing more about the project, the game & how everything will work, be sure to contact me via Twitter @adron or jump into the Bike n’ Hack GitHub organization, and the team will be adding more information about who, what, where, when and why this project is going to be a blast!

Polyglot Conference & the Zombie Apocalypse

I’ll be teaching a tutorial, “Introduction to Distributed Databases”, at the Polyglot Conference in Vancouver in May! So it has begun & I’m here for you! Come and check out how to get a Riak deployment running in your survival bunker’s data center. Whether your apocalypse scenario features zombies or just a pointy-haired boss, we’ll discuss how consistent hashing, hinted handoff and gossiping can help your systems survive infestations! Here’s a basic outline of what I’ll cover…

Introducing Riak, a database designed to survive the Zombie Plague: Riak architecture & a 5 minute history of Riak & zombies.

Architecture deep dive:

  • Consistent Hashing, managing to track changes when your kill zone is littered with Zombies.
  • Intelligent Replication, managing your data against each of your bunkers.
  • Data Re-distribution, sometimes they overtake a bunker, how your data is re-distributed.
  • Short Erlang Introduction, a language fit for managing post-civil society.
  • Getting Erlang

Installing Riak on…

  • Ubuntu, RHEL & the Linux variety.
  • OS X, the only user-centered computers to survive the apocalypse.
  • From source, maintained and modernized for humanity’s survival.
  • Upgrading Riak, because when a bunker is retaken from the zombies it’s time to update your Riak.
  • Setting up

Devrel – A developer’s machine w/ Riak – how to manage without zombie bunkers.

  • 5 nodes, a basic cluster
  • Operating Riak
  • Starting, stopping, and restarting
  • Scaling up & out
  • Managing uptime & data integrity
  • Accessing & writing data
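The operating steps above, against a built devrel, look roughly like this (a sketch assuming the standard dev/dev1 through dev/dev5 layout and node naming from the Riak build docs of the era):

```shell
# Start five devrel nodes, then join dev2..dev5 to dev1 to form the
# basic cluster. Each step is guarded so the sketch is safe to run
# even outside a built devrel directory.
for n in 1 2 3 4 5; do
  if [ -x "dev/dev$n/bin/riak" ]; then
    "dev/dev$n/bin/riak" start
  fi
done
for n in 2 3 4 5; do
  if [ -x "dev/dev$n/bin/riak-admin" ]; then
    "dev/dev$n/bin/riak-admin" join "dev1@127.0.0.1"
  fi
done
```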

Polyglot client libraries

  • JavaScript/Node.js & Erlang for the zombie curing mad scientists.
  • C#/.NET & Java for the zombie creating corporations.
  • Others, for those just trying to survive the zombie apocalypse.

If you haven’t registered for the Polyglot Conference yet, get registered ASAP as it could sell out!

Some of the other tutorials that are happening, that I wish I could clone myself for…

That’s it for updates right now, more code & news later. Cheers!

Distributed Coding Prefunc: Rebar Multinode Riak Core

Before diving into this entry, you might want to check out some of my other entries on getting Erlang installed with appropriate testing frameworks. Moving on…

At Basho we’re always trying to make it easier to do big things. A short time ago we pushed forward on Rebar, Riak Core and getting things put together to make it simpler to get kick-started working on distributed systems like the Riak database itself. There’s way more that is possible, which I’ll get into in just a minute. Before diving into some of those things, here are a few quick links & some context on what exactly Rebar and Riak Core are.

Riak Core
Github: https://github.com/basho/riak_core

Riak Core has been available for quite some time, and we’ve been hustling for a while to put together a robust array of material around it. One excellent place to start learning about Riak Core is the “Introducing Riak Core” article published on the Basho blog a while back. Riak Core, or riak_core, is the underpinning that Riak is built on. It provides many features to get you started building distributed systems; a few of the key ones are being able to track and manage the nodes, clusters and related pieces of the distributed architecture within a system.

Rebar
Github: https://github.com/basho/rebar
Wiki: https://github.com/basho/rebar/wiki

Rebar is an Erlang build tool that helps you create, build and package Erlang projects, including projects based on Riak Core.

Rebar Riak Core
Github: https://github.com/basho/rebar_riak_core

The Rebar Riak Core repository provides project templates that help you start writing things like the Riak database itself. It sets up Riak Core via template scripting: an N node cluster devrel, vnodes, etc. Once you’re up and running, it can be used to help develop distributed, scalable and fault tolerant applications.

For more on Rebar Riak Core, check out the README.md in the GitHub repository. There are some great examples of how to get a multinode devrel running in a few steps.
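The short version of that README flow looks something like this (target names as I recall them; double-check against the repository’s README):

```shell
# From a generated project, build a multinode devrel and start each
# node. Guarded so the sketch is a no-op outside such a project;
# projectNameHere stands in for your generated node script name.
if [ -f Makefile ]; then
  make devrel
  for d in dev/dev*; do
    if [ -x "$d/bin/projectNameHere" ]; then
      "$d/bin/projectNameHere" start
    fi
  done
fi
```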

Rebar Riak Core Quick Start

The quickest way to get started with Riak Core and the Rebar scripts is to grab the prebuilt rebar binary, or you can clone and build Rebar yourself if you’d like all the things. To download the binary and make it executable, use wget (or your preferred download method).

[sourcecode language="bash"]
wget http://cloud.github.com/downloads/basho/rebar/rebar && chmod u+x rebar
[/sourcecode]

Or, to clone the repository and build it yourself:

[sourcecode language="bash"]
$ git clone git://github.com/rebar/rebar.git
$ cd rebar
$ ./bootstrap
[/sourcecode]

Now, the easiest way possible is to use the Riak Core templates, via a quick git clone. After cloning the repo, copy the templates to the rebar templates directory (note that you’ll need to create this directory initially), then create a working directory to put the project in and navigate into it.

[sourcecode language="bash"]
git clone git://github.com/rzezeski/rebar_riak_core.git
mkdir -p ~/.rebar/templates
cp rebar_riak_core/* ~/.rebar/templates
mkdir projectNameHere
cd projectNameHere
[/sourcecode]

Now that the template is available, run the following command to create the Erlang project.

[sourcecode language="bash"]
rebar create template=riak_core_multinode appid=rabbits nodeid=rabbits
[/sourcecode]

You’re now ready to go to work using Rebar and the project you’ve created from the template. I followed the try-try-try example repo for the steps above; check it out for a great walkthrough that dives deeper into Riak Core, each small element of the project and the files created, with a multi-node project as the sample.
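From there, the usual next step is to build a release and attach a console (a sketch; the rabbits name comes from the appid used above, and the targets are per the try-try-try walkthrough):

```shell
# Build the release and open an Erlang console on the generated node.
# Guarded so it only runs inside a generated project directory.
if [ -f Makefile ]; then
  make rel
  ./rel/rabbits/bin/rabbits console
fi
```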

So what to do now?

This is where it’s time to throw around some creativity and get real solutions to real problems. Building distributed systems is becoming more and more paramount to the effective use of infrastructure and systems. Riak Core is an ideal place to start building out your distributed system. Here are a few ideas the team was brainstorming on. Over the coming weeks we’ll be putting together material that outlines ways to not only get started, but to implement systems like these.

Distributed Web Caching Tier

Caching tiers often come up in conversations, whether related to distributed systems or not, and those conversations often land on the distributed topic. The question resounds: “how do I create a caching tier that can be distributed and provide real session and state management, cached elements, live data and other needs?” Riak Core is a great place to start developing a custom distributed caching tier, one that could even extend to Riak KV (the Riak database implemented on Riak Core), Redis, RabbitMQ or many other solutions, pulling them together to provide the appropriate cache at the appropriate tiers of an application architecture.

In House Cluster Monitoring & Smart Resolver

One thing Riak Core could be used for to great effect is a multi-node, clustered and geographically dispersed monitoring system for a multi-data-center application. This could be built out for almost any actual environment, whether with custom specifics or a completely generic situation of pizza-box servers. Because of the distributed concepts behind Riak Core, it provides an ideal basis for monitoring, and for re-launching or otherwise dealing with systems that need high uptime and need to be recovered as fast as possible when they go down.

Logging, Web, Server, and Business Analytics

In any situation where analytics are collected there are often dozens, if not thousands, of servers, various systems and even numerous devices emitting data via services or other mediums. Riak Core is a great place to lay the groundwork for a distributed system that maintains a massive store of managed data for fast searching of analytics. This could be the groundwork for biotech research analytics, analysis of market data, or a dozen other things that need highly available systems storing vast data with map reduction or other search capabilities. Think Business Intelligence (BI) with serious technological power.

Multi-node Project

Of course, dive into the try-try-try tutorial, the example I used to create the sample above, for some great multi-node how-to. If you have any questions, ping me on Twitter @adron or ping @basho, join the mailing list, or hop into the #riak IRC channel on freenode.