Conference Recap – The awe-inspiring quality & number of conferences in Cascadia!

RailsConf 2013 (April 29th – May 1st)

RailsConf 2013 kicked off for me with a short bike ride through town to the conference center. The Portland conference center is one of the most connected conference centers I’ve seen; light rail, streetcar, bus, bicycle boulevards, trails & of course pedestrian access are all available. I personally have no idea if you can drive to it, but I hear there is parking & such for drivers.

Streetcars

RailsConf, however, clearly places itself in the category of a conference of people that give a shit! This is evident in so many things across the community, from the inclusive nature that has created one of the most diverse groups of developers around, to the fact that they handed out 7-day transit passes when you picked up your RailsConf pass!

Bikes!

The keynote was by DHH (obviously, right?). He laid out where the Rails stack is, covered some roadmap topics & drew out how much the community has grown. Overall, Rails is now in a state of maintaining and growing the ideal. Considering its inclusive nature, I hope to see it continue to grow and to increase the options out there for people getting into software development.

Railsconf 2013

I also met a number of people while at the conference. One person I ran into again was Travis, who lives out yonder in Jacksonville, Florida and works with Hashrocket. Travis & I, besides the pure metal, have Jacksonville as a common stomping ground. I’d met him last year while the Hashrocket crew were in town. We discussed Portland, where to go and how to get there, plus what Hashrocket has been up to in regard to Mongo, other databases and how Ruby on Rails is treating them. The conclusion: all good on the dev front!

One of these days though, the Hashrocket crew is just gonna have to move to Portland. Sorry Jacksonville, we’ll visit one day. 😉

For the latter half of the conference I actually ducked out and headed down for some client discussions in the country of Southern California. Nathan Aschbacher headed up Basho’s attendance at the conference from that point on. Which reminds me, I’ve gotta get a sitrep with Nathan…

RICON East (May 13th & 14th)

RICON East

Ok, so I didn’t actually attend RICON East (sad face); I had far too many things to handle over here in Portlandia – but I watched over a third of the talks via the 1080p live stream. The basic idea of RICON is a conference series focused on distributed systems. Riak is of course a distributed database, falling squarely into that category, but RICON is by no means merely about Riak. The talks range from competing products to academic, heavy-hitting talks about how, where and why distributed systems are the future of computing. They touch on things you may be familiar with, such as:

  • PaaS (Platform as a Service)
  • Existing databases and how they may fit into the fabric of distributed systems (such as PostgreSQL)
  • How to scale distributed systems across AWS, Azure or other cloud providers
RICON East

As the videos are posted online I’ll be providing some blog entries around the talks. It will however be extremely difficult to choose the first to review; just as at RICON back in October of 2012, every single talk was far above the median!

Two talks immediately stand out. The first was Christopher Meiklejohn’s (@cmeik) talk, doing a bit o’ proofs and all, in real time, off the cuff. It was merely a 5-minute lightning talk, but holy shit, this guy can roll through and hand off intelligence via a talk so fast it blew my mind!

The other talk was Kyle’s (AKA @aphyr), who went through network partitions with databases, basically destroying any comfort you might have with your database being effective at getting reads in a partition event. Kyle knows his stuff, that is without doubt.

There are many others, so subscribe, keep reading, and I’ll be posting them in the coming weeks.

Node PDX 2013 (May 16th & 17th)

Horse_js and other characters, planning some JavaScript hacking!

Holy moley, we did it again! Thanks to EVERYBODY out there in the community for helping us pull together another kick-ass Node PDX event! That’s two years in a row now! My fellow cohorts Troy Howard (@thoward37) and Luc Perkins (@lucperkins) hustled like crazed worker bees to get everything together and ready – as always, a lot comes together at the last minute and we don’t get a wink of sleep until it’s all done and everybody has had a good time!

Node PDX Sticker Selection was WICKED COOL!

Node PDX, it’s pretty self-descriptive. It’s a Node.js conference that also includes topics on hardware, JavaScript on the client side and a host of other things. It’s also Portland specific. We have Portland local-roasted coffee (thanks Ristretto for the pour over & Coava for the custom roast!), Portland beer (thanks brew capital of the world!), Portland food (thanks Nicolas’!), Portland DJs (thanks Monika Mhz!), Portland bands and tons of Portland weirdness all over the place. It’s always a good time! We get the notion that Node PDX, with all the Portlandia spread all over it, is one of the reasons 8-12 people move to and get hired in Portland after this conference every year (it might become a larger range, as there are a few people planning to make the move in the coming months!).

A wide angle view of Holocene where Node PDX magic happened!

The talks this year increased in number but maintained a solid range of topics. We had a Node.js disco talk, client-side JavaScript, sensors and Node.js, and we even heard people’s personal stories of how they got into programming JavaScript. Excellent talks, and as with RICON, I’ll be posting a blog entry and adding a few penny thoughts of my own to each talk.

Polyglot Conference 2013 (May 24th Workshops, 25th Conference)

Tea & Chris kick off Polyglot Conference 2013!
A smiling crowd!

Polyglot Conference was held in Vancouver again this year, with clear intent to expand to Portland and Seattle in the coming year or two. I’m super stoked about this and will definitely be looking to help out – if you’re interested in helping let me know and I’ll get you in contact with the entire crew that’s been handling things so far!

Polyglot Conference itself is a yearly conference held as an open spaces event. The way open space conferences work is described well on Wikipedia, where it is referred to as Open Space Technology.

The crowds amass to order the chaos of tracks.

The biggest problem with this conference is that it’s technically only one day. I hope that we can extend it to two days for next year – and hopefully even have the Seattle and Portland branches go with an extended two-day itinerary.

A counting system…

This year the breakout sessions that I attended included “Dev Tools”, “How to Be a Better Programmer”, “Go (Language) Noises” and other great sessions, and I threw down a session of my own on “Distributed Systems”. Overall, great time and great sessions! I had a blast and am looking forward to next year.

By the way, I’m not sure if I mentioned this at the beginning of this blog entry, but this is only THE BEGINNING OF SUMMER IN CASCADIA! I’ll have more coverage of these events and others coming up; the roadmap includes OS Bridge (where I’m also speaking) and Portland’s notorious OSCON.

Until the next conference, keep hacking on that next bad ass piece of software, cheers!

Seattle Code Camp – A Summary

Wow… that was great.

I had three presentations. Well, honestly it was more like two presentations and a brainstorming session with about 3 dozen really smart people!

First there was the “Node.js Rulz! JavaScript takes over the full stack” session. This session went pretty well, and I hope I got a lot of developers riled up to give Node.js a try. I discussed the various testing, framework, and other libraries needed to get going with Node.js development. I also did a test deployment against Tier 3’s Web Fabric PaaS (Cloud Foundry powered AWESOME). If you want to try out this deployment model, a sandbox is available for free over at the Iron Foundry Project (.NET extensions for Cloud Foundry). Just sign up and we’ll get you added ASAP.

In summary, I went end to end with Node.js. Overall, it’s a beautiful thing that I highly suggest people give a thorough look at.
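If you weren’t there and want a taste, the sketch below is roughly the shape of app I like to start such a walkthrough with – a bare-bones Node.js HTTP service. It’s a hypothetical minimal example, not the actual session demo; the port fallback and the message are placeholders:

    // app.js - a bare-bones Node.js HTTP service (hypothetical example, not the session demo)
    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ message: 'Node.js rulz!' }));
    });

    // PaaS environments like Cloud Foundry typically hand an app its port via an
    // environment variable; the exact variable name depends on the platform.
    var port = process.env.PORT || 3000;

    server.listen(port, function () {
      console.log('Listening on port ' + port);
    });

Run it with node app.js and hit http://localhost:3000/ to see the JSON response.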

The next session I did was “Removing the Operating System Barrier with Platform as a Service”. This session covers my primary line of work and advocacy these days. It involves a key facet of software development that I’ve dubbed the “Beer Factor”. More on the “Beer Factor” later.  🙂

In this session I covered the history, reasons for, and overall impetus of PaaS (Platform as a Service) and why it matters to software developers. The general gist is that it is changing the very way we can and will be doing software development. The change is absolutely for the better. Developers, consider yourselves empowered. Also, more on this in the near future.

The last session, which was more of a large-scale brainstorming session, was “Putting it all together, letting apps lead the cycle, TDD in the cloud”. This session really kicked off a lot of different thoughts around the MAJOR gaps in cloud computing development, so I’m going to break out some of those key points below:

  • There is no logical, easy, or well-defined way to test deployments to the cloud. Whether you’re AWS, Rackspace, Windows Azure, Tier 3, AppFog or any other company – deployment is not simple. A big impetus is to test production, something that absolutely has to be done. The gateways or checks for deploying software – for the underlying infrastructure, the platform, or anything that is geographically dispersed, multi-instance, or similar – are very difficult. For software developers, devops, and the like, we want this to be better. We all brainstormed a bit around this and the resounding sentiment was, “damn, this is hard, yet so powerful and enabling that we have to figure out better ways to test and do deployments into cloud environments” (see the sketch after this list for one trivially simple starting point).
  • Chaos Monkey must be let loose on the WORLD!!  See below:

    @adrianco chances of open sourcing Chaos Monkey? Room full of Cloud Devs want, egged on by @adron #seattlecodecamp

    @adron it’s on the way, actively being cleaned up for github

    Adrian Cockcroft RULZ! Thx Adrian, we’ll all keep an eye out for that! 🙂

  • One of the other things brought up was the endless options, and thus complexity, around the data story these days. This translates to: how do we simplify deployment of relational, document, object, key-value or other types of databases? Each needs a particular type of default deployment. How do we as developers create a better model to get our data repositories of choice up and live? With Cloud Foundry the data deployments are a single node, which isn’t really useful for things like Riak, Mongo, Couch or other databases that need to start with three or more nodes. It’s ok for relational databases, but it is very common to need that hot-swappable database running somewhere. These are all questions that need to be answered to make the data story of PaaS technologies more palatable.
  • Monitoring and intelligent systems. Some suggestions around monitoring, which came from the question of how to test a deployed system before and after deployment, were pretty solid. Nodes need to be intelligent enough to identify that they’re live, active, and doing X, Y or Z. Controllers need to understand and know how to interact with these nodes. The back and forth is somewhat complex, but I can imagine that, just like with Cloud Foundry, there is a viable and simple solution among all of this with the appropriate abstractions and build-out of systems.
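To make that first bullet a bit more concrete, here’s a trivially simple sketch of the kind of post-deployment smoke test we were brainstorming about – a script that hits a health endpoint on a freshly deployed app and fails loudly if it doesn’t answer. The host and the /health path are hypothetical placeholders, not any particular provider’s API:

    // smoketest.js - a naive post-deployment check (hypothetical example)
    var http = require('http');

    // Point this at whatever you just deployed.
    var options = { host: 'my-app.example.com', port: 80, path: '/health' };

    var req = http.get(options, function (res) {
      if (res.statusCode === 200) {
        console.log('Smoke test passed: app is responding.');
      } else {
        console.error('Smoke test FAILED: got HTTP ' + res.statusCode);
        process.exit(1);
      }
    });

    req.on('error', function (err) {
      console.error('Smoke test FAILED: ' + err.message);
      process.exit(1);
    });

Wire something like this into the tail end of a deployment script and at least the most basic “is it even alive?” question gets answered automatically.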

That’s my summary. I had a blast, got to see a lot of people I know and meet a lot of new people I didn’t know. I always love being able to catch up and really expand on what our efforts are individually and collectively as a development community. Great fun, until next time, cheers!

Raven DB, A Kick Starter using Tier 3 IaaS

I’m putting together a pretty sweet little application. It does some basic things that are slowly but surely expanding to cover some interesting distributed cloud computing business cases. But today I’m going to dive into my Raven DB experience. The idea is that Raven DB will act as the data repository for a set of API web services (that seems kind of obvious, I state the obvious sometimes).

The first thing we need is a server instance to get our initial node up and running. You can use whatever service, virtualization tool, or physical server you want. I’m going to use Tier 3’s services in my example, so just extrapolate for your own situation.

First I’ve logged in to the Tier 3 Control Site and am going to create a server instance.

Building the Tier 3 Node for Raven DB

Creating the Server Instance (Click for full size image)

Next step is to assign resources. Since this is just a single Raven DB node, and I won’t have it under heavy load, I’ve minimized the resources. This is more of a developer’s install, but it could easily be a production deploy; just allocate more resources as needed. Also note, I’ve added 50 GB of storage to this particular instance.

Setting Resources (Click for full size image)

Now that we’ve set these, click next, and on the next screen click on the server task. Here, add the public IP option and select the following services to open their ports.

Setting up a Public IP and the respective ports/services (Click for full size image)

Once added, the task will display as an item on the create server view. When that is done, click create server and the server build-out will start.

Creating the Server (Click for full size image)

Now log in with RDP to start setting up the server in preparation for loading Raven DB. The first thing you’ll want to do is go ahead and get Windows Update turned on. My preference is to just turn it on and get every update that is available. Once that is done, make sure to get the latest .NET 4 download from Windows Update too.

Getting Windows Update Turned On (Click for full size image)

Once all of the updates are finished and .NET 4 is installed, we’ll get down to the business of getting Raven DB installed. In this specific example I’ll be installing Raven DB as a Windows service; it can however be installed under IIS, so there are other options depending on how you need it installed.

Installing Raven DB

To get the software, navigate over to the Raven DB site at http://ravendb.net/ from the new instance we’ve just spun up. Click on the Download button and you’ll find the latest build over on the right-hand side. Click to download the latest software package to a preferred location on the system.

Raven DB – Open Source 2nd Generation Document DB (Click to navigate to the site)

Once you’ve downloaded it (I’ve put my download in the root of the R:\ partition I created), unzip it into a directory. (I’ve just unzipped it here into R:\ to make the paths easy to find; feel free to put it anywhere you’d prefer. In our Tier 3 environment the R drive is on a higher speed, thus higher IOP, drive system, so its abilities exceed your standard EBS/AMI or S3 style storage mechanisms.)

Saving the Raven DB Download (Click for full size image)
Saving to R:/ (Click for full size image)

At this point, open a command prompt to install Raven DB as a service. Navigate to the drive and folder location you’ve saved the file to. Below I’ve displayed a list of the folders and files in the structure.

CLI actions (click for full size image)

Once you’re in the path of the Raven.Server.exe file, run a slash install on it to get a Windows service for Raven DB running.
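For reference, the whole thing boils down to something like this at the command prompt – the R:\RavenDB\Server path is just an example of where the unzipped files might sit, so substitute whatever location you actually used:

    REM Change to the folder containing Raven.Server.exe (example path).
    R:
    cd \RavenDB\Server

    REM Install Raven DB as a Windows service.
    Raven.Server.exe /install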

Raven DB Installation results (click for full size image)

To verify that it is up and running (which, if you’ve gotten these results, you can rest assured it is, but I always like to just see the services icon), check out the services MMC.

Launching services (Click for full size image)

There it is running…

Now, you’re not done yet. There are a few other things you may want to take note of to be sure you’re up and running in every way you need to be.

Management and HTTP transport for Raven DB are done on port 8080, so you’ll have to open that port if you want to connect to the services of the database externally. On Windows, open up the Windows Firewall, right-click on Inbound Rules and click Add Rule.

Select Port (Click for full size image)
Select the Raven.Server.exe (click to see full size)
Inbound Rule (Click for full size image)
Open up however needed. (Click for full size image)
Public Private etc. (Click for full size image)

Enter a name and description on the next wizard dialog screen and click on Finish.
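As an aside, if you’d rather not click through the wizard, the standard netsh tooling can create the same inbound rule from an elevated command prompt (the “Raven DB HTTP” rule name is just an arbitrary label I picked):

    REM Allow inbound TCP traffic on port 8080 for Raven DB (run from an elevated prompt).
    netsh advfirewall firewall add rule name="Raven DB HTTP" dir=in action=allow protocol=TCP localport=8080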

Displayed active firewall rule (Click for full size image)

Now if you navigate to the IP of the instance on port 8080, you’ll be able to load the management portal for Raven DB and verify that it is running and that you have remote access.

Raven DB Management Screen (Click for full size image)
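You can also sanity check the endpoint without a browser. Here’s a tiny hypothetical sketch that just confirms something is answering on port 8080 – swap in your instance’s public IP:

    // check-raven.js - confirm the Raven DB HTTP endpoint answers (hypothetical example)
    var http = require('http');

    http.get({ host: 'YOUR.INSTANCE.IP', port: 8080, path: '/' }, function (res) {
      console.log('Raven DB responded with HTTP ' + res.statusCode);
    }).on('error', function (err) {
      console.error('No response from Raven DB: ' + err.message);
    });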

At this point, if you’d like more evidence of success, click on the “Create Sample Data” button and the management screen will generate some data.

Raven DB Management console with data (Click for full size image)

At this point you have a live Raven DB instance up and running in Tier 3. Next step is to break out and add nodes for better data integrity, etc.

Summary

In this write-up I’ve shown a short how-to on installing and getting Raven DB ready for use on Windows Server 2008 in Tier 3’s Enterprise Cloud environment. In the very near future I’ll broach the topics of big data with Raven DB, other databases like Riak, and their usage in a cloud environment like Tier 3. Thanks for reading, cheers!