In-memory Orchestrate Local Development Database

I was talking with Tory Adams @BEZEI2K about working with Orchestrate's services. We're totally sold on what they offer and are looking forward to a lot of the technology that is in the works. Day-to-day building against Orchestrate is super easy, and setting up collections for dev, test, or whatever is so simple that nothing has stood in our way. Except one thing…

Every once in a while we have to work disconnected. Whatever the reason might be: Comcast cable goes out, we decide to jump on a train, or one of us ends up on one of those Q400 puddle jumpers that doesn't have wifi! Regardless of being disconnected from wifi, cable, or internet connectivity, we still want to be able to code and test!

In-memory Orchestrate Wrapper

Enter the idea of creating an in-memory Orchestrate database wrapper. Using something like convict.js, one could easily redirect all the connections as necessary when developing locally. That way development continues right along, and when the application is pushed live it's redirected to the appropriate Orchestrate connections and keys!
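
A minimal sketch of what that redirection could look like with convict.js; the local module path and the environment variable names here are placeholders for illustration, not anything Orchestrate ships:

```javascript
// config.js - a minimal convict.js sketch for switching data backends.
// The "./lib/orchestrate-memory" module and ORCHESTRATE_API_KEY variable
// are hypothetical placeholders for the wrapper this post proposes.
var convict = require('convict');

var config = convict({
  env: {
    doc: 'The application environment.',
    format: ['production', 'development', 'test'],
    default: 'development',
    env: 'NODE_ENV'
  },
  orchestrate: {
    apiKey: {
      doc: 'Orchestrate API key (only needed when connected).',
      format: String,
      default: '',
      env: 'ORCHESTRATE_API_KEY'
    }
  }
});

config.validate();

// Pick the real client when live, the in-memory fake when local.
var db;
if (config.get('env') === 'production') {
  db = require('orchestrate')(config.get('orchestrate.apiKey'));
} else {
  db = require('./lib/orchestrate-memory'); // the wrapper this post proposes
}

module.exports = db;
```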

This in-memory “fake” or “mock” would need to have the key/value, events, and graph stores set up just like Orchestrate. With this in memory, one could also easily write tests against a real fake and be able to test connected or disconnected without mocking. Not to say that's a good or bad idea, but one more tool in the tool chest doesn't hurt!
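
As a rough sketch of the key/value piece alone (the get/put/remove method names just mimic the general shape of the orchestrate.js client and are assumptions, not a spec):

```javascript
// orchestrate-memory.js - a tiny in-memory key/value fake, just a sketch.
// Events and graph stores would need the same treatment to be a "real fake."
var store = {};

module.exports = {
  get: function (collection, key, callback) {
    var bucket = store[collection] || {};
    // Stay async so calling code behaves the same either way.
    process.nextTick(function () {
      callback(null, bucket[key]);
    });
  },
  put: function (collection, key, value, callback) {
    store[collection] = store[collection] || {};
    store[collection][key] = value;
    process.nextTick(function () {
      callback(null, value);
    });
  },
  remove: function (collection, key, callback) {
    if (store[collection]) {
      delete store[collection][key];
    }
    process.nextTick(function () {
      callback(null);
    });
  }
};
```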

If something like this doesn't pop up in the next week or three, I might just have to kick off this project myself! If anybody is interested please reach out to me and let's discuss! I'm open to writing it in JavaScript, C#, Java or whatever poison pill you'd prefer. (I'm polyglot, so I'm not about to limit my options!!)

Other Ideas, Development Shop Swap

Another idea that I've been pondering is setting up a development shop swap. I'll leave the reader to determine what that means!  😉  Feel free to throw down ideas that this might bring up and I'll incorporate them into the eventual implementation. I'll have more information about that idea right here once the project gets rolling. In the meantime, happy coding!

Architectural PaaS Cracks or Crack PaaS

Over the last couple of years two prominent open source PaaS solutions have come onto the market: Cloud Foundry & OpenShift. There's been a lot of talk about these plays, and the talk has slowly but steadily turned into traction. Large enterprises are picking these up and giving their developers and operations staff a real chance to make changes. Sometimes disruptive in a very good way.

However, with all the grandeur I’m going to hit on the negatives. These are the missing parts, the serious pain points beyond just some little deployment nuisance. Then a last note on why, even amidst the pain points, you still need to make real movement with PaaS tooling and technologies.

Negative: The Data Story is Lacking

Both Cloud Foundry and OpenShift have a way to plug into databases easily.

Cloud Foundry provides ways to build a Cloud Foundry Service that becomes the bound and hooked-in SQL Server, MySQL, PostgreSQL, Redis or whatever data storage service you need. For more details on building a service, check out the echo example on the vcap sample GitHub project.
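
Once a service is bound, the application typically reads its credentials out of the VCAP_SERVICES environment variable. A rough Node.js sketch, where the 'mysql-5.1' label and credential fields are just an example shape and will vary by service:

```javascript
// Read credentials for a bound Cloud Foundry service from VCAP_SERVICES.
// Treat the service label and credential property names as an example
// shape, not a guarantee; check what your environment actually provides.
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');

var mysqlInstances = services['mysql-5.1'] || [];
if (mysqlInstances.length > 0) {
  var creds = mysqlInstances[0].credentials;
  console.log('Bound MySQL at ' + creds.hostname + ':' + creds.port);
} else {
  console.log('No bound MySQL service found, falling back to local config.');
}
```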

OpenShift has what are called Cartridges which provide the ability to add databases and other services into the system. For more information about the cartridges check out Red Hat’s OpenShift Documentation and also the forums.
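
OpenShift's cartridges expose their connection details to the app through environment variables. A hedged Node.js sketch of reading the MySQL cartridge's variables; the OPENSHIFT_MYSQL_DB_* names are my recollection of the cartridge docs and should be verified against your gear's actual environment:

```javascript
// Read MySQL cartridge connection info from OpenShift environment variables.
// Variable names are assumptions based on the MySQL cartridge docs; run
// `env | grep OPENSHIFT` on your gear to confirm what is actually set.
var dbConfig = {
  host: process.env.OPENSHIFT_MYSQL_DB_HOST || '127.0.0.1',
  port: process.env.OPENSHIFT_MYSQL_DB_PORT || 3306,
  user: process.env.OPENSHIFT_MYSQL_DB_USERNAME || 'root',
  password: process.env.OPENSHIFT_MYSQL_DB_PASSWORD || '',
  database: process.env.OPENSHIFT_APP_NAME || 'app'
};

console.log('Connecting to ' + dbConfig.host + ':' + dbConfig.port);
```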

Cloud Foundry and OpenShift, however, have distinctive weak spots when it comes to services that go beyond a mere single-instance database. In the case of a truly distributed database such as Cassandra, HBase or Riak, it is inordinately difficult to integrate in a way that either PaaS interoperates with well. In some cases it's not even worth trying.

The key problem is that both of the PaaS systems assume the mantle of master while relegating the distributed database to a lower tier of coordination. The way to resolve this at the moment is to do an autonomous installation of Riak, Cassandra, Neo4j or another database that may be distributed, hot swappable, or otherwise spread across multiple machine or instance points, and then create a bound connection between it and the application hosted in the PaaS. This is the big negative in PaaS systems and tooling right now: the data story just doesn't expand well to the latest in data and database technologies. I'll elaborate more about this below.
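
In practice that bound connection often ends up being plain configuration the app reads at startup. A minimal sketch of a PaaS-hosted Node.js app talking to an autonomously run Riak node over its HTTP interface, where RIAK_HOST and RIAK_PORT are hypothetical variables you'd set when wiring things together:

```javascript
// Talk to an externally hosted Riak node over its HTTP API.
// RIAK_HOST and RIAK_PORT are hypothetical env vars set when "binding"
// the external cluster to the PaaS-hosted app.
var http = require('http');

var riakHost = process.env.RIAK_HOST || 'localhost';
var riakPort = process.env.RIAK_PORT || 8098; // Riak's default HTTP port

function getValue(bucket, key, callback) {
  var options = {
    host: riakHost,
    port: riakPort,
    path: '/buckets/' + bucket + '/keys/' + key
  };
  http.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { callback(null, res.statusCode, body); });
  }).on('error', callback);
}

getValue('coffee', 'latte', function (err, status, body) {
  if (err) { return console.error(err); }
  console.log('HTTP ' + status + ': ' + body);
});
```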

Negative: Deployment is Sometimes Easy, Maintenance is Sometimes Hard

Cloud Foundry is extremely rough to deploy unless you use Bosh to deploy to either VMware virtualized instances or AWS. Now, you could, if resources were available, get Bosh to deploy your Cloud Foundry environment anywhere you wanted. However, that's not easy to do. Bosh is still a bit of a black box. I, along with others in the community, am working to document Bosh, but it is slow going.

OpenShift is dramatically easier to deploy, but once deployed it is missing a few key pieces that add some operational overhead. One of those: OpenShift requires more networking management to handle routing between the various parts of the PaaS ecosystem.

Overall, this boils down to what you need between the two PaaS tool chains. If you want Cloud Foundry's automatic routing and management between nodes, it's a viable route; but if your team wants to manage the networking tier more autonomously from the PaaS environment, then maybe OpenShift is the way to go. In the end, it's bumpy territory to determine which you may or may not want based on that.

Negative: Full Spectrum Polyglot, Missing Some

Cloud Foundry has a wider selection of languages and frameworks, with community involvement around those from groups like Iron Foundry. OpenShift, I'm sure, will be getting to parity in the coming months. I have no doubt that both of these PaaS ecosystems will expand to new languages and frameworks over time. Being polyglot, after all, is a no-brainer these days!

Why PaaS Is, IMHO, Still Vitally Important

First, toss out the idea that huge, web-scale Facebooks and Googles need to be built. Think about what the majority of developers out there in the world work on: tons and tons and tons of legacy or greenfield enterprise applications. Sometimes the developer is lucky enough to work on a full vertical mix of things for a small business, but generally, the standard developer in the world is working on an enterprise app.

PaaS tooling takes the vast majority of that enterprise app maintenance from the operational side and tosses it out. Instead of managing a bunch of servers with a bunch of different apps, the operations team manages an ecosystem that hosts a bunch of apps. This, for the enterprises that have enough foresight and have managed their IT assets well enough to be able to implement and use PaaS tooling, is HUGE!

For companies working to stay relevant in the enterprise, for companies looking to make inroads into the enterprise, and especially for enterprises that are looking to maintain, grow, or are struggling to keep ahead of the curve – PaaS tooling is a must-have.

Just ask a dev: do they want to spend a few hours configuring and testing a server? Or do they want to deploy their application and focus on building more value into that application?

…having spent a few years as that developer, I'll hedge on the side of adding value.

What’s Next?

So what’s next? Two major things in my opinion.

1. Fill the data gap. Most of the PaaS tooling needs to bridge the gap with the data story. I'm doing my part with testing, development and efforts to get real options built into these environments, but this often leads back to the data story of PaaS being weak. What's the solution here? Talks and planning sessions are ongoing, and we'll eventually get a solid solution around the data side.

2. Fix deployments & deployment management. Bosh isn't straightforward or obvious in what it does, and Cloud Foundry is easily the hardest thing to deploy, with many dependencies. OpenShift is easier to deploy, but neither of them actually has a solid management story over time. Bosh does some impressive updates of Cloud Foundry, and OpenShift has some upgrade methods, but over time and during day-to-day operations there haven't been any clear-cut wins for viewing, monitoring and managing nodes and data within these environments.

Write the Docs, Portland Proper Brew, Bike n' Hack and Polyglot Conference 2013

I just wrapped up a long weekend of staycation. Write the Docs kicked off the week on Monday, and today, Tuesday, I'm getting back into the saddle.

Write the Docs

The Write the Docs Conference this week, a two-day affair, has kicked off an expanding community around documentation creation. This conference is about what documentation is and how we create documentation as technical writers, writers, coders and others in the field.

Not only is it about those things, it is about how people interact and why documentation is needed in projects. This is one of the things I find interesting; it seems obvious, but it is entirely not obvious because of the battle between good documentation, bad documentation or a complete lack of documentation. The latter being the worst situation.

The Bloody War of Documentation!

At this conference, the ideal scenario identified is that documentation starts before any software is even built. I do and don't agree with this, because I know we must avoid BDUF (Big Design Up Front). But we must take this idea of documentation-first in the appropriate context of how we're speaking about documentation at the conference. Just as identifying tests & behaviors up front, before the creation of the actual implementation, is vital to solid, reliable, consistent, testable & high-quality production software, good documentation is absolutely necessary.

There are some situations, the exceptions, such as with agencies that create software in which the software is throwaway. I'm not, and I don't think much of the conference is, talking about those types of systems. What we've been speaking about at the conference is the systems, or ecosystems, in which software is built, maintained and used for many years. We're talking about the APIs that are built and then used by dozens, hundreds or thousands of people. Think of Facebook, GitHub and Twitter. All of these have APIs that thousands upon thousands use every day. They're successful in large part, extremely so, because of stellar documentation. In the case of Facebook, there's some love and hate to go around because they've gone between good documentation and bad documentation. However, whenever it has been reliable, developers have moved forward with these APIs and built billion-dollar empires that employ hundreds of people and benefit thousands of people beyond that.

The developers that have been speaking at the conference, the developers in the audience, and this developer too all tend to agree: build that README file before you build a single other thing within the project. Keep that README updated, keep it marked up and easy to read, and make sure people know what your intent is as best you can. Simply put, document!

You might also have snarkily asked, does Write the Docs have docs? Why yes, it does:

http://docs.writethedocs.org/ <- Give 'em a read, they're solid docs.

Portland Proper Brew

Today, while using my iPhone to catch up on news & events from my staycation, I took a photo. On that photo I used Stitch to put together some arrows. Kind of a Portland Proper Brew (PPB) with documentation. (See what I did there!) It exemplifies a great way to start the day.

Every day I bike (or ride the train or bus) into downtown Portland, anywhere from 5-9 kilometers, and swing into Barista on 3rd. Barista is one of the finest coffee shops in Portland & the world. If you don't believe me, drag your butt up here and check it out. Absolutely stellar baristas, the best coffee (Coava, Ritual, Sightglass, Stumptown & others), and pretty sweet digs to get going in the morning.

I'll have more information soon on a new project I've kicked off. Right now it's called Bike n' Hack, which will be a scavenger-style code hacking & bicycle riding urban awesome game. If you're interested in hearing more about the project, the game & how everything will work, be sure to contact me via Twitter @adron or jump into the bike n' hack GitHub organization. The team will be adding more information about who, what, where, when and why this project is going to be a blast!

Polyglot Conference & the Zombie Apocalypse

I'll be teaching a tutorial, “Introduction to Distributed Databases”, at Polyglot Conference in Vancouver in May!  So it has begun & I'm here for you! Come and check out how to get a Riak deployment running in your survival bunker's data center. Whether it's zombies or just your pointy-haired boss's apocalypse scenarios, we'll discuss how consistent hashing, hinted handoff and gossiping can help your systems survive infestations! Here's a basic outline of what I'll cover…

Introducing Riak, a database designed to survive the Zombie Plague. Riak Architecture & 5 Minute History of Riak & Zombies.

Architecture deep dive:

  • Consistent Hashing, managing to track changes when your kill zone is littered with Zombies (a toy sketch follows this list).
  • Intelligent Replication, managing your data against each of your bunkers.
  • Data Re-distribution, sometimes they overtake a bunker, how your data is re-distributed.
  • Short Erlang Introduction, a language fit for managing post-civil society.
  • Getting Erlang
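
To make the consistent hashing bullet above a bit more concrete, here's a toy sketch of hashing keys onto a ring of partitions, loosely in the spirit of how Riak maps keys to vnodes. The 64-partition ring, the SHA-1 truncation and the round-robin claim are simplifications for illustration, not Riak's actual implementation:

```javascript
// Toy consistent hashing: map keys onto a fixed ring of partitions.
// Riak hashes bucket/key pairs onto a 160-bit ring; this sketch truncates
// the SHA-1 hash and uses 64 partitions just to show the idea.
var crypto = require('crypto');

var RING_SIZE = 64; // number of partitions on the toy ring

function partitionFor(bucket, key) {
  var hash = crypto.createHash('sha1')
    .update(bucket + '/' + key)
    .digest('hex');
  // Take the first 8 hex chars (32 bits) and map into the ring.
  var position = parseInt(hash.substring(0, 8), 16);
  return position % RING_SIZE;
}

// Each bunker (node) claims an equal slice of partitions - a toy claim scheme.
function nodeFor(partition, nodes) {
  return nodes[partition % nodes.length];
}

var nodes = ['bunker-a', 'bunker-b', 'bunker-c', 'bunker-d', 'bunker-e'];
var partition = partitionFor('survivors', 'adron');
console.log('key -> partition ' + partition + ' -> ' + nodeFor(partition, nodes));
```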

Installing Riak on…

  • Ubuntu, RHEL & the Linux Variety.
  • OS X, the only user-centered computers to survive the apocalypse.
  • From source, maintained and modernized for humanity's survival.
  • Upgrading Riak, when a bunker is retaken from the zombies, it's time to update your Riak.
  • Setting up

Devrel – A developer’s machine w/ Riak – how to manage without zombie bunkers.

  • 5 nodes, a basic cluster
  • Operating Riak
  • Starting, stopping, and restarting
  • Scaling up & out
  • Managing uptime & data integrity
  • Accessing & writing data

Polyglot client libraries

  • JavaScript/Node.js & Erlang for the zombie curing mad scientists.
  • C#/.NET & Java for the zombie creating corporations.
  • Others, for those trying to just survive the zombie apocalypse.

If you haven’t registered for the Polyglot Conference yet, get registered ASAP as it could sell out!

Some of the other tutorials that are happening, that I wish I could clone myself for…

That’s it for updates right now, more code & news later. Cheers!

Coder Society Seattle, Meeting this Saturday

Here it comes. Coder Society Seattle, Inaugural Kick Off!

I hope you can make it. Here's the plan so far. We're all meeting at Blue Box in beautiful downtown Seattle at 10am. We'll set up a board (kanban style) and immediately jump into breaking our domain out (see the Coder Society Google Group for conversation around this, or check out below). I'll have the post-its, you bring the desire to learn new frameworks and build a cool something or other!

One thing we learned while getting things done at the first Coder Society Portland meeting was that building infrastructure elements really held us back. So for this meetup we'll dive straight into a PaaS option with Iron Foundry's Cloud Foundry environment. This will allow us to use almost any application environment option we want and couple it to whatever database we need.

Some of the things you’ll need for the meet up:

  1. Desire to learn.
  2. Intent to code, code, and deploy.
  3. You will need a laptop. So buy one, steal it, borrow it, or whatever you gotta do.

The meeting will be held at Blue Box at 119 Pine Street, Suite 200, Seattle, WA 98101.
Updates will also be provided via the Twitter Account: https://twitter.com/#!/codersociety and the site http://codersociety.org

Here's a rundown of our initial kick-off goals for the meeting. Polyglot applications for the win!

Primary Goal: Polyglot Systems
For this we will do a simple project where we pick technologies using at least two different programming languages, have them perform different roles in the application, and share information through something neutral like Mongo or Redis. The prime choices, and we can add others, will be Node.js/Express.js + Ruby & Sinatra, and possibly C#/.NET MVC. This all depends on the desire of the meeting audience.
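
As a rough sketch of the sharing piece, the Node.js/Express side might drop a coffee rating into Redis for the Ruby/Sinatra side to pick up. The route, key name and field shapes here are illustrative assumptions, and it assumes Express 4.16+ plus the classic callback-style node_redis client:

```javascript
// Node.js/Express side of the polyglot app: accept a coffee rating and
// stash it in Redis where the Ruby/Sinatra side can read it.
// Key names and field shapes are illustrative, not a spec.
var express = require('express');
var redis = require('redis');

var app = express();
app.use(express.json()); // built-in JSON body parser (Express 4.16+)

var client = redis.createClient(); // assumes a local or bound Redis, callback-style client

app.post('/ratings', function (req, res) {
  var rating = {
    beverage: req.body.beverage, // e.g. "latte"
    stars: req.body.stars,       // e.g. 1 through 5
    drinker: req.body.drinker
  };
  // Push onto a shared list the Sinatra side can LRANGE/LPOP from.
  client.rpush('coffee:ratings', JSON.stringify(rating), function (err) {
    if (err) { return res.status(500).send(err.message); }
    res.status(201).json(rating);
  });
});

app.listen(3000);
```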

Stories: We’re kicking off the meeting goal with a theme many software developers will be super familiar with. Coffee!

  • A coffee drinker wants to add a rating for the coffee beverage.
  • A coffee drinker wants to list the price of the coffee beverage.
  • A coffee drinker wants to review the Barista.
  • A coffee drinker wants to rate the Barista.
  • A coffee drinker wants to know where the beverage was made.
  • A coffee drinker wants to know where the Barista was working.
  • A coffee drinker doesn’t want to read huge reviews.
  • A Barista wants to be able to list their coffees they use.
  • A Barista wants to be able to comment on reviews.
  • A Barista wants to be able to list their prices for coffee.
  • A Barista wants to be able to select their specialty.
  • A Barista or coffee drinker wants to be able to add or view outlets, wifi, or other information about the coffee shop.
  • A coffee drinker wants to be able to “follow” their favorite Barista.
  • A Barista wants to be able to send an alert to all of their coffee drinkers.
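
Just to make the stories above concrete, here's a rough guess at what a single review might look like as a Mongo document; the field names are placeholders for the teams to argue over, not a finished schema:

```javascript
// A hypothetical shape for one coffee review document in MongoDB.
// Field names and nesting are placeholders, not a finished schema.
var review = {
  beverage: 'cappuccino',        // limited to the drinks listed below
  rating: 4,                     // the coffee drinker's rating
  price: 3.50,                   // listed price of the beverage
  review: 'Short and sweet.',    // drinkers don't want huge reviews
  barista: {
    name: 'Sam',
    rating: 5,
    workingAt: 'Barista on 3rd'
  },
  shop: {
    name: 'Barista',
    wifi: true,
    outlets: 'plenty'
  },
  createdAt: new Date()
};
```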

Arbitrary Limits:

  • Coffee Drinks are limited to: Cappuccinos, Lattes, Mochas, Macchiatos, Espresso.
  • We’ll be deploying to the Iron Foundry (Cloud Foundry core) PaaS, which really doesn’t put any particular limitations on us.  😉

Frameworks: After splitting into teams, we'll iron out which frameworks we want to use and implement using the chosen frameworks.

  • Node.js + Express.js / Bricks.js
  • Ruby on Rails
  • Ruby + Sinatra
  • ASP.NET MVC
  • Assembly. Ya know, for the insanely hard core. 🙂

Database:

  • MongoDB, maybe Redis, PostgreSQL, or Neo4j if needs arise.

Prerequisites:

  • Bring a Laptop (or computing device you can do development on).
  • Bring some familiarity with setting up and using your development platform. This could be .NET, Ruby on Rails, Sinatra, Node.js, PHP, or whatever.
  • Bring a spirit to learn about new frameworks, get all polyglot, and have fun.

Meeting Workflow:

  1. The meeting will kick off.
  2. Teams will form.
  3. The kanban board will be explained and setup for use by the teams.
  4. We'll unpack the user stories, set up the workflow, and review the idea behind the meeting.
  5. Select team technology & domain element (barista or coffee drinker).
  6. Setup tasks within teams.
  7. Pick pairs to work on tasks.
  8. Code… implement…
  9. After implementation, we’ll review everything, and trade war stories.