Tag Archives: dotnet

DSE6 + .NET v?

Project Repo: Interoperability Black Box

First steps. Let's get .NET installed and set up. I'm running Ubuntu 18.04 for this setup and the start of the project. Installing .NET on Ubuntu means a multi-command process of registering keys and package sources; fortunately Microsoft's teams have made this almost easy by providing the commands for the various Linux distributions here. The commands I ran to get all this initial setup done are as follows.

wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg
sudo mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/
wget -q https://packages.microsoft.com/config/ubuntu/18.04/prod.list
sudo mv prod.list /etc/apt/sources.list.d/microsoft-prod.list
sudo chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg
sudo chown root:root /etc/apt/sources.list.d/microsoft-prod.list

After all this I could then install the .NET SDK. It's been so long since I'd actually installed .NET on anything that I wasn't sure whether I needed just the runtime, the SDK, or what exactly. I assumed it would be safe to install the SDK and then install the runtime too.

sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1

Then the runtime.

sudo apt-get install aspnetcore-runtime-2.1
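To sanity-check the install, the dotnet CLI will report the SDK version it finds; this should print a 2.1.x version if everything landed correctly.

dotnet --version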

Alright. Now with this installed, I wanted to see whether JetBrains Rider would detect that .NET is installed, or at least what I'd have to do to get the IDE to detect it. So I opened up the IDE to see what the results would be. Over on the left hand side of the new solution dialog, if anything isn't installed Rider will usually display a message that X whatever needs to be installed. But it looked like everything was showing up as installed. Yay for things working (at this point)!


Next up is to get a solution started with the pertinent projects for what I want to build.


For the next stage I created three projects.

  1. InteroperabilityBlackBox – A basic class library that will be used by a console application or whatever other application or service may need access to the specific business logic and what not.
  2. InteroperabilityBlackBox.Tests – An xUnit testing project for testing anything that might need some good ol' testing.
  3. InteroperabilityBlackBox.Cli – A console application (CLI) that I'll use to interact with the class library and add capabilities going forward.

Alright, now that all the basic projects are set up in the solution, I'll go see about the .NET DataStax Enterprise driver. Inside JetBrains Rider I can right click on the project I want to add or manage dependencies for. I did that and then put "dse" in the search box. The dialog pops up from the bottom of the IDE and you can add the package by clicking the plus sign in the description box to the right. Once you click the plus sign and the package installs, it becomes a little red x.

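If you're not following along in Rider, the same dependency can be added from the terminal with the standard .NET CLI; this assumes the package ID is Dse, which is what that search turns up.

dotnet add package Dse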

Alright. Now it's almost time to get some code working. We need ourselves a database first, however. I'm going to set up a cluster in Google Cloud Platform (GCP), but feel free to use whatever cluster you've got; these instructions will basically be reusable wherever your cluster is set up. I wrote up a walk-through and instructions for the GCP Marketplace a few weeks ago, and I used the same offering to get this example cluster up and running. So, now back to getting the first snippets of code working.

Let’s write a test first.

[Fact]
public void ConfirmDatabase_Connects_False()
{
    var box = new BlackBox();
    Assert.Equal(false, box.ConfirmConnection());
}

In this test, I named the class BlackBox and am planning on a parameterless constructor. But tests are very fluid, or ought to be, and I may change it in the next iteration. I'm thinking, at least to get started, that I'll have a method to test and confirm a connection for the CLI; I've named it ConfirmConnection for that purpose. Initially I'm going to test for false, but that's primarily just to get started. Now, time to implement.

using System;
using Dse;
using Dse.Auth;

namespace InteroperabilityBlackBox
{
    public class BlackBox
    {
        public BlackBox()
        {}

        public bool ConfirmConnection()
        {
            return false;
        }
    }
}

That gives a passing test and I move forward. For more of the run-through of moving from this first step to the finished code, check out the session recording.
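If you're following along from the terminal rather than the IDE, the xUnit project runs under the stock CLI test runner; this assumes the project folder matches the project name.

dotnet test InteroperabilityBlackBox.Tests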

By the end of the coding session I had a few tests.

using Xunit;

namespace InteroperabilityBlackBox.Tests
{
    public class MakingSureItWorksIntegrationTests
    {
        [Fact]
        public void ConfirmDatabase_Connects_False()
        {
            var box = new BlackBox();
            Assert.Equal(false, box.ConfirmConnection());
        }

        [Fact]
        public void ConfirmDatabase_PassedValuesConnects_True()
        {
            // Still asserting false for now; this flips to true once the
            // test can point at a live cluster with real credentials.
            var box = new BlackBox("cassandra", "", "");
            Assert.Equal(false, box.ConfirmConnection());
        }

        [Fact]
        public void ConfirmDatabase_PassedValuesConnects_False()
        {
            var box = new BlackBox("cassandra", "notThePassword", "");
            Assert.Equal(false, box.ConfirmConnection());
        }
    }
}

The respective code for connecting to the database cluster, per the walk-through I wrote about here, looked like this at session end.

using System;
using Dse;
using Dse.Auth;

namespace InteroperabilityBlackBox
{
    public class BlackBox : IBoxConnection
    {
        public BlackBox(string username, string password, string contactPoint)
        {
            UserName = username;
            Password = password;
            ContactPoint = contactPoint;
        }

        public BlackBox()
        {
            UserName = "ConfigValueFromSecretsVault";
            Password = "ConfigValueFromSecretsVault";
            ContactPoint = "ConfigValue";
        }

        public string ContactPoint { get; set; }
        public string UserName { get; set; }
        public string Password { get; set; }

        public bool ConfirmConnection()
        {
            // Build a cluster object for the contact point, authenticating
            // with plain text credentials against the DSE cluster.
            IDseCluster cluster = DseCluster.Builder()
                .AddContactPoint(ContactPoint)
                .WithAuthProvider(new DsePlainTextAuthProvider(UserName, Password))
                .Build();

            try
            {
                // If Connect() doesn't throw, the cluster is reachable.
                cluster.Connect();
                return true;
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
                return false;
            }
        }
    }
}

With my interface providing the contract to meet.

namespace InteroperabilityBlackBox
{
    public interface IBoxConnection
    {
        string ContactPoint { get; set; }
        string UserName { get; set; }
        string Password { get; set; }
        bool ConfirmConnection();
    }
}
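The CLI project didn't get wired up during this session, but a minimal Program.cs connecting the console app to the library might look something like the following; the argument handling here is purely hypothetical.

using System;

namespace InteroperabilityBlackBox.Cli
{
    class Program
    {
        static void Main(string[] args)
        {
            // Hypothetical: take username, password, and contact point as
            // arguments, or fall back to the parameterless constructor.
            // BlackBox resolves via the parent InteroperabilityBlackBox namespace.
            var box = args.Length == 3
                ? new BlackBox(args[0], args[1], args[2])
                : new BlackBox();

            Console.WriteLine(box.ConfirmConnection()
                ? "Connection confirmed."
                : "Connection failed.");
        }
    }
}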

Conclusions & Next Steps

After I wrapped up the session, two things stood out that needed to be fixed before the next session. I'll be sure to add these as objectives for the next coding session at 3pm PST on Thursday.

  1. The tests really need to more resiliently confirm the integrations I'm working to prove out. My plan at this point is to add some Docker images that would give the development integration tests a target to work against (see the sketch after this list). That would remove the need for anything outside of the actual project repository to exist, removing that fragility.
  2. The application, in its "Black Box", should do something. For the next session we'll write up some feature requests we'd want, or maybe someone has suggestions for functionality they'd like to see implemented in a CLI using .NET Core working against a DataStax Enterprise Cassandra database cluster? Feel free to leave a comment or three about a feature, and I'll work on adding it during the next session.
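As a sketch of that first objective: DataStax publishes a DSE server image on Docker Hub, so a single container could stand in as the integration target for the tests. The image name, license variable, and port below are from the datastax/dse-server docs as I recall them, so treat them as assumptions to verify.

# Assumed image and flags; DataStax's image requires DS_LICENSE=accept.
# 9042 is the CQL native protocol port the driver connects to.
docker run -e DS_LICENSE=accept --name dse-test -d -p 9042:9042 datastax/dse-server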

Chapter 2 in My Twitch Streaming

A while back I started down the path of getting a Twitch channel started. At this point I've gotten a channel set up, which I've dubbed Thrashing Code, although it still just has "adronhall" all over it. I'll get those details further refined as I work on it more.

Today I recorded a new Twitch stream about doing a Twitch stream, and edited together a video covering all the pieces, cameras, and angles. It could prospectively help people get started; it's just my experiences so far and my efforts to get everything connected right. The actual stream recording is available and I'll leave it on the channel. The edited video will also be available, and I'll post a link here.

Tomorrow will be my first official Twitch stream at 3pm PST. If you're interested in watching, check out my Twitch profile here; follow and it'll ping you when I go live. This first streaming session, or episode, or whatever you want to call it, will cover a couple of topics. I'll be approaching these topics as someone just starting out, so if you join, help hold me to that! Don't let me skip ahead, and if you notice I've left out something key, please join and chat at me during the process. I want to make sure I'm covering all the bases as I step toward the key objectives. Which, speaking of…

Tomorrow’s Mission Objectives

  1. Create a DataStax Enterprise Cassandra Cluster in Google Cloud Platform.
  2. Create a .NET project using the latest cross-platform magical bits that will have a library for abstracting the data source(s), a console interface for using the application, and of course a test project.
  3. Configure & connect to the distributed database cluster.

Mission Stretch Objectives

  1. Start a GitHub repo to share the project with others.
  2. Set up some .github templates for feature request issues or related issues.
  3. Write up some GitHub issue feature requests and maybe even add some extra features to the CLI for…??? no idea ??? determine 2-3 during the Twitch stream.

If you'd like to follow along, here's what I have installed. You're welcome to use the same tooling I've got here or some variant of other things. Feel free to bring up tooling via chat if you're curious about it, and I'll answer questions where and when I can.

  • Ubuntu v18.04
  • .NET Core v2.1
  • DataStax Enterprise v6

Alright Stop Waiting, CorrugatedIron v1.0 .NET Client released for Riak!

I'll kick right off with all the specifics: Jeremiah did a blog entry on today's release titled "Just one more thing… Introducing CorrugatedIron v1.0".

Send a congrats out to the team duo of OJ @TheColonial & Jeremiah @peschkaj via Twitter. Check out the .NET Rocks Podcast with Jeremiah talking with Carl and Richard about storing data in Riak. Also check out Adrian Hills’ article on getting up and running via Nuget with Corrugated Iron in Visual Studio, ping him on Twitter @AdaTheDev.

So there are no excuses in .NET land not to write some hard core, data centric apps backed by the power of Riak! There are tons of features. You can read about them yourself via the README.md file in the CorrugatedIron repository, but I wanted to post the features right here so you get an idea of the feature rich capabilities of the library. In addition, it does indeed work on Linux & OS X with Mono, so don't let Windows get in your way! 🙂

Current Features

♥: denotes availability of both blocking and asynchronous APIs
«: denotes availability of both streaming and non-streaming APIs

  • Riak cluster support:
    • One or more nodes in the cluster.
    • Load-balancing and pooling of connections across the nodes.
      • Currently only round-robin is supported, more strategies to come later.
    • Per-node configuration for:
      • Host Name (purely used for identification).
      • Host Address.
      • PBC Port.
      • HTTP/REST Port.
      • Pool Size.
      • Timeout parameters.
  • Server ping.    ♥
  • Get server information/version.    ♥
  • Simple Get/Put/Delete operations.    ♥
  • Bulk Get/Put/Delete operations.    ♥
  • List buckets.    ♥
  • List keys.    ♥  «
  • Semi-fluent Map/Reduce.    ♥  «
  • Link walking.    ♥
  • Delete buckets.    ♥
  • Set/Get bucket properties.    ♥
  • Batch operations on a single connection.
    • Each time a Client function is called resulting in communication with the Riak cluster, a connection is pulled from a pool on a given node. In most use-cases this functionality is fine as it is often single-shot calls that are made. There are, however, cases where many operations will happen at once. Rather than forcing the user to make multiple calls to the client, resulting in multiple connection acquisitions behind the scenes, the user can use the Batch interface to make many calls on a single connection. This also reduces the overhead of setting the client ID on each call.
    • Because a batch operation reuses a single connection only a subset of the client API is available for batch actions. The functions that are excluded are the asynchronous functions.
  • Gracefully degrades to the HTTP/REST API when the request isn't supported via Protocol Buffers.
  • Configurable via web.config, app.config, or a custom configuration file.

CorrugatedIron works with .NET 4.0 on Windows, and with Mono on Linux and OS X.
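To give a quick taste of the library, here's roughly what getting connected looks like, following the getting-started pattern in the README; the "riakConfig" section name, bucket, and key here are placeholders you'd define yourself, so treat the sketch as a starting point rather than gospel.

using System;
using CorrugatedIron;
using CorrugatedIron.Models;

namespace RiakSample
{
    class Program
    {
        static void Main()
        {
            // "riakConfig" names a Riak configuration section in app.config.
            var cluster = RiakCluster.FromConfig("riakConfig");
            var client = cluster.CreateClient();

            // Ping the cluster to confirm connectivity.
            var ping = client.Ping();
            Console.WriteLine(ping.IsSuccess ? "Riak is alive!" : ping.ErrorMessage);

            // A simple put/get round trip with a JSON-serialized object.
            client.Put(new RiakObject("testBucket", "testKey", new { Greeting = "hello" }));
            var fetched = client.Get("testBucket", "testKey");
            Console.WriteLine(fetched.IsSuccess ? "Fetched " + fetched.Value.Key : fetched.ErrorMessage);
        }
    }
}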

Thor Project Opens Up, Building the Cloud Foundry Ecosystem with the Community

The Iron Foundry team are big advocates of open source software. We write code across all sorts of languages, just like many development shops out there. Sometimes we're heavy on the .NET, other times we're all up in some Java, Ruby on Rails, spooling up a Node.js application, or something else. So, keeping with our love of open source and our polyglot nature, we've created the Thor Project with three distinct apps.

Before jumping into the applications though, a little context for what and where Thor is in the grand scheme of things. We need to roll back to the Cloud Foundry Project to get into that. The Cloud Foundry Project is an open source project built around software for PaaS (Platform as a Service), which can be used to build your own PaaS internally or externally, in a cloud provider or directly on hardware. It's your choice how, when, and where you want to use it. For more context on PaaS check out my previous entry "The Confusions of IaaS, PaaS and SaaS".

Thor Project

Cocoa for OS-X

Thor Odinson, God of Thunder

You know who Thor is, right? He's the mythic Norse god, also known as the God of Thunder. Since we're all about bringing the hamma, we welcomed Thor into our team's stable of applications. So, starting immediately, we've released Thor into the realms for contributions and fighting the good open source software battle! If you'd like to join the effort, check out the GitHub project and feel free to join us!

Technically, what is the Thor application? It's a Cocoa application built for OS X that is used for managing, deploying, and publishing applications to Cloud Foundry enabled and/or Iron Foundry extended PaaS environments.

.NET for Windows 7

The .NET Metro version of the Thor application is also released via GitHub with a provided installer. We've taken almost the same path, except of course for the very different UX and UI cues with Windows 7 and the Metro UX design guidelines.

WinRT for Windows 8

I wasn't really sure what to call this version. Is it Metro or WinRT or Windows 8 or something else? Anyway, there is a project, albeit empty at this point, but it is where the Windows 8 version of Thor will go! For now get the Windows 7 version and install it on Windows 8; it won't have touch interface support and such, but it should work just like a regular application on Windows 8.

The Code

To get started with these, you'd generally just clone the repo, do a build, and start checking out the code. There is one catch: for the OS X version you'll want to pull down the submodules with the following commands.

git clone git@github.com:YourForkHere/Thor.git
cd Thor
git submodule update --init --recursive

Once you do that, just make sure to select the right project in Xcode as the starting build project.

…then when the application is launched…

Thor Running in OS X

I’ll have more in the coming days and weeks about Thor & Iron Foundry. For now, check out the blog entry on the Iron Foundry Blog and subscribe there for more information.

Adam & Krishan Got Me Motivated Today… to toss the trash conversations

I was speaking with Krishan Subramanian (@krishnan) and Adam Seligman (@adamse) today. I love talking to these guys; they're both smart, upbeat, and they see the positive things we're all working toward in the technology space, specifically around PaaS, cloud computing, and the cultural implications of stronger technology communities and the involvement of individuals. We can all see how the industry is moving forward so that corporations aren't the only enablers, juxtaposed against developers or consumers, but instead serve consumers based on the progress individuals make themselves. There's so much to do and so much progress to be made; the vendors can simply follow the community and step up to provide points of leadership.

Absolutely great talking with these guys…

On that topic, what is it that we discussed that has me so motivated? Well, there are a few things I'm done with, and I'm going to make every effort to just throw away the trash. Here are the things we discussed, and I challenge everybody out there: drop the trash talk and let's move forward, because there is a LOT of awesome stuff to accomplish. Here's what I'm dropping… cold. No reason to discuss these anymore.

  • Toss the language and framework religious wars. It is far simpler than it is sometimes perceived. We have a polyglot industry now where we can easily use the right tool for the job: the right framework, or the language that handles our particular domain best. There is literally no reason to argue about this anymore. Of course we can talk semantics, debate best use cases, and of course we'll talk accomplishments and what various things do well. That's exactly what the focus should be on, not the harping on my X is better than your Y nonsense.
  • The culture war is basically over. Sure, there are holdouts that haven't gotten a clue yet, but it's an open source world at this point. Even the dreaded and horrible Oracle has generally conceded this and is frantically waving its marketing arms around trying to get attention. But at the core, MySQL, Java, and the other things they've purchased, they're keeping alive. They're active participants in the community now, albeit in a somewhat strange way. Considering that even Oracle, Microsoft, Apple, and so many others contribute back to the open source community in massive ways, that war can be considered won. Victory: the community and every individual in that community!
  • Lock-in is basically dead. The technological reasons to lock in are gone, seriously. There are some issues around data gravity to be overcome, but that's where a solid architecture (see below) comes in. Anything you need can be contributed to and derived from the development community. Get involved and figure out how technology can be a major piece of your business in a positive way. If you design something poorly, lock-in becomes a huge issue. Use the right tools and don't get into binding contracts, because in the polyglot world we're in now there's no reason to be permanently locked in to anything. Be flexible, be where you need to be, and make those decisions based on the community, your support systems, and your business partners. Don't tie yourself to vendors unless there are mutual reasons to do exactly that. Lock-in is a dead conversation; just don't. Time to move on.

So what are the key conversations today?

  • Ecosystem Architecture – Whether you're deploying to AWS, Heroku, Tier 3, AppFog, or Windows Azure, it all boils down to something very specific that will make or break you: your architecture. This is where the real value add in the cloud and its respective systems is, but there are many discussions and many elements of the technology to understand. This is a fundamentally key conversation topic in the industry today. Pick this one up and drop the other trash.
  • Movement & Data Gravity – How do you access your data, how do you store it, and where and how do you derive insight from that data? This is one of the topics that came up in our discussion, and it is huge. The entire computer industry basically exists for the purpose of insight. What should we eat today, how do I shift my investments, how is my development team doing, what's the status of my house being built, where is my family today and can I contact them? All of these things are insights we derive from computer systems; they are the fundamental core reason that computers exist. As an industry we're finally getting to a point where we can get some pretty solid, intelligent, and useful information from our systems. The conversation continues, however; there is so much more we can still achieve. So again, drop the wasteful convo and jump on board the conversations about data, information, and insights!
  • Community Involvement – I've left the key topic for last. This is huge: companies have to be involved today. Companies aren't dictating progress; instead the community is leading, as it should. The community is providing a path for companies to follow or lead, but the community, the individuals, are the ones seen and known to be innovating. This is so simple it's wild that it is only now becoming a known reality: companies don't innovate, people do. Individuals are the drivers of companies and the drivers of governments; they're the ones driving innovation and progress. The focus should now, and should have always been, on individuals and what they're working to accomplish. So get involved, get the companies involved as a whole, and keep the idea of individuals and the progress they can make core to the way you think of communities. The idea of the "company" innovating is silly; let's talk and build community with the people that are working around and innovating with these technologies.

Of course there are more, I’d love to hear your take on what the conversations of today should be about. What do we need to resolve? How do we improve our lives, our work and the efforts we’re working toward on a day to day basis?

IaaS vs. PaaS or Infrastructure vs. Platform and I Want Beer NOW!

A friend and now coworker of mine, Richard Seroter (@rseroter & Blog), decided to do a comparo with me. I took the infrastructure based deployment, a la IaaS, and he took the platform based deployment, a la PaaS. We took a fairly standard ASP.NET MVC app with Entity Framework, a SQL Server database, and a UX & UI design, and got it running locally. From there we deployed the same application the two respective ways to a live environment. He took the Tier 3 PaaS (Iron Foundry + Cloud Foundry for the win) and I took the tried and true method of deploying via Windows 2008 Server instances on the Tier 3 infrastructure.

Here are the steps I went through and for his steps check out this blog article on the PaaS deployment.

Part #1 – Get Some Servers Set Up

First things first, I need two instances: one for the ASP.NET MVC application and one for the SQL Server database. If you're following along, you can use whatever instances or servers you want: AWS, Rackspace, or Windows Azure. Depending on that, there may be a few steps here or there you need to alter, add, or subtract from the process. The web app server doesn't need a ton of resources, so I built it and scaled back the RAM and cores to a single core.

ASP.NET MVC Web Server

In the next step here I’ve selected additional software to be installed on the instance. I’ll need .NET 4.0 so I’ve added this as shown.

Selecting .NET 4.0 for Addition to the Instance

After setting up the web server I also set up a database server. For the database server I made sure to allocate some decent resources: 2 cores and 8 GB RAM. I also added the SQL Server installation based on Tier 3's software packages, so it would install automatically when the image is created.

All My Instances Building & Running

When I set up the SQL Server instance, I used a blueprint feature that allows SQL Server to be installed directly on the image. This of course saved me a lot of time, though it does add to the deployment time of the instance in the cloud.

Part #2 – Setting up Windows Server 2008

The first thing we'll need to do is log into these machines and configure them, standard infrastructure stuff. Open up the Server Manager (which launches automatically on instances) and verify that IIS is installed on the web server.

Database Server

Server Manager

Next log into the database server, verify that SQL Server is up and running, and create the initial database.

Thusly…

Using SQL Server Management Studio checking that the SQL Server Exists

Once I had both of the servers up and running, I got the application ready to deploy. First, a little schema generation to use for deploying the database.

Don't Use the "Script Database as…" option, use the "Tasks" option…

Once the script is generated, transfer it to the database server and execute it against the database.
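Any method of running the script works; for example, from a command prompt on the database server with sqlcmd, assuming SQL authentication and a script file named schema.sql (both assumptions on my part).

sqlcmd -S localhost -U sa -P <password> -i schema.sql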

Execute the SQL Schema Create Script

Even if all green lights are seen on the SQL execution, it's always a good idea to go in and make sure the tables are all there.

Web Server


For the web server, as long as IIS is already installed, the setup is fairly easy. First, snag the compiled bits that need to be deployed. We'll do a direct drop onto the server and get it running.

To get the compiled bits, right click on the Visual Studio project and select Publish. Add a deployment scenario, which I set up to just spit the bits out to a directory. There are of course multiple options at this point: FTP, WebDAV, or whatever your choice is. I'm not a particular fan of any of them; they're all fairly easy.

Deployment Publication Options

Interruption!!!

At this point I actually got hit with ".NET 4.0 isn't installed…", which it should have been. I opened up Windows Update and realized that it had not successfully executed, nor had the .NET 4.0 install. This happens with all sorts of instances, regardless of provider, so make sure the bits you need are actually installed. Also, with Windows, it's a really good idea to get Windows Update turned on.

Back to Deployment

Now that we have the built bits, just copy them onto the web application server into the inetpub wwwroot directory. Once you have that copied over you should be able to navigate to the IP of the machine. At this time you may also want to set up a CNAME or A record pointing to the IP, so you can use a friendlier URI.
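One low-tech way to do that copy, assuming the admin share on the web server is reachable from wherever you published the bits (the server name here is hypothetical):

robocopy .\Publish \\webserver\c$\inetpub\wwwroot /E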

Retrospective

Now think about what has just gone on for a moment. We had to literally build out machines, add software, and more. There were a lot of steps. This takes anywhere from 30 minutes to a few hours of actual work. In a larger business or an enterprise environment it could stretch out even further. Because of the extra complexity it could also end up broken, requiring extra troubleshooting and coding. There could even be a host of odd one-off configuration issues with the hosting software itself.

Imagine you wanted to host an ASP.NET, PHP, Ruby on Rails, and a Node.js app on the server. That would be almost impossible. Consider how much extra configuration knowledge an ops person would need to troubleshoot each one of those frameworks. Just sit back and contemplate the complexities involved for a moment. All that complexity goes away with something like Cloud Foundry or OpenShift. With someone managing that system for you, such as us here at Tier 3 with our Web Fabric PaaS, AppFog, Cloud Foundry, or one of the other providers, even more of the complexities just disappear.
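For contrast, the PaaS flow boils down to a couple of commands with the vmc CLI; the target URL and app name here are illustrative.

vmc target api.cloudfoundry.com
vmc push mymvcapp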

Time for Summary & Beer

With all the steps and individual tasks needed to get something running in an IaaS environment, go check out how slick getting something up and running in a PaaS style environment is. The juxtaposition between what Richard had to go through and what I had to go through is pretty significant. Simply put, the vast majority of application development can be done against a PaaS environment, and likely should be. Digging deeper into the infrastructure elements is rarely needed except in rare scaling circumstances, such as the volumes that Facebook, LinkedIn, or Netflix deal with. Even then, as these companies have stated, they have a PaaS of their own that they often build software against. So why not have this ability where you build software?

One of my key metrics, and I'll be elaborating on this metric more in the future, is when I get to head out of the office for the day, relax, have a beer, and think about what I'll get to create next. I call this my "Beer Enabler Measure". PaaS technologies make it much easier for me to get to the relaxing part of my day a lot faster than IaaS technologies, and both of these make sure that I'm not pulling an all-nighter without a beer like traditional hosting environments often do.

In the end, sure, infrastructure can be important and can help in transitioning legacy applications into an easier to manage environment. Today though, if you're doing web application development of any type, it should be deployed against a PaaS environment, either private or public.

Going Hard Core: VMware's Cloud Foundry Forks Uhuru & Iron Foundry Review

Back in December Uhuru Software and Tier 3 released two different forks of Cloud Foundry that enabled .NET Support. I wasn’t sure which I wanted to use, since I had some serious Cloud Foundry work I was about to dive into, so I’ve picked them apart to determine how each works. This is what I’ve found so far.

Uhuru

Iron Foundry

That covers the basic links to the downloads, community, and other points of presence; now it's time to dig into some of the differences I've found. First though, I set up a good environment to test each of the forks, from within the same Cloud Foundry environment! This is how I've set it up.

Setting up the Virtual Machines w/ VMware Fusion

I suspect you could do this with other virtualization software, but VMware is probably the easiest to use and set up on OS X & Windows. I haven't tried this on Linux, so there's another space where I'd have to give it a go. Using ESX I suspect this would also be extremely easy to set up. It's up to you, but I'm doing all of this with VMware Fusion. The environment I'm using for this comparison consists of the following virtual images:

Micro Cloud Foundry Instances

These instances were easy; I just downloaded them from the Cloud Foundry site on the Micro Cloud Foundry download page. The simple configuration is outlined in "Micro Cloud Foundry Installation & Setup".

Iron Foundry Instances

For this, I downloaded the VM available on the Iron Foundry site here.

Uhuru Instances

I set up the Uhuru instances using the instructions available from Uhuru Software here.

Setting up Some Controllers

So the first thing I did was dive into setting up a controller, or actually two, because I wanted to have an Iron Foundry environment and a Uhuru Software environment. After that I'd try to mix and match them and figure out differences or conflicts. The instructions listed under "Uhuru Instances" include information on setting up a controller for the Uhuru Software environment, which is what I followed. It is also a good idea to get set up with PuTTY or be ready with SSH for use with Cloud Foundry, Uhuru Software, and Iron Foundry.