Tag Archives: azure

The Conversations and Samples of Multi-Cloud

Over the last few weeks I’ve been putting together multi-cloud conversations and material related to multi-cloud implementation and the operational situations that exist today. I took a quick look at some of my repos on GitHub and realized I’d put together a multi-cloud Node.js sample app some time ago and should update it. I’ll get to that, hopefully, but I also stumbled onto some tweets and other material and wanted to collect a few of them together.

Some Demo Code for Multi-Cloud
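The sample app lives in the repo, but the core idea behind it can be sketched quickly. Here’s a hypothetical, minimal Node.js sketch — the provider names, settings, and the `CLOUD_PROVIDER` env var are my illustration, not the actual sample app’s code — showing the pattern: pick a cloud target at runtime from configuration so the rest of the app stays identical across clouds.

```javascript
// Minimal multi-cloud selection sketch: the app reads its target cloud from
// configuration and looks up provider-specific settings, keeping everything
// else identical across clouds. Provider names and values are illustrative.
var providers = {
  aws:   { region: "us-west-2", endpoint: "https://s3.us-west-2.amazonaws.com" },
  azure: { region: "westus2",   endpoint: "https://example.blob.core.windows.net" },
  gcp:   { region: "us-west1",  endpoint: "https://storage.googleapis.com" }
};

function selectProvider(name) {
  var config = providers[name];
  if (!config) {
    throw new Error("Unknown cloud provider: " + name);
  }
  return { name: name, region: config.region, endpoint: config.endpoint };
}

// The CLOUD_PROVIDER env var is the only cloud-specific knob the app needs.
var target = selectProvider(process.env.CLOUD_PROVIDER || "gcp");
console.log("Deploy target: " + target.name + " (" + target.region + ")");
```

The point isn’t the lookup table — it’s that the cloud-specific surface area shrinks to one seam you control.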

Conversations on Multi-cloud

  • Mitchell Hashimoto of HashiCorp posted a well-written comment/article on Reddit about what he’s been seeing (for some time).
  • A well-worded tweet… lots of talk about Google’s underlying push for GKE on-prem. Which means more clouds, more zones, and more multi-cloud options.

  • Distributed Data Show Conversations




Leave a comment, tweet at me (@adron), and let me know your thoughts or what you’re working on re: multi-cloud. I’m curious to hear more war stories.

Just Another Sunday

I sit here at the moment watching two Kubernetes clusters build. One is building on Azure and one on Google Cloud Platform (GCP). I’ve got presentations coming up this Tuesday and Thursday; in both I’ll be digging into Kubernetes, Terraform, and a number of other technologies. Those are the two hot technologies for the talks, though the continuous integration, languages, and tooling — what Terraform builds via configuration and Kubernetes runs in containers — are actually the meat of this whole sandwich. Which is where I ponder what all of this goo is that wires things together in this virtual programmatic realm on top of which I’ll build something.

It seems messy from inception. But then of course, all programming and the related ecosystem elements in which programming takes place are a messy bag of guts.

Here I sit then, waiting out the rather unknown, pseudo-random amount of time for the Kubernetes clusters to finish building. A few moments pass and sure enough, as always, very inconsistent build times. The Azure Kubernetes cluster took 7 minutes to build and the GCP Kubernetes cluster took just 4 minutes. Last night the Azure cluster was taking 20 minutes or more while the GCP cluster was consuming about 3–4 minutes to build. I’m not sure, as I’ve not dug into the matter deeply enough, but something seems awry in the way Azure builds out its instances, networking, and related cluster mechanisms. I’m not surprised though; Azure has always felt slow and cumbersome during infrastructure build-out. GCP, on the other hand, clearly comes from Google’s thoroughbred engineering focus. It generally builds within a much smaller range of time, consuming much less time overall.

As I build all of this, to work out what will and won’t be in the demo, I find myself next fiddling with presentation material. I really don’t even like to have presentation material; I’d much rather have an interesting enough talk and the respective code, samples, and demo to just show the whole thing. Presentation slide decks always feel like, and almost always are, just a crutch for the inability to form ideas, show concepts, or otherwise actually engage the audience around what is being presented. It’s a frustrating dichotomy to say the least. Eventually, with these latest efforts, I actually intend to get down to two slides: one being the intro slide with a fancy title for whatever the meat of the talk will be about, and the other for ending the talk with the requisite contact information and such.

All of this work however is going to be interrupted by the dramatically more important bike ride I’ll take later to clear my thoughts and get the blood flowing through my veins. As things go, I actually dislike sitting still for more than a few hours. I like to chunk my time into brackets, get the work done, and then go for a ride, walk, or something to get my mind cleared back up. I hear it’s healthier for us humans too, but I’ve not set the research to memory to make that argument.

Until later… fini.

Following Good Practice, The Negative Bits About Windows Azure First, But Gems Included! :D

Ok, I’ve used Windows Azure steadily over the last year and a half. I’ve fought with the SDK so much that I stopped using it. I decided I’d put together this recap of what has driven me crazy, and then something about the parts that I really like — the awesome bits, the parts that have the greatest potential with Windows Azure. So hold on to your hats, this may be hard hitting.  😉

First the bad parts.

The Windows Azure SDK

Ok, the SDK has driven me nuts. It has had flat-out errors, sealed (bad) code, and is TIGHTLY COUPLED to the development fabric. I’m a professional; I can mock that, I don’t need kindergarten-level help running this! If I have a large environment with thousands of prospective nodes (or even just a few dozen instances) the development fabric does nothing to help. I’d rate the SDK’s closed (re: sealed, no interfaces) nature and the development fabric as the number-one reasons that Windows Azure is the hardest platform to develop for at large scale in enterprise environments.

Pricing Competitiveness? Ouch. 😦

Windows Azure is by far the most expensive cloud platform or infrastructure on the market today. AWS comes in, when priced for specific scenarios, anywhere from 2/3rds the price to 1/6th the price. Rackspace in some circumstances comes in at the crazy low price of 1/8th as much as Windows Azure for similar capabilities. I realize there are certain things that Windows Azure may provide that others may not, and that in some rare circumstances Azure may come in lower – but that is rare. If Windows Azure wants to stay primarily, and only, an enterprise offering then this is fine. Nailing enterprises on expensive things and offering them these SLA myths is exactly what enterprises want – peace of mind from an SLA; they don’t care about pricing.

But if Windows Azure wants to play in new business, startups especially, mid-size business, or even small enterprises, then the pricing needs a fix. We’re looking at disparities of $500 vs. $3,500 in some situations. This isn’t exactly feasible as a way to get into cloud computing. Microsoft, unfortunately for them, has to drop this dream of maintaining revenues and profits at the same rate as their OS & Office sales. Fact is, the market has already commoditized pricing in this sector.

Speed, Boot Time, Restart, UI Admin Responsiveness

The Silverlight interface is beautiful, I’ll give it that. But in most browsers aside from IE it gets flaky. Oh wait, no, I’m wrong. It gets flaky in all the browsers. Doh! This may be fixed now, but in my experience and that of others I’ve paired with, we’ve watched things happen in Chrome, Opera, Safari, Firefox, and IE. This includes an instance spinning as if starting up when it has already started, or spinning and spinning until a refresh is done and the instance has completely disappeared! I’ve also refreshed the Silverlight UI and had it just stop responding to communication (and this wasn’t even on my machine).

The boot time for an instance is absolutely unacceptable for the Internet, for web development, or otherwise. Boot time should be similar to a solid Linux instance. I don’t care what needs to be done, but the instances need to be cleaned up, the architecture changed, or the OS swapped out if need be. I don’t care what OS the cloud is running on, but my instance should be live for me within 1–2 minutes or LESS. Rackspace, Joyent, AWS, and about every single other cloud provider out there boots an instance in about 45 seconds, sometimes a minute, but often less. I know there are workarounds – the whole leave-it-running-while-you-deploy method and other such notions – but those don’t always work out. Sometimes you just need the instance up and running and you need it NOW!

Speed needs measurement to prove out in tests. Speed needs to be observed. I need analytics on the speed of the instance I’m choosing. I don’t know if it is pegged; I don’t know if it is idle and not responding. I have no easy way to know in Windows Azure. The speed, in general, seems to be really good on Windows Azure. Oftentimes it appears to be better than others even, but rarely can I really prove it. It’s just a gut feeling that it is moving along well.

So, those are the negatives; speed, boot time, admin UI responsiveness, pricing, and the SDK. Now it is time for the wicked awesome cool bits!

Now, The Cool Parts

Lock In With Mort

This topic you’d have to ask me about in person; many people would be offended by it and I mean no offense. The reality is many companies will continue to hire what they consider to be plug-and-play, replaceable developers – AKA “Mort”. This is really bad for developers, but great for Windows Azure. In addition, Windows Azure provides an option to lock in. It is by no means the only option – because by nature a cloud platform and its services will only lock you in if YOU allow yourself to be. But providing both ways, lock-in or not, is a major boost for Windows Azure as well. Hopefully I’ll have a presentation about this in the near future – or at least find a way to write it up so that it doesn’t come off as me being a mean person, because I honestly don’t intend that.

Deploy Anything, To The Platform

Having a platform to work with instead of starting purely at infrastructure is HUGE for most companies. Not all, but most companies would benefit in a massive way from writing to the Azure Platform instead of single instances like EC2. The reason boils down to this: Windows Azure abstracts out most of the networking, ops, and other management that a company has to do. Most companies have either zero, or very weak, ops and admin capabilities. In many companies this fact will actually bring the (I hate saying this) TCO, or Total Cost of Ownership, down for building on the Windows Azure Platform vs. the others. Because really, the real cost in all of this is the human cost, not the services, as they’re commoditized. Again though, this is for small, non-web-related businesses – web companies need to have ops capabilities; their people absolutely must understand and know how the underpinnings work. If routing, multi-tenancy, networking, and other capabilities are to be used to their fullest extent, infrastructure needs to be abstracted, but the infrastructure also needs to be accessible. Windows Azure does a good deal of infrastructure, and it looks like there will be more available in the future. That will be when the platform actually becomes much more valuable for the web side of the world that demands control, network access, SEO, routing, multi-tenancy, and other options like this.

With the newer generation of developers and others coming out of colleges there is a great idea here and a very bad one. Many new-generation developers, if they want web, are jumping right into Ruby on Rails. Microsoft isn’t even a blip on their radar; however, there still manage to be thousands that give Microsoft .NET a look, and for them Windows Azure provides a lot of options, including Ruby on Rails, PHP, and more. Soon there will even be some honest-to-goodness Node.js support. I even suspect that the Node.js support will probably be some of the fastest-performing Node.js implementations around. At least, the potential is there for sure. This latter group of individuals coming into the industry these days is who will drive the Windows Azure Platform to achieve what it can.

.NET, PHP, and Ruby on Rails Ecosystem (Note: I don’t support the theft of this word, but I’ll jump on the “ecosystem” bandwagon, reluctantly)

Besides the simple idea that you can deploy any of these to an “instance” in other environments, Windows Azure (almost) makes every one of these a first-class platform citizen. Drop the SDK is my advice, my STRONG advice, and go the RESTful services route. Once you do that you aren’t locked in, you can abstract for Windows Azure or any cloud, and you can utilize any of these framework stacks. This, technically, is HUGE: having these available at a platform level. AWS doesn’t offer that, Rackspace doesn’t even dream of it yet, OpenStack doesn’t enable it, and the list goes on. Windows Azure – that’s your option in this category.
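To make the “abstract it yourself” point concrete, here’s a hypothetical sketch (in JavaScript for brevity; the `InMemoryBlobStore` and `storeFor` names are my illustration, not any vendor’s API): the application codes against a tiny storage interface of its own, and each cloud’s REST API gets wrapped behind a thin adapter. Swap the adapter, keep the application.

```javascript
// The app depends only on put/get, never on a vendor SDK.
// An in-memory fake implements the interface for local dev and tests.
function InMemoryBlobStore() {
  this.blobs = {};
}
InMemoryBlobStore.prototype.put = function (name, contents) {
  this.blobs[name] = contents;
  return "mem://" + name;
};
InMemoryBlobStore.prototype.get = function (name) {
  return this.blobs[name];
};

// A real AzureBlobStore or S3BlobStore would implement the same put/get
// by issuing signed requests against that provider's REST API; only the
// adapter knows provider details, so swapping clouds swaps one object.
function storeFor(environment) {
  // "test" gets the in-memory fake; other environments would map to
  // real adapters in an actual application.
  return new InMemoryBlobStore();
}

var store = storeFor("test");
var uri = store.put("report.txt", "quarterly numbers");
console.log(uri);
console.log(store.get("report.txt"));
```

The design choice here is the whole argument: the REST endpoints are the lock-in boundary, and a two-method interface is enough to keep them on the far side of it.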

The Other MASSIVE Coolness is not Core Windows Azure Features, but They Provide a HUGE Plus for Windows Azure

The add-ons to SQL Server are HUGE for enterprises with BI reporting, SQL Server Reporting, etc. These features are a no-brainer for an enterprise. Yes, they provide immediate lock-in. Yes, it doesn’t really matter for an enterprise. But here’s the saving grace for this lock-in: with the Service Bus and Access Control you can use single sign-on with these and OTHER CLOUD SERVICES in a very secure and safe manner in your development. These two features alone, whether you use other Windows Azure features or not, are worth using – even with AWS, Rackspace, or one of the others. The Service Bus and Access Control add a lot of capability to any type of cloud architecture, come in useful for enterprise environments, and are practically a requirement for mixed on-premise and in-cloud environments (which it seems almost all environments are).

Other major pluses that I like with Windows Azure include:

  • Azure Marketplace – Over time, and if marketed well, this could become a huge asset to companies big and small.
  • SQL Azure – SQL Azure is actually a pretty solid database offering for enterprises. Since a lot of enterprises have already locked themselves into SQL Server, this is a great offering for those companies. However, I’m mixed on its usage vs. lower-priced MySQL, or others for that matter. It definitely adds to the overall Windows Azure capabilities though, and as time moves forward and other features (such as SSIS, etc.) are added to Azure this will become an even greater differentiator.
  • Caching – Well, caching is just awesome, isn’t it? I dig me some caching. This offering is great. It isn’t memcached or some of the others, but it is still a great offering, and again, one of those things that adds to the overall Windows Azure capabilities list. I look forward to Microsoft adding more and more capabilities to this feature.  🙂

Windows Azure has grown and matured a lot in the time since its release from beta. It still has some major negatives compared to more mature offerings. However, there is light at the end of the tunnel for those choosing the Windows Azure route, or those getting put onto it. Some of those things may even help it leap ahead of some of the competition at some point. Microsoft is hardcore in this game and they’re not letting up. If anyone has failed to notice, they still have one of the largest “war chests” on Earth to play in new games like this – even when they were initially ill prepared. I do see myself using Windows Azure in the future, maybe not extensively, but it’ll be there. And whether they win a large share of the market or not, Microsoft putting this much money into the industry will push all ships forward in some way or another!

Big News on Day #3 of OS Bridge

Microsoft announced today that they’ll be supporting an effort to get Node.js working on Windows. Mary Jo Foley picked it up quickly, and so did Node creator Ryan Dahl. This, given the explosion of support for Node.js, is excellent news. It further enables JavaScript for the whole stack, on any operating system. Getting a solid, stable, and supported version on Windows will enable some serious performance on that platform. Until that support is released, Node.js on Windows is primarily limited to running via Cygwin, software that runs on Windows and provides a Unix/Linux-like environment.

I’ll have more information regarding Node.js, Node Package Manager, and the whole suite of packages to get started with Node Development over the next couple of days. So stay tuned if you’re interested in getting started!

Cloud Formation


Here are the presentation materials that I’ve put together for tonight.

Check my last two posts regarding the location & such:

Bellingham Cloud Talk, Coming Right Up

Here’s the basic outline of what I intend to speak on at the upcoming presentation I have for the Bellingham, Washington .NET Users Group.  If you happen to be in the area you should swing by and give it a listen (or heckle, whatever you feel like doing).

On April 5th I have a talk lined up with the Bellingham .NET Users Group. So far here’s a quick overview of the talk:

What Cloud Computing REALLY is to us techies

  • Geographically dispersed data centers.
  • Node based – AKA grid computing configurations that are…
  • Highly Virtualized – thus distributed.
  • Primarily compute and storage functionality.
  • Auto-scalable based on demand.

What kind of offerings exist out in the wild?

  • Amazon Web Services
  • Rackspace
  • Orcs Web
  • GoGrid
  • Joyent
  • Heroku
  • EngineYard

…many others, and then the arrival in the last year-ish of…

  • Windows Azure
  • AppHarbor

Developing for the cloud, what are the fundamentals in the .NET world?

Well, let’s talk about who has been doing the work so far, pushing ahead this technology.

  • Linux is the OS of choice… free, *nix, most widely used on the Internet by a large margin, and extremely capable…
  • Java
  • Ruby on Rails
  • Javascript & jQuery, budding into Node.js via Google’s V8 Engine
  • The Heroku + EngineYard + Git + AWESOMESAUCE capabilities of pushing… LIVE to vastly scalable and distributable cloud provisions!

So where does that leave us .NETters?

The AWS .NET SDK was released a few years ago. The Windows Azure SDK was released about a year ago.

These two have, however, been lacking compared to Heroku and EngineYard for those that want something FAST, something transformative, easy to use, without extra APIs or odd tightly coupled SDKs.


In summary, the .NET platform primarily has: AWS for the top IaaS with the most widely available zones & capabilities at the absolute lowest prices; Windows Azure for the general build-to-PaaS solution; and, for the people lucky enough to be going the Git + MVC + real Agile route, AppHarbor as the preeminent solution.

Demo Time…

Windows Azure Demo

AWS Demo

AppHarbor Demo

Put Stuff in Your Windows Azure Junk Trunk – Windows Azure Worker Role and Storage Queue

Click on Part 1 and Part 2 of this series to review the previous examples and code. First and foremost, have the existing code base created in the other two examples open and ready in Visual Studio 2010. Next, I’ll just start rolling ASAP.

In the JunkTrunk.Storage project add the following class file and code. This will get us going for anything else we need to do for the application from the queue perspective.

public class Queue : JunkTrunkBase
{
    public static void Add(CloudQueueMessage msg)
    {
        // Pass straight through to the underlying CloudQueue from the base class.
        Queue.AddMessage(msg);
    }

    public static CloudQueueMessage GetNextMessage()
    {
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }

    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }

    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}

Once that is done, open up the FileBlobManager.cs file in the Models directory of the JunkTrunk ASP.NET MVC Web Application. In the PutFile() method add this line of code toward the very end of the method. The method, with the added line of code, should look like this.

public void PutFile(BlobModel blobModel)
{
    var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
    var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

    // Persist the metadata record (the table helper from Part 2 of the series).
    Table.Add(
        new BlobMeta
        {
            Date = DateTime.Now,
            ResourceUri = blobUri,
            RowKey = Guid.NewGuid().ToString()
        });

    Queue.Add(new CloudQueueMessage(blobUri + "$" + blobFileName));
}

Now that we have something adding to the queue, we want to process this queue message. Open up the JunkTrunk.WorkerRole and make sure you have the following references in the project.

Windows Azure References


Next create a new class file called PhotoProcessing.cs. First add a method to the class titled ThumbnailCallback with the following code.

public static bool ThumbnailCallback()
{
    return false;
}

Next add another method with a blobUri string and filename string as parameters. Then add the following code block to it.

private static void AddThumbnail(string blobUri, string fileName)
{
    try
    {
        var stream = Repository.Blob.GetBlob(blobUri);
        if (blobUri.EndsWith(".jpg"))
        {
            var image = Image.FromStream(stream);
            var myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            var thumbnailImage = image.GetThumbnailImage(42, 32, myCallback, IntPtr.Zero);
            // Save the thumbnail to a fresh stream; writing back into the
            // source stream corrupts the data being uploaded.
            var thumbnailStream = new MemoryStream();
            thumbnailImage.Save(thumbnailStream, ImageFormat.Jpeg);
            thumbnailStream.Position = 0;
            Repository.Blob.PutBlob(thumbnailStream, "thumbnail-" + fileName);
            stream.Position = 0;
            Repository.Blob.PutBlob(stream, fileName);
        }
    }
    catch (Exception ex)
    {
        Trace.WriteLine("Error", ex.ToString());
    }
}

Last method to add to the class is the Run() method.

public static void Run()
{
    var queueMessage = Repository.Queue.GetNextMessage();
    while (queueMessage != null)
    {
        var message = queueMessage.AsString.Split('$');
        if (message.Length == 2)
        {
            AddThumbnail(message[0], message[1]);
        }
        // Delete the handled message so it doesn't get processed again.
        Repository.Queue.DeleteMessage(queueMessage);
        queueMessage = Repository.Queue.GetNextMessage();
    }
}

Now open up the WorkerRole.cs file and add the following code to the existing methods, plus the additional event method below.

public override void Run()
{
    Trace.WriteLine("Junk Trunk Worker entry point called", "Information");

    while (true)
    {
        PhotoProcessing.Run();
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Junk Trunk Worker Role is active and running.");
    }
}

public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 12;
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed += (sender, arg) =>
        {
            if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(change => change.ConfigurationSettingName == configName))
            {
                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                {
                    // The setting couldn't be reapplied at runtime; recycle the role.
                    RoleEnvironment.RequestRecycle();
                }
            }
        };
    });

    return base.OnStart();
}

private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (!e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange)) return;
    Trace.WriteLine("Working", "Environment Change: " + e.Changes.ToList());
    e.Cancel = true;
}

At this point everything needed to kick off photo processing, using the Windows Azure Storage Queue as the tracking mechanism, is ready. I’ll be following up these blog entries with some additional entries on refactoring and streamlining what we have going on. I might even go all out and add some more functionality or some such craziness! So I hope that was helpful – keep reading. I’ll have more bits of rambling and other trouble coming down the blob pipeline soon! Cheers!