Put Stuff in Your Windows Azure Junk Trunk – Windows Azure Worker Role and Storage Queue

Click on Part 1 and Part 2 of this series to review the previous examples and code.  First and foremost, have the existing code base created in those two entries open and ready in Visual Studio 2010.  With that, I'll dive right in.

In the JunkTrunk.Storage Project add the following class file and code to the project. This covers everything we need to do for the application from the queue perspective.

[sourcecode language=”csharp”]
public class Queue : JunkTrunkBase
{
    // Note: the Queue referenced inside these methods is the inherited
    // CloudQueue property from JunkTrunkBase, not this wrapper class.
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }

    public static CloudQueueMessage GetNextMessage()
    {
        // Peek first so an empty queue returns null instead of failing on a get.
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }

    public static List<CloudQueueMessage> GetAllMessages()
    {
        // Note: GetMessages can fetch at most 32 messages per call.
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }

    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}
[/sourcecode]

Once that is done, open up the FileBlobManager.cs file in the Models directory of the JunkTrunk ASP.NET MVC Web Application. In the PutFile() method, add the Queue.Add line of code at the very end. The method, with the added line of code, should look like this.

[sourcecode language=”csharp”]
public void PutFile(BlobModel blobModel)
{
    var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
    var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

    Table.Add(
        new BlobMeta
        {
            Date = DateTime.Now,
            ResourceUri = blobUri,
            RowKey = Guid.NewGuid().ToString()
        });

    // The new line: queue a message so the worker role can process the blob.
    Queue.Add(new CloudQueueMessage(blobUri + "$" + blobFileName));
}
[/sourcecode]
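
The message body is simply the blob URI and the blob file name joined with a '$' delimiter; the worker role splits on that same character in the processing code below. A quick sketch of the round trip, using hypothetical development storage values:

[sourcecode language=”csharp”]
using Microsoft.WindowsAzure.StorageClient;

class QueueMessageFormatSketch
{
    static void Main()
    {
        // Hypothetical values; the real URI comes back from Blob.PutBlob().
        var msg = new CloudQueueMessage(
            "http://127.0.0.1:10000/devstoreaccount1/photos/20110104-photo.jpg"
            + "$" + "20110104-photo.jpg");

        var parts = msg.AsString.Split('$');
        // parts[0] = blob URI, parts[1] = blob file name
    }
}
[/sourcecode]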

Now that we have something adding to the queue, we want to process this queue message. Open up the JunkTrunk.WorkerRole and make sure you have the following references in the project.

Windows Azure References
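
If the screenshot is hard to make out, the reference list amounts to roughly the following (a sketch; verify against your own project):

[sourcecode language=”csharp”]
// Approximate reference list for JunkTrunk.WorkerRole:
//   Microsoft.WindowsAzure.Diagnostics     - DiagnosticMonitor
//   Microsoft.WindowsAzure.ServiceRuntime  - RoleEntryPoint, RoleEnvironment
//   Microsoft.WindowsAzure.StorageClient   - CloudQueueMessage and the storage clients
//   System.Drawing                         - Image and ImageFormat for thumbnailing
//   JunkTrunk.Storage                      - the repository classes built in Part 1
[/sourcecode]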

Next create a new class file called PhotoProcessing.cs. First add a method to the class titled ThumbnailCallback with the following code.

[sourcecode language=”csharp”]
// Callback for GetThumbnailImage; returning false means never abort.
public static bool ThumbnailCallback()
{
    return false;
}
[/sourcecode]

Next add another method, AddThumbnail, that takes a blobUri string and a fileName string as parameters. Then add the following code block to it.

[sourcecode language=”csharp”]
private static void AddThumbnail(string blobUri, string fileName)
{
    try
    {
        var stream = Repository.Blob.GetBlob(blobUri);
        stream.Position = 0; // rewind after the download before reading

        if (blobUri.EndsWith(".jpg"))
        {
            var image = Image.FromStream(stream);
            var myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            var thumbnailImage = image.GetThumbnailImage(42, 32, myCallback, IntPtr.Zero);

            // Save the thumbnail into a fresh stream rather than back into the
            // source stream, then rewind it before uploading.
            using (var thumbnailStream = new MemoryStream())
            {
                thumbnailImage.Save(thumbnailStream, ImageFormat.Jpeg);
                thumbnailStream.Position = 0;
                Repository.Blob.PutBlob(thumbnailStream, "thumbnail-" + fileName);
            }
        }
        else
        {
            Repository.Blob.PutBlob(stream, fileName);
        }
    }
    catch (Exception ex)
    {
        // Trace.WriteLine takes (message, category).
        Trace.WriteLine(ex.ToString(), "Error");
    }
}
[/sourcecode]

The last method to add to the class is the Run() method.

[sourcecode language=”csharp”]
public static void Run()
{
    var queueMessage = Repository.Queue.GetNextMessage();

    while (queueMessage != null)
    {
        // Messages are formatted as "blobUri$fileName".
        var message = queueMessage.AsString.Split('$');
        if (message.Length == 2)
        {
            AddThumbnail(message[0], message[1]);
        }

        Repository.Queue.DeleteMessage(queueMessage);
        queueMessage = Repository.Queue.GetNextMessage();
    }
}
[/sourcecode]
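
Because messages are deleted as soon as they're handled, a bad message can't loop forever here. Still, one hedged variation worth knowing (my addition, not part of the original sample): if the worker dies mid-message, the message reappears after its visibility timeout, and the DequeueCount property on CloudQueueMessage can cap those retries:

[sourcecode language=”csharp”]
// A sketch, not the sample's code: same loop as Run(), but a message that
// has already been dequeued a few times is dropped instead of retried.
public static void RunWithRetryCap()
{
    var queueMessage = Repository.Queue.GetNextMessage();

    while (queueMessage != null)
    {
        if (queueMessage.DequeueCount <= 3)
        {
            var message = queueMessage.AsString.Split('$');
            if (message.Length == 2)
            {
                AddThumbnail(message[0], message[1]);
            }
        }

        // Deleted either way; a message past the retry cap is discarded.
        Repository.Queue.DeleteMessage(queueMessage);
        queueMessage = Repository.Queue.GetNextMessage();
    }
}
[/sourcecode]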

Now open up the WorkerRole.cs file, add the following code to the existing methods, and add the additional event handler method below.

[sourcecode language=”csharp”]
public override void Run()
{
    Trace.WriteLine("Junk Trunk Worker entry point called", "Information");

    while (true)
    {
        PhotoProcessing.Run();

        Thread.Sleep(60000);
        Trace.WriteLine("Junk Trunk Worker Role is active and running.", "Working");
    }
}

public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 12;
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed += (sender, arg) =>
        {
            if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(change => change.ConfigurationSettingName == configName))
            {
                // Recycle the role if the new setting value can't be applied.
                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                {
                    RoleEnvironment.RequestRecycle();
                }
            }
        };
    });

    Storage.JunkTrunkSetup.CreateContainersQueuesTables();

    return base.OnStart();
}

private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (!e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange)) return;

    Trace.WriteLine("Environment change detected.", "Working");
    // Cancelling the Changing event causes the role instance to recycle.
    e.Cancel = true;
}
[/sourcecode]

At this point everything needed to kick off photo processing, using the Windows Azure Storage Queue as the tracking mechanism, is ready. I’ll be following up these blog entries with some additional entries regarding refactoring and streamlining what we have going on. I might even go all out and add some more functionality or some such craziness! So I hope that was helpful, and keep reading. I’ll have more bits of rambling and other trouble coming down the blob pipeline soon! Cheers!

Put Stuff in Your Windows Azure Junk Trunk – ASP.NET MVC Application

If you haven’t read Part 1 of this series, you’ll need to in order to follow along with the JunkTrunk Repository (for Part 3, click here).  Open the solution up if you haven’t already and navigate to the Models folder within the ASP.NET MVC JunkTrunk Project.  In that folder add two class files titled FileItemModel.cs and BlobModel.cs. Add the following properties to the FileItemModel.

[sourcecode language=”csharp”]
public class FileItemModel
{
    public Guid ResourceId { get; set; }
    public string ResourceLocation { get; set; }
    public DateTime UploadedOn { get; set; }
}
[/sourcecode]

Add the following property to the BlobModel and inherit from the FileItemModel Class.

[sourcecode language=”csharp”]
public class BlobModel : FileItemModel
{
    public Stream BlobFile { get; set; }
}
[/sourcecode]

Next add a new class file titled FileBlobManager.cs and add the following code to the class.

[sourcecode language=”csharp”]
public class FileBlobManager
{
    public void PutFile(BlobModel blobModel)
    {
        var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
        var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

        Table.Add(
            new BlobMeta
            {
                Date = DateTime.Now,
                ResourceUri = blobUri,
                RowKey = Guid.NewGuid().ToString()
            });
    }

    public BlobModel GetFile(Guid key)
    {
        var blobMetaData = Table.GetMetaData(key);
        var blobFileModel =
            new BlobModel
            {
                UploadedOn = blobMetaData.Date,
                BlobFile = Blob.GetBlob(blobMetaData.ResourceUri),
                ResourceLocation = blobMetaData.ResourceUri
            };
        return blobFileModel;
    }

    public List<FileItemModel> GetBlobFileList()
    {
        var blobList = Table.GetAll();

        return blobList.Select(
            metaData => new FileItemModel
            {
                ResourceId = Guid.Parse(metaData.RowKey),
                ResourceLocation = metaData.ResourceUri,
                UploadedOn = metaData.Date
            }).ToList();
    }

    public void Delete(string identifier)
    {
        Table.DeleteMetaDataAndBlob(Guid.Parse(identifier));
    }
}
[/sourcecode]

Now that the repository, management, and models are all complete, the focus can turn to the controller and the views of the application. At this point the breakdown of each data element within the data transfer object, and the movement of the data back and forth, becomes very important to the overall architecture. One thing to remember is that the application should not pass data such as URIs or other easily tampered-with strings back and forth. This is a good place to use Guids or, if necessary, integer values that identify the data being created, updated, or deleted. This simplifies the UI and decreases the chance of various injection attacks. The next step is to open up the HomeController and add code to complete each of the functional steps for the site.

[sourcecode language=”csharp”]
[HandleError]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewData["Message"] = "Welcome to the Windows Azure Blob Storing ASP.NET MVC Web Application!";
        var fileBlobManager = new FileBlobManager();
        var fileItemModels = fileBlobManager.GetBlobFileList();
        return View(fileItemModels);
    }

    public ActionResult About()
    {
        return View();
    }

    public ActionResult Upload()
    {
        return View();
    }

    public ActionResult UploadFile()
    {
        foreach (string inputTagName in Request.Files)
        {
            var file = Request.Files[inputTagName];

            if (file.ContentLength > 0)
            {
                var blobFileModel =
                    new BlobModel
                    {
                        BlobFile = file.InputStream,
                        UploadedOn = DateTime.Now,
                        ResourceLocation = Path.GetFileName(file.FileName)
                    };

                var fileBlobManager = new FileBlobManager();
                fileBlobManager.PutFile(blobFileModel);
            }
        }

        return RedirectToAction("Index", "Home");
    }

    public ActionResult Delete(string identifier)
    {
        var fileBlobManager = new FileBlobManager();
        fileBlobManager.Delete(identifier);
        return RedirectToAction("Index", "Home");
    }
}
[/sourcecode]

The view for the Upload action hasn’t been created just yet, so navigating to it would fail at this point. But before I add a view for this action, I’ll cover what has been created in the controller.

The Index Action I’ve changed moderately to list the blobs that are stored in Windows Azure Blob Storage. The list is pulled from the manager class created earlier and passed into the view for rendering. I also, just for cosmetic reasons, changed the default display message passed into the ViewData so that the application displays something more relevant.

The About message I just left as is. The Upload action simply returns the view we’ll create shortly.

The UploadFile Action checks for files within the request, builds up the model and then puts the model into storage via the manager.

The last method is the Delete Action, which instantiates the manager and then calls a delete against storage. This in turn traces back through, finds the Table and Blob entities related to the specific blob, and deletes both from the respective Windows Azure Table and Blob storage.

The next step is to get the various views updated or added to enable the upload and deletion of the blob items.

Add a view titled Upload.aspx to the Home Folder of the Views within the JunkTrunk Project.

Upload View

First change the inherits value for the view from System.Web.Mvc.ViewPage to the strongly typed System.Web.Mvc.ViewPage<JunkTrunk.Models.BlobModel>. After that add the following HTML to the view.

[sourcecode language=”html”]
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
Upload an Image
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
<h2>
Upload</h2>
<% using (Html.BeginForm("UploadFile", "Home", FormMethod.Post,
new { enctype = "multipart/form-data" }))
{%>
<%: Html.ValidationSummary(true) %>
<fieldset>
<legend>Fields</legend>

<div class="editor-label">
Select file to upload to Windows Azure Blob Storage:
</div>
<div class="editor-field">
<input type="file" id="fileUpload" name="fileUpload" />
</div>
<p>
<input type="submit" value="Upload" />
</p>
</fieldset>
<% } %>
<div>
<%: Html.ActionLink("Back to List", "Index") %>
</div>
</asp:Content>
[/sourcecode]
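
The resulting page directive at the top of Upload.aspx would then look roughly like the following; the JunkTrunk.Models.BlobModel type parameter and the master page path are assumptions on my part, since the generic type didn't survive in the original text:

[sourcecode language=”html”]
<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<JunkTrunk.Models.BlobModel>" %>
[/sourcecode]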

After adding the HTML, change the Index.aspx View to have an action link for navigating to the upload page and to display the list of uploaded blobs. First change the inherits value from System.Web.Mvc.ViewPage to System.Web.Mvc.ViewPage<IEnumerable<FileItemModel>>. The rest of the changes are listed below.

[sourcecode language=”html”]
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
<h2>
<%: ViewData["Message"] %></h2>
<p>
<%: Html.ActionLink("Upload", "Upload", "Home") %>
a file to Windows Azure Blob Storage.
</p>
Existing Files:<br />
<table>
<tr>
<th>
</th>
<th>
FileName
</th>
<th>
UploadedOn
</th>
</tr>
<% foreach (var item in Model)
{ %>
<tr>
<td>
<%: Html.ActionLink("Delete", "Delete",
new { identifier = item.ResourceId })%>
</td>
<td>
<%: item.ResourceLocation %>
</td>
<td>
<%: String.Format("{0:g}", item.UploadedOn) %>
</td>
</tr>
<% } %>
</table>
</asp:Content>
[/sourcecode]

Make sure the Windows Azure Project is set as the startup project and press F5 to run the application. The following page should display first.

The Home Page o' Junk Trunk

Click through on it to go to the upload page.

Selecting an Image to Put in The Junk Trunk

On the upload page select an image to upload and then click Upload. This uploads the image and redirects back to the home page.

The Image in the Junk Trunk

On the home page the list should now show the uploaded blob image. Click Delete to delete the image; the table entry and the blob itself will both be removed from Windows Azure Storage. To see that the data and image are being uploaded, open up the Server Explorer within Visual Studio 2010.

Visual Studio 2010 Server Explorer

View the data by opening up the Windows Azure Storage tree. Double click on either of the storage mediums to view table or blob data.

Windows Azure Storage

Put Stuff in Your Windows Azure Junk Trunk – Repository Base

Alright, so the title is rather stupid, but hey, it’s fun!  🙂

I set up this project to provide some basic functionality with Windows Azure Storage.  I wanted to use each of the three storage mediums; Table, Blob, and Queue, and this example covers each of them.  The application will upload and store images, provide a listing, do some worker processing, and handle deletion of the images and associated metadata.  This entry is Part 1 of the series.

Title aside and description of the project complete, I’ll dive right in!

Putting Stuff in Your Junk Trunk

Create a new Windows Azure Project called PutJunkInIt.  (Click any screenshot for the full size, and also note some of the text may be off – I had to recreate a number of these images)

Windows Azure PutJunkInIt

Next select the ASP.NET MVC 2 Web Application and also a Worker Role and name the projects JunkTrunk and JunkTrunk.WorkerRole.

Choosing Windows Azure Projects

In the next dialog choose to create the unit test project and click OK.

Create Unit Test Project

After the project is created the following projects are set up within the PutJunkInIt Solution.  There should be a JunkTrunk, JunkTrunk.WorkerRole, the PutJunkInIt Windows Azure deployment project, and a JunkTrunk.Tests Project.

Solution Explorer

Next add a Windows Class Library Project and title it JunkTrunk.Storage.

Windows Class Library

Add a reference to the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient assemblies to the JunkTrunk.Storage Project.  Rename the Class1.cs file and class to JunkTrunkBase, then open up the newly renamed JunkTrunkBase.cs file.  First add the following fields and constructor to the class.

[sourcecode language=”csharp”]
public const string QueueName = "metadataqueue";
public const string BlobContainerName = "photos";
public const string TableName = "MetaData";

static JunkTrunkBase()
{
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed
            += (sender, arg) =>
            {
                if (!arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                    .Any(change => (change.ConfigurationSettingName == configName)))
                    return;

                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                {
                    RoleEnvironment.RequestRecycle();
                }
            };
    });
}
[/sourcecode]

After that add the following blob container and reference methods.

[sourcecode language=”csharp”]
protected static CloudBlobContainer Blob
{
    get { return BlobClient.GetContainerReference(BlobContainerName); }
}

private static CloudBlobClient BlobClient
{
    get
    {
        return Account.CreateCloudBlobClient();
    }
}
[/sourcecode]

Now add code for the table & queue client and reference methods.

[sourcecode language=”csharp”]
protected static CloudQueue Queue
{
    get { return QueueClient.GetQueueReference(QueueName); }
}

private static CloudQueueClient QueueClient
{
    get { return Account.CreateCloudQueueClient(); }
}

protected static CloudTableClient Table
{
    get { return Account.CreateCloudTableClient(); }
}

protected static CloudStorageAccount Account
{
    get
    {
        // Reuses the diagnostics connection string setting for data storage,
        // so only one connection string has to be configured.
        return
            CloudStorageAccount
                .FromConfigurationSetting("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    }
}
[/sourcecode]

This class now provides the basic underpinnings needed to retrieve the appropriate information from the configuration.  This base class can then provide that connection information to connect to the table, queue, or blob mediums.
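
For FromConfigurationSetting to resolve (as used in the Account property above), the connection string setting has to exist in the cloud project's ServiceConfiguration.cscfg for each role. A minimal sketch of that entry, pointed at development storage; the setting name just has to match the one used in code:

[sourcecode language=”xml”]
<!-- Inside the <Role> element of ServiceConfiguration.cscfg -->
<ConfigurationSettings>
  <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
           value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
[/sourcecode]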

The next step is to create some initialization code to get the containers created if they don’t exist in Windows Azure.  Add a new class file titled JunkTrunkSetup.cs to the JunkTrunk.Storage Project.

JunkTrunkSetup

[sourcecode language=”csharp”]
public class JunkTrunkSetup : JunkTrunkBase
{
    public static void CreateContainersQueuesTables()
    {
        Blob.CreateIfNotExist();
        Queue.CreateIfNotExist();
        Table.CreateTableIfNotExist(TableName);
    }
}
[/sourcecode]
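
CreateContainersQueuesTables() only needs to run once when a role starts up. Later in the series it gets called from the worker role's OnStart(); a minimal sketch of that call site:

[sourcecode language=”csharp”]
public override bool OnStart()
{
    // Ensure the blob container, queue, and table exist before the role runs.
    Storage.JunkTrunkSetup.CreateContainersQueuesTables();
    return base.OnStart();
}
[/sourcecode]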

Next add the System.Data.Services.Client Assembly to the project.  After adding the assembly add two new classes and name them BlobMeta.cs and Table.cs. Add the following code to the Table.cs Class.

[sourcecode language=”csharp”]
public class Table
{
    public static string PartitionKey;
}
[/sourcecode]

Next add another class file and name it BlobMetaContext.cs and add the following code.

[sourcecode language=”csharp”]
public class BlobMetaContext : TableServiceContext
{
    public BlobMetaContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        IgnoreResourceNotFoundException = true;
    }

    public IQueryable<BlobMeta> Data
    {
        get { return CreateQuery<BlobMeta>(JunkTrunkBase.TableName); }
    }

    public void Add(BlobMeta data)
    {
        // Row keys can't contain forward slashes.
        data.RowKey = data.RowKey.Replace("/", "_");

        BlobMeta original = (from e in Data
                             where e.RowKey == data.RowKey
                                   && e.PartitionKey == Table.PartitionKey
                             select e).FirstOrDefault();

        if (original != null)
        {
            Update(original, data);
        }
        else
        {
            AddObject(JunkTrunkBase.TableName, data);
        }

        SaveChanges();
    }

    public void Update(BlobMeta original, BlobMeta data)
    {
        original.Date = data.Date;
        original.ResourceUri = data.ResourceUri;
        UpdateObject(original);
        SaveChanges();
    }
}
[/sourcecode]

Now add the following code to the BlobMeta Class.

[sourcecode language=”csharp”]
public class BlobMeta : TableServiceEntity
{
    public BlobMeta()
    {
        PartitionKey = Table.PartitionKey;
    }

    public DateTime Date { get; set; }
    public string ResourceUri { get; set; }
}
[/sourcecode]

At this point, everything should build. Give it a go to be sure nothing got keyed in wrong (or copied in wrong). Once assured the build is still solid, add the Blob.cs Class to the project.

[sourcecode language=”csharp”]
public class Blob : JunkTrunkBase
{
    // As with the Queue wrapper, Blob inside these methods refers to the
    // inherited CloudBlobContainer property from JunkTrunkBase.
    public static string PutBlob(Stream stream, string fileName)
    {
        var blobRef = Blob.GetBlobReference(fileName);
        blobRef.UploadFromStream(stream);
        return blobRef.Uri.ToString();
    }

    public static Stream GetBlob(string blobAddress)
    {
        var stream = new MemoryStream();
        Blob.GetBlobReference(blobAddress)
            .DownloadToStream(stream);
        stream.Position = 0; // rewind so callers read from the start
        return stream;
    }

    public static Dictionary<string, string> GetBlobList()
    {
        var blobs = Blob.ListBlobs();
        var blobDictionary =
            blobs.ToDictionary(
                listBlobItem => listBlobItem.Uri.ToString(),
                listBlobItem => listBlobItem.Uri.ToString());
        return blobDictionary;
    }

    public static void DeleteBlob(string blobAddress)
    {
        Blob.GetBlobReference(blobAddress).DeleteIfExists();
    }
}
[/sourcecode]

After that finalize the Table Class with the following changes and additions.

[sourcecode language=”csharp”]
public class Table : JunkTrunkBase
{
    public const string PartitionKey = "BlobMeta";

    public static void Add(BlobMeta data)
    {
        Context.Add(data);
    }

    public static BlobMeta GetMetaData(Guid key)
    {
        return (from e in Context.Data
                where e.RowKey == key.ToString() &&
                      e.PartitionKey == PartitionKey
                select e).SingleOrDefault();
    }

    public static void DeleteMetaDataAndBlob(Guid key)
    {
        // Use a single context so the entity lookup and delete are tracked together.
        var ctxt = new BlobMetaContext(
            Account.TableEndpoint.AbsoluteUri,
            Account.Credentials);

        var entity = (from e in ctxt.Data
                      where e.RowKey == key.ToString() &&
                            e.PartitionKey == PartitionKey
                      select e).SingleOrDefault();

        ctxt.DeleteObject(entity);
        Repository.Blob.DeleteBlob(entity.ResourceUri);
        ctxt.SaveChanges();
    }

    public static List<BlobMeta> GetAll()
    {
        return (from e in Context.Data
                select e).ToList();
    }

    public static BlobMetaContext Context
    {
        get
        {
            return new BlobMetaContext(
                Account.TableEndpoint.AbsoluteUri,
                Account.Credentials);
        }
    }
}
[/sourcecode]

The final file to add is the Queue.cs Class File. Add that and then add the following code to the class.

[sourcecode language=”csharp”]
public class Queue : JunkTrunkBase
{
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }

    public static CloudQueueMessage GetNextMessage()
    {
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }

    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }

    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}
[/sourcecode]

That now gives us a fully functional set of classes that utilize the Windows Azure SDK. In Part 2 I’ll start building on top of that using the ASP.NET MVC 2 Web Project. Part 2 will be published tomorrow, so stay tuned.

Gritty Technical Info on Windows Azure Web Roles

This is a follow up to the previous blog entry I wrote pertaining to Windows Azure Roles.  I wanted to cover the bases on the various technical aspects of creating a Windows Azure Web Role and Worker Role in Visual Studio 2010.  Without further interruption, let’s dive right in.  Start Visual Studio 2010 and initiate a new project: File, New, and then Project will open the New Project dialog.

Windows Azure Project

Select a cloud template type and name your project.  Click OK and the New Windows Azure Project dialog will appear so you can select the role types for the project.

Windows Azure Project Templates

Select an ASP.NET MVC Web Application, name it appropriately, and then click OK.  When prompted for a test project select yes and click OK.  When the solution is finished generating from the chosen templates there will be a SampleWebRole ASP.NET MVC Web Application, the test project titled SampleWebRole.Tests, and a Windows Azure Project titled Windows Azure Web Role Sample.

Solution

After that run the application to assure that the Development Fabric and other parts of the web application start up appropriately.

With the web application still running, click on the Development Fabric icon in the Windows 7 notification area and select Show Compute Emulator UI.

Show Compute Emulator UI

The Windows Azure Compute Emulator will display. Click on the Service Deployments tree until you can see each individual instance (the little green lights should be showing). The screenshot below shows this tree opened with one of the instances selected to view the status trace.

Windows Azure Compute Emulator

Press Shift + F5 to stop the web application from running.  In the Solution Explorer right click on the SampleWebRole under the Windows Azure Web Role Sample Project and select Properties.

Properties for SampleWebRole

Under the configuration tab of the SampleWebRole Properties set the Instance Count to 6 and the VM Size to Extra Large.
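
Those two dialog settings map directly to XML in the cloud project; roughly, the vmsize attribute in ServiceDefinition.csdef and the instance count in ServiceConfiguration.cscfg (a sketch of just the relevant elements):

[sourcecode language=”xml”]
<!-- ServiceDefinition.csdef -->
<WebRole name="SampleWebRole" vmsize="ExtraLarge">
  <!-- sites, endpoints, etc. -->
</WebRole>

<!-- ServiceConfiguration.cscfg -->
<Role name="SampleWebRole">
  <Instances count="6" />
  <!-- ConfigurationSettings, etc. -->
</Role>
[/sourcecode]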

Windows Azure Instance Properties

Now press F5 to run the web application again in the Windows Azure Development Fabric.  The Windows Azure Compute Emulator (if it is closed, right click the notification area icon to launch it again) will now display each of the 6 instances launching under the SampleWebRole.

Windows Azure Compute Emulator

Click on one of the green lights to show that specific instance status in the primary window area.

Windows Azure Compute Instance 2

When you select the specific instance, the status of that instance is displayed. The instance shown in the screenshot above has a number of events being recorded by the diagnostics, MonAgentHost, and the runtime. This particular instance had gone through a rough start. During the lifecycle of a Windows Azure Web, Worker, or CGI Role a number of events similar to these can occur.

Read through the first few lines. These lines show that another agent was running, which could be any number of things conflicting with this web role starting up cleanly. Eventually the web role was able to start up appropriately, as shown in the runtime lines stating that OnStart() is called and then completes, with Run() executing next.

Reading further through the diagnostics the web role eventually requests a shutdown and then prepares for that shutdown pending the exit of the parent process 6924.

These types of events are commonplace when reviewing the actions a web role goes through; generally, don’t get too alarmed by any particular set of messages. As long as the role has green lights on the instances, things are going swimmingly. When the lights change to purple or red, it is important to really start paying attention to the diagnostics.

Windows Azure Worker Roles

In the next blog entry (Part II) what I want to show is how to add a worker role and how to analyze the activities within it. The worker role is somewhat different from a web role. The primary difference is that a worker role is built around providing background compute work, while a web role is built around serving web content. Think of the worker role as something similar to a Windows Service, running continuously to execute jobs and other, often backend, processes. A web role is what hosts Silverlight and web applications such as ASP.NET or ASP.NET MVC. A minimal sketch of a worker role’s shape follows below.
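
As a rough illustration (my own minimal example, not code from this entry), a worker role is just a class deriving from RoleEntryPoint with a long-running Run() loop:

[sourcecode language=”csharp”]
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// A minimal worker role skeleton: much like the main loop of a Windows
// Service, Run() executes for the life of the role instance.
public class SampleWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Background work goes here: drain a queue, run a scheduled job, etc.
            Thread.Sleep(10000);
        }
    }
}
[/sourcecode]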

Part II published on Monday the 17th.


Windows Azure Web, Worker, and CGI Roles – How They Work

This is a write up I’ve put together of how the roles in Windows Azure work.  As far as I know, this is all correct – but if there are any Windows Azure Team Members out there that wouldn’t mind providing some feedback about specifics or adding to the details I have here – please do add comments!  🙂

Windows 2008 and Hyper-V

Windows Azure is built on top of Windows 2008 & Hyper-V. Hyper-V provides virtualization to the various instance types and allocation of resources to those instances. Windows 2008 provides the core operating system functionality for those systems and the Windows Azure Platform Roles and Storage.

The hypervisor that a Hyper-V installation implements does a few unique things compared to many of the other virtualization offerings in the industry. Xen (the open source virtualization software that Amazon Web Services uses) and VMware both use a shared resource model for utilization of physical resources within a system. This allows more virtualized instances to be started per physical machine, but can sometimes lead to hardware contention. Hyper-V, on the other hand, pins a particular amount of resources to a virtualized instance, which decreases the number of instances allowed on a physical machine but prevents hardware contention. Both designs have their pluses and minuses, and in cloud computing these design choices are rarely evident. The context, however, is important to know when working with high end computing within the cloud.

Windows Azure Fabric Controller

The Windows Azure Fabric Controller is the magic glue that holds all the pieces of Windows Azure together. It automates all of the load balancing, switching, networking, and other network configuration. Usually within an IaaS environment you’d have to set up the load balancer, static IP addresses, the internal DNS that allows connection and routing by the external DNS, the switch configurations, the DMZ, and a host of other configuration and ongoing maintenance. With the Windows Azure Platform and the Fabric Controller, all of that is taken care of entirely; maintenance for these things goes to zero.

The Windows Azure Fabric Controller has several primary tasks: networking, hardware, and operating system management, service modeling, and life cycle management of systems.

The low level hardware that the Windows Azure Fabric Controller manages includes switches, load balancers, nodes, and other network elements. In addition it manipulates the appropriate internal DNS and other routing needed for communication within the cloud so that each URI is accessed seamlessly from the outside.

The service modeling that the Fabric Controller provides maps the topology of services, port usage, and, as mentioned before, the internal communication within the cloud. All of this is done by the Fabric Controller without any interaction beyond creating an instance or storage service within Windows Azure.

The operating system management from the Fabric Controller involves patching the operating system to assure that security, memory, storage, and other integral operating system features are maintained and optimized. This allows the operating system to maintain uptime and optimal application performance.

Finally, the Fabric Controller is responsible for the service life cycle, including updates and configuration changes across upgrade domains and fault domains, carried out in a way that maintains uptime for the services.

Each role has at least one instance running, but a role can have multiple instances, with a theoretically limitless number. If an instance stops responding, the Fabric Controller recycles it and a new instance takes over. This can sometimes take several minutes, and is a core reason the 99.95% compute uptime SLA requires two instances to be running within a role. In addition, a recycled instance is rebuilt from scratch, destroying any data stored on the role instance itself. This is where Windows Azure Storage plays a pivotal role in maintaining Windows Azure cloud applications.

Web Role

The Windows Azure Web Role is designed as a simple-to-deploy IIS web site or services hosting feature of the platform. The Windows Azure Web Role can provide hosting for any .NET related web site such as ASP.NET, ASP.NET MVC, MonoRail, and more.

The Windows Azure Web Role provides this hosting with a minimal amount of maintenance required. No routing or load balancing setup is needed; everything is handled by the Windows Azure Fabric Controller.

Uses: Hosting ASP.NET, ASP.NET MVC, MonoRail, or other .NET related web sites in a managed, high uptime, highly resilient, controlled environment.

Worker Role

A worker role can be used to host any number of things that need to pull, push, or run continuously without any particular input. It can be used to set up a schedule or other type of service, providing a role dedicated to what could closely be compared to a Windows Service. The options and capabilities of a worker role, however, vastly exceed those of a simple Windows Service.

CGI Role

This service role is designed to allow execution of technology stacks such as Ruby on Rails, PHP, Java, and other non-Microsoft options.

Windows Azure Storage

Windows Azure Storage is broken into three distinct features within the service: table, blob, and queue storage. Any of the Windows Azure Roles can connect to storage to maintain data across service life cycle reboots, refreshes, and any temporary loss of a Windows Azure Role.

A note about Windows Azure Storage compared to most cloud storage providers: none of the Azure Storage services are “eventually consistent”. When a write is done, it is instantly visible to all subsequent readers. This simplifies coding, but makes the storage mechanisms slower than eventually consistent data architectures.
