Following Good Practice, The Negative Bits About Windows Azure First, But Gems Included! :D

Ok, I've used Windows Azure steadily over the last year and a half. I've fought with the SDK so much that I stopped using it. I decided I'd put together this recap of what has driven me crazy, and then something about the parts that I really like, the awesome bits, the parts that have the greatest potential with Windows Azure. So hold on to your hats, this may be hard hitting.  ;)

First the bad parts.

The Windows Azure SDK

Ok, the SDK has driven me nuts. It has had flat-out errors, sealed (bad) code, and is TIGHTLY COUPLED to the development fabric. I'm a professional; I can mock that, I don't need kindergarten-level help running this! If I have a large environment with thousands of prospective nodes (or even just a few dozen instances) the development fabric does nothing to help. I'd rate the SDK's closed (read: sealed classes, no interfaces) nature and the development fabric as the number one reasons that Windows Azure is the hardest platform to develop for at large scale in enterprise environments.
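
In fairness, you can route around the coupling today by hiding the SDK calls behind your own seam. A minimal sketch of the idea; the interface and class names here are entirely my own invention, not anything shipped in the SDK:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical seam: application code talks to this interface,
// never directly to the SDK's CloudQueue type.
public interface IQueueClient
{
    void AddMessage(string message);
    string GetNextMessage(); // null when the queue is empty
}

// Test double used in unit tests -- no development fabric required.
public class FakeQueueClient : IQueueClient
{
    private readonly Queue<string> _messages = new Queue<string>();

    public void AddMessage(string message)
    {
        _messages.Enqueue(message);
    }

    public string GetNextMessage()
    {
        return _messages.Count > 0 ? _messages.Dequeue() : null;
    }
}

public static class Program
{
    public static void Main()
    {
        IQueueClient queue = new FakeQueueClient();
        queue.AddMessage("hello");
        Console.WriteLine(queue.GetNextMessage()); // prints "hello"
    }
}
```

Production code then gets a thin adapter that wraps the real queue behind IQueueClient; tests get the fake, and the development fabric never has to spin up.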

Pricing Competitiveness? Ouch. :(

Windows Azure is by far the most expensive cloud platform or infrastructure on the market today. AWS, when priced out for comparable workloads, comes in anywhere from 2/3rds to 1/6th the price. Rackspace in some circumstances comes in at the crazy low price of 1/8th as much as Windows Azure for similar capabilities. I realize there are certain things that Windows Azure may provide that the others may not, and that in some rare circumstances Azure may come in lower – but that is rare. If Windows Azure wants to stay primarily, and only, an Enterprise offering then this is fine. Nailing Enterprises on expensive things and offering them these SLA myths is exactly what Enterprises want; the peace of mind of an SLA, they don't care about pricing.

But if Windows Azure wants to play in new business, startups especially, mid-size business, or even small enterprises, then the pricing needs a fix. We're looking at disparities of $500 vs. $3,500 in some situations. This isn't exactly feasible as a way to get into cloud computing. Microsoft, unfortunately for them, has to drop this dream of maintaining revenues and profits at the same rate as their OS & Office sales. Fact is, the market has already commoditized pricing in this sector.

Speed, Boot Time, Restart, UI Admin Responsiveness

The Silverlight interface is beautiful, I'll give it that. But in most browsers aside from IE it gets flaky. Oh wait, no, I'm wrong. It gets flaky in all the browsers. Doh! This may be fixed now, but in my experience, and that of others I've paired with, we've watched things go wrong in Chrome, Opera, Safari, Firefox, and IE. This includes an instance spinning as if still starting up when it is already started, or spinning and spinning until a refresh is done and the instance has completely disappeared! I've refreshed the Silverlight UI before and had it just stop responding entirely (and this wasn't even on my machine).

The boot time for an instance is absolutely unacceptable for the Internet, for web development, or otherwise. Boot time should be similar to a solid Linux instance. I don't care what needs to be done, but the instances need to be cleaned up, the architecture changed, or the OS swapped out if need be. I don't care what OS the cloud is running on, but my instance should be live for me within 1-2 minutes or LESS. Rackspace, Joyent, AWS, and about every single cloud provider out there currently boots an instance in about 45 seconds, sometimes a minute, but often less. I know there are workarounds, the whole leave-it-running-while-you-deploy method and other such notions, but those don't always work out. Sometimes you just need the instance up and running and you need it NOW!

Speed needs measurement to prove out in tests. Speed needs to be observed. I need analytics on the speed of the instance I'm choosing. Is it pegged? Is it idle and not responding? In Windows Azure I have no easy way to know. The speed, in general, seems to be really good on Windows Azure. Often it even appears to be better than the others, but I can rarely prove it. It's just a gut feeling that it is moving along well.
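
Lacking built-in analytics, the best I've managed is low-tech self-measurement: time the operations yourself and log the numbers. A throwaway sketch (names are mine, and nothing here is Azure-specific; point the delegate at a storage call or an HTTP ping against your instance):

```csharp
using System;
using System.Diagnostics;

public static class LatencyProbe
{
    // Times an arbitrary operation; hand it a storage call, a page fetch,
    // or whatever you want numbers on.
    public static TimeSpan Measure(Action operation)
    {
        var watch = Stopwatch.StartNew();
        operation();
        watch.Stop();
        return watch.Elapsed;
    }

    public static void Main()
    {
        // Stand-in for a real call against an instance.
        var elapsed = Measure(() => System.Threading.Thread.Sleep(50));
        Console.WriteLine("Operation took {0} ms", elapsed.TotalMilliseconds);
    }
}
```

Run it on a schedule from a worker and you at least get a trend line instead of a gut feeling.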

So, those are the negatives: speed, boot time, admin UI responsiveness, pricing, and the SDK. Now it is time for the wicked awesome cool bits!

Now, The Cool Parts

Lock In With Mort

This topic you'd have to ask me about in person; many people would be offended by it and I mean no offense. The reality is many companies will continue to get and hire what they consider to be plug-and-play, replaceable developers – AKA "mort". This is really bad for developers, but great for Windows Azure. In addition Windows Azure provides lock-in as an option. It is by no means the only option – because by nature a cloud platform and its services will only lock you in if YOU allow yourself to be locked in. But providing both ways, lock-in or not, is a major boost for Windows Azure also. Hopefully I'll have a presentation on this in the near future, or at least find a way to write it up so that it doesn't come off as me being a mean person, because I honestly don't intend that.

Deploy Anything, To The Platform

Having a platform to work with instead of starting purely at infrastructure is HUGE for most companies. Not all, but most companies would benefit in a massive way from writing to the Azure Platform instead of single instances like EC2. The reason boils down to this: Windows Azure abstracts out most of the networking, ops, and other management that a company has to do. Most companies have either zero, or very weak, ops and admin capabilities. In many companies this will actually bring the (I hate saying this) TCO, or Total Cost of Ownership, down for building to the Windows Azure Platform vs. the others. Because really, the real cost in all of this is the human cost, not the services, as they're commoditized. Again though, this is for small non-web-related businesses – web companies need to have ops capabilities; their people absolutely must understand and know how the underpinnings work. If routing, multi-tenancy, networking and other capabilities are to be used to their fullest extent, infrastructure needs to be abstracted, but it also needs to be accessible. Windows Azure does a good deal of infrastructure, and it looks like there will be more available in the future. That will be when the platform becomes much more valuable for the web side of the world that demands control, network access, SEO, routing, multi-tenancy, and other options like this.

With the newer generation of developers and others coming out of colleges, there is a great idea here and a very bad one. Many new-generation developers, if they want web, are jumping right into Ruby on Rails. Microsoft isn't even a blip on their radar; however, there still manage to be thousands that give Microsoft .NET a look, and for them Windows Azure provides a lot of options, including Ruby on Rails, PHP, and more. Soon there will even be some honest-to-goodness node.js support. I even suspect that the node.js support will probably be one of the fastest-performing node.js implementations around. At least, the potential is there for sure. This latter group of individuals coming into the industry these days is who will drive the Windows Azure Platform to achieve what it can.

.NET, PHP, and Ruby on Rails Ecosystem (Note: I don't support the theft of this word, but I'll jump on the "ecosystem" bandwagon, reluctantly)

Besides the simple idea that you can deploy any of these to an "instance" in other environments, Windows Azure (almost) makes every one of these a first-class platform citizen. Drop the SDK is my advice, my STRONG advice, and go the RESTful services route. Once you do that you aren't locked in, you can abstract for Windows Azure or any cloud, and you can utilize any of these framework stacks. Having these available at a platform level is, technically, HUGE. AWS doesn't offer that, Rackspace doesn't even dream of it yet, OpenStack doesn't enable it, and the list goes on. Windows Azure, that's your option in this category.
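
To make the REST point concrete: a public blob is just an HTTP resource, so any stack that speaks HTTP can pull it without referencing a single Microsoft assembly. A rough sketch, with a placeholder account/container/blob URL of my own invention, and authentication (the Shared Key Authorization header that authenticated requests require) deliberately left out:

```csharp
using System;
using System.Net;

public static class RestBlobReader
{
    // Reads a publicly accessible blob over plain HTTP. Private blobs
    // additionally need a signed Authorization header, omitted here.
    public static byte[] ReadPublicBlob(string blobUrl)
    {
        using (var client = new WebClient())
        {
            return client.DownloadData(blobUrl);
        }
    }

    public static void Main()
    {
        // Placeholder account, container, and blob names -- substitute your own.
        var data = ReadPublicBlob("http://myaccount.blob.core.windows.net/photos/example.jpg");
        Console.WriteLine("Downloaded {0} bytes", data.Length);
    }
}
```

The same idea extends to queues and tables: they're all just HTTP endpoints, which is exactly why the abstraction survives a move to another cloud.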

The Other MASSIVE Coolness: Not Core Windows Azure Features, but a HUGE Plus for Windows Azure

The add-ons to SQL Server are HUGE for enterprises: BI Reporting, SQL Server Reporting, etc. These features are a no-brainer for an enterprise. Yes, they provide immediate lock-in. Yes, it doesn't really matter for an enterprise. But here's the saving grace for this lock-in. With the Service Bus and Access Control you can use single sign-on with these and OTHER CLOUD SERVICES in a very secure and safe manner in your development. These two features alone, whether you use other Windows Azure features or not, are worth using. Even with AWS, Rackspace, or one of the others. The Service Bus and Access Control add a lot of capabilities to any type of cloud architecture that come in useful for enterprise environments, and are practically a requirement for mixed on-premise and in-cloud environments (which, it seems, almost all environments are).

Other major pluses that I like with Windows Azure include:

  • Azure Marketplace – Over time, and if marketed well, this could become a huge asset to companies big and small.
  • SQL Azure – SQL Azure is actually a pretty solid database offering for enterprises. Since a lot of Enterprises have already locked themselves into SQL Server, this is a great offering for those companies. However, I'm mixed on its usage vs. lower-priced MySQL usage, or others for that matter. It definitely adds to the overall Windows Azure capabilities though, and as time moves forward and other features (such as SSIS, etc.) are added to Azure this will become an even greater differentiator.
  • Caching – Well, caching is just awesome, isn't it? I dig me some caching.  This offering is great. It isn't memcached or some of the others, but it is still a great offering, and again, one of those things that adds to the overall Windows Azure capabilities list. I look forward to Microsoft adding more and more capabilities to this feature.  :)

Summary

Windows Azure has grown and matured a lot in the time since its release from beta. It still has some major negatives compared to more mature offerings. However, there is light at the end of the tunnel for those choosing the Windows Azure route, or those that are getting put onto the Windows Azure route. Some of those things may even help it leap ahead of some of the competition at some point. Microsoft is hardcore in this game and they're not letting up. If anyone has failed to notice, they still have one of the largest "war chests" on Earth to play in new games like this – even when they were initially ill-prepared. I do see myself using Windows Azure in the future, maybe not extensively, but it'll be there. And whether it wins a large share of the market or not, Microsoft putting this much money into the industry will push all ships forward in some way or another!

Cloud Formation

Here are the presentation materials that I've put together for tonight.


Check my last two posts regarding the location & such:

Put Stuff in Your Windows Azure Junk Trunk – Windows Azure Worker Role and Storage Queue

Click on Part 1 and Part 2 of this series to review the previous examples and code.  First and foremost, have the existing code base created in the other two examples open and ready in Visual Studio 2010.  Next, I'll just start rolling ASAP.

In the JunkTrunk.Storage Project add the following class file and code to the project. This will get us going for anything else we need to do for the application from the queue perspective.

public class Queue : JunkTrunkBase
{
    // Note: within this class the simple name "Queue" resolves to the
    // CloudQueue reference inherited from JunkTrunkBase, not to this class.
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }

    public static CloudQueueMessage GetNextMessage()
    {
        // Peek first; only dequeue when a message appears to be available.
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }

    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }

    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}

Once that is done, open up the FileBlobManager.cs file in the Models directory of the JunkTrunk ASP.NET MVC Web Application. In the PutFile() method add this line of code toward the very end of the method. With the added line, the method should look like this.

public void PutFile(BlobModel blobModel)
{
    var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
    var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

    Table.Add(
        new BlobMeta
            {
                Date = DateTime.Now,
                ResourceUri = blobUri,
                RowKey = Guid.NewGuid().ToString()
            });

    Queue.Add(new CloudQueueMessage(blobUri + "$" + blobFileName));
}
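
The queue message is nothing fancier than the blob URI and file name joined with a '$', which the worker role will split on later. Here is that round trip pulled out into a self-contained illustration (the class and method names are mine, not part of the project):

```csharp
using System;

public static class MessageFormat
{
    // Mirrors the tutorial's convention: "<blobUri>$<fileName>".
    public static string Pack(string blobUri, string fileName)
    {
        return blobUri + "$" + fileName;
    }

    public static string[] Unpack(string message)
    {
        // The worker expects exactly two parts and skips anything else,
        // so a stray '$' in either value would break the message.
        return message.Split('$');
    }

    public static void Main()
    {
        var msg = Pack("http://127.0.0.1:10000/account/photos/20110101-pic.jpg", "pic.jpg");
        var parts = Unpack(msg);
        Console.WriteLine(parts.Length == 2 ? parts[1] : "malformed"); // prints "pic.jpg"
    }
}
```

The '$' delimiter works because it can't appear in these particular URIs or file names; a more defensive format would encode the two values separately.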

Now that we have something adding to the queue, we want to process these queue messages. Open up the JunkTrunk.WorkerRole Project and make sure you have the following references in the project.

Windows Azure References

Next create a new class file called PhotoProcessing.cs. First add a method to the class titled ThumbnailCallback with the following code.

public static bool ThumbnailCallback()
{
    return false;
}

Next add another method that takes a blobUri string and a fileName string as parameters. Then add the following code block to it.

private static void AddThumbnail(string blobUri, string fileName)
{
    try
    {
        var stream = Repository.Blob.GetBlob(blobUri);
        stream.Position = 0; // rewind the stream before reading the image

        if (blobUri.EndsWith(".jpg"))
        {
            var image = Image.FromStream(stream);
            var myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            var thumbnailImage = image.GetThumbnailImage(42, 32, myCallback, IntPtr.Zero);

            // Save the thumbnail to a fresh stream instead of appending it to
            // the stream the original image was just read from.
            using (var thumbnailStream = new MemoryStream())
            {
                thumbnailImage.Save(thumbnailStream, ImageFormat.Jpeg);
                thumbnailStream.Position = 0;
                Repository.Blob.PutBlob(thumbnailStream, "thumbnail-" + fileName);
            }
        }
        else
        {
            Repository.Blob.PutBlob(stream, fileName);
        }
    }
    catch (Exception ex)
    {
        Trace.WriteLine(ex.ToString(), "Error");
    }
}

Last method to add to the class is the Run() method.

public static void Run()
{
    var queueMessage = Repository.Queue.GetNextMessage();
 
    while (queueMessage != null)
    {
        var message = queueMessage.AsString.Split('$');
        if (message.Length == 2)
        {
            AddThumbnail(message[0], message[1]);
        }
 
        Repository.Queue.DeleteMessage(queueMessage);
        queueMessage = Repository.Queue.GetNextMessage();
    }
}

Now open up the WorkerRole.cs file, add the following code to the existing methods, and add the additional event handler method below.

public override void Run()
{
    Trace.WriteLine("Junk Trunk Worker entry point called", "Information");

    while (true)
    {
        PhotoProcessing.Run();

        Thread.Sleep(60000);
        Trace.WriteLine("Junk Trunk Worker Role is active and running.", "Working");
    }
}

public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 12;
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed += (sender, arg) =>
        {
            if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any((change) => (change.ConfigurationSettingName == configName)))
            {
                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                {
                    RoleEnvironment.RequestRecycle();
                }
            }
        };
    });

    Storage.JunkTrunkSetup.CreateContainersQueuesTables();

    return base.OnStart();
}

private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (!e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange)) return;
            
    Trace.WriteLine("Environment Change: " + e.Changes.Count + " change(s)", "Working");
    e.Cancel = true;
}

At this point everything needed to kick off photo processing, using the Windows Azure Storage Queue as the tracking mechanism, is ready. I'll be following up these blog entries with some additional entries regarding refactoring and streamlining what we have going on. I might even go all out and add some more functionality or some such craziness! So I hope that was helpful, and keep reading. I'll have more bits of rambling and other trouble coming down the blob pipeline soon! Cheers!

Put Stuff in Your Windows Azure Junk Trunk – ASP.NET MVC Application

If you haven't read Part 1 of this series (for Part 3 click here), you'll need to in order to follow along with the JunkTrunk Repository.  Open the solution up if you haven't already and navigate to the Models folder within the ASP.NET MVC JunkTrunk Project.  In the folder add two class files titled FileItemModel.cs and BlobModel.cs. Add the following properties to the FileItemModel.

public class FileItemModel
{
    public Guid ResourceId { get; set; }
    public string ResourceLocation { get; set; }
    public DateTime UploadedOn { get; set; }
}

Add the following property to the BlobModel and inherit from the FileItemModel Class.

public class BlobModel : FileItemModel
{
    public Stream BlobFile { get; set; }
}

Next add a new class file titled FileBlobManager.cs and add the following code to the class.

public class FileBlobManager
{
    public void PutFile(BlobModel blobModel)
    {
        var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
        var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

        Table.Add(
                new BlobMeta
                {
                    Date = DateTime.Now,
                    ResourceUri = blobUri,
                    RowKey = Guid.NewGuid().ToString()
                });
    }

    public BlobModel GetFile(Guid key)
    {
        var blobMetaData = Table.GetMetaData(key);
        var blobFileModel =
            new BlobModel
            {
                UploadedOn = blobMetaData.Date,
                BlobFile = Blob.GetBlob(blobMetaData.ResourceUri),
                ResourceLocation = blobMetaData.ResourceUri
            };
        return blobFileModel;
    }

    public List<FileItemModel> GetBlobFileList()
    {
        var blobList = Table.GetAll();

        return blobList.Select(
            metaData => new FileItemModel
            {
                ResourceId = Guid.Parse(metaData.RowKey),
                ResourceLocation = metaData.ResourceUri,
                UploadedOn = metaData.Date
            }).ToList();
    }

    public void Delete(string identifier)
    {
        Table.DeleteMetaDataAndBlob(Guid.Parse(identifier));
    }
}

Now that the repository, management, and models are all complete, the focus can turn to the controller and the views of the application. At this point the breakdown of each data element within the data transfer object, and the movement of the data back and forth, becomes very important to the overall architecture. One thing to remember is that the application should not pass back and forth data such as URIs or other long, easy-to-tamper-with strings. This is a good place to use Guids or, if necessary, integer values that identify the data being created, updated, or deleted. This helps to simplify the UI and decreases the chance of various injection attacks. The next step is to open up the HomeController and add code to complete each of the functional steps for the site.
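
Before moving into the controller, that Guid-not-URI point can be sketched in a few lines. All names here are mine, and the dictionary merely stands in for the Azure table: the client only ever sees an opaque key, and the server maps it back to the real resource.

```csharp
using System;
using System.Collections.Generic;

public static class OpaqueIds
{
    // Server-side map from opaque key to the real (never exposed) resource URI.
    private static readonly Dictionary<Guid, string> Map = new Dictionary<Guid, string>();

    public static Guid Register(string resourceUri)
    {
        var key = Guid.NewGuid();
        Map[key] = resourceUri;
        return key; // this is all the client ever sees
    }

    public static string Resolve(string identifier)
    {
        // Guid.Parse throws on anything that isn't a well-formed Guid,
        // which cheaply rejects injection-style garbage from the client.
        return Map[Guid.Parse(identifier)];
    }

    public static void Main()
    {
        var key = Register("http://myaccount.blob.core.windows.net/photos/secret.jpg");
        Console.WriteLine(Resolve(key.ToString()));
    }
}
```

This is exactly the shape of the Delete action below the fold: it takes a string identifier, parses it as a Guid, and only then touches storage.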

[HandleError]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewData["Message"] = "Welcome to the Windows Azure Blob Storing ASP.NET MVC Web Application!";
        var fileBlobManager = new FileBlobManager();
        var fileItemModels = fileBlobManager.GetBlobFileList();
        return View(fileItemModels);
    }

    public ActionResult About()
    {
        return View();
    }

    public ActionResult Upload()
    {
       return View();
    }

    public ActionResult UploadFile()
    {
        foreach (string inputTagName in Request.Files)
        {
            var file = Request.Files[inputTagName];

            if (file.ContentLength > 0)
            {
                var blobFileModel =
                    new BlobModel
                        {
                            BlobFile = file.InputStream,
                            UploadedOn = DateTime.Now,
                            ResourceLocation = Path.GetFileName(file.FileName)
                        };

                var fileBlobManager = new FileBlobManager();
                fileBlobManager.PutFile(blobFileModel);
            }
        }

        return RedirectToAction("Index", "Home");
    }

    public ActionResult Delete(string identifier)
    {
        var fileBlobManager = new FileBlobManager();
        fileBlobManager.Delete(identifier);
        return RedirectToAction("Index", "Home");
    }
}

The view for Upload hasn't been created just yet, so that action will fail if invoked at this point. But before I add a view for this action, I'll cover what has been created for the controller.

The Index action I've changed moderately to show a list of the blobs stored in Windows Azure Blob Storage. This is pulled from the manager class we created earlier and passed into the view for rendering. I also, just for cosmetic reasons, changed the default display message passed into the ViewData so that the application would display something more relevant to the application.

The About action I just left as is. The Upload action simply returns the view we'll create next.

The UploadFile Action checks for files within the request, builds up the model and then puts the model into storage via the manager.

The last method is the Delete action, which instantiates the manager and then calls a delete against storage. This in turn traces back through, finds the Table & Blob entities related to the specific blob, and deletes both from the respective Windows Azure Table and Blob storage mediums.

The next step is to get the various views updated or added to enable the upload and deletion of the blob items.

Add a view titled Upload.aspx to the Home Folder of the Views within the JunkTrunk Project.

Upload View

First change the inherits value for the view from System.Web.Mvc.ViewPage to the strongly typed System.Web.Mvc.ViewPage<JunkTrunk.Models.BlobModel>. After that add the following HTML to the view.

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
	Upload an Image
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
	<h2>
		Upload</h2>
	<% using (Html.BeginForm("UploadFile", "Home", FormMethod.Post, 
        new { enctype = "multipart/form-data" }))
	   {%>
	<%: Html.ValidationSummary(true) %>
	<fieldset>
		<legend>Fields</legend>
	  
		<div class="editor-label">
			Select file to upload to Windows Azure Blob Storage:
		</div>
		<div class="editor-field">
			<input type="file" id="fileUpload" name="fileUpload" />
		</div>
		<p>
			<input type="submit" value="Upload" />
		</p>
	</fieldset>
	<% } %>
	<div>
		<%: Html.ActionLink("Back to List", "Index") %>
	</div>
</asp:Content>

After adding the HTML, change the HTML in the Index.aspx view to have an action link for navigating to the upload page and for viewing the list of uploaded blobs. First change the inherits from System.Web.Mvc.ViewPage to System.Web.Mvc.ViewPage<IEnumerable<JunkTrunk.Models.FileItemModel>>. The rest of the changes are listed below.

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>
        <%: ViewData["Message"] %></h2>
    <p>
        <%: Html.ActionLink("Upload", "Upload", "Home") %>
        a file to Windows Azure Blob Storage.
    </p>
    Existing Files:<br />
    <table>
        <tr>
            <th>
            </th>
            <th>
                FileName
            </th>
            <th>
                UploadedOn
            </th>
        </tr>
        <% foreach (var item in Model)
           { %>
        <tr>
            <td>
                <%: Html.ActionLink("Delete", "Delete", 
                new { identifier = item.ResourceId })%>
            </td>
            <td>
                <%: item.ResourceLocation %>
            </td>
            <td>
                <%: String.Format("{0:g}", item.UploadedOn) %>
            </td>
        </tr>
        <% } %>
    </table>
</asp:Content>

Make sure the Windows Azure Project is set as the startup project and click on F5 to run the application. The following page should display first.

The Home Page o' Junk Trunk

Click through on it to go to the upload page.

Selecting an Image to Put in The Junk Trunk

On the upload page select an image to upload and then click Upload. This will upload the image and redirect back to the home page.

The Image in the Junk Trunk

On the home page the list should now show the uploaded blob image. Click delete to delete the image. When deleted, the table entry and the blob itself are removed from Windows Azure Storage. To see that the data & image are being uploaded, open up the Server Explorer within Visual Studio 2010.

Visual Studio 2010 Server Explorer

View the data by opening up the Windows Azure Storage tree. Double click on either of the storage mediums to view table or blob data.

Windows Azure Storage

Put Stuff in Your Windows Azure Junk Trunk – Repository Base

Alright, so the title is rather stupid, but hey, it’s fun!  :)

This project I set up to provide some basic functionality with Windows Azure Storage.  I wanted to use each of the three mediums: Table, Blob, and Queue, and this example will cover each of them.  The application will upload and store images, provide a listing, some worker processing, and deletion of the images & associated metadata.  This entry is part 1 of this series, with the following schedule for subsequent entries:

Title aside, schedule laid out, description of the project completed, I’ll dive right in!

Putting Stuff in Your Junk Trunk

Create a new Windows Azure Project called PutJunkInIt.  (Click any screenshot for the full size, and also note some of the text may be off – I had to recreate a number of these images)

Windows Azure PutJunkInIt

Next select the ASP.NET MVC 2 Web Application and also a Worker Role and name the projects JunkTrunk and JunkTrunk.WorkerRole.

Choosing Windows Azure Projects

In the next dialog choose to create the unit test project and click OK.

Create Unit Test Project

After the project is created, the following projects are set up within the PutJunkInIt Solution.  There should be a JunkTrunk Project, a JunkTrunk.WorkerRole Project, a JunkTrunk Windows Azure Deployment Project, and a JunkTrunk.Tests Project.

Solution Explorer

Next add a Windows Class Library Project and title it JunkTrunk.Storage.

Windows Class Library

Add references to the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient assemblies to the JunkTrunk.Storage Project.  Rename the Class1.cs file and class to JunkTrunkBase.  Now open up the JunkTrunkBase.cs file in the JunkTrunk.Storage Project.  First add the following fields and constructor to the class.

public const string QueueName = "metadataqueue";
public const string BlobContainerName = "photos";
public const string TableName = "MetaData";
static JunkTrunkBase()
{
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed
            += (sender, arg) =>
                    {
                        if (!arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                                .Any(change => (change.ConfigurationSettingName == configName)))
                            return;
                        if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                        {
                            RoleEnvironment.RequestRecycle();
                        }
                    };
    });
}

After that add the following blob container and reference methods.

protected static CloudBlobContainer Blob
{
    get { return BlobClient.GetContainerReference(BlobContainerName); }
}
private static CloudBlobClient BlobClient
{
    get
    {
        return Account.CreateCloudBlobClient();
    }
}

Now add code for the table & queue client and reference methods.

protected static CloudQueue Queue
{
    get { return QueueClient.GetQueueReference(QueueName); }
}
private static CloudQueueClient QueueClient
{
    get { return Account.CreateCloudQueueClient(); }
}
protected static CloudTableClient Table
{
    get { return Account.CreateCloudTableClient(); }
}
protected static CloudStorageAccount Account
{
    get
    {
        return
            CloudStorageAccount
            .FromConfigurationSetting("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    }
}

This class now provides the basic underpinnings needed to retrieve the appropriate information from the configuration.  This base class can then provide that connection information to connect to the table, queue, or blob mediums.

The next step is to create some initialization code to get the containers created if they don't exist in Windows Azure.  Add a new class file to the JunkTrunk.Storage Project.

JunkTrunkSetup

public class JunkTrunkSetup : JunkTrunkBase
{
    public static void CreateContainersQueuesTables()
    {
        Blob.CreateIfNotExist();
        Queue.CreateIfNotExist();
        Table.CreateTableIfNotExist(TableName);
    }
}

Next add a reference to the System.Data.Services.Client assembly to the project.  After adding the assembly, add two new class files and name them BlobMeta.cs and Table.cs. Add the following code to the Table.cs class.

public class Table
{
    public static string PartitionKey;
}

Next add another class file and name it BlobMetaContext.cs and add the following code.

public class BlobMetaContext : TableServiceContext
{
    public BlobMetaContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        IgnoreResourceNotFoundException = true;
    }
    public IQueryable<BlobMeta> Data
    {
        get { return CreateQuery<BlobMeta>(JunkTrunkBase.TableName); }
    }
    public void Add(BlobMeta data)
    {
        data.RowKey = data.RowKey.Replace("/", "_");
        BlobMeta original = (from e in Data
                                where e.RowKey == data.RowKey
                                    && e.PartitionKey == Table.PartitionKey
                                select e).FirstOrDefault();
        if (original != null)
        {
            Update(original, data);
        }
        else
        {
            AddObject(JunkTrunkBase.TableName, data);
        }
        SaveChanges();
    }
    public void Update(BlobMeta original, BlobMeta data)
    {
        original.Date = data.Date;
        original.ResourceUri = data.ResourceUri;
        UpdateObject(original);
        SaveChanges();
    }
}

Now add the following code to the BlobMeta Class.

public class BlobMeta : TableServiceEntity
{
    public BlobMeta()
    {
        PartitionKey = Table.PartitionKey;
    }
    public DateTime Date { get; set; }
    public string ResourceUri { get; set; }
}

At this point, everything should build. Give it a go to be sure nothing got keyed in wrong (or copied in wrong). Once assured the build is still solid, add the Blob.cs Class to the project.

public class Blob : JunkTrunkBase
{
    public static string PutBlob(Stream stream, string fileName)
    {
        var blobRef = Blob.GetBlobReference(fileName);
        blobRef.UploadFromStream(stream);
        return blobRef.Uri.ToString();
    }
    public static Stream GetBlob(string blobAddress)
    {
        var stream = new MemoryStream();
        Blob.GetBlobReference(blobAddress)
            .DownloadToStream(stream);
        // Rewind so callers read from the beginning of the stream.
        stream.Position = 0;
        return stream;
    }
    public static Dictionary<string, string> GetBlobList()
    {
        var blobs = Blob.ListBlobs();
        var blobDictionary =
            blobs.ToDictionary(
                listBlobItem => listBlobItem.Uri.ToString(),
                listBlobItem => listBlobItem.Uri.ToString());
        return blobDictionary;
    }
    public static void DeleteBlob(string blobAddress)
    {
        Blob.GetBlobReference(blobAddress).DeleteIfExists();
    }
}

After that finalize the Table Class with the following changes and additions.

public class Table : JunkTrunkBase
{
    public const string PartitionKey = "BlobMeta";
    public static void Add(BlobMeta data)
    {
        Context.Add(data);
    }
    public static BlobMeta GetMetaData(Guid key)
    {
        return (from e in Context.Data
                where e.RowKey == key.ToString() &&
                e.PartitionKey == PartitionKey
                select e).SingleOrDefault();
    }
    public static void DeleteMetaDataAndBlob(Guid key)
    {
        var ctxt = new BlobMetaContext(
            Account.TableEndpoint.AbsoluteUri,
            Account.Credentials);
        var entity = (from e in ctxt.Data
                        where e.RowKey == key.ToString() &&
                        e.PartitionKey == PartitionKey
                        select e).SingleOrDefault();
        if (entity == null) return;
        ctxt.DeleteObject(entity);
        Repository.Blob.DeleteBlob(entity.ResourceUri);
        ctxt.SaveChanges();
    }
    public static List<BlobMeta> GetAll()
    {
        return (from e in Context.Data
                select e).ToList();
    }
    public static BlobMetaContext Context
    {
        get
        {
            return new BlobMetaContext(
                Account.TableEndpoint.AbsoluteUri,
                Account.Credentials);
        }
    }
}

The final file to add is the Queue.cs Class File. Add that and then add the following code to the class.

public class Queue : JunkTrunkBase
{
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }
    public static CloudQueueMessage GetNextMessage()
    {
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }
    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }
    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}

This now gives us a fully functional set of classes that utilize the Windows Azure SDK. In Part 2 I’ll start building on top of that using the ASP.NET MVC 2 Web Project. Part 2 will be published tomorrow, so stay tuned.
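To see the pieces working together before Part 2, here’s a quick usage sketch. The demo class, file content, and console output are just illustrations, not part of the project:

```csharp
using System;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;

public static class JunkTrunkDemo
{
    public static void Run()
    {
        // Upload a blob, record its metadata, and queue a notification.
        var rowKey = Guid.NewGuid();
        var bytes = Encoding.UTF8.GetBytes("Hello, Junk Trunk!");
        using (var stream = new MemoryStream(bytes))
        {
            var uri = Blob.PutBlob(stream, rowKey.ToString());
            Table.Add(new BlobMeta
            {
                RowKey = rowKey.ToString(),
                Date = DateTime.UtcNow,
                ResourceUri = uri
            });
            Queue.Add(new CloudQueueMessage(uri));
        }

        // Read the metadata back out by its row key.
        var meta = Table.GetMetaData(rowKey);
        Console.WriteLine(meta.ResourceUri);
    }
}
```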

Windows Azure Web, Worker, and CGI Roles – How They Work

This is a write up I’ve put together of how the roles in Windows Azure work.  As far as I know, this is all correct – but if there are any Windows Azure Team Members out there that wouldn’t mind providing some feedback about specifics or adding to the details I have here – please do add comments!  :)

Windows 2008 and Hyper-V

Windows Azure is built on top of Windows 2008 & Hyper-V. Hyper-V provides virtualization to the various instance types and allocation of resources to those instances. Windows 2008 provides the core operating system functionality for those systems and the Windows Azure Platform Roles and Storage.

The hypervisor that a Hyper-V installation implements does a few unique things compared to many of the other virtualization offerings in the industry. Xen (the open source virtualization software that Amazon Web Services uses) and VMware both use a shared resource model for utilization of physical resources within a system. This allows more virtualized instances to be started per physical machine, but can sometimes allow hardware contention. Hyper-V, on the other hand, pins a particular amount of resources to a virtualized instance, which decreases the number of instances allowed on a physical machine but prevents hardware contention. Both designs have their pluses and minuses, and in cloud computing these design choices are rarely evident. The context is important to know, however, when working with high end computing within the cloud.

Windows Azure Fabric Controller

The Windows Azure Fabric Controller is kind of the magic glue that holds all the pieces of Windows Azure together. The Fabric Controller automates all of the load balancing, switching, networking, and other network configuration. Usually within an IaaS environment you’d have to set up the load balancer, static IP addresses, internal DNS that allows for connection and routing by the external DNS, the switch configurations, the DMZ, and a host of other configuration, along with the ongoing maintenance for all of it. With the Windows Azure Platform and the Fabric Controller, all of that is taken care of entirely. Maintenance for these things goes to zero.

The Windows Azure Fabric Controller has several primary tasks: networking, hardware, and operating system management, service modeling, and life cycle management of systems.

The low level hardware that the Windows Azure Fabric Controller manages includes switches, load balancers, nodes, and other network elements. In addition it manipulates the appropriate internal DNS and other routing needed for communication within the cloud so that each URI is accessed seamlessly from the outside.

The service modeling that the Fabric Controller provides is to map the topology of services, port usage, and, as mentioned before, the internal communication within the cloud. All of this is done by the Fabric Controller without any interaction other than creating an instance or storage service within Windows Azure.

The operating system management from the Fabric Controller involves patching the operating system to assure that security, memory and storage, and other integral operating system features are maintained and optimized. This allows the operating system to maintain uptime and application performance characteristics that are optimal.

Finally, the Fabric Controller has responsibility for the service life cycle. This includes rolling out updates and configuration changes across update domains and fault domains. The Fabric Controller does so in a way that maintains uptime for the services.

Each role has at least one instance running. A role can, however, have multiple instances, with a theoretically limitless number. If an instance stops responding, the Fabric Controller recycles it and a new instance takes over. This can sometimes take several minutes, and is a core reason the 99.99% uptime SLA requires two instances within a role to be running. In addition, the recycled instance is rebuilt from scratch, destroying any data stored on the role instance itself. This is where Windows Azure Storage plays a pivotal role in maintaining Windows Azure Cloud Applications.

Web Role

The Windows Azure Web Role is designed as a simple-to-deploy IIS web site or services hosting platform. The Windows Azure Web Role can provide hosting for any .NET related web site such as ASP.NET, ASP.NET MVC, MonoRail, and more.

The Windows Azure Web Role provides this service hosting with a minimal amount of maintenance required. No routing or load balancing setup is needed; everything is handled by the Windows Azure Fabric Controller.

Uses: Hosting ASP.NET, ASP.NET MVC, MonoRail, or other .NET related web sites in a managed, high uptime, highly resilient, controlled environment.

Worker Role

A worker role can be used to host any number of things that need to pull, push, or run continuously without any particular input. A worker role can also be used to set up a schedule or other type of service. This provides a role dedicated to what could closely be compared to a Windows Service. The options and capabilities of a Worker Role, however, vastly exceed those of a simple Windows Service.
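Tying this back to the Junk Trunk code above, a worker role’s Run method is typically an endless polling loop. A minimal sketch, assuming the Queue repository class from Part 1 (the sleep interval and processing step are placeholders):

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Run should never return; returning causes the Fabric Controller
        // to treat the instance as failed and recycle it.
        while (true)
        {
            var msg = Queue.GetNextMessage();
            if (msg != null)
            {
                // ... process the message contents here ...
                Queue.DeleteMessage(msg);
            }
            else
            {
                // Nothing waiting; back off before polling again.
                Thread.Sleep(10000);
            }
        }
    }
}
```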

CGI Role

This service role is designed to allow execution of technology stacks such as Ruby on Rails, PHP, Java, and other non-Microsoft options.

Windows Azure Storage

Windows Azure Storage is broken into three distinct features within the service. Windows Azure provides table, blob, and queue storage for storage needs. Any of the Windows Azure Roles can also connect to storage to maintain data across service life cycle reboots, refreshes, and any temporary loss of a Windows Azure Role.

A note about Windows Azure Storage compared to most Cloud Storage Providers: None of the Azure Storage Services are “eventually consistent”. When a write is done, it is instantly visible to all subsequent readers. This simplifies coding but slows down the data storage mechanisms more than eventually consistent data architectures.


My Current Windows Development Machine Software Stack

I recently did a clean install of Windows 7 64-bit.  It had been a really long time since I listed the current tools, SDKs, and frameworks that I’ve been using.  Thus here’s my entourage of software that I use on a regular basis that is installed on my primary development machines.

Basic Software & System OS

Administration Utilities

Themes & Such

In addition to these software packages, the following software services and cloud hosting services are as important, if not more important, to my day-to-day software development.

SaaS, PaaS, and IaaS

Software I will be adding to the stack within the next few days, weeks, and months.