New Relic, The King Makers, MS Open Tech, Riak VMs and Life Gets Easier Today

Today Microsoft released the VM Depot, in partnership with a number of companies including Basho, Hupstream and Bitnami. I’ve always followed Bitnami, so it’s really cool to see their VM releases for Jenkins (CI Build Server), WordPress, the Ruby 1.9.3 stack, Node.js and about everything else you can imagine out there, alongside our Basho Riak CentOS image. If you want a great way to get kick started with Riak and you’re set up with Windows Azure, now there is an even easier way to get rolling.

Over on the Basho blog we’ve announced the MS Open Tech and Basho collaboration. I won’t repeat what was stated there, but I do want to point out two important things:

  1. Once you get a Riak image going, remember there’s the whole community and the Basho team itself there to help you get things rolling via the mailing list. If you’re looking for answers, you’ll be able to get them there. Even if you get everything running smoothly, join in anyway and at least just lurk. :)
  2. The RTFM value factor is absolutely huge for Riak. Basho has a superb documentation site here. So definitely, when jumping into or researching Riak as software you may want to build on, whether for your distributed systems or as a key-value database, check out the documentation. It’s super easy to find things, super easy to read, and really easy to get going with.

So give Riak a try on Windows Azure via the VM Depot. It gets easier by the day, and gives you even more data storage options, distribution capabilities, and a level of high availability that is hard to beat.

New Relic & The Rise of the New Kingmakers

In other news, my good friends at New Relic, in partnership with RedMonk analyst Stephen O’Grady, have released a book he’s written titled The New Kingmakers: How Developers Conquered the World. You may know New Relic as the huge developer advocates that they are, with the great analytics tools they provide. Either way, give it a look and read the book. It’s not a giant thousand page tome, so it just takes a nice lunch break and you’ll get the pleasure of flipping through the pages Stephen has put together. You might have read the blog entry that started the whole “Kingmakers” statement; if you haven’t, give that a read first.

I personally love the statement, and have used it a few times myself. In relation to the saying and the book, I’ll have a short review and more to say in the very near future. Until then…

Cheers, enjoy the read, the virtual images and happy hacking.

PaaS Help! Know any PaaS Providers?

I’ve been diligent and started a search for Platform as a Service providers; so far my list includes:

  • EngineYard
  • Heroku
  • AWS Beanstalk
  • Windows Azure
  • AppFog
  • Tier3
  • CloudFoundry
  • OpenShift
  • IBM PaaS
  • Google App Engine
  • CloudBees

Who else is there? Help me out in creating a list of every possible offering we can find! Please leave a comment or three below with any I’ve missed. Cheers and thanks!

Reality Distortion Field : 17 Companies’ Sitrep

I’m sitting on the bus this morning, as happens almost every day of the week. I’m flipping pages, sort of; it’s an eBook in my Kindle app. I’m reading about Steve Jobs taking over the Macintosh program at Apple, how things started to fall into place for Apple and for the Macintosh, and how Jobs saw what could be and pushed for it. Everybody else, Microsoft, Xerox, Canon, and practically every single other company, was missing it. Xerox PARC had it right in front of them, the GUI, the mouse, an object-oriented language, and about every single thing we assume for computer use and development today, but wasn’t doing anything with it. They were all missing it, except Jobs. The eccentric, crazed, reality distortion field generating Jobs pushed forward and found those that agreed: this was absolutely the future. Today’s computers owe so much to Jobs’ efforts to pull these people together around what he saw as the future, and our modern computing world will forever be indebted to Steve Jobs.

Howard Hughes had done this 50 years earlier. He simply stated, “Nobody wants to fly on a plane at 10k feet and get shaken to pieces; planes need to fly at 30,000 feet or more where the air is smooth!” He then went about working to get a plane built that could do this! The Government was in his way, the industry was fighting him, and everybody said this wasn’t the way to go: nobody could build a plane that would do that right now, it’s absurd. He did it anyway, and he bought every single one he could, putting the airline (TWA) in hock at the same time! But it paid off, and his airline had the nicest planes and the best flights in the world, easily. Today’s airlines are all modeled after this ideal; our modern travel owes a huge debt to what Howard Hughes pushed forward.

The competition, the fighting, pushed the envelope, but in both cases a visionary could see the future. To them it was as plain as an image on a clear sunny day. To them, the future didn’t need to be tomorrow; it was ready right now. The future just needed to be dragged kicking and screaming directly into today! They did this, they pulled together the people who could make these changes, and they and their teams yanked the future right into humanity’s grasp.

Utility Computing / Cloud Computing

With those thoughts flying around at Warp 10 in my mind, it occurred to me: we’re merely putting the motherboards and cassette tape drives together right now in cloud computing. We have no Macintosh of cloud computing, we have no clear direction; there has to be something bigger, much bigger. At this point we’re merely making small steps, slight little strides toward the future. What we need to do is create the future and pull it directly into now!

There could be more though. Some of these things are being put together by individuals at various companies, oriented toward the platform level. There is, somewhere, a growing movement toward that next big shift in the way things are done. The gap between big architectures, big ideas, and launching these things is decreasing by the day – literally!

With these big ideas and big architectures and all the small steps and small pieces the industry is moving in the right direction. We’ve experienced shifts over the years and some more are definitely coming up very soon!

The Playing Field : Sitrep

With these thoughts racing around I felt compelled to look at where the industry stands right now. These are in no particular order; they all provide some type of building block for the next big thing, each in some aspect of the industry.

Amazon Web Services : This one should not need explaining. They’re probably the most utilized, nearly the most advanced, robust, price-conscious utility storage, compute, and services provider in existence today. They continue to defeat the innovator’s dilemma over and over again; this company, and the departments within it, are hungry, very hungry, and they fight to stay in the lead.

Cloudability : This company is about keeping utility/cloud computing costs in check, and knowing where and when you’re pushing the pricing limits among all the various building blocks. There have been more than a few issues with billing, and with people blowing through budgets by inadvertently leaving their 1000-node EC2 clusters running; Cloudability helps devops keep these types of things under control, stopping overages cold!
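
As a trivial illustration of the kind of guardrail a tool like this provides, here’s a minimal sketch (hypothetical numbers and function names, not Cloudability’s actual API) that projects end-of-month spend from the current run rate and flags an overage:

    from datetime import date
    import calendar

    def projected_monthly_spend(spend_so_far, today=None):
        """Project end-of-month spend from the month-to-date run rate."""
        today = today or date.today()
        days_in_month = calendar.monthrange(today.year, today.month)[1]
        daily_rate = spend_so_far / today.day
        return daily_rate * days_in_month

    budget = 2500.00        # hypothetical monthly budget, in dollars
    spend_so_far = 1800.00  # hypothetical month-to-date spend from your provider's billing data
    projection = projected_monthly_spend(spend_so_far)
    if projection > budget:
        print(f"Warning: projected ${projection:,.2f} exceeds the ${budget:,.2f} budget")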

New Relic : The key to this offering is monitoring of everything, everywhere, all the time. New Relic offers absolutely beautiful charting and information displays around services, compute, storage, and a zillion other metrics among Ruby, PHP, Python, .NET, and about everything else available.

Puppet Labs : Imagine operations, IT, and systems administration all rolled into a single bad ass company’s product efforts. Imagine ways to automate and monitor virtualized machines and get them deployed, all with elegant and extremely powerful tools. Imagine that power now, and you’ll know what Puppet Labs provides.

Opscode : The cloud needs management, hard core powerful management. Opscode and their Chef product do just that. The influence of Chef has gone so far as to lead Amazon Web Services (and others) to design their systems automation in a way that enables Chef usage. The devops community around Opscode is growing, and the inroads to systems agility they’re making are getting to the point of being considered a disruptive market force!

Joyent : The birthplace of node.js, do I need to add more? Well, ok, I will. Joyent has a host of amazing devs and amazing ops goals. The advances coming out of Joyent aren’t always associated back to the company (maybe they should be) but rest assured there is some heavy duty research and dev going on over there. Things to check out would be their SmartDataCenter and of course the JoyentCloud.

MongoHQ : MongoHQ is one of the distributed cloud hosting providers for MongoDB. MongoHQ is also a supported provider in several of the other PaaS providers such as Heroku and AppHarbor.

MongoLabs : MongoLabs, another distributed cloud hosting provider for MongoDB, is also a supported provider in several of the other PaaS providers such as Heroku and AppHarbor.

Nodester : Nodester is a hosting solution for node.js applications beautifully distributed in a horizontal way.

Nodejitsu : One of the leading node.js hosting providers and a very active participant in the community in and around New York.

AppFog : AppFog is a Platform as a Service (PaaS Provider) that is working on providing a cloud based horizontally distributed platform for creating applications with a wide variety of frameworks and languages. Some of those include .NET, Ruby on Rails, Java, and many others.

PhpFog : This is the PHP root of the PaaS Provider AppFog. They have a good history and an absolutely spectacular architecture for PHP Applications with a screaming simple and fast deployment model to cloud/utility based systems. They have a really great product.

Heroku : Deploy Ruby, Node.js, Clojure, Java, Python, and Scala. Probably the leader in PaaS-based deployment right now. Got git, get Heroku, and git push heroku master is about all the gettin’ needed for your application to be running there.

EngineYard : Think Ruby, Ruby on Rails, Rubinius, or any other aspect of Ruby and you’ll probably arrive at EngineYard in short order. The teams at EngineYard are heavily active in the cloud & Ruby scene. They are easily one of the leaders in PaaS based git workflow deployment in the Ruby & Ruby on Rails Community. They also, however, support tons of other technologies so don’t think they’re limited to just Ruby & Rails.

AppHarbor : The .NET Framework, often thought to be left completely out in the cold when it comes to serious cloud computing git based agile work flows, finally got included with AppHarbor! With the release of AppHarbor the trifecta of IaaS and a solid PaaS offering were finally available for the .NET stack.

Windows Azure : Windows Azure is Microsoft’s official cloud service, which supports a host of capabilities centered around a mostly PaaS based service. Windows Azure has however spread into SaaS and IaaS also. Some of the frameworks and tools they support include Ruby on Rails, Java, PHP, .NET (of course), node.js, Hadoop and others.

CloudFoundry : Cloud Foundry is an open source PaaS Solution that serves to link up various back end and front end architectures. Currently it is supported by a host of companies including VMWare, AppFog, and others.

Putting the Pieces Together

That’s where we stand in the industry today. We have all the pieces, and they need to fit together to create something great, something awesome, something truly remarkable. I fully intend to create part of the future, will I see you there? I’d hope so!

High Availability From Non-High Availability: OpenStack, Dell, Crowbar, Private Clouds, and Moving the Enterprise Forward…

The Environment

Recently a conversation came up about high availability in a traditional Enterprise Environment. Let me paint the picture for this environment:

“This environment has several hundred servers, and several hundred applications. These applications range from simple client-server applications to n-tier applications strung across multiple services and machines. Some are resilient, some are not so resilient. The administration of these applications ranges from needing to be rebooted on a daily basis to not being touched for months at a time. Needless to say the range of applications is vast.

In addition to all these applications, the data center had a mix of hardware concerns that directly affected how applications were built.”

With that basic idea, one can imagine that planning for high availability is by no means a simple thing. However, there are opportunities now available that Enterprises have never had previously. In the past an Enterprise would usually have some big heavy hitter come in, such as EMC. The Enterprise would then pay them hundreds of thousands or even millions of dollars to do an analysis. Then the Enterprise would probably fork over another couple hundred grand here and there. This would happen time and time again, until some level of high availability was achieved.

Well, to put it simply, a lot of that effort is unneeded today. The effort that is needed, with the right team, is in the hands of the Enterprise itself. Some people who know me would immediately think I’m about to say “just set up an account at AWS and build to the cloud…”, which is obviously the easiest, most secure, and most progressive route to go. But no, I’m going to step in with some other solutions that can be provided on-premise. I’ll elaborate at a later time on the reasons behind this.

I’m going to now step through some key technologies available today. These can be used to provide high availability from the software architecture point of view. In your enterprise, if you have off-shored, outsourced, or otherwise attained your Enterprise Software, these functionalities and capabilities will be up to the provider that created it. You’ll have to go to them for further information on how to change or adapt the architecture.

Software Architecture

For in-house software, here are some APIs, SDKs, and tools to help attain that much sought-after high availability (always aiming for those mythical five 9s).
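
For a sense of what those 9s actually demand, here’s a quick back-of-the-napkin calculation (a minimal Python sketch, just to make the numbers concrete) of the downtime each availability level allows per year:

    # Downtime budget per year for a given number of "nines" of availability.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines in range(2, 6):
        availability = 1 - 10 ** -nines
        downtime_minutes = MINUTES_PER_YEAR * 10 ** -nines
        print(f"{nines} nines ({availability:.5f}) -> ~{downtime_minutes:.1f} minutes of downtime per year")

Five 9s works out to roughly five minutes of downtime for an entire year, which is exactly why the word “mythical” fits.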

Dell’s Cloud Solutions

So without significant research time, the Dell Solutions can be thoroughly confusing at first glance. They don’t offer anything related to actual “cloud services” such as AWS, Windows Azure, or Rackspace. What they’re simply offering is hardware to build out resilient data centers and contributing actively to open source software solutions.

The Dell Cloud Edge Software is available on Github at dellcloudedge. The best places to start researching what is available are on two key blogs; JBGeorge Tech Blog and Rob Hirschfeld’s Blog.

Another key part of the Dell Solution is Crowbar. Dell open sourced Crowbar at the 2011 OSCON Conference. Even though most of the sample configurations revolve around Dell PowerEdge Servers and Rackspace Cloud Builder Solutions, the software is available for use on systems that are completely unrelated to Rackspace or Dell solutions. Crowbar, simply put, is the software used to get servers up and running. As quoted in the Dell announcement released during OSCON,

“Bringing up a cloud can be no mean feat, as a result a couple of our guys began working on a software framework that could be used to quickly (typically before coffee break!) bring up a multi-node OpenStack cloud on bare metal. That framework became Crowbar. What Crowbar does is manage the OpenStack deployment from the initial server boot to the configuration of the primary OpenStack components, allowing users to complete bare metal deployment of multi-node OpenStack clouds in a matter of hours (or even minutes) instead of days.”

That quote now brings up the next piece of software, OpenStack. When building out a data center it is a solid idea to begin with a platform on which things will operate. OpenStack enables just that. There are two major elements of OpenStack that are key: OpenStack Compute and OpenStack Storage. This is where the architectural paradigm begins to change dramatically for traditional software. This is also where there will be a major sticking point for traditional Enterprise Software that relies primarily on a database on a server, with a web server on a server, and maybe some middleware or a service bus on another server. The massive problem is that applications need to focus on horizontal scalability, with compute and storage being the two key elements.

In many enterprises this is unfortunate, because a safe estimate would be that 95% or more of enterprise applications don’t scale horizontally, or don’t scale at all. If you’re an SOA shop, you’re much farther along than most. Most enterprises simply rely on the traditional vertical stack. This is a major problem. So how do we bridge this gap between the compute-plus-storage architectural design goal and traditional architecture? That’s where the following software comes to the rescue.
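
To make the contrast concrete, here’s a minimal sketch (the names are purely illustrative) of the share-nothing style horizontal scaling demands: worker nodes keep no local state, all of it lives in external storage, so any node can serve any request and nodes can be added or removed freely:

    class SharedStore:
        """Stand-in for external storage (OpenStack Storage, S3, a distributed database, etc.)."""
        def __init__(self):
            self._data = {}

        def get(self, key):
            return self._data.get(key)

        def put(self, key, value):
            self._data[key] = value

    def handle_request(store, session_id, payload):
        # All state lives in the shared store, not on the node handling the request,
        # so this function can run on any of N identical worker nodes.
        session = store.get(session_id) or {}
        session.update(payload)
        store.put(session_id, session)
        return session

    store = SharedStore()
    handle_request(store, "session-42", {"cart": ["riak-book"]})
    handle_request(store, "session-42", {"user": "adron"})  # could just as easily be another node

The traditional vertical stack assumes the opposite, that one big box holds the state, which is exactly why it hits a ceiling.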

Windows Server AppFabric

Windows Azure AppFabric (Click to visit the MS Azure AppFabric Site)

(Not to be confused with the Windows Azure AppFabric; for the differences review this article)

Windows Server AppFabric (Click for the MS Site)

The Windows Server AppFabric has several capabilities that help an enterprise application leap forward into the modern era of horizontal scalability, with a clearer way to focus on compute and storage. The feature set of AppFabric includes these key functions that enable this leap forward (more information is available in this article):

  • Workflow Instance Management
  • Scaling Out Distributed Applications

These by no means are the only features of AppFabric. For a thorough description of scenarios and applications around AppFabric check out Introducing Windows Server AppFabric.

In Summary

Where does this leave Enterprise Environments? The simple answer is, “A really long way away from achieving the scalability, cost savings, integrity, agility, and capabilities of public cloud computing”. You can quote me on that. The ability to achieve the data integrity and uptime needed to perform standard business is already here for Enterprises, but to go beyond that and extend hours of operation, achieve five 9s of uptime, and decrease costs in a dramatic way is generally cost prohibitive in private cloud infrastructure and especially in traditional data center operations. The fact is, things will still go down. Applications are a long way from being resilient, idempotent, or designed with an architecture that embraces the public cloud’s “design for failure” concept.

So what to do about this? The best thing for an Enterprise Application Environment is simply to start building applications with horizontal scalability in mind. Build with the concept of systems being nodes, with idempotent messaging and clear, redundant messaging queues, and think in a resilient architectural style instead of the traditional vertical mindset, even while limited by traditional data centers and virtualization technologies.
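
As a rough sketch of what idempotent messaging looks like in practice (purely illustrative, with Python’s in-process queue standing in for whatever redundant queueing system you actually use): each message carries an ID, and a consumer that has already processed that ID simply skips it, so redelivery does no harm:

    import queue

    processed_ids = set()       # in production this set would live in durable, shared storage
    work_queue = queue.Queue()  # stand-in for a real, redundant message queue

    def handle(message):
        msg_id, body = message["id"], message["body"]
        if msg_id in processed_ids:
            return  # duplicate delivery: safe to ignore, which is what makes this idempotent
        # ... do the actual work with `body` here ...
        processed_ids.add(msg_id)

    work_queue.put({"id": "order-1001", "body": {"sku": "ABC", "qty": 2}})
    work_queue.put({"id": "order-1001", "body": {"sku": "ABC", "qty": 2}})  # redelivered duplicate
    while not work_queue.empty():
        handle(work_queue.get())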

These tools I’ve outlined can help your Enterprise move forward in a traditional data center environment, a private cloud infrastructure environment, and be prepared for public cloud scale and capabilities.

Following Good Practice, The Negative Bits About Windows Azure First, But Gems Included! :D

Ok, I’ve used Windows Azure steadily over the last year and a half.  I’ve fought with the SDK so much that I stopped using it. I decided I’d put together this recap of what has driven me crazy and then put together something about the parts that I really like, the awesome bits, the parts that have the greatest potential with Windows Azure. So hold on to your hats, this may be hard hitting.  ;)

First the bad parts.

The Windows Azure SDK

Ok, the SDK has driven me nuts. It has had flat out errors, sealed (bad) code, and is TIGHTLY COUPLED to the development fabric. I’m a professional, I can mock that, I don’t need kindergarten level help running this! If I have a large environment with thousands of prospective nodes (or even just a few dozen instances) the development fabric does nothing to help. I’d rate the SDK’s closed (re: sealed/no interfaces) nature and the development fabric as the number 1 reasons that Windows Azure is the hardest platform to develop for at large scale in Enterprise Environments.

Pricing Competitiveness? Ouch. :(

Windows Azure is by far the most expensive cloud platform or infrastructure to use on the market today. AWS, when priced out for specific scenarios, comes in anywhere from 2/3 of the price to 1/6 of the price. Rackspace in some circumstances comes in at the crazy low price of 1/8 as much as Windows Azure for similar capabilities. I realize there are certain things that Windows Azure may provide that the others may not, and that in some rare circumstances Azure may come in lower, but that is rare. If Windows Azure wants to stay primarily, and only, an Enterprise offering then this is fine. Nailing Enterprises with expensive offerings and SLA myths is exactly what Enterprises want: the peace of mind of an SLA. They don’t care about pricing.

But if Windows Azure wants to play in new business, startups especially, mid-size business, or even small enterprises, then the pricing needs a fix. We’re looking at disparities of $500 versus $3,500 in some situations. This isn’t exactly feasible as a way to get into cloud computing. Microsoft, unfortunately for them, has to drop this dream of maintaining revenues and profits at the same rate as their OS & Office sales. Fact is, the market has already commoditized the pricing in this sector.

Speed, Boot Time, Restart, UI Admin Responsiveness

The Silverlight interface is beautiful, I’ll give it that. But in most browsers aside from IE it gets flaky. Oh wait, no, I’m wrong. It gets flaky in all the browsers. Doh! This may be fixed now, but in my experience and that of others I’ve paired with, we’ve watched in Chrome, Opera, Safari, Firefox, and IE when odd things have happened. This includes an instance spinning as if starting up when it is already started, or spinning and spinning until a refresh is done and the instance has completely disappeared! I’ve refreshed the Silverlight UI and had it just stop responding to communication entirely (and this wasn’t even on my machine).

The boot time for an instance is absolutely unacceptable for the Internet, for web development, or otherwise. Boot time should be similar to a solid Linux instance. I don’t care what needs to be done, but the instances need to be cleaned up, the architecture changed, or the OS swapped out if need be. I don’t care what OS the cloud is running on, but my instance should be live for me within 1-2 minutes or LESS. Rackspace, Joyent, AWS, and about every single cloud provider out there currently boots an instance in about 45 seconds, sometimes a minute, but often less. I know there are workarounds, the whole leave-it-running-while-you-deploy method and other such notions, but those don’t always work out. Sometimes you just need the instance up and running and you need it NOW!

Speed needs measurement to prove out in tests. Speed needs to be observed. I need analytics on the speed of the instance I’m choosing. I don’t know if it is pegged, I don’t know if it is idle and not responding; in Windows Azure I have no easy way to tell. The speed, in general, seems to be really good on Windows Azure. Often it even appears to be better than the others, but rarely can I really prove it. It’s just a gut feeling that it is moving along well.
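
Even a crude probe beats a gut feeling. Here’s a minimal sketch (the URL is a placeholder for some endpoint on your instance) that samples round-trip times so you have actual numbers to compare:

    import time
    import urllib.request

    def sample_latency(url, samples=10):
        """Return round-trip times, in milliseconds, for simple GETs against an endpoint."""
        timings = []
        for _ in range(samples):
            start = time.time()
            urllib.request.urlopen(url, timeout=10).read()
            timings.append((time.time() - start) * 1000)
        return timings

    timings = sample_latency("http://your-instance.example.com/health")  # placeholder URL
    print(f"min {min(timings):.0f} ms, max {max(timings):.0f} ms, "
          f"avg {sum(timings) / len(timings):.0f} ms")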

So, those are the negatives; speed, boot time, admin UI responsiveness, pricing, and the SDK. Now it is time for the wicked awesome cool bits!

Now, The Cool Parts

Lock In With Mort

This topic you’d have to ask me about in person, many people would be offended by this and I mean no offense by it. The reality is many companies will continue to get and hire what they consider to be plug and play replaceable developers – AKA “mort”. This is really bad for developers, but great for Windows Azure. In addition Windows Azure provides an option to lock in. It is by no means the only option – because by nature a cloud platform and services will only lock you in if YOU allow yourself to be. But providing both ways, lock in or not, is a major boost for Windows Azure also. Hopefully, I’ll have a presentation in regards to this in the near future – or at least find a way to write it up so that it doesn’t come off as me being a mean person, because I honestly don’t intend that.

Deploy Anything, To The Platform

Having a platform to work with instead of starting purely at infrastructure is HUGE for most companies. Not all, but most companies would benefit in a massive way from writing to the Azure Platform instead of to single instances like EC2. The reason boils down to this: Windows Azure abstracts out most of the networking, ops, and other management that a company has to do. Most companies have either zero, or very weak, ops and admin capabilities. This fact will, in many companies, actually bring the (I hate saying this) TCO, or Total Cost of Ownership, down for companies building to the Windows Azure Platform versus the others. Because really, the real cost in all of this is the human cost, not the services, as they’re commoditized. Again though, this is for small non-web-related businesses, as web companies need to have ops capabilities; their people absolutely must understand and know how the underpinnings work. If routing, multi-tenancy, networking and other capabilities are to be used to their fullest extent, infrastructure needs to be abstracted but also accessible. Windows Azure does a good deal of infrastructure, and it looks like there will be more available in the future. That will be when the platform actually becomes much more valuable for the web side of the world that demands control, network access, SEO, routing, multi-tenancy, and other options like this.

With the newer generation of developers and others coming out of colleges there is a great idea here and a very bad one. Many new-generation developers, if they want the web, are jumping right into Ruby on Rails. Microsoft isn’t even a blip on their radar; however, there still manage to be thousands that give Microsoft .NET a look, and for them Windows Azure provides a lot of options, including Ruby on Rails, PHP, and more. Soon there will even be some honest to goodness node.js support. I even suspect that the node.js support will probably be some of the fastest performing node.js implementations around. At least, the potential is there for sure. This latter group of individuals coming into the industry these days is who will drive the Windows Azure Platform to achieve what it can.

.NET, PHP, and Ruby on Rails Ecosystem (Note: I don’t support the theft of this word, but I’ll jump on the “ecosystem” bandwagon, reluctantly)

Besides the simple idea that you can deploy any of these to an “instance” in other environments, Windows Azure (almost) makes every one of these a first class platform citizen. Drop the SDK is my advice, my STRONG advice, and go the RESTful services usage route. Once you do that you aren’t locked in, you can abstract for Windows Azure or any cloud, and you can utilize any of these framework stacks. This, technically, is HUGE to have these available at a platform level. AWS doesn’t offer that, Rackspace doesn’t even dream of it yet, OpenStack doesn’t enable it, and the list goes on. Windows Azure, that’s your option in this category.
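
What I mean by abstracting is nothing fancy; it can be as simple as hiding storage behind your own interface and wiring each provider’s REST endpoints up behind it. A minimal sketch (class and method names are mine, not any vendor’s SDK):

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """Your own seam: application code talks to this, never to a vendor SDK directly."""
        @abstractmethod
        def put(self, container, name, data): ...
        @abstractmethod
        def get(self, container, name): ...

    class AzureBlobStore(BlobStore):
        def put(self, container, name, data):
            # Issue the Azure Blob Storage REST call (a PUT against the blob URL with the
            # signed auth headers) here; omitted for brevity in this sketch.
            raise NotImplementedError
        def get(self, container, name):
            raise NotImplementedError

    class S3BlobStore(BlobStore):
        def put(self, container, name, data):
            # Issue the S3 REST call here; omitted for brevity.
            raise NotImplementedError
        def get(self, container, name):
            raise NotImplementedError

    def save_report(store: BlobStore, report_bytes):
        # Application code depends only on BlobStore, so swapping clouds is a wiring change.
        store.put("reports", "monthly.pdf", report_bytes)

Application code only ever sees BlobStore, so moving between Windows Azure, AWS, or anything else becomes a wiring change rather than a rewrite.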

The Other MASSIVE Coolness is not Core Windows Azure Features, but They Provide a HUGE Plus for Windows Azure

The add-ons to SQL Server are HUGE for enterprises: BI Reporting, SQL Server Reporting, etc. These features are a no brainer for an enterprise. Yes, they provide immediate lock in. Yes, it doesn’t really matter for an enterprise. But here’s the saving grace for this lock in. With the Service Bus and Access Control you can use single sign on to use this and OTHER CLOUD SERVICES in a very secure and safe manner in your development. These two features alone, whether you use other Windows Azure features or not, are worth using, even with AWS, Rackspace, or one of the others. The Service Bus and Access Control add a lot of capabilities to any type of cloud architecture, which comes in useful for enterprise environments and is practically a requirement for mixed on-premise and in-cloud environments (which, it seems, almost all environments are).

Other major pluses that I like with Windows Azure include:

  • Azure Marketplace – Over time, and if marketed well, this could become a huge asset to companies big and small.
  • SQL Azure – SQL Azure is actually a pretty solid database offering for enterprises. Since a lot of Enterprises have already locked themselves into SQL Server, this is a great offering for those companies. However, I’m mixed on its usage versus lower priced MySQL usage, or others for that matter. It definitely adds to the overall Windows Azure capabilities though, and as time moves forward and other features (such as SSIS, etc.) are added to Azure this will become an even greater differentiator.
  • Caching – Well, caching is just awesome isn’t it? I dig me some caching. This offering is great. It isn’t memcached or some of the others, but it is still a great offering, and again, one of those things that adds to the overall Windows Azure capabilities list (the basic cache-aside pattern sketched below is the idea). I look forward to Microsoft adding more and more capabilities to this feature.  :)
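
For anyone who hasn’t used a cache this way, the cache-aside pattern most of these offerings encourage looks roughly like this (a sketch, with a plain dict standing in for the hosted cache service):

    cache = {}  # stand-in for the hosted cache (AppFabric Caching, memcached, etc.)

    def expensive_lookup(key):
        # Pretend this hits a database or a remote service.
        return f"value-for-{key}"

    def get_with_cache(key):
        if key in cache:
            return cache[key]          # cache hit: skip the expensive call
        value = expensive_lookup(key)  # cache miss: do the work...
        cache[key] = value             # ...then store the result for next time
        return value

    print(get_with_cache("user:42"))  # miss, populates the cache
    print(get_with_cache("user:42"))  # hit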

Summary

Windows Azure has grown and matured a lot over the time since its release from beta. It still, however, has some major negatives compared to more mature offerings. But there is light at the end of the tunnel for those choosing the Windows Azure route, or those that are getting put onto the Windows Azure route. Some of those things may even help it leap ahead of some of the competition at some point. Microsoft is hard core in this game and they’re not letting up. If anyone has failed to notice, they still have one of the largest “war chests” on Earth to play in new games like this, even when they were initially ill prepared. I do see myself using Windows Azure in the future, maybe not extensively, but it’ll be there. And whether they win a large share of the market or not, Microsoft putting this much money into the industry will push all ships forward in some way or another!

Cloud Failure, FUD, and The Whole AWS Outage…

Ok.  First a few facts.

  • AWS has had a data center problem that has been ongoing for a couple of days.
  • AWS has NOT been forthcoming with much useful information.
  • AWS still has many data centers and cloud regions/etc up and live, able to keep their customers up and live.
  • Many people have NOT built their architecture to be resilient in the face of an issue such as this.  It all points to the mantra to “keep a backup”, but many companies have NOT done that.
  • Cloud Services are absolutely more reliable than comparable hosted services, dedicated hardware, dedicated virtual machines, or other traditional modes of compute + storage.
  • Cloud Services are currently the technologically superior option for compute + storage.

Now a few personal observations and attitudes toward this whole thing.

If your site is down because of a single point of failure, that is your bad architectural design, plain and simple. You never build a site like that if you actually expect to stay up 99.99% or even 90% of the time. Anyone in the cloud business, SaaS, PaaS, hosting or otherwise, should know better than that. Every time I hear someone from one of these companies whining about how it was AWS’s responsibility, I ask: is the auto manufacturer responsible for the 32,000 innocent dead Americans in 2010? How about the 50,000 dead in the year of peak automobile deaths? Nope, those deaths are the responsibility of the drivers. When you get behind the wheel you need to, you MUST, know what power you wield. You might laugh, you might jest that I use this analogy, but I wanted to use an example à la Frédéric Bastiat (if you don’t know who he is, check him out: Frédéric Bastiat). Cloud computing, and its use, is the responsibility of the user to build their system well.

One of the common things I keep hearing over and over about this is, “…we could have made our site resilient, but it’s expensive…”  Ok, let me think for a second. Ummm, I call bullshit. Here’s why. If you’re a startup of the most modest means, you probably need to have at least 100-300 dollars of services (EC2, S3, etc.) running to make sure your site can handle even basic traffic and a reasonable business level (i.e. 24/7 operation, some traffic peaks, etc.). With just $100 one can set up multiple EC2 instances in DIFFERENT regions, load balance between them, and make sure they’re utilizing a logical storage medium (e.g. RDS, S3, SimpleDB, Database.com, SQL Azure, and the list goes on and on). There is zero reason that a business should have their data stored ON the flippin’ EC2 instance. If it is, please go RTFM on how to build an application for the Internets. K Thx. Awesomeness!!  :)
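
To put a little code behind that claim, here’s a minimal sketch of standing up an instance in two different regions with the boto3 AWS SDK for Python (the AMI IDs and instance type are placeholders you’d swap for real values):

    import boto3

    REGIONS = {
        "us-east-1": "ami-xxxxxxxx",  # placeholder AMI ID for this region
        "us-west-2": "ami-yyyyyyyy",  # placeholder AMI ID for this region
    }

    for region, ami in REGIONS.items():
        ec2 = boto3.client("ec2", region_name=region)
        response = ec2.run_instances(
            ImageId=ami,
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
        )
        instance_id = response["Instances"][0]["InstanceId"]
        print(f"Launched {instance_id} in {region}")

Put a load balancer or DNS failover in front of those and keep the data in a replicated store, and a single data-center problem stops being a site-down event.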

Now there are some situations, like when Windows Azure went down (yeah, the WHOLE thing) for about an hour or two a few months after it was released. It was, however, still in “beta” at the time. If ALL of AWS went down, then the people who have not built a resilient system could legitimately complain right along with anyone else that did build a legitimate system. But those companies, such as Netflix, AppHarbor, and thousands of others, have not had downtime because of this data center problem AWS is having. Unless you’re on one instance because you want to keep your bill around $15 a month, I see ZERO reason that you should still be whining. Roll your site up somewhere else, get your act together and ACT. Get it done.

I’m honestly not trying to defend AWS either. On that note, their response time and responses have been absolutely horrible. There have been zero legitimate social media, forum, or other responses that resemble a solid technical answer or status for this problem. In addition, Amazon has allowed the media to run wild with absolutely inane and sensational headlines and often poorly written articles. From a technology company, especially of Amazon’s capabilities and technical prowess (generally, they’re YEARS ahead of others), this is absolutely unacceptable and disrespectful on a personal level to their customers; Amazon needs to mature their support and public interaction along with their technology.

Now, enough of me berating those that have fumbled because of this. Really, I do feel for those companies and would be more than happy to help straighten out architectures for these companies (not for free). Matter of fact, because of this I’ll be working up some blog entries about how to put together a geographically resilient site in the cloud. So far I’ve been working on that instead of this rant, but I just felt compelled after hearing even more nonsense about this incident to add a little reason to the whole fray. So stay tuned and I’ll be providing ways to make sure that a single data-center issue doesn’t tear down your entire site!

UPDATE:  If you know of a well written, intelligent response to this incident, let me know and I’ll add the link here.  I’m not linking to any of the FUD media nonsense though, so don’t send me that junk.  :)  Thanks, cheers!

Cloud Formation

Here are the presentation materials that I’ve put together for tonight.


Check my last two posts regarding the location & such: