Jul 26, 2010

If you’re an enterprise considering “Windows Server as a Service” you need to be aware of how IaaS vendors would prefer you to deploy and provision your server images. It’s not just a technical issue; it’s a change in market philosophy. We examine this change from a business perspective, to help you estimate the real costs of cloud deployment.

Perhaps Ray Bradbury would be the person who could best appreciate this newest metaphor from the burgeoning realm of cloud computing: Imagine a chapter from The Martian Chronicles where, in a distant city, people are not educated but instead “provisioned.” In their youth, they’re bred and raised only to be adequate vessels for the information that would be imprinted on their minds like a laser etching — information that instantaneously enables each individual to perfectly fulfill a designated role in society. When a vessel fails, as vessels are prone to do, the image of that perfect function can be recalled and re-imprinted from a backup. In such a world, what constitutes a person’s identity for the sake of his function in society would be relocated from the integrity of his mind to the reliability of his image. And inevitably, better images could be replicated, distributed, and given market value.

It Was Inevitable: The App Store for Servers

In the real world, in data centers everywhere today, the thing that we refer to with the term “server” is changing form. It’s moving from the box with the processors and the network cable and the cooling, to a unit of fully installed, pre-activated, self-sustaining software. And even the notion of infrastructure is moving from a physical to a virtual foundation.

The Windows Server provisioning table from version 3.0 of GoGrid.

GoGrid’s system is the first to employ Windows Server 2008 in a stable, “non-beta” scenario (Amazon EC2 still relies on Windows Server 2003, and GoGrid’s offering is not yet 2008 R2). This screen capture from the newly released version 3.0 of GoGrid’s management interface shows where customers can instantiate fully configured Windows Servers on the fly. Notice the separate instances that include pre-installed middleware such as Microsoft SQL Server and IIS, which are necessary components for a number of Web-centered applications (for example, content management systems like Umbraco). And in the first of what may turn out to be many partner-sponsored selections for GoGrid, notice the rather surprising entry for Hedgehog 3.5.2. This refers to Sentrigo Hedgehog, a database activity monitoring tool that scouts for suspicious usage patterns and applies policy-based responses; it first rose to prominence with Oracle and now works with SQL Server and Sybase.

This is the server market of the 2010s: a kind of app store where the product for sale is a fully defined set of IT functions. The metaphor thus comes full circle, producing what a customer of cloud service provider BlueLock called “service-as-a-product.”

A screenshot from The Cloud Market, a Web site that exclusively dishes up Linux and Windows Server images for Amazon EC2.

For Amazon’s Elastic Compute Cloud service (EC2), the server image market has not only matured; it’s already being franchised. The Cloud Market is an independent service for the marketing of preconfigured server images already prepared for EC2. You never have to download the software locally: you claim the image with the buildout you need, and the transfer takes place directly between the Market and Amazon.
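
For readers who want to see what “claiming” an image looks like in practice, here is a minimal sketch assuming the open-source boto library for Python; the AMI ID and key pair name are hypothetical placeholders for whatever a marketplace listing specifies, and this is an illustration rather than The Cloud Market’s own mechanism.

```python
# A minimal sketch (not The Cloud Market's mechanism): launching a
# preconfigured server image on EC2 with the open-source boto library.
# The AMI ID and key pair name below are hypothetical placeholders.
import boto

conn = boto.connect_ec2()  # AWS credentials read from the environment

reservation = conn.run_instances(
    "ami-12345678",            # image ID copied from the marketplace listing
    instance_type="m1.small",  # EC2's smallest "Standard" size
    key_name="my-keypair",     # an existing EC2 key pair for remote access
)
instance = reservation.instances[0]
print(instance.id, instance.state)  # e.g. "i-abcdef01 pending"
```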

The Newest Subsidiary Market: Image Maintenance

Economists typically point out that a clear sign of a healthy market is the presence of little subsidiary markets that sprout around it like mushrooms. In the cloud space, these new markets include some that were originally intended for conventional data centers, but which are finding new life in the cloud.

An established backup service named Zmanda is one example. Its recently announced Zmanda Enterprise 3.1 package is mindful of cloud deployments and uses Amazon’s EC2 as its own platform for disaster recovery. Making use of Microsoft’s Volume Shadow Copy Service (which, for reasons too numerous to explain here, is abbreviated VSS), Zmanda effectively places backup agents in the midst of major middleware installations.

“To be honest, [backup] used to be much more complex than it is today,” admits Zmanda CEO Chander Kant. “It used to be that basically you had to do some kind of reverse engineering based on what Microsoft was giving you, figure out what had changed, and extract it out. Today Microsoft provides VSS, and we interact with VSS to get live data out of Microsoft Exchange, SQL Server, and SharePoint.” Kant adds that his software agents can also determine what data has changed at a more incremental level.
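
To illustrate the kind of snapshot VSS hands to a backup agent (this is Windows’ built-in tooling, not Zmanda’s implementation), a script can request a shadow copy and then read files from it at leisure:

```python
# Illustration only, not Zmanda's code: requesting a VSS shadow copy
# through Windows' built-in vssadmin tool, so that files held open by
# Exchange or SQL Server can be read in a consistent state. Requires an
# elevated prompt on a Windows Server SKU.
import subprocess

subprocess.run(["vssadmin", "create", "shadow", "/for=C:"], check=True)
# "vssadmin list shadows" then reports the shadow copy's device path,
# from which a backup agent can copy files without blocking the apps.
```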

That’s not substantially different from other backup services. What is different is the disaster recovery component. Zmanda effectively uses the data acquired from those agents to enable an emergency image of sorts to be deployed on Amazon’s infrastructure, in case Zmanda’s clients go down (something which can still happen, even in the cloud). It’s not an easy setup process, so although documentation is available, Kant tells us many customers end up retaining Zmanda as a kind of auxiliary architectural consultant.

“Let’s say you have an internal private cloud, with Windows Server 2008 R2, and you’re backing it up to Amazon S3 (Simple Storage Service). You can create an instance of Windows Server 2008 R2 on EC2, and have that as your disaster recovery location. We do not charge extra for that, [but] if you want us to set this up for you, we will charge you a consulting fee.”
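
A minimal sketch of the off-site half of that arrangement might look like the following; it assumes the open-source boto library, the bucket and file names are hypothetical, and this is an illustration rather than Zmanda’s code.

```python
# Sketch of pushing a local backup archive to Amazon S3, the off-site
# store a standby EC2 instance could later restore from. Not Zmanda's
# code; bucket and file names are hypothetical placeholders.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()                        # credentials from environment
bucket = conn.get_bucket("example-dr-backups")  # hypothetical bucket

key = Key(bucket, "winserver2008r2/backup-latest.zip")
key.set_contents_from_filename("backup-latest.zip")
```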

The Four Dimensions of Cloud Provisioning

With conventional servers, you pay not only for the mechanism but the space it consumes, the work needed to keep it functional within that space, the licensing of the software installed on it, and the upkeep and maintenance of that software.

By contrast, this new class of server images consumes four dimensions of configuration space, all of which are measured more granularly (a rough cost-model sketch follows the list):

  • Time: The memory required for each image to be hosted within the cloud data center, often payable by how much time the memory is actively available (what GoGrid calls the “gigabyte-hour”);
  • Space: The bandwidth consumed by the image’s Internet connection during its normal course of operation, payable by the megabyte;
  • Breadth: The storage necessary to host the database accessed by the software and middleware these images will run;
  • Depth: The software that makes the image functional, typically including the operating system — for which Windows Server is rarely, if ever, the least expensive option.
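
Treating those four dimensions as parameters yields the rough cost-model sketch promised above. Every rate here is a placeholder parameter, not any provider’s published price list.

```python
# A rough monthly cost model over the four dimensions above. All rates
# are placeholder parameters, not any provider's published prices.
HOURS_PER_MONTH = 24 * 30

def monthly_image_cost(ram_gb, ram_rate_per_gb_hour,       # time
                       transfer_gb, transfer_rate_per_gb,  # space
                       storage_gb, storage_rate_per_gb,    # breadth
                       software_rate_per_hour):            # depth (e.g. the Windows premium)
    return (ram_gb * ram_rate_per_gb_hour * HOURS_PER_MONTH
            + transfer_gb * transfer_rate_per_gb
            + storage_gb * storage_rate_per_gb
            + software_rate_per_hour * HOURS_PER_MONTH)
```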

Amazon was the first retailer to discover a viable formula for selling computing as a commodity, and EC2’s pricing is based on the size of the machine each image represents. The twist here, the concept that effectively launched this industry, is that you have to think of size in terms of time; Amazon charges for the relative bigness of its server images by the hour. EC2’s smallest “Standard” instance consumes 1.7 GB of memory, for which the Windows-based charge is $0.12 per hour; its largest “Standard” instance consumes a full 15 GB and costs $0.96 per hour. (Amazon offers as much as a 58% discount for prepaid, “reserved” instances.) Much higher-memory instances are available on a separate tier, for a substantial premium.
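
Plugging the quoted rates into simple arithmetic shows what paying for size by the hour means over a 30-day month:

```python
# The on-demand Windows rates quoted above, over a 720-hour (30-day) month.
HOURS = 24 * 30

print(f"1.7 GB small instance: ${0.12 * HOURS:,.2f}/month")  # $86.40
print(f"15 GB extra-large:     ${0.96 * HOURS:,.2f}/month")  # $691.20
```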

Here is where you may find yourself actually making an architectural decision, for a higher-memory instance in a high-bandwidth scenario may deliver greater speed, which could yield efficiencies that pay for that premium.

One alternative to housing all that memory on one image could be apportioning a separate cloud-based storage unit (which you may have to do in a multi-image scenario anyway), although you may need to calculate in advance the extent of data transfer between your images and that storage unit. Any transfer that takes place over the Internet incurs a separate charge. For Amazon EC2, it’s $0.10 per gigabyte incoming (effective November 1, 2010) and $0.15 per gigabyte outgoing, for the first 10 terabytes. The separate data storage block (the “breadth” in our dimensional metaphor) incurs $0.10 per gigabyte per month, plus an additional $0.10 per month for every 1 million read/write events.
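
To see how those line items add up, here is a worked example for a hypothetical month of traffic between one image and its storage unit, at the per-unit rates quoted above:

```python
# Hypothetical month: 200 GB in, 500 GB out, 100 GB stored, and 3 million
# read/write events, at the per-unit rates quoted above.
cost = (200 * 0.10     # inbound transfer, per GB
        + 500 * 0.15   # outbound transfer, per GB (first 10 TB tier)
        + 100 * 0.10   # storage, per GB-month
        + 3 * 0.10)    # per million read/write events
print(f"Estimated monthly data cost: ${cost:.2f}")  # $105.30
```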

“The ability to increase or decrease compute capacity within minutes, not hours or days, has been a game-changer in terms of cost, performance and business growth,” explains Amazon spokesperson Kay Kinton. “When computing requirements unexpectedly change (up or down), Amazon Web Services can instantly respond, meaning that developers have the ability to control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have a limited ability to easily respond when their usage is rapidly changing, unpredictable, or is known to experience large peaks at various intervals. As a result, companies either provision too little resources, in which case their customer satisfaction suffers, or too many resources, in which case they’re losing money by not being at full utilization.”

To be competitive against market leader Amazon, GoGrid charges nothing for incoming bandwidth. The size of its instances is measured in RAM-hours, and here you have to be careful: even though GoGrid’s marketing and its provisioning calculators imply that you can select server image sizes in increments of 0.5 GB RAM-hours, when you get to the part where you’re selecting a pre-provisioned image, you discover that Windows Server-based images (true to Microsoft’s specifications) are limited to 2 GB minimums. So the per-hour cost of running any Windows Server image on GoGrid makes it instantly preferable to prepay for one month’s run time ($199 on the “Professional Cloud” plan).
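
The break-even arithmetic behind that advice runs as follows. The pay-as-you-go RAM-hour rate below is a hypothetical placeholder (check GoGrid’s current price list); only the 2 GB Windows minimum and the $199 prepaid plan come from the text above.

```python
# Break-even sketch for a GoGrid Windows image. The pay-as-you-go rate is
# a HYPOTHETICAL placeholder; the 2 GB minimum and $199 prepaid plan are
# from the article.
payg_rate = 0.19          # $/GB RAM-hour -- placeholder, not GoGrid's quote
windows_min_gb = 2        # Windows Server images start at 2 GB
prepaid_monthly = 199.00  # "Professional Cloud" plan

payg_monthly = payg_rate * windows_min_gb * 24 * 30
print(f"Pay-as-you-go: ${payg_monthly:,.2f}/month")  # $273.60 at this rate
print(f"Prepaid plan:  ${prepaid_monthly:,.2f}/month")
```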

Holding Your Cloud Over the Provider’s Head

The personal service aspect, specifically the upkeep of the data center hosting these images, is typically considered part of the package. The cloud server market is competitive enough today that no player dares levy a separate service charge for availability.

“Amazon Web Services is able to aggregate hundreds of thousands of customers with every imaginable use case to have very high utilization of our infrastructure,” says Amazon’s Kinton. “AWS gives users a great deal of control and visibility into their environment. Users can choose where to place their data [and] run their applications; they can back up to multiple availability zones; and in the event of any service interruptions, they have access to a service health dashboard that gives regular updates on the service health. We also offer services that provide monitoring, auto-scaling and elastic load balancing for even greater resilience in building applications… Most customers tell us that our services are also more reliable than what they achieve themselves.”

Up-and-coming cloud service challenger BlueLock makes a similarly bold claim, but then raises Amazon’s bet: it’s willing to state that the business relationship between BlueLock and its customers gives them a more enforceable form of accountability than they have with their own IT departments.

Boasts BlueLock director of sales engineering Bob Roudebush, “BlueLock specializes in managed IaaS cloud offerings which add a layer of people on top of that compute capacity, who are able to manage those hosted resources as well as — and many times better than — that company could on its own. Whether or not one takes stock in either of these assertions, the difference between hosting this in an outsourced data center versus one on-premise is the accountability aspect. With an outsourced solution, companies can have SLAs in place with guaranteed commitments and financial penalties if those commitments aren’t met. Typically a business unit (the R&D division, for example) wouldn’t have this ‘stick’ to use with an internal IT department.”

Elsewhere, we examine some of the more technically complex methods in which cloud service providers are extending premium tier services to enterprise-level customers, including effectively reselling a policy management service that originally sprouted forth from VMware.
