Clustering has been around since almost the earliest PC and mainframe days. But a new take on clustering is emerging that leverages virtualization tools and is becoming more appealing, particularly as enterprise IT shops gain more experience using virtual servers and as the virtualization vendors add more high-availability features to their products.
A combination of services that were previously only the province of very expensive, customized clustered configurations is now available in the virtual world.
These services, including high availability, virtual storage management, and near-term server failover, can serve as a good substitute for many enterprises’ disaster recovery (DR) applications, too. Virtual machines are easily ported and replicated across the Internet, so you can quickly get a secondary site up and running when the primary server fails. “We have seen disaster recovery protection [become] available to a whole class of customers that couldn’t do it before,” says Bob Williamson, an executive VP with Steeleye Technology, a specialized virtualization vendor. “In the past, you needed to buy another physical server and have it ready if the primary machine went down.” But, he says, virtualization and hosting servers at a remote location permit enterprises to use these machines if their data center goes out. “That lowers the entry cost for deploying wider-area disaster recovery, and opens up this protection to a whole new set of companies that haven’t been able to consider it before,” he says.
In the past year, the three major virtualization vendors — Microsoft, VMware, and Citrix/Xen — each have strengthened their ability to provide more capable DR and business continuity services in their products. These have lots of appeal for enterprises that previously would have either considered a full DR solution or clustering too expensive.
Using these newer tools, it is possible to replicate and bring up a new instance of Windows Server 2008 in a matter of minutes. You might need this, for example, to provide additional capacity on an overloaded server or to cover planned upgrades. Consider a server farm with a dozen physical computers, all delivering a Web application. If an enterprise has designed for peak load, then at other times many of these systems will do little or no work, twiddling their little digital thumbs. The ideal solution is to spin new instances of application servers up or down as these loads change, both to match a particular service delivery metric and to keep the costs of power and cooling to a minimum.
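The spin-up/spin-down logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: the `VMPool` class and its methods are hypothetical stand-ins for a real farm of virtual web servers.

```python
class VMPool:
    """Toy stand-in for a farm of virtual web servers (hypothetical)."""

    def __init__(self, instances, demand):
        self.instances = instances  # running VM count
        self.demand = demand        # work units the farm must absorb

    def current_load(self):
        # Average utilization per instance, capped at 1.0 each.
        return min(1.0, self.demand / self.instances)


def scale(pool, target=0.6, lo=2, hi=12):
    """Adjust the instance count so utilization tracks a target metric."""
    # How many instances would put per-instance load at the target?
    needed = round(pool.demand / target)
    needed = max(lo, min(hi, needed))  # respect floor and ceiling
    while pool.instances < needed:
        pool.instances += 1            # spin up: clone a template VM
    while pool.instances > needed:
        pool.instances -= 1            # spin down: quiesce an idle VM
    return pool.instances
```

For instance, a twelve-server farm carrying only 3 units of demand would be scaled down to 5 instances at a 0.6 utilization target, while 9 units of demand would pin the pool at its ceiling of 12.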
These solutions aren’t appropriate for transaction processing applications, where immediate failover is required to handle tasks like online payments processing or airline reservations. “There are still times when you need clustering, such as when you can’t afford to lose a single transaction and have to restart this transaction on the new machine after a failover,” says Carl Drisko, an executive and data center evangelist at Novell. “If your virtual machine goes down, anything that is being processed in memory is going to be lost.” But high-availability virtualized solutions can work for less demanding applications, such as enterprise e-mail servers.
One of the issues with earlier custom clustering solutions was that they required identical hardware and operating system versions on each physical machine in the cluster; virtualized servers are more forgiving and flexible, not to mention less expensive. Microsoft’s Hyper-V, for example, now supports migrating a running virtual server to a new physical host with a different processor family, such as moving from an Intel-based server to one running on an AMD processor.
Another issue is that many of the older-style clusters required very high-speed links to tie the cluster members together. Virtualized solutions are less demanding of connectivity and can make do with higher-latency connections, even across typical Internet links.
As these “almost-clustering” solutions become more popular, look for increasingly sophisticated third-party monitoring tools to help round out the solution. For example, Lyonesse Software’s Double-Take, Steeleye’s LifeKeeper, Symantec’s Veritas Application Director, and Cassatt’s Active Response can monitor applications on both physical and virtual servers, and notify IT staff when a virtualized host or application fails, so that a new virtual instance can be quickly brought online.
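The monitor-and-restart pattern these tools share boils down to a simple sweep: check each host, alert the staff on a failure, and bring up a replacement instance. The sketch below is a generic illustration of that pattern; the `is_healthy`, `restart`, and `notify` callbacks are hypothetical, not any product’s interface.

```python
def sweep(hosts, is_healthy, restart, notify):
    """One monitoring pass over a set of virtualized hosts.

    For every host that fails its health check, alert the IT staff
    and bring a replacement instance online. Returns the list of
    hosts that were restarted during this pass.
    """
    restarted = []
    for host in hosts:
        if not is_healthy(host):           # e.g. a failed heartbeat or HTTP probe
            notify(f"host {host} failed; bringing up a new instance")
            restart(host)                  # e.g. clone and boot a standby VM image
            restarted.append(host)
    return restarted
```

A real deployment would run such a pass on a timer and distinguish transient glitches from hard failures before restarting, but the core control loop is no more complicated than this.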
All this means that virtualization and clustering will become more interrelated and complementary solutions for IT managers. While the two technologies have come from different heritages and infrastructures, they are now merging and providing a powerful tool for managing more complex workloads in the data center.