Virtualization has many an upside, not the least of which is its value to disaster recovery efforts.
“Before virtualization, disaster recovery was so expensive to implement that organizations had to choose only the most critical applications to protect and hope for the best for the rest,” says Jeff Nessen, practice director of platform virtualization for Logicalis. “With a carefully planned virtualization strategy, disaster recovery can now be provided for a much broader range of applications and data.”
However, if virtualization isn’t executed with thoughtful precision, the whole virtual mess can tumble even the mightiest of corporate recovery plans.
“When implemented and designed correctly, a virtual environment is an armored bunker 200 feet below the surface where all your eggs are stored; incorrectly, it is a basket on the edge of a wall with very high crosswinds that may cause the basket to tip over at any time,” says Gregory L. Smith, senior product architect at SunGard Availability Services.
The Virtual Fault Line
When virtualization does foul disaster recovery plans, the problem usually stems from human error and lack of foresight rather than a glitch in the technology.
“Unfortunately, many people implement virtualization without any thought of a strategy,” says Daryl Beeson, vice president of sales at Abtech Systems. “If you want to use virtualization for something other than a way to reduce the number of servers in your environment, you have to have a plan.”
But like everything else in the way of IT, plans come and go on the budget winds.
“A successful strategy requires thought, and not many IT departments are staffed to think anymore. It is a real challenge when companies cut staff to a bare minimum so the only option is to react — and not think through how to solve global problems,” says Beeson. “Those that are staffed to think are confronted with the other challenge — and that is budget.”
The Costs of a Virtual Recovery
Beeson says the cost of virtualization, as a capital expense starting out, is not a 1:1 cost in the first year. “There is a price to pay for virtualization and that might be as much as 1:1.5,” he says. “The great news is [in] years two through five, when you start to recognize the savings. But finding the spare cash needed for the initial buy-in stops many people from implementing a simple plan.”
That penny-pinching rationale can lead to a hemorrhage of cash and a bucketful of troubles later when disaster strikes and data is permanently lost or retrieval is delayed too long.
“Traditional disaster recovery plans easily, or painfully I should say, mean about five weeks of downtime,” says Beeson.
Contrast that scenario with the typical scenario using virtualization: “If used correctly, IT managers can have a working environment in less time than it takes to brew a pot of coffee,” says Koka Sexton, business development manager at Paragon Software Group.
The greater savings, then, come from weighing total costs against initial costs.
“Disaster recovery has historically been oriented around copying critical data to a secure location for storage or maintaining a separate set of hardware and a remote location for use during an event,” says Chris Patterson, product manager at NaviSite. “The first solution is comparatively low cost, but does not provide end users with a means of quickly accessing remote data, while the second is a very costly means of protection against an event that may or may not occur.”
“Virtualization provides a middle ground in terms of both pricing and functionality,” he adds.
The Bumps, Bruises, and Painful Parts
But building virtualization into your disaster recovery plans is not without its obstacles and worries.
“The concerns that need to be addressed by IT are the additional loads on servers running these virtual machines,” says Sexton. “IT managers will need to make sure there is ample storage for the VMs and provision sufficient resources such as memory, CPU, and bandwidth.”
“Make sure you build virtualization into your disaster recovery plan with regular backups and redundancy as you would with any other disaster recovery plan used,” adds Sexton.
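Sexton's point about provisioning storage, memory, CPU, and bandwidth can be made concrete with a simple capacity check. The sketch below (all VM names, figures, and the safety margin are hypothetical; real numbers would come from your hypervisor's inventory) verifies that a recovery host could actually absorb the critical VMs it would inherit during a failover:

```python
# Illustrative sketch: check whether a DR host has enough headroom to run
# the critical VMs it would inherit on failover. All figures are hypothetical.

# Planned critical VMs: (name, vCPUs, RAM in GB, disk in GB)
critical_vms = [
    ("erp-app", 4, 16, 200),
    ("erp-db",  8, 32, 500),
    ("mail",    2,  8, 150),
]

# Capacity of the recovery host (hypothetical figures)
host = {"vcpus": 24, "ram_gb": 96, "disk_gb": 1200}

def failover_fits(vms, host, headroom=0.8):
    """Return True if the VMs fit within a safety margin of host capacity."""
    need_cpu  = sum(v[1] for v in vms)
    need_ram  = sum(v[2] for v in vms)
    need_disk = sum(v[3] for v in vms)
    return (need_cpu  <= host["vcpus"]   * headroom and
            need_ram  <= host["ram_gb"]  * headroom and
            need_disk <= host["disk_gb"] * headroom)

print(failover_fits(critical_vms, host))  # True with these figures
```

Running a check like this as part of regular DR testing catches the quiet capacity drift that otherwise only surfaces during an actual disaster.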
Another sore point: narrow hosting. “If many critical virtual machines reside on one host that fails, that could be considered a single point of failure,” warns Venyu’s product manager, Patrick Tansey.
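The remedy for Tansey's single point of failure is anti-affinity: spreading critical VMs so no one host carries them all. Production hypervisors offer this natively (VMware DRS anti-affinity rules, for instance), but the idea itself is just a placement policy, sketched here with hypothetical VM and host names:

```python
# Illustrative sketch of spreading critical VMs across hosts so that one
# host failure cannot take down every critical workload. Names are
# hypothetical; real deployments would use the hypervisor's own
# anti-affinity rules rather than hand-rolled placement.

def spread_vms(vms, hosts):
    """Round-robin critical VMs across hosts to avoid a single point of failure."""
    placement = {h: [] for h in hosts}
    for i, vm in enumerate(vms):
        placement[hosts[i % len(hosts)]].append(vm)
    return placement

critical = ["erp-db", "mail", "auth", "file-srv"]
hosts = ["esx-01", "esx-02"]
print(spread_vms(critical, hosts))
# {'esx-01': ['erp-db', 'auth'], 'esx-02': ['mail', 'file-srv']}
```

Losing either host now leaves half of the critical workloads running, rather than none.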
There are also a few compatibility and integration issues to wrangle.
“Server virtualization introduces new operating paradigms and traditional disaster recovery solutions don’t offer the level of granularity and flexibility that these new virtualized environments demand,” explains Vish Mulchand, director of software product marketing at 3PAR.
Beyond the concerns specific to virtualization are the shared worries with disaster recovery in general.
“There are as many levels of disaster recovery as there are IT infrastructures,” says Logicalis’ Nessen. “Determining the appropriate technology is actually the easy part.” Often, he says, the more difficult challenge is negotiating internally how much risk an organization’s different departments are willing to accept for their applications and data.
“Once realistic recovery parameters are identified, developing a tiered strategy that meets the specific requirements and budgetary constraints of your organization is relatively straightforward, and virtualization most certainly can be the key to an easier and more cost-effective disaster recovery strategy than many companies have had in the past,” Nessen added.
Sweet Spots and Right Moves
Server virtualization forces both end users and vendors to re-evaluate their deployments and offerings. Good visibility and communication across the different layers — such as the application, operating system, hypervisor, fabric connectivity, and storage abstraction layers — is essential.
“End users will need to evaluate offerings that provide this level of visibility and integration,” says Mulchand. “On the virtual server side, the more the virtual server environment can take advantage of existing infrastructure, the easier it will be for end users to deploy.”
Virtualization on the x86 platform has had a profound impact on IT and specifically in disaster recovery. “Initially, virtualization was able to act as a hardware abstraction layer, making the requirements for mirrored hardware or complex procedures to account for dissimilar hardware unnecessary,” says SunGard’s Smith. “The paradigm shift initiated by virtualization has also enabled disaster recovery capabilities that were once unheard of for the general x86 based environment.”
“Through the introduction of SAN-based or shared storage for the data center server population, virtualization has enabled advanced recovery solutions without introducing the complexity of transaction-based replication, advanced shared disk cluster services, and application or OS specific replication,” he added.
“At its core, virtualization has reduced the complexity at time of recovery and enabled advanced recovery solutions that are designed once and implemented uniformly across the x86 based data center environment,” says Smith.
Desktop Virtualization Brings It Home
Coupling desktop and server virtualization greatly increases the effectiveness of a disaster recovery strategy. Desktop virtualization, a newer concept than server virtualization, was recently brought to the forefront by the rollout of Windows 7.
“For some, this centralization will deliver all that they need,” says Martin Ingram, vice president of strategy at AppSense. “Other organizations will look to use virtualization on the client platform for some users.”
The Microsoft Desktop Optimization Pack (MDOP) includes capabilities that allow organizations to distribute virtual machines that can run on users’ home machines if necessary. “This is a possible way to manage disaster recovery but it comes with added complications in ensuring the VMs are available and up-to-date,” says Ingram.
But that is not to say that desktop virtualization through Windows 7 does not have its own distinct advantages in several possible scenarios.
“Many companies have access to SunGard stations, but without virtualization, the employees just have a computer to work on,” says Mike Strohl, president of Entisys Solutions, a virtualization solutions consultancy and integration firm. “With virtualization of their Windows 7 desktop, they will be up and running within minutes, using their personalized desktop and applications, in the same operating environment as they are accustomed to, from the SunGard, or whatever workstation they have chosen.”
However, Windows 7 is not the only option in desktop virtualization strategies.
“I would expect that desktop virtualization delivered from an organization’s data centers will be a more popular choice than virtualization within Windows 7 itself,” says Ingram.
The reasons for using virtualization may vary from one company to the next, but the bottom line remains the same for them all.
“There is no difference in the user experience from the normal operating environment to the disaster recovery operating environment,” explains Strohl. “Users can do everything within the virtual infrastructure as they were doing in their normal infrastructure with little disruption.”