By Pam Baker
May 4, 2010

The lesson in the frustrating push-me-pull-you approach of multiple teams working on SQL Server virtualization is that old mindsets and turf protection can kill any project. Beyond that key realization lie a number of real technical issues to solve. Here’s what to look out for and what to ignore.

Cloud buzz aside, virtualization chatter still dominates in IT circles. The primary discussion is two-pronged: “Why should we bother?” and “How would we do it?” Among the first of the follow-up questions is “How well will our Microsoft SQL Server databases work with virtualization?” It’s a legitimate, straightforward question that, unfortunately, is often met with an onslaught of contradictory answers.

Varying technical opinions are common among database administrators (DBAs) and software developers. However, in this case the variation in responses is usually determined by the answerers’ perspectives rather than by actual database server performance metrics.

Take DBAs, for example. “DBAs don’t want to change their server environments,” says Brent Ozar, a Microsoft SQL Server expert with Quest Software, and a Microsoft Certified Master for SQL Server 2008. “They’re probably unhappy with their server performance today on physical hardware, and in their minds, things can only get worse.”

“The decision to virtualize a SQL server environment needs to be considered from a big picture perspective,” says Leon Thomas, president of e-commerce service provider Jelecos. “Focusing solely on cost or performance could be somewhat short-sighted.”

Given the pessimism/optimism pendulum swinging between multiple teams, how accurate can any assessment of virtualization actually be? Welcome to one of the biggest obstacles to database virtualization: measurement mania.

Typically, IT teams (whether in development or the data center) stick with familiar and comfortable metrics. They resist measuring performance in terms of a new technology’s actual capabilities.

Remember: old mindsets and turf protection can kill any project. Beyond that key realization lie a number of real issues to solve. Here’s what to look out for and what to ignore.

Storage Storms

The most common performance issues with SQL Servers are related to the storage subsystem. “The storage sub-system where the virtualized disk resides will have a profound effect on SQL,” says Brian Capoccia, disaster recovery practice manager at Agile360, the southern California division of Entisys Solutions.

Here are Capoccia’s top tips to help optimize virtualized database systems.

1. Use storage area networks (SANs) to virtualize storage across many spindles (disks). NetApp, for example, stripes data volumes across all spindles. A typical SAN disk shelf holds 16 spindles; aggregate two or more shelves and the spindle count rises significantly.

2. Limit the number of virtual disks that share the storage repository.

3. Keep disks with homogeneous storage characteristics together. For example, keep all SQL log files on one set of spindles and all database files on a separate set (see the sketch after this list). Mixed I/O workloads cause contention for the storage system; it is inefficient for disks to switch between sequential writes and random reads and writes.

4. Reserve and dedicate the memory allocated to SQL Server. Don’t let the hypervisor share or reclaim that memory.

5. Keep tuning the SQL Server according to Microsoft best practices. (For more on that topic, see Optimizing SQL Server for Windows 7.) Also use Capacity Planner and similar tools to set a baseline and to right-size the servers when migrating them from physical to virtual.
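To make tip 3 concrete, here is a minimal sketch of separating data and log files at database-creation time. Everything in it is hypothetical, including the server name, database name, and the assumption that the D: and L: volumes sit on separate sets of spindles; it uses the pyodbc library to send plain T-SQL to SQL Server.

```python
# Minimal sketch (hypothetical names and paths): create a database whose
# data and log files live on separate sets of spindles, per tip 3 above.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};SERVER=myserver;"  # hypothetical server
    "Trusted_Connection=yes"
)

# Assumption: D:\ is backed by spindles dedicated to data files,
# L:\ by spindles dedicated to log files.
CREATE_DB = """
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\\SQLData\\SalesDB.mdf')
LOG ON
    (NAME = SalesDB_log,  FILENAME = 'L:\\SQLLogs\\SalesDB.ldf')
"""

# CREATE DATABASE cannot run inside a transaction, hence autocommit.
conn = pyodbc.connect(CONN_STR, autocommit=True)
conn.cursor().execute(CREATE_DB)
conn.close()
```

The point is the physical separation, not the script: sequential log writes stay on one set of disks while random data-file I/O stays on another.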

Overhead: Not a Big Deal

The tendency is to focus on performance overhead. For all practical purposes, though, overhead is a non-issue, except when it becomes a distraction to one or more of the IT teams involved in the database virtualization project.

“SQL Server has built-in instrumentation to tell the DBA what the system has been waiting on at the query level and the server level,” says Ozar. “Instead of being concerned about CPU use going up 10%, find out if CPU is even the bottleneck, first. This keeps DBAs focused on what really matters rather than being distracted by virtualization overheads.”
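To see what that instrumentation looks like in practice, here is a minimal sketch that pulls the top waits from SQL Server’s sys.dm_os_wait_stats view; the Python/pyodbc wrapper and connection string are assumptions, but the view itself ships with SQL Server.

```python
# Minimal sketch: list the waits SQL Server has accumulated since startup,
# using the built-in sys.dm_os_wait_stats view. Connection details are
# hypothetical; the excluded wait types are common idle waits.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes"
)

TOP_WAITS = """
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC
"""

for row in conn.cursor().execute(TOP_WAITS):
    print(f"{row.wait_type:<40} {row.wait_time_ms:>12} ms "
          f"({row.waiting_tasks_count} waits)")
conn.close()
```

If the top waits are PAGEIOLATCH types, storage rather than CPU is the constraint, and that 10% CPU worry is beside the point.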

It is important to remember that virtual hardware scales much as physical hardware does; virtualization simply requires somewhat more CPU and system horsepower to do the same work.

“There are no processor or RAM barriers because the pass-through occurs with very slight overhead,” says Capoccia. “Even if the virtualization overhead were as much as 10%, there are still plenty of resources and very few trade-offs when you consider the portability, improved management, availability, and scalability afforded by the virtual server.”

Warning: Metrics Can Lie

The previous argument notwithstanding, avoiding metrics is as much of a mistake as using the wrong metrics. The key lies in understanding what to measure.

“DBAs are accustomed to monitoring performance a certain way, and those methods give inaccurate results for virtual servers,” says Ozar. “For example, avoid gauging processor load with the ‘Percent Used’ counters because they don’t reflect how the server is really behaving.” Instead, he says, monitor counters like Processor Queue Length to see how many tasks are waiting for CPU time before they can execute. This approach helps DBAs better understand whether the server is correctly utilized or over-utilized.
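On Windows, that counter can be sampled without any special tooling. A minimal sketch, assuming the built-in typeperf utility and arbitrary sample settings:

```python
# Minimal sketch: sample the Processor Queue Length counter Ozar
# recommends, via Windows' built-in typeperf utility. A queue that stays
# well above the number of CPUs suggests tasks are waiting on CPU.
import subprocess

result = subprocess.run(
    ["typeperf", r"\System\Processor Queue Length",
     "-sc", "10",   # take 10 samples...
     "-si", "5"],   # ...five seconds apart
    capture_output=True, text=True,
)
print(result.stdout)
```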

Monitor I/O Closely

If you have an application that requires heavy disk I/O, proceed with caution.

“There are known issues with how certain virtualization vendor file systems perform under high disk I/O levels,” warns Jelecos’ Thomas. “In most cases, there are workarounds or tweaks to minimize this issue, such as placing fewer virtual devices on a physical device, but these often further reduce the benefits of virtualization.”

When weighing a virtual machine (VM) versus a physical server, take into account how much I/O your application will be throwing at the VM.

“We mainly use VM for development systems because of the low volume of transactions,” says Anthony Biondo Jr., CEO of Biondo Communications. “For production systems, we almost always use a clustered SQL Server on physical hardware due to the high volume of transactions and the need for the extra horsepower.”
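One way to quantify “how much I/O” before making that call is SQL Server’s sys.dm_io_virtual_file_stats function, which reports cumulative reads, writes, and stall time per database file. A minimal sketch, again assuming Python with pyodbc and a hypothetical connection:

```python
# Minimal sketch: sum cumulative I/O per database to gauge the load a
# candidate server would throw at a VM. Connection details hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes"
)

IO_BY_DB = """
SELECT
    DB_NAME(vfs.database_id)                  AS database_name,
    SUM(vfs.num_of_reads + vfs.num_of_writes) AS total_ios,
    SUM(vfs.io_stall) / 1000                  AS stall_seconds
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_ios DESC
"""

for row in conn.cursor().execute(IO_BY_DB):
    print(f"{row.database_name:<30} {row.total_ios:>12} I/Os, "
          f"{row.stall_seconds} s stalled")
conn.close()
```

Databases that dominate this list are candidates for physical hardware or careful benchmarking; quieter systems, like the development boxes Biondo describes, are easier wins for virtualization.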

Which SQL Servers to Virtualize First

One of the best ways to leverage the power of virtualization, says Ozar, is to start with your oldest unsupported applications.

“Take those old boat anchors running SQL Server 2000 on Windows 2000, the ones on out-of-support hardware, and virtualize them,” he says.

Simply landing these antiquated relics on today’s processors, memory, and storage can improve performance, but virtualization has a much bigger advantage. “These old machines don’t have high availability solutions, and we’re afraid to touch them lest we break them,” says Ozar. “By virtualizing these old servers, they immediately get the high availability and disaster recovery benefits of today’s virtualization technologies.”

In essence, virtualization lessens the pain of running old, unsupported applications and OSs.

The Sticky, Tricky Parts

Avoid virtualizing database servers with active/active SAN multipathing, at least for now, says Ozar. If the servers currently use more than two connections to your SAN, they may suffer from I/O throughput loss. Symptoms include slower backups and slower queries.

“Multipathing is improving with each new hypervisor version, but they’re still not quite to the point of physical boxes,” says Ozar. “That’s not to say you should avoid virtualizing every SQL Server; but just hold off on the multipathed ones until you’ve demonstrated success with the rest of the servers.”

When it comes to SQL virtualization, Ozar advises: “Stop looking at metrics and start looking for bottlenecks.”

If you encounter resistance, try changing minds by presenting key benefits that appeal to each team’s unique woes. “If the DBAs have been burned by bad clustering setups, for example, they might be excited about the vMotion and Live Migration tools in virtualization that can help them avoid downtime due to firmware upgrades or hardware changes,” says Ozar. “Educating the DBA staff and getting them to talk to shops that have successfully virtualized database servers makes a world of difference.”
