Gartner predicts that data center power, cooling, and space problems are worsening. Cloud computing exec Mark Thiele weighed in on the problem—and the solutions.
According to a recent report from Gartner Research, 2009 saw the most significant drop in server deployments ever. Now that trend is reversing: over the next few years, server deployments will increase, and the results won’t be pretty. Those data center power, cooling, and space problems you’ve been having? Take a deep breath and start strategizing, because they’re about to get worse.
To help you with that task, IT Expert Voice checked in with Mark Thiele, vice president of Data Center Strategy at cloud computing vendor ServiceMesh, president of the nonprofit organization Data Center Pulse, a member of the Green Industry Advisory Board at CSU Fullerton, and strategic advisor at sustainability management software vendor CSRware. We got his thoughts on the Gartner report and its recommendations, and discussed how companies can cut their data center costs.
IT Expert Voice: Gartner predicts that through 2013, data center power, cooling, and space problems will worsen as a result of new high-density infrastructure deployments. To help control energy consumption and costs, Gartner recommends delaying new high-density hardware procurement where possible. What are your thoughts on the prediction and that recommendation?
Mark Thiele: In effect, Gartner is correct, but the problem isn’t just the next couple of years. This is partly because cloud computing, combined with higher-density computing, will have an interesting effect on the data center market as a whole.
In theory, virtualization, more performance per watt, and cloud computing should drive down the space, power, and cooling requirements for the average data center by 50% or more.
However, what we’ve seen over the last 30 years of semi-modern technology is that IT groups don’t really spend less when solutions become more efficient or cheaper. They instead buy more for the same costs. Think of 1995 when a tower server from Compaq cost $25K. Then in 2000 you could buy a 1RU server for $5K. What do you think people did? They bought four or five servers instead of one.
I believe cloud computing will have a similar effect on the IT world that 1RU servers did (see one of my recent blog posts). In effect, we’ll be making more technology options available to a much wider audience in much more affordable and digestible sizes. That availability will drive additional use and generate new ideas for more IT apps and products.
So in a very small nutshell, what I’m saying is that yes, there will be a crush on data center space, but it won’t be for just two years; it will be until we come up with a quantum leap improvement in how we process and store information.
How might a company delay purchasing new hardware without compromising its growth, uptime, and so on?
There are several options with varying degrees of cost and complexity that can help slow the need for more data center capacity. Here are five:
1. Push the envelope on your virtualization efforts. Regardless of the myths you may hear, virtualization is ready for even the biggest jobs.
2. Move some of your processing out of your data center and into someone else’s (e.g., Amazon, Opsource, Savvis).
3. Ensure you’re effectively using the hardware that you have (e.g., what is your average CPU utilization rate? What is your performance per watt?). This is as much an asset management play as it is cloud and virtualization. Also, don’t be afraid to look at some of the new server options like SeaMicro and Tilera. In the right situation, these new server solutions can dramatically increase your capabilities while driving down ownership costs. (A rough sketch of that utilization math appears after this list.)
4. Implement the basic data center improvement options that are widely understood, airflow being a key opportunity area in many data centers that are two or more years old.
5. Don’t assume that delaying hardware purchases is your answer. Your best answer might be to replace some of your power-hungry low-performance older devices so that you can concentrate more performance in a smaller footprint.
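As a back-of-the-envelope illustration of the asset-management math in item 3, here is a minimal sketch. The inventory, wattages, and the “work units” throughput metric are all hypothetical stand-ins for whatever figures your own asset data provides.

```python
# Hypothetical fleet-efficiency sketch: average CPU utilization and
# performance per watt. All names and numbers below are illustrative only.

servers = [
    # (name, avg_cpu_utilization_pct, work_units_per_hour, avg_watts)
    ("legacy-tower-01", 8.0,   500, 400),
    ("rack-1u-cluster", 22.0, 4000, 250),
    ("blade-chassis",   35.0, 9000, 300),
]

total_watts = sum(watts for _, _, _, watts in servers)
avg_util = sum(util for _, util, _, _ in servers) / len(servers)
perf_per_watt = sum(work for _, _, work, _ in servers) / total_watts

print(f"Average CPU utilization: {avg_util:.1f}%")
print(f"Performance per watt:    {perf_per_watt:.1f} work units/hour per W")
```

Low utilization and poor performance per watt are the signals that consolidation, virtualization, or newer hardware would pay off.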
Gartner also makes the recommendation to deploy server-based energy management software tools and intelligent power strips on racks to monitor and control energy consumption. Do you agree this is a good strategy?
In general, yes. However, this is not a simple task, and it can be very disruptive for the average data center. An intelligent power strip can cost upwards of $600. If you have 50 cabinets with two strips each and you replace all of them, you’re looking at spending $60K. It would take years to make up that cost through the small incremental power savings you’ll get, and that doesn’t even account for the potential disruption to operations or the labor and software costs.
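A rough sketch of that payback math, using the figures Thiele cites ($600 strips, 50 cabinets, two strips per cabinet); the power bill and savings rate are assumed values, marked as such in the code.

```python
# Simple payback check for intelligent power strips.
# Strip cost, cabinet count, and strips per cabinet come from the answer above;
# the power bill and savings rate are ASSUMPTIONS for illustration.

strip_cost = 600            # $ per intelligent power strip (from the answer)
cabinets = 50               # cabinets (from the answer)
strips_per_cabinet = 2      # strips per cabinet (from the answer)
capex = strip_cost * cabinets * strips_per_cabinet   # $60,000

annual_power_bill = 200_000   # ASSUMPTION: yearly power spend for this room
savings_rate = 0.03           # ASSUMPTION: 3% incremental savings from monitoring

annual_savings = annual_power_bill * savings_rate
payback_years = capex / annual_savings

print(f"Up-front cost:  ${capex:,}")
print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")   # ~10 years with these inputs
```

Under these assumptions the payback runs about a decade, which is the point Thiele is making: the hardware alone rarely justifies itself outside a rebuild.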
In most cases it’s better to put in solutions like this when you’re doing a major rebuild or building a greenfield data center. An alternative that will get you much of what you need is wireless environmental sensing combined with branch-circuit power monitoring.
What are some of those options?
There are many options in the space. Some of the newer players in wireless sensing are Arch Rock and SynapSense, and there are also solutions from bigger players like Eaton and Emerson, among others. APC has a very comprehensive solution, but customers should be cautious about buying a bigger solution than they need. The potential ROI must be considered: there is no guarantee that any of these solutions will cost less than the inefficiency they address. They must also be weighed against the expected ownership period of the space to be monitored.
“Visibility” is important. Just as an example, you could put in a monitoring/management solution from a company like CSRware. It doesn’t actually provide much efficiency on its own, but it does give you a low-cost, low-risk, quick-return opportunity to get visibility into your environment relative to power, water, and e-waste, among other things. Like a scale in the bathroom, because it shows you the problem, you’re more likely to act on the information.
What challenges might these solutions pose to an IT department’s usual way of doing things?
The simple answer here is ROI and people/process. No solution will provide you a benefit if the people and process aren’t well defined in advance. Also, if you don’t have well-defined ownership and reporting criteria for the solutions, they will end up unmanaged and relatively useless in a very short time. Remember, most IT orgs are already overtaxed for resources. If you throw a new solution in the mix without making real accommodations for how it will be owned, no one will own it.
What do you recommend for overcoming these challenges?
Change management is a critical component of any IT environment, so again it boils down to people and process. If a process is poorly managed when it’s manual and you throw a technology at it, you’ll just end up with poorly managed automation. You also need to carefully consider the resource and impact requirements of any change versus the proposed efficiency gains. Power is often considered the biggest problem in a data center from a cost perspective, but it’s not.
Here’s an example based on a $1 million-a-year data center spend:
Solution: Power monitoring and management is implemented, with expected savings of 2 to 5%.
Costs: $150K for hardware, software, and implementation, plus $50K a year in ongoing maintenance and management, for a five-year cost of $350K.
Savings: The savings over five years based on a 5% efficiency gain are $250K.
The average greenfield data center has a life expectancy of 15 years, so you can quickly surmise that there’s a much more obvious opportunity in a new facility than in retrofitting an older one.
The above is a very conservative and admittedly “back of the napkin” example, but it should give you an idea of what the risks/rewards are. (This example also doesn’t put a cost on the risk incurred during implementation.)
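For readers who want to replay the arithmetic, here is a minimal sketch of the example above. The figures come from the example itself; the assumption that the $50K maintenance begins in year two is ours, chosen so the total matches the stated $350K five-year cost.

```python
# Back-of-the-napkin ROI for the $1M/year data center example above.
# Spend, efficiency gain, implementation, and maintenance figures are from the
# example; maintenance starting in year two is an ASSUMPTION to match $350K.

annual_spend = 1_000_000      # yearly data center spend
efficiency_gain = 0.05        # 5% expected savings
implementation = 150_000      # hardware, software, and implementation
annual_maintenance = 50_000   # ongoing maintenance and management
years = 5

five_year_cost = implementation + annual_maintenance * (years - 1)   # $350K
five_year_savings = annual_spend * efficiency_gain * years           # $250K

print(f"Five-year cost:    ${five_year_cost:,}")
print(f"Five-year savings: ${five_year_savings:,}")
print(f"Net over 5 years:  ${five_year_savings - five_year_cost:,}")  # -$100K
```

With these inputs the retrofit loses roughly $100K over five years, which is why Thiele points to new builds, with their 15-year life expectancy, as the more obvious opportunity.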
Gartner also recommends accelerating virtualization and consolidation projects to yield power, cooling, and space benefits in 2010 and 2011, offsetting the capacity requirements of new hardware.
I agree. I can’t emphasize enough that you should be pushing the envelope on your virtualization efforts. The majority of IT organizations are at 30% virtualization or less.
Do you think organizations are already speeding up virtualization and consolidation projects anyway?
Yes. Interestingly, many of the efforts are being accelerated due to a desire to utilize cloud computing as much as they are to save on power, space and cooling. Either way, it’s the right thing to do.
How about cutting data center costs overall? What strategies do you think are most important?
Here are four: First, know the total costs and the cost drivers for your data center. Second, ensure you have an ownership strategy with a top-down, cross-functional leader. Third, know how your assets are managed. And finally, create efficiency opportunities in the area of airflow management.
If a company is serious about cutting data center costs, should energy-saving/green IT-focused strategies be the top priority?
Many of the ideas I’ve covered have a significant impact on your carbon footprint (your “greenness”). I am a casual conservationist, but I assume that my first responsibility to any business is to help ensure bottom-line success through ethical business practices.
That being said, a well-thought-out efficiency plan is the same as a “going-green” plan. Anytime you’re using fewer resources (people, space, equipment, boxes, trucks, and so on), you’re being greener. For larger enterprise customers, there are even less obvious options, like having equipment delivered in more environmentally friendly packaging. An example would be having the hardware vendor deliver a rack full of gear in one box, versus having 5 to 30 servers delivered in individual cartons.
What are your thoughts on how well companies are doing—or not doing—in terms of focusing on cutting costs in the data center? And what do you think the motivation is?
As a global community, I think the top 30% of data center owners are making excellent progress. My biggest concerns are for the thousands of companies running farms of 10K servers or fewer that don’t have an appropriate ownership strategy in place.
As for motivation, it varies depending on the maturity, size and industry. With maturity comes an understanding of the costs and an increased focus. Size is similar to “maturity” in the sense that when you get big enough you can’t ignore the costs anymore. Across industries, IT and finance companies tend to have a stronger focus on their data centers. IT companies because it’s embedded in their DNA and finance because they have a high level of risk aversion and strong cost management strategies.
Any last words?
Buy what you need to solve a problem or enable a business opportunity—don’t buy technology.