As data centers grow, so do their power requirements. Data centers, in fact, rank among the biggest consumers of power on the planet. There is a tipping point, however, and many data centers are now finding it the hard way. Surging demand for power means that many data centers have exhausted the utility's ability to deliver additional capacity to their locations. Even when utility power is available, high-density racks, often drawing more than 30 kilowatts each, create hot spots and make it difficult to distribute enough power to those racks on the floor.

How to Reduce Data Center Energy Costs

The solution is obviously energy efficiency and, by extension, green data centers. Data centers, however, do not have to go all in on a wholesale replacement of their servers or invest in renewable energy to improve their energy efficiency. The 2012 Energy Efficient IT Report reveals that 53% of data centers achieved energy savings by adopting new cooling approaches. The report also explores many other ways to conserve energy, including virtualized servers and storage, consolidated servers and Energy Star qualified devices.

Virtualization

Virtualization allows multiple operating systems and applications to run on a single physical computer. Virtualizing hardware improves operational efficiency and supports consolidation, both key factors in any data center optimization program. According to Gartner, effective use of virtualization can reduce server energy consumption by 82% and floor space by 86%. The cost savings from removing a single physical server can be a whopping $1,200 per year in combined direct energy and cooling costs.
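As a rough, back-of-the-envelope illustration of what that per-server figure can add up to, the sketch below estimates annual savings from virtualizing a server fleet. The server count and the 10:1 consolidation ratio are illustrative assumptions; only the $1,200 per removed server comes from the figure above.

```python
# Back-of-the-envelope estimate of savings from virtualization.
# The $1,200/year figure combines direct energy and cooling costs per
# physical server removed; the consolidation ratio is an assumption.

SAVINGS_PER_SERVER_REMOVED = 1200  # USD per year (energy + cooling)

def annual_savings(physical_servers: int, consolidation_ratio: int) -> int:
    """Estimate yearly savings when workloads from `physical_servers`
    machines are virtualized onto hosts at `consolidation_ratio`:1."""
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    servers_removed = physical_servers - hosts_needed
    return servers_removed * SAVINGS_PER_SERVER_REMOVED

# Example: 100 physical servers virtualized at 10:1 leaves 10 hosts,
# removing 90 machines -> roughly $108,000 per year.
print(annual_savings(100, 10))
```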

Consolidation

Server consolidation combines workloads from several machines and systems onto a smaller number of systems. There is considerable overlap between virtualization and consolidation, and both reduce the number of servers required. Most physical servers run at only about 10% to 15% of their capacity, yet an idle server still draws as much as 30% of the energy it consumes at peak utilization. Energy-wise, it therefore makes more sense to maximize the utilization of the servers that stay on and switch off the ones that aren't needed. New technologies, such as VMware's Distributed Resource Scheduler, which dynamically allocates workloads across physical servers and treats them as a single resource pool, make it easy to squeeze workloads onto as few physical machines as possible.
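A simple linear power model makes the case concrete. The sketch below assumes idle draw is 30% of peak (as noted above) and that power scales linearly with utilization between idle and peak; the wattage and utilization numbers are illustrative assumptions, not measurements.

```python
# Illustrative comparison: many under-utilized servers vs. a consolidated pool.
# Assumes a linear power model, P = idle + (peak - idle) * utilization,
# with idle draw at 30% of peak as described above. Wattages are assumptions.

PEAK_WATTS = 500
IDLE_WATTS = 0.30 * PEAK_WATTS

def server_power(utilization: float) -> float:
    """Power draw (watts) of one server at the given utilization (0.0-1.0)."""
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization

# Ten servers idling along at 12% utilization each...
before = 10 * server_power(0.12)

# ...versus the same total work packed onto two servers at 60% utilization,
# with the remaining eight switched off.
after = 2 * server_power(0.60)

print(f"Before consolidation: {before:.0f} W")  # ~1920 W
print(f"After consolidation:  {after:.0f} W")   # ~720 W
```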

Energy Star Certification

As a rule of thumb, the newer the equipment, the better the power management features available. For instance, AMD and other chip makers are implementing power management features that scale back voltage and clock frequency on a per-core basis and let memory, traditionally a power hog, draw less power. With this technology, running at 50% CPU utilization would yield roughly 65% power savings, and running at 80% CPU utilization would yield roughly 25% power savings.
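As a hedged sketch of what those figures imply, the snippet below simply interpolates between the two quoted data points (50% utilization, 65% savings and 80% utilization, 25% savings). The linear interpolation is an assumption for illustration, not a vendor-published curve.

```python
# Rough interpolation of the quoted per-core power-management savings:
# 50% CPU utilization -> 65% power savings, 80% utilization -> 25% savings.
# Linear interpolation between these points is an assumption.

def estimated_power_savings(cpu_utilization: float) -> float:
    """Estimate power savings (as a fraction) for a given CPU utilization."""
    u1, s1 = 0.50, 0.65
    u2, s2 = 0.80, 0.25
    slope = (s2 - s1) / (u2 - u1)
    return s1 + slope * (cpu_utilization - u1)

for util in (0.50, 0.65, 0.80):
    print(f"{util:.0%} utilization -> ~{estimated_power_savings(util):.0%} power savings")
# 50% -> ~65%, 65% -> ~45%, 80% -> ~25%
```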

Energy Star certification allows data center managers to identify and deploy an energy-efficient infrastructure. Energy Star qualified hardware and devices use these new low-power, energy-efficient technologies.

Alex Carroll

Managing Member at Lifeline Data Centers
Alex, co-owner, is responsible for all real estate, construction and mission critical facilities: hardened buildings, power systems, cooling systems, fire suppression, and environmentals. Alex also manages relationships with the telecommunications providers and has an extensive background in IT infrastructure support, database administration and software design and development. Alex architected Lifeline’s proprietary GRCA system and is hands-on every day in the data center.