Just because server differentiation increasingly comes from factors outside the processor doesn’t mean those differences are insignificant. Consider the impressive power and cooling smarts that server vendors are bringing to bear.
Companies face a major challenge reining in the costs of running servers and air conditioning to keep them within thermal specs, because as data centers grow, so do electricity bills. It can cost $25 million to add a megawatt of capacity to a data center, according to industry estimates.
The upshot is that server vendors are integrating power-control technology deep within their boxes. “We’ve built into BladeSystem the ability to throttle pretty much every resource,” says Gary Thome, chief architect of HP’s infrastructure software and blades group, referring to the company’s best-selling blade server line. “We can throttle CPUs, voltage-regulator modules, memory, fans, power supplies, all the way down to trying to keep the power consumed as low as possible at any given time.”
These management smarts extend beyond each server to the chassis as a whole. “We have the ability to put power supplies into low-power mode, and then shed power onto other supplies while still maintaining redundancy,” Thome says. This lets the power supplies that are running do so at high efficiency, but it goes beyond that. “We have variable-speed fans. Plus, the fans are set up in a zone, so if one part of the chassis is running hot, those fans will run faster, and on another part of the chassis, the fans will run slower.”
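The zoned-fan behavior Thome describes can be sketched in a few lines. This is an illustrative toy model, not HP's actual control logic: the function names, temperature thresholds, and linear speed curve are all assumptions, chosen only to show the idea of each zone's fans responding independently to that zone's temperature.

```python
# Hypothetical sketch of zone-based fan control: each zone's fans speed
# up or slow down based on that zone's own temperature, independently of
# the other zones. Thresholds and the linear ramp are illustrative
# assumptions, not vendor-specified values.

def fan_speed_percent(zone_temp_c, low_c=25.0, high_c=45.0,
                      min_speed=20.0, max_speed=100.0):
    """Map a zone temperature to a fan speed, ramping linearly
    between the low and high thresholds."""
    if zone_temp_c <= low_c:
        return min_speed
    if zone_temp_c >= high_c:
        return max_speed
    frac = (zone_temp_c - low_c) / (high_c - low_c)
    return min_speed + frac * (max_speed - min_speed)

def chassis_fan_speeds(zone_temps):
    """Set each zone's fans independently: hot zones run fast,
    cool zones run slow, saving power overall."""
    return {zone: fan_speed_percent(t) for zone, t in zone_temps.items()}

speeds = chassis_fan_speeds({"front": 30.0, "middle": 45.0, "rear": 22.0})
print(speeds)  # hot middle zone at full speed, cool rear zone at minimum
```

The payoff of per-zone control is in the cool zones: rather than spinning every fan fast enough for the hottest spot in the chassis, only the fans near that spot speed up, while the rest idle at low power.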
More of the InformationWeek article from Alexander Wolfe.