Why are companies moving primary data centers to wholesale colocation facilities? Part 2

Why are companies moving primary data centers to wholesale colocation facilities like Lifeline Data Centers? One of the most common reasons is hitting limits on power, cooling and floor space.

Limits on power

The trend in data centers is the need for more electrical power per square foot of data center space. This hunger for more power is largely driven by virtualization, blade servers and storage arrays. Virtualization allows organizations to run many logical servers on a single high-powered super server, or on a chassis of blade servers. These blade servers, super servers and storage arrays require more power per square foot of data center space than their predecessors. And power requirements do not always decrease as the number of physical servers decreases. In many cases, enterprise data centers are sitting half empty because of virtualization and server consolidation, yet these same data centers have reached a power limit and are unable to add any more equipment.
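To see why consolidation can empty the floor without easing the power problem, here is a rough, purely illustrative calculation. Every wattage and footprint figure below is an assumption chosen for the sketch, not a measurement from any real facility:

```python
# Illustrative only: every wattage and footprint figure is an assumption.

# Before consolidation: 40 older 1U servers spread across 4 racks
old_servers = 40
old_watts_each = 350            # assumed average draw per 1U server
old_racks = 4
rack_sq_ft = 25                 # assumed floor area per rack, aisles included

# After consolidation: 2 fully loaded blade chassis in a single rack
blade_chassis = 2
blade_watts_each = 6000         # assumed draw per fully populated chassis
new_racks = 1

old_total_kw = old_servers * old_watts_each / 1000
new_total_kw = blade_chassis * blade_watts_each / 1000

old_density = old_total_kw / (old_racks * rack_sq_ft)   # kW per square foot
new_density = new_total_kw / (new_racks * rack_sq_ft)

print(f"Before: {old_total_kw:.1f} kW across {old_racks} racks "
      f"({old_density:.2f} kW/sq ft)")
print(f"After:  {new_total_kw:.1f} kW in {new_racks} rack "
      f"({new_density:.2f} kW/sq ft)")
```

In this sketch the total load barely changes, but the draw per square foot more than triples, which is exactly the squeeze described above: the floor empties out while the power and cooling systems serving the remaining racks run out of headroom.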

Virtualization is not always the reason for increased power demands. Businesses are more automated than ever. Organizations both small and large have increased server counts significantly over the last few years. At some point, they discover that there is no more power available to add the new servers, networking equipment and storage that they need.

Adding power can be very expensive. If the organization is operating near the building’s power capacity, it may be impossible. If the company needs to add power to a high-uptime data center, it must factor in the cost of generators and large-scale battery backup/power conditioning systems.

Limits on cooling

Hand-in-hand with the need for more power is the need for more cooling to dissipate the heat. An easy rule of thumb for data center power is this: for every unit of electricity required to power IT equipment, plan on an equal unit of electricity to cool that IT equipment. Cooling also becomes less efficient as power requirements per square foot climb, so as power demands increase, cooling demands tend to increase at an even faster rate.
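To make the rule of thumb concrete, here is a minimal sketch. The IT load figure is an assumed example, not a real facility’s number:

```python
# Minimal sketch of the 1:1 power/cooling rule of thumb described above.
# The IT load is an assumed example figure, not a measured one.

it_load_kw = 200            # assumed draw of servers, storage and network gear
cooling_kw = it_load_kw     # rule of thumb: one unit of cooling per unit of IT power

total_kw = it_load_kw + cooling_kw
print(f"IT load:       {it_load_kw} kW")
print(f"Cooling load:  {cooling_kw} kW (rule-of-thumb estimate)")
print(f"Facility draw: {total_kw} kW before lighting, UPS losses and other overhead")
```

Under that 1:1 assumption, a facility supporting 200 kW of IT gear needs roughly 400 kW of electrical capacity before lighting, UPS losses and other overhead are counted.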

Additional cooling is not always available. If a data center is at the capacity of its existing air conditioning system, the company may not be able to add more cooling capacity. If the data center does not have the floor space, the company may not be able to add cooling. And if the data center is already at its power limit, the company cannot add cooling at all.

Cooling may be available but cost-prohibitive. If the data center is at the present system’s capacity, the cost of a new system, or better yet a redundant secondary system, may be out of line with the benefit the business receives.

Limits on floor space

Many companies are still experiencing growth in their data centers. Small companies find that the mop closet converted into a computer closet can no longer hold all of their equipment. Larger organizations find that new storage arrays are taking up more floor space than is left in the enterprise data center. Virtualization and the heat that comes with it are pushing many data center managers to rearrange floor plans with more square footage per rack, just to get the most efficiency out of their existing air conditioning systems.

Why are companies moving primary data centers to wholesale colocation facilities? Outsourced data centers can solve the problems of limited power, cooling and floor space. Wholesale data centers build to scale, so clients can grow and change as business requirements dictate. But not all outsourced data centers are the same. Companies must research each potential provider’s ability to deliver the power, cooling and floor space the business will need as it grows and changes.

In Part 3 of this series, we’ll address how the need for higher data center uptime has become a reason to move to a wholesale colocation facility.

Alex Carroll

Managing Member at Lifeline Data Centers
Alex, co-owner, is responsible for all real estate, construction and mission critical facilities: hardened buildings, power systems, cooling systems, fire suppression, and environmentals. Alex also manages relationships with the telecommunications providers and has an extensive background in IT infrastructure support, database administration and software design and development. Alex architected Lifeline’s proprietary GRCA system and is hands-on every day in the data center.