Do your worst-behaved applications define your data center requirements? Part 2
In Part 1, I talked about the most important applications in your business, also known as your mission critical applications. I covered the reasons many of these mission critical applications are ill-behaved and require special care and feeding in your enterprise data center: heavy bandwidth demands on the headquarters or wide area network, expensive overbuilt servers, and extra hours of maintenance overhead every month.
How do these worst-behaved applications affect your data center requirements?
Bandwidth – Many applications generate large amounts of network traffic for even the smallest user activity. That traffic can make the applications perform poorly over the Internet or small remote office connections, and it can influence where you locate the primary data center. It often seems simplest to place the data center close to your largest concentration of users, even in the same building. Yet an in-house data center may not fully support your data center uptime requirements.
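A quick sanity check can tell you whether a remote office link will choke on a chatty application. Here is a minimal sketch; every figure in it (per-user traffic, peak multiplier, user count, link size) is a hypothetical placeholder you would replace with measurements from your own network monitoring.

```python
# Back-of-envelope check: can a remote office WAN link carry a chatty app?
# All numbers are illustrative assumptions, not measured values.

PER_USER_KBPS = 250    # assumed average traffic one user generates in the app
PEAK_FACTOR = 2.0      # assumed busy-hour multiplier for chatty applications
REMOTE_USERS = 40      # users at the remote office
LINK_MBPS = 20         # remote office WAN link capacity

peak_demand_mbps = REMOTE_USERS * PER_USER_KBPS * PEAK_FACTOR / 1000

print(f"Peak demand: {peak_demand_mbps:.1f} Mbps on a {LINK_MBPS} Mbps link")
if peak_demand_mbps > LINK_MBPS * 0.7:   # keep roughly 30% headroom
    print("Link is undersized for this application at peak.")
```

With these assumed numbers, 40 users already saturate a 20 Mbps link at peak, which is exactly the kind of result that pushes a data center closer to the users.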
Interoperability – If your most important applications link to other important applications, you may be forced to put these applications in the same data center. If your manufacturing system feeds data to your customer management system and your accounting system, reliability becomes even more important, because a small amount of downtime can affect three important software systems, not just one.
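The math behind that point is worth spelling out: when systems form a dependency chain, the chain is only up when every link is up, so their availabilities multiply. A minimal sketch, assuming an illustrative 99.9% availability for each of the three systems named above:

```python
# Combined availability of serially dependent systems multiplies.
# The 99.9% figures are illustrative assumptions, not measured values.

availabilities = {"manufacturing": 0.999, "crm": 0.999, "accounting": 0.999}

combined = 1.0
for system, a in availabilities.items():
    combined *= a

minutes_per_year = 365.25 * 24 * 60
downtime_min = (1 - combined) * minutes_per_year

print(f"Combined availability: {combined:.5f} "
      f"(~{downtime_min:.0f} minutes of downtime per year)")
```

Three systems at 99.9% each combine to roughly 99.7%, about 26 hours of downtime a year for the chain as a whole. That is why linked applications raise the reliability bar for the data center that hosts them.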
Souped-up, expensive servers – Experience has taught your IT staff to overbuild server and storage hardware to work around some of the bad behaviors of your mission critical applications. These non-standard configurations drive up costs. They are also more difficult to operate in cloud computing environments, forcing the data center to remain physical instead of virtual.
More Maintenance – More problems mean more maintenance work to solve them. This drives up FTE requirements and makes outsourcing more complex and expensive. Maintenance load can influence location and staff requirements for the data center.
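To see how maintenance load turns into headcount, you can run a rough FTE estimate. This is a sketch under stated assumptions; the application names, hours per month, and productive hours per FTE below are all hypothetical and should come from your own ticket and change-log data.

```python
# Rough FTE estimate for application maintenance load.
# Hours-per-month figures are hypothetical placeholders.

apps = {
    "erp":        40,   # maintenance hours per month
    "crm":        25,
    "accounting": 15,
}

HOURS_PER_FTE_MONTH = 140   # assumed productive hours after meetings, PTO, etc.

total_hours = sum(apps.values())
ftes = total_hours / HOURS_PER_FTE_MONTH

print(f"{total_hours} maintenance hours/month ~= {ftes:.1f} FTEs")
```

Even a fraction of an FTE per ill-behaved application adds up quickly across a portfolio, and that number directly shapes the in-house versus outsourced staffing decision.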
Costly uptime – Problem applications are harder to keep running and often require more technology for high uptime levels. Expensive high uptime technologies like clustering greatly drive up the costs of keeping the application alive and well.
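Before paying for clustering, it helps to translate uptime percentages into an annual downtime budget so you know what each extra "nine" actually buys. A minimal sketch (the 99.995% tier matches the colocation figure cited later in this post):

```python
# Translate availability percentages into annual downtime budgets.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for pct in (99.9, 99.99, 99.995):
    downtime = (1 - pct / 100) * MINUTES_PER_YEAR
    print(f"{pct:>7}% uptime -> {downtime:7.1f} minutes of downtime per year")
```

Going from 99.9% to 99.995% shrinks the budget from about 526 minutes a year to about 26, which is the gap that expensive high-availability technology is asked to close.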
Ill-behaved line-of-business applications influence strategic data center decisions:
Primary data center location – Would your data center be better off in-house, in the cloud, in an outsourced data center facility or a hybrid of all three?
Wide area network design – Where is the hub of the network? How many telecom providers should I use? How much bandwidth do I buy? How can I get the best pricing?
Server hardware ownership and maintenance – Do I buy my own servers for maximum control? Do I use virtual servers in the cloud? Do I use a combination of both?
Maintenance – Does in-house staff do maintenance or do I outsource it?
Good CIO strategy includes a clear understanding of the mission critical applications and their data center requirements.
More CIOs are using these tools to mitigate the risks of their worst-behaved applications:
- Thin application delivery via software from VMware and Citrix to solve bandwidth problems
- Affordable colocation to build in 99.995% uptime on the data center power and cooling
- Cloud computing services like virtual private servers for predictable mission critical applications
- Change management discipline to manage application behaviors and reduce maintenance
Don’t let your worst-behaved applications cause you to make bad decisions about your data center.