CloudTweaks: Why All Those New Google / Amazon Data Centers Won’t Really Go To Waste – Cloud Computing’s First Supercomputer
Google and Amazon are steadily increasing the size of their data centers. Many skeptics, however, ask what will happen if or when demand for that capacity falls. Unlike the rest of us, who consume Google and Amazon Cloud services elastically and only as our needs require, the providers who own the actual hardware behind the Cloud cannot scale down so easily. Should public Cloud demand ever drop, Amazon has already found another use for its growing data centers: Cloud Computing's first supercomputer.
By stringing together a cluster of 30,000 processing cores, Amazon's EC2 (Elastic Compute Cloud) earned rank 42 on the Top500 list of the world's supercomputers. Granted, at 240 trillion calculations per second it is not the fastest machine on the list, but it is by no means an average performer either. The main point is that it is available to anyone, unlike the typical supercomputer cluster, which is built for a dedicated purpose and therefore offers rather limited access (and even longer waiting lines).
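For context, a simple division of the two figures reported above gives a rough sense of per-core throughput; this is a back-of-the-envelope estimate, not an official benchmark breakdown:

```python
# Rough per-core throughput implied by the article's figures.
total_flops = 240e12   # 240 trillion calculations per second, as reported
cores = 30_000         # cluster size, as reported

print(f"~{total_flops / cores / 1e9:.0f} GFLOPS per core (back-of-the-envelope)")
```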
Amazon proved this by running an actual, paid-for supercomputing job at a mere $1,279 an hour. That may sound like a lot, but anyone who has stood up a dedicated supercomputer cluster will shake their head in disbelief (and probably regret) at the millions of dollars it takes to build one, let alone keep it running. What boggles the mind even further is that Amazon did this while running all of its other Cloud-related services at the same time.
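To make that cost gap concrete, here is a minimal sketch of the rent-versus-build arithmetic. The $1,279/hour rate comes from the article; the dedicated-cluster price tag and the annual workload below are purely illustrative assumptions, not reported numbers:

```python
# Back-of-the-envelope comparison: renting Amazon's EC2 cluster vs. owning one.
EC2_RATE_PER_HOUR = 1279            # USD/hour, as reported for the 30,000-core run
DEDICATED_CLUSTER_COST = 5_000_000  # USD up-front build cost (hypothetical assumption)
HOURS_PER_YEAR = 200                # assumed annual supercomputing workload (hypothetical)

annual_rental = EC2_RATE_PER_HOUR * HOURS_PER_YEAR
breakeven_years = DEDICATED_CLUSTER_COST / annual_rental

print(f"Renting: ${annual_rental:,.0f} per year")
print(f"Years of rental before matching the assumed build-out: {breakeven_years:.1f}")
```

Under these assumed numbers, rental costs roughly $256,000 a year, so it would take almost two decades of occasional use to equal the up-front cost of a dedicated cluster, and that is before power, cooling, and staffing are counted.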
Read more of the CloudTweaks article