Power consumption is the number one cost in a modern mega-data center. Some estimates suggest that power accounts for between 30 and 50 percent of the operating costs of most data centers. Performance-per-watt metrics are all the rage among the big cloud players, and companies like Facebook, LinkedIn, Google and Amazon are taking aggressive, experimental steps in their quest to lower power consumption.
In our previous blog post, entitled Tariff-Aware Cloud Computing?, we suggested that an insightful and valuable hack might come from mashing up our comprehensive Electricity Pricing Database with predictive routing algorithms to deliver an electricity-pricing-intelligent routing algorithm. For starting points, check out the Google Prediction API and the Amazon Web Services Elastic Compute Cloud.
While we recognize that data center operators also manage non-IT energy costs (i.e., cooling, lighting, power delivery and back-up), the ‘working’ IT power load could easily be shifted in real time to lower-cost jurisdictions, provided the routing algorithm has access to transparent electricity pricing information.
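To make the idea concrete, here is a minimal sketch of a pricing-aware routing decision. The region names and prices are entirely hypothetical placeholders, not real tariff data; in practice the prices would come from a live pricing feed such as the Electricity Pricing Database mentioned above.

```python
# Hypothetical sketch: pick the jurisdiction with the cheapest electricity.
# Region names and prices below are illustrative, not real tariff data.

PRICES_USD_PER_KWH = {
    "us-east": 0.072,
    "us-west": 0.065,
    "eu-west": 0.110,
    "quebec":  0.048,
}

def cheapest_region(prices):
    """Return the jurisdiction with the lowest current price per kWh."""
    return min(prices, key=prices.get)

print(cheapest_region(PRICES_USD_PER_KWH))  # → quebec
```

A real router would refresh the price table continuously and weigh migration overhead against the expected savings, but the core decision is just this comparison.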
As the big data behemoths become truly interested in data center compute efficiency, we believe the underlying metric will evolve from performance-per-watt alone to one that folds price-per-kWh into the mix.
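The arithmetic behind that combined metric is straightforward. The figures below (throughput, server draw, price) are assumed values chosen purely to illustrate how price-per-kWh converts performance-per-watt into performance-per-dollar:

```python
# Illustrative arithmetic only: folding price-per-kWh into a
# performance-per-watt figure. All inputs are assumptions.

throughput_ops = 1.0e9   # server throughput, operations per second (assumed)
power_watts = 400.0      # server power draw in watts (assumed)
price_per_kwh = 0.072    # electricity price, USD per kWh (assumed)

perf_per_watt = throughput_ops / power_watts          # ops/sec per watt
energy_cost_per_hour = (power_watts / 1000.0) * price_per_kwh  # USD/hour
perf_per_dollar_hour = throughput_ops / energy_cost_per_hour

print(perf_per_watt, energy_cost_per_hour)  # → 2500000.0 0.0288
```

Two servers with identical performance-per-watt can differ widely on performance-per-dollar once the local price-per-kWh is factored in.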
Not all kWhs are created equal. Small differences in price across country and state lines can drive efficiencies, and steer data center investment dollars. (One study suggests that energy costs can be reduced by up to 30% without a significant increase in client-server distances.) We are betting that developers at the CleanWeb Hack can unlock these arbitrage possibilities and insert a data-per-dollar metric into the mix.
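A hack along these lines might constrain the arbitrage with a latency budget, so workloads only move to cheaper jurisdictions that stay acceptably close to the client. Everything below (region names, prices, latencies, the routing rule) is a hypothetical sketch, not a production algorithm:

```python
# Hypothetical sketch: cheapest eligible region under a latency budget.
# All region names, prices, and latencies are made-up illustrations.

REGIONS = [
    # (name, price in USD/kWh, round-trip latency in ms from the client)
    ("us-east", 0.072, 20),
    ("us-west", 0.065, 70),
    ("quebec",  0.048, 35),
    ("eu-west", 0.110, 90),
]

def route(regions, max_latency_ms):
    """Pick the cheapest region whose latency stays within the budget."""
    eligible = [r for r in regions if r[2] <= max_latency_ms]
    return min(eligible, key=lambda r: r[1])

name, price, latency = route(REGIONS, max_latency_ms=50)
savings = 1 - price / 0.072  # relative to the nearest region's price
print(name, f"{savings:.0%}")  # → quebec 33%
```

In this toy example the cheaper jurisdiction delivers a roughly 30% cost reduction within the latency budget, which is exactly the kind of arbitrage a data-per-dollar metric would surface.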