Cloud Mea Culpa: Nick Carr Was Right and I Was Wrong
17 April, 2009 01:10
The point about these changes is that they were not merely fashionable initiatives. Each represented a significant improvement in computing capability, driven by the advance of technology. And each resulted in a massive change in infrastructure, protocols, connectivity, and hardware. This is the point, I think, where Carr's analogy of the electricity grid breaks down. Unlike computing, the fundamentals of electricity settled down quite early and have remained pretty stable ever since. Otherwise, how could Livermore's famous 100-year lightbulb remain burning to this very day? If electricity were like computing, the City of Livermore would have been forced to discard the lightbulb due to changes in power frequencies, socket obsolescence, incompatible hardware, or the like.
While it is tempting, with a bit of a paraphrase, to forecast The End of Computing, it's unlikely that IT development will stop at Amazon-hosted (or Microsoft- or Google-hosted, for that matter) centralised computing. To extrapolate one current trend, look at the explosion of computing-capable portable devices like smartphones and music players. What will happen to computing when, thanks to Moore's Law, these become as powerful as today's most powerful desktop computers, with just as much storage? Surely computing infrastructures will evolve to integrate that new distributed computing capacity.
So, too, will data centers evolve. I doubt that today's best practices in data centers, as exemplified by cloud providers, will remain static.
Carr's more general point, however, is well-taken. What is the role of corporate data centers in this still-developing IT landscape? Notwithstanding the continuing rapid evolution of infrastructure, does it make sense for companies to run their own data centers going forward? Putting the question another way, is computing so changeable that there is still the potential for competitive advantage in running one's own infrastructure? As a corollary, is the variety of corporate applications so profound, and are the differences in their architecture, hardware requirements, and so on so important, that it precludes moving to a standardised environment, a la Microsoft's San Antonio data center?
It's clear that data center operations are moving away from manual administration toward automation. As that piece discussing cloud data center cost structures noted, the automation of these centers is a key reason their cost structure is so much lower than that of standard data centers. For an internal data center to remain competitive from a cost benchmark perspective, it must attain those same automation capabilities. Much of the discussion about "internal clouds" posits that companies can match the agility, automation, and economies of scale that external cloud providers achieve.
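To make the automation argument concrete, here is a minimal sketch of why automated provisioning changes the cost structure. All the names here (`Server`, `provision_fleet`) are illustrative inventions, not any real cloud API; the point is simply that an automated pipeline handles a thousand servers through the same code path as one, so the marginal human effort per additional server approaches zero, whereas manual administration scales linearly with fleet size.

```python
# Hypothetical illustration of automated provisioning (not a real API).
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    image: str = "base-os"
    configured: bool = False


def provision_fleet(count: int, image: str) -> list:
    """Automated path: one code path regardless of fleet size."""
    fleet = [Server(name=f"node-{i}", image=image) for i in range(count)]
    for server in fleet:
        # Stand-in for scripted configuration management; in a manual
        # data center, this step is per-box human labor.
        server.configured = True
    return fleet


fleet = provision_fleet(1000, image="web-tier-v2")
```

Whether the fleet is 10 machines or 10,000, the operational work above is the same script run once, which is the economy of scale cloud providers exploit and internal data centers must replicate.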
I sometimes hear advocates of internal clouds say that they want to provide a way for companies to leverage their existing infrastructure, but make it cloud-capable by layering some additional hardware and software on top. It's an attractive argument, but is it attainable, or does it merely attempt to justify remaining committed to the sunk cost represented by current data centers? More specifically, can real cloud capability be accomplished with the current infrastructure, representing as it does the variety of hardware and software purchased to implement specific application initiatives? My sense is that much of existing infrastructure cannot be moved into an automated environment, and a significant part of it will need to be scrapped or replaced with new automation-friendly technology.
To draw another historical analogy, a complement to the one I drew at the end of the cloud center cost posting alluded to above, consider how mass production evolved. When Henry Ford finally realised he needed to replace the Model T with a newer design, he found that his highly automated factory lacked flexibility: it was designed for one thing, making Model Ts. It took an enormous redesign (lasting 18 months), and 40% of the machine tools in the factory, which could manufacture nothing but the T, had to be replaced before he could begin manufacture of the Model A. One might say he was "locked in" to manufacturing Model Ts. Ever since then, auto factories have been designed to be much more flexible, enabling general automation no matter what specific type of car is being built.
Today's CIOs are likely to confront the same issue: if they want to move to a fully agile, fully flexible infrastructure, they'll need to do a general redesign of systems and processes. Put bluntly, it will require significant investment to achieve "internal cloud" automation. Should they do so, that might put them on a level playing field, cost-wise, with commercial cloud providers. But it still won't answer the question of whether "self-generation" of IT infrastructure is worthwhile when public options are available. Or, given the ever-increasing cost and complexity of infrastructure, does it make better sense to focus on applications, which, remember, is where IT capability offers support for competitive advantage, and let someone else deal with the plumbing?
Bernard Golden is CEO of consulting firm HyperStratus, which specialises in virtualisation, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualisation to date.