Multi-core chip rivals AMD and Intel have been beating their chests of late, but to what end, I wonder, as developers labor to keep up.
AMD, for one, has fixed the embarrassing flaw that delayed the quad-core Barcelona chip. As Terry Malloy put it in On the Waterfront, so what?
Meanwhile, Intel and Microsoft pat themselves on the back because they've donated US$20 million to UC Berkeley and the University of Illinois to found the Universal Parallel Computing Research Centers. Well, it's about time.
Why so negative? The dirty little secret (and it's not all that secret) is that the gap between hardware and software has never been greater. Today's software can barely (if at all) take advantage of quad-core processors, but Intel and AMD seem to be giddy with rivalry, rushing to push out chips with even more cores. Intel has already demonstrated an 80-core processor, and you can expect x86 servers with as many as 64 processor cores in 2009 and desktops with that many by 2012, says Forrester analyst James Staten.
That's not to say that the IT industry is scoffing at the potential benefits of multi-core processing. But the mountain between IT and some future multi-core promised land -- namely, the task of developing parallelized apps that keep pace with continual core advances -- is huge, says David Patterson, the Pardee Professor of Computer Science at UC Berkeley and director of the parallel computing lab. "It's the biggest challenge in 50 years of computing. If we do this, it's a chance to reset the foundation of computing."
In the short run, Patterson says, we can parallelize legacy software and gamble on getting value out of eight cores. But that would be only an interim solution, as such apps would not scale to 32 or 64 cores, he adds.
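Patterson's scaling warning has a well-known formal basis that the article doesn't spell out: Amdahl's law, which caps the speedup of any partially parallelized program by its remaining serial fraction. As a rough sketch (the 90 percent parallel figure below is my own illustrative assumption, not a number from Patterson), a retrofitted legacy app can look respectable on eight cores while hitting a hard ceiling long before 64:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# runs in parallel and the rest (1 - p) stays serial.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical legacy app in which 90% of the runtime parallelizes.
# The serial 10% caps the speedup at 1 / (1 - 0.9) = 10x, no matter
# how many cores the chipmakers ship.
for cores in (8, 32, 64):
    print(f"{cores:2d} cores: {speedup(0.9, cores):.2f}x")
# ->  8 cores: 4.71x
# -> 32 cores: 7.80x
# -> 64 cores: 8.77x
```

Going from 8 to 64 cores here doesn't even double the speedup, which is exactly why bolting parallelism onto legacy code is an interim fix rather than an answer.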
What is frustrating is that this problem didn't exactly sneak up on the industry. Chip development cycles are very long, and key software developers are well aware of what's moving through the pipeline. Sure, software always lags hardware. Many of us complained that we didn't have software that could take advantage of 500MHz processors back in the '90s. But what Patterson and others call the multi-core revolution poses problems for developers that are qualitatively different from the problems of the past. Why wait so long to get serious about solving them?