Anticipating a limit to how many cores can usefully be put on a chip, processor designers are turning to tiled architectures as the next generation of chip design.
The agenda of the 19th annual Hot Chips conference, going on this week at Stanford University in Palo Alto, California, includes presentations from several chip companies on parallel computing using a tiled, or grid, design.
Tiles, each containing a processor core and a router, are laid out in rows and columns, like the grid map of a city. Data hops from tile to tile along its route across the chip, and different instructions can run in parallel without having to wait for one another. Tiled designs also use less energy than today's multicore chips.
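The hop-by-hop routing described above can be sketched in a few lines. This is an illustrative model only, not any vendor's actual routing algorithm; it assumes simple X-then-Y ("dimension-ordered") routing, a common textbook scheme for grid networks.

```python
# Minimal sketch of how a message might hop between tiles on a grid
# of cores, using X-then-Y routing. Purely illustrative.

def route(src, dst):
    """Return the list of (x, y) tiles a message visits from src to dst."""
    x, y = src
    path = [src]
    # Travel along the X axis first...
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    # ...then along the Y axis.
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# A message from tile (0, 0) to tile (3, 2) crosses five links.
print(route((0, 0), (3, 2)))
```

Because each tile has its own router, many such messages can be in flight on different parts of the grid at the same time.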
Intel detailed a prototype 80-core processor made up of tiles laid out eight across and ten down. Intel's chip also has a "sleep/wake" function that turns off power to tiles when they are idle and wakes them up when they are needed, said Yatin Hoskote, a principal engineer at Intel.
Parallelism makes it possible to run a communication instruction concurrently with a computational instruction, he said.
"You get a lot more concurrency. If you can overlap computation with communications as much as possible then you get a high level of efficiency because you are not spending cycles just communicating, you are using computation cycles to send data out onto the chip," Hoskote said.
The sleep feature cuts leakage (electrical power wasted by circuits that aren't doing any computing) by a factor of two to five compared with existing designs, he said, and cuts energy consumption in each tile's router by a factor of seven.
Intel's tiled processor prototype is just a research project, Hoskote said, with no immediate plans to develop a particular product out of it. But a chip industry newcomer, Tilera, used Hot Chips to unveil its first 64-core tiled processor, in which the tiles are arranged eight across and eight down.
The Tile64 product is an embedded processor for network routers and switches, and for equipment that distributes high-definition video signals.
Nvidia and AMD also described their parallel computing processors during their presentations at the conference.
Chip makers are studying parallel computing because they believe the trend of offering two-, four- or eight-core processors on a chipset will eventually reach its limit, said Alan Jay Smith, a professor of computer science at the University of California at Berkeley and one of the event organisers.
"Everyone's got the same problem. They have got more real estate on the chip than they can usefully spend on a uniprocessor, and a uniprocessor runs very hot," Smith said. A uniprocessor is a computer with only one central processing unit. "Everyone is working on parallelism because ... you can built it now more effectively."
The downside of parallelism is that it is difficult to write software that runs its instructions in parallel, he said.
"People think in a linear way. Most programs out there are linear. Converting the software into a parallel form where you can have computation going on in multiple processors at once is hard," Smith said.