Washington, Apr 11 (IANS): Computer chips have stopped getting faster: their clock speeds have hit a practical ceiling. So chipmakers are instead putting additional cores, or processing units, on a single chip to skirt this problem.
Today, a typical chip might have six or eight cores, all communicating with one another over a single bundle of wires, called a bus. With a bus, however, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores, envisioned as the future of computing.
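To see why a shared bus serializes traffic, consider a minimal Python sketch (not a model of any particular chip, and the one-transfer-per-cycle rule is an illustrative assumption): because the bus grants exactly one core-to-core transfer per cycle, the time to drain pending messages grows linearly with the number of cores.

# A minimal sketch of why a shared bus becomes a bottleneck: only one
# core-to-core transfer can occupy the bus per cycle, so pending
# messages queue up as the core count grows.
from collections import deque

def cycles_to_drain(num_cores, messages_per_core):
    """Count bus cycles needed when every core has messages to send."""
    queue = deque((core, msg)
                  for core in range(num_cores)
                  for msg in range(messages_per_core))
    cycles = 0
    while queue:
        queue.popleft()   # the bus grants exactly one transfer per cycle
        cycles += 1
    return cycles

for cores in (8, 64, 1024):
    print(cores, "cores ->", cycles_to_drain(cores, 4), "bus cycles")

A network of per-core routers avoids this serialization, because transfers on different links can proceed at the same time.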
Li-Shiuan Peh, associate professor of electrical engineering and computer science at MIT, wants cores to communicate the same way computers hooked to the Internet do: by bundling the information they transmit into "packets".
Each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole. Multicore chips are faster than single-core chips because they can split up computational tasks and run them on several cores at once, according to an MIT statement.
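The article does not say which routing scheme the routers would use; dimension-ordered ("XY") routing on a two-dimensional mesh is a common on-chip choice, and the Python sketch below uses it purely to illustrate how a per-core router forwards a packet hop by hop toward its destination.

# A hedged sketch of per-core routing on a 2D mesh network-on-chip.
# Dimension-ordered ("XY") routing is one simple, deadlock-free scheme;
# it is used here for illustration only.

def xy_route(src, dst):
    """Return the list of (x, y) router hops from src to dst,
    travelling first along x, then along y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # correct the x coordinate first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then the y coordinate
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# A packet from core (0, 0) to core (3, 2) crosses five links.
print(xy_route((0, 0), (3, 2)))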
Cores working on the same task will occasionally need to share data but, until recently, the core count on commercial chips has been low enough that a single bus has been able to handle the extra communication load.
That's already changing, however. "Buses have hit a limit," Peh says. "They typically scale to about eight cores." The 10-core chips found in high-end servers frequently add a second bus, but that approach won't work for chips with hundreds of cores.
Peh and colleagues have developed two techniques to address these concerns. One is something they call "virtual bypassing". On the Internet, when a packet arrives at a router, the router inspects its addressing information before deciding which path to send it down.
With virtual bypassing, however, each router sends an advance signal to the next, so that it can preset its switch, speeding the packet on with no additional computation. In her group's test chips, Peh says, virtual bypassing brought data-transmission rates very close to the maximum predicted by theoretical analysis.
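A rough way to see the payoff, with assumed cycle counts (the article gives no pipeline details): if a baseline router spends extra cycles inspecting each packet's address and setting its switch, while a bypassed router's switch has already been preset by the advance signal, the per-hop latency shrinks accordingly.

# A hedged sketch of the idea behind "virtual bypassing". The exact
# router pipeline isn't described here, so assume a baseline router
# that spends extra cycles inspecting a packet's address and setting
# its switch, versus a bypassed router whose switch was preset by an
# advance signal from the upstream router. All cycle counts are
# illustrative assumptions.

ROUTE_AND_ARBITRATE = 2   # assumed cycles to inspect address + set switch
TRAVERSE = 1              # assumed cycles to cross the router itself

def latency(hops, bypassed):
    """Total cycles for a packet to cross `hops` routers."""
    per_hop = TRAVERSE if bypassed else ROUTE_AND_ARBITRATE + TRAVERSE
    return hops * per_hop

for hops in (4, 16):
    print(f"{hops} hops: baseline {latency(hops, False)} cycles, "
          f"bypassed {latency(hops, True)} cycles")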
These findings will be presented at the Design Automation Conference in June in the US.