MIT produces Swarm chip for “easy parallel programming”

MIT has a long-standing history of producing computing innovation. Its latest research is focused on what it calls a “Swarm” chip, which aims to simplify parallel coding by removing explicit synchronization and minimizing interference between processors. This is in contrast to classic Intel-like chips with coherent caches — which, by the way, will have to change, but that is a subject for another discussion.

Applications tested on the processor ran 3–18x faster, while needing on the order of one-tenth of the code usually required. MIT says that the processor time-stamps tasks internally and then automatically determines which should be worked on first across the whole chip.
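To make the ordering idea concrete, here is a minimal sequential sketch in Python (the function name and structure are illustrative, not Swarm's actual interface): tasks carry timestamps, and the scheduler always picks the earliest-timestamped task next, much as Swarm's hardware commits tasks in timestamp order.

```python
import heapq

def run_timestamped(tasks):
    """Execute (timestamp, task) pairs in timestamp order.

    A toy, sequential analogue of Swarm's ordering idea: the
    hardware tracks per-task timestamps and always commits the
    earliest one first; here a priority queue plays that role.
    """
    heap = list(tasks)
    heapq.heapify(heap)
    results = []
    while heap:
        ts, fn = heapq.heappop(heap)  # earliest timestamp wins
        results.append(fn())
    return results

# Tasks are submitted out of order but execute in timestamp order.
order = run_timestamped([
    (3, lambda: "c"),
    (1, lambda: "a"),
    (2, lambda: "b"),
])
# order == ["a", "b", "c"]
```

The real chip runs many such tasks speculatively in parallel and rolls back conflicts, which is precisely the bookkeeping this sketch leaves out.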

Task-based processing systems are gaining more and more popularity as programmers reach the limits of data parallelism for many common problems. This is evident in the rising popularity of programming environments such as Intel’s Threading Building Blocks, Cilk Plus and the task constructs added to the OpenMP standard. Swarm does some of this in hardware, promising new avenues for acceleration with minimal overhead in code.
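In all of these environments the core idea is the same: the programmer submits units of work, and a runtime scheduler, not the programmer, decides which worker executes each one. A rough analogue using only the Python standard library (the worker function is a made-up example):

```python
from concurrent.futures import ThreadPoolExecutor

def count_vowels(word):
    """A stand-in for any independent unit of work."""
    return sum(ch in "aeiou" for ch in word)

# Submit tasks; the pool's scheduler assigns them to workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(count_vowels, w)
               for w in ["swarm", "chip", "parallel"]]
    totals = [f.result() for f in futures]
# totals == [1, 1, 3]
```

TBB, Cilk Plus and OpenMP tasks do this scheduling in a software runtime; Swarm's pitch is to move that machinery into silicon.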

Joel Emer, who was also one of the key people behind the Triggered Instructions concept in the domain of spatial programming, was involved in the design.