EETimes is reporting on Intel’s Lake Crest accelerator, which came to Intel through its acquisition of Nervana Systems. Its fundamental mission is to run AI primitives much faster than GPUs.
The chip fits into Intel’s broader 2016 push into AI and machine/deep learning, and can be considered direct competition in an area so far dominated by GPUs. Lake Crest is a clean-sheet design, currently scheduled to be produced on a 28 nm TSMC process. It makes an interesting tradeoff, quite similar to the twist GPUs once put on CPUs: mathematical operations are simplified, leading to performance boosts of up to 10x. In addition, nodes can be interconnected with high-bandwidth links, enabling custom topologies for an optimized hardware-to-model fit. Finally, the chip is meant to use High Bandwidth Memory 2 (HBM2), which promises transfers of up to 8 Tbit/s.
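The article doesn’t spell out how Lake Crest simplifies its math, but a common way accelerators make this tradeoff is by reducing numeric precision. As a rough illustration only (using NumPy’s float16 as a stand-in for a generic reduced-precision format, not Lake Crest’s actual one), halving the width of each value halves storage and memory traffic at the cost of per-value rounding error:

```python
import numpy as np

# Illustration only: float16 stands in for a generic reduced-precision
# number format; Lake Crest's actual format is not described here.
rng = np.random.default_rng(seed=42)
weights = rng.standard_normal(1_000_000).astype(np.float32)

low_precision = weights.astype(np.float16)

# Half the bytes per value means half the storage and half the
# memory bandwidth consumed when streaming the same number of values.
print(weights.nbytes)        # 4 bytes per float32 element
print(low_precision.nbytes)  # 2 bytes per float16 element

# The price: each value is rounded, so a small but nonzero error appears.
max_err = np.max(np.abs(low_precision.astype(np.float32) - weights))
print(max_err)
```

Neural-network training tends to tolerate this kind of rounding well, which is why trading precision for throughput and bandwidth is attractive for AI hardware in the first place.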
Nervana, like many other AI vendors, supports TensorFlow (championed by Google). With the recent collaboration agreement between Google and Intel, things can only get more interesting.