Is it finally time for FPGAs in general-purpose compute?

FPGAs have been around for decades, mostly in industrial applications. In recent years, the idea of “casual” FPGA use in general-purpose programming has been resurfacing with renewed strength.

It is no secret that FPGAs are a bit of a hassle to program. This is one of the main reasons why many organizations today are examining options to move away from FPGAs in favor of GPUs, Xeon Phis, or even standard x86.

At the same time, a lot of work is being done to make FPGAs more accessible. On the hardware side, vendors such as Intel have worked on x86+FPGA combos for a while now. At this year’s Hot Chips conference, a speaker from Baidu discussed software-defined acceleration for fairly mainstream SQL processing. Since about 40% of Baidu’s data analysis jobs are written in SQL, the setup uses FPGAs programmed at the RTL level to process them, and boasts a more modest power envelope than that of GPUs.
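To make the idea concrete, here is a minimal sketch of the kind of data-parallel kernel such a setup offloads: a simple SQL predicate plus aggregation written in HLS-friendly C. The query, constants, and pragma usage are illustrative assumptions rather than Baidu’s actual design; the pragma follows Xilinx Vivado HLS conventions and is simply ignored by an ordinary compiler, so the code compiles and runs as plain C.

```c
/*
 * Illustrative HLS-style kernel: evaluate a simple SQL query
 *   SELECT SUM(amount) FROM sales WHERE region = TARGET_REGION
 * over a columnar batch. The table layout, batch size, and constants
 * are assumptions made for this sketch.
 */
#include <stdint.h>
#include <stdio.h>

#define BATCH 1024
#define TARGET_REGION 7

uint64_t filter_sum(const uint8_t region[BATCH], const uint32_t amount[BATCH])
{
    uint64_t acc = 0;
    for (int i = 0; i < BATCH; i++) {
#pragma HLS PIPELINE II=1   /* one row per clock once the pipeline fills (HLS only) */
        if (region[i] == TARGET_REGION)
            acc += amount[i];
    }
    return acc;
}

int main(void)
{
    static uint8_t  region[BATCH];
    static uint32_t amount[BATCH];

    /* fill the columns with synthetic data */
    for (int i = 0; i < BATCH; i++) {
        region[i] = (uint8_t)(i % 16);
        amount[i] = (uint32_t)(i + 1);
    }

    printf("sum = %llu\n", (unsigned long long)filter_sum(region, amount));
    return 0;
}
```

The appeal of this style is that the loop body maps naturally onto a deep hardware pipeline, so throughput scales with clock rate rather than with instruction-level parallelism on a CPU.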

Another interesting talk from Hot Chips ’16, given by DeePhi Tech, was on software-hardware co-design to accelerate neural networks. The company provides novel FPGA-based solutions for deep learning, with a range of supported mainstream applications such as detection, tracking, translation, and recognition. In essence, modern, larger neural network designs require memory bandwidth (among other constraints) that non-FPGA chips cannot easily deliver. The high-level workflow enables efficient connectors from frameworks such as TensorFlow. Raw performance results, compared to the ARM-based NVIDIA Tegra TK1, are not better on every front tested, but they are certainly promising, not to mention the improved power consumption.
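As an illustration of why reduced precision eases the bandwidth constraint, the sketch below shows 8-bit quantized arithmetic of the general kind FPGA inference engines favor: int8 weights and activations, int32 accumulation, and a final rescale to floating point. The per-tensor scales, sizes, and data are assumed values for illustration, not DeePhi’s actual quantization scheme.

```c
/*
 * Illustrative sketch: 8-bit quantized dot product. Storing weights and
 * activations as int8 moves a quarter of the bytes that float32 would,
 * which is the bandwidth argument for this style of inference on FPGAs.
 */
#include <stdint.h>
#include <stdio.h>

#define N 256

/* int8 dot product with int32 accumulation */
static int32_t dot_i8(const int8_t *w, const int8_t *x, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)w[i] * (int32_t)x[i];
    return acc;
}

int main(void)
{
    int8_t w[N], x[N];
    for (int i = 0; i < N; i++) {
        w[i] = (int8_t)((i % 7) - 3);   /* synthetic weights */
        x[i] = (int8_t)((i % 5) - 2);   /* synthetic activations */
    }

    /* per-tensor scales chosen at quantization time (assumed values) */
    const float w_scale = 0.02f, x_scale = 0.05f;
    float y = (float)dot_i8(w, x, N) * w_scale * x_scale;

    printf("dequantized output: %f\n", y);
    printf("bytes moved: int8 = %zu, float32 = %zu\n",
           2 * N * sizeof(int8_t), 2 * N * sizeof(float));
    return 0;
}
```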

Perhaps the first general-purpose compute niche for FPGAs (if we may call it that) lies in specialized applications that are already well understood in the omnipresent push for data parallelism.