Pitfalls and Blessings of Neural Computation with Graphics Processing Units (GPUs)

Anatoli Gorshechnikov
Boston University


Large-scale neural simulations face three major challenges: slow processing speed, a large memory footprint, and insufficient communication bandwidth. A possible solution is a cluster of computers equipped with modern graphics processors (GPUs). GPUs reintroduce the Single Instruction Multiple Data (SIMD) massively parallel computational paradigm, which can address the processing speed issue. On the other hand, there are inherent bottlenecks when software uses graphics processors: the overhead of engaging the GPU for computation, the computational cost of branching logic in the SIMD paradigm, and the additional transfer of information between the graphics processor and the system board. These costs can outweigh the benefits and make GPU-based neural modeling less attractive.
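As an illustration only (this is not the code used in the study), the following minimal CUDA sketch runs a hypothetical leaky integrate-and-fire update, lif_step, over a population of neurons; all neuron counts, parameters, and the kernel itself are assumptions made for the example. It marks where the three bottlenecks appear in a typical simulation loop: the per-step kernel launch overhead, the divergent spike/no-spike branch inside the SIMD kernel, and the host-device transfers over the system bus.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void lif_step(float* v, const float* input, int n,
                         float dt, float tau, float v_thresh, float v_reset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Leaky integrate-and-fire voltage update
    float vi = v[i] + dt / tau * (-v[i] + input[i]);
    // Divergent branch: spiking and non-spiking neurons in the same warp
    // force the SIMD hardware to serialize the two paths
    v[i] = (vi >= v_thresh) ? v_reset : vi;
}

int main()
{
    const int n = 1 << 20;                       // one million neurons (arbitrary)
    const size_t bytes = n * sizeof(float);
    float* h_v  = (float*)malloc(bytes);
    float* h_in = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_v[i] = -65.0f; h_in[i] = 70.0f; }

    float *d_v, *d_in;
    cudaMalloc(&d_v, bytes);
    cudaMalloc(&d_in, bytes);

    // Host-to-device transfer: part of the cost of engaging the GPU, which
    // must be amortized over many simulation steps
    cudaMemcpy(d_v, h_v, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    for (int step = 0; step < 1000; ++step)      // every launch carries overhead
        lif_step<<<blocks, threads>>>(d_v, d_in, n, 0.1f, 10.0f, -50.0f, -65.0f);

    // Device-to-host transfer over the system bus to retrieve results
    cudaMemcpy(h_v, d_v, bytes, cudaMemcpyDeviceToHost);
    printf("v[0] after 1000 steps: %f\n", h_v[0]);

    cudaFree(d_v); cudaFree(d_in); free(h_v); free(h_in);
    return 0;
}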

This study systematically investigates several performance bottlenecks that can affect simulations of large-scale neural models on a GPU-enabled cluster. How many neurons or synapses are needed to make a GPU more effective than a CPU? How many neurons or synapses can fit on a GPU? What is the effect of the computational complexity of those neurons and synapses on performance? How does the computational complexity of the simulation change with the number of neurons and synapses? These are the questions investigated in this project. The main result is a set of suggestions for further improving the GPU-based performance of large-scale neural models.
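For the first of these questions, the CPU/GPU crossover point could be estimated with a timing harness along the following lines. This is a hypothetical sketch, not the benchmark used in the study: a trivial per-neuron update, a decay_step kernel invented for the example, is timed on the CPU and on the GPU for a range of population sizes, with the host-device transfers deliberately included in the GPU measurement.

#include <chrono>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void decay_step(float* v, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= k;                         // trivial per-neuron update
}

int main()
{
    for (int n = 1 << 10; n <= 1 << 22; n <<= 2) {
        std::vector<float> h_v(n, 1.0f);

        // CPU timing of the same update
        auto t0 = std::chrono::high_resolution_clock::now();
        for (int i = 0; i < n; ++i) h_v[i] *= 0.99f;
        auto t1 = std::chrono::high_resolution_clock::now();
        double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

        // GPU timing, including both host-device transfers
        float* d_v;
        cudaMalloc(&d_v, n * sizeof(float));
        cudaEvent_t start, stop;
        cudaEventCreate(&start); cudaEventCreate(&stop);
        cudaEventRecord(start);
        cudaMemcpy(d_v, h_v.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        decay_step<<<(n + 255) / 256, 256>>>(d_v, n, 0.99f);
        cudaMemcpy(h_v.data(), d_v, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float gpu_ms = 0.0f;
        cudaEventElapsedTime(&gpu_ms, start, stop);

        printf("n = %8d  cpu = %8.3f ms  gpu = %8.3f ms\n", n, cpu_ms, gpu_ms);
        cudaEventDestroy(start); cudaEventDestroy(stop); cudaFree(d_v);
    }
    return 0;
}

Including the transfers in the GPU measurement is the point of the sketch: for small populations the copies and launch overhead dominate and the CPU wins, while for large populations the parallel update amortizes them.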
