Deep-Learning at the Network Edge

Moloney David¹, Cormac Brick², Alireza Dehghani³, Xiaofan Xu⁴
¹Movidius, ²Movidius, ³Movidius, ⁴Movidius


Much attention has been given to training and running deep-learning algorithms in the cloud and exposing services based on these algorithms over wide-area networks. While appealing, this approach has several problems. The energy required to perform a computation in the cloud can be up to one million times higher than performing it in a System-on-Chip in a mobile device (Horowitz, Stanford). Equally, the bandwidth required to stream video and images to the cloud, rather than computing locally and exchanging only metadata, is up to 1000x higher (Martonosi, Princeton). Moreover, the round-trip latency of running CNNs in the cloud rules it out for realtime applications. With CNNs beginning to be deployed on mobile devices (Wu, Baidu), the authors discuss the implications of CNN-based classifiers for the design of next-generation mobile vision processors and the types of applications that will run on them at the network edge.
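The bandwidth argument can be illustrated with a back-of-envelope calculation. The figures below (a ~4 Mbit/s compressed 1080p stream, ~100 bytes of classification metadata per frame) are illustrative assumptions, not measurements from the abstract:

```python
# Compare streaming compressed video to the cloud versus sending only
# locally computed classification metadata. All numbers are assumptions
# chosen for illustration.

FPS = 30

# Assumed H.264-compressed 1080p30 stream: ~4 Mbit/s.
video_bps = 4_000_000

# Assumed per-frame metadata (labels + confidence scores): ~100 bytes.
metadata_bps = 100 * 8 * FPS  # 24,000 bit/s

ratio = video_bps / metadata_bps
print(f"video: {video_bps / 1e6:.1f} Mbit/s, "
      f"metadata: {metadata_bps / 1e3:.1f} kbit/s, "
      f"ratio: {ratio:.0f}x")
```

Under these assumptions the ratio is already well over 100x; for raw or lightly compressed video it grows by further orders of magnitude, consistent with the up-to-1000x figure cited above.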

Key Words: SoC, realtime, OpenCL, mobile, low-power
