FFT‐Based Deep Learning Deployment in Embedded Systems

Sheng Lin1, Ning Liu1, Mahdi Nazemi2, Hongjia Li1, Caiwen Ding1, Yanzhi Wang1 and Massoud Pedram2
1Dept. of Electrical Engineering & Computer Science, Syracuse University, Syracuse, NY, USA
{shlin, nliu03, hli42, cading, ywang393}@syr.edu
2Dept. of Electrical Engineering, University of Southern California, Los Angeles, CA, USA
{mnazemi, pedram}@usc.edu

ABSTRACT


Deep learning has demonstrated its power in many application domains, especially image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types, each with hundreds to thousands of neurons. Embedded platforms are becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, burdens embedded platforms with intensive computation and storage demands. Researchers have therefore investigated reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms, with reduced asymptotic complexity of both computation and storage, which distinguishes our approach from existing ones. We develop training and inference algorithms with FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms, achieving high processing speed.
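To illustrate the computing kernel referred to in the abstract, the following is a minimal sketch (not the authors' implementation) of the standard FFT trick underlying FFT-based DNN layers: a matrix-vector product with a circulant weight matrix can be computed as an element-wise product in the frequency domain, reducing the cost from O(n²) to O(n log n) and the weight storage from n² to n values. The function names and the power-of-two size assumption are ours, for illustration only.

```python
# Sketch: FFT-based circulant matrix-vector product (pure stdlib Python).
# A circulant matrix is fully defined by its first column w; multiplying
# it by x equals the circular convolution of w and x, i.e.
# IFFT(FFT(w) * FFT(x)).
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1  # exp(-2πi k/n) forward, exp(+2πi k/n) inverse
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    """Inverse FFT: conjugate-sign transform scaled by 1/n."""
    n = len(a)
    return [x / n for x in fft(a, invert=True)]

def circulant_matvec(w, x):
    """Multiply the circulant matrix with first column w by vector x
    in O(n log n), assuming len(w) == len(x) is a power of two."""
    W = fft([complex(v) for v in w])
    X = fft([complex(v) for v in x])
    y = ifft([a * b for a, b in zip(W, X)])
    return [round(v.real, 10) for v in y]  # imaginary parts are ~0 for real inputs
```

For example, multiplying by the unit vector recovers the first column of the matrix: `circulant_matvec([1, 2, 3, 4], [1, 0, 0, 0])` yields `[1.0, 2.0, 3.0, 4.0]`. Block-circulant approaches apply this kernel per block of the weight matrix, which is where the asymptotic savings in both computation and storage come from.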


