IP4_4 Interactive Presentations
Date: Wednesday, 03 February 2021
Time: 09:00 - 09:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/YfkeNJuv3YFTbt9i8
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.
IP4_4.1 BLOOMCA: A MEMORY EFFICIENT RESERVOIR COMPUTING HARDWARE IMPLEMENTATION USING CELLULAR AUTOMATA AND ENSEMBLE BLOOM FILTER
Speaker: Dehua Liang, Graduate School of Information Science and Technology, Osaka University, JP
Authors: Dehua Liang, Masanori Hashimoto and Hiromitsu Awano, Osaka University, JP
In this work, we propose BloomCA, which uses cellular automata (CA) and an ensemble Bloom filter to build a reservoir computing (RC) system from binary operations alone, making it well suited to hardware implementation. The rich pattern dynamics of CA map the input into a high-dimensional space, providing more features for the classifier. Using the ensemble Bloom filter as the classifier, these features can be memorized efficiently. Our experiments reveal that applying the ensemble mechanism to the Bloom filter yields a significant reduction in inference memory cost. Compared with the state-of-the-art reference design, BloomCA achieves a 43x reduction in memory cost without sacrificing accuracy. Our hardware implementation also demonstrates that BloomCA reduces area by over 21x and power by 43.64%.
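The abstract's pipeline can be sketched in miniature: an elementary CA expands a binary input into a high-dimensional feature vector, and one Bloom filter per class memorizes which features were active during training. This is an illustrative sketch only, not the authors' design; the CA rule (90), filter size, hash scheme, and scoring are all assumptions made for the example.

```python
# Illustrative sketch of a CA-plus-Bloom-filter classifier (not the BloomCA
# paper's implementation). Rule number, sizes, and hashing are assumptions.
import hashlib

CA_RULE = 90     # elementary CA rule (assumed; the paper may use others)
CA_STEPS = 8     # expansion steps -> high-dimensional binary feature vector
NUM_BITS = 1024  # Bloom filter size per class (assumed)
NUM_HASHES = 3   # hash functions per feature index (assumed)

def ca_expand(state, steps=CA_STEPS, rule=CA_RULE):
    """Run an elementary CA on a circular lattice, concatenating every
    step's state so the input is mapped into a high-dimensional space."""
    features = list(state)
    n = len(state)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            left, center, right = state[i - 1], state[i], state[(i + 1) % n]
            idx = (left << 2) | (center << 1) | right
            nxt.append((rule >> idx) & 1)  # look up the rule table
        state = nxt
        features.extend(state)
    return features

def _hashes(i):
    # Derive NUM_HASHES bit positions from a feature index.
    for k in range(NUM_HASHES):
        h = hashlib.sha256(f"{i}:{k}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % NUM_BITS

class BloomClassifier:
    """Ensemble of Bloom filters: one filter per class memorizes the
    indices of active features seen during training."""
    def __init__(self, num_classes):
        self.filters = [[0] * NUM_BITS for _ in range(num_classes)]

    def train(self, pattern, label):
        for i, bit in enumerate(ca_expand(pattern)):
            if bit:
                for pos in _hashes(i):
                    self.filters[label][pos] = 1

    def predict(self, pattern):
        active = [i for i, b in enumerate(ca_expand(pattern)) if b]
        scores = []
        for filt in self.filters:
            # Count active features whose hash positions are all set.
            scores.append(sum(all(filt[p] for p in _hashes(i))
                              for i in active))
        return scores.index(max(scores))
```

Everything here is a Bloom-filter membership test or a CA table lookup, i.e. binary operations, which is what makes this style of system attractive for hardware.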
IP4_4.2 APPROXIMATE COMPUTATION OF POST-SYNAPTIC SPIKES REDUCES BANDWIDTH TO SYNAPTIC STORAGE IN A MODEL OF CORTEX
Speaker: Yu Yang, KTH Royal Institute of Technology, SE
Authors: Dimitrios Stathis1, Yu Yang2, Ahmed Hemani3 and Anders Lansner4
1KTH Royal Institute of Technology, SE; 2Royal Institute of Technology - KTH, SE; 3KTH - Royal Institute of Technology, SE; 4Stockholm University and KTH Royal Institute of Technology, SE
The Bayesian Confidence Propagation Neural Network (BCPNN) is a spiking model of the cortex. Its synaptic weights are organized as matrices, which require substantial synaptic storage and high bandwidth to it. The learning algorithm accesses these matrices in a dual pattern, both row-wise and column-wise, to reach its synaptic weights. In this work, we exploit an algorithmic optimization that eliminates the column-wise accesses: the new computation model approximates the post-synaptic spike computation with a predictor. We have adopted this approximate computational model to improve upon the previously reported ASIC implementation, called eBrainII, and we present an error analysis showing that the approximation error is negligible. The reduction in storage and in bandwidth to the synaptic storage results in a 48% reduction in energy compared to eBrainII. The reported approximation method also applies to other neural network models based on a Hebbian learning rule.
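The general idea of the optimization can be sketched as follows: if the weight matrix is stored row-major, pre-synaptic spikes are served by cheap row reads, while post-synaptic spikes would normally force expensive column-wise updates. Replacing the exact post-synaptic spike history with a per-neuron predicted trace lets every matrix access stay row-wise. This is a hedged sketch of that access-pattern idea only, not the BCPNN equations or the eBrainII design; the class, the Hebbian-style update, and the exponential-trace predictor are assumptions made for illustration.

```python
# Illustrative sketch (not the eBrainII implementation): column-wise synaptic
# updates triggered by post-synaptic spikes are replaced by a cheap per-neuron
# predicted trace, so synaptic memory is only ever touched row by row.

class RowOnlySynapses:
    def __init__(self, n_pre, n_post, lr=0.01, decay=0.9):
        self.w = [[0.0] * n_post for _ in range(n_pre)]  # row-major storage
        self.z_post = [0.0] * n_post  # predicted post-synaptic trace
        self.lr = lr
        self.decay = decay

    def post_spikes(self, spikes):
        # Instead of a column-wise weight update per post-synaptic spike,
        # only the small trace vector is updated -- no matrix access at all.
        for j, s in enumerate(spikes):
            self.z_post[j] = self.decay * self.z_post[j] + (1.0 if s else 0.0)

    def pre_spike(self, i):
        # Single row-wise access: read row i and apply a Hebbian-style update
        # using the predicted trace in place of exact post-spike history.
        row = self.w[i]
        for j in range(len(row)):
            row[j] += self.lr * self.z_post[j]
        return row
```

Because only rows of `w` are ever read or written, one row-major layout serves every access, which is the source of the bandwidth (and hence energy) saving the abstract describes.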