Design and Optimization of FeFET‐based Crossbars for Binary Convolution Neural Networks

Xiaoming Chen1,2,a, Xunzhao Yin1,b, Michael Niemier1,c and Xiaobo Sharon Hu1,d
1University of Notre Dame, Notre Dame, IN, USA
2Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
achenxiaoming@ict.ac.cn
bxyin1@nd.edu
cmniemier@nd.edu
dshu@nd.edu

ABSTRACT


Binary convolution neural networks (CNNs) have attracted much attention for embedded applications due to their low hardware cost and acceptable accuracy. Nonvolatile resistive random-access memories (RRAMs) have been adopted to build crossbar accelerators for binary CNNs. However, RRAMs still face fundamental challenges such as sneak paths and high write energy. We exploit another emerging nonvolatile device, the ferroelectric field-effect transistor (FeFET), to build crossbars that improve the energy efficiency of binary CNNs. Owing to its three-terminal transistor structure, an FeFET can function as both a nonvolatile storage element and a controllable switch, so that both write and read power can be reduced. Simulation results demonstrate that, compared with two RRAM-based crossbar structures, our FeFET-based design reduces write power by 5600× and 395×, and read power by 4.1× and 3.1×. We also tackle an important challenge in crossbar-based CNN accelerators: when a crossbar array is not large enough to hold the weights of one convolution layer, how do we partition the workload and map computations onto the crossbar array? We introduce a hardware-software co-optimization solution to this problem that is universal for any crossbar accelerator.