An Energy-Efficient Stochastic Computational Deep Belief Network

Title: An Energy-Efficient Stochastic Computational Deep Belief Network
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Liu, Y., Wang, Y., Lombardi, F., Han, J.
Conference Name: 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Keywords: approximate SC activation unit, belief networks, Biological neural networks, Cognitive Computing, Collaboration, composability, computation speed, Correlation, deep belief network, deep neural networks, DNNs, effective machine learning models, Electronic mail, energy consumption, energy-efficient deep belief network, energy-efficient stochastic computational deep belief network, fixed point arithmetic, fixed-point implementation, floating point arithmetic, floating-point design, Hardware, high energy consumption, Human Behavior, learning (artificial intelligence), Metrics, neural nets, Neurons, nonlinearly separable patterns, pattern classification, policy-based governance, pubcrawl, random number generation, random number generators, rectifier linear unit, resilience, Resiliency, RNGs, SC-DBN design, Scalability, Stochastic computing, Stochastic processes

Deep neural networks (DNNs) are effective machine learning models for solving a large class of recognition problems, including the classification of nonlinearly separable patterns. The applications of DNNs are, however, limited by the large size and high energy consumption of the networks. Recently, stochastic computing (SC) has been considered for implementing DNNs to reduce the hardware cost. However, it requires a large number of random number generators (RNGs), which lower the energy efficiency of the network. To overcome these limitations, we propose the design of an energy-efficient deep belief network (DBN) based on stochastic computing. An approximate SC activation unit (A-SCAU) is designed to implement different types of activation functions in the neurons. The A-SCAU is immune to signal correlations, so the RNGs can be shared among all neurons in the same layer with no accuracy loss. The area and energy of the proposed design are 5.27% and 3.31% (or 26.55% and 29.89%) of a 32-bit floating-point (or an 8-bit fixed-point) implementation. It is shown that the proposed SC-DBN design achieves a higher classification accuracy than the fixed-point implementation. The accuracy is only 0.12% lower than that of the floating-point design at a similar computation speed, but with a significantly lower energy consumption.
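To make the stochastic-computing background concrete, the sketch below illustrates the standard SC encoding the abstract relies on: a value in [0, 1] is represented as the fraction of 1s in a random bitstream, and multiplication reduces to a bitwise AND of two streams. This is a generic textbook illustration, not the paper's A-SCAU circuit; the function names and stream length are invented for the example. It also shows why RNG sharing is normally costly: plain AND-based multiplication only works when the two streams are statistically independent, which is the correlation problem the proposed A-SCAU is designed to tolerate.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a stochastic bitstream:
    each bit is 1 with probability p, drawn from the given RNG."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(x_bits, y_bits):
    """SC multiplication: a bitwise AND of two *independent* streams.
    If the streams are correlated (e.g. generated from a shared RNG
    without care), the AND no longer computes the product."""
    return [a & b for a, b in zip(x_bits, y_bits)]

def value(bits):
    """Decode a bitstream back to its probability estimate."""
    return sum(bits) / len(bits)

rng = random.Random(42)          # one RNG, but independent draws per stream
n = 10_000                       # longer streams -> lower estimation error
x = to_bitstream(0.8, n, rng)
y = to_bitstream(0.5, n, rng)
print(value(sc_multiply(x, y)))  # close to 0.8 * 0.5 = 0.4
```

The estimate converges to the true product as the stream length grows, which is the usual SC trade-off between latency (stream length) and accuracy that the paper's energy comparisons operate within.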

Citation Key: liu_energy-efficient_2018