Minimum energy quantized neural networks

Title: Minimum energy quantized neural networks
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Moons, B., Goetschalckx, K., Van Berckelaer, N., Verhelst, M.
Conference Name: 2017 51st Asilomar Conference on Signals, Systems, and Computers
Keywords: approximate computing, arbitrary fixed point precision, automated minimum-energy optimization, BinaryNets, complex arithmetic, Deep Learning, energy conservation, energy consumption, fixed point arithmetic, fundamental trade-off, generic hardware platform, Hardware, higher precision operators, int4 implementations, int8 networks, iso-accuracy depending, low precision weights, Memory management, Metrics, Minimum Energy, minimum energy QNN, Mobile communication, network on chip security, neural nets, Neural networks, power aware computing, pubcrawl, QNN training, Quantized Neural Network, quantized neural networks, Random access memory, resilience, Resiliency, Scalability, system-on-chip, telecommunication security, Training, wider network architectures
Abstract: This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs) - networks using low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher-precision operators, but they require less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum-energy QNN for any benchmark and hence optimize energy efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows drawing several conclusions across different benchmarks. First, energy consumption varies by orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum-energy solution, outperforming int8 networks by up to 2-10x at iso-accuracy. All code used for QNN training is available from
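As a minimal sketch of the kind of uniform fixed-point quantization the abstract describes (the function name, value range, and rounding scheme below are illustrative assumptions, not taken from the paper's implementation):

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize values in [-1, 1) onto a signed fixed-point
    grid with `bits` bits (illustrative; not the paper's exact scheme)."""
    levels = 2 ** (bits - 1)  # e.g. bits=4 -> 8 steps on each side of zero
    # Round to the nearest grid point, then clip to the representable range.
    return np.clip(np.round(w * levels), -levels, levels - 1) / levels

w = np.array([-0.7, -0.05, 0.2, 0.9])
print(quantize(w, 4))  # coarse int4-style grid, step 1/8
print(quantize(w, 8))  # finer int8-style grid, step 1/128
```

Lowering `bits` shrinks each weight's storage and arithmetic cost but coarsens the grid, which is the per-operator side of the accuracy-vs-energy trade-off the paper quantifies.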
Citation Key: moons_minimum_2017