An Adiabatic Method to Train Binarized Artificial Neural Networks
Abstract
An artificial neural network consists of neurons and synapses. A neuron produces output from its input according to a non-linear activation function such as the Sigmoid, Hyperbolic Tangent (Tanh), or Rectified Linear Unit (ReLU). Synapses connect neuron outputs to neuron inputs with tunable real-valued weights. The most resource-demanding operations in realizing such neural networks are the multiply-and-accumulate (MAC) operations that compute the dot product between the real-valued outputs of neurons and the synapse weights. The efficiency of neural networks can be drastically enhanced if the neuron outputs and/or the weights can be trained to take the binary values ±1 only, for which the MAC can be replaced by simple XOR operations. In this paper, we demonstrate an adiabatic training method that successfully binarizes dense and convolutional neural networks without modifying the network structure and with very minimal changes to the training algorithm. The adiabatic training method is tested on four tasks: handwritten-digit recognition using an ordinary dense network, cat-dog recognition and audio recognition using convolutional neural networks, and image recognition with 10 classes (CIFAR-10) using ResNet20 and VGG-Small networks. In all tasks, the performance of the binary neural networks trained by the adiabatic method is almost identical to that of networks trained with conventional ReLU or Sigmoid activations and real-valued activations and weights. The adiabatic method can easily be applied to binarize different types of networks; it will considerably increase computational efficiency and greatly simplify the deployment of neural networks.
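As a rough illustration of why binarization pays off (a sketch under assumed conventions, not the paper's kernel): if the values ±1 are encoded as bits, with +1 as 0 and −1 as 1, then each product a_i·b_i is +1 exactly when the two bits agree, so the dot product of two ±1 vectors reduces to an XOR followed by a population count.

```python
import random

def pack(v):
    """Pack a ±1 vector into an integer bit mask: +1 -> bit 0, -1 -> bit 1."""
    bits = 0
    for i, x in enumerate(v):
        if x == -1:
            bits |= 1 << i
    return bits

def dot_xor(a, b):
    """Dot product of two ±1 vectors via XOR + popcount.

    a_i * b_i = +1 exactly when the encoded bits agree, so
    dot(a, b) = n - 2 * popcount(pack(a) ^ pack(b)).
    """
    n = len(a)
    return n - 2 * bin(pack(a) ^ pack(b)).count("1")

# Sanity check against the naive real-valued MAC.
random.seed(0)
a = [random.choice((-1, 1)) for _ in range(64)]
b = [random.choice((-1, 1)) for _ in range(64)]
assert dot_xor(a, b) == sum(x * y for x, y in zip(a, b))
```

In hardware, `pack` is free (the weights and activations are stored as bits to begin with) and the XOR plus popcount replaces 64 multiplications and additions with two word-level instructions.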
Related Results
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years. But the cognition of human brain’s working process as a whole is still obscure. Both fuzzy logic ...
Formal Analysis of Deep Binarized Neural Networks
Understanding properties of deep neural networks is an important challenge in deep learning. Deep learning networks are among the most successful artificial intelligence technolog...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
Cummins/TACOM Advanced Adiabatic Engine
Cummins Engine Company, Inc. and the U.S. Army have been jointly developing an adiabatic turbocompound engine during the last nine years. Alth...
MASKS: A Multi-Artificial Neural Networks System's verification approach
Artificial neural networks are one of the most widely applied approaches for classification problems. However, developing an errorless artificial neural network is in pr...
PARAMETRIC STUDY OF WARREN STEEL TRUSS BRIDGE USING ARTIFICIAL NEURAL NETWORKS
Abstract
Steel truss bridges are a popular type amongst several other standard bridges in Indonesia due to their lightweight yet robust and strong structure. In this study A...
Adiabatic and Non-Adiabatic Effects in Solvation Dynamics
The solvation process may in principle involve more than one adiabatic state. This is referred to as non-adiabatic solvation. Adiabatic solvation proceeds on a single electronic po...
DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging ...


