Quantized artificial neural networks implemented with spintronic stochastic computing
Abstract
Artificial neural network (ANN) inference involves matrix-vector multiplications that require a very large number of multiply-and-accumulate operations, resulting in high energy cost and a large device footprint. Stochastic computing (SC) offers a less resource-intensive ANN implementation with minimal accuracy loss. Random number generators (RNGs) are required to implement SC in hardware. These can be realized with stochastic magnetic tunnel junctions (s-MTJs), in which the energy barrier between the ‘up’ and ‘down’ states is designed to be small, so that thermal noise alone generates a random bitstream. While s-MTJs have previously been used to implement SC-ANNs, those studies were limited to architectures with continuously varying (i.e., analog) weights. In this work, we study the use of SC for matrix-vector multiplication with quantized synaptic weights and quantized outputs. We show that a quantized SC-ANN, implemented using experimentally obtained s-MTJ bitstreams and a limited number of discrete quantized states for both the weights and the hidden-layer nodes, can substantially reduce latency and energy consumption relative to an analog implementation while largely preserving accuracy. We implemented quantization with 5 and 11 quantized states, together with stochastic bitstream lengths of 100, 200, 300, 400, and 500 bits, on neural networks with one hidden layer and with three hidden layers. Inference was performed on the MNIST dataset for networks trained both with and without SC; training with SC gave better accuracy in all cases. For the shortest bitstream of 100 bits, the highest accuracies were 92% for one hidden layer and over 96% for three hidden layers. The overall system attained its peak accuracy of 96.82% using a 400-bit stochastic bitstream with three hidden layers.
Our investigations demonstrate a 9× improvement in latency and a 2.6× improvement in energy consumption using the quantized SC approach compared to a similar s-MTJ-based ANN architecture without quantization.
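The abstract describes multiplication with stochastic bitstreams and a small number of quantized states. A minimal software sketch of these two ideas follows, assuming the standard unipolar SC encoding (a value p in [0, 1] is represented by a bitstream whose fraction of 1s is p, so multiplication reduces to a bitwise AND); the function names `to_bitstream`, `sc_multiply`, and `quantize` are illustrative, not from the paper, and a software RNG stands in for the s-MTJ noise source.

```python
import random

def to_bitstream(p, length, rng):
    # Encode p in [0, 1] as a bitstream whose fraction of 1s
    # approximates p. In hardware, an s-MTJ would supply the
    # random bits; here a software RNG stands in for it.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b, length, rng):
    # Unipolar SC multiplication: AND two independent bitstreams
    # and count 1s. The estimate's variance shrinks as the
    # bitstream length grows, which is the latency/accuracy
    # trade-off the paper explores (100-500 bit streams).
    sa = to_bitstream(a, length, rng)
    sb = to_bitstream(b, length, rng)
    return sum(x & y for x, y in zip(sa, sb)) / length

def quantize(x, levels):
    # Snap x in [0, 1] to the nearest of `levels` evenly spaced
    # states (e.g. levels=5 or levels=11, as in the experiments).
    step = 1.0 / (levels - 1)
    return round(x / step) * step

rng = random.Random(42)
product = sc_multiply(0.6, 0.5, 400, rng)   # noisy estimate of 0.3
q = quantize(product, 5)                     # one of {0, 0.25, 0.5, 0.75, 1}
```

Quantizing the accumulated result to a few discrete states is what allows shorter bitstreams (lower latency and energy) while largely preserving accuracy, since the output only needs to resolve to the nearest quantized level rather than to a full analog value.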
Related Results
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years. But the cognition of human brain’s working process as a whole is still obscure. Both fuzzy logic ...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
MASKS: A Multi-Artificial Neural Networks System's verification approach
Artificial neural networks are one of the most widely applied approaches for classification problems. However, developing an errorless artificial neural network is in pr...
DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging ...
Integrating quantum neural networks with machine learning algorithms for optimizing healthcare diagnostics and treatment outcomes
The rapid advancements in artificial intelligence (AI) and quantum computing have catalyzed an unprecedented shift in the methodologies utilized for healthcare diagnostics and trea...
PARAMETRIC STUDY OF WARREN STEEL TRUSS BRIDGE USING ARTIFICIAL NEURAL NETWORKS
Abstract
Steel truss bridges are a popular type amongst several other standard bridges in Indonesia due to their lightweight yet robust and strong structure. In this study A...
An Adiabatic Method to Train Binarized Artificial Neural Networks
Abstract
An artificial neural network consists of neurons and synapses. A neuron gives output based on its input according to a non-linear activation function such as the Sigm...
Quantized filtering of linear stochastic systems
In this paper we investigate a general multi-level quantized filter of linear stochastic systems. For a given multi-level quantization and under the Gaussian assumption on the pred...

