Efficient Layer Optimizations for Deep Neural Networks
Deep neural networks (DNNs) suffer from practical issues such as long training times as network size grows. Their parameters require significant memory, which can make migration to embedded devices difficult. Various pruning techniques have been applied to reduce DNN size, but many problems remain when applying them. Among neural networks, autoencoders are widely used for reconstruction and dimensionality reduction. However, network size is a particular disadvantage of autoencoders, since their architecture carries a double workload due to the encoding and decoding processes. In this research, we apply out-of-order layer pruning to autoencoders and to two deep neural networks, AlexNet and VGG16. We perform a sensitivity analysis to explore how performance varies with network architecture and network complexity under an out-of-order layer pruning mechanism. By applying the proposed layer pruning scheme to the autoencoder, we developed the accordion autoencoder (A2E) and applied it to credit card fraud detection and MNIST classification. Our results show performance drops of 4.9% and 13.6%, respectively, but with significant reductions in network complexity of 85.1% and 94.5% for each application. We then extend out-of-order layer pruning to deeper networks. We propose a simple yet efficient scheme, accuracy-aware structured filter pruning, based on the characterization of each convolutional layer and combined with quantization of the fully connected layers. We investigate the accuracy and compression rate of each layer under a fixed pruning ratio, and then rearrange the pruning priority according to each layer's accuracy. Our layer characterization analysis shows that the pruning order of the layers does affect the final accuracy of the deep neural network. In our experiments with the proposed pruning scheme, the parameter size of AlexNet is reduced by up to 47.28x relative to the original model. We obtain comparable results for VGG16, achieving a maximum compression rate of 35.21x.
Blue Eyes Intelligence Engineering and Sciences Publication - BEIESP
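The out-of-order layer pruning idea behind the accordion autoencoder can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under our own assumptions: the layer widths, the pruning schedule, and the stand-in input batch are hypothetical placeholders, not the A2E configuration reported in the paper.

```python
import torch
import torch.nn as nn

def build_autoencoder(widths):
    """Symmetric MLP autoencoder; widths = [input, hidden..., bottleneck]."""
    layers = []
    for a, b in zip(widths, widths[1:]):          # encoder
        layers += [nn.Linear(a, b), nn.ReLU()]
    rev = list(reversed(widths))
    for a, b in zip(rev, rev[1:]):                # mirrored decoder
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers[-1] = nn.Sigmoid()                     # pixel outputs in [0, 1]
    return nn.Sequential(*layers)

# Out-of-order layer pruning: remove hidden widths in a non-sequential
# order (not outermost-first), rebuilding the shrunken "accordion" each
# time and checking how the reconstruction error reacts.
widths = [784, 512, 256, 128, 32]                 # MNIST-sized input
for width in [256, 512, 128]:                     # an out-of-order schedule
    widths.remove(width)                          # drop the layer pair
    model = build_autoencoder(widths)
    # (re)train here in practice; a random batch stands in for MNIST data
    x = torch.rand(8, 784)
    loss = nn.functional.mse_loss(model(x), x)
    print(widths, float(loss))
```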
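The accuracy-aware structured filter pruning scheme can likewise be sketched. The following PyTorch snippet is a minimal illustration, not the paper's implementation: evaluate() is a placeholder for a real validation pass and the 50% trial ratio is an arbitrary choice. It shows the two-step idea of characterizing each convolutional layer in isolation, then pruning in the order that accuracy tolerates best, followed by dynamic quantization of the fully connected layers.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import alexnet

def evaluate(model):
    # Placeholder: return top-1 validation accuracy for `model`.
    # Replace with a real pass over a held-out set.
    return 0.0

base = alexnet(weights=None)   # load trained weights in practice

# Step 1: characterize each conv layer in isolation -- prune only that
# layer at a fixed trial ratio and record the resulting accuracy.
conv_names = [n for n, m in base.named_modules() if isinstance(m, nn.Conv2d)]
sensitivity = {}
for name in conv_names:
    trial = copy.deepcopy(base)
    layer = dict(trial.named_modules())[name]
    # L1-norm structured pruning over output filters (dim=0)
    prune.ln_structured(layer, name="weight", amount=0.5, n=1, dim=0)
    sensitivity[name] = evaluate(trial)

# Step 2: rearrange the pruning priority -- prune the layers that tolerate
# pruning best (highest accuracy after the trial prune) first.
order = sorted(conv_names, key=lambda n: sensitivity[n], reverse=True)
model = copy.deepcopy(base)
for name in order:
    layer = dict(model.named_modules())[name]
    prune.ln_structured(layer, name="weight", amount=0.5, n=1, dim=0)
    prune.remove(layer, "weight")   # bake the pruning mask into the weights
    # (fine-tune here to recover accuracy before pruning the next layer)

# Step 3: quantize the fully connected layers to 8-bit integers.
model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```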
Related Results
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years. But the cognition of human brain’s working process as a whole is still obscure. Both fuzzy logic ...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
Deep convolutional neural network and IoT technology for healthcare
Background: Deep Learning is an AI technology that trains computers to analyze data in an approach similar to the human brain. Deep learning algorithms can find complex patterns in ...
Synchronizability and eigenvalues of two-layer star networks
From the study of multilayer networks, scientists have found that the properties of the multilayer networks show great difference from those of the traditional complex networks. In...
A holistic aerosol model for Uranus and Neptune, including Dark Spots
Previous studies of the reflectance spectra of Uranus and Neptune concentrated on individual, narrow wavelength regions, inferring solutions for the vertical struc...
Integrating quantum neural networks with machine learning algorithms for optimizing healthcare diagnostics and treatment outcomes
The rapid advancements in artificial intelligence (AI) and quantum computing have catalyzed an unprecedented shift in the methodologies utilized for healthcare diagnostics and trea...
Neural stemness contributes to cell tumorigenicity
Background: Previous studies demonstrated the dependence of cancer on nerve. Recently, a growing number of studies reveal that cancer cells share the property and ...
Deep Neural Networks for Human’s Fall-risk Prediction using Force-Plate Time Series Signal
Early and accurate identification of the balance deficits could reduce falls, in particular for older adults, a prone population. Our work investigates deep neural networks...