Development and research of a neural network alternate incremental learning algorithm
In this paper, the relevance of developing methods and algorithms for neural network incremental learning is shown. Families of incremental learning techniques are presented, and the suitability of the extreme learning machine (ELM) for incremental learning is assessed. Experiments show that the ELM can learn incrementally, but as the number of training examples grows, the network becomes unsuitable for further learning. To solve this problem, we propose a neural network incremental learning algorithm that alternates between the extreme learning machine, which corrects only the output-layer weights (operation mode), and backpropagation (deep learning), which corrects all network weights (sleep mode). In operation mode, the network is expected to produce results and learn new tasks; in sleep mode, it optimizes all of its weights. The proposed algorithm enables real-time adaptation to changing external conditions while in operation mode. Its effectiveness is demonstrated on an example approximation problem: approximation results after each step of the algorithm are presented, and the mean square error obtained when using the extreme learning machine alone for incremental learning is compared with that of the developed alternate incremental learning algorithm.
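The abstract does not include code, so as a rough illustration of the extreme learning machine it builds on: an ELM fixes the hidden-layer weights at random and solves for the output weights in closed form via the Moore-Penrose pseudoinverse. The sketch below fits a toy approximation target; all names, sizes, and the sine target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy approximation task: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

# Extreme learning machine: random, fixed hidden layer
n_hidden = 50
W_in = rng.normal(size=(1, n_hidden))   # input-to-hidden weights, never trained
b = rng.normal(size=n_hidden)           # hidden biases, never trained

def hidden(X):
    return np.tanh(X @ W_in + b)        # hidden-layer activations H

# Output weights solved in closed form (least squares via pseudoinverse)
H = hidden(X)
beta = np.linalg.pinv(H) @ y

y_hat = hidden(X) @ beta
mse = np.mean((y_hat - y) ** 2)
```

Because only the linear output layer is fitted, retraining on new data is a single least-squares solve, which is what makes the ELM attractive for the incremental setting the paper studies.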