
Stochastic Search Algorithms for Identification, Optimization, and Training of Artificial Neural Networks

This paper presents stochastic search algorithms (SSA) suitable for the effective identification, optimization, and training of artificial neural networks (ANN). The author introduces a modified nonlinear stochastic direct search algorithm (MN-SDS), whose main objective is to improve the convergence of the original nonlinear stochastic search (N-SDS) method due to Professor Rastrigin. From the wide range of possible algorithms and procedures, the method of stochastic direct search (SDS), known in the literature as stochastic local search (SLS), is employed. MN-SDS converges considerably faster than N-SDS and even outperforms a range of gradient-based optimization procedures. SDS (SLS) methods have so far seen little use for the identification, optimization, and training of ANN, yet their efficiency on purely nonlinear systems makes them well suited to these tasks. The presented examples only partially illustrate the operation and efficiency of SDS and MN-SDS. The backpropagation error (BPE) method was used for comparison.
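To make the idea of stochastic direct search concrete, the sketch below applies a plain improving-step random search (SDS/SLS in the generic sense, not the paper's MN-SDS) to the weights of a tiny one-hidden-layer network on the XOR problem. The network shape, step-size schedule, and acceptance rule are illustrative assumptions only.

```python
# Minimal sketch of stochastic direct search (SDS/SLS) for ANN training.
# This is NOT the paper's MN-SDS; it only illustrates the general principle:
# propose a random perturbation of the weight vector and keep it if the loss improves.
import numpy as np

rng = np.random.default_rng(0)

# XOR data set (assumed toy problem for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(w, X):
    """Unpack the flat parameter vector w into a 2-2-1 network and evaluate it."""
    W1 = w[:4].reshape(2, 2)
    b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1)
    b2 = w[8:9]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(w):
    """Mean squared error of the network on the XOR data."""
    return float(np.mean((forward(w, X) - y) ** 2))

# Stochastic direct search: random step, accepted only if the loss decreases.
w = rng.normal(scale=0.5, size=9)
best = loss(w)
step = 0.5
for it in range(20000):
    candidate = w + rng.normal(scale=step, size=w.size)
    c_loss = loss(candidate)
    if c_loss < best:                 # keep only improving steps
        w, best = candidate, c_loss
    step = max(step * 0.9999, 1e-3)   # slowly shrink the search radius (assumed schedule)

print("final MSE:", best)
print("outputs:", forward(w, X).ravel().round(3))
```

Unlike backpropagation (the BPE baseline in the paper), this scheme uses only loss evaluations and no gradients, which is what makes direct-search methods applicable to non-differentiable or strongly nonlinear objectives.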

Related Results

Integrating quantum neural networks with machine learning algorithms for optimizing healthcare diagnostics and treatment outcomes
The rapid advancements in artificial intelligence (AI) and quantum computing have catalyzed an unprecedented shift in the methodologies utilized for healthcare diagnostics and trea...
Evaluating the Science to Inform the Physical Activity Guidelines for Americans Midcourse Report
Abstract The Physical Activity Guidelines for Americans (Guidelines) advises older adults to be as active as possible. Yet, despite the well documented benefits of physical a...
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years. But the cognition of human brain’s working process as a whole is still obscure. Both fuzzy logic ...
Modeling Hybrid Metaheuristic Optimization Algorithm for Convergence Prediction
The project aims at the design and development of six hybrid nature inspired algorithms based on Grey Wolf Optimization algorithm with Artificial Bee Colony Optimization algorithm ...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
Sample-efficient Optimization Using Neural Networks
The solution to many science and engineering problems includes identifying the minimum or maximum of an unknown continuous function whose evaluation inflicts non-negligibl...
Stochastic Optimization Approach to Surfactant-Polymer Flooding
Abstract The research presents an effective way to optimize one of the enhanced oil recovery mechanisms, surfactant-polymer flooding, by the application of...
