On Convergence Conditions of an Extended Projection Neural Network
The output trajectory convergence of an extended projection neural network was previously established under the positive definiteness condition on the Jacobian matrix of the nonlinear mapping. This note offers several new convergence results: both the state trajectory convergence and the output trajectory convergence of the extended projection neural network are obtained under the weaker positive semidefiniteness condition on the Jacobian matrix. Comparison and illustrative examples demonstrate the applied significance of these new results.
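For orientation, the sketch below simulates a standard projection neural network for a variational inequality VI(F, Omega) over a box constraint set. This is an illustrative assumption only: the paper's extended model and its exact equations are not given in this abstract, and the mapping F, the step sizes alpha and lam, and the box bounds are made up for the example. The chosen F has a positive semidefinite (but not positive definite) Jacobian, matching the weaker condition discussed above.

import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def F(x):
    """Illustrative mapping F(x) = M x + q with a symmetric PSD,
    singular Jacobian M (eigenvalues 2 and 0: PSD but not PD)."""
    M = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
    q = np.array([-1.0, 0.5])
    return M @ x + q

def simulate(x0, lo=-2.0, hi=2.0, alpha=0.5, lam=1.0, dt=1e-2, steps=20000):
    """Forward-Euler integration of dx/dt = lam*(P_Omega(x - alpha*F(x)) - x)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        y = project_box(x - alpha * F(x), lo, hi)   # output trajectory y(t)
        x = x + dt * lam * (y - x)                  # state trajectory x(t)
    return x, project_box(x - alpha * F(x), lo, hi)

if __name__ == "__main__":
    x_final, y_final = simulate([1.5, -1.5])
    print("state  x(t) ->", x_final)
    print("output y(t) ->", y_final)

Running the sketch shows both trajectories settling down despite the singular Jacobian, which is the kind of behavior the note's positive semidefiniteness results address; the actual theorems and their hypotheses are in the paper itself.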
Related Results
On Exponential Convergence Conditions of an Extended Projection Neural Network
Recently, the extended projection neural network was proposed to solve constrained monotone variational inequality problems and a class of constrained nonmonotonic variational ineq...
Noise Robust Projection Rule for Klein Hopfield Neural Networks
Multistate Hopfield models, such as complex-valued Hopfield neural networks (CHNNs), have been used as multistate neural associative memories. Quaternion-valued Hopfield neural net...
Bicomplex Projection Rule for Complex-Valued Hopfield Neural Networks
A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. The weight parameters need a lot of memory...
Detection of whale calls in noise: Performance comparison between a beluga whale, human listeners, and a neural network
This article examines the masking by anthropogenic noise of beluga whale calls. Results from human masking experiments and a software backpropagation neural network are compared to...
Like Me or Like Us
Research has shown abundant evidence for social projection, that is, the tendency to expect similarity between oneself and others (Krueger, 1998a, 1998b). This effect is stronge...
Intersemiotic projection and academic comics: towards a social semiotic framework of multimodal paratactic and hypotactic projection
Intersemiotic projection is one of the most common configurations in the knowledge construction process of academic comics. Although previous studies addres...
Neural network-based classification of X-ray fluorescence spectra of artists’ pigments: an approach leveraging a synthetic dataset created using the fundamental parameters method
X-ray fluorescence (XRF) spectroscopy is an analytical technique used to identify chemical elements that has found widespread use in the cultural heritage sector to charact...
Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update
A pi-sigma network is a class of feedforward neural networks with product units in the output layer. An online gradient algorithm is the simplest and most often used training metho...