
Regularized classical optimality conditions in iterative form for convex optimization problems for distributed Volterra-type systems

We consider the regularization of the classical optimality conditions (COCs), namely the Lagrange principle and the Pontryagin maximum principle, in a convex optimal control problem with functional constraints of equality and inequality type. The controlled system is described by a general linear functional-operator equation of the second kind in the space $L^m_2$, and the main operator on the right-hand side of the equation is assumed to be quasinilpotent. The objective functional of the problem is strongly convex. The regularized COCs in iterative form are obtained by means of the iterative dual regularization method. Their main purpose is the stable generation of minimizing approximate solutions in the sense of J. Warga. The regularized COCs in iterative form are stated as theorems on the existence of minimizing approximate solutions to the original problem; they “overcome” the ill-posedness of the COCs and act as regularizing algorithms for solving optimization problems. As an illustrative example, we consider an optimal control problem associated with a hyperbolic system of first-order differential equations.
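For orientation, the display below sketches a generic problem of this type and one typical step of an iterative dual regularization scheme. It is a schematic illustration only: the operators $A$, $B$, the data $f$, the constraint maps $g_1$, $g_2$, the Lagrangian $L$, and the parameter sequences $\alpha_k$, $\beta_k$ are illustrative placeholders and need not coincide with the notation of the paper.

\[
\begin{aligned}
&\min_{u \in \mathcal{D}}\; J(z[u],u) \quad (\text{$J$ strongly convex}),\\
&\text{subject to}\quad z = Az + Bu + f \ \text{in } L^m_2, \qquad g_1(u) = 0, \quad g_2(u) \le 0,
\end{aligned}
\]

where the main operator $A$ is quasinilpotent (its spectral radius is zero), so that $z[u] = (I - A)^{-1}(Bu + f)$ is well defined. A typical iterative dual regularization step couples a primal minimization of the Lagrangian with a Tikhonov-stabilized ascent step in the dual variables $\lambda^k = (\mu^k, \nu^k)$:

\[
u^k = \arg\min_{u \in \mathcal{D}} L(u, \lambda^k), \qquad
\lambda^{k+1} = P\bigl(\lambda^k + \beta_k\,[\,g(u^k) - \alpha_k \lambda^k\,]\bigr),
\]

where $g = (g_1, g_2)$, $P$ projects the inequality multipliers $\nu^k$ onto the nonnegative cone, the regularization parameters $\alpha_k$ tend to zero, and the step sizes $\beta_k$ are chosen consistently with them. In schemes of this kind the primal iterates $u^k$ are the minimizing approximate solutions, in the sense of Warga, that the regularized COCs are intended to generate stably.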

Related Results

Regularization of classical optimality conditions in optimization problems for linear Volterra-type systems with functional constraints
We consider the regularization of classical optimality conditions (COCs) — the Lagrange principle (LP) and the Pontryagin maximum principle (PMP) — in a convex optimal control prob...
Inversion Using Adaptive Physics-Based Neural Network: Application to Magnetotelluric Inversion
Abstract: In order to develop a geophysical earth model that is consistent with the measured geophysical data, two types of inversions are commonly used: a physics-ba...
On iterative methods to solve nonlinear equations
Many of the problems in experimental sciences and other disciplines can be expressed in the form of nonlinear equations. The solution of these equations is rarely obtained in close...
Optimizing motor-timing decision via adaptive risk-return control
Human’s ability of optimal motor-timing decision remains debated. The optimality seems context-dependent as the sub-optimality was often observed for tasks with different gain/loss...
Suboptimality in Perceptual Decision Making
Short Abstract: Human perceptual decisions are often described as optimal, but this view remains controversial. To elucidate the issue, we review the vast literature on suboptimaliti...
