ASYMPTOTIC EXPANSION OF A SOLUTION FOR THE SINGULARLY PERTURBED OPTIMAL CONTROL PROBLEM WITH A CONVEX INTEGRAL QUALITY INDEX AND SMOOTH CONTROL CONSTRAINTS

The paper deals with an optimal control problem with a convex integral quality index for a linear steady-state control system in the class of piecewise continuous controls with smooth control constraints. In the general case, such a problem is solved by applying the Pontryagin maximum principle as a necessary and sufficient optimality condition. The main difference from the preceding article [10] is that the terminal part of the convex integral quality index depends not only on the slow but also on the fast variables. In a particular case, we derive an equation satisfied by the initial vector of the conjugate system. This equation is then extended to the optimal control problem with a convex integral quality index for a linear system with fast and slow variables. It is shown that the solution of the corresponding equation tends, as ε → 0, to the solution of the equation corresponding to the limit problem. The results obtained are applied to a problem describing the motion of a material point in R^n over a fixed interval of time. The asymptotics of the initial vector of the conjugate system, which defines the type of the optimal control, is constructed. It is shown that this asymptotics is a power series expansion.


Introduction
The paper studies the asymptotics of the initial vector of the conjugate state and of the optimal value of the quality index in the optimal control problem [1-3] for a linear system with fast and slow variables (see the review [4]), a convex integral quality index [3, Chapter 3], and smooth geometric constraints on the control.
Singularly perturbed optimal control problems have been considered in various settings in [5-7]. Solving problems with a closed and bounded control region meets certain difficulties; that is why problems with fast and slow variables and closed control constraints have been studied to a lesser extent. A significant contribution to solving these problems was made by Dontchev and Kokotovic. Problems with control constraints in the form of a polygon are dealt with in [5, 7]; the optimal control in such problems is a relay function with values at the vertices of the polygon. Optimal control under constraints in the form of a sphere, which is a continuous function with a finite or countable number of discontinuity points, has not been considered so far.
The asymptotics of solutions of the perturbed control problem was constructed in different settings in papers [8-10].
The main difference from the preceding article [10] is that the terminal part of the convex integral quality index depends not only on the slow but also on the fast variables. In the present work, the basic equation for finding the asymptotics of the initial vector of the conjugate state of the problem under consideration, and hence of the optimal control, is obtained.
The general relationships are applied to the optimal control of a point of small mass in an n-dimensional space under the action of a bounded force.
1. Construction of a complete asymptotic expansion of the vector λ_ε for an optimal control problem with fast and slow variables

Let us consider, in the class of piecewise continuous controls, an optimal control problem for a linear stationary system with the convex integral quality index (1.1). Problem (1.1) simulates the motion of a material point of small mass ε > 0, with the coefficient of the medium resistance equal to 1, in the space R^n under the action of a bounded control force u(t).
Note that in the convex integral quality index J under consideration, the first term can be interpreted as a penalty for the control error at the final time instant T, whereas the second accounts for the energy costs of implementing the control.
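The concrete functional is not reproduced by the surviving text; an index of this standard type, with the terminal penalty ½‖z_ε(T)‖² (consistent with the gradient ∇(½‖z_ε(T)‖²) computed below) and a quadratic energy term, can be sketched as follows. The weighting of the energy term and the control bound r are illustrative assumptions, not the paper's exact data:

```latex
J_\varepsilon(u) \;=\; \frac{1}{2}\,\bigl\lVert z_\varepsilon(T)\bigr\rVert^{2}
\;+\; \frac{1}{2}\int_{0}^{T} \bigl\lVert u(t)\bigr\rVert^{2}\,dt \;\longrightarrow\; \min_{u},
\qquad \lVert u(t)\rVert \le r .
```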
Controllable system (1.1) contains fast and slow variables, and the terminal part of the convex integral quality index depends not only on the slow but also on the fast variables. For each fixed ε > 0, problem (1.1) takes the form (1.2). Calculating e^{A_ε t} and ∇(½‖z_ε(T)‖²), we obtain representation (1.3). The following conditions are assumed:
• all eigenvalues of the matrix A_22 have negative real parts;
• the pair (A_22, B_2) is completely controllable.
Under the formulated conditions, the Pontryagin maximum principle applied to problem (1.2) is a necessary and sufficient optimality criterion. In this case, the problem has a unique solution [3, Sect. 3.5, Theorem 14]. Moreover, the following statement is valid.

Statement 1. The pair z_ε(t), u_ε(t) is a solution of the maximum principle problem if and only if u_ε(t) is determined by the corresponding explicit formula and the vector λ_ε is the unique solution of equation (1.4), where ∇ϕ is the subgradient in the sense of convex analysis. Besides, u_ε(t) is the unique optimal control in problem (1.2) [10, Statement 1].
Definition 1. The vector λ_ε that satisfies equation (1.4) will be called the vector determining the optimal control in problem (1.2). Note that since ∇ϕ(z_ε) = (x_ε, y_ε), the vector λ_ε, which determines the optimal control in problem (1.2), has the form λ_ε = (l_ε, ρ_ε).

Definition 2. The vectors l_ε, ρ_ε will also be called vectors determining the optimal control in problem (1.2).
By virtue of (1.3), equation (1.4) transforms into system (1.5). Note that, by Statement 1, the optimal control u^o_ε(τ) in problem (1.1) is expressed through the vectors l_ε, ρ_ε by formula (1.6). The main problem posed for (1.1) is to determine the complete asymptotic expansions, in powers of the small parameter ε, of the optimal control, the optimal value of the quality index, and the optimal process. Formula (1.6) shows that if the complete asymptotic expansions of the vectors l_ε, ρ_ε, which determine the optimal control in problem (1.1), are obtained, then these vectors can also be used to derive the asymptotic expansions of the above quantities.
We introduce some notation. If a vector function f_ε(t) is such that f_ε(t) = O(ε^α) as ε → 0 for every α > 0 uniformly in t ∈ [0, T], then instead of f_ε(t) we write O. In particular, e^{−γT/ε} = O.

Theorem 1. Let the vectors l_ε, ρ_ε be the unique solution of equation (1.5) in problem (1.1), and let the vector l_0 be the unique solution of equation (1.7). Then l_ε → l_0 and ε^{−1}ρ_ε → −l_0 as ε → +0.
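The convention e^{−γT/ε} = O says that this exponential is smaller than any power of ε as ε → 0, so it is an asymptotic zero. This can be checked numerically; γ = T = 1 below are illustrative choices, not values from the paper:

```python
import math

def ratio(eps: float, alpha: float) -> float:
    """e^{-1/eps} / eps^alpha; tends to 0 as eps -> 0 for every alpha > 0."""
    return math.exp(-1.0 / eps) / eps ** alpha

for alpha in (1.0, 5.0, 20.0):
    # the ratio shrinks as eps decreases, for any fixed positive power alpha
    assert ratio(0.01, alpha) < ratio(0.1, alpha)
```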
P r o o f. It is known that the attainability set of the controllable system under the controls from (1.1) is uniformly bounded up to the time instant T for ε ∈ (0, ε_0] (see, for example, [6, Theorem 3.1]).
We write out the first equation from (1.5). Taking into account that the integrand is uniformly bounded, let us show that the vector ρ_ε can be represented in the form (1.8). Let τ := t/ε; then equation (1.8) is rewritten in the variable τ, and replacing the variable ξ := e^{−τ} we obtain a representation from which it follows that the vector ρ_ε is bounded. Let us prove that the sequence {ε^{−1}ρ_ε} is bounded. Arguing by contradiction, we select a suitable sequence ε_n (for simplicity, the dependence of ε on n will be omitted).
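The two substitutions τ := t/ε and ξ := e^{−τ} reduce an integral of the boundary-layer type over [0, T/ε] to an integral in ξ over [e^{−T/ε}, 1]. A numerical sketch of this change of variables; the integrand g is illustrative, not the paper's kernel:

```python
import math

def lhs(g, T, eps, n=100000):
    """Midpoint rule for ∫_0^{T/eps} g(e^{-tau}) e^{-tau} dtau."""
    h = (T / eps) / n
    return h * sum(g(math.exp(-((k + 0.5) * h))) * math.exp(-((k + 0.5) * h))
                   for k in range(n))

def rhs(g, T, eps, n=100000):
    """Midpoint rule for ∫_{e^{-T/eps}}^{1} g(xi) dxi, after xi := e^{-tau}."""
    a = math.exp(-T / eps)
    h = (1.0 - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

g = lambda xi: xi / (1.0 + xi)  # illustrative integrand
assert abs(lhs(g, 1.0, 0.2) - rhs(g, 1.0, 0.2)) < 1e-6
```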
Let us divide the integral into two terms by introducing an auxiliary parameter α(ε), where α(ε) = O(ε^γ) as ε → 0 for a certain positive number γ.
The contradiction obtained shows that ρ_ε = O(ε); hence we can write ρ_ε = ε·r_ε, where the sequence {r_ε} is bounded.
We divide the integral into two terms. The positive numbers µ_1, µ_2 are represented by integrals. We may suppose that r_0 = µ·l_0, where µ := −µ_1/µ_2. The change of the integration variable ν := 1 + ξ(µ − 1) allows us to rewrite the integral equation accordingly. The integral equals zero at µ = 1. Let µ ≠ 1; then the integrand is an odd function of the variable ν, and consequently the integral equals zero at µ = −1. Thus we have proved that ρ_ε = ε r_ε, where the leading term is r_0 = −l_0. Theorem 1 is proved.
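The vanishing of the integral rests on a symmetry argument: after the substitution the integrand is odd in ν, and an odd function integrates to zero over a symmetric interval. A numerical check with an illustrative odd integrand:

```python
def midpoint(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

odd = lambda v: v ** 3 / (1.0 + v * v)  # odd function of v
# symmetric midpoints cancel pairwise, so the result is numerically zero
assert abs(midpoint(odd, -1.0, 1.0)) < 1e-9
```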
From (1.5) and (1.7) we obtain two cases: ‖x_0‖ < T + 2 with ‖l_0‖ < 2, and ‖x_0‖ > T + 2 with ‖l_0‖ > 2. (1.11)

1. Consider the first case, ‖x_0‖ < T + 2. By virtue of (1.11) and Theorem 1, the inequality ‖l_ε‖ < 2 is valid for all sufficiently small ε. Taking into account that (1 − e^{−t/ε}) ≤ 1 for any t ≥ 0 and ε > 0, from (1.5) we obtain for l_ε, ρ_ε the system of equations (1.12). It follows from the explicit form of the solution of (1.12) that λ_ε is expanded, as ε → 0, into an asymptotic power series; moreover, we can obtain the explicit form of the first two coefficients of the vectors l_ε, r_ε.
Theorem 2. Suppose that ‖x_0‖ < T + 2. Then the vectors l_ε, r_ε, which determine the optimal control in problem (1.1), are expanded as ε → 0 into power asymptotic series.

2. Now consider the case ‖x_0‖ > T + 2.
Replace the variable ξ := 1 − 2η. Then the factor under the integral sign in the rewritten system, regarded as a function ψ_ε(η), contains the vectors l_ε, ρ_ε, where λ = (l + r)/2 and ν = (l − r)/2. For small l, r we obtain the corresponding expansions and, taking these new representations of the vectors l, r into account, rewrite the system of equations with β(ε) := 1 − 2e^{−T/ε}. Notice that β(ε) → 1 as ε → 0.
Having transformed the factor (1 + ξ)/(1 − ξ) = 1 + 2ξ/(1 − ξ) under the integral sign and divided the integral from the first equation of system (1.13) into two terms, we calculate the switching points ξ_1, ξ_2 from the constraint ‖ξ l_0 + λ + ξν‖ = 2. Henceforward ⟨· , ·⟩ denotes the scalar product in the corresponding space. Using the binomial expansion and the expansion of the square root in the small parameter, we find ξ_1, ξ_2. We extend the integral from the second equation of system (1.13) to the point ξ = 1. Introducing an appropriate vector function, we rewrite system (1.13) as F(λ, ν, ε) = 0, where ξ_1, ξ_2 are the switching points of the control u(t).
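The switching points are located by the standard binomial (square-root) expansion in the small parameter; its first terms can be checked directly. The scalar x below stands for a generic small combination of ε-order quantities, not the paper's specific expression:

```python
import math

def sqrt_series(x: float) -> float:
    """First terms of sqrt(1 + x) = 1 + x/2 - x^2/8 + O(x^3)."""
    return 1.0 + x / 2.0 - x * x / 8.0

x = 1e-3
# the remainder of the two-term expansion is O(x^3)
assert abs(math.sqrt(1.0 + x) - sqrt_series(x)) < 1e-8
```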
To remove the singularity at the point ξ = 1, we divide the integral from the first equation of the system into two terms and calculate the second integral explicitly.
Let us expand the terms 1 − ξ_1 and ln(1 − ξ_1) in the small parameter. Calculating the Gâteaux derivative of the function ρ/‖ρ‖, we obtain formula (1.16), which we then use to find the partial derivatives. Taking into account that only one term on the right-hand side of equation (1.14) is not of order o(1), and using formula (1.16), we find the leading contribution. The function F_2(λ, ν, ε) from the second equation of (1.15) is transformed by calculating the third integral and the partial derivatives of the third integral, whose action on (△ν) gives zero.
Calculating the derivatives of the first and fifth integrals, we use formula (1.16). Calculating the derivatives of the second and fourth integrals, we take into account formula (1.17). Since each integral contains only one variable limit of integration, and the integral of the partial derivative of the integrand equals zero, formula (1.17) yields the required expressions. Following this line of reasoning, we find the remaining derivatives and write out the partial derivatives, whose action on (△ν) gives zero.

Let us show that the operator F(0, 0) is continuously invertible. Consider the equation F(0, 0)(△λ, △ν) =: (g_1, g_2). Multiplying scalarly the first and second coordinates of the vectors in (1.18), we find the unknown scalar products, and the inverse operator F^{−1}(g_1, g_2) is written out explicitly in terms of g_1, g_2, l_0, T, and ln(2/‖l_0‖). Thus, the implicit function theorem is applicable. This means that the vectors l_ε, r_ε (as functions of ε) are infinitely differentiable with respect to ε for all small ε and, therefore, l_ε, r_ε can be expanded into asymptotic series. The coefficients of these series can be found by the standard procedure: substituting the series into the equation F(λ, ν, ε) = 0, expanding the quantities depending on ε into asymptotic series in powers of ε, and equating terms of the same order of smallness with respect to ε, we obtain equations of the form F(△λ_k, △ν_k) = (g_{1,k}, g_{2,k}) with known right-hand sides. From these, we find l_k, r_k.

Theorem 3. Suppose that ‖x_0‖ > T + 2. Then the vectors l_ε, r_ε, which determine the optimal control in problem (1.1), are expanded as ε → 0 into power asymptotic series.
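The standard procedure behind the implicit-function-theorem step can be sketched on a toy scalar equation: solve F(λ, ε) = 0 numerically for small ε and recover the first series coefficient from a divided difference. The equation below is illustrative only, not the paper's system F(λ, ν, ε) = 0:

```python
def newton(F, dF, x0, tol=1e-14, iters=50):
    """Scalar Newton method for F(x) = 0 starting from x0."""
    x = x0
    for _ in range(iters):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy equation lam^2 - eps*lam - 1 = 0 with exact expansion
# lam_eps = 1 + eps/2 + eps^2/8 + ...
lam0, lam1 = 1.0, 0.5
for eps in (1e-2, 1e-3):
    lam = newton(lambda x, e=eps: x * x - e * x - 1.0,
                 lambda x, e=eps: 2.0 * x - e, 1.0)
    # the divided difference recovers the first-order coefficient up to O(eps)
    assert abs((lam - lam0) / eps - lam1) < eps
```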

Conclusion
1. In both cases considered, from (1.14), (1.15) and the asymptotic expansion of l_ε, the asymptotic expansions of the quality index, the optimal control, and the optimal state of the system are obtained in the standard way. Moreover, the asymptotic expansions of the optimal control and of the optimal state contain exponentially decreasing boundary-layer terms in a neighborhood of the point t = 0. In addition, if t ≥ ε^β with β ∈ (0, 1), then the optimal control u^o(t) is a constant plus an asymptotic zero.
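The claim about t ≥ ε^β can be seen directly: there t/ε ≥ ε^{β−1} → ∞, so the boundary-layer factor e^{−t/ε} is an asymptotic zero. A minimal numerical sketch (β = 1/2 is an illustrative choice):

```python
import math

# Boundary-layer factor e^{-t/eps} evaluated at t = eps^beta, beta in (0, 1):
# since t/eps = eps^(beta - 1) -> infinity as eps -> 0, the layer vanishes there.
beta = 0.5
values = [math.exp(-eps ** (beta - 1.0)) for eps in (1e-2, 1e-4, 1e-6)]
assert values[0] > values[1] > values[2]  # the layer dies out as eps -> 0
assert values[2] < 1e-100
```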
Here O and I are the zero and the identity matrices of dimension n × n, respectively.