AN ALGORITHM FOR COMPUTING BOUNDARY POINTS OF REACHABLE SETS OF CONTROL SYSTEMS UNDER INTEGRAL CONSTRAINTS

In this paper we consider a reachability problem for a nonlinear affine-control system with integral constraints, which are assumed to be quadratic in the control variables. Under controllability assumptions it was proved in [8] that any admissible control that steers the control system to the boundary of its reachable set is a local solution to an optimal control problem with an integral cost functional and terminal constraints. This leads to the Pontryagin maximum principle for boundary trajectories. We propose here a numerical algorithm for computing the reachable set boundary based on the maximum principle and provide some numerical examples.


Introduction
We consider here the reachable sets of a nonlinear affine-control system with joint integral constraints on the state and the control. Numerical algorithms for constructing approximations of reachable sets of control systems have been investigated in many works (see, for example, [2, 4, 7, 9-12, 14, 15, 17]). The properties of reachable sets under integral constraints and algorithms for their construction were studied in [1, 5, 6, 16]. For systems with pointwise constraints on the control it is known (see, for example, [13]) that a control which steers the trajectory to the boundary of the reachable set satisfies the Pontryagin maximum principle. In the paper [8] we considered the reachability problem for a nonlinear affine-control system with constraints on the control variables given by a quadratic integral inequality. Assuming the controllability of the linearized system, we proved that any admissible control that steers the control system to the boundary of its reachable set is a local solution to an optimal control problem with an integral cost functional and a terminal constraint. This leads to the maximum principle for boundary trajectories. The last result admits a generalization to the case of joint integral constraints on the state and the control given by an integral inequality. The reachable set in this case may be considered as the solution to an inverse optimal control problem: to find the terminal states reachable from the given initial state by trajectories satisfying the constraint on the value of the cost functional. The aim of the present paper is to propose a numerical algorithm for computing boundary points of the reachable set. This algorithm is based on the solution of the equations following from the maximum principle for boundary trajectories.

Notation and definitions
Further, by A⊤ we denote the transpose of a real matrix A, I_n is the n × n identity matrix, 0_n is the n × n zero matrix, and 0 stands for a zero vector of appropriate dimension. For x, y ∈ R^n, (x, y) = x⊤y denotes the inner product, x⊤ = (x_1, . . . , x_n), ‖x‖ = (x, x)^{1/2} is the Euclidean norm, and B_r(x̄) = {x ∈ R^n : ‖x − x̄‖ ≤ r} is the ball of radius r > 0 centered at x̄. For a set S ⊂ R^n, ∂S denotes the boundary of S; ∂f/∂x (x) is the Jacobi matrix of a vector-valued function f(x).
For a real k × m matrix A the matrix norm is denoted by ‖A‖. The symbol R^{n×r} denotes the space of n × r real matrices; the symbols L_1, L_2 and C stand for the spaces of summable, square summable and continuous vector-functions respectively. The norms in these spaces are denoted by ‖·‖_{L_1}, ‖·‖_{L_2}, ‖·‖_C.

We consider the control system

  ẋ = f_1(t, x) + f_2(t, x)u,  t ∈ [t_0, t_1],  x ∈ R^n,  u ∈ R^r.   (1.1)

The functions f_1 and f_2 are assumed to be continuously differentiable in x and to satisfy growth conditions of the form

  ‖f_1(t, x)‖ ≤ l_1(t)(1 + ‖x‖),  ‖f_2(t, x)‖ ≤ l_2(t)(1 + ‖x‖),

where l_1(·) ∈ L_1, l_2(·) ∈ L_2. Under these assumptions, for any u(·) ∈ L_2 there exists a unique absolutely continuous solution x(t) of system (1.1) which satisfies the initial condition x(t_0) = x_0 and is defined on the interval [t_0, t_1]. Denote by J(u(·)) the integral functional

  J(u(·)) = ∫_{t_0}^{t_1} ( Q(t, x(t)) + u⊤(t) R(t, x(t)) u(t) ) dt.

Here x(t) is the solution of system (1.1) corresponding to the control u(t) and the initial vector x_0. The function Q(t, x) and the positive definite symmetric matrix R(t, x) are assumed to be continuous on [t_0, t_1] × R^n and to satisfy the inequalities Q(t, x) ≥ 0, u⊤R(t, x)u ≥ α‖u‖² for some α > 0 and any (t, x) ∈ [t_0, t_1] × R^n, u ∈ R^r. Consider the constraint J(u(·)) ≤ µ², where µ > 0 is a given number, and let P be an m × n full rank real matrix, m ≤ n. Denote by G(t_1) the (output) reachable set of system (1.1) at the time t_1 for the fixed x_0 and the integral constraint:

  G(t_1) = { P x(t_1, u(·)) : u(·) ∈ L_2, J(u(·)) ≤ µ² },

where x(t, u(·)) is the trajectory of system (1.1) corresponding to u(·).
The reachable set is a compact set in R^m, but it may be empty. Recall the following definitions: the linear control system

  ẋ = A(t)x + B(t)u

a) is said to be controllable on [t_0, t_1] with respect to the output y = Px if for any y_1 ∈ R^m there exists a control u(·) ∈ L_2 that transfers the system from the zero initial state x(t_0) = 0 to a final state x(t_1) such that P x(t_1) = y_1;

b) is said to be the linearization of the system ẋ = F(t, x, u) along the trajectory (x(t), u(t)) if

  A(t) = ∂F/∂x (t, x(t), u(t)),  B(t) = ∂F/∂u (t, x(t), u(t)).
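For a time-invariant pair, controllability with respect to the output y = Px in the sense of definition a) can be checked through the output controllability Gramian V = ∫₀^{t_1} (P e^{As}B)(P e^{As}B)⊤ ds: the system is controllable w.r.t. y = Px iff V is nonsingular. A minimal pure-Python sketch follows; the double-integrator test case is an assumption, not an example from the paper.

```python
def matmul(A, B):
    """Naive matrix product for lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def output_gramian(A, B, P, t1, n=20000):
    """V = ∫₀^{t1} (P e^{As} B)(P e^{As} B)ᵀ ds, the output controllability
    Gramian; (A, B) is controllable w.r.t. y = Px iff V is nonsingular.
    M(s) = e^{As} B is propagated by the ODE Ṁ = A M, M(0) = B (Euler)."""
    h = t1 / n
    M = [row[:] for row in B]
    m, r = len(P), len(B[0])
    V = [[0.0] * m for _ in range(m)]
    for _ in range(n):
        PM = matmul(P, M)                     # P e^{As} B, an m × r matrix
        for i in range(m):
            for j in range(m):
                V[i][j] += sum(PM[i][k] * PM[j][k] for k in range(r)) * h
        dM = matmul(A, M)                     # Ṁ = A M
        M = [[M[i][j] + h * dM[i][j] for j in range(r)] for i in range(len(M))]
    return V

# Double integrator ẋ1 = x2, ẋ2 = u with output y = x1 (assumed test case);
# here P e^{As} B = s, so the exact Gramian is ∫₀¹ s² ds = 1/3.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
P = [[1.0, 0.0]]
V = output_gramian(A, B, P, t1=1.0)
```

Since V here is the 1×1 matrix [1/3] ≠ [0], the double integrator is controllable with respect to this output, as expected.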

Extremal Properties of Boundary Points
Let us show that any admissible control that steers the control system to the boundary of its reachable set is a local solution to an optimal control problem with an integral cost functional and terminal constraints.
P r o o f. The proof follows the scheme of the proof of Theorem 1 in [8] and uses the Graves theorem [3].
Since a local minimum in L_2 admits needle variations of the control, the local L_2-minimizer satisfies Pontryagin's maximum principle. Introduce the Pontryagin function (Hamiltonian) associated with problem (2.3). A locally optimal control for (2.3) satisfies the maximum principle: there exist p_0 ≥ 0, l ∈ R^m, (p_0, l) ≠ 0, and a function p(t) satisfying the corresponding adjoint system and maximum condition. Since the terminal constraints are regular (rank P = m), we have p_0 + ‖p(t)‖ ≠ 0, t ∈ [t_0, t_1]. As previously, we denote by (A(t), B(t)) the matrices of the linearization of system (1.1) along (x(t), u(t)). Applying the maximum principle to the solution of problem (2.3), we arrive at the following

Corollary 1. Suppose that u(t) satisfies the assumptions of Theorem 1. Then there exist l ∈ R^m, l ≠ 0, and a function p(t) satisfying the adjoint system of the maximum principle. If, in addition, the pair (A(t), B(t)) is controllable w.r.t. y = Px, then p_0 > 0. Indeed, if it turned out that p_0 = 0, then p(·) would be a nonzero solution of the equation

  ṗ = −A⊤(t)p,  p(t_1) = P⊤l,

and from the maximum principle we would obtain B⊤(t)p(t) = 0 for almost all t ∈ [t_0, t_1]. Integrating ‖B⊤(t)p(t)‖² over [t_0, t_1], we get l⊤Vl = 0, where V is the controllability Gramian of the pair (A(t), B(t)) with respect to the output y = Px. This contradicts the controllability of (A(t), B(t)) w.r.t. y = Px, since l ≠ 0. Thus we can take p_0 = 1/2, and from the maximum principle it follows that the extremal control has the form u(t) = R^{−1}(t, x(t)) f_2⊤(t, x(t)) p(t).
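The maximum-principle relations above suggest a shooting scheme for computing boundary points: fix a direction l, integrate the state and adjoint equations with the extremal control u = R^{-1} f_2⊤ p, and scale the adjoint so that the cost exhausts the budget J = µ². The sketch below realizes this idea for a toy scalar instance (ẋ = u, Q = 0, R = 1, P = I on [0, 1]); it is an assumed illustration, not the paper's algorithm or its examples.

```python
# Shooting sketch: integrate the state/adjoint system with the extremal control
# u = R⁻¹ f2ᵀ p, then bisect the adjoint magnitude until J = µ². The toy model
# ẋ = u, Q = 0, R = 1, P = I on [0, 1] is an illustrative assumption.

def shoot(p_init, x0=0.0, t0=0.0, t1=1.0, n=2000):
    """Integrate ẋ = u with the extremal control u = p; return (x(t1), J)."""
    h = (t1 - t0) / n
    x, p, J = x0, p_init, 0.0
    for _ in range(n):
        u = p                  # u = R⁻¹ f2ᵀ p with f2 = 1, R = 1
        J += u * u * h         # running cost uᵀRu (Q = 0 here)
        x += u * h             # ẋ = f1 + f2 u with f1 = 0
        # ṗ = -∂H/∂x = 0 for this toy system, so p stays constant
    return x, J

def boundary_point(l, mu, lo=0.0, hi=10.0):
    """Bisect the adjoint magnitude s so that the cost of s·l meets µ²."""
    for _ in range(60):
        s = 0.5 * (lo + hi)
        _, J = shoot(s * l)
        lo, hi = (s, hi) if J < mu * mu else (lo, s)
    x1, _ = shoot(0.5 * (lo + hi) * l)
    return x1

# For ẋ = u with ∫u² dt ≤ µ² on [0,1], Cauchy-Schwarz gives the exact
# reachable interval [-µ, µ]; the two shooting directions recover its endpoints.
right = boundary_point(+1.0, mu=0.5)   # ≈ +0.5
left = boundary_point(-1.0, mu=0.5)    # ≈ -0.5
```

In the general vector case the same loop runs over directions l on the unit sphere in R^m, with the adjoint equation ṗ = −A⊤(t)p replacing the trivial ṗ = 0 of this toy model.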

Examples
Here we illustrate the above procedure for two examples of two-dimensional control systems.

E x a m p l e 1. Consider the Duffing equation

  ẋ_1 = x_2,  ẋ_2 = ϕ(x_1) + u,  ϕ(x_1) = −αx_1 − βx_1³,  α, β > 0,

which describes the motion of a nonlinear stiff spring under the action of an external force u. Consider the integral constraint on the state and the control with nonnegative parameters a, b, and take P = I_2.
The results of the numerical simulation are shown in Figs. 3-6. Fig. 3 shows the boundaries of the reachable sets for t_1 = 2 and for a = 0, b = 0; a = 0.1, b = 0; a = 0.5, b = 0.1, respectively. The plot demonstrates that the reachable set is nonconvex for a = 0, b = 0 and becomes convex as the parameters a, b increase.
The next plot (Fig. 4) exhibits the zero-level lines of ψ_0(q) corresponding to the curves of Fig. 3.
Fig. 5 demonstrates the dependence of the reachable sets on the value of µ² for µ² = 0.5, 1, 1.5, 2, 2.2. It shows that the reachable sets, which are convex for small µ², lose their convexity as µ² increases (see [16]). In this example the method fails for µ² > 2.2 because the numerical integration of (2.6) is unable to meet the integration tolerances. Note that the considered procedure may be applied only if the zero-level line ψ_0(q) = 0 is a differentiable curve. Differentiability can be violated at points where ∂ψ_0/∂q_1 (q) = ∂ψ_0/∂q_2 (q) = 0 or where the right-hand side of (2.6) is singular. The graph of the solution of (2.6) corresponding to the value µ² = 2.2 is shown in Fig. 6.

Conclusion
This paper describes an algorithm for computing the boundaries of reachable sets under joint integral constraints on the state and control variables. The reachable set may be considered here as the solution to an inverse optimal control problem: to find the terminal states reachable from the given initial state by trajectories satisfying the constraint on the value of the cost functional. The Pontryagin maximum principle for boundary trajectories is applied to construct a numerical algorithm for computing the boundary points. The results of numerical simulation for two examples of second-order nonlinear control systems are presented.