A STABLE METHOD FOR A LINEAR EQUATION IN BANACH SPACES WITH SMOOTH NORMS

A stable method for the numerical solution of a linear operator equation in reflexive Banach spaces is proposed. The operator and the right-hand side of the equation are assumed to be known approximately; the corresponding error levels may remain unknown. The approximate operators and their adjoints must possess the property of strong pointwise convergence. The exact normal solution is assumed to be sourcewise representable, and some upper estimate for the norm of its source element must be known. The norm in the Banach space of solutions is supposed to satisfy the following smoothness-type condition: some function of the norm must be differentiable. Under these conditions the stability of the method with respect to nonuniform perturbations of the operator is shown, and strong convergence to the normal solution is proved. A boundary control problem for the one-dimensional wave equation is considered as an example of a possible application. The results of model numerical experiments are presented.


Introduction
The problem of finding a solution to a linear operator equation arises in many fields of applied mathematics: when solving integral equations, some boundary value problems, systems of linear equations and other linear inverse problems. A known complication that can arise here is the ill-posedness of such inverse problems. This means that small changes in the initial data (coefficients of the system of linear equations, the right-hand sides of equations, boundary data, coefficients of the differential operator, etc.) can cause loss of existence or uniqueness of the perturbed problem's solution or lead to large changes in this solution. To deal with such issues many regularization methods have been proposed: the Tikhonov regularization method [23,24], the residual method [17], the method of quasi-solutions [12], the residual principle [16], iterative regularization methods [2] and many others [3,10,21,22,25]. Most of them require knowledge of the error levels in the initial data approximation or knowledge of some compact set containing the sought solution. In many applications these assumptions are rather hard to ensure. Instead of these traditional assumptions, our method requires the sought solution to be sourcewise representable and, moreover, some majorant for the source norm to be known. This allows anyone applying the method to focus on investigating the corresponding properties of the exact problem.
In this paper we consider a linear operator equation

Au = f (1)

in reflexive Banach spaces H and F, where A ∈ L(H → F) is a linear bounded operator and f ∈ F is a given element. It is required to find the normal solution u_*, i.e. a solution to (1) with minimal norm in the space H:

u_* = arg min { ‖u‖_H : Au = f }. (2)

In the sequel the norm of the space H is supposed to be strictly convex, so the solution u_* to the problem (2) is unique, and it exists if equation (1) has a solution [8, Proposition 1.2, p. 35].
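For intuition, in a finite-dimensional Euclidean model the normal (minimal-norm) solution can be computed directly via the Moore–Penrose pseudoinverse. The following sketch is purely illustrative (the matrix and right-hand side are our own toy data, not taken from the paper):

```python
import numpy as np

# Toy model of the normal solution: for a consistent underdetermined system
# A u = f, the solution of minimal Euclidean norm is u* = A^+ f, where A^+
# is the Moore-Penrose pseudoinverse.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])      # 2 equations, 3 unknowns: many solutions
f = np.array([2.0, 2.0])

u_star = np.linalg.pinv(A) @ f       # minimal-norm (normal) solution

# Any other solution, e.g. u* plus a nonzero kernel element, has a larger norm.
kernel = np.array([1.0, -1.0, 1.0])  # A @ kernel == 0
u_other = u_star + 0.5 * kernel

assert np.allclose(A @ u_star, f)
assert np.linalg.norm(u_star) < np.linalg.norm(u_other)
```

The strict inequality reflects the uniqueness of the normal solution in a strictly convex norm.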
Suppose that instead of the exact data A and f some of their approximations A_n ∈ L(H → F) and f_n ∈ F, n = 1, 2, …, are known. The asymptotic properties of the method will be studied under the condition that the approximate data converge to the exact ones in the following sense:

‖A_n u − Au‖_F → 0 ∀u ∈ H,  ‖A_n^* v − A^* v‖_{H^*} → 0 ∀v ∈ F^*,  ‖f_n − f‖_F → 0, n → ∞. (3)

Here and below A^*: F^* → H^* and A_n^*: F^* → H^* are the operators adjoint to A and A_n. Note that the first two limit relations in (3) are weaker than the conditions of uniform convergence usually required in traditional regularizing procedures [3,10,16], [22]–[25]. Also, (3) does not require knowledge of any error levels.
A stable method of solving the problem (2) under perturbations of type (3) in Hilbert spaces H and F was proposed in [19]. Let us briefly recall this method for the convenience of comparison. In [19] the following basic assumptions were accepted:
H1. The spaces H and F are Hilbert and identified with their adjoint spaces in the Riesz sense: H ≃ H^*, F ≃ F^*.
H2. Equation (1) has a solution.
H3. The solution u_* to (2) is sourcewise representable: u_* ∈ R(A^*), where R(A^*) denotes the range of the operator A^*: F → H. This means that there exists a source element v_* ∈ F such that u_* = A^* v_*.
H4. Some majorant r_* of the source norm is known: ‖v_*‖_F ≤ r_*.
It is well known that the solution u_* to (2) belongs to the closure of R(A^*) [10, Proposition 2.3, p. 33], so assumption H3 is rather natural and holds true for any operator A with closed range. The method from [19] is then formulated as follows: find a solution v_n ∈ F to the quadratic optimization problem

(1/2)‖A_n^* v‖_H^2 − ⟨v, f_n⟩_F → min,  ‖v‖_F ≤ r_*, (4)

and set the element u_n = A_n^* v_n as the final approximation to the sought solution of (2). Here ⟨·,·⟩_F denotes the inner product in the space F.
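In a finite-dimensional Euclidean model the Hilbert-space scheme (4) can be sketched with projected gradient descent. The solver choice, matrix, step size and iteration count below are our own illustrative assumptions, not prescribed by [19]:

```python
import numpy as np

# Illustrative sketch of the Hilbert-space method (4): minimize
# 0.5*||A_n^T v||^2 - <v, f_n> over the ball ||v|| <= r_*, then set
# u_n = A_n^T v_n.  Projected gradient descent is an ad hoc solver choice.
rng = np.random.default_rng(0)
A_n = rng.standard_normal((5, 8))         # discrete operator: 5 equations, 8 unknowns
v_true = rng.standard_normal(5)           # source element v_*
u_true = A_n.T @ v_true                   # sourcewise representable: u_* = A_n^T v_*
f_n = A_n @ u_true                        # consistent right-hand side

r_star = np.linalg.norm(v_true) + 1.0     # majorant of the source norm (H4)
step = 1.0 / np.linalg.norm(A_n @ A_n.T, 2)   # 1/L for the quadratic part

v = np.zeros(5)
for _ in range(50_000):
    v -= step * (A_n @ (A_n.T @ v) - f_n) # gradient of the quadratic functional
    nv = np.linalg.norm(v)
    if nv > r_star:                       # project back onto the ball
        v *= r_star / nv

u_n = A_n.T @ v                           # final approximation of u_*
assert np.linalg.norm(u_n - u_true) < 1e-6 * np.linalg.norm(u_true)
```

Because the target is built sourcewise representable with a known source norm, the ball constraint is inactive at the optimum and the iteration recovers u_* to high accuracy.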
The method proposed below is an extension of the described method from [19] to Banach spaces with a smoothness-type property of the norm in the space H.
The rest of the paper is organized as follows. In the next Section 2 we formulate some assumptions about the spaces H, F and the special properties of the exact solution. In Section 3 the method is described, and in Section 4 its stability is proved. In Section 5 one of the possible applications, to the boundary control problem for the 1-D wave equation, is considered, and in the final Section 6 the corresponding numerical results are provided.

Basic Assumptions and Auxiliary Statements
The method presented in the next section for Banach spaces requires the following assumptions:
B1. H and F are reflexive Banach spaces.
B6. The solution u_* to (2) is sourcewise representable in the following sense: there exists an element v_* ∈ F^* such that u_* = J_H A^* v_*, where the mapping J_H : H^* → H is defined as

J_H g = K′(g),  K(g) = k(‖g‖_{H^*}). (1.2)

B7. Some majorant r_* of the source norm is known: ‖v_*‖_{F^*} ≤ r_*.
Remark 1. Using the reflexivity of H and Asplund's duality mapping representation theorem [5, Theorem 4.4, p. 26], it is not hard to see that J_H defined in (1.2) is in fact the duality mapping with weight (or gauge) function φ^{-1}(x).
Let us explain the meaning of assumption B6. As in the case of Hilbert spaces H and F, this assumption is fulfilled for operators A with closed range. The corresponding proof will be presented now.
Theorem 2. Let assumptions B1, B2, B4, B5 be fulfilled and Au_* = f. Then u_* is a solution to (2) if and only if

P′(u_*) ∈ cl R(A^*), (1.3)

where cl R(A^*) denotes the closure of R(A^*).
P r o o f. Let u_* be a solution to (2). Let us prove that (1.3) takes place. Consider the following minimization problem:

P(u) → min,  Au = f. (1.4)

Since the function p(x) is strictly increasing and P(u) = p(‖u‖_H), this problem is equivalent to (2), and the element u_* is the unique solution to (1.4). Also consider the auxiliary linear minimization problem:

⟨P′(u_*), u⟩ → inf,  Au = 0. (1.5)

Here and below the expression ⟨f, u⟩ is understood as the value of the linear continuous functional f ∈ H^* on the element u ∈ H. Notice that the optimal value of the minimized functional in (1.5) is nonnegative. Indeed, if there exists an element ũ ∈ H such that ⟨P′(u_*), ũ⟩ < 0 and Aũ = 0, then we can consider the elements u_α = u_* + αũ, α ≥ 0. Using the definition of the Fréchet derivative we get

P(u_α) = P(u_*) + α⟨P′(u_*), ũ⟩ + o(α),

where o(α)/α → 0 as α → 0. It means that for all sufficiently small α > 0 we have P(u_α) < P(u_*) and Au_α = Au_* + αAũ = Au_* = f, so u_* is not the solution to (1.4). This contradiction shows that ⟨P′(u_*), u⟩ ≥ 0 for all u ∈ H such that Au = 0, i.e. for all u ∈ N(A), where N(A) denotes the kernel of A. Since the kernel N(A) is a linear subspace of H, it follows that ⟨P′(u_*), u⟩ = 0 for all u ∈ N(A). In other words, we have

P′(u_*) ∈ (N(A))^⊥, (1.6)

and the equality (N(A))^⊥ = cl R(A^*) (see [13, Theorem 1^*, p. 357], using the reflexivity of H) allows us to pass from (1.6) to (1.3).
On the other hand, let u_* be a solution to (1) and let the inclusion (1.3) be fulfilled. We want to prove that u_* = ū, where ū is the solution of (2). Suppose that u_* ≠ ū. Notice that under assumptions B2, B4 the operator P′(u) is strictly monotone, and that is why the following inequality holds true:

⟨P′(u_*) − P′(ū), u_* − ū⟩ > 0.

It was proved above that ū satisfies condition (1.3); therefore both P′(u_*) and P′(ū) belong to cl R(A^*) = (N(A))^⊥, while u_* − ū ∈ N(A), so that

⟨P′(u_*) − P′(ū), u_* − ū⟩ = 0.

This contradiction means that our assumption u_* ≠ ū is not true, so u_* is indeed a solution to (2).
Lemma 1. Let assumptions B1, B2, B4 be fulfilled. Then K′(P′(u)) = u for all u ∈ H and P′(K′(g)) = g for all g ∈ H^*.
P r o o f. Let us extend the functions p(x) and k(x) defined in (1.1) to the region x ≤ 0 in the even way: k(x) = k(−x), p(x) = p(−x). Then these extensions are convex duals of each other. Indeed, for every x ∈ R the concave function xy − k(y) of the variable y attains its maximum when x − k′(y) = 0, i.e. at y = φ(x), so

k^*(x) = sup_y (xy − k(y)) = xφ(x) − k(φ(x)). (1.8)

Note that the following equality takes place for any strictly increasing smooth function ψ ∈ C^1(R):

∫_0^x ψ(t) dt + ∫_0^{ψ(x)} ψ^{-1}(t) dt = xψ(x). (1.9)

Passing in (1.9) to the limit as ψ → φ, ψ^{-1} → φ^{-1} uniformly on any segment [a, b], we obtain from (1.8) that

k^*(x) = xφ(x) − k(φ(x)) = p(x).

Applying the Fenchel–Moreau theorem [8, Proposition 4.1, p. 18] we get k^{**} = p^* = k, so the functions k and p are dual. Then we get the duality of the functionals P(u) = p(‖u‖_H) and K(g) = k(‖g‖_{H^*}), from which both identities of the lemma follow.
Applying Lemma 1 to (1.3) and using notation (1.2), we get the main result concerning assumption B6.
Corollary 1. Let assumptions B1, B2, B4, B5 be fulfilled and let u_* be a solution to (1). Then u_* is the solution to (2) if and only if

u_* ∈ J_H cl R(A^*). (1.10)

It means that assumption B6 is fulfilled for all operators A with closed range. For other operators this assumption places an additional requirement on the normal solution u_*, but it is rather close to the necessary condition (1.10).
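The convex duality k^* = p underlying Lemma 1 can be checked numerically for the illustrative power gauge φ(x) = x^{p−1} (our own concrete choice), for which p(x) = x^p/p and k(x) = x^q/q with 1/p + 1/q = 1:

```python
import numpy as np

# Numerical check of the duality in Lemma 1 for the power gauge
# phi(x) = x**(p-1): then p(x) = x**p / p and k(x) = x**q / q are convex
# conjugates, k(x) = sup_y (x*y - p(y)), attained at y = phi^{-1}(x).
p_exp = 3.0
q_exp = p_exp / (p_exp - 1.0)            # conjugate exponent, 1/p + 1/q = 1

def p_fun(y):  return y**p_exp / p_exp
def k_fun(x):  return x**q_exp / q_exp

for x in [0.5, 1.0, 2.0, 3.7]:
    ys = np.linspace(0.0, 10.0, 200001)
    sup = np.max(x * ys - p_fun(ys))     # brute-force conjugate p*(x)
    assert abs(sup - k_fun(x)) < 1e-6    # p*(x) == k(x) to grid accuracy

# Lemma 1 itself in scalar form: k'(p'(x)) = phi^{-1}(phi(x)) = x.
for x in [0.5, 1.0, 2.0, 3.7]:
    assert abs((x**(p_exp - 1.0))**(1.0 / (p_exp - 1.0)) - x) < 1e-12
```

The brute-force supremum agrees with the closed form k(x) up to grid accuracy, matching the Young-type identity (1.9).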

Description of the Method
The algorithm proposed below for finding the normal solution (2) of equation (1) in Banach spaces from the approximate data A_n, f_n, n = 1, 2, …, is similar to its Hilbert version (4) from [19].
1. For the fixed sequence number n find an element v_n ∈ V that satisfies the conditions

I_n(v_n) ≤ inf_{v ∈ V} I_n(v) + ε_n,  I_n(v) = K(A_n^* v) − ⟨v, f_n⟩,  V = { v ∈ F^*: ‖v‖_{F^*} ≤ r_* }, (2.1)

where r_* is taken from assumption B7 and ε_n ≥ 0 is a tolerance that allows the optimization problem to be solved approximately.
2. Set u_n = J_H A_n^* v_n as the approximate solution to (2).
Remark 2. Note that for Hilbert spaces H and F we can take φ(x) = x, so that P(u) = ‖u‖_H^2/2, K(g) = ‖g‖_H^2/2 and J_H u = K′(u) = u. In this case method (2.1) fully coincides with the method from [19].
Remark 3. As in [19], instead of V one can use in (2.1) suitable smaller sets V_n ⊂ V; in this case the proof of the method's convergence does not change. For the finite-dimensional approximate operators A_n and A_n^* which are usually used in practical computations, this makes it possible to choose finite-dimensional subspaces F_n^* for the variation of the sources v. The problem I_n(v) → inf, v ∈ V_n, then turns into a finite-dimensional problem of minimizing a smooth convex function I_n(v) over a ball V_n. Note that a ball is one of the simplest closed bounded convex sets with non-empty interior. For the approximate solution of such problems (more precisely, solution approximate in the value of the function) there is a well-developed arsenal of numerical methods.
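A minimal finite-dimensional sketch of the resulting ball-constrained problem I_n(v) → inf, v ∈ V_n, can be written down directly. All names and parameters here are our own assumptions: we model H on ℓ_p with p = 3/2 (so H^* ~ ℓ_q, q = 3, and K′ is smooth), take a Euclidean ball for V_n so that rescaling is an exact projection, and use an ad hoc step size and iteration count:

```python
import numpy as np

# Finite-dimensional sketch of I_n(v) -> inf over a ball V_n.  With the power
# gauge phi(x) = x**(p-1), p = 3/2, one gets K(g) = ||g||_q**q / q (q = 3) and
# the smooth componentwise map J_H(g) = K'(g) = |g|**(q-1) * sign(g).
q_exp = 3.0

def J_H(g):                                   # K'(g) for the l_q model of H*
    return np.abs(g)**(q_exp - 1.0) * np.sign(g)

rng = np.random.default_rng(1)
A_n = rng.standard_normal((4, 6))             # discrete operator A_n: R^6 -> R^4
v_star = rng.standard_normal(4)               # source element
u_star = J_H(A_n.T @ v_star)                  # sourcewise representable solution
f_n = A_n @ u_star                            # consistent right-hand side
r_star = np.linalg.norm(v_star) + 1.0         # assumed majorant of the source norm

v = np.zeros(4)
step = 1e-3
for _ in range(100_000):
    v -= step * (A_n @ J_H(A_n.T @ v) - f_n)  # I_n'(v) = A_n K'(A_n^* v) - f_n
    nv = np.linalg.norm(v)
    if nv > r_star:                           # project back onto the ball V_n
        v *= r_star / nv

u_n = J_H(A_n.T @ v)                          # step 2 of the method
assert np.linalg.norm(A_n @ u_n - f_n) < 1e-2 * np.linalg.norm(f_n)
```

Since I_n is smooth and convex and the source lies strictly inside the ball, projected gradient descent drives the residual of the recovered u_n down, illustrating the two-step structure of the method.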

Proof of Convergence
Let us examine the behavior of the approximate solutions u_n when the perturbed data A_n, f_n asymptotically approach their exact values A, f in the sense of (3). To do this we need the following equivalent reformulation of the problem (2).
Lemma 2. Let assumptions B1, B2, B4–B6 be fulfilled. Then an element u_* ∈ H is the solution to (2) if and only if it can be represented as u_* = J_H A^* v̄, where v̄ is a solution to the following optimization problem:

I(v) = K(A^* v) − ⟨v, f⟩ → inf,  v ∈ F^*. (3.1)

P r o o f. The problem (3.1) is a smooth and convex one without constraints, so it is equivalent to finding an element v̄ ∈ F^* at which the derivative of the functional vanishes [8, Proposition 2.1, p. 36]:

I′(v̄) = A K′(A^* v̄) − f = 0.

Taking into account (1.2), this equation is equivalent to the system of two equations for the unknowns u_*, v̄:

u_* = J_H A^* v̄,  Au_* = f. (3.2)

Let u_* be a solution to (2). Then it follows from assumption B6 that u_* satisfies (3.2). On the other hand, if u_* satisfies (3.2), then using Corollary 1 we get that u_* is the solution to (2).
Now we are ready to prove the convergence of the method.
Theorem 3. Let assumptions B1–B7 and conditions (3) be fulfilled. Let u_n be the final output of the method described above and ε_n → 0 as n → ∞. Then ‖u_n − u_*‖_H → 0 as n → ∞.
P r o o f. The space F^* is reflexive, and the set V defined in (2.1) is convex, bounded and closed; therefore the family v_n ∈ V has in F^* a weak limit point v_0 ∈ V [7, V.4.7, p. 425]: v_{n_m} ⇀ v_0 as m → ∞. In order to simplify notation, we omit the symbol m from the subsequence n_m and write v_n ⇀ v_0, n → ∞. Then due to the strong pointwise convergence of A_n^* to A^* we have

A_n^* v_n ⇀ A^* v_0 weakly in H^*, n → ∞. (3.3)

The functional K(g) is convex and continuous, hence weakly lower semicontinuous [9, Proposition 5, p. 74]. Denote I(v) = K(A^* v) − ⟨v, f⟩ and notice that the following relations are valid:

I(v_0) ≤ lim inf_{n→∞} I_n(v_n) ≤ lim sup_{n→∞} I_n(v_n) ≤ lim_{n→∞} (I_n(v_0) + ε_n) = I(v_0). (3.4)

The first inequality in (3.4) is due to the weak lower semicontinuity of K(g) and the strong convergence ‖f_n − f‖_F → 0; the third inequality follows from (2.1) and the inclusion v_0 ∈ V; the equality is due to (3). From (3.4) it follows that there exists

lim_{n→∞} K(A_n^* v_n) = K(A^* v_0), and hence ‖A_n^* v_n‖_{H^*} → ‖A^* v_0‖_{H^*}, (3.5)

where the last convergence takes place because the function k(x) is strictly increasing. Using (3.3), (3.5) and the Radon–Riesz property of the norm from assumption B3, we get the strong convergence

‖A_n^* v_n − A^* v_0‖_{H^*} → 0, n → ∞. (3.6)

Since the source element v_* from assumption B6 belongs to V, it follows from (2.1) that I_n(v_n) ≤ I_n(v_*) + ε_n. Passing to the limit and using (3.6) and the convergence ⟨v_n, f_n⟩ → ⟨v_0, f⟩, we get I(v_0) ≤ I(v_*). Lemma 2 states that v_* is a solution to the global optimization problem (3.1); that is why I(v_*) ≤ I(v_0), and hence I(v_0) = I(v_*). This means that v_0 is also a solution to the problem (3.1), so using Lemma 2 once more, but in the opposite direction, we obtain that the only solution u_* to the problem (2) can be represented by the source v_0: u_* = J_H A^* v_0. Assumption B4 implies the strong continuity of J_H, which together with (3.6) leads to the limit relation

‖u_n − u_*‖_H = ‖J_H A_n^* v_n − J_H A^* v_0‖_H → 0, n → ∞. (3.7)

Notice that our proof holds true for every weak limit point v_0 ∈ V, and that is why convergence (3.7) is valid for an arbitrary family of approximate data A_n, f_n possessing the asymptotic properties (3).

Application to the Boundary Control Problem
In order to illustrate the applicability of the method, consider a model boundary control problem (4.1) for the one-dimensional wave equation on the segment [0, l]. The goal of the control action u(t) is to drive the system to a given final state f(x) = (f_0(x), f_1(x)) at a given time T ≥ 2l (condition (4.2)). The spaces H and F of controls u(t) and target states f(x) are H = L_p(0, T) and F = L_p(0, l) × W_p^{-1}(0, l). Here L_p(a, b) is the Lebesgue space of measurable functions φ defined on (a, b) with |φ|^p integrable on (a, b). The space W_p^{-1}(0, l) is adjoint to the Sobolev space W̊_q^1(0, l) of functions φ ∈ L_q(0, l) having the first derivative φ′ ∈ L_q(0, l) and vanishing at both endpoints: φ(0) = φ(l) = 0. The exponents p and q are conjugate: 1/p + 1/q = 1. Let us also consider the adjoint problem (4.5) [15, 26]. Analogously to [11] it can be proved that the linear operator A^*, built from the solution of the adjoint problem, is well defined and bounded: A^* ∈ L(F^* → H^*). Then its adjoint operator is also linear and bounded: A ∈ L(H → F), so the boundary control problem (4.1), (4.2) can be reformulated as equation (1) in the Banach spaces H and F. We will find its normal solution u_* with property (2).

Analogously, after integrating the differential equation along characteristics and then using Jensen's inequality, we obtain that the constant µ in the observability inequality (4.7) is equal to 1.
Remark 4. The value µ = 1 is adequate for T close to 2l, but becomes too rough for sufficiently large T. Using a slightly modified technique, one can obtain for µ another expression of the form µ = C · (T − 2l) (with a constant C > 0 independent of T), which is preferable for sufficiently large T.
It follows from the observability inequality (4.7) that R(A) = F [14, Theorem 3.6, p. 13], so assumption B5 is fulfilled. The closedness of R(A) then implies the closedness of R(A^*) [14, Theorem 3.7, p. 13], and with the help of Corollary 1 the validity of assumption B6 is proved.
Remark 5. Note that, despite the closedness of R(A^*), even in the case of Hilbert spaces (p = 2) the problem (4.1), (4.2) is unstable when the approximate operators A_n are constructed using a semi-discrete finite-difference scheme in space, as was shown in [26]. Using fully discrete schemes with unequal time and space mesh steps is also noted in [26] as a practice that leads to instabilities; indirectly this was illustrated by non-regularized computations in [6].
To find a value r_* for the source norm estimate from assumption B7, take into account that the element v_* is the unique source (due to (4.7)) for the solution u_* and satisfies, by (4.7), an estimate ‖v_*‖_{F^*} ≤ r_* with r_* depending on µ and ‖f‖_F. In our case µ = 1, so r_* = ‖f‖_F^{p/q}, and assumption B7 is true. In practice, if we know only the approximate target f_n, we can take r_* = ‖f_n‖_F^{p/q} + γ with some fixed γ > 0.

Numerical Experiments
Numerical experiments were performed for the problem (4.1) with l = 1, T = 3l = 3 > 2l, p = 3 and µ = 1. As the terminal target state f = (f_0(x), f_1(x)) we chose the pair (5.1). Note that at first we chose a control u_*(t) such that u_* ∈ J_H R(A^*). After that, using explicit expressions for the solution of the boundary value problem (4.1), we defined the target f = Au_*, so according to Corollary 1 the control u_*(t) is the solution to (2). Plots of u_*(t) and f(x) are shown in Figure 1 and Figure 2 respectively. The approximate operator A_n was built similarly to [18] using a three-layer explicit difference scheme on a uniform grid with M nodes on the segment [0, l] and N nodes on [0, T]. The approximate terminal state f_n was produced by discretization of the functions (5.1) and by adding random noise of fixed level δ = ‖f_n − f_d‖_F/‖f_d‖_F, where f_d is the discretized function (5.1). Table 1 presents some relative errors ε = ‖u_n − u_*‖_H/‖u_*‖_H (where H = L_3(0, 1)) of recovering the control u_*(t) by the method, depending on the grid parameters M, N and the noise level δ in the target state. Some typical plots of the approximate controls u_n(t) are presented in Figure 3 and Figure 4.
As can be seen from Table 1, the errors of the method are quite acceptable and decrease with grid refinement and vanishing noise, which agrees with the theoretical conclusions stated above. It is curious that the numerical results are sensitive to small variations of the difference grid steps near their equal values, even when the stability condition of the difference scheme is satisfied. Of course, other methods oriented to problems of the type (4.1) can give better results. The main advantages of our method are its universality, the possibility of applying it to a wide class of ill-posed problems in Banach spaces, and the existence of a theoretical base in the form of assumptions B1–B7.
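The bookkeeping of such an experiment (noise of a prescribed relative level δ, and the relative error ε measured in a discretized L_3 norm) can be sketched as follows; the functions below are stand-ins, not the actual pair (5.1) or the computed controls:

```python
import numpy as np

# Sketch of the experiment's bookkeeping with stand-in data: add noise of a
# prescribed relative level delta to a discretized target f_d, and measure a
# relative error in a discretized L_3(0,1) norm (rectangle-rule quadrature).
rng = np.random.default_rng(2)
M = 101
x = np.linspace(0.0, 1.0, M)
h = x[1] - x[0]

f_d = np.sin(np.pi * x)                       # stand-in discretized target
delta = 0.01                                  # prescribed noise level
noise = rng.standard_normal(M)
f_n = f_d + delta * np.linalg.norm(f_d) * noise / np.linalg.norm(noise)
# by construction ||f_n - f_d|| / ||f_d|| equals delta exactly
assert abs(np.linalg.norm(f_n - f_d) / np.linalg.norm(f_d) - delta) < 1e-12

def norm_L3(g):                               # discrete L_3(0,1) norm
    return (h * np.sum(np.abs(g)**3))**(1.0 / 3.0)

u_star = x * (1.0 - x)                        # stand-in exact control
u_n = u_star + 0.005 * np.sin(3.0 * np.pi * x)   # stand-in approximation
eps = norm_L3(u_n - u_star) / norm_L3(u_star)    # relative error as in Table 1
assert eps < 0.1
```

Normalizing the noise vector before scaling makes the realized relative perturbation match δ exactly rather than only in expectation.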

Conclusion
In this paper a numerical method for the linear equation in Banach spaces is proposed. The main advantage of the method is its applicability to problems with a non-uniformly perturbed operator. However, inequalities like (4.7) with explicit values of µ can be obtained only in a limited number of applications, so in practice the problem of choosing an appropriate value of the important parameter r_* can prove rather difficult. We also note that for uniformly convex Banach spaces it seems possible to obtain an error estimate for the proposed method.

Figure 1. Plot of the exact control u_*(t)

Figure 2. Plot of the exact terminal state f(x)

Table 1. Relative errors ε = ‖u_n − u_*‖_H/‖u_*‖_H of the method