Extending the Applicability of the Super-Halley-Like Method Using ω-Continuous Derivatives and Restricted Convergence Domains

Abstract We present a local convergence analysis of the super-Halley-like method in order to approximate a locally unique solution of an equation in a Banach space setting. The convergence analysis in earlier studies was based on hypotheses reaching up to the third derivative of the operator. In the present study we expand the applicability of the super-Halley-like method by using hypotheses only up to the second derivative. We also provide computable error bounds on the distances involved and a uniqueness result based on Lipschitz constants. Numerical examples are also presented in this study.


Introduction
In this study we consider the super-Halley-like method for approximating a locally unique solution x* of the equation F(x) = 0, where F is a twice Fréchet differentiable operator defined on a subset Ω of a Banach space B_1 with values in a Banach space B_2. In particular, we study the local convergence analysis of the super-Halley-like method defined for each n = 0, 1, 2, . . . by

y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = y_n + (1/2) L_n (I − L_n)^{-1} (y_n − x_n),     (1.1)

where x_0 is an initial point and L_n = F'(x_n)^{-1} F''(x_n) F'(x_n)^{-1} F(x_n). The efficiency and importance of method (1.1) were discussed in [4][5][6][7][8][9][14]. The study of convergence of iterative algorithms is usually divided into two categories: semi-local and local convergence analysis. The semi-local convergence analysis is based on information around an initial point and yields conditions ensuring the convergence of the algorithm, while the local convergence analysis is based on information around a solution and yields estimates of the radii of the convergence balls. Local results are important, since they indicate the degree of difficulty in choosing initial points. The local convergence of method (1.1) was shown using hypotheses given in non-affine invariant form by

(C_1) F : Ω ⊂ B_1 → B_2 is a thrice continuously differentiable operator.
(C_2) There exists x_0 ∈ Ω such that F'(x_0)^{-1} ∈ L(B_2, B_1) and ‖F'(x_0)^{-1}‖ ≤ β.
The hypotheses for the local convergence analysis of these methods are the same, but with x_0 replaced by x*. Notice, however, that hypotheses (C_5) and (C_6), which involve the third derivative of F, limit the applicability of these methods. As a motivational example, let us define the function f on Ω = [−1/2, 5/2] by

f(x) = x^3 ln x^2 + x^5 − x^4,   f(0) = 0.

We have that

f'''(x) = 6 ln x^2 + 60 x^2 − 24 x + 22.

Then, obviously, the function f''' is unbounded on Ω. Hence, the earlier results using the (C) conditions cannot be used to solve the equation f(x) = 0. Notice, in particular, that there is a plethora of iterative methods for approximating solutions of nonlinear equations defined on B_1 (see [1]–[16]). We study these methods in the more general setting of a Banach space under hypotheses only on the first and second Fréchet derivatives of F (see the conditions (H) that follow). This way we expand the applicability of these methods.
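In the scalar case the ingredients of method (1.1) are just f, f' and f''. The following sketch is our own illustration, not code from the references; the test equation x^3 − 2 = 0, the starting point and the tolerance are chosen only for demonstration:

```python
def super_halley(f, df, d2f, x, tol=1e-12, max_iter=20):
    """Approximate a root of f by the super-Halley-like method (1.1), scalar case."""
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                 # first substep: y_n = x_n - f'(x_n)^{-1} f(x_n)
        L = d2f(x) * fx / dfx ** 2       # scalar L_n = f'(x_n)^{-1} f''(x_n) f'(x_n)^{-1} f(x_n)
        x_new = y + 0.5 * L / (1.0 - L) * (y - x)  # second substep of (1.1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x^3 - 2 = 0 starting from x0 = 1.5
root = super_halley(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, lambda x: 6 * x, 1.5)
```

Starting from x_0 = 1.5 the iterates settle on 2^{1/3} within a handful of steps, reflecting the faster-than-quadratic convergence of the method.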
The rest of the paper is organized as follows: Section 2 contains the local convergence analysis and Section 3 contains the semi-local convergence analysis of method (1.1). The numerical examples are presented in the concluding Section 4.

Local convergence analysis
Local convergence analysis of method (1.1) is based on some scalar functions and parameters. Let w_0 : [0, +∞) → [0, +∞) be a continuous and increasing function with w_0(0) = 0. Suppose that the equation w_0(t) = 1 has at least one positive root. Denote by r_0 the smallest such root, i.e.,

r_0 = min{t > 0 : w_0(t) = 1}.     (2.1)
We have h_1(0) = −1 < 0 and h_1(t) → +∞ as t → r_0^-. The intermediate value theorem guarantees the existence of zeros of the function h_1 in (0, r_0). Denote by r_1 the smallest such zero. Furthermore, define functions p and h_p on the interval [0, r_0). We get that h_p(0) = −1 < 0 and h_p(t) → +∞ as t → r_0^-. Denote by r_p the smallest zero of the function h_p in the interval (0, r_0). Finally, define functions g_2 and h_2 on the interval [0, r_p) with h_2(t) = g_2(t) − 1.
We obtain h_2(0) = −1 < 0 and h_2(t) → +∞ as t → r_p^-. Denote by r_2 the smallest zero of the function h_2 on the interval (0, r_p). Define the radius of convergence r by

r = min{r_1, r_2}.     (2.2)

Then, for each t ∈ [0, r),

0 ≤ g_i(t) < 1,   i = 1, 2.

Let U(v, ρ) and Ū(v, ρ) stand, respectively, for the open and closed balls in B_1 with center v ∈ B_1 and radius ρ > 0.
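In practice the radii r_0, r_1, r_p, r_2 are computed numerically as the smallest zeros of the corresponding scalar equations. The sketch below is a hedged illustration: the linear majorants w_0(t) = L_0 t and w(t) = L t, the resulting form of g_1, and the constants L_0 = L = 1 are assumptions made only for this example; the analysis above admits general ω-continuous functions.

```python
def bisect(h, a, b, tol=1e-12):
    """Bisection for a continuous h with h(a) < 0 < h(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if h(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

L0, L = 1.0, 1.0                            # assumed Lipschitz constants (illustration only)
r0 = 1.0 / L0                               # smallest positive root of w0(t) = 1
g1 = lambda t: (L * t / 2.0) / (1.0 - L0 * t)  # assumed form of g1 under linear w0, w
h1 = lambda t: g1(t) - 1.0                  # h1(0) = -1 < 0, h1(t) -> +inf as t -> r0^-
r1 = bisect(h1, 0.0, r0 - 1e-9)             # smallest zero of h1 in (0, r0)
```

With these linear majorants the computed r1 reproduces the classical radius 2/(2 L_0 + L).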
Next, we present the local convergence analysis of method (1.1) using the preceding notation.
Theorem 2.1. Let F : Ω ⊂ B_1 → B_2 be a twice continuously Fréchet-differentiable operator. Suppose that there exist x* ∈ Ω with F(x*) = 0 and F'(x*)^{-1} ∈ L(B_2, B_1), and continuous and increasing functions w_0, w, v : [0, +∞) → [0, +∞) with w_0(0) = 0 such that for each x ∈ Ω the conditions (2.5)–(2.9) hold, in particular

‖F'(x*)^{-1}(F'(x) − F'(x*))‖ ≤ w_0(‖x − x*‖),     (2.5)

and that Ū(x*, r) ⊆ Ω, where the radii r_0 and r are defined by (2.1) and (2.2), respectively. Then, the sequence {x_n} starting from x_0 ∈ U(x*, r)\{x*} and generated by (1.1) is well defined, remains in U(x*, r) and converges to x*. Moreover, the following estimates hold:

‖y_n − x*‖ ≤ g_1(‖x_n − x*‖) ‖x_n − x*‖ < ‖x_n − x*‖ < r     (2.10)

and

‖x_{n+1} − x*‖ ≤ g_2(‖x_n − x*‖) ‖x_n − x*‖ < ‖x_n − x*‖,     (2.11)

where the functions g_i, i = 1, 2, are defined previously. Furthermore, if there exists r* ≥ r such that

∫_0^1 w_0(θ r*) dθ < 1,

then the limit point x* is the only solution of the equation F(x) = 0 in Ω_1 = Ω ∩ Ū(x*, r*).

Proof. We shall show estimates (2.10) and (2.11) using induction. Let x ∈ U(x*, r). Using (2.1), (2.2) and (2.5), we get that

‖F'(x*)^{-1}(F'(x) − F'(x*))‖ ≤ w_0(‖x − x*‖) ≤ w_0(r) < 1.     (2.13)

In view of (2.13) and the Banach lemma on invertible operators ([1,4,15,16]), F'(x)^{-1} ∈ L(B_2, B_1) and

‖F'(x)^{-1} F'(x*)‖ ≤ 1/(1 − w_0(‖x − x*‖)).     (2.14)

In particular, y_0 is well defined by the first substep of method (1.1) for n = 0. We can write

y_0 − x* = x_0 − x* − F'(x_0)^{-1} F(x_0).     (2.15)

Using (2.2), (2.3), (2.14) and (2.15) we obtain in turn that

‖y_0 − x*‖ ≤ g_1(‖x_0 − x*‖) ‖x_0 − x*‖ < ‖x_0 − x*‖ < r,

which shows (2.10) for n = 0 and y_0 ∈ U(x*, r). Next, we must show that (I − L_0)^{-1} ∈ L(B_2, B_1). By (2.2), (2.8), (2.9) and (2.14) we have in turn that ‖L_0‖ ≤ p(‖x_0 − x*‖) < 1, where we also used the estimate ‖x_0 − x*‖ < r ≤ r_p. Hence, (I − L_0)^{-1} exists with ‖(I − L_0)^{-1}‖ ≤ 1/(1 − p(‖x_0 − x*‖)), and x_1 is well defined by the second substep of method (1.1) and (2.18). Moreover, by (2.2), (2.3) (for i = 2), (2.14) and (2.16)–(2.19), we get in turn that

‖x_1 − x*‖ ≤ g_2(‖x_0 − x*‖) ‖x_0 − x*‖ < ‖x_0 − x*‖,

which shows (2.11) for n = 0 and x_1 ∈ U(x*, r). The induction is completed by simply replacing x_0, y_0, x_1 with x_k, y_k, x_{k+1}, respectively, in the preceding estimates. Furthermore, from the estimate ‖x_{k+1} − x*‖ ≤ c ‖x_k − x*‖ < r with c = g_2(‖x_0 − x*‖) ∈ [0, 1), we deduce that lim_{k→∞} x_k = x* and x_{k+1} ∈ U(x*, r). Finally, to show the uniqueness part, let y* ∈ Ω_1 with F(y*) = 0 and set Q = ∫_0^1 F'(x* + θ(y* − x*)) dθ. Using the condition on r*, we get ‖F'(x*)^{-1}(Q − F'(x*))‖ ≤ ∫_0^1 w_0(θ r*) dθ < 1, so Q is invertible, and from 0 = F(y*) − F(x*) = Q(y* − x*) we conclude that y* = x*.

In [3], Argyros and Ren used, instead of (2.7) (a Lipschitz-type condition with constant L on Ω_0), the corresponding condition (2.20) with constant L* on Ω. But using (2.7) and (2.20) we get that L ≤ L* holds, since Ω_0 ⊆ Ω. In case L < L*, the new convergence analysis is better than the old one. Notice also that analogous comparisons follow from (2.6) and (2.20).
Notice that the convergence radius for Newton's method, given independently by Rheinboldt ([15]) and Traub ([16]), is

r_{TR} = 2/(3L).

As an example, let us consider the function f(x) = e^x − 1. Then x* = 0.
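As a concrete check (our own computation; the Lipschitz constant L = e is the bound we assume for f''(x) = e^x on the ball Ū(0, 1)), the Rheinboldt–Traub radius 2/(3L) for f(x) = e^x − 1 evaluates as follows:

```python
import math

# f(x) = e^x - 1, x* = 0; on U(0, 1) the derivative f'(x) = e^x has
# Lipschitz constant L = e (the sup of |f''(x)| = e^x on the ball).
L = math.e
r_TR = 2.0 / (3.0 * L)   # Rheinboldt/Traub convergence radius for Newton's method
```

This gives r_TR ≈ 0.2453 under the stated assumption on L.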
(c) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM) and the generalized conjugate method (GCM) for combined Newton/finite projection methods, and in connection with the mesh independence principle in order to develop the cheapest and most efficient mesh refinement strategy ([4][5][6][7]).
(d) The results can also be used to solve equations where the operator F satisfies the autonomous differential equation ([5,7])

F'(x) = p(F(x)),

where p is a known continuous operator. Since F'(x*) = p(F(x*)) = p(0), we can apply the results without actually knowing the solution x*. As an example, let us consider F(x) = e^x − 1. Then we can choose p(x) = x + 1 and x* = 0.
(e) It is worth noticing that the convergence conditions for method (1.1) do not change if we use the new instead of the old conditions. Moreover, for the error bounds in practice we can use the computational order of convergence (COC)

ξ_n = ln(‖x_{n+1} − x*‖ / ‖x_n − x*‖) / ln(‖x_n − x*‖ / ‖x_{n−1} − x*‖),   for each n = 1, 2, . . .
instead of the error bounds obtained in Theorem 2.1.
(f) In view of (2.6) and the estimate

‖F'(x*)^{-1} F'(x)‖ = ‖I + F'(x*)^{-1}(F'(x) − F'(x*))‖ ≤ 1 + w_0(‖x − x*‖),

the function v can be replaced by the simpler choice v(t) = 1 + w_0(t).
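The COC in remark (e) is easy to estimate once the iterates and x* are available. The following sketch is our own illustration; Newton's method on f(x) = x^2 − 2 is used only because its order, 2, is known and should be recovered:

```python
import math

def coc(xs, x_star):
    """Computational order of convergence from iterates xs and solution x_star."""
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

orders = coc(xs, math.sqrt(2.0))
```

The last entries of `orders` approach 2, the order of Newton's method; the iteration is stopped before the errors reach machine precision, where the ratios become meaningless.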

Semi-local convergence analysis
The following conditions (H), in a non-affine invariant form, have been used to show the semi-local convergence of method (1.1) ([5,14]). In many applications the iterates {x_n} remain in a neighborhood Ω_0 ⊆ Ω of the initial point. If we locate Ω_0 before we find M, q and ψ, then the constants are computed on the smaller set, so the new semi-local convergence conditions are weaker. Consequently, the new convergence domain will be at least as large as the one obtained using Ω. To achieve this goal we consider the weaker conditions (A), in an affine invariant form, involving a continuous and increasing function w_0 : [0, +∞) → [0, +∞) with w_0(0) = 0 such that for each x ∈ Ω

‖F'(x_0)^{-1}(F'(x) − F'(x_0))‖ ≤ w_0(‖x − x_0‖).

Define functions q, d, c on the interval [0, r_0), and suppose that the equation

d(t)/(1 − c(t)) − t = 0

has zeros in the interval (0, r_0). Denote by r the smallest such zero. Moreover, the zero r satisfies q(r) < 1 and c(r) < 1.
It is convenient for the semi-local convergence analysis of method (1.1) that follows to define the scalar sequences {p n }, {q n }, {s n }, {t n } for each n = 0, 1, 2, . . . by We need an Ostrowski-type representation of method (1.1).
Lemma 3.1. Suppose that method (1.1) is well defined for each n = 0, 1, 2, . . . Then the equality (3.5) holds.

Proof. Using method (1.1) and the Taylor series expansion of F about y_n ∈ Ω, we obtain (3.5).
Next, we present the semi-local convergence analysis of method (1.1) using the preceding notation and conditions (A).
Theorem 3.2. Let F : Ω ⊂ B_1 → B_2 be a twice continuously Fréchet-differentiable operator. Suppose that the conditions (A) are satisfied. Then, the sequence {x_n} generated for x_0 ∈ Ω by method (1.1) is well defined, remains in U(x_0, r) for each n = 0, 1, 2, . . . and converges to a solution x* ∈ Ū(x_0, r) of the equation F(x) = 0. Moreover, the estimates (3.10) and (3.11) hold. Furthermore, the limit point x* is the only solution of the equation F(x) = 0 in Ω_1 = Ω ∩ Ū(x_0, r).
Proof. Let x ∈ U(x_0, r_0), where r_0 is defined in condition (A_2), from which we have that

‖F'(x_0)^{-1}(F'(x) − F'(x_0))‖ ≤ w_0(‖x − x_0‖) < 1.     (3.6)

It follows from (3.6) and the Banach lemma on invertible operators that F'(x)^{-1} ∈ L(B_2, B_1) and

‖F'(x)^{-1} F'(x_0)‖ ≤ 1/(1 − w_0(‖x − x_0‖)).     (3.7)

We also have that y_0 is well defined by the first substep of method (1.1) and that L_0 exists for n = 0. In view of the definition of L_0, (A_1), (A_3), (A_4), (3.1), (3.2) and (3.7), we get a bound on ‖L_0‖ showing that I − L_0 is invertible. The point x_1 is then also well defined by the second substep of method (1.1) for n = 0. Using (3.5), (A_2), (3.1)–(3.4) and (3.7)–(3.9), we get in turn the estimates (3.10) and (3.11), and the induction for (3.10) and (3.11) is completed. Moreover, the scalar sequences {t_k}, {s_k} are increasing and bounded from above by r (see (3.10), (3.11), (3.13) and (3.14)), so they converge to their unique least upper bound r_1 ≤ r. In view of (3.10) and (3.11), the sequences {x_k}, {y_k} are Cauchy sequences in the complete space B_1 and as such converge to some x* ∈ Ū(x_0, r) (since Ū(x_0, r) is a closed set). By letting k → ∞ in (3.12) we get F(x*) = 0.
Moreover, the third condition in (A_3) implies the second condition, but not necessarily conversely. Hence, it is important to introduce the second condition separately, in case strict inequality holds in (3.15).
for each x, y ∈ Ω. Choose w(t) = M_0. Then we have that M_0 ≤ M, since Ω_0 ⊆ Ω, leading to a tighter convergence analysis (see also the numerical examples).

Numerical Examples
The numerical examples are presented in this section.
Example 4.1. Returning to the motivational example from the introduction of this study, we have w_0(t) = w(t) = v(t) = 146.6629073t and v_1(t) = 2. The parameters for method (1.1) are r_1 = 0.6667 and r_2 = 0.4384 = r.