In this paper, we first present several one-step iterative methods, including classical Newton’s method and Halley’s method, for the root-finding problem, based on Thiele’s continued fraction. Second, by using approximants of the second and third derivatives, we obtain an improved iterative method that is two-step and avoids calculating the higher derivatives of the function. Convergence analysis shows that the modified iterative method is fourth-order convergent for a simple root of the equation. Finally, to illustrate the efficiency and performance of the proposed method, we give some numerical experiments and comparisons.
In recent years, many higher-order iterative methods have been developed to solve root-finding problems. Iterative algorithms are an interesting and very old subject (see literature1–19 and references therein), owing to the importance of the topic in numerical analysis, engineering and applied problems such as economic analysis, physics and dynamical models.20–23 In general, these iterative methods can be derived by a variety of techniques, such as Taylor series,24–26 decomposition,4,5,9 quadrature2,12 and homotopy methods.13,14
We consider a nonlinear equation f(x) = 0. Suppose that f is sufficiently smooth and let p satisfy f(p) = 0. It is well known that Newton’s method as below
is a good choice for finding an approximate root of f(x) = 0; it is a one-step iterative method with quadratic convergence to a simple zero of the nonlinear equation, as shown in the literature.24–26 Viewing two steps of the Newton iteration as a single step, Traub24 considered the following iteration
The two-step Newton’s method is fourth-order convergent. This iterative method depends on the first derivative at the point xk and at another point yk, and this dependence makes the two-step Newton’s method more expensive in applications: it requires two function evaluations and two first-derivative evaluations per iteration. Thus, the main motivation of this work is to construct an efficient iterative method that reduces the total number of new evaluations (that is, values of f and its derivatives) per iteration while keeping the order of convergence.
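For concreteness, the one-step Newton update and Traub’s two-step variant described above can be sketched as follows (a minimal Python sketch; the function names and the stopping rule are illustrative, not taken from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical one-step Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1   # root estimate and iteration count
    return x, max_iter

def two_step_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Traub's two-step Newton: two Newton substeps counted as one iteration,
    costing two f evaluations and two f' evaluations per iteration."""
    x = x0
    for k in range(max_iter):
        y = x - f(x) / df(x)       # first Newton substep
        x_new = y - f(y) / df(y)   # second Newton substep, at the point y_k
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

On f(x) = x² − 2 with x0 = 1, the two-step scheme reaches machine accuracy in roughly half the iterations of the one-step scheme, as the fourth order predicts.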
In this paper, we start by using a continued fraction to establish several one-step iterative methods, including classical Newton’s method and Halley’s method, for solving nonlinear equations. To avoid calculating the high-order derivatives of the function, we then employ approximants of the higher derivatives to improve the presented iterative method. As a result, we construct an iterative formula that does not require high-order derivatives. Unlike the two-step Newton’s method above, the proposed method only requires two evaluations of the function f and one evaluation of the derivative at xk per iteration step; it therefore saves work by eliminating one derivative evaluation at yk. Furthermore, we prove that the order of convergence of the modified iterative method is four for a simple root of the equation. Finally, we give some numerical experiments and comparisons to illustrate the efficiency and performance of the proposed method.
The rest of this paper is organized as follows. In the next section we present some preliminaries on Thiele’s continued fraction and on iterative methods. In the subsequent section, we first propose several one-step iterative schemes based on the expansion of Thiele’s continued fraction; we then improve the presented iterative method to obtain an iterative formula that avoids high-order derivatives, and we prove that the modified method is at least fourth-order convergent. In the penultimate section, we give numerical examples to show the performance of the presented method and compare it with other high-order methods. Finally, we draw conclusions from the numerical results in the last section.
Preliminaries
In this section, we briefly recall some basic definitions and results for Thiele’s continued fraction and for the speed of convergence of iterative schemes. Surveys and more complete treatments of continued fractions and of the convergence speed of iterative methods can be found in Tan et al.,27,28 Traub,24 Gautschi25 and Burden et al.26 For simplicity, throughout this paper we let ℝ stand for the set of real numbers.
Definition 2.1 Assume that the two coefficient sequences below consist of real numbers. The following continued fraction
is called Thiele’s continued fraction.
Definition 2.2 For equation (2) in Definition 2.1, the following continued fraction
is defined as the n-th truncated Thiele’s continued fraction.
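A truncated Thiele continued fraction of this kind, b0 + (x − x0)/(b1 + (x − x0)/(… + (x − x0)/bn)), can be evaluated from the innermost partial denominator outward. A minimal sketch (the coefficient list b and point x0 are illustrative names):

```python
def thiele_eval(b, x, x0):
    """Evaluate the truncated Thiele continued fraction
    b[0] + (x-x0)/(b[1] + (x-x0)/(... + (x-x0)/b[-1])) bottom-up."""
    t = x - x0
    value = b[-1]
    for coeff in reversed(b[:-1]):
        value = coeff + t / value   # wrap one more level of the fraction
    return value
```

For example, thiele_eval([1.0, 2.0], x, x0) computes 1 + (x − x0)/2. The sketch does not guard against a zero partial denominator.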
Let the coefficients of the Taylor polynomial be denoted as
For the relation between the coefficients of the Taylor expansion and the coefficients of Thiele’s continued fraction, we state the following lemma without proof.
Lemma 2.1 (Viscovatov algorithm) Suppose that the function f(x) is n times differentiable on an interval containing xk. If f(x) can be expanded into the following Thiele’s continued fraction about the point xk
and Taylor series about the point xk
where the coefficients are those of the Taylor expansion, then the coefficients of the continued fraction can be calculated by the Viscovatov algorithm as follows27,28
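As an illustration only (the recurrences displayed in the paper are not reproduced in this extract), one standard way to realize the Viscovatov algorithm is by repeated reciprocation of power series: write f = c0 + t/g1 with g1 the reciprocal series of c1 + c2·t + …, read off b1 = g1(0), and recurse on the tail. A sketch under that formulation:

```python
def series_reciprocal(c, n):
    """Coefficients of 1/(c[0] + c[1]*t + ...) up to order n-1 (needs c[0] != 0)."""
    r = [1.0 / c[0]]
    for k in range(1, n):
        s = sum(c[j] * r[k - j] for j in range(1, min(k, len(c) - 1) + 1))
        r.append(-s / c[0])
    return r

def viscovatov(c, m):
    """Thiele-type coefficients b_0..b_m from Taylor coefficients c_0, c_1, ...:
    f = c0 + t/g1 with g1 = 1/(c1 + c2*t + ...), b1 = g1(0), then recurse.
    Requires the leading coefficient at each stage to be nonzero."""
    b = [c[0]]
    g = list(c[1:])
    for _ in range(m):
        g = series_reciprocal(g, len(g))
        b.append(g[0])
        g = g[1:]
        if not g:
            break
    return b
```

For f(t) = 1/(1 − t) = 1 + t + t² + … this yields b = [1, 1, −1], that is, 1 + t/(1 + t/(−1)), which reproduces 1/(1 − t) exactly.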
On the other hand, we recall the speed of convergence of iterative schemes. First, we need a way to measure how rapidly a sequence converges; second, we need a definition of the speed of an iterative scheme.
Definition 2.3 Assume a sequence {xk} converges to ξ, with xk ≠ ξ for all k. If there exist two positive constants γ and τ such that
then {xk} converges to ξ with order τ, and γ is called the asymptotic error constant.
Definition 2.4 Assume a sequence {xk} is generated by an iterative technique of the form xk+1 = g(xk), for an initial approximation x0 and each k ≥ 0. If the sequence converges to the solution with order τ, then the order of convergence of the iterative scheme is said to be τ.
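Definition 2.3 also suggests a practical check used later in the experiments: from three consecutive errors one can estimate the order as τ ≈ ln|e(k+1)/e(k)| / ln|e(k)/e(k−1)|. A small sketch of this estimator (the helper name is ours):

```python
import math

def observed_order(xs, root):
    """Empirical convergence order in the sense of Definition 2.3:
    tau_k ~ ln|e_{k+1}/e_k| / ln|e_k/e_{k-1}|, with e_k = |x_k - root|."""
    e = [abs(x - root) for x in xs]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)
            if e[k - 1] > 0 and e[k] > 0 and e[k + 1] > 0]
```

Applied to Newton iterates for f(x) = x² − 2, the estimates approach 2, the known order of Newton’s method.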
The iterative methods
Now, we consider iterative methods to find a simple root of a nonlinear equation f(x) = 0. Let p be a simple root of the equation f(x) = 0 and suppose that η is an initial guess sufficiently close to p. Using Thiele’s continued fraction, we have
where η is the initial approximation for a zero. It follows from the Viscovatov algorithm (Lemma 2.1) that
and
Considering the first truncated Thiele’s continued fraction for equation (5), we have
Let us take equation (16) as the starting point for a new method and substitute the above expressions into equation (16). Then we obtain the following iterative method.
Algorithm 3.3. Starting with an initial approximation x0 and substituting equations (6) to (9) into equation (16), the iterative sequence is generated by the following iterative scheme
for all k ≥ 0.
Algorithm 3.3 is a one-step method that can be used for solving nonlinear equations. Obviously, in order to implement this iterative method we have to calculate the second and third derivatives of the function f(x), which may be inconvenient. To overcome this drawback, we introduce approximants of the second and third derivatives by means of Taylor series; this idea plays a significant part in developing iterative methods that avoid the higher derivatives. To be more precise, letting yk = xk − f(xk)/f′(xk) and then expanding f(yk) into a third-order Taylor series about the point xk yields
Algorithm 3.4. Starting with an initial approximation x0, the iterative sequence is generated by the following iterative scheme
where yk = xk − f(xk)/f′(xk), for all k ≥ 0.
It is easy to see that the iterative method of equation (19) needs the second derivative of the function f(x). In order to avoid computing the second derivative, we introduce an approximant of it by using Taylor series. Similarly, expanding f(yk) into a second-order Taylor series about the point xk yields
which implies
Using equation (20) in equation (19), one can get the following iterative method free from second derivatives.
Algorithm 3.5. Starting with an initial approximation x0, the iterative sequence is generated by the following iterative scheme
for all k ≥ 0.
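The second-derivative approximant behind this construction can be checked numerically. Since the exact equation (20) is not reproduced in this extract, the sketch below uses the standard form obtained from the truncated expansion at the Newton point, f″(xk) ≈ 2 f(yk) f′(xk)² / f(xk)², stated here as an assumption:

```python
def second_derivative_approx(f, df, x):
    """Approximate f''(x) from f(x), f'(x) and ONE extra function value at the
    Newton point y = x - f(x)/f'(x).  In the truncated expansion
    f(y) ~ f(x) + f'(x)(y - x) + f''(x)(y - x)^2 / 2, the first two terms
    cancel by the choice of y, leaving f''(x) ~ 2 f(y) f'(x)**2 / f(x)**2."""
    y = x - f(x) / df(x)
    return 2.0 * f(y) * df(x) ** 2 / f(x) ** 2
```

The approximation improves as x approaches the root, which is exactly the regime in which the iteration operates.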
Algorithms 3.4 and 3.5 are two-step iterative methods. In particular, Algorithm 3.5 does not require computing high-order derivatives. More importantly, per iteration it requires two evaluations of the function and one of its first derivative. The efficiency of this method is better than that of well-known methods involving the second-order derivative of the function.
Convergence analysis
In this section we analyze the convergence of Algorithm 3.5.
Theorem 4.1 Let p be a solution of the equation f(x) = 0. Suppose, in addition, that f(x) is a sufficiently differentiable function in a neighborhood of p. If the initial approximation x0 is sufficiently close to p, then the order of convergence of the two-step iterative scheme defined by Algorithm 3.5 is four.
Proof. By assumption, p is a solution of the equation f(x) = 0, so f(p) = 0. Let ek = xk − p denote the error at the k-th iteration. Expanding f(xk) in a Taylor series about p, one has
where the coefficients are the usual Taylor coefficients of f at p. From equations (24) and (25), one has
which shows that the order of convergence of Algorithm 3.5 is four, according to Definitions 2.3 and 2.4. This proves Theorem 4.1. □
Numerical examples
In order to verify the performance of the fourth-order method defined by Algorithm 3.5 (abbreviated as Alg.3.5), we present numerical results for some test equations. We also compare the results with Newton’s method (NM for short), Halley’s method (HM for short) and the fourth-order modified Householder method (MHM for short) suggested by Noor and Gupta.11 Starting with a given initial approximation x0, all numerical computations are carried out using Mathematica. For a given tolerance, we choose the following stopping criteria, so that the iterative computations are terminated when both criteria are satisfied simultaneously:
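The dual stopping test can be sketched as follows (the tolerance value actually used in the paper is not shown in this extract, so eps is a placeholder):

```python
def solve(f, step, x0, eps=1e-12, max_iter=100):
    """Iterate x_{k+1} = step(x_k) and stop only when BOTH criteria hold
    simultaneously: |x_{k+1} - x_k| < eps and |f(x_{k+1})| < eps."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, k   # approximate root and iteration count
        x = x_new
    return x, max_iter
```

Any of the iteration formulas above can be passed as step; for instance, step(x) = x − f(x)/f′(x) reproduces the Newton column of the tables below.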
The test equations are presented below; most of them can be found in the literature17–19 or in some of the references mentioned previously.
Displayed in Table 1 are the different test equations, the initial approximation x0, the number of iterations k needed to approximate the solution, and the approximate solution xk. From Table 1, it is clear that Algorithm 3.5 compares favorably with NM, HM and MHM.
Table 1. Numerical experiments and comparison for different iterative methods.
Method    x0    k    xk
NM        3.6    8   0.25753028543986076045536730493724178138
HM        3.6    6   0.25753028543986076045536730493724178138
MHM       3.6    5   0.25753028543986076045536730493724178138
Alg.3.5   3.6    4   0.25753028543986076045536730493724178138

NM        3.5    7   2.03526848118195915354755041547361249916
HM        3.5    6   2.03526848118195915354755041547361249916
MHM       3.5    5   2.03526848118195915354755041547361249916
Alg.3.5   3.5    4   2.03526848118195915354755041547361249916

NM        1      5   0.73908513321516064165531208767387340401
HM        1      4   0.73908513321516064165531208767387340401
MHM       1      4   0.73908513321516064165531208767387340401
Alg.3.5   1      3   0.73908513321516064165531208767387340401

NM        –1     7   2.15443469003188372175929356651935049526
HM        –1     7   2.15443469003188372175929356651935049526
MHM       –1     5   2.15443469003188372175929356651935049526
Alg.3.5   –1     5   2.15443469003188372175929356651935049526

NM        2      8   0.63915409633200758106478062050024025359
HM        2      5   0.63915409633200758106478062050024025359
MHM       2      5   0.63915409633200758106478062050024025359
Alg.3.5   2      4   0.63915409633200758106478062050024025359

NM        1      7   0.40999201798913713162125837649907538612
HM        1      5   0.40999201798913713162125837649907538612
MHM       1      5   0.40999201798913713162125837649907538612
Alg.3.5   1      4   0.40999201798913713162125837649907538612

NM        3.5   13   3.00000000000000000000000000000000000000
HM        3.5    7   3.00000000000000000000000000000000000000
MHM       3.5    6   3.00000000000000000000000000000000000000
Alg.3.5   3.5    6   3.00000000000000000000000000000000000000

NM        –1     7   –1.40449164821534122603508681778686807718
HM        –1     5   –1.40449164821534122603508681778686807718
MHM       –1     5   –1.40449164821534122603508681778686807718
Alg.3.5   –1     4   –1.40449164821534122603508681778686807718
Clearly, 3 is an exact solution of the corresponding test equation. Let {xn} denote the sequence generated by each iterative method. Shown in Table 2 are the different iterative methods, the initial approximation x0, the iteration count n and the approximate solution xn at each iteration. Table 2 illustrates the relative speed of convergence of the sequences to 3 when the initial approximation is x0 = 3.5. It is easy to see that the fourth-order convergent sequence generated by Alg.3.5 is within the prescribed accuracy of 3 by the sixth term, whereas at least 13 terms are needed to ensure this accuracy for the quadratically convergent sequence generated by the Newton iteration. Just as Theorem 4.1 indicates, fourth-order convergent sequences generally converge much more quickly than sequences of lower order.
Table 2. The relative speed of convergence of the sequences generated by different iterative methods.
Method    x0     n   xn
NM        3.5    1   3.42865506283005651245721073000655709096
                 2   3.35671923435846878717900312491876046905
                 3   3.28441981122350293301087227610438508013
                 4   3.21240630845112338607950153061433380389
                 5   3.14241815947802526994541960762590326219
                 6   3.07872591444877280795742106579227547418
                 7   3.02986671328086231576490110603292132687
                 8   3.00518216043983687455684490043015144276
                 9   3.00017276403899187393669369452989768051
                10   3.00000019615891595773288643949423192716
                11   3.00000000000025306874011878291631803157
                12   3.00000000000000000000000042121106213522
                13   3.00000000000000000000000000000000000000
HM        3.5    1   3.35601116577588568424558312154122759624
                 2   3.21112901168050819815797000684949896354
                 3   3.07807397692207490178726191469322843597
                 4   3.00617947727528652556626267615414160179
                 5   3.00000332721627045019400114474122902304
                 6   3.00000000000000051895693141726273814255
                 7   3.00000000000000000000000000000000000000
MHM       3.5    1   3.22147926950672397692904786913974804090
                 2   2.97890333863655565021657260124804199973
                 3   3.00041785542541819593006067606805136507
                 4   2.99999999684674501872507771319677283455
                 5   3.00000000000000000000000135619706129043
                 6   3.00000000000000000000000000000000000000
Alg.3.5   3.5    1   3.32770201961387298232542827485346691464
                 2   3.15616031883867475772422389675496118063
                 3   3.02435629509809135524330861400534569356
                 4   3.00002940997206443891702473123340661668
                 5   3.00000000000000006931645429656286350195
                 6   3.00000000000000000000000000000000000000
Conclusion
In this paper, we have obtained several iterative methods, including classical Newton’s method and Halley’s method, based on the approximation formula of Thiele’s continued fraction. In order to avoid calculating the higher derivatives of the function, we have improved the proposed iterative method by using approximants of the second and third derivatives, obtaining a modified iterative method free from the higher derivatives of the function. We have proved that the order of convergence of the method suggested by Algorithm 3.5 is four. The numerical experiments and comparisons in Tables 1 and 2 show that the iterative scheme in Algorithm 3.5 is more efficient and performs better than classical Newton’s method, Halley’s method and the modified Householder method introduced by Noor and Gupta.11
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported partially by the Natural Science Key Foundation of Education Department of Anhui Province (grant no. KJ2013A183), the Project of Leading Talent Introduction and Cultivation in Colleges and Universities of Education Department of Anhui Province (grant no. gxfxZD2016270) and the Incubation Project of the National Scientific Research Foundation of Bengbu University (grant no. 2018GJPY04).
References
1. Amat S, Busquier S, Gutierrez JM. Geometric construction of iterative functions to solve nonlinear equations. J Comput Appl Math 2003; 157: 197–205.
2. Frontini M, Sormani E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl Math Comput 2004; 149: 771–782.
3. Chun C. A geometric construction of iterative functions of order three to solve nonlinear equations. Comput Math Appl 2007; 53: 972–976.
4. Abbasbandy S. Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method. Appl Math Comput 2003; 145: 887–893.
5. Chun C. Iterative methods improving Newton’s method by the decomposition method. Comput Math Appl 2005; 50: 1559–1568.
6. Kou J, Li Y, Wang X. A composite fourth-order iterative method for solving non-linear equations. Appl Math Comput 2007; 184: 471–475.
7. Wang X, Kou J, Gu C. A new modified secant-like method for solving nonlinear equations. Comput Math Appl 2010; 60: 1633–1638.
8. Noor MA, Shah FA. A family of iterative schemes for finding zeros of nonlinear equations having unknown multiplicity. Appl Math Inf Sci 2014; 8: 2367–2373.
9. Shah FA, Noor MA. Some numerical methods for solving nonlinear equations by using decomposition technique. Appl Math Comput 2015; 251: 378–386.
10. Kwun YC, Saqib M, Fahad M, et al. A new cubically convergent iterative method for solving nonlinear equations. Int J Pure Appl Math 2016; 111: 67–76.
11. Noor MA, Gupta V. Modified Householder iterative method free from second derivatives for nonlinear equations. Appl Math Comput 2007; 190: 1701–1706.
12. Cordero A, Torregrosa JR. Variants of Newton’s method using fifth-order quadrature formulas. Appl Math Comput 2007; 190: 686–698.
13. Golbabai A, Javidi M. A new family of iterative methods for solving system of nonlinear algebraic equations. Appl Math Comput 2007; 190: 1717–1722.
14. Noor MA. Iterative methods for nonlinear equations using homotopy perturbation technique. Appl Math Inform Sci 2010; 4: 227–235.
15. Morlando F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen Math Notes 2015; 29: 17–26.
16. Thukral R. New modification of Newton method with third-order convergence for solving nonlinear equations of type. Am J Comput Appl Math 2016; 6: 14–18.
17. Saqib M, Iqbal M. Some multi-step iterative methods for solving nonlinear equations. Open J Math Sci 2017; 1: 25–33.
18. Qureshi UK. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math Theory Model 2018; 8: 64–68.
19. Ali F, Aslam W, Ali K, et al. New family of iterative methods for solving nonlinear models. Discrete Dyn Nat Soc 2018; 2018: 1–12.
20. Cichocki A, Unbehauen B. Neural networks for optimization and signal processing. New York: John Wiley & Sons, 1993.
21. Sciavicco L, Siciliano B. Modelling and control of robot manipulators. London: Springer-Verlag, 2000.
22. Miljković S, Miladinović M, Stanimirović P, et al. Application of the pseudoinverse computation in reconstruction of blurred images. Filomat 2012; 26: 453–465.
23. Roland P, Beim PG. Inverse problems in neural field theory. SIAM J Appl Dynam Sys 2009; 8: 1405–1433.
24. Traub JF. Iterative methods for the solution of equations. New York: Prentice-Hall, 1964.