Sidi's generalized secant method
Sidi's generalized secant method is a root-finding algorithm, that is, a numerical method for solving equations of the form f(x) = 0. The method was published by Avram Sidi.[1]
The method is a generalization of the secant method. Like the secant method, it is an iterative method which requires one evaluation of f in each iteration and no derivatives of f. The method can converge much faster, however, with an order of convergence that approaches 2 provided that f satisfies the regularity conditions described below.
Algorithm
We call α the root of f, that is, f(α) = 0. Sidi's method is an iterative method which generates a sequence {x_i} of approximations of α. Starting with k + 1 initial approximations x_1, …, x_{k+1}, the approximation x_{k+2} is calculated in the first iteration, the approximation x_{k+3} in the second iteration, and so on. Each iteration takes as input the last k + 1 approximations and the value of f at those approximations. Hence the nth iteration takes as input the approximations x_n, …, x_{n+k} and the values f(x_n), …, f(x_{n+k}).
The number k must be 1 or larger: k = 1, 2, 3, …. It remains fixed during the execution of the algorithm. In order to obtain the starting approximations x_1, …, x_{k+1} one could carry out a few initializing iterations with a lower value of k.
The approximation x_{n+k+1} is calculated as follows in the nth iteration. An interpolating polynomial p_{n,k}(x) of degree k is fitted to the k + 1 points (x_n, f(x_n)), …, (x_{n+k}, f(x_{n+k})). With this polynomial, the next approximation x_{n+k+1} of α is calculated as

x_{n+k+1} = x_{n+k} - \frac{f(x_{n+k})}{p_{n,k}'(x_{n+k})} \qquad (1)
with p_{n,k}'(x_{n+k}) the derivative of p_{n,k} at x_{n+k}. Having calculated x_{n+k+1}, one calculates f(x_{n+k+1}) and the algorithm can continue with the (n + 1)th iteration. Clearly, this method requires the function f to be evaluated only once per iteration; it requires no derivatives of f.
The iterative cycle is stopped if an appropriate stopping criterion is met. Typically the criterion is that the last calculated approximation is close enough to the sought-after root α.
To execute the algorithm effectively, Sidi's method calculates the interpolating polynomial p_{n,k}(x) in its Newton form.
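The following is a minimal Python sketch of the procedure described above, not an implementation from the paper; the function name sidi_root and its arguments are chosen here for illustration. It keeps the last k + 1 points, computes the divided-difference coefficients of the Newton form of p_{n,k}, evaluates p_{n,k}' at the newest point, and applies the step (1).

```python
def sidi_root(f, xs, tol=1e-12, max_iter=50):
    """Illustrative sketch of Sidi's generalized secant method.

    f  : function whose simple root is sought
    xs : list of k + 1 initial approximations (k = len(xs) - 1)
    """
    xs = list(xs)                 # the last k + 1 approximations
    fs = [f(x) for x in xs]       # f evaluated at those approximations
    k = len(xs) - 1
    for _ in range(max_iter):
        # Order the nodes with the newest approximation first; the derivative
        # of the Newton-form interpolant is then easy to evaluate there.
        nodes, c = xs[::-1], fs[::-1]
        # Divided-difference coefficients of the Newton form of p_{n,k}.
        for j in range(1, k + 1):
            for i in range(k, j - 1, -1):
                c[i] = (c[i] - c[i - 1]) / (nodes[i] - nodes[i - j])
        # p(x) = c[0] + c[1](x - nodes[0]) + c[2](x - nodes[0])(x - nodes[1]) + ...
        # hence p'(nodes[0]) = c[1] + c[2](nodes[0] - nodes[1]) + ...
        dp, prod = 0.0, 1.0
        for j in range(1, k + 1):
            dp += c[j] * prod
            prod *= nodes[0] - nodes[j]
        x_new = nodes[0] - fs[-1] / dp          # the step in equation (1)
        xs, fs = xs[1:] + [x_new], fs[1:] + [f(x_new)]
        if abs(x_new - nodes[0]) < tol:
            return x_new
    return xs[-1]
```

For example, sidi_root(lambda x: x**2 - 2, [1.0, 1.5, 2.0]) (so k = 2) should converge to √2 ≈ 1.41421356 within a few iterations.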
Convergence
Sidi showed that if the function f is (k + 1)-times continuously differentiable in an open interval I containing α (that is, f ∈ C^{k+1}(I)), α is a simple root of f (that is, f'(α) ≠ 0) and the initial approximations x_1, …, x_{k+1} are chosen close enough to α, then the sequence {x_i} converges to α, meaning that the following limit holds: lim_{n→∞} x_n = α.
Sidi furthermore showed that
\lim_{n\to\infty} \frac{x_{n+1}-\alpha}{\prod_{i=0}^{k}(x_{n-i}-\alpha)} = L = \frac{(-1)^{k+1}}{(k+1)!}\,\frac{f^{(k+1)}(\alpha)}{f'(\alpha)},
and that the sequence converges to α with order ψ_k, i.e.
\lim_{n\to\infty} \frac{|x_{n+1}-\alpha|}{|x_{n}-\alpha|^{\psi_{k}}} = |L|^{(\psi_{k}-1)/k}.
The order of convergence ψ_k is the only positive root of the polynomial
s^{k+1} - s^{k} - s^{k-1} - \dots - s - 1.
We have, for example, ψ_1 = (1 + √5)/2 ≈ 1.6180, ψ_2 ≈ 1.8393 and ψ_3 ≈ 1.9276. The order approaches 2 from below as k becomes large: lim_{k→∞} ψ_k = 2.[2][3]
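These values can be checked numerically. The helper below is a small illustrative sketch (the name psi is hypothetical, not from the sources): it finds the positive root of the polynomial above by bisection on (1, 2), using the fact that the polynomial equals -k at s = 1 and 1 at s = 2 for k ≥ 1.

```python
def psi(k, tol=1e-12):
    """Order of convergence psi_k: the positive root of
    s^(k+1) - s^k - ... - s - 1, found by bisection on (1, 2)."""
    p = lambda s: s ** (k + 1) - sum(s ** i for i in range(k + 1))
    lo, hi = 1.0, 2.0          # p(1) = -k < 0 and p(2) = 1 > 0 for k >= 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(psi(1), psi(2), psi(3))  # ~1.6180, ~1.8393, ~1.9276
```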
Related algorithms
Sidi's method reduces to the secant method if we take k = 1. In this case the polynomial p_{n,1}(x) is the linear approximation of f around α which is used in the nth iteration of the secant method.
We can expect that the larger we choose k, the better p_{n,k}(x) approximates f(x) around x = α, and the better p_{n,k}'(x) approximates f'(x) around x = α. If we replace p_{n,k}' with f' in (1), we obtain that the next approximation in each iteration is calculated as

x_{n+1} = x_{n} - \frac{f(x_{n})}{f'(x_{n})} \qquad (2)
This is the Newton–Raphson method. It starts off with a single approximation x_1, so we can take k = 0 in (2). It does not require an interpolating polynomial, but instead one has to evaluate the derivative f' in each iteration. Depending on the nature of f this may not be possible or practical.
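For comparison, a minimal sketch of the Newton–Raphson iteration (2) might look as follows; the function name newton_raphson and the argument df for the derivative f' are illustrative assumptions.

```python
def newton_raphson(f, df, x, tol=1e-12, max_iter=50):
    """Minimal Newton-Raphson sketch following (2); df is the derivative f'."""
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)   # the step in equation (2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```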
Once the interpolating polynomial p_{n,k}(x) has been calculated, one can also calculate the next approximation x_{n+k+1} as a solution of p_{n,k}(x) = 0 instead of using (1). For k = 1 these two methods are identical: it is the secant method. For k = 2 this method is known as Muller's method.[3] For k = 3 this approach involves finding the roots of a cubic function, which is unattractively complicated. This problem becomes worse for even larger values of k. An additional complication is that the equation p_{n,k}(x) = 0 will in general have multiple solutions, and a prescription has to be given as to which of these solutions is the next approximation x_{n+k+1}. Muller does this for the case k = 2, but no such prescriptions appear to exist for k > 2.
References
1. Sidi, Avram, "Generalization of the Secant Method for Nonlinear Equations", Applied Mathematics E-Notes 8 (2008), 115–123, http://www.math.nthu.edu.tw/~amen/2008/070227-1.pdf
2. Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, N.J. (1964)
3. Muller, David E., "A Method for Solving Algebraic Equations Using an Automatic Computer", Mathematical Tables and Other Aids to Computation 10 (1956), 208–215