This method finds a root of a one-variable function. By root, we mean a value x such that f(x) = 0.
The Newton-Raphson method works from a starting value given as input. It iteratively computes the next point with the following formula: x_{n+1} = x_n – f(x_n) / f'(x_n), where f'(x) is the derivative of f(x). The formula rests on the following principle: the intersection with the X axis of the line tangent to the curve at x_n gives x_{n+1}, which is closer to the root than x_n. Repeating this step iteratively brings us close to the real root.
In order to work, this method requires the function to solve, a starting point, and a stopping criterion: a fixed number of iterations, a maximum absolute error, or a maximum iterative (step-to-step) error. The most common is the second one.
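The iteration and the stopping criteria described above can be sketched as follows. This is a minimal illustration, not the original program; the names `newton_raphson`, `tol`, and `max_iter` are assumptions chosen for clarity:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Return (root, iterations used); tol is the maximum absolute error |f(x)|."""
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once |f(x)| is within the tolerance
            return x, i
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("zero derivative at x = {}".format(x))
        x = x - fx / dfx           # step to the tangent line's X-axis intercept
    raise RuntimeError("no convergence in {} iterations".format(max_iter))
```

The cap on iterations guards against the divergent cases discussed later, where the absolute-error criterion alone would never be met.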
All of these tests were run with a maximum absolute error of 1e-12:
- f(x) = x³ – 3x² – 4x + 2 (x0 = -2, x = -1.2924, 7 iterations, 4 ms)
- f(x) = sin(x) + x – 2 (x0 = 2, x = 1.10606, 7 iterations, 7 ms)
- f(x) = e^x + x² – 2 (x0 = 2, x = 0.537274, 8 iterations, 6 ms)
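The second test case above can be checked with a few lines of Python. This is a sketch, not the program that produced the timings; the helper `newton` is an assumed minimal implementation:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    # Minimal Newton-Raphson loop with an absolute-error stopping criterion.
    x = x0
    for i in range(max_iter):
        if abs(f(x)) < tol:
            return x, i
        x -= f(x) / df(x)
    return x, max_iter

# f(x) = sin(x) + x - 2, starting at x0 = 2
root, its = newton(lambda x: math.sin(x) + x - 2,
                   lambda x: math.cos(x) + 1,
                   2.0)
print(round(root, 5))  # close to the reported x = 1.10606
```

The exact iteration count can differ slightly from the reported 7 depending on how the stopping test is counted.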
This method can only find one root per run, meaning that one must know how many roots the function has and roughly where they are located.
It has trouble in the following cases:
- The function has a very small or very large slope at some point
Some examples of functions that fail are:
- f(x) = x¹⁰ – 1 (in this function, the slope close to the root grows very fast, so each iteration advances less. In this case, the program found the root (x = 1) in 45 iterations, but with a smaller error tolerance it would take many more)
- f(x) = log(abs(x)) (if we pass x0 = 10, the program returns a larger value, with alternating sign, on each iteration, since the slope gets smaller at every step)
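The second failure can be seen by writing out the Newton step. For f(x) = log|x| we have f'(x) = 1/x, so the update is x – x·log|x| = x·(1 – log|x|), whose magnitude grows once |x| > e. A small illustrative sketch:

```python
import math

# Newton steps on f(x) = log|x| starting at x0 = 10.
# Update: x_{n+1} = x_n * (1 - log|x_n|)
x = 10.0
iterates = [x]
for _ in range(5):
    x = x * (1 - math.log(abs(x)))
    iterates.append(x)
print(iterates)  # magnitudes grow and signs alternate on every step
```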
Newton-Raphson outperforms the bisection method, since it converges much faster. For this reason, it is the method used in most advanced calculators.
On the other hand, because it starts from a single point rather than a bracketing interval, the method can diverge (which cannot happen with bisection).
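The speed difference is easy to demonstrate. Below is a hedged sketch (both helpers are assumed minimal implementations, not the original program) counting iterations for each method on f(x) = x² – 2 with the same 1e-12 tolerance:

```python
def bisection(f, a, b, tol=1e-12):
    # Halve the bracketing interval [a, b] until it is shorter than tol.
    fa, n = f(a), 0
    while b - a > tol:
        m = (a + b) / 2
        n += 1
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2, n

def newton(f, df, x0, tol=1e-12):
    # Newton-Raphson with an absolute-error stopping criterion.
    x, n = x0, 0
    while abs(f(x)) > tol:
        x -= f(x) / df(x)
        n += 1
    return x, n

f  = lambda x: x * x - 2          # root at sqrt(2)
df = lambda x: 2 * x
rb, nb = bisection(f, 0.0, 2.0)
rn, nn = newton(f, df, 2.0)
print(nb, nn)  # bisection takes dozens of halvings; Newton only a handful of steps
```

Each bisection step gains one binary digit of accuracy, while Newton roughly doubles the number of correct digits per step near the root.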