### Introduction

This method finds a root of a single-variable function, that is, a value x such that f(x) = 0.

### Method

The secant method works similarly to the Newton-Raphson method. It starts from two initial values given as input and iteratively computes the next point with the following formula:

x2 = (x0 · f(x1) − x1 · f(x0)) / (f(x1) − f(x0))

This formula is based on the following principle: the straight line that is secant to the curve at x0 and x1 (that is, it cuts through the curve at those two points) intersects the x axis at x2, which is generally closer to the root than the previous points. Repeating this iteratively yields a value close to the real root.

This method is an alternative to the Newton-Raphson method when the derivative of f(x) is really hard to compute: the secant method relies only on the function itself, not on its derivative.

In order to work, this method requires the function to solve, two starting points, and a stopping criterion: either a number of iterations, a maximum absolute error, or a maximum iterative error (the second one is the most common).

### Examples

All of these tests were run with a maximum absolute error of 1e-12:

- f(x) = x³ – 3x² – 4x + 2 (x0 = -2, x1 = 0, x = -1.2924, 16 iterations, 6 ms)
- f(x) = sin(x) + x – 2 (x0 = 0, x1 = 2, x = 1.10606, 9 iterations, 6 ms)
- f(x) = e^x + x² – 2 (x0 = 0, x1 = 2, x = 0.537274, 10 iterations, 5 ms)
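As a sketch of how one of these runs can be reproduced, here is a self-contained Python snippet for the second example, f(x) = sin(x) + x − 2. The variable names are mine, and the exact iteration count may differ slightly from the list above depending on how the stopping test is implemented:

```python
import math

# Secant iteration for f(x) = sin(x) + x - 2, starting from x0 = 0, x1 = 2,
# stopping on a maximum absolute error of 1e-12 as in the examples above.
f = lambda x: math.sin(x) + x - 2
x0, x1 = 0.0, 2.0
iterations = 0
while abs(f(x1)) > 1e-12:
    # x2 = (x0*f(x1) - x1*f(x0)) / (f(x1) - f(x0)), then shift the pair
    x0, x1 = x1, (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
    iterations += 1
print(x1, iterations)  # root near 1.10606
```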

### Singularities

This method can only find one root per run, which means one must know how many roots the function has and roughly where they are located.

It has trouble in the following cases:

- The starting points x0 and x1 satisfy f(x0) = f(x1), since this makes the denominator of the formula zero and the method fail.
- The function has a small slope at the evaluation points.

Some examples of functions that fail are:

- f(x) = x¹⁰ − 1 (x0 = 0.5, x1 = 1.5). In this function the slope grows steeply near the root, so the secant line advances less on each iteration. Here the program did not find the root because it stopped on the iterative-error criterion: it returned x = 0.517325 after 5 iterations. If the program were given an absolute-error criterion instead, it could finish.
- f(x) = log(|x|) (x0 = 5, x1 = 6). In this function the program returns values of growing magnitude (with alternating signs) each time, since the slope gets smaller as the iterates move away from the root. This can be solved by choosing starting values closer to the root, where the slope between the evaluation points is larger.
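The second failure can be seen by running a few secant steps without any stopping test. This is a minimal sketch (the helper name `secant_steps` is mine); the iterates for f(x) = log(|x|) starting from x0 = 5, x1 = 6 quickly grow in magnitude and change sign instead of approaching the roots at x = ±1:

```python
import math

def secant_steps(f, x0, x1, n):
    """Return the first n secant iterates for f, with no stopping test."""
    xs = []
    for _ in range(n):
        # Same formula as in the Method section
        x2 = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        xs.append(x2)
        x0, x1 = x1, x2
    return xs

# f(x) = log(|x|) with x0 = 5, x1 = 6: the iterates drift away from the
# roots, growing in magnitude and switching sign, instead of converging.
iterates = secant_steps(lambda x: math.log(abs(x)), 5.0, 6.0, 4)
print(iterates)
```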

### Flowchart

### Code
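The original code listing is not reproduced here; the following is a minimal Python sketch of the method as described above. The function name, signature, and the use of the maximum-absolute-error stopping test are my assumptions, and which root is found (and in how many iterations) can depend on such implementation details:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f by the secant method, starting from x0 and x1.

    Stops when |f(x)| < tol (maximum absolute error) and returns the
    approximate root together with the number of iterations used.
    """
    for i in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            # Horizontal secant line: the formula's denominator is zero
            raise ZeroDivisionError("f(x0) == f(x1); cannot continue")
        # Intersection of the secant line through (x0, f0), (x1, f1)
        # with the x axis:
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(f(x2)) < tol:
            return x2, i + 1
        x0, x1 = x1, x2
    raise RuntimeError("did not converge within max_iter iterations")
```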

### Conclusions

Although Newton-Raphson converges faster than the secant method, the secant method has the strength of not relying on the derivative. On the other hand, Newton-Raphson requires only one starting value, while the secant method requires two.