Introduction

As seen in the previous articles on regression, sometimes we want to find a function given a set of points. But sometimes we don’t really care about the function itself, we only care about one value; or sometimes we don’t have enough values to make a regression. For those cases, we have interpolation. Interpolation works on n points (x_i, y_i) and a value x for which we wish to obtain y. Given n points, we end up with a polynomial of degree n – 1, and evaluating it at x gives the corresponding y.

This post covers four interpolation methods: linear and quadratic, to understand the basics, and Newton and Lagrange, which are the general versions.

Linear interpolation

Given two points, find the y value of a third one. Starting from the line equation, y = mx + b, we can obtain the following formula:

y = y_0 + (y_1 - y_0) / (x_1 - x_0) · (x - x_0)

To calculate the value, we simply evaluate.
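
A minimal sketch of this in Python (the function name is mine, not from the post):

    def linear_interp(p0, p1, x):
        """Evaluate, at x, the line through the points p0 and p1."""
        (x0, y0), (x1, y1) = p0, p1
        return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

    # First example from the post: two samples of ln(x).
    print(linear_interp((1, 0), (6, 1.791759), 2))  # ~0.358352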

Quadratic interpolation

Given three points, find the y value of a fourth one. Starting from the parabola equation, y = ax^2 + bx + c, and rewriting it in nested form, we can obtain the following formulas, where b_1 and b_2 are discrete differentials:

b_0 = y_0
b_1 = (y_1 - y_0) / (x_1 - x_0)
b_2 = ((y_2 - y_1) / (x_2 - x_1) - b_1) / (x_2 - x_0)

y = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)

As with linear interpolation, we just evaluate.
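
A sketch of the same idea for three points, assuming a hypothetical quadratic_interp helper:

    def quadratic_interp(p0, p1, p2, x):
        """Evaluate, at x, the parabola through the points p0, p1 and p2."""
        (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
        b0 = y0
        b1 = (y1 - y0) / (x1 - x0)
        b2 = ((y2 - y1) / (x2 - x1) - b1) / (x2 - x0)
        return b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)

    # Second example from the post.
    print(quadratic_interp((1, 0), (4, 1.386294), (6, 1.791759), 2))  # ~0.565844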

Newton interpolation

Newton interpolation is the general version of quadratic interpolation. As seen in the quadratic version, we evaluate the discrete differential iteratively (there, only twice). Newton interpolation works the same way for any n. We calculate the discrete differential in the following recursive way:

d[x_i] = y_i
d[x_i, ..., x_{i+k}] = (d[x_{i+1}, ..., x_{i+k}] - d[x_i, ..., x_{i+k-1}]) / (x_{i+k} - x_i)

We evaluate this for each coefficient of the degree n – 1 polynomial: d[x_0] is the constant term, d[x_0, x_1] the degree-1 coefficient, and so on. After calculating all the d’s, we evaluate the formula as follows:

P(x) = d[x_0] + d[x_0, x_1](x - x_0) + d[x_0, x_1, x_2](x - x_0)(x - x_1) + ... + d[x_0, ..., x_{n-1}](x - x_0)(x - x_1)...(x - x_{n-2})

Since it uses a recursive formula, a naive computation grows really fast: the same differentials get recomputed over and over. To prevent this, we apply memoization: as we compute each d, we store it and reuse the result.

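A sketch of the method, with the memoization done bottom-up as a divided-difference table (the function name and table layout are mine):

    def newton_interp(points, x):
        """Interpolate the points (x_i, y_i) with the Newton polynomial, at x."""
        xs = [px for px, _ in points]
        n = len(points)
        # Memoization: d[k][i] stores d[x_i, ..., x_{i+k}]; row 0 is the y's.
        d = [[py for _, py in points]]
        for k in range(1, n):
            d.append([(d[k - 1][i + 1] - d[k - 1][i]) / (xs[i + k] - xs[i])
                      for i in range(n - k)])
        # Horner-style evaluation of
        # d[x_0] + d[x_0, x_1](x - x_0) + d[x_0, x_1, x_2](x - x_0)(x - x_1) + ...
        result = d[n - 1][0]
        for k in range(n - 2, -1, -1):
            result = result * (x - xs[k]) + d[k][0]
        return result

Filling the table visits each pair (i, k) once, so the memoized version takes O(n²) time and O(n²) space instead of the exponential blow-up of the naive recursion.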

Lagrange interpolation

The main weakness of Newton interpolation is that it first obtains the coefficients and only then evaluates; if what you actually want is the function for some set of points, it’s better to use polynomial regression. Lagrange interpolation is calculated in a very different way, but it leads to the same result. We use two formulas:

P(x) = Σ_{i=0..n-1} y_i · L_i(x)

L_i(x) = Π_{j=0..n-1, j≠i} (x - x_j) / (x_i - x_j)

This is simpler than Newton: it runs in O(n²) time and uses only O(1) extra space, since no table of coefficients is stored.

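A sketch of the two formulas combined into one pass (again, the name is mine):

    def lagrange_interp(points, x):
        """Interpolate the points (x_i, y_i) with Lagrange's formulas, at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            # L_i(x) is 1 at x_i and 0 at every other x_j.
            basis = 1.0
            for j, (xj, _) in enumerate(points):
                if j != i:
                    basis *= (x - xj) / (xi - xj)
            total += yi * basis
        return total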

Examples

  • {(1, 0), (6, 1.791759)}, x = 2 (y = 0.358352, 4 ms)
  • {(1, 0), (4, 1.386294), (6, 1.791759)}, x = 2 (y = 0.565844, 6 ms)
  • {(1, 0), (4, 1.386294), (6, 1.791759), (2, 3)}, x = 0 (y = -7.972058, 6 ms)

Code
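
As a stand-in for the original program, the sketches above reproduce the example results (minus the timings):

    pts = [(1, 0), (4, 1.386294), (6, 1.791759), (2, 3)]

    print(linear_interp(pts[0], pts[2], 2))  # ~0.358352
    print(newton_interp(pts[:3], 2))         # ~0.565844
    print(newton_interp(pts, 0))             # ~-7.972058
    print(lagrange_interp(pts, 0))           # same value: same polynomial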

Conclusions

Interpolation is a good alternative when computing a regression is too hard and you only need exact values through a few points. But interpolating continuously, over and over, isn’t that good: in that case, it’s better to make a regression once and save the resulting function.
