Richardson extrapolation is a sequence acceleration technique. It can be applied any time we have a solid theoretical estimate of the error.
Recall the centered difference quotient: $$\phi(h) = \frac{f(x+h) - f(x-h)}{2h}.$$
Expanding the Taylor series for $f(x+h)$ and $f(x-h)$ to more terms, we see that $$\phi(h) = f'(x) + a_2 h^2 + a_4 h^4 + a_6 h^6 + a_8 h^8 + \cdots.$$ Replacing $h$ with $h/2$, we find $$\phi(h/2) = f'(x) + a_2 \frac{h^2}{4} + a_4 \frac{h^4}{16} + a_6 \frac{h^6}{64} + a_8 \frac{h^8}{256} + \cdots.$$
Now, we can combine this with $\phi(h)$ to get better accuracy — $O(h^4)$, in fact: $$\phi(h) - 4\,\phi(h/2) = -3 f'(x) + \frac{3}{4} a_4 h^4 + \frac{15}{16} a_6 h^6 + \cdots,$$ so that $$\frac{4\,\phi(h/2) - \phi(h)}{3} = f'(x) - \frac{1}{4} a_4 h^4 - \frac{5}{16} a_6 h^6 - \cdots.$$
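As a quick sanity check of that $O(h^4)$ claim, here's a minimal sketch using $\sin$ at $x_0=1$ as a stand-in (the function, point, and step size here are arbitrary choices, not part of the example below):

import numpy as np
x0, h = 1.0, 0.1
def phi_sin(h): return (np.sin(x0+h) - np.sin(x0-h))/(2*h)   # centered difference for sin
err2 = abs(phi_sin(h) - np.cos(x0))                          # O(h^2) error
err4 = abs((4*phi_sin(h/2) - phi_sin(h))/3 - np.cos(x0))     # O(h^4) error
err2, err4

The second error should be several orders of magnitude smaller than the first.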
In fact, this process can be iterated. We let $$D(n,0) = \phi\left(\frac{h}{2^n}\right)$$ and $$D(n,m) = \frac{4^m D(n,m-1) - D(n-1,m-1)}{4^m - 1}.$$ This generates a triangular array whose first column is a sequence of second-order approximations with step sizes $h/2^n$; each subsequent column is an improvement generated via Richardson extrapolation, eliminating the next term in the error expansion and gaining two orders of accuracy. The lower right entry is then the awesome entry!
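Schematically, the $D(n,m)$ fill out a lower-triangular table, with each entry computed from its left neighbor and the entry diagonally up and to the left: $$\begin{array}{cccc} D(0,0) & & & \\ D(1,0) & D(1,1) & & \\ D(2,0) & D(2,1) & D(2,2) & \\ \vdots & \vdots & \vdots & \ddots \end{array}$$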
Perhaps this is best illustrated with an example. Let $f(x)=e^{-x^2}$; let's estimate $f'(1)$. Of course, it's not hard to compute it very precisely, since we know $f'(x)=-2x\,e^{-x^2}$.
import numpy as np
def f(x): return np.exp(-x**2)        # the test function
def fp(x): return -2*x*np.exp(-x**2)  # its exact derivative, for comparison
fp(1)
Here are a few estimates using the centered difference quotient.
def phi(x,h): return (f(x+h)-f(x-h))/(2*h)           # centered difference quotient
d = [phi(1,h) for h in [2**(-n) for n in range(5)]]  # h = 1, 1/2, 1/4, 1/8, 1/16
d
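For reference, the corresponding errors are:

[abs(di - fp(1)) for di in d]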
Not so good. Let's try to improve it using Richardson extrapolation.
N = len(d)
D = np.zeros((N,N))
D[:,0] = d   # first column: the raw centered-difference estimates
for m in range(1,N):
    for n in range(m,N):
        D[n,m] = (4**m*D[n,m-1]-D[n-1,m-1])/(4**m-1)
D
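To see how much we gained, compare the bottom-right entry with the exact derivative:

abs(D[-1,-1] - fp(1))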
Awesome!!
The idea behind Romberg integration is to apply Richardson extrapolation to a sequence of integral estimates generated by the trapezoidal rule; this works because, by the Euler–Maclaurin formula, the trapezoidal rule's error (like the centered difference quotient's) is an expansion in even powers of $h$. Sticking with the same $f$, let's use this to estimate $$\int_{-1}^1 e^{-x^2} \, dx.$$ Note that a high-quality implementation (which this is not) would avoid redundantly re-evaluating $f$ at points it has already visited.
def trapezoidal_sum(f,a,b,n):
    # composite trapezoidal rule with n subintervals
    xs = np.linspace(a,b,n+1)
    ys = f(xs)
    return sum([(ys[i+1]+ys[i]) for i in range(n)])*(b-a)/(2*n)
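As an aside, here's one way the redundancy could be avoided — a minimal sketch (trapezoidal_refine is a hypothetical helper, not used below): halving the step size reuses the previous estimate and evaluates $f$ only at the new midpoints.

def trapezoidal_refine(f,a,b,T_prev,n):
    # Given T_prev, the trapezoidal estimate with n subintervals,
    # return the estimate with 2n subintervals, evaluating f
    # only at the n new midpoints.
    h = (b-a)/n
    mids = a + h*(np.arange(n) + 0.5)
    return T_prev/2 + (h/2)*f(mids).sum()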
d = [trapezoidal_sum(f,-1,1,2**k) for k in range(5)]
d
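For reference, the exact value is $\int_{-1}^1 e^{-x^2}\,dx = \sqrt{\pi}\,\operatorname{erf}(1)$, so we can watch the errors directly:

from scipy.special import erf
exact = np.sqrt(np.pi)*erf(1)
[abs(di - exact) for di in d]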
And now, we extrapolate.
N = len(d)
D = np.zeros((N,N))
D[:,0] = d
for m in range(1,N):
    for n in range(m,N):
        D[n,m] = (4**m*D[n,m-1]-D[n-1,m-1])/(4**m-1)
D
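And comparing the bottom-right entry with the exact value:

abs(D[-1,-1] - exact)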
This is good enough that SciPy has a built-in routine; we can illustrate its use and (assuming it's good) check our result.
from scipy.integrate import romberg
romberg(f,-1,1)
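(One caveat: scipy.integrate.romberg has been deprecated in recent versions of SciPy and removed in the newest ones. scipy.integrate.quad makes a sturdy alternative check:)

from scipy.integrate import quad
quad(f,-1,1)   # returns (estimate, error bound)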
Awesome!!