Richardson extrapolation and Romberg integration

Richardson extrapolation is a sequence acceleration technique. It can be applied any time we have a solid theoretical error estimate.

Improving the centered difference quotient

Theory

Recall the centered difference quotient: $$\phi(h) = \frac{f(x+h) - f(x-h)}{2h}.$$

Had we expanded the Taylor series for $f(x+h)$ and $f(x-h)$ to more terms, we would have seen that the odd powers of $h$ cancel, so that $$\phi(h) = f'(x) + a_2 h^2 + a_4 h^4 + a_6 h^6 + a_8 h^8 + \ldots$$ Replacing $h$ with $h/2$, we find: $$\phi(h/2) = f'(x) + a_2 h^2/4 + a_4 h^4/16 + a_6 h^6/64 + a_8 h^8/256 + \ldots$$

Now, we can combine this with $\phi(h)$ to get better accuracy: $O(h^4)$, in fact! $$ \begin{eqnarray*} \phi(h) - 4 \phi(h/2) &=& -3 f'(x) + \frac{3}{4} a_4 h^4 + \frac{15}{16} a_6 h^6 + \frac{63}{64} a_8 h^8 + \ldots \\ \frac{4 \phi(h/2) - \phi(h)}{3} &=& f'(x) - \frac{1}{4} a_4 h^4 - \frac{5}{16} a_6 h^6 - \frac{21}{64} a_8 h^8 - \ldots \end{eqnarray*} $$
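We can check this claim numerically before automating it. The sketch below mirrors the definitions used in the cells further down ($f(x)=e^{-x^2}$ and its exact derivative): halving $h$ should cut the error of $\phi(h)$ by a factor of about 4, and the error of $(4\phi(h/2)-\phi(h))/3$ by a factor of about 16.

import numpy as np

# same f, fp, phi as in the cells below
def f(x): return np.exp(-x**2)
def fp(x): return -2*x*np.exp(-x**2)
def phi(x,h): return (f(x+h)-f(x-h))/(2*h)

x = 1.0
for h in [0.1, 0.05, 0.025]:
    err2 = abs(phi(x,h) - fp(x))                     # O(h^2): ~4x smaller per halving
    err4 = abs((4*phi(x,h/2) - phi(x,h))/3 - fp(x))  # O(h^4): ~16x smaller per halving
    print(h, err2, err4)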

In fact, this process can be iterated. We let $$D(n,0) = \phi\left(\frac{h}{2^n}\right)$$ and $$D(n,m) = \frac{4^m D(n,m-1) - D(n-1,m-1)}{4^m - 1}.$$ This generates a triangular array whose first column is a sequence of centered difference approximations with step size $h/2^n$; each subsequent column is an improvement generated via Richardson extrapolation, with $D(n,m)$ accurate to order $O\left((h/2^n)^{2m+2}\right)$. The lower right entry is then the most accurate of the bunch!
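Packaged as a function, the recurrence might look like the sketch below; the name richardson_table is ours (the worked examples that follow simply inline the same double loop).

import numpy as np

def richardson_table(seq):
    # Sketch, not library code: seq[n] is an approximation computed with
    # step h/2**n whose error expands in even powers of the step;
    # returns the triangular extrapolation table D.
    N = len(seq)
    D = np.zeros((N,N))
    D[:,0] = seq
    for m in range(1,N):
        for n in range(m,N):
            D[n,m] = (4**m*D[n,m-1] - D[n-1,m-1])/(4**m - 1)
    return D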

Practice

Perhaps this is best illustrated with an example. Let $f(x)=e^{-x^2}$; let's estimate $f'(1)$. Of course, it's not hard to compute it very precisely, since we know $f'(x)=-2x\,e^{-x^2}$.

In [1]:
import numpy as np
def f(x): return np.exp(-x**2)
def fp(x): return -2*x*np.exp(-x**2)
fp(1)
Out[1]:
-0.73575888234288467

Here are a few estimates using the centered difference quotient.

In [2]:
def phi(x,h): return (f(x+h)-f(x-h))/(2*h)
d = [phi(1,h) for h in [2**(-n) for n in range(5)]]
d
Out[2]:
[-0.49084218055563289,
 -0.67340155850954053,
 -0.72034287515965034,
 -0.73192094576096345,
 -0.73480049075469234]

Not so good. Let's try to improve it using Richardson extrapolation.

In [3]:
N = len(d)
D = np.zeros((N,N))
D[:,0] = d                       # first column: the raw difference quotients
for m in range(1,N):
    for n in range(m,N):         # fill the lower triangle, column by column
        D[n,m] = (4**m*D[n,m-1]-D[n-1,m-1])/(4**m-1)
D
Out[3]:
array([[-0.49084218,  0.        ,  0.        ,  0.        ,  0.        ],
       [-0.67340156, -0.73425468,  0.        ,  0.        ,  0.        ],
       [-0.72034288, -0.73598998, -0.73610567,  0.        ,  0.        ],
       [-0.73192095, -0.7357803 , -0.73576632, -0.73576094,  0.        ],
       [-0.73480049, -0.73576034, -0.73575901, -0.73575889, -0.73575888]])

The bottom right entry agrees with $f'(1)$ in every digit shown. Awesome!!

Romberg integration

The idea behind Romberg integration is to apply Richardson extrapolation to a sequence of integral estimates generated by the trapezoidal rule; this works because, by the Euler-Maclaurin formula, the trapezoidal rule's error also has an expansion in even powers of the step size. Sticking with the same $f$, let's use this to estimate $$\int_{-1}^1 e^{-x^2} \, dx.$$ Note that a high quality implementation (which this is not) would avoid redundantly re-evaluating $f$ at sample points shared between successive estimates.
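For instance, doubling the number of subintervals keeps every old sample point, so each refinement only needs $f$ at the new midpoints. Here's a minimal sketch of that trick; the helper name trapezoid_refine is ours, not part of the notebook's code.

import numpy as np

def trapezoid_refine(f,a,b,prev,n):
    # Hypothetical helper: given the trapezoidal estimate `prev` computed
    # with n subintervals, return the estimate for 2n subintervals,
    # evaluating f only at the n new midpoints.
    h = (b-a)/n
    new_xs = a + h*(np.arange(n) + 0.5)
    return prev/2 + (h/2)*f(new_xs).sum()

Starting from $T_1 = (b-a)(f(a)+f(b))/2$ and refining repeatedly reproduces the estimates computed below.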

In [4]:
def trapezoidal_sum(f,a,b,n):
    # composite trapezoidal rule with n subintervals
    xs = np.linspace(a,b,n+1)
    ys = f(xs)
    return (ys[:-1]+ys[1:]).sum()*(b-a)/(2*n)
d = [trapezoidal_sum(f,-1,1,2**k) for k in range(5)]
d
Out[4]:
[0.73575888234288467,
 1.3678794411714423,
 1.4627405036571262,
 1.4859681956007622,
 1.4917312296913905]

And now, we extrapolate.

In [5]:
N = len(d)
D = np.zeros((N,N))
D[:,0] = d
for m in range(1,N):
    for n in range(m,N):
        D[n,m] = (4**m*D[n,m-1]-D[n-1,m-1])/(4**m-1)
D
Out[5]:
array([[ 0.73575888,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 1.36787944,  1.57858629,  0.        ,  0.        ,  0.        ],
       [ 1.4627405 ,  1.49436086,  1.48874583,  0.        ,  0.        ],
       [ 1.4859682 ,  1.49371076,  1.49366742,  1.49374554,  0.        ],
       [ 1.49173123,  1.49365224,  1.49364834,  1.49364804,  1.49364765]])

This is good enough that SciPy has a built-in routine; we can illustrate its use and (assuming it's good) check our result.

In [6]:
from scipy.integrate import romberg
romberg(f,-1,1)
Out[6]:
1.4936482656242029
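If you'd like to see SciPy's own table, romberg also accepts a show=True keyword that prints its extrapolation steps; they should line up with the matrix we built by hand.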

Awesome!!