A key feature of Simpson's rule is that it is exact for cubic polynomials. While this is not at all hard to show by hand, the verification is a nice example of something that can be done with SymPy. Furthermore, there are related formulae that can be derived by solving systems of equations, and SymPy provides a convenient way to do this exactly.
SymPy is a relatively weak computer algebra system but, when working in the Python ecosystem, it's worth knowing. Since it is so often used in an interactive context, we often just import the whole namespace:
from sympy import *
x = symbols('x')
While SymPy is a computer algebra system (or CAS, like Mathematica) it still lives in Python. Thus, symbols (even a symbol like $x$) must be declared. Once we do that, though, it really does feel like a CAS. For example, here are the critical points of $f(x)=x e^{-x^2}$ computed symbolically.
def f(x): return x*exp(-x**2)
x1,x2 = solve(diff(f(x)))
x2
Again, this is an exact result. Of course, you can generate a decimal approximation.
N(x2)
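As a quick sanity check, here's a self-contained sketch of the computation above: the critical points come out to $\pm\sqrt{2}/2$, and the second derivative test confirms that the positive one is a local maximum.

```python
from sympy import symbols, exp, diff, solve, sqrt

x = symbols('x')

def f(x):
    return x*exp(-x**2)

# critical points: solve f'(x) = 0
crit = solve(diff(f(x), x), x)
assert set(crit) == {-sqrt(2)/2, sqrt(2)/2}

# second derivative test: f'' < 0 at +sqrt(2)/2, so it's a local max
assert diff(f(x), x, 2).subs(x, sqrt(2)/2).evalf() < 0
```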
We could go on and on with an intro to sympy, but let's let the subject matter drive our content. Again, Simpson's rule is exact for cubic polynomials. Let's check this for the general cubic $f(x)=ax^3+bx^2+cx+d$ over the interval $[-1,1]$, i.e. $$\int_{-1}^1 f(x) \, dx = \frac{1}{3}f(-1) + \frac{4}{3}f(0) + \frac{1}{3}f(1).$$
a,b,c,d = symbols('a,b,c,d')
def f(x): return a*x**3 + b*x**2 + c*x + d
print(integrate(f(x),(x,-1,1)))
print(expand(f(-1)/3+4*f(0)/3+f(1)/3))
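Incidentally, degree three is the limit. A quick check (a minimal sketch) shows that the rule misses on $x^4$:

```python
from sympy import symbols, integrate, Rational, S

x = symbols('x')

exact = integrate(x**4, (x, -1, 1))            # 2/5
simpson = S(-1)**4/3 + 4*S(0)**4/3 + S(1)**4/3  # 2/3
assert exact == Rational(2, 5)
assert simpson == Rational(2, 3)
assert exact != simpson  # not exact for quartics
```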
Now let's turn this around: suppose we want $$\int_{-1}^1 f(x) \, dx \approx w_1\,f(-1) + w_2\,f(0) + w_3\,f(1).$$ What choice of weights $w_1$, $w_2$, and $w_3$ works best? Well, if the rule is exact for polynomials of high degree, that seems good, and the higher the better. Thus, we might ask: what values of $w_1$, $w_2$, and $w_3$ make $$\int_{-1}^1 f(x) \, dx = w_1\,f(-1) + w_2\,f(0) + w_3\,f(1)$$ for $f(x)=ax^3+bx^2+cx+d$? Well, let's compute the right hand side:
w1,w2,w3 = symbols('w1,w2,w3')
collect(expand(w1*f(-1)+w2*f(0)+w3*f(1)), (a,b,c,d))
This suggests that we solve the following:
solve([Eq(w3-w1,0), Eq(w1+w3,2/S(3)), Eq(w3-w1,0), Eq(w1+w2+w3,2)])
As expected!
Here's one more way to think of it, which is slightly easier. Suppose we want our approximation $$\int_{-1}^1 x^k \, dx \approx w_1\,(-1)^k + w_2\,0^k + w_3\,1^k$$ to be exact for as large a $k$ as possible. By linearity of the integral, it should then be exact for polynomials up to that degree. That leads us to the following equations:
eqs = [Eq(integrate(x**k,(x,-1,1)), w1*(-1)**k + w2*(0)**k + w3*1**k) for k in range(4)]
eqs
solve(eqs)
Gauss came up with the following idea: rather than choosing just the weights $w_1$, $w_2$, and $w_3$, let's also choose the nodes $x_1$, $x_2$, and $x_3$ at which the function is evaluated. That leads us to the following equations:
x1,x2,x3 = symbols('x1,x2,x3')
w1,w2,w3 = symbols('w1,w2,w3')
eqs = [Eq(integrate(x**k,(x,-1,1)),w1*x1**k + w2*x2**k + w3*x3**k) for k in range(6)]
eqs
OK, let's solve 'em.
sols = solve(eqs)
sols
Looks like a lot but, in reality, it's just the same solution with the $x_i$s permuted. Let's grab the one in the implied order.
the_sol = [sol for sol in sols if sol[x1]<sol[x2]<sol[x3]]
the_sol
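As a sanity check, the nodes we just found should be the roots of the degree-3 Legendre polynomial; that's the classical characterization of Gauss-Legendre nodes. Here's a self-contained sketch using SymPy's legendre:

```python
from sympy import symbols, legendre, solve, sqrt, Rational, simplify

x = symbols('x')

# roots of the degree-3 Legendre polynomial P_3(x) = (5x^3 - 3x)/2
nodes = sorted(solve(legendre(3, x), x), key=lambda r: float(r))

# they should be -sqrt(3/5), 0, sqrt(3/5)
expected = [-sqrt(Rational(3, 5)), 0, sqrt(Rational(3, 5))]
assert all(simplify(n - e) == 0 for n, e in zip(nodes, expected))
```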
OK, let's use it.
gauss_nodes = [the_sol[0][x] for x in (x1,x2,x3)]
gauss_weights = [the_sol[0][w] for w in (w1,w2,w3)]
def gauss_rule(f):
    x1, x2, x3 = gauss_nodes
    w1, w2, w3 = gauss_weights
    return w1*f(x1) + w2*f(x2) + w3*f(x3)
First, let's see if it works exactly for 5th degree polynomials. Here's a fifth degree polynomial. The degree is just high enough that it's convenient to represent the coefficients with a symbolic function $c(k)$.
c = symbols('c', cls=Function)
def f(x): return sum([c(k)*x**k for k in range(6)])
f(x)
Let's compare the gauss_rule to the actual integral.
print(expand(gauss_rule(f)))
print(integrate(f(x), (x,-1,1)))
Looks good, let's do a numerical example.
def f(x): return exp(-x**2)
N(gauss_rule(f))
I wonder if that's right? Let's compare to SciPy.
import numpy as np
from scipy.integrate import quad, simpson
def f(x): return np.exp(-x**2)
quad(f, -1,1)
Pretty close for a three point evaluation! I wonder how it compares to Simpson's rule?
simpson([f(-1), f(0), f(1)], x=[-1, 0, 1])
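To quantify the comparison, here's a sketch of the absolute errors of both rules against quad. It uses the closed-form three-point Gauss-Legendre nodes $\pm\sqrt{3/5}, 0$ and weights $5/9, 8/9, 5/9$, so it doesn't depend on the symbolic work above.

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.exp(-x**2)

# three-point Gauss-Legendre rule on [-1, 1]
nodes = np.array([-np.sqrt(3/5), 0.0, np.sqrt(3/5)])
weights = np.array([5/9, 8/9, 5/9])
gauss = weights @ f(nodes)

# Simpson's rule with the same three evaluations
simpson = (f(-1.0) + 4*f(0.0) + f(1.0)) / 3

exact, _ = quad(f, -1, 1)
print(f"Gauss error:   {abs(gauss - exact):.2e}")
print(f"Simpson error: {abs(simpson - exact):.2e}")
```

Both rules use just three function evaluations, but the Gauss error is more than an order of magnitude smaller.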