Here's some Python code (using the NumPy and SciPy libraries) for solving linear systems.
# Imports
import numpy as np
from scipy.linalg import lu, inv, solve, lu_solve, lu_factor, norm
First, let's generate a random example.
np.random.seed(1)
A = np.random.randint(-5,5,(4,4))
b = np.random.randint(-5,5,4)
print(A)
print(b)
Here's the simple, straightforward way to solve the system $A\vec{x}=\vec{b}$:
solve(A,b)
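As a quick sanity check (this snippet is an addition, not part of the original session), we can confirm that the computed solution actually satisfies the system:

```python
import numpy as np
from scipy.linalg import solve

# Rebuild the same example system (same seed as above).
np.random.seed(1)
A = np.random.randint(-5, 5, (4, 4))
b = np.random.randint(-5, 5, 4)

x = solve(A, b)
# The residual A x - b should vanish up to floating-point roundoff.
print(np.allclose(A.dot(x), b))  # → True
```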
The `lu` command provides a way to compute the LU decomposition.
P,L,U = lu(A)
print(P)
print(L)
print(U)
The matrices $P$, $L$, and $U$ satisfy $$A = PLU,$$ which is SciPy's convention; equivalently, $LU = P^{T}A$. (Since the permutation for this particular $A$ is a single row swap, $P^{T} = P$ here, so $LU = PA$ holds as well.)
L.dot(U)
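It's worth checking that convention numerically (an added verification, using `np.allclose` for a tolerance-aware comparison):

```python
import numpy as np
from scipy.linalg import lu

# Same example matrix as above.
np.random.seed(1)
A = np.random.randint(-5, 5, (4, 4))

P, L, U = lu(A)
# SciPy factors A = P @ L @ U, so P.T @ A = L @ U.
print(np.allclose(P.dot(L).dot(U), A))    # → True
print(np.allclose(L.dot(U), P.T.dot(A)))  # → True
```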
Suppose we'd like to find the permuted LU Decomposition of that same matrix $$A = \begin{bmatrix} 0 & 3 & 4 & 0 \\ -5 & -5 & -4 & 2 \\ 1 & 4 & -3 & -1 \\ 0 & -3 & -1 & -3 \\ \end{bmatrix}$$ by hand. We can illustrate the steps but still have Python do the computations as follows.
First, it's evident that we need to swap rows 1 and 2 at the beginning. That brings us to
$$P = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}.$$So we find ourselves applying Gaussian elimination to $$PA = \begin{bmatrix} -5 & -5 & -4 & 2 \\ 0 & 3 & 4 & 0 \\ 1 & 4 & -3 & -1 \\ 0 & -3 & -1 & -3 \\ \end{bmatrix}.$$ Or, in computer speak:
P = np.array([
[0,1,0,0],
[1,0,0,0],
[0,0,1,0],
[0,0,0,1]
])
PA = P.dot(A)
PA
The first step in Gaussian elimination should be $\frac{1}{5}R_1 + R_3$. We can accomplish this via multiplication with the following elementary matrix:
E1 = np.array([
[1,0,0,0],
[0,1,0,0],
[1/5,0,1,0],
[0,0,0,1]
])
E1.dot(PA)
Our next two steps are $-R_2+R_3$ and $R_2+R_4$, which we can combine into a single step as follows:
E32 = np.array([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0,-1, 1, 0],
[0, 1, 0, 1]
])
E32.dot(E1).dot(PA)
Finally, we do $\frac{3}{7.8}R_3+R_4$:
E4 = np.array([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 3/7.8, 1]
])
U = E4.dot(E32.dot(E1).dot(PA))
U
There's $U$!
We could compute $L$ as the inverse of the product of the $E$s:
L = inv(E4.dot(E32).dot(E1))
print(L)
Note, though, how easily $L$ can be read off from the row operations we used in the first place: each below-diagonal entry of $L$ is just the negative of the corresponding multiplier.
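To illustrate (an added sketch, with the multipliers taken from the elimination steps above), we can write the negated multipliers below the diagonal to get $L$ directly and confirm that $LU = PA$:

```python
import numpy as np

# PA and U from the worked example above.
PA = np.array([
    [-5., -5., -4.,  2.],
    [ 0.,  3.,  4.,  0.],
    [ 1.,  4., -3., -1.],
    [ 0., -3., -1., -3.],
])
U = np.array([
    [-5., -5., -4.,     2.],
    [ 0.,  3.,  4.,     0.],
    [ 0.,  0., -7.8,   -0.6],
    [ 0.,  0.,  0., -42/13],
])
# Below-diagonal entries of L are the negated multipliers:
# (1/5)R1+R3 -> -1/5, -R2+R3 -> 1, R2+R4 -> -1, (3/7.8)R3+R4 -> -3/7.8.
L = np.array([
    [  1.,  0.,     0., 0.],
    [  0.,  1.,     0., 0.],
    [-1/5,  1.,     1., 0.],
    [  0., -1., -3/7.8, 1.],
])
print(np.allclose(L.dot(U), PA))  # → True
```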
Let's see how to use the LU decomposition to solve large systems and examine the time savings.
n = 1000
A = np.random.randint(-50,50,(n,n))
b = np.random.randint(-50,50,n)
%%timeit
solve(A,b)
SciPy provides an `lu_solve` command that allows us to solve a system expressed in terms of a compactified form of the LU decomposition; that compactified form is generated with `lu_factor`.
%%timeit
lu_factor(A)
LU,P = lu_factor(A)  # note: this P is an array of pivot indices, not a permutation matrix
%%timeit
lu_solve((LU,P),b)
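A key payoff of factoring once (an added illustration, using a smaller random system so it runs quickly) is that every additional right-hand side then costs only cheap triangular solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve

np.random.seed(0)
n = 200
A = np.random.randn(n, n)
bs = [np.random.randn(n) for _ in range(5)]

LU, piv = lu_factor(A)                     # O(n^3) work, paid once
xs = [lu_solve((LU, piv), b) for b in bs]  # O(n^2) per right-hand side

# Each reused-factorization solution matches a fresh call to solve().
print(all(np.allclose(x, solve(A, b)) for x, b in zip(xs, bs)))  # → True
```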
It takes much longer to compute an inverse than to compute an LU factorization, or even to solve $A\vec{x} = \vec{b}$ directly in the first place.
%%timeit
inv(A)
Once we have the inverse, though, it's pretty fast to use it to solve systems.
AInv = inv(A)
%%timeit
AInv.dot(b)
However, using the inverse is usually not as accurate as Gaussian elimination.
sol1 = AInv.dot(b)
norm(A.dot(sol1) - b)
sol2 = lu_solve((LU,P),b)
norm(A.dot(sol2) - b)
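To see the accuracy gap more starkly, here is an added, hedged illustration on a notoriously ill-conditioned matrix (`scipy.linalg.hilbert` builds the Hilbert matrix):

```python
import numpy as np
from scipy.linalg import hilbert, lu_factor, lu_solve, inv, norm

# Ill-conditioning magnifies the difference between the two approaches.
n = 10
A = hilbert(n)
b = np.ones(n)

sol_inv = inv(A).dot(b)
sol_lu = lu_solve(lu_factor(A), b)

print(norm(A.dot(sol_inv) - b))  # residual of the inverse-based solution
print(norm(A.dot(sol_lu) - b))   # typically far smaller
```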