Today, we're going to talk about powers of matrices and how that leads to a way to compute the dominant eigenvector of a matrix.
Can you see what's interesting about the following code?
from numpy.random import rand, seed
from scipy.linalg import norm

M = rand(5, 5)   # a random 5x5 matrix
v = rand(5)      # a random starting vector
for i in range(20):
    v = M.dot(v)      # apply M...
    v = v/norm(v)     # ...and renormalize
print(M.dot(v)/v)
print(v)

# Slightly simplified output:
# [2.113 2.113 2.113 2.113 2.113]
# [0.276 0.355 0.357 0.345 0.741]
The command `print(M.dot(v)/v)` takes the vector $M\,\vec{v}$ and divides it term-wise by $\vec{v}$. The result is a vector of length 5 whose value is $2.113$ in each component; in other words, $M\,\vec{v} \approx 2.113\,\vec{v}$, so the iteration has produced a very good approximation to an eigenvector of $M$ with eigenvalue $2.113$.

The basic idea behind iteration is very simple and quite broad: given a function $f$ and an initial input $x_0$, we repeatedly feed outputs back in as inputs, producing the sequence $x_0, f(x_0), f(f(x_0)), \ldots$, and we write $f^k(x_0)$ for the result of $k$ applications of $f$.
For example, let $f(x) = \frac{1}{2}x$ and let $x_0=12$. Then our sequence is
$$\left\{12,6,3,\frac{3}{2},\frac{3}{4},\frac{3}{8},\frac{3}{16},\frac{3}{32},\ldots\right\}.$$

More generally, if we define $f(x)=ax$ for some $a\in\mathbb{R}$ and let $x_0\in\mathbb{R}$, then $f^{k}(x_0)=a^k x_0$.
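Here's a quick sketch of this scalar iteration in code:

```python
# Iterate f(x) = x/2 starting from x0 = 12.
x = 12
seq = [x]
for _ in range(7):
    x = x/2        # apply f
    seq.append(x)
print(seq)   # [12, 6.0, 3.0, 1.5, 0.75, 0.375, 0.1875, 0.09375]
```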
Iteration via multiplication by a matrix is very similar: given a square matrix $M$ and an initial vector $\vec{v}^{(0)}$, we form the sequence
$$\vec{v}^{(0)},\quad \vec{v}^{(1)} = M\,\vec{v}^{(0)},\quad \vec{v}^{(2)} = M\,\vec{v}^{(1)},\quad \ldots,$$
so that $\vec{v}^{(k)} = M^k\,\vec{v}^{(0)}$.
Note that we've used parentheses to distinguish the terms of our sequence from our growing list of sub-scripts and super-scripts.
The image below explores the iteration $\vec{v} \to T\,\vec{v}$, where
$$T = \frac{1}{5}\begin{pmatrix}3&1\\1&3\end{pmatrix}.$$

The outer ring is a set of initial seeds, and each subsequent smaller ring is the image of the previous ring under multiplication by $T$.
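The original plotting code isn't shown; here's a minimal matplotlib sketch that reproduces the idea:

```python
import numpy as np
import matplotlib.pyplot as plt

# Seeds on a circle, then repeated multiplication by T. The eigenvalues of
# T are 0.8 and 0.4 with eigenvectors (1,1) and (1,-1), so each ring
# shrinks and flattens toward the dominant direction (1,1).
T = np.array([[3, 1], [1, 3]]) / 5
theta = np.linspace(0, 2*np.pi, 200)
ring = np.vstack([np.cos(theta), np.sin(theta)])

for _ in range(6):
    plt.plot(ring[0], ring[1], '.', markersize=2)
    ring = T @ ring   # image of the current ring under T

plt.gca().set_aspect('equal')
plt.show()
```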
We'll explain why iteration should converge to an eigenvector under the following assumptions:

- $M$ has a basis of eigenvectors $\vec{v}_1,\ldots,\vec{v}_n$ whose eigenvalues satisfy $|\lambda_1| > |\lambda_2| \geq \cdots \geq |\lambda_n|$, and
- the initial vector $\vec{v}^{(0)}$ has a nonzero component in the $\vec{v}_1$ direction.

The eigenstructure clarifies the iterative process. Writing $\vec{v}^{(0)} = c_1\vec{v}_1 + \cdots + c_n\vec{v}_n$ with $c_1\neq 0$, we get
$$M^k\,\vec{v}^{(0)} = \sum_{i=1}^n c_i\,\lambda_i^k\,\vec{v}_i = \lambda_1^k\left(c_1\vec{v}_1 + \sum_{i=2}^n c_i\left(\frac{\lambda_i}{\lambda_1}\right)^{k}\vec{v}_i\right).$$
Since $|\lambda_i/\lambda_1|<1$ for $i>1$, every term but the first dies off as $k$ grows, and the normalized iterates line up with the dominant eigenvector $\vec{v}_1$.

This iterative approach is called the power method, and for large, sparse matrices it can be far more practical than a dense routine like `eig`. The following matrix, for example, can be stored with 27 integers as a sparse matrix (9 values together with their 9 row and 9 column indices), versus 100 integers as a full matrix.
$$\begin{pmatrix} \color{lightgray}0 & 1 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & 2 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 3 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 4 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 5 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 6 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 7 & \color{lightgray}0 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 8 & \color{lightgray}0 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & 9 \\ \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 & \color{lightgray}0 \end{pmatrix}$$

A visualization of the simple example: zeros are white, the largest value is black, and intermediate values are gray.
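A minimal sketch of that storage scheme with `scipy.sparse`, using coordinate (COO) format:

```python
import numpy as np
from scipy.sparse import coo_matrix

# 9 values plus their 9 row and 9 column indices: 27 integers in all.
vals = np.arange(1, 10)    # the nonzero entries 1..9
rows = np.arange(0, 9)     # entry k sits in row k...
cols = np.arange(1, 10)    # ...and column k+1 (the superdiagonal)
S = coo_matrix((vals, (rows, cols)), shape=(10, 10))
print(S.toarray())
```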
The NCAA basketball matrix itself is far too large to write down here, but it's not hard to visualize:
Clusters form on the diagonal because the teams are sorted into conferences.
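That kind of picture is easy to generate; here's a minimal sketch using a random sparse matrix as a stand-in for the actual NCAA data (which isn't reproduced here):

```python
import matplotlib.pyplot as plt
from scipy.sparse import random as sparse_random

# Stand-in: a 350x350 random sparse matrix at 2% density.
A = sparse_random(350, 350, density=0.02, random_state=1)
plt.spy(A, markersize=1)   # dark dots mark the nonzero entries
plt.show()
```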
`scipy.sparse.linalg` has a command called `eigs` designed to compute a few eigenvalue/eigenvector pairs of a sparse matrix. Using that, I obtained the following strengths and rankings (a sketch of the computation appears after the table):
Team | Strength |
---|---|
Florida_St | 0.150725 |
Kansas | 0.150592 |
Duke | 0.148579 |
Villanova | 0.146498 |
Maryland | 0.140609 |
Michigan_St | 0.138507 |
Baylor | 0.138496 |
Creighton | 0.137279 |
Oregon | 0.135733 |
Louisville | 0.134360 |
Virginia | 0.133454 |
Kentucky | 0.129268 |
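Here's a rough sketch of that computation. The actual game-data matrix isn't shown here, so the random matrix below is a hypothetical stand-in; the point is just the shape of the `eigs` call and the read-off of strengths:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigs

# Hypothetical stand-in for the (sparse, nonnegative) win matrix built
# from the season's game data.
A = sparse_random(350, 350, density=0.02, random_state=1)

# Ask for the single eigenpair of largest magnitude.
vals, vecs = eigs(A, k=1, which='LM')
strength = np.abs(vecs[:, 0].real)   # component i = strength of team i
ranking = np.argsort(-strength)      # indices sorted strongest to weakest
print(ranking[:10])
```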
We're going to run through some computations with the following matrix:
$$ M = \left( \begin{array}{ccccc} 548 & -692 & 44 & 378 & 184 \\ 212 & -268 & 16 & 148 & 68 \\ 100 & -108 & -24 & 22 & -12 \\ -576 & 728 & -56 & -400 & -208 \\ 296 & -380 & 44 & 222 & 124 \\ \end{array} \right) $$
import numpy as np
from scipy.linalg import norm, lu_solve, lu_factor

M = np.array([
    [548, -692, 44, 378, 184],
    [212, -268, 16, 148, 68],
    [100, -108, -24, 22, -12],
    [-576, 728, -56, -400, -208],
    [296, -380, 44, 222, 124]
])

# Power iteration: repeatedly apply M and normalize.
v = [1, 1, 1, 1, 1]
for i in range(50):
    v = M.dot(v)
    v = v/norm(v)
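To read off the eigenvalue this iteration has found (a small addition beyond the original code), one standard choice is the Rayleigh quotient:

```python
# Rayleigh quotient: for a (near-)eigenvector v, v.M.v / v.v ~ lambda.
lam = v.dot(M.dot(v)) / v.dot(v)
print(lam)
```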
# Shift M by -36, the eigenvalue found above, to expose a different eigenvector.
MM = M - (-36)*np.identity(5)
v = [1, 1, 1, 1, 1]
for i in range(100):
    v = MM.dot(v)
    v = v/norm(v)
If $(\lambda,\vec{v})$ is an eigenpair for $M$, then $$(M-\lambda I)\vec{v} = M\vec{v}-\lambda\vec{v} = \vec{0}.$$ Thus $\vec{v}$ contributes nothing to the iteration of $M-\lambda I$, which should instead converge to a different eigenvector of $M$: the eigenvalues of $M-\lambda I$ are $\lambda_i - \lambda$, so the shifted iteration picks out the eigenvalue of $M$ farthest from $\lambda$.
# Inverse iteration: apply M^{-1} via an LU factorization at each step.
lu = lu_factor(M)
v = [1, 1, 1, 1, 1]
for i in range(100):
    v = lu_solve(lu, v)
    v = v/norm(v)
This works because $1/\lambda$ is an eigenvalue of $M^{-1}$ whenever $\lambda$ is a nonzero eigenvalue of $M$. Iterating $M^{-1}$ (applied via the LU factorization, which is much cheaper than forming the inverse explicitly) therefore converges to the eigenvector of $M$ whose eigenvalue is smallest in absolute value.
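To recover that smallest eigenvalue of $M$ itself (a small addition here, assuming the loop above has converged to a unit eigenvector), one extra solve suffices:

```python
# One more solve: w = M^{-1} v ~ (1/lambda) v, and since v is a unit
# vector, v.dot(w) ~ 1/lambda.
w = lu_solve(lu, v)
print(1 / v.dot(w))   # estimate of the eigenvalue of M smallest in magnitude
```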
SciPy has a library for working with sparse matrices, which I'll describe here soon.