An archive of Mark's Spring 2018 Numerical Analysis course.

A more realistic fiction

mark

Four fictitious teams play each other in an unnamed sport. We’ll call the sport FB and we’ll denote the teams:

  1. OSU
  2. MSU
  3. PSU
  4. UM

After each pair of teams plays twice during the regular season, the result matrix (where the entry in row i, column j counts the wins of team i over team j) can be written:

\left(\begin{matrix} 0 & 2 & 2 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 2 & 0 \end{matrix}\right)

What difficulties might arise when we apply the Perron-Frobenius technique to this matrix? How might we alter the technique slightly to get around this problem?

funmanbobyjo

Using this code:

 import numpy as np
 from scipy.linalg import eig

 M = np.matrix([
     [0, 2, 2, 2],
     [0, 0, 1, 1],
     [0, 1, 0, 0],
     [0, 1, 2, 0]
   ])
 vals, vecs = eig(M)
 vals

you get the following eigenvalues

 array([ 0.00000000+0.j        ,  1.76929235+0.j        ,
        -0.88464618+0.58974281j, -0.88464618-0.58974281j])

Two of these eigenvalues are complex with negative real parts, so it seems this won’t work according to the Perron-Frobenius technique.

I’m not sure how to alter this technique though.

mark

@funmanbobyjo You’ve computed the eigenvalues of the matrix. The idea is to compute the dominant eigenvector. Ultimately, though, the question is a theoretical one so I suggest you have a look at the Perron-Frobenius theorem and make sure all the hypotheses are satisfied.
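
One way to get at the dominant eigenvector directly is power iteration; here is a minimal sketch (the helper name and iteration count are just illustrative choices):

```python
import numpy as np

# Power iteration: repeatedly apply M and renormalize.  The iterate
# converges to an eigenvector of the eigenvalue of largest magnitude,
# provided that eigenvalue strictly dominates the others in magnitude.
def dominant_eigenvector(M, iters=200):
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

M = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]], dtype=float)
v = dominant_eigenvector(M)
print(v)  # entries rank the teams: OSU first, then UM, MSU, PSU
```

Here the dominant eigenvalue 1.769… strictly exceeds the modulus 1.063… of the complex pair, so the iteration converges.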

anonymous_user

The Perron-Frobenius theorem asserts that a real square matrix with strictly positive entries has a unique eigenvalue of largest modulus, which is real and positive, with a corresponding eigenvector that can be chosen to have all positive entries (i.e., the eigenvector is determined up to sign, and we choose the positive one).

If we look at our matrix as outlined above, it is not square, as the first column is all zeros.

I think one thing we might try is a positive perturbation of the matrix’s entries by some tiny \epsilon, which shouldn’t change the information captured in the matrix very much, but does satisfy the conditions given by the theorem.
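
A sketch of that perturbation idea (the value of ε here is just an illustrative choice):

```python
import numpy as np
from scipy.linalg import eig

M = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]], dtype=float)

eps = 1e-3                   # illustrative perturbation size
Mp = M + eps                 # now every entry is strictly positive
vals, vecs = eig(Mp)
k = np.argmax(vals.real)     # index of the dominant eigenvalue
v = np.abs(vecs[:, k].real)  # Perron vector, chosen positive
print(v)
```

Since ε is tiny, the resulting ranking matches what the unperturbed dominant eigenvector would give.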

nathan

@anonymous_user I think this is still a square matrix, even though the first column is zeros. For instance,

\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

has dimensions of 3x3, not 0x0. So,

\begin{bmatrix} 0 & 1 & 2 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}

should have dimensions of 3x3, not 3x2. Zeros are still information that shouldn’t be ignored.

In my calculations, the Perron-Frobenius theorem is satisfied.

from scipy.linalg import eig
import numpy as np
M = np.matrix([
     [0, 2, 2, 2],
     [0, 0, 1, 1],
     [0, 1, 0, 0],
     [0, 1, 2, 0]
   ])
vals, vecs = eig(M)
vals

Out[1]:

array([ 0.00000000+0.j        ,  1.76929235+0.j        ,
       -0.88464618+0.58974281j, -0.88464618-0.58974281j])

The unique positive eigenvalue is at position 1.

vecs[:,1]

Out[2]:

array([-0.88298754+0.j, -0.28206901+0.j, -0.15942476+0.j, -0.33963778+0.j])

This eigenvector (corresponding to the dominant eigenvalue) can be chosen to be positive. I don’t see any issue using the Perron-Frobenius theorem, I must say.
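
That sign choice can be made explicit in code; a small sketch (using argmax rather than a hard-coded index, since eig does not guarantee any ordering of the eigenvalues):

```python
import numpy as np
from scipy.linalg import eig

M = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]], dtype=float)
vals, vecs = eig(M)

k = np.argmax(vals.real)  # index of the dominant eigenvalue
v = vecs[:, k].real       # its eigenvector (real here, up to rounding)
if v.sum() < 0:           # eig fixes only the norm, not the sign,
    v = -v                # so flip if needed to make all entries positive
print(v)
```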

mark

@nathan I agree that the matrix is square. However, I don’t think the Perron-Frobenius theorem can be applied because its hypotheses are not satisfied. Specifically, the matrix is not irreducible. This is most easily seen by examining the graph induced by this adjacency matrix.

If the matrix were irreducible, then this graph would be strongly connected. It’s not, though, since there is no path from any other vertex to vertex 1.

If you want an example where the conclusion doesn’t hold either, you might look at an example where there’s a team that doesn’t win any games.
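
One concrete way to test the irreducibility hypothesis is the standard criterion that a nonnegative n×n matrix A is irreducible iff (I + A)^{n-1} has all positive entries; a sketch (the helper name is mine):

```python
import numpy as np

def is_irreducible(A):
    # A nonnegative n-by-n matrix is irreducible iff (I + A)**(n-1)
    # is entrywise positive, i.e. the digraph of A is strongly connected.
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))

M = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]])
print(is_irreducible(M))  # False: the zero first column means no edges into vertex 1 (OSU)
```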

Lorentz

Could we add fictitious games by adding a matrix with 1’s in all entries aside from the diagonal? First, we can check that the original matrix is reducible.

import numpy as np
from scipy.linalg import eig
from sympy import Matrix

A = np.array([[0, 2, 2, 2], [0, 0, 1, 1], [0, 1, 0, 0], [0, 1, 2, 0]])
a = Matrix(A)

print(a.rref())                     # reduced row echelon form of A
vals, vec = eig(A)

vec = abs(vec[:, 0])                # eigenvector for the first returned eigenvalue
ranking = np.argsort(vec).tolist()  # sort teams by eigenvector entry
ranking.reverse()                   # largest entry first
ranking

This prints the reduced row echelon form of the first matrix. The zero first column reflects the fact that no team ever beat team 0, so the matrix is reducible.

(Matrix([
[0, 1.0,   0,   0],
[0,   0, 1.0,   0],
[0,   0,   0, 1.0],
[0,   0,   0,   0]]), [1, 2, 3])

[0, 3, 2, 1]

Here is the eigen-ranking for the first matrix. The records for each team are 0 (6-0), 1 (2-4), 2 (1-5), and 3 (3-3), so we would expect the teams to be ranked 0, 3, 1, 2. So the first matrix does not give the desired ranking.
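
The records can be read off the matrix directly: row sums count wins and column sums count losses. A quick sketch:

```python
import numpy as np

A = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]])

wins = A.sum(axis=1)    # row i: games team i won
losses = A.sum(axis=0)  # column j: games team j lost
print(wins.tolist(), losses.tolist())  # [6, 2, 1, 3] [0, 4, 5, 3]
```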

C = np.array([[0, 3, 3, 3], [1, 0, 2, 2], [1, 3, 0, 1], [1, 2, 3, 0]])
c = Matrix(C)
print(c.rref())                     # full rank: the rref is the identity
vals, vec = eig(C)

vec = abs(vec[:, 0])                # eigenvector for the first returned eigenvalue
ranking = np.argsort(vec).tolist()
ranking.reverse()
ranking

(Matrix([
[1.0,   0,   0,   0],
[  0, 1.0,   0,   0],
[  0,   0, 1.0,   0],
[  0,   0,   0, 1.0]]), [0, 1, 2, 3])
[0, 3, 1, 2]

If we add 1 to each non-diagonal entry and recompute, we get the desired ranking 0, 3, 1, 2. The perturbed matrix is irreducible, since its graph is strongly connected: every team now has a win over every other team.
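
As a sanity check, here is a self-contained sketch that builds the perturbed matrix as A plus ones off the diagonal, tests irreducibility via the criterion that a nonnegative n×n matrix is irreducible iff (I + A)^{n-1} is entrywise positive, and recomputes the ranking, selecting the dominant eigenvalue with argmax since eig does not guarantee any ordering:

```python
import numpy as np
from scipy.linalg import eig

def is_irreducible(A):
    # Nonnegative n-by-n A is irreducible iff (I + A)**(n-1) is
    # entrywise positive, i.e. its digraph is strongly connected.
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))

A = np.array([[0, 2, 2, 2],
              [0, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 2, 0]])
C = A + np.ones_like(A) - np.eye(4, dtype=int)  # one fictitious win per ordered pair
print(is_irreducible(C))                        # True

vals, vecs = eig(C)
k = np.argmax(vals.real)                        # dominant eigenvalue
ranking = np.argsort(-np.abs(vecs[:, k].real)).tolist()
print(ranking)                                  # [0, 3, 1, 2]
```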