Eigenranking by power method

(5 pts)

This problem is only 5 points since it's pretty easy, but it does bring us full circle. Your mission is simple: reproduce the computation of your eigenvector for the Eigenranking problem using the power method. You can find some simple code implementing the power method for a $2\times2$ matrix in the SageCell linked from our presentation on the power method.
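
For reference, here's a minimal sketch of power iteration on a $2\times2$ matrix in the same spirit (the matrix entries below are made up for illustration and are not necessarily those in the SageCell):

    import numpy as np
    from numpy.linalg import norm
    
    A = np.array([[2.0, 1.0], [1.0, 2.0]])  # hypothetical example matrix
    v = np.array([1.0, 0.0])                # arbitrary starting vector
    for i in range(20):
        v = A.dot(v)    # multiply by the matrix...
        v = v/norm(v)   # ...and renormalize
    print(v)            # ~[0.7071 0.7071], the dominant eigenvector
    print(A.dot(v)/v)   # componentwise ratio ~[3 3], the dominant eigenvalue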

joshuam

Comments

  • My matrix from the eigenranking problem is
    $
    M=
    \begin{bmatrix}
    0 & 94 & 58 & 77 \\
    17 & 0 & 30 & 51 \\
    24 & 17 & 0 & 71 \\
    35 & 27 & 41 & 0
    \end{bmatrix}.$
    After importing my data, I used the power method by starting with the vector $\langle1,1,1,1\rangle$ and repeatedly multiplying it by $M$ and normalizing the result.
    Twenty iterations were enough to make all the components of the componentwise ratio $Mv/v$ agree to six decimal places, so $v$ has converged to an eigenvector of $M$. (A tolerance-based version of this check is sketched after the output below.)

    import numpy as np
    from numpy.linalg import norm, eig
    M = np.array([
        [0,94,58,77],
        [17,0,30,51],
        [24,17,0,71],
        [35,27,41,0]
    ])
    v = [1,1,1,1]
    for i in range(20):
        v = M.dot(v)    # multiply by M...
        v = v/norm(v)   # ...and renormalize
    
    print(M.dot(v)/v)   # componentwise ratio; entries approximate the eigenvalue
    print(v)
    #out
    [125.97665062 125.97665067 125.97665088 125.97665075]
    [0.71960854 0.36518012 0.42024789 0.41496837]
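
    A tolerance-based version of this check (a sketch, reusing M and norm from above; the 1e-6 threshold matches the six-decimal criterion) stops the loop automatically instead of using a fixed count:

    v = np.array([1.,1.,1.,1.])
    for i in range(1000):                    # generous upper bound
        v = M.dot(v)
        v = v/norm(v)
        ratio = M.dot(v)/v                   # componentwise eigenvalue estimate
        if ratio.max() - ratio.min() < 1e-6: # all components agree to ~6 decimals
            break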
    

    I also used numpy.linalg.eig to compare these results to the original.

    eValues, eVectors = eig(M)
    print(eValues[0])      # dominant eigenvalue (happens to be listed first here)
    print(eVectors[:,0])   # corresponding eigenvector
    #out
    (125.97665072772894+0j)
    [0.71960854+0.j 0.36518012+0.j 0.42024789+0.j 0.41496837+0.j]
    
    mark
  • My matrix was

    $$
    \begin{pmatrix}
    0 & 54 & 56 & 72 \\
    33 & 0 & 27 & 43 \\
    55 & 34 & 0 & 50 \\
    30 & 13 & 52 & 0
    \end{pmatrix}
    $$

    Once I imported the libraries and defined my matrix, I decided to use a randomly generated vector as the seed:

    import numpy as np
    from scipy.linalg import norm, eig
    
    A = np.array([
        [0, 54, 56, 72],
        [33, 0, 27, 43],
        [55, 34, 0, 50],
        [30, 13, 52, 0]
    ])
    
    v = np.random.rand(4)
    

    I arbitrarily chose 100 iterations for the calculation:

    for i in range(100):
        v = A.dot(v)
        v = v/norm(v)
    
    print('Normalized eigenvector:',v)
    print()
    print('Eigenvalue:', A.dot(v)/v)
    
    # out
    Normalized eigenvector: [0.62609933 0.40611338 0.53090098 0.40151675]
    
    Eigenvalue: [128.68530542 128.68530542 128.68530542 128.68530542]
    

    Next, I computed the eigenvectors/eigenvalues using the eig() function and extracted the dominant eigenvector/eigenvalue in order to determine the difference between them and my results:

    Eval, Evec = eig(A)
    vec = abs(Evec[:,0])   # abs() strips the arbitrary sign from eig()'s vector
    val = abs(Eval[0])     # dominant eigenvalue (eig returns it as complex)
    
    print(vec)
    print()
    print(val)
    
    # out
    [0.62609933 0.40611338 0.53090098 0.40151675]
    
    128.68530542041444
    

    I found the difference by subtracting my results from these values:

    vecdiff = vec - v
    valdiff = val - (A.dot(v)/v)[0]   # scalar eigenvalue estimate from the power method
    
    print('Difference between Eigenvectors:',vecdiff)
    print()
    print('Difference between eigenvalues:',valdiff)
    
    # out
    Difference between Eigenvectors: [ 1.11022302e-16 -3.88578059e-16  0.00000000e+00 -5.55111512e-17]
    
    Difference between eigenvalues: -2.842170943040401e-14
    

    As you can see, both methods produce almost identical results.

    mark
  • My matrix was
    $M = \left[\begin{matrix}
    0 & 46 & 66 & 95 & 60 \\
    21 & 0 & 31 & 61 & 65 \\
    24 & 34 & 0 & 72 & 35 \\
    45 & 17 & 17 & 0 & 34 \\
    58 & 13 & 31 & 13 & 0 \\
    \end{matrix}\right]$

    I imported the following libraries

    import numpy as np
    from scipy.linalg import norm
    from numpy.linalg import eig
    

    and entered my matrix $M$ and starting vector $v$:

    M=np.matrix([
    [0,46,66,95,60],
    [21,0,31,61,65],
    [24,34,0,72,35],
    [45,17,17,0,34],
    [58,13,31,13,0]
    ])
    v=np.matrix([
    [1],
    [1],
    [1],
    [1],
    [1]
    ])
    

    To find the eigenvector and eigenvalue, I ran:

    for i in range(20):
        v = M.dot(v)
        v = v/norm(v)
    print('The normalized eigenvectors are:',v)
    print('The corresponding eigenvalues:', M.dot(v)/v)
    

    which gave

    The normalized eigenvectors are: [[0.62589228]
    [0.43443712]
    [0.41350197]
    [0.33937542]
    [0.36518993]]
    The corresponding eigenvalues: [[162.05232288]
    [162.05232289]
    [162.05232288]
    [162.05232288]
    [162.05232287]]
    

    and to compare the values I simply used the code from last time:

    EVa, EVE = eig(M)
    print(abs(EVE[:,0]))
    print()
    print((EVa[0]))
    

    which gave

    [[0.62589228]
    [0.43443712]
    [0.41350197]
    [0.33937542]
    [0.36518993]]
    
    (162.052322878669+0j)
    

    It is easy to see that this method achieves significant decimal accuracy, since the outputs of both methods are very nearly identical.

    mark
  • I used the matrix from my original eigenranking post,
    $$M=\begin{pmatrix}
    0&39&68&24\\
    33&0&54&54\\
    48&14&0&45\\
    51&23&40&0\end{pmatrix}.$$
    I kept the same team order as before (New York Giants, Philadelphia Eagles, Dallas Cowboys, and Washington Redskins). I re-evaluated the teams' ranking using the power method, iterating
    $$\mathbf{v}_k=\frac{M\mathbf{v}_{k-1}}{\left\|M\mathbf{v}_{k-1}\right\|}$$
    fifty times with a randomly generated $\mathbf{v}_0$. After ranking the teams with the new eigenvector, I got the same results* as in my original post:

    1. Philadelphia Eagles $\approx 0.552$
    2. New York Giants $\approx 0.522$
    3. Washington Redskins $\approx 0.472$
    4. Dallas Cowboys $\approx 0.446$

    *Note: The untrimmed values from both methods agree to at least 15 decimal places.

    import numpy as np
    from scipy.linalg import norm
    
    M = np.array([
        [0, 39, 68, 24],
        [33, 0, 54, 54],
        [48, 14, 0, 45],
        [51, 23, 40, 0]
    ])
    
    NFCE = [
        'New York Giants',
        'Philadelphia Eagles',
        'Dallas Cowboys',
        'Washington Redskins'
    ]
    
    v = np.random.rand(4)
    
    for i in range(50):
        v = M.dot(v)
        v = v/norm(v)
    
    print('The dominant eigenvector is:',v)
    print('')
    print('The dominant eigenvalue:', M.dot(v)/v)
    print('')
    
    # Output:
    # The dominant eigenvector is: [0.52222463 0.55213059 0.44644344 0.47235739]
    #
    #The dominant eigenvalue: [121.07399852 121.07399852 121.07399852 121.07399852]
    
    v = abs(v)   # ensure positive entries before ranking
    ranking = np.argsort(v).tolist()
    ranking.reverse()
    
    print([NFCE[i] for i in ranking])
    print('')
    print([v[i] for i in ranking])
    
    #Output:
    #['Philadelphia Eagles', 'New York Giants', 'Washington Redskins', 'Dallas Cowboys']
    #
    #[0.5521305858636474, 0.5222246322434978, 0.4723573863996464, 0.4464434445097206]
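
    The eig() comparison behind the note isn't shown above; a sketch of how it can be done (variable names here are my own):

    from scipy.linalg import eig
    
    evals, evecs = eig(M)
    k = np.argmax(abs(evals))   # index of the dominant eigenvalue
    vec = abs(evecs[:, k])      # dominant eigenvector, sign-normalized
    print(abs(vec - v))         # agrees to ~15 decimal places, per the note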
    
    mark
  • I used the same data from the original Eigenranking problem, defined below:

    import numpy as np
    from scipy.linalg import norm, eig   # norm is needed for the iteration below
    AFC_South = ['Texans', 'Colts', 'Titans', 'Jaguars']
    
    M = np.array([                       # np.array so that M.dot(v) works below
        [0,45,62,70],
        [45,0,46,44],
        [24,36,0,57],
        [44,32,44,0]
    ])
    

    I generated a random vector and multiplied it by $M$ fifty times, normalizing the vector after each iteration.

    np.random.seed(100)
    v = np.random.rand(4)
    for i in range(50):
        v = M.dot(v)
        v = v/norm(v)
    
    print("The dominant eigenvector is: \n", v, "\n")
    print("The corresponding eigenvalue is: \n", M.dot(v)/v, )
    #output:
    #The dominant eigenvector is: 
     #[0.60043555 0.49624917 0.43175543 0.45475396] 
    #
    #The corresponding eigenvalue is: 
     #[134.79019756 134.79019756 134.79019756 134.79019756]
    

    Finally, I took the absolute value of the eigenvector entries and sorted them to obtain my ranking:

    v = abs(v)
    rank = np.argsort(v).tolist()
    rank.reverse()
    
    print([AFC_South[i] for i in rank])
    print([v[i] for i in rank])
    #output:
    #['Texans', 'Colts', 'Jaguars', 'Titans']
    #[0.6004355492984134, 0.4962491695025007, 0.4547539615126678, 0.43175542543820566]
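
    Although eig was imported above, it never got used; a quick cross-check against it (a sketch) looks like:

    evals, evecs = eig(M)
    k = np.argmax(abs(evals))   # locate the dominant eigenvalue
    print(abs(evals[k]))        # should match the power-method eigenvalue above
    print(abs(evecs[:, k]))     # should match the power-method eigenvector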
    
    mark
  • The matrix I had was the following:

    $$
    A = \left(
    \begin{array}{cccc}
    0 & 33 & 30 & 14 \\
    0 & 0 & 28 & 28 \\
    20 & 10 & 0 & 28 \\
    5 & 30 & 36 & 0 \\
    \end{array}
    \right).
    $$

    I then used the following section of code to determine the normalized eigenvector and eigenvalue of the system:

    import numpy as np
    from numpy.linalg import norm, eig
    
    Scores = np.array([
        [0, 33, 30, 14],
        [0, 0, 28, 28],
        [20, 10, 0, 28],
        [5, 30, 36, 0]
    ])
    v = np.random.rand(4)
    for i in range(50):
        v = Scores.dot(v)
        v = v/norm(v)
    print('Normalized Eigenvector:',v)
    print('Eigenvalue:', Scores.dot(v)/v)
    

    This resulted in the following output:

    Normalized Eigenvector: [0.56338676 0.43620806 0.47348793 0.51780988]
    Eigenvalue: [63.63096241 63.63096241 63.63096241 63.63096241]
    

    These results were then fed into the following section of code to rank the teams:

    V = abs(v)
    ranking = np.argsort(V).tolist()
    ranking.reverse()
    print(ranking)   # index order: best team first
    

    Which gave the following:

    [0, 3, 2, 1]
    

    Referring to my previous work on this matrix, where I used eig() to determine the eigenvectors, I can see that these results match what I had gotten then (a sketch of that eig() computation follows the pasted values):

    [0, 3, 2, 1]
    
    [0.5633867633255967,
     0.5178098778092245,
     0.47348792594239686,
     0.4362080573986129]
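
    For reference, a sketch of what that earlier eig()-based ranking may have looked like (the original assignment's code isn't reproduced here):

    evals, evecs = eig(Scores)
    vec = abs(evecs[:, np.argmax(abs(evals))])  # dominant eigenvector
    ranking = np.argsort(vec).tolist()
    ranking.reverse()
    print(ranking)   # [0, 3, 2, 1], matching the power method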
    

    The ranking ends up identical under both approaches; the primary difference is seemingly just the number of decimal places displayed.

    mark
  • We are given the following matrix:

    $$ A =\begin{pmatrix}
    0 & 51 & 68 & 75 \\
    20 & 0 & 38 & 39 \\
    60 & 51 & 0 & 43 \\
    33 & 46 & 43 & 0
    \end{pmatrix}.$$

    Implementing the power method, we find our dominant eigenvalue and eigenvector:

    import numpy as np
    from scipy.linalg import norm
    
    A = np.array([[0,51,68,75],[20,0,38,39],[60,51,0,43],[33,46,43,0]])
    
    v = np.random.rand(4)
    for i in range(50):
        v = A.dot(v)
        v = v/norm(v)
    print("The dominant eigenvector is \n", v, "\n")
    print("The dominant eigenvalue is: \n", A.dot(v)/v)
    
    Out:
    The dominant eigenvector is 
     [0.62825902 0.35882072 0.53692157 0.43388192] 
    
    The dominant eigenvalue is: 
     [139.03766418 139.03766418 139.03766418 139.03766418]
    

    Next, we can use code similar to what was used to originally rank the division:

    NFCSouth = [
        'Carolina', 'Atlanta', 'New Orleans', 'Tampa Bay'
    ]
    ranking = np.argsort(v).tolist()
    ranking.reverse()
    print(ranking)
    print([NFCSouth[i] for i in ranking])
    print([v[i] for i in ranking])
    
    Out:
    [0, 2, 3, 1]
    ['Carolina', 'New Orleans', 'Tampa Bay', 'Atlanta']
    [0.6282590242040224, 0.5369215681005995, 0.43388191780629937, 
    0.35882072071625176]
    

    Checking the original rankings we calculated, we see that these answers are identical.

    mark
  • My matrix was

    $M=
    \begin{bmatrix}
    0 & 34 & 55 & 68 \\
    33 & 0 & 34 & 38 \\
    13 & 20 & 0 & 45 \\
    51 & 44 & 19 & 0
    \end{bmatrix}$

    The power method is to take $\langle1,1,1,1\rangle$, multiply it by $M$, then normalize, then iterate that process however many times it takes until the entries of the new vector essentially stop changing. That result is our dominant eigenvector, and the dominant eigenvalue can be found by taking the norm of $Mv$ (or from the componentwise ratio $Mv/v$, as below). The cut-and-dried code comparing the power method to the "standard" method can be found below:

    import numpy as np
    from scipy.linalg import norm, eig
    
    M = np.array([
        [0, 34, 55, 68],
        [33, 0, 34, 38],
        [13, 20, 0, 45],
        [51, 44, 19, 0]
    ])
    
    v = [1,1,1,1]
    for i in range(69): #because I'm a child
        v = M.dot(v)
        v = v/norm(v)
    
    print(M.dot(v)/v)
    print(v)
    
    #out 
    [113.52438543 113.52438543 113.52438543 113.52438543]
    [0.62310549 0.46227688 0.35856359 0.51910672]
    
    Eval, Evec = eig(M)
    vec = abs(Evec[:,0])   # abs() strips the arbitrary sign from eig()'s vector
    val = abs(Eval[0])     # dominant eigenvalue (eig returns it as complex)
    
    print(vec)
    print(val)
    
    #out
    [0.62310549 0.46227688 0.35856359 0.51910672]
    113.52438543145462
    t = vec-v
    z = val - M.dot(v)/v
    
    print(t)
    print(z)
    
    #out
    [ 0.00000000e+00 -1.11022302e-16 -1.66533454e-16 -1.11022302e-16]
    [2.84217094e-14 1.42108547e-14 2.84217094e-14 1.42108547e-14]
    
  • My matrix from the eigenranking problem was
    $$
    M =
    \begin{bmatrix}
    0 & 47 & 44 & 38\\
    52 & 0 & 20 & 23\\
    38 & 61 & 0 & 28\\
    65 & 73 & 56 & 0
    \end{bmatrix}.
    $$

    Using python to implement the power method, I noticed that the dominant eigenvector converged surprisingly early on.

    from numpy import array, argsort
    from numpy.linalg import norm, eig
    
    AFC_Central = ['Cincinnati Bengals ','Cleveland Browns   ','Houston Oilers     ','Pittsburgh Steelers']
    M = array([
        [0,47,44,38],
        [52,0,20,23],
        [38,61,0,28],
        [65,73,56,0]
    ])
    evals,evecs = eig(M)
    vec = abs(evecs[:,0])   # dominant eigenvector from eig(), sign-normalized
    

    After only one iteration, the power method yielded a vector with the same ranking order:

    v = array([1,1,1,1]).transpose()
    for ii in range(1):
        v = M.dot(v)
        v = v/norm(v)
    print('eigenvector  :',vec)
    print('power method :',v)
    print('difference   :',abs(vec-v))
    #output
    eigenvector  : [0.47983292 0.37663356 0.45636684 0.64779382]
    power method : [0.45771453 0.33707659 0.45061818 0.68834588]
    difference   : [0.0221184  0.03955697 0.00574867 0.04055205]
    

    and after 10 iterations the vector was accurate to 4 digits, even 5 for the larger elements.

    v = array([1,1,1,1]).transpose()
    for ii in range(10):
        v = M.dot(v)
        v = v/norm(v)
    print('eigenvector  :',vec)
    print('power method :',v)
    print('difference   :',abs(vec-v))
    #output
    eigenvector  : [0.47983292 0.37663356 0.45636684 0.64779382]
    power method : [0.47983633 0.37662777 0.45637268 0.64779055]
    difference   : [3.40688394e-06 5.78335328e-06 5.83845048e-06 3.27426759e-06]
    

    Since the vector found using the power method is so close to the true eigenvector, it must yield the same team ranking. Just to be sure, here are the results after a total of 50 iterations:

    v = array([1,1,1,1]).transpose()
    for ii in range(50):
        v = M.dot(v)
        v = v/norm(v)
    rank = argsort(v).tolist()
    rank.reverse()
    team_name = [AFC_Central[i] for i in rank]
    value = [v[i] for i in rank]   
    err = [abs(vec-v)[i] for i in rank]
    for i in range(4):
        print(team_name[i],':', value[i])
    print('difference   :',err)
    #output
    Pittsburgh Steelers : 0.6477938222363908
    Cincinnati Bengals  : 0.47983292134649763
    Houston Oilers      : 0.4563668438954505
    Cleveland Browns    : 0.3766335556709713
    difference   : [0.0, 5.551115123125783e-17, 5.551115123125783e-17, 5.551115123125783e-17]
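
    The fast convergence is no accident: the power method's error shrinks roughly like $|\lambda_2/\lambda_1|^k$, where $\lambda_1$ and $\lambda_2$ are the largest and second-largest eigenvalues in magnitude. A quick check of that ratio (a sketch, reusing evals from above):

    mags = sorted(abs(evals), reverse=True)   # eigenvalue magnitudes, largest first
    print(mags[1]/mags[0])                    # the smaller this is, the faster v converges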
    
    mark
  • @dan I think you've got a little error in your code where you tried to do your loop. In short, whitespace is significant in Python.

    This is an effective loop

    for i in range(10):
        print('do something')
    

    This is not an effective loop

    for i in range(10):
    print('do something')
    