Your personal LU Decomposition problem

edited January 2020 in Assignments

(20 points)


A note on formatting matrices

Due to a little bug in the parser, you need four backslashes to generate a linebreak in a matrix. Thus to get
$$\begin{pmatrix}
1 & 2 \\
3 & 4
\end{pmatrix}$$
you need:

$$\begin{pmatrix}
  1 & 2 \\\\
  3 & 4
\end{pmatrix}$$

In this 20 point problem, you'll be computing both the LU Decomposition of a matrix (described in section 1.3 of our text) as well as the Permuted LU Decomposition (described in sections 1.4 and 1.7). You'll also use the LU Decomposition to obtain some information about your matrix. More specifically, given your personal $4\times4$ matrix $A$:

  1. Use the scipy.linalg.lu command to compute the Permuted LU Decomposition. Report the resulting matrices $P$, $L$, and $U$ and verify that $PA = LU$. Then,
  2. Show the steps necessary to obtain $L$ and $U$ such that $A=LU$.
  3. Use your LU Decomposition to solve $A\vec{x}=\langle 0,0,0,1 \rangle$.
  4. Use both your LU Decomposition and your Permuted LU Decomposition to compute $\det(A)$.

For part 2, you may feel free to use the tools provided in our Solve and LUSolve notebook to assist in your computations.
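If you haven't used it before, the basic shape of the SciPy call looks like this (a minimal sketch on a generic matrix, not anyone's personal one; note that `scipy.linalg.lu` returns a $P$ satisfying $A = PLU$, so it must be transposed to get the $PA = LU$ form):

```python
import numpy as np
from scipy.linalg import lu

# A generic 4x4 example (not a personal matrix)
A = np.array([
    [2., 1., 1., 0.],
    [4., 3., 3., 1.],
    [8., 7., 9., 5.],
    [6., 7., 9., 8.],
])

# SciPy factors A = P_scipy @ L @ U; the P satisfying P @ A = L @ U
# is the transpose (= inverse) of the permutation SciPy returns.
P_scipy, L, U = lu(A)
P = P_scipy.T

print(np.allclose(P.dot(A), L.dot(U)))  # True
```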

You can find your personal matrix by clicking on your name below:

Alex Ash Ben Dan
Eli Frank Gabriel JoshuaB
JoshuaM Kyle Mark Miguel
Nick Opie Stephen Wade
audreyfrank

Comments

  • This post does not completely answer the question but it does illustrate how to approach part 2, which is really the crux of the matter. Note that all the code in this response is contained in this live Sage Cell.

    My personal matrix is:

    $$A = \left(
    \begin{array}{cccc}
    2 & 4 & 4 & 0 \\
    0 & -1 & 3 & -1 \\
    8 & 16 & 18 & 2 \\
    -2 & -3 & 1 & 12 \\
    \end{array}
    \right).$$

    Or, in Python:

    import numpy as np
    A = np.array([
      [2, 4, 4, 0],
      [0, -1, 3, -1],
      [8, 16, 18, 2],
      [-2, -3, 1, 12]
    ])
    

    To eliminate the non-zero terms below the diagonal in the first column, I can multiply by the appropriate elementary matrices:

    E21 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [-4,0,1,0],
        [1,0,0,1]
    ])
    E21.dot(A)
    
    # Out:
    # array([[ 2,  4,  4,  0],
    #       [ 0, -1,  3, -1],
    #       [ 0,  0,  2,  2],
    #       [ 0,  1,  5, 12]])
    

    We then need to eliminate the 1 in the last row of the second column:

    E3 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,1,0,1]
    ])
    E3.dot(E21.dot(A))
    
    # Out:
    # array([[ 2,  4,  4,  0],
    #    [ 0, -1,  3, -1],
    #    [ 0,  0,  2,  2],
    #    [ 0,  0,  8, 11]])
    

    Finally, we need to eliminate the last entry of the third column:

    E4 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,-4,1]
    ])
    U = E4.dot(E3.dot(E21.dot(A)))
    U
    
    # Out:
    # array([[ 2,  4,  4,  0],
    #        [ 0, -1,  3, -1],
    #        [ 0,  0,  2,  2],
    #        [ 0,  0,  0,  3]])
    

    So, there's $U$. Or, in math talk:

    $$\left(\begin{array}{rrrr}
    2 & 4 & 4 & 0 \\
    0 & -1 & 3 & -1 \\
    0 & 0 & 2 & 2 \\
    0 & 0 & 0 & 3
    \end{array}\right)$$

    To find $L$, we just record the opposites of the terms below the diagonal in our elementary matrices. Thus,

    $$L = \left(\begin{array}{rrrr}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    4 & 0 & 1 & 0 \\
    -1 & -1 & 4 & 1
    \end{array}\right).$$
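As a cross-check (a sketch reusing the elementary matrices defined above), $L$ should equal $(E_4 E_3 E_{21})^{-1}$:

```python
import numpy as np
from numpy.linalg import inv

# The three elementary matrices from the elimination steps above
E21 = np.array([[1,0,0,0],[0,1,0,0],[-4,0,1,0],[1,0,0,1]])
E3  = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,1,0,1]])
E4  = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,-4,1]])

# L is the inverse of the accumulated elimination matrices
L = inv(E4.dot(E3).dot(E21))
print(L)
```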

    If this all worked, then we should have $A = LU$ or
    $$
    \left(
    \begin{array}{cccc}
    2 & 4 & 4 & 0 \\
    0 & -1 & 3 & -1 \\
    8 & 16 & 18 & 2 \\
    -2 & -3 & 1 & 12 \\
    \end{array}
    \right) = \left(\begin{array}{rrrr}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    4 & 0 & 1 & 0 \\
    -1 & -1 & 4 & 1
    \end{array}\right) \left(\begin{array}{rrrr}
    2 & 4 & 4 & 0 \\
    0 & -1 & 3 & -1 \\
    0 & 0 & 2 & 2 \\
    0 & 0 & 0 & 3
    \end{array}\right).$$

    I guess that shouldn't be too hard to check by hand but we could have Python do this as well. Remembering that $A$ and $U$ are already defined, we have:

    L = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [4,0,1,0],
        [-1,-1,4,1]
    ])
    L.dot(U) == A
    
    # Out:
    # array([[ True,  True,  True,  True],
    #       [ True,  True,  True,  True],
    #       [ True,  True,  True,  True],
    #       [ True,  True,  True,  True]])
    

    Excellent!

    audreyBenfrank
  • edited February 2020

    I was given the matrix:

    $$
    A =
    \begin{bmatrix}
    5&3&5&1\\
    10&7&13&6\\
    20&11&21&-1\\
    5&7&29&11
    \end{bmatrix}
    $$

    I started by using SciPy's lu() function to decompose $A$ into $P$, $L$, and $U$. As we noted yesterday, the $P$ returned by lu() is in fact $P^{-1}$: it satisfies $A = PLU$ rather than $PA = LU$.

    PInv,L,U = lu(A)
    P = inv(PInv)
    print(P)
    print(L)
    print(U)
    print(L.dot(U))
    print(P.dot(A))
    

    The output was:

    [[-0.  0.  1.  0.]
     [-0.  0.  0.  1.]
     [-0.  1.  0.  0.]
     [ 1.  0.  0.  0.]]
    [[1.         0.         0.         0.        ]
     [0.25       1.         0.         0.        ]
     [0.5        0.35294118 1.         0.        ]
     [0.25       0.05882353 0.28       1.        ]]
    [[20.         11.         21.         -1.        ]
     [ 0.          4.25       23.75       11.25      ]
     [ 0.          0.         -5.88235294  2.52941176]
     [ 0.          0.          0.         -0.12      ]]
    [[20. 11. 21. -1.]
     [ 5.  7. 29. 11.]
     [10.  7. 13.  6.]
     [ 5.  3.  5.  1.]]
    [[20. 11. 21. -1.]
     [ 5.  7. 29. 11.]
     [10.  7. 13.  6.]
     [ 5.  3.  5.  1.]]
    

    Next I verified $LU = PA$ with

    L.dot(U) == P.dot(A)
    array([[ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True,  True,  True,  True]])
    

    Since $A$ has 6 non-zero entries below the main diagonal, it will take 6 row operations to reduce $A$ to $U$.
    The row operations, in the order they were performed, were:
    $
    -R_1+R_4\\
    -4R_1 + R_3 \\
    -2R_1 + R_2\\
    R_2 + R_3\\
    -4R_2 + R_4\\
    -3R_3 + R_4
    $
    with the corresponding elementary matrices:

    E1 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [-1,0,0,1]
    ])
    E2 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [-4,0,1,0],
        [0,0,0,1]
    ])
    E3 = np.array([
        [1,0,0,0],
        [-2,1,0,0],
        [0,0,1,0],
        [0,0,0,1]
    ])
    E4 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,1,1,0],
        [0,0,0,1]
    ])
    E5 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,-4,0,1]
    ])
    E6 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,-3,1]
    ])
    
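As an aside, typing out each $4\times4$ elementary matrix by hand is error-prone. A small helper can generate them all (`elem` is a hypothetical function for illustration, not part of our course tools):

```python
import numpy as np

def elem(n, i, j, m):
    """Elementary matrix for the row operation m*R_j + R_i on an
    n x n system (rows numbered from 1). Hypothetical helper."""
    E = np.eye(n, dtype=int)
    E[i - 1, j - 1] = m
    return E

# E1 above, which encodes -R1 + R4, is then just:
E1 = elem(4, 4, 1, -1)
print(E1)
```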

    $U$ is the matrix that results once all six row operations have been applied:

    U = E6.dot(E5.dot(E4.dot(E3.dot(E2.dot(E1.dot(A))))))
    $$
    U =
    \begin{bmatrix}
    5&3&5&1\\
    0&1&3&4\\
    0&0&4&-1\\
    0&0&0&-3
    \end{bmatrix}
    $$

    $L$ is now obtained by inverting each elementary matrix (which simply flips the signs of its below-diagonal entries) and multiplying the inverses together in reverse order. I obtained:

    $$
    L =
    \begin{bmatrix}
    1&0&0&0\\
    2&1&0&0\\
    4&-1&1&0\\
    1&4&3&1
    \end{bmatrix}
    $$

    I verify my calculations again.

    L.dot(U) == A
    array([[ True,  True,  True,  True],
              [ True,  True,  True,  True],
              [ True,  True,  True,  True],
              [ True,  True,  True,  True]])
    

    Next I solved $A\vec{x}=\vec{b}=\langle 0,0,0,1\rangle$ using the $LU$ decomposition.
    Starting with $L\vec{c}=\vec{b}$, I noticed that forward substitution leaves the right-hand side unchanged here, since every entry of $\vec{b}$ above the last is zero.
    So $\vec{c}=\langle 0,0,0,1\rangle$.
    Back substitution on $U\vec{x}=\vec{c}$, however, does alter the right-hand side.
    The operations I used were:
    $
    \frac{-1}{3}R_4+R_3 \\
    \frac{4}{3}R_4+R_2\\
    \frac{-11}{3}R_4+R_1\\
    \frac{-3}{4}R_3+R_2\\
    R_3+R_1\\
    -3R_2+R_1\\
    \frac{1}{5}R_1\\
    \frac{1}{4}R_3 \\
    \frac{-1}{3}R_4
    $
    This resulted in the solution $\vec{x}=\langle \frac{-4}{5}, \frac{19}{12}, \frac{-1}{12}, \frac{-1}{3}\rangle$.
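The same forward/back substitution can be delegated to SciPy's triangular solvers (a sketch reusing the $L$ and $U$ found above):

```python
import numpy as np
from scipy.linalg import solve_triangular

# L and U from the hand computation above
L = np.array([[1,0,0,0],[2,1,0,0],[4,-1,1,0],[1,4,3,1]], dtype=float)
U = np.array([[5,3,5,1],[0,1,3,4],[0,0,4,-1],[0,0,0,-3]], dtype=float)
b = np.array([0,0,0,1], dtype=float)

c = solve_triangular(L, b, lower=True)  # forward substitution
x = solve_triangular(U, c)              # back substitution (upper is the default)
print(x)  # approximately [-4/5, 19/12, -1/12, -1/3]
```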
    Finally, $\det(A)$ is calculated from the diagonal of $U$:
    $(5)(1)(4)(-3)=-60$.
    Using the computer's permuted decomposition,

    det(PInv.dot(L.dot(U)))

    the determinant was calculated to be -60.000000000001116.

    mark
  • edited January 2020

    My matrix is

    $$A = \begin{pmatrix}
    5&3&1&2\\
    20&17&8&8\\
    15&4&-2&11\\
    5&3&-4&24\end{pmatrix}.$$

    Question 1: To compute the permuted $LU$ decomposition using the lu() function, I began by importing the appropriate Python library (along with a few others that would help with other computations),

    import numpy as np
    from scipy.linalg import lu, inv, solve, det
    

    and then I entered my matrix.

    A = np.array([
    [5, 3, 1, 2],
    [20, 17, 8, 8],
    [15, 4, -2, 11],
    [5, 3, -4, 24]
    ])
    

    Next I called the lu() function to extract $P$, $L,$ and $U$ and printed the results.

    P,L,U = lu(A)
    print(P)
    print(L)
    print(U)
    
    [[0. 0. 0. 1.]
     [1. 0. 0. 0.]
     [0. 1. 0. 0.]
     [0. 0. 1. 0.]]
    [[ 1.          0.          0.          0.        ]
     [ 0.75        1.          0.          0.        ]
     [ 0.25        0.14285714  1.          0.        ]
     [ 0.25        0.14285714 -0.02941176  1.        ]]
    [[20.         17.          8.          8.        ]
     [ 0.         -8.75       -8.          5.        ]
     [ 0.          0.         -4.85714286 21.28571429]
     [ 0.          0.          0.         -0.08823529]]
    

    Lastly, I verified $P$, $L,$ and $U$ by computing their product and comparing it to $A$.

    P.dot(L).dot(U) == A
    
    array([[ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True,  True,  True,  True]])
    

    Question 2: To compute $L$ and $U$ manually, I multiplied my $A$ matrix by matrices $E_{1}$, $E_{2}$, and $E_{3}$ to reduce $A$ to the upper triangular matrix $U$.

    E1 = np.array([
        [1,0,0,0],
        [-4,1,0,0],
        [-3,0,1,0],
        [-1,0,0,1]
    ])
    
    E2 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,1,1,0],
        [0,0,0,1]
    ])
    
    E3 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,-5,1]    
    ])
    
    U = E3.dot(E2.dot(E1.dot(A)))
    

    The resulting $U$ matrix is

    $$U = E_{3}E_{2}E_{1}A = \begin{pmatrix}
    5&3&1&2\\
    0&5&4&0\\
    0&0&-1&5\\
    0&0&0&-3\end{pmatrix}.$$

    By multiplying $E_{3}$, $E_{2}$, and $E_{1}$ and inverting the result,

    L = inv(E3.dot(E2.dot(E1)))
    

    I was able to compute the $L$ matrix

    $$L = \left(E_{3}E_{2}E_{1}\right)^{-1} = \begin{pmatrix}
    1&0&0&0\\
    4&1&0&0\\
    3&-1&1&0\\
    1&0&5&1\end{pmatrix}.$$

    Lastly, I confirmed that $LU$ = $A$.

    L.dot(U) == A
    
    [[ True  True  True  True]
     [ True  True  True  True]
     [ True  True  True  True]
     [ True  True  True  True]]
    

    Question 3: In order to solve the $A\vec{x}=\langle 0,0,0,1 \rangle$ using my $LU$ decomposition, I first entered the $\vec{b}$ vector.

    b = [0,0,0,1]
    

    I then solved $L\vec{c} = \vec{b}$,

    c = solve(L,b)
    

    which resulted in $\vec{c} = \left(0, 0, 0, 1\right)^{\mathrm{T}}$.

    With vector $\vec{c}$, I solved $U\vec{x} = \vec{c}$,

    x = solve(U,c)
    

    to get $\vec{x} = \left(-\frac{1}{3}, \frac{4}{3}, -\frac{5}{3}, -\frac{1}{3}\right)^{\mathrm{T}}$.

    Lastly, to verify my answer, I solved $A\vec{x} = \vec{b}$,

    x = solve(A,b)
    

    and got an identical result.


    Question 4: I computed the product of the diagonals of the $U$ matrix in order to find the determinant of matrix $A$,

    detA = 5*5*(-1)*(-3)
    

    which turned out to be $75$. Since I didn't swap any rows during my LU decomposition, I didn't need to change the sign of the product of the diagonal. I verified the result with the det() function

    det(A)
    
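The diagonal product itself can also be computed directly with NumPy (a small sketch using the $U$ obtained above):

```python
import numpy as np

# U from the elimination above
U = np.array([
    [5, 3,  1,  2],
    [0, 5,  4,  0],
    [0, 0, -1,  5],
    [0, 0,  0, -3],
])

# For a triangular matrix, the determinant is the product of the diagonal
print(np.prod(np.diag(U)))  # 75
```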
    mark
  • @frank and @LordFarquaad

    I edited your posts a bit to format the matrices correctly. Turns out there's a little bug in the parser that requires a doubled double backslash (four backslashes total) for a line break.

    PrinceHumperdinckfrank
    dan
    edited February 2020

    My Matrix:

    $$A = \left(
    \begin{array}{cccc}
    -2 & 1 & -1 & -2 \\
    6 & -5 & 0 & 9 \\
    -10 & 3 & -4 & -2 \\
    6 & 3 & 0 & -14 \\
    \end{array}
    \right).$$

    Part 1

    $$P= \left(
    \begin{array}{cccc}
    0& 0 & 0 & 1 \\
    0 & 0& 1& 0 \\
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 &0 \\
    \end{array}
    \right).
    $$

    $$L= \left(
    \begin{array}{cccc}
    1& 0 & 0 & 0 \\
    -0.6 & 1& 0& 0 \\
    -0.6 & -\frac{2}{3} & 1& 0 \\
    0.2 & \frac{1}{12} & 0 &1 \\
    \end{array}
    \right).
    $$

    $$U= \left(
    \begin{array}{cccc}
    -10& 3 & -4 & -2 \\
    0 & 4.8& -2.4& -15.2 \\
    0 & 0 & -4& -\frac{7}{3} \\
    0 & 0 & 0 &-\frac{1}{3} \\
    \end{array}
    \right).
    $$

    P,L,U = lu(A)
    print(P)
    print(L)
    print(U)
    Output:
    [[0. 0. 0. 1.]
     [0. 0. 1. 0.]
     [1. 0. 0. 0.]
     [0. 1. 0. 0.]]
    [[ 1.          0.          0.          0.        ]
     [-0.6         1.          0.          0.        ]
     [-0.6        -0.66666667  1.          0.        ]
     [ 0.2         0.08333333 -0.          1.        ]]
    [[-10.           3.          -4.          -2.        ]
     [  0.           4.8         -2.4        -15.2       ]
     [  0.           0.          -4.          -2.33333333]
     [  0.           0.           0.          -0.33333333]]
    

    $PA = LU$, as seen by:

    input: inv(P).dot(A)
    out: 
    array([[-10.,   3.,  -4.,  -2.],
       [  6.,   3.,   0., -14.],
       [  6.,  -5.,   0.,   9.],
       [ -2.,   1.,  -1.,  -2.]])
    
    input: L.dot(U)
    out: array([[-10.,   3.,  -4.,  -2.],
       [  6.,   3.,   0., -14.],
       [  6.,  -5.,   0.,   9.],
       [ -2.,   1.,  -1.,  -2.]])
    

    For some reason the next one is throwing 3 false results, even though it's evident that they are equal:

    input: inv(P).dot(A) == L.dot(U)
    out: array([[ True,  True,  True,  True],
       [False, False,  True,  True],
       [False,  True,  True,  True],
       [ True,  True,  True,  True]])
    

    When looking at: $$PA - LU$$

    in: inv(P).dot(A) - L.dot(U)
    out: 
    array([[ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00,  0.0000000e+00],
    [-8.8817842e-16, -4.4408921e-16,  0.0000000e+00,  0.0000000e+00],
    [-8.8817842e-16,  0.0000000e+00,  0.0000000e+00,  0.0000000e+00],
    [ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00,  0.0000000e+00]])
    

    It's clear that the entries get very close to zero, just not exactly zero. This all comes down to floating-point error from the repeating decimals that arise during elimination. It is also part of why Python permutes in the first place: partial pivoting divides by the largest available pivot, which keeps this kind of round-off as small as possible.
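A more robust way to make this comparison (a sketch; `numpy.allclose` compares within a small tolerance instead of exactly):

```python
import numpy as np
from numpy.linalg import inv
from scipy.linalg import lu

A = np.array([
    [ -2,  1, -1,  -2],
    [  6, -5,  0,   9],
    [-10,  3, -4,  -2],
    [  6,  3,  0, -14],
], dtype=float)

P, L, U = lu(A)

# Element-wise == fails on ~1e-16 round-off; allclose compares
# within a small tolerance instead.
print(np.allclose(inv(P).dot(A), L.dot(U)))  # True
```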

    Part 2

    The following elementary matrices were used to solve for $L$ and $U$:

    E321 = np.array([
    [1,0,0,0],
    [3, 1, 0,0],
    [-5, 0, 1, 0],
    [3, 0, 0, 1]
    ])
    
    E4 = np.array([
    [1,0,0,0],
    [0, 1, 0,0],
    [0, -1, 1, 0],
    [0, 3, 0, 1]
    ])
    
    E5= np.array([
    [1,0,0,0],
    [0, 1, 0,0],
    [0, 0, 1, 0],
    [0, 0, 3, 1]
    ])
    

    so that $U = E_5 E_4 E_{321} A$:

    U = E5.dot(E4.dot(E321.dot(A)))
    

    $$U= \left(
    \begin{array}{cccc}
    -2& 1 & -1 & -2 \\
    0 & -2& -3& 3 \\
    0 & 0 & 4& 5 \\
    0 & 0 & 0 &4 \\
    \end{array}
    \right).
    $$

    $L$ records the opposites of the below-diagonal terms in the elementary matrices:

    $$L= \left(
    \begin{array}{cccc}
    1& 0 & 0 & 0 \\
    -3 & 1& 0& 0 \\
    5 & 1 & 1 &0 \\
    -3 & -3 & -3 &1 \\
    \end{array}
    \right).
    $$

    Checking that $A = LU$:

    In: L.dot(U) == A
    Out: array([[ True,  True,  True,  True],
       [ True,  True,  True,  True],
       [ True,  True,  True,  True],
       [ True,  True,  True,  True]])
    

    Well that at least works.

    Part 3: Solve $LU\vec{x}=\vec{b}=\langle 0,0,0,1\rangle$ using the LU decomposition.

    Following the book, write $LU\vec{x} = L\vec{c} = \vec{b}$ with $\vec{c} = U\vec{x}$, and first solve $L\vec{c} = \vec{b}$:

    $$
    L\vec{c}= \left(
    \begin{array}{cccc}
    1& 0 & 0 & 0 \\
    -3 & 1& 0& 0 \\
    5 & 1 & 1 &0 \\
    -3 & -3 & -3 &1
    \end{array} \right)
    \left(\begin{array}{cccc} a\\ b \\ c \\ d \end{array}\right)
    =\left(\begin{array}{cccc} 0\\ 0 \\ 0 \\ 1 \end{array}\right)
    $$

    in: b = [0,0,0,1]
    solve(L,b)
    
    out: array([ 0., -0., -0.,  1.])
    

    Having found $\vec{c}$, we solve $U\vec{x} = \vec{c}$:

    $$
    U\vec{x}= \left(
    \begin{array}{cccc}
    -2& 1 & -1 & -2 \\
    0 & -2& -3& 3\\
    0 & 0 & 4 &5 \\
    0 & 0 & 0 & 4
    \end{array}\right)
    \left(\begin{array}{ccccc} x\\ y \\ z \\ w \end{array}\right)
    =\left(\begin{array}{cccc} 0\\ 0 \\ 0\\ 1\end{array}\right)
    $$

    in: b = [0,0,0,1]
    S = solve(L.dot(U),b)
    
    out: array([ 0.328125,  0.84375 , -0.3125  ,  0.25    ])
    

    Checking with A:

    in: b = [0,0,0,1]
    Z = solve(A,b)
    
    out: array([ 0.328125,  0.84375 , -0.3125  ,  0.25    ])
    

    The two solutions agree:

    In: Z == S
    Out: array([ True,  True,  True,  True])
    

    Part 4:

    Since $A = LU$ and $\det(L) = 1$ (its diagonal is all ones),
    $$\det(A) = \det(L)\det(U) = \det(U).$$

    Multiplying the diagonal entries of an upper triangular matrix produces the determinant.
    Therefore: $$(-2)(-2)(4)(4) = 64$$

    Checking this with Python

    In: det(U)
    Out: 64.0
    

    In the case of the permuted decomposition, $\det(A) = \det(PLU)$, where $P$ is the permutation returned by lu() (so that $A = PLU$). In Python:

    In: det(P.dot(L.dot(U)))
    Out: 64.0
    

    Those two also agree.

    mark
  • edited February 2020

    Personal Matrix:

    $$A = \begin{pmatrix}
    -2 & 4 & -3 & 2 \\
    2 & -3 & 6 & 2 \\
    -2 & 1 & -13 & -5 \\
    4 & -4 & 18 & 14
    \end{pmatrix} $$

    1) By scipy.linalg command,

    import numpy as np
    from scipy.linalg import lu
    
    A = np.array([
        [-2,4,-3,2],
        [2,-3,6,2],
        [-2,1,-13,-5],
        [4,-4,18,14]
    ])
    
    P,L,U = lu(A)
    print(P)
    print(L)
    print(U)
    [[0. 1. 0. 0.]
     [0. 0. 0. 1.]
     [0. 0. 1. 0.]
     [1. 0. 0. 0.]]
    [[ 1.   0.   0.   0. ]
     [-0.5  1.   0.   0. ]
     [-0.5 -0.5  1.   0. ]
     [ 0.5 -0.5 -0.   1. ]]
    [[ 4.  -4.  18.  14. ]
     [ 0.   2.   6.   9. ]
     [ 0.   0.  -1.   6.5]
     [ 0.   0.   0.  -0.5]]
    

    Thus,

    $$P = \begin{pmatrix}
    0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 \\
    0 & 0 & 1 & 0 \\
    1 & 0 & 0 & 0
    \end{pmatrix} $$

    $$L = \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    -0.5 & 1 & 0 & 0 \\
    -0.5 & -0.5 & 1 & 0 \\
    0.5 & -0.5 & 0 & 1
    \end{pmatrix} $$

    $$U = \begin{pmatrix}
    4 & -4 & 18 & 14 \\
    0 & 2 & 6 & 9 \\
    0 & 0 & -1 & 6.5 \\
    0 & 0 & 0 & -0.5
    \end{pmatrix} $$

    Just to verify,

    print(P.dot(L).dot(U) == A)
    
    [[ True  True  True  True]
     [ True  True  True  True]
     [ True  True  True  True]
     [ True  True  True  True]]
    

    Therefore, PA = LU is verified.

    2) By numpy command we obtain U,

    import numpy as np
    
    A = np.array([
        [-2,4,-3,2],
        [2,-3,6,2],
        [-2,1,-13,-5],
        [4,-4,18,14]
    ])
    
    E21 = np.array([
        [1,0,0,0],
        [1,1,0,0],
        [-1,0,1,0],
        [2,0,0,1]
    ])
    
    E21.dot(A)
    
    array([[ -2,   4,  -3,   2],
           [  0,   1,   3,   4],
           [  0,  -3, -10,  -7],
           [  0,   4,  12,  18]])
    
    E3 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,3,1,0],
        [0,-4,0,1]
    ])
    
    E3.dot(E21.dot(A))
    
    array([[-2,  4, -3,  2],
           [ 0,  1,  3,  4],
           [ 0,  0, -1,  5],
           [ 0,  0,  0,  2]])
    
    U = E3.dot(E21.dot(A))
    
    U
    
    array([[-2,  4, -3,  2],
           [ 0,  1,  3,  4],
           [ 0,  0, -1,  5],
           [ 0,  0,  0,  2]])
    

    Thus,

    $$U = \begin{pmatrix}
    -2 & 4 & -3 & 2 \\
    0 & 1 & 3 & 4 \\
    0 & 0 & -1 & 5 \\
    0 & 0 & 0 & 2
    \end{pmatrix} $$

    Therefore,

    $$L = \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    -1 & 1 & 0 & 0 \\
    1 & -3 & 1 & 0 \\
    -2 & 4 & 0 & 1
    \end{pmatrix} $$
    such that A = LU.

    3) Using the LU decomposition from above, we solve $A\vec{x}=\vec{b}$ in two triangular steps: first $L\vec{c}=\vec{b}$, then $U\vec{x}=\vec{c}$. (Here $\vec{c}$ comes out equal to $\vec{b}$, so b can be passed directly to the second solve.)

    import numpy as np
    from scipy.linalg import solve
    
    L = ([
        [1,0,0,0],
        [-1,1,0,0],
        [1,-3,1,0],
        [-2,4,0,1]
    ])
    
    b = [0,0,0,1]
    
    solve(L,b)
    
    array([-0.,  0.,  0.,  1.])
    
    U = ([
        [-2,4,-3,2],
        [0,1,3,4],
        [0,0,-1,5],
        [0,0,0,2]
    ])
    
    solve(U,b)
    
    array([-22.25,  -9.5 ,   2.5 ,   0.5 ])
    
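As one extra sanity check (a sketch reusing this post's matrix), the computed solution can be pushed back through $A$:

```python
import numpy as np
from scipy.linalg import solve

A = np.array([
    [-2,  4,  -3,  2],
    [ 2, -3,   6,  2],
    [-2,  1, -13, -5],
    [ 4, -4,  18, 14],
], dtype=float)
b = np.array([0, 0, 0, 1], dtype=float)

x = solve(A, b)

# The solution should reproduce b when pushed back through A
print(np.allclose(A.dot(x), b))  # True
```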

    4) By using LU Decomposition and Permuted LU Decomposition, we can compute det(A):

    import numpy as np
    from scipy.linalg import det, lu
    
    A = np.array([
        [-2,4,-3,2],
        [2,-3,6,2],
        [-2,1,-13,-5],
        [4,-4,18,14]
    ])
    
    det(A)
    
    3.9999999999999956
    
    U = np.array([
        [-2,4,-3,2],
        [0,1,3,4],
        [0,0,-1,5],
        [0,0,0,2]
    ])
    
    det(U)
    
    4.0
    
    P,L,U = lu(A)
    print(P)
    print(L)
    print(U)
    
    [[0. 1. 0. 0.]
     [0. 0. 0. 1.]
     [0. 0. 1. 0.]
     [1. 0. 0. 0.]]
    [[ 1.   0.   0.   0. ]
     [-0.5  1.   0.   0. ]
     [-0.5 -0.5  1.   0. ]
     [ 0.5 -0.5 -0.   1. ]]
    [[ 4.  -4.  18.  14. ]
     [ 0.   2.   6.   9. ]
     [ 0.   0.  -1.   6.5]
     [ 0.   0.   0.  -0.5]]
    
    P.dot(L).dot(U)
    
    array([[ -2.,   4.,  -3.,   2.],
           [  2.,  -3.,   6.,   2.],
           [ -2.,   1., -13.,  -5.],
           [  4.,  -4.,  18.,  14.]])
    
    det(P.dot(L).dot(U))
    
    3.9999999999999956
    

    Therefore, $\det(A) = 3.9999999999999956 \approx 4$.

    mark
  • edited January 2020

    Personal matrix:

    $$
    A =
    \begin{pmatrix}
    -3 & 4 & -1 & 4 \\
    9 & -8 & 5 &-8 \\
    3 & 0 & 4 & 2 \\
    -15 & 20 & -2 & 31
    \end{pmatrix}
    $$

    Part 1:

    A = np.array([
        [-3, 4, -1, 4],
        [9, -8, 5, -8],
        [3, 0, 4, 2],
        [-15, 20, -2, 31]
    ])
    
    # LU decomposition
    P,L,U = lu(A)
    P = inv(P)
    
    print(P)
    print(L)
    print(U)
    
    
    # verify LU = PA
    print(L.dot(U))
    L.dot(U) == P.dot(A)
    

    $$
    P =
    \begin{pmatrix}
    0 & 0 & 0 & 1 \\
    0 & 0 & 1 & 0 \\
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0
    \end{pmatrix}
    $$

    $$
    L =
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    -\frac{3}{5} & 1 & 0 & 0 \\
    \frac{1}{5} & 0 & 1 & 0 \\
    -\frac{1}{5} & 1 & \frac{1}{3} & 1
    \end{pmatrix}
    $$

    $$
    U =
    \begin{pmatrix}
    -15 & 20 & -2 & 31 \\
    0 & 4 & \frac{19}{5} & \frac{53}{5} \\
    0 & 0 & -\frac{3}{5} & -\frac{11}{5} \\
    0 & 0 & 0 & -\frac{5}{3}
    \end{pmatrix}
    $$

    Part 2:

    # (-5)R1 + R4
    E1 = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [-5, 0, 0, 1]
    ])
    
    # (3)R1 + R2
    E2 = np.array([
        [1, 0, 0, 0],
        [3, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]
    ])
    
    # R1 + R3
    E3 = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 0, 0, 1]
    ])
    
    # (-1)R2 + R3
    E4 = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, -1, 1, 0],
        [0, 0, 0, 1]
    ])
    
    # (-3)R3 + R4
    E5 = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, -3, 1]
    ])
    
    U = E5.dot(E4.dot(E3.dot(E2.dot(E1.dot(A)))))
    
    print(U)
    
    # L matrix
    L = np.array([
        [1, 0, 0, 0],
        [-3, 1, 0, 0],
        [-1, 1, 1, 0],
        [5, 0, 3, 1]
    ])
    
    print(L)
    
    print(L.dot(U))
    
    L.dot(U) == A
    

    $$
    U =
    \begin{pmatrix}
    -3 & 4 & -1 & 4 \\
    0 & 4 & 2 & 4 \\
    0 & 0 & 1 & 2 \\
    0 & 0 & 0 & 5
    \end{pmatrix}
    $$

    $$
    L =
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    -3 & 1 & 0 & 0 \\
    -1 & 1 & 1 & 0 \\
    5 & 0 & 3 & 1
    \end{pmatrix}
    $$

    Part 3:

    solve(L.dot(U),[0,0,0,1])
    
    array([ 4.00000000e-01,  6.99440506e-17, -4.00000000e-01,  2.00000000e-01])
    

    Part 4:

    # LU decomposition
    print(det(L.dot(U)))
    
    # permuted LU decomposition 
    P_inv,L,U = lu(A)
    
    print(det(P_inv.dot(L.dot(U))))
    
    -60.0
    -60.00000000000002
    
    mark
  • edited February 2020

    Personal Matrix:

    $$A=\begin{pmatrix}
    5 & 1 & 3 & -3 \\
    25& 3& 15& -11 \\
    15 &-3 & 12 &2 \\
    -5 & -11 & -6 &27 \\
    \end{pmatrix}$$

    Input : 
    A=np.array([
        [5, 1, 3, -3],
        [25, 3, 15, -11],
        [15, -3, 12, 2],
        [-5, -11, -6, 27]
    ])
    

    Decomposition of Personal Matrix

    P,L,U = lu(A)
    P=inv(P)
    print(P)
    print(L)
    print(U)
    
    Output: 
    [[-0.  1.  0.  0.]
     [-0.  0.  0.  1.]
     [-0.  0.  1.  0.]
     [ 1.  0.  0.  0.]]
    [[ 1.          0.          0.          0.        ]
     [-0.2         1.          0.          0.        ]
     [ 0.6         0.46153846  1.          0.        ]
     [ 0.2        -0.03846154 -0.02631579  1.        ]]
    [[ 25.           3.          15.         -11.        ]
     [  0.         -10.4         -3.          24.8       ]
     [  0.           0.           4.38461538  -2.84615385]
     [  0.           0.           0.           0.07894737]]
    

    Checking that the LU decomposition does reproduce the (permuted) personal matrix:

    print(L.dot(U))
    L.dot(U) == P.dot(A)
    
    [[ 25.   3.  15. -11.]
     [ -5. -11.  -6.  27.]
     [ 15.  -3.  12.   2.]
     [  5.   1.   3.  -3.]]
    array([[ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True, False,  True,  True],
           [ True,  True,  True,  True]])
    

    Elementary Matrices:

    # -5R1 + R2
    E21=np.array([
        [1,0,0,0],
        [-5,1,0,0],
        [0,0,1,0],
        [0,0,0,1]
    ])
    
    # R1 + R4
    E41=np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [1,0,0,1]
    ])
    
    #  -3R1 + R3
    E31=np.array([
        [1,0,0,0],
        [0,1,0,0],
        [-3,0,1,0],
        [0,0,0,1]
    ])
    
    # -5R2 + R4
    E42=np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,-5,0,1]
    ])
    
    # R3 + R4
    E43=np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,1,1]
    ])
    

    For the last two rounds of elementary operations, I split the computation into two stages so that the intermediate matrix would print correctly. I also rearranged my matrix before entering it, so I more or less ignored $P$.

    U1 = E43.dot(E42.dot(E31.dot(E41.dot(E21.dot(A)))))
    print(U1)
    

    This produced

    $$ U1= \begin{pmatrix}
    5& 1 & 3 & -3 \\
    0& -2& 0& 4 \\
    0 & -6 & 3 & 11 \\
    0 & -6 & 0 & 15 \\
    \end{pmatrix}$$

    Finally,

    # -3R2 + R3
    E32=np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,-3,1,0],
        [0,0,0,1]
    ])
    
    # -3R2 + R4
    E42 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,-3,0,1]
    ])
    U2 = E42.dot(E32.dot(U1))
    print(U2)
    

    This produced

    $$ U2= \begin{pmatrix}
    5& 1 & 3 & -3 \\
    0& -2& 0& 4 \\
    0 & 0 & 3 & -1 \\
    0 & 0 & 0 & 3 \\
    \end{pmatrix}$$

    To solve $A\vec{x}=\vec{b}$ with $$ b = \begin{pmatrix}
    0 \\
    0 \\
    0 \\
    1 \\
    \end{pmatrix}$$

    b=np.array([
         [0],
         [0],
         [0],
         [1]
        ])
    
    solve(A,b)
    

    Resulting in:

    $$ x=\begin{pmatrix}
    1.19904087e-16 \\
    6.66666667e-01\\
    1.11111111e-01\\
    3.33333333e-01\\
    \end{pmatrix}$$

    Lastly, for $\det(A)$ I had to import det from scipy.linalg and used the command

    print(det(L.dot(U)))

    As well as

    P1=inv(P)
    print(det(P1.dot(L.dot(U))))
    

    giving

    $$ -90.00000000000011 $$

    and

    $$ -90.00000000000024 $$

    Revisiting this post to add the difference

    Using the same code I performed

    L.dot(U) - P.dot(A)

    which resulted in

    $$ LU - PA = \begin{pmatrix}
    0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00\\
    0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00\\
    0.0000000e+00 & 8.8817842e-16 & 0.0000000e+00 & 0.0000000e+00\\
    0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00 & 0.0000000e+00\\
    \end{pmatrix}$$

    In row 3, column 2, we see the reason for the False value originally output by the comparison. Taking the difference shows it was due to a tiny discrepancy on the order of $10^{-16}$, which is essentially zero!

    mark
    Ben
    edited February 2020

    My personal matrix is:

    $$A =
    \begin{pmatrix}
    -3 & -1 & 0 & 3 \\
    -12 & -7 & 1 & 14 \\
    -15 & -2 & 4 & 16 \\
    -15 & 4 & -13 & 7
    \end{pmatrix}
    $$


    Q1: Code a PLU Decomp.

    Using scipy.linalg.lu, the PLU decomposition is:

    # My personal matrix
    A = np.array([
        [-3 ,-1,  0,  3],
        [-12,-7,  1, 14],
        [-15,-2,  4, 16],
        [-15, 4,-13,  7]
        ])
    
    # Factor the matrix
    P, L, U = lu(A)
    print('P:\n', P,'\n')
    print('L:\n', L,'\n')
    print('U:\n', U,'\n')
    
    # Recomp and compare answers
    print('A:\n',A)
    print('PxLxU:\n',P.dot(L.dot(U)),'\n\n')
    

    $$A = P^{-1}LU =
    \left[
    \begin{smallmatrix}
    0 & 0 & 0 & 1 \\
    0 & 0 & 1 & 0 \\
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0
    \end{smallmatrix}
    \right]
    \left[
    \begin{smallmatrix}
    1 & 0 & 0 & 0 \\
    1 & 1 & 0 & 0 \\
    .8 & -.9 & 1 & 0 \\
    .2 & -.1 & .14286 & 1
    \end{smallmatrix}
    \right]
    \left[
    \begin{smallmatrix}
    -15 & -2 & 4 & 16 \\
    0 & 6 & -17 & -9 \\
    0 & 0 & -17.5 & -6.9 \\
    0 & 0 & 0 & -.11429
    \end{smallmatrix}
    \right]$$

    As long as we are comfortable with a loose interpretation of "0", the decomposition checks out:

    print('PxLxU:\n',P.dot(L.dot(U)),'\n\n')
    

    produces:

    PxLxU:
      [[-3.00000000e+00 -1.00000000e+00  1.38777878e-16  3.00000000e+00]
      [-1.20000000e+01 -7.00000000e+00  1.00000000e+00  1.40000000e+01]
      [-1.50000000e+01 -2.00000000e+00  4.00000000e+00  1.60000000e+01]
      [-1.50000000e+01  4.00000000e+00 -1.30000000e+01  7.00000000e+00]]
    

    Q2: Obtain L and U by Hand

    For the manual LU decomposition we have three L component matrices from six operations:

    lOps = list()
    
    # prod(inv(E)[1:3])
    lOps.append(np.array([
        [1,0,0,0],
        [4,1,0,0],
        [5,0,1,0],
        [5,0,0,1]
    ]))
    
    # prod(inv(E)[4:5])
    lOps.append(np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,-1,1,0],
        [0,-3,0,1]
    ]))
    
    # inv(E)[6]
    lOps.append(np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,-2,1]
    ]))
    

    and the identity matrix:

    I4 = np.array([
        [1,0,0,0],
        [0,1,0,0],
        [0,0,1,0],
        [0,0,0,1]
        ])
    

    We combine them using:

    def backOps(M,ops):
        # Apply op matrices left to right
        for o in ops:
            M = M.dot(o)
        return M
    
    def forwardOps(M,ops):
        # Apply op matrices right to left
        for o in ops:
            M = inv(o).dot(M)
        return M
    

    as

    # Build UT and LT
    LT = backOps(I4,lOps)
    UT = forwardOps(A,lOps)
    

    to get

    $$LT =
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    4 & 1 & 0 & 0 \\
    5 & -1 & 1 & 0 \\
    5 & -3 & -2 & 1
    \end{pmatrix}
    , UT =
    \begin{pmatrix}
    -3 & -1 & 0 & 3 \\
    0 & -3 & 1 & 2 \\
    0 & 0 & 5 & 3 \\
    0 & 0 & 0 & 4
    \end{pmatrix}$$

    To check, we ask:

    print('LU = A?\n',LT.dot(UT) == A)
    

    and get:

    $$LU = A :
    \begin{pmatrix}
    True & True & True & True \\
    True & True & True & True \\
    True & True & True & True \\
    True & True & True & True
    \end{pmatrix}$$


    Q3: Use your LU to solve $A\vec{x} = \vec{b} = \langle 0,0,0,1 \rangle$

    To accomplish this herculean task, we must
    1. Solve $L\vec{c} = \vec{b}$
    2. Solve $U\vec{x} = \vec{c}$

    In Python:

    #Solve for Ax = <0,0,0,1> using LU
    b = np.array([0,0,0,1])
    c = solve(LT, b)
    x = solve(UT, c)
    

    In algebra, $L\vec{c} = \vec{b}$:

    $$
    a + 0b + 0c + 0d = 0 \\
    4a + b + 0c + 0d = 0 \\
    5a - b + c + 0d = 0 \\
    5a - 3b - 2c + d = 1
    $$
    Gives us $\vec{c} = \langle 0,0,0,1 \rangle$. So $U\vec{x} = \vec{c}$:
    $$
    -3w - x + 0y + 3z = 0 \\
    0w - 3x + y + 2z = 0 \\
    0w + 0x + 5y + 3z = 0 \\
    0w + 0x + 0y + 4z = 1
    $$
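    Since LT and UT are triangular, the two solve calls can also be done with scipy.linalg.solve_triangular, which performs exactly the forward and back substitution written out above; a sketch reusing the LT and UT values computed earlier:

```python
import numpy as np
from scipy.linalg import solve_triangular

# LT and UT as computed above
LT = np.array([[1., 0, 0, 0], [4, 1, 0, 0], [5, -1, 1, 0], [5, -3, -2, 1]])
UT = np.array([[-3., -1, 0, 3], [0, -3, 1, 2], [0, 0, 5, 3], [0, 0, 0, 4]])
b = np.array([0., 0, 0, 1])

# Forward substitution: solve L c = b
c = solve_triangular(LT, b, lower=True)
# Back substitution: solve U x = c
x = solve_triangular(UT, c, lower=False)

print(x)  # approximately [0.2111, 0.1167, -0.15, 0.25]
```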

    Checking in python:

    print('LU solve, vec x =\n',x)
    print('Auto solve, vec x =\n', solve(A,b))
    

    Which produces:

    LU solve, vec x =
       [ 0.21111111  0.11666667 -0.15        0.25      ]
    Auto solve, vec x =
       [ 0.21111111  0.11666667 -0.15        0.25      ]
    

    Q4: compute det(A)

    To get the determinant of A, we multiply the diagonal of the upper triangular matrix resulting from the Gaussian reduction.

    def diagProd(M):
        # Multiply the diagonal of a square matrix
        s = 1
        for i in range(len(M)):
            s *= M[i][i]
        return s
    
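    As an aside, NumPy can compute the same diagonal product in one line with np.prod(np.diag(M)); for instance, on the hand-computed UT:

```python
import numpy as np

# UT from the hand computation above
UT = np.array([[-3, -1, 0, 3], [0, -3, 1, 2], [0, 0, 5, 3], [0, 0, 0, 4]])

# np.diag extracts the diagonal; np.prod multiplies its entries
print(np.prod(np.diag(UT)))  # → 180
```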

    Comparing our UT vs Python's U:

    ## det(A) using U, UT.
    print("\nU det:",diagProd(U))
    print("UT det:", diagProd(UT))
    print("Det(A):",det(A))
    

    we get:

    U det: -180.00000000000006
    UT det: 180.0
    Det(A): 180.00000000000003
    

    And we see that the U-derived calculation has the opposite sign of $\det(A)$, due to the odd number of row swaps recorded in $P$.
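    That sign flip is exactly $\det(P)$: a permutation matrix has determinant $(-1)^{\text{number of swaps}}$, so $\det(A) = \det(P) \cdot (\text{product of the diagonal of } U)$. A small sketch with a hypothetical single-swap permutation matrix:

```python
import numpy as np

# Hypothetical P that swaps rows 1 and 2 of a 4x4 matrix
P = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]
])

# One swap => det(P) = -1, which flips the sign of the diagonal product
print(round(np.linalg.det(P)))  # → -1
```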

  • @Ben That looks mostly great! Strangely, though, you've not included the code on the first part where you're explicitly asked to use a specific command. Could you include that code block, please?

  • @gabriel That looks really good! I've got just a couple minor issues:

    1. You've got a couple of extraneous back-ticks in your code blocks and
    2. I'm curious what that False is doing in the third row second column of your output for L.dot(U) == P.dot(A). Could you print the difference and see what we get?
  • edited February 2020

    @dan That looks pretty good! I do have a few issues, though.

    1. You've got some LaTeX formatting issues. I've edited a couple of those to get you started but you should try to fix the rest as well.
    2. Let's try to track down the origin of the Falses in your output for inv(P).dot(A) == L.dot(U) by examining the difference of the two.
  • @joshua That looks great - right up until the very end! You're using the det command to compute the determinant. The point is that you don't have to use that at all - you can multiply the terms on the diagonal.

  • edited February 2020

    My matrix is
    $A = \left(
    \begin{array}{cccc}
    2 & -3 & -1 & -1 \\
    -2 & 8 & 1 & -1 \\
    8 & -27 & -6 & 6 \\
    10 & -25 & -11 & 14
    \end{array}
    \right).$
    Question 1: First, I used the lu() command to find the permuted $LU$ Decomposition

    import numpy as np
    from scipy.linalg import lu,inv,solve,det
    A = np.array([
        [2,-3,-1,-1],
        [-2,8,1,-1],
        [8,-27,-6,6],
        [10,-25,-11,14]
    ])
    P,L,U = lu(A)
    print(P)
    print(L)
    print(U)
    #out:
    #[[0. 0. 1. 0.]
     #[0. 0. 0. 1.]
     #[0. 1. 0. 0.]
     #[1. 0. 0. 0.]]
    #[[ 1.00000000e+00  0.00000000e+00  0.00000000e+00  0.00000000e+00]
     #[ 8.00000000e-01  1.00000000e+00  0.00000000e+00  0.00000000e+00]
     #[ 2.00000000e-01 -2.85714286e-01  1.00000000e+00  0.00000000e+00]
     #[-2.00000000e-01 -4.28571429e-01  3.01346250e-17  1.00000000e+00]]
    #[[ 10.         -25.         -11.          14.        ]
     #[  0.          -7.           2.8         -5.2       ]
     #[  0.           0.           2.          -5.28571429]
     #[  0.           0.           0.          -0.42857143]]
    

    To verify the correctness of this decomposition, I computed the product of $P \cdot L \cdot U$ and subtracted $A$, which should yield the Zero Matrix.

    P.dot(L).dot(U) - A
    #out:array([[ 0.00000000e+00,  4.44089210e-16, -2.22044605e-16,
    #    2.22044605e-16],
    #   [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
    #     4.44089210e-16],
    #   [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
    #     0.00000000e+00],
    #   [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
    #     0.00000000e+00]])
    

    Although some of the entries are not exactly zero, the "nonzero" entries are extremely small and are the result of roundoff error.
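    One way to make "extremely small" precise is to look at the largest absolute entry of the difference; a sketch using the same A:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([
    [ 2,  -3,  -1, -1],
    [-2,   8,   1, -1],
    [ 8, -27,  -6,  6],
    [10, -25, -11, 14]
])

P, L, U = lu(A)

# Largest entry of |PLU - A|: the worst-case roundoff error
err = np.max(np.abs(P.dot(L).dot(U) - A))
print(err < 1e-12)  # → True
```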

    Question 2:
    To compute this decomposition manually, I used the following elementary matrices:

    E1 = np.array([
     [1,0,0,0],
     [1,1,0,0],
     [-4,0,1,0],
     [-5,0,0,1]
     ])
    E2 = np.array([
    [1,0,0,0],
    [0,1,0,0],
    [0,3,1,0],
    [0,2,0,1]
    ])
    E3 = np.array([
    [1,0,0,0],
    [0,1,0,0],
    [0,0,1,0],
    [0,0,-3,1]
    ])
    

    The U Matrix is computed by multiplying the elementary matrices to A:

    U = E3.dot(E2.dot(E1.dot(A)))
    U
    #out: array([[ 2, -3, -1, -1],
    #    [ 0,  5,  0, -2],
    #    [ 0,  0, -2,  4],
    #    [ 0,  0,  0,  3]])
    

    Then L is computed by $(E_3 \cdot E_2 \cdot E_1)^{-1}$:

    L = inv(E3.dot(E2.dot(E1)))
    L
    #out: array([[ 1.,  0., -0.,  0.],
    #   [-1.,  1., -0.,  0.],
    #   [ 4., -3.,  1.,  0.],
    #   [ 5., -2.,  3.,  1.]])
    
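    Note that inverting these elementary matrices is essentially free: the inverse of a single-column elimination matrix is obtained by flipping the signs of its below-diagonal entries, so no numerical inversion is really needed. For example, with the E3 above:

```python
import numpy as np
from scipy.linalg import inv

E3 = np.array([
    [1, 0,  0, 0],
    [0, 1,  0, 0],
    [0, 0,  1, 0],
    [0, 0, -3, 1]
])

# The inverse undoes "add -3*row3 to row4" by adding +3*row3
E3_inv = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 3, 1]
])

print(np.allclose(inv(E3), E3_inv))  # → True
```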

    Finally, I confirmed the decomposition:

    L.dot(U) == A
    #out: array([[ True,  True,  True,  True],
    #   [ True,  True,  True,  True],
    #   [ True,  True,  True,  True],
    #   [ True,  True,  True,  True]])
    

    Question 3: Now to solve, $A\vec{x} = \langle 0,0,0,1 \rangle$ using the $LU$ decomposition, I input the $\vec{b}$ vector and solved for $L\vec{c} = \vec{b}$.

    b = [0,0,0,1]
    c = solve(L,b)
    c
    #out:array([ 0., -0., -0.,  1.])
    

    Therefore $\vec{c} = \langle 0,0,0,1 \rangle$. Next, I solved $U\vec{x} = \vec{c}$:

    x = solve(U,c)
    x
    #out: array([0.7       , 0.13333333, 0.66666667, 0.33333333])
    

    Therefore $\vec{x} = \langle \frac{7}{10}, \frac{2}{15}, \frac{2}{3}, \frac{1}{3} \rangle$. To verify this, I checked $A\vec{x}$:

    A.dot(x)
    #out: array([-1.11022302e-16,  5.55111512e-17, -4.44089210e-16,  1.00000000e+00])
    

    Accounting for roundoff error, this is equal to our original $\vec{b}$.

    Question 4:
    For the $LU$ decomposition by hand, the determinant is given as follows:

    print("U = \n", U)
    det1 = 1
    for i in range(4):
        det1 *= U[i][i]
    det1
    #out: U = 
    #[[ 2 -3 -1 -1]
    #[ 0  5  0 -2]
    #[ 0  0 -2  4]
    #[ 0  0  0  3]]
    #-60
    

    Now to use the Permuted $LU$ decomposition, recall that $P$ encodes 3 row swaps, so the diagonal product must be multiplied by $(-1)^3 = -1$.

    P1,L1,U1 = lu(A)   # recover the permuted U from Question 1 (called U1 here)
    print(U1)
    det2 = 1
    for i in range(4):
        det2 *= U1[i][i]
    det2 *= (-1)**3
    det2
    #out:  [[ 10.         -25.         -11.          14.        ]
    # [  0.          -7.           2.8         -5.2       ]
    # [  0.           0.           2.          -5.28571429]
    # [  0.           0.           0.          -0.42857143]]
    # -59.999999999999986
    
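    We can confirm that sign directly from the P printed in Question 1: its determinant is $(-1)^3 = -1$. A quick check:

```python
import numpy as np

# P as reported by lu() in Question 1
P = np.array([
    [0., 0., 1., 0.],
    [0., 0., 0., 1.],
    [0., 1., 0., 0.],
    [1., 0., 0., 0.]
])

# Three row swaps => det(P) = (-1)**3 = -1
print(round(np.linalg.det(P)))  # → -1
```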
  • edited February 2020

    My personal matrix is the following.

    [[  5   3  -3   1]
     [-10  -8  11   0]
     [  5   9 -20  -7]
     [ 20  18 -21   2]]

    Part I

    To find the LU Decomposition we can use the following python code.

    P,L,U = lu(A)
    print("\tP\n",P,"\n")
    print("\tL\n",L,"\n")
    print("\tU\n",U,"\n")

    The result is,

    P
    

    [[0. 0. 0. 1.]
    [0. 0. 1. 0.]
    [0. 1. 0. 0.]
    [1. 0. 0. 0.]]

    L
    

    [[ 1. 0. 0. 0. ]
    [ 0.25 1. 0. 0. ]
    [-0.5 0.22222222 1. 0. ]
    [ 0.25 -0.33333333 -0.70588235 1. ]]

    U
    

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 0. 3.77777778 2.66666667]
    [ 0. 0. 0. -0.11764706]]

    To verify the result we can show that PA is equal to LU.

    P=inv(P)
    side1=P.dot(A)
    side2=L.dot(U)
    print("Side 1\n" + str(side1) + "\nSide 2\n" + str(side2))

    Side 1
    [[ 20. 18. -21. 2.]
    [ 5. 9. -20. -7.]
    [-10. -8. 11. 0.]
    [ 5. 3. -3. 1.]]
    Side 2
    [[ 20. 18. -21. 2.]
    [ 5. 9. -20. -7.]
    [-10. -8. 11. 0.]
    [ 5. 3. -3. 1.]]

    Part II

    Now I will show the steps of this computation, which proceeds by multiplying A by a series of elementary (and permutation) matrices. An example of the code I used is shown below.

    E1=np.array([
    [0,0,0,1],[0,1,0,0],[0,0,1,0],[1,0,0,0]
    ])

    A1=E1.dot(A)
    print(E1,"\n",A1)

    Step 1

    [[0 0 0 1]
    [0 1 0 0]
    [0 0 1 0]
    [1 0 0 0]]

    Step 1 result

    [[ 20 18 -21 2]
    [-10 -8 11 0]
    [ 5 9 -20 -7]
    [ 5 3 -3 1]]

    Step 2

    [[1. 0. 0. 0. ]
    [0.5 1. 0. 0. ]
    [0. 0. 1. 0. ]
    [0. 0. 0. 1. ]]

    Step 2 result

    [[ 20. 18. -21. 2. ]
    [ 0. 1. 0.5 1. ]
    [ 5. 9. -20. -7. ]
    [ 5. 3. -3. 1. ]]

    Step 3

    [[ 1. 0. 0. 0. ]
    [ 0. 1. 0. 0. ]
    [-0.25 0. 1. 0. ]
    [ 0. 0. 0. 1. ]]

    Step 3 result

    [[ 20. 18. -21. 2. ]
    [ 0. 1. 0.5 1. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 5. 3. -3. 1. ]]

    Step 4

    [[1 0 0 0]
    [0 0 1 0]
    [0 1 0 0]
    [0 0 0 1]]

    Step 4 result

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 1. 0.5 1. ]
    [ 5. 3. -3. 1. ]]

    Step 5

    [[ 1. 0. 0. 0. ]
    [ 0. 1. 0. 0. ]
    [ 0. 0. 1. 0. ]
    [-0.25 0. 0. 1. ]]

    Step 5 result

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 1. 0.5 1. ]
    [ 0. -1.5 2.25 0.5 ]]

    Step 6

    [[1 0 0 0]
    [0 1 0 0]
    [0 0 0 1]
    [0 0 1 0]]

    Step 6 result

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. -1.5 2.25 0.5 ]
    [ 0. 1. 0.5 1. ]]

    Step 7

    [[1. 0. 0. 0. ]
    [0. 1. 0. 0. ]
    [0. 0.33333333 1. 0. ]
    [0. 0. 0. 1. ]]

    Step 7 result

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 0. -2.66666667 -2. ]
    [ 0. 1. 0.5 1. ]]

    Step 8

    [[ 1. 0. 0. 0. ]
    [ 0. 1. 0. 0. ]
    [ 0. -0.22222222 0. 1. ]
    [ 0. 0. 1. 0. ]]

    Step 8 result

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 0. 3.77777778 2.66666667]
    [ 0. 0. -2.66666667 -2. ]]

    Step 9

    [[1. 0. 0. 0. ]
    [0. 1. 0. 0. ]
    [0. 0. 1. 0. ]
    [0. 0. 0.70588235 1. ]]

    Step 9 result

    [[ 2.00000000e+01 1.80000000e+01 -2.10000000e+01 2.00000000e+00]
    [ 0.00000000e+00 4.50000000e+00 -1.47500000e+01 -7.50000000e+00]
    [ 0.00000000e+00 0.00000000e+00 3.77777778e+00 2.66666667e+00]
    [ 0.00000000e+00 0.00000000e+00 -1.11111107e-08 -1.17647067e-01]]

    Notice that aside from some roundoff error this is essentially the same as the U we were given by the command.

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 0. 3.77777778 2.66666667]
    [ 0. 0. 0. -0.11764706]]

    Now we have to confirm that A=LU.

    L1=np.array([
    [1,0,0,0],[1/4,1,0,0],[-1/2,2/9,1,0],[1/4,-1/3,-0.70588235,1]
    ])
    inv(P).dot(L1.dot(A9))

    The result matches A, up to roundoff error on the order of 1e-8 carried over from the approximate zero produced in Step 9. (Note that the row-3 multiplier must be 2/9, the 0.22222222 entry of L, not 1/9; with 1/9 the product comes out visibly wrong, off by far more than floating point error could explain.)

    Part 3

    Suppose we want to solve $A\vec{x}=(0,0,0,1)$. We could do this by multiplying the vector by the inverse of A, but since we've already found L, U, and P, we can use those components and do the calculation in steps. I will define B=(0,0,0,1).

    Step 1 (P*B)

    [[-0. 0. 0. 1.]
    [-0. 0. 0. 0.]
    [-0. 0. 0. 0.]
    [ 0. 0. 0. 0.]]

    Step 2 (L inverse dotted with the result from step 1)

    [[ 0. 0. 0. 1. ]
    [ 0. 0. 0. -0.25 ]
    [ 0. 0. 0. 0.55555556]
    [ 0. 0. 0. 0.05882353]]

    Step 3 (U inverse dotted with the result from step 2)

    [[ 0. 0. 0. -0.05]
    [ 0. 0. 0. 0.75]
    [ 0. 0. 0. 0.5 ]
    [ 0. 0. 0. -0.5 ]]

    This should be our final answer. To confirm we can just use the command inv(A)*B.

    array([[ 0. , 0. , 0. , -0.05],
    [ 0. , 0. , 0. , 0.75],
    [ 0. , 0. , 0. , 0.5 ],
    [-0. , -0. , -0. , -0.5 ]])

    Part 4

    To find the determinant of A we can simply use the diagonals of our U matrix.

    [[ 20. 18. -21. 2. ]
    [ 0. 4.5 -14.75 -7.5 ]
    [ 0. 0. 3.77777778 2.66666667]
    [ 0. 0. 0. -0.11764706]]

    20 * 4.5 * 3.77777778 * -0.11764706 = -40.00000042352941

    To confirm this we can use the command det(A), which yields,

    -39.999999999999964
    #It's safe to assume that the determinant of A is -40
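    Interestingly, no sign correction was needed here: the P from Part I encodes two disjoint row swaps, so $\det(P) = +1$ and the diagonal product already matches det(A). A quick check with that P:

```python
import numpy as np

# P as reported in Part I
P = np.array([
    [0., 0., 0., 1.],
    [0., 0., 1., 0.],
    [0., 1., 0., 0.],
    [1., 0., 0., 0.]
])

# Two disjoint swaps (rows 1<->4 and 2<->3) => det(P) = +1
print(round(np.linalg.det(P)))  # → 1
```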

  • edited February 2020

    I started with my Matrix A
    $$\begin{pmatrix}
    1 & -3 & -2 & -1 \\
    3 & -4 & -4 & 0 \\
    3 & 6 & -3 & 3 \\
    -3 & 19 & 1 & 1
    \end{pmatrix}$$
    1.
    I input my matrix A into the code

    A = np.array([ [1,-3,-2,-1], [3,-4,-4,0], [3,6,-3,3], [-3,19,1,1] ])

    I then used the lu function to find and print P, L and U
    P,L,U = lu(A)

    print(P)
    print(L)
    print(U)
    print(P.dot(L).dot(U))
    print(A)
    

    And got back
    [[0. 0. 0. 1.]
     [1. 0. 0. 0.]
     [0. 0. 1. 0.]
     [0. 1. 0. 0.]]
    [[ 1.          0.          0.          0.        ]
     [-1.          1.          0.          0.        ]
     [ 1.          0.66666667  1.          0.        ]
     [ 0.33333333 -0.11111111 -0.33333333  1.        ]]
    [[  3.          -4.          -4.           0.        ]
     [  0.          15.          -3.           1.        ]
     [  0.           0.           3.           2.33333333]
     [  0.           0.           0.          -0.11111111]]
    [[ 1. -3. -2. -1.]
     [ 3. -4. -4.  0.]
     [ 3.  6. -3.  3.]
     [-3. 19.  1.  1.]]
    [[ 1 -3 -2 -1]
     [ 3 -4 -4  0]
     [ 3  6 -3  3]
     [-3 19  1  1]]

    We can see that $A = PLU$; in this case the command gives us matrices with $A = PLU$, as opposed to $PA = LU$. We could recover the latter form by simply using $P^{-1}A = LU$.

    2.

    In order to solve for U and L, we will first apply Gaussian Elimination to our matrix to find U.

    We will eliminate the numbers in each column and find three elementary matrices, one for each column (except column 4)
    E1 = np.array([ [1,0,0,0], [-3,1,0,0], [-3,0,1,0], [3,0,0,1] ])
    E2 = np.array([ [1,0,0,0], [0,1,0,0], [0,-3,1,0], [0,-2,0,1] ])
    E3 = np.array([ [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,-3,1] ])
    Going through each of these steps we can find our upper matrix U
    U = E3.dot(E2.dot(E1).dot(A))
    #out: [[ 1 -3 -2 -1]
    #      [ 0  5  2  3]
    #      [ 0  0 -3 -3]
    #      [ 0  0  0  1]]
    We can then find the lower matrix L by combining the elementary matrices and flipping the signs on the numbers below the diagonal. Doing so, we find that L is

    L = np.array([ [1,0,0,0], [3,1,0,0], [3,3,1,0], [-3,2,3,1] ])
    I went ahead and verified this
    L.dot(U) == A
    #out: array([[ True,  True,  True,  True],
    #            [ True,  True,  True,  True],
    #            [ True,  True,  True,  True],
    #            [ True,  True,  True,  True]])
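    Because everything in this decomposition is integer arithmetic, the elementwise comparison above is exact, and np.array_equal condenses it to a single boolean; a sketch using the A, L, and U from this post:

```python
import numpy as np

A = np.array([[1, -3, -2, -1], [3, -4, -4, 0], [3, 6, -3, 3], [-3, 19, 1, 1]])
L = np.array([[1, 0, 0, 0], [3, 1, 0, 0], [3, 3, 1, 0], [-3, 2, 3, 1]])
U = np.array([[1, -3, -2, -1], [0, 5, 2, 3], [0, 0, -3, -3], [0, 0, 0, 1]])

# array_equal returns one True/False instead of an elementwise matrix
print(np.array_equal(L.dot(U), A))  # → True
```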

    3.

    Given the vector $\vec{b} = \langle 0,0,0,1 \rangle$, we can solve for $\vec{x}$ in $A\vec{x} = \vec{b}$. Using the LU decomposition, we first solve $L\vec{c} = \vec{b}$ for $\vec{c}$, then solve $U\vec{x} = \vec{c}$.

    In order to solve for $\vec{c}$, we find the inverse of $L$ and compute $\vec{c} = L^{-1}\vec{b}$ (here $b$ is entered as a column vector):

    b = np.array([[0],[0],[0],[1]])
    L_inv = np.linalg.inv(L)
    print(L_inv)
    c = L_inv.dot(b)
    print(c)
    #out: [[ 1.00000000e+00  3.70074342e-17 -6.16790569e-17  0.00000000e+00]
    #      [-3.00000000e+00  1.00000000e+00  1.85037171e-16  0.00000000e+00]
    #      [ 6.00000000e+00 -3.00000000e+00  1.00000000e+00 -0.00000000e+00]
    #      [-9.00000000e+00  7.00000000e+00 -3.00000000e+00  1.00000000e+00]]
    #     [[0.] [0.] [0.] [1.]]
    So we find that $\vec{c} = \vec{b}$.
    Now we can solve for $\vec{x}$ using $\vec{x} = U^{-1}\vec{c}$:

    U_inv = np.linalg.inv(U)
    print(U_inv)
    x = U_inv.dot(c)
    print(x)
    #out: [[ 1.          0.6        -0.26666667 -1.6       ]
    #      [ 0.          0.2         0.13333333 -0.2       ]
    #      [-0.         -0.         -0.33333333 -1.        ]
    #      [ 0.          0.          0.          1.        ]]
    #     [[-1.6] [-0.2] [-1. ] [ 1. ]]
    So we see that $\vec{x} = \langle -1.6, -0.2, -1, 1 \rangle$.

    We can input this x value back into $Ax = b$ to verify its accuracy
    A.dot(x)
    #out: array([[ 0.0000000e+00],
    #            [-8.8817842e-16],
    #            [-8.8817842e-16],
    #            [ 1.0000000e+00]])
    We see that this is essentially equivalent to b, as float points/roundoff have contributed to some very slight error.

    4.
    To find the determinant, we simply use the determinant function with both our LU decomposition as well as our PLU decomposition:

    np.linalg.det(L.dot(U))
    np.linalg.det(P.dot(L.dot(U)))

    We find that $\det(LU) = -14.999999999999993$ and $\det(PLU) = -15.00000000000002$. So the determinant of our matrix should be $-15$.

  • edited February 2020

    The matrix I was assigned was:

    $
    A = \left(
    \begin{array}{cccc}
    2 & 1 & -3 & 4 \\
    8 & 3 & -8 & 20 \\
    0 & -4 & 18 & 21 \\
    0 & 2 & -11 & -14 \\
    \end{array}
    \right).
    $

    For problem 1, I used the lu() function to create the matrices P_inv, L, and U. I then took the inverse of P_inv to get P, so that I could verify LU == PA:

    A = np.array([[2 , 1 , -3 ,  4 ],
           [ 8 ,  3 ,  -8. ,  20 ],
           [ 0 ,  -4 , 18, 21],
           [ 0 , 2 , -11 , -14 ]])
    
    P_inv,L,U = lu(A)
    
    print("\n A: ")
    print(A)
    print("\n L: ")
    print(L)
    print("\n U: ")
    print(U)
    
    print("\n P_inv: ")
    print(P_inv)
    P = inv(P_inv)
    print("\n P:")
    print(P)
    print("\n PA:")
    PA = P.dot(A)
    print(PA)
    LU = L.dot(U)
    print("\n LU:")
    print(LU)
    
    LU == PA
    

    The following are the outputs I received from the print() commands for each matrix:


    For problem 2, I needed to solve for L and U such that A = LU. To start, I used four elementary row operations to transform A into an upper triangular matrix (U). From here, L can be found by taking the inverse of the dot products of the elementary row operations (without A):

    E1 = np.array([[1 , 0 , 0 ,  0 ],
           [ -4 ,  1 ,  0. ,  0 ],
           [ 0 ,  0 , 1,  0],
           [ 0 , 0 , 0 , 1 ]])
    E2 = np.array([[1 , 0 , 0 ,  0 ],
           [ 0 ,  1 ,  0. ,  0 ],
           [ 0 ,  -4 , 1,  0],
           [ 0 , 0 , 0 , 1 ]])
    E3 = np.array([[1 , 0 , 0 ,  0 ],
           [ 0 ,  1 ,  0. ,  0 ],
           [ 0 ,  0 , 1,  0],
           [ 0 , 2 , 0 , 1 ]])
    E4 = np.array([[1 , 0 , 0 ,  0 ],
           [ 0 ,  1 ,  0. ,  0 ],
           [ 0 ,  0 , 1,  0],
           [ 0 , 0 , 3/2 , 1 ]])
    
    print("\n E1 ( -4R1 + R2 ): ")
    print(E1)
    print("\n A after performing E1:")
    print(E1.dot(A))
    print("\n E2 ( -4R2 + R3 ): ")
    print(E2)
    print("\n A after performing E2:")
    print((E2.dot(E1.dot(A))))
    print("\n E3 ( 2R2 + R4 ): ")
    print(E3)
    print("\n A after performing E3:")
    print((E3.dot(E2.dot(E1.dot(A)))))
    print("\n E4 ( (3/2)*R3 + R4 ): ")
    print(E4)
    U = E4.dot(E3.dot(E2.dot(E1.dot(A))))
    print("\n U: ")
    print(U)
    
    L = inv(E4.dot(E3.dot(E2.dot(E1))))
    print("\n L: ")
    print(L)
    
    print("\n LU:")
    print(L.dot(U))
    print("\n Verify LU == A:")
    L.dot(U) == A
    

    The following block shows the results from the output console:


    For problem 3, I needed to solve the system $A\vec{x} = \langle 0,0,0,1 \rangle$, where $\vec{b} = (0,0,0,1)$. Since $A = LU$, the system is $L(U\vec{x}) = \vec{b}$; writing $\vec{c} = U\vec{x}$, I used the solve() function with $L$ and $\vec{b}$ to find $\vec{c}$. Once I had $\vec{c}$, I used solve() with $U$ and $\vec{c}$ to find $\vec{x}$:

    b = [0, 0, 0, 1]
    
    U = np.array([[ 2,   1,  -3,   4, ],
         [ 0,  -1,   4,   4, ],
         [ 0,   0,   2,   5 ],
         [ 0,   0,   0,   1.5]])
    
    L = np.array([[ 1,   0,   0,  0, ],
         [ 4,   1,   0,   0, ],
         [ 0,   4,   1,   0, ],
         [ 0,  -2,  -1.5,  1 ]])
    LU = L.dot(U)
    
    #print(LU)
    c = solve(L,b)
    print("\n c:")
    print(c)
    x = solve(U,c)
    print("\n x:")
    print(x)
    

    The following are the outputs:


    From this, it can be seen that:
    $$ x = \langle {\frac{-11}{6}, -4, \frac{-5}{3}, \frac{2}{3}} \rangle $$
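    As a final exact check (no roundoff at all), Python's fractions module can verify that these fractions really satisfy $A\vec{x} = \vec{b}$; a sketch using the A from the code above:

```python
from fractions import Fraction

# A and the solution x from above, as exact fractions
A = [[2, 1, -3, 4], [8, 3, -8, 20], [0, -4, 18, 21], [0, 2, -11, -14]]
x = [Fraction(-11, 6), Fraction(-4), Fraction(-5, 3), Fraction(2, 3)]

# Exact matrix-vector product: should reproduce b = (0, 0, 0, 1)
b = [sum(Fraction(a) * xi for a, xi in zip(row, x)) for row in A]
print(b)  # → [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]
```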

    The last problem asked for the determinant of A using the matrices found in problems 1 and 3. The determinant(s) can be found by multiplying across the diagonal of U:

    det_U_1 = 8*-4*-2*0.09375
    det_U_3 = 2*-1*2*1.5
    print("\n Determinant found by multiplying along the diagonal of U found in Problem 1:")
    print(det_U_1)
    print("\n Determinant found by multiplying along the diagonal of U found in Problem 3:")
    print(det_U_3)
    print("\n Determinant found by using det(A) to verify the solution above:")
    print(det(A))
    
    

    The output of the above code was:

     Determinant found by multiplying along the diagonal of U found in Problem 1:
    6.0
    
     Determinant found by multiplying along the diagonal of U found in Problem 3:
    -6.0
    
     Determinant found by using det(A) to verify the solution above:
    -6.0000000000000115
    

    As can be seen, both LU Decomposition and Permuted LU Decomposition can be used to calculate the determinant without using the det() function. Note, though, that the diagonal product from the permuted U came out as +6; it still needs to be multiplied by $\det(P) = -1$ (an odd number of row swaps) to recover $\det(A) = -6$.

  • edited February 2020

    @joshuam That looks mostly really great! I just have one issue with part 3. When you use the solve command as you did: solve(LU,b), well, since LU == A, there's really no difference between that and solve(A,b). Thus, you haven't used the LU Decomposition to any actual benefit.

    Also, while you're at it - take a look at my opening comment on formatting matrices in the question itself.

  • @opie You're off to a good start but there's a few issues to work on.

    • Your code formatting is mostly off because you're using back-ticks (meant for very short, inline code) to format block code. For blocks of code, just indent four spaces - as you did for your first group of print statements.

    • For part 3, don't use inv! The main thing that we've learned about the inverse of a matrix is to not use it. :)

    • For part 4, don't use det - similar comments apply. For both of these parts, you should think in terms of the alternative approaches we've learned.

  • @mark Think I got it this time.

  • @miguel It looks like you made some progress but, when you type det(P.dot(L).dot(U)), you are effectively computing det(A) so you haven't used the LU Decomposition to any actual benefit.
