Diagonalization

Welcome to our first real online presentation! Today, I'm going to talk about eigenvalues, eigenvectors, and their use to diagonalize matrices.

The presentation works just like last time - use the arrows in the lower right to navigate through the slides and listen to the audio recording when the player appears (which it doesn't on this first slide).

Eigen-pairs

I assume that we all know that, given $A\in\mathbb R^{n\times n}$, an eigenvalue/eigenvector pair for $A$ consists of a scalar $\lambda$ and a nonzero vector $\vec{v}\in\mathbb R^n$ such that $A\vec{v} = \lambda \vec{v}$.
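The defining property is easy to check numerically. Here's a minimal sketch in NumPy, using a hypothetical $2\times 2$ matrix for which $\lambda=3$ and $\vec{v}=(1,1)$ happen to form an eigen-pair:

```python
import numpy as np

# Hypothetical example: for this A, lambda = 3 with eigenvector v = (1, 1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])
lam = 3.0

# The defining property A v = lambda v:
assert np.allclose(A @ v, lam * v)
```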

I'll also assume that we understand that, geometrically, this means that the one-dimensional subspace of $\mathbb R^n$ spanned by $\vec{v}$ is invariant under the action of $A$.

Computing eigen-pairs

In Linear I, we write $A\vec{v}=\lambda \vec{v}$ as $(A-\lambda I)\vec{v}=\vec{0}$, which has a nonzero solution iff $\det(A-\lambda I)=0$. Solving that equation gives the eigenvalues, and back-substitution then yields the eigenvectors.
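We can sanity-check that recipe numerically. For the same hypothetical matrix as above, the characteristic polynomial is $\det(A-\lambda I)=\lambda^2-4\lambda+3=(\lambda-1)(\lambda-3)$, so the determinant should vanish at $\lambda=1$ and $\lambda=3$:

```python
import numpy as np

# Hypothetical example matrix; det(A - lambda I) = (lambda - 1)(lambda - 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The determinant of A - lambda I vanishes exactly at the eigenvalues:
for lam in (1.0, 3.0):
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-12
```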

In numerical linear algebra, we'll learn a couple of new algorithms to compute eigen-pairs. These are iterative techniques based on the structure of the matrix, and diagonalization is the first step to understanding that structure.

Diagonalization

We say that $A\in\mathbb R^{n\times n}$ is diagonalizable if there is a diagonal matrix $D$ and a non-singular matrix $S$ such that

$$A = SDS^{-1} \text{ or, equivalently } D=S^{-1}AS.$$

As we'll see, this form will help us understand the geometric action of $A$.
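The two forms of the definition really are equivalent, as a quick numerical check confirms. Here's a sketch with a hypothetical invertible $S$ and diagonal $D$:

```python
import numpy as np

# Hypothetical S (any invertible matrix will do) and diagonal D.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 5.0])

# Build A = S D S^{-1} ...
A = S @ D @ np.linalg.inv(S)

# ... and recover D = S^{-1} A S.
D_back = np.linalg.inv(S) @ A @ S
assert np.allclose(D_back, D)
```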

Geometric action 1

Let's try to understand the geometric action of a matrix written in the form $SDS^{-1}$.

As a first baby step, we'll describe the geometric action of a diagonal matrix:

$$D = \left(\begin{matrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{matrix}\right).$$

Geometric action 2

Let's write the $j^{th}$ coordinate basis vector $\vec{e}_j$ as

$$ \left(\vec{e}_j\right)_i = \begin{cases} 1 & \text{if } i=j \\ 0 & \text{if } i\neq j \end{cases}. $$

That is, $\vec{e}_j$ is the vector that has a one in the $j^{\text{th}}$ spot and zeros otherwise.

Geometric action 3

Now, $D\vec{e}_j = d_j \vec{e}_j$. Thus, the geometric action of $D$ on $\vec{e}_j$ is super easy to see; it just stretches or compresses and possibly reflects $\vec{e}_j$, depending on the value of $d_j$. This is illustrated below.

Animation of square morphing into rectangle
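The scaling behavior is easy to verify directly. A minimal NumPy sketch, using hypothetical diagonal entries that stretch, reflect, and compress, respectively:

```python
import numpy as np

# Hypothetical diagonal entries: stretch by 3, reflect, compress by 1/2.
d = np.array([3.0, -1.0, 0.5])
D = np.diag(d)

# D acts on each coordinate basis vector e_j by simple scaling:
for j in range(3):
    e_j = np.eye(3)[:, j]
    assert np.allclose(D @ e_j, d[j] * e_j)
```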

Geometric action 4

To extend this to a matrix of the form $SDS^{-1}$, we write $S$ as a list of column vectors:

$$S = \left(\vec{S}_1 \: \vec{S}_2 \: \cdots \: \vec{S}_n\right).$$

Then, $S\vec{e}_j = \vec{S}_j$; that is, applying $S$ to the $j^{\text{th}}$ coordinate basis vector simply extracts the $j^{\text{th}}$ column of $S$. Of course, $S^{-1}$ does exactly the opposite: it maps the columns of $S$ back to the coordinate basis vectors.
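Both directions can be checked in a couple of lines, again using a hypothetical invertible $S$:

```python
import numpy as np

# Hypothetical invertible S.
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])

e_2 = np.array([0.0, 1.0])     # second coordinate basis vector

col = S @ e_2                  # extracts the second column of S
back = np.linalg.inv(S) @ col  # maps that column back to e_2

assert np.allclose(col, S[:, 1])
assert np.allclose(back, e_2)
```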

Geometric action 5

We can now see exactly how $A=SDS^{-1}$ acts on space. Given a column vector $\vec{S}_j$ of $S$,

  • multiplication by $S^{-1}$ maps $\vec{S}_j \to \vec{e}_j$,
  • multiplication by $D$ then acts on the coordinate basis vectors as we've already learned, and
  • $S$ finally maps those modified coordinate basis vectors back to the subspaces spanned by its columns.
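The three steps above can be traced one at a time. A sketch with the same kind of hypothetical $S$ and $D$ as before:

```python
import numpy as np

# Hypothetical S and D; A = S D S^{-1}.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 5.0])
A = S @ D @ np.linalg.inv(S)

# Trace the three steps on the first column S_1 of S:
S_1 = S[:, 0]
step1 = np.linalg.inv(S) @ S_1   # S^{-1} maps S_1 -> e_1
step2 = D @ step1                # D scales e_1 by d_1 = 2
step3 = S @ step2                # S maps 2 e_1 -> 2 S_1

# Net effect: A stretches S_1 by the factor d_1 = 2.
assert np.allclose(step3, 2.0 * S_1)
assert np.allclose(A @ S_1, 2.0 * S_1)
```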

Geometric action 6

The net effect of all this is ...

More complicated animation

Diagonalizing matrices 1

For a matrix $A\in\mathbb R^{n\times n}$ to be diagonalizable, it must have a set of $n$ linearly independent eigenvectors.

Note that $A$ itself need not be non-singular; a zero eigenvalue simply places a zero on the diagonal of $D$. It is $S$ that must be non-singular, and that is exactly what the linear independence of the eigenvectors guarantees, as we'll see soon.

Diagonalizing matrices 2

Now, let's suppose that $A$ has a linearly independent set of $n$ eigenvectors. More specifically, suppose that the (eigenvalue, eigenvector) pairs of $A$ are $(\lambda_j,\vec{v}_j)$. Then, to diagonalize $A$, we simply choose $S$ to be the matrix whose columns are the $\vec{v}_j$s and $D$ to be the diagonal matrix with the $\lambda_j$s on the diagonal.
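This construction is exactly what `np.linalg.eig` hands back: the eigenvalues (the diagonal of $D$) and a matrix of eigenvector columns (our $S$). A sketch on a hypothetical symmetric matrix, which is guaranteed to be diagonalizable:

```python
import numpy as np

# Hypothetical symmetric matrix (symmetric => diagonalizable).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors -- the D and S of the text.
eigenvalues, S = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Check the factorization A = S D S^{-1}.
assert np.allclose(S @ D @ np.linalg.inv(S), A)
```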

Diagonalizing matrices 3

That is:

$$ S = \left(\vec{v}_1 \: \vec{v}_2 \: \cdots \: \vec{v}_n\right) $$

and

$$ D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}. $$

Diagonalizing matrices 4

The easiest way to see why this works is to proceed exactly as we did back in the slide on Geometric action 5. There we showed that $SDS^{-1}$ stretches (or compresses) the $j^{\text{th}}$ column of $S$ by the factor $\lambda_j$. But that's exactly what $A$ does, since $S$ was constructed so that column is an eigenvector of $A$.

Example

Suppose I want to diagonalize the matrix

$$A = \left( \begin{array}{ccc} 5 & 3 & 0 \\ -6 & -4 & 0 \\ 17 & 9 & -2 \\ \end{array} \right). $$

We'll use the basic, Linear I approach to finding the eigenvalues and eigenvectors. Thus, we would begin by computing $\det(A-\lambda I)$.

Example (cont)

$$ \begin{align*} \det(A-\lambda I) &= \left| \begin{array}{ccc} 5-\lambda & 3 & 0 \\ -6 & -\lambda -4 & 0 \\ 17 & 9 & -\lambda -2 \\ \end{array} \right| \\ &= 4 + 4 \lambda - \lambda^2 - \lambda^3 \\ &= -(\lambda -2) (\lambda +1) (\lambda +2). \end{align*} $$

Thus, the eigenvalues are $-2$, $-1$, and $2$.
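Since we're heading toward numerical linear algebra anyway, it's worth noting that we can confirm these eigenvalues directly with NumPy (this uses the matrix $A$ from the example, not a hypothetical one):

```python
import numpy as np

# The example matrix A.
A = np.array([[ 5.0,  3.0,  0.0],
              [-6.0, -4.0,  0.0],
              [17.0,  9.0, -2.0]])

# The eigenvalues should come out as -2, -1, 2 (up to rounding).
eigenvalues = np.sort(np.linalg.eigvals(A))
assert np.allclose(eigenvalues, [-2.0, -1.0, 2.0])
```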

Example (cont 2)

For each eigen-value $\lambda$, we solve the system $A\vec{v} = \lambda\vec{v}$. For $\lambda=2$, for example, we get:

$$\left( \begin{array}{ccc} 5 & 3 & 0 \\ -6 & -4 & 0 \\ 17 & 9 & -2 \\ \end{array} \right)\left(\begin{array}{c}x\\y\\z\end{array}\right) = 2\left(\begin{array}{c}x\\y\\z\end{array}\right). $$

Example (cont 3)

That's equivalent to the system

$$\begin{align*} 5 x+3 y&=2 x \\ -6 x-4 y&=2 y \\ 17 x+9 y-2 z&=2 z \end{align*}$$

which has general solution $y=-x$, $z=2x$, where $x$ is free. (Recall that $A-\lambda I$ is singular by construction, so the system necessarily has a free variable.)
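We can spot-check that every member of this one-parameter family really satisfies $A\vec{v}=2\vec{v}$:

```python
import numpy as np

# The example matrix A.
A = np.array([[ 5.0,  3.0,  0.0],
              [-6.0, -4.0,  0.0],
              [17.0,  9.0, -2.0]])

# General solution for lambda = 2: y = -x, z = 2x.  Try a couple
# of (arbitrary) values of the free variable x:
for x in (1.0, -3.5):
    v = np.array([x, -x, 2.0 * x])
    assert np.allclose(A @ v, 2.0 * v)
```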

Example (cont 4)

Taking $x=1$, we have $y=-1$ and $z=2$. Thus an eigenvector is $\langle 1,-1,2 \rangle$.

Similarly, eigenvectors for $\lambda = -1$ and $\lambda=-2$ are

  • $\lambda=-1$: $\vec{v}=\langle -1,2,1 \rangle$
  • $\lambda=-2$: $\vec{v}=\langle 0,0,1 \rangle$
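All three eigen-pairs can be verified at once with the defining property:

```python
import numpy as np

# The example matrix A.
A = np.array([[ 5.0,  3.0,  0.0],
              [-6.0, -4.0,  0.0],
              [17.0,  9.0, -2.0]])

# The three (eigenvalue, eigenvector) pairs found above.
pairs = [( 2.0, np.array([ 1.0, -1.0, 2.0])),
         (-1.0, np.array([-1.0,  2.0, 1.0])),
         (-2.0, np.array([ 0.0,  0.0, 1.0]))]

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
```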

Example (cont 5)

Thus, $A=SDS^{-1}$, where

$$ D = \left( \begin{array}{ccc} -2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \\ \end{array} \right) \text{ and } S = \left( \begin{array}{ccc} 0 & -1 & 1 \\ 0 & 2 & -1 \\ -1 & 1 & 2 \\ \end{array} \right). $$
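As a final check, the factorization assembled above does reproduce $A$ (note that the first column of this $S$ is $\langle 0,0,-1\rangle$, a perfectly good scalar multiple of the eigenvector $\langle 0,0,1\rangle$):

```python
import numpy as np

# The example matrix A and the D and S from the text.
A = np.array([[ 5.0,  3.0,  0.0],
              [-6.0, -4.0,  0.0],
              [17.0,  9.0, -2.0]])
D = np.diag([-2.0, -1.0, 2.0])
S = np.array([[ 0.0, -1.0,  1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0,  1.0,  2.0]])

assert np.allclose(S @ D @ np.linalg.inv(S), A)
```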