Eigenvalues and eigenvectors

Published

October 8, 2025

\[ \newcommand{\vect}[1]{\mathbf{#1}} \]

When we study a linear transformation \(T:\mathbb R^n \to \mathbb R^n\), there are often many invariant subspaces. We can simplify the transformation by studying how it acts on those smaller, invariant spaces. Eigenvectors and their associated eigenvalues are the algebraic tools for formulating this process. Furthermore, there’s a very geometric way to view these concepts.

The basic definitions

Throughout this presentation \(T\) will denote a linear transformation mapping \(\mathbb R^n \to \mathbb R^n\). We’ll generally suppose that \(T\) has the matrix representation \[T(\vect{x}) = A\vect{x}.\]

We say that a nonzero vector \(\vect{x}\) is an eigenvector of \(T\) with eigenvalue \(\lambda\) if \[T(\vect{x}) = A\vect{x} = \lambda \vect{x}.\]

Real eigenvalues

As we’ll see, there are good reasons to consider the possibility that the scalar \(\lambda\) might be either real or complex.

In the case where \(\lambda\) is real, the equation \[T(\vect{x}) = A\vect{x} = \lambda \vect{x}\] immediately implies that \(\text{span}(\{\vect{x}\})\), the one-dimensional subspace of \(\mathbb R^n\) spanned by \(\vect{x}\), is invariant under the action of \(T\).

The sign and magnitude of \(\lambda\) dictate the geometric properties of the transformation on that invariant subspace:

  • \(|\lambda|<1\) implies that \(T\) compresses \(\text{span}(\{\vect{x}\})\),
  • \(|\lambda|>1\) implies that \(T\) stretches out \(\text{span}(\{\vect{x}\})\),
  • \(\lambda<0\) implies that \(T\) reflects \(\text{span}(\{\vect{x}\})\).

Complex eigenvalues

When we say that \(\lambda\) is complex, we mean that \(\lambda\) has the form \[\lambda = a + b i,\] where \(a,b\in\mathbb R\) and \(i\) is the imaginary unit satisfying \(i^2 = -1\).

We’ll see how complex eigenvalues naturally arise from the computations involved in finding real eigenvalues and how there’s a natural interpretation of them involving rotation in the vector space \(\mathbb R^n\).

Examples

Let’s take a look at several examples in \(\mathbb R^2\). Some of these interactive examples will be familiar, since we first met them when we discussed the geometric action of matrices. The focus now, though, is how that action relates to eigenvalues and eigenvectors.

Example 1: Stretch and squish

The action of \[M = \begin{bmatrix}2&0\\0&1/2\end{bmatrix}\] preserves the subspace of \(\mathbb R^2\) spanned by \([ 1,0 ]^{\mathsf T}\) and the subspace spanned by \([ 0,1 ]^{\mathsf T}\).

  • \([ 1,0 ]^{\mathsf T}\) is an eigenvector with eigenvalue \(2\) and
  • \([ 0,1 ]^{\mathsf T}\) is an eigenvector with eigenvalue \(1/2\).

Example 2: Sideways stretch and squish

The action of \[M = \begin{bmatrix}1&1\\1/2&3/2\end{bmatrix}\] preserves the subspace of \(\mathbb R^2\) spanned by \([ 1,1 ]^{\mathsf T}\) and the subspace spanned by \([ -2,1 ]^{\mathsf T}\).

The first subspace has eigenvalue \(2\) and the second has eigenvalue \(1/2\).

Checking to see if a real number \(\lambda\) and a vector \(\vect{x}\) form an eigenvalue/eigenvector pair is generally easy. Just compute. For example, \[ \begin{bmatrix}1&1\\1/2&3/2\end{bmatrix} \begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}1+1\\1/2+3/2\end{bmatrix} = \begin{bmatrix}2\\2\end{bmatrix} = 2\begin{bmatrix}1\\1\end{bmatrix} \] and \[ \begin{bmatrix}1&1\\1/2&3/2\end{bmatrix} \begin{bmatrix}-2\\1\end{bmatrix} = \begin{bmatrix}-2+1\\-1+3/2\end{bmatrix} = \begin{bmatrix}-1\\1/2\end{bmatrix} = \frac{1}{2}\begin{bmatrix}-2\\1\end{bmatrix} \]
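If you’d rather let Sage do the arithmetic, a quick check looks something like this (the matrix and vector names here are just choices for illustration):

```
# verify the two eigenpairs for this matrix
M = matrix(QQ, [[1, 1], [1/2, 3/2]])
v1 = vector(QQ, [1, 1])
v2 = vector(QQ, [-2, 1])
print(M*v1 == 2*v1)        # True, so (2, v1) is an eigenpair
print(M*v2 == (1/2)*v2)    # True, so (1/2, v2) is an eigenpair
```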

Example 3: A skew

The action of \[M = \begin{bmatrix}1&1\\0&1\end{bmatrix}\] preserves the subspace of \(\mathbb R^2\) spanned by \([ 1,0 ]^{\mathsf T}\) and that’s it.

I guess that \([ 1,0 ]^{\mathsf T}\) is the only eigenvector, up to nonzero scalar multiples, and its eigenvalue is \(1\).

Let’s verify algebraically that \(\lambda=1\) is the only eigenvalue and that any corresponding eigenvector must have the form \([x \quad 0]^{\mathsf T}\). We begin by expanding out the product

\[ \begin{bmatrix}1&1\\0&1\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}x+y\\y\end{bmatrix} = \lambda \begin{bmatrix}x\\y\end{bmatrix}. \] This is equivalent to the pair of equations \(x+y=\lambda x\) and \(y=\lambda y\). Note that the second equation can be written \(y(1-\lambda)=0\), which implies that either \[ \lambda = 1 \text{ or } y=0. \] If \(\lambda=1\), then the first equation states \(x+y=x\) so \(y=0\). If \(y=0\), then the first equation states \(x=\lambda x\) so \(\lambda=1\). Either way, \[ \lambda = 1 \text{ and } y=0. \]

Example 4: Stretch and flip

The action of \[M = \begin{bmatrix}1&3\\3&1\end{bmatrix}\] preserves and stretches the subspace of \(\mathbb R^2\) spanned by \([ 1,1 ]^{\mathsf T}\) and preserves but flips the subspace spanned by \([ 1,-1 ]^{\mathsf T}\).

The first has eigenvalue \(4\) and the second has eigenvalue \(-2\).

Example 5: Project

The action of \[M = \begin{bmatrix}4/5&2/5\\2/5&1/5\end{bmatrix}\] projects \(\mathbb R^2\) onto the subspace spanned by \([ 2,1 ]^{\mathsf T}\), which is an eigenvector with eigenvalue \(1\). The vector \([-1,2]^{\mathsf T}\) is an eigenvector for eigenvalue \(0\).

Finding eigenvalues

There’s a surprisingly easy way to find eigenvalues. Let’s begin by supposing that \[A\vect{x} = \lambda \vect{x}.\] Then, we’ll bring both terms to one side to obtain \[A\vect{x} - \lambda \vect{x} = \vect{0}.\] We could rewrite this as \[ \begin{aligned} A\vect{x} - \lambda \vect{x} &= A\vect{x} - \lambda I\vect{x} \\ &= \left(A-\lambda I\right)\vect{x} = \vect{0}. \end{aligned} \]

Note that, since an eigenvector \(\vect{x}\) must be nonzero, this implies that \(A-\lambda I\) must be singular. Of course, there’s a simple test for singularity, namely \[\det(A-\lambda I) = 0.\] The expression on the left is a polynomial of degree \(n\) in the variable \(\lambda\); it is called the characteristic polynomial of \(A\).

The characteristic polynomial gives us a simple algebraic criterion on \(\lambda\) that we can use to solve for \(\lambda\); we simply find its roots.

Example

Recall that \[M = \begin{bmatrix}1&1\\1/2&3/2\end{bmatrix}\] has the eigenvalues \(\lambda=2\) and \(\lambda =1/2\). Let’s show how we can use the characteristic polynomial to find those eigenvalues.

The first step will be to find the matrix \(M-\lambda I\), which can be obtained by simply subtracting \(\lambda\) off the diagonal of \(M\). Thus, \[M-\lambda I = \begin{bmatrix}1-\lambda&1\\1/2&3/2-\lambda\end{bmatrix}.\]

We now find the determinant \[\begin{aligned} \left|M-\lambda I\right| &= \begin{vmatrix}1-\lambda&1\\1/2&3/2-\lambda\end{vmatrix} = (1-\lambda)\left(\frac{3}{2}-\lambda\right) - \frac{1}{2} \\ &=\lambda^2-\frac{5\lambda}{2}+\frac{3}{2} - \frac{1}{2} = \lambda^2-\frac{5\lambda}{2}+1 \\ &= \frac{1}{2}\left(2\lambda^2 - 5\lambda + 2\right) = \frac{1}{2}(\lambda-2)(2\lambda-1). \end{aligned}\]

From the factored form, we can see easily that the eigenvalues are \[\lambda=2 \text{ and } \lambda=1/2.\]
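Sage can confirm the computation; `charpoly` is the standard method on Sage matrices, though naming the variable `lam` is just a choice:

```
M = matrix(QQ, [[1, 1], [1/2, 3/2]])
p = M.charpoly('lam')    # the characteristic polynomial of M
print(p)                 # lam^2 - 5/2*lam + 1
print(p.factor())        # (lam - 2) * (lam - 1/2)
```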

Interpreting eigenvalues

We now turn to the question of interpreting eigenvalues. While our current intuition is based largely on two-dimensional matrices with real eigenvalues, these ideas actually extend to higher-dimensional matrices with complex eigenvalues. A summary might look like:

  • The set of all eigenvectors for a single eigenvalue forms a subspace,
  • Eigenvectors for distinct eigenvalues are linearly independent,
  • The dimension of the subspace for an eigenvalue cannot exceed the algebraic multiplicity of the eigenvalue, but repeated eigenvalues can be a bit tricky.

Multiplicity??

So, what’s this notion of “algebraic multiplicity” referred to above? In fact, there are two types of multiplicity:

  • Algebraic multiplicity refers to the multiplicity of the eigenvalue as a root of the characteristic polynomial.
  • Geometric multiplicity refers to the dimension of the eigenspace.

We can illustrate the concept with two simple 2D examples. Note that \(\lambda=1\) is an eigenvalue for both \[ I = \begin{bmatrix}1&0\\0&1\end{bmatrix} \text{ and } M = \begin{bmatrix}1&1\\0&1\end{bmatrix}, \] because the characteristic polynomial of each matrix is \((\lambda-1)^2\). Note that the exponent of \(2\) is exactly the algebraic multiplicity. More generally, \(\lambda_0\) is an eigenvalue of algebraic multiplicity \(m\) for a matrix if \((\lambda-\lambda_0)^m\) divides the characteristic polynomial but \((\lambda-\lambda_0)^{m+1}\) does not.

The geometric multiplicity refers to the dimension of the eigenspace formed by the eigenvectors of the matrix. In our example above, the identity matrix \(I\) has the linearly independent set of eigenvectors \[ \vect{\imath} = [1 \quad 0]^{\mathsf T} \text{ and } \vect{\jmath} = [0 \quad 1]^{\mathsf T}. \] Thus, the geometric multiplicity of the eigenvalue \(\lambda=1\) for \(I\) is again \(2\). Note, though, that the geometric multiplicity of \(\lambda=1\) for \(M\) is just \(1\).
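We can compute both multiplicities for \(M\) in Sage, reading the algebraic multiplicity off the factored characteristic polynomial and the geometric multiplicity as the dimension of the kernel of \(M - \lambda I\); something like:

```
M = matrix(QQ, [[1, 1], [0, 1]])
print(M.charpoly().factor())                  # (x - 1)^2, so algebraic multiplicity 2
E = (M - identity_matrix(2)).right_kernel()   # the eigenspace for lambda = 1
print(E.dimension())                          # 1, the geometric multiplicity
```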

Eigenspaces

It’s pretty easy to show that the set of all eigenvectors for a single eigenvalue forms a subspace. If \(\vect{x}\) and \(\vect{y}\) are eigenvectors for the eigenvalue \(\lambda\) and \(\alpha\) and \(\beta\) are scalars, then \[\begin{aligned} A(\alpha\vect{x} + \beta\vect{y}) &= \alpha A\vect{x} + \beta A\vect{y} \\ &= \alpha \lambda \vect{x} + \beta\lambda \vect{y} \\ &= \lambda (\alpha\vect{x} + \beta\vect{y}). \end{aligned}\]

Thus, \(\alpha\vect{x} + \beta\vect{y}\) is also an eigenvector with eigenvalue \(\lambda\), as long as it’s nonzero. That is, the set of all eigenvectors for a single eigenvalue, together with the zero vector, is closed under linear combinations, which is one characterization of a subspace.

Linear independence for distinct eigenvalues

Suppose that \(\vect{x}\) and \(\vect{y}\) are eigenvectors for the distinct eigenvalues \(\lambda_1\) and \(\lambda_2\). Suppose also that one of the eigenvectors is a constant multiple of the other, say \[\vect{x} = c\vect{y}.\] Then, \[\lambda_1 (c\vect{y}) = A (c\vect{y}) = c(A\vect{y}) = c(\lambda_2\vect{y}) = \lambda_2(c\vect{y}).\] Since \(c\vect{y} = \vect{x} \neq \vect{0}\), we must have \(\lambda_1 = \lambda_2\), contradicting our assumption. Thus, eigenvectors for distinct eigenvalues are linearly independent.

Repeated eigenvalues

Consider the matrices \[ A = \begin{bmatrix}1&0\\0&1\end{bmatrix} \quad B = \begin{bmatrix}1&1\\0&1\end{bmatrix}. \] Both \(A\) and \(B\) have the characteristic polynomial \((\lambda-1)^2\); we say that \(\lambda=1\) is a repeated eigenvalue. It’s easy to see that the corresponding eigenspace of \(A\) is all of \(\mathbb R^2\). For \(B\), though, \[\begin{bmatrix}1&1\\0&1\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}x+y\\y\end{bmatrix}. \] Thus, \([x \quad y]^{\mathsf T}\) is an eigenvector only when \(y=0\).

It’s worth mentioning that we’ve already explored the geometric action of \(B\).

Interpreting complex eigenvalues in 2D

Let’s use our method for finding eigenvalues to see how complex eigenvalues might arise. We’ll then try to figure out how to interpret them.

We’ll start with the simplest example along these lines, namely \[ A = \begin{bmatrix}0&-1\\1&0\end{bmatrix}. \] Then \[ \det(A-\lambda I) = \begin{vmatrix}-\lambda&-1\\1&-\lambda\end{vmatrix} = \lambda^2 + 1. \] I guess we need \(\lambda^2 = -1\), i.e. \(\lambda=\pm i\).
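A quick check in Sage agrees (the printed form of the algebraic numbers may vary a bit):

```
A = matrix(QQ, [[0, -1], [1, 0]])
print(A.eigenvalues())    # [-1*I, 1*I]
```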

Rotate \(90^{\circ}\)

The geometric action of \[A = \begin{bmatrix}0&-1\\1&0\end{bmatrix}\] is to rotate \(\mathbb R^2\) through the angle \(\pi/2\).

Rotate \(\theta\)

Let \(R(\theta)\) denote the matrix \[ R(\theta) = \begin{bmatrix} \cos(\theta)&-\sin(\theta) \\ \sin(\theta)&\cos(\theta) \end{bmatrix}.\]

Then, \(R(\theta)\vec{\imath}\) and \(R(\theta)\vec{\jmath}\) simply extract out the columns of \(R(\theta)\), just as they do for any matrix.

Notice, though, that the first column is exactly \(\vec{\imath}\) rotated through the angle \(\theta\). This follows from the very definition of the sine and cosine.

Similarly, the second column is exactly \(\vec{\jmath}\) rotated through the angle \(\theta\). You could convince yourself of that using the fact that the second column is perpendicular to the first.

It follows by linearity that \(R(\theta)\) rotates every vector in the plane through the angle \(\theta\)! The matrix \(R(\theta)\) is aptly called a rotation matrix.

Eigenvalues of \(R(\theta)\)

Let’s compute the eigenvalues of \(R(\theta)\). First, we find the characteristic polynomial: \[\begin{aligned} \det(R(\theta) - \lambda I) &= \begin{vmatrix} \cos(\theta) - \lambda & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) - \lambda \end{vmatrix} = (\cos(\theta) - \lambda)^2 + \sin^2(\theta) \\ &= \lambda^2 - 2\cos(\theta)\lambda + \sin^2(\theta) + \cos^2(\theta) = \lambda^2 - 2\cos(\theta)\lambda + 1. \end{aligned}\] We find the roots of the characteristic polynomial using the quadratic formula: \[\begin{aligned} \lambda &= \frac{2\cos(\theta) \pm \sqrt{4\cos^2(\theta) - 4}}{2} \\ &= \frac{2\cos(\theta)}{2} \pm \frac{\sqrt{-4}\sqrt{1-\cos^2(\theta)}}{2} = \cos(\theta) \pm \sin(\theta) i. \end{aligned}\]

Thus, the eigenvalues of the rotation matrix are complex, and the angle of rotation is encoded in the eigenvalue, since \(\lambda = \cos(\theta) \pm i\sin(\theta)\).
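We can sanity check this numerically for a specific angle, say \(\theta = \pi/6\), where we expect \(\cos(\pi/6) \pm i\sin(\pi/6) \approx 0.866 \pm 0.5i\); a sketch:

```
theta = pi/6
R = matrix(CDF, [[cos(theta), -sin(theta)],
                 [sin(theta),  cos(theta)]])
print(R.eigenvalues())    # approximately [0.866... + 0.5*I, 0.866... - 0.5*I]
```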

Rotation and scaling

Some matrices include both scaling and rotation. To achieve this, simply define \(A = r\,R(\theta)\), where \(r\in\mathbb R\). The eigenvalues should then be \(r(\cos(\theta)\pm i\sin(\theta))\).

For example,

\[\begin{aligned} \sqrt{2}R(\pi/4) &= \sqrt{2}\begin{bmatrix}\cos(\pi/4)&-\sin(\pi/4)\\ \sin(\pi/4)&\cos(\pi/4)\end{bmatrix} \\ &= \sqrt{2}\begin{bmatrix}1/\sqrt{2}&-1/\sqrt{2}\\ 1/\sqrt{2}&1/\sqrt{2}\end{bmatrix} = \begin{bmatrix}1&-1\\1&1\end{bmatrix} \end{aligned}\]

The eigenvalues should be \[\sqrt{2}(\cos(\pi/4) \pm i\sin(\pi/4)) = \sqrt{2} \left(\frac{1}{\sqrt{2}} \pm i\frac{1}{\sqrt{2}}\right) = 1\pm i.\]

Double check

Let’s double check those eigenvalues:

\[ \begin{vmatrix}1-\lambda & -1 \\ 1 & 1 - \lambda\end{vmatrix} = (1-\lambda)^2 + 1 = \lambda^2 - 2\lambda + 2. \]

Applying the quadratic formula to find the roots, we get \[ \frac{2\pm\sqrt{4-8}}{2} = 1\pm i, \] as expected.
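Or, letting Sage do the double check:

```
A = matrix(QQ, [[1, -1], [1, 1]])
print(A.eigenvalues())    # [1 - 1*I, 1 + 1*I]
```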

Rotation and scaling illustrated

The action of \[M = \begin{bmatrix}1&-1\\1&1\end{bmatrix}\] rotates \(\mathbb R^2\) through the angle \(\pi/4\) and expands by the factor \(\sqrt{2}\).

“Involving” rotation

If a matrix has complex eigenvalues, then its geometric action generally “involves” rotation. Here’s an example to illustrate that.

The action of \[M = \begin{bmatrix}1&-1\\1&0\end{bmatrix}\] kinda rotates through the angle \(\pi/3\). You can hit the \(M\) buttons \(6\) times to see that, but it’s also more complicated than just rotation.

Let’s check the eigenvalues for this example.

\[ \det(M-\lambda I) = \left|\begin{array}{cc}1-\lambda & -1 \\ 1 & -\lambda\end{array}\right| = \lambda^2 - \lambda + 1. \] Setting \(\lambda^2 - \lambda + 1 = 0\) and applying the quadratic formula, we find \[ \lambda = \frac{-b\pm\sqrt{b^2-4ac}}{2a} = \frac{1\pm\sqrt{1-4}}{2}=\frac{1}{2}\pm\frac{\sqrt{3}}{2}i. \] These conjugate eigenvalues should yield conjugate eigenvectors carrying the same information, so we can choose just one, say with the \(+\) sign. That suggests we set up the system \[ \begin{bmatrix}1&-1\\1&0\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} = \left(\frac{1}{2} + \frac{\sqrt{3}}{2}i\right) \begin{bmatrix}x\\y\end{bmatrix}. \] Recall that this system will be redundant. Examining the second equation, we find that \[ x = \left(\frac{1}{2} + \frac{\sqrt{3}}{2}i\right)y. \] Set \(y=1\) to obtain the eigenvector \[ \begin{bmatrix} \frac{1}{2} + \frac{\sqrt{3}}{2}i \\ 1 \end{bmatrix} = \begin{bmatrix}1/2 \\ 1\end{bmatrix} + \begin{bmatrix}\sqrt{3}/2 \\ 0\end{bmatrix}i. \] Note that the real and imaginary parts of that vector determine a 2D subspace associated with that eigenvalue; that’s the subspace that “involves” rotation. In fact, since the eigenvalue \[ \frac{1}{2} + \frac{\sqrt{3}}{2}i = \cos(\pi/3) + i\sin(\pi/3), \] we suspect that maybe we should see a rotation of order \(6\)?

Indeed we can compute directly that \(M^6 = I\), as you can see in the demo by applying \(M\) six times.
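That computation is a one-liner, if you care to try it:

```
M = matrix(QQ, [[1, -1], [1, 0]])
print(M^6 == identity_matrix(2))    # True
```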

Three dimensional space

Let’s explore what kinds of things might happen in 3D.

Three real eigenvalues

Let’s suppose that \[ A = \begin{bmatrix} 6 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -3 \end{bmatrix}. \]

Can you guess what oughtta happen?

I suppose that the \(x\)-axis will stretch by the factor \(6\), the \(y\)-axis should be fixed, and the \(z\)-axis will flip and stretch by the factor \(3\).

Three more real eigenvalues

This matrix looks harder: \[ \left[\begin{array}{rrr} -17 & 51 & -28 \\ -10 & 36 & -20 \\ -6 & 27 & -15 \end{array}\right]. \] Since the dimension and numbers are larger, let’s use Sage to find the eigenvalues and eigenvectors:
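The computation goes something like this (the output is paraphrased in the comment and may be ordered differently in a live Sage session):

```
A = matrix(QQ, [[-17, 51, -28],
                [-10, 36, -20],
                [ -6, 27, -15]])
A.eigenvectors_right()
# [(6, [(1, 1, 1)], 1), (1, [(1, 2, 3)], 1), (-3, [(1, 0, -1/2)], 1)]
```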

Here’s how to interpret this output:

  • There are three eigenvalue/eigenvector pairs:
    • \(\lambda=6\), \(\vect{v}=[1 \quad 1\quad 1]^{\mathsf T}\)
    • \(\lambda=1\), \(\vect{v}=[1 \quad 2 \quad 3]^{\mathsf T}\)
    • \(\lambda=-3\), \(\vect{v}=[1 \quad 0 \quad -1/2]^{\mathsf T}\)
  • Each eigenvalue appears with algebraic multiplicity \(1\).

Geometrically, this matrix acts just like the last one, just with different vectors:

  • It stretches the line spanned by \([1 \quad 1 \quad 1]^{\mathsf T}\) by the factor \(6\),
  • it fixes the line spanned by \([1 \quad 2 \quad 3]^{\mathsf T}\), and
  • it flips and stretches the line spanned by \([1 \quad 0 \quad -1/2]^{\mathsf T}\) by the factor \(3\).

Complex eigenvalues in 3D

Since the characteristic polynomial of a 3D matrix is a cubic, there’s always at least one real root, and any complex eigenvalues come in conjugate pairs. Thus, we anticipate that there will be an invariant plane where the action of the matrix involves rotation and a one-dimensional invariant subspace not contained in that plane. Here are a couple of examples illustrating this.

Simplest example

The simplest possible example along these lines looks like so: \[ M = \begin{bmatrix} 1&-1&0\\1&1&0\\0&0&2 \end{bmatrix}. \] It’s pretty easy to see how this works after we compute \[ M \begin{bmatrix} x\\y\\z \end{bmatrix} = \begin{bmatrix} 1&-1&0\\1&1&0\\0&0&2 \end{bmatrix} \begin{bmatrix} x\\y\\z \end{bmatrix} = \begin{bmatrix} x-y\\x+y\\2z \end{bmatrix}. \] Thus, we have a rotation with scaling in the \(xy\)-plane while the \(z\)-axis stretches out perpendicularly to the rotation.
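A quick check of the eigenvalues in Sage reflects that block structure:

```
M = matrix(QQ, [[1, -1, 0],
                [1,  1, 0],
                [0,  0, 2]])
print(M.eigenvalues())    # [2, 1 - 1*I, 1 + 1*I]
```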

You can see that in action here:

Another example

Once you get to complex eigenvalues in 3D, things get more complicated. It’s not hard to interpret the eigenvalues and eigenvectors, though, if we compute them with software.

Suppose we have \[ M = \begin{bmatrix} 4&-3&-1 \\ 2&3&1 \\ -2&3&5 \end{bmatrix}. \]

We can compute the eigensystem with Sage:
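Something like the following; since the complex data doesn’t print nicely in exact form, the output is paraphrased in the comments:

```
M = matrix(QQ, [[ 4, -3, -1],
                [ 2,  3,  1],
                [-2,  3,  5]])
M.eigenvectors_right()
# Output: one real triple (6, [(1, 0, -2)], 1), plus a conjugate pair of
# eigenvalues 3 +/- 1.7320508...*I whose (conjugate) eigenvectors have
# entries involving 0.5 and 0.866025...
```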

So, clearly we have a real eigenvalue \(\lambda=6\) whose invariant line is spanned by \([1 \quad 0 \quad -2]^{\mathsf T}\).

For the complex eigenvalues \(3 \pm \sqrt{3}\,i\), if you recognize that \(1.73205\approx\sqrt{3}\) and \(0.866025\approx\sqrt{3}/2\), then you can see that the invariant plane is spanned by the real and imaginary parts of the complex eigenvector, namely \[[1 \quad 1/2 \quad -1/2]^{\mathsf T} \text{ and } [0 \quad \sqrt{3}/2 \quad -\sqrt{3}/2]^{\mathsf T}.\]

Altogether, this looks like so: