Linear Algebra 5

Geometry of determinants

Portions copyright Rob Beezer (GFDL)

Fri, Feb 07, 2025

Geometry of determinants

Our main mission is to gain some level of understanding as to why determinants behave the way they do.

  • What kinds of ways are there to characterize or compute a determinant?
  • What does a determinant tell us geometrically?
  • And, why does it tell us that?
  • How is that related to non-singularity?

\[ \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\transpose}[1]{#1^{T}} \newcommand{\inverse}[1]{#1^{-1}} \]

Algebraic characterization

Here are a couple of nice facts about determinants:

  • The determinant of a triangular matrix is the product of the terms on the diagonal and
  • the determinant is multiplicative, i.e. \[\det(AB) = \det(A)\det(B).\] It turns out that the determinant is completely determined by these two properties. That is, any two functions with both of these properties are necessarily the same. (A quick numerical check of multiplicativity follows this list.)
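For instance, here's a quick numerical check of the multiplicativity property: a minimal sketch of my own, assuming NumPy is available (the particular matrices are arbitrary choices).

```python
import numpy as np

A = np.array([[4, 2], [2, -2]])
B = np.array([[1, 3], [0, 2]])

# det(AB) and det(A)det(B) agree (up to floating-point rounding).
print(np.linalg.det(A @ B))                  # -24.0
print(np.linalg.det(A) * np.linalg.det(B))   # -24.0
```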

Geometric action

A good way to understand the behavior of just about any kind of function is via the geometric action that it induces. In the case of a linear transformation, we might try to understand the degree to which it stretches out area or, more generally, \(n\)-dimensional volume.

Typically, this is done by envisioning the effect of the transformation on the unit square (or \(n\)-dimensional unit cube).

Example

Press “Apply” to see the action induced on the unit square by the matrix \[\begin{bmatrix}4&2\\2&-2\end{bmatrix}.\]

Comments on the example

The vectors \(\vec{\imath}=\langle 1,0 \rangle\) and \(\vec{\jmath}=\langle 0,1 \rangle\) map to the column vectors \(\langle 4,2 \rangle\) and \(\langle 2,-2 \rangle\) of the matrix.

The resulting area is clearly much larger. In fact, the value of the determinant is \(-12\) so the area of the result should be \(12\).

The minus sign indicates that the transformation flips orientation, which is easy to see in the picture.
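As a quick sanity check of the numbers in this example, here's a small NumPy snippet (my own, not part of the slides) computing the determinant and the resulting area.

```python
import numpy as np

# The matrix from the example above.
A = np.array([[4, 2],
              [2, -2]])

d = np.linalg.det(A)
print(d)        # approximately -12.0
print(abs(d))   # approximately 12.0, the area of the image of the unit square
```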

Outline

Ultimately, we’ve got to figure out

  • why those two properties uniquely determine the determinant (using row reduction),
  • why the Laplace expansion formula has those properties,
  • what elementary matrices are, and
  • why the determinant has its particular geometric properties.

Ways to compute a determinant

Last time, we learned the somewhat crazy but common definition of the determinant, namely:

\[\begin{align*} \detname{A}&= \matrixentry{A}{11}\detname{\submatrix{A}{1}{1}} -\matrixentry{A}{12}\detname{\submatrix{A}{1}{2}} +\matrixentry{A}{13}\detname{\submatrix{A}{1}{3}}-\\ &\quad \matrixentry{A}{14}\detname{\submatrix{A}{1}{4}} +\cdots +(-1)^{n+1}\matrixentry{A}{1n}\detname{\submatrix{A}{1}{n}}\text{.} \end{align*}\]

The formula is an example of a Laplace expansion.
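As a concrete illustration, here is a minimal sketch of this first-row expansion in Python; the function name and the use of plain nested lists are my own choices, and the recursion is deliberately naive (it runs in factorial time), so it's for illustration only.

```python
def laplace_det(A):
    """Determinant of a square matrix (a list of rows) by expansion about the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # The submatrix A(1|j+1): delete the first row and column j+1.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * laplace_det(minor)
    return total

print(laplace_det([[4, 2], [2, -2]]))   # -12, matching the earlier example
```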

This is not the only way to compute a determinant, though.

Expansion about other rows or columns

Actually, you can expand about any row or column. That is, for each row index \(i\) we have a row expansion, and for each column index \(j\) a column expansion.

Row expansion

\[\begin{align*} \detname{A}&= (-1)^{i+1}\matrixentry{A}{i1}\detname{\submatrix{A}{i}{1}}+ (-1)^{i+2}\matrixentry{A}{i2}\detname{\submatrix{A}{i}{2}}\\ &\quad+(-1)^{i+3}\matrixentry{A}{i3}\detname{\submatrix{A}{i}{3}}+ \cdots+ (-1)^{i+n}\matrixentry{A}{in}\detname{\submatrix{A}{i}{n}} \end{align*}\]

Column expansion

\[\begin{align*} \detname{A}&= (-1)^{1+j}\matrixentry{A}{1j}\detname{\submatrix{A}{1}{j}}+ (-1)^{2+j}\matrixentry{A}{2j}\detname{\submatrix{A}{2}{j}}\\ &\quad+(-1)^{3+j}\matrixentry{A}{3j}\detname{\submatrix{A}{3}{j}}+ \cdots+ (-1)^{n+j}\matrixentry{A}{nj}\detname{\submatrix{A}{n}{j}} \end{align*}\]

Example: expansion about the last row

\[\begin{equation*} \tiny A=\begin{bmatrix} -2 & 3 & 0 & 1\\ 9 & -2 & 0 & 1\\ 1 & 3 & -2 & -1\\ 4 & 1 & 2 & 6 \end{bmatrix}\text{.} \end{equation*}\]

\[\begin{align*} \detbars{A} &= (4)(-1)^{4+1} \begin{vmatrix} 3 & 0 & 1\\ -2 & 0 & 1\\ 3 & -2 & -1 \end{vmatrix} +(1)(-1)^{4+2} \begin{vmatrix} -2 & 0 & 1\\ 9 & 0 & 1\\ 1 & -2 & -1 \end{vmatrix}\\ &\quad\quad+(2)(-1)^{4+3} \begin{vmatrix} -2 & 3 & 1\\ 9 & -2 & 1\\ 1 & 3 & -1 \end{vmatrix} +(6)(-1)^{4+4} \begin{vmatrix} -2 & 3 & 0 \\ 9 & -2 & 0 \\ 1 & 3 & -2 \end{vmatrix}\\ &= (-4)(10)+(1)(-22)+(-2)(61)+6(46)=92\text{.} \end{align*}\]

Example: expansion about third column

\[\begin{equation*} \tiny A=\begin{bmatrix} -2 & 3 & 0 & 1\\ 9 & -2 & 0 & 1\\ 1 & 3 & -2 & -1\\ 4 & 1 & 2 & 6 \end{bmatrix}\text{.} \end{equation*}\]

\[\begin{align*} \detbars{A} &= (0)(-1)^{1+3} \begin{vmatrix} 9 & -2 & 1\\ 1 & 3 & -1\\ 4 & 1 & 6 \end{vmatrix} + (0)(-1)^{2+3} \begin{vmatrix} -2 & 3 & 1\\ 1 & 3 & -1\\ 4 & 1 & 6 \end{vmatrix} +\\ &\quad\quad(-2)(-1)^{3+3} \begin{vmatrix} -2 & 3 & 1\\ 9 & -2 & 1\\ 4 & 1 & 6 \end{vmatrix} + (2)(-1)^{4+3} \begin{vmatrix} -2 & 3 & 1\\ 9 & -2 & 1\\ 1 & 3 & -1 \end{vmatrix}\\ &=0+0+(-2)(-107)+(-2)(61)=92\text{.} \end{align*}\]
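To double-check the arithmetic, here's a small sketch of my own that expands about an arbitrary (0-indexed) row and confirms that every choice of row gives the same value for this matrix.

```python
import numpy as np

def det_by_row_expansion(A, i):
    """Determinant of the square array A by Laplace expansion about row i (0-indexed)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        total += (-1) ** (i + j) * A[i, j] * det_by_row_expansion(minor, 0)
    return total

A = np.array([[-2, 3, 0, 1],
              [9, -2, 0, 1],
              [1, 3, -2, -1],
              [4, 1, 2, 6]])

# Expansion about any of the four rows gives the same answer.
print([int(det_by_row_expansion(A, i)) for i in range(4)])   # [92, 92, 92, 92]
```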

A couple comments

Idea behind the proof

It’s easy to see these alternate expansions should be true for all \(2\times2\) matrices (try it!).

The result extends to higher dimensions using induction.

Special matrices

Sometimes, the specific form of a matrix may make one row or column easier to expand about.

As we’ll see in our next example…

Upper triangular example

The determinant of a triangular matrix is always the product of the terms on the diagonal; you can see why by expanding along the first row or column.

\[\begin{align*} &\begin{vmatrix} 2 & 3 & -1 & 3 & 3\\ 0 & -1 & 5 & 2 & -1\\ 0 & 0 & 3 & 9 & 2\\ 0 & 0 & 0 & -1 & 3\\ 0 & 0 & 0 & 0 & 5 \end{vmatrix} =2(-1)^{1+1} \begin{vmatrix} -1 & 5 & 2 & -1\\ 0 & 3 & 9 & 2\\ 0 & 0 & -1 & 3\\ 0 & 0 & 0 & 5 \end{vmatrix} \\ &=2(-1)(-1)^{1+1} \begin{vmatrix} 3 & 9 & 2\\ 0 & -1 & 3\\ 0 & 0 & 5 \end{vmatrix} =2(-1)(3)(-1)^{1+1} \begin{vmatrix} -1 & 3\\ 0 & 5 \end{vmatrix}\\ &=2(-1)(3)(-1)(-1)^{1+1} \begin{vmatrix} 5 \end{vmatrix} =2(-1)(3)(-1)(5)=30 \end{align*}\]
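A quick numerical check of this computation, assuming NumPy (my own snippet): the product of the diagonal entries agrees with the determinant.

```python
import numpy as np

U = np.array([[2, 3, -1, 3, 3],
              [0, -1, 5, 2, -1],
              [0, 0, 3, 9, 2],
              [0, 0, 0, -1, 3],
              [0, 0, 0, 0, 5]])

print(np.prod(np.diag(U)))       # 30, the product of the diagonal entries
print(round(np.linalg.det(U)))   # 30, the determinant
```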

Upper triangular comments

The process of triangulation followed by multiplication along the diagonal is actually a rather efficient way to compute the determinant for large matrices.

The triangular form also connects the determinant to area much more directly.

Our next step, in fact, will be to use a special class of matrices to keep track of the row reduction process to allow us to gain a better understanding of the connection between the determinant and area.

Row reduction and determinants

The elementary row operations, it turns out, affect the value of a determinant in predictable ways. In fact,

  • A row swap \(R_i \leftrightarrow R_j\) changes the sign of the determinant,
  • A row add \(\alpha R_i + R_j \to R_j\) doesn't change the determinant at all, and
  • A row scaling \(\alpha R_i \to R_i\) (with \(\alpha\neq 0\)) multiplies the determinant by \(\alpha\).

Thus, another way to compute a determinant is to

  • triangulate the matrix using swaps and row adds (see the code sketch after this list),
  • multiply the terms on the diagonal, and finally
  • account for the sign using the number of swaps.
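Here is a minimal sketch of that procedure in Python; the names and structure are my own, and a production routine would also pivot for numerical stability.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via triangulation using only row swaps and row adds."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1
    for k in range(n):
        # Find a nonzero pivot in column k, swapping rows if necessary.
        pivot = next((r for r in range(k, n) if U[r, k] != 0), None)
        if pivot is None:
            return 0.0                      # no pivot: the matrix is singular
        if pivot != k:
            U[[k, pivot]] = U[[pivot, k]]   # row swap flips the sign
            sign = -sign
        for r in range(k + 1, n):
            U[r] -= (U[r, k] / U[k, k]) * U[k]   # row add leaves the determinant alone
    return sign * np.prod(np.diag(U))

print(det_by_elimination([[-2, 3, 0, 1],
                          [9, -2, 0, 1],
                          [1, 3, -2, -1],
                          [4, 1, 2, 6]]))   # approximately 92.0
```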

Elementary matrices

Each of the elementary row operations can be expressed in terms of an elementary matrix, which is obtained by applying that row operation to the identity matrix.

Furthermore, left multiplication by an elementary matrix has the same effect as applying the corresponding row operation directly!

Example

Here’s an example illustrating the correspondence between elementary row operations and elementary matrices for a \(3\times4\) matrix.

\[\tiny A= \begin{bmatrix} 2 & 1 & 3 & 1\\ 1 & 3 & 2 & 4\\ 5 & 0 & 3 & 1 \end{bmatrix}\]

\[\begin{align*} \tiny \rowopswap{1}{3}:\ & \tiny \begin{bmatrix} 5 & 0 & 3 & 1\\ 1 & 3 & 2 & 4\\ 2 & 1 & 3 & 1 \end{bmatrix} & \tiny \elemswap{1}{3}:\ & \tiny \begin{bmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 2 & 1 & 3 & 1\\ 1 & 3 & 2 & 4\\ 5 & 0 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 0 & 3 & 1\\ 1 & 3 & 2 & 4\\ 2 & 1 & 3 & 1 \end{bmatrix}\\ \tiny \rowopmult{2}{2}:\ & \tiny \begin{bmatrix} 5 & 0 & 3 & 1\\ 2 & 6 & 4 & 8\\ 2 & 1 & 3 & 1 \end{bmatrix} & \tiny \elemmult{2}{2}:\ & \tiny \begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 5 & 0 & 3 & 1\\ 1 & 3 & 2 & 4\\ 2 & 1 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 0 & 3 & 1\\ 2 & 6 & 4 & 8\\ 2 & 1 & 3 & 1 \end{bmatrix}\\ \tiny \rowopadd{2}{3}{1}:\ & \tiny \begin{bmatrix} 9 & 2 & 9 & 3\\ 2 & 6 & 4 & 8\\ 2 & 1 & 3 & 1 \end{bmatrix} & \tiny \elemadd{2}{3}{1}:\ & \tiny \begin{bmatrix} 1 & 0 & 2\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 5 & 0 & 3 & 1\\ 2 & 6 & 4 & 8\\ 2 & 1 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 9 & 2 & 9 & 3\\ 2 & 6 & 4 & 8\\ 2 & 1 & 3 & 1 \end{bmatrix} \end{align*}\]
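The correspondence is easy to check numerically; here's a sketch of my own for the row-swap step, assuming NumPy.

```python
import numpy as np

A = np.array([[2, 1, 3, 1],
              [1, 3, 2, 4],
              [5, 0, 3, 1]])

# The elementary matrix E_{1,3}: the identity with rows 1 and 3 swapped.
E13 = np.array([[0, 0, 1],
                [0, 1, 0],
                [1, 0, 0]])

# Left multiplication by E_{1,3} swaps rows 1 and 3 of A.
print(E13 @ A)
# [[5 0 3 1]
#  [1 3 2 4]
#  [2 1 3 1]]
```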

Properties of elementary matrices

  • Every nonsingular matrix can be expressed as a product of elementary matrices.
  • It’s easy to understand the effect of an elementary matrix on area or \(n\)-dimensional volume.
  • The determinant is multiplicative on the elementary matrices. That is, \[\det(E_1E_2) = \det(E_1)\det(E_2)\] whenever \(E_1\) and \(E_2\) are \(n\times n\) elementary matrices.
  • These two facts taken together can be used to show that the determinant is multiplicative on the set of all \(n\times n\) matrices.
  • Furthermore, we can understand why the determinant measures area distortion, since it’s the cumulative multiplicative effect of the elementary matrices.

Row add

Adding a constant times one row to another simply skews the picture, which preserves the area.

Row swap

Swapping rows preserves area but changes orientation.

Row mult

Multiplying a row by a constant affects the area by that same multiplicative factor.
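Numerically, all three effects show up directly in the determinants of the corresponding elementary matrices; here's a small 2x2 illustration of my own.

```python
import numpy as np

shear = np.array([[1, 0], [2, 1]])   # row add: 2 R_1 + R_2 -> R_2
swap  = np.array([[0, 1], [1, 0]])   # row swap: R_1 <-> R_2
scale = np.array([[1, 0], [0, 3]])   # row mult: 3 R_2 -> R_2

# Determinants 1, -1, 3: areas scale by 1, 1, and 3, and the
# negative sign on the swap records the change of orientation.
for E in (shear, swap, scale):
    print(np.linalg.det(E))
```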

Comments

  • It’s fairly easy to visualize these in 3D, as well, and to see that volume and orientation are affected in the same way.
  • The ideas extend to \(n\)-dimensional determinants, as well.
  • Ultimately, there are algebraic proofs involving the Laplace expansion as well.

An algebraic proof sketch

Let’s think about the fact that a row swap changes the sign of a determinant.

First it’s super easy to see for \(2\times2\) determinants

\[ \begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix} = a_{11}a_{22} - a_{21}a_{12} \] and \[ \begin{vmatrix}a_{21}&a_{22}\\a_{11}&a_{12}\end{vmatrix} = a_{21}a_{12} - a_{11}a_{22}. \]

Typically, these proofs start with the \(2\times 2\) case and then extend to higher dimensions via induction.
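A quick numerical sanity check of the sign flip, using the 4x4 matrix from the earlier expansion examples (the snippet is my own).

```python
import numpy as np

A = np.array([[-2, 3, 0, 1],
              [9, -2, 0, 1],
              [1, 3, -2, -1],
              [4, 1, 2, 6]])

B = A[[1, 0, 2, 3]]   # swap the first two rows of A

print(round(np.linalg.det(A)))   # 92
print(round(np.linalg.det(B)))   # -92
```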

Singularity

What happens when the matrix \(A\) is singular?

In this case, when we row reduce \(A\), we no longer get the identity. Thus, \(A\vec{x}=\vec{0}\) has infinitely many solutions and the null space has positive dimension.

As a result, the column space of \(A\) and range of the associated linear transformation cannot be all of \(\mathbb R^n\). Thus, the matrix cannot be invertible.

This should all be reflected in the geometric behavior of the linear transformation.

Geometric singularity

Here’s a look at the geometric effect of multiplication by the matrix \[ A = \begin{bmatrix}2&4\\1&2\end{bmatrix}. \]

The “squishing” of the two-dimensional space into one is exactly why the range cannot be all of \(\mathbb R^2\).
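A quick check (my own snippet, assuming NumPy) that this matrix really is singular:

```python
import numpy as np

A = np.array([[2, 4],
              [1, 2]])

print(np.linalg.det(A))          # 0.0, up to rounding
print(np.linalg.matrix_rank(A))  # 1: the columns are dependent, so the
                                 # image is a line rather than all of R^2
```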

Ways to tell when a matrix is singular

It’s often easy to see when a small matrix is singular. A matrix is certainly singular if

  • One row or column contains only zeros,
  • Two rows or columns are the same,
  • One row or column is a multiple of another,
  • One row or column is a linear combination of the others.

Transposes

Finally, I feel it’s time to mention transposes. The definition is easy: if \(A\) is an \(m\times n\) matrix, then the transpose of \(A\) is the \(n\times m\) matrix \(A^T\) satisfying \[[A^T]_{ij} = [A]_{ji}.\] For example, \[ \begin{bmatrix} 1&2&3\\4&5&6 \end{bmatrix}^T = \begin{bmatrix} 1&4\\2&5\\3&6 \end{bmatrix}. \]

Properties of transposes

The transpose will prove quite useful when we do algebraic manipulations with the dot product starting next time, so let’s investigate some of its properties. In what follows, we’ll assume that \(A\) and \(B\) are sized so that the operations are defined.

  • \(\transpose{\left(\transpose{A}\right)}=A\)
  • \(\transpose{(A+B)}=\transpose{A}+\transpose{B}\)
  • \(\transpose{(\alpha A)}=\alpha\transpose{A}\)
  • \(\detname{\transpose{A}}=\detname{A}\)

The first three seem quite obvious, and the last seems at least believable since a row expansion of \(A\) is the same as a column expansion of \(\transpose{A}\).

Matrix multiplication and transposes

The transpose interacts nicely with matrix multiplication and with the inverse.

  • \(\transpose{(AB)}=\transpose{B}\transpose{A}\)
  • \(\inverse{(AB)}=\inverse{B}\inverse{A}\)
  • \(\inverse{(\transpose{A})}=\transpose{(\inverse{A})}\)

These are perhaps a bit more mysterious and deserve a closer look.

Transpose of a product

Note that, if \(A\) is \(m\times n\) and \(B\) is \(n\times p\), then both sides of \(\transpose{(AB)}=\transpose{B}\transpose{A}\) are at least well defined; both are \(p\times m\). We can show that they’re equal by investigating their entries:

\[\begin{align*} \matrixentry{\transpose{(AB)}}{ji} &=\matrixentry{AB}{ij} =\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj} =\sum_{k=1}^{n}\matrixentry{B}{kj}\matrixentry{A}{ik} \\ &=\sum_{k=1}^{n}\matrixentry{\transpose{B}}{jk}\matrixentry{\transpose{A}}{ki} =\matrixentry{\transpose{B}\transpose{A}}{ji} \end{align*}\]

Inverse of a product

To talk about the inverse of a product, \(A\) and \(B\) both have to be square, of the same size, and invertible. Assuming so, note that \[(AB)(\inverse{B}\inverse{A}) = A(B\inverse{B})\inverse{A} = AI\inverse{A} =A\inverse{A}=I.\] Thus, \(\inverse{B}\inverse{A}\) satisfies the definition of \(\inverse{(AB)}\); the product in the other order works out the same way.

Transpose of an inverse

To show that \(\inverse{(\transpose{A})}=\transpose{(\inverse{A})}\), it suffices to show that \(\transpose{(\inverse{A})}\) satisfies the definition of inverse for \(\transpose{A}\). We can use the transpose-of-a-product property to show this: \[ \transpose{A}\transpose{(\inverse{A})} = \transpose{(\inverse{A}A)} = \transpose{I} = I. \]
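All three identities are easy to check numerically; here's a small sketch of my own, with arbitrary invertible matrices.

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])
B = np.array([[0., 1.], [4., 2.]])
inv = np.linalg.inv

print(np.allclose((A @ B).T, B.T @ A.T))          # True: (AB)^T = B^T A^T
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True: (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose(inv(A.T), inv(A).T))            # True: (A^T)^{-1} = (A^{-1})^T
```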


Finally, it’s worth mentioning that any matrix that satisfies \[\transpose{A} = A\] is called symmetric. The identity matrix is symmetric, for example.