Physics 64 / Linear Algebra

Harvey Mudd College

Modified: 1/17/25 11:35

Square Matrices

Square matrices have a number of interesting properties and deserve special attention. An \(N \times N\) square matrix maps an \(N\)-dimensional vector into another \(N\)-dimensional vector. Several types of square matrices have special properties:

Identity Matrix

The identity matrix has ones along the main diagonal and zeros everywhere else: \begin{equation} \mat{I} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix} \label{eq:identity} \end{equation}
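As a quick numerical illustration, here is a minimal check in Python with numpy (the 3-vector is an arbitrary example, not from the notes):

import numpy as np

# The 3 x 3 identity matrix: ones on the main diagonal, zeros elsewhere.
I = np.eye(3)
x = np.array([2.0, -1.0, 5.0])   # an arbitrary example vector

# Multiplying by the identity leaves any vector unchanged.
print(np.allclose(I @ x, x))     # True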

Inverse

If a square matrix \(\mat{A}\) has an inverse \(\mat{A}^{-1}\), then \begin{equation} \mat{A}^{-1} \vdot \mat{A} = \mat{I} = \mat{A} \vdot \mat{A}^{-1} \end{equation}

If \(\mat{A}\) transforms a column vector \(\vb{x}\) to a new column vector \(\vb{y} = \mat{A} \vdot \vb{x}\), the inverse matrix allows us to recover \(\vb{x}\) from \(\vb{y}\) via \(\vb{x} = \mat{A}^{-1} \vdot \vb{y}\), since \begin{equation} \mat{A}^{-1} \vdot \vb{y} = \mat{A}^{-1} \vdot (\mat{A} \vdot \vb{x}) = (\mat{A}^{-1} \vdot \mat{A}) \vdot \vb{x} = \mat{I} \vdot \vb{x} = \vb{x} \end{equation} The existence of an inverse implies that operating with \(\mat{A}\) on a vector does not entail the loss of information; that is, the matrix has a trivial nullspace, which means that its rows (and columns) are linearly independent. It usually isn’t obvious by inspection whether a square matrix has an inverse (that is, whether its rows are linearly independent). If one or more rows can be expressed as a linear combination of the other rows, then the matrix is singular, its determinant vanishes, and it has no inverse.
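A short numerical check of this recovery, sketched in Python with numpy (the matrix and vector below are arbitrary examples):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # an invertible example matrix (det = 5)
x = np.array([1.0, -2.0])

y = A @ x                             # transform x into y
x_rec = np.linalg.inv(A) @ y          # apply the inverse to recover x

print(np.allclose(x_rec, x))          # True
print(np.linalg.det(A))               # approximately 5.0: nonzero, so A is invertible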

You can use Gauss-Jordan elimination to compute the inverse of a square matrix (provided it has one).
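Here is one way such an elimination might look in Python; a sketch rather than a production routine, which augments \(\mat{A}\) with the identity and row-reduces until the left half becomes \(\mat{I}\):

import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # augmented matrix [A | I]

    for col in range(n):
        # Partial pivoting: bring the largest remaining entry into the pivot spot.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        aug[[col, pivot]] = aug[[pivot, col]]  # swap rows

        aug[col] /= aug[col, col]              # scale the pivot row to get a leading 1
        for row in range(n):                   # zero out the rest of the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]

    return aug[:, n:]                          # right half is now the inverse

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))   # True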

Hermitian Matrices

A Hermitian matrix is a complex matrix equal to its conjugate transpose: \(\mat{H} = \widetilde{\mat{H}} = (\mat{H}^{*})^{\mathrm{T}}\).
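A small numpy check (the matrix is my own example); it also prints the eigenvalues, which for a Hermitian matrix are always real:

import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])          # equal to its conjugate transpose

print(np.allclose(H, H.conj().T))      # True: H is Hermitian
print(np.linalg.eigvalsh(H))           # real eigenvalues: [1. 4.]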

Normal Matrices

A normal matrix commutes with its conjugate transpose: \(\widetilde{\mat{A}} \vdot \mat{A} = \mat{A} \vdot \widetilde{\mat{A}}\).
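For instance, a scaled rotation is normal without being Hermitian; a quick numpy check with an example matrix of my choosing:

import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])            # sqrt(2) times a 45-degree rotation

print(np.allclose(A.conj().T @ A, A @ A.conj().T))   # True: A is normal
print(np.allclose(A, A.conj().T))                    # False: A is not Hermitian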

Unitary Matrices

A unitary matrix is a complex square matrix whose inverse is equal to its conjugate transpose: \(\mat{U}^{-1} = \widetilde{\mat{U}}\).
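A quick numpy verification, using an example unitary matrix of my own choosing:

import numpy as np

U = np.array([[1.0, 1.0j],
              [1.0j, 1.0]]) / np.sqrt(2)           # an example unitary matrix

print(np.allclose(np.linalg.inv(U), U.conj().T))   # True: U^-1 equals U-dagger
print(np.allclose(U @ U.conj().T, np.eye(2)))      # True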

Orthogonal Matrices

An orthogonal matrix satisfies \begin{equation} \mat{O}^{\mathrm{T}} \vdot \mat{O} = \mat{O} \vdot \mat{O}^{\mathrm{T}} = \mat{I} \end{equation} where \(\mat{O}^{\mathrm{T}}\) is the transpose of \(\mat{O}\), which interchanges rows and columns. For an orthogonal matrix, therefore, \begin{equation} \mat{O}^{\mathrm{T}} = \mat{O}^{-1} \end{equation}

The determinant of an orthogonal matrix is either \(+1\) or \(-1\). Orthogonal matrices with determinant \(+1\) are called special orthogonal matrices [the group of all such matrices of dimension \(n\) is denoted \(\mathrm{SO}(n)\)] and they correspond to rotations in \(\mathbb{R}^n\). Orthogonal matrices with determinant \(-1\) combine a rotation with an inversion through the origin; they reverse the handedness of the underlying coordinate system.
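A numerical illustration of the two determinant classes, using an example rotation and an example reflection:

import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: in SO(2)
F = np.array([[1.0,  0.0],
              [0.0, -1.0]])                       # reflection across the x axis

for O in (R, F):
    print(np.allclose(O.T @ O, np.eye(2)), round(np.linalg.det(O), 6))
# True  1.0   (rotation: special orthogonal)
# True -1.0   (reflection: reverses handedness)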

Rotation Matrices

The matrix \[ \mat{R}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \] rotates a column vector through angle \(\theta\) in the counterclockwise direction. You can confirm that \(\mat{R}(\pi/2)\) rotates \(\vu{x} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\) into \(\vu{y} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), and \(\vu{y}\) into \(-\vu{x}\).
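A direct numerical confirmation of these two rotations (a minimal numpy sketch):

import numpy as np

def R(theta):
    """2D counterclockwise rotation matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])

print(np.allclose(R(np.pi / 2) @ xhat, yhat))    # True: x-hat rotates into y-hat
print(np.allclose(R(np.pi / 2) @ yhat, -xhat))   # True: y-hat rotates into -x-hat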

We can generalize readily to three dimensions, at least for rotations about one of the basis vectors. For example, \[ \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \] rotates a column vector through angle \(\theta\) about the \(y\) axis. All proper rotations (those that don’t alter the handedness of the basis vectors) have determinant \(+1\); improper rotations, which do change the handedness of the basis vectors, have determinant \(-1\).
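A numpy check that this matrix behaves as a proper rotation (the angle is an arbitrary example): it leaves the \(y\) axis fixed, it is orthogonal, and its determinant is \(+1\):

import numpy as np

def Ry(theta):
    """Rotation about the y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

M = Ry(0.7)                                  # arbitrary example angle
e_y = np.array([0.0, 1.0, 0.0])

print(np.allclose(M @ e_y, e_y))             # True: the y axis is fixed
print(np.allclose(M.T @ M, np.eye(3)))       # True: orthogonal
print(np.isclose(np.linalg.det(M), 1.0))     # True: proper rotation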

Miscellany