
Linear Algebra 29 | Identity and Inverses


Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The n×n identity matrix I_n has 1s on the diagonal and 0s elsewhere.

Briefing

Identity matrices and inverses sit at the center of linear algebra because they formalize "do nothing" and "undo what a transformation does." An n×n identity matrix has 1s on the diagonal and 0s everywhere else, and it acts as the neutral element for matrix multiplication: multiplying any compatible matrix B by the identity leaves B unchanged. On the linear-map level, the identity matrix corresponds to the identity map on R^n, sending every vector x back to itself. These two viewpoints, matrix multiplication and linear transformations, are tightly linked and describe the same underlying behavior.
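Both viewpoints can be checked numerically. As a minimal sketch (using NumPy; the matrix B and vector x below are just illustrative values), multiplying by the identity leaves a matrix unchanged, and applying the identity as a map sends every vector back to itself:

```python
import numpy as np

# 3x3 identity matrix: 1s on the diagonal, 0s elsewhere
I3 = np.eye(3)

# An arbitrary compatible matrix (illustrative values)
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Neutral element: multiplying by I3 on either side leaves B unchanged
assert np.array_equal(I3 @ B, B)
assert np.array_equal(B @ I3, B)

# As a linear map, I3 sends every vector x back to itself
x = np.array([1.0, -2.0, 0.5])
assert np.array_equal(I3 @ x, x)
```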

Once that neutral element is in place, inverses become the natural next question: what matrix (or map) reverses an operation? For a square matrix A, an inverse is another square matrix A^{-1} of the same size such that A·A^{-1} equals the identity matrix and also A^{-1}·A equals the identity matrix. The requirement that both multiplication orders produce the identity mirrors the familiar real-number idea, but matrices are more restrictive: not every square matrix has such an inverse. If an inverse exists, it is unique—so the notation A^{-1} is justified. Matrices that do have inverses are called invertible; they are also described as non-singular or regular. Matrices without an inverse are called singular (or non-invertible/non-regular).
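The two-sided inverse condition can be verified directly for a concrete invertible matrix. A brief sketch (the matrix A below is an illustrative example; NumPy's `linalg.inv` computes the inverse, and the checks use `allclose` to allow floating-point rounding):

```python
import numpy as np

# An invertible 2x2 matrix (illustrative values; det = 1*8 - 2*3 = 2, nonzero)
A = np.array([[1.0, 2.0],
              [3.0, 8.0]])
A_inv = np.linalg.inv(A)

# Both multiplication orders give the identity (up to floating-point error)
I2 = np.eye(2)
assert np.allclose(A @ A_inv, I2)
assert np.allclose(A_inv @ A, I2)
```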

The same classification can be expressed using the associated linear map f_A. A matrix A is invertible exactly when the induced linear map f_A: R^n → R^n is bijective, meaning it both hits every vector in the codomain and never collapses distinct inputs to the same output. In that case, the inverse map f_A^{-1} exists, and it corresponds to the inverse matrix A^{-1}. This correspondence is not just conceptual—it yields concrete equalities: composing f_A^{-1} with f_A gives the identity map on R^n, and composing in the other order also gives the identity map. Translating back to matrices, those inverse-map relationships match the two matrix equations A·A^{-1} = I and A^{-1}·A = I.

In short, identity matrices define the “do nothing” baseline for both multiplication and linear transformations. Inverses then define the “undo” operation, but only for matrices whose action is bijective at the map level. That equivalence between inverse matrices and inverse linear maps is the key bridge that makes later computations possible—because it turns the abstract idea of reversing a transformation into precise algebraic conditions on matrices.

Cornell Notes

An identity matrix I_n is the neutral element for matrix multiplication: multiplying it on the left or right by a compatible matrix leaves that matrix unchanged. It also corresponds to the identity linear map on R^n, sending every vector x to itself. A square matrix A is invertible if there exists a matrix A^{-1} of the same size such that A·A^{-1} = I_n and A^{-1}·A = I_n; this inverse is unique. On the linear-map side, invertibility means the induced map f_A is bijective, so it has an inverse map f_A^{-1}. The inverse map f_A^{-1} matches the induced map of A^{-1}, linking matrix inverses and inverse transformations.

Why does the identity matrix act like “1” in matrix multiplication?

Because for any matrix B with compatible dimensions, multiplying by I_n leaves B unchanged. If B has n columns, then B·I_n = B; if B has n rows, then I_n·B = B. This makes I_n the neutral element for matrix multiplication, analogous to how 1 is the neutral element for real-number multiplication.
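Note that for a rectangular B the two identity matrices have different sizes. A small sketch of this (with an illustrative 2×3 matrix):

```python
import numpy as np

# B is 2x3: it has 3 columns, so B @ I_3 = B, and 2 rows, so I_2 @ B = B
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
assert np.array_equal(B @ np.eye(3), B)
assert np.array_equal(np.eye(2) @ B, B)
```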

What exactly makes a square matrix invertible?

A square matrix A is invertible if there exists another square matrix A^{-1} of the same size such that both products equal the identity matrix: A·A^{-1} = I_n and A^{-1}·A = I_n. The existence of such a matrix is what defines invertibility; if no such matrix exists, A is singular.

How does invertibility translate from matrices to linear maps?

Invertibility of A is equivalent to bijectivity of the induced linear map f_A: R^n → R^n. If f_A is bijective, it has an inverse map f_A^{-1}. That inverse map satisfies f_A^{-1}∘f_A = id_{R^n} and f_A∘f_A^{-1} = id_{R^n}, matching the two-sided identity conditions for A^{-1} in matrix multiplication.
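The composition conditions can be illustrated with a concrete bijective map. A sketch, taking a 90-degree rotation as the example (rotations are bijective, so the matrix is invertible, and applying f_A^{-1} after f_A returns every input unchanged):

```python
import numpy as np

# 90-degree rotation of the plane: a bijective linear map, so A is invertible
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
A_inv = np.linalg.inv(A)

# f_A(x) = A @ x; composing in either order acts as the identity map on x
x = np.array([3.0, 4.0])
assert np.allclose(A_inv @ (A @ x), x)   # f_A^{-1} ∘ f_A = id
assert np.allclose(A @ (A_inv @ x), x)   # f_A ∘ f_A^{-1} = id
```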

Why is the inverse matrix notation A^{-1} justified?

If an inverse exists, it is uniquely determined. Because there can be only one matrix that satisfies both A·X = I_n and X·A = I_n, the inverse is not ambiguous—so writing A^{-1} is well-defined.
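The uniqueness argument is one line of algebra: suppose X satisfies X·A = I_n and Y satisfies A·Y = I_n. Then, using associativity of matrix multiplication,

```latex
X = X I_n = X (A Y) = (X A) Y = I_n Y = Y.
```

So any left inverse equals any right inverse; in particular, a two-sided inverse is unique.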

What does “singular” mean in this framework?

A matrix is singular when it is not invertible, which corresponds to the induced linear map f_A not being bijective. In that case, there is no inverse map f_A^{-1}, and therefore no matrix A^{-1} that can undo the transformation in both multiplication orders.
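A minimal sketch of a singular matrix (illustrative values: the second row is twice the first, so the map collapses distinct inputs and cannot be inverted; NumPy's `linalg.inv` raises `LinAlgError` in this case):

```python
import numpy as np

# Singular matrix: row 2 is twice row 1, so f_A is not injective and det(A) = 0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(A), 0.0)

# No A^{-1} exists; attempting to compute one raises LinAlgError
try:
    np.linalg.inv(A)
    raised = False
except np.linalg.LinAlgError:
    raised = True
assert raised
```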

Review Questions

  1. State the defining equations for the inverse of a square matrix A using identity matrices.
  2. Explain the equivalence between a matrix being invertible and its induced linear map being bijective.
  3. Describe how the identity matrix relates to both matrix multiplication and the identity linear map on R^n.

Key Points

  1. The n×n identity matrix I_n has 1s on the diagonal and 0s elsewhere.

  2. I_n is the neutral element for matrix multiplication: multiplying by I_n on the appropriate side leaves a compatible matrix unchanged.

  3. The identity matrix corresponds to the identity linear map on R^n, mapping every vector x to itself.

  4. A square matrix A is invertible if there exists a matrix A^{-1} such that A·A^{-1} = I_n and A^{-1}·A = I_n; when such an inverse exists, it is unique.

  5. Invertibility of A is equivalent to bijectivity of the induced linear map f_A: R^n → R^n.

  6. Matrices without inverses are called singular (non-invertible/non-regular), reflecting that f_A is not bijective.

  7. Inverse-map relationships (composition giving the identity map) match the two-sided inverse equations for matrices.

Highlights

Identity matrices are the “do nothing” element for both matrix multiplication and linear transformations.
An inverse requires two-sided cancellation: A·A^{-1} = I_n and A^{-1}·A = I_n.
Invertibility of a matrix is exactly the same as bijectivity of its induced linear map.
When an inverse exists, it is unique, making the notation A^{-1} unambiguous.
Inverse maps and inverse matrices correspond perfectly through the matrix-to-linear-map relationship.