Linear Algebra 29 | Identity and Inverses
Based on a video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
The n×n identity matrix I_n has 1s on the diagonal and 0s elsewhere.
Briefing
Identity matrices and inverses sit at the center of linear algebra because they formalize “do nothing” and “undo what a transformation does.” An n×n identity matrix has 1s on the diagonal and 0s everywhere else, and it acts as the neutral element for matrix multiplication: multiplying any compatible matrix B by the identity leaves B unchanged. On the linear-map level, the identity matrix corresponds to the identity map on R^n, sending every vector x back to itself. These two viewpoints—matrix multiplication and linear transformations—stay tightly linked and describe the same underlying behavior.
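The neutral-element behavior above can be checked numerically. A minimal sketch using NumPy, with a hypothetical 2×3 matrix B chosen only for illustration:

```python
import numpy as np

# A 2x3 example matrix; I_2 on the left and I_3 on the right leave it unchanged.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
I2 = np.eye(2)
I3 = np.eye(3)

assert np.allclose(I2 @ B, B)  # left multiplication by the identity
assert np.allclose(B @ I3, B)  # right multiplication by the identity

# As a linear map, I_n sends every vector x back to itself.
x = np.array([7.0, -1.0, 2.5])
assert np.allclose(I3 @ x, x)
```

Note that for a non-square B the identity on the left (I_2) and on the right (I_3) have different sizes; only for square matrices is it the same I_n on both sides.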
Once that neutral element is in place, inverses become the natural next question: what matrix (or map) reverses an operation? For a square matrix A, an inverse is another square matrix A^{-1} of the same size such that A·A^{-1} equals the identity matrix and also A^{-1}·A equals the identity matrix. The requirement that both multiplication orders produce the identity mirrors the familiar real-number idea, but matrices are more restrictive: not every square matrix has such an inverse. If an inverse exists, it is unique—so the notation A^{-1} is justified. Matrices that do have inverses are called invertible; they are also described as non-singular or regular. Matrices without an inverse are called singular (or non-invertible/non-regular).
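The invertible-versus-singular distinction can be illustrated concretely. A sketch with two hypothetical 2×2 matrices: one invertible, and one singular because its second row is a multiple of the first:

```python
import numpy as np

# An invertible (non-singular) matrix: its inverse satisfies both orders.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))  # A . A^{-1} = I
assert np.allclose(A_inv @ A, np.eye(2))  # A^{-1} . A = I

# A singular matrix: rows are linearly dependent, so no inverse exists
# and NumPy raises an error when asked for one.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("S is singular: no inverse")
```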
The same classification can be expressed using the associated linear map f_A. A matrix A is invertible exactly when the induced linear map f_A: R^n → R^n is bijective, meaning it both hits every vector in the codomain and never collapses distinct inputs to the same output. In that case, the inverse map f_A^{-1} exists, and it corresponds to the inverse matrix A^{-1}. This correspondence is not just conceptual—it yields concrete equalities: composing f_A^{-1} with f_A gives the identity map on R^n, and composing in the other order also gives the identity map. Translating back to matrices, those inverse-map relationships match the two matrix equations A·A^{-1} = I and A^{-1}·A = I.
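The map-level correspondence can also be seen numerically: applying A and then A^{-1} (or the other way around) returns every vector unchanged. A sketch using a 90-degree rotation matrix, chosen because it is clearly bijective:

```python
import numpy as np

# f_A: x -> A x, here a 90-degree rotation of the plane.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
A_inv = np.linalg.inv(A)

x = np.array([3.0, 4.0])
y = A @ x                                # y = f_A(x)
assert np.allclose(A_inv @ y, x)         # f_A^{-1}(f_A(x)) = x
assert np.allclose(A @ (A_inv @ x), x)   # f_A(f_A^{-1}(x)) = x
```

The two assertions are exactly the map-level versions of the matrix equations A^{-1}·A = I and A·A^{-1} = I.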
In short, identity matrices define the “do nothing” baseline for both multiplication and linear transformations. Inverses then define the “undo” operation, but only for matrices whose action is bijective at the map level. That equivalence between inverse matrices and inverse linear maps is the key bridge that makes later computations possible—because it turns the abstract idea of reversing a transformation into precise algebraic conditions on matrices.
Cornell Notes
An identity matrix I_n is the neutral element for matrix multiplication: multiplying it on the left or right by a compatible matrix leaves that matrix unchanged. It also corresponds to the identity linear map on R^n, sending every vector x to itself. A square matrix A is invertible if there exists a matrix A^{-1} of the same size such that A·A^{-1} = I_n and A^{-1}·A = I_n; this inverse is unique. On the linear-map side, invertibility means the induced map f_A is bijective, so it has an inverse map f_A^{-1}. The inverse map f_A^{-1} matches the induced map of A^{-1}, linking matrix inverses and inverse transformations.
- Why does the identity matrix act like “1” in matrix multiplication?
- What exactly makes a square matrix invertible?
- How does invertibility translate from matrices to linear maps?
- Why is the inverse matrix notation A^{-1} justified?
- What does “singular” mean in this framework?
Review Questions
- State the defining equations for the inverse of a square matrix A using identity matrices.
- Explain the equivalence between a matrix being invertible and its induced linear map being bijective.
- Describe how the identity matrix relates to both matrix multiplication and the identity linear map on R^n.
Key Points
1. The n×n identity matrix I_n has 1s on the diagonal and 0s elsewhere.
2. I_n is the neutral element for matrix multiplication: multiplying by I_n on the appropriate side leaves a compatible matrix unchanged.
3. The identity matrix corresponds to the identity linear map on R^n, mapping every vector x to itself.
4. A square matrix A is invertible if there exists a unique matrix A^{-1} such that A·A^{-1} = I_n and A^{-1}·A = I_n.
5. Invertibility of A is equivalent to bijectivity of the induced linear map f_A: R^n → R^n.
6. Matrices without inverses are called singular (non-invertible/non-regular), reflecting that f_A is not bijective.
7. Inverse-map relationships (composition giving the identity map) match the two-sided inverse equations for matrices.