Linear Algebra 29 | Identity and Inverses [dark version]
Based on the video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
An n×n identity matrix 1ₙ has ones on the diagonal and zeros elsewhere.
Briefing
Identity matrices and matrix inverses are the backbone of turning linear maps into something you can compute—and back again. An n×n identity matrix, written as 1ₙ (or often just I when the size is clear), is built from ones on the diagonal and zeros everywhere else. Its defining feature is how it behaves under multiplication: whenever the dimensions match, multiplying a matrix by 1ₙ on the right leaves it unchanged (B·1ₙ = B), and multiplying on the left likewise leaves it unchanged (1ₙ·A = A). That “does nothing” role mirrors the number 1’s job in ordinary multiplication, but now for matrix multiplication.
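This neutral-element behaviour is easy to verify numerically. A minimal sketch using NumPy, where the matrices B and A are arbitrary example values, not anything from the video:

```python
import numpy as np

# 3×3 identity matrix: ones on the diagonal, zeros elsewhere
I3 = np.eye(3)

# Arbitrary example matrices: B is 2×3, A is 3×2
B = np.array([[1., 2., 3.],
              [4., 5., 6.]])
A = np.array([[1., 0.],
              [2., -1.],
              [0., 3.]])

# Multiplying by the identity on a compatible side changes nothing
assert np.array_equal(B @ I3, B)   # B·1ₙ = B (right multiplication)
assert np.array_equal(I3 @ A, A)   # 1ₙ·A = A (left multiplication)

# On vectors, 1ₙ induces the identity map: x ↦ x
x = np.array([7., -2., 5.])
assert np.array_equal(I3 @ x, x)
```

The vector check at the end previews the map-level view: f_{1ₙ} sends every x to itself.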
The identity matrix also has a clean interpretation at the level of linear maps. A matrix A corresponds to a linear map f_A that sends vectors in ℝⁿ to vectors in ℝⁿ (or more generally ℝⁿ to ℝᵐ depending on dimensions). For the identity matrix 1ₙ, the induced map f_{1ₙ} takes any vector x to 1ₙx, which equals x. So the identity matrix corresponds exactly to the identity map on ℝⁿ: it preserves every vector and therefore every linear structure.
Inverses come next, and the transcript emphasizes that they only make sense for square matrices. For a square matrix A, an inverse is another square matrix A^{-1} of the same size such that A·A^{-1} = 1ₙ and also A^{-1}·A = 1ₙ. This two-sided requirement is the matrix analogue of how multiplicative inverses work for real numbers. The key difference is that not every square matrix has such a partner. If an inverse exists, it is unique—so writing A^{-1} is justified because there can be at most one matrix that satisfies both equations.
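The uniqueness claim follows in one line from associativity. Suppose B and C both satisfy the two defining equations for an inverse of A; then:

```latex
B = B \cdot 1_n = B(AC) = (BA)C = 1_n \cdot C = C
```

So any two candidates coincide, and the notation A^{-1} is unambiguous.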
A square matrix is called invertible if it has an inverse; the transcript also notes common synonyms: non-singular and regular. When no inverse exists, the matrix is singular (also called non-invertible or non-regular). The map viewpoint sharpens the meaning: invertibility of A corresponds to the induced linear map f_A being bijective. In other words, the inverse matrix exists exactly when the linear transformation can be reversed without losing information.
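A small NumPy sketch of the singular case (the matrix S below is an arbitrary example whose second row is twice the first, so the induced map collapses a direction and cannot be bijective):

```python
import numpy as np

# Second row = 2 × first row: the rows are linearly dependent
S = np.array([[1., 2.],
              [2., 4.]])

# A zero determinant signals singularity: no inverse exists
assert abs(np.linalg.det(S)) < 1e-12

# Asking NumPy for an inverse accordingly fails
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("singular: no inverse exists")
```

The lost information is visible directly: S maps both (2, 0) and (0, 1) to the same output, so no map can undo it.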
Finally, the relationship between matrix inverses and inverse linear maps is made explicit. If f_A is the linear map induced by A, then the inverse linear map f_A^{-1} is induced by the inverse matrix A^{-1}. This equivalence shows up in the composition rule: f_A^{-1}∘f_A equals the identity map (and likewise f_A∘f_A^{-1} equals the identity map). In matrix language, the same idea becomes A^{-1}A = 1ₙ and AA^{-1} = 1ₙ. The takeaway is that inverses aren’t arbitrary definitions—they’re forced by the need for a true “undoing” operation at both the algebraic (multiplication) and geometric (linear map) levels.
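The composition rule can be checked numerically as well. A sketch with NumPy, where A and x are arbitrary example values and comparisons use a tolerance because floating-point inversion is approximate:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])        # invertible: determinant is 1
A_inv = np.linalg.inv(A)

# Matrix-level identities: A⁻¹A = 1ₙ and AA⁻¹ = 1ₙ
assert np.allclose(A_inv @ A, np.eye(2))
assert np.allclose(A @ A_inv, np.eye(2))

# Map-level identity: f_{A⁻¹} ∘ f_A sends every x back to itself
x = np.array([3., -5.])
assert np.allclose(A_inv @ (A @ x), x)
```

The last assertion is exactly the “undoing” property: applying f_A and then f_{A^{-1}} is the identity map on ℝⁿ.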
Cornell Notes
An identity matrix 1ₙ is the n×n square matrix with ones on the diagonal and zeros elsewhere. It acts as a neutral element for matrix multiplication: multiplying by it on either side (when dimensions match) leaves the other matrix unchanged. Under the matrix-to-linear-map correspondence, 1ₙ induces the identity map on ℝⁿ, sending every vector x to x.
A square matrix A has an inverse A^{-1} exactly when there exists a same-sized matrix satisfying both A·A^{-1}=1ₙ and A^{-1}·A=1ₙ. Such an inverse is unique, so the notation A^{-1} is well-defined. Invertibility of A matches bijectivity of the induced linear map f_A, and the inverse linear map is induced by A^{-1}.
Why does the identity matrix behave like “1” in ordinary arithmetic?
How does the identity matrix translate into the language of linear maps?
What conditions must a matrix A satisfy to have an inverse?
Why is the inverse of a matrix unique when it exists?
How does invertibility of a matrix relate to properties of its induced linear map?
What is the exact relationship between inverse maps and inverse matrices?
Review Questions
- What two multiplication equations define the inverse of a square matrix A?
- How does the identity matrix 1ₙ act on vectors in ℝⁿ, and how does that relate to the identity map?
- Why is bijectivity of the induced linear map f_A equivalent to A being invertible?
Key Points
1. An n×n identity matrix 1ₙ has ones on the diagonal and zeros elsewhere.
2. For compatible dimensions, multiplying by 1ₙ on either side leaves a matrix unchanged (B·1ₙ = B and 1ₙ·A = A).
3. The identity matrix corresponds to the identity linear map on ℝⁿ: f_{1ₙ}(x)=x.
4. A square matrix A is invertible exactly when there exists a same-sized matrix A^{-1} with both A·A^{-1}=1ₙ and A^{-1}·A=1ₙ.
5. When an inverse exists, it is unique, which justifies the notation A^{-1}.
6. Invertible matrices correspond exactly to bijective induced linear maps; singular matrices correspond to non-bijective maps.
7. Inverse maps and inverse matrices match under composition: f_A^{-1}∘f_A and f_A∘f_A^{-1} both produce the identity map.