
Linear Algebra 29 | Identity and Inverses [dark version]

4 min read

Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

An n×n identity matrix 1ₙ has ones on the diagonal and zeros elsewhere; it is the neutral element of matrix multiplication. A square matrix A is invertible precisely when some matrix A^{-1} satisfies A·A^{-1} = A^{-1}·A = 1ₙ, and such an inverse is unique.

Briefing

Identity matrices and matrix inverses are the backbone of turning linear maps into something you can compute, and then undo. An n×n identity matrix, written 1ₙ (or often just I when the size is clear), has ones on the diagonal and zeros everywhere else. Its defining feature is how it behaves under multiplication: for any matrix B with n columns, multiplying by 1ₙ on the right leaves B unchanged (B·1ₙ = B), and for any matrix A with n rows, multiplying by 1ₙ on the left leaves A unchanged (1ₙ·A = A). That “does nothing” role mirrors the number 1’s job in ordinary multiplication, but now for matrix multiplication.
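As a quick sanity check, this neutral-element behavior can be sketched in NumPy, where `np.eye(n)` builds 1ₙ; the matrix `B` here is an arbitrary example, not from the lecture:

```python
import numpy as np

# An arbitrary example 2x3 matrix B.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

I3 = np.eye(3)  # right identity: n must match B's number of columns
I2 = np.eye(2)  # left identity: n must match B's number of rows

# Multiplying by the identity on either compatible side changes nothing.
assert np.array_equal(B @ I3, B)   # B · 1_3 = B
assert np.array_equal(I2 @ B, B)   # 1_2 · B = B
```

Note that a non-square matrix needs identities of two different sizes, one per side, which is exactly the dimension-matching caveat above.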

The identity matrix also has a clean interpretation at the level of linear maps. A matrix A corresponds to a linear map f_A that sends vectors in ℝⁿ to vectors in ℝⁿ (or more generally ℝⁿ to ℝᵐ depending on dimensions). For the identity matrix 1ₙ, the induced map f_{1ₙ} takes any vector x to 1ₙx, which equals x. So the identity matrix corresponds exactly to the identity map on ℝⁿ: it preserves every vector and therefore every linear structure.
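The map-level statement f_{1ₙ}(x) = x can be checked the same way; the vector below is an arbitrary example in ℝ³:

```python
import numpy as np

I3 = np.eye(3)
x = np.array([2.0, -1.0, 5.0])  # an arbitrary example vector in R^3

# f_{1_3}(x) = 1_3 · x = x: the identity matrix induces the identity map.
assert np.array_equal(I3 @ x, x)
```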

Inverses come next, and the transcript emphasizes that they only make sense for square matrices. For a square matrix A, an inverse is another square matrix A^{-1} of the same size such that A·A^{-1} = 1ₙ and also A^{-1}·A = 1ₙ. This two-sided requirement is the matrix analogue of how multiplicative inverses work for real numbers. The key difference is that not every square matrix has such a partner. If an inverse exists, it is unique—so writing A^{-1} is justified because there can be at most one matrix that satisfies both equations.
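A small NumPy sketch of the two-sided condition, using an example 2×2 matrix chosen for illustration; `np.linalg.inv` computes the inverse numerically, so the comparisons use a floating-point tolerance:

```python
import numpy as np

# An example invertible 2x2 matrix (determinant 1).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)  # numerical inverse

# Both products recover the identity (up to floating-point error).
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```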

A square matrix is called invertible if it has an inverse; the transcript also notes common synonyms: non-singular and regular. When no inverse exists, the matrix is singular (also called non-invertible or non-regular). The map viewpoint sharpens the meaning: invertibility of A corresponds to the induced linear map f_A being bijective. In other words, the inverse matrix exists exactly when the linear transformation can be reversed without losing information.

Finally, the relationship between matrix inverses and inverse linear maps is made explicit. If f_A is the linear map induced by A, then the inverse linear map f_A^{-1} is induced by the inverse matrix A^{-1}. This equivalence shows up in the composition rule: f_A^{-1}∘f_A equals the identity map (and likewise f_A∘f_A^{-1} equals the identity map). In matrix language, the same idea becomes A^{-1}A = 1ₙ and AA^{-1} = 1ₙ. The takeaway is that inverses aren’t arbitrary definitions—they’re forced by the need for a true “undoing” operation at both the algebraic (multiplication) and geometric (linear map) levels.
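The “undoing” reading can be seen in one concrete example; the 90-degree rotation matrix below is an illustrative invertible map, not one from the lecture:

```python
import numpy as np

# A 90-degree rotation of the plane: an invertible linear map.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
A_inv = np.linalg.inv(A)

x = np.array([3.0, 4.0])
y = A @ x                          # apply f_A: rotate x

# Applying the inverse map recovers the original vector:
assert np.allclose(A_inv @ y, x)   # f_{A^{-1}}(f_A(x)) = x
```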

Cornell Notes

An identity matrix 1ₙ is the n×n square matrix with ones on the diagonal and zeros elsewhere. It acts as a neutral element for matrix multiplication: multiplying by it on either side (when dimensions match) leaves the other matrix unchanged. Under the matrix-to-linear-map correspondence, 1ₙ induces the identity map on ℝⁿ, sending every vector x to x.

A square matrix A has an inverse A^{-1} precisely when there exists a same-sized matrix satisfying both A·A^{-1}=1ₙ and A^{-1}·A=1ₙ. Such an inverse is unique, so the notation A^{-1} is well-defined. Invertibility of A matches bijectivity of the induced linear map f_A, and the inverse linear map is induced by A^{-1}.

Why does the identity matrix behave like “1” in ordinary arithmetic?

Because it is the neutral element for matrix multiplication. For any compatible matrix B, right-multiplying by the identity leaves it unchanged: B·1ₙ = B (when B has n columns). Left-multiplying also leaves a matrix unchanged: 1ₙ·A = A (when A has n rows so the product is defined). This parallels how multiplying by 1 leaves real numbers unchanged, but with matrix multiplication instead of scalar multiplication.

How does the identity matrix translate into the language of linear maps?

A matrix A induces a linear map f_A. For the identity matrix 1ₙ, the induced map satisfies f_{1ₙ}(x) = 1ₙx = x for every vector x in ℝⁿ. That means the induced map is exactly the identity map on ℝⁿ, which “does nothing” to vectors.

What conditions must a matrix A satisfy to have an inverse?

A must be square. Then an inverse A^{-1} is a same-size matrix such that both products yield the identity: A·A^{-1}=1ₙ and A^{-1}·A=1ₙ. The requirement that both orders work is essential; it’s the matrix analogue of having a two-sided multiplicative inverse.

Why is the inverse of a matrix unique when it exists?

The transcript notes that if a matrix B satisfies the two inverse equations for A, then B is uniquely determined: there cannot be two different matrices that both multiply with A to produce 1ₙ on both sides. This uniqueness is why the notation A^{-1} is meaningful.
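The standard one-line argument, assuming two matrices B and C both satisfy the inverse equations for A, uses only associativity and the neutral-element property of 1ₙ:

```latex
B \;=\; B \cdot 1_n \;=\; B(AC) \;=\; (BA)C \;=\; 1_n \cdot C \;=\; C.
```

So any two candidate inverses are forced to be equal.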

How does invertibility of a matrix relate to properties of its induced linear map?

Invertibility of A is equivalent to the induced linear map f_A being bijective. If f_A is bijective, it has an inverse map, and that inverse map corresponds to the induced map of A^{-1}. If f_A is not bijective, the matrix is singular (non-invertible).
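A sketch of the singular case; the matrix `S` below is a hypothetical example with linearly dependent rows, so its induced map is not injective and no inverse can exist:

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 x first row, so S is singular

# The determinant vanishes, so no inverse exists...
assert np.isclose(np.linalg.det(S), 0.0)

# ...and NumPy refuses to compute one:
try:
    np.linalg.inv(S)
    raised = False
except np.linalg.LinAlgError:
    raised = True
assert raised

# The induced map loses information: two different inputs collide.
x1 = np.array([2.0, 0.0])
x2 = np.array([0.0, 1.0])
assert np.allclose(S @ x1, S @ x2)   # both map to [2, 4]
```

The collision shows directly why f_S cannot be reversed: from the output [2, 4] alone there is no way to tell x1 and x2 apart.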

What is the exact relationship between inverse maps and inverse matrices?

If A is invertible, the inverse of the induced map f_A is exactly the map induced by the inverse matrix: (f_A)^{-1} = f_{A^{-1}}. The composition rule matches the identity: f_{A^{-1}}∘f_A equals the identity map on ℝⁿ, and f_A∘f_{A^{-1}} also equals the identity map. In matrix form, that same idea becomes A^{-1}A=1ₙ and AA^{-1}=1ₙ.

Review Questions

  1. What two multiplication equations define the inverse of a square matrix A?
  2. How does the identity matrix 1ₙ act on vectors in ℝⁿ, and how does that relate to the identity map?
  3. Why is bijectivity of the induced linear map f_A equivalent to A being invertible?

Key Points

  1. An n×n identity matrix 1ₙ has ones on the diagonal and zeros elsewhere.
  2. For compatible dimensions, multiplying by 1ₙ on either side leaves a matrix unchanged (B·1ₙ = B and 1ₙ·A = A).
  3. The identity matrix corresponds to the identity linear map on ℝⁿ: f_{1ₙ}(x)=x.
  4. A square matrix A is invertible precisely when there exists a same-sized matrix A^{-1} with both A·A^{-1}=1ₙ and A^{-1}·A=1ₙ.
  5. When an inverse exists, it is unique, which justifies the notation A^{-1}.
  6. Invertible matrices correspond exactly to bijective induced linear maps; singular matrices correspond to non-bijective maps.
  7. Inverse maps and inverse matrices match under composition: f_{A^{-1}}∘f_A and f_A∘f_{A^{-1}} both produce the identity map.

Highlights

The identity matrix is the neutral element for matrix multiplication, just like the number 1 is for real-number multiplication.
At the map level, 1ₙ induces the identity map: every vector x stays x.
Matrix inverses require two-sided conditions: both A·A^{-1} and A^{-1}·A must equal 1ₙ.
Invertibility of A is equivalent to bijectivity of the induced linear map f_A.
Inverse linear maps and inverse matrices are the same idea expressed through composition versus multiplication.