Inverse matrices, column space and null space | Chapter 7, Essence of linear algebra
Based on 3Blue1Brown's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Linear algebra’s payoff is practical: many real problems reduce to solving linear systems, and the geometry of a matrix determines whether solutions exist, whether they’re unique, and what the solution set looks like. A linear system can be written as a single matrix equation, Ax = V, where A encodes the coefficients, x holds the unknowns, and V is the target vector. Geometrically, multiplying by A is a linear transformation: solving Ax = V means finding the vector x that lands exactly on V after the transformation reshapes space. This geometric lens matters because it turns algebraic questions—like “does an inverse exist?”—into questions about how A squashes or stretches space.
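To make this concrete, here is a minimal sketch in NumPy (the matrix A and the target V are arbitrary examples, not taken from the video):

```python
import numpy as np

# An arbitrary 2x2 system: 2x + 2y = -4 and x + 3y = -1,
# bundled into the single matrix equation A x = V.
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
V = np.array([-4.0, -1.0])

# np.linalg.solve finds the unique x that A maps onto V
# (valid here because det(A) != 0).
x = np.linalg.solve(A, V)
print(x)                      # [-2.5  0.5]
print(np.allclose(A @ x, V))  # True: x lands exactly on V
```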
When A has a non-zero determinant, it does not collapse space into a lower dimension. In that case, there is exactly one vector x that maps to V, and the solution can be obtained by reversing the transformation. The reverse operation is the inverse matrix, A^{-1}, defined by the property A^{-1}A = I, where I is the identity transformation that leaves i-hat and j-hat unmoved (and thus has columns (1,0) and (0,1) in 2D). Examples make the idea concrete: a 90-degree counterclockwise rotation is undone by a 90-degree clockwise rotation, and a shear that shifts j-hat one unit to the right is undone by a shear that shifts j-hat one unit to the left. In higher dimensions, the same principle holds: whenever the determinant is non-zero, the transformation is invertible and the system has exactly one solution. And when the number of equations matches the number of unknowns, a non-zero determinant is the typical case, though it is not guaranteed.
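A quick numerical check of the rotation example, as a sketch assuming NumPy (the target vector V is an arbitrary choice):

```python
import numpy as np

# 90-degree counterclockwise rotation: the columns record where
# i-hat and j-hat land.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Its inverse is the 90-degree clockwise rotation.
R_inv = np.linalg.inv(R)
print(np.allclose(R_inv @ R, np.eye(2)))  # True: A^{-1} A = I

# Solving Rx = V by reversing the transformation.
V = np.array([3.0, 1.0])
x = R_inv @ V
print(np.allclose(R @ x, V))  # True: the unique solution
```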
Determinant zero changes everything. A transformation with determinant zero squishes space into a lower-dimensional object—like a line, plane, or point—so it cannot be “unsquished” by any function that maps each input vector to a single output vector. For square systems (same number of equations and unknowns), this means no inverse exists. Still, solutions might exist even without an inverse: they occur only when V lies within the collapsed output region. That leads to a more refined classification than “determinant zero” alone.
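To see this numerically, here is a sketch with a hand-picked rank-1 matrix; the least-squares call is just a convenient membership test for the column space, not part of the original discussion:

```python
import numpy as np

# A rank-1 matrix: both columns lie on the line spanned by (1, 2),
# so the whole plane gets squished onto that line.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))  # 0.0, so no inverse exists

V_on_line  = np.array([3.0, 6.0])  # inside the collapsed output (column space)
V_off_line = np.array([3.0, 5.0])  # outside it

# Least squares finds the best x; the fit is exact only when V
# lies in the column space.
for V in (V_on_line, V_off_line):
    x = np.linalg.lstsq(A, V, rcond=None)[0]
    print(np.allclose(A @ x, V))  # True, then False
```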
The key refinement is rank, defined as the number of dimensions in the transformation’s output. A rank-one transformation collapses everything onto a line; rank-two collapses onto a plane; rank-three fills 3D space. The set of all possible outputs of A is the column space, which equals the span of A’s columns—because each column shows where a basis vector lands under the transformation. Rank is therefore the dimension of the column space, and a matrix is full rank when this dimension equals the number of columns.
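Rank can be read off directly; the 3x3 matrices below are made-up examples of each degree of collapse (a sketch assuming NumPy):

```python
import numpy as np

# Three 3x3 transformations: one fills 3D space, one collapses
# it onto a plane, one collapses it onto a line.
fills_space = np.eye(3)
to_plane = np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [0.0, 0.0, 0.0]])  # third column = col1 + col2
to_line  = np.array([[1.0, 2.0, 3.0],
                     [2.0, 4.0, 6.0],
                     [3.0, 6.0, 9.0]])  # every column is a multiple of (1, 2, 3)

# Rank = dimension of the column space = span of the columns.
for A in (fills_space, to_plane, to_line):
    print(np.linalg.matrix_rank(A))  # 3, then 2, then 1
```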
Finally, the null space (kernel) captures the vectors that collapse to the origin: all x such that Ax = 0. Because linear transformations always keep the origin fixed, the zero vector always belongs to the null space. When A is not full rank, entire subspaces collapse to zero: in 2D, a line of vectors may map to the origin; in 3D, a plane or line may do so depending on how much collapse occurs. In the language of linear systems, when V = 0, the null space describes all solutions—so it explains not just whether solutions exist, but how many and in what geometric form.
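A sketch of computing a null-space basis, assuming SciPy is available and reusing the rank-1 matrix from the earlier example:

```python
import numpy as np
from scipy.linalg import null_space

# Rank 1 in 2D: the plane collapses onto a line, so a whole line
# of input vectors must land on the origin.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

N = null_space(A)               # orthonormal basis for {x : Ax = 0}
print(N.shape)                  # (2, 1): a one-dimensional null space
print(np.allclose(A @ N, 0.0))  # True: the basis vector maps to 0

# With V = 0, the full solution set of Ax = V is every scalar
# multiple of N[:, 0].
```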
Cornell Notes
The equation Ax = V can be read geometrically: A acts as a linear transformation that maps vectors x to outputs. If det(A) ≠ 0, the transformation doesn’t collapse space, so A has an inverse A^{-1} and Ax = V has exactly one solution, found by x = A^{-1}V. If det(A) = 0, A collapses space into a lower-dimensional set, so no inverse exists; solutions exist only when V lies in the column space. The column space is the span of A’s columns and its dimension is the rank. The null space (kernel) is the set of vectors that map to the origin (Ax = 0), and when V = 0 it gives the full set of solutions.
- Why does det(A) ≠ 0 guarantee a unique solution to Ax = V?
- What does it mean geometrically when det(A) = 0?
- How do column space and rank connect to the columns of A?
- What is the null space, and how does it describe solutions?
- How do full rank and “only the zero vector maps to the origin” relate?
Review Questions
- In what geometric situations does Ax = V have no inverse, and what additional condition determines whether a solution still exists?
- How would you distinguish rank 1, rank 2, and rank 3 transformations using the shapes of their column spaces?
- If V = 0, how do the null space and the set of solutions to Ax = V relate?
Key Points
1. Write linear systems as Ax = V to connect algebraic solvability to the geometric behavior of a linear transformation.
2. If det(A) ≠ 0, A is invertible, so Ax = V has exactly one solution, given by x = A^{-1}V.
3. If det(A) = 0, A collapses space into a lower-dimensional set, so no inverse exists; solutions exist only when V lies in the column space.
4. The column space equals the span of A’s columns, and its dimension is the rank of A.
5. Full rank means the column space has maximal dimension, which implies the null space contains only the zero vector.
6. The null space (kernel) consists of all x such that Ax = 0; when V = 0, it describes the entire solution set.
7. Rank and nullity describe how much collapse occurs: the more collapse, the larger the subspace of solutions when V = 0.