
Inverse matrices, column space and null space | Chapter 7, Essence of linear algebra

3Blue1Brown · 5 min read

Based on 3Blue1Brown's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Write linear systems as Ax = V to connect algebraic solvability to geometric behavior of a linear transformation.

Briefing

Linear algebra’s payoff is practical: many real problems reduce to solving linear systems, and the geometry of a matrix determines whether solutions exist, whether they’re unique, and what the solution set looks like. A linear system can be written as a single matrix equation, Ax = V, where A encodes the coefficients, x holds the unknowns, and V is the target vector. Geometrically, multiplying by A is a linear transformation: solving Ax = V means finding the vector x that lands exactly on V after the transformation reshapes space. This geometric lens matters because it turns algebraic questions—like “does an inverse exist?”—into questions about how A squashes or stretches space.
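As a quick sketch of this idea in NumPy (the particular matrix and target vector here are illustrative, not from the video): the system 2x + y = 5, x + 3y = 10 becomes Ax = V, and solving it means finding the one vector that A maps onto V.

```python
import numpy as np

# Hypothetical 2x2 system:
#   2x + 1y = 5
#   1x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix
V = np.array([5.0, 10.0])    # target vector

# np.linalg.solve finds the x with A @ x == V (unique when det(A) != 0)
x = np.linalg.solve(A, V)
print(x)        # → [1. 3.]
print(A @ x)    # applying the transformation lands exactly on V
```

Reading `A @ x` as "transform x by A" is the geometric lens the briefing describes: solving the system is asking which input lands on V.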

When A has a non-zero determinant, it does not collapse space into a lower dimension. In that case, there is exactly one vector x that maps to V, and the solution can be obtained by reversing the transformation. The reverse operation is the inverse matrix, A^{-1}, defined by the property A^{-1}A = I, where I is the identity transformation that leaves i-hat and j-hat unmoved (and thus has columns (1,0) and (0,1) in 2D). Examples make the idea concrete: a 90-degree counterclockwise rotation is undone by a 90-degree clockwise rotation; a shear that shifts j-hat right by one unit is undone by a shear that shifts j-hat left by one unit. In higher dimensions, the same principle holds: if the determinant is non-zero, the transformation is invertible, and when the number of equations matches the number of unknowns, the system typically has a unique solution.
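The rotation example can be checked numerically. A 90-degree counterclockwise rotation sends i-hat to (0, 1) and j-hat to (-1, 0); its inverse, computed by NumPy, is exactly the clockwise rotation, and composing the two gives the identity.

```python
import numpy as np

# 90-degree counterclockwise rotation: i-hat -> (0, 1), j-hat -> (-1, 0)
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

R_inv = np.linalg.inv(R)  # exists because det(R) = 1 != 0
print(R_inv)              # the 90-degree clockwise rotation

# Composing a transformation with its inverse does nothing: A^{-1}A = I
print(np.allclose(R_inv @ R, np.eye(2)))  # → True
```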

Determinant zero changes everything. A transformation with determinant zero squishes space into a lower-dimensional object—like a line, plane, or point—so it cannot be “unsquished” by any function that maps each input vector to a single output vector. For square systems (same number of equations and unknowns), this means no inverse exists. Still, solutions might exist even without an inverse: they occur only when V lies within the collapsed output region. That leads to a more refined classification than “determinant zero” alone.
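A small sketch makes the distinction concrete (the rank-one matrix below is a made-up example): when det(A) = 0, `np.linalg.inv` would fail, yet a target on the collapsed line is still reachable, while a target off the line is not.

```python
import numpy as np

# Hypothetical rank-1 matrix: both columns lie on the same line
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))   # 0.0 -> no inverse exists

# V on the collapsed line (inside the column space): solvable
V_on = np.array([3.0, 6.0])
x, _, _, _ = np.linalg.lstsq(A, V_on, rcond=None)
print(np.allclose(A @ x, V_on))   # → True: an exact solution exists

# V off the line: no x lands on it, only a nearest approximation
V_off = np.array([3.0, 5.0])
x, _, _, _ = np.linalg.lstsq(A, V_off, rcond=None)
print(np.allclose(A @ x, V_off))  # → False
```

`lstsq` is used here because `solve` requires an invertible matrix; it returns an exact solution precisely when V lies in the column space.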

The key refinement is rank, defined as the number of dimensions in the transformation’s output. A rank-one transformation collapses everything onto a line; rank-two collapses onto a plane; rank-three fills 3D space. The set of all possible outputs of A is the column space, which equals the span of A’s columns—because each column shows where a basis vector lands under the transformation. Rank is therefore the dimension of the column space, and a matrix is full rank when this dimension equals the number of columns.
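NumPy exposes this directly as `matrix_rank`; the three illustrative matrices below show that rank counts independent directions among the columns, not whether the matrix "looks" degenerate.

```python
import numpy as np

# Each column records where a basis vector lands; rank counts the
# independent directions among those landing spots.
identity = np.array([[1.0, 0.0], [0.0, 1.0]])    # rank 2: fills the plane
shear    = np.array([[1.0, 1.0], [0.0, 1.0]])    # rank 2: distorts, no collapse
collapse = np.array([[1.0, 2.0], [2.0, 4.0]])    # rank 1: columns collinear

for M in (identity, shear, collapse):
    print(np.linalg.matrix_rank(M))   # → 2, 2, 1
```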

Finally, the null space (kernel) captures the vectors that collapse to the origin: all x such that Ax = 0. Because linear transformations always keep the origin fixed, the zero vector always belongs to the null space. When A is not full rank, entire subspaces collapse to zero: in 2D, a line of vectors may map to the origin; in 3D, a plane or line may do so depending on how much collapse occurs. In the language of linear systems, when V = 0, the null space describes all solutions—so it explains not just whether solutions exist, but how many and in what geometric form.
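One common way to extract a null-space basis numerically (a sketch, using the same illustrative rank-one matrix as above) is the SVD: right singular vectors whose singular value is zero span the kernel.

```python
import numpy as np

# Rank-1 matrix in 2D: a whole line of vectors maps to the origin
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Rows of Vt whose singular value is (numerically) zero span the kernel
_, s, Vt = np.linalg.svd(A)
kernel = Vt[s < 1e-10]
print(kernel)                        # basis vector of the line Ax = 0
print(np.allclose(A @ kernel.T, 0))  # → True: it collapses to the origin
```

For this matrix the kernel is one-dimensional, matching the briefing's point: a rank-one transformation of the plane sends an entire line of inputs to the origin.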

Cornell Notes

The equation Ax = V can be read geometrically: A acts as a linear transformation that maps vectors x to outputs. If det(A) ≠ 0, the transformation doesn’t collapse space, so A has an inverse A^{-1} and Ax = V has exactly one solution, found by x = A^{-1}V. If det(A) = 0, A collapses space into a lower-dimensional set, so no inverse exists; solutions exist only when V lies in the column space. The column space is the span of A’s columns and its dimension is the rank. The null space (kernel) is the set of vectors that map to the origin (Ax = 0), and when V = 0 it gives the full set of solutions.

Why does det(A) ≠ 0 guarantee a unique solution to Ax = V?

A non-zero determinant means the transformation associated with A does not squish space into a lower dimension. In that situation, the mapping from inputs x to outputs is reversible: there exists a unique inverse transformation A^{-1} such that A^{-1}A = I (doing nothing). Because the transformation can be undone, exactly one vector x lands on the target V, and the solution is x = A^{-1}V.

What does it mean geometrically when det(A) = 0?

det(A) = 0 indicates the transformation collapses space into something lower-dimensional—like a line, plane, or point. That collapse prevents a true inverse from existing, because reversing would require mapping a whole collapsed set back to distinct original vectors, which a function cannot do. Solutions may still exist, but only if V lies on the collapsed output set.

How do column space and rank connect to the columns of A?

The column space is the set of all possible outputs of the transformation, and it equals the span of A’s columns. Each column shows where a basis vector lands under A; taking all linear combinations of those columns produces every achievable output. Rank is the dimension of that column space: rank 1 means outputs form a line, rank 2 a plane, and rank 3 fills 3D space.

What is the null space, and how does it describe solutions?

The null space (kernel) is the set of vectors x such that Ax = 0, meaning they all collapse to the origin. Because linear transformations keep the origin fixed, the zero vector always belongs to the null space. When V = 0 in Ax = V, the null space is exactly the set of solutions; if A is not full rank, that solution set is not just one vector but an entire subspace (a line in 2D, a plane or line in 3D depending on the collapse).

How do full rank and “only the zero vector maps to the origin” relate?

Full rank means the column space has the maximum possible dimension (equal to the number of columns). In that case, the transformation collapses nothing extra onto the origin: the only vector that maps to zero is the zero vector itself. When the matrix is not full rank, collapse occurs and many nonzero vectors map to the origin, forming a nontrivial null space.
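The same SVD check confirms the contrast (again with an illustrative matrix): a full-rank matrix has no zero singular values, so its null space contains only the zero vector.

```python
import numpy as np

full = np.array([[2.0, 1.0],
                 [1.0, 3.0]])          # det = 5, full rank
_, s, _ = np.linalg.svd(full)
print((s < 1e-10).sum())               # → 0: trivial null space
```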

Review Questions

  1. In what geometric situations does Ax = V have no inverse, and what additional condition determines whether a solution still exists?
  2. How would you distinguish rank 1, rank 2, and rank 3 transformations using the shapes of their column spaces?
  3. If V = 0, how do the null space and the set of solutions to Ax = V relate?

Key Points

  1. Write linear systems as Ax = V to connect algebraic solvability to geometric behavior of a linear transformation.
  2. If det(A) ≠ 0, A is invertible, so Ax = V has exactly one solution given by x = A^{-1}V.
  3. If det(A) = 0, A collapses space into a lower-dimensional set, so no inverse exists; solutions exist only when V lies in the column space.
  4. Column space equals the span of A’s columns, and its dimension is the rank of A.
  5. Full rank means the column space has maximal dimension, which implies the null space contains only the zero vector.
  6. Null space (kernel) consists of all x such that Ax = 0, and when V = 0 it describes the entire solution set.
  7. Rank and nullity describe how much collapse occurs: the more collapse, the larger the subspace of solutions when V = 0.

Highlights

Non-zero determinant corresponds to a transformation that doesn’t collapse space, making inverse matrices possible and solutions unique.
Determinant zero removes invertibility, but solutions can still exist if the target vector V lies inside the column space.
Rank is the dimension of the column space: rank 1 outputs form a line, rank 2 a plane, and rank 3 fills 3D space.
The null space is the geometric “pile-up” at the origin: it’s the set of all vectors that map to zero and, for V = 0, equals the solution set.
