
Linear Algebra 20 | Linear maps induce matrices [dark version]

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Every linear map F: R^N → R^M is completely determined by its values on the canonical unit vectors e1 through eN, and corresponds to exactly one M×N matrix A with F(x) = A x for all x.

Briefing

Every linear map between finite-dimensional vector spaces can be turned into a unique matrix, and that matrix is determined entirely by what the map does to the canonical unit vectors. For a linear map F: R^N → R^M, the key takeaway is that knowing F(e1), F(e2), …, F(eN) is enough to reconstruct F(x) for any vector x—no other information is needed. This matters because it converts an abstract, coordinate-free object (a linear map) into a concrete table of numbers that can be manipulated using standard matrix multiplication.

The reasoning starts by writing any vector x in R^N as a linear combination of the canonical unit vectors e1 through eN. Linearity then lets the map distribute over that combination: F(x) becomes a sum of the scalars from x multiplied by the corresponding vectors F(ei). In practical terms, this means the entire behavior of F is encoded in the images of those unit vectors. Once those images are known, the output for any input vector follows directly from the same linear combination.
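This decomposition can be checked numerically. The sketch below uses a hypothetical linear map F: R^2 → R^3 (chosen only for illustration) and verifies that F(x) equals the linear combination of the images F(e1) and F(e2):

```python
import numpy as np

# A hypothetical linear map F: R^2 -> R^3, used only for illustration.
def F(x):
    return np.array([x[0] + x[1], 2 * x[0], -x[1]], dtype=float)

x = np.array([3.0, -1.0])
e = np.eye(2)  # columns are the canonical unit vectors e1, e2

# Linearity: F(x) = x1*F(e1) + x2*F(e2).
reconstructed = x[0] * F(e[:, 0]) + x[1] * F(e[:, 1])

assert np.allclose(F(x), reconstructed)
```

Only the two vectors F(e1) and F(e2) were needed to recover F(x), exactly as the argument claims.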

From there comes the matrix representation. The claim is that there exists exactly one M×N matrix A such that the linear map F equals the matrix-induced map f_A, meaning F(x) = A x for all x in R^N. The construction is explicit: the columns of A are precisely the vectors F(e1), F(e2), …, F(eN). With this setup, matrix-vector multiplication reproduces the same linear combination produced by applying F to x, establishing existence.
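The column-by-column construction can be sketched directly. Continuing with the same hypothetical map F: R^2 → R^3, the matrix A is assembled by stacking F(e1) and F(e2) as columns, and matrix-vector multiplication then reproduces F:

```python
import numpy as np

def F(x):  # hypothetical linear map R^2 -> R^3, for illustration only
    return np.array([x[0] + x[1], 2 * x[0], -x[1]], dtype=float)

N = 2  # dimension of the domain R^N

# Build A column by column: column i is F(e_i).
A = np.column_stack([F(np.eye(N)[:, i]) for i in range(N)])

x = np.array([3.0, -1.0])
assert np.allclose(A @ x, F(x))  # the induced map f_A agrees with F
```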

Uniqueness follows from a direct argument. Suppose two matrices A and B both represent the same linear map, so that A x = B x for every x in R^N. Subtracting gives (A − B)x = 0 for all x. Choosing x to be each canonical unit vector e_i forces every column of A − B to be the zero vector, so A − B must be the zero matrix. Therefore A = B, and no other matrix can represent the same linear map.

The result is a two-way translation: matrices induce linear maps, and linear maps correspond to exactly one matrix. That equivalence is the foundation for later work, where problems about linear transformations can be tackled using matrix operations and algebraic tools.

Cornell Notes

A linear map F: R^N → R^M is completely determined by its values on the canonical unit vectors e1, …, eN. Because any x ∈ R^N can be written as a linear combination of these unit vectors, linearity forces F(x) to be the same linear combination of F(ei). This leads to a matrix representation: there is exactly one M×N matrix A such that F(x) = A x for all x. The matrix A is built by placing F(e1), …, F(eN) as its columns. Existence comes from checking that A x reproduces the linear combination defining F(x), and uniqueness follows because if two matrices agree on all x, then their difference must be the zero matrix (all columns vanish).

Why do the images of the canonical unit vectors e1, …, eN determine F(x) for every x in R^N?

Any vector x ∈ R^N can be written as x = x1 e1 + x2 e2 + … + xN eN. Linearity lets F distribute over this sum and pull out scalars: F(x) = x1 F(e1) + x2 F(e2) + … + xN F(eN). So once the vectors F(ei) are known, the formula above computes F(x) for any choice of coordinates (x1, …, xN).

How is the matrix A constructed from a linear map F: R^N → R^M?

A is the unique M×N matrix whose columns are the vectors F(e1), F(e2), …, F(eN). Concretely, the first column of A is F(e1), the second column is F(e2), and this continues until the Nth column is F(eN). This column placement is what makes A x match the linear combination defining F(x).

What does it mean that F equals the matrix-induced map f_A?

It means the same output is produced by both descriptions for every input vector x ∈ R^N: F(x) = f_A(x) = A x. Here A x is standard matrix-vector multiplication, producing a vector in R^M.

How does the existence proof work for the matrix representation?

Start with the constructed matrix A and compute A x. Matrix-vector multiplication expresses A x as a linear combination of the columns of A, weighted by the coordinates of x. Since the columns of A are exactly F(ei), that linear combination becomes x1 F(e1) + … + xN F(eN), which matches F(x) by linearity. Therefore A x reproduces F(x) for all x.
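The key fact behind this step is that A x is by definition a linear combination of A's columns, weighted by the coordinates of x. A minimal check, using an arbitrary 3×2 matrix chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 0.0],
              [0.0, -1.0]])
x = np.array([3.0, -1.0])

# Matrix-vector multiplication is a linear combination of A's columns,
# weighted by the coordinates of x.
column_combination = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, column_combination)
```

Since the columns are the F(ei), this combination is exactly x1 F(e1) + … + xN F(eN).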

How does uniqueness follow from the condition A x = B x for all x?

If two matrices A and B both represent the same linear map, then A x = B x for every x ∈ R^N. Subtracting gives (A − B)x = 0 for all x. Taking x = e_i forces (A − B)e_i = 0, and (A − B)e_i picks out the i-th column of (A − B). Since this holds for every i, every column is the zero vector, so A − B is the zero matrix and A = B.
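The column-extraction step is easy to verify: multiplying any matrix D by the unit vector e_i returns the i-th column of D. A small sketch with arbitrary random matrices standing in for A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))  # arbitrary stand-ins for two candidate
B = rng.standard_normal((3, 2))  # representations of the same map
D = A - B

# Multiplying by e_i extracts the i-th column of D.
for i in range(2):
    e_i = np.eye(2)[:, i]
    assert np.allclose(D @ e_i, D[:, i])

# So if D @ x = 0 for every x, each column D[:, i] is zero,
# hence D = 0 and A = B.
```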

Review Questions

  1. Given x = (x1, …, xN) in R^N, write an explicit formula for F(x) using only F(e1), …, F(eN).
  2. If A is the matrix representation of F, what is the relationship between the i-th column of A and F(ei)?
  3. Suppose A x = B x for all x ∈ R^N. What can be concluded about A − B, and why?

Key Points

  1. Any vector x in R^N can be written as a linear combination of canonical unit vectors e1 through eN.

  2. Linearity forces F(x) to equal the same linear combination of F(e1), …, F(eN).

  3. For a linear map F: R^N → R^M, there exists exactly one M×N matrix A such that F(x) = A x for all x.

  4. The i-th column of A is exactly F(ei).

  5. Existence is confirmed by showing A x reproduces the linear combination that defines F(x).

  6. Uniqueness follows because if two matrices agree on all x, their difference annihilates every e_i, making all columns zero.

Highlights

A linear map’s entire behavior is encoded in its values on the canonical unit vectors.
The matrix representation is built column-by-column: column i equals F(ei).
The representation is not just possible—it’s unique, because agreement on all inputs forces the matrices to be identical.