Linear Algebra 19 | Matrices induce linear maps

4 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Any m×n matrix A induces a linear map f_A: R^n → R^m defined by f_A(x)=Ax.

Briefing

A matrix isn’t just a table of numbers: it automatically defines a linear map between vector spaces, and the usual matrix-vector multiplication is exactly the rule that makes this correspondence work. For an m×n matrix A, the induced map f_A takes inputs from R^n and outputs vectors in R^m by sending a vector x to Ax. Linearity then follows from two required properties, additivity and homogeneity, both of which match the standard distributive and scalar-compatibility laws of matrix multiplication.
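
To make the induced map tangible, here is a minimal sketch in NumPy, assuming an arbitrary example 2×3 matrix (the entries are illustrative, not from the transcript):

```python
import numpy as np

# Arbitrary example 2x3 matrix: the induced map f_A sends R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def f_A(x):
    """Linear map induced by A: x in R^3 is sent to Ax in R^2."""
    return A @ x

x = np.array([1.0, -1.0, 2.0])
print(f_A(x))        # Ax, a vector in R^2
print(f_A(x).shape)  # (2,) -- the output lives in R^2
```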

Additivity means that combining inputs before applying the map is the same as applying the map to each input and then adding the results: f_A(x+y)=f_A(x)+f_A(y). In matrix terms, this is the statement that A(x+y)=Ax+Ay, reflecting the distributive rule for matrix-vector multiplication. Homogeneity means scaling an input scales the output by the same factor: f_A(λx)=λf_A(x). In matrix terms, this becomes A(λx)=λ(Ax), using the compatibility of scalar multiplication with matrix multiplication. Together, these two laws confirm that the map induced by any matrix is linear.
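
Both properties can be checked numerically. The sketch below reuses an arbitrary example matrix and test vectors (all values are assumptions for illustration):

```python
import numpy as np

# Example matrix and inputs (arbitrary values chosen for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
x = np.array([1.0, -1.0, 2.0])
y = np.array([0.5, 4.0, -3.0])
lam = 2.5

# Additivity: A(x + y) == Ax + Ay
assert np.allclose(A @ (x + y), A @ x + A @ y)

# Homogeneity: A(lam * x) == lam * (Ax)
assert np.allclose(A @ (lam * x), lam * (A @ x))

print("additivity and homogeneity hold for these inputs")
```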

To make the mechanism concrete, the transcript walks through a 2D example where A has two columns, labeled a_1 and a_2, and vectors x and y each have two components. Multiplying A by x+y produces a vector formed by distributing the components across the columns: the result can be rearranged into (Ax) + (Ay). A similar distribution works for scalar multiplication, again yielding λ(Ax). The takeaway is that linearity isn’t an extra assumption: it drops out directly from how matrix-vector multiplication is defined.
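
The column picture behind that expansion can also be verified directly; the columns a_1, a_2 and the test vectors below are arbitrary example values:

```python
import numpy as np

a1 = np.array([1.0, 0.0])  # first column of A (example values)
a2 = np.array([2.0, 1.0])  # second column of A (example values)
A = np.column_stack([a1, a2])

x = np.array([3.0, -1.0])
y = np.array([0.5, 2.0])

# Ax is the column combination x_1*a_1 + x_2*a_2.
assert np.allclose(A @ x, x[0] * a1 + x[1] * a2)

# Distributing the components of x + y across the columns
# regroups into Ax + Ay, which is exactly additivity.
lhs = (x[0] + y[0]) * a1 + (x[1] + y[1]) * a2
assert np.allclose(lhs, A @ x + A @ y)
```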

The discussion then pivots to what happens when two matrices are multiplied. If A is m×k and B is k×n, the product AB is defined and corresponds to composing the induced linear maps. The induced maps are f_B: R^n → R^k and f_A: R^k → R^m, so their composition f_A∘f_B maps R^n → R^m. Starting with an input x, applying f_B gives Bx, and then applying f_A gives A(Bx). By associativity of matrix multiplication, A(Bx)=(AB)x. That equality shows that composing linear maps corresponds exactly to multiplying their matrices: f_A∘f_B = f_{AB}.
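
A short sketch makes the correspondence concrete; the sizes m, k, n and the random matrices are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 2, 3, 4
A = rng.standard_normal((m, k))  # f_A: R^k -> R^m
B = rng.standard_normal((k, n))  # f_B: R^n -> R^k
x = rng.standard_normal(n)

# Composition of the induced maps: apply f_B first, then f_A.
composed = A @ (B @ x)

# The single map induced by the product AB.
product = (A @ B) @ x

# Associativity of matrix multiplication guarantees these agree.
assert np.allclose(composed, product)
print("f_A(f_B(x)) equals f_AB(x) for this input")
```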

This is the key structural insight: the definition of matrix multiplication is not arbitrary. It is chosen precisely so that matrix products mirror the composition of linear transformations. The transcript closes by noting that the reverse direction—recovering a matrix from an abstract linear map—will be handled next, completing the correspondence between linear maps and matrices.

Cornell Notes

Any m×n matrix A induces a linear map f_A: R^n → R^m via f_A(x)=Ax. Linearity comes from two checks: additivity (A(x+y)=Ax+Ay) and homogeneity (A(λx)=λ(Ax)), which are direct consequences of distributive and scalar-compatibility rules for matrix multiplication. A worked 2D example using columns a_1 and a_2 shows how distributing components across columns produces the same result as adding (or scaling) after applying the map. The transcript then connects matrix multiplication to composition: for compatible sizes, f_A∘f_B equals the linear map induced by AB, because A(Bx)=(AB)x by associativity. This correspondence explains why the matrix product rule is the right one for linear transformations.

Why does every matrix A define a linear map f_A(x)=Ax?

Because the induced map satisfies the two linearity requirements. For additivity, f_A(x+y)=A(x+y)=Ax+Ay, matching the distributive law. For homogeneity, f_A(λx)=A(λx)=λ(Ax), matching scalar compatibility with matrix multiplication. Since both properties hold for all vectors x,y in R^n and scalars λ, f_A is linear.

How does the additivity property look in matrix terms?

Additivity requires f_A(x+y)=f_A(x)+f_A(y). With f_A(x)=Ax, this becomes A(x+y)=Ax+Ay. The equality follows from the distributive rule for matrix-vector multiplication, so the map respects vector addition.

What does homogeneity mean for the induced map f_A?

Homogeneity requires f_A(λx)=λf_A(x). Substituting f_A(x)=Ax gives A(λx)=λ(Ax). This uses the law that scalar multiplication can be pulled through the matrix-vector product, so scaling the input scales the output by the same factor.

In the 2D example with columns a_1 and a_2, how does linearity show up?

With A=[a_1 a_2] and x=(x_1,x_2), y=(y_1,y_2), the product A(x+y) expands into a_1(x_1+y_1)+a_2(x_2+y_2). Distributing gives a_1x_1+a_1y_1+a_2x_2+a_2y_2, which can be regrouped as (a_1x_1+a_2x_2)+(a_1y_1+a_2y_2)=Ax+Ay. A similar component-wise distribution verifies f_A(λx)=λf_A(x).
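
Written out as a worked derivation, the regrouping looks like this:

```latex
\begin{aligned}
A(x+y) &= a_1(x_1+y_1) + a_2(x_2+y_2) \\
       &= a_1 x_1 + a_1 y_1 + a_2 x_2 + a_2 y_2 \\
       &= (a_1 x_1 + a_2 x_2) + (a_1 y_1 + a_2 y_2) \\
       &= Ax + Ay.
\end{aligned}
```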

Why does composing linear maps correspond to multiplying matrices?

For compatible dimensions, f_B(x)=Bx and then f_A(f_B(x))=A(Bx). Associativity gives A(Bx)=(AB)x. Since (AB)x is exactly f_{AB}(x), the composition f_A∘f_B equals the linear map induced by the matrix product AB.

Review Questions

  1. Given an m×n matrix A, what are the domain and codomain of the induced map f_A(x)=Ax?
  2. Which two properties must be checked to prove a map is linear, and how do they translate into matrix identities for A(x+y) and A(λx)?
  3. If A is m×k and B is k×n, what matrix corresponds to the composition f_A∘f_B, and why?

Key Points

  1. Any m×n matrix A induces a linear map f_A: R^n → R^m defined by f_A(x)=Ax.
  2. Linearity of f_A follows from additivity: A(x+y)=Ax+Ay.
  3. Linearity of f_A also follows from homogeneity: A(λx)=λ(Ax).
  4. A 2D column-based computation shows linearity by distributing vector components across the matrix columns.
  5. For compatible sizes, composing induced maps matches matrix multiplication: f_A∘f_B = f_{AB}.
  6. Associativity of matrix multiplication is the algebraic reason that A(Bx)=(AB)x.
  7. The matrix product rule is justified by its ability to represent composition of linear transformations.

Highlights

Matrix-vector multiplication turns a numeric table A into an abstract linear transformation f_A(x)=Ax.
Additivity and homogeneity for f_A are exactly the distributive and scalar-compatibility laws of matrix multiplication.
The composition of linear maps corresponds to multiplying their matrices: applying B first and then A gives (AB)x.
Associativity ensures the equality A(Bx)=(AB)x, making the correspondence precise.