
Linear Algebra 19 | Matrices induce linear maps [dark version]

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

An M×N matrix A induces a linear map F_A: R^N → R^M via F_A(x)=A x.

Briefing

A matrix doesn’t just store numbers—it automatically defines a linear map between vector spaces, and the usual matrix-vector multiplication is exactly the rule that makes linearity work. For an M×N matrix A, the associated map F_A takes an input vector x in R^N and outputs A x in R^M. Linearity then follows from two compatibility properties: distributing over vector addition and commuting with scalar multiplication. Concretely, the additive property comes from the matrix-vector distributive law A(x+y)=Ax+Ay, while the homogeneous property comes from the scalar compatibility law A(λx)=λ(Ax). Together these match the definition of a linear map, so every matrix induces a function that respects both vector addition and scaling.
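Both properties can be checked numerically. A minimal sketch using NumPy (the transcript itself works on paper; the matrix and vectors here are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary M×N matrix and the map it induces, F_A(x) = A x.
M, N = 3, 2
A = rng.standard_normal((M, N))
F_A = lambda x: A @ x

x = rng.standard_normal(N)
y = rng.standard_normal(N)
lam = 2.5

# Additivity: A(x+y) = Ax + Ay
assert np.allclose(F_A(x + y), F_A(x) + F_A(y))

# Homogeneity: A(λx) = λ(Ax)
assert np.allclose(F_A(lam * x), lam * F_A(x))
```

The assertions pass (up to floating-point rounding, hence `allclose`), which is exactly the statement that F_A is linear.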

The transcript also builds intuition using a 2D example. Writing A in terms of its columns A1 and A2, and taking x and y as two-component vectors, the product A(x+y) can be expanded into combinations of column-scaled vectors. After applying distributive rules for scalars and rearranging terms, the expression matches Ax+Ay. That calculation isn’t just a check—it illustrates the underlying mechanism: matrix multiplication is designed so that column contributions add and scale exactly the way linear maps require.
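Written out, the 2D expansion described above is (with A_1, A_2 the columns of A and x=(x_1,x_2), y=(y_1,y_2)):

```latex
\begin{aligned}
A(x+y) &= A_1 (x_1+y_1) + A_2 (x_2+y_2) \\
       &= A_1 x_1 + A_1 y_1 + A_2 x_2 + A_2 y_2 \\
       &= (A_1 x_1 + A_2 x_2) + (A_1 y_1 + A_2 y_2) \\
       &= A x + A y.
\end{aligned}
```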

Just as important, the correspondence runs both directions: matrices and linear maps are tightly linked. Starting from a matrix A, one gets a linear map F_A. The reverse direction—starting from an abstract linear map and constructing the matrix that represents it—is flagged for the next installment, emphasizing that the matrix-to-map relationship is not merely convenient but structurally fundamental.

Finally, the discussion turns to how matrix multiplication reflects composition of linear maps. Given matrices A and B with compatible dimensions (B is K×N and A is M×K), their product AB is defined and corresponds to composing the induced maps. The map F_B sends x to Bx, and then F_A sends that result to A(Bx). Using associativity of matrix multiplication, A(Bx) becomes (AB)x. The conclusion is precise: the composition F_A ∘ F_B equals the linear map induced by the matrix product AB. This shows the definition of matrix multiplication isn’t arbitrary; it is chosen so that multiplying tables of numbers reproduces composing linear transformations.

In short, matrices act as blueprints for linear transformations, and matrix multiplication mirrors how those transformations compose—turning an abstract operation (function composition) into a concrete computational rule (multiplying matrices).

Cornell Notes

An M×N matrix A defines a linear map F_A: R^N → R^M by sending each vector x to A x. Linearity comes directly from matrix-vector rules: A(x+y)=Ax+Ay (additivity) and A(λx)=λ(Ax) (homogeneity). A column-based 2D example with columns A1 and A2 shows how expanding A(x+y) and regrouping terms reproduces Ax+Ay. The same framework extends to matrix multiplication: if A is M×K and B is K×N, then composing the induced maps satisfies F_A ∘ F_B = F_{AB}. This matters because it connects abstract linear transformations to concrete computations with matrices.

Why does every matrix A induce a linear map F_A, and what two properties must be checked?

For an M×N matrix A, define F_A(x)=A x. To qualify as a linear map, F_A must be additive and homogeneous. Additivity means F_A(x+y)=F_A(x)+F_A(y), which matches the distributive rule A(x+y)=Ax+Ay. Homogeneity means F_A(λx)=λF_A(x), matching A(λx)=λ(Ax). These two properties together establish linearity.

How does the 2D column viewpoint make additivity feel concrete?

Take a 2-column matrix A with columns A1 and A2, and vectors x=(x1,x2), y=(y1,y2). Then A x = A1 x1 + A2 x2 (written as column-scaled contributions). Expanding A(x+y) gives A1(x1+y1)+A2(x2+y2), which splits into A1 x1 + A1 y1 + A2 x2 + A2 y2. Regrouping yields (A1 x1 + A2 x2) + (A1 y1 + A2 y2) = Ax + Ay.
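The column-scaled reading and the regrouping step can be verified with a concrete 2×2 example (the specific numbers are illustrative, not from the transcript):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 4.0]])
A1, A2 = A[:, 0], A[:, 1]          # the columns of A

x = np.array([5.0, 7.0])
y = np.array([-1.0, 2.0])

# A x is the sum of column-scaled contributions: A x = A1 x1 + A2 x2.
assert np.allclose(A @ x, A1 * x[0] + A2 * x[1])

# Expanding A(x+y) column-wise and regrouping reproduces Ax + Ay.
expanded = A1 * (x[0] + y[0]) + A2 * (x[1] + y[1])
assert np.allclose(expanded, A @ x + A @ y)
```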

What is the dimension condition for multiplying matrices A and B in this setting?

If B is K×N and A is M×K, then the product AB is defined and has size M×N. This dimension matching is necessary because the inner dimension K must align: A’s number of columns equals B’s number of rows.
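A quick shape-bookkeeping sketch (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

M, K, N = 4, 3, 2
A = np.ones((M, K))   # A is M×K
B = np.ones((K, N))   # B is K×N

AB = A @ B            # defined: inner dimension K matches
assert AB.shape == (M, N)

# A mismatched inner dimension makes the product undefined.
B_bad = np.ones((K + 1, N))
try:
    A @ B_bad
except ValueError:
    pass              # NumPy rejects the incompatible shapes
```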

How does composing linear maps relate to multiplying matrices?

The induced map F_B sends x to Bx. Applying F_A next gives F_A(F_B(x)) = A(Bx). Associativity of matrix multiplication turns this into (AB)x. Therefore the composition F_A ∘ F_B equals the linear map induced by the product matrix AB, i.e., F_A ∘ F_B = F_{AB}.
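The identity F_A ∘ F_B = F_{AB} can also be checked numerically. A sketch with random matrices of compatible shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, N = 3, 4, 2
A = rng.standard_normal((M, K))   # F_A : R^K -> R^M
B = rng.standard_normal((K, N))   # F_B : R^N -> R^K

F_A = lambda v: A @ v
F_B = lambda v: B @ v

x = rng.standard_normal(N)

# Composing the induced maps agrees with the product matrix:
# F_A(F_B(x)) = A(Bx) = (AB)x.
assert np.allclose(F_A(F_B(x)), (A @ B) @ x)
```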

Why is the definition of matrix multiplication described as “not arbitrary” here?

Because the multiplication rule is chosen so that it reproduces composition of linear maps using number tables. If matrix multiplication were defined differently, the identity F_A ∘ F_B = F_{AB} would fail. The transcript emphasizes that the multiplication rule is exactly what makes abstract transformation composition computable via matrices.

Review Questions

  1. Given an M×N matrix A, write the induced map F_A and state the two equations that verify linearity.
  2. For matrices A (M×K) and B (K×N), show how F_A ∘ F_B acting on x becomes (AB)x.
  3. In the 2-column example with columns A1 and A2, how does expanding A(x+y) lead to Ax+Ay after regrouping terms?

Key Points

  1. An M×N matrix A induces a linear map F_A: R^N → R^M via F_A(x)=A x.
  2. Linearity of F_A follows from A(x+y)=Ax+Ay (additivity) and A(λx)=λ(Ax) (homogeneity).
  3. Viewing A through its columns A1 and A2 makes distributive expansion and regrouping produce Ax+Ay.
  4. Matrix multiplication requires dimension compatibility: if B is K×N and A is M×K, then AB is M×N.
  5. The composition of induced maps matches matrix multiplication: F_A ∘ F_B = F_{AB}.
  6. Associativity of matrix multiplication is the key step connecting A(Bx) to (AB)x.
  7. The matrix-to-linear-map correspondence is presented as fundamental, with the reverse construction deferred to the next video.

Highlights

Every matrix A acts like a blueprint for a linear transformation: x ↦ A x.
Linearity isn’t assumed—it drops out of the distributive and scalar-compatibility rules for matrix-vector multiplication.
Matrix multiplication is engineered so that composing linear maps corresponds exactly to multiplying their matrices: F_A ∘ F_B = F_{AB}.
A column-based expansion (using A1 and A2) turns the abstract linearity conditions into straightforward algebra.