Linear Algebra 19 | Matrices induce linear maps [dark version]
Based on the video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creator by watching, liking and subscribing to their channel.
An M×N matrix A induces a linear map F_A: R^N → R^M via F_A(x)=A x.
Briefing
A matrix doesn’t just store numbers—it automatically defines a linear map between vector spaces, and the usual matrix-vector multiplication is exactly the rule that makes linearity work. For an M×N matrix A, the associated map F_A takes an input vector x in R^N and outputs A x in R^M. Linearity then follows from two compatibility properties: distributing over vector addition and commuting with scalar multiplication. Concretely, the additive property comes from the matrix-vector distributive law A(x+y)=Ax+Ay, while the homogeneous property comes from the scalar compatibility law A(λx)=λ(Ax). Together these match the definition of a linear map, so every matrix induces a function that respects both vector addition and scaling.
The transcript also builds intuition using a 2D example. Writing A in terms of its columns A1 and A2, and taking x and y as two-component vectors, the product A(x+y) can be expanded into combinations of column-scaled vectors. After applying distributive rules for scalars and rearranging terms, the expression matches Ax+Ay. That calculation isn’t just a check—it illustrates the underlying mechanism: matrix multiplication is designed so that column contributions add and scale exactly the way linear maps require.
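The column viewpoint can be made concrete in code: a sketch (with illustrative numbers, not the transcript's) showing that A x computed as the weighted column sum x1·A1 + x2·A2 agrees with the usual row-by-row product.

```python
# 2D column viewpoint: for a 2x2 matrix with columns A1 and A2,
# the product A x is the combination x1*A1 + x2*A2.

A1 = [2, 1]               # first column of A
A2 = [0, 3]               # second column of A
x = [4, -1]

# A x as a weighted sum of columns
Ax_cols = [x[0] * A1[i] + x[1] * A2[i] for i in range(2)]

# The same product computed row by row from A = [A1 | A2]
A = [[A1[0], A2[0]],
     [A1[1], A2[1]]]
Ax_rows = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

assert Ax_cols == Ax_rows   # both viewpoints give the same vector
```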
Just as important, the correspondence runs both directions: matrices and linear maps are tightly linked. Starting from a matrix A, one gets a linear map F_A. The reverse direction—starting from an abstract linear map and constructing the matrix that represents it—is flagged for the next installment, emphasizing that the matrix-to-map relationship is not merely convenient but structurally fundamental.
Finally, the discussion turns to how matrix multiplication reflects composition of linear maps. Given matrices A and B with compatible dimensions (B is K×N and A is M×K), their product AB is defined and corresponds to composing the induced maps. The map F_B sends x to Bx, and then F_A sends that result to A(Bx). Using associativity of matrix multiplication, A(Bx) becomes (AB)x. The conclusion is precise: the composition F_A ∘ F_B equals the linear map induced by the matrix product AB. This shows the definition of matrix multiplication isn’t arbitrary; it is chosen so that multiplying tables of numbers reproduces composing linear transformations.
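The identity F_A ∘ F_B = F_{AB} can also be verified on a small example. The sketch below uses illustrative matrices with the dimensions from the text (B is K×N, A is M×K), and checks that composing the induced maps on x gives the same result as multiplying by AB first.

```python
# Composition of induced maps vs. the matrix product:
# F_A(F_B(x)) = A(Bx) should equal F_{AB}(x) = (AB)x.

def mat_vec(A, x):
    return [sum(a * x_j for a, x_j in zip(row, x)) for row in A]

def mat_mul(A, B):
    """Product of an MxK and a KxN matrix, as a list of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

B = [[1, 0, 2],
     [0, 1, 1]]           # K=2, N=3: F_B maps R^3 -> R^2
A = [[3, 1],
     [0, 2],
     [1, 1]]              # M=3, K=2: F_A maps R^2 -> R^3

x = [1, 2, 3]

composed = mat_vec(A, mat_vec(B, x))   # F_A(F_B(x)) = A(Bx)
product  = mat_vec(mat_mul(A, B), x)   # F_{AB}(x)  = (AB)x
assert composed == product
```

Note how the dimension condition makes both sides well defined: F_B lands in R^K, which is exactly the input space of F_A, and AB is an M×N matrix, so F_{AB} maps R^N to R^M just like the composition does.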
In short, matrices act as blueprints for linear transformations, and matrix multiplication mirrors how those transformations compose—turning an abstract operation (function composition) into a concrete computational rule (multiplying matrices).
Cornell Notes
An M×N matrix A defines a linear map F_A: R^N → R^M by sending each vector x to A x. Linearity comes directly from matrix-vector rules: A(x+y)=Ax+Ay (additivity) and A(λx)=λ(Ax) (homogeneity). A column-based 2D example with columns A1 and A2 shows how expanding A(x+y) and regrouping terms reproduces Ax+Ay. The same framework extends to matrix multiplication: if A is M×K and B is K×N, then composing the induced maps satisfies F_A ∘ F_B = F_{AB}. This matters because it connects abstract linear transformations to concrete computations with matrices.
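The "expand and regroup" step in the 2D example can be written out explicitly. With columns A_1, A_2 of A and x = (x_1, x_2), y = (y_1, y_2):

```latex
A(x+y) &= (x_1 + y_1)\,A_1 + (x_2 + y_2)\,A_2 \\
       &= (x_1 A_1 + x_2 A_2) + (y_1 A_1 + y_2 A_2) \\
       &= Ax + Ay.
```

The first line uses the column description of the matrix-vector product, the second distributes the scalars over the columns and regroups, and the third recognizes each group as a matrix-vector product again.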
Why does every matrix A induce a linear map F_A, and what two properties must be checked?
How does the 2D column viewpoint make additivity feel concrete?
What is the dimension condition for multiplying matrices A and B in this setting?
How does composing linear maps relate to multiplying matrices?
Why is the definition of matrix multiplication described as “not arbitrary” here?
Review Questions
- Given an M×N matrix A, write the induced map F_A and state the two equations that verify linearity.
- For matrices A (M×K) and B (K×N), show how F_A ∘ F_B acting on x becomes (AB)x.
- In the 2-column example with columns A1 and A2, how does expanding A(x+y) lead to Ax+Ay after regrouping terms?
Key Points
1. An M×N matrix A induces a linear map F_A: R^N → R^M via F_A(x)=A x.
2. Linearity of F_A follows from A(x+y)=Ax+Ay (additivity) and A(λx)=λ(Ax) (homogeneity).
3. Viewing A through its columns A1 and A2 makes distributive expansion and regrouping produce Ax+Ay.
4. Matrix multiplication requires dimension compatibility: if B is K×N and A is M×K, then AB is M×N.
5. The composition of induced maps matches matrix multiplication: F_A ∘ F_B = F_{AB}.
6. Associativity of matrix multiplication is the key step connecting A(Bx) to (AB)x.
7. The matrix-to-linear-map correspondence is presented as fundamental, with the reverse construction deferred to the next video.