Abstract Linear Algebra 22 | Linear Maps
Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
A linear map is completely constrained by two rules: preserving vector addition and scalar multiplication.
Briefing
A linear map is defined by two rules—preserving vector addition and scalar multiplication—and that constraint sharply limits what it can do to geometry: it sends straight lines to straight lines (or collapses them to a single point), never bending them into curves. In other words, once a map respects the “linear structure” of a vector space, the image of any parallelogram-shaped region is forced by how the map treats the basic operations of addition and scaling that build that region.
Formally, the setup uses two vector spaces V and W over the same field F (the real numbers or the complex numbers). A function F: V → W (the letter F does double duty here for the field and the map; context makes clear which is meant) is linear exactly when it satisfies, for all vectors u, v in V and all scalars λ in F, the identities F(u + v) = F(u) + F(v) and F(λu) = λF(u). The plus sign on the left refers to addition in V, while the plus sign on the right refers to addition in W—these operations live in different spaces, but a linear map translates one structure into the other. Likewise, scalar multiplication must use the same field on both sides; otherwise “pulling out” λ would not make sense.
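The two identities can be spot-checked numerically. The sketch below (not from the video; a NumPy illustration, with `looks_linear` a hypothetical helper name) tests both identities on random inputs. Passing such a check does not prove linearity, but any single failure disproves it.

```python
import numpy as np

def looks_linear(F, dim, trials=100):
    """Spot-check F(u + v) = F(u) + F(v) and F(lam*u) = lam*F(u)
    on random vectors. A failure disproves linearity; passing is
    only evidence, not proof."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        u, v = rng.standard_normal(dim), rng.standard_normal(dim)
        lam = rng.standard_normal()
        if not np.allclose(F(u + v), F(u) + F(v)):   # additivity
            return False
        if not np.allclose(F(lam * u), lam * F(u)):  # homogeneity
            return False
    return True

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(looks_linear(lambda u: A @ u, dim=2))    # matrix map: True
print(looks_linear(lambda u: u + 1.0, dim=2))  # translation: False
```

The translation `u + 1.0` fails additivity, since (u + v) + 1 differs from (u + 1) + (v + 1), which is exactly why shifts are not linear maps.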
A key general consequence follows immediately: every linear map sends the zero vector to the zero vector. The reasoning is direct—since 0·u equals the zero vector in V, linearity gives F(0·u) = 0·F(u), which must equal the zero vector in W. This yields a practical test: if a candidate map fails to take 0_V to 0_W, it cannot be linear.
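A tiny sketch of this test (my own NumPy example, not from the transcript): a genuine linear map sends the zero vector to zero, while a shifted (affine) map fails immediately.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
linear = lambda u: A @ u         # linear: a matrix map
affine = lambda u: A @ u + 1.0   # shifted, hence not linear

zero = np.zeros(2)
print(linear(zero))  # the zero vector, as linearity requires
print(affine(zero))  # nonzero: this map cannot be linear
```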
The transcript then grounds the definition with examples. One example uses inner products. Fix a vector a in F^3 and define a map by F(u) = ⟨a, u⟩, where ⟨·,·⟩ is the standard inner product. Because inner products are linear in the appropriate argument, this construction automatically satisfies the two linearity properties. The map goes from a three-dimensional space to a one-dimensional space, and it can be written as a matrix product: a row vector (with transpose in the real case, and transpose plus complex conjugation in the complex case) multiplied by the column vector u. That matrix representation shows how linear maps can be encoded concretely.
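For the real case, this construction is easy to see in code (a NumPy sketch of my own, not from the video): the map u ↦ ⟨a, u⟩ and the 1×3 matrix product aᵀu give the same number.

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
F = lambda u: a @ u  # F(u) = <a, u>, the standard real inner product

u = np.array([4.0, 5.0, 6.0])

# The same map written as a matrix product: the 1x3 row vector a^T
# times the 3x1 column vector u, sending R^3 to R^1.
row = a.reshape(1, 3)
as_matrix = (row @ u.reshape(3, 1)).item()

print(F(u), as_matrix)  # both give 12.0
```

In the complex case one would conjugate as well as transpose the row vector, matching the conjugate-transpose mentioned above.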
A second example moves to function spaces. Let V be the vector space of polynomials of degree ≤ 3 and W the space of polynomials of degree ≤ 2. Define a linear map L by L(p) = p′ (the derivative). Differentiation respects addition and scalar multiplication: (p + q)′ = p′ + q′ and (λp)′ = λp′. Since these are exactly the linearity rules, differentiation is a linear map between these polynomial spaces. The discussion notes that, even in such abstract settings, linear maps can still be represented by matrices—though the details are deferred to the next installment.
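Representing a polynomial of degree ≤ 3 by its coefficient vector [c0, c1, c2, c3] makes this checkable in a few lines (again an illustrative sketch of my own, with `deriv` a hypothetical helper, not the video's notation):

```python
import numpy as np

def deriv(p):
    """Derivative on coefficients [c0, c1, c2, c3] of
    c0 + c1*x + c2*x^2 + c3*x^3; the result has degree <= 2."""
    return np.array([k * p[k] for k in range(1, len(p))])

p = np.array([1.0, 0.0, 5.0, 2.0])  # 1 + 5x^2 + 2x^3
q = np.array([0.0, 3.0, 0.0, 1.0])  # 3x + x^3
lam = 4.0

# The two linearity identities, checked on coefficient vectors:
print(np.allclose(deriv(p + q), deriv(p) + deriv(q)))  # True
print(np.allclose(deriv(lam * p), lam * deriv(p)))     # True
```

Note that `deriv` always lands in a three-entry coefficient vector, reflecting that differentiation drops the degree from ≤ 3 to ≤ 2.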
Cornell Notes
Linear maps are functions between vector spaces that preserve the two operations that define linear structure: vector addition and scalar multiplication. For vector spaces V and W over the same field F, a map F: V → W is linear if F(u + v) = F(u) + F(v) and F(λu) = λF(u) for all u, v in V and all scalars λ in F. A crucial consequence is that every linear map must send the zero vector in V to the zero vector in W; a map with F(0_V) ≠ 0_W cannot be linear, so checking the zero vector is a quick way to rule out linearity. Examples include maps built from inner products (u ↦ ⟨a, u⟩) and differentiation on polynomial spaces (p ↦ p′), both of which satisfy the linearity identities. These examples illustrate how linearity constrains geometry and enables matrix representations.
- Why does a linear map preserve straight-line structure (and never create curves)?
- What is the exact definition of linearity between two vector spaces V and W?
- How does one prove that every linear map sends the zero vector to the zero vector?
- How does the inner-product example produce a linear map?
- Why is differentiation a linear map on polynomial spaces?
Review Questions
- Given a candidate map F: V → W, what single check involving the zero vector can quickly disprove linearity?
- State the two equations that define linearity and explain why the field F must be the same on both sides.
- For the map L(p) = p′ on polynomials of degree ≤ 3, what is L(x^2) and why does the result stay within degree ≤ 2?
Key Points
1. A linear map is completely constrained by two rules: preserving vector addition and scalar multiplication.
2. Linearity forces geometric behavior: straight lines map to straight lines or collapse to a point; no curves can appear.
3. For linear maps F: V → W over the same field, the identities F(u + v) = F(u) + F(v) and F(λu) = λF(u) must hold for all u, v and λ.
4. Every linear map satisfies F(0_V) = 0_W, making this a fast nonlinearity test.
5. Maps defined using inner products, such as u ↦ ⟨a, u⟩, are linear because inner products respect addition and scaling.
6. Differentiation on polynomial spaces (p ↦ p′) is linear since derivatives distribute over addition and scalar multiplication.
7. Even in abstract vector spaces like polynomial spaces, linear maps can be represented by matrices—details are saved for a later discussion.