Abstract Linear Algebra 23 | Combinations of Linear Maps
Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Hom(V, W), the set of all linear maps from V to W, becomes a vector space when addition and scalar multiplication are defined pointwise.
Briefing
Linear maps aren’t just single functions between vector spaces—they form their own vector space under addition and scalar multiplication. Given two linear maps K, L: V → W (with the same domain V and codomain W over the same field F), their sum is defined pointwise by (K + L)(x) = K(x) + L(x), and scalar multiples are defined by (λL)(x) = λ·L(x). With these operations, the collection Hom(V, W) of all linear maps from V to W is itself an F-vector space: the zero element is the zero map sending every x ∈ V to 0 ∈ W, and the usual vector-space rules (closure, associativity, distributivity, etc.) carry over because they hold in W for the outputs.
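The pointwise operations can be sketched numerically. The sketch below (an illustration, not from the original video) represents two linear maps on R² as matrix-vector products and checks that their pointwise sum is again linear and that the zero map acts as the additive identity:

```python
import numpy as np

# Two linear maps K, L: R^2 -> R^2, represented via fixed matrices.
K = lambda x: np.array([[1.0, 2.0], [0.0, 1.0]]) @ x
L = lambda x: np.array([[0.0, -1.0], [1.0, 0.0]]) @ x

# Pointwise operations on Hom(R^2, R^2):
add = lambda f, g: (lambda x: f(x) + g(x))       # (K + L)(x) = K(x) + L(x)
scale = lambda lam, f: (lambda x: lam * f(x))    # (lam L)(x) = lam * L(x)
zero = lambda x: np.zeros_like(x)                # additive identity: the zero map

x = np.array([3.0, -1.0])
y = np.array([0.5, 2.0])
S = add(K, L)

# The sum is linear: additivity and homogeneity hold for the outputs in W.
assert np.allclose(S(x + y), S(x) + S(y))
assert np.allclose(S(2.0 * x), 2.0 * S(x))
# Adding the zero map leaves K unchanged; scaling by 1 leaves L unchanged.
assert np.allclose(add(K, zero)(x), K(x))
assert np.allclose(scale(1.0, L)(x), L(x))
```

The vector-space axioms for Hom(V, W) reduce, as the notes say, to axioms already holding in W—which is exactly what each assertion checks output-by-output.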
This abstract “calculus of linear maps” becomes concrete through orthogonal projections. For an n-dimensional inner product space V with an orthonormal basis {e1, …, en}, take the subspace U spanned by the first n − 1 basis vectors. The orthogonal projection onto U is a linear map P_U: V → V given by P_U(x) = Σ_{j=1}^{n−1} e_j · ⟨e_j, x⟩, using the inner product’s linearity in the second argument. The complementary projection onto U⊥ is also linear and can be written as the single-term sum P_{U⊥}(x) = e_n · ⟨e_n, x⟩ involving the remaining basis vector (the “normal component”). Adding the two projections recovers the identity map on V: P_U + P_{U⊥} = Id_V. Subtracting them, P_U − P_{U⊥}, yields a reflection across U, expressible as Id_V − 2·P_{U⊥} (i.e., the identity minus twice the normal-component projection), which is still linear.
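A small numeric sketch (assuming the standard orthonormal basis of R³, with U spanned by the first two basis vectors) makes these identities concrete:

```python
import numpy as np

# Orthonormal basis of R^3; U = span{e1, e2}, U_perp = span{e3}.
e = np.eye(3)

def P_U(x):
    # P_U(x) = sum over j = 1..n-1 of e_j * <e_j, x>
    return sum(e[j] * np.dot(e[j], x) for j in range(2))

def P_perp(x):
    # Single-term sum with the remaining basis vector (normal component).
    return e[2] * np.dot(e[2], x)

x = np.array([1.0, -2.0, 5.0])

# The projections add up to the identity: P_U + P_perp = Id_V.
assert np.allclose(P_U(x) + P_perp(x), x)
# Their difference is the reflection across U, equal to Id - 2*P_perp:
assert np.allclose(P_U(x) - P_perp(x), x - 2.0 * P_perp(x))
# The reflection flips only the normal component.
assert np.allclose(x - 2.0 * P_perp(x), np.array([1.0, -2.0, -5.0]))
```

The same checks go through for any orthonormal basis, since only the inner products ⟨e_j, x⟩ enter the formulas.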
Beyond combining linear maps by addition and scaling, linear maps also combine by composition. If K: U → V and L: V → W are linear, then the composition L ∘ K: U → W is linear as well. This matters because on Hom(V, V), where domain and codomain agree, composition behaves like a multiplication, analogous to the multiplication of matrices.
The projection example shows composition’s effect sharply. Composing a projection with itself gives the same projection again: projecting twice changes nothing. Composing the two orthogonal projections (onto U and onto U⊥) collapses everything to zero: since the subspaces are orthogonal, applying one projection after the other always produces the zero vector, so the resulting linear map is the zero map in Hom(V, V). Taken together, these results show that linear maps support both vector-space operations (addition/scalar multiplication) and an additional “multiplication-like” operation (composition), setting up a richer algebraic structure that will be developed further later through matrix multiplication on a concrete level.
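Representing the projections as matrices (an illustrative choice—composition of linear maps then becomes matrix multiplication), both composition facts can be verified directly:

```python
import numpy as np

# Projections on R^3 as matrices, with U = span{e1, e2}, U_perp = span{e3}.
e = np.eye(3)
PU = e[:2].T @ e[:2]        # projection onto U: diag(1, 1, 0)
Pp = np.outer(e[2], e[2])   # projection onto U_perp: diag(0, 0, 1)

# Idempotence: projecting twice changes nothing (P_U o P_U = P_U).
assert np.allclose(PU @ PU, PU)
assert np.allclose(Pp @ Pp, Pp)
# Orthogonal subspaces: either composition collapses everything to zero.
assert np.allclose(PU @ Pp, np.zeros((3, 3)))
assert np.allclose(Pp @ PU, np.zeros((3, 3)))
```

The zero matrix here is exactly the zero map in Hom(V, V)—the additive identity from the pointwise vector-space structure—so the two kinds of operations (addition/scaling and composition) interlock.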
Cornell Notes
Linear maps from V to W form a vector space Hom(V, W). Pointwise addition (K + L)(x) = K(x) + L(x) and scalar multiplication (λL)(x) = λ·L(x) keep maps linear, so the zero map (x ↦ 0) acts as the additive identity. Orthogonal projections provide a concrete example: the projection onto a subspace U and the projection onto its orthogonal complement U⊥ are linear, add up to the identity, and their difference produces a reflection across U. Composition also preserves linearity: if K and L are linear, then L ∘ K is linear. For projections, composing a projection with itself returns the same projection, while composing projections onto orthogonal subspaces yields the zero map.
Why does Hom(V, W) become a vector space once addition and scalar multiplication are defined pointwise?
How is the orthogonal projection onto a subspace U written using an orthonormal basis?
What relationship holds between the projection onto U and the projection onto U⊥?
How does subtracting the projections relate to reflection across U?
What does composition do to projections—especially when the subspaces are orthogonal?
Review Questions
- Given linear maps K, L: V → W, write the formula for (K + L)(x) and (λL)(x). Why do these preserve linearity?
- State and justify the identity involving orthogonal projections onto U and U⊥.
- What are the outcomes of composing P_U with itself, and composing P_U with P_{U⊥}?
Key Points
- 1
Hom(V, W), the set of all linear maps from V to W, becomes a vector space when addition and scalar multiplication are defined pointwise.
- 2
For linear maps K and L, the sum is (K + L)(x) = K(x) + L(x), and scalar multiplication is (λL)(x) = λ·L(x).
- 3
The additive identity in Hom(V, W) is the zero map x ↦ 0 ∈ W.
- 4
Orthogonal projection onto a subspace U is linear and can be expressed using an orthonormal basis as P_U(x) = Σ e_j ⟨e_j, x⟩.
- 5
Projections onto complementary subspaces satisfy P_U + P_{U⊥} = Id_V, and their difference yields a reflection across U.
- 6
Composition of linear maps is linear, and for projections it is idempotent (P_U ∘ P_U = P_U).
- 7
Composing projections onto orthogonal subspaces produces the zero map.