
Abstract Linear Algebra 23 | Combinations of Linear Maps

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Hom(V, W), the set of all linear maps from V to W, becomes a vector space when addition and scalar multiplication are defined pointwise.

Briefing

Linear maps aren’t just single functions between vector spaces: they form their own vector space under addition and scalar multiplication. Given two linear maps K, L: V → W (with the same domain V and codomain W over the same field), their sum is defined pointwise by (K + L)(x) = K(x) + L(x), and scalar multiples are defined by (λL)(x) = λ·L(x). With these operations, the collection Hom(V, W) of all linear maps from V to W is itself an F-vector space: the zero element is the zero map sending every x ∈ V to 0 ∈ W, and the usual vector-space rules (closure, associativity, distributivity, etc.) carry over because they hold in W for the outputs.
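These pointwise definitions can be checked directly. A minimal Python sketch, where the particular maps K and L are illustrative choices on V = W = R²:

```python
import numpy as np

# Two linear maps on R^2 (illustrative examples):
K = lambda x: 2 * x          # scaling by 2
L = lambda x: x[::-1]        # swap the two coordinates

def add_maps(K, L):
    """Pointwise sum: (K + L)(x) = K(x) + L(x)."""
    return lambda x: K(x) + L(x)

def scale_map(lam, L):
    """Pointwise scalar multiple: (lam*L)(x) = lam * L(x)."""
    return lambda x: lam * L(x)

# The additive identity in Hom(V, W): the zero map x -> 0.
zero_map = lambda x: np.zeros_like(x)

S = add_maps(K, L)
x = np.array([1.0, 3.0])
print(S(x))                  # K(x) + L(x) = [2, 6] + [3, 1] = [5. 7.]
```

The sum S is again linear because additivity and homogeneity are inherited from the operations in W, which the test below checks on sample vectors.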

This abstract “calculus of linear maps” becomes concrete through orthogonal projections. For a finite-dimensional inner product space V with an orthonormal basis {e1, …, en}, let U be the subspace spanned by the first n − 1 basis vectors. The orthogonal projection onto U is the linear map P_U: V → V given by P_U(x) = Σ_{j=1}^{n−1} e_j · ⟨e_j, x⟩, which is linear because the inner product is linear in its second argument. The complementary projection onto U⊥ is also linear and reduces to the single-term sum P_{U⊥}(x) = e_n · ⟨e_n, x⟩ involving the remaining basis vector (the “normal component”). Adding the two projections recovers the identity map on V: P_U + P_{U⊥} = Id_V. Subtracting them yields a reflection across U, P_U − P_{U⊥} = Id_V − 2·P_{U⊥}, which is again linear.
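The projection identities can be verified numerically. A NumPy sketch in R³ with the standard orthonormal basis, where U = span{e1, e2} is an illustrative choice:

```python
import numpy as np

e = np.eye(3)  # standard orthonormal basis of R^3

def proj(basis_vectors):
    """P(x) = sum_j e_j <e_j, x>, as a matrix: sum of outer products e_j e_j^T."""
    return sum(np.outer(v, v) for v in basis_vectors)

P_U     = proj([e[0], e[1]])   # projection onto U = span{e1, e2}
P_Uperp = proj([e[2]])         # projection onto U-perp = span{e3}

print(np.allclose(P_U + P_Uperp, np.eye(3)))    # True: P_U + P_Uperp = Id

R = P_U - P_Uperp                               # reflection across U
print(np.allclose(R, np.eye(3) - 2 * P_Uperp))  # True: Id - 2*P_Uperp

x = np.array([1.0, 2.0, 3.0])
print(R @ x)   # the e3-component is flipped: [ 1.  2. -3.]
```

Representing each projection as a sum of outer products mirrors the basis formula P_U(x) = Σ e_j⟨e_j, x⟩ term by term.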

Beyond combining linear maps by addition and scaling, linear maps also combine by composition. If K: U → V and L: V → W are linear, then the composition L ∘ K: U → W is linear as well. This matters because composition gives linear maps a multiplication-like operation (on Hom(V, V) when domains and codomains match), analogous to multiplication in familiar settings like matrices.
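On the concrete level, composing linear maps corresponds to multiplying their matrices. A small sketch, where the matrices A and B are made-up examples:

```python
import numpy as np

# K: R^2 -> R^3 and L: R^3 -> R^2, given by illustrative matrices.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])            # matrix of K
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])       # matrix of L

K = lambda x: A @ x
L = lambda y: B @ y
comp = lambda x: L(K(x))              # the composition L ∘ K: R^2 -> R^2

x = np.array([1.0, 1.0])
print(np.allclose(comp(x), (B @ A) @ x))   # True: matrix of L∘K is B @ A
```

The composition is again linear, which the test checks directly on sample vectors.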

The projection example shows composition’s effect sharply. Composing a projection with itself gives the same projection again: projecting twice changes nothing. Composing the two orthogonal projections (onto U and onto U⊥) collapses everything to zero: since the subspaces are orthogonal, applying one projection after the other always produces the zero vector, so the resulting linear map is the zero map in Hom(V, V). Taken together, these results show that linear maps support both vector-space operations (addition/scalar multiplication) and an additional “multiplication-like” operation (composition), setting up a richer algebraic structure that will be developed further later through matrix multiplication on a concrete level.
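Both composition facts can be checked with concrete projection matrices; the diagonal matrices below, for U = span{e1, e2} in R³, are an illustrative choice:

```python
import numpy as np

P_U     = np.diag([1.0, 1.0, 0.0])   # projection onto span{e1, e2}
P_Uperp = np.diag([0.0, 0.0, 1.0])   # projection onto span{e3}

# Projecting twice changes nothing: P ∘ P = P (idempotence).
print(np.allclose(P_U @ P_U, P_U))                    # True

# Projections onto orthogonal subspaces compose to the zero map.
print(np.allclose(P_U @ P_Uperp, np.zeros((3, 3))))   # True
print(np.allclose(P_Uperp @ P_U, np.zeros((3, 3))))   # True
```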

Cornell Notes

Linear maps from V to W form a vector space Hom(V, W). Pointwise addition (K + L)(x) = K(x) + L(x) and scalar multiplication (λL)(x) = λ·L(x) keep maps linear, so the zero map (x ↦ 0) acts as the additive identity. Orthogonal projections provide a concrete example: the projection onto a subspace U and the projection onto its orthogonal complement U⊥ are linear, add up to the identity, and their difference produces a reflection across U. Composition also preserves linearity: if K and L are linear, then L ∘ K is linear. For projections, composing a projection with itself returns the same projection, while composing projections onto orthogonal subspaces yields the zero map.

Why does Hom(V, W) become a vector space once addition and scalar multiplication are defined pointwise?

Because for any K, L: V → W that are linear, the definitions (K + L)(x) = K(x) + L(x) and (λL)(x) = λ·L(x) produce a new map that still satisfies linearity. Additivity and homogeneity follow from the fact that addition and scalar multiplication in W obey the vector-space axioms. The zero element is the zero map x ↦ 0 ∈ W, which is linear and acts as the additive identity for all maps in Hom(V, W).

How is the orthogonal projection onto a subspace U written using an orthonormal basis?

If V is finite-dimensional with orthonormal basis {e1, …, en} and U is spanned by e1 through e_{n−1}, then the projection P_U is P_U(x) = Σ_{j=1}^{n−1} e_j · ⟨e_j, x⟩. This is linear because the inner product is linear in its second argument, so x ↦ ⟨e_j, x⟩ is linear, and a sum of linear maps is again linear.

What relationship holds between the projection onto U and the projection onto U⊥?

They are complementary: P_U + P_{U⊥} = Id_V. Geometrically, every vector x splits into a component in U plus a component in U⊥, and the two projections recover those components. Algebraically, adding the formulas for the two projections returns x itself.

How does subtracting the projections relate to reflection across U?

Subtracting the complementary projection gives a reflection: P_U − P_{U⊥} leaves the U-component of a vector unchanged and flips the normal component. Using P_U = Id_V − P_{U⊥}, the difference can be written as a linear combination of the identity and the normal-component projection: P_U − P_{U⊥} = (Id_V − P_{U⊥}) − P_{U⊥} = Id_V − 2·P_{U⊥}, i.e. the identity minus twice the projection onto the orthogonal complement.

What does composition do to projections—especially when the subspaces are orthogonal?

Composing a projection with itself is idempotent: P_U ∘ P_U = P_U, so projecting twice changes nothing. Composing projections onto orthogonal subspaces cancels: P_{U⊥} ∘ P_U = 0 (and similarly P_U ∘ P_{U⊥} = 0). The reason is that projecting onto U produces a vector orthogonal to U⊥, so the next projection sends it to the zero vector.

Review Questions

  1. Given linear maps K, L: V → W, write the formula for (K + L)(x) and (λL)(x). Why do these preserve linearity?
  2. State and justify the identity involving orthogonal projections onto U and U⊥.
  3. What are the outcomes of composing P_U with itself, and composing P_U with P_{U⊥}?

Key Points

  1. Hom(V, W), the set of all linear maps from V to W, becomes a vector space when addition and scalar multiplication are defined pointwise.

  2. For linear maps K and L, the sum is (K + L)(x) = K(x) + L(x), and scalar multiplication is (λL)(x) = λ·L(x).

  3. The additive identity in Hom(V, W) is the zero map x ↦ 0 ∈ W.

  4. Orthogonal projection onto a subspace U is linear and can be expressed using an orthonormal basis as P_U(x) = Σ e_j ⟨e_j, x⟩.

  5. Projections onto complementary subspaces satisfy P_U + P_{U⊥} = Id_V, and their difference yields a reflection across U.

  6. Composition of linear maps is linear, and for projections it is idempotent (P_U ∘ P_U = P_U).

  7. Composing projections onto orthogonal subspaces produces the zero map.

Highlights

Linear maps from V to W form a vector space under pointwise addition and scalar multiplication.
Orthogonal projections satisfy P_U + P_{U⊥} = Id_V, turning geometric decomposition into an algebraic identity.
Subtracting the complementary projection produces a reflection across U, expressible using Id_V and a projection term.
Composition preserves linearity, and orthogonal projections compose to the zero map.
