Abstract Linear Algebra 15 | Orthogonal Projection Onto Subspace
Based on a video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Orthogonal projection decomposes any vector X uniquely as X = P + N, with P in the subspace U and N in the orthogonal complement U⊥.
Briefing
Orthogonal projection onto a finite-dimensional subspace works the same way as in the one-dimensional case: every vector X splits uniquely into a part that lies inside the subspace U and a part that is orthogonal to U. The key payoff is that the “normal component” is determined by orthogonality to U, and for finite-dimensional U it’s enough to check orthogonality against a basis rather than every vector in U—turning an infinite-looking condition into a finite one.
In the one-dimensional setting, the projection of X onto a line spanned by a nonzero vector R is computed using the inner product. The transcript emphasizes a simplification: if R is replaced by a normalized unit vector R̂ (so that ⟨R̂, R̂⟩ = 1), the projection formula becomes cleaner. Instead of dividing by ⟨R, R⟩, the projection onto the line is expressed directly as ⟨R̂, X⟩R̂. The remaining component N is then obtained by subtraction: N = X − P. A concrete example in R² is worked out numerically, showing how P is computed first and then N follows immediately.
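The one-dimensional formula can be sketched in plain Python. This is a minimal illustration, not code from the video; the helper names and the example vectors are chosen here for clarity:

```python
import math

def dot(x, y):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def project_onto_line(r, x):
    """Project x onto the line spanned by a nonzero vector r.

    Normalizing r to a unit vector r_hat first lets us write the
    projection as <r_hat, x> r_hat, with no division by <r, r>.
    """
    norm = math.sqrt(dot(r, r))
    r_hat = [a / norm for a in r]          # unit direction vector
    coeff = dot(r_hat, x)                  # scalar <r_hat, x>
    p = [coeff * a for a in r_hat]         # component P inside the line
    n = [xi - pi for xi, pi in zip(x, p)]  # normal component N = X - P
    return p, n

# Example in R^2: project X = (2, 3) onto the line spanned by R = (1, 0).
p, n = project_onto_line([1.0, 0.0], [2.0, 3.0])
print(p)  # [2.0, 0.0] -- the part of X along the line
print(n)  # [0.0, 3.0] -- the part orthogonal to the line
```

As a sanity check, `dot(p, n)` is zero up to rounding, confirming that the two components are orthogonal.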
The discussion then generalizes from a line to a higher-dimensional subspace. In a geometric picture (a plane in R³), the goal remains identical: write X = P + N where P ∈ U and N lies in the orthogonal complement U⊥. Uniqueness is highlighted as a general fact: once P is required to be in U and N is required to be orthogonal to U, there is only one such decomposition. This also leads to a structural relationship between U and U⊥: their intersection contains only the zero vector, reflecting that a vector cannot be both in U and orthogonal to U unless it is zero.
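The uniqueness claim follows in one line once U ∩ U⊥ = {0} is known; a worked sketch of the standard argument:

```latex
% Uniqueness: suppose X has two decompositions with P_i \in U, N_i \in U^\perp.
X = P_1 + N_1 = P_2 + N_2
\;\Longrightarrow\;
\underbrace{P_1 - P_2}_{\in\, U} \;=\; \underbrace{N_2 - N_1}_{\in\, U^\perp}
\;\in\; U \cap U^\perp = \{0\},
\quad\text{so } P_1 = P_2 \text{ and } N_1 = N_2.
% Why U \cap U^\perp = \{0\}: a vector in both is orthogonal to itself,
u \in U \cap U^\perp
\;\Longrightarrow\; \langle u, u\rangle = 0
\;\Longrightarrow\; u = 0
\quad\text{(positive definiteness of the inner product).}
```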
To make the orthogonal complement practical, the transcript introduces a crucial finite-dimensional criterion. If U has dimension k with basis vectors B₁, …, B_k, then a vector Y is in U⊥ exactly when it is orthogonal to each basis vector. In other words, checking ⟨Y, B_j⟩ = 0 for all j = 1, …, k is sufficient; there’s no need to test orthogonality against every vector in U. The justification uses linearity of the inner product in the second argument: any vector u ∈ U can be written as a linear combination of the basis vectors, and orthogonality to each basis element forces orthogonality to every such combination.
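The finite criterion is easy to sketch in plain Python. Again this is an illustration with made-up example vectors, assuming the standard inner product on Rⁿ:

```python
def dot(x, y):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def in_orthogonal_complement(y, basis, tol=1e-12):
    """Check y in U-perp by testing <y, b_j> = 0 for each basis vector b_j.

    Linearity of the inner product in the second argument justifies the
    finite check: if u = l_1*b_1 + ... + l_k*b_k, then
    <y, u> = sum_j l_j * <y, b_j> = 0.
    """
    return all(abs(dot(y, b)) <= tol for b in basis)

# U = the xy-plane in R^3, spanned by B1 = (1, 0, 0) and B2 = (0, 1, 0).
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(in_orthogonal_complement([0.0, 0.0, 5.0], basis))  # True:  on the z-axis
print(in_orthogonal_complement([1.0, 0.0, 1.0], basis))  # False: has a component in U
```

Only k inner products are needed, regardless of how many vectors U contains.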
With these ingredients in place, the framework for orthogonal projection onto a finite-dimensional subspace is set: for any inner product space V and finite-dimensional subspace U, the projection of X onto U is the component P in U, while the normal component N lies in U⊥, and the decomposition X = P + N is unique. The transcript signals that the actual computation method using a basis will be developed next, but the groundwork—especially the basis-based characterization of U⊥—is established as the main conceptual tool.
Cornell Notes
Orthogonal projection onto a finite-dimensional subspace U splits any vector X uniquely as X = P + N, where P ∈ U and N ∈ U⊥ (the orthogonal complement). In the one-dimensional case, using a unit direction vector R̂ turns the projection into P = ⟨R̂, X⟩R̂, and the normal part is N = X − P. For higher dimensions, the transcript stresses that to verify whether a vector Y lies in U⊥, it’s enough to check orthogonality against a basis B₁,…,B_k of U. This works because every u ∈ U is a linear combination of the basis vectors and the inner product is linear in its second argument. That finite check is what makes orthogonal projection computations feasible in practice.
Why does normalizing the direction vector simplify the one-dimensional projection formula?
What does it mean to decompose X as X = P + N in the subspace setting?
How can orthogonality to a whole subspace be checked using only a basis?
What relationship between U and U⊥ is emphasized, and why should it be expected?
What practical advantage does the basis test for U⊥ provide for computing projections?
Review Questions
- Given a subspace U with basis B₁,…,B_k, what condition on a vector Y guarantees Y ∈ U⊥?
- In the one-dimensional case, how do P and N relate to X when using a unit vector R̂?
- Why does uniqueness of the decomposition X = P + N depend on requiring P ∈ U and N ∈ U⊥?
Key Points
1. Orthogonal projection decomposes any vector X uniquely as X = P + N, with P in the subspace U and N in the orthogonal complement U⊥.
2. Using a unit direction vector R̂ in the one-dimensional case simplifies the projection to P = ⟨R̂, X⟩R̂.
3. The normal component is always obtained by subtraction: N = X − P.
4. For a finite-dimensional subspace U with basis B₁,…,B_k, a vector Y lies in U⊥ iff ⟨Y, B_j⟩ = 0 for every basis vector.
5. The intersection U ∩ U⊥ is {0}, reflecting positive definiteness of the inner product.
6. Orthogonality checks become finite because every u ∈ U is a linear combination of basis vectors and the inner product is linear in its second argument.