
Abstract Linear Algebra 15 | Orthogonal Projection Onto Subspace

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Orthogonal projection decomposes any vector X uniquely as X = P + N, with P in the subspace U and N in the orthogonal complement U⊥.

Briefing

Orthogonal projection onto a finite-dimensional subspace works the same way as in the one-dimensional case: every vector X splits uniquely into a part that lies inside the subspace U and a part that is orthogonal to U. The key payoff is that the “normal component” is determined by orthogonality to U, and for finite-dimensional U it’s enough to check orthogonality against a basis rather than every vector in U—turning an infinite-looking condition into a finite one.

In the one-dimensional setting, the projection of X onto the line spanned by a nonzero vector R is P = (⟨R, X⟩/⟨R, R⟩)R. The transcript emphasizes a simplification: if R is replaced by the normalized unit vector R̂ = R/‖R‖ (so that ⟨R̂, R̂⟩ = 1), the denominator disappears and the projection onto the line is expressed directly as P = ⟨R̂, X⟩R̂. The remaining component N is then obtained by subtraction: N = X − P. A concrete example in R² is worked out numerically, showing how P is computed first and then N follows immediately.
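This is straightforward to reproduce numerically. The following NumPy sketch mirrors the one-dimensional procedure; the specific vectors r and x are illustrative placeholders, not the example values used in the video:

```python
import numpy as np

# Hypothetical example values (not the transcript's actual R² numbers)
r = np.array([3.0, 4.0])        # nonzero direction vector R
x = np.array([2.0, 1.0])        # vector X to be projected

r_hat = r / np.linalg.norm(r)   # unit vector R̂, so ⟨R̂, R̂⟩ = 1
p = np.dot(r_hat, x) * r_hat    # projection P = ⟨R̂, X⟩ R̂
n = x - p                       # normal component N = X − P

print(p)                        # [1.2 1.6]
print(n)                        # [ 0.8 -0.6]
print(np.dot(n, r))             # ≈ 0 (up to rounding): N is orthogonal to the line
```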

The discussion then generalizes from a line to a higher-dimensional subspace. In a geometric picture (a plane in R³), the goal remains identical: write X = P + N where P ∈ U and N lies in the orthogonal complement U⊥. Uniqueness is highlighted as a general fact: once P is required to be in U and N is required to be orthogonal to U, there is only one such decomposition. This also leads to a structural relationship between U and U⊥: their intersection contains only the zero vector, reflecting that a vector cannot be both in U and orthogonal to U unless it is zero.

To make the orthogonal complement practical, the transcript introduces a crucial finite-dimensional criterion. If U has dimension k with basis vectors B₁, …, B_k, then a vector Y is in U⊥ exactly when it is orthogonal to each basis vector. In other words, checking ⟨Y, B_j⟩ = 0 for all j = 1, …, k is sufficient; there’s no need to test orthogonality against every vector in U. The justification uses linearity of the inner product in the second argument: any vector u ∈ U can be written as a linear combination of the basis vectors, and orthogonality to each basis element forces orthogonality to every such combination.
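As a sketch of what this criterion looks like in code (assuming the standard dot product on Rⁿ and a made-up basis for a plane U ⊂ R³), membership in U⊥ reduces to k inner-product checks:

```python
import numpy as np

def in_orthogonal_complement(y, basis, tol=1e-12):
    """Test Y ∈ U⊥ via the finite criterion: ⟨Y, B_j⟩ = 0 for every basis vector B_j."""
    return all(abs(np.dot(y, b)) < tol for b in basis)

# Hypothetical basis of a plane U in R³
basis = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

print(in_orthogonal_complement(np.array([0.0, 0.0, 5.0]), basis))  # True
print(in_orthogonal_complement(np.array([1.0, 0.0, 1.0]), basis))  # False
```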

With these ingredients in place, the framework for orthogonal projection onto a finite-dimensional subspace is set: for any inner product space V and finite-dimensional subspace U, the projection of X onto U is the component P in U, while the normal component N lies in U⊥, and the decomposition X = P + N is unique. The transcript signals that the actual computation method using a basis will be developed next, but the groundwork—especially the basis-based characterization of U⊥—is established as the main conceptual tool.

Cornell Notes

Orthogonal projection onto a finite-dimensional subspace U splits any vector X uniquely as X = P + N, where P ∈ U and N ∈ U⊥ (the orthogonal complement). In the one-dimensional case, using a unit direction vector R̂ turns the projection into P = ⟨R̂, X⟩R̂, and the normal part is N = X − P. For higher dimensions, the transcript stresses that to verify whether a vector Y lies in U⊥, it’s enough to check orthogonality against a basis B₁,…,B_k of U. This works because every u ∈ U is a linear combination of the basis vectors and the inner product is linear in its second argument. That finite check is what makes orthogonal projection computations feasible in practice.

Why does normalizing the direction vector simplify the one-dimensional projection formula?

For a line spanned by a nonzero vector R, the general projection formula is P = (⟨R, X⟩/⟨R, R⟩)R, where ⟨R, R⟩ is the squared length of R. If R̂ = R/‖R‖ is used instead, then ⟨R̂, R̂⟩ = 1 and the denominator drops out entirely. The projection becomes P = ⟨R̂, X⟩R̂, and the orthogonal component is N = X − P.

What does it mean to decompose X as X = P + N in the subspace setting?

The decomposition requires P to lie in the subspace U and N to be orthogonal to every vector in U. That orthogonality condition is expressed as N ∈ U⊥, meaning ⟨N, u⟩ = 0 for all u ∈ U. The transcript notes that this yields a unique decomposition: if X = P₁ + N₁ = P₂ + N₂ with both splittings of this form, then P₁ − P₂ = N₂ − N₁ lies in both U and U⊥, and (as discussed below) the only such vector is zero.

How can orthogonality to a whole subspace be checked using only a basis?

If U has basis B₁,…,B_k, then Y ∈ U⊥ exactly when ⟨Y, B_j⟩ = 0 for each j = 1,…,k. The “only if” direction is immediate because each B_j lies in U. For the “if” direction, any u ∈ U can be written as u = Σ_j λ_j B_j. Linearity of the inner product in the second argument gives ⟨Y, u⟩ = Σ_j λ_j⟨Y, B_j⟩ = 0, so Y is orthogonal to every u ∈ U.
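A small numerical illustration of the "if" direction, with hypothetical vectors and the standard dot product: once Y passes the basis test, ⟨Y, u⟩ vanishes for arbitrary linear combinations u.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis of U and a vector Y orthogonal to both basis vectors
b1 = np.array([1.0, 2.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 3.0])   # ⟨Y, B₁⟩ = ⟨Y, B₂⟩ = 0

# Any u ∈ U has the form u = λ₁B₁ + λ₂B₂, so by linearity
# ⟨Y, u⟩ = λ₁⟨Y, B₁⟩ + λ₂⟨Y, B₂⟩ = 0 for every choice of coefficients.
for _ in range(3):
    lam1, lam2 = rng.normal(size=2)
    u = lam1 * b1 + lam2 * b2
    print(np.dot(y, u))         # 0.0 each time
```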

What relationship between U and U⊥ is emphasized, and why should it be expected?

The intersection U ∩ U⊥ contains only the zero vector. If a vector v lies in U and also in U⊥, then v is orthogonal to itself: ⟨v, v⟩ = 0. Positive definiteness of the inner product then forces v = 0.

What practical advantage does the basis test for U⊥ provide for computing projections?

Orthogonal projection requires finding the component N that is orthogonal to U. Instead of enforcing ⟨N, u⟩ = 0 for infinitely many u ∈ U, it suffices to enforce ⟨N, B_j⟩ = 0 for the finitely many basis vectors. This reduces the orthogonality constraints to a finite system tied directly to the chosen basis of U.
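The transcript leaves the actual projection algorithm for later, but as a preview of how such a finite system can be solved (one standard approach, not necessarily the one the video will take): write P = Σ_j c_j B_j, impose ⟨X − P, B_i⟩ = 0 for each i, and solve the resulting k×k linear system built from the Gram matrix of the basis.

```python
import numpy as np

def project_onto_subspace(x, basis):
    """Project x onto U = span(basis) by enforcing ⟨X − P, B_i⟩ = 0 for each i.

    Writing P = Σ_j c_j B_j turns the orthogonality constraints into the
    k×k linear system G c = b, where G_ij = ⟨B_i, B_j⟩ (the Gram matrix)
    and b_i = ⟨B_i, X⟩.
    """
    B = np.column_stack(basis)        # columns are the basis vectors
    G = B.T @ B                       # Gram matrix of the basis
    c = np.linalg.solve(G, B.T @ x)   # coefficients of P in the basis
    return B @ c

# Hypothetical example: project X ∈ R³ onto a plane U
basis = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
x = np.array([1.0, 2.0, 3.0])

p = project_onto_subspace(x, basis)
n = x - p
print(p)                              # P ∈ U
print(n)                              # N = X − P
print([np.dot(n, b) for b in basis])  # ⟨N, B_j⟩ ≈ 0 for each j
```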

Review Questions

  1. Given a subspace U with basis B₁,…,B_k, what condition on a vector Y guarantees Y ∈ U⊥?
  2. In the one-dimensional case, how do P and N relate to X when using a unit vector R̂?
  3. Why does uniqueness of the decomposition X = P + N depend on requiring P ∈ U and N ∈ U⊥?

Key Points

  1. Orthogonal projection decomposes any vector X uniquely as X = P + N, with P in the subspace U and N in the orthogonal complement U⊥.

  2. Using a unit direction vector R̂ in the one-dimensional case simplifies the projection to P = ⟨R̂, X⟩R̂.

  3. The normal component is always obtained by subtraction: N = X − P.

  4. For a finite-dimensional subspace U with basis B₁,…,B_k, a vector Y lies in U⊥ iff ⟨Y, B_j⟩ = 0 for every basis vector.

  5. The intersection U ∩ U⊥ is {0}, reflecting positive definiteness of the inner product.

  6. Orthogonality checks become finite because every u ∈ U is a linear combination of basis vectors and the inner product is linear in its second argument.

Highlights

Normalizing the spanning vector turns the projection formula into a one-line inner-product expression: P = ⟨R̂, X⟩R̂.
Orthogonal complements can be characterized using only basis vectors: Y ∈ U⊥ iff ⟨Y, B_j⟩ = 0 for all j.
A vector cannot be both in U and orthogonal to U unless it is the zero vector.
The decomposition X = P + N is unique once P ∈ U and N ⊥ U are enforced.
Finite dimensionality makes the orthogonality condition computationally manageable.
