
Hilbert Spaces 9 | Projection Theorem

6 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Every vector x in a Hilbert space X and every closed subspace U admit a unique decomposition x = p + n with p ∈ U and n ∈ U⊥.

Briefing

Hilbert spaces guarantee a clean geometric split: every vector can be written as the sum of a component lying in a closed subspace and a component orthogonal to that entire subspace. This “projection theorem” matters because it extends the familiar linear-algebra idea of orthogonal projection to infinite-dimensional settings, where existence and uniqueness are far from automatic.

Start with a Hilbert space X and a closed subspace U (closedness is essential). For any vector x in X, there is a unique decomposition x = p + n, where p belongs to U and n belongs to the orthogonal complement U⊥. The right-angle relationship is the key geometric fact: p is orthogonal to every vector in U⊥, and equivalently n is orthogonal to every vector in U. The vector p is called the orthogonal projection of x onto U, often denoted proj_U(x) or x|_U.
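As a concrete sanity check (an illustration, not part of the lecture), the decomposition can be verified numerically in ℝ⁴, where every subspace is automatically closed. The NumPy sketch below uses an arbitrary subspace U and vector x; `lstsq` computes the coefficients of the closest point in U:

```python
import numpy as np

# Hypothetical example: U = column span of A inside the Hilbert space R^4.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
x = np.array([1.0, 2.0, 3.0, 4.0])

# Orthogonal projection p of x onto U: solve min ||x - A c|| over c,
# then set p = A c. In finite dimensions every subspace is closed,
# so the projection theorem guarantees this p is well-defined.
c, *_ = np.linalg.lstsq(A, x, rcond=None)
p = A @ c
n = x - p  # the orthogonal component

assert np.allclose(x, p + n)     # the decomposition x = p + n
assert np.allclose(A.T @ n, 0.0) # n is orthogonal to each spanning vector, hence to all of U
```

The second assertion is the substantive one: orthogonality to a spanning set of U implies orthogonality to every vector of U by linearity.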

Uniqueness follows from how U intersects U⊥. If a vector y lies in both U and U⊥, then it is orthogonal to itself: ⟨y, y⟩ = 0. Positive definiteness of the inner product forces y to be the zero vector, so U ∩ U⊥ = {0}. With that in hand, suppose x has two decompositions x = p + n = p~ + n~ with p, p~ in U and n, n~ in U⊥. Subtracting gives (p − p~) = (n~ − n). The left side lives in U, the right side lives in U⊥, so their common value must be in the intersection, hence must be 0. Therefore p = p~ and n = n~.

Existence is where the infinite-dimensional case earns its keep. The argument uses the approximation (best-approximation) property of Hilbert spaces: because U is closed and convex, there is a unique “closest point” p in U to x, minimizing the distance ||x − u|| over all u in U. Define n = x − p. The remaining task is to show n actually lies in U⊥, i.e., that it is orthogonal to every direction v in U.

To prove that orthogonality, the proof tests the minimizing property against perturbations of p along any v in U. Every point p + λv with v in U and scalar λ again lies in U, so the distance from x to p + λv cannot be smaller than the distance to p. Translating this inequality into inner-product form and choosing a specific scaling λ tied to ⟨v, n⟩ forces the inequality to collapse into the statement ⟨n, v⟩ = 0 for all v in U. Since this holds for every v, n belongs to U⊥, completing the decomposition x = p + n.
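Written out explicitly (a sketch, assuming the convention that the inner product is conjugate-linear in the first slot and linear in the second, and taking v to be a unit vector), the collapse looks like this:

```latex
% Perturbation inequality, expanded with the inner-product axioms,
% for a unit vector v in U and an arbitrary scalar lambda:
\|n\|^2 \le \|n - \lambda v\|^2
          = \|n\|^2 - \lambda\,\overline{\langle v, n \rangle}
                    - \overline{\lambda}\,\langle v, n \rangle
                    + |\lambda|^2

% Choosing lambda = <v, n> turns the three extra terms into -|<v, n>|^2:
\|n\|^2 \le \|n\|^2 - |\langle v, n \rangle|^2
\;\Longrightarrow\;
|\langle v, n \rangle|^2 \le 0
\;\Longrightarrow\;
\langle v, n \rangle = 0 .
```

With the opposite linearity convention the same argument goes through with λ = ⟨n, v⟩.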

With existence and uniqueness established, orthogonal projections become reliable tools in Hilbert spaces—provided the subspace being projected onto is closed. That condition is the gatekeeper that makes the best-approximation step work and, in turn, makes the projection theorem hold in full generality.

Cornell Notes

The projection theorem in Hilbert spaces says that for any vector x and any closed subspace U of a Hilbert space X, there is a unique split x = p + n where p ∈ U and n ∈ U⊥. The uniqueness comes from the fact that U ∩ U⊥ = {0}: if a vector lies in both, then ⟨y, y⟩ = 0, so y must be the zero vector. Existence relies on the best-approximation property: since U is closed (hence suitable for approximation), there is a unique p in U minimizing ||x − u||. Setting n = x − p, the minimizing inequality against perturbations p + λv forces ⟨n, v⟩ = 0 for every v ∈ U, so n ∈ U⊥. This turns orthogonal projection into a guaranteed, well-defined operation in infinite dimensions.

Why does uniqueness of the decomposition x = p + n reduce to proving U ∩ U⊥ = {0}?

If x = p + n = p~ + n~ with p, p~ ∈ U and n, n~ ∈ U⊥, then (p − p~) = (n~ − n). The left side lies in U and the right side lies in U⊥, so their common value lies in U ∩ U⊥. Any y in U ∩ U⊥ satisfies ⟨y, u⟩ = 0 for all u ∈ U; taking u = y gives ⟨y, y⟩ = 0. Positive definiteness of the inner product implies y = 0, so the intersection contains only the zero vector. That forces p = p~ and n = n~.

How does the best-approximation property produce the candidate projection p?

Because U is a closed subspace of a Hilbert space, it is in particular a nonempty closed convex set, so the best-approximation property applies: for a given x, there exists a unique p ∈ U that minimizes the distance ||x − u|| over all u ∈ U. This p is taken as the orthogonal projection of x onto U. The proof then defines n = x − p and works to show n is orthogonal to every vector in U.
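The minimizing property can be illustrated numerically in a finite-dimensional example (an arbitrary subspace and vector, assumed for demonstration): no point of U gets closer to x than the projection does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: U = column span of A inside R^4.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
x = np.array([1.0, 2.0, 3.0, 4.0])

# Best approximation: p = A c minimizes ||x - A c|| over all coefficients c.
c, *_ = np.linalg.lstsq(A, x, rcond=None)
p = A @ c
best = np.linalg.norm(x - p)

# Compare against many other points of U: none beats the projection.
for _ in range(1000):
    u = A @ rng.normal(size=2)  # a random point of U
    assert np.linalg.norm(x - u) >= best - 1e-12
```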

What inequality is used to prove that n = x − p lies in U⊥?

The minimizing property says that for every v ∈ U and scalar λ, the distance from x to p + λv cannot be smaller than the distance to p. In norm form: ||x − (p + λv)|| ≥ ||x − p||. Since x − p = n, this becomes ||n − λv|| ≥ ||n||. Converting to inner products and expanding yields an inequality involving ⟨n, v⟩ and ||v||.
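This rewritten inequality ||n − λv|| ≥ ||n|| can be spot-checked numerically in the same kind of finite-dimensional example (assumed data, not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: U = column span of A in R^4; n = x - p.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
x = np.array([1.0, 2.0, 3.0, 4.0])
c, *_ = np.linalg.lstsq(A, x, rcond=None)
n = x - A @ c

# ||n - lam*v|| >= ||n|| for every v in U and every scalar lam:
# this is exactly the minimizing property of p, recentered at n.
for _ in range(1000):
    v = A @ rng.normal(size=2)
    lam = rng.normal()
    assert np.linalg.norm(n - lam * v) >= np.linalg.norm(n) - 1e-12
```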

Why is testing only unit vectors v (with ||v|| = 1) enough?

Orthogonality requires ⟨n, v⟩ = 0 for all v ∈ U. If the condition holds for all unit vectors in U, then it holds for all vectors because any nonzero vector w ∈ U can be written as w = ||w||·v where v is unit. Linearity of the inner product gives ⟨n, w⟩ = ||w||⟨n, v⟩, so zero for unit vectors implies zero for all vectors.

How does choosing a special scalar λ force the inequality to imply ⟨n, v⟩ = 0?

After expanding ||n − λv||^2 using the inner-product axioms, the inequality becomes an expression in λ involving terms like −λ⟨n, v⟩ and its complex conjugate plus |λ|^2||v||^2. Picking λ = ⟨v, n⟩ (with ||v|| = 1) makes the cross terms and the quadratic term combine into −|⟨v, n⟩|^2, so the inequality reads |⟨v, n⟩|^2 ≤ 0. Since |⟨v, n⟩|^2 is also nonnegative, it must be zero, yielding ⟨v, n⟩ = 0.

What role does closedness of U play in the theorem?

Closedness is what guarantees the existence (and uniqueness) of the best approximation p ∈ U to x. Without closedness, the minimizing sequence for ||x − u|| might fail to converge to a point inside U, so the candidate p might not exist. The entire existence proof of the orthogonal projection depends on having that closest point.

Review Questions

  1. State the projection theorem for a Hilbert space X and a closed subspace U, including the roles of U and U⊥.
  2. Explain why U ∩ U⊥ must equal {0} and how that fact proves uniqueness of the decomposition x = p + n.
  3. Outline how the best-approximation property leads to n = x − p being orthogonal to every v ∈ U.

Key Points

  1. Every vector x in a Hilbert space X and every closed subspace U admit a unique decomposition x = p + n with p ∈ U and n ∈ U⊥.
  2. The orthogonal projection of x onto U is the unique element p ∈ U such that x − p is orthogonal to all of U.
  3. Uniqueness follows because U ∩ U⊥ = {0}, which is forced by positive definiteness of the inner product.
  4. Existence relies on the best-approximation property: closed subspaces provide a unique closest point p to x.
  5. Defining n = x − p, the minimizing inequality against perturbations p + λv implies ⟨n, v⟩ = 0 for every v ∈ U.
  6. Orthogonal projection in infinite-dimensional Hilbert spaces works for closed subspaces because approximation arguments remain valid there.
  7. Closedness of U is not cosmetic; it is the condition that makes the closest-point step, and thus the projection, exist.

Highlights

Any x ∈ X splits uniquely into x = p + n with p in U and n in U⊥, giving a guaranteed “right-angle” geometry in Hilbert spaces.
The intersection U ∩ U⊥ contains only the zero vector, which rules out any second, different decomposition.
Existence of the projection is driven by best approximation: the closest point p ∈ U to x exists and is unique when U is closed.
Orthogonality of n is forced by comparing distances ||n − λv|| and ||n|| for all directions v ∈ U and scalars λ.