Hilbert Spaces 6 | Orthogonal Complement
Based on the video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to their content.
Briefing
Orthogonality in inner product spaces isn’t just a definition; it becomes a geometric tool for carving a vector space into mutually perpendicular directions. Two vectors are orthogonal exactly when their inner product is zero, and the same idea extends from single vectors to entire subsets and subspaces. From there, the orthogonal complement of a set A is defined as the collection of all vectors in X that have zero inner product with every element of A; symbolically, A^⊥ = {x ∈ X : ⟨x,a⟩ = 0 for all a ∈ A}. This matters because A^⊥ forms the “space of directions” that do not interact (via the inner product) with A, turning perpendicularity into a structure that can be studied with linear algebra and topology.
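The definition above can be sketched numerically. Below is a minimal finite-dimensional illustration in R^3 with the standard dot product; the helper names (`dot`, `in_orthogonal_complement`) are hypothetical, not from the video.

```python
# Hypothetical sketch: vectors in R^3 as tuples, inner product = dot product.
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def in_orthogonal_complement(x, A, tol=1e-12):
    """Check x ∈ A^⊥, i.e. ⟨x,a⟩ = 0 for every a in A."""
    return all(abs(dot(x, a)) <= tol for a in A)

A = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]          # A spans the xy-plane
print(in_orthogonal_complement((0.0, 0.0, 5.0), A))  # True: the z-axis is A^⊥
print(in_orthogonal_complement((1.0, 0.0, 1.0), A))  # False: nonzero overlap with A
```

Here A^⊥ is exactly the z-axis, matching the geometric picture of “directions that do not interact with A.”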
The video then lays out how orthogonal complements behave in any inner product space X, including infinite-dimensional ones. First, A^⊥ is always a subspace of X. Closure under addition follows because if x and y both annihilate every a in A under the inner product, then so does x+y; scalar multiples stay in A^⊥ as well, since under the convention that the inner product is linear in the second argument and conjugate-linear in the first, ⟨λx,a⟩ = λ̄⟨x,a⟩ = 0. Second, A^⊥ is always closed with respect to the norm topology induced by the inner product. The argument uses sequences: if a sequence (x_n) in A^⊥ converges to some x in X, then for every a in A the numbers ⟨x_n,a⟩ are identically zero, so their limit is also zero. Continuity of the inner product lets the limit pass inside, giving ⟨x,a⟩ = 0 for all a in A, so x remains in A^⊥.
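Both facts can be watched numerically in a real inner product space, where conjugation is trivial. The following sketch (hypothetical helper names, R^2 with the dot product) checks closure under addition and scaling, and shows the sequence argument: every term of a sequence in A^⊥ pairs to zero with A, so by continuity the limit does too.

```python
# Hypothetical sketch in R^2: A^⊥ for A = {(1,1)} is the line y = -x.
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(lam, x):
    return tuple(lam * xi for xi in x)

A = [(1.0, 1.0)]
x, y = (1.0, -1.0), (2.0, -2.0)            # both lie in A^⊥
assert dot(add(x, y), A[0]) == 0.0          # x + y stays in A^⊥
assert dot(scale(3.0, x), A[0]) == 0.0      # λx stays in A^⊥

# Sequence x_n = (1 - 1/n)·(1, -1) lies in A^⊥ for every n and converges
# to (1, -1); continuity of the inner product forces ⟨limit, a⟩ = 0 too.
for n in range(1, 6):
    assert dot(scale(1.0 - 1.0 / n, (1.0, -1.0)), A[0]) == 0.0
limit = (1.0, -1.0)
assert dot(limit, A[0]) == 0.0
```

In the real case the scalar step is plain linearity; the complex case only adds a conjugate on λ, which does not disturb the zero.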
Next comes how orthogonal complements interact with set operations. Taking the closure of A does not change the orthogonal complement: A^⊥ = (\overline{A})^⊥. The inclusion A^⊥ ⊆ (\overline{A})^⊥ is shown by approximating any point b in \overline{A} with a sequence (a_n) from A; continuity again transfers the zero inner product from ⟨x,a_n⟩ to ⟨x,b⟩. Meanwhile, enlarging a set shrinks its orthogonal complement in general: if A ⊆ U, then U^⊥ ⊆ A^⊥. A parallel phenomenon holds for spans: since span(A) contains A, the orthogonal complement of span(A) cannot be larger than A^⊥. The reverse inclusion is proved by writing any element of span(A) as a finite linear combination of vectors from A and using linearity of the inner product in the second argument to show that vectors orthogonal to A are also orthogonal to every such linear combination.
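The span argument is just linearity unrolled. A minimal sketch (hypothetical names, R^3 with the dot product): if x is orthogonal to every element of A, then ⟨x, c₁a₁ + c₂a₂⟩ = c₁⟨x,a₁⟩ + c₂⟨x,a₂⟩ = 0 for any coefficients.

```python
# Hypothetical sketch: x ⊥ A implies x ⊥ every finite linear combination of A.
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

A = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (0.0, 0.0, 2.0)                          # x ∈ A^⊥
assert all(dot(x, a) == 0.0 for a in A)

# An arbitrary combination c1*a1 + c2*a2 is still orthogonal to x,
# by linearity of the inner product in the second argument.
c1, c2 = 3.5, -2.0
combo = tuple(c1 * u + c2 * v for u, v in zip(A[0], A[1]))
assert dot(x, combo) == 0.0
```

This is exactly why A^⊥ = (span(A))^⊥: nothing new becomes “visible” to the inner product when A is enlarged to its span.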
Taken together, these results justify why orthogonal complements are typically treated for subspaces rather than arbitrary sets: the complement depends only on the closure and the span. The payoff is conceptual and practical—once A^⊥ is pinned down by these invariances, the next step is to understand what happens when orthogonal complementation is applied twice.
Cornell Notes
Orthogonality in an inner product space is defined by the inner product: vectors x and y are orthogonal when ⟨x,y⟩ = 0. This extends to subsets and subspaces: x is orthogonal to a set A if ⟨x,a⟩ = 0 for every a in A, and the orthogonal complement A^⊥ is the set of all such x. A^⊥ is always a subspace and is always closed in the norm topology induced by the inner product; sequence limits stay inside A^⊥ because the inner product is continuous. The orthogonal complement also ignores “extra” points added by closure and span: A^⊥ = (\overline{A})^⊥ and A^⊥ = (span(A))^⊥. These invariances explain why orthogonal complements are usually studied for subspaces.
- How does the definition of orthogonality for vectors extend to subsets and subspaces?
- What exactly is the orthogonal complement A^⊥, and why is it a subspace?
- Why is A^⊥ always closed in an inner product space?
- Why does taking the closure not change the orthogonal complement: A^⊥ = (\overline{A})^⊥?
- How does orthogonal complementation behave with span(A)?
Review Questions
- State the definition of A^⊥ for a subset A of an inner product space X.
- Prove (in outline) why A^⊥ is closed using a convergent sequence argument.
- Explain why A^⊥ does not change when A is replaced by its closure or by span(A).
Key Points
1. Two vectors x and y in an inner product space are orthogonal exactly when ⟨x,y⟩ = 0, and the same rule extends to subsets by requiring zero inner product with every element.
2. The orthogonal complement A^⊥ is the set of all vectors in X that have zero inner product with every vector in A: A^⊥ = {x ∈ X : ⟨x,a⟩ = 0 ∀a ∈ A}.
3. A^⊥ is always a linear subspace because it is closed under addition and scalar multiplication, and it contains the zero vector.
4. A^⊥ is always closed in the norm topology induced by the inner product; convergence of a sequence in A^⊥ preserves membership due to continuity of the inner product.
5. Orthogonal complements ignore closure: A^⊥ = (\overline{A})^⊥, proved by approximating points in \overline{A} with sequences from A.
6. Orthogonal complements ignore span: A^⊥ = (span(A))^⊥, proved by using linearity of the inner product on finite linear combinations.
7. Inclusion reverses: if A ⊆ U, then U^⊥ ⊆ A^⊥, so enlarging a set makes its orthogonal complement smaller.