
Hilbert Spaces 6 | Orthogonal Complement

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Two vectors x and y in an inner product space are orthogonal exactly when ⟨x,y⟩ = 0, and the same rule extends to subsets by requiring zero inner product with every element.

Briefing

Orthogonality in inner product spaces isn’t just a definition—it becomes a geometric tool for carving a vector space into mutually perpendicular directions. Two vectors are orthogonal exactly when their inner product is zero, and the same idea extends from single vectors to entire subsets and subspaces. From there, the orthogonal complement of a set A is defined as the collection of all vectors in X that have zero inner product with every element of A; symbolically, A^⊥ = {x in X : ⟨x,a⟩ = 0 for all a in A}. This matters because A^⊥ forms the “space of directions” that do not interact (via the inner product) with A, turning perpendicularity into a structure that can be studied with linear algebra and topology.
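As a concrete finite-dimensional sketch (not from the video): in R^n with the standard dot product, the orthogonal complement of a finite set A is the null space of the matrix whose rows are the elements of A, which numpy can compute via the SVD. The variable names here are illustrative.

```python
import numpy as np

# Finite-dimensional illustration in R^3 with the standard dot product.
# A is a set of two vectors; A^perp is the null space of the matrix
# whose rows are the elements of A (every x with A @ x = 0).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Null space via SVD: the right-singular vectors belonging to
# (numerically) zero singular values span A^perp.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
A_perp_basis = Vt[rank:]          # here: spanned by (0, 0, 1) up to sign

# Every basis vector of A^perp has zero inner product with every a in A.
print(np.allclose(A @ A_perp_basis.T, 0))   # True
```

In infinite dimensions no such matrix computation exists, which is why the abstract arguments below work directly with the inner product.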

The course then lays out how orthogonal complements behave in any inner product space X, including infinite-dimensional ones. First, A^⊥ is always a subspace of X. Closure under addition follows because if x and y both annihilate every a in A under the inner product, then so does x+y; similarly, scalar multiples stay in A^⊥ because inner products are linear in the second argument (and conjugate-linear in the first, though the result is still zero). Second, A^⊥ is always closed with respect to the norm topology induced by the inner product. The argument uses sequences: if a sequence (x_n) in A^⊥ converges to some x in X, then for any a in A the numbers ⟨x_n,a⟩ are identically zero, so their limit is also zero. Continuity of the inner product lets the limit pass inside, giving ⟨x,a⟩ = 0 for all a in A, so x remains in A^⊥.

Next comes how orthogonal complements interact with set operations. Taking the closure of A does not change the orthogonal complement: A^⊥ = (\overline{A})^⊥. The inclusion A^⊥ ⊆ (\overline{A})^⊥ is shown by approximating any point b in \overline{A} with a sequence (a_n) from A; continuity again transfers the zero inner product from ⟨x,a_n⟩ to ⟨x,b⟩. Meanwhile, enlarging a set shrinks its orthogonal complement in general: if A ⊆ U, then U^⊥ ⊆ A^⊥. A parallel phenomenon holds for spans: since span(A) contains A, the orthogonal complement of span(A) cannot be larger than A^⊥. The reverse inclusion is proved by writing any element of span(A) as a finite linear combination of vectors from A and using linearity of the inner product in the second argument to show that vectors orthogonal to A are also orthogonal to every linear combination.

Taken together, these results justify why orthogonal complements are typically treated for subspaces rather than arbitrary sets: the complement depends only on the closure and the span. The payoff is conceptual and practical—once A^⊥ is pinned down by these invariances, the next step is to understand what happens when orthogonal complementation is applied twice.

Cornell Notes

Orthogonality in an inner product space is defined by the inner product: vectors x and y are orthogonal when ⟨x,y⟩ = 0. This extends to subsets and subspaces: x is orthogonal to a set A if ⟨x,a⟩ = 0 for every a in A, and the orthogonal complement A^⊥ is the set of all such x. A^⊥ is always a subspace and is always closed in the norm topology induced by the inner product; sequence limits stay inside A^⊥ because the inner product is continuous. The orthogonal complement also ignores “extra” points added by closure and span: A^⊥ = (\overline{A})^⊥ and A^⊥ = (span(A))^⊥. These invariances explain why orthogonal complements are usually studied for subspaces.

How does the definition of orthogonality for vectors extend to subsets and subspaces?

For vectors, orthogonality means ⟨x,y⟩ = 0. For a subset A ⊆ X, a vector x is orthogonal to A when ⟨x,a⟩ = 0 for every a ∈ A. For a subspace A, the same condition is used: x ∈ A^⊥ exactly when x has zero inner product with every vector in that subspace. The notation ⊥ is used consistently for vectors, subsets, and subspaces.

What exactly is the orthogonal complement A^⊥, and why is it a subspace?

A^⊥ = {x ∈ X : ⟨x,a⟩ = 0 for all a ∈ A}. It is a subspace because if x,y ∈ A^⊥ then for any a ∈ A, ⟨x+y,a⟩ = ⟨x,a⟩ + ⟨y,a⟩ = 0 + 0 = 0, so x+y ∈ A^⊥. Also, for any scalar λ, ⟨λx,a⟩ = λ̄⟨x,a⟩ (the conjugation from the first argument does not affect the "equals zero" outcome), so λx ∈ A^⊥. The zero vector is automatically in A^⊥.
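A small numerical check of these closure properties in C^3, using numpy's np.vdot (which conjugates its first argument, matching the convention used here); the specific vectors are illustrative, not from the source.

```python
import numpy as np

# <x, a> = np.vdot(x, a) is conjugate-linear in the first argument and
# linear in the second, matching the convention in the text.
a = np.array([1.0 + 1.0j, 2.0, 0.0])
x = np.array([2.0, -1.0 + 1.0j, 5.0])   # chosen so <x, a> = 0
y = np.array([0.0, 0.0, 1.0j])          # also orthogonal to a

lam = 3.0 - 2.0j
print(np.isclose(np.vdot(x, a), 0))       # True: x is in {a}^perp
print(np.isclose(np.vdot(x + y, a), 0))   # True: closed under addition
print(np.isclose(np.vdot(lam * x, a), 0)) # True: conj(lam) * 0 is still 0
```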

Why is A^⊥ always closed in an inner product space?

Closedness is shown using sequences. If x_n ∈ A^⊥ and x_n → x in X, then for any a ∈ A, ⟨x_n,a⟩ = 0 for all n. Taking limits gives lim_n ⟨x_n,a⟩ = 0. Continuity of the inner product lets the limit pass inside: ⟨x,a⟩ = 0 for all a ∈ A, so x ∈ A^⊥. Therefore A^⊥ contains all its limit points and is closed.
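The sequence argument can be imitated numerically in R^3 (a sketch, not part of the original lecture): each x_n below lies in {a}^⊥, the sequence converges to x, and the Cauchy–Schwarz inequality bounds how far ⟨x_n,a⟩ can drift from ⟨x,a⟩.

```python
import numpy as np

# Each x_n = x + v/n lies in {a}^perp (both x and v are orthogonal to a),
# x_n -> x, and continuity of the inner product forces <x, a> = 0 too.
a = np.array([1.0, 2.0, 2.0])
x = np.array([2.0, -1.0, 0.0])   # x . a = 0
v = np.array([0.0, 1.0, -1.0])   # v . a = 0, so x + v/n stays in {a}^perp

for n in (1, 10, 100, 1000):
    x_n = x + v / n
    # Cauchy-Schwarz bounds the error: |<x_n,a> - <x,a>| <= ||x_n - x|| ||a||
    assert abs(x_n @ a - x @ a) <= np.linalg.norm(x_n - x) * np.linalg.norm(a) + 1e-12
    print(n, np.isclose(x_n @ a, 0.0))   # True: (numerically) zero along the sequence

print(np.isclose(x @ a, 0.0))            # True: the limit is orthogonal too
```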

Why does taking closure not change the orthogonal complement: A^⊥ = (\overline{A})^⊥?

One inclusion is proved by approximation. If x ∈ A^⊥ and b ∈ \overline{A}, then there exists a sequence a_n ∈ A with a_n → b. Since ⟨x,a_n⟩ = 0 for all n, continuity yields ⟨x,b⟩ = lim_n ⟨x,a_n⟩ = 0. Because this holds for every b in \overline{A}, x ∈ (\overline{A})^⊥. Combined with the general reverse inclusion from A ⊆ \overline{A}, equality follows.
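A toy version of this approximation argument in R^3 (illustrative vectors, not from the source): the points of the sequence lie in A, their limit lies in the closure, and the zero inner product survives the limit.

```python
import numpy as np

# a_n in A converge to a limit point in cl(A); if <x, a_n> = 0 for all n,
# continuity forces the inner product with the limit to be 0 as well.
limit = np.array([1.0, 2.0, 2.0])
x = np.array([2.0, -1.0, 0.0])            # x . limit = 0

a_seq = [(1.0 - 1.0 / n) * limit for n in range(1, 6)]   # a_n -> limit
print([float(x @ a_n) for a_n in a_seq])  # zero along the whole sequence
print(float(x @ limit))                   # and zero in the limit: 0.0
```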

How does orthogonal complementation behave with span(A)?

Because A ⊆ span(A), the orthogonal complement reverses inclusion: (span(A))^⊥ ⊆ A^⊥. The reverse inclusion is shown by taking x ∈ A^⊥ and any element of span(A), which is a finite linear combination ∑_j λ_j a_j with a_j ∈ A. Linearity of the inner product in the second argument gives ⟨x,∑_j λ_j a_j⟩ = ∑_j λ_j ⟨x,a_j⟩ = 0, since each ⟨x,a_j⟩ = 0. Hence x ∈ (span(A))^⊥, so A^⊥ = (span(A))^⊥.
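The same computation in miniature, assuming R^3 and randomly sampled coefficients: a vector orthogonal to each generator is orthogonal to every finite linear combination of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# If x is orthogonal to a1 and a2, linearity in the second argument makes
# x orthogonal to every combination lam_1 * a1 + lam_2 * a2 in span(A).
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
x = np.array([0.0, 0.0, 7.0])           # orthogonal to both a1 and a2

for _ in range(5):
    lam = rng.standard_normal(2)        # arbitrary coefficients
    combo = lam[0] * a1 + lam[1] * a2   # an element of span({a1, a2})
    assert np.isclose(x @ combo, 0.0)   # still orthogonal

print("x is orthogonal to all sampled elements of span(A)")
```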

Review Questions

  1. State the definition of A^⊥ for a subset A of an inner product space X.
  2. Prove (in outline) why A^⊥ is closed using a convergent sequence argument.
  3. Explain why A^⊥ does not change when A is replaced by its closure or by span(A).

Key Points

  1. Two vectors x and y in an inner product space are orthogonal exactly when ⟨x,y⟩ = 0, and the same rule extends to subsets by requiring zero inner product with every element.

  2. The orthogonal complement A^⊥ is the set of all vectors in X that have zero inner product with every vector in A: A^⊥ = {x ∈ X : ⟨x,a⟩ = 0 ∀a ∈ A}.

  3. A^⊥ is always a linear subspace because it is closed under addition and scalar multiplication, and it contains the zero vector.

  4. A^⊥ is always closed in the norm topology induced by the inner product; convergence of a sequence in A^⊥ preserves membership due to continuity of the inner product.

  5. Orthogonal complements ignore closure: A^⊥ = (\overline{A})^⊥, proved by approximating points in \overline{A} with sequences from A.

  6. Orthogonal complements ignore span: A^⊥ = (span(A))^⊥, proved by using linearity of the inner product on finite linear combinations.

  7. Inclusion reverses: if A ⊆ U, then U^⊥ ⊆ A^⊥, so enlarging a set makes its orthogonal complement smaller.

Highlights

Orthogonal complement turns “perpendicularity to a set” into a concrete subset: A^⊥ = {x : ⟨x,a⟩ = 0 for all a ∈ A}.
A^⊥ is guaranteed to be both a subspace and a closed set—sequence limits cannot escape it.
Replacing A by its closure or by span(A) leaves A^⊥ unchanged, so orthogonal complements depend only on those generated structures.