
Abstract Linear Algebra 13 | Orthogonality

4 min read

Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creator by watching, liking, and subscribing to their channel.

TL;DR

Two vectors X and Y are orthogonal in an inner product space exactly when ⟨X,Y⟩ = 0.

Briefing

Orthogonality is defined in any inner product space as the condition that two vectors have zero inner product—turning the familiar “right angle” idea into a flexible geometric tool that depends on the chosen inner product. The practical payoff is that once orthogonality is available, vectors can be decomposed into perpendicular components, which underpins concepts like orthogonal projection and normal components.

The discussion begins with a geometric picture: a vector X not parallel to a fixed line (a subspace) casts a “shadow” on that line when light shines straight down. That shadow represents the orthogonal projection of X onto the subspace. Alongside the projection sits a second component, the “normal component,” so that X can be written as a sum of two vectors—one lying in the subspace and one perpendicular to it. The key point is that the perpendicular relationship is not absolute; it is determined by the inner product, meaning the right angle is “general” and changes when the inner product changes.

From there, the formal definition is introduced. In a vector space V over F equipped with an inner product ⟨·,·⟩, two vectors X and Y are orthogonal exactly when ⟨X,Y⟩ = 0. A standard orthogonality symbol is used to express this relationship compactly. The same algebraic criterion reproduces the usual Euclidean notion in R^n, but it also extends beyond geometry into settings where “angle” is defined by the inner product.
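
In R^n with the standard dot product, this criterion is easy to check directly. A minimal sketch (the sample vectors here are illustrative, not from the video):

```python
def dot(x, y):
    """Standard Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

x = (1, 2)
y = (-2, 1)
print(dot(x, y))       # 0, so x and y are orthogonal
print(dot(x, (1, 1)))  # 3, so x and (1, 1) are not
```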

A concrete example is given in a polynomial space: consider polynomials on the interval [−1,1] with an inner product defined via an integral. Using monomials P1(x)=x and P2(x)=x^2, the inner product becomes an integral of x·x^2 = x^3 over [−1,1]. Symmetry shows that the integral is zero, so P1 and P2 are orthogonal. The takeaway is that orthogonality can be meaningfully discussed for polynomials because the only requirement is an inner product structure.
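
This integral inner product can be evaluated exactly, since ∫_{−1}^{1} x^n dx is 0 for odd n and 2/(n+1) for even n. A sketch using plain coefficient lists (the helper names are my own, not from the video):

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, p[i] = coeff of x^i."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def inner(p, q):
    """<p,q> = integral of p(x)q(x) over [-1,1], computed term by term."""
    total = Fraction(0)
    for n, c in enumerate(poly_mul(p, q)):
        if n % 2 == 0:  # odd powers integrate to 0 over the symmetric interval
            total += c * Fraction(2, n + 1)
    return total

p1 = [0, 1]     # P1(x) = x
p2 = [0, 0, 1]  # P2(x) = x^2
print(inner(p1, p2))  # 0: P1 and P2 are orthogonal
```

As a sanity check, inner(p1, p1) gives 2/3, matching ∫_{−1}^{1} x² dx.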

The second major concept is the orthogonal complement. For any subset M of V (not necessarily a subspace), the orthogonal complement M^⊥ is the set of all vectors X in V such that ⟨X,m⟩ = 0 for every m in M. Even if M is just a subset, M^⊥ always forms a subspace. Geometric intuition is provided in low dimensions: in R^3, if M is a line through the origin, M^⊥ is a plane; in R^2, if M is a line through the origin, M^⊥ is the perpendicular line. Finally, when M is a subspace, the dimensions satisfy dim(M) + dim(M^⊥) = dim(V), reflecting that the two parts fill the space in an orthogonal way—setting up the next step: orthogonal projections and related decompositions.
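
A membership test for M^⊥ needs only the defining condition ⟨X,m⟩ = 0 for every m in M. A small sketch in R^3 with the standard dot product (the sample vectors are my own choice):

```python
def dot(x, y):
    """Standard Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def in_complement(x, M):
    """True iff x is orthogonal to every vector in the subset M."""
    return all(dot(x, m) == 0 for m in M)

M = [(1, 0, 0)]                     # spans a line through the origin in R^3
print(in_complement((0, 2, 3), M))  # True: (0,2,3) lies in the plane M^perp
print(in_complement((1, 1, 0), M))  # False: it has a component along M
```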

Cornell Notes

Orthogonality in linear algebra is defined by an inner product: vectors X and Y are orthogonal exactly when ⟨X,Y⟩ = 0. This generalizes the usual right-angle idea from Euclidean space to any vector space equipped with an inner product, so “perpendicular” depends on the chosen inner product. The orthogonal complement M^⊥ of a subset M consists of all vectors that are orthogonal to every vector in M; even when M is not a subspace, M^⊥ is always a subspace. In geometric terms, orthogonal complements turn lines into perpendicular planes (in R^3) and lines into perpendicular lines (in R^2). When M is a subspace, dim(M) + dim(M^⊥) = dim(V), showing how orthogonal pieces combine to span the whole space.

How does the definition of “right angle” change when moving from R^n to a general inner product space?

A right angle is replaced by the inner-product criterion: X and Y are orthogonal when ⟨X,Y⟩ = 0. Because the inner product determines what counts as “angle,” the same vectors can be orthogonal under one inner product and not under another. In other words, perpendicularity is defined relative to the chosen inner product, not by Euclidean geometry alone.
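
To make the dependence on the inner product concrete, compare the standard dot product on R^2 with a weighted one; the weights below are an arbitrary illustrative choice (any positive weights give a valid inner product):

```python
def std_inner(x, y):
    """Standard dot product on R^2."""
    return sum(a * b for a, b in zip(x, y))

def weighted_inner(x, y, w=(1, 3)):
    """Weighted inner product <x,y> = w1*x1*y1 + w2*x2*y2."""
    return sum(wi * a * b for wi, a, b in zip(w, x, y))

x, y = (1, 1), (1, -1)
print(std_inner(x, y))       # 0: orthogonal under the standard inner product
print(weighted_inner(x, y))  # -2: not orthogonal under the weighted one
```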

Why does orthogonality make sense for polynomials, not just geometric vectors?

Orthogonality only needs an inner product. For polynomials on [−1,1], an inner product can be defined by an integral. With P1(x)=x and P2(x)=x^2, the inner product becomes ∫_{−1}^{1} x·x^2 dx = ∫_{−1}^{1} x^3 dx, which is zero by symmetry. Since the inner product is zero, P1 and P2 are orthogonal in that polynomial space’s geometry.

What exactly is the orthogonal complement M^⊥ of a subset M?

M^⊥ is the set of all vectors X in V such that ⟨X,m⟩ = 0 for every m in M. The definition does not require M to be a subspace; it can be any subset. Still, M^⊥ always ends up being a subspace, because it is defined by linear orthogonality constraints.

How do orthogonal complements look in R^3 and R^2?

In R^3, if M is a line through the origin, then M^⊥ is a plane: every vector in that plane is orthogonal to every vector in the line. In R^2, if M is a line through the origin, then M^⊥ is another line, specifically the perpendicular direction. These examples illustrate how orthogonality creates complementary geometric objects.

What dimension relationship holds when M is a subspace?

When M is a subspace, the dimensions satisfy dim(M) + dim(M^⊥) = dim(V). This expresses that M and its orthogonal complement account for the entire space without overlap in an orthogonal sense, which becomes important for orthogonal projections later.
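
The dimension formula can be sanity-checked in R^3 with a line and its complementary plane, using the standard dot product (the basis vectors are chosen for illustration):

```python
def dot(x, y):
    """Standard Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

M_basis = [(1, 0, 0)]                # M: a line through the origin, dim 1
comp_basis = [(0, 1, 0), (0, 0, 1)]  # M^perp: the yz-plane, dim 2

# every basis vector of M^perp is orthogonal to every basis vector of M
assert all(dot(m, c) == 0 for m in M_basis for c in comp_basis)
print(len(M_basis) + len(comp_basis))  # 3 = dim(R^3)
```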

Review Questions

  1. State the condition for two vectors to be orthogonal in an inner product space.
  2. Define the orthogonal complement M^⊥ and explain whether M must be a subspace.
  3. In a finite-dimensional inner product space, what dimension formula links M and M^⊥ when M is a subspace?

Key Points

  1. Two vectors X and Y are orthogonal in an inner product space exactly when ⟨X,Y⟩ = 0.
  2. Orthogonality depends on the chosen inner product, so “perpendicular” is relative to the inner product’s geometry.
  3. Orthogonality extends beyond Euclidean vectors; any inner product space supports a meaningful orthogonality concept.
  4. In a polynomial space with an integral-defined inner product, monomials like x and x^2 can be orthogonal because their inner product integral evaluates to zero.
  5. The orthogonal complement M^⊥ consists of all vectors orthogonal to every vector in M, and M^⊥ is always a subspace even if M is only a subset.
  6. If M is a subspace, then dim(M) + dim(M^⊥) = dim(V), reflecting an orthogonal split of dimensions.

Highlights

Orthogonality is defined by the inner product: ⟨X,Y⟩ = 0, which generalizes the right-angle idea to any inner product space.
A geometric “shadow” corresponds to an orthogonal projection, with the remaining component perpendicular to the target subspace.
In the polynomial example on [−1,1], the inner product of x and x^2 becomes ∫_{−1}^{1} x^3 dx = 0 by symmetry, proving orthogonality.
The orthogonal complement M^⊥ is the set of vectors orthogonal to every element of M, and it always forms a subspace.
For subspaces, orthogonal complements satisfy dim(M) + dim(M^⊥) = dim(V).