
Abstract Linear Algebra 19 | Fourier Coefficients


Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Orthogonal projection onto a finite-dimensional subspace with an orthonormal basis is computed as x|U = Σ ⟨x,bj⟩ bj.

Briefing

Orthogonal projections in inner-product spaces turn “finding coefficients” into simple inner-product calculations—no linear systems required. For a finite-dimensional subspace U with an orthonormal basis B = {b1, …, bK}, any vector x in the ambient inner-product space V projects onto U as

x|U = Σ_{j=1..K} ⟨x, b_j⟩ b_j.

Those scalars ⟨x, b_j⟩ are the coefficients of the linear combination, and when the basis is orthonormal they come directly from inner products. The payoff is practical: even when x already lies in U, the same formula reproduces x exactly, and the method avoids solving a system of equations. In this setting, the linear combination is often called the Fourier expansion of x with respect to the orthonormal basis, and the scalars are called Fourier coefficients—terminology that later connects to classical Fourier analysis.
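As a concrete sketch of this formula (a hypothetical example in R^3; the vectors and helper names are illustrative, not from the video), each coefficient is a single dot product, and a vector already lying in the subspace is reproduced exactly:

```python
import math

def dot(u, v):
    # Standard inner product on R^n
    return sum(ui * vi for ui, vi in zip(u, v))

def project(x, basis):
    # x|U = sum_j <x, b_j> b_j for an orthonormal basis of U;
    # the Fourier coefficients are the inner products <x, b_j>.
    coeffs = [dot(x, b) for b in basis]
    proj = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(x))]
    return proj, coeffs

# An orthonormal pair in R^3 (illustrative choice)
b1 = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]
b2 = [0.0, 0.0, 1.0]

# x = 3*b1 - 2*b2 lies in U = span{b1, b2}, so the formula reproduces it
x = [3 / math.sqrt(2), 3 / math.sqrt(2), -2.0]
proj, coeffs = project(x, [b1, b2])
```

No linear system is solved anywhere: the two coefficients (3 and −2) fall out of two dot products.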

The lesson then becomes concrete through a function-space example built inside a finite-dimensional span of specific functions on [−π, π]. The subspace, which the transcript calls V, is spanned by four functions: a normalized constant function (1/√2), cos(x), cos(2x), and sign(x). An inner product is defined using an integral:

⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx.

With this inner product, the chosen functions behave like an orthonormal set. The constant and cos(x) are normalized so their inner products with themselves equal 1, and symmetry arguments determine orthogonality: cos(x) is even, sign(x) is odd, so their product is odd; integrating an odd function over a symmetric interval yields 0. Similar checks across the remaining pairs establish an orthonormal basis (denoted O and B in the transcript).
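These normalization and parity checks can be reproduced numerically. The sketch below (midpoint-rule quadrature; the helper name and step count are my own, not from the video) confirms ⟨1/√2, 1/√2⟩ = 1, ⟨cos(x), cos(x)⟩ = 1, and ⟨cos(x), sign(x)⟩ = 0:

```python
import math

def inner(f, g, n=100_000):
    # <f, g> = (1/pi) * integral_{-pi}^{pi} f(x) g(x) dx, midpoint rule
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        xk = -math.pi + (k + 0.5) * h
        total += f(xk) * g(xk)
    return total * h / math.pi

const = lambda x: 1 / math.sqrt(2)
sign = lambda x: math.copysign(1.0, x)

norm_const = inner(const, const)    # normalization of the constant -> 1
norm_cos = inner(math.cos, math.cos)  # normalization of cos(x) -> 1
ortho = inner(math.cos, sign)       # even * odd integrand -> 0
```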

Using that orthonormal basis, the Fourier coefficients of a target function u(x) = sin^2(x) are computed by evaluating four inner products, one per basis element. The coefficient for the constant basis element comes out to 1/√2. The coefficient for cos(x) is 0, because the antiderivative sin^3(x)/3 takes the same value at −π and π. The coefficient for cos(2x) is −1/2: the longer integral yields −π/2, and the (1/π) normalization turns that into −1/2. Finally, the coefficient for sign(x) is 0, again due to odd-function symmetry.

With only two nonzero coefficients, the Fourier expansion collapses to a simple identity:

sin^2(x) = (1/√2)·(1/√2) + (−1/2)·cos(2x)

which the transcript rewrites in its final simplified form as

sin^2(x) = 1/2 − (1/2)cos(2x).
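The two surviving coefficients and the resulting identity can be sanity-checked numerically with the same kind of midpoint-rule inner product (a sketch; the target function is u(x) = sin^2(x), the square of the sine):

```python
import math

def inner(f, g, n=100_000):
    # <f, g> = (1/pi) * integral_{-pi}^{pi} f(x) g(x) dx, midpoint rule
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        xk = -math.pi + (k + 0.5) * h
        total += f(xk) * g(xk)
    return total * h / math.pi

u = lambda x: math.sin(x) ** 2

c_const = inner(u, lambda x: 1 / math.sqrt(2))  # coefficient of 1/sqrt(2): 1/sqrt(2)
c_cos2 = inner(u, lambda x: math.cos(2 * x))    # coefficient of cos(2x): -1/2

# Reconstruction from the two surviving terms of the expansion
approx = lambda x: c_const * (1 / math.sqrt(2)) + c_cos2 * math.cos(2 * x)
```

Since c_const·(1/√2) = 1/2, the reconstruction is exactly 1/2 − (1/2)cos(2x), which agrees with sin^2(x) at every point.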

The broader takeaway is that once an orthonormal basis is in place, Fourier coefficients are just inner products, and the resulting expansion expresses the function as a linear combination of basis elements—an idea that generalizes beyond this toy example and connects to how orthonormal bases can be constructed (e.g., via the Gram–Schmidt procedure) in later material.

Cornell Notes

In an inner-product space, an orthonormal basis makes orthogonal projection—and thus “coefficient finding”—easy. For a finite-dimensional subspace U with orthonormal basis {b1,…,bK}, the projection of x onto U is x|U = Σ ⟨x,bj⟩ bj. The scalars ⟨x,bj⟩ are the coefficients of the expansion, often called Fourier coefficients in this context. The transcript then builds an orthonormal basis inside a function space on [−π,π] using (1/√2), cos(x), cos(2x), and sign(x), with inner product ⟨f,g⟩ = (1/π)∫_{−π}^{π} f(x)g(x) dx. For u(x)=sin^2(x), only the constant and cos(2x) coefficients are nonzero, leading to sin^2(x)=1/2−(1/2)cos(2x).

Why does an orthonormal basis eliminate the need to solve a system of linear equations for coefficients?

With an orthonormal basis {b1,…,bK} for U, the orthogonal projection of x onto U is x|U = Σ_{j=1..K} ⟨x,bj⟩ bj. Because the basis vectors are orthonormal, taking inner products isolates each coefficient: ⟨x|U, bj⟩ = ⟨x, bj⟩. That means the coefficients are directly the inner products ⟨x,bj⟩, rather than unknowns in a linear system.

How does the inner product ⟨f,g⟩ = (1/π)∫_{−π}^{π} f(x)g(x) dx help establish orthogonality in the example?

It turns symmetry into algebra. For instance, cos(x) is even and sign(x) is odd, so their product is odd. Over the symmetric interval [−π,π], the integral of an odd function is 0, so ⟨cos(x), sign(x)⟩ = 0. Similar parity checks and normalization (e.g., verifying ⟨cos(x), cos(x)⟩ = 1) confirm orthonormality for the chosen basis.

What determines whether a Fourier coefficient becomes zero for u(x)=sin^2(x)?

Parity of the integrand inside the inner product. The coefficient for a basis function bj is ⟨u,bj⟩ = (1/π)∫_{−π}^{π} sin^2(x)·bj(x) dx. If sin^2(x)·bj(x) is odd, the integral cancels to 0 over [−π,π]. In the transcript, the cos(x) coefficient vanishes via evaluation at endpoints (the antiderivative sin^3(x)/3 takes the same value at ±π), and the sign(x) coefficient vanishes by odd-function symmetry.
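Both vanishing coefficients can be confirmed the same way (a sketch using a midpoint-rule approximation of the inner product; helper names are illustrative):

```python
import math

def inner(f, g, n=100_000):
    # <f, g> = (1/pi) * integral_{-pi}^{pi} f(x) g(x) dx, midpoint rule
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        xk = -math.pi + (k + 0.5) * h
        total += f(xk) * g(xk)
    return total * h / math.pi

u = lambda x: math.sin(x) ** 2

# Endpoint cancellation: the antiderivative sin^3(x)/3 is equal (namely 0) at -pi and pi
c_cos1 = inner(u, math.cos)

# Parity: even u(x) times odd sign(x) gives an odd integrand, so the integral is 0
c_sign = inner(u, lambda x: math.copysign(1.0, x))
```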

How is the coefficient for cos(2x) obtained, and what is its final value?

The coefficient is ⟨sin^2(x), cos(2x)⟩ = (1/π)∫_{−π}^{π} sin^2(x)cos(2x) dx. The transcript notes this integral is longer to work out but evaluates to −π/2; applying the (1/π) normalization gives the coefficient −1/2, which is exactly the cos(2x) coefficient in the final simplified expansion sin^2(x) = 1/2 − (1/2)cos(2x).

What is the final Fourier expansion of sin^2(x) in the chosen basis?

Only two coefficients survive: the constant term and the cos(2x) term. The transcript concludes sin^2(x) = 1/2 − (1/2)cos(2x), expressed as a linear combination of the basis functions (the constant basis element and the cos(2x) basis element).

Review Questions

  1. Given an orthonormal basis {b1,…,bK} of U, derive the formula for the orthogonal projection x|U and identify the coefficients.
  2. In the function-space example, which parity (even/odd) combinations force an inner product integral to be zero over [−π,π], and why?
  3. For u(x)=sin^2(x), which basis functions produce nonzero Fourier coefficients in the transcript’s final result, and what symmetry or calculation leads to the others being zero?

Key Points

  1. Orthogonal projection onto a finite-dimensional subspace with an orthonormal basis is computed as x|U = Σ ⟨x,bj⟩ bj.

  2. Fourier coefficients in this linear-algebra setting are exactly the inner products ⟨x,bj⟩ with respect to an orthonormal basis.

  3. Using parity (even/odd functions) over a symmetric interval [−π,π] quickly proves many inner products are zero.

  4. An inner product defined by (1/π)∫_{−π}^{π} f(x)g(x) dx turns orthogonality of functions into integral identities.

  5. In the example span{(1/√2), cos(x), cos(2x), sign(x)}, the basis is treated as orthonormal under the chosen inner product.

  6. For u(x)=sin^2(x), only the constant and cos(2x) coefficients remain nonzero, yielding sin^2(x)=1/2−(1/2)cos(2x).

Highlights

Orthonormal bases convert coefficient-finding into direct inner-product computations: x|U = Σ ⟨x,bj⟩ bj.
Even/odd symmetry over [−π,π] can make Fourier coefficients vanish without heavy integration.
The example constructs an orthonormal function basis using (1/√2), cos(x), cos(2x), and sign(x) under ⟨f,g⟩=(1/π)∫ f g.
The final expansion for sin^2(x) collapses to just two terms: a constant and cos(2x).

Topics