Abstract Linear Algebra 19 | Fourier Coefficients
Based on the video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Briefing
Orthogonal projections in inner-product spaces turn “finding coefficients” into simple inner-product calculations—no linear systems required. For a finite-dimensional subspace U with an orthonormal basis B = {b1, …, bK}, any vector x in the ambient inner-product space V projects onto U as
x|U = Σ_{j=1..K} ⟨x, b_j⟩ b_j.
Those scalars ⟨x, b_j⟩ are the coefficients of the linear combination, and when the basis is orthonormal they come directly from inner products. The payoff is practical: even when x already lies in U, the same formula reproduces x exactly, and the method avoids solving a system of equations. In this setting, the linear combination is often called the Fourier expansion of x with respect to the orthonormal basis, and the scalars are called Fourier coefficients—terminology that later connects to classical Fourier analysis.
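In coordinates this is a one-liner. Here is a minimal numerical sketch in R³ (the vectors and subspace are illustrative choices of mine, not from the transcript):

```python
import numpy as np

def project(x, basis):
    """Orthogonal projection onto span(basis): x|U = sum_j <x, b_j> b_j.

    Assumes `basis` is a list of orthonormal vectors."""
    return sum(np.dot(x, b) * b for b in basis)

# Orthonormal basis of a 2-D subspace U inside R^3 (illustrative).
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])

x = np.array([3.0, -2.0, 5.0])
p = project(x, [b1, b2])
print(p)                      # [ 3. -2.  0.]
# A vector already in U is reproduced exactly by the same formula.
print(project(p, [b1, b2]))   # [ 3. -2.  0.]
```

The coefficients ⟨x, b_j⟩ come straight from dot products; no linear system is ever assembled or solved.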
The lesson then becomes concrete through a function-space example built inside a finite-dimensional span of specific functions on [−π, π]. The subspace (which the transcript again labels V) is spanned by four functions: a normalized constant function (1/√2), cos(x), cos(2x), and sign(x). An inner product is defined using an integral:
⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx.
With this inner product, the chosen functions behave like an orthonormal set. The constant and cos(x) are normalized so their inner products with themselves equal 1, and symmetry arguments determine orthogonality: cos(x) is even, sign(x) is odd, so their product is odd; integrating an odd function over a symmetric interval yields 0. Similar checks across the remaining pairs establish an orthonormal basis (denoted O and B in the transcript).
Using that orthonormal basis, the Fourier coefficients of a target function u(x) = sin²(x) are computed by evaluating four inner products, one per basis element. The coefficient for the constant basis element comes out to 1/√2. The coefficient for cos(x) is 0, because the antiderivative of the integrand, sin³(x)/3, takes the same value at −π and π. The coefficient for cos(2x) is −1/2: the longer integral yields −π/2, and the 1/π factor in the inner product brings it to −1/2. Finally, the coefficient for sign(x) is 0 by odd-function symmetry, since sin²(x) is even, sign(x) is odd, and their product is odd.
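These four inner products can be reproduced numerically with a midpoint-sum approximation of the integral; a minimal sketch (helper names are mine), taking u(x) = sin²(x), the squared sine, which is the reading consistent with the final identity:

```python
import numpy as np

def inner(f, g, n=100_000):
    """<f, g> = (1/pi) * integral over [-pi, pi] of f(x) g(x) dx (midpoint sum)."""
    h = 2 * np.pi / n
    xs = -np.pi + h / 2 + h * np.arange(n)
    return h * np.sum(f(xs) * g(xs)) / np.pi

u = lambda x: np.sin(x) ** 2          # the squared sine
basis = [
    lambda x: np.full_like(x, 1 / np.sqrt(2)),  # normalized constant
    lambda x: np.cos(x),
    lambda x: np.cos(2 * x),
    np.sign,
]
coeffs = [inner(u, b) for b in basis]
print(coeffs)  # ~ [1/sqrt(2), 0, -1/2, 0]
```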
With only two nonzero coefficients, the Fourier expansion collapses to a simple identity:
sin²(x) = (1/√2)·(1/√2) + (−1/2)·cos(2x)
which the transcript rewrites in its final simplified form as
sin²(x) = 1/2 − (1/2)cos(2x).
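This final identity is just the double-angle formula cos(2x) = 1 − 2sin²(x) rearranged to sin²(x) = 1/2 − (1/2)cos(2x), so it holds exactly; a quick pointwise check:

```python
import numpy as np

xs = np.linspace(-np.pi, np.pi, 10_001)
lhs = np.sin(xs) ** 2
rhs = 0.5 - 0.5 * np.cos(2 * xs)
print(np.max(np.abs(lhs - rhs)))  # ~1e-16: equal up to float rounding
```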
The broader takeaway is that once an orthonormal basis is in place, Fourier coefficients are just inner products, and the resulting expansion expresses the function as a linear combination of basis elements—an idea that generalizes beyond this toy example and connects to how orthonormal bases can be constructed (e.g., via the Gram–Schmidt procedure) in later material.
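The Gram–Schmidt procedure mentioned here can be sketched in a few lines for coordinate vectors (a minimal modified Gram–Schmidt of my own, assuming linearly independent inputs):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for b in basis:
            w -= np.dot(w, b) * b   # subtract the component along each earlier b
        basis.append(w / np.linalg.norm(w))
    return basis

# b1 = [1, 1]/sqrt(2), b2 = [1, -1]/sqrt(2)
B = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
print(B[0], B[1])
```

Once such a basis is in hand, every coefficient is again a single inner product, exactly as in the function-space example.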
Cornell Notes
In an inner-product space, an orthonormal basis makes orthogonal projection—and thus “coefficient finding”—easy. For a finite-dimensional subspace U with orthonormal basis {b1,…,bK}, the projection of x onto U is x|U = Σ ⟨x,bj⟩ bj. The scalars ⟨x,bj⟩ are the coefficients of the expansion, often called Fourier coefficients in this context. The transcript then builds an orthonormal basis inside a function space on [−π,π] using (1/√2), cos(x), cos(2x), and sign(x), with inner product ⟨f,g⟩ = (1/π)∫_{−π}^{π} f(x)g(x) dx. For u(x)=sin²(x), only the constant and cos(2x) coefficients are nonzero, leading to sin²(x)=1/2−(1/2)cos(2x).
Why does an orthonormal basis eliminate the need to solve a system of linear equations for coefficients?
How does the inner product ⟨f,g⟩ = (1/π)∫_{−π}^{π} f(x)g(x) dx help establish orthogonality in the example?
What determines whether a Fourier coefficient becomes zero for u(x)=sin²(x)?
How is the coefficient for cos(2x) obtained, and what is its final value?
What is the final Fourier expansion of sin²(x) in the chosen basis?
Review Questions
- Given an orthonormal basis {b1,…,bK} of U, derive the formula for the orthogonal projection x|U and identify the coefficients.
- In the function-space example, which parity (even/odd) combinations force an inner product integral to be zero over [−π,π], and why?
- For u(x)=sin²(x), which basis functions produce nonzero Fourier coefficients in the transcript’s final result, and what symmetry or calculation leads to the others being zero?
Key Points
1. Orthogonal projection onto a finite-dimensional subspace with an orthonormal basis is computed as x|U = Σ ⟨x,bj⟩ bj.
2. Fourier coefficients in this linear-algebra setting are exactly the inner products ⟨x,bj⟩ with respect to an orthonormal basis.
3. Using parity (even/odd functions) over a symmetric interval [−π,π] quickly proves many inner products are zero.
4. An inner product defined by (1/π)∫_{−π}^{π} f(x)g(x) dx turns orthogonality of functions into integral identities.
5. In the example span{(1/√2), cos(x), cos(2x), sign(x)}, the basis is treated as orthonormal under the chosen inner product.
6. For u(x)=sin²(x), only the constant and cos(2x) coefficients remain nonzero, yielding sin²(x)=1/2−(1/2)cos(2x).