Fourier Transform 2 | Trigonometric Polynomials [dark version]
Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creator by watching, liking, and subscribing to their channel.
Fourier series approximations rely on expressing periodic functions as finite linear combinations of sine and cosine waves.
Briefing
Fourier series set up approximations of periodic functions by building them from sine and cosine waves, and the key move is to standardize everything to a 2π-periodic setting. Whether the original function repeats every 2 units, 4 units, or any period T, rescaling it to repeat every 2π makes the approximation strategy clearer: represent the target as a linear combination of basic trigonometric building blocks that already repeat every 2π. That standardization matters because the familiar sine and cosine functions are naturally 2π-periodic, so stretching or compressing the original function to match that period simplifies the math without changing the essential periodic structure.
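The rescaling step can be sketched in a few lines of Python. This is a minimal illustration, not from the video; the helper name `rescale_to_2pi` is hypothetical. The idea: if f has period T, then g(x) = f(T·x/(2π)) has period 2π.

```python
import math

def rescale_to_2pi(f, T):
    """Given f with period T, return g(x) = f(T*x/(2*pi)), which is 2*pi-periodic."""
    return lambda x: f(T * x / (2 * math.pi))

# Example: f repeats every 4 units; the rescaled g repeats every 2*pi.
f = lambda x: math.sin(math.pi * x / 2)   # period 4
g = rescale_to_2pi(f, 4)

assert abs(g(1.3) - g(1.3 + 2 * math.pi)) < 1e-9  # g is 2*pi-periodic
```

Nothing essential about f changes here; only the horizontal scale is adjusted so the standard 2π machinery applies.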
The construction starts by defining a function space: all real-valued functions on ℝ that satisfy f(x+2π)=f(x) for every real x. This set forms a real vector space, meaning sums and scalar multiples stay inside the same 2π-periodic world. From there, the transcript builds intuition with examples. Constant functions are trivially 2π-periodic. More interesting are functions like sin(x), which oscillate once per 2π, and sin(2x), which oscillates twice per 2π—an explicit way to control “frequency” by changing the multiplier inside the sine or cosine.
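The vector-space claim can be sanity-checked numerically. In this sketch, `is_2pi_periodic` is an illustrative helper (not from the transcript) that spot-checks f(x+2π) = f(x) at a few sample points; the point is that sums and scalar multiples of 2π-periodic functions remain 2π-periodic.

```python
import math

TWO_PI = 2 * math.pi

def is_2pi_periodic(f, samples=(0.0, 0.7, 2.5, 5.1), tol=1e-12):
    """Spot-check f(x + 2*pi) == f(x) at a few sample points."""
    return all(abs(f(x + TWO_PI) - f(x)) < tol for x in samples)

f = math.sin                          # oscillates once per 2*pi
g = lambda x: math.cos(2 * x)         # oscillates twice per 2*pi
h = lambda x: 3 * f(x) - 0.5 * g(x)   # a linear combination of the two

assert is_2pi_periodic(f) and is_2pi_periodic(g) and is_2pi_periodic(h)
```

A finite spot-check is of course not a proof, but it mirrors the closure argument: periodicity of f and g transfers directly to any linear combination.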
To approximate more complicated periodic functions, the approach collects an infinite family of simple waves into a linearly independent set U. The set includes all sine functions sin(kx) for k=1,2,3,… (odd functions that vanish at the origin) and all cosine functions cos(kx) plus the constant function (even functions that equal 1 at the origin). Linear independence is crucial: it guarantees that if a periodic function is expressed as a linear combination of these basis waves, the coefficients are uniquely determined. That uniqueness is the foundation for later approximation results.
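The uniqueness of coefficients can be previewed numerically via the inner products that the later "geometry in function space" step will formalize: distinct waves in U integrate against each other to zero, while each wave has nonzero "length". The Riemann-sum helper `inner` below is an illustrative construction, not part of the source.

```python
import math

TWO_PI = 2 * math.pi
N = 10_000  # Riemann-sum resolution

def inner(f, g):
    """Approximate the integral of f*g over [0, 2*pi] by a Riemann sum."""
    h = TWO_PI / N
    return sum(f(i * h) * g(i * h) for i in range(N)) * h

# Distinct waves from U are (numerically) orthogonal...
assert abs(inner(lambda x: math.sin(x), lambda x: math.sin(2 * x))) < 1e-6
assert abs(inner(math.sin, math.cos)) < 1e-6
# ...while each wave has nonzero length, which pins coefficients down uniquely.
assert abs(inner(math.sin, math.sin) - math.pi) < 1e-6
```

If two linear combinations of U agreed as functions, pairing both against each basis wave would force every coefficient to match, which is exactly the uniqueness that linear independence guarantees.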
With U in hand, the transcript defines what it calls a real trigonometric polynomial: a finite linear combination of sine and cosine terms. Such a polynomial has the form T(x) = a₀ + Σₖ aₖ cos(kx) + Σₖ bₖ sin(kx), where both sums run over finitely many k and all coefficients are real. These polynomials are the approximating objects for 2π-periodic functions, analogous in spirit to Taylor polynomials but fundamentally different in their building blocks and approximation behavior.
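The definition translates directly into a small evaluator. A minimal sketch; the helper name `trig_poly` is hypothetical, and the coefficient lists `a` and `b` hold aₖ and bₖ for k = 1, 2, ….

```python
import math

def trig_poly(a0, a, b):
    """Build T(x) = a0 + sum_k a[k-1]*cos(k*x) + sum_k b[k-1]*sin(k*x)."""
    def T(x):
        cos_part = sum(ak * math.cos(k * x) for k, ak in enumerate(a, start=1))
        sin_part = sum(bk * math.sin(k * x) for k, bk in enumerate(b, start=1))
        return a0 + cos_part + sin_part
    return T

# Example: T(x) = 1 + 2*cos(x) - sin(3*x)
T = trig_poly(1.0, a=[2.0], b=[0.0, 0.0, -1.0])
x0 = 0.4
assert abs(T(0.0) - 3.0) < 1e-12                  # a0 + a1, since cos(0)=1, sin(0)=0
assert abs(T(x0) - T(x0 + 2 * math.pi)) < 1e-9    # T is itself 2*pi-periodic
```

Note that the result is automatically 2π-periodic: it lives in the same function space as the targets it approximates.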
Finally, the transcript introduces a complex version of trigonometric polynomials. Allowing complex coefficients and using the identity that links sine/cosine to complex exponentials makes the representation more compact: instead of writing separate sine and cosine sums, one can write a single sum of exponential terms e^{ikx}, with k ranging over both negative and positive integers and complex coefficients c_k. The complex formulation streamlines the algebra, setting up the next step, geometry in function space, where orthogonality becomes the organizing principle for Fourier analysis.
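The equivalence of the two forms can be verified directly. Assuming the standard identities cos(kx) = (e^{ikx} + e^{-ikx})/2 and sin(kx) = (e^{ikx} − e^{-ikx})/(2i), the coefficients convert as c₀ = a₀, cₖ = (aₖ − i·bₖ)/2, and c₋ₖ = (aₖ + i·bₖ)/2. A minimal sketch with one frequency:

```python
import cmath
import math

# Real form: T(x) = a0 + a1*cos(x) + b1*sin(x)
a0, a1, b1 = 0.5, 2.0, -1.0

def real_form(x):
    return a0 + a1 * math.cos(x) + b1 * math.sin(x)

# Complex coefficients: c0 = a0, c_{+1} = (a1 - i*b1)/2, c_{-1} = (a1 + i*b1)/2
c = {0: a0, 1: (a1 - 1j * b1) / 2, -1: (a1 + 1j * b1) / 2}

def complex_form(x):
    return sum(ck * cmath.exp(1j * k * x) for k, ck in c.items())

x0 = 0.9
assert abs(complex_form(x0) - real_form(x0)) < 1e-12  # same function, one compact sum
```

Three separate real terms collapse into a single indexed family c_k·e^{ikx}, which is exactly the compactness the complex formulation buys.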
Cornell Notes
Fourier series approximate 2π-periodic functions using finite linear combinations of sine and cosine waves. The method begins by restricting attention to the vector space of all real functions f on ℝ satisfying f(x+2π)=f(x), then selecting a linearly independent set U made of sin(kx) and cos(kx) (with the constant included via cos(0x)). Because U is linearly independent, any real trigonometric polynomial built from it has uniquely determined coefficients, which supports later approximation guarantees. A real trigonometric polynomial uses separate cosine and sine sums with real coefficients. A complex trigonometric polynomial replaces the split sine/cosine form with a compact exponential form using complex coefficients, leveraging the cosine/sine–exponential identity.
- Why does the discussion “stretch or compress” functions to make them 2π-periodic before building Fourier series?
- What is the function space used for Fourier series in this setup, and why is it a vector space?
- How do sine and cosine families relate to frequency and symmetry in the approximation basis?
- Why is linear independence of the set U a big deal for later approximation?
- What distinguishes a real trigonometric polynomial from a complex trigonometric polynomial?
- How is the “Taylor polynomial” analogy used, and where does it break?
Review Questions
- What conditions define the 2π-periodic function space used to build Fourier series, and how does that guarantee closure under addition and scalar multiplication?
- Write the general form of a real trigonometric polynomial and explain how the sine and cosine parts differ in symmetry at x=0.
- Why does switching to complex exponentials make the trigonometric polynomial representation more compact?
Key Points
1. Fourier series approximations rely on expressing periodic functions as finite linear combinations of sine and cosine waves.
2. Normalizing to a 2π-periodic setting aligns the target function with the natural period of sin(x) and cos(x).
3. The set of all real 2π-periodic functions forms a real vector space under addition and scalar multiplication.
4. The basis set U is built from sin(kx) (odd, zero at the origin) and cos(kx) plus the constant (even, value 1 at the origin).
5. Linear independence of U ensures uniqueness of coefficients in any trigonometric polynomial representation.
6. A real trigonometric polynomial uses separate cosine and sine sums with real coefficients, while a complex trigonometric polynomial uses complex coefficients and exponential form for compactness.