Real Analysis 25 | Uniform Convergence
Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Uniform convergence is the stronger notion of convergence for functions: a single “eventually” index works for every point in the domain at once. In pointwise convergence, the threshold index N that makes |F_n(x) − f(x)| small is allowed to depend on the specific point x. Uniform convergence removes that flexibility: for every ε > 0, there is one N such that for all n ≥ N and for every x in the interval I, the inequality |F_n(x) − f(x)| < ε holds simultaneously. That change in quantifier order is the whole difference, and it matters because it turns many separate point-by-point guarantees into one global control over the entire graph.
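To see the dependence on x concretely, here is a small numerical sketch (not from the transcript) using the standard example F_n(x) = x^n on [0, 1), whose pointwise limit is 0. For a fixed ε, the smallest admissible N at each point x grows without bound as x approaches 1, which is exactly why no single N can serve the whole interval:

```python
import math

# Hypothetical illustration: F_n(x) = x**n converges pointwise to 0 on [0, 1).
# The smallest N with x**N < eps is N(x) = ceil(log(eps)/log(x)),
# which blows up as x -> 1, so no single N works for the whole interval.

def pointwise_N(x, eps):
    """Smallest N such that x**N < eps, for 0 <= x < 1."""
    if x == 0.0:
        return 1
    return math.ceil(math.log(eps) / math.log(x))

eps = 0.01
for x in (0.5, 0.9, 0.99, 0.999):
    print(x, pointwise_N(x, eps))
```

The growing values of N(x) near x = 1 are the quantifier-order failure made visible: pointwise convergence holds everywhere, but the required index is unbounded across the domain.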
Geometrically, the limit function f has a “tube” of height ε around its graph. Uniform convergence means that from some stage onward, every graph of F_n lies entirely inside that tube for all x in I, not just at individual x-values. Pointwise convergence only ensures that each fixed x eventually lands in the tube, even if different points require different stages. The uniform version forces the entire family of functions to settle down together.
To make this precise, the transcript introduces a way to measure how close two functions are: use the supremum norm. Given two functions f and g on I, consider |f(x) − g(x)| for each x, then take the maximum “worst-case” size across the domain. Since a maximum may not exist, the supremum is used instead: ||f − g||_∞ = sup_{x∈I} |f(x) − g(x)|. With this distance, uniform convergence becomes a simple statement about ordinary convergence of real numbers: F_n → f uniformly on I exactly when ||F_n − f||_∞ → 0 as n → ∞. In other words, the worst-case error across the whole interval shrinks to zero.
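The supremum-norm criterion can be checked numerically. The following sketch (my own example, not from the transcript) approximates ||F_n − f||_∞ on a finite grid for F_n(x) = x^n with limit f = 0: on a subinterval [0, 0.9] the worst-case error goes to 0 (uniform convergence), while on [0, 1) it stays near 1 (no uniform convergence):

```python
# Numerical sketch: grid approximation of the supremum norm ||F_n - f||_inf
# for the hypothetical sequence F_n(x) = x**n with pointwise limit f = 0 on [0, 1).

def sup_norm(F, f, xs):
    """Approximate sup_{x in xs} |F(x) - f(x)| on a finite grid."""
    return max(abs(F(x) - f(x)) for x in xs)

n = 100
F_n = lambda x: x ** n
f = lambda x: 0.0

# On [0, 0.9] the worst-case error is 0.9**n, which shrinks to 0:
# convergence is uniform on this subinterval.
xs_inner = [0.9 * i / 1000 for i in range(1001)]
err_inner = sup_norm(F_n, f, xs_inner)

# On [0, 1) the supremum stays close to 1 no matter how large n is:
# convergence is not uniform there.
xs_full = [0.999 * i / 1000 for i in range(1001)]
err_full = sup_norm(F_n, f, xs_full)

print(err_inner, err_full)
```

A grid only bounds the supremum from below, but it already separates the two cases: the inner error is tiny while the error near 1 remains large.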
An example illustrates why pointwise convergence alone is insufficient. The sequence consists of functions that get steeper and steeper, converging pointwise to a limit function f that has a jump. Even though each fixed x sees F_n(x) approach f(x), the supremum norm never drops below a positive threshold. The transcript notes that around the jump, the distance between F_n and f stays at least 1 (using the given values −1 and 1 on either side of the jump), so ||F_n − f||_∞ cannot tend to 0. This shows pointwise convergence does not imply uniform convergence.
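A hypothetical concrete instance of this jump scenario (the transcript only gives the values −1 and 1, not a formula) is F_n(x) = tanh(nx), which steepens with n and converges pointwise to the sign function. The sketch below shows both halves of the argument: each fixed x eventually has a tiny error, yet sampling ever closer to the jump keeps the worst-case error near 1:

```python
import math

# Hypothetical instance of the jump example: F_n(x) = tanh(n*x) steepens
# with n and converges pointwise to f(x) = -1 (x < 0), 0 (x = 0), +1 (x > 0).

def F(n, x):
    return math.tanh(n * x)

def f(x):
    return 0.0 if x == 0 else math.copysign(1.0, x)

def grid_sup_error(n, xs):
    """Lower bound for ||F_n - f||_inf via a finite grid."""
    return max(abs(F(n, x) - f(x)) for x in xs)

# Pointwise: at any fixed x != 0 the error eventually vanishes.
print(abs(F(10**6, 0.5) - f(0.5)))  # essentially 0

# Uniform: points in (0, 1/n], ever closer to the jump, keep the
# worst-case error near 1 for every n.
for n in (10, 100, 1000):
    xs = [k / (100 * n) for k in range(1, 101)]
    print(n, grid_sup_error(n, xs))
```

The grid points shrink toward 0 along with n, which is exactly why the supremum error cannot decay: near the jump, F_n is still close to 0 while f is already 1.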
The key takeaway is that uniform convergence is strictly stronger than pointwise convergence. That strength pays off later because uniform convergence preserves important properties of functions. The transcript highlights two: continuity and boundedness. While pointwise convergence can fail to keep these properties, uniform convergence ensures that if each F_n is continuous (or bounded) and F_n converges uniformly to f, then the limit function f inherits continuity (or boundedness).
Cornell Notes
Uniform convergence strengthens pointwise convergence by requiring one index N that works for every point x in the domain at the same time. Formally, for every ε > 0 there exists N such that for all n ≥ N and all x ∈ I, |F_n(x) − f(x)| < ε. This global control can be expressed using the supremum norm: ||F_n − f||_∞ = sup_{x∈I} |F_n(x) − f(x)|, and uniform convergence is equivalent to ||F_n − f||_∞ → 0. A jump-function example shows why pointwise convergence alone fails: the supremum error stays bounded away from zero, so the supremum norm cannot go to 0. Uniform convergence is therefore stronger and preserves properties like continuity and boundedness.
How does the quantifier order distinguish pointwise from uniform convergence?
What does the “ε-tube” picture mean for graphs of F_n?
Why introduce the supremum norm, and how does it connect to uniform convergence?
What goes wrong in the jump-function example where pointwise convergence holds but uniform convergence fails?
Which properties does uniform convergence preserve that pointwise convergence may not?
Review Questions
- State the formal definition of uniform convergence and explain where the index N depends (or does not depend) on x.
- Explain how the supremum norm relates to uniform convergence, and why it captures “worst-case” error.
- Using the jump-function scenario, describe why ||F_n − f||_∞ cannot go to 0 even though F_n(x) → f(x) for each fixed x.
Key Points
1. Uniform convergence requires a single N for each ε that works for all x in the domain, unlike pointwise convergence where N may depend on x.
2. The ε-tube interpretation: from some N onward, the entire graph of F_n stays within vertical distance ε of the limit graph f across the whole interval.
3. The supremum norm ||f − g||_∞ = sup_{x∈I} |f(x) − g(x)| measures global closeness by taking the worst-case pointwise difference.
4. Uniform convergence is equivalent to ||F_n − f||_∞ → 0, turning the problem into ordinary convergence of real numbers.
5. A sequence converging pointwise to a discontinuous (jump) limit can still fail to converge uniformly because the supremum error stays bounded away from zero.
6. Uniform convergence is stronger than pointwise convergence and preserves key properties such as continuity and boundedness of the limit function.