Limits, L'Hôpital's rule, and epsilon delta definitions | Chapter 7, Essence of calculus
Based on 3Blue1Brown's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Limits sit at the center of calculus not as a new intuition, but as the rigorous language that makes “approach” precise—especially when derivatives are defined as limits. The derivative at a point is framed as the ratio of output change to input change, then interpreted through a limiting process: start with a small, ordinary nudge h away from the input, compute the resulting change in f, and ask what the ratio does as h shrinks toward 0. Crucially, this formal setup avoids talk of infinitesimals. The symbols dx and df can be treated as concrete finite nudges, provided the analysis explicitly tracks what happens as the nudge size approaches 0.
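The limiting process described above can be sketched numerically. This is an illustration, not code from the video; the function `f(x) = x²` and the helper name `difference_quotient` are assumptions chosen for the example.

```python
# Watch the difference quotient (f(a + h) - f(a)) / h settle toward f'(a)
# as the finite nudge h shrinks toward 0 -- no infinitesimals required.
def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2  # sample function; f'(x) = 2x, so f'(3) = 6
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, difference_quotient(f, 3, h))
```

Each printed ratio is a perfectly ordinary finite quotient; only the pattern of values as h shrinks encodes the derivative.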
That emphasis on “approaches” leads directly to the epsilon-delta definition of limits, which turns the vague idea of getting closer into a testable condition. Using the example ((2+h)^3−2^3)/h, the expression is undefined at h=0 because it becomes 0/0, but it is well-defined for h values arbitrarily close to 0. As h approaches 0, the outputs narrow toward 12 from both sides, so the limit exists and equals 12. The mathematician’s skeptical question—what exactly does “approach” mean—gets answered by looking at output ranges. A limit exists when, for every tolerance ε (no matter how small), one can choose an input window δ around the target point such that every input within δ (excluding the problematic point itself) produces outputs within ε of the limiting value.
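The example above can be checked directly. This sketch (not the video's code) evaluates ((2+h)^3 − 2^3)/h near h = 0 and then tests one concrete ε–δ pair; the particular values of `eps` and `delta` are assumptions for illustration.

```python
# The expression is undefined at h = 0 (it becomes 0/0), but algebraically
# it equals 12 + 6h + h**2 for h != 0, so values squeeze toward 12.
def F(h):
    return ((2 + h) ** 3 - 2 ** 3) / h

for h in [0.1, -0.1, 0.001, -0.001]:
    print(h, F(h))

# Epsilon-delta in action: for the tolerance eps, the input window delta
# keeps every sampled output within eps of the limiting value 12.
eps, delta = 0.01, 0.0008
samples = [delta * t for t in (0.5, -0.5, 0.99, -0.99)]
assert all(abs(F(h) - 12) < eps for h in samples)
```

Since |F(h) − 12| = |6h + h²|, any δ with 6δ + δ² < ε works, which is exactly the “for every ε there exists a δ” pattern.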
When the output refuses to squeeze into any arbitrarily small neighborhood, the limit fails to exist. The transcript contrasts the smooth “approach to 12” behavior with a jump-like counterexample: approaching from the right yields 2, while approaching from the left yields 1. No matter how tiny the input range becomes, the corresponding outputs still straddle at least a width of 1, so there is no single number that the function approaches. Epsilon-delta formalizes that failure: there exists some ε for which no δ can force the outputs to stay within ε of a candidate limit.
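A jump of this kind is easy to model. The function `G` below is a hypothetical stand-in with the same left/right behavior the text describes (right-hand value 2, left-hand value 1); it is not the specific function from the video.

```python
# A step function: outputs approach 2 from the right but 1 from the left,
# so there is no single number the outputs settle toward at h = 0.
def G(h):
    return 2.0 if h > 0 else 1.0

for h in [0.1, 0.001, -0.001, -0.1]:
    print(h, G(h))

# Failure of epsilon-delta: for any candidate limit L, pick eps = 0.4.
# Every input window around 0 still contains outputs 1.0 and 2.0, which
# are 1 apart, so they cannot both sit within 0.4 of the same L.
```

This is the formal shape of a non-existent limit: one bad ε defeats every possible δ.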
After building the rigor, the discussion shifts to computation. Many limits produce indeterminate forms like 0/0, such as sin(πx)/(x^2−1) near x=1 (and similarly near x=−1). Since direct substitution fails, the method uses a derivative-based approximation: when two functions f(x) and g(x) both vanish at x=a, their values near a behave like their derivatives times the small nudge (roughly f′(a)·dx and g′(a)·dx). In the ratio f(x)/g(x), the dx factors cancel, leaving the limit equal to f′(a)/g′(a). This is L’Hôpital’s Rule, presented as a systematic way to turn certain indeterminate limits into derivative ratios. The rule is attributed historically to Johann Bernoulli, with L’Hôpital later associated with the work through patronage.
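The worked example can be verified numerically. This sketch (an illustration, with the helper name `ratio` assumed) compares direct evaluation of sin(πx)/(x²−1) near x = 1 against the L'Hôpital prediction f′(1)/g′(1).

```python
import math

# Direct substitution at x = 1 gives sin(pi)/0 = 0/0, but nearby values
# of the ratio are well-defined and approach f'(1)/g'(1).
def ratio(x):
    return math.sin(math.pi * x) / (x ** 2 - 1)

# L'Hopital: f(x) = sin(pi*x) gives f'(1) = pi*cos(pi) = -pi;
#            g(x) = x**2 - 1  gives g'(1) = 2.  So the limit is -pi/2.
prediction = (math.pi * math.cos(math.pi)) / (2 * 1)

for x in [1.1, 1.01, 1.001]:
    print(x, ratio(x))
print("L'Hopital prediction:", prediction)
```

The printed ratios close in on −π/2 ≈ −1.5708, matching the derivative-based shortcut without ever plugging in the problematic point.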
Overall, the through-line is that limits make calculus precise: they justify derivative definitions without infinitesimals, define what “approach” means in a way that can succeed or fail, and power practical tools like L’Hôpital’s Rule for evaluating otherwise stubborn 0/0 limits.
Cornell Notes
The derivative is defined using limits: take a finite input nudge h away from a point, form the ratio of output change to input change, and then see what happens as h→0. Limits are made rigorous with epsilon-delta: a function approaches L if for every ε>0 there exists a δ>0 such that inputs within δ of the target (but not equal to it) produce outputs within ε of L. A limit fails when outputs cannot be squeezed into any arbitrarily small ε-neighborhood, such as when left- and right-hand behaviors approach different values. For computing indeterminate 0/0 limits, L’Hôpital’s Rule uses the idea that near a point where f(a)=g(a)=0, the ratio f(x)/g(x) is approximately f′(a)/g′(a), because the small dx factors cancel. This turns many “plug-in fails” problems into derivative calculations.
Why does the derivative definition use a limit of a difference quotient instead of talking about infinitesimals?
How does epsilon-delta turn “approach” into an unambiguous condition?
What does it mean for a limit not to exist, using the left/right jump example?
How does L’Hôpital’s Rule arise from local linear approximations near a 0/0 point?
Why does sin(πx)/(x^2−1) have a hole at x=1, and what is the limit there?
Review Questions
- In epsilon-delta language, what must be true about δ for every ε to guarantee that lim_{h→0} F(h)=L?
- Give an example of a situation where a limit fails to exist and explain it using the idea of output ranges that cannot be squeezed.
- Explain why the dx factors cancel in the derivation of L’Hôpital’s Rule for f(x)/g(x) near a point where both numerator and denominator vanish.
Key Points
1. A derivative at a point is defined as a limit of the difference quotient (f(a+h)−f(a))/h as h→0, using ordinary finite nudges rather than infinitesimals.
2. The notation df/dx in derivative expressions is shorthand for the explicit limiting process in the formal definition.
3. A limit exists when output values can be forced into an arbitrarily small ε-neighborhood of the candidate L by choosing inputs within some δ-neighborhood.
4. A limit fails when left- and right-hand behaviors approach different values or when output ranges cannot shrink below a fixed width no matter how small δ becomes.
5. The epsilon-delta definition formalizes “approach” by requiring: for every ε>0 there exists δ>0 such that 0<|h|<δ implies |F(h)−L|<ε.
6. L’Hôpital’s Rule for 0/0 indeterminate forms follows from local linear behavior: near x=a, f(x)≈f′(a)·dx and g(x)≈g′(a)·dx, so f(x)/g(x)→f′(a)/g′(a).
7. For sin(πx)/(x^2−1) near x=1, the limit is computed as (cos(π)·π)/(2·1)=−π/2.