
Differential equations, a tourist's guide | DE1

3Blue1Brown · 6 min read

Based on 3Blue1Brown's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Differential equations model change by relating a function to its derivatives, often capturing feedback where acceleration depends on the current state.

Briefing

Differential equations are the language for describing change—when it’s easier to model how a system evolves than to pin down its exact state at every moment. From Newtonian mechanics to population growth and even the dynamics of affection, they formalize relationships where rates of change depend on the current values themselves. The core takeaway is that many real-world systems can be expressed as equations linking a function to its derivatives, and that this structure unlocks both qualitative insight and practical computation, even when exact formulas are out of reach.

The tour begins by distinguishing two main types: ordinary differential equations (ODEs), where the unknown depends on a single input such as time, and partial differential equations (PDEs), where the unknown depends on multiple inputs like space and time. A thrown object under gravity provides a first ODE: if y(t) is vertical position, then ÿ = −g. Solving it means “working backwards” from derivatives—integrating to recover velocity and position while using initial conditions to fix the constants. That simple example already hints at a recurring theme: differential equations often contain feedback loops where acceleration depends on position (or other state variables), not just on time.

Gravity between bodies makes that feedback explicit. In the two-body gravitational setting, acceleration depends on distance, so the system becomes a coupled dance between position and velocity. More generally, physics often uses second-order differential equations, where the highest derivative appearing is the second derivative; higher-order equations involve third, fourth, or even higher derivatives. Conceptually, solving such equations is like assembling an “infinite jigsaw puzzle”: the values of the unknown at every time must fit together with their own rate-of-change constraints.

The pendulum example turns that abstraction into something concrete—and more realistic than the standard small-angle sine-wave story. The true restoring acceleration involves sin(θ), not θ, so the pendulum’s period lengthens for larger swings and the motion stops resembling a pure sine wave. With damping included (modeled as a term proportional to angular velocity), the resulting nonlinear differential equation becomes “juicy”: analytic solutions are either extremely complicated or unavailable in closed form. That limitation pushes the focus from exact solving to understanding the dynamics directly from the equations.

To build that understanding, the discussion shifts from plotting θ versus time to using phase space: a two-dimensional state space with axes (θ, θ̇). Each point represents a complete snapshot of the system, and the differential equation induces a vector field that shows how the state moves. Trajectories spiral toward stable fixed points when damping is present, and changing parameters like the air-resistance coefficient μ changes how quickly the spiral tightens. This same phase-space approach generalizes beyond pendulums: the three-body problem expands the state space dramatically (18 dimensions for positions and momenta), and the stability question becomes a matter of whether nearby trajectories contract or expand.

Finally, when exact solutions are unavailable, numerical simulation offers a practical path: step forward in time using small increments Δt, repeatedly updating θ and θ̇ based on the differential equation. The tour closes by connecting these modeling limits to chaos theory—small measurement errors can cause trajectories to diverge rapidly, making long-term prediction unreliable even when the equations are known. The result is a sobering but motivating message: the complexity of nature isn’t hidden; it lives inside the math itself.
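The pendulum itself is not chaotic, but it does contain an expanding direction: near the inverted (unstable) equilibrium, two trajectories that start a millionth of a radian apart separate exponentially. The sketch below, with illustrative parameter values, gives a small taste of the sensitivity to initial conditions described above.

```python
import math

# Near theta = pi (pendulum balanced upright), nearby trajectories diverge
# exponentially. Parameters g, l, dt, and the initial states are assumptions
# chosen for illustration, not values from the video.
g, l, dt = 9.8, 1.0, 1e-3

def evolve(theta, omega, t_end=3.0):
    """Step the undamped pendulum theta'' = -(g/l) sin(theta) forward in time."""
    for _ in range(int(t_end / dt)):
        omega += -(g / l) * math.sin(theta) * dt
        theta += omega * dt
    return theta

a = evolve(math.pi - 1e-6, 0.0)  # balanced almost upright
b = evolve(math.pi - 2e-6, 0.0)  # initial angle differs by only 1e-6 rad
print(abs(a - b))                # the gap has grown far beyond 1e-6
```

Linearizing near the top gives growth like e^(√(g/l)·t), so a microscopic measurement error is amplified by orders of magnitude within a few seconds.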

Cornell Notes

Differential equations describe systems by relating a quantity to its derivatives, capturing how change depends on the current state. ODEs involve one independent variable (often time), while PDEs involve multiple inputs like space and time. The pendulum example shows why real motion deviates from the small-angle sine approximation: the restoring force depends on sin(θ), and damping adds a velocity-dependent term. Because nonlinear equations are often unsolvable in closed form, phase space (θ, θ̇) and vector fields provide qualitative insight into stability and long-term behavior. When analytic solutions fail, numerical methods approximate trajectories by stepping forward with small time increments Δt, though chaos theory warns that prediction can still break down due to sensitivity to initial conditions.

Why does gravity lead to a differential equation even in the simplest “thrown object” scenario?

With y(t) as vertical position, gravity produces a constant downward acceleration. In derivative form, that means ÿ = −g. Since velocity is ẏ and acceleration is ÿ, the equation directly links the second derivative of position to a constant. Solving it requires integrating: first recover ẏ(t) = −gt + (initial velocity), then integrate again to get y(t) = −(1/2)gt^2 + (initial velocity)t + (initial position). Initial conditions supply the constants because many functions share the same derivative.
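This double integration can be checked directly in code. The sketch below hard-codes illustrative values for g and the initial conditions (these are assumptions, not values from the video) and verifies by finite differences that the closed-form solution really satisfies ÿ = −g.

```python
# Closed-form solution of y'' = -g, with constants fixed by initial conditions.
# g, v0, y0 are illustrative assumptions.
g = 9.8    # gravitational acceleration (m/s^2)
v0 = 20.0  # initial upward velocity (m/s)
y0 = 0.0   # initial height (m)

def y(t):
    """Position: integrate acceleration twice; v0 and y0 fix the constants."""
    return -0.5 * g * t**2 + v0 * t + y0

def y_dot(t):
    """Velocity: one integration of y'' = -g."""
    return -g * t + v0

# Sanity check: a central finite difference of y_dot recovers y'' = -g.
h = 1e-5
t = 1.3
accel = (y_dot(t + h) - y_dot(t - h)) / (2 * h)
print(accel)  # approximately -9.8
```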

What changes when gravity depends on where the bodies are, as in planetary motion?

Acceleration can no longer be treated as a constant. With two bodies, the gravitational force points toward the other body and its strength scales inversely with distance squared, so acceleration becomes a function of position. Since position changes over time and acceleration depends on position, the system couples variables: position affects acceleration, and acceleration affects velocity, which then affects position. This feedback is a common pattern in differential equations where derivatives are defined in terms of the function itself.
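The feedback loop can be made concrete in a few lines. This toy sketch (with an arbitrary gravitational parameter and starting state, both assumptions for illustration) treats one body as fixed at the origin and takes a single Euler step, showing how position feeds into acceleration, acceleration into velocity, and velocity back into position.

```python
import math

# Toy two-body feedback: acceleration depends on position, position on velocity.
# GM and the starting state are illustrative assumptions.
GM = 1.0  # gravitational parameter of the body fixed at the origin

def acceleration(pos):
    """Inverse-square attraction toward the origin: a = -GM * r / |r|^3."""
    r = math.hypot(pos[0], pos[1])
    return (-GM * pos[0] / r**3, -GM * pos[1] / r**3)

pos = (1.0, 0.0)
vel = (0.0, 1.0)
dt = 0.01

# One step of the loop: position -> acceleration -> velocity -> position.
ax, ay = acceleration(pos)
vel = (vel[0] + ax * dt, vel[1] + ay * dt)
pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
print(pos, vel)
```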

Why does the pendulum’s motion stop being a sine wave for large angles?

The restoring acceleration along the arc is proportional to −sin(θ), not −θ. In the small-angle regime, sin(θ) ≈ θ, which recovers the familiar harmonic-motion sine-wave behavior and yields the approximate period 2π√(l/g). But for larger angles, the approximation fails: the period becomes longer than the high-school formula predicts, and θ versus time no longer resembles a sine wave. The key distinction is where the sine appears: in the true equation, sin is applied to θ inside the acceleration term, whereas in the small-angle solution it is θ(t) itself that comes out as a sine function of time.
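The lengthening period can be measured numerically. This sketch simulates the undamped pendulum θ̈ = −(g/l) sin(θ) released from rest (l, g, and the step size are illustrative assumptions), times a half swing, and compares the result to 2π√(l/g).

```python
import math

# Compare the true pendulum period against the small-angle formula.
# l, g, dt are illustrative assumptions.
g, l = 9.8, 1.0
small_angle_period = 2 * math.pi * math.sqrt(l / g)

def period(theta0, dt=1e-5):
    """Release from rest at theta0; double the time to the far turning point."""
    theta, omega, t = theta0, 0.0, 0.0
    while True:
        omega += -(g / l) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
        if t > dt and omega >= 0.0:  # momentarily at rest on the far side
            return 2 * t

print(small_angle_period)  # about 2.007 s
print(period(0.1))         # small swing: close to the formula
print(period(2.0))         # large swing: noticeably longer
```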

How does phase space (θ, θ̇) turn a second-order equation into something visual and analyzable?

A second-order ODE for θ(t) can be rewritten as two first-order equations by treating the state as the pair (θ, θ̇). In phase space, each point represents a complete snapshot: angle plus angular velocity. The differential equation determines the “arrow” at each point—its direction and relative magnitude—by giving θ̇ as the first component of the state’s rate of change and θ̈ as the second component. Trajectories follow these arrows over time, making stability and attraction visible as patterns like inward spirals toward fixed points.
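That rewriting is mechanical enough to express directly. The sketch below encodes the damped pendulum θ̈ = −μθ̇ − (g/l) sin(θ) as a vector field on the state (θ, θ̇); the parameter values g, l, μ are illustrative assumptions.

```python
import math

# The second-order equation theta'' = -mu*theta' - (g/l) sin(theta),
# rewritten as a first-order system on the state (theta, theta_dot).
# g, l, mu are illustrative assumptions.
g, l, mu = 9.8, 2.0, 0.1

def vector_field(state):
    """Rate of change of the state: the 'arrow' attached to each phase-space point."""
    theta, theta_dot = state
    return (theta_dot,                                    # d(theta)/dt
            -mu * theta_dot - (g / l) * math.sin(theta))  # d(theta_dot)/dt

# The arrow at (pi/4, 0): no angular motion yet, but acceleration back toward 0.
print(vector_field((math.pi / 4, 0.0)))
```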

What does damping do to the pendulum’s trajectories in phase space, and how can μ be interpreted?

Damping is modeled as a velocity-proportional term, written as −μθ̇ in the equation for θ̈. In phase space, that produces trajectories that spiral inward rather than oscillating forever at constant amplitude. Increasing μ makes the spiral tighten faster: the pendulum slows down more quickly and loses energy more rapidly. Even without knowing the physical meaning, the vector field’s behavior reveals that larger μ increases the rate at which the system approaches an attracting state.
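The tightening spiral can be quantified without plotting anything: simulate the damped pendulum for a fixed time and compare the final distance from the fixed point at the origin for two values of μ. All parameter values here are illustrative assumptions.

```python
import math

# Larger mu -> the phase-space spiral tightens faster.
# g, l, the initial state, and the two mu values are illustrative assumptions.
g, l = 9.8, 1.0

def final_radius(mu, t_end=10.0, dt=1e-3):
    """Distance sqrt(theta^2 + theta_dot^2) from the fixed point after t_end."""
    theta, omega = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        alpha = -(g / l) * math.sin(theta) - mu * omega
        omega += alpha * dt
        theta += omega * dt
    return math.hypot(theta, omega)

# Stronger damping ends up much closer to the attracting fixed point.
print(final_radius(0.5), final_radius(2.0))
```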

How do numerical methods approximate solutions when analytic forms are unavailable?

Numerical simulation steps forward in time using a small increment Δt. Starting from initial values θ(0) and θ̇(0), the method updates θ by θ ← θ + θ̇Δt and updates θ̇ by θ̇ ← θ̇ + θ̈Δt, where θ̈ is computed from the differential equation as a function of θ and θ̇. Repeating this many times approximates θ(t). Smaller Δt improves accuracy but requires more steps (e.g., reaching θ at t = 10 with Δt = 0.01 needs about 1000 steps).
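The update rule above transcribes almost verbatim into code. This sketch runs the 1000 steps of Δt = 0.01 mentioned in the example to approximate the damped pendulum's state at t = 10; g, l, μ, and the initial state are illustrative assumptions.

```python
import math

# Euler stepping, exactly as described: theta <- theta + theta_dot * dt,
# then theta_dot <- theta_dot + theta_ddot * dt, with theta_ddot from the ODE.
# g, l, mu, and the initial conditions are illustrative assumptions.
g, l, mu = 9.8, 2.0, 0.1
theta, theta_dot = math.pi / 3, 0.0
dt, steps = 0.01, 1000  # 1000 steps of 0.01 reaches t = 10

for _ in range(steps):
    theta_ddot = -mu * theta_dot - (g / l) * math.sin(theta)  # from the equation
    theta += theta_dot * dt
    theta_dot += theta_ddot * dt

print(theta, theta_dot)  # approximate state at t = 10
```

Shrinking dt (and raising the step count to match) improves the approximation at the cost of more iterations, which is the accuracy/cost trade-off the text describes.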

Review Questions

  1. How does rewriting a second-order differential equation as a system of two first-order equations enable phase-space visualization?
  2. In the pendulum model, what role does sin(θ) play in producing deviations from the small-angle sine-wave approximation?
  3. Why can numerical simulation still fail to provide reliable long-term prediction in chaotic systems?

Key Points

  1. Differential equations model change by relating a function to its derivatives, often capturing feedback where acceleration depends on the current state.

  2. Ordinary differential equations (ODEs) use a single independent variable (commonly time), while partial differential equations (PDEs) involve multiple inputs such as space and time.

  3. The thrown-object example yields ÿ = −g, and solving it requires integrating twice and using initial conditions to fix constants.

  4. The pendulum’s realistic dynamics depend on sin(θ) and, with damping, on a velocity term proportional to θ̇, which breaks the pure sine-wave approximation for large angles.

  5. Phase space (θ, θ̇) converts a second-order problem into a vector-field picture, making stability and attraction patterns visible as trajectories.

  6. When closed-form solutions are impractical or nonexistent, numerical methods approximate trajectories by stepping forward with small time increments Δt.

  7. Chaos theory limits long-term predictability: tiny differences in initial conditions can produce wildly different trajectories even when the governing equations are known.

Highlights

Gravity’s constant acceleration leads to the clean ODE ÿ = −g, solvable by integrating backward from derivatives using initial conditions.
Real pendulums don’t follow a perfect sine wave because the restoring term is −sin(θ), not −θ; damping further changes the motion qualitatively.
Phase space turns the pendulum into a vector-field problem: each state (θ, θ̇) has a direction of motion, and trajectories reveal stability as spirals toward fixed points.
Numerical simulation approximates solutions by repeatedly updating θ and θ̇ using the differential equation, trading accuracy for computational cost via Δt.
Chaos theory adds a second layer of limitation: even with known equations, prediction can fail due to sensitivity to initial conditions.

Topics

Mentioned

  • Stephen Strogatz
  • James Gleick
  • ODE
  • PDE