
Ordinary Differential Equations 20 | Matrix Exponential

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

For homogeneous autonomous linear systems $\dot{x}(t) = A\,x(t)$, the solution with $x(0) = x_0$ is $x(t) = e^{tA} x_0$.

Briefing

Solving a homogeneous autonomous linear system of differential equations reduces to computing a single object: the matrix exponential. For systems of the form $\dot{x}(t) = A\,x(t)$, where $A$ is a constant matrix, the unique global solution of the initial value problem $x(0) = x_0$ can be written as $x(t) = e^{tA} x_0$. The key move is to start from the fixed-point iteration used in the Picard–Lindelöf theorem and watch it generate the same power-series structure as the ordinary exponential function—except with matrix powers replacing scalar powers.
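In symbols, the statement condenses to the following (a compact restatement of the setup above, with $\dot{x}$ denoting the time derivative and the series definition spelled out):

$$\dot{x}(t) = A\,x(t),\quad x(0) = x_0 \;\;\Longrightarrow\;\; x(t) = e^{tA} x_0, \qquad e^{tA} := \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k.$$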

The construction begins with the integral form of the differential equation, $x(t) = x_0 + \int_0^t A\,x(s)\,ds$. In Picard iteration, the first guess is the constant function $x^{(0)}(t) = x_0$. Plugging this into the integral yields $x^{(1)}(t) = (I + tA)\,x_0$, where $I$ is the identity matrix. Reapplying the integral operator repeatedly produces higher-order terms: the second iterate adds $\frac{t^2}{2}A^2 x_0$, the next adds $\frac{t^3}{3!}A^3 x_0$, and so on. After $n$ steps, the partial sum has the form $\big(\sum_{k=0}^{n} \frac{t^k}{k!} A^k\big)\,x_0$. Taking the limit as $n \to \infty$ gives the solution for all time, and the matrix-valued series $\sum_{k=0}^{\infty} \frac{t^k}{k!} A^k$ is defined as the matrix exponential $e^{tA}$.
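As a quick numerical sanity check of this series picture, here is a minimal sketch (not from the video; it assumes NumPy and SciPy are available, and the matrix A and time t below are arbitrary illustrative choices):

```python
# Sketch: the truncated series sum_{k=0}^{n} (t^k/k!) A^k approaches expm(t*A).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # illustrative constant matrix (rotation generator)
t = 1.0

def exp_series(A, t, n_terms):
    """Partial sum of the matrix exponential series up to order n_terms."""
    total = np.zeros_like(A)
    term = np.eye(A.shape[0])            # k = 0 term: the identity matrix
    for k in range(n_terms + 1):
        total = total + term
        term = term @ (t * A) / (k + 1)  # next term: (tA)^(k+1) / (k+1)!
    return total

for n in (1, 2, 5, 10):
    err = np.linalg.norm(exp_series(A, t, n) - expm(t * A))
    print(f"n = {n:2d}, truncation error = {err:.2e}")
```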

A concrete example makes the method feel less abstract. Consider the 2D system $\dot{x} = Ax$ with $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, which corresponds to rotation in the plane. Computing powers of $A$ reveals a repeating pattern: $A^2 = -I$, $A^3 = -A$, and $A^4 = I$. Because of that structure, the infinite series for $e^{tA}$ collapses into separate even and odd terms. The even powers contribute $\cos(t)\,I$ and the odd powers contribute $\sin(t)\,A$, with the off-diagonal entries carrying the appropriate signs. The result is the closed form $e^{tA} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$. Multiplying this by an initial vector $x_0$ in the plane produces a trajectory that stays on a circle centered at the origin. Changing $x_0$ selects which circle, while shifting time changes only the point along that same orbit—not the orbit itself.
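A similarly minimal sketch for the rotation example (again assuming NumPy and SciPy; the initial vector below is an arbitrary choice) checks both the closed form and the circular orbits:

```python
# Sketch: for A = [[0, -1], [1, 0]], expm(t*A) should equal the rotation
# matrix [[cos t, -sin t], [sin t, cos t]], and |e^{tA} x0| should stay |x0|.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x0 = np.array([2.0, 0.0])                # arbitrary initial vector; radius = 2

for t in np.linspace(0.0, 2.0 * np.pi, 5):
    E = expm(t * A)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    x = E @ x0
    print(f"t = {t:4.2f}  |expm(tA) - R(t)| = {np.linalg.norm(E - R):.1e}  "
          f"|x(t)| = {np.linalg.norm(x):.3f}")
```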

The payoff is practical: for homogeneous autonomous linear systems, solving $\dot{x} = Ax$, $x(0) = x_0$ is equivalent to evaluating $e^{tA} x_0$. But the approach has a clear boundary. If the system is nonautonomous—meaning $A$ depends on time—then $e^{tA}$ no longer applies directly, and a different framework is needed for the next step in the series.

Cornell Notes

For homogeneous autonomous linear systems $\dot{x}(t) = A\,x(t)$ with constant matrix $A$, the initial value problem $x(0) = x_0$ has a unique global solution given by $x(t) = e^{tA} x_0$. This comes from Picard–Lindelöf iteration: repeated integral substitution generates the power series $\sum_{k=0}^{\infty} \frac{t^k}{k!} A^k$, which is defined as the matrix exponential. In a 2D rotation example with $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, matrix powers repeat so the series separates into even and odd terms, producing $e^{tA} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$. Multiplying by $x_0$ yields circular orbits, with time shifts moving the point along the same circle.

How does Picard iteration lead to the matrix exponential for $\dot{x} = Ax$?

Write the integral form $x(t) = x_0 + \int_0^t A\,x(s)\,ds$. Start with $x^{(0)}(t) = x_0$. The first iterate gives $x^{(1)}(t) = (I + tA)\,x_0$. Repeating the substitution produces $x^{(n)}(t) = \big(\sum_{k=0}^{n} \frac{t^k}{k!} A^k\big)\,x_0$. Taking $n \to \infty$ yields $x(t) = e^{tA} x_0$, and the matrix series $\sum_{k=0}^{\infty} \frac{t^k}{k!} A^k$ is defined as $e^{tA}$.

Why is the solution guaranteed to exist for all time in this setting?

The system is homogeneous and autonomous with $A$ constant, so the Picard–Lindelöf (Cauchy–Lipschitz) theorem applies with a globally well-behaved right-hand side. The resulting solution is global (defined for all $t \in \mathbb{R}$), which justifies taking the infinite series limit to define $e^{tA}$ and use it for every $t$.

In the 2D example $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, what pattern in the powers of $A$ makes the exponential computable?

Compute powers: $A^2 = -I$, so $A^3 = -A$ and $A^4 = I$. This repeating cycle forces the matrix exponential series to split into even and odd powers. Even powers contribute multiples of $I$ with coefficients matching the $\cos(t)$ series, while odd powers contribute multiples of $A$ with coefficients matching the $\sin(t)$ series.
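Written out, that split is (a short derivation consistent with the answer above, using $A^{2m} = (-1)^m I$ and $A^{2m+1} = (-1)^m A$):

$$e^{tA} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k = \Big(\sum_{m=0}^{\infty} \frac{(-1)^m t^{2m}}{(2m)!}\Big) I + \Big(\sum_{m=0}^{\infty} \frac{(-1)^m t^{2m+1}}{(2m+1)!}\Big) A = (\cos t)\, I + (\sin t)\, A.$$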

How do the entries of $e^{tA}$ relate to $\cos t$ and $\sin t$ in the rotation example?

Because even powers of $A$ reduce to $\pm I$, the diagonal entries become $\cos t$. Odd powers reduce to $\pm A$, which places $\sin t$ terms in the off-diagonal positions with alternating signs. The final closed form is $e^{tA} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$.

Why do all trajectories form circles, and what changes when $x_0$ or time changes?

For any initial vector $x_0$, the solution is $x(t) = e^{tA} x_0$. The matrix $e^{tA}$ is a rotation matrix, so it preserves lengths and keeps $\|x(t)\| = \|x_0\|$ constant. That means the orbit is a circle centered at the origin. Changing $x_0$ selects the radius (which circle), while changing $t$ moves the point to a different angle on the same circle.

Review Questions

  1. What power series defines $e^{tA}$, and how does it connect to Picard iteration?
  2. For $\dot{x} = Ax$ with constant $A$, what is the formula for $x(t)$ given $x(0) = x_0$?
  3. In the rotation example, which matrix power identity (like $A^2 = -I$) makes the exponential simplify to $\cos t$ and $\sin t$?

Key Points

  1. For homogeneous autonomous linear systems $\dot{x} = Ax$, the solution with $x(0) = x_0$ is $x(t) = e^{tA} x_0$.

  2. Picard–Lindelöf iteration generates the series $\sum_{k=0}^{\infty} \frac{t^k}{k!} A^k$, which defines the matrix exponential $e^{tA}$.

  3. The matrix exponential is a matrix-valued analogue of the scalar exponential, replacing scalar powers $a^k$ with matrix powers $A^k$.

  4. In the 2D rotation case $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, the power cycle $A^2 = -I$, $A^3 = -A$, $A^4 = I$ forces the series to collapse into $\cos(t)\,I$ and $\sin(t)\,A$.

  5. The resulting closed form $e^{tA} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$ is a rotation matrix.

  6. Multiplying $e^{tA}$ by $x_0$ produces circular orbits centered at the origin; time shifts move the point along the same circle.

  7. If $A$ depends on time (nonautonomous systems), the simple $e^{tA}$ method no longer applies directly.

Highlights

The matrix exponential $e^{tA}$ is the closed-form object that turns the system $\dot{x} = Ax$, $x(0) = x_0$ into the explicit solution $x(t) = e^{tA} x_0$.
Picard iteration doesn’t just approximate the solution—it reproduces the exponential power series term by term.
For $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, the identity $A^2 = -I$ makes $e^{tA}$ equal the standard rotation matrix with $\cos t$ and $\sin t$ entries.
All trajectories in the example are circles centered at the origin because $e^{tA}$ preserves vector length.

Topics

Mentioned

  • EVP
  • Picard–Lindelöf
  • Picard
  • Cauchy–Lipschitz