Ordinary Differential Equations 20 | Matrix Exponential
Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
For homogeneous autonomous linear systems x'(t) = A x(t), the solution with x(0) = x0 is x(t) = e^{tA} x0.
Briefing
Solving a homogeneous autonomous linear system of differential equations reduces to computing a single object: the matrix exponential. For systems of the form x'(t) = A x(t), where A is a constant matrix, the unique global solution of the initial value problem with x(0) = x0 can be written as x(t) = e^{tA} x0. The key move is to start from the fixed-point iteration used in the Picard–Lindelöf theorem and watch it generate the same power-series structure as the ordinary exponential function, except with matrix powers replacing scalar powers.
The construction begins with the integral form of the differential equation, x(t) = x0 + ∫_0^t A x(s) ds. In Picard iteration, the first guess is the constant function x^(0)(t) = x0. Plugging this into the integral yields x^(1)(t) = (I + tA) x0, where I is the identity matrix. Reapplying the integral operator repeatedly produces higher-order terms: the second iterate adds (t^2/2!) A^2, the next adds (t^3/3!) A^3, and so on. After n steps, the partial sum has the form x^(n)(t) = (I + tA + (t^2/2!) A^2 + … + (t^n/n!) A^n) x0. Taking the limit as n → ∞ gives the solution for all time, and the matrix-valued series Σ_{k=0}^∞ (t^k/k!) A^k is defined as the matrix exponential e^{tA}.
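The partial sums above can be checked numerically. Here is a minimal pure-Python sketch (for 2×2 matrices; the helper names `mat_mul`, `mat_add`, and `expm_series` are illustrative, not from the video) that accumulates the terms (t^k/k!) A^k exactly as the Picard iterates generate them:

```python
import math

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def expm_series(A, t, n_terms=30):
    """Partial sum S_n = sum_{k=0}^{n} (t^k / k!) A^k, as in the Picard iterates."""
    identity = [[1.0, 0.0], [0.0, 1.0]]
    total = identity
    term = identity  # invariant: after step k, term == (t^k / k!) A^k
    for k in range(1, n_terms + 1):
        term = mat_mul(term, A)
        term = [[term[i][j] * t / k for j in range(2)] for i in range(2)]
        total = mat_add(total, term)
    return total

# Sanity check on a diagonal matrix: for A = diag(1, 2), e^{tA} = diag(e^t, e^{2t}),
# so the series entries should match the scalar exponentials.
A = [[1.0, 0.0], [0.0, 2.0]]
E = expm_series(A, 0.5)
print(E[0][0], math.exp(0.5))  # both ≈ 1.6487
print(E[1][1], math.exp(1.0))  # both ≈ 2.7183
```

The diagonal case is a useful sanity check because each entry reduces to the familiar scalar series for e^{t} and e^{2t}.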
A concrete example makes the method feel less abstract. Consider the 2D system x'(t) = A x(t) with A = [[0, -1], [1, 0]], which corresponds to rotation in the plane. Computing powers of A reveals a repeating pattern: A^2 = -I, A^3 = -A, and A^4 = I. Because of that structure, the infinite series for e^{tA} collapses into separate even and odd terms. The even powers contribute 1 - t^2/2! + t^4/4! - … = cos(t) on the diagonal, and the odd powers contribute t - t^3/3! + t^5/5! - … = sin(t) off the diagonal, with the off-diagonal entries carrying the appropriate signs. The result is the closed form e^{tA} = [[cos t, -sin t], [sin t, cos t]]. Multiplying this by an initial vector x0 in the plane produces a trajectory that stays on a circle centered at the origin. Changing x0 selects which circle, while shifting time changes only the point along that same orbit, not the orbit itself.
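Both claims in this example can be verified directly: the series for A = [[0, -1], [1, 0]] reproduces the rotation matrix, and applying it to an initial vector preserves the distance from the origin. A self-contained sketch (the function name `expm_rotation` is illustrative):

```python
import math

def expm_rotation(t, n_terms=30):
    """Sum the series (t^k / k!) A^k for A = [[0, -1], [1, 0]] (pure Python, 2x2)."""
    A = [[0.0, -1.0], [1.0, 0.0]]
    total = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]  # holds (t^k / k!) A^k
    for k in range(1, n_terms + 1):
        term = [[sum(term[i][m] * A[m][j] for m in range(2)) * t / k
                 for j in range(2)] for i in range(2)]
        total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return total

t = 1.2
E = expm_rotation(t)
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
# the truncated series matches the closed-form rotation matrix
print(max(abs(E[i][j] - R[i][j]) for i in range(2) for j in range(2)))

# trajectories stay on a circle: the norm of e^{tA} x0 equals the norm of x0
x0 = [3.0, 4.0]
x = [E[0][0] * x0[0] + E[0][1] * x0[1],
     E[1][0] * x0[0] + E[1][1] * x0[1]]
print(math.hypot(*x))  # ≈ 5.0, the same radius as x0
```

The norm check is exactly the "circular orbit" statement: rotation matrices preserve length, so every trajectory stays on the circle through its starting point.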
The payoff is practical: for homogeneous autonomous linear systems, solving x'(t) = A x(t) is equivalent to evaluating e^{tA}. But the approach has a clear boundary. If the system is nonautonomous, meaning A = A(t) depends on time, then x(t) = e^{tA} x0 no longer applies directly, and a different framework is needed for the next step in the series.
Cornell Notes
For homogeneous autonomous linear systems x'(t) = A x(t) with constant matrix A, the initial value problem x(0) = x0 has a unique global solution given by x(t) = e^{tA} x0. This comes from Picard–Lindelöf iteration: repeated integral substitution generates the power series Σ_{k=0}^∞ (t^k/k!) A^k, which is defined as the matrix exponential. In a 2D rotation example with A = [[0, -1], [1, 0]], matrix powers repeat (A^2 = -I, A^4 = I), so the series separates into even and odd terms, producing e^{tA} = [[cos t, -sin t], [sin t, cos t]]. Multiplying by x0 yields circular orbits, with time shifts moving the point along the same circle.
How does Picard iteration lead to the matrix exponential for x'(t) = A x(t)?
Why is the solution guaranteed to exist for all time in this setting?
In the 2D example A = [[0, -1], [1, 0]], what pattern in the powers of A makes the exponential computable?
How do the entries of e^{tA} relate to cos(t) and sin(t) in the rotation example?
Why do all trajectories form circles, and what changes when x0 or time changes?
Review Questions
- What power series defines e^{tA}, and how does it connect to Picard iteration?
- For x'(t) = A x(t) with constant A, what is the formula for x(t) given x(0) = x0?
- In the rotation example, which matrix power identity (like A^2 = -I) makes the exponential simplify to cos(t) and sin(t)?
Key Points
- 1
For homogeneous autonomous linear systems x'(t) = A x(t), the solution with x(0) = x0 is x(t) = e^{tA} x0.
- 2
Picard–Lindelöf iteration generates the series Σ_{k=0}^∞ (t^k/k!) A^k, which defines the matrix exponential.
- 3
The matrix exponential e^{tA} is a matrix-valued analogue of the scalar exponential e^{ta}, replacing the scalar a with the matrix A.
- 4
In the 2D rotation case A = [[0, -1], [1, 0]], the power cycle A^2 = -I, A^3 = -A, A^4 = I forces the series to collapse into cos(t) and sin(t) terms.
- 5
The resulting closed form e^{tA} = [[cos t, -sin t], [sin t, cos t]] is a rotation matrix.
- 6
Multiplying e^{tA} by x0 produces circular orbits centered at the origin; time shifts move the point along the same circle.
- 7
If A depends on time (a nonautonomous system), the simple e^{tA} method no longer applies directly.