
Ordinary Differential Equations 19 | Solution Space

4 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

For linear first-order systems X′(t)=A(t)X(t)+B(t), standard continuity assumptions on A(t) and B(t) yield solutions defined on the whole interval I (often all of R).

Briefing

For systems of linear differential equations, the solution set has a rigid structure: in the homogeneous case, all solutions form an n-dimensional vector space. That fact matters because it turns an infinite-dimensional-looking problem (functions of time) into a finite one—once n linearly independent solutions are known, every other solution is a linear combination of them.

The discussion starts with first-order linear systems written in the form X′(t)=A(t)X(t)+B(t), where A(t) is an n×n matrix and B(t) is an n-dimensional vector. Under the usual continuity assumptions on A(t) and B(t), solutions exist globally on the whole interval I (and, in the common global setting, on all of R). A special simplification appears for autonomous systems, where A(t) and B(t) are constant. In a concrete 2×2 example, the phase portrait reveals a fixed point at the origin and closed circular orbits around it. Those closed orbits correspond to periodic solutions, and the direction of motion (clockwise as time increases) is read directly from the vector field.
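The video's exact 2×2 matrix is not specified here, but a standard choice producing a fixed point at the origin and clockwise circular orbits is A = [[0, 1], [−1, 0]], i.e. x′ = y, y′ = −x. The sketch below (an assumption, not the video's verbatim example) checks numerically that X(t) = (cos t, −sin t) solves X′ = AX, that its orbit is the unit circle, and that the field points clockwise:

```python
import math

# Assumed illustrative autonomous system: X'(t) = A X(t) with
#     A = [[0, 1],
#          [-1, 0]]
# i.e. x' = y, y' = -x.  One solution is X(t) = (cos t, -sin t).

def A_times(v):
    x, y = v
    return (y, -x)

def solution(t):
    return (math.cos(t), -math.sin(t))

# Check X'(t) = A X(t) via a central finite difference at a few times.
h = 1e-6
for t in [0.0, 0.7, 2.0]:
    xp, yp = solution(t + h)
    xm, ym = solution(t - h)
    deriv = ((xp - xm) / (2 * h), (yp - ym) / (2 * h))
    field = A_times(solution(t))
    assert abs(deriv[0] - field[0]) < 1e-5
    assert abs(deriv[1] - field[1]) < 1e-5

# The orbit is the unit circle: x^2 + y^2 stays constant along the solution.
for t in [0.0, 1.0, 3.0]:
    x, y = solution(t)
    assert abs(x * x + y * y - 1.0) < 1e-12

# Direction of motion: at (1, 0) the field is (0, -1), pointing downward
# from the positive x-axis, so trajectories run clockwise as t increases.
```

Reading the direction off the vector field at a single point, as in the last comment, is exactly how the clockwise motion is identified in the phase portrait.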

To handle the general (nonhomogeneous) case, the key move is to study the associated homogeneous system obtained by dropping the forcing term: X′(t)=A(t)X(t). The homogeneous structure brings linear-algebra tools to the foreground. If α(t) and β(t) are solutions, then α(t)+β(t) is also a solution because differentiation and matrix multiplication distribute over addition. Likewise, scaling a solution by any real number produces another solution. As a result, the set S0 of all continuously differentiable solutions to the homogeneous system forms a real vector space (indeed, a subspace of the function space).

The central theorem then pins down its dimension. For an n-dimensional linear system, S0 has dimension n. The reasoning uses the existence and uniqueness of solutions to initial value problems (via the Picard–Lindelöf theorem). Each initial condition X(t0)=x0 determines a unique solution curve in an “extended phase portrait” that includes time. To convert the dimension question into a linear-algebra statement, a linear map L is defined by projecting each solution α(t) to its value at a fixed time t0: L(α)=α(t0). Surjectivity follows because every x0 in R^n occurs as the value of some solution at t0. Injectivity follows from uniqueness: if two solutions agree at t0, they must coincide for all t in I. With L a linear isomorphism between S0 and R^n, the dimensions match, giving dim(S0)=dim(R^n)=n.

The takeaway is practical: for homogeneous linear ODE systems, finding n linearly independent solutions is enough to generate the entire solution space. The next step—set up for the following video—is to see how this plays out in an explicit example.

Cornell Notes

Linear first-order systems X′(t)=A(t)X(t)+B(t) have globally defined solutions on the interval I under standard continuity assumptions. In the homogeneous case X′(t)=A(t)X(t), the solution set S0 is a real vector space because sums and scalar multiples of solutions remain solutions. The crucial result is that for an n-dimensional system, S0 has dimension n: once n linearly independent homogeneous solutions are found, every homogeneous solution is their linear combination. The proof uses Picard–Lindelöf uniqueness to show that the projection map L:S0→R^n given by L(α)=α(t0) is a linear isomorphism, so dim(S0)=dim(R^n)=n.

Why does the homogeneous system’s solution set form a vector space?

If α(t) and β(t) satisfy α′(t)=A(t)α(t) and β′(t)=A(t)β(t), then (α+β)′(t)=α′(t)+β′(t)=A(t)α(t)+A(t)β(t)=A(t)(α(t)+β(t)). Similarly, for any real scalar λ, (λα)′(t)=λ α′(t)=λ A(t)α(t)=A(t)(λα(t)). So sums and scalar multiples stay inside S0.
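The algebra above can be spot-checked numerically. Assuming the same illustrative matrix A = [[0, 1], [−1, 0]] (a hypothetical choice; the closure argument works for any A(t)), two independent solutions are α(t) = (cos t, −sin t) and β(t) = (sin t, cos t), and the combination α + λβ should again satisfy the equation:

```python
import math

# Two solutions of x' = y, y' = -x for the assumed matrix A = [[0,1],[-1,0]].
def A_times(v):
    x, y = v
    return (y, -x)

def alpha(t):
    return (math.cos(t), -math.sin(t))

def beta(t):
    return (math.sin(t), math.cos(t))

def combo(t, lam=2.5):
    # alpha(t) + lam * beta(t): closure says this is again a solution.
    a, b = alpha(t), beta(t)
    return (a[0] + lam * b[0], a[1] + lam * b[1])

# Verify (alpha + lam*beta)'(t) = A (alpha + lam*beta)(t) by finite differences.
h = 1e-6
for t in [0.3, 1.1]:
    p, m = combo(t + h), combo(t - h)
    deriv = ((p[0] - m[0]) / (2 * h), (p[1] - m[1]) / (2 * h))
    field = A_times(combo(t))
    assert abs(deriv[0] - field[0]) < 1e-5
    assert abs(deriv[1] - field[1]) < 1e-5
```

The check succeeds for any λ, mirroring the fact that S0 is closed under arbitrary linear combinations.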

What does “autonomous” mean here, and how does it change the picture of solutions?

Autonomous means A(t) and B(t) are constant (no t-dependence). Then the vector field depends only on X, not on time, making phase portraits easier to interpret. In the 2×2 example, the origin acts as a fixed point, and nearby trajectories form closed circular orbits, indicating periodic solutions.

How does Picard–Lindelöf uniqueness help determine the dimension of S0?

Uniqueness ensures that two solutions that share the same initial value X(t0)=x0 must coincide for all t in I. This property prevents “crossings” of solution curves in the extended phase portrait. That injectivity is essential when proving that the projection map L(α)=α(t0) is one-to-one.

What is the linear map L used in the dimension argument, and why is it an isomorphism?

Define L:S0→R^n by L(α)=α(t0). It is linear because evaluating at a fixed time respects addition and scalar multiplication. It is surjective because for every x0 in R^n, the initial value problem has a (unique) solution with α(t0)=x0. It is injective because if L(α)=L(β), then α(t0)=β(t0), so uniqueness forces α(t)=β(t) for all t. Therefore L is a linear isomorphism.

What does dim(S0)=n mean operationally for solving homogeneous linear ODEs?

It means S0 has a basis of n linearly independent solutions. Once those n solutions are known, any other homogeneous solution can be written as a linear combination of them, so the infinite-dimensional “space of functions” collapses to a finite-dimensional parameterization.
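This finite parameterization can be made concrete. For the assumed example x′ = y, y′ = −x (hypothetical, as before), the basis solutions φ1(t) = (cos t, −sin t) and φ2(t) = (sin t, cos t) satisfy φ1(0) = e1 and φ2(0) = e2, so the solution through any x0 = (a, b) is simply a·φ1 + b·φ2; a small RK4 integration of the same initial value problem serves as a cross-check:

```python
import math

# Basis of S0 at t0 = 0 for the assumed system x' = y, y' = -x:
#   phi1(t) = (cos t, -sin t),  phi1(0) = (1, 0)
#   phi2(t) = (sin t,  cos t),  phi2(0) = (0, 1)
def phi1(t):
    return (math.cos(t), -math.sin(t))

def phi2(t):
    return (math.sin(t), math.cos(t))

def solve(x0):
    # Since phi_i(0) = e_i, the coefficients are the components of x0;
    # by uniqueness, this combination IS the solution through x0.
    a, b = x0
    def sol(t):
        p, q = phi1(t), phi2(t)
        return (a * p[0] + b * q[0], a * p[1] + b * q[1])
    return sol

# Cross-check against a direct RK4 integration of the same IVP.
def rk4(x0, t_end, steps=10000):
    x, y = x0
    h = t_end / steps
    f = lambda x, y: (y, -x)          # the vector field X' = A X
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (x, y)

sol = solve((3.0, -2.0))
num = rk4((3.0, -2.0), 2.0)
exact = sol(2.0)
assert abs(num[0] - exact[0]) < 1e-6
assert abs(num[1] - exact[1]) < 1e-6
```

The two basis functions are the whole story: every homogeneous solution of this 2-dimensional system is determined by just two numbers, exactly as dim(S0) = n promises.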

Review Questions

  1. In the homogeneous system X′(t)=A(t)X(t), show directly using differentiation and matrix distributivity why α+β is a solution if α and β are solutions.
  2. Explain why the map L(α)=α(t0) must be injective, and identify which theorem provides the needed uniqueness.
  3. Why does dim(S0)=dim(R^n)=n follow once L is shown to be a linear isomorphism?

Key Points

  1. For linear first-order systems X′(t)=A(t)X(t)+B(t), standard continuity assumptions on A(t) and B(t) yield solutions defined on the whole interval I (often all of R).

  2. Autonomous systems have constant A and B, making phase portraits time-independent and easier to interpret.

  3. In the homogeneous case X′(t)=A(t)X(t), the set of solutions S0 is closed under addition and scalar multiplication, so S0 is a real vector space.

  4. For an n-dimensional homogeneous linear system, the solution space S0 has dimension n.

  5. The dimension result follows by defining L:S0→R^n via L(α)=α(t0) and using Picard–Lindelöf uniqueness to prove L is a linear isomorphism.

  6. Once n linearly independent homogeneous solutions are found, every homogeneous solution is their linear combination.

Highlights

Homogeneous linear ODE solutions form a vector space because differentiation and matrix multiplication distribute over sums and scalars.
Autonomous 2×2 examples can show fixed points and periodic orbits directly from the vector field’s phase portrait.
A projection map from solutions to their value at time t0 becomes a linear isomorphism, forcing dim(S0)=n.
Uniqueness of initial value problems prevents distinct solution curves from crossing, enabling the injectivity step.
