Ordinary Differential Equations 19 | Solution Space
Based on a video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
For systems of linear differential equations, the solution set has a rigid structure: in the homogeneous case, all solutions form an n-dimensional vector space. That fact matters because it turns an infinite-dimensional-looking problem (functions of time) into a finite one—once n linearly independent solutions are known, every other solution is a linear combination of them.
The discussion starts with first-order linear systems written in the form X′(t)=A(t)X(t)+B(t), where A(t) is an n×n matrix and B(t) is an n-dimensional vector. Under the usual continuity assumptions on A(t) and B(t), solutions exist on the whole interval I (often all of R). A special simplification appears for autonomous systems, where A(t) and B(t) are constant. In a concrete 2×2 example, the phase portrait reveals a fixed point at the origin and closed circular orbits around it. Those closed orbits correspond to periodic solutions, and the direction of motion (clockwise as time increases) is read directly from the vector field.
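The closed-orbit behaviour is easy to verify numerically. The matrix A below is an assumed illustration (the video's exact 2×2 example is not reproduced here); for constant A, the unique solution through x0 is exp(At)·x0, so one can check directly that orbits are circles, the motion is periodic, and the rotation is clockwise.

```python
import numpy as np
from scipy.linalg import expm

# Assumed 2x2 autonomous example (not necessarily the video's matrix):
# X' = A X with a rotational vector field circling the fixed point at 0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def solution(t, x0):
    # For constant A, the unique solution of X' = A X, X(0) = x0 is exp(At) x0.
    return expm(A * t) @ x0

x0 = np.array([1.0, 0.0])

# Closed circular orbits: the norm is preserved along the solution ...
assert all(np.isclose(np.linalg.norm(solution(t, x0)), 1.0)
           for t in np.linspace(0.0, 2 * np.pi, 9))
# ... the solution is periodic with period 2*pi ...
assert np.allclose(solution(2 * np.pi, x0), x0)
# ... and at (1, 0) the velocity A @ x0 = (0, -1) points downward: clockwise.
assert solution(0.1, x0)[1] < 0
```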
To handle the general (nonhomogeneous) case, the key move is to study the associated homogeneous system obtained by dropping the forcing term: X′(t)=A(t)X(t). The homogeneous structure brings linear-algebra tools to the foreground. If α(t) and β(t) are solutions, then α(t)+β(t) is also a solution because differentiation and matrix multiplication distribute over addition. Likewise, scaling a solution by any real number produces another solution. As a result, the set S0 of all continuously differentiable solutions to the homogeneous system forms a real vector space (indeed, a subspace of the function space).
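The closure properties can be sketched numerically. A constant random matrix stands in for A(t) here (an assumption for simplicity; the argument itself works for continuous A(t)): since the solution through x0 is exp(At)·x0, sums and scalar multiples of solutions are again solutions.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the vector-space structure of S0 for X' = A X.
# A constant 3x3 matrix is assumed here purely for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def solution(t, x0):
    # Unique solution of X' = A X with X(0) = x0.
    return expm(A * t) @ x0

a0, b0 = rng.standard_normal(3), rng.standard_normal(3)
t, c = 0.7, -2.5

# alpha(t) + beta(t) is itself the solution through a0 + b0 ...
assert np.allclose(solution(t, a0) + solution(t, b0), solution(t, a0 + b0))
# ... and c * alpha(t) is the solution through c * a0.
assert np.allclose(c * solution(t, a0), solution(t, c * a0))
print("closure under addition and scaling verified")
```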
The central theorem then pins down its dimension. For an n-dimensional linear system, S0 has dimension n. The reasoning uses the existence and uniqueness of solutions to initial value problems (via the Picard–Lindelöf theorem). Each initial condition X(t0)=x0 determines a unique solution curve in an “extended phase portrait” that includes time. To convert the dimension question into a linear-algebra statement, a linear map L is defined by projecting each solution α(t) to its value at a fixed time t0: L(α)=α(t0). Surjectivity follows because every x0 in R^n occurs as the value of some solution at t0. Injectivity follows from uniqueness: if two solutions agree at t0, they must coincide for all t in I. With L a linear isomorphism between S0 and R^n, the dimensions match, giving dim(S0)=dim(R^n)=n.
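The isomorphism argument can be made concrete in a constant-coefficient example (an assumption; the theorem holds for continuous A(t) too). The n solutions starting from the standard basis vectors form the columns of the fundamental matrix Φ(t)=exp(At); L is invertible because Φ(t0) is, so every solution is a unique linear combination of those n basis solutions.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of L(alpha) = alpha(t0) for an assumed 2x2 constant-coefficient system.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t0, t = 0.0, 1.3

def Phi(s):
    # Fundamental matrix: column j is the solution with Phi(t0) = identity.
    return expm(A * s)

x0 = np.array([1.0, -1.0])   # an arbitrary initial value, i.e. L(alpha) = x0
alpha_t = Phi(t) @ x0        # the unique solution through x0, evaluated at t

# alpha(t) is the combination of the two basis solutions weighted by x0:
basis_combo = x0[0] * Phi(t)[:, 0] + x0[1] * Phi(t)[:, 1]
assert np.allclose(alpha_t, basis_combo)

# L is bijective exactly because Phi(t0) is invertible (here Phi(t0) = I):
assert abs(np.linalg.det(Phi(t0))) > 1e-12
print("L is a linear isomorphism onto R^2 in this example")
```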
The takeaway is practical: for homogeneous linear ODE systems, finding n linearly independent solutions is enough to generate the entire solution space. The next step—set up for the following video—is to see how this plays out in an explicit example.
Cornell Notes
Linear first-order systems X′(t)=A(t)X(t)+B(t) have globally defined solutions on the interval I under standard continuity assumptions. In the homogeneous case X′(t)=A(t)X(t), the solution set S0 is a real vector space because sums and scalar multiples of solutions remain solutions. The crucial result is that for an n-dimensional system, S0 has dimension n: once n linearly independent homogeneous solutions are found, every homogeneous solution is their linear combination. The proof uses Picard–Lindelöf uniqueness to show that the projection map L:S0→R^n given by L(α)=α(t0) is a linear isomorphism, so dim(S0)=dim(R^n)=n.
Why does the homogeneous system’s solution set form a vector space?
What does “autonomous” mean here, and how does it change the picture of solutions?
How does Picard–Lindelöf uniqueness help determine the dimension of S0?
What is the linear map L used in the dimension argument, and why is it an isomorphism?
What does dim(S0)=n mean operationally for solving homogeneous linear ODEs?
Review Questions
- In the homogeneous system X′(t)=A(t)X(t), show directly using differentiation and matrix distributivity why α+β is a solution if α and β are solutions.
- Explain why the map L(α)=α(t0) must be injective, and identify which theorem provides the needed uniqueness.
- Why does dim(S0)=dim(R^n)=n follow once L is shown to be a linear isomorphism?
Key Points
1. For linear first-order systems X′(t)=A(t)X(t)+B(t), standard continuity assumptions on A(t) and B(t) yield solutions defined on the whole interval I (often all of R).
2. Autonomous systems have constant A and B, making phase portraits time-independent and easier to interpret.
3. In the homogeneous case X′(t)=A(t)X(t), the set of solutions S0 is closed under addition and scalar multiplication, so S0 is a real vector space.
4. For an n-dimensional homogeneous linear system, the solution space S0 has dimension n.
5. The dimension result follows by defining L:S0→R^n via L(α)=α(t0) and using Picard–Lindelöf uniqueness to prove L is a linear isomorphism.
6. Once n linearly independent homogeneous solutions are found, every homogeneous solution is their linear combination.
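As a one-dimensional sanity check of the last point (an illustration, not from the source): for n = 1 the system reduces to a scalar equation, and a single nonzero solution already spans S0.

```latex
x'(t) = a\,x(t)
\quad\Longrightarrow\quad
x(t) = c\,e^{at},\ c \in \mathbb{R},
\qquad\text{so } S_0 = \operatorname{span}\{e^{at}\},\quad \dim S_0 = 1 = n.
```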