Abstract Linear Algebra 31 | Solutions for Linear Equations
Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creators by watching, liking, and subscribing to their content.
A linear equation L(X) = B has a solution iff B lies in range(L).
Briefing
Solving a linear equation in an abstract setting boils down to two geometric objects: the range (for whether solutions exist) and the kernel (for whether solutions are unique). For a linear map L: V → W between vector spaces over a field F (real or complex), the equation L(X) = B asks for all X ∈ V that land on a fixed target vector B ∈ W. Existence is immediate: if B is not in the range of L, there are no solutions. Uniqueness is equally tied to structure: if the kernel of L contains anything beyond the zero vector, then multiple solutions appear because different kernel elements can be added without changing the output.
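In finite dimensions the existence criterion is checkable numerically: pick any matrix representation A of L, and B lies in range(L) exactly when appending b as a column to A does not increase the rank. A minimal sketch with a hypothetical 3×2 example (the matrix and vectors below are illustrative, not from the source):

```python
import numpy as np

# Represent L by a matrix A, so L(X) = B becomes A @ x = b.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])  # maps R^2 -> R^3, so range(L) is a proper subspace of R^3

def in_range(A, b):
    """b lies in range(A) iff rank([A | b]) == rank(A)."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_hit = A @ np.array([1.0, 1.0])    # built as a combination of A's columns
b_miss = np.array([1.0, 0.0, 0.0])  # not reachable: no solution exists

print(in_range(A, b_hit))   # True
print(in_range(A, b_miss))  # False
```

The rank comparison is just the abstract criterion in coordinates: the columns of A span range(L), and b enlarges that span precisely when it lies outside it.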
In this framework, the kernel is defined as ker(L) = {v ∈ V : L(v) = 0}, meaning it is exactly the solution set of the homogeneous equation L(X) = 0. The range is defined as range(L) = {w ∈ W : ∃x ∈ V with L(x) = w}, i.e., the set of all right-hand sides that the map can actually hit. Both ker(L) and range(L) are subspaces, which is what makes the solution set behave predictably.
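Because ker(L) is the solution set of the homogeneous equation, it can be computed explicitly for a matrix representation. A sketch using the SVD (the example matrix is a made-up rank-1 illustration): the right-singular vectors belonging to zero singular values span the null space.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so ker(A) is 2-dimensional

def null_space(A, tol=1e-10):
    """Columns of the result form an orthonormal basis of ker(A)."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # right-singular vectors for (numerically) zero singular values

K = null_space(A)
print(K.shape[1])               # 2, the nullity
print(np.allclose(A @ K, 0.0))  # True: every kernel vector solves A @ x = 0
```

That A @ K is (numerically) zero is exactly the defining property ker(L) = {v : L(v) = 0} checked in coordinates.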
Once at least one particular solution x0 exists (so B ∈ range(L)), every solution has the same form as in the matrix case: the full solution set is x0 + ker(L) = {x0 + v : v ∈ ker(L)}. The logic is straightforward. If x0 solves L(x) = B, then for any v in the kernel, linearity gives L(x0 + v) = L(x0) + L(v) = B + 0 = B, so x0 + v is also a solution. Conversely, if x is any solution, then subtracting the outputs forces L(x − x0) = 0, so x − x0 must lie in the kernel. This equivalence shows both the shape of the solution set and why uniqueness happens exactly when ker(L) is trivial.
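The coset structure x0 + ker(L) can be verified numerically. A sketch with a hypothetical underdetermined system (two equations, three unknowns, chosen so b is in the range): one particular solution plus any kernel multiple still maps to b.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

# A particular solution x0 (least squares returns an exact solution here,
# since A is surjective and b is therefore in the range).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x0, b)

# A kernel vector: A @ v = 0, so v = t * (1, -1, 1) for any scalar t.
v = np.array([1.0, -1.0, 1.0])
assert np.allclose(A @ v, 0.0)

for t in (0.0, 1.0, -3.5):
    x = x0 + t * v                 # an element of x0 + ker(L) ...
    print(np.allclose(A @ x, b))   # ... solves the same equation: True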
The discussion then connects these ideas to the rank–nullity theorem, a central dimension-counting result for finite-dimensional vector spaces. When V is finite-dimensional, the theorem states that dim(range(L)) + dim(ker(L)) = dim(V). Here dim(range(L)) is the rank of L, and dim(ker(L)) is the nullity. The key point is that the proof can be transferred from matrix theory: choosing bases turns L into a matrix, where the familiar rank–nullity statement holds, and the dimensions involved do not depend on which matrix representation (i.e., which bases) is chosen. That invariance is what lets the theorem hold in the abstract linear-map setting without committing to any particular coordinates.
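Rank–nullity can likewise be checked on a matrix representation. A sketch with a made-up 3×4 rank-2 matrix: the full SVD yields both the rank (number of nonzero singular values) and an explicit orthonormal basis of the kernel (the trailing right-singular vectors), so the two dimensions are computed independently before being summed.

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 2.0, 0.0]])  # row 3 = row 1 + row 2, so rank 2; dim(V) = 4

_, s, vt = np.linalg.svd(A)       # full SVD: vt is 4 x 4
rank = int(np.sum(s > 1e-10))     # dim(range(L)), from nonzero singular values
kernel_basis = vt[rank:]          # remaining rows of vt span ker(L)
nullity = kernel_basis.shape[0]   # dim(ker(L))

assert np.allclose(A @ kernel_basis.T, 0.0)  # confirm these really are kernel vectors
print(rank, nullity)                 # 2 2
print(rank + nullity == A.shape[1])  # True: rank + nullity = dim(V) = 4
```

Changing the bases changes A but not the singular-value count or the null-space dimension, which mirrors the invariance argument in the text.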
Overall, the method replaces brute-force solving with structural criteria: check whether B lies in the range to decide existence, inspect whether the kernel is trivial to decide uniqueness, and use x0 + ker(L) to describe all solutions when they exist. The dimension relationship then quantifies how much freedom the kernel provides and how much of W the map can reach.
Cornell Notes
For a linear map L: V → W and a fixed vector B ∈ W, the equation L(X) = B has solutions exactly when B lies in the range of L. If at least one solution x0 exists, then every solution is obtained by adding any kernel element: {X ∈ V : L(X) = B} = x0 + ker(L). Uniqueness occurs precisely when ker(L) is trivial (only the zero vector), since nonzero kernel elements generate distinct solutions with the same output. In finite-dimensional settings, the rank–nullity theorem links these objects by dim(range(L)) + dim(ker(L)) = dim(V), where rank = dim(range(L)) and nullity = dim(ker(L)). This extends the familiar matrix results to general linear maps via basis-dependent matrix representations.
- How can existence of solutions to L(X) = B be decided without solving for X directly?
- Why does a nontrivial kernel automatically destroy uniqueness?
- What is the exact form of the full solution set once one solution x0 is known?
- How do kernel and range relate to subspaces, and why does that matter?
- What does the rank–nullity theorem say in this abstract linear-map setting?
Review Questions
- Given L: V → W and B ∈ W, what two conditions determine whether L(X) = B has no solutions, exactly one solution, or infinitely many solutions?
- If x0 is one solution to L(X) = B, prove (using linearity) that every vector of the form x0 + v with v ∈ ker(L) is also a solution.
- How does rank–nullity quantify the tradeoff between the size of ker(L) and the size of range(L) when V is finite-dimensional?
Key Points
1. A linear equation L(X) = B has a solution iff B lies in range(L).
2. The kernel ker(L) is the solution set of the homogeneous equation L(X) = 0.
3. If one particular solution x0 exists, then all solutions are exactly x0 + ker(L).
4. Uniqueness holds exactly when ker(L) is trivial, i.e., ker(L) = {0}.
5. ker(L) and range(L) are subspaces of V and W respectively, which forces the solution set to have affine-subspace structure.
6. In finite-dimensional spaces, dim(range(L)) + dim(ker(L)) = dim(V), linking existence/uniqueness structure to dimension counts via rank and nullity.