Linear Algebra 26 | Steinitz Exchange Lemma [dark version]
Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
The Steinitz exchange lemma guarantees that any two bases of the same subspace U have the same number of vectors, making “dimension” well-defined.
Briefing
The Steinitz exchange lemma is the key tool for making “dimension” well-defined: it guarantees that any two bases of the same subspace contain the same number of vectors. That matters because the usual intuition (“a line needs one vector, a plane needs two”) only works if the count of basis vectors cannot change when a different basis is chosen. The lemma formalizes this by showing that new vectors can be swapped into a basis without altering its size.
The setup starts with a subspace U of R^N and a fixed basis B = {v1, v2, …, vK}. Then a family A = {a1, a2, …, aL} of vectors is introduced, all lying in U and assumed to be linearly independent. The exchange lemma asserts that one can build a new basis of U by taking all L vectors from the independent family A and adding exactly K − L vectors chosen from the original basis B (in particular, L ≤ K must hold). In other words, the L “new” vectors replace L of the old ones, leaving the total number of basis vectors unchanged at K.
To make the mechanism concrete, the proof idea is demonstrated first in the simplest nontrivial case L = 1. Here there is a single vector a1; linear independence of the one-element family {a1} simply means a1 ≠ 0. Because B spans U, the vector a1 must be expressible as a linear combination of the basis vectors: a1 = λ1 v1 + … + λK vK, and since a1 ≠ 0, not all λj can be zero. This immediately implies that the combined family B ∪ {a1} is linearly dependent: adding any further vector of U to a spanning basis forces dependence.
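As a concrete numerical sketch (the subspace, basis, and vector below are illustrative choices, not from the video), the coordinates λ of a1 in the basis B can be recovered with numpy's least-squares solver:

```python
import numpy as np

# Illustrative example: U ⊂ R^3 spanned by K = 2 basis vectors.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([v1, v2])      # columns are the basis vectors of U

a1 = 2 * v1 - 3 * v2               # some vector in U; a1 ≠ 0, so {a1} is independent

# Since B is a basis of U and a1 ∈ U, a1 = λ1 v1 + λ2 v2 for unique λ.
lam = np.linalg.lstsq(B, a1, rcond=None)[0]   # here lam ≈ [2, -3]

assert np.allclose(B @ lam, a1)    # the coordinates reproduce a1
assert not np.allclose(lam, 0)     # not all λ are zero, because a1 ≠ 0
```

Because a1 lies exactly in the column space of B, the least-squares solution is the exact coordinate vector.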
The next step is to identify a basis vector that can be removed without losing the ability to represent every vector in U. Choose an index j with λj ≠ 0. Rearranging the linear combination expresses vj in terms of a1 and the other basis vectors. This “solves for” vj and produces a new family C, obtained from B by replacing vj with a1. The proof then checks two properties for C: linear independence and spanning.
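Written out, the rearrangement is a one-line computation; since λj ≠ 0, one may divide by it:

```latex
a_1 = \sum_{i=1}^{K} \lambda_i v_i
\qquad\Longrightarrow\qquad
v_j = \frac{1}{\lambda_j}\Bigl( a_1 - \sum_{i \neq j} \lambda_i v_i \Bigr)
```

so the candidate new family is C = (B \ {vj}) ∪ {a1}.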
For linear independence, consider any linear combination of the vectors in C that equals the zero vector. The coefficient of a1 must be zero: otherwise one could solve for a1 and obtain a representation of a1 that uses only the vectors of B other than vj, i.e. a representation in B whose coefficient on vj is zero, which contradicts λj ≠ 0 and the uniqueness of coordinates with respect to the basis B. Once that coefficient is forced to vanish, the remaining vectors come from B and are already linearly independent, so all remaining coefficients must be zero as well.
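The independence argument can be written as a short derivation. Suppose some combination of the vectors of C vanishes:

```latex
% a vanishing combination over C = (B \setminus \{v_j\}) \cup \{a_1\}:
\mu\, a_1 + \sum_{i \neq j} \mu_i v_i = 0 .
% If \mu \neq 0, solving for a_1 would give
a_1 = \sum_{i \neq j} \Bigl( -\frac{\mu_i}{\mu} \Bigr) v_i ,
% a representation of a_1 in B with coefficient 0 on v_j,
% contradicting \lambda_j \neq 0 and uniqueness of coordinates.
```

Hence μ = 0, and the remaining equation involves only the independent vectors of B, forcing every μi = 0.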
For spanning, take any vector u in U. Since B is a basis, u can be written as a combination of the v’s. Wherever vj appears, substitute the earlier expression for vj in terms of a1 and the other v’s. This yields a representation of u using only vectors from C. Thus C is both linearly independent and spanning, so it is a basis of U.
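Both checks can be verified numerically with rank computations. The following self-contained numpy sketch (same kind of illustrative subspace as before, not the video's example) performs one exchange and confirms that the result is again a basis of U:

```python
import numpy as np

# Illustrative example: U ⊂ R^3 with basis B = {v1, v2}.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([v1, v2])

a1 = 2 * v1 - 3 * v2                        # a1 ∈ U, a1 ≠ 0
lam = np.linalg.lstsq(B, a1, rcond=None)[0] # coordinates of a1 in B

j = int(np.argmax(np.abs(lam)))             # pick an index j with λj ≠ 0
C = B.copy()
C[:, j] = a1                                # the exchange: replace vj by a1

# Linear independence: the K columns of C have full rank K.
assert np.linalg.matrix_rank(C) == B.shape[1]
# Spanning: putting B's columns next to C's adds no new directions,
# so C spans the same subspace U that B does.
assert np.linalg.matrix_rank(np.column_stack([B, C])) == np.linalg.matrix_rank(B)
```

The rank test is a numerical stand-in for the two hand checks: full column rank means independence, and the unchanged rank of the stacked matrix means the spans coincide.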
Finally, the argument extends beyond L = 1: exchanging multiple vectors is handled by repeating the same one-step exchange, swapping in a1, a2, … one at a time; at each step the independence of the a’s guarantees that some not-yet-replaced basis vector carries a nonzero coefficient and can be exchanged. The payoff is decisive: every basis of U has the same number of elements, setting the stage for defining the dimension of a subspace in the next step of the course.
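The iterative version can be sketched as a loop that swaps in one a_i per pass, each time replacing a basis vector that carries a nonzero coefficient and has not itself been swapped in. This is only a hedged illustration under floating-point tolerance; the function name `steinitz_exchange` and the example data are made up, not from the course:

```python
import numpy as np

def steinitz_exchange(B, A):
    """Sketch: B's columns form a basis of a subspace, A's columns are
    linearly independent vectors of that subspace. Returns a new basis
    containing every column of A plus K - L columns of B. Illustrative
    only: no error handling, tolerance 1e-10 is an assumption."""
    C = B.astype(float).copy()
    swapped = np.zeros(B.shape[1], dtype=bool)      # slots already holding an a_i
    for a in A.T:
        lam = np.linalg.lstsq(C, a, rcond=None)[0]  # coordinates of a in current C
        # Independence of the a's guarantees some not-yet-swapped slot
        # has a nonzero coefficient; pick the first such slot.
        candidates = np.where(~swapped & (np.abs(lam) > 1e-10))[0]
        j = candidates[0]
        C[:, j] = a
        swapped[j] = True
    return C

# Example: U = R^3 with the standard basis, swap in L = 2 independent vectors.
B = np.eye(3)
A = np.column_stack([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
C = steinitz_exchange(B, A)
assert np.linalg.matrix_rank(C) == 3   # still a basis: size K = 3 is preserved
```

Each pass is exactly the L = 1 exchange from the proof, which is why the basis size K never changes.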
Cornell Notes
The Steinitz exchange lemma ensures that the size of a basis for a subspace is fixed. Starting with a subspace U ⊆ R^N and a basis B = {v1, …, vK}, the lemma considers a linearly independent set A = {a1, …, aL} inside U. It guarantees that replacing L vectors from B with the L vectors of A produces a new basis of U, consisting of all of A together with exactly K − L remaining vectors from B. The proof is illustrated for L = 1 by expressing a1 as a linear combination of the v’s, choosing a nonzero coefficient, and solving for the corresponding vj in terms of the others and a1. The resulting set is shown to be linearly independent and spanning, hence a basis. This invariance of basis size makes “dimension” well-defined.
- Why does the lemma matter for defining dimension of a subspace?
- In the L = 1 case, why is the family B ∪ {a1} automatically linearly dependent?
- How does choosing an index j with λj ≠ 0 enable the exchange?
- What prevents the new set C from being linearly dependent in the L = 1 proof?
- How does spanning work after the exchange?
- How does the argument extend from L = 1 to general L?
Review Questions
- What exact statement does the Steinitz exchange lemma make about the number of vectors in bases of the same subspace?
- In the L = 1 proof, how does solving for vj from the equation for a1 lead to a candidate new basis C?
- Why does uniqueness of coordinates in the original basis B rule out a nonzero coefficient for a1 when proving linear independence of C?
Key Points
1. The Steinitz exchange lemma guarantees that any two bases of the same subspace U have the same number of vectors, making “dimension” well-defined.
2. Given a basis B = {v1, …, vK} of U and a linearly independent set A = {a1, …, aL} in U, the lemma allows constructing a new basis using all vectors in A plus exactly K − L vectors from B.
3. In the L = 1 case, a1 must be expressible as a linear combination of the basis vectors because B spans U.
4. If a1 = Σ λi vi and some λj ≠ 0, then vj can be rewritten in terms of a1 and the remaining basis vectors, enabling the “exchange.”
5. The exchanged set C is proven linearly independent by showing that any vanishing combination would contradict the uniqueness of the representation of a1 in the basis B.
6. Spanning after the exchange follows by rewriting any u ∈ U using B and substituting the expression for vj in terms of a1 and the remaining vectors.
7. For L > 1, the exchange can be repeated, preserving the total basis size K while swapping in all vectors from the independent set A.