Linear Algebra 26 | Steinitz Exchange Lemma [dark version]

6 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The Steinitz exchange lemma guarantees that any two bases of the same subspace U have the same number of vectors, making “dimension” well-defined.

Briefing

The Steinitz exchange lemma is the key tool for making “dimension” well-defined: it guarantees that any two bases of the same subspace contain the same number of vectors. That matters because the usual intuition—“a line needs one vector, a plane needs two”—only works if the count of basis vectors cannot change when a different basis is chosen. The lemma formalizes this by showing that linearly independent vectors can be swapped into an existing basis without altering its size.

The setup starts with a subspace U of R^N and a fixed basis B = {v1, v2, …, vK}. Then a new family A = {a1, a2, …, aL} is introduced, all vectors lying in U and assumed to be linearly independent (so necessarily L ≤ K). The exchange lemma claims that one can build a new basis of U by taking all L vectors from the independent family A and adding exactly K − L vectors chosen from the original basis B. In other words, L “new” vectors can replace L of the old ones, leaving the total number of basis vectors unchanged at K.

To make the mechanism concrete, the proof idea is demonstrated first in the simplest nontrivial case L = 1. Here the family consists of a single vector a1, which is linearly independent precisely when a1 ≠ 0. Because B already spans U, the vector a1 must be expressible as a linear combination of the basis vectors: a1 = λ1 v1 + … + λK vK, with not all λj equal to zero (since a1 ≠ 0). This immediately implies that the combined family B ∪ {a1} is linearly dependent—adding a1 to a spanning basis forces dependence.
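This forced dependence can be checked numerically. The following is a minimal sketch in plain Python; the basis and the vector a1 are illustrative choices, not taken from the video:

```python
from fractions import Fraction as F

# Illustrative vectors (not from the video): B = {v1, v2, v3} is the
# standard basis of R^3 and a1 is a nonzero vector in U = R^3.
v1, v2, v3 = [F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]
a1 = [F(2), F(3), F(0)]  # a1 = 2*v1 + 3*v2 + 0*v3, so lambda = (2, 3, 0)

# The nontrivial combination 2*v1 + 3*v2 + 0*v3 - 1*a1 equals the zero
# vector, which is exactly the dependence forced on B ∪ {a1}.
combo = [2*x + 3*y + 0*z - w for x, y, z, w in zip(v1, v2, v3, a1)]
print(combo == [0, 0, 0])  # True
```

Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise, so the zero vector comes out exactly zero.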

The next step is to identify a basis vector that can be removed without losing the ability to represent everything in U. Choose an index j where λj ≠ 0. By rearranging the linear combination, vj can be written in terms of the other basis vectors and a1. This “solves for” vj and produces a new family C obtained by replacing vj with a1. The proof then checks two properties for C: linear independence and spanning.
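The exchange step itself can be sketched mechanically. In this illustrative 3×3 example (vectors chosen for the demonstration, and `det3` a small hand-rolled helper), a nonzero determinant certifies that the exchanged family is still linearly independent:

```python
from fractions import Fraction as F

def det3(m):
    # Determinant of a 3x3 matrix given as a list of three rows.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

v1, v2, v3 = [F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]
a1  = [F(2), F(3), F(0)]   # a1 = 2*v1 + 3*v2 + 0*v3
lam = [F(2), F(3), F(0)]   # coordinates of a1 in the basis B

# Pick any index j with lam[j] != 0 (here j = 0) and replace v_j by a1.
j = next(k for k, l in enumerate(lam) if l != 0)
C = [a1 if k == j else v for k, v in enumerate([v1, v2, v3])]

# A nonzero determinant certifies that the three vectors of C are
# linearly independent, hence C is again a basis of R^3.
print(det3(C))  # prints 2 (nonzero)
```

Three linearly independent vectors in R^3 automatically span, so the determinant test covers both basis properties in this full-space example.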

For linear independence, any linear combination of vectors in C that equals the zero vector must have the coefficient of a1 equal to zero; otherwise one could isolate a1 and obtain an expression for a1 that omits vj, contradicting the uniqueness of the coordinates of a1 in the basis B, where the coefficient of vj is the nonzero λj. Once the coefficient of a1 is forced to vanish, the remaining vectors come from B and are already linearly independent, so all coefficients must be zero.

For spanning, take any vector u in U. Since B is a basis, u can be written using the v’s. Whenever vj appears, substitute the earlier expression for vj in terms of the other v’s and a1. This yields a representation of u using only vectors from C. Thus C is both linearly independent and spanning, so it is a basis.
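The substitution argument can also be traced numerically. This sketch reuses the illustrative vectors from above; the coefficient formulas are worked out by hand for this specific case, where a1 = 2*v1 + 3*v2 gives v1 = (1/2)*a1 − (3/2)*v2:

```python
from fractions import Fraction as F

v1, v2, v3 = [F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]
a1 = [F(2), F(3), F(0)]   # a1 = 2*v1 + 3*v2, hence v1 = (1/2)*a1 - (3/2)*v2

mu = [F(5), F(-1), F(4)]  # arbitrary coordinates of some u in the basis B
u = [mu[0]*x + mu[1]*y + mu[2]*z for x, y, z in zip(v1, v2, v3)]

# Substituting v1 = (1/2)*a1 - (3/2)*v2 into u = mu1*v1 + mu2*v2 + mu3*v3
# gives u = (mu1/2)*a1 + (mu2 - (3/2)*mu1)*v2 + mu3*v3, built only from C.
c_a1, c_v2, c_v3 = mu[0] / 2, mu[1] - F(3, 2) * mu[0], mu[2]
u_from_C = [c_a1*a + c_v2*y + c_v3*z for a, y, z in zip(a1, v2, v3)]
print(u == u_from_C)  # True: every u in U is reachable from C as well
```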

Finally, the argument extends beyond L = 1: exchanging multiple vectors can be handled by reusing the same logic iteratively. The payoff is decisive—every basis of U has the same number of elements—setting the stage for defining the dimension of a subspace in the next step of the course.

Cornell Notes

The Steinitz exchange lemma ensures that the size of a basis for a subspace is fixed. Starting with a subspace U ⊆ R^N and a basis B = {v1, …, vK}, the lemma considers a linearly independent set A = {a1, …, aL} inside U. It guarantees that replacing L vectors from B with the L vectors from A produces a new basis of U, using exactly K − L remaining vectors from B. The proof is illustrated for L = 1 by expressing a1 as a linear combination of the v’s, choosing a nonzero coefficient, and solving for the corresponding vj in terms of the others and a1. The resulting set is shown to be linearly independent and spanning, hence a basis. This invariance of basis size makes “dimension” well-defined.

Why does the lemma matter for defining dimension of a subspace?

Dimension is intended to be the number of vectors in a basis. But a subspace can have many different bases, and without a theorem like Steinitz exchange, the basis size could in principle change. The lemma prevents that: any basis of U has the same number of elements because linearly independent vectors can be exchanged with basis vectors while preserving the total count.

In the L = 1 case, why is the family B ∪ {a1} automatically linearly dependent?

Because B is already a basis of U, it spans U. Since a1 lies in U, it can be written as a linear combination of the basis vectors: a1 = λ1 v1 + … + λK vK. That means a1 is not “new” in a spanning sense; adding it to a spanning set forces a linear dependence among the combined vectors.

How does choosing an index j with λj ≠ 0 enable the exchange?

From a1 = Σ λi vi, if some λj ≠ 0, the equation can be rearranged to solve for vj: vj = (1/λj) a1 − Σ(i≠j) (λi/λj) vi. This shows vj is redundant once a1 is included, so vj can be removed while keeping the ability to represent vectors in U.
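This rearrangement is easy to write as a small general helper. A sketch follows; `solve_for_vj` is an illustrative name and the 2×2 vectors are made up for the demonstration:

```python
from fractions import Fraction as F

def solve_for_vj(lam, basis, a1, j):
    # Given a1 = sum_i lam[i]*basis[i] with lam[j] != 0, return v_j
    # expressed through a1 and the other basis vectors:
    #   v_j = (1/lam[j])*a1 - sum_{i != j} (lam[i]/lam[j])*basis[i]
    n = len(a1)
    vj = [a1[k] / lam[j] for k in range(n)]
    for i, v in enumerate(basis):
        if i != j:
            for k in range(n):
                vj[k] -= (lam[i] / lam[j]) * v[k]
    return vj

basis = [[F(1), F(0)], [F(1), F(1)]]                  # illustrative basis of R^2
lam = [F(2), F(3)]                                    # a1 = 2*v1 + 3*v2
a1 = [2*basis[0][k] + 3*basis[1][k] for k in range(2)]
print(solve_for_vj(lam, basis, a1, 0) == basis[0])    # True: v1 is recovered
```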

What prevents the new set C from being linearly dependent in the L = 1 proof?

Assume a linear combination of vectors in C equals the zero vector. If the coefficient of a1 were nonzero, one could divide by it and obtain an alternative expression for a1 that does not involve vj. But the original coordinates of a1 relative to the basis B are unique, and vj was essential in that expression. This contradiction forces the coefficient of a1 to be zero, and then the remaining v’s are linearly independent, forcing all coefficients to be zero.

How does spanning work after the exchange?

Take any u ∈ U. Since B spans U, u can be written using v1, …, vK. If vj appears, substitute the earlier formula expressing vj in terms of a1 and the other v’s. The result is a representation of u using only the vectors in C, proving C spans U.

How does the argument extend from L = 1 to general L?

The same exchange idea can be applied repeatedly: each time another vector from A is added, one of the remaining original basis vectors can be removed without breaking the basis properties. (A nonzero coefficient can always be found on one of the original vectors: if the new a could be expressed using only the previously inserted a’s, the family A would not be linearly independent.) Doing this L times yields a new basis containing all vectors from A and exactly K − L vectors from the original basis B.
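The iteration can be traced on a small worked example. This is a sketch with illustrative vectors (K = 3, L = 2); the coordinates at each step are computed by hand and recorded in the comments, and `det3` is a hand-rolled determinant helper used to certify independence:

```python
from fractions import Fraction as F

def det3(m):
    # Determinant of a 3x3 matrix given as a list of three rows.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Illustrative data: B is the standard basis of R^3 and A = {a1, a2}
# is a linearly independent family.
v1, v2, v3 = [F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]
a1, a2 = [F(1), F(1), F(0)], [F(0), F(1), F(1)]

# Exchange 1: a1 = 1*v1 + 1*v2 + 0*v3; the coefficient of v1 is nonzero,
# so v1 leaves and a1 enters.
basis = [a1, v2, v3]
assert det3(basis) != 0

# Exchange 2: in the current basis, a2 = 0*a1 + 1*v2 + 1*v3; among the
# remaining *original* vectors the coefficient of v2 is nonzero (it cannot
# vanish on all of them, since A is independent), so v2 leaves, a2 enters.
basis = [a1, a2, v3]
assert det3(basis) != 0

# All of A is in, exactly K - L = 1 original vector (v3) remains, size K = 3.
print(len(basis))  # 3
```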

Review Questions

  1. What exact statement does the Steinitz exchange lemma guarantee about the number of vectors in bases of the same subspace?
  2. In the L = 1 proof, how does solving for vj from the equation for a1 lead to a candidate new basis C?
  3. Why does uniqueness of coordinates in the original basis B rule out a nonzero coefficient for a1 when proving linear independence of C?

Key Points

  1. The Steinitz exchange lemma guarantees that any two bases of the same subspace U have the same number of vectors, making “dimension” well-defined.
  2. Given a basis B = {v1, …, vK} of U and a linearly independent set A = {a1, …, aL} in U, the lemma allows constructing a new basis using all vectors in A plus exactly K − L vectors from B.
  3. In the L = 1 case, a1 must be expressible as a linear combination of the basis vectors because B spans U.
  4. If a1 = Σ λi vi and some λj ≠ 0, then vj can be rewritten in terms of a1 and the remaining basis vectors, enabling the “exchange.”
  5. The exchanged set C is proven to be linearly independent by showing any zero combination would contradict the uniqueness of the original representation of a1 in basis B.
  6. Spanning after the exchange follows by rewriting any u ∈ U using B and substituting the expression for vj in terms of a1 and the remaining vectors.
  7. For L > 1, the exchange can be repeated, preserving the total basis size K while swapping in all vectors from the independent set A.

Highlights

  • The Steinitz exchange lemma is the mechanism that locks the basis size of a subspace in place, which is exactly what a notion of dimension needs.
  • For L = 1, the proof hinges on expressing a1 in the original basis and then solving for a basis vector vj with a nonzero coefficient.
  • The new set is shown to be a basis by proving both linear independence (via uniqueness of coordinates) and spanning (via substitution).
  • The general case works by reusing the same exchange logic multiple times, swapping in L independent vectors while keeping the total count fixed.

Topics

  • Steinitz Exchange Lemma
  • Basis Size Invariance
  • Dimension of Subspaces
  • Linear Independence
  • Linear Combination Proof