Abstract Linear Algebra 4 | Basis, Linear Independence, Generating Sets [dark version]
Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A basis in abstract linear algebra is defined as the “sweet spot” between spanning and uniqueness: it generates a subspace while keeping linear combinations unique. That framing matters because it lets mathematicians describe subspaces efficiently (with as few vectors as possible) and measure their size via dimension—even when the subspace is infinite-dimensional.
The discussion starts by extending familiar linear-algebra operations from ℝ^n to a general vector space V over a field F. A general linear combination of vectors v1 through vk in V uses scalars λ1 through λk from F and forms a finite sum λ1 v1 + ⋯ + λk vk. The finiteness is emphasized: even when infinite sets appear later, “linear combination” always means a finite number of vectors.
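As a concrete sketch of the definition above (the name `linear_combination` is illustrative, not from the video), a finite linear combination in ℝ^3 can be computed with plain Python tuples:

```python
def linear_combination(scalars, vectors):
    """Return the finite sum lambda_1*v_1 + ... + lambda_k*v_k.

    Vectors are tuples of numbers; the sum always runs over finitely
    many terms, matching the definition in the text.
    """
    assert len(scalars) == len(vectors)
    n = len(vectors[0])
    return tuple(sum(lam * v[i] for lam, v in zip(scalars, vectors))
                 for i in range(n))

# Example: 2*(1,0,0) + 3*(0,1,0) = (2, 3, 0)
print(linear_combination([2, 3], [(1, 0, 0), (0, 1, 0)]))  # (2, 3, 0)
```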
Next comes span. For any subset M of V, span(M) is the set of all vectors obtainable as finite linear combinations of elements of M. Span(M) is always a subspace of V, and the empty set is handled so that span(∅) becomes the smallest subspace, containing only the zero vector. With span in place, the notion of a generating set follows: a subset M generates a subspace U if span(M) equals U. This turns the task of describing U into checking whether every vector in U can be built from vectors in M.
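One way to make span membership computable, sketched here over the rationals with hypothetical helper names (`rank`, `in_span` are not from the transcript): a vector w lies in span(M) exactly when appending w to M does not increase the rank of the resulting list of row vectors.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of row vectors over Q and count the pivots."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def in_span(w, M):
    """w is in span(M) iff adding w does not increase the rank."""
    return rank(list(M)) == rank(list(M) + [list(w)])

print(in_span([2, 3, 0], [[1, 0, 0], [0, 1, 0]]))  # True
print(in_span([0, 0, 1], [[1, 0, 0], [0, 1, 0]]))  # False
```

Note that `in_span` with an empty M returns True only for the zero vector, matching span(∅) being the smallest subspace {0}.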
Linear independence is introduced as the counterpart that prevents redundancy. A set M is linearly independent if every vector produced by linear combinations of M has exactly one way to do so—equivalently, the only way to represent the zero vector is the trivial combination where all coefficients are zero. The definition allows M to be infinite, but still restricts attention to finite linear combinations.
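For the special case of n vectors written as coordinate rows in F^n, the uniqueness condition is equivalent to a nonzero determinant. The sketch below (the `det` helper is an illustration, not the video's method) checks the coefficient vectors of the monomials 1, X, X^2 used later for P2:

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Coefficient vectors (a0, a1, a2) of the monomials 1, X, X^2:
monomials = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(det(monomials) != 0)  # True: only the trivial combination gives 0
```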
Combining generating and independence yields the definition of a basis: a set M is a basis of a subspace U if it both generates U and is linearly independent. A subspace can have many different bases, but the number of vectors in any basis of U is always the same. That fixed size is the dimension of U, written dim(U). For finite-dimensional spaces, dimension is a natural number; for infinite-dimensional spaces, the dimension is treated as “infinity” rather than distinguishing different infinite cardinalities.
Concrete examples anchor the abstractions. For P0, the space of constant real polynomials, a basis can be taken as the single constant function X ↦ 1, giving dim(P0)=1. For P2, polynomials of degree at most 2, the monomials 1, X, and X^2 form a basis, so dim(P2)=3. The transcript also notes that the space of all polynomials (with no degree bound) is infinite-dimensional. Finally, it points to a matrix space: the vector space of complex-valued 2×3 matrices has dimension 6, and the task is to construct a linearly independent generating set (a basis) with six elements, highlighting how these definitions extend naturally to the finite-dimensional linear algebra used for computations like coordinates in the next installment.
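The six-element basis asked for at the end can be sketched as the standard matrix units Eij, each with a single 1 in position (i, j) and 0 elsewhere; this works over ℝ or ℂ alike when the complex matrices are viewed as a vector space over ℂ. The helper name `standard_basis` is an assumption, not from the transcript:

```python
def standard_basis(rows, cols):
    """Return the rows*cols matrix units E_ij, each a list of row lists."""
    basis = []
    for i in range(rows):
        for j in range(cols):
            E = [[1 if (r, c) == (i, j) else 0 for c in range(cols)]
                 for r in range(rows)]
            basis.append(E)  # a 1 in position (i, j), zeros elsewhere
    return basis

B = standard_basis(2, 3)
print(len(B))  # 6, matching dim = 2 * 3
```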
Cornell Notes
The core idea is that a basis is the most efficient way to describe a subspace: it both spans the subspace and avoids redundancy. Span(M) is the set of all vectors obtainable from finite linear combinations of elements of M, and it always forms a subspace. A set is linearly independent when the only way to form the zero vector is the trivial combination (all coefficients zero), which enforces uniqueness of coefficients. A basis is a set that is both generating (span(M)=U) and linearly independent, and the number of vectors in any basis is fixed; that number is the dimension dim(U). For finite-dimensional spaces, dim(U) is a natural number; for infinite-dimensional spaces, the transcript treats the dimension as “infinity.”
Why does the definition of a linear combination insist on finiteness, even when infinite sets appear later?
How do span(M) and generating sets relate to describing a subspace U?
What does linear independence guarantee about representations of vectors?
Why does the dimension of a subspace not depend on which basis is chosen?
How are the examples P0 and P2 used to illustrate basis and dimension?
What is the dimension of the space of complex 2×3 matrices, and what does that imply about a basis?
Review Questions
- State the definitions of span(M), generating set, and linear independence in terms of finite linear combinations.
- Explain why a basis is both generating and linearly independent, and how that leads to a well-defined notion of dimension.
- Give a basis and compute the dimension for P0 and P2, using the monomial/constant-function reasoning from the examples.
Key Points
- 1
A linear combination in this framework always uses a finite sum of the form λ1 v1 + ⋯ + λk vk, even if the underlying set of vectors is infinite.
- 2
For any subset M of a vector space V, span(M) is the set of all vectors obtainable from finite linear combinations of elements of M, and span(M) is always a subspace.
- 3
A generating set M for a subspace U satisfies span(M)=U, meaning M reproduces every vector in U exactly via linear combinations.
- 4
Linear independence means the zero vector can only be obtained through the trivial combination where all coefficients are zero, enforcing uniqueness of coefficients.
- 5
A basis of U is a set that both generates U and is linearly independent; bases provide the most efficient description of a subspace.
- 6
The dimension dim(U) is the fixed number of vectors in any basis of U: a natural number in finite-dimensional cases, and treated as “infinity” when not finite.
- 7
The space of constant polynomials P0 has dimension 1, while the space of polynomials of degree at most 2, P2, has dimension 3.