
Abstract Linear Algebra 4 | Basis, Linear Independence, Generating Sets [dark version]

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A linear combination in this framework always uses a finite sum of the form λ1v1 + … + λKvK, even if the underlying set of vectors is infinite.

Briefing

A basis in abstract linear algebra is defined as the “sweet spot” between spanning and uniqueness: it generates a subspace while keeping linear combinations unique. That framing matters because it lets mathematicians describe subspaces efficiently (with as few vectors as possible) and measure their size via dimension—even when the subspace is infinite-dimensional.

The discussion starts by extending familiar linear-algebra operations from R^n to a general vector space V over a field F. A general linear combination of vectors v1 through vK in V uses scalars λ1 through λK from F and forms the finite sum λ1v1 + … + λKvK. The finiteness is emphasized: even when infinite sets appear later, “linear combination” always means a finite number of vectors.
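A minimal sketch of this definition in Python (using NumPy; the particular vectors and scalars are illustrative choices, not taken from the video):

```python
import numpy as np

# A linear combination is always a *finite* sum: finitely many vectors
# v_1 ... v_K paired with scalars lambda_1 ... lambda_K from the field.
vectors = [np.array([1.0, 0.0, 2.0]),
           np.array([0.0, 1.0, -1.0]),
           np.array([3.0, 1.0, 0.0])]
scalars = [2.0, -1.0, 0.5]

# sum over j of lambda_j * v_j
combo = sum(lam * v for lam, v in zip(scalars, vectors))
print(combo)  # a single vector in R^3
```

Even if the pool of available vectors were infinite, any one combination would still draw on only finitely many of them.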

Next comes span. For any subset M of V, span(M) is the set of all vectors obtainable as finite linear combinations of elements of M. Span(M) is always a subspace of V, and the empty set is handled so that span(∅) becomes the smallest subspace, containing only the zero vector. With span in place, the notion of a generating set follows: a subset M generates a subspace U if span(M) equals U. This turns the task of describing U into checking whether every vector in U can be built from vectors in M.
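One way to make span membership concrete, sketched here for a finite M in R^3 (the rank-based test and the example vectors are my own illustration, not from the video):

```python
import numpy as np

# Columns of M span the xy-plane inside R^3.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]]).T

def in_span(M, w, tol=1e-10):
    # w lies in span of M's columns exactly when appending w
    # to M does not increase the rank.
    return (np.linalg.matrix_rank(np.column_stack([M, w]), tol=tol)
            == np.linalg.matrix_rank(M, tol=tol))

print(in_span(M, np.array([3.0, -2.0, 0.0])))  # True: in the xy-plane
print(in_span(M, np.array([0.0, 0.0, 1.0])))   # False: needs the z-direction
```

If span(M) equals some subspace U, this test succeeds for every vector of U, which is exactly what it means for M to generate U.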

Linear independence is introduced as the counterpart that prevents redundancy. A set M is linearly independent if every vector produced by linear combinations of M has exactly one way to do so—equivalently, the only way to represent the zero vector is the trivial combination where all coefficients are zero. The definition allows M to be infinite, but still restricts attention to finite linear combinations.
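For a finite set, the trivial-combination criterion can be checked numerically; a sketch (the rank criterion is a standard equivalent formulation, and the example vectors are illustrative):

```python
import numpy as np

def independent(vectors):
    # A finite set is linearly independent iff the matrix with the
    # vectors as columns has rank equal to the number of vectors,
    # i.e. A x = 0 forces x = 0 (only the trivial combination).
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False: second is twice the first
```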

Combining generating and independence yields the definition of a basis: a set M is a basis of a subspace U if it both generates U and is linearly independent. Bases can differ from one subspace to another, but the number of vectors needed in any basis is fixed. That fixed size is the dimension of U, written dim(U). For finite-dimensional spaces, dimension is a natural number; for infinite-dimensional spaces, the dimension is treated as “infinity” rather than distinguishing different infinite cardinalities.

Concrete examples anchor the abstractions. For P0, the space of constant real polynomials, a basis can be taken as the single constant function X ↦ 1, giving dim(P0)=1. For P2, polynomials of degree at most 2, the monomials 1, X, and X^2 form a basis, so dim(P2)=3. The transcript also notes that the space of all polynomials (with no degree bound) is infinite-dimensional. Finally, it points to a matrix space: the vector space of complex-valued 2×3 matrices has dimension 6, and the task is to construct a linearly independent generating set (a basis) with six elements—highlighting how these definitions extend naturally to finite-dimensional linear algebra used for computations like coordinates in the next installment.
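The P2 example can be sketched by identifying each polynomial with its monomial coefficients (using NumPy's `poly1d`; the specific polynomial is an illustrative choice):

```python
import numpy as np

# The monomial basis {1, X, X^2} of P2, as poly1d objects
# (poly1d takes coefficients in decreasing powers of X).
basis = [np.poly1d([0, 0, 1]),   # constant function 1
         np.poly1d([0, 1, 0]),   # X
         np.poly1d([1, 0, 0])]   # X^2

# The polynomial 2 + 3X - X^2 as a linear combination of the basis:
p = 2 * basis[0] + 3 * basis[1] - 1 * basis[2]
print(p(1))        # evaluates 2 + 3 - 1 = 4
print(len(basis))  # dim(P2) = 3
```

Every polynomial of degree at most 2 arises this way with unique coefficients, which is exactly the basis property.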

Cornell Notes

The core idea is that a basis is the most efficient way to describe a subspace: it both spans the subspace and avoids redundancy. Span(M) is the set of all vectors obtainable from finite linear combinations of elements of M, and it always forms a subspace. A set is linearly independent when the only way to form the zero vector is the trivial combination (all coefficients zero), which enforces uniqueness of coefficients. A basis is a set that is both generating (span(M)=U) and linearly independent, and the number of vectors in any basis is fixed; that number is the dimension dim(U). For finite-dimensional spaces, dim(U) is a natural number; for infinite-dimensional spaces, the transcript treats the dimension as “infinity.”

Why does the definition of a linear combination insist on finiteness, even when infinite sets appear later?

A linear combination is defined as a sum λ1v1 + … + λKvK using finitely many vectors v1 through vK and scalars λ1 through λK. Even if the generating set M is infinite, span(M) only includes vectors formed from finite linear combinations. This keeps the algebraic construction well-defined and matches the usual linear-algebra meaning of “combination.”

How do span(M) and generating sets relate to describing a subspace U?

Span(M) is the collection of all vectors that can be built from M using finite linear combinations. If span(M)=U, then M is a generating set for U, meaning every vector in U can be expressed using vectors from M. This lets one replace the potentially complicated subspace U with a simpler set M whose combinations reproduce U exactly.

What does linear independence guarantee about representations of vectors?

Linear independence guarantees there is no non-trivial way to combine vectors from M to get the zero vector. Equivalently, any vector that can be expressed as a linear combination of M has a unique set of coefficients. In particular, to represent 0, all coefficients must be zero; otherwise the set would be dependent.
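A small sketch of the uniqueness consequence (the matrix and coefficients are illustrative; for an independent set, solving the linear system recovers the one and only coefficient vector):

```python
import numpy as np

# Two linearly independent columns in R^3.
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
w = 2.0 * A[:, 0] + 5.0 * A[:, 1]    # w built with coefficients (2, 5)

# Because the columns are independent, (2, 5) is the *only* coefficient
# pair representing w; least squares recovers it exactly.
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
print(coeffs)
```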

Why does the dimension of a subspace not depend on which basis is chosen?

Once a set M is both generating and linearly independent, it forms a basis of U. The transcript notes that although many different bases can exist for the same subspace, the number of vectors in such a basis is fixed. That invariant number is defined as dim(U), interpreted as the minimal number of vectors needed to span U.

How are the examples P0 and P2 used to illustrate basis and dimension?

For P0 (constant polynomials), a single basis vector suffices: the constant function X ↦ 1, so dim(P0)=1. For P2 (polynomials of degree at most 2), the monomials 1, X, and X^2 form a basis; every such polynomial is a linear combination of these, and they are linearly independent, so dim(P2)=3.

What is the dimension of the space of complex 2×3 matrices, and what does that imply about a basis?

A 2×3 matrix space has 2·3 = 6 independent entries, so its dimension is 6. A basis must therefore be a linearly independent generating set with six elements, meaning every such matrix can be written uniquely in terms of those six basis matrices (with coefficients in the field of complex numbers).
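The standard construction of such a basis, sketched in Python (the elementary matrices E_ij, with a 1 in entry (i, j) and 0 elsewhere, are the usual choice; the sample matrix is illustrative):

```python
import numpy as np

# The six elementary matrices E_ij form a basis of the complex 2x3 matrices.
basis = []
for i in range(2):
    for j in range(3):
        E = np.zeros((2, 3), dtype=complex)
        E[i, j] = 1
        basis.append(E)

print(len(basis))  # dimension 6

# Any 2x3 matrix is a unique combination: the coefficients are its entries.
A = np.array([[1 + 2j, 0, 3],
              [4j, 5, 6]], dtype=complex)
recon = sum(A[i, j] * basis[3 * i + j] for i in range(2) for j in range(3))
print(np.allclose(recon, A))  # True
```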

Review Questions

  1. State the definitions of span(M), generating set, and linear independence in terms of finite linear combinations.
  2. Explain why a basis is both generating and linearly independent, and how that leads to a well-defined notion of dimension.
  3. Give a basis and compute the dimension for P0 and P2, using the monomial/constant-function reasoning from the examples.

Key Points

  1. A linear combination in this framework always uses a finite sum of the form λ1v1 + … + λKvK, even if the underlying set of vectors is infinite.

  2. For any subset M of a vector space V, span(M) is the set of all vectors obtainable from finite linear combinations of elements of M, and span(M) is always a subspace.

  3. A generating set M for a subspace U satisfies span(M)=U, meaning M reproduces every vector in U exactly via linear combinations.

  4. Linear independence means the zero vector can only be obtained through the trivial combination where all coefficients are zero, enforcing uniqueness of coefficients.

  5. A basis of U is a set that both generates U and is linearly independent; bases provide the most efficient description of a subspace.

  6. The dimension dim(U) is the fixed number of vectors in any basis of U: a natural number in finite-dimensional cases, and treated as “infinity” when not finite.

  7. The space of constant polynomials P0 has dimension 1, while the space of polynomials of degree at most 2, P2, has dimension 3.

Highlights

Span(M) is built from finite linear combinations and always forms a subspace, with span(∅) equal to the zero-only subspace.
Linear independence is characterized by the impossibility of a non-trivial linear combination producing the zero vector.
A basis is defined as the combination of generating and linear independence, and the number of basis vectors equals dim(U).
P0 has a one-vector basis (the constant function 1), while P2 has a three-vector monomial basis (1, X, X^2).
The complex 2×3 matrix space has dimension 6, so any basis must contain six linearly independent generators.