
Abstract Linear Algebra 2 | Examples of Abstract Vector Spaces

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Complex matrices ℂ^{M×N} form a vector space using entrywise addition and complex scalar multiplication.

Briefing

Vector spaces don’t have to be made of arrows or coordinates—once addition and scalar multiplication obey the usual rules, almost any structured collection of objects can qualify. The core takeaway is that sets of matrices, functions, and polynomials become vector spaces as soon as their natural operations (entrywise addition and scaling for matrices; pointwise addition and scaling for functions; coefficient-based addition and scaling for polynomials) satisfy the vector space axioms over an appropriate field like ℝ, ℂ, or even ℚ.

A first, familiar example is the set of all M×N matrices whose entries lie in ℂ, written as ℂ^{M×N}. Adding two such matrices is done entry by entry, and multiplying a matrix by a complex scalar scales every entry. Because these operations follow the standard algebra of complex numbers, the vector space axioms reduce to checking familiar properties like associativity, distributivity, and existence of additive identity and inverses.
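As a quick illustration (a sketch, not from the transcript), the two operations on ℂ^{M×N} can be written in a few lines of Python, representing matrices as nested lists of complex numbers:

```python
# A minimal sketch of the ℂ^{M×N} operations using nested lists of Python
# complex numbers (a NumPy array would provide the same behavior natively).

def mat_add(A, B):
    """Entrywise addition: (A + B)[i][j] = A[i][j] + B[i][j]."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def mat_scale(c, A):
    """Scalar multiplication: (c·A)[i][j] = c * A[i][j]."""
    return [[c * a for a in row] for row in A]

A = [[1 + 2j, 0], [3, 1j]]
B = [[2, 1 - 1j], [0, 1]]

S = mat_add(A, B)        # entrywise sum
T = mat_scale(2j, A)     # every entry scaled by the complex scalar 2j
assert S[0][0] == 3 + 2j
assert T[1][1] == -2     # 2j * 1j = 2j² = -2
```

Because each entry is handled by ordinary complex arithmetic, each vector space axiom reduces to the corresponding axiom in ℂ, checked entry by entry.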

The more conceptual example is a function space. Fix any set I and consider all maps from I to ℝ, denoted ℱ(I) (written as F^I in the transcript). This collection becomes a real vector space when addition and scalar multiplication are defined pointwise: for functions f and g, the sum (f+g)(x) equals f(x)+g(x) using ordinary addition in ℝ; for a real scalar λ, the scaled function (λ·f)(x) equals λ·f(x) using ordinary multiplication in ℝ. The vector space axioms then “inherit” from the corresponding rules in the real numbers, since every required identity is verified by applying it at each point x∈I.
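The pointwise definitions translate directly into code. In this sketch (an illustration, not part of the transcript), functions are Python callables, and the two operations build new callables out of old ones:

```python
# A minimal sketch of the pointwise operations on a function space:
# addition and scaling return new functions built from the old ones.

def f_add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(lam, f):
    """Pointwise scaling: (λ·f)(x) = λ · f(x)."""
    return lambda x: lam * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

h = f_add(f, f_scale(2.0, g))   # h(x) = x² + 6x
assert h(1) == 7.0
assert h(2) == 16.0
```

Note that every axiom check happens inside the lambda body, i.e., at a single point x, which is exactly how the axioms "inherit" from ℝ.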

The same construction works for complex-valued functions, producing a complex vector space when scaling uses complex numbers.

A second major example is the space of polynomials. Let 𝒫(ℝ) be the set of polynomial functions P:ℝ→ℝ, where P(x) is built from nonnegative integer powers of x with real coefficients (P(x)=a_n x^n+…+a_1 x+a_0). Addition and scalar multiplication are defined using the same pointwise operations as for functions (equivalently, coefficient by coefficient), but with an important constraint: the result must still be a polynomial. That closure holds: adding two polynomials yields another polynomial, and scaling a polynomial by a scalar keeps it polynomial. So 𝒫(ℝ) forms a real vector space. With complex scalars, it similarly becomes a complex vector space.
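The closure property becomes concrete if polynomials are stored as coefficient lists. In this sketch (an illustration under the coefficient-based view, not from the transcript), both operations return a coefficient list, so the result is by construction again a polynomial:

```python
# Polynomials a_0 + a_1·x + … + a_n·x^n stored as the list [a_0, a_1, …, a_n].
# Addition and scaling both return coefficient lists, so the results are
# again polynomials — the closure property that makes 𝒫(ℝ) a vector space.

from itertools import zip_longest

def poly_add(p, q):
    """Add coefficientwise, padding the shorter list with zeros."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(lam, p):
    """Scale every coefficient by λ."""
    return [lam * a for a in p]

def poly_eval(p, x):
    """Horner evaluation, agreeing with the pointwise view of polynomials."""
    acc = 0
    for a in reversed(p):
        acc = acc * x + a
    return acc

p = [1, 0, 2]   # 1 + 2x²
q = [0, 3]      # 3x
s = poly_add(p, poly_scale(2, q))   # 1 + 6x + 2x²
assert s == [1, 6, 2]
assert poly_eval(s, 1) == 9
```

Evaluating the coefficient-level result pointwise (the `poly_eval` check) confirms that the two descriptions of the operations agree.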

Finally, the transcript connects these examples by viewing polynomials as a subspace. Since every polynomial is a function, 𝒫(ℝ) sits inside the larger function space ℱ(ℝ) (the set of all functions ℝ→ℝ). Because the vector space operations agree on the subset, 𝒫(ℝ) is a linear subspace of ℱ(ℝ), and in particular it shares the same zero vector as the ambient space. In the function setting, the zero vector is the zero function, mapping every input x to 0, so the “abstract zero” in 𝒫(ℝ) is exactly that polynomial.
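The shared zero vector can be checked directly. In this short sketch (an illustration, not from the transcript), adding the zero function pointwise leaves any function, polynomial or not, unchanged:

```python
# The zero function x ↦ 0 is the additive identity of the function space,
# and it is the same zero vector inside the polynomial subspace.

zero = lambda x: 0

def f_add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

p = lambda x: x ** 3 - x   # a polynomial, hence also a function
q = f_add(p, zero)         # p + 0 should equal p pointwise

assert all(q(x) == p(x) for x in range(-5, 6))
```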

Cornell Notes

The transcript builds vector spaces from structured sets by defining addition and scalar multiplication in a way that matches the objects’ natural operations. Matrices form a vector space over ℂ using entrywise addition and complex scalar scaling. Function spaces ℱ(I) become real vector spaces over ℝ when addition and scaling are defined pointwise: (f+g)(x)=f(x)+g(x) and (λ·f)(x)=λ·f(x). Polynomial spaces 𝒫(ℝ) form a vector space because these operations keep results within polynomials, not just arbitrary functions. Polynomials then appear as a linear subspace of the larger function space, sharing the same zero vector: the zero function x↦0.

Why does the set of complex M×N matrices form a vector space?

Because it uses the standard operations compatible with complex numbers: matrix addition is defined entrywise, and scalar multiplication multiplies every entry by a complex scalar. With these operations, the vector space axioms (associativity/commutativity of addition, distributive laws, compatibility of scalar multiplication, and existence of additive identity and inverses) follow from the corresponding rules for complex arithmetic.

How do pointwise operations turn a set of functions into a vector space?

For a fixed domain set I, take all functions f:I→ℝ. Define addition by (f+g)(x)=f(x)+g(x) and scalar multiplication by (λ·f)(x)=λ·f(x), using ordinary addition and multiplication in ℝ. Since the axioms hold for real numbers at each point x, they hold for the functions as a whole.

What changes when moving from real-valued functions to complex-valued functions?

The structure becomes a complex vector space by switching the field used for scaling from ℝ to ℂ. The definitions of addition and pointwise scaling look the same, but scalar multiplication now uses complex scalars, so the closure and axioms rely on complex arithmetic instead of real arithmetic.

Why does the polynomial set 𝒫(ℝ) qualify as a vector space, not just a function set?

Closure under the operations is the key. If P₁ and P₂ are polynomials, then P₁+P₂ is still a polynomial, and for any scalar λ, λ·P₁ is still a polynomial. Because the results stay within the polynomial form, the vector space operations never leave 𝒫(ℝ).

What makes 𝒫(ℝ) a linear subspace of the larger function space ℱ(ℝ)?

𝒫(ℝ) is a subset of ℱ(ℝ) since every polynomial is a function ℝ→ℝ. The addition and scalar multiplication used in 𝒫(ℝ) are the same as those in the ambient function space, and the zero vector matches: the zero function x↦0 serves as the additive identity in both settings.

Review Questions

  1. Given a fixed domain I, write the formulas for (f+g)(x) and (λ·f)(x) that make ℱ(I) a real vector space.
  2. What closure property must hold for a subset of a function space to be a vector subspace? Apply it to polynomials.
  3. In the function-space setting, what is the zero vector, and why is it the same in both ℱ(ℝ) and 𝒫(ℝ)?

Key Points

  1. Complex matrices ℂ^{M×N} form a vector space using entrywise addition and complex scalar multiplication.

  2. A function space ℱ(I) becomes a real vector space when addition and scaling are defined pointwise using ordinary ℝ operations.

  3. Vector space axioms for pointwise-defined function operations follow from the axioms of the underlying field at each point x∈I.

  4. Polynomial spaces 𝒫(ℝ) form vector spaces because addition and scalar multiplication keep results inside the class of polynomials.

  5. Polynomials sit inside the larger function space as a linear subspace since they share the same vector operations and the same zero vector.

  6. The zero vector in a function space is the zero function x↦0, which also acts as the zero polynomial in 𝒫(ℝ).

Highlights

Pointwise definitions, (f+g)(x)=f(x)+g(x) and (λ·f)(x)=λ·f(x), are enough to turn a set of functions into a vector space.
The closure requirement is what distinguishes polynomial spaces from arbitrary function sets: polynomials stay polynomials under the operations.
𝒫(ℝ) is a linear subspace of ℱ(ℝ) because it uses the same operations and shares the zero vector, the zero function x↦0.
