
Abstract Linear Algebra 2 | Examples of Abstract Vector Spaces [dark version]

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A vector space is determined by a set plus two operations—vector addition and scalar multiplication—that satisfy the standard axioms over a chosen field F.

Briefing

The core takeaway is that many familiar mathematical objects become vector spaces once addition and scalar multiplication are defined in a way that stays closed under those operations. The transcript starts by recalling the vector space framework: a set of vectors becomes a vector space over a field F if it supports vector addition and scalar multiplication (with scalars from F) satisfying the standard eight axioms. The field matters because scaling can be done with real numbers, complex numbers, or even rational numbers; each choice determines which scalars are allowed.

As a first example, matrices with fixed size M×N form a vector space when their entries come from a field like C. Adding two matrices and multiplying a matrix by a complex scalar work entry-by-entry, and the vector space axioms follow from the usual algebraic rules for complex numbers. This example is intentionally concrete: it looks like ordinary linear algebra, but the point is that the vector space structure is what matters.
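The entrywise operations can be sketched in a few lines of Python. This is a minimal illustration (the function names `mat_add` and `mat_scale` are assumptions, not from the video), using nested lists as matrices with complex entries.

```python
# Minimal sketch (not from the video): entrywise operations on matrices
# with complex entries, illustrating the vector space structure.

def mat_add(A, B):
    """Entrywise sum: (A + B)_ij = A_ij + B_ij."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def mat_scale(lam, A):
    """Entrywise scaling: (lam * A)_ij = lam * A_ij."""
    return [[lam * a for a in row] for row in A]

A = [[1 + 2j, 0], [3, 1j]]
B = [[2, 1 - 1j], [0, 4]]

print(mat_add(A, B))     # entrywise sum of A and B
print(mat_scale(2j, A))  # every entry of A multiplied by 2j
```

Because each entry is just a complex number, the eight axioms reduce to the familiar arithmetic rules of C, exactly as the transcript claims.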

The next example shifts to a more abstract but central construction: functions. Fix any set I and consider all maps from I to the real numbers, written as F(I) = {f : I → R}. This set becomes a real vector space by defining operations pointwise. For addition, (f+g)(x) is defined as f(x)+g(x) using ordinary addition in R. For scalar multiplication, (λf)(x) is defined as λ·f(x) using ordinary multiplication in R. Because these operations inherit their algebraic behavior from the real numbers, the vector space axioms are satisfied automatically. The transcript also notes the parallel extension: if functions are complex-valued and scalars come from C, the same pointwise rules produce a complex vector space.
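The pointwise definitions above translate directly into code. The following sketch (function names `f_add` and `f_scale` are assumptions) builds a new function whose value at each x is computed from f(x) and g(x) in R:

```python
# Minimal sketch (not from the video): pointwise vector space
# operations on real-valued functions f: I -> R.
import math

def f_add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(lam, f):
    """Pointwise scaling: (lam * f)(x) = lam * f(x)."""
    return lambda x: lam * f(x)

h = f_add(math.sin, math.cos)   # x -> sin(x) + cos(x)
k = f_scale(3.0, math.sin)      # x -> 3 * sin(x)

print(h(0.0))   # 1.0, since sin(0) + cos(0) = 0 + 1
print(k(0.0))   # 0.0, since 3 * sin(0) = 0
```

The result of each operation is again a map from I to R, which is exactly the closure property the construction needs.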

A second function-based example focuses on polynomials. Let P(R) denote the set of polynomial functions p : R → R, where p(x) is built from powers of x with real coefficients (only natural powers appear). Addition and scalar multiplication are again defined using the same pointwise rules as for general functions. The crucial extra check is closure: after adding or scaling two polynomials, the result must still be a polynomial. That closure holds because sums and scalar multiples of polynomials remain polynomials.
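The closure check is easiest to see if a polynomial is represented by its coefficient list. In this sketch (a representation chosen for illustration, not from the video), adding or scaling coefficient lists always yields another coefficient list, i.e. another polynomial:

```python
# Minimal sketch (not from the video): polynomials as coefficient lists
# [a0, a1, a2, ...] for a0 + a1*x + a2*x^2 + ...
from itertools import zip_longest

def poly_add(p, q):
    """Coefficientwise sum; the result is again a polynomial."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0.0)]

def poly_scale(lam, p):
    """Scaling every coefficient; the result is again a polynomial."""
    return [lam * a for a in p]

p = [1.0, 2.0]        # p(x) = 1 + 2x
q = [0.0, 0.0, 3.0]   # q(x) = 3x^2

print(poly_add(p, q))      # [1.0, 2.0, 3.0] -> 1 + 2x + 3x^2
print(poly_scale(2.0, p))  # [2.0, 4.0]      -> 2 + 4x
```

No operation can produce anything outside the coefficient-list form, which mirrors the transcript's point that P(R) is closed under the vector space operations.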

From there, the transcript draws an important structural conclusion: P(R) sits inside the larger function space F(R) as a linear subspace. Since vector space operations are the same in both settings, P(R) inherits the zero vector from the ambient function space. In this function space, the zero vector is the zero function—sending every x in the domain to 0. The discussion ends by previewing that later videos will formalize how linear subspaces are defined in this abstract setting, but the key idea is already in place: subspaces are vector spaces living inside larger ones, closed under the same operations.

Cornell Notes

Many mathematical objects become vector spaces once addition and scalar multiplication are defined and the set is closed under those operations. Matrices with entries in C form a complex vector space because entrywise addition and scalar multiplication satisfy the vector space axioms. A more general construction uses functions: for any set I, the set of all maps f: I → R becomes a real vector space with pointwise operations (f+g)(x)=f(x)+g(x) and (λf)(x)=λ·f(x). Polynomials P(R) form a real vector space because pointwise addition and scaling keep results polynomial. P(R) is a linear subspace of the larger function space F(R), and its zero vector is the zero function x↦0.

Why do matrices with complex entries form a vector space?

Fix dimensions M×N and consider all matrices whose entries lie in C. Define addition and scalar multiplication entry-by-entry: (A+B)_ij = A_ij + B_ij and (λA)_ij = λ·A_ij. Because these operations follow the usual algebraic rules of complex numbers, the eight vector space axioms are satisfied, making the set a complex vector space (scalars come from C).

How is addition defined for functions f,g: I → R in a vector space setting?

Addition is pointwise. For each x in the domain I, define (f+g)(x)=f(x)+g(x), using ordinary addition in R. Doing this for every x produces a new function from I to R, so the result stays inside the same function set.

How is scalar multiplication defined for functions, and what does the “ordinary multiplication” refer to?

Scalar multiplication is also pointwise. For λ in R and f: I → R, define (λf)(x)=λ·f(x), where the multiplication λ·f(x) is the usual multiplication in the real numbers. Applying this at every x yields another function in the same set.

What extra condition is needed when moving from general functions to polynomials?

Closure under operations. After defining addition and scalar multiplication pointwise, it must still be true that the result is a polynomial. The transcript highlights that sums and scalar multiples of polynomials remain polynomials, so P(R) is closed under the vector space operations.

What is the zero vector in the polynomial/function vector space?

In a function space, the zero vector is the zero function. For the polynomial space P(R) (and the ambient function space F(R)), the zero vector sends every input x to 0: z(x)=0 for all x in the domain.
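As a small check (a sketch under assumed names, not from the video), the zero function really does act as the neutral element for pointwise addition:

```python
# Minimal sketch (not from the video): the zero function z(x) = 0
# is the neutral element of pointwise addition.

def zero(x):
    """The zero vector of the function space: z(x) = 0 for all x."""
    return 0.0

def f_add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

square = lambda x: x * x
s = f_add(square, zero)

# Adding the zero function changes nothing at any sample point.
assert all(s(x) == square(x) for x in [-2.0, 0.0, 1.5])
```

Since the zero function is itself a polynomial (all coefficients 0), it lies in P(R) as well, consistent with P(R) being a subspace of F(R).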

Review Questions

  1. Given a set I and functions f,g: I → R, write the formula for (f+g)(x) and explain why it stays in the same set of functions.
  2. Why does closure under addition and scalar multiplication matter when defining a vector space of polynomials?
  3. Identify the zero vector in the function space F(I) and in P(R).

Key Points

  1. A vector space is determined by a set plus two operations—vector addition and scalar multiplication—that satisfy the standard axioms over a chosen field F.
  2. Matrices of size M×N with entries in C form a complex vector space using entrywise addition and scalar multiplication.
  3. For any set I, the function set F(I)={f:I→R} becomes a real vector space when addition and scalar multiplication are defined pointwise.
  4. Pointwise operations inherit their algebra from R, making the vector space axioms hold for function spaces.
  5. Polynomials P(R) form a vector space because pointwise addition and scaling keep results polynomial (closure).
  6. P(R) is a linear subspace of the larger function space F(R) because it uses the same operations and contains the zero vector.
  7. In function spaces, the zero vector is the zero function x↦0.

Highlights

Function spaces become vector spaces by defining addition and scalar multiplication pointwise: (f+g)(x)=f(x)+g(x) and (λf)(x)=λ·f(x).
Polynomials form a vector space because pointwise operations preserve the polynomial form, ensuring closure.
P(R) sits inside F(R) as a linear subspace, sharing the same zero vector: the zero function.
Matrix spaces with entries in C inherit vector space structure from the algebra of complex numbers.

Topics

  • Vector Spaces
  • Function Spaces
  • Pointwise Operations
  • Polynomial Subspaces
  • Zero Function