Abstract Linear Algebra 2 | Examples of Abstract Vector Spaces [dark version]
Based on the video by The Bright Side of Mathematics on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
A vector space is determined by a set plus two operations—vector addition and scalar multiplication—that satisfy the standard axioms over a chosen field F.
Briefing
The core takeaway is that many familiar mathematical objects become vector spaces once addition and scalar multiplication are defined in a way that stays closed under those operations. The transcript starts by recalling the vector space framework: a set of vectors becomes a vector space over a field F if it supports vector addition and scalar multiplication (with scalars from F) satisfying the standard eight axioms. The field matters because scaling can be done with real numbers, complex numbers, or even rational numbers—each choice changes what “scalars” are allowed.
As a first example, matrices with fixed size M×N form a vector space when their entries come from a field like C. Adding two matrices and multiplying a matrix by a complex scalar work entry-by-entry, and the vector space axioms follow from the usual algebraic rules for complex numbers. This example is intentionally concrete: it looks like ordinary linear algebra, but the point is that the vector space structure is what matters.
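As a rough illustration of the entrywise operations (a minimal sketch using plain nested lists as matrices, not part of the transcript):

```python
def mat_add(A, B):
    """Entrywise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(lam, A):
    """Multiply every entry of A by the complex scalar lam."""
    return [[lam * a for a in row] for row in A]

# Two 2x2 matrices with complex entries.
A = [[1 + 2j, 0], [3, 1j]]
B = [[0, 1], [1 - 1j, 2]]

S = mat_add(A, B)      # [[1+2j, 1], [4-1j, 2+1j]]
T = mat_scale(2j, A)   # [[-4+2j, 0], [6j, -2]]
```

Because both operations act entry by entry, each axiom (commutativity, associativity, distributivity, and so on) reduces to the corresponding rule for complex numbers.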
The next example shifts to a more abstract but central construction: functions. Fix any set I and consider all maps from I to the real numbers, written as F(I) = {f : I → R}. This set becomes a real vector space by defining operations pointwise. For addition, (f+g)(x) is defined as f(x)+g(x) using ordinary addition in R. For scalar multiplication, (λf)(x) is defined as λ·f(x) using ordinary multiplication in R. Because these operations inherit their algebraic behavior from the real numbers, the vector space axioms are satisfied automatically. The transcript also notes the parallel extension: if functions are complex-valued and scalars come from C, the same pointwise rules produce a complex vector space.
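The pointwise definitions can be sketched directly in code (an illustrative sketch, assuming vectors are represented as Python functions):

```python
def f_add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(lam, f):
    """Pointwise scalar multiple: (lam * f)(x) = lam * f(x)."""
    return lambda x: lam * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

h = f_add(f, g)        # h(x) = x^2 + 3x
k = f_scale(2.0, f)    # k(x) = 2x^2

h(2)   # 4 + 6 = 10
k(3)   # 2 * 9 = 18.0
```

Note that the right-hand sides use ordinary addition and multiplication in R, which is exactly why the axioms transfer from the real numbers to the function space.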
A second function-based example focuses on polynomials. Let P(R) denote the set of polynomial functions p : R → R, where p(x) is built from powers of x with real coefficients (only natural powers appear). Addition and scalar multiplication are again defined using the same pointwise rules as for general functions. The crucial extra check is closure: after adding or scaling two polynomials, the result must still be a polynomial. That closure holds because sums and scalar multiples of polynomials remain polynomials.
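Closure becomes visible if polynomials are represented by coefficient lists (a hedged sketch; the transcript works with polynomial functions, but the coefficient view makes the check concrete):

```python
from itertools import zip_longest

def poly_add(p, q):
    """Coefficient-wise sum; the result is again a coefficient list,
    so the sum of two polynomials is a polynomial."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0.0)]

def poly_scale(lam, p):
    """Scaling every coefficient keeps the result a polynomial."""
    return [lam * a for a in p]

def poly_eval(p, x):
    """Evaluate sum of a_k * x^k, matching the pointwise definition."""
    return sum(a * x ** k for k, a in enumerate(p))

p = [1.0, 0.0, 2.0]   # 1 + 2x^2
q = [0.0, 3.0]        # 3x

s = poly_add(p, q)    # [1.0, 3.0, 2.0], i.e. 1 + 3x + 2x^2

# The coefficient-wise sum agrees with the pointwise sum of functions:
assert poly_eval(s, 2.0) == poly_eval(p, 2.0) + poly_eval(q, 2.0)
```

The same check for scalar multiples works analogously, so P(R) is closed under both operations.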
From there, the transcript draws an important structural conclusion: P(R) sits inside the larger function space F(R) as a linear subspace. Since vector space operations are the same in both settings, P(R) inherits the zero vector from the ambient function space. In this function space, the zero vector is the zero function—sending every x in the domain to 0. The discussion ends by previewing that later videos will formalize how linear subspaces are defined in this abstract setting, but the key idea is already in place: subspaces are vector spaces living inside larger ones, closed under the same operations.
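The role of the zero function can also be spot-checked numerically (an illustrative sketch only; sampling finitely many points does not prove the identity, it just demonstrates it):

```python
# The zero vector of the function space: x -> 0. Since P(R) uses the
# same operations as F(R), this is also the zero vector of P(R).
zero = lambda x: 0.0

def f_add(f, g):
    """Pointwise sum, as in the ambient function space."""
    return lambda x: f(x) + g(x)

p = lambda x: 1.0 + 2.0 * x ** 2   # a polynomial, viewed as a function

# Adding the zero function changes nothing, pointwise:
for x in (-1.0, 0.0, 2.5):
    assert f_add(p, zero)(x) == p(x)
```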
Cornell Notes
Many mathematical objects become vector spaces once addition and scalar multiplication are defined and the set is closed under those operations. Matrices with entries in C form a complex vector space because entrywise addition and scalar multiplication satisfy the vector space axioms. A more general construction uses functions: for any set I, the set of all maps f: I → R becomes a real vector space with pointwise operations (f+g)(x)=f(x)+g(x) and (λf)(x)=λ·f(x). Polynomials P(R) form a real vector space because pointwise addition and scaling keep results polynomial. P(R) is a linear subspace of the larger function space F(R), and its zero vector is the zero function x↦0.
Why do matrices with complex entries form a vector space?
How is addition defined for functions f,g: I → R in a vector space setting?
How is scalar multiplication defined for functions, and what does the “ordinary multiplication” refer to?
What extra condition is needed when moving from general functions to polynomials?
What is the zero vector in the polynomial/function vector space?
Review Questions
- Given a set I and functions f,g: I → R, write the formula for (f+g)(x) and explain why it stays in the same set of functions.
- Why does closure under addition and scalar multiplication matter when defining a vector space of polynomials?
- Identify the zero vector in the function space F(I) and in P(R).
Key Points
1. A vector space is determined by a set plus two operations—vector addition and scalar multiplication—that satisfy the standard axioms over a chosen field F.
2. Matrices of size M×N with entries in C form a complex vector space using entrywise addition and scalar multiplication.
3. For any set I, the function set F(I)={f:I→R} becomes a real vector space when addition and scalar multiplication are defined pointwise.
4. Pointwise operations inherit their algebra from R, making the vector space axioms hold for function spaces.
5. Polynomials P(R) form a vector space because pointwise addition and scaling keep results polynomial (closure).
6. P(R) is a linear subspace of the larger function space F(R) because it uses the same operations and contains the zero vector.
7. In function spaces, the zero vector is the zero function x↦0.