Abstract Linear Algebra 10 | Inner Products [dark version]
Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
An inner product is a map ⟨·,·⟩: V×V→F that turns an algebraic vector space into a geometric one by enabling lengths and angles.
Briefing
General inner products are introduced as the mechanism that turns a purely algebraic vector space into one with geometry—so angles and lengths become definable. The core move is to define an inner product as a map ⟨·,·⟩: V×V→F (with F being the real numbers or the complex numbers) that satisfies three constraints: positive definiteness, linearity in one argument (chosen here to be the second), and conjugate symmetry. Positive definiteness requires ⟨x,x⟩ to be a nonnegative real number for every vector x, and it must be zero only for the zero vector. Linearity in the second slot means ⟨x, y+z⟩=⟨x,y⟩+⟨x,z⟩ and ⟨x, λy⟩=λ⟨x,y⟩ for scalars λ. Conjugate symmetry ties the two inputs together: ⟨y,x⟩ must equal the complex conjugate of ⟨x,y⟩, so in real spaces it reduces to ordinary symmetry, while in complex spaces swapping the arguments introduces complex conjugation, matching the behavior needed to keep lengths and angles well-behaved.
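The three axioms can be checked numerically. Below is a minimal sketch in pure Python (the helper name `inner` and the sample vectors are ours, not from the video) that verifies positive definiteness, linearity in the second slot, and conjugate symmetry for the standard inner product on ℂ²:

```python
# Standard inner product on C^n: conjugate the first slot, then sum.
def inner(u, v):
    return sum(uk.conjugate() * vk for uk, vk in zip(u, v))

u = [1 + 2j, -1j]
v = [3j, 2 - 1j]
w = [1 - 1j, 4]
lam = 2 - 3j

# (1) positive definiteness: <u,u> is a nonnegative real number
assert inner(u, u).imag == 0 and inner(u, u).real > 0

# (2) linearity in the second argument
vw = [a + b for a, b in zip(v, w)]
assert inner(u, vw) == inner(u, v) + inner(u, w)
assert inner(u, [lam * vk for vk in v]) == lam * inner(u, v)

# (3) conjugate symmetry: <v,u> = conjugate of <u,v>
assert inner(v, u) == inner(u, v).conjugate()
```

The checks use exact Gaussian-integer arithmetic, so the assertions hold without floating-point tolerances.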
To make the definition work uniformly over real and complex settings, the transcript clarifies notation for conjugation and matrix adjoints. Scalars use an overbar to denote complex conjugation; for matrices, the star (A*) means transpose plus entrywise complex conjugation. In the purely real case, that star collapses to just the transpose. This setup matters because the standard inner product on F^n can be written compactly using these operations: for column vectors u and v, the inner product is u* v, where u* is a row vector formed by conjugate-transposing u. That matrix-product form reproduces the familiar component formula with conjugation on the first vector.
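The u* v form can be sketched in pure Python (the helper names `conj_transpose` and `row_times_col` are ours): conjugate-transposing the column vector u gives a row, and the 1×n-times-n×1 matrix product reproduces the component formula with conjugation on the first vector.

```python
def conj_transpose(col):
    # u* : turn a column vector into its conjugated "row"
    return [x.conjugate() for x in col]

def row_times_col(row, col):
    # 1xn times nx1 matrix product collapses to a scalar
    return sum(r * c for r, c in zip(row, col))

u = [1j, 2]       # column vector in C^2
v = [3, 1 - 1j]

star_form = row_times_col(conj_transpose(u), v)                    # u* v
component_form = sum(uk.conjugate() * vk for uk, vk in zip(u, v))  # sum of conj(u_k) v_k
assert star_form == component_form
```

For real vectors every `.conjugate()` call is a no-op, which is exactly the sense in which the star collapses to the plain transpose.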
The definition is then stress-tested with examples. A “mixed-component” inner product candidate on F^2 that pairs u1 with v2 and u2 with v1—that is, ⟨u,v⟩ = ū1·v2 + ū2·v1—can satisfy linearity and conjugate symmetry, but it fails positive definiteness: plugging the same vector into both slots, for instance u=(1,−1), yields ⟨u,u⟩ = −1 − 1 = −2, a negative value, so it cannot be an inner product. The lesson is that all three axioms are required; passing two properties is not enough.
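The failing candidate is easy to reproduce. A short sketch (the helper name `mixed` is ours) shows it is conjugate-symmetric for a sample pair yet produces a negative ⟨u,u⟩:

```python
# Mixed-component candidate on F^2: <u,v> = conj(u1)*v2 + conj(u2)*v1
def mixed(u, v):
    return u[0].conjugate() * v[1] + u[1].conjugate() * v[0]

u = [complex(1), complex(-1)]
v = [2 + 1j, -3j]

# conjugate symmetry holds ...
assert mixed(v, u) == mixed(u, v).conjugate()

# ... but positive definiteness fails: <u,u> = -2 < 0
assert mixed(u, u) == -2
```

One counterexample vector is enough to disqualify the candidate, since positive definiteness must hold for every x.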
A positive example comes from an infinite-dimensional setting: a polynomial space on the unit interval, with coefficients in F, equipped with an inner product defined by an integral. For polynomials f and g, the inner product takes the form ⟨f,g⟩=∫_0^1 f̄(x) g(x) dx, where f̄ denotes the complex conjugate of f, mirroring the finite-dimensional “sum of conjugate times component” structure but replacing the sum over components with an integral over x. Evaluating ⟨P,P⟩ for a simple choice like P(x)=i·x gives ∫_0^1 (−ix)(ix) dx = ∫_0^1 x² dx = 1/3, a real positive number, illustrating how the same geometry idea extends beyond finite-dimensional spaces. The integral’s appearance is framed as the continuum analogue of the standard component-wise sum, with complex conjugation still applied to the first function.
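The integral inner product can be approximated without any external libraries. A sketch under our own naming (`inner_L2` is a hypothetical helper, using a simple midpoint rule rather than exact integration) recovers the transcript's value 1/3 for P(x)=i·x:

```python
# <f,g> = integral from 0 to 1 of conj(f(x)) * g(x) dx,
# approximated by a midpoint Riemann sum with n subintervals.
def inner_L2(f, g, n=100_000):
    h = 1.0 / n
    return sum(f((k + 0.5) * h).conjugate() * g((k + 0.5) * h)
               for k in range(n)) * h

P = lambda x: 1j * x   # P(x) = i*x

val = inner_L2(P, P)
# conj(i*x) * (i*x) = x^2, and the integral of x^2 over [0,1] is 1/3
assert abs(val - 1/3) < 1e-6
assert val.imag == 0   # <P,P> is real, as positive definiteness demands
```

The conjugation in the first slot is what cancels the i's: without it, ⟨P,P⟩ would be −1/3 and the “length” of P would be imaginary.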
Overall, the transcript builds a toolkit: precise axioms for inner products, consistent real/complex notation, and concrete examples that range from F^n to polynomial function spaces—setting up later discussions of further examples and positive definite matrices.
Cornell Notes
An inner product is a function ⟨·,·⟩: V×V→F (F=ℝ or ℂ) that equips a vector space with geometry by enabling lengths and angles. It must satisfy three conditions: (1) positive definiteness—⟨x,x⟩ is a nonnegative real and equals 0 only when x is the zero vector; (2) linearity in the second argument—distributes over addition and scalar multiplication; and (3) conjugate symmetry—swapping arguments conjugates the result in complex spaces (symmetry in real spaces). The standard inner product on F^n can be written as u* v using conjugate transpose. Examples show that meeting only some axioms fails: a proposed F^2 form can be linear and conjugate-symmetric yet still produce negative ⟨x,x⟩. Finally, an inner product on polynomial spaces uses an integral ⟨f,g⟩=∫_0^1 f̄(x) g(x) dx, extending the same structure to infinite-dimensional spaces.
What three properties must an inner product satisfy, and why do they matter for geometry?
How do the star notation and complex conjugation connect to inner products in real vs. complex vector spaces?
Why does a candidate inner product on F^2 fail even if it looks linear and conjugate-symmetric?
How does the standard inner product on F^n relate to a matrix product?
How can inner products exist in infinite-dimensional spaces, and what replaces the finite sum?
Review Questions
- What changes in the conjugate symmetry condition when moving from real vector spaces to complex vector spaces?
- Give an example of how positive definiteness can fail even when linearity and conjugate symmetry hold.
- How does the integral inner product on polynomials mirror the component-wise formula for the standard inner product on F^n?
Key Points
- 1
An inner product is a map ⟨·,·⟩: V×V→F that turns an algebraic vector space into a geometric one by enabling lengths and angles.
- 2
Positive definiteness requires ⟨x,x⟩ to be a nonnegative real number, with ⟨x,x⟩=0 if and only if x is the zero vector.
- 3
Linearity is imposed in the second argument here: it must distribute over addition and scalar multiplication.
- 4
Conjugate symmetry becomes ordinary symmetry in real spaces but introduces complex conjugation when swapping arguments in complex spaces.
- 5
The star operation A* means transpose plus entrywise complex conjugation; in real spaces it reduces to transpose.
- 6
The standard inner product on F^n can be written compactly as u* v for column vectors u and v.
- 7
Inner products extend to infinite-dimensional spaces via integrals, such as ⟨f,g⟩=∫_0^1 f̄(x) g(x) dx for polynomials on [0,1].