Abstract Linear Algebra 34 | Eigenvalues and Eigenvectors for Linear Maps

4 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

A nonzero vector X is an eigenvector of L if L(X) lies in the span of X, equivalently L(X)=λX for some scalar λ.

Briefing

Eigenvectors and eigenvalues for a linear map are defined by a simple “scaling” condition: a nonzero vector X is an eigenvector of L if L(X) is a scalar multiple of X, meaning L(X)=λX for some scalar λ. This matters because it turns a potentially complicated linear transformation into something predictable on the eigenvector directions—just multiplication by λ—so the structure of L becomes easier to analyze.
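As a minimal numerical sketch of this scaling condition (assuming NumPy is available; the matrix A and vector x below are made-up illustrations, not examples from the video):

```python
import numpy as np

# A concrete linear map on R^2, represented by a matrix (hypothetical example).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

x = np.array([1.0, 0.0])  # candidate eigenvector
Ax = A @ x                # apply the linear map

# L(x) = lambda * x holds iff Ax is parallel to x; recover lambda from a
# nonzero component of x and check the whole vector scales consistently.
lam = Ax[0] / x[0]
print(np.allclose(Ax, lam * x))  # True: x is an eigenvector with eigenvalue 2.0
```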

The definition works in both finite- and infinite-dimensional vector spaces, without requiring any matrix representation. The only restriction is that an eigenvector cannot be the zero vector: since L(0)=0, the zero vector would satisfy the scaling condition trivially for every λ and would not carry meaningful information. For an eigenvector X, the scalar λ satisfying L(X)=λX is the eigenvalue associated with it. Importantly, λ can be any element of the underlying field F, including 0.

A key reformulation links eigenvalues to kernels. Starting from L(X)=λX and rearranging gives (L−λI)(X)=0, where I is the identity map. That means λ is an eigenvalue exactly when the linear map L−λI has a nontrivial kernel (i.e., contains vectors other than 0). The set of all eigenvectors for a fixed eigenvalue λ is precisely the kernel of L−λI with the zero vector excluded; the kernel itself is called the eigenspace. Since kernels are always subspaces, each eigenspace is a subspace of V.
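In finite dimensions, the eigenspace can be computed directly as a kernel. Here is a small sketch assuming SciPy is available; the matrix and eigenvalue are made-up illustrations:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

# The eigenspace for lam is ker(A - lam*I); null_space returns an
# orthonormal basis for that kernel as columns.
basis = null_space(A - lam * np.eye(2))
print(basis.shape[1])  # 1: the kernel is nontrivial, so lam is an eigenvalue
print(basis)           # a basis vector spanning the eigenspace
```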

In finite dimensions, the same eigenvalue condition can be checked using matrices. If a basis is chosen, L−λI corresponds to a matrix A−λI, and the dimension of the kernel is preserved when passing to that matrix representation. As a result, eigenvalues do not change when moving between the abstract linear map and its matrix form. Practically, this lets eigenvalues be computed by the familiar matrix method: solve the characteristic equation det(A−λI)=0. The video emphasizes that the determinant-based approach is not just a computational trick—it matches the abstract kernel criterion.
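The following sketch (assuming SymPy; the matrix is a made-up example) shows the two criteria agreeing: the roots of the characteristic polynomial are exactly the scalars at which A−λI acquires a nontrivial kernel.

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[4, 1],
               [2, 3]])

# Characteristic polynomial det(A - lambda*I) and its roots.
char_poly = (A - lam * sp.eye(2)).det()
print(sp.factor(char_poly))      # (lambda - 2)*(lambda - 5)
print(sp.solve(char_poly, lam))  # [2, 5]

# The same scalars satisfy the kernel criterion: A - lam*I becomes singular.
print((A - 5 * sp.eye(2)).nullspace())  # nontrivial kernel at lambda = 5
```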

The motivation for developing the abstract viewpoint comes from infinite-dimensional settings. A concrete example uses the vector space C^1 of real-valued functions on the real line whose derivative is continuous. On this space, differentiation is a linear map L(f)=f′. Eigenvectors arise naturally: the exponential function e^x is its own derivative, so L(e^x)=e^x, meaning e^x is an eigenvector with eigenvalue 1. This example shows that eigenvalues and eigenvectors extend beyond finite-dimensional linear algebra, even though fully characterizing them in infinite-dimensional spaces becomes more subtle—often requiring tools from functional analysis.
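A quick symbolic check of the differentiation example (assuming SymPy; the second computation, with a general scalar λ, is a standard extension beyond the video's single example):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')

# L(f) = f' on C^1; check that exp(x) is a fixed point of differentiation.
f = sp.exp(x)
print(sp.diff(f, x) == f)  # True: L(e^x) = 1 * e^x, so the eigenvalue is 1

# More generally, exp(lam*x) is an eigenvector of differentiation with
# eigenvalue lam, for every scalar lam.
g = sp.exp(lam * x)
print(sp.simplify(sp.diff(g, x) - lam * g) == 0)  # True: L(g) = lam * g
```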

Overall, the core insight is that eigenvalues are exactly the scalars λ for which L−λI fails to be injective (has a nontrivial kernel), and eigenvectors are the nonzero vectors in that kernel. That single idea unifies the abstract definition, the matrix characteristic-polynomial method, and infinite-dimensional examples like differentiation on function spaces.

Cornell Notes

Eigenvectors and eigenvalues of a linear map L are defined by the scaling rule L(X)=λX for some scalar λ and some nonzero vector X. Equivalently, λ is an eigenvalue exactly when (L−λI)(X)=0 has nontrivial solutions, meaning the kernel of L−λI is not just {0}. The eigenspace for λ is the kernel of L−λI (including 0), and it is a subspace of V. In finite-dimensional spaces, passing to a matrix representation preserves the kernel condition, so eigenvalues can be computed via the characteristic polynomial det(A−λI). In infinite-dimensional spaces, the same abstract definition still applies; for instance, differentiation on C^1 has e^x as an eigenvector with eigenvalue 1.

Why must an eigenvector be nonzero in the definition?

If X=0, then L(0)=0 always holds for any linear map, so (L−λI)(0)=0 would be true for every λ. Excluding the zero vector prevents the definition from becoming trivial and ensures eigenvectors carry real information about how L acts.

How does the equation L(X)=λX turn into a kernel condition?

Starting from L(X)=λX, subtract λX from both sides to get L(X)−λX=0. Writing λX as λI(X) gives (L−λI)(X)=0. Therefore, λ is an eigenvalue exactly when L−λI has a nontrivial kernel, i.e., there exists X≠0 with (L−λI)(X)=0.

What exactly is an eigenspace, and what vectors does it contain?

For a fixed eigenvalue λ, the eigenspace is the kernel of L−λI. It includes the zero vector by definition of a kernel, and all nonzero vectors in that kernel are eigenvectors with eigenvalue λ. So the eigenspace is a subspace of V whose nonzero elements are precisely the eigenvectors for λ.

Why do eigenvalues computed from matrices match the abstract definition?

In finite dimensions, choosing a basis turns L into a matrix A, and L−λI into A−λI. The dimension (and nontriviality) of the kernel is preserved under this change of representation. That means λ is an eigenvalue for L exactly when A−λI has a nontrivial kernel, which is equivalent to det(A−λI)=0 via the characteristic polynomial.
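This invariance can be observed numerically. In the sketch below (assuming NumPy; A and the invertible change-of-basis matrix P are made-up examples), representing the same map in a different basis replaces A by P⁻¹AP without changing the eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# A change of basis replaces A by P^{-1} A P, where P is any invertible
# matrix (an arbitrary choice here, for illustration).
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P

# The eigenvalues agree, because ker(A - lam*I) and ker(B - lam*I) have the
# same dimension for every lam. Both lines print [2. 5.] up to rounding.
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.linalg.eigvals(B)))
```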

How does the differentiation example produce an eigenvalue and eigenvector?

Let L be differentiation on C^1: L(f)=f′. Since the exponential function satisfies (e^x)′=e^x, applying L to e^x returns the same function. That matches L(e^x)=1·e^x, so e^x is an eigenvector and 1 is the corresponding eigenvalue.

Review Questions

  1. State the definition of an eigenvector and eigenvalue for a linear map L on a vector space V.
  2. Prove (or explain) why λ is an eigenvalue of L iff ker(L−λI) is nontrivial.
  3. In finite dimensions, how does the characteristic polynomial det(A−λI) relate to the kernel condition for eigenvalues?

Key Points

  1. A nonzero vector X is an eigenvector of L if L(X) lies in the span of X, equivalently L(X)=λX for some scalar λ.

  2. Eigenvalues can be any element of the field F, including 0; the definition imposes no extra restriction on λ.

  3. λ is an eigenvalue exactly when the kernel of L−λI contains vectors other than 0.

  4. For each eigenvalue λ, the eigenspace is ker(L−λI), forming a subspace of V whose nonzero elements are the eigenvectors.

  5. In finite-dimensional settings, eigenvalues are unchanged when moving between an abstract linear map and its matrix representation.

  6. Matrix computation of eigenvalues via det(A−λI)=0 matches the abstract kernel criterion.

  7. Infinite-dimensional examples still fit the same framework: differentiation on C^1 has e^x as an eigenvector with eigenvalue 1.

Highlights

Eigenvalues are precisely the scalars λ that make L−λI have a nontrivial kernel.
The eigenspace for λ is ker(L−λI), so eigenvectors for λ are exactly the nonzero vectors in that kernel.
In finite dimensions, the abstract definition and the characteristic-polynomial method agree because kernel nontriviality is preserved under matrix representation.
Differentiation on C^1 turns e^x into itself, giving eigenvalue 1 and eigenvector e^x.
