Linear Algebra 53 | Eigenvalues and Eigenvectors [dark version]
Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creator by watching, liking, and subscribing to their channel.
Eigenvectors are nonzero vectors whose direction is preserved by a linear transformation up to a scalar factor.
Briefing
Eigenvalues and eigenvectors identify directions that a linear transformation preserves up to scaling, turning a complicated matrix action into a simple geometric rule. For a square matrix A acting as a linear map f_A on vectors, an eigenvector x is a nonzero vector whose direction does not change under f_A; the transformation only multiplies it by a scalar λ. If λ is negative, the direction flips; if λ is zero, the transformation sends the vector to the origin. This matters because it pinpoints the "special" directions that remain structurally stable under the transformation, which is central to solving systems, understanding stability, and diagonalizing matrices.
The core condition is written as A x = λ x. Rearranging gives (A − λ I)x = 0, where I is the identity matrix. That reformulation reframes the search for eigenvectors as a kernel problem: x lies in the kernel of (A − λ I). The zero vector is excluded as an eigenvector, since x = 0 satisfies the equation trivially for every λ; the eigenvalue λ itself, however, may be any real number, including zero. Once a single eigenvector exists for a given λ, there are infinitely many eigenvectors for that λ, because the kernel is a subspace: every nonzero scalar multiple of an eigenvector is again an eigenvector.
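As a quick numerical check of both forms of the condition, here is a minimal NumPy sketch (the diagonal matrix and the vector are illustrative choices, not taken from the video):

```python
import numpy as np

# Illustrative example: a diagonal matrix scales each axis independently,
# so the standard basis vectors are eigenvectors.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
x = np.array([1.0, 0.0])   # candidate eigenvector
lam = 2.0                  # candidate eigenvalue

# The defining condition A x = lam * x ...
print(np.allclose(A @ x, lam * x))                   # True

# ... and the equivalent kernel formulation (A - lam*I) x = 0.
print(np.allclose((A - lam * np.eye(2)) @ x, 0.0))   # True
```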
A two-dimensional example makes the mechanics concrete. Using the 2×2 matrix A with first row (1, 1) and second row (0, 1), multiplying A by a vector (x1, x2) and setting the result equal to λ(x1, x2) produces two equations: x1 + x2 = λ x1 and x2 = λ x2. The second equation immediately forces either λ = 1 or x2 = 0. Checking the first equation under these cases shows that the only nontrivial outcome occurs when λ = 1; other choices lead either to contradictions or to the zero vector, which is disallowed for eigenvectors.
With λ = 1 fixed, the first equation reduces to x1 + x2 = x1, implying x2 = 0 while leaving x1 free to be any nonzero real number. So the matrix has exactly one eigenvalue, λ = 1, and its eigenvectors have the form (x1, 0) with x1 ≠ 0. This example also illustrates a broader pattern: eigenvalues come from making (A − λ I)x = 0 have nonzero solutions, and eigenvectors are the corresponding nonzero vectors in that kernel.
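To check the worked example numerically, here is a short NumPy sketch (noting that np.linalg.eig works in floating point, and that this particular matrix has only one independent eigenvector direction):

```python
import numpy as np

# The matrix from the worked example: A(x1, x2) = (x1 + x2, x2).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [1. 1.] -- lambda = 1 is the only eigenvalue
print(eigenvectors)  # columns are multiples of (1, 0), up to floating-point noise

# Direct check that (x1, 0) is an eigenvector for lambda = 1:
x = np.array([5.0, 0.0])             # any x1 != 0 works
print(np.allclose(A @ x, 1.0 * x))   # True
```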
The video then formalizes the terminology. For a square matrix A, λ is an eigenvalue if there exists a nonzero vector x such that A x = λ x; that vector x is an eigenvector associated with λ. The eigenvectors associated with the same λ, together with the zero vector, form an eigenspace, defined as the kernel of (A − λ I); including the zero vector makes it a genuine subspace. Finally, the set of all eigenvalues of A is called the spectrum of the matrix, an idea that can be generalized and becomes a foundation for later, more systematic computation methods.
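For an exact, symbolic confirmation of the eigenspace-as-kernel view on the same example, a SymPy sketch (assuming SymPy is available; its `nullspace` method returns a basis of the kernel):

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])
lam = sp.Integer(1)

# Eigenspace for lam = kernel of (A - lam*I); nullspace() returns a basis.
basis = (A - lam * sp.eye(2)).nullspace()
print(basis)          # [Matrix([[1], [0]])] -- the eigenspace is span{(1, 0)}

# The spectrum: eigenvals() maps each eigenvalue to its multiplicity.
print(A.eigenvals())  # {1: 2} -- the spectrum is {1}
```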
Cornell Notes
Eigenvalues and eigenvectors describe the directions a linear transformation preserves. For a square matrix A, a nonzero vector x is an eigenvector of A with eigenvalue λ if A x = λ x, meaning A scales x without changing its direction (except possibly a flip when λ < 0). Rearranging yields (A − λ I)x = 0, so eigenvectors are exactly the nonzero vectors in the kernel of (A − λ I). All eigenvectors for a fixed λ form an eigenspace (a subspace), and the set of all eigenvalues is the spectrum of A. In a worked 2×2 example, the only eigenvalue is λ = 1 and the eigenvectors are all vectors of the form (x1, 0) with x1 ≠ 0.
- Why does the equation A x = λ x capture "direction preserved" under a linear map?
- How does (A − λ I)x = 0 relate eigenvalues to kernels?
- Why exclude the zero vector when defining eigenvectors?
- In the 2×2 example, how does the second equation constrain λ and x2?
- Once λ = 1 is found, how are the eigenvectors determined?
- What are eigenspaces and the spectrum?
Review Questions
- Given a matrix A and a scalar λ, how would you test whether λ is an eigenvalue without solving for eigenvectors directly? (A code sketch follows this list.)
- Explain why eigenvectors associated with the same eigenvalue form a subspace, and what that subspace is called.
- In what way does the sign of λ affect the geometric behavior of an eigenvector under A?
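As a hint for the first question above: λ is an eigenvalue exactly when (A − λ I) has a nontrivial kernel, i.e. when the matrix is rank-deficient. A minimal NumPy sketch of that test (a standard numerical check, not taken from the video):

```python
import numpy as np

def is_eigenvalue(A, lam, tol=1e-10):
    """Return True if lam is (numerically) an eigenvalue of the square matrix A.

    lam is an eigenvalue exactly when (A - lam*I) has a nontrivial kernel,
    which for a square matrix means it is rank-deficient (singular).
    """
    n = A.shape[0]
    return np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol) < n

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(is_eigenvalue(A, 1.0))   # True  -- lambda = 1 is an eigenvalue
print(is_eigenvalue(A, 2.0))   # False
```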
Key Points
1. Eigenvectors are nonzero vectors whose direction is preserved by a linear transformation up to a scalar factor.
2. A nonzero vector x is an eigenvector of A with eigenvalue λ exactly when A x = λ x.
3. Rewriting A x = λ x as (A − λ I)x = 0 turns the eigenvector search into a kernel problem.
4. For a fixed λ, the eigenvectors together with the zero vector form the eigenspace ker(A − λ I), which is a subspace.
5. Eigenvalues are the scalars λ for which (A − λ I) has at least one nonzero vector in its kernel.
6. In the worked 2×2 example, the only eigenvalue is λ = 1 and the eigenvectors are all vectors (x1, 0) with x1 ≠ 0.
7. The spectrum of a matrix is the set of all its eigenvalues.