
Linear Algebra 53 | Eigenvalues and Eigenvectors [dark version]

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Eigenvectors are nonzero vectors whose direction is preserved by a linear transformation up to a scalar factor.

Briefing

Eigenvalues and eigenvectors identify directions that a linear transformation preserves up to scaling—turning a complicated matrix action into a simple geometric rule. For a square matrix A acting as a linear map f_A on vectors, an eigenvector x is a nonzero vector whose direction does not change under f_A; instead, the transformation only multiplies it by a scalar λ. If λ is negative, the direction flips; if λ is zero, the vector collapses to the origin. This matters because it pinpoints the “special” directions that remain structurally stable under the transformation, which is central to solving systems, understanding stability, and diagonalizing matrices.
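
These geometric cases (stretch, flip, collapse) can be seen numerically. The following sketch uses a hypothetical diagonal matrix, chosen for illustration and not taken from the video, whose eigenvectors are the coordinate axes:

```python
import numpy as np

# Hypothetical example: diag(2, -1) stretches the first axis and flips the second.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

e1 = np.array([1.0, 0.0])  # eigenvector with lambda = 2: same direction, scaled
e2 = np.array([0.0, 1.0])  # eigenvector with lambda = -1: direction flipped

print(A @ e1)  # stays on the line spanned by e1
print(A @ e2)  # lands on the same line as e2, pointing the opposite way
```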

The core condition is written as A x = λ x. Rearranging gives (A − λ I)x = 0, where I is the identity matrix. That reformulation reframes the search for eigenvectors as a kernel problem: x lies in the kernel of (A − λ I). Eigenvectors must exclude the zero vector, since 0 always satisfies the equation trivially, but the scaling factors λ are allowed to be any real number. Once a single eigenvector exists for a given λ, there are infinitely many eigenvectors for that λ because the kernel forms a whole subspace.
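
The rearrangement can be checked directly: for an eigenpair, the vector (A − λ I)x should come out as the zero vector. A small sketch, using the upper-triangular matrix that appears in the worked example below:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # illustrative matrix
lam = 1.0
x = np.array([3.0, 0.0])        # a nonzero vector satisfying A x = lam * x

# x lies in the kernel of (A - lam*I): the residual is the zero vector.
residual = (A - lam * np.eye(2)) @ x
print(residual)
```

Any nonzero scalar multiple of x gives the same zero residual, which is the subspace structure of the kernel in action.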

A two-dimensional example makes the mechanics concrete. With the 2×2 matrix A whose rows are (1, 1) and (0, 1), multiplying A by a vector (x1, x2) and setting the result equal to λ(x1, x2) produces two equations: x1 + x2 = λ x1 and x2 = λ x2. The second equation immediately forces either λ = 1 or x2 = 0. Checking the first equation under these cases shows that the only nontrivial outcome occurs when λ = 1; other choices lead either to contradictions or to the zero vector, which is disallowed for eigenvectors.

With λ = 1 fixed, the first equation reduces to x1 + x2 = x1, implying x2 = 0 while leaving x1 free to be any nonzero real number. So the matrix has exactly one eigenvalue, λ = 1, and its eigenvectors have the form (x1, 0) with x1 ≠ 0. This example also illustrates a broader pattern: eigenvalues come from making (A − λ I)x = 0 have nonzero solutions, and eigenvectors are the corresponding nonzero vectors in that kernel.
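
A quick numerical check of this example (assuming, as the two equations imply, the matrix A = [[1, 1], [0, 1]]):

```python
import numpy as np

# The 2x2 matrix implied by the equations x1 + x2 = lam*x1 and x2 = lam*x2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # both entries numerically 1: lambda = 1 is the only eigenvalue
print(eigenvectors)   # every returned eigenvector is a multiple of (1, 0)
```

Note that `np.linalg.eig` returns the eigenvalue 1 twice (once per matrix dimension), even though the eigenvectors span only the one-dimensional line x2 = 0.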

The video then formalizes the terminology. For a square matrix A, λ is an eigenvalue if there exists a nonzero vector x such that A x = λ x; that vector x is an eigenvector associated with λ. The eigenvectors associated with a fixed λ, together with the zero vector, form the eigenspace for λ, defined as the kernel of (A − λ I); adjoining the zero vector is what makes it a genuine subspace. Finally, the set of all eigenvalues of A is called the spectrum of the matrix, an idea that generalizes well and becomes a foundation for later, more systematic computation methods.
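
As a sketch of the last definition, the spectrum can be collected as the set of eigenvalues; the matrix here is hypothetical, picked so that the two eigenvalues are distinct:

```python
import numpy as np

# Illustrative matrix (not from the video) with spectrum {2, 3}.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# Round away floating-point noise before forming the set of eigenvalues.
spectrum = set(np.round(np.linalg.eigvals(A).real, 10))
print(spectrum)
```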

Cornell Notes

Eigenvalues and eigenvectors describe the directions a linear transformation preserves. For a square matrix A, a nonzero vector x is an eigenvector of A with eigenvalue λ if A x = λ x, meaning A scales x without changing its direction (except possibly a flip when λ < 0). Rearranging yields (A − λ I)x = 0, so eigenvectors are exactly the nonzero vectors in the kernel of (A − λ I). All eigenvectors for a fixed λ form an eigenspace (a subspace), and the set of all eigenvalues is the spectrum of A. In a worked 2×2 example, the only eigenvalue is λ = 1 and the eigenvectors are all vectors of the form (x1, 0) with x1 ≠ 0.

Why does the equation A x = λ x capture “direction preserved” under a linear map?

Because multiplying a vector by a scalar λ keeps it on the same line through the origin. If λ > 0, the vector points in the same direction; if λ < 0, it points in the opposite direction (a flip); if λ = 0, the vector maps to the origin. The eigenvector condition requires that the output A x stays aligned with x, differing only by the scalar factor λ.

How does (A − λ I)x = 0 relate eigenvalues to kernels?

Starting from A x = λ x, subtract λ x from both sides to get (A − λ I)x = 0. This means x is sent to the zero vector by the matrix (A − λ I). Therefore, eigenvectors (nonzero ones) are exactly the nonzero vectors in the kernel of (A − λ I). This kernel viewpoint also explains why eigenvectors for a fixed λ come in infinitely many directions: a kernel is a subspace.
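
One way to compute that kernel numerically is via the singular value decomposition: the rows of Vᵀ whose singular values are (numerically) zero span ker(A − λ I). A sketch, reusing the matrix from the worked example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

M = A - lam * np.eye(2)
# Rows of Vt paired with ~0 singular values form a basis of ker(M).
U, s, Vt = np.linalg.svd(M)
kernel_basis = Vt[s < 1e-10]
print(kernel_basis)   # one basis vector, a multiple of (1, 0)
```

Every nonzero linear combination of the basis rows is an eigenvector for λ, which is the "infinitely many eigenvectors" point made above.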

Why exclude the zero vector when defining eigenvectors?

The zero vector always satisfies A·0 = λ·0 for any λ, since both sides equal 0. Including it would make every scalar look like an eigenvalue. The definition requires a non-vanishing vector x so that the eigenvalue corresponds to a genuinely preserved direction under A.

In the 2×2 example, how does the second equation constrain λ and x2?

The system includes x2 = λ x2. This equation holds in two cases: (1) λ = 1, or (2) x2 = 0. Each case must then be checked against the first equation x1 + x2 = λ x1 to see whether a nonzero solution exists.

Once λ = 1 is found, how are the eigenvectors determined?

Plugging λ = 1 into x1 + x2 = λ x1 gives x1 + x2 = x1, which simplifies to x2 = 0. That leaves x1 free, except it cannot be zero (otherwise the vector would be the zero vector). So eigenvectors are all (x1, 0) with x1 ≠ 0.
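
A minimal sketch confirming that every vector (x1, 0) with x1 ≠ 0 satisfies the eigenvector condition for λ = 1:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

for x1 in (1.0, -2.5, 100.0):            # a few nonzero choices of x1
    x = np.array([x1, 0.0])
    assert np.allclose(A @ x, 1.0 * x)   # A x = 1 * x holds for each

# The zero vector also satisfies A x = lam * x, but is excluded by definition.
print("all checks passed")
```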

What are eigenspaces and the spectrum?

For a given eigenvalue λ, the eigenspace is the kernel of (A − λ I); it includes the zero vector, which is exactly what makes it a genuine subspace. The spectrum of A is the set of all eigenvalues of A, collecting every λ for which (A − λ I) has nonzero vectors in its kernel.

Review Questions

  1. Given a matrix A and a scalar λ, how would you test whether λ is an eigenvalue without solving for eigenvectors directly?
  2. Explain why eigenvectors associated with the same eigenvalue form a subspace, and what that subspace is called.
  3. In what way does the sign of λ affect the geometric behavior of an eigenvector under A?

Key Points

  1. Eigenvectors are nonzero vectors whose direction is preserved by a linear transformation up to a scalar factor.

  2. A vector x is an eigenvector of A with eigenvalue λ exactly when A x = λ x.

  3. Rewriting A x = λ x as (A − λ I)x = 0 turns the eigenvector search into a kernel problem.

  4. For a fixed λ, all eigenvectors form an eigenspace equal to ker(A − λ I), which is a subspace that includes the zero vector.

  5. Eigenvalues are the scalars λ for which (A − λ I) has at least one nonzero vector in its kernel.

  6. In the worked 2×2 example, the only eigenvalue is λ = 1 and the eigenvectors are all vectors (x1, 0) with x1 ≠ 0.

  7. The spectrum of a matrix is the set of all its eigenvalues.

Highlights

Eigenvalues and eigenvectors identify the “stable” directions of a matrix: A x lands on the same line as x.
The kernel form (A − λ I)x = 0 provides a clean computational route to eigenvectors.
A single eigenvector implies infinitely many eigenvectors for the same eigenvalue because the kernel is a subspace.
In the example, checking the two resulting equations forces λ = 1 and yields eigenvectors with x2 = 0.
