Linear Algebra 53 | Eigenvalues and Eigenvectors

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A vector x is an eigenvector of A if Ax=λx for some scalar λ, with x required to be nonzero.

Briefing

Eigenvalues and eigenvectors identify the special directions a linear transformation preserves, up to scaling, when a matrix acts on space. For a square matrix A representing a linear map f_A, an eigenvector x is any nonzero vector that, after applying the map, lands on the same line as before: f_A(x)=Ax=λx. The scalar λ is the eigenvalue, and it matters because it describes how vectors along that direction change: λ>0 keeps the direction, λ<0 flips it, and λ=0 collapses the vector to the origin. This turns a geometric question about how a transformation reshapes space into an algebraic one about solving a matrix equation.
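
As a quick sanity check of the definition, here is a minimal numpy sketch; the matrix A and vector x are illustrative choices, not from the video. It confirms that Ax equals λx for a candidate eigenvector.

```python
import numpy as np

# Illustrative matrix and candidate eigenvector (not from the video).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
x = np.array([1.0, 0.0])   # must be nonzero to count as an eigenvector

Ax = A @ x                 # apply the linear map f_A
lam = 2.0                  # claimed eigenvalue for this direction

# Ax should land on the same line as x, scaled by lam.
print(np.allclose(Ax, lam * x))  # True
```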

The core algebraic move is rewriting Ax=λx into (A−λI)x=0, where I is the identity matrix. That equation means x lies in the kernel (null space) of the matrix (A−λI). Since eigenvectors must be nonzero, the focus becomes finding values of λ for which (A−λI) has a nontrivial kernel. Once such an eigenvector exists, there are infinitely many eigenvectors for the same λ because any nonzero vector in that kernel can be scaled to produce another valid eigenvector. Geometrically, this is why eigenvectors correspond to “characteristic directions” of the transformation.
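
Here is a short sympy sketch of the kernel formulation, again with an illustrative matrix: it forms A−λI for a trial λ and asks for its null space; a nonempty basis means the kernel is nontrivial, so λ is an eigenvalue.

```python
import sympy as sp

# Illustrative matrix (not from the video).
A = sp.Matrix([[2, 0],
               [0, 3]])

lam = 2                                     # trial eigenvalue
kernel = (A - lam * sp.eye(2)).nullspace()

# A nonempty basis means ker(A - lam*I) is nontrivial, so lam is an
# eigenvalue; every nonzero vector in that kernel is an eigenvector for lam.
print(kernel)  # [Matrix([[1], [0]])]
```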

A worked 2×2 example illustrates the mechanics. Starting with a simple matrix (with entries chosen as 1s and 0s), the eigenvalue equation Ax=λx produces two component equations. The system is not linear in the unknowns because λ multiplies the vector components, so standard linear-system methods don’t apply directly. By analyzing the second equation first, one sees it is satisfied when λ=1 or when the relevant component x2=0. Checking each case against the first equation eliminates the options that lead only to the zero vector, leaving λ=1 as the only valid eigenvalue. Substituting λ=1 back into the equations forces x2=0 while leaving x1 free, yielding eigenvectors of the form (x1,0) with x1≠0. The example shows how eigenvalues can be determined by consistency conditions that prevent the solution from collapsing to the zero vector.
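
The summary does not show the matrix itself; the shear matrix below is an assumed reconstruction (its entries are 1s and 0s, and it reproduces exactly the behavior described). The sketch retraces the same case analysis symbolically.

```python
import sympy as sp

# Assumed matrix (the video's matrix is not shown in this summary);
# its entries are 1s and 0s and it matches the described outcome.
A = sp.Matrix([[1, 1],
               [0, 1]])

x1, x2, lam = sp.symbols('x1 x2 lam')
x = sp.Matrix([x1, x2])

eqs = A * x - lam * x     # component equations of A x = lam x
print(sp.factor(eqs[1]))  # factors as x2*(1 - lam): lam = 1 or x2 = 0

# Substituting lam = 1 forces x2 = 0 while leaving x1 free:
print(sp.solve(list(eqs.subs(lam, 1)), [x1, x2], dict=True))  # [{x2: 0}]
```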

The formal definitions tie everything together. A scalar λ is an eigenvalue of A if there exists a nonzero vector x such that Ax=λx; that vector x is then an eigenvector. All eigenvectors associated with a fixed eigenvalue λ form an eigenspace, defined as the kernel of (A−λI). Unlike the set of eigenvectors themselves, the eigenspace contains the zero vector; this convention is what makes it a genuine subspace. Finally, the collection of all eigenvalues of A is called the spectrum of A. These definitions set up the systematic computation methods that come next.
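
For completeness, the spectrum and the eigenvectors of a small matrix can be read off numerically. A minimal numpy sketch, with an illustrative matrix:

```python
import numpy as np

# Illustrative matrix (not from the video).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and one eigenvector per column.
spectrum, vecs = np.linalg.eig(A)
print(spectrum)                       # [2. 3.] -- the spectrum of A

for lam, v in zip(spectrum, vecs.T):  # columns of vecs are eigenvectors
    print(np.allclose(A @ v, lam * v))  # True for each pair
```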

Cornell Notes

Eigenvalues and eigenvectors describe the directions a linear transformation preserves under a matrix A. A nonzero vector x is an eigenvector if Ax=λx for some scalar λ, meaning the output stays on the same line as x and is only scaled (possibly flipped if λ<0, or collapsed if λ=0). Rearranging gives (A−λI)x=0, so eigenvectors correspond to nontrivial vectors in the kernel of (A−λI). For a fixed λ, all eigenvectors form the eigenspace ker(A−λI), which is a subspace that includes the zero vector. The set of all eigenvalues of A is the spectrum of A, a central concept for understanding how A acts.

Why does Ax=λx capture “preserved direction” under a linear map?

Because Ax=λx means the transformed vector is a scalar multiple of the original. Multiplying by λ keeps the vector on the same line through the origin: λ>0 preserves direction, λ<0 reverses it, and λ=0 sends the vector to the origin. The direction doesn’t become a new curve or rotate into a different line; it only scales (and possibly flips).
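
A tiny numeric illustration of the sign cases, using an assumed reflection-and-stretch matrix:

```python
import numpy as np

# Assumed example: reflects the x-axis (lam = -1), stretches the y-axis (lam = 2).
A = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])

print(A @ np.array([1.0, 0.0]))  # [-1.  0.]: same line, direction flipped
print(A @ np.array([0.0, 1.0]))  # [0.  2.]: same line, direction preserved
```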

How does the equation Ax=λx turn into a kernel problem?

Starting from Ax=λx, subtract λx from both sides to get (A−λI)x=0. Here I is the identity matrix. The equation (A−λI)x=0 means x is sent to the zero vector by the matrix (A−λI), so x lies in ker(A−λI). Eigenvectors are exactly the nonzero vectors in that kernel.

Why must eigenvectors exclude the zero vector, while eigenspaces include it?

The zero vector always satisfies Ax=λx because A·0=0 and λ·0=0, so including it would make every λ look like an eigenvalue. Eigenvectors therefore require x≠0. The eigenspace ker(A−λI), by contrast, is defined to contain the zero vector so that it forms a genuine subspace, and subspaces by definition include zero.

What does it mean that one eigenvector implies infinitely many eigenvectors for the same eigenvalue?

If x is in ker(A−λI), then any nonzero scalar multiple c·x also lies in the kernel because (A−λI)(c·x)=c(A−λI)x=c·0=0. That produces infinitely many eigenvectors along the same preserved direction.
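
A quick numeric confirmation of the scaling argument (the matrix, eigenvector, and scalar are illustrative):

```python
import numpy as np

# Illustrative eigenpair (not from the video): lam = 2 with eigenvector x.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 2.0
x = np.array([1.0, 0.0])
c = -7.5                          # any nonzero scalar

# (A - lam*I)(c*x) = c*(A - lam*I)x = 0, so c*x is again an eigenvector.
print(np.allclose(A @ (c * x), lam * (c * x)))  # True
```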

In the 2×2 example, why does the analysis lead to λ=1 as the only eigenvalue?

The component equations derived from Ax=λx create constraints that must hold simultaneously. The second equation is satisfied either by λ=1 or by setting a component (x2) to 0. When each case is checked in the first equation, the only nontrivial solutions (ones that avoid producing the zero vector) occur when λ=1. Substituting λ=1 then forces x2=0 while leaving x1 free, giving eigenvectors of the form (x1,0) with x1≠0.

Review Questions

  1. Given a matrix A and scalar λ, how do you test whether λ is an eigenvalue using (A−λI)?
  2. Explain the relationship between eigenvectors, the kernel ker(A−λI), and the eigenspace.
  3. If λ is negative, what geometric change does an eigenvector undergo under the map x↦Ax?

Key Points

  1. A vector x is an eigenvector of A if Ax=λx for some scalar λ, with x required to be nonzero.

  2. The eigenvalue λ describes how the transformation scales an eigenvector: positive preserves direction, negative flips it, and zero collapses it to the origin.

  3. Rewriting Ax=λx as (A−λI)x=0 converts the eigenvalue problem into finding values of λ that make ker(A−λI) nontrivial.

  4. All eigenvectors for a fixed eigenvalue λ form the eigenspace ker(A−λI), which is a subspace including the zero vector.

  5. Once an eigenvector exists for λ, infinitely many eigenvectors follow by scaling within the eigenspace.

  6. The spectrum of A is the set of all eigenvalues of A, obtained by collecting every λ that yields a nontrivial kernel for (A−λI).

Highlights

Eigenvectors are precisely the nonzero vectors whose images under A stay on the same line, differing only by a scalar factor λ.
The equation (A−λI)x=0 shows eigenvectors live in the kernel of (A−λI), turning geometry into a solvable algebraic condition.
In the 2×2 worked example, consistency across component equations eliminates all but one eigenvalue: λ=1, with eigenvectors (x1,0), x1≠0.
The eigenspace ker(A−λI) includes the zero vector so it forms an ordinary subspace, even though eigenvectors themselves exclude zero.
