Linear Algebra 65 | Diagonalizable Matrices [dark version]

5 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to their content.

TL;DR

A square matrix is diagonalizable iff it has n linearly independent eigenvectors, equivalently iff the eigenvectors span C^n.

Briefing

Diagonalizable matrices are exactly the square matrices that admit a full set of eigenvectors—enough to rebuild every vector in the space—so the matrix action can be rewritten as a simple scaling along independent directions. That matters because it turns complicated linear transformations into a diagonal form, making powers of the matrix, solving linear systems, and understanding dynamics far easier.

The discussion starts from the idea of changing coordinates. If a matrix A has eigenvectors X1 through Xn that form a basis of C^n, then any vector U can be expressed as a linear combination of those eigenvectors. In that eigenvector coordinate system, A acts by scaling each eigenvector by its corresponding eigenvalue. Writing the eigenvectors as columns of a matrix X, the transformation becomes A = X D X^{-1}, where D is diagonal and its diagonal entries are the eigenvalues (counted with multiplicity). The key question becomes whether such an eigenvector basis exists at all.
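The factorization above can be checked numerically. Below is a minimal NumPy sketch (the specific matrix is an illustrative choice, not one from the video): we compute an eigendecomposition and verify that A = X D X^{-1} reconstructs A.

```python
import numpy as np

# An illustrative 2x2 matrix with distinct eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix X whose columns
# are the corresponding eigenvectors.
eigenvalues, X = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Verify the factorization A = X D X^{-1}.
A_rebuilt = X @ D @ np.linalg.inv(X)
print(np.allclose(A, A_rebuilt))  # True: A is diagonalizable
```

Since the eigenvalues are distinct, X is guaranteed to be invertible, so the reconstruction succeeds.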

That existence question is reframed into a basis test: the eigenvectors span C^n if and only if they are linearly independent and therefore form a basis. Equivalently, the matrix X built from those eigenvectors must be invertible. This leads to the definition: a square matrix is diagonalizable if it has n linearly independent eigenvectors, i.e., if there exists an invertible X and a diagonal D such that A = X D X^{-1}.

Examples clarify when diagonalization works and when it fails. A diagonal matrix is automatically diagonalizable because the standard basis vectors are eigenvectors. A non-diagonal triangular matrix can still be diagonalizable: in the 2×2 case, even if one standard basis vector is not an eigenvector, the matrix may still have two independent eigenvectors, allowing a diagonal form. But diagonalization can break down when eigenvectors are too scarce. For a 2×2 triangular matrix with only one eigenvalue and only a one-dimensional eigenspace, there aren’t enough independent eigenvectors to span C^2, so the matrix is not diagonalizable.
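Both triangular cases can be checked with NumPy. This sketch (matrices chosen to match the 2×2 scenarios described, not taken verbatim from the video) contrasts a triangular matrix with two independent eigenvectors against the defective case, where the geometric multiplicity is computed as n minus the rank of A − λI.

```python
import numpy as np

# Triangular but diagonalizable: distinct diagonal entries 1 and 2 yield
# two independent eigenvectors, even though (0, 1) is not one of them.
T_good = np.array([[1.0, 1.0],
                   [0.0, 2.0]])
_, X_good = np.linalg.eig(T_good)
print(np.linalg.matrix_rank(X_good))  # 2: the eigenvectors span C^2

# Defective case: eigenvalue 1 has algebraic multiplicity 2, but its
# eigenspace is one-dimensional, so T_bad is not diagonalizable.
T_bad = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
# Geometric multiplicity = 2 - rank(T_bad - 1*I).
geometric = 2 - np.linalg.matrix_rank(T_bad - np.eye(2))
print(geometric)  # 1: only one independent eigenvector direction
```

Computing the eigenspace dimension via the rank of T − λI is numerically more robust here than inspecting the nearly singular eigenvector matrix that `eig` returns for the defective case.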

The transcript then gives practical criteria. For each eigenvalue, algebraic multiplicity counts how many times it appears as a root of the characteristic polynomial, while geometric multiplicity counts the dimension of the eigenspace. A matrix is diagonalizable precisely when, for every eigenvalue, geometric multiplicity equals algebraic multiplicity (and the geometric multiplicities sum to n). A major consequence follows: every normal matrix—such as a selfadjoint (Hermitian) matrix—is diagonalizable, and even admits an orthonormal eigenbasis.
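The multiplicity criterion can be turned into a small numerical test. The helper below is a hypothetical sketch (the function name and tolerance handling are my own): it groups numerically close eigenvalues, computes each geometric multiplicity as n − rank(A − λI), and checks that the dimensions sum to n.

```python
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    """Heuristic numerical check: the eigenspace dimensions must sum to n.

    For each distinct eigenvalue lam, the geometric multiplicity is
    n - rank(A - lam*I); A is diagonalizable iff these sum to n.
    """
    n = A.shape[0]
    eigenvalues = np.linalg.eigvals(A)
    # Group numerically close eigenvalues into distinct representatives.
    distinct = []
    for lam in eigenvalues:
        if not any(abs(lam - mu) < tol for mu in distinct):
            distinct.append(lam)
    total_geometric = sum(
        n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        for lam in distinct
    )
    return total_geometric == n

print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False
```

Note that floating-point eigenvalue grouping is inherently fragile; for exact answers a symbolic system would be the right tool.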

Finally, a convenient shortcut is offered: if a matrix has n distinct eigenvalues in an n-dimensional space, then it is diagonalizable. In that situation each eigenvalue has algebraic multiplicity one, and the eigenspaces automatically provide enough independent eigenvectors without explicitly computing them.
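The distinct-eigenvalues shortcut needs only the characteristic roots, never the eigenvectors. A minimal sketch (illustrative matrix, tolerance value my own choice):

```python
import numpy as np

# Characteristic polynomial: x^2 - 3x + 2 = (x - 1)(x - 2), roots 1 and 2.
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)

# Pairwise-distinct check (up to a numerical tolerance).
distinct = all(abs(a - b) > 1e-9
               for i, a in enumerate(eigenvalues)
               for b in eigenvalues[i + 1:])
print(distinct)  # True: A is diagonalizable, no eigenvectors needed
```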

Cornell Notes

A square matrix is diagonalizable exactly when it has n linearly independent eigenvectors, letting every vector in C^n be written in that eigenvector basis. With eigenvectors X1,…,Xn arranged as columns of X, the matrix can be rewritten as A = X D X^{-1}, where D is diagonal and contains the eigenvalues. Diagonalization fails when the eigenspaces are too small—for instance, a 2×2 matrix with only one eigenvalue and a one-dimensional eigenspace cannot supply two independent eigenvectors. The key test uses multiplicities: for each eigenvalue, geometric multiplicity must equal algebraic multiplicity. Normal matrices (including selfadjoint/Hermitian ones) always satisfy this and therefore are diagonalizable.

Why does having an eigenvector basis make diagonalization possible?

If eigenvectors X1,…,Xn form a basis of C^n, then any vector U can be written as a linear combination of them. In that basis, applying A to each eigenvector just scales it by its eigenvalue, so A’s action becomes independent scaling along each basis direction. Putting the eigenvectors as columns of X yields A X = X D, which rearranges to A = X D X^{-1} with D diagonal.
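The intermediate identity A X = X D, before rearranging to A = X D X^{-1}, can be verified directly. A short sketch with an illustrative symmetric matrix (eigenvalues 4 and 2):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalues, X = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Applying A to each eigenvector column scales it by its eigenvalue,
# which is exactly the matrix identity A X = X D.
print(np.allclose(A @ X, X @ D))  # True
```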

What is the precise condition for diagonalizability in C^n?

A square matrix A is diagonalizable if it has n linearly independent eigenvectors. Equivalently, the eigenvectors span C^n, so the matrix X formed from those eigenvectors is invertible. This is the same as requiring the existence of an invertible X and a diagonal D such that A = X D X^{-1}.

How do algebraic and geometric multiplicities determine diagonalizability?

Algebraic multiplicity counts how many times an eigenvalue appears as a root of the characteristic polynomial. Geometric multiplicity is the dimension of the eigenspace for that eigenvalue. Diagonalizability requires that, for every eigenvalue, geometric multiplicity equals algebraic multiplicity; otherwise, the eigenspaces don’t provide enough independent eigenvectors to reach n total.

Why can a triangular matrix be diagonalizable even if it isn’t diagonal?

Diagonalizability depends on having enough independent eigenvectors, not on the matrix’s shape. In the 2×2 triangular example, even though one standard basis vector is not an eigenvector, the matrix still has two independent eigenvectors. That means the eigenspaces span C^2, so an X can be built and A can be diagonalized.

What specific failure mode prevents diagonalization in the 2×2 case?

When there is only one eigenvalue and the eigenspace is one-dimensional, there is only one independent eigenvector direction. That yields fewer than n=2 independent eigenvectors, so they cannot span C^2. Without enough eigenvectors, no invertible X exists to produce A = X D X^{-1}.

What shortcuts guarantee diagonalizability without computing eigenvectors?

Two major shortcuts are given. First, every normal matrix (including selfadjoint/Hermitian matrices) is diagonalizable and even admits an orthonormal eigenbasis. Second, if there are n distinct eigenvalues in an n-dimensional space, each eigenvalue has algebraic multiplicity one, and the matrix is diagonalizable.
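The normal-matrix shortcut is directly visible with NumPy's `eigh`, which is specialized for Hermitian inputs (the matrix below is an illustrative choice): the returned eigenvector matrix has orthonormal columns, so its inverse is just its conjugate transpose.

```python
import numpy as np

# A Hermitian (selfadjoint) matrix: equal to its conjugate transpose.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# np.linalg.eigh returns real eigenvalues and an orthonormal eigenbasis
# (the columns of X form a unitary matrix).
eigenvalues, X = np.linalg.eigh(A)

print(np.allclose(X.conj().T @ X, np.eye(2)))             # orthonormal columns
print(np.allclose(A, X @ np.diag(eigenvalues) @ X.conj().T))  # A = X D X^H
```

Because X is unitary, X^{-1} = X^H, so no matrix inversion is needed in the factorization.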

Review Questions

  1. State the definition of a diagonalizable matrix and explain how it leads to the formula A = X D X^{-1}.
  2. Give an example of why a matrix with geometric multiplicity smaller than algebraic multiplicity cannot be diagonalizable.
  3. Explain why normal matrices are guaranteed to be diagonalizable and what extra structure they provide.

Key Points

  1. A square matrix is diagonalizable iff it has n linearly independent eigenvectors, equivalently iff the eigenvectors span C^n.

  2. If X is the matrix whose columns are eigenvectors and D is diagonal with corresponding eigenvalues, then diagonalization means A = X D X^{-1}.

  3. Diagonalization can succeed for non-diagonal matrices (e.g., some triangular matrices) as long as there are enough independent eigenvectors.

  4. Diagonalization fails when eigenspaces are too small, such as a 2×2 matrix with only one eigenvalue and a one-dimensional eigenspace.

  5. A practical criterion: for every eigenvalue, geometric multiplicity must equal algebraic multiplicity.

  6. Every normal matrix (including selfadjoint/Hermitian matrices) is diagonalizable and admits an orthonormal eigenbasis.

  7. If a matrix has n distinct eigenvalues, it is diagonalizable without needing to compute eigenvectors.

Highlights

Diagonalization is possible exactly when eigenvectors supply a full basis, turning A’s action into independent scaling along those directions.
The multiplicity match—geometric multiplicity equals algebraic multiplicity for each eigenvalue—is the central test for diagonalizability.
Normal matrices are always diagonalizable, and they even allow an orthonormal eigenbasis.
