Linear Algebra 65 | Diagonalizable Matrices
Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
A square matrix is diagonalizable iff it has n linearly independent eigenvectors, equivalently iff the eigenvectors span C^n.
Briefing
Diagonalizable matrices are exactly the square matrices that admit a full set of eigenvectors—enough to rebuild every vector in the space—so the matrix action can be rewritten as a simple scaling along independent directions. That matters because it turns complicated linear transformations into a diagonal form, making powers of the matrix, solving linear systems, and understanding dynamics far easier.
The discussion starts from the idea of changing coordinates. If a matrix A has eigenvectors X1 through Xn that form a basis of C^n, then any vector U can be expressed as a linear combination of those eigenvectors. In that eigenvector coordinate system, A acts by scaling each eigenvector by its corresponding eigenvalue. Writing the eigenvectors as columns of a matrix X, the transformation becomes A = X D X^{-1}, where D is diagonal and its diagonal entries are the eigenvalues (counted with multiplicity). The key question becomes whether such an eigenvector basis exists at all.
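The change of coordinates above can be sketched numerically. This is a minimal illustration using NumPy; the matrix A here is a hypothetical example chosen for the sketch, not one from the video.

```python
import numpy as np

# Hypothetical example matrix (eigenvalues 5 and 2, so an eigenbasis exists).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix X whose columns
# are the corresponding eigenvectors.
eigenvalues, X = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Since the eigenvectors form a basis of C^2, X is invertible and
# A factors as X D X^{-1}.
A_rebuilt = X @ D @ np.linalg.inv(X)
assert np.allclose(A, A_rebuilt)
```

In the eigenvector coordinates, A acts purely by scaling: applying D to a coefficient vector multiplies each coordinate by its eigenvalue.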
That existence question is reframed into a basis test: the eigenvectors span C^n if and only if they are linearly independent and therefore form a basis. Equivalently, the matrix X built from those eigenvectors must be invertible. This leads to the definition: a square matrix is diagonalizable if it has n linearly independent eigenvectors, i.e., if there exists an invertible X and a diagonal D such that A = X D X^{-1}.
Examples clarify when diagonalization works and when it fails. A diagonal matrix is automatically diagonalizable because the standard basis vectors are eigenvectors. A non-diagonal triangular matrix can still be diagonalizable: in the 2×2 case, even if one standard basis vector is not an eigenvector, the matrix may still have two independent eigenvectors, allowing a diagonal form. But diagonalization can break down when eigenvectors are too scarce. For a 2×2 triangular matrix with only one eigenvalue and only a one-dimensional eigenspace, there aren’t enough independent eigenvectors to span C^2, so the matrix is not diagonalizable.
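The failure mode above can be checked directly. A standard 2×2 example (assumed here for illustration) is a triangular matrix with the single eigenvalue 1 whose eigenspace is only one-dimensional:

```python
import numpy as np

# Triangular matrix with one eigenvalue (1, algebraic multiplicity 2)
# but only a one-dimensional eigenspace.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The eigenspace for eigenvalue 1 is ker(A - I); its dimension
# (the geometric multiplicity) is 2 minus the rank of A - I.
geometric_multiplicity = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(geometric_multiplicity)  # 1: only one independent eigenvector
```

With only one independent eigenvector, no invertible X of eigenvectors exists, so this matrix admits no factorization A = X D X^{-1}.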
The transcript then gives practical criteria. For each eigenvalue, the algebraic multiplicity counts how many times it appears as a root of the characteristic polynomial, while the geometric multiplicity is the dimension of its eigenspace. A matrix is diagonalizable precisely when, for every eigenvalue, the geometric multiplicity equals the algebraic multiplicity (equivalently, when the geometric multiplicities sum to n). A major consequence follows: every normal matrix, such as a self-adjoint (Hermitian) matrix, is diagonalizable and even admits an orthonormal eigenbasis.
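The normal-matrix consequence can be sketched with a small Hermitian example (the matrix H below is an assumed illustration). NumPy's `eigh` routine is designed for Hermitian/symmetric matrices and returns an orthonormal set of eigenvectors:

```python
import numpy as np

# Hypothetical self-adjoint (Hermitian) matrix: H equals its conjugate transpose.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)

# eigh returns real eigenvalues and orthonormal eigenvectors (columns of U).
eigenvalues, U = np.linalg.eigh(H)

# U is unitary, so U^{-1} = U^*, and H = U D U^* with D diagonal.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(H, U @ np.diag(eigenvalues) @ U.conj().T)
```

The unitarity of U is the extra structure normal matrices provide: the inverse in A = X D X^{-1} becomes a simple conjugate transpose.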
Finally, a convenient shortcut is offered: if a matrix has n distinct eigenvalues in an n-dimensional space, then it is diagonalizable. In that situation each eigenvalue has algebraic multiplicity one, and the eigenspaces automatically provide enough independent eigenvectors without explicitly computing them.
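The shortcut reduces to checking that the eigenvalues are pairwise distinct, which needs no eigenvectors at all. A minimal sketch, with an assumed example matrix:

```python
import numpy as np

# Hypothetical example: characteristic polynomial (x + 1)(x + 2),
# so the eigenvalues are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# eigvals computes eigenvalues only, without eigenvectors.
eigenvalues = np.linalg.eigvals(A)

# n distinct eigenvalues guarantee diagonalizability; rounding guards
# against floating-point noise when comparing.
is_diagonalizable_by_shortcut = (
    len(np.unique(np.round(eigenvalues, 8))) == len(eigenvalues)
)
```

Each distinct eigenvalue contributes at least one eigenvector, and eigenvectors for distinct eigenvalues are automatically independent, so two distinct eigenvalues in C^2 already supply a full eigenbasis.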
Cornell Notes
A square matrix is diagonalizable exactly when it has n linearly independent eigenvectors, letting every vector in C^n be written in that eigenvector basis. With eigenvectors X1,…,Xn arranged as columns of X, the matrix can be rewritten as A = X D X^{-1}, where D is diagonal and contains the eigenvalues. Diagonalization fails when the eigenspaces are too small; for instance, a 2×2 matrix with only one eigenvalue and a one-dimensional eigenspace cannot supply two independent eigenvectors. The key test uses multiplicities: for each eigenvalue, the geometric multiplicity must equal the algebraic multiplicity. Normal matrices (including self-adjoint/Hermitian ones) always satisfy this and therefore are diagonalizable.
Why does having an eigenvector basis make diagonalization possible?
What is the precise condition for diagonalizability in C^n?
How do algebraic and geometric multiplicities determine diagonalizability?
Why can a triangular matrix be diagonalizable even if it isn’t diagonal?
What specific failure mode prevents diagonalization in the 2×2 case?
What shortcuts guarantee diagonalizability without computing eigenvectors?
Review Questions
- State the definition of a diagonalizable matrix and explain how it leads to the formula A = X D X^{-1}.
- Give an example of why a matrix with geometric multiplicity smaller than algebraic multiplicity cannot be diagonalizable.
- Explain why normal matrices are guaranteed to be diagonalizable and what extra structure they provide.
Key Points
1. A square matrix is diagonalizable iff it has n linearly independent eigenvectors, equivalently iff the eigenvectors span C^n.
2. If X is the matrix whose columns are eigenvectors and D is diagonal with the corresponding eigenvalues, then diagonalization means A = X D X^{-1}.
3. Diagonalization can succeed for non-diagonal matrices (e.g., some triangular matrices) as long as there are enough independent eigenvectors.
4. Diagonalization fails when eigenspaces are too small, such as a 2×2 matrix with only one eigenvalue and a one-dimensional eigenspace.
5. A practical criterion: for every eigenvalue, the geometric multiplicity must equal the algebraic multiplicity.
6. Every normal matrix (including self-adjoint/Hermitian matrices) is diagonalizable and admits an orthonormal eigenbasis.
7. If a matrix has n distinct eigenvalues, it is diagonalizable without needing to compute eigenvectors.