
Linear Algebra 63 | Spectral Mapping Theorem


Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

For any polynomial p, the spectrum of p(A) is exactly {p(λ) : λ ∈ spectrum(A)}. The key mechanism: if A x = λ x, then A^m x = λ^m x for all natural m.

Briefing

The spectral mapping theorem for polynomials gives a direct rule for how eigenvalues change when a matrix is transformed by a polynomial: the spectrum of p(A) is exactly the set of values p(λ) as λ ranges over the spectrum of A. That matters because it turns what could be a fresh eigenvalue computation for a new matrix into a simple “plug-in” operation on the old eigenvalues.

The discussion starts from the eigenvalue equation A x = λ x, emphasizing complex-valued matrices so eigenvalues may be complex as well. Applying A to both sides shows that A^2 x = λ^2 x, so x remains an eigenvector for A^2 while the eigenvalue squares. Repeating the argument (and using induction) yields the general pattern: for every natural power m, A^m has the same eigenvectors as A, with eigenvalues λ^m. This power behavior extends naturally from monomials to polynomials.
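
This power rule is easy to check numerically. Below is a minimal sketch, assuming NumPy (not part of the original video), that verifies A^m x = λ^m x for one eigenpair of a sample matrix:

```python
# Check that an eigenvector of A stays an eigenvector of A^m,
# with the eigenvalue raised to the m-th power.
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 2.0]])          # sample matrix (reused in the worked example below)
eigvals, eigvecs = np.linalg.eig(A)

lam = eigvals[0]                    # one eigenvalue of A
x = eigvecs[:, 0]                   # a matching eigenvector

m = 5
Am = np.linalg.matrix_power(A, m)   # A^m

# A^m x should equal lam^m x up to floating-point error.
print(np.allclose(Am @ x, lam**m * x))   # True
```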

A polynomial p(z) = c_m z^m + … + c_1 z + c_0 is turned into a matrix polynomial p(A) by substituting A for z. The constant term requires the identity matrix: c_0 I, so the whole expression becomes a well-defined n×n matrix. With that setup, the spectral mapping theorem can be stated: the spectrum of p(A) is determined completely by the spectrum of A, and specifically equals {p(λ) : λ ∈ spectrum(A)}. As a consequence, if spectrum(A) has n elements, then spectrum(p(A)) has at most n elements. A constant polynomial illustrates the extreme case: if p(z) = c_0, then p(A) = c_0 I and the spectrum collapses to the single value {c_0}.
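
To make the construction concrete, here is a short sketch, assuming NumPy; `poly_at_matrix` is a hypothetical helper, not something from the video. It builds p(A) with the constant term as c_0 I and compares spectrum(p(A)) against {p(λ)}:

```python
# Evaluate p(A) for coefficients [c_m, ..., c_1, c_0] via Horner's scheme;
# the constant term enters as c_0 * I so every summand is an n x n matrix.
import numpy as np

def poly_at_matrix(coeffs, A):
    n = A.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:
        result = result @ A + c * np.eye(n)
    return result

A = np.array([[3.0, 2.0],
              [1.0, 2.0]])
coeffs = [1.0, 0.0, 1.0]                                  # p(z) = z^2 + 1

pA = poly_at_matrix(coeffs, A)
print(np.sort(np.linalg.eigvals(pA)))                     # spectrum of p(A): [ 2. 17.]
print(np.sort(np.polyval(coeffs, np.linalg.eigvals(A))))  # {p(λ)}:           [ 2. 17.]
```

A has eigenvalues 1 and 4, so p(A) = A^2 + I should have eigenvalues p(1) = 2 and p(4) = 17, which the two printed lines confirm.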

The proof strategy is framed as showing equality of two sets via two inclusions. One inclusion follows from the earlier eigenvalue-power reasoning (with sums of matrix terms handled by induction). The other inclusion is proved by contrapositive. First, the constant-polynomial case is handled directly via the characteristic polynomial: eigenvalues of p(A) are the zeros of det(p(A) − μ I), and for p(A) = c_0 I this determinant is (c_0 − μ)^n, so the spectrum is {c_0}. For non-constant p, the argument assumes a complex number μ is not in {p(λ) : λ ∈ spectrum(A)} and builds a new polynomial q(z) = p(z) − μ. Factoring q via the fundamental theorem of algebra gives q(z) = c ∏_{j=1}^m (z − a_j). Since μ was chosen to avoid p(λ) for every eigenvalue λ of A, none of the roots a_j can lie in spectrum(A), so each factor det(A − a_j I) is nonzero. Multiplicativity of determinants then shows det(q(A)) ≠ 0, so μ cannot be an eigenvalue of p(A). That completes the contrapositive and therefore the set equality.
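
The determinant argument can be traced numerically as well. The sketch below (assuming NumPy; the value μ = 10 is an arbitrary choice outside {p(1), p(4)} = {−5, 82}) factors q and confirms det(q(A)) ≠ 0 both ways:

```python
# For mu outside {p(lambda)}, factor q(z) = p(z) - mu into linear terms and
# check that det(q(A)) = c^n * prod_j det(A - a_j I) is nonzero.
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 2.0]])
n = A.shape[0]
coeffs = np.array([3.0, -7.0, 1.0, -2.0])   # p(z) = 3z^3 - 7z^2 + z - 2
mu = 10.0                                   # not p(1) = -5 and not p(4) = 82

q = coeffs.copy()
q[-1] -= mu                                 # q(z) = p(z) - mu
roots = np.roots(q)                         # the a_j in q(z) = c * prod_j (z - a_j)
c = q[0]

via_factors = c**n * np.prod([np.linalg.det(A - a * np.eye(n)) for a in roots])

# Cross-check with det(p(A) - mu*I) computed directly.
pA = (3 * np.linalg.matrix_power(A, 3) - 7 * np.linalg.matrix_power(A, 2)
      + A - 2 * np.eye(n))
direct = np.linalg.det(pA - mu * np.eye(n))

print(via_factors, direct)                  # both approx -1080, i.e. nonzero
```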

A concrete example closes the loop. For A = [[3,2],[1,2]], the eigenvalues are 1 and 4. Taking B = 3A^3 − 7A^2 + A − 2I, the theorem says the eigenvalues of B are obtained by evaluating p(t) = 3t^3 − 7t^2 + t − 2 at 1 and 4. The results are p(1) = −5 and p(4) = 82, avoiding any direct eigenvalue computation for B itself.
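
A quick check of this example, assuming NumPy, computes B directly and confirms the predicted spectrum:

```python
# Eigenvalues of B = 3A^3 - 7A^2 + A - 2I should equal
# p(1) = -5 and p(4) = 82 for p(t) = 3t^3 - 7t^2 + t - 2.
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 2.0]])
I = np.eye(2)

B = (3 * np.linalg.matrix_power(A, 3)
     - 7 * np.linalg.matrix_power(A, 2)
     + A
     - 2 * I)

print(np.sort(np.linalg.eigvals(B)))                    # approx [-5. 82.]
print(sorted(3*t**3 - 7*t**2 + t - 2 for t in (1, 4)))  # [-5, 82]
```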

Cornell Notes

The spectral mapping theorem for polynomials links eigenvalues before and after a polynomial matrix transformation. If A has spectrum spectrum(A), then for any polynomial p, the spectrum of p(A) is exactly {p(λ) : λ ∈ spectrum(A)}. The key mechanism comes from the eigenvalue equation: if A x = λ x, then A^m x = λ^m x for all natural m, and this extends from powers to general polynomials. The theorem implies eigenvalues of p(A) can be found by plugging the eigenvalues of A into p, often eliminating a new characteristic-polynomial calculation. A proof by contrapositive uses factorization of q(z)=p(z)−μ and determinant multiplicativity to show that any μ not of the form p(λ) cannot be an eigenvalue of p(A).

Why does an eigenvector of A stay an eigenvector for A^m, and how do the eigenvalues change?

Starting from A x = λ x, multiplying by A gives A^2 x = A(λ x) = λ(A x) = λ^2 x. Repeating the multiplication shows A^m x = λ^m x for every natural power m. So the eigenvector x is preserved across powers, while the eigenvalue is raised to the m-th power.

How is a polynomial p(z) turned into a matrix p(A), and why does the identity matrix appear?

For p(z)=c_m z^m+…+c_1 z + c_0, the matrix polynomial is p(A)=c_m A^m+…+c_1 A + c_0 I. The identity matrix I is required because the constant term c_0 is a scalar and must become a matrix of the same size as A to make addition valid.

What does the spectral mapping theorem claim about the spectrum of p(A)?

It claims spectrum(p(A)) = {p(λ) : λ ∈ spectrum(A)}. In particular, if spectrum(A) has n elements, then spectrum(p(A)) has at most n elements. For a constant polynomial p(z) = c_0, p(A) = c_0 I, so the spectrum is just {c_0}.

How does the contrapositive proof work for non-constant polynomials?

Assume μ is not in {p(λ) : λ ∈ spectrum(A)}. Define q(z) = p(z) − μ and factor it as q(z) = c ∏_{j=1}^m (z − a_j). Because μ avoids every p(λ), none of the roots a_j can be eigenvalues of A, so det(A − a_j I) ≠ 0 for every j. Then det(q(A)) = det(c ∏_j (A − a_j I)) = c^n ∏_j det(A − a_j I) (with n the size of A), a product of nonzero factors, so det(q(A)) ≠ 0. Therefore μ cannot be an eigenvalue of p(A).

In the example, how are the eigenvalues of B obtained from those of A?

With A = [[3,2],[1,2]], the eigenvalues are 1 and 4. For B = 3A^3 − 7A^2 + A − 2I, evaluate the polynomial p(t)=3t^3−7t^2+t−2 at t=1 and t=4. This gives p(1)=−5 and p(4)=82, so spectrum(B) = {−5, 82}.

Review Questions

  1. If A x = λ x, what is A^5 x and what eigenvalue does A^5 have on the vector x?
  2. Given a polynomial p(z)=z^2−3z+2 and a matrix A with eigenvalues 1 and 4, what set does the spectral mapping theorem predict for spectrum(p(A))?
  3. Why must the constant term in p(A) be written as c_0 I rather than just c_0?

Key Points

  1. Eigenvalues of A^m follow the rule: if A x = λ x, then A^m x = λ^m x for all natural m.
  2. A polynomial matrix p(A) is formed by substituting A for the variable and using c_0 I for the constant term.
  3. For any polynomial p, spectrum(p(A)) equals {p(λ) : λ ∈ spectrum(A)}.
  4. The theorem implies spectrum(p(A)) has at most as many elements as spectrum(A), and constant polynomials collapse the spectrum to a single value.
  5. A contrapositive proof for non-constant p uses q(z) = p(z) − μ, factorization into linear terms, and determinant multiplicativity to show μ cannot be an eigenvalue unless μ = p(λ) for some eigenvalue λ of A.
  6. In the worked example, evaluating the polynomial at A’s eigenvalues (1 and 4) immediately yields B’s eigenvalues (−5 and 82).

Highlights

Once A x = λ x holds, the same vector x becomes an eigenvector for every power A^m, with eigenvalue λ^m.
The spectrum of p(A) is obtained by plugging eigenvalues of A into p—no new eigenvalue computation is required.
For constant p(z)=c_0, the transformed matrix p(A)=c_0 I has spectrum {c_0}.
The proof hinges on showing that if μ is not of the form p(λ), then det(p(A) − μ I) stays nonzero via factorization and determinant products.
For A = [[3,2],[1,2]], the matrix B = 3A^3 − 7A^2 + A − 2I has eigenvalues −5 and 82, found by evaluating the polynomial at 1 and 4.

Topics

  • Spectral Mapping Theorem
  • Eigenvalues of Matrix Powers
  • Polynomial Matrix Functions
  • Characteristic Polynomial
  • Determinant Factorization