
Abstract Linear Algebra 28 | Equivalent Matrices

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Equivalent matrices are those related by invertible changes of basis on both the codomain (left) and domain (right).

Briefing

Equivalent matrices capture when two different matrix representations actually describe the same linear transformation, even after changing the bases of the domain and codomain. The key insight is that a linear map can look like different matrices because the coordinates depend on the chosen bases. In the earlier example, the derivative map from polynomial space P3 to P2 had multiple matrix representations—different matrices, same underlying linear map—because different polynomial bases were used.
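The derivative example can be made concrete in a small numpy sketch. The specific bases below (the monomial bases, and a domain basis where x is rescaled to 2x) are illustrative choices, not ones fixed by the video; each column of a representation records the coordinates of the image of a domain basis vector:

```python
import numpy as np

# Derivative map D: P3 -> P2 in the monomial bases {1, x, x^2, x^3} and {1, x, x^2}.
# Column j holds the coordinates of the derivative of the j-th basis polynomial.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.]])

# The same map, but with the domain basis changed to {1, 2x, x^2, x^3}:
# the derivative of 2x is 2, so the second column changes.
A_new = np.array([[0., 2., 0., 0.],
                  [0., 0., 2., 0.],
                  [0., 0., 0., 3.]])

# The two representations are linked by the invertible change-of-basis
# matrix T = diag(1, 2, 1, 1) acting on the right (domain side).
T = np.diag([1., 2., 1., 1.])
print(np.allclose(A_new, A @ T))  # True: different matrices, same linear map
```

Both matrices describe the same derivative map; only the coordinates differ.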

That observation leads to a practical question: given a matrix A representing a linear map with respect to bases B (for V) and C (for W), can one find a “nicer” matrix A~ that is still a valid representation of the same map, perhaps with more zeros? The answer is not always—certain matrices, like the zero matrix, can’t represent a nonzero linear map. But in the cases where it is possible, the new matrix arises from changing bases in both vector spaces.

Algebraically, changing bases corresponds to multiplying by invertible “change-of-basis” matrices. If V has dimension n and W has dimension m, then A is an m×n matrix. Switching to new bases introduces invertible matrices S and T so that the new representation A~ is related to the old one by the formula

A~ = S · A · T,

with S invertible (size m×m) and T invertible (size n×n). The transcript emphasizes that every invertible matrix can serve as a change-of-basis matrix, meaning equivalence is exactly the set of matrices reachable through these invertible left and right multiplications.
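A minimal sketch of the formula and its shape constraints follows; the unit upper-triangular construction is just one convenient way, assumed here, to produce matrices that are invertible by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n))   # some m x n representation of a linear map

# Unit upper-triangular matrices have determinant 1, hence are invertible;
# this is a convenient way to build valid change-of-basis matrices.
S = np.triu(rng.standard_normal((m, m)), 1) + np.eye(m)   # m x m, codomain side
T = np.triu(rng.standard_normal((n, n)), 1) + np.eye(n)   # n x n, domain side

A_tilde = S @ A @ T               # an equivalent representation of the same map
print(A_tilde.shape)              # (2, 3): equivalence never changes the shape
```

Since every invertible matrix encodes some basis change, any such S and T yield a matrix equivalent to A.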

This motivates the definition of equivalent matrices without mentioning the linear map at all: two matrices of the same shape are equivalent if they can be connected by invertible matrices via A~ = S A T. The notation often uses a tilde symbol (∼) to denote this relation.

Finally, the equivalence relation properties are checked to justify the notation. Reflexivity holds because choosing identity matrices leaves A unchanged. Symmetry holds because if A~ = S A T with invertible S and T, then multiplying by their inverses reverses the relationship. Transitivity holds because composing two such transformations corresponds to multiplying the invertible matrices together. With an equivalence relation in place, matrices split into equivalence classes, and matrices in different classes cannot represent the same linear map. The next step—teased for a later installment—is how to characterize these equivalence classes more concretely.

Cornell Notes

Equivalent matrices are those that represent the same linear map after changing bases in the domain and codomain. If A is an m×n matrix, then another m×n matrix A~ is equivalent to A exactly when there exist invertible matrices S (m×m) and T (n×n) such that A~ = S·A·T. This relation matches the coordinate-change effect: left multiplication corresponds to changing the basis of the codomain, and right multiplication corresponds to changing the basis of the domain. The equivalence relation is reflexive (identity matrices), symmetric (invertible matrices can be inverted), and transitive (invertible changes compose). Matrices in different equivalence classes cannot represent the same linear map.

Why can two different matrices represent the same linear map?

Because matrix entries depend on the chosen bases. A fixed linear map L: V → W can look different in coordinates when the basis of V or W changes. Changing bases introduces invertible change-of-basis matrices, so the same map can produce multiple matrix representations that are related by invertible transformations.

What is the exact algebraic condition for two matrices to be equivalent?

For matrices A and A~ of the same size m×n, equivalence means there exist invertible matrices S (m×m) and T (n×n) such that A~ = S·A·T. This is the coordinate-change rule: invertible matrices encode the basis changes on the codomain (left) and domain (right).

Why does the definition not need the linear map L?

The transcript argues that the basis-change mechanism can be expressed purely in terms of matrices. Since changing bases always corresponds to multiplying by invertible matrices, the relationship between representations can be captured directly by the equation A~ = S·A·T. So equivalence can be defined as a relation on matrices alone.

How do reflexivity, symmetry, and transitivity follow from invertibility?

Reflexivity: choose S = I_m and T = I_n, giving A = I_m·A·I_n. Symmetry: if A~ = S·A·T, then A = S^{-1}·A~·T^{-1}, so the relation reverses. Transitivity: if A2 = S1·A1·T1 and A3 = S2·A2·T2, then A3 = (S2·S1)·A1·(T1·T2), and products of invertible matrices stay invertible.
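The three properties can be checked numerically in a short sketch; the unit upper-triangular construction below is an illustrative assumption that guarantees the basis-change matrices are invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))

def unit_upper(k, rng):
    """A unit upper-triangular matrix: invertible by construction (det = 1)."""
    return np.triu(rng.standard_normal((k, k)), 1) + np.eye(k)

S, T = unit_upper(m, rng), unit_upper(n, rng)

# Reflexivity: identity basis changes leave A untouched.
assert np.allclose(np.eye(m) @ A @ np.eye(n), A)

# Symmetry: undo the basis changes with the inverses.
A_tilde = S @ A @ T
assert np.allclose(np.linalg.inv(S) @ A_tilde @ np.linalg.inv(T), A)

# Transitivity: two successive changes compose into one pair of invertibles.
S2, T2 = unit_upper(m, rng), unit_upper(n, rng)
A3 = S2 @ A_tilde @ T2
assert np.allclose((S2 @ S) @ A @ (T @ T2), A3)
```

Note the order in the transitivity check: the left factors compose as S2·S, while the right factors compose as T·T2, matching A3 = (S2·S1)·A1·(T1·T2) above.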

What does it mean for matrices to lie in different equivalence classes?

An equivalence relation partitions the set of m×n matrices into disjoint equivalence classes. If two matrices represented the same linear map, the basis changes relating their representations would connect them via invertible matrices as in A~ = S·A·T, putting them in the same class. Hence matrices in different classes cannot represent the same linear map.

Review Questions

  1. Given an m×n matrix A, what sizes must the invertible matrices S and T have to form an equivalent matrix S·A·T?
  2. If A~ = S·A·T with S and T invertible, how would you express A in terms of A~?
  3. Why does equivalence imply that matrices in different equivalence classes cannot represent the same linear map?

Key Points

  1. Equivalent matrices are those related by invertible changes of basis on both the codomain (left) and domain (right).
  2. If A is m×n, then A~ is equivalent to A exactly when A~ = S·A·T for invertible S (m×m) and T (n×n).
  3. A linear map can have multiple matrix representations because coordinates change when bases change.
  4. Not every matrix can represent a given linear map; equivalence captures exactly the representations reachable via invertible basis changes.
  5. The equivalence relation is reflexive (identity matrices), symmetric (invertible matrices can be inverted), and transitive (invertible transformations compose).
  6. Equivalence classes partition matrices of the same size, and matrices in different classes cannot represent the same linear map.

Highlights

A~ = S·A·T is the core rule: left and right multiplication by invertible matrices encode basis changes in codomain and domain.
Invertible matrices are not just convenient—they are exactly what basis changes produce, so equivalence is precisely “connected by invertibles.”
Once equivalence is established as an equivalence relation, matrices split into equivalence classes with a strict consequence: different classes can’t correspond to the same linear map.
