Linear Algebra 31 | Inverses of Linear Maps are Linear

3 min read

Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

If a linear map F is bijective, its inverse F⁻¹ exists and is guaranteed to be linear.

Briefing

An invertible linear map has an inverse that is automatically linear—a fact that removes a whole class of checks when working with linear transformations. Once a function F between vector spaces is both linear and bijective, its inverse F⁻¹ exists and preserves the same algebraic structure: it respects scalar multiplication and vector addition. That means any time an inverse transformation is needed, linearity comes “for free,” as long as invertibility is guaranteed.

The discussion starts with the familiar matrix case, where linear maps correspond to matrices. If a linear map is represented by a matrix, then the inverse map corresponds to the inverse matrix. The transcript then extends this to compositions: if two linear maps are represented by matrices A and B, their composition corresponds to the matrix product AB. Taking inverses reverses the order—so (AB)⁻¹ equals B⁻¹A⁻¹. This order-reversal is presented as an important computational rule for matrix inverses and sets up the broader, abstract goal: proving that the same structural behavior holds for inverses of linear maps even when no matrix is explicitly used.
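
The order-reversal rule is easy to sanity-check numerically. Below is a minimal NumPy sketch (not from the video); random Gaussian matrices are used because they are invertible with probability one, though a real script should guard against singularity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Inverse of the composition vs. the reversed product of the inverses.
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)

print(np.allclose(lhs, rhs))  # True (up to floating-point tolerance)
```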

The abstract proof begins with a linear map F from ℝⁿ to ℝⁿ (the specific domain/codomain is less important than the vector-space setting) and assumes F is bijective. Bijectivity ensures F⁻¹ is well-defined. To prove linearity of F⁻¹, the transcript checks two defining properties.

First, scalar multiplication: take any vector Y and scalar λ. Because F is bijective, there exists a unique X such that F(X)=Y. Applying F⁻¹ to λY gives F⁻¹(λY). Rewriting λY as F(λX) uses linearity of F, since F(λX)=λF(X)=λY. Then F⁻¹(F(λX)) collapses back to λX, and substituting X=F⁻¹(Y) yields λF⁻¹(Y). This shows F⁻¹(λY)=λF⁻¹(Y).
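
Written out as a single chain, with a comment naming the fact that justifies each step:

```latex
\begin{align*}
F^{-1}(\lambda Y)
  &= F^{-1}(\lambda F(X))  % substitute Y = F(X), the unique preimage of Y
\\&= F^{-1}(F(\lambda X))  % linearity of F: \lambda F(X) = F(\lambda X)
\\&= \lambda X             % F^{-1} undoes F
\\&= \lambda F^{-1}(Y)     % back-substitute X = F^{-1}(Y)
\end{align*}
```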

Second, addition: take vectors Y and Ŷ. Again, bijectivity provides unique X and X̃ such that F(X)=Y and F(X̃)=Ŷ. Then F⁻¹(Y+Ŷ)=F⁻¹(F(X)+F(X̃)). Linearity of F turns the sum inside into F(X+X̃). Applying F⁻¹ cancels F, leaving X+X̃, which equals F⁻¹(Y)+F⁻¹(Ŷ). So F⁻¹ preserves addition.
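
The same chain for addition:

```latex
\begin{align*}
F^{-1}(Y + \hat{Y})
  &= F^{-1}(F(X) + F(\tilde{X}))  % Y = F(X) and \hat{Y} = F(\tilde{X})
\\&= F^{-1}(F(X + \tilde{X}))     % linearity of F merges the two images
\\&= X + \tilde{X}                % F^{-1} undoes F
\\&= F^{-1}(Y) + F^{-1}(\hat{Y})  % back-substitute the preimages
\end{align*}
```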

With both scalar multiplication and addition verified, F⁻¹ is linear. The practical takeaway is direct: if a linear map is invertible (bijective), there’s no need to separately test whether its inverse is linear—its linearity is guaranteed by the structure-preserving nature of F and the existence of F⁻¹.
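
In the matrix case, both properties can be verified numerically. Here is a short NumPy sketch (not from the video) that treats an invertible matrix A as the map F and checks that A⁻¹ respects scalar multiplication and addition:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))  # almost surely invertible
A_inv = np.linalg.inv(A)         # plays the role of F⁻¹

y = rng.standard_normal(3)
y_hat = rng.standard_normal(3)
lam = 2.5

# F⁻¹ preserves scalar multiplication: F⁻¹(λY) = λF⁻¹(Y)
print(np.allclose(A_inv @ (lam * y), lam * (A_inv @ y)))             # True
# F⁻¹ preserves addition: F⁻¹(Y + Ŷ) = F⁻¹(Y) + F⁻¹(Ŷ)
print(np.allclose(A_inv @ (y + y_hat), A_inv @ y + A_inv @ y_hat))   # True
```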

Cornell Notes

If F is a linear map that is also bijective, then its inverse F⁻¹ exists and is automatically linear. The proof checks the two requirements for linearity. For scalar multiplication, using bijectivity lets one write any Y as F(X), so F⁻¹(λY)=F⁻¹(F(λX))=λF⁻¹(Y). For addition, writing Y=F(X) and Ŷ=F(X̃) gives F⁻¹(Y+Ŷ)=F⁻¹(F(X)+F(X̃))=F⁻¹(F(X+X̃))=F⁻¹(Y)+F⁻¹(Ŷ). This matters because it eliminates repeated verification: once invertibility is known, linearity of the inverse follows without extra work.

Why does bijectivity matter for proving that the inverse map is linear?

Bijectivity guarantees that for every vector Y in the codomain there is exactly one vector X in the domain with F(X)=Y. That uniqueness is what makes F⁻¹(Y) well-defined, and it allows the proof to rewrite expressions like F⁻¹(λY) as F⁻¹(F(λX)) using the linearity of F.

How does the scalar-multiplication proof work step by step?

Start with F⁻¹(λY). Since F is bijective, pick the unique X such that F(X)=Y. Then λY equals λF(X), and linearity of F gives λF(X)=F(λX). So F⁻¹(λY)=F⁻¹(F(λX))=λX. Finally substitute X=F⁻¹(Y) to get F⁻¹(λY)=λF⁻¹(Y).

What is the key move in the addition proof?

Represent each input vector as an image under F: choose X and X̃ such that F(X)=Y and F(X̃)=Ŷ. Then Y+Ŷ becomes F(X)+F(X̃). Linearity of F converts that sum into F(X+X̃). Applying F⁻¹ cancels F, yielding X+X̃, which equals F⁻¹(Y)+F⁻¹(Ŷ).

How does the matrix-product rule connect to the abstract inverse-linearity result?

In the matrix setting, composition corresponds to multiplication: if a composed map corresponds to the product AB, then its inverse corresponds to (AB)⁻¹. The rule (AB)⁻¹=B⁻¹A⁻¹ shows how inverses interact with composition: the order reverses. The abstract proof then shows that, beyond order reversal, the inverse transformation also preserves linear structure (addition and scalar multiplication).

What practical shortcut does the result provide in linear algebra problems?

If a linear map is invertible (bijective), there’s no need to separately check whether its inverse is linear. The inverse automatically preserves scalar multiplication and addition, so one can use linearity properties of F⁻¹ directly.

Review Questions

  1. Suppose F is linear and bijective. What two properties must be shown to conclude that F⁻¹ is linear, and how does bijectivity help in each step?
  2. In the scalar multiplication proof, why is it valid to rewrite λY as F(λX) for a suitable X?
  3. How does the addition proof use linearity of F to turn F(X)+F(X̃) into F(X+X̃)?

Key Points

  1. If a linear map F is bijective, its inverse F⁻¹ exists and is guaranteed to be linear.

  2. Linearity of F⁻¹ follows by proving it preserves scalar multiplication and vector addition.

  3. For scalar multiplication, bijectivity lets one write Y=F(X), turning F⁻¹(λY) into F⁻¹(F(λX))=λF⁻¹(Y).

  4. For addition, writing Y=F(X) and Ŷ=F(X̃) converts F⁻¹(Y+Ŷ) into F⁻¹(F(X+X̃))=F⁻¹(Y)+F⁻¹(Ŷ).

  5. In the matrix case, inverses of compositions reverse order: (AB)⁻¹=B⁻¹A⁻¹.

  6. Once invertibility is established for a linear map, checking linearity of the inverse is unnecessary.

Highlights

An invertible linear transformation has an inverse that automatically preserves linear structure.
The proof of inverse linearity relies on bijectivity to express any target vector as F(X) for a unique X.
Scalar multiplication works because λF(X)=F(λX), letting F⁻¹ cancel F.
Addition works because F(X)+F(X̃)=F(X+X̃), again allowing cancellation by F⁻¹.
Matrix inverses reverse multiplication order: (AB)⁻¹=B⁻¹A⁻¹.