
Linear Algebra 39 | Gaussian Elimination

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Gaussian elimination solves Ax = B by applying row operations to the augmented matrix [A | B] to create zeros below pivot positions.

Briefing

Gaussian elimination turns a system of linear equations into a simpler, nearly solved form by using row operations to create zeros below a leading pivot, reducing the problem step by step until only back-substitution remains. The payoff is practical: once the coefficient matrix is in upper triangular form (or its more general cousin, row echelon form), the values of the unknowns can be found starting from the bottom equation and moving upward.

The method starts by rewriting a system Ax = B as an augmented matrix [A | B]. The goal is to transform this augmented matrix using elementary row operations—without changing the solution set—so that the left side becomes upper triangular: entries below the main diagonal become zero. In that situation, the right-hand side B doesn’t need to be treated as a separate object during the triangularization; it simply transforms along with the matrix. With an upper triangular matrix in hand, backward substitution becomes straightforward. The last row directly gives the last variable (for example, if the last row reads 3x3 = 1, then x3 = 1/3). That value is substituted into the row above to solve for x2, and the process continues until x1 is determined.
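The back-substitution step can be sketched in a few lines. This is a hypothetical helper, not code from the transcript; the matrix below uses the transcript's last row 3x3 = 1, with the other entries assumed for illustration.

```python
def back_substitute(U, b):
    """Solve Ux = b for an upper triangular U, starting from the bottom row."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the terms for variables already solved in rows below
        known = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - known) / U[i][i]   # divide by the diagonal pivot
    return x

# Last row reads 3*x3 = 1, so x3 = 1/3; the rows above are assumed values.
U = [[1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0],
     [0.0, 0.0, 3.0]]
x = back_substitute(U, [1.0, 1.0, 1.0])
```

Each pass up the matrix turns one row into an equation with a single unknown, exactly as described above.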

A worked example illustrates the mechanics. Starting from a 3-by-3 system, the augmented matrix is manipulated column by column. In the first column, multiples of the first row are subtracted from the second and third rows to force zeros beneath the pivot. After that first elimination pass, the algorithm effectively “locks in” the first column’s triangular structure and focuses on the remaining smaller submatrix (ignoring the first column and first row for the next stage). The second elimination step repeats the same idea: use the current pivot in the second row to eliminate the entry below it in the third row. Once the matrix reaches triangular form, back-substitution yields a unique solution, reported in the correct variable order.
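The column-by-column elimination pass can be sketched as follows. This is an illustrative implementation, not the transcript's worked numbers, and it assumes every pivot it meets is non-zero (no row exchanges).

```python
def eliminate(M):
    """Forward elimination on an augmented matrix [A | b] with square A.
    Assumes every pivot is non-zero, so no row exchanges are needed."""
    n = len(M)                          # number of rows / equations
    for k in range(n):                  # current pivot column
        for i in range(k + 1, n):       # rows below the pivot
            factor = M[i][k] / M[k][k]
            # subtract factor * (pivot row) to zero out the entry M[i][k]
            M[i] = [M[i][j] - factor * M[k][j] for j in range(len(M[i]))]
    return M

# Illustrative 3-by-3 augmented system [A | b]; after the pass, all
# entries below the diagonal of A are zero and back-substitution can run.
M = eliminate([[2.0, 1.0, 1.0,  5.0],
               [4.0, 3.0, 1.0, 11.0],
               [2.0, 2.0, 3.0, 10.0]])
```

Note how the right-hand side column is carried along inside `M`, so it transforms together with the coefficient matrix.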

The general algorithm extends beyond square matrices by aiming for row echelon form rather than strict upper triangular form. In each elimination stage, the pivot position must be non-zero; if the current leading entry is zero, row exchanges may be required to bring a non-zero entry into the pivot spot. If an entire leading column is zero, elimination in that column is already complete and the process moves on. The elimination continues iteratively until the matrix is reduced to row echelon form, at which point the system can be solved using the same back-substitution logic (with the understanding that the exact structure depends on whether the matrix is square or not).
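A sketch of this more general pass, including row exchanges and the all-zero-column skip (a hypothetical implementation assuming exact arithmetic; practical codes would also prefer large pivots for numerical stability):

```python
def row_echelon(M):
    """Reduce a (possibly non-square) matrix M to row echelon form.
    Swaps rows when the pivot spot is zero; skips a column entirely
    when it is zero at and below the current row."""
    rows, cols = len(M), len(M[0])
    r = 0                                      # next pivot row
    for c in range(cols):                      # scan columns left to right
        # find a non-zero entry in column c, at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                           # all-zero column: move on
        M[r], M[pivot] = M[pivot], M[r]        # row exchange if needed
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            M[i] = [M[i][j] - factor * M[r][j] for j in range(cols)]
        r += 1
        if r == rows:
            break                              # no rows left to eliminate
    return M
```

For example, a matrix whose first pivot spot is zero gets its rows exchanged before elimination proceeds.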

Finally, the transcript connects Gaussian elimination to matrix factorizations: LU decomposition and PLU decomposition are described as alternative viewpoints on the same elimination process, reinforcing that the core elimination steps underpin multiple computational strategies.

Cornell Notes

Gaussian elimination solves Ax = B by converting the augmented matrix [A | B] into a form with zeros below pivot positions. Row operations preserve the solution set while systematically eliminating entries column by column, shrinking the active submatrix after each stage. When the process reaches upper triangular form (for square cases) or row echelon form (more general), the remaining unknowns are found via backward substitution starting from the bottom equation. If a pivot entry is zero, row exchanges may be needed to obtain a non-zero leading element; if an entire leading column is zero, elimination skips ahead. The same elimination logic also underlies LU and PLU decompositions.

Why does creating zeros below a pivot make the system easier to solve?

Once zeros appear below the pivot positions, the coefficient matrix becomes upper triangular (or row echelon). That structure means the last equation involves only the last variable, so it can be solved directly. Then that value is substituted into the row above, which reduces the next equation to one unknown, and the process repeats upward until all variables are determined.

How do row operations avoid changing the solution set?

The elimination uses elementary row operations—like adding a multiple of one row to another or subtracting a multiple—to transform the augmented matrix. These operations correspond to algebraic manipulations that keep the set of solutions to Ax = B unchanged, even though the matrix entries change. The transcript emphasizes that the solution set of the transformed system matches the original system.
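A quick numerical illustration of this invariance (the values are assumed, not from the transcript): a solution of the original system still solves the system after a row operation is applied to A and b together.

```python
def residual(A, b, x):
    """Entries of Ax - b; all zeros exactly when x solves the system."""
    return [sum(a * xi for a, xi in zip(row, x)) - bi
            for row, bi in zip(A, b)]

A = [[1.0, 2.0], [3.0, 4.0]]
b = [5.0, 6.0]
x = [-4.0, 4.5]                      # solves the original 2x2 system

# Row operation: row 2 <- row 2 - 3 * row 1, applied to A and b together
A2 = [A[0], [A[1][j] - 3 * A[0][j] for j in range(2)]]
b2 = [b[0], b[1] - 3 * b[0]]

# x produces a zero residual for both the original and transformed system
```

The key point is that the operation is applied to the whole augmented row, so the equation it represents remains equivalent.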

What is the elimination step in the first column of a 3×3 example?

The method keeps the first row fixed and eliminates the entries below the first pivot by subtracting appropriate multiples of the first row from the lower rows. For instance, if the second row has a 2 where the first row has a 1, subtracting 2 times the first row from the second row forces that position to become zero. The same idea applies to the third row using a multiple that cancels the entry under the pivot.
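In code, one such step looks like this (the row values are assumed for illustration; the multiple is the entry under the pivot divided by the pivot):

```python
row1 = [1.0, 2.0, 3.0]               # pivot row, pivot = 1
row2 = [2.0, 5.0, 8.0]               # entry under the pivot = 2
factor = row2[0] / row1[0]           # multiple to subtract: 2 / 1 = 2
row2 = [b - factor * a for a, b in zip(row1, row2)]
# row2 is now [0.0, 1.0, 2.0]: the entry under the pivot is zero
```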

Why does the algorithm “ignore” the first column after the first elimination pass?

After the first column is successfully transformed so that everything below the pivot is zero, the remaining unknowns can be solved using only the smaller submatrix formed by the remaining rows and columns. The transcript describes this as reducing the problem to a smaller matrix, with the already-finished first row and column (marked by a red box in the video) left untouched from then on.

What role do row exchanges play in Gaussian elimination?

Each elimination stage requires a non-zero pivot in the current leading position because the algorithm divides by that pivot to compute the multiples used for elimination. If the pivot is zero, row exchanges rearrange the rows to bring a non-zero entry into the pivot spot. If the entire leading column is zero, elimination in that column is already complete and the algorithm moves to the next stage.

How does backward substitution connect to the triangular form produced by elimination?

Backward substitution starts at the bottom row of the triangular (or echelon) form. The bottom row typically contains only one variable, so it yields that variable immediately. Then the computed value is substituted into the row above, turning it into an equation with one unknown, and the process continues upward until the full solution vector is obtained in the correct variable order.

Review Questions

  1. In what exact way does the structure of an upper triangular matrix make backward substitution work?
  2. What conditions determine whether row exchanges are necessary during elimination?
  3. How does the algorithm reduce the problem size after completing elimination in one column?

Key Points

  1. Gaussian elimination solves Ax = B by applying row operations to the augmented matrix [A | B] to create zeros below pivot positions.

  2. Upper triangular form enables immediate solving of the last variable and then sequentially solving the remaining variables via backward substitution.

  3. After eliminating entries in one column, the algorithm focuses on the smaller remaining submatrix, effectively shrinking the active problem.

  4. Row echelon form generalizes upper triangular form and supports elimination for non-square matrices.

  5. A non-zero pivot is required for elimination; if the pivot is zero, row exchanges can reposition a non-zero entry into the pivot spot.

  6. If a leading column contains only zeros, that elimination step is skipped because nothing can be eliminated there.

  7. LU and PLU decompositions are presented as alternative perspectives built on the same elimination process.

Highlights

Creating zeros below pivot positions is the central move that turns a system into a form solvable by backward substitution.
Once the matrix is triangular, the last equation directly determines the last variable, and each substitution step reduces the next equation to one unknown.
Gaussian elimination proceeds iteratively: eliminate in one column, then move to the smaller remaining submatrix.
Row exchanges handle the practical issue of a zero pivot, while an all-zero leading column signals that elimination can move on.
