Linear Algebra 39 | Gaussian Elimination
Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Gaussian elimination solves Ax = B by applying row operations to the augmented matrix [A | B] to create zeros below pivot positions.
Briefing
Gaussian elimination turns a system of linear equations into a simpler, nearly solved form by using row operations to create zeros below a leading pivot, reducing the problem step by step until only back-substitution remains. The payoff is practical: once the coefficient matrix is in upper triangular form (or its more general cousin, row echelon form), the values of the unknowns can be found starting from the bottom equation and moving upward.
The method starts by rewriting a system Ax = B as an augmented matrix [A | B]. The goal is to transform this augmented matrix using elementary row operations—without changing the solution set—so that the left side becomes upper triangular: entries below the main diagonal become zero. In that situation, the right-hand side B doesn’t need to be treated as a separate object during the triangularization; it simply transforms along with the matrix. With an upper triangular matrix in hand, backward substitution becomes straightforward. The last row directly gives the last variable (for example, if the last row reads 3x₃ = 1, then x₃ = 1/3). That value is substituted into the row above to solve for x₂, and the process continues until x₁ is determined.
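The backward-substitution step can be sketched in a few lines of plain Python. This is a minimal illustration, not the video's own code: U is an upper triangular matrix stored as a list of rows, b is the transformed right-hand side, and all diagonal (pivot) entries are assumed non-zero.

```python
# Back substitution on an upper triangular system Ux = b.
# Assumes every diagonal entry U[i][i] is non-zero.
def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # start from the bottom row
        # Subtract the contributions of already-solved variables,
        # then divide by the pivot entry on the diagonal.
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x
```

For instance, the triangular system 2x₁ + x₂ = 5, 3x₂ = 3 solves bottom-up to x₂ = 1, then x₁ = 2, exactly as the loop proceeds.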
A worked example illustrates the mechanics. Starting from a 3-by-3 system, the augmented matrix is manipulated column by column. In the first column, multiples of the first row are subtracted from the second and third rows to force zeros beneath the pivot. After that first elimination pass, the algorithm effectively “locks in” the first column’s triangular structure and focuses on the remaining smaller submatrix (ignoring the first column and first row for the next stage). The second elimination step repeats the same idea: use the current pivot in the second row to eliminate the entry below it in the third row. Once the matrix reaches triangular form, back-substitution yields a unique solution, reported in the correct variable order.
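The column-by-column elimination pass described above can be sketched as follows. This is an illustrative implementation (the worked numbers here are my own example, not the one from the video), and it assumes every pivot encountered is non-zero; row exchanges are handled in the general algorithm below.

```python
# Forward elimination on an augmented matrix [A | b] for an n-by-n A.
# M is a list of rows, each row holding n coefficients plus the
# right-hand-side entry. Assumes all pivots are non-zero.
def forward_eliminate(M):
    n = len(M)
    for k in range(n):                  # current pivot column
        for i in range(k + 1, n):       # rows below the pivot
            factor = M[i][k] / M[k][k]
            # Subtract a multiple of the pivot row; the right-hand
            # side (last entry) transforms along with the matrix.
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    return M
```

Running this on the augmented matrix of 2x₁ + x₂ − x₃ = 8, −3x₁ − x₂ + 2x₃ = −11, −2x₁ + x₂ + 2x₃ = −3 produces a triangular system whose last row reads −x₃ = 1, ready for back-substitution.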
The general algorithm extends beyond square matrices by aiming for row echelon form rather than strict upper triangular form. In each elimination stage, the pivot position must be non-zero; if the current leading entry is zero, row exchanges may be required to bring a non-zero entry into the pivot spot. If an entire leading column is zero, elimination in that column is already complete and the process moves on. The elimination continues iteratively until the matrix is reduced to row echelon form, at which point the system can be solved using the same back-substitution logic (with the understanding that the exact structure depends on whether the matrix is square or not).
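The general algorithm, including both special cases—swapping rows when the pivot entry is zero, and skipping a column that is entirely zero below the current row—can be sketched like this. Names and structure are my own illustration of the procedure described above, handling possibly non-square matrices.

```python
# Reduce a (possibly non-square) matrix M to row echelon form.
def row_echelon(M):
    rows, cols = len(M), len(M[0])
    r = 0                                    # current pivot row
    for c in range(cols):
        # Find a non-zero entry in column c at or below row r.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                         # column all zero: skip it
        M[r], M[pivot] = M[pivot], M[r]      # row exchange if needed
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= factor * M[r][j]
        r += 1
        if r == rows:
            break
    return M
```

Note how a zero leading entry triggers a row swap rather than a failure, and an all-zero column simply advances to the next column without consuming a pivot row.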
Finally, the transcript connects Gaussian elimination to matrix factorizations: LU decomposition and PLU decomposition are described as alternative viewpoints on the same elimination process, reinforcing that the core elimination steps underpin multiple computational strategies.
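To see how elimination underpins LU decomposition, note that the multipliers computed during elimination are exactly the below-diagonal entries of L. The sketch below (my own illustration, assuming no row exchanges are needed; PLU adds a permutation matrix P to cover that case) records each factor as it is used.

```python
# LU decomposition via Gaussian elimination: run elimination on a
# copy of A to build U, and store each multiplier in L, so that
# multiplying L and U recovers A. Assumes no row exchanges are needed.
def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            L[i][k] = factor             # remember the multiplier
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U
```

For A = [[2, 1], [4, 3]], the single multiplier is 4/2 = 2, giving L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 1]]: the same elimination steps, repackaged as a factorization.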
Cornell Notes
Gaussian elimination solves Ax = B by converting the augmented matrix [A | B] into a form with zeros below pivot positions. Row operations preserve the solution set while systematically eliminating entries column by column, shrinking the active submatrix after each stage. When the process reaches upper triangular form (for square cases) or row echelon form (more general), the remaining unknowns are found via backward substitution starting from the bottom equation. If a pivot entry is zero, row exchanges may be needed to obtain a non-zero leading element; if an entire leading column is zero, elimination skips ahead. The same elimination logic also underlies LU and PLU decompositions.
Why does creating zeros below a pivot make the system easier to solve?
How do row operations avoid changing the solution set?
What is the elimination step in the first column of a 3×3 example?
Why does the algorithm “ignore” the first column after the first elimination pass?
What role do row exchanges play in Gaussian elimination?
How does backward substitution connect to the triangular form produced by elimination?
Review Questions
- In what exact way does the structure of an upper triangular matrix make backward substitution work?
- What conditions determine whether row exchanges are necessary during elimination?
- How does the algorithm reduce the problem size after completing elimination in one column?
Key Points
1. Gaussian elimination solves Ax = B by applying row operations to the augmented matrix [A | B] to create zeros below pivot positions.
2. Upper triangular form enables immediate solving of the last variable and then sequentially solving the remaining variables via backward substitution.
3. After eliminating entries in one column, the algorithm focuses on the smaller remaining submatrix, effectively shrinking the active problem.
4. Row echelon form generalizes upper triangular form and supports elimination for non-square matrices.
5. A non-zero pivot is required for elimination; if the pivot is zero, row exchanges can reposition a non-zero entry into the pivot spot.
6. If a leading column contains only zeros, that elimination step is skipped because nothing can be eliminated there.
7. LU and PLU decompositions are presented as alternative perspectives built on the same elimination process.