
PLU decomposition - An Example [dark version]

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

PLU decomposition factors a matrix as A = P · L · U, where P captures row swaps, L is lower triangular, and U is row echelon form.

Briefing

PLU decomposition extends LU decomposition to cases where the first nonzero pivot in a column appears after some zeros. Instead of failing when a pivot is zero, the method uses a permutation matrix P to record the row swaps needed to bring a valid pivot into position, producing a row echelon form U (upper triangular when the matrix is square) and a lower triangular matrix L. The payoff is a reliable factorization for rectangular matrices: any suitable matrix A can be written as A = P · L · U.
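The factorization described here can be checked numerically. A minimal sketch using SciPy's `scipy.linalg.lu`, which returns exactly the factors A = P · L · U; the 4×5 matrix below is a hypothetical stand-in (the video's actual matrix is not reproduced in this summary), chosen so the first column starts with a zero and forces a row swap:

```python
import numpy as np
from scipy.linalg import lu

# Hypothetical 4x5 matrix with a zero in the first pivot position
A = np.array([[0., 2., 1., 4., 1.],
              [1., 1., 0., 2., 3.],
              [2., 2., 0., 5., 1.],
              [1., 1., 0., 2., 7.]])

P, L, U = lu(A)                    # SciPy returns factors with A = P @ L @ U
assert np.allclose(P @ L @ U, A)   # the product recovers A
assert np.allclose(L, np.tril(L))  # L is lower triangular
```

For a rectangular input, P and L come back square (4×4 here) while U keeps A's 4×5 shape, matching the description above.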

The walkthrough starts by defining the three ingredients. L is a lower triangular square matrix, U is the row echelon form (upper triangular in the square case), and P is a permutation matrix that captures every row exchange. For the example, A is chosen as a 4×5 matrix. A direct LU-style elimination runs into trouble immediately: the first column begins with a zero pivot, so elimination can’t proceed. The fix is to scan the first column for a nonzero entry and swap rows to move that entry into the pivot position.

The first swap exchanges row 1 with row 2. That swap is encoded by a permutation matrix P1, which is built from the identity matrix with the two relevant rows flipped. Applying P1 on the left performs the row exchange so the pivot lands correctly. With the pivot in place, Gaussian elimination proceeds to create zeros below it. The multipliers used to eliminate entries below the pivot are placed into L: for each target row, the algorithm subtracts a multiple of the pivot row, and the corresponding multiple becomes an entry in L.
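Building a swap matrix from the identity and applying it on the left can be sketched in a few lines (the 4-row matrix here is a hypothetical illustration, not the video's example):

```python
import numpy as np

# Build P1 by flipping rows 1 and 2 of the 4x4 identity
P1 = np.eye(4)
P1[[0, 1]] = P1[[1, 0]]

# A hypothetical matrix whose first pivot position holds a zero
A = np.array([[0., 2.],
              [1., 1.],
              [2., 2.],
              [1., 1.]])

# Left-multiplying by P1 exchanges rows 1 and 2,
# moving a nonzero entry into the pivot position
swapped = P1 @ A
assert np.allclose(swapped, A[[1, 0, 2, 3]])
assert swapped[0, 0] != 0  # the pivot is now nonzero
```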

Next, the method moves to the second column. Here the pivot is already nonzero, so no row exchange is required. Elimination again generates zeros below the pivot, and the elimination multipliers populate L. The third column is more delicate: there are no pivots in that column, so the algorithm skips it and advances to the next column.

In the fourth column, a pivot exists below the current row position, so another row exchange is needed. A new permutation matrix P2 swaps the appropriate rows (a permutation between rows 3 and 4). Crucially, the procedure must keep the factorization consistent: the same permutation is applied to the partially built L on both sides, since multiplying on the left swaps its rows while multiplying on the right swaps the corresponding columns. The transcript emphasizes that after applying the permutation on both sides in the right way, L remains lower triangular, even though the intermediate ordering changes.

Once the pivots are aligned, the algorithm continues elimination. In the final steps, the needed zeros are already present, so no further elimination changes are required. The result is a row echelon form U on the right, a lower triangular L, and a combined permutation matrix P such that the original matrix satisfies A = P · L · U. The example is presented as a template: apply the same pivot-search, row-swap, and multiplier-recording steps to other matrices to obtain their PLU decomposition.
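The full pivot-search, row-swap, and multiplier-recording loop described above can be sketched as a small function. This is a generic partial-pivoting implementation under the summary's description, not the video's exact worked example; the test matrix is hypothetical:

```python
import numpy as np

def plu(A):
    """PLU decomposition of an m-by-n matrix: returns P (permutation),
    L (unit lower triangular), U (row echelon) with A = P @ L @ U.
    A sketch of the procedure described in the text."""
    m, n = A.shape
    U = A.astype(float).copy()
    L = np.eye(m)
    perm = np.arange(m)          # record of the cumulative row order
    row = 0
    for col in range(n):
        if row >= m:
            break
        # search at or below the current row for a nonzero pivot
        candidates = np.flatnonzero(np.abs(U[row:, col]) > 1e-12)
        if candidates.size == 0:
            continue             # no pivot in this column: skip it
        p = row + candidates[0]
        if p != row:
            # swap rows of U, of the multipliers recorded so far in L,
            # and of the permutation record
            U[[row, p]] = U[[p, row]]
            L[[row, p], :row] = L[[p, row], :row]
            perm[[row, p]] = perm[[p, row]]
        # eliminate below the pivot, storing each multiplier in L
        for i in range(row + 1, m):
            L[i, row] = U[i, col] / U[row, col]
            U[i, col:] -= L[i, row] * U[row, col:]
        row += 1
    P = np.eye(m)[perm].T        # undoes the recorded swaps: A = P @ L @ U
    return P, L, U

# Hypothetical 4x5 matrix with a zero in the first pivot position
A = np.array([[0., 2., 1., 4., 1.],
              [1., 1., 0., 2., 3.],
              [2., 2., 0., 5., 1.],
              [1., 1., 0., 2., 7.]])
P, L, U = plu(A)
assert np.allclose(P @ L @ U, A)
```

Note how the row swap is also applied to the columns of L filled so far (`L[[row, p], :row]`): this is the both-sides bookkeeping from the walkthrough that keeps L lower triangular.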

Cornell Notes

PLU decomposition factors a matrix A into A = P · L · U, where L is lower triangular, U is row echelon form (upper triangular if A is square), and P is a permutation matrix that records row swaps. When a column’s current pivot position contains a zero, elimination stalls; the method searches downward for a nonzero entry and swaps rows to bring a valid pivot into place. The elimination multipliers used to create zeros below each pivot are stored in L. If a pivot appears later in the column, additional permutation steps are needed, and the permutation’s effect must be handled so the triangular structure of L is preserved. The example walks through these swaps and multiplier placements on a 4×5 matrix until U and L are fully determined.

Why does LU decomposition fail in the example, and how does PLU fix it?

LU elimination fails when the pivot position in a column is zero, because elimination needs a nonzero pivot to scale the pivot row and cancel entries below it. PLU fixes this by introducing a permutation matrix P that swaps rows to move a nonzero entry into the pivot position before elimination begins. In the example, the first column’s initial pivot is zero, so rows 1 and 2 are exchanged to place a nonzero pivot correctly.

What exactly does the permutation matrix P represent in PLU decomposition?

P is a permutation matrix that encodes every row exchange performed during pivoting. For a swap of two rows, P is formed by taking an identity matrix and flipping the two corresponding rows. The transcript notes an important property: squaring such a swap-permutation matrix returns the identity, reflecting that swapping the same two rows twice restores the original order. Applying P on the left performs the row swaps on the matrix being eliminated.
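The self-inverse property of a swap-permutation matrix is easy to confirm numerically (a quick check, using a hypothetical 4×4 case):

```python
import numpy as np

P1 = np.eye(4)
P1[[0, 1]] = P1[[1, 0]]                 # flip rows 1 and 2 of the identity
assert np.allclose(P1 @ P1, np.eye(4))  # swapping twice restores the order
```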

How are the entries of L determined during elimination?

As Gaussian elimination creates zeros below each pivot, the algorithm subtracts a multiple of the pivot row from a target row. Those multiples (the factors used to eliminate entries) are recorded into L. For instance, when eliminating an entry in a lower row, the multiple used to subtract the pivot row becomes the corresponding L entry. The transcript repeatedly emphasizes that only these multiplier values change in L as elimination proceeds column by column.
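A minimal 2×2 illustration of that bookkeeping (the numbers are hypothetical): the multiplier used to cancel the sub-pivot entry is exactly the value stored in L, and storing it is what lets L · U reproduce the original matrix.

```python
import numpy as np

A = np.array([[2., 4.],
              [6., 7.]])
U = A.copy()
L = np.eye(2)
L[1, 0] = U[1, 0] / U[0, 0]   # multiplier 6/2 = 3, recorded in L
U[1] -= L[1, 0] * U[0]        # subtract 3x the pivot row: entry becomes 0
assert np.allclose(L @ U, A)  # the stored multiplier undoes the elimination
```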

What happens when a column contains no pivot candidates?

If an entire column segment has no valid pivot (the transcript describes encountering a third column with no pivots), the algorithm skips that column and moves to the next one. The elimination process continues in later columns where a pivot can be found, rather than forcing elimination where no pivot exists.

Why does the example involve both row and column exchanges when applying a later permutation?

When a pivot is found below the current position, a row swap is required. To keep the factorization consistent, the permutation is applied to the partially built L on both sides: multiplying on the left swaps its rows, while multiplying on the right swaps the corresponding columns. Handling both sides in the correct sequence preserves the lower triangular form of L. After the permutation is applied appropriately, the method continues elimination with the pivot aligned correctly.
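A small numerical check of why this works (the L below is a hypothetical partially built factor, with multipliers only in column 1 because later columns have not been eliminated yet):

```python
import numpy as np

# Partially built unit lower triangular L: multipliers only in column 1
L = np.array([[1., 0., 0., 0.],
              [2., 1., 0., 0.],
              [3., 0., 1., 0.],
              [4., 0., 0., 1.]])

P2 = np.eye(4)
P2[[2, 3]] = P2[[3, 2]]            # the swap between rows 3 and 4

M = P2 @ L @ P2                    # apply the permutation on both sides
assert np.allclose(M, np.tril(M))  # still lower triangular
```

The conjugation swaps both the rows and the columns of L; because the swapped rows carry multipliers only in columns already left behind by elimination, no entry lands above the diagonal.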

How does the final factorization A = P · L · U emerge from the steps?

Each pivoting step updates P to reflect the row exchanges, each elimination step fills in L with the elimination multipliers, and the remaining upper structure after elimination becomes U in row echelon form. Once all pivot columns are processed (and skipped columns are handled), the original matrix A is recovered by multiplying P · L · U, matching the transcript’s final statement of the PLU decomposition.

Review Questions

  1. In a PLU decomposition, what triggers a row swap, and how does that change the matrices involved?
  2. During elimination, what numerical values are stored in L, and how are they computed from the pivot row?
  3. How does the algorithm behave when a column has no pivot, and what ensures the decomposition still completes correctly?

Key Points

  1. PLU decomposition factors a matrix as A = P · L · U, where P captures row swaps, L is lower triangular, and U is row echelon form.

  2. A zero pivot blocks standard LU elimination; PLU resolves this by searching for a nonzero entry in the column and swapping rows using P.

  3. The entries of L come directly from the elimination multipliers used to create zeros below each pivot.

  4. Columns with no pivot are skipped, while later columns continue the pivoting and elimination process.

  5. When a pivot appears below the current row position, additional permutation steps are required, and the permutation’s effect must be handled so L remains lower triangular.

  6. For rectangular matrices, U is row echelon form rather than strictly upper triangular, but the same pivoting logic applies.

Highlights

PLU decomposition turns a pivoting problem into a structured factorization by recording row swaps in a permutation matrix P.
Elimination multipliers used to cancel entries below pivots are stored in L, making L a direct log of the elimination process.
When a column has no pivot, the algorithm skips it and continues—no forced elimination occurs where pivots can’t be formed.
Later pivoting may require careful permutation handling so that the triangular structure of L is preserved.
The worked example ends with A = P · L · U, demonstrating how row echelon form U and lower triangular L emerge from the elimination steps.
