Multivariable Calculus 30 | Example for Lagrange Multipliers


Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Combine multiple constraints into a vector function G(x) so the constraint set is G(x)=0.

Briefing

Lagrange multipliers pin down exactly where a linear function reaches its highest and lowest values on a curved constraint set: the cylinder–plane intersection. The setup uses a function on R^3, f(x1,x2,x3)=2x1+3x2+2x3, and a constraint defined by two equations: x1^2+x2^2=2 (a cylinder of radius √2 around the x3-axis) and x1+x3=1 (a plane). Their intersection is a closed, bounded curve (an ellipse); the goal is to find candidate points for extrema of f along that curve.

The method requires two ingredients: regularity of the constraint and a “gradient balance” condition. The two constraints are packaged into a vector-valued map G:R^3→R^2 with components (x1^2+x2^2−2, x1+x3−1). Checking the Jacobian of G, the rank is 2 everywhere on the constraint set. The only potential rank drop would occur when x1=x2=0, but that cannot happen on G=0 because the first constraint would then force −2=0, a contradiction. With regularity secured, the necessary condition for constrained extrema becomes

∇f(x)=λ1∇G1(x)+λ2∇G2(x).

Because f is linear, ∇f is constant: (2,3,2). Meanwhile, ∇G1=(2x1,2x2,0) and ∇G2=(1,0,1). Solving the resulting system yields three gradient component equations plus the two constraint equations. The third gradient component immediately gives 2=λ2, letting λ2 be eliminated. Subtracting 2 from the first component equation forces 2λ1x1=0, so either x1=0 or λ1=0. The alternative λ1=0 fails because it would make the second component equation 3=0, impossible. Therefore x1 must equal 0.

With x1=0, the constraints simplify: x1^2+x2^2=2 becomes x2^2=2, giving x2=±√2, and x1+x3=1 becomes x3=1. That produces exactly two candidate points on the constraint curve: (0,√2,1) and (0,−√2,1). Evaluating f at these points shows which is which: f(0,√2,1)=3√2+2 is the maximum, while f(0,−√2,1)=−3√2+2 is the minimum.
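The whole derivation can be double-checked symbolically. Here is a minimal sketch with sympy (not part of the original video; the symbol names x1, x2, x3, l1, l2 are illustrative):

```python
# A minimal sympy sketch of the full Lagrange system; symbol names
# (x1, x2, x3, l1, l2) are illustrative, not from the original video.
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2', real=True)

# Gradient condition grad f = l1*grad G1 + l2*grad G2, component-wise,
# together with the two constraint equations.
eqs = [
    sp.Eq(2, 2*l1*x1 + l2),   # first component
    sp.Eq(3, 2*l1*x2),        # second component
    sp.Eq(2, l2),             # third component
    sp.Eq(x1**2 + x2**2, 2),  # cylinder
    sp.Eq(x1 + x3, 1),        # plane
]
sols = sp.solve(eqs, [x1, x2, x3, l1, l2], dict=True)
points = sorted((s[x1], s[x2], s[x3]) for s in sols)
# Exactly the two candidates: (0, -sqrt(2), 1) and (0, sqrt(2), 1).
```

Solving the five equations in five unknowns reproduces the two candidate points found by hand.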

Finally, the argument is completed using compactness: the constraint set {x : G(x)=0} is a closed, bounded curve and hence compact, so the continuous function f attains both a maximum and a minimum on it. Since only two candidates satisfy the Lagrange condition, the lower value must be the minimum and the higher value the maximum. The result is a clean demonstration of how Lagrange multipliers convert a geometric optimization problem into an algebraic system whose solutions identify the extremal points.

Cornell Notes

A linear objective f(x1,x2,x3)=2x1+3x2+2x3 is optimized on the intersection of a cylinder and a plane: x1^2+x2^2=2 and x1+x3=1. The constraints are combined into G(x)=(x1^2+x2^2−2, x1+x3−1), and the Jacobian rank is checked to be 2 everywhere on the constraint set, so Lagrange multipliers apply. The gradient condition ∇f=λ1∇G1+λ2∇G2 leads to a solvable system; the equations force x1=0, then x2=±√2 and x3=1. Evaluating f at the two candidates shows (0,−√2,1) gives the minimum and (0,√2,1) gives the maximum. Compactness ensures these candidates are the true extrema.

How are the two constraints turned into a single Lagrange-multiplier setup?

The constraints x1^2+x2^2=2 and x1+x3=1 are packaged into a vector function G:R^3→R^2 by setting G(x)=(G1(x),G2(x)) where G1(x)=x1^2+x2^2−2 and G2(x)=x1+x3−1. The constraint set is then exactly {x∈R^3 : G(x)=0}. This matches the Lagrange-multiplier requirement of solving for points where G(x)=0 and ∇f is a linear combination of ∇G1 and ∇G2.
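As a tiny illustration of the packaging step, one could write G as a single vector-valued function (the helper name G and the sample points below are illustrative assumptions, not from the original video):

```python
# Both constraints packaged as one map G: R^3 -> R^2; the constraint
# curve is exactly the set where G returns (0, 0). Names are illustrative.
import sympy as sp

def G(x1, x2, x3):
    return (x1**2 + x2**2 - 2, x1 + x3 - 1)

on_curve = G(0, sp.sqrt(2), 1)   # satisfies both constraints -> (0, 0)
off_curve = G(1, 1, 1)           # on the cylinder but off the plane -> (0, 1)
```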

Why does the Jacobian rank check matter, and what rank is required here?

Lagrange multipliers require a regularity condition: the Jacobian of G must have maximal rank on the constraint set. Here G maps R^3 to R^2, so maximal rank is 2. Computing the Jacobian columns from G1 and G2 gives a rank drop only when x1=x2=0. But that cannot occur on the constraint set because G1=0 would then require x1^2+x2^2−2=−2=0, impossible. So every point satisfying G(x)=0 has rank 2, and the method applies.
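The rank computation can be sketched in sympy as follows (the test points chosen below are illustrative; any point with x1 or x2 nonzero behaves like the first one):

```python
# A minimal sketch of the regularity check: build the Jacobian of G and
# compare its rank at a typical point and at the only degenerate point.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
G = sp.Matrix([x1**2 + x2**2 - 2, x1 + x3 - 1])
J = G.jacobian([x1, x2, x3])  # rows: grad G1 = (2x1, 2x2, 0), grad G2 = (1, 0, 1)

# The rank can only drop when x1 = x2 = 0, which is impossible on G = 0.
rank_typical = J.subs({x1: 1, x2: 1, x3: 0}).rank()     # -> 2
rank_degenerate = J.subs({x1: 0, x2: 0, x3: 1}).rank()  # -> 1
```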

What does the gradient equation look like for this specific problem?

The objective has constant gradient ∇f=(2,3,2). The constraint gradients are ∇G1=(2x1,2x2,0) and ∇G2=(1,0,1). The Lagrange condition is (2,3,2)=λ1(2x1,2x2,0)+λ2(1,0,1). Matching components yields: 2=2λ1x1+λ2, 3=2λ1x2, and 2=λ2, along with the two constraint equations x1^2+x2^2=2 and x1+x3=1.

How does the system force x1=0?

From the third component equation, 2=λ2. Substitute λ2=2 into the first component equation: 2=2λ1x1+2, so 2λ1x1=0. That gives two algebraic possibilities: x1=0 or λ1=0. If λ1=0, then the second component equation becomes 3=2λ1x2=0, a contradiction. Hence λ1≠0 and x1 must be 0.
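The case split can also be verified mechanically: adding l1 = 0 as an extra equation makes the system unsolvable. A sketch with sympy (symbol names illustrative):

```python
# The branch l1 = 0 has no solutions, so x1 = 0 is forced.
import sympy as sp

x1, x2, l1, l2 = sp.symbols('x1 x2 l1 l2', real=True)

# Component equations of the gradient condition, plus the extra equation l1 = 0.
system = [2*l1*x1 + l2 - 2, 2*l1*x2 - 3, l2 - 2, l1]
branch = sp.solve(system, [x1, x2, l1, l2], dict=True)
# With l1 = 0 the second equation reads -3 = 0, so the system is inconsistent.
```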

Once x1=0, how are the two candidate points determined and how are they classified?

With x1=0, the constraints become x2^2=2 (from x1^2+x2^2=2) and x3=1 (from x1+x3=1). Thus x2=±√2, giving two candidates: (0,√2,1) and (0,−√2,1). Evaluating f shows f(0,√2,1)=2+3√2 is larger, so it is the maximum; f(0,−√2,1)=2−3√2 is smaller, so it is the minimum. Compactness of the constraint set guarantees these candidates are the true extrema.
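The final classification step amounts to comparing f at the two candidates; a short sketch (the helper name f and the candidate tuples are illustrative):

```python
# Evaluate the objective at the two Lagrange candidates and classify them.
import sympy as sp

def f(x1, x2, x3):
    return 2*x1 + 3*x2 + 2*x3

candidates = [(0, sp.sqrt(2), 1), (0, -sp.sqrt(2), 1)]
values = {p: f(*p) for p in candidates}

maximum = max(values.values())  # 2 + 3*sqrt(2), attained at (0,  sqrt(2), 1)
minimum = min(values.values())  # 2 - 3*sqrt(2), attained at (0, -sqrt(2), 1)
```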

Review Questions

  1. What regularity (Jacobian rank) condition is needed for Lagrange multipliers when there are two constraints in R^3, and how is it verified here?
  2. Write the Lagrange multiplier gradient equation for this problem and identify which component equations lead to λ2=2 and x1=0.
  3. After solving the constraints with x1=0, what are the two candidate points and which one gives the maximum value of f?

Key Points

  1. Combine multiple constraints into a vector function G(x) so the constraint set is G(x)=0.

  2. Check the Jacobian rank of G on the constraint set; here the required maximal rank is 2 and it holds everywhere on G=0.

  3. Use the necessary condition ∇f=λ1∇G1+λ2∇G2 to turn constrained optimization into a system of equations.

  4. For this linear f, ∇f is constant, simplifying the gradient-balance equations.

  5. Component-wise matching of ∇f=λ1∇G1+λ2∇G2 can quickly eliminate multipliers (here λ2=2).

  6. The algebra forces x1=0, then the constraints yield x2=±√2 and x3=1, producing exactly two candidates.

  7. Evaluating f at the candidates identifies the minimum and maximum values, with compactness ensuring no other extrema exist.

Highlights

The constraint set is the intersection of x1^2+x2^2=2 (a cylinder of radius √2) and x1+x3=1 (a plane), and the extrema of f occur at only two points.
A Jacobian rank check rules out the only potential rank-drop case (x1=x2=0) because it contradicts the constraint x1^2+x2^2=2.
The gradient condition forces λ2=2 and then x1=0; the remaining constraints give x2=±√2 and x3=1.
The maximum value is f(0,√2,1)=2+3√2 and the minimum is f(0,−√2,1)=2−3√2.