A constrained optimization problem is a problem of the form: maximize (or minimize) the function $f(x,y)$ subject to the condition $g(x,y) = 0$.
In some cases one can solve the equation $g(x,y) = 0$ for $y$ as a function of $x$ and then find the extrema of a one-variable function.
That is, if the equation $g(x,y) = 0$ is equivalent to $y = h(x)$, then we may set $\varphi(x) = f(x, h(x))$ and then find the values $x = x_0$ for which $\varphi$ achieves an extremum. The extrema of $f$ subject to the constraint are at the points $(x_0, h(x_0))$.
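The substitution method reduces everything to one variable, so the critical points of $\varphi$ can even be located numerically. A minimal sketch, using a hypothetical instance not taken from the text ($f(x,y) = xy$ subject to $x + y - 4 = 0$, so $h(x) = 4 - x$):

```python
# Hypothetical instance (not from the text): extrema of f(x, y) = x * y
# subject to x + y - 4 = 0, solved by substituting y = h(x) = 4 - x.
def f(x, y):
    return x * y

def h(x):
    # y as a function of x, obtained by solving the constraint for y
    return 4 - x

def phi(x):
    # the one-variable restriction of f to the constraint curve
    return f(x, h(x))

def dphi(x, eps=1e-6):
    # central finite-difference approximation to phi'(x)
    return (phi(x + eps) - phi(x - eps)) / (2 * eps)

def bisect_root(func, lo, hi, tol=1e-10):
    # bisection for a root of func, assuming a sign change on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x0 = bisect_root(dphi, -10.0, 10.0)  # critical point of phi
print(x0, h(x0), phi(x0))  # x0 ≈ 2, so the extremum is at (2, 2) with f = 4
```

Here $\varphi(x) = x(4 - x)$, whose single critical point $x_0 = 2$ is a maximum; the constrained extremum is $f(2,2) = 4$.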
Find the extrema of $f(x,y) = x^2 + y^2$ subject to $x + y = 2$.

We solve $x + y = 2$ for $y$: $y = 2 - x$. Set $\varphi(x) = f(x, 2 - x) = x^2 + (2 - x)^2$. Differentiating we have $\varphi'(x) = 2x - 2(2 - x) = 4x - 4$. Setting $\varphi'(x) = 0$, we must solve $4x - 4 = 0$, or $x = 1$. Differentiating again, $\varphi''(x) = 4$, so that $\varphi''(1) = 4 > 0$, which shows that $x = 1$ is a relative minimum of $\varphi$ and $(1, 1)$ is a relative minimum of $f$ subject to $x + y = 2$.
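As a quick numerical sanity check (assuming the example is minimizing $f(x,y) = x^2 + y^2$ on the line $x + y = 2$, with claimed constrained minimum at $(1,1)$), one can compare the candidate value against nearby points on the constraint line:

```python
# Sanity check, assuming the example minimizes f(x, y) = x**2 + y**2
# on the line x + y = 2 with claimed constrained minimum at (1, 1).
def f(x, y):
    return x**2 + y**2

candidate = f(1.0, 1.0)  # value at the claimed minimum

# sample nearby points ON the constraint line x + y = 2
samples = [f(x, 2.0 - x) for x in [0.5, 0.9, 0.99, 1.01, 1.1, 1.5]]

assert all(candidate < s for s in samples)  # (1, 1) beats its neighbors
print(candidate)  # 2.0
```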
Find the extrema of $f(x,y) = y$ subject to $y^2 - 2y - x^2 = 0$.

Using the quadratic formula to solve the constraint for $y$, we find
$$y = \frac{2 \pm \sqrt{4 + 4x^2}}{2} = 1 \pm \sqrt{1 + x^2}.$$
Substituting the above expression for $y$ in $f$, we must find the extrema of
$$\varphi_+(x) = 1 + \sqrt{1 + x^2}$$
and
$$\varphi_-(x) = 1 - \sqrt{1 + x^2},$$
and differentiating gives
$$\varphi_\pm'(x) = \pm\frac{x}{\sqrt{1 + x^2}}.$$
Setting $\varphi_+'(x) = 0$ (respectively, $\varphi_-'(x) = 0$) we find $x = 0$ in each case. So the potential extrema are $(0, 2)$ and $(0, 0)$.

Evaluating $\varphi_\pm''(x) = \pm(1 + x^2)^{-3/2}$ at $x = 0$, we see that $\varphi_+''(0) = 1 > 0$, so that $(0, 2)$ is a relative minimum, and as $\varphi_-''(0) = -1 < 0$, $(0, 0)$ is a relative maximum. (Even though $f(0,2) = 2 > 0 = f(0,0)$: the value at the relative minimum exceeds the value at the relative maximum!)
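The two-branch behavior can be confirmed numerically. This sketch assumes the reconstruction above: constraint $y^2 - 2y - x^2 = 0$, so $y = 1 \pm \sqrt{1 + x^2}$, and $f(x,y) = y$:

```python
import math

# Numerical check of the two-branch example, assuming the constraint is
# y**2 - 2*y - x**2 = 0 (so y = 1 +/- sqrt(1 + x**2)) and f(x, y) = y.
def phi_plus(x):
    return 1 + math.sqrt(1 + x**2)   # upper branch of the constraint

def phi_minus(x):
    return 1 - math.sqrt(1 + x**2)   # lower branch of the constraint

xs = [i / 100 for i in range(-200, 201)]  # grid on [-2, 2]
min_plus = min(phi_plus(x) for x in xs)    # minimum on the upper branch
max_minus = max(phi_minus(x) for x in xs)  # maximum on the lower branch

# the relative minimum value (2) exceeds the relative maximum value (0)
print(min_plus, max_minus)  # 2.0 0.0
```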
If $f$ is a (sufficiently smooth) function in two variables and $g$ is another function in two variables, and we define $F(x, y, \lambda) = f(x,y) + \lambda g(x,y)$, and $(x_0, y_0)$ is a relative extremum of $f$ subject to $g(x,y) = 0$, then there is some value $\lambda_0$ such that $\nabla F(x_0, y_0, \lambda_0) = 0$.
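The condition $\nabla F = 0$ can be illustrated numerically. A sketch with a hypothetical pair not taken from the text: $f(x,y) = x + 2y$ subject to $g(x,y) = x^2 + y^2 - 5 = 0$, which has a constrained maximum at $(1, 2)$ with multiplier $\lambda_0 = -\tfrac{1}{2}$:

```python
# Hypothetical pair (not from the text): f(x, y) = x + 2y subject to
# g(x, y) = x**2 + y**2 - 5 = 0.  The constrained maximum is at (1, 2)
# with multiplier lam = -1/2, so grad F should vanish there.
def F(x, y, lam):
    return (x + 2 * y) + lam * (x**2 + y**2 - 5)

def grad_F(x, y, lam, eps=1e-6):
    # central finite-difference approximation to (F_x, F_y, F_lambda)
    return (
        (F(x + eps, y, lam) - F(x - eps, y, lam)) / (2 * eps),
        (F(x, y + eps, lam) - F(x, y - eps, lam)) / (2 * eps),
        (F(x, y, lam + eps) - F(x, y, lam - eps)) / (2 * eps),
    )

g = grad_F(1.0, 2.0, -0.5)
print(g)  # all three components are numerically zero
```

Indeed $F_x = 1 + 2\lambda x$, $F_y = 2 + 2\lambda y$, and $F_\lambda = x^2 + y^2 - 5$ all vanish at $(1, 2, -\tfrac{1}{2})$.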
Find the extrema of the function $f(x,y) = x^2 + y^2$ subject to the constraint $x + y = 2$.

Set $F(x, y, \lambda) = x^2 + y^2 + \lambda(x + y - 2)$. Then
$$F_x = 2x + \lambda, \qquad F_y = 2y + \lambda, \qquad F_\lambda = x + y - 2.$$
Setting these equal to zero, we see from the third equation that $x + y = 2$, and from the first equation that $\lambda = -2x$, so that from the second equation $2y - 2x = 0$, implying that $y = x$. From the third equation, we obtain $x = y = 1$, so the extremum is at $(1, 1)$.
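The elimination steps can be mirrored directly in code. This sketch assumes the system reconstructed above: $2x + \lambda = 0$, $2y + \lambda = 0$, $x + y - 2 = 0$:

```python
# Following the elimination, assuming the three Lagrange equations are
# 2x + lam = 0, 2y + lam = 0, and x + y - 2 = 0.
x = 2 / 2          # from x + y = 2 once we know y = x
y = x              # from 2y - 2x = 0 (second minus first equation)
lam = -2 * x       # from 2x + lam = 0

# confirm that all three original equations hold at the solution
assert 2 * x + lam == 0
assert 2 * y + lam == 0
assert x + y - 2 == 0
print(x, y, lam)  # 1.0 1.0 -2.0
```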
Find the potential extrema of the function $f(x,y) = xy$ subject to the constraint that $x^2 + y^2 = 1$.

Setting $F(x, y, \lambda) = xy + \lambda(x^2 + y^2 - 1)$ and equating the partial derivatives of $F$ to zero, we obtain:
$$y + 2\lambda x = 0 \tag{1}$$
$$x + 2\lambda y = 0 \tag{2}$$
$$x^2 + y^2 - 1 = 0 \tag{3}$$
Multiplying the first line by $x$ and the second by $y$ we obtain:
$$xy + 2\lambda x^2 = 0 \qquad \text{and} \qquad xy + 2\lambda y^2 = 0.$$
Subtracting, we have
$$2\lambda(x^2 - y^2) = 0.$$
As $\lambda \neq 0$ (if $\lambda = 0$, the first two equations would give $x = y = 0$, contradicting the third), we conclude that $x^2 = y^2$, that is, $y = \pm x$. Substituting into the third equation, we have $2x^2 = 1$, so $x = \pm\frac{\sqrt{2}}{2}$.

So the potential extrema are at $\left(\pm\frac{\sqrt{2}}{2}, \pm\frac{\sqrt{2}}{2}\right)$ or $\left(\pm\frac{\sqrt{2}}{2}, \mp\frac{\sqrt{2}}{2}\right)$.
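A numerical cross-check of the candidates, assuming the example reconstructed above ($f(x,y) = xy$ on the unit circle): parametrize the circle as $(\cos t, \sin t)$ and scan for the extreme values, which should be $\pm\tfrac{1}{2}$ at points with $y = \pm x$.

```python
import math

# Cross-check, assuming f(x, y) = x*y on the unit circle x**2 + y**2 = 1:
# scan the parametrization (cos t, sin t) for the largest and smallest f.
best_max, best_min = None, None
n = 20000
for i in range(n):
    t = 2 * math.pi * i / n
    x, y = math.cos(t), math.sin(t)
    v = x * y
    if best_max is None or v > best_max[0]:
        best_max = (v, x, y)
    if best_min is None or v < best_min[0]:
        best_min = (v, x, y)

print(best_max)  # value ~ 0.5 at a point with y = x (same signs)
print(best_min)  # value ~ -0.5 at a point with y = -x (opposite signs)
```

Since $xy = \tfrac{1}{2}\sin 2t$ on the circle, the scan finds $\tfrac{1}{2}$ at same-sign candidates and $-\tfrac{1}{2}$ at opposite-sign candidates, matching the four points above.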