Introduction to Optimum Design, Part 3

The necessary conditions for the equality and inequality constraints can be summed up in what are commonly known as the Karush-Kuhn-Tucker (KKT) first-order necessary conditions, displayed in Theorem 4.6.

Theorem 4.6 Karush-Kuhn-Tucker (KKT) Optimality Conditions
Let x* be a regular point of the feasible set that is a local minimum for f(x) subject to h_i(x) = 0, i = 1 to p; g_j(x) ≤ 0, j = 1 to m. Then there exist Lagrange multipliers v* (a p-vector) and u* (an m-vector) such that the Lagrangian function is stationary with respect to x_j, v_i, u_j, and s_j at the point x*.

1. Lagrangian Function
$L(\mathbf{x}, \mathbf{v}, \mathbf{u}, \mathbf{s}) = f(\mathbf{x}) + \sum_{i=1}^{p} v_i h_i(\mathbf{x}) + \sum_{j=1}^{m} u_j \bigl( g_j(\mathbf{x}) + s_j^2 \bigr) = f(\mathbf{x}) + \mathbf{v}^T \mathbf{h}(\mathbf{x}) + \mathbf{u}^T \bigl( \mathbf{g}(\mathbf{x}) + \mathbf{s}^2 \bigr)$   (4.46a)

2. Gradient Conditions
$\dfrac{\partial L}{\partial x_k} = \dfrac{\partial f}{\partial x_k} + \sum_{i=1}^{p} v_i^* \dfrac{\partial h_i}{\partial x_k} + \sum_{j=1}^{m} u_j^* \dfrac{\partial g_j}{\partial x_k} = 0; \quad k = 1 \text{ to } n$   (4.46b)
$\dfrac{\partial L}{\partial v_i} = 0 \;\Rightarrow\; h_i(\mathbf{x}^*) = 0; \quad i = 1 \text{ to } p$   (4.47)
$\dfrac{\partial L}{\partial u_j} = 0 \;\Rightarrow\; g_j(\mathbf{x}^*) + s_j^2 = 0; \quad j = 1 \text{ to } m$   (4.48)

3. Feasibility Check for Inequalities
$s_j^2 \ge 0; \;\text{ or equivalently }\; g_j \le 0; \quad j = 1 \text{ to } m$   (4.49)

4. Switching Conditions
$\dfrac{\partial L}{\partial s_j} = 0 \;\Rightarrow\; 2 u_j^* s_j = 0; \quad j = 1 \text{ to } m$   (4.50)

5. Nonnegativity of Lagrange Multipliers for Inequalities
$u_j^* \ge 0; \quad j = 1 \text{ to } m$   (4.51)

6. Regularity Check
Gradients of the active constraints should be linearly independent. In that case the Lagrange multipliers for the constraints are unique.

It turns out that the necessary condition u ≥ 0 ensures that the gradients of the cost and constraint functions point in opposite directions. This way f cannot be reduced any further by stepping in the negative gradient direction without violating the constraint: any further reduction in the cost function leads to leaving the feasible region at the candidate minimum point. This can be observed in Fig. 4-19.

It is important to understand the use of the KKT conditions to (i) check the possible optimality of a given point, and (ii) determine candidate local minimum points. Note first from Eqs. (4.47) to (4.49) that the candidate minimum point must be feasible, so we must check all the constraints to ensure their satisfaction. The gradient conditions of Eq. (4.46b) must also be satisfied simultaneously. These conditions have a geometric meaning. To see it, rewrite Eq. (4.46b) as
$-\dfrac{\partial f}{\partial x_j} = \sum_{i=1}^{p} v_i^* \dfrac{\partial h_i}{\partial x_j} + \sum_{i=1}^{m} u_i^* \dfrac{\partial g_i}{\partial x_j}; \quad j = 1 \text{ to } n$   (4.52)
which shows that at the stationary point the negative gradient of the cost function on the left side (the steepest-descent direction) is a linear combination of the gradients of the constraints, with the Lagrange multipliers as the scalar parameters of the linear combination.

The m conditions in Eq. (4.50) are known as the switching conditions or complementary slackness conditions. They can be satisfied by setting either s_i = 0 (zero slack implies an active inequality, i.e., g_i = 0) or u_i = 0 (in this case g_i must be ≤ 0 to satisfy feasibility). These conditions determine several cases in actual calculations, and their use must be clearly understood. In Example 4.29 there was only one switching condition, which gave two possible cases: case 1, where the slack variable was zero, and case 2, where the Lagrange multiplier u for the inequality constraint was zero. Each of the two cases was solved for the unknowns. For general problems there is more than one switching condition in Eq. (4.50); the number of switching conditions equals the number of inequality constraints for the problem. Various combinations of these conditions can give many solution cases.
In general, with m inequality constraints, the switching conditions lead to 2^m distinct normal solution cases (the abnormal case is the one in which both u_i = 0 and s_i = 0). For each case we need to solve the remaining necessary conditions for candidate local minimum points. Depending on the functions of the problem, it may or may not be possible to solve the necessary conditions of each case analytically. If the functions are nonlinear, we must use numerical methods to find their roots; in that event each case may yield several candidate minimum points. We shall illustrate the use of the KKT conditions in several example problems.

In Example 4.29 there were only two variables, one Lagrange multiplier, and one slack variable. For general problems the unknowns are x, u, s, and v, which are n-, m-, m-, and p-dimensional vectors. There are thus (n + 2m + p) unknowns, and we need (n + 2m + p) equations to determine them. The equations needed for their solution are available in the KKT necessary conditions: counting the equations in Eqs. (4.46) to (4.51), we find that there are indeed (n + 2m + p) of them. These equations must be solved simultaneously for the candidate local minimum points. After the solutions are found, the remaining necessary conditions of Eqs. (4.49) and (4.51) must be checked. The conditions of Eq. (4.49) ensure feasibility of the candidate local minimum points with respect to the inequality constraints g_i(x) ≤ 0, i = 1 to m, and the conditions of Eq. (4.51) say that the Lagrange multipliers of the "≤ type" inequality constraints must be nonnegative.

Note that evaluation of s_i² essentially implies evaluation of the constraint function g_i(x), since s_i² = −g_i(x). This allows us to check feasibility of the candidate points with respect to the constraint g_i(x) ≤ 0. It is also important to note that if an inequality constraint g_i(x) ≤ 0 is inactive at the candidate minimum point x* [i.e., g_i(x*) < 0, or s_i² > 0], then the corresponding Lagrange multiplier u_i* = 0 to satisfy the switching condition of Eq. (4.50). If, however, it is active [i.e., g_i(x*) = 0], then the Lagrange multiplier must be nonnegative, u_i* ≥ 0. This condition ensures that there are no feasible directions with respect to the ith constraint g_i(x*) ≤ 0 at the candidate point x* along which the cost function can be reduced any further. Stated differently, it ensures that any reduction in the cost function at x* can occur only by stepping into the infeasible region for the constraint g_i(x*) ≤ 0.

Note further that the necessary conditions of Eqs. (4.46) to (4.51) are generally a nonlinear system of equations in the variables x, u, s, and v. It may not be easy to solve this system analytically, so we may have to use numerical methods, such as the Newton-Raphson method of Appendix C, to find the roots of the system. Fortunately, software such as Excel, MATLAB, Mathematica, and others is widely available for solving a nonlinear set of equations, and such programs are of great help in finding candidate local minimum points.
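As a minimal sketch of such a root-finding step (an illustration only, not the Appendix C implementation; the function name newton_system, the tolerance, and the iteration limit are arbitrary choices), a basic Newton-Raphson iteration for a square nonlinear system F(z) = 0, with z collecting the unknowns (x, v, u, s) of one switching case, could look like this in MATLAB:

function z = newton_system(F, J, z, tol, maxit)
% Basic Newton-Raphson iteration for a square nonlinear system F(z) = 0.
% F is a handle returning the residual vector at z; J returns its Jacobian.
for k = 1:maxit
    r = F(z);
    if norm(r) < tol          % equations (nearly) satisfied: accept the root
        return;
    end
    z = z - J(z) \ r;         % Newton update: z <- z - J(z)^(-1) * F(z)
end
warning('newton_system:noConvergence', 'no convergence in %d iterations', maxit);
end

Each switching case supplies its own F and J, and the roots found must still be screened against the feasibility and nonnegativity conditions of Eqs. (4.49) and (4.51).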
The following important points should be noted relative to the Karush-Kuhn-Tucker (KKT) first-order necessary conditions:
1. The KKT conditions are not applicable at points that are not regular. In those cases their use may still yield candidate minimum points, but the Lagrange multipliers are not unique.
2. Any point that does not satisfy the KKT conditions cannot be a local minimum unless it is an irregular point (in which case the KKT conditions are not applicable). Points satisfying the conditions are called KKT points.
3. The points satisfying the KKT conditions can be constrained or unconstrained. They are unconstrained when there are no equalities and all inequalities are inactive. If the candidate point is unconstrained, it can be a local minimum, maximum, or inflection point depending on the form of the Hessian matrix of the cost function (refer to Section 4.3 for the necessary and sufficient conditions for unconstrained problems).
4. If there are equality constraints and no inequalities are active (i.e., u = 0), then the points satisfying the KKT conditions are only stationary: they can be minimum, maximum, or inflection points.
5. If some inequality constraints are active and their multipliers are positive, then the points satisfying the KKT conditions cannot be local maxima for the cost function (they may be local maximum points if the active inequalities have zero multipliers). They may not be local minima either; this depends on the second-order necessary and sufficient conditions discussed in Chapter 5.
6. The value of the Lagrange multiplier for each constraint depends on the functional form of the constraint. For example, the Lagrange multiplier for the constraint x/y − 10 ≤ 0 (y > 0) differs from that for the same constraint expressed as x − 10y ≤ 0 or as 0.1x/y − 1 ≤ 0. The optimum solution of the problem does not change when the form of the constraint is changed, but its Lagrange multiplier does. This is explained further in Section 4.5.
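As a small numerical illustration of point 6 (a sketch only: it borrows the solution point of Example 4.31, which is worked out later in this section, and the scale factor 0.5 is an arbitrary choice):

% At x = (sqrt(3), sqrt(3)) the constraint g = x1^2 + x2^2 - 6 <= 0 of
% Example 4.31 is active. With a single active constraint, -grad f = u*grad g,
% so u can be recovered by projection; rescaling the constraint as
% 0.5*g <= 0 leaves the point unchanged but doubles the multiplier.
x = [sqrt(3); sqrt(3)];
gradf = [2*x(1) - 3*x(2); 2*x(2) - 3*x(1)];   % gradient of f = x1^2 + x2^2 - 3*x1*x2
gradg = [2*x(1); 2*x(2)];                     % gradient of g
u_g     = -(gradf'*gradg) / (gradg'*gradg)              % multiplier for g <= 0      (= 0.5)
u_halfg = -(gradf'*(0.5*gradg)) / norm(0.5*gradg)^2     % multiplier for 0.5*g <= 0  (= 1.0)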
Examples 4.30 and 4.31 illustrate various solutions of the KKT necessary conditions for candidate local minimum points.

EXAMPLE 4.30 Various Solutions of KKT Necessary Conditions
Write the KKT necessary conditions and solve them for the problem: minimize
$f(x) = \tfrac{1}{3}x^3 - \tfrac{1}{2}(b + c)x^2 + bcx + f_0$
subject to a ≤ x ≤ d, where 0 < a < b < c < d and f_0 are specified constants (created by Y. S. Ryu).

Solution. A graph of the function is shown in Fig. 4-20. It can be seen that point A is a constrained minimum, point B an unconstrained maximum, point C an unconstrained minimum, and point D a constrained maximum. We shall show how the KKT conditions distinguish between these points. Note that since only one constraint can be active at a candidate minimum point (x cannot be at points A and D simultaneously), all the feasible points are regular. There are two inequality constraints,
$g_1 = a - x \le 0; \quad g_2 = x - d \le 0$   (a)

[Figure 4-20: Graphical representation for Example 4.30. Point A, constrained local minimum; B, unconstrained local maximum; C, unconstrained local minimum; D, constrained local maximum.]

The Lagrangian function of Eq. (4.46a) for the problem is given as
$L = \tfrac{1}{3}x^3 - \tfrac{1}{2}(b + c)x^2 + bcx + f_0 + u_1(a - x + s_1^2) + u_2(x - d + s_2^2)$   (b)
where u_1 and u_2 are the Lagrange multipliers and s_1 and s_2 are the slack variables for g_1 = a − x ≤ 0 and g_2 = x − d ≤ 0, respectively. The KKT conditions give
$\partial L / \partial x = x^2 - (b + c)x + bc - u_1 + u_2 = 0$   (c)
$(a - x) + s_1^2 = 0, \; s_1^2 \ge 0; \quad (x - d) + s_2^2 = 0, \; s_2^2 \ge 0$   (d)
$u_1 s_1 = 0; \quad u_2 s_2 = 0$   (e)
$u_1 \ge 0; \quad u_2 \ge 0$   (f)
The switching conditions in Eq. (e) give four cases for the solution of the KKT conditions. Each case is considered separately and solved.

Case 1: u_1 = 0, u_2 = 0. For this case, Eq. (c) gives two solutions, x = b and x = c. For these points both inequalities are strictly satisfied, because the slack variables calculated from Eq. (d) are
for x = b: $s_1^2 = b - a > 0; \; s_2^2 = d - b > 0$   (g)
for x = c: $s_1^2 = c - a > 0; \; s_2^2 = d - c > 0$   (h)
Thus, all the KKT conditions are satisfied, and these are candidate minimum points. Since the points are unconstrained, they are actually stationary points. We can check the sufficient condition by calculating the curvature of the cost function at the two candidate points:
$x = b: \; d^2 f / dx^2 = 2x - (b + c) = b - c < 0$   (i)
Since b < c, d²f/dx² is negative, so the sufficient condition for a local minimum is violated. Actually, the second-order necessary condition of Eq. (4.32) is also violated, so the point cannot be a local minimum for the function. It is in fact a local maximum point, because it satisfies the sufficient condition for that, as also seen in Fig. 4-20.
$x = c: \; d^2 f / dx^2 = c - b > 0$   (j)
Since b < c, d²f/dx² is positive. Therefore, the second-order sufficient condition of Eq. (4.31) is satisfied, and this is a local minimum point, as also seen in Fig. 4-20.

Case 2: u_1 = 0, s_2 = 0. Here g_2 is active and, since s_2 = 0, x = d. Equation (c) gives
$u_2 = -\bigl[ d^2 - (b + c)d + bc \bigr] = -(d - c)(d - b)$   (k)
Since d > c > b, u_2 < 0. The term within the square brackets is the slope of the function at x = d, which is positive, so u_2 < 0. The KKT necessary condition is violated, so there is no solution for this case; i.e., x = d is not a candidate minimum point. This is true, as can be observed for point D in Fig. 4-20.

Case 3: s_1 = 0, u_2 = 0. s_1 = 0 implies that g_1 is active and therefore x = a. Equation (c) gives
$u_1 = a^2 - (b + c)a + bc = (a - b)(a - c) > 0$   (l)
Also, since u_1 equals the slope of the function at x = a, it is positive and all the KKT conditions are satisfied. Thus x = a is a candidate minimum point. Actually, x = a is a local minimum point, because a feasible move from the point increases the cost function; this is a sufficient condition that we shall discuss in Chapter 5.

Case 4: s_1 = 0, s_2 = 0. This case, for which both constraints are active, does not give any valid solution, since x cannot be simultaneously equal to a and d.
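A quick numerical spot-check of these four cases can be run for one particular choice of the constants (the values of a, b, c, and d below are illustrative assumptions, not values given in the example):

% Example 4.30 with illustrative data satisfying 0 < a < b < c < d.
a = 1; b = 2; c = 3; d = 5;
fp  = @(x) x.^2 - (b + c)*x + b*c;   % f'(x), i.e., Eq. (c) with u1 = u2 = 0
fpp = @(x) 2*x - (b + c);            % f''(x), used in Eqs. (i) and (j)
curvatures = [fpp(b), fpp(c)]        % Case 1: f''(b) < 0 (local max), f''(c) > 0 (local min)
u1 = fp(a)                           % Case 3: u1 = f'(a) = (a - b)(a - c) > 0, so x = a is a KKT point
u2 = -fp(d)                          % Case 2: u2 = -f'(d) < 0, so x = d is rejected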
EXAMPLE 4.31 Solution of KKT Necessary Conditions
Solve the KKT conditions for the problem: minimize f(x) = x_1² + x_2² − 3x_1x_2 subject to g = x_1² + x_2² − 6 ≤ 0.

Solution. The feasible region for the problem is a circle with its center at (0, 0) and radius √6. This is plotted in Fig. 4-21, where several cost function contours are also shown. It can be seen that points A and B give the minimum value of the cost function. The gradients of the cost and constraint functions at these points are along the same line but in opposite directions, so the KKT necessary conditions are satisfied. We shall verify this by writing these conditions and solving them for candidate minimum points.

[Figure 4-21: Graphical solution for Example 4.31; local minimum points A and B.]

The Lagrange function of Eq. (4.46a) for the problem is
$L = x_1^2 + x_2^2 - 3x_1 x_2 + u\bigl( x_1^2 + x_2^2 - 6 + s^2 \bigr)$   (a)
Since there is only one constraint for the problem, all points of the feasible region are regular, so the KKT necessary conditions are applicable. They are given as
$\partial L / \partial x_1 = 2x_1 - 3x_2 + 2ux_1 = 0$   (b)
$\partial L / \partial x_2 = 2x_2 - 3x_1 + 2ux_2 = 0$   (c)
$x_1^2 + x_2^2 - 6 + s^2 = 0; \quad s^2 \ge 0, \; u \ge 0$   (d)
$us = 0$   (e)
Equations (b)–(e) are four equations in the four unknowns x_1, x_2, s, and u. Thus, in principle, we have enough equations to solve for all the unknowns. The system of equations is nonlinear; however, it is possible to solve for all the roots analytically. There are three possible ways of satisfying the switching condition of Eq. (e): (i) u = 0, (ii) s = 0, implying that g is active, or (iii) u = 0 and s = 0. We consider each case separately and solve for the roots of the necessary conditions.

Case 1: u = 0. In this case, the inequality constraint is taken as inactive at the solution point. We solve for x_1 and x_2 and then check the constraint. Equations (b) and (c) reduce to
$2x_1 - 3x_2 = 0; \quad -3x_1 + 2x_2 = 0$   (f)
This is a 2 × 2 homogeneous system of linear equations (the right side is zero). Such a system has a nontrivial solution only if the determinant of the coefficient matrix is zero. However, since the determinant of the matrix is −5, the system has only the trivial solution x_1 = x_2 = 0 (the system can also be solved by Gaussian elimination). This solution gives s² = 6 from Eq. (d), so the inequality is not active. Thus, the candidate minimum point for this case is
$x_1^* = 0, \; x_2^* = 0, \; u^* = 0, \; f(0, 0) = 0$   (g)

Case 2: s = 0. In this case, s = 0 implies that the inequality is active. We must solve Eqs. (b)–(d) simultaneously for x_1, x_2, and u. Note that this is a nonlinear set of equations, so there can be multiple roots. Equation (b) gives u = −1 + 3x_2/(2x_1). Substituting for u in Eq. (c), we obtain x_1² = x_2². Using this in Eq. (d), solving for x_1 and x_2, and then solving for u, we obtain four roots of Eqs. (b), (c), and (d):
$x_1 = x_2 = \sqrt{3}, \; u = \tfrac{1}{2}; \quad x_1 = x_2 = -\sqrt{3}, \; u = \tfrac{1}{2}; \quad x_1 = \sqrt{3}, \; x_2 = -\sqrt{3}, \; u = -\tfrac{5}{2}; \quad x_1 = -\sqrt{3}, \; x_2 = \sqrt{3}, \; u = -\tfrac{5}{2}$   (h)
The last two roots violate the KKT necessary condition u ≥ 0. Therefore, there are two candidate minimum points for this case; the first corresponds to point A and the second to point B in Fig. 4-21.

Case 3: u = 0, s = 0. With these conditions, Eqs. (b) and (c) give x_1 = 0, x_2 = 0. Substituting these into Eq. (d), we obtain s² = 6 ≠ 0, so all the KKT conditions cannot be satisfied. The case where both u and s are zero usually does not occur in practical problems. This can also be explained using the physical interpretation of the Lagrange multipliers discussed later in this chapter. The multiplier u for a constraint g ≤ 0 gives the first derivative of the cost function with respect to a variation in the right side of the constraint, i.e., u = −(∂f/∂e), where e is a small change in the constraint limit, as in g ≤ e. Therefore, u = 0 when g = 0 implies that any change in the right side of the constraint g ≤ 0 has no effect on the optimum cost function value. This usually does not happen in practice: when the right side of a constraint is changed, the feasible region for the problem changes, which usually has some effect on the optimum solution.

Finally, the points satisfying the KKT necessary conditions for the problem are summarized:
1. x_1* = 0, x_2* = 0, u* = 0, f(0, 0) = 0 (point O in Fig. 4-21)
2. x_1* = x_2* = √3, u* = 1/2, f(√3, √3) = −3 (point A in Fig. 4-21)
3. x_1* = x_2* = −√3, u* = 1/2, f(−√3, −√3) = −3 (point B in Fig. 4-21)
It is interesting to note that points A and B satisfy the sufficient condition for local minima. As can be observed in Fig. 4-21, any feasible move from these points results in an increase in the cost, and any further reduction in the cost results in violation of the constraint. It can also be observed that point O does not satisfy the sufficient condition, because there are feasible directions that result in a decrease in the cost function; point O is only a stationary point. We shall check the sufficient conditions for this problem in Chapter 5.
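These three points can be verified directly by substituting them back into the conditions; a minimal numerical check (the loop below simply re-evaluates f, g, and the stationarity residual of Eqs. (b) and (c) at each point):

% Verify the KKT points of Example 4.31 listed above.
f = @(x) x(1)^2 + x(2)^2 - 3*x(1)*x(2);
g = @(x) x(1)^2 + x(2)^2 - 6;
pts = {[0; 0], [sqrt(3); sqrt(3)], [-sqrt(3); -sqrt(3)]};
u   = [0, 0.5, 0.5];
for k = 1:numel(pts)
    x = pts{k};
    gradL = [2*x(1) - 3*x(2) + 2*u(k)*x(1);    % Eq. (b)
             2*x(2) - 3*x(1) + 2*u(k)*x(2)];   % Eq. (c)
    fprintf('point %d: f = %6.3f, g = %6.3f, ||grad L|| = %g\n', k, f(x), g(x), norm(gradL));
end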
The foregoing two examples illustrate the procedure of solving the Karush-Kuhn-Tucker necessary conditions for candidate local minimum points, and it is extremely important to understand the procedure clearly. Example 4.31 had only one inequality constraint. The switching condition of Eq. (e) gave only two normal cases, either u = 0 or s = 0 (the abnormal case, where u = 0 and s = 0, rarely gives additional candidate points, so it is ignored). Each case gave a candidate minimum point x*. For case 1 (u = 0) there was only one point x* satisfying Eqs. (b), (c), and (d). However, for case 2 (s = 0) there were four roots of Eqs. (b), (c), and (d); two of the four roots did not satisfy the nonnegativity condition on the Lagrange multiplier, so the corresponding two roots were not candidate local minimum points. The preceding procedure is valid for more general nonlinear optimization problems. In Example 4.32, we illustrate the procedure for a problem with two design variables and two inequality constraints.

EXAMPLE 4.32 Solution of KKT Necessary Conditions
Minimize f(x_1, x_2) = x_1² + x_2² − 2x_1 − 2x_2 + 2 subject to g_1 = −2x_1 − x_2 + 4 ≤ 0, g_2 = −x_1 − 2x_2 + 4 ≤ 0.

Solution. Figure 4-22 gives a graphical representation of the problem. The two constraint functions are plotted and the feasible region is identified. It can be seen that point A(4/3, 4/3), where both inequality constraints are active, is the optimum solution of the problem. Since it is a two-variable problem, only two vectors can be linearly independent. It can be seen in Fig. 4-22 that the constraint gradients ∇g_1 and ∇g_2 are linearly independent (hence the optimum point is regular), so any other vector can be expressed as a linear combination of them. In particular, −∇f (the negative gradient of the cost function) can be expressed as a linear combination of ∇g_1 and ∇g_2 with positive scalars as the multipliers of the linear combination, which is precisely the KKT necessary condition of Eq. (4.46b). In the following, we write these conditions and solve them to verify the graphical solution.

[Figure 4-22: Graphical solution for Example 4.32; minimum at point A, x* = (4/3, 4/3), f(x*) = 2/9.]

The Lagrange function of Eq. (4.46a) for the problem is given as
$L = x_1^2 + x_2^2 - 2x_1 - 2x_2 + 2 + u_1\bigl( -2x_1 - x_2 + 4 + s_1^2 \bigr) + u_2\bigl( -x_1 - 2x_2 + 4 + s_2^2 \bigr)$   (a)
The KKT necessary conditions are
$\partial L / \partial x_1 = 2x_1 - 2 - 2u_1 - u_2 = 0$   (b)
$\partial L / \partial x_2 = 2x_2 - 2 - u_1 - 2u_2 = 0$   (c)
$g_1 = -2x_1 - x_2 + 4 + s_1^2 = 0; \quad s_1^2 \ge 0, \; u_1 \ge 0$   (d)
$g_2 = -x_1 - 2x_2 + 4 + s_2^2 = 0; \quad s_2^2 \ge 0, \; u_2 \ge 0$   (e)
$u_i s_i = 0; \quad i = 1, 2$   (f)
Equations (b)–(f) are six equations in the six unknowns x_1, x_2, s_1, s_2, u_1, and u_2. We must solve them simultaneously for candidate local minimum points. One way to satisfy the switching conditions of Eq. (f) is to identify the various cases and then solve each of them for its roots. There are four cases, considered separately below:
1. u_1 = 0, u_2 = 0
2. u_1 = 0, s_2 = 0 (g_2 = 0)
3. s_1 = 0 (g_1 = 0), u_2 = 0
4. s_1 = 0 (g_1 = 0), s_2 = 0 (g_2 = 0)
Case 1: u_1 = 0, u_2 = 0. Equations (b) and (c) give x_1 = x_2 = 1. This is not a valid solution, as it gives s_1² = −1 (g_1 = 1) and s_2² = −1 (g_2 = 1) from Eqs. (d) and (e), which implies that both inequalities are violated; x_1 = 1, x_2 = 1 is not a feasible design.

Case 2: u_1 = 0, s_2 = 0. With these conditions, Eqs. (b), (c), and (e) become
$2x_1 - 2 - u_2 = 0; \quad 2x_2 - 2 - 2u_2 = 0; \quad -x_1 - 2x_2 + 4 = 0$   (g)
These are three linear equations in the three unknowns x_1, x_2, and u_2. Any method for solving a linear system, such as Gaussian elimination or the method of determinants (Cramer's rule), can be used to find the roots. Using the elimination procedure, we obtain x_1 = 1.2, x_2 = 1.4, and u_2 = 0.4. Therefore, the solution for this case is
$x_1 = 1.2, \; x_2 = 1.4; \; u_1 = 0, \; u_2 = 0.4; \; f = 0.2$   (h)
We need to check this design point for feasibility with respect to constraint g_1 before it can be claimed as a candidate local minimum point. Substituting x_1 = 1.2 and x_2 = 1.4 into Eq. (d), we find s_1² = −0.2 < 0 (g_1 = 0.2), which is a violation of constraint g_1. Therefore, case 2 does not give a candidate local minimum point either. It can be seen in Fig. 4-22 that the point (1.2, 1.4) corresponds to point B, which is not in the feasible set.

Case 3: s_1 = 0, u_2 = 0. With these conditions, Eqs. (b), (c), and (d) give
$2x_1 - 2 - 2u_1 = 0; \quad 2x_2 - 2 - u_1 = 0; \quad -2x_1 - x_2 + 4 = 0$   (i)
This is again a linear system of equations in the variables x_1, x_2, and u_1. Solving the system, we get
$x_1 = 1.4, \; x_2 = 1.2; \; u_1 = 0.4, \; u_2 = 0; \; f = 0.2$   (j)
Checking the design for feasibility with respect to constraint g_2, we find from Eq. (e) that s_2² = −0.2 < 0 (g_2 = 0.2). This is not a feasible design, so case 3 does not give a candidate local minimum point either. It can be observed in Fig. 4-22 that the point (1.4, 1.2) corresponds to point C, which is not in the feasible region.

Case 4: s_1 = 0, s_2 = 0. For this case, Eqs. (b) to (e) must be solved for the four unknowns x_1, x_2, u_1, and u_2. This system of equations is again linear and can be solved easily. Using the elimination procedure as before, we obtain x_1 = 4/3 and x_2 = 4/3 from Eqs. (d) and (e). Solving for u_1 and u_2 from Eqs. (b) and (c), we get u_1 = 2/9 > 0 and u_2 = 2/9 > 0. To check the regularity condition for the point, we evaluate the gradients of the active constraints and define the constraint gradient matrix A:
$\nabla g_1 = \begin{bmatrix} -2 \\ -1 \end{bmatrix}, \quad \nabla g_2 = \begin{bmatrix} -1 \\ -2 \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} -2 & -1 \\ -1 & -2 \end{bmatrix}$   (k)
Since rank(A) equals the number of active constraints, the gradients ∇g_1 and ∇g_2 are linearly independent. Thus all the KKT conditions are satisfied, and the preceding solution is a candidate local minimum point. The solution corresponds to point A in Fig. 4-22, and the cost function at the point has the value 2/9.
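Since the equations of Case 4 are linear once both constraints are taken as active, the result above can be reproduced directly; a minimal sketch (the matrix below simply assembles Eqs. (b)–(e) with s_1 = s_2 = 0):

% Case 4 of Example 4.32 as a 4x4 linear system in (x1, x2, u1, u2).
A = [ 2  0 -2 -1;     %  2*x1 - 2*u1 -   u2 =  2   (Eq. b)
      0  2 -1 -2;     %  2*x2 -   u1 - 2*u2 =  2   (Eq. c)
     -2 -1  0  0;     % -2*x1 -   x2        = -4   (g1 = 0)
     -1 -2  0  0];    %   -x1 - 2*x2        = -4   (g2 = 0)
rhs = [2; 2; -4; -4];
sol = A \ rhs          % expected: x1 = x2 = 4/3 and u1 = u2 = 2/9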
[...] ... g_1 ≤ 0 and g_3 ≤ 0. Substituting b = (1.125 × 10⁵)/d into g_1 (Eq. b):
$(2.40 \times 10^8) / \bigl( (1.125 \times 10^5)\, d \bigr) - 10 \le 0, \;\text{ or }\; d \ge 213.33 \text{ mm}$   (l)
Substituting b = (1.125 × 10⁵)/d into g_3 (Eq. d):
$d - (2.25 \times 10^5)/d \le 0, \;\text{ or }\; d \le 474.34 \text{ mm}$   (m)
This gives limits on the depth d. We can find limits on the width b by substituting Eqs. (l) and (m) into bd = 1.125 × 10⁵: d ≥ 213.33 gives b ≤ 527.34, and d ≤ 474.33 gives b ≥ 237.17. Therefore, for this case the possible solutions are 237.17 ≤ b ≤ 527.34 mm and 213.33 ≤ d ≤ 474.33 mm, with bd = 1.125 × 10⁵ mm².
Case 4: s_1 = 0, u_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0. Equations (i) and (j) reduce to
$d - (2.40 \times 10^8)/(b^2 d^2) = 0, \;\text{ or }\; b^2 d^3 = 2.40 \times 10^8$
$b - (4.80 \times 10^8)/(b d^3) = 0, \;\text{ or }\; b^2 d^3 = 4.80 \times 10^8$
Since the previous two equations are inconsistent, ...

... in Fig. 4-31, where several cost function contours are also shown. Thus, the condition of positive semidefiniteness of the Hessian can define the domain over which the function is convex.

[Figure 4-31: Graphical representation of Example 4.39, showing cost function contours and the feasible region.]

EXAMPLE ...

... further consideration:
1. u_1 = 0, u_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0
2. u_1 = 0, u_2 = 0, s_3 = 0, u_4 = 0, u_5 = 0
3. u_1 = 0, s_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0
4. s_1 = 0, u_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0
5. u_1 = 0, s_2 = 0, s_3 = 0, u_4 = 0, u_5 = 0
6. s_1 = 0, s_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0
7. s_1 = 0, u_2 = 0, s_3 = 0, u_4 = 0, u_5 = 0
8. s_1 = 0, s_2 = 0, s_3 = 0, u_4 = 0, u_5 = 0
We consider each case at ...

... nonnegative. To see this, let us assume that we want to relax an inequality constraint g_j ≤ 0 that is active (g_j = 0) at the optimum point; i.e., we select e_j > 0 in Eq. (4.53). When a constraint is relaxed, the feasible set for the design problem expands. We allow more feasible designs to be candidate minimum points; therefore, with the expanded feasible set we expect the optimum ...

... function over which it is convex. The function f(x) is plotted in Fig. 4-30. It can be seen that the function is convex for x ≤ 2/3 and concave for x ≥ 2/3 [a function f(x) is called concave if −f(x) is convex].

[Figure 4-30: Graph of the function, which is convex for x ≤ 2/3.]

... u s = 0, u ≥ 0   (c), (f)
As in Example 4.31, the case where s = 0 gives candidate minimum points. Solving Eqs. (c)–(e), we get the two KKT points
$x_1^* = x_2^* = \sqrt{3}, \; u^* = K/2, \; f(\mathbf{x}^*) = -3K$   (g)
$x_1^* = x_2^* = -\sqrt{3}, \; u^* = K/2, \; f(\mathbf{x}^*) = -3K$   (h)
Therefore, comparing these solutions with those obtained in Example 4.31, we observe that the new multiplier is K times the original one, i.e., ū* = Ku*.
4.5.3 Effect of Scaling a Constraint ...

... defining the equations is prepared as follows:

function F = kktsystem(x)
% KKT equations of Example 4.31: x(1) and x(2) are the design variables,
% x(3) is the Lagrange multiplier u, and x(4) is the slack variable s.
F = [2*x(1) - 3*x(2) + 2*x(3)*x(1);
     2*x(2) - 3*x(1) + 2*x(3)*x(2);
     x(1)^2 + x(2)^2 - 6 + x(4)^2;
     x(3)*x(4)];

The first line defines a function, named "kktsystem", that accepts a vector of variables x and returns a vector of function values F. This file should be named "kktsystem" (the same name as the ...
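The excerpt is cut off at this point. For completeness, a sketch of how such a function file is typically driven is shown below; the starting guess and variable names are illustrative assumptions, not the book's listing (fsolve is part of MATLAB's Optimization Toolbox):

% Solve the KKT system of Example 4.31 numerically from a chosen start point.
x0 = [1; 1; 1; 1];              % initial guess for (x1, x2, u, s); different
                                % guesses may converge to different KKT points
x  = fsolve(@kktsystem, x0)     % the computed u = x(3) must still satisfy u >= 0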
... this case. We can move from point B toward point A and remain on the constraint g_2 = 0 for optimum designs.
Case 6: s_1 = 0, s_2 = 0, u_3 = 0, u_4 = 0, u_5 = 0. Equations (b) and (c) can be solved for b and d as b = 527.34 mm and d = 213.33 mm. We can solve for u_1 and u_2 from Eqs. (i) and (j) as u_1 = 0 and u_2 = 5.625 × 10⁴. Substituting the values of b and d into Eq. (d), we get g_3 = −841.35 < 0, so the constraint is ...

... function is δf* = −0.391(500) − 0.25(500) = −320.5 cm³. Thus the volume of the bracket will reduce by 320.5 cm³.

4.7.2 Design of a Rectangular Beam
In Section 3.8, a rectangular beam design problem is formulated and solved graphically. We will solve the same problem using the KKT necessary conditions. The problem is formulated as follows: find b and d to minimize
$f(b, d) = bd$   (a)
subject to the inequality ...
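The constraint expressions for this beam problem are elided in the excerpt, but the forms used in the worked cases above can be pieced together (g_1 from the substitution leading to Eq. (l), and g_3 from the value −841.35 quoted for Case 6). Treating those reconstructions as assumptions, the Case 6 numbers can be checked as follows:

% Spot-check of Case 6 of the rectangular-beam example (dimensions in mm).
b = 527.34; d = 213.33;
f  = b*d                          % cost function of Eq. (a): cross-sectional area
g1 = 2.40e8/(b*d^2) - 10          % bending-stress constraint as inferred above; ~0 (active)
g2 = b*d - 1.125e5                % shear requirement written as b*d >= 1.125e5; ~0 (active)
g3 = d - 2*b                      % depth limit d <= 2*b; evaluates to about -841.35 < 0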
