# Chapter 4 - Linear Programming

- Introduction
- Concepts and Methods
- Concepts and Geometric Interpretation
- General Statement of the Linear Programming Problem
- Slack and Surplus Variables
- Feasible and Basic Feasible Solutions of the Constraint Equations
- Optimization with the Simplex Method
- Simplex Tableau
- Mathematics of Linear Programming
- Degeneracy
- Artificial Variables
- Formulating and Solving Problems
- Formulating the Linear Programming Problem - A Simple Refinery
- Solving the Linear Programming Problem for the Simple Refinery
- Sensitivity Analysis
- Changes in the Right Hand Side of the Constraint Equation
- Changes in the Coefficients of the Objective Function
- Changes in the Coefficients of the Constraint Equations
- Addition of New Variables
- Addition of More Constraint Equations
- Closure
- Selected List of Texts on Linear Programming and Extensions
- References
- Problems

## Introduction

Linear programming is the most widely applied of all of the optimization methods. The technique has been used for optimizing many diverse applications, including refineries and chemical plants, livestock feed blending, and routing of aircraft and scheduling their crews. Many industrial allocation and transportation problems can be optimized with this method. The application of linear programming has been particularly successful in cases of selecting the best set of values of the variables when a large number of interrelated choices exists. Often such problems involve a small improvement per unit of material flow multiplied by large production rates, with the net result being a significant increase in the profit of the plant. A typical example is a large oil refinery where the stream flow rates are very large, and a small improvement per unit of product is multiplied by a very large number to obtain a significant increase in profit for the refinery.

The term programming of linear programming does not refer to computer programming but to scheduling. Linear programming was developed about 1947, before the advent of the computer, when George B. Dantzig (1) recognized a generalization in the mathematics of scheduling and planning problems. Developments in linear programming have followed advances in digital computing, and now problems involving several thousand independent variables and constraint equations can be solved.

In this chapter a geometric representation and solution of a simple linear programming problem will be given initially to introduce the subject and illustrate the way to capitalize on the mathematical structure of the problem. This will be followed by a presentation of the simplex algorithm for the solution of linear programming problems. Having established the computational algorithm, we will give the procedure to convert a process flow diagram into a linear programming problem, using a simple petroleum refinery as an illustration. The method of solution, using large linear programming computer codes, then will be described, and the solution of the refinery problem, using the IBM Mathematical Programming System Extended (MPSX), will illustrate the procedure and give typical results obtained from these large codes. Once the optimal solution has been obtained, sensitivity analysis procedures will be detailed that use the optimal solution to determine ranges on the important parameters over which the optimal solution remains optimal. Thus, another linear programming solution is not required. This also will be illustrated using results of the refinery problem obtained from the MPSX solution. Finally, a summary will be given of extensions to linear programming and other related topics.

## Formulating the Linear Programming Problem-A Simple Refinery

To this point in the discussion of linear programming, the emphasis has been on the solution of problems by the Simplex Method. In this section procedures will be presented for the formulation of the linear programming problem for a plant or process. This will include developing the objective function from the cost or profit of the process or plant and the constraint equations from the availability of raw materials, the demand for products, and equipment capacity limitations and conversion capabilities. A simple petroleum refinery will be used as an example to illustrate these procedures. Also an optimal solution will be obtained using a large linear programming code to illustrate the use of one of these types of programs available on a large computer. In the following section the optimal solution of the general linear programming problem will be extended to a sensitivity analysis, and these results will be illustrated using the information computed from the large linear programming code for the simple refinery example.

Figure 4-7 shows the flow diagram for the simple petroleum refinery, and Table 4-2 gives the definition of each of the process stream names. There are only three process units in this refinery: a crude oil atmospheric distillation column, a catalytic cracking unit, and a catalytic reformer. The crude oil distillation column separates the crude oil into five streams: fuel gas, straight-run gasoline, straight-run naphtha, straight-run distillate, and straight-run fuel oil. Part of the straight-run naphtha is processed through the catalytic reformer to improve its quality, i.e., increase the octane number. Also, parts of the straight-run distillate and straight-run fuel oil are processed through the catalytic cracking unit to improve their quality so they can be blended into gasoline. The refinery produces four products: premium gasoline, regular gasoline, diesel fuel, and fuel oil. Even for this simple refinery there are 33 flow rates for which the optimal values have to be determined. This small problem points out one of the difficulties of large linear programming problems. The formulation of the problem is quite straightforward. However, there is a major accounting problem in keeping track of a large number of variables, and the collection of reliable data to go with these variables is usually very time consuming (9).

Table 4-3 lists the capacities, operating costs, process streams, mass yields, and volumetric yields for the three process units in the refinery. These are typical of a medium-size refinery in the Gulf Coast area. The mass yields were taken from those reported by Aronfsky, Dutton, and Tayyaabkhan (10) and were converted to volumetric yields by using API gravity data. The operating costs were furnished by the technical division of a major oil company which has refineries on the Gulf Coast.

The quality specifications and physical properties are given in Table 4-4 for the process streams, and the crude oil cost and the product sales prices are given in Table 4-5. The data in Table 4-4 were reported by Aronfsky et al. (29), and the costs and prices in Table 4-5 were obtained from the Oil and Gas Journal (11). The information given in Tables 4-3, 4-4, and 4-5 is required to construct the objective function and the constraint equations for the linear programming model of the refinery.

It is standard practice to present the linear programming problem for the refinery in matrix form, as shown in Figure 4-8. In the first row the coefficients of the terms in the objective function are listed under their corresponding variables. The sales prices are shown as positive, and the costs are shown as negative, so the problem is formulated to maximize the profit. These numbers were taken from Table 4-5, and it was convenient to combine the crude cost ($32.00/barrel) with the operating cost of the crude oil atmospheric distillation column ($1.00/barrel) to show a total cost of $33.00 per barrel of crude oil processed in Figure 4-7. Consequently, the first row of Figure 4-8 represents the objective function given below:

-33.0 CRUDE + 0.01965 FGAD - 2.50 SRNRF + 0.01965 FGRF - 2.20 SRDSCC - 2.20 SRFOCC + 0.01965 FGCC + 45.36 PG + 43.68 RG + 40.32 DF + 13.14 FO

The constraint equations begin with the second row in Figure 4-8. They are grouped in terms of quality and quantity constraints on the crude and products, in terms of the performance of the process units using the volumetric yields, and in terms of the stream splits among the process units and blending into the products.

The second row is the crude availability constraint limiting the refinery to 110,000 barrels/day. This is followed by the four quantity and quality constraints associated with each product. These are the daily production and blending requirements and two quality constraints. These have been extracted from Figure 4-8 and are shown in Table 4-6 for the four products. The minimum production constraint states that the refinery must produce at least 10,000 bbl/day of premium gasoline to meet the company's marketing division's requirements. The blending constraints state that the sum of the streams going to produce premium gasoline must equal the daily production of premium gasoline. The quality constraints use linear blending, and the sum of each component weighted by its quality must meet or exceed the quality of the product. This is illustrated with the premium gasoline octane rating blending constraint, which is written as the following, using the information from the matrix:

78.5 SRGPG + 104.0 RFGPG + 65.0 SRNPG + 93.7 CCGPG - 93.0 PG > 0 (4-11)

Here the premium gasoline must have an octane number of at least 93.0. Corresponding inequality constraints are specified in Table 4-6 using the same procedure for premium gasoline vapor pressure, regular gasoline octane number and vapor pressure, diesel fuel density and sulfur content, and fuel oil density and sulfur content.
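The linear-blending arithmetic behind equation (4-11) can be checked with a short calculation. The stream names and octane numbers below come from the constraint itself; the flow rates are hypothetical values chosen only for illustration.

```python
# Linear octane blending for premium gasoline, equation (4-11).
# Octane numbers are from the constraint; flow rates (bbl/day) are
# hypothetical, for illustration only.
octane = {"SRGPG": 78.5, "RFGPG": 104.0, "SRNPG": 65.0, "CCGPG": 93.7}
flows = {"SRGPG": 5000.0, "RFGPG": 20000.0, "SRNPG": 2000.0, "CCGPG": 15000.0}

# Blending constraint: the component streams sum to the product stream PG.
PG = sum(flows.values())

# Quality constraint: the octane-weighted sum minus 93.0 * PG must be >= 0.
lhs = sum(octane[s] * flows[s] for s in flows) - 93.0 * PG
print(lhs >= 0)  # True: this particular blend meets the 93.0 octane spec
```

Because every term is linear in the flow rates, this check is exactly the row of the matrix in Figure 4-8, evaluated at one candidate solution.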

The next set of information given in the constraint equation matrix, Figure 4-8, is the description of the operation of the process units using the volumetric yields shown in Table 4-3. This section of the matrix has been extracted and is shown in Table 4-7 for the three process units. Referring to the volumetric yields for the crude oil distillation column, these data state that 35.42 times the volumetric flow rate of crude gives the flow rate of fuel gas from the distillation column, FGAD, i.e.:

35.42 CRUDE - FGAD = 0 (4-12)

Corresponding yields of the other products from the crude oil distillation are determined the same way. For the catalytic reformer the yields of the fuel gas (FGRF) and the reformer gasoline (RFG) are given by the following equations:

158.7 SRNRF - FGRF = 0 (4-13)

0.928 SRNRF - RFG = 0 (4-14)

Similar equations are used in the matrix, Figure 4-8, and are summarized in Table 4-7 for the catalytic cracking unit.

The use of volumetric yields to give linear equations to describe the performance of the process units is required for linear programming. The results will be satisfactory as long as the volumetric yields precisely describe the performance of these process units. These volumetric yields are a function of the operating conditions of the unit, e.g., temperature, feed flow rate, catalyst activity, etc. Consequently, to have an optimal solution these volumetric yields must represent the best performance of the individual process units. To account for changes in volumetric yields with operating conditions, sometimes a separate simulation program is coupled to the linear programming code to furnish best values of the volumetric yields. Then an iterative procedure is used to converge optimal linear programming flow rates with corresponding values of volumetric yields from the simulation program. (See Figure 6-5.)
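The LP-simulation iteration just described can be sketched in a few lines. Both `simulate_yield` and `solve_lp` below are hypothetical stand-ins: a toy yield model that declines slightly with feed rate, and a one-variable "LP" with a capacity bound, used only to show the fixed-point structure of the loop, not any real refinery model.

```python
# Sketch of iterating between an LP and a yield simulator (see Figure 6-5).
# simulate_yield and solve_lp are hypothetical toy models, not refinery data.

def simulate_yield(feed):
    """Hypothetical nonlinear yield model: yield drops slightly at high feed."""
    return 0.90 - 1.0e-6 * feed

def solve_lp(y):
    """Toy one-variable 'LP': run at capacity when the margin (5y - 3) is positive."""
    return 100000.0 if 5.0 * y - 3.0 > 0.0 else 0.0

feed = 50000.0
for _ in range(20):
    y = simulate_yield(feed)        # update the yield at the current flow rate
    new_feed = solve_lp(y)          # re-optimize with the updated yield
    if abs(new_feed - feed) < 1.0:  # converged: flows and yields are consistent
        feed = new_feed
        break
    feed = new_feed
```

In practice the simulator is a separate process model and the LP is the full refinery matrix, but the convergence logic is the same: alternate until the optimal flow rates and the yields evaluated at those flow rates agree.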

The last group of terms in Figure 4-8 gives the material balances around points where streams split among process units and blend into products. The stream to be divided is given a coefficient of 1, and the resulting streams have a coefficient of -1. For example, the straight-run naphtha from the crude oil distillation is split into four streams. One is sent to the catalytic reformer, and the other three are used in blending premium gasoline, regular gasoline, and diesel fuel. The equation for this split is:

SRN - SRNRF - SRNPG - SRNRG - SRNDF = 0 (4-15)

There are a total of seven stream splits, as shown in Figure 4-8.

This information is now available to determine the optimum operating conditions of the refinery. The optimal solution was obtained using the Mathematical Programming System Extended (MPSX) program run on the IBM 4341. The format used by this linear programming code has become an industry standard according to Murtagh (12) and is not restricted to the MPS series of codes developed originally for IBM computers. Consequently, we will also describe the input procedure for the code because of its more general nature. Also, we will use these refinery results to illustrate the additional information that can be obtained from sensitivity analysis.
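The same kind of formulation can now be solved with modern open-source tools as well as with commercial codes like MPSX. The sketch below uses `scipy.optimize.linprog` on a deliberately tiny, hypothetical two-product version of the refinery problem: one crude-availability constraint and minimum-production bounds. The prices echo Table 4-5, but the constraint structure and limits are illustrative, not the full matrix of Figure 4-8.

```python
# Tiny hypothetical refinery LP solved with SciPy's linprog (HiGHS solver).
# Maximize 45.36*PG + 43.68*RG subject to PG + RG <= 110000 (crude limit)
# and PG, RG >= 10000 (minimum production). All limits are illustrative.
from scipy.optimize import linprog

c = [-45.36, -43.68]             # negate the prices: linprog minimizes
A_ub = [[1.0, 1.0]]              # PG + RG <= 110000
b_ub = [110000.0]
bounds = [(10000.0, None), (10000.0, None)]  # minimum production levels

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)     # optimal barrels/day of PG and RG
print(-res.fun)  # maximum profit, $/day
```

As expected, the solver pushes the higher-priced product to the crude limit and holds the other at its minimum; the dual values reported by such solvers play the same role as the DUAL ACTIVITY column discussed below.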

## Solving the Linear Programming Problem for the Simple Refinery

Having constructed the linear programming problem matrix, we are now ready to solve the problem using a large linear programming computer program. The input and output for these programs have become relatively standard (12), making the study of one beneficial in the use of any of the others. The solution of the simple refinery has been obtained using the IBM Mathematical Programming System Extended (MPSX). Detailed documentation on the use of the program is given in IBM manuals (15, 16) and by Murtagh (12), and the following outlines its use for the refinery problem.

The MPSX control program used to solve the problem is given in Table 4-8. The first two commands, PROGRAM and INITIALZ, define the beginning of the program and set up standard default values for many of the optional program parameters. TITLE writes the character string between the quotation marks at the top of every page of output. The four MOVE commands give user-specified names to the input data (XDATA), internal machine code version of the problem (XPBNAME), objective function (XOBJ), and right-hand-side vector (XRHS). Next, CONVERT calls a routine to convert the input data from binary coded decimal (BCD), or communications format, into machine code for use by the program, and BCDOUT has the input data printed. The next three commands, SETUP, CRASH, and PRIMAL, indicate that the objective function is to be maximized, a starting basis is created, and the primal method is to be used to solve the problem. Output from PRIMAL is in machine code, so SOLUTION is called to produce BCD output of the solution. The RANGE command is used in the sensitivity analysis to determine the range over which the variables, right-hand sides, and the coefficients may vary without changing the basis. The last two statements, EXIT and PEND, signal the end of the control program and return control to the computer's operating system.

Input to the MPSX program is divided into four sections: NAME, ROWS, COLUMNS, and RHS. The first two are shown in Table 4-9. The NAME section is a single line containing the identifier, NAME, and the user-defined name for the block of input data that follows. (MPSX has provisions for keeping track of several problems during execution of the control program). When the program is run it looks for input data with the same name as that stored in the internal variable XDATA. The ROWS section contains the name of every row in the model, preceded by a letter indicating whether it is a non-constrained row (N), the objective function, a less-than-or-equal-to constraint (L), a greater-than-or-equal-to constraint (G), or an equality constraint (E).

The COLUMNS section of the input data is shown in Table 4-10. It is a listing of the non-zero elements in each column of the problem matrix (Figure 4-8). Each line contains a column name followed by up to two row names and their corresponding coefficients from Figure 4-8.

The last input section is shown in Table 4-11. Here, the right-hand-side coefficients are entered in the same way that the coefficients for each column were entered in the COLUMNS section, i.e., only the non-zero elements. The end of the data block is followed by an ENDATA card.
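For concreteness, a complete (but toy) problem in this input format might look as follows. The row and column names echo the refinery model, but the single variable, the two constraints, and their coefficients are hypothetical, not taken from Figure 4-8.

```text
NAME          TOYREF
ROWS
 N  OBJ
 L  CRDAVL
 G  PGMIN
COLUMNS
    PG        OBJ          45.36   CRDAVL        1.0
    PG        PGMIN         1.0
RHS
    RHS       CRDAVL   110000.0   PGMIN     10000.0
ENDATA
```

Each COLUMNS line carries a column name followed by up to two row/value pairs, and the RHS section uses the same layout, exactly as described above.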

The solution to the refinery problem is presented in Table 4-12 (a) and (b), as listed in the printout from the MPSX program. It is divided into two sections, the first providing information about the constraints (rows), and the second giving information about the refinery stream variables (columns).

In the ROWS section there are eight columns of output. The first is the internal identification number given to each row by the program. The second column is the name given to the rows in the input data. Next is the AT column which contains a pair of code letters to indicate the status of each row in the optimal solution. Constraint rows in the basis have the code BS, non-basis inequality constraints that have reached their upper or lower limits have the code UL or LL, and equality constraints have the status code EQ. The fourth column is the row activity, as defined by the equation:

This is the optimal value of the left-hand side of the constraint equations. However, it is computed by subtracting the slack variable from the right-hand side. The column labeled SLACK ACTIVITY contains the value of the slack variable for each row. The next three columns are associated with sensitivity analysis. The sixth and seventh columns show the lower and upper limits placed on the row activities. The final column, DUAL ACTIVITY, gives the Lagrange Multipliers, which are also called the Simplex multipliers, shadow prices, and implicit prices. As will be seen subsequently in sensitivity analysis, they relate changes in the activity to changes in the objective function. Also, the dot in the table means zero, the same convention used in the Simplex Tableau.

Examination of this section of output shows that the activity (or value) of the objective function (row 1, OBJ) is 701,823.4, i.e., the maximum profit for the refinery is $701,823.40 per day. Checking the rows which are at their lower limits, LL, for production constraints, one finds that only row 15, FOMIN, is at its lower limit of 10,000 barrels/day, indicating that only the minimum required amount of fuel oil should be produced. However, row 3, PGMIN, row 7, RGMIN, and row 11, DFMIN, are all above their lower limits with values of 47,113 barrels/day for premium gasoline, 22,520 barrels/day for regular gasoline, and 12,491 barrels/day for diesel fuel. More will be said about the information in this table when sensitivity analysis is discussed.

The COLUMNS section of the solution also has eight columns. The first three are analogous to the first three in the ROWS section, i.e., an internal identification number, the name of the column, and whether the variable is in the basis BS or is at its upper or lower limit, UL or LL. The fourth column, ACTIVITY, contains the optimal value for each variable. The objective function cost coefficients are listed in the column INPUT COST. REDUCED COST is the amount by which the objective function will be increased per unit increase in each non-basis variable and is part of the sensitivity analysis. It is given by cj' of equation (4-29).

For this simple refinery model, there were 33 variables whose optimal values were determined, and 38 constraint equations were satisfied. For an actual refinery there would be thousands of constraint equations, but they would be developed in the same fashion as described here. As can be seen, the model (constraint equations) was simple, and only one set of operating conditions was considered for the catalytic cracking unit, catalytic reformer, and the crude distillation column.

If the optimal flow rates do not match the corresponding values for volumetric yields, a search can be performed by repeating the problem to obtain a match of the optimal flow rates and volumetric yields. This has to be performed using a separate simulation program which generates volumetric yields from the flow rates through the process units. (See Figure 6-5.) Thus, the linear model of the plant can be made to account for nonlinear process operations. Another procedure, successive linear programming, also uses linear programming iteratively, and it will be discussed in Chapter 6. The state of industrial practice using both linear programming and successive linear programming is described by Smith and Bonner (13) for configuration of new refineries and chemical plants, plant expansions, economic evaluation of investment alternatives, assessment of new technology, operating plans for existing plants, variation in feeds, costing and distribution of products, evaluation of processing and exchange agreements, forecasting of industry trends, and economic impact of regulatory changes.

## Sensitivity Analysis

Having obtained the optimal solution for a linear programming problem, it would be desirable to know how much the cost coefficients could change, for example, before it is necessary to re-solve the problem. In fact there are five areas that should be examined for their effect on the optimal solution. These are:

1. changes in the right-hand side of the constraint equations, bi
2. changes in the coefficients of the objective function, cj
3. changes in the coefficients of the constraint equations, aij
4. addition of new variables
5. addition of more constraint equations

Changes in the right-hand side of the constraint equations correspond to changes in the maximum capacity of a process unit or the availability of a raw material, for example. Changes in the coefficients of the objective function correspond to changes of the cost or the sale price of the products, and changes in the coefficients of the constraint equations correspond to changes in volumetric yields of a process. Addition of new variables and constraint equations corresponds to the addition of new process units in the plant. It is valuable to know how these various coefficients and parameters can vary without changing the optimal solution, and this may reduce the number of times the linear programming problem must be solved.

Prior to doing this post-optimal analysis, some preliminary mathematical expressions must be developed for the analysis of the effect of the above five areas on the optimal solution. These are the inverse of the optimal basis and the Lagrange Multipliers. To obtain the matrix called the inverse of the optimal basis, A-1, consider that the optimal basis has been found by the previously described Simplex Method. There are m constraint equations and n variables as given by equations 4-1a, b, and c. For convenience, the nonzero variables in the optimal basis have been rearranged to go from 1 to m, (x1*, x2*,...,xm*, 0, 0,..., 0); and there are (n - m) variables not in the basis whose value is zero. The optimal solution to this linear programming problem is indicated below where x* contains only the m nonzero basis variables.

and

A* x* = b (4-17)

To solve for x*, both sides of the above equation are multiplied by the inverse of the optimal basis, A*-1, whose elements are bij, to obtain:

x* = A-1 b (4-18)

It should be noted that A-1 may be obtained from the last step of the Simplex Method if all of the constraint equations required slack variables. If not, then it has to be obtained from the original formulation of the problem using the optimal basis found from the Simplex Method.

The linear programming problem could be solved by the classical method of Lagrange Multipliers. However, the Simplex Method gives a systematic procedure for locating the optimal basis. Having located the optimal basis by the Simplex Method, the Lagrange Multiplier formulation and the inverse of the optimal basis will be used to determine the effect of changes in the right-hand side on the optimal solution. Consequently, it is necessary to compute the values of the Lagrange Multipliers as follows. Multiplying each constraint equation, (4-1b), by the Lagrange Multiplier λi and adding to the objective function equation (4-1a) gives the following equation:

where x1 to xm are positive numbers, i.e., values of the variables in the basis, and xm+1 to xn are zero, i.e., values of the variables that are not in the basis.

To solve this problem by classical methods, the partial derivatives of p with respect to the independent variables and the Lagrange Multipliers would be set equal to zero. Taking the partial derivatives of p with respect to the Lagrange Multipliers just gives the constraint equations, and taking the partial derivatives with respect to the independent variables, xj* (j = 1, 2, ...m) gives:

and xj* for j = m + 1,...n is zero since x* is the optimal solution.

The values of the Lagrange Multipliers are obtained from the solution of equation (4-20). Written in matrix notation, equation (4-20) is:

c + ATλ = 0 (4-21)

where AT is the transpose of the matrix A.

Using the matrix identity [AT]-1 = [A-1]T and solving for the Lagrange Multipliers gives:

λ = -[A-1]T c (4-22)

In terms of the elements of the inverse of the optimal basis, bik, equation (4-22) can be written as:

With this as background, the effect of the five changes on the optimal solution can be determined. The inverse of the optimal basis A-1 and the Lagrange Multipliers will be used to evaluate these changes. The following example illustrates the computation of the inverse of the optimal basis and the Lagrange Multipliers.

Example 4-6

Solve the following problem by the Simplex Method and compute the inverse of the optimal basis and the Lagrange Multipliers:

Maximize: 2x1 + x2 + x3

Subject to: x1 + x2 + x3 < 10

x1 + 5x2 + x3 > 20

Adding slack variables gives:

Maximize: 2x1 + x2 + x3 = p

Subject to: x1 + x2 + x3 + x4 = 10

x1 + 5x2 + x3 - x5 = 20

An initially feasible basis is not available, and either artificial variables or algebraic manipulations must be performed to obtain one. Algebraic manipulations are used to have x1 and x2 be the variables in the basis. The result is:

-x3 - 9/4 x4 - 1/4 x5 = p - 17½, giving p = 17½

x1 + x3 + 5/4 x4 + 1/4 x5 = 7½, giving x1 = 7½

x2 - 1/4 x4 - 1/4 x5 = 2½, giving x2 = 2½, with x3 = 0, x4 = 0, x5 = 0

This is the optimum, for all the coefficients of the nonbasic variables in the objective function are negative. With the optimal solution known, the original problem now takes the form:

Maximize: 2x1 + x2 = 17½

Subject to: x1 + x2 = 10

x1 + 5x2 = 20

The inverse of the optimal basis is computed using the cofactor method:

where ||Aji|| = ||Aij||T, and ||Aij|| are the cofactors of the matrix A (8).

The Lagrange Multipliers are computed using equation (4-22):

λ = -[A-1]T c, or λ1 = -9/4 and λ2 = ¼
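Example 4-6 can be verified numerically. The sketch below forms the optimal basis matrix from the two constraint columns for x1 and x2, inverts it, and applies equations (4-18) and (4-22).

```python
# Numerical check of Example 4-6: basis inverse and Lagrange Multipliers.
import numpy as np

A = np.array([[1.0, 1.0],   # columns of x1, x2 in x1 + x2 = 10
              [1.0, 5.0]])  # and x1 + 5x2 = 20
b = np.array([10.0, 20.0])
c = np.array([2.0, 1.0])    # objective coefficients of the basic variables

A_inv = np.linalg.inv(A)    # inverse of the optimal basis, elements b_ij
x = A_inv @ b               # equation (4-18): x* = A^-1 b
lam = -A_inv.T @ c          # equation (4-22): lambda = -[A^-1]^T c

print(x)    # x1 = 7 1/2, x2 = 2 1/2
print(lam)  # lambda1 = -9/4, lambda2 = 1/4
```

The computed basis inverse has elements b11 = 5/4, b12 = -1/4, b21 = -1/4, b22 = 1/4, which agrees with the cofactor calculation above and will be reused in the next two examples.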

## Changes in the Right Hand Side of the Constraint Equations

Changes in the right-hand side of the constraint equations, i.e., changes in the bi's, will cause changes in the values of the variables in the optimal solution, the xj's. For an optimal solution to remain optimal, the xj's cannot become negative. Equation (4-18) will be used to evaluate changes in the xj's caused by changes in the bi's. The jth component of equation (4-18) is used.

For a change in bi of an amount Δbi, the new value of xj*, called x*j,new, is:

For the optimal solution x* to remain optimal, the values of x*j,new must not become negative. The problem must be re-solved if any of the x*j,new's becomes negative.

The change in the value of the objective function for changes in the bi's, is computed using equation (4-19). Because the left-hand side of equation (4-19) is zero at the optimum, it can be written as:

Using the same procedure for the change Δbi, the change in the value of the objective function is:

It is from this equation that the Lagrange Multipliers receive the name shadow prices, for they have dimensions of dollars per unit and are used to compute the new value of the objective function from changes in the bi's. This is called a marginal cost calculation.

Generally, in large linear programming computer programs, part of the computations includes the calculation of x*j,new and p*new for upper and lower limits on the bi's. Also, values of the Δbi's can be computed that will give the largest possible change in the xj*'s, i.e., x*j,new = 0. Simultaneous changes in the right-hand side of the constraint equations can be performed using the 100% rule, and this procedure is described by Bradley et al. (19).

Example 4-7

For the problem given in example 4-6, find the new optimal solution for Δb1 = -5 without re-solving the problem. Using equation (4-25) to compute the changes in the xj's gives:

x1,new = x1 + b11 Δb1 + b12 Δb2

x2,new = x2 + b21 Δb1 + b22 Δb2

Substituting in the values for Δb1 = -5 and Δb2 = 0 gives:

x1,new = 7½ + (5/4)(-5) = 5/4

x2,new = 2½ + (-¼)(-5) = 15/4

Using equation (4-27), the change in the objective function is computed as:

p*new = p* - [λ1 Δb1 + λ2 Δb2] = 17½ - (-9/4)(-5)

p*new = 25/4

Changes in the right-hand side of the constraint equations are part of the sensitivity analysis of the MPSX program. In Table 4-12(a) the smallest and largest values of the right-hand side of the constraint equations are given for the optimal solution to remain optimal as LOWER LIMIT and UPPER LIMIT. Also the Lagrange Multipliers were computed, and these are called the DUAL ACTIVITY in the MPSX nomenclature of Table 4-12(a). In this table NONE indicates that there is no bound, and a dot indicates that the value was zero. Correspondingly, in Table 4-12(b) the upper and lower limits on the variables are given. In this case the dot indicates that the lower bound was zero, and NONE indicates that there was no upper bound on the variable because BOUNDS was not used.
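The marginal-cost arithmetic of example 4-7 is easy to reproduce. Using the basis inverse and Lagrange Multipliers from example 4-6, the sketch below applies equations (4-25) and (4-27) for a right-hand-side change of -5 in the first constraint.

```python
# Right-hand-side sensitivity for Example 4-7 (change of -5 in b1).
import numpy as np

A_inv = np.array([[1.25, -0.25],    # inverse of the optimal basis (b_ij)
                  [-0.25, 0.25]])
x_opt = np.array([7.5, 2.5])        # optimal x1, x2 from Example 4-6
lam = np.array([-2.25, 0.25])       # Lagrange Multipliers from Example 4-6
p_opt = 17.5

db = np.array([-5.0, 0.0])          # change in the right-hand side
x_new = x_opt + A_inv @ db          # equation (4-25)
p_new = p_opt - lam @ db            # equation (4-27): shadow-price update

print(x_new)  # x1,new = 5/4, x2,new = 15/4
print(p_new)  # p*new = 25/4
```

Because both components of x_new remain nonnegative, the basis stays optimal and no re-solution is needed, which is exactly the test stated after equation (4-25).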

## Changes in the Coefficients of the Objective Function

It is necessary to consider the effect on the optimal solution of changes in the cost coefficients of both the variables in the basis and those not in the basis. Referring to equation (4-19), the coefficients of the variables that are not in the basis, i.e., xm+1,...,xn, must remain negative for maximization.

If a coefficient becomes positive from a change in the cost coefficients, it would be profitable to have that variable enter the basis.

The values of the Lagrange Multipliers are affected by changes in the cost coefficients of the variables in the basis, since they are related by equation (4-23). The term in the brackets in equation (4-28) is named the reduced cost(19), and it is convenient to define this term as c'j to obtain the equation that accounts for the effect of changes in cost coefficients on the optimal solution.

where cj' must remain negative for the optimal solution to remain optimal for maximizing.

The Lagrange Multipliers, λi's, are eliminated from equation (4-29) by substituting equation (4-23) to give:

For a change, Δcj, in the nonbasic variable cost coefficient, cj, and for a change, Δck, in the basic variable cost coefficient, ck, it can be shown that the following equation holds:

When maximizing, the new coefficients must remain negative for the variables not in the basis to have the optimal solution remain optimal, i.e.:

c'j,new < 0 (4-32)

If (4-32) does not hold, then a new optimal solution must be obtained by solving the linear programming problem with the new values of the cost coefficients.

If the optimal solution remains optimal, the new value of the objective function can be computed with the following equation:

If the problem must be re-solved, it is usually convenient to introduce an artificial variable and proceed from this point to the new optimal solution. Large linear programming codes usually have this provision. Also, they can calculate a range of values of the cost coefficients where the optimal solution remains optimal and the corresponding effect on the objective function. The procedure used is called the 100% rule and is described by Bradley et al. (19).

Example 4-8

For the problem given in example 4-6, compute the effect of changing the cost coefficient c1 from 2 to 3 and c3 from 1 to 4, i.e., Δc1 = 1 and Δc3 = 3. Using equation (4-31) produces the following results for j = 3, 4, 5 (since Δc2 = 0).

c'3,new = c'3 + Δc3 - Δc1[a13b11 + a23b12]

Substituting:

c'3,new = -1 + 3 - (1)[(1)(5/4) + (1)(-1/4)] = 1

c'4,new = c'4 + Δc4 - Δc1[a14b11 + a24b12]

Substituting:

c'4,new = -9/4 + 0 - (1)[(1)(5/4) + (0)(-1/4)] = -7/2

c'5,new = c'5 + Δc5 - Δc1[a15b11 + a25b12]

Substituting:

c'5,new = -1/4 + 0 - (1)[(0)(5/4) + (-1)(-1/4)] = -1/2

An improvement in the objective function can be obtained, for c'3,new is greater than zero. Increasing x3 from zero to a positive value will increase the value of the objective function. However, the problem will have to be resolved.
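As a cross-check, the substitutions above can be reproduced with a few lines of code. This is a minimal sketch of equation (4-31), assuming the basis-inverse entries b11 = 5/4 and b12 = -1/4 and the nonbasic column data of example 4-6:

```python
# Reduced-cost update of equation (4-31); input values taken from example 4-6.
b11, b12 = 5/4, -1/4            # first row of the optimal-basis inverse
dc1 = 1                         # change in the basic cost coefficient c1
# nonbasic columns j: (current reduced cost c'_j, dc_j, a1j, a2j)
cols = {3: (-1, 3, 1, 1), 4: (-9/4, 0, 1, 0), 5: (-1/4, 0, 0, -1)}
c_new = {}
for j, (cj, dcj, a1j, a2j) in cols.items():
    c_new[j] = cj + dcj - dc1 * (a1j * b11 + a2j * b12)
    print(f"c'{j},new = {c_new[j]}")
# c_new[3] comes out positive, so the problem must be resolved.
```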

In the MPSX program the RANGE command and the parametrics are used to find the range over which the variables, right-hand-sides, and the coefficients of the objective function and constraints may be varied without changing the basis for the optimal solution. Output from the RANGE command consists of four sections: sections 1 and 2 for rows and columns at their limit levels, and sections 3 and 4 for rows and columns at an intermediate level (in the basis), which will be described here. Further information is given in References (12, 15, and 16).

In Table 4-13 the RANGE output is shown for constraint rows at upper and lower limit levels. The first four columns have the same meaning as in the output from SOLUTION. The next four have two entries for each row. LOWER ACTIVITY and UPPER ACTIVITY are the lower and upper bounds on the range of values that the row activity (right-hand side) may have. Because the slack variable for the row is zero at a limit level, the upper and lower activities are numerically equal to the bounds of the range that the right-hand sides may have. The two UNIT COST entries are the changes in the objective function per unit change of activity when moving from the solution activity to either the upper or lower bound. The column labeled LIMITING PROCESS contains the name of the row or column that will leave the basis if the activity bounds are violated. The status column, AT, indicates the status of the leaving row or column. For example, in line 15 of Table 4-13, the row FOMIN is at its lower limit, its activity value is 10,000, and the right-hand side may take on values between 5,652.8 and 12,252.2 without changing the basis. If FOMIN goes below 5,652.8, then CCFODF would leave the basis. If FOMIN exceeds 12,252.2, then SRFODF would leave the basis. The cost associated with a change in FOMIN is $27.18/bbl with profit decreasing for an increase in FOMIN.

Similar information is provided in Table 4-14 about the range over which the nonbasic activities (variables) at upper or lower limits may be varied without forcing the row or column in LIMITING PROCESS out of the basis. An additional column, LOWER COST/UPPER COST, is included in the table to show the highest and lowest cost coefficients at which the variable will remain in the basis. If the objective function cost coefficient goes to the LOWER COST, the activity will increase to UPPER ACTIVITY. Similarly, if its cost goes below UPPER COST, the activity will be decreased to LOWER ACTIVITY.

The third section of output from the range study is given in Table 4-15. It contains information about constraints that are not at their limits and, therefore, are in the basis of the optimal solution. The column headings have the same meaning as the headings for section 1 except that here the variables listed under LIMITING PROCESS will enter the basis if the bounds are exceeded.

The fourth section, shown in Table 4-16, gives the RANGE analysis of the columns in the basis. As in Table 4-15, the variable listed under LIMITING PROCESS will enter the basis when activity is forced beyond the upper or lower activity bounds. The information of greatest interest here is in the entries for columns with coefficients in the objective function. These are CRUDE(39), FGAD(40), SRNRF(45), FGRF(46), SRFOCC(49), FCCC(50), PG(57), RG(62), DF(67), and FO(71). Examining the first row in Table 4-16, one finds that if the cost coefficient becomes -41.15 the activity (crude flow rate) would be reduced from 100,000 to 94,572.98. Consequently, if the cost of crude oil is increased to $40.09/barrel (operating cost is $1.00/barrel) the refinery should reduce its throughput by only 5.2%. Also notice that the lower cost for premium gasoline (PG) is 44.082, while the input cost is 45.35. If the bulk sale price of premium gasoline were to drop to $44.08/barrel, it would be profitable for the refinery to produce 23,655 barrels/day, a drop of almost 50% from the optimum value of 47,111 barrels/day currently produced. A similar analysis for fuel oil (FO) indicates that it will probably never be profitable to produce fuel oil, for the sale price would have to increase from $13.14/bbl to $40.32/barrel before production should be increased above the minimum.

## Changes in Coefficients of the Constraint Equations

Referring to equation (4-29), it is seen that changes in the aij's for the nonbasic variables will cause changes in c'j. For the optimal solution to remain optimal, c'j < 0 when maximizing, and if not, the problem must be resolved. To evaluate the changes in the coefficients of the constraint equations, aij, several pages of algebraic manipulations are required. This development is similar to the ones given here for the bi's and cj's and is discussed in detail by Garvin (3) and Gass (4), along with the subject of parametric programming, i.e., evaluating a set of ranges on the aij's, bi's, and cj's where the optimal solution remains optimal. Due to space limitations, these results will not be given here. Also the MPSX code has the capability of making these evaluations, as previously mentioned.

## Addition of New Variables

The effect of adding new variables can be determined by modifying equation (4-19). If k new variables are added to the problem, then k additional terms will be added to equation (4-19), and the coefficient of the kth term is:

Each of these k terms can be computed with the available information. If all of these are less than zero, the original optimal solution remains at the maximum. If equation (4-34) is greater than zero, the solution can be improved, and the problem has to be resolved. Artificial variables are normally used to evaluate additional variables to obtain the new optimal solution.

## Addition of More Constraint Equations

For the addition of more constraint equations, the procedure is to add artificial variables and proceed with the solution to the optimum. The artificial variables supply the canonical form for the solution. The following example shows the effect of adding an additional independent variable and an additional constraint equation to a linear programming problem to illustrate the application of the methods described above.

Example 4-9

Solve the linear programming problem using the Simplex Method

Minimize: x1 - 3x2

Subject to: 3x1 - x2 < 7

-2x1 + 4x2 < 12

Introduce slack variables for an initially feasible basis, and ignore the terms in parentheses for now. This gives:

x1 - 3x2 (+ 2x5) = c     c = 0

3x1 - x2 + x3 (+ 2x5) = 7     x3 = 7

-2x1 + 4x2 + x4 = 12     x4 = 12

x1 = 0     x2 = 0

When the Simplex Method is applied, x2 enters and x4 leaves the basis.

Performing the algebraic manipulations gives:

-0.5x1 + 0.75x4 (+ 2x5) = c + 9     c = -9

2.5x1 + x3 + 0.25x4 (+ 2x5) = 10     x3 = 10

-0.5x1 + x2 + 0.25x4 = 3     x2 = 3

x1 = 0     x4 = 0

When the Simplex Method is applied, x1 enters and x3 leaves the basis, giving the following results:

0.2x3 + 0.8x4 (+ 2.4x5) = c + 11     c = -11

x1 + 0.4x3 + 0.1x4 (- 0.8x5) = 4     x1 = 4

x2 + 0.2x3 + 0.3x4 (+ 0.4x5) = 5     x2 = 5

x3 = 0     x4 = 0

The optimal solution has been obtained, for all of the coefficients of the variables in the objective function (not in the basis) are positive.
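The optimum just found can be cross-checked numerically. The sketch below is not the Simplex Method; it brute-forces the vertices of the feasible region (intersections of constraint pairs) and keeps the best feasible one, which is practical only for a two-variable illustration such as this:

```python
from itertools import combinations

# Brute-force vertex check of example 4-9:
# minimize x1 - 3x2  subject to  3x1 - x2 <= 7, -2x1 + 4x2 <= 12, x1, x2 >= 0.
# Nonnegativity is encoded as -x1 <= 0 and -x2 <= 0.
cons = [((3, -1), 7), ((-2, 4), 12), ((-1, 0), 0), ((0, -1), 0)]

def feasible(x, tol=1e-9):
    return all(a1 * x[0] + a2 * x[1] <= b + tol for (a1, a2), b in cons)

best = None
for ((a1, a2), b1), ((a3, a4), b2) in combinations(cons, 2):
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        continue                       # parallel constraints, no vertex
    x = ((b1 * a4 - b2 * a2) / det, (a1 * b2 - a3 * b1) / det)
    if feasible(x):
        val = x[0] - 3 * x[1]          # objective value at this vertex
        if best is None or val < best[0]:
            best = (val, x)
print(best)                            # (-11.0, (4.0, 5.0))
```

The best vertex, x1 = 4 and x2 = 5 with objective value -11, agrees with the tableau above.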

Having obtained the optimal solution, we compute the inverse of the optimal basis A-1 and the Lagrange Multipliers as follows:

For Lagrange Multipliers, equation (4-22) is used:

λ = -[A-1]T · c

and substituting gives

If the first constraint equation is changed as follows by adding another variable x5:

3x1 - x2 + 2x5 < 7

and the objective function is changed by including x5 as shown below:

x1 - 3x2 + 2x5

determine how this addition of a new variable affects the optimal solution found previously. The linear programming problem now has the following form:

x1 - 3x2+ 2x5 = c

3x1 - x2 + 2x5 + x3 = 7

-2x1 + 4x2 + x4 = 12

To determine if the optimal solution remains optimal, equation (4-34) is used. For this problem n = 4, k = 1, and m = 2, and equation (4-34) has the form:

[c5 + a1,5λ1 + a2,5λ2]

Substituting gives:

[2 + 2(1/5) + 0(4/5)] = 2.4 > 0

The optimal solution remains optimal, for equation (4-34) is positive, and it is not necessary to resolve the problem. x5 is not in the basis and has a value of zero.

The terms in parentheses show the solution with the additional variable included. As can be seen, the coefficient at the final step is the same as computed using equation (4-34).
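The pricing of the new column is easy to script. This is a minimal sketch of the evaluation of equation (4-34), using the multipliers λ1 = 1/5 and λ2 = 4/5 found above:

```python
# Pricing the new column of x5 with the optimal Lagrange Multipliers,
# as in equation (4-34).
lam = [1/5, 4/5]        # Lagrange Multipliers of the optimal solution
c5 = 2                  # objective-function coefficient of the new variable x5
a5 = [2, 0]             # column of x5 in the two constraint equations
coeff = c5 + sum(a * l for a, l in zip(a5, lam))
print(coeff)            # 2.4 > 0: the optimal solution remains optimal
```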

Find the new optimal solution if the following constraint equation is added to the problem

-4x1 + 3x2 + 8x5 + x6 = 10

The constraint equation is added to the optimal solution set so that the problem does not have to be completely resolved:

0.2x3 + 0.8x4 + 2.4x5 = c + 11

x1+ 0.4x3 + 0.1x4 - 0.8x5 = 4

x2 + 0.2x3 + 0.3x4 + 0.4x5 = 5

-4x1 + 3x2 + 8x5 + x6 = 10

x6 is used as the variable in the basis from the additional constraint equation. Eliminating x1 and x2 from the added constraint equation by algebraic manipulation gives:

0.2x3 + 0.8x4 + 2.4x5 = c + 11     c = -11

x1 + 0.4x3 + 0.1x4 - 0.8x5 = 4     x1 = 4

x2 + 0.2x3 + 0.3x4 + 0.4x5 = 5     x2 = 5

x3 - 0.5x4 + 3.6x5 + x6 = 11     x6 = 11

x4 = 0     x5 = 0

The new optimal solution has been found, for all of the coefficients in the objective function are positive. Artificial variables would normally have been used, especially in a computer program, to give a feasible basis and proceed to the optimum.
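The elimination of the basic variables from the added row can be sketched as follows; the dictionary representation of the rows is only for illustration, with the numbers taken from the example above:

```python
# Remove the basic variables x1 and x2 from the added constraint
# -4x1 + 3x2 + 8x5 + x6 = 10 so the new row is canonical with x6 basic.
new_row = {'x1': -4.0, 'x2': 3.0, 'x5': 8.0, 'x6': 1.0, 'rhs': 10.0}
x1_row  = {'x1': 1.0, 'x3': 0.4, 'x4': 0.1, 'x5': -0.8, 'rhs': 4.0}
x2_row  = {'x2': 1.0, 'x3': 0.2, 'x4': 0.3, 'x5': 0.4, 'rhs': 5.0}

def eliminate(row, pivot, var):
    """Subtract a multiple of the pivot row so `var` vanishes from `row`."""
    f = row.get(var, 0.0)
    keys = set(row) | set(pivot)
    return {k: row.get(k, 0.0) - f * pivot.get(k, 0.0) for k in keys}

row = eliminate(new_row, x1_row, 'x1')   # cancel the -4x1 term
row = eliminate(row, x2_row, 'x2')       # cancel the 3x2 term
print({k: round(v, 6) for k, v in sorted(row.items()) if abs(v) > 1e-9})
```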

## Closure

In this chapter the study of linear programming was carried from basic concepts through the use of large computer codes to solve industrial problems. Sufficient background was provided to be able to formulate and solve linear programming problems for an industrial plant using one of the large linear programming codes and to interpret the optimal solution and associated sensitivity analysis. In addition, this background should provide the ability for independent reading on extensions of the subject.

The mathematical structure of the linear programming problem was introduced by solving a simple problem graphically. The solution was found to be at the intersection of constraint equations. The Simplex Algorithm was then presented, which showed the procedure of moving from one intersection of constraint equations (basic feasible solution) to another and having the objective function improve at each step until the optimum was reached. Having seen the Simplex Method in operation, we discussed the important theorems of linear programming, which guaranteed that the global optimum would be found for the linear objective function and linear constraints. Then methods were presented which illustrated how a process flow diagram and associated information could be converted to a linear programming problem to optimize an industrial process. This was illustrated with a simple petroleum refinery example, and the solution was obtained using a large standard linear programming code, Mathematical Programming System Extended (MPSX), on an IBM 4341 computer. The chapter concluded with a discussion of post-optimal analysis procedures which evaluated the sensitivity of the solution to changes in important parameters of the linear programming problem. This sensitivity analysis was illustrated using simple examples and results from the solution of the simple refinery using the MPSX code.

A list of selected references is given at the end of the chapter for information beyond that presented here. These texts include the following topics. The Revised Simplex Method is a modification of the Simplex Method that permits a more accurate and rapid solution using digital computers. The dual linear programming problem converts the original or primal problem into a corresponding dual problem which may be solved more readily than the original problem. Parametric programming is an extension of sensitivity analysis where ranges on the parameters, aij's, bi's, and cj's, are computed directly considering more than one parameter at a time. Also, there are decomposition methods that take an extremely large problem and separate or decompose it into a series of smaller problems that can be solved with reasonable computer time and space. In addition, special techniques have been developed for a class of transportation and network problems which facilitate their solution. Linear programming has been extended to consider multiple conflicting criteria, i.e., more than one objective function, and this has been named goal programming. An important extension of linear programming is the case where the variables can take on only integer values, and this has been named integer programming. Moreover, linear programming and the theory of games have been interfaced to develop optimal strategies. Finally, almost all large computers have one or more advanced linear programming codes capable of solving problems with thousands of constraints and thousands of variables. It is a very time-consuming and tedious task to assemble and enter reliable data correctly when using these programs. These codes, e.g., MPSX, are very efficient and use sparse matrix inversion techniques, methods for dealing with ill-conditioned matrices, structural data formats, and simplified input and output transformations. Also, they usually incorporate post-optimal ranging, generalized upper bounding, and parametric programming (9,12). Again, the topics mentioned above are discussed in the articles and books in the References and the Selected List of Texts at the end of the chapter.

## Selected List of Texts on Linear Programming and Extensions

Bazaraa, M. S., and J. J. Jarvis, Linear Programming and Network Flows, John Wiley and Sons, Inc., New York (1977).

Charnes, A., and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, Vols. 1 and 2, John Wiley and Sons, Inc., New York (1967).

Garfinkel, R. S., and G. L. Nemhauser, Integer Programming, John Wiley and Sons, Inc., New York (1972).

Glicksman, A. M., An Introduction to Linear Programming and the Theory of Games, John Wiley and Sons, Inc., New York (1963).

Greenberg, H., Integer Programming, Academic Press, New York (1971).

Hadley, G. H., Linear Programming, Addison-Wesley, Inc., Reading, Mass. (1962).

Land, A. H., and S. Powell, Fortran Codes for Mathematical Programming: Linear, Quadratic and Discrete, John Wiley and Sons, Inc., New York (1973).

Lasdon, L., Optimization Theory for Large Systems, Macmillan and Co., New York (1970).

Naylor, T. H., and E. T. Byrne, Linear Programming Methods and Cases, Wadsworth Publishing Co., Belmont, Calif. (1963).

Orchard-Hays, W., Advanced Linear Programming Computing Techniques, McGraw-Hill Book Co., New York (1968).

Papadimitriou, C. H., and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1982).

Schrage, L., Linear Programming Models with LINDO, Scientific Press, Palo Alto, Calif. (1981).

Taha, H. A., Integer Programming: Theory, Applications and Computations, Academic Press, New York (1975).

## References

1. Dantzig, G. B., Linear Programming and Extensions, Princeton University Press, Princeton, N.J. (1963).

2. An Introduction to Linear Programming, I.B.M. Data Processing Application Manual E20-8171, I.B.M. Corp., White Plains, N.Y. (1964).

3. Garvin, W. W., Introduction to Linear Programming, McGraw-Hill Book Co., N.Y. (1966).

4. Ibid., p. 10.

5. Ibid., p. 12.

6. Ibid., p. 21.

7. Gass, S. I., Linear Programming: Methods and Applications, 5th Ed., McGraw-Hill Book Co., New York (1985).

8. Smith, C. L., R. W. Pike, and P. W. Murrill, Formulation and Optimization of Mathematical Models, International Textbook Co., Scranton, Pa. (1970).

9. Holtzman, A. G., Mathematical Programming for Operations Researchers and Computer Scientists, Ed. A. G. Holtzman, Marcel Dekker, Inc., New York (1981).

10. Aronofsky, J. S., J. M. Dutton, and M. T. Tayyabkhan, Managerial Planning with Linear Programming in Process Industry Operations, John Wiley and Sons, Inc., New York (1978).

11. Anonymous, Oil and Gas Journal, 394 (May 3, 1982).

12. Murtagh, B. A., Advanced Linear Programming: Computation and Practice, McGraw-Hill Book Co., New York (1981).

13. Smith, M. G., and J. S. Bonner, Computer-Aided Process Plant Design, Ed. M. E. Leesley, Gulf Publishing Co., Houston, p. 1335 (1982).

14. Stoecker, W. F., Design of Thermal Systems, 2nd Ed., McGraw-Hill Book Co., New York, p. 271 (1980).

15. IBM Mathematical Programming System Extended/370 (MPSX/370) Program Reference Manual, SH19-1095-3, 4th Ed., IBM Corp., White Plains, N.Y. (1979).

16. IBM Mathematical Programming System Extended/370 Primer, GH19-1091-1, 2nd Ed., IBM Corp., White Plains, N.Y. (1979).

17. Ignizio, J. P., Linear Programming in Single and Multiple Objective Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J. (1982).

18. Quandt, R. E., and H. W. Kuhn, "On Upper Bounds for the Number of Iterations in Solving Linear Programs," Operations Research, 12:161-5 (1964).

19. Bradley, S. P., A. C. Hax, and T. L. Magnanti, Applied Mathematical Programming, Addison-Wesley Publishing Co., Reading, Mass., p. 97 (1977).

## Problems

4-1. Solve the following problem by the Simplex Method.

Maximize: 6x_{1} + x_{2} = p

Subject to: 3x_{1} + 5x_{2} __<__ 13

6x_{1} + x_{2} __<__ 12

x_{1} + 5x_{2} __<__ 10

Determine the range on x_{1} and x_{2} for which the optimal solution remains optimal. Explain. (Note: It is not necessary
to use sensitivity analysis.)

Solution

4-2. Solve the following problem by the Simplex Method.

Maximize: x_{1} + 2x_{2} + 3x_{3} - x_{4} = p

Subject to: x_{1} + 2x_{2} + 3x_{3} + x_{5} = 15

2x_{1} + x_{2} + 5x_{3} + x_{6} = 20

x_{1} + 2x_{2} + x_{3} + x_{4} = 10

Start with x_{4}, x_{5}, and x_{6} in the basis.

Solution

4-3 a. Solve the following problem by the Simplex Method.

Maximize: 2x_{1} + x_{2} = p

Subject to: x_{1} + x_{2} __<__ 6

x_{1} - x_{2} __<__ 2

x_{1} + 2x_{2} __<__ 10

x_{1} - 2x_{2} __<__ 1

b.Compute the inverse of the optimal basis, and the largest changes in the b_{i}'s for the optimal solution to remain optimal.

Solution

4-4. Solve the following problem by the Simplex Method.

Maximize: 3x_{1} + 2x_{2} = p

Subject to: x_{1} + x_{2} __<__ 8

2x_{1} + x_{2} __<__ 10

Solution

4-5 a. Solve the following problem by the Simplex Method.

Maximize: x_{1} + 2x_{2} = p

Subject to: x_{1} + 3x_{2} __<__ 105

-x_{1} + x_{2} __<__ 15

2x_{1} + 3x_{2} __<__ 135

-3x_{1} + 2x_{2} __<__ 15

b.Solve this problem by the classical theory using Lagrange Multipliers, and explain why Lagrange Multipliers are sometimes called "shadow" or "implicit" prices.

Solution

4-6a. Solve the following problem by the Simplex Method using slack and artificial variables:

Maximize: x_{1} + 10x_{2} = p

Subject to: -x_{1} + x_{2} __>__ 5

3x_{1} + x_{2} __<__ 15

b.Calculate the inverse of the optimal basis and the Lagrange Multipliers.

c.Calculate the largest changes in the right-hand side of the constraint equations
(b_{j}'s) for the optimal solution in part (a) to remain optimal.

Solution

4-7. Solve the following problem by the Simplex Method using the minimum number of slack, surplus, and artificial variables needed for an initially feasible basis.

Minimize: 2x_{1} + 4x_{2} + x_{3} = c

Subject to: x_{1} + 2x_{2} - x_{3} __<__ 5

2x_{1} - x_{2} + 2x_{3} = 2

-x_{1} + 2x_{2} + 2x_{3} __>__ 1

Solution

4-8a. Solve the following problem by the Simplex Method using an artificial variable
x_{6} in the second constraint equation and adding the term -10^{6}x_{6} to the objective function.

Maximize: 2x_{1} + x_{2} + x_{3} = p

Subject to: x_{1} + x_{2} + x_{3} __<__ 10

x_{1} + 5x_{2} + x_{3} __>__ 20

b.Compute the effect of changing cost coefficient c_{1} from 2 to 3, i.e., Δc_{1} = 1, and c_{3} from 1 to 4, i.e., Δc_{3} = 3, using the results of example 4-6.

c.Without resolving the problem, find the new optimal solution if the first constraint equation is changed to the following by using the results of example 4-6.

x_{1} + x_{2} + x_{3} __<__ 5

Also, compute the new optimal values of x_{1} and x_{2} and value of the objective function.

Solution

4-9.Consider the following linear programming problem

Maximize: 2x_{1} + x_{2} = p

Subject to: x_{1} + 2x_{2} __<__ 10

2x_{1} + 3x_{2} __<__ 12

3x_{1} + x_{2} __<__ 15

x_{1} + x_{2} __<__ 4

a.Solve the problem by the Simplex Method using slack variables in the first three equations and an artificial variable in the fourth constraint equation as the initially feasible basis.

b.The following matrix is the inverse of the optimal basis, **A**^{-1}. Multiply this matrix by the matrix **A** to obtain the unit matrix **I**.

c.Compute the Lagrange Multipliers for the problem.

d.Compute the changes in the right-hand side of the constraint equations that will cause all of the values of the variables in the basis to become zero.

Solution

4-10.(3) Consider the following problem based on a blending analysis.

Minimize: 50x_{1} + 25x_{2} = c

Subject to: x_{1} + 3x_{2} __>__ 8

3x_{1} + 4x_{2} __>__ 19

3x_{1} + x_{2} __>__ 7

a.Solve this problem by the Simplex Method.

b.Compute the inverse of the optimal basis and the Lagrange Multipliers.

c.Determine the effect on the optimal solution (variables and cost) if the right-hand side of the second constraint equations is changed from 19 to 21 and the right-hand side of the third constraint equations is changed from 7 to 8.

d.Show that the following must hold for the optimal solution to remain optimal considering changes in the cost coefficients.

3/4 __<__ c_{1}/c_{2} __<__ 3

Solution

4-11.Consider the following linear programming problem.

Maximize: x_{1} + 9x_{2} + x_{3} = p

subject to: x_{1} + 2x_{2} + 3x_{3} __<__ 9

3x_{1} + 2x_{2} + 2x_{3} __<__ 15

a.Solve this problem by the Simplex Method.

b.Compute the inverse of the optimal basis and the Lagrange Multipliers.

c.Determine the largest changes in the right-hand side and in the cost coefficients of the variables in the basis for the optimal solution to remain optimal.

Solution

4-12.Solve the following problem by the Simplex Method. To demonstrate your understanding of the use of slack and artificial variables, use the slack variable in the first two constraint equations and an artificial variable in the third constraint equation as the initially feasible basis.

Maximize: x_{1} + 2x_{2} = p

Subject to: -x_{1} + x_{2} __<__ 2

x_{1} + x_{2} __<__ 6

x_{1} + x_{2} __>__ 1

Solution

4-13

a.Derive equation (4-31) from equation (4-30). Explain the significance of the terms in equation (4-31), and discuss the application of this equation in sensitivity analysis associated with coefficients of the variables in the objective function.

b.Starting with equation (4-25) show that the change in **b** which gives the limit on Δ**b** for x*_{i,new} = 0 is equal to -**b**.

Solution

4-14. In a power plant which is part of a chemical plant or refinery, both electricity
and process steam (high and low pressure) can be produced. A typical power plant has
constraints associated with turbine capacity, steam pressure and amounts, and electrical
demand. In Stoecker(14) the following economic and process model is developed for
a simple power plant producing electricity, high pressure steam x_{1}, and low pressure steam x_{2}.

Maximize: 0.16x_{1} + 0.14x_{2} = p

Subject to: x_{1} + x_{2} __<__ 20

x_{1} + 4x_{2} __<__ 60

4x_{1} + 3x_{2} __<__ 72

Determine the optimal values of x_{1} and x_{2} and the maximum profit using the Simplex Method.

Solution

4-15. A company makes two levels of purity of a product which is sold in gallon containers. Product A is of higher purity than product B, with profits of $0.40 per gallon made on A and $0.30 per gallon made on B. Product A requires twice the processing time of B, and if only B were produced, the company could make 1,000 gallons per day. However, the raw material supply is sufficient for only 800 gallons per day of both A and B combined. Product A requires a container of which only 400 one-gallon containers per day are available, while there are 700 one-gallon containers per day available for B. Assuming all of the product of both A and B can be sold, what volumes of each should be produced to maximize the profit? Solve the problem graphically and by the Simplex Method.

Solution

4-16. A wax concentrating plant, as shown below, receives feedstock with a low concentration
of wax and refines it into a product with a high concentration of wax. In Stoecker(14)
the selling prices of the products are x_{1}, $8 per hundred pounds, and x_{2}, $6 per hundred pounds; the raw material costs are x_{3}, $1.5 per hundred pounds, and x_{4}, $3 per hundred pounds.

The plant operates under the following constraints:

- The same amount of wax leaves the plant as enters it.
- The receiving facilities of the plant are limited to no more than a total of 800 lb/hr.
- The packaging facilities can accommodate a maximum of 600 lb/hr of x_{2} and 500 lb/hr of x_{1}.

If the operating cost of the plant is constant, use the Simplex Algorithm to determine the purchase and production plan that results in the maximum profit.

Solution

4-17. A company produces a product and a by-product, and production is limited by two constraints. One is on the availability of raw material, and the other is on the capacity of the processing equipment. The product requires 3.0 units of raw material and 2.0 units of processing capacity. The by-product requires 4.0 units of raw material and 5.0 units of processing capacity. There is a total of 1700 units of raw material available and a total of 1600 units of processing capacity. The profit is $2.00 per unit for the product and $4.00 per unit for the by-product.

The economic model and constraints are:

Maximize: 2x_{1} + 4x_{2}

Subject to: 3x_{1} + 4x_{2} __<__ 1700 raw material constraint

2x_{1} + 5x_{2} __<__ 1600 processing capacity constraint

a.Determine the maximum profit and the production of the product x_{1} and by-product x_{2} using the Simplex Method.

b.Calculate the inverse of the optimal basis and the Lagrange Multipliers.

c.If the total raw material available is increased from 1700 to 1701, determine the new product, by-product, and profit.

d.If an additional 10 units of processing capacity can be obtained at a cost of $7, i.e., 1600 is increased to 1610, is this additional capacity worth obtaining?

e.A second by-product can be produced that requires 4.0 units of raw material and 3 1/3 units of processing capacity. Determine the profit that would have to be made on this by-product to consider its production.

Solution

4-18.(14) A chemical plant whose flow diagram is shown in Figure 4-9 manufactures ammonia, hydrochloric acid, urea, ammonium carbonate, and ammonia chloride from carbon dioxide, nitrogen, hydrogen, and chlorine. The x values in Figure 4-9 indicate flow rates in moles per hour.

The costs of the feed stocks are c_{1}, c_{2}, c_{3}, and c_{4}, and the values of the products are p_{5}, p_{6}, p_{7}, and p_{8} in dollars per mole, where the subscript corresponds to that of the x value. In reactor
3 the ratios of molal flow rates are m = 3x_{7} and x_{1} = 2x_{7} and, in other reactors, straightforward material balances apply. The capacity of reactor
1 is equal to or less than 2,000 mol/hr of NH_{3} and the capacity of reactor 2 is equal to or less than 1,500 mol/hr of HCl as given
by Stoecker (14).

a.Develop the expression for the profit.

b.Write the constraint equations for this plant.

Solution

4-19.(8)The flow diagram of a simple petroleum refinery is shown in Figure 4-10. The prices and quality specifications of the products and their minimum production rates are given below:

The current cost of crude is $32.00/barrel

Operating cost for separation in the crude still is $0.25 per barrel for each product produced. The operating cost for the catalytic cracking unit is $0.10 for the straight-run distillate and $0.15 for the straight-run fuel oil.

The following table gives the specifications for each blending component:

The capacity of the catalytic cracking unit must not exceed 50,000 barrels/day, and the crude still is limited to 100,000 barrels/day. The crude is separated into three volume fractions in the crude still: 0.2 straight-run gasoline, 0.5 straight-run distillate, and 0.3 straight-run fuel oil. In the catalytic cracking unit a product distribution of 0.7 barrel of cat. naphtha, 0.4 barrel of light cat. cycle oil, and 0.2 barrel of heavy cat. cycle oil is obtained per barrel of straight-run distillate. The straight-run fuel oil product distribution is 0.1 barrel of cat. naphtha, 0.3 barrel of light cat. cycle oil, and 0.7 barrel of heavy cat. cycle oil.

Present a matrix representation of this simple refinery, similar to the one shown in Figure 4-8. Be sure to include the objective function and material balance, unit, and blending constraints.

Solution

4-20.For the results of the MPS optimization of the simple refinery consider the following:

a.In Table 4-12(b) on p. 4-60, it shows that the variable SRNPG is not in the basis. Compute the largest change in the cost coefficient of SRNPG for the optimal solution to remain optimal. Confirm that this is the correct answer by the sensitivity analysis results tabulated in the chapter.

b.In Table 4-12(b) the fuel oil (FO) flow rate is at the optimal value of 10,000 barrels/day. Compute the change in the profit if the fuel oil flow rate is increased to 11,000 barrels/day using Lagrange Multipliers. Would this change require the problem to be resolved according to the MPS results? Why?

c.The marketing department of the company requires a minimum of 5,000 barrels/day of residual fuel, a new product. Residual fuel (RF) is straight-run fuel oil (SRFO) directly from the atmospheric distillation column. The price is $10.00 /barrel, and it is sold "as is." Give the modifications required to the matrix in Figure 4-8 to determine the optimum way to operate the refinery with this new product.

Solution

4-21.Prepare a matrix of the objective function and constraint equations from the process flow diagram for the contact process for sulfuric acid like the one given in Figure 4-8 for the simple refinery. The process flow diagram for the contact process is given in Figure 7-21. Use the following data, and assume that the units not included below have a fixed operating cost that does not affect the optimization.

Solution

4-22.(17)In linear programming there is a dual problem that is obtained from the original or primal problem. Many times the dual problem can be solved with less difficulty than the primal one. The primal problem and corresponding dual problem are stated below in a general form.

Primal Problem                              Dual Problem

Maximize: **c**^{T}**x**                    Minimize: **b**^{T}**v**

Subject to: **A** **x** __<__ **b**         Subject to: **A**^{T} **v** __>__ **c**

**x** __>__ **0**                           **v** __>__ **0**

The relationships between the primal and dual problems are summarized as follows. First, the dual of the dual is the primal problem. An m x n primal gives an n x m dual. For each primal constraint there is a dual variable and vice versa. For each primal variable there is a dual constraint and vice versa. The numerical value of the maximum of the primal is equal to the numerical value of the minimum of the dual. The solution of the dual problem is the Lagrange Multipliers of the primal problem.
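These relationships can be illustrated numerically. The sketch below uses a small hypothetical primal-dual pair (not the numbers of this problem) and checks the stated property that a primal-feasible point and a dual-feasible point with equal objective values certify that both are optimal:

```python
# Hypothetical primal:  maximize 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0
# Its dual:             minimize 4v1 + 2v2  s.t.  v1 + v2 >= 3,  v1 >= 2,  v >= 0
c, b = [3, 2], [4, 2]
x = [2, 2]   # primal-feasible: 2 + 2 <= 4 and 2 <= 2
v = [2, 1]   # dual-feasible:   2 + 1 >= 3 and 2 >= 2
primal_obj = sum(ci * xi for ci, xi in zip(c, x))
dual_obj = sum(bi * vi for bi, vi in zip(b, v))
print(primal_obj, dual_obj)   # equal objective values certify optimality
```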

a.Give the primal problem of the following dual problem.

Minimize: 10v_{1} + 15v_{2}

Subject to: v_{1} + 5v_{2} __>__ 8

v_{1} + v_{2} __>__ 4

- Solve the dual problem by the Simplex Method.
- Using the solution of the dual problem, determine the optimal values for the variables in the primal problem.
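As a numerical cross-check on the hand Simplex calculation, a two-variable dual this small can be solved by brute force: enumerate the intersections of the constraint boundaries (including the axes), discard the infeasible points, and take the smallest objective value. The sketch below does exactly that; it is not the Simplex Method itself, which visits these same vertices systematically as basic feasible solutions.

```python
from itertools import combinations

def solve_2var_min(cons, obj):
    """Minimize obj[0]*v1 + obj[1]*v2 subject to
    a1*v1 + a2*v2 >= r for each (a1, a2, r) in cons, and v1, v2 >= 0,
    by enumerating the vertices of the feasible region."""
    lines = cons + [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # axes as boundaries
    best = None
    for (a1, a2, r1), (b1, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                                    # parallel boundaries
        v1 = (r1 * b2 - r2 * a2) / det                  # Cramer's rule
        v2 = (a1 * r2 - b1 * r1) / det
        feasible = (v1 >= -1e-9 and v2 >= -1e-9 and
                    all(c1 * v1 + c2 * v2 >= r - 1e-9 for c1, c2, r in cons))
        if feasible:
            val = obj[0] * v1 + obj[1] * v2
            if best is None or val < best[0]:
                best = (val, v1, v2)
    return best

# Dual problem stated above: minimize 10v1 + 15v2
# subject to v1 + 5v2 >= 8 and v1 + v2 >= 4
val, v1, v2 = solve_2var_min([(1, 5, 8), (1, 1, 4)], (10, 15))
print(val, v1, v2)   # 45.0 at (v1, v2) = (3.0, 1.0)
```

By the equality of the primal maximum and dual minimum stated in the problem, the objective value found here must match the optimal value of the primal problem recovered in the earlier parts.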

Solution

4-23. The dual problem of linear programming can be obtained from the primal problem using Lagrange Multipliers. Using the form of the equations given in problem 4-22 for the primal problem, and considering that slack variables have been added to the constraints, show that the Lagrangian function can be written as:

L(**x**, **λ**) = **c**^{T}**x** + **λ**^{T}(**A** **x** - **b**)

Rearrange this equation to give the following form:

L(**x**, **λ**) = -**b**^{T}**λ** + **x**^{T}(**A**^{T}**λ** + **c**)
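For reference, the connection between the two forms is a short expansion (writing λ for the multiplier vector); each step uses only the transpose identity for scalar quantities, e.g. λ^{T}**b** = **b**^{T}λ:

```latex
\begin{aligned}
L(\mathbf{x},\boldsymbol{\lambda})
  &= \mathbf{c}^{T}\mathbf{x}
     + \boldsymbol{\lambda}^{T}\mathbf{A}\mathbf{x}
     - \boldsymbol{\lambda}^{T}\mathbf{b} \\
  &= -\mathbf{b}^{T}\boldsymbol{\lambda}
     + \left(\mathbf{c}^{T} + \boldsymbol{\lambda}^{T}\mathbf{A}\right)\mathbf{x} \\
  &= -\mathbf{b}^{T}\boldsymbol{\lambda}
     + \mathbf{x}^{T}\left(\mathbf{A}^{T}\boldsymbol{\lambda} + \mathbf{c}\right)
\end{aligned}
```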

Justify that the following constrained optimization problem can be obtained from the Lagrangian function:

Minimize: **b**^{T}**λ**

Subject to: **A**^{T}**λ** __>__ **c**

This is the dual problem given in problem 4-22. Note that the independent variables of the dual problem are the Lagrange Multipliers or "shadow prices" of the primal problem.

4-24. A primal linear programming problem can be converted into a dual problem as described in problems 4-22 and 4-23. This approach is used when the dual problem is easier to solve than the primal problem. The general form of the primal problem and its dual was given in problem 4-22.

- Solve the dual problem for the primal problem and dual given below.

Primal problem

Minimize: 10x_{1} + 6x_{2} + 8x_{3}

Subject to: x_{1} + x_{2} + 2x_{3} __>__ 2

5x_{1} + 3x_{2} + 2x_{3} __>__ 1

Dual problem:

Maximize: 2v_{1} + v_{2}

Subject to: v_{1} + 5v_{2} __<__ 10

v_{1} + 3v_{2} __<__ 6

2v_{1} + 2v_{2} __<__ 8

- In this procedure the solution of the primal problem is given by the negatives of the coefficients of the slack variables in the objective function of the final iteration of the Simplex Method applied to the dual problem, and the solution of the dual problem is the negative of the Lagrange Multipliers for the primal problem. Give the solution of the primal problem and the Lagrange Multipliers for the primal problem, and show that the minimum of the objective function of the primal problem is equal to the maximum of the objective function of the dual problem.
- In the primal problem give the matrix to be inverted to compute the inverse of the optimal basis.
- Compute the Lagrange multipliers using equation (4-22), and show that they agree with the solution from the dual problem.
- A new variable x_{6} is added to the problem, as shown below.

Minimize: 10x_{1} + 6x_{2} + 8x_{3} + 2x_{6} = p

Subject to: x_{1} + x_{2} + 2x_{3} + x_{4} + 5x_{6} = 2

5x_{1} + 3x_{2} + 2x_{3} + x_{5} + 3x_{6} = 1

Will the optimal solution remain optimal, or will the problem have to be re-solved? Explain.
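For part (a), the two-variable dual above is small enough to cross-check by brute-force vertex enumeration (an illustrative check, not the Simplex tableau the problem asks for): intersect the constraint boundaries and the axes, keep the feasible points, and take the largest objective value.

```python
from itertools import combinations

# Dual of problem 4-24: maximize 2v1 + v2 subject to
# v1 + 5v2 <= 10, v1 + 3v2 <= 6, 2v1 + 2v2 <= 8, and v1, v2 >= 0.
cons = [(1, 5, 10), (1, 3, 6), (2, 2, 8)]        # a1*v1 + a2*v2 <= rhs
lines = cons + [(1, 0, 0), (0, 1, 0)]            # axes as boundaries
best = None
for (a1, a2, r1), (b1, b2, r2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                                 # parallel boundaries
    v1 = (r1 * b2 - r2 * a2) / det               # Cramer's rule
    v2 = (a1 * r2 - b1 * r1) / det
    if (v1 >= -1e-9 and v2 >= -1e-9 and
            all(c1 * v1 + c2 * v2 <= r + 1e-9 for c1, c2, r in cons)):
        val = 2 * v1 + v2
        if best is None or val > best[0]:
            best = (val, v1, v2)
print(best)   # (objective, v1, v2) at the best vertex of the dual feasible region
```

By the equality stated in problem 4-22, the maximum found here for the dual must equal the minimum of the primal objective, which the slack-coefficient procedure in part (b) should reproduce.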

Solution