# All Pages

### From Glossary

Randomized program |

A randomized policy is a distribution over the policy set that describes the policy as a random variable. This is not restricted to stochastic programs. The randomized form is called a *randomized program*, and it is the following semi-infinite program in response space:

$$\min \sum_x p(x) f(x) : \sum_x p(x) g(x) \le 0,\ \sum_x p(x) h(x) = 0,\ \sum_x p(x) = 1,\ p \ge 0,$$

where $p(x) > 0$ for finitely many $x$ and $x \in X$.

In this form, the randomized program is an ordinary linear program if $X$ is finite. More generally, the finite-support definition of $p$ renders the summations well defined. One could interpret $p$ as a randomized policy: use $x$ with probability $p(x)$. A *pure strategy* is when $p(x) = 1$ for some $x$ (so $p(x') = 0$ for $x' \ne x$);
otherwise, $p$ is called a *mixed strategy*.

One key fact is that the solutions to the original mathematical program (in standard form) correspond to pure strategies in this randomized form. Further, a key to the Lagrangian Strong Duality Theorem is that every mixed strategy is dominated by a pure strategy. Moreover, this underlies the Generalized Lagrange Multiplier method, and there is no loss in optimality to restrict mixed strategies to satisfy the *Haar condition*: $p(x) > 0$ for at most $m + 1$ policies, where $m$ is the number of constraints.

Range constraint |

A constraint of the form $a \le g(x) \le b$.

Range of compatibility |

See Compatibility theory.

Rank-one correction |

This is an update to an approximation of a matrix, notably to the Hessian inverse, by adding a matrix of rank 1 to it. This generally takes the form $B' = B + \alpha vv^T$, where $v$ is a nonzero column vector and $\alpha$ is a nonzero scalar (so $\alpha vv^T$ is a rank 1 matrix).
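As a concrete sketch (the matrix and vector below are made-up numbers), a rank-one correction adds $\alpha vv^T$ to a small matrix stored as nested lists:

```python
def rank_one_update(B, v, alpha):
    """Return B + alpha * v v^T for a square matrix B and column vector v."""
    n = len(v)
    return [[B[i][j] + alpha * v[i] * v[j] for j in range(n)] for i in range(n)]

# Example: update the 2x2 identity with v = (1, 2), alpha = 0.5.
B = [[1.0, 0.0], [0.0, 1.0]]
updated = rank_one_update(B, [1.0, 2.0], 0.5)
print(updated)  # [[1.5, 1.0], [1.0, 3.0]]
```

The added term $\alpha vv^T$ has rank 1 by construction, since every row is a multiple of $v$.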

Rank-two correction |

This is an update to an approximation of a matrix, notably to the Hessian inverse, by adding a matrix of rank 2 to it. This generally takes the form of two rank-one updates: $B' = B + \alpha uu^T + \beta vv^T$, where $u$ and $v$ are linearly independent column vectors (so $\alpha uu^T + \beta vv^T$ is a rank 2 matrix).

Rate of convergence |

See convergence.

Rates of substitution |

Arises when equations are split into dependent (or basic) variables and independent (or nonbasic) variables. In linear programming, we rewrite $Ax = b$
as $Bx_B + Nx_N = b$, where $x_B$ is the vector of basic variables and $x_N$ is the vector of nonbasics. Then, the original equations are equivalent to

$$x_B = \bar{b} - \bar{N}x_N,$$

where $\bar{b} = B^{-1}b$ and $\bar{N} = B^{-1}N$. This implies that the *rate of substitution* between $x_{B_i}$ and $x_{N_j}$ is $-\bar{N}_{ij}$, because it describes the marginal rate at which $x_{B_i}$ must change in response to a change in $x_{N_j}$ to maintain the equations with all other $x_N$'s held fixed.
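As an illustrative sketch (the 2x2 basis matrix and nonbasic column below are made-up numbers), the rates of substitution come from the entries of $B^{-1}N$:

```python
def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def rates_of_substitution(B, N):
    """Return B^{-1} N.  Increasing nonbasic variable j by one unit changes
    basic variable i by MINUS entry (i, j) of this matrix."""
    Binv = inv2(B)
    cols = len(N[0])
    return [[sum(Binv[i][k] * N[k][j] for k in range(2)) for j in range(cols)]
            for i in range(2)]

B = [[2.0, 0.0], [0.0, 4.0]]   # basis matrix (example)
N = [[1.0], [2.0]]             # one nonbasic column (example)
print(rates_of_substitution(B, N))  # [[0.5], [0.5]]
```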

Ray |

A translated half-line: $\{r + td : t \ge 0\}$,
where $d$ is a recession direction. We call $r$ the *root*, and we say the ray is *rooted* at $r$. (Also see extreme ray.)

Recession cone |

(also called *characteristic cone*) of a given set $S$: the set of recession directions for $S$.

For example, let $R(S)$ denote the recession cone of $S$. Then, $R(S)$ is empty if $S$ is bounded. If $S$ is the polyhedron $\{x : Ax \le b\}$, then $R(S) = \{d \ne 0 : Ad \le 0\}$.

Recession direction |

At $x \in S$,
this is a nonzero vector $d$ for which the associated ray, $\{x + td : t \ge 0\}$, is contained in $S$. If $d$ is a recession direction for some $x \in S$,
it is called a *recession direction* for $S$.

Recourse model |

See stochastic program.

Reduce |

Same as presolve.

Reduced cost |

In linear programming, it is the sum of the direct cost of a nonbasic variable and the indirect costs from induced changes in the levels of the basic variables (to satisfy the equations). For a dual price vector $y$, the reduced cost vector is $d = c - yA$.
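The pricing step $d = c - yA$ is a direct computation; this sketch uses made-up costs, dual prices, and constraint data:

```python
def reduced_costs(c, y, A):
    """d_j = c_j - y . A_j: the direct cost of variable j minus the value of
    the resources it consumes, priced at the dual price vector y."""
    return [c[j] - sum(y[i] * A[i][j] for i in range(len(y)))
            for j in range(len(c))]

c = [3.0, 5.0]        # objective coefficients (example)
y = [1.0]             # dual price vector (example)
A = [[1.0, 2.0]]      # one constraint row (example)
print(reduced_costs(c, y, A))  # [2.0, 3.0]
```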

Reduced gradient method |

This applies to the case of linear constraints: $\min f(x) : Ax = b,\ x \ge 0$, where $A$ has full row rank. The variables are partitioned into $x = (v, w)$, with the corresponding partition of $A = [B\ N]$, such that the mathematical program is equivalent to:

$$\min f(v, w) : Bv + Nw = b,\ v, w \ge 0.$$

The method assumes $B$ is nonsingular (i.e., only variables with linearly independent columns in $A$ can be grouped into $v$), and the nondegeneracy assumption: $v > 0$. Now $dw$ is the independent direction of change, and $dv = -B^{-1}N\,dw$,
thus keeping $(v + dv, w + dw)$
on the hyperplanes -- i.e., $B(v + dv) + N(w + dw) = b$.

The method considers a direction $dw$ for the independent variables, which then determines the direction $dv$ for the dependent variables. In particular, $dw$ is chosen by evaluating the gradient of the objective: $df = \nabla_v f\, dv + \nabla_w f\, dw$.
This gradient (with respect to $w$) is called the *reduced gradient*:

$$r = \nabla_w f(v, w) - \nabla_v f(v, w)\, B^{-1}N$$

at $(v, w)$. Then, $dw = -r$ completes the first part of the iteration. The second part is to select a step size, which can be blocked by the non-negativity of $v$. The resulting change can cause the working set to change such that the partition changes.

Also see the generalized reduced gradient method.

Redundant constraint |

A constraint whose removal does not change the feasible region. Suppose $g(x) \le 0$
is redundant, and $S'$ denotes the feasibility region without this constraint (so $x \in S'$ implies $g(x) \le 0$). Then, the constraint is *strongly redundant* if $x \in S'$
implies $g(x) < 0$;
it is *weakly redundant* if it is redundant and $g(x) = 0$ for some $x \in S'$.

Refinery problems |

There are a myriad of these, including fluid blending problems, particularly the pooling problem, to determine a least costly operation to satisfy demands for products.

Reformulation |

Obtaining a new formulation of a problem that is in some sense better, but equivalent to a given formulation. For example, consider the packing constraint: at most 1 element can be selected from $\{1, \dots, n\}$. Letting $x_j$ be the associated binary variable, the following formulations have the same feasibility region: (1) $x_i + x_j \le 1$ for all $i < j$; (2) $x_1 + x_2 + \dots + x_n \le 1$.

Formulation 2 is better because its LP relaxation is sharper: the relaxed region for 2 is a proper subset of the relaxed region for 1.
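Assuming formulation 1 uses the pairwise constraints $x_i + x_j \le 1$ and formulation 2 uses $\sum_j x_j \le 1$ (a standard packing example), a fractional point can exhibit the difference between the two LP relaxations:

```python
def feasible_pairwise(x):
    """Formulation 1: x_i + x_j <= 1 for every pair i < j."""
    n = len(x)
    return all(x[i] + x[j] <= 1 for i in range(n) for j in range(i + 1, n))

def feasible_sum(x):
    """Formulation 2: x_1 + ... + x_n <= 1."""
    return sum(x) <= 1

x = [0.5, 0.5, 0.5]          # a fractional point in the LP relaxation
print(feasible_pairwise(x))  # True  -- allowed by formulation 1
print(feasible_sum(x))       # False -- cut off by the sharper formulation 2
```

Every binary point satisfying one formulation satisfies the other, yet the relaxed region of formulation 2 is strictly smaller, which is what "sharper" means here.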

Reformulation-linearization technique |

(RLT). The reformulation-linearization technique is a general methodology for recasting difficult optimization problems into higher-dimensional spaces for the purpose of obtaining tight bounds. As suggested by the name, it consists of the two basic steps of reformulation and linearization. Given a mixed 0-1 linear program in $n$ binary variables, the reformulation step creates additional redundant nonlinear constraints by multiplying the constraints by *product factors* of the binary variables and their complements, and subsequently enforces the idempotent identity that $x_j^2 = x_j$. The linearization step then substitutes a continuous variable for each distinct product of variables. A hierarchy of relaxations results, depending on the form of the product factors employed. An explicit algebraic characterization of the convex hull is available at the highest level, level-$n$. The method is also applicable to mixed 0-1 polynomial programs and to continuous, nonconvex programs. Furthermore, it can be applied to general discrete optimization problems by using product factors of Lagrange interpolating polynomials.

Regular constraint |

A global constraint that is defined on a fixed-length sequence of finite-domain variables and states that the values taken by this sequence of variables belong to a given regular language. It allows us to express relations between the variables of a sequence, such as the maximum length of a sub-sequence of identical values.

Formally, a *deterministic finite automaton* (DFA) is described by a 5-tuple $M = (Q, \Sigma, \delta, q_0, F)$, where $Q$ is a finite set of states, $\Sigma$ is an alphabet, $\delta : Q \times \Sigma \to Q$ is a transition function, $q_0 \in Q$ is the initial state, and $F \subseteq Q$ is the set of final (or accepting) states. Given an input string, the automaton starts in the initial state and processes the string one symbol at a time, applying the transition function at each step to update the current state. If the last state reached belongs to the set of final states $F$, then the string is accepted. The set of all strings accepted by $M$ is the language $L(M)$.

Now, let $M$ be a DFA and let $X = (x_1, \dots, x_n)$ be a sequence of variables with domains $D(x_i) \subseteq \Sigma$ for $1 \le i \le n$. Then $\mathrm{regular}(X, M) = \{(d_1, \dots, d_n) : d_i \in D(x_i),\ d_1 d_2 \cdots d_n \in L(M)\}$.
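The DFA acceptance test can be sketched in a few lines; the "even number of 1s" automaton below is a made-up example, with the transition function $\delta$ stored as a dictionary:

```python
def accepts(delta, q0, final, string):
    """Run the DFA from q0 over the string; accept iff the last state is final."""
    state = q0
    for symbol in string:
        state = delta[(state, symbol)]
    return state in final

# delta for "even number of 1s" over the alphabet {0, 1}
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}
print(accepts(delta, 'even', {'even'}, '101'))  # True  (two 1s)
print(accepts(delta, 'even', {'even'}, '1'))    # False (one 1)
```

A `regular` constraint over variables $x_1, \dots, x_n$ then admits exactly the assignments whose value strings pass this test.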

Regular point |

This pertains to a mathematical program whose functions are in $C^1$, and the issue is whether the Lagrange Multiplier Rule is valid. A *regular point* is a point that satisfies some constraint qualification, but some authors are more specific and require the

*Lagrange constraint qualification*:

- Let $\nabla g(x)$ denote the matrix whose rows are the gradients of all active constraints at $x$. Then, $x$ is a
*regular point* if $\nabla g(x)$ has full row rank.

This gives additional properties (e.g., see the tangent plane).

In this context, a regular point is also called *Lagrange regular*. The mathematical program is [Lagrange] regular if every feasible point is [Lagrange] regular.

Reified constraint |

If a constraint $C$ is reified by a Boolean variable $b$,
then $b$ is *true* if $C$ holds and *false* otherwise, i.e., $b \Leftrightarrow C$. As an example, consider $b \Leftrightarrow (x \le y)$, where $x \le y$ is a constraint reified by $b$.

Reification is the primary mechanism, for example, in composing a higher-level constraint from a logical combination of other constraints.

Relative interior |

The interior of a set when considered in the smallest affine subspace containing it. In particular,
the polyhedron $P = \{x : Ax \le b\}$
can be defined as the intersection of the inequalities that are forced, say $A^1 x \le b^1$,
and the others, say $A^2 x \le b^2$
(so $A^1 x = b^1$ for all $x \in P$,
and each inequality in $A^2 x \le b^2$ is slack at some point of $P$). Then, the *relative interior* of the original polyhedron is $\{x : A^1 x = b^1,\ A^2 x < b^2\}$.

Relaxation |

$R$ is a relaxation of $P$ if: (1) the feasible region of $R$ contains the feasible region of $P$, and (2) the objective value in $R$, say $f_R(x)$, is no worse than that of $P$, say $f_P(x)$, for all $x$ in the domain of $P$ (for maximization, this means $f_R(x) \ge f_P(x)$ for all $x$).

Some of the most common relaxations are as follows:

- dropping integer restrictions (viz., we refer to the *LP relaxation* of a (linear) MIP);
- dropping some constraints (viz., an active set method);
- aggregating constraints (viz., using a surrogate constraint);
- using duality (viz., GLM).

One relaxation is *sharper* than another if its objective value is never further from the original. One way this can occur is for the objectives to be the same and the feasible region to be smaller. In particular, one LP relaxation of a MIP is sharper than another if its feasible region is contained in the other.

Reoptimization |

A term sometimes used to mean a mathematical program is solved after some parameter values have been changed. The idea is to use the original solution, and perhaps some information generated by the algorithm, to solve the related problem more efficiently than starting from scratch. An example is using an advanced basis to begin the simplex method.

Reproduction operation |

See genetic algorithm.

Requirements space |

Originally coined in linear programming, this represents the set of non-negative linear combinations of the columns of the matrix $A$. For a linear program in standard form, the requirements space is the set of feasible right-hand sides, and it is a convex, polyhedral cone.

Residuals |

The difference between the two sides of a system of equations when a candidate solution is substituted. In particular, if we seek to solve $Ax = b$ and $x'$ is some candidate solution, reached by an algorithm, the residual is $r = b - Ax'$.
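As a concrete sketch (with a made-up 2x2 system), the residual $r = b - Ax'$ measures how far a candidate is from solving the equations:

```python
def residual(A, x, b):
    """r = b - Ax for a candidate solution x (A stored as nested lists)."""
    return [b[i] - sum(A[i][j] * x[j] for j in range(len(x)))
            for i in range(len(b))]

A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
print(residual(A, [2.0, 1.0], b))  # [0.0, 0.0] -- an exact solution
print(residual(A, [2.0, 0.5], b))  # [0.5, -0.5] -- a nonzero residual
```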

Response space |

The set of [sub-]range values for the functions of a mathematical program in standard form: $\{(f(x), g(x), h(x)) : x \in X\}$.

Also see the supplement on response space.

Restricted basis entry rule |

This is a restriction on which variables can enter the basis in a simplex method. A common rule arises in separable programming, which uses specially ordered sets: a group of non-negative variables must sum to 1 such that at most two variables are positive, and if two are positive, they must be adjacent. For example, suppose the variables are $x_1, x_2, x_3, x_4$. Then, it is feasible to have $x_1, x_2 > 0$ or $x_2, x_3 > 0$, but it is not feasible to have $x_1, x_3 > 0$ or $x_1, x_4 > 0$. In this case the rule is not to permit a variable to enter the basis unless it can do so without violating the adjacency requirement. For example, if $x_1$ is currently basic, $x_3$ would not be considered for entry.
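The adjacency part of the specially-ordered-set rule is easy to check mechanically; this is a minimal sketch (the tolerance parameter is an assumption for floating-point inputs):

```python
def sos2_feasible(x, tol=1e-9):
    """At most two entries of x may be positive; if two, they must be adjacent."""
    pos = [i for i, v in enumerate(x) if v > tol]  # indices of positive entries
    if len(pos) > 2:
        return False
    return len(pos) < 2 or pos[1] - pos[0] == 1

print(sos2_feasible([0, 0.4, 0.6, 0]))  # True  (adjacent pair x2, x3)
print(sos2_feasible([0.4, 0, 0.6, 0]))  # False (x1 and x3 are not adjacent)
```

A restricted basis entry rule enforces exactly this: a variable is admitted only if the resulting positive set would still pass such a check.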

Another restricted entry rule pertains to the delta form of separable programming (plus other applications): *Do not admit a variable into the basis unless its predecessor variables are at their upper bound.* This means there is an ordered set of bounded variables, say $x_1, \dots, x_k$ with $0 \le x_j \le u_j$,
such that $x_j$ is not considered for basis entry unless $x_i = u_i$ for all $i < j$.

Reverse convex constraint |

A constraint of the form $g(x) \ge 0$, where $g$ is a convex function.

Reverse convex program |

Minimizing a concave function, usually on a polyhedron. If the feasibility region is convex (as in the case of a polyhedron), non-empty and compact, it has an extreme point that is optimal.

Right-hand side |

(RHS). When considering constraints of the form $Ax \le b$ or $Ax = b$, the vector $b$ is called the right-hand side.

Rim data |

In a linear program, the data values are $(A, b, c)$, plus (possibly) bounds, say $L \le x \le U$.
The matrix $A$ pertains to row-column information, whereas all bounds, right-hand sides, and cost coefficients pertain to a column *or* a row. The latter are called rim data elements because of their position around the border ("rim") of the detached-coefficient tableau, with the cost coefficients along one edge and the right-hand sides along another.

Robust optimization |

A term given to an approach to deal with uncertainty, similar to the recourse model of stochastic programming, except that feasibility for all possible realizations (called *scenarios*) is replaced by a penalty function in the objective. As such, the approach integrates goal programming with a *scenario-based* description of problem data. To illustrate, consider the LP:

$$\min\{cx : Ax \ge b,\ x \ge 0\},$$

where the data $(c, A, b)$ are random variables with possible realizations $(c^s, A^s, b^s)$ for $s = 1, \dots, S$
($S$ = number of scenarios). The robust optimization model for this LP is:

$$\min\ f(x) + \lambda P(x),$$

where $f$ is a function that measures the cost of the policy, $P$ is a penalty function, and $\lambda \ge 0$ (a parameter to trade off the cost of infeasibility). One example of $f$ is the expected value:
$f(x) = \sum_s p_s\, c^s x$, where $p_s$ = probability of scenario $s$. In a *worst-case model*, $f(x) = \max_s c^s x$.
The penalty function is defined to be zero if $x$ is feasible for all scenarios -- i.e., $P(x) = 0$ if $A^s x \ge b^s$ for all $s$. In addition, $P$ satisfies a form of monotonicity: worse violations incur greater penalty. This often has the form of separate penalties, $u(d^+_s)$ and $v(d^-_s)$, on the deviations of $A^s x$ above and below $b^s$
-- i.e., the "up" and "down" penalties, where $u$ and $v$ are strictly increasing functions.

The above makes robust optimization similar (at least in the model) to a goal program. Recently, the robust optimization community defines it differently -- it optimizes for the worst-case scenario. Let the uncertain MP be given by

$$\min\{f(x; s) : g(x; s) \le 0\},\quad s \in S,$$

where $S$ is some set of scenarios (like parameter values). The *robust optimization model* (according to this more recent definition) is:

$$\min_x\ \max_{s \in S}\ \{f(x; s) : g(x; s) \le 0 \text{ for all } s \in S\}.$$

The policy $x$ is required to be feasible no matter what parameter value (scenario) occurs; hence, it is required to be in the intersection of all possible feasible regions. The inner maximization yields the worst possible objective value among all scenarios. There are variations, such as "adjustability" (i.e., recourse).
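For a finite scenario set, the min-max model reduces to comparing each candidate policy by its worst scenario cost. This is a minimal sketch; the candidate policies, scenarios, and cost function are made-up numbers for illustration:

```python
def robust_choice(policies, scenarios, cost):
    """argmin over policies of (max over scenarios of cost(x, s))."""
    return min(policies, key=lambda x: max(cost(x, s) for s in scenarios))

policies = [1.0, 2.0, 3.0]            # candidate decisions (example)
scenarios = [0.5, 1.5]                # possible parameter values (example)
cost = lambda x, s: (x - 2 * s) ** 2  # cost of decision x under scenario s

print(robust_choice(policies, scenarios, cost))  # 2.0
```

Here 2.0 wins because its worst-case cost, 1.0, is smaller than the worst-case cost of either 1.0 or 3.0 (both 4.0), even though each of those is optimal under one scenario.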

Rosen decomposition |

Essentially the dual of Dantzig-Wolfe decomposition, this applies to a linear program with linking activities: a block-diagonal constraint structure coupled by a set of common columns.

Rosenbrock function |

The original form has 2 variables:

$$f(x) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2.$$

The significance of this function is that it has "banana-shaped" contours, making it difficult for nonlinear programming algorithms, particularly Cauchy's steepest descent, to solve it. The (global) minimum is at $x^* = (1, 1)$,
with $f(x^*) = 0$.
There are variations that extend Rosenbrock's function, many by Schittkowski, such as the following on $\mathbb{R}^n$:

$$f(x) = \sum_{i=1}^{n-1}\left[\,100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\,\right].$$
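The two-variable form is easy to evaluate directly; this sketch checks the minimum and the classical starting point $(-1.2, 1)$ often quoted in the literature:

```python
def rosenbrock(x1, x2):
    """Original two-variable Rosenbrock function."""
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

print(rosenbrock(1.0, 1.0))   # 0.0 -- the global minimum at (1, 1)
print(rosenbrock(-1.2, 1.0))  # approximately 24.2 -- classical start point
```

The steep, curved valley along $x_2 = x_1^2$ is what makes steepest descent zigzag badly on this function.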

Routing problems |

Finding a path or cycle in a network. An easy routing problem is the shortest path; a hard one is the TSP. One prevalent class, with many variations, is vehicle routing.
