Dynamic Programming

Many planning and control problems involve a sequence of decisions made over time. The initial decision is followed by a second, the second by a third, and so on, and the process may continue indefinitely. Because the word dynamic describes situations that occur over time and programming is a synonym for planning, the original definition of dynamic programming was "planning over time." In a limited sense, our concern is with decisions that relate to and affect phenomena that are functions of time. This is in contrast to other forms of mathematical programming, which often, but not always, describe static decision problems. As in many fields, the original definition has been broadened over the years to connote an analytic approach to problems involving decisions that are not necessarily sequential but can be viewed as such. In this expanded sense, dynamic programming (DP) has come to embrace a solution methodology in addition to a class of planning problems. It is used to best advantage when the decision set is bounded and discrete and the objective function is nonlinear.

This section is primarily concerned with modeling deterministic, discrete systems. Modeling requires definitions of states and decisions, as well as the specification of a measure of effectiveness. To model with dynamic programming, an abstraction or reduction of complexity from the real-world problem is necessary, for two reasons, one practical and the other computational. From a practical point of view, it is rarely possible to identify and evaluate all the factors that are relevant to a realistic decision problem, so the analyst will inevitably leave out some more or less important descriptors of the situation. From a computational point of view, only problems with relatively simple state descriptions can be solved by dynamic programming algorithms, so abstraction is necessary to arrive at a formulation that is computationally tractable. A particular problem often has several alternative representations in terms of state and decision variables. Because some formulations require far more computation than others, it is important that the analyst choose the most manageable representation.
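
As a concrete illustration of these elements, the sketch below gives a generic backward recursion for a finite-stage, deterministic, discrete model. It is only a sketch under assumed conventions: the names states, decisions, transition, reward, and terminal_value are illustrative and are not the notation used in this text.

```python
# A minimal sketch of a finite-stage, deterministic, discrete DP model.
# All names here are illustrative assumptions, not the text's notation.

def backward_recursion(n_stages, states, decisions, transition, reward,
                       terminal_value):
    """Return the optimal value function f and an optimal policy.

    states(n)           -> iterable of states at stage n
    decisions(n, s)     -> iterable of feasible decisions in state s at stage n
    transition(n, s, d) -> state reached at stage n + 1
    reward(n, s, d)     -> stage return (the measure of effectiveness)
    terminal_value(s)   -> value assigned to state s at the final stage
    """
    f = {n_stages: {s: terminal_value(s) for s in states(n_stages)}}
    policy = {}
    for n in range(n_stages - 1, -1, -1):   # work backward through the stages
        f[n], policy[n] = {}, {}
        for s in states(n):
            # choose the decision maximizing stage return plus future value
            best = max(decisions(n, s),
                       key=lambda d: reward(n, s, d) + f[n + 1][transition(n, s, d)])
            f[n][s] = reward(n, s, best) + f[n + 1][transition(n, s, best)]
            policy[n][s] = best
    return f, policy
```

The recursion itself is independent of the application; a caller supplies the five functions that describe a particular model, and the state and decision sets they generate determine the computational effort.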

Dynamic programming has been described as the most general of the optimization approaches because conceivably it can solve the broadest class of problems. In many instances, this promise is unfulfilled because of the attendant computational requirements. Certain problems, however, are particularly well suited to the model structure and lend themselves to efficient computational procedures; in cases involving discontinuous functions or discrete variables, dynamic programming may be the only practical solution methodology.

This section introduces the notation of dynamic programming through an example investment problem. We briefly outline the solution approach to motivate the modeling notation. The textbook describes several additional problem classes along with their dynamic programming models.
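
To suggest how such a formulation looks in practice, the sketch below solves a small, hypothetical investment problem: an integer budget is allocated among three investments with nonlinear returns. The budget and the return table are invented for illustration and are not the data of the textbook's example; the state is the unallocated budget and the decision is the amount committed to the current investment.

```python
from functools import lru_cache

# Hypothetical data: r[i][x] is the return from putting x units (0..4) into
# investment i. The values are illustrative, not taken from the textbook.
BUDGET = 4
r = [
    [0, 3, 5, 6, 6],   # investment 0
    [0, 2, 5, 7, 8],   # investment 1
    [0, 1, 4, 6, 7],   # investment 2
]

@lru_cache(maxsize=None)
def best_return(i, remaining):
    """f_i(s): best total return from investments i, i+1, ... with s units left."""
    if i == len(r):            # no investments remain
        return 0
    # decision: units x committed to investment i, at most the remaining budget
    return max(r[i][x] + best_return(i + 1, remaining - x)
               for x in range(remaining + 1))

print(best_return(0, BUDGET))  # optimal total return (10 for this data)
```

The memoized recursion evaluates each (investment, remaining budget) state only once, which is the computational idea behind dynamic programming recursions.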




  
Operations Research Models and Methods (Internet) by Paul A. Jensen. Copyright 2004. All rights reserved.