Optimization Techniques
Classical Optimization Techniques:
The classical optimization techniques are useful in finding the optimum solution, i.e., the unconstrained maxima or minima, of continuous and differentiable functions.
- These are analytical methods and make use of differential calculus in locating the optimum solution.
- The classical methods have limited scope in practical applications, as many practical problems involve objective functions that are not continuous and/or not differentiable.
- Yet the study of these classical optimization techniques forms a basis for developing most of the numerical techniques that have evolved into advanced methods better suited to today's practical problems.
- These methods assume that the function is twice differentiable with respect to the design variables and that its derivatives are continuous.
- Three main types of problems can be handled by the classical optimization techniques:
- single-variable functions,
- multivariable functions with no constraints,
- multivariable functions with both equality and inequality constraints. In problems with equality constraints, the Lagrange multiplier method can be used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to identify the optimum solution (a small worked sketch follows this list).
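As an illustration, here is a minimal sketch of the Lagrange multiplier method in Python using sympy; the objective, constraint, and variable names are illustrative choices, not part of the original material. The stationary points of the Lagrangian satisfy both the optimality conditions and the constraint.

```python
# Minimal Lagrange multiplier sketch (illustrative problem):
# minimize f(x, y) = x**2 + y**2 subject to x + y = 1.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2            # objective to minimize
g = x + y - 1              # equality constraint g(x, y) = 0

L = f - lam * g            # the Lagrangian
# Stationarity: all partial derivatives of L vanish at the optimum.
equations = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(equations, (x, y, lam), dict=True))  # x = y = 1/2, lam = 1
```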
Numerical Methods of Optimization:
Linear programming: studies the case in which the objective function f is linear and the set A is specified using only linear equalities and inequalities (here, A is the design variable space).
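As a small sketch, a linear program can be solved with scipy.optimize.linprog; the coefficients below are made-up example data. Since linprog minimizes, the maximization objective is negated.

```python
# Minimal linear programming sketch (illustrative data):
# maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
from scipy.optimize import linprog

c = [-3, -2]                      # minimize -3x - 2y (i.e., maximize 3x + 2y)
A_ub = [[1, 1], [1, 3]]           # left-hand sides of the <= constraints
b_ub = [4, 6]                     # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimum at x = 4, y = 0 with value 12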
Integer programming: studies linear programs in which some or all variables are constrained to take on integer values.
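For a tiny illustrative integer program (made-up data), brute-force enumeration over the integer grid is enough to show the idea; practical solvers use techniques such as branch-and-bound instead.

```python
# Minimal integer programming sketch (illustrative data):
# maximize 3x + 2y subject to x + y <= 4, with x, y non-negative integers.
from itertools import product

best_value, best_point = None, None
for x, y in product(range(5), repeat=2):      # x, y in {0, ..., 4}
    if x + y <= 4:                            # feasibility check
        value = 3 * x + 2 * y
        if best_value is None or value > best_value:
            best_value, best_point = value, (x, y)
print(best_point, best_value)                 # (4, 0) with value 12
```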
Quadratic programming: allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities.
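As a sketch, a small quadratic program can be handled with scipy's general-purpose minimize routine (scipy has no dedicated QP solver); the objective and constraint below are illustrative.

```python
# Minimal quadratic programming sketch (illustrative data):
# minimize x**2 + y**2 subject to x + y >= 1.
from scipy.optimize import minimize

objective = lambda v: v[0] ** 2 + v[1] ** 2                      # quadratic objective
constraint = {'type': 'ineq', 'fun': lambda v: v[0] + v[1] - 1}  # x + y - 1 >= 0
res = minimize(objective, x0=[0.0, 0.0], constraints=[constraint], method='SLSQP')
print(res.x)                                  # approximately [0.5, 0.5]
```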
Nonlinear programming: studies the general case in which the objective function or the constraints or both contain nonlinear parts.
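A minimal nonlinear programming sketch, again with scipy's general solver and illustrative data: here both the objective and the constraint are nonlinear.

```python
# Minimal nonlinear programming sketch (illustrative data):
# minimize (x - 1)**2 + (y - 2)**2 subject to x**2 + y**2 <= 1.
from scipy.optimize import minimize

objective = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
constraint = {'type': 'ineq', 'fun': lambda v: 1 - v[0] ** 2 - v[1] ** 2}
res = minimize(objective, x0=[0.0, 0.0], constraints=[constraint], method='SLSQP')
print(res.x)   # approximately [0.447, 0.894], the closest point on the unit disc
```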
Stochastic programming: studies the case in which some of the constraints depend on random variables.
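One common flavour is a chance constraint, which must hold with a given probability. Below is a minimal sample-based sketch with made-up demand data: maximizing x subject to P(x <= D) >= 0.9 reduces, over sampled scenarios, to taking the 10th percentile of D.

```python
# Minimal stochastic programming sketch (illustrative data):
# maximize x subject to the chance constraint P(x <= D) >= 0.9.
import random

random.seed(0)
scenarios = sorted(random.gauss(100, 15) for _ in range(1000))  # samples of D
# x <= D must hold in at least 90% of scenarios, so x is bounded by the
# scenario sitting at the 10% mark of the sorted samples.
x_max = scenarios[int(0.10 * len(scenarios))]
print(round(x_max, 1))
```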
Dynamic programming: studies the case in which the optimization strategy is based on splitting the problem into smaller sub-problems.
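As an illustration, the classic 0/1 knapsack problem splits into sub-problems indexed by the items considered and the remaining capacity; the weights, values, and capacity below are made-up data.

```python
# Minimal dynamic programming sketch: the 0/1 knapsack problem.
# best(i, c) = best value achievable using items 0..i-1 with capacity c.
from functools import lru_cache

weights = [2, 3, 4, 5]
values  = [3, 4, 5, 6]
capacity = 5

@lru_cache(maxsize=None)
def best(i, c):
    if i == 0 or c == 0:
        return 0
    skip = best(i - 1, c)                     # do not take item i-1
    if weights[i - 1] <= c:                   # take item i-1 if it fits
        take = values[i - 1] + best(i - 1, c - weights[i - 1])
        return max(skip, take)
    return skip

print(best(len(weights), capacity))           # 7 (items with weights 2 and 3)
```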
Combinatorial optimization: is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
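A minimal sketch with illustrative data: a tiny travelling salesman instance, where the feasible set is the discrete set of tours and is small enough to enumerate outright.

```python
# Minimal combinatorial optimization sketch: tiny travelling salesman
# problem solved by enumerating all tours (only viable for small instances).
from itertools import permutations

dist = {('A', 'B'): 2, ('A', 'C'): 9, ('A', 'D'): 10,
        ('B', 'C'): 6, ('B', 'D'): 4, ('C', 'D'): 3}

def d(u, v):
    return dist.get((u, v)) or dist[(v, u)]   # symmetric distances

cities = ['B', 'C', 'D']                      # tours start and end at 'A'
def tour_length(order):
    path = ['A', *order, 'A']
    return sum(d(path[i], path[i + 1]) for i in range(len(path) - 1))

best = min(permutations(cities), key=tour_length)
print(best, tour_length(best))                # ('B', 'D', 'C') with length 18
```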
Infinite-dimensional optimization: studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
Constraint satisfaction: studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
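As a sketch, here is a small map-colouring constraint satisfaction problem with illustrative data: there is nothing to optimize, and any assignment satisfying all constraints is a solution.

```python
# Minimal constraint satisfaction sketch: colour a small map with 3
# colours so that no neighbouring regions share a colour (illustrative data).
from itertools import product

regions = ['A', 'B', 'C', 'D']
neighbours = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]
colours = ['red', 'green', 'blue']

# No objective function: print the first assignment meeting all constraints.
for assignment in product(colours, repeat=len(regions)):
    colouring = dict(zip(regions, assignment))
    if all(colouring[u] != colouring[v] for u, v in neighbours):
        print(colouring)
        break
```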
Advanced Optimization Techniques:
- Hill climbing: a graph search algorithm in which the current path is extended with a successor node that is closer to the solution than the end of the current path.
- In simple hill climbing, the first closer node is chosen, whereas in steepest-ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space that are not solutions.
- Hill climbing is widely used in artificial intelligence for reaching a goal state from a starting node. Varying the choice of the next node or of the starting node gives a number of related algorithms (a minimal sketch follows).
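Below is a minimal steepest-ascent hill climbing sketch; the objective function, step size, and neighbourhood are illustrative assumptions.

```python
# Minimal steepest-ascent hill climbing sketch on a 1-D search space
# (objective, step size, and neighbourhood are illustrative choices).
def hill_climb(f, x0, step=0.1, max_iters=1000):
    x = x0
    for _ in range(max_iters):
        # Compare all successors (here: one step left or right) and
        # move to the best one; stop when no neighbour improves on x.
        neighbours = [x - step, x + step]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            return x          # local maximum: no closer node exists
        x = best
    return x

# Example: climb f(x) = -(x - 3)**2, whose global maximum is at x = 3.
print(hill_climb(lambda x: -(x - 3) ** 2, x0=0.0))  # approximately 3.0
```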