Optimization problem

From Wikipedia, the free encyclopedia

Revision as of 16:23, 13 October 2019

In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories depending on whether the variables are continuous or discrete. An optimization problem with discrete variables is known as a discrete optimization problem; in it, we are looking for an object such as an integer, permutation or graph from a countable set. Problems with continuous variables include constrained problems and multimodal problems.

Continuous optimization problem

The standard form of a continuous optimization problem is[1]

  minimize over x:  f(x)
  subject to:       gᵢ(x) ≤ 0, i ∈ {1, …, m}
                    hⱼ(x) = 0, j ∈ {1, …, p}

where

  • f : ℝⁿ → ℝ is the objective function to be minimized over the n-variable vector x,
  • gᵢ(x) ≤ 0 are called inequality constraints,
  • hⱼ(x) = 0 are called equality constraints, and
  • m ≥ 0 and p ≥ 0.

If m = p = 0, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function.
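As a minimal sketch of the standard form (not a method prescribed by the article), the following solves a problem with one equality constraint (m = 0, p = 1) using a quadratic penalty and finite-difference gradient descent; all function and parameter names are illustrative.

```python
# Illustrative sketch: minimize f(x) subject to h(x) = 0 by minimizing
# the penalized objective f(x) + mu * h(x)**2 with gradient descent.

def grad(fun, x, eps=1e-6):
    """Central finite-difference gradient of fun at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((fun(xp) - fun(xm)) / (2 * eps))
    return g

def minimize_penalty(f, h, x0, mu=1000.0, lr=1e-4, steps=2000):
    """Minimize f(x) + mu * h(x)**2, approximating f subject to h(x) = 0."""
    penalized = lambda z: f(z) + mu * h(z) ** 2
    x = list(x0)
    for _ in range(steps):
        g = grad(penalized, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Example: minimize x0**2 + x1**2 subject to x0 + x1 - 1 = 0;
# the exact optimum is (0.5, 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1
x = minimize_penalty(f, h, [0.0, 0.0])
print(x)  # close to [0.5, 0.5]
```

A larger penalty weight mu enforces the constraint more tightly, at the cost of a worse-conditioned penalized objective.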

Combinatorial optimization problem

Formally, a combinatorial optimization problem A is a quadruple[citation needed] (I, f, m, g), where

  • I is a set of instances;
  • given an instance x ∈ I, f(x) is the set of feasible solutions;
  • given an instance x and a feasible solution y of x, m(x, y) denotes the measure of y, which is usually a positive real.
  • g is the goal function, and is either min or max.

The goal is then to find for some instance x an optimal solution, that is, a feasible solution y with

  m(x, y) = g { m(x, y′) | y′ ∈ f(x) }.

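The quadruple (I, f, m, g) can be instantiated for the fewest-edges path problem as an illustrative sketch (the names below are mine, not the article's): an instance is a graph plus two endpoints, f enumerates the feasible solutions (simple paths), m counts edges, and the goal function g is min.

```python
# Illustrative instantiation of (I, f, m, g) for shortest path.

def feasible_solutions(graph, u, v, path=None):
    """f(x): all simple paths from u to v (graph is an adjacency dict)."""
    path = (path or []) + [u]
    if u == v:
        yield path
        return
    for w in graph[u]:
        if w not in path:
            yield from feasible_solutions(graph, w, v, path)

def measure(path):
    """m(x, y): the number of edges in path y."""
    return len(path) - 1

# Instance x: a 4-cycle a-b-d-c-a, with endpoints a and d.
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
best = min(feasible_solutions(graph, "a", "d"), key=measure)  # g = min
print(measure(best))  # 2: both a-b-d and a-c-d use two edges
```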
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m₀. For example, if there is a graph G that contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
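The decision version can be sketched in code: instead of asking for the fewest edges, answer yes/no for a given bound k. This sketch uses breadth-first search, which finds the fewest-edges distance; the names are illustrative.

```python
# Decision version: is there a u-v path using at most k edges?

from collections import deque

def path_within(graph, u, v, k):
    """Answer the decision problem with a breadth-first search."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return dist[w] <= k  # BFS reaches v at its fewest-edges distance
        for n in graph[w]:
            if n not in dist:
                dist[n] = dist[w] + 1
                queue.append(n)
    return False  # v is unreachable from u

graph = {"u": ["a"], "a": ["u", "b"], "b": ["a", "v"], "v": ["b"]}
print(path_within(graph, "u", "v", 10))  # True: the shortest path uses 3 edges
print(path_within(graph, "u", "v", 2))   # False
```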

In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.[2]
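As a concrete example of an approximation algorithm (chosen here as an illustration; the article names none), the classic matching-based 2-approximation for minimum vertex cover returns a cover at most twice the optimal size.

```python
# 2-approximation for minimum vertex cover: repeatedly take both
# endpoints of an arbitrary uncovered edge.

def vertex_cover_2approx(edges):
    """Return a vertex cover at most twice the size of an optimal one."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 4-cycle: the optimal cover has 2 vertices; this edge order yields 4,
# within the factor-2 guarantee.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
print(len(cover))  # 4
```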

References

  1. ^ Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. p. 129. ISBN 978-0-521-83378-3.
  2. ^ Ausiello, Giorgio; et al. (2003). Complexity and Approximation (Corrected ed.). Springer. ISBN 978-3-540-65431-5.