Optimization problem

From Wikipedia, the free encyclopedia
{{Short description|Problem of finding the best feasible solution}}
{{Broader|Mathematical optimization}}


In [[mathematics]], [[engineering]], [[computer science]] and [[economics]], an '''optimization problem''' is the [[Computational problem|problem]] of finding the ''best'' solution from all [[feasible solution]]s.

Optimization problems can be divided into two categories, depending on whether the [[Variable (mathematics)|variables]] are [[continuous variable|continuous]] or [[discrete variable|discrete]]:
* An optimization problem with discrete variables is known as a ''[[discrete optimization]]'' problem, in which an [[Mathematical object|object]] such as an [[integer]], [[permutation]] or [[Graph (discrete mathematics)|graph]] must be found from a [[countable set]].
* A problem with continuous variables is known as a ''[[continuous optimization]]'' problem, in which an optimal value of a [[continuous function]] must be found. Such problems include [[Constrained optimization|constrained problem]]s and multimodal problems.


==Continuous optimization problem==

The ''[[Canonical form|standard form]]'' of a [[Continuity (mathematics)|continuous]] optimization problem is<ref>{{cite book|title=Convex Optimization|first1=Stephen P.|last1=Boyd|first2=Lieven|last2=Vandenberghe|page=129|year=2004|publisher=Cambridge University Press|isbn=978-0-521-83378-3|url=https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf#page=143|format=pdf}}</ref>
<math display=block>\begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1,\dots,m \\
&&&h_j(x) = 0, \quad j = 1, \dots,p
\end{align}</math>
where
* {{math|''f'' : [[Euclidean space|ℝ<sup>''n''</sup>]] → [[Real numbers|ℝ]]}} is the ''[[objective function]]'' to be minimized over the {{mvar|n}}-variable vector {{mvar|x}},
* {{math|''g<sub>i</sub>''(''x'') ≤ 0}} are called ''inequality [[Constraint (mathematics)|constraints]]'',
* {{math|''h<sub>j</sub>''(''x'') {{=}} 0}} are called ''equality constraints'', and
* {{math|''m'' ≥ 0}} and {{math|''p'' ≥ 0}}.
If {{math|''m'' {{=}} ''p'' {{=}} 0}}, the problem is an unconstrained optimization problem. By convention, the standard form defines a '''minimization problem'''. A '''maximization problem''' can be treated by [[Additive inverse|negating]] the objective function.
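As an illustration, the standard form can be attacked by brute force on a small instance (a minimal sketch; the objective, constraints and grid are invented for the example, and real solvers use far more efficient methods):

```python
# Toy standard-form instance:
#   minimize   f(x) = (x0 - 2)^2 + (x1 - 1)^2
#   subject to g1(x) = x0 + x1 - 2 <= 0   (one inequality constraint, m = 1)
#              h1(x) = x0 - x1     = 0    (one equality constraint,  p = 1)

def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g1(x):
    return x[0] + x[1] - 2

def grid_search():
    best_x, best_val = None, float("inf")
    for i in range(-200, 201):   # candidate grid over [-2, 2] in steps of 0.01
        x0 = i / 100
        x = (x0, x0)             # points satisfying h1(x) = 0 exactly
        if g1(x) <= 0 and f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

x_star, val = grid_search()      # the inequality constraint binds: x* = (1.0, 1.0)
# Maximizing f would instead minimize -f, per the negation convention above.
```

Here the unconstrained minimizer (1.5, 1.5) violates the inequality constraint, so the optimum sits on the constraint boundary at (1, 1).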


==Combinatorial optimization problem==
{{Main|Combinatorial optimization}}


Formally, a [[combinatorial optimization]] problem {{mvar|A}} is a quadruple{{Citation needed|date=January 2018}} {{math|(''I'', ''f'', ''m'', ''g'')}}, where
* {{mvar|I}} is a [[Set (mathematics)|set]] of instances;
* given an instance {{math|''x'' ∈ ''I''}}, {{math|''f''(''x'')}} is the set of feasible solutions;
* given an instance {{mvar|x}} and a feasible solution {{mvar|y}} of {{mvar|x}}, {{math|''m''(''x'', ''y'')}} denotes the [[Measure (mathematics)|measure]] of {{mvar|y}}, which is usually a [[Positive (mathematics)|positive]] [[Real number|real]]; and
* {{mvar|g}} is the goal function, and is either {{math|[[Minimum (mathematics)|min]]}} or {{math|[[Maximum (mathematics)|max]]}}.


The goal is then to find for some instance {{mvar|x}} an ''optimal solution'', that is, a feasible solution {{mvar|y}} with
<math display=block>m(x, y) = g\left\{ m(x, y') : y' \in f(x) \right\}.</math>
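The quadruple can be made concrete with a toy problem (invented for illustration): an instance is a pair (items, capacity), a feasible solution is any subset of the items whose total fits the capacity, the measure is that total, and the goal function is max:

```python
from itertools import combinations

def f(instance):
    """f: feasible solutions of an instance = all subsets fitting the capacity."""
    items, capacity = instance
    return [combo
            for r in range(len(items) + 1)
            for combo in combinations(items, r)
            if sum(combo) <= capacity]

def m(instance, y):
    """m: measure of a feasible solution = its total."""
    return sum(y)

g = max  # the goal function

def solve(instance):
    """An optimal solution: a feasible y whose measure attains g over f(instance)."""
    return g(f(instance), key=lambda y: m(instance, y))

best = solve(((3, 5, 8, 9), 14))   # (5, 9), whose measure 14 meets the capacity
```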


For each combinatorial optimization problem, there is a corresponding [[decision problem]] that asks whether there is a feasible solution for some particular measure {{math|''m''<sub>0</sub>}}. For example, if there is a [[Graph (discrete mathematics)|graph]] {{mvar|G}} which contains vertices {{mvar|u}} and {{mvar|v}}, an optimization problem might be "find a path from {{mvar|u}} to {{mvar|v}} that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from {{mvar|u}} to {{mvar|v}} that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
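The path example can be sketched directly (the graph below is illustrative): breadth-first search answers the optimization version, and the decision version reduces to comparing its answer with the bound:

```python
from collections import deque

def fewest_edges(graph, u, v):
    """Optimization version: the minimum number of edges on a path from u to v."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return None  # no path from u to v exists

def path_within(graph, u, v, k):
    """Decision version: is there a path from u to v using k or fewer edges?"""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k

graph = {"u": ["a", "b"], "a": ["v"], "b": ["a"], "v": []}
# fewest_edges(graph, "u", "v") returns 2; path_within(graph, "u", "v", 10) is True
```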


In the field of [[approximation algorithm]]s, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.<ref name=Ausiello03>{{citation
| last1 = Ausiello | first1 = Giorgio
| year = 2003
| title = Complexity and Approximation
| edition = Corrected
| publisher = Springer
| isbn = 978-3-540-65431-5
| display-authors = etal}}</ref>
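One classic near-optimal strategy (a standard textbook example, sketched here for illustration) is the greedy procedure for minimum [[vertex cover]], which repeatedly takes both endpoints of an uncovered edge and is guaranteed to return a cover at most twice the optimal size:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # both endpoints of an uncovered (matching) edge
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
cover = vertex_cover_2approx(edges)   # {1, 2, 3, 4}; an optimal cover is {1, 4}
```

The chosen edges form a matching, and any cover must contain at least one endpoint of each matched edge, which gives the factor-2 guarantee.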


=== NP optimization problem ===
An ''NP-optimization problem'' (NPO) is a combinatorial optimization problem with the following additional conditions.<ref name=Hromkovic02>{{citation
| last1 = Hromkovic | first1 = Juraj
| year = 2002
| edition = 2nd
| title = Algorithmics for Hard Problems
| series = Texts in Theoretical Computer Science
| publisher = Springer
| isbn = 978-3-540-44134-2
}}</ref> Note that the [[Polynomial|polynomials]] referred to below are functions of the size of the respective functions' inputs, not the size of some implicit set of input instances.
* the size of every feasible solution <math>y \in f(x)</math> is polynomially [[Bounded set|bounded]] in the size of the given instance <math>x</math>,
* the languages <math>\{\,x \mid x \in I\,\}</math> and <math>\{\,(x, y) \mid y \in f(x)\,\}</math> can be [[decidable language|recognized]] in [[polynomial time]], and
* <math>m</math> is [[polynomial time|polynomial-time computable]].

This implies that the corresponding decision problem is in [[NP (complexity)|NP]]. In computer science, interesting optimization problems usually have the above properties and are therefore NPO problems. A problem is additionally called a P-optimization (PO) problem if there exists an algorithm which finds optimal solutions in polynomial time. Often, when dealing with the class NPO, one is interested in optimization problems for which the decision versions are [[NP-complete]]. Note that hardness relations are always with respect to some reduction. Because of the connection between approximation algorithms and computational optimization problems, reductions which preserve approximation in some respect, such as the [[L-reduction]], are preferred for this subject over the usual [[Turing reduction|Turing]] and [[Karp reduction]]s. For this reason, optimization problems with NP-complete decision versions are not necessarily called NPO-complete.<ref name=Kann92>{{citation
| last1 = Kann | first1 = Viggo
| year = 1992
| title = On the Approximability of NP-complete Optimization Problems
| publisher = Royal Institute of Technology, Sweden
| isbn = 91-7170-082-X
}}</ref>

NPO is divided into the following subclasses according to their approximability:<ref name=Hromkovic02/>
* ''NPO(I)'': Equals [[FPTAS]]. Contains the [[knapsack problem]].
* ''NPO(II)'': Equals [[Polynomial-time approximation scheme|PTAS]]. Contains the [[makespan scheduling problem]].
* ''NPO(III)'': The class of NPO problems that have polynomial-time algorithms which compute solutions with a cost at most ''c'' times the optimal cost (for minimization problems) or at least <math>1/c</math> of the optimal cost (for maximization problems). In [[Juraj Hromkovič|Hromkovič]]'s book, all NPO(II) problems are excluded from this class unless P=NP. Without the exclusion, it equals APX. Contains [[MAX-SAT]] and metric [[Travelling salesman problem|TSP]].
* ''NPO(IV)'': The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio that is polynomial in a logarithm of the size of the input. In Hromkovič's book, all NPO(III) problems are excluded from this class unless P=NP. Contains the [[set cover]] problem.
* ''NPO(V)'': The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio bounded by some function of ''n''. In Hromkovič's book, all NPO(IV) problems are excluded from this class unless P=NP. Contains the [[Travelling salesman problem|TSP]] and [[Clique problem|Max Clique]] problems.

Another class of interest is NPOPB, NPO with polynomially bounded cost functions. Problems with this condition have many desirable properties.

==See also==
* {{annotated link|Counting problem (complexity)}}
* {{annotated link|Design Optimization}}
* {{annotated link|Ekeland's variational principle}}
* {{annotated link|Function problem}}
* {{annotated link|Glove problem}}
* {{annotated link|Operations research}}
* {{annotated link|Satisficing}} – the optimum need not be found, just a "good enough" solution.
* {{annotated link|Search problem}}
* {{annotated link|Semi-infinite programming}}

==References==
{{reflist}}

==External links==
* {{cite web|title=How Traffic Shaping Optimizes Network Bandwidth|work=IPC|date=12 July 2016|access-date=13 February 2017|url=https://www.ipctech.com/how-traffic-shaping-optimizes-network-bandwidth}}

{{Convex analysis and variational analysis}}
{{Authority control}}

[[Category:Computational problems]]
