Big O notation

In mathematics, computer science, and related fields, big O notation (also known as big Oh notation, big Omicron notation, Landau notation, Bachmann–Landau notation, and asymptotic notation), along with the closely related big Omega, big Theta, and little o notations, describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.

Although developed as a part of pure mathematics, this notation is now frequently also used in the analysis of algorithms to describe an algorithm's usage of computational resources: the worst case or average case running time or memory usage of an algorithm is often expressed as a function of the length of its input using big O notation. This allows algorithm designers to predict the behavior of their algorithms and to determine which of multiple algorithms to use, in a way that is independent of computer architecture or clock rate. Because big O notation discards multiplicative constants on the running time, and ignores efficiency for low input sizes, it does not always reveal the fastest algorithm in practice or for practically-sized data sets, but the approach is still very effective for comparing the scalability of various algorithms as input sizes become large.

A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates.

Formal definition

Let f(x) and g(x) be two functions defined on some subset of the real numbers. One writes

    f(x) = O(g(x)) as x → ∞

if and only if, for sufficiently large values of x, f(x) is at most a constant multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that

    |f(x)| ≤ M|g(x)| for all x > x0.

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).

The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say

    f(x) = O(g(x)) as x → a

if and only if there exist positive numbers δ and M such that

    |f(x)| ≤ M|g(x)| for all x with |x − a| < δ.

If g(x) is non-zero for values of x sufficiently close to a, both of these definitions can be unified using the limit superior:

    f(x) = O(g(x)) as x → a

if and only if

    limsup (x → a) |f(x)/g(x)| < ∞.

Example

In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f(x) is derived by the following simplification rules:

  • If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
  • If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.

For example, let f(x) = 6x^4 − 2x^3 + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x^4, −2x^3, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x^4. Now one may apply the second rule: 6x^4 is a product of 6 and x^4 in which the first factor does not depend on x. Omitting this factor results in the simplified form x^4. Thus, we say that f(x) is a "big O" of x^4, or mathematically we can write f(x) = O(x^4).

One may confirm this calculation using the formal definition: let f(x) = 6x^4 − 2x^3 + 5 and g(x) = x^4. Applying the formal definition from above, the statement that f(x) = O(x^4) is equivalent to its expansion,

    |f(x)| ≤ M|x^4|

for some suitable choice of x0 and M and for all x > x0. To prove this, let x0 = 1 and M = 13. Then, for all x > x0:

    |6x^4 − 2x^3 + 5| ≤ 6x^4 + |2x^3| + 5 ≤ 6x^4 + 2x^4 + 5x^4 = 13x^4,

so

    |6x^4 − 2x^3 + 5| ≤ 13|x^4|.
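
The bound can also be spot-checked numerically. The following minimal Python sketch (an illustration, not a proof) evaluates both sides of the inequality at a few sample points x > x0 = 1:

    # Numeric spot check (not a proof) of |6x^4 - 2x^3 + 5| <= 13|x^4| for x > 1.
    def f(x):
        return 6 * x**4 - 2 * x**3 + 5

    for x in [1.5, 2.0, 10.0, 1000.0]:
        assert abs(f(x)) <= 13 * abs(x**4)
    print("bound holds at all sampled points")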

Usage

Big O notation has two main areas of application. In mathematics, it is commonly used to describe how closely a finite series approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion. In computer science, it is useful in the analysis of algorithms. In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.

There are two formally close, but noticeably different, usages of this notation: infinite asymptotics and infinitesimal asymptotics. This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.

Infinite asymptotics

Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n^2 − 2n + 2.

As n grows large, the n^2 term will come to dominate, so that all other terms can be neglected: for instance, when n = 500, the term 4n^2 is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes.

Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n^3 or n^4. Even if T(n) = 1,000,000n^2, if U(n) = n^3, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000^3 = U(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm.
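
The crossover is easy to observe directly. Here is a minimal Python sketch illustrating that a large constant factor only delays, and never prevents, the point where the higher-order term wins:

    # Sketch: a constant factor of 1,000,000 cannot save n^2 against n^3 forever.
    def T(n):
        return 1_000_000 * n**2

    def U(n):
        return n**3

    for n in [10**3, 10**6, 10**7]:
        print(n, T(n) < U(n))  # False at 10**3 and 10**6 (equal), True at 10**7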

So the big O notation captures what remains: we write either

    T(n) = O(n^2)

or

    T(n) ∈ O(n^2)

and say that the algorithm has order of n^2 time complexity.

Note that "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is technically accurate (see the "Equals sign" discussion below) while the first is a common abuse of notation.[1]

Note that when used in computer science, the domain of the expressions involved is the set of natural numbers and the functions f and g take on only finite values (i.e., f(100) = ∞ is not allowed). In such uses, f(n) = O(g(n)) implies (and is actually equivalent to) that |f(n)/g(n)| is bounded by a constant no matter how the natural number n is selected. That is, in this case f(n) = O(g(n)) implies that there exists some constant C > 0 such that |f(n)| ≤ C|g(n)| holds for all natural numbers n. This implication does not hold in the more general case when the domain of f and g may contain a finite accumulation point.
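
As an illustration of the bounded-ratio view, the following Python sketch samples |f(n)/g(n)| for f(n) = 4n^2 − 2n + 2 and g(n) = n^2 (the constant C = 5 here is an assumption chosen for the example; the ratio actually tends to 4):

    # Numeric sketch of the bounded-ratio characterization of f(n) = O(g(n)).
    def f(n):
        return 4 * n**2 - 2 * n + 2

    def g(n):
        return n**2

    # |f(n)/g(n)| stays below C = 5 for every sampled natural number n >= 1.
    print(all(abs(f(n) / g(n)) <= 5 for n in range(1, 10_000)))  # True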

Infinitesimal asymptotics

Big O can also be used to describe the error term in an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. For example,

    e^x = 1 + x + x^2/2 + O(x^3) as x → 0

expresses the fact that the error, the difference e^x − (1 + x + x^2/2), is smaller in absolute value than some constant times |x^3| when x is close enough to 0.

Properties

If a function f(n) can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n). For example,

    f(n) = 9 log n + 5(log n)^4 + 3n^2 + 2n^3 = O(n^3), as n → ∞.

In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity, one may disregard lower-order terms of the polynomial.

O(n^c) and O(c^n) are very different. The latter grows much, much faster, no matter how big the constant c is (as long as it is greater than one). A function that grows faster than any power of n is called superpolynomial. One that grows more slowly than any exponential function of the form c^n is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization.

O(log n) is exactly the same as O(log(n^c)). The logarithms differ only by a constant factor (since log(n^c) = c log n) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. Exponentials with different bases, on the other hand, are not of the same order. For example, 2^n and 3^n are not of the same order.

Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order of n^2, replacing n by cn means the algorithm runs in the order of c^2·n^2, and the big O notation ignores the constant c^2. This can be written as c^2·n^2 = O(n^2). If, however, an algorithm runs in the order of 2^n, replacing n with cn gives 2^(cn) = (2^c)^n. This is not equivalent to 2^n in general.

Changing of variable may affect the order of the resulting algorithm. For example, if an algorithm's running time is O(n) when measured in terms of the number n of digits of an input number x, then its running time is O(log x) when measured as a function of the input number x itself, because n = Θ(log x).
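
The digit-count example is easy to verify. This minimal Python sketch (an illustration under the stated convention that n is the number of decimal digits of x) shows n = floor(log10 x) + 1 = Θ(log x):

    import math

    # Sketch: the decimal digit count n of x grows like log10(x), so a running
    # time linear in n is O(log x) when measured in terms of x itself.
    for x in [9, 1234, 10**12]:
        n = len(str(x))                              # number of decimal digits
        print(x, n, math.floor(math.log10(x)) + 1)   # the two counts agree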

Product

    f1 = O(g1) and f2 = O(g2) ⇒ f1·f2 = O(g1·g2)
    f·O(g) = O(f·g)

Sum

    f1 = O(g1) and f2 = O(g2) ⇒ f1 + f2 = O(|g1| + |g2|)

This implies f1 = O(g) and f2 = O(g) ⇒ f1 + f2 ∈ O(g), which means that O(g) is a convex cone.
If f and g are positive functions,

    f + O(g) = O(f + g).

Multiplication by a constant

Let k be a constant. Then:

    O(k·g) = O(g) if k is nonzero;
    f = O(g) ⇒ k·f = O(g).

Multiple variables

Big O (and little o, and Ω…) can also be used with multiple variables.

To define Big O formally for multiple variables, suppose f and g are two functions defined on some subset of R^n. We say

    f(x) is O(g(x)) as x → ∞

if and only if

    ∃C ∃M > 0 such that |x| > M ⇒ |f(x)| ≤ C|g(x)|.

For example, the statement

    f(n, m) = n^2 + m^3 + O(n + m) as n, m → ∞

asserts that there exist constants C and M such that

    ∀n, m > M: |g(n, m)| ≤ C(n + m),

where g(n, m) is defined by

    f(n, m) = n^2 + m^3 + g(n, m).

Note that this definition allows all of the coordinates of x to increase to infinity. In particular, the statement

    f(n, m) = O(n^m) as n, m → ∞

(i.e., ∃C ∃M ∀n ∀m …) is quite different from

    ∀m: f(n, m) = O(n^m) as n → ∞

(i.e., ∀m ∃C ∃M ∀n …).

Matters of notation

Equals sign

The statement "f(x) is O(g(x))" as defined above is usually written as f(x) = O(g(x)). Some consider this to be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, O(x) = O(x2) is true but O(x2) = O(x) is not.[2] Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like n = n2 from the identities n = O(n2) and n2 = O(n2)."[3]

For these reasons, it would be more precise to use set notation and write f(x) ∈ O(g(x)), thinking of O(g(x)) as the class of all functions h(x) such that |h(x)| ≤ C|g(x)| for some constant C.[3] However, the use of the equals sign is customary. Knuth pointed out that "mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle."[4]

Other arithmetic operators

Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example, h(x) + O(f(x)) denotes the collection of functions having the growth of h(x) plus a part whose growth is limited to that of f(x). Thus,

    g(x) = h(x) + O(f(x))

expresses the same as

    g(x) − h(x) = O(f(x)).

Example

Suppose an algorithm is being developed to operate on a set of n elements. Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity of O(n^2), and after the subroutine runs the algorithm must take an additional 55n^3 + 2n + 10 time before it terminates. Thus the overall time complexity of the algorithm can be expressed as

    T(n) = O(n^2) + 55n^3 + 2n + 10 = 55n^3 + O(n^2).

This can perhaps be most easily read by replacing O(n^2) with "some function that grows asymptotically slower than n^2". Again, this usage disregards some of the formal meaning of the "=" and "+" symbols, but it does allow one to use the big O notation as a kind of convenient placeholder.

Declaration of variables

Another feature of the notation, although less exceptional, is that function arguments may need to be inferred from the context when several variables are involved. The following two right-hand side big O notations have dramatically different meanings:

    f(m) = O(m^n),
    g(n) = O(m^n).

The first case states that f(m) exhibits polynomial growth, while the second, assuming m > 1, states that g(n) exhibits exponential growth.

To avoid confusion, some authors use the notation

    f(m) = O(m^n) as m → ∞,

which declares the variable that tends to infinity, rather than the less explicit

    f(m) = O(m^n).

Complex usages

In more complex usage, O(...) can appear in different places in an equation, even several times on each side. For example, the following are true for n → ∞:

    (n + 1)^2 = n^2 + O(n)
    (n + O(n^(1/2)))·(n + O(log n))^2 = n^3 + O(n^(5/2))
    n^(O(1)) = O(e^n).

The meaning of such statements is as follows: for any functions which satisfy each O(...) on the left side, there are some functions satisfying each O(...) on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function f(n) = O(1), there is some function g(n) = O(e^n) such that n^(f(n)) = g(n)." In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side.

Orders of common functions

Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a constant and n increases without bound. The slower-growing functions are generally listed first.

See table of common time complexities for a more comprehensive list.

Notation | Name | Example
O(1) | constant | Determining if a number is even or odd; using a constant-size lookup table or hash table
O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap
O(n^c), 0 < c < 1 | fractional power | Searching in a kd-tree
O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry
O(n log n) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort
O(n^2) | quadratic | Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), shell sort, quicksort (worst case), selection sort or insertion sort
O(n^c), c > 1 | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs
L_n[α, c] = e^((c + o(1))(ln n)^α (ln ln n)^(1−α)), 0 < α < 1 | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve
O(c^n), c > 1 | exponential | Finding the (exact) solution to the traveling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
O(n!) | factorial | Solving the traveling salesman problem via brute-force search; finding the determinant with expansion by minors

The statement f(n) = O(n!) is sometimes weakened to f(n) = O(n^n) to derive simpler formulas for asymptotic complexity.

For any k > 0 and c > 0, O(n^c (log n)^k) is a subset of O(n^(c+ε)) for any ε > 0, so may be considered as a polynomial with some bigger order.
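
To make the ordering of these classes concrete, the following minimal Python sketch tabulates a few of the growth functions from the table at small inputs (the choice of sample points is an assumption for illustration only):

    import math

    # Sketch: tabulate common growth classes to illustrate their relative order.
    funcs = [
        ("log n",   lambda n: math.log(n)),
        ("n",       lambda n: float(n)),
        ("n log n", lambda n: n * math.log(n)),
        ("n^2",     lambda n: float(n**2)),
        ("2^n",     lambda n: float(2**n)),
    ]
    for n in [10, 20, 50]:
        print(n, [(name, round(f(n), 1)) for name, f in funcs])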

Related asymptotic notations

Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds. Here, we define some related notations in terms of Big O, progressing up to the family of Bachmann–Landau notations to which Big O notation belongs.

Little-o notation

The relation f(x) ∈ o(g(x)) is read as "f(x) is little-o of g(x)". Intuitively, it means that g(x) grows much faster than f(x), or similarly, the growth of f(x) is nothing compared to that of g(x). It assumes that f and g are both functions of one variable. Formally, it states

    lim (x → ∞) f(x)/g(x) = 0.

Or alternatively:

Let f(x) and g(x) be two functions defined on some subset of the real numbers. One writes

    f(x) = o(g(x)) as x → ∞

if and only if, for each positive constant M, f(x) is at most M multiplied by g(x) in absolute value whenever x is large enough. That is, f(x) = o(g(x)) if and only if, for every positive constant M, there exists a constant x0 such that

    |f(x)| ≤ M|g(x)| for all x > x0.
Note the difference between the earlier formal definition for the big-O notation and the present definition of little-o: while the former has to be true for at least one constant M, the latter must hold for every positive constant M, however small.[1] In this way, little-o notation puts a stronger restriction on f than big-O: every function that is little-o of g is also big-O of g, but not every function that is big-O of g is also little-o of g.

For example,

    2x = o(x^2)
    2x^2 ≠ o(x^2)
    1/x = o(1).

Little-o notation is common in mathematics but rarer in computer science. In computer science the variable (and function value) is most often a natural number. In mathematics, the variable and function values are often real numbers. The following properties can be useful:

  • If c is a nonzero constant, then c·o(f) = o(f).
  • o(f)·o(g) ⊆ o(f·g)
  • o(o(f)) ⊆ o(f)
  • o(f) ⊂ O(f) (and thus the above properties apply with most combinations of o and O).

As with big O notation, the statement " is " is usually written as , which is a slight abuse of notation.
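
The defining limit is easy to observe numerically. A minimal Python sketch (an illustration, not a proof) for the example 2x = o(x^2):

    # Sketch: 2x = o(x^2) because the ratio (2x)/x^2 tends to 0 as x grows.
    def ratio(x):
        return (2 * x) / x**2

    for x in [10, 1000, 10**6]:
        print(x, ratio(x))  # 0.2, 0.002, 2e-06: the ratio heads to 0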

Family of Bachmann–Landau notations

In the following, f and g are functions of n, and each condition holds "eventually", i.e. for all sufficiently large n:

  • f(n) = O(g(n)) (Big Omicron; Big O; Big Oh): f is bounded above by g (up to constant factor) asymptotically; that is, |f(n)| ≤ k·|g(n)| for some k > 0.
  • f(n) = Ω(g(n)) (Big Omega): f is bounded below by g (up to constant factor) asymptotically; that is, |f(n)| ≥ k·|g(n)| for some k > 0. (Note that, since the beginning of the 20th century, papers in number theory have been increasingly and widely using this notation in the weaker sense that f = o(g) is false.)
  • f(n) = Θ(g(n)) (Big Theta): f is bounded both above and below by g asymptotically; that is, k1·|g(n)| ≤ |f(n)| ≤ k2·|g(n)| for some k1, k2 > 0.
  • f(n) = o(g(n)) (Small Omicron; Small O; Small Oh): f is dominated by g asymptotically; that is, |f(n)| ≤ k·|g(n)| for every k > 0.
  • f(n) = ω(g(n)) (Small Omega): f dominates g asymptotically; that is, |f(n)| ≥ k·|g(n)| for every k > 0.
  • f(n) ~ g(n) (on the order of; "twiddles"): f is equal to g asymptotically; that is, f(n)/g(n) → 1 as n → ∞.

Bachmann–Landau notation was designed around several mnemonics, as shown in the "eventually" conditions above and in the bullets below. To conceptually access these mnemonics, "omicron" can be read "o-micron" and "omega" can be read "o-mega". Also, the lower-case versus capitalization of the Greek letters in Bachmann–Landau notation is mnemonic.

  • The o-micron mnemonic: The o-micron reading of f(n) = O(g(n)) and of f(n) = o(g(n)) can be thought of as "O-smaller than" and "o-smaller than", respectively. This micro/smaller mnemonic refers to: for sufficiently large input parameter(s), f(n) grows at a rate that may henceforth be less than k·g(n), regarding O(g(n)) or o(g(n)).
  • The o-mega mnemonic: The o-mega reading of f(n) = Ω(g(n)) and of f(n) = ω(g(n)) can be thought of as "O-larger than". This mega/larger mnemonic refers to: for sufficiently large input parameter(s), f(n) grows at a rate that may henceforth be greater than k·g(n), regarding Ω(g(n)) or ω(g(n)).
  • The upper-case mnemonic: This mnemonic reminds us when to use the upper-case Greek letters in f(n) = O(g(n)) and f(n) = Ω(g(n)): for sufficiently large input parameter(s), f(n) grows at a rate that may henceforth be equal to k·g(n).
  • The lower-case mnemonic: This mnemonic reminds us when to use the lower-case Greek letters in f(n) = o(g(n)) and f(n) = ω(g(n)): for sufficiently large input parameter(s), f(n) grows at a rate that is henceforth inequal to k·g(n).

Aside from Big O notation, the Big Theta Θ and Big Omega Ω notations are the two most often used in computer science; the Small Omega ω notation is rarely used in computer science.

Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta Θ notation might be more factually appropriate in a given context. For example, when considering a function T(n) = 73n^3 + 22n^2 + 58, all of the following are generally acceptable, but tighter bounds (numbers 2 and 3 below) are usually strongly preferred over looser bounds (number 1 below):

  1. T(n) = O(n^100), which is identical to T(n) ∈ O(n^100)
  2. T(n) = O(n^3), which is identical to T(n) ∈ O(n^3)
  3. T(n) = Θ(n^3), which is identical to T(n) ∈ Θ(n^3).

The equivalent English statements are respectively:

  1. T(n) grows asymptotically no faster than n^100
  2. T(n) grows asymptotically no faster than n^3
  3. T(n) grows asymptotically as fast as n^3.

So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation (number 2 in the lists above) would be used more commonly than the Big Theta notation (number 3 in the lists above) because functions that grow more slowly are more desirable. For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.

Extensions to the Bachmann–Landau notations

Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n)·log^k g(n)) for some k. Essentially, it is Big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since log^k n is always o(n^ε) for any constant k and any ε > 0).
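
The parenthetical claim can be observed numerically. A minimal Python sketch (with k = 3 and ε = 0.1 chosen as illustrative assumptions) shows the ratio log^k(n)/n^ε eventually tending to 0, though only at very large n:

    import math

    # Sketch: log^3(n) / n^0.1 first rises, then decays toward 0, so
    # log^k(n) = o(n^eps) even for small eps once n is astronomically large.
    for n in [10**3, 10**9, 10**27, 10**81]:
        print(n, math.log(n)**3 / float(n)**0.1)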

The L notation, defined as

    L_n[α, c] = e^((c + o(1))(ln n)^α (ln ln n)^(1−α)),

is convenient for functions that are between polynomial and exponential.

The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group is also possible.

The "limiting process" x→xo can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g.

The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,

    f ~ g ⟺ (f − g) ∈ o(g),

which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above. (It reduces to lim f/g = 1 if f and g are positive real valued functions.) For example, 2x is Θ(x), but 2x − x is not o(x).
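
The distinction between ~ and Θ is easy to see numerically. A minimal Python sketch: x^2 + x ~ x^2 because the ratio tends to 1, while 2x is Θ(x) but not ~ x because the ratio stays at 2:

    # Sketch: ratio -> 1 means asymptotic equivalence (~); Θ only needs the
    # ratio to stay between two positive constants.
    for x in [10, 1000, 10**6]:
        print(x, (x**2 + x) / x**2, (2 * x) / x)  # first -> 1, second stays 2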

Graph theory

It is often useful to bound the running time of graph algorithms. Unlike most other computational problems, for a graph G = (V, E) there are two relevant parameters describing the size of the input: the number |V| of vertices in the graph and the number |E| of edges in the graph. Inside asymptotic notation (and only there), it is common to use the symbols V and E, when someone really means |V| and |E|. We adopt this convention here to simplify asymptotic functions and make them easily readable. The symbols V and E are never used inside asymptotic notation with their literal meaning, so this abuse of notation does not risk ambiguity. For example, O(E + V log V) means O(|E| + |V|·log |V|) for a suitable metric of graphs. Another common convention, referring to the values |V| and |E| by the names n and m, respectively, sidesteps this ambiguity.
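
As a concrete instance of such a bound, here is a minimal Python sketch of breadth-first search, whose running time is O(V + E) under the adjacency-list assumption used below:

    from collections import deque

    # Sketch: BFS touches each vertex at most once and examines each edge at
    # most once, giving O(V + E) time on an adjacency-list graph.
    def bfs(adj, source):
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:            # each edge examined a constant number of times
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # Example: a path graph on 4 vertices.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(bfs(adj, 0))  # {0: 0, 1: 1, 2: 2, 3: 3}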

History

The notation was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie ("analytic number theory"), the first volume of which (not yet containing big O notation) was published in 1892.[5] The notation was popularized in the work of number theorist Edmund Landau; hence it is sometimes called a Landau symbol. It was popularized in computer science by Donald Knuth, who (re)introduced the related Omega and Theta notations.[6] He also noted that the (then obscure) Omega notation had been introduced by Hardy and Littlewood[7] under a slightly different meaning, and proposed the current definition. Hardy's symbols were (in terms of the modern O notation)

    f ≼ g ⟺ f = O(g)  and  f ≺ g ⟺ f = o(g);

other similar symbols were sometimes used.

The big-O, standing for "order of", was originally a capital omicron; today the identical-looking Latin capital letter O is used, but never the digit zero.

See also

  • Computational complexity theory: A sub-field strongly related to this article

Notes

  1. ^ a b Thomas H. Cormen et al., 2001, Introduction to Algorithms, Second Edition
  2. ^ N. G. de Bruijn (1958). Asymptotic Methods in Analysis. Amsterdam: North-Holland. pp. 5–7. ISBN 9780486642215.
  3. ^ a b Ronald Graham, Donald Knuth, and Oren Patashnik (1994). Concrete Mathematics (2nd ed.). Reading, Massachusetts: Addison-Wesley. p. 446. ISBN 9780201558029.
  4. ^ Donald Knuth (June/July 1998). "Teach Calculus with Big O" (PDF). Notices of the American Mathematical Society. 45 (6): 687. (Unabridged version)
  5. ^ Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM. ISBN 0-89871-420-6, p. 25
  6. ^ Donald Knuth. Big Omicron and big Omega and big Theta, ACM SIGACT News, Volume 8, Issue 2, 1976.
  7. ^ G. H. Hardy and J. E. Littlewood, Some problems of Diophantine approximation, Acta Mathematica 37 (1914), p. 225

Further reading

  • Paul Bachmann. Die Analytische Zahlentheorie. Zahlentheorie. pt. 2 Leipzig: B. G. Teubner, 1894.
  • Edmund Landau. Handbuch der Lehre von der Verteilung der Primzahlen. 2 vols. Leipzig: B. G. Teubner, 1909.
  • G. H. Hardy. Orders of Infinity: The 'Infinitärcalcül' of Paul du Bois-Reymond, 1910.
  • Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 1.2.11: Asymptotic Representations, pp. 107–123.
  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 3.1: Asymptotic notation, pp. 41–50.
  • Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Pages 226–228 of section 7.1: Measuring complexity.
  • Jeremy Avigad, Kevin Donnelly. Formalizing O notation in Isabelle/HOL
  • Paul E. Black, "big-O notation", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 11 March 2005. Retrieved December 16, 2006.
  • Paul E. Black, "little-o notation", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.
  • Paul E. Black, "Ω", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.
  • Paul E. Black, "ω", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 29 November 2004. Retrieved December 16, 2006.
  • Paul E. Black, "Θ", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. Retrieved December 16, 2006.