Talk:Time complexity: Difference between revisions


Mistake

There is a mistake in the description of quasi-polynomial time: 3SAT is not merely an NP-hard problem but an NP-complete one, since we have a polynomial-time verification for a solution of the problem.
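
For reference, the "polynomial verification" point can be made concrete. Below is a minimal sketch (my own illustration; the function name and clause encoding are not from the article) of a verifier for 3SAT that runs in time linear in the formula size, which is what puts 3SAT in NP and, together with NP-hardness, makes it NP-complete:

    def verify_3sat(clauses, assignment):
        # clauses: list of 3-tuples of nonzero ints; literal k means variable |k|,
        # negated if k < 0, e.g. (1, -2, 3) encodes (x1 OR NOT x2 OR x3).
        # assignment: dict mapping variable index -> bool.
        # Each clause is scanned once, so the check is linear in the formula size.
        for clause in clauses:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False  # this clause is not satisfied
        return True

    # Example: (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR x3) with x1 = x2 = True, x3 = False
    print(verify_3sat([(1, -2, 3), (-1, 2, 3)], {1: True, 2: True, 3: False}))  # True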

HTML log-star

User:Cybercobra seems to dislike "O(log* n)", and I think that the <math>-rendered image looks totally out of place in the table. Could we perhaps find an HTML version that would be acceptable to everyone? Some attempts (see the source code for details):

O(log* n), O(log n), O(log n), O(log n),
O(log*n), O(logn), O(logn), O(logn)

The same thing in a larger font so that the differences are easier to see:

O(log* n), O(log n), O(log n), O(log n),
O(log*n), O(logn), O(logn), O(logn)

Any comments? (Personally, I think the 8th version looks best, but the 3rd one might be a reasonable compromise, as it uses only HTML 4.0 entities and hence should render correctly everywhere.) — Miym (talk) 08:46, 13 January 2010 (UTC)[reply]

Only the first one renders on IE6/XP (probably the most common browser/OS combination). Only the first 3 work in Chrome/XP. The 4th and 8th one don't render on Firefox/XP or Opera/XP. I would recommend sticking with the first one for the benefit of poor IE6 users everywhere. If this is fixed in IE7, and we don't care about IE6 users, I vote for the third one. --Robin (talk) 14:24, 13 January 2010 (UTC)[reply]

Thanks for the feedback! Another possibility might be a trick like this, using only ASCII characters, but slightly raising "*":

O(log* n), O(log* n), O(log* n)

This seems to look better than a straightforward use of a "sup" element (which raises "*" too much). I created an experimental template {{log-star}}, which we could use like this:

log* function, O(log* n), O(n log* n), ...

Miym (talk) 15:21, 13 January 2010 (UTC)[reply]

And just a minor clarification: The complication here is that in some fonts "log*" looks a bit like "log" with a raised (superscripted) asterisk, while in some fonts it looks a bit like "log" with a centred, non-raised asterisk, and in some other fonts it is a compromise between the two extremes. Therefore a superscript-* looks strange with some fonts, and a non-superscript-* looks strange with some fonts. That's why I would suggest a "slightly raised *" as a compromise that might be tolerable in all situations. — Miym (talk) 15:30, 13 January 2010 (UTC)[reply]
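
For what it is worth, here is one way the "slightly raised *" idea could be written in plain HTML/CSS. This is only a sketch of the idea, not the actual source of the {{log-star}} template; the exact offset and font size are arbitrary:

    <!-- raise the asterisk by a fraction of the line height instead of a full superscript -->
    O(log<span style="vertical-align: 20%; font-size: 90%;">*</span>&nbsp;<i>n</i>)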

How about just using the <math> markup? --Robin (talk) 14:56, 15 January 2010 (UTC)[reply]
This would also look out of place since it's still a LaTeX picture. On the other hand, wikipedia's LaTeX pictures always look out of place and that is an issue which should be addressed in general. For now, I think the template is a good solution. ylloh (talk) 18:16, 15 January 2010 (UTC)[reply]
I like the latest version using {{log-star}}; nice work Miym. --Cybercobra (talk) 20:30, 15 January 2010 (UTC)[reply]

Sub-linear time algorithms

User:Eldar added: [1]

The specific term sub-linear time algorithm is usually reserved to algorithms that are run over fresh inputs with no assumptions on the input structure (unlike binary search or tree maintenance algorithms), and use neither parallel processing (as the NC1 matrix determinant calculation does) nor non-classical machines (as Grover's algorithm does)

I don't agree that sub-linear time algorithms exclude all other models of computation. Just like polynomial time makes sense for sequential machines, parallel machines, quantum computers etc., I think sub-linear time should make sense for all models too. Perhaps the prevailing use of sub-linear time algorithms is related to property testing, but that doesn't mean that Grover's algorithm isn't a sub-linear time algorithm. It is sub-linear time on a different model of computation. --Robin (talk) 01:52, 15 January 2010 (UTC)[reply]

Yes, I also think that the term "sub-linear time algorithm" makes sense in any model of computation. As another example, in distributed computing, it makes perfect sense to say that an algorithm runs in sublinear time (= number of communication rounds is o(n), where n is the number of nodes in the network). Hence I would suggest that we first define what sub-linear time means in general (and perhaps point out that constant time or logarithmic time is sublinear time), and then explain the properties of sub-linear time algorithms in some models of computation (e.g., emphasise that if there is no parallelism, then a classical machine can't read the entire input, and point out that Turing machines don't make much sense in the study of sub-linear time algorithms). — Miym (talk) 09:39, 15 January 2010 (UTC)[reply]
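
For concreteness, the general, model-independent notion discussed here can be written as follows (this phrasing is a gloss, not a quote from the article):

    % An algorithm runs in sub-linear time if its running time T(n) satisfies
    T(n) = o(n), \qquad \text{i.e.} \qquad \lim_{n \to \infty} \frac{T(n)}{n} = 0,
    % where n is the input size (or, in the distributed setting above, the number of nodes).
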
The intention was not to make claims on how the term should be used, but on how this specific phrase is actually used in the community. If you see "sub-linear algorithm" in a research paper, this is what the phrase will almost always refer to, regardless of whether this is right. Eldar (talk) 22:36, 15 January 2010 (UTC)[reply]
I agree that it is commonly used in that sense, but I don't think you can say "almost always". A quick experiment with Google Scholar, "sublinear time": hit #4: deterministic parallel algorithm; hit #5: distributed algorithm; hit #7: sublinear-time updates in dynamic data structures. There are reasonably many hits for phrases such as "sublinear parallel algorithm". It is even used in standard textbooks in the broader sense: e.g., CLRS seems to use the phrase "sublinear time" in the context of sorting networks (Section 27). What it means in the community may depend on which community you consider. :) — Miym (talk) 23:21, 15 January 2010 (UTC)[reply]
The specific use is for the three-word phrase "sublinear time algorithm" (usually without a hyphen in "sublinear"), and indeed that section was a merged stub article dealing with those. I'm starting to wonder whether things would be served better by unmerging. Eldar (talk) 22:44, 16 January 2010 (UTC)[reply]
I tried reconciling the issues raised here within the section. Take a look. Eldar (talk) 01:15, 21 January 2010 (UTC)[reply]
Thanks, in general, I think it looks fairly good now. The last paragraph might require some editing, though:
  • The 3rd paragraph says that sublinear-time algorithms "must be" randomized, and the last paragraph says that they are "typically" randomized.
  • The reference to streaming algorithms may be a bit confusing (exactly what is sublinear in what; streaming algorithms by definition read all bits of input).
Miym (talk) 09:11, 21 January 2010 (UTC)[reply]
I took care of the above. First bullet: one could nitpick that there are cases in which randomization is not required (e.g. for checking trivial "properties"), so I changed the first reference to randomization. Second bullet: Indeed "streaming" was inappropriate and is now removed. Eldar (talk) 00:57, 22 January 2010 (UTC)[reply]

SUBEPT and fixed parameter tractability

User:Ylloh recently added the following text to the article:

Note that it makes a difference whether the algorithm is allowed to be sub-exponential in the size of the instance, the number of vertices, or the number of edges. In parameterized complexity, this difference is made explicit by considering pairs (L, k) of decision problems L and parameters k. SUBEPT is the class of all parameterized problems that run in time sub-exponential in k and polynomial in the input size n:[1]

SUBEPT = DTIME(2^(o(k)) · poly(n)).

More precisely, SUBEPT is the class of all parameterized problems (L, k) for which there is a computable function f with f(k) ∈ o(k) and an algorithm that decides L in time 2^(f(k)) · poly(n).

While I find fixed parameter tractability very interesting, I don't think it should be mentioned in this article. We don't mention the class FPT, which is arguably the most important class in parametrized complexity theory. I think this article should continue to be about time complexity in one variable n, and not the type studied in parametrized complexity --- I think that would confuse the reader too. --Robin (talk) 14:01, 8 March 2010 (UTC)[reply]

First of all, let me mention that I would not want to define the term sub-exponential in a wikipedia entry twice, with two different meanings. This may be very confusing for a reader unfamiliar with the subject matter and it is why I changed the headings from "First Definition" to "SUBEXP" earlier. On the other hand, wikipedia should only mirror the actual uses of that term and unfortunately these two formalizations are indeed used. So I did not object when you reverted the headings.

I partially disagree with your remark above. Defining the term sub-exponential as 2^(o(n)) with n being the input size may look like a robust notion, but it isn't. Just notice that we have no idea whether ETH can be formulated in terms of the size alone, without using the number of variables or clauses of a formula (a 3CNF formula with m clauses may be of size Θ(m log m)). The sparsification lemma does not give us formulas of linear size, it only gives us formulas with a linear number of clauses. Of course, when we talk about running times of 2^(o(n)) for natural problems, then logarithmic factors in the encoding make no difference -- both measures give sub-exponential time. However, at the current state of research and from a complexity-theoretic perspective, we need a parameter different from the input size to formally talk about sub-exponential time in the sense of the second definition. ylloh (talk) 15:18, 8 March 2010 (UTC)[reply]
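
To spell out the size bound mentioned above (a standard back-of-the-envelope count, assuming the formula uses up to about 3m distinct variables, each literal encoded by a variable index of Θ(log m) bits):

    % size of a 3CNF formula with m clauses under a binary encoding of variable indices
    \underbrace{m}_{\text{clauses}} \times \underbrace{3}_{\text{literals per clause}} \times \underbrace{\Theta(\log m)}_{\text{bits per index}} \;=\; \Theta(m \log m)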

Order of sections

I rearranged the linear and polynomial sections to be more in line with the standard presentation, but then realized that the sections seemed to go in order. For the table, that makes sense, but I'm not sure it makes the most sense for the article sections, so I've left it going constant, linear, polynomial. I won't be offended if someone else disagrees and reverts, but I figured I'd explain my reasoning. jheiv (talk) 22:37, 25 May 2010 (UTC)[reply]

Typos

In the subsection "Exponential time hypothesis" the symbols m and n seem to refer to the same thing. —Preceding unsigned comment added by 193.219.42.53 (talk) 10:54, 18 June 2010 (UTC)[reply]

Strongly polynomial requires integer inputs?

Time_complexity#Strongly_and_weakly_polynomial_time says "a polynomial in the number of integers in the input instance". Why is it important that the inputs be integers? Obviously, you can make a slight generalization and talk about any inputs which can be mapped to integers (i.e. you can represent a string as a very long integer, or a fixed-point rational number can be scaled to be an integer), but why need non-integer inputs be excluded? -- RoySmith (talk) 13:03, 10 September 2010 (UTC)[reply]

Really, "weakly polynomial" is just a synonym for "exponential" for positive inputs. It gets a special name only because we think about numbers differently; I don't see the same reason applying to non-integer inputs. CRGreathouse (t | c) 14:38, 10 September 2010 (UTC)[reply]
The article implies that "weakly polynomial" is a synonym for "polynomial," not for "exponential," which makes sense. The "synonym" for "exponential" that you might be thinking of is "pseudo-polynomial," which is in fact exponential on a log-encoded input if it is polynomial on a unary-encoded input. Another reason to list the definitions all in one list together.
Alexeicolin (talk) 04:16, 18 December 2012 (UTC)[reply]
I agree with RoySmith: the definitions for strongly/poly/pseudo could be combined and stated simply, without invoking the arithmetic model, space, and the strange notion of "number of integers in the input instance", like so, in order:
Definition 1 An algorithm runs in strongly polynomial time if the number of operations (basic arithmetic operations, comparisons, etc.) can be bounded by a polynomial in the number of data items, and it is not dependent on the size of the data items.
Definition 2 An algorithm runs in polynomial time if the number of operations can be bounded by a polynomial in the size of the input, when data items are encoded in binary.
Definition 3 An algorithm runs in pseudo polynomial time if the number of operations can be bounded by a polynomial in the size of the input, when data items are encoded in unary.
Source (re-ordered): ORIE 633 Lecture Notes, David P. Williamson, Cornell University
Alexeicolin (talk) 04:16, 18 December 2012 (UTC)[reply]
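
To illustrate how Definitions 1 and 3 above differ, here are two standard textbook-style examples (my own sketch, not taken from the article or from the lecture notes):

    def total(items):
        # Strongly polynomial (Definition 1): exactly len(items) - 1 additions,
        # no matter how large the individual numbers are.
        s = 0
        for x in items:
            s += x
        return s

    def is_prime_trial_division(n):
        # Pseudo-polynomial (Definition 3): about sqrt(n) iterations, which is
        # polynomial in the value n (i.e. in the unary size of the input) but
        # exponential in its binary size log2(n).
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True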

Comment on text

Not sure how to add comments on text, but as I'm not sure what change to make on the page I'll describe what I mean here. In the section "Constant time" the following can be read: "Hence it is a linear time operation, taking O(n) time (unless some more efficient algorithm is devised, for example, a binary search across a sorted array)". The algorithm referred to is an algorithm to locate the minimum value in an array. My question regards the parenthesis; why would a binary search be needed across a sorted array to obtain the minimum value? We could just index the first (or last) element in the array, and we would have the minimum. What am I missing?

85.24.185.204 (talk) 12:12, 12 December 2010 (UTC)[reply]

Subexponential-time algorithms for NP-complete problems

The section "Quasi-polynomial time" > "Relation to NP-complete problems" asserts "All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time", which could easily be interpreted to contradict the NP-complete page section "Common misconceptions" where it is asserted: "Further, some NP-complete problems actually have algorithms running in superpolynomial, but subexponential time." This issue was raised but not resolved on the latter page's talk page under topic "28 Exponential time as a misconception?". Grammatically, "best-known" would mean those algorithms most well known, but this usage is easily confused with "best known", meaning the algorithms with best performance known, and I suspect the latter was the intended meaning. Can anyone comment on the meaning and correctness of this statement? Thomas (talk) 06:10, 9 January 2012 (UTC)[reply]

KD-Tree search timing mismatch

Hi, on Kd-tree it says that searching is O(log(n)). Here it says that it is O(n^c) with 0 < c < 1. 129.67.86.189 (talk) 19:55, 23 March 2012 (UTC)[reply]

n is not clearly defined

To someone who is not familiar with the subject, equating n with "size" is unclear. It would be helpful to have an explicit definition along the lines of "n is the size of the input measured in bits," or "n is the size of the input measured by its length as a string of characters in a given finite alphabet". At least something to the effect that n(ABCD) = 32 bits in ASCII, or n(ABCD) = 4 characters, rather than, say, n(ABCD) = 65666768 by converting the ASCII codes to a decimal number. This can be a potential source of confusion, because O(exp(n)) often describes the number of steps in a naive algorithm, so it is not clear that this is not the desired definition of size. — Preceding unsigned comment added by 168.122.117.36 (talk) 18:32, 5 July 2012 (UTC)[reply]
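
To make the three candidate measures in the comment above concrete (my own illustration):

    s = "ABCD"
    size_in_bits = len(s.encode("ascii")) * 8                # 32: input length in bits
    size_in_chars = len(s)                                   # 4: input length in characters
    as_decimal = int("".join(str(ord(c)) for c in s))        # 65666768: the string read as a decimal number
    print(size_in_bits, size_in_chars, as_decimal)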

Strongly and weakly polynomial time

There is this sentence in the second-last paragraph:

"There are algorithms which run in polynomial time in the arithmetic model (where arithmetic operations take constant time), but not in the Turing machine model. The Euclidean algorithm [...] is one example."

This seems wrong to me. The constant-time operations in the arithmetic model become polynomial-time on a Turing machine, so polynomial-time stays polynomial-time. This is fine for showing an algorithm that is not strongly-polynomial (in either TM or AM), but that is not what the sentence says. — Preceding unsigned comment added by 129.132.58.129 (talk) 09:55, 9 October 2012 (UTC)[reply]

I checked out the cited source (Grotschel 1988) at the library, and corrected the article accordingly. The source states verbatim: "For instance, the well-known Euclidean algorithm to compute the greatest common divisor of two integers runs in polynomial time in the Turing machine model but not in the arithmetic model." The wiki article had Euclid's GCD exemplifying the opposite. I also made the statements about "running time" more specific, since "steps" and "length of input" are defined differently for the two models. Note that some of the statements made by 129.132.58.129 above might be a bit misleading (e.g. it doesn't make sense to talk about strongly-polynomial "in a model"). See this answer (http://cs.stackexchange.com/a/7485/5106) for the details (as I see them, at least).
Alexeicolin (talk) 00:45, 22 December 2012 (UTC)[reply]
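
A concrete way to see the corrected claim, as I read the definitions: on two b-bit integers, Euclid's algorithm performs Θ(b) division steps in the worst case (consecutive Fibonacci numbers), which is polynomial in the Turing-machine input length, but in the arithmetic model the input consists of just two numbers, so the operation count is not bounded by any polynomial in the number of input integers. A small sketch (my own, not from the source):

    def gcd_steps(a, b):
        # Euclid's algorithm; returns (gcd, number of division steps performed).
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return a, steps

    def fib(k):
        x, y = 0, 1
        for _ in range(k):
            x, y = y, x + y
        return x

    # Worst case: consecutive Fibonacci numbers.  The step count grows with the
    # bit length of the inputs even though the input is always just two integers.
    for k in (10, 20, 40, 80):
        a, b = fib(k + 1), fib(k)
        g, steps = gcd_steps(a, b)
        print(k, a.bit_length(), steps)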

Ask for clarification

From section "Sub-linear time", what is a "string that has a 1-bit indexed by the first log(n) bits"? --151.75.122.18 (talk) 20:19, 14 March 2014 (UTC)[reply]

  1. ^ Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. p. 417. ISBN 978-3-540-29952-3. Retrieved 2010-03-05.