
Talk:Hypercomputation

From Wikipedia, the free encyclopedia

A note on integration/limits

While there are functions that may not have a limit, and/or whose limit cannot be found through numerical means, this does not mean that the limit/integral can't be found by a computer.

Computation > numerical analysis. 203.143.164.204 22:26, 4 November 2007 (UTC)[reply]

Any limit or integral that can be found by hand is Turing computable. I think this even holds if the discoverer was extremely lucky. 68.144.80.168 (talk) 12:58, 25 June 2008 (UTC)[reply]
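As an illustrative sketch of the point above (that limits and integrals found by hand are Turing computable), here is a finite, mechanical approximation of a definite integral. The trapezoidal rule, the function, and the interval are arbitrary examples of my own, not anything from the discussion:

```python
# Illustrative sketch: a purely mechanical (Turing-computable)
# approximation of a definite integral by the trapezoidal rule.

def trapezoid(f, a, b, n=100000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

# Example: integral of x^2 over [0, 1] is exactly 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0)
print(approx)  # very close to 0.3333...
```

Any fixed accuracy can be reached with a finite (bounded, computable) number of steps, which is all "Turing computable" requires.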

Unscientific proposal

This article makes it seem as if hypercomputation held any scientific credibility. It does not. The physical impossibility of this "proposal" is not stressed enough, as others have pointed out in the discussion. I recommend that it be made clear that this is a mathematical, hypothetical concept that has no model in any known, relevant physics. At least it must be made clear that it is impossible in the observed universe (it would need new physics). Exa (talk) 13:36, 30 September 2011 (UTC)[reply]

Hypercomputation, as a mathematical or computing science concept, needs no "scientific credibility" at all. That said, I believe the article does need to be cleaned up to separate the math from the nonsense (which could all be moved to the article supertask) and extended (for instance, I see no mention of infinite time Turing machines, ordinal Turing machines, or perhaps (I'm no expert) infinitary lambda calculus). Susy 11 (talk) 15:49, 15 March 2022 (UTC)[reply]

Analogue computer properties

Kevin Baas said in his edit summary, "took out b%llsh@t and inserted usefull categorization".

The b%llsh@t in question was:

"[Hypercomputation with real valued analogue computing] might require quite outlandish laws of physics (for example, a measurable physical constant with the value Ω)."

Kevin, what are you talking about? Are you suggesting that analogue computers, in general, can solve the Halting problem? If so, citing a proof would be handy! If not, then the text should remain.

I am not suggesting that physics actually has improbable properties like this, just that they would be necessary if, as I believe some articles in the literature have suggested, analogue computers could be hypercomputers. --Pde 04:22, 9 Sep 2003 (EDT)

Well, obviously if you could "measure" values in real life that, say, contain the solution to the halting problem or some such, this is equivalent to performing a hypercomputation. That seems rather far-fetched, however. Besides, AFAIK it is impossible to create a "perfectly" analog computer in real life - there is quantization at some level for all matter, energy, space, time, etc. So even if there were a theoretical hypercomputer, it would not obey the laws of physics, which would put a rather large crimp in its usage. - JustinWick 18:55, 19 November 2005 (UTC)[reply]
It is absolutely far-fetched. If such measurements could answer the halting problem, and they are not practically achievable, we would say that the universe performs "unharnessable hypercomputation". It hypercomputes, we can't. -- pde 03:48, 2 February 2006 (UTC)[reply]

Hypercomputation definition

My understanding, as a computer programmer, of the word "hypercomputation" is a machine that can "solve problems" in less computation time than a Turing machine.

This is not the way the term is used in the literature. As the article explains, "hypercomputation" refers to computing non-recursive functions, i.e. those which cannot be computed by Turing machines. Many things can solve problems faster than Turing machines. Stylised real computers (e.g. register machines, or programming languages with no memory bounds) can do O(1) memory access, which Turing machines cannot. Quantum computers do polynomial-time factorisation, sqrt(N) unsorted search, etc. Non-deterministic Turing machines, which probably don't exist in reality but are nonetheless mathematically interesting, are much faster than Turing machines (how much faster depends on whether P=NP). These devices are not hypercomputers, and cannot even solve the halting problem, let alone any of the harder decision problems in the arithmetic hierarchy. --Pde 00:09, 16 Sep 2003 (UTC)
"Less computation time" means 'much' faster, as you have stated. I did not say "less time", I said "less computation time", which is a different statement altogether. A Turing machine cannot solve a problem in "less computation time" than any other Turing machine. For instance, it cannot solve an E problem in P time. A machine capable of doing this would, by definition, be a hypercomputer. It has been demonstrated that a pulse computer is, in this sense, a hypercomputer. Perhaps if you had read what I wrote before, you would be aware of this fact. I'm sorry that it doesn't agree with your point of view. --Kevin Baas
Kevin, reading your comment, I believe that you haven't understood everything I've been saying. I'll try to work out where this is... (reads)
Aha! bingo. I think I've found the issue. When I said "cannot be computed by Turing machines", you took me literally. What I should have said, was "cannot be computed by Turing complete systems".
All the stuff about other kinds of faster-than-Turing-machine computation (ordinary quantum computers, O(1) memory access, non-deterministic Turing machines, pulse computers) is important, but none of it is hypercomputation. --Pde 00:34, 17 Sep 2003 (UTC)
Ordinary quantum computers are not super-Turing. They are multi-tape Turing machines (i.e. a parallel processing architecture). To the extent that a quantum computer has a finite storage capacity, it has finite parallel processing capability. Up to the point where all processors are in use, the machine would appear to be computing 'faster-than-Turing', because as the problem size grows, more processors (that weren't being used before) are being put to use.
Furthermore, you misunderstood my remark on the superposition of states in a quantum computer. If I have a set of, say, 3 elements, that set has exactly 7 possible nonempty subsets: {{1},{2},{3},{1,2},{2,3},{1,3},{1,2,3}}. If I have a set of say, 256 elements (the number of possible states of a byte), the number of nonempty subsets is finite. Indeed, if I have a finite set of states, the number of possible subsets, or "superpositions" (to use the quantum mechanical sense), is finite. There is no such thing as an infinite superposition. Such a phrase is completely absurd.
I think you might want to do some more reading or study on this. Basis sets for function spaces are always at least countably infinite. In the context of QM, I asked some physicists about this, and we couldn't think of any spatial (as opposed to spin) Hamiltonians which *didn't* have infinite numbers of energy eigenvectors.
I also think your understanding of superpositions is incorrect. Superpositions involve not only a subset of the possible states, but a coefficient for each state; to run with our example, a superposition can be 1.0 x {1}, or 0.3 x {1} + 0.7 x {2}, etc. Each of these is a real, distinct quantum state (just not an eigenstate of the operator in question). --Pde
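For concreteness, the subset count in the exchange above can be checked mechanically: a set of n elements has 2**n - 1 nonempty subsets, which is indeed finite for finite n (Pde's separate point is that superposition coefficients are continuous, which a subset count does not capture). A minimal sketch:

```python
# Enumerate the nonempty subsets of a finite set; there are 2**n - 1.
from itertools import combinations

def nonempty_subsets(elems):
    """Return all nonempty subsets of elems as tuples."""
    subs = []
    for r in range(1, len(elems) + 1):
        subs.extend(combinations(elems, r))
    return subs

subs = nonempty_subsets([1, 2, 3])
print(len(subs))  # 7, matching 2**3 - 1
```

For 256 elements the count 2**256 - 1 is astronomically large but still finite; the continuum only enters through the real/complex coefficients attached to each basis state.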
Ah! I've found it! (give me a break.) I'm not familiar with the term "Turing complete", unless you are referring to a "universal Turing machine", which, as Turing proved, is just a regular Turing machine. Could you briefly explain this term to me? --Kevin Baas
Consider a computational system C, computing a value v from program p and input i (i.e. C(p, i) = v). C is said to be Turing-complete if for every Turing machine T there exists a program p_T such that, for all inputs i encoded in a standardised manner, C(p_T, i) = T(i). The same must hold true in reverse (i.e., every universal Turing machine has a program which emulates the operation of C for all inputs). Each Turing-complete system can compute the same set of functions. Some commonly discussed examples of Turing-complete systems: register machines, reductions in lambda calculus and Horn clauses, universal Turing machines, idealised programming languages.
Note that this kind of computability theory is just concerned with what functions can and cannot be evaluated, rather than how "fast" the computational systems are (speed is studied in computational complexity theory). --Pde 05:42, 28 Sep 2003 (UTC)
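To make "emulation" in the definition above concrete, here is a minimal Turing machine simulator. The transition-table encoding and the bit-inverting example machine are my own illustrative choices, not anything from the discussion:

```python
# Minimal Turing machine simulator: the table delta plays the role of
# the "program", the string is the input, and the final tape is the value.

def run_tm(delta, tape, state="q0", blank="_", max_steps=10000):
    """delta maps (state, symbol) -> (new_state, write, move); move is -1 or +1.
    The machine halts when no transition is defined for (state, symbol)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        sym = cells.get(head, blank)
        if (state, sym) not in delta:
            break  # halt
        state, write, move = delta[(state, sym)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: invert each bit (0 -> 1, 1 -> 0), halt at the blank.
invert = {
    ("q0", "0"): ("q0", "1", 1),
    ("q0", "1"): ("q0", "0", 1),
}
print(run_tm(invert, "0110"))  # prints "1001"
```

A system is Turing-complete when some fixed program in it can play the role of `run_tm` for every possible `delta` and input, and vice versa.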
Again and again I have to make the point: The problem is not "solving the problem" per se, but solving the problem in X time; how fast the machine converges to a solution. Whether that time be polynomial, exponential, infinite, etc. (...) --Kevin Baas
Well, computability theory, as it appears in textbooks and academic journals, is defined to be about whether a function can be evaluated in any finite time. Please note also that "solving a [mathematical] problem" is defined to mean "evaluating a function". This is not the same as the everyday expression "solving a problem". --Pde
I am not talking about computability theory. I am talking about computational complexity theory. You're the one talking about computability theory. Look, if hypercomputer is really defined as you say it is, which I'm sure the people at [www.hypercomputation.org] would dispute, then it desperately needs either to be redefined or not taught, because it takes the wrong approach. --Kevin Baas
(...) When I mention going from one computational complexity class to another, I'm not talking about a "faster" Turing machine, I'm talking about something that is impossible on a Turing machine (i.e. a Turing-complete system). I'm talking about, for instance, solving an E problem in P time. Now a computer doesn't "solve a P problem"; rather, it "solves a P problem in P time" (or a continuous problem in infinite time). When one refers to "solving a problem", they are making a time/complexity-differential statement; they are discussing related limits. (limit-theorems) This is a qualitative statement.
Efficiency is of course a subject of real interest. As I said, it is studied in computational complexity theory. But hypercomputation is not a part of computational complexity theory.
Also, your claim "I'm talking about something that is impossible on a Turing machine (i.e. a Turing-complete system). I'm talking about, for instance, solving an E problem in P time." is wrong. Example 1: it is currently believed that prime factorisation is an Exp-time problem on Turing machines. But ordinary quantum computers can solve it in P-time. Ordinary quantum computers are Turing-complete systems.
Example 2: A computer whose clock rate doubles every second. Such a hypothetical device can, by definition, solve any E problem in P time. It is also a Turing-complete system.
Kevin, are you finding it hard to understand my claims? If so, you should be trying to learn about the subject, before you write about it. Or are you having trouble with the fact that mathematics is constructed through fixed, almost-arbitrary definitions? In which case, you should think long and hard about why mathematicians rarely find this problematic. --Pde 00:28, 30 Sep 2003 (UTC)
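The arithmetic behind Example 2 above can be sketched: a machine whose clock rate doubles every second performs 1 + 2 + ... + 2**(t-1) = 2**t - 1 base-speed steps in t seconds, i.e. exponentially much work in linear wall-clock time, while still computing exactly the Turing-computable functions:

```python
# Geometric sum of steps for a clock that doubles its rate each second.

def total_steps(t):
    """Base-speed steps completed after t seconds of doubling clock rate."""
    return sum(2 ** k for k in range(t))  # 1 + 2 + ... + 2**(t-1)

print(total_steps(10))  # 1023, i.e. 2**10 - 1
```

So an exponential-time problem of "size" t fits into about t seconds on such a device, without it ever computing a non-recursive function.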

I'm having trouble with your inane patronizing and conceited attitude. I'm also having trouble communicating with you, but this may in fact be the same problem.

I'll have you know that my spatial-mathematical I.Q. is over 150 (my overall I.Q. is over 130), I scored the highest in my school on the AHSME, and was the first student in my school in at least ten years to be invited to take the AIME. My "spatial reasoning" skills were tested by the GATT tests to be at or above the 99.9th percentile. (One standard deviation below that was the 99.8th percentile.) I scored a 5/5 on the A.P. Calculus exam (BC), and a 5/5 on the A.P. CompSci exam (AB). (Funny, I didn't take any computer classes in high school.) You should really be more careful who you're addressing before you insult their intelligence.

I find it very telling how you only address small parts of my argument, leaving the rest as valid by default. Indeed, you evade the core of my argument with random precision. --Kevin Baas

(P.S. If by "turing complete", you mean a turing machine with qualitatively infinite processing time and storage capacity, then you are speaking of something that is no longer a turing machine.)

You seem to be using a paradigm that is ill-suited to the problematic. You are looking at the problem structurally, when it is a problem of dynamics.
Formal systems only exist by way of dynamical systems. They are not abstract entities that exist in and of themselves. When we are considering a "machine" we are considering a dynamical system, not a formal system. Our assessments and theories are only valid with respect to dynamical systems, regardless of whether our machine is physical or theoretical. --Kevin Baas
Solving a problem in less computation time than a turing machine is itself a problem, and one that a turing machine is incapable of solving. Therefore, this is a problem that a turing machine can't do.
If a super-turing machine is not a hypercomputer, then we find ourselves lacking a term for a very important class of machines. --Kevin Baas
How would it do this? Well, first, since each 'algorithm' or 'formula' is actually a number (Gödel numbering), it would have to be able to "compute" a larger set of numbers than a Turing machine. It would have to "compute" uncomputable numbers.
But what do we really mean by "compute"? To understand hypercomputation, one must re-evaluate this term, and deconstruct the computer.
But to make a long story short, an analog computer, like a digital computer, is a dynamical system. However, the digital computer is a special case analog computer, which only uses a subset of the possible attractors - only point attractors or point-periodic attractors. These are the "computable numbers". An analog computer which is not restricted to the range, can manifest strange attractors as solutions. (Strange attractors are stable.) In a sense, these strange attractors are the "uncomputable numbers".
One would require quite outlandish laws of physics to refute this. The question is how to design a programmable analog computer that computes in a useful manner with these "strange attractors".
But hypercomputation is not limited to analog computers. As I have listed, they also include pulse computers, which are simpler and for now more practical than analog computers. Some simple operations have been performed with pulse computers, including common computer science problems like searching, sorting, and linear optimization.
Pulse computers have consistently computed some of these problems in less computation time than is possible with a Turing machine, making them, by definition, hypercomputers. --Kevin Baas
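A sketch of the Gödel numbering mentioned above: any finite program text can be mapped injectively to a single integer. The base-257 scheme below is one arbitrary choice among many, shown only to make the "each program is a number" idea concrete:

```python
# Gödel-style numbering: encode a program string as one integer and back.
# Digits run 1..256 in base 257, so the map is injective even with
# leading bytes of value zero.

def godel_number(program: str) -> int:
    n = 0
    for byte in program.encode("utf-8"):
        n = n * 257 + byte + 1
    return n

def decode(n: int) -> str:
    out = bytearray()
    while n:
        n, digit = divmod(n, 257)
        out.append(digit - 1)
    return bytes(reversed(out)).decode("utf-8")

prog = "while True: pass"
assert decode(godel_number(prog)) == prog
print(godel_number("Hi"))  # 18867
```

Since every program is a (computable) number, a device that output genuinely uncomputable numbers would have to go beyond this countable catalogue.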

Kevin Baas wrote:

"I'm having trouble with your inane patronizing and conceited attitude. I'm also having trouble communicating with you, but this may in fact be the same problem."

I hope not, but I guess it's possible. Despite my genuinely patronising attitude, I am spending a lot of time explaining things to you.

"I'll have you know that my spatial-mathematical I.Q. is over 150 (my overall I.Q. is over 130), I scored the highest in my school on the AHSME, and was the first student in my school in at least ten years to be invited to take the AIME. My "spatial reasoning" skills were tested by the GATT tests to be at or above the 99.9th percentile. (One standard deviation below that was the 99.8th percentile.) I scored a 5/5 on the A.P. Calculus exam (BC), and a 5/5 on the A.P. CompSci exam (AB). (Funny, I didn't take any computer classes in high school.) You should really be more careful who you're addressing before you insult their intelligence."

It doesn't really matter how smart you are. If you aren't able to work within the same frames of reference as other intelligent people interested in the same problems, and build on their achievements, you'll find yourself reinventing wheels, and wasting your own and other people's time.

"I find it very telling how you only address small parts of my argument, leaving the rest as valid by default. Indeed, you evade the core of my argument with random precision. --Kevin Baas"

I never claimed that everything you've said is wrong. But you've got so many important things wrong that it's seriously problematic. The thing about mathematics is, even small differences in definition or errors in inference lead to completely inconsistent conclusions.

"(P.S. If by "turing complete", you mean a turing machine with qualitatively infinite processing time and storage capacity, then you are speaking of something that is no longer a turing machine.)"

Turing machines are abstract mathematical constructs with unbounded amounts of time and storage. As long as there exists *some* finite amount of each within which each value of a function can be evaluated, the function is said to be recursive or decidable or Turing computable, often abbreviated to "computable". Interestingly, if you move from unbounded resources to resources which are actually infinite, you are able to evaluate a huge (infinite, actually) set of functions which are non-recursive, that is, questions which Turing-complete systems could never answer. Such as: what is the fiftieth Busy Beaver number? No Turing machine, no silicon computer, no qubit computer, no pulse machine or other analogue computer whose design I'm aware of can ever tell you that number. But hypercomputers, if they existed, could do so.
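The reason no ordinary machine can answer such questions is the classic diagonal argument against a halting decider. Sketched in Python, with `halts` a hypothetical stand-in (no real total function with this behaviour can be written):

```python
# Diagonalization sketch: if a total function halts(f), deciding whether
# f() halts, existed, the program it builds below would defeat it.

def paradox(halts):
    def g():
        if halts(g):
            while True:    # oracle says g halts, so g loops: contradiction
                pass
        # oracle says g loops, so g halts immediately: contradiction
    return g

# Any stand-in "oracle" is wrong on its own paradox program:
g = paradox(lambda f: False)  # this oracle claims g never halts...
g()                           # ...yet this call returns immediately
```

Since Busy Beaver values grow faster than any computable function, computing them would let you build exactly such a decider, which is why they are out of reach for every Turing-complete device.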

You mentioned subtle distinctions. I've always understood "infinite" to mean "unbounded", making it a qualitative distinction: the distinction between bounded and unbounded. The distinction that you made between 'unbounded' and 'infinite' seems to be a hidden conceptual trick: by infinite, you seem to imply that one can measure a result that requires infinite time,
Correct.
Well then, this is a logical error.
Not exactly, although it is a good example of why it's pretty damn unlikely that hypercomputation will turn out to be possible. But, read the article for some unlikely ways around this which people have dreamt up. If there is a way to perform an infinite number of computational operations in a finite part of physical spacetime, then that could lead to hypercomputation. --pde 07:28, 23 Dec 2003 (UTC)
No, it is a logical error. You seem to be skewing the interpretation so that it becomes something familiar to you, and responding with a learned construct. And by the way, I don't care about people's delusional fantasies. And yes, hypercomputation, insofar as it deals with an infinite-length bit string of finite length, is impossible. But let's not deal with absurdities here. We are talking about zero-dimensional information or one-dimensional information or somewhere in between. We are not talking about zero-dimensional information with properties of one-dimensional information. But I digress: one cannot, by any measure (no pun intended), measure a result in finite time which by definition takes infinite time to measure or occur (same difference anyway). That is absurd.
whereas by 'unbounded' you imply that measurements can only be ephemeral.
Incorrect. What I mean is that, for a universal Turing machine T computing a function f, for each x there exists a finite amount of memory and time in which f(x) can be evaluated. These are "unbounded" because they are associated with the f and the x, not the T. --Pde
Thank you for the clarification. That usage of the word is confusing to me because it states something that I trivially assume. Also, because it's redundant: "abstract mathematical construct" implies, or, rather, asserts "unboundedness".
In any case, you imply the concept of information insofar as you imply the problem of measurement, which brings me to the second part of my response...
Regarding what hypercomputers can do and what is possible: yes and no. "A number", that is, a symbolic expression of a point in a phase space, is necessarily a discrete entity, for when we say symbolic we imply that there is a finite set of symbols (if the set were otherwise, we would use a different word, and a different concept), thus the only way for the "number" to transcend the set of computable numbers would be for it to be represented, in its most reduced form, as an infinite string of symbols. Now we can create sets of symbol strings, and represent each set by a new symbol, thus creating a much larger symbolic set that can express the same information with fewer symbols. (You can use Shannon's information theory to calculate the information in each symbol, and you can represent this relationship in terms of information.) So long as the sub-strings are of finite length, and their symbols are from a finite set, any set of strings that we translate to a new set of symbols, as discussed, remains of finite size. However, when we introduce an infinite string, the mentioned set becomes an infinite set. No matter how we break down the set, or the string, we still have an infinite set. In the same way, we cannot reach an infinite set starting with finite sets. Therefore, when we are dealing with an infinite set or an infinite string of symbols, the concept of number that we are familiar with - a string of symbols - is inappropriate.
the third part of my response: when we speak of computers in the contemporary sense, we must adapt our paradigm: God is dead. Computers are dynamical systems, and must be understood and discussed as such, and in terms of such. Anything else is meaningless. --Kevin Baas


Poor judgement

"I'll have you know that my spatial-mathematical I.Q. is over 150 (my overall I.Q. is over 130), I scored the highest in my school on the AHSME, and was the first student in my school in at least ten years to be invited to take the AIME. My "spatial reasoning" skills were tested by the GATT tests to be at or above the 99.9th percentile. (One standard deviation below that was the 99.8th percentile.) I scored a 5/5 on the A.P. Calculus exam (BC), and a 5/5 on the A.P. CompSci exam (AB). (Funny, I didn't take any computer classes in high school.) You should really be more careful who you're addressing before you insult their intelligence."

Kevin, you're showing a lot of mathematically uninformed arrogance here. Do the math. There are about 300 million Americans. 99.9th percentile means that you're one in a thousand, which sounds pretty nice on the surface. Of course, that means that there are 300,000 Americans alive right now who are as good as or better than you at spatial reasoning. Isn't it awfully prideful to assume that you have a better grip on the math of hypercomputers than all 300,000 of them? Especially when you consider that you have not taken any formal courses in the math behind them? Sure, you may have read some articles and a book or two, but it's hard to believe that, laboring by yourself, you outdid all those who came before you on the subject of hypercomputing. Now, bear in mind that these 300,000 represent only the Americans on equal footing with you today, and that most of them are older than you, that is, they've had more time to study the mathematical nuances of "unbounded" v. "infinite," etc. OK, I'll give that most people don't much cotton to hypercomputing. So of the 300,000, how many do you suppose have set their hands to it? 1 in 100? 1 in 1,000? 1 in 10,000? In any case, you still have yourself a churchful of folk who are well up to your level on the subject, even without the head start in years that most of them have. I'm not saying this to suggest that you can't do anything personally, but I do think that you should consider that as someone relatively new to the scene, you should sit back and come to fully understand what's been said before you throw in your two bits. If you don't... Well, you come off like someone who is mathematically ignorant, not just of the topic at hand, but of elementary percentages. I'm saying this not to make you upset, which if I were you (and I was) I would be. I'm saying this so that you can get rid of this pride before it does you damage some place more serious than an Internet forum.
If you bring this to an interview or an application, whoever is looking at you will reject you, not because they don't think you're smart, but because they don't think you have used your smarts to make some important deductions about the world around you. The number one deduction being, of course, that those before you might have thought through some of the problems you're just starting to consider... Just think about it, OK?
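The percentile arithmetic in the comment above, spelled out:

```python
# "At or above the 99.9th percentile" leaves the top 0.1% of a population.
population = 300_000_000   # rough US population figure used in the comment
top_fraction = 1 - 0.999   # share at or above the 99.9th percentile
print(round(population * top_fraction))  # 300000
```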

"I'll have you know that my spatial-mathematical I.Q. is over 150 (my overall I.Q. is over 130)," Who the f*ck cares. So-called "smart people" make huge mistakes all the time, oftentimes as a result of being blinded by arrogance and over-confidence. I've found that most of my university instructors are often wrong and biased. BTW, I have an IQ of 145 and make stupid mistakes all the time (childhood test, so I may have lost a few neurons since then; that would explain a few things). --mav
Thanks mav. ;-) I agree. My point was not to be arrogant; it was rather to block what I perceived (with good reason) to be a rather shameless decay of diplomacy on the part of PDE. To restate what I've just said: I do not by any means consider myself an "elite". I simply believe that I deserve a prudent degree of consideration and civility, as opposed to rash assumptions and low-class remarks. --Kevin Baas

The phrase "compute directly on real numbers" is inaccurate and misleading. A "number" is a symbolic string. Any system that computes with symbolic strings ipso facto works with zero-dimensional information, and is thus a Turing machine, i.e. not a hypercomputer. -Kevin Baas 2003.12.22

This statement is incorrect. It is entirely possible to have a hypercomputer which has only finite strings as inputs and outputs. There would have to be some infinite stuff (or magic), going on in between though. --pde 06:41, 23 Dec 2003 (UTC)
That may very well be the case, PDE; in fact, I think what you're talking about is called a "computer", and the magic stuff you're talking about is a surjective mapping, manifested in the nonlinear dynamics of an electric circuit. A new input perturbs the system, and the system settles back into an attractor (in a discrete dynamical system (i.e. a digital computer) this attractor is, for practical purposes, a periodic point attractor), thereby changing its state as it dissipates the energy. Thus it "computes".
(new) Let me add: I assume first, of course, that you forgot to add the assertion that said finite-string input/output system is working with inputs and outputs in discrete time. But of course this is the case, as you said "string" and thus implied that the timing of the symbols was irrelevant (that it is not a pulse computer).
Granted that, you have neglected programmability. Of course, to perform such "hypercomputations", the system would have to have an internal state space (memory) with an information dimension greater than zero. But to write a computer program that utilizes the attractor set of this expanded phase space, with only discrete-signal discrete-time input, would require infinite time, as it would require an uncountable number of symbols to transcend the set of possible programs in a Turing machine. I imagine that a "hypercomputer" is by definition programmable, and thus can be programmed in finite time. If it is not programmable in finite time, then nothing can use it. If nothing can use it, then by no measure does it manifest itself as a hypercomputer. If a "hypercomputer" must by definition be programmable (which, as pointed out in the last sentence, is necessary in order for the term to be at all meaningful), then a hypercomputer with finite strings as input and output is logically impossible. --Kevin Baas 09:40, 12 Jan 2004 (UTC)
In any case, it's quite off-topic. The topic is the phrase "compute directly on real numbers", esp. in regard to neural networks. Neural networks do not store information in any base, binary, digital, etc. That is, they do not store "numbers"; rather, state is preserved (and updated) via feedback loops between neurons. One might say that the state is stored in "real weights". This depends on the type of neural network. But no neural network stores numbers. If there are numbers stored anywhere in the circuit, it is ipso facto a hybrid circuit. --Kevin Baas 2003.12.23

Scandalized

Well, I am a little scandalized after reading all the stuff on this page (including Kevin's magnificent I.Q., showing how good he is at taking such tests): how tolerant PDE can be, explaining the same thing n times, and how after n explanations Kevin Baas seems to misunderstand again and again (confusing it with the complexity classes of computation) a simple definition of the term "hypercomputation", namely the field of proposals on computing functions that are not computable by Turing machines. On the other hand, I suggest adding to the article the name of whoever first used the term, and also a strong remark saying that the concept really has the sense of rejecting the Church-Turing thesis. Accordingly, it is completely false that Turing had hypercomputation as a motivation, rather than only the idea of relativisation of his model (now the standard model of computability).

Hi, I agree that we should cite Copeland for introducing the term. But I'm not convinced that talking about hypercomputation necessarily rejects the Church-Turing thesis; to me it makes sense to examine whether and how hypercomputation might be possible without concluding that it actually is possible. -- pde 23:24, 8 March 2006 (UTC)[reply]
Hmm, surely showing that something is possibly possible entails that it is possible ("might be possible => possible" is a theorem of S5). Discussions of hypercomputation concern the conceivability of hypercomputers, and how that relates to possibility is another matter. (Although being conceivable, logically consistent and physically serious are excellent indications of possibility.) --NoizHed 03:50, 31 March 2006 (UTC)[reply]
Haha. I disagree with your translation of my comment into modal logic :). I was using two "possibility" operators: "might" was "can imagine universes where this would be" (there is a complicated scientific and epistemological question of what small odds we'd give that our universe is one of those); "possible" is "possible in a particular universe". -- pde 00:01, 2 April 2006 (UTC)[reply]

Neural networks

Hi Populus, I was just wondering, when you refer to the models of neural networks with " the ability to carry out computations with exponentially lower complexity than standard Turing machines" — are those systems just equivalent to non-deterministic Turing machines? If that is the case, the term "super-Turing computation" is being applied both to machines which are hypercomputers and to machines which are able to solve NP problems in P time, and that fact should be noted in the article. --pde 07:17, 23 Dec 2003 (UTC)

I'm just pulling in some of Kevin Baas's text from the former Super-Turing computation article. The intent was to combine all the articles on machines that violate the Church-Turing thesis (including the polynomial-time version) in one place. --Populus 12:32, 23 Dec 2003 (UTC)

Populus, do me a favor and look up Church-Turing thesis. --Kevin Baas 2003.12.24
Ok, Kevin, do you have a citation to the article(s) which introduce those "exponentially lower complexity" neural networks? --pde 23:30, 23 Dec 2003 (UTC)
I don't know where you're getting "exponentially lower complexity" from. You seem to have your computational complexity classes mixed up. I do remember some resources. I'll look them up after the holidays. But that's completely beside the point. The point is that, as you said, such a machine is not a hypercomputer, yet is more powerful than a Turing machine. Yet the computational complexity page supposedly links to machines which are more powerful than Turing machines. Perhaps we should make a page for machines which are more powerful than Turing machines, then? --Kevin Baas 2003.12.24
I can assure you, Kevin, that I do not have any complexity classes mixed up. And I'm quoting those words from the current article text, last clause of the first paragraph. According to Populus, (s)he was just importing something you'd written. Looking at the Super-Turing computation page history, I now see that he was interpreting what you'd written. But Populus added an implication that the term was used in this way in the neural networks literature. I want to know if this is the case, and if so, where, because the meaning is obviously quite different if it reaches beyond computability theory and into complexity theory.
Of course you can make a page for machines which are faster than Turing machines, if you think you're up to it. Be sure to define carefully what kind of Turing machine you take as your baseline, including alphabets and state graphs (also remember, ordinary Turing machines cannot do O(1) memory lookup), and include non-deterministic Turing machines and quantum computers as prominent examples. --pde 23:42, 24 Dec 2003 (UTC)
OMG, I'll be sure to include my name, age, and address as well. Maybe then it'll be an academic article. That way it won't be pedantic and miss the point entirely. And I'll be sure to wear my safety goggles, Mrs. Smith.
In the mean time, I'll blurt out keywords like O(1) memory lookup to sound smarter than the person I'm talking to. (shhh... they're actually really simple concepts.) --Kevin Baas

OK, that is enough! This is not a flame forum, and if you want to post in one then by all means go somewhere else. Kevin, if you think someone is patronising, then say it politely, make a mature point, and move on. Many comments (by multiple contributors) above are irrelevant. We do not need to know anybody's IQ or opinions, as they add nothing to the discussion at hand and help nobody.

Suspicious "models" of hypercomputer

I am doing some major revision to filter out misinformation in this entry. I suggest that people without a computer science background not come back and turn this page into a mess. I was a very good computer programmer when I was in high school, but this would not help me write an entry about computability. These warnings said, I removed the "model" about "exponential speedup", which is obviously wrong. Infinite time and space are required to solve the halting problem, literally speaking. Exponential time is not infinite, to simplify the reasoning. --Exa 13:21, 10 November 2005 (UTC)[reply]

Clearly what the editor was trying to say was an "asymptotic" increase in speed. It's easy to confuse asymptotes and exponentials. --Maru (talk) Contribs 02:33, 9 November 2005 (UTC)[reply]

There is another suspicious-looking "model":

I second that; the only reason I didn't remove this in my recent edits was because I had no idea what it meant. --Maru (talk) Contribs 02:33, 9 November 2005 (UTC)[reply]

Unless you can point out a reference (i.e. paper) for this, I will have to remove it, because in this form it is not clear what it means. Since I can replace the set of final states, with a single final state, it is not clear how a preference ordering will help me.--Exa 13:21, 10 November 2005 (UTC)[reply]

I'm afraid I don't know what you mean here either. --Maru (talk) Contribs 02:33, 9 November 2005 (UTC)[reply]
How does this preference ordering over the final states help me solve the halting problem? That's what I'm asking, and there is no explanation. Somebody just wrote it. --Exa 13:21, 10 November 2005 (UTC)[reply]
I meant the set of final states --> final state thingie you wrote. --Maru (talk) Contribs 22:04, 10 November 2005 (UTC)[reply]
Simple, because I can replace all of those final states with arcs, and use a single sink state. At any rate, to solve the halting problem by giving a "preference order of final states", you would have to first solve the halting problem elsewhere, which requires infinite time/space to solve, and *then* provide this preference order to the computer... It just does not make any sense. If you have an oracle in the form of an infinite bitstring, you can encode this in a variety of hypothetical ways. None of these ways would seem to make an addition to the already existing explanations. Therefore, I am removing this redundant item. --Exa 22:28, 13 November 2005 (UTC)[reply]
Hi. The observation about non-det machines with preference orderings was mine. I thought it up some years ago, and therefore can't offer a citation, and I don't know if it's original. Showing that it works is straightforward. Consider a non-deterministic TM which takes (input machine and data) and has two final states, "halts" and "doesn't halt", and a preference for the "halts" state. This means that if there is any sequence of non-det transitions that reach the preferred state, it will take those; otherwise, it will find its way to the "doesn't halt" state if possible.
The program of this machine is that it loops, at each iteration non-deterministically deciding either to simulate one more step of the input machine, or returning "doesn't halt". If the simulation of the input machine halts, immediately return "halt". This machine solves the halting problem. It is clearly physically impossible; it has some hefty implicit magic in it (and just imagine what would happen if it had I/O :). -- pde 13:44, 4 March 2006 (UTC)[reply]
What is a 'preference for the "halts" state'? Anyway, it matters not. This is not a solution to the halting problem, because you are clearly confused about what a non-deterministic Turing machine is. ANY NONDETERMINISTIC TURING MACHINE CAN BE SIMULATED BY A TURING MACHINE, therefore if your proposed solution were correct, you could also demonstrate a deterministic Turing machine that solved the halting problem. I am removing your example right now. Next time, please try to give a reference for such mathematical claims, or a self-contained proof. This is what scientific responsibility demands. --Exa 22:24, 15 March 2006 (UTC)[reply]
Actually, it does matter. I agree that any nondet TM can be simulated by a TM, but note carefully that "nondet TMs with preference orderings" are not the same thing as "nondet TMs". I did try to give a self-contained proof, though I admit I sketched it rather than writing out all the steps (because it's a trivial proof, really). An ordinary nondet TM has a set of final states. When faced with a nondeterministic transition, it is guaranteed to take a branch that leads to a final state, if one exists. Such machines can be simulated on ordinary TMs by forking the simulation every time there is a nondeterministic transition. Whenever any branch of the simulation reaches a final state, the simulation can halt.
A preference ordering over final states means that the machine will, if at all possible, take nondeterministic transitions that lead to the preferred final state. If there is no path to a more preferred state, it will take a path to a less preferred state if that in turn is possible. Such a machine cannot be simulated by an ordinary TM. If one branch of the simulation reaches a final state other than the most-preferred one, the simulation doesn't know whether to halt in that final state, or to keep searching for a more-preferred one. -- pde 06:11, 16 March 2006 (UTC)[reply]
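The fork-on-every-branch simulation described above can be made concrete. Below is a toy sketch (the transition graph and state names are made up for illustration, standing in for a full Turing-machine encoding) of the deterministic simulation of an ordinary non-deterministic machine; the preference-ordering variant resists exactly this kind of search, since a branch reaching a less-preferred final state gives no licence to stop looking for a more-preferred one.

```python
from collections import deque

def simulate_nondet(transitions, start, finals):
    """Deterministic simulation of a non-deterministic machine:
    fork the search at every non-deterministic transition and halt
    as soon as *any* branch reaches a final state. (This only
    terminates here because the toy state graph is finite.)"""
    frontier = deque([start])
    seen = {start}
    while frontier:
        state = frontier.popleft()
        if state in finals:
            return state                # some branch reached a final state
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                         # no branch reaches a final state

# Toy machine: from 'start' we may enter a loop or head towards "halts".
trans = {'start': ['loop', 'a'], 'loop': ['loop'], 'a': ['halts']}
print(simulate_nondet(trans, 'start', {'halts'}))   # prints: halts
```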


Oh, by a "reference or a self-contained proof", did you mean a proof in the article, as opposed to one here on the talk page? -- pde
I am sorry, but your description is not even consistent. You probably have the idea of supplying an "oracular value" to a TM, but you can't express it well enough, or in a mathematically precise way. So far, only handwaving. At any rate, the oracular-value bit is emphasized in two other models, valid for both non-deterministic and deterministic machines. Non-determinism doesn't change a thing. Try to formalize your idea and you will see this. I mean of course a self-contained proof in the article, which is impossible because your idea is not even mathematical. --Exa 17:03, 21 March 2006 (UTC)[reply]
Can't you understand the description I posted? I'll try and get around to posting a more formal one if you like. It's true that there is something "oracular" about this machine; one way to define the machine would be: whenever a state has more than one possible transition, the machine consults an oracle to determine which transition to take. Note however that similar formulations of ordinary non-deterministic Turing machines apply (though it's not obvious when you read the current wikipedia article). See [1] for some other definitions. -- pde 09:52, 23 March 2006 (UTC)[reply]
I cannot understand the description you posted, because it is inconsistent. I am not being rude. If I cannot understand your description I am pretty sure nobody else can. As you admit you probably want to hypothesize the existence of an oracular value, which is covered by two other models. There is no such thing as a preference order, or preference of final states, and it does not even make sense. As I said, try to give a formal proof, and I am assuring that you will see your description is inconsistent. A final state is either reached, or not. There is no in between. You should try to consider a deterministic TM, and see how you can use your oracular value. But then, it will turn out to be equivalent to any other use of an oracular value. For it to deserve mention, you should show how you can solve a PARTICULAR undecidable problem like the halting problem, in precise detail, in a way that is distinct from the (well known) timesharing program that I have shown. Regards, --Exa 22:03, 23 March 2006 (UTC)[reply]
Exa, PDE's example is not inconsistent (although I found his explanation was a little hard to understand). Call a branch of (or a particular computation of) a nondeterministic Turing machine unfair iff after some finite amount of steps it is in the same state an infinite number of times and there is a particular transition it can take which it never chooses. A nondeterministic Turing machine is fair iff it produces no unfair computations. Using the algorithm PDE described fair nondeterministic Turing machines can solve the halting problem. Here's the reference you keep asking for: "Nondeterminism Fairness and a Fundamental Analogy" - Spaan, Torenvliet, van Emde Boas - EATCS bulletin, 37:186-193, 1989. In future try not to delete things just because you don't understand them! --NoizHed 03:50, 31 March 2006 (UTC)[reply]


The link to supertask in the first paragraph seems a bit odd, as the linked article does not address supertasks in the context of computer science (in fact it states this prominently). Maybe this should be removed? Or maybe someone wants to start an article on supertasks in computer science (I do not even know what they are). - JustinWick 18:32, 19 November 2005 (UTC)[reply]

Needs work

It's not at all clear what this subject is about. Theoretical models of machines that can decide undecidable languages? Real-life machines with this power?

The article first talks about how oracle machine computations aren't hypercomputations because the oracle is a mathematical abstraction, and then talks about how Omega is effectively an oracle for the halting problem.

It's not so clear what the distinction is here.

Offtopic, why do I keep finding so many references to oracle machines as though Turing was the first and last guy to talk about them? They're a pretty big part of computability and complexity theory. --Saforrest 13:41, 2 December 2005 (UTC)[reply]

A lot of the present ambiguity was due to this edit I believe. I haven't had a chance to read up on the term "super Turing computation" to work out what the literature is actually doing with it. -- pde 03:42, 2 February 2006 (UTC)[reply]
PS - Turing wasn't the last to talk about Oracle machines, just the first :P -- pde
I agree with your criticism. I mentioned Omega because there were some unclear suggestions about Turing machines with super capabilities not in the original definition, such as infinite storage/memory. It had to be made clear in what sense these are required. I am now left with two such models, one with an infinite input (this is similar to one definition of computability in computable mathematics) and the superstep one. And I think it is clear why such capabilities are required to solve the halting problem. However, I see the basic mistake you are referring to. Are we talking about abstract models? Or are we talking about physical models? For that I suggest going back to Copeland. Also, it needs to be mentioned that the Turing machine is not special; it is just one kind of discrete/digital computer model. Another fuzzy area: are not all hypercomputers supposed to solve the halting problem? (It may well be the case that all the theoretical models should go to super-Turing, or better that the two articles be merged; I don't know at this point, and sorry for having contributed to some of the confusion and possibly bad writing) --Exa 23:27, 15 March 2006 (UTC)[reply]
I have read the super-Turing computation entry. The introduction here is inaccurate, but the content is completely to the point. Hypercomputers solve ordinarily undecidable problems. It's that easy. I will think a while about how to make the connection to the halting problem in an illuminating way. --Exa 23:36, 15 March 2006 (UTC)[reply]

Footnote 2 missing referent

And who is the "her" it could be referring to?--155.143.221.40 01:42, 19 December 2005 (UTC)[reply]

Copeland, introduction, consistency

More work is required. I'd read the Copeland paper; I have to cite it here, summarize what he means by a hypercomputer, and also make the introduction a bit more tangible. Also, there is a consistency problem: in the introduction some models are mentioned which are not detailed later. --Exa 23:19, 15 March 2006 (UTC)[reply]

Is Omega an oracle

I am confused by the following addition

"Even more simply, with no need for some strange number like Ω, an everyday Turing Machine that has access to an oracle for the halting problem can solve the halting problem."

Isn't Omega effectively an oracle? 1Z 11:47, 18 January 2007 (UTC)[reply]

Yes, Omega is just a real number, and it is Turing equivalent to the halting problem, so given either one of them it is possible to compute the other. There is nothing special about Omega, and knowing the definition of Omega doesn't illuminate hypercomputation any more than knowing the definition of the halting problem does. It is a mistake for this article to treat Omega as if it is something special. CMummert · talk 12:07, 18 January 2007 (UTC)[reply]

Ok, but then shouldn't it read:

"Even more simply, with no need for some strange number like Ω, an everyday Turing Machine that has access to another oracle for the halting problem can solve the halting problem."

1Z 13:00, 18 January 2007 (UTC)[reply]

Probably the author had in mind a specific set of natural numbers called the "halting problem", for example the set { e : φ_e(e) halts }, and when the author said "oracle for the halting problem" they meant specifically an oracle for this set rather than just any oracle with the same Turing degree. As I said above, I don't think it is worth mentioning Omega at all, because up to Turing degree Omega is the same as the halting problem. I think you are raising the same objection in different language. CMummert · talk 13:07, 18 January 2007 (UTC)[reply]
Indeed, this was my point in adding the original comment starting "even more simply". I don't have the nerve to remove absolutely every reference to Omega on this page, but it should be done. Omega is irrelevant in a straightforward characterization of hypercomputation. It is just obscuring matters completely to bring it up repeatedly in this article. Nor is Omega an oracle. An oracle is something that provides an external decision procedure for a question directly, by taking in an input and returning the answer to the question for that input. You could have an oracle for the question "is bit i of Omega a 1" if you wanted, but why bother with such a convoluted thing in order to solve the halting problem? 32F 11:59, 28 January 2007 (UTC)[reply]
Removing all references to Omega here would be fine with me; I agree it has nothing to do with hypercomputation. In contemporary terminology in recursion theory, "an oracle" is a synonym for "a set of natural numbers" or "a sequence of 0s and 1s", so that part isn't quite wrong. CMummert · talk 12:07, 28 January 2007 (UTC)[reply]
I removed the paragraph in question. The broader problem is that AIT does not illuminate hypercomputation any more than regular recursion theory does. CMummert · talk 12:44, 28 January 2007 (UTC)[reply]

Today, algorithmic information theory helps us understand better what is required for hypercomputation. The hallmark of a hypercomputer is that it can solve the general halting problem, a feat impossible for ordinary computers. However, a normal computer can calculate the halting predicate of any program given the halting probability (Chaitin's constant, Omega), which is a random real, and holds an infinite amount of information. Ω, then, is an oracle for the halting problem. This quantity requires infinitely many bits to represent on any medium (essentially it is incompressible), and there is no effective procedure to calculate it. A hypercomputer must, however, obtain Ω by some other means than familiar Turing computation. For ordinary discrete computers, there is an approximation procedure from below that will approximate Ω using a simple time-sharing program. However, the program can never know, at any given instant, how close it is to Ω.

  • A discrete computer that has access to the halting probability Ω can solve the halting problem. For an n-bit input program, the first n bits of Ω are read, which gives the number of halting programs among all n-bit programs. Then, we timeshare all candidate programs with a simple watchguard policy. At step k, we run each candidate program for k steps. This iteration continues until that many programs terminate, and we have all n-bit programs that terminate. Then, we simply check if the input program is among these programs.
  • Even more simply, the computer could be provided with an oracle for the halting problem itself. Ω can be computed from this oracle, and (by the previous argument) vice versa. Of course, there is no reason to concern oneself with a number such as Ω here. It has absolutely nothing to do with hypercomputation whatsoever.

If we consider again the approximation procedure for Ω, this is a very slowly, monotonically converging approximation. While every number in the series of approximations is computable, the limit is not. If the device can realize all of these infinitely many approximation steps, obtain Ω directly, and store it in its infinite extent, then it can easily solve the halting problem (see supertask). Of course, the reference here to Ω is again otiose. If the machine can "store the infinite extent" of Ω, it can much more easily store an infinite list k[i], such that k[i]=0 iff machine i halts when applied to itself. This provides a far simpler, non-convoluted solution to the problem.
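The "approximation procedure from below" via a simple time-sharing program, mentioned in the quoted passage, can be sketched as follows. This is only a toy under stipulated assumptions: real programs would be self-delimiting binary strings run on a universal machine, whereas here the program lengths and halting times are simply made up.

```python
from fractions import Fraction

# Toy "programs": (length in bits, steps until halting; None = diverges).
# In real algorithmic information theory these would be self-delimiting
# programs for a universal machine; the numbers here are stipulated.
programs = [(2, 5), (3, None), (3, 12), (4, None), (4, 1)]

def omega_lower_bound(t):
    """Time-share all programs for t steps and sum 2^-length over the
    ones seen to halt: a monotone approximation of Omega from below."""
    return sum(Fraction(1, 2**n) for n, steps in programs
               if steps is not None and steps <= t)

print([float(omega_lower_bound(t)) for t in (0, 1, 5, 12)])
# prints: [0.0, 0.0625, 0.3125, 0.4375]
# The bound only ever grows, but at no finite t can the program tell
# how close it is to the true limit (the divergent programs never show up).
```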

Oh, that is much, much better. Even the title of that thankfully deleted section, "The challenge of hypercomputation", makes no sense. What "challenge" is mentioned in there? And it makes the, I think, false claim that algorithmic information theory "helps us to understand" hypercomputation. It helps us less than "regular recursion theory", since the latter is the framework in which the concept of hypercomputation is even defined. AIT is just something else entirely. (Incidentally, it would be nice to have a "blockquote" that more clearly set off quotes like the one above, for example by putting them in a box and colouring them a nice light pastel colour.) 32F 12:53, 28 January 2007 (UTC)[reply]
A while back, I put the following CSS in my monobook.css file for that effect. It is not beautiful, and the margins don't align correctly, but it does set off blockquotes visually from the rest of the text. CMummert · talk 13:02, 28 January 2007 (UTC)[reply]

#content blockquote { margin-left: 4em; background-color: #eeeeff; padding-left: 1em; border: 1pt solid #cccccc;}

Excellent thanks. I didn't know about the monobook file (haven't looked around too much in what having a user id lets you do...) 32F 13:34, 28 January 2007 (UTC)[reply]

Turing's Stance on Implementability

Hi, the introductory passage of the history of hypercomputation cites Turing's refusal to unbox the oracle in the O-machine as Turing not being interested in the implementability of the O-machine. But as Jack Copeland's paper Turing's O-machines, Searle, Penrose and the Brain (p.5) suggests, Turing wasn't concerned about the implementability of any of the machines he had proposed in the 1930s, whether universal or O. So placing such emphasis on the implementability of the O-machine seems misguided. --69.149.116.161 18:15, 15 August 2007 (UTC) (rck)[reply]


Hi, The "external links" section includes a link to a paper by Paolo Cotogno (I've corrected the author's name in the link). However, if you follow that link, you actually find a PDF version of a paper - Paolo Cotogno (2003) Hypercomputation and the Physical Church-Turing Thesis. Brit. J. Phil. Sci. 54, pp. 181-223. - that appeared in the British Journal for the Philosophy of Science. It is usual for copyright to be assigned to a journal prior to publication, so the link may be providing access to material in violation of copyright. Mike.stannett (talk) 00:50, 29 December 2007 (UTC)[reply]

cite requests

I removed a few citation requests in the "proposals" section, recently attached to sentences that mainly refer to wikilinked articles (e.g. Zeno machine) that contain reasonable sourcing (plus added an internal link for one of them). The person who added the requests might check the linked articles before reinserting the tags or adding further ones. —Preceding unsigned comment added by 207.241.238.233 (talk) 02:41, 15 January 2008 (UTC)[reply]

I added a cite for the thing about real-valued computation. The cited paper probably doesn't satisfy the request directly (e.g. it doesn't use anything like Chaitin's omega) but shows how to solve "hard" problems quickly using real-valued computers. This should be reasonably satisfactory as a mathematical answer and I don't see anyone expecting it to be a physical answer. I noticed that the "proposals" section is now a mixture of purely mathematical concepts (like the transfinite Turing machine) and concepts that claim some kind of connection with physics. Maybe these two classes of ideas should be separated from each other in the section. 207.241.238.233 (talk) 08:12, 8 February 2008 (UTC)[reply]

definition section

I agree with Unzerlegbarkeit's removal of the definition section [2] that Peterdjones reverted. The article's first sentence gives an adequate definition and matches up with the one in SEP [3]. The stuff about super-recursive algorithms is way too long-winded and should be moved further down and shortened. It also seems to have spread to a bunch of other articles recently and should be cleaned up from them too. CBM, if you're still watching this page, would you mind taking a look into that situation and telling us what you think? Thanks. 207.241.238.217 (talk) 04:05, 31 July 2008 (UTC)[reply]

Yes, the previous definition was correct. It seems the article did just fine with that definition until about a month ago, when someone decided that we simply couldn't define hypercomputation without reference to Burgin's super-recursive algorithms. --Unzerlegbarkeit (talk) 14:36, 31 July 2008 (UTC)[reply]

Siegelmann and "Turing myth"

I removed a tendentious sentence about the "Turing myth" which was extremely far from a mainstream view. Given the generally fringe-ish nature of the article topic, I suppose Siegelmann's claims are noteworthy enough to deserve a mention, but they should be cited and described neutrally. 207.241.238.217 (talk) 07:55, 13 November 2008 (UTC)[reply]

Siegelmann's claims are basically just a re-iteration of what we already know regarding Metaheuristic or Hyper-heuristic algorithms, though applied to hardware such that there are orders-of-magnitude efficiency increases. There's nothing really revolutionary or "fringe"-ish about them. They just represent a limit case of metaheuristic algorithms using massively parallel low-precision analog hardware. The basic result, that metaheuristic algorithms have super-Turing performance if you compare the computational time classes side-by-side with their non-metaheuristic equivalent (e.g. traveling salesman problems), is well established. Kevin Baastalk 18:46, 27 September 2010 (UTC)[reply]

ARNNs?

Why is there nothing in here about analog recurrent neural networks? 216.54.131.130 (talk) 16:02, 23 February 2009 (UTC)[reply]

There is now. 207.241.239.70 (talk) 10:14, 27 February 2009 (UTC)[reply]
This was a bit surprising for me to find. I skimmed the paper and couldn't make much sense out of it. And I couldn't find more information praising or criticizing Siegelmann's work. In the least, I think it should be explained in what way ARNNs are hypercomputers. To do more would be to create a wikipedia article about them, if it is truly the case that they are so important as being proven hypercomputers. Right now this article just says that she "proved that it could perform hypercomputation." It seemed a bit suspicious to me. —Preceding unsigned comment added by 141.156.179.133 (talk) 00:18, 21 January 2010 (UTC)[reply]
ARNN's are hypercomputers but like the Zeno machine, they are abstract constructions that are not physically realizable. It's not all that interesting. 66.127.55.192 (talk) 09:46, 10 February 2010 (UTC)[reply]
They are perfectly physically realizable. Our brains are but one example. Perhaps you mean noise-less ARNN's, which is just absurd. Even in perfect isolation, an ARNN will generate its own "noise". A noiseless ARNN is not even a valid construction, real OR abstract. Kevin Baastalk 18:52, 27 September 2010 (UTC)[reply]
Here is a new article discussing the expressive power of analog recurrent neural networks (http://www.sciencedirect.com/science/article/pii/S0304397512000990)150.135.222.230 (talk) 00:19, 8 March 2014 (UTC)[reply]
An ARNN with noise can be simulated to arbitrary precision with a Turing machine...so they have the same limitations. Only a perfect ARNN could do something that a Turing machine can't - and in a universe with quantum randomness and Planck quantization there's always going to be a source of noise. SteveBaker (talk) 14:28, 7 August 2014 (UTC)[reply]
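The point that a recurrent net at any fixed precision is just ordinary computation can be illustrated with a minimal sketch. The weights, biases, and step count below are made-up illustrative choices; only the saturated-linear activation follows the style of Siegelmann's model.

```python
def sigma(x):
    # saturated-linear activation, as used in Siegelmann-style ARNN models
    return 0.0 if x < 0 else (1.0 if x > 1 else x)

def step(state, W, b):
    """One synchronous update of a recurrent net. Everything here is a
    finite floating-point computation, hence trivially simulable by a
    Turing machine to the chosen precision."""
    return [sigma(sum(w * s for w, s in zip(row, state)) + bi)
            for row, bi in zip(W, b)]

W = [[0.5, 0.25], [0.0, 0.5]]   # made-up rational weights
b = [0.0, 0.1]                  # made-up biases
state = [1.0, 0.0]
for _ in range(3):
    state = step(state, W, b)
print(state)                    # state after 3 updates (≈ [0.175, 0.175])
```

Only with infinite-precision real weights (and no noise) does the model exceed Turing computability, which is exactly the unrealizable case.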

Content from the four-page Siegelmann paper in Science proposing a unified hypercomputation model based on ARNN's is fine, but that paper doesn't claim it's physically *plausible* that any ARNN in our physical Universe could solve *useful* (rather than random) uncomputable problems; any such claims are WP:FRINGE speculation that would need to be well-sourced. Rolf H Nelson (talk) 06:12, 2 March 2016 (UTC)[reply]

Time Dilation

In the first bullet on Hypercomputer proposals, usual time dilation can't do this; I suppose you need a Malament-Hogarth spacetime. Furthermore, this computation takes an infinite amount of energy, so it should be considered impossible. This bullet shouldn't be put first since it's so fantastic. I'd like to see this bullet struck completely. This article should be cleaned up for writing style and content. —Preceding unsigned comment added by Pulu (talkcontribs) 20:01, 20 November 2009 (UTC)[reply]

dubious - superluminal processing

processing is entropic transfer. if superluminal (non-entropic) transfer is not possible, neither is superluminal entropic transfer ("processing"). i suppose the question comes down to channel capacity. i'm pretty sure the math works out. Kevin Baastalk 18:41, 27 September 2010 (UTC)[reply]

Cleaning up "Hypercomputer proposals" section

I think that the section on "Hypercomputer proposals" could be greatly improved. Section 2 (pp. 3-7) of a recent paper by Aran Nayebi (which can be found here: http://arxiv.org/abs/1210.3304) provides a rather thorough and accurate survey of these models as well as the criticisms of their physical realizability in the literature. I will try to get to this cleanup soon when I get the time, but in the meantime those who would like to commence feel free to do so.

Also it does appear that the author provides his own new criticisms of hypercomputation in general (see Section 3 of the article) and proposes an alternative direction for hypercomputation (what he calls "plausible hypercomputability") in Section 4. And interestingly enough, in the Acknowledgements section (pg. 26) it appears that Martin Davis, one of the main critics of the current hypercomputation movement, has supported Nayebi's proposal for a "plausible hypercomputability". Perhaps this proposal should be mentioned in this Wikipedia page? — Preceding unsigned comment added by 198.228.216.21 (talk) 02:15, 22 October 2012 (UTC)[reply]

Simulation

It seems that a Turing machine can reasonably simulate the physical world - figure out the mass & velocity of every particle and calculate where it'll end up at some time in the future. Doesn't that mean that any physically realizable device (such as a hypothetical hypercomputer) can be simulated by a piece of software running on a Turing machine? Since no piece of software running on a Turing machine can solve the halting problem, neither can a simulation of a hypercomputer...or an actual hypercomputer. Ergo they can't exist.

The only get-out clause is if (using black holes or something) you can run the hypercomputer at infinite speed...which is the same thing as a Turing machine running at infinite speed simulating that device.

So the bottom line here is not whether there is something "better" than a Turing machine - it's whether you can (through various relativistic trickery) get the Turing machine to run at literally infinite speed.

But even that isn't physically realizable because the reliability of any physical machine is guaranteed to be finite (because of quantum fluctuations) and hence in the machine's own frame of reference, it can't run forever. In our frame of reference, the machine will break down without producing the answer you need the very instant you turned it on.

But (for example) you don't need to build an actual turing machine in order to prove the existence of the halting problem - so maybe you don't need a hypercomputer to make interesting and important discoveries from considering what it would be like if you DID have one just sitting around on your kitchen table.

SteveBaker (talk) 14:22, 7 August 2014 (UTC)[reply]

Super-Turing Computation As Being Equivalent To A Mind

Greetings. This is my very first time on Wikipedia talk page so I hope I'm following the guidelines. Please correct me if that is not the case.

I am entering into university as a computational neuroscience student and while learning about different computational systems for artificial intelligence, I came across this paper about how minds can be seen as hypercomputation systems.[1]

Before getting my hopes up about a method to create AI with human or greater intelligence, I would like to know the opinion of people who have participated on this page to whether super-turing machines can lead to such a thing.

NAStrahl (talk) 18:51, 30 August 2014 (UTC)[reply]

As I explained in my previous post, it seems that a Turing machine can adequately simulate the behavior of a collection of atoms - and since the human brain is composed of atoms, there is no seeming objection to a Turing machine being able to solve any problem in finite time that a human brain can solve in finite time. If we define "hypercomputation" as being some form of computation that a Turing machine cannot perform, then it follows that human brains cannot be hypercomputers. SteveBaker (talk) 21:55, 30 August 2014 (UTC)[reply]

References

  1. ^ Arkoudas, Konstantine. "The modal argument for hypercomputing minds" (PDF). School of Queen's Computing. Elsevier B.V.

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Hypercomputation. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:06, 7 April 2017 (UTC)[reply]

The example of a "real computer" is nonsense

Type Two Turing Machines do admit uncomputable real numbers as inputs, and yet aren't capable of meaningful "hypercomputation". What's written is totally unsalvageable gibberish. See computable analysis for a summary. --Svennik (talk) 09:06, 3 October 2024 (UTC)[reply]