Talk:Newcomb's paradox


The Fruit Machine Gamble

This doesn’t look like a paradox at all in the context of gambling. Gamblers are frequently confronted with a choice where two intuitively logical strategies give conflicting answers. The difference between the strategies in this case is how each one deals with risk.

Imagine the following win on a fruit machine, where your next move can be (1) analogous to choosing both boxes or (2) analogous to choosing one box. (For illustration I’ve set the probability of an incorrect prediction at 1 in 100, but the exact odds are not important. The alien ‘Predictor’ of Newcomb’s paradox was right 999 times out of 999; the implication therefore is extremely high odds of correct prediction, but no guarantee.)

You have just won $1000 on a minor jackpot and your choices are as follows:

Press (1) to bank the $1000 and play for a 1 in 100 chance to win an extra $1000000, or press (2) to gamble the $1000 and play for a 99 in 100 chance to win $1000000.

Or, those choices described again without the explicit probabilities:

Press (1) to bank the $1000 and play high risk for an extra million, or press (2) to gamble the $1000 and play low risk for an extra million.

In the case of both the fruit machine and ‘the Predictor’ problem, the variables are (a) the probability of the outcome (or the estimated probability that the Predictor is correct) and (b) how much risk your strategy is willing to accept.

Using the above, the strategies can be described as follows: the first strategy is “don’t take risks”, so (1) is the best choice since it eliminates all risk. The second strategy is “gamble because the odds look exceptionally good”, so (2) is the best choice. In other words, “no risk, small gain” vs. “low risk, large gain”.
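For what it's worth, here is a quick expected-value check of the two buttons (a minimal Python sketch, assuming the illustrative 1-in-100 / 99-in-100 odds above and ignoring risk attitude entirely):

    # Expected value of each button, using the illustrative odds above
    # (99% "prediction" accuracy); risk preference is deliberately ignored.
    BANKED = 1_000
    JACKPOT = 1_000_000

    ev_button_1 = BANKED + 0.01 * JACKPOT   # bank the $1000, 1-in-100 shot at the million
    ev_button_2 = 0.99 * JACKPOT            # gamble the $1000, 99-in-100 shot at the million

    print(f"Button 1 (both-boxes analogue): ${ev_button_1:,.0f}")   # $11,000
    print(f"Button 2 (one-box analogue):    ${ev_button_2:,.0f}")   # $990,000

On raw expectation button 2 wins by a wide margin, so the disagreement is really about attitude to risk, not about the arithmetic.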

When risk is involved there are usually dozens of different strategies for the same scenario, each resulting in a different “best choice”, but I don’t think we can call each one a paradox unless we redefine the word itself. If the same strategy applied to the same scenario resulted in a different “best choice”, then that would be a paradox, because it would be absurd, and absurdity is what in essence makes a paradox. —Preceding unsigned comment added by Colinpearse (talkcontribs) 19:48, 10 January 2010 (UTC)[reply]

Implicit assumptions and ill-defined games

This only appears to be a paradox because the game is poorly defined, and there are implicit, incompatible assumptions.

To see this, let's define it a little differently. First, quantify the "success" of the Predictor as the probability 0≤p≤1 that it/he/she is correct in the predictions. Now, to temporarily get rid of the question of the possibility of p>.5 (or the possibility that the Predictor is trying to "outsmart" you, which isn't implied anywhere in the description of the game), redefine the game as follows:

There are two doors, A and B. Behind door A will always be $1000, and behind door B will always be $1,000,000. However, there is a large, high-power laser behind door B, capable of instantly vaporizing the $1,000,000. To play, I get to push either button 1, which opens both doors, or button 2 which opens only door B. However, the instant I push button 1, the laser will fire with probability p. If instead I push button 2, the laser will fire with probability 1-p. In our redefined game, you get to choose p before we start.

If you tell me honestly what you've chosen for p, then my choice is obvious: if p>.5 (strictly, p>0.5005, once the $1000 behind door A is taken into account), I always push button 2; otherwise, I always push button 1.
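A quick sketch of where the two buttons break even under the payoffs defined above (the $1000 behind door A pushes the crossover slightly above one half):

    # Expected payoff of each button in the doors-and-laser game, as a function of p.
    def ev_button_1(p):   # open both doors; the laser vaporizes the million with probability p
        return 1_000 + (1 - p) * 1_000_000

    def ev_button_2(p):   # open door B only; the laser fires with probability 1 - p
        return p * 1_000_000

    # Break-even: 1_000 + (1 - p) * 1_000_000 == p * 1_000_000
    print(1_001_000 / 2_000_000)   # 0.5005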

But that's too easy, so let's throw in a little twist. You tell me a value for p, but then you throw in the possibility that the value you told me is not the actual value you chose. The first decision I have to make now is whether or not I think you've told me the true value of p. But, on what logic do I base that decision? It becomes a subjective judgement, based on how well I know you and whether I think you're bluffing. This is not a question of logic, and this game is not well defined.

Now, getting back to the original game, if you tell me the Predictor is correct with probability p >.5, I first have to decide whether I think that is even possible. If I believe in UFOs, ESP, and/or God, and I think you are an honest person, I might conclude that p>.5 is indeed correct. In that case I will choose box B every time, because this game is equivalent to the above redefined version of the game when the value of p is known for certain. If I don't believe in UFOs, ESP, or God, I will conclude you are a flake or a liar, or both. I will also believe that p>.5 is impossible and therefore choose both boxes every time. In this case, I won't be certain about the value, but I will be certain that p≤.5. If I believe in UFOs, ESP, and/or God, but I'm not sure about you, then I'm back to deciding whether or not you're telling the truth. But in any case, p is a critical part of the game and yet my certainty of its value depends on my belief in the existence of UFOs, ESP, or God, and possibly your veracity. Even worse, the very possibility of p>.5 depends on the existence of UFOs, ESP, and/or God, regardless of my belief. Since these are not issues which can be resolved by logic alone, the game is poorly defined for the purpose of exposing a logical inconsistency.

However, the definition of the game clearly implies p>.5, which leads us to the issue of invalid assumptions. The strategies given in the article appear to give a paradox if we assume that what's in the boxes is independent of our choice. But the critical question here is whether or not independence is a valid assumption. If we admit the possibility of p>.5, we are implicitly admitting that our choice can indeed affect what's in the boxes. While this may offend our sense of causality, the existence of p>.5 would suggest that our sense of causality is fallible. Thus, the real cause of the supposed paradox is the assumption of p>.5 and the simultaneous assumption of independence.

22:17, 23 May 2008 (UTC) DHM (I can't sign my name because I did this at work!)

Except wouldn't even a monkey have p=.5? So if I accept your analysis, I only have to believe that the Predictor will know me slightly better than a monkey does. Isn't that already highly plausible? And yet, isn't independence also highly plausible? Also, "p>.5" versus "independence" sounds to me an awful lot like "determinism" versus "free will", and the article does already discuss that.
While we're redefining the game, here's my favourite: I'm not going to claim the Predictor is a psychic, an alien, or the All-Knowing. But I stipulate: the Predictor gets to meet you and ask you one question before making a Prediction. You do not have to answer honestly! But the predictor gets to observe your tone and your body language. Then, I will let you watch some other Choosers go before you, and you see:
  • M times the Predictor correctly guessing the Chooser will take both
  • N times the Predictor correctly guessing the Chooser will take B
  • X times the Predictor incorrectly guessing the Chooser will take both
  • Y times the Predictor incorrectly guessing the Chooser will take B.
Now the situations I describe are all possible, because, even if the Predictor is a monkey, there is always a non-zero probability for this. E.g. a monkey's chances for M=N=9 and X=Y=1 are about 1 in 5000, I think. And this is a 90% success rate! So, you can always simply believe that the Predictor has been lucky, and proceed to choose both boxes, no matter what. But is that how you will behave? Are there values of M,N,X,Y for which you will change your mind? --EmbraceParadox (talk) 03:36, 24 May 2008 (UTC)[reply]
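One way to sanity-check that ballpark figure (a sketch assuming a "monkey" Predictor that guesses each of the 20 Choosers correctly with independent probability 1/2; "about 1 in 5000" then corresponds to getting at least 18 of the 20 right):

    from math import comb

    n = 20                                                # Choosers observed (M + N + X + Y)
    tail = sum(comb(n, k) for k in range(18, n + 1)) / 2 ** n
    print(tail, 1 / tail)                                 # ~0.0002, i.e. roughly 1 in 5000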

I think your observations actually support the points I was trying to make, although I should have articulated them more carefully. I should have started with a question: Is the point of the game to choose the best strategy assuming that p>.5, without asking how p>.5 can be achieved? Or is part of the strategy supposed to address the question of whether p>.5 is even possible?

If the definition of the game is the latter, then it does indeed include the question of determinism vs. free will. A very insightful philosopher could possibly identify some deterministic rules governing human thought to the point that he/she could predict my choices with p>.5. At the other extreme, I would expect a monkey to get p=.5 on a large enough sample size, and do agree that for a small sample size a monkey could achieve p>.5. In fact, I would expect a human being to get p=.5 on a large sample size, unless he/she does indeed possess ESP, talks to God, or is the brilliant philosopher mentioned in an earlier sentence. The problem is nobody, as far as I know, has ever resolved the issue of determinism vs. free will or any of the other phenomena which would make the possibility of p>.5 a certainty. That is precisely why the game is ill-defined if it is indeed supposed to address this question. While the strategy doesn't depend on the precise value of p, it does depend critically on whether p>.5. If we are not given p>.5 for certain as part of the definition of the game, then we are stuck at square one with no way to proceed. There is no paradox of optimization strategies because we don't even know what we are optimizing.

If instead the definition is the former, to simply accept p>.5 as part of the definition of the game, then it is not a question of free will vs. determinism, or the existence of ESP or God or aliens. It is simply an interesting mathematical game with an element of probability. In that case my alternate definition of the game clearly demonstrates the error in assuming independence. In that version what we find behind the doors after they have opened is indeed dependent on my choice, so independence is an invalid assumption. I contend that my version is mathematically equivalent to the original version, with the Predictor of given p. Therefore, independence is an invalid assumption in the original version of the game as well.

Thus, referring to my initial question, in the latter case the game is indeed ill-defined, precisely because it does involve uncertainty in the value of p, due to a variety of questions, including determinism vs. free will. In the former case, the game is well defined with p>.5, but independence does not apply; what is inside the box does indeed depend on what my choice is. In my redefined (but mathematically equivalent) version of the game the lack of independence doesn't cause any heartburn, but in the original version it certainly does. It brings in all those questions about God, aliens, free will, cause and effect. That is precisely why it seems to present a paradox: it draws attention away from the precise mathematical definition of the simple game (assuming p>.5 is given), effectively hiding the failure of independence behind a haze of esoteric philosophical nuances. This is exactly the technique of illusionists and advertising: draw your attention away from the simple facts by distracting you with something irrelevant but much more interesting.

15:51, 27 May 2008 (UTC) DHM

No, I don't agree. Your game is different in that the laser acts based on your choice, whereas the predictor acts based on what he thinks your choice will be, and you make the choice afterward. This makes all the difference, of course. I also don't agree a human would necessarily have p=.5. What about polygraphs? These might not be all that accurate, but you've already noticed they only have to be somewhat better than random. I think this establishes the possibility in principle, without appealing to God.
And in any case, what about this game: you are hooked up to a polygraph, and the Predictor asks you what you are going to Choose. You are not required to answer honestly. After your answer, he makes his Prediction in secret. Now, with his Prediction already made, you make your Choice. Whatever you were going to do when you answered the Question, you can change your mind. When he asks the Question, you should say that you are going to choose only one box, and mean it. But once he's made his Prediction, your Choice does not have any effect on it. So you should choose both boxes. But of course at the time you answered the Question, you would know full well that you could later change your mind, so you wouldn't mean what you said. This conundrum is absent in your laser game. (This version makes it similar to Kavka's toxin puzzle.) --EmbraceParadox (talk) 17:26, 27 May 2008 (UTC)[reply]

Your description of the human predictor using the polygraph doesn't alter my conclusions. p describes his probability of being correct, regardless of how he arrives at those predictions. If the value of p is defined as part of the game, then the method by which his predictions are reached and any attempts to fool him are irrelevant; he IS going to be correct with probability p. You may be trying to concoct an argument to support or deny the possibility of p>.5, but your method still leaves uncertainty, since polygraphs aren't completely reliable.

With regard to a mathematical description of the experiment in terms of the probability p, both my game and the original are indeed identical. Suppose I play against the Predictor N times, where N is large. Suppose further that I choose box B L times, and choose both boxes M times, where L+M=N. Then, since the Predictor is correct with probability p, and N is large, I expect to find after I've finished that when I chose only box B it will have contained $1,000,000 pL times and it will have been empty (1-p)L times. I also expect to find that when I chose both boxes, they will have contained $1000 pM times and $1,001,000 (1-p)M times. These are exactly the same results I expect to find after playing the version with doors and buttons and lasers if I push button 2 L times and button 1 M times.
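A small simulation sketch of that claim (assumptions: a fixed accuracy p applied independently each round, and a Chooser whose choice is fixed in advance; both mechanisms are coded side by side so the long-run averages can be compared):

    import random

    def play_predictor(p, one_box):
        # Original game: the Predictor guesses my choice correctly with probability p
        # and fills box B accordingly; I then take box B only, or both boxes.
        correct = random.random() < p
        predicted_one_box = one_box if correct else not one_box
        box_b = 1_000_000 if predicted_one_box else 0
        return box_b if one_box else box_b + 1_000

    def play_laser(p, one_box):
        # Doors-and-laser game: the million behind door B is vaporized with
        # probability p if I open both doors, and with probability 1 - p otherwise.
        fire_prob = 1 - p if one_box else p
        box_b = 0 if random.random() < fire_prob else 1_000_000
        return box_b if one_box else box_b + 1_000

    def average(game, p, one_box, rounds=100_000):
        return sum(game(p, one_box) for _ in range(rounds)) / rounds

    for one_box in (True, False):
        print(one_box, average(play_predictor, 0.9, one_box), average(play_laser, 0.9, one_box))
    # Both columns converge on ~900,000 for one-boxing and ~101,000 for two-boxing.

The two mechanisms differ, but the long-run accounting is identical, which is the equivalence being claimed above.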

Yes, the mechanism by which the money makes it into the boxes is different than the mechanism by which the money appears behind the two doors, but the outcome of the game for large N will be the same. As far as the outcome is concerned, it doesn't matter if the laser fires based on my immediate choice or the Predictor places the money based on a prediction of my choice. The result of both will be determined by my proportion of the two choices and the probability p.

20:40, 27 May 2008 (UTC) DHM

Well, if we're talking statistics, then my first example holds: Suppose the Predictor has played N other Choosers before you. L of them chose box B, and M of them chose both boxes. (L+M=N). The Predictor was correct about box B X times, and incorrect Y times (X+Y=L). The Predictor was correct about both boxes U times, and incorrect V times (U+V=M). Suppose further that the Predictor never claimed to be anything other than human, he just claims to be able to read people fairly well, and finally, suppose he has no evidence to back up these claims whatsoever, besides this track record of N previous games. Now it's your turn. What do you do? Does your answer depend on the values of X, Y, U, and V at all?
You might object that I haven't actually supplied you with the true probability p you keep talking about. Of course I haven't. I don't see why this needs to be specified to make this example well-defined. Prior probabilities are usually not given. Cheers, --EmbraceParadox (talk) 23:03, 27 May 2008 (UTC)[reply]

The article points out that people are equally divided on the best strategy to "maximize the player's payout". It then claims that this disagreement constitutes a paradox. That is the claim I dispute. To constitute a paradox, both strategies have to appear to draw their conclusions from logical deduction based on the same information, or postulates. The disagreement of the two groups can fail to be a paradox for various reasons: one or both groups make an invalid logical deduction, one or both groups choose to ignore one or more of the postulates, or one of the postulates is not clearly defined, requiring a subjective interpretation. There is no paradox in disagreement of subjective interpretation. Otherwise, one could conclude that political elections constitute a paradox because some people vote for Republicans while others vote for Democrats.

The reason I keep talking about the probability is because this game is indeed a question of probability. The only way to avoid that is to postulate that the Predictor is absolutely, without a doubt, 100% correct. Then you always know for certain what will be in the boxes when you make your choice. Otherwise, there is an undeniable element of chance here, and a strategy to maximize payout should account for this. The second strategy (as described in the article) chooses to accept the *postulate* that "the Predictor is almost never wrong." The first strategy chooses to ignore this *postulate*.

Thus, the issue here is the nature of the *postulate* "the Predictor is almost never wrong". The article is magnificently vague on this: "Some assume that the character always has a reputation for being completely infallible and incapable of error. The Predictor can be presented as a psychic, as a superintelligent alien, as a deity, etc." If "the Predictor is almost never wrong" is indeed a postulate of the game, then the disagreement fails to be a paradox because one group clearly incorporates it and the other clearly does not. If on the other hand the article is intentionally vague in order to force a subjective judgement on the validity of "the Predictor is almost never wrong", then the disagreement again fails to be a paradox. Either way, there is no paradox.

The problem as posed at the end of the previous posting, ("Now it's your turn. What do you do? Does your answer depend on the values of X, Y, U, and V at all?") asks whether I want to consider the data regarding the Predictor's history. This is very interesting, but it goes right to the heart of my argument. The data fall short of confirming that "the Predictor is almost never wrong", so I am forced to make a subjective judgement, first whether to consider the data at all, and second, to interpret the data if I do choose to use it.

Perhaps this answers the objection "I don't see why this needs to be specified to make this example well-defined." Yes, the game is defined well enough for anyone to play, but not well enough to conclude that disagreement over the "best" strategy constitutes a paradox. The fact that probabilities are usually not given reflects the reality of problems of this type; the "best" strategy is often not clear. I think this is what makes them interesting. But I don't think difference of opinion is synonymous with paradox.

14:08, 1 June 2008 (UTC) DHM

Having focused on the probabilistic interpretation, I'm now pondering the implications of the "failure of independence" I discussed in my first post. Specifically, my statement that "p>.5 would suggest that our sense of causality is fallible" suggests that there might be some interesting questions there, and after thinking about it beyond just a "quick fix" to the immediate problem, I see that there are some interesting consequences. It does look like it could open up a can of worms. Furthermore, I see from a quick read of some of the other websites that some smart people seem to have spent some time on the problem, so somebody has probably already gone farther down this path than I ever will, whether it leads to resolution or dead end. In any case, I can't spend any more time on this. It's been fun, thanks.

22:09, 1 June 2008 (UTC) DHM

I don't see why the data would have to fall short of confirming it. First, as you've pointed out, the Predictor only has to be slightly more reliable than a monkey, so surely it doesn't take very many tries to be statistically significant. But the article shouldn't really give the impression that mere difference of opinion constitutes a paradox. Cheers. --EmbraceParadox (talk) 19:18, 3 June 2008 (UTC)[reply]

I guess I'm thinking about a Predictor who could predict my guesses even if I'm choosing randomly, for instance tossing a coin. If in fact I'm trying to thoughtfully maximize my return I probably won't be random, and an observant Predictor should probably be able to do better than 50%. If I were indeed choosing randomly I would want a large sample to really believe the Predictor was doing better than 50%. But the more I think about it, the more I think the really interesting case is where the game does indeed postulate a Predictor with p>.5, for any strategy, including random. (Random choice is a reasonable expectation; some people seem to prefer wild guesses over thinking - teenagers come to mind.) Although this is mathematically equivalent to the doors-with-laser problem with regard to the statistical outcome, the implications regarding causality are not equivalent. I think this is where the paradox lies, but you have to think about it as more than just a probability exercise.

02:57, 4 June 2008 (UTC) DHM

Mathematics vs semantics

Perhaps we should make the mathematics under the various assumptions taken by different philosophers more clear for this page, as right now it seems that lots of different definitions and philosophical asides and semantics are all being mixed up into one article. Because some of these definitions are at odds with each other, the article becomes confusing. In other words, this is presented as a word problem, and parts of it might be more clear if presented as a math problem (or say, two math problems for different interpretations). A potential problem with this is that to even formulate the questions somewhat rigorously might be 'original research'; most people seem to have become lost in semantics and ambiguities of language. In any case, what do you think? - Connelly 04:03, 6 March 2007 (UTC)[reply]

Excellent Post

This is an excellent post! Your concept of a mathematical proposition is certainly absolute, and for the exact reasons you proposed! Especially your concept of "original research"!

The exact nature of the mathematics in this paradox is the same as in the "four-part harmony notations in music"! The mathematical calculation algorithms are the same!

Also the "original research" concept always reiterates the problem inside the "original research", and thereby reiterates the paradox, which is literally based on the semantics of prophecy (which is semantics) to begin with!

The insertion of numbers as a certainty constant (there always has to be "one" constant) simply stabilizes the paradox, and allows the semanticism to run in its own chaotic order and express the opinion of each user only!

A type of semantic shell game! A man with two cups and a pea could do the same thing! (Three cups give even higher odds in favor of the predictor, who only has to predict that you will choose the wrong cups!) Chicogringosegundo 07:44, 13 March 2007 (UTC)Chico[reply]

Consistency of terminology

When adding to this page, can contributors please keep to a single terminology (currently "Box A / Box B" at the start), and not deviate to other terminologies such as "red / blue", "? / X", "Open / Closed"? Thanks! NcLean 29 May 2004

Predictor can not see the future

Pulled this paragraph because it is untrue. The two players can equally achieve their goals and reach equilibrium if Chooser always takes the closed box. Only if Predictor has a goal of minimizing payments is this true (and that is not part of Newcomb's Paradox). Rossami 21:04, 18 Mar 2004 (UTC)

A game theory analysis is straightforward. If Chooser wants to maximize profit, and Predictor wants to maximize the accuracy of the predictions, then the Nash equilibrium is for Chooser to always take 2 boxes, and for Predictor to always predict that 2 boxes will be chosen. This gives a payout of $1000 and a perfect prediction every time. If Predictor's goal is to minimize payments (rather than maximize prediction accuracy), the equilibrium is the same. If two people played this game repeatedly, they would probably settle into this equilibrium fairly quickly.
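A brute-force check of that claim for the one-shot game (a sketch; the Chooser's payoff is money, and the Predictor's payoff is taken here as 1 for a correct prediction and 0 otherwise, one way of formalizing "maximize prediction accuracy"):

    from itertools import product

    choices = ("one_box", "two_boxes")

    def chooser_payoff(choice, prediction):
        box_b = 1_000_000 if prediction == "one_box" else 0
        return box_b if choice == "one_box" else box_b + 1_000

    def predictor_payoff(choice, prediction):
        return 1 if choice == prediction else 0

    for choice, prediction in product(choices, choices):
        chooser_best = all(chooser_payoff(choice, prediction) >= chooser_payoff(c, prediction) for c in choices)
        predictor_best = all(predictor_payoff(choice, prediction) >= predictor_payoff(choice, q) for q in choices)
        if chooser_best and predictor_best:
            print("Pure-strategy Nash equilibrium:", choice, prediction)   # only (two_boxes, two_boxes)

(The sketch treats the two moves as simultaneous, which is how the equilibrium analysis above implicitly treats them.)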

I believe that the above paragraph is correct and reinserted it. Suppose Chooser wants to maximize profit and Predictor wants to maximize prediction accuracy. Your suggestion of Chooser always taking the closed box and Predictor always predicting the closed box is not a Nash equilibrium: Chooser could improve his profit by switching strategies. AxelBoldt 18:21, 24 Mar 2004 (UTC)

If Chooser always takes the closed box and Predictor has always predicted that only the closed box will be taken, then Chooser walks away with $1,000,000 and Predictor still has a perfect prediction record. Perhaps I am misunderstanding your scenario? Rossami 23:53, 24 Mar 2004 (UTC)
That's the scenario I'm talking about. It's not a Nash equilibrium: Chooser could switch strategies, take both boxes and end up with more money. In a Nash equilibrium, none of the players can increase their profits by unilaterally changing strategy.
By contrast, the scenario where Chooser always takes both boxes and Predictor always predicts this is a Nash equilibrium: if either player unilaterally changes their strategy, they will be worse off. AxelBoldt 03:24, 25 Mar 2004 (UTC)
You're right. I was confusing Nash equilibrium with other equilibrium. Thanks for correcting. Rossami
The issue with all this is that the Chooser can't make a unilateral move. The Predictor will already have predicted that. The Chooser knows this, forcing him into a first-order consideration of the Predictor's behaviour. The result is that the Chooser always picks just B.

Unrelated insults

This page embodies the extremely stupid philosophies held by professors everywhere. Bensaccount 22:27, 18 Mar 2004 (UTC)

Good morning, Bensaccount. This is a real paradox that does not have a trivial answer. Please study the issue more carefully before imposing your personal interpretation on the problem. As the article mentions, it is often very difficult for those who see one solution to recognize the validity of the other interpretation. Thanks. Rossami 13:45, 19 Mar 2004 (UTC)

Hume quotation

The David Hume quote is interesting, but I don't see what its connection to this article is. Can you please make the connection more explicit? Rossami 00:57, 20 Mar 2004 (UTC)

Reverse causation is defined in the problem. The reason that people are choosing the second option is that they don't get that reverse causation (the choice in the future affects the outcome of the choice) is defined in the problem. They are arguing with something that has been defined, because the wording of the problem does not convince them thoroughly enough that the events in the future are causing the results in the past.
To sum up: they are arguing with the definition of the problem. Bensaccount 01:15, 20 Mar 2004 (UTC)
I'm not sure that interpretation is universally held. A quantum physicist, for example, might reach the same conclusion that you do - choose the closed box - from a completely different perspective. The quantum perspective would consider the closed box in a state of superposition until the point of choice. Reverse causation is unnecessary. A many-worlds cosmologist, on the other hand, might take both boxes, arguing credibly that reverse causation is an illusion - an artifact of perception. Let me take a few days to do some external research, though. Thanks. Rossami 05:04, 20 Mar 2004 (UTC)

Reverse causation is defined in the problem. This is not a paradox. Bensaccount 01:04, 22 Mar 2004 (UTC)

Wow that glass box version is confusing. Prediction with 100% certainty means that there is no free will. This is still defined in the problem. Bensaccount 03:32, 25 Mar 2004 (UTC)

I regret your removal of this relevant quotation. Do you have a reason? Bensaccount 03:55, 25 Mar 2004 (UTC)

The problem statement, at least Nozick's version, does not stipulate reverse causation. It is possible that the predictor has such an excellent success rate coincidentally, or only under conditions that do not require reverse causation. If one believes that reverse causation is impossible, then it is more likely that the predictor was only right coincidentally, regardless of how unlikely that is.

On punishing people for rational decisions, we have Pascal's Wager, which is remarkably similar -- just stipulate that Christian dogma is fact and it's "rational" to do all sorts of idiotic things. -- 98.108.225.155 (talk) 07:15, 22 November 2010 (UTC)[reply]

Major bias in the old version

The old argument is extremely biased. I will prove why:

  1. The names of the philosophers who support option two are not mentioned.
  2. The source of the 50-50 split result is not mentioned.
  3. The fact that 100% accuracy of prediction eliminates free will is not mentioned.
  4. Only the argument for the second version is given. (Shouldn't it explore both arguments equally to be fair?)

I have more if you want it. Bensaccount 03:48, 25 Mar 2004 (UTC)

100% accuracy of prediction doesn't necessarily eliminate free will; it needs "just" backward causality. (So your current decisions cause the predictions.)--88.101.76.122 (talk) 17:19, 6 April 2008 (UTC)[reply]

Paradox requirements

This is not a paradox because there is only one possible outcome based on the definition of the problem. Bensaccount 03:48, 25 Mar 2004 (UTC)

A paradox leads logically to self-contradiction. This does no such thing. Only illogical argument with the problem itself will lead to contradiction. The problem leads only to a single final outcome. Bensaccount 04:04, 25 Mar 2004 (UTC)

This is indeed a paradox, as two widely accepted principles of decision making (Expected Utility and Dominance) contradict one another as to which is the best decision.

Kaikaapro 00:18, 13 September 2006 (UTC)[reply]

There is no contradiction. Choosing both boxes gives you $1000; choosing B only gives you $1000000. No contradiction. It's only the counter-intuitive concept of backward causality which fools some people into arguing for taking both boxes.--88.101.76.122 (talk) 17:10, 6 April 2008 (UTC)[reply]
Uh, no. There is no stipulation that choosing B only gives you $1000000. That only follows if the past success of the predictor necessitates the predictor's future success, but there is no such necessity. If backwards causality of the sort posited here is logically impossible (and I believe it is), then it is more likely that the predictor will fail this time, regardless of how unlikely that is. -- 98.108.225.155 (talk) 07:34, 22 November 2010 (UTC)[reply]
Kaikaapro, there is no paradox; the apparent clash merely indicates that the expected utility is being miscalculated by assigning an incorrect Bayesian value to the predictor's assessment. Regardless of what the predictor has done in the past, dominance assures us that we still benefit by taking both boxes. -- 98.108.225.155 (talk) 07:34, 22 November 2010 (UTC)[reply]

It isn't a paradox in a logical sense, though it would appear counter-intuitive to many people, which would lead them to assume a paradox was involved rather than faulty assumptions. The accuracy of the predictions is outlined in the problem, and should be assumed. Effectively, the choice made is the prediction made. Even allowing for some error (almost always correct), it would still pay to assume 100% accuracy anyway. Ninahexan (talk) 02:25, 11 January 2011 (UTC)[reply]

expected utility?

The expected utility is $1,000.00 and box "B" is empty! Because the expected utility for box "B" is any number between $0.00 and $1,000,000.00, and none of these numbers have the slightest effect upon the fact A+B = B+$1,000.00! As this fact is certain it is a constant, and as it is a constant it is also a certainty! This dual certainty will cause box "B" to be empty, because there are only two boxes, and dual certainty is already established by one free box, a guaranteed $1,000.00!

This dual certainty cannot be changed by an empty box, nor by any amount of money, for the selection "both" will still be worth $1,000 more than box "B" alone! Therefore, for dominance and for utility, the logical choice is "both"! Chicogringosegundo 10:29, 13 March 2007 (UTC)[reply]

Whether the paradox is real

After several days research and thought, I am firmly convinced 1) that this is a paradox with a non-trivial analysis and 2) that the original version (while imperfect) was closer to NPOV than the current version.

Bensaccount's primary complaint seems to be that because "reverse causation is defined into the problem" there is only one solution. However, free will is also "defined into the problem" - otherwise Chooser is not really making a choice. Using Bensaccount's framework, we have two mutually incompatible conclusions yet neither of the premises (free will and the ability to predict the future) can be easily or obviously dismissed as untrue.

OK, well proven, I didn't see that before. I stand corrected. Bensaccount 00:54, 1 Apr 2004 (UTC)
These two things don't contradict each other. Either you will freely decide to take both boxes and your decision will cause box B to be empty, or you'll freely decide to take only box B and you will cause box B to contain one million.--88.101.76.122 (talk) 17:13, 6 April 2008 (UTC)[reply]

Is it not logical to suggest that the predictor is the one lacking free will, having their choice dictated by the free will of the future chooser? Ninahexan (talk) 02:29, 11 January 2011 (UTC)[reply]

Physics

Further, I remain unconvinced that Bensaccount's framework is the only framework through which this problem can be analyzed. Reverse causation is not necessarily defined into the problem. If your view of cosmology is bounded by classical physics, it is. If you extend the bounds to other views of cosmology (quantum superpositions, etc.), the problem can still be productively analyzed without resorting to reverse causation. Rossami 00:31, 31 Mar 2004 (UTC)


Calculations

Assuming that the Predictor has probability p of making a correct prediction, the best strategy is:

Choose one box if p > 50.05%

Choose both boxes if p ≤ 50.05%

The expected outcome of choosing one box is ( 1000000p ) dollars
The expected outcome of choosing both boxes is ( 1001000 - 1000000p ) dollars
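A minimal sketch evaluating those two expressions and the break-even point they imply:

    def eu_one_box(p):        # Predictor correct with probability p
        return 1_000_000 * p

    def eu_both_boxes(p):
        return 1_001_000 - 1_000_000 * p

    for p in (0.4, 0.5005, 0.9):
        print(p, eu_one_box(p), eu_both_boxes(p))
    # p = 0.5005 is the break-even point; at p = 0.9 one-boxing gives 900,000
    # against 101,000 for two-boxing, matching the table below.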

I think the principle you are looking for is Expected Utility. Assuming the predictor is correct with probability 90%, the expected utility looks as follows (entries are probability-weighted payoffs):

                             90%                     10%
                             Predictor is right      Predictor is wrong     Expected utility
   You take one box          $900,000                $0                     $900,000
   You take two boxes        $900                    $100,100               $101,000


As you can see, taking one box has the highest expected utility.

Kaikaapro 00:10, 13 September 2006 (UTC)[reply]

I think the problem in this calculation lies in the Prosecutor's fallacy. If the predictor has a probability p of 90%, what it means is this: if the prediction is X, then the probability of the Decision being X is 90%. You cannot use this probability to compute the Expected Utility, which needs the probability q: the probability of the Prediction having been X, knowing that the Decision is X. If we write P=AB for the event Prediction = 2 boxes, P=B for the event Prediction = 1 box, and D=AB, D=B for the events Decision = 2 boxes and 1 box respectively, then using Bayes' theorem and conditional probabilities we see

Pr(P=B | D=B) = Pr(D=B | P=B) · Pr(P=B) / Pr(D=B) = p · Pr(P=B) / Pr(D=B)

and

Pr(P=AB | D=AB) = Pr(D=AB | P=AB) · Pr(P=AB) / Pr(D=AB) = p · Pr(P=AB) / Pr(D=AB)

So it all depends on the probability of the psychic predicting X and the probability of the Chooser choosing X. Can we know those values?
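To illustrate the dependence, a quick sketch with made-up numbers (assuming p = 0.9 and an assumed base rate of "B only" predictions; the Chooser's overall rate then follows from the law of total probability):

    p = 0.9                    # Pr(Decision = X | Prediction = X), as defined above
    prior_prediction_b = 0.5   # assumed: how often the Predictor predicts "B only"

    # Law of total probability for the Chooser's decision:
    prob_decision_b = p * prior_prediction_b + (1 - p) * (1 - prior_prediction_b)

    # Bayes' theorem for q_B = Pr(Prediction = B | Decision = B):
    q_b = p * prior_prediction_b / prob_decision_b
    print(q_b)                 # 0.9 here, but only because the assumed base rate is 1/2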

If we consider the case where p = 100%, the two premises (1: the Chooser has free will; 2: the decision does not affect the prediction) are obviously contradictory. Hiding the problem behind probabilities only exploits the fact that we are not used to thinking in terms of time travel.

Well, anyway...

How is p different from q here: "If the predictor has a probability p of 90%, what it means is this: if the prediction is X, then the probability of the Decision being X is 90%. You cannot use this probability to compute the Expected Utility, which needs the probability q: the probability of the Prediction having been X, knowing that the Decision is X"? How are these two different, am I missing something? Krum Stanoev 09:12, 28 September 2006 (UTC)[reply]
The calculations are perfectly fine. There's no need to invoke conditional probabilities. The question at issue is: what do I choose, AB or B? What you're suggesting holds my decision as a random variable and a function of the prediction; that's completely backwards. What the original comment said was, effectively: "Let's suppose I'm going to always choose to take B. The Predictor has an accuracy of 90%. Thus, 90% of the time he will predict B and I will get $1,000,000, and 10% of the time he will predict AB and I will get $0", and then repeated for the case where I always choose to take AB. When considering the predictions, your decision is already fixed.
You could make your decision a random variable as well, but it doesn't change the resulting best strategy: if the Predictor accuracy is >50.05%, always take B, if the accuracy is lower, always take both. Westacular 06:38, 10 June 2007 (UTC)[reply]

Nash equilibria

The game theory section is quite confused, which is amusing given how straightforward the analysis claims to be.

The article can't decide whether it's talking about a single-round Nash equilibrium, or the Nash equilibrium of a repeated game. It talks about repetition, and a threatened "retaliation" by the Predictor, but it describes an equilibrium which doesn't need retaliation or a repeated game at all. The equilibrium {choose both boxes, predict both boxes} is quite stable for the single-round game, and is the only Nash equilibrium.

However, when we have a repeated game, another Nash equilibrium is possible: both players settling on "one box". If the Predictor adopts the strategy "always predict one box unless the Chooser chose two boxes last round", then the Chooser can also always choose one box and have no incentive to deviate. Deviating would net an extra 1,000 for that round, but would cap the maximum payoff at 1,000 in the next round, where it would have been 1,000,000. Hence, the deviation is discouraged and the equilibrium is stable.

The real question is, is any of this germane to the article's topic? If this section could be made concise and simple, I could see including it. I don't see a way to make it concise and still correct unless you want to dismiss the possibility of playing repeatedly. Isomorphic 02:41, 12 Aug 2004 (UTC)

Correctly: in the single-round game, the only Nash equilibrium is {choose both, predict both}. However, both the article and the previous comment fail to state whether the game is finitely or infinitely repeated, which makes a big difference for the trigger strategy both want to apply. In the finitely repeated case, the trigger strategy fails, since the game can be solved by backward induction starting from the last period, where the chooser would deviate for sure. In the infinitely repeated game, one needs a discount factor (or else the chooser would accumulate an infinite amount of money from the payoff received each round). Then, one can calculate the discount factor needed to sustain the "good" {choose one, predict one} equilibrium. The discount factor changes with the payoffs.
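For the one-round punishment described a few comments up, the required discount factor is easy to work out (a sketch counting only the single punishment round; longer punishments only relax the bound):

    # One-shot deviation: gain $1,000 now, lose (1,000,000 - 1,000) in the punishment round.
    gain_now = 1_001_000 - 1_000_000
    loss_next_round = 1_000_000 - 1_000

    critical_delta = gain_now / loss_next_round
    print(critical_delta)   # ~0.001: even heavy discounting sustains the one-box equilibrium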
So what does this have to do with the paradox? Not a big lot if one is interested in the time travel/free will issue (not to forget that the predictor's payoff is not well defined). --Xeeron 18:20, 2 December 2005 (UTC)[reply]
I agree. The whole game theory section should get on a piece of fat and slide off. Kaikaapro 00:20, 13 September 2006 (UTC)[reply]
I went ahead and deleted it. Later on I'll add a section on Expected Utility. Kaikaapro 00:25, 13 September 2006 (UTC)[reply]

freaking crazy schiznap

This is the perfect kinda stuff to talk about when one is high/drunk. I love it. Anywho, just wanted to compliment you all on a good article. The bellman 01:49, 2004 Nov 25 (UTC)

No original research

This article very likely contains violations of Wikipedia's original research policy. Many of the proposed solutions to this paradox are attributed to "some philosophers" and other unnamed individuals. If you've made contributions to this article, please try to cite the sources of your information. If you've contributed theories and explanations to this article that you personally formulated yourself, please remove them, or consider moving them to other wiki projects that allow original research. I'll try to clean some of this up a bit, but it's of course easier if the original contributors help out as well. -- Schaefer 01:05, 27 Nov 2004 (UTC)

Article move

I'm proposing this article be moved from Newcomb's paradox to Newcomb's Problem. Whether this is actually a paradox is quite disputed. The second word should be capitalized as it refers to a specific problem by Newcomb. Also, it seems the original name of this really was "Newcomb's Problem". Google Scholar reports twice as many references to "Newcomb's Problem" than to "Newcomb's Paradox", and they are roughly equal on normal Google (all searches were conducted with quotes). There already exists an article at Newcomb's Problem that just redirects here, but has a few edits in the history so I'm taking this through Wikipedia:Requested moves. -- Schaefer 01:49, 27 Nov 2004 (UTC)

  • I no longer desire to move this article, after being made aware of the popularity of the term "paradox" to describe it in previous decades. I was admittedly taking the term too literally. -- Schaefer 04:51, 27 Nov 2004 (UTC)

About that Glass box...

Isn't the glass box example completely illogical, inapplicable and even unnecessary? It is like saying you have two open boxes, so why bother making it glass? Come to think of it, it is actually like having no box at all. Rather, it would be equivalent to "the predictor" holding out the one or two bills and saying:

"If I have two bills in my hand, I have predicted that you will only take the $1,000,000 bill. If I have only the $1,000 bill I have predicted you would have attempted to take them both, had I presented you with them".

This obviously defies all logic. In case the predictor is holding only one bill, he is not providing the so-called chooser with a choice at all; he is instead presenting him with the consequence of what choice he would have made had he had the chance to! But if determinism is the rule, does it not then follow that predictions of alternative futures/realities are impossible, or at least irrelevant? And if presented with the two bills, truly having a choice and being told he can take both, why wouldn't he? If we consider it a fact that one of the prerequisites of the paradox is that the chooser wants as much money as possible, it seems he once again is left without a choice. Indeed, the motive of the chooser has to be to get as much money as possible; if it weren't, he would choose box or boxes at random and it would indeed not matter if, instead of money, specks of dust were placed in the boxes.

One could argue that the difference with actually having a (glass) box is that you can still pick it even if you see it is empty, but is this relevant? Quoting the present article: "If he sees the closed box is empty, he might be angry at being deprived of a chance at the big prize and so choose just the one box to demonstrate that the game is a fraud". Certainly, that would be a possibility if the chooser regarded the $1,000 dispensable, but would that not violate the "maximum money" prerequisite? Let me elucidate: I cannot see a reason why changing the value on the bills would violate the "paradox". Let us then say that the $1,000 bill is not a $1,000 bill at all but actually a $999,999 bill. If one indeed suggests this alteration DOES violate the rules of the paradox, I question where exactly from $1,000 to $999,999 this violation occurs and how it can be explained. If it is not a violation and we go along with the example, then there is no reason for the chooser to be angry and choose the empty box. According to the predictor, the chooser could only have walked away with one measly dollar more even if he had been presented with both the bills.

Therefore, the only prediction that can be made is that the chooser will take what money he is presented with. Thus, the predictor can ultimately only offer one choice: the $1,000 bill (or the $999,999 if you prefer). Now, the circle is complete, he hasn't offered the chooser a choice and there is no paradox!

Doesn't this in fact hold equally true for the main "paradox", be the boxes made out of glass, other material or non-existent? Isn't the only difference between the glass box and the original problem that the chooser in the original will take what money he's presented with (i.e. both boxes) only if he's "rational" about it? But what if we take the rationality of the chooser for granted? In other words, what if all people were that rational, or at least informed about these circumstances before making their choice?

I realise I am not the first person to claim this is not a paradox, but I haven't seen it presented this way before. I must however admit things get slightly more complicated when working with the main example (and I'm too tired to develop it any further right now), but anyway, I think it's interesting. Basically, I guess I'm just questioning whether philosophers really use such a flawed example as the "glass box paradox" or if it's just a violation of Wikipedia's "no original research" policy. Also, excuse me for any grammatical shortcomings; I am not a native English speaker and it's 4 PM. Guess I got a little carried away! --Mackanma 02:22, 8 May 2005 (UTC)[reply]

1) "Glass box" is just a fancy way of saying "no box" or "open top box" or "box with a hole through which you can peek inside" or whatever. It's inconsequential.
2) The original situation and its outcomes appear to be somewhat counter-intuitive, but are nevertheless consistent: you pick, and whatever you choose, Predictor is always correct. Psychologically, it's convincing. It appears to be "magical", but what is foretelling the future if not magical?
3) The glass box situation is radically different. It is no longer convincing. It is self-contradictory. The paradox lies in the following question: can you accurately predict the outcome of a future event if your prediction itself affects that outcome? And that's exactly Minority Report... GregorB 19:07, Jun 2, 2005 (UTC)
That's the point of the paradox... there can be no perfect prediction of the future if there is to be free will. They're mutually exclusive, and this demonstrates it. -P.

Brain Lateralization

As a potential reason for the almost dichotomous split.

Self-oriented temporal linearity vs whatever is the other viewpoint. 24.22.227.53 22:36, 13 August 2005 (UTC)[reply]

Removed paragraph

According to Raihan's Hypothesis, there will be a third agent playing other than the Predictor and Chooser. If the Predictor is 100% correct in his prediction, then the Chooser will logically choose exactly according to the prediction, even if he decides to choose the opposite. This will be accomplished by an accident initiated by the third agent. Raihan's Hypothesis bases itself on the assumption that the past as well as the present and future are fixed, unchangeable points. The knowledge of the future cannot change the future. The necessary condition of an accurate Predictor is an accurate Protector, or law, that will ensure this immutability. This is interesting because one accurate prediction generates at least another prediction with certain variations. For example, the Predictor predicts that in some distant future the Chooser won't burn a certain piece of paper, and knowing this prediction the Chooser decides to do the opposite. What will happen? According to Raihan's hypothesis the paper won't be burnt. But the Predictor also predicts (knowing the Chooser's intention) that either the Chooser changes his decision for good, or he will die, or he won't get fire anywhere, etc.

The above paragraph lacks citation and verifiability. Google searches for "Raihan's Hypothesis" and "Raihan Hypothesis" turn up zero hits. Searching for just Raihan returns pages mostly about a music group of the same name, and of the links that aren't about the music group, none looks relevant. Wikipedia has no further information on who Raihan is that I can find. Raihan is not mentioned as an external reference in this article. Until verifiable information on who "Raihan" is and what his hypothesis is can be found, I'm moving this paragraph to talk. -- Schaefer 06:49, 9 January 2006 (UTC)[reply]

A similar concept is listed in Wikipedia as the Novikov self-consistency principle. 88.108.228.144 22:32, 14 June 2006 (UTC)[reply]

Original research

A very strict reading of the original research page will lead one to conclude that most of what is written on this page is original research and should be deleted. I think this is unreasonable. This is a simple paradox which does have a large impact on issues like free will etc.

But the simplicity of the formulation of the paradox makes it easy to fully explain in a few sentences a new way to look at it. Strictly speaking this is "original", but it is also unpublishable.

We don't uphold this view of original research for other, more technical subjects either. Take e.g. my edits to this section.

This derivation is standard second year university stuff and thus unpublishable. However, strictly speaking, I did introduce a new thing in here you don't find in textbooks to make it easier to understand (the partition function of a single mode in order to avoid infinite products over all modes).

So, let's not be pedantic and delete things that do not need more explanations than a few sentences. Count Iblis 12:53, 21 June 2006 (UTC)[reply]

Please understand that by removing what you wrote, I was not in any way disagreeing with what was written, nor was I condoning what was already in the Thoughts section. I also apologize for the glib way I threw out the original research buzzword. Nevertheless, buzzwords aside, I do think that thoughts on free will really are qualitatively different from statistical mechanics, and there ought to be a higher standard for inclusion. May we at least remove the link to your personal blog? 192.75.48.150 17:34, 21 June 2006 (UTC)[reply]
No problem! I removed the ref. to my blog. There are probably some articles on machine intelligence, the simulation argument etc. here on wiki, so an internal link would be appropriate. The proposition that simulating the brain would generate the same consciousness as the brain itself is a hot item in philosophy. My addition is simply that you can take the predictor to be a computer that simulates the brain under appropriate conditions. That's rather trivial. Count Iblis 18:23, 21 June 2006 (UTC)[reply]

Only one possible prediction ?

Ok, I'm new to this problem, but anyway there's a question that is not addressed here (edit: except in Mackanma's post, but then he has had no answer on this subject). I'm assuming here that the predictor is always right. We know that two boxes always contain more than one (or just the same amount). The Chooser has nothing to lose by choosing both boxes: the prediction is made, the money is in the boxes. In these circumstances, there is only one rational course of action for the Chooser, so there should be only one possible prediction: Chooser takes both. The question is "What should the Chooser do to maximise his gain?" (implicitly, "What is the best rational course of action?"), so saying that maybe he is adventurous, or doesn't care, is not a valid counter-argument here. Maybe the key to the paradox is not to put free will into question, but to notice that the predictor cannot predict that a rational Chooser acts irrationally. But then even the Chooser has no free will, if he is supposed to act rationally and there is only one rational solution. What do you think? CPM --194.51.20.124 16:56, 18 September 2006 (UTC)[reply]

Really? Even though you assume the predictor is ALWAYS right? Now, suppose you choose both boxes. The predictor is always right, by assumption. Therefore, he predicted you would choose both boxes. Therefore you make $1000. Suppose I choose only one box. The predictor is always right, by assumption. Therefore, he predicted I would choose only one box. Therefore, I make $1000000. By your argument, you acted to maximize your gain (rationally), and I did not. But, I made more money than you. A contradiction. 192.75.48.150 14:36, 28 September 2006 (UTC)[reply]

Ethics

I've been thinking about this problem for the last couple of days, and I believe it applies to ethics. The two main ethical choices can be described somewhere along the lines of evolution vs. utility (e.g. right wing vs. left wing etc.). You can make an equally good argument for both causes (e.g. an entrepreneur arguing with an environmentalist over property rights), much like this paradox, yet both can be seen as simultaneously true and false. Though they both make strong arguments, they also completely contradict each other. I was wondering if any philosophers have made this observation before (which I imagine they would have), and if so who has done so. Richard001 05:06, 1 October 2006 (UTC)[reply]

Coin flips

Hi - I have a question on this, which may well be noob-like, but bear with me.

What about if, when the Predictor makes his prediction, I then flip a coin (or use some other reasonably random binary method to make my decision, which doesn't have to require a prop - no reason why it can't be purely mathematical) - heads I open Box A, tails I open Box B? Is this disallowed by the Thought Experiment? Presumably you would say that the Predictor still predicts the correct outcome? In which case he is predicting the outcome, not my means of arriving at it, right?

Therefore could you not say that I have free will, because even though the eventual outcome can be determined by the Predictor, my method of arriving at the decision cannot? Or can he only predict the outcome when the actual decision is made by 'me', without the use of external artifice?

Or would you argue that my means of arriving at my decision can be said to be irrelevant?

Not that it matters, but can I 'chain' my artifices, i.e. follow a binary tree, using different methodologies at each level, first using some random method to determine how many 'levels' I go down? Then even I cannot predict what decision I will make, right? I don't dispute that nothing is truly random to 'The Predictor', but surely that's irrelevant - the point is that *I* cannot know what I'm going to choose. By using a chained set of artifices to make my decision in such a way that I cannot determine the result I will choose myself, surely I then lose the link between my decision and the Predictor's prediction, i.e. he is no longer predicting my decision?

And if you're going to suggest that the Predictor is not predicting my decision, but rather the outcome, then surely that means a priori there is no conceivable way for me to maximize my return through any means, i.e. the discussion regarding Nash equilibria becomes meaningless? Mikejstevenson 16:51, 17 October 2006 (UTC)[reply]

Let's make a deal / Monty Hall Paradox

I was surprised to see no reference to this game and/or paradox, given the similar, or even identical, problems and mathematical considerations inherent in the premise. Don't misunderstand me - I'm not saying that Newcomb's is the same problem, but they are similar enough to deserve a link or a disambiguation.

Other philosophical discussions of the 'Let's Make a Deal' problem usually focus on the motivations of Monty Hall (the 'predictor' in this case?). Whether Monty is sympathetic to the contestant, and whether he knows the contents of the hidden prize, are important considerations in choosing a strategy, although this isn't intuitive. The problem is described with more skill and in greater detail here: http://math.ucsd.edu/~crypto/Monty/montybg.html Calydon 17:57, 29 January 2007 (UTC)cal[reply]

There really isn't any substantial similarity between the problems. Westacular 06:22, 10 June 2007 (UTC)[reply]
Agreed. The important part of Newcomb's paradox is that the prize is determined by a prediction of the contestant's actions. There is no analogous prediction in the Monty Hall problem, as Monty Hall never predicts which door you will choose. He just lets you choose, provides you with some information (that one particular door you didn't select has no prize), and lets you choose again based on that information. -- Schaefer (talk) 06:38, 10 June 2007 (UTC)[reply]

Just one thing

The description of the problem does not clearly state that the player is completely aware of the rules and of the potential contents of both boxes, a fact everyone puzzling over the problem assumes.

Kniesten 19:36, 4 March 2007 (UTC)[reply]

"All-knowing alien paradox"

I have seen this paradox referred to as the "All-Knowing Alien Paradox" (such as on this page: http://econ161.berkeley.edu/movable_type/archives/001396.html ). Do you guys think this is a valid nickname, and should we mention it somewhere? Pescadero 02:19, 14 September 2007 (UTC)[reply]

Easy to predict

Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." This means it is easy for the predictor to predict which group the player belongs to. Depending on this prediction, the predictor chooses the amount in Box B. What is the paradox? --NeoUrfahraner (talk) —Preceding comment was added at 03:10, 11 February 2008 (UTC)[reply]

Experiment

Just a small experiment to verify Nozick: Who thinks that choice "A and B" is better, and who thinks that "B only" is better? --NeoUrfahraner (talk) 03:29, 17 February 2008 (UTC)[reply]

Thank you for participation. Does anyone else want to answer? --NeoUrfahraner (talk) 05:53, 21 February 2008 (UTC)[reply]

My response can be found here. This is included in the article itself, as it was also mentioned in Ref. 3. In fact, my blog posting was also cited in that paper :) Count Iblis (talk) 13:16, 21 February 2008 (UTC)[reply]

Do you mean that in the simulation you will choose box B only because then you will get more in the real game? --NeoUrfahraner (talk) 12:03, 22 February 2008 (UTC)[reply]
Yes, but that's done by fooling the predictor e.g. by sneaking into the room where the boxes are and drawing a cross on the wall. But, one can object that this only works when the predictor is not perfect. If the predictor is immune to this sort of tampering, then one has to choose box B only. Count Iblis (talk) 13:17, 22 February 2008 (UTC)[reply]
"To defeat the being you must choose the closed box in the simulated world and both boxes in the real world." So when the predictor is not perfect, you will choose both boxes in the real world after you chose B only in the simulation? --NeoUrfahraner (talk) 18:39, 23 February 2008 (UTC)[reply]
You wrongly assume that the purpose of the game is to defeat you. It is clearly stated that "If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000." So, if you take only box B, there WILL be one million inside.--88.101.76.122 (talk) 16:55, 6 April 2008 (UTC)[reply]
And you wrongly assume that the Predictor is always right, without caring much about the ALMOST placed in front of it, which means he doesn't have a 100% success ratio - who knows, maybe 95%. As a consequence, if you take only box B and the Predictor happens to be wrong in one of the few cases where he is, then there will be $0 inside your box. It depends on whether you believe there is actually free will that cannot be guessed effectively by the Predictor (so that he is in fact just pretty good at guessing), or whether you think your fate and actions are already determined and that the Predictor knows for sure beforehand which box you would "choose". It is somewhat reminiscent of the prisoner's dilemma in some respects. -- Steph_O_Mac_rules (talk) 01:20, 26 March 2009 (UTC)[reply]
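To put rough numbers on this (a sketch only; p stands for the Predictor's assumed accuracy, and 95% is just the guess above): one-boxing is worth p x $1,000,000 in expectation, while two-boxing is worth $1,000 + (1 - p) x $1,000,000.

 def expected_value(takes_both, p):
     # Expected payout when the Predictor is right with probability p.
     if takes_both:
         # correct prediction -> box B empty; wrong -> B holds the million
         return 1_000 + (1 - p) * 1_000_000
     # correct prediction -> B holds the million; wrong -> B empty
     return p * 1_000_000

 for p in (0.95, 0.99):
     print(p, expected_value(True, p), expected_value(False, p))
     # 0.95 -> 51000.0 vs 950000.0;  0.99 -> 11000.0 vs 990000.0

On this expected-value reading, even a fairly fallible Predictor leaves one-boxing far ahead.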

Possible resolution

To play a game against an omniscient opponent you must be/have been/become omniscient. AnaxMcShane (talk) 13:10, 16 February 2008 (UTC)[reply]

Try it backwards

Look at it a little backwards. Say you are given the choice between A and B, but instead of the predictor knowing ahead of time, he gets to decide the contents of box B after you choose. The effect is the same in that he knows what your choice is; it's just happening in a different order. If you choose only B, he will put no money in B to minimize his losses. If you choose both A and B, he will put no money in B to minimize his losses. It's a lose-lose situation. You can't get his million unless he intentionally puts the million in B after you pick it. To outsmart him would be to make a second choice after he decides what's in B, and that would be cheating.

Box B is therefore effectively empty. No matter what you choose, it won't have the million in it. This defeats the purpose of choosing, and makes the predictor irrelevant. The same effect could be had by asking the chooser to pick between an empty box, or $1000 and an empty box.

I propose instead taking box C by force, which is where the predictor keeps the million while not busy possibly hiding it in boxes. Daverapp (talk) 05:10, 21 February 2008 (UTC)[reply]
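As a sanity check on the 'backwards' version described above (a sketch with the standard payoffs; it assumes the predictor moves second and simply minimizes what it pays out):

 def predictor_fills_b(takes_both):
     # Moving second and minimizing its losses, the predictor never
     # has a reason to put the million into box B.
     return 0

 def payout(takes_both):
     b = predictor_fills_b(takes_both)
     return (1_000 + b) if takes_both else b

 print(payout(True), payout(False))  # 1000 0

Either way box B stays empty, which matches the claim that the choice collapses to "$1000 and an empty box" versus "an empty box".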


The predictor doesn't lie to you. So the solution is to take only box B. --88.101.76.122 (talk) 16:28, 6 April 2008 (UTC)[reply]

..

This is not a paradox. It's just perhaps somewhat counter-intuitive, because your current decision has an effect on the past. The predictor is always right, so there are only two possible outcomes: 1. You choose both boxes and get $1000. 2. You choose to take only B and you get one million. If you decide to take both boxes, you will cause the predictor to leave box B empty. If you choose to take only box B, you will cause the predictor to put one million into it. Other suggested possibilities cannot occur.--88.101.76.122 (talk) 16:46, 6 April 2008 (UTC)[reply]

I agree with the above user (88.101.76.122). This is precisely how I understood the problem based on the description. Nicely explained.
Both boxes -> $1000, Box B -> $1000000, every time. --88.112.23.211 (talk) 17:38, 25 November 2008 (UTC)[reply]
Except he is not always right; he is "almost always right", which leaves a margin for error, and whether or not you believe him to be always right could affect your decision. —Preceding unsigned comment added by Steph O Mac rules (talkcontribs) 23:52, 25 March 2009 (UTC)[reply]
I still think this fails to be paradoxical, simply because you can mathematically derive the best course of action based on how likely the Predictor is to be correct, as another poster has already done earlier on this page. If you want a REAL paradox, try Prisoner's Dilemma (which I think is very closely related to this, judging by the similarity of the PD strategy to one of the mainstream NP strategies). - DrLight11 —Preceding unsigned comment added by 96.244.224.248 (talk) 03:15, 13 October 2009 (UTC)[reply]
In fact, upon further thought, I think this problem boils down to an instance in the PD where you can assume your partner will always match your decision. Hardly a brain teaser =P. - DrLight11 —Preceding unsigned comment added by 96.244.224.248 (talk) 03:18, 13 October 2009 (UTC)[reply]
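For what it's worth, the break-even accuracy behind that expected-value argument is easy to work out (a sketch, assuming the standard payoffs): the two strategies tie when p * $1,000,000 = $1,000 + (1 - p) * $1,000,000.

 # Solve p * 1_000_000 == 1_000 + (1 - p) * 1_000_000 for p.
 p_star = (1_000 + 1_000_000) / (2 * 1_000_000)
 print(p_star)  # 0.5005 - one-boxing has the higher expectation for any accuracy above this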

Coins

The answer to this problem depends on how you view the predictor and the nature of the game; there is no right or wrong answer. Suppose the predictor can only read the chooser's mind. As the chooser is thinking over his "choice", with the boxes placed before him, we introduce a third person to the situation - let's call him the game show host - and he gives the chooser a coin to flip in order to make his choice. What then? 193.120.116.182 (talk) 16:53, 26 May 2009 (UTC) shane mc d[reply]


I would like to make some significant changes to the article, and would like to get some reactions before I put them in.

The crux of the paradox, in my view (and if I am wrong, I would like to see some counter-arguments), is the existence of two contradictory arguments, both seemingly correct.

1. A powerful intuitive belief that past events cannot be affected: my future action cannot determine the fate of an event that happened before the action.

2. Newcomb presents us with a way of doing precisely this - affecting a past event. The prediction of the wise man establishes equivalence between my choice (of renouncing the open box) and the content of the closed box, which was determined in the past. Since I can affect the future event, I can also affect the past event, which is equivalent to it.

A solution to the paradox must point out an error in one of the two arguments - they cannot co-exist. Every other approach, like game-theoretic arguments, evades the need to do this. Ron Aharoni (talk) 16:06, 12 July 2010 (UTC)[reply]

I added another observation. I think that the observation does not fall within the "original research" category, since it is a very simple and probably uncontested one, and it forms a link with the free will problem. Comments on whether it is indeed the case are invited. Ron Aharoni (talk) 12:03, 16 October 2010 (UTC)[reply]

Final Section Should be Removed

The section entitled "Is the paradigm of time used in quantum field theory totally correct?" is irrelevant to Newcomb's Paradox and should be removed. Congratulations on a good summary of this issue other than this one bizarre interjection. 93.97.31.222 (talk) 18:28, 9 November 2010 (UTC)[reply]

not a paradox?

Then why is the page called that? It should be "theorem". Lihaas (talk) 05:28, 20 November 2010 (UTC)[reply]

A strange variation

Long article and posts, I searched but did not read exhaustively. The following may be redundant.

I heard of a variation on this problem. Assume you have a friend standing on the other side of the table. Box B is also transparent (on the friend's side - you still can't see the contents). Your friend WANTS you to take both boxes.

What do you do? 67.172.122.167 (talk) 05:47, 31 July 2011 (UTC)[reply]

Oops. Aaaronsmith (talk) 05:49, 31 July 2011 (UTC)[reply]

Random device?

Just responding to this paragraph of the article:

If the player believes that the predictor can correctly predict any thoughts he or she will have, but has access to some source of random numbers that the predictor cannot predict (say, a coin to flip, or a quantum process), then the game depends on how the predictor will react to (correctly) knowing that the player will use such a process. If the predictor predicts by reproducing the player's process, then the player should open both boxes with 1/2 probability and will receive an average of $251,000; if the predictor predicts the most probable player action, then the player should open both with 1/2 - epsilon probability and will receive an average of ~$500,999.99; and if the predictor places $0 whenever they believe that the player will use a random process, then the traditional "paradox" holds unchanged.

This is all a bit tricky. Firstly, talking about a "coin" is misleading, since an (ideal) coin always has probability 1/2, but the writer is talking about what the player should do if they have access to a random device that can be set to decide between two outcomes with any desired probability. (This had me confused for a long while!)

In case 1 (the predictor replicates the process), if you select a 50/50 probability, the expected payout is surely a straight average of all four possibilities ($500,500). This doesn't circumvent the paradox at all, though: choosing to open both boxes is superior to using the random device for the same reason as before (whatever is in Box B, you get more if you take both than if you don't), and choosing to open one box is superior to using the random device, since it gives a higher expected payout.

Case 2 is unclearly written, but I think the writer is saying "what if" the predictor responds to randomness by always going for the more likely outcome. In this case, setting the device to choose both boxes with probability 0.5 minus epsilon (where epsilon is a very small quantity) means there will always be a million in Box B. Since the choice is between Box B or both boxes, on average you get $1,000,500. (The writer's analysis would be correct if the choice were between Box A or both boxes.) This would be the optimum strategy, if the predictor did indeed work like that.
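A small sketch of the two cases as read above (assumptions: standard payoffs, the 'replicating' predictor makes an independent draw from the same distribution as the player's device, and the 'most probable action' predictor fills box B whenever one-boxing is the likelier outcome):

 def expected_payout(p_both, predictor):
     # p_both: probability that the player's device says "take both boxes"
     if predictor == "replicate":
         p_b_filled = 1 - p_both              # independent draw, same distribution
     else:  # "most_likely"
         p_b_filled = 1.0 if p_both < 0.5 else 0.0
     ev_both = 1_000 + p_b_filled * 1_000_000
     ev_one = p_b_filled * 1_000_000
     return p_both * ev_both + (1 - p_both) * ev_one

 print(expected_payout(0.5, "replicate"))       # 500500.0  (case 1)
 print(expected_payout(0.4999, "most_likely"))  # ~1000500  (case 2)

If those readings are right, the figures come out at $500,500 and roughly $1,000,500, in line with the corrections above rather than with the numbers currently in the article paragraph.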

But I don't see why we should be entitled to assume that the predictor can predict thoughts but not the outcome of a random device. If we simply stipulate that the predictor is capable of predicting the ACTUAL decision by unspecified means, then even mentioning a random device achieves nothing. The paradox remains: "You have two possible decisions, either of which can be shown by seemingly reasonable argument to be superior to the other."

Am I right? 2.25.135.6 (talk) 18:34, 18 December 2011 (UTC)[reply]