
Talk:Two envelopes problem



Three level paradox

Actually there are a number of levels to the paradox.

The argument which leads to switching infinitely often says that because we know nothing about the amount of money in the first envelope, whatever it may be, the other envelope is equally likely to contain half or double that amount. I wrote out the elementary argument, which iNic deleted because he didn't care to look for it in the published literature, that no probability distribution of the smaller amount of money can have the property that given the amount in a randomly chosen envelope, the other envelope is equally likely to have half or double that amount.

Firstly, by a very simple argument, such a probability distribution would have to give positive probability to arbitrarily large amounts of money. For pragmatists this is enough to disqualify the argument. There must be an upper limit to the amount of money in the envelopes.

Secondly, by a slightly more complicated argument (using the definition of conditional probability and one line of algebra) it follows that the probability distribution of the smaller amount of money must be improper. For many applied mathematicians this is also enough to disqualify the argument. Whether one is a subjectivist or a frequentist, all probability distributions are proper (have total mass 1). At least: conventionally this is so, in mainstream probability and statistics. (There is a school of decision theory where we do not demand the countable additivity assumption. And again in decision theory, if you want to have nice theorems, e.g. that the class of admissible decision rules is equal to the class of Bayes rules, you have to add Bayes rules based on improper priors.)

However this is not satisfactory to all. There are many occasions where the use of improper priors as a way to approximate total ignorance leads to reasonable results in decision theory and in statistics (though there are also dangers and other paradoxes involved). More seriously, a slight modification to the original problem brings us right back to the original paradox without use of an improper prior (though we still need distributions with no finite upper limit).

Let's look for examples and show how easy they are to generate. Let's drop the requirement that one of the envelopes contains twice as much money as the other. Let's suppose we just have two envelopes within which are two pieces of paper on which are written two different, positive, numbers. Call the lower amount A, the larger amount A+B. So A and B are also positive and I think of them as random variables; whether from a subjectivist or a frequentist viewpoint makes no difference to the mathematics.

We choose an envelope at random. Call the number in that envelope X and the other Y. Is it possible that E(Y | X = x) > x for all x? Well, if so, it follows that E(Y) > E(X), or both are infinite. But by symmetry (the law of X is equal to the law of Y) it must be the case that E(X) = E(Y). Hence we must have E(X) = E(Y) = ∞.

So to get our paradox we do need to have infinite expectation values. Let's first just show that this is easy to arrange - I mean, in combination with E(Y | X = x) > x for all x. Let the smaller number A have a continuous distribution with positive density on the whole positive half-line, and let B, the difference between the larger and the smaller, also have a continuous distribution with positive density on the whole positive half-line. Let's see what happens if we take A and B to be independent, with E(B) = ∞. A simple calculation shows directly that in this case, E(Y | X = x) > x for all x.

So there exist examples a-plenty once we drop the assumption that the two numbers differ by a factor 2. How can we resolve the paradox?

Again some pragmatists will be happy just to see that the paradox requires not only unbounded random variables but even infinite expectation values.

But I would say to the pragmatists that though you might argue that such distributions can't occur exactly in nature, they do occur to a good approximation all over nature - just read Mandelbrot's book on fractals. Moreover, as a matter of mathematical convenience it would be very unpleasant if we were forbidden ever to work with probability distributions on unbounded domains. For instance what about the standard normal distribution? And infinite expectations are not weird at all - what about the mean of the absolute value of the ratio of two independent standard normal variables? Not a particularly exotic thing. The absolute t-distribution on one degree of freedom.
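To make the infinite-expectation point concrete, here is a minimal simulation sketch (my own illustration, not taken from any of the sources; the sample sizes are arbitrary). The running average of the absolute ratio of two independent standard normals - an absolute Cauchy variable - never settles down, however many samples you take:

```python
import numpy as np

rng = np.random.default_rng(0)

# |Z1/Z2| with Z1, Z2 independent standard normal is an absolute Cauchy
# variable. Its expectation is infinite, so the running mean keeps drifting
# upwards in occasional huge jumps instead of converging.
n = 10**6
samples = np.abs(rng.standard_normal(n) / rng.standard_normal(n))
running_mean = np.cumsum(samples) / np.arange(1, n + 1)

for k in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"mean of first {k:>7} samples: {running_mean[k - 1]:.2f}")
```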

Fortunately there are several ways to show why idealists (non-pragmatists) also need not be upset. In particular, we need not switch envelopes indefinitely!

By symmetry, E(X) = E(Y) = ∞. So when we switch envelopes we are switching an envelope with a finite amount of money for an envelope with a finite amount of money, whose expectation value, given the number in the first envelope, is infinite. But the number in the first envelope also has an infinite expectation value; in fact the two have the same marginal distribution (both with infinite expectation) and we are simply exchanging infinity for infinity. We can do that infinitely many times if we like, nothing changes. Of course any finite number is less than infinity. It's not surprising and it's not a reason to switch envelopes.

Suppose we actually looked at the number in the first envelope, suppose it was x. Should we still switch? The fact that the conditional expectation of the contents of the second envelope is infinite actually only tells us that if we were offered many, many pairs of envelopes, and we restricted attention to those occasions on which the first envelope contained the number x, the average of the contents of the second envelope would diverge to infinity as we were given more and more pairs of envelopes. We are comparing the actual contents of one envelope with the average of the contents of many. The comparison is not very interesting if we are only playing the game once. (As Keynes famously said in his Bayesian critique of frequentist statistics, "in the long run we are all dead".) Now, the larger x is, the smaller the chance (since we are not allowing improper distributions) that the contents of the second envelope will exceed it. If we were allowed to take home the average of many second envelopes, the larger the amount in the first envelope, the larger the number of "second envelopes" we would want to average till we have a decent chance of doing better by exchanging.

So, on the one hand, switching indefinitely is harmless, since each time we switch we are equally well off, if we don't look in the first envelope first. On the other hand, if we do look in the first envelope and see x, and we're interested in the possibility of getting a bigger amount of money in the other, we shouldn't switch if x is too large.

This is where the randomized solution of the variant problem comes in. If we are told nothing at all, and only use a deterministic strategy, there is no way we can decide to switch or stay on seeing the number X=x in the first envelope that guarantees us a bigger chance than 1/2 of ending up with the larger number. For any strategy we can think of, the person who offers us the envelopes can choose the numbers in the two envelopes in such a way that our strategy causes us to have a bigger chance of ending with the smaller number. However, if we are allowed to use a randomized strategy then we can get the bigger number with probability bigger than 1/2, by using the random probe method. Choose a random number with positive density on the whole real line and compare it to x. When our probe lies between the two numbers in the two envelopes we'll be led to the good envelope. When it lies above both, or below both, we'll end up either with the second envelope or the first envelope. But given the two numbers in the two envelopes, it is equally likely that the first is the smaller as that the second is the smaller.
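As an illustration, here is a minimal simulation sketch of the random probe strategy (my own; the rule for filling the envelopes and the exponential probe distribution are arbitrary choices made only to have something concrete to run - the argument above does not depend on them):

```python
import numpy as np

rng = np.random.default_rng(1)

def play_once():
    # Two different positive numbers, chosen by some rule we don't know.
    # (This particular rule is just for illustration.)
    a = rng.uniform(1, 10)
    b = a + rng.uniform(1, 10)

    # One of the two is handed to us at random.
    if rng.random() < 0.5:
        first, second = a, b
    else:
        first, second = b, a

    # Random probe: any density that is positive wherever the amounts can lie.
    probe = rng.exponential(scale=5.0)

    # Keep the first envelope if its content beats the probe, otherwise switch.
    chosen = first if first > probe else second
    return chosen == max(a, b)

n = 200_000
wins = sum(play_once() for _ in range(n))
print(f"fraction of games ending with the larger number: {wins / n:.3f}")  # > 0.5
```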

I hope this is all written down somewhere in the literature. (I believe it is all correct). But if not, I am happy to oblige, and get it peer reviewed so as to check the correctness, and then others may cite, if they find it useful and interesting. Richard Gill (talk)

PS if someone wants to move this material to the "Arguments" page I hope they will just go ahead and do so. I assume that the self-appointed guardian of these pages, @iNic, already moved my elegant analysis of the "equally likely double or half" assumption there too (others may well want to read it, even if he doesn't). Richard Gill (talk) 12:44, 8 May 2011 (UTC)[reply]

Sorry iNic, you did move it to the Arguments page, that's fine.
I am gathering my "original" contributions to the two envelope problem on my personal talk page, [1]. BTW I do not believe for a moment that my analyses are "original". This is not rocket science. They do not represent a personal Point of View either. Just routine analysis of the problem which any mathematician should be able to do after following Probability 101. Richard Gill (talk) 13:06, 8 May 2011 (UTC)[reply]
After writing this all up (partly reacting to comments by other editors here) I Googled "two envelopes problem" and read the first 20 hits which were mainly to semi-popular articles by professional mathematicians. They made a lot of sense, they were coherent, there was no controversy, no mutual contradictions, or unsolved problems. I did not discover anything that was not covered by my notes. Everything I have "invented" myself the last couple of days could be found in the literature. Of course: everything interesting which can be said about the two envelopes problem is common knowledge among the professionals, it's part of our folk-lore.
From the point of view of probability theory (whether interpreted in a Bayesian or frequentist way) there is absolutely nothing "unsolved" about the problem. The facts are known, they are not disputed.
It's a fact that different people like to "resolve" the paradox in different ways (some more mathematical, some more pragmatic). The chain of reasoning leading to the "switching for ever" conclusion is incorrect. You can resolve the paradox by pointing out the error in the reasoning. You can stop right there. Or, you can create a new paradox by altering the problem, and avoiding the earlier error, and coming up with a new argument which appears to lead to a similar crazy answer. You can fix the new paradox (either by pointing out a hole in the argument or by showing that the answer is not as crazy as it appears). And so on... So there is a branching family tree of related two envelope problems and different people might like to follow different paths through the tree before they are satisfied that there is only, at the end of the day, a paradox - an apparent contradiction, not a real one.
In this sense the two envelopes problem is never solved, since it always remains a matter of personal taste which path to take through the branching family tree of paradoxes and resolutions and new paradoxes. So there will never be a consensus on how the problem should be solved. But this does not make it an unsolved problem, as is claimed in the first line of the article!
In the field of decision theory and mathematical economics there is debate about the meaningfulness of infinite expectations. The orthodox point of view is that we are interested in utility, not money; the utility of money is not linear. At some point having even more money gives us no further utility. Utility is always bounded. The switching paradox can only arise if we allow, not only for infinite utility, but even for infinite expected utility. Hence the paradox is simply not interesting, not relevant, to decision and utility theory.
I noticed a couple of very woolly articles by philosophers who clearly were not up to the mathematics and hence stayed in the "going round in circles" stage when one is not able to analyse the problem using appropriate tools and language.
Gill, can you please reveal which articles you think are the "very woolly" ones? What if others think that they are the good ones while disliking some of the ones you like? Please read (again) the NPOV policy of WP. iNic (talk) 22:37, 9 May 2011 (UTC)[reply]
Sorry iNic, I didn't notice your question, and now I have forgotten them. You may be sure these were not major articles in top philosophy journals. They were articles on blogs and newsletters. As you know, there must be thousands of such. Richard Gill (talk) 18:09, 6 July 2011 (UTC)[reply]
You said: what if others think they are good ones but dislike the ones I like? That would be great. Then they can add to the page and we can discuss why they are good, despite my initial impression. They can explain what "my sources" are overlooking. Collaborative editing. Good faith. We all learn from one another. We need experts from philosophy who can make sensible summaries of the philosophy literature, just as we need experts from mathematics who can make sensible summaries of the mathematical literature. Both can and should (constructively) criticize one another. Laypersons can criticize and improve both. By the way, the Wikipedia guidelines on verifiability say that the first thing to do when you come across unreferenced material is to add references yourself, not to delete. The second thing to do is to flag. The third is to delete. Richard Gill (talk) 07:16, 7 July 2011 (UTC)[reply]
It was nice to find out that the random probe method was actually invented by Thomas M. Cover, the famous information theorist, quite a few years before the two Australians wrote it up in the Proceedings of the Royal Society. He told it to them and they didn't believe it. But after 10 years it finally sank in, and then they wrote and published their paper and became briefly famous. In the meantime Tom had told his solution to many other people (I heard it along the grape-vine, very soon after Cover invented it, quite independent of the Australians). Pity I didn't write it up for a prestigious journal right then, I could have been as famous as they were. Richard Gill (talk) 09:45, 9 May 2011 (UTC)[reply]
Excellent job. This article has been a total mess for way too long. Tomixdf (talk) 11:47, 9 May 2011 (UTC)[reply]

Sources

By "Sources" I mean all of those identified in the sections now called "Notes and references", "Further reading", and "External links" AND those which are missing (per Richard Gill above, 10:27).

See my new sandbox for improved citations of the four distinct sources given for the "Exchange paradox", whose article should be deleted eventually (per P64 above, 16:21).

Here I have separately put the Further reading and External links in alphabetical order by surname --while tweaking a few entries. Along the way I deleted the one dead link to a presumably unpublished paper,

Unfortunately the Internet Archive missed it (~dfaden/philosophy/). <=That link will be valuable for anyone who needs introduction to the Internet Archive. --P64 (talk) 17:28, 10 May 2011 (UTC)[reply]

Necktie paradox closes by calling that "a rephrasing of the simplest case of the two envelopes problem". That article gives one citation,
  • Aaron C. Brown. "Neckties, wallets, and money for nothing". Journal of Recreational Mathematics 27.2 (1995): 116–22.
That was evidently identified by its title and abstract, probably not used as a source. --P64 (talk) 18:21, 10 May 2011 (UTC)[reply]
Richard Gill, it may be useful to maintain an explicit wish list for copies of sources as one section of this Talk.--P64 (talk) 18:28, 10 May 2011 (UTC)[reply]

The list of sources on the page was very long some years ago (however, far from a complete list). Too long, some editors thought (fought over? -RDG Pls check the version history of the page /iNic), and they deleted most of the sources. I propose that we this time create and maintain a separate page containing a complete list of all sources. (I propose further that we list the sources at that page in chronological order as it has several benefits: 1. It will be easy for readers new to the subject to follow the development of ideas. 2. The list will be very easy to maintain and keep up to date for editors, as new published sources are simply added at the end. 3. Persons familiar with the subject can with a glance at that page see if something new has been published that they haven't read yet.) Sources on the main page should be restricted to the sources actually used as references in the article. iNic (talk) 23:10, 10 May 2011 (UTC)[reply]

I have a dropbox folder (www.dropbox.com) of pdf's of the important papers which are not freely available on internet. I'll gladly invite active editors to share it. Richard Gill (talk) 08:17, 12 May 2011 (UTC)[reply]
I am not so sure now about the sense of creating a complete list of "all" sources. There are so many unpublished manuscripts. If articles never got published it might be because they're not much good. But if someone else is prepared to create and maintain this list, I'll certainly be glad of its existence. @iNic, you say that sources on the main page should be restricted to sources actually used as references in the article. I think that is a wise suggestion. Are you suggesting deleting the sections "further reading" and "external links"? Richard Gill (talk) 18:48, 6 July 2011 (UTC)[reply]

Non probabilistic version

I have completely rewritten the section on the Smullyan variant of the problem. I did this, by the way, after collecting and studying a large amount of literature and communication with experts from logic ... I selected just a couple of sources for mention in the text. I tried to put the message(s) of this very technical literature across in a non-technical way. Did I succeed? If not, please improve! (I have pdf's of all these many sources, if anyone wants to see them). Richard Gill (talk) 09:22, 23 June 2011 (UTC)[reply]

Please list here all printed sources you have read regarding this variant. iNic (talk) 00:19, 25 June 2011 (UTC)[reply]
That will take a while. In the meantime, anyone is welcome to join my Dropbox.
There are at least three completely different two envelope problems. With and without opening the first envelope, with and without probability. They all have numerous very easy solutions, at least, easy in their own terms. And they have different easy solutions for readers with different backgrounds. Probabilists, economists, logicians all see different issues. A layperson can "defuse" the paradox(es) by common sense, but is not equipped to even see what the different academics are on about. Should the layman care? Probably not. Should the academic professional care that the layman doesn't care? Probably not. If the academics failed in the past to communicate their concerns to the laypersons, we can hardly expect wikipedia editors to be able to do it now. The literature is vast and complex. A big challenge to an encyclopedist who has to write both for laypersons and for *all* the academic communities. Solution: collaborative editing. It requires that *all* participants recognise the "relativity" and hence incompleteness of their own point of view. Richard Gill (talk) 09:50, 27 June 2011 (UTC)[reply]
By the way, when studying the Smullyan version "without probability" it is important to read Smullyan himself. Some of the authors above rephrase his words ever so slightly, in a way which actually makes a difference to a philosopher-logician analyzing every single word with care and attention. Richard Gill (talk) 08:06, 28 June 2011 (UTC)[reply]

More sources

Here is a list of sources for which I have pdf's. Those which contain discussion of the non-probabilistic variant are marked with an (X). Richard Gill (talk) 07:43, 28 June 2011 (UTC)[reply]

C. J. Albers, B. P. Kooi, & W. Schaafsma. (X) Trying to Resolve the Two-Envelope Problem. Synthese (2005) 145: 89–109.

N.M. Blachman, R. Christensen & J. Utts. Comment on Christensen and Utts, Bayesian resolution of the ‘Exchange Paradox’. The American Statistician (1996), 50, 98–99.

David J. Chalmers. The Two-Envelope Paradox: A Complete Analysis? http://consc.net/papers/envelope.html 6 pp, 1992, unpublished.

David J. Chalmers. The St. Petersburg Two-Envelope Paradox. Analysis 62: 155-57, 2002.

James Chase. (X) The non-probabilistic two envelope paradox. Analysis 62.2, April 2002, pp. 157-60.

Gikuang Jeff Chen. The Puzzle of the Two-Envelope Puzzle. (February 26, 2007). 5pp. Available at SSRN: http://ssrn.com/abstract=1132506

R. Christensen & J. Utts. Bayesian resolution of the 'Exchange Paradox'. The American Statistician (1992), 46, 274–276.

Igor Douven. (X) A Three-step Solution to the Two-envelope Paradox. Logique et Analyse Vol. 50 No. 200 (2007). 6pp.

Ruma Falk. The Unrelenting Exchange Paradox. Teaching Statistics. Volume 30, Number 3, Autumn 2008. 86-88.

Don Fallis. Taking the Two Envelope Paradox to the Limit. Southwest Philosophy Review, 25, 2, (2009). 26pp.

Bernard D. Katz and Doris Olin. (X) A Tale of Two Envelopes. Mind, Vol. 116. 464. October 2007. 904-926.

Bernard D. Katz and Doris Olin. (X) Conditionals, Probabilities, and Utilities: More on Two Envelopes. Mind, Vol. 119. 473. January 2010. 172-183.

Bruce Langtry. (X) The Classical and Maximin Versions of the Two-Envelope Paradox. Australasian Journal of Logic (2) 2004, 30–43.

Tom Loredo. The Two-Envelope Paradox. Column "Critical Thinking", Astronomy Department newsletter, Cornell University, 2004. 9pp. http://www.astro.cornell.edu/staff/loredo/bayes/tjl.html

Mark D. McDonnell and Derek Abbott. Randomized switching in the two-envelope problem. Proc. R. Soc. A 2009 465, 3309-3322.

Mark D. McDonnell, Alex J. Grant, Ingmar Land, Badri N. Vellambi, Derek Abbott, and Ken Lever. Gain from the two-envelope problem via information asymmetry: on the suboptimality of randomized switching. Proceedings of the Royal Society, A (2011). Page numbers not yet available.

John D. Norton. When the sum of expectations fails us: the exchange paradox. Pacific Philosophical Quarterly 79 (1998), 34-58.

Federico O’Reilly. Is there a two-envelope paradox? (2010) 9pp. http://www.dpye.iimas.unam.mx/federico/twoenvelope.pdf

Graham Priest and Greg Restall. Envelopes and Indifference. Pages 283-290 in Dialogues, Logics and Other Strange Things, essays in honour of Shahid Rahman, edited by Cédric Dégremont, Laurent Keiff and Helge Rückert, College Publications, 2008.

P. Rawling. A note on the two envelopes problem. Theory and Decision 36: 97-102, 1994.

Eric Schwitzgebel and Josh Dever. The Two Envelope Paradox and Using Variables Within the Expectation Formula. Sorites. ISSN 1135-1349. http://www.sorites.org. Issue # 20 -- March 2008. Pp. 135-140.

Peter A. Sutton. (X) The Epoch of Incredulity: A Response to Katz and Olin’s ‘A Tale of Two Envelopes’. Mind, Vol. 119 . 473 . January 2010. 159-169.

Paul Syverson. (X) Opening Two Envelopes. Acta Analytica (2010) 25:479–498.

Byeong-Uk Yi. (X) The Two-envelope Paradox With No Probability. http://philosophy.utoronto.ca/people/faculty/byeong-uk-yi 32pp. 2009.

Not yet obtained pdf:

Mark D. McDonnell, Alex J. Grant, Ingmar Land, Badri N. Vellambi, Derek Abbott, and Ken Lever. Gain from the two-envelope problem via information asymmetry: on the suboptimality of randomized switching. Proceedings of the Royal Society, A (2011).

You can download this article as pdf from this link. iNic (talk) 00:52, 2 July 2011 (UTC)[reply]
Thanks! Richard Gill (talk) 17:52, 6 July 2011 (UTC)[reply]
Here's a quote from McDonnell "It can be quickly demonstrated that any apparent paradox seen when analysing such problems is the result of incorrect mathematical reasoning that overlooks Bayes' theorem (Linzer 1994, Brams and Kilgour 1995a, Blachman et al. 1996) ... the source of the problem is essentially that what is actually a conditional probability is incorrectly assumed to be an unconditional probability". He is referring here to the original version of the paradox. Richard Gill (talk) 18:51, 6 July 2011 (UTC)[reply]
I know. It's quite common that authors claim that the solution to the problem is easy, even elementary. However, when investigating these elementary solutions they are not the same from one author to the next... iNic (talk) 10:14, 8 July 2011 (UTC)[reply]
True. Hence the fun, and the challenge (to the encyclopedist). Richard Gill (talk) 19:42, 8 July 2011 (UTC)[reply]

Completed reorganization of material

I moved two "orphaned" sections at the bottom of the article to better locations, I think; now they are subsections of sections on the same material. This concerned James Chase's example of a proper distribution for which the expectation paradox holds, and the material on infinite expected utility in the foundations of mathematical economics. I also moved some references from section titles to the body of the text, corrected a mathematical derivation within the Chase example, and added the proof that the equal conditional probabilities implies an improper probability distribution.

Concerning this proof - I'm looking forward to iNic's reaction [What reaction? iNic (talk) 11:14, 8 July 2011 (UTC)] - Blachman, Christensen and Utts (BCU) deal separately with discrete distributions and continuous distributions, and, writing for probabilists and statisticians in a brief "letter to the editor" correcting a big mistake in CU's earlier paper, give very few details. My analysis can be seen as a synthesis - it combines their continuous and discrete "solutions" in one - but I am not using this synthesis to promote any new opinions or to promote Original Research (no Conflict of Interest). My only aim is to explain BCU's results as simply as possible so that the wikipedia article is as accessible as possible to the largest number of readers. The alternative would be a larger list of partial results, citing more articles, but all with the same message, no contradictions. I think it would not serve the reader as well. The message is that Step 6 requires an improper distribution and, if continuous, it has to look like the continuous density 1/x on the whole positive half-line, which integrates to infinity both at zero and at infinity. Or, if discrete, like the uniform distribution on all (negative and positive) powers of 2. In both cases we have a uniform distribution over the logarithm of the amount of money, from arbitrarily small to arbitrarily large. Discrete uniform in one case, continuous uniform in the other. The key equation 2p(n)=p(n)+p(n-1) and how to solve it can be found in their article and in many others too, both in proofs of the continuous case and the discrete case. Also the idea of splitting up the positive half-line at the powers of 2 can be found all over the place. The point is that halving or doubling always takes you from one interval to one of its neighbors. We might as well "round down" the two amounts of money by replacing them with powers of 2. Write them in binary and keep only the leading 1, replacing all succeeding binary digits by zeros.[reply]
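Spelling out that one line of algebra (in my notation; p(n) here denotes the probability that the smaller amount, rounded down to a power of 2, equals 2^n):

```latex
\Pr\bigl(\text{you hold the smaller amount} \,\bigm|\, A = 2^n\bigr)
  = \frac{\tfrac12\,p(n)}{\tfrac12\,p(n) + \tfrac12\,p(n-1)} = \frac12
\;\Longleftrightarrow\; 2\,p(n) = p(n) + p(n-1)
\;\Longleftrightarrow\; p(n) = p(n-1) \quad \text{for all integers } n .
```

So all the p(n) would have to be equal, and no proper probability distribution can put the same positive mass on infinitely many values: the "prior" has to be the improper uniform distribution over all powers of 2.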

The general theorem would be as follows (but this is OR!): change to currency units such that there's positive probability the smaller amount is between 1 and 2 monetary units. Now "copy" the distribution within this interval, which could be anything whatsoever, also to the interval from 2 to 4 (you have to stretch it by a factor 2), and to the interval from 4 to 8, ... And to that from 1/2 to 1 (compressing by factors of 2).... Now glue all these probability distributions together, infinitely many of them, giving each one of them equal weight. This is what you believe about the smaller amount of money, if you truly believe that the other envelope is equally likely to be the smaller or the larger, independently of what's in yours! Clearly ludicrous.

I think the body of the article is much cleaner now and gives a fair picture of the literature. The next thing to do, I think, is to make sure that all useful and/or notable references are actually cited so that separate sections on "other reading" and "external links" can be deleted. It would be useful however to collect a list of all the references which we have made use of, even if not citing or recommending them, on a separate page, as iNic proposed. Maybe on the "Arguments" page?

Also it's time to merge exchange paradox with this page. Richard Gill (talk) 09:44, 7 July 2011 (UTC)[reply]

Multiple issues tag

I have removed this tag as it seems no longer applicable. If anyone has any issues perhaps they could say here what they are. Martin Hogbin (talk) 08:55, 7 July 2011 (UTC)[reply]

Great! Good to see you back here, Martin. Richard Gill (talk) 09:20, 7 July 2011 (UTC)[reply]

This tag was added by editors not because the article originally had a lot of issues, but because they didn't like the content. So I'm very happy it's removed now! iNic (talk) 11:21, 8 July 2011 (UTC)[reply]

Simplest resolution missed.

You seem to have missed the simplest resolution, which is to impose a finite limit on the sum in an envelope. Under this condition, proposition 6 fails. Could not a bit on this be added? Martin Hogbin (talk) 23:22, 8 July 2011 (UTC)[reply]

There's some material at exchange paradox (see its talk page) which covers that "resolution", with literature reference, in the context of the second variant of the problem. This could be moved to two envelopes problem as part of the merger which seems to be favoured by all presently active editors.
Possibly there is "popular" literature which gives that as a resolution. The academic literature (philosophy, mathematics, statistics, economics) doesn't see unboundedness as a problem, as far as I know, and I'm not aware of any popular literature on the paradox at all. On the other hand, Smullyan's version "without probability" is not resolved in this way. So it would be wrong to give the practical reader the idea that there's nothing more to talk about once unboundedness is mentioned.
Alternatively, the argument showing the need for an improper prior distribution of X contains what you want. This could be built up more slowly, starting with some easier versions. For instance, suppose that there is a positive chance that one of the envelopes contains 1 dollar. Then clearly proposition 6 shows that 2 dollars, 4 dollars, ... and 1/2 a dollar, 1/4 of a dollar ... are also all possible (have positive chance). So the amount must be (possibly) larger than any amount you can imagine. Using the definition of conditional probability, you can go on to show that not only are all these amounts possible, they are even all equally likely. Why don't you add something like that to the text? It needs to be made more "plain folk friendly". I can only write long sentences with long words. Professional deformation. Richard Gill (talk) 10:48, 9 July 2011 (UTC)[reply]
I will be happy to try to add something, although I cannot see anything of assistance in the exchange paradox article. I may also try to give a summary of the paradox and its resolutions in the lead. Martin Hogbin (talk) 14:17, 9 July 2011 (UTC)[reply]
Right at the bottom. Where there is mention of what happens if there is an upper limit to the smaller amount of money X. Richard Gill (talk) 16:28, 9 July 2011 (UTC)[reply]
Thanks, I think I have said the same in simpler language so I will add the ref here. Martin Hogbin (talk)

This is not the simplest resolution. The simplest resolution of them all is the idea that A in the expected value calculation designates two different values in the same formula, and this should not be allowed. This resolution is missing too. The "boundedness of the prior" resolution you have placed as the simplest and first one is only relevant as part of a third resolution, after the introduction of priors as a vehicle for a resolution. iNic (talk) 01:22, 10 July 2011 (UTC)[reply]

The impression I get of the 'two different values' resolution is that it is not supported by the mathematical community (Perhaps someone who knows could comment on this impression). Do you have high quality sources to support this resolution?
The mathematical community knows that one of the problems is not distinguishing a random variable from a particular value which it can take. Ruma Falk for instance says that this is basically all the paradox is about. As soon as you agree that TEP belongs in conventional modern-day probability theory, the next thing you can say is that the original paradox makes the mistake of confusing the probability that you have the smaller envelope with the conditional probability of having the smaller envelope given the quantity that's in there. OK, so you fix up these issues, embedding TEP into conventional probability theory, and then you discover that there does not exist any marginal distribution of the smaller amount of money X such that the conditional probability that you have the smaller amount in your hands, given the amount in your envelope A=a, is equal to 1/2, for all possible values of a. To be specific, if you start with any probability distribution whatever of a positive random variable X, and then define A to be equal to X or 2X with probability 1/2 independently of the value of X, it's not possible that whether or not A=X is independent of A itself. See the proof presently in the article: the statistical independence of the event {A = X} from the random variable A leads to the impossible requirement that X is equally likely to lie in any of the intervals [2^n, 2^(n+1)), where n is any integer (positive or negative or zero).
This brings us to the usual conclusion of the original paradox, from the point of view of probability theory. However, the next thing that happened was that clever folk noticed that even though you can't find any X with the event {A = X} independent of the random variable A, it is possible to find X with the consequence of the just-mentioned independence, namely that E(B | A = a) > a for all a.
The next step is to "resolve" this paradox by noting that all such distributions of X have infinite expectation values. It's true that the expected value of what's in the other envelope is larger than what's in the envelope in your hands, whatever that might be, but this doesn't mean that the expected value of what's in your hands increases by the exchange. You already had an infinite expected value, and after the exchange, you have the same. It seems to me that this "explanation" is a bit lame, it leaves a fishy smell behind. I would say, but I didn't find anyone who wrote that yet, that we should ask ourselves what the expectation value actually is supposed to mean. Well, it's the average of many many independent drawings of the same random variable. This means that if we look in our envelope and see the amount a, we should certainly want to exchange it for the average of a *very large number* of copies of envelope B each created independently in the same way (many, many times: create a completely new X, from this make new A and B, then look to see if A=a, if so, keep the pair and exchange a for B; when you have done this billions of times average all the B's). Strangely the economists don't seem to have noticed this, yet they build a whole theory of "rational behaviour" on the notion that people make economic decisions in order to maximize expectation values. Keynes himself said "in the long run we are all dead", which ought to be the death-knell of that theory.
After that we have Smullyan and the philosophers who bring back a new paradox "without probability". The mathematicians are mostly not impressed by this paradox. Albers et al say that if you look at Smullyan's separate sentences he is defining his terms differently in each statement, hence there is no surprise that the conclusions are different. Just try to replace "the amount you'd win if you'd win" by its actual definition, in each case, and there's no paradox. It's true that if you'd win, you'd win both the amount that's in your own envelope and the difference of the amounts that would be in both envelopes in that case, though if you'd lose, the amount you'd lose would be half of what's in your envelope and at the same time the difference of the amounts that would be in both envelopes in that case. So what? The philosophers do find this conflict a problem and build up elaborate theories of counterfactual reasoning in attempts to find a nice theory which does not lead to paradox. After all, historians often would like to say what would have been the outcome of the second world war if Hitler hadn't lost Stalingrad, or whatever. Or literary scholars ask, would Shakespeare have been a famous playwright if he'd been a woman? You can say that physics is all about counterfactuals. We know what would happen to the tides if the moon was suddenly removed. OK, so there is a legitimate issue for philosophers interested in the theory of counterfactual reasoning, which is a whole lot more relevant to science than many people would first realise. But others just point out that neither of Smullyan's two conclusions gives a reason to switch or stay (we need probabilities, and we need utilities, in order to rationally make decisions). Neither of his reasonings uses the essential fact from the original problem, which is a *probability* fact, that initially you take one of the two envelopes at random. Please note that it is precisely this fact that means that we know the answer: there is no point in exchanging your envelope unopened for the other. This point is a big support to the mathematicians who basically argue that philosophers who don't know anything about probability theory ought to keep their mouths shut. It *is* a probability problem. Since it is a probability problem you had better do the probability carefully. No excuse not even to know modern elementary probability theory.
Well, that would be my summary of the main line of the article. There is also the question of what you would do if you do open your envelope and see A=a. If you know the distribution of X and - for instance - want to maximize your expected outcome - then Bayes' theorem tells you what to do. In some cases indeed you'll exchange for B whatever a might be. There is nothing wrong with that conclusion. The answer is that you have indeed maximized your expected value. You could also have done it by not exchanging at all. Then there is the problem of what to do if you don't assume a particular distribution for X. Maybe the "host" is malevolent, filling the envelopes with money in such a way as to confuse you. This is where the randomized solutions come in. The player should pick a random probe and use it to judge if A=a is large or small. Richard Gill (talk) 13:24, 10 July 2011 (UTC)[reply]
This is a good summary of your own point of view. This must not be used as a summary at Wikipedia. iNic (talk) 23:38, 10 July 2011 (UTC)[reply]
Good. If I post this on my university home page, then you (and other editors) can also use it as a wikipedia resource, since the real person Richard D. Gill is a reliable source in mathematics and statistics. You can check this by taking a look at my CV there. I wrote my opinion of TEP as it is seen from my field. Richard Gill (talk) 08:17, 11 July 2011 (UTC)[reply]

This is a philosophical problem, not a mathematical problem. So what is or is not supported in the mathematical community is of no relevance. See Schwitzgebel and Dever: The Two Envelope Paradox and Using Variables Within the Expectation Formula iNic (talk) 11:45, 10 July 2011 (UTC)[reply]

Your link only gives an introduction to the paper. I presume the paper makes the point that it is not valid to use certain variables within an expectation formula. Can you quote the relevant section? Martin Hogbin (talk) 13:19, 10 July 2011 (UTC)[reply]
iNic, you can't say that this is a philosophical problem. According to most mathematicians it's a mathematical problem and it's solved by doing the mathematics properly. In other words, the philosophers got screwed up because they didn't have the language to do the maths right. Later many philosophers agreed. And since this was done (by Christensen and Utts for instance) the philosophers got a whole lot quieter. Some of them retreated to the infinite expectations problem and started worrying about the foundations of economics. And then Smullyan came along and gave them a new paradox back, by taking the probability out of it again. Though again the solution of many philosophers is that he threw away the baby with the bathwater - without the probability ingredient, there is no way to decide whether to exchange or not, anyway. Richard Gill (talk) 13:34, 10 July 2011 (UTC)[reply]
Of course this is philosophy. 100%. Your smartest math students will realize this but stay quiet, not to embarrass you. iNic (talk) 23:38, 10 July 2011 (UTC)[reply]
My smartest students are enthusiastic to correct me when I am wrong, which is often the case, often deliberately. It's a good pedagogical principle.
Seriously, you cannot say TEP is 100% philosophy when half the published papers on it - and I mean the ones with novel and significant contributions, not the ones which just rechew the cud - are by probabilists, statisticians, and economists: Nalebuff, Gardner, Christensen and Utts, Albers et al, Cover, the latest work by those Australian electrical engineers.
Are you a philosopher? If so, you are biased (like me). If you're neither a philosopher nor a mathematician then I wonder how you can be so sure. I have been talking to a number of philosophers recently about TEP; it's not a big topic in philosophy, I can tell you! TEP belongs in recreational mathematics, pedagogics of maths and probability, famous brain-teasers, probability, statistics and decision theory, philosophy. It's notable precisely because it does not belong to one specialist field, and because it is of recurrent popular interest as well as (in the past) of specialist academic interest.
Ruma Falk, world famous educationalist in statistics and probability, who has written a series of papers on TEP, wrote in 2008: "An elusive probability paradox is analysed. The fallacy is traced back to improper use of a symbol that denotes at the same time a random variable and two different values that it may assume." She notes that this observation of hers is not new but cites three writers who in '93, '94 and '95 dealt with TEP in this way. Apparently Schwitzgebel and Dever did not realise that their problem (the unopened envelopes case) was already solved. Later she writes "The purpose of this note is to offer a simple formulation of the resolution that relies on elementary probability concepts and employs terms that are accessible to high school and undergraduate students, not requiring advanced technical expertise". Schwitzgebel and Dever were two philosophy PhD students when they wrote their paper, see Eric Schwitzgebel's page on the topic [2]. It took more than two years to get published, and it finally appeared in a little known online journal. It is outside of both Schwitzgebel's and Dever's own specialist areas in philosophy (see their home pages, [3] and [4]). It does not cite *any* of the statistics and probability literature on the two envelope problem, which already then was huge. It has since been cited three times in the academic literature. Yet the authors are trying to solve the paradox using mathematical ideas of which, it is clear to me, they have a very poor grasp! Now I'd like to know how you (iNic) *know* that TEP is 100% philosophy. Because Schwitzgebel and Dever like many beginner philosophers are unaware of the mathematics literature? (And don't know probability theory.) You know that your personal opinion is not relevant on wikipedia. Tell me the sources for your statement. It is clear that there are plenty of sources which disagree.
I think we are going to have to settle on 50%. Richard Gill (talk) 14:15, 11 July 2011 (UTC)[reply]
Mathematical problems are problems you can formulate within mathematics or a mathematical framework. An example is Russell's paradox, which can be formulated within (naive) set theory. A problem that can only be formulated within some specific (philosophical) interpretation of a theory is by necessity itself a philosophical problem. An example is the measurement problem of QM, which can be formulated within some philosophical interpretations of QM only, most notably the Copenhagen interpretation of QM. It is not possible to formulate the measurement problem within QM proper. In the TEP case it can only be formulated within decision theory (which is a branch of philosophy) or within the bayesian interpretation of probability theory, which is a philosophical interpretation of probability theory. It is not possible to formulate TEP within probability theory proper. If it were, you would indeed be allowed to call it a mathematical problem. iNic (talk) 20:14, 11 July 2011 (UTC)[reply]
Interesting idea. I think that decision theory is part of mathematics, and that Bayesian probability theory is also part of mathematics. But anyway, I hope you agree that what you or I think is not so important, the question is what the reliable sources think. Lots of mathematical sources think that TEP is merely a result of sloppy mathematics. The case of the measurement problem is also interesting. I think the measurement problem is a metaphysical problem for quantum theory as a whole, not restricted to one particular interpretation, and I think it is resolved by the "eventum mechanics" of Slava Belavkin. You can see my own work in this area on my university home page. Richard Gill (talk) 06:21, 12 July 2011 (UTC)[reply]
Using mathematics isn't the same as being mathematics. In everyday life that is quite uncontroversial. You can for example use a car without being a car, right? Same very basic logic applies here. Many disciplines use math as a tool. That is of course fine, but it doesn't magically transform the discipline to become mathematics. Even if everyone driving a car claims that they thereby have become a car, we have to realize that that is not true. iNic (talk) 08:15, 13 July 2011 (UTC)[reply]
I cannot see why the 'boundedness of the prior' is not relevant as it is. No doubt some extra mathematical formalism is required to make it rigorous, but I am attempting to write an introduction for the layman. It seems to be supported by the quoted source. Do you have any logical objections to it? Martin Hogbin (talk) 09:40, 10 July 2011 (UTC)[reply]
I don't know of a reliable source who says that the problem is resolved on noting that the argument implies that an arbitrarily large amount of money could be involved. It may seem an obvious solution to a layperson. But neither the philosophers nor the mathematicians think that this is a valid solution. After all, the puzzle isn't meant to be a practical problem. It's meant to form a challenge to logic and/or mathematics. Well, for me there is no problem in mentioning such a layperson's solution - then the reader who is not interested in nit-picking, whether in mathematics, philosophy or logic, can skip the rest of the article. That would probably be wise. Richard Gill (talk) 15:00, 11 July 2011 (UTC)[reply]

For those solutions (or formulations of the problem) where priors are not used (or can't be used), the boundedness of a prior is of no relevance. This is easy logic. iNic (talk) 11:45, 10 July 2011 (UTC)[reply]

I do not understand what you are saying here. Martin Hogbin (talk) 13:16, 10 July 2011 (UTC)[reply]

I'm saying that when priors are not used it doesn't matter what properties they have. Another example of the same logic: If I can explain a chemical reaction without the aid of phlogiston, then, to me, it doesn't really matter what properties phlogiston has. Do you get it now? iNic (talk) 23:51, 10 July 2011 (UTC)[reply]

For many writers (including some philosophers), the problem is solved by introducing priors. They would say that there are no rational grounds for exchange without using your prior beliefs of the amounts of money in the envelopes. When you explicitly insert a prior distribution, the original paradox vanishes, it turns out that Step 6 is incompatible with any prior belief. OK, so Smullyan resurrected the paradox "without probability". But a lot of philosophers said that his paradox is not a paradox at all, since neither of his two statements is in itself a reason to switch or stay. One needs to insert probabilities and perhaps also utilities in order to make decisions. Other philosophers say that both (or all) of Smullyan's conclusions are wrong since on careful inspection he is taking illogical steps (remember that of the two things he compares, only one will be actually realized; when it is realized, the alternative is simply false.) Byeong-uk Yi argues very carefully that both Smullyan's conclusions are false. Other authors argue that just one of the two is false, they make use of the extra information that we are given one of the two envelopes at random in order to give one of his "counterfactual comparisons" a higher level of reasonableness than the other. (That was written carelessly: one has to distinguish between the correctness / incorrectness of the arguments, and the truth / falsity of the conclusions). Either way, this is all about word games and not very interesting for most ordinary folk. I would say that the mathematicians are right to say that of course you get into a mess if you don't define things carefully and don't use all the information you are given. Richard Gill (talk) 13:34, 10 July 2011 (UTC)[reply]

It seems to me that what iNic calls the simplest resolution is contentious to some degree at least, so it should not be shown first but remain where it is, later on in the article. Would there be any point in trying to write a simplified summary of the Smullyan-style argument for non-experts? It seems to me that this is similar to the arguments that several editors have added to this article from time to time but have now been removed. There would need to be some embedded 'health warnings'. Martin Hogbin (talk) 08:55, 11 July 2011 (UTC)[reply]

For logical and pedagogical reasons I think it must be presented first. iNic (talk) 20:23, 11 July 2011 (UTC)[reply]

'Impossible distribution' resolution superfluous?

I was going to try to write a simple summary of the resolutions of the paradox for distributions without an upper bound but it seems to me that the first resolution is superfluous (although interesting from a historical and logical perspective). The second resolution, that the expectation value is infinite for both envelopes, covers both versions of the problem given, therefore I suggest that in a summary of how this paradox is resolved for unbounded distributions only the second resolution should be given. (By the way Richard, this resolution was what I meant when I said that the paradox was similar to showing 2=3 by dividing by zero). Martin Hogbin (talk) 23:03, 9 July 2011 (UTC)[reply]

As long as we are careful *not* to imply that everyone is satisfied with each particular resolution. We start with what appears to be a logical chain of deductions but it leads to something plainly ridiculous. So something is wrong. But that doesn't mean that there's only one thing wrong. People who live in different worlds will fix the logical breakdown in different ways. For some people the paradox can already be dismissed by noticing that it implies arbitrarily large amounts of money are possible. For others the paradox is caused by sloppy thinking. For some people, arbitrarily large possible values is no big deal, but infinite expectation values is a problem. For other people the infinite expectation values are not a problem but they're the solution. For some people Smullyan's version "without probability" shows that the paradox is nothing to do with probability. For others, the problem with Smullyan's version is precisely that he takes the probability out.
Martin, here you have the excellent executive summary you are looking for! The paragraph above ^^^^ by Gill. iNic (talk) 22:23, 10 July 2011 (UTC)[reply]
I think that is an excellent summary of certain aspects of this topic but what I have written is intended to explain to the layman the various ways in which the paradox has been resolved, without being too technically incorrect. If there are serious errors in what I have written please point them out to me. Martin Hogbin (talk) 15:41, 11 July 2011 (UTC)[reply]
I think there's a consensus in mathematics that the problems are solved by doing the mathematics properly. TEP has become an amusing pedagogical example. I don't know about philosophy. The point of philosophy is not to solve problems but to create them (this has been told to me by many eminent philosophers, please don't laugh). Richard Gill (talk) 14:07, 10 July 2011 (UTC)[reply]
Another interpretation is that philosophers deal with problems that others don't even dare to see. iNic (talk) 22:23, 10 July 2011 (UTC)[reply]
That depends on the problem. If we are talking about the work of Heidegger or Nietzsche or Kant, yes. But if we are talking about the work of Schwitzgebel or Dever, I am not so sure. Those two didn't see the solution which already existed in the (elementary!) mathematics literature because they didn't know that literature and clearly would have had to follow a course in mathematics first, before trying to read it. Richard Gill (talk) 14:33, 11 July 2011 (UTC)[reply]
I was just thinking about how I could summarise the solutions in simple language, but I will give it a go and will welcome comments on the result. Martin Hogbin (talk) 15:24, 10 July 2011 (UTC)[reply]
I have now written some summaries in plain English of two resolutions given. Comments welcome. Martin Hogbin (talk) 15:44, 10 July 2011 (UTC)[reply]

The lead

I have restored my additions to the lead. The lead is intended to be a summary of the article. The lead is completely pointless if it does not state (briefly) what the problem is. It should also include a brief summary of the problem versions and resolutions, which I intend to add. Martin Hogbin (talk) 09:45, 10 July 2011 (UTC)[reply]

I actually agree that it would be cool if the lead could contain an executive summary of the entire article itself. However, my experience with this subject at Wikipedia is that we will only open up a new endless discussion on what to put in this mini-version of the article. Discussions of this kind are the historical reason why the lead for a long time hasn't contained a summary. So let's focus on the article itself first and agree what should be placed there before we even attempt to write an executive summary. iNic (talk) 11:22, 10 July 2011 (UTC)[reply]
I agree with you that the lead is often used by people as a place for their own OR or pet theory, but I think we must have some kind of lead. As a minimum, it should say what the TEP is and give some idea of how it has been approached. We should work to keep the lead in accordance with WP:lead. Martin Hogbin (talk) 12:20, 10 July 2011 (UTC)[reply]
It seems to me that since we have a fairly stable basis for the article (or don't we???) we can write a lead which summarizes what we have so far. That's a good way of increasing mutual understanding of what we have so far. And if we need to introduce more stuff we can adapt the lead later. Richard Gill (talk) 13:57, 10 July 2011 (UTC)[reply]
We are getting there, but still we have some work to do to make the article good enough. So it's too early to attempt to write a summary. iNic (talk) 19:33, 11 July 2011 (UTC)[reply]
But we do have a start of a summary, composed by Martin I believe. I think it is a good start. I do disagree with the phrase "purely theoretical cases". We would not have atom bombs or CD players without the infinitesimal calculus and Hilbert space. Mathematicians do not work with infinities for purely theoretical self-entertainment. The most substantial parts of applied mathematics are about such cases. Richard Gill (talk) 06:26, 12 July 2011 (UTC)[reply]
I have changed it to 'theoretical cases'. Would you settle for that? A case in which infinite sums are possible is hardly a realistic proposition and in common language would be described as 'theoretical'. Martin Hogbin (talk) 08:25, 12 July 2011 (UTC)[reply]
I can live with that for a while. But note: iNic calls TEP a philosophical problem, I call it a challenge to mathematical thinking. The originators of the problem did not see it as a practical problem. They presented what appears to be a convincing chain of logical deductions which leads to a contradiction and ask their readers to analyse the presented logic, and show where it goes wrong. Adding in an outside constraint ("of course the amount of money must be bounded") is not an admissible solution from this point of view. For the same money you could solve MHP by quoting Monty Hall who said that in real life the game was never played in the way the problem assumes! Richard Gill (talk) 10:59, 12 July 2011 (UTC)[reply]
Remember that I am writing for the general reader. I make no claim that restricting the money is the only or the best resolution but it is one that will satisfy many people. Martin Hogbin (talk) 21:45, 12 July 2011 (UTC)[reply]
Syverson shows that you can still approximately get the paradox with bounded amounts of money. Let A > 0 and B > 0 be the amounts in the two envelopes and X > 0 be the smaller amount, Y=2X the larger, i.e. (A,B)=(X,Y) or (Y,X). The usual resolution to the original paradox is to show that it is impossible to have simultaneously (1) (A,B)=(X,Y) or (Y,X) with equal probability independently of X, and (2) B=2A or A/2 with equal probability independently of A. Recall that the original reasoning fails because (1), which is given, does not imply (2), which is used at step 6. (The mistake in going to step 6 was the confusion between a random variable and a particular value thereof: using the same name to stand for two different things). However it is easy to arrange that (1) and (2) both hold, to a very good approximation!

For instance, let X be uniformly distributed on the N+1 amounts of money 1,2,4, ..., 2^N where N is very large. Define Y=2X and (A,B)=(X,Y) or (Y,X) with equal probability independently of X. Then (2) does not hold exactly, but it is true for every single value of A except for 1 and 2^{N+1}. So if N is enormous, then it is true that B=2A or A/2 with overwhelmingly large probability, and E(B|A) is equal to 5A/4 with overwhelmingly large probability.
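A rough numerical check of this, for anyone who wants to see it; the value of N and the number of simulated pairs below are my own arbitrary choices, not numbers from Syverson's paper:

import random

# Hedged simulation of the finite example above; N and the number of pairs
# are arbitrary illustrative choices.
N = 20
values = [2 ** k for k in range(N + 1)]           # possible smaller amounts X

def one_pair():
    x = random.choice(values)                     # X
    return (x, 2 * x) if random.random() < 0.5 else (2 * x, x)   # (A, B)

pairs = [one_pair() for _ in range(1_000_000)]
for a in (4, 1024, 2 ** N):                       # a few non-extreme values of A
    others = [b for (first, b) in pairs if first == a]
    p_double = sum(b == 2 * a for b in others) / len(others)
    mean_ratio = sum(others) / len(others) / a
    print(a, round(p_double, 3), round(mean_ratio, 3))
# For every a other than 1 and 2^(N+1), p_double comes out near 1/2 and
# E(B | A = a) / a comes out near 5/4, just as described above.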

Alternatively (and that is what Syverson prefers), create X, Y, A and B exactly in this way, but before envelope A is given to the player, the organiser takes a peek in it; if it contains 1 or 2^{N+1} he throws both envelopes away and replaces them with a new pair (repeating as often as necessary). Now (2) does hold exactly but (1) is not quite true.

Syverson's paper is excellent! (He wipes the floor with S&D. His mathematics and his logic are impeccable. And he is a philosopher by training).

What is the resolution now? Well, the problem is not infinite ranges or infinite expectations. But the distribution of X does have an enormous expectation value compared to its typical values: it is a highly skewed (long-tailed) distribution. Most of the time the *actual* value of A will be much smaller than the *expected* value of B - which is of course equal to the *expected* value of A! So expectation values would suggest you should switch. But when you switch, you just get one realisation of B, you do not get its expectation value. It takes an average over a huge number of independent samples before that average begins to look anything like the expectation value. The answer is: if a distribution is highly skewed to the right, you will be disappointed, because in the long run we are all dead. You never get to see the expectation value. This is of course the sound reason why economists insist that utility is bounded. You should not base decisions on expectation values when distributions are strongly skewed.
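To dramatise this, here is a rough sketch (my own arbitrary choices of N and of the number of plays) comparing the long-run winnings of a player who always switches with those of a player who never does, under the uniform-on-powers-of-two prior above:

import random

# "Always switch" versus "never switch" under the heavy-tailed prior above.
# N and the number of plays are arbitrary illustrative choices.
N = 30
values = [2 ** k for k in range(N + 1)]
plays = 500_000

keep_total = switch_total = 0
for _ in range(plays):
    x = random.choice(values)
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    keep_total += a                # never switch: you keep A
    switch_total += b              # always switch: you get B
print(keep_total / plays, switch_total / plays)
# The two long-run averages agree (up to simulation noise; both estimate
# 1.5 * E(X)), even though for almost every amount a you actually hold,
# E(B | A = a) is about 1.25 a.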

I'm afraid that TEP just isn't interesting for the lay-person. Proof: the non-existence of reliable sources in popular literature! Richard Gill (talk) 08:38, 14 July 2011 (UTC)[reply]

PS Heavy-tailed and highly skewed distributions are very important in the real world. Climate change, nuclear power safety, economic crises ... these are all fields where these phenomena are of real relevance to ordinary folk. Richard Gill (talk) 08:49, 14 July 2011 (UTC)[reply]
Consider the example I just gave with X uniformly distributed on 1,2,...,2^N. A back of the envelope calculation shows that the mean value of X behaves like 2^{N+1}/N for large N. Out of the N+1 equally likely values of X, only a vanishing fraction log_2(N)/N is larger than the expectation value. In fact, only a vanishing fraction is larger than the mean value divided by 100 (or any other constant for that matter). So for large N, this simple finite example exhibits all the features of the "theoretical" situation with an infinite mean. Richard Gill (talk) 15:19, 14 July 2011 (UTC)[reply]
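A quick way to check that calculation exactly, should anyone want to; N = 1000 below is an arbitrary choice:

# Exact check of the back-of-the-envelope calculation above (N is arbitrary).
N = 1000
values = [2 ** k for k in range(N + 1)]            # X is uniform on these N+1 values
mean = sum(values) / (N + 1)                       # = (2^(N+1) - 1) / (N + 1)

print(mean / (2 ** (N + 1) / N))                   # close to 1: the mean behaves like 2^(N+1)/N
print(sum(v > mean for v in values) / (N + 1))     # fraction of values above the mean, about log_2(N)/N
print(sum(v > mean / 100 for v in values) / (N + 1))   # still only a small fraction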

I more and more doubt that we will ever be able to write a summary in the lead that is both shorter than the article itself and yet accurate enough to not fool the poor executives reading only the lead. iNic (talk) 00:34, 16 July 2011 (UTC)[reply]

I think the right way is by an alternating, iterative process. Work on the body, then on the intro, ... The process never ends. New editors turn up. The literature grows. The world turns round. Richard Gill (talk) 07:10, 16 July 2011 (UTC)[reply]

What I meant was that in order to be sufficiently accurate the lead will become very long, almost indistinguishable from the article itself. But let's see... iNic (talk) 14:59, 18 July 2011 (UTC)[reply]

Yes. cf the story by Borges about the map of the world, or the novel The Life and Opinions of Tristram Shandy. Richard Gill (talk) 16:06, 18 July 2011 (UTC)[reply]

Is TEP philosophy or mathematics?

iNic, will you please provide me with sources which explain why TEP is a philosophical problem. It seems to me TEP is a problem about logic and logic lies at the basis of both philosophy and mathematics.

If TEP were a problem in logic we would be able to derive the paradox within some specified logical system. But we can't. The same goes for mathematics and probability theory. The collection of all sources in the world supports this view, because the derivation that would prove it wrong is nowhere to be found, in any source. iNic (talk) 00:12, 14 July 2011 (UTC)[reply]
I would say the following: because TEP is a problem in logic we can see within a decent formal system of logic that the argument is false, i.e., cannot be converted into a formal argument. This is exactly what Byeong-Uk Yi does with regard to Smullyan's version. Yi uses a sensible and uncontroversial formal logic of counterfactual reasoning and he shows that both of Smullyan's conclusions are *untrue* statements, in that system (I am not sure if he shows they are untrue because they are unprovable or because they are meaningless or because their negations are true - these things are subtly different). He shows where any attempt to translate Smullyan's verbal reasoning into a formal reasoning (within Yi's formal system) fails. But I would say that this is a very formal way to say that the argument is using the same words to denote different things. "The amount you would win if you would win" gets two different meanings, and similarly for the amount you would lose if you would lose; hence you get two apparently contradictory statements.
The original (standard) TEP is understood by translating the argument into modern probability theory, which is a formal system built on standard formal logic, and showing that one of the steps fails and indeed must fail, since step 6 actually contradicts the other assumptions.
Then the problem where we don't use step 6 but still get the apparent contradiction of E(B|A=a)>a for all a, with appropriate distributions of X, is resolved by noting that such distributions must have E(X)=infinity, and hence the expectation value is not very relevant to what you will see when you only live a finite length of time. This is one of the reasons why formal decision theory usually insists on bounded utility.
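For concreteness: a standard example of such a distribution (I believe it is due to Broome) takes P(X = 2^n) = 2^n/3^(n+1) for n = 0, 1, 2, ...; these probabilities sum to 1, E(X) is infinite, and a one-line conditional-probability calculation gives E(B|A=a) = 11a/10 for every a > 1 (and 2a for a = 1). A minimal sketch checking this with exact arithmetic (the particular values of n tested are arbitrary):

from fractions import Fraction

# Broome-style prior: P(X = 2^n) = 2^n / 3^(n+1), n = 0, 1, 2, ...
# The probabilities sum to 1, but E(X) = sum over n of 2^n * P(X = 2^n) diverges.
def p_x(n):
    return Fraction(2 ** n, 3 ** (n + 1))

# A = 2^n (n >= 1) arises either because X = 2^n and A is the smaller envelope,
# or because X = 2^(n-1) and A is the larger one; each branch has weight p_x(.)/2.
for n in (1, 5, 20):
    a = 2 ** n
    w_smaller = p_x(n) / 2
    w_larger = p_x(n - 1) / 2
    e_b = (w_smaller * (2 * a) + w_larger * (a // 2)) / (w_smaller + w_larger)
    print(n, e_b / a)              # prints 11/10 every time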
Then we can continue (Syverson) and show that we still can get *approximately* the paradox with everything bounded. However inspection of his argument shows that his probability distributions are so long tailed that expectations are again totally unrepresentative of average values of finitely many repetitions. So actually we did not get further.
Or we can go back a la Smullyan and remove the probability from the paradox. Now we are back with using the same words for two different things and getting a contradiction. And anyway, the probability ingredient (that your envelope A is equally likely the smaller X or the larger Y=2X independently of X) is an essential part of the paradox. Without that ingredient, there is no reason to switch or not to switch (no rational decision making without probabilities or utilities). It is that ingredient precisely which tells us that the player may switch envelopes as many times as he likes, it makes no difference. Without that ingredient there is no way to evaluate strategies. Richard Gill (talk) 09:02, 14 July 2011 (UTC)[reply]
OK, so you call TEP a problem in logic just because someone has proven that TEP cannot be formulated within a specified logical system? Let's say that someone has proved that a specific board game is not a valid game of Chess; would that proof turn the board game into a Chess game? If you answer yes to the first question you have to answer yes to the second too. iNic (talk) 00:56, 16 July 2011 (UTC)[reply]
(1) I do not call TEP a problem in logic just because ...

(2) It is not the case that answering yes to your first question would imply I should answer yes to your second question.

Regarding (1), I do not call TEP a problem in logic just because someone has shown that the paradox is resolved by looking at TEP as translated into a specified logical system. Philosophers call TEP a problem in logic. Philosophers agree that TEP (without probability) belongs to counterfactual reasoning. There exist well understood and widely accepted formal systems of counterfactual logic. Yi translates TEP into these systems and shows where the deduction fails. What does this prove? It proves that there exist philosophers (after all, Yi is a faculty member at a philosophy department) who consider that TEP can be usefully studied by using the methodology of logic. People who study games could show that a certain game is not a game of chess. This does not prove that the game in question is therefore a game of chess. It does prove that the "game" is studied by people who study games, which certainly supports the notion that the game in question is indeed a game. It also supports the notion that it is a different game from chess.

Schwitzgebel and Dever are philosophers. They resolve TEP (they claim) by proving a little theorem in probability theory, whose assumptions are not satisfied by TEP, but which is such that if those assumptions had been satisfied, then the conclusion of TEP (switch, whatever) would follow. (1) What does this prove? (2) Does this paper make TEP a problem primarily *in* philosophy, logic or mathematics? My answers are: (1) it proves that the Schwitzgebel and Dever paper is not very interesting, and in particular that their claim - that they have at last succeeded in showing where TEP goes wrong where all before them had failed - is false; (2) since this particular paper is not one of the most significant papers in TEP studies, it doesn't help answer question (2). But a glance at the literature shows that logicians, philosophers and mathematicians all write interesting papers on TEP and all have different and interesting insights which can be brought to bear on it. Richard Gill (talk) 17:20, 17 July 2011 (UTC)[reply]
Well, the only reason why this classification thing matters at all here is that if the Wikipedia article (erroneously) claims that the paradox is in logic or in mathematics or in probability theory, a lot of first-time readers of the article will get worried. They will rightfully think that this page claims that this paradox shows that logic/math/probability theory is inconsistent, i.e., contains internal contradictions. This has already happened in the past when for a while this page claimed that TEP is a problem within probability theory. Some readers got worried and wondered why their teachers hadn't told them about this major problem with probability theory. I calmed them down and explained that this is a problem within some branches of philosophy only and that the wording in the lead was unfortunate. Since then I have struggled to keep at least the lead in good shape. iNic (talk) 14:03, 18 July 2011 (UTC)[reply]
If your world view is that something is mathematics when and only when mathematicians by profession write about it, and likewise for philosophy, logic and probability theory, that is fine with me. I won't try to make you change your world view. But I would appreciate if you realize that this view is not the more common view. In general people don't think that only a watchmaker can fix a watch, or that everything a watchmaker touches becomes a watch. Neither is it the common view that for example the special theory of relativity is a patent or that Gauss only made astronomical discoveries after 1807. What Einstein or Gauss did for a living is one thing, what they did for science is another. This is true in many cases still today and even more so in the past. I do understand now, however, why you stress your own profession so much here at Wikipedia and why you think that everything you say or write is by definition mathematics. iNic (talk) 14:03, 18 July 2011 (UTC)[reply]
iNic, neither my world-view nor yours is of any relevance. For wikipedia what counts is what reliable sources say. Philosophers, logicians, and mathematicians all write about TEP and all these different kinds of people publish articles in the journals of their own field claiming they have resolved the paradox, or added an important new twist, or shed new light, or whatever.
Schwitzgebel and Dever write in a philosophy journal and claim to be the first to have explained what is wrong with the original TEP argument. They also write a "simple solution" on their own website. What they "do" in their paper and website is probability theory. They attempt to say to philosophers some things which probabilists already know and indeed already have written. Their audience is hampered, and they are hampered, by not being familiar with the standard language of probability theory. Actually, their simple solution is quite good, in my opinion. That is to say they succeed in expressing in plain words what a probabilist more easily expresses in a couple of formulas and mention of a couple of standard concepts. But it comes to the same thing.
My profession happens to be mathematics and this profession colours how I write and how I think. But so what. On wikipedia we want to write stuff which people can understand who are not from logic, or from philosophy, or from mathematics. That's a big challenge. Richard Gill (talk) 16:04, 18 July 2011 (UTC)[reply]
Exactly, what counts is what the reliable sources say, not who wrote the reliable sources, or to whom the authors of the reliable sources wrote, or what profession the authors of the reliable sources have, or what salary the authors of the reliable sources have, or what color of the skin the authors of the reliable sources have, or what age the authors of the reliable sources have, or what religion the authors of the reliable sources have, and so on. In fact, to judge what people have to say based on prejudices of who they are is unscientific, unwikipedian and in my opinion uncivilized. iNic (talk) 00:51, 20 July 2011 (UTC)[reply]
Extraordinary! I didn't talk of age, religion or skin colour of authors or of their readers; I talked only of the academic field of academic writers and the academic field of the journals they write in, for readers from the same field. I would call that "essential context". I would say that you can't write a good encyclopedia article based on reliable sources concerning your subject, without understanding what your sources write about your subject. And that to sufficiently understand these writings, you need to understand the context in which your sources write. But you think you can write good Wikipedia articles without any understanding at all of your topic! You just cut and paste, at will. That's an extremely post-modern opinion, I think. No wonder so many editors coming to the TEP page have complained about your behaviour. At the same time you dogmatically claim TEP is a philosophical problem and S&D are the first authors ever to address the essential problem. What source supports this POV, apart from S&D themselves? Or is that as irrelevant as the question whether you're a philosopher or a mathematician? I don't ask who you are, I don't ask for your race or religion. I am intrigued to know your own expertise relative to the topic under discussion, since you don't give reliable sources to back up your opinions. Richard Gill (talk) 19:56, 26 July 2011 (UTC)[reply]

Please will you also tell me if you are a philosopher or a logician yourself. I'm interested to understand why you think that Schwitzgebel and Dever is such a brilliant philosophy paper, written by two philosophy PhD students about a problem outside of their specialist fields, struggling with some mathematics which is quite beyond their capacities.

I never said that the S&D is "a brilliant philosophy paper." I merely said that it is a philosophy paper written by authors that properly understand the question TEP poses. If you are the only one climbing the correct mountain, it doesn't really matter if you climb it in the technically correct way. At least until other mountaineers start to climb the correct mountain as well. (Who I am is of no importance.) iNic (talk) 00:12, 14 July 2011 (UTC)[reply]
I think the two authors do not properly understand the question. And their claim that they are the first to answer it correctly is clearly false: their answer is wrong. They show that similar reasoning gives "the right answer" in some other problems. They say that earlier authors had shown that the reasoning used violates standard rules of probability theory (which is true, of course). They then prove a new (true) theorem (within conventional probability theory) which allows them to validate the similar reasoning in the other problems, because one could pretend that the reasoning was making an appeal to their theorem, and moreover the assumption in that theorem is correct in those other examples. They show that the assumption in their theorem is not true in TEP. But their theorem is not an "if and only if" theorem. Their whole paper merely shows that if you apply probability theory correctly to the other examples, you get the answer which is correct for those examples; it illustrates what we already know, that you can't translate standard original TEP into probability theory. We already know that no theorem from probability theory can "save" the "wrong reasoning" in TEP, because we know that no example can exist in classical probability theory which has exactly what is needed to produce E(B|A)=5A/4, or (to write out the meaning of that statement explicitly) E(B|A=a) = (5/4) a for all values of a.

So in my opinion S&D's logic is wrong and their paper does not add anything to what we didn't already know. Note that they try to resolve TEP by turning to probability theory!

They are also not the first mountaineers to climb this mountain. As far as I can see, the mountain that they claim to see had already been climbed many times. They climb another mountain by mistake, do not reach the top, but don't see where they have gone wrong, since they are still in the clouds. Richard Gill (talk) 08:17, 14 July 2011 (UTC)[reply]

Please will you also tell me if you are aware of any secondary or tertiary sources about TEP (e.g. university undergraduate texts) in philosophy. There do exist such sources in mathematics, e.g. Cox's book on inference (though some would call this a graduate text, not undergraduate). Research articles are primary sources according to the wikipedia rules which you so strongly adhere to. We are not allowed to write articles whose reliable sources are original research articles. This rules out almost all of the literature on TEP. The rest consists of blogs by amateurs. That is also not a reliable source for an academic subject.

I agree that good secondary sources in this area are really scarce, not to say non-existent. But this is merely a result of the fact that the philosophical discussion is still alive and kicking. When TEP is mentioned in university text books the author often merely promotes his own idea of the matter anyway (for example Dennis V. Lindley, Understanding Uncertainty, 2006), with no mention at all that other scholars hold other opinions. Textbooks like that will fool the reader into believing that this problem is solved and no controversy exists anymore. The best overviews I have read are in the introductory parts of some of the more recently published papers. iNic (talk) 00:12, 14 July 2011 (UTC)[reply]

The main message for the layman should be that original TEP and Smullyan's TEP are examples of faulty logical reasoning caused by using the same symbol (or verbal description) to stand for different things at the same time. This is the executive summary of almost all the articles I have studied so far, by the way.

I agree. This should be the first suggested solution in the article. Many authors support this view. iNic (talk) 00:12, 14 July 2011 (UTC)[reply]

TEP with opened envelope belongs to probability and decision theory and is useful in the classroom for showing the strange things that happen with infinite expectation values. You do *not* expect an infinite expectation because you never live long enough. You are always disappointed. In the long run you are dead (Keynes). That's it. That's the executive summary of the decision theoretic / statistical literature on this topic. I will write a survey paper containing no original research and not promoting any personal point of view, and then at last we will have a good secondary source for the wikipedia article.

Yes, all these variants should be mentioned in the second solution in the article. iNic (talk) 00:12, 14 July 2011 (UTC)[reply]

I think that Falk's paper does constitute a reliable secondary source. She analyses TEP from the point of view of teaching probability. She is writing for laypersons. For undergraduates. Her executive summary is the same as what I just mentioned: examples of faulty logical reasoning caused by using the same symbol (or verbal description) to stand for different things at the same time. Don't worry, almost all of the philosophers say the same thing, but of course want to say it in a subtly different way from earlier authors, since their job is to publish papers in which they nit-pick at previously published papers.

This is again the first solution. Falk should be one of the sources there, for sure. iNic (talk) 00:12, 14 July 2011 (UTC)[reply]

I'm participating in a philosophy conference tomorrow at which several authors of TEP papers are present. That will be interesting. Richard Gill (talk) 16:51, 13 July 2011 (UTC)[reply]

TEP, Anna Karenina, and Aliens

I have written an essay on TEP, the Anna Karenina principle, and the Aliens movie franchise at my talkpage [5]. It is also on the Arguments page for Two Envelopes Problem. Comments welcome. Richard Gill (talk) 16:18, 20 July 2011 (UTC)[reply]

Dubious importance

This is most certainly a paradox. Initially confusing, due to a flaw in the construction of the argument, but ultimately obvious. There is no point in switching. It's a fun game to think about. However the article states that it's an unsolved problem in mathematical economics. To confirm that, there's a citation from a philosophy journal (seriously - wtf?). I don't understand why so much space is being given to a minor mathematical puzzle. The Balance puzzle is far more interesting and difficult, but no one is going into the kind of interminable detail this article does. I appreciate that there are one or two confused post-docs writing papers about this, but isn't that true of anything (especially in philosophy)? I would like to see a short article about the simple mathematical puzzle, with a simple explanation of the logical error. Delete the references to the boring and irrelevant papers, delete the feather-light reference to Smullyan's book, include one reference to a pop-math book, and move on. Alternatively, if we are really determined to explore every angle, why don't we bring in the postmodernists - we could even include a reference to "Toward a transformative hermeneutics of quantum gravity" (which gets 20 times more citations than Chalmers's, very nice, paper) --Dilaudid (talk) 15:03, 21 July 2011 (UTC)[reply]

What is 'the problem' and 'the logical error' in your opinion? Martin Hogbin (talk) 15:57, 21 July 2011 (UTC)[reply]
INic doesn't want us to discuss the "problem" here - I'm not inclined to argue, so let's just discuss the problem with this article. I'd prefer if my initial comments are left alone INic, thanks :) Dilaudid (talk) 05:08, 22 July 2011 (UTC)[reply]

Dilaudid, don't you think the article would mention "a simple explanation of the logical error" if it existed? And don't you think the article would include a single reference to a pop-math book that explained everything so we could delete all boring research papers if there existed a pop-math book that did just that? Just let me know the name and author of the pop-math book you have in mind and I will be delighted to delete this crappy article and replace it with an account of the clear cut explanation given in that pop-math book. iNic (talk) 14:11, 22 July 2011 (UTC)[reply]

In answer to your questions - no, and no. Use Falk's article in Teaching Statistics (if A is a random variable). If you want to treat A as a number use Amos Storkey's article. If you don't know the difference, read Falk's article. It's quite an appropriate level for the paradox, it's a resource for teaching schoolchildren. :) Dilaudid (talk) 19:19, 22 July 2011 (UTC)[reply]
Amos Storkey's article? I guess you mean his website [6]. But A is a random variable here, too. Or are you referring to subjective versus frequentist descriptions? (Whether we think of the situation as arising by first choosing the smaller amount through some physical random mechanism, or whether our prior knowledge about it is summarized in a probability distribution). Richard Gill (talk) 23:30, 24 July 2011 (UTC)[reply]
Most of Storkey's article is fine. I'm not sure how reliable a source he is, since he claims that any prior distribution for which it is always beneficial to switch has to be improper; but we know this is not the case - there are plenty of proper examples, though they all have infinite expectation values, so the apparent benefit is illusory. Storkey is an assistant professor in computer science, working in machine learning and image analysis, using a lot of statistical methods. Richard Gill (talk) 14:40, 25 July 2011 (UTC)[reply]
To only mention the two most basic solutions would be too simplistic I think. We need to find a middle road where the reader gets an idea of why scholars still are publishing papers about this problem. iNic (talk) 21:11, 22 July 2011 (UTC)[reply]
In logic when an argument is proved fallacious, it is fallacious. You don't need to keep proving it fallacious again and again. Once the switching argument is proved fallacious, the "problem" becomes a "paradox". On the second point, you are also wrong. Wikipedia does not have to explain why people publish papers. If "scholars" want to publish papers on this topic, a simple note to that effect in the article is more than enough. For example, "scholars" are still publishing papers on Carezani's theory of Autodynamics, but this theory was rejected by scientists, and a reference is not even included in the article on special relativity. Dilaudid (talk) 09:18, 23 July 2011 (UTC)[reply]
Why scholars are still publishing papers on the problem: (a) in logic there is no consensus yet on the right way to formalize counterfactual reasoning, and as long as people want to push their pet theories, they will use TEP-without-probability as an example to show that their theory works well, by giving a nice analysis of this example. In other words, it's a test case. People want to make new points about counterfactual reasoning, and use TEP to illustrate their case. (b) in the philosophy of economics there is some discussion about whether utility should be bounded or not, and about whether human beings' economic choices really are determined by expected utility according to (possibly not consciously known) prior distributions and utility functions. So TEP can be a test case for people working in that area. (c) McDonnell, Abbott et al are engineers / physicists who have got involved in the trendy field of econophysics and are selling the case that their insights from physics can help in understanding finance and economics. TEP is a cute example for them to gain the limelight, though as far as I can see, they did not contribute anything new in their recent papers in the Journal of the Royal Society of London. The idea of randomized solutions to the open envelope variant came from Tom Cover but is also mentioned by two of the discussants to Christensen and Utts in The American Statistician, and seems to have a longer prehistory too, going back to some famous quantum physicists. (d) From time to time a couple of postdocs imagine that they have come up with a new, definitive solution and manage to publish in some obscure location or self-publish on the web, but mostly they are just saying things that have been said before in different words. The young philosopher doesn't know the probability literature, couldn't read it anyway if he knew it, and vice versa.

All in all, TEP is a convenient vehicle for gaining some attention when you want to sell some ideas - new or old - which can be remotely connected to it. Well, that's my opinion, as a fuddy-duddy opinionated arrogant old mathematician who thinks he has been around long enough to understand the world a bit better than most young whippersnappers. Richard Gill (talk) 08:11, 25 July 2011 (UTC)[reply]

Well, the problem with an article like the one you propose is that we could write many different simplistic articles about this problem, each explaining the paradox in its own simplistic way. (If you take a look at some of the WP articles in other languages you will find that this is often the case.) According to Gill's postmodernist view--that all solutions are equally valid--this is as it should be. But I don't think this is an honest approach, and it doesn't comply with the NPOV policy. It's not good if we go into too much detail explaining every research paper, I agree with you there, but to completely remove important variants and for example not even mention that there is a logical variant of the puzzle, as you propose, is even worse. iNic (talk) 10:38, 23 July 2011 (UTC)[reply]
There are many valid proofs of Pythagoras' theorem. My point of view is not post-modernist. By symmetry, we are indifferent to switching. So we know the argument is fallacious. Therefore since it can be seen trivially that the argument is fallacious, we don't need an article on TEP at all. What's wrong with the argument? It's like asking for the cause of an air disaster. Gravity caused the plane to come down. Man's hubris caused the disaster. Pilot error? Bad design which made pilot error more easy? Economizing on maintenance procedures? Richard Gill (talk) 18:33, 23 July 2011 (UTC)[reply]
This is a very important point which I have wanted to make before. Any argument which just shows that you should not swap is not a resolution to this paradox. Arguments along the lines of 'Here is a better way to determine what you should do' are pointless. We need to show where the compelling but bogus argument presented at the start fails. Martin Hogbin (talk) 09:28, 24 July 2011 (UTC)[reply]
Exactly! And my so-called post-modernist remark is that since there are different valid thought processes / contexts which one could imagine being behind the written argument, there are different error-diagnoses which one can give. For any reasonable context one will find a resolution. But they don't have to be the same. Was the writer after an unconditional expectation or a conditional expectation? Did the writer have a "realistic" scenario in mind where there is real money in both envelopes and hence an upper bound to the quantity, or perhaps a proper probability distribution (subjective or frequentistic)? Or is this a completely idealised and imaginary situation, the writer is a strict Bayesian, the two amounts are just two positive numbers x and 2x about which we know absolutely nothing, and hence the improper prior with density proportional to 1/x is the right way to describe his total ignorance about it? In this last case his probability calculation is correct, but expectation values are infinite and irrelevant. Richard Gill (talk) 23:43, 24 July 2011 (UTC)[reply]
OK, let's take an analogy. Suppose there is an illusionist trick where a girl is cut in half (or so it seems) and then put back together again. No one (except for the illusionist and the girl) knows how this is done. A discussion is started among the audience about how the trick is done. One child observes: "The girl survived so she was never really torn apart during the act." (Well, this might be a significant insight for a child but for all adults this is obvious. This is the very thing that needs to be explained, not the explanation.) Then a physician conjectures: "If the trick with the girl is not performed at all we will save the girl from any potential injuries." (Well, that is a great idea if the mission was to save the girl from injuries. But that was not the problem we set out to solve, and in relation to that saving the girl is totally irrelevant. If we remove the trick we have removed the question, we haven't answered it.) Then an elderly man raises his voice: "Since we can see trivially that this is just a trick, since we all saw that the girl survived, no one needs to waste any more time on it. How the trick works? Well, it could work in any way we could think of. It could be invisible strings, mirrors, smoke, a hidden twin copy of the girl and so on. I agree that none of them explains every detail of what we saw of the trick on stage, but taken together they explain more than enough. Who cares which one was really used anyway? " (Well, this is not a valid answer either. When we find the true mechanism behind the trick we will be able to explain 100% of what we saw. At that point all other theories explaining almost everything will become obsolete.) iNic (talk) 02:22, 25 July 2011 (UTC)[reply]
Nice example, but poor analogy. The illusionist definitely has a trick, he could explain to other illusionists how it's done and they could do the same. In this case, even if we imagine that the writer of TEP knows about probability theory and is trying to trick us, we don't know if he's going for a conditional expectation or an unconditional expectation at steps 6 and 7. (Does he get the probabilities of the two cases, or the expectations in the two cases, wrong?) If he's going for the conditional expectation, we don't know if he says that the conditional probabilities are independent of A because he's mistaken or because he's using the appropriate noninformative (and unfortunately necessarily improper) prior f(x) proportional to 1/x on x>0. Or do you know things about TEP which are not published in any standard work on the paradox? *Every* known explanation of TEP does explain 100% of what we saw. But people differ in opinion about which explanation they like, because they come from different backgrounds, and automatically assume a context appropriate to their professional background (mathematics / logic / semantics / philosophy / economics). They learn what they think is the trick and go on to create new tricks which people from their own field accept as similar. Richard Gill (talk) 06:10, 25 July 2011 (UTC)[reply]
PS. Suppose the TEP argument were written in a formal mathematical or logical language, and at each step, the author also stated explicitly which theorem and/or prior information he is using. Then we could put our finger on "the first place" where he breaks the rules which he claims he is using. But the TEP argument is pre-formal, it uses a mixture of ordinary language, common sense, and some amateur - informal - private - probability theory. That's why there needn't be a unique explanation of "what went wrong". Think of Russell's paradox of the set of all sets which are not members of themselves, or the paradox of the liar, or of the barber who shaves everyone who does not shave himself. They are all caused by self-reference. Yet each of those paradoxes can be resolved in different ways - Russell's theory of types is only one way to solve Russell's paradox. Multi-valued logic is another way - sentences of a formal language are not either true or false, they are true or false or meaningless. (The negation of a true statement is false, the negation of a meaningless statement is meaningless).

At one level the solution of TEP is unique: it's caused by sloppy thinking. Not making distinctions which are important. At another level there is no solution: there are so many ways a single piece of sloppy thinking can be straightened out. Richard Gill (talk) 08:20, 25 July 2011 (UTC)[reply]

If you can show that the different proposed solutions (so far) are just different ways of translating the problem into a logical or mathematical framework, that would be very interesting. However, this is not the view any of the authors of the reliable sources have, and no paper so far is promoting this view, published or unpublished. Very few even mention other authors and their proposed solutions. Those who do, do it only to criticize their ideas. If you could show that all TEP authors on the contrary should embrace each other in a big hug it would be very interesting new research indeed. Until then, please treat this as your own opinion, and your own opinion only.

None of the proposed explanations so far explains everything. This is no secret and you should know it. This is the reason why people are still writing papers on this subject at the pace they do. In contrast, there is no comparable activity in finding new proofs of the Pythagorean theorem, which, according to you, should be a parallel case. You could try to invent a set of conspiracy theories of why this is the case, or simply realize the real reason why people are writing about TEP. Which path do you choose? iNic (talk) 21:57, 26 July 2011 (UTC)[reply]

Puzzled

Richard mentions 'examples of faulty logical reasoning caused by using the same symbol (or verbal description) to stand for different things at the same time' and Dilaudid seems to be alluding to the same thing, yet I do not see that resolution of the paradox anywhere in the article. Can someone tell me if that argument is considered a good resolution of the paradox and, if so, could they present the argument clearly here. Martin Hogbin (talk) 11:48, 22 July 2011 (UTC)[reply]

We are not here to judge which proposed solutions are 'good' or 'bad.' Our job is to give a short unbiased account of the more common solutions mentioned in published sources. This proposed solution should definitely be mentioned in the article. I think it is the simplest solution and it should be mentioned as the first proposed solution in the article. iNic (talk) 13:57, 22 July 2011 (UTC)[reply]
Falk: "The assertion in no. 6 (based on the notation of no. 1) can be paraphrased ‘whatever the amount in my envelope, the other envelope contains twice that amount with probability 1/2 and half that amount with probability 1/2’. The expected value of the money in the other envelope (no. 7) results from this statement and leads to the paradox. The fault is in the word whatever, which is equivalent to ‘for every A’. This is wrong because the other envelope contains twice my amount only if mine is the smaller amount; conversely, it contains half my amount only if mine is the larger amount. Hence each of the two terms in the formula in no. 7 applies to another value, yet both are denoted by A. In Rawling’s (1994) words, in doing so one commits the ‘cardinal sin of algebraic equivocation’ (p. 100)." Richard Gill (talk) 17:11, 22 July 2011 (UTC)[reply]
What is your opinion of this argument? Martin Hogbin (talk) 20:18, 22 July 2011 (UTC)[reply]
I do not see why Falk's argument does not also apply in the following situation, in which you should swap (once).
You pick an envelope from a selection containing different amounts of money. A person who knows what is in your envelope then tosses a coin and puts either 1/2 or twice that amount of money in another envelope. You have the option of swapping. Would you? I would. Falk's argument seems to me to show that you should not swap and it is therefore wrong. Martin Hogbin (talk) 22:53, 22 July 2011 (UTC)[reply]
In standard TEP, when your envelope is the smaller of the two, it contains on average half what it on average contains when it is the larger. This is also Schwitzgebel and Dever's explanation (not of what is a wrong step in the argument, but of why its conclusion is wrong). In your scenario, Martin, what is in your envelope is no different when it is the smaller than when it is the larger. Falk puts into words that in standard TEP, what's in either envelope is statistically different when it's the larger than when it's the smaller. The wrong step in the argument assumes that whether your envelope is the smaller or larger is independent of what's in it. In maths: the wrong step is to assume that the event {A<B} is independent of the random variable A. Falk's explanation: this step is wrong, because A is not independent of {A<B}. It seems that SD are the first philosophers who say this. Falk knows it. The mathematicians knew it long before (they know the symmetry of statistical independence; philosophers don't).

Mathematicians writing for mathematicians can leave their conclusions in technical language and perhaps only semi-explicit: their intended readers understand it. But the philosophers don't understand. And vice versa. Ordinary folk understand neither. Hence the wheel is reinvented so many times. Richard Gill (talk) 17:14, 23 July 2011 (UTC)[reply]

If you are pointing out that there is no way to arrange the money in the envelopes such that the probability that B contains half of A and the probability B contains twice A are both equal to 1/2 and the situation is symmetrical then I understand and agree. If this is not what you are saying then could you explain some more please. Martin Hogbin (talk) 18:03, 23 July 2011 (UTC)[reply]
I'm not saying that. Put x and 2x in two envelopes, then completely at random write "A" and "B" on the two envelopes. The situation *is* completely symmetric. And the chances A contains twice or half what's in B are both equal to 1/2.

What these folks are saying is that it's asking for trouble to denote by "a" both what's in A when it's the larger and what's in A when it's the smaller. Call them "2x" and "x" instead and you don't screw up.

Alternatively, learn about conditional probability and do it with maths. The original TEP argument looks like probability theory but isn't. Philosophers think it's purely a problem of logic, not of mathematics. Then what I just explained is the solution. But another solution is to do it properly within probability theory. And in the end it all comes down to the same thing. Richard Gill (talk) 18:14, 23 July 2011 (UTC)[reply]

How does Falk's argument not apply to my case then? Martin Hogbin (talk) 18:26, 23 July 2011 (UTC)[reply]
In your case, what's in your envelope when it's the smaller of the two is the same as what's in it when it's the larger of the two. Your envelope is filled first, say with amount a, only after that is 2a or a/2 put in the other envelope. In TEP two envelopes are both filled with amounts x and 2x. After that, you choose one at random. If it's the smaller amount it's x. If it's the larger amount, it's 2x. This is what Falk is saying, and SD, and a load of other people, though I must say they can find incredibly ugly ways to say it. Richard Gill (talk) 20:40, 23 July 2011 (UTC)[reply]
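If it helps, here is a rough simulation of the two set-ups side by side; the particular menu of amounts and the number of plays are arbitrary choices, purely for illustration:

import random

amounts = [10, 20, 40, 80, 160]    # arbitrary menu of possible amounts
plays = 500_000

# Martin's scenario: your envelope is filled first, then a coin toss puts
# double or half of that amount in the other envelope. Switching gains about 25%.
keep = swap = 0.0
for _ in range(plays):
    a = random.choice(amounts)
    b = 2 * a if random.random() < 0.5 else a / 2
    keep += a
    swap += b
print("Martin's scenario:", keep / plays, swap / plays)

# Standard TEP: the two amounts x and 2x are fixed first, then you are handed
# one of the two envelopes at random. Switching gains nothing.
keep = swap = 0.0
for _ in range(plays):
    x = random.choice(amounts)
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    keep += a
    swap += b
print("Standard TEP:", keep / plays, swap / plays)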
I have just added a bit to my statement above. It now says, 'there is no way to arrange the money in the envelopes such that, for every possible value of A, the probability that B contains half of A and the probability B contains twice A are both equal to 1/2 and the situation is symmetrical'. Does this make sense now? Martin Hogbin (talk) 19:13, 23 July 2011 (UTC)[reply]
That makes sense to me, though some purists would prefer that you added the word "conditional" in front of "probability".

However, it is possible to arrange things so that it is almost exactly true. This brings us to TEP-2, "Return of TEP", or "Great Expectations". If you arrange things so that what you just asked is almost exactly true, then for almost every possible value of A, the expectation of B is exactly or approximately 5A/4, so it seems one ought to switch. We seem to have a new paradox. However in that case the amount of money actually in envelope A is almost always far, far less than its expectation value - expectation values are absolutely useless as a guide to rational choice.

You might object that this corresponds to subjective beliefs about the amount in envelope A, or the smaller of the two amounts X, which are totally unreasonable. However, there are a load of reliable sources which do state that this is not only reasonable, it's even prescribed. If you really know absolutely nothing about X, you also, I think, would be prepared to say that you know absolutely nothing about A, and absolutely nothing about X converted from dollars to euros or renminbi or yen. There's exactly one probability distribution which has this property: it's the probability distribution which makes log X uniformly distributed, so log X is a completely unknown real number somewhere from minus to plus infinity.
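To spell out the "exactly one" claim (this is just a sketch of the standard scale-invariance argument, not a quotation from any particular source): if the density f of X is to express the same state of ignorance about cX for every c > 0, we need f(x) = c f(cx) for all c, x > 0; putting c = 1/x gives f(x) = f(1)/x, and the substitution u = log x turns this into a constant density for log X, i.e. log X uniformly spread over the whole real line, which is of course improper.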

The thing is that knowing nothing about a positive number means that however large it is, it could just as well be billions of times larger, and however small it is, it could just as well be billions of times smaller. Because of this, your expectation value of what it is, is infinite. So you'll always be disappointed when you get to see the actual value x, which is a finite positive number.

I was just now talking about subjectivist probability: probability as rational belief. If you want to think of TEP with frequentist probability, then I just want to say that it is easy to arrange that the properties you listed almost exactly hold true. I can write a program on my computer which will print out two amounts x and 2x to go in the two envelopes, and though the event {A<B} won't be exactly independent of the amount A, it will be as close to independent as you like. We can play the game a billion times and never ever meet an occasion when the conditional probabilities that B=2a or a/2 aren't each exactly 1/2 (given A=a). So despite what's written in the literature, the paradox of TEP-2 isn't just a paradox of infinity. Unfortunately wikipedia will have to wait 10 years before we can correct that illusion. Richard Gill (talk) 20:53, 23 July 2011 (UTC)[reply]
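A sketch of what such a program might look like, with details (the size of N, printing the amounts as powers of two) that are arbitrary illustrative choices rather than anything from the literature:

import random

# Print two amounts x and 2x for the envelopes: the smaller amount is 2^K with
# K uniform on 0 .. N, the larger is 2^(K+1). The amounts are printed as powers
# of two because for N this large their decimal expansions would be unmanageable.
N = 10 ** 15      # so large that the two extreme values essentially never occur

k = random.randint(0, N)
pair = ["2^{}".format(k), "2^{}".format(k + 1)]
random.shuffle(pair)                               # which envelope you are handed is random
print("envelope A contains", pair[0], "- envelope B contains", pair[1])
# Unless k happens to be 0 or N (probability about 2/N), the conditional
# probability that B is double A, given the value of A, is exactly 1/2.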

But surely that is still a paradox of infinity. It is only because the expectation is infinite that you do not expect to actually get it.
The case of infinite means is a special case of a heavy-tailed distribution, so heavy-tailed to the right that the mean value of the distribution is at the far right-hand end of the distribution, and hence completely unrepresentative of a single value from the distribution, or even of an average of not-astronomically-many values from the distribution. Richard Gill (talk) 23:10, 24 July 2011 (UTC)[reply]
Also, I think we agree that Falk does not put the argument very well, especially for the layman. Martin Hogbin (talk) 22:04, 23 July 2011 (UTC)[reply]
Yes. And it seems to me that Schwitzgebel and Dever also fail to put the argument very well. Though they seem to be saying the same. Richard Gill (talk) 23:10, 24 July 2011 (UTC)[reply]

iNic - please don't delete anything from this talk page. You insist on a perfect refutation of this paradox yet interfere with any discussion of it. This is not helping anyone. Thank you, Dilaudid (talk) 10:46, 24 July 2011 (UTC)[reply]

I didn't delete it. I moved it to the Arguments page. Can you please move it back? iNic (talk) 23:48, 24 July 2011 (UTC)[reply]
iNic, several serious editors are working with you on the article at the moment. It's annoying to have to go back and forth between two pages. Sure, we are talking about content as well as form at the same time, but the content discussion is held in order to inform the discussion on form. Richard Gill (talk) 05:59, 25 July 2011 (UTC)[reply]

I'm truly sorry if the order here annoys you, Gill, but not everyone is interested in reading various editors' personal views on the subject matter. But others are, and this is why we have the Arguments page. However, editorial discussions should not contain any personal views on the subject matter at all. If they do there is something wrong with the discussion, and such discussions are moved to the Arguments page. iNic (talk) 22:35, 26 July 2011 (UTC)[reply]

iNic - I would also like to apologise to you. I have been too arrogant in my edits, and dismissive of your views. We are supposed to work together on this article, not fight, and I was wrong about the difficulty of the paradox - many sources (e.g. Devlin) state that it is "notorious". I hope I didn't hurt your feelings, and that you will be willing to work with me going forward. I would however prefer that we hold off deleting from the talk page until the weekend, when I will have more time to review what is here. Dilaudid (talk) 07:25, 27 July 2011 (UTC)[reply]

This is not a paradox

The WP article defines paradox as "a seemingly true statement or group of statements that lead to a contradiction or a situation which seems to defy logic or intuition." There is absolutely no contradiction or defying of logic in the two envelopes problem since it is blatantly obvious that switching can give you no advantage. It is only faulty reasoning that would lead you to believe that it could. Once a single watertight proof for a problem exists, it doesn't matter how many lines of faulty reasoning bored mathematicians or philosophers can come up with. It is ridiculous to suppose that one needs to refute all possible lines of reasoning to prove that the problem is not a paradox. This is not a paradox and I propose to delete any references to that from the article. Petersburg (talk) 01:30, 27 July 2011 (UTC)[reply]

I agree with the full force of your argument - this article is horribly overdone, I think the crazy variants and stupid "solutions" need to be deleted - I would appreciate your ongoing support in this. But I maintain that *it is a paradox*. A paradox is, as you state, "seemingly true" statements that lead to contradiction. The switching argument "seems to be true" when you read it without care. It is *obviously false* because it contradicts itself. This means the problem *is* a paradox - stating that it is a paradox is the same as stating that you clearly should not switch. Does this make sense to you? Dilaudid (talk) 07:20, 27 July 2011 (UTC)[reply]
This problem is a paradox, but one that has been satisfactorily resolved. There is a faulty but compelling line of reasoning that leads to an absurd conclusion. That is the paradox. Identification of the error(s) in the given line of reasoning is the resolution of the paradox.
Dilaudid, you have not made clear what you are saying. Are you saying that there is only one formulation of the problem? If so, what exactly is it?
Do you also claim that there is an obvious resolution of the paradox? If so can you state this also please. Martin Hogbin (talk) 09:08, 27 July 2011 (UTC)[reply]
see below. Dilaudid (talk) 09:40, 27 July 2011 (UTC)[reply]

Can we all agree on the following?

1) The Two Envelopes Paradox that this article is about is the variant where the envelope is not opened (there is no real discussion of the other variant)

2) The paradox is the contradiction between symmetry and the switching argument - specifically points 1-8, as written in the current article. Step 1 is an assertion. Steps 2,3,4,5,6,7,8 are deduced from the premises and the previous steps.

3) The switching argument is false (because A: it violates symmetry and specifically B: it refutes itself - the same argument can be used to prove the opposite conclusion).

4) The flaw in the argument is that step 2 cannot be deduced from the premises while the statement in step 1 is true.

The last point I guess will be contentious/new. If we are trying to prove that we can switch, and specify the amount in the envelope as A, our subsequent arguments must be true for all values of A. This means they must be true for any value of A, e.g. $2. Since we have not defined the probability distribution the envelopes are chosen from, the arguments must also be true for all distributions, e.g. the distribution "the amounts are $1 and $2 with 100% probability". We can show in this case that the probability that the other envelope contains 2A is not 1/2 - it is zero. This is actually the same point that is made in Devlin and Storkey.

5) It's not actually relevant to the resolution of the paradox, but much of the literature seems to focus on the consequence of maintaining Step 1 and 2 as simultaneously true - the consequence is that we assume an improper probability distribution (not necessarily uniform, just true that p(x) = p(2x) for all x). To make the argument true, we would have to explicitly state this (or take step 2 as an assertion, rather than a deduction - some philosophers may be doing this). If we do so we then make a nonsense of steps 7 and 8, since we are comparing infinities (and infinities of the same type are always equal in size, see Cardinal number). Dilaudid (talk) 09:16, 27 July 2011 (UTC)[reply]