Propensity probability

From Wikipedia, the free encyclopedia

The propensity theory of probability is one interpretation of the concept of probability. Theorists who adopt this interpretation think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.[1] This kind of objective probability is sometimes called 'chance'.

Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. Hence, these single-case probabilities are known as propensities or chances.
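As an illustrative sketch (not part of the original article), the convergence described by the law can be seen numerically. The short Python simulation below uses an arbitrary bias and toss count; on a typical run the running relative frequency settles near the fixed single-toss probability p.

```python
import random

def running_frequency(p=0.5, n_tosses=100_000, seed=42):
    """Simulate independent tosses of a coin whose single-toss probability
    of heads is fixed at p, and print the running relative frequency of
    heads at a few checkpoints. Parameter values are arbitrary examples."""
    rng = random.Random(seed)
    heads = 0
    checkpoints = {10, 100, 1_000, 10_000, 100_000}
    for i in range(1, n_tosses + 1):
        heads += rng.random() < p  # True counts as 1
        if i in checkpoints:
            print(f"after {i:>7,} tosses: frequency of heads = {heads / i:.4f}")

running_frequency()
```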

In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time.

The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.

Karl Popper

The first propensity theory, due to philosopher Karl Popper, noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. For Popper, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since the generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) exist only for genuinely indeterministic experiments.
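In symbols (a notational gloss added here, not Popper's own formulation): writing N_n(E) for the number of times outcome E occurs in the first n repetitions of the generating conditions S, the definition reads

```latex
\Pr\nolimits_{S}(E) = p
\quad\Longleftrightarrow\quad
\lim_{n \to \infty} \frac{N_n(E)}{n} = p .
```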

Popper's propensities, while they are not relative frequencies, are nevertheless defined in terms of relative frequency. As a result, they face many of the serious problems that plague frequency theories. First, propensities cannot be empirically ascertained on this account, since the limit of a sequence is a tail event, and is thus independent of its finite initial segments. Seeing a coin land heads every time for the first million tosses, for example, tells one nothing about the limiting proportion of heads on Popper's view. Moreover, the use of relative frequency to define propensity assumes the existence of stable relative frequencies, so one cannot then use propensity to explain the existence of stable relative frequencies via the law of large numbers.
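The tail-event point can be made precise with a standard decomposition (added here for illustration): coding heads as x_i = 1 and tails as x_i = 0, for any fixed finite stage m,

```latex
\frac{1}{n}\sum_{i=1}^{n} x_i
  \;=\; \underbrace{\frac{1}{n}\sum_{i=1}^{m} x_i}_{\to\,0 \text{ as } n \to \infty}
  \;+\; \frac{1}{n}\sum_{i=m+1}^{n} x_i ,
```

so the limiting relative frequency, when it exists, is determined entirely by the outcomes beyond any finite initial segment.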

Miller, Gillies

A number of other philosophers, including David Miller and Donald Gillies, have proposed propensity theories somewhat similar to Popper's, in that propensities are defined in terms of either long-run or infinitely long-run relative frequencies.

Ronald Giere

Other propensity theorists (e.g. Ronald Giere) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge cannot be explicitly defined in terms of more basic things either, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.

Principal Principle

What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle, a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What, then, is the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents.
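The 32-cent figure is just the expected payout computed with the chance values (a routine calculation, spelled out here):

```latex
\text{fair price}
  \;=\; 0.32 \times \$1 \;+\; (1 - 0.32) \times \$0
  \;=\; \$0.32 .
```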

References

  • Popper, Karl and Eccles, Sir John. The Self and Its Brain: An Argument for Interactionism. 1977. ISBN 0415058988.
  • Popper, Karl. "The Propensity Interpretation of the Calculus of Probability and of the Quantum Theory." In Observation and Interpretation, Korner & Price (eds.). Butterworths Scientific Publications, 1957, pp. 65–70.
  • Popper, Karl. The Logic of Scientific Discovery. Hutchinson, London, 1959.
  • Popper, Karl. "Quantum Mechanics without 'The Observer'." In Quantum Theory and Reality, Bunge, M. (ed.). Springer-Verlag, Berlin, Heidelberg, New York, 1967.
  1. ^ 'Interpretations of Probability', Stanford Encyclopedia of Philosophy [1], accessed 23 December 2006