
Game theory




Game theory, a branch of mathematics, operations research and economics, analyzes interactions with formalized incentive structures ("games"). It studies the predicted and actual behavior of individuals in these games, as well as their optimal strategies. Seemingly different types of interactions can be characterized as having similar incentive structures, and thus as examples of a particular "game."

Game theory is closely related to economics in that it seeks to find rational strategies in situations where the outcome depends not only on one's own strategy and "market conditions", but also on the strategies chosen by other players with possibly different or overlapping goals. It also finds wider application in fields such as political science and military strategy, and overlaps with computer science.

The results can be applied to simple games of entertainment or to more significant aspects of life and society. An example of the latter is the prisoner's dilemma as popularized by mathematician Albert W. Tucker, which has many implications about the nature of human cooperation. Biologists have used game theory to understand and predict certain outcomes of evolution, such as the concept of evolutionarily stable strategy introduced by John Maynard Smith in his essay Game Theory and the Evolution of Fighting. See also Maynard Smith's book Evolution and the Theory of Games.

Other branches of mathematics, in particular probability, statistics and linear programming, are commonly used in conjunction with game theory to analyse many games.

Types of games and examples

Game theory classifies games into many categories that determine which particular methods can be applied to solving them (and indeed how one defines "solved" for a particular category). Some common categories are:

Zero-sum games are those in which the total benefit to all players in the game adds to zero (or more informally put, that each player benefits only at the expense of others). Chess and Poker are zero-sum games, because one wins exactly the amount one's opponents lose. Business, politics and the prisoner's dilemma, for example, are non-zero-sum games because some outcomes are good for all players or bad for all players. It is easier, however, to analyze a zero-sum game, and it turns out to be possible to transform any game into a zero-sum game by adding an additional dummy player often called "the board," whose losses compensate the players' net winnings.
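
As a small illustration of that last point, the sketch below pads every outcome of a non-zero-sum game with a dummy "board" player whose payoff cancels the other players' payoffs. The Python rendering and the particular prisoner's-dilemma payoff numbers are illustrative assumptions, not part of this article:

  # Turn a non-zero-sum game into a zero-sum one by adding a dummy
  # "board" player whose payoff is minus the sum of the real payoffs.
  pd_payoffs = {                      # a textbook prisoner's dilemma
      ("cooperate", "cooperate"): (3, 3),
      ("cooperate", "defect"):    (0, 5),
      ("defect",    "cooperate"): (5, 0),
      ("defect",    "defect"):    (1, 1),
  }

  zero_sum = {outcome: payoffs + (-sum(payoffs),)
              for outcome, payoffs in pd_payoffs.items()}

  for outcome, payoffs in zero_sum.items():
      assert sum(payoffs) == 0        # every outcome now sums to zero
      print(outcome, payoffs)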

A convenient way to represent a game is given by its payoff matrix. Consider for example the two-player zero-sum game with the following matrix:

                                Player 2 
                    Action A    Action B    Action C
          Action 1    30         -10          20 
Player 1
          Action 2    10          20         -20
  

This game is played as follows: the first player chooses one of the two actions 1 or 2, and the second player, unaware of the first player's choice, chooses one of the three actions A, B or C. Once these choices have been made, the payoff is allocated according to the table; for instance, if the first player chose action 2 and the second player chose action B, then the first player gains 20 points and the second player loses 20 points. Both players know the payoff matrix and attempt to maximize the number of their points. What should they do?

Player 1 could reason as follows: "with action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, player 2 would choose action C. If both players take these actions, the first player will win 20 points. But how about if player 2 anticipates the first player's reasoning and choice of action 1, and deviously goes for action B, so as to win 10 points? Or if the first player in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?
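
A minimal sketch of this worst-case reasoning, assuming the payoff matrix above (the Python code is an illustration added here, not part of the article):

  # Worst-case ("maximin") reasoning on the example payoff matrix,
  # whose entries are player 1's gains.
  M = [[30, -10, 20],    # player 1's action 1 against A, B, C
       [10,  20, -20]]   # player 1's action 2 against A, B, C

  # Player 1's worst case for each action: the minimum over columns.
  for i, row in enumerate(M, start=1):
      print(f"action {i}: player 1's worst case is {min(row)}")
  # action 1 risks only -10, action 2 risks -20, so action 1 looks safer.

  # Player 2's worst case for each action: the maximum over rows.
  # B and C tie at 20; the reasoning above prefers C because it also
  # offers player 2 the larger possible win (20 rather than 10).
  for j, name in enumerate("ABC"):
      print(f"action {name}: player 2's worst case is {max(row[j] for row in M)}")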

The fundamental and surprising insight by John von Neumann was that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes these probabilities so as to maximize the expected point gain in the worst case, independently of whatever strategy the opponent adopts; this leads to a linear programming problem whose solution gives an optimal strategy for each player. This method yields provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that the first player should choose action 1 with probability 57% and action 2 with probability 43%, while the second player should assign the probabilities 0%, 57% and 43% to the three actions A, B and C.

The first player will then win about 2.86 points (20/7) on average per game.
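
The sketch below reproduces these numbers by writing down the first player's linear program and handing it to SciPy's linprog routine. The formulation is the standard one; the code itself is an illustration added here, not part of the article:

  # Solve the example as a linear program: choose probabilities p1, p2
  # and a value v so that the expected gain is at least v whatever
  # column the opponent picks, and v is as large as possible.
  from scipy.optimize import linprog

  M = [[30, -10, 20],
       [10,  20, -20]]

  c = [0, 0, -1]                                        # maximize v = minimize -v
  A_ub = [[-M[0][j], -M[1][j], 1] for j in range(3)]    # v - E[gain | column j] <= 0
  b_ub = [0, 0, 0]
  A_eq = [[1, 1, 0]]                                    # p1 + p2 = 1
  b_eq = [1]

  res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                bounds=[(0, 1), (0, 1), (None, None)])
  p1, p2, v = res.x
  print(f"action 1: {p1:.0%}, action 2: {p2:.0%}, value: {v:.2f}")
  # action 1: 57%, action 2: 43%, value: 2.86

The second player's probabilities (0%, 57% and 43%) come out of the analogous linear program for the column player, or equivalently from the dual of this one.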

Cooperative games are those in which the players may freely communicate among themselves before making game decisions and may make bargains to influence those decisions.

Complete information games are those in which each player has the same game-relevant information as every other player. Chess and the Prisoner's dilemma are complete-information games, while Poker is not.

Risk aversion

For the above example to work, the participants in the game have to be assumed to be risk neutral. This means that, for example, they would value a bet with a 50% chance of receiving 20 'points' and a 50% chance of receiving nothing as being worth its average of 10 points. However, in reality people are often risk averse and prefer a more certain outcome; they will only take a risk if they expect to make money on average. Subjective expected utility theory explains how a measure of utility can be derived that always satisfies the criterion of risk neutrality, and which is therefore suitable as the measure of payoff in game theory.
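
A minimal sketch of the difference, assuming a square-root utility function purely for illustration (neither the function nor the code comes from the article):

  # Compare a risk-neutral valuation of the bet described above with a
  # risk-averse one based on a concave utility function.
  import math

  bet = [(0.5, 20), (0.5, 0)]         # 50% win 20 points, 50% win nothing

  expected_value = sum(p * x for p, x in bet)

  u = math.sqrt                        # an illustrative concave utility
  expected_utility = sum(p * u(x) for p, x in bet)
  certainty_equivalent = expected_utility ** 2    # inverse of u

  print(f"risk-neutral value:     {expected_value:.1f}")        # 10.0
  print(f"certainty equivalent:   {certainty_equivalent:.1f}")  #  5.0, less than 10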

History

Though touched on by earlier mathematical results, modern game theory became a prominent branch of mathematics in the 1940s, especially after the 1944 publication of Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. This profound work contained the method for finding optimal solutions for two-person zero-sum games. Around 1950, John Nash developed a definition of an "optimum" strategy for multi-player games where no such optimum had previously been defined; it is now known as the Nash equilibrium. The concept was further refined by Reinhard Selten. These men were awarded the Bank of Sweden Prize in Economic Sciences in 1994 for their work on game theory, along with John Harsanyi, who developed the analysis of games of incomplete information.



See also Mathematical games


Further Reading:

  • John von Neumann, Oskar Morgenstern: Theory of Games and Economic Behavior, 3rd ed., Princeton University Press, 1953
  • Mike Shor: Game Theory .net, http://www.gametheory.net. Lecture notes, interactive illustrations and other information.
  • John Maynard Smith: Evolution and the Theory of Games, Cambridge University Press, 1982