Showing posts with label history of probability. Show all posts

Monday, 25 March 2013

Probability preferences : the source of randomness is not the game

Fermat's version of the solution to the problem of points was to create a grid of possibilities reaching fully out to the point beyond which no doubt could exist as to who the winner would be.  This grid of possibilities included parts of the tree which, on one view, would be utterly irrelevant to the game in hand, and on another view, incorrectly modelled the set of possibilities embedded in the game.

Pascal's solution, by way of contrast, was a ragged tree of possibilities stretching out along each branch only as far as was needed to resolve the state of the game in question, and no further.

Pascal additionally made the mistake, in interpreting Fermat's solution, of ignoring order when reading three tossed dice (or coins), and through this misinterpretation arrived at an answer in the three-player case which diverged from his own backward recursive solution, based on the principle of fair treatment at each node of his ragged tree.

Because Pascal's misreading of Fermat's solution did not match his own result, he jumped to the conclusion that what must be wrong in Fermat's method was the extension of the tree of possibilities beyond those parts which the game in hand required.  Pascal consulted Roberval on the legitimacy of this fully rolled-out tree of possibilities, and Roberval seems to have told Pascal that this was where Fermat was going wrong, namely that this 'false assumption' of theoretical play of zombie-games leads to bad results.  It doesn't.

Fermat saw clearly that the evolution in time of a source of randomness is separate from the rule, game or activity sitting on top of it.  In this case the game was 'first to get N wins'.  Modern tree-based derivative pricing methods all apply this same move.  First the random process's set of possibilities is evolved on a lower, supporting layer, then the payoff of the contract is worked out at the terminal time horizon.  Both in De Mere's game and with an option, there's a clearly defined termination point.  With De Mere's game, it is reached when the first player gets to N wins.  With an option, it is the option's expiry.  Gambler's ruin, as I'll discuss later, doesn't have such a straightforward termination point.  So step 1 is to lay out all the possible states from now to the termination point, the tree of possibilities for the stochastic process.  Then you work out the terminal value of the contract or game and use Pascal's fairness criterion to crawl back up a second tree until you reach the 'now' point, which gives you the fair value of the contract.  This is the essence of the finite difference approach, and it works for path-dependent and path-independent pricings.  One implication of the game's structure is that the tree is recombinant, which means the binomial coefficients become relevant when working out the probability of reaching each terminal node.
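This two-layer move can be sketched in a few lines.  What follows is a minimal, illustrative recombinant binomial tree valuer for the path-independent case (the function names and parameters are my own, not anything from the historical material): the process is evolved to the horizon, the payoff is applied at the terminal nodes, and Pascal's fairness criterion, taking the expected value of the two children at each node, crawls the value back to 'now'.

```python
# A minimal recombinant binomial tree valuer, illustrative only.
def binomial_value(s0, up, down, p_up, steps, payoff):
    # Layer 1: terminal states of the supporting random process.
    # After `steps` moves with k up-moves, the state is s0 * up**k * down**(steps - k).
    values = [payoff(s0 * up**k * down**(steps - k)) for k in range(steps + 1)]
    # Layer 2: crawl back up the tree, replacing each pair of children
    # with their expected (fair) value, until we reach the 'now' node.
    for _ in range(steps):
        values = [p_up * values[k + 1] + (1 - p_up) * values[k]
                  for k in range(len(values) - 1)]
    return values[0]

# Fair value of a claim paying 1 if the process ends above its start.
fair = binomial_value(100.0, 1.1, 0.9, 0.5, 4, lambda s: 1.0 if s > 100.0 else 0.0)
```

Because the tree is recombinant, only `steps + 1` terminal nodes are needed rather than `2**steps` paths, which is exactly where the binomial coefficients earn their keep.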

Fermat had a clearer, and earlier, conception of this separation.  But Roberval and Pascal were right to flag the move up: what grounds did Fermat give for it?  In modern parlance, we can see that the stochastic process, often a stock price, a spot FX rate or a tradeable rate, is independently observable in the market.  But back then, Pascal was struggling to separate the game from the source of randomness.  F. N. David suggests that Pascal set Roberval up as the disbeliever as a distancing mechanism for his own failure to grasp this point.  Likewise, David suggests Pascal perhaps only solved his side of the problem after initial prompting from Fermat, in a letter which starts off the correspondence but which unfortunately no longer exists.

Of course, this isn't just a solution for an unfinished game, but the fair value of the game at any point during its life. Each author I read seems clear in his mind that one or other of the great mathematicians' solutions is to be preferred.  Is this just ignorance, aesthetic preference masquerading as informed opinion?  Yes, largely.  But my own opinion is that the two solutions share many similarities: both need to evolve a tree of possibilities, a binary tree, for which the binomial coefficients come in handy as the number of steps increases; both then involve evaluating the state of the game at the fixed and known horizon point.  Fermat's tree is a tree of possibilities of a stochastic process.  His solution takes place exclusively at the final set of terminal nodes, working out the ratio of the number of terminal nodes in which player A is the winner to the total number of terminal nodes.  Pascal's tree is the tree of game states.  He reasons in a reverse iterative way until he reaches the start point, and the start point gives him his final answer.  The arithmetic triangle could help both men build their trees as the number of steps increases.
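That the two approaches agree is easy to check computationally.  As a sketch (the function names are mine), here is player A's fair share of the stakes in a fair game where A needs a more wins and B needs b more, computed both by Fermat's enumeration over the fully rolled-out tree and by Pascal's backward recursion:

```python
from itertools import product
from functools import lru_cache

def fermat_share(a, b):
    # Fermat: roll the tree fully out to a horizon of a + b - 1 further
    # rounds (beyond which no doubt can exist about the winner) and count
    # the terminal worlds in which A has collected at least a wins.
    n = a + b - 1
    wins = sum(1 for world in product('AB', repeat=n) if world.count('A') >= a)
    return wins / 2**n

@lru_cache(maxsize=None)
def pascal_share(a, b):
    # Pascal: recurse back through the ragged tree of game states,
    # splitting each node fairly between its two equally likely children.
    if a == 0:
        return 1.0   # A has already won
    if b == 0:
        return 0.0   # B has already won
    return 0.5 * (pascal_share(a - 1, b) + pascal_share(a, b - 1))
```

For the classic interrupted game where A needs 2 more wins and B needs 3, both give 11/16.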

Thursday, 21 March 2013

Warm Seat

I am really rather pleased with my reading of the history of the theory of probability.  Four points struck me about it.  First, that Cardano has a much stronger claim than the authors of histories of probability give him credit for.  Second, that Pascal was wrong to criticise Fermat's combinatorial approach in the case of more than two players in the problem of points, and that his mistake was an equivalence-class/ordering misunderstanding about the reading of three thrown dice.  Third, that Pascal's solution is a bit like using dynamic hedging for an exotic option (one which doesn't exist yet, but which I'll call a one-touch upswing option).  And fourth, that Huygens's gambler's ruin can be turned into a problem of points by using participant stakes and, separately, some tokens which are transferred from the loser to the winner after each throw.  On the last three of these points Todhunter and the authors Shafer and Vovk variously agree with me.

A better name for the problem of points is the warm seat price.  The original first-to-six game, and also gambler's ruin with plastic tokens and stakes, can both be seen as specific games for which there's a warm seat price: the fair value of the game to a participant who wanted to get out of it immediately.  Gambler's ruin, though, has no definite future time at which the winner will with certainty be known.

It is also, amusingly, my own warm seat moment, since I didn't discover anything myself but followed in other people's footsteps, experiencing at second hand the warmth of discoveries others had made before me.

Tuesday, 19 March 2013

One gambler wiped out, the other withdraws his interest

In so far as odds are products of a bookmaker, they reflect not true chances but bookie-hedged, or risk-neutral, odds.  So right at the birth of probability theory you had a move from risk-neutral odds to risk-neutral slices, in the sense of dividing up a pie.  The odds, remember, reflect the betting action, not directly the likelihood of the respective outcomes.  If there's heavy betting in one direction, then the odds (and the corresponding probability distribution) will reflect it, regardless of any participant's own opinion of the real probabilities.  Those subjective assessments of the real likelihood start, at their most general, as a set of prior subjective probability models in each interested party's head.  Ongoing revelation of information may adjust those distributions.  If the event being bet on is purely random (that is, with no strategic element, a distinction Cardano made), then one or more participants might model the situation in a way which is as good as they'll ever want, that is, immune to new information.  For example, the rolling of two dice and the relative occurrence of pips summing to 10 versus pips summing to 9 is the basis of a game where an interested party may well hit upon the theoretical outcomes implied by Cardano and others, and would stick with that model.
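The theoretical outcomes for such dice-sum comparisons are a matter of brute enumeration.  A minimal sketch (the helper name is mine): with two dice, 9 can be made 4 ways out of 36 against 10's 3; with three dice the ranking reverses, 10 having 27 ways out of 216 against 9's 25.

```python
from itertools import product

def ways(total, dice):
    # Count the equally likely outcomes of `dice` fair dice
    # whose pips sum to `total`.
    return sum(1 for roll in product(range(1, 7), repeat=dice)
               if sum(roll) == total)

two_dice = (ways(9, 2), ways(10, 2))      # (4, 3) out of 36
three_dice = (ways(9, 3), ways(10, 3))    # (25, 27) out of 216
```

A bettor who hits on these counts has a model immune to further information, which is exactly the point about purely random (non-strategic) events.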

Another way of putting this is to say that probability theory only coincidentally cares about correspondence to reality.  This extra property of a probability distribution over a sample space is not in any way essential.  In other words, the distribution giving the fair value of these games, the actual likelihoods, is just one probability distribution of infinitely many for the game.

Yet another way of putting this is to say that the core of the theory of probability didn't need to involve the analysis of the fair odds of a game.  The discoverers would have been familiar with bookies' odds and how they may differ from likely-outcome odds.  Their move was in switching from hedge odds of "a to b" to hedge probabilities of $\frac{b}{a+b}$.  That the theory did bind this up with a search for fair odds is no doubt partly due to the history of the idea of a fair price, dating back in the Christian tradition at least as far as Saint Thomas Aquinas.
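That switch is mechanical.  A minimal sketch, with names of my own invention: odds of "a to b" against an outcome correspond to a probability of b/(a+b), and a whole book of such odds can be renormalised into the pie-slice view (if the bookmaker has built in a margin, the raw implied probabilities sum to more than 1):

```python
def odds_to_probability(a, b):
    # Odds of 'a to b' against an outcome imply a probability of b / (a + b).
    return b / (a + b)

def book_slices(odds):
    # Turn a whole book of (a, b) odds into normalised 'pie slices'.
    # Raw implied probabilities can sum to more than 1 when the
    # bookmaker has a margin, so renormalise them.
    raw = [odds_to_probability(a, b) for a, b in odds]
    total = sum(raw)
    return [p / total for p in raw]
```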

Imagine two players, Pascal and Fermat, playing a coin tossing game.  They both arrive with equal bags of coins which represent their two wagers; imagine they each come with 6,000,000 USD.  They hand these wagers to the organisers, who take care of them.  The organisers hand out six plastic tokens each, otherwise identical looking.  Then the coin is brought out.  Everyone knows that the coin will be very slightly biased, but only the organisers know precisely to what degree, or whether towards heads or tails.  The game is simple.  Player 1 is the heads player, player 2 tails.  Player 1 starts.  He tosses the coin.  If it is heads, he takes one of his opponent's tokens and puts it in his pile; he would then have 7 to his opponent's 6.  If it is tails, he surrenders one of his own tokens to his opponent.  Then the opponent takes his turn, collecting on tails and paying out on heads.  The game ends when the winner holds all 12 tokens and the loser none.  The winner keeps the 12,000,000 USD, a tidy 100% profit for an afternoon's work; the loser has just lost 6,000,000 USD.  Each player can quit the game at any point.
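The token game here is exactly gambler's ruin, and its fair odds have a classical closed form.  A hedged sketch (the function name is mine): for a player holding a tokens against an opponent's b, winning each toss with probability p,

```python
def ruin_win_probability(a, b, p=0.5):
    # Probability that the player holding `a` tokens ends up with all
    # a + b tokens, when each toss is won with probability p.
    # Classical gambler's ruin closed form.
    if p == 0.5:
        return a / (a + b)
    r = (1 - p) / p
    return (1 - r**a) / (1 - r**(a + b))
```

With six tokens each and a fair coin, each player's chance is 1/2; a slight bias known only to the organisers, say p = 0.51 for the heads player, tilts his chance to roughly 0.56.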

Meanwhile this game is televised and on the internet.  There are 15 major independent betting cartels around the world taking bets on the game.  In each of these geographic regions, the betting is radically different, leading to 15 sets of odds on a Pascal or a Fermat victory.

Totally independent of those 15 cartels, there are a further 15 betting cartels running an inside bet, which pays out if you guessed who would be first to see 6 victories, not necessarily in a row.

Now this second game is nested inside the first, since you can't finish the first game without having collected at least 6 wins along the way.  Pascal and Fermat don't know or care about the inner game.  They're battling it out for total ownership of the tokens, at which point their game ends.  The second set of betting cartels is guaranteed a result in at most 11 tosses every time, and possibly in as few as 6.

Just by coincidence, Fermat, player 1, gets 4 heads in a row, bringing him to 10 of the 12 tokens he needs for total ownership.  He only needs 2 more net wins to finish.  At this point Pascal decides to quit the game.  To bettors in the first set of cartels it looks like Pascal and Fermat are playing gambler's ruin; to the second set it looks like they're playing 'first to get six wins', which is the game the real Pascal and Fermat analyse in their famous letters.

Soon after, Pascal's religious conversion wipes out his gambling dalliance, and Fermat, only ever partly engaged with this problem, withdraws his interest: both men metaphorically enacting gambler's ruin and the problem of points.

Tuesday, 12 March 2013

Probability preferences

In order to support my claim that Pascal (and to some extent Fermat) is too highly praised in the history of probability theory, I'd like to set out what I see as important in the constellation of ideas around the birth of the subject.  This is my opinion, and is based on what I know has happened in probability theory since the time of Cardano, Pascal, Fermat and Huygens.

Concepts of primary importance in probability theory (in the pre-Kolmogorov world of Cardano, Fermat, Pascal)
  1. Event space.
  2. Independence.
  3. Conjunction and disjunction.
  4. Equivalence class.
  5. Parallel/sequential irrelevance of future outcomes.
  6. A relation between historical observed regularities and multiple future possible worlds.
  7. A clear separation between the implementation of the random process(es) and the implementation of the activity, game, contract, etc. which utilises the source of randomness.

Concepts of secondary importance.
  1. Equi-probable event space.
  2. Expectation.
  3. Single versus multiple random sources.
  4. Law of large numbers (though it is of primary importance to the dependent subject of statistics).
  5. i.i.d. (two or more random sources which are independent and identically distributed).
  6. A Bernoulli scheme.
  7. The binomial distribution.
  8. Stirling's approximation for n factorial.
  9. The normal distribution.
  10. Information content of a random device.
  11. Identification of the activity, game, contract, etc., as purely random, or additionally strategic.

I'd like to say something about each of these in turn.

Before I do, I'd like to say this: the Greeks failed to develop probability theory not, as Bernstein and also David suggest, because of a preference for theory over experimentation, but perhaps because probabilities are ratios, and base-ten positional number notation, an Indian invention of around the eighth century A.D., was not yet available to make manipulating those ratios notationally bearable.  No doubt the early renaissance love of experimentation (Bacon and Galileo) also assisted in drawing the parallel between the outcome of a scientific experiment and the outcome of a randomisation machine.