## Tuesday, 19 March 2013

### Probability Preferences: Independence is primary, multiple random sources secondary

I have already talked about the absolute importance of the idea of mutual exclusivity (disjointness) to probability theory and how it enables the addition of probabilities.  I'd now like to chat about independence.  Remember I said that pairwise disjoint sets were absolutely dependent, in the sense that knowing one happened tells you everything you need to know about whether the other happened.  Note that the converse does not hold: you can also have absolutely dependent events which are nevertheless not mutually exclusive.  I will give three examples, though of course the classic example of independence is two (or more) separate randomisation machines in operation.

Take a die.  Give its six faces six different colours.  Then give the faces six separate figurative etchings.  Then add six separate signatures, one per face.  When you roll this die and are told it landed red face up, you know with certainty which etching landed face up, and which signature is on that face.  The colour, etching and signature events are absolutely dependent, yet they are not mutually exclusive.

Take another die, with the traditional pips.  Event E1 is the tossing of an even number.  Event E2 is the tossing of a 1, 2, 3 or 4. $P(E1)=\frac{1}{2}$ and $P(E2)=\frac{2}{3}$.  The event $E1 \cap E2$ is satisfied only by throwing a 2 or a 4, and so $P(E1 \cap E2) = \frac{1}{3}$.  This means, weirdly, that E1 and E2 are considered independent, since $P(E1 \cap E2) = P(E1) \times P(E2)$: knowing that one occurred doesn't change your best guess of the likelihood of the other.  The events are independent within the toss of a single randomisation machine.
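That single-die independence can be checked by enumeration.  Below is a minimal sketch of my own (the names are illustrative, not from any library), using exact fractions to verify the product rule:

```python
from fractions import Fraction

outcomes = set(range(1, 7))   # one six-sided die, one toss
E1 = {2, 4, 6}                # toss an even number
E2 = {1, 2, 3, 4}             # toss a 1, 2, 3 or 4

def prob(event):
    # classical probability: favourable outcomes over total outcomes
    return Fraction(len(event & outcomes), len(outcomes))

p1, p2, p12 = prob(E1), prob(E2), prob(E1 & E2)
print(p1, p2, p12)        # 1/2 2/3 1/3
print(p12 == p1 * p2)     # the product rule holds: independent
```

Note there is only one randomisation machine here; independence falls out of how the two events overlap on its sample space.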

In a previous posting, I mentioned having 52 cards strung out with 52 people; when someone decides, they pick up a card and, in that act, disable that possibility for the other 51.  This system is mutually exclusive.  You can create independence by splitting the audio link into two channels.  The independence of the channels creates an independent pair of randomisation machines.

As the second example hinted, independence means $P(E1 \cap E2) = P(E1) \times P(E2)$.  The most obvious way in which this can happen over one or more randomisation machines is for it to happen over two machines, where E1 can only happen as an outcome of machine 1 and E2 of machine 2.  This is what you might call segregated independence: all the ways E1 can be realised happen to be on randomisation machine 1, and all the E2s on a second randomisation machine.  Example two, by contrast, could be called technical independence.
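Segregated independence can be checked the same way.  Here is a minimal sketch of my own illustration, with E1 read only off die 1 and E2 only off die 2:

```python
from fractions import Fraction
from itertools import product

pairs = set(product(range(1, 7), repeat=2))      # two dice: 36 ordered outcomes
E1 = {(a, b) for a, b in pairs if a >= 5}        # machine 1 shows 5 or 6
E2 = {(a, b) for a, b in pairs if b % 2 == 0}    # machine 2 shows an even face

def prob(event):
    return Fraction(len(event), len(pairs))

p1, p2, p12 = prob(E1), prob(E2), prob(E1 & E2)
print(p1, p2, p12)   # 1/3 1/2 1/6 -- the product rule again
```

The same product rule is satisfied, but here it is guaranteed by the segregation of the two events onto two machines rather than by a numerical coincidence on one sample space.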

As the single randomisation machine becomes more complex (12 faces instead of 6, then 24 faces, 1,000 faces, a countably large number of faces), it becomes clear that independence of a rich kind is entirely possible with just one source of randomness.  Another way of saying this is that multiple sources of randomness are just one way, albeit the most obvious way, of achieving independence.  Hence my relegating that idea to the second tier in importance.

### One gambler wiped out, the other withdraws his interest

Insofar as odds are the product of a bookmaker, they reflect not true chances but bookie-hedged, risk-neutral odds.  So right at the birth of probability theory you had a move from risk-neutral odds to risk-neutral slices, in the sense of dividing up a pie.  The odds, remember, reflect the betting action, not directly the likelihood of the respective outcomes.  If there's heavy betting in one direction, then the odds (and the corresponding probability distribution) will reflect it, regardless of any participant's own opinion on the real probabilities.  Those subjective assessments of the real likelihood start, at their most general, as a set of prior subjective probability models in each interested party's head.  Ongoing revelation of information may adjust those distributions.  If the event being bet on is purely random (that is, with no strategic element, a distinction Cardano made), then one or more participants might correctly model the situation in a way which is as good as they'll ever need, that is, immune to new information.  For example, the rolling of two dice and the relative occurrence of pips summing to 10 versus pips summing to 9 is the basis of a game where an interested party may well hit upon the theoretical outcomes implied by Cardano and others, and would stick with that model.
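The 10-versus-9 comparison can be settled once and for all by brute enumeration of the 36 ordered outcomes; this little sketch of mine is exactly the kind of model a participant could adopt and never revise:

```python
from itertools import product

# Enumerate every ordered two-dice outcome and count the target sums.
rolls = list(product(range(1, 7), repeat=2))
nines = sum(1 for a, b in rolls if a + b == 9)    # (3,6) (4,5) (5,4) (6,3)
tens  = sum(1 for a, b in rolls if a + b == 10)   # (4,6) (5,5) (6,4)
print(nines, tens)   # 4 3 -> summing to 9 is the better bet
```

No further information about any particular pair of dice (assumed fair here) can improve on this count, which is the sense in which the model is immune to news.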

Another way of putting this is to say that probability theory only coincidentally cares about correspondence to reality.  This extra property of a probability distribution over a sample space is not in any way essential.  In other words, the fair value of these games, or the various actual likelihoods, are just one probability distribution of infinitely many for the game.

Yet another way of putting this is to say that the core of the theory of probability didn't need to arise from the analysis of the fair odds of a game.  The discoverers would have been familiar with bookies' odds and how they may differ from likely-outcome odds.  Their move was in switching from hedge odds of "a to b" to hedge probabilities of $\frac{b}{a+b}$.  That the theory did bind this up with a search for fair odds is no doubt partly due to the history of the idea of a fair price, dating in the Christian tradition at least as far back as Saint Thomas Aquinas.
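That switch is mechanical enough to write down in a line.  The helper name below is my own invention, purely for illustration:

```python
from fractions import Fraction

def hedge_probability(a, b):
    """Convert bookmaker odds of 'a to b' against an outcome
    into the implied probability b / (a + b)."""
    return Fraction(b, a + b)

print(hedge_probability(2, 1))   # odds of 2 to 1 against -> 1/3
print(hedge_probability(1, 1))   # even money -> 1/2
```

Nothing about the conversion requires the odds to be fair; it turns whatever the betting action produced into a number between 0 and 1.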

Imagine two players, Pascal and Fermat, playing a coin-tossing game.  They both arrive with equal bags of coins, which represent their two wagers, and hand these to the organisers, who take care of the pair of wagers.  Imagine they each come with 6,000,000 USD.  The organisers hand out six tokens each, made of plastic and otherwise identical-looking.  Then the coin is brought out.  Everyone knows that the coin will be very slightly biased, but only the organisers know precisely to what degree, or whether towards heads or tails.  The game is simple.  Player 1 is the heads player, player 2 the tails player.  Player 1 starts.  He tosses the coin.  If it lands heads, he takes one of his opponent's plastic tokens and puts it in his pile; he'd then have 7 to his opponent's 6.  If it lands tails, he surrenders one of his own tokens to his opponent.  Then the opponent takes his turn, collecting on tails and paying out on heads.  The game ends when the winner holds all 12 tokens and the loser has 0.  The winner keeps the 12,000,000 USD, a tidy 100% profit for an afternoon's work; the loser has just lost 6,000,000 USD.  Each player can quit the game at any point.
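This is the classic gambler's ruin setup, and the standard closed-form result prices it directly.  A sketch of my own (the function name is illustrative), showing both the fair-coin case and how even a tiny bias compounds:

```python
from fractions import Fraction

def ruin_win_prob(i, N, p):
    """Gambler's ruin: probability the heads player, holding i of N tokens,
    ends up with all N, when each toss lands heads with probability p."""
    q = 1 - p
    if p == Fraction(1, 2):
        return Fraction(i, N)          # symmetric case: simply i/N
    r = q / p
    return (1 - r**i) / (1 - r**N)     # standard biased-coin formula

print(ruin_win_prob(6, 12, Fraction(1, 2)))      # 1/2: a fair fight
print(ruin_win_prob(6, 12, Fraction(51, 100)))   # a 51% coin: well above 1/2
```

The organisers' secret bias matters enormously over a long game: a 1% edge per toss translates into a much larger edge on the whole contest.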

Meanwhile this game is televised and on the internet.  There are 15 major independent betting cartels around the world taking bets on the game.  In each of these geographic regions, the betting is radically different, leading to 15 sets of odds on a Pascal or a Fermat victory.

Totally independent of those 15 betting cartels, a further 15 cartels have an inside bet on, which pays out if you guessed who would be first to see 6 victories, not necessarily in a row.

Now this second game is nested inside the first, since you can't finish the first game without having collected at least 6 wins along the way.  Pascal and Fermat don't know or care about the inner game.  They're battling it out for total ownership of the tokens, at which point their game ends.  The second set of betting cartels is guaranteed a result in at most 11 tosses every time, and possibly in as few as 6.
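The 6-to-11-toss claim is small enough to check exhaustively over all $2^{11}$ toss sequences; the sketch below is my own illustration:

```python
from itertools import product

# For every possible sequence of 11 tosses, find the toss on which
# one side first reaches 6 wins ("first to six", wins need not be in a row).
lengths = set()
for seq in product('HT', repeat=11):
    heads = tails = 0
    for n, toss in enumerate(seq, start=1):
        if toss == 'H':
            heads += 1
        else:
            tails += 1
        if heads == 6 or tails == 6:
            lengths.add(n)
            break

print(min(lengths), max(lengths))   # 6 11
```

Every sequence resolves by toss 11, because among 11 tosses one side must have at least 6; and no sequence can resolve before toss 6.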

Just by coincidence, Fermat, player 1, gets 4 heads in a row, bringing him to 10 of the 12 tokens.  He needs only 2 more wins for total ownership.  At this point Pascal decides to quit the game.  To bettors in the first set of cartels it looks like Pascal and Fermat are playing gambler's ruin; to the second set it looks like they're playing 'first to get six wins', which is the game the real Pascal and Fermat analysed in their famous letters.
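Assuming, purely for illustration, a fair coin, the two sets of cartels would price the abandoned position differently.  The inner game is the problem of points (Fermat needs 2 more wins, Pascal 6), settled by Pascal and Fermat's counting argument; the outer game is gambler's ruin from 10 tokens of 12.  A sketch, with a helper name of my own:

```python
from fractions import Fraction
from math import comb

def points_share(need_a, need_b):
    """Problem of points with a fair coin: probability the player needing
    need_a more wins beats the player needing need_b, found by playing out
    at most need_a + need_b - 1 notional further tosses."""
    n = need_a + need_b - 1
    favourable = sum(comb(n, k) for k in range(need_a, n + 1))
    return Fraction(favourable, 2**n)

print(points_share(2, 6))   # 15/16: Fermat's fair share of the inner stakes
print(Fraction(10, 12))     # 5/6: his gambler's-ruin chance from 10 tokens
```

The same physical position yields two different fair prices because the two audiences are watching two different games end, which is rather the point of the story.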

Soon after, Pascal's religious conversion wipes out his gambling dalliance, and Fermat, only ever partly engaged with the problem, withdraws his interest.  Both men metaphorically enact gambler's ruin and the problem of points.