Tuesday 19 March 2013

Probability Preferences : Independence is primary, multiple random sources secondary

I have already talked about the absolute importance of mutual exclusivity (disjointness) to probability theory, and how it enables the addition of probabilities.  I'd now like to chat about independence.  Remember I said that pairwise disjoint events are absolutely dependent, in the sense that knowing one happened tells you everything you need to know about whether the other happened.  Note that the converse does not hold: you can also have absolutely dependent events which are nevertheless not mutually exclusive.  I will give three examples, though of course the classic example of independence is two (or more) separate randomisation machines in operation.

Take a die.  Give each of its six faces a different colour.  Then give the faces six distinct figurative etchings.  Then add six distinct signatures to the faces.  When you roll this die and are told it landed red face up, you know with certainty which etching landed face up, and which signature is on that face.  The three events are absolutely dependent, yet they are not mutually exclusive, since they all occur together.
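The decorated die can be sketched by enumeration.  The face labels below (colour names, "etch1", "sig1" and so on) are purely illustrative, not from the original post; the point is that the attributes are locked to the same face, so the events overlap completely.

```python
from fractions import Fraction

# Hypothetical labelling: each face carries one colour, one etching
# and one signature, so the three attributes always occur together.
faces = [
    ("red", "etch1", "sig1"),
    ("blue", "etch2", "sig2"),
    ("green", "etch3", "sig3"),
    ("yellow", "etch4", "sig4"),
    ("black", "etch5", "sig5"),
    ("white", "etch6", "sig6"),
]

red = {f for f in faces if f[0] == "red"}
etch1 = {f for f in faces if f[1] == "etch1"}

def prob(event):
    return Fraction(len(event), len(faces))

# The events intersect, so they are not mutually exclusive...
assert prob(red & etch1) == Fraction(1, 6)
# ...yet they are absolutely dependent: given "red", "etch1" is certain.
assert prob(red & etch1) / prob(red) == 1
```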

Take another die, with the traditional pips.  Event E1 is the tossing of an even number.  Event E2 is the tossing of a 1, 2, 3 or 4. $P(E1)=\frac{1}{2}$ and $P(E2)=\frac{2}{3}$.  The event $E1 \cap E2$ is satisfied only by throwing a 2 or a 4, so $P(E1E2) = \frac{1}{3}$.  This means, weirdly, that E1 and E2 are considered independent, since knowing that one occurred doesn't change your best guess of the likelihood of the other.  The events are independent within the toss of a single randomisation machine.
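The arithmetic of this second example can be checked directly by enumerating the six faces:

```python
from fractions import Fraction

faces = range(1, 7)
E1 = {f for f in faces if f % 2 == 0}  # even number: {2, 4, 6}
E2 = {1, 2, 3, 4}

def prob(event):
    return Fraction(len(event), 6)

assert prob(E1) == Fraction(1, 2)
assert prob(E2) == Fraction(2, 3)
# The intersection {2, 4} satisfies the product rule, so E1 and E2
# are independent on a single toss of one die.
assert prob(E1 & E2) == prob(E1) * prob(E2) == Fraction(1, 3)
```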

In a previous posting, I mentioned 52 cards strung out among 52 people: when someone decides, they pick up a card, and in that act disable that possibility for the other 51.  This system is mutually exclusive.  You can create independence by splitting the audio link into two channels.  The independence of the channels creates an independent pair of randomisation machines.

As the second example hinted, independence means $P(E1E2) = P(E1) \times P(E2)$.  The most obvious way for this to happen is over two machines, where E1 can only be realised as an outcome of machine 1 and E2 as an outcome of machine 2.  This is what you might call segregated independence: all the ways E1 can be realised happen to be on randomisation machine 1, and all the ways E2 can be realised are on a second randomisation machine.  The second example could be called technical independence.
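Segregated independence can be sketched by taking the sample space of two separate dice as ordered pairs; the particular events below (a six on machine 1, an even number on machine 2) are illustrative choices, but any event confined to one machine paired with any event confined to the other satisfies the product rule automatically:

```python
from fractions import Fraction
from itertools import product

# Sample space of two separate randomisation machines: ordered pairs (a, b).
space = list(product(range(1, 7), repeat=2))

E1 = {(a, b) for a, b in space if a == 6}      # realised only on machine 1
E2 = {(a, b) for a, b in space if b % 2 == 0}  # realised only on machine 2

def prob(event):
    return Fraction(len(event), len(space))

# Segregated independence: the product rule holds by construction.
assert prob(E1) == Fraction(1, 6)
assert prob(E2) == Fraction(1, 2)
assert prob(E1 & E2) == prob(E1) * prob(E2) == Fraction(1, 12)
```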

As the single randomisation machine becomes more complex - 12 faces instead of 6, then 24 faces, 1,000 faces, a countably large number of faces - it becomes clear that independence of a rich kind is entirely possible with just one source of randomness.  Another way of saying this is that multiple sources of randomness are just one way, albeit the most obvious way, of achieving independence.  Hence my relegating that idea to the second tier in importance.
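To illustrate with a slightly richer machine (a hypothetical 12-sided die, not an example from the original post), the same pattern as the pips example reappears from a single source of randomness:

```python
from fractions import Fraction

# One 12-sided randomisation machine, faces labelled 1..12.
faces = range(1, 13)

E1 = {f for f in faces if f % 2 == 0}  # even: probability 1/2
E2 = {f for f in faces if f <= 8}      # 1..8: probability 2/3

def prob(event):
    return Fraction(len(event), 12)

# The intersection {2, 4, 6, 8} has probability 4/12 = 1/3,
# matching the product rule: independence from one die.
assert prob(E1 & E2) == prob(E1) * prob(E2) == Fraction(1, 3)
```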
