Of the great moments in the history of classical probability theory, three stand out. First comes the idea of a relative frequency, which in essence is a discovery about the stability of certain kinds of measurement outcomes: one relative frequency compared with other relative frequencies, the set of them together accounting for a disjoint and complete set of possible outcomes. This idea must have been at least *comprehensible*, if not actually *thought about*, since the very first time a human could throw a lump of shit at a wall - the likelihood of it sticking to any brick is proportional to that brick's (relative) wall area. In other words, even if no-one explicitly had that thought, I can easily imagine that if I suggested to people in prehistoric times to bet on which brick the shit would land on, their choices would be partly determined (all other things being equal) by the brick's presenting surface area.

The second great moment in the intellectual history of classical probability is the creation of an artificially regular event space, the so-called equi-probable event space.

This simplifying adjustment to the idea of a stable and complete set of relative frequencies allows analysts physically to count tiles to work out likelihoods or (proto-)probabilities - each being a ratio of one count over another, larger one. Each of the $n$ equi-probable tiles has a probability of $\frac{1}{n}$, and of course $\sum_{i=1}^{n} \frac{1}{n} = 1$.
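The tile-counting idea can be sketched in a few lines of Python. The wall and tile counts here are invented purely for illustration; the point is that an equi-probable probability is just one count divided by a larger one, and that the $n$ equal shares sum to one.

```python
from fractions import Fraction

# A hypothetical mosaic wall: probability as a ratio of two counts.
total_tiles = 120   # the whole wall (assumed figure)
red_tiles = 30      # the event of interest: the lump lands on a red tile

p_red = Fraction(red_tiles, total_tiles)
print(p_red)  # 1/4

# Each individual tile has probability 1/n, and the n shares sum to one.
assert sum(Fraction(1, total_tiles) for _ in range(total_tiles)) == 1
```

Using `Fraction` rather than floats keeps the ratio exact, which matches the spirit of counting tiles rather than measuring areas.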

Explicit counting only gets you so far before you start making errors. Imagine immense walls of tens of thousands of mosaic tiles. As problem complexity increases, counting needs to be industrialised. Enter the third great idea to become relevant - namely the application of counting and permutation rules as a way of formalising and regularising the process of counting enormous fractional event spaces. The formulae for these counting power tools are $C_{n,k} = \frac{n!}{k!(n-k)!}$ for combinations and $P_{n,k} = \frac{n!}{(n-k)!}$ for permutations, and the 'cheat sheet' is known as Pascal's triangle.
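These counting power tools are built into Python's standard library, so the factorial definitions above can be checked directly, and a row of the Pascal's triangle 'cheat sheet' is just the combinations $C_{n,k}$ for $k = 0 \dots n$. A minimal sketch:

```python
from math import comb, perm, factorial

n, k = 5, 2

# C(n,k) = n! / (k!(n-k)!) counts unordered selections of k from n.
assert comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))

# P(n,k) = n! / (n-k)! counts ordered selections of k from n.
assert perm(n, k) == factorial(n) // factorial(n - k)

# Row n of Pascal's triangle lists C(n, k) for k = 0..n.
def pascal_row(n):
    return [comb(n, k) for k in range(n + 1)]

print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]
```

Note that $P_{n,k} = C_{n,k} \times k!$: a permutation is a combination with an ordering imposed on it.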

A fourth moment (not so much of classical probability as of modern decision theory and psychology) is the ongoing sequence of discoveries of the biases and flaws, the self-deceptions and regularly occurring mistakes humans make in estimating subjective probabilities. This brings in the work of Kahneman and Tversky and the subject of behavioural finance, which aims to discover the perceptual and cognitive biases real humans inject into the mathematically rather more rational classical probabilistic modelling approach our cultures have achieved so far in history.

Two final pleasing elements of the wall metaphor.

One. The wall has two relevant dimensions, which reminds me of the law of multiplication of outcomes for independent events. If an experiment has $n_1$ possible outcomes and a second, independent experiment has $n_2$ possible outcomes, then the joint experiment has $n_1 \times n_2$ outcomes - just like the length-wise and breadth-wise brick counts of a wall. This generalises nicely to many dimensions: $k$ independent experiments yield $\prod_{i=1}^{k} n_i$ joint outcomes, and if each experiment's outcomes are equi-probable, each joint outcome has probability $\prod_{i=1}^{k} \frac{1}{n_i}$.
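The multiplication law can be checked by brute enumeration. The brick counts below are made up for illustration: a wall four bricks long and three bricks high gives twelve joint outcomes, each equi-probable one carrying probability $\frac{1}{4} \times \frac{1}{3}$.

```python
from fractions import Fraction
from itertools import product

# Two independent 'experiments': picking a column and a row of the wall.
lengthwise, breadthwise = 4, 3  # assumed brick counts

# Enumerating the joint outcomes confirms n1 * n2.
joint = list(product(range(lengthwise), range(breadthwise)))
assert len(joint) == lengthwise * breadthwise  # 4 * 3 = 12

# With equi-probable outcomes, each joint outcome has probability 1/(n1*n2).
p = Fraction(1, lengthwise) * Fraction(1, breadthwise)
print(p)  # 1/12
```

`itertools.product` is exactly the joint-experiment construction: its output length is always the product of its inputs' lengths, however many dimensions you feed it.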

Two. Walls are human constructs which help make buildings, another wonder of human culture. As both of these technologies evolved, so too did the edifices they enabled become more marvellous.