Sunday, 17 April 2011

Volatility is not uncertainty.

Twelve months ago a hedge fund was created.  Eleven months ago it made a 1% return in its first month; then -2% in its second month, and so on.  By now, its 12 monthly percent returns look like this: $\left\{1,-2,2,\frac{1}{2},\frac{3}{4},-\frac{1}{2},0,0,0,1,-3,-4\right\}$.  It is useful to distinguish where the uncertainty lies.

The sequence of returns is certain enough.  It's a part of history.  You can calculate the sample variance as $\sigma^2 = \frac{1}{11} \sum_{i=1}^{12}(r_i - \bar{r})^2$ where $\bar{r}$ is the average, about $-0.35$ in this case.  And the sample historical volatility comes out at $\sigma \approx 1.8$.  These are all certain.  The calculated volatility tells you with certainty just how variable those returns were.
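If it helps to see the arithmetic spelled out, here is a minimal Python sketch of that calculation (my own illustration; the figures in the comments are rounded):

```python
returns = [1, -2, 2, 0.5, 0.75, -0.5, 0, 0, 0, 1, -3, -4]   # the 12 monthly % returns

n = len(returns)
mean = sum(returns) / n                                      # about -0.35
variance = sum((r - mean) ** 2 for r in returns) / (n - 1)   # sample variance, divide by 11
volatility = variance ** 0.5                                 # about 1.8

print(mean, variance, volatility)
```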

If I took the measured sizes of the planets in our solar system, I could likewise calculate their population variance and volatility.  With certainty.  I would not be saying anything whatsoever about their likelihood of changing in the future.


In the world of finance and investing, we usually perform two extra operations which introduce uncertainty.  First, we decide we want to consider the unknown future and rope in history to help us.  Second, we construct a reference model, random in nature, which we hypothesise has been generating the returns we have seen so far and which will continue to generate returns in the same way into the unknown future.  That's a big second step.

Without wanting right now to go into issues about how valid this is, or even what form the model might take, I'll jump right in and suggest that next month's return is expected to come in between $-2\%$ and $1.4\%$.  As soon as we decided to make a prediction about the unknown future, we added a whole bunch of uncertainty.  By picking our model (which, after all, might be an inappropriate choice), we've added model uncertainty.  By assuming that the future is going to be like the past, we've expressed a level of trust in reality which emboldens us to apply the historical volatility to reduce all the uncertainty we just introduced.
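That band, by the way, looks like nothing more than the historical mean plus or minus one historical volatility; reading it that way is my own assumption, but the arithmetic fits.  A sketch:

```python
# Hypothetical reading of the -2% to 1.4% band: historical mean +/- one historical
# volatility, on the strong assumption that next month is drawn from the same
# process that produced the last twelve.
returns = [1, -2, 2, 0.5, 0.75, -0.5, 0, 0, 0, 1, -3, -4]
mean = sum(returns) / len(returns)
vol = (sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)) ** 0.5
print(f"{mean - vol:.1f}% to {mean + vol:.1f}%")   # roughly -2.1% to 1.4%
```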

A second way you could introduce uncertainty would be to create a guessing game.  Write all 12 returns down on pieces of paper and put them in a hat.  Let a glamorous assistant pull a piece of paper out of the hat.  Then let people bet cash to profit or lose from the difference between the drawn number and the mean.  In those circumstances the volatility of the original returns would help you size your bet.
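A toy simulation of that game, just to make the point concrete (the particular stake-sizing rule here is my own illustration, not a recipe):

```python
import random

returns = [1, -2, 2, 0.5, 0.75, -0.5, 0, 0, 0, 1, -3, -4]
mean = sum(returns) / len(returns)
vol = (sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)) ** 0.5

# Size the stake off the historical volatility, so that a one-volatility gap
# between the drawn number and the mean settles at about 10 units of cash.
stake_per_unit = 10.0 / vol

for _ in range(5):
    drawn = random.choice(returns)            # the glamorous assistant's draw
    payoff = stake_per_unit * (drawn - mean)  # profit or loss on the gap
    print(f"drawn {drawn:5.2f}%, payoff {payoff:6.2f}")
```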



Running bones run their course


I noticed, while reading F.N. David's history of probability, how similar the average information content is in throwing the four sheep heel bones of prehistoric times and in throwing two dice, if you apply an equivalence class typical of the act of tossing, namely losing sight of the order of the tossed objects.  I then worked through the idea of equivalence classes taking a single die as an example.
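The comparison is easy to check numerically.  The sketch below treats a throw as a multiset (order discarded) and computes its Shannon entropy; the astragalus face probabilities of roughly 0.1, 0.4, 0.4, 0.1 are an assumption of mine, the rough figures usually quoted, so the exact number depends on the bones you use:

```python
from itertools import product
from collections import defaultdict
from math import log2

def unordered_entropy(face_probs, n_objects):
    """Shannon entropy (bits) of tossing n identical objects and ignoring their order."""
    outcomes = defaultdict(float)
    for ordered in product(range(len(face_probs)), repeat=n_objects):
        p = 1.0
        for face in ordered:
            p *= face_probs[face]
        outcomes[tuple(sorted(ordered))] += p   # collapse ordered throws into one multiset
    return -sum(q * log2(q) for q in outcomes.values())

print(unordered_entropy([1/6] * 6, 2))             # two dice, order ignored: ~4.34 bits
print(unordered_entropy([0.1, 0.4, 0.4, 0.1], 4))  # four astragali: a little over 4 bits
```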

When you grab a fistful of bones or dice and toss them, you are discarding information because it is cognitively easier for you to lose track of the landing locations of the individual dice.  In other words, when you introduce identical randomisation machines and parallelise their execution, you may not have the capacity to track their order.  Here's an example of how the simpler reality is harder to model mathematically than the more complex reality.  I think this is one of the places which throw people off course when they're trying to learn probability.  It is never clearly explained in any of the probability books I've come across in my life.  We come to the book expecting the models to apply to a simple, perhaps even artificial, reality, and then to work up from there to something more complex.  But most books use tossing examples as the natural first example of equivalence class construction, and the peculiar thing about tossing is that the real human practice has historically been the path of least resistance: ignoring order.
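To see why the 'simpler' act is the mathematically messier one, compare the two views of a throw of two dice (a small sketch of my own): once you discard order, the 21 remaining outcomes are no longer equally likely.

```python
from itertools import product
from collections import Counter

ordered = list(product(range(1, 7), repeat=2))          # 36 ordered outcomes, each 1/36
unordered = Counter(tuple(sorted(o)) for o in ordered)  # 21 unordered outcomes

# Doubles occur one way in 36, mixed pairs two ways in 36: the unordered view
# needs unequal probabilities where the ordered view needed none.
for outcome, ways in sorted(unordered.items()):
    print(outcome, ways, "/ 36")
```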

Multiple dice analysis is easier since all the faces are equiprobable, and I'll go through a couple of examples in a separate post.  In a further post, I'll explain combinations and permutations in general.  Again, I'm not hugely convinced the words combination and permutation are the best descriptions of these rather ad hoc but useful analytical tools.  I know I certainly have had a problem with them.

When it comes to the analysis of 4 astragali combinations, it isn't enough for your equivalence classes to be of the type 'four of the same kind', 'a pair of pairs', etc., as they were for the three dice.  Since the faces are non-equiprobable, I need to distinguish 'four ones' from 'four threes', for example.  So in all I need three levels: the first level, 'a pair of pairs'; the second level, 'a pair of threes and a pair of ones'; and the third level, the combinatorial step, i.e. how many ways you can permute a pair of ones and a pair of threes.
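Here's a sketch of that three-level bookkeeping in Python.  The face probabilities are again my assumed rough figures (about 0.1 for each narrow face, 0.4 for each broad face); real astragali vary bone by bone:

```python
from math import factorial

p = {1: 0.1, 3: 0.4, 4: 0.4, 6: 0.1}   # assumed face probabilities for one astragalus

def prob_of_counts(counts):
    """Probability of an unordered throw given face counts, e.g. {3: 2, 1: 2} for
    'a pair of threes and a pair of ones'.  The multinomial coefficient is the
    third level: the number of ways the four bones can be permuted into those counts."""
    n = sum(counts.values())
    ways = factorial(n)
    prob = 1.0
    for face, c in counts.items():
        ways //= factorial(c)
        prob *= p[face] ** c
    return ways * prob

print(prob_of_counts({3: 2, 1: 2}))   # 6 * 0.4^2 * 0.1^2 = 0.0096
print(prob_of_counts({1: 4}))         # 'four ones'   = 0.1^4 = 0.0001
print(prob_of_counts({3: 4}))         # 'four threes' = 0.4^4 = 0.0256
```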

The Chinese are credited with inventing paper, and by around the 9th century A.D. one of the uses they had put it to was the invention of playing cards.  In fact, it has been suggested that the first deck of cards had 21 different pip-style cards, giving $I = -\sum_{i=1}^{21} \frac{1}{21} \log_2 \frac{1}{21} = \log_2 21 \approx 4.4$ bits, just about the same amount of information as in tossing two dice without care for the dice order.  Again, I find that informational continuity amazing: as each new technological innovation is introduced, it allows a cultural continuity.