Sunday 17 November 2019

Putting all your eggs in the safest basket known to man

In the end it is somewhat ironic that Markowitz starts off concerned that real investors don't do what the Burr Williams approach logically implies, namely put all their investment in the single asset with maximal $E[r']$.  That concern leads to the efficient frontier set of portfolios.  Yet had he put the risk-free asset in there, his efficient frontier would have collapsed down to a single line, replicating the capital market line anyway.

Isn't it also weird that no-one is concerned that the efficient frontier, on its way to becoming the CML, remains nonlinear until the final moment?  The straightening also flattens out the curvature, and hence the juice, the value, of the free lunch, namely diversification.  The minimum variance portfolio which contains treasury bills plus all of the stocks in the stock market is one where you have 100% of your assets in bills.

Automating the journey along the Capital Market Line

With the arrival of the ill-fitting Capital Market Line onto the efficient frontier, that eternal Achilles heel of portfolio selection, the moment when the analyst has to go back to the investor and ask where they prefer to be on the efficient frontier, reappears as the question of where the investor ought to be on the Capital Market Line.  Markowitz, having been taught by the Bayesian Jimmy Savage, had always been comfortable with treating the expected returns and covariances as a modellable step.  Soon to come was Fama, telling the world that, at least up to that point, the observed price history of a security was the best model of its returns, which somewhat diverted the intellectual impetus away from producing a proper Bayesian framework, as per Black and Litterman.

However, I think even the investor's self-estimate of how risk averse they are could perhaps be recalibrated as a decision, not a preference.  Making it a decision allows the tools of decision theory to make suggestions here.  Rather like the claimed potential benefits of driverless cars: traffic smoothing, lower insurance, lower accident rates, fewer cars.

When an investor is asked for his preference, I presume he is in effect making some kind of unspoken decision, either then or at some point in the past.  And after all, how stable, how low a variance, is attached to the investor's risk appetite?  Using words like 'appetite' makes it sound like Keynes's 'animal spirits' inside the hearts of those people who make business investment decisions.  If there is in principle the idea of varying appetites, which of course lies behind the final stage of classic Markowitzian portfolio selection, then how rational can that set of disparate appetites be?

Asking the question opens a Pandora's box, the same box which remains closed in other realms of seeming human freedom: the law, the consumption of goods.

Here's my initial stab at offering a model which accommodates multiple simultaneous appetites while retaining an element of rationality.  That's not to say that a behavioural scientist couldn't spot irrational decisions in practice.  Rather like CAPM, this attempt to remain rational might provide an interesting theoretical framework.

Aged 90, if we're lucky, on our death bed, we don't need to worry too much any more about our future investments.  Clearly, the older you are, the more you can afford to backslide down the CML towards the $R_f$ point, ceteris paribus.  Conversely, the more of your expected lifetime consumption is ahead of you, the more you'd be tempted to inch up the CML.

Trying to take each element separately is difficult since there are clearly relationships between them.  Element two is wealth.  Clearly the average person becomes wealthier the older they get.  But, assuming a more or less stable spending pattern, it could be argued (billionaires aside) that as wealth increases, the investor relaxes back down the CML.

Both of these first two elements are in a loose sense endogenous; the third element, insofar as it can be known, would be exogenous: if you have a model which tells you how likely it is that there's going to be a recession, then that could drive you up and down the CML.

Perhaps what's needed is an overrideable automation switch for where the investor resides on the CML right now.  That is, a personalisation process which guides the investor, which demonstrates to the investor how they deviate from the average CML lifetime journey.
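As a rough sketch of what such an overridable automation might look like, here is a hypothetical age-based glide path down the CML.  All the parameters (retirement age, horizon, floor weight) are invented for illustration, not derived from any real personalisation process:

```python
def cml_weight(age, retirement_age=65, horizon=90, floor=0.1):
    """Hypothetical glide path: fraction of wealth held in the tangency
    portfolio M, with the remainder lent at R_f (bills).

    Full risk until retirement, then a linear backslide down the CML
    towards the floor weight as the investment horizon runs out."""
    if age >= horizon:
        return floor
    if age <= retirement_age:
        return 1.0
    remaining = (horizon - age) / (horizon - retirement_age)
    return floor + (1.0 - floor) * remaining
```

An investor's own choices could then be reported as deviations from this default journey.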

Element four could be mark-to-market performance.  Say the investor has a short-term institutional hurdle to overcome, e.g. they would be happy making 10% each year; that is, they're not just maximising lifetime expected wealth but have equally important short-term goals.  Let's say we count a year as running from October to September and the investor achieves 10% by April one year.  Perhaps he'd be content to step away from the volatility from April to September that year.  By asking an investor what their risk appetite is, you're already asking them to accept a non-market return, since only those investors who stick stubbornly to the tangency portfolio will earn the market return.  Most investors will spend most of their investment life somewhere between $R_f$ and M, probably closer to M, so they will have accepted a lower expected return anyway.  Some portfolio managers and hedge fund managers already run their businesses akin to this: they have perhaps loose monthly or quarterly targets and will deliberately step off the gas on occasion in good periods.  Conversely, they might increase leverage disastrously if they feel they're in catch-up, a move often described as doubling down, after the famous Martingale betting strategy.
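Element four can be sketched as a trivial rule: once the year's hurdle is met, step down the CML for the rest of the year.  The hurdle and the two weights below are made-up illustrations:

```python
def cml_weight_after_hurdle(ytd_return, hurdle=0.10,
                            base_weight=0.8, derisked_weight=0.2):
    """Hypothetical element-four rule: run at base_weight along the CML
    until the year's hurdle (say 10% by April) has been met, then lock
    the gain in by stepping down towards R_f for the rest of the year.

    Note the deliberate asymmetry: a bad year does NOT push the weight
    above base_weight -- that would be the Martingale-style doubling
    down the text warns against."""
    return derisked_weight if ytd_return >= hurdle else base_weight
```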

The argument against element three above is as follows: first, many hedge funds have tried to beat the market.  Not many do so consistently, and even fewer do it for macro-economic reasons.  So no such exogenous model could easily be built.  In its favour, though, is the fact that this model trades only in highly liquid assets (e.g. e-minis and treasury bills), which would reduce transaction costs.  Secondly, transaction costs these days can be built in as a constraint at the portfolio optimisation step.  Another point in its favour is that the economic model can be made to generate actionable signals at a very coarse level.  A counter-cyclical model, which pushes you up the CML after a recession has occurred, would be a first step (running the risk of all the portfolio insurances of the past); a second step would be a model which slightly accelerates the backsliding only once the current economic expansion is well beyond the normal range of duration for economic expansions.  This second step in effect relies on there being some form of stability to the shape and duration of the credit cycle.
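That second step might be sketched as a rule which trims the CML weight once the current expansion has run beyond its typical duration.  The duration figures below are illustrative placeholders, not sourced statistics:

```python
def expansion_trim(months_in_expansion, typical_mean=58,
                   typical_sd=34, max_trim=0.3):
    """Hypothetical counter-cyclical overlay: once the current expansion
    runs more than one standard deviation past a typical duration
    (the 58/34-month figures are invented placeholders), accelerate the
    backslide down the CML, trimming the risky weight linearly up to
    max_trim."""
    excess = months_in_expansion - (typical_mean + typical_sd)
    if excess <= 0:
        return 0.0
    return min(max_trim, max_trim * excess / typical_sd)
```

Its coarseness is deliberate: the signal changes only slowly, keeping turnover and hence transaction costs low.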

Leaving the investor's risk preferences as an unopened black box seems to be missing a trick.

Saturday 16 November 2019

Sharpe Hacks the Efficient Frontier Diagram

Markowitz didn't add the capital market line to his famous efficient frontier diagram.  The idea of a capital market line dates back at least as far as Irving Fisher, and it was Tobin, in his 1958 "Liquidity Preference as Behavior toward Risk", who references the earlier work of Markowitz and asks the question: why would a rational investor ever own his government's zero-yield obligations (cash, to you and me) rather than that same government's non-zero-yielding bonds (or bills)?

He could have considered, and perhaps he did think this way, interest-bearing government obligations as just one more asset to drop into the portfolio.  But, according to the following informative blog post on the subject of Tobin's separation theorem, there's a better way of doing this.  Before going into it in detail, note that the theory is becoming more institutionalised here, treating risk-free lending as a separate element of the portfolio selection problem.  In effect, we've added a rather arbitrary and uniquely characterised asset.  Not only that, I've always thought the capital market line is a weird graft for Lintner (1965) and Sharpe (1964) to add on to the efficient frontier diagram.

The efficient frontier diagram is a $(\sigma, r)$ space, it is true, but in Markowitz's formulation, each point in this space is also a collection of zero or more different portfolio combinations.  Whereas, when you add the CML, whilst it is true that the portfolio with proportion $p$ in cash and $1-p$ in the market portfolio is, at that higher level, a unique portfolio for each point on the line, we are coming from a semantic interpretation of the efficient frontier where each distinct point contains a different portfolio of risky assets.  On the CML, every point holds precisely the same risky portfolio (in general the tangency portfolio, before we get to the CAPM step), more or less watered down with cash.

As a side note, Markowitz has been noticeably critical of the CAPM assumptions of limitless lending and borrowing at the risk-free rate, and of unbounded shorting.  In other words, he's likely to approve of the CML from $R_f$ up to the point it hits the market portfolio, at which point, like a ghost train shunted on to the more realistic track, he would probably proceed along the rest of the efficient frontier.

Clearly, the tangency portfolio has the highest Sharpe ratio of any portfolio on a line emanating from the risk-free rate on the ordinate.  Sharpe and Lintner were to argue that this point happens to be the market portfolio (on the assumption that all investors hold the same $E[r]$ and $E[\sigma^2]$ expectations, and all care only for these two moments in making their investment decisions).
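The tangency point can be computed directly: with shorting allowed, the Sharpe-maximising weights are proportional to $\Sigma^{-1}(\mu - R_f\mathbf{1})$, normalised to sum to one.  A minimal numerical sketch with invented inputs:

```python
import numpy as np

# Invented inputs: three risky assets.
mu = np.array([0.08, 0.12, 0.10])          # expected returns E[r]
cov = np.array([[0.04, 0.01, 0.00],        # covariance matrix
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
rf = 0.03                                   # risk-free rate R_f

# Tangency weights are proportional to Sigma^{-1}(mu - rf), normalised.
raw = np.linalg.solve(cov, mu - rf)
w_tan = raw / raw.sum()

sharpe_tan = (w_tan @ mu - rf) / np.sqrt(w_tan @ cov @ w_tan)

# Any other fully invested portfolio sits on a flatter ray from R_f,
# e.g. the equal-weight portfolio:
w_eq = np.full(3, 1 / 3)
sharpe_eq = (w_eq @ mu - rf) / np.sqrt(w_eq @ cov @ w_eq)
```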

The addition in this way of the tangency line always felt like a geometric hack around the then-costly step of having to run a brand new portfolio optimisation with treasuries added as the $(N+1)$th asset.  Remember, at this point the CAPM hadn't been postulated and the tangency portfolio was not necessarily the market portfolio, so the tangency portfolio was still going to have to be found by optimisation.  However, the CML from $R_f$ to $M$, as it approaches $M$, is actually above (and hence better than) the original efficient frontier.  And again, if one were happy with the assumption that one can borrow limitlessly, then all points to the right of $M$ would be objectively higher, and hence more preferred, than those on the original efficient frontier.

However, how would things look if you plotted the CML plus the original efficient frontier together with the new efficient frontier built with treasuries as the $(N+1)$th asset?  Clearly that new frontier would be closer to the line, and flatter.  And Markowitz would then ask the investor to choose where they want to be on the new $N+1$ (nonlinear) efficient frontier.  Also, the linear regression of stocks via a sensitivity $\beta_i$ to the market would not be such a done deal.

A further problem I have with the CML is that treasuries, even bills, surely do have variance, albeit very small; only if the one-period analysis matches precisely the maturity of the bill will there be no variance.  Perhaps on an $(N+1)$-asset efficient frontier, the CML isn't the line with the highest $\frac{R_p - R_f}{\sigma_p}$.  I can well imagine that, leaving the original CML on the graph, as you chart the new Markowitz $N+1$ frontier, there'd be points along that new frontier which have better risk-return profiles than those of the CML associated with the $N$-asset portfolio.


As a matter of academic fact, Sharpe actually attached the CML to the efficient frontier first in his 1963 paper "A Simplified Model for Portfolio Analysis", where he treats the regression step (the one which ultimately leads to his concept of beta, and which makes the attachment to economic equilibrium theory) merely as an optimisation to reduce the number of estimable parameters.  In the same vein, he sees the CML (the idea for which he doesn't credit Tobin or Fisher, whereas a year later, in his classic CAPM paper, he does credit Tobin, who himself doesn't credit Fisher) as a speedup only.  He says:
There is some interest rate $r_i$ at which money can be lent with virtual assurance that both principal and interest will be returned; at the least, money can be buried in the ground ($r_i=0$).  Such an alternative could be included as one possible security ($A_i = 1+r_i, B_i=0, Q_i=0$) but this would necessitate some needless computation.  In order to minimise computing time, lending at some pure interest rate is taken into account explicitly in the diagonal code.
Wow.  What a poor reason, in retrospect, for doing it this way.  By 1964 he had found his economic justification, namely that it theoretically recapitulated a classical Fisherian capital market line.  But in 1963 it was just a hack.  Even his choice of variable names, $A_i$ for $E[r_i]$ and $Q_i$ for $\sigma_i$, showed where his head was at: he was an operations research guy at this point, working with Markowitz at an operations research private firm.

At the very least, it seems to me, there's no theoretically good reason why we can't just add a risk-free asset into the mix and do away with the CML.  That way, we'd get a touch of variance in the asset, and a degree of purity back: Markowitzian framework purity.  CAPM is certainly needed to produce beta, the major factor, but that after all is a function of the security market line, a different line.
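This is easy to test numerically: give a bill a tiny (assumed) variance, drop it in as the $(N+1)$th asset, and run the classic unconstrained mean-variance optimisation.  At a given target return, the resulting frontier volatility all but reproduces the CML volatility.  A sketch with invented inputs:

```python
import numpy as np

# Three risky assets (invented inputs) plus a T-bill as the (N+1)th
# asset, given a tiny but non-zero variance as argued above.
mu = np.array([0.08, 0.12, 0.10, 0.03])
cov = np.zeros((4, 4))
cov[:3, :3] = [[0.04, 0.01, 0.00],
               [0.01, 0.09, 0.02],
               [0.00, 0.02, 0.06]]
cov[3, 3] = 1e-6   # the bill's small variance (assumed figure)
rf = 0.03

def frontier_weights(mu, cov, target):
    """Unconstrained Markowitz frontier portfolio (shorting allowed)
    for a given target expected return."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target) / d
    gam = (a * target - b) / d
    return inv @ (lam * ones + gam * mu)

target = 0.07
w = frontier_weights(mu, cov, target)
vol = np.sqrt(w @ cov @ w)

# The CML built from the three risky assets and a truly riskless bill:
raw = np.linalg.solve(cov[:3, :3], mu[:3] - rf)
w_tan = raw / raw.sum()
sharpe = (w_tan @ mu[:3] - rf) / np.sqrt(w_tan @ cov[:3, :3] @ w_tan)
cml_vol = (target - rf) / sharpe
# vol and cml_vol are almost identical: the N+1 frontier recovers the CML.
```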


Wednesday 6 November 2019

The meat of Markowitz 1952

In the end, what Markowitz 1952 does is twofold:

First, it introduces the problem of minimising variance subject to constraints, in the application context of portfolios of return-bearing entities.  Once introduced, the case of a small number of entities is solved geometrically.  By 1959, the preferred solution was the simplex method.  By 1972, Black had noted that all you need is two points on the efficient frontier to be able to extrapolate every point.  By 2019 there is a plethora of R (and other) libraries which can do this for you.
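Black's observation is easy to demonstrate: frontier weights are affine in the target return, so blending any two frontier portfolios yields a third frontier portfolio, and two points therefore span the whole frontier.  A sketch with invented inputs:

```python
import numpy as np

# Invented inputs: three risky assets.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])

def frontier_weights(target):
    """Unconstrained Markowitz frontier portfolio (shorting allowed)
    with expected return `target`."""
    inv = np.linalg.inv(cov)
    ones = np.ones(3)
    a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target) / d
    gam = (a * target - b) / d
    return inv @ (lam * ones + gam * mu)

# Two frontier portfolios span the rest (the two-fund theorem):
w_lo, w_hi = frontier_weights(0.08), frontier_weights(0.12)
alpha = 0.25
w_mix = alpha * w_lo + (1 - alpha) * w_hi   # expected return 0.11
```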

Second, a connection is established with the then-current economic theory of rational utility.  Here he sketches the briefest of arguments for whether his maxim (expected mean maximisation with expected variance minimisation) is a decent model of investment behaviour.  He claims that his rule is more like investment behaviour than speculative behaviour.  However, he makes a typo (one of several I spotted): he claims that, for his maxim, $\frac {\partial U}{\partial E} > 0$ but also that $\frac {\partial U}{\partial E} < 0$, whereas that second inequality should read $\frac {\partial U}{\partial V} < 0$.  He claims that his approximation to the wealth utility function, having no third moment, distinguishes it from the propensity to gamble.  It was to be over a decade later before a proper mathematical analysis appeared of how E-V shaped up as a possible candidate investor utility function and, if so, what an equilibrium world would look like if every investor operated under the same utility function.

Markowitz and expectation

One of Harry Markowitz's aha moments comes when he reads John Burr Williams on equity prices being the present value of future dividends received.  Markowitz rightly tightened this definition up to foreground the fact that the model works under future uncertainty, so the phrase 'present value of future dividends' ought to be 'expected present value of future dividends'.  We are dealing with a probability distribution here, together with some variance expressing our current uncertainty.  When variance here is a now-fact, representing our own measure of ignorance, that fits well with a Bayesian/information-theoretic framework.
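A toy Monte Carlo makes the distinction concrete: with uncertain dividend growth, the 'present value of future dividends' is a distribution, and what we work with is its expectation plus a variance measuring our current ignorance.  All figures below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented inputs: a stock paying dividends with uncertain growth.
d0, growth, sigma = 1.0, 0.02, 0.10   # initial dividend, mean growth, noise
r, horizon = 0.05, 50                 # discount rate, years

def pv_of_dividends():
    """Present value of one simulated dividend path."""
    d, pv = d0, 0.0
    for t in range(1, horizon + 1):
        d *= 1 + growth + sigma * rng.standard_normal()
        pv += d / (1 + r) ** t
    return pv

paths = np.array([pv_of_dividends() for _ in range(20_000)])
expected_pv = paths.mean()   # the 'expected present value' Markowitz insists on
spread = paths.std()         # the variance expressing our current uncertainty
```

With independent growth shocks the simulated mean sits near the deterministic-growth value $\sum_{t=1}^{50} (1.02/1.05)^t \approx 26$, but the spread around it is the point.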

I note that in 1952 the idea of future expected volatility was dramatically under-developed.  It was still two decades away from the Black-Scholes paper and the trading of listed equity options on exchange.  The term 'implied volatility' was not in common finance parlance.

The other interpretation of variance in Markowitz's classic Portfolio Selection, 1952 is that it ought to be the expected future variability in the stock's (or portfolio's, or asset's, or factor's) return.  That is, the first of Markowitz's two stages in selecting a portfolio is making an estimate of the expected return and expected variance of the return stream.

He says:
The process of selecting a portfolio may be divided into two stages. The first stage starts with observation and experience and ends with beliefs about the future performances of available securities. The second stage starts with the relevant beliefs about future performances and ends with the choice of portfolio. 
I'm mentioning this since I think Markowitz thought of minimum variance as a tool in the 'decision making under uncertainty' toolbox, namely that it in effect operationalises diversification, something he comes into the discussion wanting to foreground more than it had been foregrounded in the past.

What has happened largely since then is that maximum likelihood historical estimates of expected return and expected variance have taken precedence.  Of course, this is convenient, but it doesn't need to be so.  For example, imagine that a pair of companies have just entered into an M&A arrangement.  In this case, historical returns tell only a part of the story.

Also, if you believe Shiller 1981, the realised volatility of stock prices in general over the next time period will be much greater than the volatility on show in dividends, and perhaps also not much like the realised volatility of the time period just past.

Taking a step back even further, we are assuming that the relevant expected distribution is of a shape which can appropriately be summarised by a unimodal distribution with finite variance, and that its first two moments give us a meaningful flavour of the distribution.  But again, just think of the expected return distribution of an acquired company halfway through an M&A deal.  It isn't likely to be normal-like, and may well be bimodal.
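A quick simulation of a hypothetical half-completed deal (all probabilities and price moves invented) shows how badly two moments summarise a bimodal outcome:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented deal terms: 70% chance the deal closes near the offer (+15%),
# 30% chance it breaks (-20%), each with some noise around the outcome.
n = 100_000
closes = rng.random(n) < 0.7
returns = np.where(closes,
                   rng.normal(0.15, 0.02, n),   # completion outcome
                   rng.normal(-0.20, 0.05, n))  # deal-break outcome

mean, sd = returns.mean(), returns.std()

# For a normal distribution roughly 38% of the mass lies within half a
# standard deviation of the mean; here almost none does, because the
# mean falls in the trough between the two modes.
near_mean = np.mean(np.abs(returns - mean) < sd / 2)
```

The $(E, V)$ pair here describes a point the stock will almost never visit.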