Monday, 28 November 2016
A feeling for equity factors
Wednesday, 16 November 2016
Two interpretations of the Security Market Line
The chart documents the relationship between risk and expected return for some particular stock or portfolio. It is interesting that for the purposes of the SML it doesn't matter if you're considering a single stock or a portfolio of stocks with a bunch of weightings.
The Y axis shows the return which the CAPM, at any given moment, expects a set of portfolios to deliver. The X axis shows the beta of that portfolio. So each <x,y> co-ordinate represents a set of portfolios where each member shares the same expected return and the same beta as its cohabitees. The set of portfolios behind a single point can be considered infinite, and of course there are an infinite number of points on the Cartesian plane.
Only the set of portfolios which offer the best return for their risk sit on a single upwardly sloping line referred to as the SML. These on-line portfolios are expected to return the risk free rate plus their beta multiplied by the market's excess return.
The slope of the line is the market's Treynor ratio: the return to be expected from the market now in excess of a (fairly) risk free rate.
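As a minimal sketch of that relationship (the risk free rate and market risk premium below are made-up illustrative numbers, not estimates):

```python
# Sketch of the SML: expected return as a function of beta under CAPM.
# risk_free_rate and market_risk_premium are illustrative assumptions.

def sml_expected_return(beta, risk_free_rate=0.02, market_risk_premium=0.05):
    """Expected return for a portfolio of the given beta.

    market_risk_premium is the slope of the SML, i.e. the market's
    Treynor ratio: excess return per unit of beta.
    """
    return risk_free_rate + beta * market_risk_premium

for beta in (0.0, 0.5, 1.0, 2.0):
    print(f"beta={beta:.1f}  expected return={sml_expected_return(beta):.3f}")
```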
Passive index trackers have as their job the task of residing at the point <1, E[R_m]> in as cheap a way as possible. That is to say, there are many portfolios which have a beta of (about) 1.0 and which have an expected return equal to the return of the market. Passive fund managers try to implement being on this point in as cost-effective a way as possible. Passive fund managers of leveraged offerings try to do the same thing but at betas of 2.0, 3.0, 0.5 etc.
CAPM tells you there's no point being off the line, since off-line portfolios carry diversifiable risk and hence you shouldn't expect to get paid for it. You only get paid for non-diversifiable risk, that is, risk which is correlated with the market.
Active portfolio managers believe that some portfolios exist above and below the SML and can be exploited to make returns greater than the market.
Deciding which value of x you'd like is not something the model can help you with. That represents an exogenous 'risk appetite' choice. Once you've made that choice the SML tells you, assuming it is based on a well-functioning and well-calibrated CAPM, how much you can expect to make.
Let's imagine you have a normal risk appetite and set x=1. There are many ways of constructing a portfolio which delivers that return, but the one where you're fully invested in the market portfolio is a natural choice. You could equally be fully invested in a number of other portfolios which do the same. Alternatively you could under-use your notional capital, keeping part of it in cash and the rest at market weights, to bring the beta below 1; or you could borrow money and over-gear your unit of capital to raise the beta above 1.
That is, by using funding gearing, you can travel with a fixed market portfolio up and down the x-axis, in theory to any value of x. In practice you can't get unlimited leverage, but the theory assumes you can.
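A small sketch of that idea, again with made-up rates: a blend of the market portfolio and cash (or borrowing) has a beta equal to the market weight, and its expected return sits exactly on the SML.

```python
# Sketch: moving along the x-axis with a fixed market portfolio.
# A weight above 1.0 means borrowing at the risk-free rate (funding gearing).
# Rates are illustrative assumptions.

risk_free_rate = 0.02
expected_market_return = 0.07

def cash_market_blend(market_weight):
    """Return (beta, expected return) of a cash/market-portfolio blend."""
    beta = market_weight                   # the market portfolio itself has beta 1
    expected_return = ((1.0 - market_weight) * risk_free_rate
                       + market_weight * expected_market_return)
    return beta, expected_return

for w in (0.5, 1.0, 2.0, 3.0):
    beta, er = cash_market_blend(w)
    print(f"market weight={w:.1f}  beta={beta:.1f}  expected return={er:.3f}")
```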
You can achieve the same by using leveraged products - equity options or equity futures, for example. These bring a finite time horizon (and, for options, theta decay) into play, but in theory you don't need to worry about that.
If you are, say, fully invested and then try to tilt the beta upwards by over-weighting high-beta stocks relative to the market, you will indeed see your Y value increase, but you will also be taking on diversifiable risk. So you'll be taking risk you are not getting paid for. The same level of return can be achieved more efficiently with the market portfolio plus some form of leverage, and on that basis the leveraged approach is theoretically to be preferred.
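A rough illustration of why the tilt is inefficient, using assumed volatilities: both routes reach the same beta (and so the same expected return on the SML), but the tilt carries extra idiosyncratic variance for which CAPM pays nothing.

```python
# Compare two routes to a beta of 1.5: levering the market portfolio
# versus tilting into high-beta stocks. Volatilities are assumptions.
import math

market_vol = 0.15        # annualised volatility of the market portfolio
idio_vol_of_tilt = 0.08  # idiosyncratic volatility picked up by the stock tilt
target_beta = 1.5

vol_levered = target_beta * market_vol
vol_tilted = math.sqrt((target_beta * market_vol) ** 2 + idio_vol_of_tilt ** 2)

print(f"levered market portfolio vol: {vol_levered:.3f}")   # systematic risk only
print(f"high-beta tilt portfolio vol: {vol_tilted:.3f}")    # extra, unrewarded risk
```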
In practice there are costs associated with obtaining the leverage needed to achieve a desired return. Perhaps a better model is a CAPM with funding costs baked in.
Also, you won't usually see an SML drawn with negative x values, though there's no reason why not. Sometimes you may be seeking a portfolio which returns less than the risk free rate (and perhaps even negative returns) in certain circumstances. In that case you'd see the expected return drop below the risk free rate, and eventually go negative, as your beta goes negative.
A question arises in my head. Long term, which is the best value of x to sit on, together with a market portfolio, if your goal is to maximise expected excess returns across all time periods and business cycles? I think this is a variant of asking whether there's a way of forecasting the Treynor ratio (the equity risk premium). If you could, then you could move to an x>1 portfolio construction when your model calls an increasing equity risk premium, and likewise to an x<1 construction when it calls a decreasing one.
What if my equity risk premium forecaster were a random process which swept randomly through the 0.9-1.1 range? Long term, would this not be equivalent to a steady 1.0? Could the algorithm have a degree of mean reversion at the back? That is to say, if a long-term random peppering of the 0.9-1.1 space delivers 1.0-like results, then if your active algorithm has placed you at 0.9 for a while, might it then increase the hit rate in the >1.0 space?
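A quick simulation of the first question, under assumed market parameters: a beta chosen uniformly at random in 0.9-1.1 each period, independently of returns, should earn about the same long-run excess return as a steady beta of 1.0, so any edge has to come from the forecaster actually being right, not merely from being active.

```python
# Monte Carlo sketch: random beta in [0.9, 1.1] vs a steady beta of 1.0.
# Market excess returns are drawn i.i.d. normal; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_periods = 10_000
market_excess = rng.normal(loc=0.05, scale=0.15, size=n_periods)

steady_beta = np.full(n_periods, 1.0)
random_beta = rng.uniform(0.9, 1.1, size=n_periods)   # no forecasting skill

print("steady 1.0 mean excess return:   ", (steady_beta * market_excess).mean())
print("random 0.9-1.1 mean excess return:", (random_beta * market_excess).mean())
```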
So the SML is an SML for today, and the slope of that line may steepen or flatten through time - probably within a fairly tight range.
Calculating and predicting the equity risk premium seems perhaps an even more valuable thing to do than active equity factor portfolio modelling.
Tuesday, 15 November 2016
Scissors, a reference portfolio and a clear correlation divide
The Moves Like Jagger model (MLJ) is a kind of look inside a behaviour. The look inside in effect chops the behaviour in two. It acts like a pair of scissors. All it needs is a reference behaviour. The scissors then chop any to-be-analysed behaviour into two pieces. One is perfectly correlated to the reference behaviour. The other is perfectly uncorrelated with the reference behaviour.
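The scissors here is just a covariance projection. A minimal sketch, with simulated series standing in for the reference and the analysed behaviour: the analysed series splits into a piece that is a scaled copy of the reference and a residual piece that is uncorrelated with it.

```python
# Sketch of the "scissors": decompose a behaviour into a component perfectly
# correlated with a reference behaviour and a component uncorrelated with it.
# The two series here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=1000)                     # the reference behaviour
behaviour = 0.8 * reference + rng.normal(size=1000)   # the behaviour to analyse

beta = np.cov(behaviour, reference)[0, 1] / np.var(reference, ddof=1)
correlated_part = beta * reference
uncorrelated_part = behaviour - correlated_part

# By construction the residual has (near-)zero correlation with the reference.
print("beta:", round(beta, 3))
print("corr(residual, reference):",
      round(np.corrcoef(uncorrelated_part, reference)[0, 1], 6))
```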
All that the capital asset pricing model (CAPM) adds is a statement that the behaviour of the reference is worthwhile, indeed the ideal behaviour. CAPM in effect adds morality to the scissors. It claims that you can't act better than the reference behaviour. A consequence of this is that the uncorrelated behaviour is in some sense wrong, sinful if you like. Why would you do it if the ideal behaviour is to be strived for? By making the ideal behaviour a target, you start then to see the uncorrelated behaviour as distorting, wrong, avoidable, residual. So the language of leftovers or residua enters.
When we finally get to the equity factors active management approach, a space again opens up to re-analyse the so-called residua into a superlative component and a random component. The superlative component is behaviour which is actually better than the reference behaviour. Finally, after this better behaviour is analysed (it is called alpha), it is claimed that the remainder is once again residua.
The scissors operation of covariance is the tool. CAPM is the use of the tool in a context of some assumptions around the perfection of the reference behaviour. Post-CAPM/equity factors is the use of the scissors in the context of some assumptions around the possibility of exceeding the quality of the reference behaviour.
One aspect of CAPM I have not spent much time on is the element of risk appetite. Let's pretend that the only asset available to you is the Vanguard market ETF and that you have 1 unit of capital which you've allocated to investing. No sector choices are possible. No single name choices are possible. Are you limited to receiving on average just the market return? No, because there's one investment decision you need to make which is prior to the CAPM, namely how much of your investment unit you'd like to keep in cash and what fraction you'd like to invest in the market.
The way you go about that decision is an asset allocation decision, and is a function of your appetite for risk, which is said to exist prior to the CAPM reasoning. If you have no appetite for risk you invest precisely 0% of your unit capital in the market portfolio (and receive in return the risk free rate). In theory, you could invest 100% of your unit of capital and receive the market return. Indeed, in theory you can invest >100% of your unit of capital, through the process of borrowing (funding leverage) or through the selection of assets with built-in leverage (asset leverage). Through either of these techniques, or any combination of both, you can get a return which is an amplified version of the market return.
With amplified returns though, or gearing, you run the risk of experiencing gambler's ruin early in the investing game. Gambler's ruin traditionally happens when the capital reduces to 0. Its probability can be estimated. With any kind of amplified return, there's a point before 0 where your broker, through margin calls, will effectively bring the investing game to a halt.
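A rough simulation of that point, with all parameters assumed for illustration: a levered market position is marked to market each period and the game stops if equity falls to a margin threshold well above zero.

```python
# Sketch: estimating the probability of being stopped out (a margin-call "ruin")
# before the end of the game, for a levered market position.
# Return distribution, leverage and margin threshold are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def ruin_probability(leverage, n_years=30, n_paths=10_000,
                     mu=0.07, sigma=0.15, margin_floor=0.25):
    """Fraction of paths on which equity falls to margin_floor of initial capital."""
    ruined = 0
    for _ in range(n_paths):
        equity = 1.0
        for _ in range(n_years):
            market_return = rng.normal(mu, sigma)
            equity *= 1.0 + leverage * market_return   # funding costs ignored
            if equity <= margin_floor:
                ruined += 1
                break
    return ruined / n_paths

for lev in (1.0, 2.0, 4.0):
    print(f"leverage {lev:.0f}x -> ruin probability {ruin_probability(lev):.3f}")
```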
The process by which you decide what your degree of investment and amplification in the market is going to be is an asset allocation decision. You're after all investing your unit either in the market portfolio or in the risk free asset. This decision can be made once and for all: what is the single static best allocation of cash between the risk free asset and a correspondingly over- or under-invested market portfolio? Or the decision can be time sensitive - i.e. it can move with time.
Insofar as the investment community does this in a more or less correlated way this creates waves of risk-on and risk-off patterns in markets.
Making a single fixed static allocation decision is a bit like a surfer who bobs up and down in a stationary way as waves arrive at the shore. Trying to be dynamic about it is like that same surfer standing up at some point and trying to let a wave take him to shore on a rewarding ride. The CAPM in a sense tells you nothing about which of these two approaches is best for long term returns.
Monday, 14 November 2016
Moves like Jagger
Friday, 4 November 2016
Which sector to begin
Thursday, 3 November 2016
Factor multi-temporality and the layer model is born
Another element of the approach I read about when reading around equity factors, and which I don't much like, is the implicit attempt to impose a single time granularity across all factors.
In their steady march to generalisation, modellers have arrived at fundamental and/or economic factor models which all sit on underlying observation data occurring at the same frequency - often end of day. In the same way as they assume or manufacture more-or-less homogeneous equity atoms with a single universal semantics, so too do they expect their operations to work at the same speed. But some equity factors could be slow moving, and some could be much more rapidly moving. They hope that the combination of steady time evolution plus a rolling window of history, over which parameter updates are performed, will provide enough dynamism to allow fast-moving and in-play factors to come to the fore. I'm thinking here, on the fundamental factor side, of momentum-based factors, and on the economic factor side, of major economic announcements. Major economic announcements, scheduled and unscheduled, can move some assets, indeed some stocks, more than others.
If I had additional buckets to push stocks into, this would open up the possibility of various factor models being optimised to different speeds of the market.
A factor, after all, in its most general sense, is just a series of observables which may change the expected return of a stock. I am all for making factors as widely scoped as possible. If a factor exists out there which isn't commonly recognised as such but which is predictive of the expected return of a stock, then it ought to be considered. Time does not move homogeneously for all factors, nor does a factor have to look like a factor to the factor community.
As well as looking at sector ETFs for a leg up into the world of factor modelling, I'll also be looking at multi-factor ETFs to see what's being implemented out there, as a way of getting straight to the kinds of factors which the community keep coming back to. I expect to see some momentum factors in there, and some value/steady growth ones. I expect there to be a preponderance of fundamental factors too - i.e. based on the idea that equity factor modelling is at heart the replacement of equity analysts. But non-fundamental factors will be at the heart of my approach.
This regime switching idea I have can be described as follows. For certain periods in the evolving life of a publicly listed company, the dominant factor is the beta to the market. But at other times the regime switches and the dominant attractor factor is in-play M&A; or the stock being shorted in the lead-up to and launch of a new convertible issue; or the stock being bought up by virtue of its inclusion in (or exclusion from) a major market index; or the stock, being particularly interest rate sensitive, having entered the week before an FOMC announcement; or earnings season; or the period around dividend ex-dates.
In keeping with my humility approach, I set the hurdle high for regimes other than the beta regime - since that's the least damaging position to adopt.
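A toy sketch of that switching rule, with all regime names, signal values and the hurdle invented for illustration: the beta regime is the default, and an episodic regime only takes over when its trigger signal clears a deliberately high hurdle.

```python
# Toy regime switcher: default to the market-beta regime unless an episodic
# regime's trigger signal clears a high hurdle. Names and numbers are invented.

HURDLE = 0.9   # deliberately high: humility towards non-beta regimes

def active_regime(signals, hurdle=HURDLE):
    """signals: dict mapping regime name -> trigger strength in [0, 1]."""
    name, strength = max(signals.items(), key=lambda kv: kv[1])
    return name if strength >= hurdle else "market_beta"

today = {"in_play_m_and_a": 0.95, "convertible_issue": 0.2, "index_inclusion": 0.4}
quiet_day = {"in_play_m_and_a": 0.3, "convertible_issue": 0.1, "index_inclusion": 0.2}

print(active_regime(today))      # episodic regime takes over
print(active_regime(quiet_day))  # falls back to the beta regime
```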
Of course driving this all would be an asset allocation model, which again defaulted in moments of ignorance to the set of parameters which are generally considered a good mix. This would give your stock allocation some fixed amount within which to play.
The sector/geography/ETF context would be the main habitat of a stock, and it only gets stolen away by other pseudo-sectors for a set of specific reasons. To repeat, the alternative is to include all the relevant factors on an equal footing and to let the rolling calibration window of market parameters drive weight to the in-play factors. I think this is going to expose the model to sparse data problems, but in some primitive sense the two approaches can be considered compatible. In one you get the benefit of a single driving equation, but a weakened use of the limited data beneath it.
It is my view that one must be prepared to keep a really good model on ice for potentially years until the triggering signal is strong enough to activate it. I shall refer to these as episodic factor sets. Having them 'always on' and bleeding so-called explanatory power to them each day seems wrong to me.
So my model shapes up as follows: there's an asset allocation driving layer. Within that, for each asset, there's a layer which sets a target long/short ratio. (These two together represent your setting of the leverage.) When you have your asset size and long/short ratio, you set about finding the set of stocks in your sectors/regimes which ought to contribute positively to alpha. Finally, especially for short-term economic or fundamental factors, your equity exposure can be expressed by going long or short the stock, but also through more or less complex options strategies.
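As a structural sketch only (every field name and number here is invented), the layers might stack up like this:

```python
# Structural sketch of the layer model: allocation -> long/short ratio ->
# stock selection -> expression. All names and values are placeholders.
from dataclasses import dataclass, field

@dataclass
class Expression:
    instrument: str          # "stock", "call option", "put spread", ...
    direction: str           # "long" or "short"

@dataclass
class StockPick:
    ticker: str
    regime: str              # e.g. "market_beta", "in_play_m_and_a"
    expression: Expression

@dataclass
class AssetSleeve:
    allocation: float        # fraction of capital from the asset allocation layer
    long_short_ratio: float  # target ratio set by the second layer
    picks: list[StockPick] = field(default_factory=list)

equities = AssetSleeve(allocation=0.6, long_short_ratio=1.5, picks=[
    StockPick("XYZ", "in_play_m_and_a", Expression("call option", "long")),
    StockPick("ABC", "market_beta", Expression("stock", "long")),
])
print(equities)
```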
Wednesday, 2 November 2016
What does 'own the market' actually mean?
Tuesday, 1 November 2016
Equity Factors - a humble beginning
Sunday, 20 March 2016
Liquidity and central bank policy
Liquidity in context IV - the life of a de facto corporate liquidity manager
This posting is about dumbing down liquidity management into language which most people can easily understand and relate to. Liquidity management is mostly about the maintenance of good operational cash flow balances to cover the expected and predictably unexpected vicissitudes and seasonalities of corporate life. There was a time not that long ago (up until the 1960s) when operational cash flow management was a private little secret of the treasury department. They skimmed some free cash flow from operations and kept a store of it to meet more or less expected corporate cash call events. When looked at this way, you suddenly realise that financial demands can in theory be a lot more predictable than operational events. It is easier to know when you need to repay your bonds than to know when you'll need to pay for repairs after an uninsured industrial accident.
Each and every one of these events (cash calls) can have uncertainty attached to the cash amount and the timing you'd normally think of as its defining parameters. If you could model all cash call events somehow, then the aggregate cash schedule and its concomitant variance would feed into a pretty decent corporate liquidity management model. At its most fundamental, these cash calls can be modelled as call options on zero coupon bonds: each event has its own notional value, volatility expectation and strike price. When you aggregate this portfolio of real options up, you've got your funding liquidity mostly modelled. One must be realistic about just what fraction of the operating corporate environment is amenable to modelling, and also about just how fast the situation could change. The more chaotic the likelihood of change, the more difficult it is to extract value from a liquidity management regime. Or to put it more dramatically, there's a level of chaos in the liquidity environment above which it doesn't make much sense to model liquidity: what you modelled today becomes largely detached from what could realistically happen tomorrow. In general, if the future state of some system is that unpredictable based on today's models, then those models aren't much use.
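A much-simplified sketch of the aggregation step (sidestepping the option-pricing analogy and just treating each cash call as an uncertain amount at an uncertain time, with all figures invented): simulate the calendar of calls many times and read a buffer off a high percentile of the worst month.

```python
# Simplified sketch: aggregate uncertain corporate cash calls into a liquidity
# buffer estimate. Each call has an expected month, timing jitter, expected
# amount and amount volatility. All figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

# (expected month, timing std in months, expected amount, amount std)
cash_calls = [
    (6, 0.0, 100.0, 0.0),    # bond repayment: timing and amount near-certain
    (9, 2.0, 40.0, 15.0),    # tax true-up: some uncertainty
    (4, 3.0, 25.0, 20.0),    # uninsured repairs: very uncertain
]

n_sims, horizon = 50_000, 12
worst_month = np.zeros(n_sims)
for i in range(n_sims):
    monthly = np.zeros(horizon)
    for mu_t, sd_t, mu_a, sd_a in cash_calls:
        month = int(np.clip(round(rng.normal(mu_t, sd_t)), 0, horizon - 1))
        amount = max(rng.normal(mu_a, sd_a), 0.0)
        monthly[month] += amount
    worst_month[i] = monthly.max()

print("99th percentile single-month cash need:",
      round(np.percentile(worst_month, 99), 1))
```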
Why did liquidity management stop being a private skimming operation of the treasury department in the 1960s? Partly because advances in financial engineering from the 1960s onward (Treynor) paved the way for more sophisticated financial engineering in corporate finance departments. At the same time, the macro-economic climate became incredibly volatile following Nixon's decision to end the Bretton Woods agreement, leading to currency volatility and destabilising inflation. Corporate treasurers responded by bringing some basic financial engineering to the largely in-house management of corporate cash calls. Finally, financial engineering was also focusing the minds of corporate executives at technology companies, starting in late-1950s Silicon Valley, via the issuing of executive stock options, which accounting bodies valued at stock price minus strike - effectively ignoring the time value element (we had to wait for the Black-Scholes equation for that). This caused executives to tilt in favour of investment returns over (liquidity) risk. In essence, really managing a firm's liquidity so that there is always a sufficient cash buffer detracts from short term investment gains. Corporate executives, especially in 'innovative' technology companies, were now personally incentivised to maximise precisely these short term investment returns, in ways which used less capital.
What new tricks did they come up with?
- How about taking those ideas from fixed income financial engineering and calculating the duration of your cash flows with a view to matching them (sketched below).
- Or perhaps finding some third party to write some liquidity options for you, so you can have them as a form of cheap liquidity insurance.
- Or renegotiating the clauses around commitment in the contracts you have with banks over your loans.
- Or getting loans from the capital markets, dis-intermediating your firm's normal pool of lending banks.
- Or perhaps stealing that idea of Markowitz and paying close attention to the free lunch you can achieve through diversification - in this case, the diversification of your funding sources.
- Lastly, don't just have a pool of liquidity buffer cash sitting at a bank earning perhaps a negative real rate in times of high and volatile inflation: why not buy assets with this pool of cash, gaining a higher return while maintaining the average liquidity profile of the pool.
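As a sketch of the first idea, with an invented cash call schedule and discount rate: the Macaulay duration of a set of cash flows is the present-value-weighted average time to those flows, and matching the duration of your cash holdings or investments to the duration of your expected cash calls is the matching idea in its simplest form.

```python
# Sketch: Macaulay duration of a schedule of cash calls, for duration matching.
# The schedule and the discount rate are invented for illustration.

def macaulay_duration(cash_flows, rate):
    """cash_flows: list of (time_in_years, amount); rate: flat annual discount rate."""
    pvs = [(t, amount / (1 + rate) ** t) for t, amount in cash_flows]
    total_pv = sum(pv for _, pv in pvs)
    return sum(t * pv for t, pv in pvs) / total_pv

expected_cash_calls = [(0.5, 30.0), (1.0, 100.0), (2.0, 40.0)]   # years, amounts
print("duration of cash calls (years):",
      round(macaulay_duration(expected_cash_calls, 0.03), 2))
```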