Monday 28 November 2016

A feeling for equity factors


At this point I wanted to spend a little time thinking about how realistic equity factor modelling is, and, to the degree that it isn't realistic, what can be said about its limitations.

The first point is an obvious one about data quality.  Stocks have corporate actions.  They split, they pay regular and irregular dividends.  They become involved in M&A activity.  They dual list.  They enter and exit indices.  Each of these, and many more, real-world effects can be considered a data quality challenge.  Some of them can also be considered the normal everyday lived experience of the average stock, and on that basis ought to be dealt with squarely by the model.

This choice - the degree to which you pre-filter your universe - clearly has ramifications for the results.  Statistically, what is it exactly we do when we remove outliers, and how justified are we in doing so?  People tend to be guided by the economic and theoretical picture of stocks and the CAPM when deciding how to treat data issues.

But in the end we are trying to do the following: find a single, more or less stable, relationship - a linear one - which captures this primary 'like Jagger/not like Jagger' distinction in stocks.  In other big data enterprises, sparse data is a problem, but with equity factors, for the life of each stock, you will often have continuous (end of day) prices over the examination period.  Clearly some stocks are going to be more liquid than others, but they're all likely to be liquid enough to provide an end of day price.  And thanks to the very idea of beta, we can be assured that we're always finding end of day correlations with the whole market, which means that the correlation data embedded in the stock's current beta number is also not going to run into the sparse data problem.

The whole idea of CAPM and equity factors is underpinned by the idea that it is meaningful to talk about the average properties of stocks - that there is, in a sense, an average stock.

If you imagine the primary regression chart underlying CAPM, ask whether some shape other than a straight line might be fitted to the regression.  The security market line (SML) tells you what return you ought to expect from a stock in the market you have just analysed, knowing only that stock's beta - or, alternatively, how leveraged you chose to be in any given stock.  But imagine this line isn't linear.

How would it deviate from linearity?  With high beta stocks (and when operating with stocks, the assumption is that they are usually positive beta, held long in a notional single unit), what does that do to the expected payoff?

Expected payoffs, if the partitions cover the whole stock space, can be considered additive.  In other words, pretend you divide all stocks into two groups - those whose corporate name begins with a letter up to and including M, and all the rest.  When you calculate their SMLs separately, you'd first of all expect them to look identical, and in any case you could combine them to reach an average SML for both halves.  If instead I performed the analysis for all companies whose name starts with M versus all the rest, you'd want to weight the two SMLs to account for the fact that the market is mostly made up of the 'non M' category.  So perhaps you weight by market capitalisation proportion, as sketched below.
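
Here is a minimal numerical sketch of that cap-weighted combination, in Python.  All the capitalisations, betas and the risk premium are made-up illustrative numbers, not data.

def sml_expected_excess_return(beta, market_risk_premium):
    # Expected excess return implied by a linear SML for a given beta.
    return beta * market_risk_premium

# Hypothetical partition: names starting with 'M' versus all the rest (made-up numbers).
cap_m, cap_rest = 2.0e12, 18.0e12        # market capitalisations of the two partitions
beta_m, beta_rest = 1.15, 0.98           # cap-weighted beta of each partition

w_m = cap_m / (cap_m + cap_rest)
w_rest = 1.0 - w_m

# If the two partitions exhaust the market, their cap-weighted beta must be ~1,
# so the combined SML reproduces the market's own expected excess return.
combined_beta = w_m * beta_m + w_rest * beta_rest
market_risk_premium = 0.05               # assumed 5% equity risk premium

print(combined_beta)
print(sml_expected_excess_return(combined_beta, market_risk_premium))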

Now start imagining some interesting partitions.  If you found a really significant partition based on some economically relevant measure, M, you could always immediately know what the 'non M' SML must look like, since you know that when combined with M, the two together combine to give you the SML associated with the market.

To repeat, there is often considered to be one SML, 'the' SML, which is what allows leveraged passive ETFs to work, for example.  Namely, if you were oblivious to the real costs of leverage, you'd be indifferent to where on the straight line you chose to be.  But think of non-random partitions of the stock universe.  To the degree that these partitions are information rich, they could each be considered to have their own SML.  CAPM's spin on all this is that you're a fool to want any line other than the SML, since you're not going to get paid for concentration risk.

You can think of each of these SMLs as a linear combination of the SMLs of its component partitions.  Or as a non-linear one.

The idea that leveraging the market to achieve whichever level of return you like is identical to selecting only high beta stocks to reach the same expected return is clearly a poor assumption.  It is rather like the idea that portfolio insurance and puts are the same thing - true in theory only.

The degree to which high beta stocks are the better route is the degree to which you might imagine the SML drooping at the high end - i.e. you would be happy to accept less return from the high beta stock implementation than from the leveraged way of getting to that same point.

Put another way: leverage is costly, so the high beta end of the SML is likely to droop as it tries and fails to live up to the theory of CAPM.  Perhaps there's an argument that the low beta end of the SML must be correspondingly perky, to make the final theoretical SML be the particular gradient of line it is.  Another thought: if high beta stocks are the preferred way to achieve leverage at reasonable cost, then perhaps these stocks are prized for this very reason, and bid up.  In being too expensive, perhaps their returns are poorer as a result?

Wednesday 16 November 2016

Two interpretations of the Security Market Line

The security market line (SML) is a straight line drawn to represent at a high level the conclusions of the CAPM.  I have two distinct ways of reading it.  First the standard one.

The chart documents some relationships for a particular stock or portfolio.  It is interesting that for the purposes of the SML it doesn't matter if you're considering a single stock or a portfolio of stocks with a set of weightings.

The Y axis shows what the CAPM, at any given moment, thinks a set of portfolios is likely to return.  The X axis shows the beta of that portfolio.  So each <x,y> co-ordinate represents a set of portfolios where each member shares the same expected return and the same beta as its cohabitees.  The set of portfolios behind a single point can be considered infinite.  And of course there are an infinite number of points on the Cartesian plane.

Only the set of portfolios which give the best return-for-risk profile sit on a single upwardly sloping line referred to as the SML.  These on-line portfolios are expected to return the risk free rate plus beta times the market excess return.

The slope of the line is the market's Treynor ratio: the return to be expected from the market now in excess of a (fairly) risk free rate.

Passive index trackers have as their job the task of residing at the point <1, R_f + E[R_m - R_f]> - a beta of one, the expected market return - in as cheap a way as possible.  That is to say, there are many portfolios which have a beta of (about) 1.0 and an expected return equal to the return of the market.  Passive fund managers try to implement being at this point in as cost-effective a way as possible.  Passive fund managers of leveraged offerings try to do the same thing but at betas of 2.0, 3.0, 0.5 etc.
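
A minimal sketch of that reading of the SML, assuming an illustrative risk free rate and expected market return:

# y = R_f + beta * (E[R_m] - R_f); the rate and market return below are assumptions.
r_f = 0.02      # assumed risk free rate
e_r_m = 0.07    # assumed expected total market return

def sml(beta):
    # Expected return of an on-the-line portfolio with the given beta.
    return r_f + beta * (e_r_m - r_f)

# Betas a passive (possibly leveraged) tracker might target.
for beta in (0.5, 1.0, 2.0, 3.0):
    print(f"beta={beta}: expected return {sml(beta):.1%}")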

CAPM tells you there's no point being off the line, as you're taking diversifiable risk and hence shouldn't be getting paid for it.  You only get paid for non-diversifiable risk, that is, risk which is correlated with the market.

Active portfolio management believes that some portfolios exist above and below the SML and can be exploited to make returns greater than the market.

Deciding which value of x you'd like is not something the model can help you with.  That represents an exogenous 'risk appetite' choice.  Once you've made that choice the SML tells you, assuming it is based on a well functioning and calibrated CAPM, how much you can expect to make.

Let's imagine you have a normal risk appetite and set x=1.  There are many ways of constructing a portfolio which delivers that return, but being fully invested in the market portfolio is the natural choice.  You could equally be fully invested in any number of other portfolios which do the same.  Alternatively you could under-use your notional capital, keeping part of it in cash alongside a market-weighted holding, or you could borrow money and over-gear your unit of capital to push the beta above 1.

That is, by using funding leverage you can travel with a fixed market portfolio up and down the x-axis, in theory to any value of x.  Of course you can't get infinite leverage in practice, but the theory assumes you can.

You can achieve the same by using leveraged products - equity options or equity futures, for example.  These narrow your time horizon (theta) but in theory you don't need to worry about that.

If you try to be, say, fully invested and then tilt the beta by owning more high beta stocks than the market does, you will indeed see your Y value increase, but you will also be taking risk which is diversifiable - risk you are not getting paid for.  The same level of return can be achieved more efficiently with the market portfolio and some form of leverage, and that approach is theoretically to be preferred on this basis.

In practice there are costs associated with gaining any leverage to achieve a desired return.  Perhaps a better model is a CAPM with funding costs burned in.

Also, you don't usually see an SML drawn with negative x values.  There's no reason why not.  Sometimes you may be seeking a portfolio which returns less than the risk free rate (and perhaps even negative returns) in certain circumstances.  In this case you'd see the expected return fall as your beta goes negative.

A question arises in my head.  Long term, which is the best value of x to sit at with a market portfolio, if your goal is to maximise expected excess returns across all time periods and business cycles?  I think this is another way of asking whether there's a way of forecasting the Treynor ratio (the equity risk premium).  If you could, then you could move to an x>1 construction when your model calls for a rising equity risk premium, and likewise to an x<1 construction when it calls for a decreasing one.

What if my equity risk premium forecaster were a random process which swept randomly through the 0.9-1.1 range?  Long term, would this not be equivalent to a steady 1.0?  Could the algorithm have a degree of mean reversion built in?  That is to say, if a long term random peppering of the 0.9-1.1 space delivers roughly the same result as a steady 1.0, then if your active algorithm has placed you at 0.9 for a while, might it then increase the hit rate in the >1.0 space?
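
A quick Monte Carlo sketch of the first question, assuming i.i.d. normal market excess returns and a costlessly rebalanced beta; all parameters below are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_months, n_paths = 360, 2000
mu, sigma = 0.05 / 12, 0.15 / np.sqrt(12)     # assumed monthly excess return and volatility

mkt = rng.normal(mu, sigma, size=(n_paths, n_months))
beta_sweep = rng.uniform(0.9, 1.1, size=(n_paths, n_months))   # beta wandering in [0.9, 1.1]

terminal_steady = np.prod(1.0 + mkt, axis=1)                   # steady beta of 1.0
terminal_sweep = np.prod(1.0 + beta_sweep * mkt, axis=1)       # randomly swept beta

print("steady beta 1.0  :", terminal_steady.mean())
print("random 0.9 - 1.1 :", terminal_sweep.mean())
# The two averages come out close: the sweep mostly adds noise rather than return.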

So the SML is an SML for today, and the slope of that line may steepen or flatten through time - probably within a very tight range.

Calculating and predicting the equity risk premium seems to be perhaps an even more valuable thing to do than trying to do active equity factor portfolio modelling.

Tuesday 15 November 2016

Scissors, a reference portfolio and a clear correlation divide

The Moves Like Jagger model (MLJ) is a kind of look inside a behaviour.  The look inside in effect chops the behaviour in two.  It acts like a pair of scissors.  All it needs is a reference behaviour.  The scissors then chop any to-be-analysed behaviour into two pieces.  One is perfectly correlated to the reference behaviour.  The other is perfectly uncorrelated with the reference behaviour.

All that the capital asset pricing model (CAPM) adds is a statement that the behaviour of the reference is worthwhile, indeed the ideal behaviour.  CAPM in effect adds morality to the scissors.  It claims that you can't act better than the reference behaviour.  A consequence of this is that the uncorrelated behaviour is in some sense wrong, sinful if you like.  Why would you do it if the ideal behaviour is to be strived for?  By making the ideal behaviour a target, you start then to see the uncorrelated behaviour as distorting, wrong, avoidable, residual.  So the language of leftovers or residua enters.

When we finally get to the equity factors active management approach, a space again opens up to re-analyse the so-called residua into a superlative component and a random component.  The superlative component is behaviour which is actually better than the reference behaviour.  Finally, after this better behaviour is analysed (it is called alpha), it is claimed that the remainder is once again residua.

The scissors operation of covariance is the tool.  CAPM is the use of the tool in a context of some assumptions around the perfection of the reference behaviour.  Post-CAPM/equity factors is the use of the scissors in the context of some assumptions around the possibility of exceeding the quality of the reference behaviour.

One aspect of CAPM I have not spent much time on is the element of risk appetite.  Let's pretend that the only asset available to you is the Vanguard market ETF and that you have 1 unit of capital which you've allocated to investing.  No sector choices are possible.  No single name choices are possible.  Are you limited to receiving on average just the market return?  No, because there's one investment decision you need to make which is prior to the CAPM, namely how much of your investment unit you'd like to keep in cash and what fraction you'd like to invest in the market.

The way you go about that decision is an asset allocation decision, and is a function of your appetite for risk, which is said to exist prior to the CAPM reasoning.  If you have no appetite for risk you invest precisely 0% of your unit capital in the market portfolio (and receive in return the risk free rate).  In theory, you could invest 100% of your unit of capital and receive the market return.  Indeed, in theory you can invest >100% of your unit of capital, through the process of borrowing (funding leverage) or through the selection of assets with built-in leverage (asset leverage).  Through either of these techniques, or any combination of both, you can get a return which is an amplified version of the market return.

With amplified returns though, or gearing, you run the risk of experiencing gambler's ruin early in the investing game.   Gambler's ruin traditionally happens when the capital reduces to 0.  Its probability can be estimated.  With any kind of amplified return, there's a point before 0 where your broker, through margin calls, will effectively bring the investing game to a halt.
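
A rough sketch of how the margin-call version of gambler's ruin could be estimated by simulation; the leverage, the call level and the return assumptions below are all illustrative.

import numpy as np

rng = np.random.default_rng(1)
leverage = 3.0            # assumed gearing
call_level = 0.25         # the broker halts the game when equity falls to 25% of the start
n_days, n_paths = 252 * 5, 10000
mu, sigma = 0.07 / 252, 0.16 / np.sqrt(252)   # assumed daily market return and volatility

equity = np.ones(n_paths)
ruined = np.zeros(n_paths, dtype=bool)
for _ in range(n_days):
    r = rng.normal(mu, sigma, n_paths)
    equity = np.where(ruined, equity, equity * (1.0 + leverage * r))
    ruined |= equity <= call_level

print("fraction of paths hitting the margin call:", ruined.mean())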

The process by which you decide your degree of investment and amplification in the market is an asset allocation decision.  You are, after all, investing your unit either in the market portfolio or in the risk free asset.  This decision can be made once and for all - what is the single static best allocation of cash between the risk free asset and a correspondingly over- or under-invested market portfolio? - or it can be time sensitive, i.e. your decision can move with time.

Insofar as the investment community does this in a more or less correlated way this creates waves of risk-on and risk-off patterns in markets. 

Making a single fixed static allocation decision is a bit like a surfer who bobs up and down in a stationary way as waves arrive at the shore.  Trying to be dynamic about it is like that same surfer standing up at some point and trying to let a wave take him to shore on a rewarding ride.  The CAPM in a sense tells you nothing about which of these two approaches is best for long term returns.


Monday 14 November 2016

Moves like Jagger


I will introduce the concepts of the capital asset pricing model and beta with an analogy.

Dancing.
We all dance.  Some are better dancers than others.  There's a lot of cross-cultural variation, to be sure, and some influencing goes on.  Some don't dance at all.

Imagine your job is to build a model of how individuals dance and you come up with the following: all dance more or less like Mick Jagger.  Outrageous, I know.  But just imagine you watch how Mick Jagger dances.  You absorb that knowledge.  You have it in your head.  He's the template.  You now are armed with a reference point and set out to build your Moves Like Jagger (MLJ) model of human dance.

Imagine you find someone who dances incredibly like Jagger.  And another who moves like Jagger, but in a less frenetic way.  And another who does so in a grossly caricatured way.  Perhaps yet another who only has the odd echo of Jagger in the way she claps her hands like a camp pair of cymbals.

MLJ, when applied to any human being, ought to be able to describe two things: first, just how like Jagger this person moves, and second, all the rest of their moves which don't really line up as classic Jagger moves.  So the generalised MLJ model version 1 is like a pair of scissors.  It analyses anyone's moves into those which correspond more or less to Jagger-like moves, together with all the other moves which don't seem to fit the Jagger pattern.

V1 of your model insists that moving like Jagger is somehow a dancer's best goal.  The fraction of the analysed behaviour which isn't like Jagger is deemed by you a failure.  You refer to these abortive un-Jagger moves as residual moves.  A mistake.  Clearly some dancers will make more mistakes than others.  So one possible initial model is a simplistic weighting of moves, which assigns some fraction to a dancer's Jagger moves and the rest to mistakes: M = w_j J + (1 - w_j) E.  Read this as: my model treats a dancer's moves as some fraction w_j like Jagger's and the rest as error.

In V2 you realise some people move exactly like Jagger but more or less exaggerated: wilder flailing, more melodramatic hops, extreme chicken-pecking neck moves.  Others tone it down, but essentially incorporate all his moves.

So you switch to a new model, where there's a scaling factor b, measuring the degree of brio with which the dancer copies Jagger.  Again, an error term E will mop up the residual error, i.e. what remains when you've catered for the brio-adjusted moves.   M = bJ + E.  Jagger impersonators might have a brio score of 2.0.  Modest dancers might score 0.5 or lower.

The capital asset pricing model is the equivalent of the claim that the average dancer moves exactly like Jagger.  So Iggy Pop, Ian Curtis, Morrissey, Tom Waits, you and me, and everyone else too: when you average up our moves, we dance on average exactly like Jagger.  He's a kind of Jungian dance archetype for us all.

In any case, V2 is a two-part analysis of moves.  Part one compares them to some reference moves, and part two characterises the remnant.    

With stock returns, the moves-like function is implemented with Cov(A,B), the covariance between period returns for series A and period returns for series B.  In fact the period returns (e.g. daily returns) are adjusted to be returns in excess of a corresponding risk free rate; in the dance analogy, the equivalent of the risk free rate would be the moves linked to breathing.  Covariance is symmetric - Cov(A,B) = Cov(B,A) - but since we are interested in the Jagger reference moves, we normalise the covariance by the variance of Jagger himself / the market: b = Cov(A,M) / Var(M).  This achieves the goal of giving Jagger a brio (or beta) score of 1.0 - he dances exactly like himself - and an E score of 0.
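
The scissors in code - a minimal sketch assuming two aligned arrays of excess returns.

import numpy as np

def scissors(stock_xs, market_xs):
    # Split a stock's excess returns into the piece perfectly correlated with the
    # reference (beta times the market) and the residual, uncorrelated piece.
    beta = np.cov(stock_xs, market_xs, ddof=1)[0, 1] / np.var(market_xs, ddof=1)
    correlated = beta * market_xs        # the 'moves like Jagger' piece
    residual = stock_xs - correlated     # the piece that doesn't fit the reference
    return beta, correlated, residual

# Jagger scored against himself: a beta of 1 and a residual of (near) zero.
m = np.random.default_rng(2).normal(0.0, 0.01, 500)
beta, _, residual = scissors(m, m)
print(beta, np.allclose(residual, 0.0))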

Equity factor portfolio management adds two further elements.  First, it adds multiple dance archetypes to the model.  Second, it drops the assumption that the average dancer dances like Jagger (or like the set of Jungian dance archetypes): there's a possibility that some dance moves could be better than the archetypes.  New moves may be possible.

There appears to be no universally accepted theory on how to pick your archetypes.  Data driven selection has issues, and theory driven selection can go badly wrong too.  Iggy Pop, after all, dances quite a lot like Jagger anyway.  Are you really saying much by adding him in?

A final angle on this metaphor.  I always thought Jagger was doing a very poor early 60s James Brown impression anyway!  With a high comedic beta.  Similarly the choice of what 'the market' means is up for grabs.  And certainly I can see that changing over time too.

Friday 4 November 2016

Which sector to begin


My layered model is, in theory anyway, going to cover all the relevant sectors and then some.  But where should I start?  Let me first make some comments about the pros and cons of starting with each of the sectors.
These comments combine my initial views on the sectors with my assessment of the likely economic future.  Regardless of my final sector choice, I will be picking an ETF which trades on a US exchange, which has liquidity, and which likely has a preponderance of US, then European, corporate entities in it.  So my focus will not just be on global economics but on the US and Europe in particular.  My horizon of choice is two years starting from now.

The macro economy, the US economy and the European economy are all three inhabiting a unique economic climate.  We have slowly emerged from a significant synchronised world recession, and these economies all bear a large and growing burden of debt.  The termination of so-called financial repression is a hotly debated topic (2016), and interest rates in the US will certainly soon be on the rise.  How far and how steep the rise will be is uncertain - perhaps to 2%, historically low, before we can expect to hit another recession.  At that point central banks will perhaps need to implement further quantitative easing or dismantle the structural impediments to running an economy with significantly negative real interest rates.

It is also debatable just why the various rounds of quantitative easing have not resulted in greater levels of inflation.  Meanwhile, productivity growth is sub-par and returns to labour are notoriously low versus returns to capital.  This, together with a natural uptick in returns to the well educated as a result of globalisation, has created a growing anti-globalisation political backdrop.

Sustained low rates encourage the growth of borrowing and make the burden of current borrowings easier to bear.  This becomes less so as interest rates rise.  Savers benefit (the wealthy save a larger fraction of their income than the poor) and borrowers lose out as the cost of new borrowing grows.  But existing borrowings are based on an at-onset nominal level, and terms are often fixed for the life of the deal, or for 2-5 years on longer borrowings - notably domestic property.  For holders of debt like this, some inflation erodes the burden of their debt.

Energy
Energy is a large, multi-faceted and I think, complex sector.  It is subject to world geo-political risk and to the economic and commodity cycles.  It also is subject to innovation risk - note how the new generation of US natural gas suppliers are going head to head with Saudi Arabia and the resulting oil price volatility.  This sector, while offering huge opportunity for big wins and losses, I think is too complex to be my first sector.

Materials 
Industrials 
I like these categories a lot.  They represent the main economy and hence will be influenced a lot by it.  They are close enough to manufacturing to allow dramatic growth and contraction based on workable economies of scale.  The downside is that they too are sensitive to the hard-to-predict commodity cycle and to the effects of political instability generally.

Consumer discretionary 
Consumer staples 
I'm lumping these two together.  I'd like to know when they broke out as two peer sectors, as a matter of historical fact.  But generally this is very consumer based and hence sensitive to the business cycle (and the commodity cycle too).  There's a huge range of companies in here.  The essential business model is the manufacture and sale of millions of consumable objects to millions of consumers - it is a scale sector.  If those objects cost little and get sold often to the average consumer, the company is more likely to be in the staples category; rarely bought, expensive objects put it more in the discretionary.

This is such a heterogeneous (and large) sector that I wonder how stable the in-sector fundamentals would be.

Healthcare 
This is also a growth sector which I like.  The biggest problem is the likelihood of political interference.  However there will be a continuing need for this service and governments, directly or indirectly, will be funding it.  A recent survey of the UK's National Health Service found that, of the lifetime cost associated with an average person's use of the service, the lion's share of that cost happens in the last 6 months of their life, when expensive operations, death-fighting treatments, intensive support, palliative care, pain relief, therapy, etc all happen most frequently.

A fair fraction of the healthcare offering, I would imagine, is service based, which is potentially harder to scale.

Financials 
Another sector I am going to eliminate quickly as a first-sector candidate is financials.  The reason is that financials present unique valuation problems for analysts and they are tied in special ways to the credit cycle and to the economic cycle generally.  Whilst their in-sector interpretation might be consistent, the meaning of all those fundamental factor levels will be, due to the mix on their balance sheets, too different from the other sectors.  Politics also plays too large a role in these names, and has varied too much in recent times.  I think there's a decent chance that this pattern will continue given the overall anti-globalisation mood.

Information technology 
Telecommunication services 
I lump these two together.  In a way telecommunication services is the most apparent outgrowth of information technology.  Both are reasonable candidates.  Telecoms is more regulation sensitive and by now has a small number of well known names which, through former public ownership histories or through M&A, have grown quite dominant in a regional context.  I also expect increasing levels of regulation in telecoms.

Information technology I like.  This is a decent candidate.
It is effectively a post-WWII industry and endlessly innovative.  I see great growth potential here.  It also has a large number of new entrants, with concomitant risk to incumbents.  It touches other sectors too.  Given the rate of innovation and the ease with which information services can travel across national boundaries, I feel that it can also escape a degree of regulation at its innovative edges.  Eventually the big hitters will succumb to regional jurisdictional demands, but that still leaves many potentially globalised companies a lot of growing space before that happens.

Utilities 
Utility companies can be thought of as dividend products with regulatory variance and with additional sensitivity to innovative insurgent companies trying to break into their market.  They also tend to be regional, often national.  Governments like there to be a local domestic champion or set of incumbents.  The barriers to entry are high.  Fees, and hence profits, are closely regulated.

Services get to be considered utilities insofar as the service they provide has come to seem, in that country, essential to an average household's happiness.

There will be many utility-like companies in various other sectors.  Utilities these days include domestic energy companies, electricity delivery companies, water companies and fixed line telephone companies.  Other utility services are performed directly by local government in partnership with outsourced private companies - for example waste disposal.  After a period of rapid innovation, often in technology sectors, a service stabilises in its offering and gains a large fraction of the population as its customers.  This is when additional government control is initiated around the service.

Companies like this tend to be low beta, with a decent dividend.  Some utilities one could imagine will be around forever - water companies, electricity companies, waste disposal companies.  Some seem more time-specific - I'm thinking here of data suppliers (telecoms and technology); maybe in time these will settle down and become as tightly regulated as other utilities.  Some forms of insurance seem to approach becoming utilities, and in a sense high street banking is also highly regulated and shares some characteristics with utilities, except for its stock volatility.

There are subtleties within the energy utilities since there will be unique factors associated with the cost of delivering gas for heating to a house as opposed to other forms of heating.  Households can and do switch between utilities, though often there's quite some inertia around switching.

+ these companies often follow similar business models and report their earnings under a long established set of conventions.  A consequence ought to be semantically homogeneous factors.  Also, given the more or less fungible nature of the service, fundamental factors ought, in a healthy competitive environment, to be able to distinguish successful from less successfully managed companies.

- however these companies are likely to be sensitive to worldwide commodity prices and to regional political disruption, which are harder to predict.  On the plus side of this point, some oil or gas hedging can smooth out short term disruptions.

- in America a utility company is also subject to state-wide political regulation, which can be quite distorting in theory.  As a near-example, consider Medicare providers: whilst not utilities, they are quasi-utilities under Obamacare, and state-wide company coverage has become a political pawn in the recent DOJ battles with merging healthcare providers.  I would guess that the variance in price action between similar utilities is small, meaning there's less juice in equity factor modelling; consequently, even if the model worked, it is likely to beat the market by a smaller amount.  All in all this is a reasonable first choice, but I'd only put it as high as 'reasonable'.

Real estate 
What are you doing when you buy the property market in general?  You're buying a collection of companies which all have a remarkably consistent business model.  They borrow some fraction (1-d) of the value of the property estate, valued at P.  They pay interest on the P x (1-d) borrowed at a rate of m%, receive rent on the full value at a yield of r% (with m < r), and retain a cash buffer such that draw-downs in the property's capital value, to say P', will not result in bankruptcy - that is, the net equity E = P' - PV(borrowings) stays positive.  And they do this in as tax efficient a way as possible for their tax jurisdiction.  Key moments arise when it is necessary to roll the debt, which occurs periodically; the timing is sensitive to the volatility of equity valuations generally and to the cost of funding the estate.  The rental yield is assumed to be less important and less volatile, though this is less true for commercial property, where economic downturns make rental demand fall too.  These three factors are all inter-related.
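
Here's a toy version of that simplified property-company model in Python; every number is a made-up illustration, not data.

P = 100.0                    # value of the property estate
d = 0.40                     # equity fraction: (1 - d) of the estate value is borrowed
borrowings = (1.0 - d) * P
m = 0.04                     # interest rate paid on the borrowings
r = 0.06                     # rental yield received on the full estate value (m < r)

annual_carry = r * P - m * borrowings     # rent received minus interest paid

def net_equity(p_marked):
    # Net equity after marking the estate to a new value p_marked.
    return p_marked - borrowings

P_drawdown = 70.0                          # a drawdown in the estate's value
print("annual carry          :", annual_carry)
print("net equity after fall :", net_equity(P_drawdown))
print("still solvent         :", net_equity(P_drawdown) > 0)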

Two general observations to note about this model: there's a good degree of homogeneity about it, and second it ought to be incredibly sensitive to interest rates and property prices and the economy.

So I would expect economic factors to drive the allocation decision on this and the more traditional equity factors to drive the fundamentals of comparing one company to another.  

On this basis real estate is a reasonable candidate for the first sector to look at.  There might be moments when the asset allocator would flash net short, in which case the factors would look to highlight critical or unhealthy indicators of particular names.  It would be necessary to see how property companies in the US structure themselves (REIT versus ordinary property company).  Finally, you'd expect fundamental factors which track a company's debt sustainability and degradation to be useful, as would factors which track its specific sensitivity to interest rate rises.

+ property is a real asset and hence has a degree of inflation protection burned in
- property funding becomes more costly in a rising rate environment, damping demand for property, all other things being equal
+ there is a recent painful US/UK memory of negative equity, and this, together with governmental involvement in regulating this market, would likely mean no bubble gets as out of hand as it did in the early 2000s
- the primary factors which drive this market are new household formation, government regulation, and interest rates (by which I mean the business cycle and the credit cycle).  Constant and predictable levels of government regulation lead to stable factors, but there is always the risk of additional regulation.  The US and UK (and Europe too) are quite heavily involved in these markets.
+ in the US at least, property cycles tend to last 20 years or so.  The last crash was only about a decade ago so we can reasonably expect about another decade before the next big blowup

Conclusion
The two final candidates are real estate and information technology.  Real estate is, by market cap, the smallest and most stable of the sectors whereas IT is the largest.  The global dividend yield on real estate is 3.7% whereas the equivalent on IT is 1.7%.  This cuts both ways - if my strategy is to own the market (average dividend yield 2.7%), short the sector, and then own my model's weightings of the sector's stocks, there will be a higher dividend bleed from being short the real estate ETF.  The equivalent strategy with IT will see the market dividend yield more easily pay for the short sector's dividend yield.
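
The carry arithmetic of the hedge legs alone (long market, short sector ETF), using the yields quoted above; the split into legs is my own illustration.

market_yield = 0.027       # average market dividend yield quoted above
real_estate_yield = 0.037  # real estate sector dividend yield
it_yield = 0.017           # IT sector dividend yield

# Dividend carry of long market / short sector, before the long model names are added back.
print("real estate hedge carry:", round(market_yield - real_estate_yield, 4))   # -1.0%: a bleed
print("IT hedge carry         :", round(market_yield - it_yield, 4))            # +1.0%: a cushion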

I think there's more variability in interpretation on IT companies and also their business models are more heterogeneous.

EPS growth over the next three years is looking a lot healthier for IT and is pretty flat for real estate.  However, if I implemented my { long market, short sector, long model names } portfolio, then that EPS growth creates headwinds for me - any mistake in my name selection and I'll be losing based on expected sector growth.  Whereas with real estate, stable earnings outlooks mean there's probably no reason to believe a short ETF position will get whacked on EPS.

I think the 1 trillion USD real estate market will be my first sector to explore the ideas of equity factor modelling.

Thursday 3 November 2016

Factor multi-temporality and the layer model is born

Another element of the approach I keep seeing when reading around equity factors, and which I don't like much, is the implicit attempt to impose a single time granularity across all factors.

In their steady march to generalisation, modellers have arrived at fundamental and/or economic factor models which all sit on underlying observation data sampled at the same frequency - often end of day.  In the same way as they assume or manufacture more-or-less homogeneous equity atoms with a single universal semantics, so too do they expect the factors to operate at the same speed.  But some equity factors could be slow moving and some much more rapidly moving.  They hope that the combination of steady time evolution plus a rolling window of history, over which parameter updates are performed, will provide enough dynamism to allow fast moving and in-play factors to come to the fore.  I'm thinking here, on the fundamental factor side, of momentum based factors, and on the economic factor side, of major economic announcements.  Major economic announcements, scheduled and unscheduled, can move some assets, indeed some stocks, more than others.

If I had additional buckets into which to push stocks, this would open up the possibility of various factor models being optimised to different speeds of the market.

A factor, after all, in its most general sense, is just a series of observables which may change the expected return of a stock.    I am all for making factors be as widely scoped as possible.  If a factor exists out there which isn't commonly recognised as such but which is predictive on the expected return of a stock, then it ought to be considered.  Time does not move homogeneously for all factors, nor does a factor have to look like a factor to the factor community.

As well as looking at sector ETFs for a leg up into the world of factor modelling, I'll also be looking at multi-factor ETFs to see what's being implemented out there, as a way of getting straight to the kinds of factors which the community keep coming back to.  I expect to see some momentum factors in there, and some value/steady growth ones.  I expect there to be a preponderance of fundamental factors too - i.e. based on the idea that equity factor modelling is at heart the replacement of equity analysts.  But non-fundamental factors will be at the heart of my approach.

This regime switching idea of mine can be described as follows.  For certain periods in the evolving life of a publicly listed company, the dominant factor is the beta to the market.  But at other times the regime switches and the dominant factor is in-play M&A, or the stock being shorted in the lead-up to and launch of a new convertible issue, or the stock being bought up by virtue of its inclusion in (or exclusion from) a major market index.  Or the stock, being particularly interest rate sensitive, has entered the week before an FOMC announcement; or it is in earnings season, or around dividend ex-date periods.

In keeping with my humility approach, I set the hurdle high for regimes other than the beta regime - since that's the least damaging position to adopt.
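
A sketch of that high hurdle in code; the regime names, scores and threshold are all hypothetical.

EPISODIC_HURDLE = 0.9   # deliberately high: only very confident signals displace the beta regime

def pick_regime(signals):
    # signals: dict mapping an episodic regime name to a confidence score in [0, 1].
    if not signals:
        return "beta"
    name, score = max(signals.items(), key=lambda kv: kv[1])
    return name if score >= EPISODIC_HURDLE else "beta"

print(pick_regime({"in_play_ma": 0.4, "convertible_issue": 0.2}))   # -> beta
print(pick_regime({"index_inclusion": 0.95}))                        # -> index_inclusion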

Of course driving this all would be an asset allocation model, which again defaulted in moments of ignorance to the set of parameters which are generally considered a good mix.  This would give your stock allocation some fixed amount within which to play.

The sector/geography/ETF context would be the main habitat of a stock, and it only gets stolen away by other pseudo-sectors for a set of specific reasons.  To repeat, the alternative is to include all the relevant factors on an equal footing and let the rolling calibration window drive weight to the in-play factors.  I think this is going to expose the model to sparse data problems, but in some primitive sense the two approaches can be considered compatible: in one you get the benefit of a single driving equation, but weakened use of limited data beneath it.

It is my view that one must be prepared to keep a really good model on ice, potentially for years, until the triggering signal is strong enough to activate it.  I shall refer to these as episodic factor sets.  Having them 'always on' and bleeding so-called explanatory power to them each day seems wrong to me.

So my model shapes up as follows: there's an asset allocation driving layer.  Within that, for each asset, there's a layer which sets a target long/short ratio.  (These two together represent your setting of the leverage.)  When you have your asset size and long/short ratio, you set about finding the set of stocks in your sectors/regimes which ought to contribute positively to alpha.  Finally, especially for short term economic or fundamental factors, your equity exposure can be expressed by going long or short the stock, but also through more or less complex options strategies.
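
A structural sketch of those layers; every name and number is a hypothetical placeholder.

from dataclasses import dataclass, field

@dataclass
class SectorSleeve:
    long_short_ratio: float                         # layer 2: target long/short ratio
    weights: dict = field(default_factory=dict)     # layer 3: stock weights from the factor model

@dataclass
class LayeredModel:
    asset_allocation: dict                          # layer 1: capital fraction per asset/sector
    sleeves: dict                                   # one SectorSleeve per sector or pseudo-sector

model = LayeredModel(
    asset_allocation={"real_estate": 0.10, "cash": 0.90},
    sleeves={"real_estate": SectorSleeve(long_short_ratio=1.5,
                                         weights={"REIT_A": 0.6, "REIT_B": -0.4})},
)
print(model.asset_allocation["real_estate"], model.sleeves["real_estate"].long_short_ratio)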


Wednesday 2 November 2016

What does 'own the market' actually mean?


Equity factor modelling is all about a comparison of two populations.  On the one hand is the set of stocks which your model suggests you own (think of this, in the limit, as a set of weights $w_s$ over an agreed universe of stocks, $S$).  In other words, you don't need to think of it as the selection of a subset of stocks out of a universe; you are really just working out what fraction of the universe to own.  Nor do you really need to know what quantities to own, merely what percentage (long or short) of some reference investment amount (your capital).

This set of weights is then compared with a second set of seemingly uncontroversial weights - namely the stock weights in some well known index.  Often you'll want this to be market capitalisation weighted and not price weighted (S&P 500, not Dow Jones).  The wider this universe, the better, since you are looking in the maximum number of places for an edge.  Or so they say.

Clearly, equity factor modelling is all about imagining the stocks of this universe as similar atoms, with more or less stable, semantically coherent 'properties' that can be read across stocks.  To these atoms we then apply, in effect, a form of statistical physics.  We need to work hard to maintain the fiction of more-or-less homogeneity, since in reality there are all sorts of bespoke events which soon make you realise that the atoms are each quite different in their own way.

Nonetheless, practitioners typically set a wide universe.  The widest is the universe of all tradeable stocks worldwide, though in practice the approach is often carried out for at most the US and Europe.  And the decision, made many decades ago, to prefer capitalisation weighted proportions as the default must be seen for what it is - an assumption.  It is a decision that has become performative in the industry, and it is hard to go back: since everyone assumes it, you must too.  But it is important to point out that it is an assumption.

For example, sophisticated investors may have access to private equity.  Or they may prefer to ignore tiny stocks, or illiquid ones.

Many equity factors and factor databases present fundamental factor data-sets which are in effect sector- and accounting-regime-sensitive attributes, not globally applicable ones.  This semantic variance needs to be dealt with head on, and that can only be done by fully understanding the balance sheet of a firm in each of the relevant accounting jurisdictions, and the common sector and region/country accounting practices.

If these anomalies are not fully understood and dealt with, then you will be making comparisons between dimensions that only seem similar, leading to poor selection of portfolio weightings.  I will call this the factor polysemy issue.

So I think I will approach this from a more narrow point of view.
I'd like to make my market a country-specific sector.  This effectively eliminates a lot of this semantic crossfire.  What do I lose in doing this?
Well, my universe is smaller.  So the opportunity set is smaller.  But there's no evidence to suggest that an equity factor edge is more appropriate in any one sector than another.

How will I deal with the fact that my sector, being only a sector, will not perform like the market as a whole?  Option 1: I can just accept that as a given.  In other words, I will be aiming to beat the sector ETF's performance.

Option 2: I can hedge the factor portfolio with short quantities of the ETF itself, becoming exposed only to the out-performance of the selection, and not to the ETF performance itself.

Option 3: I can hedge to the ETF as above but add back in the market.  In this way I get exposure to the market, with just the sector beta knocked out, and with my factor exposure for those sector names (long and short) I can achieve a mostly beta performance with some sector alpha.
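
A minimal sketch of the option 3 book's beta accounting, with hypothetical weights and the assumption that both the sector ETF and the model names carry a beta of roughly one.

book = {
    "market_etf": +1.00,    # own the market
    "sector_etf": -0.10,    # knock out the sector's contribution
    "model_names": +0.10,   # replace it with the model's picks inside the sector
}

# With the sector ETF and the model names both assumed to have a beta of ~1 to the
# market, the long and short sector legs net out, leaving roughly market beta plus
# whatever alpha the model names deliver.
net_beta = book["market_etf"] + book["sector_etf"] + book["model_names"]
print(net_beta)   # ~1.0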

Option 3 opens up the possibility of the strategy being comparable in returns to the market.  It also opens up the possibility of an external 'sector switching' process which allows me to stop running on one sector and open up on another.  There are many cyclical phenomena which could drive this - the business cycle and observed sector rotation effects.  There can be other pseudo-sectors too - the in-play M&A sector, the convertible new-issue sector.  In short, these represent a generalised way to think of 'sector' beyond e.g. GICS.

In general, option 3 can be generalised to running some fractional allocation to all sectors in parallel.  So rather than turning sectors on or off in binary fashion, you ease into them via a reallocation process.

This model could build well since it allows you to start small, it doesn't commit you to having a pan-sector factor set and it can be driven by economic considerations.

It also nicely partitions the universe of stock data so that they are held out for the appropriate models, minimising data mining and over fitting risks.  It is grounded nicely in reality too.  It parallelises the factor efforts and domain knowledge of the respective domain experts.  This approach can also work at the geographic level too.  It is also quite possible that there are multiple sector specific databases of information which make sense for stocks only in that sector.

The handy thing about basing my universe on an ETF is that the provider itself publishes its constituents and weights regularly, so I can leverage their static data operation and their indexation algorithm.

It also suggests a first equity factor model to build - one where the only factor is the beta of the stock back to the target ETF.  This would nicely operate as a quality check for the algorithm so far.  The expected result of this will hopefully be fairly close to the performance of the ETF itself, adjusting for the ETF's own tracking error.
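
A sketch of that quality check, assuming a T x N matrix of constituent returns, the ETF's own return series, and the provider's published weights.

import numpy as np

def beta_only_check(constituent_returns, etf_returns, weights):
    # constituent_returns: (T, N) daily returns; etf_returns: (T,); weights: (N,) ETF weights.
    var_m = np.var(etf_returns, ddof=1)
    betas = np.array([
        np.cov(constituent_returns[:, i], etf_returns, ddof=1)[0, 1] / var_m
        for i in range(constituent_returns.shape[1])
    ])
    portfolio_beta = float(weights @ betas)          # should come out close to 1.0
    reconstructed = constituent_returns @ weights    # weight-replicated ETF return
    tracking_error = np.std(reconstructed - etf_returns, ddof=1) * np.sqrt(252)
    return portfolio_beta, tracking_error

# Tiny synthetic demo with made-up data: an 'ETF' built from its own constituents.
rng = np.random.default_rng(3)
w = np.full(10, 0.1)
stocks = rng.normal(0.0003, 0.01, size=(500, 10))
print(beta_only_check(stocks, stocks @ w, w))        # beta ~1.0, tracking error ~0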

Tuesday 1 November 2016

Equity Factors - a humble beginning


I would like to start the process of thinking about equity factors.  The goal is to understand how they're being used and also to come up with my own way of using them. First of all I am going to invent a historical narrative as a way of understanding how factors fit in to the world of investing, and what's likely to happen to them in the future.

First, there are two communities of analyst whose work is being replaced here: equity analysts and econometricians.  For over a hundred years there have been approaches to the question: what investment should I make?  The first and, to my mind, most important kind of answer is an economics based one - which asset classes, with which distribution of capital, and with which strategies.  To answer, it would be great to have a predictive model of how the business cycle works.  That way, you can drive your asset allocation, your sector rotation and so on.

That's clearly not an easy path to take currently, though some macro based hedge funds do indeed excel at it, even over long periods of time.

I am very drawn to this approach.

But bond futures, yield curves, international fx markets, derivatives, interest rate swaps, volatility regimes are all rather hard to get your head around.

Equities are in many ways simpler for the masses to understand, and the equity markets are also quite well developed.  So there has sprung up a dedicated equity market in the West, which sometimes makes contact with economic models, but which is equally happy puttering along in its own world.

In that world, then, there are people who give advice on which equities to buy.  In reality this too is a capital allocation question: implicitly you can assume that you own the market to start with, and all you're deciding is the degree to which you idiosyncratically deviate from that position.

The corresponding (seemingly) static starting point for the economic model is some generalisation of the portfolio theory of Markowitz.  Originally that theory asked what ratio of capital should be allocated between competing assets.  Generalise that and throw in all possible investable asset types, and you can reach, in theory, a static allocation which maximises your expected returns over multiple business cycles.  And this ratio ought to be the kind of mix we all have in our pensions.

Of course, the investment industry doesn't work like this and insofar as we each manage our own pension contribution ratios, we are all likely to be sub-par on this ideal static perspective.

Next you'd like the model to move: not so much a single static set of allocation parameters but a set of time-varying ones which can become sensitive, in theory, to the vagaries of the business cycle, the credit cycle, the monetary cycle and sector rotations.

When this is all sorted out, you'd then like to optimise your participation in the various markets - the bond and stock markets both have thousands of single names in them.  Can I do better than owning the market?  On this premise most active investment management is based.  In general, the answer is no.  But marketing budgets, general ignorance and the suspicion that knowledgeable insiders might know better than you lead us to flush a large fraction of our potential lifetime investment profits down the gold plated toilets of the yachts of hedge fund managers the world over.

But equity markets take a special place in the hierarchy of potential investments.  Because they correspond to entities and behaviours we ourselves feel comfortable with, this particular asset class has become well developed and is indeed the heart of the capitalist system - or one chamber of it, anyway.

In other words there are people whose lives are dedicated to encouraging you, for a fee, to purchase a different mix of stocks than the market mix.

The investment professionals who do this come in all shapes and sizes, but on average, after fees, they are not worth it.  Individually, however, they can be.  Those individuals have a method.  That method can be parameterised and, to some extent, replicated in an algorithm.  Those algorithms constitute the centre of gravity of equity factor modelling.  They determine what gets measured, what training data-sets are out there, what perspectives people look at.  Part of the unspoken impossibility of equity factor modelling is in understanding which of these has any juice left, and how best to interpret them.

Equity factor modelling can be seen, in a way, as an attempt to do two things.  First, to take a successful investment manager's success, analyse it, parameterise it, commodify it, turn it into an algorithm and apply that algorithm on an industrial scale (covering significantly more names than any one group of investment professionals can).

Second, there is a realisation that even among the better investment professionals, there will be behavioural biases, constraints, limits, and that in the market itself, there will also be mispricings based on these same behavioural biases - persistent discrepancies which can theoretically be exploited to make your returns on equity investment better than the market average.

This immediately raises the question: how long can any one wrinkle be exploited for?  In the last three or so years, literally hundreds of multi-factor ETFs have been created.  If these prove popular, then they ought to iron away any advantage they spot.  So the long term success of equity factor modelling is always going to be self-limiting.

Having said that, certainly for the next 30 years, there appears to be potentially enough juice in equity factor modelling to make it a viable and attractive business.

There's a bit of a catch-22 here.  By the time products exist which help equity factor modellers get their hands on the right kind of historical factor data, you're already some way down the road which leads to the factor being arbitraged away.  And if you're too early to the party with a new factor nobody has examined yet, there's a chance the market won't yet see it as a mispricing, no convergence will happen, and you'll observe no edge in your post hoc P&L.

I will call this the entropic fate of specific factors.  Too young, they appear to be noise to the market, too old, they are arbitraged away.

Even during their observable life, there is the problem of the optimal selection of factors for the current moment in time.  Factors come in and out of fashion.  Factors stop working for a while.  The factor mix perhaps changes in ways which are poorly understood.  This I will call the circadian rhythm of factor sets.

And at any point in the life of an active factor model, there will come a moment when a new factor is born and you want to know how best to integrate it into your factor model.  This I will call the ecological adaptation of your factor model.

This last point is related to how you decide to add factors to your model, even at the outset.

Either you can get an edge with a factor model or you can't.  Assuming a position of humility, the default model ought to be treated as a hurdle you only jump over when you are sure you can.  When in doubt, the best course is to revert to a model which just buys the market.  This ought to be a general principle which takes you all the way back up to a static asset allocation that works in all weathers and economic climates.  Only bother to deviate from the simple market-owning allocation when your confidence threshold is reached.

So the first principle of equity factor modelling is: this just may be self-deluded costly baloney, so move cautiously.

A decent starting point, to get a leg up, is to see who's in the multi-factor game and in particular who is offering multi-factor products: which factors keep coming up, and why.  Perhaps it will turn out best to just buy these ETFs, perhaps with a degree of timing driven by an economic model.

However, separate from any investment consequence, I would like to pursue a more under-the-hood academic interest in the subject.  I would like to build my own, at least in principle.