Monday, 28 November 2016

A feeling for equity factors


At this point I wanted to spend a little time thinking about how realistic equity factor modelling is, and, to the degree that it isn't realistic, what can be said about its limitations.

The first point is an obvious one about data quality.  Stocks have corporate actions.  They split, they pay regular and irregular dividends.  They become involved in M&A activity.  They dual list.  They enter and exit indices.  Each of these real-world effects, and many more, can be treated as a data quality challenge.  Some of them can also be considered the normal everyday lived experience of the average stock, and on that basis ought to be dealt with squarely by the model.

This double approach - the degree to which you pre-filter your universe versus leaving the model to cope - clearly has ramifications for the results.  Statistically, what exactly are we doing when we remove outliers, and how justified are we in doing so?  People tend to be guided by the economic and theoretical reality of stocks and the CAPM when deciding how to treat data issues.

But in the end we are trying to do the following: find a single, more or less stable, relationship - a linear one - which captures this primary 'like Jagger/not like Jagger' distinction in stocks.  In other big data enterprises, sparse data is a problem, but with equity factors, for the life of each stock, you will often have continuous (end of day) prices over the examination period.  Clearly some stocks are going to be more liquid than others, but they're all likely to be liquid enough to provide an end of day price.  And thanks to the very idea of beta, we can be assured that we're always finding end of day correlations with the whole market, which means that the correlation data embedded in the stock's current beta number is also not going to run into the sparse data problem.

The whole idea of CAPM and equity factors is underpinned  by the idea that it is meaningful to talk about the average properties of stocks - that there is, in a sense, an average stock.

If you picture the primary regression chart underlying CAPM, consider whether some shape other than a straight line might fit the relationship.  The security market line (SML) shows you what return you ought to expect from the equity market you just analysed, knowing only that stock's beta - or, alternatively, how leveraged you chose to be in any given stock.  But imagine this line isn't straight.

How would it deviate from linearity?  With high beta stocks (and when working with stocks, the assumption is that they usually have positive beta, with a notional holding of a single unit held long), what does that do to the expected payoff?

Expected payoffs, if they partition the stock space, can be considered additive.  In other words, pretend you divide all stocks into two groups - say, those whose corporate name begins with a letter up to and including M, and all the rest.  When you calculate their SMLs separately, you'd first of all expect them to look identical.  But in any case, you could combine them to reach an average SML for the two halves.  If instead I performed the analysis on only those companies whose name starts with M, versus all the rest, you'd want to weight the two SMLs to account for the fact that the market is mostly made up of the 'non M' category.  So perhaps you weight by market capitalisation proportion.

Now start imagining some interesting partitions.  If you found a really significant partition based on some economically relevant measure, M, you could always immediately know what the 'non M' SML must look like, since the two, when combined, must give you back the SML associated with the whole market.
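To make that implied-complement idea concrete, here is a minimal sketch.  All the numbers are invented for illustration; the only real content is the cap-weighted accounting identity.

```python
import numpy as np

# Hypothetical partition of the market into group M and its complement.
# All figures are made up for illustration.
w_m = 0.30                         # market-cap weight of partition M
beta_m, excess_m = 1.2, 0.066      # beta and expected excess return of partition M
beta_mkt, excess_mkt = 1.0, 0.055  # the whole market, by construction

# The cap-weighted average of the two partitions must recover the market,
# so the complement's beta and expected excess return follow directly.
w_rest = 1.0 - w_m
beta_rest = (beta_mkt - w_m * beta_m) / w_rest
excess_rest = (excess_mkt - w_m * excess_m) / w_rest

print(f"non-M partition: beta = {beta_rest:.3f}, expected excess return = {excess_rest:.3%}")
```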

To repeat, there is usually considered to be one SML, 'the' SML, which is what lets you work with leveraged passive ETFs, for example: if you were oblivious to the real-world costs of leverage, you'd be indifferent to where on that straight line you chose to sit.  But think of non-random partitions of the stock universe.  To the degree that these partitions are information rich, each partition could be considered to have its own SML.  CAPM's spin on all this is that you're a fool to want any line other than the SML, since you're not going to get paid for concentration risk.

You can think of the market SML as a weighted combination of its partitions' component SMLs.  Or those components may themselves be non-linear.

The idea that leveraging the market to achieve whatever level of return you like is identical to selecting only high beta stocks to reach the same expected return is clearly a poor assumption.  It is rather like the idea that portfolio insurance and puts are the same thing - true in theory only.

The degree to which high beta stocks are the better route is the degree to which you might imagine the SML drooping at the high-beta end - i.e. you would be happy to accept a little less from the high beta stock implementation compared with the leveraged way of getting to the same return.

Put another way: leverage is costly, so the high beta end of the SML is likely to droop as it tries and fails to live up to the theory of CAPM.  Perhaps there's an argument that the low beta end of the SML must be correspondingly perky, in contrast to the high beta droop, for the final theoretical SML to have the particular gradient it does.  Another thought: if high beta stocks are the preferred way to achieve leverage at reasonable cost, then perhaps these stocks are prized for that very reason, and bid up.  In being too expensive, perhaps their subsequent returns are poorer as a result?

Wednesday, 16 November 2016

Two interpretations of the Security Market Line

The security market line (SML) is a straight line drawn to represent at a high level the conclusions of the CAPM.  I have two distinct ways of reading it.  First the standard one.

The chart documents some relationships for a particular stock or portfolio.  It is interesting that for the purposes of the SML it doesn't matter if you're considering a single stock or a portfolio of stocks with a set of weightings.

The Y axis shows what the CAPM, at any given moment, thinks a set of portfolios is likely to return.  The X axis shows the beta of those portfolios.  So each <x,y> co-ordinate represents a set of portfolios where each member shares the same expected return and the same beta as its cohabitees.  The set of portfolios behind a single point can be considered infinite.  And of course there are an infinite number of points on the Cartesian plane.

Only the portfolios which give the best return-for-risk profile sit on a single upwardly sloping line referred to as the SML.  These on-line portfolios are expected to return the risk free rate plus their beta times the market excess return.

The slope of the line is referred to as the Treynor ratio and equals the return expected from the market in excess of a (fairly) risk free rate.
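Written out, the line is just E[R] = R_f + beta * (E[R_m] - R_f).  A minimal sketch, with placeholder numbers rather than forecasts:

```python
def sml_expected_return(beta, risk_free=0.01, market_return=0.07):
    """Expected return implied by the SML for a portfolio with the given beta.

    risk_free and market_return are illustrative placeholders, not forecasts.
    The slope (market_return - risk_free) is the Treynor ratio / equity risk premium.
    """
    return risk_free + beta * (market_return - risk_free)

for b in (0.0, 0.5, 1.0, 2.0):
    print(f"beta {b}: expected return {sml_expected_return(b):.2%}")
```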

Passive index trackers have as their job the task of residing at the point <1, E[R_m]> in as cheap a way as possible.  That is to say, there are many portfolios which have a beta of (about) 1.0 and an expected return equal to the market return.  Passive fund managers try to implement being at this point in as cost effective a way as possible.  Passive managers of leveraged offerings try to do the same thing but at betas of 2.0, 3.0, 0.5 etc.

CAPM tells you there's no point being off the line, as you're taking diversifiable risk and hence shouldn't expect to get paid for it.  You only get paid for non-diversifiable risk, that is, risk which is correlated with the market.

Active portfolio management believes that some portfolios exist above and below the SML and can be exploited to make returns greater than the market.

Deciding which value of x you'd like is not something the model can help you with.  That represents an exogenous 'risk appetite' choice.  Once you've made that choice, the SML tells you, assuming it is based on a well functioning and calibrated CAPM, how much you can expect to make.

Let's imagine you have a normal risk appetite and set x=1.  There are many ways of constructing a portfolio which delivers that return, but being fully invested in the market portfolio is the natural choice.  You could equally be fully invested in a number of other portfolios which do the same.  Alternatively you could move along the line: keep part of your notional capital in cash and put the rest in market weights (a beta below 1), or borrow money and gear up your unit of capital to push the beta above 1.

That is, by using funding leverage, you can travel with a fixed market portfolio up and down the x-axis, in theory to any value of x.  Of course you can't actually get unlimited leverage, but the theory assumes you can.

You can achieve the same by using leveraged products - equity options or equity futures, for example.  These narrow your time horizon (theta) but in theory you don't need to worry about that.

If you try to be, e.g., fully invested and then tilt the beta by holding more high beta stocks than the market does, you will indeed see your Y value increase, but you will also be taking risk which is diversifiable - risk you are not getting paid for.  The same level of return can be achieved more efficiently with the market portfolio plus some form of leverage, and on that basis the latter approach is theoretically preferred.

In practice there are costs associated with gaining any leverage to achieve a desired return.  Perhaps a better model is a CAPM with funding costs burned in.

Also, you rarely see an SML drawn with negative x values.  There's no reason why not.  Sometimes you may be seeking a portfolio which returns less than the risk free rate (perhaps even negative returns) in certain circumstances.  In that case you'd see the expected return fall below the risk free rate as your beta goes negative.

A question arises in my head.  Long term, which is the best value of x to sit at, together with a market portfolio, if your goal is to maximise expected excess returns across all time periods and business cycles?  I think this is a variant of asking whether there's a way of forecasting the Treynor ratio (the equity risk premium).  If you could, then you could move to an x>1 construction when your model calls a rising equity risk premium, and likewise to an x<1 construction when it calls a decreasing one.

What if my equity risk premium forecaster were a random process which swept randomly through the 0.9-1.1 range?  Long term, would this not be equivalent to a steady 1.0?  Could the algorithm have a degree of mean reversion built in?  That is to say, if a long term random peppering of the 0.9-1.1 space delivers the same result as a steady 1.0, then if your active algorithm has placed you at 0.9 for a while, might it then increase the time spent in the >1.0 space?
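A quick way to test the intuition is a toy Monte Carlo.  The return and volatility numbers below are invented, and funding costs are ignored, so this is only a sketch of the comparison, not evidence about real markets.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_paths = 30, 10_000
mkt_excess, mkt_vol = 0.05, 0.18   # illustrative annual market excess return and volatility

# Annual market excess returns, shared by both strategies.
market = rng.normal(mkt_excess, mkt_vol, size=(n_paths, n_years))

steady_beta = np.ones(n_years)
random_beta = rng.uniform(0.9, 1.1, size=(n_paths, n_years))  # the 'random forecaster'

steady = (1 + steady_beta * market).prod(axis=1)
swept = (1 + random_beta * market).prod(axis=1)

print("mean terminal wealth, beta = 1.0        :", steady.mean().round(3))
print("mean terminal wealth, beta ~ U(0.9, 1.1):", swept.mean().round(3))
```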

So the SML is an SML for today, and the slope of that line may steepen or flatten through time - probably within a very tight range.

Calculating and predicting the equity risk premium seems to be perhaps an even more valuable thing to do than trying to do active equity factor portfolio modelling.

Tuesday, 15 November 2016

Scissors, a reference portfolio and a clear correlation divide

The Moves Like Jagger model (MLJ) is a kind of look inside a behaviour.  The look inside in effect chops the behaviour in two.  It acts like a pair of scissors.  All it needs is a reference behaviour.  The scissors then chop any to-be-analysed behaviour into two pieces.  One is perfectly correlated to the reference behaviour.  The other is perfectly uncorrelated with the reference behaviour.

All that the capital asset pricing model (CAPM) adds is a statement that the behaviour of the reference is worthwhile, indeed the ideal behaviour.  CAPM in effect adds morality to the scissors.  It claims that you can't act better than the reference behaviour.  A consequence of this is that the uncorrelated behaviour is in some sense wrong, sinful if you like.  Why would you do it if the ideal behaviour is to be strived for?  By making the ideal behaviour a target, you start then to see the uncorrelated behaviour as distorting, wrong, avoidable, residual.  So the language of leftovers or residua enters.

When we finally get to the equity factors active management approach, a space again opens up to re-analyse the so-called residua into a superlative component and a random component.  The superlative component is behaviour which is actually better than the reference behaviour.  Finally, after this better behaviour is analysed (it is called alpha), it is claimed that the remainder is once again residua.

The scissors operation of covariance is the tool.  CAPM is the use of the tool in a context of some assumptions around the perfection of the reference behaviour.  Post-CAPM/equity factors is the use of the scissors in the context of some assumptions around the possibility of exceeding the quality of the reference behaviour.

One aspect of CAPM I have not spent much time on is the element of risk appetite.  Let's pretend that the only asset available to you is the Vanguard market ETF and that you have 1 unit of capital which you've allocated to investing.  No sector choices are possible.  No single name choices are possible.  Are you limited to receiving on average just the market return?  No, because there's one investment decision you need to make which is prior to the  CAPM, namely how much of your investment unit you'd like to keep in cash and what fraction you'd like to invest in the market.

The way you go about that decision is an asset allocation decision, and is a function of your appetite for risk, which is said to exist prior to the CAPM reasoning.  If you have no appetite for risk you invest precisely 0% of your unit capital in the market portfolio (and receive in return the risk free rate).  In theory, you could invest 100% of your unit of capital and receive the market return.  Indeed, in theory you can invest >100% of your unit of capital, through the process of borrowing (funding leverage) or through the selection of assets with built-in leverage (asset leverage).  Through either of these techniques, or any combination of both, you can get a return which is an amplified version of the market return.

With amplified returns though, or gearing, you run the risk of experiencing gambler's ruin early in the investing game.   Gambler's ruin traditionally happens when the capital reduces to 0.  Its probability can be estimated.  With any kind of amplified return, there's a point before 0 where your broker, through margin calls, will effectively bring the investing game to a halt.
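As an illustration of that last point, here is a toy simulation of how the chance of being stopped out grows with gearing.  The drift, volatility and margin floor are all invented, so read the output as a shape, not a forecast.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_periods = 20_000, 250       # roughly a year of daily steps
mu, sigma = 0.0003, 0.012              # illustrative daily market drift and volatility
margin_floor = 0.5                     # broker stops you out if equity halves

def ruin_probability(leverage):
    """Fraction of simulated paths that hit the margin floor within the horizon."""
    equity = np.ones(n_paths)
    ruined = np.zeros(n_paths, dtype=bool)
    for _ in range(n_periods):
        r = rng.normal(mu, sigma, n_paths)
        equity = np.where(ruined, equity, equity * (1 + leverage * r))
        ruined |= equity <= margin_floor
    return ruined.mean()

for lev in (1, 2, 4, 8):
    print(f"leverage {lev}x: P(margin call within a year) ~ {ruin_probability(lev):.3f}")
```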

The process by which you decide your degree of investment and amplification in the market is an asset allocation decision.  You are, after all, splitting your unit between the market portfolio and the risk free asset.  This decision can be made once and for all - what is the single static best allocation of cash between the risk free asset and a correspondingly over- or under-invested market portfolio? - or it can be time sensitive, i.e. your decision can move with time.

Insofar as the investment community does this in a more or less correlated way this creates waves of risk-on and risk-off patterns in markets. 

Making a single fixed static allocation decision is a bit like a surfer who bobs up and down in a stationary way as waves arrive at the shore.  Trying to be dynamic about it is like that same surfer standing up at some point and trying to let a wave take him to shore on a rewarding ride.  The CAPM, in a sense, tells you nothing about which of these two approaches is best for long term returns.


Monday, 14 November 2016

Moves like Jagger


I will introduce the concepts of the capital asset pricing model and beta with an analogy.

Dancing.
We all dance.  Some are better dancers than others.  There's a lot of trans-cultural variation, to be sure, and some cross-influencing goes on.  Some don't dance at all.

Imagine your job is to build a model of how individuals dance and you come up with the following: all dance more or less like Mick Jagger.  Outrageous, I know.  But just imagine you watch how Mick Jagger dances.  You absorb that knowledge.  You have it in your head.  He's the template.  You now are armed with a reference point and set out to build your Moves Like Jagger (MLJ) model of human dance.

Imagine you find someone who dances incredibly like Jagger.  And another who moves like Jagger, but in a less spastic way.  And another who does so in a grossly caricatured way.  Perhaps yet another who only has the odd echo of Jagger in the way she claps her hands like a camp pair of cymbals.

MLJ, when applied to any human being, ought to be able to describe two things: first, just how like Jagger this person moves, and second, all the rest of their moves which don't really line up as classic Jagger moves.  So the generalised MLJ model version 1 is like a pair of scissors.  It analyses anyone's moves into those which correspond more or less to Jagger-like moves, together with all the other moves which don't seem to fit the Jagger pattern.

V1 of your model insists that moving like Jagger is somehow a dancer's best goal.  The fraction of the analysed behaviour which isn't like Jagger is deemed by you a failure.  You refer to these abortive un-Jagger moves as residual moves.  A mistake.  Clearly some dancers will make more mistakes than others.  So one possible initial model is a simplistic weighting of moves, in which you assign some fraction to a dancer's Jagger moves and the rest to mistakes: M = w_j J + (1 - w_j) E.  Read this as: my model treats a dancer's moves as some fraction w_j like Jagger's and the rest as error.

In V2 you realise some people move exactly like Jagger but more or less exaggerated.  Wilder flailing, more melodramatic hops, extreme chicken pecking  neck moves.  While others tone it down, but essentially incorporate all his moves.

So you switch to a new model, where there's a scaling factor b, measuring the degree of brio with which the dancer copies Jagger.  Again, an error term E will mop up the residual error, i.e. what remains when you've catered for the brio-adjusted moves.   M = bJ + E.  Jagger impersonators might have a brio score of 2.0.  Modest dancers might score 0.5 or lower.

The capital asset pricing model is the equivalent of the claim that the average dancer moves exactly like Jagger.  So, Iggy Pop, Ian Curtis, Morrissey, Tom Waits, you and me, and everyone else too.  When you average up all our moves, we dance on average exactly like Jagger.  He's kind of a Jungian dance archetype for us all.

In any case, V2 is a two-part analysis of moves.  Part one compares them to some reference moves, and part two characterises the remnant.    

With stock returns, the moves-like function is implemented with Cov(A,B), the covariance between period-returns for series A and period-returns for series B.  In fact, the period returns (e.g. daily returns) are adjusted to be returns in excess of a corresponding risk free rate.  In the dance analogy, the equivalent of the risk free rate would be the moves linked to breathing.  Covariance is symmetric - Cov(A,B) = Cov(B,A) - but since we are interested in the Jagger reference dance moves, we normalise the covariance by the variance of Jagger himself / the market: beta = Cov(A,M) / Var(M).  This achieves the goal of giving Jagger a brio (or beta) score of 1.0.  He dances exactly like himself.  He will also get an E score of 0.
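A minimal sketch of that calculation on simulated daily returns (the return series and the daily risk free rate are invented so the example is self-contained):

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 500
rf_daily = 0.0001                      # illustrative daily risk-free rate

market = rng.normal(0.0004, 0.01, n_days)             # 'Jagger' / the market
stock = 0.7 * market + rng.normal(0, 0.015, n_days)   # a stock with true brio of about 0.7

mkt_ex, stk_ex = market - rf_daily, stock - rf_daily   # excess returns

beta_stock = np.cov(stk_ex, mkt_ex)[0, 1] / np.var(mkt_ex, ddof=1)
beta_market = np.cov(mkt_ex, mkt_ex)[0, 1] / np.var(mkt_ex, ddof=1)  # the reference scores 1.0 on itself

print("stock beta :", round(beta_stock, 3))
print("market beta:", round(beta_market, 3))
```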

Equity factor portfolio management adds two elements further.  First, it adds multiple dance archetypes to the model.  Second, it assumes that it isn't the case that the average dancer dances like Jagger (or the set of Jungian dance archetypes).  That there's a possibility that some dance moves could be better than the archetypes.  New moves may be possible.

There appears to be no universally accepted theory on how to pick your archetypes.  Data driven selection has issues, and theory driven selection can go badly wrong too.  Iggy Pop, after all, dances quite a lot like Jagger anyway.  Are you really saying much by adding him in?

A final angle on this metaphor.  I always thought Jagger was doing a very poor early 60s James Brown impression anyway!  With a high comedic beta.  Similarly the choice of what 'the market' means is up for grabs.  And certainly I can see that change over time too.

Friday, 4 November 2016

Which sector to begin


My layered model is, in theory anyway, going to cover all the relevant sectors and then some.  But where should I start?  Let me initially make some comments about the pros and cons of starting with each of the sectors.
These comments coalesce my initial views on the sectors together with my assessment of the likely economic future.  Regardless of my final sector choice, I will be picking an ETF which trades on a US exchange and which has liquidity and, likely, a preponderance of US then European corporate entities in it.  So my focus will not just be on global economics but also US and European in particular.  My horizon of choice is 2 years starting from now.

The macro economy, the US economy and the European economy are all three inhabiting a unique economic climate.  We have slowly emerged from a significant synchronised world recession, and these economies all bear a large and growing burden of debt.  The termination of so-called financial repression is hotly debated (2016) and US interest rates will certainly soon be on the rise.  How far and how steep the rise will be is uncertain - perhaps to 2%, historically low, before we can expect to hit another recession.  At that point central banks will perhaps need to implement further rounds of quantitative easing, or dismantle the structural impediments to running an economy with significantly negative real interest rates.

It is also debatable just why the various rounds of quantitative easing have not resulted in greater levels of inflation.  Meanwhile, productivity growth is sub-par and returns to labour are notoriously low versus returns to capital.  This, together with a natural uptick in returns to the well educated as a result of globalisation, has created a growing anti-globalisation political backdrop.

Sustained low rates encourage the growth of borrowing and make the burden of current borrowings easier to bear.  This becomes less true as interest rates rise.  Savers benefit (the wealthy save a larger fraction of their income than the poor) and borrowers lose out as the cost of new borrowing grows.  But existing borrowings are based on an at-onset nominal level, and terms are often fixed for the life of the deal or for 2-5 years on longer borrowings - notably domestic property.  For holders of debt like this, some inflation erodes the burden of their debt.

Energy
Energy is a large, multi-faceted and I think, complex sector.  It is subject to world geo-political risk and to the economic and commodity cycles.  It also is subject to innovation risk - note how the new generation of US natural gas suppliers are going head to head with Saudi Arabia and the resulting oil price volatility.  This sector, while offering huge opportunity for big wins and losses, I think is too complex to be my first sector.

Materials 
Industrials 
I like these categories a lot.  They represent the core of the real economy and hence will be heavily influenced by it.  They are close enough to manufacturing to allow dramatic growth and contraction based on workable economies of scale.  The downside is that they too are sensitive to the hard-to-predict commodity cycle and to the effects of political instability generally.

Consumer discretionary 
Consumer staples 
I'm lumping these two together.  I'd like to know when they broke out as two peer sectors, as a matter of historical fact.  But generally this is very consumer based and hence sensitive to the business cycle (and the commodity cycle too).  There's a huge range of companies in here.  The essential business model is the manufacture and sale of millions of consumable objects to millions of consumers.  It is a scale sector.  If those objects cost little and get bought often by the average consumer, they are more likely to sit in the staples category; rarely bought, expensive objects sit more in the discretionary one.

This is such a heterogeneous (and large) sector that I wonder how stable the in-sector fundamentals would be.

Healthcare 
This is also a growth sector which I like.  The biggest problem is the likelihood of political interference.  However there will be a continuing need for this service and governments, directly or indirectly, will be funding it.  A recent survey of the UK's National Health Service found that, of the lifetime cost associated with an average person's use of the service, the lion's share of that cost happens in the last 6 months of their life, when expensive operations, death-fighting treatments, intensive support, palliative care, pain relief, therapy, etc all happen most frequently.

A fair fraction of the healthcare offering, I would imagine, is service based, which is potentially harder to scale.

Financials 
Another sector I am going to eliminate quickly as a first sector candidate is financials.  The reason is that financials present unique valuation problems for analysts and they are tied in special ways to the credit cycle and to the economic cycle generally.  Whilst their in-sector interpretation might be consistent, the meaning of all those fundamental factor levels will be, due to the mix on their balance sheets, too different from the other sectors.  Also, politics plays too large a role in these names, and this has varied too much in recent times.  I think there's a decent chance that this pattern will continue given the overall anti-globalisation mood.

Information technology 
Telecommunication services 
I lump these two together.  In a way telecommunication services is the most visible outgrowth of information technology.  Both are reasonable candidates.  Telecoms is more regulation sensitive and by now has a small number of well known names which, through former public ownership histories or through M&A, have grown quite dominant in a regional context.  However I expect increasing levels of regulation in telecoms.

Information technology I like.  This is a decent candidate.
It is effectively a post-WWII industry and endlessly innovative.  I see great growth potential here.  It has a large number of new entrants too, with concomitant risk to incumbents.  It touches other sectors as well.  Given the rate of innovation and the ease with which information services can travel across national boundaries, I feel that this sector can also escape a degree of regulation at its innovative edges.  Eventually big hitters will succumb to regional jurisdictional demands, but that still leaves many potentially globalised companies a lot of growing space before that happens.

Utilities 
Utility companies can be thought of as dividend products with regulatory variance and with additional sensitivity to innovative insurgent companies trying to break into their market.  They also tend to be regional, often national.  Governments like there to be a local domestic champion or set of incumbents.  The barriers to entry are high.  Fees, and hence profits, are closely regulated.

Services get to be considered utilities insofar as the service they provide has come to seem, in that country, essential to an average household's happiness.

There will be many utility-like companies in various other sectors.  Utilities these days include domestic energy companies, electricity delivery companies, water companies and fixed line telephone companies.  Other utility services are performed directly by local government in partnership with outsourced private companies - for example around waste disposal.  After a period of rapid innovation, often in technology sectors, a service stabilises in its offering and gains a large fraction of the population as its customers.  This is when additional government control is initiated around the service.

Companies like this tend to be low beta, with a decent dividend.  Some utilities, one could imagine, will be around forever - water companies, electricity companies, waste disposal companies.  Some seem more time-specific - I'm thinking here of data suppliers (telecoms and technology).  Maybe in time these will settle down and become as tightly regulated as other utilities.  Some forms of insurance seem to be approaching utility status, and in a sense high street banking is also highly regulated and shares some characteristics with utilities, apart from its stock volatility.

There are subtleties within the energy utilities, since there will be unique factors associated with the cost of delivering gas for heating to a house as opposed to other forms of heating.  Households can and do switch between utilities, though often there's quite some inertia around switching.

+ these companies are often following similar business models and report their earnings under a long established set of conventions.  A consequence ought to be semantically homogeneous factors.  Also, given the more or less fungible nature of the service, fundamental factors, in a healthy competitive environment, ought to be able to distinguish successful from less successfully managed companies.

- However these companies themselves are likely to be sensitive to commodity prices worldwide and to political-regional disruption.  This is likely harder to predict.  On the plus side of this point, some oil or gas hedging can smooth out short term disruptions.

- in America a utility company is also subject to state-level political regulation, which can be quite distorting in theory.  As a near-example, consider Medicare providers.  Whilst not utilities, they are quasi-utilities under Obamacare, and statewide coverage has become a political pawn in the recent DOJ battles with merging healthcare providers.  I would guess that the variance in price action between similar utilities is small, meaning there's less juice in equity factor modelling.  Consequently, even if the model worked, it is likely to beat the market by a smaller amount.  All in all this is a reasonable first choice, but I'd only put it as high as 'reasonable'.

Real estate 
What are you doing when you buy the property market in general?  You're buying a collection of companies which all have a remarkably consistent business model.  They borrow some fraction (1-d) of the value of the property estate, valued at P.  They pay interest on the Px(1-d) borrowed at a rate of m% and receive rent on the full value at a yield of r% (m < r), and they retain a buffer such that a draw-down in the property's capital value, to say P', will not result in bankruptcy - i.e. P' less the value of the borrowings stays positive, preserving some net equity in the company.  And you do all this in as tax efficient a way as possible for your tax jurisdiction.  Key moments involve when it is necessary to roll the debt, which occurs periodically; the timing is sensitive to the volatility of equity valuations generally and to the cost of funding the estate.  It is assumed that the rental yield is less important or volatile, but this is less true for commercial property, where economic downturns make rental demand fall too.  These three factors are all inter-related.
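A stylised version of that business model, in the spirit of the description above.  All the parameter values are invented for illustration, and the solvency check is deliberately crude (no tax, no rolling of debt):

```python
# Stylised property-company model; all numbers are invented for illustration.
P = 100.0      # value of the property estate
d = 0.4        # equity fraction: the company borrows (1 - d) of the estate value
m = 0.04       # interest rate paid on borrowings
r = 0.06       # rental yield received on the full estate value (r > m)

borrowings = (1 - d) * P
net_income = r * P - m * borrowings          # rent in, interest out
equity = P - borrowings

def survives_drawdown(drawdown):
    """Does net equity stay positive if the estate value falls by `drawdown`?"""
    p_stressed = P * (1 - drawdown)
    return p_stressed - borrowings > 0

print("net rental income:", net_income)
print("starting equity  :", equity)
for dd in (0.1, 0.3, 0.5):
    print(f"survives a {dd:.0%} fall in property values:", survives_drawdown(dd))
```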

Two general observations about this model: first, there's a good degree of homogeneity to it; second, it ought to be incredibly sensitive to interest rates, property prices and the economy.

So I would expect economic factors to drive the allocation decision on this and the more traditional equity factors to drive the fundamentals of comparing one company to another.  

On this basis real estate is a reasonable initial candidate for being the first sector to look at.  It might be that there are moments when the asset allocator would flash net short, in which case the factors would look to highlight critical or unhealthy indicators of particular names.  It would be necessary to see how property companies in the US structure themselves (REIT versus property company).  Finally, you'd expect fundamental factors which track a company's debt sustainability and its degradation to be useful, as would factors which track a company's specific sensitivity to interest rate rises.

+ property is a real asset and hence has a degree of inflation protection burned in
- property funding becomes more costly in a rising rate environment, damping demand for property, all other things being equal
+ there is a recent painful US/UK memory of negative equity, and this, together with governmental involvement in regulating this market, likely means no bubble gets as out of hand as it did in the 2000s
- the primary factors which drive this market are new household formation, government regulation, and interest rates (by which I mean the business cycle and the credit cycle).  Constant and predictable levels of government regulation lead to stable factors, but there is always the risk of additional regulation.  The US and UK (and Europe too) are quite heavily involved in these markets.
+ in the US at least, property cycles tend to last 20 years or so.  The last crash was only about a decade ago so we can reasonably expect about another decade before the next big blowup

Conclusion
The two final candidates are real estate and information technology.  Real estate is, by market cap, the smallest and most stable of the sectors whereas IT is the largest.  The global dividend yield on real estate is 3.7% whereas the equivalent on IT is 1.7%.  This cuts both ways - if my strategy is to own the market (average dividend yield 2.7%) and short the sector, then own my model's weightings of the sector's stocks, there will be a higher dividend bleed from being short the real estate ETF.  The equivalent strategy with IT would see the market dividend yield more easily pay for the short sector's dividend yield.
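The carry arithmetic on the { long market, short sector ETF } pair, using the yields quoted above and ignoring the long model-names leg for simplicity, is roughly this (yields as of the figures above, not live data):

```python
# Dividend-yield bleed of the { long market, short sector ETF } pair.
market_yield = 0.027
sector_yields = {"real estate": 0.037, "information technology": 0.017}

for sector, y in sector_yields.items():
    bleed = market_yield - y   # positive means the market yield pays for the short leg
    print(f"{sector}: long-market yield minus short-sector yield = {bleed:+.1%}")
```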

I think there's more variability in interpretation on IT companies and also their business models are more heterogeneous.

EPS growth over the next 3 years is looking a lot healthier for IT and pretty flat for real estate.  However, if I implemented my { long market, short sector, long model names } portfolio, then that EPS growth creates headwinds for me - any mistake in my choice of names and I'll be losing based on expected sector growth.  Whereas with real estate, a stable earnings outlook means there's probably no reason to believe a short ETF position will get whacked on EPS.

I think the 1 trillion USD real estate market will be my first sector to explore the ideas of equity factor modelling.

Thursday, 3 November 2016

Factor multi-temporality and the layer model is born

Another element of the approach I come across when reading around equity factors, which I don't much like, is the implicit attempt to impose a single time granularity across all factors.

In their steady march to generalisation, modellers have arrived at fundamental and/or economic factor models which all sit on underlying observation data arriving at the same time frequency - often end of day.  In the same way as they assume or manufacture more-or-less homogeneous equity atoms, with a singular universal semantics, so too do they expect their operation to work at the same speed.  But some equity factors could be slow moving, some could be much more rapidly moving.  They hope that the combination of steady time evolution plus a rolling window of history over which parameter updates are performed will provide enough dynamism to allow fast moving and in-play factors to come to the fore.  I'm thinking here, on the fundamental factor side, of momentum based factors, and on the economic factor side, of major economic announcements.  Major economic announcements, scheduled and unscheduled, can move some assets, indeed some stocks, more than others.

If I had additional buckets to push stocks into, this would open up the possibility of various factor models being optimised to different speeds of the market.

A factor, after all, in its most general sense, is just a series of observables which may change the expected return of a stock.    I am all for making factors be as widely scoped as possible.  If a factor exists out there which isn't commonly recognised as such but which is predictive on the expected return of a stock, then it ought to be considered.  Time does not move homogeneously for all factors, nor does a factor have to look like a factor to the factor community.

As well as looking at sector ETFs for a leg up into the world of factor modelling, I'll also be looking at multi-factor ETFs to see what's being implemented out there, as a way of getting straight to the kinds of factors which the community keeps coming back to.  I expect to see some momentum factors in there, and some value/steady growth ones.  I expect there to be a preponderance of fundamental factors too - i.e. based on the idea that equity factor modelling is at heart the replacement of equity analysts.  But non-fundamental factors will be the heart of my approach.

This regime switching idea of mine can be described as follows.  For certain periods in the evolving life of a publicly listed company, the dominant factor is the beta to the market.  But at other times the regime switches and the dominant factor is in-play M&A, or the stock being shorted in the lead-up to and launch of a new convertible issue, or the stock being bought up by virtue of its inclusion in (or exclusion from) a major market index.  Or the stock has entered the 'week before an FOMC announcement' (it being particularly interest rate sensitive), earnings season, or a dividend ex-date period.

In keeping with my humility approach, I set the hurdle high for regimes other than the beta regime - since that's the least damaging position to adopt.

Of course driving this all would be an asset allocation model, which again defaulted in moments of ignorance to the set of parameters which are generally considered a good mix.  This would give your stock allocation some fixed amount within which to play.

The sector/geography/ETF context would be the main habitat of a stock, and it only gets stolen away by other pseudo-sectors for a set of specific reasons.  To repeat, the alternative is to include all the relevant factors on an equal footing and let the rolling calibration window drive weight to the in-play factors.  I think this is going to expose the model to sparse data problems, but in some primitive sense the two approaches can be considered compatible.  In one you get the benefit of a single driving equation, but a weakened use of the limited data underneath.

It is my view that one must be prepared to keep a really good model on ice for potentially years until the triggering signal is strong enough to activate it.  I shall refer to these as episodic factor sets.  Having them 'always on' and bleeding so-called explanatory power to them each day seems wrong to me.

So my model shapes up as follows: there's an asset allocation driving layer.  Within that, for each asset, there's a layer which sets a target long/short ratio.  (These two together represent your setting of the leverage.)  When you have your asset size and long/short ratio, you set about finding the set of stocks in your sectors/regimes which ought to contribute positively to alpha.  Finally, especially for short term economic or fundamental factors, your equity exposure can be expressed by going long or short the stock, but also through more or less complex options strategies.
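Purely as a structural sketch of those layers - the class name, field names and numbers below are placeholders of my own, not a working system:

```python
from dataclasses import dataclass, field

@dataclass
class LayeredPortfolioDecision:
    """Top-down decision structure sketched in the paragraph above."""
    asset_allocation: dict          # layer 1: capital fraction per asset class
    long_short_ratio: float         # layer 2: target long/short ratio within the equity sleeve
    sector_weights: dict            # layer 3: which sectors / regimes are switched on
    stock_weights: dict = field(default_factory=dict)   # layer 4: names expected to add alpha
    expression: str = "cash equity"                      # layer 5: cash equity, options overlay, etc.

decision = LayeredPortfolioDecision(
    asset_allocation={"equities": 0.6, "bonds": 0.3, "cash": 0.1},
    long_short_ratio=1.3,
    sector_weights={"real estate": 1.0},                 # single-sector start, per the earlier post
    stock_weights={"TICKER_A": 0.02, "TICKER_B": -0.01}, # hypothetical names
)
print(decision)
```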


Wednesday, 2 November 2016

What does 'own the market' actually mean?


Equity factor modelling is all about a comparison of two populations.  On the one hand is the set of stocks which your model suggests you own (think of this, in the limit, as a set of weights $w_s$ over an agreed universe of stocks, $S$).  In other words, you don't need to think of it as the selection of a subset of stocks out of a universe; you really are just working out what fraction of the universe to own.  Secondly, you don't really need to know what quantities to own, merely what percentage (long or short) of some reference investment amount (your capital).

This set of weights is then compared with a second set of seemingly uncontroversial weights - namely the stock weights in some well known index.  Often you'll want this to be market capitalisation weighted and not price weighted (S&P 500, not Dow Jones).  The wider this universe, the better, since you are looking in the maximum number of places for an edge.  Or so they say.
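The comparison itself is just a subtraction of the two weight vectors.  A minimal sketch with an invented four-stock universe (tickers and weights are made up):

```python
import numpy as np

# Toy universe; benchmark weights are market-cap weights, model weights are what
# the factor model would like to hold. All numbers invented.
universe = ["AAA", "BBB", "CCC", "DDD"]
benchmark_w = np.array([0.40, 0.30, 0.20, 0.10])   # cap-weighted index
model_w = np.array([0.45, 0.20, 0.25, 0.10])       # the model's preferred weights

active_w = model_w - benchmark_w    # the only thing the factor view actually changes
for name, a in zip(universe, active_w):
    print(f"{name}: active weight {a:+.2%}")
print("net active exposure:", f"{active_w.sum():+.2%}")
```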

Clearly, equity factor modelling is all about imagining the stocks of this universe as similar atoms, with more or less stable, semantically coherent 'properties' that can be read across companies.  To these atoms we then apply, in effect, a form of statistical physics.  We need to work hard to maintain the fiction of more-or-less homogeneity, since in reality there are all sorts of bespoke events which soon make you realise that the atoms are each quite different in their own way.

Nonetheless, practitioners typically set a wide universe.  The widest is the universe of stocks worldwide, though in practice the approach is usually carried out for at most the US and Europe; the ideal would be all tradeable equities.  But the decision, made many decades ago, to prefer capitalisation weighted proportions as the default must be seen for what it is - an assumption.  It has become performative in the industry, and it is hard to go back: since everyone assumes it, you must too.  Still, it is worth pointing out that it remains an assumption.

For example, sophisticated investors may have access to private equity.  Or they may prefer to ignore tiny stocks, or illiquid ones.

Many equity factors and factor databases present fundamental data-sets which are in effect sector- and accounting-regime-sensitive attributes, not globally applicable ones.  This semantic variance needs to be dealt with head on, which can only be done by fully understanding the balance sheet of the firm in each of the relevant accounting jurisdictions, together with the common sector and region/country accounting practices.

If these anomalies are not fully understood and dealt with, then you will be making comparisons between dimensions which only seem similar, leading to poor selection of portfolio weightings.  I will call this the factor polysemy issue.

So I think I will approach this from a narrower point of view.
I'd like my market to be a country-specific sector.  This effectively eliminates a lot of the semantic crossfire.  What do I lose in doing this?
Well, my universe is smaller, so the opportunity set is smaller.  But there's no evidence to suggest that an equity factor edge is more available in any one sector than another.

How will I deal with the fact that my sector, being only a sector, will not perform like the market as a whole?  Option 1: I can just accept that as a given.  In other words, I will be aiming to beat the sector ETF's performance.

Option 2: I can hedge the factor portfolio with short quantities of the ETF itself, becoming exposed only to the out-performance of the selection, and not to the ETF performance itself.

Option 3: I can hedge to the ETF as above but add back in the market.  In this way I get exposure to the market, with just the sector beta knocked out, and with my factor exposure for those sector names (long and short) I can achieve a mostly beta performance with some sector alpha.
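To keep the three options straight, here they are written out as exposures (fractions of capital; the figures are purely illustrative and ignore beta-matching of the hedge):

```python
# Illustrative construction of the three options as books of exposures.
model_names = 1.0          # long the model's weighted basket of sector names
sector_etf = 1.0           # the sector ETF
market_etf = 1.0           # the broad market ETF

option_1 = {"model names": +model_names}                               # accept sector performance
option_2 = {"model names": +model_names, "sector ETF": -sector_etf}    # pure sector out-performance
option_3 = {"model names": +model_names, "sector ETF": -sector_etf,
            "market ETF": +market_etf}                                 # market beta plus sector alpha

for label, book in [("Option 1", option_1), ("Option 2", option_2), ("Option 3", option_3)]:
    print(label, book)
```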

Option 3 opens up the possibility of the strategy being comparable in returns to the market.  It also opens up the possibility of an external 'sector switching' process which allows me to stop running one sector and start running another.  There are many cyclical phenomena which could drive this - the business cycle and observed sector rotation effects.  There can be other pseudo-sectors too - the in-play M&A sector, the convertible new issue sector.  In short, these represent a generalised way to think of 'sector' beyond e.g. GICS.

In general, option 3 can be extended to running some fractional allocation to all sectors in parallel.  So rather than turning sectors on or off in binary fashion, you ease into and out of them via a reallocation process.

This model could build well since it allows you to start small, it doesn't commit you to having a pan-sector factor set and it can be driven by economic considerations.

It also nicely partitions the universe of stock data so that each slice is held out for the appropriate model, minimising data mining and over-fitting risks.  It is grounded nicely in reality too, and it parallelises the factor efforts and domain knowledge of the respective domain experts.  The approach can work at the geographic level as well.  It is also quite possible that there are sector-specific databases of information which make sense only for stocks in that sector.

The handy thing about basing my universe on an ETF is that the provider publishes its constituents and weights regularly, so I can leverage their static data operation and their indexation algorithm.

It also suggests a first equity factor model to build - one where the only factor is the beta of the stock back to the target ETF.  This would nicely operate as a quality check for the algorithm so far.  The expected result of this will hopefully be fairly close to the performance of the ETF itself, adjusting for the ETF's own tracking error.
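One way to express that quality check is an annualised tracking error between the replicated portfolio and the ETF itself.  The series below are simulated stand-ins, so this only sketches the check, not a result:

```python
import numpy as np

rng = np.random.default_rng(3)
etf = rng.normal(0.0004, 0.01, 250)                    # stand-in for the sector ETF's daily returns
model_portfolio = etf + rng.normal(0, 0.001, 250)      # a beta-only model should sit very close to it

tracking_error = np.std(model_portfolio - etf, ddof=1) * np.sqrt(252)
print(f"annualised tracking error vs ETF: {tracking_error:.2%}")
```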

Tuesday, 1 November 2016

Equity Factors - a humble beginning


I would like to start the process of thinking about equity factors.  The goal is to understand how they're being used and also to come up with my own way of using them. First of all I am going to invent a historical narrative as a way of understanding how factors fit in to the world of investing, and what's likely to happen to them in the future.

First, there are two communities of analyst whose work is being replaced here: equity analysts and econometricians.  For over a hundred years, there have been approaches to the question: what investment should I make?  And the first and, to my mind, most important kind of answer is an economics based one - which asset classes, with which distribution of capital, and with which strategies.  To answer it, it would be great to have a predictive model of how the business cycle works.  That way, you could drive your asset allocation, your sector rotation and so on.

That's currently clearly not an easy path to take. Some macro based hedge funds do indeed excel at this, and even over long periods of time.

I am very drawn to this approach.

But bond futures, yield curves, international fx markets, derivatives, interest rate swaps, volatility regimes are all rather hard to get your head around.

Equities are in many ways simpler for the masses to understand, and the equity markets are also quite well developed.  So there has sprung up a dedicated equity investing world in the West, which sometimes makes contact with economic models, but which is equally happy puttering along in its own world.

In that world, then, there are people who give advice on which equities to buy.  In reality this too is a capital allocation question: implicitly you can assume that you own the market to start with, and all you're deciding is the degree to which you idiosyncratically deviate from that starting point.

The corresponding (seemingly) static starting point for the economic model is some generalisation of the portfolio theory of Markowitz.  Originally that theory asked what ratio of capital should be allocated between competing assets.  Generalise that up and throw in all possible investable asset types and you reach, in theory, a static allocation which maximises your expected returns over multiple business cycles.  And this ratio ought to be the kind of mix we all have in our pensions.

Of course, the investment industry doesn't work like this and insofar as we each manage our own pension contribution ratios, we are all likely to be sub-par on this ideal static perspective.

Next you'd like the model to move: not so much a single static set of allocation parameters, but a set of time-varying ones which can become sensitive, in theory, to the vagaries of the business cycle, the credit cycle, the monetary cycle and sector rotations.

When this is all sorted out, you'd then like to optimise your participation in the various markets - the bond and stock markets both have thousands of single names in them.  Can I do better than owning the market?  On this premise most active investment management is based.  In general, the answer is no.  But marketing budgets, general ignorance and the suspicion that knowledgeable insiders might know better than you lead us to evacuate a large fraction of our potential lifetime investment profits down the gold plated toilets of the yachts of hedge fund managers the world over.

But equity markets take a special place in the hierarchy of potential investments.  Because they correspond to entities and behaviours we ourselves feel comfortable with, this particular asset has become well developed and is indeed the heart of the capitalist system - perhaps one chamber anyway.

In other words there are people whose lives are dedicated to encouraging you, for a fee, to purchase a different mix of stocks than the market mix.

The investment professionals who do this come in all shapes and sizes but on average, after fees, they are not worth it.  Individually, however, they can be worth it.  Those individuals themselves have a method.  That method can be parameterised and to some extent, replicated in an algorithm.  Those algorithms constitute the centre of gravity of equity factor modelling.  They determine what gets measured, what training data-sets are out there, what perspectives people look at.  Part of the unspoken impossibility of equity factor modelling is in understanding which of these has any juice left, and how best to interpret them.

Equity factor modelling can be seen, in a way, as an attempt to do two things.  First, to take a successful investment manager's success, analyse it, parameterise it, commodify it, turn it into an algorithm and apply that algorithm on an industrial scale (covering significantly more names than any one group of investment professionals can).

Second, there is a realisation that even among the better investment professionals, there will be behavioural biases, constraints, limits, and that in the market itself, there will also be mispricings based on these same behavioural biases - persistent discrepancies which can theoretically be exploited to make your returns on equity investment better than the market average.

This immediately begs the question: how long can any one wrinkle be exploited for?  In the last 3 or so years, there have been literally hundreds of multi-factor ETFs created.  If these prove popular, then they ought to iron away any advantage they spot.  So the long term success of equity factor modelling is always going to be self-limiting.

Having said that, certainly for the next 30 years, there appears to be potentially enough juice in equity factor modelling to make it a viable and attractive business.

There's a bit of a catch-22 here.  By the time products exist which help equity factor modellers get their hands on the right kind of historical factor data, you're already some way down the road which leads to the factor being arbitraged away.  Conversely, if you're too early to the party with a new factor nobody has examined yet, there's a chance the market won't yet see it as a mispricing, no convergence will happen, and you'll observe no edge in your post hoc pnl.

I will call this the entropic fate of specific factors.  Too young, they appear to be noise to the market, too old, they are arbitraged away.

Even during their observable life, there is the problem of the optimal selection of factors for the current moment in time.  Factors come in and out of fashion.  Factors stop working for a while.  The factor mix perhaps changes in ways which are poorly understood.  This I will call the circadian rhythm of factor sets.

And at any point in the life of an active factor model, there will come a moment when a new factor is born and you want to know how best to integrate it to your factor model.  This I will call the ecological adaptation of your factor model.

This last point is related to how you decide to add factors to your model, even at the outset.

Either you can get an edge with a factor model or you can't.  Assuming a position of humility, the default factor model ought to be treated as a hurdle you only jump over when you are sure you can.  When in doubt, the best is to revert to a model which just buys the market.  This ought to be a general principle which takes you all the way back up to a static asset allocation which works in all weathers and economic climates.  Only bother to deviate from the single market owning allocations when your confidence threshold is reached.

So the first principle of equity factor modelling is: this just may be self-deluded costly baloney, so move cautiously.

A decent leverage starting point is to see who's in the multi factor game and in particular who is offering those as product offerings. Which factors keep coming up and why. Perhaps it will turn out best to just buy these ETFs , perhaps with a degree of timing driven by an economic model.  

However, separate from any investment consequence, I would like to pursue a more under the hood academic interest in the subject. I would like to build my own in principle.

Sunday, 20 March 2016

Liquidity and central bank policy

These things a twenty first century central banker knows:

The corporate world has become more complex since the advent of financial engineering.  Investors' continuous expectation that firms maximise risk adjusted returns increases the chance that a firm succumbs to the temptation of gearing, with the result that it goes out of business.  Not all firms are exposed evenly to this risk - it clusters, and the industries most at risk are those with funding and asset mismatches.  This is almost the definition of the base business model of the retail banking sector.  So banks have to set up shop at the foot of the volcano.  That's their job.  And all banks have to do this, to a greater or lesser extent.

If a central banker wants to avoid systemic risks, he tries to put in place measures which address this risk - perhaps a demand for higher capital buffers (some of which capital operates as a liquidity buffer).  But banks respond to these capital requirements by grossing down their balance sheets rather than taking the direct hit to their share price that follows from the reduction in equity returns when the regulator asks them to sit on more capital.

In the limit, not only do the banks become heavily regulated by government, but the key mechanism for allocating credit (capital) to those parts of the economy that need it becomes a quasi-government function.  Bank returns become largely policy driven and the financial services industry starts to resemble a government department with ludicrously paid employees.  It is not that regulatory imposition has unforeseen effects on firm liquidity and systemic liquidity per se; it is that these banks need to place their business at the foot of the volcano in the first place.  You can't legislate geography away, so to speak.

When firms go bust, they often go through an illiquidity phase on their way to extinction.  This must continue to happen for Schumpeterian reasons, so policy setters need to distinguish when to act and when not to act.  This determines their actions vis-a-vis their role as final liquidity providers - i.e. lenders of last resort.  But if you do this too generously you end up with an economy of zombie corporations - Japan has faced this situation for decades.  And if you don't do it actively enough, you allow avoidable contractions and recessions.  The ease with which various central banks pull this trigger largely drives the modern debate between progressivism and the Austrian or neo-classical approach in politics.

Finally, even if central bankers and policy makers decided that they'd run the risk of forcing banks to hold dramatically more capital, as a kind of ground zero solution to the negative consequences of the inherently systemic nature of banking, the industry would migrate increasingly to the shadow banking sector.  This is already happening to some degree - witness the birth of so-called peer to peer lending.

Liquidity in context IV - the life of a de facto corporate liquidity manager

This posting is about dumbing down liquidity management into language which most people can easily understand and relate to.  Liquidity management is mostly about the maintenance of good operational cash flow balances to cover the expected and predictably unexpected vicissitudes and seasonalities of corporate life.  There was a time not that long ago (up until the 1960s) when operational cash flow management was a private little secret of the treasury department.  They skimmed some free cash flow from operations and kept a store of it to meet more or less expected corporate cash call events.  When looked at this way, you suddenly realise that financial demands can in theory be a lot more predictable than operational events.  It is easier to know when you need to repay your bonds than to know when you'll need to pay for repairs after an uninsured industrial accident.  


These events in question (cash calls) can each have uncertainty attached to the cash amount and to the time schedule you'd normally think of as their defining parameters.  If you could model all cash call events somehow, then the aggregate cash schedule and its concomitant variance would feed into a pretty decent corporate liquidity management model.  At its most fundamental, these cash calls are modelled as call options on zero coupon bonds.  Each event will have its own notional value, volatility expectation and strike price.  When you aggregate this portfolio of real options up, you've got your funding liquidity mostly modelled.  One must be realistic about just what fraction of the operating corporate environment is amenable to modelling, and also about just how fast the situation could change.  The more chaotic the likelihood of change, the more difficult it is to extract value from a liquidity management regime.  Or to put it more dramatically, there's a level of chaos in the liquidity environment above which it doesn't make much sense to model liquidity.  What you modelled today becomes largely detached from what could realistically happen tomorrow.  In general, if the future state of some system is so unpredictable based on today's models, then those models aren't much use.
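To make the aggregation idea concrete, here is a minimal sketch (not the real-options valuation itself, just the cash schedule): each hypothetical cash call carries an expected amount, an expected date and uncertainties on both, and the events are assumed independent.

```python
# Minimal sketch: aggregate uncertain corporate cash calls into a monthly
# expected-outflow schedule with a crude variance estimate.  Inputs are
# hypothetical; events are assumed independent.
from dataclasses import dataclass
import math

@dataclass
class CashCall:
    name: str
    expected_amount: float   # expected cash outflow
    amount_sd: float         # uncertainty of the amount
    expected_month: int      # months from today
    month_sd: float          # uncertainty of the timing

def monthly_schedule(calls, horizon=24):
    """Spread each call over months using a simple normal timing assumption,
    returning per-month expected outflow and a rough variance."""
    exp = [0.0] * horizon
    var = [0.0] * horizon
    for c in calls:
        # probability mass that the event lands in each month (discretised normal)
        weights = []
        for m in range(horizon):
            z = (m - c.expected_month) / max(c.month_sd, 1e-6)
            weights.append(math.exp(-0.5 * z * z))
        total = sum(weights) or 1.0
        for m, w in enumerate(weights):
            p = w / total
            exp[m] += p * c.expected_amount
            # variance from both amount uncertainty and timing dispersion
            var[m] += p * (c.amount_sd ** 2) + p * (1 - p) * c.expected_amount ** 2
    return exp, var

calls = [CashCall("bond repayment", 10_000_000, 0, 12, 0.1),
         CashCall("uninsured repairs", 500_000, 300_000, 6, 3.0)]
expected, variance = monthly_schedule(calls)
```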


Why did liquidity management stop being a private skimming operation of the treasury department in the 1960s?  Partly because of advances in financial theory from the 1960s onward (Treynor) which paved the way for more sophisticated financial engineering in corporate finance departments.  At the same time, the macro-economic climate became incredibly volatile following Nixon's decision to end the Bretton Woods agreement, leading to currency volatility and destabilising inflation.  Corporate treasurers responded by bringing some basic financial engineering to the largely in-house management of corporate cash calls.  Finally, financial engineering was also focusing the minds of corporate executives at technology companies, starting in late-1950s Silicon Valley, via the issuing of executive stock options, which accounting bodies valued as stock price minus strike - effectively recognising only the intrinsic value and ignoring the time value element (we had to wait for the Black-Scholes equation for that).  This caused executives to tilt in favour of investment returns over (liquidity) risk.  In essence, really managing a firm's liquidity so that there is always a sufficient cash buffer detracts from short term investment gains.  Corporate executives, especially in 'innovative' technology companies, were now personally incentivised to maximise precisely these short term investment returns, in ways which used less capital.


What new tricks did they come up with?

How about taking those ideas from fixed income financial engineering and calculating the duration of your cash flows with a view to matching them.  Or perhaps finding some third party to write liquidity options for you, so you can hold them as a form of cheap liquidity insurance.  Or renegotiating the commitment clauses in the loan contracts you have with banks.  Or getting loans from the capital markets, dis-intermediating your firm's normal pool of lending banks.  Or stealing that idea of Markowitz and paying close attention to the free lunch you can achieve through diversification - in this case, the diversification of your funding sources.  Lastly, rather than having a pool of liquidity buffer cash sitting at a bank earning perhaps a negative real rate in times of high and volatile inflation, why not buy assets with this pool of cash, gaining a higher return while maintaining the average liquidity profile of the pool.
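The duration matching trick in that list can be made concrete with a tiny sketch; the cash flows and the flat discount rate below are invented for illustration.

```python
# Small sketch of the 'duration matching' idea: compute the Macaulay duration
# of expected cash inflows and outflows and compare them.  All cash flows and
# the flat 5% discount rate are hypothetical.

def macaulay_duration(cashflows, rate):
    """cashflows: list of (time_in_years, amount); rate: flat annual yield."""
    pv_total = sum(a / (1 + rate) ** t for t, a in cashflows)
    return sum(t * a / (1 + rate) ** t for t, a in cashflows) / pv_total

inflows = [(0.5, 4_000_000), (1.0, 4_000_000), (2.0, 5_000_000)]
outflows = [(1.0, 6_000_000), (2.0, 6_000_000)]

gap = macaulay_duration(inflows, 0.05) - macaulay_duration(outflows, 0.05)
print(f"duration gap (years): {gap:+.2f}")  # positive: inflows arrive 'later' on average
```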



Friday, 4 March 2016

Liquidity in context III


Yesterday I talked about bottom up asset liquidity.  Today I shall continue reviewing the various forms of words which appear in discussions of liquidity.

Liquidity mismatch.
Think of a firm's need for cash as a demand curve.  And its ability to get its hands on cash as a supply curve.  A liquidity mismatch occurs when these two curves are out of sync.

I shall give two made up examples - an industrial goods manufacturer and a multi-strategy hedge fund.  First the industrial goods company.

The company already has a number of loans, bonds and convertibles outstanding with a number of market participants.  It also has operating cash and holds a number of near-cash securities.  On top of all of this, it has a set of assets, new projects and ongoing projects.  These ongoing projects deliver cash flows into the organisation.  The expected magnitude and timing of these cash flows is an ongoing estimation problem for the CFO.  It is also a function of the economy generally - of sales, of a broad range of conditions, in other words.

Meanwhile its financial liabilities (those loans and bonds) have a mostly very clear timeline of coupon payments and repayment dates.  It is, of course, part of the CFO's job to manage all of this, but they are operating in an uncertain world.  Projects may bleed, they may fail catastrophically.  Macro-economic disaster might befall the economy.  What resources does the firm have to draw on to meet those more-or-less well known short term cash demands?

Side note.  The need for cash doesn't in general need to be short term, but that is clearly the most pressing end of the timeline.  The immediate future is the period which most rapidly becomes 'now' and 'now' is when a creditor may declare its dissatisfaction with the borrowing firm.

The firm has cash and cash equivalents.  Some of this is considered operating cash - money in the till, to use a shop-keeping analogy.  This cash in a sense needs to be there for the smooth operation of the day to day business of the firm.  But in an emergency this might be considered a pot to be raided.  If the company is prudent, it will also have cash and near cash reserves (certificates of deposit, short term sovereign securities).  A very conservative company might choose to hold enough in these reserves to cover the next n months of outgoings, but of course the n months will pass, and that pot needs to be replenished.  The pot itself is depleted only in exceptional circumstances.  The downside of having too big a pot of cash and cash equivalents is that it is capital sitting there earning not much more than a risk free return.  And firms have as a goal the desire to produce a return on equity in excess of the risk free rate.  Otherwise why would an investor invest in the firm in the first place?

So, assuming a new demand for cash materialised, where else might the firm look?  Perhaps new loans or new bonds.  Perhaps a rights issue (a request to current and potential equity investors to give the company cash in return for ultimate fractional ownership in the company).  Perhaps cost savings.  Perhaps the shuttering of certain projects, with concomitant staff reductions.  Perhaps the sale of certain assets in the market - plant, financial securities.  Perhaps the monetisation of some fraction of its asset base.  But as you can imagine, all these options take time, and perhaps involve some mark-down on sale prices - after all, the market might perceive the firm as executing a fire sale, so might be tempted to offer fire-sale prices.

This misalignment of (potentially immediate, potentially short term) demands for cash with (somewhat longer term) supply is what is known as a liquidity mismatch.  
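A toy way to see the mismatch is to bucket demands and supplies by how many days away each is, and check the cumulative gap at a few horizons; all figures and access lags below are invented.

```python
# Toy illustration of a liquidity mismatch: bucket demands for cash and
# sources of cash by the number of days until each is due / accessible,
# then look for horizons where cumulative demand outruns cumulative supply.
# All figures and access lags are hypothetical.

demands = {7: 5_000_000, 30: 3_000_000, 90: 10_000_000}     # days -> cash needed
supplies = {0: 2_000_000,    # operating cash, available now
            30: 4_000_000,   # near-cash reserves, after notice periods
            120: 15_000_000} # asset sales / new funding, slower to arrange

def mismatch_profile(demands, supplies, horizons=(7, 30, 90, 180)):
    profile = {}
    for h in horizons:
        need = sum(v for d, v in demands.items() if d <= h)
        have = sum(v for d, v in supplies.items() if d <= h)
        profile[h] = have - need            # negative => mismatch at this horizon
    return profile

for h, gap in mismatch_profile(demands, supplies).items():
    print(f"{h:>3}d horizon: {'shortfall' if gap < 0 else 'covered'} ({gap:+,.0f})")
```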

If you think about it, to say that a firm is experiencing a liquidity problem in the first place is to identify a more or less dramatic liquidity mismatch.  So in a sense most liquidity problems are liquidity mismatch problems, and the word liquidity can often be considered as a synonym for a liquidity mismatch problem.

In case 2, the multi-strategy hedge fund, there is a little stub of a management company managing a potentially much larger pool of investments on behalf of investors.  The management firm itself, often a partnership, received equity investment from founding partners, who are said to have committed their cash for the long(-ish) term.  It will have well understood staffing costs and fixed costs.  In some ways, the investment management firm is a bit like the 'head office' of a large goods manufacturer, but without the regional factories, offices, large staff, input supply chains, etc.  So the cash flows of the management firm are somewhat clearer.  Such management firms might have loans, but these won't typically be as well developed as those of non-financial firms.  For multi-strategy hedge funds, the 'work' happens in the collection of financial assets and liabilities within its fund(s).  The investors in the fund can be flighty, and prime brokers can adjust the generosity of their leverage terms.  Both of these create the possibility of a liquidity demand.  The fund manager only has the set of assets and liabilities in the fund to supply this needed cash.  So for them asset liquidity modelling and funding liquidity are important, as is a full incorporation of the firm's constraints into liquidity scenarios.  And where it is unrealistic to fully model the constraints, they should be approximated very conservatively.

Despite the seeming differences, both firms managing the possibility of liquidity mismatch are doing the same thing, namely being continually responsive to the balance between demands for cash and sources of cash.

In the next posting I look at the variance (and vol. of vol.) on the demand side of the 'liquidity mismatch' risk which firms of all kinds face.

Thursday, 3 March 2016

Liquidity in context - II


Last time I was thinking about funding liquidity and had in my head the multi-strategy hedge fund.  The first of the two primary demands for cash comes from prime brokers, who might offer less favourable leverage terms to the hedge fund, which would manifest itself as a demand for more cash to be deposited with them for a given set of holdings on the hedge fund's book at that PB.  The fund would then stump up more cash or gross down its set of holdings.  The second demand arises if a significant number and weight of investors in the fund decide, subject to their gates, to redeem their investment.  This is either going to be funded out of the hedge fund's cash (or cash equivalents) bucket or it will force the fund to sell some of its assets and liabilities.  Which brings me on to ...

Asset Liquidity. (Or more strictly speaking, bottom up asset liquidity).
A firm owns a number of units of some security.  The 'asset liquidity' question arises about that holding.  The form of the question is always one of T|C,F or C|T,F or F|C,T - time given cost and fraction, cost given time and fraction, or fraction given cost and time - and the source of the answer comes from (1) two facts about the firm and (2) a set of facts about the market for that security.  

The primary firm fact is the position size.  The secondary fact is which collection of constraints are apposite for the liquidation of that asset.  The constraints impose costs (financial, time, fraction) on the unwind.

The market facts are more numerous.  Measuring a market's liquidity is a large subject and the set of data to come to an opinion about its current liquidity is probably asset type and market-specific.  But in general they are statistical reads on the market.

The final piece of the puzzle is how to codify the various statistical reads on the market to produce a liquidity response curve for that market at that time.  Actually, it is not a 2D curve but a 3D surface, with the primary independent variable being F*E: the fraction F of the fund's holding of this security being targeted in the liquidity scenario at hand, multiplied by the exposure E this firm has to the asset (in simple cases, its quantity).  The surface exists for every exposure point.  In the most general case, the set of curves would extend into negative values of F*E, allowing for asymmetric markets.  The slightly simpler case is to assume the market is symmetrical and the sign of the exposure is not important.

Whilst in theory all those response curves exist, for any given day, you may only be interested in a single one of them, namely the curve associated with the F*E value in play on that day in your firm.

Usually, either the cost threshold is a parameter of the liquidity run, or the time threshold is given.  In either case, the surface becomes a curve.  E.g. Time(F*E|F=100%, C<1%) - a safe and complete wind-down curve / asset liquidity estimate.  Cost(F*E|F=50%, T=3d) - a drop-dead target of 3 days to reduce the holding size by half.  Both of these are asset liquidity estimates.

Think of the cost and time curves either in terms of the absolute cost (EUR) or time (days) for a position of size F*E to be unwound, in which case this is an upward sloping convex curve of some sort or another; or think of the cost as a cost per unit, in which case its convexity is fully explained by the expected saturation cost associated with bringing a larger and larger fraction to market.  Very liquid markets have a flat per-unit response curve both for time and cost.

I will call these per-unit response curves lower case $t_m(F_s \times E_f \mid F_s, C_s, n_f)$, where $m$ stands for a market object, $s$ marks a liquidity scenario parameter, $f$ marks a fact of the firm, and $n_f$ is the collection of firm unwind constraints.  Likewise the second of the possible asset liquidity measures is $c_m(F_s \times E_f \mid F_s, T_s, n_f)$.
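As a rough illustration of what such per-unit curves might look like, here is a sketch using a simple convex power-law shape as a stand-in for whatever would actually be fitted from the market's statistical reads; the parameters are purely hypothetical.

```python
# Sketch of per-unit response curves t_m and c_m as increasing convex
# functions of the targeted slice F*E.  The power-law form and all
# parameters (t0, c0, adv, alpha, beta) are illustrative assumptions.

def t_m(F, E, t0=1.0, adv=1_000_000, alpha=1.5):
    """Per-unit time (days) to unwind a targeted slice F*E of the holding.
    t0: time for a negligible size; adv: a notional daily capacity for the
    market; alpha > 1 gives the convex 'saturation' shape."""
    size = F * E
    return t0 * (1 + (size / adv) ** alpha)

def c_m(F, E, c0=0.0005, adv=1_000_000, beta=1.5):
    """Per-unit cost (as a fraction of value) for the same slice."""
    size = F * E
    return c0 * (1 + (size / adv) ** beta)

# A 'flat' (very liquid) market is the special case where adv is huge
# relative to the position, so the curve barely rises.
E = 5_000_000                      # firm's exposure (quantity) to the asset
for F in (0.1, 0.5, 1.0):          # fraction targeted by the scenario
    print(F, round(t_m(F, E), 2), round(c_m(F, E), 5))
```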

Wednesday, 2 March 2016

Liquidity in context - I


In this posting I'd like to talk about a couple of liquidity related phrases and their meaning.  

First of all the word 'liquidity' itself.  Someone at some point in history decided that describing the degree to which an entity is able to meet its obligations through access to cash when required, required a metaphor of a liquid.  It flows everywhere, which I think must have been the point originally.  It is from this starting point that you get the activity of liquidation, which is when non-liquid assets get disposed of (sold) for cash.  The word liquidation now also carries a strong separate sense, meaning to have one's structure (solidity) destroyed (melted).  I think this is secondary.  From bankruptcy terminology, the word has entered common parlance to mean to end or terminate something or someone.

Liquidity risk.
This term, in contradistinction to market risk, credit risk, macro-economic risk (pan-market risk), etc., is an umbrella term describing the measurement and management of the risk that an entity (typically a firm) cannot meet one or more (financial) obligations.  As such it is a species of financial risk, as opposed to non-financial risk.  Cash is a financial asset after all, so no surprise there.

Funding liquidity.
Organisations fund their operations through equity investment in the firm, financial markets debt, bank loans, and various forms of credit or leverage agreement.  Each potential provider of this funding is making an ongoing, endless decision about the firm with respect to how worried they are about getting their investment back.  It is this ongoing, endless decision which is one of the causes of funding liquidity risk.  Take, for example, a modern hedge fund.  It may have received cash from equity investors in the firm.  This cash received is used to fund projects within the firm.  The resulting equity represents a liability to the hedge fund.  It owes the equity investors.  How much it owes is a function of how the world values that equity component.  Is it worth only as much as the original capital investment (i.e. the book cost, in accounting terms) or has the firm managed to grow its enterprise value, so that the world now values the equity stake higher?  How ongoing is the re-appraisal of the value of the firm's equity?  This can vary a lot.  Publicly listed companies have active secondary markets and hence the current value of the equity is continuously evaluated.  Private firms (as a majority of hedge funds are) get their equity marked much less frequently.  Also, in one sense the final owner of the equity is irrelevant for these purposes.  Whereas equity owners may (and do) decide whether to sell their stake on the secondary market or privately all the time, the principal agents of the firm still regard this liability as ever-present.  In the general case, normal transactions in the secondary market may provide liquidity to the owners of the equity but the company itself has long since used the original capital for various purposes.  In the abstract, the equity owner owns this asset forever (even though their identity changes from secondary market trade to secondary market trade).

Next come bank loans.  Again, cash came in to the fund and a series of obligations got created.  These obligations are utterly different from the firm's obligation to the equity owner.  The loan obligation includes a lot more certainty and specificity - interest payments need to be made on certain dates, and the loan has a maturity which is well understood.  Firms use loans on an ongoing basis, so there's always a chance that the loan providers worsen the terms of the loan or decline to roll it.  This is a potential cause of funding liquidity risk.  Similarly for all forms of capital market bond - the lender is a collection of market participants, but otherwise the structure and risks are the same.  Next, a hedge fund might get leverage from prime brokers.  This amounts to a greater or lesser spending capacity for the hedge fund, at a fee paid to the prime broker.  Finally, the hedge fund itself has a number of investors in the fund vehicle itself.  These investors can be much more flighty and might decide on an ongoing basis either to keep their money in the fund or to redeem their investment, subject to an often complex set of company-imposed withdrawal constraints called 'gates'.

Funding liquidity risk can be instigated by the firm's loan creditors, by the fixed income market in its aggregate willingness to lend the firm new money on an ongoing basis, by its investors, and even by secondary effects of its equity holders, insofar as selling pressure on the firm's equity may feed back negatively into direct funding sources.  Funding liquidity risk is ultimately caused by one of two drivers - first, the set of funders collectively decide that the firm is less worthy of funding; second, the funders, either individually or collectively, themselves become stressed and are forced to reduce the level of their funding to that firm (and potentially others).

So much for the causes; the mechanism is also two-part.  Firms have ongoing funding requirements.  The degree to which this is lumpy or smooth is a whole world in itself.  But if a firm experiences a funding liquidity episode, then the funders, singly or collectively, might change or exercise clauses in the current set of in-play funding transactions so as to reduce the level of funding, or they might worsen the terms of new funding transactions with the firm and, in the limit, completely refuse to offer any additional new funding.  Loans, bonds and converts offer a degree of predictability in their prospectuses which allows for funding stability.  Prime brokers can change the terms of their implicit funding much more rapidly, pretty much overnight, and hence are a potential cause of much more immediate and unpredictable funding liquidity risk for a hedge fund.  Hedge funds don't often have loans or bonds, though, so their primary funding liquidity risk vectors are investor and prime broker flightiness.  So when it comes to estimating the nature of the funding obligation (how much needs to be liquidated and by when or at what cost), modelling investor gates and the volatility of PB leverage will need to be examined to establish the magnitudes of liquidity risks in various scenarios.
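As a very rough sketch of those two vectors - PB leverage terms worsening and investors redeeming through a gate - consider the following; the simple value-over-leverage margin rule, the 25% gate and all the numbers are illustrative assumptions, not anyone's actual terms.

```python
# Sketch of the two hedge fund funding-demand vectors named above: extra
# margin when a prime broker tightens leverage terms, and redemptions capped
# by an investor gate.  The margin rule (value / leverage), the 25% gate
# fraction and all figures are hypothetical.

def extra_pb_margin(holdings_value, old_leverage, new_leverage):
    """Additional cash the PB demands when allowed leverage falls."""
    return holdings_value / new_leverage - holdings_value / old_leverage

def gated_redemption(nav, requested, gate_fraction=0.25):
    """Cash paid out in the next redemption window under a simple gate."""
    return min(requested, gate_fraction * nav)

book, nav = 200_000_000, 1_000_000_000
demand = (extra_pb_margin(book, old_leverage=4.0, new_leverage=2.5)
          + gated_redemption(nav, requested=400_000_000))
print(f"near-term funding demand: {demand:,.0f}")
```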

Tuesday, 1 March 2016

Behavioural Economics diagram


I wanted to try to fit into one image the various levels of my understanding of the world of behavioural economics, and here it is.  The idea is that it is a bastardisation of the normal distribution, which is associated in my head with the rationalist, probabilist, utility-maximising approach to so-called economic thinking.

1.  The distribution's x-axis scale is ratio based, not linear or difference based.  From psychophysics, we learn that humans are better at giving relative price valuations than absolute ones.

2. The meaning of gain versus loss feels totally different to us.  The same economic outcome expressed as a loss or as a gain is valued differently by us - we hate to lose.

3.  The mean of this comedic curve waggles around a lot, with a listening ear, swayed by priming effects and driving the anchoring effect.

4.  Certainty effect - as you move from 99% to 100% there's a discontinuity in human thinking.  A certainty premium.

5. Then at the ends, before the certainty effect discontinuity, there are two threads, one at either extreme point.  These represent our perceived attitude to risk based on two dimensions: loss or gain; and likely or unlikely.  On the loss side, we like to 'go for broke' when facing a likely loss, whereas we become risk averse facing an unlikely loss.  On the gain side, we are risk averse facing a likely gain (a bird in the hand is worth two in the bush), and facing an unlikely gain we become 'no guts no glory' risk welcoming.
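Points 1 to 5 roughly correspond to the value and probability-weighting functions of Kahneman and Tversky's prospect theory; the sketch below uses their commonly cited parameter estimates, though the exact numbers should be treated as illustrative.

```python
# Sketch of the Kahneman-Tversky prospect theory functions gestured at above,
# using commonly cited parameter estimates (alpha ~ beta ~ 0.88, lambda ~ 2.25,
# gamma ~ 0.61); the numbers are illustrative, not definitive.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Gains curve is concave, losses convex and steeper (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p,
    with the jump to certainty at p = 1 (the certainty effect)."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Fourfold pattern in miniature:
print(weight(0.01), weight(0.99))   # small probabilities overweighted, large underweighted
print(value(100), value(-100))      # a loss hurts more than an equal gain pleases
```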