Tuesday, 1 March 2016

Behavioural Economics diagram

I wanted to try to fit into one image the various levels of my understanding of the world of behavioural economics, and here it is.  The idea is that it is a bastardisation of the normal distribution, which in my head is associated with the rationalist, probabilistic, utility-maximising approach to so-called economic thinking.

1.  The distribution's x-axis scale is ratio based, not linear or difference based.  From psychophysics, we learn that humans are better at giving relative price valuations than absolute ones.

2.  The meaning of gain versus loss feels totally different for us.  The same economic outcome expressed as a loss or as a gain is valued differently by us - we hate to lose.

3.  The mean of this comedic curve waggles around a lot, with a listening ear: it is swayed by priming effects, which drive the anchoring effect.

4.  Certainty effect - as you move from 99% to 100% there's a discontinuity in human thinking.  A certainty premium.

5.  Then at the ends, before the certainty effect discontinuity, there are two threads at either extreme point.  These represent our perceived attitude to risk along two dimensions: loss versus gain, and likely versus unlikely.  On the loss side, we like to 'go for broke' when facing a likely loss, whereas we become risk averse when facing an unlikely loss.  On the gain side, we are risk averse when facing a likely gain (a bird in the hand is worth two in the bush), while facing an unlikely gain we become 'no guts no glory' risk welcoming.
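The fourfold pattern in point 5 can be jotted down as a tiny lookup table - a sketch in Python, where the labels are mine and simply summarise the text above:

```python
# Fourfold pattern of risk attitudes (labels summarise the post's text).
# Keys are (outcome, likelihood); values are the predicted risk attitude.
FOURFOLD_PATTERN = {
    ("gain", "likely"):   "risk averse",   # a bird in the hand is worth two in the bush
    ("gain", "unlikely"): "risk seeking",  # 'no guts no glory'
    ("loss", "likely"):   "risk seeking",  # 'go for broke'
    ("loss", "unlikely"): "risk averse",   # cautious facing an unlikely loss
}

print(FOURFOLD_PATTERN[("loss", "likely")])  # -> risk seeking
```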

Liquidity constraints

An interesting angle on summarising liquidity is to think of constraints.  Constraints in the most general sense.  A constraint can be a financial obligation (for example, an expectation by creditors that a company meet its short term liabilities) or a policy constraint (the SEC mandates that mutual funds regularly state the fraction of their assets they could liquidate in a three-day time frame; an exchange has rules for liquidity providers in a market; the European regulator demands that a hedge fund prove that it can meet reasonable worst-case expectations for investor redemptions given the set of investor gates and share classes in place).

Whilst this is no doubt a massively complex domain, in essence what is happening is that certain constraints are being introduced on one or more target hypothetical unwindings.  These are really liquidity scenarios.  The constraints represent more or less realistic representations of the business context within which that liquidity scenario operates.

Or put another way, the liquidity scenario itself is defined not only by a goal but also by a collection of constraints.  Together, these represent the goal of a liquidity algorithm.

There are three levels of interest when it comes to liquidity algorithms.  The first is a firm perspective.  The firm can be a financial or a non-financial firm.  Non-financial firms may be required (by auditors operating in a specific corporate legal environment) to demonstrate they can meet their short term creditor obligations.  To become confident that this can happen, the capital structure and operating cash flows of the company need to be estimated.  This is often what a credit analyst does.  If the firm is a financial institution or fund, then it can be said to have additional opportunities to meet the same set of funding obligations by liquidating some of its positions.  This analysis would pull in market measures of liquidity for the pool of assets and liabilities of the fund.

For specific markets, there are ways to measure how liquid that market is in general.  This is useful for market participants as a fact in its own right, but it is also useful feed-in data for financial firms with a large number of marketable securities.

And at the macro-economic level, regulators and ultimately governments are interested in systemic perspectives on liquidity, both as a potential early warning indicator and, through policy response, as a means of mitigating systemic weakness in the prevailing environment.  Regulators reach into markets and into firms and impose a policy environment on them, which is another way of saying that, even from the point of view of the firm, in the general case the firm may need to be cognisant of its own balance sheet, the state of the relevant markets and, finally, the overarching policy or regulatory environment.

These three perspectives always interact.

If this is the overall shape of the liquidity landscape, how best can a general data point be represented?  I think, for the firm on any given date, a useful 3-dimensional point emerges, <T,C,F>, representing time, cost and fraction.  If any two of these three dimensions are fixed or controlled, then a bottom-up liquidity analysis algorithm can provide the third.

For example, a fully general purpose liquidity system can be supplied with a cost and a fraction (say, assuming effectively no liquidity slippage on selling an asset - cost = 0 - and a requirement to sell all of one's holding of that asset - fraction = 1), and then a time (number of days to liquidate) is the result of the bottom-up algorithm.  This I refer to as T|CF.

A second example: say you want to know what the cost would be of selling 100% of your holding in 3 days (C|FT).

Third, F|CT tells you how much of your holding you can sell at a given cost in a given number of days.

So there are at least 3 classes of problem to solve in a (corporate) liquidity system - T|CF, C|FT and F|CT - subject to a range of bespoke constraints, both policy and non-policy in origin.
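To make the three problem classes concrete, here is a minimal sketch under a deliberately toy model.  The assumptions are all mine and purely illustrative: the asset trades a constant average daily volume (ADV), we can sell at most a fixed fraction of ADV per day, and cost (in percent) is linear in the daily participation rate.  Function and parameter names are hypothetical, chosen only to show the shape of T|CF, C|FT and F|CT:

```python
import math

def days_to_liquidate(holding, adv, participation=0.2):
    """T|CF: days to sell the whole holding (F = 1) at a capped
    participation rate (the cost constraint is implicit in the cap)."""
    sellable_per_day = adv * participation
    return math.ceil(holding / sellable_per_day)

def cost_to_liquidate(holding, adv, days, impact_per_participation=0.5):
    """C|FT: percentage cost of selling 100% of the holding in `days`,
    with cost linear in the required participation rate."""
    participation = holding / (adv * days)
    return impact_per_participation * participation

def fraction_liquidatable(holding, adv, days, max_cost, impact_per_participation=0.5):
    """F|CT: fraction of the holding sellable within `days` at no more
    than `max_cost` percent cost."""
    max_participation = max_cost / impact_per_participation
    sellable = adv * max_participation * days
    return min(1.0, sellable / holding)

# Example: a 1m-share holding in a stock trading 250k shares a day.
print(days_to_liquidate(1_000_000, 250_000))           # -> 20 (days)
print(cost_to_liquidate(1_000_000, 250_000, days=20))  # -> 0.1 (% cost)
```

Under this toy model the three answers are mutually consistent: T|CF at a 20% participation cap gives 20 days, C|FT over those 20 days gives back a 0.1% cost, and F|CT at that cost and horizon returns the full holding.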

It should be obvious that you need a lot of data to do this properly: balance sheet data, equity owner or investor data, market data and constraints data.  From this data you solve one or more of the primary classes of liquidity problem - T|CF, C|FT and/or F|CT.  Of these, T|CF is of primary importance and the one which naturally comes to mind first.  But each data point <T,C,F> can in theory inform all three classes of problem.  Part of what the liquidity algorithm does is constrained aggregation, interpolation, extrapolation and (re-)presentation for end consumers.

A last point: when expressed as a point, <T,C,F> represents actual values, but when represented in the | notation, the given elements represent thresholds - i.e. T|CF means "how many days would it take me to get rid of (at least) F% of my holding of an asset at (no more than) C% cost?"