Friday, 24 August 2018

The Art (pah) of Asset (pah) Allocation

Of course calling a book The Art of Asset Allocation is just asking for trouble. Back in the olden days of investing, you bought assets, the primary uncertainty being what fraction of your investable wealth was to be allocated to which broad asset category.  These days this has been generalised to strategy allocation, for the financial industry (and for a growing number of individuals too).  You allocate to equity long short, to volatility arbitrage, to mergers and acquisitions, to capital structure arbitrage, to convertible arbitrage.  Each strategy, in other words, could contain long and also short positions, subject to financing costs and limited by a degree of leverage typically offered to hedge funds.

Hedge funds were created in 1949 by Alfred Jones (covering equity long-short strategies); convertible arbitrage was pioneered in the 1960s by Ed Thorp, after the casinos banned him for his card counting and expectation-based betting; volatility arbitrage blossomed in the years after exchange-traded equity index options met the Black-Scholes calculator; merger (risk) arbitrage had already made it into the third edition of Benjamin Graham's "Security Analysis", 1951.  Capital structure arbitrage is much more high powered than that, and had to wait until Merton's 1974 model of credit in terms of the set of assets and liabilities (including residual equity) of the firm.

A key fact about successful trading strategies is that, by definition, they become popular and 'over funded' (tragedy of the commons), leading to more money (and, on average more diluted talent) chasing the same market.   This fact ought to be written in stone on any 'guidelines for strategy allocation' work.  It is continually chipping away at the returns associated with these second generation strategies.

The first generation of strategies are the purchase of assets and liabilities on a buy-and-hold basis.  Here, the term 'asset allocation' really was apt.  The primary question facing first generation investors was: how much of which asset class to hold, and for how long until the next re-balance.  This first generation of strategies is of course still around, and super slim, in the form of the burgeoning ETF markets.  For a modern take on first generation investing, there are two paths you could go down.  The old (but still popular) and CAPM-ignorant (pre-Markowitz) strategy of not just buying the market, but attempting to buy sectors (or themes) in ratios not related to their market-cap ratios.  This shades into thematic investing.  The idea here is that the investor knows something the market doesn't.  The finance professors are usually not so keen on this strategy, which I'll call gen-1-slanted.

The second path a modern investor may take when it comes to their buy-and-hold allocation is to buy the market in toto and to fine-tune their risk appetite via leverage, achieving whatever level of relative volatility (or beta) they'd like.  If you want to own the market, the path is clear, with ETFs and futures.  If you currently have a smaller risk appetite, you place some of your funds in cash or treasuries, and thereby achieve your <1 beta.  Conversely, if you want more risk, you lever up your ownership of the market (e.g. via futures, or leveraged ETFs).
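The cash-versus-leverage dial can be made concrete.  A minimal sketch, assuming cash/treasuries have a beta of roughly 0 and the market portfolio a beta of 1, and ignoring financing costs entirely:

```python
def weights_for_target_beta(target_beta):
    """Split capital between the market (beta ~ 1) and cash/treasuries
    (beta ~ 0) to achieve a desired portfolio beta.  A negative cash
    weight means borrowing, i.e. a levered position in the market."""
    market_weight = target_beta
    cash_weight = 1.0 - target_beta
    return market_weight, cash_weight

print(weights_for_target_beta(0.5))   # half in the market, half in cash
print(weights_for_target_beta(1.5))   # 150% market, funded by 50% borrowing
```

Since portfolio beta is just the holdings-weighted average of the component betas, the weights fall straight out of the target.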

Which brings me to Darst's use again of 'art' here.  If the real problem he's expecting his investor clients to solve is one of understanding the relationship between all these strategies, then that's a big ask for many investors.  Especially the part where he asks investors to understand not only the returns and volatility of each of these strategies, but also their fundamentals and valuations, together with technical and liquidity dimensions, plus market psychology on top of all that.  In other words he claims it is an art, then expects the truly hard part to be performed by the investor (or perhaps a further set of costly advisers).

A key philosophical question which comes up, and for which the Markowitz approach may not be sufficient, is how many different kinds of strategy could there be, and how does one allocate between them.  How stable can they be?  What is the evolution of their life-cycle returns?

There's a great confluence of fairly simple mathematics here - gambler's ruin, covariance matrices, regression / series analysis - which will provide the intellectual backbone to a proper look at modelling the act of optimising the spread of your investment wealth across an unknown number of life-cycle-sensitive strategies in the face of uncertainty.  This book doesn't go anywhere near this, but I shall carry on reading it.

Wednesday, 22 August 2018

Will you still need me, will you still feed me, when I'm 640

Imagine a world where humans lived much longer than their current 70-80 year range (for Westerners).  Imagine they lived 640 years.  Earning just a single 1% above the prevailing inflation rate would transform one unit of capital into roughly 600 ($e^{6.4} \approx 602$ with continuous compounding; $1.01^{640} \approx 583$ with annual compounding).  That's surely going to be enough to retire on.  One presumption here is that nothing about the economy changes, though in a sense this could change everything - for a start there'd be a lot more capital seeking a return.  Also, at what point in those 640 years would we decide to stop working?  Nevertheless, that's an assumption of this post.
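The arithmetic behind that roughly-600 figure is quick to check; the answer differs slightly depending on whether you compound continuously or annually:

```python
import math

years, real_rate = 640, 0.01
continuous = math.exp(real_rate * years)   # e^{6.4}, continuous compounding
annual = (1 + real_rate) ** years          # 1.01^640, annual compounding
print(round(continuous), round(annual))    # roughly 602 vs 583
```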

Would it be enough to retire on?  To answer that, I'd first like to know how a typical salaried person's salary growth would slow down over the centuries.  We work from 20 to 60, approximately, and over that period we see growth in salary.  This (again in the Western world) represents a career trajectory which, I think, we can't extend onward for centuries.  The pattern of real lifetime wage growth, I strongly suspect, would flatten out after a while and we'd have a more or less inflation-stable income.  Of course, so much is uncertain here.  Would we be as productive, or less so, aged 200?  We're in the realm of science fiction, for sure, but that makes it useful to imagine a flat-lining, since one could then conceive of some parameter, ω, ranging typically between 0 and 1, which, when achieved, might lead us to retire.  The parameter represents the fraction of our mature stable salary S such that we'd be happy to retire on ωS for our remaining time alive.  By 'retire', of course, I don't mean become inactive; I mean having in essence the ability to self-fund a liveable income.

To translate this into capital terms, how much capital would one need to accumulate so that it earned us a real return of ωS indefinitely?  Let's further assume for simplicity that we immediately start earning S at the beginning of our working career.  In other words, how much capital would you need to accumulate in order to be able to pay for a perpetuity worth, in real terms, ωS paid to you yearly forever?  Given the length of time here, it is fine to approximate the fixed-term annuity with a perpetuity, since they'll both amount to a similar value, and the maths for a perpetuity is simpler.

This capital amount R would be our retirement trigger such that  $R=\omega S/r$.  With $r$ the real rate of return in the above example set at 1%, $R=100 \omega S$.  A general guideline of 67% is often given for the expected final pension of retiring Westerners.  This means on the ultra conservative estimate, you'd better have 67 times your salary before you can retire.  That's a lot.

How long would you have to work if you could put some savings fraction $\delta$ of your salary away every year, until you reached $R=67 S$?  I.e. how many annual payments of $\delta S$, growing each year in a retirement pot at a real rate of return again of 1%, would result in a pot of size $R$?  This second problem isn't a simple annuity problem: even though you're paying a fixed amount each year for $n$ years, each year your pool of retirement capital grows, and it is this larger pool which is subject to the following year's 1% real return.  This compounding element will mean many fewer years to wait for freedom from wage slavery.  But how many years?  This structure isn't a plain annuity but more like a sinking fund, whose annual payment formula is $\frac{Kr}{(1+r)^n-1}$, where $K$ is the target amount you're planning to need in $n$ years, assuming annual compounding and a real growth rate $r$ which you can consistently receive on your growing fund.

For my current needs, I'm saying that $\delta S = \frac{r\omega S/r}{(1+r)^n-1}$.  I now want to rearrange this to solve for $n$.  First of all I notice that on the top line the rates cancel, so I can write
$(1+r)^n-1= \frac{\omega S}{\delta S}$, and rather conveniently the capital amounts cancel: $(1+r)^n-1= \frac{\omega}{\delta}$.  The capital amounts cancelling merely reminds me that this simplistic analysis holds, given the same simplifying assumptions, for any wage slave, regardless of their actual income level.  Moving on, $(1+r)^n= \frac{\omega}{\delta}+1$, and taking logs on both sides, $n \ln(1+r)= \ln(\frac{\omega}{\delta}+1)$, before finally arriving at $n = \frac{\ln(1+\frac{\omega}{\delta})}{\ln(1+r)}$.

Let's plug some sample values in.  Stick with $\omega=\frac{2}{3}$.  Now, we all try to save at least 5% of our salary into the pension pot each year with our current life timeline.  Let's assume this doesn't change: $\delta = \frac{1}{20}$.  Again let us make the real return 1%.  That's $\frac{\ln(14.33)}{\ln(1.01)} \approx \frac{2.663}{0.00995}$, or 268 years (about 42% of your extended life of 640).  For reference purposes, 42% of 60 working-life years is about 25 years.  So if you start at 20 and die at 80, saving 5% a year on the expectation of two-thirds final salary means roughly 25 years of saving - retirement around age 45.
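The formula $n = \ln(1+\omega/\delta)/\ln(1+r)$ is easy to tabulate for the sample values used in this post:

```python
import math

def years_to_retire(omega, delta, r):
    """Years of saving a fraction `delta` of salary at real return `r`
    before the pot can fund a perpetuity paying `omega` of salary."""
    return math.log(1 + omega / delta) / math.log(1 + r)

omega = 2 / 3
for delta, r in [(0.05, 0.01), (0.05, 0.02), (0.10, 0.01), (0.10, 0.02)]:
    n = years_to_retire(omega, delta, r)
    print(f"save {delta:.0%} at {r:.0%} real: {n:.0f} years")
# save 5% at 1% real: 268 years
# save 5% at 2% real: 134 years
# save 10% at 1% real: 205 years
# save 10% at 2% real: 103 years
```

Note the savings rate and the real return matter, but the salary level itself has dropped out entirely, as the derivation promised.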

What if you earned a real 2% on your annual saving, all else staying the same?  You get 134 years of saving.  And if you were prepared to forgo 10% of your salary each year for pension saving, all other things the same?  You'd work for 205 years.  Next, if you got a 2% real rate and saved 10% of your salary, you'd take 103 years (16% of your potential working life) before you could retire.

According to the Fed, the Western world's long-term real rate of return runs at 5.89%.  So, unless you were unlucky enough to hit a world war, this rate of return on 10% pension contributions would have you working for only about 36 years, out of your 640 years of living.

By the way, 67% salary as an annuity, discounted at 5.89% real, costs you about 11.4 times your salary.  The major element I leave out of the above is the fact that the annuity your retirement pot buys you is not going to grow with inflation.

UK working-age income is currently (2018) 18k p.a.  So you'd need at least 205k in your pot.  For richer folks, say on 100k, you'd need more than 1.1 million in your pension pot to get a 67k lifestyle (less, once inflation has taken its bite).  I am, of course, ignoring the UK government state pension.
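Putting the perpetuity sums in one place, using the 5.89% real discount rate from above:

```python
omega, r = 0.67, 0.0589
multiple = omega / r            # pot required, in multiples of salary
print(round(multiple, 1))       # ~11.4 times salary

# Pot sizes for the two example salaries
for salary in (18_000, 100_000):
    print(salary, round(salary * multiple))
# 18k salary needs a pot of roughly 205k; 100k needs roughly 1.14m
```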

Sunday, 12 August 2018

The art of asset allocation - poor figure 1.5

Darst ends his bombastic preface with a trite lesson on the etymology of the word 'art', as an expression of something beautifully put together, with skill and in adherence to a craft's skill base.  He adds, pompously and wholly inappropriately, "In addition to these senses of the term 'art', an important reason for naming this book .. relates to the use of more than 130 illustrations and charts intended to help investors to quickly grasp and retain important asset allocation and investment concepts".  Big self-praise indeed.  I've already indicated how strongly I disagree in my first blog post on this book.

Let's take one of those early charts and dis-articulate it.  Figure 1.5 purports to show something simple and important - namely the effect of inflation on an asset, over various ranges of time and inflation rates.  How does one construct a chart like this?  Step 1 is to go into Excel, add a formula to a rectangle of cells, then take it into PowerPoint, add crude arrows over the headings and hey presto.  This isn't art.  At all.

First of all, look how he's aligned the arrows (the only possible act of creativity here).  He wants the downward-facing arrows to indicate depreciation in real value as a result of inflationary erosion, so he overlays downward-facing block arrows to semantically flag to the reader 'going down'.  However, when he comes to represent the effect of inflation, he clearly intends to have this go in the opposite direction (higher inflation, after all, erodes faster).  But putting the arrow the other way around (his claimed art-innovation here) merely shows an inflation-rate axis pointing 'up' but with numbers decreasing.  This is a visualisation mismatch - a semantically jarring chart which, far from adding to clarity, pointlessly detracts from it.

Second, I hear you say, "but the guy's a finance guy, what matters is the rigour and discipline he applies to the numbers".  Well, wrong again.  I ran three versions of this simple table in a spreadsheet: first the correct way (with geometric inflation, since the effect of inflation is geometric) and second, using annual compounding.  In neither case did I replicate his numbers.  To get his numbers I have to apply a simple-interest adjustment, a process which at one point overstates the degree of erosion (helping him make his point, but via a mechanism which is unwarranted) and fails to represent any reality of how inflation as an economic phenomenon occurs.

Here's the chart showing the erosion with geometric compounding $e^{-it}$
                       years
    inflation      1      5      10     20
      0.01       0.99   0.95   0.90   0.82
      0.02       0.98   0.90   0.82   0.67
      0.03       0.97   0.86   0.74   0.55
      0.04       0.96   0.82   0.67   0.45
      0.05       0.95   0.78   0.61   0.37
      0.06       0.94   0.74   0.55   0.30
      0.07       0.93   0.70   0.50   0.25
      0.08       0.92   0.67   0.45   0.20
      0.09       0.91   0.64   0.41   0.17
      0.10       0.90   0.61   0.37   0.14
      0.12       0.89   0.55   0.30   0.09
      0.15       0.86   0.47   0.22   0.05

Taking as a representative point the 10-year, 15% cell, and applying the annually compounded formula, I get 0.25 instead of 0.22.  I.e. you lose less.  Yet for this point he reports 0.20.  The only way to get there is to apply the following algorithm: $0.85^{10}$ - knocking a flat 15% off each year - which of course doesn't handle the compounding of inflation properly at all.
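The three candidate formulas are easy to compare at that representative point (10 years, 15% inflation):

```python
import math

i, t = 0.15, 10
geometric = math.exp(-i * t)   # continuous (geometric) erosion, e^{-it}
annual = (1 + i) ** -t         # annually compounded inflation, 1/(1+i)^t
darst = (1 - i) ** t           # a flat 15% knocked off each year
print(round(geometric, 2), round(annual, 2), round(darst, 2))
# 0.22 0.25 0.2
```

Only the last of these reproduces the figure's numbers, which is what gives the game away about how the table was built.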

In conclusion, aesthetically and numerically, I am not a fan of figure 1.5.  Also, I note that western economies will try to position themselves at 2-3% inflation.  Let's assume this continues to be the case, more or less, until one dies.  That guarantees close to a 50% loss in real terms over the first half of the average working person's career.  Over the full 40 years, two thirds of your purchasing power would be eroded by keeping your capital under a mattress in these circumstances.

The Art of Asset Allocation

I'm starting to read "The Art of Asset Allocation" by David Darst.  It already possesses the dust jacket and typography of a mostly-empty finance book, and I have decided to go hard on it.

I've read the preface and chapter 1 and I can see that he does two things with diagrams.  One, his largely textual figures simply recapitulate the same message as the body text, in essence doubling up the message and flabbing out the book's contents.  Two, he sees this as some form of art, whereas in reality it is largely a PowerPoint mockery of art.

Here's his chapter 1 message.  The first enemy of the capital owner is the ever-present influence of inflation, eating away at capital's purchasing power.  This, of course, is a message about returns (real returns being greater than zero, in fact) and not specifically about asset allocation at all.  But it is fine nonetheless to borrow this core concern of finance, out of kilter though it is here.

A point I really don't like about chapter one, though it is partly fair, is the way the author makes too many decision points in asset allocation hinge on investor preference.  For example, inter-asset correlation isn't really an investor decision at all.  There's of course a choice to be made about the window upon which you base your correlations, but this isn't simply a function of investor preference.  This, ideally, is something the asset allocation adviser can do for you, or failing that, the adviser can provide estimates which will be at least as informed as your own.

Also, what's going on with figure 1.2, which seemingly gives four fundamental meanings of asset allocation?  If you read these meanings closely, you'll find that they're largely repeats of each other - blending trade-offs is really just the same as balancing characteristics, and setting constraints on representation is precisely a re-description of the very act of diversification.  Perhaps an unacknowledged goal for the author is to have a chunky tome.

In terms of ideas from the book, chapter 1 introduces us to the following.  First, there's a sequence of six steps in asset allocation (the diagrammatic 'art' here is a series of six boxes with a horizontal arrow running across the top - so 'artless', really, would be a better adjective; he also labels most of these plodding PowerPoint/Excel efforts "source: the author").  Second, there's a Maslow-style foundations pyramid - and already at this point one asks what the relationship is between the steps and the pyramid, and to what degree these too overlap.  Third is a kind of meta-analysis - the pros and cons of engaging in asset allocation.  Fourth, Darst reminds us of a pedestrian and widely recognised distinction between asset categories which protect capital and those which grow it.  Lastly, and to continue situating this quotidian distinction, he reminds us of the two elements of the entropic bite of financial reality - inflationary erosion and short-term volatility - financially dying of old age versus dying young in an accident.

I'm keen to re-describe this more concisely and in an order which makes more logical sense, but I'm sticking with the criticism of chapter 1 as I see it.

So, these steps of his.  First, specify assumptions about the future expected behaviour of asset classes.  This is a task he pretty much assigns jointly to the capital owner and the allocation specialist.  I challenge this, and will challenge it at later points in this book.  It ought to be the job of the allocation specialist alone; making it the capital owner's job feels like pre-emptive blame sharing.  By this step, Darst doesn't mean the classic portfolio optimisation step of deciding where the capital owner wants to be on the efficient frontier - that happens in step two.  No, by step one he means listing the future expected returns, associated risk (vol) and inter-asset correlations.  This is a largely empirical exercise.  Yes, there's implicitly a model behind it, and yes, there needs to be parameter selection (which, to repeat, comes in step two).  Step one, as far as I can see, is the running of a mean-variance-correlation analysis on universally observed asset categorisations.  This could be a single input data set for all capital owners.  Nor does Darst hint at the monumental and incomplete effort this entails for the whole of humanity.  Describing this as the capital owner spelling out his assumptions on future returns, volatility and correlation is akin to a maths teacher asking his pupils to explain calculus to him before he starts the lesson.  Part of the motivation of this blame-sharing move is that the models used are rear-view-mirror models masquerading as future-seeing machines.  But the financial world of tomorrow is always somewhat surprising.  The best these models can do is to adopt some form of maximum likelihood estimation principle on the empirical data or, through sheer random luck or through prescient and incredibly rare analysis, make statements about the financial future not observable in that data.
Claiming that instead it ought to come printed on a page under the arm of a capital owner in that first series of meetings with his well-paid allocation analyst is quite improper.

I notice in passing that books like this, and certainly this book, rely heavily on the adjectival space of 'discipline'.  This is probably for two reasons.  First, the finance industry takes so much money off capital owners that it must repeatedly be made clear to them that nothing is being wasted here.  Second, books like this are a form of management consultancy brochure, exploiting and dumbing down academic research while also papering over the reality of how money managers actually work.  There's frequently little concern for asset allocation precision and, believe it or not, for empirical analysis.  Thus 'discipline' is a marketing utopia.

Step two.  The selection of the right set of assets which "match the investor's profile and objectives", and picking the appropriate point on the risk/return profile.  I imagine that, in the limit, this is largely answering the same question for everyone everywhere.  Imagine a book which had a chapter for each of the currencies of the world.  In each chapter, a section for the capital owner's age, and within each section, some relevant data.  This single book should answer most of these questions for most investors.  Also I think the idea of hiring an allocation analyst to deal with only a subset of your capital, and perhaps for a specific objective, is less optimal (though of course it happens) than a singular view of the person and their hopes for their capital through their whole life, permeating across all levels of capital from a single dollar to many billions.  You'd then only need to pick a new chapter or section if your base currency jurisdiction or capital level changes significantly.  I'd also assume that all assets would be owned, even in fractional weightings.  That way, the act is not one of selection but one of allocating a weight - of deciding where to slice the pie (or, more generally, of re-slicing an already sliced pie).

Another slight digression.  A capital owner has already, implicitly or explicitly, made an allocation decision.  Even if they hold their capital in US dollars under a bed, this is an allocation decision.  So the asset allocation industry is always in the game of making a series of re-allocations across time.  The act of reallocating, however, in any discussion I've seen of it, is implicitly described as a singular, complete act of re-slicing a pie of capital.  However, there's a way of adjusting the slices which may be more efficient, and more in tune with how capital arrives with capital owners.  And that is to allocate any new incremental capital in such a way that the slices move to the weights you desire, without touching the current set of capital allocations.  This would work if the re-balancing occurred in line with the arrival of new capital.  For the sake of giving this a name, I will call it marginal re-balancing.  In the limit, ignoring costs, the marginal re-balancing process is continuous.

Related to marginal re-balancing is the idea that, for all reasonable sets of allocation decisions given the multifarious behaviours of world economies, there might be certain low or high values for these re-balancing weights such that one can say each asset class has a fixed core allocation.  This is, of course, a popular and well understood allocation idea (core allocations and peripheral adjustments).  An advantage of recognising this point lies in the likely reduced transaction costs associated with permanently holding a large fraction (in aggregate more than e.g. 50%) of one's capital in the respective asset classes.  I will refer to this as the core allocation stability thesis.  It is either true at meaningful allocation levels or it isn't.  It is mathematically true that you're always going to get some non-zero threshold weighting for each asset.  The empirical question is whether these cores are in fact large enough in magnitude.
In answering this question, we need first to answer a different question, which is how dynamic is the theoretically perfect allocation algorithm likely to be?  A highly dynamic algorithm might require 0% in US equities at some point.  A conservative one perhaps looks for a set of long term fixed (or, in the limit, actually fixed) set of allocations.

Going back to the idea of continuous, theoretically ideal re-balancing.  The other element of a general modelling of this is a description of how the marginal fractional unit of capital arrives at the pool of extant capital which the capital owner possesses, and at what point in the capital owner's existence.  If capital arrives steadily (net capital grows steadily), this lines up well with the theoretical idea of continuous allocation decisions.  If net capital grows in a more volatile way over a person's life, then that phenomenon too might itself be an input to the ideal capital allocation process.  This process can be referred to as the net capital growth process, and it can have a (stochastic) volatility.
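The marginal re-balancing idea above can be sketched in a few lines.  This is a minimal illustration only: the pro-rata split of new cash across under-target assets is my own choice of mechanism, just one of many ways to steer weights toward targets without selling anything.

```python
def marginal_rebalance(holdings, targets, new_cash):
    """Allocate `new_cash` across assets so that weights move toward
    `targets` (fractions summing to 1) without selling any holdings."""
    total = sum(holdings.values()) + new_cash
    # Currency shortfall of each asset versus its target, once the new
    # cash is counted in the total
    deficits = {k: max(targets[k] * total - holdings.get(k, 0.0), 0.0)
                for k in targets}
    deficit_sum = sum(deficits.values())
    if deficit_sum == 0:
        # Already at target (only possible when new_cash is 0)
        return {k: 0.0 for k in targets}
    # Deficits always sum to at least new_cash, so this pro-rata split
    # never over-allocates
    return {k: new_cash * deficits[k] / deficit_sum for k in targets}

print(marginal_rebalance({"equities": 60.0, "bonds": 40.0},
                         {"equities": 0.5, "bonds": 0.5}, 20.0))
# all 20 of new cash goes to bonds, leaving a perfect 60/60 split
```

Run repeatedly as capital arrives, this converges on the target weights without ever triggering a sale, which is the transaction-cost appeal of the marginal approach.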

Saturday, 11 August 2018

'The Sceptical Economist'

The Skeptical Economist, Jonathan Aldred, 2009.

The Sovereign Consumer

The sovereign consumer represents the first stage in a process of economic abstraction which leads to the so-called 'homo economicus' model of motivations for human behaviour.  It is in essence a simplified model of how the average human might shop.  When extended, it becomes a view of the rational cost-benefit analysis which the average human applies to many quasi-economic decision-making situations.  The consumer is sovereign in the sense that his primary motivations are internally generated.  This would contrast, for example, with a Hegelian analysis, which looks to see how cultural influences determine human economic (and other) choices.  A typical story there would be the influence of advertising.  A second way in which they're sovereign is more internal.  The model of human shopping first advocated by the early economists (Smith) claimed that rationality was sovereign over emotion, whim, brainwashing, etc.  This is the so-called rational economic decision maker.  Notice the strong Enlightenment influence of rationalism here, but the absence of an ought/is distinction - these people reasoned (perhaps wrongly - see Rorty, etc.) that reason ought to be how we shop and by implication (wrongly) that it is a good model for how we actually shop.  Think Descartes, who applied a process of stripping away that which he could doubt, only to be left with a power to cogitate.  A similar move is made by the early economists: they found a rational homunculus at the centre too, just like Descartes and the other rationalists.  The philosophical influence of this stripping away coincides with the scientific approach which was gaining ground in Enlightenment times - namely the model-building approach of the astronomer.  Think Gauss, Bernoulli, Laplace, De Moivre, Pascal, Fermat and the foundations of probability and statistics.  One element of what they were doing was building a (simple) mathematical model.
In that realm the phenomena to be explained were based on normal distributions, unlike economic/financial data, which seems to follow Levy distributions - with fat tails.  But still, I should be very clear that we shouldn't criticise the desire to simplify using models - it is a great thing, but you must always remember the simplifying assumptions you made in your head at the start.  This book is all about going back to those simplifying assumptions, re-examining them and re-describing them as now too simplistic.  There's clearly a political agenda in his book, which again is fine, but when reading it, it is best to keep this in mind.  So think of the sovereign consumer as the Cartesian 'Cogito' on a trip to the local market.  What are the basic elements of the myth of the autonomous shopper?  I have (sovereign) preferences.  (When the model breaks down and you introduce Hegel or behavioural finance, you still have preferences, but biased/non-sovereign ones.)  I have choice - there's more than one thing I can buy (extended: more than one thing I can choose to do); i.e. I have something to do, a decision to make.  Next, I have information about the choices available to me.  Think of knowledge of the existence of choice as a kind of level-0 information.  You have information that there are three products which can satisfy your preference.  Finally there's a cost constraint - usually expressed as a budget.  Thinking in general about these four tenets of the autonomous shopper: two of them are already general purpose enough - information and choice.  The model doesn't go in for simplifying the arrival of information or the possibility of enormous choice.  And thinking about it from a political economy point of view, there aren't any vested interests in allowing any kind of simplifications to go through here.  The other two - a cost constraint and a model of preferences which are boundaried at the individual - have both been put to political use and both been recently criticised.
Actually, I could well imagine that you could have an infinite set of cost constraints and the classical model could still go through.  But since we usually have a single unit of account (money, regardless of the currency), they can probably all be 'translated' into a 'dollar value'.  But this book spends time criticising this 'cash out to cash' moralism, so there's definitely a critique there.  Finally, the idea that you draw your boundary of influence on preferences at the individual human - this is the strong Enlightenment position.  Our whole western legal and political system is predicated on a 'responsible' autonomous individual - the criminal justice system doesn't make much sense without it, nor do property rights, protestantism, reward structures, etc.  It is deeply ingrained in our western culture, so to see it pop up as a simplifying assumption in the core classical economic model is no surprise.  How does all this hang together?  Well, you can imagine that the simplistic model of 'maximum finding' on a well-defined mathematical function could be a simulacrum for 'choosing'.  The optimisation is over a function of the user's happiness.  Picking one choice from among many in the presence of information (even if the information is incomplete) is kind of like finding a maximum of a mathematical function, constrained by your budget.  You can see first of all how things like Lagrangian multipliers are going to be useful to the mathematical economist with this kind of setup.  This happiness function is usually called the utility function.  As you could imagine, this is probably a function of a lot of variables.  It must be time-varying, surely.  And it is probably a function of how wealthy you are (as per Daniel Bernoulli and the marginal utility of rich and poor people being different).  It must also be a function of what information is available to you at that time.
This is your filtration, in stochastic-processes terminology - your information set.  By saying this is a function of a lot of variables (or perhaps some of them are parameters), what you're saying is that you are faced with a family of possible utility functions, predicated on the choice you're about to make.  The process of selecting that utility function is the process of solving for (finding the variable or parameter, or vector of variables and parameters, at) a maximum of U().  Side note: is there one single utility function for a person, or one for each choice they need to make?  I guess there's a correlated set of individual utility functions which can all roll up into one major individual-level utility function.  This could even be considered to roll up to one for a community.  This introduces a stochastic element - your U() has a t-subscript, meaning that at any t in the past you solved back then to maximise your utility, and this leaves you now in the current state, looking to make the choice you are currently faced with.  Aldred's first point here is that this decision isn't sovereign.  Making the assumption that it is allows you to have a simpler utility function than it might otherwise be, so it is certainly fine as a first step in the process of properly modelling all of this.  His criticisms:-
    • preferences are hazy and hesitant
    • getting all the information on the options costs time and effort and people usually only do it to a certain degree
    • the choice you make may be fallible - you don't get the arg max of the function - we all make mistakes [this is similar to the performance errors in linguistics; note the existence of performance errors didn't ruin 'transformational grammar' for Chomsky]
    The first two of these can be considered two sides of the same coin - namely, where to lay the blame for the lack of processing capacity. The third in a way is just noise on the channel. It doesn't invalidate anything about the theory, so I consider it the weakest of the three criticisms. Now, as for the first two criticisms - from an information theory point of view, it is like saying that there is inherent uncertainty in the formation of the message in the first place. This is kind of Platonic in its criticism of the non-philosopher-king/ordinary-man approach to thinking. But if there really is uncertainty in what we want when we set out at the start of a process of choosing, then it is right to model this rather than to assume the origin of preferences to be clearer and more certain than it actually is. From an information theory point of view, it is almost as if the sender of the message doesn't know what they're sending until it has been generated. So clearly a simple information-sender type of metaphor isn't quite catching the essence of what really might be going on. But what actually is going on? Can the information theoretic model be elaborated to recover this missing part of the model of the autonomous shopper? I guess it must be that the message you're generating inside yourself is a bitmask, or incomplete message, which through iteration and discovery of 'responses' you refine further until a choice is made. The other side of the coin of criticising the brainpower of the shopper - which has the clear Platonic elitism associated with it - is a version of the same argument: assembling all the various pieces of information needed to arrive at a properly informed choice is a costly exercise.
So whereas the Philosopher Kings criticise the capacity for shoppers to know what they want, the economist's own critique of this tenet of the classical model is that 'getting to be informed' about a choice turns out to have costs associated with it. Again, this is the sort of criticism which I could well imagine being integrated back into essentially the same model. In fact, I'm sure several Nobel prizes have already been awarded for trying to do this kind of integration. If you think about it, where are you if you are faced with a budget constraint on the act of accessing your preferences? What's going to drive your preferences?

Critique of equi-available options in the standard model (p14)

Evaluating the options open to you definitely is a problem. The whole of behavioural finance can be seen as an examination of the fact that framing of options (how they're presented to you, expressed, contextualised, etc.) can lead to different choices being made by you, all other things being equal. Framing effects shouldn't be observed under the classical 'sovereign consumer' model. Aldred gives some examples from behavioural economics to illustrate framing effects:-
    • when you frame an option as the default option, it is more likely to be chosen (the nudge effect)
    • the theatre-ticket experiment: losing the cash equivalent of a ticket on the way to the theatre versus losing the already-purchased ticket itself - the financial loss is identical, but people make different decisions about whether to buy a (replacement) ticket
    • availability effect - how 'ready to hand' the information is. This influences the likelihood of an option being chosen. What makes a choice ready to hand? Well, it is like asking what makes it memorable - you heard it recently, it is vivid, striking and distinctive. This explains why we worry about striking ways of dying rather than the ones which are more likely to kill us. So, we're more likely to buy the more 'available' choices. The essence of advertising and 'brand awareness' is an exploitation of this availability effect. Think of an actual market - if you're looking for a particular brand of toothpaste there and 90% of the shops have brand #2, then the chances of brand #2 being chosen by you, even though you'd rationally prefer brand #1 (e.g. it is cheaper), go up. This could perhaps be modelled with Bayesian prior/posterior modelling. Advertising/branding is a whole section of our economy based on exploiting the availability effect
    • current emotional state framing effect - how we're feeling now affects our decisions
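The toothpaste scenario in the availability bullet can be turned into a little simulation (entirely my own invention - hypothetical brands, made-up availability weights): a chooser who only gets a limited number of 'looks', each look landing on an option with probability proportional to its availability, mostly ends up with the more available brand even when a better one exists.

```python
import random

# A hedged sketch of an availability-driven chooser: options are discovered
# with probability proportional to their availability, the search stops after
# a fixed budget of looks, and the chooser picks the best option actually seen.

def weighted_sample(options, weights, k, rng):
    """Draw up to k distinct options, more 'available' ones more often."""
    pool, w = list(options), list(weights)
    seen = []
    while pool and len(seen) < k:
        r = rng.random() * sum(w)
        acc, idx = 0.0, 0
        for i, wi in enumerate(w):
            acc += wi
            if r <= acc:
                idx = i
                break
        seen.append(pool.pop(idx))
        w.pop(idx)
    return seen

def choose(options, utility, availability, budget, rng):
    """Sample within the search budget, then take the best option seen."""
    seen = weighted_sample(options, [availability[o] for o in options], budget, rng)
    return max(seen, key=lambda o: utility[o])

rng = random.Random(0)
options = ["brand_1", "brand_2"]
utility = {"brand_1": 1.0, "brand_2": 0.8}       # brand 1 is 'rationally' better
availability = {"brand_1": 1.0, "brand_2": 9.0}  # ...but brand 2 fills the shelves

# With a budget of one 'look', availability dominates the outcome.
picks = [choose(options, utility, availability, 1, rng) for _ in range(10_000)]
share_2 = picks.count("brand_2") / len(picks)    # roughly 0.9
```

With a bigger search budget the 'rational' brand wins again, which matches the intuition that the availability effect is a symptom of costly search.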
    Framing effects - an idea

How are these framing effects related? There's a basic model here. (i) A chooser in a certain state. (ii) The reality of how frequent and 'ready to hand' instances of actual choice are there before us, including as a specific sub-case (ii-a) the initial/first/most frequent/least cost 'default' option. So Bayes' theorem could explain a lot of this as likelihood of choosing what's ready to hand. And the emotional state angle could really be incorporated into the utility function, in theory. Perhaps the rational function optimiser isn't optimising a deterministic function but one with a stochastic component. Maybe emotional demands, preferences, etc., could be modelled as stochastic processes? This would allow the model to cope with the same user making a different decision based on the internal stochastic/emotional state in his head. And the model deals with the 'cost' of finding out in a Bayesian/sampling way (a bit like the idea of demes in genetics). I.e. you sample your immediate environment up to a maximum of your cost budget for discovering and elaborating choices. This sampling is sensitive to the Bayesian priors, the frequencies of discovery. You then choose based on this sampling.

What's the real message about framing effects from the book here? That the so-called objective options - the things 'out there' which you get information on - are actually subjective. Their being subject to internal motivations and drives doesn't make it impossible to model them, though, I think. But it may make it more complex to model. Side note: this kind of reminds me of the objectivist/anglo-saxon versus subjectivist/european philosophical debates. Husserl, Bergson, Merleau-Ponty might be pleased. But not too pleased, because there is science in behavioural economics - theories are tested, invalidated, etc. This criticism of the uniformity and neutrality of objective options doesn't strike me as fatal.
It is kind of like the difference between an equi-probable prior probability distribution and one where the priors are unevenly distributed, where there are a bunch of conditional probabilities, etc. In essence you can still imagine a model of a 'slightly less than sovereign consumer' being built which retains a lot of the characteristics (mathematical rigour, for one). Isn't this just what's really going to happen in this domain? I imagine so.

Critique of definable preferences in the standard model (p16)

Aldred continues on the theme of criticising the illusion of clear preferences. We're bad at working out just how satisfied a choice will make us in the future. And in particular, we tend to over-estimate our level of happiness in the future as the result of selecting any given choice which is at hand. Not only are we bad at projecting/measuring how happy a choice will make us in the future, we're also bad at remembering how happy a choice made us in the past - we tend to remember the more extreme (bad or good) moments, and we have a tendency to over-weight the more recent memories. This is called Peak-End Evaluation. The author then talks about the interesting colonoscopy experiment. If you leave the colonoscope inserted for an extra minute once the procedure is over, the patient gives more weight to this final experience (which is, relatively speaking, less painful, though still uncomfortable). This makes a difference to their later appreciation of the whole experience. Both these forward-looking and backward-looking failures are accompanied by a third: our own concept of what is in our own self-interest may itself be confused. This is, to my mind, a philosophical criticism about the possibility of coming to know your own preferences and of working out what is in your own self-interest. This to me sounds like it has its origins in Plato and the later Wittgenstein, to pick just two examples.
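The Peak-End idea fits in a couple of lines. A common simple formalisation (my sketch, not the book's notation) scores a remembered episode as the mean of its worst moment and its final moment, which is enough to reproduce the colonoscopy paradox: an extra, milder minute adds total pain but lowers the remembered pain. The pain numbers are invented.

```python
# A hedged sketch of Peak-End evaluation: remembered (dis)pleasure is taken
# to be the average of the peak moment and the final moment, ignoring duration.

def peak_end_memory(pain_per_minute):
    """Remembered pain = mean of the peak moment and the end moment."""
    return (max(pain_per_minute) + pain_per_minute[-1]) / 2

def total_pain(pain_per_minute):
    """What a 'rational' accountant of experience would sum up."""
    return sum(pain_per_minute)

short_procedure = [4, 7, 8, 8]     # ends at its most painful moment
extended = [4, 7, 8, 8, 2]         # extra, milder minute tacked on the end

# The extended procedure contains strictly more total pain...
assert total_pain(extended) > total_pain(short_procedure)
# ...yet is remembered as less unpleasant, as in the colonoscopy study.
assert peak_end_memory(extended) < peak_end_memory(short_procedure)
```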
To make the case about confused self-interest, he tells us to consider the problem of self-control in consumption decisions. I know lack of control is a big philosophical area in itself, but the author concentrates on the consumer angle. Things we do to help ourselves cope with the reality of self-control: we put our alarm clock out of reach, we sign up to regular pension contributions. Examples of loss of self-control: we go to a restaurant determined not to have a pudding, but change our mind. We aren't rushed, or lacking information. We often seem to have competing preferences all the time.

Significant uses of the myth of the sovereign consumer

(1) In Advertising

Advertisers like to 'explain away' the apparent success of advertising by saying that it isn't 'persuasion' based but 'informational' - i.e. they aren't looking to change your mind, just to tell you what choices you have. This preserves the illusion of the sovereign consumer. But wait - most advertising isn't informational at all. The economic cheerleaders of advertising come back with a seemingly more nuanced defence: our underlying preferences don't apply to products at all - they properly concern the underlying characteristics of products. A product (consumable item) is just a bundle of characteristics. This is a pretty straightforward dereferencing move. The 'economically useful' point of advertising is that it gives you information about this bundle of characteristics. It additionally tries to persuade you towards this product. Hence your preferences remain sovereign in theory. This is the 'service' which advertising supplies - information about characteristics - and these feed into your sovereign preferences about characteristics, without changing them. Yeah, right. This does sound like an argumentative sleight of hand. Still, this is how models develop. Doesn't sound like a revolutionary move.
Apparently this line of reasoning is brought out by the advertising lobby when defending the right to target 'vulnerable' groups.

(2) In Revealed preference theory

This is a Paul Samuelson idea (1938). Basically, you can find out the best option to choose by looking at what consumers actually purchased (in general, their purchasing habits). It is a move which is very common in finance, where you imply a fact from the state of (some) market variables at a certain time. It was an attempt to replace utility theory, since this was (is) considered not operationally defined, hence not very much use. Think about how wonderful and straightforward everything would be if revealed preference theory turned out to be true. You'd be able to chart a course for better decisions simply by crunching a bunch of market time series.

Where does the idea of the sovereign consumer come from? So far he hasn't said too much about that. It is a fundamental core of economics, so it probably dates back a few hundred years, in some form or other - ie revealed preference theory isn't really the start of this idea. Ultimately it has philosophical origins (Plato), but I can also see the influence of early political economy too - Hobbes. It is a model. A simplification, and one which creates a whole range of subjects to investigate, so it is a productive simplification. And, again like all models, just because it contains some simplifications doesn't mean you should abandon it - all models are in essence like that. The sovereign consumer is the 'enlightenment' rational agent - either the undeceived Cartesian 'Ego' or perhaps the Hobbesian before he surrenders his rights to the sovereign. Just while I'm on the subject of definitions, rational expectations is the theory that people guess the future correctly, on average (Begg). This more clearly has a rationalism origin. In fact it is just the idea of rationalism applied to economic questions.
Again, you can see it as a model, where you model the decision maker as someone who is rational and can cast that rationality forward into the future to build his own model of the future based on what he currently knows. And his decisions are based on this act of rational foresight. They use all the available information. Rational expectations theory answers the (important) question: how do people form expectations of what will happen in the future (e.g. inflation expectations)? The answer this theory gives is: fully rationally! So, to come to an opinion about inflation, this theory says we take in the history of inflation, government policy, likely policy changes in the relevant future - just about anything which is humanly knowable about the situation. In this context, you can think of revealed preference theory as rational expectations + efficient market hypothesis. That's a neat way of summarising it.

If you can afford a set of options A, B, ..., Z right now (within your budget), and you picked A, then the fact you could afford B, C, ..., Z means that your revealed preference is for A over B, A over C, ..., A over Z. The theory states you'd never ever choose B, C, ..., Z in any set of options available to you in the future (ceteris paribus) if you can afford any of them, since your revealed preference is for A. It is almost like: you're in a current state at time t such that you picked A. Well, if you're currently building out a tree of possibilities of choice out into the future, with a filtration I_t - ie knowing only what you know now - then on any future occasion, faced with some other set which includes A, B, ..., Z (and possibly other things), you'd pick A over B, ..., Z again. (It is agnostic on whether your new choice set made available to you something which was even more desirable than A, and affordable.)
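This 'A over B, ..., Z, forever after' consistency requirement can be written down as a mechanical check - essentially Samuelson's Weak Axiom of Revealed Preference. The observation data below is invented for illustration; the check itself is the standard one.

```python
# A hedged sketch of a WARP consistency check: if A was ever picked while B
# was on the menu, the same consumer should never pick B while A is on the menu.

def revealed_prefs(observations):
    """Collect pairs (chosen, rejected) from each observed (menu, choice)."""
    prefs = set()
    for menu, choice in observations:
        for other in menu:
            if other != choice:
                prefs.add((choice, other))
    return prefs

def satisfies_warp(observations):
    """WARP fails when both (a, b) and (b, a) are revealed preferred."""
    prefs = revealed_prefs(observations)
    return all((b, a) not in prefs for a, b in prefs)

consistent = [({"A", "B", "C"}, "A"), ({"A", "B"}, "A")]
inconsistent = [({"A", "B"}, "A"), ({"A", "B", "C"}, "B")]

assert satisfies_warp(consistent)
assert not satisfies_warp(inconsistent)
```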
The criticism of revealed preference theory is that it is in fact impossible in the real world to say what options were eliminated to make you pick A that first time. Hence the full set of preferences wasn't revealed. But unless I knew more details about it, I don't know if this criticism is just a misunderstanding about infinite sets. Isn't it a relationship between a distinct choice and a (perhaps finite, perhaps infinite) set of choices which you could have made? Does it need to be defined over all possible affordable alternatives? That seems not necessary. Not sure. But if it doesn't have to be over all affordable alternatives - a circumscribed revealed preference - then perhaps it can survive that criticism.

(3) In discounting the cost of future lives

Later in the book, he'll give the example of 'discounting the cost of future lives', which is a move often taken in everything from road planning to global climate control debates. To work out the cost of a sickness or death they sometimes 'back out' the cost of working in an industry as a way of putting a price on illness and death. To do this, there's an implicit use of revealed preference theory again - the worker is assumed to be fully rational, with a range of choices open to him, and so you can imply the full cost of his likely early death from the premium he gets paid for doing this job. The author later separately criticises this argumentative move. He states "Real people don't choose jobs as sovereign consumers choose washing powder" (p21).
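The 'backing out' calculation here is, in its simplest hedonic-wage form, one division: the implied value of a statistical life is the wage premium divided by the extra fatality risk accepted. The figures below are round invented numbers, not data from the book.

```python
# A hedged sketch of the compensating-wage-differential calculation: if a
# risky job pays a premium for a known extra annual fatality risk, the
# revealed-preference move prices a statistical life as premium / risk.

def implied_value_of_life(wage_premium, extra_fatality_risk):
    """Value of a statistical life implied by the wage premium."""
    return wage_premium / extra_fatality_risk

# e.g. $5,000/year of extra pay for an extra 1-in-1,000 annual risk of death
vsl = implied_value_of_life(5_000, 1 / 1_000)   # about 5,000,000
```

Aldred's point is precisely that this arithmetic only prices a life if the worker really did choose like a sovereign consumer.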
    (4) In justifying economic growth (the proper subject of chapter 2, so read this as a prelim)

    The classical argument:
    (i) growth is good because it leads to more consumption
    (ii) consumption is good because it leads to more preference satisfaction [this is the step which relies on revealed preference theory & the myth of the sovereign consumer]
    (iii) preference satisfaction is good because it makes people better off.

    If consumption is so unjustified, why do we do it then? (p22)
    He's just taken away a couple of the intellectual underpinnings of the myth of economic growth being good for us. This is what he's aiming at in the whole book. But now he gives an essentially psychological explanation for why it happens. Well, there's 'why is it happening now' and there's 'why did it happen historically'. And there is probably a political reason for why it happened back then. Also, back when the developed world was developing, growth was a good thing, but not any longer. It is worth remembering this distinction.
    Because it is addictive and competitive (remember, these are psychological explanations - they were always true, even during that time of the West's development where 'going for growth' was a good thing to do). Purchases are addictive - you accommodate to the ownership of each one, and the purchasing of more and more gives you the pleasure you require. This kind of backs on to Daniel Bernoulli's claim that a marginal extra dollar isn't as beneficial to a wealthy person as to a poor person. (Which itself is an idea which leads to progressive policy decisions from governments - the best use of a dollar goes not to the person who can afford it, but to the person who'll receive the most benefit from it.) The happiness treadmill is a phrase which sums up this element of consumption. Evidence for this 'happiness accommodation': (i) lottery winners only temporarily elevate their happiness level; (ii) newly paraplegic people eventually return to their normal level of happiness.

    Conclusion