Winner of the New Statesman SPERI Prize in Political Economy 2016

Monday, 31 March 2014

The Left and Economic Policy

Why does the economic policy pursued or proposed by the left in Europe often seem so pathetic? The clearest example of this is France. France is subject to the same fiscal straitjacket as other Eurozone countries, but when a left wing government was elected in April 2012, they proposed staying within this straitjacket by raising taxes rather than cutting spending. Although sensible from a macroeconomic point of view, this encountered hostility from predictable quarters, as I noted here. But in January this year President François Hollande announced a change in direction, proposing tax cuts for business and public spending cuts. When your macroeconomic announcements are praised by Germany’s foreign minister as courageous, you should be very worried indeed. Any hopes that Hollande might lead a fight against austerity in Europe completely disappeared at that point.

You could argue that France was initially trying to oppose irresistible economic and political forces, and no doubt there is some truth in that. But what was striking was the manner in which Hollande announced his change in direction. He said “It is upon supply that we need to act. On supply! This is not contradictory with demand. Supply actually creates demand”. This is not anti-left so much as anti-economics. Kevin O’Rourke suggests this tells us that to all intents and purposes there is no left in many European countries. It would indeed be easy to tell similar stories about the centre left in other European countries, like Germany or the Netherlands. With, that is, the possible recent exception of the Vatican!

Unfortunately Europe here includes the UK. Labour’s shadow chancellor, Ed Balls, was correct in saying that the government’s austerity measures were too far, too fast, yet the party now seems to want to show they are as tough on the deficit as George Osborne. (Its opposition prior to that often appeared half-hearted and apologetic.) Again you could argue that they have no choice given the forces lined up against them, and again I would agree that this is a powerful argument, but I cannot help feeling that this is not the complete story.

I am not trying to suggest that if Labour had taken better positions, it would have necessarily made much difference. Take the issue of flooding, where Labour did try. The BBC failed to ‘call’ this issue by, for example, reproducing the official data shown here, and instead fell back on ‘views on shape of the earth differ’ type reporting. Here the BBC failed in its mission to inform, and instead behaved in a quite cowardly manner. But at least in this case Labour tried.

What strikes me about the economic pronouncements of the Labour Party is the number of tricks they miss. On too far, too fast, for example, an obvious line of attack would have been to note how Osborne did change his policy (proclaiming ‘U-turn!’, ‘finally followed our advice’ and so on). In addition they could say the recovery only took place once austerity was (temporarily) abandoned. Simplistic stuff, I agree, but this is politics. To take a much more recent example, an easy line for Labour to take on the last budget and pensions was that Osborne’s policies would reduce incomes for prudent pensioners. Yet all Labour seems to be saying is that they will support the reforms, but want to wait to see the details. In other words, there is no opposition to the government’s claim that this was a budget for savers and pensioners.

With austerity and pensions there may be subtle factors that I have missed, but in their absence one conclusion you could draw is that the Labour Party in the UK is not getting good economic advice. I’m afraid I have no deeper knowledge on whether this is true or not. That has to be the conclusion in the case of Hollande’s apparent embrace of Say’s Law. Yet I doubt that the left does not want good economic advice. As I noted here, in the last Labour government the influence of mainstream economics had never been greater. Is this a paradox?

Perhaps not, if you think about resources and institutions. Seeking out good advice (and distinguishing it from bad advice) takes either money or time. An established government finds this much easier than an opposition or a new government. When Labour came to power in 1997 they did immediately introduce well-researched and well-judged innovations in monetary and fiscal policy, but they had had 18 years to work them out.

In addition, with the Eurozone there may be a factor to do with governance. I have just read a fascinating paper by Stephanie Mudge, which compares how economic advice was mediated into left wing thinking in the 1930s compared to today. To quote: “it stands to reason that an economics that works through inherently oppositional national-level partisan institutions would be especially fertile terrain for the articulation of alternatives; an economics that keeps its distance from partisan institutions and is more removed from national politics, but is closely tied to Europe’s overarching governing financial architecture, probably is not.” What is certainly true for both the Eurozone and the UK is that leaders of independent central banks often appear naturally disposed to fiscal retrenchment.

This gives us two problems that occur for the left and not the right. However the right has two problems of its own when it comes to getting good policy advice. The first comes from a key difference between the two: the right has an ideology (neoliberalism), the left no longer does. The second is that the resources for the right often come with strings that promote the self interest of a dominant elite. So although the right has more resources to get good economic advice, these strings and their dominant ideology too often get in the way. But what this ideology and these resources are very good at is providing simple sound bites and a clear narrative.

  

Saturday, 29 March 2014

Pensions and neoliberal fantasies

As those in the UK will know, one of the major changes announced in the recent budget was to ‘free up’ defined contribution pension schemes so that recipients were no longer forced to buy an annuity with their pension, but could instead take the cash sum and spend or save it how they liked. This has been generally praised by our predominantly neoliberal press. The government’s line that this was a budget for savers and pensioners has been accepted uncritically. Giving people the choice of what to do with their money - what could be wrong with that? After the budget the UK press were full of stories of new pensioners trying to cancel their annuity contracts.

So if I suggest that those who are due to receive a defined contribution pension in the next few years and who want to invest their money prudently are likely to be worse off as a result of this budget, that might come as a bit of a surprise. The reason lies in three things economists (who are not automatically neoliberal) worry about: adverse selection, moral hazard and myopia. I will translate these in turn, in ascending order of importance. But before I do, a very simple point. Annuities are a good idea, because they insure against uncertain lifetimes. So unless you know your date of death will be earlier than the average for your age group, you should invest a large part of your pension in some form of annuity.

Moral hazard. Pensioners can now take the risk that they will not live for long and blow their pension on expensive holidays, knowing that if they are wrong and live longer they can always fall back on the welfare state. A reasonable state pension should avoid this (because those receiving it do not qualify for welfare payments), but the IFS believe (pdf) this will only be partially true in the UK.

Myopia. As Tony Yates points out “there is abundant evidence from the experimental and other empirical literature in behavioural economics and finance that we are i) terrible at paying proper attention to the wants of our future selves [usually neglecting them] and ii) terrible at responding rationally to risk.” We know (IFS again) that people underestimate the life expectancy of their age group.

Adverse selection. If everyone has to take out an annuity, annuity providers can make a reasonable guess at how long people on average will live. If instead people can choose, annuity providers face an additional uncertainty: are those not choosing to take out an annuity doing so because they believe they will not live as long as the average for their age group? If that is true - which it almost certainly is - then annuity rates will fall, because those still taking out annuities will live on average for longer. A greater concern is that this additional uncertainty will reduce annuity rates still further, as annuity providers require an additional margin to compensate them for the extra risk they face. In theory, the market could collapse completely.
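To see the mechanics, here is a toy numerical sketch of the selection effect; the pot size, life expectancies and risk margin are all invented for illustration, and interest and discounting are ignored to keep the arithmetic transparent:

```python
# A toy illustration of adverse selection in annuity pricing. All numbers
# (pot size, life expectancies, risk margin) are invented, and interest
# is ignored.

def annuity_rate(pot, expected_years, risk_margin=0.0):
    """Fair annual payout on a pot, given expected years in payment.

    risk_margin: extra years the provider assumes, to cover its
    uncertainty about who is selecting into the annuity pool.
    """
    return pot / (expected_years + risk_margin)

pot = 100_000

# Compulsory annuitisation: the pool lives the population average, say 20 years.
compulsory = annuity_rate(pot, expected_years=20)

# Voluntary annuitisation: suppose only those expecting to live longer buy
# (25 years), and the provider adds a margin for the selection uncertainty.
voluntary = annuity_rate(pot, expected_years=25, risk_margin=2)

print(f"compulsory pool: {compulsory:,.0f} per year")
print(f"voluntary pool:  {voluntary:,.0f} per year")
```

On these made-up numbers the annual payout falls from 5,000 to about 3,700, which is the sense in which prudent new pensioners lose out.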

I do not mean to imply that any of these, or even all three combined, are sufficient to justify compulsory annuitisation. What they do show is that the naive ‘choice must be good’ line may be neoliberal, but it is not economics. What does seem pretty clear is that the budget will lead to a reduction in annuity rates, so a perfectly reasonable headline after the budget would have been “Chancellor cuts incomes for new prudent pensioners”. If you do not remember that headline in your newspaper, perhaps you should change newspaper.

There is another neoliberal fantasy, and that is that private provision must be better than public provision. Yet pensions illustrate one area where this can be the opposite of the truth. Defined contribution pension schemes suffer from intergenerational risk. Suppose that those arguing real interest rates will stay low for a long time are right (secular stagnation). That means that the generation receiving their pension during this period will end up with a lower pension income than those who go before or after them. Indeed it is just this effect which has made annuities unpopular and which the government is playing to. People would like to insure against this kind of risk, but the problem in this case is that we need an insurer who in effect lives forever, so they can smooth out these good and bad times. There is just one economic actor that could do this, and that is the state. The state could do this in many ways, ranging from some form of unfunded government pension scheme to providing insurance to annuity providers.

As Tony Yates notes, you can see additional government borrowing during severe (liquidity trap) recessions as just this kind of intergenerational risk sharing. He also points out that there are time inconsistency issues when the state performs this role, although I would add that if the old continue to vote more than the young these may not be critical. If Roger Farmer is right (pdf), these issues involving uncertainty over generations may have consequences far beyond pension provision, and again the state may have a key role to play. Unless, of course, you are a neoliberal who will not countenance such things.

This post was inspired by this from Tony Yates, and also drew heavily on this post-budget briefing by IFS economist Carl Emmerson. For a much more detailed analysis (based on the government’s earlier proposals but also of relevance now) see this study (pdf) by David Blake, Edmund Cannon and Ian Tonks. 

Friday, 28 March 2014

Time inconsistency and debt

For macroeconomists

In recent posts I’ve talked about empirical work I did a decade ago on exchange rates, a non-technical piece on policy I wrote a few years ago, and some recent microfounded analysis undertaken by others. So for completeness, here is something on a pretty technical, thoroughly microfounded article that I wrote with Campbell Leith that recently came out in the JMCB.

The place to start is a result that is relatively familiar. When governments can commit and follow an optimal policy, steady state debt follows a random walk. If we start out from a position where the debt stock is at its optimal level, and then there is a shock that causes debt to rise, the optimal response is to let debt stay permanently higher. This is essentially a variant of the tax smoothing idea. Taxes could rise to bring debt back down to its pre-shock level, but that incurs current costs (higher distortionary taxes) for future benefits (less debt, therefore less debt interest, therefore lower taxes). If the real interest rate equals the rate of time preference, these current costs and future benefits exactly balance, so it is better to smooth taxes, which in turn implies letting debt stay permanently higher.
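The tax smoothing logic can be illustrated with a toy calculation; the quadratic distortion cost and all the numbers below are my own assumptions for illustration, not the paper's:

```python
# Compare servicing a debt shock forever (letting debt stay higher) with
# repaying it quickly, when distortion costs are convex (here quadratic)
# and the real interest rate equals the rate of time preference.

r = 0.05
beta = 1 / (1 + r)   # discount factor
shock = 10.0         # one-off jump in debt

def pv(payments):
    """Present value of a payment stream starting next period."""
    return sum(beta**t * x for t, x in enumerate(payments, start=1))

# Policy A: let debt stay permanently higher, raising taxes just enough
# to service the extra debt interest forever.
horizon = 2000                       # long enough to approximate infinity
smooth_tax = r * shock               # 0.5 per period, forever
pv_smooth_revenue = pv([smooth_tax] * horizon)

# Policy B: repay the debt over 5 periods with a level (annuity) payment.
annuity_factor = (1 - beta**5) / r
repay_tax = shock / annuity_factor   # about 2.31 per period for 5 periods
pv_repay_revenue = pv([repay_tax] * 5)

# Both tax streams raise the same revenue in present value terms...
pv_smooth_cost = pv([smooth_tax**2] * horizon)
pv_repay_cost = pv([repay_tax**2] * 5)
# ...but the front-loaded repayment has much larger distortion costs.
print(round(pv_smooth_revenue, 2), round(pv_repay_revenue, 2))
print(round(pv_smooth_cost, 2), round(pv_repay_cost, 2))
```

Both streams are worth 10 in present value, but the distortion cost of the smooth path is about 5 against roughly 23 for rapid repayment: that is the tax smoothing case for letting debt stay higher.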

I discussed this result in this post based on an earlier EJ article written with Tatiana Kirsanova. That paper focused on simple fiscal rules combined with optimal monetary policy. This JMCB paper just looks at optimal policy, where the government jointly controls monetary and fiscal instruments, in a very conventional New Keynesian model.

In this kind of model the description above of the optimal response to a debt shock is not quite complete. In the initial period, governments will act to reduce debt by a small amount. It is optimal to engineer a small burst of surprise inflation to reduce debt. This only occurs in the initial period, and it has a hardly noticeable impact on debt. However once that period has passed, the same incentive exists to generate a bit of surprise inflation in the new current period to further reduce debt. So the policy is time inconsistent for this reason.

The time inconsistency problem is similar to the familiar inflation bias case for monetary policy. There, the optimal policy would be to achieve the inflation target, but if the natural rate of output is inefficiently low there is an incentive to generate an initial burst of surprise inflation. The only way of removing this temptation in future periods is to run inflation well above target, which is inflation bias.

So how do we remove the incentive to keep cutting debt a little bit? The answer is obvious once you state it, but it is unfortunately non-trivial to prove, which is what the paper does. The incentive to initiate a small amount of surprise inflation to reduce excess debt exists as long as there is excess debt. To remove the incentive to cut debt by a little, you have to cut debt by a lot. The discretionary, time consistent response to a positive shock to debt is to bring debt very quickly back to the pre-shock level.
 
So the first best policy, if the government can commit, is to let debt stay higher. The inferior policy that results from a lack of commitment is that debt is brought back down very quickly. If this result seems strange, it may be because we have in the back of our minds the real world problem of deficit bias and potential default. However neither is present in this model: the government is benevolent, and there is nothing in the model to make high levels of debt problematic. 

The paper calculates welfare in both the commitment and discretionary cases. The welfare costs of any shock to the public finances are much greater under discretion, as you might guess for a policy that immediately brings debt back down to its original level. Finally the paper looks at ‘quasi-commitment’, which puts some probability on plans being revised.

The paper takes an idealised set-up (benevolent governments) in a simple, idealised model (e.g. agents live forever), so it is a long way from practical policy concerns. (If you want something along those lines, see this paper I wrote with Lars Calmfors.) However what this paper does show is that there is no necessary linkage between the problem of time inconsistency and the lack of debt control. In a simple New Keynesian model lack of credibility can lead to excessive control of debt.

    

Wednesday, 26 March 2014

It’s the economics, not the politics

A reflection on rereading an old paper

Regular readers will have noticed that I’m not a great fan of the current government’s economic policies, or its Chancellor (with the very important exception of setting up the OBR). Some will assume that this reflects a political bias - indeed those who are political animals often cannot conceive that everything is not politically driven. If you are looking for evidence either way, this post is about that.

Yesterday I received an email advertising ‘The Economics of Austerity’, which is a collection of essays published by Edward Elgar and chosen by Suzanne Konzelmann. There are 47 in all, but as this includes pieces by Hume, Smith, Ricardo and Mill, you can see that this collection aims to give a historical perspective on the subject. To be honest the email might have got lost in my in-tray if I hadn’t noticed it started ‘Dear Contributor’. Sure enough, in the eight essays dealing with the period after the financial crisis, there was my name alongside others, including some guy named Krugman.

I should have been flattered, but instead my heart sank. The selected paper was originally published in OXREP in 2010, and I remember it now as being hopelessly optimistic. It was written before the Euro crisis, so before austerity became almost universal. A little over a year later I wrote a paper with the title ‘Lessons from failure: fiscal policy, indulgence and ideology’, which seems much more appropriate in the current environment. Yet I thought I ought to reread my OXREP article, to confirm just how dated it had become.

Actually it really is not that bad. The key points are still things I believe. The financial crisis was primarily a crisis involving financial regulation rather than monetary policy or global imbalances. The idea that the Great Moderation was due to improved monetary policy was sound, but it always came with a caveat involving large negative shocks, because of the zero lower bound (ZLB). The ZLB could be mitigated using what I now call a ‘forward commitment’ to higher future inflation, but time inconsistency would make central banks reluctant to pursue that. The obvious alternative was expansionary fiscal policy. If concerns over debt were a constraint, then an effective measure was balanced budget increases in government spending.

This last point is so important, yet it can get lost in the debate. If you want to plug a demand gap at the ZLB, temporary increases in government spending financed by temporary increases in taxes work, because a lot of the tax increase comes out of saving rather than consumption. As the tax increase is temporary and only happens while there is widespread unemployment, concerns about the incentive effects of higher taxes on labour supply are at worst irrelevant. This is basic macroeconomics. But then I wrote this:
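The balanced budget mechanism can be sketched in a textbook Keynesian cross; the mpc and spending figures below are purely illustrative:

```python
# A minimal Keynesian cross: Y = C + I + G with C = a + mpc*(Y - T).
# Because part of any tax rise comes out of saving (mpc < 1), an equal
# rise in G and T still raises demand: the balanced-budget multiplier is 1.

def equilibrium_output(G, T, mpc=0.6, a=100.0, I=50.0):
    """Solve Y = a + mpc*(Y - T) + I + G for Y."""
    return (a - mpc * T + I + G) / (1 - mpc)

base = equilibrium_output(G=100, T=100)
expanded = equilibrium_output(G=120, T=120)  # raise G and T together by 20

print(round(expanded - base, 6))  # output rises by the full 20
```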

“The main problem with fiscal measures to expand the economy which do not raise debt is political. Higher government spending, even if it is temporary, raises taxes and temporarily increases the size of the state, which is unpopular on the right of the political spectrum. Fiscal transfers that move money from unconstrained savers to those who are credit constrained also tend to involve transfers from the rich to the poor. Although useful from the point of view of stimulating effective demand, they may not be politically acceptable.”

Quite. What was missing was an equivalent paragraph saying that, even if there was no economic problem with raising debt, debt financed fiscal expansion might be resisted for political reasons. However, I now want to return to where I started. The OXREP paper was written before the current coalition was elected. It set out how I saw the macroeconomics. At the ZLB you could use unconventional monetary policy, but in addition you should use fiscal stimulus, whether debt was a constraint or not.

It was politics - and ideology - that got in the way of good macroeconomics, which is why the UK and global recessions have been so prolonged. And it is that tendency that is personified by George Osborne. Even if debt was erroneously thought to be a constraint, we should have had tax financed increases in public investment rather than cuts. This extra investment could have been on politically neutral things, like flood defences.

Unfortunately austerity turns out to be part of a pattern. There is another example from the OXREP paper. When talking about an environment of low real interest rates, I noted the danger of housing bubbles, but also how specific fiscal instruments could be effective (relative to raising interest rates) in dampening these bubbles. A corollary, of course, is that these same instruments used in reverse can be used to make bubbles much worse, or indeed to initiate them, as in Help to Buy. House prices are now above their previous 2008 (bubble?) peak. Maybe this is good politics, but it is lousy economics.

So I do not think the complaint of political bias stands up. What you could perhaps argue is that I’m being politically naive: that all Chancellors maximise political advantage at the expense of national economic interest. The fact that Gordon Brown’s scorecard seems much better (including resisting Blair to stay out of the Euro) could just reflect opportunities and circumstances rather than anything else. It is certainly true that the position George Osborne inherited, as a result of the financial crash, was much more difficult than Brown’s inheritance. But go back to the time I wrote the OXREP article. At the end of 2008 Labour did undertake fiscal expansion, and it was opposed by Cameron and Osborne. As I noted here, Osborne in April 2009 argued that monetary policy should “bear the strain of stimulating demand”, seemingly oblivious to interest rates being as low as they could go. So Labour’s policy was consistent with the arguments in my OXREP article (which in turn reflected basic macroeconomics), while Conservative policy just ignored them. 

So thank you Dr. Konzelmann, for including me in such good company. But also thank you for making me reread the paper and revise my memory of it. [1]


[1] I fear I cannot also thank the publishers, who tell me that unfortunately I cannot have a complimentary copy, because of the large number of contributors. I’ll leave it to Hume, Smith, Ricardo and Mill to complain directly. I guess Keynes, who has 5 essays in the book, probably got a copy!

Tuesday, 25 March 2014

More thoughts on ‘expectations driven’ liquidity traps

Warning - this is technical, so really just for macroeconomists.

The idea that we could get stuck in a steady state with nominal interest rates at zero and negative inflation has been dismissed by some because it has been associated with the policy proposal to raise nominal interest rates to avoid that outcome. Here I want to explore an alternative interpretation that disconnects the theory from the policy.

First, a recap on the theory. Take a really simple model, where the real interest rate is positive and constant. The central bank sets the nominal rate according to a Taylor rule that obeys the Taylor principle. The rule is calibrated such that there is a steady state at which nominal interest rates are positive and inflation is at target. Furthermore under rational expectations/perfect foresight, if agents know the inflation target, the real rate and the rule, we immediately go to that steady state. In more complex and realistic models it may take time to get to this ‘intended’ steady state, but it is ‘locally’ or ‘saddlepath’ stable.

However there is another steady state, because nominal interest rates cannot go below zero. For given real interest rates, this Zero Lower Bound (ZLB) steady state must involve negative inflation. This steady state is ‘indeterminate’, which means that we can describe dynamic perfect foresight paths that start at some arbitrary level of inflation below the central bank’s target, but end up at the ZLB steady state. To see this diagrammatically, look at this earlier post, or (plus algebra) this from David Andolfatto, or pages 123 to 135 in Woodford’s Interest and Prices. [1]
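To make the two steady states concrete, here is a sketch that finds where a ZLB-truncated Taylor rule crosses the steady-state Fisher equation; the parameter values are illustrative, not taken from any of the papers discussed:

```python
# Two steady states from a Taylor rule truncated at the zero lower bound.
# Illustrative parameters: real rate 2%, inflation target 2%, coefficient 1.5.

r, target, phi = 0.02, 0.02, 1.5

def taylor(pi):
    """Nominal rate set by a Taylor rule, truncated at zero."""
    return max(0.0, r + target + phi * (pi - target))

def fisher(pi):
    """Steady-state nominal rate implied by the Fisher equation."""
    return r + pi

# A steady state is an inflation rate where the two schedules intersect.
grid = [x / 10000 for x in range(-600, 601)]   # inflation from -6% to +6%
steady_states = [pi for pi in grid if abs(taylor(pi) - fisher(pi)) < 1e-9]
print(steady_states)
```

With these numbers the intersections are at minus 2% (the ZLB steady state, where the nominal rate is zero and inflation equals minus the real rate) and at the 2% target (the intended steady state).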

Stephanie Schmitt-Grohé and Martín Uribe have a paper which embeds this logic in a more elaborate model of involuntary unemployment based on nominal wage rigidity. They suggest that it tells a better story about the US recession than the New Keynesian idea involving a downward shift in the natural real interest rate. It is a story of a jobless recovery: growth resumes at the ZLB steady state, but involuntary unemployment is also positive at that steady state, because inflation is negative and nominal wages are downward rigid.

The paper interprets the ZLB steady state as one where agents have the wrong expectations about the central bank target. Call this the ‘mistaken beliefs’ story. With that story, raising rates could reveal or signal the authority’s true inflation target. In Stephanie and Martin’s paper, because raising nominal rates leads to an immediate change in beliefs and therefore a rise in expected inflation, we see an immediate jump to output growth above trend, which allows unemployment to fall. Many will just think this idea is incredible, but as Paul Krugman keeps emphasising, ZLB economics often turns things upside down.

In an earlier post I suggested that this story could possibly be plausible for (pre Abe?) Japan or the Euro area, because their inflation targets are one-sided: they seem content if inflation is below target, so in principle it might be possible to believe they might be content to end up at the ZLB steady state. In addition there is no QE in the Eurozone, and it was used only briefly in Japan. I suggested, I hope correctly, that the situation is different in the US, and it clearly is in principle in the UK. For that reason alone, I thought the mistaken beliefs story unlikely for these two countries.

However, after seeing Stephanie present her paper last week and thinking more about it, I wondered whether we could give the ZLB steady state a different interpretation. Suppose agents believe that is where inflation is heading because they do not think monetary (or any other) policy is capable of achieving the inflation target. Given current attitudes to fiscal policy, and a pessimistic view of the power of QE, this interpretation does not seem so farfetched for the US or UK. Economists sometimes worry about a deflationary spiral, where inflation just keeps falling into a bottomless pit. But maybe the ZLB steady state is like a ledge that can stop this descent. [2]

Under this interpretation, the policy of raising interest rates could be a disaster. There is no boost from any increase in expected inflation, because beliefs do not change. We lose the negative inflation equilibrium, but what seems likely in that situation is that we just get a negative deflationary spiral. The ledge preventing descent into the deflationary pit crumbles away. [3] If this interpretation is tenable, then it means that the possibility of becoming stuck in a ZLB steady state is not necessarily linked to the policy proposal of raising rates to get out of it.

My own view of the evidence is that the ‘balance sheet recession/natural rate too low’ story is still the more convincing, and that we are currently seeing in the US and UK a very slow return to the inflation target equilibrium. However that story is not without its problems: with a simple New Keynesian Phillips curve, a gradual reduction in the output gap should be associated with inflation gradually rising towards target, which is not what we are seeing at the moment. So I am not so confident that I can dismiss the ZLB steady state story out of hand. The message I draw from that possibility is that inflation targets need to be two sided and clear, and that policy (monetary and fiscal) at the ZLB should do everything it can to try and achieve that target. Assuming that below target inflation must eventually rise because nominal rates are zero could turn out to be a big mistake. [4]


[1] At the intended steady state, where inflation is at target, the target fixes the end-point of any dynamic process, and this (rather than history) then determines the initial level of inflation. At the ZLB steady state, there are multiple dynamic paths that lead there, so something else (‘confidence’) fixes the initial point. It cannot be history, because the model is forward looking and history does not matter. This may be a little too arbitrary or extreme for some tastes.

It is also controversial whether an inflation target is sufficient to fix an end-point for any dynamic inflation process, rather than allowing dynamic processes that explode. As I note in my earlier post, John Cochrane says: “Transversality conditions can rule out real explosions, but not nominal explosions.” I have less of a problem than he does with this. 

[2] A third interpretation might be that agents revise down their beliefs as inflation falls. The problem there is that this involves learning, which may make the stability of the ZLB steady state problematic. Jess Benhabib, George Evans, and Seppo Honkapohja have modelled learning when there are the same two steady states, and what they find is that the ZLB steady state is unstable: inflation keeps on falling. (It is a deflationary spiral.) My intuitive explanation for their result is that learning is equivalent to introducing backward looking expectations dynamics, and typically an indeterminate equilibrium with rational expectations dynamics (which the ZLB steady state is) becomes unstable with backward looking dynamics. Equally a ‘saddlepoint’ perfect foresight equilibrium (which the intended steady state is) becomes stable when expectations are backward looking, so they find that the inflation target steady state is stable under learning.

[3] Following on from footnote [1], you might ask why agents in this case will not select the only steady state left, and therefore raise their expectations. Why can I imagine agents assuming a deflationary spiral, but I want to rule out inflationary spirals? The answer is because the ZLB provides asymmetry. If it looks like an inflationary spiral is developing, the central bank can depart from its Taylor rule and raise rates substantially. That should change beliefs. They cannot do the same for a deflationary spiral.

[4] In an early draft of this post I had a different introduction, based on Narayana Kocherlakota’s recent dissent. Some may recall that Kocherlakota originally put forward the mistaken beliefs ZLB steady state idea, but then seemingly recanted. So my idea was that perhaps what had changed was not his view about the theory, but his interpretation of it. However having read some more about his current views, I don’t think this stands up, but it was such a neat idea I cannot resist mentioning it as a footnote. 


Sunday, 23 March 2014

Bank says money multiplier is wrong - should we be shocked?

For teachers and students of economics

A post I wrote nearly two years ago had the emphatic title “Kill the money multiplier!” A recent article in the Bank of England’s Quarterly bulletin, by Michael McLeay, Amar Radia and Ryland Thomas, is a little more circumspect, but the message is essentially the same. One of their ‘headlines’ is: “Money creation in practice differs from some popular misconceptions — banks do not act simply as intermediaries, lending out deposits that savers place with them, and nor do they ‘multiply up’ central bank money to create new loans and deposits.”

The article has created quite a stir. (See this post from Frances Coppola.) Some have tried to suggest that it represents a fatal blow to mainstream theory, or current policy. David Graeber writes that the article has “effectively thrown the entire theoretical basis for austerity out of the window.” This is nonsense. What the article does is outline the understanding of most of those currently involved in monetary policy (including academics), and contrasts this with how monetary policy is taught in undergraduate textbooks. (I should add that the article does this rather well, and is well worth reading.)

So why is there this disconnect between current thought and the undergraduate textbooks? The textbook approach does have its supporters: see this post by Nick Rowe for example. I could try and portray this as a continuing battle between Wicksellians and Monetarists, and discuss whether Quantitative Easing is a win for either side. That would make a nice discussion. However I think casting the textbooks as taking sides in that battle would be wrong.

Think of another standard part of textbook macro besides the LM curve and money multiplier. One of the first things students also learn is the Keynesian multiplier, where changes in government spending can lead to much larger changes to output because the marginal propensity to consume is closer to one than zero. Again this does not correspond with how most macroeconomists today think about the real world. Is this disconnect because of a rearguard action by old fashioned Keynesians who insist that the 1960s way of doing macro must survive? Of course not. (Any new readers please note, I am not arguing against Keynesian economics or that the multiplier is zero - see here. I just think we would be better off with new undergrads assuming a multiplier of one, and focusing instead on why output was demand determined in the first place.)
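For anyone who has not seen it for a while, the textbook arithmetic behind that multiplier is just a geometric series in the marginal propensity to consume. Here is a toy sketch (mine, not anyone's serious model of the economy):

```python
# Textbook Keynesian cross: Y = C + G with C = mpc * Y, so a change in
# government spending moves output by dG / (1 - mpc).
# A toy illustration only - not a claim about real-world multipliers.
def spending_multiplier(mpc: float) -> float:
    """Output change per unit of extra government spending."""
    return 1.0 / (1.0 - mpc)

print(spending_multiplier(0.75))  # an mpc 'closer to one than zero': multiplier of 4
print(spending_multiplier(0.0))   # an mpc of zero gives a multiplier of one
```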

In both cases, this disconnect between undergraduate textbook macro and current practice has a far simpler explanation - the textbooks are out of date. The core of what is taught to undergraduates has not changed in fifty years, whereas macroeconomic thinking has changed substantially. However we are not using fifty year old textbooks. You will actually find a great deal of the more modern stuff in the textbooks, but essentially in the form of add-ons. So first students are taught that central banks fix the money supply, and then they learn about Taylor rules. First they are taught about a Keynesian consumption function (with a large mpc), and then they learn about consumption smoothing.

This is silly. It is also dangerous, because the problem with add-ons is that they may not get added on. In particular, students who learn all about the money multiplier may never go on to be taught that if banks are not short of reserves or have easy access to them, they can simply create deposits by issuing loans. I suspect it is this lacuna which helped motivate the Bank’s authors to write their article. 
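The textbook story the Bank's authors push back against can be put in a few lines of code. This is my own toy sketch of the textbook mechanism, not code from the article:

```python
# Toy version of the textbook money multiplier: banks hold a fixed
# reserve_ratio against deposits and lend out the rest, round after round,
# with each loan returning as a new deposit. The Bank article's point is
# that no such reserve constraint need bind in practice: banks can create
# deposits directly by issuing loans.
def textbook_deposits(base_money: float, reserve_ratio: float, rounds: int = 200) -> float:
    deposits, relent = 0.0, base_money
    for _ in range(rounds):
        deposits += relent                # new deposit created this round
        relent *= (1.0 - reserve_ratio)   # amount lent out and redeposited
    return deposits

# With a 10% reserve ratio the series converges to base_money / reserve_ratio.
print(round(textbook_deposits(100.0, 0.10)))  # 1000
```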

So how does this silly and dangerous situation persist? One clue is that the same gap between what is taught and current practice does not exist at masters level. Masters teaching is much less dependent on the textbook. So I think we really need to look at the production of textbooks to understand what is going on. Now what follows is a theory, and as I have never written a textbook or talked about this to those who have, it is based on no empirical evidence - but my theory is microfounded!

Suppose a leading macroeconomist wants to write a textbook, and they want to throw away the LM curve, and the Keynesian consumption function - or at least not start off with these out of date bits of kit. The publisher will do some market research. The market will consist of two types. There are the young radicals, who are just starting out and are desperate to teach in a more modern way. There are also the older traditionalists, who have been teaching macro according to the existing textbooks for some time. They will tell the publisher that while they have no objection to this more modern stuff appearing somewhere - indeed they think it is a good idea - they really need a textbook that starts off in the traditional way so that they do not need to rewrite their whole course. The publisher then persuades the author that, to make any money, they need to start with the traditional stuff.

If this textbook writer is representative, then the radicals will only have traditional textbooks to choose from. They have to be very radical indeed to teach without a textbook, so they start teaching in the traditional way. This unfortunately means that the radical becomes over time a traditionalist, and the process continues. The once radical will tell themselves that assuming money is fixed, while clearly not literally true, is not so misleading. The marginal propensity to consume may in practice be nearer zero than one, but at least it gets the kids to do some simple algebra and think about system feedbacks. And hey, a reserve constraint on banks issuing money could exist in some situations.

As a result, some students end up believing that banks just lend out deposits and that the central bank controls the money supply via a multiplier. And a central bank feels it needs to write an article pointing out that this is not so. That I think is a bit shocking.  


Saturday, 22 March 2014

What place do applied middlebrow models have?

Mainly for economists, although the jargon I cannot help using is not critical to the message

Paul Krugman writes that “the effect of the insistence that everything involve intertemporal optimization has been to drive out middlebrow economic modeling.” I like the term middlebrow. A ‘highbrow’ is ‘One who possesses or affects a high degree of culture or learning’, and I think that has a nice ambiguity to it.

Paul gives a couple of examples where middlebrow research is getting squeezed out. I want to add an applied example of my own which I believe shows clearly that there is a problem, but also why there is no simple answer. The story is a little on the long side, but telling it provides evidence of a pattern which is my main point.

I first started working on the calculation of equilibrium exchange rates with Ray Barrell in the late 1980s. The partial equilibrium model we constructed, looking at the G7 currencies, was based on an approach pioneered by John Williamson, which he called the FEER. This essentially uses trade equations to estimate the exchange rate which would produce an off-model guess at a ‘sustainable’ current account. If you want to think of it in terms of a two dimensional diagram, think about a supply and demand curve in real exchange rate/output space (sometimes called the Swan diagram: see this post for example.) It is a good example of a well used middlebrow model. What FEER analysis effectively does is estimate the demand curve, conditional on off-model assumptions about asset accumulation.
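To give a flavour of the mechanics, here is a deliberately stylised sketch. The numbers and the reduced linear form are invented for illustration; the real exercise used estimated trade equations for each G7 country:

```python
# Stylised FEER calculation: suppose estimated trade equations imply, in
# reduced form, ca = a - b * rer, where ca is the current account (% of GDP)
# and rer is the real exchange rate (% deviation from some base, with a rise
# meaning appreciation). The FEER is the rer delivering an off-model
# 'sustainable' current account target.
# The linear form and all numbers here are illustrative assumptions only.
def feer(a: float, b: float, ca_target: float) -> float:
    """Invert ca = a - b * rer for the equilibrium real exchange rate."""
    return (a - ca_target) / b

# If ca = 2 - 0.5 * rer and the sustainable current account is -1% of GDP,
# the equilibrium rate sits 6% above base.
print(feer(2.0, 0.5, -1.0))  # 6.0
```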

Our paper attracted a lot of interest, and we did send it to a couple of journals. However the response was consistent: the model was rather traditional, partial equilibrium, not microfounded, and therefore of not enough interest for a major journal. All this was true, so we gave up on publication. However the UK was about to enter the ERM, so we used the model as part of a comprehensive analysis of an optimal entry rate. Our analysis, published by the Manchester School as part of a conference volume, suggested that the rate we did join the ERM at - 2.95 DM/£ - was too high and would intensify the recession. Subsequent events did not prove us wrong. [1]

In the late 1990s John Williamson asked me to update the analysis, which I did with Rebecca Driver. It was published as a monograph, and neither of us thought it worth the effort to go for what we knew would be a minor journal publication. 

In 2003 the UK government had to decide whether to join the Euro. It undertook a number of background studies to help it make that decision, but it also needed to know what rate to enter at if they decided to join. Initially they asked me to review existing studies on what the equilibrium Euro/Sterling rate might be. There were no papers (microfounded or not) published in top journals. I focused on three studies: my earlier work with Driver, one available as an IMF working paper, and one published in a policy oriented journal. They then asked me (in 2002) to update my own earlier estimates, which I did with a new model of the yen, dollar, euro and sterling. At the time the exchange rate had been around 1.6 Euro/£ for over 2 years - I calculated that the equilibrium rate was a little below 1.4 E/£. By the time the study was published in 2003 the rate had fallen to around 1.45 E/£, and it stayed near that level until 2007.

There is a lot to say about that work, including a rather amusing incident that happened just after the study was published, but that is for another day. The point I want to make here is that I did not even try to publish this 2002 exercise. I feel rather guilty about this now, because the study is so hard to find. [2] Yet as an academic the incentives for me to publish in a minor policy orientated journal were zero.

John Williamson’s FEER framework anticipated aspects of the New Open Economy approach. In the mid 2000s Obstfeld and Rogoff applied (without as far as I know acknowledging Williamson) a similar approach to an analysis of the US exchange rate and current account, at a time when many people thought the financial crisis would be all about the dollar. The best known example of this work was published as a Brookings paper. Their analysis is microfounded in the sense that it includes deep parameters like demand elasticities, but it remains partial equilibrium and there is nothing intertemporal. Their microfoundations focus is elegant, but its output is more limited: they only calculate the impact of a given US current account change on equilibrium exchange rates, and do not try and estimate what equilibrium rates actually are. However the point I want to make here is that this was analysis of key interest, undertaken by two of the best international macroeconomists in the world, but it was not published in one of the top six journals. [3] [4] [5]

It is not difficult to see why. There is little here to interest either a theorist or an econometrician. The theory is well known, and models are either calibrated or involve very simple estimation. Yet constructing a model (microfounded or middlebrow), and taking it to the data, is a non-trivial task, and there are plenty of ways to get nonsense out. I use a lot of skill and experience in this work. (You could say they are the skills and experience of an engineer or experimental scientist, rather than those of a theorist.) More importantly, the output is of considerable interest to policymakers and others. Every time someone says a currency is under or overvalued they are making a judgement about equilibrium exchange rates.

Now I suspect a lot of academic macroeconomists would say that this is the kind of work that should be done in policymaking institutions, and not by academics. Yet I kept being asked to update my work, and I guess Obstfeld and Rogoff did theirs, because policymaking institutions - with the important exception of the IMF - typically do not have the resources to maintain and develop models of this kind. Indeed I increasingly suspect that academics infer that, because this work will not be published in good journals, it is somehow not worth doing. [6]

I cannot help feeling that there is some kind of ‘knowledge failure’ here, and that it is fairly specific to macro because macro involves models. This is not frontiers of research stuff, so you can see why you would not find it in the journals that leading theorists or econometricians would routinely read. My work was an empirical application of a middlebrow model, but still in my view the most reliable method we have of calculating equilibrium rates. It is quite consistent with a microfounded approach, but the microfoundations themselves are not terribly interesting. It can be very important (suppose we had joined the Euro in 2003), yet I did not have the incentive to even try and publish what I had done. Isn’t that rather strange?


[1] We published our analysis before we joined the ERM. A well known FT journalist told me at the time that he thought we had won the intellectual argument, but he still felt instinctively that 2.95 was the right rate to enter at. The UK government also followed their instincts, with disastrous results.

[2]  It recently took me 30 minutes to find a ‘snapshot’ from the Treasury website hosted in the national archives where the links worked - thanks HMT!

[3] This update of my Treasury analysis (published as a book chapter) has an appendix which goes through the microfoundations of the FEER approach. I could talk about the relative merits of a microfoundations centred approach and my more data based FEER approach, but that is not the key point I want to make here.

[4] Why not make the model general equilibrium using intertemporal theory to model the current account, for example? Unfortunately the standard intertemporal model is hopeless at describing trends in the data, yet matching trends in the data is vital in calculating equilibrium exchange rates. I essentially make the same point here when discussing the US savings ratio. There have been empirical studies of medium term current accounts (I discuss a recent one here), but these are reduced form and pretty ‘ad hoc’ by today’s standards. Again you will not typically find these studies in the top journals.

[5] Why is publication in top journals important? Because that gives the best economists the incentive to try and improve or develop work, and therefore to get even better answers. Policymakers asking economists like me to do occasional work is fine, but you really want a forum where others can – uninvited – critique and improve on that work, and which provides a memory so that new work does not ignore what has gone before.  

[6] You might be tempted to say that if this analysis was any good, those involved in the FOREX market could make money from it. I have been in a number of meetings with people like that, and they are interested until they ask how long an exchange rate might typically take to get to its equilibrium rate. When I say around 5 years, they nearly always lose interest! For those who think all you need at this horizon is PPP, read this.


Friday, 21 March 2014

Price level targeting intuition

For students and maybe teachers of macroeconomics. The analysis here is standard: a more general discussion can be found in Woodford's Interest and Prices for example (see pages 497-501 in particular). All this adds is a bit of intuition which I at least found helpful. If there are any mistakes in the algebra or numbers below, please let me know and I will correct them.

When monetary policy can commit (i.e. follow a time inconsistent policy), why does the optimal response to an anticipated cost-push shock involve bringing the price level back to its original value? I do not think it is obvious why it should, yet the result is an important part of the justification for price level or nominal income targeting, so here is my attempt at some intuition.

To make things simple, ignore discounting in both the monetary policymaker's objectives and the New Keynesian Phillips curve (NKPC). For notational clarity, assume perfect foresight. So the monetary policymaker tries to minimise the weighted sum of the output gap (y) and inflation (π), both squared (the inflation target is zero), from period zero onwards, subject to a series of NKPC constraints. The shock is a cost-push shock (u) in period zero, which is observed at the beginning of period zero.

To start us off, assume that the policymaker can only set period zero output and inflation. Expected inflation in period 1 is zero (the shock is not persistent, and the central bank is credible). So the problem can be expressed as choosing output and inflation to minimise the Lagrangian:
    Λ = (1/2)(y₀² + βπ₀²) + λ₀(π₀ − αy₀ − u)

where α is the output coefficient in the NKPC (so period zero inflation satisfies π₀ = αy₀ + u, as expected period one inflation is zero), β is the relative weight on inflation in the loss function, and λ₀ is the multiplier on the period zero NKPC. (The (1/2) is just for algebraic convenience; the welfare costs quoted below are in units of y² + βπ².)
This gives us two first order conditions:

    y₀ = αλ₀        and        βπ₀ = −λ₀

which can be combined as


    y₀ + αβπ₀ = 0                (1)


Equation (1) can be thought of as a policy rule: the combination of the output gap and inflation that optimal monetary policy would select if it cannot achieve zero for both. So, for example, if output has a large impact on inflation, then (1) gives a larger ‘weight’ to inflation. If people like diagrams, we can represent the loss function by indifference curves around the ‘bliss point’ zero, which are circles if β is one. The monetary rule (1) is the line joining the points where these indifference curves are tangent to the Phillips curves.

To take a concrete example, let the cost push shock be 10, and set α=β=1. Adding (1) to the Phillips curve implies that the central bank creates a negative output gap of 5, which gives an inflation rate of 5. The optimal policy is one of intratemporal smoothing, balancing the costs of inflation against the costs of lower output. The welfare cost is 50, compared to a cost of 100 if the policymaker allowed no fall in output.
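That arithmetic is easy to check numerically. The sketch below is my own reading of the setup, with the Phillips curve written as π = αy + u and the loss as y² + βπ²:

```python
# One-period cost-push example: u = 10, alpha = beta = 1.
# Phillips curve: pi = alpha * y + u; loss: y**2 + beta * pi**2.
# (The functional forms are my reading of the post's setup.)
ALPHA, BETA, U = 1.0, 1.0, 10.0

def loss(y: float) -> float:
    pi = ALPHA * y + U
    return y**2 + BETA * pi**2

best_y = min((i / 10 for i in range(-100, 1)), key=loss)  # grid over [-10, 0]
best_pi = ALPHA * best_y + U
print(best_y, best_pi, loss(best_y))  # -5.0 5.0 50.0, as in the text
```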

Suppose now that the policymaker can make promises about period 1 only. The Lagrangian then becomes

    Λ = (1/2)(y₀² + βπ₀² + y₁² + βπ₁²) + λ₀(π₀ − αy₀ − π₁ − u) + λ₁(π₁ − αy₁)

The first order conditions always imply that the Lagrange multiplier for any time period is equal to the output gap for that period divided by α. In addition to the first order condition (1) for period zero inflation, we also obtain

    y₁ − y₀ + αβπ₁ = 0

We can add (1) to this, to get

    π₀ + π₁ = −y₁/αβ                (2)

Equation (2) gives us the key intuition behind the price targeting result. Suppose αβ is large, so the final term is small. In this case (2) tells us that the sum of inflation in the two periods will be close to zero. Higher inflation in period zero will be almost balanced by negative inflation in period one. A moment’s thought implies that this must mean the price level at the end of period one will be close to its original value.

Inflation in period zero will be positive as a result of the cost push shock. We can reduce its size by creating negative inflation in period 1. By creating negative inflation of x in period 1, we reduce inflation in period zero by x. With a cost push shock of 10, creating negative inflation of 5 in period 1 balances positive inflation of 5 in period zero, which is the optimum combination. Creating less negative inflation in period 1 will lead to a greater welfare loss, but so will reducing inflation by more than 5 in period 1.

However, what if αβ is not large? Specifically, suppose we return to the example where α=β=1. Combining this with the NKPC for each period implies the optimal policy is

    y₀ = −4,  π₀ = 4,  y₁ = π₁ = −2

The optimal policy creates negative inflation in period one, but not by enough to keep the price level unchanged. Prices end up higher by 2, compared to 5 when we could only change period zero values. The welfare cost is now 40, which is an improvement on 50.

Why does a non-negligible αβ stop approximate price level targeting in this two period case? Think about what exact price level targeting would imply. It would involve inflation of 5 in period zero and -5 in period one. This could be achieved with an output gap of -5 in period one, but no output gap in period zero. So although inflation would be balanced, output gaps would not be. A more balanced output combination involves a higher final price level.

(The policy is now time inconsistent: at t=1 there is an incentive for the policymaker to not carry through and reduce output, but instead set the output gap to zero. Unfortunately if this change in policy is anticipated in period 0, inflation will be 6 rather than 4 in period 0, and the overall welfare cost will be 52 (36+16), which is worse than the case where policy only operated in period zero.)
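All the numbers in this two period case can be verified with a small grid search. Again the functional forms below are my reading of the setup:

```python
# Two-period commitment example: u = 10, alpha = beta = 1.
# NKPCs: pi1 = y1 (no shock and zero expected inflation after period 1),
#        pi0 = y0 + pi1 + 10 (promised period-1 inflation feeds period 0).
# Loss summed over both periods: y0**2 + pi0**2 + y1**2 + pi1**2.
def total_loss(y0: float, y1: float) -> float:
    pi1 = y1
    pi0 = y0 + pi1 + 10.0
    return y0**2 + pi0**2 + y1**2 + pi1**2

grid = [i / 10 for i in range(-100, 1)]  # output gaps from -10 to 0
best = min(((y0, y1) for y0 in grid for y1 in grid), key=lambda p: total_loss(*p))
print(best, total_loss(*best))    # (-4.0, -2.0) 40.0: prices end 2 higher
# Breaking the period-1 promise (y1 = 0) with period-0 output unchanged:
print(total_loss(-4.0, 0.0))      # 52.0, worse than the one-period cost of 50
```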

Suppose we now allow the policymaker to make promises in period 0 about the output gap in periods 1 and 2. Instead of just reducing output in period one, we can spread lower output over periods one and two. The output costs become more balanced, which reduces the extent to which we fail to achieve a balanced inflation profile. We can then derive the following policy rule
    π₀ + π₁ + π₂ = −y₂/αβ
As the fall in output in period 2 is likely to be lower than the previous fall in output in period 1, the deviation from price level targeting is reduced.

If we allow the policymaker to make commitments T periods ahead, then we can derive the following first order condition:
    π₀ + π₁ + … + π_T = −y_T/αβ
High inflation in period 0 can now be balanced by negative inflation in many later periods. Intuitively the output gap in period T will become very small as T becomes large. This implies that the sum of inflation over all periods is almost zero. That means that the price level in period T is almost the same as the original price level. Thus the optimal policy in effect involves a long term price target, although that target is approached gradually.
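That gradual approach to the price level target can be seen by solving the commitment problem at longer horizons. Under my reading of the setup (α=β=1, a shock of 10), the first order conditions and NKPCs collapse to a linear recursion in the output gaps, which this sketch solves by shooting on the period zero output gap:

```python
# T-period commitment (alpha = beta = 1, u = 10 in period 0).
# Combining the FOCs (pi0 = -y0, pi_t = y_{t-1} - y_t) with the NKPCs
# (pi_t = y_t + pi_{t+1}, plus the shock in period 0) gives
#   y1 = 3*y0 + 10,  y_{t+1} = 3*y_t - y_{t-1},  terminal: y_{T-1} = 2*y_T.
def path(y0: float, T: int) -> list:
    ys = [y0, 3.0 * y0 + 10.0]
    for _ in range(T - 1):
        ys.append(3.0 * ys[-1] - ys[-2])
    return ys                      # output gaps y[0..T]

def solve(T: int):
    gap = lambda y0: path(y0, T)[T - 1] - 2.0 * path(y0, T)[T]
    f0, f1 = gap(0.0), gap(1.0)
    ys = path(-f0 / (f1 - f0), T)  # the terminal gap is linear in y0
    pis = [-ys[0]] + [ys[t - 1] - ys[t] for t in range(1, T + 1)]
    return ys, pis

for T in (1, 5, 20):
    ys, pis = solve(T)
    # sum(pis) is the change in the price level over the whole path;
    # it shrinks towards zero as the horizon T grows.
    print(T, round(ys[0], 2), round(sum(pis), 4))
```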


Thursday, 20 March 2014

See no evil

The collapse in UK productivity since the recession is extraordinary. Here is a chart from the OBR (pdf), showing how much lower potential output is estimated to be compared to estimates made just 6 years ago.

    Chart 3.7: Potential output relative to the Treasury's Budget 2008 forecast


According to the OBR, nearly 14% of output has been permanently lost, because underlying productivity has stopped growing over the last six years. This is unprecedented. As I have shown in earlier posts, UK GDP per capita has shown almost constant trend growth since the Second World War - until now. Not only is this ‘sudden stop’ in productivity unique in recent UK history, it is also a decline that is worse in the UK than almost anywhere else.

In quantitative terms understanding why this has happened dominates everything else as far as the UK macroeconomy is concerned, including the impact of austerity (if the two are unrelated). It should be what is obsessing any UK Chancellor. Yet this is something George Osborne hardly talks about. In his budget speech the word ‘productivity’ does not appear even once.

It is not hard to understand why the Chancellor keeps quiet about this productivity disaster. When growth was non-existent through 2011 and 2012, he liked to point to buoyant employment as an indicator of success, which of course is the obverse of the productivity puzzle. It would be embarrassing for him now to recognise how misleading this was. The Labour opposition’s main attack point concerns falling living standards, which is in large part a result of stagnant productivity, so he would much rather talk about something else.
  
The fact that this Chancellor thinks this way is not surprising. To quote Tim Harford: “So let us applaud George Osborne for playing his own game well – a game in which economic logic is an irritation, the national interest is a distraction, and party politics is everything.” Yet it should be noted that this kind of attitude can only work in an environment where the government has substantial control over much of the media. I made fun of this here, but it is a serious abrogation of democracy.
  
All this would matter less if actions revealed some coherent analysis and serious effort to deal with the problem. One widely held theory, championed by the recently appointed Deputy Governor of the Bank of England Ben Broadbent, is that the productivity collapse has something to do with what has happened to the UK’s major banks. This is certainly something LibDem Business Secretary Vince Cable worries about. Yet the government’s attempts to do something in this area, although welcome, have been modest, and these measures have had no noticeable impact on productivity. You would be forgiven for not knowing that the state currently owns a large part of the UK banking sector.

Another possible explanation for the productivity puzzle is subdued investment, and the need for many productivity improvements to be embodied in the form of new capital. The budget did increase investment allowances, doubling the allowance introduced at the end of 2012. But the real question, which I asked back in 2012, is why all this was not done much earlier. It is respectable to believe such measures are ineffective or inefficient, but this budget indicates that is not the Chancellor’s view. So why not introduce these incentives when the recession was at its worst, rather than when growth is recovering? It is just another manifestation of the austerity U-turn.

George Osborne’s twin obsessions are winning the next election and reducing the size of the state. However perhaps he should also worry about his legacy. There are two possible verdicts that history will bestow. The first, and more optimistic, is that the OBR is wrong. There will be a long and vigorous recovery such that over the next decade the economy does recover the ground it has lost. The question economic historians will then ask is why the recession lasted so long, and George Osborne’s austerity will be up there as a major explanation. He will be remembered as the Chancellor who helped create the longest recession the UK has ever had. The second possibility is that the OBR is right, and this productivity has been lost forever. In that case historians will search in vain for his analysis of the problem, and mark him down as the Chancellor who presided over a disaster and pretended it was not happening.