Winner of the New Statesman SPERI Prize in Political Economy 2016


Tuesday 20 September 2016

Paul Romer on macroeconomics

It is a great irony that the microfoundations project, which was meant to make macro just another application of microeconomics, has left macroeconomics with very few friends among other economists. The latest broadside comes from Paul Romer. Yes it is unfair, and yes it is wide of the mark in places, but it will not be ignored by those outside mainstream macro. This is partly because he discusses issues on which modern macro is extremely vulnerable.

The first is its treatment of data. Paul’s discussion of identification illustrates how macroeconomics needs to use all the hard information it can get to parameterise its models. Yet microfounded models, the only models deemed acceptable in top journals for both theoretical and empirical analysis, are normally rather selective about the data they focus on. Micro and macro evidence alike is either ignored because it is inconvenient, or put on a to-do list for further research. This is an inevitable result of making internal consistency an admissibility criterion for publishable work.

The second vulnerability is a conservatism which also arises from this methodology. The microfoundations criterion, taken in its strict form, makes some processes intractable to model: for example, sticky prices where actual menu costs are a deep parameter. Instead DSGE modelling uses tricks, like Calvo contracts. But who decides whether these tricks amount to acceptable microfoundations or are instead ad hoc or implausible? The answer depends a lot on conventions among macroeconomists, and like all conventions these move slowly. Again this is a problem generated by the microfoundations methodology.
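
To see what such a trick looks like, here is a minimal sketch of the standard Calvo device (the notation is mine, not Romer's or anything in this post). Each period a randomly chosen fraction \(1-\theta\) of firms gets to reset its price; menu costs never appear. Log-linearising the resulting pricing behaviour gives, in the simplest case, the New Keynesian Phillips curve

\[ \pi_t = \beta \, E_t \pi_{t+1} + \kappa \, x_t, \qquad \kappa = \frac{(1-\theta)(1-\beta\theta)}{\theta}, \]

where \(\pi_t\) is inflation, \(x_t\) is (log) real marginal cost and \(\beta\) is the discount factor. The constant, exogenous reset probability is exactly what keeps the model tractable, and it is also why the construction is a convenient device rather than a microfoundation built on actual menu costs.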

Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated. (And, as a result, it completely misleads Paul Mason here.) Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places. It was only a few years ago that I listened to a seminar paper where the financial crisis was modelled as a large negative productivity shock.

Only in a discipline which has deemed microfoundations the only acceptable way of modelling can practitioners still feel embarrassed about including sticky prices because their microfoundations (the tricks mentioned above) are problematic. Only in that discipline can respected macroeconomists argue that, because of these problematic microfoundations, it is best to ignore something like sticky prices when doing policy work: an argument that would be laughed out of court in any other science. In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see. Other economists understand this, but many macroeconomists still think this is all quite normal.

22 comments:

  1. 'In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see.'

    Well you would; contemporary historians debate just about everything. That's the beauty of open, inter-disciplinary subjects.

    Can you explain to this non-economist, what you find so disagreeable in Paul Mason's analysis? It read to me as much a criticism of groupthink as anything else. Are the models he proposes towards the end of his piece unfit for purpose?

    Replies
    1. Not disagreeable, just wrong. Paul Mason writes "Yet orthodox economic theory insists it would have no real effect if the central banks pulled all this support – since the equations tell them there is no correlation between monetary policy and output." That statement, which is the impression Paul Romer's article gives, might just have been true for a few years in the 1980s before New Keynesian theory arrived. Since the 1990s New Keynesian theory has been the orthodoxy, and it is used by central banks around the world. A conversation with any mainstream macroeconomist would have put Paul Mason right on this.

    2. Thanks for your response, which I accept given your far greater knowledge of such things!
      What about the rest of the piece, the arguments re groupthink, paradigm shifts and new models? Do you think he has a point?

    3. When I read that I thought "what the hell is he talking about - either my understanding of mainstream macro is way out, or his is." Glad to discover it is indeed his. Always surprises me when people like Paul Mason don't run controversial technical articles like this past academic economists first. I definitely would.

    4. "Since the 1990s New Keynesian theory is now the orthodoxy, and is used by central banks around the world."

      I think that when it comes to making the call, NKT has less relevance than you might believe - even if there are hordes of economists in their research departments playing around with their models. How much of a role do you think NKT played in designing the UMP that followed the credit crunch? In so far as economic theory was used at all, it was very old stuff: Operation Twist, or even earlier.

    5. Simon is right. I'm an undergraduate and I've been taught a Neo-Keynesian model and have heard nothing except terse criticism of the RBC school.

    6. pewartstoat: Yes, but I think what he misses is the extent to which these things are rooted in the methodological approach.

  2. This is great to read.

    Surely it was only a year or two ago that you were defending microfoundations - not as the only admissible methodology, it's true, but as an essential component of 'serious' macroeconomics!

    I'm really glad to see that you are now exposing the limitations of the microfoundation criterion in any policy-oriented paper. I hope this is a precursor of a more general trend to bring macroeconomic theory back into contact with the real world in which people (sorry, economic actors) behave in ways that satisfy more than a purely rational, maximising goal.

    Now, it seems, all that is needed is to convince the high priests, aka journal editors, that macro is more than an internally consistent set of axioms and equations.

    Replies
    1. On the point of consistency, my argument has always been that microfoundations is a progressive research strategy, but it should not be the only research strategy. I have not changed my view on this for decades!

  3. I didn't think his mockery of shocks did justice to the question, fun though it was. But I do think that when you combine models in which the mechanisms determining observable variables are driven by possibly fictional, possibly meaningful shocks with weak methods for identifying those shocks, you are at risk of continuing to work with fantastical nonsense.

    I think it is reasonable to think that there are things which happen in the economy which we cannot model but which impinge on the behaviour being modelled, and hence to insert shock processes in the relevant equations. But if you start working backwards from the observed data and the mechanisms assumed by the model to conclude that *these* shocks are driving everything, then you are not in a good place. And if you really do think that the unmodelled factors in whatever equation are so important, it would be nice to provide a bit more evidence as to what they are in reality.

    Replies
    1. Once upon a time we had exogenous and endogenous variables, and only endogenous variables were stochastic. Comparative statics involved changing exogenous variables. Now everything is an endogenous process, and comparative statics involves shocks. I cannot see why this is a big deal.
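
      Just to make the contrast concrete, a minimal sketch (the notation here is mine):

      \[ \text{then:}\quad y_t = a + b\,x_t + u_t, \qquad x_t \ \text{exogenous and non-stochastic}, \]
      \[ \text{now:}\quad y_t = f(z_t), \qquad z_t = \rho\, z_{t-1} + \varepsilon_t, \qquad \varepsilon_t \ \text{i.i.d.}, \]

      so the old comparative-statics exercise of changing \(x_t\) becomes tracing the impulse response of \(y_t\) to a single realisation of \(\varepsilon_t\).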

    2. I think the problem is when you conclude that recessions are, for example, "really" being driven by, say, investment price shocks, because you have 'identified' large shocks there, and 'identified' minor shocks elsewhere. Does that make sense?

    3. Really, I think the most important criticism that Romer puts forward is the one about the (fantastical) shocks, and that is the one we have to clarify. I do not really understand what he is aiming at with that criticism. On Twitter I asked him and Kocherlakota for clarification, arguing that if we have models without endogenous cycles or chaos, we need shocks to preferences or production functions to generate fluctuations - unless we want to say that all fluctuations are generated by policy shocks. Kocherlakota replied that shocks to beliefs about other people's decisions could be considered. Romer said that we should find what causes fluctuations and capture that in a model, which to me is the same as saying that fluctuations should be obtained endogenously, so in a deterministic fashion. But maybe I am missing something.

      I do not think this makes much sense: we may be unable to understand what causes demand shocks, for instance, but we may still want to understand the reaction of the economy to such shocks. So it may be legitimate to assume a shock to preferences that generates an increase in spending today, even if it is probably not literally true that people change their preferences all at once. I would like to know what you think about this.

  4. Imagine if all of physics had to be "microfounded" on quantum mechanics, or all of biology "microfounded" on chemistry. We would have no theory of relativity or of natural selection.

  5. How can central banks be Keynesian (New or otherwise) without control of fiscal matters?

    Replies
    1. Because they have a different definition of Keynesian to yours.

  6. The Lucas critique basically boiled down to this: mainstream econometricians are trying to estimate parameters that don't even exist, since relationships between macro variables are not physical constants but merely the observed consequences of micro-level choices. But models in the New Classical tradition, including New Keynesian DSGE models, (almost?) always incorporate the concept of a representative agent. Now, since nobody imagines that human preferences are identical and quasi-homothetic, we know that this representative agent cannot be a real thing in the world, and the parameters of its utility function have no more "reality" to them than the aggregate correlations found in old-mainstream or "Paleo-Keynesian" models.

    So in what respect are NK models an "improvement" on their forebears?

    Replies
    1. I agree, which is what my Review of Keynesian Economics article is partly about.

    2. "parameters that don't even exist"

      that's a good one -- ask God if they don't exist!

  7. A very clever blog post on microfoundations:
    https://meansquarederrors.blogspot.de/2016/09/the-microfoundations-hoax.html

  8. Deep parameters ... this is a very self-deceiving term of art in macro modeling.

  9. Can I ask an (I am sure) extremely naïve question? Why doesn't someone derive micro-foundations from macroeconomic theory (marked to market)? If it were me, I would proceed in two steps analogous to quantum theory in physics. I would first take observed behavior (including irrationality, lack of information, and the fact that most companies today are services rather than manufacturing ones) as the "low-energy" state. I would then derive average behavior as the "high-energy" state, again "marked to market". This would allow me to real-world test my resulting microfoundations. I might incorporate considerations of "if this goes on" to capture things like Minsky-moment behavior, and I might also go back and see what questions the resulting microfoundations raise about my macroeconomic model. I might even crowdsource the effort.

    What I like about this approach (if it is at all feasible) is that it puts the onus squarely back on the journals. They want microfoundations; you have given them some, marked to market. They say these are inferior to their microfoundations; the onus is on them to prove it, and you have real-world data on your side.


Unfortunately because of spam with embedded links (which then flag up warnings about the whole site on some browsers), I have to personally moderate all comments. As a result, your comment may not appear for some time. In addition, I cannot publish comments with links to websites because it takes too much time to check whether these sites are legitimate.