Winner of the New Statesman SPERI Prize in Political Economy 2016


Sunday 22 April 2012

Microfoundations and Evidence (2): Ideological bias


               Internal consistency rather than external consistency is the admissibility criterion for microfounded models. Which means in ordinary English that academic papers presenting macroeconomic models will be rejected if some parts are theoretically inconsistent with other parts, but not if some model property is inconsistent with the data. The motivation for a paper will often be a ‘puzzle’, which is an empirical fact that cannot as yet be explained by a model. However the paper is not required to be consistent with all other relevant facts, so external consistency matters much less than internal consistency.
               In a previous post I expressed a concern that researchers might tend to choose puzzles that were relatively easy to solve, rather than puzzles that were really important. In this post I want to raise another problem, which is that some researchers might select facts on the basis of ideology. The example that I find most telling here is unemployment and Real Business Cycle models.
Why is a large part of macroeconomics all about understanding the booms and busts of the business cycle? The answer is obvious: the consequences of booms – rising inflation – and busts – rising unemployment – are large macroeconomic ‘bads’. No one disagrees about rising inflation being a serious problem. Almost no one disagrees about rising unemployment.  Except, it would appear, the large number of macroeconomists who use Real Business Cycle (RBC) models to study the business cycle.
In RBC models, all changes in unemployment are voluntary. If unemployment is rising, it is because more workers are choosing leisure rather than work. As a result, high unemployment in a recession is not a problem at all. It just so happens that (because of a temporary absence of new discoveries) real wages are relatively low, so workers choose to work less and enjoy more free time. As RBC models do not say much about inflation, according to this theory the business cycle is not a problem at all.
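The intertemporal substitution mechanism at work here can be sketched in a few lines. This is a toy illustration, not a calibrated RBC model: it assumes GHH-style preferences u(c, h) = c − h^(1+γ)/(1+γ) with consumption c = wh, so that optimal hours solve w = h^γ and fall when the real wage is temporarily low. The numbers are purely illustrative.

```python
# Toy sketch of the labour-supply mechanism behind RBC models: a worker
# with GHH-style preferences u(c, h) = c - h**(1+gamma)/(1+gamma) and
# consumption c = w*h chooses hours to satisfy the first-order condition
# w = h**gamma, i.e. h = w**(1/gamma).  A temporarily low real wage
# (a 'bad' technology draw) then induces a voluntary fall in hours.

def optimal_hours(real_wage, gamma=2.0):
    """Hours chosen by a worker facing real wage w: h = w**(1/gamma)."""
    return real_wage ** (1.0 / gamma)

normal = optimal_hours(1.00)   # normal times
slump = optimal_hours(0.81)    # real wage temporarily 19% lower

# In the RBC story this fall in hours is a voluntary choice of leisure,
# not involuntary unemployment.
print(f"hours in normal times: {normal:.2f}, in the slump: {slump:.2f}")
```

The point of the sketch is only that, in this class of model, hours fall because workers prefer to work less at the lower wage, which is exactly the property the happiness evidence below contradicts.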
If anyone is reading this who is not familiar with macroeconomics, you might guess that this rather counterintuitive theory is some very marginal and long forgotten macroeconomic idea. You would be very wrong. RBC models were dominant in the 1980s, and many macroeconomists still model business cycles this way. I have even seen textbooks where the only account of the business cycle is a basic RBC model.
But perhaps common sense here is wrong, and the RBC approach is right. Perhaps, despite appearances, high levels of unemployment in a recession are just people choosing to enjoy more leisure. Unfortunately not. One of the really robust findings revealed by happiness data (see here for a recent comprehensive survey) is that unemployment increases unhappiness. As Chris Dillow notes from some recent research, unemployment appears worse than divorce or widowhood, in the sense that the happiness of the unemployed does not adapt over time to their state. Given the future earnings loss implied by spells of unemployment documented here, this is not that surprising. It is also not surprising that quits (voluntary exits) from employment are negatively correlated with unemployment, which is also difficult to rationalise with the RBC approach.
Now the RBC literature is very empirically orientated. It is all about trying to get closer to the observed patterns of cyclical variation in key macro variables. Yet what seems like a rather important fact about business cycles, which is that changes in unemployment are involuntary, is largely ignored. (By involuntary I mean the unemployed are looking for work at the current real wage, which they would not be under RBC theory.) There would seem to be only one defence of this approach (apart from denying the fact), and that is that these models could be easily adapted to explain involuntary unemployment, without the rest of the model changing in any important way. If this was the case, you might expect papers that present RBC theory to say so, but they generally do not. New Keynesian models are RBC models plus sticky prices, but that plus bit is crucial. Not only does it allow involuntary unemployment, and therefore a role for policy to smooth the cycle, but it also changes other properties of the model.
What could account for this particular selective use of evidence? One explanation is ideological. The commonsense view of the business cycle, and the need to in some sense smooth this cycle, is that it involves a market failure that requires the intervention of a state institution in some form. If your ideological view is to deny market failure where possible, and therefore minimise a role for the state, then it is natural enough (although hardly scientific) to ignore inconvenient facts. For the record I think those on the left are as capable of ignoring inconvenient facts: however there is not a left wing equivalent of RBC theory which plays a central role in mainstream macroeconomics.
In this and a previous post I have looked at two biases that can arise in the puzzle selection that drives microfoundation model development. There may well be others. Do these biases matter? I think they do for two reasons. First, from a purely academic point of view, they distort the development of the discipline. As I keep stressing, I do think the microfoundations project is important and useful, but that means anything that distorts its energies is a problem. Second, policy does rely on academic macroeconomics, and both the examples of bias that I use in this post and the last could have been the source of important policy errors.
One way of reading these two posts is as an exploration of Krugman’s Mistaking Beauty for Truth essay. I know the reactions of colleagues, and bloggers, to this piece have been quite extreme: some endorsing it totally, others taking strong exception to its perceived targets. My own reaction is very similar to Karl Smith here. I regard what has happened as a result of the scramble for austerity in 2010 to be in part a failure of academic macroeconomics. It would be easy to suggest that this was only the result of unfortunate technical errors, or political interference, and that otherwise the way we do macro is basically fine. I think Krugman was right to suggest otherwise. Given the conservative tendency in any group, an essay that said maybe there might just be an underlying problem here would have been ignored. The discipline needed a wake-up call from someone with authority who knew what they were talking about. Identifying exactly what those problems are, and what to do about them, seems to me an important endeavour that has only just begun.


Thursday 19 April 2012

The EuroZone as One Country

Or does the fact that it is not matter for its overall macro policy

               The Eurozone is undertaking more austerity than either the US or the UK (see here), yet its overall budgetary position is much more favourable than either of these two countries. Can this be right, at a time when the Eurozone is in recession? If we thought about the Eurozone as a single country, then clearly it is not. Everything that is wrong with current UK policy would be even more wrong in the Eurozone. The question I ask here is whether the fact that it is not one country changes this assessment.
               The Eurozone is like one country in having a single central bank. The ECB’s nominal interest rate is stuck at its equivalent of the zero lower bound. One possibility would be for the ECB to effectively raise the inflation target, and hope that this in turn raised inflation expectations and thereby stimulated demand: NGDP targets and all that. However it almost certainly will not do this. Furthermore, its inflation target is 2% or less, and it appears to be thinking about significantly less than 2% at the moment (see the quote from Draghi reported here). So given this constraint on conventional monetary policy, what should fiscal policy do?
               For the Eurozone as a whole the situation is no different from the US, or the UK. A recession in which interest rates are at the zero lower bound is not the time to undertake austerity. The Euro area as a whole should be reducing its underlying budget deficit much more slowly. It needs a large fiscal stimulus relative to current plans. But can we translate this aggregate conclusion into action to be undertaken at the individual country level?
               I do not think a social planner in charge of the Eurozone that treated all its citizens equally would have a problem. They would reason as follows. In aggregate we need a large stimulus relative to existing plans. In addition, we need a significant inflation differential between Germany and non-Germany to open up. Suppose 2% is the optimal inflation rate, and we need a 2% gap between Germany and non-Germany (which is probably a lower bound on the inflation gap required). We could have 2% (Germany) and 0% (non-Germany), or 3% and 1%. The latter will be preferred on simple convexity grounds – it is better to spread the pain equally. In addition, the difficulty of reducing inflation when it is already near zero is well known, so it would be less costly to raise inflation in Germany. A large part of the stimulus would therefore go to Germany, raising inflation above the optimal level from the German national point of view. Kantoos disagrees, but note that this would also be the effect of the ECB successfully raising the aggregate inflation target.
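The convexity argument above can be made concrete with a quadratic loss function, which is the standard way of formalising "spreading the pain". The target, weights and numbers below are illustrative assumptions, not taken from any particular model: equal weights on each bloc and a 2% inflation target for both.

```python
# Illustrative check of the convexity argument: with a quadratic loss in
# deviations of inflation from a 2% target, spreading the required
# 2-point Germany/non-Germany inflation gap symmetrically (3% and 1%)
# is less costly than loading it all on one side (2% and 0%).

TARGET = 2.0  # assumed optimal inflation rate, in %

def loss(german_pi, non_german_pi):
    """Sum of squared deviations of each bloc's inflation from target."""
    return (german_pi - TARGET) ** 2 + (non_german_pi - TARGET) ** 2

one_sided = loss(2.0, 0.0)  # Germany at target, non-Germany bears it all
shared = loss(3.0, 1.0)     # the 2-point gap split symmetrically

print(one_sided, shared)  # the shared outcome has the lower loss
```

Any convex loss function gives the same ranking; the quadratic form is just the conventional choice.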
               However this is academic, as we do not have a Eurozone central planner. In principle non-Germany could compensate Germany to achieve the optimal aggregate outcome, but it is unclear what non-Germany has to offer. So let us take it as a constraint that Germany will not adopt a large stimulus. It may also do its best to counteract the impact of any expansionary ECB policy on its own economy (see Kantoos again). This means that we cannot have as large a stimulus for the Eurozone as a whole as we could if it was one country. We have to go for 2% and 0% rather than 3% and 1%, which means lower Eurozone output. But does it mean the current level of austerity is correct?
               At present austerity in non-Germany is being driven by each country’s bond market. If the Eurozone really was one country, which issued Eurozone debt, this would not be happening. Just as savers are happy to buy UK or US debt, they would happily buy Euro debt. (No one buying UK debt is too worried about the widening North-South divide in the UK!) A Eurozone issuing common debt would be highly unlikely to default on it, because the ECB can print Euros.
               Germany has ruled out Eurobonds, so are we back to our previous problem? No, because the ECB can act as if they existed, by (indirectly) buying national governments’ debt when the market will not. As Jonathan Portes notes, this is why the crisis appeared to go away over the last few months. Buyers of non-German debt are worried about default, and the ECB can rule out default by being the buyer of last resort (through proxies if necessary: for the wisdom or otherwise of this indirect approach, see here and here). The reasons why it chooses to do this in what appears to be an erratic and unpredictable way were discussed by Fred Bergsten and Jacob Kirkegaard at VoxEU, and I commented on this here. While this might have been appropriate for some countries a year or two ago, the strategy is now doing significant harm. My own view (and, more importantly, that of others) is that the ECB is too concerned about moral hazard, and not concerned enough about the impact of austerity on non-German output, with the result that we are seeing much more austerity than is necessary. The ECB could still exercise fiscal discipline by varying the rate at which it capped the interest rate on non-German debt. This could be done on a country by country basis, and perhaps should be (see here and here).
               Instead the ECB appears to be using market sentiment as an index of national fiscal discipline. This puts national governments in an impossible position (see, for example, this from John McHale). They can try and demonstrate that they will not default by piling on the austerity, but in the process they may actually be making their longer term fiscal positions worse.
               Using market sentiment as an indicator of fiscal sobriety is particularly inappropriate at the moment, as market concerns may be much more focused on the health of national banking systems and the knock-on effects if governments are required to rescue national banks. (See this discussion of ‘sudden stops’ at Bruegel, and beware of commentators who know what the market is thinking.)  To the extent that this is true, austerity will make things worse. As the economy contracts, more loans go bad, and banks’ balance sheets worsen. Perhaps ($) the ECB is beginning to realise this. But without clear signals and statements from the ECB that no further fiscal tightening is required, there is a real danger that national governments may continue to tighten too quickly.
               Viewing the Eurozone as a single country clearly indicates a substantial easing of both fiscal and monetary policy. German national self interest, combined with the need for non-German competitiveness to improve, does moderate the amount of fiscal easing that can occur, but there is no reason why the ECB should reduce aggregate inflation on this account. However the amount of aggregate fiscal austerity that this implies is still considerably less than is currently being enacted. Here the ECB has the ability to remove the constraint imposed by national government bond markets, and it should do so before the degree of austerity currently being enacted does lasting damage to the sustainability of the Eurozone.

Monday 16 April 2012

Monetary policy and Financial Stability

               Here are some initial thoughts reading Michael Woodford’s new NBER paper. The paper addresses the following question. Should our current monetary policy framework, which involves ‘flexible inflation targeting’ (more on what this means below), be modified in an attempt to avoid a financial crisis of the type experienced in 2008? We can put the same question in a more specific way – should the Fed or Bank of England have raised interest rates by more in the years before the crisis in an effort to avoid the build-up of excess leverage, even if movements in output and inflation indicated otherwise? The paper suggests the answer to both questions is yes. Although the paper contains plenty of maths, it is reasonably reader friendly, so I hope reading this will encourage you to read it.
               First, what is flexible inflation targeting? Both inflation and the output gap influence welfare, so policy is always about getting the right balance between them. In a simple set-up, when a ‘cost push’ shock (like an increase in VAT) raises inflation for a given output gap, then creating a negative output gap so as to moderate the rise in inflation is sensible. The optimal policy can be expressed as a relationship (a ‘target criterion’) between deviations in the price level from some target value (the ‘price gap’) and the output gap. In the case of the cost-push shock, policy ensures that the output gap gradually narrows as the price level gradually converges back to its original level. This is not quite the same as ‘hitting the inflation target in two years’, which is sometimes how the Bank of England describes its policy. It is, however, how most economists would describe optimal policy (assuming the central bank has the credibility to commit).
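Schematically, a target criterion of this kind links the two gaps linearly each period. The notation below is mine, not lifted from the paper, and the linear form is the simplest representative case:

```latex
% Schematic form of a 'target criterion': policy is conducted so that
% the price gap and the output gap satisfy a linear relation each period.
% \tilde{p}_t is the deviation of the (log) price level from its target
% path, x_t is the output gap, and \phi > 0 is a weight reflecting the
% relative welfare cost of output fluctuations.
\tilde{p}_t + \phi \, x_t = 0
```

Read this way, a cost-push shock that pushes the price level above its target path (a positive price gap) requires a negative output gap, with both gaps shrinking together back to zero over time.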
               This ignores credit risk. In an earlier paper, Curdia and Woodford (2009) introduced a credit friction distortion (CFD) variable, which is related to the risk premium between lending rates to risky individuals/firms compared to the return on safe assets. This variable matters, because it can influence how consumption (demand) evolves, and it can influence the (New Keynesian) Phillips curve. It also influences social welfare itself. In the original paper CFD is treated as exogenous. Here it is allowed to take two values: a crisis value, and a normal value, but crucially the chance that it will move to the crisis value depends on leverage, which in turn depends on the output gap. It is this last link that is crucial. Without it a financial crisis would just be treated like any other shock: monetary policy would respond to it, but the relationship between prices and the output gap after the shock would remain the one described above.
               If leverage depends on output, this is no longer the case. The relationship between the output gap and the price gap now depends on what Woodford calls the ‘marginal crisis risk’. If the risk of a damaging financial crisis is particularly high, then it is optimal to reduce output in an effort to reduce leverage even if prices are at their target. In a way this is just commonsense when you have only one instrument (monetary policy) and many targets. Without this link, optimal policy was a matter of getting the right balance between output and inflation, and the maths tells you the form that balance takes. Add a third target (the chance of a crisis) that can be influenced by monetary policy, and that has to be balanced against the other two.
               What the maths tells you is the following. Suppose the optimal policy when a financial crisis is endogenous is expressed in the form of a modified price target like the one I have just described. Suppose also that policy did tighten to head off the possibility of a crisis, and this led to lower output and inflation than would otherwise have occurred. To quote Woodford: “Under the criterion proposed above, any departure of the price level from its long-run target path that is justified by an assessment of variations in the projected marginal crisis risk will subsequently have to be reversed.” In other words, a fear that central banks would constantly undershoot the inflation target because they were cautious about financial conditions does not follow under this policy, because in this case prices would move further and further away from their target path, and this would not be optimal.
               None of this reduces the desirability of finding other ‘macro-prudential’ financial instruments that could also mitigate the chances of a financial crisis. The better those instruments are, the less need there is to modify standard monetary policy to take account of that risk.
What this analysis suggests is that the possibility of a financial crisis leads to a modification of optimal monetary policy as we now understand it, but not its complete overthrow. Woodford argues that flexible inflation targeting has served us well. To quote:

“Despite a serious disruption of the world financial system, that some have compared in magnitude to that suffered in the 1930s, this time none of the major economies fell into deflationary spirals. And despite large swings in oil prices, the effects on the dynamics of wages and prices this time have been modest. These comparatively benign outcomes are surely due in large part to the fact that inflation expectations in most of the major economies have remained quite well anchored in the face of these substantial disturbances. And it is arguable that the credibility with regard to control of the rate of inflation that the leading central banks have achieved over the past twenty years deserves a great deal of the credit for this stability.”

On that I have to agree.
               I would like to make one final, somewhat tangential, observation. This paper seems to be a clear example of what I have called elsewhere (see this post and the paper referenced at its end) a ‘pragmatic microfoundations’ approach. Woodford’s endogenisation of the impact of policy on leverage and crisis is ad hoc (he describes it as ‘reduced form’), and he stresses repeatedly the need to do further work on modelling these links in a more structural manner. Nevertheless it would seem to me strange to dismiss what he does as ‘not proper macroeconomics’, as a microfoundations purist might do.  

Friday 13 April 2012

More on tax increases versus spending cuts in an austerity programme

Using sales taxes to mimic monetary policy in a liquidity trap

                Monetary policy works by changing the real interest rate. At the zero lower bound monetary policy loses its power, unless it can influence inflation expectations. However inflation expectations for consumers will also depend on the evolution of sales taxes (indirect taxes like VAT). A pre-announced increase in sales taxes will raise expected inflation, and so reduce the real interest rate faced by consumers at the zero lower bound. In this sense changes in sales taxes can mimic monetary policy.
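The arithmetic of this mechanism is simple enough to set out explicitly. The sketch below assumes the familiar approximation that the ex-ante real rate is the nominal rate minus expected inflation, and that a pre-announced VAT rise simply adds to expected consumer-price inflation over the period before it takes effect; all numbers are illustrative.

```python
# Sketch of how a pre-announced sales-tax rise mimics monetary policy at
# the zero lower bound.  The consumer real interest rate is approximately
# the nominal rate minus expected consumer-price inflation, and a known
# future VAT increase adds to that expected inflation.

def consumer_real_rate(nominal_rate, underlying_inflation, vat_price_effect):
    """Approximate ex-ante real rate faced by consumers (all in %)."""
    expected_inflation = underlying_inflation + vat_price_effect
    return nominal_rate - expected_inflation

# At the ZLB, with the nominal rate at 0% and underlying inflation at 1%:
no_vat_change = consumer_real_rate(0.0, 1.0, 0.0)
with_preannounced_vat = consumer_real_rate(0.0, 1.0, 2.0)  # VAT adds ~2pp

print(no_vat_change, with_preannounced_vat)
```

With the nominal rate stuck at zero, the announced tax rise lowers the consumer real rate further, which is exactly what a conventional interest rate cut would do if it were available.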
                I first wrote about this some time ago (for the record Wren-Lewis, 2000, but some of that discussion is a little dated now). As a result, I thought the (surprise) temporary VAT cut introduced by the UK government at the end of 2008 was a good idea (see here). Thanks to @daniel’s comment on an earlier post, I’ve now read a nice and straightforward paper by Isabel Correia, Emmanuel Farhi, Juan Pablo Nicolini and Pedro Teles, which formalises this idea in a standard New Keynesian model at a zero lower bound.
                My original piece on this suggested that a temporary but unexpected cut in VAT, like a temporary increase in government spending, could provide an effective stimulus to aggregate demand for a given monetary stance. But what are the implications for Eurozone countries that are trying to bring down government debt? They are the mirror image: it is not a good idea to use cuts in government spending, or a surprise increase in sales taxes, as a way to bring down debt in a recession. However pre-announced increases in sales taxes work differently.
                If the increase in sales taxes is announced but delayed, then this provides a stimulus before the point at which it is increased (as in the UK 2008/9 case). Put simply, people buy before higher taxes raise prices. Furthermore, if the increase in VAT is not permanent (because the intention is to bring down debt), but it is expected to end at a point at which interest rates are no longer at a zero lower bound, then the subsequent deflationary impulse can be counteracted by monetary policy. Of course, a key proviso is that interest rates have to stay at their lower bound when sales taxes are rising – obviously higher inflation will not stimulate the economy if it is offset by rising nominal rates (see here for an example).
                This suggests that pre-announced increases in sales taxes could be used as an effective consolidation device that also stimulated demand. The government announces a sequence of increases in indirect taxes, which go on increasing for as long as the zero bound is expected to be a constraint. Once the economy has recovered such that interest rates can rise, sales taxes can then be steadily reduced towards some desired long run level, achieving whatever reduction in debt is required.
                This idea is conceptually different from but complementary to the argument I have made before for using taxes rather than government spending as part of a fiscal consolidation programme. That argument was based on consumers smoothing the income effect of tax changes. The above is an additional point about the impact of taxes on inflation, which changes the incentive to consume today rather than tomorrow.
                Raising consumer prices via sales taxes increases negative incentive effects on labour supply. The optimal strategy examined by Correia et al is in fact revenue neutral, because it involves offsetting changes in income taxes/subsidies.  As indirect taxes rise, income taxes fall (or income subsidies rise). In an environment where there is no need for fiscal austerity, this is the welfare maximising plan. When government debt is excessive, then reducing debt is bound to involve either increasing tax distortions or the suboptimal provision of public goods or transfers for some time. In this case there would be no point in a tax switch: you just raise sales taxes.
                When I have asked whether this form of deficit reduction strategy was considered in Ireland, one of the responses was that there was a concern about the distributional impact of indirect tax increases relative to direct taxes. That is a whole different issue, and should ideally be dealt with by different means. From a macroeconomic point of view temporary increases in any taxes that have relatively small income effects because of consumption smoothing, but a significant impact in raising inflation, should be the preferred fiscal consolidation instrument at the zero lower bound. 

Wednesday 11 April 2012

Some notes on macro modelling

             This post is prompted by this post by Robert Waldmann, and this by Noah Smith, commenting on an earlier post by Wieland and Wolters.

1) Forecasting and policy analysis

Noah repeats what is a standard line, which is that microfounded models are for policy analysis and not forecasting, and for forecasting “we don't need the structural [microfounded] models, and might as well toss them out”. The reason he gives is policy invariance: microfounded models address the Lucas critique.
While the Lucas critique is important, it is not in my view the reason we have microfounded models. The need for internal consistency drives the microfoundations project. Often internal consistency and addressing the Lucas critique go together, but not always. The clearest example is Woodford’s derivation of a quadratic social welfare function from agents’ utility. This is not needed to address the Lucas critique, but it is required for an internally consistent analysis of what a benevolent policy maker should do.
Why is internal consistency important? Because we think that agents in the real world are internally consistent, so models that are not can make mistakes. They can make mistakes in forecasting as well as policy analysis.
However, in an effort to achieve internal consistency, we may well ignore important features of the real world. ‘Ad hoc’ models that capture these features may be better models, and give better policy advice, even though they are potentially internally inconsistent.
So microfounded models could be better at forecasting, and ‘ad hoc’ models could give better policy advice. In that sense I think Noah is repeating a common misperception.

2) On a pedantic point, there is a long tradition of comparing different macromodels, both for forecasting and policy analysis, so Wieland and Wolters is hardly a first step. In the UK for 16 years we had an excellent research centre that did just that, run by Ken Wallis. There is a wealth of expertise there, which anyone doing this kind of comparative analysis needs to tap.

3)  Just in case anyone reading Robert’s post gets the wrong impression, the idea of the core/periphery structure for the Bank of England’s model came from economists at the Bank (strongly influenced by the antecedents from other central banks that I mentioned in my post), and not me. My role was mainly to give advice on theoretical aspects of the model to a very competent team who needed little of it. However Robert and Noah are wrong to suggest that because the Bank uses the core/periphery structure for forecasting, there is no point in having the microfounded core. For example, you can do policy analysis with both the complete model and just the core.

4) This final comment is just for those who read Robert’s post, and is very pedantic. Robert starts off by saying “As far as I can tell, Simon Wren-Lewis has been convinced by Paul Krugman”. The first point is that all my posts on this issue have come from a consistent view. I think microfoundations modelling is an important thing to do, but I do not think it is the only valid way of modelling the economy and doing policy analysis. I think Paul Krugman and I are on absolutely the same page here, and always have been. Robert is however right that my aim has been to convince those doing microfounded modelling of this point.
I’ve disagreed with Paul Krugman (and Robert) on the empirical success of the microfoundations approach, and I still disagree. But given that we agree that analysing microfounded models is useful, I don’t think this is terribly important. I picked up on the ‘mistaking beauty for truth’ phrase, because – taken literally – I don’t think that this is a problematic force behind the way the microfoundations project progresses. All scientists like simplicity, and they also get complicated when they need to, and DSGE models do the same. What I think is problematic is the weak role played by external consistency that I illustrated here, and the role of ideology. On the latter I think I’m once again on the same page as Paul Krugman. 

Monday 9 April 2012

Microfoundations and Evidence (1): the street light problem

                One way of reading the microfoundations debate is as a clash between ‘high theory’ and ‘practical policy’. Greg Mankiw in a well known paper talks about scientists and engineers. Thomas Mayer in his book Truth versus Precision in Economics (1993) distinguishes between ‘formalist’ and ‘empirical science’. Similar ideas are perhaps behind my discussion of microfoundations and central bank models, and Mark Thoma’s discussion here.
                In these accounts, ‘high theory’ is potentially autonomous. The problem focused on is that this theory has not yet produced the goods as far as policy is concerned, and asks what economists who advise policy makers should do in the meantime. But the presumption is generally that theory will get there as soon as it can. Will it do so of its own accord? Is it the case that academics are quite good at selecting what the important puzzles are, or do they need others more connected to the data to help them?
                There is a longstanding worry that some puzzles are selected because they are relatively easy to solve, and not because they are important. Like the proverbial person looking under the street light for their keys that they lost somewhere less well lit. This is the subject of this post. A later post will look at another concern, which is that there may be an ideological element in puzzle selection. In both cases these biases in puzzle selection can persist because the discipline exerted by external consistency is weak.
The example that reminded me about this came from this graph.

US Savings Rate


The role of the savings rate in contributing to the Great Recession in the US and elsewhere has been widely discussed. Some authors have speculated on the role that credit conditions might have played in this e.g. Eggertsson and Krugman here, or Hall here. But what about the steady fall in savings from the early 1980s until the recession?
                Given the importance of consumption in macroeconomics, you would imagine there would be a huge literature, both empirical and theoretical, on this. Whatever this literature concluded, you would also imagine that the key policy making institutions would incorporate the results of this research in their models. Finally you might expect any academic papers that used a consumption model which completely failed to address this trend might be treated with some scepticism. OK, maybe I’m overdoing it a bit, but you get the idea. (There has of course been academic work on trying to explain the chart above: a nice summary by Guidolin and La Jeunesse is here. My claim that this literature is not as large as it should be is of course difficult to judge, let alone verify, but I’ll make it nonetheless.)
                It would be particularly ironic if it turned out that credit conditions were responsible for both the downward trend and its reversal in the Great Recession. However that is exactly the claim made in two recent papers, by Carroll et al here and Aron et al (published in Review of Income and Wealth (2011), earlier version here), with the latter looking at the UK and Japan as well as the US. If you think this is obvious nonsense, and there is an alternative and well understood explanation for these trends, then you can stop reading now. But otherwise, supposing these authors are right, why has it taken so long for this to be discovered, let alone incorporated into mainstream macromodels?
                Well, in the discovery sense it has not. John Muellbauer and Anthony Murphy have been exploring these ideas ever since the UK consumption boom of the late 1980s. As I explained in an earlier post, there was another explanation for this boom besides credit conditions that was more consistent with the standard intertemporal model, but the evidence for this was hardly compelling. The problem may be not so much evidence as the difficulty of incorporating credit effects of this kind into standard DSGE models. Even writing down a tractable microfounded consumption function that incorporates these effects is difficult, although Carroll et al do present one. Incorporating it into a DSGE model would require endogenising credit conditions by modelling the banking sector, leverage, etc. This is something that is now beginning to happen, largely as a result of the Great Recession, but before that it was hardly a major area of research.
                So here is my concern. The behaviour of savings in the US, UK and elsewhere has represented a major ‘puzzle’ for at least two decades, but it has not been a major focus of academic research. The key reason for that has been the difficulty of modelling an obvious answer to the puzzle in terms of the microfoundations approach. John Muellbauer makes a similar claim in this paper. To quote: “While DSGE models are useful research tools for developing analytical insights, the highly simplified assumptions needed to obtain tractable general equilibrium solutions often undermine their usefulness. As we have seen, the data violate key assumptions made in these models, and the match to institutional realities, at both micro and macro levels, is often very poor.”
                I do think microfoundations methodology is progressive. The concern is that, as a project, it may tend to progress in directions of least resistance rather than in the areas that really matter – until perhaps a crisis occurs. This is not really mistaking beauty for truth: there are plenty of rather ugly DSGE macro papers out there, one or two of which I have helped write. It is about how puzzles are chosen. When a new PhD student comes to me with an idea, I will of course ask myself whether it is interesting and important, but my concern will also be whether the student is taking on something where they can get a clear and publishable result in the time available.
When I described the Bank of England’s macromodel BEQM, I talked about the microfounded core, and the periphery equations that helped fit the data better. If all macroeconomists worked for the Bank of England, that construct would contain a mechanism that could overcome this problem. The forecasters and policy analysts would know from their periphery equations where the priority work needed to be done, and this would set the agenda for those working on microfounded theory.
                In the real world the incentive for most academics is to get publications, often within a limited time frame. When the focus of macroeconomic analysis is on internal consistency rather than external consistency, then it is unclear whether this incentive mechanism is socially optimal. If it is not, then one solution is for all macroeconomists to work for central banks! A more realistic alternative might be to reprise within academic macroeconomics a modelling tradition which placed more emphasis on external consistency and less on internal consistency, to work alongside the microfoundations approach. (Justin Fox makes a similar point in relation to financial modelling.)     

Friday 6 April 2012

The Financial Market as a Vengeful God

                Reading this Jonathan Portes post, I recalled a point in my undergraduate lectures where I have a little fun at the expense of economic pundits from the City. After explaining Uncovered Interest Parity (if you do not know what UIP is, it does not matter), I tell them that they can now immediately comment on how the foreign exchange market reacts to an increase in interest rates, whatever happens to the exchange rate. If the exchange rate appreciates, that is because domestic assets are more attractive. If the exchange rate does not change, that is because the interest rate increase was already discounted. If the exchange rate depreciates, well the markets were expecting a larger increase.
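For readers who do want the condition: in its standard log-linear form (the notation here is mine, not part of the lecture anecdote), UIP says that the interest rate differential equals the expected depreciation of the domestic currency:

```latex
% Uncovered Interest Parity (log-linear approximation)
% i_t   : domestic interest rate
% i_t^* : foreign interest rate
% s_t   : log nominal exchange rate (domestic price of foreign currency)
i_t - i_t^{*} = \mathbb{E}_t\left[s_{t+1}\right] - s_t
```

Because the expectation on the right-hand side is unobservable, any subsequent movement in the exchange rate can be rationalised after the fact, which is what gives the pundits their freedom.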
                This is meant to make a serious point about the difficulties in testing UIP, but if I’m feeling mischievous I then point out that City pundits always seem to know with certainty why the markets have moved this way or that. Now in goods markets, firms pay market researchers serious money to find out why consumers are or are not buying their products, but in the financial markets this appears unnecessary. Despite market moves being the result of thousands of trades made by thousands of people, the motivation for these trades appears clear. It is as if each trade were accompanied by the trader completing the following sentence: ‘I bought/sold this currency today because ....’. The truth, I reveal to my stunned audience, is that these pundits are just guessing based on no evidence whatsoever.
                Of course City pundits have no reason to be honest. When asked ‘why has the dollar appreciated?’, I would like them to reply ‘well no one really knows, but one possible factor might be...’. They never do. If I wanted to be unkind, I might suggest that these pundits want to appear like high priests, with a unique ability to understand the mysterious mind of the market. As high priests have discovered over and over again, if you can convince people that you have a direct line to an otherwise mysterious but powerful deity, you can do rather well for yourself. And sometimes financial markets can appear a bit like vengeful gods, capable of sudden acts of destructive anger that appear to come from nowhere.
                If I wanted to ratchet up the unkindness I could go on as follows. It is in the priest’s interest to tell the faithful that the god is indeed quite fickle in its mood, and while placid at the moment, it could turn nasty at the slightest provocation. Keep those offerings coming, to make sure that the god stays happy (and don’t think about where those offerings go). If you are particularly generous, the priest will promise to give you the heads up if any changes in mood are imminent. If you cannot be a priest yourself, you can always set up as an advisor (HT DeLong), who will tell people which priests have a better line to the financial market god. 
                OK, this is a bit silly, but sometimes listening to policymakers you wonder whether they think this way. (Perhaps because they talk to the wrong people – see Jonathan again here.) For ‘confidence’, read the mood of the financial market god, or even the many gods of the economy as a whole. For offerings and sacrifices, read austerity. Buti and Padoan tell us “the Eurozone is still in a situation in which multiple equilibria can materialise”. They go on: “In a situation of multiple equilibria, where confidence plays a crucial role, the distinction between short-term and long-term measures (suggesting the possibility of postponing action) is misleading and could be possibly dangerous. Short-term measures that weaken confidence would push the medium-term dynamics towards a bad equilibrium.”
                I assume this is about austerity. Here the game for many Eurozone countries is to demonstrate that they are not like Greece. I think that in this game there may be an advantage in front-loading austerity to demonstrate the ability and intention to avoid default, although I think Brad DeLong disagrees. However, as my very first post said, this need not be about appeasing a market god but instead the very human ECB. If deficit reduction programmes are reasonable and are implemented (in cyclically adjusted terms, without moving the potential output goalposts every time output falls), the ECB should ensure interest rates on debt are low enough to make those programmes sustainable.
                Having just gone through a recession largely caused by excessive confidence in the financial markets about their ability to manage risks, it is natural to think everything is down to confidence. (The word appears eight times in Buti and Padoan’s article.) However in most situations I think markets and economies react in straightforward and understandable ways. The importance of confidence can be overdone, as it is often a symptom rather than a prime cause. To treat financial markets or the economy as a whole as always behaving like a vengeful god, whose mood and confidence can ebb and flow at the slightest provocation, is not the way to make good policy.
                Jonathan’s post also quotes Shakespeare, so how about this from Julius Caesar:

Men at some time are masters of their fates;
The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.

Wednesday 4 April 2012

On successful fiscal consolidations

In a recent Vox piece, Alesina and Giavazzi argue that “adjustments achieved through spending cuts are less recessionary than those achieved through tax increases”. At first sight this seems to contradict basic macroeconomics. As I and others have pointed out on many occasions, the impact of cuts in government spending on goods and services is passed straight through to demand, while the income effect of temporary increases in tax will be smoothed by consumers. That is why temporary balanced-budget cuts in government spending are deflationary.
However, what we may have here is just another example of failing to condition on monetary policy. One of the most comprehensive studies of this issue, discussed by Alesina and Giavazzi, is contained in an IMF report, which uses a ‘narrative’ approach to identifying episodes of fiscal consolidation. (This approach was applied to monetary policy by Romer and Romer here: the detailed catalogue of fiscal events is in this IMF working paper. See Jeremie Cohen-Setton (Bruegel) for more on this.) As Alesina and Giavazzi are a little unfair in the way they characterise this report, I will quote extracts from its first four conclusions.

1)    “Fiscal consolidation typically has a contractionary effect on output. A fiscal consolidation equal to 1 percent of GDP typically reduces GDP by about 0.5 percent within two years and raises the unemployment rate by about 0.3 percentage point.”
2)    “Reductions in interest rates usually support output during episodes of fiscal consolidation”
3)    “A decline in the real value of the domestic currency typically plays an important cushioning role by spurring net exports and is usually due to nominal depreciation or currency devaluation.”
4)    “Fiscal contraction that relies on spending cuts tends to have smaller contractionary effects than tax-based adjustments. This is partly because central banks usually provide substantially more stimulus following a spending-based contraction than following a tax-based contraction. Monetary stimulus is particularly weak following indirect tax hikes (such as the value-added tax, VAT) that raise prices.”

The reaction of monetary policy is crucial here. As the report makes clear, if interest rates cannot fall to offset the impact of fiscal consolidation, or if currencies cannot depreciate because everyone is implementing austerity, the deflationary impact will be much greater.
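As a rough sanity check on conclusion (1), the headline estimates imply a two-year fiscal multiplier of about 0.5, together with an Okun-style unemployment response. The arithmetic below is mine, not the report’s, and is purely illustrative:

```python
# Back-of-envelope arithmetic from the IMF report's headline numbers.
# The three inputs come from conclusion (1); the derived ratios are mine.
consolidation = 1.0   # fiscal consolidation, % of GDP
gdp_fall = 0.5        # fall in GDP within two years, %
unemp_rise = 0.3      # rise in unemployment rate, percentage points

# Implied two-year fiscal multiplier: change in GDP per unit of consolidation.
multiplier = gdp_fall / consolidation

# Implied Okun-style coefficient: unemployment response per 1% fall in GDP.
okun_coefficient = unemp_rise / gdp_fall

print(multiplier, okun_coefficient)  # 0.5 0.6
```

A multiplier well below one is exactly what you would expect when monetary policy is free to offset part of the fiscal contraction, which is the report’s central point.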
            To quote Alesina and Giavazzi, the report’s authors “agree that spending-based adjustments are indeed those that work – but not because of their composition, rather because almost ‘by chance’ spending-based adjustments are accompanied by reductions in long-term interest rates, or a stabilisation of the exchange rate, the stock market, or all of the above.” That is unfair. As the quotes above show, and any reasonable reading of the whole report confirms, the impact of consolidation is directly linked to the way monetary policy works. Perhaps the crime committed by the IMF report is that it didn’t stress enough the effects of taxes on the confidence of entrepreneurs that Alesina and Giavazzi seem to think are central.
            Point (4) does indeed imply that cutting spending is less contractionary than raising taxes, but again the reaction of monetary policy is crucial. If, as is suggested, monetary policy does not reduce interest rates following tax increases because of the impact of taxes on prices, then it is monetary policy that is leading to the difference in the impact of spending and taxes.
            There is another interesting, if tentative, result from this analysis. Government spending here includes transfers as well as consumption and investment. The report finds that cutting transfers is mildly expansionary, while the costs of cutting consumption or investment are greater, although they do caution about small sample sizes. As cuts in transfers can be smoothed, this fits with basic theory. Alternatively, it may be that cutting transfers is signalling some kind of intent, which may in turn encourage the monetary authority to ease monetary policy more.
            The reason for stressing the role of monetary policy in all these findings should be obvious. At the zero lower bound, monetary policy cannot compensate in the normal way for the deflationary impact of fiscal consolidation. We cannot use evidence from the past when monetary policy was not so constrained to tell us what will happen today. This is well known for austerity in general, but it applies equally to the composition of fiscal consolidation.

Tuesday 3 April 2012

Framing fiscal stimulus arguments

                It struck me reading DeLong and Summers that Keynesians like myself often inadvertently provoke opposition. When we discuss temporary increases in government spending, we typically assume it is debt financed. Furthermore, as the interest on higher debt has to be paid for, we generally assume taxes increase to do that. So even though our increase in government spending is temporary, both taxes and debt end up being permanently higher. At a rather basic and non-intellectual level, I think this puts many people off from the start.
                Instead we could start with a temporary balanced budget increase in spending. That way neither taxes nor debt are higher in the long run. In addition, for anyone who has done their postgraduate training in the last twenty years, that is the natural way to start thinking about what is going on. Or if we want to avoid tax increases altogether, why not pay for any increase in debt by reducing government spending rather than raising taxes? If raising debt in the long run is a problem (and I think there are good reasons why it might be), then why not use lower government spending after the stimulus not just to pay the interest on the debt, but to pay off all the additional debt incurred by the stimulus? When you think about all these possibilities, the standard choice of debt finance paid for by permanently higher taxes is really the least likely to win friends.
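The difference between these financing choices is easy to see with toy numbers. Everything below is an illustrative assumption (the stimulus size, interest rate and repayment horizon are made up), not an estimate:

```python
# Toy arithmetic for financing a one-off fiscal stimulus.
# All numbers are illustrative assumptions.
stimulus = 100.0   # one-off, temporary rise in government spending
r = 0.03           # real interest rate on government debt

# Option 1: debt finance, taxes cover the interest. Debt stays permanently
# higher, and taxes rise permanently by the interest bill.
permanent_tax_rise = r * stimulus           # a small rise, but forever

# Option 2: balanced-budget finance. Taxes rise by the full amount, but
# only once, and debt never increases.
one_off_tax_rise = stimulus

# Option 3: debt finance repaid by lower future spending over, say,
# 20 periods. No tax rise at any point.
repayment_per_period = stimulus / 20

print(permanent_tax_rise, one_off_tax_rise, repayment_per_period)
```

The standard textbook choice (option 1) is the only one that leaves both taxes and debt permanently higher, which is the framing point being made above.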
                Now you could say that Keynesians do this because this policy choice is the most effective form of stimulus. In that case, why do we nearly always choose an increase in government consumption, rather than public investment? Additional public investment will have some positive impact on the supply of output, which will raise future taxes in much the same way as the hysteresis effects that DeLong and Summers analyse.
                So if I was trying to convince John Cochrane (or less ambitiously Tyler Cowen) of the efficacy of fiscal stimulus, how would I do it? I think I would use a two period model. The first period is Keynesian with interest rates stuck at the zero lower bound, and is the period in which we undertake a debt financed increase in government spending. The second period is classical, but where supply is influenced either by the additional infrastructure investment in period 1, or by hysteresis effects. All debt is paid off by the end of period 2 through lower government spending, but lower government spending does not influence output because this period is classical. (We could add a final third period which is the steady state and is uninfluenced by anything that happens in periods 1 and 2, just to show that we do not believe hysteresis or infrastructure effects last forever.)
                So what is there not to like about this policy? Output is higher in period 1 because demand is higher, and is higher in period 2 because supply is higher on average. There is no long run increase in debt. There is no increase in tax rates at any point. As period 2 is probably longer than period 1, we even have more time in which government spending is reduced rather than increased relative to base. However, because output and therefore tax receipts are higher in both periods, the reduction in government spending required to pay off the debt might not need to be that large: this is how the DeLong and Summers argument would be translated in this set-up. The framework focuses on the essential reason why stimulus works. We shift demand into a period in which it matters – because monetary policy is ineffective at the zero lower bound in period 1 – and out of a period in which it does not – because monetary policy works in period 2.
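The mechanics of this two-period story can be sketched numerically. Everything below is a toy illustration: the multiplier, hysteresis strength and tax share are my assumptions, not estimates from DeLong and Summers:

```python
# Minimal sketch of the two-period stimulus model described above.
# All parameter values are illustrative assumptions.
multiplier = 1.5   # period-1 Keynesian multiplier at the zero lower bound
hysteresis = 0.1   # fraction of period-1 output gain that raises period-2 supply
tax_share = 0.3    # marginal tax take on extra output
stimulus = 1.0     # debt-financed rise in government spending in period 1

# Period 1 (Keynesian): output rises by the multiplier times the stimulus.
dY1 = multiplier * stimulus

# Period 2 (classical): supply is higher through hysteresis / infrastructure.
dY2 = hysteresis * dY1

# Extra tax receipts in both periods reduce the debt left to repay,
# with no change in tax rates.
extra_tax = tax_share * (dY1 + dY2)
debt_to_repay = stimulus - extra_tax

# The remaining debt is paid off via lower period-2 spending, which
# (period 2 being classical) does not affect period-2 output.
spending_cut_period2 = max(debt_to_repay, 0.0)

print(dY1, dY2, spending_cut_period2)
```

With these made-up numbers, only about half of the original stimulus has to be clawed back through lower spending, which is the DeLong and Summers point that higher output does part of the fiscal work by itself.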

Monday 2 April 2012

The Falklands War: a simple cost benefit analysis

                It is the 30th anniversary of Argentina’s attack on the Falkland Islands. I was against the UK responding with a counter invasion. The key justification for taking the islands back by force was that the people there wanted to live under British rather than Argentinean rule. The population at the time was about 1,800. Nearly 900 soldiers lost their lives in that conflict. The financial cost for the UK was estimated at around $2 billion.
                My argument at the time was very simple. The order of magnitude of the financial cost was pretty well known in advance. The UK government could have offered each islander $1 million, either as compensation for living under Argentinean rule, or for being relocated to some part of the UK. (The Highlands of Scotland is pretty empty and would be the closest substitute.) The financial cost to the government would be the same, but no lives would be lost.
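The arithmetic behind that offer is crude but easy to check, using the two figures quoted above:

```python
# Back-of-envelope check of the compensation alternative.
war_cost = 2_000_000_000   # estimated UK financial cost of the war, USD
islanders = 1_800          # approximate population of the islands at the time

cost_per_islander = war_cost / islanders
print(cost_per_islander)   # roughly $1.1 million per islander
```

So the war cost slightly more per head than the $1 million offer would have, before counting any lives lost.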
                There were various counterarguments to this ‘crude’ utilitarian reasoning. One was that, if the UK had not attempted to fight, this would set a precedent which would encourage other dictators to use force in a similar manner. All that was demonstrated, of course, was that the UK was prepared to fight to protect one of its dependencies, against another side it thought it could beat. I do not think the Falklands war has really stopped Spain invading Gibraltar. Another argument was that the UK had a moral duty to protect its citizens. Strangely, this moral duty seemed not to apply to the similar number of residents of Diego Garcia some 10 years earlier, who were removed by the British from their homes to make way for a US airbase.
                Unfortunately I think the actual decision to fight back had little to do with principles. The moment Mrs Thatcher was told she could win, it would have been too humiliating for her government not to go to war. What I find much more depressing is that the war was hugely popular in the UK. National honour was at stake. Just before hostilities began, an opinion poll had only 18% of people saying that the UK government had been too willing to use force. Tellingly, however, only 14% of those polled were prepared to sacrifice more than 100 UK servicemen’s lives to regain the islands.
                The conflict had huge consequences for both countries. It helped keep Mrs Thatcher in power for another decade, but it was fatal for the Argentine junta.  The ‘Falklands factor’ may have encouraged Tony Blair’s military interventionism. This, together with a feeling of gratitude to the US for their intelligence support during the conflict, may have played a part when it came to UK involvement in Iraq. Any ex post cost benefit analysis is hugely complicated and uncertain, but I still think my ex ante crude utilitarian view is compelling.
                The UK declared war three days before my wife and I were due to fly to Peru for a month long holiday. We went as planned, despite knowing that Peru would be very supportive of Argentina. In the first three weeks the Peruvians we talked to regarded the whole thing with bemused curiosity, and there was no ill feeling towards us. The atmosphere changed a bit just before we left, after the General Belgrano was sunk. Lives were being lost as two countries attempted to salvage their national pride.
                 

Sunday 1 April 2012

Happiness and Paternalism

                Although I clearly do not agree with current UK macroeconomic policy, I did note at the end of a recent post that the government had taken the positive step of collecting more data on happiness. (It also deserves considerable credit for setting up the Office for Budget Responsibility, which its predecessor did not have the courage to do.) So I was interested to see a recent broadside from the Institute of Economic Affairs attacking this decision, and the whole happiness project more generally.
                Their collection of essays is slightly schizophrenic. It includes papers that try to show happiness is unrelated to equality or employment protection legislation, and is negatively related to government consumption. However other papers, and the introduction, also argue that happiness data is an unreliable guide to wellbeing, and that the government should not use happiness data to promote wellbeing explicitly. What is the underlying problem? Why put so much effort into criticising a few extra questions in a survey?
                This is a question that the New Economics Foundation asks in a refreshingly restrained response to the IEA document. The answer they suggest is that the IEA, and many of its contributors, have a fear that happiness data will be used by governments to do things government thinks will make people happier, rather than allowing individuals themselves to decide what makes them happy. 
                I am sometimes asked by students whether economics as a discipline has an ideological bias. What they often have in mind is the role of markets. My response, which I think is reasonable, is that once you get beyond the welfare theorems in Econ 101, what most economists spend their time doing is analysing market imperfections. So if you want to know what is wrong with markets, ask an economist.
                However I think most economists do share one philosophical characteristic, and that is a deep distrust of paternalism.  This is something I share – by and large, if it does not adversely affect other people, individuals should be allowed to make their own choices. But the by and large here is crucial. Sometimes individuals do make choices which are clearly bad for them.
                This was cogently argued by Richard Thaler and Cass Sunstein in a short paper provocatively entitled ‘Libertarian Paternalism’ (American Economic Review, 2003, Vol. 93, pp. 175-9). A great deal of behavioural economics is about departures from rationality, which in turn can lead to people making choices that are not optimal. Thaler and Sunstein point out that sometimes government cannot avoid making decisions that influence choices. An example they give is enrolment in employment based savings plans. Should people be given the choice to opt in or opt out? What is called ‘status quo bias’ means that whichever default is chosen will influence what people end up doing. Given this, surely it is best for the government to choose the default that, in its judgement, makes people better off. The authors have subsequently developed these ideas in their book Nudge. The Economist has a nice summary of this position, and some arguments against it, here. Nudge has been very influential among policymakers, both in the UK and the US.
                Decisions on whether to make savings schemes opt in or opt out, and similar nudges, seem fairly innocuous even if they are important. But what about forcing people to do things they might otherwise definitely decide not to do? Like wearing seat belts. Some economists have difficulties with making the wearing of seat belts compulsory, and regularly cite the possibility that it might encourage drivers to drive more dangerously. This belief seems fairly impervious to contrary evidence, and this post from philosopher/psychologist J. D. Trout has a justifiable go at economists as a result. The unfortunate truth is that individuals are rather bad at assessing low probability, high risk events, and as a result it makes sense – at least in this case – to take away their choice. Can anyone think of a recent similar example with rather more global consequences?!
                So the bad news for the IEA and similar devotees of absolute individual sovereignty is that sometimes people do systematically make bad decisions, and the state is right on those occasions to do something about it. Equally, the state is often too paternalistic, and interferes when it should not. The state can also make bad choices. Given this, the more data we have that allows us to sort out whether government is helping or meddling the better. That is why happiness data is useful, because it can help us do this.