Quantitative Macroeconomics [F2017]
"The era of closed-form solutions for their own sake should be over. Newer generations get similar intuitions from computer-generated examples than from functional expressions", Jose-Victor Rios-Rull, JME (2008).
Course Description:
Quantitative Macroeconomics (Unit I) follows the first-year PhD macro sequence. The goal of this course is to equip you with a wide set of tools to (i) solve macroeconomic models with heterogeneous agents, a.k.a. Aiyagari-Bewley-Huggett-Imrohoroglu (ABHI) economies, and (ii) relate these models to data to answer quantitative questions. You will learn to do so by doing. That is, this course will require intensive computational work by students.
The ABHI economies are the industry standard in macro. These economies can take the form of infinite-horizon, life-cycle, or overlapping-generations environments. Importantly, the presence of heterogeneity requires careful handling of distributions and aggregate consistency. We will discuss carefully how to do this in both stationary and nonstationary environments.
This course is demanding and I expect you to be engaged continuously from day one. The grade will be a weighted average of the regular homework assignments.
We meet Tuesdays and Thursdays, 16:45-18:15, in the LRC room.
Class Diary:
- Tue Sep 5: We went over the syllabus [.pdf] and discussed the rules of the game. We discussed what quantitative macro is and its intrinsic relationship with computation and measurement. We went over typical examples of quantitative experiments that we do in macro (e.g., how much of X explains Y? what is the welfare and redistributive effect of a given policy?). We briefly introduced some computational basics, including rounding error, approximation error, and human error. Two warnings: understand the routines (and subroutines) that you use, and change them if you need to, because the default options of the computer and of the (sub)routines might not suit your purposes. Do not let your computer choose things for you: you, not your computer, are responsible for your results. Slides: [What is Quantitative Macro?]
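A quick illustration of the difference between rounding error and approximation (truncation) error, as a minimal Python sketch (my example, not from the slides):

```python
import numpy as np

# Rounding error: 0.1 and 0.2 have no exact binary representation.
print(0.1 + 0.2 == 0.3)        # False
print(abs(0.1 + 0.2 - 0.3))    # ~5.6e-17

# Catastrophic cancellation: adding a small number to a huge one loses it.
x = 1e16
print((x + 1.0) - x)           # 0.0, not 1.0

# Approximation (truncation) error is a property of the formula itself:
# a forward difference has O(h) error no matter how exact the arithmetic.
h = 1e-1
print(abs((np.sin(1.0 + h) - np.sin(1.0)) / h - np.cos(1.0)))  # ~4e-2
```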
- Wed Sep 6: We posed the projection-methods algorithm. Slides: [Projection Methods: An Algorithm]. This requires knowledge of several numerical techniques that we will cover in the following days. We started with numerical differentiation and used a modified formula for the two-sided derivative that mitigates the rounding error that comes from dealing with very small numbers. Slides: [Numerical Differentiation and Integration]. Then we started to see how to approximate functions. First, we briefly went over local methods and discussed some of their limitations (e.g., distance to a singularity, large shocks, etc.). Second, we moved quickly to global methods, which are essential for ABHI models. Our discussion started with spectral methods, in which the approximant is defined over the entire domain of interest. Two main choices need to be made: interpolation nodes and basis functions (i.e., polynomials). We discussed some nice properties of using Chebyshev nodes (as opposed to equally spaced nodes) and orthogonal polynomials (as opposed to the monomial basis). In particular, we focused on Chebyshev polynomials. Slides: [Function Approximation]
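A minimal Python sketch of both ideas (my own illustration, not the course code): a two-sided derivative with the step scaled to the evaluation point, and Chebyshev interpolation of Runge's function versus monomial interpolation at equally spaced nodes:

```python
import numpy as np

def central_diff(f, x):
    # Two-sided derivative; scaling the step to |x| mitigates the
    # rounding error that comes with very small numbers.
    h = np.finfo(float).eps ** (1 / 3) * max(abs(x), 1.0)
    x1, x0 = x + h, x - h
    return (f(x1) - f(x0)) / (x1 - x0)  # divide by the realized step

print(central_diff(np.sin, 1.0) - np.cos(1.0))   # error roughly 1e-11

# Chebyshev vs. equally spaced interpolation of Runge's function.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 11
xx = np.linspace(-1.0, 1.0, 1001)

cheb_nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # Chebyshev zeros
c = np.polynomial.chebyshev.chebfit(cheb_nodes, f(cheb_nodes), n - 1)
err_cheb = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, c) - f(xx)))

equi_nodes = np.linspace(-1.0, 1.0, n)
p = np.polynomial.polynomial.polyfit(equi_nodes, f(equi_nodes), n - 1)
err_equi = np.max(np.abs(np.polynomial.polynomial.polyval(xx, p) - f(xx)))

print(err_cheb, err_equi)  # ~0.1 vs ~2: no Runge blow-up with Chebyshev
```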
- Tue Sep 12: We reviewed spectral methods for function approximation. Then we introduced finite-element methods/splines (a good reference is the chapter by Ellen McGrattan, 1998). Finite-element methods are particularly useful when we want local support, for example in problems with functions that have kinks (e.g., inequality constraints that occasionally bind). We discussed linear splines and cubic splines. We briefly discussed Schumaker splines, which preserve monotonicity and concavity. We ended the class discussing how to solve systems of nonlinear equations. We paid particular attention to the Gauss-Jacobi and Gauss-Seidel methods. Slides: [Nonlinear Systems]
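To make the Gauss-Jacobi/Gauss-Seidel distinction concrete, here is a minimal sketch (my example, not from the slides) for the toy nonlinear system x = cos(y), y = sin(x):

```python
import numpy as np

def gauss_jacobi(x, y, tol=1e-12, maxit=500):
    for it in range(maxit):
        # Jacobi: update every unknown with LAST iteration's values.
        x_new, y_new = np.cos(y), np.sin(x)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new, it
        x, y = x_new, y_new
    raise RuntimeError("no convergence")

def gauss_seidel(x, y, tol=1e-12, maxit=500):
    for it in range(maxit):
        x_new = np.cos(y)       # Seidel: use each fresh value immediately,
        y_new = np.sin(x_new)   # which typically speeds up convergence.
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new, it
        x, y = x_new, y_new
    raise RuntimeError("no convergence")

print(gauss_jacobi(0.5, 0.5))   # same root...
print(gauss_seidel(0.5, 0.5))   # ...in fewer iterations
```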
- Thu Sep 14: We started our work on numerical optimization. We briefly discussed Newton and quasi-Newton methods and introduced simulated annealing. We went into detail on derivative-free methods, in particular the Nelder-Mead algorithm. Slides: [Numerical Optimization]. Then we turned to the VFI algorithm and applied it, step by step, to the neoclassical growth model. To approximate the value function we used discrete methods. We then discussed some speed-up techniques that exploit properties of the decision rule (monotonicity) and the value function (concavity), as well as local search and Howard's policy iteration. We saw the curse of dimensionality in action when we added a shock to the model. Slides: [Value Function Methods: Discrete and Continuous Methods]. Note that your Homework 3 adds an elastic labor supply choice to this setup.
Please refresh your knowledge of dynamic programming (Value Function Iteration, VFI) if you need to. This is a must. Slides: [Dynamic Programming].
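As a refresher, here is a bare-bones Python sketch of discrete-grid VFI for the deterministic neoclassical growth model (illustrative parameters of my choosing, not the homework calibration; the speed-ups discussed above are omitted for clarity):

```python
import numpy as np

# Deterministic growth model: V(k) = max_{k'} log(c) + beta*V(k'),
# with c = k^alpha + (1-delta)*k - k'. Parameters are illustrative.
alpha, beta, delta = 0.36, 0.96, 0.08
kss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))  # steady state
kgrid = np.linspace(0.2 * kss, 1.8 * kss, 500)

# Consumption for every (k, k') pair; -inf payoff where infeasible.
c = kgrid[:, None] ** alpha + (1 - delta) * kgrid[:, None] - kgrid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

V = np.zeros(len(kgrid))
for it in range(2000):
    TV = np.max(u + beta * V[None, :], axis=1)   # Bellman operator
    if np.max(np.abs(TV - V)) < 1e-8:
        break
    V = TV

kpol = kgrid[np.argmax(u + beta * V[None, :], axis=1)]  # policy k'(k)
print(it, kpol[len(kgrid) // 2])  # the policy crosses the 45-degree line near kss
```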
- Wed Sep 20: Adrian presented your HWK1. We discussed tensors as a way to handle multidimensional problems. First, we considered the stochastic neoclassical growth model with two state variables, capital and a productivity shock, in which we approximate the value function with discrete methods using separate grids for capital and the productivity shock. Clearly, the dimension of the state space grows exponentially. We then moved the discussion to the use of continuous methods to approximate the value function. Note that in both cases, whether our approximant for the value function is discrete or continuous, we apply the VFI algorithm to solve for the value function. As a working example we approximated the value function of the deterministic neoclassical growth model with a B-1 (linear) spline. Then, we discussed how to create tensors when the approximant is continuous. In this case we build tensors with Kronecker products of the basis functions used to approximate the unknown function. The curse of dimensionality bites us again, and we discussed one way to handle it: complete polynomials, which reduce the number of basis terms that need to be evaluated (Gaspar and Judd, 1997). Slides: [Tensors and the Curse of Dimensionality]
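A minimal sketch of a tensor-product (Kronecker) basis in two dimensions (my illustration: an arbitrary smooth function on [-1,1]^2 stands in for the model's value function):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

g = lambda k, z: np.exp(-k**2) * np.sin(3 * z)   # stand-in for V(k, z)

nk = nz = 8
nodes = lambda n: np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
kn, zn = nodes(nk), nodes(nz)

Bk = chebvander(kn, nk - 1)      # (nk, nk) Chebyshev basis in k
Bz = chebvander(zn, nz - 1)      # (nz, nz) Chebyshev basis in z
B = np.kron(Bk, Bz)              # (nk*nz, nk*nz) tensor-product basis

K, Z = np.meshgrid(kn, zn, indexing="ij")
theta = np.linalg.solve(B, g(K, Z).ravel())   # interpolation coefficients

# Evaluate the approximant at an off-grid point: same Kronecker structure.
kp, zp = 0.3, -0.7
b = np.kron(chebvander(np.array([kp]), nk - 1),
            chebvander(np.array([zp]), nz - 1))[0]
print(b @ theta, g(kp, zp))      # close agreement

# With d states and n nodes each, the tensor basis has n**d terms: the
# curse of dimensionality. Complete polynomials keep only terms of total
# degree <= n-1, cutting the count roughly by d! (Gaspar and Judd, 1997).
```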
- Tue Sep 26: Shangdi presented HWK1 Extended. Solving model economies with a continuous shock can be done, but it is computationally very costly. So it is useful to learn how to approximate a continuous process with a finite-state Markov chain that closely mimics the underlying process. We discussed how to discretize a stochastic AR(1) process using Tauchen's (1986) method (see our updated slides on VFI). We showed how the method can be applied to more than one variable as well as to AR(n) processes with more than one lag. Other methods, such as Rouwenhorst's (1995), have been proposed for highly persistent processes (Kopecky and Suen, 2010). We then returned to our discussion of the curse of dimensionality with emphasis on the Smolyak algorithm (Krueger and Kubler, 2004, and Malin et al., 2011). The idea of the Smolyak algorithm is to build a sparse grid (and associated tensor products and interpolating functions) to deal with problems with a large state space (e.g., large OLG models or multicountry analysis).
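Tauchen's method is compact enough to sketch in full; a minimal Python version (standard formulas, my implementation):

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, rho, sigma, m=3.0):
    """Discretize y' = rho*y + eps, eps ~ N(0, sigma^2), into an
    n-state Markov chain (Tauchen, 1986). m = grid width in std devs."""
    sigma_y = sigma / np.sqrt(1.0 - rho**2)       # unconditional std dev
    grid = np.linspace(-m * sigma_y, m * sigma_y, n)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        z = grid - rho * grid[i]                  # deviations from cond. mean
        P[i, 0] = norm.cdf((z[0] + step / 2) / sigma)
        P[i, -1] = 1.0 - norm.cdf((z[-1] - step / 2) / sigma)
        P[i, 1:-1] = (norm.cdf((z[1:-1] + step / 2) / sigma)
                      - norm.cdf((z[1:-1] - step / 2) / sigma))
    return grid, P

grid, P = tauchen(n=7, rho=0.95, sigma=0.1)
print(P.sum(axis=1))   # rows sum to one
# At rho = 0.95 the chain understates persistence; Rouwenhorst's method
# is more accurate for such processes (Kopecky and Suen, 2010).
```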
- Thu Sep 28: Juan went over your HWK2 in detail. We discussed the results of a two-period model of occupational choice in which the tradeoffs between productivity (output), risk, and insurance were present. We also discussed the solution to the intrahousehold allocation problem. Adrian briefly covered the computation of a transition in the neoclassical growth model as the last part of HWK2.
- Thu Oct 5: We introduced ABHI economies. Slides: [Aiyagari-Bewley-Huggett-Imrohoroglu Economies]. We started with a benchmark economy with incomplete markets in which agents differ only in the realization of idiosyncratic income shocks. To get the ball rolling we assumed that prices are given (i.e., a partial equilibrium setup). We introduced the notion of precautionary savings and looked at Euler equations with occasionally binding liquidity constraints. We discussed how to estimate income processes, using as an example the variance-covariance structure of income residuals with separate transitory and permanent components. We then discussed how to solve for value functions (Bellman equation) and policy functions (Euler equation) in ABHI models with infinite horizon and finite horizon. Along the way we briefly discussed how to map models to data using calibration and the simulated method of moments (SMM).
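As a small illustration of how the covariance structure of income growth identifies permanent and transitory variances: with y = p + e, p a random walk with innovation variance σ_η² and e i.i.d. with variance σ_ε², we get var(Δy) = σ_η² + 2σ_ε² and cov(Δy_t, Δy_{t-1}) = -σ_ε². A minimal simulation check in Python (made-up parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50_000, 20
sd_eta, sd_eps = 0.15, 0.10     # true permanent / transitory std devs

perm = np.cumsum(rng.normal(0, sd_eta, (N, T)), axis=1)   # random walk
y = perm + rng.normal(0, sd_eps, (N, T))                  # plus transitory

dy = np.diff(y, axis=1)         # Delta y_it = eta_t + eps_t - eps_{t-1}
var_dy = dy.var()
cov_dy = np.mean(dy[:, 1:] * dy[:, :-1])   # means of dy are ~0

var_eps_hat = -cov_dy                      # cov identifies the transitory part
var_eta_hat = var_dy - 2 * var_eps_hat     # the rest is permanent
print(np.sqrt(var_eps_hat), np.sqrt(var_eta_hat))   # ~0.10, ~0.15
```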
- Tue Oct 10: We continued our discussion of how to solve ABHI economies (see slides from the previous class). We unpacked solutions to both the infinite- and finite-horizon versions of the ABHI economies using projection methods. We can use iterative methods for both the value function in the Bellman equation and the policy function in the Euler equation, and these iterations can be done with continuous-method approximations of the associated unknown functions (either the value function or the policy function). We provided two working examples for the Euler equation: piecewise linear splines and Chebyshev approximation. We also discussed non-iterative methods for the Euler equation. Then we defined the recursive competitive equilibrium and provided an algorithm to compute the general equilibrium. A key ingredient is the computation of the endogenous transition probability function mapping the joint distribution of assets and productivity shocks today into tomorrow's joint distribution. We discussed some speed-up methods: the Endogenous Grid Method (Carroll, 2006), the Envelope Condition Method (Maliar and Maliar, 2013), and the precomputation of expectations (Judd et al., 2016). We also discussed how to check the accuracy of our approximated solutions. Slides: [Accuracy Tests]
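A hedged sketch of the Endogenous Grid Method for the income-fluctuation problem (CRRA utility, a two-state income chain, and all parameter values are my illustrative choices):

```python
import numpy as np

# max E sum_t beta^t c^(1-sig)/(1-sig)  s.t.  c + a' = R*a + y,  a' >= 0
beta, sig, R = 0.95, 2.0, 1.03
y = np.array([0.7, 1.3])                      # income states
Pi = np.array([[0.9, 0.1], [0.1, 0.9]])       # transition matrix

a_grid = np.linspace(0.0, 40.0, 200)          # exogenous grid over a'
c = np.tile(0.5 * (R * a_grid + 1.0), (2, 1)) # initial guess for c(y, a)

for it in range(2000):
    # Euler RHS on the a' grid: beta * R * E[u'(c(y', a')) | y]
    rhs = beta * R * (Pi @ c ** (-sig))
    c_endog = rhs ** (-1.0 / sig)             # invert u': today's consumption
    # Endogenous grid: the asset level a that makes each a' optimal.
    a_endog = (c_endog + a_grid[None, :] - y[:, None]) / R
    c_new = np.empty_like(c)
    for iy in range(2):
        # Map back to the exogenous grid; below a_endog[iy, 0] the
        # borrowing constraint binds, so a' = 0 and c = R*a + y.
        c_new[iy] = np.interp(a_grid, a_endog[iy], c_endog[iy])
        binds = a_grid < a_endog[iy, 0]
        c_new[iy, binds] = R * a_grid[binds] + y[iy]
    if np.max(np.abs(c_new - c)) < 1e-10:
        break
    c = c_new

print(it, c[0, :3])   # policy in the low-income state near the constraint
```

No root-finding or maximization is needed inside the loop, which is the source of EGM's speed.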
- Wed Oct 11: We discussed how to conduct welfare analysis and policy evaluation that incorporates transitional dynamics. We took the example of an unexpected, permanent increase in capital income taxes. We went in detail over the algorithm used to compute the sequence of policy functions along the transition, and we described consumption-equivalent-variation exercises to assess whether that policy change was welfare improving or not. Note that the welfare gains/losses in the ABHI model are a function of the state variables (assets and idiosyncratic productivity shocks in our case); that is, the welfare implications are idiosyncratic, and a given policy may not be Pareto improving. Slides: [Welfare Analysis and Policy Evaluation]. We briefly went into two specific applications with two OLG models that study optimal capital and labor income taxation, by Conesa, Kitao and Krueger (2009) and by Krueger and Ludwig (2016). Slides: [OLG and Optimal Taxation].
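For CRRA utility the consumption-equivalent variation has a closed form: scaling consumption by (1 + λ) in every period and state scales lifetime utility by (1 + λ)^(1-σ), so λ(s) = (V₁(s)/V₀(s))^(1/(1-σ)) - 1 state by state. A minimal sketch with hypothetical value-function arrays:

```python
import numpy as np

sigma = 2.0
# Hypothetical values on a small (asset, shock) grid, before (V0) and
# after (V1) the reform; values are negative because sigma > 1.
V0 = np.array([[-60.0, -45.0], [-50.0, -38.0]])
V1 = np.array([[-63.0, -44.0], [-52.0, -37.0]])

# lam(s) > 0: the agent in state s gains from the reform; < 0: loses.
lam = (V1 / V0) ** (1.0 / (1.0 - sigma)) - 1.0
print(lam)   # heterogeneous signs: the reform is not Pareto ranked
```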
Homeworks
Students should expect one homework per foreseeable day of class:
- [Homework 1] [Due Thu Sep 14] Numerical differentiation and function approximation.
- [Homework 1 (Extension)] [Due Tue Sep 26] Multivariate approximation: A CES production function.
- [Homework 2] [Due Thu Sep 28] Solve for the allocations in the following models (nonlinear systems): (1) a 2-period model of occupational choice with uncertainty and heterogeneous initial wealth, (2) an intrahousehold allocation problem, and (3) a transition in the neoclassical growth model.
- [Homework 3] [Due Tue Oct 3] VFI using discrete and continuous methods. Compare accuracy and time performance (Chebyshev regression vs. cubic splines).
- [Homework 4] [Due Tue Oct 10] Dealing with sparse matrices, stationary distributions, inequality and mobility.
- [Homework 5] [Due Tue Oct 17] Solving a model of wealth distribution (ABHI economies).