*"The era of closed-form solutions for their own sake should be over. Newer generations get similar intuitions from computer-generated examples than from functional expressions"*, Jose-Victor Rios-Rull, JME (2008).

Quantitative Macroeconomics (Unit I) follows the first-year PhD macro sequence. The goal of this course is to equip you with a wide set of tools to (i) solve macroeconomic models with heterogeneous agents, a.k.a. Aiyagari-Bewley-Huggett-Imrohoroglu (ABHI) economies, and (ii) relate these models to data to answer quantitative questions. You will learn to do so by doing; that is, this course will require intensive computational work by students.

The ABHI economies are the industry standard in macro. These economies can take the form of infinite-horizon, life-cycle, or overlapping-generations environments. Importantly, the presence of heterogeneity requires taking good care of distributions and aggregate consistency. We will discuss carefully how to do this in both stationary and nonstationary environments.

This course is demanding, and I expect you to be engaged continuously from day one. The grade will be a weighted average of regular homework assignments.

We meet Tuesdays and Thursdays 16:45-18:15 in the LRC room.

**Tue Sep 5:** We went over the syllabus [.pdf] and discussed the rules of the game. We discussed what quantitative macro is and its intrinsic relation with computation and measurement. We went over typical examples of quantitative experiments that we do in macro (e.g., how much of X explains Y? What is the welfare and redistributive effect of a given policy?). We briefly introduced some computational basics, including rounding error, approximation error, and human error. Two warnings: understand the routines (and subroutines) that you use, and change them if you need to. That is, do not let the computer choose things for you (the computer's and the (sub)routines' default options might not suit you). Slides: [What is Quantitative Macro?]

**Wed Sep 6:** We posed the projection-methods algorithm. Slides: [Projection Methods: An Algorithm]. This requires knowledge of several numerical techniques that we will cover in the following days. We started with numerical differentiation and used a modified formula for the two-sided derivative that helps reduce the rounding error that comes from dealing with very small numbers. Slides: [Numerical Differentiation and Integration]. Then we started to see how to approximate functions. First, we briefly went over local methods and discussed some of their limitations (e.g., distance to a singularity, large shocks, etc.). Second, we moved quickly to global methods, which are essential for ABHI models. Our discussion started with spectral methods, in which the approximating functions are defined over the entire domain of interest. Two main choices need to be made: interpolation nodes and basis functions (i.e., polynomials). We discussed some nice properties of using Chebyshev nodes (as opposed to equally spaced nodes) and orthogonal polynomials (as opposed to the monomial basis). In particular, we focused on Chebyshev polynomials. Slides: [Function Approximation]

**Tue Sep 12:** We reviewed spectral methods for function approximation.
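The slides' exact modified formula is not reproduced here, but a minimal sketch of the idea, a two-sided (central) difference with the step scaled to the point of evaluation and forced to be exactly representable in floating point, might look like this (function name and step rule are illustrative):

```python
import numpy as np

def central_diff(f, x):
    """Two-sided (central) difference f'(x) ~ (f(x+h) - f(x-h)) / (2h).

    h is scaled to the size of x (the cube root of machine epsilon is a
    common rule of thumb for central differences), then adjusted so that
    x + h differs from x by an exactly representable amount, which
    reduces the rounding error from subtracting nearly equal numbers."""
    h = np.cbrt(np.finfo(float).eps) * max(abs(x), 1.0)
    temp = x + h
    h = temp - x            # now the actual step is exactly representable
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(central_diff(np.sin, 1.0))   # close to cos(1) ~ 0.5403
```

The central difference has truncation error of order h² and rounding error of order eps/h, which is why a step of order eps^(1/3) balances the two.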
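As a sketch of the spectral approach described above, the following interpolates a smooth function at Chebyshev nodes using NumPy's Chebyshev basis; the node count and the test function are arbitrary choices for illustration, not from the slides:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_nodes(n):
    """n Chebyshev nodes (first kind) on [-1, 1]."""
    return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

def cheb_interp(f, n):
    """Degree-(n-1) Chebyshev interpolant of f through n Chebyshev nodes."""
    x = cheb_nodes(n)
    coef = C.chebfit(x, f(x), n - 1)    # square system: exact interpolation
    return lambda z: C.chebval(z, coef)

g = cheb_interp(np.exp, 12)
grid = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(g(grid) - np.exp(grid)))
print(max_err)   # tiny: Chebyshev interpolation of smooth f converges very fast
```

For a general interval [a, b] one would first map it affinely into [-1, 1]; the snippet keeps the canonical domain for brevity.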
Then we introduced finite-element methods/splines (a good reference is the chapter by Ellen McGrattan in Marimon/Scott 1998). Finite-element methods are particularly useful when we want local support, for example in problems with functions that have kinks (e.g., inequality constraints that occasionally bind). We discussed linear splines and cubic splines. Then we briefly discussed Schumaker splines, which preserve monotonicity and concavity. We ended the class discussing how to solve systems of nonlinear equations. We paid particular attention to Gauss-Jacobi and Gauss-Seidel methods. Slides: [Nonlinear Systems]

**Thu Sep 14:** We started our work on numerical optimization. We briefly discussed Newton and quasi-Newton methods and introduced simulated annealing. We went into detail on derivative-free methods, in particular the Nelder-Mead algorithm. Slides: [Numerical Optimization]. Then we started to use the power of the VFI algorithm and applied it, step by step, to the neoclassical growth model. To approximate the value function we used discrete methods. We then discussed some speed-up techniques that use properties of the decision rule (monotonicity) and the value function (concavity), as well as local search and Howard's policy iteration. Then we saw the curse of dimensionality in action when we added a shock to the model; next, we will apply VFI with discrete and continuous methods. Slides: [Value Function Methods: Discrete and Continuous Methods]. Note that your Homework 3 adds an elastic labor supply choice to this setup.

**Tue Sep 19:** We will wrap up the discussion on value function methods using an approximation that is continuous over the state space. We will go over tensors and their choice to deal with more than one dimension.

**Wed Sep 20:**
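The class log above notes that splines' local support helps with kinked functions. A small illustration of that point, with an arbitrary kinked function and knot placement chosen for the example: a piecewise-linear spline with a breakpoint at the kink beats a global polynomial fit.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.abs                                # kinked, like an occasionally binding constraint
grid = np.linspace(-1.0, 1.0, 1001)

# Global (spectral) fit: degree-10 Chebyshev interpolation
xc = np.cos((2 * np.arange(11) + 1) * np.pi / 22)   # Chebyshev nodes
cheb = C.chebval(grid, C.chebfit(xc, f(xc), 10))

# Piecewise-linear spline with a knot at the kink: local support pays off
knots = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
spline = np.interp(grid, knots, f(knots))

print(np.max(np.abs(spline - f(grid))))   # essentially zero
print(np.max(np.abs(cheb - f(grid))))     # visibly nonzero near the kink
```

No polynomial of moderate degree can resolve the kink at zero, while the spline is exact here because |x| is itself piecewise linear between the chosen knots.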
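For the nonlinear-systems discussion, here is a minimal sketch of nonlinear Gauss-Seidel; the toy system, function names, and tolerances are made up for illustration:

```python
import numpy as np

def gauss_seidel(update_rules, x0, tol=1e-10, max_iter=500):
    """Nonlinear Gauss-Seidel: each equation is solved for 'its' variable,
    and updated components are reused immediately within a sweep.
    (Gauss-Jacobi would instead evaluate every rule at the old vector.)"""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i, rule in enumerate(update_rules):
            x[i] = rule(x)      # x already holds this sweep's updates for j < i
        if np.max(np.abs(x - x_old)) < tol:
            return x
    raise RuntimeError("Gauss-Seidel did not converge")

# Toy system (made up for illustration): x = 0.5*cos(y), y = 0.5*sin(x)
rules = [lambda v: 0.5 * np.cos(v[1]),
         lambda v: 0.5 * np.sin(v[0])]
sol = gauss_seidel(rules, [0.0, 0.0])
print(sol)
```

Convergence requires the updating map to be a contraction near the solution; in economic applications the "rule" for each variable is typically one equilibrium condition solved for one unknown, holding the others fixed.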
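For the derivative-free optimization material, a quick way to experiment with Nelder-Mead is SciPy's implementation; the test problem (Rosenbrock) and the tolerance settings below are standard illustrative choices, not from the course:

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Derivative-free minimization with Nelder-Mead on the Rosenbrock
# function, a standard test problem whose minimum is at (1, 1).
res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print(res.x)   # close to (1, 1)
```

Nelder-Mead needs only function evaluations (no gradients), which is why it is attractive for objectives that are noisy or come out of a simulation.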
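The VFI steps described above (grid search, the monotonicity speed-up, and Howard's policy iteration) can be sketched for the deterministic neoclassical growth model as follows; all parameter values and grid choices are illustrative assumptions, not the course's calibration:

```python
import numpy as np

# Discrete-grid VFI for the deterministic neoclassical growth model with
# log utility. Parameters are illustrative, not from the course.
alpha, beta, delta = 0.36, 0.96, 0.08
kss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))   # steady-state capital
grid = np.linspace(0.5 * kss, 1.5 * kss, 500)
n = grid.size
resources = grid ** alpha + (1 - delta) * grid                # f(k) + (1 - delta) k

V = np.zeros(n)
policy = np.zeros(n, dtype=int)
for it in range(2000):
    V_new = np.empty(n)
    jstart = 0                        # monotonicity: k'(k) is nondecreasing,
    for i in range(n):                # so search only from the previous choice up
        c = resources[i] - grid[jstart:]
        vals = np.where(c > 0,
                        np.log(np.maximum(c, 1e-300)) + beta * V[jstart:],
                        -np.inf)
        j = int(np.argmax(vals))
        policy[i] = jstart + j
        V_new[i] = vals[j]
        jstart = policy[i]
    for _ in range(20):               # Howard: iterate on the fixed policy
        c = resources - grid[policy]
        V_new = np.log(c) + beta * V_new[policy]
    err = np.max(np.abs(V_new - V))
    V = V_new
    if err < 1e-8:
        break

i_ss = int(np.argmin(np.abs(grid - kss)))
print(f"analytic k* = {kss:.3f}, policy at k*: {grid[policy[i_ss]]:.3f}")
```

The monotonicity restriction shrinks each row's search range, and the Howard step replaces many maximization sweeps with cheap policy-evaluation updates; exploiting concavity (stopping the search once values start falling) would cut the cost further.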

Please refresh your knowledge of dynamic programming (Value Function Iteration, VFI) if you need to. This is a must. Slides: [Dynamic Programming].

Students should expect one homework due each Thursday for the foreseeable future:

- [Homework 1] [Due Thu Sep 14] Numerical differentiation and function approximation
- [Homework 1 [Extension]] [Due Tue Sep 19] Multivariate function approximation
- [Homework 2] [Due Thu Sep 21] Solve for the allocations in the following models (nonlinear systems): (1) a two-period model of occupational choice with uncertainty and heterogeneous initial wealth, (2) a static intrahousehold allocation problem, and (3) a transition in the neoclassical growth model.
- [Homework 3] [Due Thu Sep 28] VFI using discrete methods. Compare accuracy and time performance with continuous methods (Chebyshev regression vs. cubic splines).
- [Homework 4] [Due TBD] An ABHI economy [Partial Equilibrium]
- [Homework 5] [Due TBD] An ABHI economy [General Equilibrium]
- [Homework 6] [Due TBD] An ABHI economy with entrepreneurial choice
- [Homework 7] [Due TBD] Assessing optimal tax progressivity