Empirical Literature Business Cycles Research Paper


Business cycle researchers study the temporary deviation of macroeconomic variables from their underlying trend. During the nineteenth century, theories explaining business cycles typically emphasized real factors (as opposed to monetary factors), invariably of agricultural origin. Of course, given the relative size and importance of the agricultural sector at that time, these theories had some degree of success. However, as agriculture began to decline in importance, macroeconomists searched for other driving forces for the business cycle. Friedrich August von Hayek (1899–1992) suggested monetary policy, while John Maynard Keynes (1883–1946) surmised that business cycles were driven by a force he called “animal spirits,” with nontrivial roles for sticky prices and wages.

The 1970s saw the birth of the rational expectations approach to macroeconomics, coming mainly out of the new classical economics school of thought. In fact, the rational expectations revolution is often credited with returning real driving factors to the forefront of business cycle research. According to this school of thought, given that people are rational and can accurately forecast future events and the actions of policymakers, monetary and fiscal policies are likely to be ineffective in stimulating the economy (one could say that money is neutral in the former case of monetary policy, and that Ricardian equivalence holds in the latter case of fiscal policy). Additionally, in the absence of market frictions (all markets clear), an assumption held by new classical economists, sticky prices and wages are rendered nonfactors in causing macroeconomic variables to deviate from trend. Therefore, the following questions naturally arise. What are the sources of business cycle movement if people can correctly (or with a great degree of accuracy) anticipate the actions of policymakers? And given that the economy is perpetually self-adjusting, so that all markets quickly return to equilibrium (no market frictions), how do business cycles arise? That is, given the well-functioning macroeconomy proposed by the new classical school, what causes macrovariables to temporarily deviate from trend? The answer, according to new classical macroeconomists, is that observed cycles are driven by (unanticipated) shocks to real factors, specifically by supply-side factors that alter factor productivity and the capital-labor ratio. This led to the rebirth of real business cycle (RBC) theory, with technology shock as its driving force instead of the agricultural factors suggested previously.

In order for its applicability to be assessed, the RBC theory should readily lend itself to empirical testing, as should any good economic theory. Economic theory, therefore, should be parsimonious while containing enough important features that, when estimated, it produces results that accord with the data in some statistical sense. The RBC theory is no exception, and much of its success to date stems from the ease with which it lends itself to estimation. To be clear, the purpose of RBC research, the narrow focus of this entry, is to investigate how much of the variation in output (or, more broadly, of the cyclical fluctuations in key macroeconomic series at business cycle frequencies) can be accounted for by technology shocks.

Calibration and regression methods are the commonly used techniques for empirically testing the RBC paradigm. Each technique, if properly applied, can be a useful tool in testing the merit of a particular model or in differentiating among various classes of models. Unfortunately, however, within the RBC framework, each tool can be subject to abuse by the researcher who wants to promote his or her own agenda at the expense of science. However, such blatant misuse of the empirical tools has been the exception rather than the rule, and the Cowles Commission (a research institute established in 1932 by businessman and economist Alfred Cowles, dedicated to linking economic theory to mathematics and statistics) should be proud to see both theory and estimation, of one form or another, appearing in more and more published articles. If only macroeconomists could agree on a particular theory or method!

Business Cycle Facts

There are four facts that any RBC model should be able to capture (see, for example, Hansen and Wright 1992, p. 3, Tables 1 and 2), namely:

  1. Investment is three times as volatile as output.
  2. Consumption (nondurable goods) is less volatile than output.
  3. Labor input is nearly as volatile as output.
  4. Labor and productivity are essentially uncorrelated.

Another auxiliary feature of business cycles is that most variation in output, at business cycle frequency, is due to labor input. In addition, macroeconomic data exhibit a high degree of persistence over the cycle. Researchers invariably interpret the former as implying that movements in capital are unimportant at business cycle frequency and can thus be ignored when modeling business cycle fluctuations, whereas the latter points to the use of driving forces in our model economy that display some degree of persistence. The persistence of the driving forces will find its way into the model data via what is commonly called the transmission mechanism. (See Cogley and Nason [1995] for reasons why having the persistence in the data solely due to the driving process is a weakness rather than strength of the model.)
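The business cycle facts above are conventionally computed from detrended data: each series is logged, a trend is removed with the Hodrick-Prescott (HP) filter, and relative volatilities and correlations are taken over the cyclical components. The sketch below implements the HP filter directly (solving the filter's first-order condition as a linear system) and applies it to synthetic data; the simulated series and its parameters are illustrative assumptions, not actual U.S. data.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split y into (trend, cycle).

    The trend solves min sum (y - tau)^2 + lam * sum (second diff of tau)^2,
    i.e. (I + lam * D'D) tau = y, where D is the second-difference matrix.
    lam = 1600 is the conventional value for quarterly data.
    """
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return trend, y - trend

# Synthetic log-output series (illustrative, NOT actual data): linear trend
# plus a smooth cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
log_output = 0.005 * t + 0.02 * np.sin(t / 8) + 0.005 * rng.standard_normal(200)

trend, cycle = hp_filter(log_output)
print("std of cyclical component: %.4f" % cycle.std())
```

Given several filtered series, the facts above reduce to ratios of standard deviations (e.g., std of cyclical investment over std of cyclical output) and correlations of the cyclical components.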

Calibration

Made popular by Finn Kydland and Edward Prescott (1982), calibration is defined as the estimation of some parameters of a model, under the assumption that the model is correct, as an intermediate step in the study of other parameters. Thomas Cooley (1997) describes it as a strategy for finding numerical values for the parameters of artificial economic worlds. Basically, the researcher chooses values for certain parameters (those about which he has no interest in making economic predictions) and has the model generate values for the parameters left free in the exercise, the ones about which he wants the model to make predictions.

Prescott in his 2004 Nobel Prize lecture lays out what a sound calibrating exercise of an RBC model would entail and closes by stressing the need for scientific discipline during the process by the researcher (see also Cooley 1997). Such discipline can generally be thought of as choosing the relevant parameter values based on sound microeconomic evidence and choosing functional forms for technology and preferences that display properties characteristic of the economy of interest. For example, Prescott (1986) chose his utility function based on the observation that per capita leisure displays no observable trend over time, and he used a Cobb-Douglas production function because of the constancy of capital share over the said period.
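A disciplined calibration of this kind can be sketched numerically. The fragment below backs out standard RBC parameters from steady-state relations of a one-sector growth model with Cobb-Douglas technology and log utility over consumption and leisure; the target values (capital share, real interest rate, depreciation, hours worked) are illustrative round numbers, not precise microeconomic estimates.

```python
# Calibrating baseline RBC parameters from steady-state relations.
# All targets below are illustrative assumptions, not precise estimates.
capital_share = 0.36          # alpha: from long-run factor-share data
quarterly_real_rate = 0.01    # target steady-state net real interest rate
depreciation = 0.025          # delta: from the investment/capital ratio
hours_share = 0.31            # fraction of the time endowment spent working

# Steady-state Euler equation: 1 = beta * (1 + r)  =>  beta = 1 / (1 + r)
beta = 1.0 / (1.0 + quarterly_real_rate)

# Cobb-Douglas technology implies r + delta = alpha * Y/K, pinning down K/Y,
# and hence the investment and consumption shares of output.
capital_output = capital_share / (quarterly_real_rate + depreciation)
investment_output = depreciation * capital_output
consumption_output = 1.0 - investment_output

# With u = log(c) + b*log(1-h), the intratemporal first-order condition
# b*c/(1-h) = (1-alpha)*y/h pins down the leisure weight b.
b = (1 - capital_share) * (1 - hours_share) / (hours_share * consumption_output)

print(f"beta={beta:.4f}, K/Y={capital_output:.2f}, C/Y={consumption_output:.2f}, b={b:.2f}")
```

The discipline Prescott stresses shows up in where each number comes from: every parameter is tied either to a long-run observation (factor shares, hours) or to a steady-state condition of the model itself.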

In an attempt to match the aforementioned business cycle facts, Prescott calibrated a baseline one-sector neoclassical growth model emphasizing technology shock as the main driving force behind cyclical fluctuations. He found that such a model was able to match not only the magnitude of output fluctuations but also the relative volatilities of both consumption and investment to output. The model failed, however, to match the facts pertaining to labor (facts three and four in the previous section). The ability of such a simple model, driven solely by technology shock, to match so many elements of the data is what led Kydland and Prescott (and others), in many of their papers, together and singly, to advocate strongly for technology shock and the RBC paradigm as the impetus behind cyclical fluctuations in macroeconomic data.

An attractive feature of the calibration approach to estimation is its flexibility. A calibrated RBC model can be judged a success or failure depending on how many, and which, of the key business cycle facts it can capture. To be fair, one will scarcely find a model that is able to mimic all features of an economy—that is why it is a model. Therefore, economic researchers have to be prepared to judge a model as successful even though it fails on certain grounds. This is where calibration and calibrated RBC models fall short, since no goodness-of-fit statistic is provided with which to judge a model’s success. However, this is also where calibration has been unfairly criticized, since any unmatched moment may be considered essential, depending on the reader. That said, calibration can prove useful in pointing out dimensions along which existing models can be improved. For example, to improve upon the baseline RBC model, Gary Hansen and Randall Wright (1992) incorporated additional shocks, specifically to fiscal policy, and other features of technology and preferences (for example, accounting for household production) that mainly affected the moments of the labor market while leaving the other (successful) elements of the model economy intact. These modifications resulted in modified RBC models that came closer to mimicking the actual U.S. economy. However, Hansen and Wright did not promote one particular modification over another; they left that up to the reader.

Calibration has both advantages and disadvantages relative to its econometric counterpart. Promoters of calibration usually point to the selection of parameters based on microeconomic evidence as a strength of the exercise. They claim that more information can be incorporated into a calibration exercise (compared to the econometric approach), which allows the calibrated model to be held to higher standards. The second issue, alluded to earlier, is the meaning of rejecting or accepting a model on statistical grounds. A model that fits the data well along every dimension except one (unimportant) dimension may be rejected based on some statistical test, or a model may fail to be rejected because the data are consistent with a wide range of possibilities (see Chari et al. 2005).

Those opposed to using calibration as an estimation device usually point to the extreme faith that one must place in the model: pure calibrators literally accept their model as the truth. They also point out that calibrated models are ill-suited to forecasting, since they assume the selected parameters are constant over time. Finally, calibration is useful only in times of structural stability, a point related to this lack of forecasting ability.

Regression Methods

The most popular regression approach to the study of real business cycles and measures of technological innovation was developed by economist Robert Merton Solow (1957). The Solow approach measures the inputs of capital and labor, and it labels as technology (more precisely, total factor productivity, or TFP) the difference between output and the measured inputs. In a more recent paper, Susanto Basu, John Fernald, and Miles Kimball (2004) modify the Solow approach by incorporating capital utilization and effort on the part of labor. They then measure technology shocks as the difference between output and measures of inputs, capital and labor, at both the extensive and intensive margins.
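In growth-rate form, the Solow residual is output growth minus share-weighted input growth. The sketch below computes it for a few periods of made-up growth rates under a constant capital share (a Cobb-Douglas assumption); all numbers are illustrative.

```python
import numpy as np

# Growth-accounting sketch of the Solow residual. The growth rates below
# are invented for illustration; alpha is an assumed constant capital share.
alpha = 0.36

# Log growth rates of output, capital, and labor over four sample periods
dY = np.array([0.030, 0.025, -0.010, 0.020])
dK = np.array([0.020, 0.018, 0.010, 0.015])
dL = np.array([0.015, 0.010, -0.020, 0.012])

# Solow residual (TFP growth): dA = dY - alpha*dK - (1 - alpha)*dL
dA = dY - alpha * dK - (1 - alpha) * dL
print(np.round(dA, 4))
```

Whatever output growth the measured, share-weighted inputs cannot explain is attributed to technology, which is why mismeasured utilization or effort (the Basu, Fernald, and Kimball critique) contaminates the residual.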

A more direct approach to the measurement of technology shocks can be found in John Shea (1999). Shea uses estimates of research and development spending and patents to gauge technological change over time.

More recently, macroeconomists have used the structural vector autoregressive (SVAR) approach to study the technology-driven RBC paradigm (see Galí 1999; Francis and Ramey 2005). Researchers using this method take key identifying assumptions from the theoretical model and impose them on the data. They then perturb the technology shocks identified this way and examine the responses of key macroeconomic variables to such perturbation. The responses are then compared to the business cycle facts in order to make a judgment about the plausibility of the RBC model in driving economic fluctuations.
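The identifying assumption in this literature is typically a long-run restriction in the spirit of Galí (1999): only the technology shock moves the level of labor productivity in the long run. The sketch below implements that restriction on a simulated bivariate VAR(1); the system, parameter values, and data are illustrative assumptions, not estimates from actual series.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1), x_t = A x_{t-1} + u_t, where x stacks
# (productivity growth, hours). Purely synthetic, illustrative data.
A_true = np.array([[0.3, 0.1],
                   [0.0, 0.7]])
T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.standard_normal(2) * 0.01

# OLS estimate of the VAR(1) coefficient matrix
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma = U.T @ U / len(U)          # residual covariance

# Long-run identification: only the technology shock affects productivity
# in the long run, so the long-run impact matrix C1 = (I - A)^{-1} B must
# be lower triangular. Take C1 as the Cholesky factor of the long-run
# covariance F @ Sigma @ F', with F = (I - A)^{-1}, and recover B from it.
F = np.linalg.inv(np.eye(2) - A_hat)
C1 = np.linalg.cholesky(F @ Sigma @ F.T)   # lower triangular by construction
B = (np.eye(2) - A_hat) @ C1               # impact matrix; B @ B.T == Sigma

# Impact response of the system to a one-standard-deviation technology shock
print("impact response to technology shock:", np.round(B[:, 0], 4))
```

Tracing B[:, 0] through the estimated VAR gives the impulse responses that are then compared against the business cycle facts, which is exactly the exercise on which Galí and Francis and Ramey find the RBC model wanting.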

Overall, the studies employing one or the other regression approach have not been good for the RBC model. These studies have failed to match the business cycle facts and have thus put into question the validity of the technology-driven RBC paradigm.

Conclusion

Other empirical RBC studies have brought to life the study of business cycles. The debates have centered on two important themes: (1) finding the correct empirical approach to use when studying this phenomenon; and (2) coming up with an answer to the question of what happens after a technology shock. As of 2006, there was no resolution for either of these problems, leaving room for clever young macroeconomists to shed new light and ideas on a long-standing subject at the heart of the work of many policymakers and academics.

Bibliography:

  1. Basu, Susanto, John Fernald, and Miles Kimball. 2004. Are Technology Improvements Contractionary? NBER Working Paper No. 10592. Cambridge, MA: National Bureau of Economic Research.
  2. Chari, V. V., Patrick J. Kehoe, and Ellen R. McGrattan. 2005. A Critique of Structural VARs Using Real Business Cycle Theory. Working Paper 631. Minneapolis, MN: Federal Reserve Bank of Minneapolis.
  3. Cogley, Timothy, and James M. Nason. 1995. Output Dynamics in Real-Business-Cycle Models. American Economic Review 85: 492–511.
  4. Cooley, Thomas F. 1997. Calibrated Models. Oxford Review of Economic Policy 13 (3): 55–69.
  5. Francis, Neville, and Valerie A. Ramey. 2005. Is the Technology-Driven Real Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations Revisited. Journal of Monetary Economics 52 (8): 1379–1399.
  6. Galí, Jordi. 1999. Technology, Employment, and the Business Cycle: Do Technology Shocks Explain Aggregate Fluctuations? American Economic Review 89: 249–271.
  7. Hansen, Gary, and Randall Wright. 1992. The Labor Market in Real Business Cycle Theory. Federal Reserve Bank of Minneapolis Quarterly Review 16 (2): 1–12.
  8. Kydland, Finn, and Edward Prescott. 1982. Time to Build and Aggregate Fluctuations. Econometrica 50: 1345–1370.
  9. Prescott, Edward. 1986. Theory Ahead of Business-Cycle Measurement. Federal Reserve Bank of Minneapolis Quarterly Review 10 (4): 9–22.
  10. Prescott, Edward. 2006. Nobel Lecture: The Transformation of Macroeconomic Policy and Research. Journal of Political Economy 114 (2): 203–235.
  11. Shea, John. 1999. What Do Technology Shocks Do? In NBER Macroeconomics Annual 1998, eds. Ben S. Bernanke and Julio J. Rotemberg, 275–310. Cambridge, MA: MIT Press.
  12. Solow, Robert M. 1957. Technical Change and the Aggregate Production Function. Review of Economics and Statistics 39 (3): 312–320.
