nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒07‒09
thirteen papers chosen by
Sune Karlsson
Örebro University

  1. Empirical Likelihood Methods in Econometrics: Theory and Practice By Yuichi Kitamura
  3. Forecasting of small macroeconomic VARs in the presence of instabilities By Todd E. Clark; Michael W. McCracken
  4. Moments of IV and JIVE Estimators By Russell Davidson; James MacKinnon
  5. Improving small area estimation by combining surveys: new perspectives in regional statistics By Albert Satorra; Eva Ventura; Alex Costa
  6. Statistics of extremes under random censoring By Einmahl, John H.J.; Fils-Villetard, Amélie; Guillou, Armelle
  7. Assessing Structural VARs By Lawrence J. Christiano; Martin Eichenbaum; Robert Vigfusson
  8. Likelihood ratio tests on cointegrating vectors, disequilibrium adjustment vectors, and their orthogonal complements By Norman Morin
  9. Solving linear rational expectations models: a horse race By Gary S. Anderson
  10. Why Has U.S. Inflation Become Harder to Forecast? By James H. Stock; Mark W. Watson
  11. A General Stochastic Volatility Model for the Pricing and Forecasting of Interest Rate Derivatives By Anders B. Trolle; Eduardo S. Schwartz
  12. The Relationship between Risk and Expected Return in Europe. By Ángel León; Juan Nave; Gonzalo Rubio
  13. Contributions of Zvi Griliches By James Heckman

  1. By: Yuichi Kitamura (Cowles Foundation, Yale University)
    Abstract: Recent developments in empirical likelihood (EL) methods are reviewed. First, to put the method in perspective, two interpretations of empirical likelihood are presented, one as a nonparametric maximum likelihood estimation method (NPMLE) and the other as a generalized minimum contrast estimator (GMC). The latter interpretation provides a clear connection between EL, GMM, GEL and other related estimators. Second, EL is shown to have various advantages over other methods. The theory of large deviations demonstrates that EL emerges naturally in achieving asymptotic optimality both for estimation and testing. Interestingly, higher order asymptotic analysis also suggests that EL is generally a preferred method. Third, extensions of EL are discussed in various settings, including estimation of conditional moment restriction models, nonparametric specification testing and time series models. Finally, practical issues in applying EL to real data, such as computational algorithms for EL, are discussed. Numerical examples to illustrate the efficacy of the method are presented.
    Keywords: Convex analysis, Empirical distribution, GNP-optimality, Large deviation principle, Moment restriction models, Nonparametric test, NPMLE, Semiparametric efficiency, Weak dependence
    JEL: C14
    Date: 2006–06
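To fix ideas on the EL machinery the abstract reviews: for a fixed parameter value, the empirical likelihood ratio profiles out the observation weights, which reduces to solving for a Lagrange multiplier. The sketch below is illustrative only (plain Newton iterations, no safeguards; function name is mine, not from the paper):

```python
import numpy as np

def el_log_ratio(g):
    # g: (n, k) array of moment contributions g(x_i, theta) for a fixed theta.
    # Solve for the Lagrange multiplier lam so that the implied probabilities
    # p_i = 1 / (n * (1 + lam @ g_i)) satisfy sum_i p_i * g_i = 0, then
    # return the EL ratio statistic -2 * sum_i log(n * p_i).
    n, k = g.shape
    lam = np.zeros(k)
    for _ in range(50):                      # plain Newton; no step control
        denom = 1.0 + g @ lam
        grad = (g / denom[:, None]).sum(axis=0)
        hess = -np.einsum('i,ij,ik->jk', denom ** -2.0, g, g)
        lam = lam - np.linalg.solve(hess, grad)
    denom = 1.0 + g @ lam
    return 2.0 * np.log(denom).sum()         # equals -2 * sum(log(n * p_i))
```

With g(x, theta) = x - theta, the statistic is zero at the sample mean (the NPMLE) and positive elsewhere, which is the testing interpretation the abstract emphasizes.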
  2. By: Manuel A. Domínguez; Ignacio N. Lobato
    Abstract: This article addresses statistical inference in models defined by conditional moment restrictions. Our motivation comes from two observations. First, the generalized method of moments, the most popular methodology for inference in these models, provides a unified framework but yields inconsistent statistical procedures. Second, consistent specification testing for these models has abandoned a unified approach by treating parameter estimation and model checking as unrelated problems. In this article, we provide a consistent specification test that allows us to propose a simple unified methodology yielding consistent statistical procedures. Although the test enjoys optimality properties, the asymptotic distribution of the test statistic depends on the specific data generating process, so standard asymptotic inference procedures are not feasible. Nevertheless, we show that a simple, novel wild bootstrap procedure properly estimates the asymptotic null distribution of the test statistic.
    Date: 2006–06
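The wild bootstrap the authors rely on can be sketched generically: perturb residuals by Rademacher draws and recompute the statistic on each pseudo-sample. The function below is a hypothetical illustration (names and interface are mine), not the authors' procedure:

```python
import numpy as np

def wild_bootstrap_pvalue(stat_fn, fitted, resid, n_boot=499, seed=0):
    # Recompute the statistic on pseudo-samples y* = fitted + resid * v,
    # where v is an i.i.d. Rademacher (+/-1) draw; multiplying residuals
    # pointwise preserves conditional heteroskedasticity, which is the
    # reason a *wild* bootstrap is needed in this setting.
    rng = np.random.default_rng(seed)
    t_obs = stat_fn(fitted + resid)              # statistic on the real data
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=resid.shape)
        t_boot[b] = stat_fn(fitted + resid * v)
    return (1 + (t_boot >= t_obs).sum()) / (1 + n_boot)
```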
  3. By: Todd E. Clark; Michael W. McCracken
    Abstract: Small-scale VARs have come to be widely used in macroeconomics, for purposes ranging from forecasting output, prices, and interest rates to modeling expectations formation in theoretical models. However, a body of recent work suggests such VAR models may be prone to instabilities. In the face of such instabilities, a variety of estimation or forecasting methods might be used to improve the accuracy of forecasts from a VAR. These methods include using different approaches to lag selection, observation windows for estimation, (over-) differencing, intercept correction, stochastically time-varying parameters, break dating, discounted least squares, Bayesian shrinkage, detrending of inflation and interest rates, and model averaging. Focusing on simple models of U.S. output, prices, and interest rates, this paper compares the effectiveness of such methods. Our goal is to identify those approaches that, in real time, yield the most accurate forecasts of these variables. We use forecasts from simple univariate time series models, the Survey of Professional Forecasters, and the Federal Reserve Board's Greenbook as benchmarks.
    Keywords: Economic forecasting ; Time-series analysis
    Date: 2006
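One of the devices compared, discounted least squares, is easy to state concretely: estimate by weighted least squares with geometrically decaying weights on older observations. A hedged sketch (function name and default discount are illustrative, not the paper's choices):

```python
import numpy as np

def discounted_ls(X, y, delta=0.98):
    # Weighted least squares with weight delta**age on each observation,
    # so the most recent observation gets weight 1 and old data fade out;
    # this guards against parameter instability of the kind the paper studies.
    T = len(y)
    w = delta ** np.arange(T - 1, -1, -1)
    Xw = X * w[:, None]
    # (X'WX)^{-1} X'Wy, using X'WX = (WX)'X
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```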
  4. By: Russell Davidson (McGill University); James MacKinnon (Queen's University)
    Abstract: We develop a new method, based on the use of polar coordinates, to investigate the existence of moments for instrumental variables and related estimators in the linear regression model. For generalized IV estimators, we obtain familiar results. For JIVE, we obtain the new result that this estimator has no moments at all. Simulation results illustrate the consequences of its lack of moments.
    Keywords: polar coordinates, simultaneous equations, JIVE, moments, instrumental variables
    JEL: C10 C13
    Date: 2006–06
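JIVE can be sketched as follows: each observation's instrument is the first-stage fitted value from a regression that omits that observation, computed via the standard leave-one-out identity. The code below is an illustrative JIVE1 sketch, not the authors' implementation; in small samples its heavy-tailed behavior is consistent with the no-moments result the abstract reports.

```python
import numpy as np

def jive1(y, X, Z):
    # First stage: fitted value for observation i from a regression of X on Z
    # that leaves observation i out, using the leave-one-out identity
    # Xhat_i = ((PX)_i - h_i * X_i) / (1 - h_i), h_i the leverage.
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # n x n projection matrix
    h = np.diag(P)
    Xhat = (P @ X - h[:, None] * X) / (1.0 - h[:, None])
    # Second stage: IV estimator with the leave-one-out fitted instruments.
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
```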
  5. By: Albert Satorra; Eva Ventura; Alex Costa
    Abstract: A national survey designed to estimate a specific population quantity is sometimes also used to estimate that quantity for a small area, such as a province. Budget constraints do not allow a larger sample size for the small area, so other means of improving estimation have to be devised. We investigate such methods and assess them in a Monte Carlo study. In particular, we explore how a complementary survey can be exploited in small area estimation, using the Spanish Labour Force Survey (EPA) and the Barometer survey in Spain as the context for our study.
    Keywords: Composite estimator, complementary survey, mean squared error, official statistics, regional statistics, small area
    Date: 2006–06
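The composite estimator named in the keywords combines a direct survey estimate with a synthetic (model- or complementary-survey-based) estimate using an MSE-minimizing weight. A minimal sketch, assuming the two estimation errors are independent (function name is mine):

```python
def composite(direct, synthetic, var_direct, mse_synthetic):
    # MSE-minimizing convex combination under independent errors:
    # w* = MSE(synthetic) / (Var(direct) + MSE(synthetic)), so the
    # noisier the direct estimate, the more weight the synthetic one gets.
    w = mse_synthetic / (var_direct + mse_synthetic)
    return w * direct + (1 - w) * synthetic
```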
  6. By: Einmahl, John H.J.; Fils-Villetard, Amélie; Guillou, Armelle (Tilburg University, Center for Economic Research)
    Abstract: We investigate the estimation of the extreme value index, when the data are subject to random censorship. We prove in a unified way detailed asymptotic normality results for various estimators of the extreme value index and use these estimators as the main building block for estimators of extreme quantiles. We illustrate the quality of these methods by a small simulation study and apply the estimators to medical data.
    Keywords: Asymptotic normality; extreme value index; extreme quantiles; random censoring (MSC: 62G05, 62G20, 62G32, 62N02)
    JEL: C13 C14 C41
    Date: 2006
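For intuition: a common censoring adaptation of the Hill estimator divides the naive estimate by the fraction of uncensored observations among the top k order statistics. The sketch below is illustrative and should not be read as the paper's exact estimator:

```python
import numpy as np

def hill(x, k):
    # Classical Hill estimator of the extreme value index
    # from the k largest order statistics.
    xs = np.sort(x)[::-1]
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

def hill_censored(x, censored, k):
    # Naive Hill estimate divided by the proportion of uncensored
    # observations among the k exceedances; with no censoring this
    # reduces to the plain Hill estimator.
    order = np.argsort(x)[::-1]
    p_uncensored = 1.0 - np.mean(censored[order[:k]])
    return hill(x, k) / p_uncensored
```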
  7. By: Lawrence J. Christiano; Martin Eichenbaum; Robert Vigfusson
    Abstract: This paper analyzes the quality of VAR-based procedures for estimating the response of the economy to a shock. We focus on two key issues. First, do VAR-based confidence intervals accurately reflect the actual degree of sampling uncertainty associated with impulse response functions? Second, what is the size of bias relative to confidence intervals, and how do coverage rates of confidence intervals compare with their nominal size? We address these questions using data generated from a series of estimated dynamic, stochastic general equilibrium models. We organize most of our analysis around a particular question that has attracted a great deal of attention in the literature: How do hours worked respond to an identified shock? In all of our examples, as long as the variance in hours worked due to a given shock is above the remarkably low number of 1 percent, structural VARs perform well. This finding is true regardless of whether identification is based on short-run or long-run restrictions. Confidence intervals are wider in the case of long-run restrictions. Even so, long-run identified VARs can be useful for discriminating among competing economic models.
    JEL: C1
    Date: 2006–07
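The short-run (recursive) identification the paper evaluates can be illustrated for a known VAR(1): impulse responses are powers of the autoregressive matrix applied to a Cholesky impact matrix. A toy sketch with known parameters and no estimation step (so it sidesteps exactly the sampling-uncertainty questions the paper studies):

```python
import numpy as np

def var1_irf(A, Sigma, horizons=8):
    # Impulse responses of y_t = A y_{t-1} + u_t, Var(u_t) = Sigma,
    # under a recursive (Cholesky, short-run) identification of the shocks.
    B0 = np.linalg.cholesky(Sigma)          # impact matrix at horizon 0
    irf = [B0]
    for _ in range(horizons):
        irf.append(A @ irf[-1])             # horizon-h response: A**h @ B0
    return np.stack(irf)                    # shape (horizons + 1, n, n)
```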
  8. By: Norman Morin
    Abstract: Cointegration theory provides a flexible class of statistical models that combine long-run relationships and short-run dynamics. This paper presents three likelihood ratio (LR) tests for simultaneously testing restrictions on cointegrating relationships and on how quickly the system reacts to the deviation from equilibrium implied by the cointegrating relationships. Both the orthogonal complements of the cointegrating vectors and of the vectors of adjustment speeds have been used to define the common stochastic trends of a nonstationary system. The restrictions implicitly placed on the orthogonal complements of the cointegrating vectors and of the adjustment speeds are identified for a class of LR tests, including those developed in this paper. It is shown how these tests can be interpreted as tests for restrictions on the orthogonal complements of the cointegrating relationships and adjustment vectors, which allow one to combine and test for economically meaningful restrictions on cointegrating relationships and on common stochastic trends.
    Date: 2006
  9. By: Gary S. Anderson
    Abstract: This paper compares the functionality, accuracy, computational efficiency, and practicalities of alternative approaches to solving linear rational expectations models, including the procedures of Sims (1996), Anderson and Moore (1983), Binder and Pesaran (1994), King and Watson (1998), Klein (1999), and Uhlig (1999). While all six procedures yield similar results for models with a unique stationary solution, the AIM algorithm of Anderson and Moore (1983) provides the highest accuracy; furthermore, this procedure exhibits significant gains in computational efficiency for larger-scale models.
    Date: 2006
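The object all six procedures compute can be illustrated in the simplest scalar case: x_t = a E_t[x_{t+1}] + b z_t with AR(1) forcing z, whose forward solution is b z_t / (1 - a rho). The sketch below verifies the closed form by truncating the forward sum; it is illustrative only (none of the six algorithms works this way for general systems):

```python
def re_solution(a, b, rho, z_t, n_terms=200):
    # Solve x_t = a*E_t[x_{t+1}] + b*z_t forward, using E_t[z_{t+j}] = rho**j * z_t:
    #   x_t = b * sum_j (a*rho)**j * z_t,
    # which converges to b*z_t / (1 - a*rho) when |a*rho| < 1.
    return b * sum((a * rho) ** j for j in range(n_terms)) * z_t
```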
  10. By: James H. Stock; Mark W. Watson
    Abstract: Forecasts of the rate of price inflation play a central role in the formulation of monetary policy, and forecasting inflation is a key job for economists at the Federal Reserve Board. This paper examines whether this job has become harder and, to the extent that it has, what changes in the inflation process have made it so. The main finding is that the univariate inflation process is well described by an unobserved component trend-cycle model with stochastic volatility or, equivalently, an integrated moving average process with time-varying parameters; this model explains a variety of recent univariate inflation forecasting puzzles. It appears currently to be difficult for multivariate forecasts to improve on forecasts made using this time-varying univariate model.
    JEL: C53 E37
    Date: 2006–06
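The forecasting implication of an IMA(1,1) representation is concrete: the optimal one-step forecast is an exponentially weighted average of past inflation, and a time-varying MA parameter maps into a time-varying smoothing gain. A hedged sketch with a fixed gain (function name and default are mine, not the paper's):

```python
import numpy as np

def ima_forecasts(pi, gain=0.3):
    # One-step-ahead forecasts f[t] of pi[t] via exponential smoothing,
    # f_{t+1} = f_t + gain * (pi_t - f_t), the optimal predictor for an
    # IMA(1,1) process; the paper's time-varying MA coefficient would make
    # `gain` drift over time.
    f = np.empty(len(pi))
    f[0] = pi[0]                  # initialize at the first observation
    for t in range(1, len(pi)):
        f[t] = f[t - 1] + gain * (pi[t - 1] - f[t - 1])
    return f
```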
  11. By: Anders B. Trolle; Eduardo S. Schwartz
    Abstract: We develop a tractable and flexible stochastic volatility multi-factor model of the term structure of interest rates. It features correlations between innovations to forward rates and volatilities, quasi-analytical prices of zero-coupon bond options and dynamics of the forward rate curve, under both the actual and risk-neutral measure, in terms of a finite-dimensional affine state vector. The model has a very good fit to an extensive panel data set of interest rates, swaptions and caps. In particular, the model matches the implied cap skews and the dynamics of implied volatilities. The model also performs well in forecasting interest rates and derivatives.
    JEL: E43 G13
    Date: 2006–06
  12. By: Ángel León (University of Alicante); Juan Nave (University of Castilla La Mancha); Gonzalo Rubio (University of the Basque Country)
    Abstract: We employ MIDAS (Mixed Data Sampling) to study the risk-expected return trade-off in several European stock indices. Using MIDAS, we report that, in most indices, there is a significant and positive relationship between risk and expected return. This strongly contrasts with the result we obtain when we employ both symmetric and asymmetric GARCH models for conditional variance. We also find that asymmetric specifications of the variance process within the MIDAS framework improve the relationship between risk and expected return. Finally, we introduce bivariate MIDAS and find some evidence of significant pricing of the hedging component in the intertemporal risk-return trade-off.
    Keywords: Risk-return trade-off, hedging component, MIDAS, conditional variance
    JEL: G12 C22
    Date: 2005–07–04
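MIDAS conditional-variance regressors are weighted sums of high-frequency squared returns, with weights from a low-dimensional beta polynomial. The sketch below is illustrative (parameter values and function names are mine), not the authors' exact specification:

```python
import numpy as np

def beta_weights(K, theta1=1.0, theta2=5.0):
    # Beta-polynomial lag weights on k = 1..K, normalized to sum to one;
    # with theta1 = 1 and theta2 > 1 the weights decay in the lag, putting
    # most mass on the most recent observations.
    u = np.arange(1, K + 1) / (K + 1.0)
    w = u ** (theta1 - 1.0) * (1.0 - u) ** (theta2 - 1.0)
    return w / w.sum()

def midas_variance(sq_returns, K=10, theta1=1.0, theta2=5.0):
    # Conditional-variance proxy: beta-weighted sum of the K most recent
    # squared (e.g., daily) returns, most recent first.
    w = beta_weights(K, theta1, theta2)
    return float((w * sq_returns[-K:][::-1]).sum())
```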
  13. By: James Heckman
    Abstract: This paper summarizes the major research contributions of Zvi Griliches.
    JEL: B31 D24 O33
    Date: 2006–06

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.