nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒02‒07
nineteen papers chosen by
Sune Karlsson
Örebro University

  1. Infinite-dimensional VARs and factor models By Alexander Chudik; M. Hashem Pesaran
  2. Efficient Estimation of Semiparametric Conditional Moment Models with Possibly Nonsmooth Residuals By Chen, Xiaohong; Pouzo, Demian
  3. Semiparametric Efficiency in GMM Models of Nonclassical Measurement Errors, Missing Data and Treatment Effects By Chen, Xiaohong; Hong, Han; Tarozzi, Alessandro
  4. A Powerful Tuning Parameter Free Test of the Autoregressive Unit Root Hypothesis By Nielsen, Morten
  5. Estimation of Nonparametric Conditional Moment Models with Possibly Nonsmooth Moments By Chen, Xiaohong; Pouzo, Demian
  6. Path Forecast Evaluation By Jorda, Oscar; Marcellino, Massimiliano
  7. Adaptive Experimental Design Using the Propensity Score By Jinyong Hahn; Keisuke Hirano; Dean Karlan
  8. Bootstrap prediction intervals for threshold autoregressive models By Jing, Li
  9. Comparing IV With Structural Models: What Simple IV Can and Cannot Identify By James J. Heckman; Sergio Urzua
  10. An Econometric Contribution to the Intertemporal Approach of the Current Account By Wagner Piazza Gaglianone; João Victor Issler
  11. On the use of robust regression in econometrics By Markus Baldauf; J.M.C. Santos Silva
  12. Nonlinearity and Temporal Dependence By Chen, Xiaohong; Hansen, Lars Peter; Carrasco, Marine
  13. The Taylor rule and forecast intervals for exchange rates By Jian Wang; Jason J. Wu
  14. Estimating Affine Multifactor Term Structure Models Using Closed-Form Likelihood Expansions By Ait-Sahalia, Yacine; Kimmel, Robert L.
  15. Identifying the elasticity of substitution with biased technical change By Miguel A. León-Ledesma; Peter McAdam; Alpo Willman
  16. The Smooth Colonel Meets the Reverend By Kiefer, Nicholas M.; Racine, Jeffrey S.
  17. Coupling Index and Stocks By Benjamin Jourdain; Mohamed Sbai
  18. Some correlation properties of spatial autoregressions By Martellosio, Federico
  19. Default Estimation, Correlated Defaults, and Expert Information By Kiefer, Nicholas M.

  1. By: Alexander Chudik (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); M. Hashem Pesaran (University of Cambridge, CIMF and USC; Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, Cambridge, CB3 9DD, United Kingdom.)
    Abstract: This paper introduces a novel approach for dealing with the 'curse of dimensionality' in the case of large linear dynamic systems. Restrictions on the coefficients of an unrestricted VAR are proposed that are binding only in the limit as the number of endogenous variables tends to infinity. It is shown that under such restrictions, an infinite-dimensional VAR (or IVAR) can be arbitrarily well characterized by a large number of finite-dimensional models in the spirit of the global VAR model proposed in Pesaran et al. (JBES, 2004). The paper also considers IVAR models with dominant individual units and shows that this will lead to a dynamic factor model with the dominant unit acting as the factor. The problems of estimation and inference in a stationary IVAR with unknown number of unobserved common factors are also investigated. A cross section augmented least squares estimator is proposed and its asymptotic distribution is derived. Satisfactory small sample properties are documented by Monte Carlo experiments. JEL Classification: C10, C33, C51.
    Keywords: Large N and T Panels, Weak and Strong Cross Section Dependence, VAR, Global VAR, Factor Models.
    Date: 2009–01
  2. By: Chen, Xiaohong (Yale U); Pouzo, Demian (New York U)
    Abstract: For semi/nonparametric conditional moment models containing unknown parametric components (theta) and unknown functions of endogenous variables (h), Newey and Powell (2003) and Ai and Chen (2003) propose sieve minimum distance (SMD) estimation of (theta, h) and derive the large sample properties. This paper greatly extends their results by establishing the following: (1) The penalized SMD (PSMD) estimator (hat{theta}, hat{h}) can simultaneously achieve root-n asymptotic normality of hat{theta} and the nonparametric optimal convergence rate of hat{h}, allowing for models with possibly nonsmooth residuals and/or noncompact infinite dimensional parameter spaces. (2) A simple weighted bootstrap procedure can consistently estimate the limiting distribution of the PSMD hat{theta}. (3) The semiparametric efficiency bound results of Ai and Chen (2003) remain valid for conditional models with nonsmooth residuals, and the optimally weighted PSMD estimator achieves the bounds. (4) The profiled optimally weighted PSMD criterion is asymptotically chi-square distributed, which implies an alternative consistent estimation of the confidence region for the efficient PSMD estimator of theta. All the theoretical results are stated in terms of any consistent nonparametric estimator of conditional mean functions. We illustrate our general theory using a partially linear quantile instrumental variables regression, a Monte Carlo study, and an empirical estimation of the shape-invariant quantile Engel curves with endogenous total expenditure.
    JEL: C14
    Date: 2008–02
  3. By: Chen, Xiaohong (New York U); Hong, Han (Duke U); Tarozzi, Alessandro
    Abstract: We study semiparametric efficiency bounds and efficient estimation of parameters defined through general nonlinear, possibly non-smooth and over-identified moment restrictions, where the sampling information consists of a primary sample and an auxiliary sample. The variables of interest in the moment conditions are not directly observable in the primary data set, but the primary data set contains proxy variables which are correlated with the variables of interest. The auxiliary data set contains information about the conditional distribution of the variables of interest given the proxy variables. Identification is achieved by the assumption that this conditional distribution is the same in both the primary and auxiliary data sets. We provide semiparametric efficiency bounds for both the "verify-out-of-sample" case, where the two samples are independent, and the "verify-in-sample" case, where the auxiliary sample is a subset of the primary sample; and the bounds are derived when the propensity score is unknown, or known, or belongs to a correctly specified parametric family. These efficiency variance bounds indicate that the propensity score is ancillary for the "verify-in-sample" case, but is not ancillary for the "verify-out-of-sample" case. We show that sieve conditional expectation projection based GMM estimators achieve the semiparametric efficiency bounds for all the above mentioned cases, and establish their asymptotic efficiency under mild regularity conditions. Although inverse probability weighting based GMM estimators are also shown to be semiparametrically efficient, they need stronger regularity conditions and clever combinations of nonparametric and parametric estimates of the propensity score to achieve the efficiency bounds for various cases. Our results contribute to the literature on non-classical measurement error models, missing data and treatment effects.
    JEL: C1
    Date: 2008–03
  4. By: Nielsen, Morten (Cornell U and CREATES)
    Abstract: This paper presents a family of simple nonparametric unit root tests indexed by one parameter, d, and containing Breitung's (2002) test as the special case d = 1. It is shown that (i) each member of the family with d > 0 is consistent, (ii) the asymptotic distribution depends on d, and thus reflects the parameter chosen to implement the test, and (iii) since the asymptotic distribution depends on d and the test remains consistent for all d > 0, it is possible to analyze the power of the test for different values of d. The usual Phillips-Perron or Dickey-Fuller type tests are characterized by tuning parameters (bandwidth, lag length, etc.), i.e. parameters which change the test statistic but are not reflected in the asymptotic distribution, and thus have none of these three properties. It is shown that members of the family with d < 1 have higher asymptotic local power than the Breitung (2002) test, and when d is small the asymptotic local power of the proposed nonparametric test is relatively close to the parametric power envelope, particularly in the case with a linear time trend. Furthermore, GLS detrending is shown to improve power when d is small, which is not the case for Breitung's (2002) test. Simulations demonstrate that, apart from some size distortion in the presence of large negative AR or MA coefficients, the proposed test has good finite sample properties in the presence of both linear and nonlinear short-run dynamics. When applying a sieve bootstrap procedure, the proposed test has very good size properties, with finite sample power that is higher than that of Breitung's (2002) test and even rivals the (nearly) optimal parametric GLS detrended augmented Dickey-Fuller test with lag length chosen by an information criterion.
    JEL: C22
    Date: 2008–05
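The tuning-parameter-free idea builds on Breitung's (2002) variance-ratio statistic, which compares squared partial sums of the series to the series itself. A minimal sketch of that special case (my reading of the statistic; the fractional-d family of the paper is not implemented, and critical values should be taken from the original papers):

```python
import numpy as np

def breitung_stat(y):
    # Variance-ratio statistic in the spirit of Breitung (2002): ratio of
    # squared partial sums to the squared series, normalized by T^2.
    # Small values are evidence against the unit-root null.
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                 # simple demeaning; no trend handled here
    T = len(y)
    S = np.cumsum(y)                 # partial sums S_t = y_1 + ... + y_t
    return float(S @ S) / (T**2 * float(y @ y))

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
rw = np.cumsum(e)                    # random walk: unit root holds
ar = np.empty(500)                   # stationary AR(1): unit root fails
ar[0] = e[0]
for t in range(1, 500):
    ar[t] = 0.5 * ar[t-1] + e[t]

print(breitung_stat(rw) > breitung_stat(ar))
```

Because the statistic involves no bandwidth or lag length, its null distribution reflects everything used to compute it, which is exactly the property the paper's family indexed by d preserves.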
  5. By: Chen, Xiaohong (Yale U); Pouzo, Demian (New York U)
    Abstract: This paper studies nonparametric estimation of conditional moment models in which the residual functions could be nonsmooth with respect to the unknown functions of endogenous variables. It is a problem of nonparametric nonlinear instrumental variables (IV) estimation, and a difficult nonlinear ill-posed inverse problem with an unknown operator. We first propose a penalized sieve minimum distance (SMD) estimator of the unknown functions that are identified via the conditional moment models. We then establish its consistency and convergence rate (in strong metric), allowing for possibly non-compact function parameter spaces, possibly non-compact finite or infinite dimensional sieves with flexible lower semicompact or convex penalty, or finite dimensional linear sieves without penalty. Under relatively low-level sufficient conditions, and for both mildly and severely ill-posed problems, we show that the convergence rates for the nonlinear ill-posed inverse problems coincide with the known minimax optimal rates for the nonparametric mean IV regression. We illustrate the theory by two important applications: root-n asymptotic normality of the plug-in penalized SMD estimator of a weighted average derivative of a nonparametric nonlinear IV regression, and the convergence rate of a nonparametric additive quantile IV regression. We also present a simulation study and an empirical estimation of a system of nonparametric quantile IV Engel curves.
    JEL: C13
    Date: 2008–04
  6. By: Jorda, Oscar (U of California, Davis); Marcellino, Massimiliano (Universita Bocconi)
    Abstract: A path forecast refers to the sequence of forecasts 1 to H periods into the future. A summary of the range of possible paths the predicted variable may follow for a given confidence level requires construction of simultaneous confidence regions that adjust for any covariance between the elements of the path forecast. This paper shows how to construct such regions with the joint predictive density and Scheffe's (1953) S-method. In addition, the joint predictive density can be used to construct simple statistics to evaluate the local internal consistency of a forecasting exercise of a system of variables. Monte Carlo simulations demonstrate that these simultaneous confidence regions provide approximately correct coverage in situations where traditional error bands, based on the collection of marginal predictive densities for each horizon, are vastly off the mark. The paper showcases these methods with an application to the most recent monetary episode of interest rate hikes in the U.S. macroeconomy.
    JEL: C32
    Date: 2008–07
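To see the mechanics, one simple way to turn a joint predictive density into simultaneous bands is to project the (1 - alpha) chi-square ellipsoid of the path-forecast errors onto each horizon, Scheffe-style. This is only a rectangular projection sketch with hypothetical numbers, not the paper's full construction:

```python
import numpy as np

def scheffe_band(mu, Sigma, chi2_crit):
    # Rectangular simultaneous band obtained by projecting the joint
    # chi-square ellipsoid of the path-forecast errors (Scheffe's S-method).
    # chi2_crit is the 1 - alpha quantile of chi-square with H = len(mu) df.
    mu = np.asarray(mu, dtype=float)
    half = np.sqrt(chi2_crit) * np.sqrt(np.diag(Sigma))
    return mu - half, mu + half

mu = np.array([0.10, 0.20, 0.25, 0.30])          # 4-step path forecast
Sigma = 0.04 * np.array([[1.0, 0.8, 0.6, 0.4],   # correlated forecast errors
                         [0.8, 1.0, 0.8, 0.6],
                         [0.6, 0.8, 1.0, 0.8],
                         [0.4, 0.6, 0.8, 1.0]])
lo, hi = scheffe_band(mu, Sigma, chi2_crit=9.488)  # chi2(4), 95% quantile
print(np.all(hi - mu > 1.96 * np.sqrt(np.diag(Sigma))))  # wider than pointwise
```

The point of the comparison in the last line is the paper's message: horizon-by-horizon 1.96-sigma bands understate joint uncertainty, so simultaneous regions must be wider to attain correct path coverage.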
  7. By: Jinyong Hahn (Department of Economics, UCLA); Keisuke Hirano (University of Arizona); Dean Karlan (Economic Growth Center, Yale University)
    Abstract: Many social experiments are run in multiple waves, or are replications of earlier social experiments. In principle, the sampling design can be modified in later stages or replications to allow for more efficient estimation of causal effects. We consider the design of a two-stage experiment for estimating an average treatment effect, when covariate information is available for experimental subjects. We use data from the first stage to choose a conditional treatment assignment rule for units in the second stage of the experiment. This amounts to choosing the propensity score, the conditional probability of treatment given covariates. We propose to select the propensity score to minimize the asymptotic variance bound for estimating the average treatment effect. Our procedure can be implemented simply using standard statistical software and has attractive large-sample properties.
    Keywords: experimental design, propensity score, efficiency bound
    JEL: C1 C14 C9 C93 C13
    Date: 2009–01
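The variance-minimizing choice of propensity score has a familiar Neyman-allocation form: within a covariate cell, the pointwise minimizer of sigma1^2/p + sigma0^2/(1-p) is p* = sigma1/(sigma1 + sigma0). A toy sketch of that second-stage assignment rule (the cell structure and numbers are hypothetical, not from the paper):

```python
import numpy as np

def optimal_propensity(sigma1, sigma0):
    # Pointwise minimizer of sigma1^2/p + sigma0^2/(1 - p) over p in (0, 1):
    # the Neyman allocation p*(x) = sigma1(x) / (sigma1(x) + sigma0(x)).
    sigma1 = np.asarray(sigma1, dtype=float)
    sigma0 = np.asarray(sigma0, dtype=float)
    return sigma1 / (sigma1 + sigma0)

# first-stage estimates of outcome standard deviations in two covariate cells
sigma1 = np.array([2.0, 1.0])   # treated outcomes are noisier in cell 1
sigma0 = np.array([1.0, 1.0])
p = optimal_propensity(sigma1, sigma0)
print(p)  # more treatment where treated outcomes are noisier
```

First-order condition check: setting the derivative -sigma1^2/p^2 + sigma0^2/(1-p)^2 to zero gives (1-p)/p = sigma0/sigma1, hence p* above; with equal variances this reduces to p = 1/2, the usual balanced design.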
  8. By: Jing, Li
    Abstract: This paper examines the performance of prediction intervals based on bootstrap for threshold autoregressive models. We consider four bootstrap methods to account for the variability of estimates, correct the small-sample bias of autoregressive coefficients and allow for heterogeneous errors. Simulation shows that (1) accounting for the sampling variability of estimated threshold values is necessary despite super-consistency, (2) bias-correction leads to better prediction intervals under certain circumstances, and (3) the two-sample bootstrap can improve long-term forecasts when errors are regime-dependent.
    Keywords: Bootstrap; Interval Forecasting; Threshold Autoregressive Models; Time Series; Simulation
    JEL: C53 C22 C15
    Date: 2009–01
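A minimal residual-bootstrap sketch for a two-regime TAR(1), with the threshold re-estimated in each bootstrap replication so that its sampling variability enters the interval. This is only one of the schemes the paper studies (bias-corrected and two-sample variants are not reproduced), and all function names are mine:

```python
import numpy as np

def fit_setar1(y, grid):
    # Least-squares fit of a two-regime TAR(1): y_t = a_j + b_j*y_{t-1} + e_t,
    # with regime j set by y_{t-1} <= r and threshold r chosen by grid search.
    x, z = y[1:], y[:-1]
    best = None
    for r in grid:
        low = z <= r
        if low.sum() < 10 or (~low).sum() < 10:
            continue                         # require enough obs per regime
        ssr, pars = 0.0, []
        for m in (low, ~low):
            X = np.column_stack([np.ones(m.sum()), z[m]])
            b, *_ = np.linalg.lstsq(X, x[m], rcond=None)
            pars.append(b)
            ssr += float(np.sum((x[m] - X @ b) ** 2))
        if best is None or ssr < best[0]:
            best = (ssr, r, pars)
    _, r, (blo, bhi) = best
    fitted = np.where(z <= r, blo[0] + blo[1] * z, bhi[0] + bhi[1] * z)
    e = x - fitted
    return r, blo, bhi, e - e.mean()         # centered residuals

def step(prev, r, blo, bhi, shock):
    b = blo if prev <= r else bhi
    return b[0] + b[1] * prev + shock

def bootstrap_interval(y, h=1, B=199, alpha=0.10, seed=0):
    # Residual-bootstrap percentile interval for the h-step-ahead forecast.
    # Each replication regenerates the sample and refits the model, so
    # parameter (including threshold) uncertainty enters the interval.
    rng = np.random.default_rng(seed)
    grid = np.quantile(y, np.linspace(0.15, 0.85, 15))
    r, blo, bhi, e = fit_setar1(y, grid)
    fcasts = []
    for _ in range(B):
        eb = rng.choice(e, size=len(y) + h, replace=True)
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = step(yb[t-1], r, blo, bhi, eb[t])
        rb, blb, bhb, _ = fit_setar1(yb, grid)
        f = y[-1]
        for s in range(h):
            f = step(f, rb, blb, bhb, eb[len(y) + s])
        fcasts.append(f)
    return np.quantile(fcasts, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = (0.6 if y[t-1] <= 0 else -0.4) * y[t-1] + rng.standard_normal()
lo_, hi_ = bootstrap_interval(y, h=2)
print(lo_, hi_)
```

Skipping the refit inside the loop would be the version that ignores threshold-estimation variability, which is exactly what the paper's simulations warn against.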
  9. By: James J. Heckman; Sergio Urzua
    Abstract: This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak's Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.
    JEL: C31
    Date: 2009–02
  10. By: Wagner Piazza Gaglianone; João Victor Issler
    Abstract: This paper investigates an intertemporal optimization model to analyze the current account through Campbell & Shiller’s (1987) approach. In this setup, a Wald test is conducted to analyze a set of restrictions imposed on a VAR used to forecast the current account for a set of countries. We focus here on three estimation procedures: OLS, SUR and the two-way error decomposition of Fuller & Battese (1974). We also propose an original note on Granger causality, which is a necessary condition for performing the Wald test. Theoretical results show that, in the presence of global shocks, the OLS and SUR estimators might lead to a biased covariance matrix, with serious implications for the validation of the model. A small Monte Carlo simulation confirms these findings and points to the Fuller & Battese procedure as the preferred choice in the presence of global shocks. An empirical exercise for the G-7 countries is also provided, and the results of the Wald test change substantially with different estimation techniques. In addition, global shocks can account for up to 40% of the total residuals of the G-7.
    Date: 2009–01
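The Wald test of VAR restrictions at the heart of the Campbell-Shiller approach has a generic form: for stacked coefficient estimates b with covariance V, W = (Rb - r)'[RVR']^{-1}(Rb - r) is asymptotically chi-square with rank(R) degrees of freedom. A self-contained sketch for a bivariate VAR(1); the restriction tested below (a Granger-noncausality coefficient) is illustrative, not the paper's current-account restriction:

```python
import numpy as np

def var1_wald(Y, R, r):
    # OLS-estimate a VAR(1) for the columns of Y, then compute the Wald
    # statistic W = (R b - r)' [R V R']^{-1} (R b - r) for linear
    # restrictions R b = r on b = vec(B), equation-by-equation stacking.
    T, k = Y.shape
    X = np.column_stack([np.ones(T - 1), Y[:-1]])     # const + lagged levels
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)     # (k+1) x k coefficients
    U = Y[1:] - X @ B                                  # residuals
    Sigma = U.T @ U / (T - 1 - X.shape[1])             # error covariance
    V = np.kron(Sigma, np.linalg.inv(X.T @ X))         # cov of vec(B)
    b = B.flatten(order="F")                           # stack by equation
    d = R @ b - r
    return float(d @ np.linalg.solve(R @ V @ R.T, d))  # ~ chi2(rows of R)

rng = np.random.default_rng(2)
T = 400
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t, 0] = 0.5 * Y[t-1, 0] + rng.standard_normal()              # y1: AR(1)
    Y[t, 1] = 0.3 * Y[t-1, 0] + 0.4 * Y[t-1, 1] + rng.standard_normal()
# H0: y2 does not enter the y1 equation; b = (c1, a11, a12, c2, a21, a22)
R = np.zeros((1, 6)); R[0, 2] = 1.0
W = var1_wald(Y, R, np.zeros(1))
print(W)  # compare to a chi2(1) critical value (3.84 at the 5% level)
```

The paper's point is that the covariance matrix V plugged into W is where OLS and SUR go wrong under global shocks, so the same statistic can deliver different verdicts across estimation procedures.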
  11. By: Markus Baldauf; J.M.C. Santos Silva
    Abstract: The use of robust regression estimators obtained by iteratively reweighted least squares (IRLS) is gaining popularity among applied econometricians. The main argument invoked to justify the use of the robust IRLS estimators is that they provide efficiency gains in the presence of outliers or non-normal errors. Unfortunately, most practitioners seem to be unaware of the fact that heteroskedastic and skewed errors can dramatically affect the properties of these estimators. In this paper we reconsider the interpretation of the robust IRLS estimators when used in typical econometric problems, and conclude that their use cannot be generally recommended.
    Date: 2009–01–28
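For readers unfamiliar with the estimators being criticized, a Huber-type IRLS robust regression looks roughly like this. It is a textbook sketch, not any specific implementation the paper examines; the tuning constant 1.345 is the conventional Huber choice:

```python
import numpy as np

def huber_irls(x, y, k=1.345, iters=50):
    # Huber M-estimation via iteratively reweighted least squares (IRLS):
    # weights w_i = min(1, k / |u_i|) for standardized residuals u_i,
    # with scale re-estimated each iteration by the normalized MAD.
    X = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS starting values
    for _ in range(iters):
        res = y - X @ b
        s = np.median(np.abs(res)) / 0.6745             # robust scale
        u = res / max(s, 1e-12)
        w = np.minimum(1.0, k / np.maximum(np.abs(u), 1e-12))
        WX = X * w[:, None]
        b = np.linalg.solve(WX.T @ X, WX.T @ y)         # weighted normal eqs
    return b

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100)
x[:5] = 9.0                                             # high-leverage points
y = 1.0 + 2.0 * x + rng.standard_normal(100)
y[:5] += 40.0                                           # five gross outliers
b_rob = huber_irls(x, y)
b_ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(100), x]), y, rcond=None)
print(b_rob[1], b_ols[1])  # robust slope stays near the true value of 2
```

The downweighting that protects the fit here is precisely what changes the estimand under heteroskedastic or skewed errors, which is the paper's warning: the weights respond to the error distribution, not only to outliers.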
  12. By: Chen, Xiaohong (Yale U); Hansen, Lars Peter (U of Chicago); Carrasco, Marine (U of Montreal)
    Abstract: Nonlinearities in the drift and diffusion coefficients influence temporal dependence in scalar diffusion models. We study this link using two notions of temporal dependence: beta-mixing and rho-mixing. We show that beta-mixing and rho-mixing with exponential decay are essentially equivalent concepts for scalar diffusions. For stationary diffusions that fail to be rho-mixing, we show that they are still beta-mixing except that the decay rates are slower than exponential. For such processes we find transformations of the Markov states that have finite variances but infinite spectral densities at frequency zero. Some have spectral densities that diverge at frequency zero in a manner similar to that of stochastic processes with long memory. Finally we show how nonlinear, state-dependent, Poisson sampling alters the unconditional distribution as well as the temporal dependence.
    JEL: C12
    Date: 2008–05
  13. By: Jian Wang; Jason J. Wu
    Abstract: This paper attacks the Meese-Rogoff (exchange rate disconnect) puzzle from a different perspective: out-of-sample interval forecasting. Most studies in the literature focus on point forecasts. In this paper, we apply Robust Semi-parametric (RS) interval forecasting to a group of Taylor rule models. Forecast intervals for twelve OECD exchange rates are generated, and modified tests of Giacomini and White (2006) are conducted to compare the performance of Taylor rule models and the random walk. Our contribution is twofold. First, we find that, in general, Taylor rule models generate tighter forecast intervals than the random walk, given that their intervals cover out-of-sample exchange rate realizations equally well. This result is more pronounced at longer horizons. Our results suggest a connection between exchange rates and economic fundamentals: economic variables contain information useful in forecasting the distributions of exchange rates. The benchmark Taylor rule model is also found to perform better than the monetary and PPP models. Second, the inference framework proposed in this paper for forecast-interval evaluation can be applied in a broader context, such as inflation forecasting, and is not limited to the models and interval forecasting methods used in this paper.
    Date: 2009
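The raw ingredients of such a comparison are empirical coverage and average interval width; the paper's formal inference then runs modified Giacomini-White tests on loss sequences built from them. A descriptive sketch with hypothetical intervals (no test statistic is computed here):

```python
import numpy as np

def interval_scorecard(lo, hi, actual):
    # Empirical coverage (share of realizations falling inside the interval)
    # and average width: the two dimensions on which interval forecasts
    # from competing models are compared.
    inside = (actual >= lo) & (actual <= hi)
    return inside.mean(), (hi - lo).mean()

rng = np.random.default_rng(4)
actual = rng.standard_normal(250)                      # realized changes
loA, hiA = -1.96 * np.ones(250), 1.96 * np.ones(250)   # tighter intervals
loB, hiB = -3.00 * np.ones(250), 3.00 * np.ones(250)   # wider intervals
covA, widA = interval_scorecard(loA, hiA, actual)
covB, widB = interval_scorecard(loB, hiB, actual)
print(covA, widA, covB, widB)
```

The paper's finding has exactly this shape: Taylor rule intervals achieve coverage comparable to the random walk's but with smaller average width, which is the sense in which fundamentals carry distributional information.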
  14. By: Ait-Sahalia, Yacine (Princeton U); Kimmel, Robert L. (Ohio State U)
    Abstract: We develop and implement a technique for maximum likelihood estimation in closed form of multivariate affine yield models of the term structure of interest rates. We derive closed-form approximations to the likelihood functions for all nine of the Dai and Singleton (2000) canonical affine models with one, two, or three underlying factors. Monte Carlo simulations reveal that this technique very accurately approximates true maximum likelihood, which is, in general, infeasible for affine models. We also apply the method to a dataset consisting of synthetic US Treasury strips, and find parameter estimates for nine different affine yield models, each using two different market price of risk specifications. One advantage of maximum likelihood estimation is the ability to compare non-nested models using likelihood ratio tests. We find, using these tests, that the choice of preferred canonical model depends on the market price of risk specification. Comparison to other approximation methods, Euler and QML, on both simulated and real data suggests that our approximation technique is much closer to true MLE than alternative methods.
    Date: 2008–10
  15. By: Miguel A. León-Ledesma (Department of Economics, Keynes College, University of Kent, Canterbury, Kent CT2 7NP, United Kingdom.); Peter McAdam (Corresponding author: Research Department, European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main.); Alpo Willman (Research Department, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: Although the elasticity of capital-labor substitution and the bias of technical change are critical parameters in many economic fields, the received wisdom, in the theoretical and empirical literatures, is that their joint identification is infeasible. This paper challenges that pessimistic interpretation. Putting the new approach of "normalized" production functions at the heart of a Monte Carlo analysis, we identify the conditions under which identification is feasible and robust. The key result is that jointly modeling the production function and the first-order conditions is superior to single-equation approaches in terms of robustly capturing production and technical parameters, especially when merged with "normalization". Our results have fundamental implications for production-function estimation under non-neutral technical change, for understanding the empirical relevance of normalization, and for interpreting the variability underlying past empirical studies. JEL Classification: C22, E23, O30, O51.
    Keywords: Constant Elasticity of Substitution, Factor-Augmenting Technical Change, Normalization, Factor Income share, Identification, Monte Carlo.
    Date: 2009–01
  16. By: Kiefer, Nicholas M. (Cornell U and CREATES, The Danish Science Foundation, U of Aarhus); Racine, Jeffrey S. (McMaster U)
    Abstract: Kernel smoothing techniques have attracted much attention and some notoriety in recent years. The attention is well deserved as kernel methods free researchers from having to impose rigid parametric structure on their data. The notoriety arises from the fact that the amount of smoothing (i.e., local averaging) that is appropriate for the problem at hand is under the control of the researcher. In this paper we provide a deeper understanding of kernel smoothing methods for discrete data by leveraging the unexplored links between hierarchical Bayes models and kernel methods for discrete processes. A number of potentially useful results are thereby obtained, including bounds on when kernel smoothing can be expected to dominate non-smooth (e.g., parametric) approaches in mean squared error and suggestions for thinking about the appropriate amount of smoothing.
    Date: 2008–05
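The standard discrete kernel in this literature is the Aitchison-Aitken kernel, which shrinks raw cell frequencies toward the uniform distribution as the smoothing parameter grows. A minimal sketch of that estimator (the Bayes connection the paper develops is only alluded to in the note below):

```python
import numpy as np

def aa_kernel_pmf(x, C, lam):
    # Aitchison-Aitken kernel estimate of a pmf on {0, ..., C-1}: each
    # observation places mass (1 - lam) on its own cell and spreads lam
    # equally over the other C - 1 cells. lam = 0 recovers the raw
    # frequencies; lam = (C - 1) / C gives the uniform distribution.
    n = len(x)
    counts = np.bincount(x, minlength=C)
    return (1 - lam) * counts / n + lam * (n - counts) / (n * (C - 1))

x = np.array([0, 0, 0, 1, 2])      # sparse sample on 3 cells
print(aa_kernel_pmf(x, 3, 0.0))    # raw frequencies: [0.6 0.2 0.2]
print(aa_kernel_pmf(x, 3, 2/3))    # fully smoothed: uniform over 3 cells
```

The shrinkage toward uniform for intermediate lam is the same kind of pooling a symmetric Dirichlet prior induces in a hierarchical Bayes model, which is the flavor of the link between the "smooth colonel" (kernels) and the Reverend (Bayes) that the paper makes precise.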
  17. By: Benjamin Jourdain (CERMICS - Centre d'Enseignement et de Recherche en Mathématiques, Informatique et Calcul Scientifique - INRIA - Ecole Nationale des Ponts et Chaussées); Mohamed Sbai (CERMICS - Centre d'Enseignement et de Recherche en Mathématiques, Informatique et Calcul Scientifique - INRIA - Ecole Nationale des Ponts et Chaussées)
    Abstract: In this paper, we are interested in continuous-time models in which the index level induces some feedback on the dynamics of its constituent stocks. More precisely, we propose a model in which the log-returns of each stock may be decomposed into a systemic part proportional to the log-returns of the index plus an idiosyncratic part. We show that, when the number of stocks in the index is large, this model may be approximated by a local volatility model for the index and a stochastic volatility model for each stock with volatility driven by the index. We address calibration of both the limit and the original models.
    Keywords: Index modeling, calibration, non-parametric estimation
    Date: 2008–12–17
  18. By: Martellosio, Federico
    Abstract: This paper investigates how the correlations implied by a first-order simultaneous autoregressive (SAR(1)) process are affected by the weights matrix W and the autocorrelation parameter. We provide an interpretation of the covariances between the random variables observed at two spatial units, based on a particular type of walk connecting the two units. The interpretation serves to explain a number of correlation properties of SAR(1) models, and clarifies why it is impossible to control the correlations through the specification of W.
    Keywords: simultaneous autoregressions; spatial autocorrelation; spatial weights matrices; walks in graphs.
    JEL: C50 C21
    Date: 2008–10
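The walk interpretation comes from the Neumann expansion of the SAR(1) reduced form: y = (I - lam W)^{-1} e, and (I - lam W)^{-1} = sum_k lam^k W^k, where the (i, j) entry of W^k accumulates the weights of length-k walks from i to j. A numerical sketch with a hypothetical 4-unit weights matrix:

```python
import numpy as np

def sar_cov(W, lam, sigma2=1.0):
    # Exact SAR(1) covariance: y = lam*W*y + e  =>  y = (I - lam W)^{-1} e,
    # so Var(y) = sigma2 * (I - lam W)^{-1} (I - lam W)^{-T}.
    n = W.shape[0]
    A = np.linalg.inv(np.eye(n) - lam * W)
    return sigma2 * A @ A.T

def sar_cov_walks(W, lam, K, sigma2=1.0):
    # Walk-based approximation: truncate (I - lam W)^{-1} = sum_k lam^k W^k
    # at K terms; entry (i, j) of W^k sums weights of walks i -> j of length k.
    A = sum(lam**k * np.linalg.matrix_power(W, k) for k in range(K + 1))
    return sigma2 * A @ A.T

# row-standardized weights for 4 units on a line
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
exact = sar_cov(W, 0.4)
approx = sar_cov_walks(W, 0.4, K=30)
print(np.max(np.abs(exact - approx)))  # Neumann series converges fast here
```

Because every entry of the covariance mixes walks of all lengths weighted by powers of the autocorrelation parameter, tweaking individual entries of W moves many covariances at once, which is the intuition behind the paper's impossibility result.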
  19. By: Kiefer, Nicholas M. (Cornell U and US Department of the Treasury)
    Abstract: Capital allocation decisions are made on the basis of an assessment of creditworthiness. Default is a rare event for most segments of a bank's portfolio and data information can be minimal. Inference about default rates is essential for efficient capital allocation, for risk management and for compliance with the requirements of the Basel II rules on capital standards for banks. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using prior distributions assessed from industry experts. A maximum entropy approach is used to represent expert information. The binomial model, most common in applications, is extended to allow correlated defaults yet remain consistent with Basel II. The application shows that probabilistic information can be elicited from experts and econometric methods can be useful even when data information is sparse.
    Date: 2008–04
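Kiefer represents expert information with maximum-entropy priors and extends the binomial model to correlated defaults; the simplest possible sketch of the underlying updating step is a conjugate Beta-binomial calculation. The prior parameters below are hypothetical stand-ins for an elicited expert prior:

```python
def beta_binomial_posterior(defaults, n, a, b):
    # Conjugate updating: a Beta(a, b) prior on the default probability and
    # a binomial likelihood give a Beta(a + defaults, b + n - defaults)
    # posterior; the posterior mean shrinks the raw rate toward the prior.
    a_post = a + defaults
    b_post = b + n - defaults
    return a_post, b_post, a_post / (a_post + b_post)

# low-default segment: 1 default among 500 obligors; expert prior centred
# on a 1% default rate with prior weight equivalent to 100 observations
a_post, b_post, pd_mean = beta_binomial_posterior(defaults=1, n=500, a=1.0, b=99.0)
print(round(pd_mean, 5))  # 0.00333
```

With only one observed default, the data alone would give 0.2%; the expert prior pulls the estimate to about 0.33%, illustrating why elicited information matters so much when default data are sparse.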

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.