nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒12‒11
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. A Bootstrap Cointegration Rank Test for Panels of VAR Models By Laurent A.F. Callot
  2. Nonparametric Regression with Nonparametrically Generated Covariates By Enno Mammen; Christoph Rothe; Melanie Schienle
  3. Estimating Nonlinear DSGE Models by the Simulated Method of Moments By Francisco J. Ruge-Murcia
  4. An Alternative Asymptotic Analysis of Residual-Based Statistics By Elena Andreou; Bas J.M. Werker
  5. Bootstrap inference for K-nearest neighbour matching estimators By de Luna, Xavier; Johansson, Per; Sjöstedt-de Luna, Sara
  6. Inference on Income Distributions By Russell Davidson
  7. Robust Estimation of Operational Risk By Nataliya Horbenko; Peter Ruckdeschel; Taehan Bae
  8. Forecasting in the presence of recent structural change By Eklund, Jana; Kapetanios, George; Price, Simon
  9. The Effects Of Monetary Policy Shocks In Peru: Semi-Structural Identification Using A Factor-Augmented Vector Autoregressive Model By Erick Lahura
  10. An Agnostic Look at Bayesian Statistics and Econometrics By Russell Davidson
  11. Confronting Model Misspecification in Macroeconomics By Daniel F. Waggoner; Tao Zha
  12. Should macroeconomic forecasters use daily financial data and how? By Elena Andreou; Eric Ghysels; Andros Kourtellos
  13. ExtrapoLATE-ing: External Validity and Overidentification in the LATE Framework By Joshua Angrist; Ivan Fernandez-Val
  14. Nonparametric Identification of Dynamic Games with Discrete and Continuous Choices By Jason R. Blevins
  15. Factor Analysis of a Large DSGE Model By Alexei Onatski; Francisco J. Ruge-Murcia

  1. By: Laurent A.F. Callot (School of Economics and Management, Aarhus University and CREATES)
    Abstract: This paper proposes a sequential procedure to determine the common cointegration rank of panels of cointegrated VARs. It shows how a panel of cointegrated VARs can be transformed into a set of independent individual models. The likelihood function of the transformed panel is the sum of the likelihood functions of the individual Cointegrated VAR (CVAR) models. A bootstrap-based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfactory small-sample properties of these tests are documented by means of Monte Carlo simulations. An empirical application illustrates the usefulness of these tests.
    Keywords: Rank test, Panel data, Cointegration, Bootstrap, Cross section dependence.
    JEL: C12 C32 C33
    Date: 2010–12–01
  2. By: Enno Mammen; Christoph Rothe; Melanie Schienle
    Abstract: We analyze the properties of non- and semiparametric estimation procedures involving nonparametric regression with generated covariates. Such estimators appear in numerous econometric applications, including nonparametric estimation of simultaneous equation models, sample selection models, treatment effect models, and censored regression models, but so far there seems to be no unified theory to establish their statistical properties. Our paper provides such results, making it possible to establish asymptotic properties such as rates of consistency or asymptotic normality for a wide range of semi- and nonparametric estimators. We also show how to account for the presence of nonparametrically generated regressors when computing standard errors.
    Keywords: Empirical Process, Propensity Score, Control Variable Methods, Semiparametric Estimation
    JEL: C14 C31
    Date: 2010–12
  3. By: Francisco J. Ruge-Murcia (Department of Economics, University of Montréal; The Rimini Centre for Economic Analysis (RCEA))
    Abstract: This paper studies the application of the simulated method of moments (SMM) for the estimation of nonlinear dynamic stochastic general equilibrium (DSGE) models. Monte Carlo analysis is employed to examine the small-sample properties of SMM in specifications with different curvature. Results show that SMM is computationally efficient and delivers accurate estimates, even when the simulated series are relatively short. However, asymptotic standard errors tend to overstate the actual variability of the estimates and, consequently, statistical inference is conservative. A simple strategy to incorporate priors in a method of moments context is proposed. An empirical application to the macroeconomic effects of rare events indicates that negatively skewed productivity shocks induce agents to accumulate additional capital and can endogenously generate asymmetric business cycles.
    Keywords: Monte Carlo analysis; priors; perturbation methods; rare events; skewness
    JEL: C15 C11 E2
    Date: 2010–01
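    As a toy illustration of the moment-matching logic behind SMM (not the paper's DSGE application), the sketch below estimates a location-scale model by matching the mean and variance implied by a fixed set of simulated shocks to the data moments. The grid search, sample sizes, and function name are all illustrative.

```python
import random
import statistics

def smm_estimate(data, n_sim=5000, seed=1):
    """Toy simulated method of moments: choose (mu, sigma) so that the
    moments implied by a simulated shock series match the data moments.
    The same shocks are reused for every candidate parameter value
    (common random numbers), as in SMM practice; a coarse grid search
    stands in for a real optimizer."""
    rng = random.Random(seed)
    shocks = [rng.gauss(0.0, 1.0) for _ in range(n_sim)]
    e_mean = statistics.fmean(shocks)
    e_var = statistics.pvariance(shocks)
    target = (statistics.fmean(data), statistics.pvariance(data))
    best, best_loss = None, float("inf")
    for mu in (i / 10 for i in range(-20, 21)):
        for sigma in (j / 10 for j in range(1, 31)):
            sim_mean = mu + sigma * e_mean      # moments of mu + sigma * e
            sim_var = sigma * sigma * e_var
            loss = (sim_mean - target[0]) ** 2 + (sim_var - target[1]) ** 2
            if loss < best_loss:
                best, best_loss = (mu, sigma), loss
    return best
```

    In a real SMM exercise the "simulated moments" come from solving and simulating the nonlinear model at each candidate parameter vector, and the moment distance is weighted by an optimal weighting matrix.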
  4. By: Elena Andreou; Bas J.M. Werker
    Abstract: This paper presents an alternative method to derive the limiting distribution of residual-based statistics. Our method does not impose an explicit assumption of (asymptotic) smoothness of the statistic of interest with respect to the model's parameters and, thus, is especially useful in cases where such smoothness is difficult to establish. Instead, we use a locally uniform convergence in distribution condition, which is automatically satisfied by residual-based specification test statistics. To illustrate, we derive the limiting distribution of a new functional form specification test for discrete choice models, as well as a runs-based test for conditional symmetry in dynamic volatility models.
    Keywords: Le Cam's third lemma, Local Asymptotic Normality (LAN)
    Date: 2010–11
  5. By: de Luna, Xavier (Umeå University); Johansson, Per (IFAU - Institute for Labour Market Policy Evaluation); Sjöstedt-de Luna, Sara (Umeå University)
    Abstract: Abadie and Imbens (2008, Econometrica) showed that classical bootstrap schemes fail to provide correct inference for K-nearest neighbour (KNN) matching estimators of average causal effects. This is an interesting result, showing that the bootstrap should not be applied without theoretical justification. In this paper, we present two resampling schemes which we show provide valid inference for KNN matching estimators. We resample "estimated individual causal effects" (EICE), i.e. the differences in outcome between matched pairs, instead of the original data. Moreover, by taking differences in EICEs ordered with respect to the matching covariate, we obtain a bootstrap scheme that is valid also with heterogeneous causal effects, under mild assumptions on the heterogeneity. We provide proofs of the validity of the proposed resampling-based inferences. A simulation study illustrates the finite sample properties.
    Keywords: Block bootstrap; subsampling; average causal/treatment effect
    JEL: C14 C21
    Date: 2010–11–19
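    A minimal sketch of the EICE resampling idea for the homogeneous-effect case (the ordered-differences scheme for heterogeneous effects is not shown): resample the within-pair outcome differences, not the raw data. The function name and tuning values are illustrative, not the authors' implementation.

```python
import random
import statistics

def eice_bootstrap_ci(treated_y, matched_control_y, n_boot=2000,
                      alpha=0.05, seed=42):
    """Bootstrap CI for the average causal effect by resampling the
    estimated individual causal effects (EICEs) -- the outcome
    differences within matched pairs -- instead of the original data."""
    rng = random.Random(seed)
    eice = [t - c for t, c in zip(treated_y, matched_control_y)]
    n = len(eice)
    # Percentile bootstrap over resampled EICE means.
    boot_means = sorted(
        statistics.fmean(rng.choices(eice, k=n)) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(eice), (lo, hi)
```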
  6. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, McGill University - McGill)
    Abstract: This paper attempts to provide a synthetic view of varied techniques available for performing inference on income distributions. Two main approaches can be distinguished: one in which the object of interest is some index of income inequality or poverty, the other based on notions of stochastic dominance. From the statistical point of view, many techniques are common to both approaches, although of course some are specific to one of them. I assume throughout that inference about population quantities is to be based on a sample or samples, and, formally, all randomness is due to that of the sampling process. Inference can be either asymptotic or bootstrap-based. In principle, the bootstrap is an ideal tool, since in this paper I ignore issues of complex sampling schemes, and suppose that observations are IID. However both bootstrap inference, and, to a considerably greater extent, asymptotic inference can fall foul of difficulties associated with the heavy right-hand tails observed with many income distributions. I mention some recent attempts to circumvent these difficulties.
    Keywords: Income distribution; delta method; asymptotic inference; bootstrap; influence function; empirical process
    Date: 2010–11–30
  7. By: Nataliya Horbenko; Peter Ruckdeschel; Taehan Bae
    Abstract: According to the Loss Distribution Approach, the operational risk of a bank is determined as the 99.9% quantile of the respective loss distribution, covering unexpected severe events. The 99.9% quantile is a tail event. Supported by the Pickands-Balkema-de Haan Theorem, tail events exceeding some high threshold are usually modeled by a Generalized Pareto Distribution (GPD). However, because of the heavy-tailedness of this distribution, estimating its tail quantiles is not a trivial task, and it becomes even more difficult when there are outliers in the data or the data are pooled from several sources. In such situations, where the origin and representativeness of the available data are not clear, robust methods provide a remedy, yielding reliable estimates where classical methods already fail. We illustrate this by applying such robust methods for parameter estimation of a GPD - including some recently developed methods - to data from Algorithmics Inc. To better understand these results, we provide some useful diagnostic plots adjusted for this context: an influence plot, an outlyingness plot, and a QQ plot with robust confidence bands.
    Date: 2010–12
  8. By: Eklund, Jana (Bank of England); Kapetanios, George (Queen Mary College, London); Price, Simon (Bank of England)
    Abstract: We examine how to forecast after a recent break. We consider monitoring for change and then combining forecasts from models that do and do not use data before the change; and robust methods, namely rolling regressions, forecast averaging over different windows and exponentially weighted moving average (EWMA) forecasting. We derive analytical results for the performance of the robust methods relative to a full-sample recursive benchmark. For a location model subject to stochastic breaks the relative mean square forecast error ranking is EWMA < rolling regression < forecast averaging. No clear ranking emerges under deterministic breaks. In Monte Carlo experiments forecast averaging improves performance in many cases with little penalty where there are small or infrequent changes. Similar results emerge when we examine a large number of UK and US macroeconomic series.
    Keywords: monitoring; recent structural change; forecast combination; robust forecasts
    JEL: C10 C59
    Date: 2010–12–02
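    The contrast between EWMA forecasting and the full-sample recursive benchmark can be sketched for a location model with a single level shift; a stylized example, not the paper's experimental design.

```python
def ewma_forecast(y, lam=0.9):
    """One-step-ahead forecast from an exponentially weighted moving
    average: recent observations get geometrically more weight, so a
    level shift is absorbed quickly without dating the break."""
    f = y[0]
    for obs in y[1:]:
        f = lam * f + (1.0 - lam) * obs
    return f

def recursive_mean_forecast(y):
    """Full-sample recursive benchmark: the mean of all observations,
    which averages over pre- and post-break regimes."""
    return sum(y) / len(y)
```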
  9. By: Erick Lahura (Central Bank of Peru and LSE)
    Abstract: The main goal of this paper is to analyze the effects of monetary policy shocks in Peru, taking into account two important issues that have been addressed separately in the VAR literature. The first is the difficulty of identifying the most appropriate indicator of monetary policy stance, which is usually assumed rather than determined from an estimated model. The second is the fact that monetary policy decisions are based on the analysis of a wide range of economic and financial data, which is at odds with the small number of variables specified in most VAR models. To overcome the first issue, Bernanke and Mihov (1998) proposed a semi-structural VAR model from which the indicator of monetary policy stance can be derived rather than assumed. The data problem, in turn, has been addressed by Bernanke, Boivin and Eliasz (2005) using a Factor-Augmented Vector Autoregressive (FAVAR) model. In order to address both issues simultaneously, we propose an extension of the FAVAR model that incorporates a semi-structural identification approach a la Bernanke and Mihov, resulting in a VAR model that we call the SS-FAVAR. Using data for Peru, the results show that the SS-FAVAR's impulse-response functions (IRFs) provide a more coherent picture of the effects of monetary policy shocks than the IRFs of alternative VAR models. Furthermore, innovations to nonborrowed reserves can be identified as monetary policy shocks for the period 1995-2003.
    Keywords: VAR, FAVAR, Monetary Policy, Semi-Structural Identification.
    JEL: C32 C43 E50 E52 E58
    Date: 2010–08
  10. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, McGill University - McGill)
    Abstract: Bayesians and non-Bayesians, often called frequentists, seem to be perpetually at loggerheads on fundamental questions of statistical inference. This paper takes as agnostic a stand as is possible for a practising frequentist, and tries to elicit a Bayesian answer to questions of interest to frequentists. The argument is based on my presentation at a debate organised by the Rimini Centre for Economic Analysis, between me as the frequentist 'advocate' and Christian Robert on the Bayesian side.
    Keywords: Bayesian methods; bootstrap; Bahadur-Savage result
    Date: 2010–11–30
  11. By: Daniel F. Waggoner; Tao Zha
    Abstract: We confront model misspecification in macroeconomics by proposing an analytic framework for merging multiple models. This framework allows us to 1) address uncertainty about models and parameters simultaneously and 2) trace out the historical periods in which one model dominates other models. We apply the framework to a richly parameterized DSGE model and a corresponding BVAR model. The merged model, fitting the data better than both individual models, substantially alters economic inferences about the DSGE parameters and about the implied impulse responses.
    Date: 2010–11
  12. By: Elena Andreou; Eric Ghysels; Andros Kourtellos
    Abstract: We introduce easy-to-implement, regression-based methods for predicting quarterly real economic activity that use daily financial data and rely on forecast combinations of MIDAS regressions. Our analysis is designed to elucidate the value of daily information and provide real-time forecast updates for the current (nowcasting) and future quarters. Our findings show that while on average the predictive ability of all models worsens substantially following the financial crisis, the models we propose suffer relatively smaller losses than the traditional ones. Moreover, these predictive gains are primarily driven by the classes of government securities, equities, and especially corporate risk.
    Keywords: MIDAS, macro forecasting, leads, daily financial information, daily factors.
    Date: 2010–11
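    The aggregation step at the heart of a MIDAS regression can be sketched with exponential Almon lag weights, the standard two-parameter device for collapsing many daily lags into one quarterly regressor; function names and parameter values below are illustrative.

```python
import math

def exp_almon_weights(n_days, theta1, theta2):
    """Exponential Almon lag weights: a flexible, two-parameter weight
    profile over n_days daily lags, normalized to sum to one."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_days)]
    total = sum(raw)
    return [r / total for r in raw]

def midas_regressor(daily_block, theta1, theta2):
    """Collapse one quarter's daily observations (most recent first)
    into a single weighted regressor for the quarterly regression."""
    w = exp_almon_weights(len(daily_block), theta1, theta2)
    return sum(wi * xi for wi, xi in zip(w, daily_block))
```

    For a given (theta1, theta2) the quarterly equation is ordinary least squares on this aggregate; in practice the thetas are estimated by nonlinear least squares and, as in the paper, forecasts from many such regressions are combined.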
  13. By: Joshua Angrist; Ivan Fernandez-Val
    Abstract: This paper develops a covariate-based approach to the external validity of instrumental variables (IV) estimates. Assuming that differences in observed complier characteristics are what make IV estimates differ from one another and from parameters like the effect of treatment on the treated, we show how to construct estimates for new subpopulations from a given set of covariate-specific LATEs. We also develop a reweighting procedure that uses the traditional overidentification test statistic to define a population for which a given pair of IV estimates has external validity. These ideas are illustrated through a comparison of twins and sex-composition IV estimates of the effects of childbearing on labor supply.
    JEL: C01 C13 C31 C53
    Date: 2010–12
  14. By: Jason R. Blevins (Department of Economics, Ohio State University)
    Abstract: This paper shows that the payoff functions in a class of dynamic games of incomplete information are nonparametrically identified under standard assumptions currently used in applied work. Models of this kind are prevalent in empirical industrial organization where, for example, firms in oligopolistic industries make discrete entry and exit decisions followed by continuous investment or pricing decisions. We also provide results for single-agent models, a leading special case which is commonly employed in applied microeconomics more generally.
    Keywords: dynamic games, dynamic discrete choice, nonparametric identification
    JEL: C5 C14 C73
    Date: 2010–11
  15. By: Alexei Onatski (Faculty of Economics, University of Cambridge); Francisco J. Ruge-Murcia (Department of Economics, University of Montréal; The Rimini Centre for Economic Analysis (RCEA))
    Abstract: We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
    Keywords: Multisector economies, principal components, forecasting, pervasiveness, FAVAR
    JEL: C3 C5 E3
    Date: 2010–01

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.