nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒06‒20
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Inference in linear models with structural changes and mixed identification strength By Bertille Antoine; Otilia
  2. Asymptotic distributions for quasi-efficient estimators in echelon VARMA models By Jean-Marie Dufour; Tarek Jouini
  3. Exact confidence sets and goodness-of-fit methods for stable distributions By Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
  4. STR: A Seasonal-Trend Decomposition Procedure Based on Regression By Alexander Dokumentov; Rob J. Hyndman
  5. Selection of an Estimation Window in the Presence of Data Revisions and Recent Structural Breaks By Jari Hännikäinen
  6. Short term inflation forecasting: the M.E.T.A. approach By Giacomo Sbrana; Andrea Silvestrini; Fabrizio Venditti
  7. Probabilistic time series forecasting with boosted additive models: an application to smart meter data By Souhaib Ben Taieb; Raphael Huser; Rob J. Hyndman; Marc G. Genton
  8. Volatility Modeling with a Generalized t-distribution By Andrew Harvey; Rutger-Jan Lange
  9. A Particular Form of Non-Constant Effect in Two-Stage Quantile Regression By Tae-Hwan Kim; Christophe Muller
  10. Seasonal copula models for the analysis of glacier discharge at King George Island, Antarctica By M. Gómez; M. C. Ausin; M. C. Domínguez
  11. Multi-step forecasting in the presence of breaks By Jari Hännikäinen
  12. Bootstrap inference in regressions with estimated factors and serial correlation By Antoine Djogbenou; Sílvia Gonçalves; Benoit Perron
  13. Identifying Multiple Marginal Effects with a Single Binary Instrument or by Regression Discontinuity By Carolina Caetano; Juan Carlos Escanciano
  14. Optimizing Policymakers' Loss Functions in Crisis Prediction: Before, Within or After? By P. Sarlin; Gregor von Schweinitz
  15. The Empirical Economist's Toolkit: From Models to Methods By Matthew T. Panhans; John D. Singleton

  1. By: Bertille Antoine (Simon Fraser University); Otilia (Tilburg University)
    Abstract: This paper considers estimation and inference in a linear model with endogenous regressors and parameter instability. We allow for structural changes in the parameters of the structural and reduced forms, and for mixed identification strength: the identification may not be strong over the whole sample, and may even change over time. In addition, we allow the second moments of the data generating process to change over time (e.g. changes in the variance of the structural errors, and/or in the variance of the instruments). We propose two tests for parameter changes in the structural form, one for the case where the reduced form is stable and one for the case where it exhibits structural change, and we derive their limiting distributions. We also propose new GMM estimators for the unstable structural form and derive their limiting distributions. We show that, if the reduced form is stable, these estimators are more efficient than the standard subsample GMM estimators, even in the presence of weaker identification patterns.
    Keywords: GMM; Semi-strong identification; Break-point
    JEL: C13 C22 C26 C36 C51
    Date: 2015–06–06
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp15-05&r=ecm
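    A rough, purely illustrative Python sketch of the standard subsample benchmark the abstract refers to (not the authors' proposed GMM estimators or tests): 2SLS is estimated separately before and after a candidate break date and a Wald-type statistic compares the two structural coefficients. All data, parameter values and the break date below are simulated assumptions.

      # Hypothetical split-sample 2SLS around a candidate break date; this is a
      # toy benchmark, not the estimators or tests proposed in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      T, Tb = 400, 200                         # sample size and candidate break date
      z = rng.normal(size=T)                   # instrument
      u = rng.normal(size=T)                   # structural error
      x = 0.8 * z + 0.5 * u + rng.normal(size=T)       # endogenous regressor
      beta = np.where(np.arange(T) < Tb, 1.0, 1.5)     # break in the structural slope
      y = beta * x + u

      def tsls(y, x, z):
          """2SLS with one regressor and one instrument (no constant)."""
          b = (z @ y) / (z @ x)
          e = y - b * x
          var = (e @ e) / len(y) * (z @ z) / (z @ x) ** 2   # textbook IV variance
          return b, var

      b1, v1 = tsls(y[:Tb], x[:Tb], z[:Tb])
      b2, v2 = tsls(y[Tb:], x[Tb:], z[Tb:])
      wald = (b1 - b2) ** 2 / (v1 + v2)        # Wald-type comparison of the subsamples
      print(b1, b2, wald)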
  2. By: Jean-Marie Dufour; Tarek Jouini
    Date: 2015–06–12
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2015s-26&r=ecm
  3. By: Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
    Abstract: Usual inference methods for stable distributions are typically based on limit distributions. But asymptotic approximations can easily be unreliable in such cases, for standard regularity conditions may not apply or may hold only weakly. This paper proposes finite-sample tests and confidence sets for tail thickness and asymmetry parameters (α and β) of stable distributions. The confidence sets are built by inverting exact goodness-of-fit tests for hypotheses which assign specific values to these parameters. We propose extensions of the Kolmogorov-Smirnov, Shapiro-Wilk and Filliben criteria, as well as the quantile-based statistics proposed by McCulloch (1986) in order to better capture tail behavior. The suggested criteria compare empirical goodness-of-fit or quantile-based measures with their hypothesized values. Since the distributions involved are quite complex and non-standard, the relevant hypothetical measures are approximated by simulation, and p-values are obtained using Monte Carlo (MC) test techniques. The properties of the proposed procedures are investigated by simulation. In contrast with conventional wisdom, we find reliable results with sample sizes as small as 25. The proposed methodology is applied to daily electricity price data in the U.S. over the period 2001-2006. The results show clearly that heavy kurtosis and asymmetry are prevalent in these series.
    Keywords: stable distribution; skewness; asymmetry; exact test; Monte Carlo test; specification test; goodness-of-fit; tail parameter; electricity price
    Date: 2015–06–12
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2015s-25&r=ecm
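    A minimal Python sketch of the Monte Carlo p-value step described in the abstract, using the Kolmogorov-Smirnov criterion and scipy's levy_stable as a stand-in. The hypothesized parameter values and the sample are made up, location and scale are treated as known, and the confidence sets obtained by inverting such tests over a parameter grid are not reproduced.

      # Monte Carlo goodness-of-fit test for hypothesized stable parameters
      # (alpha0, beta0): the p-value is the rank of the observed KS statistic
      # within its simulated null distribution.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      alpha0, beta0 = 1.7, 0.0                 # hypothesized tail and asymmetry values
      x = stats.levy_stable.rvs(alpha0, beta0, size=25, random_state=rng)  # toy data

      def ks_stat(sample):
          return stats.kstest(sample, stats.levy_stable.cdf, args=(alpha0, beta0)).statistic

      N = 99                                   # Monte Carlo replications
      s_obs = ks_stat(x)
      s_sim = np.array([
          ks_stat(stats.levy_stable.rvs(alpha0, beta0, size=len(x), random_state=rng))
          for _ in range(N)
      ])
      p_value = (1 + np.sum(s_sim >= s_obs)) / (N + 1)   # Monte Carlo test p-value
      print(s_obs, p_value)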
  4. By: Alexander Dokumentov; Rob J. Hyndman
    Abstract: We propose new generic methods for decomposing seasonal data: STR (a Seasonal-Trend decomposition procedure based on Regression) and Robust STR. In some ways, STR is similar to Ridge Regression and Robust STR can be related to LASSO. Our new methods are much more general than any alternative time series decomposition methods. They allow for multiple seasonal and cyclic components, and multiple linear regressors with constant, flexible, seasonal and cyclic influence. Seasonal patterns (for both seasonal components and seasonal regressors) can be fractional and flexible over time; moreover they can be either strictly periodic or have a more complex topology. We also provide confidence intervals for the estimated components, and discuss how STR can be used for forecasting.
    Keywords: time series decomposition, seasonal data, Tikhonov regularisation, ridge regression, LASSO, STL, TBATS, X-12-ARIMA, BSM
    JEL: C10 C14 C22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-13&r=ecm
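    The toy Python sketch below is not STR itself, only the ridge-regression idea it builds on: the series is regressed on a smooth trend basis plus seasonal dummies with a Tikhonov (ridge) penalty, and the components are read off the fitted parts. The data, basis and penalty are illustrative assumptions.

      # Toy trend + seasonal decomposition via ridge regression (illustrative only;
      # STR uses a richer design with smoothness penalties on the components).
      import numpy as np

      rng = np.random.default_rng(2)
      n, m = 120, 12                                        # ten years of monthly data
      t = np.arange(n)
      y = 0.05 * t + 2 * np.sin(2 * np.pi * t / m) + rng.normal(scale=0.5, size=n)

      trend_basis = np.vander(t / n, 4, increasing=True)    # 1, t, t^2, t^3
      seasonal = np.eye(m)[t % m]                           # monthly dummy variables
      X = np.hstack([trend_basis, seasonal])

      lam = 1.0                                             # ridge (Tikhonov) penalty
      coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

      trend = trend_basis @ coef[:4]
      season = seasonal @ coef[4:]
      remainder = y - trend - season
      print(trend[:3], season[:3], remainder[:3])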
  5. By: Jari Hännikäinen (School of Management, University of Tampere)
    Abstract: In this paper, we analyze the forecasting performance of a set of widely used window selection methods in the presence of data revisions and recent structural breaks. Our Monte Carlo and empirical results show that the expanding window estimator often yields the most accurate forecasts after a recent break. It performs well regardless of whether the revisions are news or noise, or whether we forecast first-release or final values. We find that the differences in the forecasting accuracy are large in practice, especially when we forecast GDP deflator growth after the break of the early 1980s.
    Keywords: Recent structural break, choice of estimation window, forecasting, real-time data
    JEL: C22 C53 C82
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:tam:wpaper:1392&r=ecm
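    A toy Python comparison of expanding-window and short rolling-window forecasts after a recent break, in the spirit of the abstract; data revisions are ignored, and the break date, window length and AR(1) data-generating process are arbitrary assumptions.

      # Expanding vs short rolling window one-step AR(1) forecasts after a
      # recent mean break (illustration only; revisions are not modelled).
      import numpy as np

      rng = np.random.default_rng(3)
      T, Tb = 200, 180                         # break close to the forecast origin
      mu = np.where(np.arange(T) < Tb, 0.0, 1.0)
      y = np.empty(T)
      y[0] = 0.0
      for t in range(1, T):
          y[t] = mu[t] + 0.5 * (y[t - 1] - mu[t - 1]) + rng.normal(scale=0.5)

      def ar1_forecast(window):
          """One-step forecast from an AR(1) with intercept fitted by OLS."""
          X = np.column_stack([np.ones(len(window) - 1), window[:-1]])
          b = np.linalg.lstsq(X, window[1:], rcond=None)[0]
          return b[0] + b[1] * window[-1]

      print(ar1_forecast(y), ar1_forecast(y[-20:]))   # expanding vs post-break window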
  6. By: Giacomo Sbrana (NEOMA Business School); Andrea Silvestrini (Bank of Italy); Fabrizio Venditti (Bank of Italy)
    Abstract: Forecasting inflation is an important and challenging task. In this paper we assume that the core inflation components evolve as a multivariate local level process. This model, which is theoretically attractive for modelling inflation dynamics, has been used only to a limited extent to date owing to computational complications with the conventional multivariate maximum likelihood estimator, especially when the system is large. We propose the use of a method called “Moments Estimation Through Aggregation” (M.E.T.A.), which reduces computational costs significantly and delivers prompt and accurate parameter estimates, as we show in a Monte Carlo exercise. In an application to euro-area inflation we find that our forecasts compare well with those generated by alternative univariate constant and time-varying parameter models as well as with those of professional forecasters and vector autoregressions.
    Keywords: inflation, forecasting, aggregation, state space models
    JEL: C32 C53 E31 E37
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1016_15&r=ecm
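    The sketch below is not the M.E.T.A. estimator itself, only the univariate moment logic such methods build on: in a local level model the first difference is an MA(1), and its first two autocovariances pin down the two variances. The simulated series and parameter values are assumptions.

      # Moment-based estimation of a univariate local level model from the
      # autocovariances of the differenced series (a building block only).
      import numpy as np

      rng = np.random.default_rng(4)
      T = 1000
      level = np.cumsum(0.3 * rng.normal(size=T))      # random-walk level, sigma_eta = 0.3
      y = level + rng.normal(size=T)                   # observation noise, sigma_eps = 1

      d = np.diff(y)                    # MA(1): d_t = eta_t + eps_t - eps_{t-1}
      g0 = np.mean(d * d)               # gamma_0 = sigma_eta^2 + 2 * sigma_eps^2
      g1 = np.mean(d[1:] * d[:-1])      # gamma_1 = -sigma_eps^2
      sigma_eps2 = max(-g1, 0.0)
      sigma_eta2 = max(g0 + 2 * g1, 0.0)
      print(sigma_eps2, sigma_eta2)     # compare with 1.0 and 0.09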
  7. By: Souhaib Ben Taieb; Raphael Huser; Rob J. Hyndman; Marc G. Genton
    Abstract: A large body of the forecasting literature so far has been focused on forecasting the conditional mean of future observations. However, there is an increasing need for generating the entire conditional distribution of future observations in order to effectively quantify the uncertainty in time series data. We present two different methods for probabilistic time series forecasting that allow the inclusion of a possibly large set of exogenous variables. One method is based on forecasting both the conditional mean and variance of the future distribution using a traditional regression approach. The other directly computes multiple quantiles of the future distribution using quantile regression. We propose an implementation for the two methods based on boosted additive models, which enjoy many useful properties including accuracy, flexibility, interpretability and automatic variable selection. We conduct extensive experiments using electricity smart meter data, on both aggregated and disaggregated scales, to compare the two forecasting methods for the challenging problem of forecasting the distribution of future electricity consumption. The empirical results demonstrate that the mean and variance forecasting provides better forecasts for aggregated demand, while the flexibility of the quantile regression approach is more suitable for disaggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.
    Keywords: Additive models, boosting, density forecasting, energy forecasting, probabilistic forecasting
    JEL: Q47 C14 C22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-12&r=ecm
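    A minimal Python sketch of the quantile-regression branch of the approach, with scikit-learn's gradient boosting used as a stand-in for the boosted additive models in the paper; the covariates, the skewed demand variable and the chosen quantiles are illustrative assumptions.

      # Boosted quantile regression: one model per quantile, fitted to the pinball
      # loss, gives a piecewise description of the predictive distribution.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(5)
      n = 500
      X = rng.uniform(0, 1, size=(n, 2))               # e.g. temperature, time of day
      y = 2 * X[:, 0] + rng.gamma(2.0, 0.5 + X[:, 1])  # right-skewed "demand"

      forecasts = {}
      for q in (0.1, 0.5, 0.9):
          model = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
          model.fit(X, y)
          forecasts[q] = model.predict(X[:5])
      print(forecasts)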
  8. By: Andrew Harvey; Rutger-Jan Lange
    Abstract: Beta-t-EGARCH models in which the dynamics of the logarithm of scale are driven by the conditional score are known to exhibit attractive theoretical properties for the t-distribution and general error distribution (GED). The generalized-t includes both as special cases. We derive the information matrix for the generalized-t and show that, when parameterized with the inverse of the tail index, it remains positive definite as the tail index goes to infinity and the distribution becomes a GED. Hence it is possible to construct Lagrange multiplier tests of the null hypothesis of light tails against the alternative of fat tails. Analytic expressions may be obtained for the unconditional moments in the EGARCH model, as well as for the information matrix of the dynamic parameters. The distribution may be extended by allowing for skewness and asymmetry in the shape parameters and the asymptotic theory for the associated EGARCH models may be correspondingly extended. For positive variables, the GB2 distribution may be parameterized so that it goes to the generalised gamma in the limit as the tail index goes to infinity. Again dynamic volatility may be introduced and properties of the model obtained. Overall the approach offers a unified, flexible, robust and practical treatment of dynamic scale.
    Date: 2015–06–11
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1517&r=ecm
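    A minimal simulation of the first-order Beta-t-EGARCH recursion with Student-t errors, the special case that the paper generalizes; the parameter values are arbitrary, and the generalized-t, the skewness extensions and the information-matrix results are not reproduced.

      # Score-driven log-scale dynamics: lambda_{t+1} = omega + phi*lambda_t + kappa*u_t,
      # where u_t is the conditional score of the t log-density with respect to lambda_t.
      import numpy as np

      rng = np.random.default_rng(6)
      T = 1000
      omega, phi, kappa, nu = -0.1, 0.95, 0.05, 6.0

      lam = np.empty(T)                        # lambda_t = log scale
      y = np.empty(T)
      lam[0] = omega / (1 - phi)
      for t in range(T):
          y[t] = np.exp(lam[t]) * rng.standard_t(nu)
          u = (nu + 1) * y[t] ** 2 / (nu * np.exp(2 * lam[t]) + y[t] ** 2) - 1
          if t + 1 < T:
              lam[t + 1] = omega + phi * lam[t] + kappa * u
      print(lam[:5], y[:5])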
  9. By: Tae-Hwan Kim (Department of Economics, Yonsei University); Christophe Muller (Aix-Marseille University (Aix-Marseille School of Economics), CNRS & EHESS)
    Abstract: We study the fitted-value approach to quantile regression in the presence of endogeneity under a weakened form of the IV condition. In this context, we show that a particular form of non-constant effect model is compatible with the fitted-value approach, a situation often believed to be ruled out. However, only the constant effect coefficients of the model can be consistently estimated. Finally, we discuss practical examples where this approach can be useful to avoid misspecification of quantile models.
    Keywords: Two-Stage Estimation, quantile regression, fitted-value approach
    JEL: C13 C21 C31
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:1522&r=ecm
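    A minimal Python sketch of the fitted-value approach discussed in the abstract: the endogenous regressor is first projected on the instruments by OLS, and the outcome is then quantile-regressed on the fitted values (statsmodels' QuantReg). The simulated design, with a constant endogenous effect, is an assumption made only for illustration.

      # Two-stage quantile regression, fitted-value approach.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 1000
      z = rng.normal(size=(n, 2))                      # instruments
      v = rng.normal(size=n)
      x = z @ np.array([1.0, -0.5]) + v                # endogenous regressor
      y = 1.0 + 0.8 * x + v + rng.normal(size=n)       # structural equation

      Z = sm.add_constant(z)
      xhat = Z @ sm.OLS(x, Z).fit().params             # stage 1: fitted values

      X2 = sm.add_constant(xhat)                       # stage 2: quantile regressions
      for q in (0.25, 0.5, 0.75):
          print(q, sm.QuantReg(y, X2).fit(q=q).params)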
  10. By: M. Gómez; M. C. Ausin; M. C. Domínguez
    Abstract: Modelling glacier discharge is an important issue in hydrology and climate research. Glaciers represent a fundamental water resource when melting of snow contributes to runoff. Glaciers are also studied as natural global warming sensors. The GLACKMA association has implemented one of its Pilot Experimental Watersheds on King George Island in Antarctica, which records values of the liquid discharge from the Collins glacier. In this paper, we propose the use of time-varying copula models for analyzing the relationship between air temperature and glacier discharge, which is clearly non-constant and non-linear through time. A seasonal copula model is defined where both the marginal and copula parameters vary periodically along time following a seasonal dynamic. Full Bayesian inference is performed such that the marginal and copula parameters are estimated in a single step, in contrast with the usual two-step approach. Bayesian prediction and model selection are also carried out for the proposed model, such that Bayesian credible intervals can be obtained for the conditional glacier discharge given a value of the temperature at any time point. The proposed methodology is illustrated using the GLACKMA real data, which include, in addition, a hydrological year of missing discharge data that could not be measured accurately due to harsh meteorological conditions.
    Keywords: Bayesian inference, Copulas, Glacier discharge, Seasonality, Melt modelling, MCMC
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1513&r=ecm
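    A toy Python simulation of the kind of seasonally varying dependence the paper models: a Gaussian copula whose correlation parameter follows a yearly sinusoid. The copula family, the sinusoidal specification and the parameter values are illustrative assumptions; the paper's Bayesian estimation is not reproduced.

      # Seasonal Gaussian copula: the dependence between two margins (e.g. temperature
      # and discharge) varies over the year through a time-varying correlation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      days = np.arange(365)
      rho = np.tanh(0.3 + 0.8 * np.sin(2 * np.pi * days / 365))   # seasonal correlation

      u = np.empty((365, 2))                   # copula (uniform) observations
      for t, r in enumerate(rho):
          z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]])
          u[t] = stats.norm.cdf(z)
      print(rho[:3], u[:3])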
  11. By: Jari Hännikäinen (School of Management, University of Tampere)
    Abstract: This paper analyzes the relative performance of multi-step forecasting methods in the presence of breaks and data revisions. Our Monte Carlo simulations indicate that the type and the timing of the break affect the relative accuracy of the methods. The iterated method typically performs the best in unstable environments, especially if the parameters are subject to small breaks. This result holds regardless of whether data revisions add news or reduce noise. Empirical analysis of real-time U.S. output and inflation series shows that the alternative multi-step methods only episodically improve upon the iterated method.
    Keywords: Structural breaks, multi-step forecasting, intercept correction, real-time data
    JEL: C22 C53 C82
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:tam:wpaper:1494&r=ecm
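    A toy Python comparison of the iterated and direct h-step forecasting methods compared in the paper; the AR(1) data-generating process and horizon are assumptions, and breaks and data revisions are not modelled here.

      # Iterated vs direct h-step forecasts from an autoregression.
      import numpy as np

      rng = np.random.default_rng(9)
      T, h = 300, 4
      y = np.empty(T)
      y[0] = 0.0
      for t in range(1, T):
          y[t] = 0.6 * y[t - 1] + rng.normal()

      def ols(X, z):
          return np.linalg.lstsq(X, z, rcond=None)[0]

      # Iterated: fit a one-step AR(1), then iterate the fitted recursion h times
      c, a = ols(np.column_stack([np.ones(T - 1), y[:-1]]), y[1:])
      f_iter = y[-1]
      for _ in range(h):
          f_iter = c + a * f_iter

      # Direct: regress y_{t+h} on y_t and forecast in a single step
      ch, ah = ols(np.column_stack([np.ones(T - h), y[:-h]]), y[h:])
      f_direct = ch + ah * y[-1]
      print(f_iter, f_direct)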
  12. By: Antoine Djogbenou; Sílvia Gonçalves; Benoit Perron
    Abstract: This paper considers bootstrap inference in a factor-augmented regression context where the errors could potentially be serially correlated. This generalizes results in Gonçalves and Perron (2013) and makes the bootstrap applicable to forecasting contexts where the forecast horizon is greater than one. We propose and justify two residual-based approaches, a block wild bootstrap (BWB) and a dependent wild bootstrap (DWB). Our simulations document improvement in coverage rates of confidence intervals for the coefficients when using BWB or DWB relative to both asymptotic theory and the wild bootstrap when serial correlation is present in the regression errors.
    Keywords: Factor model, bootstrap, serial correlation, forecast
    Date: 2015–05–29
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2015s-20&r=ecm
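    A minimal Python sketch of the block wild bootstrap step for regression residuals: within each block all residuals are multiplied by one external draw, so serial correlation inside the block is carried into the bootstrap errors. The regression, block length and number of replications are assumptions, and the factor-estimation step of the paper is omitted.

      # Block wild bootstrap (BWB) percentile interval for a regression slope.
      import numpy as np

      rng = np.random.default_rng(10)
      T, block = 200, 10
      X = np.column_stack([np.ones(T), rng.normal(size=T)])
      y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)

      beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
      resid = y - X @ beta_hat

      def bwb_draw():
          eta = np.repeat(rng.normal(size=T // block), block)   # one draw per block
          y_star = X @ beta_hat + resid * eta
          return np.linalg.lstsq(X, y_star, rcond=None)[0]

      boot = np.array([bwb_draw() for _ in range(999)])
      print(beta_hat[1], np.percentile(boot[:, 1], [2.5, 97.5]))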
  13. By: Carolina Caetano (University of Rochester); Juan Carlos Escanciano (Indiana University)
    Abstract: This paper proposes a new strategy for the identification of all the marginal effects of an endogenous multi-valued variable (which can be continuous, or a vector) in a regression model with one binary instrumental variable. The unobservables must be separable from the endogenous variable of interest in the model. Identification is achieved by exploiting heterogeneity of the “first stage” in covariates. The covariates themselves may be endogenous, and their endogeneity does not need to be modeled. With some modifications, the identification strategy is extended to the Regression Discontinuity Design (RDD) with multi-valued endogenous variables, thereby showing that adding covariates in RDD may improve identification. This paper also provides parametric, semiparametric and nonparametric estimators based on the identification strategy, discusses their asymptotic properties, and shows that the estimators have satisfactory performance in moderate sample sizes. All the proposed estimators can be implemented as Two-Stage Least Squares (TSLS). Finally, we apply our methods to the problem of estimating the effect of air quality on house prices, based on Chay and Greenstone (2005).
    Keywords: Conditional Instrumental Variables; Endogeneity; Binary Instrument; Regression Discontinuity Design; Varying Coefficients; Nonparametric
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2015009&r=ecm
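    A toy numerical sketch of the closing remark that the estimators can be implemented as TSLS: a single binary instrument, interacted with a covariate, supplies enough instruments for two endogenous terms. The data-generating process and functional form below are assumptions and do not reproduce the paper's identification argument.

      # TSLS with instruments built by interacting a binary instrument z with a covariate w.
      import numpy as np

      rng = np.random.default_rng(11)
      n = 2000
      w = rng.uniform(size=n)                          # covariate shifting first-stage strength
      z = rng.integers(0, 2, size=n).astype(float)     # single binary instrument
      u = rng.normal(size=n)
      x = 0.5 + z * (0.5 + w) + 0.5 * u + rng.normal(size=n)   # heterogeneous first stage
      y = 1.0 + 1.0 * x - 0.2 * x**2 + u + rng.normal(size=n)

      X = np.column_stack([np.ones(n), w, x, x**2])    # regressors (two endogenous terms)
      Z = np.column_stack([np.ones(n), w, z, z * w])   # instruments from the interaction
      Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)     # first-stage projections
      beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)   # TSLS coefficients
      print(beta)                                      # roughly (1, 0, 1, -0.2)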
  14. By: P. Sarlin; Gregor von Schweinitz
    Abstract: Early-warning models most commonly optimize signaling thresholds on crisis probabilities. The ex-post threshold optimization is based upon a loss function accounting for preferences between forecast errors, but comes with two crucial drawbacks: unstable thresholds in recursive estimations and an in-sample overfit at the expense of out-of-sample performance. We propose two alternatives for threshold setting: (i) including preferences in the estimation itself and (ii) setting thresholds ex-ante according to preferences only. We provide simulated and real-world evidence that this simplification results in stable thresholds and improves out-of-sample performance. Our solution is not restricted to binary-choice models, but directly transferable to the signaling approach and all probabilistic early-warning models.
    Keywords: early-warning models, loss functions, threshold setting, predictive performance
    JEL: C35 C53 G01
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:iwh:dispap:6-15&r=ecm
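    A minimal Python sketch of the ex-post threshold optimization the paper takes as its starting point: given crisis probabilities and outcomes, the signaling threshold minimizes a loss that weighs missed crises against false alarms by a preference parameter mu. Probabilities, outcomes and mu are simulated assumptions; the paper's proposed alternatives (preferences inside the estimation, ex-ante thresholds) are not shown.

      # Ex-post threshold optimization for an early-warning signal.
      import numpy as np

      rng = np.random.default_rng(12)
      n, mu = 500, 0.7                                 # mu = preference for catching crises
      crisis = rng.binomial(1, 0.1, size=n)            # rare crisis events
      prob = np.clip(0.1 + 0.6 * crisis + rng.normal(scale=0.2, size=n), 0, 1)

      def loss(threshold):
          signal = prob >= threshold
          missed = np.mean(~signal[crisis == 1])       # share of crises not signaled
          false_alarm = np.mean(signal[crisis == 0])   # share of calm periods signaled
          return mu * missed + (1 - mu) * false_alarm

      grid = np.linspace(0.01, 0.99, 99)
      best = grid[np.argmin([loss(th) for th in grid])]
      print(best, loss(best))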
  15. By: Matthew T. Panhans; John D. Singleton
    Abstract: While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward “quasi-experimental” methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a “credibility revolution” in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the methods shape the knowledge produced by economists and the expertise they possess. First documenting their growth bibliometrically, this paper aims to illuminate the origins, content, and contexts of quasi-experimental research designs, which seek natural experiments to justify causal inference. To highlight lines of continuity and discontinuity in the transition, the quasi-experimental program is situated in the historical context of the Cowles econometric framework and a case study from the economics of education is used to contrast the practical implementation of the approaches. Finally, significant historical contexts of the paradigm shift are explored, including the marketability of quasi-experimental methods and the 1980s crisis in econometrics.
    Keywords: econometrics, quasi-experimental methods, natural experiments, applied economics
    JEL: B21 B23 B4 C1
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:hec:heccee:2015-3&r=ecm

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.