nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒09‒16
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. Generalized Flat-Top Realized Kernel Estimation of Ex-Post Variation of Asset Prices Contaminated by Noise By Rasmus Tangsgaard Varneskov
  2. Optimal R-Estimation of a Spherical Location By Christophe Ley; Yves-Caoimhin Swan; Baba Thiam; Thomas Verdebout
  3. Subset hypotheses testing and instrument exclusion in the linear IV regression By Firmin Doko Tchatoka
  4. Wavelet Based Outlier Correction for Power Controlled Turning Point Detection in Surveillance Systems By Yushu Li
  5. Heaping-Induced Bias in Regression-Discontinuity Designs By Alan I. Barreca; Jason M. Lindo; Glen R. Waddell
  6. Interpreting interaction terms in linear and non-linear models: A cautionary tale By Drichoutis, Andreas
  7. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques By Anders Bredahl Kock; Timo Teräsvirta
  8. Stochastic trends and seasonality in economic time series: new evidence from Bayesian stochastic model specification search By Stefano Grassi; Tommaso Proietti
  9. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009 By Anders Bredahl Kock; Timo Teräsvirta
  10. A Heteroskedasticity Robust Breusch-Pagan Test for Contemporaneous Correlation in Dynamic Panel Data Models By Andreea Halunga; Chris D. Orme; Takashi Yamagata
  11. Forecasting Volatility with Copula-Based Time Series Models By Oleg Sokolinskiy; Dick van Dijk
  12. On the Choice of the Unit Period in Time Series Models By Peter Fuleky
  13. Improving the reliability of real-time Hodrick-Prescott filtering using survey forecasts By Jaqueson K. Galimberti; Marcelo L. Moura
  14. Professional Forecasters: How to Understand and Exploit Them Through a DSGE Model By Luis E. Rojas
  15. Nonidentification of Insurance Models with Probability of Accidents By Gaurab Aryal; Isabelle Perrigne; Quang Vuong

  1. By: Rasmus Tangsgaard Varneskov (Aarhus University and CREATES)
    Abstract: This paper introduces a new class of generalized flat-top realized kernels for estimation of quadratic variation in the presence of market microstructure noise that is allowed to exhibit a non-trivial dependence structure and to be correlated with the efficient price process. The estimators in this class are shown to be consistent, asymptotically unbiased, and mixed Gaussian with an optimal n^(1/4)-convergence rate. In addition, an efficient and asymptotically normal estimator of the long run variance of the market microstructure noise is provided along with novel and consistent estimators of the asymptotic variance of the flat-top realized kernels and of the integrated quarticity, respectively, creating a powerful, unified framework for analyzing quadratic variation. A finite sample correction ensures non-negativity of the flat-top realized kernels without affecting asymptotic properties. Lastly, in an extensive simulation study, important practical issues such as the choice of kernel function and tuning parameters are addressed, the adequacy of the asymptotic distribution in finite samples is assessed, and it is shown that estimators in this class exhibit a superior bias and root mean squared error tradeoff relative to competing estimators. The impact of using various realized estimators is illustrated in a small empirical application to noisy high frequency stock market data.
    Keywords: Bias Reduction, Nonparametric Estimation, Market Microstructure Noise, Quadratic Variation.
    JEL: C14 C15 C50
    Date: 2011–09–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-31&r=ecm
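The flat-top weighting idea behind these estimators can be illustrated with a minimal sketch; the cutoff `c` and the Parzen-type taper below are illustrative assumptions, not the paper's actual kernel class or tuning rules:

```python
import math

def realized_kernel(returns, H, c=0.1):
    """Flat-top realized kernel sketch: the weight function equals 1 near
    the origin (the 'flat top'), then decays smoothly to zero.
    Hypothetical Parzen-style taper for the decaying part."""
    n = len(returns)

    def gamma(h):
        # h-th realized autocovariance of high-frequency returns
        return sum(returns[t] * returns[t - h] for t in range(h, n))

    def weight(x):
        # flat-top: unity on [0, c], Parzen-type decay beyond
        if x <= c:
            return 1.0
        y = (x - c) / (1.0 - c)
        if y < 0.5:
            return 1 - 6 * y**2 + 6 * y**3
        return 2 * (1 - y)**3

    rk = gamma(0)  # realized variance term
    for h in range(1, H + 1):
        rk += 2.0 * weight(h / (H + 1)) * gamma(h)
    return rk
```

Assigning unit weight to the low-order autocovariances is what distinguishes flat-top kernels from ordinary tapered kernels and is the source of the bias reduction emphasized in the abstract.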
  2. By: Christophe Ley; Yves-Caoimhin Swan; Baba Thiam; Thomas Verdebout
    Abstract: In this paper, we provide R-estimators of the location of a rotationally symmetric distribution on the unit sphere of R^k. In order to do so we first prove the local asymptotic normality property of a sequence of rotationally symmetric models; this is a non-standard result due to the curved nature of the unit sphere. We then construct our estimators by adapting the Le Cam one-step methodology to spherical statistics and ranks. We show that they are asymptotically normal under any rotationally symmetric distribution and achieve the efficiency bound under a specific density. Their small sample behavior is studied via a Monte Carlo simulation and our methodology is illustrated on geological data.
    Keywords: local asymptotic normality; rank-based methods; R-estimation; spherical statistics
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/96823&r=ecm
  3. By: Firmin Doko Tchatoka (School of Economics and Finance, University of Tasmania)
    Abstract: This paper investigates the asymptotic size properties of robust subset tests when instruments are left out of the analysis. Recently, robust subset procedures have been developed for testing hypotheses which are specified on subsets of the structural parameters or on the parameters associated with the included exogenous variables. It has been shown that they never over-reject the true parameter values even when nuisance parameters are not identified. However, their robustness to instrument exclusion has not been investigated. Instrument exclusion is an important problem in econometrics and there are at least two reasons to be concerned. First, it is difficult in practice to assess whether an instrument has been omitted. For example, some components of the “identifying” instruments that are excluded from the structural equation may be quite uncertain or “left out” of the analysis. Second, in many instrumental variable (IV) applications, an infinite number of instruments are available for use in large sample estimation. This is particularly the case with most time series models. If a given variable, say X_t, is a legitimate instrument, so too are its lags X_{t-1}, X_{t-2}, and so on. Hence, instrument exclusion seems highly likely in most practical situations. After formulating a general asymptotic framework which allows one to study this issue in a convenient way, I consider two main setups: (1) the missing instruments are (possibly) relevant, and (2) they are asymptotically weak. In both setups, I show that all subset procedures studied are in general consistent against instrument exclusion (hence asymptotically invalid for the subset hypothesis of interest). I characterize cases where consistency may not hold, but the asymptotic distribution is modified in a way that would lead to size distortions in large samples. I propose a “rule of thumb” which allows practitioners to assess whether a missing instrument is detrimental to subset procedures. I present a Monte Carlo experiment confirming that the subset procedures are unreliable when instruments are missing.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:tas:wpaper:10668&r=ecm
  4. By: Yushu Li (Centre for Labour Market Policy Research (CAFO), Department of Economic and Statistics)
    Abstract: Detecting turning points in unimodal processes has various applications to time series with cyclic periods. Related techniques are widely explored in the field of statistical surveillance, that is, on-line turning point detection procedures. This paper first presents a power controlled turning point detection method based on the theory of the likelihood ratio test in statistical surveillance. Next we show how outliers influence the performance of this methodology. Due to the sensitivity of the surveillance system to outliers, we finally present a wavelet multiresolution analysis (MRA) based outlier elimination approach, which can be combined with the on-line turning point detection process and alleviates the false alarm problem introduced by the outliers.
    Keywords: Unimodal, Turning point, Statistical surveillance, Outlier, Wavelet multiresolution, Threshold.
    JEL: C12 C15 C22
    Date: 2011–07–15
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-29&r=ecm
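As a rough illustration of wavelet-based outlier handling, the sketch below uses a one-level Haar decomposition with a hypothetical MAD-based threshold; the paper's MRA procedure and its integration with the surveillance system are more elaborate:

```python
import math

def haar_dwt(x):
    # one-level orthonormal Haar transform: pairwise sums (approximation)
    # and differences (detail), each scaled by 1/sqrt(2)
    s = 1 / math.sqrt(2)
    a = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    # exact inverse of haar_dwt
    s = 1 / math.sqrt(2)
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) * s, (ai - di) * s])
    return x

def remove_spikes(x, k=3.0):
    """Zero detail coefficients larger than k times a robust scale
    estimate (MAD), then reconstruct; isolated spikes live almost
    entirely in the detail band."""
    a, d = haar_dwt(x)
    med = sorted(abs(v) for v in d)[len(d) // 2]
    scale = med / 0.6745 or 1e-12   # MAD -> sigma under Gaussian noise
    d = [0.0 if abs(v) > k * scale else v for v in d]
    return haar_idwt(a, d)
```

Cleaning the series before running the likelihood-ratio surveillance statistic is the point of the paper: spikes that would otherwise trigger false turning-point alarms are suppressed in the detail band.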
  5. By: Alan I. Barreca; Jason M. Lindo; Glen R. Waddell
    Abstract: This study uses Monte Carlo simulations to demonstrate that regression-discontinuity designs arrive at biased estimates when attributes related to outcomes predict heaping in the running variable. After showing that our usual diagnostics are poorly suited to identifying this type of problem, we provide alternatives. We also demonstrate how the magnitude and direction of the bias varies with bandwidth choice and the location of the data heaps relative to the treatment threshold. Finally, we discuss approaches to correcting for this type of problem before considering these issues in several non-simulated environments.
    JEL: C14 C21 I12
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:17408&r=ecm
  6. By: Drichoutis, Andreas
    Abstract: Interaction terms are often misinterpreted in the empirical economics literature by assuming that the coefficient of interest represents unconditional marginal changes. I present the correct way to estimate conditional marginal changes in a series of non-linear models including (ordered) logit/probit regressions, censored and truncated regressions. The linear regression model is used as the benchmark case.
    Keywords: interaction terms; ordered probit; ordered logit; truncated regression; censored regression; nonlinear models
    JEL: C51 C12 C24 C25
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:33251&r=ecm
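The abstract's point can be made concrete for the probit case: the interaction effect is the cross-partial derivative of the predicted probability, not the interaction coefficient itself. A minimal sketch (coefficient names b1, b2, b12 are illustrative):

```python
import math

def phi(z):
    # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def probit_interaction_effect(b1, b2, b12, x1, x2):
    """Cross-partial of P(y=1|x) = Phi(b1*x1 + b2*x2 + b12*x1*x2)
    with respect to x1 and x2. It is NOT simply b12*phi(z): the
    second term involves the other coefficients and the evaluation
    point, using phi'(z) = -z*phi(z)."""
    z = b1 * x1 + b2 * x2 + b12 * x1 * x2
    return b12 * phi(z) + (b1 + b12 * x2) * (b2 + b12 * x1) * (-z) * phi(z)
```

Note that the effect varies with the evaluation point and is generally nonzero even when the interaction coefficient b12 is exactly zero, which is the cautionary tale the abstract refers to.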
  7. By: Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally identified. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. He called this procedure QuickNet and we shall compare its performance to two other procedures which are built on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models and here these comparisons are extended to neural networks. Finally, a nonlinear model such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where this is not done.
    Keywords: artificial neural network, forecast comparison, model selection, nonlinear autoregressive model, nonlinear time series, root mean square forecast error, Wilcoxon’s signed-rank test
    JEL: C22 C45 C52 C53
    Date: 2011–08–26
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-27&r=ecm
  8. By: Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (University of Sydney)
    Abstract: An important issue in modelling economic time series is whether key unobserved components representing trends, seasonality and calendar components, are deterministic or evolutive. We address it by applying a recently proposed Bayesian variable selection methodology to an encompassing linear mixed model that features, along with deterministic effects, additional random explanatory variables that account for the evolution of the underlying level, slope, seasonality and trading days. Variable selection is performed by estimating the posterior model probabilities using a suitable Gibbs sampling scheme. The paper conducts an extensive empirical application on a large and representative set of monthly time series concerning industrial production and retail turnover. We find strong support for the presence of stochastic trends in the series, either in the form of a time-varying level, or, less frequently, of a stochastic slope, or both. Seasonality is a more stable component: in only 70% of the cases were we able to select at least one stochastic trigonometric cycle out of the six possible cycles. Most frequently, the time variation is found in the fundamental and first harmonic cycles. An interesting and intuitively plausible finding is that the probability of estimating time-varying components increases with the sample size available. However, even for very large sample sizes we were unable to find stochastically varying calendar effects.
    Keywords: Bayesian model selection, stationarity, unit roots, stochastic trends, variable selection.
    JEL: E32 E37 C53
    Date: 2011–09–02
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-30&r=ecm
  9. By: Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting during the economic crisis 2007-2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other.
    Keywords: Autometrics, economic forecasting, Marginal Bridge estimator, neural network, nonlinear time series model, Wilcoxon's signed-rank test
    JEL: C22 C45 C52 C53
    Date: 2011–08–26
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-28&r=ecm
  10. By: Andreea Halunga; Chris D. Orme; Takashi Yamagata
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1118&r=ecm
  11. By: Oleg Sokolinskiy (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam)
    Abstract: This paper develops a novel approach to modeling and forecasting realized volatility (RV) measures based on copula functions. Copula-based time series models can capture relevant characteristics of volatility such as nonlinear dynamics and long-memory type behavior in a flexible yet parsimonious way. In an empirical application to daily volatility for S&P500 index futures, we find that the copula-based RV (C-RV) model outperforms conventional forecasting approaches for one-day ahead volatility forecasts in terms of accuracy and efficiency. Among the copula specifications considered, the Gumbel C-RV model achieves the best forecast performance, which highlights the importance of asymmetry and upper tail dependence for modeling volatility dynamics. Although we find substantial variation in the copula parameter estimates over time, conditional copulas do not improve the accuracy of volatility forecasts.
    Keywords: Nonlinear dependence; long memory; copulas; volatility forecasting
    JEL: C22 C53
    Date: 2011–09–05
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110125&r=ecm
  12. By: Peter Fuleky (UHERO, and Department of Economics, University of Hawaii at Manoa)
    Abstract: When estimating the parameters of a process, researchers can choose the reference unit of time (unit period) for their study. Frequently, they set the unit period equal to the observation interval. However, I show that decoupling the unit period from the observation interval facilitates the comparison of parameter estimates across studies with different data sampling frequencies. If the unit period is standardized (for example annualized) across these studies, then the parameters will represent the same attributes of the underlying process, and their interpretation will be independent of the sampling frequency.
    Keywords: Unit Period, Sampling Frequency, Bias, Time Series.
    JEL: C13 C22 C51 C82
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:hae:wpaper:2011-4&r=ecm
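The annualization argument can be illustrated with a point-sampled AR(1), for which the autocorrelation at m observation intervals is the first-order coefficient raised to the power m. A sketch (function names are illustrative):

```python
import math

def annualized_persistence(phi_obs, periods_per_year):
    """Convert an AR(1) coefficient estimated at the observation
    interval into the implied one-year-ahead autocorrelation, so that
    monthly and quarterly studies describe the same attribute of the
    underlying process."""
    return phi_obs ** periods_per_year

def half_life_years(phi_obs, periods_per_year):
    """Half-life of a shock expressed in years rather than in
    observation intervals, another sampling-frequency-free summary."""
    return math.log(0.5) / math.log(phi_obs) / periods_per_year
```

For example, a monthly coefficient of 0.9 and a quarterly coefficient of 0.9^3 describe the same process; their raw values differ, but their annualized persistence coincides, which is the comparability point the abstract makes.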
  13. By: Jaqueson K. Galimberti; Marcelo L. Moura
    Abstract: Incorporating survey forecasts into a forecast-augmented Hodrick-Prescott filter, we document a considerable improvement in the reliability of real-time US output-gap estimation. The odds of extracting wrong signals from output-gap estimates are found to fall by almost a half, and the magnitude of revisions to these estimates amounts to only three-fifths of the average output-gap size, compared with the usual one-to-one ratio. We further analyze how this end-of-sample uncertainty evolves as time goes on and observations accumulate, showing that a 90% rate of correct assessments of the output-gap sign can be attained with a delay of five quarters using survey forecasts.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:man:cgbcrp:159&r=ecm
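The mechanics of forecast augmentation can be sketched as follows; this is a textbook dense-matrix HP filter plus naive appending of forecasts, whereas the paper's real-time evaluation design is of course richer:

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Standard HP filter: the trend minimizes
    sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2,
    solved in closed form as (I + lam*K'K) tau = y, where K is the
    (T-2) x T second-difference matrix."""
    T = len(y)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * K.T @ K, np.asarray(y, float))

def hp_trend_augmented(y, forecasts, lam=1600.0):
    """Forecast-augmented variant in the spirit of the paper: append
    forecasts before filtering to mitigate the end-point bias, then
    keep only the trend over the observed sample."""
    z = list(y) + list(forecasts)
    return hp_trend(z, lam)[:len(y)]
```

The end-point problem arises because the last trend values are estimated one-sidedly; appending (survey) forecasts makes the filter effectively two-sided at the end of the observed sample, which is the source of the reliability gains the abstract reports.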
  14. By: Luis E. Rojas
    Abstract: This paper derives a link between the forecasts of professional forecasters and a DSGE model. I show that the forecasts of a professional forecaster can be incorporated into the state space representation of the model by allowing the measurement error of the forecast and the structural shocks to be correlated. The parameters capturing this correlation are reduced-form parameters that allow us to address two issues: (i) how the forecasts of the professional forecaster can be exploited as a source of information for the estimation of the model, and (ii) how to characterize the deviations of the professional forecaster from an ideal complete-information forecaster in terms of the shocks and the structure of the economy.
    Date: 2011–08–15
    URL: http://d.repec.org/n?u=RePEc:col:000094:008945&r=ecm
  15. By: Gaurab Aryal; Isabelle Perrigne; Quang Vuong
    Abstract: In contrast to Aryal, Perrigne and Vuong (2009), this note shows that in an insurance model with multidimensional screening when only information on whether the insuree has been involved in some accident is available, the joint distribution of risk and risk aversion is not identified.
    JEL: C14 L62 D82 D86
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2011-552&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.