nep-ets New Economics Papers
on Econometric Time Series
Issue of 2018‒03‒19
twelve papers chosen by
Yong Yin
SUNY at Buffalo

  1. Semiparametric detection of changes in long range dependence By Fabrizio Iacone; Stepana Lazarova
  2. Radial Basis Functions Neural Networks for Nonlinear Time Series Analysis and Time-Varying Effects of Supply Shocks By KANAZAWA, Nobuyuki
  3. Finite Sample Theory and Bias Correction of Maximum Likelihood Estimators in the EGARCH Model By Antonis Demos; Dimitra Kyriakopoulou
  4. Finite Sample Theory and Bias Correction of MLEs in the EGARCH Model (Technical Appendix I) By Antonis Demos; Dimitra Kyriakopoulou
  5. Finite Sample Theory and Bias Correction of MLEs in the EGARCH Model (Technical Appendix II) By Antonis Demos; Dimitra Kyriakopoulou
  6. Negative Binomial Autoregressive Process By Yang Lu; Christian Gourieroux
  7. Permutation Tests for Equality of Distributions of Functional Data By Federico A. Bugni; Joel L. Horowitz
  8. Bootstrap-Assisted Unit Root Testing With Piecewise Locally Stationary Errors By Yeonwoo Rho; Xiaofeng Shao
  9. Skewness-Adjusted Bootstrap Confidence Intervals and Confidence Bands for Impulse Response Functions By Daniel Grabowski; Anna Staszewska-Bystrova; Peter Winker
  10. Forecasting dynamically asymmetric fluctuations of the U.S. business cycle By Emilio Zanetti Chini
  11. Comparing different data descriptors in Indirect Inference tests on DSGE models By Meenagh, David; Minford, Patrick; Wickens, Michael; Xu, Yongdeng
  12. An Overview of Modified Semiparametric Memory Estimation Methods By Busch, Marie; Sibbertsen, Philipp

  1. By: Fabrizio Iacone (University of York); Stepana Lazarova (Queen Mary University of London)
    Abstract: We consider changes in the degree of persistence of a process when the degree of persistence is characterized as the order of integration of a strongly dependent process. To avoid the risk of incorrectly specifying the data generating process, we employ local Whittle estimates, which use only frequencies local to zero. The limit distribution of the test statistic under the null is not standard, but it is well known in the literature. A Monte Carlo study shows that this inference procedure performs well in finite samples.
    Keywords: Long memory, persistence, break, local Whittle estimate
    JEL: C22
    Date: 2017–08–18
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:830&r=ets
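    Sketch: A minimal Python illustration of the local Whittle estimator of the memory parameter d that the paper builds on, minimizing R(d) = log((1/m) sum_j lambda_j^{2d} I(lambda_j)) - (2d/m) sum_j log lambda_j over the first m Fourier frequencies. The bandwidth m and all names below are illustrative choices, not taken from the paper.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def local_whittle_d(x, m):
          """Estimate the memory parameter d from the first m Fourier frequencies."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          lam = 2.0 * np.pi * np.arange(1, m + 1) / n            # frequencies local to zero
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)   # I(lambda_j)

          def objective(d):
              g = np.mean(lam ** (2.0 * d) * periodogram)
              return np.log(g) - 2.0 * d * np.mean(np.log(lam))

          return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

      # Example: white noise should give an estimate near d = 0.
      rng = np.random.default_rng(0)
      print(local_whittle_d(rng.standard_normal(1000), m=40))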
  2. By: KANAZAWA, Nobuyuki
    Abstract: I propose a flexible nonlinear method for studying the time series properties of macroeconomic variables. In particular, I focus on a class of Artificial Neural Networks (ANN) called the Radial Basis Functions (RBF). To assess the validity of the RBF approach in macroeconomic time series analysis, I conduct a Monte Carlo experiment using data generated from a nonlinear New Keynesian (NK) model. I find that the RBF estimator can uncover the structure of the nonlinear NK model from simulated data whose length is as short as 300 periods. Finally, I apply the RBF estimator to quarterly US data and show that the response of the macroeconomic variables to a positive supply shock exhibits substantial time variation. In particular, positive supply shocks are found to have significantly weaker expansionary effects during the zero lower bound periods as well as the period between 2003 and 2004. The finding is consistent with a basic NK model, which predicts that the higher real interest rate due to monetary policy inaction weakens the effects of supply shocks.
    Keywords: Neural Networks, Radial Basis Functions, Zero Lower Bound, Supply Shocks
    JEL: C45 E31
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:hit:hiasdp:hias-e-64&r=ets
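    Sketch: A minimal Python illustration of radial basis function regression of a series on its own lag, the kind of RBF building block the abstract refers to. The Gaussian bases, number of centers, bandwidth, and ridge penalty are illustrative assumptions, not the estimator used in the paper.
      import numpy as np

      def rbf_design(X, centers, width):
          # Gaussian bases phi_k(x) = exp(-||x - c_k||^2 / (2 width^2))
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * width ** 2))

      def fit_rbf(X, y, n_centers=15, width=0.5, ridge=1e-3, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), n_centers, replace=False)]  # centers drawn from the data
          Phi = np.column_stack([np.ones(len(X)), rbf_design(X, centers, width)])
          w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ y)
          return centers, w

      def predict_rbf(X, centers, w, width=0.5):
          Phi = np.column_stack([np.ones(len(X)), rbf_design(X, centers, width)])
          return Phi @ w

      # Example: recover the nonlinear AR(1) map y_t = sin(y_{t-1}) + noise from 300 observations.
      rng = np.random.default_rng(1)
      y = np.zeros(300)
      for t in range(1, 300):
          y[t] = np.sin(y[t - 1]) + 0.1 * rng.standard_normal()
      X, target = y[:-1, None], y[1:]
      centers, w = fit_rbf(X, target)
      print(np.corrcoef(predict_rbf(X, centers, w), target)[0, 1])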
  3. By: Antonis Demos (www.aueb.gr/users/demos); Dimitra Kyriakopoulou
    Abstract: We derive analytical expressions of bias approximations for maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of the EGARCH(1,1) parameters that enable us to correct for the bias of all estimators. The bias correction mechanism is constructed via two methods that are analytically described. We also evaluate the residual bootstrapped estimator as a measure of performance. Monte Carlo simulations indicate that, for the given sets of parameter values, the bias corrections work satisfactorily for all parameters. The proposed full-step estimator performs better than the classical one and is also faster than the bootstrap. The results can also be used to formulate the approximate Edgeworth distribution of the estimators.
    Keywords: Exponential GARCH, maximum likelihood estimation, finite sample properties, bias approximations, bias correction, Edgeworth expansion, bootstrap
    JEL: C13 C22
    Date: 2018–02–23
    URL: http://d.repec.org/n?u=RePEc:aue:wpaper:1802&r=ets
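    Sketch: A minimal Python illustration of the Gaussian (quasi-)log-likelihood of an EGARCH(1,1) model, log(sigma_t^2) = omega + beta*log(sigma_{t-1}^2) + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}, whose ML/QML estimators are the object of the bias corrections above. The parameterisation and initialisation are standard textbook choices and may differ in detail from the paper's.
      import numpy as np

      def egarch_loglik(params, eps):
          omega, beta, alpha, gamma = params
          n = len(eps)
          e_abs_z = np.sqrt(2.0 / np.pi)            # E|z| for standard normal z
          log_s2 = np.empty(n)
          log_s2[0] = np.log(np.var(eps))           # initialise at the sample variance
          for t in range(1, n):
              z_prev = eps[t - 1] * np.exp(-0.5 * log_s2[t - 1])
              log_s2[t] = (omega + beta * log_s2[t - 1]
                           + alpha * (abs(z_prev) - e_abs_z) + gamma * z_prev)
          return -0.5 * np.sum(np.log(2.0 * np.pi) + log_s2 + eps ** 2 * np.exp(-log_s2))

      # The QML estimator maximises this criterion, e.g. by passing its negative
      # to scipy.optimize.minimize.
      rng = np.random.default_rng(2)
      print(egarch_loglik((-0.1, 0.95, 0.2, -0.1), rng.standard_normal(500)))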
  4. By: Antonis Demos (www.aueb.gr/users/demos); Dimitra Kyriakopoulou
    Date: 2018–02–23
    URL: http://d.repec.org/n?u=RePEc:aue:wpaper:1803&r=ets
  5. By: Antonis Demos (www.aueb.gr/users/demos); Dimitra Kyriakopoulou
    Date: 2018–02–23
    URL: http://d.repec.org/n?u=RePEc:aue:wpaper:1804&r=ets
  6. By: Yang Lu (Centre d'Economie de l'Université de Paris Nord (CEPN)); Christian Gourieroux (University of Toronto and Toulouse School of Economics)
    Abstract: We introduce Negative Binomial Autoregressive (NBAR) processes for (univariate and bivariate) count time series. The univariate NBAR process is defined jointly with an underlying intensity process, which is autoregressive gamma. The resulting count process is Markov, with negative binomial conditional and marginal distributions. The process is then extended to the bivariate case with a Wishart autoregressive matrix intensity process. The NBAR processes are Compound Autoregressive, which allows for a simple stationarity condition and quasi-closed-form nonlinear forecasting formulas at any horizon, as well as a computationally tractable generalized method of moments estimator. The model is applied to a pairwise analysis of weekly occurrence counts of a contagious disease between the greater Paris region and other French regions.
    Keywords: Compound Autoregressive, Poisson-gamma conjugacy
    JEL: C32
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:upn:wpaper:2018-01&r=ets
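    Sketch: A minimal Python simulation of a count process built from Poisson-gamma conjugacy, in the spirit of the NBAR construction: a latent gamma intensity whose shape parameter is updated with the lagged count, and a Poisson count given the intensity. The parameterisation (delta, rho, c) is an illustrative assumption and need not match the paper's exact specification.
      import numpy as np

      def simulate_nbar_like(n, delta=1.0, rho=0.5, c=1.0, seed=0):
          rng = np.random.default_rng(seed)
          counts = np.zeros(n, dtype=int)
          for t in range(1, n):
              # gamma intensity whose shape rises with the lagged count
              intensity = rng.gamma(shape=delta + rho * counts[t - 1], scale=c)
              counts[t] = rng.poisson(intensity)   # Poisson count given the intensity
          return counts

      y = simulate_nbar_like(500)
      print(y.mean(), y.var())   # overdispersion: the variance exceeds the mean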
  7. By: Federico A. Bugni; Joel L. Horowitz
    Abstract: Economic data are often generated by stochastic processes that take place in continuous time, though observations may occur only at discrete times. For example, electricity and gas consumption take place in continuous time. Data generated by a continuous time stochastic process are called functional data. This paper is concerned with comparing two or more stochastic processes that generate functional data. The data may be produced by a randomized experiment in which there are multiple treatments. The paper presents a test of the hypothesis that the same stochastic process generates all the functional data. In contrast to existing methods, the test described here applies to both functional data and multiple treatments. The test is presented as a permutation test, which ensures that in a finite sample, the true and nominal probabilities of rejecting a correct null hypothesis are equal. The paper also presents the asymptotic distribution of the test statistic under alternative hypotheses. The results of Monte Carlo experiments and an application to an experiment on billing and pricing of natural gas illustrate the usefulness of the test.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.00798&r=ets
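    Sketch: A minimal Python illustration of the permutation mechanics for comparing two samples of functional data (curves stored as rows of arrays). The statistic used here, the integrated squared difference of the sample mean curves, is only an illustrative placeholder and is not the paper's statistic, but the permutation step that delivers exact finite-sample size is the same.
      import numpy as np

      def permutation_test(sample_a, sample_b, n_perm=999, seed=0):
          rng = np.random.default_rng(seed)
          pooled = np.vstack([sample_a, sample_b])
          n_a = len(sample_a)

          def statistic(idx_a):
              mask = np.zeros(len(pooled), dtype=bool)
              mask[idx_a] = True
              return np.sum((pooled[mask].mean(axis=0) - pooled[~mask].mean(axis=0)) ** 2)

          observed = statistic(np.arange(n_a))
          perm = np.array([statistic(rng.choice(len(pooled), n_a, replace=False))
                           for _ in range(n_perm)])
          return (1 + np.sum(perm >= observed)) / (n_perm + 1)   # exact-size p-value

      # Example: two samples of 50 curves observed on a grid of 100 points.
      rng = np.random.default_rng(3)
      a = rng.standard_normal((50, 100)).cumsum(axis=1)
      b = rng.standard_normal((50, 100)).cumsum(axis=1)
      print(permutation_test(a, b))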
  8. By: Yeonwoo Rho; Xiaofeng Shao
    Abstract: In unit root testing, a piecewise locally stationary process is adopted to accommodate nonstationary errors that can have both smooth and abrupt changes in second- or higher-order properties. Under this framework, the limiting null distributions of the conventional unit root test statistics are derived and shown to contain a number of unknown parameters. To circumvent the difficulty of direct consistent estimation, we propose to use the dependent wild bootstrap to approximate the non-pivotal limiting null distributions and provide a rigorous theoretical justification for bootstrap consistency. The proposed method is compared through finite sample simulations with the recolored wild bootstrap procedure, which was developed for errors that follow a heteroscedastic linear process. Further, a combination of autoregressive sieve recoloring with the dependent wild bootstrap is shown to perform well. The validity of the dependent wild bootstrap in a nonstationary setting is demonstrated for the first time, showing the possibility of extensions to other inference problems associated with locally stationary processes.
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1802.05333&r=ets
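    Sketch: A minimal Python illustration of the dependent wild bootstrap multiplier step: residuals are multiplied by a smooth, mean-zero, unit-variance Gaussian process whose dependence dies out within a bandwidth, and the unit root statistic is recomputed on each such draw to obtain bootstrap critical values. The Bartlett-kernel covariance and the bandwidth below are illustrative choices, not the paper's tuning.
      import numpy as np

      def dependent_wild_multipliers(n, bandwidth, rng):
          # Gaussian multipliers with Cov(W_i, W_j) = max(0, 1 - |i - j| / bandwidth)
          lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
          cov = np.clip(1.0 - lags / bandwidth, 0.0, None)
          chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
          return chol @ rng.standard_normal(n)

      def dwb_resample(residuals, bandwidth, rng):
          # One bootstrap draw of the error process.
          return residuals * dependent_wild_multipliers(len(residuals), bandwidth, rng)

      rng = np.random.default_rng(4)
      e = rng.standard_normal(200)
      print(dwb_resample(e, bandwidth=10, rng=rng)[:5])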
  9. By: Daniel Grabowski (University of Giessen); Anna Staszewska-Bystrova (University of Lodz); Peter Winker (University of Giessen)
    Abstract: This article investigates the construction of skewness-adjusted confidence intervals and joint confidence bands for impulse response functions from vector autoregressive models. Three different implementations of the skewness adjustment are investigated. The methods are based on a bootstrap algorithm that adjusts the mean and skewness of the bootstrap distribution of the autoregressive coefficients before the impulse response functions are computed. Using extensive Monte Carlo simulations, the methods are shown to improve coverage accuracy in small and medium-sized samples and for unit root processes, for both known and unknown lag orders.
    Keywords: Bootstrap, confidence intervals, joint confidence bands, vector autoregression
    JEL: C15 C32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:201810&r=ets
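    Sketch: A minimal Python illustration of the baseline residual bootstrap for impulse responses of a VAR(1), producing pointwise percentile intervals for non-orthogonalised responses. The paper's contribution, adjusting the mean and skewness of the bootstrap distribution of the autoregressive coefficients before the impulse responses are computed, is not reproduced here; this shows only the standard bootstrap it modifies.
      import numpy as np

      def fit_var1(y):
          X, Y = y[:-1], y[1:]
          A = np.linalg.lstsq(X, Y, rcond=None)[0].T          # Y_t = A Y_{t-1} + u_t
          return A, Y - X @ A.T

      def irf(A, horizons):
          return np.stack([np.linalg.matrix_power(A, h) for h in range(horizons + 1)])

      def bootstrap_irf(y, horizons=10, n_boot=500, seed=0):
          rng = np.random.default_rng(seed)
          A, resid = fit_var1(y)
          draws = np.empty((n_boot, horizons + 1, y.shape[1], y.shape[1]))
          for b in range(n_boot):
              u = resid[rng.integers(0, len(resid), len(resid))]   # resampled residuals
              y_b = np.zeros_like(y)
              y_b[0] = y[0]
              for t in range(1, len(y)):
                  y_b[t] = A @ y_b[t - 1] + u[t - 1]
              draws[b] = irf(fit_var1(y_b)[0], horizons)
          lower, upper = np.percentile(draws, [5, 95], axis=0)
          return irf(A, horizons), lower, upper

      rng = np.random.default_rng(5)
      data = np.zeros((200, 2))
      A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
      for t in range(1, 200):
          data[t] = A_true @ data[t - 1] + rng.standard_normal(2)
      point, lo, hi = bootstrap_irf(data)
      print(point[1], lo[1], hi[1])   # horizon-1 responses with 90% pointwise bounds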
  10. By: Emilio Zanetti Chini (Department of Economics and Management, University of Pavia)
    Abstract: The Generalized Smooth Transition Auto-Regression (GSTAR) parametrizes the joint asymmetry in the duration and length of cycles in macroeconomic time series by using particular generalizations of the logistic function. The symmetric smooth transition and linear auto-regressions are special cases of the new parametrization. A test for the null hypothesis of dynamic symmetry is discussed. Two case studies indicate that dynamic asymmetry is a key feature of the U.S. economy. Our model beats its competitors in point forecasting, but this superiority becomes less evident in density forecasting and in uncertain forecasting environments.
    Keywords: Density forecasts, Econometric modelling, Evaluating forecasts, Generalized logistic, Industrial production, Nonlinear time series, Point forecasts, Statistical tests, Unemployment.
    JEL: C22 C51 C52
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0156&r=ets
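    Sketch: A minimal Python simulation of a logistic smooth transition autoregression (LSTAR), the symmetric special case nested by the GSTAR model above; the paper's asymmetric generalisation of the logistic transition is not reproduced here, and the parameter values are illustrative.
      import numpy as np

      def logistic_transition(s, gamma, c):
          # G(s; gamma, c) in [0, 1]: smoothness gamma, location c
          return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

      def simulate_lstar(n, phi_low=0.9, phi_high=-0.5, gamma=5.0, c=0.0, seed=0):
          rng = np.random.default_rng(seed)
          y = np.zeros(n)
          for t in range(1, n):
              g = logistic_transition(y[t - 1], gamma, c)      # regime weight from the lagged level
              y[t] = ((1 - g) * phi_low + g * phi_high) * y[t - 1] + 0.1 * rng.standard_normal()
          return y

      print(simulate_lstar(300)[:5])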
  11. By: Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: Indirect inference testing can be carried out with a variety of auxiliary models. Asymptotically, these different models make no difference. However, in small samples power can differ. We explore small sample power and estimation bias both with different variable combinations and with different data descriptors in the auxiliary model: Vector Auto Regressions, Impulse Response Functions, or Moments (corresponding to the Simulated Method of Moments). We find that VAR and IRF descriptors perform slightly better than Moments but that different three-variable combinations make little difference. Using more than three variables raises power and lowers bias but reduces the chances of finding a tractable model that passes the test.
    Keywords: Indirect Inference, DSGE model, Auxiliary Models, Simulated Moments Method, Impulse Response Functions, VAR, Moments, power, bias
    JEL: C12 C32 C52 E1
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2018/7&r=ets
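    Sketch: A minimal Python illustration of the indirect inference Wald test logic: the same auxiliary descriptor (here, stacked VAR(1) coefficients, one illustrative choice among those compared in the paper) is computed on the observed data and on many samples simulated from the structural model under the null, and the Wald distance of the data descriptor is compared with its simulated distribution. The simulator below is a stand-in, not a DSGE model.
      import numpy as np

      def var1_descriptor(y):
          X, Y = y[:-1], y[1:]
          return np.linalg.lstsq(X, Y, rcond=None)[0].ravel()   # auxiliary-model parameters

      def ii_wald_test(y_data, simulate_model, n_sim=500, seed=0):
          rng = np.random.default_rng(seed)
          a_data = var1_descriptor(y_data)
          a_sims = np.array([var1_descriptor(simulate_model(rng)) for _ in range(n_sim)])
          mean = a_sims.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(a_sims, rowvar=False))
          wald = lambda a: (a - mean) @ cov_inv @ (a - mean)
          w_data = wald(a_data)
          w_sims = np.array([wald(a) for a in a_sims])
          return w_data, np.mean(w_sims >= w_data)              # statistic and simulated p-value

      # Stand-in "structural model": a bivariate VAR(1) simulator.
      A = np.array([[0.6, 0.1], [0.0, 0.5]])
      def simulate_model(rng, n=200):
          y = np.zeros((n, 2))
          for t in range(1, n):
              y[t] = A @ y[t - 1] + rng.standard_normal(2)
          return y

      print(ii_wald_test(simulate_model(np.random.default_rng(6)), simulate_model))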
  12. By: Busch, Marie; Sibbertsen, Philipp
    Abstract: Several modified estimation methods of the memory parameter have been introduced in the past years. They aim to decrease the upward bias of the memory parameter estimate in cases of low frequency contaminations or an additive noise component, especially in situations where a short-memory process is contaminated. In this paper, we provide an overview and compare the performance of nine semiparametric estimation methods. Among them are two standard methods, four modified approaches to account for low frequency contaminations, and three procedures developed for perturbed fractional processes. We conduct an extensive Monte Carlo study for a variety of parameter constellations and several DGPs. Furthermore, an empirical application to the log-absolute return series of the S&P 500 shows that the estimation results combined with a long-memory test indicate a spurious long-memory process.
    Keywords: Spurious Long Memory; Semiparametric estimation; Low frequency contamination; Perturbation; Monte Carlo simulation
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-628&r=ets
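    Sketch: A minimal Python illustration of one of the standard semiparametric memory estimators the survey starts from, the log-periodogram (GPH) regression of log I(lambda_j) on -2*log(lambda_j) over the first m frequencies. The modified estimators surveyed in the paper adjust this kind of regression for low frequency contamination or perturbation, which is not shown here; m and the names are illustrative.
      import numpy as np

      def gph_estimator(x, m):
          x = np.asarray(x, dtype=float)
          n = len(x)
          lam = 2.0 * np.pi * np.arange(1, m + 1) / n
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          log_periodogram = np.log((np.abs(dft) ** 2) / (2.0 * np.pi * n))
          regressor = -2.0 * np.log(lam)
          return np.polyfit(regressor, log_periodogram, 1)[0]   # slope estimates d

      rng = np.random.default_rng(7)
      print(gph_estimator(rng.standard_normal(2000), m=60))     # near 0 for white noise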

This nep-ets issue is ©2018 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.