nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒02‒17
nine papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Estimating persistence for irregularly spaced historical data By Franses, Ph.H.B.F.
  2. NAPLES: Mining the Lead-lag Relationship from Non-synchronous and High-frequency Data By Katsuya Ito; Kei Nakagawa
  3. Nowcasting Turkish GDP with MIDAS: Role of Functional Form of the Lag Polynomial By Mahmut Gunay
  4. Generalized Forecast Averaging in Autoregressions with a Near Unit Root By Mohitosh Kejriwal; Xuewen Yu
  5. Diffusion Copulas: Identification and Estimation By Ruijun Bu; Kaddour Hadri; Dennis Kristensen
  6. Structural Change and the Problem of Phantom Break Locations By Yao Rao; Brendan McCabe
  7. Rethinking error correction model in macroeconometric analysis : A relevant review By Christian Pinshi
  8. Forecasting NIFTY 50 benchmark Index using Seasonal ARIMA time series models By Amit Tewari
  9. Bayesian Model Averaging for Autoregressive Distributed Lag (BMA_ADL) in gretl By Blazejowski, Marcin; Kwiatkowski, Jacek

  1. By: Franses, Ph.H.B.F.
    Abstract: This paper introduces to the literature on Economic History a measure of persistence that is particularly useful when the data are irregularly spaced. An illustration using 10 unevenly spaced historical data series for Holland, covering 1738 to 1779, shows the merits of the methodology.
    Keywords: Irregularly spaced time series, Economic history, First order autoregression, Persistence
    JEL: C32 N01
    Date: 2019–09–01
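The core idea of measuring persistence with unevenly spaced observations can be sketched as follows: under a first-order autoregression, an observation arriving Δ periods after the previous one satisfies x(t) ≈ ρ^Δ · x(t−Δ) + noise, so ρ can be recovered by least squares over the observed gaps. The snippet below illustrates this on simulated data; it is a minimal sketch of the general principle, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) with persistence 0.9 on a fine (e.g. daily) grid.
rho_true, n = 0.9, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho_true * x[t - 1] + rng.normal()

# Keep only irregularly spaced observations, as in historical records.
obs = np.sort(rng.choice(np.arange(1, n), size=500, replace=False))
y, gaps = x[obs], np.diff(obs)

# Nonlinear least squares by grid search: y[i+1] ≈ rho**gap * y[i],
# since skipping `gap` periods compounds the AR(1) coefficient.
grid = np.linspace(0.01, 0.99, 99)
sse = [np.sum((y[1:] - r ** gaps * y[:-1]) ** 2) for r in grid]
rho_hat = grid[int(np.argmin(sse))]
print(rho_hat)
```

Because the gaps enter only through the exponent ρ^Δ, no interpolation of the missing periods is needed.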
  2. By: Katsuya Ito; Kei Nakagawa
    Abstract: In time-series analysis, the term "lead-lag effect" is used to describe a delayed effect on a given time series caused by another time series. Lead-lag effects are ubiquitous in practice and are especially critical when formulating investment strategies in high-frequency trading. At present, there are three major challenges in analyzing lead-lag effects. First, in practical applications, not all time series are observed synchronously. Second, the relevant datasets are growing and the environment is changing ever faster, making it more difficult to complete the computation within a given time limit. Third, some lead-lag effects are time-varying and last only for a short period, and their delay lengths are often affected by external factors. In this paper, we propose NAPLES (Negative And Positive lead-lag EStimator), a new statistical measure that addresses all of these problems. Through experiments on artificial and real datasets, we demonstrate that NAPLES correlates strongly with actual lead-lag effects, including those triggered by significant macroeconomic announcements.
    Date: 2020–02
  3. By: Mahmut Gunay
    Abstract: In this paper, we analyze short-term forecasts of Turkish GDP growth using the Mixed DAta Sampling (MIDAS) approach. We consider six alternatives for the functional form of the lag polynomial in the MIDAS equation and five to twelve lags of the explanatory high-frequency variables, and produce short-term forecasts for nine forecast horizons, starting with the release of data for six months before the start of the target quarter and ending with the release of the data for the last month of the quarter. Our results indicate that the functional form of the lag polynomial plays a non-negligible role in short-term forecast performance, but no single functional form performs well globally across all forecast horizons, lag lengths, or indicators. Import quantity indices perform relatively better until the first month's data for the target quarter become available. As monthly data accumulate for the target quarter, real domestic turnover and industrial production indicators stand out in terms of short-term forecasting performance. Once all three months' realizations of the monthly indicators become available for the quarter being forecast, unrestricted MIDAS-type equations with around five lags of real domestic turnover and industrial production track GDP growth relatively successfully.
    Keywords: GDP, Forecasting, MIDAS, Polynomial form
    JEL: C53 E37
    Date: 2020
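One functional form commonly used for the MIDAS lag polynomial is the exponential Almon specification, which compresses a long high-frequency lag profile into just two parameters. The sketch below (with illustrative parameter values, not the paper's) shows how such weights are constructed:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights: w_k proportional to
    exp(theta1*k + theta2*k**2), normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# With theta2 < 0 the weights decline smoothly over 12 monthly lags,
# so recent high-frequency observations get the most weight.
w = exp_almon_weights(0.1, -0.05, 12)
print(w.round(3))
```

The MIDAS regression then uses the weighted sum of the high-frequency lags as a single regressor, which is what makes the choice of functional form consequential for forecast performance.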
  4. By: Mohitosh Kejriwal; Xuewen Yu
    Abstract: This paper develops a new approach to forecasting a highly persistent time series that employs feasible generalized least squares (FGLS) estimation of the deterministic components in conjunction with Mallows model averaging.
    JEL: C22
    Date: 2019–12
  5. By: Ruijun Bu; Kaddour Hadri; Dennis Kristensen
    Abstract: We propose a new semiparametric approach for modelling nonlinear univariate diffusions, where the observed processes are nonparametric transformations of underlying parametric diffusions (UPDs). This modelling strategy yields a general class of semiparametric Markov diffusion models with parametric dynamic copulas and nonparametric marginal distributions. We provide primitive conditions for the identification of the UPD parameters together with the unknown transformations from discrete samples. Semiparametric likelihood-based estimators of the UPD parameters are developed, and we show that under regularity conditions both the parametric and nonparametric components converge at a parametric rate towards Normal distributions. Kernel-based drift and diffusion estimators are also proposed and shown to be normally distributed in large samples. A simulation study investigates the finite sample performance of our estimators in the context of modelling US short-term interest rates.
    Keywords: Continuous-time model; diffusion process; copula; transformation model; identification; nonparametric; semiparametric; maximum likelihood; sieve; kernel smoothing
    JEL: C14 C22 C32 C58 G12
    Date: 2018–07
  6. By: Yao Rao; Brendan McCabe
    Abstract: It is well known, in structural break problems, that it is much easier to detect the existence of a break in a data set than to determine the location of such a break in the sample span. This paper investigates why this is so in the context of Gaussian linear regressions, using a decision theory framework. The nub of the problem, even for moderately sized breaks, is that the posterior probability distribution of the possible break points is usually not very informative about the true break location. The information content is measured here by a proper scoring rule. Hence, even a locally optimal break location procedure, as introduced here, is ineffective. In the regression context, it turns out to be quite common, indeed the norm, for break location procedures to misidentify the true break position up to 100% of the time. Unfortunately too, the magnitude of the difference between the misidentified and true break locations is usually not small.
    Keywords: CUSUM test, Phantom Break Locations, Structural change
    Date: 2018–11
  7. By: Christian Pinshi (UNIKIN - University of Kinshasa)
    Abstract: The cointegration methodology has bridged the growing gap between economists and econometricians in understanding dynamics, equilibrium, and bias in the reliability of macroeconomic and financial analysis, which is subject to non-stationary behavior. This paper proposes a comprehensive literature review on the relevance of the error correction model. Econometricians and economists have shown that the error correction model is a powerful tool that refines the econometric results available to economic analysis and macroeconomic policy.
    Keywords: Cointegration, Error correction model, Macroeconomics
    JEL: C32 E0
    Date: 2020–01–25
  8. By: Amit Tewari
    Abstract: This paper analyses how time series analysis techniques can be applied to capture the movement of an exchange-traded index in a stock market. Specifically, the Seasonal Auto Regressive Integrated Moving Average (SARIMA) class of models is applied to capture the movement of the NIFTY 50 index, one of the most actively traded contracts globally [1]. A total of 729 model parameter combinations were evaluated, and the most appropriate was selected for the final forecast based on the AIC criterion [8]. NIFTY 50 can be used for a variety of purposes, such as benchmarking fund portfolios and launching index funds, exchange-traded funds (ETFs) and structured products. The index tracks the behaviour of a portfolio of blue-chip companies, the largest and most liquid Indian securities, and can be regarded as a true reflection of the Indian stock market [2].
    Date: 2020–01
  9. By: Blazejowski, Marcin; Kwiatkowski, Jacek
    Abstract: This paper presents a software package that implements Bayesian Model Averaging for Autoregressive Distributed Lag models (BMA_ADL), version 0.9, in gretl. Gretl (the GNU regression, econometrics and time-series library) is an increasingly popular free, open-source software package for econometric analysis with an easy-to-use graphical user interface. Bayesian Model Averaging (BMA) incorporates model uncertainty into conclusions about the estimated parameters. It is an efficient tool for discovering the most likely models and variables by obtaining estimates of their posterior characteristics.
    Keywords: BMA, gretl, model selection
    JEL: C2 C51 C63
    Date: 2020–01–17
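The BMA idea underlying the package can be illustrated outside gretl: approximate each candidate model's marginal likelihood with exp(−BIC/2) and normalize to obtain posterior model probabilities. The sketch below compares two OLS specifications on simulated data; it illustrates the general BMA principle, not the BMA_ADL package's own algorithm or priors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 + rng.normal(size=n)  # x2 is irrelevant by construction

def bic(X, y):
    """BIC of an OLS fit; exp(-BIC/2) approximates the marginal likelihood."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return n * np.log(resid @ resid / n) + X.shape[1] * np.log(n)

ones = np.ones(n)
models = {
    "x1":    np.column_stack([ones, x1]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
b = {name: bic(X, y) for name, X in models.items()}
bmin = min(b.values())  # subtract the minimum for numerical stability
wts = {name: np.exp(-(v - bmin) / 2) for name, v in b.items()}
total = sum(wts.values())
post = {name: w / total for name, w in wts.items()}
print(post)
```

Averaging parameter estimates or forecasts with these posterior probabilities is what carries model uncertainty into the final conclusions.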

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.