nep-ets New Economics Papers
on Econometric Time Series
Issue of 2021‒01‒04
eleven papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Frequency Domain Local Bootstrap in long memory time series By Arteche González, Jesús María
  2. Spurious relationships in high dimensional systems with strong or mild persistence By Pitarakis, Jean-Yves; Gonzalo, Jesús
  3. Temperature Anomalies, Long Memory, and Aggregation By J. Eduardo Vera-Valdés
  4. Adaptative predictability of stock market returns By Lopes Moreira Da Veiga, María Helena; Mao, Xiuping; Casas Villalba, Maria Isabel
  5. Out of sample predictability in predictive regressions with many predictor candidates By Pitarakis, Jean-Yves; Gonzalo, Jesús
  6. Exchange Rates and Macroeconomic Fundamentals: Evidence of Instabilities from Time-Varying Factor Loadings By Eric Hillebrand; Jakob Mikkelsen; Lars Spreng; Giovanni Urga
  7. Ethereum gas price statistics By David Carl; Christian Ewerhart
  8. Tests of Conditional Predictive Ability: Existence, Size, and Power By Michael W. McCracken
  9. A Bayesian Dynamic Compositional Model for Large Density Combinations in Finance By Roberto Casarin; Stefano Grassi; Francesco Ravazzolo; Herman K. van Dijk
  10. Uncovering regimes in out of sample forecast errors from predictive regressions By Pitarakis, Jean-Yves; Gonzalo, Jesús; Da Silva Neto, Anibal Emiliano
  11. Testing for Structural Change of Predictive Regression Model to Threshold Predictive Regression Model By Fukang Zhu; Mengya Liu; Shiqing Ling; Zongwu Cai

  1. By: Arteche González, Jesús María
    Abstract: Bootstrap techniques in the frequency domain have proved to be effective instruments for approximating the distribution of many statistics of weakly dependent (short memory) series. However, their validity under long memory has not yet been analysed. This paper proposes a Frequency Domain Local Bootstrap (FDLB) based on resampling a locally studentised version of the periodogram in a neighbourhood of the frequency of interest. A bound on the Mallows distance between the distributions of the original and bootstrap periodograms is offered for stationary and non-stationary long memory series. This result is in turn used to justify the use of the FDLB for some statistics such as the average periodogram or the Local Whittle (LW) estimator. Finally, the finite sample behaviour of the FDLB for the LW estimator is analysed in a Monte Carlo exercise, comparing its performance with that of rival alternatives.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:48980&r=all
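    Code sketch: a minimal illustration of the Local Whittle (LW) estimator referred to in the abstract above (not the paper's FDLB procedure); the ARFIMA(0,d,0) simulation and the bandwidth choice m = n**0.65 are assumptions made purely for illustration.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def local_whittle(x, m):
          """Local Whittle estimate of the memory parameter d (Robinson, 1995)."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          # Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m, and the periodogram at them
          freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
          periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)

          def objective(d):
              # R(d) = log( mean_j lambda_j^(2d) I_j ) - 2d * mean_j log(lambda_j)
              return (np.log(np.mean(freqs ** (2.0 * d) * periodogram))
                      - 2.0 * d * np.mean(np.log(freqs)))

          return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

      # ARFIMA(0, d, 0) with d = 0.3, simulated by truncated fractional integration of white noise
      rng = np.random.default_rng(0)
      n, d_true = 2048, 0.3
      eps = rng.standard_normal(n)
      psi = np.ones(n)                         # MA coefficients of (1-L)^(-d): psi_k = psi_{k-1}*(k-1+d)/k
      for k in range(1, n):
          psi[k] = psi[k - 1] * (k - 1 + d_true) / k
      x = np.convolve(eps, psi)[:n]

      m = int(n ** 0.65)                       # illustrative bandwidth choice
      print("Local Whittle estimate of d:", round(local_whittle(x, m), 2))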
  2. By: Pitarakis, Jean-Yves; Gonzalo, Jesús
    Abstract: This paper is concerned with the interactions of persistence and dimensionality in the context of the eigenvalue estimation problem of large covariance matrices arising in cointegration and principal component analysis. Following a review of the early and more recent developments in this area, we investigate the behaviour of these eigenvalues in a VAR setting that blends pure unit root, local to unit root and mildly integrated components. Our results highlight the seriousness of spurious relationships that may arise in such Big Data environments even when the degree of persistence of the variables involved is mild and affects only a small proportion of a large data matrix, with important implications for forecasts based on principal component regressions and related methods. We argue that first differencing prior to principal component analysis may be suitable even in stationary or nearly-stationary environments.
    Keywords: Principal Components; High Dimensional Covariances; Persistence; Spurious Factors; Spurious Cointegration
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31553&r=all
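    Code sketch: a minimal simulation, not the paper's formal results, of the message in the abstract above: independent random walks produce a dominant, spurious leading principal component in levels, which disappears after first differencing. Sample sizes and dimensions are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      n, N = 500, 50                           # sample size and number of independent series
      eps = rng.standard_normal((n, N))
      levels = np.cumsum(eps, axis=0)          # N independent random walks (pure unit roots)
      diffs = eps                              # their first differences are i.i.d. noise

      def top_eigenvalue_share(X):
          """Share of total variance captured by the largest eigenvalue of the correlation matrix."""
          eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
          return eigvals[-1] / eigvals.sum()

      print("largest eigenvalue share, levels:     ", round(top_eigenvalue_share(levels), 3))
      print("largest eigenvalue share, differences:", round(top_eigenvalue_share(diffs), 3))
      # In levels the leading 'factor' typically absorbs a much larger share of the variance
      # than in differences, even though all series are independent by construction.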
  3. By: J. Eduardo Vera-Valdés (Aalborg University and CREATES)
    Abstract: Econometric studies of global heating have typically used regional or global temperature averages and shown that they exhibit long memory properties. One typical explanation for the long memory properties of temperature averages is cross-sectional aggregation. Nonetheless, a formal analysis of the effect that aggregation has on the long memory dynamics of temperature data has been missing. Thus, this paper studies the long memory properties of individual grid temperatures and compares them against the long memory dynamics of global and regional averages. Our results show that the long memory parameters in individual grid observations are smaller than those from regional averages. Global and regional long memory estimates are found to be greatly affected by temperature measurements in the Tropics, where the data are less reliable. Thus, this paper supports the notion that aggregation may be exacerbating the long memory estimated in regional and global temperature data. The results are robust to the bandwidth parameter, the limit for the station radius of influence, and the sampling frequency.
    Keywords: Global Heating, Temperature Anomalies, Climate Econometrics, Long Memory, Aggregation
    JEL: Q54 C22 C43 C14
    Date: 2020–12–16
    URL: http://d.repec.org/n?u=RePEc:aah:create:2020-16&r=all
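    Code sketch: a stylised illustration of the aggregation mechanism alluded to above, in the spirit of Granger (1980): averaging many short-memory AR(1) series with heterogeneous coefficients yields a larger log-periodogram (GPH) memory estimate than the individual components. The Beta(8,2) coefficient distribution and the bandwidth are assumptions for illustration; this is not the paper's temperature analysis.

      import numpy as np

      def gph_estimate(x, m):
          """Log-periodogram (GPH) estimate of the memory parameter d."""
          n = len(x)
          freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
          periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
          regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
          slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
          return -slope

      rng = np.random.default_rng(2)
      n, K = 4096, 200
      alphas = rng.beta(8, 2, size=K)          # heterogeneous AR(1) coefficients (illustrative choice)
      eps = rng.standard_normal((n, K))
      series = np.empty((n, K))
      series[0] = eps[0]
      for t in range(1, n):
          series[t] = alphas * series[t - 1] + eps[t]

      m = int(n ** 0.5)
      d_individual = np.mean([gph_estimate(series[:, k], m) for k in range(K)])
      d_aggregate = gph_estimate(series.mean(axis=1), m)
      print("average d across individual AR(1) series:", round(d_individual, 2))
      print("d of the cross-sectional average:        ", round(d_aggregate, 2))
      # The aggregate typically yields a noticeably larger memory estimate than the
      # individual short-memory components, mirroring the aggregation effect above.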
  4. By: Lopes Moreira Da Veiga, María Helena; Mao, Xiuping; Casas Villalba, Maria Isabel
    Abstract: We revisit stock market return predictability using the variance risk premium and the conditional variance as predictors in classical predictive regressions and time-varying coefficient predictive regressions. We also propose three new models to forecast the conditional variance and estimate the variance risk premium. Our empirical results show, first, that the flexibility provided by time-varying coefficient regressions often improves the ability of the variance risk premium, the conditional variance, and other control variables to predict stock market returns. Second, the conditional variance and variance risk premium obtained from varying coefficient models perform consistently well at predicting stock market returns. Finally, the time-varying coefficient predictive regressions show that the variance risk premium is a predictor of stock market excess returns before the global financial crisis of 2007, but its predictability decreases in the post-crisis period at the 3-month horizon. At the 12-month horizon, both the variance risk premium and the conditional variance are predictors of stock excess returns during most of 2000-2015.
    Keywords: Variance Risk Premium; Time-Varying Coefficient Predictive Regressions; Time-Varying Coefficient Har-Type Models; Realized Variance; Predictability; Nonparametric Methods
    JEL: G1 C53 C52 C51 C22
    Date: 2020–12–18
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:31648&r=all
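    Code sketch: a simplified stand-in for the time-varying coefficient predictive regressions above: a kernel-weighted (local-constant in rescaled time) regression of next-period returns on a single predictor, so the slope can drift. The synthetic data, the variable names and the Gaussian-kernel bandwidth are illustrative assumptions, not the authors' specification.

      import numpy as np

      def kernel_tv_coefficients(y, x, bandwidth):
          """Time-varying intercept and slope from weighted least squares of y on [1, x],
          with Gaussian kernel weights in rescaled time around each date t."""
          n = len(y)
          grid = np.arange(n) / n
          X = np.column_stack([np.ones(n), x])
          betas = np.empty((n, 2))
          for t in range(n):
              w = np.sqrt(np.exp(-0.5 * ((grid - grid[t]) / bandwidth) ** 2))
              betas[t] = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
          return betas

      # Illustrative data: the return's sensitivity to the predictor vanishes halfway through
      rng = np.random.default_rng(3)
      n = 601
      predictor = rng.standard_normal(n)       # hypothetical predictor (e.g. a variance-risk-premium proxy)
      slope_path = np.where(np.arange(n) < n // 2, 0.5, 0.0)
      returns = np.empty(n)
      returns[0] = rng.standard_normal()
      returns[1:] = slope_path[:-1] * predictor[:-1] + rng.standard_normal(n - 1)

      betas = kernel_tv_coefficients(returns[1:], predictor[:-1], bandwidth=0.1)
      print("estimated slope early in the sample:", round(betas[100, 1], 2))
      print("estimated slope late in the sample: ", round(betas[500, 1], 2))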
  5. By: Pitarakis, Jean-Yves; Gonzalo, Jesús
    Abstract: This paper is concerned with detecting the presence of out of sample predictability in linear predictive regressions with a potentially large set of candidate predictors. We propose a procedure based on out of sample MSE comparisons that is implemented in a pairwise manner using one predictor at a time, resulting in an aggregate test statistic that is standard normally distributed under the null hypothesis of no linear predictability. Predictors can be highly persistent, purely stationary or a combination of both. Upon rejection of the null hypothesis we subsequently introduce a predictor screening procedure designed to identify the most active predictors.
    Keywords: High Dimensional Predictability; Predictive Regressions; Forecasting
    JEL: C53 C52 C32 C12
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31554&r=all
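    Code sketch: not the authors' statistic, but a rough illustration of the pairwise idea: for each candidate predictor, compare the recursive out-of-sample MSE of a simple predictive regression against the prevailing-mean benchmark with a Diebold-Mariano-style t-statistic, then pool the per-predictor statistics. The naive pooling below ignores dependence across predictors and is purely illustrative.

      import numpy as np

      def recursive_oos_errors(y, x, start):
          """One-step-ahead errors from y_{t+1} = a + b*x_t + e and from the prevailing mean,
          both re-estimated recursively with data up to date t."""
          e_model, e_bench = [], []
          for t in range(start, len(y) - 1):
              X = np.column_stack([np.ones(t), x[:t]])
              beta = np.linalg.lstsq(X, y[1:t + 1], rcond=None)[0]
              e_model.append(y[t + 1] - (beta[0] + beta[1] * x[t]))
              e_bench.append(y[t + 1] - y[1:t + 1].mean())
          return np.array(e_model), np.array(e_bench)

      def dm_statistic(e_model, e_bench):
          """t-statistic on the squared-error loss differential (benchmark minus model)."""
          d = e_bench ** 2 - e_model ** 2
          return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

      rng = np.random.default_rng(4)
      n, p = 400, 10
      X = rng.standard_normal((n, p))          # candidate predictors; only the first one matters
      y = np.empty(n)
      y[0] = rng.standard_normal()
      y[1:] = 0.4 * X[:-1, 0] + rng.standard_normal(n - 1)

      stats = [dm_statistic(*recursive_oos_errors(y, X[:, j], start=100)) for j in range(p)]
      print("per-predictor statistics:", np.round(stats, 2))
      print("naive pooled statistic:  ", round(np.sum(stats) / np.sqrt(p), 2))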
  6. By: Eric Hillebrand (Aarhus University and CREATES); Jakob Mikkelsen (Danmarks Nationalbank); Lars Spreng (Cass Business School); Giovanni Urga (Cass Business School and Bergamo University)
    Abstract: We examine the relationship between exchange rates and macroeconomic fundamentals using a two-step maximum likelihood estimator through which we compute time-varying factor loadings. Factors are obtained as principal components extracted from a large macro-dataset. Using 14 currencies over 1995–2018, we show that the loadings on the factors vary considerably over time, with frequent sign changes. Allowing for time-varying loadings increases the percentage of explained variation in exchange rates by an order of magnitude. Accounting for instabilities improves the predictive ability of the model globally and locally during crises, and yields better forecasts of sign changes in exchange rates.
    Keywords: foreign exchange rates, macroeconomic factors, time-varying loadings, high-dimensional factor models, exchange rate forecasting
    JEL: C32 C38 C51 C52 C53 C55 F31
    Date: 2020–12–21
    URL: http://d.repec.org/n?u=RePEc:aah:create:2020-19&r=all
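    Code sketch: a crude way to visualise time variation in factor loadings, not the authors' two-step maximum-likelihood estimator: extract principal-component factors from a (here synthetic) macro panel and re-estimate the loadings of a currency return on those factors over rolling windows. All data and dimensions are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)
      T, N = 300, 40
      macro_panel = rng.standard_normal((T, N))               # stand-in for a large macro dataset
      Z = (macro_panel - macro_panel.mean(0)) / macro_panel.std(0)
      eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
      factors = Z @ eigvecs[:, ::-1][:, :2]                   # first two principal components

      # A currency return whose loading on the first factor flips sign mid-sample (illustrative)
      loading_path = np.where(np.arange(T) < T // 2, 1.0, -1.0)
      fx_return = loading_path * factors[:, 0] + 0.5 * rng.standard_normal(T)

      window = 60
      for start in range(0, T - window + 1, window):
          sl = slice(start, start + window)
          X = np.column_stack([np.ones(window), factors[sl]])
          b = np.linalg.lstsq(X, fx_return[sl], rcond=None)[0]
          print(f"window starting at t={start}: loading on factor 1 = {b[1]:.2f}")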
  7. By: David Carl; Christian Ewerhart
    Abstract: For users of the Ethereum network, the gas price is a crucial parameter that determines how swiftly the decentralized consensus protocol confirms a transaction. This paper studies the statistics of the Ethereum gas price. We start with some conceptual discussion of the gas price notion in view of the actual transaction-selection strategies used by Ethereum miners. Subsequently, we provide the descriptive statistics of what we call the threshold gas price. Finally, we identify and estimate a seasonal ARIMA (SARIMA) model for predicting the hourly median of the threshold gas price.
    Keywords: Ethereum, gas price, confirmation time, forecasting
    JEL: C57 C22 D85 E37
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:373&r=all
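    Code sketch: a minimal example of fitting and forecasting a seasonal ARIMA for an hourly series with statsmodels; the synthetic data and the (1,0,1)x(1,0,1,24) order are assumptions for illustration, not the model identified in the paper.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Synthetic hourly series with a daily (24-hour) seasonal pattern, standing in for
      # the hourly median threshold gas price (illustrative only).
      rng = np.random.default_rng(6)
      idx = pd.date_range("2020-01-01", periods=24 * 60, freq="H")
      series = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.hour.to_numpy() / 24)
                         + rng.standard_normal(len(idx)), index=idx)

      model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24), trend="c")
      fit = model.fit(disp=False)
      print(fit.summary().tables[1])
      print(fit.forecast(steps=24).head())     # one day ahead, hour by hour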
  8. By: Michael W. McCracken
    Abstract: We investigate a test of conditional predictive ability described in Giacomini and White (2006; Econometrica). Our main goal is simply to demonstrate existence of the null hypothesis and, in doing so, clarify just how unlikely it is for this hypothesis to hold. We do so using a simple example of point forecasting under quadratic loss. We then provide simulation evidence on the size and power of the test. While the test can be accurately sized, we find that its power is typically low.
    Keywords: prediction; out-of-sample; inference
    JEL: C53 C12 C52
    Date: 2020–12–18
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:89216&r=all
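    Code sketch: a compact version of the one-step-ahead Giacomini-White (2006) statistic: with instruments h_t (a constant and lags of the loss differential), form Z_t = h_{t-1} * d_t and compare n * Zbar' Omega^{-1} Zbar with a chi-squared(q) critical value. The loss-differential series below is synthetic and the choice of instruments is an illustrative assumption.

      import numpy as np
      from scipy.stats import chi2

      def gw_conditional_test(loss_diff, lags=1):
          """One-step-ahead Giacomini-White test of equal conditional predictive ability,
          using a constant and `lags` lags of the loss differential as instruments."""
          d = np.asarray(loss_diff, dtype=float)
          h = np.column_stack([np.ones(len(d) - lags)] +
                              [d[lags - k - 1:len(d) - k - 1] for k in range(lags)])
          Z = h * d[lags:, None]                         # Z_t = h_{t-1} * d_t
          n, q = Z.shape
          zbar = Z.mean(axis=0)
          omega = Z.T @ Z / n                            # no HAC correction needed at one step ahead
          stat = n * zbar @ np.linalg.solve(omega, zbar)
          return stat, 1.0 - chi2.cdf(stat, df=q)

      # Illustrative loss differential between two competing one-step-ahead forecasts
      rng = np.random.default_rng(7)
      loss_diff = rng.standard_normal(500) + 0.1         # small average advantage for one forecast
      stat, pval = gw_conditional_test(loss_diff)
      print(f"GW statistic = {stat:.2f}, p-value = {pval:.3f}")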
  9. By: Roberto Casarin (University Ca' Foscari of Venice, Italy); Stefano Grassi (University of Rome Tor Vergata, Italy); Francesco Ravazzolo (Free University of Bozen-Bolzano, Italy; BI Norwegian Business School, Norway; Rimini Centre for Economic Analysis); Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam, The Netherlands; Norges Bank, Norway; Tinbergen Institute, The Netherlands; Rimini Centre for Economic Analysis)
    Abstract: A Bayesian dynamic compositional model is introduced that can deal with combining a large set of predictive densities. It extends the mixture of experts and the smoothly mixing regression models by allowing for combination weight dependence across models and time. A compositional model with logistic-normal noise is specified for the latent weight dynamics, and the class-preserving property of the logistic-normal is used to reduce the dimension of the latent space and to build a compositional factor model. The projection used in the dimensionality reduction is based on a dynamic clustering process which partitions the large set of predictive densities into a smaller number of subsets. We exploit the state space form of the model to provide an efficient inference procedure based on Particle MCMC. The approach is applied to track the Standard & Poor's 500 index by combining 3712 predictive densities, based on 1856 US individual stocks, clustered into a relatively small number of model sets. For the period 2007-2009, which included the financial crisis, substantial predictive gains are obtained, in particular in the tails as measured by Value-at-Risk. Similar predictive gains are obtained for the US Treasury Bill yield using a large set of macroeconomic variables. Evidence obtained on model set incompleteness and dynamic patterns in the financial clusters provides valuable signals for improved modelling and more effective economic and financial decisions.
    Keywords: Density Combination, Large Set of Predictive Densities, Compositional Factor Models, Nonlinear State Space, Bayesian Inference
    JEL: C11 C15 C53 E37
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:20-27&r=all
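    Code sketch: only the weight-evolution building block mentioned in the abstract, not the full compositional factor model or the Particle-MCMC inference: combination weights in additive-log-ratio coordinates follow a Gaussian random walk (logistic-normal noise), and mapping them back through the logistic transform keeps them on the simplex at every date. Dimensions, variances and the Gaussian predictive densities are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      K, T, sigma = 3, 200, 0.15               # number of models, periods, innovation std (illustrative)

      # Additive log-ratio coordinates (relative to model K) follow a random walk
      alr = np.zeros((T, K - 1))
      for t in range(1, T):
          alr[t] = alr[t - 1] + sigma * rng.standard_normal(K - 1)

      # Back to the simplex: w_k = exp(alr_k)/(1 + sum_j exp(alr_j)), w_K = 1/(1 + sum_j exp(alr_j))
      expo = np.exp(alr)
      weights = np.column_stack([expo, np.ones(T)]) / (1.0 + expo.sum(axis=1))[:, None]
      assert np.allclose(weights.sum(axis=1), 1.0)       # weights stay on the simplex at every date

      # Combine K Gaussian predictive densities with the final-date weights
      grid = np.linspace(-4, 4, 101)
      component_means = np.array([-1.0, 0.0, 1.0])
      combined = weights[-1] @ norm.pdf(grid[None, :], loc=component_means[:, None], scale=1.0)
      print("combination weights at the final date:", np.round(weights[-1], 3))
      print("combined predictive density at 0:     ", round(combined[50], 3))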
  10. By: Pitarakis, Jean-Yves; Gonzalo, Jesús; Da Silva Neto, Anibal Emiliano
    Abstract: We introduce a set of test statistics for assessing the presence of regimes in out of sample forecast errors produced by recursively estimated linear predictive regressions that can accommodate multiple highly persistent predictors. Our test statistics are designed to be robust to the chosen starting window size and are shown to be both consistent and locally powerful. Their limiting null distributions are also free of nuisance parameters and hence robust to the degree of persistence of the predictors. Our methods are subsequently applied to the predictability of the value premium, whose dynamics are shown to be characterised by state dependence.
    Keywords: Thresholds; Cusum; Out Of Sample Forecast Errors; Predictability; Predictive Regressions
    JEL: C58 C53 C22 C12
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31555&r=all
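    Code sketch: not the authors' test statistics, but a rough illustration of the object they are built on: recursive out-of-sample forecast errors from a predictive regression, summarised by a CUSUM path of the squared errors whose drift can signal regime-like behaviour. The synthetic break design and the crude standardisation (which ignores serial dependence) are illustrative assumptions.

      import numpy as np

      def recursive_forecast_errors(y, x, start):
          """One-step-ahead errors from y_{t+1} = a + b*x_t + e, re-estimated recursively."""
          errors = []
          for t in range(start, len(y) - 1):
              X = np.column_stack([np.ones(t), x[:t]])
              beta = np.linalg.lstsq(X, y[1:t + 1], rcond=None)[0]
              errors.append(y[t + 1] - (beta[0] + beta[1] * x[t]))
          return np.array(errors)

      # Illustrative data with a mid-sample loss of predictability
      rng = np.random.default_rng(9)
      n = 500
      x = rng.standard_normal(n)
      slope = np.where(np.arange(n) < n // 2, 0.8, 0.0)
      y = np.empty(n)
      y[0] = rng.standard_normal()
      y[1:] = slope[:-1] * x[:-1] + rng.standard_normal(n - 1)

      e2 = recursive_forecast_errors(y, x, start=100) ** 2
      cusum = np.cumsum(e2 - e2.mean()) / (e2.std(ddof=1) * np.sqrt(len(e2)))
      print("max |CUSUM| of squared recursive forecast errors:", round(np.abs(cusum).max(), 2))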
  11. By: Fukang Zhu (School of Mathematics, Jilin University, Changchun, Jilin 130012, China); Mengya Liu (School of Mathematics, Jilin University, Changchun, Jilin 130012, China); Shiqing Ling (Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: This paper investigates two test statistics for structural changes and thresholds in predictive regression models. The generalized likelihood ratio (GLR) test is proposed for the stationary predictor and the generalized F test is suggested for the unit root predictor. Under the null hypothesis of no structural change and threshold, it is shown that the GLR test statistic converges to a function of a centered Gaussian process, and the generalized F test statistic converges to a function of Brownian motions. A bootstrap method is proposed to obtain the critical values of the test statistics. Simulation studies and a real example are given to assess the performance of the tests.
    Keywords: Bootstrap Method; Generalized F Test; Generalized Likelihood Ratio Test; Predictive Regression; Structural Change; Threshold Model
    JEL: C12 C22 C58 G12
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202021&r=all
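    Code sketch: a stylised sup-Wald test for a single break in the slope of a predictive regression with a stationary predictor, with critical values from a simple residual bootstrap under the no-break null. This mirrors the flavour of the GLR/bootstrap approach described above but is not the paper's statistic; the data, trimming and number of bootstrap replications are illustrative assumptions.

      import numpy as np

      def sup_wald_break(y, x, trim=0.15):
          """Sup-Wald statistic over candidate break dates for y_{t+1} = a + b*x_t + e."""
          yy, xx = y[1:], x[:-1]
          n = len(yy)
          X0 = np.column_stack([np.ones(n), xx])
          rss0 = np.sum((yy - X0 @ np.linalg.lstsq(X0, yy, rcond=None)[0]) ** 2)
          best = 0.0
          for k in range(int(trim * n), int((1 - trim) * n)):
              dummy = (np.arange(n) >= k).astype(float)
              X1 = np.column_stack([X0, dummy, dummy * xx])
              rss1 = np.sum((yy - X1 @ np.linalg.lstsq(X1, yy, rcond=None)[0]) ** 2)
              best = max(best, n * (rss0 - rss1) / rss1)
          return best

      def bootstrap_pvalue(y, x, stat, n_boot=199, seed=0):
          """Residual bootstrap: refit the no-break model, resample its residuals, recompute the test."""
          rng = np.random.default_rng(seed)
          yy, xx = y[1:], x[:-1]
          X0 = np.column_stack([np.ones(len(yy)), xx])
          beta0 = np.linalg.lstsq(X0, yy, rcond=None)[0]
          resid = yy - X0 @ beta0
          exceed = 0
          for _ in range(n_boot):
              y_star = np.concatenate([[y[0]], X0 @ beta0 + rng.choice(resid, size=len(yy))])
              exceed += sup_wald_break(y_star, x) >= stat
          return (1 + exceed) / (1 + n_boot)

      rng = np.random.default_rng(10)
      n = 300
      x = rng.standard_normal(n)               # stationary predictor (illustrative)
      slope = np.where(np.arange(n) < n // 2, 0.0, 0.6)
      y = np.empty(n)
      y[0] = rng.standard_normal()
      y[1:] = slope[:-1] * x[:-1] + rng.standard_normal(n - 1)

      stat = sup_wald_break(y, x)
      print("sup-Wald statistic:", round(stat, 2), " bootstrap p-value:", bootstrap_pvalue(y, x, stat))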

This nep-ets issue is ©2021 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.