nep-ets New Economics Papers
on Econometric Time Series
Issue of 2021‒08‒16
eight papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Dynamic functional time-series forecasts of foreign exchange implied volatility surfaces By Han Lin Shang; Fearghal Kearney
  2. Inference in heavy-tailed non-stationary multivariate time series By Matteo Barigozzi; Giuseppe Cavaliere; Lorenzo Trapani
  3. Feature-based Bayesian forecasting model averaging By Li Li; Yanfei Kang; Feng Li
  4. Sparse Temporal Disaggregation By Luke Mosley; Idris Eckley; Alex Gibberd
  5. Bootstrap long memory processes in the frequency domain By Hidalgo, Javier
  6. A dynamic leverage stochastic volatility model By Nguyen, Hoang; Nguyen, Trong-Nghia; Tran, Minh-Ngoc
  7. On Modelling of Crude Oil Futures in a Bivariate State-Space Framework By Peilun He; Karol Binkowski; Nino Kordzakhia; Pavel Shevchenko
8. On the Parameter Estimation in the Schwartz-Smith Two-Factor Model By Karol Binkowski; Peilun He; Nino Kordzakhia; Pavel Shevchenko

  1. By: Han Lin Shang; Fearghal Kearney
    Abstract: This paper presents static and dynamic versions of univariate, multivariate, and multilevel functional time-series methods to forecast implied volatility surfaces in foreign exchange markets. We find that dynamic functional principal component analysis generally improves out-of-sample forecast accuracy. More specifically, the dynamic univariate functional time-series method shows the greatest improvement. Our models lead to multiple instances of statistically significant improvements in forecast accuracy for daily EUR-USD, EUR-GBP, and EUR-JPY implied volatility surfaces across various maturities, when benchmarked against established methods. A stylised trading strategy is also employed to demonstrate the potential economic benefits of our proposed approach.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.14026&r=
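    A minimal Python sketch of the functional principal component forecasting idea above, run on a synthetic panel of vectorised surfaces: extract PC scores, forecast each score with an AR(1), and rebuild the surface. The grid size, number of components, and AR(1) score dynamics are illustrative assumptions, not the authors' specification.

      # Sketch: FPCA-style one-step-ahead surface forecast on synthetic data
      import numpy as np

      rng = np.random.default_rng(0)
      T, G = 500, 40                        # days, grid points on the surface
      scores_true = np.cumsum(rng.normal(size=(T, 2)) * 0.05, axis=0)
      loadings = rng.normal(size=(2, G))
      X = 0.2 + scores_true @ loadings + rng.normal(scale=0.01, size=(T, G))

      mu = X.mean(axis=0)
      Xc = X - mu
      # principal components of the (discretised) functional data
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
      K = 2
      scores = Xc @ Vt[:K].T                # PC score time series

      # forecast each score with a fitted AR(1), then rebuild the surface
      def ar1_forecast(z):
          phi = np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])
          return phi * z[-1]

      score_hat = np.array([ar1_forecast(scores[:, k]) for k in range(K)])
      surface_hat = mu + score_hat @ Vt[:K]   # one-step-ahead surface
      print(surface_hat[:5])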
  2. By: Matteo Barigozzi; Giuseppe Cavaliere; Lorenzo Trapani
Abstract: We study inference on the common stochastic trends in a non-stationary, $N$-variate time series $y_{t}$, in the possible presence of heavy tails. We propose a novel methodology which does not require any knowledge or estimation of the tail index, or even knowledge as to whether certain moments (such as the variance) exist or not, and develop an estimator of the number of stochastic trends $m$ based on the eigenvalues of the sample second moment matrix of $y_{t}$. We study the rates of these eigenvalues, showing that the first $m$ of them diverge, as the sample size $T$ passes to infinity, at a rate faster by $O(T)$ than the remaining $N-m$, irrespective of the tail index. We exploit this eigen-gap by constructing, for each eigenvalue, a test statistic which diverges to positive infinity or drifts to zero according to whether the relevant eigenvalue belongs to the set of the first $m$ eigenvalues or not. We then construct a randomised statistic based on this, using it as part of a sequential testing procedure that ensures consistency of the resulting estimator of $m$. We also discuss an estimator of the common trends based on principal components and show that, up to an invertible linear transformation, this estimator is consistent in the sense that the estimation error is of smaller order than the trend itself. Finally, we relax the standard assumption of \textit{i.i.d.} innovations by allowing for heterogeneity of a very general form in the scale of the innovations. A Monte Carlo study shows that the proposed estimator of $m$ performs particularly well, even in small samples. We complete the paper by presenting four illustrative applications covering commodity prices, interest rates data, long run PPP and cryptocurrency markets.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.13894&r=
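    A toy simulation of the eigen-gap that the estimator of $m$ exploits: with $m$ common random-walk trends, the first $m$ eigenvalues of the sample second moment matrix dwarf the rest. This is not the paper's randomised test, and all dimensions are illustrative.

      # Sketch: eigen-gap in the sample second moment matrix
      import numpy as np

      rng = np.random.default_rng(1)
      T, N, m = 2000, 6, 2
      trends = np.cumsum(rng.normal(size=(T, m)), axis=0)   # m random walks
      loadings = rng.normal(size=(m, N))
      y = trends @ loadings + rng.normal(size=(T, N))       # stationary residual

      S = y.T @ y / T                                       # sample second moment
      eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]
      print(np.round(eigvals, 1))
      # the first m eigenvalues are larger by a factor of order T than the
      # rest; scanning for that gap recovers m without any tail-index estimate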
  3. By: Li Li; Yanfei Kang; Feng Li
Abstract: In this work, we propose a novel framework for density forecast combination that constructs time-varying weights from time series features, called Feature-based Bayesian Forecasting Model Averaging (FEBAMA). Our framework estimates the combination weights via Bayesian log predictive scores, so that the optimal forecast combination is determined by time series features computed from historical information. In particular, we use an automatic Bayesian variable selection method to weigh the importance of the different features. As a result, our approach is more interpretable than black-box forecasting combination schemes. We apply our framework to stock market data and the M3 competition data. Within this structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection further enhances accuracy for both point and density forecasts.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02082&r=
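    A schematic sketch of the core mechanism described above: time-varying combination weights obtained as a softmax of linear functions of time-series features, scored by the log predictive score. The FEBAMA paper estimates the feature coefficients by Bayesian methods with variable selection; here they are random stand-ins.

      # Sketch: feature-driven, time-varying density combination weights
      import numpy as np

      rng = np.random.default_rng(2)
      T, J, F = 200, 3, 4                      # periods, models, features
      features = rng.normal(size=(T, F))
      beta = rng.normal(size=(F, J))           # stand-in for posterior draws
      logits = features @ beta
      weights = np.exp(logits - logits.max(axis=1, keepdims=True))
      weights /= weights.sum(axis=1, keepdims=True)   # rows sum to one

      dens = rng.uniform(0.1, 1.0, size=(T, J))       # stand-in predictive densities
      combined = (weights * dens).sum(axis=1)
      log_score = np.log(combined).sum()              # log predictive score
      print(round(log_score, 2))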
  4. By: Luke Mosley; Idris Eckley; Alex Gibberd
Abstract: Temporal disaggregation is a method commonly used in official statistics to enable high-frequency estimates of key economic indicators, such as GDP. Traditionally, such methods have relied on only a couple of high-frequency indicator series to produce estimates. However, the prevalence of large, and increasing, volumes of administrative and alternative data sources motivates the need for such methods to be adapted to high-dimensional settings. In this article, we propose a novel sparse temporal-disaggregation procedure and contrast it with the classical Chow-Lin method. We demonstrate the performance of the proposed method through a simulation study, highlighting the advantages it realises. We also explore its application to the disaggregation of UK gross domestic product data, demonstrating the method's ability to operate when the number of potential indicators exceeds the number of low-frequency observations.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.05783&r=
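    A stylised sketch of the sparse idea: regress the low-frequency target on aggregated high-frequency indicators with an L1 penalty (scikit-learn's Lasso as a stand-in for the paper's estimator), then spread the low-frequency residual evenly so the aggregation constraint binds exactly. The paper's procedure, like Chow-Lin, also models the error structure, which is omitted here.

      # Sketch: L1-penalised temporal disaggregation with p > n_lo indicators
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n_lo, k = 40, 4                          # low-freq obs, periods per obs
      n_hi, p = n_lo * k, 100                  # indicators outnumber n_lo
      X = rng.normal(size=(n_hi, p))
      beta_true = np.zeros(p); beta_true[:3] = [1.5, -2.0, 1.0]
      x_hi = X @ beta_true + rng.normal(scale=0.5, size=n_hi)

      C = np.kron(np.eye(n_lo), np.ones(k))    # aggregation matrix
      y_lo = C @ x_hi                          # observed low-frequency totals

      model = Lasso(alpha=0.1).fit(C @ X, y_lo)      # sparse regression
      fit_hi = X @ model.coef_ + model.intercept_ / k
      resid_lo = y_lo - C @ fit_hi
      x_hat = fit_hi + (C.T @ resid_lo) / k          # spread residual evenly
      print(np.abs(C @ x_hat - y_lo).max())          # constraint holds (~0)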
  5. By: Hidalgo, Javier
Abstract: The aim of the paper is to describe a bootstrap that, unlike the sieve bootstrap, is valid under either long memory (LM) or short memory (SM) dependence. One of the reasons for the failure of the sieve bootstrap in this context is that, under LM dependence, it may not be able to capture the true covariance structure of the original data. We also describe and examine the validity of the bootstrap scheme for the least squares estimator of the parameter in a regression model and for model specification. The motivation for the latter example comes from the observation that the asymptotic distribution of the test is intractable.
    Keywords: Long memory; bootstrap methods; aggregation; semiparametric model
    JEL: J1 C1
    Date: 2020–07–21
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:106149&r=
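    A minimal frequency-domain bootstrap sketch using the standard multiplicative approximation, in which bootstrap periodogram ordinates are drawn as a smoothed spectral estimate times Exp(1) variates. Short-memory AR(1) data are used purely for illustration; the paper's scheme for long-memory processes is more involved.

      # Sketch: multiplicative periodogram bootstrap for a spectral mean
      import numpy as np

      rng = np.random.default_rng(4)
      T = 512
      x = np.zeros(T)
      for t in range(1, T):                    # AR(1) data for illustration
          x[t] = 0.7 * x[t - 1] + rng.normal()

      freqs = np.arange(1, T // 2)             # Fourier frequency indices
      I = np.abs(np.fft.fft(x - x.mean())[freqs]) ** 2 / (2 * np.pi * T)

      h = 9                                    # kernel-smoothed spectral estimate
      f_hat = np.convolve(I, np.ones(h) / h, mode="same")

      stat = I.mean()                          # e.g. a spectral mean statistic
      boot = np.array([(f_hat * rng.exponential(size=f_hat.size)).mean()
                       for _ in range(999)])
      print(np.quantile(boot, [0.025, 0.975]), stat)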
  6. By: Nguyen, Hoang (Örebro University School of Business); Nguyen, Trong-Nghia (The University of Sydney Business School); Tran, Minh-Ngoc (The University of Sydney Business School)
Abstract: Stock returns are modelled as a convolution of two random processes: the return innovation and the volatility innovation. The correlation between these two processes tends to be negative, the so-called leverage effect. In this study, we propose a dynamic leverage stochastic volatility (DLSV) model in which the correlation structure between the return innovation and the volatility innovation follows a generalized autoregressive score (GAS) process. We find that the leverage effect is reinforced during market downturns and weakened during market upturns.
    Keywords: Dynamic leverage; GAS; stochastic volatility (SV)
    JEL: C11 C52 C58
    Date: 2021–05–20
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2021_014&r=
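    A schematic simulation of the mechanism above: a GAS-type recursion drives the leverage correlation through a tanh link. The driving "score" below is a crude mean-zero stand-in for the model's scaled score, and all parameter values are invented.

      # Sketch: stochastic volatility with a GAS-driven leverage correlation
      import numpy as np

      rng = np.random.default_rng(5)
      T = 1000
      omega, a, b = 0.0, 0.05, 0.95            # GAS recursion parameters
      mu_h, phi, sigma_eta = -1.0, 0.97, 0.15  # log-volatility dynamics
      f = np.zeros(T); h = np.zeros(T); r = np.zeros(T)

      for t in range(T - 1):
          rho = np.tanh(f[t])                  # time-varying leverage
          eps = rng.normal()
          eta = rho * eps + np.sqrt(1 - rho**2) * rng.normal()
          r[t] = np.exp(h[t] / 2) * eps        # return
          h[t + 1] = mu_h + phi * (h[t] - mu_h) + sigma_eta * eta
          score = eps * eta - rho              # crude proxy for the scaled score
          f[t + 1] = omega + b * f[t] + a * score   # GAS update

      print(np.tanh(f).min(), np.tanh(f).max())     # range of leverage path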
  7. By: Peilun He; Karol Binkowski; Nino Kordzakhia; Pavel Shevchenko
Abstract: We study a bivariate latent factor model for the pricing of commodity futures. The two unobservable state variables representing the short and long term factors are modelled as Ornstein-Uhlenbeck (OU) processes. The Kalman Filter (KF) algorithm has been implemented to estimate the unobservable factors as well as unknown model parameters. The estimates of the model parameters were obtained by maximising a Gaussian likelihood function. The algorithm has been applied to WTI Crude Oil NYMEX futures data.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.01886&r=
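    A minimal Kalman-filter sketch for a two-factor OU state-space model in discrete time, accumulating the Gaussian log-likelihood that the paper maximises over the model parameters. Drift terms are dropped and all parameter values are illustrative, not estimates.

      # Sketch: Kalman filter log-likelihood for a two-factor OU model
      import numpy as np

      rng = np.random.default_rng(6)
      dt = 1 / 252
      kappa = np.array([1.5, 0.1])             # mean-reversion speeds
      A = np.diag(np.exp(-kappa * dt))         # discretised OU transition
      Q = np.diag([0.02, 0.01]) * dt           # state noise covariance
      H = np.array([[1.0, 1.0]])               # log-futures = sum of factors
      R = np.array([[1e-4]])                   # measurement noise variance

      # simulate the latent factors and one observed log-futures series
      T = 500
      x = np.zeros((T, 2))
      for t in range(T - 1):
          x[t + 1] = A @ x[t] + rng.multivariate_normal(np.zeros(2), Q)
      y = x @ H.T + rng.normal(scale=1e-2, size=(T, 1))

      # standard Kalman recursion accumulating the Gaussian log-likelihood
      m = np.zeros(2); P = np.eye(2) * 0.1; loglik = 0.0
      for t in range(T):
          m, P = A @ m, A @ P @ A.T + Q                  # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          v = y[t] - H @ m                               # innovation
          loglik += -0.5 * (np.log(2 * np.pi * S[0, 0]) + v[0]**2 / S[0, 0])
          m, P = m + K @ v, (np.eye(2) - K @ H) @ P      # update
      print(round(loglik, 1))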
  8. By: Karol Binkowski; Peilun He; Nino Kordzakhia; Pavel Shevchenko
Abstract: The two unobservable state variables representing the short and long term factors introduced by Schwartz and Smith in [16] for risk-neutral pricing of futures contracts are modelled as two correlated Ornstein-Uhlenbeck processes. The Kalman Filter (KF) method has been implemented to estimate the short and long term factors jointly with unknown model parameters. The parameter identification problem arising within the likelihood function in the KF has been addressed by introducing an additional constraint. The obtained model parameter estimates are the conditional Maximum Likelihood Estimators (MLEs) evaluated within the KF. Consistency of the conditional MLEs is studied. The methodology has been tested on simulated data.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.01881&r=

This nep-ets issue is ©2021 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.