nep-ets New Economics Papers
on Econometric Time Series
Issue of 2023‒04‒03
seven papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Realized recurrent conditional heteroskedasticity model for volatility modelling By Chen Liu; Chao Wang; Minh-Ngoc Tran; Robert Kohn
  2. Creating Disasters: Recession Forecasting with GAN-Generated Synthetic Time Series Data By Sam Dannels
  3. Constructing High Frequency Economic Indicators by Imputation By Serena Ng; Susannah Scanlan
  4. Estimation of high-dimensional change-points under a group sparsity structure By Cai, Hanqing; Wang, Tengyao
  5. An adaptive volatility method for probabilistic forecasting and its application to the M6 financial forecasting competition By Joseph de Vilmarest; Nicklas Werge
  6. Oil and the Stock Market Revisited: A Mixed Functional VAR Approach By Yoosoon Chang; Hilde C. Bjornland; Jamie L. Cross
  7. Information criteria for outlier detection avoiding arbitrary significance levels By Riani, Marco; Atkinson, Anthony C.; Corbellini, Aldo; Farcomeni, Alessio; Laurini, Fabrizio

  1. By: Chen Liu; Chao Wang; Minh-Ngoc Tran; Robert Kohn
    Abstract: We propose a new approach to volatility modelling by combining deep learning (LSTM) and realized volatility measures. This LSTM-enhanced realized GARCH framework incorporates and distills modeling advances from financial econometrics, high frequency trading data and deep learning. Bayesian inference via the Sequential Monte Carlo method is employed for statistical inference and forecasting. The new framework can jointly model the returns and realized volatility measures, has an excellent in-sample fit and superior predictive performance compared to several benchmark models, while being able to adapt well to the stylized facts in volatility. The performance of the new framework is tested using a wide range of metrics, from marginal likelihood, volatility forecasting, to tail risk forecasting and option pricing. We report on a comprehensive empirical study using 31 widely traded stock indices over a time period that includes the COVID-19 pandemic.
    Date: 2023–02
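The core realized-GARCH idea, before the paper's LSTM enhancement replaces the linear recursion with a recurrent network, can be sketched as a variance filter driven by a lagged realized measure. The parameter values and the squared-return proxy below are illustrative assumptions, not the paper's specification or estimates:

```python
import numpy as np

def realized_garch_filter(returns, rv, omega=0.05, beta=0.7, gamma=0.25):
    """Filter conditional variances from returns and a realized measure.

    A minimal realized-GARCH-style recursion with hypothetical
    parameters; the paper's framework replaces this linear update
    with an LSTM and adds a measurement equation for rv.
    """
    h = np.empty(len(returns))
    h[0] = np.var(returns)               # initialize at the sample variance
    for t in range(1, len(returns)):
        # variance update driven by the lagged realized measure rv
        h[t] = omega + beta * h[t - 1] + gamma * rv[t - 1]
    return h

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)            # simulated daily returns
rv = r**2                                # crude realized-variance proxy
h = realized_garch_filter(r, rv)
```

With omega > 0 and nonnegative beta, gamma, the filtered variances stay positive; estimating the parameters (here fixed by assumption) is where the paper's Sequential Monte Carlo machinery comes in.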
  2. By: Sam Dannels
    Abstract: A common problem when forecasting rare events, such as recessions, is limited data availability. Recent advancements in deep learning and generative adversarial networks (GANs) make it possible to produce high-fidelity synthetic data in large quantities. This paper uses a model called DoppelGANger, a GAN tailored to producing synthetic time series data, to generate synthetic Treasury yield time series and associated recession indicators. It is then shown that short-range forecasting performance for Treasury yields is improved for models trained on synthetic data relative to models trained only on real data. Finally, synthetic recession conditions are produced and used to train classification models to predict the probability of a future recession. It is shown that training models on synthetic recessions can improve a model's ability to predict future recessions over a model trained only on real data.
    Date: 2023–02
  3. By: Serena Ng; Susannah Scanlan
    Abstract: Monthly and weekly economic indicators are often taken to be the largest common factor estimated from high and low frequency data, either separately or jointly. To incorporate mixed frequency information without directly modeling them, we target a low frequency diffusion index that is already available, and treat high frequency values as missing. We impute these values using multiple factors estimated from the high frequency data. In the empirical examples considered, static matrix completion that does not account for serial correlation in the idiosyncratic errors yields imprecise estimates of the missing values irrespective of how the factors are estimated. Single equation and systems-based dynamic procedures yield imputed values that are closer to the observed ones. This is the case in the counterfactual exercise that imputes the monthly values of consumer sentiment series before 1978 when the data was released only on a quarterly basis. This is also the case for a weekly version of the CFNAI index of economic activity that is imputed using seasonally unadjusted data. The imputed series reveals episodes of increased variability of weekly economic information that are masked by the monthly data, notably around the 2014-15 collapse in oil prices.
    Date: 2023–03
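The static matrix-completion baseline that the abstract finds imprecise can be sketched as an iterative SVD impute-and-refit loop. By construction this ignores serial correlation in the idiosyncratic errors, which is exactly the weakness the paper documents; the rank and iteration count are illustrative assumptions:

```python
import numpy as np

def factor_impute(X, mask, n_factors=2, n_iter=50):
    """Impute missing entries of a T x N panel with a static factor model.

    mask is True where X is observed. This is a plain SVD-based matrix
    completion sketch, not the paper's dynamic single-equation or
    systems-based procedures.
    """
    Z = np.where(mask, X, 0.0)            # initialize missing cells at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        low_rank = U[:, :n_factors] * s[:n_factors] @ Vt[:n_factors]
        Z = np.where(mask, X, low_rank)   # keep observed, refresh missing
    return Z
```

On exactly low-rank data the loop recovers the missing cells well; the paper's point is that with serially correlated errors, dynamic procedures impute values closer to the truth than this static fill.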
  4. By: Cai, Hanqing; Wang, Tengyao
    Abstract: Change-points are a routine feature of `big data' observed in the form of high-dimensional data streams. In many such data streams, the component series possess group structures and it is natural to assume that changes only occur in a small number of all groups. We propose a new change-point procedure, called groupInspect, that exploits the group sparsity structure to estimate a projection direction so as to aggregate information across the component series to successfully estimate the change-point in the mean structure of the series. We prove that the estimated projection direction is minimax optimal, up to logarithmic factors, when all group sizes are of comparable order. Moreover, our theory provides strong guarantees on the rate of convergence of the change-point location estimator. Numerical studies demonstrate the competitive performance of groupInspect in a wide range of settings, and a real data example confirms the practical usefulness of our procedure.
    Keywords: change-point analysis; high-dimensional data; group sparsity; EP/T02772X/1
    JEL: C1
    Date: 2023–03–03
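Once a projection direction is in hand, the second stage of a groupInspect-style procedure reduces to a one-dimensional CUSUM scan. The sketch below takes the direction v as given rather than estimating it under group sparsity, which is where the paper's actual contribution lies:

```python
import numpy as np

def projected_cusum_changepoint(X, v):
    """Locate a single mean change in a T x p series X by projecting
    onto direction v and maximizing the standardized CUSUM statistic.

    A simplified sketch: groupInspect estimates v from the data by
    exploiting group sparsity, whereas here v is supplied directly.
    """
    y = X @ (v / np.linalg.norm(v))       # one-dimensional projection
    T = len(y)
    S = np.cumsum(y)
    stats = np.array([
        abs(S[t] - (t + 1) / T * S[-1])
        / np.sqrt((t + 1) * (1 - (t + 1) / T))
        for t in range(T - 1)
    ])
    return int(np.argmax(stats))          # last index of the first segment
```

A direction well aligned with the changed coordinates concentrates the signal in one series, which is why estimating it accurately drives the convergence rate of the location estimator.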
  5. By: Joseph de Vilmarest; Nicklas Werge
    Abstract: In this note, we address the problem of probabilistic forecasting using an adaptive volatility method based on classical time-varying volatility models and stochastic optimization algorithms. These principles were successfully applied in the recent M6 financial forecasting competition for both probabilistic forecasting and investment decision-making under the team named AdaGaussMC. The key points of our strategy are: (a) apply a univariate time-varying volatility model, called AdaVol, (b) obtain probabilistic forecasts of future returns, and (c) optimize the competition metrics using stochastic gradient-based algorithms. We claim that the frugality of the methods implies their robustness and consistency.
    Date: 2023–03
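The adaptive-volatility idea in step (a) can be sketched as an online stochastic-gradient update of a Gaussian variance, one return at a time. This is only a caricature: AdaVol itself updates GARCH parameters with adaptively tuned step sizes, and the fixed learning rate here is an arbitrary assumption:

```python
import numpy as np

def adaptive_volatility(returns, lr=0.1, init_var=1.0):
    """Track a time-varying variance by stochastic gradient descent on
    the Gaussian negative log-likelihood, one observation at a time.

    A hedged sketch of the adaptive principle, not AdaVol itself.
    """
    s2 = init_var
    path = []
    for r in returns:
        # gradient of 0.5*log(s2) + r^2/(2*s2) with respect to s2
        grad = 0.5 / s2 - r**2 / (2 * s2**2)
        s2 = max(s2 - lr * grad, 1e-8)    # keep the variance positive
        path.append(s2)
    return np.array(path)
```

When the true variance jumps, the gradient pushes the estimate toward the new level, which is the "adaptive" behaviour the probabilistic forecasts rely on.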
  6. By: Yoosoon Chang (Indiana University, Department of Economics); Hilde C. Bjornland (BI Norwegian Business School); Jamie L. Cross (BI Norwegian Business School)
    Abstract: This paper proposes a new mixed vector autoregression (MVAR) model to examine the relationship between aggregate time series and functional variables in a multivariate setting. The model facilitates a re-examination of the oil-stock price nexus by estimating the effects of demand and supply shocks from the global market for crude oil on the entire distribution of U.S. stock returns since the late 1980s. We show that the MVAR effectively extracts information from the returns distribution that is more relevant for understanding the oil-stock price nexus beyond simply looking at the first few moments. Using novel functional impulse response functions (FIRFs), we find that oil market demand and supply shocks tend to increase returns, reduce volatility, and have an asymmetric effect on the returns distribution as a whole. In a value-at-risk (VaR) analysis we also find that the oil market contains important information that reduces expected loss, and that the response of VaR to the oil market demand and supply shocks has changed over time.
    Keywords: Oil market, stock market, oil-stock price nexus, functional VAR.
    Date: 2023–03
  7. By: Riani, Marco; Atkinson, Anthony C.; Corbellini, Aldo; Farcomeni, Alessio; Laurini, Fabrizio
    Abstract: Information criteria for model choice are extended to the detection of outliers in regression models. For deletion of observations (hard trimming) the family of models is generated by monitoring properties of the fitted models as the trimming level is varied. For soft trimming (downweighting of observations), some properties are monitored as the efficiency or breakdown point of the robust regression is varied. Least Trimmed Squares and the Forward Search are used to monitor hard trimming, with MM- and S-estimation the methods for soft trimming. Bayesian Information Criteria (BIC) for both scenarios are developed and results about their asymptotic properties are provided. In agreement with the theory, simulations and data analyses show good performance of the hard trimming methods for outlier detection. Importantly, this is achieved very simply, without the need to specify either significance levels or decision rules for multiple outliers.
    Keywords: automatic data analysis; Bayesian Information Criterion (BIC); forward search; least trimmed squares; MM-estimation; S-estimation
    JEL: C1
    Date: 2022–02–25
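The hard-trimming monitoring idea can be sketched by refitting OLS at each trimming level m and charging one extra BIC parameter per deleted observation. This naive criterion is only an illustration of the monitoring loop; the paper develops properly calibrated criteria, monitored via Least Trimmed Squares and the Forward Search, with asymptotic guarantees:

```python
import numpy as np

def bic_path(X, y, max_trim=10):
    """Monitor a naive BIC as the hard-trimming level m is varied.

    For each m, alternate fitting OLS and deleting the m largest
    absolute residuals (a crude stand-in for LTS / the Forward
    Search), then charge one extra parameter per deleted observation.
    """
    n, p = X.shape
    bics = []
    for m in range(max_trim + 1):
        keep = np.ones(n, dtype=bool)
        for _ in range(3):                # alternate fitting and trimming
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
            resid = np.abs(y - X @ beta)
            if m:
                keep = resid <= np.partition(resid, n - m - 1)[n - m - 1]
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        rss = float(np.sum((y[keep] - X[keep] @ beta) ** 2))
        bics.append(n * np.log(rss / n) + (p + m) * np.log(n))
    return bics
```

On contaminated data the monitored criterion drops sharply once the trimming level reaches the number of gross outliers, which is the signal the paper's calibrated criteria turn into an automatic decision.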

This nep-ets issue is ©2023 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.