nep-ets New Economics Papers
on Econometric Time Series
Issue of 2021‒06‒14
ten papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Recent Developments of the Autoregressive Distributed Lag Modelling Framework By JIN SEO CHO; MATTHEW GREENWOOD-NIMMO; YONGCHEOL SHIN
  2. The COVID-19 shock and challenges for time series models By Bobeica, Elena; Hartwig, Benny
  3. Macroeconomic Forecasting and Variable Ordering in Multivariate Stochastic Volatility Models By Jonas E. Arias; Juan F. Rubio-Ramirez; Minchul Shin
  4. GARCHNet - Value-at-Risk forecasting with novel approach to GARCH models based on neural networks By Mateusz Buczyński; Marcin Chlebus
  5. A Bayesian quantile time series model for asset returns By Griffin, Jim E.; Mitrodima, Gelly
  6. Is U.S. real output growth really non-normal? Testing distributional assumptions in time-varying location-scale models By Matei Demetrescu; Robinson Kruse-Becher
  7. HCR & HCR-GARCH – novel statistical learning models for Value at Risk estimation By Michał Woźniak; Marcin Chlebus
  8. Identification and Estimation of Non-stationary Hidden Markov Models By Martin Garcia-Vazquez
  9. Joint Asymptotic Properties of Stopping Times and Sequential Estimators for Stationary First-order Autoregressive Models By Kohtaro Hitomi; Keiji Nagai; Yoshihiko Nishiyama; Junfan Tao
  10. Testing for equal predictive accuracy with strong dependence By Laura Coroneo; Fabrizio Iacone

  1. By: JIN SEO CHO (Yonsei Univ); MATTHEW GREENWOOD-NIMMO (University of Melbourne); YONGCHEOL SHIN (University of York)
    Abstract: We review the literature on the Autoregressive Distributed Lag (ARDL) model, from its origins in the analysis of autocorrelated trend stationary processes to its subsequent applications in the analysis of cointegrated non-stationary time series. We then survey several recent extensions of the ARDL model, including asymmetric and nonlinear generalisations of the ARDL model, the quantile ARDL model, the pooled mean group dynamic panel data model and the spatio-temporal ARDL model.
    Keywords: Autoregressive Distributed Lag (ARDL) Model; Asymmetry, Nonlinearity and Threshold Effects; Quantile Regression; Panel Data; Spatial Analysis
    JEL: C22
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2021rwp-186&r=
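    A worked example may help fix notation: the basic ARDL(p, q) regression is y_t = c + φ_1 y_(t-1) + ... + φ_p y_(t-p) + β_0 x_t + ... + β_q x_(t-q) + ε_t. Below is a minimal Python sketch estimating an ARDL(1, 1) by OLS on simulated data; it illustrates the model class reviewed here, not any code from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 300
      x = 0.1 * rng.standard_normal(T).cumsum()            # a slowly moving regressor
      y = np.zeros(T)
      for t in range(1, T):                                # ARDL(1, 1) data-generating process
          y[t] = 0.2 + 0.6 * y[t - 1] + 0.5 * x[t] - 0.3 * x[t - 1] + rng.standard_normal()

      # Stack constant, lagged y, current and lagged x; estimate by OLS
      Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
      coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
      print(coef)                                          # approx. [0.2, 0.6, 0.5, -0.3]
      lr = (coef[2] + coef[3]) / (1 - coef[1])             # implied long-run multiplier of x on y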
  2. By: Bobeica, Elena; Hartwig, Benny
    Abstract: We document the impact of COVID-19 on frequently employed time series models, with a focus on euro area inflation. We show that, for both single-equation models (Phillips curves) and Vector Autoregressions (VARs), estimated parameters change notably with the pandemic. In a VAR, allowing the errors to have a distribution with fatter tails than the Gaussian one equips the model to better deal with the COVID-19 shock. A standard Gaussian VAR can still be used for producing conditional forecasts when relevant off-model information is used. We illustrate this by conditioning on official projections for a set of variables, but also by tilting to expectations from the Survey of Professional Forecasters. For Phillips curves, averaging across many conditional forecasts in a thick modelling framework offers some hedge against parameter instability.
    Keywords: COVID-19, forecasting, inflation, Student's t errors, tilting, VAR
    JEL: C53 E31 E37
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212558&r=
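    To see the fat-tails point in miniature: fitting a Student's t distribution to residuals that include a COVID-sized outlier burst yields a low estimated degrees-of-freedom parameter, the signal that t errors fit better than Gaussian ones. A stylized sketch (simulated residuals, not the paper's euro area data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      resid = rng.standard_normal(480)                 # "normal times" residuals
      resid[-4:] = [9.0, -7.5, 6.0, -5.0]              # a stylized COVID-sized outlier burst

      df, loc, scale = stats.t.fit(resid)
      print(df)   # a small df (roughly below 10) favours t errors over the Gaussian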
  3. By: Jonas E. Arias; Juan F. Rubio-Ramirez; Minchul Shin
    Abstract: We document five novel empirical findings on the well-known potential ordering drawback associated with the time-varying parameter vector autoregression with stochastic volatility developed by Cogley and Sargent (2005) and Primiceri (2005), CSP-SV. First, the ordering does not affect point prediction. Second, the standard deviation of the predictive densities implied by different orderings can differ substantially. Third, the average length of the prediction intervals is also sensitive to the ordering. Fourth, the best ordering for one variable in terms of log-predictive scores does not necessarily imply the best ordering for another variable under the same metric. Fifth, the best ordering for variable x in terms of log-predictive scores tends to put the variable x first while the worst ordering for variable x tends to put the variable x last. Then, we consider two alternative ordering invariant time-varying parameter VAR-SV models: the discounted Wishart SV model (DW-SV) and the dynamic stochastic correlation SV model (DSC-SV). The DW-SV underperforms relative to each ordering of the CSP-SV. The DSC-SV has an out-of-sample forecasting performance comparable to the median outcomes across orderings of the CSP-SV.
    Keywords: Vector Autoregressions; Time-Varying Parameters; Stochastic Volatility; Variable Ordering; Cholesky Decomposition; Wishart Process; Dynamic Conditional Correlation; Out-of-sample Forecasting Evaluation
    JEL: C8 C11 C32 C53
    Date: 2021–06–03
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:92355&r=
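    The ordering drawback stems from building the time-varying covariance matrix through a Cholesky-type factorization, which is not invariant to how the variables are stacked. A small numpy illustration (generic, not the CSP-SV model itself):

      import numpy as np

      Sigma = np.array([[1.0, 0.8], [0.8, 2.0]])       # covariance of (x1, x2)
      P = np.array([[0, 1], [1, 0]])                   # permutation that swaps the ordering
      L1 = np.linalg.cholesky(Sigma)                   # factor under ordering (x1, x2)
      L2 = np.linalg.cholesky(P @ Sigma @ P.T)         # factor under ordering (x2, x1)
      print(np.diag(L1), np.diag(L2))                  # [1.000, 1.166] vs. [1.414, 0.825]
      # The diagonal entries, which receive the independent stochastic volatility
      # processes in CSP-SV-type models, differ across orderings, so priors and
      # predictive densities are not order-invariant.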
  4. By: Mateusz Buczyński (Interdisciplinary Doctoral School, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
    Abstract: This study proposes a new GARCH specification that adapts the architecture of a long short-term memory (LSTM) neural network. Classical GARCH models have been shown to perform well in financial modelling, where high volatility is commonly observed, and they are particularly valued for Value-at-Risk estimation. However, the lack of nonlinear structure in most of these approaches means that the conditional variance is not represented well enough. By contrast, the rapid recent advances in deep learning are claimed to be capable of describing complex nonlinear relationships. We suggest GARCHNet, a nonlinear approach to the conditional variance that combines LSTM neural networks with maximum likelihood estimation in GARCH. The innovation distributions considered in the paper are the normal, t and skewed t, but the approach can be extended to other distributions. To evaluate the model, we conduct an empirical study on the log returns of the WIG 20 (Warsaw Stock Exchange index) in four periods between 2005 and 2021 with varying levels of observed volatility. Our findings confirm the validity of the solution, but we also point to several directions for its further development.
    Keywords: Value-at-Risk, GARCH, neural networks, LSTM
    JEL: G32 C52 C53 C58
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2021-08&r=
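    A minimal sketch of the LSTM-for-conditional-variance idea, assuming PyTorch and normal innovations; the architecture and hyperparameters are illustrative guesses, not the paper's GARCHNet specification:

      import torch
      import torch.nn as nn

      class LSTMVol(nn.Module):                        # hypothetical, not the authors' code
          def __init__(self, hidden=8):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, r):                        # r: (batch, T, 1) past returns
              h, _ = self.lstm(r)
              return self.head(h)                      # log conditional variance per step

      def gaussian_nll(r, log_var):                    # normal innovations; the paper also uses t and skewed t
          return 0.5 * (log_var + r ** 2 / log_var.exp()).mean()

      r = 0.01 * torch.randn(32, 250, 1)               # toy return sequences
      model = LSTMVol()
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      for _ in range(20):                              # a few maximum-likelihood steps
          opt.zero_grad()
          log_var = model(r[:, :-1])                   # variance of r_t from returns up to t-1
          loss = gaussian_nll(r[:, 1:], log_var)
          loss.backward()
          opt.step()
      sigma = (0.5 * model(r)[:, -1, 0]).exp()         # one-step-ahead volatility forecast
      var_1pct = -2.326 * sigma                        # 1% VaR under normal innovations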
  5. By: Griffin, Jim E.; Mitrodima, Gelly
    Abstract: We consider jointly modeling a finite collection of quantiles over time. Formal Bayesian inference on quantiles is challenging since we need access to both the quantile function and the likelihood. We propose a flexible Bayesian time-varying transformation model, which allows the likelihood and the quantile function to be directly calculated. We derive conditions for stationarity, discuss suitable priors, and describe a Markov chain Monte Carlo algorithm for inference. We illustrate the usefulness of the model for estimation and forecasting on stock, index, and commodity returns.
    Keywords: Bayesian nonparametrics; Predictive density; Stationarity; Transformation models
    JEL: C1 J1
    Date: 2020–06–10
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:105610&r=
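    The technical device that makes the likelihood available is the change of variables f(y) = 1 / Q'(Q^(-1)(y)) for a monotone quantile function Q. A small numerical sketch, with an assumed static quantile curve standing in for the paper's time-varying model:

      import numpy as np
      from scipy.interpolate import PchipInterpolator
      from scipy.optimize import brentq

      probs = np.array([0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99])
      qvals = np.array([-3.0, -1.6, -0.5, 0.0, 0.5, 1.6, 3.0])   # assumed quantiles
      Q = PchipInterpolator(probs, qvals)        # monotone interpolant of the quantile function
      dQ = Q.derivative()

      def log_density(y):
          # invert the quantile function: find u with Q(u) = y
          u = brentq(lambda p: float(Q(p)) - y, probs[0], probs[-1])
          return -np.log(dQ(u))                  # log f(y) = -log Q'(Q^(-1)(y))

      log_lik = sum(log_density(y) for y in (-0.2, 0.1, 0.4))
      print(log_lik)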
  6. By: Matei Demetrescu (University of Kiel); Robinson Kruse-Becher (University of Hagen and CREATES)
    Abstract: Testing distributional assumptions is an evergreen topic in statistics, econometrics and other quantitative disciplines. A key assumption for extant distributional tests is some form of stationarity. Yet, under a time-varying mean or time-varying volatility, the observed marginal distribution belongs to a mixture family whose components share the same baseline distribution but have different location and scale parameters. Distribution tests therefore reject consistently when the stationarity assumption is violated, even if the baseline distribution is correctly specified. At the same time, time-varying means and variances are common in economic data. We therefore propose distribution tests that are robustified against such time-variability of the data by means of a local standardization procedure. As the leading case in applied work, we demonstrate our approach in detail for testing normality, while our main results extend to general location-scale models without essential modifications. In addition to time-varying mean and volatility functions, the data-generating process may exhibit features such as generic serial dependence. Specifically, we base our tests on raw moments of probability integral transformations of the series, standardized using rolling windows of data of suitably chosen width. The probability integral transform is advantageous because it accommodates a wide range of distributions to be tested for and implies simple raw-moment restrictions. Flexible nonparametric estimators of the mean and variance functions are employed for the local standardization. Short-run dynamics are taken into account using the (fixed-b) heteroskedasticity and autocorrelation robust [HAR] approach of Kiefer and Vogelsang (2005, Econometric Theory), which makes the proposed test statistics robust to the estimation error induced by the local standardization. To ease implementation, we propose a simple rule for choosing the tuning parameters of the standardization procedure, as well as an effective finite-sample adjustment. Monte Carlo experiments show that the new tests perform well in terms of size and power and outperform alternative tests even under stationarity. Finally, in contrast to other studies, we find no evidence against normality of aggregate U.S. real output growth rates after accounting for time variation in mean and variance.
    Keywords: Distribution testing, Probability integral transformation, Local standardization, Nonparametric estimation, Heteroskedasticity and autocorrelation robust inference
    JEL: C12 C14 C22 E01 E32
    Date: 2021–05–20
    URL: http://d.repec.org/n?u=RePEc:aah:create:2021-07&r=
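    In miniature, the procedure is: standardize locally, apply the probability integral transform under the hypothesized baseline, and compare raw moments of the transformed series with their uniform benchmarks E[u^k] = 1/(k+1). The sketch below uses a simple rolling-window standardization in place of the paper's nonparametric estimators and omits the HAR inference:

      import numpy as np
      import pandas as pd
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      t = np.arange(1000)
      mu_t = 0.5 * np.sin(t / 150)                     # time-varying mean
      sd_t = 1.0 + 0.5 * np.cos(t / 200)               # time-varying volatility
      x = pd.Series(mu_t + sd_t * rng.standard_normal(t.size))

      w = 101                                          # rolling-window width (tuning parameter)
      z = (x - x.rolling(w, center=True).mean()) / x.rolling(w, center=True).std()
      u = norm.cdf(z.dropna())                         # PIT: ~ U(0,1) if the normal baseline is correct

      for k in (1, 2, 3, 4):
          print(k, (u ** k).mean(), 1 / (k + 1))       # raw moments vs. uniform benchmarks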
  7. By: Michał Woźniak (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
    Abstract: Market risk researchers agree that an ideal model for Value at Risk (VaR) estimation does not exist: the performance of different models depends strongly on current economic circumstances. Under conditions of suddenly increased volatility, such as during the global economic crisis caused by the Covid-19 pandemic, no classical VaR model worked properly even for the group of the largest market indices. The aim of this article is therefore to present and formally test three novel statistical learning models for VaR estimation: HCR, HCR-GARCH and HCR-QML-GARCH, which, by considering an additional volatility term (due to time context and statistical moments), should be able to perform well in times of market turbulence. In a benchmark procedure, we compare the 1% and 2.5% one-day-ahead VaR forecasts obtained with the above models against the estimates of classical methods: Historical Simulation, KDE, the Modified Cornish-Fisher Expansion, GARCH(1,1) with various distributions, RiskMetrics™, EVT and QML-GARCH. We select four periods that vary in terms of market volatility (2006-09, 2008-11, 2014-17 and mid-2016 to mid-2020) for six stock market indices: DAX, WIG 20, MOEX, S&P 500, Nikkei and SHC. Model quality is tested from two perspectives: fulfilment of regulatory requirements and forecast adequacy. The results show that HCR-GARCH outperforms the other models during periods of suddenly increased market volatility. At the same time, HCR-QML-GARCH liberalizes the conservative estimates of HCR-GARCH and allows its use under moderate volatility, without any major loss of quality in times of crisis.
    Keywords: Value at Risk, Hierarchical Correlation Reconstruction, GARCH, Standardized Residuals
    JEL: G32 C52 C53 C58
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2021-10&r=
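    On the regulatory-requirements side, the standard first check for any VaR model is Kupiec's (1995) unconditional coverage test, which compares the observed violation rate with the nominal level. A self-contained sketch, not tied to the HCR models themselves:

      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof(violations, p):
          """Kupiec proportion-of-failures test; violations is a 0/1 array."""
          n, x = violations.size, violations.sum()
          phat = x / n
          lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
                     - (n - x) * np.log(1 - phat) - x * np.log(phat))
          return lr, chi2.sf(lr, df=1)                 # small p-value: coverage is rejected

      rng = np.random.default_rng(3)
      hits = rng.binomial(1, 0.025, size=500)          # toy 2.5% VaR violation indicator
      print(kupiec_pof(hits, 0.025))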
  8. By: Martin Garcia-Vazquez (University of Minnesota)
    Abstract: This paper provides a novel constructive identification proof for non-stationary hidden Markov models. The identification result establishes that only two periods of time are required to identify the transition probabilities between those two periods. This is achieved by using three conditionally independent noisy measures of the hidden state. The paper also provides a novel estimator for non-stationary hidden Markov models based on the identification proof. Monte Carlo experiments show that this estimator is faster to compute than maximum likelihood, and almost as precise for large enough samples. Moreover, I show how my identification proof and my estimator can be used in two relevant applications: identification and estimation of conditional choice probabilities, initial conditions and laws of motion in dynamic discrete choice models when there is an unobservable state; and identification and estimation of the production function of cognitive skills in a child development context when skills and investment are unobserved.
    Keywords: identification, Child Development, cognitive skills, investment in children
    JEL: C10 J24
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2021-023&r=
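    For reference, the object under study is a hidden Markov model whose transition matrices may differ across periods. The forward-filter sketch below only evaluates the likelihood of such a model at given parameters; it does not implement the paper's identification argument or estimator:

      import numpy as np

      pi0 = np.array([0.6, 0.4])                       # initial distribution over 2 hidden states
      A = [np.array([[0.9, 0.1], [0.2, 0.8]]),         # transition matrix between t=1 and t=2
           np.array([[0.7, 0.3], [0.4, 0.6]])]         # a different matrix between t=2 and t=3
      B = np.array([[0.8, 0.2], [0.3, 0.7]])           # emission probs of a noisy binary measure

      def log_lik(obs):                                # obs: one measurement per period
          alpha = pi0 * B[:, obs[0]]
          for t, o in enumerate(obs[1:]):
              alpha = (alpha @ A[t]) * B[:, o]         # non-stationary forward recursion
          return np.log(alpha.sum())

      print(log_lik([0, 1, 1]))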
  9. By: Kohtaro Hitomi (Kyoto Institute of Technology); Keiji Nagai (Yokohama National University); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University); Junfan Tao (JSPS International Research Fellow (Kyoto University), Institute of Economic Research, Kyoto University)
    Abstract: Because online data is now abundant and easily collected, people often face the problem of making correct statistical decisions as soon as possible. When such data arrive sequentially, sequential analysis is the appropriate tool. We consider the joint asymptotic properties of stopping times and sequential estimators for stationary first-order autoregressive (AR(1)) processes under independent and identically distributed errors with zero mean and finite variance. Using the stopping times introduced by Lai and Siegmund (1983) for AR(1), we investigate the joint asymptotic properties of the stopping times, the sequential least squares estimator (LSE), and the estimator of σ². The functional central limit theorem for nonlinear ergodic stationary processes is crucial for obtaining our main results. We find that the sequential LSE and the stopping times exhibit joint asymptotic normality. When σ² is estimated, the joint limiting distribution degenerates and the asymptotic variance of the stopping time is strictly smaller than that of the stopping time with known σ².
    Keywords: Observed Fisher information, joint asymptotic normality, functional central limit theorem in D[0,∞), Anscombe's theorem
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:1060&r=
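    A simulation sketch of the setting: generate a stationary AR(1), stop when the observed Fisher information Σ X_(t-1)² first crosses a boundary c, and record the stopping time together with the sequential LSE. This is written from the abstract's description; the details of the Lai and Siegmund (1983) rule may differ:

      import numpy as np

      rng = np.random.default_rng(4)

      def sample_path(beta=0.5, c=200.0):
          x, sxy, sxx, n = 0.0, 0.0, 0.0, 0
          while sxx < c:                               # sxx = observed information Σ X_(t-1)^2
              x_next = beta * x + rng.standard_normal()
              sxy += x * x_next
              sxx += x * x
              x, n = x_next, n + 1
          return n, sxy / sxx                          # stopping time and sequential LSE

      draws = [sample_path() for _ in range(2000)]
      taus, betas = map(np.array, zip(*draws))
      # The joint distribution of (stopping time, estimator) is the paper's object of study
      print(taus.mean(), betas.mean(), np.corrcoef(taus, betas)[0, 1])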
  10. By: Laura Coroneo; Fabrizio Iacone
    Abstract: We revisit the Diebold and Mariano (1995) test, investigating the consequences of autocorrelation in the loss differential. This situation can arise not only when a forecast is sub-optimal under MSE loss, but also when it is optimal under an alternative loss, when it is evaluated on a short sample, or when a forecast with weakly dependent forecast errors is compared to a naive benchmark. We show that the power of the Diebold and Mariano (1995) test decreases as the dependence increases, making it more difficult to obtain statistically significant evidence of superior predictive ability against less accurate benchmarks. Moreover, we find that beyond a certain threshold the test has no power and the correct null hypothesis is spuriously rejected. Taken together, these results caution researchers to consider carefully the dependence properties of the selected forecast and of the loss differential before applying the Diebold and Mariano (1995) test, especially when naive benchmarks are considered.
    Keywords: strong autocorrelation, forecast evaluation, Diebold and Mariano test, long-run variance estimation
    JEL: C12 C32 C53
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:21/03&r=
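    For reference, the Diebold and Mariano (1995) statistic scales the mean loss differential by a HAC estimate of its long-run variance; the paper's warning concerns precisely this denominator when the differential is strongly dependent. A standard textbook-style sketch (squared-error loss, Bartlett kernel):

      import numpy as np

      def dm_stat(e1, e2):
          """DM statistic for squared-error loss; e1, e2 are forecast-error arrays."""
          d = e1 ** 2 - e2 ** 2                        # loss differential
          n, dbar = d.size, d.mean()
          dc = d - dbar
          m = int(np.floor(1.2 * n ** (1 / 3)))        # Bartlett bandwidth (one common rule)
          lrv = dc @ dc / n
          for k in range(1, m + 1):
              lrv += 2 * (1 - k / (m + 1)) * (dc[:-k] @ dc[k:]) / n
          return dbar / np.sqrt(lrv / n)               # ≈ N(0,1) only under weak dependence

      rng = np.random.default_rng(5)
      e1, e2 = rng.standard_normal(200), 1.1 * rng.standard_normal(200)
      print(dm_stat(e1, e2))                           # negative: forecast 1 more accurate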

This nep-ets issue is ©2021 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.