nep-ets New Economics Papers
on Econometric Time Series
Issue of 2023‒01‒09
ten papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. On consistency and sparsity for high-dimensional functional time series with application to autoregressions By Guo, Shaojun; Qiao, Xinghao
  2. Strict stationarity of Poisson integer-valued ARCH processes of order infinity By Mawuli Segnon
  3. A smooth transition autoregressive model for matrix-variate time series By Andrea Bucci
  4. Estimation and Testing in a Perturbed Multivariate Long Memory Framework By Less, Vivien; Sibbertsen, Philipp
  5. Estimation of continuous-time linear DSGE models from discrete-time measurements By Bent Jesper Christensen; Luca Neri; Juan Carlos Parra-Alvarez
  6. The Choice of GARCH Models to Forecast Value-at-Risk for Currencies (Euro Exchange Rates), Crypto Assets (Bitcoin and Ethereum), Gold, Silver and Crude Oil: Automated Processes, Statistical Distribution Models and the Specification of the Mean Equation By Andreas Marcus Gohs
  7. Bayesian Multivariate Quantile Regression with alternative Time-varying Volatility Specifications By Matteo Iacopini; Francesco Ravazzolo; Luca Rossini
  8. Smooth and Abrupt Dynamics in Financial Volatility: the MS-MEM-MIDAS By L. Scaffidi Domianello; G.M. Gallo; E. Otranto
  9. Machine Learning Algorithms for Time Series Analysis and Forecasting By Rameshwar Garg; Shriya Barpanda; Girish Rao Salanke N S; Ramya S
  10. External Instrument SVAR Analysis for Noninvertible Shocks By Forni, Mario; Gambetti, Luca; Ricco, Giovanni

  1. By: Guo, Shaojun; Qiao, Xinghao
    Abstract: Modelling a large collection of functional time series arises in a broad spectrum of real applications. Under such a scenario, not only can the number of functional variables diverge with, or even exceed, the number of temporally dependent functional observations, but each function is itself an infinite-dimensional object, posing a challenging task. In this paper, we propose a three-step procedure to estimate high-dimensional functional time series models. To provide theoretical guarantees for the three-step procedure, we focus on multivariate stationary processes and propose a novel functional stability measure based on their spectral properties. This stability measure facilitates the development of some useful concentration bounds on sample (auto)covariance functions, which serve as a fundamental tool for further convergence analysis in high-dimensional settings. As functional principal component analysis (FPCA) is one of the key dimension reduction techniques in the first step, we also investigate the non-asymptotic properties of the relevant estimated terms under an FPCA framework. To illustrate with an important application, we consider vector functional autoregressive models and develop a regularization approach to estimate autoregressive coefficient functions under a sparsity constraint. Using our derived non-asymptotic results, we investigate convergence properties of the regularized estimate under high-dimensional scaling. Finally, the finite-sample performance of the proposed method is examined through both simulations and a public financial dataset.
    Keywords: functional principal component analysis; functional stability measure; high-dimensional functional time series; non-asymptotics; sparsity; vector functional autoregression
    JEL: C1
    Date: 2023–02–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:114638&r=ets
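    A rough, self-contained sketch of the two main ingredients described above (FPCA-type dimension reduction of each functional series followed by a lasso-penalised VAR on the score vectors); data, dimensions and the penalty level are placeholders, not the authors' choices:
```python
# Minimal sketch (not the authors' code): FPCA scores per functional variable,
# then a sparse VAR(1) on the stacked scores via one lasso regression per response.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, p, n_grid, k = 200, 5, 50, 3          # time points, functional variables, grid size, scores per curve

# X[t, j, :] is the j-th functional observation at time t on a common grid (placeholder data)
X = rng.standard_normal((T, p, n_grid)).cumsum(axis=2)

# Step 1: FPCA per functional variable (approximated by PCA on the discretised curves)
scores = np.concatenate(
    [PCA(n_components=k).fit_transform(X[:, j, :]) for j in range(p)], axis=1
)                                          # shape (T, p*k)

# Step 2: sparse VAR(1) on the scores under a lasso penalty
Y, Z = scores[1:], scores[:-1]
A_hat = np.vstack([Lasso(alpha=0.1).fit(Z, Y[:, i]).coef_ for i in range(p * k)])
print("estimated transition matrix shape:", A_hat.shape)
```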
  2. By: Mawuli Segnon
    Abstract: This paper establishes necessary and sufficient conditions for the existence of a unique strictly stationary and ergodic solution for integer-valued autoregressive conditional heteroscedasticity (INARCH) processes. We also provide conditions that guarantee the existence of higher-order moments. The results apply to the integer-valued GARCH model and its long-memory versions with hyperbolically decaying coefficients, and turn out to be instrumental in deriving large-sample properties of the maximum likelihood estimators of the model parameters.
    Keywords: INARCH processes; Stationarity; Ergodicity; Lyapunov exponent; Maximum likelihood estimation
    JEL: C1 C4 C5
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:10222&r=ets
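    As a simple illustration of the processes studied in the paper, the sketch below simulates a Poisson INGARCH(1,1); the familiar first-order condition alpha + beta < 1 used here is only a special case of the order-infinity conditions derived in the paper:
```python
# Poisson INGARCH(1,1): lambda_t = omega + alpha*X_{t-1} + beta*lambda_{t-1},
# X_t | past ~ Poisson(lambda_t). Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
omega, alpha, beta, T = 0.5, 0.3, 0.5, 5000   # alpha + beta < 1 keeps the mean finite

lam = np.empty(T)
X = np.empty(T, dtype=int)
lam[0] = omega / (1.0 - alpha - beta)          # start at the stationary mean
X[0] = rng.poisson(lam[0])
for t in range(1, T):
    lam[t] = omega + alpha * X[t - 1] + beta * lam[t - 1]
    X[t] = rng.poisson(lam[t])

print("sample mean:", X.mean(), "theoretical mean:", omega / (1 - alpha - beta))
```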
  3. By: Andrea Bucci
    Abstract: In many applications, data are observed as matrices with temporal dependence. Matrix-variate time series modeling is a new branch of econometrics. Although smooth regime changes are a stylized fact in several fields, the existing models do not account for regime switches in the dynamics of matrices that are not abrupt. In this paper, we extend linear matrix-variate autoregressive models by introducing a regime-switching model capable of accounting for smooth changes: the matrix smooth transition autoregressive model. We present the estimation procedure and its asymptotic properties, and demonstrate the model with simulated and real data.
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2212.08615&r=ets
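    The following toy simulation (not the authors' specification) conveys the idea of a matrix-variate AR(1) whose coefficient matrices move smoothly between two regimes via a logistic transition function; dimensions and parameter values are arbitrary:
```python
# 2x2 matrix AR(1), X_t = A X_{t-1} B' + E_t, with regime-blended coefficient matrices.
import numpy as np

rng = np.random.default_rng(2)
A1, B1 = np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.6, 0.0], [0.1, 0.5]])
A2, B2 = np.array([[0.2, 0.0], [0.1, 0.1]]), np.array([[0.3, 0.1], [0.0, 0.2]])
gamma, c, T = 5.0, 0.0, 300                     # transition speed and location

X = np.zeros((T, 2, 2))
s = rng.standard_normal(T)                      # observable transition variable
for t in range(1, T):
    G = 1.0 / (1.0 + np.exp(-gamma * (s[t] - c)))   # smooth weight in [0, 1]
    A = (1 - G) * A1 + G * A2
    B = (1 - G) * B1 + G * B2
    X[t] = A @ X[t - 1] @ B.T + 0.1 * rng.standard_normal((2, 2))
```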
  4. By: Less, Vivien; Sibbertsen, Philipp
    Abstract: We propose a semiparametric multivariate estimator and a multivariate score-type testing procedure under a perturbed multivariate fractional process. The estimator is based on the periodogram and uses a local Whittle criterion function which is generalised by an additional constant to capture the perturbation in the long memory process. Explicitly addressing the noise term when approximating the spectral density near the origin results in a bias reduction, but at the cost of an increase in the asymptotic variance of the estimator. Further, we introduce a multivariate testing procedure to detect spurious long memory under a perturbed fractional framework. The test statistic is based on the weighted sum of the partial derivatives of the multivariate local Whittle with noise estimator. We show consistency of the test against the alternatives of smooth trend and random level shift processes. In addition, we prove consistency and asymptotic normality of the local Whittle estimator and derive the limiting distribution of the test. An empirical example using the squared returns and realised volatilities of the BEL 20, S&P BSE SENSEX, and Spanish IBEX indices illustrates the usefulness of the procedures.
    Keywords: Signal-plus-noise; Multivariate local Whittle; Perturbation; Spurious long memory; Semi-parametric estimation; Stochastic volatility
    JEL: C12 C13 C32
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-704&r=ets
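    A univariate sketch of the basic local Whittle-with-noise idea, approximating the spectral density near zero by G*lambda^(-2d) + theta so that the extra constant absorbs the perturbation; the paper's multivariate estimator and score-type test are considerably more involved:
```python
# Local Whittle objective with a noise constant, minimised over (d, G, theta);
# G and theta are estimated jointly here rather than profiled out, for simplicity.
import numpy as np
from scipy.optimize import minimize

def periodogram(x):
    n = len(x)
    lam = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)
    return lam, I

def lw_noise_objective(params, lam, I, m):
    d, log_G, log_theta = params
    g = np.exp(log_G) * lam[:m] ** (-2 * d) + np.exp(log_theta)
    return np.mean(np.log(g) + I[:m] / g)        # Whittle contrast over the first m frequencies

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2048)) * 0.01 + rng.standard_normal(2048)  # signal plus noise
lam, I = periodogram(x)
m = int(len(x) ** 0.65)                           # bandwidth of Fourier frequencies
res = minimize(lw_noise_objective, x0=[0.3, 0.0, 0.0], args=(lam, I, m), method="Nelder-Mead")
print("estimated memory parameter d:", res.x[0])
```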
  5. By: Bent Jesper Christensen (Aarhus University, Dale T. Mortensen Center, Danish Finance Institute, CREATES); Luca Neri (University of Bologna, Dale T. Mortensen Center, Ca’ Foscari University of Venice, CREATES); Juan Carlos Parra-Alvarez (Aarhus University, Dale T. Mortensen Center, Danish Finance Institute and CREATES)
    Abstract: We provide a general state space framework for estimation of the parameters of continuous-time linear DSGE models from data that are only available at discrete points in time. Our approach relies on the exact discrete-time representation of the equilibrium dynamics, which allows avoiding discretization errors. Using the Kalman filter, we construct the exact likelihood for data sampled either as stocks or flows, and estimate frequency-invariant parameters by maximum likelihood. We address the aliasing problem arising in multivariate settings and provide conditions for precluding it, which is required for local identification of the parameters in the continuous-time economic model. We recover the unobserved structural shocks at measurement times from the reduced-form residuals in the state space representation by exploiting the underlying causal links imposed by the economic theory and the information content of the discrete-time observations. We illustrate our approach using an off-the-shelf real business cycle model. We conduct extensive Monte Carlo experiments to study the finite sample properties of the estimator based on the exact discrete-time representation, and show they are superior to those based on a naive Euler-Maruyama discretization of the economic model. Finally, we estimate the model using postwar U.S. macroeconomic data, and offer examples of applications of our approach, including historical shock decomposition at different frequencies, and estimation based on mixed-frequency data.
    Keywords: DSGE models; continuous time; exact discrete-time representation; stock and flow variables; Kalman filter; maximum likelihood; aliasing; structural shocks
    JEL: C13 C32 C68 E13 E32 J22
    Date: 2022–12–20
    URL: http://d.repec.org/n?u=RePEc:aah:create:2022-12&r=ets
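    The snippet below illustrates only the exact-discretisation step for a continuous-time linear state equation dx = A x dt + B dW, using Van Loan's matrix-exponential construction for the implied discrete-time transition and innovation covariance; the paper embeds this in a full DSGE state space with stock and flow measurements:
```python
# Exact discretisation of dx = A x dt + B dW over a sampling interval Delta.
import numpy as np
from scipy.linalg import expm

def exact_discretisation(A, B, Delta):
    n = A.shape[0]
    # Van Loan (1978): exponentiate the block matrix [[-A, B B'], [0, A']] * Delta
    M = np.block([[-A, B @ B.T], [np.zeros((n, n)), A.T]]) * Delta
    E = expm(M)
    Phi = E[n:, n:].T                     # = expm(A*Delta), the discrete-time transition
    Q = Phi @ E[:n, n:]                   # exact innovation covariance
    return Phi, Q

A = np.array([[-0.5, 0.2], [0.0, -1.0]])  # stable continuous-time drift matrix (illustrative)
B = np.eye(2) * 0.1
Phi, Q = exact_discretisation(A, B, Delta=1.0)
print(np.allclose(Phi, expm(A * 1.0)))    # True: transition matrix matches expm(A*Delta)
```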
  6. By: Andreas Marcus Gohs (University of Kassel)
    Abstract: Regular or automated processes require reliable software applications that provide accurate volatility and Value-at-Risk forecasts. The univariate and multivariate GARCH models proposed in the literature are reviewed and the suitability of selected R functions for automated forecasting systems is discussed. With the Markov-switching GARCH function constructed for modelling regime changes, parameter estimates are reliably obtained in studies with moving time windows. In contrast, in the case of structural breaks or outliers, the algorithm of the ordinary GARCH function often does not return valid parameter estimates and fails. VaR forecasts are produced for extreme quantiles (up to 99.9%) and three alternative distribution assumptions (Skew Student-T, Student-T and Gaussian). Accurate one-day-ahead VaR predictions up to the 99% quantile are generally obtained for the time series when Skew Student-T distributed innovations are assumed. The VaR exceedance rates and their percentage deviations from the target alpha as well as the mean and median excess loss are reported. The accompanying mean equation is often omitted when fitting GARCH models to heteroskedastic time series; the impact of this omission on the accuracy of VaR forecasts is investigated. Coefficients of the ordinary (Pearson) correlation and the default correlation are calculated for moving time windows. Since the calculated default correlation depends on the VaR forecasts, analyses are performed for different quantiles, the ordinary and the MS-GARCH function and specifications of mean equations.
    Keywords: Conditional volatility, Skew Student T, Markov Switching MS-GARCH, Multivariate GARCH, Mean Excess Loss, Default Correlation, Software R
    JEL: G17 F31 G01 G11
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:202246&r=ets
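    The paper works with R packages; the short Python sketch below only reproduces the arithmetic of a one-day-ahead GARCH(1,1) VaR forecast under Gaussian versus Student-t innovations, with parameter values assumed rather than estimated (the skew-t case is omitted):
```python
# One-day-ahead VaR at level alpha: mu + sigma_{t+1} * q_alpha, with q_alpha from
# the assumed innovation distribution. GARCH(1,1) parameters are placeholders.
import numpy as np
from scipy import stats

omega, a, b, mu = 0.05, 0.08, 0.90, 0.0      # assumed GARCH(1,1) parameters
rng = np.random.default_rng(4)

# simulate returns and run the variance recursion to the end of the sample
T = 1000
sigma2 = np.empty(T); r = np.empty(T)
sigma2[0] = omega / (1 - a - b)
for t in range(T):
    r[t] = mu + np.sqrt(sigma2[t]) * rng.standard_normal()
    if t + 1 < T:
        sigma2[t + 1] = omega + a * (r[t] - mu) ** 2 + b * sigma2[t]

sigma_next = np.sqrt(omega + a * (r[-1] - mu) ** 2 + b * sigma2[-1])
nu = 6                                        # assumed Student-t degrees of freedom
for alpha in (0.01, 0.001):
    q_norm = stats.norm.ppf(alpha)
    q_t = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)   # unit-variance scaling
    print(alpha, "VaR normal:", mu + sigma_next * q_norm, "VaR t:", mu + sigma_next * q_t)
```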
  7. By: Matteo Iacopini; Francesco Ravazzolo; Luca Rossini
    Abstract: This article proposes a novel Bayesian multivariate quantile regression to forecast the tail behavior of US macro and financial indicators, where the homoskedasticity assumption is relaxed to allow for time-varying volatility. In particular, we exploit the mixture representation of the multivariate asymmetric Laplace likelihood and the Cholesky-type decomposition of the scale matrix to introduce stochastic volatility and GARCH processes, and we provide an efficient MCMC algorithm to estimate them. The proposed models outperform the homoskedastic benchmark mainly when predicting the distribution's tails. We provide a model combination using a quantile score-based weighting scheme, which leads to improved performance, notably when no single model uniformly outperforms the others across quantiles, time, or variables.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.16121&r=ets
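    A quick numerical check (not from the paper) of the asymmetric Laplace mixture representation the authors exploit: with w ~ Exp(1) and z ~ N(0,1), y = theta*w + tau*sqrt(w)*z has its p-th quantile at zero for suitable theta and tau:
```python
# Location-scale mixture of the asymmetric Laplace: theta = (1-2p)/(p(1-p)),
# tau^2 = 2/(p(1-p)), so that P(y <= 0) = p for the quantile level p.
import numpy as np

rng = np.random.default_rng(5)
p = 0.10                                        # quantile level
theta = (1 - 2 * p) / (p * (1 - p))
tau = np.sqrt(2.0 / (p * (1 - p)))

w = rng.exponential(1.0, size=1_000_000)
z = rng.standard_normal(1_000_000)
y = theta * w + tau * np.sqrt(w) * z

print("P(y <= 0) approx:", (y <= 0).mean(), "target:", p)   # close to 0.10
```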
  8. By: L. Scaffidi Domianello; G.M. Gallo; E. Otranto
    Abstract: In this paper we note that the evolution of realized volatility is characterized by a combination of high-frequency dynamics and a smoother, persistent component evolving at a lower frequency. We suggest a new Multiplicative Error Model which combines the mixed-frequency features of a MIDAS with Markovian dynamics. When estimated in-sample on the realized kernel volatility of the S&P500 index, this model dominates other, simpler specifications, especially when monthly aggregated realized volatility is used. The same pattern is confirmed in the out-of-sample forecasting performance, which suggests that adding an abrupt change in the average level of volatility helps in tracking extreme episodes of volatility and their relatively quick absorption.
    Keywords: Short- and Long-Run Components; realized volatility; Multiplicative Error Model; MIDAS; Markov switching
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:202205&r=ets
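    A stylised simulation of the Markov-switching ingredient of the model: a Multiplicative Error Model whose intercept jumps with a two-state Markov chain (the MIDAS low-frequency component is left out, so this is only a partial illustration):
```python
# MEM: x_t = mu_t * eps_t with a GARCH-type recursion for mu_t and a regime-dependent intercept.
import numpy as np

rng = np.random.default_rng(6)
T = 1000
P = np.array([[0.98, 0.02], [0.05, 0.95]])     # regime transition probabilities
omega = np.array([0.1, 0.6])                    # low- and high-level intercepts
alpha, beta = 0.25, 0.65

s = np.zeros(T, dtype=int)
mu = np.empty(T); x = np.empty(T)
mu[0] = omega[0] / (1 - alpha - beta)
x[0] = mu[0] * rng.gamma(shape=4.0, scale=1 / 4.0)   # unit-mean Gamma error
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])               # draw the next regime
    mu[t] = omega[s[t]] + alpha * x[t - 1] + beta * mu[t - 1]
    x[t] = mu[t] * rng.gamma(shape=4.0, scale=1 / 4.0)
```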
  9. By: Rameshwar Garg; Shriya Barpanda; Girish Rao Salanke N S; Ramya S
    Abstract: Time series data are used everywhere, from sales records to patients' health evolution metrics. The ability to deal with such data has become a necessity, and time series analysis and forecasting serve this purpose. Every machine learning enthusiast would consider these very important tools, as they deepen the understanding of the characteristics of data. Forecasting is used to predict the value of a variable in the future, based on its past occurrences. A detailed survey of the various methods used for forecasting is presented in this paper. The complete process of forecasting, from preprocessing to validation, is also explained thoroughly. Various statistical and deep learning models are considered, notably ARIMA, Prophet and LSTMs. Hybrid versions of machine learning models are also explored and elucidated. Our work can be used by anyone to develop a good understanding of the forecasting process, and to identify the various state-of-the-art models in use today.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.14387&r=ets
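    A minimal sketch of the generic workflow the survey describes (train/test split, fit, forecast, validate), here with an ARIMA baseline from statsmodels on synthetic data; orders and the split point are arbitrary:
```python
# Split, fit, forecast, validate on a hold-out sample.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
t = np.arange(300)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(300)

train, test = y[:250], y[250:]                 # simple hold-out split
res = ARIMA(train, order=(2, 1, 1)).fit()      # fit on the training window
forecast = res.forecast(steps=len(test))       # multi-step-ahead forecasts

mae = np.mean(np.abs(forecast - test))         # validate on the hold-out sample
print("hold-out MAE:", round(mae, 3))
```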
  10. By: Forni, Mario (University of Modena and Reggio Emilia, CEPR and RECent); Gambetti, Luca (University of Barcelona, BSE, University of Turin & CCA); Ricco, Giovanni (University of Warwick, OFCE-SciencesPo, and CEPR)
    Abstract: We propose a novel external-instrument SVAR procedure to identify and estimate the impulse response functions, regardless of whether the shock is invertible or recoverable. When the shock is recoverable, we also show how to estimate the unit variance shock and the ‘absolute’ response functions. When the shock is invertible, the method collapses to the standard proxy-SVAR procedure. We show how to test for recoverability and invertibility. We apply our techniques to a monetary policy VAR. It turns out that, using standard specifications, the monetary policy shock is not invertible, but it is recoverable. When using our procedure, results are plausible even in a parsimonious specification not including financial variables. Monetary policy has significant and sizeable effects on prices.
    Keywords: Proxy-SVAR; SVAR-IV; Impulse response functions; Variance Decomposition; Historical Decomposition; Monetary Policy Shock
    JEL: C32 E32
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1444&r=ets
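    The sketch below shows only the standard proxy-SVAR impact step that the paper generalises: with reduced-form residuals u_t and an external instrument z_t, the impact column of the identified shock is proportional to E[u_t z_t]; handling noninvertible shocks, the paper's contribution, requires more than this:
```python
# Proxy-SVAR impact estimation on simulated residuals and an instrument.
import numpy as np

rng = np.random.default_rng(8)
T, n = 500, 3
eps = rng.standard_normal((T, n))              # structural shocks (unobserved in practice)
H = np.array([[1.0, 0.2, 0.0], [0.5, 1.0, 0.3], [0.0, 0.4, 1.0]])
u = eps @ H.T                                  # reduced-form residuals u_t = H eps_t
z = eps[:, 0] + 0.5 * rng.standard_normal(T)   # instrument correlated with the first shock only

impact = (u * z[:, None]).mean(axis=0)         # E[u_t z_t] is proportional to the first column of H
impact = impact / impact[0]                    # normalise the first element to 1
print("estimated relative impact:", impact, "true:", H[:, 0] / H[0, 0])
```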

This nep-ets issue is ©2023 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.