nep-ets New Economics Papers
on Econometric Time Series
Issue of 2024‒06‒24
six papers chosen by
Jaqueson K. Galimberti, Asian Development Bank


  1. Partial Identification of Heteroskedastic Structural VARs: Theory and Bayesian Inference By Helmut Lütkepohl; Fei Shang; Luis Uzeda; Tomasz Woźniak
  2. Comparing predictive ability in presence of instability over a very short time By Fabrizio Iacone; Luca Rossini; Andrea Viselli
  3. Double Robustness of Local Projections and Some Unpleasant VARithmetic By José Luis Montiel Olea; Mikkel Plagborg-Møller; Eric Qian; Christian K. Wolf
  4. Fitting complex stochastic volatility models using Laplace approximation By Marín Díazaraque, Juan Miguel; Romero, Eva; Lopes Moreira Da Veiga, María Helena
  5. Generating density nowcasts for U.S. GDP growth with deep learning: Bayes by Backprop and Monte Carlo dropout By Kristóf Németh; Dániel Hadházi
  6. Should We Augment Large Covariance Matrix Estimation with Auxiliary Network Information? By Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W.

  1. By: Helmut Lütkepohl; Fei Shang; Luis Uzeda; Tomasz Woźniak
    Abstract: We consider structural vector autoregressions identified through stochastic volatility. Our focus is on whether a particular structural shock is identified by heteroskedasticity without the need to impose any sign or exclusion restrictions. Three contributions emerge from our exercise: (i) a set of conditions under which the matrix containing structural parameters is partially or globally unique; (ii) a statistical procedure to assess the validity of the conditions mentioned above; and (iii) a shrinkage prior distribution for conditional variances centred on a hypothesis of homoskedasticity. Such a prior ensures that the evidence for identifying a structural shock comes only from the data and is not favoured by the prior. We illustrate our new methods using a U.S. fiscal structural model.
    Keywords: Identification through heteroskedasticity, stochastic volatility, non-centred parameterisation, shrinkage prior, normal product distribution, tax shocks
    JEL: C11 C12 C32 E62
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp2081&r=
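The core identification idea can be sketched in a few lines. The toy below is an illustration only, not the authors' Bayesian procedure: with two volatility regimes whose shocks have distinct relative variances, the structural matrix is recovered, up to column order, sign, and scale, from an eigendecomposition of the two reduced-form covariances.

```python
import numpy as np

B = np.array([[1.0, 0.5],
              [0.3, 1.0]])            # true structural impact matrix (invented)
L1 = np.diag([1.0, 1.0])             # regime-1 shock variances
L2 = np.diag([4.0, 0.25])            # regime-2 variances: distinct ratios => identified

S1 = B @ L1 @ B.T                    # reduced-form covariance, regime 1
S2 = B @ L2 @ B.T                    # reduced-form covariance, regime 2

# Eigendecomposition of S1^{-1} S2 yields the relative variances as
# eigenvalues and the columns of (B')^{-1} as eigenvectors.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S1, S2))
B_hat = np.linalg.inv(eigvecs).T     # B up to column order/sign/scale

# Each recovered column is proportional to some true column of B.
for j in range(2):
    ratios = [B_hat[:, j] / B[:, k] for k in range(2)]
    assert any(np.allclose(r, r[0]) for r in ratios)
print(sorted(eigvals))               # the relative variances 0.25 and 4.0
```

If the two relative variances coincided, the eigenvalues would be equal and the eigenvectors (hence B) would no longer be unique, which is exactly the failure the paper's conditions and tests are designed to detect.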
  2. By: Fabrizio Iacone; Luca Rossini; Andrea Viselli
    Abstract: We consider forecast comparison in the presence of instability when this affects only a short period of time. We demonstrate that global tests do not perform well in this case, as they were not designed to capture very short-lived instabilities, and their power vanishes altogether when the magnitude of the shock is very large. We then discuss and propose approaches that are better suited to detecting such situations, such as nonparametric methods (the S test or the MAX procedure). We illustrate these results in different Monte Carlo exercises and in evaluating the nowcast of quarterly US nominal GDP from the Survey of Professional Forecasters (SPF) against a naive benchmark of no growth, over a period that includes the GDP instability brought about by the Covid-19 crisis. We recommend that forecasters not pool the sample but instead exclude the short periods of high local instability from the evaluation exercise.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.11954&r=
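The contrast between a global test and a short-window statistic is easy to simulate. The sketch below is a stylized stand-in, not the authors' exact S test or MAX procedure: a Diebold-Mariano-type global t statistic barely reacts to a five-period burst in the loss differential (the burst also inflates the sample variance), while the largest standardized local window mean flags it immediately.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
d = rng.normal(0.0, 1.0, T)          # loss differentials: equal accuracy...
d[100:105] += 8.0                    # ...except a 5-period burst of instability

def dm_stat(x):
    """Global Diebold-Mariano-type t statistic on the full sample."""
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

def max_stat(x, window=5):
    """Stylized MAX-type statistic: largest standardized local window mean."""
    mu, sd = x.mean(), x.std(ddof=1)
    means = np.array([x[i:i + window].mean() for i in range(len(x) - window + 1)])
    return np.sqrt(window) * (means - mu).max() / sd

print(dm_stat(d), max_stat(d))       # local statistic dwarfs the global one
```

Making the burst even larger mostly inflates the denominator of the global statistic, which is the power-vanishing effect the abstract describes.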
  3. By: José Luis Montiel Olea; Mikkel Plagborg-Møller; Eric Qian; Christian K. Wolf
    Abstract: We consider impulse response inference in a locally misspecified stationary vector autoregression (VAR) model. The conventional local projection (LP) confidence interval has correct coverage even when the misspecification is so large that it can be detected with probability approaching 1. This follows from a "double robustness" property analogous to that of modern estimators for partially linear regressions. In contrast, VAR confidence intervals dramatically undercover even for misspecification so small that it is difficult to detect statistically and cannot be ruled out based on economic theory. This is because of a "no free lunch" result for VARs: the worst-case bias and coverage distortion are small if, and only if, the variance is close to that of LP. While VAR coverage can be restored by using a bias-aware critical value or a large lag length, the resulting confidence interval tends to be at least as wide as the LP interval.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.09509&r=
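The two estimation routes being compared are easy to state in code. The sketch below uses a correctly specified AR(1), where both routes recover the same impulse response ρ^h; it deliberately omits the local misspecification and bias-aware intervals that are the paper's actual subject.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 5000, 0.8
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]   # simulate an AR(1)

def lp_irf(y, h):
    """Local projection: regress y_{t+h} directly on y_t."""
    x, z = y[: len(y) - h], y[h:]
    return (x @ z) / (x @ x)

def var_irf(y, h):
    """VAR(1) route: estimate the one-step coefficient once, iterate it h times."""
    return lp_irf(y, 1) ** h

h = 4
print(lp_irf(y, h), var_irf(y, h), rho ** h)   # all close to 0.8**4
```

Under misspecification the two diverge: the LP regression stays valid horizon by horizon, while the VAR compounds the one-step error h times, which is the "no free lunch" trade-off in the abstract.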
  4. By: Marín Díazaraque, Juan Miguel; Romero, Eva; Lopes Moreira Da Veiga, María Helena
    Abstract: The paper proposes the use of Laplace approximation (LA) to estimate complex univariate symmetric and asymmetric stochastic volatility (SV) models with flexible distributions for standardized returns. LA is a method for approximating integrals, especially in Bayesian statistics, and is often used to approximate the posterior distribution of the model parameters. It replaces an intractable posterior with a Gaussian centred at the posterior mode, turning a complex integration problem into a well-understood approximation. We show how easily complex SV models can be estimated and analyzed using LA, with changes to specifications, priors, and sampling error distributions requiring only minor modifications to the code. The simulation study shows that the LA estimates of the model parameters are close to the true values in finite samples and that the proposed estimator is computationally efficient and fast. It is an effective alternative to existing estimation methods for SV models. Finally, we evaluate the in-sample and out-of-sample performance of the models by forecasting one-day-ahead volatility. We use four well-known energy index series: two for clean energy and two for conventional (brown) energy. In the out-of-sample analysis, we also examine the impact of climate policy uncertainty and energy prices on the volatility forecasts. The results support the use of asymmetric SV models for clean energy series and symmetric SV models for brown energy indices conditional on these state variables.
    Keywords: Asymmetric Volatility; Laplace Approximation; Stochastic Volatility
    Date: 2024–06–06
    URL: https://d.repec.org/n?u=RePEc:cte:wsrepe:43947&r=
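In one dimension the Laplace approximation is a two-line computation. The example below is illustrative only and unrelated to the paper's SV models: it approximates ∫ exp(f(x)) dx by exp(f(x0))·sqrt(2π/−f''(x0)) at the mode x0, which for the factorial integral n! = ∫ exp(n·log(x) − x) dx recovers Stirling's formula.

```python
import math

def laplace_factorial(n):
    """Laplace approximation of n! = int exp(n*log(x) - x) dx."""
    f = lambda x: n * math.log(x) - x
    x0 = float(n)                 # mode of the log-integrand
    fpp = -n / x0 ** 2            # second derivative of f at the mode
    return math.exp(f(x0)) * math.sqrt(2 * math.pi / -fpp)

for n in (5, 20):
    exact = math.factorial(n)
    print(n, laplace_factorial(n) / exact)   # ratio approaches 1 as n grows
```

In the Bayesian setting of the paper the same Gaussian-at-the-mode idea is applied to the (much higher-dimensional) SV posterior rather than to a textbook integral.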
  5. By: Kristóf Németh; Dániel Hadházi
    Abstract: Recent results in the literature indicate that artificial neural networks (ANNs) can outperform the dynamic factor model (DFM) in terms of the accuracy of GDP nowcasts. Compared to the DFM, the performance advantage of these highly flexible, nonlinear estimators is particularly evident in periods of recessions and structural breaks. From the perspective of policy-makers, however, nowcasts are most useful when they are conveyed with uncertainty attached to them. While the DFM and other classical time series approaches analytically derive the predictive (conditional) distribution for GDP growth, ANNs can only produce point nowcasts based on their default training procedure (backpropagation). To fill this gap, we are the first in the literature to adapt two deep learning algorithms that enable ANNs to generate density nowcasts for U.S. GDP growth: Bayes by Backprop and Monte Carlo dropout. The accuracy of point nowcasts, defined as the mean of the empirical predictive distribution, is evaluated relative to a naive constant growth model for GDP and a benchmark DFM specification. Using a 1D CNN as the underlying ANN architecture, both algorithms outperform those benchmarks during the evaluation period (2012:Q1 -- 2022:Q4). Furthermore, both algorithms are able to dynamically adjust the location (mean), scale (variance), and shape (skew) of the empirical predictive distribution. The results indicate that both Bayes by Backprop and Monte Carlo dropout can effectively augment the scope and functionality of ANNs, rendering them a fully compatible and competitive alternative to classical time series approaches.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.15579&r=
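Monte Carlo dropout itself is simple to demonstrate: keep dropout active at prediction time and treat repeated stochastic forward passes as draws from the predictive distribution. The toy network below is a hand-rolled stand-in for the paper's 1D CNN, with invented, untrained weights, so only the mechanics are meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-hidden-layer network with fixed (pretend "trained") weights.
W1 = rng.normal(size=(16, 3)) / np.sqrt(3)
W2 = rng.normal(size=16) / np.sqrt(16)

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ACTIVE at prediction time."""
    h = np.maximum(W1 @ x, 0.0)                      # ReLU hidden layer
    mask = rng.random(16) >= p_drop                  # random dropout mask
    return W2 @ (h * mask) / (1.0 - p_drop)          # inverted-dropout scaling

x = np.array([0.5, -1.0, 2.0])                       # one "nowcast" input
draws = np.array([forward(x) for _ in range(2000)])  # empirical predictive dist.

point = draws.mean()                                 # point nowcast
lo, hi = np.quantile(draws, [0.05, 0.95])            # 90% predictive band
print(point, lo, hi)
```

Because each pass samples a fresh dropout mask, the spread of the draws, not just their mean, carries information, which is what turns a point-forecasting network into a density nowcaster.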
  6. By: Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W.
    Abstract: In this paper, we propose two novel frameworks to incorporate auxiliary information about connectivity among entities (i.e., network information) into the estimation of large covariance matrices. The current literature either completely ignores this kind of network information (e.g., thresholding and shrinkage) or utilizes some simple network structure under very restrictive settings (e.g., banding). In the era of big data, we can easily get access to auxiliary information about the complex connectivity structure among entities. Depending on the features of the auxiliary network information at hand and the structure of the covariance matrix, we provide two corresponding frameworks: the Network Guided Thresholding and the Network Guided Banding. We show that both Network Guided estimators have optimal convergence rates over a larger class of sparse covariance matrices. Simulation studies demonstrate that they generally outperform other purely statistical methods, especially when the true covariance matrix is sparse and the auxiliary network contains genuine information. Empirically, we apply our method to the estimation of the covariance matrix of asset returns with the help of financial linkage data, in order to attain the global minimum variance (GMV) portfolio.
    Keywords: Banding, Big Data, Large Covariance Matrix, Network, Thresholding
    JEL: C13 C58 G11
    Date: 2024–05–20
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2427&r=
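The guiding idea can be sketched as follows: hard-threshold small off-diagonal covariances, except where the auxiliary network flags a link between two entities. The function and data below are hypothetical illustrations, not the authors' estimators, which use data-driven thresholds and come with rate theory.

```python
import numpy as np

def network_guided_threshold(S, A, tau):
    """Sketch of network-guided thresholding: zero out off-diagonal entries
    smaller than tau in absolute value, but keep entries between entities
    that the auxiliary network A links (A[i, j] == 1) regardless of size."""
    S_t = np.where(np.abs(S) >= tau, S, 0.0)        # plain hard thresholding
    keep = (A == 1) | np.eye(len(S), dtype=bool)    # linked pairs + diagonal
    return np.where(keep, S, S_t)                   # network links override the cut

S = np.array([[1.00, 0.08, 0.30],
              [0.08, 1.00, 0.02],
              [0.30, 0.02, 1.00]])                  # sample covariance (invented)
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])                           # auxiliary network: assets 1, 2 linked
S_hat = network_guided_threshold(S, A, tau=0.10)
print(S_hat)   # the small linked entry 0.08 survives; the unlinked 0.02 is cut
```

The small covariance between the two linked assets survives because the network vouches for it, while an equally small entry between unlinked assets is set to zero, which is precisely how genuine network information relaxes pure sparsity assumptions.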

This nep-ets issue is ©2024 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.