
on Econometric Time Series 
By:  Atsushi Inoue; Lutz Kilian 
Abstract:  Several recent studies have expressed concern that the Haar prior typically imposed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. This question is indeed important, but we show that the tools that have been used in the literature to illustrate this potential problem are invalid. Specifically, we show that it does not make sense from a Bayesian point of view to characterize the impulse response prior based on the distribution of the impulse responses conditional on the maximum likelihood estimator of the reduced-form parameters, since the prior does not, in general, depend on the data. We illustrate that this approach tends to produce highly misleading estimates of the impulse response priors. We formally derive the correct impulse response prior distribution and show that there is no evidence that typical sign-identified VAR models estimated using conventional priors tend to imply unintentionally informative priors for the impulse response vector or that the corresponding posterior is dominated by the prior. Our evidence suggests that concerns about the Haar prior for the rotation matrix have been greatly overstated and that alternative estimation methods are not required in typical applications. Finally, we demonstrate that the alternative Bayesian approach to estimating sign-identified VAR models proposed by Baumeister and Hamilton (2015) suffers from exactly the same conceptual shortcoming as the conventional approach. We illustrate that this alternative approach may imply highly economically implausible impulse response priors. (A code sketch of the Haar-prior construction follows this entry.)
Keywords:  Prior; posterior; impulse response; loss function; joint inference; absolute loss; median 
JEL:  C22 C32 C52 E31 Q43 
Date:  2020–12–03 
URL:  http://d.repec.org/n?u=RePEc:fip:feddwp:89121&r=all 
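The Haar-prior mechanism discussed above can be illustrated with a short simulation. The sketch below draws orthogonal rotation matrices from the Haar distribution via the QR decomposition and combines them with draws of the reduced-form covariance matrix to obtain prior draws of the impact impulse responses; the bivariate dimension, the inverse-Wishart-style covariance prior, and the sign restriction are illustrative assumptions, not the authors' specification.

import numpy as np

rng = np.random.default_rng(0)

def haar_rotation(n):
    # Draw an n x n orthogonal matrix from the Haar distribution via QR.
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))   # sign correction makes the draw uniform (Haar)

def prior_impact_responses(n_draws=10_000):
    # Prior draws of the impact responses chol(Sigma) @ Q, with Sigma drawn from
    # its own prior rather than held fixed at a point estimate such as the MLE.
    draws = []
    for _ in range(n_draws):
        W = rng.standard_normal((5, 2))
        Sigma = np.linalg.inv(W.T @ W)        # illustrative inverse-Wishart-type draw
        A = np.linalg.cholesky(Sigma) @ haar_rotation(2)
        if np.all(A[:, 0] >= 0):              # hypothetical sign restriction on shock 1
            draws.append(A)
    return np.array(draws)

draws = prior_impact_responses()
print(draws.shape, draws[:, 0, 0].mean())     # implied prior for one impact response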
By:  Hess Chung; Cristina FuentesAlbero; Matthias Paustian; Damjan Pfajfar 
Abstract:  This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of data on latent variables, as the data decomposition does. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of observable variables in estimating the output gap in two models. (A code sketch of the underlying data-decomposition idea follows this entry.)
Keywords:  Kalman smoother; Latent variables; Shock decomposition; Data decomposition; Double decomposition 
JEL:  C18 C32 C52 
Date:  2020–12–04 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:2020100&r=all 
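The data-decomposition idea underlying the double decomposition can be sketched for a hypothetical one-state, two-observable linear Gaussian model. Because the Kalman filter and smoother are linear in the observations (with a zero-mean initialization and no intercepts), the contribution of each observable to the smoothed latent state can be obtained by re-running the smoother with the other observables set to zero; the model and parameter values below are illustrative, and the shock-decomposition step of the paper's double decomposition is not shown.

import numpy as np

phi, q = 0.8, 1.0                       # state transition coefficient and state variance
Z = np.array([[1.0], [0.5]])            # loadings of the two observables on the latent state
R = np.diag([0.5, 0.5])                 # measurement error variances

def kalman_smoother(y):
    # Kalman filter plus Rauch-Tung-Striebel smoother for a scalar latent state.
    T = y.shape[0]
    a, P = 0.0, q / (1 - phi ** 2)      # initialize at the stationary variance
    a_f, P_f, a_p, P_p = np.zeros(T), np.zeros(T), np.zeros(T), np.zeros(T)
    for t in range(T):
        a_p[t], P_p[t] = phi * a, phi ** 2 * P + q          # prediction step
        F = Z @ Z.T * P_p[t] + R                            # innovation variance (2x2)
        K = P_p[t] * Z.T @ np.linalg.inv(F)                 # Kalman gain (1x2)
        v = y[t] - Z.flatten() * a_p[t]                     # innovations (2,)
        a = a_p[t] + (K @ v).item()
        P = P_p[t] - (K @ Z).item() * P_p[t]
        a_f[t], P_f[t] = a, P
    a_s = a_f.copy()                                        # backward smoothing pass
    for t in range(T - 2, -1, -1):
        J = P_f[t] * phi / P_p[t + 1]
        a_s[t] = a_f[t] + J * (a_s[t + 1] - a_p[t + 1])
    return a_s

# Simulate data and decompose the smoothed state into observable contributions.
rng = np.random.default_rng(1)
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x[:, None] * Z.flatten() + rng.normal(scale=np.sqrt(0.5), size=(T, 2))

total = kalman_smoother(y)
contrib = [kalman_smoother(np.where(np.arange(2) == j, y, 0.0)) for j in range(2)]
# Linearity of the smoother implies the contributions sum to the total smoothed state.
print(np.allclose(total, contrib[0] + contrib[1]))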
By:  Shige Peng; Shuzhen Yang 
Abstract:  Financial time series exhibit inherent uncertainty and randomness that change over time. To clearly describe the volatility uncertainty of the time series, we assume that the volatility of risky assets lies between the assets' minimum and maximum volatilities. This study establishes autoregressive models to determine the maximum and minimum volatilities, where the ratio of minimum to maximum volatility measures volatility uncertainty. Using a value-at-risk (VaR) predictor model under volatility uncertainty, we characterize both risk and uncertainty, and we show that the autoregressive model of volatility uncertainty is a powerful tool for predicting the VaR on a benchmark dataset. (A code sketch of this construction follows this entry.)
Date:  2020–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2011.09226&r=all 
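A rough sketch of the volatility-band idea: rolling realized volatility is bracketed by its running minimum and maximum, an AR(1) is fitted to the upper bound, and the predicted maximum volatility yields a conservative one-day VaR, with the min/max ratio serving as the uncertainty measure. The window lengths, AR order, and Gaussian quantile are illustrative assumptions rather than the paper's exact specification.

import numpy as np

def ar1_fit(x):
    # OLS fit of x_t = c + b * x_{t-1}.
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    return c, b

def var_forecast(returns, window=20):
    # Rolling realized volatility and its running min/max over the same span.
    rv = np.array([returns[t - window:t].std(ddof=1)
                   for t in range(window, len(returns))])
    vol_max = np.array([rv[max(0, t - window):t + 1].max() for t in range(len(rv))])
    vol_min = np.array([rv[max(0, t - window):t + 1].min() for t in range(len(rv))])
    uncertainty = vol_min[-1] / vol_max[-1]        # min/max ratio gauges volatility uncertainty
    c, b = ar1_fit(vol_max)                        # AR(1) dynamics for the upper volatility bound
    sigma_next = c + b * vol_max[-1]               # one-step-ahead maximum volatility
    var_5pct = -1.645 * sigma_next                 # conservative 5% VaR from a Gaussian quantile
    return var_5pct, uncertainty

rng = np.random.default_rng(2)
returns = rng.standard_t(df=5, size=1000) * 0.01   # hypothetical daily returns
print(var_forecast(returns))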
By:  Vincent W. C. Tan; Stefan Zohren 
Abstract:  We introduce a novel covariance estimator that exploits the heteroscedastic nature of financial time series by employing exponentially weighted moving averages and shrinking the in-sample eigenvalues through cross-validation. Our estimator is model-agnostic in that we make no assumptions on the distribution of the random entries of the matrix or on the structure of the covariance matrix. Additionally, we show how Random Matrix Theory can provide guidance for automatically tuning the hyperparameter that characterizes the time scale of the estimator's dynamics. By attenuating noise from both the cross-sectional and time-series dimensions, we empirically demonstrate the superiority of our estimator over competing estimators based on exponentially weighted and uniformly weighted covariance matrices. (A code sketch of the two main ingredients follows this entry.)
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.05757&r=all 
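The two ingredients the abstract combines, an exponentially weighted sample covariance and eigenvalue shrinkage, can be sketched as follows. Here eigenvalues below the Marchenko-Pastur bulk edge are clipped as a simple stand-in for the paper's cross-validated shrinkage, and the half-life is an arbitrary illustrative choice rather than the RMT-tuned value.

import numpy as np

def ewma_covariance(X, halflife=60):
    # Exponentially weighted covariance of X (T observations x N assets).
    T, _ = X.shape
    lam = 0.5 ** (1.0 / halflife)
    w = lam ** np.arange(T - 1, -1, -1)
    w /= w.sum()
    Xc = X - X.mean(axis=0)
    return (Xc * w[:, None]).T @ Xc

def clip_eigenvalues(cov, q):
    # Replace eigenvalues below the Marchenko-Pastur edge by their average ("clipping").
    vals, vecs = np.linalg.eigh(cov)
    sigma2 = np.median(vals)                      # crude estimate of the noise level
    mp_edge = sigma2 * (1 + np.sqrt(q)) ** 2      # upper edge of the MP bulk, q = N/T
    noise = vals < mp_edge
    if noise.any():
        vals[noise] = vals[noise].mean()          # preserves the trace of the noisy block
    return (vecs * vals) @ vecs.T

rng = np.random.default_rng(3)
T, N = 500, 50
X = rng.standard_normal((T, N)) * 0.01            # hypothetical return panel
cov = clip_eigenvalues(ewma_covariance(X, halflife=60), q=N / T)
print(np.linalg.cond(cov))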
By:  Young Shin Kim; Kum-Hwan Roh; Raphaël Douady (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique) 
Abstract:  In this paper, we introduce a new time series model with a stochastic exponential tail. The model is constructed from the Normal Tempered Stable distribution with a time-varying parameter. It captures the stochastic exponential tail, which generates the volatility smile effect and the volatility term structure in option pricing. Moreover, the model describes the time-varying volatility of volatility. We empirically document stochastic skewness and stochastic kurtosis by applying the model to S&P 500 index return data. We present a Monte Carlo simulation technique for calibrating the model's parameters to S&P 500 option prices. The calibration shows that the stochastic exponential tail improves the model's fit to market option prices. (A generic code sketch of Monte Carlo calibration follows this entry.)
Keywords:  Lévy process; Normal tempered stable distribution; Volatility of volatility; Stochastic exponential tail; Option pricing 
Date:  2020–11–22 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:hal03018495&r=all 
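The Monte Carlo calibration step can be sketched generically: given a path simulator, European calls are priced by averaging discounted payoffs, and the parameters are chosen to minimize squared pricing errors against quoted prices. The simulator below is a plain Gaussian placeholder standing in for the Normal Tempered Stable dynamics with a stochastic tail, and the strikes and quotes are hypothetical.

import numpy as np
from scipy.optimize import minimize

def simulate_terminal_prices(params, s0, r, T, n_paths=50_000):
    # Placeholder: Gaussian terminal prices standing in for the NTS dynamics.
    # A fixed seed reuses the same draws across optimizer iterations (common random numbers).
    sigma, = params
    z = np.random.default_rng(4).standard_normal(n_paths)
    return s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

def mc_call_price(params, s0, r, T, strike):
    st = simulate_terminal_prices(params, s0, r, T)
    return np.exp(-r * T) * np.maximum(st - strike, 0.0).mean()

def calibrate(strikes, quotes, s0, r, T):
    # Least-squares fit of the model parameters to quoted call prices.
    def loss(params):
        model = np.array([mc_call_price(params, s0, r, T, k) for k in strikes])
        return np.sum((model - quotes) ** 2)
    return minimize(loss, x0=[0.2], method="Nelder-Mead").x

strikes = np.array([90.0, 100.0, 110.0])
quotes = np.array([13.5, 6.8, 2.9])                # hypothetical market quotes
print(calibrate(strikes, quotes, s0=100.0, r=0.01, T=0.5))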
By:  Ilya Archakov; Peter Reinhard Hansen; Asger Lunde 
Abstract:  We propose a novel class of multivariate GARCH models that utilize realized measures of volatilities and correlations. The central component is an unconstrained vector parametrization of the correlation matrix that facilitates modeling of the correlation structure. The parametrization is based on the matrix logarithmic transformation, which retains positive definiteness as an innate property. A factor approach offers a way to impose a parsimonious structure in high-dimensional systems, and we show that a factor framework arises naturally in some existing models. We apply the model to returns of nine assets and employ the factor structure that emerges from a block correlation specification. An auxiliary empirical finding is that the empirical distribution of the parametrized realized correlations is approximately Gaussian. This observation is analogous to the well-known result for logarithmically transformed realized variances. (A code sketch of the parametrization follows this entry.)
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.02708&r=all 
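The vector parametrization can be sketched as follows: the off-diagonal elements of the matrix logarithm of the correlation matrix form the unconstrained vector, and the inverse map recovers a valid correlation matrix by adjusting the diagonal of the log-matrix until its matrix exponential has a unit diagonal. The fixed-point iteration below reflects one reading of this construction and should be checked against the paper.

import numpy as np
from scipy.linalg import expm, logm

def corr_to_vector(C):
    # Unconstrained parametrization: lower-triangular off-diagonals of log(C).
    L = np.real(logm(C))
    return L[np.tril_indices_from(L, k=-1)]

def vector_to_corr(v, n, tol=1e-12, max_iter=200):
    # Inverse map: adjust the diagonal of the log-matrix until expm has a unit diagonal.
    A = np.zeros((n, n))
    A[np.tril_indices(n, k=-1)] = v
    A = A + A.T
    x = np.zeros(n)                                # candidate diagonal of the log-matrix
    for _ in range(max_iter):
        C = expm(A + np.diag(x))
        d = np.diag(C)
        if np.max(np.abs(d - 1.0)) < tol:
            break
        x = x - np.log(d)                          # fixed-point update
    return expm(A + np.diag(x))

# Round trip on a small example correlation matrix.
C = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.5],
              [0.1, 0.5, 1.0]])
v = corr_to_vector(C)
print(np.round(vector_to_corr(v, 3), 6))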