
on Econometrics 
By:  Filippeli, Thomai (Bank of England); Harrison, Richard (Bank of England); Theodoridis, Konstantinos (Cardiff Business School) 
Abstract:  We present a new method for estimating Bayesian vector autoregression (VAR) models using priors from a dynamic stochastic general equilibrium (DSGE) model. We use the DSGE model priors to determine the moments of an independent Normal-Wishart prior for the VAR parameters. Two hyperparameters control the tightness of the DSGE-implied priors on the autoregressive coefficients and the residual covariance matrix respectively. Determining these hyperparameters by selecting the values that maximize the marginal likelihood of the Bayesian VAR provides a method for isolating subsets of DSGE parameter priors that are at odds with the data. We illustrate the ability of our approach to correctly detect incorrect DSGE priors for the variance of structural shocks using a Monte Carlo experiment. We also demonstrate how posterior estimates of the DSGE parameter vector can be recovered from the BVAR posterior estimates: a new ‘quasi-Bayesian’ DSGE estimation. An empirical application on US data reveals economically meaningful differences in posterior parameter estimates when comparing our quasi-Bayesian estimator with Bayesian maximum likelihood. Our method also indicates that the DSGE prior implications for the residual covariance matrix are at odds with the data. 
Keywords:  BVAR; DSGE; DSGE-VAR; Gibbs sampling; marginal likelihood evaluation; predictive likelihood evaluation; quasi-Bayesian DSGE estimation 
JEL:  C11 C13 C32 C52 
Date:  2018–03–02 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0716&r=ecm 
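The hyperparameter-selection idea described above can be illustrated in a much simpler conjugate setting. The following is a toy sketch of our own (not the paper's VAR machinery; all names and parameter values are ours): for normal data with a normal prior on the mean, the marginal likelihood is available in closed form, and maximizing it over the prior tightness loosens the prior exactly when the prior mean is at odds with the data.

```python
# Toy empirical-Bayes sketch: y_i ~ N(mu, 1) with prior mu ~ N(m0, 1/lam).
# Maximizing the marginal likelihood over the tightness lam flags a
# misspecified prior mean m0 by selecting a loose (small) lam.
import numpy as np

def log_marginal(y, m0, lam):
    """log p(y | m0, lam) after integrating out mu."""
    n = len(y)
    v = 1.0 / lam                                  # prior variance of mu
    ss_dev = np.sum((y - y.mean()) ** 2)           # within-sample deviations
    ll = -0.5 * n * np.log(2 * np.pi) - 0.5 * ss_dev
    ll -= 0.5 * np.log(1 + n * v)                  # log-determinant term
    ll -= 0.5 * n * (y.mean() - m0) ** 2 / (1 + n * v)
    return ll

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)                 # true mean is 2.0
lams = np.logspace(-3, 3, 61)
lam_good = lams[np.argmax([log_marginal(y, 2.0, l) for l in lams])]   # correct prior mean
lam_bad = lams[np.argmax([log_marginal(y, -2.0, l) for l in lams])]   # prior mean at odds with data
```

The marginal likelihood selects a tight prior (large `lam_good`) when the prior mean is consistent with the data and a loose one (small `lam_bad`) when it is not, which is the diagnostic role the two hyperparameters play in the paper.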
By:  James G. MacKinnon (Queen's University); Matthew D. Webb (Carleton University) 
Abstract:  When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator (CRVE) can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not too small, they can either over-reject or under-reject seriously when it is. In a previous paper, we showed that procedures based on randomization inference (RI) can work well in such cases. However, RI can be impractical when the number of clusters is small. We propose a bootstrap-based alternative to randomization inference, which mitigates the discrete nature of RI P values in the few-clusters case. 
Keywords:  CRVE, grouped data, clustered data, panel data, wild cluster bootstrap, difference-in-differences, DiD, randomization inference 
JEL:  C12 C21 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1404&r=ecm 
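As a simplified illustration of the restricted wild cluster bootstrap that this literature builds on (our sketch, not the authors' code: it compares raw slope coefficients, whereas a full implementation would bootstrap CRVE t-statistics):

```python
# Minimal wild cluster bootstrap p-value for H0: slope = 0 in
# y = a + b*x + e, flipping restricted residuals sign-by-sign per cluster
# with Rademacher weights.
import numpy as np

def wild_cluster_bootstrap_p(y, x, cluster, B=999, seed=0):
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
    resid0 = y - y.mean()                    # residuals from H0-restricted fit
    ids = np.unique(cluster)
    count = 0
    for _ in range(B):
        s = rng.choice([-1.0, 1.0], size=ids.size)   # one Rademacher draw per cluster
        flip = s[np.searchsorted(ids, cluster)]
        y_star = y.mean() + flip * resid0            # bootstrap sample under H0
        b_star = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
        count += abs(b_star) >= abs(b_hat)
    return (1 + count) / (1 + B)

rng = np.random.default_rng(1)
g = np.repeat(np.arange(10), 30)                     # 10 clusters of 30 observations
x = rng.normal(size=g.size) + rng.normal(size=10)[g] # regressor with cluster structure
y0 = rng.normal(size=10)[g] + rng.normal(size=g.size)    # no treatment effect
p_null = wild_cluster_bootstrap_p(y0, x, g)
p_alt = wild_cluster_bootstrap_p(y0 + 1.0 * x, x, g)     # strong effect
```

With only 10 clusters the bootstrap distribution has at most 2^10 support points, which is the discreteness problem (shared with RI) that the paper's proposal mitigates.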
By:  Giuseppe Buccheri; Giacomo Bormetti; Fulvio Corsi; Fabrizio Lillo 
Abstract:  We propose a new multivariate conditional correlation model able to deal with data featuring both observational noise and asynchronicity. When modelling high-frequency multivariate financial time series, the presence of both problems and the requirement for positive-definite estimates make the estimation and forecasting of the intraday dynamics of conditional covariance matrices particularly difficult. Our approach tackles all these challenging tasks within a new Gaussian state-space model with score-driven time-varying parameters that can be estimated using standard maximum likelihood methods. As in DCC models, large dimensionality is handled by separating the estimation of correlations from that of individual volatilities. As an interesting outcome of this approach, intraday patterns are recovered without the need for any cross-sectional averaging, allowing us, for instance, to estimate the real-time response of market covariances to macro-news announcements. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.04894&r=ecm 
By:  David Bauder; Taras Bodnar; Nestor Parolya; Wolfgang Schmid 
Abstract:  The paper solves the problem of optimal portfolio choice when the parameters of the asset returns distribution, such as the mean vector and the covariance matrix, are unknown and have to be estimated from historical data on the asset returns. The new approach employs the Bayesian posterior predictive distribution, which is the distribution of the future realization of the asset returns given the observed sample. The parameters of the posterior predictive distribution are functions of the observed data values and, consequently, the solution of the optimization problem is expressed in terms of data only and does not depend on unknown quantities. In contrast, the optimization problem of the traditional approach is based on unknown quantities which are estimated in a second step, leading to a suboptimal solution. We also derive a very useful stochastic representation of the posterior predictive distribution whose application not only leads to the solution of the considered optimization problem, but also provides the posterior predictive distribution of the optimal portfolio return, which is used to construct a prediction interval. A Bayesian efficient frontier, the set of optimal portfolios obtained by employing the posterior predictive distribution, is constructed as well. Theoretically and using real data, we show that the Bayesian efficient frontier outperforms the sample efficient frontier, a common estimator of the set of optimal portfolios known to be over-optimistic. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.03573&r=ecm 
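A minimal simulation-based sketch of the posterior predictive idea (ours, not the paper's method: the paper derives an exact stochastic representation, while here we simply draw from the predictive distribution by composition under an assumed diffuse normal-inverse-Wishart setup and compute global minimum-variance weights):

```python
# Draw from the posterior predictive of asset returns under a diffuse prior
# (Sigma | data ~ IW(n-1, S), mu | Sigma ~ N(xbar, Sigma/n)), then compute
# portfolio weights from the predictive moments rather than plug-in estimates.
import numpy as np
from scipy import stats

def predictive_draws(returns, n_draws=400, seed=0):
    rng = np.random.default_rng(seed)
    n, d = returns.shape
    xbar = returns.mean(axis=0)
    S = (returns - xbar).T @ (returns - xbar)          # scatter matrix
    draws = np.empty((n_draws, d))
    for i in range(n_draws):
        Sigma = stats.invwishart.rvs(df=n - 1, scale=S, random_state=rng)
        mu = rng.multivariate_normal(xbar, Sigma / n)
        draws[i] = rng.multivariate_normal(mu, Sigma)  # one future return draw
    return draws

def gmv_weights(draws):
    """Global minimum-variance weights from the predictive covariance."""
    Sigma_pred = np.cov(draws.T)
    w = np.linalg.solve(Sigma_pred, np.ones(Sigma_pred.shape[0]))
    return w / w.sum()

R = np.random.default_rng(2).normal(0.01, 0.05, size=(120, 3))   # synthetic returns
draws = predictive_draws(R)
w = gmv_weights(draws)
```

Because the predictive draws integrate over parameter uncertainty, the implied covariance is inflated relative to the sample covariance, which is the mechanism behind the over-optimism of the sample efficient frontier noted in the abstract.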
By:  Christopoulos, Dimitris; McAdam, Peter; Tzavalis, Elias 
Abstract:  We suggest a new method, based on copula theory, for dealing with the problem of endogeneity of the threshold variable in single-equation threshold regression models and seemingly unrelated systems of them. This theory enables us to relax the assumption that the threshold variable is normally distributed and to capture the dependence between the error term and the threshold variable in each regime of the model independently of the marginal distribution of the threshold variable. This distribution can be estimated nonparametrically, conditional on the value of the threshold parameter. To estimate the slope and threshold parameters of the model adjusted for the endogeneity of the threshold variable, we suggest a two-step concentrated least squares estimation method in which, in the first step, the threshold parameter is estimated by a search procedure. A Monte Carlo study indicates that the suggested method deals satisfactorily with the endogeneity problem of the threshold variable. As an empirical illustration, we estimate a threshold model of the foreign-trade multiplier conditional on the real exchange rate volatility regime. We suggest a bootstrap procedure to examine whether there are significant differences in the foreign-trade multiplier effects across the two regimes of the model under potential endogeneity of the threshold variable. JEL Classification: C12, C13, C21, C22 
Keywords:  copulas, foreign trade multiplier, Kourtellos et al. (2016), SUR systems, threshold model 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20182136&r=ecm 
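The concentrated least squares search over threshold candidates can be sketched as follows (our illustration of the exogenous-threshold case only; the paper's copula-based endogeneity adjustment is omitted, and all names are ours):

```python
# Concentrated least squares for a two-regime threshold model
# y = b1*x*1{q <= c} + b2*x*1{q > c} + e: for each candidate threshold c on a
# trimmed quantile grid, estimate the slopes by OLS and keep the c with the
# smallest sum of squared residuals.
import numpy as np

def threshold_cls(y, x, q, trim=0.15, n_grid=50):
    grid = np.quantile(q, np.linspace(trim, 1 - trim, n_grid))
    best = (np.inf, None, None)
    for c in grid:
        lo = q <= c
        X = np.column_stack([x * lo, x * ~lo])       # regime-specific regressors
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        ssr = resid @ resid
        if ssr < best[0]:
            best = (ssr, c, beta)
    return best[1], best[2]                          # threshold estimate, regime slopes

rng = np.random.default_rng(3)
q = rng.uniform(size=500)                            # threshold variable
x = rng.normal(size=500)
y = np.where(q <= 0.5, 0.5 * x, 2.0 * x) + 0.1 * rng.normal(size=500)
c_hat, betas = threshold_cls(y, x, q)
```

The trimming keeps a minimum fraction of observations in each regime, a standard safeguard in threshold estimation.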
By:  Elhorst, J. Paul; Gross, Marco; Tereanu, Eugen 
Abstract:  We bring together the spatial and global vector autoregressive (GVAR) classes of econometric models by providing a detailed methodological review of where they meet in terms of structure, interpretation, and estimation methods. We discuss the structure of the cross-section connectivity (weight) matrices used by these models and its implications for estimation. Primarily motivated by the continuously expanding literature on spillovers, we define a broad and measurable concept of spillovers. We formalize it analytically through the indirect effects used in the spatial literature and the impulse responses used in the GVAR literature. Finally, we propose a practical step-by-step approach for applied researchers who need to account for the existence and strength of cross-sectional dependence in the data. This approach aims to support the selection of the appropriate modelling and estimation method and of choices that represent empirical spillovers in a clear and interpretable form. JEL Classification: C33, C38, C51 
Keywords:  GVARs, spatial models, spillovers, weak and strong cross-sectional dependence 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20182134&r=ecm 
By:  Mengheng Li (VU Amsterdam); Siem Jan (S.J.) Koopman (VU Amsterdam; Tinbergen Institute, The Netherlands) 
Abstract:  We consider unobserved components time series models in which the components evolve stochastically over time and are subject to stochastic volatility. This enables the disentanglement of dynamic structures in both the mean and the variance of the observed time series. We develop a simulated maximum likelihood estimation method based on importance sampling and assess its performance in a Monte Carlo study. This modelling framework, with trend, seasonal and irregular components, is applied to quarterly and monthly US inflation in an empirical study. We find that the persistence of quarterly inflation increased during the 2008 financial crisis but has recently returned to its pre-crisis level. The extracted volatility pattern for the trend component can be associated with the energy shocks of the 1970s, while that for the irregular component responds to the monetary regime changes from the 1980s. The scale of the changes in the seasonal component was largest in the early 1990s. Finally, we present empirical evidence of relative improvements in the accuracy of point and density forecasts for monthly US inflation. 
Keywords:  Importance Sampling; Kalman Filter; Monte Carlo Simulation; Stochastic Volatility; Unobserved Components Time Series Model; Inflation 
JEL:  C32 C53 E31 E37 
Date:  2018–03–21 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180027&r=ecm 
By:  Rothfelder, Mario (Tilburg University, School of Economics and Management) 
Abstract:  This thesis is composed of three essays on time-varying parameters and time series networks, each dealing with specific aspects thereof. The thesis starts by proposing, in Chapter 2, a 2SLS-based test for a threshold in models with endogenous regressors. Many economic models are formulated in this way, for example output growth or unemployment rates in different states of the economy. It is therefore necessary to have tools available that can indicate whether such effects exist in the data. Chapter 3 proposes, to the best of my knowledge, the first estimator for the inverse of the long-run covariance matrix of a linear, potentially heteroskedastic stochastic process under unknown sparsity constraints; that is, the econometrician does not know which entries of the inverse are equal to zero and which are not. Such situations naturally arise, for example, when modelling partial correlation networks based on time series data. Finally, in Chapter 4 the thesis empirically investigates how robust two commonly applied network measures, the From- and To-degree, are to the exclusion of central nodes in financial volatility networks. This question is motivated by the current empirical literature, which excludes certain nodes, such as Lehman Brothers, from the analysis. 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiutis:fc7a10c07eee479aac22b36ce45f2037&r=ecm 
By:  Zihao Yuan; Qing Zhang; Yunxia Li 
Abstract:  Spatial association and heterogeneity are two critical topics in research on spatial analysis, geography, statistics and related fields. Although a large number of methods have been proposed and studied, few of them address spatial association in heterogeneous environments. Additionally, most traditional methods are based on distance statistics and spatial weight matrices. However, in some abstract spatial settings, distance statistics cannot be applied because the geographical locations cannot even be observed directly. Moreover, under these circumstances, because spatial positions are unobservable, the design of a weight matrix cannot entirely avoid subjectivity. In this paper, a new entropy-based method, which is data-driven and distribution-free, is proposed to help investigate spatial association while fully accounting for the fact that heterogeneity is widespread. Specifically, this method relies on neither distance statistics nor weight matrices. Asymmetric dependence is adopted to reflect the heterogeneity in spatial association for each individual, and the whole analysis is performed on spatio-temporal data assuming only stationarity and m-dependence over time. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.02334&r=ecm 
By:  Miranda-Agrippino, Silvia (Bank of England and CFM); Ricco, Giovanni (University of Warwick and OFCE-Sciences Po) 
Abstract:  This article reviews Bayesian inference methods for Vector Autoregression models, commonly used priors for economic and financial variables, and applications to structural analysis and forecasting. 
Keywords:  Bayesian inference ; Vector Autoregression Models ; BVAR ; SVAR ; forecasting 
JEL:  C30 C32 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:1159&r=ecm 
By:  Paolo Gorgi (VU Amsterdam); Siem Jan (S.J.) Koopman (VU Amsterdam; Tinbergen Institute, The Netherlands); Mengheng Li (VU Amsterdam) 
Abstract:  We introduce a mixed-frequency score-driven dynamic model for multiple time series in which the score contributions from high-frequency variables are transformed by means of a mixed-data sampling weighting scheme. The resulting dynamic model delivers a flexible and easy-to-implement framework for forecasting a low-frequency time series variable using timely information from high-frequency variables. We verify the in-sample and out-of-sample performance of the model in an empirical study on forecasting U.S. headline inflation. In particular, we forecast monthly inflation using daily oil prices and quarterly inflation using effective federal funds rates. The forecasting results and other findings are promising. Our proposed score-driven dynamic model with mixed-data sampling weighting outperforms competing models in terms of point and density forecasts. 
Keywords:  Factor model; GAS model; Inflation forecasting; MIDAS; Scoredriven model; Weighted maximum likelihood 
JEL:  C42 
Date:  2018–03–21 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180026&r=ecm 
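The mixed-data sampling (MIDAS) weighting scheme mentioned in the abstract is typically a low-dimensional lag polynomial such as the Beta-density scheme sketched below (our illustration with made-up parameter values; the paper applies such weights to score contributions rather than to the raw data):

```python
# Beta-density MIDAS lag weights: K high-frequency lags are collapsed into a
# single low-frequency regressor using weights from a Beta(theta1, theta2)
# kernel evaluated on an interior grid of (0, 1), then normalized to sum to 1.
import numpy as np

def beta_midas_weights(K, theta1=1.0, theta2=5.0):
    u = np.arange(1, K + 1) / (K + 1)                  # interior grid points
    w = u ** (theta1 - 1) * (1 - u) ** (theta2 - 1)    # unnormalized Beta kernel
    return w / w.sum()

def midas_aggregate(x_high, theta1=1.0, theta2=5.0):
    """Collapse the K most recent high-frequency observations (most recent
    last) into one low-frequency regressor, weighting recent data heavily."""
    w = beta_midas_weights(len(x_high), theta1, theta2)
    return w[::-1] @ x_high

w = beta_midas_weights(22)                             # e.g. 22 trading days per month
y = midas_aggregate(np.arange(22.0))                   # upward-trending toy series
```

With theta1 = 1 and theta2 > 1 the weights decline monotonically, so the aggregate leans toward the most recent observations; only two parameters govern the whole lag profile, which keeps the scheme parsimonious.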
By:  Marta Banbura (European Central Bank, Germany); Andries van Vlodrop (VU Amsterdam, the Netherlands) 
Abstract:  We develop a vector autoregressive model with time variation in the mean and the variance. The unobserved time-varying mean is assumed to follow a random walk, and we also link it to long-term Consensus forecasts, similar in spirit to so-called democratic priors. The changes in variance are modelled via stochastic volatility. The proposed Gibbs sampler allows the researcher to use a large cross-sectional dimension in a feasible amount of computational time. The slowly changing mean can account for a number of secular developments such as changing inflation expectations, slowing productivity growth or demographics. We show the good forecasting performance of the model relative to popular alternatives, including standard Bayesian VARs with Minnesota priors, VARs with democratic priors and standard time-varying parameter VARs, for the euro area, the United States and Japan. In particular, incorporating survey forecast information helps to reduce the uncertainty about the unconditional mean and, along with the time variation, improves the long-run forecasting performance of the VAR models. 
Keywords:  Consensus forecasts; forecast evaluation; large cross-sections; state space models. 
JEL:  C11 C32 C53 E37 
Date:  2018–03–21 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180025&r=ecm 
By:  Martin Iseringhausen 
Abstract:  While the volatility of financial returns has been extensively modelled as time-varying, skewness is usually either assumed constant or neglected by assuming symmetric model innovations. However, it has long been understood that accounting for (time-varying) asymmetry as a measure of crash risk is important for both investors and policy makers. This paper extends a standard stochastic volatility model to account for time-varying skewness. We estimate the model by extending traditional Bayesian Markov chain Monte Carlo (MCMC) methods for stochastic volatility models. When applying this model to the returns of four major exchange rates, skewness is found to vary substantially over time. The results support a potential link between carry trading and crash risk. Finally, investors appear to demand compensation for a negatively skewed return distribution. 
Keywords:  Bayesian analysis, crash risk, foreign exchange, time variation 
JEL:  C11 C58 F31 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:18/944&r=ecm 
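A self-contained simulation of the kind of data-generating process such a model has in mind (our stylized sketch with made-up parameter values; it is not the paper's model or its MCMC estimator, and the split-normal asymmetry device is our own choice):

```python
# Simulate returns whose log-variance h_t and skewness state g_t both follow
# AR(1) recursions; asymmetry is induced by tilting the probability of a
# negative draw as a function of g_t.
import numpy as np

def simulate_sv_skew(T=1000, seed=0):
    rng = np.random.default_rng(seed)
    h = np.zeros(T)        # log-variance state (stochastic volatility)
    g = np.zeros(T)        # slowly moving skewness state
    r = np.zeros(T)        # simulated returns
    for t in range(1, T):
        h[t] = 0.95 * h[t - 1] + 0.2 * rng.normal()
        g[t] = 0.98 * g[t - 1] + 0.05 * rng.normal()
        z = abs(rng.normal())
        # Higher g_t raises the chance of a negative return (crash risk).
        sign = -1.0 if rng.uniform() < 0.5 + 0.25 * np.tanh(g[t]) else 1.0
        r[t] = np.exp(h[t] / 2) * sign * z
    return r, h, g

r, h, g = simulate_sv_skew()
```

Estimating h_t and g_t jointly from the returns alone is what requires the extended MCMC machinery the paper develops.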