nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒04‒02
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. DSGE-based priors for BVARs and quasi-Bayesian DSGE estimation By Filippeli, Thomai; Harrison, Richard; Theodoridis, Konstantinos
  2. Wild Bootstrap Randomization Inference for Few Treated Clusters By James G. MacKinnon; Matthew D. Webb
  3. A Score-Driven Conditional Correlation Model for Noisy and Asynchronous Data: an Application to High-Frequency Covariance Dynamics By Giuseppe Buccheri; Giacomo Bormetti; Fulvio Corsi; Fabrizio Lillo
  4. Bayesian mean-variance analysis: Optimal portfolio selection under parameter uncertainty By David Bauder; Taras Bodnar; Nestor Parolya; Wolfgang Schmid
  5. Dealing with endogeneity in threshold models using copulas: an illustration to the foreign trade multiplier By Christopoulos, Dimitris; McAdam, Peter; Tzavalis, Elias
  6. Spillovers in space and time: where spatial econometrics and Global VAR models meet By Elhorst, J. Paul; Gross, Marco; Tereanu, Eugen
  7. Unobserved Components with Stochastic Volatility in U.S. Inflation: Estimation and Signal Extraction By Mengheng Li; Siem Jan (S.J.) Koopman
  8. Three essays on time-varying parameters and time series networks By Rothfelder, Mario
  9. A Nonparametric Approach to Measure the Heterogeneous Spatial Association: Under Spatial Temporal Data By Zihao Yuan; Qing Zhang; Yunxia Li
  10. Bayesian Vector Autoregressions By Miranda-Agrippino, Silvia; Ricco, Giovanni
  11. Forecasting economic time series using score-driven dynamic models with mixed-data sampling By Paolo Gorgi; Siem Jan (S.J.) Koopman; Mengheng Li
  12. Forecasting with Bayesian Vector Autoregressions with Time Variation in the Mean By Marta Banbura; Andries van Vlodrop
  13. THE TIME-VARYING ASYMMETRY OF EXCHANGE RATE RETURNS: A STOCHASTIC VOLATILITY – STOCHASTIC SKEWNESS MODEL By Martin Iseringhausen

  1. By: Filippeli, Thomai (Bank of England); Harrison, Richard (Bank of England); Theodoridis, Konstantinos (Cardiff Business School)
    Abstract: We present a new method for estimating Bayesian vector auto-regression (VAR) models using priors from a dynamic stochastic general equilibrium (DSGE) model. We use the DSGE model priors to determine the moments of an independent Normal-Wishart prior for the VAR parameters. Two hyper-parameters control the tightness of the DSGE-implied priors on the autoregressive coefficients and the residual covariance matrix respectively. Determining these hyper-parameters by selecting the values that maximize the marginal likelihood of the Bayesian VAR provides a method for isolating subsets of DSGE parameter priors that are at odds with the data. We illustrate the ability of our approach to correctly detect incorrect DSGE priors for the variance of structural shocks using a Monte Carlo experiment. We also demonstrate how posterior estimates of the DSGE parameter vector can be recovered from the BVAR posterior estimates: a new ‘quasi-Bayesian’ DSGE estimation. An empirical application on US data reveals economically meaningful differences in posterior parameter estimates when comparing our quasi-Bayesian estimator with Bayesian maximum likelihood. Our method also indicates that the DSGE prior implications for the residual covariance matrix are at odds with the data.
    Keywords: BVAR; DSGE; DSGE-VAR; Gibbs sampling; marginal likelihood evaluation; predictive likelihood evaluation; quasi-Bayesian DSGE estimation
    JEL: C11 C13 C32 C52
    Date: 2018–03–02
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0716&r=ecm
  2. By: James G. MacKinnon (Queen's University); Matthew D. Webb (Carleton University)
    Abstract: When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator (CRVE) can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not too small, they can either over-reject or under-reject seriously when it is. In a previous paper, we showed that procedures based on randomization inference (RI) can work well in such cases. However, RI can be impractical when the number of clusters is small. We propose a bootstrap-based alternative to randomization inference, which mitigates the discrete nature of RI P values in the few-clusters case.
    Keywords: CRVE, grouped data, clustered data, panel data, wild cluster bootstrap, difference-in-differences, DiD, randomization inference
    JEL: C12 C21
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1404&r=ecm
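To fix ideas, the restricted wild cluster bootstrap that this literature builds on can be sketched as follows. This is a generic textbook version with Rademacher weights and CV1 standard errors, not the paper's refined procedure for few treated clusters; all function names are illustrative.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients and residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, y - X @ beta

def crve_t(X, y, cluster, j):
    """t-statistic for coefficient j using the CV1 cluster-robust variance estimator."""
    n, k = X.shape
    beta, u = ols(X, y)
    bread = np.linalg.inv(X.T @ X)
    groups = np.unique(cluster)
    G = len(groups)
    meat = np.zeros((k, k))
    for g in groups:
        s = X[cluster == g].T @ u[cluster == g]   # cluster score
        meat += np.outer(s, s)
    c = G / (G - 1) * (n - 1) / (n - k)           # CV1 small-sample correction
    V = c * bread @ meat @ bread
    return beta[j] / np.sqrt(V[j, j])

def wild_cluster_boot_p(y, x, cluster, B=399, seed=0):
    """Restricted (null-imposed) Rademacher wild cluster bootstrap p-value for
    H0: slope = 0 in y = a + b*x + u, errors clustered by `cluster`.
    A generic sketch, not the paper's procedure for few treated clusters."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    t_hat = crve_t(X, y, cluster, 1)
    u_r = y - y.mean()                            # residuals with the null imposed
    groups = np.unique(cluster)
    exceed = 0
    for _ in range(B):
        flips = rng.choice([-1.0, 1.0], size=groups.size)
        v = flips[np.searchsorted(groups, cluster)]   # one sign flip per cluster
        y_b = y.mean() + u_r * v
        if abs(crve_t(X, y_b, cluster, 1)) >= abs(t_hat):
            exceed += 1
    return (exceed + 1) / (B + 1)
```

With very few treated clusters this generic bootstrap can over- or under-reject, which is precisely the failure the paper's bootstrap-based alternative to randomization inference is designed to mitigate.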
  3. By: Giuseppe Buccheri; Giacomo Bormetti; Fulvio Corsi; Fabrizio Lillo
    Abstract: We propose a new multivariate conditional correlation model able to deal with data featuring both observational noise and asynchronicity. When modelling high-frequency multivariate financial time series, the presence of both problems, together with the requirement for positive-definite estimates, makes the estimation and forecasting of the intraday dynamics of conditional covariance matrices particularly difficult. Our approach tackles all these challenges within a new Gaussian state-space model with score-driven time-varying parameters that can be estimated using standard maximum likelihood methods. As in DCC models, large dimensionality is handled by separating the estimation of correlations from that of individual volatilities. An interesting outcome of this approach is that intraday patterns are recovered without any cross-sectional averaging, allowing one, for instance, to estimate the real-time response of market covariances to macro-news announcements.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.04894&r=ecm
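The score-driven updating mechanism underlying this class of models can be illustrated in the simplest univariate case. The sketch below assumes Gaussian innovations and a single time-varying log-variance, far from the paper's multivariate state-space setting with noise and asynchronicity; parameter values are illustrative.

```python
import numpy as np

def gas_volatility(r, omega=0.0, a=0.1, b=0.95):
    """Minimal univariate score-driven (GAS) recursion for a time-varying
    log-variance f_t under Gaussian innovations:
        f_{t+1} = omega + b * f_t + a * s_t,
    where s_t is the score of the log-likelihood with respect to f_t.
    The paper embeds the score-driven idea in a multivariate state-space
    model; this sketch only shows the driving mechanism."""
    f = np.log(np.var(r))          # initialise at the unconditional log-variance
    path = np.empty(len(r))
    for t, rt in enumerate(r):
        path[t] = f
        # score of log N(r_t; 0, exp(f)) with respect to f
        score = 0.5 * (rt ** 2 * np.exp(-f) - 1.0)
        f = omega + b * f + a * score
    return np.exp(path)            # conditional variances
```

Large squared returns push the log-variance up through the score, small ones pull it down, which is exactly how score-driven models let the data update the time-varying parameter.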
  4. By: David Bauder; Taras Bodnar; Nestor Parolya; Wolfgang Schmid
    Abstract: The paper solves the problem of optimal portfolio choice when the parameters of the asset return distribution, such as the mean vector and the covariance matrix, are unknown and have to be estimated using historical data on the asset returns. The new approach employs the Bayesian posterior predictive distribution, which is the distribution of the future realization of the asset returns given the observed sample. The parameters of the posterior predictive distribution are functions of the observed data and, consequently, the solution of the optimization problem is expressed in terms of the data only and does not depend on unknown quantities. By contrast, the optimization problem of the traditional approach is based on unknown quantities that are estimated in a second step, leading to a suboptimal solution. We also derive a very useful stochastic representation of the posterior predictive distribution whose application not only leads to the solution of the considered optimization problem but also provides the posterior predictive distribution of the optimal portfolio return, which is used to construct a prediction interval. A Bayesian efficient frontier, the set of optimal portfolios obtained by employing the posterior predictive distribution, is constructed as well. Both theoretically and using real data, we show that the Bayesian efficient frontier outperforms the sample efficient frontier, a common estimator of the set of optimal portfolios that is known to be overoptimistic.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.03573&r=ecm
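Under a diffuse prior, the posterior predictive approach reduces in its simplest form to plugging predictive moments into the mean-variance rule: the predictive distribution of the next return is a multivariate t whose mean is the sample mean and whose covariance inflates the sample covariance. The sketch below uses the standard inflation factor for that case and only illustrates the idea; it is not the paper's exact finite-sample solution.

```python
import numpy as np

def predictive_mv_weights(R, gamma=5.0):
    """Unconstrained mean-variance weights based on the posterior predictive
    distribution of returns under a diffuse (Jeffreys-type) prior.
    R: (n, k) array of historical returns; gamma: risk aversion.
    A sketch of the general idea only, not the paper's derivation."""
    n, k = R.shape
    if n <= k + 2:
        raise ValueError("need n > k + 2 for the predictive covariance to exist")
    mu = R.mean(axis=0)                            # predictive mean
    S = np.cov(R, rowvar=False)                    # sample covariance
    infl = (1 + 1 / n) * (n - 1) / (n - k - 2)     # predictive covariance inflation
    Sigma_pred = infl * S
    return np.linalg.solve(Sigma_pred, mu) / gamma # w = Sigma^{-1} mu / gamma
```

Because the inflation factor exceeds one, the predictive-based weights are more conservative than plug-in weights computed from the sample covariance alone, which is the intuition behind the predictive frontier being less overoptimistic than the sample frontier.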
  5. By: Christopoulos, Dimitris; McAdam, Peter; Tzavalis, Elias
    Abstract: We suggest a new method, based on copula theory, for dealing with the endogeneity of the threshold variable in single-equation threshold models and in seemingly unrelated systems of such models. This theory enables us to relax the assumption that the threshold variable is normally distributed and to capture the dependence between the error term and the threshold variable in each regime of the model, independently of the marginal distribution of the threshold variable. This distribution can be estimated non-parametrically, conditional on the value of the threshold parameter. To estimate the slope and threshold parameters of the model adjusted for the endogeneity of the threshold variable, we suggest a two-step concentrated least squares method in which the threshold parameter is estimated by a search procedure in the first step. A Monte Carlo study indicates that the suggested method deals satisfactorily with the endogeneity of the threshold variable. As an empirical illustration, we estimate a threshold model of the foreign-trade multiplier conditional on the real exchange rate volatility regime. We suggest a bootstrap procedure to examine whether there are significant differences in the foreign-trade multiplier effects across the two regimes of the model under potential endogeneity of the threshold variable.
    Keywords: copulas, foreign trade multiplier, Kourtellos et al. (2016), SUR systems, threshold model
    JEL: C12 C13 C21 C22
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20182136&r=ecm
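The first-step search over threshold candidates can be sketched for the exogenous baseline case: concentrate out the slopes by OLS at each candidate threshold and keep the candidate with the smallest sum of squared residuals. The copula-based endogeneity correction that is the paper's contribution is not implemented here.

```python
import numpy as np

def concentrated_ls_threshold(y, x, q, trim=0.15, n_grid=50):
    """First step of a two-step concentrated least-squares estimator: for each
    candidate threshold c (middle quantiles of q), fit the two-regime model
        y = x'b1 * 1{q <= c} + x'b2 * 1{q > c} + e
    by OLS and keep the c with the smallest SSR. Textbook exogenous case only;
    the paper's copula-based endogeneity adjustment is omitted."""
    cands = np.quantile(q, np.linspace(trim, 1 - trim, n_grid))
    X = np.column_stack([np.ones_like(x), x])
    best_c, best_ssr = None, np.inf
    for c in cands:
        lo = q <= c
        # regime-specific regressors: X in the low regime, X in the high regime
        Z = np.column_stack([X * lo[:, None], X * (~lo)[:, None]])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        ssr = resid @ resid
        if ssr < best_ssr:
            best_c, best_ssr = c, ssr
    return best_c, best_ssr
```

Trimming the candidate grid to the middle quantiles of q is the usual device to guarantee enough observations in each regime.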
  6. By: Elhorst, J. Paul; Gross, Marco; Tereanu, Eugen
    Abstract: We bring together the spatial and global vector autoregressive (GVAR) classes of econometric models by providing a detailed methodological review of where they meet in terms of structure, interpretation, and estimation methods. We discuss the structure of the cross-section connectivity (weight) matrices used by these models and its implications for estimation. Primarily motivated by the continuously expanding literature on spillovers, we define a broad and measurable concept of spillovers. We formalize it analytically through the indirect effects used in the spatial literature and the impulse responses used in the GVAR literature. Finally, we propose a practical step-by-step approach for applied researchers who need to account for the existence and strength of cross-sectional dependence in the data. This approach aims to support the selection of the appropriate modelling and estimation method and of choices that represent empirical spillovers in a clear and interpretable form.
    Keywords: GVARs, spatial models, spillovers, weak and strong cross-sectional dependence
    JEL: C33 C38 C51
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20182134&r=ecm
  7. By: Mengheng Li (VU Amsterdam); Siem Jan (S.J.) Koopman (VU Amsterdam; Tinbergen Institute, The Netherlands)
    Abstract: We consider unobserved components time series models where the components evolve stochastically over time and are subject to stochastic volatility. This enables the disentanglement of dynamic structures in both the mean and the variance of the observed time series. We develop a simulated maximum likelihood estimation method based on importance sampling and assess its performance in a Monte Carlo study. This modelling framework, with trend, seasonal and irregular components, is applied to quarterly and monthly US inflation in an empirical study. We find that the persistence of quarterly inflation increased during the 2008 financial crisis but has recently returned to its pre-crisis level. The extracted volatility pattern for the trend component can be associated with the energy shocks of the 1970s, while that for the irregular component responds to the monetary regime changes of the 1980s. The scale of the changes in the seasonal component was largest in the early 1990s. Finally, we present empirical evidence of relative improvements in the accuracy of point and density forecasts for monthly US inflation.
    Keywords: Importance Sampling; Kalman Filter; Monte Carlo Simulation; Stochastic Volatility; Unobserved Components Time Series Model; Inflation
    JEL: C32 C53 E31 E37
    Date: 2018–03–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180027&r=ecm
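The homoskedastic special case of such an unobserved-components model, the local level model with fixed variances, can be filtered with a few lines of Kalman recursions. The paper's contribution is to let these variances themselves follow stochastic-volatility processes and to estimate the resulting model by importance sampling; the sketch below covers only the fixed-variance baseline.

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local-level unobserved-components model
        y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t,
    with FIXED variances sigma2_eps and sigma2_eta (diffuse initialisation
    via a large p0). Homoskedastic special case only; the paper replaces
    the fixed variances with stochastic-volatility processes."""
    a, p = a0, p0
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        f = p + sigma2_eps            # prediction-error variance
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # filtered level mu_{t|t}
        p = p * (1 - k) + sigma2_eta  # predicted state variance for t+1
        filtered[t] = a
    return filtered
```

The filtered level is the model's real-time estimate of trend inflation; smoothing recursions (a backward pass) would give the full-sample signal extraction the abstract refers to.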
  8. By: Rothfelder, Mario (Tilburg University, School of Economics and Management)
    Abstract: This thesis is composed of three essays on time-varying parameters and time series networks, each dealing with specific aspects thereof. The thesis starts by proposing a 2SLS-based test for a threshold in models with endogenous regressors in Chapter 2. Many economic models are formulated in this way, for example output growth or unemployment rates in different states of the economy. It is therefore necessary to have tools available that can indicate whether such effects exist in the data. Chapter 3 proposes, to the best of my knowledge, the first estimator for the inverse of the long-run covariance matrix of a linear, potentially heteroskedastic stochastic process under unknown sparsity constraints. That is, the econometrician does not know which entries of the inverse are equal to zero and which are not. Such situations arise naturally, for example, when modelling partial correlation networks based on time series data. Finally, in Chapter 4 the thesis empirically investigates how robust two commonly applied network measures, the From-degree and the To-degree, are to the exclusion of central nodes in financial volatility networks. This question is motivated by the current empirical literature, which excludes certain nodes, such as Lehman Brothers, from its analysis.
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:tiu:tiutis:fc7a10c0-7eee-479a-ac22-b36ce45f2037&r=ecm
  9. By: Zihao Yuan; Qing Zhang; Yunxia Li
    Abstract: Spatial association and heterogeneity are two central topics in spatial analysis, geography, statistics and related fields. Although many outstanding methods have been proposed and studied, few of them address spatial association in a heterogeneous environment. Additionally, most traditional methods are based on distance statistics and a spatial weight matrix. In some abstract spatial situations, however, distance statistics cannot be applied because the geographical locations cannot be observed directly. Under these circumstances, owing to the invisibility of spatial positions, the design of the weight matrix cannot entirely avoid subjectivity. In this paper, a new entropy-based method, which is data-driven and distribution-free, is proposed to investigate spatial association while fully accounting for the widespread presence of heterogeneity. In particular, the method is not tied to distance statistics or a weight matrix. Asymmetric dependence is adopted to reflect the heterogeneity in spatial association for each individual, and the whole discussion is carried out on spatio-temporal data under the sole assumption of a stationary m-dependent process over time.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.02334&r=ecm
  10. By: Miranda-Agrippino, Silvia (Bank of England and CFM); Ricco, Giovanni (University of Warwick and OFCE - SciencesPo)
    Abstract: This article reviews Bayesian inference methods for Vector Autoregression models, commonly used priors for economic and financial variables, and applications to structural analysis and forecasting.
    Keywords: Bayesian inference ; Vector Autoregression Models ; BVAR ; SVAR ; forecasting
    JEL: C30 C32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1159&r=ecm
  11. By: Paolo Gorgi (VU Amsterdam); Siem Jan (S.J.) Koopman (VU Amsterdam; Tinbergen Institute, The Netherlands); Mengheng Li (VU Amsterdam)
    Abstract: We introduce a mixed-frequency score-driven dynamic model for multiple time series where the score contributions from high-frequency variables are transformed by means of a mixed-data sampling weighting scheme. The resulting dynamic model delivers a flexible and easy-to-implement framework for forecasting a low-frequency time series variable using timely information from high-frequency variables. We verify the in-sample and out-of-sample performance of the model in an empirical study on forecasting U.S. headline inflation. In particular, we forecast monthly inflation using daily oil prices and quarterly inflation using effective federal funds rates. The forecasting results and other findings are promising. Our proposed score-driven dynamic model with mixed-data sampling weighting outperforms competing models in terms of point and density forecasts.
    Keywords: Factor model; GAS model; Inflation forecasting; MIDAS; Score-driven model; Weighted maximum likelihood
    JEL: C42
    Date: 2018–03–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180026&r=ecm
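A common mixed-data sampling weighting scheme of the kind referred to here is the exponential Almon lag polynomial. The sketch below shows the weights and a simple aggregation step with illustrative parameter values; the paper applies such weights to score contributions inside a score-driven model, which is not reproduced here.

```python
import numpy as np

def exp_almon_weights(K, theta1, theta2):
    """Exponential Almon lag polynomial, a standard MIDAS weighting scheme:
    w_k proportional to exp(theta1*k + theta2*k^2) for k = 1..K,
    normalised to sum to one."""
    k = np.arange(1, K + 1, dtype=float)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_high, K, theta1=0.1, theta2=-0.05):
    """Collapse the K most recent high-frequency observations (e.g. daily oil
    prices within a month) into one low-frequency regressor, with lag 1 being
    the most recent observation. Parameter values are illustrative."""
    w = exp_almon_weights(K, theta1, theta2)
    return w @ x_high[-1:-K - 1:-1]   # reversed slice: most recent first
```

With theta2 < 0 the weights decay in the lag, so the most recent daily observations carry the most information for the monthly forecast.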
  12. By: Marta Banbura (European Central Bank, Germany); Andries van Vlodrop (VU Amsterdam, the Netherlands)
    Abstract: We develop a vector autoregressive model with time variation in the mean and the variance. The unobserved time-varying mean is assumed to follow a random walk, and we also link it to long-term Consensus forecasts, similar in spirit to so-called democratic priors. The changes in variance are modelled via stochastic volatility. The proposed Gibbs sampler allows the researcher to use a large cross-sectional dimension in a feasible amount of computational time. The slowly changing mean can account for a number of secular developments such as changing inflation expectations, slowing productivity growth or demographics. We show the good forecasting performance of the model relative to popular alternatives, including standard Bayesian VARs with Minnesota priors, VARs with democratic priors and standard time-varying parameter VARs, for the euro area, the United States and Japan. In particular, incorporating survey forecast information helps to reduce the uncertainty about the unconditional mean and, along with the time variation, improves the long-run forecasting performance of the VAR models.
    Keywords: Consensus forecasts; forecast evaluation; large cross-sections; state space models.
    JEL: C11 C32 C53 E37
    Date: 2018–03–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180025&r=ecm
  13. By: Martin Iseringhausen (-)
    Abstract: While the volatility of financial returns has been extensively modelled as time-varying, skewness is usually either assumed constant or neglected by assuming symmetric model innovations. However, it has long been understood that accounting for (time-varying) asymmetry as a measure of crash risk is important for both investors and policy makers. This paper extends a standard stochastic volatility model to account for time-varying skewness. We estimate the model by extensions of traditional Bayesian Markov Chain Monte Carlo (MCMC) methods for stochastic volatility models. When applying this model to the returns of four major exchange rates, skewness is found to vary substantially over time. The results support a potential link between carry trading and crash risk. Finally, investors appear to demand compensation for a negatively skewed return distribution.
    Keywords: Bayesian analysis, crash risk, foreign exchange, time variation
    JEL: C11 C58 F31
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:18/944&r=ecm
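A data-generating sketch helps to see what "stochastic volatility with stochastic skewness" means. Below, both the log-variance and a shape state follow assumed AR(1) dynamics, and asymmetry enters through a two-piece (split) normal innovation. This is purely illustrative, under assumed dynamics and parameter values, and is neither the paper's specification nor its MCMC estimator.

```python
import numpy as np

def simulate_sv_skew(T=1000, mu=-1.0, phi=0.95, sigma_h=0.2,
                     phi_d=0.98, sigma_d=0.05, seed=0):
    """Simulate returns with BOTH stochastic volatility and stochastic skewness:
    the log-variance h_t and a shape state d_t each follow AR(1) processes, and
    the innovation is a two-piece normal whose asymmetry is governed by d_t.
    An illustrative data-generating sketch only."""
    rng = np.random.default_rng(seed)
    h, d = mu, 0.0
    r = np.empty(T)
    for t in range(T):
        h = mu + phi * (h - mu) + sigma_h * rng.standard_normal()  # log-variance
        d = phi_d * d + sigma_d * rng.standard_normal()            # shape state
        z = rng.standard_normal()
        # two-piece normal: stretch one tail depending on sign(z) and shape d_t
        scale = np.exp(d) if z < 0 else np.exp(-d)
        r[t] = np.exp(h / 2) * z * scale
    return r
```

When d_t > 0 the left tail is stretched, producing negative skewness, so slow movements in d_t generate the kind of time-varying crash risk the paper models.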

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.