New Economics Papers on Econometrics
By: | Marcellino, Massimiliano; Sivec, Vasja |
Abstract: | Large scale factor models have often been adopted both for forecasting and to identify structural shocks and their transmission mechanism. Mixed frequency factor models have also been used in a reduced-form context, but not for structural applications, and in this paper we close this gap. First, we adapt a simple technique developed in a small scale mixed frequency VAR and factor context to the large scale case, and compare the resulting model with existing alternatives. Second, using Monte Carlo experiments, we show that the finite sample properties of the mixed frequency factor model estimation procedure are quite good. Finally, to illustrate the method we present three empirical examples dealing with the effects of, respectively, monetary, oil, and fiscal shocks. |
Keywords: | estimation; identification; impulse response function; mixed frequency data; Structural FAVAR; temporal aggregation |
JEL: | C32 C43 E32 |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:10610&r=ecm |
By: | Quiroz, Matias (Research Department, Central Bank of Sweden); Villani, Mattias (Linköpings University); Kohn, Robert (Australian School of Business, University of New South Wales) |
Abstract: | The computing time for Markov Chain Monte Carlo (MCMC) algorithms can be prohibitively large for datasets with many observations, especially when the data density for each observation is costly to evaluate. We propose a framework where the likelihood function is estimated from a random subset of the data, resulting in substantially fewer density evaluations. The data subsets are selected using an efficient Probability Proportional-to-Size (PPS) sampling scheme, where the inclusion probability of an observation is proportional to an approximation of its contribution to the log-likelihood function. Three broad classes of approximations are presented. The proposed algorithm is shown to sample from a distribution that is within O(m^-1/2) of the true posterior, where m is the subsample size. Moreover, the constant in the O(m^-1/2) error bound of the likelihood is shown to be small and the approximation error is demonstrated to be negligible even for a small m in our applications. We propose a simple way to adaptively choose the sample size m during the MCMC to optimize sampling efficiency for a fixed computational budget. The method is applied to a bivariate probit model on a data set with half a million observations, and on a Weibull regression model with random effects for discrete-time survival data. |
Keywords: | Bayesian inference; Markov Chain Monte Carlo; Pseudo-marginal MCMC; Big Data; Probability Proportional-to-Size sampling; Numerical integration. |
JEL: | C11 C13 C15 C83 |
Date: | 2015–03–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0297&r=ecm |
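To make the subsampling idea concrete, here is a minimal Python sketch, assuming a Hansen-Hurwitz (PPS-with-replacement) estimator of the full-data log-likelihood; the Bernoulli example and the pilot-value proxy are illustrative stand-ins, not the paper's applications or its three proxy classes.

    import numpy as np

    def pps_loglik_estimate(loglik_terms, proxy, m, rng):
        """Hansen-Hurwitz estimate of sum_i l_i from m PPS-with-replacement
        draws, with inclusion probabilities proportional to |proxy|."""
        p = np.abs(proxy)
        p = p / p.sum()
        idx = rng.choice(len(p), size=m, replace=True, p=p)
        return np.mean(loglik_terms(idx) / p[idx])   # each draw contributes l_i / p_i

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=500_000)             # toy Bernoulli data
    theta, theta_pilot = 0.6, 0.5
    terms = lambda i: y[i] * np.log(theta) + (1 - y[i]) * np.log(1 - theta)
    proxy = y * np.log(theta_pilot) + (1 - y) * np.log(1 - theta_pilot)
    full = terms(np.arange(len(y))).sum()            # exact log-likelihood, for comparison
    print(full, pps_loglik_estimate(terms, proxy, m=5_000, rng=rng))

Only m of the 500,000 density terms are evaluated per call, which is the source of the computational savings described above.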
By: | Droumaguet, Matthieu; Warne, Anders; Woźniak, Tomasz |
Abstract: | We derive restrictions for Granger noncausality in Markov-switching vector autoregressive models and also show under which conditions a variable does not affect the forecast of the hidden Markov process. Based on a Bayesian approach to evaluating the hypotheses, the computational tools for posterior inference include a novel block Metropolis-Hastings sampling algorithm for the estimation of the restricted models. We analyze a system of monthly US data on money and income. The test results in MS-VARs contradict those in linear VARs: the money aggregate M1 is useful for forecasting income and for predicting the next period's state. |
Keywords: | Bayesian hypothesis testing, block Metropolis-Hastings sampling, Markov-switching models, mixture models, posterior odds ratio
JEL: | C11 C12 C32 C53 E32
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20151794&r=ecm |
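The claim that M1 helps predict the next period's state rests on the filtered probabilities of the hidden Markov chain. A minimal sketch of the Hamilton filter for a two-state Gaussian mean-switching model follows; all parameter values are illustrative, and the paper's models are multivariate and estimated by MCMC.

    import numpy as np

    def hamilton_filter(y, P, mu, sigma):
        """Filtered state probabilities for a two-state Gaussian mean-switching
        model; P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
        k = len(mu)
        xi = np.full(k, 1.0 / k)                  # initial state distribution
        filtered = np.zeros((len(y), k))
        for t in range(len(y)):
            pred = xi @ P                          # one-step-ahead state probabilities
            lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma
            xi = pred * lik                        # Bayes update with the Gaussian density
            xi /= xi.sum()
            filtered[t] = xi
        return filtered, filtered[-1] @ P          # next period's state prediction

    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(1.0, 1.0, 100), rng.normal(-1.0, 2.0, 50)])
    P = np.array([[0.95, 0.05], [0.10, 0.90]])
    filt, next_state = hamilton_filter(y, P, mu=np.array([1.0, -1.0]),
                                       sigma=np.array([1.0, 2.0]))
    print(next_state)                              # Pr(s_{T+1} = regime 0, 1 | data)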
By: | David Pacini; Frank Windmeijer |
Abstract: | We derive moment conditions for dynamic AR(1) panel data models when values of the outcome variable are missing. In this context, commonly used estimators only use data on individuals observed for at least three consecutive periods. We derive additional moment conditions for individuals observed in at least three non-consecutive periods and use them for GMM estimation of the parameters. |
Keywords: | Panel Data, Missing Values. |
JEL: | C33 C51 |
Date: | 2015–05–26 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:15/660&r=ecm |
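As a baseline for the moment conditions involved, consider the AR(1) panel model y_{it} = \alpha y_{i,t-1} + \eta_i + \varepsilon_{it}. The standard first-differenced (Arellano-Bond) moments, which require three consecutive observations (periods t, t-1 and t-2), are

\[
  \mathbb{E}\bigl[\, y_{i,t-s}\,(\Delta y_{it} - \alpha\,\Delta y_{i,t-1}) \,\bigr] = 0, \qquad s \ge 2.
\]

The paper's contribution is the analogous set of conditions for individuals with at least three non-consecutive observations; their exact form is not reproduced here.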
By: | Abhimanyu Gupta |
Abstract: | Autoregressive spectral density estimation for stationary random fields on a regular spatial lattice has the advantage of providing a guaranteed positive-definite estimate even when suitable edge-effect correction is employed. We consider processes with a half-plane infinite autoregressive representation and use truncated versions of this to estimate the spectral density. The truncation length is allowed to diverge in all dimensions in order to avoid the potential bias which would accrue due to truncation at a fixed lag-length. Consistency and strong consistency of the proposed estimator, both uniform in frequencies, are established. Under suitable conditions the asymptotic distribution of the estimate is shown to be zero-mean normal and independent at fixed distinct frequencies, mirroring the behaviour for time series. The key to the results is the covariance structure of stationary random fields defined on regularly spaced lattices. We study this in detail and show the covariance matrix to satisfy a generalization of the Toeplitz property familiar from time series analysis. A small Monte Carlo experiment examines finite sample performance. |
Date: | 2015–05–01 |
URL: | http://d.repec.org/n?u=RePEc:esx:essedp:767&r=ecm |
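For intuition, the one-dimensional time-series analogue of the estimator is easy to state: fit a truncated AR(p) by Yule-Walker and plug the coefficients into the AR spectral density. A sketch under that simplification follows; the paper's half-plane random-field case generalizes this to higher-dimensional lattices.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def ar_spectral_estimate(x, p, freqs):
        """f(w) = (sigma^2 / 2pi) * |1 - sum_j a_j e^{-i j w}|^{-2}, with the
        AR(p) coefficients a_j solved from the Yule-Walker equations."""
        x = x - x.mean()
        n = len(x)
        gamma = np.array([x[: n - h] @ x[h:] / n for h in range(p + 1)])
        a = solve_toeplitz(gamma[:p], gamma[1 : p + 1])    # Yule-Walker solve
        sigma2 = gamma[0] - a @ gamma[1 : p + 1]           # innovation variance
        j = np.arange(1, p + 1)
        transfer = 1 - np.exp(-1j * np.outer(freqs, j)) @ a
        return sigma2 / (2 * np.pi * np.abs(transfer) ** 2)

    rng = np.random.default_rng(2)
    x = np.zeros(2000)
    e = rng.normal(size=2000)
    for t in range(1, 2000):                               # simulate an AR(1)
        x[t] = 0.7 * x[t - 1] + e[t]
    print(ar_spectral_estimate(x, p=10, freqs=np.linspace(0.1, np.pi, 5)))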
By: | Giuseppe De Luca (University of Palermo, Italy); Jan Magnus (Faculty of Economics and Business Administration, VU University Amsterdam, the Netherlands); Franco Peracchi (University of Tor Vergata, Rome, Italy) |
Abstract: | This paper studies what happens when we move from a short regression to a long regression (or vice versa), when the long regression is shorter than the data-generation process. In the special case where the long regression equals the data-generation process, the least-squares estimators have smaller bias (in fact zero bias) but larger variances in the long regression than in the short regression. But if the long regression is also misspecified, the bias may not be smaller. We provide bias and mean squared error comparisons and study the dependence of the differences on the misspecification parameter. |
Keywords: | Omitted variables; Misspecification; Least-squares estimators; Bias; Mean squared error |
JEL: | C13 C51 C52 |
Date: | 2015–05–22 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150061&r=ecm |
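The short-versus-long comparison is easy to reproduce numerically. Below is a sketch assuming a three-regressor data-generation process in which even the "long" regression omits one variable; all coefficient values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    n, reps, beta, g1, g2 = 200, 2000, 1.0, 0.8, 0.5
    est_short, est_long = [], []
    for _ in range(reps):
        # correlated regressors, so omission matters
        x = rng.normal(size=n)
        z1 = 0.6 * x + rng.normal(size=n)
        z2 = 0.4 * x + 0.4 * z1 + rng.normal(size=n)   # omitted even in the "long" model
        y = beta * x + g1 * z1 + g2 * z2 + rng.normal(size=n)
        est_short.append(np.linalg.lstsq(x[:, None], y, rcond=None)[0][0])
        est_long.append(np.linalg.lstsq(np.column_stack([x, z1]), y, rcond=None)[0][0])
    for name, e in [("short", est_short), ("long", est_long)]:
        e = np.asarray(e)
        print(name, "bias:", e.mean() - beta, "var:", e.var())

Because both regressions are misspecified, neither bias is zero, which is the paper's point: moving to the longer regression need not reduce bias.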
By: | Forni, Mario; Hallin, Marc; Lippi, Marco; Zaffaroni, Paolo |
Abstract: | Factor models, all particular cases of the Generalized Dynamic Factor Model (GDFM) introduced in Forni, Hallin, Lippi and Reichlin (2000), have become extremely popular in the theory and practice of large panels of time series data. The asymptotic properties (consistency and rates) of the corresponding estimators have been studied in Forni, Hallin, Lippi and Reichlin (2004). Those estimators, however, rely on Brillinger's dynamic principal components, and thus involve two-sided filters, which leads to rather poor forecasting performances. No such problem arises with estimators based on standard (static) principal components, which have been dominant in this literature. On the other hand, the consistency of those static estimators requires the assumption that the space spanned by the factors has finite dimension, which severely restricts the generality afforded by the GDFM. This paper derives the asymptotic properties of a semiparametric estimator of the loadings and common shocks based on one-sided filters recently proposed by Forni, Hallin, Lippi and Zaffaroni (2015). Consistency and exact rates of convergence are obtained for this estimator, under a general class of GDFMs that does not require a finite-dimensional factor space. A Monte Carlo experiment corroborates those theoretical results and demonstrates the excellent performance of those estimators in out-of-sample forecasting. |
Keywords: | Consistency and rates; generalized dynamic factor models; high-dimensional time series; one-sided representations of dynamic factor models; vector processes with singular spectral density
JEL: | C0 C01 E0 |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:10618&r=ecm |
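For contrast with the paper's one-sided estimator, the static principal-components baseline it improves upon can be sketched in a few lines; the simulated panel is illustrative, and this is not the Forni-Hallin-Lippi-Zaffaroni estimator.

    import numpy as np

    def static_pc_factors(X, r):
        """Static principal-components estimates for an n x T panel X: loadings
        are the top r eigenvectors of the covariance, factors the projections."""
        Xc = X - X.mean(axis=1, keepdims=True)
        S = Xc @ Xc.T / Xc.shape[1]          # n x n sample covariance
        w, V = np.linalg.eigh(S)
        V = V[:, ::-1][:, :r]                # eigenvectors of the r largest eigenvalues
        return V.T @ Xc, V                   # (r x T factors, n x r loadings)

    rng = np.random.default_rng(4)
    T, n, r = 300, 100, 2
    f = rng.normal(size=(r, T))
    lam = rng.normal(size=(n, r))
    X = lam @ f + 0.5 * rng.normal(size=(n, T))
    fac, load = static_pc_factors(X, r)
    print(fac.shape, load.shape)

The consistency of this baseline requires a finite-dimensional factor space, which is exactly the restriction the paper's semiparametric estimator removes.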
By: | Eleanor Sanderson; Frank Windmeijer |
Abstract: | We consider testing for weak instruments in a model with multiple endogenous variables. Unlike Stock and Yogo (2005), who considered a weak instruments problem where the rank of the matrix of reduced-form parameters is near zero, here we consider a weak instruments problem of a near rank reduction of one in the matrix of reduced-form parameters. For example, in a two-variable model, we consider weak instrument asymptotics of the form π1 = δπ2 + c/sqrt(n), where π1 and π2 are the parameters in the two reduced-form equations, c is a vector of constants and n is the sample size. We investigate the use of a conditional first-stage F-statistic along the lines of the proposal by Angrist and Pischke (2009) and show that, unless δ = 0, the variance in the denominator of their F-statistic needs to be adjusted in order to obtain the correct asymptotic distribution when testing the hypothesis H0: π1 = δπ2. We show that a corrected conditional F-statistic is equivalent to the Cragg and Donald (1993) minimum-eigenvalue rank test statistic, and is informative about the maximum total relative bias of the 2SLS estimator and about the size distortions of the Wald tests. When δ = 0 in the two-variable model, or when there are more than two endogenous variables, the conditional first-stage F-statistics provide further information, over and above the Cragg-Donald statistic, about the nature of the weak instrument problem. |
Keywords: | weak instruments, multiple endogenous variables, F-test. |
JEL: | C12 C36 |
Date: | 2014–06–30 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:15/644&r=ecm |
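Here is a sketch of the Cragg-Donald minimum-eigenvalue statistic to which the corrected conditional F-statistic is shown to be equivalent, assuming exogenous controls have already been partialled out of both X and Z; the near rank-one first stage mimics the asymptotics described above.

    import numpy as np

    def cragg_donald(X, Z):
        """Cragg-Donald minimum-eigenvalue statistic for endogenous regressors X
        (n x k) given instruments Z (n x L)."""
        n, L = Z.shape
        PZX = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]     # P_Z X
        Sigma = X.T @ (X - PZX) / (n - L)                  # first-stage error covariance
        w, V = np.linalg.eigh(Sigma)
        Sih = V @ np.diag(w ** -0.5) @ V.T                 # Sigma^{-1/2}
        M = Sih @ (X.T @ PZX) @ Sih / L
        return np.min(np.linalg.eigvalsh((M + M.T) / 2))

    rng = np.random.default_rng(5)
    n, L = 1000, 4
    Z = rng.normal(size=(n, L))
    pi2 = np.array([0.3, 0.05, 0.0, 0.0])
    pi1 = 0.9 * pi2 + 0.01                                 # near rank-one reduced form
    X = np.column_stack([Z @ pi1, Z @ pi2]) + rng.normal(size=(n, 2))
    print(cragg_donald(X, Z))

A small minimum eigenvalue flags the near rank reduction even when each first-stage F-statistic taken alone looks healthy.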
By: | Toshihiro Abe; Christophe Ley |
Keywords: | circular-linear data; directional-linear data; distributions on the cylinder; sine-skewed von Mises distributions; Weibull distributions
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/200090&r=ecm |
By: | Tommaso Proietti (University of Rome “Tor Vergata” and CREATES); Alessandra Luati (University of Bologna)
Abstract: | The paper introduces the generalised partial autocorrelation (GPAC) coefficients of a stationary stochastic process. The latter are related to the generalised autocovariances, the inverse Fourier transform coefficients of a power transformation of the spectral density function. By interpreting the generalised partial autocorrelations as the partial autocorrelation coefficients of an auxiliary process, we derive their properties and relate them to essential features of the original process. Based on a parameterisation suggested by Barndorff-Nielsen and Schou (1973) and on the Whittle likelihood, we develop an estimation strategy for the GPAC coefficients. We further prove that the GPAC coefficients can be used to estimate the mutual information between the past and the future of a time series. |
Keywords: | Generalised autocovariance, Spectral models, Whittle likelihood, Reparameterisation |
JEL: | C22 C52 |
Date: | 2015–05–25 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-24&r=ecm |
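Since the GPAC coefficients are interpreted as partial autocorrelations of an auxiliary process, the map from autocovariances to partial autocorrelations is the computational core. That map is the Durbin-Levinson recursion, sketched below; the generalised-autocovariance step itself is not reproduced.

    import numpy as np

    def durbin_levinson_pacf(gamma, max_lag):
        """Partial autocorrelations from autocovariances gamma[0..max_lag]
        via the Durbin-Levinson recursion."""
        phi = np.zeros((max_lag + 1, max_lag + 1))
        pacf = np.zeros(max_lag + 1)
        v = gamma[0]                                   # prediction error variance
        for k in range(1, max_lag + 1):
            acc = gamma[k] - phi[k - 1, 1:k] @ gamma[1:k][::-1]
            phi[k, k] = acc / v
            pacf[k] = phi[k, k]
            for j in range(1, k):                      # update lower-order coefficients
                phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
            v *= 1 - phi[k, k] ** 2
        return pacf[1:]

    # AR(1) with coefficient 0.7: pacf(1) = 0.7, higher lags zero
    rho = 0.7 ** np.arange(6)
    print(durbin_levinson_pacf(rho, 5))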
By: | Antoine Kornprobst; Raphael Douady |
Abstract: | The aim of this work is to build financial crisis indicators based on market data time series. After choosing an optimal size for a rolling window, the market data is seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. Our indicators deal with the spectral properties of these covariance and correlation matrices. Our basic financial intuition is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, and when volatility starts to increase, a crisis event might be around the corner. Our indicators are mainly of two types. The first is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix. As reference distributions we use the theoretical Marchenko-Pastur distribution and, mainly, simulated ones obtained from a random matrix of the same size as the empirical rolling matrix, composed of Gaussian or Student-t coefficients with some simulated correlations. The idea behind this first type of indicator is that when the empirical distribution of the spectrum of the covariance matrix deviates from the reference in the sense of Hellinger, a crisis may be forthcoming. The second type of indicator is based on the spectral radius and the trace of the covariance and correlation matrices, as a means of directly studying the volatility and correlations inside the market. The idea behind this second type is that large eigenvalues are a sign of dynamic instability. |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1506.00806&r=ecm |
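A sketch of the first indicator's basic computation follows, assuming a Hellinger distance between a histogram of sample eigenvalues and the Marchenko-Pastur density on a common grid; the rolling-window machinery and the simulated reference matrices are omitted.

    import numpy as np

    def marchenko_pastur_pdf(x, q, sigma2=1.0):
        """Marchenko-Pastur density for aspect ratio q = p/n <= 1."""
        a = sigma2 * (1 - np.sqrt(q)) ** 2
        b = sigma2 * (1 + np.sqrt(q)) ** 2
        pdf = np.zeros_like(x)
        inside = (x > a) & (x < b)
        pdf[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) \
            / (2 * np.pi * sigma2 * q * x[inside])
        return pdf

    def hellinger(f, g):
        """Hellinger distance between two discretized densities on a common grid."""
        f, g = f / f.sum(), g / g.sum()
        return np.sqrt(max(0.0, 1 - np.sum(np.sqrt(f * g))))

    rng = np.random.default_rng(6)
    p_dim, n = 50, 500
    R = rng.normal(size=(n, p_dim))                       # iid data: eigenvalues follow MP
    eigs = np.linalg.eigvalsh(R.T @ R / n)
    grid = np.linspace(1e-3, 4, 200)
    hist, edges = np.histogram(eigs, bins=grid, density=True)
    mp = marchenko_pastur_pdf(0.5 * (edges[:-1] + edges[1:]), q=p_dim / n)
    print(hellinger(hist, mp))                            # near zero under the null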
By: | Eric Delattre; Richard Moussa (Université de Cergy-Pontoise, THEMA) |
Abstract: | In order to assess causality between binary economic outcomes, we consider the estimation of a bivariate dynamic probit model on panel data that has the particularity of accounting for the initial conditions of the dynamic process. Because the likelihood function involves an intractable two-dimensional integral, we use an approximation method: the adaptive Gauss-Hermite quadrature method proposed by Liu and Pierce (1994). For the accuracy of the method, and to reduce computing time, we derive the gradient of the log-likelihood and the Hessian of the integrand. The estimation method has been implemented using the d1 method of the Stata software. We validate our estimation method empirically by applying it to a simulated data set. We also analyze the impact of the number of quadrature points on the estimates and on the duration of the estimation process. We conclude that beyond 16 quadrature points on our simulated data set, the relative differences in the estimated coefficients are around 0.01%, while the computing time grows exponentially. |
Keywords: | Causality; Bivariate Dynamic Probit; Gauss-Hermite Quadrature; Simulated Likelihood; Gradient; Hessian; Stata |
JEL: | C5 C6 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:ema:worpap:2015-04&r=ecm |
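A minimal illustration of Gauss-Hermite quadrature, the building block of the method; the adaptive variant of Liu and Pierce additionally recentres and rescales the nodes around the mode of the integrand, which is not done in this sketch.

    import numpy as np

    # Gauss-Hermite quadrature approximates integral f(x) exp(-x^2) dx; to take
    # an expectation under N(mu, s^2), substitute v = mu + sqrt(2) * s * x.
    nodes, weights = np.polynomial.hermite.hermgauss(16)

    def gh_normal_expectation(g, mu, s):
        """E[g(V)] for V ~ N(mu, s^2), by 16-point Gauss-Hermite quadrature."""
        v = mu + np.sqrt(2.0) * s * nodes
        return (weights * g(v)).sum() / np.sqrt(np.pi)

    # check against the closed form E[exp(V)] = exp(mu + s^2/2)
    print(gh_normal_expectation(np.exp, 0.3, 0.8), np.exp(0.3 + 0.8 ** 2 / 2))

The paper's two-dimensional integral over the random effects is handled with a product rule of such nodes, which is why computing time grows quickly in the number of quadrature points.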
By: | Thibault Fally |
Abstract: | The gravity equation for trade flows is one of the most successful empirical models in economics and has long played a central role in the trade literature (Anderson, 2011). Different approaches to estimate the gravity equation, i.e. reduced-form or more structural, have been proposed. This paper examines the role of adding-up constraints as the key difference between structural gravity with "multilateral resistance" indexes and reduced-form gravity with simple fixed effects by exporter and importer. In particular, estimating gravity equations using the Poisson Pseudo-Maximum-Likelihood Estimator (Poisson PML) with fixed effects automatically satisfies these constraints and is consistent with the introduction of "multilateral resistance" indexes as in Anderson and van Wincoop (2003). |
JEL: | C13 C50 F10 F15 |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:21212&r=ecm |
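The adding-up property is easy to verify numerically: the first-order conditions of Poisson PML with exporter and importer dummies force fitted trade to match observed trade country by country. A sketch on simulated data follows; the gravity specification and parameter values are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def poisson_pml(X, y):
        """Poisson pseudo-maximum likelihood: maximise sum_i (y_i x_i'b - exp(x_i'b))."""
        negll = lambda b: np.sum(np.exp(X @ b) - y * (X @ b))
        grad = lambda b: X.T @ (np.exp(X @ b) - y)
        hess = lambda b: (X * np.exp(X @ b)[:, None]).T @ X
        res = minimize(negll, np.zeros(X.shape[1]), jac=grad, hess=hess,
                       method="Newton-CG")
        return res.x

    rng = np.random.default_rng(7)
    n_c = 10                                                        # countries
    pairs = [(i, j) for i in range(n_c) for j in range(n_c) if i != j]
    exp_id = np.array([i for i, _ in pairs])
    imp_id = np.array([j for _, j in pairs])
    dist = rng.uniform(1, 5, len(pairs))
    D_exp = (exp_id[:, None] == np.arange(1, n_c)).astype(float)    # exporter dummies
    D_imp = (imp_id[:, None] == np.arange(1, n_c)).astype(float)    # importer dummies
    X = np.column_stack([np.ones(len(pairs)), -np.log(dist), D_exp, D_imp])
    b_true = np.concatenate([[0.5, 1.0], rng.normal(0, 0.3, 2 * (n_c - 1))])
    y = rng.poisson(np.exp(X @ b_true))
    fitted = np.exp(X @ poisson_pml(X, y))
    # first-order conditions force fitted exports to equal observed exports by origin
    print(np.abs(np.bincount(exp_id, fitted) - np.bincount(exp_id, y)).max())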
By: | Fabiana Gomez; David Pacini |
Abstract: | We investigate the problem of counting biased forecasters among a group of unbiased and biased forecasters of macroeconomic variables. The innovation is to implement a procedure controlling for the expected proportion of unbiased forecasters that could be erroneously classified as biased (i.e., the false discovery rate). Monte Carlo exercises illustrate the relevance of controlling the false discovery rate in this context. Using data from the Survey of Professional Forecasters, we find that up to 7 out of 10 forecasters classified as biased by a procedure not controlling the false discovery rate may actually be unbiased. |
Keywords: | Biased Forecasters, Multiple Testing, False Discovery Rate. |
JEL: | C12 C23 E17 |
Date: | 2015–05–27 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:15/661&r=ecm |
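The standard procedure for controlling the expected proportion of false discoveries is Benjamini-Hochberg; here is a sketch, assuming independent p-values from forecaster-level unbiasedness tests (the paper's exact multiple-testing procedure may differ).

    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        """Indices rejected by the Benjamini-Hochberg step-up procedure,
        controlling the false discovery rate at level alpha."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        m = len(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        if not below.any():
            return np.array([], dtype=int)
        k = np.max(np.nonzero(below)[0])      # largest i with p_(i) <= alpha * i / m
        return order[: k + 1]

    # 90 unbiased forecasters (uniform p-values) plus 10 clearly biased ones
    rng = np.random.default_rng(8)
    p = np.concatenate([rng.uniform(size=90), rng.uniform(0, 0.001, size=10)])
    print(benjamini_hochberg(p, alpha=0.10))

A naive rule rejecting every p-value below 0.05 would classify roughly five of the ninety unbiased forecasters as biased, which is the misclassification the paper quantifies.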
By: | Adam D. Bull |
Abstract: | In quantitative finance, we often fit a parametric semimartingale model to asset prices. To ensure our model is correct, we must then perform goodness-of-fit tests. In this paper, we give a new goodness-of-fit test for volatility-like processes, which is easily applied to a variety of semimartingale models. In each case, we reduce the problem to the detection of a semimartingale observed under noise. In this setting, we then describe a wavelet-thresholding test, which obtains adaptive and near-optimal detection rates. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1506.00088&r=ecm |
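A sketch of the basic wavelet-thresholding ingredient, assuming one-level Haar coefficients and the universal threshold; the paper's test is adaptive across scales and handles observation under noise, which this toy version does not.

    import numpy as np

    def haar_soft_threshold(x, sigma):
        """One-level Haar detail coefficients soft-thresholded at the
        universal level sigma * sqrt(2 log n)."""
        n = len(x) // 2 * 2
        detail = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        t = sigma * np.sqrt(2 * np.log(len(detail)))
        return np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)

    rng = np.random.default_rng(9)
    x = rng.normal(size=1024)                     # pure noise: nothing should survive
    x[700:720] += 6.0 * (-1.0) ** np.arange(20)   # add a localized high-frequency burst
    surviving = np.nonzero(haar_soft_threshold(x, sigma=1.0))[0]
    print(surviving)                              # indices near 350 flag the burst

Any coefficient surviving the threshold is evidence of structure beyond the null, which is the detection logic underlying the goodness-of-fit test.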
By: | Chiu, Ching-Wai (Jeremy) (Bank of England); Mumtaz, Haroon (Queen Mary University of London); Pinter, Gabor (Bank of England) |
Abstract: | In this paper, we provide evidence that fat tails and stochastic volatility can be important in improving in-sample fit and out-of-sample forecasting performance. Specifically, we construct a VAR model where the orthogonalised shocks feature a Student's t distribution and time-varying variance. We estimate this model using US data on output growth, inflation, interest rates and stock returns. In terms of in-sample fit, the VAR model featuring both stochastic volatility and t-distributed disturbances outperforms restricted alternatives that feature only one of these attributes. The VAR model with t disturbances results in density forecasts for industrial production and stock returns that are superior to alternatives that assume Gaussianity, and this difference is especially stark over the recent Great Recession. Further international evidence confirms that accounting for both stochastic volatility and Student's t-distributed disturbances may lead to improved forecast accuracy. |
Keywords: | Bayesian VAR; fat-tails; stochastic volatility; Great Recession |
JEL: | C11 C32 C52 |
Date: | 2015–05–29 |
URL: | http://d.repec.org/n?u=RePEc:boe:boeewp:0528&r=ecm |
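Posterior simulation for t-distributed disturbances typically exploits the scale-mixture-of-normals representation, which makes the t shock conditionally Gaussian; the sketch below verifies the representation by simulation (the degrees of freedom ν = 5 are illustrative).

    import numpy as np

    rng = np.random.default_rng(10)
    nu, n = 5.0, 200_000
    # Student-t as a scale mixture: eps = sqrt(lam) * z, lam ~ InvGamma(nu/2, nu/2)
    lam = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
    eps_mix = np.sqrt(lam) * rng.normal(size=n)
    eps_t = rng.standard_t(nu, size=n)
    # both should have variance nu / (nu - 2) and matching tail quantiles
    for e in (eps_mix, eps_t):
        print(e.var(), np.quantile(e, [0.01, 0.99]))
    print("theoretical variance:", nu / (nu - 2))

Conditional on the latent scales lam, the model is a Gaussian VAR, so standard Gibbs steps apply; that is the computational device that keeps estimation tractable.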
By: | Dongkoo Kim; Tae-hwan Rhee; Keunkwan Ryu; Changmock Shin |
Abstract: | Economic forecasts play an essential role in everyday economic decisions, which is why many research institutions periodically produce and publish forecasts of the main economic indicators. We ask (1) whether we can consistently obtain a better prediction by combining multiple forecasts of the same variable and (2) if so, what the optimal method of combination is. We linearly combine multiple linear combinations of existing forecasts to form a new forecast (“combination of combinations”), with the weights given by Bayesian model averaging. For forecasts of Germany’s real GDP growth rate, this new forecast dominates any single forecast in terms of root-mean-square prediction error. |
Keywords: | Combination of forecasts; Bayesian model averaging |
JEL: | E32 E37 |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:rwi:repape:0546&r=ecm |
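Here is a sketch of BMA-style combination weights, assuming a Gaussian marginal fit of each candidate's past forecast errors; the paper's weighting scheme and its "combination of combinations" layer are richer than this toy version.

    import numpy as np

    def bma_weights(errors):
        """Weights over candidate forecasts from the Gaussian log marginal fit
        of their past errors (a crude BMA-style approximation)."""
        e = np.asarray(errors)                     # models x periods
        sig2 = e.var(axis=1) + 1e-12
        logml = -0.5 * e.shape[1] * (np.log(2 * np.pi * sig2) + 1.0)
        w = np.exp(logml - logml.max())            # subtract max for stability
        return w / w.sum()

    rng = np.random.default_rng(11)
    truth = rng.normal(size=40)
    # three forecasters with increasing error variance
    forecasts = truth + rng.normal(0, [[0.5], [1.0], [2.0]], size=(3, 40))
    w = bma_weights(forecasts - truth)
    combined = w @ forecasts                       # weighted combination forecast
    print(w, np.sqrt(np.mean((combined - truth) ** 2)))

The weights concentrate on the historically accurate forecasters, which is the mechanism behind the dominance result reported for German GDP growth.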
By: | Camba-Méndez, Gonzalo; Kapetanios, George; Papailias, Fotis; Weale, Martin R. |
Abstract: | This paper assesses the forecasting performance of various variable reduction and variable selection methods. A small and a large set of carefully chosen variables are used in forecasting industrial production growth for four euro area economies. The results indicate that the Automatic Leading Indicator (ALI) model performs well compared to other variable reduction methods in small datasets. However, partial least squares and variable selection using heuristic optimisations of information criteria could be used alongside the ALI in model averaging methodologies. |
Keywords: | Bayesian shrinkage regression, dynamic factor model, euro area, forecasting, Kalman filter, partial least squares
JEL: | C11 C32 C52
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20151773&r=ecm |
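Of the methods compared, partial least squares is the most self-contained to sketch; below is the standard PLS1 deflation algorithm on simulated data (the ALI model and the heuristic information-criterion searches are not reproduced).

    import numpy as np

    def pls1_components(X, y, k):
        """First k partial-least-squares components for a single response,
        computed by the standard deflation algorithm."""
        X, y = X - X.mean(0), y - y.mean()
        T = []
        for _ in range(k):
            w = X.T @ y
            w /= np.linalg.norm(w)          # weight: direction of max covariance with y
            t = X @ w
            T.append(t)
            p = X.T @ t / (t @ t)
            X = X - np.outer(t, p)          # deflate X
            y = y - t * (t @ y) / (t @ t)   # deflate y
        return np.column_stack(T)

    rng = np.random.default_rng(12)
    X = rng.normal(size=(120, 30))          # 30 candidate predictors
    y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=120)
    T = pls1_components(X, y, k=2)
    beta = np.linalg.lstsq(T, y - y.mean(), rcond=None)[0]   # forecast via PLS factors
    print(T.shape, beta)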
By: | Paola Cerchiello (Department of Economics and Management, University of Pavia); Paolo Giudici (Department of Economics and Management, University of Pavia) |
Abstract: | The quality of academic research is difficult to measure and rather controversial. Hirsch proposed the h index, a measure that has the advantage of summarizing in a single statistic the information contained in the citation counts of each scientist. Although the h index has received a great deal of interest, only a few papers have analyzed its statistical properties and implications. We claim that statistical modeling can add substantial value over a simple summary statistic like the h index. To show this, we propose a negative binomial distribution to jointly model the two main components of the h index: the number of papers and their citations. We then propose a Bayesian model that yields posterior inferences on the parameters of the distribution and, in addition, a predictive distribution for the h index itself. Such a predictive distribution can be used to compare scientists on fairer ground, in terms of their future contribution rather than their past performance. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:102&r=ecm |
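The h index itself, and a predictive distribution for it under an assumed citation model, take only a few lines; the negative binomial parameters below are illustrative, not posterior draws from the paper's Bayesian model.

    import numpy as np

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations."""
        c = np.sort(np.asarray(citations))[::-1]
        return int(np.sum(c >= np.arange(1, len(c) + 1)))

    # predictive distribution of h for a scientist whose citation counts are
    # negative binomial (size r, success prob p), over n_pap future papers
    rng = np.random.default_rng(13)
    r, p, n_pap = 2.0, 0.15, 60
    draws = [h_index(rng.negative_binomial(r, p, size=n_pap)) for _ in range(2000)]
    print(h_index([10, 8, 5, 4, 4, 1]), np.percentile(draws, [5, 50, 95]))

Comparing scientists by such predictive intervals, rather than by a single realized h, is the comparison "on fairer ground" that the abstract describes.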
By: | Riccardo M. Masolo (Bank of England; Centre for Macroeconomics (CFM)); Alessia Paccagnini (Dipartimento di Economia, Metodi Quantitativi e Strategie d'Impresa (DEMS) Facoltà di Economia Università degli Studi di Milano-Bicocca) |
Abstract: | We propose a new VAR identification strategy to study the impact of noise shocks on aggregate activity. We do so by exploiting the informational advantage the econometrician has relative to the economic agent. The latter, who is uncertain about the underlying state of the economy, responds to the noisy early data releases. The former, with the benefit of hindsight, also has access to data revisions, which can be used to identify noise shocks. By using a VAR we avoid making very specific assumptions about the process driving data revisions. Rather, we remain agnostic about it and make our identification strategy robust to whether data revisions are driven by noise or news. Our analysis shows that a surprising report of output growth numbers delivers a persistent and hump-shaped response of real output and unemployment. The responses are qualitatively similar to, but an order of magnitude smaller than, those to a demand shock. Finally, our counterfactual analysis supports the view that noise shocks could not be identified unless different vintages of data were used. |
Keywords: | Noise Shocks, Data Revisions, VAR, Impulse-Response Functions |
JEL: | E3 C1 D8 |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:cfm:wpaper:1510&r=ecm |
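The hump-shaped responses reported above come from standard impulse-response computations once the impact matrix is identified; here is a sketch with an illustrative bivariate VAR(1) and a Cholesky impact matrix (the paper's noise-shock identification via data vintages is the hard part and is not reproduced).

    import numpy as np

    def var_irf(A_list, B0, horizons):
        """Impulse responses of a VAR(p): Phi_0 = I, Phi_h = sum_j A_j Phi_{h-j},
        response at horizon h = Phi_h @ B0, with B0 the identified impact matrix."""
        k, p = B0.shape[0], len(A_list)
        Phi = [np.eye(k)]
        for h in range(1, horizons + 1):
            Phi.append(sum(A_list[j] @ Phi[h - 1 - j] for j in range(min(p, h))))
        return np.array([Ph @ B0 for Ph in Phi])

    A1 = np.array([[0.6, 0.1], [0.0, 0.5]])       # illustrative VAR(1) coefficients
    B0 = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 1.0]]))
    irf = var_irf([A1], B0, horizons=12)
    print(irf[:4, 0, 0])                           # response of variable 1 to shock 1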