
New Economics Papers on Econometrics
By:  Warne, Anders; Coenen, Günter; Christoffel, Kai 
Abstract:  This paper shows how to compute the h-step-ahead predictive likelihood for any subset of the observed variables in parametric discrete time series models estimated with Bayesian methods. The subset of variables may vary across forecast horizons, and the problem thereby covers marginal and joint predictive likelihoods for a fixed subset as special cases. The basic idea is to utilize well-known techniques for handling missing data when computing the likelihood function, such as a missing-observations-consistent Kalman filter for linear Gaussian models, but it also extends to nonlinear, non-normal state-space models. The predictive likelihood can thereafter be calculated via Monte Carlo integration using draws from the posterior distribution. As an empirical illustration, we use euro area data and compare the forecasting performance of the New Area-Wide Model, a small-open-economy DSGE model, to DSGE-VARs and to reduced-form linear Gaussian models. JEL Classification: C11, C32, C52, C53, E37
Keywords:  Bayesian inference, forecasting, Kalman filter, missing data, Monte Carlo integration
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131536&r=ecm 
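To make the abstract's core device concrete, here is a minimal sketch of a Kalman filter that skips missing observations, combined with a Monte Carlo estimate of the predictive likelihood from posterior draws. The univariate local-level model, the function names and the hard-coded posterior draws are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kalman_loglik(y, sigma_eps, sigma_eta, a0=0.0, p0=1e6):
    """Log-likelihood of a local-level model y_t = a_t + eps_t, a_t = a_{t-1} + eta_t.
    Missing observations (NaN) are handled by skipping the update step."""
    a, p, ll = a0, p0, 0.0
    for obs in y:
        p = p + sigma_eta**2            # prediction step
        if np.isnan(obs):               # missing: no update, no likelihood term
            continue
        f = p + sigma_eps**2            # prediction-error variance
        v = obs - a                     # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                       # Kalman gain
        a = a + k * v
        p = (1 - k) * p
    return ll

def predictive_likelihood(y_obs, y_future, posterior_draws):
    """Monte Carlo estimate: average p(y_future | y_obs, theta) over posterior
    draws of theta = (sigma_eps, sigma_eta), via a likelihood ratio."""
    vals = []
    for s_eps, s_eta in posterior_draws:
        num = kalman_loglik(np.concatenate([y_obs, y_future]), s_eps, s_eta)
        den = kalman_loglik(y_obs, s_eps, s_eta)
        vals.append(np.exp(num - den))  # p(y_obs, y_future)/p(y_obs)
    return np.mean(vals)
```

The ratio trick, evaluating the joint and conditioning log-likelihoods and exponentiating the difference, is one standard way to get the predictive density from a likelihood routine; setting omitted entries to NaN is how the missing-data device lets the evaluated subset of variables vary by horizon.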
By:  Joel Horowitz (Institute for Fiscal Studies and Northwestern University) 
Abstract:  A parameter of an econometric model is identified if there is a one-to-one or many-to-one mapping from the population distribution of the available data to the parameter. Often, this mapping is obtained by inverting a mapping from the parameter to the population distribution. If the inverse mapping is discontinuous, then estimation of the parameter usually presents an ill-posed inverse problem. Such problems arise in many settings in economics and other fields where the parameter of interest is a function. This paper explains how ill-posedness arises and why it causes problems for estimation. The need to modify or 'regularise' the identifying mapping is explained, and methods for regularisation and estimation are discussed. Methods for forming confidence intervals and testing hypotheses are summarised. It is shown that a hypothesis test can be more 'precise' in a certain sense than an estimator. An empirical example illustrates estimation in an ill-posed setting in economics.
Keywords:  regularisation, nonparametric estimation, density estimation, deconvolution, nonparametric instrumental variables, Fredholm equation 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:37/13&r=ecm 
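As a concrete illustration of the regularisation idea, the sketch below solves a discretised Fredholm equation of the first kind by Tikhonov (ridge) regularisation. The Gaussian kernel, grid size and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tikhonov_solve(K, g, alpha):
    """Tikhonov-regularised solution of the ill-posed system K f = g:
    f_alpha = argmin ||K f - g||^2 + alpha ||f||^2 = (K'K + alpha I)^{-1} K' g."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

# A discretised Fredholm equation with a smoothing (Gaussian) kernel, whose
# rapidly decaying singular values make naive inversion ill-posed.
n = 50
x = np.linspace(0.0, 1.0, n)
K = np.exp(-30.0 * (x[:, None] - x[None, :])**2) / n
f_true = np.sin(2.0 * np.pi * x)
g = K @ f_true + 1e-5 * np.random.default_rng(1).normal(size=n)

f_reg = tikhonov_solve(K, g, 1e-8)
```

The penalty alpha trades bias for stability: a larger alpha shrinks the solution norm, which is exactly the mechanism that tames the noise amplification discussed in the abstract.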
By:  Nikhil Agarwal (Dept. of Economics, MIT); William Diamond (Dept. of Economics, Harvard University) 
Abstract:  We study estimation and nonparametric identification of preferences in two-sided matching markets using data from a single market with many agents. We consider a model in which the preferences of each side of the market are homogeneous, utility is non-transferable, and the observed matches are pairwise stable. We show that preferences are not identified with data on one-to-one matches but are nonparametrically identified when data from many-to-one matches are observed. This difference in the identifiability of the model is illustrated by comparing two simulated objective functions, one that does and the other that does not use information available in many-to-one matching. We also prove consistency of a method of moments estimator for a parametric model under a data generating process in which the size of the matching market increases, but data on only one market are observed. Since matches in a single market are interdependent, our proof of consistency cannot rely on observations of independent matches. Finally, we present Monte Carlo studies of a simulation-based estimator.
Keywords:  Two-sided matching, Identification, Estimation
JEL:  C13 C14 C78 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1905&r=ecm 
By:  Mawuli Segnon; Thomas Lux 
Abstract:  This chapter provides an overview of the recently developed so-called multifractal (MF) approach for modeling and forecasting volatility. We outline the genesis of this approach from similar models of turbulent flows in statistical physics and provide details on different specifications of multifractal time series models in finance, available methods for their estimation, and the current state of their empirical applications.
Keywords:  Multifractal processes, random measures, stochastic volatility, forecasting 
JEL:  C20 F37 G15 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1860&r=ecm 
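The binomial Markov-switching multifractal (MSM) model is one of the specifications this survey covers. A minimal simulation sketch follows; the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def simulate_msm(T, k=5, m0=1.4, sigma=1.0, gamma1=0.1, b=2.0, seed=0):
    """Simulate a binomial MSM volatility process: each of k components takes
    values in {m0, 2 - m0} (so the mean multiplier is 1) and renews with
    frequency gamma_j = 1 - (1 - gamma1)^(b^(j-1)); returns are
    r_t = sigma * sqrt(prod_j M_jt) * z_t with standard normal z_t."""
    rng = np.random.default_rng(seed)
    gammas = 1 - (1 - gamma1) ** (b ** np.arange(k))   # slow to fast components
    M = rng.choice([m0, 2 - m0], size=k)
    r = np.empty(T)
    for t in range(T):
        switch = rng.random(k) < gammas                # which components renew
        M[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
        r[t] = sigma * np.sqrt(M.prod()) * rng.normal()
    return r
```

Because volatility is a product of multipliers renewing at geometrically spaced frequencies, the simulated returns exhibit the fat tails and long-memory-like volatility clustering that motivate the MF approach.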
By:  Amisano, Gianni; Geweke, John 
Abstract:  Prediction of macroeconomic aggregates is one of the primary functions of macroeconometric models, including dynamic factor models, dynamic stochastic general equilibrium models, and vector autoregressions. This study establishes methods that improve the predictions of these models, using a representative model from each class and a canonical 7-variable postwar US data set. It focuses on prediction over the period 1966 through 2011. It measures the quality of prediction by the probability densities assigned to the actual values of these variables, one quarter ahead, by the predictive distributions of the models in real time. Two steps lead to substantial improvement. The first is to use full Bayesian predictive distributions rather than substituting a "plug-in" posterior mode for the parameters. Across models and quarters, this leads to a mean improvement in probability of 50.4%. The second is to use an equally weighted pool of predictive densities from the three models, which leads to a mean improvement in probability of 41.9% over the full Bayesian predictive distributions of the individual models. This improvement is much better than that afforded by Bayesian model averaging. The study uses several analytical tools, including pooling, analysis of predictive variance, and probability integral transform tests, to understand and interpret the improvements. JEL Classification: C11, C51, C53
Keywords:  Analysis of variance, Bayesian model averaging, dynamic factor model, dynamic stochastic general equilibrium model, prediction pools, probability integral transform test, vector autoregression model 
Date:  2013–04 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131537&r=ecm 
By:  Igor Borovikov; Michael Sadovsky 
Abstract:  Here we present a novel approach to statistical analysis of financial time series. The approach is based on $n$-gram frequency dictionaries derived from the quantized market data. Such dictionaries are studied by evaluating their information capacity using relative entropy. A specific quantization of (originally continuous) financial data is considered: so-called binary quantization. Possible applications of the proposed technique include market event studies with the $n$-grams of higher information value. The finite length of the input data presents certain computational and theoretical challenges discussed in the paper. Also, some other versions of quantization are discussed.
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1308.2732&r=ecm 
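The basic pipeline of the abstract, binary quantization, an n-gram frequency dictionary, and relative entropy between dictionaries, can be sketched in a few lines. Function names are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def binary_quantize(prices):
    """Binary quantization: 1 if the price moved up, 0 otherwise."""
    return (np.diff(prices) > 0).astype(int)

def ngram_freqs(seq, n):
    """Frequency dictionary of overlapping n-grams of a symbol sequence."""
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two frequency dictionaries;
    n-grams absent from q are floored at eps to keep the sum finite."""
    return sum(pv * np.log(pv / max(q.get(g, 0.0), eps)) for g, pv in p.items())
```

Comparing a dictionary estimated on one window against another window's dictionary via D(p || q) is one way to flag windows whose n-gram statistics carry unusually high information, in the spirit of the event-study application mentioned above.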
By:  Leo Krippner 
Abstract:  When nominal interest rates are near their zero lower bound (ZLB), as in many developed economies at the time of writing, it is theoretically untenable to apply the popular class of Gaussian affine term structure models (GATSMs) given their inherent material probabilities of negative interest rates. Hence, I propose a tractable modification for GATSMs that enforces the ZLB, and which approximates the fully arbitrage-free but much less tractable framework proposed in Black (1995). I apply my framework to United States yield curve data, with robust estimation via the iterated extended Kalman filter, and first show that the two-factor results are very similar to those from a comparable Black model. I then estimate two- and three-factor models with longer-maturity data sets to illustrate that my ZLB framework can readily be applied in circumstances that would be computationally burdensome or infeasible within the Black framework.
Keywords:  zero lower bound; term structure of interest rates; Gaussian affine term structure models; shadow short rate; shadow term structure 
JEL:  E43 G12 G13 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201349&r=ecm 
By:  James G. MacKinnon (Queen's University); Matthew D. Webb (University of Calgary) 
Abstract:  The cluster-robust variance estimator (CRVE) relies on the number of clusters being large. The precise meaning of 'large' is ambiguous, but a shorthand 'rule of 42' has emerged in the literature. We show that this rule depends crucially on the assumption of equal-sized clusters. Monte Carlo evidence suggests that rejection frequencies at the five percent level can be more than twice the nominal size when a dataset has 50 clusters proportional to the populations of the US states. In contrast, using a cluster wild bootstrap procedure for the same dataset usually results in very accurate rejection frequencies. We also show that, when the test regressor is a dummy variable, both conventional and bootstrap tests perform badly when the proportion of clusters treated is very small or very large. A third set of simulations uses placebo laws to see whether similar results hold in a difference-in-differences framework.
Keywords:  CRVE, grouped data, clustered data, panel data, cluster wild bootstrap 
JEL:  C15 C21 C23 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1314&r=ecm 
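The cluster wild bootstrap the authors recommend can be sketched as follows. This is a simplified restricted version with Rademacher weights and a plain CRVE, without the small-sample corrections used in practice; names and defaults are illustrative.

```python
import numpy as np

def wild_cluster_boot_p(y, X, cluster, j=1, B=199, seed=0):
    """Restricted wild cluster bootstrap p-value for H0: beta_j = 0,
    with one Rademacher weight drawn per cluster in each replication."""
    rng = np.random.default_rng(seed)
    ids = np.unique(cluster)

    def tstat(yy):
        b = np.linalg.lstsq(X, yy, rcond=None)[0]
        e = yy - X @ b
        bread = np.linalg.inv(X.T @ X)
        meat = np.zeros((X.shape[1], X.shape[1]))
        for g in ids:
            s = X[cluster == g].T @ e[cluster == g]   # within-cluster score sum
            meat += np.outer(s, s)
        V = bread @ meat @ bread                      # cluster-robust variance
        return b[j] / np.sqrt(V[j, j])

    t0 = tstat(y)
    Xr = np.delete(X, j, axis=1)                      # impose H0: drop regressor j
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    ur = y - Xr @ br                                  # restricted residuals
    hits = 0
    for _ in range(B):
        v = rng.choice([-1.0, 1.0], size=len(ids))    # Rademacher draw per cluster
        ystar = Xr @ br + ur * v[np.searchsorted(ids, cluster)]
        if abs(tstat(ystar)) >= abs(t0):
            hits += 1
    return hits / B
```

Flipping residual signs cluster by cluster preserves the within-cluster dependence structure, which is why this procedure remains reliable when cluster sizes are unbalanced and the plain CRVE t-test over-rejects.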
By:  Jan R. Magnus (The University of Sydney Business School); Andrey L. Vasnev 
Abstract:  Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination, and sensitivity-based weights are introduced. All concepts are illustrated through the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than other weights.
Keywords:  Sensitivity analysis, Forecast combination, Yield curve prediction 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:04/2013&r=ecm 
By:  Andrey L. Vasnev (The University of Sydney Business School); Laurent L. Pauwels 
Abstract:  The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Recently, a paper by Hall and Mitchell (IJF, 2007) proposes to combine density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper's practical recommendation is to have an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
Keywords:  Forecast combination; Density forecast; Optimization; Optimal weight; Discrete choice models 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2013&r=ecm 
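The corner-solution phenomenon described above is easy to reproduce: when one density forecast dominates the other pointwise over a small sample, the log-score-optimal weight sits on the boundary of the simplex. The grid-search sketch below is illustrative, not the authors' optimizer.

```python
import numpy as np

def normal_pdf(x, mu, s):
    """Normal density used as a stand-in density forecast."""
    return np.exp(-0.5 * ((x - mu) / s)**2) / (s * np.sqrt(2 * np.pi))

def optimal_weight(y, dens1, dens2, grid=np.linspace(0.0, 1.0, 1001)):
    """Hall-Mitchell-style log-score weight for two density forecasts, found by
    grid search: maximize the average log of the mixture w*p1 + (1-w)*p2."""
    p1, p2 = dens1(y), dens2(y)
    scores = [np.mean(np.log(w * p1 + (1 - w) * p2 + 1e-300)) for w in grid]
    return grid[int(np.argmax(scores))]
```

With only a handful of realizations, one forecast is often better at every observed point, so the criterion is monotone in w and the maximizer is 0 or 1 exactly; the training-sample recommendation above mitigates this.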
By:  Desislava Chetalova; Thilo A. Schmitt; Rudi Schäfer; Thomas Guhr
Abstract:  We consider random vectors drawn from a multivariate normal distribution and compute the sample statistics in the presence of nonstationary correlations. For this purpose, we construct an ensemble of random correlation matrices and average the normal distribution over this ensemble. The resulting distribution contains a modified Bessel function of the second kind whose behavior differs significantly from the multivariate normal distribution, in the central part as well as in the tails. This result is then applied to asset returns. We compare with empirical return distributions using daily data from the Nasdaq Composite Index in the period from 1992 to 2012. The comparison reveals good agreement; the average portfolio return distribution describes the data well, especially in the central part of the distribution. This in turn confirms our ansatz to model the nonstationarity by an ensemble average.
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1308.3961&r=ecm 
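The ensemble-average mechanism can be mimicked in one dimension: drawing each return from a normal whose variance is itself random (chi-squared, as for a diagonal element of a Wishart matrix) yields a variance mixture with heavier tails than a fixed-variance Gaussian, consistent with the Bessel-function result above. The setup below is an illustrative simplification, not the paper's multivariate construction.

```python
import numpy as np

def ensemble_returns(T, N=5, seed=0):
    """For each t, draw sigma2_t ~ chi^2_N / N (mean one) and then
    r_t ~ N(0, sigma2_t). Averaging the normal over this variance ensemble
    produces a heavy-tailed (Bessel-type) variance mixture."""
    rng = np.random.default_rng(seed)
    sigma2 = rng.chisquare(N, size=T) / N     # fluctuating variance, mean 1
    return rng.normal(scale=np.sqrt(sigma2))  # normal conditional on sigma2

r = ensemble_returns(100_000)
excess_kurt = np.mean(r**4) / np.mean(r**2)**2 - 3.0
```

A smaller N corresponds to stronger fluctuations of the random correlations and hence fatter tails; as N grows, the mixture collapses back to a plain Gaussian.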
By:  Gross, Marco 
Abstract:  This paper aims to illustrate how the weight matrices that are needed to construct foreign variable vectors in Global Vector Autoregressive (GVAR) models can be estimated jointly with the GVAR's parameters. An application to real GDP and consumption expenditure price inflation as well as a controlled Monte Carlo simulation serve to highlight that 1) in the application at hand, the estimated weights for some countries differ significantly from the trade-based ones that are traditionally employed in that context; 2) misspecified weights may bias the GVAR estimates and therefore distort the model's dynamics; and 3) using estimated GVAR weights instead of trade-based ones (to the extent that they differ and the latter bias the global model estimates) should enhance the out-of-sample forecast performance of the GVAR. Devising a method for estimating GVAR weights is particularly useful in contexts in which it is not obvious how weights could otherwise be constructed from data. JEL Classification: C33, C53, C61, E17
Keywords:  forecasting and simulation, Global macroeconometric modeling, models with panel data 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131523&r=ecm 
By:  Binder, Michael; Gross, Marco 
Abstract:  The purpose of the paper is to develop a Regime-Switching Global Vector Autoregressive (RS-GVAR) model. The RS-GVAR model allows for recurring or non-recurring structural changes in all or a subset of countries. It can be used to generate regime-dependent impulse response functions which are conditional upon a regime constellation across countries. Coupling the RS and the GVAR methodology improves out-of-sample forecast accuracy significantly in an application to real GDP, price inflation, and stock prices. JEL Classification: C32, E17, G20
Keywords:  forecasting and simulation, Global macroeconometric modeling, nonlinear modeling, Regime switching 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131569&r=ecm 
By:  Smets, Frank; Warne, Anders; Wouters, Raf 
Abstract:  This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets, and Wouters (2012) estimated on euro area data. It investigates to what extent forecasts of inflation, GDP growth and unemployment by professional forecasters improve the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated over the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model and a random walk. JEL Classification: E24, E31, E32
Keywords:  Bayesian methods, DSGE model, estimated New Keynesian model, macroeconomic forecasting, real-time data, survey data
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131571&r=ecm 
By:  Irina Murtazashvili; Di Liu; Artem Prokhorov
Abstract:  We estimate intergenerational income mobility in the USA and Sweden. To measure the degree to which income status is transmitted from one generation to another, we propose a nonparametric estimator, which is particularly relevant for cross-country comparisons. Our approach allows intergenerational mobility to vary across observable family characteristics. Furthermore, it fits situations in which data on fathers and sons come from different samples. Finally, our estimator is consistent in the presence of measurement error in fathers' long-run economic status. We find that family background, captured by fathers' education, matters for intergenerational income persistence in the USA more than in Sweden, suggesting that the character of inequality in the two countries is rather different.
Keywords:  Intergenerational Income Persistence; GMM estimation
JEL:  D3 C14 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:07_2013&r=ecm 
By:  Groenen, P.J.F.; Borg, I. 
Abstract:  Multidimensional scaling (MDS) has established itself as a standard tool for statisticians and applied researchers. Its success is due to its simple and easily interpretable representation of potentially complex structural data. These data are typically embedded into a 2-dimensional map, where the objects of interest (items, attributes, stimuli, respondents, etc.) correspond to points such that those that are near to each other are empirically similar, and those that are far apart are different. In this paper, we pay tribute to several important developers of MDS and give a subjective overview of milestones in MDS developments. We also discuss the present situation of MDS and give a brief outlook on its future.
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765039177&r=ecm 
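Classical (Torgerson) MDS, the historical starting point of the developments surveyed here, reduces to a double-centering of squared distances followed by an eigendecomposition; it recovers a Euclidean configuration exactly up to rotation and translation.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed n objects in `dim` dimensions from an
    n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J              # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
    L = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * L                # coordinates scaled by sqrt(eigenvalue)
```

Modern MDS variants replace this spectral solution with iterative stress minimization, but the map interpretation described in the abstract is the same.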
By:  Bańbura, Marta; Giannone, Domenico; Modugno, Michele; Reichlin, Lucrezia 
Abstract:  The term nowcasting is a contraction of "now" and "forecasting" and has been used for a long time in meteorology and recently also in economics. In this paper we survey recent developments in economic nowcasting with special focus on those models that formalize key features of how market participants and policy makers read macroeconomic data releases in real time, which involves: monitoring many data series, forming expectations about them, and revising the assessment of the state of the economy whenever realizations diverge sizeably from those expectations. (Prepared for G. Elliott and A. Timmermann, eds., Handbook of Economic Forecasting, Volume 2, Elsevier-North Holland.) JEL Classification: E32, E37, C01, C33, C53
Keywords:  macroeconomic forecasting, Macroeconomic news, mixed frequency, real-time data, state space models
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131564&r=ecm 
By:  Mitesh Kataria (Max Planck Institute of Economics, Strategic Interaction Group,Jena) 
Abstract:  Maniadis et al. (2013) present a theoretical framework that aims at providing insights into the mechanics of proper inference. They suggest that a decision about whether to call an experimental finding noteworthy, or deserving of great attention, should be based on the calculated post-study probability. Although I largely agree with most points in Maniadis et al. (2013), this note raises some important caveats.
Keywords:  Bayes' theorem, Methodology 
JEL:  C11 C12 C80 
Date:  2013–08–14 
URL:  http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2013030&r=ecm 
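The post-study probability the note refers to follows from Bayes' theorem: with prior probability pi that the tested association is true, test size alpha and power 1 - beta, PSP = (1-beta)pi / ((1-beta)pi + alpha(1-pi)). A one-line sketch (the default parameter values are illustrative):

```python
def post_study_probability(prior, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding is true, given the
    prior probability of the hypothesis, the test size, and the power."""
    return power * prior / (power * prior + alpha * (1.0 - prior))
```

The formula makes the note's concern tangible: with a low prior (say 1%), even a significant result at conventional size and power leaves the post-study probability well below one half.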