nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒08‒23
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Predictive likelihood comparisons with DSGE and DSGE-VAR models By Warne, Anders; Coenen, Günter; Christoffel, Kai
  2. Ill-posed inverse problems in economics By Joel Horowitz
  3. Identification and Estimation in Two-Sided Matching Markets By Nikhil Agarwal; William Diamond
  4. Multifractal Models in Finance: Their Origin, Properties, and Applications By Mawuli Segnon; Thomas Lux
  5. Prediction using several macroeconomic models By Amisano, Gianni; Geweke, John
  6. A relative information approach to financial time series analysis using binary $N$-grams dictionaries By Igor Borovikov; Michael Sadovsky
  7. A tractable framework for zero-lower-bound Gaussian term structure models By Leo Krippner
  8. Wild Bootstrap Inference for Wildly Different Cluster Sizes By James G. MacKinnon; Matthew D. Webb
  9. Practical use of sensitivity in econometrics with an illustration to forecast combinations By Jan R. Magnus; Andrey L. Vasnev
  10. Practical considerations for optimal weights in density forecast combination By Andrey L. Vasnev; Laurent L. Pauwels
  11. Portfolio return distributions: Sample statistics with non-stationary correlations By Desislava Chetalova; Thilo A. Schmitt; Rudi Sch\"afer; Thomas Guhr
  12. Estimating GVAR weight matrices By Gross, Marco
  13. Regime-switching global vector autoregressive models By Binder, Michael; Gross, Marco
  14. Professional forecasters and the real-time forecasting performance of an estimated new keynesian model for the euro area By Smets, Frank; Warne, Anders; Wouters, Raf
  15. Two-Sample Nonparametric Estimation of Intergenerational Income Mobility By Irina Murtazashvil; Di Liu; Artem Prokhorov
  16. The Past, Present, and Future of Multidimensional Scaling By Groenen, P.J.F.; Borg, I.
  17. Now-casting and the real-time data flow By Bańbura, Marta; Giannone, Domenico; Modugno, Michele; Reichlin, Lucrezia
  18. One Swallow Doesn't Make a Summer - A Note By Mitesh Kataria

  1. By: Warne, Anders; Coenen, Günter; Christoffel, Kai
    Abstract: This paper shows how to compute the h-step-ahead predictive likelihood for any subset of the observed variables in parametric discrete time series models estimated with Bayesian methods. The subset of variables may vary across forecast horizons and the problem thereby covers marginal and joint predictive likelihoods for a fixed subset as special cases. The basic idea is to utilize well-known techniques for handling missing data when computing the likelihood function, such as a missing observations consistent Kalman filter for linear Gaussian models, but it also extends to nonlinear, nonnormal state-space models. The predictive likelihood can thereafter be calculated via Monte Carlo integration using draws from the posterior distribution. As an empirical illustration, we use euro area data and compare the forecasting performance of the New Area-Wide Model, a small-open-economy DSGE model, to DSGE-VARs, and to reduced-form linear Gaussian models. JEL Classification: C11, C32, C52, C53, E37
    Keywords: Bayesian inference, forecasting, Kalman filter, Missing data, Monte Carlo integration
    Date: 2013–04
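The computational core of the approach, averaging per-draw predictive densities over posterior draws via Monte Carlo integration, can be sketched in a toy univariate setting. This is not the authors' code; the posterior draws, the Gaussian predictive density, and the realised value below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for draws from a DSGE posterior: here, draws of a single
# mean parameter mu, with predictive density y | mu ~ N(mu, 1).
posterior_mu = rng.normal(loc=0.5, scale=0.1, size=2000)

y_realised = 0.4  # the realised value of the h-step-ahead observation

# Log predictive density of the realisation under each posterior draw.
log_dens = -0.5 * np.log(2 * np.pi) - 0.5 * (y_realised - posterior_mu) ** 2

# Predictive likelihood = posterior average of the per-draw densities,
# computed with a log-sum-exp shift for numerical stability.
shift = log_dens.max()
log_pred_lik = shift + np.log(np.mean(np.exp(log_dens - shift)))
```

In the paper's setting each per-draw density would itself come from a missing-observations-consistent Kalman filter rather than from a closed-form normal.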
  2. By: Joel Horowitz (Institute for Fiscal Studies and Northwestern University)
    Abstract: A parameter of an econometric model is identified if there is a one-to-one or many-to-one mapping from the population distribution of the available data to the parameter. Often, this mapping is obtained by inverting a mapping from the parameter to the population distribution. If the inverse mapping is discontinuous, then estimation of the parameter usually presents an ill-posed inverse problem. Such problems arise in many settings in economics and other fields where the parameter of interest is a function. This paper explains how ill-posedness arises and why it causes problems for estimation. The need to modify or 'regularise' the identifying mapping is explained, and methods for regularisation and estimation are discussed. Methods for forming confidence intervals and testing hypotheses are summarised. It is shown that a hypothesis test can be more 'precise' in a certain sense than an estimator. An empirical example illustrates estimation in an ill-posed setting in economics.
    Keywords: regularisation, nonparametric estimation, density estimation, deconvolution, nonparametric instrumental variables, Fredholm equation
    Date: 2013–08
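A minimal numerical illustration of ill-posedness and regularisation (not taken from the paper; the kernel, grid, noise level, and penalty below are invented) is a discretised Fredholm equation of the first kind, where naive inversion amplifies small noise and a Tikhonov (ridge) penalty stabilises the estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretised Fredholm-type problem A f = g with a smoothing kernel,
# so A has rapidly decaying singular values and inversion is ill-posed.
n = 50
x = np.linspace(0, 1, n)
A = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)  # Gaussian kernel matrix
f_true = np.sin(2 * np.pi * x)
g = A @ f_true + rng.normal(scale=1e-3, size=n)     # noisy "data"

# Naive inversion amplifies the noise enormously.
f_naive = np.linalg.solve(A, g)

# Tikhonov (ridge) regularisation: minimise ||A f - g||^2 + lam ||f||^2.
lam = 1e-3
f_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
```

The choice of the penalty parameter is the delicate step in practice; the fixed value here is purely illustrative.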
  3. By: Nikhil Agarwal (Dept. of Economics, MIT); William Diamond (Dept. of Economics, Harvard University)
    Abstract: We study estimation and non-parametric identification of preferences in two-sided matching markets using data from a single market with many agents. We consider a model in which the preferences of each side of the market are homogeneous, utility is non-transferable, and the observed matches are pairwise stable. We show that preferences are not identified with data on one-to-one matches but are non-parametrically identified when data from many-to-one matches are observed. This difference in the identifiability of the model is illustrated by comparing two simulated objective functions, one that does and the other that does not use information available in many-to-one matching. We also prove consistency of a method of moments estimator for a parametric model under a data generating process in which the size of the matching market increases, but data only on one market is observed. Since matches in a single market are interdependent, our proof of consistency cannot rely on observations of independent matches. Finally, we present Monte Carlo studies of a simulation based estimator.
    Keywords: Two-sided matching, Identification, Estimation
    JEL: C13 C14 C78
    Date: 2013–08
  4. By: Mawuli Segnon; Thomas Lux
    Abstract: This chapter provides an overview of the recently developed so-called multifractal (MF) approach for modeling and forecasting volatility. We outline the genesis of this approach from similar models of turbulent flows in statistical physics and provide details on different specifications of multifractal time series models in finance, available methods for their estimation, and the current state of their empirical applications.
    Keywords: Multifractal processes, random measures, stochastic volatility, forecasting
    JEL: C20 F37 G15
    Date: 2013–08
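One workhorse specification in this literature is the Markov-switching multifractal (MSM), in which volatility is a product of binary components that switch at geometrically spaced frequencies. The sketch below simulates an MSM-type return series (all parameter values invented, not calibrated to any market) and checks for the fat tails such models generate:

```python
import numpy as np

rng = np.random.default_rng(2)

k = 8                      # number of volatility components
m0 = 1.4                   # components take values m0 or 2 - m0 (mean one)
# Geometrically spaced switching probabilities, from very persistent
# (low-frequency) to switching half the time (high-frequency).
gamma = 1 - (1 - 0.5) ** (2.0 ** (np.arange(k) - k + 1))
T = 5000
sigma = 1.0

M = rng.choice([m0, 2 - m0], size=k)
returns = np.empty(T)
for t in range(T):
    switch = rng.random(k) < gamma          # which components renew
    M[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
    vol = sigma * np.sqrt(M.prod())         # multiplicative volatility
    returns[t] = vol * rng.normal()

# Volatility clustering shows up as excess kurtosis (3 for a Gaussian).
kurt = np.mean(returns**4) / np.mean(returns**2) ** 2
```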
  5. By: Amisano, Gianni; Geweke, John
    Abstract: Prediction of macroeconomic aggregates is one of the primary functions of macroeconometric models, including dynamic factor models, dynamic stochastic general equilibrium models, and vector autoregressions. This study establishes methods that improve the predictions of these models, using a representative model from each class and a canonical 7-variable postwar US data set. It focuses on prediction over the period 1966 through 2011. It measures the quality of prediction by the probability densities assigned to the actual values of these variables, one quarter ahead, by the predictive distributions of the models in real time. Two steps lead to substantial improvement. The first is to use full Bayesian predictive distributions rather than substitute a "plug-in" posterior mode for parameters. Across models and quarters, this leads to a mean improvement in probability of 50.4%. The second is to use an equally-weighted pool of predictive densities from the three models, which leads to a mean improvement in probability of 41.9% over the full Bayesian predictive distributions of the individual models. This improvement is much better than that afforded by Bayesian model averaging. The study uses several analytical tools, including pooling, analysis of predictive variance, and probability integral transform tests, to understand and interpret the improvements. JEL Classification: C11, C51, C53
    Keywords: Analysis of variance, Bayesian model averaging, dynamic factor model, dynamic stochastic general equilibrium model, prediction pools, probability integral transform test, vector autoregression model
    Date: 2013–04
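The equally weighted pool of predictive densities is simple to illustrate. The toy example below (two deliberately misspecified Gaussian forecast densities for standard normal data; all numbers invented) shows the pool's average log score beating each component's:

```python
import numpy as np

rng = np.random.default_rng(3)

y = rng.normal(size=1000)          # "actual" data, standard normal

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

p1 = norm_pdf(y, 0.7, 1.0)         # model 1: mean biased upward
p2 = norm_pdf(y, -0.7, 1.0)        # model 2: mean biased downward
pool = 0.5 * p1 + 0.5 * p2         # equally weighted linear pool

ls1 = np.mean(np.log(p1))          # average log scores
ls2 = np.mean(np.log(p2))
ls_pool = np.mean(np.log(pool))
```

The pool helps here because the component errors offset; the paper's point is that this kind of gain survives even against Bayesian model averaging.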
  6. By: Igor Borovikov; Michael Sadovsky
    Abstract: Here we present a novel approach to statistical analysis of financial time series. The approach is based on $n$-grams frequency dictionaries derived from the quantized market data. Such dictionaries are studied by evaluating their information capacity using relative entropy. A specific quantization of (originally continuous) financial data is considered: so-called binary quantization. Possible applications of the proposed technique include market event study with the $n$-grams of higher information value. The finite length of the input data presents certain computational and theoretical challenges discussed in the paper. Also, some other versions of quantization are discussed.
    Date: 2013–08
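A rough sketch of the ingredients, binary quantisation, an $n$-gram frequency dictionary, and a relative-entropy (Kullback-Leibler) comparison against an independence benchmark, follows. The data and the choice n = 3 are invented; the paper's own information measures may differ in detail:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

# Binary quantisation of a toy return series: 1 if positive, 0 otherwise.
returns = rng.normal(size=2000)
bits = (returns > 0).astype(int)

# n-gram frequency dictionary over the binary string.
n = 3
grams = ["".join(map(str, bits[i:i + n])) for i in range(len(bits) - n + 1)]
freq = Counter(grams)
total = sum(freq.values())
p_emp = {g: c / total for g, c in freq.items()}

# Relative entropy of the empirical n-gram distribution against the
# distribution implied by independent symbols (near zero for iid data).
p1 = bits.mean()
def p_iid(g):
    return np.prod([p1 if ch == "1" else 1 - p1 for ch in g])

kl = sum(p * np.log(p / p_iid(g)) for g, p in p_emp.items())
```

High-information n-grams would show up as large contributions to this sum; real market data, unlike the iid noise here, should push `kl` visibly above zero.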
  7. By: Leo Krippner
    Abstract: When nominal interest rates are near their zero lower bound (ZLB), as in many developed economies at the time of writing, it is theoretically untenable to apply the popular class of Gaussian affine term structure models (GATSMs) given their inherent material probabilities of negative interest rates. Hence, I propose a tractable modification for GATSMs that enforces the ZLB, and which approximates the fully arbitrage-free but much less tractable framework proposed in Black (1995). I apply my framework to United States yield curve data, with robust estimation via the iterated extended Kalman filter, and first show that the two-factor results are very similar to those from a comparable Black model. I then estimate two- and three-factor models with longer-maturity data sets to illustrate that my ZLB framework can readily be applied in circumstances that would be computationally burdensome or infeasible within the Black framework.
    Keywords: zero lower bound; term structure of interest rates; Gaussian affine term structure models; shadow short rate; shadow term structure
    JEL: E43 G12 G13
    Date: 2013–08
  8. By: James G. MacKinnon (Queen's University); Matthew D. Webb (University of Calgary)
    Abstract: The cluster robust variance estimator (CRVE) relies on the number of clusters being large. The precise meaning of 'large' is ambiguous, but a shorthand 'rule of 42' has emerged in the literature. We show that this rule depends crucially on the assumption of equal-sized clusters. Monte Carlo evidence suggests that rejection frequencies at the five percent level can be more than twice the desired size when a dataset has 50 clusters with sizes proportional to the populations of the US states. In contrast, using a cluster wild bootstrap procedure for the same dataset usually results in very accurate rejection frequencies. We also show that, when the test regressor is a dummy variable, both conventional and bootstrap tests perform badly when the proportion of clusters treated is very small or very large. A third set of simulations uses placebo laws to see whether similar results hold in a difference-in-differences framework.
    Keywords: CRVE, grouped data, clustered data, panel data, cluster wild bootstrap
    JEL: C15 C21 C23
    Date: 2013–08
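The wild cluster bootstrap itself is straightforward to sketch. The toy version below (an invented DGP with unequal cluster sizes; no small-sample corrections, unlike careful applied practice) bootstraps a t-statistic for H0: beta = 0 using restricted residuals and cluster-level Rademacher weights:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented clustered DGP: y = alpha + beta * x + u, beta truly zero,
# with cluster-level random effects and unequal cluster sizes.
G = 20
sizes = rng.integers(5, 60, size=G)
cluster = np.repeat(np.arange(G), sizes)
n = cluster.size
x = rng.normal(size=n) + rng.normal(size=G)[cluster]
y = rng.normal(size=n) + rng.normal(size=G)[cluster]

X = np.column_stack([np.ones(n), x])

def cluster_t(yv):
    """t-statistic on the slope using the (uncorrected) CRVE."""
    beta = np.linalg.lstsq(X, yv, rcond=None)[0]
    u = yv - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((2, 2))
    for g in range(G):
        sg = X[cluster == g].T @ u[cluster == g]   # cluster score
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv
    return beta[1] / np.sqrt(V[1, 1])

t_obs = cluster_t(y)

# Impose H0 (restricted fit: slope zero), then resample with
# Rademacher weights drawn once per cluster, not per observation.
alpha_r = np.mean(y)
u_r = y - alpha_r
B = 399
t_boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=G)[cluster]
    t_boot[b] = cluster_t(alpha_r + w * u_r)

p_value = np.mean(np.abs(t_boot) >= np.abs(t_obs))
```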
  9. By: Jan R. Magnus (The University of Sydney Business School); Andrey L. Vasnev
    Abstract: Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination and sensitivity-based weights are introduced. All concepts are illustrated through the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than other weights.
    Keywords: Sensitivity analysis, Forecast combination, Yield curve prediction
    Date: 2013–03
  10. By: Andrey L. Vasnev (The University of Sydney Business School); Laurent L. Pauwels
    Abstract: The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Recently, a paper by Hall and Mitchell (IJF, 2007) proposes to combine density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper's practical recommendation is to have an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
    Keywords: Forecast combination; Density forecast; Optimization; Optimal weight; Discrete choice models
    Date: 2013–01
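The corner-solution phenomenon is easy to reproduce. In the invented simulation below, two similar Gaussian forecast densities are combined by maximising the average log score over only eight forecast periods; the optimal weight frequently lands on a corner (w = 0 or w = 1):

```python
import numpy as np

rng = np.random.default_rng(6)

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

corner = 0
trials = 200
for _ in range(trials):
    y = rng.normal(size=8)                     # very short evaluation window
    p1 = norm_pdf(y, 0.2, 1.0)                 # two similar forecast densities
    p2 = norm_pdf(y, 0.0, 1.2)
    grid = np.linspace(0, 1, 101)
    scores = [np.mean(np.log(w * p1 + (1 - w) * p2)) for w in grid]
    w_opt = grid[int(np.argmax(scores))]
    if w_opt in (0.0, 1.0):
        corner += 1

corner_share = corner / trials
```

Because the average log score is concave in w, an interior optimum requires the gradient to change sign between the endpoints; with few periods that often fails, which is exactly the problem the training-sample recommendation addresses.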
  11. By: Desislava Chetalova; Thilo A. Schmitt; Rudi Sch\"afer; Thomas Guhr
    Abstract: We consider random vectors drawn from a multivariate normal distribution and compute the sample statistics in the presence of non-stationary correlations. For this purpose, we construct an ensemble of random correlation matrices and average the normal distribution over this ensemble. The resulting distribution contains a modified Bessel function of the second kind whose behavior differs significantly from the multivariate normal distribution, in the central part as well as in the tails. This result is then applied to asset returns. We compare with empirical return distributions using daily data from the Nasdaq Composite Index in the period from 1992 to 2012. The comparison reveals good agreement; the average portfolio return distribution describes the data well, especially in the central part of the distribution. This in turn confirms our ansatz to model the non-stationarity by an ensemble average.
    Date: 2013–08
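The ensemble idea can be mimicked numerically: draw a fresh random correlation matrix for each sub-period, generate multivariate normal returns under it, and pool. The construction below (dimensions and epoch lengths invented, and a Wishart-type recipe standing in for the paper's ensemble) produces the heavier-than-Gaussian tails the paper derives analytically:

```python
import numpy as np

rng = np.random.default_rng(7)

k, epochs, per_epoch = 5, 200, 25
all_returns = []
for _ in range(epochs):
    # Random correlation matrix via a Wishart-type construction.
    W = rng.normal(size=(k, 2 * k))
    C = W @ W.T / (2 * k)
    d = np.sqrt(np.diag(C))
    C = C / np.outer(d, d)                # normalise to unit diagonal
    L = np.linalg.cholesky(C)
    all_returns.append((L @ rng.normal(size=(k, per_epoch))).T)

r = np.concatenate(all_returns)
port = r.mean(axis=1)                     # equally weighted portfolio return

# Mixing over correlation states fattens the tails: kurtosis above the
# Gaussian value of 3.
kurt = np.mean(port**4) / np.mean(port**2) ** 2
```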
  12. By: Gross, Marco
    Abstract: This paper aims to illustrate how weight matrices that are needed to construct foreign variable vectors in Global Vector Autoregressive (GVAR) models can be estimated jointly with the GVAR's parameters. An application to real GDP and consumption expenditure price inflation as well as a controlled Monte Carlo simulation serve to highlight that 1) in the application at hand, the estimated weights differ significantly for some countries from the trade-based ones that are traditionally employed in that context; 2) misspecified weights might bias the GVAR estimate and therefore distort its dynamics; 3) using estimated GVAR weights instead of trade-based ones (to the extent that they differ and the latter bias the global model estimates) should enhance the out-of-sample forecast performance of the GVAR. Devising a method for estimating GVAR weights is particularly useful for contexts in which it is not obvious how weights could otherwise be constructed from data. JEL Classification: C33, C53, C61, E17
    Keywords: forecasting and simulation, Global macroeconometric modeling, models with panel data
    Date: 2013–03
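For readers unfamiliar with GVAR mechanics: each country's "foreign" variables are weighted averages of the other countries' variables, x*_i = sum_j w_ij x_j, with w_ii = 0 and each row of the weight matrix summing to one. The numbers below are invented illustrative weights, not estimates from the paper:

```python
import numpy as np

x = np.array([[1.0, 0.5, -0.2],           # country-specific variable
              [0.8, 0.1,  0.4]])           # rows: periods, cols: countries

W = np.array([[0.0, 0.7, 0.3],            # weight matrix: zero diagonal,
              [0.6, 0.0, 0.4],            # rows sum to one
              [0.5, 0.5, 0.0]])

assert np.allclose(W.sum(axis=1), 1.0)

# Foreign-variable vector for each country in each period.
x_star = x @ W.T
```

The paper's contribution is to treat the entries of W as parameters estimated jointly with the GVAR rather than fixing them from trade data as above.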
  13. By: Binder, Michael; Gross, Marco
    Abstract: The purpose of the paper is to develop a Regime-Switching Global Vector Autoregressive (RS-GVAR) model. The RS-GVAR model allows for recurring or non-recurring structural changes in all or a subset of countries. It can be used to generate regime-dependent impulse response functions which are conditional upon a regime-constellation across countries. Coupling the RS and the GVAR methodology improves out-of-sample forecast accuracy significantly in an application to real GDP, price inflation, and stock prices. JEL Classification: C32, E17, G20
    Keywords: forecasting and simulation, Global macroeconometric modeling, nonlinear modeling, Regime switching
    Date: 2013–08
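As a univariate stand-in for the regime-switching idea (not the RS-GVAR itself; all parameters invented), the simulation below runs an AR(1) whose coefficient and shock volatility follow a two-state Markov chain:

```python
import numpy as np

rng = np.random.default_rng(8)

P = np.array([[0.95, 0.05],               # regime transition probabilities
              [0.10, 0.90]])
phi = np.array([0.9, 0.3])                # AR(1) coefficient per regime
sd = np.array([0.5, 2.0])                 # shock volatility per regime

T = 2000
s = 0
y = np.zeros(T)
states = np.zeros(T, dtype=int)
for t in range(1, T):
    s = rng.choice(2, p=P[s])             # draw next regime
    states[t] = s
    y[t] = phi[s] * y[t - 1] + sd[s] * rng.normal()

# Dynamics differ sharply across regimes: regime 1 is the volatile one.
var0 = y[states == 0].var()
var1 = y[states == 1].var()
```

In the RS-GVAR the analogous objects are regime-dependent VAR parameters per country, and impulse responses are conditioned on a regime constellation across countries.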
  14. By: Smets, Frank; Warne, Anders; Wouters, Raf
    Abstract: This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets, and Wouters (2012) estimated on euro area data. It investigates to what extent forecasts of inflation, GDP growth and unemployment by professional forecasters improve the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated over the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model and a random walk. JEL Classification: E24, E31, E32
    Keywords: Bayesian methods, DSGE model, estimated New Keynesian model, macroeconomic forecasting, real-time data, survey data
    Date: 2013–08
  15. By: Irina Murtazashvil; Di Liu; Artem Prokhorov
    Abstract: We estimate intergenerational income mobility in the USA and Sweden. To measure the degree to which income status is transmitted from one generation to another we propose a nonparametric estimator, which is particularly relevant for cross-country comparisons. Our approach allows intergenerational mobility to vary across observable family characteristics. Furthermore, it fits situations when data on fathers and sons come from different samples. Finally, our estimator is consistent in the presence of measurement error in fathers' long-run economic status. We find that family background captured by fathers' education matters for intergenerational income persistence in the USA more than in Sweden suggesting that the character of inequality in the two countries is rather different.
    Keywords: Intergeneration Income Persistence; GMM estimation
    JEL: D3 C14
    Date: 2013–08
  16. By: Groenen, P.J.F.; Borg, I.
    Abstract: Multidimensional scaling (MDS) has established itself as a standard tool for statisticians and applied researchers. Its success is due to its simple and easily interpretable representation of potentially complex structural data. These data are typically embedded into a 2-dimensional map, where the objects of interest (items, attributes, stimuli, respondents, etc.) correspond to points such that those that are near to each other are empirically similar, and those that are far apart are different. In this paper, we pay tribute to several important developers of MDS and give a subjective overview of milestones in MDS developments. We also discuss the present situation of MDS and give a brief outlook on its future.
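The classical (Torgerson) variant of MDS reduces to linear algebra: square the distances, double-centre, and take the leading eigenvectors. The self-contained check below (an invented four-point configuration, not an example from the paper) embeds the points in 2-D and verifies that the embedding reproduces the distances:

```python
import numpy as np

pts = np.array([[0.0], [1.0], [3.0], [6.0]])       # true 1-D positions
D = np.abs(pts - pts.T)                            # pairwise distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                # centring matrix
B = -0.5 * J @ (D ** 2) @ J                        # double-centred Gram matrix

vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]                     # largest eigenvalues first
vals, vecs = vals[order], vecs[:, order]
coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0))

# Distances between embedded points reproduce the originals.
D_hat = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

Modern MDS variants surveyed in the paper minimise stress functions iteratively instead, but this eigendecomposition is the historical starting point.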
  17. By: Bańbura, Marta; Giannone, Domenico; Modugno, Michele; Reichlin, Lucrezia
    Abstract: The term now-casting is a contraction of now and forecasting and has been used for a long time in meteorology and recently also in economics. In this paper we survey recent developments in economic now-casting with special focus on those models that formalize key features of how market participants and policy makers read macroeconomic data releases in real time, which involves: monitoring many data, forming expectations about them and revising the assessment on the state of the economy whenever realizations diverge sizeably from those expectations. (Prepared for G. Elliott and A. Timmermann, eds., Handbook of Economic Forecasting, Volume 2, Elsevier-North Holland). JEL Classification: E32, E37, C01, C33, C53
    Keywords: macroeconomic forecasting, Macroeconomic news, mixed frequency, real-time data, state space models
    Date: 2013–07
  18. By: Mitesh Kataria (Max Planck Institute of Economics, Strategic Interaction Group,Jena)
    Abstract: Maniadis et al. (2013) present a theoretical framework that aims at providing insights into the mechanics of proper inference. They suggest that a decision about whether to call an experimental finding noteworthy, or deserving of great attention, should be based on the calculated post-study probability. Although I largely agree with most of the points in Maniadis et al. (2013), this note raises some important caveats.
    Keywords: Bayes' theorem, Methodology
    JEL: C11 C12 C80
    Date: 2013–08–14
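The post-study probability the note discusses has a simple closed form (one common formulation in this declared-findings framework; the input values below are illustrative, not taken from the note):

```python
# Post-study probability (PSP): the probability that a statistically
# significant finding reflects a true effect, given the prior probability
# pi that the tested effect is real, the power (1 - beta) of the test,
# and the significance level alpha.
def post_study_probability(pi, power, alpha):
    return (power * pi) / (power * pi + alpha * (1.0 - pi))

# A surprising (low-prior) finding: even with 80% power and alpha = 0.05,
# the PSP stays well below conventional confidence levels.
psp = post_study_probability(pi=0.1, power=0.8, alpha=0.05)
```

This is a direct application of Bayes' theorem, which is why the note's keywords lead with it.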

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.