nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒11‒27
24 papers chosen by
Sune Karlsson
Örebro University

  1. "Computing Densities: A Conditional Monte Carlo Estimator" By Richard Anton Braun; Huiyu Li; John Stachurski
  2. Frequentist Inference in Weakly Identified DSGE Models By Guerron-Quintana, Pablo A.; Inoue, Atsushi; Kilian, Lutz
  3. Errors-in-Variables Models: A Generalized Functions Approach By Victoria Zinde-Walsh
  4. Forecasting Large Datasets with Bayesian Reduced Rank Multivariate Models By Carriero, Andrea; Kapetanios, George; Marcellino, Massimiliano
  5. Generalized Extreme Value Distribution with Time-Dependence Using the AR and MA Models in State Space Form By Jouchi Nakajima; Tsuyoshi Kunihama; Yasuhiro Omori; Sylvia Frühwirth-Schnatter
  6. Testing the Correlated Random Coefficient Model By James J. Heckman; Daniel Schmierer
  7. "Forecasting Realized Volatility with Linear and Nonlinear Models" By Michael McAleer; Marcelo C. Medeiros
  8. The 'Puzzles' Methodology: en route to Indirect Inference? By Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael R.
  9. Jump-Robust Volatility Estimation using Nearest Neighbor Truncation By Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg
  10. How Much Should We Trust Linear Instrumental Variables Estimators? An Application to Family Size and Children's Education By Mogstad, Magne; Wiswall, Matthew
  11. Estimation of Treatment Effects Without an Exclusion Restriction: with an Application to the Analysis of the School Breakfast Program By Daniel L. Millimet; Rusty Tchernis
  12. Macroeconomic Forecasting and Structural Change By D'Agostino, Antonello; Gambetti, Luca; Giannone, Domenico
  13. An information theoretic approach to statistical dependence: copula information By Rafael S. Calsaverini; Renato Vicente
  14. A Low Dimensional Kalman Filter for Systems with Lagged Observables By Kristoffer Nimark
  15. Adaptive Instrumental Variable Estimation of Heteroskedastic Error Component Models By Eduardo Fé Rodríguez
  16. Multivariate Sarmanov Count Data Models By Miravete, Eugenio J
  17. Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data By Carlos A. Flores; Oscar A. Mitnik
  18. MIDAS vs. mixed-frequency VAR: Nowcasting GDP in the Euro Area By Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
  19. Identifying the Effects of Food Stamps on Child Health Outcomes When Participation is Endogenous and Misreported By Kreider, Brent; Pepper, John V.; Gundersen, Craig; Jolliffe, Dean
  20. On the Equivalence of Location Choice Models: Conditional Logit, Nested Logit and Poisson By Brülhart, Marius; Schmidheiny, Kurt
  21. Modeling uncertainty in macroeconomic growth determinants using Gaussian graphical models By Adrian Dobra; Theo S. Eicher; Alexander Lenkoski
  22. Testing Unilateral and Bilateral Link Formation By Comola, Margherita; Fafchamps, Marcel
  23. Can Parameter Instability Explain the Meese-Rogoff Puzzle? By Bacchetta, Philippe; Beutler, Toni; van Wincoop, Eric
  24. How much nominal rigidity is there in the US Economy? Testing a New Keynesian DSGE model using indirect inference By Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael R.

  1. By: Richard Anton Braun (Faculty of Economics, University of Tokyo); Huiyu Li (Graduate School of Economics, University of Tokyo); John Stachurski (Institute of Economic Research, Kyoto University)
    Abstract: We propose a generalized conditional Monte Carlo technique for computing densities in economic models. Global consistency and functional asymptotic normality are established under ergodicity assumptions on the simulated process. The asymptotic normality result allows us to characterize the asymptotic distribution of the error in density space, and implies faster convergence than nonparametric kernel density estimators. We show that our results nest several other well-known density estimators, and illustrate potential applications.
    Date: 2009–10
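    A minimal sketch of the conditional ("look-ahead") Monte Carlo idea: for a Markov process with a known transition density, the stationary density at a point y is the expectation of p(y|X_t) along a simulated path, so averaging the transition density over the path estimates the density without kernel smoothing. The Gaussian AR(1) example and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import math
import random

def lookahead_density(path, rho, sigma):
    """Conditional Monte Carlo density estimator for a Gaussian AR(1):
    f_hat(y) = average over the path of the transition density p(y | x_t)."""
    n = len(path)

    def f_hat(y):
        return sum(
            math.exp(-0.5 * ((y - rho * x) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi))
            for x in path
        ) / n

    return f_hat

random.seed(0)
rho, sigma = 0.8, 1.0
x, path = 0.0, []
for _ in range(5000):
    x = rho * x + random.gauss(0.0, sigma)   # simulate X_t = rho*X_{t-1} + eps_t
    path.append(x)

f_hat = lookahead_density(path, rho, sigma)
# The exact stationary density is N(0, sigma^2 / (1 - rho^2)); compare at y = 0.
s = sigma / math.sqrt(1 - rho ** 2)
f_true = 1.0 / (s * math.sqrt(2 * math.pi))
print(abs(f_hat(0.0) - f_true))  # small
```

    Because each term p(y|x_t) is a smooth function of y, the average converges at the parametric root-n rate in y, which is the intuition behind the faster-than-kernel convergence claimed in the abstract.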
  2. By: Guerron-Quintana, Pablo A.; Inoue, Atsushi; Kilian, Lutz
    Abstract: We show that in weakly identified models (1) the posterior mode will not be a consistent estimator of the true parameter vector, (2) the posterior distribution will not be Gaussian even asymptotically, and (3) Bayesian credible sets and frequentist confidence sets will not coincide asymptotically. This means that Bayesian DSGE estimation should not be interpreted merely as a convenient device for obtaining asymptotically valid point estimates and confidence sets from the posterior distribution. As an alternative, we develop new frequentist confidence sets for structural DSGE model parameters that remain asymptotically valid regardless of the strength of the identification.
    Keywords: Bayes factor; Bayesian estimation; Confidence set; DSGE models; Identification; Inference; Likelihood ratio
    JEL: C32 C52 E30 E50
    Date: 2009–09
  3. By: Victoria Zinde-Walsh
    Abstract: Identification in errors-in-variables regression models was recently extended to wide classes of models by S. Schennach (Econometrica, 2007) (S) via the use of generalized functions. In this paper the problems of non- and semiparametric identification in such models are re-examined. Nonparametric identification holds under weaker assumptions than in (S); the proof here does not rely on a decomposition of generalized functions into ordinary and singular parts, which may not hold. Conditions for continuity of the identification mapping are provided and a consistent nonparametric plug-in estimator for regression functions in the L₁ space is constructed. Semiparametric identification via a finite set of moments is shown to hold for classes of functions that are explicitly characterized; unlike (S), existence of a moment generating function for the measurement error is not required.
    JEL: C14 C65
    Date: 2009–09
  4. By: Carriero, Andrea; Kapetanios, George; Marcellino, Massimiliano
    Abstract: The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately improves substantially the accuracy of forecasts, both when the whole set of variables is to be forecast, and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the ground to use large scale reduced rank models for empirical analysis.
    Keywords: Bayesian VARs; factor models; forecasting; reduced rank.
    JEL: C11 C13 C33 C53
    Date: 2009–09
  5. By: Jouchi Nakajima (Institute for Monetary and Economic Studies, Bank of Japan; currently in the Personnel and Corporate Affairs Department, studying at Duke University); Tsuyoshi Kunihama (Graduate student, Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Professor, Faculty of Economics, University of Tokyo); Sylvia Frühwirth-Schnatter (Professor, Department of Applied Statistics, Johannes Kepler University Linz)
    Abstract: A new state space approach is proposed to model the time-dependence in an extreme value process. The generalized extreme value distribution is extended to incorporate the time-dependence using a state space representation where the state variables either follow an autoregressive (AR) process or a moving average (MA) process with innovations arising from a Gumbel distribution. Using a Bayesian approach, an efficient Markov chain Monte Carlo algorithm is proposed that exploits a very accurate approximation of the Gumbel distribution by a ten-component mixture of normal distributions. The methodology is illustrated using extreme returns of daily stock data. The model is fitted to a monthly series of minimum returns and the empirical results support strong evidence for time-dependence among the observed minimum returns.
    Keywords: Extreme values, Generalized extreme value distribution, Markov chain Monte Carlo, Mixture sampler, State space model, Stock returns
    JEL: C11 C51
    Date: 2009–11
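    As a rough sketch of the building blocks (not the paper's algorithm, and omitting its ten-component normal-mixture approximation, whose weights are paper-specific), a Gumbel variate can be drawn by inverse CDF and fed into the AR(1) state equation described in the abstract; the parameter values are arbitrary:

```python
import math
import random

def gumbel(mu=0.0, beta=1.0, rng=random):
    """Draw from Gumbel(mu, beta) by inverse CDF: F^{-1}(u) = mu - beta*ln(-ln u)."""
    u = rng.random()
    return mu - beta * math.log(-math.log(u))

def simulate_ar_gumbel(n, phi, beta, rng=random):
    """AR(1) state with Gumbel innovations, mirroring the paper's AR variant:
    x_t = phi * x_{t-1} + eta_t,  eta_t ~ Gumbel(0, beta)."""
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + gumbel(0.0, beta, rng)
        path.append(x)
    return path

random.seed(1)
draws = [gumbel() for _ in range(200000)]
mean = sum(draws) / len(draws)
# The standard Gumbel mean is the Euler-Mascheroni constant, about 0.5772.
print(mean)
```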
  6. By: James J. Heckman (University of Chicago; University College Dublin; Cowles Foundation, Yale University; American Bar Foundation); Daniel Schmierer (University of Chicago)
    Abstract: The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.
    Keywords: Correlated random coefficient, testing, instrumental variables, power of tests based on IV
    JEL: C31
    Date: 2009–11–13
  7. By: Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo); Marcelo C. Medeiros (Department of Economics, Pontifical Catholic University of Rio de Janeiro)
    Abstract: In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intra-day returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in the paper.
    Date: 2009–10
  8. By: Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael R.
    Abstract: We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments with data moments. We compare these with the method of Indirect Inference to which they are closely related. We illustrate the comparison with contrasting assessments of a two-country model in two recent papers. We conclude that Indirect Inference is the proper end point of the puzzles methodology.
    Keywords: anomaly; Bootstrap; DSGE; indirect inference; puzzle; US-EU Model; VAR; Wald statistic
    JEL: C12 C32 C52 E1
    Date: 2009–11
  9. By: Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg
    Abstract: We propose two new jump-robust estimators of integrated variance based on high-frequency return observations. These MinRV and MedRV estimators provide an attractive alternative to the prevailing bipower and multipower variation measures. Specifically, the MedRV estimator has better theoretical efficiency properties than the tripower variation measure and displays better finite-sample robustness to both jumps and the occurrence of "zero" returns in the sample. Unlike the bipower variation measure, the new estimators allow for the development of an asymptotic limit theory in the presence of jumps. Finally, they retain the local nature associated with the low order multipower variation measures. This proves essential for alleviating finite sample biases arising from the pronounced intraday volatility pattern which afflict alternative jump-robust estimators based on longer blocks of returns. An empirical investigation of the Dow Jones 30 stocks and an extensive simulation study corroborate the robustness and efficiency properties of the new estimators.
    JEL: C14 C15 C22 C80 G10
    Date: 2009–11
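    Both estimators have simple closed forms. In the sketch below the scaling constants are taken to be pi/(pi-2) for MinRV and pi/(6-4*sqrt(3)+pi) for MedRV — my reading of the published formulas, which should be checked against the paper — and the simulation design is illustrative:

```python
import math
import random
import statistics

def realized_variance(r):
    """Plain realized variance: sum of squared returns (not jump-robust)."""
    return sum(x * x for x in r)

def min_rv(r):
    """MinRV: scaled sum of squared minima of adjacent absolute returns."""
    n = len(r)
    s = sum(min(abs(r[i]), abs(r[i + 1])) ** 2 for i in range(n - 1))
    return (math.pi / (math.pi - 2)) * (n / (n - 1)) * s

def med_rv(r):
    """MedRV: scaled sum of squared medians over three adjacent absolute returns."""
    n = len(r)
    s = sum(statistics.median((abs(r[i - 1]), abs(r[i]), abs(r[i + 1]))) ** 2
            for i in range(1, n - 1))
    return (math.pi / (6 - 4 * math.sqrt(3) + math.pi)) * (n / (n - 2)) * s

random.seed(2)
n = 10000
r = [random.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(n)]  # integrated variance = 1
r_jump = list(r)
r_jump[n // 2] += 1.0  # inject a single jump

print(realized_variance(r_jump))  # inflated by roughly the squared jump size
print(med_rv(r_jump))             # stays near the integrated variance of 1
```

    A single large jump inflates realized variance by roughly the squared jump size, while the min and median operations simply discard the jump return in favor of its diffusive neighbors.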
  10. By: Mogstad, Magne (Statistics Norway); Wiswall, Matthew (New York University)
    Abstract: Many empirical studies specify outcomes as a linear function of endogenous regressors when conducting instrumental variable (IV) estimation. We show that tests for treatment effects, selection bias, and treatment effect heterogeneity are biased if the true relationship is non-linear. These results motivate a re-examination of recent evidence suggesting no causal effect of family size on children's education. Following common practice, a linear IV estimator has been used, assuming constant marginal effects of additional children across family sizes. We find that the conclusion of no effect of family size is an artifact of the linear specification, which masks substantial marginal family size effects.
    Keywords: instrumental variables, variable treatment intensity, treatment effect heterogeneity, selection bias, quantity-quality, family size, child outcome
    JEL: C31 C14 J13
    Date: 2009–11
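    A stylized simulation (unrelated to the authors' data) illustrates the mechanism: with a single instrument, the linear IV estimand is cov(z,y)/cov(z,x), a particular weighted average of marginal effects. If the true outcome is nonlinear in treatment intensity, that single average can sit far from the marginal effect at any given family size. All names and parameter values below are assumptions of the sketch.

```python
import random

def cov(a, b):
    """Population-style covariance of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_estimate(y, x, z):
    """Simple IV (Wald-type) estimator with one instrument: cov(z,y)/cov(z,x)."""
    return cov(z, y) / cov(z, x)

random.seed(3)
n = 200000
z = [random.choice([0, 1]) for _ in range(n)]      # binary instrument
x = [zi + random.choice([0, 1, 2]) for zi in z]    # treatment intensity shifted by z
y = [xi * xi + random.gauss(0, 1) for xi in x]     # true outcome is quadratic in x

iv = iv_estimate(y, x, z)
print(iv)  # settles near 3 in this design
```

    Here the true marginal effect 2x ranges from 0 to 6 across intensities, yet the linear IV estimate collapses this heterogeneity to a single number (about 3 in this design), which is the kind of masking the abstract describes.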
  11. By: Daniel L. Millimet; Rusty Tchernis
    Abstract: While the rise in childhood obesity is clear, the policy ramifications are not. School nutrition programs such as the School Breakfast Program (SBP) have come under much scrutiny. However, the lack of experimental evidence, combined with non-random selection into these programs, makes identification of the causal effects of such programs difficult. In the case of the SBP, this difficulty is exacerbated by the apparent lack of exclusion restrictions. Here, we compare via Monte Carlo study several existing estimators that do not rely on exclusion restrictions for identification. In addition, we propose two new estimation strategies. Simulations illustrate the usefulness of our new estimators, as well as provide applied researchers several practical guidelines when analyzing the causal effects of binary treatments. More importantly, we find consistent evidence of a beneficial causal effect of SBP participation on childhood obesity when applying estimators designed to circumvent selection on unobservables.
    JEL: C21 C52 I18 J13
    Date: 2009–11
  12. By: D'Agostino, Antonello; Gambetti, Luca; Giannone, Domenico
    Abstract: The aim of this paper is to assess whether explicitly modeling structural change increases the accuracy of macroeconomic forecasts. We produce real time out-of-sample forecasts for inflation, the unemployment rate and the interest rate using a Time-Varying Coefficients VAR with Stochastic Volatility (TV-VAR) for the US. The model generates accurate predictions for the three variables. In particular for inflation the TV-VAR outperforms, in terms of mean square forecast error, all the competing models: fixed coefficients VARs, Time-Varying ARs and the naïve random walk model. These results are also shown to hold over the most recent period in which it has been hard to forecast inflation.
    Keywords: Forecasting; Inflation; Stochastic Volatility; Time Varying Vector Autoregression
    JEL: C32 E37 E47
    Date: 2009–11
  13. By: Rafael S. Calsaverini; Renato Vicente
    Abstract: We discuss the connection between information and copula theories by showing that a copula can be employed to decompose the information content of a multivariate distribution into marginal and dependence components, with the latter quantified by the mutual information. We define the information excess as a measure of deviation from a maximum entropy distribution. The idea of marginal invariant dependence measures is also discussed and used to show that empirical linear correlation underestimates the amplitude of the actual correlation in the case of non-Gaussian marginals. The mutual information is shown to provide an upper bound for the asymptotic empirical log-likelihood of a copula. An analytical expression for the information excess of T-copulas is provided, allowing for simple model identification within this family. We illustrate the framework in a financial data set.
    Date: 2009–11
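    For the Gaussian case the decomposition is fully analytic: joint entropy equals the sum of the marginal entropies minus the mutual information, and the mutual information -(1/2)ln(1-rho^2) depends only on the copula parameter rho, not on the marginal variances — the marginal-invariance property the abstract appeals to. The parameter values below are arbitrary.

```python
import math

def gaussian_entropy(var):
    """Differential entropy of a univariate normal with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def bivariate_gaussian_entropy(v1, v2, rho):
    """Differential entropy of a bivariate normal with variances v1, v2
    and correlation rho (determinant of the covariance = v1*v2*(1-rho^2))."""
    det = v1 * v2 * (1 - rho * rho)
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)

def gaussian_copula_mi(rho):
    """Mutual information of a Gaussian copula: a function of rho alone."""
    return -0.5 * math.log(1 - rho * rho)

v1, v2, rho = 2.0, 0.5, 0.7
h_joint = bivariate_gaussian_entropy(v1, v2, rho)
decomp = gaussian_entropy(v1) + gaussian_entropy(v2) - gaussian_copula_mi(rho)
print(h_joint, decomp)  # the two agree: joint = marginals - mutual information
```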
  14. By: Kristoffer Nimark
    Abstract: This note describes how the Kalman filter can be modified to allow for the vector of observables to be a function of lagged variables without increasing the dimension of the state vector in the filter. This is useful in applications where it is desirable to keep the dimension of the state vector low. The modified filter and accompanying code (which nests the standard filter) can be used to compute (i) the steady-state Kalman filter, (ii) the log likelihood of a parameterized state space model conditional on a history of observables, (iii) a smoothed estimate of latent state variables, and (iv) a draw from the distribution of latent states conditional on a history of observables.
    Keywords: Kalman filter, lagged observables, Kalman smoother, simulation smoother
    Date: 2009–11
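    For reference, a baseline scalar Kalman filter for a local-level model — the note's lagged-observables modification is not reproduced here; this is only the standard recursion that the modified filter nests:

```python
def kalman_filter(ys, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the local-level model
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r).
    Returns the filtered means and error variances."""
    x, p = x0, p0
    xs, ps = [], []
    for y in ys:
        p = p + q                 # predict: variance grows by the state noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update: shrink toward the observation
        p = (1 - k) * p
        xs.append(x)
        ps.append(p)
    return xs, ps

ys = [1.0] * 50                   # constant observations
xs, ps = kalman_filter(ys, q=0.01, r=1.0)
print(xs[-1])                     # filtered mean converges toward 1.0
print(ps[-1])                     # variance settles at its steady-state value
```

    The error variance converges to a fixed point of the Riccati recursion, which is the steady-state filter mentioned in point (i) of the abstract.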
  15. By: Eduardo Fé Rodríguez
    Date: 2009
  16. By: Miravete, Eugenio J
    Abstract: I present two flexible models of multivariate count data regression that make use of the Sarmanov family of distributions. This approach overcomes several existing difficulties in extending Poisson regressions to the multivariate case, namely: i) it is able to account for both over and underdispersion, ii) it allows for correlations of any sign among the counts, iii) correlation and dispersion depend on different parameters, and iv) constrained maximum likelihood estimation is computationally feasible. Models can be extended beyond the bivariate case. I estimate the bivariate versions of two of these models to address whether the pricing strategies of competing duopolists in the early U.S. cellular telephone industry can be considered strategic complements or substitutes. I show that a Sarmanov model with double Poisson marginals outperforms the alternative count data model based on a multivariate renewal process with gamma distributed arrival times because the latter imposes very restrictive constraints on the valid range of the correlation coefficients.
    Keywords: Double Poisson; Gamma; Multivariate Count Data Models; Sarmanov Distributions; Tariff Options
    JEL: C16 C35 L11
    Date: 2009–09
  17. By: Carlos A. Flores (Department of Economics, University of Miami); Oscar A. Mitnik (Department of Economics, University of Miami and IZA)
    Abstract: This paper assesses the effectiveness of unconfoundedness-based estimators of mean effects for multiple or multivalued treatments in eliminating biases arising from nonrandom treatment assignment. We evaluate these multiple treatment estimators by simultaneously equalizing average outcomes among several control groups from a randomized experiment. We study linear regression estimators as well as partial mean and weighting estimators based on the generalized propensity score (GPS). We also study the use of the GPS in assessing the comparability of individuals among the different treatment groups, and propose a strategy to determine the overlap or common support region that is less stringent than those previously used in the literature. Our results show that in the multiple treatment setting there may be treatment groups for which it is extremely difficult to find valid comparison groups, and that the GPS plays a significant role in identifying those groups. In such situations, the estimators we consider perform poorly. However, their performance improves considerably once attention is restricted to those treatment groups with adequate overlap quality, with difference-in-difference estimators performing the best. Our results suggest that unconfoundedness-based estimators are a valuable econometric tool for evaluating multiple treatments, as long as the overlap quality is satisfactory.
    Date: 2009–09
  18. By: Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are more complementary than substitutes, since MF-VAR tends to perform better for longer horizons, whereas MIDAS for shorter horizons.
    Keywords: euro area growth; MIDAS; mixed-frequency data; mixed-frequency VAR; nowcasting
    JEL: C53 E37
    Date: 2009–09
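    The parsimony MIDAS gains comes from restricting the lag coefficients to a tightly parameterized weight function; a common choice in this literature is the exponential Almon polynomial sketched below. The two-parameter form and the parameter values are assumptions of this sketch, not taken from the paper.

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to one."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, n_lags + 1)]
    total = sum(raw)
    return [w / total for w in raw]

w = exp_almon_weights(12, 0.1, -0.05)
print(sum(w))  # weights sum to 1
# With theta2 < 0 the weights decay smoothly; other (theta1, theta2) pairs
# produce hump shapes, all governed by just two parameters.
```

    Twelve monthly lag weights are thus governed by two parameters, versus twelve free coefficients per indicator and equation in an unrestricted mixed-frequency VAR — the curse-of-dimensionality contrast the abstract draws.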
  19. By: Kreider, Brent; Pepper, John V.; Gundersen, Craig; Jolliffe, Dean
    Abstract: The literature assessing the efficacy of the Food Stamp Program, now called the Supplemental Nutrition Assistance Program (SNAP), has long puzzled over positive associations between food stamp receipt and various undesirable health outcomes such as food insecurity. Assessing the impact of food stamps on outcomes is made difficult by endogenous selection into food stamp recipiency and extensive systematic misreporting of participation status. Using data from the National Health and Nutrition Examination Survey (NHANES), we apply and extend partial identification bounding methods to account for these two identification problems in a single unifying framework. Imposing relatively weak nonparametric assumptions on the selection and reporting error processes, we provide informative bounds on the impact of food stamps on child food insecurity, obesity, general health, and anemia. We find that commonly cited negative relationships between food stamps and health outcomes provide a misleading picture about the impact of the program. Without imposing any parametric assumptions, our tightest bounds identify modest favorable impacts of food stamps on child health.
    Keywords: Food Stamp Program, Supplemental Nutrition Assistance Program, food insecurity, health outcomes, partial identification, treatment effect, nonparametric bounds, classification error
    JEL: C1 C2 I3
    Date: 2009–11–13
  20. By: Brülhart, Marius; Schmidheiny, Kurt
    Abstract: It is well understood that the two most popular empirical models of location choice - conditional logit and Poisson - return identical coefficient estimates when the regressors are not individual specific. We show that these two models differ starkly in terms of their implied predictions. The conditional logit model represents a zero-sum world, in which one region's gain is the other regions' loss. In contrast, the Poisson model implies a positive-sum economy, in which one region's gain is no other region's loss. We also show that all intermediate cases can be represented as a nested logit model with a single outside option. The nested logit turns out to be a linear combination of the conditional logit and Poisson models. Conditional logit and Poisson elasticities mark the polar cases and can therefore serve as boundary values in applied research.
    Keywords: conditional logit; firm location; nested logit; Poisson count model; residential choice
    JEL: C25 H73 R3
    Date: 2009–07
  21. By: Adrian Dobra (University of Washington); Theo S. Eicher (University of Washington); Alexander Lenkoski (University of Washington)
    Abstract: Model uncertainty has become a central focus of policy discussion surrounding the determinants of economic growth. Over 140 regressors have been employed in growth empirics due to the proliferation of several new growth theories in the past two decades. Recently, Bayesian model averaging (BMA) has been employed to address model uncertainty and to provide clear policy implications by identifying robust growth determinants. The BMA approaches were, however, limited to linear regression models that abstract from possible dependencies embedded in the covariance structures of growth determinants. The recent empirical growth literature has developed jointness measures to highlight such dependencies. We address model uncertainty and covariate dependencies in a comprehensive Bayesian framework that allows for structural learning in linear regressions and Gaussian graphical models. A common prior specification across the entire comprehensive framework provides consistency. Gaussian graphical models allow for a principled analysis of dependency structures, which allows us to generate a much more parsimonious set of fundamental growth determinants. Our empirics are based on a prominent growth dataset with 41 potential economic factors that has been utilized in numerous previous analyses to account for model uncertainty as well as jointness.
  22. By: Comola, Margherita; Fafchamps, Marcel
    Abstract: The literature has shown that network architecture depends crucially on whether links are formed unilaterally or bilaterally, that is, on whether the consent of both nodes is required for a link to be formed. We propose a test of whether network data is best seen as an actual link or willingness to link and, in the latter case, whether this link is generated by a unilateral or bilateral link formation process. We illustrate this test using survey answers to a risk-sharing question in Tanzania. We find that the bilateral link formation model fits the data better than the unilateral model, but the data are best interpreted as willingness to link rather than an actual link. We then expand the model to include self-censoring and find that models with self-censoring fit the data best.
    Keywords: network architecture; pairwise stability; risk sharing
    JEL: C12 C52 D85
    Date: 2009–08
  23. By: Bacchetta, Philippe; Beutler, Toni; van Wincoop, Eric
    Abstract: The empirical literature on nominal exchange rates shows that the current exchange rate is often a better predictor of future exchange rates than a linear combination of macroeconomic fundamentals. This result is behind the famous Meese-Rogoff puzzle. In this paper we evaluate whether parameter instability can account for this puzzle. We consider a theoretical reduced-form relationship between the exchange rate and fundamentals in which parameters are either constant or time varying. We calibrate the model to data for exchange rates and fundamentals and conduct the exact same Meese-Rogoff exercise with data generated by the model. Our main finding is that the impact of time-varying parameters on the prediction performance is either very small or goes in the wrong direction. To help interpret the findings, we derive theoretical results on the impact of time-varying parameters on the out-of-sample forecasting performance of the model. We conclude that it is not time-varying parameters, but rather small sample estimation bias, that explains the Meese-Rogoff puzzle.
    Keywords: Exchange rate forecasting; exchange rate models
    JEL: F31 F37 F41
    Date: 2009–07
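    The small-sample mechanism can be sketched with a Monte Carlo (a stylized design, not the authors' calibration): even when the fundamentals model is exactly true, estimation error in its coefficient can make out-of-sample forecasts worse than the no-change random-walk forecast whenever the fundamental's signal is weak relative to the noise.

```python
import random

def simulate_meese_rogoff(reps=3000, t_train=60, t_test=50,
                          beta=1.0, sd_f=0.1, sd_e=1.0, seed=4):
    """Monte Carlo: the true model is ds_t = beta*f_t + e_t, yet in small
    samples the estimated model can forecast worse than a random walk."""
    rng = random.Random(seed)
    mse_model = mse_rw = 0.0
    n_obs = reps * t_test
    for _ in range(reps):
        f = [rng.gauss(0, sd_f) for _ in range(t_train + t_test)]
        e = [rng.gauss(0, sd_e) for _ in range(t_train + t_test)]
        ds = [beta * fi + ei for fi, ei in zip(f, e)]
        # OLS slope on the training sample (no intercept; regressor is zero-mean)
        num = sum(fi * di for fi, di in zip(f[:t_train], ds[:t_train]))
        den = sum(fi * fi for fi in f[:t_train])
        b_hat = num / den
        for i in range(t_train, t_train + t_test):
            mse_model += (ds[i] - b_hat * f[i]) ** 2 / n_obs
            mse_rw += ds[i] ** 2 / n_obs   # random walk predicts ds = 0
    return mse_model, mse_rw

mse_model, mse_rw = simulate_meese_rogoff()
print(mse_model > mse_rw)  # the correctly specified model typically still loses
```

    With sixty training observations and a weak fundamental, the sampling noise in b_hat outweighs the predictive content of f, so the random walk wins on mean squared error — estimation error, not parameter instability, drives the result, in line with the paper's conclusion.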
  24. By: Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael R.
    Abstract: We evaluate the Smets-Wouters model of the US using indirect inference with a VAR representation of the main US data series. We find that the original New Keynesian SW model is on the margin of acceptance when SW's own estimates of the variances and time-series behaviour of the structural errors are used. However when the structural errors implied jointly by the data and the structural model are used the model is rejected. We also construct an alternative (New Classical) version of the model with flexible wages and prices and a one-period information lag. This too is rejected. But when small proportions of both the labour and product markets are assumed to be imperfectly competitive within otherwise flexible markets the resulting 'weighted' model is accepted.
    Keywords: Bootstrap; DSGE; great moderation; indirect inference; New Classical; New Keynesian; regime change; structural break; US Model; VAR; Wald statistic
    JEL: C12 C32 C52 E1
    Date: 2009–11

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.