nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒02‒12
24 papers chosen by
Sune Karlsson
Örebro universitet

  1. Testing for Monotonicity in Unobservables under Unconfoundedness By Stefan Hoderlein; Liangjun Su; Halbert White; Thomas Tao Yang
  2. Nonparametric specification testing via the trinity of tests By Gupta, Abhimanyu
  3. The Estimation of Continuous Time Models with Mixed Frequency Data By Chambers, Marcus J
  4. Testing Monotonicity in Unobservables with Panel Data By Liangjun Su; Stefan Hoderlein; Halbert White
  5. Nonparametric Estimation of Demand Elasticities Using Panel Data By Stefan Hoderlein; Matthew Shum
  6. Estimation of Spatial Autoregressions with Stochastic Weight Matrices By Gupta, Abhimanyu
  7. Dynamic Vector Mode Regression By Kemp, Gordon C R; Parente, Paulo M D C; Santos Silva, Joao M C
  8. Nonparametric Specification Testing in Random Parameter Models By Christoph Breunig; Stefan Hoderlein
  9. Pseudo Maximum Likelihood Estimation of Spatial Autoregressive Models with Increasing Dimension By Gupta, Abhimanyu; Robinson, Peter M
  10. Autoregressive Spatial Spectral Estimates By Gupta, Abhimanyu
  11. Generalized Efficient Inference on Factor Models with Long-Range Dependence By Yunus Emre Ergemen
  12. Solution and Estimation Methods for DSGE Models By Fernández-Villaverde, Jesús; Rubio-Ramírez, Juan Francisco; Schorfheide, Frank
  13. On the parameter identifiability problem in Agent Based economical models By Di Molfetta Giuseppe
  14. Using Split Samples to Improve Inference on Causal Effects By Fafchamps, Marcel; Labonne, Julien
  15. The finite sample performance of inference methods for propensity score matching and weighting estimators By Bodory, Hugo; Huber, Martin; Camponovo, Lorenzo; Lechner, Michael
  16. Unraveling Firms: Demand, Productivity and Markups Heterogeneity By Forlani, Emanuele; Martin, Ralf; Mion, Giordano; Muûls, Mirabelle
  17. Deciding Between Alternative Approaches In Macroeconomics By David Hendry
  18. Parallelization Experience with Four Canonical Econometric Models using ParMitISEM By Nalan Basturk; Stefano Grassi; Lennart Hoogerheide; Herman K. van Dijk
  19. Identification and Inference in Regression Discontinuity Designs with a Manipulated Running Variable By Gerard, Francois; Rokkanen, Miikka; Rothe, Christoph
  20. Are small scale VARs useful for business cycle analysis? Revisiting Non-Fundamentalness By Canova, Fabio; Hamidi Sahneh, Mehdi
  21. The Linear Systems Approach to Linear Rational Expectations Models By Majid Al-Sadoon
  22. A Simultaneous Equation Approach to Estimating HIV Prevalence with Non-Ignorable Missing Responses By Giampiero Marra; Rosalba Radice; Till Bärnighausen; Simon N. Wood; Mark E. McGovern
  23. Erratum regarding “Instrumental variables with unrestricted heterogeneity and continuous treatment” By Stefan Hoderlein; Hajo Holzmann; Maximilian Kasy; Alexander Meister
  24. Are Small-Scale SVARs Useful for Business Cycle Analysis? Revisiting Non-Fundamentalness By Fabio Canova

  1. By: Stefan Hoderlein (Boston College); Liangjun Su (Singapore Management University); Halbert White; Thomas Tao Yang (Boston College)
    Abstract: Monotonicity in a scalar unobservable is a common assumption when modeling heterogeneity in structural models. Among other things, it allows one to recover the underlying structural function from certain conditional quantiles of observables. Nevertheless, monotonicity is a strong assumption and in some economic applications unlikely to hold, e.g., random coefficient models. Its failure can have substantive adverse consequences, in particular inconsistency of any estimator that is based on it. Having a test for this hypothesis is hence desirable. This paper provides such a test for cross-section data. We show how to exploit an exclusion restriction together with a conditional independence assumption, which in the binary treatment literature is commonly called unconfoundedness, to construct a test. Our statistic is asymptotically normal under local alternatives and consistent against global alternatives. Monte Carlo experiments show that a suitable bootstrap procedure yields tests with reasonable level behavior and useful power. We apply our test to study the role of unobserved ability in determining Black-White wage differences and to study whether Engel curves are monotonically driven by a scalar unobservable.
    Keywords: Control variables, Conditional exogeneity, Endogenous variables, Monotonicity, Nonparametrics, Nonseparable, Specification test, Unobserved heterogeneity
    JEL: C12 C14 C21 C26
    Date: 2015–10–23
  2. By: Gupta, Abhimanyu
    Abstract: Tests are developed for inference on a parameter vector whose dimension grows slowly with sample size. The statistics are based on the Lagrange Multiplier, Wald and (pseudo) Likelihood Ratio principles, admit standard normal asymptotic distributions under the null and are straightforward to compute. They are shown to be consistent and to possess non-trivial power against local alternatives. The settings considered include multiple linear regression, panel data models with fixed effects and spatial autoregressions. When a nonparametric regression function is estimated by series, we use our statistics to propose specification tests, and in semiparametric adaptive estimation we provide a test for correct error distribution specification. These tests are nonparametric but handled in practice with parametric techniques. A Monte Carlo study suggests that our tests perform well in finite samples. Two empirical examples use them to test for correct shape of an electricity distribution cost function and for linearity and equality of Engel curves.
    Date: 2015
  3. By: Chambers, Marcus J
    Abstract: This paper derives exact representations for discrete time mixed frequency data generated by an underlying multivariate continuous time model. Allowance is made for different combinations of stock and flow variables as well as deterministic trends, and the variables themselves may be stationary or nonstationary (and possibly co-integrated). The resulting discrete time representations allow the information contained in high frequency data to be utilised alongside the low frequency data in the estimation of the parameters of the continuous time model. Monte Carlo simulations explore the finite sample performance of the maximum likelihood estimator of the continuous time system parameters based on mixed frequency data, and a comparison with extant methods of using data only at the lowest frequency is provided. An empirical application demonstrates the methods developed in the paper, which concludes with a discussion of further ways in which the present analysis can be extended and refined.
    Keywords: Continuous time; mixed frequency data; exact discrete time models; stock and flow variables.
    Date: 2016
  4. By: Liangjun Su (Singapore Management University); Stefan Hoderlein (Boston College); Halbert White
    Abstract: Monotonicity in a scalar unobservable is a crucial identifying assumption for an important class of nonparametric structural models accommodating unobserved heterogeneity. Tests for this monotonicity have previously been unavailable. This paper proposes and analyzes tests for scalar monotonicity using panel data for structures with and without time-varying unobservables, either partially or fully nonseparable between observables and unobservables. Our nonparametric tests are computationally straightforward, have well behaved limiting distributions under the null, are consistent against precisely specified alternatives, and have standard local power properties. We provide straightforward bootstrap methods for inference. Some Monte Carlo experiments show that, for empirically relevant sample sizes, these methods reasonably control the level of the test, and that our tests have useful power. We apply our tests to study asset returns and demand for ready-to-eat cereals.
    Keywords: monotonicity, nonparametric, nonseparable, specification test, unobserved heterogeneity
    JEL: C12 C14 C33
  5. By: Stefan Hoderlein (Boston College); Matthew Shum (Caltech)
    Abstract: In this paper, we propose and implement an estimator for price elasticities in demand models that makes use of panel data. Our underlying demand model is nonparametric, and accommodates general distributions of product-specific unobservables which can lead to endogeneity of price. Our approach allows these unobservables to vary over time while, at the same time, not requiring the availability of instruments which are orthogonal to these unobservables. Monte Carlo simulations demonstrate that our estimator works remarkably well, even with modest sample sizes. We provide an illustrative application to estimating the cross-price elasticity matrix for carbonated soft drinks.
    Keywords: demand elasticities, nonparametric estimation
    Date: 2014–04–07
  6. By: Gupta, Abhimanyu
    Abstract: We examine a higher-order spatial autoregressive model with stochastic, but exogenous, spatial weight matrices. Allowing a general spatial linear process form for the disturbances that permits many common types of error specifications as well as potential ‘long memory’, we provide sufficient conditions for consistency and asymptotic normality of instrumental variables and ordinary least squares estimates. The implications of popular weight matrix normalizations and structures for our theoretical conditions are discussed. A set of Monte Carlo simulations examines the behaviour of the estimates in a variety of situations and suggests, like the theory, that spatial weights generated from distributions with ‘smaller’ moments yield better estimates. Our results are especially pertinent in situations where spatial weights are functions of stochastic economic variables.
    Date: 2015
  7. By: Kemp, Gordon C R; Parente, Paulo M D C; Santos Silva, Joao M C
    Abstract: We study the semi-parametric estimation of the conditional mode of a random vector that has a continuous conditional joint density with a well-defined global mode. A novel full-system estimator is proposed and its asymptotic properties are studied allowing for possibly dependent data. We specifically consider the estimation of vector autoregressive conditional mode models and of structural systems of linear simultaneous equations defined by mode restrictions. The proposed estimator is easy to implement using standard software and the results of a small simulation study suggest that it is well behaved in finite samples.
    Date: 2015
  8. By: Christoph Breunig (Humboldt-Universität zu Berlin); Stefan Hoderlein (Boston College)
    Abstract: In this paper, we suggest and analyze a new class of specification tests for random coefficient models. These tests allow one to assess the validity of central structural features of the model, in particular linearity in coefficients and generalizations of this notion like a known nonlinear functional relationship. They also allow one to test for degeneracy of the distribution of a random coefficient, i.e., whether a coefficient is fixed or random, including whether an associated variable can be omitted altogether. Our tests are nonparametric in nature, and use sieve estimators of the characteristic function. We analyze their power against both global and local alternatives in large samples and through a Monte Carlo simulation study. Finally, we apply our framework to analyze the specification in a heterogeneous random coefficients consumer demand model.
    Keywords: Nonparametric specification testing, random coefficients, unobserved heterogeneity, sieve method, characteristic function, consumer demand
    JEL: C12 C14
    Date: 2016–02–07
  9. By: Gupta, Abhimanyu; Robinson, Peter M
    Abstract: Pseudo maximum likelihood estimates are developed for higher-order spatial autoregressive models with increasingly many parameters, including models with spatial lags in the dependent variables and regression models with spatial autoregressive disturbances. We consider models with and without a linear or nonlinear regression component. Sufficient conditions for consistency and asymptotic normality are provided, the results varying according to whether the number of neighbours of a particular unit diverges or is bounded. Monte Carlo experiments examine finite-sample behaviour.
    Date: 2015
  10. By: Gupta, Abhimanyu
    Abstract: Autoregressive spectral density estimation for stationary random fields on a regular spatial lattice has many advantages relative to kernel based methods. It provides a guaranteed positive-definite estimate even when suitable edge-effect correction is employed, is simple to compute using least squares and necessitates no choice of kernel. We truncate a true half-plane infinite autoregressive representation to estimate the spectral density. The truncation length is allowed to diverge in all dimensions in order to avoid the potential bias which would accrue due to truncation at a fixed lag-length. Consistency and strong consistency of the proposed estimator, both uniform in frequencies, are established. Under suitable conditions the asymptotic distribution of the estimate is shown to be zero-mean normal and independent at fixed distinct frequencies, mirroring the behaviour for time series. A small Monte Carlo experiment examines finite sample performance. We illustrate the technique by applying it to Los Angeles house price data and a novel analysis of voter turnout data in a US presidential election. Technically the key to the results is the covariance structure of stationary random fields defined on regularly spaced lattices. We study this in detail and show the covariance matrix to satisfy a generalization of the Toeplitz property familiar from time series analysis.
    Date: 2015
  11. By: Yunus Emre Ergemen (Aarhus University and CREATES)
    Abstract: A dynamic factor model is considered that contains stochastic time trends allowing for stationary and nonstationary long-range dependence. The model nests standard I(0) and I(1) behaviour smoothly in common factors and residuals, removing the necessity of a priori unit-root and stationarity testing. Short-memory dynamics are allowed in the common factor structure and possibly heteroskedastic error term. In the estimation, a generalized version of the principal components (PC) approach is proposed to achieve efficiency. Asymptotics for efficient common factor and factor loading as well as long-range dependence parameter estimates are justified at standard parametric convergence rates. The use of the method for the selection of the number of factors and testing for latent components is discussed. Finite-sample properties of the estimates are explored via Monte Carlo experiments, and an empirical application to U.S. economy diffusion indices is included.
    Keywords: Factor models, long-range dependence, principal components, efficiency, hypothesis testing
    JEL: C12 C13 C33
    Date: 2016–01–29
  12. By: Fernández-Villaverde, Jesús; Rubio-Ramírez, Juan Francisco; Schorfheide, Frank
    Abstract: This paper provides an overview of solution and estimation techniques for dynamic stochastic general equilibrium (DSGE) models. We cover the foundations of numerical approximation techniques as well as statistical inference and survey the latest developments in the field.
    Keywords: approximation error analysis; Bayesian inference; DSGE model; frequentist inference; GMM estimation; impulse response function matching; likelihood-based inference; Metropolis-Hastings algorithm; minimum distance estimation; particle filter; perturbation methods; projection methods; sequential Monte Carlo
    JEL: C11 C13 C32 C52 C61 C63 E32 E52
    Date: 2015–12
  13. By: Di Molfetta Giuseppe
    Abstract: Identifiability of parameters is a fundamental prerequisite for model identification. It concerns the uniqueness of the model parameters determined from experimental or simulated observations. This dissertation specifically deals with structural or a priori identifiability: whether or not parameters can be identified from a given model structure and experimental measurements. We briefly present the identifiability problem in linear and nonlinear dynamical models. We compare DSGE and agent-based models (ABMs) in terms of identifiability of the structural parameters, and we finally discuss the limits and perspectives of numerical protocols for testing global identifiability in the case of ergodic and Markovian economic systems.
    Date: 2016–02
  14. By: Fafchamps, Marcel; Labonne, Julien
    Abstract: We discuss a method aimed at reducing the risk that spurious results are published. Researchers send their datasets to an independent third party who randomly generates training and testing samples. Researchers perform their analysis on the former; once the paper is accepted for publication, the method is applied to the latter, and it is those results that are published. Simulations indicate that, under empirically relevant settings, the proposed method significantly reduces type I error and delivers adequate power. The method, which can be combined with pre-analysis plans, reduces the risk that relevant hypotheses are left untested.
    Keywords: Bonferroni correction; data mining; pre-analysis plan; publication bias
    JEL: C12 C18
    Date: 2016–01
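    The discover-then-confirm logic of the split-sample procedure above can be sketched in a few lines. Everything here is hypothetical and for illustration only: the data are pure noise, and a naive correlation t-test at the 5% level stands in for whatever analysis the researcher actually runs on each half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the outcome y is pure noise, so any "effect" found is spurious.
n, k = 1000, 20
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# An independent third party randomly splits the sample into training and testing halves.
idx = rng.permutation(n)
train, test = idx[: n // 2], idx[n // 2:]

def significant(x, y):
    """Crude two-sided t-test of zero correlation at the 5% level."""
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((len(y) - 2) / (1 - r**2))
    return abs(t) > 1.96

# Exploratory stage: hypotheses "discovered" on the training sample only.
discovered = [j for j in range(k) if significant(X[train, j], y[train])]

# Confirmation stage: only results that replicate on the testing sample get published.
confirmed = [j for j in discovered if significant(X[test, j], y[test])]

print(len(discovered), len(confirmed))
```

Because the testing sample is untouched during the exploratory stage, a spurious training-sample discovery must independently clear the 5% bar a second time, which is what drives down the type I error rate.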
  15. By: Bodory, Hugo; Huber, Martin; Camponovo, Lorenzo; Lechner, Michael
    Abstract: This paper investigates the finite sample properties of a range of inference methods for propensity score-based matching and weighting estimators frequently applied to evaluate the average treatment effect on the treated. We analyse both asymptotic approximations and bootstrap methods for computing variances and confidence intervals in our simulation design, which is based on large scale labor market data from Germany and varies w.r.t. treatment selectivity, effect heterogeneity, the share of treated, and the sample size. The results suggest that in general, the bootstrap procedures dominate the asymptotic ones in terms of size and power for both matching and weighting estimators. Furthermore, the results are qualitatively quite robust across the various simulation features.
    Keywords: inference; variance estimation; treatment effects; matching; inverse probability weighting
    JEL: C21
    Date: 2016–01–26
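    A minimal sketch of the kind of comparison studied above: a nonparametric bootstrap confidence interval for an inverse-probability-weighting estimator of the average treatment effect on the treated. The data-generating process, the known propensity score, and the true effect of 1 are all assumptions of this toy example, not the paper's simulation design (which fits propensity scores and uses German labor market data).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: selection into treatment depends on x; the true treatment effect is 1.
n = 2000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))        # propensity score, taken as known here for simplicity
d = rng.uniform(size=n) < p     # treatment indicator
y = 1.0 * d + x + rng.normal(size=n)

def att_ipw(y, d, p):
    """IPW estimator of the ATT: reweight controls to the treated covariate distribution."""
    w = p / (1 - p)
    return y[d].mean() - np.average(y[~d], weights=w[~d])

theta = att_ipw(y, d, p)

# Nonparametric bootstrap: resample units with replacement and re-estimate the ATT.
B = 500
boot = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, size=n)
    boot[b] = att_ipw(y[i], d[i], p[i])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(theta, lo, hi)
```

The asymptotic alternative would replace the percentile interval with theta ± 1.96 times an estimated standard error; the paper's finding is that in finite samples the bootstrap version tends to have better size and power.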
  16. By: Forlani, Emanuele; Martin, Ralf; Mion, Giordano; Muûls, Mirabelle
    Abstract: We develop a new econometric framework that simultaneously allows recovering heterogeneity in demand, TFP and markups across firms while leaving the correlation among the three unrestricted. We do this by systematically exploiting assumptions that are implicit in previous firm-level productivity estimation approaches. We use Belgian firms' production data to quantify TFP, demand and mark-ups and show how they are correlated with one another, across time, and with measures obtained from other approaches. We also show to what extent our three dimensions of heterogeneity allow us to gain deeper and sharper insights into two key firm-level outcomes: export status and size.
    Keywords: Demand; Export status; Firm size; Markups; Production function estimation; Productivity
    JEL: D24 F14 L11 L25
    Date: 2016–01
  17. By: David Hendry
    Abstract: Abstract: Macroeconomic time-series data are aggregated, inaccurate, non-stationary, collinear and rarely match theoretical concepts. Macroeconomic theories are incomplete, incorrect and changeable: location shifts invalidate the law of iterated expectations and ‘rational expectations’ are then systematically biased. Empirical macro-econometric models are non-constant and mis-specified in numerous ways, so economic policy often has unexpected effects, and macroeconomic forecasts go awry. In place of using just one of the four main methods of deciding between alternative models, theory, empirical evidence, policy relevance and forecasting, we propose nesting ‘theory-driven’ and ‘datadriven’ approaches, where theory-models’ parameter estimates are unaffected by selection despite searching over rival candidate variables, longer lags, functional forms, and breaks.
    Keywords: Model Selection, Theory Retention, Location Shifts, Indicator Saturation, Autometrics.
    JEL: C51 C22
    Date: 2016–01–27
  18. By: Nalan Basturk (Maastricht University, the Netherlands); Stefano Grassi (University of Kent, United Kingdom); Lennart Hoogerheide (VU University Amsterdam, the Netherlands); Herman K. van Dijk (VU University Amsterdam, Erasmus University Rotterdam, the Netherlands)
    Abstract: This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm, introduced by Hoogerheide, Opschoor and Van Dijk (2012), provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover, the speed performance of the Graphics Processing Unit version is much higher than that of the Central Processing Unit version.
    Keywords: finite mixtures, Student-t distributions, Importance Sampling, MCMC, Metropolis-Hastings algorithm, Expectation Maximization, Bayesian inference
    JEL: C11 C13 C23 C32
    Date: 2016–01–22
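    The core idea MitISEM builds on, importance sampling with a fat-tailed Student-t candidate when only a kernel of the target density is known, can be illustrated with a deliberately simple one-dimensional sketch. This is not the MitISEM algorithm itself (no adaptive mixture, no EM updates); the Gaussian target kernel and the candidate's degrees of freedom, location, and scale are all assumptions chosen for the example.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Target density known only up to a constant: the kernel of a N(2, 1) "posterior".
def target_kernel(x):
    return np.exp(-0.5 * (x - 2.0) ** 2)

# Fat-tailed Student-t candidate (df=5), centred near the target mode.
df, loc, scale = 5, 2.0, 1.5
def t_pdf(x):
    z = (x - loc) / scale
    c = math.gamma((df + 1) / 2) / (math.gamma(df / 2) * math.sqrt(df * math.pi) * scale)
    return c * (1 + z**2 / df) ** (-(df + 1) / 2)

# Importance sampling: draw from the candidate, form self-normalized weights.
draws = loc + scale * rng.standard_t(df, size=100_000)
w = target_kernel(draws) / t_pdf(draws)
w /= w.sum()

posterior_mean = np.sum(w * draws)   # should be close to the target mean of 2
print(posterior_mean)
```

Because the Student-t tails dominate the Gaussian tails, the importance weights stay bounded; MitISEM improves on a single such candidate by fitting an adaptive mixture of Student-t densities, which matters for genuinely non-elliptical targets.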
  19. By: Gerard, Francois; Rokkanen, Miikka; Rothe, Christoph
    Abstract: A key assumption in regression discontinuity analysis is that units cannot manipulate the value of their running variable in a way that guarantees or avoids assignment to the treatment. Standard identification arguments break down if this condition is violated. This paper shows that treatment effects remain partially identified in this case. We derive sharp bounds on the treatment effects, show how to estimate them, and propose ways to construct valid confidence intervals. Our results apply to both sharp and fuzzy regression discontinuity designs. We illustrate our methods by studying the effect of unemployment insurance on unemployment duration in Brazil, where we find strong evidence of manipulation at eligibility cutoffs.
    Keywords: bounds; manipulation; regression discontinuity
    JEL: C2
    Date: 2016–01
  20. By: Canova, Fabio; Hamidi Sahneh, Mehdi
    Abstract: Non-fundamentalness arises when observables do not contain enough information to recover the vector of structural shocks. Using Granger causality tests, the literature suggested that many small scale VAR models are non-fundamental and thus not useful for business cycle analysis. We show that causality tests are problematic when VAR variables are cross sectionally aggregated or proxy for non-observables. We provide an alternative testing procedure, illustrate its properties with a Monte Carlo exercise, and reexamine the properties of two prototypical VAR models.
    Keywords: aggregation; Granger causality; non-fundamentalness; small scale VARs
    JEL: C32 C5 E5
    Date: 2016–01
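    The Granger causality tests whose use the paper criticizes have a standard textbook form: compare the residual sum of squares of a regression of y on its own lags against one that adds lags of x. A minimal sketch, under an assumed bivariate data-generating process chosen so that x does help predict y (this illustrates the generic F-test, not the paper's alternative procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated bivariate system in which x Granger-causes y.
T, lags = 500, 2
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def rss(Y, X):
    """Residual sum of squares from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    return e @ e

def granger_F(y, x, p):
    """F statistic for H0: lags of x do not help predict y given y's own lags."""
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - j: T - j] for j in range(1, p + 1)])
    xlags = np.column_stack([x[p - j: T - j] for j in range(1, p + 1)])
    const = np.ones(len(Y))
    X0 = np.column_stack([const, ylags])            # restricted model
    X1 = np.column_stack([const, ylags, xlags])     # unrestricted model
    r0, r1 = rss(Y, X0), rss(Y, X1)
    return ((r0 - r1) / p) / (r1 / (len(Y) - X1.shape[1]))

F = granger_F(y, x, lags)
print(F)
```

The paper's point is that when the VAR variables are cross-sectional aggregates or proxies, rejections (or non-rejections) of this F-test can be misleading about non-fundamentalness, motivating their alternative testing procedure.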
  21. By: Majid Al-Sadoon
    Abstract: This paper considers linear rational expectations models from the linear systems point of view. Using a generalization of the Wiener-Hopf factorization, the linear systems approach is able to furnish very simple conditions for existence and uniqueness of both particular and generic linear rational expectations models. As applications of this approach, the paper provides results for existence of sequential solutions to block triangular systems and provides an exhaustive description of stationary and unit root solutions, including a generalization of Granger's representation theorem. In addition, the paper provides an innovative numerical solution to the Wiener-Hopf factorization and its generalization.
    Keywords: rational expectations, linear systems, Wiener-Hopf factorization, vector autoregressive processes, block triangular system, stability, cointegration
    JEL: C32 C51 C62 C63 C65
    Date: 2016–01
  22. By: Giampiero Marra; Rosalba Radice; Till Bärnighausen; Simon N. Wood; Mark E. McGovern
    Abstract: Estimates of HIV prevalence are important for policy in order to establish the health status of a country's population and to evaluate the effectiveness of population-based interventions and campaigns. However, participation rates in testing for surveillance conducted as part of household surveys, on which many of these estimates are based, can be low. HIV positive individuals may be less likely to participate because they fear disclosure, in which case estimates obtained using conventional approaches to deal with missing data, such as imputation-based methods, will be biased. We develop a Heckman-type simultaneous equation approach which accounts for non-ignorable selection, but unlike previous implementations, allows for spatial dependence and does not impose a homogeneous selection process on all respondents. In addition, our framework addresses the issue of separation, where for instance some factors are severely unbalanced and highly predictive of the response, which would ordinarily prevent model convergence. Estimation is carried out within a penalized likelihood framework where smoothing is achieved using a parametrization of the smoothing criterion which makes estimation more stable and efficient. We provide the software for straightforward implementation of the proposed approach, and apply our methodology to estimating national and sub-national HIV prevalence in Swaziland, Zimbabwe and Zambia.
    Keywords: Heckman-Type Selection Model, HIV, Penalized Regression Splines, Selection Bias, Simultaneous Equation Models, Spatial Dependence
    JEL: C30 J10
    Date: 2016–02
  23. By: Stefan Hoderlein (Boston College); Hajo Holzmann (Marburg University); Maximilian Kasy; Alexander Meister (University of Rostock)
    Date: 2015–07–18
  24. By: Fabio Canova
    Abstract: Non-fundamentalness arises when observables do not contain enough information to recover the vector of structural shocks. Using Granger causality tests, the literature suggested that many small scale VAR models are non-fundamental and thus not useful for business cycle analysis. We show that causality tests are problematic when VAR variables are cross sectionally aggregated or proxy for non-observables. We provide an alternative testing procedure, illustrate its properties with a Monte Carlo exercise, and reexamine the properties of two prototypical VAR models.
    Keywords: Aggregation, Non-Fundamentalness, Granger causality, Small scale VARs
    Date: 2016–02

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.