nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒04‒02
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. A Simple Test for Identification in GMM under Conditional Moment Restrictions By Francesco Bravo; Juan Carlos Escanciano; Taisuke Otsu
  2. Instrumental variables estimation and inference in the presence of many exogenous regressors By Stanislav Anatolyev
  3. Asymptotic theory for nonparametric regression with spatial data By Peter Robinson
  4. Time-Varying Parameter VAR Model with Stochastic Volatility: An Overview of Methodology and Empirical Applications By Jouchi Nakajima
  5. Efficient estimation of parameters in marginals in semiparametric multivariate models By Valentyn Panchenko; Artem Prokhorov
  6. Testing functional inequalities By Sokbae 'Simon' Lee; Kyungchul Song; Yoon-Jae Whang
  7. Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models By Siem Jan Koopman; Andre Lucas; Marcel Scharth
  8. Nonparametric trending regression with cross-sectional dependence By Peter Robinson
  9. Bounds On Treatment Effects On Transitions By Ridder, Geert; Vikström, Johan
  10. Structural Breaks - An Instrumental Variable Approach By Conniffe, Denis; Kelly, Robert
  11. Sequential Estimation of Dynamic Programming Models with Unobserved Heterogeneity By Kasahara, Hiroyuki; Shimotsu, Katsumi
  12. Monetary Policy Transmission under Zero Interest Rates: An Extended Time-Varying Parameter Vector Autoregression Approach By Jouchi Nakajima
  13. Capturing Preferences Under Incomplete Scenarios Using Elicited Choice Probabilities. By Herriges, Joseph A.; Bhattacharjee, Subhra; Kling, Catherine L.
  14. Estimating Causal Installed-Base Effects: A Bias-Correction Approach By Narayanan, Sridhar; Nair, Harikesh S.
  15. The Fisher Information Matrix in Right Censored Data from the Dagum Distribution By Filippo Domma; Sabrina Giordano; Mariangela Zenga
  17. Are Forecast Updates Progressive? By Chia-Lin Chang; Philip Hans Franses; Michael McAleer

  1. By: Francesco Bravo (Dept. of Economics, University of York); Juan Carlos Escanciano (Dept. of Economics, Indiana University); Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper proposes a simple, fairly general, test for global identification of unconditional moment restrictions implied by point-identified conditional moment restrictions. The test is based on the Hausdorff distance between an estimator that is consistent even under global identification failure of the unconditional moment restrictions, and an estimator of the identified set of the unconditional moment restrictions. The proposed test has a chi-squared limiting distribution and is also able to detect weak identification alternatives. Monte Carlo experiments show that the proposed test has competitive finite sample properties even for moderate sample sizes.
    Keywords: Conditional moment restrictions, Generalized method of moments, Global identification, Hausman test, Asset pricing
    JEL: C12 C13 C32
    Date: 2011–03
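The Hausdorff distance on which the test statistic is built is a standard set distance; as a quick illustration of the distance itself (not the paper's estimator), it can be computed for finite point sets as:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A and B (n x d arrays)."""
    # all pairwise Euclidean distances via broadcasting
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # directed distances sup_a inf_b d(a, b) and sup_b inf_a d(a, b)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff(A, B))  # 1.0
```

In the paper the two "sets" are estimators of the identified set, so the distance collapses toward zero under correct global identification.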
  2. By: Stanislav Anatolyev (New Economic School)
    Abstract: We consider a standard instrumental variables model contaminated by the presence of a large number of exogenous regressors. In an asymptotic framework where this number is proportional to the sample size, we study the impact of their ratio on the validity of existing estimators and tests. When the instruments are few, the inference using the conventional 2SLS estimator and associated t and J statistics, as well as the Anderson-Rubin and Kleibergen tests, is still valid. When the instruments are many, the LIML estimator remains consistent, but the presence of many exogenous regressors changes its asymptotic variance. Moreover, the conventional bias correction of the 2SLS estimator is no longer appropriate, and the associated Hahn-Hausman test is not valid. We provide asymptotically correct versions of bias correction for the 2SLS estimator, derive its asymptotically correct variance estimator, extend the Hansen-Hausman-Newey LIML variance estimator to the case of many exogenous regressors, and propose asymptotically valid modifications of the Hahn-Hausman and J tests based on the LIML and bias corrected 2SLS estimators.
    Keywords: instrumental variables regression, many instruments, many exogenous regressors, 2SLS estimator, LIML estimator, bias correction, t test, J test, Anderson-Rubin test, Kleibergen test, Hahn-Hausman test
    JEL: C12 C21
    Date: 2011–03
  3. By: Peter Robinson (Institute for Fiscal Studies and London School of Economics)
    Abstract: Nonparametric regression with spatial, or spatio-temporal, data is considered. The conditional mean of a dependent variable, given explanatory ones, is a nonparametric function, while the conditional covariance reflects spatial correlation. Conditional heteroscedasticity is also allowed, as well as non-identically distributed observations. Instead of mixing conditions, a (possibly non-stationary) linear process is assumed for disturbances, allowing for long-range, as well as short-range, dependence, while decay in dependence in explanatory variables is described using a measure based on the departure of the joint density from the product of marginal densities. A basic triangular array setting is employed, with the aim of covering various patterns of spatial observation. Sufficient conditions are established for consistency and asymptotic normality of kernel regression estimates. When the cross-sectional dependence is sufficiently mild, the asymptotic variance in the central limit theorem is the same as when observations are independent; otherwise, the rate of convergence is slower. We discuss application of our conditions to spatial autoregressive models, and models defined on a regular lattice.
    Date: 2011–02
  4. By: Jouchi Nakajima (Institute for Monetary and Economic Studies, Bank of Japan; currently in the Personnel and Corporate Affairs Department, studying at Duke University)
    Abstract: This paper provides a comprehensive overview of the time-varying parameter structural vector autoregression (TVP-VAR) model with stochastic volatility, covering both estimation methodology and empirical applications. The TVP-VAR model, combined with stochastic volatility, enables us to capture possible changes in the underlying structure of the economy in a flexible and robust manner. In that respect, as shown in simulation exercises in the paper, the incorporation of stochastic volatility into the TVP estimation significantly improves estimation performance. The Markov chain Monte Carlo (MCMC) method is employed for the estimation of the TVP-VAR models with stochastic volatility. As an example of empirical application, the TVP-VAR model with stochastic volatility is estimated using Japanese data, which exhibit significant structural changes in the dynamic relationships between macroeconomic variables.
    Keywords: Bayesian inference, Markov chain Monte Carlo, Monetary policy, State space model, Structural vector autoregression, Stochastic volatility, Time-varying parameter
    JEL: C11 C15 E52
    Date: 2011–03
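As a rough illustration of the data-generating process the abstract describes (a toy sketch with hypothetical drift and volatility scales, not the paper's MCMC estimator), a bivariate TVP-VAR(1) with random-walk coefficients and random-walk log-volatilities can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 2                    # sample length, number of variables
beta = np.zeros((k, k))          # time-varying VAR(1) coefficient matrix
h = np.zeros(k)                  # log-volatilities
y = np.zeros((T, k))
for t in range(1, T):
    beta += 0.01 * rng.standard_normal((k, k))    # random-walk coefficient drift
    h += 0.05 * rng.standard_normal(k)            # random-walk log-volatility
    eps = np.exp(h / 2) * rng.standard_normal(k)  # heteroscedastic shocks
    y[t] = beta @ y[t - 1] + eps
print(y.shape)
```

Estimating beta and h from y alone is the hard part, which is where the MCMC machinery surveyed in the paper comes in.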
  5. By: Valentyn Panchenko (University of New South Wales); Artem Prokhorov (Concordia University and CIREQ)
    Abstract: Recent literature on semiparametric copula models has focused on the situation where the marginals are specified nonparametrically and the copula function is given a parametric form. For example, this setup is used in Chen, Fan and Tsyrennikov (2006) [Efficient Estimation of Semiparametric Multivariate Copula Models, JASA], who focus on efficient estimation of copula parameters. We consider the reverse situation, where the marginals are specified parametrically and the copula function is modelled nonparametrically. This setting is no less relevant in applications. We use the method of sieves for efficient estimation of parameters in the marginals, derive its asymptotic distribution and show that the estimator is semiparametrically efficient. Simulations suggest that the sieve MLE can be up to 40% more efficient than the QMLE, depending on the strength of dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of this setting.
    Date: 2011–01
  6. By: Sokbae 'Simon' Lee (Institute for Fiscal Studies and Seoul National University); Kyungchul Song; Yoon-Jae Whang (Institute for Fiscal Studies and Seoul National University)
    Abstract: This paper develops tests for inequality constraints of nonparametric regression functions. The test statistics involve a one-sided version of Lp-type functionals of kernel estimators. Drawing on the approach of Poissonization, this paper establishes that the tests are asymptotically distribution free, admitting asymptotic normal approximation. Furthermore, the tests have nontrivial local power against a certain class of local alternatives converging to the null at the rate of n^(-1/2). Some results from Monte Carlo simulations are presented.
    Date: 2011–02
  7. By: Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam); Marcel Scharth (VU University Amsterdam)
    Abstract: We introduce a new efficient importance sampler for nonlinear non-Gaussian state space models. By combining existing numerical and Monte Carlo integration methods, we obtain a general and efficient likelihood evaluation method for this class of models. Our approach is based on the idea that only a small part of the likelihood evaluation problem requires simulation, even in high dimensional settings. We refer to this method as Numerically Accelerated Importance Sampling. Computational gains of our efficient importance sampler are obtained by relying on Kalman filter and smoothing methods associated with an approximated linear Gaussian state space model. Our approach also leads to the removal of the bias-variance tradeoff in the efficient importance sampling estimator of the likelihood function. We illustrate our new methods by an elaborate simulation study which reveals high computational and numerical efficiency gains for a range of well-known models.
    Keywords: State space models; importance sampling; simulated maximum likelihood; stochastic volatility; stochastic copula; stochastic conditional duration
    JEL: C15 C22
    Date: 2011–03–22
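The basic idea of importance-sampling likelihood evaluation for a nonlinear non-Gaussian measurement can be sketched on a toy one-observation stochastic-volatility-style model (plain importance sampling, not the paper's numerically accelerated version):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y | x ~ N(0, exp(x)) with log-variance x ~ N(0, 1).
# The likelihood p(y) = integral of p(y|x) p(x) dx is estimated by
# importance sampling with a Gaussian proposal g(x) = N(mu, sig^2).
y, S, mu, sig = 0.5, 100_000, 0.0, 1.5
x = mu + sig * rng.standard_normal(S)
log_p_y_given_x = -0.5 * (np.log(2 * np.pi) + x + y ** 2 * np.exp(-x))
log_p_x = -0.5 * (np.log(2 * np.pi) + x ** 2)
log_g_x = -0.5 * (np.log(2 * np.pi * sig ** 2) + ((x - mu) / sig) ** 2)
w = np.exp(log_p_y_given_x + log_p_x - log_g_x)  # importance weights
print(w.mean())  # Monte Carlo estimate of the likelihood p(y)
```

The paper's contribution is, roughly, to handle most of such integrals by deterministic numerical methods built on Kalman filtering for an approximating linear Gaussian model, leaving only a small residual part to simulation.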
  8. By: Peter Robinson (Institute for Fiscal Studies and London School of Economics)
    Abstract: Panel data, whose series length T is large but whose cross-section size N need not be, are assumed to have a common time trend. The time trend is of unknown form, the model includes additive, unknown, individual-specific components, and we allow for spatial or other cross-sectional dependence and/or heteroscedasticity. A simple smoothed nonparametric trend estimate is shown to be dominated by an estimate which exploits the availability of cross-sectional data. Asymptotically optimal choices of bandwidth are justified for both estimates. Feasible optimal bandwidths, and feasible optimal trend estimates, are asymptotically justified, the finite sample performance of the latter being examined in a Monte Carlo study. A number of potential extensions are discussed.
    Date: 2011–02
  9. By: Ridder, Geert (University of Southern California,); Vikström, Johan (Uppsala Center for Labor Studies)
    Abstract: This paper considers the definition and identification of treatment effects on conditional transition probabilities. We show that even under sequential random assignment only the instantaneous average treatment effect is point identified. Because treated and control units drop out at different rates, randomization only ensures the comparability of treated and control units at the time of randomization, so that long-run average treatment effects are not point identified. Instead we derive informative bounds on these average treatment effects. Our bounds do not impose (semi)parametric restrictions, such as proportional hazards, that would narrow the bounds or even allow for point identification. We also explore various assumptions, such as monotone treatment response, common shocks and positively correlated outcomes.
    Keywords: Partial identification; duration model; randomized experiment; treatment effect
    JEL: C14 C41
    Date: 2011–03–18
  10. By: Conniffe, Denis (University College Dublin); Kelly, Robert (Central Bank of Ireland)
    Abstract: A structural change test and a corresponding change estimator of an instrumental variable nature are proposed. The strengths of the approach lie in its ease of application and its strong power. It does not suffer from the critical value adjustments required by CUSUM-type tests and is unique in that it tests for and measures the size of the break in one operation. The power of this test is compared to others in the literature, both algebraically and through simulations, with favourable results.
    Keywords: Structural Break Test, Structural Break Estimation, Instrumental Variable
    JEL: C01 C12 C15
    Date: 2011–03
  11. By: Kasahara, Hiroyuki; Shimotsu, Katsumi
    Abstract: This paper develops a new computationally attractive procedure for estimating dynamic discrete choice models that is applicable to a wide range of dynamic programming models. The proposed procedure can accommodate unobserved state variables that (i) are neither additively separable nor follow a generalized extreme value distribution, (ii) are serially correlated, and (iii) affect the choice set. Our estimation algorithm sequentially updates the parameter estimate and the value function estimate. It builds upon the idea of the iterative estimation algorithm proposed by Aguirregabiria and Mira (2002, 2007) but conducts iteration using the value function mapping rather than the policy iteration mapping. Its implementation is straightforward in terms of computer programming; unlike the Hotz-Miller type estimators, there is no need to reformulate a fixed point mapping in the value function space as one in the space of probability distributions. It can also be applied to estimate models with unobserved heterogeneity. We analyze the convergence properties of our sequential algorithm and derive the conditions for its convergence. We develop an approximate procedure which reduces computational cost substantially without deteriorating the convergence rate. We further extend our sequential procedure to estimating dynamic programming models with an equilibrium constraint, which include dynamic game models and dynamic macroeconomic models.
    Keywords: dynamic discrete choice, value function mapping, nested pseudo-likelihood, unobserved heterogeneity, equilibrium constraint
    JEL: C13 C14 C63
    Date: 2011–03
  12. By: Jouchi Nakajima (Institute for Monetary and Economic Studies, Bank of Japan; currently in the Personnel and Corporate Affairs Department, studying at Duke University)
    Abstract: This paper attempts to explore monetary policy transmission under zero interest rates by explicitly incorporating the zero lower bound (ZLB) of nominal interest rates into the time-varying parameter structural vector autoregression model with stochastic volatility (TVP-VAR-ZLB). Nominal interest rates are modeled as a censored variable with Tobit-type non-linearity and incorporated into the TVP-VAR framework. For estimation, an efficient Markov chain Monte Carlo (MCMC) method is constructed in the context of Bayesian inference. The model is applied to Japanese macroeconomic data covering the periods of the zero interest rate policy and the quantitative easing policy. The empirical results show that the dynamic relationship between monetary policy and macroeconomic variables is well detected through changes in medium-term interest rates, rather than policy interest rates under the ZLB, although other macroeconomic dynamics are reasonably traced without considering the ZLB in an explicit manner.
    Keywords: Monetary policy, Zero lower bound of nominal interest rates, Markov chain Monte Carlo, Time-varying parameter vector autoregression with stochastic volatility
    JEL: C11 C15 E44 E52 E58
    Date: 2011–03
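The Tobit-type censoring of the policy rate at the zero lower bound amounts to observing max(r*, 0) for a latent shadow rate r*; a minimal sketch of that mechanism (with an arbitrary AR(1) shadow-rate process, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent "shadow" policy rate following an AR(1); the observed rate is the
# shadow rate censored from below at zero (Tobit-type non-linearity).
T = 300
r_star = np.zeros(T)
for t in range(1, T):
    r_star[t] = 0.95 * r_star[t - 1] + 0.2 * rng.standard_normal()
r_obs = np.maximum(r_star, 0.0)  # censoring at the zero lower bound
print(round((r_obs == 0.0).mean(), 2))  # fraction of periods at the ZLB
```

In the paper this censored-variable structure is embedded inside the TVP-VAR, so the MCMC sampler must also draw the latent shadow rate in the ZLB periods.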
  13. By: Herriges, Joseph A.; Bhattacharjee, Subhra; Kling, Catherine L.
    Abstract: Manski (1999) proposed an approach for dealing with a particular form of respondent uncertainty in discrete choice settings, particularly relevant in survey-based research when the uncertainty stems from the incomplete description of the choice scenarios. Specifically, he suggests eliciting choice probabilities from respondents rather than their single choice of an alternative. A recent paper in IER by Blass et al. (2010) further develops the approach and presents the first empirical application. This paper extends the literature in a number of directions, examining the linkage between elicited choice probabilities and the more common discrete choice elicitation format. We also provide the first convergent validity test of the elicited choice probability format vis-à-vis the standard discrete choice format in a split sample experiment. Finally, we discuss the differences between welfare measures that can be derived from elicited choice probabilities and those that can be obtained from discrete choice responses.
    Keywords: discrete choice; Elicited Choice Probabilities
    JEL: C25 Q51
    Date: 2011–03–24
  14. By: Narayanan, Sridhar (Stanford University); Nair, Harikesh S. (Stanford University)
    Abstract: New empirical models of consumer demand that incorporate social preferences, observational learning, word-of-mouth or network effects have the feature that the adoption of others in the reference group - the "installed base" - has a causal effect on current adoption behavior. Estimation of such causal installed-base effects is challenging due to the potential for spurious correlation between the adoption of agents, arising from endogenous assortative matching into social groups (homophily) and from the existence of correlated unobservables across agents. In the absence of experimental variation, the preferred solution is to control for these using a rich specification of fixed effects, which is feasible with panel data. We show that fixed-effects estimators of this sort are inconsistent in the presence of installed-base effects; in our simulations, random-effects specifications perform even worse. Our analysis reveals the tension faced by the applied empiricist in this area: a rich control for unobservables increases the credibility of the reported causal effects, but the incorporation of these controls introduces biases of a new kind in this class of models. We present two solutions: an instrumental variable approach, and a new bias-correction approach, both of which deliver consistent estimates of causal installed-base effects. The bias-correction approach is tractable in this context because we are able to exploit the structure of the problem to solve analytically for the asymptotic bias of the installed-base estimator, and to incorporate it into the estimation routine. Our approach has implications for the measurement of social effects using non-experimental data, and more generally for measuring marketing-mix effects in the presence of state dependence in demand. Our empirical application to the adoption of the Toyota Prius hybrid in California reveals evidence of social influence in diffusion, and demonstrates the importance of incorporating proper controls for the biases we identify.
    Date: 2011–03
  15. By: Filippo Domma; Sabrina Giordano; Mariangela Zenga (Dipartimento di Economia e Statistica, Università della Calabria)
    Abstract: In this note, we provide the mathematical details of the calculation of the Fisher information matrix when the data involve type I right censored observations from a Dagum distribution.
    Keywords: Fisher information matrix, type I right censored observations, Dagum distribution
    Date: 2011–03
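For intuition, the Fisher information under type I right censoring can be approximated numerically as the Hessian of the average negative log-likelihood at the true parameters, using the Dagum CDF F(x) = (1 + λx^(-δ))^(-β). This is a Monte Carlo sketch only; the note itself derives the matrix analytically:

```python
import numpy as np

# Dagum CDF: F(x) = (1 + lam * x**(-delta)) ** (-beta), for x > 0.
def dagum_logpdf(x, beta, lam, delta):
    return (np.log(beta * lam * delta) - (delta + 1) * np.log(x)
            - (beta + 1) * np.log1p(lam * x ** (-delta)))

def dagum_logsf(c, beta, lam, delta):  # log survival P(X > c)
    return np.log1p(-(1 + lam * c ** (-delta)) ** (-beta))

def avg_neg_loglik(theta, x, c):
    beta, lam, delta = theta
    ll = np.where(x < c,
                  dagum_logpdf(x, beta, lam, delta),  # uncensored density
                  dagum_logsf(c, beta, lam, delta))   # censored at c
    return -ll.mean()

rng = np.random.default_rng(3)
beta, lam, delta, c = 2.0, 1.0, 3.0, 2.0
u = rng.uniform(size=200_000)
x = (lam / (u ** (-1 / beta) - 1)) ** (1 / delta)  # inverse-CDF draws

# Fisher information per observation approximated by the numeric Hessian of
# the average negative log-likelihood at the true parameter values.
theta0 = np.array([beta, lam, delta])
h = 1e-4
I = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
        I[i, j] = (avg_neg_loglik(theta0 + ei + ej, x, c)
                   - avg_neg_loglik(theta0 + ei - ej, x, c)
                   - avg_neg_loglik(theta0 - ei + ej, x, c)
                   + avg_neg_loglik(theta0 - ei - ej, x, c)) / (4 * h * h)
print(np.round(I, 2))  # approximate 3x3 Fisher information matrix
```

The parameter values and censoring point here are arbitrary; the same construction gives a quick numerical check of any closed-form information matrix.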
  16. By: Concepción Román; Juan Carlos Martín; Raquel Espino; Ana Isabel Arencibia (University of Las Palmas de Gran Canaria)
    Abstract: This paper evaluates the gains in efficiency produced by the use of efficient designs to analyze stated choice (SC) data. Based on a standard experiment used in previous research, we compare the efficiency of this design with that of the efficient design obtained by minimizing the D-error, considering different modelling strategies. The experiment was conducted in the context of the choice between the plane and the new high-speed train on the Madrid-Barcelona route. As the levels assigned to some attributes in the stated choice exercise were customized to each respondent's experience, pivoting the information provided by preliminary revealed preference questions around the reference alternative (the plane, in this case), a different efficient design was created for every respondent in the sample. The results of the analysis demonstrate that substantial gains in the significance level of the parameter estimates could have been attained if the efficient design had been used to analyze the SC data.
    Keywords: Stated Choice Data, Efficient Designs, Discrete Choice Models
    Date: 2011
  17. By: Chia-Lin Chang (NCHU Department of Applied Economics (Taiwan)); Philip Hans Franses (Econometrisch Instituut (Econometric Institute), Faculteit der Economische Wetenschappen (Erasmus School of Economics), Erasmus Universiteit); Michael McAleer (Econometrisch Instituut (Econometric Institute), Faculteit der Economische Wetenschappen (Erasmus School of Economics) Erasmus Universiteit, Tinbergen Instituut (Tinbergen Institute).)
    Abstract: Many macro-economic forecasts and forecast updates, such as those from the IMF and OECD, typically involve both a model component, which is replicable, and intuition (namely, expert knowledge possessed by a forecaster), which is non-replicable. Learning from previous mistakes can affect both the replicable component of a model and intuition. If learning, and hence forecast updates, are progressive, forecast updates should generally become more accurate as the actual value is approached. Otherwise, learning and forecast updates would be neutral. The paper proposes a methodology to test whether macro-economic forecast updates are progressive, where the interaction between model and intuition is explicitly taken into account. The data set for the empirical analysis is for Taiwan, where we have three decades of quarterly data on forecasts and their updates of two economic fundamentals, namely the inflation rate and the real GDP growth rate. The empirical results suggest that the forecast updates for Taiwan are progressive, and that progress can be explained predominantly by improved intuition.
    Keywords: Macro-economic forecasts, econometric models, intuition, learning, progressive forecast updates, forecast errors.
    JEL: C53 C22 E27 E37
    Date: 2011
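The notion of progressive updates (forecast accuracy improving as the target date approaches) can be illustrated with simulated forecasts whose error variance shrinks with the horizon by construction; a toy illustration, not the paper's test:

```python
import numpy as np

rng = np.random.default_rng(4)

# For each of n targets, forecasts are made h = 4, 3, 2, 1 quarters ahead,
# with (by construction) error variance 0.09 * h, shrinking as h falls.
n, horizons = 500, (4, 3, 2, 1)
actual = rng.standard_normal(n)
forecasts = {h: actual + 0.3 * np.sqrt(h) * rng.standard_normal(n)
             for h in horizons}
mse = {h: np.mean((forecasts[h] - actual) ** 2) for h in horizons}
print({h: round(m, 3) for h, m in mse.items()})  # MSE falls with the horizon
```

The paper's methodology goes further by decomposing each update into a replicable model component and non-replicable intuition before testing for progressivity.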

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.