nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒04‒16
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. Empirical Likelihood for Nonparametric Additive Models By Taisuke Otsu
  2. On the Bias of the Maximum Likelihood Estimator for the Two-Parameter Lomax Distribution By David E. Giles; Hui Feng; Ryan T. Godwin
  3. Generalized Measurement Invariance Tests with Application to Factor Analysis By Edgar C. Merkle; Achim Zeileis
  4. Test Of Hypotheses In Panel Data Models When The Regressor And Disturbances Are Possibly Nonstationary By Badi H. Baltagi; Chihwa Kao; Sanggon Na
  5. Testing for Breaks in Cointegrated Panels with Common and Idiosyncratic Stochastic Trends By Chihwa Kao; Lorenzo Trapani; Giovanni Urga
  6. The overall seasonal integration tests under non-stationary alternatives: A methodological note By Ghassen El Montasser
  7. Limit Laws in Transaction-Level Asset Price Models By Alexander Aue; Lajos Horváth; Clifford Hurvich; Philippe Soulier
  8. Efficient Estimation of Data Combination Models by the Method of Auxiliary-to-Study Tilting (AST) By Bryan S. Graham; Cristine Campos de Xavier Pinto; Daniel Egel
  9. Interacting multiple-try algorithms with different proposal distributions By Roberto Casarin; Radu Craiu; Fabrizio Leisen
  10. An Econometric Approach To Estimating Support Prices And Measures Of Productivity Change In Public Hospitals By C.J. O’Donnell; K. Nguyen
  11. Using the global dimension to identify shocks with sign restrictions By Alexander Chudik; Michael Fidora
  12. Unbiased estimate of dynamic term structure models By Michael D. Bauer; Glenn D. Rudebusch; Jing (Cynthia) Wu
  13. Pareto versus lognormal: a maximum entropy test By Marco Bee; Massimo Riccaboni; Stefano Schiavo
  14. Montecarlo simulation of long-term dependent processes: a primer By Carlos León; Alejandro Reveiz
  15. Why inferential statistics are inappropriate for development studies and how the same data can be better used By Ballinger, Clint
  16. TFP growth and its determinants: nonparametrics and model averaging By Michael Danquah; Enrique Moral-Benito; Bazoumana Ouattara

  1. By: Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: Nonparametric additive modeling is a fundamental tool for statistical data analysis which allows flexible functional forms for conditional mean or quantile functions but avoids the curse of dimensionality that fully nonparametric methods incur with high-dimensional covariates. This paper proposes empirical likelihood-based inference methods for unknown functions in three types of nonparametric additive models: (i) additive mean regression with the identity link function, (ii) generalized additive mean regression with a known non-identity link function, and (iii) additive quantile regression. The proposed empirical likelihood ratio statistics for the unknown functions are asymptotically pivotal and converge to chi-square distributions, and their associated confidence intervals possess several attractive features compared to the conventional Wald-type confidence intervals.
    Keywords: Nonparametric additive model, Empirical likelihood, Generalized linear model, Quantile regression
    JEL: C12 C14 C21 C25
    Date: 2011–04
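    The chi-square calibration Otsu describes is easiest to see in the simplest empirical likelihood setting, inference on a population mean (Owen's classic construction, not the paper's additive-model version). A minimal sketch, in which the Lagrange multiplier is found by Newton's method; all names and the sample below are illustrative:

    ```python
    import numpy as np

    def el_ratio(x, mu, tol=1e-10, max_iter=100):
        """Empirical likelihood ratio statistic for H0: E[X] = mu.
        Solves sum z_i / (1 + lam * z_i) = 0 for the multiplier lam
        by Newton's method, with z_i = x_i - mu."""
        z = x - mu
        lam = 0.0
        for _ in range(max_iter):
            denom = 1.0 + lam * z
            grad = np.sum(z / denom)            # first derivative in lam
            hess = -np.sum(z**2 / denom**2)     # second derivative (always < 0)
            step = grad / hess
            lam -= step
            if abs(step) < tol:
                break
        # asymptotically chi2(1) under H0, as in Wilks-type calibration
        return 2.0 * np.sum(np.log1p(lam * z))

    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.0, scale=2.0, size=200)
    ```

    The statistic is zero at the sample mean and grows as the hypothesised mean moves away, which is what makes the chi-square confidence intervals range-respecting and shape-adaptive.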
  2. By: David E. Giles (Department of Economics, University of Victoria); Hui Feng; Ryan T. Godwin
    Abstract: The Lomax (Pareto II) distribution has found wide application in a variety of fields. We analyze the second-order bias of the maximum likelihood estimators of its parameters for finite sample sizes, and show that this bias is positive. We derive an analytic bias correction which reduces the percentage bias of these estimators by one or two orders of magnitude, while simultaneously reducing relative mean squared error. Our simulations show that this analytic bias correction outperforms a correction based on the parametric bootstrap. Three examples with actual data illustrate the application of our methods.
    Keywords: Maximum likelihood estimator; bias reduction; Lomax distribution; Pareto II distribution; bootstrap
    JEL: C13 C15 C16 C53
    Date: 2011–04–07
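    The finite-sample bias the authors correct analytically can be seen by brute-force simulation. A minimal sketch using SciPy's `lomax` parameterisation (shape, scale, with location fixed at zero); the parameter values are illustrative and this demonstrates the positive bias rather than the paper's analytic correction:

    ```python
    import numpy as np
    from scipy import stats

    # True Lomax (Pareto II) parameters: shape alpha = 2, scale lam = 1.
    rng = np.random.default_rng(42)
    alpha, lam, n, reps = 2.0, 1.0, 50, 200

    shape_hats = []
    for _ in range(reps):
        sample = stats.lomax.rvs(alpha, scale=lam, size=n, random_state=rng)
        a_hat, _, s_hat = stats.lomax.fit(sample, floc=0)  # MLE, loc fixed at 0
        shape_hats.append(a_hat)

    # per the paper, the MLE of the shape is biased upward in finite samples
    bias = np.mean(shape_hats) - alpha
    ```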
  3. By: Edgar C. Merkle; Achim Zeileis
    Abstract: The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we construct tests of measurement invariance based on stochastic processes of casewise derivatives of the likelihood function. These tests can be viewed as generalizations of the Lagrange multiplier test, and they are especially useful for: (1) isolating specific parameters affected by measurement invariance violations, and (2) identifying subgroups of individuals that violate measurement invariance, based on a continuous auxiliary variable. The tests are presented and illustrated in detail, along with simulations examining the tests' performance under controlled conditions.
    Keywords: measurement invariance, parameter stability, factor analysis, structural equation models
    JEL: C30 C52
    Date: 2011–04
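    The casewise-score idea can be sketched in the simplest possible model, a normal mean: cumulate the score contributions evaluated at the full-sample estimate and take the largest excursion, which behaves like the supremum of a Brownian bridge under parameter stability (5% critical value roughly 1.358). A toy analogue, not the authors' factor-analytic implementation:

    ```python
    import numpy as np

    def score_cusum_stat(x):
        """Double-maximum statistic from the cumulative sum of casewise
        score contributions for a normal mean (scores are x_i - mu_hat)."""
        n = len(x)
        scores = x - x.mean()                   # casewise scores at the MLE
        info = scores.std(ddof=0) * np.sqrt(n)  # scaling: sqrt(n * Var)
        b = np.cumsum(scores) / info            # approx. Brownian bridge under H0
        return np.max(np.abs(b))

    rng = np.random.default_rng(5)
    stable = rng.normal(size=200)
    shifted = np.concatenate([rng.normal(0, 1, 100),
                              rng.normal(2, 1, 100)])  # mean break at midpoint
    ```

    The location of the maximum excursion also points at where in the ordering (e.g. along a continuous auxiliary variable) the violation occurs, which is the feature the paper exploits.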
  4. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Sanggon Na
    Abstract: This paper considers the problem of hypothesis testing in a simple panel data regression model with random individual effects and serially correlated disturbances. Following Baltagi, Kao and Liu (2008), we allow for the possibility of non-stationarity in the regressor and/or the disturbance term. While Baltagi et al. (2008) focus on the asymptotic properties and distributions of the standard panel data estimators, this paper focuses on tests of hypotheses in this setting. One important finding is that unlike the time series case, one does not necessarily need to rely on the “super-efficient” type AR estimator by Perron and Yabu (2009) to conduct inference in panel data. In fact, we show that the simple t-ratio always converges to the standard normal distribution regardless of whether the disturbances and/or the regressor are stationary.
    Keywords: Panel Data, OLS, Fixed-Effects, First-Difference, GLS, t-ratio.
    JEL: C12 C33
    Date: 2011–02
  5. By: Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Lorenzo Trapani; Giovanni Urga
    Abstract: In this paper, we develop tests for structural change in cointegrated panel regressions with common and idiosyncratic trends. We consider both the cases of observable and nonobservable common trends, deriving a Functional Central Limit Theorem for the partial sample estimators under the null of no break. We show that tests based on sup-Wald statistics are powerful against breaks, also proving that power is present when the time of change differs across units and when only some units have a break. Our framework is extended to the case of cross-correlated regressors and endogeneity. Monte Carlo evidence shows that the tests have the correct size and good power properties.
    Keywords: Structural change, Panel cointegration, Common stochastic trends, Functional Central Limit Theorem.
    JEL: C23
    Date: 2011–02
  6. By: Ghassen El Montasser
    Abstract: Few authors have studied, either asymptotically or in finite samples, the size and power of seasonal unit root tests when the data generating process [DGP] is a non-stationary alternative aside from the seasonal random walk. In this respect, Ghysels, Lee and Noh (1994) conducted a simulation study by considering the alternative of a non-seasonal random walk to analyze the size and power properties of some seasonal unit root tests. Analogously, Taylor (2005) completed this analysis by developing the limit theory of the statistics of Dickey, Hasza and Fuller [DHF] (1984) when the data are generated by a non-seasonal random walk. del Barrio Castro (2007) extended the set of non-stationary alternatives and established, for each one, the asymptotic theory of the statistics subsumed in the HEGY procedure. In this paper, I show that establishing the limit theory of F-type statistics for seasonal unit roots can be debatable under such alternatives. The problem lies in the nature of the regressors that these overall F-type tests specify.
    Keywords: Fisher test, seasonal integration, non-stationary alternatives, Brownian motion, Monte Carlo Simulation.
    JEL: C22
    Date: 2011–04–06
  7. By: Alexander Aue (Department of Statistics - University of California, Davis-Livermore); Lajos Horváth (Mathematics department - University of Utah); Clifford Hurvich (IOMS - Information, Operations and Management Science - New York University); Philippe Soulier (MODAL'X - Modélisation aléatoire de Paris X - Université Paris Ouest Nanterre La Défense)
    Abstract: We consider pure-jump transaction-level models for asset prices in continuous time, driven by point processes. In a bivariate model that admits cointegration, we allow for time deformations to account for such effects as intraday seasonal patterns in volatility, and non-trading periods that may be different for the two assets. We also allow for asymmetries (leverage effects). We obtain the asymptotic distribution of the log-price process. We also obtain the asymptotic distribution of the ordinary least-squares estimator of the cointegrating parameter based on data sampled from an equally-spaced discretization of calendar time, in the case of weak fractional cointegration. For this same case, we obtain the asymptotic distribution for a tapered estimator under more
    Keywords: Point processes; fractional cointegration;
    Date: 2011–04–04
  8. By: Bryan S. Graham; Cristine Campos de Xavier Pinto; Daniel Egel
    Abstract: We propose a locally efficient, doubly robust, estimator for a class of semiparametric data combination problems. A leading estimand in this class is the average treatment effect on the treated (ATT). Data combination problems are related to, but distinct from, the class of missing data problems analyzed by Robins, Rotnitzky and Zhao (1994) (of which the Average Treatment Effect (ATE) estimand is a special case). Our procedure may be used to efficiently estimate, among other objects, the ATT, the two-sample instrumental variables model (TSIV), counterfactual distributions, and poverty maps. In an empirical application we use our procedure to characterize residual Black-White wage inequality after flexibly controlling for 'pre-market' differences in measured cognitive achievement as in Neal and Johnson (1996). We find that residual Black-White inequality is negligible at lower and higher quantiles of the Black wage distribution, but substantial at middle quantiles.
    JEL: C01 C14 J31 J7
    Date: 2011–04
  9. By: Roberto Casarin; Radu Craiu; Fabrizio Leisen
    Abstract: We propose a new class of interacting Markov chain Monte Carlo (MCMC) algorithms designed for increasing the efficiency of a modified multiple-try Metropolis (MTM) algorithm. The extension with respect to the existing MCMC literature is twofold. The proposed sampler extends the basic MTM algorithm by allowing different proposal distributions in the multiple-try generation step. We exploit the structure of the MTM algorithm with different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods and builds connections with the rapidly expanding world of adaptive MCMC. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. We provide numerical studies which show that the new algorithm can perform better than the basic MTM algorithm and that the interaction mechanism allows the IMTM to efficiently explore the state space.
    Keywords: Interacting Monte Carlo, Markov chain Monte Carlo, Multiple-try Metropolis, Population Monte Carlo
    Date: 2011–03
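    A minimal sketch of the multiple-try step with heterogeneous proposals, the building block of IMTM without the interaction across chains. The target, the proposal scales, and the weight choice w_j(y) = pi(y), valid here because the random-walk proposals are symmetric, are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    log_pi = lambda x: -0.5 * x * x      # log target: standard normal (up to a constant)
    scales = np.array([0.5, 2.0, 10.0])  # one random-walk proposal scale per try

    def mtm_step(x, rng):
        """One multiple-try Metropolis step with a different symmetric
        random-walk proposal for each trial; weights w_j(y) = pi(y)."""
        k = len(scales)
        ys = x + scales * rng.standard_normal(k)   # trial points y_j ~ N(x, s_j^2)
        wy = np.exp(log_pi(ys))
        j = rng.choice(k, p=wy / wy.sum())         # select a trial prop. to its weight
        # reference points: x*_i ~ T_i(y_j, .) for i != j, and x*_j = x
        xs = ys[j] + scales * rng.standard_normal(k)
        xs[j] = x
        wx = np.exp(log_pi(xs))
        if rng.random() < min(1.0, wy.sum() / wx.sum()):
            return ys[j]                           # accept the selected trial
        return x

    x, draws = 0.0, []
    for _ in range(10000):
        x = mtm_step(x, rng)
        draws.append(x)
    draws = np.array(draws)
    ```

    Mixing the small, medium and very large scales in one generation step is what lets the chain both refine locally and jump between regions, which is the efficiency argument the abstract makes.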
  10. By: C.J. O’Donnell (CEPA - School of Economics, The University of Queensland); K. Nguyen
    Abstract: In industry sectors where market prices are unavailable it is common to represent multiple-input multiple-output production technologies using distance functions. Econometric estimation of such functions is complicated by the fact that more than one variable in the function may be endogenous. In such cases, maximum likelihood estimation can lead to biased and inconsistent estimates of the model parameters and associated measures of firm performance. We solve the problem by using linear programming to construct a quantity index. The distance function is then written in the form of a conventional stochastic frontier model where the explanatory variables are unambiguously exogenous. We use this approach to estimate productivity indexes and support (or shadow) prices for a sample of Australian public hospitals. We decompose the productivity index into several measures of environmental change and efficiency change. We find that the productivity effects of improvements in input-oriented technical efficiency have been largely offset by the effects of deteriorations in the production environment over time.
    Date: 2011–03
  11. By: Alexander Chudik (European Central Bank, Kaiserstraße 29, D-60311 Frankfurt am Main, Germany.); Michael Fidora (European Central Bank, Kaiserstraße 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: Identification of structural VARs using sign restrictions has become increasingly popular in the academic literature. This paper (i) argues that identification of shocks can benefit from introducing a global dimension, and (ii) shows that summarising information by the median of the available impulse responses, as commonly done in the literature, has some undesired features that can be avoided by using an alternatively proposed summary measure based on a “scaled median” estimate of the structural impulse response. The paper implements this approach in both a small-scale model as originally presented in Uhlig (2005) and a large-scale model, introducing the sign restrictions approach to the global VAR (GVAR) literature, which makes it possible to explore the global dimension by adding a large number of sign restrictions. We find that the patterns of impulse responses are qualitatively similar, though point estimates tend to be quantitatively much larger under the alternatively proposed approach. In addition, our GVAR application in the context of global oil supply shocks documents that oil supply shocks have a stronger impact on emerging economies’ real output than on that of mature economies, a negative impact on real growth in oil-exporting economies as well, and tend to cause an appreciation (depreciation) of oil-exporters’ (oil-importers’) real exchange rates but also lead to an appreciation of the US dollar. One possible explanation would be the recycling of oil-exporters’ increased revenues in US financial markets.
    Keywords: Identification of shocks, sign restrictions, VAR, global VAR, oil shocks.
    JEL: C32 E17 F37 F41 F47
    Date: 2011–04
  12. By: Michael D. Bauer; Glenn D. Rudebusch; Jing (Cynthia) Wu
    Abstract: Affine dynamic term structure models (DTSMs) are the standard finance representation of the yield curve. However, the literature on DTSMs has ignored the coefficient bias that plagues estimated autoregressive models of persistent time series. We introduce new simulation-based methods for reducing or even eliminating small-sample bias in empirical affine Gaussian DTSMs. With these methods, we show that conventional estimates of DTSM coefficients are severely biased, which results in misleading estimates of expected future short-term interest rates and long-maturity term premia. Our unbiased DTSM estimates imply risk-neutral rates and term premia that are more plausible from a macro-finance perspective.
    Keywords: Interest rates
    Date: 2011
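    The flavour of a simulation-based bias correction is easiest to see in a univariate AR(1), the simplest persistent process suffering the coefficient bias the abstract describes. This sketch (not the authors' DTSM estimator) re-simulates at the point estimate to measure the mean estimation bias and subtract it off:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_ar1(phi, n, rng):
        """Generate an AR(1) path x_t = phi * x_{t-1} + e_t, x_0 = 0."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        return x

    def ols_ar1(x):
        """OLS slope of x_t on x_{t-1}; biased toward zero for persistent phi."""
        return x[:-1] @ x[1:] / (x[:-1] @ x[:-1])

    phi_true, n = 0.95, 100
    x = simulate_ar1(phi_true, n, rng)
    phi_hat = ols_ar1(x)

    # simulation-based bias correction: re-simulate at phi_hat,
    # measure the average bias of the estimator there, and remove it
    sims = [ols_ar1(simulate_ar1(phi_hat, n, rng)) for _ in range(500)]
    phi_bc = phi_hat - (np.mean(sims) - phi_hat)
    ```

    Because the OLS bias is downward for persistent series, the corrected estimate implies higher persistence, which is exactly why bias correction moves estimated expected short rates and term premia in the paper.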
  13. By: Marco Bee; Massimo Riccaboni; Stefano Schiavo
    Abstract: It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of species abundance, income and wealth as well as file, city and firm sizes are examples with this structure. We present a new test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology makes it possible to identify the true data generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with alternative methods at different levels of aggregation of economic systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
    Keywords: Pareto distribution, power-law, lognormal distribution, maximum entropy, firm size, international trade
    JEL: C14 C51 C52
    Date: 2011
  14. By: Carlos León; Alejandro Reveiz
    Abstract: As a natural extension to León and Vivas (2010) and León and Reveiz (2010), this paper briefly describes the Cholesky method for simulating Geometric Brownian Motion processes with long-term dependence, also referred to as Fractional Geometric Brownian Motion (FBM). Results show that this method generates random numbers capable of replicating independent, persistent or antipersistent time-series depending on the value of the chosen Hurst exponent. Simulating FBM via the Cholesky method is (i) convenient since it grants the ability to replicate intense and enduring returns, which allows for reproducing well-documented financial returns’ slow convergence in distribution to a Gaussian law, and (ii) straightforward since it takes advantage of the Gaussian distribution’s ability to express a broad type of stochastic processes by changing how volatility behaves with respect to the time horizon. However, the Cholesky method is computationally demanding, which may be its main drawback. Potential applications of FBM simulation include market, credit and liquidity risk models, option valuation techniques, portfolio optimization models and payments systems dynamics. All can benefit from the availability of a stochastic process that provides the ability to explicitly model how volatility behaves with respect to the time horizon in order to simulate severe and sustained price and quantity changes. These applications are more pertinent than ever because of the consensus regarding the limitations of customary models for valuation, risk and asset allocation after the most recent episode of global financial crisis.
    Keywords: Montecarlo simulation, Fractional Brownian Motion, Hurst exponent, Long-term Dependence, Biased Random Walk
    JEL: C15 C53 C63 G17 G14
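    The Cholesky method amounts to factorising the autocovariance matrix of fractional Gaussian noise and colouring i.i.d. normals with the factor; cumulating the noise gives the fractional Brownian path. A minimal sketch using the standard fGn autocovariance (the length and Hurst value are illustrative):

    ```python
    import numpy as np

    def fgn_cholesky(n, hurst, rng):
        """Fractional Gaussian noise of length n via Cholesky factorisation
        of its autocovariance matrix. H = 0.5 gives i.i.d. noise; H > 0.5
        persistent increments; H < 0.5 antipersistent increments."""
        k = np.arange(n)
        gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                       + np.abs(k - 1) ** (2 * hurst)
                       - 2 * np.abs(k) ** (2 * hurst))
        cov = gamma[np.abs(k[:, None] - k[None, :])]  # Toeplitz covariance
        L = np.linalg.cholesky(cov)                   # O(n^3): the method's cost
        return L @ rng.standard_normal(n)             # correlated increments

    rng = np.random.default_rng(0)
    noise = fgn_cholesky(500, hurst=0.7, rng=rng)
    fbm = np.cumsum(noise)                            # fractional Brownian path
    ```

    The cubic cost of the factorisation is the computational burden the abstract flags as the method's main drawback.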
  15. By: Ballinger, Clint
    Abstract: The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of ‘superpopulations’ for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of superpopulations and can be a powerful tool when used correctly. A few examples are provided. The paper ends by considering some of the reasons we believe lie behind the adherence to methods that are known to be inapplicable to many of the types of questions asked in development studies yet are still widely practiced.
    Keywords: frequentist statistics; Bayesian statistics; causation; determinism; explanation; spatial autocorrelation; multiple regression; international development; econometrics; comparative method; datasets; descriptive statistics; tabular analysis; visual analysis; maps; regression modeling; quantitative; qualitative; macrosociology; superpopulations; apparent populations; indeterminism; statistical assumptions
    JEL: B0 C12 C33 C11 P16 A11 O1 C10 F5 C20 C3 C23 C21
    Date: 2011–01–06
  16. By: Michael Danquah (Swansea University); Enrique Moral-Benito (Bank of Spain); Bazoumana Ouattara (Swansea University)
    Abstract: Total Factor Productivity (TFP) accounts for a sizeable proportion of the income and growth differences across countries. Two challenges remain to researchers aiming to explain these differences: on the one hand, TFP growth is hard to measure; on the other hand, model uncertainty hampers consensus on its key determinants. This paper combines a non-parametric measure of TFP growth with model averaging techniques to address both issues. The empirical findings suggest that the most robust TFP growth determinants are unobserved heterogeneity, initial GDP, consumption share, and trade openness. We also investigate the main determinants of the TFP components: efficiency change (i.e. catching up) and technological progress (i.e. innovation).
    Keywords: Productivity, Bayesian Model Averaging, Nonparametric methods
    JEL: O47 C11 C14 C23
    Date: 2011–04
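    The model-averaging step can be sketched with BIC weights, a common large-sample approximation to Bayesian model averaging; the simulated data, the variable names and the weight scheme below are illustrative, not the authors' BMA prior:

    ```python
    import numpy as np
    from itertools import chain, combinations

    rng = np.random.default_rng(3)
    n = 200
    names = ["initial_gdp", "openness", "noise"]   # hypothetical determinants
    X = rng.standard_normal((n, 3))
    y = 0.5 * X[:, 0] - 0.4 * X[:, 1] + rng.standard_normal(n)  # "noise" irrelevant

    def bic_of(cols):
        """BIC of the OLS regression of y on an intercept plus X[:, cols]."""
        Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        return n * np.log(rss / n) + Z.shape[1] * np.log(n)

    # enumerate all 2^3 regressor subsets and weight each model by exp(-BIC/2)
    models = list(chain.from_iterable(combinations(range(3), k) for k in range(4)))
    bics = np.array([bic_of(m) for m in models])
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                                   # approximate posterior model probs
    # posterior inclusion probability: total weight of models containing each variable
    incl = {names[j]: sum(wi for wi, m in zip(w, models) if j in m)
            for j in range(3)}
    ```

    Variables that are "robust determinants" in the paper's sense are those whose inclusion probability stays high across the model space, while spurious regressors receive little weight.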

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.