nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒05‒14
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Some Practical Guidance for the Implementation of Propensity Score Matching By Marco Caliendo; Sabine Kopeinig
  2. Orthogonality Conditions for Non-Dyadic Wavelet Analysis By Stephen Pollock; Iolanda Lo Cascio
  3. Econometric Methods of Signal Extraction By Stephen Pollock
  4. Structural Spurious Regressions and A Hausman-type Cointegration Test By Chi-Young Choi; Ling Hu; Masao Ogaki
  5. Robust Mean-Variance Portfolio Selection By Cédric Perret-Gentil; Maria-Pia Victoria-Feser
  6. Conditional Nonparametric Frontier Models for Convex and Non Convex Technologies: a Unifying Approach By Cinzia Daraio; Leopold Simar
  7. The Large Sample Behaviour of the Generalized Method of Moments Estimator in Misspecified Models By Alastair R. Hall; Atsushi Inoue
  9. Convergence in Dirichlet Law of Certain Stochastic Integrals By Christophe Chorro
  10. Frequency Domain Principal Components Estimation of Fractionally Cointegrated Processes By Claudio Morana
  11. A Structural Common Factor Approach to Core Inflation Estimation and Forecasting By Claudio Morana

  1. By: Marco Caliendo (DIW Berlin and IZA Bonn); Sabine Kopeinig (University of Cologne)
    Abstract: Propensity Score Matching (PSM) has become a popular approach to estimating causal treatment effects. It is widely applied in the evaluation of labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he is confronted with many questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that, one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed, and treatment effects and their standard errors have to be estimated. Furthermore, questions like "what to do if there is choice-based sampling?" or "when to measure effects?" can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves many decisions, and several different approaches are possible. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.
    Keywords: propensity score matching, implementation, evaluation, sensitivity
    JEL: C40 H43
    Date: 2005–05
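The implementation pipeline this abstract walks through — estimate a propensity score, match on it, then compute treatment effects — can be made concrete in a few lines. The following is a minimal illustrative sketch, not the authors' code: it assumes a single covariate, a logit propensity-score model fitted by Newton-Raphson, and one-to-one nearest-neighbour matching with replacement, on simulated data whose true treatment effect is 1.

```python
import numpy as np

def fit_logit(X, d, iters=50):
    # Newton-Raphson for a logit propensity-score model P(D=1|X) = 1/(1+exp(-Xb))
    X1 = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        H = (X1.T * (p * (1 - p))) @ X1        # weighted Hessian
        b += np.linalg.solve(H, X1.T @ (d - p))
    return 1.0 / (1.0 + np.exp(-X1 @ b))       # fitted propensity scores

def att_nn_match(y, d, pscore):
    # average treatment effect on the treated via one-to-one
    # nearest-neighbour matching on the propensity score (with replacement)
    treated = np.flatnonzero(d == 1)
    controls = np.flatnonzero(d == 0)
    dist = np.abs(pscore[treated, None] - pscore[None, controls])
    matches = controls[np.argmin(dist, axis=1)]
    return np.mean(y[treated] - y[matches])

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
d = (0.5 * x + rng.normal(size=n) > 0).astype(float)   # selection on x
y = 1.0 * d + x + rng.normal(size=n)                    # true effect = 1
ps = fit_logit(x.reshape(-1, 1), d)
att = att_nn_match(y, d, ps)
```

With selection on the covariate, the naive treated-minus-control mean difference is biased upward, while matching on the estimated score recovers the effect approximately.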
  2. By: Stephen Pollock (Queen Mary, University of London); Iolanda Lo Cascio (Queen Mary, University of London)
    Abstract: The conventional dyadic multiresolution analysis constructs a succession of frequency intervals of the form (π/2^j, π/2^(j-1)); j = 1, 2, . . . , n, of which the bandwidths are halved repeatedly in the descent from high frequencies to low frequencies. Whereas this scheme provides an excellent framework for encoding and transmitting signals with a high degree of data compression, it is less appropriate for the purposes of statistical data analysis. A non-dyadic mixed-radix wavelet analysis is described that allows the wave bands to be defined more flexibly than in the case of a conventional dyadic analysis. The wavelets that form the basis vectors for the wave bands are derived from the Fourier transforms of a variety of functions that specify the frequency responses of the filters corresponding to the sequences of wavelet coefficients.
    Keywords: Wavelets, Non-dyadic analysis, Fourier analysis.
    JEL: C22
    Date: 2005–05
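The dyadic scheme that the abstract contrasts against is easy to make concrete: the bands (π/2^j, π/2^(j-1)) halve in width at each level, which is exactly the rigidity that the proposed mixed-radix analysis relaxes. A small sketch of the band arithmetic (illustrative only, not the paper's wavelet construction):

```python
import math

def dyadic_bands(n):
    # frequency intervals (pi/2**j, pi/2**(j-1)), j = 1..n:
    # each descent halves the bandwidth
    return [(math.pi / 2**j, math.pi / 2**(j - 1)) for j in range(1, n + 1)]

bands = dyadic_bands(4)
widths = [hi - lo for lo, hi in bands]   # pi/2, pi/4, pi/8, pi/16
```

A mixed-radix analysis would instead let each level split by a chosen radix, so that band edges can be placed where the data's spectral structure demands them.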
  3. By: Stephen Pollock (Queen Mary, University of London)
    Abstract: The Wiener–Kolmogorov signal extraction filters, which are widely used in econometric analysis, are constructed on the basis of statistical models of the processes generating the data. In this paper, such models are used mainly as heuristic devices that are to be specified in whichever ways are appropriate to ensure that the filters have the desired characteristics. The digital Butterworth filters, which are described and illustrated in the paper, are specified in this way. The components of an econometric time series often give rise to spectral structures that fall within well-defined frequency bands that are isolated from each other by spectral dead spaces. We find that the finite-sample Wiener–Kolmogorov formulation lends itself readily to a specialisation that is appropriate for dealing with band-limited components.
    Keywords: Signal extraction, Linear filtering, Frequency-domain analysis, Trend estimation.
    JEL: C22
    Date: 2005–05
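A digital Butterworth filter of the kind the paper discusses can be illustrated by applying its squared gain, G(ω) = 1/(1 + (tan(ω/2)/tan(ω_c/2))^(2n)), directly in the frequency domain. This is a simplified zero-phase sketch on a periodic test signal, not the paper's finite-sample Wiener-Kolmogorov implementation; the cutoff and order below are arbitrary choices.

```python
import numpy as np

def butterworth_lowpass(x, cutoff, order=6):
    # zero-phase frequency-domain filter with the squared gain of a
    # digital Butterworth filter; cutoff is in radians, 0 < cutoff < pi
    n = len(x)
    w = 2 * np.pi * np.fft.rfftfreq(n)                 # frequencies in [0, pi]
    gain = 1.0 / (1.0 + (np.tan(w / 2) / np.tan(cutoff / 2)) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(x) * gain, n)

t = np.arange(512)
low = np.sin(2 * np.pi * t / 256)    # low-frequency component, w ~ 0.025
high = np.sin(2 * np.pi * t / 8)     # high-frequency component, w = pi/4
smooth = butterworth_lowpass(low + high, cutoff=np.pi / 16)
```

Because the two components fall in well-separated frequency bands with a spectral dead space between them, the filter recovers the low-frequency component almost exactly, which mirrors the band-limited situation the abstract describes.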
  4. By: Chi-Young Choi (University of New Hampshire); Ling Hu (Ohio State University); Masao Ogaki (Ohio State University)
    Abstract: This paper proposes two estimators based on asymptotic theory to estimate structural parameters with spurious regressions involving unit-root nonstationary variables. This approach motivates a Hausman-type test for the null hypothesis of cointegration for dynamic Ordinary Least Squares estimation using one of our estimators for spurious regressions. We apply our estimation and testing methods to four applications: (i) long-run money demand in the U.S.; (ii) long-run implications of the consumption-leisure choice; (iii) output convergence among industrial and developing countries; (iv) Purchasing Power Parity for traded and non-traded goods.
    Keywords: Spurious regression, GLS correction method, Dynamic regression, Test for cointegration.
    JEL: C10 C15
    Date: 2005–05
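The spurious-regression problem that motivates the paper is easy to reproduce: regressing one random walk on an independent one yields conventional t-statistics that reject a true zero coefficient far more often than the nominal 5%. A small Monte Carlo sketch (illustrative only, unrelated to the authors' estimators or test):

```python
import numpy as np

def spurious_t(rng, n=500):
    # OLS t-statistic from regressing one independent random walk on another
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(0)
tstats = np.array([spurious_t(rng) for _ in range(200)])
reject_rate = np.mean(np.abs(tstats) > 1.96)   # far above the nominal 5%
```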
  5. By: Cédric Perret-Gentil (Union Bancaire Privée); Maria-Pia Victoria-Feser (HEC,University of Geneva)
    Abstract: This paper investigates model risk issues in the context of mean-variance portfolio selection. We analytically and numerically show that, under model misspecification, the use of statistically robust estimates instead of the widely used classical sample mean and covariance is highly beneficial for the stability properties of the mean-variance optimal portfolios. Moreover, we perform simulations leading to the conclusion that, under classical estimation, model risk bias dominates estimation risk bias. Finally, we suggest a diagnostic tool to warn the analyst of the presence of extreme returns that have an abnormally large influence on the optimization results.
    Keywords: Mean-variance efficient frontier; Outliers; Model risk; Robust estimation
    JEL: C13 C51 G11
    Date: 2005–04
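The paper's central point — that robust estimates stabilise mean-variance weights when returns contain extreme observations — can be illustrated with a crude robust covariance obtained by trimming on Mahalanobis distance. This is a hypothetical stand-in for the sketch only; the paper's actual robust estimators differ.

```python
import numpy as np

def min_variance_weights(cov):
    # global minimum-variance portfolio: w proportional to inv(cov) @ 1
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def trimmed_cov(R, trim=0.1):
    # crude robust alternative: drop the observations with the largest
    # Mahalanobis distances before recomputing the covariance
    mu = np.median(R, axis=0)
    d = np.einsum('ij,jk,ik->i', R - mu, np.linalg.inv(np.cov(R.T)), R - mu)
    keep = d <= np.quantile(d, 1 - trim)
    return np.cov(R[keep].T)

rng = np.random.default_rng(0)
R = rng.normal(size=(500, 3))    # clean i.i.d. returns, unit variances
R[:25, 0] += 20.0                # 5% extreme outliers in asset 0
w_classical = min_variance_weights(np.cov(R.T))
w_robust = min_variance_weights(trimmed_cov(R))
```

A handful of extreme returns inflates the sample variance of asset 0 and pushes the classical weights sharply away from it, while the trimmed estimate leaves the weights close to the clean-data solution.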
  6. By: Cinzia Daraio; Leopold Simar
    Abstract: The explanation of productivity differentials is very important for identifying the economic conditions that create inefficiency and for improving managerial performance. In the literature two main approaches have been developed: one-stage approaches and two-stage approaches. Daraio and Simar (2003) propose a fully nonparametric methodology based on conditional FDH and conditional order-m frontiers without any convexity assumption on the technology. On the one hand, convexity has always been assumed in mainstream production theory and general equilibrium. On the other hand, in many empirical applications, the convexity assumption can be reasonable and sometimes natural. Guided by these considerations, in this paper we propose a unifying approach for introducing external-environmental variables into nonparametric frontier models for convex and non convex technologies. Developing further the work done in Daraio and Simar (2003), we introduce a conditional DEA estimator, i.e., an estimator of a production frontier of DEA type conditioned on some external-environmental variables which are neither inputs nor outputs under the control of the producer. A robust version of this conditional estimator is also proposed. These various measures of efficiency also provide indicators of convexity. Illustrations through simulated and real data (mutual funds) examples are reported.
    Keywords: Convexity, External-Environmental Factors, Production Frontier, Nonparametric Estimation, Robust Estimation.
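The FDH frontier underlying the conditional estimators requires no convexity: a unit's output-oriented efficiency is computed only against units that use no more of every input. A minimal unconditional sketch (the paper's conditional and robust order-m versions add considerably more machinery):

```python
import numpy as np

def fdh_output_efficiency(X, Y):
    # output-oriented FDH score for each unit: the largest proportional
    # output expansion achievable by some unit that uses no more of
    # every input (free disposability, no convexification of the hull)
    n = len(X)
    scores = np.empty(n)
    for i in range(n):
        dominating = np.all(X <= X[i], axis=1)   # units using no more input
        scores[i] = np.max(Y[dominating] / Y[i])
    return scores

# three single-input, single-output units (hypothetical data)
X = np.array([[1.0], [2.0], [3.0]])
Y = np.array([1.0, 3.0, 2.0])
eff = fdh_output_efficiency(X, Y)   # unit 3 could expand output by 50%
```

A score of 1 marks a unit on the FDH frontier; a score above 1 gives the factor by which its output falls short of the frontier.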
  7. By: Alastair R. Hall (North Carolina State University); Atsushi Inoue (North Carolina State University)
    Abstract: This paper presents the limiting distribution theory for the GMM estimator when the estimation is based on a population moment condition which is subject to non-local (or fixed) misspecification. It is shown that if the parameter vector is overidentified then the weighting matrix plays a far more fundamental role than it does in the corresponding analysis for correctly specified models. Specifically, the rate of convergence of the estimator depends on the rate of convergence of the weighting matrix to its probability limit. The analysis is presented for four particular choices of weighting matrix which are commonly used in practice. In each case the limiting distribution theory is different, and also different from the limiting distribution in a correctly specified model. Statistics are proposed which allow the researcher to test hypotheses about the parameters in misspecified models.
    Keywords: Misspecification, Generalized Method of Moments, Asymptotic Distribution Theory
    JEL: C10 C32
    Date: 2005–05–10
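The role the weighting matrix plays under fixed misspecification can be seen in a toy overidentified problem: with moments E[x] - θ and E[y] - θ, where in truth E[x] = 0 and E[y] = 1, no θ sets both moments to zero, and the GMM probability limit is a W-weighted average of the two incompatible values. This sketch is illustrative only and is not the paper's framework:

```python
import numpy as np

def gmm_theta(x, y, W):
    # one-parameter GMM with moment vector g(theta) = (mean(x)-theta, mean(y)-theta);
    # theta minimises g(theta)' W g(theta), which has the closed form below
    a = np.array([np.mean(x), np.mean(y)])
    e = np.ones(2)
    return (e @ W @ a) / (e @ W @ e)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=20000)   # first moment is correctly centred at 0
y = rng.normal(1.0, 1.0, size=20000)   # second moment is misspecified
theta_id = gmm_theta(x, y, np.eye(2))          # plim 0.5
theta_w = gmm_theta(x, y, np.diag([4.0, 1.0])) # plim 0.2
```

Under correct specification any positive-definite W yields the same probability limit; here the limit moves with W, which is the phenomenon the paper's distribution theory formalises.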
  8. By: Massimiliano Marinucci (Universidad Complutense Madrid); Teodosio Pérez-Amaral (Universidad Complutense Madrid)
    Abstract: We estimate the business telecommunications demands for local, intra-LATA and inter-LATA services, using US telecommunications data from a Bill Harvesting (R) survey carried out during 1997. We model the heterogeneity that is present among firms, due to a variety of different business telecommunication needs, by estimating heteroskedastic normal mixture regression models for each demand. The results show that a three-component mixture model fits the demand for local services well, while a two-component structure is used to model intra-LATA and inter-LATA demand. We describe the groups in terms of the differences among their regression coefficients, and then use Retina to perform automatic model selection over a new expanded candidate regressor set which includes heterogeneity parameters as well as transformations of the original variables. Our results show that the obtained models substantially improve the in-sample as well as the out-of-sample predictive ability over alternative candidate models. Retina suggests that the final demand specification should also include interaction terms between telephone equipment variables, which are found to be negative. On the other hand, the output of the firm, as well as its physical extension, has second-order, yet significant, effects on the demand for telecommunication services. Estimated elasticities are different for the three demands but always positive for access form (single-line or private network), while the effects of other variables are secondary.
    Keywords: Telecommunication Demand Models, Local calls, inter-LATA calls, intra-LATA calls, Retina, Flexible Functional Forms, Heterogeneity, Finite Mixtures.
    JEL: C21 C51 C87
    Date: 2005–05–10
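The mixture machinery can be illustrated in its simplest form: a two-component univariate Gaussian mixture fitted by EM. The paper's heteroskedastic mixture regressions with covariates, and the Retina selection step, are omitted from this sketch, which uses simulated data with a 75/25 split of components at 0 and 5:

```python
import numpy as np

def em_two_normals(x, iters=200):
    # EM for a two-component univariate Gaussian mixture
    mu = np.array([x.min(), x.max()])        # spread-out starting values
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probabilities of component membership
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted means, standard deviations and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return pi, mu, sd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 1500), rng.normal(5, 1, 500)])
pi, mu, sd = em_two_normals(x)
```

In the regression-mixture setting each component's mean is replaced by a linear predictor, and the same E/M alternation applies with weighted least squares in the M-step.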
  9. By: Christophe Chorro (CERMSEM)
    Abstract: Recently, Nicolas Bouleau has proposed an extension of Donsker's invariance principle in the framework of Dirichlet forms. He proves that an erroneous random walk of i.i.d. random variables converges in Dirichlet law toward the Ornstein-Uhlenbeck error structure on the Wiener space [4]. The aim of this paper is to extend this result to some families of stochastic integrals.
    Keywords: Invariance principle, stochastic integrals, Dirichlet forms, squared field operator, vectorial domain, errors
    Date: 2005–04
  10. By: Claudio Morana (SEMEQ Department - Faculty of Economics - University of Eastern Piedmont)
    Abstract: In this paper we study the zero-frequency spectral properties of fractionally cointegrated long memory processes and introduce a new frequency domain principal components estimator of the cointegration space and the factor loading matrix for the long memory factors. We find that for fractionally differenced (fractionally) cointegrated processes the squared multiple coherence at the zero frequency is equal to one, the spectral density matrix at the zero frequency is singular, and the factor loading and cointegrating matrices can be obtained from the eigenvectors of the spectral matrix at the zero frequency, associated with the positive and zero roots, respectively. A Monte Carlo simulation reveals that the proposed principal components estimator already has good properties with relatively small sample sizes.
    Keywords: cointegration, long memory, frequency domain analysis
    JEL: C22
    Date: 2004–03
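The key fact the abstract states — that the spectral density matrix at frequency zero is singular for cointegrated processes, with the cointegrating vector associated with the zero root — can be checked numerically. This sketch uses an ordinary I(1) common-factor pair rather than a fractional one, and a Bartlett-kernel estimate of the zero-frequency spectral (long-run covariance) matrix; the loadings and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

def longrun_cov(dz, m=50):
    # Bartlett-kernel estimate of the spectral density matrix at frequency zero
    # (up to the 2*pi normalisation) of the differenced series dz
    dz = dz - dz.mean(axis=0)
    T = len(dz)
    S = dz.T @ dz / T
    for k in range(1, m + 1):
        G = dz[k:].T @ dz[:-k] / T
        S += (1 - k / (m + 1)) * (G + G.T)
    return S

rng = np.random.default_rng(0)
T = 20000
f = np.cumsum(rng.normal(size=T))                 # common I(1) factor
lam = np.array([1.0, 2.0])                        # factor loadings
y = np.outer(f, lam) + rng.normal(size=(T, 2))    # cointegrated pair
vals, vecs = np.linalg.eigh(longrun_cov(np.diff(y, axis=0)))
beta = vecs[:, 0]   # eigenvector of the (near-)zero root: cointegrating vector
```

The smallest eigenvalue is close to zero and its eigenvector is close (up to sign) to the cointegrating vector (2, -1)/sqrt(5), orthogonal to the loadings, while the eigenvector of the positive root spans the factor loading direction.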
  11. By: Claudio Morana (SEMEQ Department - Faculty of Economics - University of Eastern Piedmont)
    Abstract: In the paper we propose a new methodological approach to core inflation estimation, based on a frequency domain principal components estimator suited to estimating systems of fractionally cointegrated processes. The proposed core inflation measure is the scaled common persistent factor in inflation and excess nominal money growth and bears the interpretation of monetary inflation. The proposed measure is characterised by all the properties that an “ideal” core inflation process should show, and also provides a superior forecasting performance relative to other available measures.
    Keywords: long memory, common factors, fractional cointegration, Markov switching, core inflation, euro area.
    JEL: C22 E31 E52
    Date: 2004–02

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.