nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒08‒14
twelve papers chosen by
Sune Karlsson
Orebro University

  1. Testing conditional monotonicity in the absence of smoothness By Miguel A. Delgado; Juan Carlos Escanciano
  2. Bayesian Cointegrated Vector Autoregression models incorporating Alpha-stable noise for inter-day price movements via Approximate Bayesian Computation By Gareth W. Peters; Balakrishnan B. Kannan; Ben Lasscock; Chris Mellen; Simon Godsill
  3. Estimating nonparametric mixed logit models via EM algorithm By Daniele Pacifico
  4. Disentangling Systematic and Idiosyncratic Risk for Large Panels of Assets By Matteo Barigozzi; Christian T. Brownlees; Giampiero M. Gallo; David Veredas
  5. Real time forecasts of inflation: the role of financial variables By Libero Monteforte; Gianluca Moretti
  6. Modeling House Prices using Multilevel Structured Additive Regression By Wolfgang Brunauer; Stefan Lang; Nikolaus Umlauf
  7. Choice probability generating functions By Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
  8. The Gravity Equation in International Economics and International Business Research: A Note By Michele Fratianni; Francesco Marchionne; Chang Hoon Oh
  9. Analyzing social experiments as implemented: evidence from the HighScope Perry Preschool Program By James Heckman; Seong Hyeok Moon; Rodrigo Pinto; Peter Savelyev; Adam Yavitz
  10. On the role of unobserved preference heterogeneity in discrete choice models of labour supply By Daniele Pacifico
  11. Smoking persistence in Europe: A semi-parametric panel data analysis with selectivity By Dimitris Christelis; Anna Sanz-de-Galdeano
  12. Identifying technology shocks in the frequency domain By Riccardo DiCecio; Michael T. Owyang

  1. By: Miguel A. Delgado; Juan Carlos Escanciano
    Abstract: This article proposes an omnibus test for monotonicity of nonparametric conditional distributions and their moments. Unlike previous proposals, our method does not require smooth estimation of the derivatives of nonparametric curves, and it can be implemented even when the probability densities do not exist; in fact, we only require continuity of the marginal distributions. Distinguishing features of our approach are that the test statistic is pivotal under the null and, in finite samples, invariant to any monotonic continuous transformation of the explanatory variable. The test statistic is the sup-norm of the difference between the empirical copula function and its least concave majorant with respect to the explanatory-variable coordinate. The resulting test is able to detect local alternatives converging to the null at the parametric rate n^(-1/2), like the classical goodness-of-fit tests. The article also discusses restricted estimation procedures under monotonicity and extensions of the basic framework to general conditional moments, estimated parameters and multivariate explanatory variables. The finite sample performance of the test is examined by means of a Monte Carlo experiment.
    Keywords: Stochastic monotonicity, Conditional moments, Least concave majorant, Copula process, Distribution-free in finite samples, Tests invariant to monotone transforms
    JEL: C14 C15
    Date: 2010–03
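The test statistic described above, the sup-norm distance between the empirical copula and its least concave majorant in the explanatory-variable coordinate, can be illustrated numerically. The following is a schematic sketch only (monotone-chain upper hull on the rank grid, no root-n scaling, simulated data), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(size=n)
y = x + 0.3 * rng.normal(size=n)          # monotone relation plus noise

# rank transforms and the empirical copula on the rank grid
u = (np.argsort(np.argsort(x)) + 1) / n
v = (np.argsort(np.argsort(y)) + 1) / n
grid = np.arange(1, n + 1) / n
C = ((u[:, None, None] <= grid[None, :, None]) &
     (v[:, None, None] <= grid[None, None, :])).mean(axis=0)

def least_concave_majorant(xg, yg):
    """Smallest concave piecewise-linear function above (xg, yg); xg increasing."""
    hull = [0]
    for i in range(1, len(xg)):
        # drop the last hull vertex while it lies on or below the chord to i
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            if (xg[i2] - xg[i1]) * (yg[i] - yg[i1]) >= \
               (yg[i2] - yg[i1]) * (xg[i] - xg[i1]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(xg, xg[hull], yg[hull])

# majorize in the explanatory-variable coordinate, one v-slice at a time
lcm = np.column_stack([least_concave_majorant(grid, C[:, k])
                       for k in range(n)])
stat = (lcm - C).max()                    # sup-norm distance (unscaled)
```

Under stochastic monotonicity the majorant hugs the empirical copula, so the statistic is small; departures from monotonicity push the copula below its majorant.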
  2. By: Gareth W. Peters; Balakrishnan B. Kannan; Ben Lasscock; Chris Mellen; Simon Godsill
    Abstract: We consider a statistical model for pairs of traded assets, based on a Cointegrated Vector Auto Regression (CVAR) Model. We extend standard CVAR models to incorporate estimation of model parameters in the presence of price series level shifts which are not accurately modeled in the standard Gaussian error correction model (ECM) framework. This involves developing a novel matrix variate Bayesian CVAR mixture model comprised of Gaussian errors intra-day and Alpha-stable errors inter-day in the ECM framework. To achieve this we derive a novel conjugate posterior model for the Scaled Mixtures of Normals (SMiN CVAR) representation of Alpha-stable inter-day innovations. These results are generalized to asymmetric models for the innovation noise at inter-day boundaries, allowing for skewed Alpha-stable models. Our proposed model and sampling methodology are general, incorporating the current literature on Gaussian models as a special subclass and also allowing for price series level shifts either at randomly estimated time points or at time points known a priori. We focus analysis on regularly observed non-Gaussian level shifts, such as at the close and open of markets, that can have a significant effect on estimation performance in statistical models failing to account for them. We compare the estimation accuracy of our model and estimation approach to standard frequentist and Bayesian procedures for CVAR models when non-Gaussian price series level shifts are present in the individual series, such as at inter-day boundaries. We fit a bivariate Alpha-stable model to the inter-day jumps and model the effect of such jumps on estimation of matrix-variate CVAR model parameters using the likelihood-based Johansen procedure and Bayesian estimation. We illustrate our model and the corresponding estimation procedures we develop on both synthetic and actual data.
    Date: 2010–08
  3. By: Daniele Pacifico
    Abstract: The aim of this paper is to describe a Stata routine for the nonparametric estimation of mixed logit models using an Expectation-Maximisation (EM) algorithm. We also compare the performance of our estimator with that of more typical parametric mixed logit models estimated by means of Simulated Maximum Likelihood.
    Keywords: EM algorithm; latent class; mixed logit model; unobserved heterogeneity
    Date: 2010–05
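The EM approach described above can be sketched, very schematically, for the special case where the mixing distribution is supported on a fixed grid of mass points, so that the algorithm only updates the class probabilities. The data, grid, and coefficient values below are invented for illustration; this is not the paper's Stata routine:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, J = 200, 5, 3
X = rng.normal(size=(N, T, J))               # one attribute per alternative
beta_i = rng.choice([-1.0, 2.0], size=N)     # two-point taste heterogeneity
util = beta_i[:, None, None] * X + rng.gumbel(size=(N, T, J))
y = util.argmax(axis=2)                      # chosen alternatives

grid = np.linspace(-3.0, 3.0, 13)            # candidate mass points

def panel_lik(b):
    """Panel likelihood of each individual's T choices at coefficient b."""
    v = b * X
    p = np.exp(v - v.max(axis=2, keepdims=True))
    p /= p.sum(axis=2, keepdims=True)
    chosen = np.take_along_axis(p, y[:, :, None], axis=2)[:, :, 0]
    return chosen.prod(axis=1)

L = np.column_stack([panel_lik(b) for b in grid])   # N x K likelihood matrix

pi = np.full(grid.size, 1.0 / grid.size)
ll_path = []
for _ in range(200):
    w = L * pi                               # E-step: unnormalized posteriors
    ll_path.append(np.log(w.sum(axis=1)).sum())
    w /= w.sum(axis=1, keepdims=True)
    pi = w.mean(axis=0)                      # M-step: update the masses
```

Each EM iteration weakly increases the sample log-likelihood, which is the property that makes the algorithm attractive when many random coefficients would defeat gradient-based simulated maximum likelihood.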
  4. By: Matteo Barigozzi (Solvay Brussels School of Economics and Management, Université libre de Bruxelles); Christian T. Brownlees (Stern School of Business, New York University); Giampiero M. Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); David Veredas (Solvay Brussels School of Economics and Management, Université libre de Bruxelles)
    Abstract: When observed over a large panel, measures of risk (such as realized volatilities) usually exhibit a secular trend around which individual risks cluster. In this article we propose a vector Multiplicative Error Model achieving a decomposition of each risk measure into a common systematic and an idiosyncratic component, while allowing for contemporaneous dependence in the innovation process. As a consequence, we can assess how much of the current asset risk is due to a system-wide component, and measure the persistence of the deviation of an asset-specific risk from that common level. We develop an estimation technique, based on a combination of seminonparametric methods and copula theory, that is suitable for large dimensional panels. The model is applied to two panels of daily realized volatilities between 2001 and 2008: the SPDR Sectoral Indices of the S&P500 and the constituents of the S&P100. Similar results are obtained on the two sets in terms of the reverting behavior of the common nonstationary component and of the idiosyncratic dynamics, with a variable speed that appears to be sector dependent.
    Keywords: Systematic risk, idiosyncratic risk, Multiplicative Error Model, seminonparametric, copula.
    JEL: C32 C51
    Date: 2010–07
  5. By: Libero Monteforte (Bank of Italy); Gianluca Moretti (Bank of Italy)
    Abstract: We present a mixed-frequency model for daily forecasts of euro area inflation. The model combines a monthly index of core inflation with daily data from financial markets; estimates are carried out with the MIDAS regression approach. The forecasting ability of the model in real-time is compared with that of standard VARs and of daily quotes of economic derivatives on euro area inflation. We find that the inclusion of daily variables helps to reduce forecast errors with respect to models that consider only monthly variables. The mixed-frequency model also displays superior predictive performance with respect to forecasts solely based on economic derivatives.
    Keywords: forecasting inflation, real time forecasts, dynamic factor models, MIDAS regression, economic derivatives
    JEL: C13 C51 C53 E37 G19
    Date: 2010–07
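The MIDAS idea behind the abstract above, daily regressors entering a monthly equation through a parsimonious lag polynomial, can be sketched as follows. The exponential Almon weighting and the parameter values are a common textbook choice, assumed here purely for illustration; the paper's actual specification and data differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n_months, n_days = 120, 22                   # ~22 trading days per month
daily = rng.normal(size=(n_months, n_days))  # a daily financial indicator

def almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights, normalized to sum to one."""
    j = np.arange(1, K + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

w = almon_weights(0.1, -0.02, n_days)        # hypothetical weight parameters
x_midas = daily @ w                          # daily lags -> one monthly regressor
y = 0.5 + 0.8 * x_midas + 0.1 * rng.normal(size=n_months)

# with theta fixed, the monthly equation is linear in the coefficients;
# in practice theta is estimated jointly by nonlinear least squares
Z = np.column_stack([np.ones(n_months), x_midas])
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

The point of the parametric weights is parsimony: 22 daily lags enter through only two shape parameters instead of 22 free coefficients.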
  6. By: Wolfgang Brunauer; Stefan Lang; Nikolaus Umlauf
    Abstract: This paper analyzes house price data belonging to three hierarchical levels of spatial units. House selling prices with associated individual attributes (the elementary level-1) are grouped within municipalities (level-2), which form districts (level-3), which are themselves nested in counties (level-4). In addition to individual attributes, explanatory covariates with possibly nonlinear effects are available on two of these spatial resolutions. We apply a multilevel version of structured additive regression (STAR) models to regress house prices on individual attributes and locational neighborhood characteristics in a four-level hierarchical model. In multilevel STAR models the regression coefficients of a particular nonlinear term may themselves obey a regression model with structured additive predictor. The framework thus allows us to incorporate nonlinear covariate effects and time trends, smooth spatial effects and complex interactions at every level of the hierarchy of the multilevel model. Moreover, we are able to decompose the spatial heterogeneity effect and investigate its magnitude at different spatial resolutions, allowing for improved predictive quality even in the case of unobserved spatial units. Statistical inference is fully Bayesian and based on highly efficient Markov chain Monte Carlo simulation techniques that take advantage of the hierarchical structure in the data.
    Keywords: Bayesian hierarchical models, hedonic pricing models, multilevel models, MCMC, P-splines
    JEL: C01 C11 C14
    Date: 2010–07
  7. By: Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
    Abstract: This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any additive RUM (ARUM) may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models.
    Keywords: Discrete choice; random utility; mixture models; duration models; logit; generalised extreme value; multivariate extreme value
    JEL: C14 C35
    Date: 2010
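The central relationship in the abstract above, choice probabilities as the gradient of the CPGF, is easy to verify in the multinomial logit special case, where the CPGF is the familiar logsum. A small numerical check:

```python
import numpy as np

def G(u):
    """CPGF of the multinomial logit: the logsum, computed stably."""
    m = u.max()
    return m + np.log(np.exp(u - m).sum())

u = np.array([1.0, 0.5, -0.2])               # utilities of three alternatives

# analytic gradient of G is the softmax, i.e. the logit choice probabilities
p = np.exp(u - G(u))

# a central-difference numerical gradient recovers the same probabilities
eps = 1e-6
num = np.array([(G(u + eps * e) - G(u - eps * e)) / (2 * eps)
                for e in np.eye(3)])
```

The same gradient relationship holds for richer generating functions (nested and cross-nested logit, and MEV models generally), which is what makes the CPGF a convenient unifying device.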
  8. By: Michele Fratianni (Department of Business Economics and Public Policy, Indiana University Kelley School of Business); Francesco Marchionne (Universita Politecnica delle Marche); Chang Hoon Oh (Faculty of Business, Brock University)
    Abstract: This note discusses methodological issues and practical concerns for international economists and international business scholars who apply the gravity equation in their research. The most important message of the note is that this equation should correct for multilateral resistance factors. We propose a relatively low-cost specification and estimation to implement such correction, which is robust in the presence of various endogeneity effects and non-stationary variables. In the presence of zero-values in the dataset, however, the multilateral specification is best estimated with Poisson maximum likelihood.
    Keywords: gravity equation, international trade, foreign direct investment, methodology
    JEL: C1 F1 F2
    Date: 2010–08
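The note's recommendation of Poisson maximum likelihood when trade flows contain zeros can be sketched with a toy pseudo-maximum-likelihood fit. The simulated covariates and coefficients below are entirely hypothetical, and the IRLS solver is a generic GLM routine, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n),
                     rng.normal(size=n),     # stand-in for log GDP product
                     rng.normal(size=n)])    # stand-in for log distance
beta_true = np.array([0.5, 0.4, -0.3])
trade = rng.poisson(np.exp(X @ beta_true)).astype(float)  # zeros occur

# Poisson pseudo-maximum likelihood via iteratively reweighted least squares;
# unlike log-linear OLS, zero flows need no ad hoc transformation
beta = np.zeros(3)
for _ in range(100):
    eta = np.clip(X @ beta, -30.0, 30.0)     # guard against overflow
    mu = np.exp(eta)
    z = eta + (trade - mu) / mu              # working response
    XtW = X.T * mu                           # weights = conditional mean
    beta_new = np.linalg.solve(XtW @ X, XtW @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
```

Consistency of the Poisson PML requires only that the conditional mean is correctly specified, which is why it handles zero observations that a logged dependent variable cannot.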
  9. By: James Heckman (Institute for Fiscal Studies and University of Chicago); Seong Hyeok Moon; Rodrigo Pinto; Peter Savelyev; Adam Yavitz
    Abstract: Social experiments are powerful sources of information about the effectiveness of interventions. In practice, initial randomization plans are almost always compromised. Multiple hypotheses are frequently tested. "Significant" effects are often reported with p-values that do not account for preliminary screening from a large candidate pool of possible effects. This paper develops tools for analyzing data from experiments as they are actually implemented. We apply these tools to analyze the influential HighScope Perry Preschool Program. The Perry program was a social experiment that provided preschool education and home visits to disadvantaged children during their preschool years. It was evaluated by the method of random assignment. Both treatments and controls have been followed from age 3 through age 40. Previous analyses of the Perry data assume that the planned randomization protocol was implemented. In fact, as in many social experiments, the intended randomization protocol was compromised. Accounting for compromised randomization, multiple-hypothesis testing, and small sample sizes, we find statistically significant and economically important program effects for both males and females. We also examine the representativeness of the Perry study.
    Date: 2010–08
  10. By: Daniele Pacifico
    Abstract: The aim of this paper is to analyse the role of unobserved preference heterogeneity in structural discrete choice models of labour supply. Within this framework, unobserved heterogeneity has been estimated either parametrically or semiparametrically through random coefficient models. Nevertheless, the estimation of such models by means of standard, gradient-based methods is often difficult, in particular if the number of random parameters is high. For this reason, the role of unobserved taste variability in empirical studies is often constrained since only a small set of coefficients is assumed to be random. However, this simplification may affect the estimated labour supply elasticities and the subsequent policy recommendations. In this paper, we propose a new estimation method based on an EM algorithm that allows us to fully consider the effect of unobserved heterogeneity nonparametrically. Results show that labour supply elasticities and other post-estimation results change significantly only when unobserved heterogeneity is considered in a more flexible and comprehensive manner. Moreover, we analyse the behavioural effects of the introduction of a working-tax credit scheme in the Italian tax-benefit system and show that the magnitude of labour supply reactions and the post-reform income distribution can differ significantly depending on the specification of unobserved heterogeneity.
    Keywords: behavioural microsimulation; labour supply; unobserved heterogeneity; random coefficient mixed models; EM algorithm
    JEL: J22 H31 H24 C25 C14
    Date: 2010–05
  11. By: Dimitris Christelis (Department of Economics, University Of Venice Cà Foscari); Anna Sanz-de-Galdeano (Department of Economics and Economic History, Universitat Autònoma de Barcelona)
    Abstract: We study smoking persistence, which can be due to both true state dependence and individual unobserved heterogeneity, in ten European countries. We distinguish between the two sources of persistence by using semi-parametric dynamic panel selection methods, applied to both smoking participation and cigarette consumption. We find that for both smoking decisions true state dependence is generally much smaller when unobserved individual heterogeneity is taken into account, and we also uncover large differences in true state dependence across countries. Finally, allowing for heaping in the reported number of cigarettes smoked considerably improves the fit of our model.
    Keywords: smoking, panel data, state dependence, selectivity
    JEL: C33 C34 D12 I10 I12
    Date: 2010
  12. By: Riccardo DiCecio; Michael T. Owyang
    Abstract: Since Galí [1999], long-run restricted VARs have become the standard for identifying the effects of technology shocks. In a recent paper, Francis et al. [2008] proposed an alternative to identify technology as the shock that maximizes the forecast-error variance share of labor productivity at long horizons. In this paper, we propose a variant of the Max Share identification, which focuses on maximizing the variance share of labor productivity in the frequency domain. We consider the responses to technology shocks identified from various frequency bands. Two distinct technology shocks emerge. An expansionary shock increases productivity, output, and hours at business-cycle frequencies. The technology shock that maximizes productivity in the medium and long runs instead has clear contractionary effects on hours, while increasing output and productivity.
    Keywords: Business cycles ; Technology - Economic aspects ; Productivity
    Date: 2010
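The Max Share identification that the paper builds on (the Francis et al. horizon-domain version) can be sketched for a toy bivariate VAR(1): the shock maximizing the forecast-error variance share of the first variable is the top eigenvector of an accumulated quadratic form. The VAR coefficients below are invented; the paper's frequency-domain variant would replace the horizon sum with an integral of the spectral density over a frequency band:

```python
import numpy as np

# toy reduced-form VAR(1): y_t = A y_{t-1} + u_t, Cov(u_t) = Sigma (invented)
A = np.array([[0.6, 0.1],
              [0.2, 0.5]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
P = np.linalg.cholesky(Sigma)                # any square root of Sigma works

H = 40                                       # long horizon ~ low frequencies
e1 = np.array([1.0, 0.0])                    # "labor productivity" ordered first

# accumulate S = sum_l (P' C_l' e1)(e1' C_l P), with impulse responses C_l = A^l
S = np.zeros((2, 2))
C = np.eye(2)
for _ in range(H):
    vec = P.T @ C.T @ e1
    S += np.outer(vec, vec)
    C = A @ C

# the Max Share shock is the top eigenvector; its eigenvalue share is the
# fraction of the H-step forecast-error variance of variable 1 it explains
eigval, eigvec = np.linalg.eigh(S)
q = eigvec[:, -1]
share = eigval[-1] / np.trace(S)
```

Because S is built only from reduced-form objects, the identified shock does not depend on the ordering used for the Cholesky factor.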

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.