nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒09‒25
eleven papers chosen by
Sune Karlsson
Örebro universitet

  1. Generalized Indirect Inference for Discrete Choice Models By Marianne Bruins; James A. Duffy; Michael P. Keane; Anthony A. Smith, Jr
  2. A Spatial Autoregressive Stochastic Frontier Model for Panel Data with Asymmetric Efficiency Spillovers By Glass, Anthony J.; Kenjegalieva, Karligash; Sickles, Robin C.
  3. Hidden Markov models in time series, with applications in economics By Sylvia Kaufmann
  4. Continuous Time ARMA Processes: Discrete Time Representation and Likelihood Evaluation. By Michael Thornton; Marcus Chambers
  5. Copula-Based Univariate Time Series Structural Shift Identification Test By Henry Penikas
  6. Nonparametric Dynamic Conditional Beta By Maheu, John M; Shamsi, Azam
  7. Semiparametric Estimation under Shape Constraints By Wu, Ximing; Sickles, Robin
  8. Using pattern mixture modeling to account for informative attrition in the Whitehall II study: A simulation study By Catherine Welch; Martin Shipley; Séverine Sabia; Eric Brunner; Mika Kivimäki
  9. Cumulated sum of squares statistics for non-linear and non-stationary regressions By Vanessa Berenguer-Rico; Bent Nielsen
  10. Beyond Truth-Telling: Preference Estimation with Centralized School Choice By Gabrielle Fack; Julien Grenet; Yinghua He
  11. Multivariate GARCH for a large number of stocks By Matthias Raddant; Friedrich Wagner

  1. By: Marianne Bruins (Nuffield College and Dept of Economics, University of Oxford); James A. Duffy (Nuffield College, Dept of Economics and Institute for New Economic Thinking at the Oxford Martin School, University of Oxford); Michael P. Keane (Nuffield College and Dept of Economics, University of Oxford); Anthony A. Smith, Jr (Yale University)
    Abstract: This paper develops and implements a practical simulation-based method for estimating dynamic discrete choice models. The method, which can accommodate lagged dependent variables, serially correlated errors, unobserved variables, and many alternatives, builds on the ideas of indirect inference. The main difficulty in implementing indirect inference in discrete choice models is that the objective surface is a step function, rendering gradient-based optimization methods useless. To overcome this obstacle, this paper shows how to smooth the objective surface. The key idea is to use a smoothed function of the latent utilities as the dependent variable in the auxiliary model. As the smoothing parameter goes to zero, this function delivers the discrete choice implied by the latent utilities, thereby guaranteeing consistency. We establish conditions on the smoothing such that our estimator enjoys the same limiting distribution as the indirect inference estimator, while at the same time ensuring that the smoothing facilitates the convergence of gradient-based optimization methods. A set of Monte Carlo experiments shows that the method is fast, robust, and nearly as efficient as maximum likelihood when the auxiliary model is sufficiently rich.
    Date: 2015–07–15
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1508&r=ecm
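    Sketch: The smoothing device can be illustrated with a softmax over the simulated latent utilities, whose temperature lam plays the role of the smoothing parameter (a minimal sketch with hypothetical names; the paper's auxiliary models and asymptotics are richer):

      import numpy as np

      def smoothed_choice(utilities, lam):
          """Softmax-smoothed choice indicators; rows tend to one-hot
          argmax indicators as lam -> 0, recovering the discrete choice.

          utilities : (n, J) array of simulated latent utilities
          lam       : smoothing parameter (temperature)
          """
          z = utilities / lam
          z -= z.max(axis=1, keepdims=True)        # numerical stability
          w = np.exp(z)
          return w / w.sum(axis=1, keepdims=True)

    Using these smoothed indicators as the dependent variable in the auxiliary model makes the indirect-inference objective differentiable, so gradient-based optimizers can be applied.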
  2. By: Glass, Anthony J. (Loughborough University); Kenjegalieva, Karligash (Loughborough University); Sickles, Robin C. (Rice University and Loughborough University)
    Abstract: By blending seminal literature on non-spatial stochastic frontier models with key contributions to spatial econometrics we develop a spatial autoregressive (SAR) stochastic frontier model for panel data. The specification of the SAR frontier allows efficiency to vary over time and across the cross-sections. Efficiency is calculated from a composed error structure by assuming a half-normal distribution for inefficiency. The spatial frontier is estimated using maximum likelihood methods taking into account the endogenous SAR variable. We apply our spatial estimator to an aggregate production frontier for 41 European countries over the period 1990-2011. In the application section, the fitted SAR stochastic frontier specification is used to discuss, among other things, the asymmetry between efficiency spillovers to and from a country.
    JEL: C23 C51 D24 E23
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:ecl:riceco:15-014&r=ecm
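    Sketch: In standard notation, and consistent with the abstract (though not necessarily the authors' exact parametrization), such a model can be written as

      y_{it} = \rho \sum_{j=1}^{N} w_{ij} y_{jt} + x_{it}'\beta + v_{it} - u_{it},
      \qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \sim N^{+}(0, \sigma_u^2),

    where W = (w_{ij}) is the spatial weight matrix, \rho the SAR parameter, and u_{it} the half-normal inefficiency term; the spatial lag makes y endogenous, which is why the likelihood must account for it.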
  3. By: Sylvia Kaufmann (Study Center Gerzensee)
    Abstract: Markov models introduce persistence in the mixture distribution. In time series analysis, the mixture components relate to different persistent states characterizing the state-specific time series process. Model specification is discussed in a general form. Emphasis is put on the functional form and the parametrization of time-invariant and time-varying specifications of the state transition distribution. The concept of mean-square stability is introduced to discuss the condition under which Markov switching processes have finite first and second moments in the indefinite future. Not surprisingly, a time series process may be mean-square stable even if it switches between bounded and unbounded state-specific processes. Surprisingly, switching between stable state-specific processes is neither necessary nor sufficient to obtain a mean-square stable time series process. Model estimation proceeds by data augmentation. We derive the basic forward-filtering backward-smoothing/sampling algorithm to infer the latent state indicator in maximum likelihood and Bayesian estimation procedures. Emphasis is again laid on the state transition distribution. We discuss the specification of state-invariant prior parameter distributions and posterior parameter inference under either a logit or probit functional form of the state transition distribution. With simulated data, we show that the estimation of parameters under a probit functional form is more efficient. However, a probit functional form renders estimation extremely slow if more than two states drive the time series process. Finally, various applications illustrate how to obtain informative switching in Markov switching models with time-invariant and time-varying transition distributions.
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:szg:worpap:1606&r=ecm
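    Sketch: The forward-filtering step of the algorithm can be summarized as follows (a minimal sketch with hypothetical names; the paper also covers time-varying transition distributions):

      import numpy as np

      def forward_filter(loglik, P, p0):
          """Filtered state probabilities Pr(s_t | y_1:t) and log-likelihood.

          loglik : (T, K) state-specific log-likelihoods of each observation
          P      : (K, K) transition matrix, P[i, j] = Pr(s_t+1 = j | s_t = i)
          p0     : (K,) initial state distribution
          """
          T, K = loglik.shape
          filt = np.zeros((T, K))
          ll, pred = 0.0, p0
          for t in range(T):
              m = loglik[t].max()
              joint = pred * np.exp(loglik[t] - m)   # scaled for stability
              c = joint.sum()
              filt[t] = joint / c
              ll += np.log(c) + m                    # log Pr(y_t | y_1:t-1)
              pred = filt[t] @ P                     # one-step prediction
          return filt, ll

    The backward pass then smooths or samples the states in reverse, using Pr(s_t | s_t+1, y_1:t) proportional to filt[t] * P[:, s_t+1].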
  4. By: Michael Thornton; Marcus Chambers
    Abstract: This paper explores the representation and estimation of mixed continuous time ARMA (autoregressive moving average) systems of orders p, q. Taking the general case of mixed stock and flow variables, we discuss new state space and exact discrete time representations and demonstrate that the discrete time ARMA representations widely used in empirical work, based on differencing stock variables, are members of a class of observationally equivalent discrete time ARMA(p + 1, p) representations, which includes a more natural ARMA(p, p) representation. We compare and contrast two approaches to likelihood evaluation and computation, namely one based on an exact discrete time representation and another utilising a state space representation and the Kalman-Bucy filter.
    Keywords: Continuous time; ARMA process; state space; discrete time representation.
    JEL: C32
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:16/10&r=ecm
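    Sketch: The state space route rests on the fact that a continuous time state vector dx = A x dt + dW, sampled at interval h, has an exact discrete time transition. A minimal sketch (hypothetical names; the paper additionally handles flow variables and the moving average part in the observation equation):

      import numpy as np
      from scipy.linalg import expm

      def carma_discrete_transition(A, Sigma, h, n=200):
          """Transition matrix and innovation covariance of the sampled state.

          x_{t+h} = F x_t + eta_t with F = exp(A h) and
          Cov(eta_t) = int_0^h exp(A s) Sigma exp(A's) ds,
          here approximated by a crude quadrature over n points.
          """
          F = expm(A * h)
          s = np.linspace(0.0, h, n)
          Q = sum(expm(A * si) @ Sigma @ expm(A * si).T for si in s) * (h / n)
          return F, Q

    A Kalman filter run on these (F, Q) matrices then evaluates the Gaussian likelihood of the discretely sampled data.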
  5. By: Henry Penikas
    Abstract: An approach is proposed to identify structural shifts in a time series, assuming non-linear dependence on lagged values of the dependent variable. Copulas are used to model the non-linear dependence between the time series components.
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1609.05056&r=ecm
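    Sketch: The abstract gives few details; purely as an illustration of the idea (not the paper's test statistic), lag-one dependence can be tracked on a rolling window, where a copula-based version would fit a parametric copula to the ranked pairs instead of computing Kendall's tau:

      import numpy as np
      from scipy.stats import kendalltau

      def lag1_dependence_path(y, window=100):
          """Rolling Kendall's tau between y_t and y_t-1; a persistent level
          shift in this path suggests a structural shift in the dependence."""
          pairs = np.column_stack([y[:-1], y[1:]])
          taus = []
          for start in range(len(pairs) - window + 1):
              w = pairs[start:start + window]
              tau, _ = kendalltau(w[:, 0], w[:, 1])
              taus.append(tau)
          return np.array(taus)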
  6. By: Maheu, John M; Shamsi, Azam
    Abstract: This paper derives a dynamic conditional beta representation using a Bayesian semiparametric multivariate GARCH model. The conditional joint distribution of excess stock returns and market excess returns is modeled as a countably infinite mixture of normals. This allows for deviations from the elliptic family of distributions. Empirically, we find that the time-varying beta of a stock depends nonlinearly on the contemporaneous value of excess market returns. In highly volatile markets, beta is almost constant, while in stable markets, the beta coefficient can depend asymmetrically on the market excess return. The model is extended to allow nonlinear dependence in Fama-French factors.
    Keywords: GARCH, Dirichlet process mixture, slice sampling
    JEL: C32 C58 G10 G17
    Date: 2016–09–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:73764&r=ecm
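    Sketch: For reference, the standard dynamic conditional beta of stock i is the time-t conditional regression coefficient of its excess return on the market excess return,

      \beta_{it} = \frac{\operatorname{Cov}_{t-1}(r_{it}, r_{mt})}{\operatorname{Var}_{t-1}(r_{mt})}.

    Because the infinite mixture of normals lets the joint conditional distribution be non-elliptical, the conditional expectation of r_{it} given r_{mt} need not be linear in r_{mt}, so the implied beta can vary with the realized market excess return, which is the nonlinearity the abstract describes.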
  7. By: Wu, Ximing (TX A&M University); Sickles, Robin (Rice University)
    Abstract: Economic theory provides the econometrician with substantial structure and restrictions necessary to give economic interpretation to empirical findings. In many settings, such as those in consumer demand and production studies, these restrictions often take the form of monotonicity and curvature constraints. Although such restrictions may be imposed in certain parametric empirical settings in a relatively straightforward fashion by utilizing parametric restrictions or particular parametric functional forms (Cobb-Douglas, CES, etc.), imposing such restrictions in semiparametric models is often problematic. Our paper provides one solution to this problem by incorporating penalized splines, where monotonicity and curvature constraints are maintained via integral transformations of spline basis expansions. We derive the estimator, algorithms for its solution, and its large sample properties. Inferential procedures are discussed as well as methods for selecting the smoothing parameter. We also consider multiple regressions under the framework of additive models. We conduct a series of Monte Carlo simulations to illustrate the finite sample properties of the estimator. We apply the proposed methods to estimate two canonical relationships, one in consumer behavior and one in producer behavior. These two empirical settings examine, respectively, the relationship between individuals' degree of optimism and their risk tolerance, and a production function with multiple inputs.
    JEL: C14 C15 D04
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:ecl:riceco:15-021&r=ecm
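    Sketch: The integral-transformation device can be illustrated as follows (hypothetical names; the paper uses penalized spline expansions, adds curvature constraints, and derives inference): writing the derivative as the exponential of a basis expansion makes monotonicity hold by construction, whatever the coefficients are.

      import numpy as np

      def monotone_curve(x_grid, coef, basis):
          """Evaluate f(x) = f(x_0) + int exp(basis(u) @ coef) du on a grid.

          basis : callable mapping u to a feature vector (e.g., B-splines)
          The integrand exp(.) > 0, so f is increasing for any coef.
          """
          u = np.asarray(x_grid, dtype=float)
          deriv = np.exp(np.array([basis(ui) @ coef for ui in u]))
          steps = (deriv[1:] + deriv[:-1]) / 2 * np.diff(u)  # trapezoid rule
          return np.concatenate([[0.0], np.cumsum(steps)])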
  8. By: Catherine Welch (Research Department of Epidemiology and Public Health, UCL); Martin Shipley (Research Department of Epidemiology and Public Health, UCL); Séverine Sabia (INSERM U1018, Centre for Research in Epidemiology and Population Health, Villejuif, France); Eric Brunner (Research Department of Epidemiology and Public Health, UCL); Mika Kivimäki (Research Department of Epidemiology and Public Health, UCL)
    Abstract: Attrition is one potential bias that occurs in longitudinal studies when participants drop out and is informative when the reason for attrition is associated with the study outcome. However, this is impossible to check because the data we need to confirm informative attrition are missing. When data are missing at random (MAR), that is, when the probability of missingness is not associated with the missing values conditional on the observed data, one appropriate approach for handling missing data is multiple imputation (MI). However, when attrition results in the data being missing not at random (MNAR), the probability of missing data is associated with the missing values themselves, so we cannot use MI directly. An alternative approach is pattern mixture modeling, which specifies the distribution of the observed data, which we know, and the missing data, which we don't know. We can estimate the missing data models using observations about the data and average the estimates of the two models using MI. Many longitudinal clinical trials have a monotone missing pattern (once participants drop out, they do not return), which simplifies MI, so pattern mixture modeling is used as a sensitivity analysis. However, in observational studies, data are missing because of nonresponses and attrition, which is a more complex setting for handling attrition compared with clinical trials. For this study, we used data from the Whitehall II study. Data were first collected on over 10,000 civil servants in 1985 and data collection phases are repeated every 2-3 years. Participants complete a health and lifestyle questionnaire and, at alternate, odd-numbered phases, attend a screening clinic. Over 30 years, many epidemiological studies have used these data. One study investigated how smoking status at baseline (Phase 5) was associated with 10-year cognitive decline using a mixed model with random intercept and slope. In these analyses, the authors replaced missing values in non-responders with last observed values. However, participants with reduced cognitive function may be unable to continue participation in the Whitehall II study, which may bias the statistical analysis. Using Stata, we will simulate 1,000 datasets with the same distributions and associations as Whitehall II to perform the statistical analysis described above. First, we will develop a MAR missingness mechanism (conditional on previously observed values) and change cognitive function values to missing. Next, for attrition, we will use a MNAR missingness mechanism (conditional on measurements at the same phase). For both MAR and MNAR missingness mechanisms, we will compare the bias and precision from an analysis of simulated datasets without any missing data with a complete case analysis and an analysis of data imputed using MI; additionally, for the MNAR missingness mechanism, we will use pattern mixture modeling. We will use the twofold fully conditional specification (FCS) algorithm to impute missing values for nonresponders and to average estimates when using pattern mixture modeling. The twofold FCS algorithm imputes each phase sequentially conditional on observed information at adjacent phases, so is a suitable approach for imputing missing values in longitudinal data. The user-written package for this approach, twofold, is available on the Statistical Software Components (SSC) archive. We will present the methods used to perform the study and results from these comparisons.
    Date: 2016–09–16
    URL: http://d.repec.org/n?u=RePEc:boc:usug16:11&r=ecm
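    Sketch: As a generic illustration of the pattern-mixture idea (not the twofold FCS algorithm itself, and with hypothetical names): impute under MAR, then shift the dropouts' imputed outcomes by a sensitivity parameter delta that encodes the suspected MNAR departure.

      import numpy as np

      def delta_adjusted(y_imputed, dropout_mask, delta):
          """Pattern-mixture delta adjustment of MAR imputations.

          delta < 0 encodes the belief that dropouts' unobserved cognitive
          scores are systematically lower than the MAR imputation implies."""
          y = np.asarray(y_imputed, dtype=float).copy()
          y[dropout_mask] += delta
          return y

    The analysis is then repeated over a range of delta values (delta = 0 recovers the MAR/MI analysis) and estimates are combined across imputations with Rubin's rules.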
  9. By: Vanessa Berenguer-Rico (Dept of Economics, Mansfield College and Programme for Economic Modelling, Oxford University); Bent Nielsen (Dept of Economics, Nuffield College, Institute Programme for Economic Modelling, Oxford University)
    Abstract: We show that the cumulated sum of squares test has a standard Brownian bridge-type asymptotic distribution in non-linear regression models with non-stationary regressors. This contrasts with previously studied cumulated sum tests, whose asymptotic distributions involve nuisance quantities. Through simulation we show that the power is comparable in a wide range of situations.
    Keywords: Cumulated sum of squares, Non-linear Least Squares, Non-stationarity, Specification tests.
    JEL: C01 C22
    Date: 2015–08–03
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1509&r=ecm
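    Sketch: In its simplest form the statistic compares the running share of the squared residuals with its proportional benchmark (a minimal sketch; the kurtosis correction and normalization details used in the paper are omitted):

      import numpy as np

      def cusq_statistic(residuals):
          """sqrt(n) * max_k | sum_{t<=k} e_t^2 / sum_t e_t^2 - k/n |;
          under the null this converges to the sup of a scaled Brownian bridge."""
          e2 = np.asarray(residuals, dtype=float) ** 2
          n = e2.size
          path = np.cumsum(e2) / e2.sum() - np.arange(1, n + 1) / n
          return np.sqrt(n) * np.abs(path).max()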
  10. By: Gabrielle Fack (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Julien Grenet (PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENS Paris - École normale supérieure - Paris - École des Ponts ParisTech (ENPC), PSE - Paris School of Economics); Yinghua He (TSE - Toulouse School of Economics - Toulouse School of Economics)
    Abstract: We propose novel approaches and tests for estimating student preferences with data from school choice mechanisms, e.g., the Gale-Shapley Deferred Acceptance mechanism. Without requiring truth-telling to be the unique equilibrium, we show that the matching is (asymptotically) stable, or justified-envy-free, implying that every student is assigned to her favorite school among those she is qualified for ex post. Having validated the methods in simulations, we apply them to data from Paris and reject truth-telling but not stability. Our estimates are then used to compare the sorting and welfare effects of alternative admission criteria prescribing how schools rank students in centralized mechanisms.
    Keywords: Gale-Shapley Deferred Acceptance Mechanism, School Choice, Stable Matching, Student Preferences, Admission Criteria
    JEL: C78 D47 D50 D61 I21
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01215998&r=ecm
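    Sketch: The stability restriction that the estimation exploits can be sketched as follows (hypothetical data structures): given the ex-post admission cutoffs, each student's assignment must be her utility-maximizing school among those she qualifies for.

      import numpy as np

      def stable_assignment(utility, priority, cutoffs):
          """Predicted matches under stability.

          utility  : (n_students, n_schools) student utilities
          priority : (n_students, n_schools) priority scores at each school
          cutoffs  : (n_schools,) ex-post admission cutoffs
          Assumes every student clears at least one cutoff."""
          feasible = priority >= cutoffs
          u = np.where(feasible, utility, -np.inf)
          return u.argmax(axis=1)   # favorite feasible school

    Preference parameters are then chosen so that these predictions match the observed assignments, without assuming that submitted rank-order lists are truthful.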
  11. By: Matthias Raddant; Friedrich Wagner
    Abstract: The problems related to the application of multivariate GARCH models to a market with a large number of stocks are solved by restricting the form of the conditional covariance matrix. It contains one component describing the market and a second simple component to account for the remaining contribution to the volatility. This allows the analytical calculation of the inverse covariance matrix. We compare our model with the results of other GARCH models for the daily returns from the S&P500 market. The description of the covariance matrix turns out to be similar to the DCC model but has fewer free parameters and requires less computing time. As an application, we use the daily values of the beta coefficients obtained from the market component to confirm a transition of the market in 2006. Further, we discuss properties of the leverage effect.
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1609.07051&r=ecm
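    Sketch: The analytical inverse is available because a "market plus idiosyncratic" covariance is a diagonal matrix plus a rank-one term, so the Sherman-Morrison formula applies (notation hypothetical; the paper's second component is richer than pure idiosyncratic noise):

      import numpy as np

      def inverse_market_covariance(d, b, s2):
          """Invert H = diag(d) + s2 * outer(b, b) analytically via
          Sherman-Morrison, avoiding numerical inversion of H.

          d  : (N,) idiosyncratic variances
          b  : (N,) market loadings (the beta coefficients)
          s2 : market variance"""
          dinv_b = b / d
          denom = 1.0 + s2 * (b @ dinv_b)
          return np.diag(1.0 / d) - s2 * np.outer(dinv_b, dinv_b) / denom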

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.