nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒10‒10
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Time to Demystify Endogeneity Bias By Duo Qin
  2. Kernel Estimation Of Hazard Functions When Observations Have Dependent and Common Covariates By James Wolter
  3. STR: A Seasonal-Trend Decomposition Procedure Based on Regression By G. Forchini; Bin Jiang; Bin Peng
  4. A triangular treatment effect model with random coefficients in the selection equation By Gautier, Eric; Hoderlein, Stefan
  5. Asymptotics for Sieve Estimators of Hazard Rates: Estimating Hazard Functionals By James Wolter
  6. Common Feature Analysis of Economic Time Series: An Overview and Recent Developments By Marco Centoni; Gianluca Cubadda
  7. Maximum Entropy Evaluation of Asymptotic Hedging Error under a Generalised Jump-Diffusion Model By Farzad Alavi Fard; Firmin Doko Tchatoka; Sivagowry Sriananthakumar
  8. An Alternative Estimator for Industrial Gender Wage Gaps: A Normalized Regression Approach By Yun, Myeong-Su; Lin, Eric S.
  9. Kriging of financial term-structures By Areski Cousin; Hassan Maatouk; Didier Rullière
  10. Identification and Estimation of Production Function with Unobserved Heterogeneity By Paul Schrimpf; Michio Suzuki; Hiroyuki Kasahara
  11. Manipulating a stated choice experiment By Fosgerau, Mogens; Börjesson, Maria
  12. A fast algorithm for finding the confidence set of large collections of models By Sylvain Barde

  1. By: Duo Qin (Department of Economics, SOAS, University of London, UK)
    Abstract: This study exposes the flaw in defining endogeneity bias by correlation between an explanatory variable and the error term of a regression model. Through dissecting the links which have led to entanglement of measurement errors, simultaneity bias, omitted variable bias and self-selection bias, the flaw is revealed to stem from a Utopian mismatch of reality with single explanatory variable models. The consequent estimation-centred route to circumvent the correlation is shown to be committing a type III error. Use of single-variable-based ‘consistent’ estimators without consistency of model with data can result in significant distortion of causal postulates of substantive interest. This strategic error is traced to a loss in translation of those causal postulates directly to appropriate conditional models as decompositions of joint distributions. Efficient combination of substantive knowledge with data information in applied modelling research entails mending the loss to dispel the endogeneity bias phobia.
    Keywords: simultaneity, omitted variable, self-selection, multicollinearity, consistency, causal model, conditioning
    JEL: B23 B40 C10 C50
    URL: http://d.repec.org/n?u=RePEc:soa:wpaper:192&r=all
  2. By: James Wolter
    Abstract: We propose a hazard model where dependence between events is achieved by assuming dependence between covariates. This model allows for correlated variables specific to observations as well as macro variables which all observations share. This setup better fits many economic and financial applications where events are not independent. Nonparametric estimation of the hazard function is then studied. Kernel estimators proposed in Nielsen and Linton (1995, Annals of Statistics) and Linton, Nielsen and Van de Geer (2003, Annals of Statistics) are shown to have asymptotic properties similar to those in the i.i.d. case. Mixing conditions ensure the asymptotic results follow. These results depend on adjustments to bandwidth conditions. Simulations are conducted which verify the impact of dependence on estimators. Bandwidth selection accounting for dependence is shown to improve performance. In an empirical application, trade intensity in high-frequency financial data is estimated.
    Keywords: Hazard estimation, Correlated events, Dependent covariates, Common covariates, Kernel estimation.
    JEL: C13 C14 C51
    Date: 2015–10–05
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:761&r=all
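    The occurrence/exposure kernel smoothing that underlies estimators of this kind can be sketched in a few lines. This is not the paper's estimator (which allows dependent and common covariates); it is the plain Ramlau-Hansen kernel smoothing of Nelson-Aalen increments on simulated i.i.d. data, shown only to illustrate the object being estimated:

```python
import numpy as np

def epanechnikov(u):
    # standard Epanechnikov kernel, supported on [-1, 1]
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def smoothed_hazard(event_times, n_at_risk, grid, h):
    # Ramlau-Hansen estimator: kernel-smooth the Nelson-Aalen increments,
    # lambda_hat(t) = sum_i K_h(t - T_i) / Y(T_i)
    out = np.zeros_like(grid, dtype=float)
    for t_i, y_i in zip(event_times, n_at_risk):
        out += epanechnikov((grid - t_i) / h) / (h * y_i)
    return out

# toy data: 500 i.i.d. exponential event times with true hazard 0.5, no censoring
rng = np.random.default_rng(0)
times = np.sort(rng.exponential(scale=2.0, size=500))
at_risk = np.arange(len(times), 0, -1)   # Y(T_i) for the sorted event times
grid = np.linspace(0.5, 3.0, 6)
lam = smoothed_hazard(times, at_risk, grid, h=0.5)
```

    The constant true hazard 0.5 is recovered up to smoothing bias; the paper's contribution concerns how the bandwidth conditions behind such estimators must be adjusted once covariates are dependent across observations.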
  3. By: G. Forchini; Bin Jiang; Bin Peng
    Abstract: The set-up considered by Pesaran (Econometrica, 2006) is extended to allow for endogenous explanatory variables. A class of instrumental variables estimators is studied and it is shown that estimators in this class are consistent and asymptotically normally distributed as both the cross-section and time-series dimensions tend to infinity.
    Keywords: time series decomposition, seasonal data, Tikhonov regularisation, ridge regression, LASSO, STL, TBATS, X-12-ARIMA, BSM
    JEL: C33 C36
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-14&r=all
  4. By: Gautier, Eric; Hoderlein, Stefan
    Abstract: This paper considers treatment effects under endogeneity with complex heterogeneity in the selection equation. We model the outcome of an endogenous treatment as a triangular system, where both the outcome and first-stage equations consist of a random coefficients model. The first-stage specifically allows for nonmonotone selection into treatment. We provide conditions under which marginal distributions of potential outcomes, average and quantile treatment effects, all conditional on first-stage random coefficients, are identified. Under the same conditions, we derive bounds on the (conditional) joint distributions of potential outcomes and gains from treatment, and provide additional conditions for their point identification. All conditional quantities yield unconditional effects (e.g., the average treatment effect) by weighted integration.
    Keywords: Treatment effects, Endogeneity, Random Coefficients, Nonparametric Identification, Partial Identification, Roy Model
    Date: 2011–09–02
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:29656&r=all
  5. By: James Wolter
    Abstract: This paper derives asymptotics for functionals of a hazard model with an exposure-time effect and time-varying covariates. A semi-nonparametric sieve maximum likelihood estimator of a competing risks model based on the Cox proportional hazard is considered. Consistency of the estimator and its rate of convergence in the Fisher norm are derived. These results are prerequisites for asymptotic normality of plug-in estimators of hazard functionals. This provides an inference procedure for a large class of functionals including the conditional probability of events and various asset pricing formulas for defaultable securities. Asset pricing formulas in this class include the value of mortgages, insurance contracts, bonds, swaps and other options.
    Keywords: Conditional probabilities, Sieve estimation, Hazard models
    JEL: C01 C14 C41
    Date: 2015–10–05
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:760&r=all
  6. By: Marco Centoni (LUMSA University); Gianluca Cubadda (DEF & CEIS, University of Rome "Tor Vergata")
    Abstract: In this paper we overview the literature on common features analysis of economic time series. Starting from the seminal contributions by Engle and Kozicki (1993) and Vahid and Engle (1993), we present and discuss the various notions that have been proposed to detect and model common cyclical features in macroeconometrics. In particular, we analyze in details the link between common cyclical features and the reduced-rank regression model. We also illustrate similarities and differences between the common features methodology and other popular types of multivariate time series modelling. Finally, we discuss some recent developments in this area, such as the implications of common features for univariate time series models and the analysis of common autocorrelation in medium-large dimensional systems.
    Keywords: Common features; common cycles; reduced-rank regression; canonical correlation analysis; vector autoregressive models; dynamic factor models; business cycles.
    Date: 2015–10–05
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:355&r=all
  7. By: Farzad Alavi Fard (RMIT University); Firmin Doko Tchatoka (School of Economics, University of Adelaide); Sivagowry Sriananthakumar (RMIT University)
    Abstract: In this paper we propose a maximum entropy estimator for the asymptotic distribution of the hedging error for options. Perfect replication of financial derivatives is not possible, due to market incompleteness and discrete-time hedging. We derive the asymptotic hedging error for options under a generalised jump-diffusion model with kernel bias, which nests a number of very important processes in finance. We then obtain an estimation for the distribution of hedging error by maximising Shannon's entropy subject to a set of moment constraints, which in turn yield the value-at-risk and expected shortfall of the hedging error. The significance of this approach lies in the fact that the maximum entropy estimator allows us to obtain a consistent estimate of the asymptotic distribution of hedging error, despite the non-normality of the underlying distribution of returns.
    Keywords: Generalized Jump, kernel biased, Asymptotic Hedging Error, Esscher Transform, Maximum Entropy Density, Value-at-Risk, Expected Shortfall
    JEL: C13 C51 G13
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:adl:wpaper:2015-17&r=all
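    The maximum-entropy step itself — choosing the density that maximises Shannon's entropy subject to moment constraints — has the well-known exponential-family solution f(x) ∝ exp(Σ_k λ_k x^k), with λ found by minimising the convex dual. A minimal grid-based sketch under hypothetical moment constraints (the paper derives its constraints from the asymptotic hedging error, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(grid, target_moments):
    # Maximum-entropy density on a grid subject to E[x^k] = m_k, k = 1..K.
    K = len(target_moments)
    m = np.asarray(target_moments, dtype=float)
    T = np.vstack([grid ** (k + 1) for k in range(K)])   # sufficient statistics x^k
    dx = grid[1] - grid[0]

    def log_partition(lam):
        logf = lam @ T
        c = logf.max()                                   # numerical stabilisation
        return c + np.log(np.sum(np.exp(logf - c)) * dx)

    def dual(lam):
        # convex dual: log Z(lam) - lam . m; its minimiser matches the moments
        return log_partition(lam) - lam @ m

    lam = minimize(dual, np.zeros(K), method="BFGS").x
    return np.exp(lam @ T - log_partition(lam))

grid = np.linspace(-6.0, 6.0, 601)
f = maxent_density(grid, [0.0, 1.0])       # match E[x] = 0, E[x^2] = 1
dx = grid[1] - grid[0]
mean = float(np.sum(grid * f) * dx)
var = float(np.sum(grid**2 * f) * dx)
```

    With mean 0 and second moment 1 the fitted density is a discretised standard normal; value-at-risk and expected shortfall then follow from its quantiles, which is how the estimated hedging-error distribution is put to use.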
  8. By: Yun, Myeong-Su (Tulane University); Lin, Eric S. (National Tsing Hua University)
    Abstract: Using normalized regression equations, we propose an alternative estimator of industrial gender wage gaps which is identified in the sense that it is invariant to the choice of an unobserved non-discriminatory wage structure, and to the choice of the reference groups of any categorical variables. The proposed estimator measures the pure impact of industry on gender wage gaps after netting out wage differentials due to differences in characteristics and their coefficients between men and women. Furthermore, the proposed estimator is easy to implement, including hypothesis tests. We compare the proposed estimator with existing estimators using samples from the 1998 Current Population Survey of the US.
    Keywords: gender wage discrimination, identification, industrial gender wage gaps, normalized regression, Oaxaca decomposition
    JEL: C12 J31 J71
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9381&r=all
  9. By: Areski Cousin (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1); Hassan Maatouk (LIMOS - Laboratoire d'Informatique, de Modélisation et d'optimisation des Systèmes - CNRS - Université d'Auvergne - Clermont-Ferrand I - UBP - Université Blaise Pascal - Clermont-Ferrand 2 - Institut Français de Mécanique Avancée, DEMO-ENSMSE - Département Décision en Entreprise : Modélisation, Optimisation - Mines Saint-Étienne MSE - École des Mines de Saint-Étienne - Institut Mines-Télécom - Institut Henri Fayol); Didier Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1)
    Abstract: Due to the lack of reliable market information, building financial term-structures may be associated with a significant degree of uncertainty. In this paper, we propose a new term-structure interpolation method that extends classical spline techniques by additionally allowing for quantification of uncertainty. The proposed method is based on a generalization of kriging models with linear equality constraints (market-fit conditions) and shape-preserving conditions such as monotonicity or positivity (no-arbitrage conditions). We define the most likely curve and show how to build confidence bands. The Gaussian process covariance hyper-parameters under the construction constraints are estimated using cross-validation techniques. Based on observed market quotes at different dates, we demonstrate the efficiency of the method by building curves together with confidence intervals for term-structures of OIS discount rates, of zero-coupon swap rates and of CDS implied default probabilities. We also show how to construct interest-rate surfaces or default probability surfaces by considering time (quotation dates) as an additional dimension.
    Keywords: monotonicity constraints, kriging, interpolation, yield curve
    Date: 2015–09–30
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01206388&r=all
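    The unconstrained core of the method — kriging (Gaussian process) interpolation of a curve through a handful of quotes, with pointwise confidence bands — can be sketched as follows. The quotes and the RBF covariance hyper-parameters are hypothetical; the paper's distinguishing ingredients (monotonicity/positivity constraints and cross-validated hyper-parameters) are omitted:

```python
import numpy as np

def rbf(a, b, ell=2.0, sigma=1.0):
    # squared-exponential (RBF) covariance between maturity vectors a and b
    d = a[:, None] - b[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def krige(x_obs, y_obs, x_new, jitter=1e-10):
    # noise-free GP (kriging) posterior: mean interpolates the quotes exactly
    K = rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    Ks = rbf(x_new, x_obs)
    Kss = rbf(x_new, x_new)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# hypothetical zero-coupon rates quoted at a few maturities (in years)
mats = np.array([1.0, 2.0, 5.0, 10.0])
rates = np.array([0.010, 0.013, 0.020, 0.025])
grid = np.linspace(1.0, 10.0, 19)
mu, sd = krige(mats, rates, grid)
# 95% pointwise band: mu +/- 1.96 * sd
```

    The posterior mean passes through the quotes and the band width collapses to zero at quoted maturities, widening in between — the uncertainty quantification that motivates the kriging approach over plain splines.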
  10. By: Paul Schrimpf (The University of British Columbia); Michio Suzuki (University of Tokyo); Hiroyuki Kasahara (University of British Columbia)
    Abstract: This paper examines non-parametric identifiability of production functions when they are heterogeneous across firms beyond Hicks-neutral technology terms. Using a finite mixture specification to capture permanent unobserved heterogeneity in production technology, we show that the production function for each unobserved type is non-parametrically identified under regularity conditions. We also propose an estimation procedure for production functions with random coefficients based on the EM algorithm. We estimate a random coefficients production function using panel data on Japanese publicly-traded manufacturing firms and compare it with the estimate of a production function with fixed coefficients estimated by the method of Gandhi, Navarro, and Rivers (2013).
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:red:sed015:924&r=all
  11. By: Fosgerau, Mogens; Börjesson, Maria
    Abstract: This paper considers the design of a stated choice experiment intended to measure the marginal rate of substitution (MRS) between cost and an attribute such as time using a conventional logit model. Focusing the experimental design on some target MRS will bias estimates towards that value. The paper shows why this happens. The resulting estimated MRS can then be manipulated by adapting the target MRS in the experimental design.
    Keywords: stated choice; willingness to pay; misspecification; experimental design
    JEL: C9 D10 Q51
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67053&r=all
  12. By: Sylvain Barde
    Abstract: The paper proposes a new algorithm for finding the confidence set of a collection of forecasts or prediction models. Existing numerical implementations for finding the confidence set use an elimination approach where one starts with the full collection of models and successively eliminates the worst performing until the null of equal predictive ability is no longer rejected at a given confidence level. The intuition behind the proposed implementation lies in reversing the process: one starts with a collection of two models and as models are successively added to the collection both the model rankings and p-values are updated. The first benefit of this updating approach is a reduction of one polynomial order in both the time complexity and memory cost of finding the confidence set of a collection of M models, falling respectively from O(M^3) to O(M^2) and from O(M^2) to O(M). This theoretical prediction is confirmed by a Monte Carlo benchmarking analysis of the algorithms. The second key benefit of the updating approach is that it intuitively allows for further models to be added at a later point in time, thus enabling collaborative efforts using the model confidence set procedure.
    Keywords: Model selection; model confidence set; bootstrapped statistics
    JEL: C12 C18 C52
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:1519&r=all
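    The complexity argument can be illustrated with a toy version of the updating idea: when a model joins a collection of M models, only its M new pairwise comparisons are computed, so growing the collection to M models costs O(M^2) pair statistics in total rather than the O(M^3) of repeated elimination passes. The sketch below only maintains pairwise t-statistics on loss differentials; the bootstrap machinery that turns these into model-confidence-set p-values is omitted:

```python
import numpy as np

class IncrementalRanking:
    # Illustrative updating collection: add_model computes only the new
    # model's pairwise loss-differential t-statistics, i.e. O(M) work per
    # addition instead of recomputing all O(M^2) pairs from scratch.
    def __init__(self):
        self.losses = []                 # one loss series per model

    def add_model(self, loss):
        new_stats = []
        for other in self.losses:
            d = loss - other             # loss differential series
            t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
            new_stats.append(float(t))
        self.losses.append(loss)
        return new_stats                 # t-stats of new model vs each incumbent

rng = np.random.default_rng(1)
coll = IncrementalRanking()
coll.add_model(rng.normal(0.0, 1.0, 200))           # baseline model's losses
stats = coll.add_model(rng.normal(0.5, 1.0, 200))   # clearly worse model
# a large positive t-statistic flags the new model's higher average loss
```

    Each call to add_model returns the new model's statistics against every incumbent, which is exactly the per-step work whose reuse gives the updating approach its one-polynomial-order saving; it also shows why further models can be appended later without restarting.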

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.