nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒09‒30
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Multivariate Local Whittle Estimation and Spurious Fractional Cointegration By Becker, Janis; Leschinski, Christian; Sibbertsen, Philipp
  2. Adjusted QMLE for the spatial autoregressive parameter By Federico Martellosio; Grant Hillier
  3. Nonparametric estimation of the random coefficients model: An elastic net approach By Heiss, Florian; Hetzenecker, Stephan; Osterhaus, Maximilian
  4. Backtesting Marginal Expected Shortfall and Related Systemic Risk Measures By Denisa Banulescu; Christophe Hurlin; Jeremy Leymarie; O. Scaillet
  5. How Serious is the Measurement-Error Problem in a Popular Risk-Aversion Task? By Fabien Perez; Guillaume Hollard; Radu Vranceanu; Delphine Dubart
  6. Double-Robust Identification for Causal Panel Data Models By Dmitry Arkhangelsky; Guido W. Imbens
  7. Distinguishing Incentive from Selection Effects in Auction-Determined Contracts By Laurent LAMY; Manasa PATNAM; Michael VISSER
  8. Alternative Estimators for the FDI Gravity Model: An Application to German Outward FDI By Mariam Camarero; Laura Montolio; Cecilio Tamarit
  9. Persistent zeros: The extensive margin of trade By Hinz, Julian; Stammann, Amrei; Wanner, Joschka
  10. Imposing equilibrium restrictions in the estimation of dynamic discrete games By Victor Aguirregabiria; Mathieu Marcoux
  11. Proxy-SVAR as a Bridge for Identification with Higher Frequency Data By Andrea Giovanni Gazzani; Alejandro Vicondoa
  12. Estimating and Decomposing Conditional Average Treatment Effects: The Smoking Ban in England By Robson, M.;; Doran, T.;; Cookson, R.;
  13. Asymptotic post-selection inference for Akaike’s information criterion By Ali Charkhi; Gerda Claeskens
  14. Reusing Natural Experiments By Heath, Davidson; Ringgenberg, Matthew C.; Samadi, Mehrdad; Werner, Ingrid M.
  15. Distributional conformal prediction By Victor Chernozhukov; Kaspar Wüthrich; Yinchu Zhu
  16. The Memory of Beta Factors By Becker, Janis; Hollstein, Fabian; Prokopczuk, Marcel; Sibbertsen, Philipp
  17. Latent Heterogeneity in the Marginal Propensity to Consume By Daniel Lewis; Davide Melcangi; Laura Pilossoph

  1. By: Becker, Janis; Leschinski, Christian; Sibbertsen, Philipp
    Abstract: This paper derives a multivariate local Whittle estimator for the memory parameter of a possibly long-memory process and for the fractional cointegration vector that is robust to low-frequency contaminations. Like many other local Whittle-based procedures, this estimator requires a priori knowledge of the cointegration rank. Since low-frequency contaminations bias inference on the cointegration rank, we also provide a robust estimator of the rank. As both estimators are based on the trimmed periodogram, we further derive insights into the behaviour of the periodogram of a process under very general types of low-frequency contamination. An extensive Monte Carlo exercise demonstrates the applicability of our estimators in finite samples. Applying our procedures to the realized betas of two American energy companies, we find that the series are fractionally cointegrated; because the series exhibit low-frequency contaminations, standard procedures are unable to detect this relation.
    Keywords: Multivariate Long Memory; Fractional Cointegration; Random Level Shifts; Semiparametric Estimation
    JEL: C13 C32
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-660&r=all
  2. By: Federico Martellosio; Grant Hillier
    Abstract: One simple, and often very effective, way to attenuate the impact of nuisance parameters on maximum likelihood estimation of a parameter of interest is to recenter the profile score for that parameter. We apply this general principle to the quasi-maximum likelihood estimator (QMLE) of the autoregressive parameter $\lambda$ in a spatial autoregression. The resulting estimator for $\lambda$ has better finite sample properties compared to the QMLE for $\lambda$, especially in the presence of a large number of covariates. It can also solve the incidental parameter problem that arises, for example, in social interaction models with network fixed effects, or in spatial panel models with individual or time fixed effects. However, spatial autoregressions present specific challenges for this type of adjustment, because recentering the profile score may cause the adjusted estimate to be outside the usual parameter space for $\lambda$. Conditions for this to happen are given, and implications are discussed. For inference, we propose confidence intervals based on a Lugannani--Rice approximation to the distribution of the adjusted QMLE of $\lambda$. Based on our simulations, the coverage properties of these intervals are excellent even in models with a large number of covariates.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.08141&r=all
  3. By: Heiss, Florian; Hetzenecker, Stephan; Osterhaus, Maximilian
    Abstract: This paper investigates and extends the computationally attractive nonparametric random coefficients estimator of Fox, Kim, Ryan, and Bajari (2011). We show that their estimator is a special case of the nonnegative LASSO, explaining its sparse nature observed in many applications. Recognizing this link, we extend the estimator, transforming it to a special case of the nonnegative elastic net. The extension improves the estimator's recovery of the true support and allows for more accurate estimates of the random coefficients' distribution. Our estimator is a generalization of the original estimator and is therefore guaranteed to have a model fit at least as good as the original one. A theoretical analysis of both estimators' properties shows that, under certain conditions, our generalized estimator approximates the true distribution more accurately. Two Monte Carlo experiments and an application to a travel mode data set illustrate the improved performance of the generalized estimator.
    Keywords: Random Coefficients,Mixed Logit,Nonparametric Estimation,Elastic Net
    JEL: C14 C25 L
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:dicedp:326&r=all
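    Illustration: the link to the nonnegative elastic net can be sketched with a generic penalized regression. The snippet below is a minimal projected-gradient solver for the nonnegative elastic net objective on synthetic data; it is not the authors' implementation, and the grid-of-types structure of the Fox et al. estimator is abstracted into an ordinary design matrix.

```python
import numpy as np

def nn_elastic_net(X, y, lam1=1e-3, lam2=1e-3, n_iter=5000):
    """Projected-gradient solver for the nonnegative elastic net:
    min_{w >= 0} ||y - Xw||^2 / (2n) + lam1 * sum(w) + lam2/2 * ||w||^2.
    Setting lam2 = 0 gives the nonnegative LASSO special case."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n + lam2   # Lipschitz constant of the smooth part
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ w) / n + lam2 * w
        w = np.maximum(0.0, w - (grad + lam1) / L)  # gradient step, then projection
    return w

rng = np.random.default_rng(0)
n, p = 400, 60
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [0.5, 0.3, 0.2]                   # sparse nonnegative weights
y = X @ w_true + 0.01 * rng.normal(size=n)
w_hat = nn_elastic_net(X, y)                   # recovers the sparse support
```

The nonnegativity constraint is what produces the sparsity observed in applications of the original estimator; the ridge term (lam2) stabilizes the recovered support.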
  4. By: Denisa Banulescu (University of Orleans; Maastricht School of Business and Economics); Christophe Hurlin (University of Orleans); Jeremy Leymarie (University of Orleans); O. Scaillet (University of Geneva GSEM and GFRI; Swiss Finance Institute; University of Geneva - Research Center for Statistics)
    Abstract: This paper proposes an original approach for backtesting systemic risk measures. This backtesting approach makes it possible to assess the systemic risk measure forecasts used to identify the financial institutions that contribute the most to the overall risk in the financial system. Our procedure is based on simple tests similar to those generally used to backtest the standard market risk measures such as value-at-risk or expected shortfall. We introduce a concept of violation associated with the marginal expected shortfall (MES), and we define unconditional coverage and independence tests for these violations. We can generalize these tests to any MES-based systemic risk measures such as SES, SRISK, or ∆CoVaR. We study their asymptotic properties in the presence of estimation risk and investigate their finite sample performance via Monte Carlo simulations. An empirical application is then carried out to check the validity of the MES, SRISK, and ∆CoVaR forecasts issued from a GARCH-DCC model for a panel of U.S. financial institutions. Our results show that this model is able to produce valid forecasts for the MES and SRISK when considering a medium-term horizon. Finally, we propose an original early warning system indicator for future systemic crises deduced from these backtests. We then define an adjusted systemic risk measure that takes into account the potential misspecification of the risk model.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1948&r=all
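    Illustration: the unconditional coverage tests referenced in the abstract follow the logic of the standard Kupiec (1995) likelihood-ratio backtest for a 0/1 violation series. A minimal sketch with hypothetical violation counts:

```python
import math

def kupiec_lr(n_violations, n_obs, p):
    """Kupiec unconditional-coverage LR statistic: under H0 the violation
    indicators are Bernoulli(p), and the statistic is asymptotically
    chi-squared with 1 degree of freedom."""
    n1, n0 = n_violations, n_obs - n_violations
    pi_hat = n1 / n_obs

    def loglik(q):
        return n0 * math.log(1 - q) + n1 * math.log(q)

    return -2.0 * (loglik(p) - loglik(pi_hat))

# Hypothetical backtest: 250 days, 5% target coverage, 19 violations observed
lr = kupiec_lr(19, 250, 0.05)   # compare against the chi2(1) critical value 3.84
```

The paper extends this style of test to violations defined from the MES and adds independence tests plus corrections for estimation risk, which the sketch above ignores.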
  5. By: Fabien Perez (ENSAE - Ecole Nationale de la Statistique et de l'Analyse Economique - Ecole Nationale de la Statistique et de l'Analyse Economique); Guillaume Hollard (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Radu Vranceanu (THEMA - Théorie économique, modélisation et applications - UCP - Université de Cergy Pontoise - Université Paris-Seine - CNRS - Centre National de la Recherche Scientifique); Delphine Dubart (ESSEC Business School - Essec Business School)
    Abstract: This paper uses the test/retest data from the Holt and Laury (2002) experiment to provide estimates of the measurement error in this popular risk-aversion task. Maximum likelihood estimation suggests that the variance of the measurement error is approximately equal to the variance of the number of safe choices. Simulations confirm that the coefficient on the risk measure in univariate OLS regressions is approximately half of its true value. Unlike measurement error, the discrete transformation of continuous risk-aversion is not a major issue. We discuss the merits of a number of different solutions: increasing the number of observations, instrumental variables (IV), and the ORIV method developed by Gillen et al. (2019).
    Keywords: ORIV,Experiments,Measurement error,Risk-aversion,Test/retest
    Date: 2019–09–17
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:hal-02291224&r=all
  6. By: Dmitry Arkhangelsky; Guido W. Imbens
    Abstract: We study identification and estimation of causal effects of a binary treatment in settings with panel data. We highlight that there are two paths to identification in the presence of unobserved confounders: first, the conventional path, based on assumptions about the relation between the potential outcomes and the unobserved confounders; second, a design-based path, based on assumptions about the relation between the treatment assignment and the confounders. We introduce different sets of assumptions that follow the two paths, and develop double-robust approaches to identification that exploit both, similar in spirit to the double-robust approaches to estimation in the program evaluation literature.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.09412&r=all
  7. By: Laurent LAMY (CIRED, Ecole des Ponts, ParisTech.); Manasa PATNAM (IMF.); Michael VISSER (CREST; ENSAE; CRED, University of Paris 2.)
    Abstract: This paper develops a novel approach to estimate how contract and principal-agent characteristics influence an ex-post performance outcome when the matching between agents and principals derives from an auction process. We propose a control-function approach to account for the endogeneity of contracts and matching. It consists of, first, estimating the primitives of an interdependent-values auction model, which is shown to be nonparametrically identified from the bidding data, and, second, constructing control functions based on the distribution of the unobserved private signals conditional on the auction outcome. A Monte Carlo study shows that our augmented outcome equation corrects well for the endogeneity biases, even in small samples. We apply our methodology to a labor market application: we estimate the effect of sports players’ auction-determined wages on their individual performance.
    Keywords: Econometrics of Contracts, Econometrics of Auctions; Structural Econometrics; Endogenous Matching; Polychotomous Sample Selection; Wage-Performance Elasticity.
    JEL: C01 C29 D44 M52
    Date: 2019–09–20
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2019-15&r=all
  8. By: Mariam Camarero (Jaume I University. Department of Economics, Av. de Vicent Sos Baynat s/n, E-12071 Castellón, Spain); Laura Montolio (University of Valencia, Department of Applied Economics II, Av. dels Tarongers, s/n Eastern Department Building E-46022 Valencia, Spain); Cecilio Tamarit (University of Valencia, INTECO Joint Research Unit. Department of Applied Economics II. PO Box 22.006 - E-46071 Valencia, Spain)
    Abstract: Despite the sound theoretical foundations of FDI gravity models and their popularity in empirical studies, there is a lack of consensus regarding the econometric specification and estimation of the gravity equation. This paper provides comprehensive empirical evidence on the determinants of German outward FDI, comparing several estimation methods in their multiplicative form. We use four versions of the Generalized Linear Model (GLM), namely Poisson Pseudo Maximum Likelihood (PPML), Gamma Pseudo Maximum Likelihood (GPML), Negative Binomial Pseudo Maximum Likelihood (NBPML), and Gaussian GLM. The results of the empirical application indicate that NBPML is the best-performing estimator, followed by GPML.
    Keywords: FDI determinants; Outward Foreign Direct Investment; Germany; Gravity models; Generalized linear models
    JEL: F21 F23 C13 C33
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:eec:wpaper:1907&r=all
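    Illustration: the PPML estimator compared in the paper fits the multiplicative model E[y|x] = exp(x'beta) by Poisson pseudo-maximum likelihood, which is consistent for any nonnegative flow variable with a correctly specified conditional mean. A minimal Newton-iteration sketch on synthetic data (no convergence safeguards; not the authors' code):

```python
import numpy as np

def ppml(X, y, n_iter=50):
    """Poisson pseudo-maximum likelihood via Newton's method for the
    multiplicative model E[y|x] = exp(x'beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)                 # gradient of the pseudo-log-likelihood
        hess = X.T @ (X * mu[:, None])         # negative Hessian
        beta = beta + np.linalg.solve(hess, score)
    return beta

rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 0.2, -0.1])
y = rng.poisson(np.exp(X @ beta_true))         # multiplicative "flow" model
beta_hat = ppml(X, y)                          # close to beta_true
```

GPML and NBPML differ only in the assumed variance function (and hence the weighting), which is why the estimators can rank differently depending on the pattern of heteroskedasticity in the flows.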
  9. By: Hinz, Julian; Stammann, Amrei; Wanner, Joschka
    Abstract: The extensive margin of bilateral trade exhibits a high level of persistence that cannot be explained by geography or trade policy. We combine a heterogeneous firms model of international trade with bounded productivity with features from the firm dynamics literature to derive expressions for an exporting country's participation in a specific destination market in a given period. The model framework calls for a dynamic binary choice estimator with two or three sets of high-dimensional fixed effects. To mitigate the incidental parameter problem associated with nonlinear fixed effects models, we characterize and implement suitable bias corrections. Extensive simulation experiments confirm the desirable statistical properties of the bias-corrected estimators. Empirically, taking two sources of persistence - true state dependence and unobserved heterogeneity - into account using a dynamic specification, along with appropriate fixed effects and bias corrections, changes the estimated effects considerably: out of the most commonly studied potential determinants (joint WTO membership, common regional trade agreement, and shared currency), only sharing a common currency retains a significant effect on whether two countries trade with each other at all in our preferred estimation.
    Keywords: high-dimensional fixed effects,dynamic panel model,binary choice,incidental parameter bias correction,trade policy
    JEL: C13 C23 F14 F15
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwkwp:2139&r=all
  10. By: Victor Aguirregabiria; Mathieu Marcoux
    Abstract: Imposing equilibrium restrictions provides substantial gains in the estimation of dynamic discrete games. Estimation algorithms imposing these restrictions -- MPEC, NFXP, NPL, and variations -- have different merits and limitations. MPEC guarantees local convergence, but requires the computation of high-dimensional Jacobians. The NPL algorithm avoids the computation of these matrices, but -- in games -- may fail to converge to the consistent NPL estimator. We study the asymptotic properties of the NPL algorithm treating the iterative procedure as performed in finite samples. We find that there are always samples for which the algorithm fails to converge, and this introduces a selection bias. We also propose a spectral algorithm to compute the NPL estimator. This algorithm satisfies local convergence and avoids the computation of Jacobian matrices. We present simulation evidence illustrating our theoretical results and the good properties of the spectral algorithm.
    Keywords: Dynamic discrete game; Estimation algorithm; Convergence; Nested pseudo-likelihood
    JEL: C13 C61 C73
    Date: 2019–09–16
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-646&r=all
  11. By: Andrea Giovanni Gazzani (Bank of Italy); Alejandro Vicondoa (Pontificia Universidad Catolica de Chile)
    Abstract: High-frequency identification around key events has recently solved many puzzles in empirical macroeconomics. This paper proposes a novel methodology, the Bridge Proxy-SVAR, to identify structural shocks in Vector Autoregressions (VARs) by exploiting high-frequency information in a more general framework. Our methodology comprises three steps: (I) identify the structural shocks of interest in high-frequency systems; (II) aggregate the series of high-frequency shocks to a lower frequency using the correct filter; (III) use the aggregated series of shocks as a proxy for the corresponding structural shock in lower-frequency VARs. Both analytically and through simulations, we show that our methodology significantly improves the identification of VARs, recovering the true impact effect. In a first empirical application on US data, we show that financial shocks identified at daily frequency produce macroeconomic effects unambiguously consistent with a demand shock. In a second application, we identify U.S. monetary policy shocks that are highly correlated with the series of monetary policy surprises but, unlike the latter, are invertible and thus valid external instruments for low-frequency VARs.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:855&r=all
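    Illustration: steps (II) and (III) can be sketched in a few lines. Daily structural shocks are summed within the month (the appropriate filter when shocks enter as flows) and the monthly reduced-form residual is projected on the aggregated proxy to recover the impact effect. Entirely synthetic data; the actual Bridge Proxy-SVAR operates on full VAR systems:

```python
import numpy as np

rng = np.random.default_rng(3)
months, days = 600, 21
daily = rng.normal(size=(months, days))               # step (I): identified daily shocks
monthly_shock = daily.sum(axis=1)                     # step (II): aggregate within the month
b_true = 0.8                                          # true impact effect
u = b_true * monthly_shock + rng.normal(size=months)  # monthly VAR residual
b_hat = (monthly_shock @ u) / (monthly_shock @ monthly_shock)  # step (III): proxy regression
```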
  12. By: Robson, M.;; Doran, T.;; Cookson, R.;
    Abstract: We develop a practical method for estimating and decomposing conditional average treatment effects using locally-weighted regressions. We illustrate the method with an application to the smoking ban in England, using a regression discontinuity design based on Health Survey for England data. We estimate average treatment effects conditional on socioeconomic status and decompose these effects by smoking location. Results show that the ban had no effect on the level of active smoking, but significantly reduced average exposure to second-hand smoke among non-smokers by 1.38 hours per week. Our method reveals a complex relationship between socioeconomic status and the effect on passive smoking. Decomposition analysis shows that these effects stem primarily from exposure reductions in pubs, but also from workplace exposure reductions for high-socioeconomic-status individuals.
    Keywords: health inequality; equity; conditional average treatment effects; regression discontinuity; heterogeneity; smoking ban; lwcate;
    JEL: C14 C21 C87 D63 I14 I38
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:19/20&r=all
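    Illustration: the core of a locally-weighted CATE estimate is a kernel-weighted regression of the outcome on a treatment dummy, localized at a value of the conditioning variable. The sketch below assumes randomized treatment and a hypothetical socioeconomic score; the paper's estimator additionally handles the regression-discontinuity design and the decomposition step:

```python
import numpy as np

def cate_at(s, S, D, Y, h=0.1):
    """Kernel-weighted regression of Y on a treatment dummy D, localized at
    conditioning value s, returning the CATE estimate at s. Assumes D is
    (locally) unconfounded given S."""
    w = np.exp(-0.5 * ((S - s) / h) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones_like(S), D])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ Y)
    return beta[1]                               # coefficient on D

rng = np.random.default_rng(4)
n = 20_000
S = rng.uniform(0, 1, size=n)                    # hypothetical socioeconomic score
D = rng.integers(0, 2, size=n).astype(float)     # randomized treatment
Y = S + D * (1.0 + S) + 0.1 * rng.normal(size=n) # true CATE(s) = 1 + s
tau_hat = cate_at(0.5, S, D, Y)                  # estimates roughly 1.5
```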
  13. By: Ali Charkhi; Gerda Claeskens
    Abstract: Ignoring the model selection step in inference after selection is harmful. This paper studies the asymptotic distribution of estimators after model selection using the Akaike information criterion. First, we consider the classical setting in which a true model exists and is included in the candidate set of models. We exploit the overselection property of this criterion in the construction of a selection region, and obtain the asymptotic distribution of estimators and linear combinations thereof conditional on the selected model. The limiting distribution depends on the set of competitive models and on the smallest overparameterized model. Second, we relax the assumption about the existence of a true model, and obtain uniform asymptotic results. We use simulation to study the resulting postselection distributions and to calculate confidence regions for the model parameters. We apply the method to data.
    Keywords: Akaike information criterion, Confidence region, Likelihood model, Model selection, post-selection inference
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:ete:kbiper:616160&r=all
  14. By: Heath, Davidson (University of Utah David Eccles School of Business); Ringgenberg, Matthew C. (University of Utah - Department of Finance); Samadi, Mehrdad (Southern Methodist University (SMU) - Finance Department); Werner, Ingrid M. (The Ohio State University - Fisher College of Business)
    Abstract: Natural experiments are used in empirical research to make causal inferences. After a natural experiment is first used, other researchers often reuse the setting, examining different outcomes based on causal chain arguments. Using simulation evidence combined with two extensively studied natural experiments, business combination laws and the Regulation SHO pilot, we show that the repeated use of a natural experiment significantly increases the likelihood of false discoveries. To correct this, we propose multiple testing methods which account for dependence across tests and we show evidence of their efficacy.
    JEL: G1 G10
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:ecl:ohidic:2019-21&r=all
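    Illustration: a simple familywise-error correction for reusing one natural experiment across many outcomes is Holm's step-down procedure. It is a conservative baseline: the paper's proposed methods additionally account for dependence across tests, which Holm ignores. Hypothetical p-values:

```python
def holm_reject(pvals, alpha=0.05):
    """Holm's step-down multiple-testing correction: sort p-values, compare
    the k-th smallest against alpha/(m-k), and stop rejecting at the first
    failure. Controls the familywise error rate under arbitrary dependence."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] > alpha / (m - k):
            break                      # once one test survives, stop rejecting
        reject[i] = True
    return reject

# Five outcomes studied against the same natural experiment
flags = holm_reject([0.001, 0.04, 0.03, 0.20, 0.008])
```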
  15. By: Victor Chernozhukov; Kaspar Wüthrich; Yinchu Zhu
    Abstract: We propose a robust method for constructing conditionally valid prediction intervals based on regression models for conditional distributions such as quantile and distribution regression. Our approach exploits the probability integral transform and relies on permuting estimated "ranks" instead of regression residuals. Unlike residuals, these ranks are independent of the covariates, which allows us to establish the conditional validity of the resulting prediction intervals under consistent estimation of the conditional distributions. We also establish theoretical performance guarantees under arbitrary model misspecification. The usefulness of the proposed method is illustrated based on two applications. First, we study the problem of predicting daily returns using realized volatility. Second, we consider a synthetic control setting where the goal is to predict a country's counterfactual GDP growth rate based on the contemporaneous GDP growth rates of other countries.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.07889&r=all
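    Illustration: the ranks idea can be sketched in split-conformal form. Probability integral transform (PIT) values u_i = F_hat(Y_i | X_i) are computed on a calibration sample, the conformity score |u - 1/2| is thresholded at a conformal quantile, and F_hat is inverted to obtain the interval. For simplicity F_hat here is a Gaussian model fit by OLS on synthetic data; the paper instead uses quantile and distribution regression:

```python
import numpy as np
from math import erf, sqrt, ceil
from statistics import NormalDist

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(5)
n = 4000
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
train, calib, test = np.split(np.arange(n), [2000, 3000])

# Fit a simple Gaussian conditional model F_hat on the training split
b = (X[train] @ Y[train]) / (X[train] @ X[train])    # OLS slope (no intercept)
sigma = np.std(Y[train] - b * X[train])

# PIT "ranks" on the calibration split and their conformal quantile
u = np.array([norm_cdf((Y[i] - b * X[i]) / sigma) for i in calib])
scores = np.abs(u - 0.5)
alpha = 0.1
k = ceil((len(calib) + 1) * (1 - alpha))
q = np.sort(scores)[k - 1]

# Invert F_hat over u in [0.5 - q, 0.5 + q] to form the prediction interval
lo = b * X[test] + sigma * NormalDist().inv_cdf(0.5 - q)
hi = b * X[test] + sigma * NormalDist().inv_cdf(0.5 + q)
covered = ((Y[test] >= lo) & (Y[test] <= hi)).mean()   # close to 1 - alpha
```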
  16. By: Becker, Janis; Hollstein, Fabian; Prokopczuk, Marcel; Sibbertsen, Philipp
    Abstract: Researchers and practitioners employ a variety of time-series processes to forecast betas, using either short-memory models or implicitly imposing infinite memory. We find that both approaches are inadequate: beta factors show consistent long-memory properties. For the vast majority of stocks, we reject both the short-memory and difference-stationary (random walk) alternatives. A pure long-memory model reliably provides superior beta forecasts compared to all alternatives. Finally, we document the relation of firm characteristics with the forecast error differentials that result from inadequately imposing short-memory or random walk instead of long-memory processes.
    Keywords: Long memory; beta; persistence; forecasting; predictability
    JEL: C58 G15 G12 G11
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-661&r=all
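    Illustration: a standard way to measure the memory of a series such as a beta factor is the Geweke-Porter-Hudak (GPH) log-periodogram regression, where d near 0 indicates short memory, 0 < d < 0.5 stationary long memory, and d = 1 a random walk. A minimal sketch without trimming or bias refinements (not the authors' estimator), applied to synthetic short-memory and random-walk series:

```python
import numpy as np

def gph_d(x, m=None):
    """GPH log-periodogram estimate of the memory parameter d: regress
    log I(lambda_j) on -2*log(lambda_j) over the first m Fourier frequencies."""
    n = len(x)
    if m is None:
        m = int(n ** 0.65)                       # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -2 * np.log(freqs)
    reg_c = reg - reg.mean()
    return (reg_c @ np.log(I)) / (reg_c @ reg_c)  # OLS slope = d estimate

rng = np.random.default_rng(6)
d_noise = gph_d(rng.normal(size=8192))            # short memory: d near 0
d_walk = gph_d(np.cumsum(rng.normal(size=8192)))  # random walk: d near 1
```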
  17. By: Daniel Lewis (Federal Reserve Bank of New York); Davide Melcangi (Federal Reserve Bank of New York); Laura Pilossoph (Federal Reserve Bank of New York)
    Abstract: Recent work highlights the importance of heterogeneity in marginal propensities to consume (MPCs) out of transitory income shocks for fiscal policy, the transmission of monetary policy, and welfare. In this paper, we construct an estimator for individual MPCs using the Grouped Marginal Effects Estimator (GMEE), which optimally groups households together that have similar latent MPCs. The approach we propose is agnostic about the source of heterogeneity and estimates distinct MPCs as well as which households display these distinct propensities. We apply the estimator to the 2008 tax rebate and household consumption data from the Consumer Expenditure Survey (CEX), exploiting the randomized timing of payments as previously done in the literature. Our approach uncovers a large degree of heterogeneity in household MPCs, and permits the identification of observable characteristics that predict household MPCs.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:519&r=all

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.