nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒09‒21
27 papers chosen by
Sune Karlsson
Örebro universitet

  1. Consistent Misspecification Testing in Spatial Autoregressive Models By Jungyoon Lee; Peter C.B. Phillips; Francesca Rossi
  2. Instrument Validity for Heterogeneous Causal Effects By Zhenting Sun
  3. A Robust Score-Driven Filter for Multivariate Time Series By Enzo D'Innocenzo; Alessandra Luati; Mario Mazzocchi
  4. Two-Stage Instrumental Variable Estimation of Linear Panel Data Models with Interactive Effects By Guowei Cui; Milda Norkuté; Vasilis Sarafidis; Takashi Yamagata
  5. Robust Semiparametric Estimation in Panel Multinomial Choice Models By Wayne Yuan Gao; Ming Li
  6. Data driven value-at-risk forecasting using a SVR-GARCH-KDE hybrid By Marius Lux; Wolfgang Karl Härdle; Stefan Lessmann
  7. Portfolio Efficiency Tests with Conditioning Information - Comparing GMM and GEL Estimators By Caio Vigo Pereira; Marcio Laurini
  8. Decomposing Identification Gains and Evaluating Instrument Identification Power for Partially Identified Average Treatment Effects By Lina Zhang; David T. Frazier; D. S. Poskitt; Xueyan Zhao
  9. Powerful Inference By Xiaohong Chen; Sokbae Lee; Myung Hwan Seo
  10. EM estimation for the Poisson-Inverse Gamma regression model with varying dispersion: an application to insurance ratemaking By Tzougas, George
  11. Understanding the Estimation of Oil Demand and Oil Supply Elasticities By Lutz Kilian
  12. Doubly Robust Semiparametric Difference-in-Differences Estimators with High-Dimensional Data By Yang Ning; Sida Peng; Jing Tao
  13. Heterogeneous Coefficients, Control Variables, and Identification of Treatment Effects By Whitney K. Newey; Sami Stouli
  14. The Identity Fragmentation Bias By Tesary Lin; Sanjog Misra
  15. Sample size calculation for an ordered categorical outcome By Ian R. White; Ella Marley-Zagar; Tim Morris; Mahesh K. B. Parmar; Abdel G. Babiker
  16. Two-Stage Maximum Score Estimator By Wayne Yuan Gao; Sheng Xu
  17. Causal Inference in Possibly Nonlinear Factor Models By Yingjie Feng
  18. COVID-19: Tail Risk and Predictive Regressions By Walter Distaso; Rustam Ibragimov; Alexander Semenov; Anton Skrobotov
  19. The role of parallel trends in event study settings: An application to environmental economics By Michelle Marcus; Pedro H. C. Sant'Anna
  20. Dimension Reduction for High Dimensional Vector Autoregressive Models By Gianluca Cubadda; Alain Hecq
  21. An Instrumental Variable Approach to Dynamic Models By Steven T. Berry; Giovanni Compiani
  22. Cointegrating Polynomial Regressions with Power Law Trends: A New Angle on the Environmental Kuznets Curve By Yicong Lin; Hanno Reuvers
  23. Vector autoregressive-based Granger causality test in the presence of instabilities By Rossi, Barbara; Wang, Yiru
  24. Using a Satisficing Model of Experimenter Decision-Making to Guide Finite-Sample Inference for Compromised Experiments By Ganesh Karapakula; James J. Heckman
  25. Identification and Inference in First-Price Auctions with Risk Averse Bidders and Selective Entry By Xiaohong Chen; Matthew Gentry; Tong Li; Jingfeng Lu
  26. On the equivalence between the Kinetic Ising Model and discrete autoregressive processes By Carlo Campajola; Fabrizio Lillo; Piero Mazzarisi; Daniele Tantari
  27. Instrumental Variable Quantile Regression By Victor Chernozhukov; Christian Hansen; Kaspar Wuthrich

  1. By: Jungyoon Lee (Royal Holloway, University of London); Peter C.B. Phillips (Cowles Foundation, Yale University); Francesca Rossi (University of Verona)
    Abstract: Spatial autoregressive (SAR) and related models offer flexible yet parsimonious ways to model spatial or network interaction. SAR specifications typically rely on a particular parametric functional form and an exogenous choice of the so-called spatial weight matrix, with only limited guidance from theory in making these specifications. The choice of a SAR model over alternatives, such as spatial Durbin (SD) or spatial lagged X (SLX) models, is often arbitrary, raising issues of potential specification error. To address such issues, this paper develops an omnibus specification test within the SAR framework that can detect general forms of misspecification, including misspecification of the spatial weight matrix, the functional form, and the model itself. The approach extends the conditional moment testing framework of Bierens (1982, 1990) to the general spatial setting. We derive the asymptotic distribution of our test statistic under the null hypothesis of correct SAR specification and show consistency of the test. A Monte Carlo study examines the finite sample performance of the test. An empirical illustration of the test's performance in modeling tax competition in Finland and Switzerland is included.
    Keywords: Conditional moment test, Misspecification test, Omnibus testing, Spatial AR, Weight matrix misspecification
    JEL: C21 C23
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2256&r=all
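    As background, a minimal sketch of the setup in generic notation (assumed here for illustration, not taken verbatim from the paper): the SAR model and the null of correct specification can be written as
      $$ y = \lambda W y + X\beta + u, \qquad H_0:\; \mathbb{E}[u_i \mid x_i] = 0 \ \text{almost surely}, $$
    and the Bierens approach converts this conditional moment restriction into a continuum of unconditional moments, e.g. $\mathbb{E}[u_i \exp(\xi' x_i)] = 0$ for (almost) all $\xi$, whose sample analogues underlie an omnibus test statistic.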
  2. By: Zhenting Sun
    Abstract: This paper provides a general framework for testing instrument validity in heterogeneous causal effect models. We first generalize the testable implications of the instrument validity assumption provided by Balke and Pearl (1997), Imbens and Rubin (1997), and Heckman and Vytlacil (2005). The generalization covers cases where the treatment can be multivalued (and ordered) or unordered, and where there are conditioning covariates. Based on these testable implications, we propose a nonparametric test which is proved to be asymptotically size controlled and consistent. Because of the nonstandard nature of the problem, the test statistic is constructed from a nonsmooth map, which causes technical complications. We provide an extended continuous mapping theorem and an extended delta method, which may be of independent interest, to establish the asymptotic distribution of the test statistic under the null. We then extend the bootstrap method proposed by Fang and Santos (2018) to approximate this asymptotic distribution and construct a critical value for the test. Compared to the test proposed by Kitagawa (2015), our test can be applied in more general settings and may achieve a power improvement. Evidence that the test performs well in finite samples is provided via simulations. We revisit the empirical study of Card (1993) and use its data to demonstrate the application of the proposed test in practice. We show that a valid instrument for a multivalued treatment may not remain valid if the treatment is coarsened.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.01995&r=all
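    For orientation, in the simplest binary-treatment, binary-instrument case the testable implication being generalized (Balke and Pearl, 1997; Kitagawa, 2015) states that, for every Borel set $B$,
      $$ \Pr(Y \in B, D = 1 \mid Z = 1) \ge \Pr(Y \in B, D = 1 \mid Z = 0), \qquad \Pr(Y \in B, D = 0 \mid Z = 0) \ge \Pr(Y \in B, D = 0 \mid Z = 1). $$
    The paper extends restrictions of this kind to multivalued or unordered treatments and to settings with conditioning covariates.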
  3. By: Enzo D'Innocenzo; Alessandra Luati; Mario Mazzocchi
    Abstract: A novel multivariate score-driven model is proposed to extract signals from noisy vector processes. By assuming that the conditional location vector from a multivariate Student's t distribution changes over time, we construct a robust filter which is able to overcome several issues that naturally arise when modeling heavy-tailed phenomena and, more generally, vectors of dependent non-Gaussian time series. We derive conditions for stationarity and invertibility and estimate the unknown parameters by maximum likelihood. Strong consistency and asymptotic normality of the estimator are proved, and the finite sample properties are illustrated by a Monte Carlo study. From a computational point of view, analytical formulae are derived that allow estimation procedures based on the Fisher scoring method to be developed. The theory is supported by a novel empirical illustration that shows how the model can be effectively applied to estimate consumer prices from home scanner data.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.01517&r=all
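    Score-driven filters of this kind update the time-varying parameter with the (possibly scaled) score of the conditional density; a generic recursion, not necessarily the paper's exact specification, is
      $$ f_{t+1} = \omega + B f_t + A s_t, \qquad s_t = S_t \, \frac{\partial \log p(y_t \mid f_t; \theta)}{\partial f_t}, $$
    where $S_t$ is a scaling matrix. Under a multivariate Student's t density the score is bounded in the prediction error, which is the source of the filter's robustness to outliers.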
  4. By: Guowei Cui; Milda Norkuté; Vasilis Sarafidis; Takashi Yamagata
    Abstract: This paper puts forward a new instrumental variables (IV) approach for linear panel data models with interactive effects in the error term and regressors. The instruments are transformed regressors, so it is not necessary to search for external instruments. The proposed method asymptotically eliminates the interactive effects in the error term and in the regressors separately in two stages. We propose a two-stage IV (2SIV) and a mean-group IV (MGIV) estimator for homogeneous and heterogeneous slope models, respectively. The asymptotic analysis for the models with homogeneous slopes reveals that: (i) the $\sqrt{NT}$-consistent 2SIV estimator is free from asymptotic bias that could arise due to the correlation between the regressors and the estimation error of the interactive effects; (ii) under the same set of assumptions, existing popular estimators, which eliminate interactive effects either jointly in the regressors and the error term, or only in the error term, can suffer from asymptotic bias; (iii) the proposed 2SIV estimator is asymptotically as efficient as the bias-corrected version of estimators that eliminate interactive effects jointly in the regressors and the error, whilst (iv) the relative efficiency of the estimators that eliminate interactive effects only in the error term is indeterminate. A Monte Carlo study confirms the good approximation quality of our asymptotic results and the competitive performance of 2SIV and MGIV in comparison with existing estimators. Furthermore, it demonstrates that the bias-corrections can be imprecise and noticeably inflate the dispersion of the estimators in finite samples.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1101&r=all
  5. By: Wayne Yuan Gao; Ming Li
    Abstract: This paper proposes a robust method for semiparametric identification and estimation in panel multinomial choice models, where we allow for infinite-dimensional fixed effects that enter consumer utilities in an additively nonseparable way, thus incorporating rich forms of unobserved heterogeneity. Our identification strategy exploits multivariate monotonicity in parametric indexes, and uses the logical contraposition of an intertemporal inequality on choice probabilities to obtain identifying restrictions. We provide a consistent estimation procedure, and demonstrate the practical advantages of our method via simulations and an empirical illustration using the Nielsen data.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00085&r=all
  6. By: Marius Lux; Wolfgang Karl Härdle; Stefan Lessmann
    Abstract: Appropriate risk management is crucial to ensure the competitiveness of financial institutions and the stability of the economy. One widely used financial risk measure is Value-at-Risk (VaR). VaR estimates based on linear and parametric models can lead to biased results, or even underestimation of risk, due to time-varying volatility, skewness and leptokurtosis of financial return series. The paper proposes a nonlinear and nonparametric framework to forecast VaR that is motivated by overcoming the disadvantages of parametric models with a purely data-driven approach. Mean and volatility are modeled via support vector regression (SVR), where the volatility model is motivated by the standard generalized autoregressive conditional heteroscedasticity (GARCH) formulation. Based on this, VaR is derived by applying kernel density estimation (KDE). This approach allows for flexible tail shapes of the profit and loss distribution, adapts to a wide class of tail events, and is able to capture complex structures regarding mean and volatility. The SVR-GARCH-KDE hybrid is compared to standard, exponential and threshold GARCH models coupled with different error distributions. To examine the performance in different markets, one-day-ahead and ten-days-ahead forecasts are produced for different financial indices. Model evaluation using a likelihood-ratio-based test framework for interval forecasts and a test for superior predictive ability indicates that the SVR-GARCH-KDE hybrid performs competitively with benchmark models and significantly reduces potential losses, especially for ten-days-ahead forecasts. Models coupled with a normal distribution, in particular, are systematically outperformed.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.06910&r=all
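    To fix ideas, here is a minimal, self-contained Python sketch of the SVR-GARCH-KDE pipeline; the toy data, AR(1)-style regressors, and all tuning values are illustrative assumptions, not the paper's choices:

      import numpy as np
      from sklearn.svm import SVR
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      r = 0.01 * rng.standard_t(df=5, size=1000)   # toy heavy-tailed returns

      # mean equation: SVR of r_t on r_{t-1}
      X_m, y_m = r[:-1].reshape(-1, 1), r[1:]
      svr_mean = SVR(kernel="rbf", C=1.0, epsilon=1e-4).fit(X_m, y_m)
      resid = y_m - svr_mean.predict(X_m)

      # volatility equation: SVR of e_t^2 on e_{t-1}^2, motivated by GARCH
      X_v, y_v = (resid[:-1] ** 2).reshape(-1, 1), resid[1:] ** 2
      svr_vol = SVR(kernel="rbf", C=1.0, epsilon=1e-6).fit(X_v, y_v)
      sigma = np.sqrt(np.clip(svr_vol.predict(X_v), 1e-12, None))

      # KDE of standardized residuals allows flexible tail shapes
      z = resid[1:] / sigma
      q05 = np.quantile(gaussian_kde(z).resample(100_000, seed=1).ravel(), 0.05)

      # one-day-ahead 5% VaR forecast
      mu_n = svr_mean.predict([[r[-1]]])[0]
      sig_n = np.sqrt(max(svr_vol.predict([[resid[-1] ** 2]])[0], 1e-12))
      print(f"one-day-ahead 5% VaR: {mu_n + sig_n * q05:.4f}")

    In a faithful replication, the SVR hyperparameters would be tuned and multi-step forecasts produced as in the paper.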
  7. By: Caio Vigo Pereira (Department of Economics, University of Kansas); Marcio Laurini (Department of Economics, University of Sao Paulo)
    Abstract: We evaluate the use of Generalized Empirical Likelihood (GEL) estimators in portfolio efficiency tests for asset pricing models in the presence of conditional information. Estimators from the GEL family present some optimal statistical properties, such as robustness to misspecification and better properties in finite samples. Unlike GMM, the bias of GEL estimators does not increase with the number of moment conditions included, a feature that matters since many moment conditions are expected in conditional efficiency analysis. By means of Monte Carlo experiments, we show that GEL estimators have better performance in the presence of data contamination, especially under heavy tails and outliers. An extensive empirical analysis shows the properties of the estimators for different sample sizes and portfolio types for two asset pricing models.
    Keywords: Portfolio Efficiency, Conditional Information, Efficiency Tests, GEL, GMM
    JEL: C12 C13 C58 G11 G12
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202014&r=all
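    For reference, the GEL class solves a saddle-point problem over the sample moment conditions $g_i(\theta)$; in generic notation,
      $$ \hat\theta_{\mathrm{GEL}} = \arg\min_{\theta} \, \sup_{\lambda} \, \sum_{i=1}^{n} \rho\big(\lambda' g_i(\theta)\big), $$
    where $\rho(v) = \log(1 - v)$ yields empirical likelihood, $\rho(v) = -\exp(v)$ exponential tilting, and a quadratic $\rho(v) = -(1+v)^2/2$ the continuous-updating estimator (Newey and Smith, 2004).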
  8. By: Lina Zhang; David T. Frazier; D. S. Poskitt; Xueyan Zhao
    Abstract: This paper studies the identification power of instruments for the average treatment effect (ATE) in partially identified binary outcome models with an endogenous binary treatment. We propose a novel approach that measures instrument identification power by the instruments' ability to reduce the width of the ATE bounds. We show that instrument strength, as determined by the extreme values of the conditional propensity score, and its interplay with the degree of endogeneity and with the exogenous covariates all play a role in bounding the ATE. We decompose the ATE identification gains into a sequence of measurable components, and construct a standardized quantitative measure of instrument identification power ($IIP$). The decomposition and the $IIP$ evaluation are illustrated with finite-sample simulation studies and an empirical example of childbearing and women's labor supply. Our simulations show that the $IIP$ is a useful tool for detecting irrelevant instruments.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.02642&r=all
  9. By: Xiaohong Chen; Sokbae Lee; Myung Hwan Seo
    Abstract: We develop an inference method for a (sub)vector of parameters identified by conditional moment restrictions, which are implied by economic models such as rational behavior and Euler equations. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method is optimized to be powerful against a set of local alternatives of interest by solving a data-dependent max-min problem for tuning parameter selection. We demonstrate the efficacy of our method by a proof of concept using two empirical examples: rational unbiased reporting of ability status and the elasticity of intertemporal substitution.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.11140&r=all
  10. By: Tzougas, George
    Abstract: This article presents the Poisson-Inverse Gamma regression model with varying dispersion for approximating heavy-tailed and overdispersed claim counts. Our main contribution is that we develop an Expectation-Maximization (EM) type algorithm for maximum likelihood (ML) estimation of the Poisson-Inverse Gamma regression model with varying dispersion. The empirical analysis examines a portfolio of motor insurance data in order to investigate the efficiency of the proposed algorithm. Finally, both the a priori and a posteriori, or Bonus-Malus, premium rates that are determined by the Poisson-Inverse Gamma model are compared to those that result from the classic Negative Binomial Type I and the Poisson-Inverse Gaussian distributions with regression structures for their mean and dispersion parameters.
    Keywords: poisson-inverse gamma distribution; em algorithm; regression models for mean and dispersion parameters; motor third party liability insurance; ratemaking
    JEL: C1
    Date: 2020–09–11
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:106539&r=all
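    In generic notation (an illustrative reading, not necessarily the paper's exact parameterization), the model is a mixed Poisson regression with multiplicative unobserved heterogeneity,
      $$ N_i \mid \lambda_i \sim \mathrm{Poisson}(\mu_i \lambda_i), \qquad \lambda_i \sim \mathrm{Inverse\text{-}Gamma}, \qquad \mu_i = \exp(x_i' \beta), $$
    and the EM algorithm treats the latent $\lambda_i$ as missing data, alternating between posterior expectations of functions of $\lambda_i$ (E-step) and weighted regression-type updates of the mean and dispersion parameters (M-step).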
  11. By: Lutz Kilian
    Abstract: This paper examines the advantages and drawbacks of alternative methods of estimating oil supply and oil demand elasticities and of incorporating this information into structural VAR models. I not only summarize the state of the literature, but also draw attention to a number of econometric problems that have been overlooked in this literature. Once these problems are recognized, seemingly conflicting conclusions in the recent literature can be resolved. My analysis reaffirms the conclusion that the one-month oil supply elasticity is close to zero, which implies that oil demand shocks are the dominant driver of the real price of oil. The focus of this paper is not only on correcting some misunderstandings in the recent literature, but also on the substantive and methodological insights generated by this exchange, which are of broader interest to applied researchers.
    Keywords: oil supply elasticity; oil demand elasticity; IV estimation; structural VAR; Bayesian inference; oil price; gasoline price
    JEL: Q43 Q41 C36 C52
    Date: 2020–09–03
    URL: http://d.repec.org/n?u=RePEc:fip:feddwp:88693&r=all
  12. By: Yang Ning; Sida Peng; Jing Tao
    Abstract: This paper proposes a doubly robust two-stage semiparametric difference-in-differences estimator for estimating heterogeneous treatment effects with high-dimensional data. Our new estimator is robust to model misspecification and allows for, but does not require, many more regressors than observations. The first stage allows a general set of machine learning methods to be used to estimate the propensity score. In the second stage, we derive the rates of convergence for both the parametric parameter and the unknown function under a partially linear specification for the outcome equation. We also provide bias correction procedures to allow for valid inference on the heterogeneous treatment effects. We evaluate the finite sample performance with extensive simulation studies. Additionally, a real data analysis of the effect of the Fair Minimum Wage Act on the unemployment rate is performed as an illustration of our method. An R package implementing the proposed method is available on GitHub.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.03151&r=all
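    As background, a population version of the doubly robust DiD moment for the average treatment effect on the treated used in this literature (generic notation, in the spirit of Sant'Anna and Zhao, 2020; not necessarily this paper's exact estimator) is
      $$ \tau_{\mathrm{ATT}} = \mathbb{E}\left[ \left( \frac{D}{p} - \frac{(1 - D)\,\pi(X)}{p\,(1 - \pi(X))} \right) \big( \Delta Y - \mu_{\Delta}(X) \big) \right], \qquad p = \Pr(D = 1), $$
    which remains valid if either the propensity score $\pi(X)$ or the control outcome-evolution model $\mu_{\Delta}(X) = \mathbb{E}[\Delta Y \mid D = 0, X]$ is correctly specified.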
  13. By: Whitney K. Newey; Sami Stouli
    Abstract: Multidimensional heterogeneity and endogeneity are important features of models with multiple treatments. We consider a heterogeneous coefficients model where the outcome is a linear combination of dummy treatment variables, with each variable representing a different kind of treatment. We use control variables to give necessary and sufficient conditions for identification of average treatment effects. With mutually exclusive treatments we find that, provided the generalized propensity scores (Imbens, 2000) are bounded away from zero with probability one, a simple identification condition is that their sum be bounded away from one with probability one. These results generalize the classical identification result of Rosenbaum and Rubin (1983) for binary treatments.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.02314&r=all
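    In the notation of the abstract, with mutually exclusive treatments $D_1, \dots, D_J$ and generalized propensity scores $p_j(X) = \Pr(D_j = 1 \mid X)$, the identification condition reads
      $$ p_j(X) \ge \epsilon \ \ (j = 1, \dots, J) \quad \text{and} \quad \sum_{j=1}^{J} p_j(X) \le 1 - \epsilon \quad \text{with probability one, for some } \epsilon > 0. $$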
  14. By: Tesary Lin; Sanjog Misra
    Abstract: Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same individual. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the identity fragmentation bias, referring to the estimation bias resulting from the use of fragmented data. Using a formal framework, we decompose the factors contributing to the estimation bias caused by data fragmentation and discuss the direction of the bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then propose and compare several corrective measures, and demonstrate their performance using an empirical application.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12849&r=all
  15. By: Ian R. White (MRC Clinical Trials Unit at University College London); Ella Marley-Zagar (MRC Clinical Trials Unit at University College London); Tim Morris (MRC Clinical Trials Unit at University College London); Mahesh K. B. Parmar (MRC Clinical Trials Unit at University College London); Abdel G. Babiker (MRC Clinical Trials Unit at University College London)
    Abstract: We describe a new command, artcat, to calculate sample size or power for a clinical trial or similar experiment with an ordered categorical outcome, where analysis is by the proportional odds model. The command implements an existing method and a new one. The existing method is that of Whitehead (1993). The new method is based on creating a weighted data set containing the expected counts per person and analysing it with ologit. We show how the weighted data set can be used to compute variances under the null and alternative hypotheses and hence to produce a more accurate calculation. We also show that the new method can be extended to handle non-inferiority trials and settings where the proportional odds model does not fit the expected data. We illustrate the command and explore the value of an ordered categorical outcome over a binary outcome in various settings. We show by simulation that the methods perform well and are very similar when treatment effects are moderate. With very large treatment effects, the new method is a little more accurate than Whitehead's method. The new method also applies to the case of a binary outcome, and we show that it compares favourably with the official power command and the community-contributed artbin command.
    Date: 2020–09–11
    URL: http://d.repec.org/n?u=RePEc:boc:usug20:08&r=all
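    A minimal sketch of Whitehead's (1993) calculation in Python, assuming 1:1 allocation and a proportional-odds effect (artcat itself is a Stata command; this translation and the example inputs are illustrative assumptions):

      import numpy as np
      from scipy.stats import norm

      def whitehead_total_n(p_control, odds_ratio, alpha=0.05, power=0.90):
          """Total sample size for an ordered categorical outcome analysed
          by proportional odds (Whitehead 1993, 1:1 allocation)."""
          p_c = np.asarray(p_control, dtype=float)
          cum_c = np.cumsum(p_c)[:-1]          # control cumulative probabilities
          # shift the cumulative odds by the target odds ratio
          cum_t = odds_ratio * cum_c / (1 - cum_c + odds_ratio * cum_c)
          p_t = np.diff(np.concatenate(([0.0], cum_t, [1.0])))
          p_bar = (p_c + p_t) / 2              # average category probabilities
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 12 * z**2 / (np.log(odds_ratio) ** 2 * (1 - np.sum(p_bar**3)))

      # four categories with control probabilities 0.2/0.3/0.3/0.2, OR = 1.5
      print(round(whitehead_total_n([0.2, 0.3, 0.3, 0.2], odds_ratio=1.5)))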
  16. By: Wayne Yuan Gao; Sheng Xu
    Abstract: This paper considers the asymptotic theory of a semiparametric M-estimator that is generally applicable to models satisfying a monotonicity condition in one or several parametric indexes. We call it the two-stage maximum score (TSMS) estimator, since it involves a first-stage nonparametric regression when applied to the binary choice model of Manski (1975, 1985). We characterize the asymptotic distribution of the TSMS estimator, which features phase transitions depending on the dimension, and thus the convergence rate, of the first-stage estimation. We show that the TSMS estimator is asymptotically equivalent to the smoothed maximum score estimator (Horowitz, 1992) when the dimension of the first-stage estimation is relatively low, while still achieving partial rate acceleration relative to the cubic-root rate when the dimension is not too high. Effectively, the first-stage nonparametric estimator serves as an imperfect smoothing function on the non-smooth criterion function, so that the first-stage estimation error is pivotal for the second-stage convergence rate and asymptotic distribution.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.02854&r=all
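    A stylized reading of the two stages (an illustrative sketch, not the paper's exact criterion): first estimate the choice probability $p(x) = \mathbb{E}[y \mid x]$ nonparametrically, then maximize a Manski-type score with the fitted probabilities plugged in,
      $$ \hat\beta = \arg\max_{b} \sum_{i=1}^{n} \big( \hat p(x_i) - \tfrac{1}{2} \big) \, \mathbf{1}\{ x_i' b \ge 0 \}, $$
    so that the smoothness of $\hat p$ plays the role of the "imperfect smoothing" of the otherwise non-smooth score described in the abstract.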
  17. By: Yingjie Feng
    Abstract: This paper develops a general causal inference method for treatment effects models under selection on unobservables. A large set of covariates that admits an unknown, possibly nonlinear factor structure is exploited to control for the latent confounders. The key building block is a local principal subspace approximation procedure that combines $K$-nearest neighbors matching and principal component analysis. Estimators of many causal parameters, including average treatment effects and counterfactual distributions, are constructed based on doubly-robust score functions. Large-sample properties of these estimators are established, which only require relatively mild conditions on the principal subspace approximation. The results are illustrated with an empirical application studying the effect of political connections on stock returns of financial firms, and a Monte Carlo experiment. The main technical and methodological results regarding the general local principal subspace approximation method may be of independent interest.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.13651&r=all
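    A stylized sketch of the KNN-plus-PCA building block in Python (an illustrative reading under simplifying assumptions, not the paper's procedure; data and tuning values are made up):

      import numpy as np
      from sklearn.neighbors import NearestNeighbors
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 50))             # large covariate matrix

      def local_principal_scores(X, i, k=50, r=3):
          """Approximate unit i's latent factor position by PCA on its
          k nearest neighbours in covariate space."""
          nn = NearestNeighbors(n_neighbors=k).fit(X)
          idx = nn.kneighbors(X[i:i + 1], return_distance=False).ravel()
          pca = PCA(n_components=r).fit(X[idx])  # local principal subspace
          return pca.transform(X[i:i + 1])       # unit i's local scores

      print(local_principal_scores(X, i=0))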
  18. By: Walter Distaso; Rustam Ibragimov; Alexander Semenov; Anton Skrobotov
    Abstract: Reliable analysis and forecasting of the spread of the COVID-19 pandemic and of its impacts on global finance and the world's economies requires econometrically justified and robust methods. At the same time, statistical and econometric analysis of financial and economic markets and of the spread of COVID-19 is complicated by the inherent potential non-stationarity, dependence, heterogeneity and heavy-tailedness of the data. This project provides an econometrically justified, robust analysis of the effects of the COVID-19 pandemic on financial markets in different countries across the world. Among other results, the study develops robust inference in predictive regressions for these markets. We also present a detailed study of the persistence, heavy-tailedness and tail risk properties of the time series of COVID-19 death rates, which motivates the use of robust inference methods in the analysis. The econometric analysis is based on heteroskedasticity and autocorrelation consistent (HAC) inference methods, related approaches using consistent standard errors, recently developed robust $t$-statistic inference procedures, and robust tail index estimation approaches.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.02486&r=all
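    As one concrete ingredient, a minimal Hill estimator of the tail index, a standard tool in this literature (the data below are simulated for illustration, not the paper's):

      import numpy as np

      def hill_tail_index(x, k):
          """Hill estimator of the tail index from the k largest observations."""
          x = np.sort(np.asarray(x, dtype=float))
          return k / np.sum(np.log(x[-k:] / x[-k - 1]))

      # |t(3)| draws have a tail index close to 3
      r = np.abs(np.random.default_rng(0).standard_t(df=3, size=5000))
      print(hill_tail_index(r, k=250))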
  19. By: Michelle Marcus; Pedro H. C. Sant'Anna
    Abstract: Difference-in-Differences (DID) research designs usually rely on variation in treatment timing such that, after making an appropriate parallel trends assumption, one can identify, estimate, and make inference about causal effects. In practice, however, different DID procedures rely on different parallel trends assumptions (PTA) and recover different causal parameters. In this paper, we focus on staggered DID (also referred to as event studies) and discuss the role played by the PTA in the identification and estimation of causal parameters. We document a "robustness" vs. "efficiency" trade-off in terms of the strength of the underlying PTA, and argue that practitioners should be explicit about these trade-offs whenever using DID procedures. We propose new DID estimators that reflect these trade-offs and derive their large-sample properties. We illustrate the practical relevance of these results by assessing whether the transition from federal to state management of the Clean Water Act affects compliance rates.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.01963&r=all
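    One common form of the PTA in staggered settings (generic notation; the paper discusses several variants) compares a treatment cohort $g$ with a never-treated group $C$:
      $$ \mathbb{E}\big[ Y_t(0) - Y_{t-1}(0) \mid G = g \big] = \mathbb{E}\big[ Y_t(0) - Y_{t-1}(0) \mid C = 1 \big]. $$
    Stronger versions impose this for all periods $t$ and weaker ones only post-treatment; the robustness-efficiency trade-off in the abstract is precisely over how many such restrictions one is willing to impose.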
  20. By: Gianluca Cubadda; Alain Hecq
    Abstract: This paper aims to decompose a large dimensional vector autoregressive (VAR) model into two components, the first generated by a small-scale VAR and the second a white noise sequence. Hence, a reduced number of common factors generates the entire dynamics of the large system through a VAR structure. This modelling extends the common feature approach to high dimensional systems, and it differs from dynamic factor models, in which the idiosyncratic components can also embed a dynamic pattern. We show the conditions under which this decomposition exists, and we provide statistical tools to detect its presence in the data and to estimate the parameters of the underlying small-scale VAR model. We evaluate the practical value of the proposed methodology by simulations as well as by empirical applications to both economic and financial time series.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.03361&r=all
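    In generic notation (assumed here for illustration), the decomposition writes the large $N$-dimensional system as
      $$ y_t = \Lambda f_t + \varepsilon_t, \qquad f_t = \Phi_1 f_{t-1} + \dots + \Phi_p f_{t-p} + \eta_t, $$
    with $f_t$ a small $q$-dimensional VAR ($q \ll N$) and $\varepsilon_t$ white noise, so that, unlike in dynamic factor models, all serial dependence is channelled through the common component.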
  21. By: Steven T. Berry; Giovanni Compiani
    Abstract: We present a new class of methods for identification and inference in dynamic models with serially correlated unobservables, which typically imply that state variables are econometrically endogenous. In the context of Industrial Organization, these state variables often reflect econometrically endogenous market structure. We propose the use of Generalized Instrumental Variables methods to identify those dynamic policy functions that are consistent with instrumental variable (IV) restrictions. Extending popular "two-step" methods, these policy functions then identify a set of structural parameters that are consistent with the dynamic model, the IV restrictions and the data. We provide computed illustrations for both single-agent and oligopoly examples. We also present a simple empirical analysis that, among other things, supports the counterfactual study of an environmental policy entailing an increase in sunk costs.
    JEL: C26 C57 L1
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27756&r=all
  22. By: Yicong Lin; Hanno Reuvers
    Abstract: The Environmental Kuznets Curve (EKC) predicts an inverted U-shaped relationship between economic growth and environmental pollution. Current analyses frequently employ models which restrict the nonlinearities in the data to be explained by the economic growth variable only. We propose a Generalized Cointegrating Polynomial Regression (GCPR) with flexible time trends to proxy time effects such as technological progress and/or environmental awareness. More specifically, a GCPR includes flexible powers of deterministic trends and integer powers of stochastic trends. We estimate the GCPR by nonlinear least squares and derive its asymptotic distribution. Endogeneity of the regressors can introduce nuisance parameters into this limiting distribution, but a simulation-based approach nevertheless enables us to conduct valid inference. Moreover, a subsampling KPSS test can be used to check the stationarity of the errors. A comprehensive simulation study shows good performance of the simulated inference approach and the subsampling KPSS test. We illustrate the GCPR approach on a dataset of 18 industrialised countries containing GDP and CO2 emissions. We conclude that: (1) the evidence for an EKC is significantly reduced when a nonlinear time trend is included, and (2) a linear cointegrating relation between GDP and CO2 around a power law trend also provides an accurate description of the data.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.02262&r=all
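    A generic GCPR specification along the lines described (notation assumed for illustration) is
      $$ y_t = \mu + \sum_{k=1}^{K} \beta_k \, t^{\gamma_k} + \sum_{j=1}^{J} \delta_j \, x_t^{\,j} + u_t, $$
    with $x_t$ an integrated regressor (log GDP in the application), integer powers $j$ capturing the Kuznets-type nonlinearity, and real powers $\gamma_k$ of the deterministic trend estimated jointly with the other parameters by nonlinear least squares.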
  23. By: Rossi, Barbara; Wang, Yiru
    Abstract: In this article, we review Granger-causality tests that are robust to the presence of instabilities in a vector autoregressive framework. We also introduce the gcrobustvar command, which implements the procedure in Stata. In the presence of instabilities, the robust Granger-causality test is more powerful than the traditional Granger-causality test.
    Keywords: gcrobustvar, Granger-causality, VAR, instability, structural breaks, local projections
    JEL: C22 C52 C53
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:101492&r=all
  24. By: Ganesh Karapakula; James J. Heckman
    Abstract: This paper presents a simple decision-theoretic economic approach for analyzing social experiments with compromised random assignment protocols that are only partially documented. We model administratively constrained experimenters who satisfice in seeking covariate balance. We develop design-based small-sample hypothesis tests that use worst-case (least favorable) randomization null distributions. Our approach accommodates a variety of compromised experiments, including imperfectly documented re-randomization designs. To make our analysis concrete, we focus much of our discussion on the influential Perry Preschool Project. We reexamine previous estimates of program effectiveness using our methods. The choice of how to model reassignment vitally affects inference.
    JEL: C01 C4 I21
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27738&r=all
  25. By: Xiaohong Chen (Cowles Foundation, Yale University); Matthew Gentry; Tong Li; Jingfeng Lu
    Abstract: We study identification and inference in first-price auctions with risk averse bidders and selective entry, building on a flexible entry and bidding framework we call the Affiliated Signal with Risk Aversion (AS-RA) model. Assuming that the econometrician observes either exogenous variation in the number of potential bidders (N) or a continuous instrument (z) shifting opportunity costs of entry, we provide a sharp characterization of the nonparametric restrictions implied by equilibrium bidding. Given variation in either competition or costs, this characterization implies that risk neutrality is nonparametrically testable, in the sense that if bidders are strictly risk averse, then no risk neutral model can rationalize the data. In addition, if both instruments (discrete N and continuous z) are available, then the model primitives are nonparametrically point identified. We then explore inference based on these identification results, focusing on set inference and testing when primitives are set identified.
    Keywords: Auctions, Entry, Risk aversion, Identification, Set inference
    JEL: D44 C57
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2257&r=all
  26. By: Carlo Campajola; Fabrizio Lillo; Piero Mazzarisi; Daniele Tantari
    Abstract: Binary random variables are the building blocks used to describe a large variety of systems, from magnetic spins to financial time series and neuron activity. In Statistical Physics, the Kinetic Ising Model has been introduced to describe the dynamics of the magnetic moments of a spin lattice, while in time series analysis discrete autoregressive processes have been designed to capture the multivariate dependence structure across binary time series. In this article we provide a rigorous proof of the equivalence between the two models by means of a unique and invertible map that unambiguously links the parameter set of one model to that of the other. Our result finds further justification in the fact that both models provide maximum entropy distributions of binary time series with given means, autocorrelations, and lagged cross-correlations of order one.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.10666&r=all
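    For reference, the kinetic Ising transition probability under parallel Glauber-type dynamics takes the logistic form
      $$ P\big( s_i(t+1) \mid \mathbf{s}(t) \big) = \frac{\exp\big( \beta \, s_i(t+1) \sum_j J_{ij} s_j(t) \big)}{2 \cosh\big( \beta \sum_j J_{ij} s_j(t) \big)}, \qquad s_i \in \{-1, +1\}, $$
    i.e. a logistic regression of each spin on the lagged configuration; the paper's map makes the correspondence with discrete autoregressive processes precise.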
  27. By: Victor Chernozhukov; Christian Hansen; Kaspar Wuthrich
    Abstract: This chapter reviews the instrumental variable quantile regression model of Chernozhukov and Hansen (2005). We discuss the key conditions used for identification of structural quantile effects within this model which include the availability of instruments and a restriction on the ranks of structural disturbances. We outline several approaches to obtaining point estimates and performing statistical inference for model parameters. Finally, we point to possible directions for future research.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00436&r=all
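    The key identifying condition of the model (Chernozhukov and Hansen, 2005) can be stated as: for each quantile $\tau \in (0, 1)$,
      $$ \Pr\big( Y \le q(D, X, \tau) \mid X, Z \big) = \tau, $$
    where $q(d, x, \tau)$ is the structural quantile function, $D$ the endogenous treatment, and $Z$ the instruments; estimation inverts the implied conditional moment restriction $\mathbb{E}\big[ \mathbf{1}\{ Y \le q(D, X, \tau) \} - \tau \mid X, Z \big] = 0$.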

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.