Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2020-10-26
Time-Varying Instrumental Variable Estimation
http://d.repec.org/n?u=RePEc:qmw:qmwecw:911&r=ecm
We develop non-parametric instrumental variable estimation and inferential theory for econometric models with possibly endogenous regressors whose coefficients can vary over time either deterministically or stochastically, together with time-varying and uniform versions of the standard Hausman exogeneity test. After deriving the asymptotic properties of the proposed procedures, we assess their finite-sample performance in a set of Monte Carlo experiments, and illustrate their application with an empirical example on the Phillips curve.
Liudas Giraitis
George Kapetanios
Massimiliano Marcellino
Instrumental variables, Time-varying parameters, endogeneity, Hausman test, Non-parametric methods, Phillips curve.
2020-08-17
Heteroscedasticity test of high-frequency data with jumps and microstructure noise
http://d.repec.org/n?u=RePEc:arx:papers:2010.07659&r=ecm
In this paper, we are interested in testing whether the volatility process is constant over a given time span, using high-frequency data in the presence of jumps and microstructure noise. Based on estimators of integrated volatility and spot volatility, we propose a nonparametric way to measure the discrepancy between local variation and global variation. We show that our proposed test statistic converges to a standard normal distribution if the volatility is constant, and diverges to infinity otherwise. Simulation studies verify the theoretical results and show good finite-sample performance of the test procedure. We also apply the procedure to test for heteroscedasticity in real high-frequency financial data. We observe that in almost half of the days tested, the assumption of constant volatility within a day is violated, because stock prices during opening and closing periods are highly volatile and account for a relatively large proportion of intraday variation.
Qiang Liu
Zhi Liu
Chuanhai Zhang
2020-10
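The abstract above rests on comparing local variation to global variation. A minimal numeric sketch of that idea — block-wise realized variance against the whole-sample realized variance — is below; it is purely illustrative and ignores the jump- and noise-robust corrections that the paper's actual statistic requires, and all parameter values are made up.

```python
import numpy as np

def local_global_discrepancy(returns, n_blocks=10):
    # Toy statistic: max relative gap between block-wise (local) realized
    # variance and the global realized variance. This only illustrates the
    # local-vs-global idea, not the paper's noise- and jump-robust test.
    blocks = np.array_split(np.asarray(returns), n_blocks)
    global_rv = np.mean(np.concatenate(blocks) ** 2)
    local_rv = np.array([np.mean(b ** 2) for b in blocks])
    return np.max(np.abs(local_rv - global_rv)) / global_rv

rng = np.random.default_rng(0)
n = 20_000
constant_vol = 0.01 * rng.standard_normal(n)          # homoscedastic day
shifted_vol = np.concatenate([                        # volatility jumps x5
    0.01 * rng.standard_normal(n // 2),
    0.05 * rng.standard_normal(n // 2),
])
stat_const = local_global_discrepancy(constant_vol)
stat_shift = local_global_discrepancy(shifted_vol)
print(stat_const, stat_shift)
```

Under constant volatility the local variances all hover near the global one, so the statistic stays small; the volatility shift makes it large.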
Encompassing Tests for Value at Risk and Expected Shortfall Multi-Step Forecasts based on Inference on the Boundary
http://d.repec.org/n?u=RePEc:arx:papers:2009.07341&r=ecm
We propose forecast encompassing tests for the Expected Shortfall (ES) jointly with the Value at Risk (VaR) based on flexible link (or combination) functions. Our setup allows testing encompassing for convex forecast combinations and for link functions which preclude crossings of the combined VaR and ES forecasts. As the tests based on these link functions involve parameters which are on the boundary of the parameter space under the null hypothesis, we derive and base our tests on nonstandard asymptotic theory on the boundary. Our simulation study shows that the encompassing tests based on our new link functions outperform tests based on unrestricted linear link functions for one-step and multi-step forecasts. We further illustrate the potential of the proposed tests in a real data analysis for forecasting VaR and ES of the S&P 500 index.
Timo Dimitriadis
Xiaochun Liu
Julie Schnaitmann
2020-09
Statistical Inference for the Tangency Portfolio in High Dimension
http://d.repec.org/n?u=RePEc:hhs:oruesi:2020_010&r=ecm
In this paper, we study the distributional properties of the tangency portfolio (TP) weights assuming a normal distribution of the logarithmic returns. We derive a stochastic representation of the TP weights that fully describes their distribution. Under a high-dimensional asymptotic regime, i.e. the dimension of the portfolio, k, and the sample size, n, approach infinity such that k/n → c ∈ (0, 1), we deliver the asymptotic distribution of the TP weights. Moreover, we consider tests about the elements of the TP and derive the asymptotic distribution of the test statistic under the null and alternative hypotheses. In a simulation study, we compare the asymptotic distribution of the TP weights with the exact finite sample density. We also compare the high-dimensional asymptotic test with an exact small sample test. We document a good performance of the asymptotic approximations except for small sample sizes combined with c close to one. In an empirical study, we analyze the TP weights in portfolios containing stocks from the S&P 500 index.
Karlsson, Sune
Mazur, Stepan
Muhinyuza, Stanislas
Tangency portfolio; high-dimensional asymptotics; hypothesis testing
2020-10-09
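For readers who want the object of study in the abstract above made concrete: the tangency portfolio weight vector is the standard textbook quantity proportional to the inverse covariance matrix times the excess mean vector. The sketch below computes it for a hand-checkable two-asset example; the paper's contribution is the high-dimensional sampling distribution of the estimated weights, which this snippet does not touch.

```python
import numpy as np

def tangency_weights(mu, sigma, rf=0.0):
    # Tangency portfolio weights: w proportional to inv(Sigma) @ (mu - rf*1),
    # normalized to sum to one. Standard textbook formula, shown only to fix
    # ideas; the paper studies the distribution of the estimated weights.
    excess = np.asarray(mu, float) - rf
    raw = np.linalg.solve(np.asarray(sigma, float), excess)
    return raw / raw.sum()

# Two-asset example with a diagonal covariance matrix (hand-checkable):
mu = np.array([0.10, 0.12])
sigma = np.diag([0.04, 0.09])
w = tangency_weights(mu, sigma, rf=0.02)
print(w)  # raw = (2.0, 1.111...); normalized = (9/14, 5/14) ≈ (0.6429, 0.3571)
```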
Fixed Effects Binary Choice Models with Three or More Periods
http://d.repec.org/n?u=RePEc:arx:papers:2009.08108&r=ecm
We consider fixed effects binary choice models with a fixed number of periods T and without a large support condition on the regressors. If the time-varying unobserved terms are i.i.d. with known distribution F, Chamberlain (2010) shows that the common slope parameter is point-identified if and only if F is logistic. However, his proof considers only T=2. We show that the result does not in fact generalize to T>2: the common slope parameter and some parameters of the distribution of the shocks can be identified when F belongs to a family that includes the logit distribution. Identification is based on a conditional moment restriction. We give necessary and sufficient conditions on the covariates for this restriction to identify the parameters. In addition, we show that under mild conditions, the corresponding GMM estimator reaches the semiparametric efficiency bound when T=3.
Laurent Davezies
Xavier D'Haultfoeuille
Martin Mugnier
2020-09
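The classical device behind results like the one above is that in the fixed-effects logit, conditioning on the sum of outcomes eliminates the fixed effect. A minimal T=2 sketch with a scalar covariate is below; it only illustrates the conditioning trick, not the paper's T>2 identification results, and the numbers are invented.

```python
import math

def cond_prob_switch_up(x1, x2, beta):
    # Fixed-effects logit with T=2: conditional on y1 + y2 = 1, the fixed
    # effect drops out and P(y = (0,1) | y1 + y2 = 1) is an ordinary logit
    # in the covariate difference. Scalar covariate for simplicity.
    d = beta * (x2 - x1)
    return 1.0 / (1.0 + math.exp(-d))

# If the covariate does not change, both orderings are equally likely:
p_equal = cond_prob_switch_up(1.0, 1.0, beta=0.7)
# A positive covariate change with beta > 0 makes the (0,1) sequence likelier:
p_up = cond_prob_switch_up(0.0, 2.0, beta=0.7)
print(p_equal, p_up)  # 0.5, then ≈ 0.802
```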
Noise-Induced Randomization in Regression Discontinuity Designs
http://d.repec.org/n?u=RePEc:arx:papers:2004.09458&r=ecm
Regression discontinuity designs are used to estimate causal effects in settings where treatment is determined by whether an observed running variable crosses a pre-specified threshold. While the resulting sampling design is sometimes described as akin to a locally randomized experiment in a neighborhood of the threshold, standard formal analyses do not make reference to probabilistic treatment assignment and instead identify treatment effects via continuity arguments. Here we propose a new approach to identification, estimation, and inference in regression discontinuity designs that exploits measurement error in the running variable. Under an assumption that the measurement error is exogenous, we show how to consistently estimate causal effects using a class of linear estimators that weight treated and control units so as to balance a latent variable of which the running variable is a noisy measure. We find this approach to facilitate identification of both familiar estimands from the literature, as well as policy-relevant estimands that correspond to the effects of realistic changes to the existing treatment assignment rule. We demonstrate the method with a study of retention of HIV patients and evaluate its performance using simulated data and a regression discontinuity design artificially constructed from test scores in early childhood.
Dean Eckles
Nikolaos Ignatiadis
Stefan Wager
Han Wu
2020-04
Inference for Large-Scale Linear Systems with Known Coefficients
http://d.repec.org/n?u=RePEc:arx:papers:2009.08568&r=ecm
This paper considers the problem of testing whether there exists a non-negative solution to a possibly under-determined system of linear equations with known coefficients. This hypothesis testing problem arises naturally in a number of settings, including random coefficient, treatment effect, and discrete choice models, as well as a class of linear programming problems. As a first contribution, we obtain a novel geometric characterization of the null hypothesis in terms of identified parameters satisfying an infinite set of inequality restrictions. Using this characterization, we devise a test that requires solving only linear programs for its implementation, and thus remains computationally feasible in the high-dimensional applications that motivate our analysis. The asymptotic size of the proposed test is shown to equal at most the nominal level uniformly over a large class of distributions that permits the number of linear equations to grow with the sample size.
Zheng Fang
Andres Santos
Azeem M. Shaikh
Alexander Torgovitsky
2020-09
Semiparametric Testing with Highly Persistent Predictors
http://d.repec.org/n?u=RePEc:arx:papers:2009.08291&r=ecm
We address the issue of semiparametric efficiency in the bivariate regression problem with a highly persistent predictor, where the joint distribution of the innovations is regarded as an infinite-dimensional nuisance parameter. Using a structural representation of the limit experiment and exploiting invariance relationships therein, we construct invariant point-optimal tests for the regression coefficient of interest. This approach naturally leads to a family of feasible tests based on the component-wise ranks of the innovations that can gain considerable power relative to existing tests under non-Gaussian innovation distributions, while behaving equivalently under Gaussianity. When an i.i.d. assumption on the innovations is appropriate for the data at hand, our tests exploit the efficiency gains that this makes possible. Moreover, we show by simulation that our test remains well behaved under some forms of conditional heteroskedasticity.
Bas Werker
Bo Zhou
2020-09
Using Survey Information for Improving the Density Nowcasting of US GDP with a Focus on Predictive Performance during Covid-19 Pandemic
http://d.repec.org/n?u=RePEc:koc:wpaper:2016&r=ecm
We provide a methodology that efficiently combines the statistical models of nowcasting with survey information to improve the (density) nowcasting of US real GDP. Specifically, we use a conventional dynamic factor model together with a stochastic volatility component as the baseline statistical model. We augment the model with information from survey expectations by aligning the first and second moments of the predictive distribution implied by this baseline model with those extracted from the survey information at various horizons. Results indicate that survey information carries valuable information beyond the baseline model for nowcasting GDP. While the mean survey predictions deliver valuable information during extreme events such as the Covid-19 pandemic, the variation in the survey participants' predictions, often used as a measure of 'ambiguity', conveys crucial information beyond the mean of those predictions for capturing the tail behavior of the GDP distribution.
Cem Cakmakli
Hamza Demircan
Dynamic factor model; Stochastic volatility; Survey of Professional Forecasters; Disagreement; Predictive density evaluation; Bayesian inference.
2020-10
Nonparametric Bounds on Treatment Effects with Imperfect Instruments
http://d.repec.org/n?u=RePEc:isu:genstf:202010120700001113&r=ecm
This paper extends the identification results in Nevo and Rosen (2012) to nonparametric models. We derive nonparametric bounds on the average treatment effect when an imperfect instrument is available. As in Nevo and Rosen (2012), we assume that the correlation between the imperfect instrument and the unobserved latent variables has the same sign as the correlation between the endogenous variable and the latent variables. We show that the monotone treatment selection and monotone instrumental variable restrictions, introduced by Manski and Pepper (2000, 2009), jointly imply this assumption. We introduce the concept of a comonotone instrumental variable, which also satisfies this assumption. Moreover, we show how the assumption that the imperfect instrument is less endogenous than the treatment variable can help tighten the bounds. We also use the monotone treatment response assumption to get tighter bounds. The identified set can be written in the form of intersection bounds, which is more conducive to inference. We illustrate our methodology using the National Longitudinal Survey of Young Men data to estimate returns to schooling.
Ban, Kyunghoon
Kedagni, Desire
2020-10-12
Ordinal-response models for irregularly spaced transactions: A forecasting exercise
http://d.repec.org/n?u=RePEc:pra:mprapa:103250&r=ecm
We propose a new model for transaction data that accounts jointly for the time duration between transactions and for the discreteness of the intraday stock price changes. Duration is assumed to follow a stochastic conditional duration model, while price discreteness is captured by an autoregressive moving average ordinal-response model with stochastic volatility and time-varying parameters. The proposed model also allows for endogeneity of the trade durations as well as for leverage and in-mean effects. In a purely Bayesian framework we conduct a forecasting exercise using multiple high-frequency transaction data sets and show that the proposed model produces better point and density forecasts than competing models.
Dimitrakopoulos, Stefanos
Tsionas, Mike G.
Aknouche, Abdelhakim
Ordinal-response models, irregularly spaced data, stochastic conditional duration, time varying ARMA-SV model, Bayesian MCMC, model confidence set.
2020-10-01
Recent Developments on Factor Models and its Applications in Econometric Learning
http://d.repec.org/n?u=RePEc:arx:papers:2009.10103&r=ecm
This paper provides a selective survey of recent developments in factor models and their applications in statistical learning. We focus on the low-rank structure of factor models and, in particular, draw attention to estimating the model from the low-rank recovery point of view. The survey consists of three parts: the first reviews new factor estimation methods based on modern techniques for recovering the low-rank structure of high-dimensional models. The second discusses statistical inference for several factor-augmented models and their applications in econometric learning. The final part summarizes new developments for handling unbalanced panels from the matrix completion perspective.
Jianqing Fan
Kunpeng Li
Yuan Liao
2020-09
Realized Volatility Forecasting Based on Dynamic Quantile Model Averaging
http://d.repec.org/n?u=RePEc:kan:wpaper:202016&r=ecm
Heterogeneity, volatility persistence, the leverage effect and fat right tails are the most documented stylized features of realized volatility (RV), and they introduce substantial difficulties into econometric modeling that relies on rigid distributional assumptions. To accommodate these features without such assumptions, we study the quantile forecasting of RV by proposing five novel dynamic model averaging strategies designed to combine individual quantile models, termed dynamic quantile model averaging (DQMA). The empirical results of analyzing high-frequency price data on the S&P 500 index clearly indicate that the stylized facts of RV can be captured by different quantiles, with stronger effects at high-level quantiles. Therefore, DQMA can not only reduce the risk of model uncertainty but also generate more accurate and robust out-of-sample quantile forecasts than those of individual heterogeneous autoregressive quantile models.
Zongwu Cai
Chaoqun Ma
Xianhua Mi
Dynamic moving averaging; Model uncertainty; Fat tails; Heterogeneity; Quantile regression; Realized volatility; Time-varying parameters.
2020-09
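To make the quantile-combination idea in the abstract above concrete, here is one generic averaging rule: score each model's quantile forecasts by pinball (check) loss and weight models by an exponential transform of those losses. This is a stand-in for illustration only — the paper proposes five specific dynamic strategies, none of which is claimed to be this one — and the data and tuning constant `eta` are invented.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    # Average check (pinball) loss of quantile forecasts q_pred at level tau.
    u = y - q_pred
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def softmax_weights(losses, eta=5.0):
    # Exponential weighting: lower past loss -> higher combination weight.
    z = -eta * (losses - losses.min())
    w = np.exp(z)
    return w / w.sum()

rng = np.random.default_rng(1)
tau = 0.9
y = rng.standard_normal(5_000)
true_q = np.quantile(y, tau)
# Model A forecasts the sample quantile; model B is biased upward.
forecasts = np.vstack([np.full_like(y, true_q), np.full_like(y, true_q + 1.0)])
losses = np.array([pinball_loss(y, f, tau) for f in forecasts])
w = softmax_weights(losses)
combined = w @ forecasts  # the averaged quantile forecast
print(w)  # model A receives the larger weight
```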
Bounding Average Returns to Schooling using Unconditional Moment Restrictions
http://d.repec.org/n?u=RePEc:isu:genstf:201812290800001086&r=ecm
In the last 20 years, the bounding approach for the average treatment effect (ATE) has developed on the theoretical side; however, empirical work has lagged far behind theory in this area. One main reason is that, in practice, traditional bounding methods fall into two extreme cases: (i) on the one hand, the bounds are too wide to be informative, which happens, in general, when the instrumental variable (IV) has little variation; (ii) on the other hand, the bounds cross, in which case the researcher learns nothing about the parameter of interest other than that the IV restrictions are rejected. This usually happens when the IV has rich support and the IV restriction imposed in the model (full, quantile or mean independence) is too stringent, as illustrated in Ginther (2000). In this paper, we provide sharp bounds on the ATE using only a finite set of unconditional moment restrictions, which is a weaker version of mean independence. We revisit Ginther's (2000) return-to-schooling application using our bounding approach and derive informative bounds on the average returns to schooling in the US.
Kedagni, Desire
Li, Lixiong
Mourifie, Ismael
2018-12-29
A Panel Data Model with Generalized Higher-Order Network Effects
http://d.repec.org/n?u=RePEc:max:cprwps:233&r=ecm
Many data situations require the consideration of network effects among the cross-sectional units of observation. In this paper, we present a generalized panel model which accounts for two features: (i) three types of network effects on the right-hand side of the model, namely through the weighted dependent variable, weighted exogenous variables, and weighted error components, and (ii) higher-order network effects due to ex-ante unknown network-decay functions or the presence of multiplex (or multi-layer) networks among any of these. We outline the model and its basic assumptions, and present simulation results.
Badi H. Baltagi
Sophia Ding
Peter H. Egger
Spatial and Network Interdependence, Panel Data, Higher-Order Network Effects
2020-10
Edgeworth Expansions for Multivariate Random Sums
http://d.repec.org/n?u=RePEc:hhs:oruesi:2020_009&r=ecm
The sum of a random number of independent and identically distributed random vectors has a distribution which is not analytically tractable in the general case. The problem has been addressed by means of asymptotic approximations embedding the number of summands in a stochastically increasing sequence. Another approach relies on fitting flexible and tractable parametric, multivariate distributions, such as finite mixtures. In this paper we investigate both approaches within the framework of Edgeworth expansions. We derive a general formula for the fourth-order cumulants of the random sum of independent and identically distributed random vectors and show that the above-mentioned asymptotic approach does not necessarily lead to valid asymptotic normal approximations. We address the problem by means of Edgeworth expansions. Both theoretical and empirical results suggest that mixtures of two multivariate normal distributions with proportional covariance matrices satisfactorily fit data generated from random sums where the counting random variable and the random summands are Poisson and multivariate skew-normal, respectively.
Javed, Farrukh
Loperfido, Nicola
Mazur, Stepan
Edgeworth expansion; Fourth cumulant; Random sum; Skew-normal
2020-10-07
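The first two cumulants of a random sum have simple closed forms by the law of total variance, and checking them against the compound-Poisson special case gives a feel for the kind of formula the paper derives at fourth order. The sketch below shows only those two low-order (univariate) cumulants, not the paper's fourth-order multivariate result.

```python
def random_sum_mean_var(mean_n, var_n, mean_x, var_x):
    # First two cumulants of S = X_1 + ... + X_N with N independent of the
    # i.i.d. summands: E[S] = E[N]E[X], Var[S] = E[N]Var[X] + Var[N]E[X]^2.
    # The paper derives the analogous fourth-order multivariate formula.
    return mean_n * mean_x, mean_n * var_x + var_n * mean_x ** 2

# Compound-Poisson check: for N ~ Poisson(lam), Var[S] = lam * E[X^2].
lam, mu, sigma2 = 3.0, 1.0, 4.0
m, v = random_sum_mean_var(lam, lam, mu, sigma2)  # Poisson: Var[N] = E[N] = lam
print(m, v)  # 3.0, 15.0 — and lam * (sigma2 + mu**2) = 15.0 as well
```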
How to remove the testing bias in CoV-2 statistics
http://d.repec.org/n?u=RePEc:jgu:wpaper:2021&r=ecm
BACKGROUND. Public health measures and private behaviour are based on reported numbers of SARS-CoV-2 infections. Some argue that testing influences the confirmed number of infections. OBJECTIVES/METHODS. Do time series on reported infections and the number of tests allow one to draw conclusions about actual infection numbers? A SIR model is presented where the true numbers of susceptible, infectious and removed individuals are unobserved. Testing is also modelled. RESULTS. Official confirmed infection numbers are likely to be biased and cannot be compared over time. The bias occurs because of different reasons for testing (e.g. by symptoms, representative or testing travellers). The paper illustrates the bias and works out the effect of the number of tests on the number of reported cases. The paper also shows that the positive rate (the ratio of positive tests to the total number of tests) is uninformative in the presence of non-representative testing. CONCLUSIONS. A severity index for epidemics is proposed that is comparable over time. This index is based on Covid-19 cases and can be obtained if the reason for testing is known.
Klaus Wälde
Covid-19, number of tests, reported number of CoV-2 infections, (correcting the) bias, SIR model, unbiased epidemiological severity index
2020-10-09
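The mechanism described in the abstract above — reported infections depending on testing capacity as well as on true infections — can be simulated in a few lines. The discrete-time SIR below is a deliberately crude illustration of that bias channel, not the paper's model, and every parameter value is invented.

```python
def sir_with_testing(n_pop=1_000_000, i0=100, beta=0.3, gamma=0.1,
                     tests_per_day=None, detect_prob=0.5, days=120):
    # Discrete-time SIR where only a fraction of true new infections is
    # confirmed, depending on how many tests are run that day. Illustrative
    # parameter values only, not calibrated to the paper.
    if tests_per_day is None:  # testing capacity ramps up over time
        tests_per_day = [1_000 + 200 * t for t in range(days)]
    s, i, r = n_pop - i0, i0, 0
    true_new, reported = [], []
    for t in range(days):
        new_inf = beta * s * i / n_pop
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        # Reported cases are capped by testing capacity and detection rate:
        confirmed = min(new_inf, tests_per_day[t] * detect_prob)
        true_new.append(new_inf)
        reported.append(confirmed)
    return true_new, reported

true_new, reported = sir_with_testing()
bias = [t - c for t, c in zip(true_new, reported)]
# Reported counts understate true infections whenever capacity binds,
# so raw case counts are not comparable across days with different testing.
```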
Multivariate cointegration and temporal aggregation: some further simulation results
http://d.repec.org/n?u=RePEc:mcd:mcddps:2020_05&r=ecm
We perform Monte Carlo simulations to study the effect of increasing the frequency of observations and the data span on the Johansen (1988, 1995) maximum likelihood cointegration testing approach, as well as on the bootstrap and wild bootstrap implementations of the method developed by Cavaliere et al. (2012, 2014). Considering systems with three and four variables, we find that when both the data span and the frequency vary, the power of the tests depends more on the sample length. We illustrate our findings by investigating the existence of long-run equilibrium relationships among the prices of four coffee indicators.
Jesus Otero
Theodore Panagiotidis
Georgios Papapanagiotou
Monte Carlo, Span, Power, Cointegration, Coffee prices.
2020-10
Ranking-based variable selection for high-dimensional data
http://d.repec.org/n?u=RePEc:ehl:lserod:90233&r=ecm
We propose a ranking-based variable selection (RBVS) technique that identifies important variables influencing the response in high-dimensional data. RBVS uses subsampling to identify the covariates that appear nonspuriously at the top of a chosen variable ranking. We study the conditions under which such a set is unique, and show that it can be recovered successfully from the data by our procedure. Unlike many existing high-dimensional variable selection techniques, among all relevant variables, RBVS distinguishes between important and unimportant variables, and aims to recover only the important ones. Moreover, RBVS does not require model restrictions on the relationship between the response and the covariates, and, thus, is widely applicable in both parametric and nonparametric contexts. Lastly, we illustrate the good practical performance of the proposed technique by means of a comparative simulation study. The RBVS algorithm is implemented in rbvs, a publicly available R package.
Baranowski, Rafal
Chen, Yining
Fryzlewicz, Piotr
variable screening; subset selection; bootstrap; stability selection.
2020-07-01
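A bare-bones caricature of the RBVS idea in the abstract above: repeatedly subsample, rank covariates by absolute marginal correlation with the response, and keep those that persistently land at the top. This toy version (simple frequency threshold, marginal-correlation ranking) is only a sketch — the actual procedure in the authors' `rbvs` R package estimates the top set more carefully — and the simulated data are invented.

```python
import numpy as np

def rbvs_top_k(X, y, k=2, n_sub=200, sub_frac=0.5, thresh=0.9, seed=0):
    # Ranking-based selection sketch: in repeated subsamples, rank covariates
    # by absolute marginal correlation with y, record how often each lands in
    # the top k, and keep those above a frequency threshold.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = int(sub_frac * n)
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=m, replace=False)
        xs, ys = X[idx], y[idx]
        corr = np.abs(np.corrcoef(xs, ys, rowvar=False)[-1, :-1])
        counts[np.argsort(corr)[-k:]] += 1
    return np.flatnonzero(counts / n_sub >= thresh)

rng = np.random.default_rng(42)
n, p = 400, 50
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] + 0.05 * rng.standard_normal(n)  # only x0, x1 matter
selected = rbvs_top_k(X, y)
print(selected)  # the two truly important covariates
```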
Comment on Gouriéroux, Monfort, Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA Models
http://d.repec.org/n?u=RePEc:arx:papers:2010.02711&r=ecm
This comment points out a serious flaw in the article "Gouriéroux, Monfort, Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA Models" with regard to mirroring complex-valued roots with Blaschke polynomial matrices. Moreover, we discuss the (non-)feasibility of the proposed method (even if the handling of Blaschke transformations were not prohibitive) for cross-sectional dimensions greater than two and vector moving average (VMA) polynomial matrices of degree greater than one.
Bernd Funovits
2020-10
A Class of Time-Varying Vector Moving Average (infinity) Models
http://d.repec.org/n?u=RePEc:msh:ebswps:2020-39&r=ecm
Multivariate time series analyses are widely encountered in practical studies, e.g., modelling policy transmission mechanisms and measuring connectedness between economic agents. To better capture the dynamics, this paper proposes a class of multivariate dynamic models with time-varying coefficients, which have a general time-varying vector moving average (VMA) representation and nest, for instance, time-varying vector autoregression (VAR), time-varying vector autoregression moving-average (VARMA), and so forth as special cases. The paper then develops a unified estimation method for the unknown quantities before establishing an asymptotic theory for the proposed estimators. In the empirical study, we investigate the transmission mechanism of monetary policy using U.S. data and uncover a fall in the volatilities of exogenous shocks. In addition, we find that (i) monetary policy shocks have less influence on inflation before and during the so-called Great Moderation, (ii) inflation is more anchored recently, and (iii) the long-run level of inflation is below, but quite close to, the Federal Reserve's target of two percent after the beginning of the Great Moderation period.
Yayi Yan
Jiti Gao
Bin Peng
multivariate time series model, nonparametric kernel estimation, trending stationarity
2020
A Generalised Stochastic Volatility in Mean VAR. An Updated Algorithm
http://d.repec.org/n?u=RePEc:qmw:qmwecw:908&r=ecm
In this note we present an updated algorithm to estimate the VAR with stochastic volatility proposed in Mumtaz (2018). The model is re-written so that some of the Metropolis-Hastings steps are avoided.
Haroon Mumtaz
VAR, Stochastic volatility in mean, error covariance
2020-07-05
Spillovers of Program Benefits with Mismeasured Networks
http://d.repec.org/n?u=RePEc:arx:papers:2009.09614&r=ecm
In studies of program evaluation under network interference, correctly measuring spillovers of the intervention is crucial for making appropriate policy recommendations. However, increasing empirical evidence has shown that network links are often measured with errors. This paper explores the identification and estimation of treatment and spillover effects when the network is mismeasured. I propose a novel method to nonparametrically point-identify the treatment and spillover effects, when two network observations are available. The method can deal with a large network with missing or misreported links and possesses several attractive features: (i) it allows heterogeneous treatment and spillover effects; (ii) it does not rely on modelling network formation or its misclassification probabilities; and (iii) it accommodates samples that are correlated in overlapping ways. A semiparametric estimation approach is proposed, and the analysis is applied to study the spillover effects of an insurance information program on the insurance adoption decisions.
Lina Zhang
2020-09
Further results on the estimation of dynamic panel logit models with fixed effects
http://d.repec.org/n?u=RePEc:arx:papers:2010.03382&r=ecm
Kitazawa (2013, 2016) showed that the common parameters in the panel logit AR(1) model with strictly exogenous covariates and fixed effects are estimable at the root-n rate using the Generalized Method of Moments. Honoré and Weidner (2020) extended his results in various directions: they found additional moment conditions for the logit AR(1) model and also considered estimation of logit AR(p) models with p>1. In this note we prove a conjecture in their paper and show that 2^T - 2T of their moment functions for the logit AR(1) model are linearly independent and span the set of valid moment functions, which is a (2^T - 2T)-dimensional linear subspace of the 2^T-dimensional vector space of real-valued functions over the outcomes y ∈ {0,1}^T.
Hugo Kruiniger
2020-10
Testing homogeneity in dynamic discrete games in finite samples
http://d.repec.org/n?u=RePEc:arx:papers:2010.02297&r=ecm
The literature on dynamic discrete games often assumes that the conditional choice probabilities and the state transition probabilities are homogeneous across markets and over time. We refer to this as the "homogeneity assumption" in dynamic discrete games. This homogeneity assumption enables empirical studies to estimate the game's structural parameters by pooling data from multiple markets and from many time periods. In this paper, we propose a hypothesis test to evaluate whether the homogeneity assumption holds in the data. Our hypothesis test is an approximate randomization test, implemented via a Markov chain Monte Carlo (MCMC) algorithm. We show that the test becomes valid as the (user-defined) number of MCMC draws diverges, for any fixed number of markets, time periods, and players. We apply our test to the empirical study of the U.S. Portland cement industry in Ryan (2012).
Federico A. Bugni
Jackson Bunting
Takuya Ura
2020-10
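The logic of a randomization test — reshuffle under the null's invariance and compare the observed statistic to the reshuffled distribution — can be shown with a plain permutation test for equality of group (here, "market") means. This is only the generic template; the paper's test uses an MCMC algorithm over a more complex invariance structure, and the data below are invented.

```python
import numpy as np

def permutation_test_homogeneity(groups, n_draws=999, seed=0):
    # Randomization-test template: the statistic is the spread of group
    # means; under homogeneity, reshuffling observations across groups
    # leaves its distribution unchanged.
    rng = np.random.default_rng(seed)
    data = np.concatenate(groups)
    sizes = [len(g) for g in groups]

    def stat(x):
        means, start = [], 0
        for s in sizes:
            means.append(x[start:start + s].mean())
            start += s
        return np.ptp(means)  # max minus min of group means

    observed = stat(data)
    exceed = 0
    for _ in range(n_draws):
        exceed += stat(rng.permutation(data)) >= observed
    return (1 + exceed) / (1 + n_draws)  # randomization p-value

rng = np.random.default_rng(3)
homog = [rng.standard_normal(100) for _ in range(3)]
heterog = [rng.standard_normal(100) + shift for shift in (0.0, 0.0, 1.5)]
p_homog = permutation_test_homogeneity(homog)
p_heterog = permutation_test_homogeneity(heterog)
print(p_homog, p_heterog)  # the shifted design yields a tiny p-value
```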
Data Science: A Primer for Economists
http://d.repec.org/n?u=RePEc:pra:mprapa:102928&r=ecm
Recent years have seen an explosion in the demand for data science skills. In this paper, I introduce the reader to the term, point out the technological jumps that enabled the rise of its methods, and give an overview of the most common ones. I close by pointing out the strengths and weaknesses of the corresponding tools, as well as their complementarities with economic analysis.
Gomez-Ruano, Gerardo
Data Science; Statistics; Quantitative Methods; Labor Market; Technological Change; Numerical Methods; Econometric Methods
2020
Identification and Estimation of A Rational Inattention Discrete Choice Model with Bayesian Persuasion
http://d.repec.org/n?u=RePEc:arx:papers:2009.08045&r=ecm
This paper studies the semi-parametric identification and estimation of a rational inattention model with Bayesian persuasion. The identification requires the observation of a cross-section of market-level outcomes. The empirical content of the model can be characterized by three moment conditions. A two-step estimation procedure is proposed to avoid computational complexity in the structural model. In the empirical application, I study the persuasion effect of Fox News in the 2000 presidential election. Welfare analysis shows that persuasion does not influence voters with a high school education but generates higher dispersion in the welfare of voters with a partial college education and decreases the dispersion in the welfare of voters with a bachelor's degree.
Moyu Liao
2020-09
Manipulation-Robust Regression Discontinuity Design
http://d.repec.org/n?u=RePEc:arx:papers:2009.07551&r=ecm
Regression discontinuity designs (RDDs) may not deliver reliable results if units manipulate their running variables. It is commonly believed that imprecise manipulations are harmless and that diagnostic tests detect precise manipulations. However, we demonstrate that RDDs may fail to point-identify treatment effects in the presence of imprecise manipulation, and that not all harmful manipulations are detectable. To formalize these claims, we propose a class of RDDs with harmless or detectable manipulations over locally randomized running variables, which we call manipulation-robust RDDs. The conditions for manipulation-robust RDDs may be verified intuitively using the institutional background; we demonstrate this verification process in case studies of applications that use the McCrary (2008) density test. The restrictions of manipulation-robust RDDs generate partial identification results that are robust to possible manipulation. We apply the partial identification result to a controversy regarding the incumbency margin study of U.S. House of Representatives elections, and the results confirm the robustness of the original conclusion of Lee (2008).
Takuya Ishihara
Masayuki Sawada
2020-09
On the Existence of Conditional Maximum Likelihood Estimates of the Binary Logit Model with Fixed Effects
http://d.repec.org/n?u=RePEc:arx:papers:2009.09998&r=ecm
By exploiting McFadden's (1974) results on conditional logit estimation, we show that there exists a one-to-one mapping between the existence and uniqueness of conditional maximum likelihood estimates of the binary logit model with fixed effects and the spatial configuration of the data points. Our results extend those in Albert and Anderson (1984) for the cross-sectional case and can be used to build a simple algorithm that detects spurious estimates in finite samples. Importantly, we exhibit an instance from artificial data for which Stata's clogit command returns spurious estimates.
Martin Mugnier
2020-09
Hot Spots, Cold Feet, and Warm Glow: Identifying Spatial Heterogeneity in Willingness to Pay
http://d.repec.org/n?u=RePEc:nev:wpaper:wp202001&r=ecm
We propose a novel extension of existing semi-parametric approaches to examine spatial patterns of willingness to pay (WTP) and status quo effects, including tests for global spatial autocorrelation, spatial interpolation techniques, and local hotspot analysis. We are the first to formally account for the fact that observed WTP values are estimates, and to incorporate the statistical precision of those estimates into our spatial analyses. We demonstrate our two-step methodology using data from a stated preference survey that elicited values for improvements in water quality in the Chesapeake Bay and lakes in the surrounding watershed. Our methodology offers a flexible way to identify potential spatial patterns of welfare impacts, with the ultimate goal of facilitating more accurate benefit-cost and distributional analyses, both in terms of defining the appropriate extent of the market and in interpolating values within that market.
Dennis Guignet
Christopher Moore
Haoluan Wang
Bayesian; hotspot analysis; semi-parametric; spatial heterogeneity; stated preference; water quality
2020-03
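The global spatial autocorrelation test mentioned in the abstract above is typically Moran's I. A bare-bones version with binary distance-band weights is below, on a hand-checkable example; it ignores the estimation error in the WTP values, which is exactly what the paper's two-step method is designed to account for, and the weight scheme and bandwidth are illustrative choices.

```python
import numpy as np

def morans_i(values, coords, bandwidth):
    # Global Moran's I with binary distance-band weights: similar values at
    # nearby sites push I above its null expectation of roughly -1/(n-1).
    v = np.asarray(values, float)
    z = v - v.mean()
    pts = np.asarray(coords, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    w = ((d > 0) & (d <= bandwidth)).astype(float)   # neighbors within band
    n, s0 = len(v), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Clustered values on a line: neighbors share similar levels (positive I).
coords = [[float(i), 0.0] for i in range(10)]
values = [1, 1, 1, 1, 1, 5, 5, 5, 5, 5]
i_stat = morans_i(values, coords, bandwidth=1.0)
print(i_stat)  # 7/9 ≈ 0.778, well above the null expectation of -1/9
```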