
on Econometrics 
By:  Cizek, P. (Tilburg University, Center For Economic Research); Aquaro, M. (Tilburg University, Center For Economic Research) 
Abstract:  This paper extends an existing outlier-robust estimator of linear dynamic panel data models with fixed effects, which is based on the median ratio of two consecutive pairs of first-differenced data. To improve its precision and robustness properties, a general procedure based on many pairwise differences and their ratios is designed. The proposed two-step GMM estimator based on the corresponding moment equations relies on an innovative weighting scheme reflecting both the variance and bias of those moment equations, where the bias is assumed to stem from data contamination. To estimate the bias, the influence function is derived and evaluated. The asymptotic distribution as well as the robust properties of the estimator are characterized; the latter are obtained both under contamination by independent additive outliers and under patches of additive outliers. The proposed estimator is additionally compared with existing methods by means of Monte Carlo simulations. 
Keywords:  dynamic panel data; fixed effects; generalized method of moments; influence function; pairwise differences; robust estimation 
JEL:  C13 C23 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:39d0f613007f4d21b1e2b6dc51149fe4&r=ecm 
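The consecutive-difference ratio at the heart of the estimator being extended can be sketched in a few lines. For a stationary panel AR(1) with fixed effects and conditionally symmetric errors, E[Δy_it | Δy_i,t-1] = ((ρ−1)/2)·Δy_i,t-1, so the median of the ratios Δy_it/Δy_i,t-1 identifies (ρ−1)/2. The sketch below is our simplified reading of the median-ratio idea, not the paper's full two-step GMM construction; the simulation design, function names, and the back-transformation ρ̂ = 1 + 2·med(·) are illustrative assumptions:

```python
import numpy as np

def simulate_panel(n, t, rho, seed=0):
    """Simulate a stationary AR(1) panel: y_it = rho*y_i,t-1 + eta_i + eps_it."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(size=n)                       # fixed effects
    y = np.empty((n, t + 1))
    # stationary start: mean eta/(1-rho), variance 1/(1-rho^2)
    y[:, 0] = eta / (1 - rho) + rng.normal(size=n) / np.sqrt(1 - rho**2)
    for s in range(1, t + 1):
        y[:, s] = rho * y[:, s - 1] + eta + rng.normal(size=n)
    return y

def median_ratio_rho(y):
    """Median-ratio estimator: med(dy_t/dy_{t-1}) = (rho-1)/2 under symmetry."""
    dy = np.diff(y, axis=1)                        # first differences remove eta_i
    ratios = (dy[:, 1:] / dy[:, :-1]).ravel()
    return 1.0 + 2.0 * np.median(ratios)

rho_hat = median_ratio_rho(simulate_panel(20000, 6, rho=0.5))
```

Because it is a median over ratios, a small fraction of additive outliers moves this estimate far less than it would move least squares on the differenced data, which is the robustness property the paper refines.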
By:  Billy Wu; Qiwei Yao; Shiwu Zhu 
Abstract:  We consider the incidental parameters problem in this paper, i.e., the estimation of a small number of parameters of interest in the presence of a large number of nuisance parameters. Assuming that the observations are taken from a multiple strictly stationary process, two estimation methods, namely maximum composite quasi-likelihood estimation (MCQLE) and maximum plug-in quasi-likelihood estimation (MPQLE), are considered. For the MCQLE, we profile out nuisance parameters based on lower-dimensional marginal likelihoods, while the MPQLE is based on some initial estimators for the nuisance parameters. The asymptotic normality of both the MCQLE and the MPQLE is established under the assumption that the number of nuisance parameters and the number of observations go to infinity together, and both estimators for the parameters of interest enjoy the standard root-n convergence rate. A simulation with a spatial–temporal model illustrates the finite-sample properties of the two estimation methods. 
Keywords:  composite likelihood; incidental parameters problem; nuisance parameters; panel data; profile likelihood; quasi-likelihood; root-n convergence 
JEL:  C1 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:50043&r=ecm 
By:  Rothe, Christoph (Columbia University) 
Abstract:  Estimators of average treatment effects under unconfounded treatment assignment are known to become rather imprecise if there is limited overlap in the covariate distributions between the treatment groups. Such limited overlap can also have a detrimental effect on inference, leading for example to highly distorted confidence intervals. This paper shows that this is because the coverage error of traditional confidence intervals is driven not so much by the total sample size as by the number of observations in the areas of limited overlap. At least some of these "local sample sizes" are often very small in applications, up to the point where distributional approximations derived from the Central Limit Theorem become unreliable. Building on this observation, the paper proposes two new robust confidence intervals that are extensions of classical approaches to small-sample inference. It shows that these approaches are easy to implement, and have superior theoretical and practical properties relative to standard methods in empirically relevant settings. They should thus be useful for practitioners. 
Keywords:  average treatment effect, causality, overlap, propensity score, treatment effect heterogeneity, unconfoundedness 
JEL:  C12 C14 C25 C31 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp8758&r=ecm 
By:  Audrino, Francesco; Camponovo, Lorenzo; Roth, Constantin 
Abstract:  A (conservative) test is constructed to investigate the optimal lag structure for forecasting realized volatility dynamics. The testing procedure relies on recent theoretical results showing the ability of the adaptive least absolute shrinkage and selection operator (adaptive lasso) to combine efficient parameter estimation, variable selection, and valid inference for time series processes. In an application to several constituents of the S&P 500 index it is shown that (i) the optimal significant lag structure is time-varying and subject to drastic regime shifts that seem to happen across assets simultaneously; (ii) in many cases the relevant information for prediction is included in the first 22 lags, corroborating previous results concerning the accuracy and the difficulty of outperforming out-of-sample the heterogeneous autoregressive (HAR) model; and (iii) some common features of the optimal lag structure can be identified across assets belonging to the same market segment or showing a similar beta with respect to the market index. 
Keywords:  Realized volatility; Adaptive lasso; HAR model; Test for false positives; Lag structure 
JEL:  C12 C58 C63 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2015:01&r=ecm 
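The adaptive lasso the test builds on is easy to sketch: fit an OLS pilot, penalize each lag in inverse proportion to its pilot coefficient, and solve the weighted ℓ1 problem by coordinate descent. The code below is a minimal illustration on a simulated AR(10); log-realized-volatility data, the HAR restriction, and the paper's testing procedure are not reproduced, and the function name, design, and penalty level `lam` are our own choices:

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def adaptive_lasso(X, y, lam, n_iter=200):
    """Two-step adaptive lasso: OLS pilot weights, then weighted coordinate descent
    on (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|, with w_j = 1/|b_ols,j|."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / np.abs(beta_ols)                    # adaptive penalty weights
    beta = beta_ols.copy()
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual for lag j
            beta[j] = soft_threshold(X[:, j] @ r, n * lam * w[j]) / col_ss[j]
    return beta

# Illustrative AR(10) fit where only lags 1 and 5 truly matter
rng = np.random.default_rng(1)
e = rng.normal(size=2300)
y = np.zeros(2300)
for t in range(5, 2300):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 5] + e[t]
y = y[300:]                                       # drop burn-in
p = 10
Y = y[p:]
X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
beta = adaptive_lasso(X, Y, lam=0.02)
```

The data-driven weights are what deliver the oracle-type variable selection the abstract refers to: small pilot coefficients receive a large penalty and are set exactly to zero, while large ones are barely shrunk.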
By:  Helene Roth; Stefan Lang; Helga Wagner 
Abstract:  This paper discusses random intercept selection within the context of semiparametric regression models with a structured additive predictor (STAR). STAR models can deal simultaneously with nonlinear covariate effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity, and complex interactions between covariates of different types. The random intercept selection is based on spike and slab priors for the variances of the random intercept coefficients. The aim is to achieve shrinkage of small random intercept coefficients to zero, similar to the LASSO in frequentist linear models. The mixture structure of the spike and slab prior allows for selective shrinkage, as coefficients are either heavily shrunk under the spike component or left almost unshrunk under the slab component. The hyperparameters of the spike and slab prior are chosen by theoretical considerations based on the prior inclusion probability of a particular random coefficient given the true effect size. Using extensive simulation experiments, we compare random intercept models based on spike and slab priors for variances with the usual Inverse Gamma priors. A case study on malnutrition of children in Zambia illustrates the methodology in a real data example. 
Keywords:  Bayesian hierarchical models, Bayesian model choice, MCMC, P-splines, spike and slab priors 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201502&r=ecm 
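The selective-shrinkage mechanism of a spike and slab prior can be illustrated with a two-component normal mixture. The paper places the spike and slab on the variances of random intercepts; as a simplified stand-in, the sketch below computes the posterior probability of the slab component (i.e., inclusion) for a single coefficient given its size, which is the kind of inclusion-probability calculation the authors use to choose hyperparameters. Function name and hyperparameter values are illustrative:

```python
import math

def inclusion_prob(beta, v_spike, v_slab, w=0.5):
    """P(slab | beta) under the mixture prior beta ~ w*N(0, v_slab) + (1-w)*N(0, v_spike).
    Near-zero coefficients are attributed to the spike (and hence shrunk hard);
    sizeable ones are attributed to the slab (and left almost unshrunk)."""
    def normal_pdf(x, v):
        return math.exp(-x * x / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
    num = w * normal_pdf(beta, v_slab)
    den = num + (1.0 - w) * normal_pdf(beta, v_spike)
    return num / den
```

With a tight spike (e.g. `v_spike=1e-3`) and a diffuse slab (`v_slab=1.0`), a coefficient near zero gets an inclusion probability close to zero while a coefficient of size one is included almost surely, which is exactly the selective-shrinkage behaviour described in the abstract.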
By:  David Card (University of California, Berkeley); David S. Lee (Princeton University); Zhuan Pei (Brandeis University); Andrea Weber (University of Mannheim) 
Abstract:  We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. This design arises in many institutional settings where a policy variable (such as weekly unemployment benefits) is determined by an observed but potentially endogenous assignment variable (like previous earnings). We provide new results on identification and estimation for these settings, and apply our results to obtain estimates of the elasticity of joblessness with respect to UI benefit rates. We characterize a broad class of models in which a sharp "Regression Kink Design" (RKD, or RK Design) identifies a readily interpretable treatment-on-the-treated parameter (Florens et al. (2008)). We also introduce a "fuzzy regression kink design" generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement error in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. We then use a fuzzy RKD approach to study the effect of unemployment insurance benefits on the duration of joblessness in Austria, where the benefit schedule has kinks at the minimum and maximum benefit levels. Our preferred estimates suggest that changes in UI benefit generosity exert a relatively large effect on the duration of joblessness of both low-wage and high-wage UI recipients in Austria. 
Keywords:  Regression Discontinuity Design, Regression Kink Design, Treatment Effects, Nonseparable Models, Nonparametric Estimation 
JEL:  C13 C14 C31 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:upj:weupjo:15218&r=ecm 
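The sharp RK design estimand is the ratio of two slope changes at the kink: the change in the slope of E[y | v] divided by the (known) change in the slope of the policy formula b(v). A minimal local-linear sketch on simulated data follows; the uniform kernel, the hand-picked bandwidth `h`, and the simulation design are our assumptions, and the paper's fuzzy design and bias corrections are not attempted:

```python
import numpy as np

def rkd_estimate(v, y, b, kink, h):
    """Sharp RKD: (slope change of E[y|v]) / (slope change of b(v)) at the kink,
    estimated by separate linear fits within bandwidth h on each side."""
    def slope(x, z, side):
        if side == "right":
            m = (x > kink) & (x <= kink + h)
        else:
            m = (x < kink) & (x >= kink - h)
        return np.polyfit(x[m] - kink, z[m], 1)[0]   # leading coefficient = slope
    dy = slope(v, y, "right") - slope(v, y, "left")
    db = slope(v, b, "right") - slope(v, b, "left")
    return dy / db

# Simulated example: benefits b(v) rise with slope 0.5 and are capped at 0.5
rng = np.random.default_rng(2)
v = rng.uniform(0.0, 2.0, size=20000)                # assignment variable, kink at 1
b = np.minimum(0.5 * v, 0.5)                         # known, deterministic policy rule
y = 2.0 * b + 0.3 * v + rng.normal(scale=0.1, size=v.size)
tau = rkd_estimate(v, y, b, kink=1.0, h=0.5)         # true effect of b on y is 2.0
```

The identifying idea is visible in the code: the smooth direct effect of v on y (here 0.3·v) differences out of the slope change, so only the effect transmitted through the kinked policy variable remains.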
By:  Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University) 
Abstract:  A recent literature, e.g., Lady and Buck (2011), has shown that a qualitative analysis of a model's structural and estimated reduced-form arrays can provide a robust procedure for assessing whether a model's hypothesized structure has been falsified. This paper shows that even the weaker statement of the model's structure provided by zero restrictions on the structural arrays can be falsified, independent of the proposed nonzero entries. When this takes place, multistage least squares, or any procedure for estimating the structural arrays with the zero restrictions imposed, will produce estimates that could not possibly have generated the data upon which the estimated reduced form is based. The examples given in the paper are based upon a Monte Carlo sampling procedure that is briefly described in the appendix. 
Keywords:  qualitative analysis, Monte Carlo, model falsification, impossible estimates 
JEL:  C15 C18 C51 C52 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:tem:wpaper:1506&r=ecm 
By:  Ismael Mourifie; Yuanyuan Wan 
Abstract:  In this paper we propose a new unifying approach to (partially) identify potential outcome distributions in a nonseparable triangular model with a binary endogenous variable and a binary instrument. Our identification strategy provides a testable condition under which the objects of interest are point identified. When point identification is not achieved, we provide sharp bounds on the potential outcome distributions and the difference of marginal distributions. 
Keywords:  Potential outcomes, triangular system, point and partial identification, sharp bounds. 
JEL:  C14 C31 C35 
Date:  2015–01–29 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa532&r=ecm 
By:  KUROZUMI, Eiji; YAMAMOTO, Yohei 
Abstract:  This study proposes constructing a confidence set for the date of a one-time structural change using a point optimal test. Following Elliott and Müller (2007), we first construct a test for the break date that maximizes the weighted average of the power function. The confidence set is then obtained by inverting the test statistic. We carefully choose the weights and show by Monte Carlo simulations that the confidence set based on our method has a relatively accurate coverage rate, while its length is significantly shorter than that of confidence sets proposed in the literature. 
Keywords:  coverage rate, break fraction, hypothesis test, average power 
JEL:  C12 C22 
Date:  2015–01–22 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201501&r=ecm 
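Inverting a break-date test works as follows: for each candidate date, test the null hypothesis that the break occurred there, and collect the dates that are not rejected. The sketch below inverts a simple likelihood-ratio-type statistic for a one-time mean break; it is not the authors' point-optimal, weighted-average-power statistic, and the critical value `crit` is illustrative:

```python
import numpy as np

def break_confidence_set(y, crit):
    """Confidence set for a one-time mean-break date, obtained by inverting an
    SSR-based likelihood-ratio-type test at each candidate date k."""
    n = len(y)
    def ssr(k):                        # regimes: y[0:k] and y[k:n]
        a, b = y[:k], y[k:]
        return ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    ks = np.arange(2, n - 1)
    s = np.array([ssr(k) for k in ks])
    lr = n * np.log(s / s.min())       # LR-type statistic vs the best-fitting date
    return set(ks[lr <= crit])         # dates not rejected at the chosen level

# Example: mean shifts from 0 to 2 at observation 100
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
cs = break_confidence_set(y, crit=10.0)
```

The resulting set is a collection of plausible break dates rather than a single point estimate, which is precisely the object whose coverage and length the paper studies.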
By:  Emanuele Bacchiocchi (University of Milano); Efrem Castelnuovo (University of Padova); Luca Fanelli (University of Bologna) 
Abstract:  We employ a novel identification scheme to quantify the macroeconomic effects of monetary policy shocks in the United States. The identification of the shocks is achieved by exploiting the instabilities in the contemporaneous coefficients of the structural VAR (SVAR) and in the covariance matrix of the reduced-form residuals. Different volatility regimes can be associated with different transmission mechanisms of the identified structural shocks. We formally test and reject the stability of our impulse responses estimated with post-WWII U.S. data by working with a break in macroeconomic volatilities that occurred in the mid-1980s. We show that the impulse responses obtained with our non-recursive identification scheme are quite similar to those from a standard Cholesky SVAR estimated with pre-1984 data. In contrast, recursive and non-recursive identification schemes return substantially different macroeconomic reactions conditional on Great Moderation data, in particular for inflation and a long-term interest rate. Using our non-recursive SVARs as auxiliary models to estimate a small-scale new-Keynesian model of the business cycle with an impulse response function matching approach, we show that the instabilities in the estimated VAR impulse responses are informative for the calibration of some key structural parameters. 
Keywords:  structural break, recursive and non-recursive VARs, identification, monetary policy shocks, impulse responses. 
JEL:  C32 C50 E52 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:pad:wpaper:0181&r=ecm 
By:  Daniela Marella; Mauro Mezzini; Paola Vicard 
Abstract:  Bayesian networks are multivariate statistical models using a directed acyclic graph to represent statistical dependencies among variables. When dealing with Bayesian networks it is common to assume that all the variables are discrete. This is often not the case in real applications, where continuous variables are also observed. A common solution consists in discretizing the continuous variables. In this paper we propose a discretization algorithm based on the Kullback-Leibler divergence measure. Formally, we deal with the problem of discretizing a continuous variable Y conditionally on its parents. We show that such a problem is polynomially solvable. Finally, a simulation study is performed. 
Keywords:  Discretization, Kullback-Leibler divergence measure, Bayesian Networks 
JEL:  C10 C18 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0199&r=ecm 
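A much-simplified version of divergence-guided discretization: choose a cut point for the continuous child Y so that the binned variable retains as much dependence on its parent as possible, scoring each candidate by the Kullback-Leibler divergence between the joint binned distribution and the product of its margins (i.e., the mutual information). The paper's polynomial-time algorithm and full conditional formulation are not reproduced; this is a brute-force, single-threshold illustration with an assumed binary parent:

```python
import numpy as np

def mi_cut_point(y, x, candidates):
    """Pick the cut point c for continuous y (binary parent x) maximizing
    KL( P(bin(y), x) || P(bin(y)) P(x) ), i.e. the mutual information retained."""
    def mi(c):
        b = (y > c).astype(int)
        p = np.zeros((2, 2))
        for i in range(2):
            for j in range(2):
                p[i, j] = np.mean((b == i) & (x == j))
        pb, px = p.sum(axis=1), p.sum(axis=0)      # marginals of bin and parent
        mask = p > 0
        return (p[mask] * np.log(p[mask] / np.outer(pb, px)[mask])).sum()
    scores = [mi(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Example: Y depends on its discrete parent X, best single cut near 0.5
rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=20000)
y = x + rng.normal(scale=0.5, size=x.size)
cut = mi_cut_point(y, x, np.linspace(-1.0, 2.0, 61))
```

The brute-force scan is exponential in the number of cut points; the point of the paper is that the corresponding conditional problem can in fact be solved in polynomial time.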
By:  Tatiana Komarova 
Abstract:  This paper proposes an approach to proving nonparametric identification for distributions of bidders' values in asymmetric second-price auctions. I consider the case when bidders have independent private values and the only available data pertain to the winner's identity and the transaction price. My proof of identification is constructive and is based on establishing the existence and uniqueness of a solution to the system of nonlinear differential equations that describes relationships between unknown distribution functions and observable functions. The proof is conducted in two logical steps. First, I prove the existence and uniqueness of a local solution. Then I describe a method that extends this local solution to the whole support. The paper delivers further results of interest: I demonstrate how this approach can be applied to obtain identification in auctions with a stochastic number of bidders, and I show that my results can be extended to generalized competing risks models. 
Keywords:  Second-price auctions; ascending auctions; asymmetric bidders; private values; nonparametric identification; competing risks; coherent systems 
JEL:  C02 C14 C41 C65 D44 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:50245&r=ecm 
By:  Raffaello Morales; T. Di Matteo; Tomaso Aste 
Abstract:  We report evidence of a deep interplay between the hierarchical properties of cross-correlations and the multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated with their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation of the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated with their hierarchical positioning in the cross-correlation structure. The empirical observations reported in this paper present a new perspective towards the merging of univariate multi-scaling and multivariate cross-correlation properties of financial time series. 
JEL:  F3 G3 
Date:  2014–04–04 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:56622&r=ecm 
By:  Thomas BoyerKassem (Archives H. Poincaré (UMR 7117 CNRS); Université de Lorraine); Sébastien Duchêne (Université Nice Sophia Antipolis; GREDEGCNRS); Eric Guerci (Université Nice Sophia Antipolis; GREDEGCNRS) 
Abstract:  Lately, so-called 'quantum models', based on parts of the mathematics of quantum mechanics, have been developed in decision theory and the cognitive sciences to account for seemingly irrational or paradoxical human judgments. In this paper, we limit ourselves to such quantum-like models that address order effects. It has been argued that such models are able to account for existing and new empirical data, and meet some a priori predictions. From the quantum law of reciprocity, we derive new empirical predictions that we call the Grand Reciprocity equations, which must be satisfied by quantum-like models on the condition that they are non-degenerate. We show that existing non-degenerate quantum-like models for order effects fail this test on several existing data sets. We take this to suggest that degenerate quantum-like models should be the focus of forthcoming research in the area. 
Keywords:  Order effects, Decision theory, Quantum probability 
JEL:  C10 C40 C44 D03 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:gre:wpaper:201506&r=ecm 
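For non-degenerate models, the quantum law of reciprocity forces symmetric transition probabilities, |⟨a|b⟩|² = |⟨b|a⟩|²: the probability of answering yes to question B right after a yes to A must equal the probability of answering yes to A right after a yes to B. A minimal check of this restriction on order-effect data follows; the counts are hypothetical, and the single equation checked is only one of the family of restrictions the paper derives:

```python
def reciprocity_gap(ab_counts, ba_counts):
    """ab_counts: counts over ('y'/'n', 'y'/'n') answer pairs when A is asked first
    (first answer listed first); ba_counts: the same when B is asked first.
    Returns |p(B=y | A=y, A first) - p(A=y | B=y, B first)|, which reciprocity
    requires to be ~0 (up to sampling noise) for non-degenerate models."""
    p_b_given_a = ab_counts[('y', 'y')] / (ab_counts[('y', 'y')] + ab_counts[('y', 'n')])
    p_a_given_b = ba_counts[('y', 'y')] / (ba_counts[('y', 'y')] + ba_counts[('y', 'n')])
    return abs(p_b_given_a - p_a_given_b)

# Hypothetical survey counts, for illustration only
ab = {('y', 'y'): 60, ('y', 'n'): 40, ('n', 'y'): 20, ('n', 'n'): 80}
ba = {('y', 'y'): 30, ('y', 'n'): 30, ('n', 'y'): 50, ('n', 'n'): 90}
gap = reciprocity_gap(ab, ba)
```

A gap like the 0.1 produced by these made-up counts, if it exceeded sampling error in real data, is the kind of violation the authors document for existing non-degenerate models.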
By:  Jouni Kuha; Jonathan Jackson 
Abstract:  The item count method is a way of asking sensitive survey questions which protects the anonymity of the respondents by randomization before the interview. It can be used to estimate the probability of sensitive behaviour and to model how it depends on explanatory variables. We analyse item count survey data on the illegal behaviour of buying stolen goods. The analysis of an item count question is best formulated as an instance of modelling incomplete categorical data. We propose an efficient implementation of the estimation which also provides explicit variance estimates for the parameters. We then suggest specifications for the model for the control items, which is an auxiliary but unavoidable part of the analysis of item count data. These considerations and the results of our analysis of criminal behaviour highlight the fact that careful design of the questions is crucial for the success of the item count method. 
Keywords:  categorical data analysis; EM algorithm; list experiment; missing information; Newton-Raphson algorithm; randomized response 
JEL:  C1 
Date:  2014–02–11 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:48069&r=ecm 
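The basic item count (list experiment) estimator behind such data is a difference in means: the treatment group reports how many of J innocuous items plus the sensitive item apply to them, the control group reports only on the J innocuous items, so the mean difference estimates the prevalence of the sensitive behaviour without any respondent revealing it individually. A sketch on simulated data follows; the paper's incomplete-categorical-data modelling and covariate effects are not attempted:

```python
import numpy as np

def item_count_prevalence(treat_counts, control_counts):
    """Difference-in-means estimator of sensitive-item prevalence,
    with a standard error from the two-sample variance formula."""
    t = np.asarray(treat_counts, dtype=float)
    c = np.asarray(control_counts, dtype=float)
    est = t.mean() - c.mean()
    var = t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c)
    return est, np.sqrt(var)

# Simulated survey: J = 4 innocuous items, true sensitive prevalence 0.2
rng = np.random.default_rng(5)
control = rng.binomial(4, 0.5, size=20000)
treat = rng.binomial(4, 0.5, size=20000) + rng.binomial(1, 0.2, size=20000)
est, se = item_count_prevalence(treat, control)
```

The variance formula also makes the method's main cost visible: the J control items add noise, which is why the abstract stresses that careful design of the control items is crucial.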
By:  Frank T. Denton; Dean C. Mountain 
Abstract:  Policy analysis frequently requires estimates of aggregate (or mean) consumer elasticities. However, such estimates are often made incorrectly, based on elasticity calculations at mean income. We provide in this paper an overall integrated analytical framework that encompasses these biases and others. We then use empirically derived parameter estimates to simulate and quantify the full range of biases. We do so for alternative income distributions and four different demand models. The biases can be quite large; they generally grow as the degree of income inequality rises, as the underlying expenditure elasticity differs from one, and as the rank of the model increases. 
Keywords:  aggregate consumer elasticities, aggregation bias, consumer demand, income inequality, income distribution, model rank 
JEL:  D11 C43 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:mcm:deptwp:201501&r=ecm 
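The core bias is that the elasticity evaluated at mean income is not the aggregate elasticity, which weights individual elasticities by expenditure. A toy illustration under an assumed logarithmic Engel curve follows; this is not one of the paper's four demand models, and all functional forms and parameter values here are hypothetical:

```python
import numpy as np

def demand(m):
    """Illustrative Engel curve with non-constant elasticity: q(m) = 1 + 0.5*log(m)."""
    return 1.0 + 0.5 * np.log(m)

def elasticity(m):
    """Income elasticity e(m) = (dq/dm) * (m/q) = 0.5 / (1 + 0.5*log(m))."""
    return 0.5 / (1.0 + 0.5 * np.log(m))

rng = np.random.default_rng(6)
m = rng.lognormal(mean=1.0, sigma=0.6, size=100_000)   # right-skewed incomes

e_at_mean = elasticity(m.mean())                       # the common (biased) shortcut
q = demand(m)
e_aggregate = np.sum(q * elasticity(m)) / q.sum()      # expenditure-weighted mean
```

With any skewed income distribution and non-constant elasticity the two numbers differ, and the gap widens as inequality (here the lognormal sigma) rises, which is the pattern the paper quantifies across its demand systems.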