nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒02‒05
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Estimation and Moment Selection in Dynamic Fixed-effects Panel Data Models By Cizek, P.; Aquaro, M.
  2. Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood By Billy Wu; Qiwei Yao; Shiwu Zhu
  3. Robust Confidence Intervals for Average Treatment Effects under Limited Overlap By Rothe, Christoph
  4. Testing the lag structure of assets’ realized volatility dynamics By Audrino, Francesco; Camponovo, Lorenzo; Roth, Constantin
  5. Random intercept selection in structured additive regression models By Helene Roth; Stefan Lang; Helga Wagner
  6. Inference on Causal Effects in a Generalized Regression Kink Design By David Card; David S. Lee; Zhuan Pei; Andrea Weber
  7. Estimating a Falsified Model: Some Impossibility Theorems By Andrew J. Buck; George M. Lady
  8. (Partially) Identifying potential outcome distributions in triangular systems By Ismael Mourifie; Yuanyuan Wan
  9. Confidence Sets for the Break Date Based on Optimal Tests By KUROZUMI, Eiji; YAMAMOTO, Yohei
  10. Gimme a break! Identification and estimation of the macroeconomic effects of monetary policy shocks in the U.S. By Emanuele Bacchiocchi; Efrem Castelnuovo; Luca Fanelli
  11. Improving Discretization Exploiting Dependence Structure By Daniela Marella; Mauro Mezzini; Paola Vicard
  12. A new approach to identifying generalized competing risks models with application to second-price auctions By Tatiana Komarova
  13. Dependency structure and scaling properties of financial time series are related By Raffaello Morales; T. Di Matteo; Tomaso Aste
  14. Testing Quantum-like Models of Judgment for Question Order Effects By Thomas Boyer-Kassem; Sébastien Duchêne; Eric Guerci
  15. The item count method for sensitive survey questions: modelling criminal behaviour By Jouni Kuha; Jonathan Jackson
  16. Biases in Consumer Elasticities Based on Micro and Aggregate Data: An Integrated Framework and Empirical Evaluation By Frank T. Denton; Dean C. Mountain

  1. By: Cizek, P. (Tilburg University, Center For Economic Research); Aquaro, M. (Tilburg University, Center For Economic Research)
    Abstract: This paper extends an existing outlier-robust estimator of linear dynamic panel data models with fixed effects, which is based on the median ratio of two consecutive pairs of first-differenced data. To improve its precision and robustness properties, a general procedure based on many pairwise differences and their ratios is designed. The proposed two-step GMM estimator based on the corresponding moment equations relies on an innovative weighting scheme reflecting both the variance and bias of those moment equations, where the bias is assumed to stem from data contamination. To estimate the bias, the influence function is derived and evaluated. The asymptotic distribution as well as the robustness properties of the estimator are characterized; the latter are obtained both under contamination by independent additive outliers and under patches of additive outliers. The proposed estimator is additionally compared with existing methods by means of Monte Carlo simulations.
    Keywords: dynamic panel data; fixed effects; generalized method of moments; influence function; pairwise differences; robust estimation
    JEL: C13 C23
    Date: 2015
  2. By: Billy Wu; Qiwei Yao; Shiwu Zhu
    Abstract: We consider the incidental parameters problem in this paper, i.e. the estimation of a small number of parameters of interest in the presence of a large number of nuisance parameters. Assuming that the observations are taken from a multivariate strictly stationary process, two estimation methods are considered: the maximum composite quasi-likelihood estimation (MCQLE) and the maximum plug-in quasi-likelihood estimation (MPQLE). For the MCQLE, we profile out nuisance parameters based on lower-dimensional marginal likelihoods, while the MPQLE is based on some initial estimators for the nuisance parameters. The asymptotic normality of both the MCQLE and the MPQLE is established under the assumption that the number of nuisance parameters and the number of observations go to infinity together, and both estimators for the parameters of interest enjoy the standard root-n convergence rate. A simulation with a spatial-temporal model illustrates the finite-sample properties of the two estimation methods.
    Keywords: composite likelihood; incidental parameters problem; nuisance parameters; panel data; profile likelihood; quasi-likelihood; root-n convergence
    JEL: C1
    Date: 2013–07
  3. By: Rothe, Christoph (Columbia University)
    Abstract: Estimators of average treatment effects under unconfounded treatment assignment are known to become rather imprecise if there is limited overlap in the covariate distributions between the treatment groups. But such limited overlap can also have a detrimental effect on inference, leading, for example, to highly distorted confidence intervals. This paper shows that this is because the coverage error of traditional confidence intervals is driven not so much by the total sample size as by the number of observations in the areas of limited overlap. At least some of these "local sample sizes" are often very small in applications, up to the point where distributional approximations derived from the Central Limit Theorem become unreliable. Building on this observation, the paper proposes two new robust confidence intervals that are extensions of classical approaches to small sample inference. It shows that these approaches are easy to implement, and have superior theoretical and practical properties relative to standard methods in empirically relevant settings. They should thus be useful for practitioners.
    Keywords: average treatment effect, causality, overlap, propensity score, treatment effect heterogeneity, unconfoundedness
    JEL: C12 C14 C25 C31
    Date: 2015–01
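The "local sample size" idea in this abstract can be illustrated with a short simulated sketch (not the paper's method): with a covariate that strongly predicts treatment, only a handful of treated units fall where the propensity score is close to zero, and it is these few observations, rather than the overall sample size, that govern the quality of the normal approximation there. The cutoffs 0.1 and 0.9 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a covariate that strongly predicts treatment, producing
# regions of limited overlap in the propensity score.
n = 2000
x = rng.normal(size=n)
pscore = 1.0 / (1.0 + np.exp(-3.0 * x))   # true propensity score
d = rng.binomial(1, pscore)               # treatment indicator

# "Local sample sizes": how many treated units fall where p(x) is near 0,
# and how many control units fall where p(x) is near 1.
low = pscore < 0.1
high = pscore > 0.9
n_treated_low = int(d[low].sum())
n_control_high = int((1 - d[high]).sum())

print(n, int(low.sum()), int(high.sum()), n_treated_low, n_control_high)
```

Although hundreds of observations sit in each limited-overlap region, the treated count near p(x) = 0 (and the control count near p(x) = 1) is an order of magnitude smaller, which is the small "local sample size" driving the coverage error.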
  4. By: Audrino, Francesco; Camponovo, Lorenzo; Roth, Constantin
    Abstract: A (conservative) test is constructed to investigate the optimal lag structure for forecasting realized volatility dynamics. The testing procedure relies on the recent theoretical results that show the ability of the adaptive least absolute shrinkage and selection operator (adaptive lasso) to combine efficient parameter estimation, variable selection, and valid inference for time series processes. In an application to several constituents of the S&P 500 index it is shown that (i) the optimal significant lag structure is time-varying and subject to drastic regime shifts that seem to happen across assets simultaneously; (ii) in many cases the relevant information for prediction is included in the first 22 lags, corroborating previous results concerning the accuracy and the difficulty of outperforming out-of-sample the heterogeneous autoregressive (HAR) model; and (iii) some common features of the optimal lag structure can be identified across assets belonging to the same market segment or showing a similar beta with respect to the market index.
    Keywords: Realized volatility; Adaptive lasso; HAR model; Test for false positives; Lag structure
    JEL: C12 C58 C63
    Date: 2015–01
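The adaptive lasso underlying the testing procedure can be sketched in two steps: initial OLS estimates supply coefficient-specific penalty weights, and a weighted lasso then performs selection. This is a minimal sketch on a generic simulated design, not the paper's realized-volatility data; the design, the number of lags, and the alpha value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)

# Simulated design: only the first 3 of 22 candidate lags matter,
# mimicking lag selection for volatility dynamics.
n, p = 500, 22
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [0.4, 0.25, 0.15]
y = X @ beta + 0.5 * rng.normal(size=n)

# Step 1: initial OLS estimates give the adaptive weights.
b_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
w = 1.0 / np.abs(b_ols)

# Step 2: a lasso on the rescaled regressors X/w is equivalent to the
# adaptive lasso with penalty weights w on the original scale.
lasso = Lasso(alpha=0.01, fit_intercept=False).fit(X / w, y)
b_alasso = lasso.coef_ / w        # undo the rescaling

selected = np.flatnonzero(b_alasso != 0)
print(selected)
```

Coefficients with small initial estimates receive large penalties and are driven exactly to zero, while well-identified lags survive, which is the oracle-type behaviour the testing procedure in the paper exploits.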
  5. By: Helene Roth; Stefan Lang; Helga Wagner
    Abstract: This paper discusses random intercept selection within the context of semiparametric regression models with structured additive predictor (STAR). STAR models can deal simultaneously with nonlinear covariate effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity and complex interactions between covariates of different type. The random intercept selection is based on spike and slab priors for the variances of the random intercept coefficients. The aim is to achieve shrinkage of small random intercept coefficients to zero, similar to the LASSO in frequentist linear models. The mixture structure of the spike and slab prior allows for selective shrinkage, as coefficients are either heavily shrunk under the spike component or left almost unshrunk under the slab component. The hyperparameters of the spike and slab prior are chosen on theoretical grounds, based on the prior inclusion probability of a particular random coefficient given the true effect size. Using extensive simulation experiments we compare random intercept models based on spike and slab priors for variances with the usual Inverse Gamma priors. A case study on malnutrition of children in Zambia illustrates the methodology in a real data example.
    Keywords: Bayesian hierarchical models, Bayesian model choice, MCMC, P-splines, spike and slab priors
    Date: 2015–01
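The selective-shrinkage logic of a spike and slab prior can be sketched numerically. This simplified version places the two-component normal mixture directly on a coefficient, whereas the paper places it on the random-intercept variances; the hyperparameters w, v_spike and v_slab are illustrative assumptions, not the paper's values.

```python
import math

def slab_inclusion_prob(b, w=0.5, v_spike=1e-4, v_slab=1.0):
    """Posterior probability that coefficient b was drawn from the slab
    component of the mixture prior b ~ w*N(0, v_slab) + (1-w)*N(0, v_spike)."""
    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    num = w * normal_pdf(b, v_slab)
    den = num + (1 - w) * normal_pdf(b, v_spike)
    return num / den

# A sizeable random-intercept coefficient is almost surely assigned to the
# slab (left almost unshrunk), a tiny one to the spike (shrunk to zero).
print(slab_inclusion_prob(0.5), slab_inclusion_prob(0.001))
```

The sharp switch of the inclusion probability between small and large effect sizes is precisely the "selective shrinkage" the abstract describes, and it is this inclusion probability, as a function of the true effect size, that guides the hyperparameter choice.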
  6. By: David Card (University of California, Berkeley); David S. Lee (Princeton University); Zhuan Pei (Brandeis university); Andrea Weber (University of Mannheim)
    Abstract: We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. This design arises in many institutional settings where a policy variable (such as weekly unemployment benefits) is determined by an observed but potentially endogenous assignment variable (like previous earnings). We provide new results on identification and estimation for these settings, and apply our results to obtain estimates of the elasticity of joblessness with respect to UI benefit rates. We characterize a broad class of models in which a sharp "Regression Kink Design" (RKD, or RK Design) identifies a readily interpretable treatment-on-the-treated parameter (Florens et al. (2008)). We also introduce a "fuzzy regression kink design" generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement errors in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. We then use a fuzzy RKD approach to study the effect of unemployment insurance benefits on the duration of joblessness in Austria, where the benefit schedule has kinks at the minimum and maximum benefit level. Our preferred estimates suggest that changes in UI benefit generosity exert a relatively large effect on the duration of joblessness of both low-wage and high-wage UI recipients in Austria.
    Keywords: Regression Discontinuity Design, Regression Kink Design, Treatment Effects, Nonseparable Models, Nonparametric Estimation
    JEL: C13 C14 C31
    Date: 2015–01
  7. By: Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University)
    Abstract: A recent literature, e.g., Lady and Buck (2011), has shown that a qualitative analysis of a model’s structural and estimated reduced form arrays can provide a robust procedure for assessing if a model’s hypothesized structure has been falsified. This paper shows that the even weaker statement of the model’s structure provided by zero restrictions on the structural arrays can be falsified, independent of the proposed nonzero entries. When this takes place, multi-stage least squares, or any procedure for estimating the structural arrays with the zero restrictions imposed, will present estimates that could not possibly have generated the data upon which the estimated reduced form is based. The examples given in the paper are based upon a Monte Carlo sampling procedure that is briefly described in the appendix.
    Keywords: qualitative analysis, Monte Carlo, model falsification, impossible estimates
    JEL: C15 C18 C51 C52
    Date: 2015–01
  8. By: Ismael Mourifie; Yuanyuan Wan
    Abstract: In this paper we propose a new unifying approach to (partially) identify potential outcome distributions in a non-separable triangular model with a binary endogenous variable and a binary instrument. Our identification strategy provides a testable condition under which the objects of interest are point identified. When point identification is not achieved, we provide sharp bounds on the potential outcome distributions and the difference of marginal distributions.
    Keywords: Potential outcomes, triangular system, point and partial identification, sharp bounds.
    JEL: C14 C31 C35
    Date: 2015–01–29
  9. By: KUROZUMI, Eiji; YAMAMOTO, Yohei
    Abstract: This study proposes constructing a confidence set for the date of a one-time structural change using a point optimal test. Following Elliott and Müller (2007), we first construct a test for the break date that maximizes the weighted average of the power function. The confidence set is then obtained by inverting the test statistic. We carefully choose the weights and show by Monte Carlo simulations that the confidence set based on our method has a relatively accurate coverage rate, while its length is significantly shorter than those of the confidence sets proposed in the literature.
    Keywords: coverage rate, break fraction, hypothesis test, average power
    JEL: C12 C22
    Date: 2015–01–22
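The test-inversion construction can be sketched on a mean-shift model: keep every candidate break date at which the test fails to reject. A likelihood-ratio-type SSR comparison stands in for the paper's optimal test, and the critical value 7.7 is an illustrative placeholder, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Series with a one-time mean shift at t = 60.
T, tau_true = 100, 60
y = rng.normal(size=T)
y[tau_true:] += 1.5

def ssr(y, tau):
    """Sum of squared residuals when the mean breaks at candidate date tau."""
    a, b = y[:tau], y[tau:]
    return ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()

taus = np.arange(10, T - 10)              # trim the sample ends
ssrs = np.array([ssr(y, t) for t in taus])

# Invert the test: the confidence set collects every candidate date whose
# SSR is within the critical margin of the minimum.
conf_set = taus[ssrs - ssrs.min() <= 7.7]
print(conf_set.min(), conf_set.max())
```

Dates far from the true break fit the data much worse and are rejected, so the resulting set concentrates around the break; the paper's contribution is choosing the test (and hence the implied cutoff) to make that set as short as possible at a given coverage rate.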
  10. By: Emanuele Bacchiocchi (University of Milano); Efrem Castelnuovo (University of Padova); Luca Fanelli (University of Bologna)
    Abstract: We employ a novel identification scheme to quantify the macroeconomic effects of monetary policy shocks in the United States. The identification of the shocks is achieved by exploiting the instabilities in the contemporaneous coefficients of the structural VAR (SVAR) and in the covariance matrix of the reduced-form residuals. Different volatility regimes can be associated with different transmission mechanisms of the identified structural shocks. We formally test and reject the stability of our impulse responses estimated with post-WWII U.S. data by working with a break in macroeconomic volatilities that occurred in the mid-1980s. We show that the impulse responses obtained with our non-recursive identification scheme are quite similar to those from a standard Cholesky-SVAR estimated with pre-1984 data. In contrast, recursive and non-recursive identification schemes return substantially different macroeconomic reactions conditional on Great Moderation data, in particular for inflation and a long-term interest rate. Using our non-recursive SVARs as auxiliary models to estimate a small-scale new-Keynesian model of the business cycle with an impulse response function matching approach, we show that the instabilities in the estimated VAR impulse responses are informative for the calibration of some key structural parameters.
    Keywords: structural break, recursive and non-recursive VARs, identification, monetary policy shocks, impulse responses.
    JEL: C32 C50 E52
    Date: 2014–07
  11. By: Daniela Marella; Mauro Mezzini; Paola Vicard
    Abstract: Bayesian networks are multivariate statistical models using a directed acyclic graph to represent statistical dependencies among variables. When dealing with Bayesian networks it is common to assume that all the variables are discrete. This is often not the case in real applications, where continuous variables are also observed. A common solution is to discretize the continuous variables. In this paper we propose a discretization algorithm based on the Kullback-Leibler divergence measure. Formally, we deal with the problem of discretizing a continuous variable Y conditionally on its parents. We show that this problem is polynomially solvable. A simulation study is finally performed.
    Keywords: Discretization, Kullback-Leibler divergence measure, Bayesian Networks
    JEL: C10 C18
    Date: 2015–01
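A hedged sketch of discretization judged by Kullback-Leibler loss: merge fine histogram bins into coarser ones and measure the divergence between the original distribution and the one the coarser discretization implicitly assumes. The paper's algorithm works conditionally on a variable's parents; this shows only the marginal building block, and the histogram and cut points are made up for illustration.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def coarsen(p, cuts):
    """Merge the fine histogram p at the given cut points, spreading each
    merged bin's mass uniformly: the distribution a coarser
    discretization implicitly assumes."""
    q = np.empty_like(p)
    edges = [0] + list(cuts) + [len(p)]
    for a, b in zip(edges[:-1], edges[1:]):
        q[a:b] = p[a:b].sum() / (b - a)
    return q

# Fine histogram of a continuous variable (roughly triangular shape).
p = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05])
p = p / p.sum()

# Compare two 3-bin discretizations by their information loss.
loss_a = kl(p, coarsen(p, [2, 5]))   # cuts adapted to the shape
loss_b = kl(p, coarsen(p, [1, 6]))   # cuts isolating the flat tails
print(loss_a, loss_b)
```

Cut points that track the shape of the distribution lose less information than cuts that merge heterogeneous regions; the paper's contribution is showing that the search for the best cuts (conditionally on the parents) is polynomially solvable.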
  12. By: Tatiana Komarova
    Abstract: This paper proposes an approach to proving nonparametric identification for distributions of bidders' values in asymmetric second-price auctions. I consider the case when bidders have independent private values and the only available data pertain to the winner's identity and the transaction price. My proof of identification is constructive and is based on establishing the existence and uniqueness of a solution to the system of nonlinear differential equations that describes relationships between unknown distribution functions and observable functions. The proof proceeds in two steps. First, I prove the existence and uniqueness of a local solution. Then I describe a method that extends this local solution to the whole support. The paper also delivers further results. I demonstrate how this approach can be applied to obtain identification in auctions with a stochastic number of bidders. Furthermore, I show that my results can be extended to generalized competing risks models.
    Keywords: Second-price auctions; ascending auctions; asymmetric bidders; private values; nonparametric identification; competing risks; coherent systems
    JEL: C02 C14 C41 C65 D44
    Date: 2013–07
  13. By: Raffaello Morales; T. Di Matteo; Tomaso Aste
    Abstract: We report evidence of a deep interplay between the hierarchical properties of cross-correlations and the multifractality of New York Stock Exchange daily stock returns. The degree of multifractality displayed by different stocks is found to be positively correlated with their depth in the hierarchy of cross-correlations. We propose a dynamical model that reproduces this observation along with an array of other empirical properties. The structure of this model is such that the hierarchical structure of heterogeneous risks plays a crucial role in the time evolution of the correlation matrix, providing an interpretation of the mechanism behind the interplay between cross-correlation and multifractality in financial markets, where the degree of multifractality of stocks is associated with their hierarchical positioning in the cross-correlation structure. The empirical observations reported in this paper present a new perspective on the merging of univariate multi-scaling and multivariate cross-correlation properties of financial time series.
    JEL: F3 G3
    Date: 2014–04–04
  14. By: Thomas Boyer-Kassem (Archives H. Poincaré (UMR 7117 CNRS); Université de Lorraine); Sébastien Duchêne (Université Nice Sophia Antipolis; GREDEG-CNRS); Eric Guerci (Université Nice Sophia Antipolis; GREDEG-CNRS)
    Abstract: Lately, so-called 'quantum models', based on parts of the mathematics of quantum mechanics, have been developed in decision theory and the cognitive sciences to account for seemingly irrational or paradoxical human judgments. In this paper, we limit ourselves to quantum-like models that address order effects. It has been argued that such models are able to account for existing and new empirical data, and meet some a priori predictions. From the quantum law of reciprocity, we derive new empirical predictions, which we call the Grand Reciprocity equations, that must be satisfied by quantum-like models provided they are non-degenerate. We show that existing non-degenerate quantum-like models for order effects fail this test on several existing data sets. We take this to suggest that degenerate quantum-like models should be the focus of forthcoming research in the area.
    Keywords: Order effects, Decision theory, Quantum probability
    JEL: C10 C40 C44 D03
    Date: 2015–01
  15. By: Jouni Kuha; Jonathan Jackson
    Abstract: The item count method is a way of asking sensitive survey questions that protects the anonymity of respondents through randomization before the interview. It can be used to estimate the probability of sensitive behaviour and to model how it depends on explanatory variables. We analyse item count survey data on the illegal behaviour of buying stolen goods. The analysis of an item count question is best formulated as an instance of modelling incomplete categorical data. We propose an efficient implementation of the estimation which also provides explicit variance estimates for the parameters. We then suggest specifications for the model for the control items, an auxiliary but unavoidable part of the analysis of item count data. These considerations and the results of our analysis of criminal behaviour highlight the fact that careful design of the questions is crucial for the success of the item count method.
    Keywords: categorical data analysis; EM algorithm; list experiment; missing information; Newton-Raphson algorithm; randomized response
    JEL: C1
    Date: 2014–02–11
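The basic estimator behind the item count (list experiment) design is a simple difference in means between the treatment and control lists. A minimal simulated sketch follows; the authors' approach is model-based with explanatory variables and explicit variance estimates, which this does not attempt, and the item probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an item count experiment: controls see 3 innocuous items,
# the treatment group sees the same 3 plus the sensitive item.
n = 5000
true_prevalence = 0.20
innocuous = rng.binomial(3, 0.5, size=2 * n)        # counts of "yes" control items
sensitive = rng.binomial(1, true_prevalence, size=n)

control_counts = innocuous[:n]
treated_counts = innocuous[n:] + sensitive

# Difference-in-means estimator of the sensitive behaviour's prevalence:
# respondents report only totals, so anonymity is preserved.
prevalence_hat = treated_counts.mean() - control_counts.mean()
print(round(prevalence_hat, 3))
```

Because each respondent reports only a total count, no individual answer to the sensitive item is ever revealed, yet the group-level difference recovers its prevalence; the variance contributed by the control items is the price paid for that protection, which is why their design matters so much.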
  16. By: Frank T. Denton; Dean C. Mountain
    Abstract: Policy analysis frequently requires estimates of aggregate (or mean) consumer elasticities. However, estimates are often made incorrectly, based on elasticity calculations at mean income. We provide in this paper an overall integrated analytical framework that encompasses these biases and others. We then use empirically derived parameter estimates to simulate and quantify the full range of biases. We do that for alternative income distributions and four different demand models. The biases can be quite large; they generally grow as the degree of income inequality rises, the underlying expenditure elasticity differs from one, and the rank of the model increases.
    Keywords: aggregate consumer elasticities, aggregation bias, consumer demand, income inequality, income distribution, model rank
    JEL: D11 C43
    Date: 2015–01
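One source of the bias the abstract describes, evaluating the elasticity at mean income instead of averaging individual elasticities, can be shown with a simple Working-Leser Engel curve. The functional form, parameter values, and income distribution below are illustrative assumptions, not one of the paper's four demand models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Working-Leser Engel curve: budget share w(y) = a + b*ln(y), which gives
# expenditure elasticity e(y) = 1 + b / w(y)  (a luxury good when b > 0).
a, b = 0.10, 0.03

def elasticity(y):
    return 1.0 + b / (a + b * np.log(y))

# Lognormal income distribution (a simple stand-in for income inequality).
y = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)

e_at_mean = elasticity(y.mean())      # elasticity evaluated at mean income
e_mean = elasticity(y).mean()         # mean of the individual elasticities
print(e_at_mean, e_mean, e_at_mean - e_mean)
```

Because the elasticity is a nonlinear function of income, the two numbers differ, and the gap widens as income inequality grows and as the elasticity moves away from one, which is the pattern of biases the paper quantifies across demand models.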

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.