
on Econometrics 
By:  Francesco Bartolucci (University of Perugia); Federico Belotti (University of Rome "Tor Vergata"); Franco Peracchi (University of Rome "Tor Vergata" and EIEF) 
Abstract:  Recent literature on panel data has emphasized the importance of accounting for time-varying unobserved heterogeneity, which may stem either from time-varying omitted variables or from macro-level shocks that affect each individual unit differently. In this paper, we propose a computationally convenient test for the null hypothesis of time-invariant individual effects. The proposed test is an application of Hausman's (1978) specification test procedure and can be applied to generalized linear models for panel data, a wide class of models that includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea is to compare fixed effects estimators defined as the maximizers of full and pairwise conditional likelihood functions. Thus, the proposed approach requires no assumptions on the distribution of the individual effects and, most importantly, does not require them to be independent of the covariates in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs quite well, with small size distortions and good power properties. A health economics example based on data from the Health and Retirement Study is used to illustrate the proposed test.
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:eie:wpaper:1312&r=ecm 
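The test above rests on the familiar Hausman logic of contrasting two estimators. As a hedged illustration of that logic only (a generic sketch, not the authors' full/pairwise conditional-likelihood estimators; all numbers below are invented):

```python
import numpy as np

def hausman(b_eff, V_eff, b_rob, V_rob):
    """Hausman (1978) statistic comparing an estimator that is efficient
    under H0 (b_eff) with one that stays consistent under H1 (b_rob).
    Under H0 the statistic is asymptotically chi-squared with k dof."""
    d = b_rob - b_eff
    # Under H0, Cov(d) = V_rob - V_eff by the efficiency of b_eff.
    return float(d @ np.linalg.inv(V_rob - V_eff) @ d)

# Toy two-coefficient example (illustrative numbers only).
b_eff = np.array([1.0, 0.5]); V_eff = np.diag([0.01, 0.02])
b_rob = np.array([1.1, 0.6]); V_rob = np.diag([0.03, 0.05])
H = hausman(b_eff, V_eff, b_rob, V_rob)
print(H, H > 5.991)   # 5.991 is the 5% critical value of chi2(2)
```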
By:  Susanne Schennach (Institute for Fiscal Studies and Brown University) 
Abstract:  This paper establishes that so-called instrumental variables enable the identification and the estimation of a fully nonparametric regression model with Berkson-type measurement error in the regressors. An estimator is proposed and proven to be consistent. Its practical performance and feasibility are investigated via Monte Carlo simulations as well as through an epidemiological application investigating the effect of particulate air pollution on respiratory health. These examples illustrate that Berkson errors clearly cannot be neglected in nonlinear regression models and that the proposed method represents an effective remedy.
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:22/13&r=ecm 
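A minimal Monte Carlo sketch of why Berkson errors matter in nonlinear models (the quadratic regression function and all scales here are assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
w = rng.normal(0.0, 1.0, n)          # observed (assigned) value
u = rng.normal(0.0, 0.5, n)          # Berkson error, independent of w
x = w + u                            # true but unobserved regressor
y = x**2 + rng.normal(0.0, 0.1, n)   # nonlinear model y = g(x) + noise

# Naively regressing y on w recovers E[g(w+u) | w], not g(w): at w = 0
# the naive regression function equals Var(u) = 0.25 instead of g(0) = 0.
naive_g0 = y[np.abs(w) < 0.05].mean()
print(naive_g0)
```

In a linear model the same Berkson error would leave the slope unbiased, which is why the nonlinear case needs a dedicated remedy.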
By:  Nadja Klein; Thomas Kneib; Stefan Lang 
Abstract:  Frequent problems in applied research that prevent the application of the classical Poisson log-linear model for analyzing count data include overdispersion, an excess of zeros relative to the Poisson distribution, correlated responses, and complex predictor structures comprising nonlinear effects of continuous covariates, interactions or spatial effects. We propose a general class of Bayesian generalized additive models for zero-inflated and overdispersed count data within the framework of generalized additive models for location, scale and shape, where semiparametric predictors can be specified for several parameters of a count data distribution. As special instances, we consider the zero-inflated Poisson, the negative binomial and the zero-inflated negative binomial distribution as standard options for applied work. The additive predictor specifications rely on basis function approximations for the different types of effects, in combination with Gaussian smoothness priors. We develop Bayesian inference based on Markov chain Monte Carlo simulation techniques, where suitable proposal densities are constructed from iteratively weighted least squares approximations to the full conditionals. To ensure practicability of the inference, we consider theoretical properties such as the involved question of whether the joint posterior is proper. The proposed approach is evaluated in simulation studies and applied to count data arising from patent citations and claim frequencies in car insurance. For comparing models with respect to the response distribution, we consider quantile residuals as an effective graphical device and scoring rules that make it possible to quantify the models' predictive ability. The deviance information criterion is used for further model specification.
Keywords:  iteratively weighted least squares, Markov chain Monte Carlo, penalized splines, zero-inflated negative binomial, zero-inflated Poisson
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201312&r=ecm 
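A small sketch of the zero-inflation mechanism these models address (parameter values are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, pi0 = 100_000, 2.0, 0.3    # Poisson mean, structural-zero probability

# Zero-inflated Poisson: with probability pi0 emit a structural zero,
# otherwise draw from Poisson(lam).
counts = np.where(rng.random(n) < pi0, 0, rng.poisson(lam, n))

p0_zip = (counts == 0).mean()                 # observed share of zeros
p0_pois = np.exp(-lam)                        # zero share of a plain Poisson
p0_theory = pi0 + (1 - pi0) * np.exp(-lam)    # ZIP zero probability
print(p0_zip, p0_pois, p0_theory)             # ZIP shows clear excess zeros
```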
By:  Wolfgang Karl Härdle; Li-Shan Huang
Abstract:  We develop analysis of deviance tools for generalized partial linear models based on local polynomial fitting. Assuming a canonical link, we propose expressions for both local and global analysis of deviance, which admit an additivity property that reduces to ANOVA decompositions in the Gaussian case. Chi-square tests based on integrated likelihood functions are proposed to formally test whether the nonparametric term is significant. Simulation results are shown to illustrate the proposed chi-square tests. The methodology is applied to German Bundesbank Federal Reserve data.
Keywords:  ANOVA decomposition, Integrated likelihood, Link function, Local polynomial
AMS 2000 subject classifications:  Primary 62G08; secondary 62J12
JEL:  C00 C14 C50 C58 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013028&r=ecm 
By:  Matteo Pelagatti 
Abstract:  We propose three nonparametric tests for the null of no event-induced shifts in the distribution of stock returns. One test is the natural extension of the popular Corrado rank test to the case of cross-sectionally dependent returns, while the other two are based on new ideas. A solid theory for approximating the distribution of the statistic can unfortunately be derived for only one of these tests, but simulation experiments confirm that normality is a good approximation for the other two as well. The new tests are compared to a widely used parametric test (Patell) through simulation experiments and are shown to compare favourably in terms of power. Simulation results are based on bootstrapping daily stock returns from the S&P100 and NASDAQ indexes.
Keywords:  Rank test, Event study, Abnormal returns, Cross-sectional dependence
JEL:  G14 C12 C14 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:mib:wpaper:244&r=ecm 
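For context, a hedged sketch of a Corrado-type rank statistic for a single event day — a textbook version assuming independent returns, i.e. precisely the case the paper extends beyond; the simulated data are invented:

```python
import numpy as np

def corrado_stat(ar):
    """Corrado-type rank statistic; event day = last column of ar.
    ar: (n_firms, n_days) abnormal returns. Approximately N(0,1)
    under the null of no event effect (independent returns)."""
    n, T = ar.shape
    ranks = ar.argsort(axis=1).argsort(axis=1) + 1.0   # within-firm ranks 1..T
    dev = ranks - (T + 1) / 2                          # deviations from mean rank
    mean_dev = dev.mean(axis=0)                        # cross-sectional mean, per day
    s = np.sqrt((mean_dev ** 2).mean())                # scale of the daily means
    return mean_dev[-1] / s

rng = np.random.default_rng(2)
ar = rng.normal(size=(50, 250))      # 50 firms, 250 days, no event effect
print(corrado_stat(ar))              # moderate value under the null
```

Shifting the event-day column upward makes the statistic large and positive, which is how the test detects event-induced shifts.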
By:  Maican, Florin G. (Department of Economics, School of Business, Economics and Law, Göteborg University); Sweeney, Richard J. (Georgetown University, Washington, D.C.) 
Abstract:  If the researcher tests each model in a battery at the α% significance level, the probability that at least one test rejects is generally larger than α%. For five unit-root models, this paper uses Monte Carlo simulation and the inclusion-exclusion principle to show that, with α = 5% for each test, the probability that at least one test rejects is 16.2% rather than the upper bound of 25% from the Bonferroni inequality. It also gives estimated probabilities that any combination of two, three, four or five models all reject.
Keywords:  Real Exchange Rates; Unit root; Monte Carlo; Break models 
JEL:  C15 C22 C32 C33 E31 F31 
Date:  2013–06–03 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunwpe:0568&r=ecm 
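The arithmetic behind this size inflation can be sketched as follows; the equicorrelated-normal design and its correlation of 0.8 are invented stand-ins for the paper's correlated unit-root statistics, used only to show that positive correlation pulls the family-wise rate below the independent-tests value:

```python
import numpy as np

alpha, m = 0.05, 5
bonferroni = m * alpha                 # Bonferroni upper bound: 25%
independent = 1 - (1 - alpha) ** m     # exact rate if the m tests were independent

# Monte Carlo with positively correlated statistics (equicorrelated
# standard normals, rho = 0.8).
rng = np.random.default_rng(3)
n = 200_000
common = rng.normal(size=(n, 1))
z = np.sqrt(0.8) * common + np.sqrt(0.2) * rng.normal(size=(n, m))
crit = 1.6448536269514722              # one-sided 5% standard normal critical value
any_reject = (z > crit).any(axis=1).mean()
print(bonferroni, independent, any_reject)
```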
By:  Raffaella Giacomini (Institute for Fiscal Studies and UCL) 
Abstract:  This chapter reviews the literature on the econometric relationship between DSGE and VAR models from the point of view of estimation and model validation. The mapping between DSGE and VAR models is broken down into three stages: 1) from DSGE to state-space model; 2) from state-space model to VAR(∞); 3) from VAR(∞) to finite order VAR. The focus is on discussing what can go wrong at each step of this mapping and on critically highlighting the hidden assumptions. I also point out some open research questions and interesting new research directions in the literature on the econometrics of DSGE models. These include, in no particular order: understanding the effects of log-linearisation on estimation and identification; dealing with multiplicity of equilibria; estimating nonlinear DSGE models; incorporating into DSGE models information from atheoretical models and from survey data; adopting flexible modelling approaches that combine the theoretical rigor of DSGE models and the econometric model's ability to fit the data.
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:21/13&r=ecm 
By:  Bent Nielsen; Maria Dolores Martinez Miranda; Jens Perch Nielsen 
Abstract:  It is of considerable interest to forecast future mesothelioma mortality. No measures of exposure are available, so it is not straightforward to apply a dose-response model. It is proposed to model the counts of deaths directly using a Poisson regression with an age-period-cohort structure, but without offset. Traditionally, the age-period-cohort model is viewed as suffering from an identification problem. It is shown how to reparameterize the model in terms of freely varying parameters, so as to avoid this problem. It is shown how to conduct inference and how to construct distribution forecasts.
Date:  2013–03–26 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:2013w05&r=ecm 
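The identification problem the paper sidesteps can be exhibited in a few lines (the reparameterization in freely varying parameters is the paper's contribution and is not sketched; coded ages and periods below are chosen only for illustration):

```python
import numpy as np

# Classical age-period-cohort identification problem: with
# cohort = period - age, the three linear effects are collinear.
age = np.array([40.0, 50.0, 60.0, 40.0, 50.0, 60.0])
period = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])   # coded calendar time
cohort = period - age
X = np.column_stack([np.ones(6), age, period, cohort])
print(np.linalg.matrix_rank(X))   # 3, not 4: one linear direction is lost
```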
By:  Stephen Bazen (AMSE - Aix-Marseille School of Economics - Aix-Marseille Univ. - Centre national de la recherche scientifique (CNRS) - École des Hautes Études en Sciences Sociales [EHESS] - Ecole Centrale Marseille (ECM)); Xavier Joutard (AMSE - Aix-Marseille School of Economics - Aix-Marseille Univ. - Centre national de la recherche scientifique (CNRS) - École des Hautes Études en Sciences Sociales [EHESS] - Ecole Centrale Marseille (ECM))
Abstract:  The widely used Oaxaca decomposition applies to linear models. Extending it to commonly used nonlinear models such as binary choice and duration models is not straightforward. This paper shows that the original decomposition using a linear model can be obtained as a first order Taylor expansion. This basis provides a means of obtaining a coherent and unified approach which applies to nonlinear models, which we refer to as a Taylor decomposition. Explicit formulae are provided for the Taylor decomposition for the main nonlinear models used in applied econometrics including the Probit binary choice and Weibull duration models. The detailed decomposition of the explained component is expressed in terms of what are usually referred to as marginal effects and a remainder. Given Jensen's inequality, the latter will always be present in nonlinear models unless an ad hoc or tautological basis for decomposition is used. 
Keywords:  Oaxaca decomposition; nonlinear models 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00828790&r=ecm 
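For reference, the classical two-fold linear Oaxaca decomposition that the paper generalizes (data and coefficients below are simulated purely for illustration):

```python
import numpy as np

def oaxaca_linear(XA, yA, XB, yB):
    """Two-fold Oaxaca decomposition for a linear model: the mean outcome
    gap splits into an 'explained' part (covariate differences priced at
    group-B coefficients) and an 'unexplained' part. Each X must include
    an intercept column for the identity to hold exactly."""
    bA, *_ = np.linalg.lstsq(XA, yA, rcond=None)
    bB, *_ = np.linalg.lstsq(XB, yB, rcond=None)
    gap = yA.mean() - yB.mean()
    explained = (XA.mean(axis=0) - XB.mean(axis=0)) @ bB
    unexplained = XA.mean(axis=0) @ (bA - bB)
    return gap, explained, unexplained

rng = np.random.default_rng(4)
nA, nB = 500, 400
XA = np.column_stack([np.ones(nA), rng.normal(1.0, 1.0, nA)])
XB = np.column_stack([np.ones(nB), rng.normal(0.5, 1.0, nB)])
yA = XA @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, nA)
yB = XB @ np.array([0.5, 1.5]) + rng.normal(0, 0.5, nB)
gap, expl, unexpl = oaxaca_linear(XA, yA, XB, yB)
print(gap, expl, unexpl)   # the two components sum exactly to the gap
```

In the linear case the two components sum to the gap with no remainder; the paper's point is that nonlinear models generically leave a Jensen-type remainder.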
By:  Sylvain Corlay 
Abstract:  This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of the classical B-splines obtained by including basis functions with infinite support. We first revisit the application of shape-constrained B-splines to the estimation of conditional expectations, not merely from a scatter plot but also given the marginal distributions. One application is the Monte Carlo calibration of stochastic local volatility models by Markov projection. We then present a new technique for the calibration of an implied volatility surface to sparse option data. We use a B-spline parameterization of the Radon-Nikodym derivative of the underlying's risk-neutral probability density with respect to a roughly calibrated base model. We show that the method provides smooth arbitrage-free implied volatility surfaces. Finally, we propose a Galerkin method with B-spline finite elements for the solution of the PDE satisfied by the Radon-Nikodym derivative.
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1306.0995&r=ecm 
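As background, classical finite-support B-spline basis functions can be evaluated with the Cox-de Boor recursion; this sketch covers only the textbook case (the paper's infinite-support extension and shape constraints are not reproduced here), with an invented clamped cubic knot vector:

```python
import numpy as np

def bspline_basis(x, t, k):
    """All degree-k B-spline basis functions at points x (Cox-de Boor).
    t: non-decreasing knot vector; x should lie in [t[k], t[-k-1])."""
    x = np.asarray(x, float)[:, None]
    t = np.asarray(t, float)
    # Degree 0: indicators of the half-open knot spans.
    B = ((t[:-1] <= x) & (x < t[1:])).astype(float)
    for d in range(1, k + 1):
        left = np.zeros_like(B[:, :-1])
        right = np.zeros_like(B[:, :-1])
        for i in range(B.shape[1] - 1):
            den1 = t[i + d] - t[i]
            den2 = t[i + d + 1] - t[i + 1]
            if den1 > 0:
                left[:, i] = (x[:, 0] - t[i]) / den1 * B[:, i]
            if den2 > 0:
                right[:, i] = (t[i + d + 1] - x[:, 0]) / den2 * B[:, i + 1]
        B = left + right
    return B

# Clamped cubic knot vector on [0, 3]: 6 basis functions that form a
# nonnegative partition of unity on the interior of the domain.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
xs = np.linspace(0, 3, 60, endpoint=False)
B = bspline_basis(xs, knots, 3)
print(B.shape, float(B.sum(axis=1).min()))
```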
By:  Biørn, Erik (Dept. of Economics, University of Oslo) 
Abstract:  This paper considers identification of coefficients in equations explaining a continuous variable, say the number of sickness absence days of an individual per year, by cohort, time and age, subject to their definitional identity. Extensions of a linear equation to polynomials, including additive polynomials, are explored. The cohort+time=age identity makes the treatment of interactions important. If no interactions between the three variables are included, only the coefficients of the linear terms remain unidentified unless additional information is available. Illustrations using a large data set on individual long-term sickness absence in Norway are given. The sensitivity of the estimated marginal effects of cohort and age at the sample mean, as well as of conclusions about the equations' curvature, is illustrated. We find notable differences in this respect between linear and quadratic equations on the one hand and cubic and fourth-order polynomials on the other.
Keywords:  Age-cohort-time problem; Identification; Polynomial regression; Interaction; Age-cohort curvature; Panel data; Sickness absence
JEL:  C23 C24 C25 C52 H55 I18 J21 
Date:  2013–03–21 
URL:  http://d.repec.org/n?u=RePEc:hhs:osloec:2013_008&r=ecm 
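A numerical sketch of the abstract's identification claim, written with the identity in its standard age = time − cohort form (the coded cohort and time values are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
c = rng.integers(0, 50, n).astype(float)     # cohort (coded birth year)
t = rng.integers(50, 63, n).astype(float)    # time (coded calendar year)
a = t - c                                     # age, from the definitional identity

X_lin = np.column_stack([np.ones(n), c, t, a])
X_quad = np.column_stack([np.ones(n), c, t, a, c**2, t**2, a**2])
# a^2 = c^2 + t^2 - 2ct involves the interaction ct, which is not a
# column here, so without interactions each squared term adds rank:
# only the linear coefficients remain unidentified.
r_lin = np.linalg.matrix_rank(X_lin)
r_quad = np.linalg.matrix_rank(X_quad)
print(r_lin, r_quad)   # 3 of 4 columns, but 6 of 7 columns
```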
By:  Berliant, Marcus; Weiss, Adam 
Abstract:  We examine spatial econometric issues arising from the model specification in Henderson, Storeygard and Weil (2012), that uses night light data to proxy for missing or unreliable GDP growth data. 
Keywords:  GDP, Night light data, Spatial autocorrelation 
JEL:  C21 C23 
Date:  2013–06–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:47340&r=ecm 
By:  Pasquale Cirillo 
Abstract:  Pareto distributions, and power laws in general, have proven to be very useful models for describing very different phenomena, from physics to finance. In recent years, the econophysics literature has produced a large number of papers and models justifying the presence of power laws in economic data. Most of the time, this Paretianity is inferred from the observation of some plots, such as the Zipf plot and the mean excess plot: if the Zipf plot looks almost linear, the data are deemed Paretian and the parameters of the Pareto distribution are estimated, often by OLS. Unfortunately, as we show in this paper, these heuristic graphical tools are not reliable. To be more exact, we show that only a combination of plots can give some degree of confidence about the real presence of Paretianity in the data. We start by reviewing some of the most important plots, discussing their strengths and weaknesses, and then we propose some additional tools that can be used to refine the analysis.
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1306.0100&r=ecm 
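A sketch of the Zipf-plot-plus-OLS practice the paper criticizes, against a maximum-likelihood benchmark (simulated Pareto data; the tail index 1.5 and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, n = 1.5, 5000
u = 1.0 - rng.random(n)              # uniform on (0, 1]
x = u ** (-1.0 / alpha)              # Pareto(alpha) sample via inverse CDF, x_min = 1

# Zipf plot: log empirical survival vs log value; for a Pareto sample it
# is close to linear with slope -alpha, and fitting it by OLS is the
# popular (but fragile) estimation practice the abstract refers to.
xs = np.sort(x)
surv = 1.0 - np.arange(1, n + 1) / (n + 1.0)
slope, intercept = np.polyfit(np.log(xs), np.log(surv), 1)

alpha_mle = n / np.log(x).sum()      # maximum-likelihood (Hill-type) benchmark
print(-slope, alpha_mle)             # both near alpha here, but only because
                                     # the data are exactly Pareto by construction
```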
By:  Tim Robinson (Reserve Bank of Australia) 
Abstract:  Different approaches to modelling the macroeconomy vary in the emphasis they place on coherence with theory relative to their ability to match the data. Dynamic stochastic general equilibrium (DSGE) models place greater emphasis on theory, while vector autoregression (VAR) models tend to provide a better fit of the data. Del Negro and Schorfheide (2004) develop a method of using a DSGE model to inform the priors of a Bayesian VAR. The resulting BVAR-DSGE model partially relaxes the relationships in the DSGE so as to fit the data better. However, their approach does not accommodate the typical restriction of small open economy models which ensures that developments in the small economy cannot affect the large economy. I develop a method that allows this restriction to be imposed and introduce a simple way, suitable for small open economies, of identifying the empirical BVAR-DSGE using information from the DSGE model. These methods are demonstrated using the Justiniano and Preston (2010a) DSGE model. Compared to the DSGE model, the empirical BVAR-DSGE model estimates that there is a larger role for foreign shocks in the small economy's business cycle.
Keywords:  BVAR-DSGE; small open economy
JEL:  C11 C32 C51 E30 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:rba:rbardp:rdp201306&r=ecm 
By:  Mardi Dungey; Jan P.A.M. Jacobs; Jing Tian; Simon van Norden 
Abstract:  A well-documented property of the Beveridge-Nelson trend-cycle decomposition is the perfect negative correlation between trend and cycle innovations. We show how this may be consistent with a structural model where trend shocks enter the cycle, or cyclic shocks enter the trend, and that identification restrictions are necessary to make this structural distinction. A reduced-form unrestricted version such as that of Morley, Nelson and Zivot (2003) is compatible with either option, but cannot distinguish which is relevant. We discuss economic interpretations and implications using US real GDP data.
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:fip:fedpwp:1322&r=ecm 
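The perfect negative correlation the abstract starts from can be verified in the simplest case, an AR(1) in first differences (a special case chosen for transparency; the paper's models are richer, and all parameter values below are invented):

```python
import numpy as np

# AR(1) in differences: dy_t = mu + phi*(dy_{t-1} - mu) + e_t.
# Beveridge-Nelson trend: tau_t = y_t + phi/(1-phi)*(dy_t - mu);
# cycle: y_t - tau_t = -phi/(1-phi)*(dy_t - mu).
rng = np.random.default_rng(7)
phi, mu, n = 0.5, 0.2, 5000
e = rng.normal(0.0, 1.0, n)
dy = np.empty(n)
dy[0] = mu
for s in range(1, n):
    dy[s] = mu + phi * (dy[s - 1] - mu) + e[s]
y = np.cumsum(dy)
tau = y + phi / (1 - phi) * (dy - mu)
cycle = y - tau

# Innovations: trend innovation = e_t/(1-phi); cycle innovation
# = cycle_t - phi*cycle_{t-1} = -phi/(1-phi)*e_t.
tau_innov = np.diff(tau) - mu
cyc_innov = cycle[1:] - phi * cycle[:-1]
corr = np.corrcoef(tau_innov, cyc_innov)[0, 1]
print(corr)   # -1 up to rounding: both innovations are multiples of e_t
```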