
on Econometrics 
By:  Fe, Eduardo 
Abstract:  Estimation of causal effects in regression discontinuity designs relies on a local Wald estimator whose components are estimated via local linear regressions centred at a specific point in the range of a treatment assignment variable. The asymptotic distribution of the estimator depends on the specific choice of kernel used in these nonparametric regressions, with some popular kernels causing a notable loss of efficiency. This article presents the asymptotic distribution of the local Wald estimator when a gamma kernel is used in each local linear regression. The resulting statistic is easy to implement, consistent at the usual nonparametric rate, and maintains its asymptotic normal distribution, but its bias and variance do not depend on kernel-related constants and, as a result, it becomes a more efficient method. The efficiency gains are measured via a limited Monte Carlo experiment, and the new method is used in a substantive application. 
Keywords:  Regression Discontinuity; Asymmetric Kernels; Local Linear Regression 
JEL:  C13 C14 C21 
Date:  2012–02–24 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:38164&r=ecm 
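The gamma-kernel local linear idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it uses Chen's (2000) gamma kernel form, a sharp-design jump rather than the full fuzzy local Wald ratio, and an arbitrary bandwidth. It fits a weighted local linear regression on each side of the cutoff and takes the difference of the boundary intercepts.

```python
import numpy as np
from math import lgamma, log

def gamma_kernel_weights(x0, t, b):
    # Chen (2000) gamma kernel: the Gamma(shape=x0/b + 1, scale=b) density
    # evaluated at t >= 0, computed in log space for numerical stability.
    shape = x0 / b + 1.0
    t = np.maximum(t, 1e-300)  # guard against log(0)
    logw = (shape - 1.0) * np.log(t) - t / b - shape * log(b) - lgamma(shape)
    return np.exp(logw)

def local_linear_at(x0, X, Y, b):
    # Weighted local linear fit; the intercept is the boundary estimate m(x0).
    w = gamma_kernel_weights(x0, X, b)
    Z = np.column_stack([np.ones_like(X), X - x0])
    beta = np.linalg.solve(Z.T @ (Z * w[:, None]), (Z * w[:, None]).T @ Y)
    return beta[0]

def rd_jump(X, Y, c, b):
    # Sharp-RD jump: fit separately on each side of the cutoff c, reflecting
    # the running variable so each sample lives on [0, inf), where the
    # gamma kernel is supported.
    right = X >= c
    m_plus = local_linear_at(0.0, X[right] - c, Y[right], b)
    m_minus = local_linear_at(0.0, c - X[~right], Y[~right], b)
    return m_plus - m_minus

# Simulated sharp design with a jump of 3 at c = 1 (illustrative data).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, 5000)
Y = 2.0 + 0.5 * X + 3.0 * (X >= 1.0) + rng.normal(0.0, 0.1, X.size)
jump = rd_jump(X, Y, c=1.0, b=0.1)
```

With a linear conditional mean on each side, the local linear fit is unbiased at the boundary, so the estimate should sit close to the true jump of 3.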
By:  Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Lars Nesheim (Institute for Fiscal Studies and University College London); Anna Simoni (Institute for Fiscal Studies and Bocconi) 
Abstract:  In structural economic models, individuals are usually characterized as solving a decision problem that is governed by a finite set of parameters. This paper discusses the nonparametric estimation of the probability density function of these parameters if they are allowed to vary continuously across the population. We establish that the problem of recovering the probability density function of random parameters falls into the class of nonlinear inverse problems. This framework helps us to answer the question of whether there exist densities that satisfy this relationship. It also allows us to characterize the identified set of such densities. We obtain novel conditions for point identification, and establish that point identification is generically weak. Given this insight, we provide a consistent nonparametric estimator that accounts for this fact, and derive its asymptotic distribution. Our general framework allows us to deal with unobservable nuisance variables, e.g., measurement error, but also covers the case when there are no such nuisance variables. Finally, Monte Carlo experiments for several structural models are provided which illustrate the performance of our estimation procedure. 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:09/12&r=ecm 
By:  Hyungsik Roger Moon; Matthew Shum; Martin Weidner (Institute for Fiscal Studies and UCL) 
Abstract:  We extend the Berry, Levinsohn and Pakes (BLP, 1995) random coefficients discrete-choice demand model, which underlies much recent empirical work in IO. We add interactive fixed effects in the form of a factor structure on the unobserved product characteristics. The interactive fixed effects can be arbitrarily correlated with the observed product characteristics (including price), which accommodates endogeneity and, at the same time, captures strong persistence in market shares across products and markets. We propose a two-step least squares-minimum distance (LS-MD) procedure to calculate the estimator. Our estimator is easy to compute, and Monte Carlo simulations show that it performs well. We consider an empirical application to US automobile demand. 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:08/12&r=ecm 
By:  José Casals (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).); Sonia Sotoca (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).); Miguel Jerez (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).) 
Abstract:  Computing the Gaussian likelihood for a nonstationary state-space model is a difficult problem which has been tackled by the literature using two main strategies: data transformation and diffuse likelihood. The data transformation approach is cumbersome, as it requires nonstandard filtering. On the other hand, in some nontrivial cases the diffuse likelihood value depends on the scale of the diffuse states, so one can obtain different likelihood values corresponding to different observationally equivalent models. In this paper we discuss the properties of the minimally-conditioned likelihood function, as well as two efficient methods to compute its terms with computational advantages for specific models. Three convenient features of the minimally-conditioned likelihood are: (a) it can be computed with standard Kalman filters, (b) it is scale-free, and (c) its values are coherent with those resulting from differencing, this being the most popular approach to deal with nonstationary data. 
Keywords:  State-space models, Conditional likelihood, Diffuse likelihood, Diffuse initial conditions, Kalman filter, Nonstationarity. 
JEL:  C32 C51 C10 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:ucm:doicae:1204&r=ecm 
By:  David E. Giles (Department of Economics, University of Victoria) 
Abstract:  By noting that the Hodrick-Prescott filter can be expressed as the solution to a particular regression problem, we are able to show how to construct confidence bands for the filtered time series. This procedure requires that the data are stationary. The construction of such confidence bands is illustrated using annual U.S. data for real value-added output, and monthly U.S. data for the unemployment rate. 
Keywords:  Hodrick-Prescott filter; time-series decomposition; confidence bands 
JEL:  C13 C20 E3 
Date:  2012–04–19 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:1202&r=ecm 
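The regression interpretation the abstract exploits is that the Hodrick-Prescott trend is linear in the data: it solves a penalized least-squares problem with solution tau = (I + lambda * D'D)^(-1) y, where D is the second-difference operator. The sketch below computes the filter this way and builds a naive confidence band from the linear map under an i.i.d.-error assumption; the paper's exact band construction for stationary data may differ.

```python
import numpy as np

def hp_filter(y, lam):
    # HP trend as a penalized least-squares ("regression") solution:
    # tau = (I + lam * D'D)^(-1) y, with D the second-difference matrix.
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.linalg.inv(np.eye(n) + lam * (D.T @ D))
    return A @ y, A

def hp_confidence_band(y, lam, z=1.96):
    # Because tau = A y is linear in y, an i.i.d.-error approximation gives
    # Var(tau) = sigma^2 * diag(A A'). This is only a sketch of the
    # regression-based idea, not the paper's exact construction.
    tau, A = hp_filter(y, lam)
    resid = y - tau
    sigma2 = resid @ resid / (len(y) - np.trace(A))  # effective-dof correction
    se = np.sqrt(sigma2 * np.diag(A @ A.T))
    return tau, tau - z * se, tau + z * se
```

A useful sanity check on the implementation is that the HP filter reproduces a linear series exactly, since second differences of a line are zero.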
By:  Hecq Alain; Laurent Sébastien; Palm Franz C. (METEOR) 
Abstract:  First, we investigate the minimal order univariate representation of some well-known n-dimensional conditional volatility models. Even simple low-order systems (e.g. a multivariate GARCH(0,1)) for the joint behavior of several variables imply individual processes with a lot of persistence in the form of high-order lags. However, we show that in the presence of common GARCH factors, parsimonious univariate representations (e.g. GARCH(1,1)) can result from large multivariate models generating the conditional variances and conditional covariances/correlations. The trivial diagonal model without any contagion effects in conditional volatilities gives rise to the same conclusions, though. Consequently, we then propose an approach to detect the presence of these commonalities in multivariate GARCH processes. The factor we extract is the volatility of a portfolio made up of the original assets, whose weights are determined by the reduced rank analysis. We compare the small sample performances of two strategies. First, extending Engle and Marcucci (2006), we use reduced rank regressions in a multivariate system for squared returns and cross-returns. Second, we investigate a likelihood ratio approach, where under the null the matrix parameters of the BEKK have a reduced rank structure (Lin, 1992). It emerged that the latter approach has quite good properties, enabling us to discriminate between a system with seemingly unrelated assets (e.g. a diagonal model) and a model with few common sources of volatility. 
Keywords:  econometrics 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2012018&r=ecm 
By:  Cerquera, Daniel; Laisney, François; Ullrich, Hannes 
Abstract:  Motivated by Manski and Tamer (2002), and especially their partial identification analysis of the regression model where one covariate is only interval-measured, we offer several contributions. Manski and Tamer (2002) propose two estimation approaches in this context, focusing on general results. The modified minimum distance (MMD) approach estimates the true identified set, and the modified method of moments (MMM) a superset. Our first contribution is to characterize the true identified set and the superset. Second, we complete and extend the Monte Carlo study of Manski and Tamer (2002). We present benchmark results using the exact functional form for the expectation of the dependent variable conditional on observables to compare with results using its nonparametric estimates, and illustrate the superiority of MMD over MMM. For MMD, we propose a simple shortcut for estimation. 
Keywords:  partial identification, true identified set, superset, MMD, MMM, estimation 
JEL:  C01 C13 C40 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:zewdip:12024&r=ecm 
By:  Pollmann, Daniel (ROA, Maastricht University); Dohmen, Thomas (ROA, Maastricht University); Palm, Franz C. (Maastricht University) 
Abstract:  We present a semiparametric method to estimate group-level dispersion, which is particularly effective in the presence of censored data. We apply this procedure to obtain measures of occupation-specific wage dispersion using top-coded administrative wage data from the German IAB Employment Sample (IABS). We then relate these robust measures of earnings risk to the risk attitudes of individuals working in these occupations. We find that willingness to take risk is positively correlated with the wage dispersion of an individual's occupation. 
Keywords:  dispersion estimation, earnings risk, censoring, quantile regression, occupational choice, sorting, risk preferences, SOEP, IABS 
JEL:  C14 C21 C24 J24 J31 D01 D81 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp6447&r=ecm 
By:  John Knight (University of Western Ontario); Stephen Satchell (Department of Economics, Mathematics & Statistics, Birkbeck; University of Sydney); Nandini Srivastava (Christ's College, University of Cambridge) 
Abstract:  The purpose of this paper is to examine the properties of bubbles in the light of steady state results for threshold autoregressive (TAR) models recently derived by Knight and Satchell (2011). We assert that this will have implications for econometrics. We study the conditions under which we can obtain a steady state distribution of asset prices using our simple model of bubbles based on our particular definition of a bubble. We derive general results and further extend the analysis by considering the steady state distribution in three cases: (I) a normally distributed error process, (II) a non-normally (exponentially) distributed steady-state process, and (III) a switching random walk with a fairly general i.i.d. error process. We then examine the issues related to unit root testing for the presence of bubbles using standard econometric procedures. We illustrate with an example the market for art, which shows distinctly bubble-like characteristics. Our results shed light on the ubiquitous finding of no bubbles in the econometric literature. 
Keywords:  Bubbles, Asset prices, Steady state, Nonlinear time series, TAR Models 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:bbk:bbkefp:1208&r=ecm 
By:  Mehmet Pinar (Fondazione Eni Enrico Mattei); Thanasis Stengos (University of Guelph.); M. Ege Yazgan (Istanbul Bilgi University) 
Abstract:  The forecast combination puzzle refers to the finding that a simple average forecast combination outperforms more sophisticated weighting schemes and/or the best individual model. The paper derives optimal (worst) forecast combinations based on stochastic dominance (SD) analysis with differential forecast weights. For the optimal (worst) forecast combination, this index will minimize (maximize) forecast errors by combining time-series model-based forecasts at a given probability level. By weighting each forecast differently, we find the optimal (worst) forecast combination that does not rely on arbitrary weights. Using two exchange rate series on weekly data for the Japanese Yen/U.S. Dollar and the U.S. Dollar/British Pound for the period from 1975 to 2010, we find that the simple average forecast combination is neither the worst nor the best forecast combination, something that provides partial support for the forecast combination puzzle. In that context, the random walk model consistently contributes considerably more than an equal weight to the worst forecast combination for all variables being forecasted and for all forecast horizons, whereas a flexible neural network autoregressive model and a self-exciting threshold autoregressive model always enter the best forecast combination with much greater than equal weights. 
Keywords:  Nonparametric stochastic dominance; Mixed integer programming; Forecast combinations 
JEL:  C53 C61 C63 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:gue:guelph:201206.&r=ecm 
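The benchmark in the forecast combination puzzle, the simple equal-weight average, is easy to reproduce. The toy example below uses illustrative names and simulated data, not the paper's SD-based weighting; it shows why averaging forecasts with independent errors beats each individual forecast, which is the intuition the puzzle turns on.

```python
import numpy as np

def rmse(errors):
    # Root mean squared error of a vector of forecast errors.
    return float(np.sqrt(np.mean(np.square(errors))))

def combine(forecasts, weights):
    # forecasts: (n_models, T) array; weights should sum to one.
    w = np.asarray(weights, dtype=float)
    return w @ forecasts

# Two imperfect "model" forecasts of the same target with independent
# errors: the equal-weight average halves the error variance.
rng = np.random.default_rng(1)
T = 500
actual = rng.normal(0.0, 1.0, T)
f1 = actual + rng.normal(0.0, 1.0, T)
f2 = actual + rng.normal(0.0, 1.0, T)
avg = combine(np.vstack([f1, f2]), [0.5, 0.5])
```

With independent unit-variance errors, the average's RMSE is roughly 1/sqrt(2) of each individual forecast's, so the combination dominates both components.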
By:  Makram ElShagi; Tobias Knedlik; Gregor von Schweinitz 
Abstract:  The signals approach as an early warning system has been fairly successful in detecting crises, but it has so far failed to gain popularity in the scientific community because it does not distinguish between randomly achieved in-sample fit and true predictive power. To overcome this obstacle, we test the null hypothesis of no correlation between indicators and crisis probability in three applications of the signals approach to different crisis types. To that end, we propose bootstraps specifically tailored to the characteristics of the respective datasets. We find (1) that previous applications of the signals approach yield economically meaningful and statistically significant results and (2) that composite indicators aggregating information contained in individual indicators add value to the signals approach, even where most individual indicators are not statistically significant on their own. 
Keywords:  early warning system, signals approach, bootstrap 
JEL:  C15 E60 F01 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:iwh:dispap:312&r=ecm 
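The signals approach and the no-correlation null can be sketched as follows: a signal fires when an indicator crosses a threshold, performance is summarized by the noise-to-signal ratio, and resampling the crisis labels mimics the null of no correlation. All names here are illustrative, and the plain permutation scheme below is only a stand-in; the paper tailors its bootstraps to the characteristics of each dataset.

```python
import numpy as np

def signal_performance(indicator, crisis, threshold):
    # Classic signals approach: a signal is issued when the indicator
    # exceeds the threshold; performance is the noise-to-signal ratio,
    # i.e. (false-alarm rate in calm periods) / (hit rate in crises).
    signal = indicator > threshold
    hit_rate = signal[crisis].mean()
    false_rate = signal[~crisis].mean()
    return false_rate / max(hit_rate, 1e-12)

def bootstrap_pvalue(indicator, crisis, threshold, n_boot=999, seed=0):
    # Under the null of no correlation between indicator and crisis
    # probability, permuting the crisis labels leaves the joint law
    # unchanged. Small ratios are good, so the p-value is the share of
    # permuted draws at least as small as the observed ratio.
    rng = np.random.default_rng(seed)
    observed = signal_performance(indicator, crisis, threshold)
    draws = [signal_performance(indicator, rng.permutation(crisis), threshold)
             for _ in range(n_boot)]
    return (1 + sum(d <= observed for d in draws)) / (n_boot + 1)
```

For an indicator that is genuinely shifted upward in crisis periods, the observed noise-to-signal ratio falls far below the permutation distribution and the p-value is small.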
By:  Fève, Patrick; Jidoud, Ahmat 
Abstract:  This paper assesses SVARs as relevant tools for identifying the aggregate effects of news shocks. When the econometrician's and private agents' information sets are not aligned, the dynamic responses identified from SVARs are biased. However, the bias vanishes when news shocks account for the bulk of fluctuations in the economy. A simple correlation diagnostic test shows that, under this condition, news shocks identified through long-run and short-run restrictions have a correlation close to unity. 
Keywords:  Information Flows, News shocks, Non-fundamentalness, SVARs, Identification 
JEL:  C32 C52 E32 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:ide:wpaper:25752&r=ecm 
By:  Eder Lucio Fonseca; Fernando F. Ferreira; Paulsamy Muruganandam; Hilda A. Cerdeira 
Abstract:  In this work we develop a new measure to study the behavior of stochastic time series, which makes it possible to distinguish events that differ from the ordinary, such as financial crises. We identify from the data well-known market crashes such as Black Thursday (1929), Black Monday (1987) and the subprime crisis (2008), with clear and robust results. We also show that the analysis has forecasting capabilities. We apply the method to the market fluctuations of 2011; from these results, the apparent crisis of 2011 seems to be of a different nature from the other three. 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1204.3136&r=ecm 