nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒04‒23
thirteen papers chosen by
Sune Karlsson
Orebro University

  1. Efficient estimation in regression discontinuity designs via asymmetric kernels By Fe, Eduardo
  2. Semiparametric estimation of random coefficients in structural economic models By Stefan Hoderlein; Lars Nesheim; Anna Simoni
  3. Estimation of random coefficients logit demand models with interactive fixed effects By Hyungsik Roger Moon; Matthew Shum; Martin Weidner
  4. Minimally Conditioned Likelihood for a Nonstationary State Space Model By José Casals; Sonia Sotoca; Miguel Jerez
  5. Constructing Confidence Bands for the Hodrick-Prescott Filter By David E. Giles
  6. On the Univariate Representation of BEKK Models with Common Factors By Hecq Alain; Laurent Sébastien; Palm Franz C.
  7. Considerations on partially identified regression models By Cerquera, Daniel; Laisney, François; Ullrich, Hannes
  8. Robust Estimation of Wage Dispersion with Censored Data: An Application to Occupational Earnings Risk and Risk Attitudes By Pollmann, Daniel; Dohmen, Thomas; Palm, Franz C.
  9. Steady-State Distributions for Models of Bubbles: their Existence and Econometric Implications By John Knight; Stephen Satchell; Nandini Srivastava
  10. Is there an optimal forecast combination? A stochastic dominance approach applied to the forecast combination puzzle. By Mehmet Pinar; Thanasis Stengos; M. Ege Yazgan
  11. Predicting Financial Crises: The (Statistical) Significance of the Signals Approach By Makram El-Shagi; Tobias Knedlik; Gregor von Schweinitz
  12. News Shocks, Information Flows and SVARs By Fève, Patrick; Jidoud, Ahmat
  13. Identifying financial crises in real time By Eder Lucio Fonseca; Fernando F. Ferreira; Paulsamy Muruganandam; Hilda A. Cerdeira

  1. By: Fe, Eduardo
    Abstract: Estimation of causal effects in regression discontinuity designs relies on a local Wald estimator whose components are estimated via local linear regressions centred at a specific point in the range of a treatment assignment variable. The asymptotic distribution of the estimator depends on the specific choice of kernel used in these nonparametric regressions, with some popular kernels causing a notable loss of efficiency. This article presents the asymptotic distribution of the local Wald estimator when a gamma kernel is used in each local linear regression. The resulting statistic is easy to implement, consistent at the usual nonparametric rate and asymptotically normal, but its bias and variance do not depend on kernel-related constants and, as a result, it becomes a more efficient method. The efficiency gains are measured via a limited Monte Carlo experiment, and the new method is used in a substantive application.
    Keywords: Regression Discontinuity; Asymmetric Kernels; Local Linear Regression
    JEL: C13 C14 C21
    Date: 2012–02–24
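    [Editor's sketch] The estimator in the abstract above — local linear regressions weighted by a gamma kernel on each side of the cutoff, combined into a jump estimate — can be illustrated as follows. This is a minimal toy illustration only, not the author's implementation: the simulated data, bandwidth `b`, and all function names are invented for the example, and the gamma kernel follows the usual boundary-kernel form (shape x/b + 1, scale b).

```python
import numpy as np
from math import lgamma

def gamma_kernel(t, x, b):
    """Asymmetric gamma kernel on [0, inf): weight of design point t
    when estimating at x. At x = 0 it reduces to an exponential weight,
    which is what makes it boundary-friendly."""
    shape = x / b + 1.0
    t = np.asarray(t, dtype=float)
    logpdf = (shape - 1.0) * np.log(t) - t / b - shape * np.log(b) - lgamma(shape)
    return np.exp(logpdf)

def local_linear(x0, t, y, b):
    """Weighted local linear regression of y on t at x0; the intercept
    is the fitted conditional mean at x0."""
    w = np.sqrt(gamma_kernel(t, x0, b))
    X = np.column_stack([np.ones_like(t), t - x0])
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta[0]

# Sharp RD toy example: treatment switches on at the cutoff c, and the
# estimand is the jump in the conditional mean there.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0, 2000)
c, tau = 1.0, 0.5                                  # cutoff, true jump
y = np.sin(t) + tau * (t >= c) + rng.normal(0.0, 0.1, t.size)

left = t < c
# Re-centre each side so the cutoff sits at the kernel's boundary (x0 = 0).
m_left = local_linear(0.0, c - t[left], y[left], b=0.1)
m_right = local_linear(0.0, t[~left] - c, y[~left], b=0.1)
est = m_right - m_left
print(f"RD jump estimate: {est:.3f}")
```

Because both sides use the same kernel and re-centred design, smooth curvature in the underlying regression function largely cancels in the difference, which is the intuition behind the boundary-bias advantage discussed in the abstract.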
  2. By: Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Lars Nesheim (Institute for Fiscal Studies and University College London); Anna Simoni (Institute for Fiscal Studies and Bocconi)
    Abstract: <p>In structural economic models, individuals are usually characterized as solving a decision problem that is governed by a finite set of parameters. This paper discusses the nonparametric estimation of the probability density function of these parameters if they are allowed to vary continuously across the population. We establish that the problem of recovering the probability density function of random parameters falls into the class of non-linear inverse problems. This framework helps us to answer the question of whether there exist densities that satisfy this relationship. It also allows us to characterize the identified set of such densities. We obtain novel conditions for point identification, and establish that point identification is generically weak. Given this insight, we provide a consistent nonparametric estimator that accounts for this fact, and derive its asymptotic distribution. Our general framework allows us to deal with unobservable nuisance variables, e.g., measurement error, but also covers the case when there are no such nuisance variables. Finally, Monte Carlo experiments for several structural models are provided which illustrate the performance of our estimation procedure.</p>
    Date: 2012–04
  3. By: Hyungsik Roger Moon; Matthew Shum; Martin Weidner (Institute for Fiscal Studies and UCL)
    Abstract: <p>We extend the Berry, Levinsohn and Pakes (BLP, 1995) random coefficients discrete-choice demand model, which underlies much recent empirical work in IO. We add interactive fixed effects in the form of a factor structure on the unobserved product characteristics. The interactive fixed effects can be arbitrarily correlated with the observed product characteristics (including price), which accommodates endogeneity and, at the same time, captures strong persistence in market shares across products and markets. We propose a two step least squares-minimum distance (LS-MD) procedure to calculate the estimator. Our estimator is easy to compute, and Monte Carlo simulations show that it performs well. We consider an empirical application to US automobile demand.</p>
    Date: 2012–03
  4. By: José Casals (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).); Sonia Sotoca (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).); Miguel Jerez (Departamento de Fundamentos del Análisis Económico II. Facultad de Ciencias Económicas. Campus de Somosaguas. 28223 Madrid (SPAIN).)
    Abstract: Computing the Gaussian likelihood for a nonstationary state-space model is a difficult problem which has been tackled by the literature using two main strategies: data transformation and diffuse likelihood. The data transformation approach is cumbersome, as it requires nonstandard filtering. On the other hand, in some nontrivial cases the diffuse likelihood value depends on the scale of the diffuse states, so one can obtain different likelihood values corresponding to different observationally equivalent models. In this paper we discuss the properties of the minimally-conditioned likelihood function, as well as two efficient methods to compute its terms with computational advantages for specific models. Three convenient features of the minimally-conditioned likelihood are: (a) it can be computed with standard Kalman filters, (b) it is scale-free, and (c) its values are coherent with those resulting from differencing, this being the most popular approach to dealing with nonstationary data.
    Keywords: State-space models, Conditional likelihood, Diffuse likelihood, Diffuse initial conditions, Kalman filter, Nonstationarity.
    JEL: C32 C51 C10
    Date: 2012
  5. By: David E. Giles (Department of Economics, University of Victoria)
    Abstract: By noting that the Hodrick-Prescott filter can be expressed as the solution to a particular regression problem, we are able to show how to construct confidence bands for the filtered time series. This procedure requires that the data are stationary. The construction of such confidence bands is illustrated using annual U.S. data for real value-added output, and monthly U.S. data for the unemployment rate.
    Keywords: Hodrick-Prescott filter; time-series decomposition; confidence bands
    JEL: C13 C20 E3
    Date: 2012–04–19
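    [Editor's sketch] The regression representation mentioned in the abstract above is the standard one: the HP trend minimises a penalised least-squares criterion, so it is a linear smoother, and linearity is what makes confidence bands tractable. The sketch below is a generic illustration of that idea, not the paper's procedure; the i.i.d.-noise variance formula and the simulated series are simplifying assumptions of ours.

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """HP trend as penalised least squares: minimise
    ||y - tau||^2 + lam * ||K tau||^2, K = second-difference operator.
    Solution is linear in y: tau = S y, S = (I + lam K'K)^{-1}."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i:i + 3] = (1.0, -2.0, 1.0)
    S = np.linalg.inv(np.eye(n) + lam * (K.T @ K))
    return S @ y, S

# Because tau_hat = S y is linear, pointwise bands follow from
# Var(tau_hat) = sigma^2 S S' under i.i.d. noise -- a deliberate
# simplification of the stationarity setting the paper works with.
rng = np.random.default_rng(2)
y = 0.05 * np.arange(120) + rng.normal(0.0, 0.5, 120)
tau, S = hp_trend(y)
resid = y - tau
sigma2 = resid @ resid / (len(y) - np.trace(S))   # rough noise variance
se = np.sqrt(sigma2 * np.diag(S @ S.T))
lower, upper = tau - 1.96 * se, tau + 1.96 * se
```

A quick sanity check on the representation: the second-difference penalty vanishes on any exactly linear series, so the filter reproduces a linear trend unchanged.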
  6. By: Hecq Alain; Laurent Sébastien; Palm Franz C. (METEOR)
    Abstract: First, we investigate the minimal order univariate representation of some well known n-dimensional conditional volatility models. Even simple low order systems (e.g. a multivariate GARCH(0,1)) for the joint behavior of several variables imply individual processes with a lot of persistence in the form of high order lags. However, we show that in the presence of common GARCH factors, parsimonious univariate representations (e.g. GARCH(1,1)) can result from large multivariate models generating the conditional variances and conditional covariances/correlations. The trivial diagonal model without any contagion effects in conditional volatilities gives rise to the same conclusions, though. Consequently, we then propose an approach to detect the presence of these commonalities in a multivariate GARCH process. The factor we extract is the volatility of a portfolio made up of the original assets, whose weights are determined by the reduced rank analysis. We compare the small sample performances of two strategies. First, extending Engle and Marcucci (2006), we use reduced rank regressions in a multivariate system for squared returns and cross-returns. Second, we investigate a likelihood ratio approach, where under the null the matrix parameters of the BEKK have a reduced rank structure (Lin, 1992). It emerged that the latter approach has quite good properties, enabling us to discriminate between a system with seemingly unrelated assets (e.g. a diagonal model) and a model with few common sources of volatility.
    Keywords: econometrics
    Date: 2012
  7. By: Cerquera, Daniel; Laisney, François; Ullrich, Hannes
    Abstract: Motivated by Manski and Tamer (2002) and especially their partial identification analysis of the regression model where one covariate is only interval-measured, we offer several contributions. Manski and Tamer (2002) propose two estimation approaches in this context, focussing on general results. The modified minimum distance (MMD) estimates the true identified set and the modified method of moments (MMM) a superset. Our first contribution is to characterize the true identified set and the superset. Second, we complete and extend the Monte Carlo study of Manski and Tamer (2002). We present benchmark results using the exact functional form for the expectation of the dependent variable conditional on observables to compare with results using its nonparametric estimates, and illustrate the superiority of MMD over MMM. For MMD, we propose a simple shortcut for estimation.
    Keywords: partial identification, true identified set, superset, MMD, MMM, estimation
    JEL: C01 C13 C40
    Date: 2012
  8. By: Pollmann, Daniel (ROA, Maastricht University); Dohmen, Thomas (ROA, Maastricht University); Palm, Franz C. (Maastricht University)
    Abstract: We present a semiparametric method to estimate group-level dispersion, which is particularly effective in the presence of censored data. We apply this procedure to obtain measures of occupation-specific wage dispersion using top-coded administrative wage data from the German IAB Employment Sample (IABS). We then relate these robust measures of earnings risk to the risk attitudes of individuals working in these occupations. We find that willingness to take risk is positively correlated with the wage dispersion of an individual's occupation.
    Keywords: dispersion estimation, earnings risk, censoring, quantile regression, occupational choice, sorting, risk preferences, SOEP, IABS
    JEL: C14 C21 C24 J24 J31 D01 D81
    Date: 2012–03
  9. By: John Knight (University of Western Ontario); Stephen Satchell (Department of Economics, Mathematics & Statistics, Birkbeck; University of Sydney); Nandini Srivastava (Christ's College, University of Cambridge)
    Abstract: The purpose of this paper is to examine the properties of bubbles in the light of steady state results for threshold autoregressive (TAR) models recently derived by Knight and Satchell (2011). We assert that this has implications for econometrics. We study the conditions under which we can obtain a steady state distribution of asset prices using our simple model of bubbles, based on our particular definition of a bubble. We derive general results and further extend the analysis by considering the steady state distribution in three cases: (I) a normally distributed error process, (II) a non-normally (exponentially) distributed steady-state process and (III) a switching random walk with a fairly general i.i.d. error process. We then examine the issues related to unit root testing for the presence of bubbles using standard econometric procedures. We illustrate with an example, the market for art, which shows distinctly bubble-like characteristics. Our results shed light on the ubiquitous finding of no bubbles in the econometric literature.
    Keywords: Bubbles, Asset prices, Steady state, Non-linear time series, TAR Models
    Date: 2012–04
  10. By: Mehmet Pinar (Fondazione Eni Enrico Mattei); Thanasis Stengos (University of Guelph.); M. Ege Yazgan (Istanbul Bilgi University)
    Abstract: The forecast combination puzzle refers to the finding that a simple average forecast combination outperforms more sophisticated weighting schemes and/or the best individual model. The paper derives optimal (worst) forecast combinations based on stochastic dominance (SD) analysis with differential forecast weights. For the optimal (worst) forecast combination, this index will minimize (maximize) forecast errors by combining time-series model based forecasts at a given probability level. By weighting each forecast differently, we find the optimal (worst) forecast combination that does not rely on arbitrary weights. Using two exchange rate series on weekly data for the Japanese Yen/U.S. Dollar and U.S. Dollar/Great Britain Pound for the period from 1975 to 2010, we find that the simple average forecast combination is neither the worst nor the best forecast combination, something that provides partial support for the forecast combination puzzle. In that context, the random walk model is the model that consistently contributes with considerably more than an equal weight to the worst forecast combination for all variables being forecasted and for all forecast horizons, whereas a flexible Neural Network autoregressive model and a self-exciting threshold autoregressive model always enter the best forecast combination with much greater than equal weights.
    Keywords: Nonparametric Stochastic Dominance; Mixed Integer Programming; Forecast combinations
    JEL: C53 C61 C63
    Date: 2011
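    [Editor's sketch] The "simple average" benchmark at the heart of the puzzle discussed above is just an equal-weight combination of point forecasts. The toy code below illustrates it and one reason it is hard to beat: by convexity, the combined squared error can never exceed the (equally weighted) average of the individual squared errors. The forecasters and data here are invented; this is not the paper's SD-based weighting procedure.

```python
import numpy as np

def combine(forecasts, weights=None):
    """Weighted combination of an (n_models, T) array of point forecasts;
    equal weights -- the 'simple average' of the puzzle -- by default."""
    f = np.asarray(forecasts, dtype=float)
    if weights is None:
        weights = np.full(f.shape[0], 1.0 / f.shape[0])
    return np.asarray(weights) @ f

def mse(forecast, actual):
    return float(np.mean((forecast - actual) ** 2))

# Toy target plus three hypothetical forecasters with different flaws.
rng = np.random.default_rng(3)
actual = np.cumsum(rng.normal(0.0, 1.0, 200))
forecasts = np.stack([
    actual + rng.normal(0.3, 0.5, 200),    # slightly biased upward
    actual + rng.normal(-0.3, 0.5, 200),   # opposite bias
    actual + rng.normal(0.0, 1.5, 200),    # unbiased but noisy
])
avg_mse = mse(combine(forecasts), actual)
indiv = [mse(f, actual) for f in forecasts]
```

By Jensen's inequality the equal-weight combination's MSE is bounded above by the mean of the individual MSEs, though it need not beat the single best model — which is exactly the margin the paper's optimal/worst combinations explore.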
  11. By: Makram El-Shagi; Tobias Knedlik; Gregor von Schweinitz
    Abstract: The signals approach as an early warning system has been fairly successful in detecting crises, but it has so far failed to gain popularity in the scientific community because it does not distinguish between randomly achieved in-sample fit and true predictive power. To overcome this obstacle, we test the null hypothesis of no correlation between indicators and crisis probability in three applications of the signals approach to different crisis types. To that end, we propose bootstraps specifically tailored to the characteristics of the respective datasets. We find (1) that previous applications of the signals approach yield economically meaningful and statistically significant results and (2) that composite indicators aggregating information contained in individual indicators add value to the signals approach, even where most individual indicators are not statistically significant on their own.
    Keywords: early warning system, signals approach, bootstrap
    JEL: C15 E60 F01
    Date: 2012–04
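    [Editor's sketch] The null hypothesis tested above — no correlation between an indicator and crisis probability — can be illustrated with a bare-bones resampling test: flag a signal when the indicator exceeds a threshold, score hits against false alarms, and compare with the score under shuffled crisis labels. All names, the threshold and the data are invented, and a plain permutation ignores the serial dependence that the paper's tailored bootstraps are designed to respect.

```python
import numpy as np

def hit_minus_false_alarm(indicator, crisis, threshold):
    """Signals-approach score: share of crisis periods flagged minus
    share of calm periods flagged (a false alarm)."""
    signal = indicator > threshold
    return signal[crisis].mean() - signal[~crisis].mean()

def permutation_pvalue(indicator, crisis, threshold, n_draws=999, seed=0):
    """p-value under the null of no indicator-crisis association,
    obtained by shuffling the crisis labels and recomputing the score."""
    rng = np.random.default_rng(seed)
    observed = hit_minus_false_alarm(indicator, crisis, threshold)
    draws = np.array([
        hit_minus_false_alarm(indicator, rng.permutation(crisis), threshold)
        for _ in range(n_draws)
    ])
    return float((np.sum(draws >= observed) + 1) / (n_draws + 1))

# Toy data: an indicator that genuinely shifts upward in crisis periods.
rng = np.random.default_rng(4)
crisis = rng.random(300) < 0.1                    # ~10% crisis periods
indicator = crisis * 2.0 + rng.normal(0.0, 1.0, 300)
p = permutation_pvalue(indicator, crisis, threshold=1.0)
```

With an informative indicator the observed score sits far in the tail of the shuffled distribution, so the p-value is small — the in-sample-fit-versus-predictive-power distinction the abstract emphasises.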
  12. By: Fève, Patrick; Jidoud, Ahmat
    Abstract: This paper assesses SVARs as relevant tools for identifying the aggregate effects of news shocks. When the econometrician's and private agents' information sets are not aligned, the dynamic responses identified from SVARs are biased. However, the bias vanishes when news shocks account for the bulk of fluctuations in the economy. A simple correlation diagnostic test shows that under this condition, news shocks identified through long-run and short-run restrictions have a correlation close to unity.
    Keywords: Information Flows, News shocks, Non-fundamentalness, SVARs, Identification
    JEL: C32 C52 E32
    Date: 2012–03
  13. By: Eder Lucio Fonseca; Fernando F. Ferreira; Paulsamy Muruganandam; Hilda A. Cerdeira
    Abstract: In this work we develop a new measure to study the behavior of stochastic time series, which permits us to distinguish events that differ from the ordinary, such as financial crises. We identify from the data well known market crashes such as Black Thursday (1929), Black Monday (1987) and the Subprime crisis (2008), with clear and robust results. We also show that the analysis has forecasting capabilities. We apply the method to the market fluctuations of 2011. From these results it appears that the apparent crisis of 2011 is of a different nature from the other three.
    Date: 2012–04

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.