nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒08‒02
twelve papers chosen by
Sune Karlsson
Orebro University

  1. Bayesian Model Averaging and Weighted Average Least Squares: Equivariance, Stability, and Numerical Issues By De Luca, G.; Magnus, J.R.
  2. A Fixed-b Perspective on the Phillips-Perron Unit Root Tests By Vogelsang, Timothy J.; Wagner, Martin
  3. Mixtures of g-priors for Bayesian Model Averaging with economic application By Ley, Eduardo; Steel, Mark F.J.
  4. Tests of Structural Changes in Conditional Distributions with Unknown Changepoints. By Dominique Guegan; Philippe de Peretti
  5. Using the Helmert-Transformation to Reduce Dimensionality in a Mixed Model: Application to a Wage Equation with Worker and Firm Heterogeneity By Nilsen, Øivind Anti; Raknerud, Arvid; Skjerpen, Terje
  6. Endogenous treatment effects for count data models with endogenous participation or sample selection By Massimiliano Bratti; Alfonso Miranda
  7. The Extreme Value Theory as a Tool to Measure Market Risk By Krenar Avdulaj
  8. Quantization of long memory processes By Gabriele La Spada; Fabrizio Lillo
  9. Classification of Volatility in Presence of Changes in Model Parameters By Edoardo Otranto
  10. Moment conditions model averaging with an application to a forward-looking monetary policy reaction function By Luis F. Martins
  11. Bayesian Networks and Sex-related Homicides By Stephan Stahlschmidt; Helmut Tausendteufel; Wolfgang K. Härdle
  12. An Empirical Test of Pricing Kernel Monotonicity By Beare, Brendan K.; Schmidt, Lawrence

  1. By: De Luca, G.; Magnus, J.R. (Tilburg University, Center for Economic Research)
    Abstract: This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squares (WALS) estimator developed by Magnus et al. (2010). Unlike standard pretest estimators which are based on some preliminary diagnostic test, these model averaging estimators provide a coherent way of making inference on the regression parameters of interest by taking into account the uncertainty due to both the estimation and the model selection steps. Special emphasis is given to a number of practical issues that users are likely to face in applied work: equivariance to certain transformations of the explanatory variables, stability, accuracy, computing speed and out-of-memory problems. The performance of our bma and wals commands is illustrated using simulated data and empirical applications from the literature on model averaging estimation.
    Keywords: Model uncertainty; Model averaging; Bayesian analysis; Exact computation.
    JEL: C11 C51 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011082&r=ecm
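    Sketch: As a reminder of the generic model-averaging idea behind the bma and wals commands (my notation, not the authors'): with candidate models M_1, ..., M_K, a model-averaged estimate of a parameter of interest is
        \hat{\beta}_{\mathrm{MA}} = \sum_{k=1}^{K} w_k\, \hat{\beta}_{(k)}, \qquad
        w_k = p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)},
    where \hat{\beta}_{(k)} is the estimate under model M_k. Exact BMA uses posterior model probabilities as weights and therefore evaluates all 2^q subsets of the q auxiliary regressors, whereas WALS works in a semiorthogonally transformed auxiliary-regressor space, so its computational cost grows with q rather than with 2^q.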
  2. By: Vogelsang, Timothy J. (Department of Economics, Michigan State University, East Lansing, USA); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria, and Frisch Centre for Economic Research, Oslo, Norway)
    Abstract: We extend fixed-b asymptotic theory to the nonparametric Phillips-Perron (PP) unit root tests. We show that the fixed-b limits depend on nuisance parameters in a complicated way. These non-pivotal limits provide an alternative theoretical explanation for the well-known finite sample problems of PP tests. We also show that the fixed-b limits depend on whether deterministic trends are removed using one-step or two-step approaches, in contrast to the asymptotic equivalence of the one- and two-step approaches under the consistency approximation for the long run variance estimator. Based on these results we introduce modified PP tests that allow for fixed-b inference. The theoretical analysis is cast in the framework of near-integrated processes, which allows us to study the asymptotic behavior both under the unit root null hypothesis and under local alternatives. The performance of the original and modified tests is compared by means of local asymptotic power and a small simulation study.
    Keywords: Nonparametric kernel estimator, long run variance, detrending, one-step, two-step
    JEL: C12 C13 C32
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:272&r=ecm
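    Sketch: For readers unfamiliar with fixed-b asymptotics, a brief reminder in standard notation (not tied to the paper's exact setup). The nonparametric long run variance estimator underlying the PP tests is
        \hat{\Omega} = \sum_{j=-(T-1)}^{T-1} k\!\left(\frac{j}{M}\right) \hat{\gamma}(j),
    where \hat{\gamma}(j) are sample autocovariances, k(\cdot) is a kernel and M a bandwidth. Traditional asymptotics let M/T \to 0 so that \hat{\Omega} is consistent; fixed-b asymptotics instead hold b = M/T \in (0,1] fixed as T \to \infty, so the limiting distribution of the statistic depends on the kernel and on b, which typically captures the finite-sample behaviour of the test more accurately.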
  3. By: Ley, Eduardo; Steel, Mark F.J.
    Abstract: This paper examines the issue of variable selection in linear regression modeling, where there is a potentially large number of possible covariates and economic theory offers insufficient guidance on how to select the appropriate subset. In this context, Bayesian Model Averaging presents a formal Bayesian solution to dealing with model uncertainty. The main interest here is the effect of the prior on the results, such as posterior inclusion probabilities of regressors and predictive performance. The authors combine a Binomial-Beta prior on model size with a g-prior on the coefficients of each model. In addition, they assign a hyperprior to g, as the choice of g has been found to have a large impact on the results. For the prior on g, they examine the Zellner-Siow prior and a class of Beta shrinkage priors, which covers most choices in the recent literature. The authors propose a benchmark Beta prior, inspired by earlier findings with fixed g, and show it leads to consistent model selection. Inference is conducted through a Markov chain Monte Carlo sampler over model space and g. The authors examine the performance of the various priors in the context of simulated and real data. For the latter, they consider two important applications in economics, namely cross-country growth regression and returns to schooling. Recommendations for applied users are provided.
    Keywords: Educational Technology and Distance Education, Arts & Music, Geographical Information Systems, Information Security & Privacy, Statistical & Mathematical Sciences
    Date: 2011–07–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:5732&r=ecm
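    Sketch: The prior structure discussed above, in common notation (the paper's specific hyperparameter choices are not reproduced here). Within model M_\gamma with design matrix X_\gamma, the g-prior is
        \beta_\gamma \mid g, \sigma^2, M_\gamma \sim N\!\left(0,\; g\, \sigma^2 (X_\gamma' X_\gamma)^{-1}\right),
    and instead of fixing g one places a hyperprior on it: the Zellner-Siow prior amounts to an inverse-gamma prior on g (equivalently a Cauchy prior on \beta_\gamma), while the Beta shrinkage ("hyper-g" type) priors place a Beta distribution on the shrinkage factor g/(1+g). Posterior inclusion probabilities and predictions are then obtained by averaging over both model space and g, which is what the MCMC sampler does.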
  4. By: Dominique Guegan (Centre d'Economie de la Sorbonne); Philippe de Peretti (Centre d'Economie de la Sorbonne)
    Abstract: This paper focuses on a procedure to test for structural changes in the first two moments of a time series, when no information about the process driving the breaks is available. To approximate the process, an orthogonal Bernstein polynomial is used, and the null is tested either by means of an AICu information criterion or by a restriction test. The procedure covers both pure discrete structural change models and continuous change models. Running Monte Carlo simulations, we show that the test has power against various alternatives.
    Keywords: Structural changes, Bernstein polynomial, AICu.
    JEL: C01 C12 C15
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:11042&r=ecm
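    Sketch: To fix ideas on the Bernstein-polynomial approximation mentioned above (a stylized version, not the authors' exact parameterization): a possibly changing moment \theta(\cdot) on [0,1] is approximated by
        \theta(t/T) \approx \sum_{k=0}^{m} c_k\, B_{k,m}(t/T), \qquad B_{k,m}(u) = \binom{m}{k} u^k (1-u)^{m-k},
    so that constancy corresponds to the restriction c_0 = c_1 = \dots = c_m (the basis functions sum to one). The null of no structural change can then be assessed either through a restriction test on the c_k or by comparing fits of different order m with an information criterion such as AICu.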
  5. By: Nilsen, Øivind Anti (Norwegian School of Economics (NHH)); Raknerud, Arvid (Statistics Norway); Skjerpen, Terje (Statistics Norway)
    Abstract: A model for matched data with two types of unobserved heterogeneity is considered – one related to the observation unit, the other to units to which the observation units are matched. One or both of the unobserved components are assumed to be random. This mixed model allows identification of the effect of time-invariant variables on the observation units. Applying the Helmert transformation to reduce dimensionality simplifies the computational problem substantially. The framework has many potential applications; we apply it to wage modeling. Using Norwegian manufacturing data, we show that the assumption made about the two types of heterogeneity affects the estimate of the return to education considerably.
    Keywords: matched employer-employee data, high-dimensional two-way unobserved components, ECM-algorithm
    JEL: C23 C81 J31
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp5847&r=ecm
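    Sketch: For readers unfamiliar with the Helmert transformation, a minimal Python illustration of its forward-orthogonal-deviations form (this illustrates the transform itself, not the authors' full mixed-model estimator):

        import numpy as np

        def helmert_transform(y):
            """Forward orthogonal deviations (Helmert transform) of a 1-D series y.

            Observation t is replaced by the scaled difference between y[t] and
            the mean of all subsequent observations; an additive individual
            effect drops out, while i.i.d. errors stay i.i.d.
            """
            y = np.asarray(y, dtype=float)
            T = y.shape[0]
            out = np.empty(T - 1)
            for t in range(T - 1):
                remaining = y[t + 1:]                   # observations after t
                scale = np.sqrt((T - t - 1) / (T - t))  # keeps the error variance unchanged
                out[t] = scale * (y[t] - remaining.mean())
            return out

        # Example: the additive effect (+5) is wiped out by the transform.
        np.random.seed(0)
        e = np.random.normal(size=8)
        print(np.allclose(helmert_transform(e + 5.0), helmert_transform(e)))  # True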
  6. By: Massimiliano Bratti (University of Milan); Alfonso Miranda (Institute of Education, University of London)
    Abstract: We propose a maximum simulated likelihood estimator for models in which an endogenous dichotomous treatment affects a count outcome in the presence of either sample selection or endogenous participation. We allow the treatment to affect both the participation (or sample selection) rule and the main outcome. Applications of this model are frequent in, but not limited to, health economics. We illustrate the model using data from Kenkel and Terza (2001, Journal of Applied Econometrics 16: 165–184), who investigated the effect of physician advice on the amount of alcohol consumed. Our estimates suggest that in these data (a) neglecting treatment endogeneity leads to a wrongly signed effect of physician advice on drinking intensity, (b) accounting for treatment endogeneity but neglecting endogenous participation leads to an upward-biased estimate of the treatment effect, and (c) advice affects only the drinking-intensity margin and not drinking prevalence.
    Date: 2011–07–23
    URL: http://d.repec.org/n?u=RePEc:boc:msug11:05&r=ecm
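    Sketch: A stylized version of the kind of model described above (my notation; the authors' exact specification and error structure may differ):
        d_i^* = z_i'\alpha + u_i, \quad d_i = 1\{d_i^* > 0\}   (endogenous treatment)
        s_i^* = w_i'\gamma + v_i, \quad s_i = 1\{s_i^* > 0\}   (participation / selection; y_i observed only if s_i = 1)
        y_i \mid x_i, d_i, \varepsilon_i \sim \mathrm{Poisson}\!\left(\exp(x_i'\beta + \delta d_i + \varepsilon_i)\right),
    with (u_i, v_i, \varepsilon_i) jointly dependent, typically through correlated normal components. This dependence is what makes both the treatment and the participation decision endogenous, and integrating the unobservables out of the likelihood by simulation yields the maximum simulated likelihood estimator.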
  7. By: Krenar Avdulaj (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic)
    Abstract: Assessing extreme events is crucial in financial risk management. All risk managers and financial institutions want to know the risk of their portfolio under rare-event scenarios. We illustrate a multivariate market risk estimation method which employs Monte Carlo simulations to estimate Value-at-Risk (VaR) for a portfolio of four stock exchange indexes from Central Europe. The method uses the non-parametric empirical distribution to capture small risks and parametric Extreme Value Theory to capture large and rare risks. We compare estimates from this method with the historical simulation and variance-covariance methods under low- and high-volatility samples of data. In general, the historical simulation method overestimates the VaR for extreme events, while the variance-covariance method underestimates it. The method we illustrate gives a result in between, because it considers the historical performance of the stocks and also corrects for the heavy tails of the distribution. We conclude that the method illustrated here is useful for estimating VaR for extreme events, especially in times of high volatility.
    Keywords: Value-at-Risk, Extreme Value Theory, copula.
    JEL: C22
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2011_26&r=ecm
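    Sketch: A minimal Python illustration of mixing an empirical body with an EVT tail for VaR, using a peaks-over-threshold fit under my own simplifying assumptions (univariate, no copula or Monte Carlo step, unlike the paper's procedure):

        import numpy as np
        from scipy.stats import genpareto

        def evt_var(losses, q=0.99, tail_frac=0.10):
            """One-sided VaR at level q: empirical quantiles in the body, a
            generalized Pareto fit beyond a high threshold u for the tail.
            Assumes the estimated tail index xi is not (numerically) zero."""
            losses = np.asarray(losses, dtype=float)
            n = losses.size
            u = np.quantile(losses, 1.0 - tail_frac)    # threshold
            if q <= 1.0 - tail_frac:
                return np.quantile(losses, q)           # body: plain empirical quantile
            exceed = losses[losses > u] - u             # exceedances over u
            # Fit GPD to exceedances (location fixed at 0) and invert the standard
            # POT approximation  P(L > x) = (n_u/n) * (1 + xi*(x-u)/sigma)^(-1/xi).
            xi, _, sigma = genpareto.fit(exceed, floc=0.0)
            return u + (sigma / xi) * (((n / exceed.size) * (1.0 - q)) ** (-xi) - 1.0)

        # Purely illustrative example with simulated heavy-tailed losses.
        np.random.seed(1)
        losses = np.random.standard_t(df=4, size=5000)
        print(round(evt_var(losses, q=0.99), 3))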
  8. By: Gabriele La Spada; Fabrizio Lillo
    Abstract: We study how quantization, which occurs when a continuously varying process is approximated by or observed on a grid of discrete values, changes the properties of a Gaussian long-memory process. By computing the asymptotic behavior of the autocovariance and of the spectral density, we find that the quantized process has the same Hurst exponent as the original process. We show that the log-periodogram regression and Detrended Fluctuation Analysis (DFA) are severely negatively biased estimators of the Hurst exponent for quantized processes. We compute the asymptotics of the DFA for a generic long-memory process and study them for quantized processes.
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1107.4476&r=ecm
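    Sketch: A small Python illustration of the objects involved (my own construction: a grid quantizer plus a textbook log-periodogram (GPH) regression for the Hurst exponent; the paper's asymptotic results and DFA analysis are not reproduced):

        import numpy as np

        def quantize(x, step):
            """Round a continuously varying series onto a grid with spacing `step`."""
            return step * np.round(np.asarray(x, dtype=float) / step)

        def gph_hurst(x, frac=0.1):
            """Log-periodogram (GPH) estimate of the Hurst exponent H = d + 1/2:
            regress log I(lambda_j) on -log(4 sin^2(lambda_j/2)) over the first
            frac*n Fourier frequencies; the slope is the memory parameter d."""
            x = np.asarray(x, dtype=float)
            n = x.size
            m = max(int(frac * n), 10)
            lam = 2.0 * np.pi * np.arange(1, m + 1) / n     # Fourier frequencies
            dft = np.fft.fft(x - x.mean())[1:m + 1]
            periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
            regressor = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
            d = np.polyfit(regressor, np.log(periodogram), 1)[0]
            return d + 0.5

        # For a long-memory input x one would compare gph_hurst(x) with
        # gph_hurst(quantize(x, step)) as the grid spacing `step` grows.
        np.random.seed(2)
        x = np.random.normal(size=4096)                     # i.i.d. benchmark, H = 0.5
        print(round(gph_hurst(x), 2), round(gph_hurst(quantize(x, 1.0)), 2))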
  9. By: Edoardo Otranto
    Abstract: The classification of the volatility of financial time series has recently attracted many contributions, in particular ones using model-based clustering algorithms. Recent work has shown that the volatility structure can vary over time, with gradual or abrupt changes in the coefficients of the model. We ask whether these changes can affect the classification of series in terms of similar volatility structure. We propose to classify series by the level of unconditional volatility obtained from Multiplicative Error Models that allow for changes in the model parameters, in the form of regime switching or smoothly time-varying coefficients. These models provide different unconditional volatility structures, each with a proper interpretation, useful for representing different situations of interest. The different methodologies are coherent with each other and provide a common synthetic pattern. The procedure is tested on the volatilities of fifteen stock indices.
    Keywords: clustering; AMEM; Markov switching; smooth transition; unconditional volatility
    JEL: C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:201113&r=ecm
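    Sketch: For reference, the baseline Multiplicative Error Model underlying the classification above, in standard notation (the paper's variants add Markov-switching or smoothly time-varying coefficients to this skeleton):
        x_t = \mu_t \varepsilon_t, \qquad \mathrm{E}[\varepsilon_t \mid \mathcal{F}_{t-1}] = 1, \qquad \mu_t = \omega + \alpha x_{t-1} + \beta \mu_{t-1},
    where x_t is a non-negative volatility proxy and the implied unconditional level is E[x_t] = \omega / (1 - \alpha - \beta); it is this unconditional level, allowed to change across regimes or over time, on which the classification is based.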
  10. By: Luis F. Martins
    Abstract: In this paper, we examine the empirical validity of the baseline version of the forward-looking monetary policy reaction function proposed by Clarida, Gali, and Gertler (2000). For that purpose, we propose a moment conditions model averaging estimator in the Generalized Method of Moments and Generalized Empirical Likelihood setups. We derive some of their asymptotic properties under correctly specified and misspecified models. Although the model averaging estimates and the standard procedures point to a stabilizing policy rule during the Paul Volcker and Alan Greenspan tenures but not during the pre-Volcker period, our results cast serious doubt on the significance of the cyclical output variable as a forcing variable in the Fed funds rate dynamics during the Volcker-Greenspan period.
    JEL: C22 C52 E43 E52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201116&r=ecm
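    Sketch: The baseline forward-looking reaction function of Clarida, Gali, and Gertler (2000) referred to above has the familiar form (standard notation; the paper's moment conditions and instrument sets are not reproduced):
        r_t^* = r^* + \beta \left( \mathrm{E}[\pi_{t,k} \mid \Omega_t] - \pi^* \right) + \gamma\, \mathrm{E}[x_{t,q} \mid \Omega_t], \qquad r_t = (1 - \rho)\, r_t^* + \rho\, r_{t-1} + v_t,
    where r_t is the Fed funds rate, \pi_{t,k} is inflation between t and t+k, x_{t,q} is an output-gap measure and \Omega_t the central bank's information set. Orthogonality of v_t to variables in \Omega_t supplies the moment conditions over which GMM/GEL estimation, and the moment conditions model averaging proposed in the paper, operate.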
  11. By: Stephan Stahlschmidt; Helmut Tausendteufel; Wolfgang K. Härdle
    Abstract: We present a statistical investigation of sex-related homicides. As general sociological and psychological theory on this specific type of crime is incomplete or even lacking, a data-driven approach is implemented. Specifically, graphical modelling is applied to learn the dependency structure, and several structure-learning algorithms are combined to yield a skeleton corresponding to distinct Bayesian Networks. This graph is subsequently analysed and reveals a distinction between offender-driven and situation-driven crimes.
    Keywords: Bayesian Networks, structure learning, offender profiling
    JEL: C49 C81 K42
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-045&r=ecm
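    Sketch: A toy Python illustration of the combination step described above, i.e. merging the output of several structure-learning algorithms into one skeleton (entirely illustrative: the adjacency matrices are placeholders for whatever learners are used, and no specific structure-learning library is implied):

        import numpy as np

        def consensus_skeleton(adjacency_list, min_votes=2):
            """Combine several 0/1 adjacency matrices (one per structure-learning
            algorithm) into an undirected skeleton: keep an edge if at least
            `min_votes` algorithms propose it in either direction."""
            stacked = np.array([np.maximum(a, a.T) for a in adjacency_list])  # symmetrize
            votes = stacked.sum(axis=0)
            skeleton = (votes >= min_votes).astype(int)
            np.fill_diagonal(skeleton, 0)
            return skeleton

        # Hypothetical output of three learners on four variables.
        a1 = np.array([[0,1,0,0],[0,0,1,0],[0,0,0,0],[0,0,0,0]])
        a2 = np.array([[0,1,0,0],[0,0,0,0],[0,1,0,1],[0,0,0,0]])
        a3 = np.array([[0,0,0,0],[1,0,1,0],[0,0,0,1],[0,0,0,0]])
        print(consensus_skeleton([a1, a2, a3]))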
  12. By: Beare, Brendan K.; Schmidt, Lawrence
    Abstract: A recent literature in finance concerns a curious recurring feature of estimated pricing kernels. Classical theory dictates that the pricing kernel, defined loosely as the ratio of Arrow security prices to an objective probability measure, should be a decreasing function of aggregate resources. Yet a large number of recent empirical studies appear to contradict this prediction. The nonmonotonicity of empirical pricing kernel estimates has become known as the pricing kernel puzzle. In this paper we propose and apply a formal statistical test of pricing kernel monotonicity. The test involves assessing the concavity of the ordinal dominance curve associated with the risk neutral and physical return distributions. We apply the test using thirteen years of data from the market for European put and call options written on the S&P 500 index. Statistically significant violations of pricing kernel monotonicity occur in a substantial proportion of months.
    Keywords: finance, empirical pricing kernels, Econometrics
    Date: 2011–07–21
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:2113389&r=ecm
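    Sketch: The link between pricing kernel monotonicity and ordinal dominance curve concavity exploited above, in a schematic derivation (my notation, ignoring discounting and regularity conditions):
        m(x) \propto \frac{q(x)}{p(x)}, \qquad R(u) = F_Q\!\left(F_P^{-1}(u)\right), \quad u \in [0,1],
    where q and p are the risk-neutral and physical return densities with CDFs F_Q and F_P. Differentiating gives R'(u) = q(F_P^{-1}(u)) / p(F_P^{-1}(u)) \propto m(F_P^{-1}(u)), so R is concave (R' decreasing) exactly when the pricing kernel m is decreasing in the return; testing concavity of the estimated ordinal dominance curve is therefore a test of pricing kernel monotonicity.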

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.