nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒07‒24
twenty papers chosen by
Sune Karlsson
Örebro University

  1. Modelling Realized Covariances and Returns By Xin Jin; John M Maheu
  2. Convergence analysis as distribution dynamics when data are spatially dependent By Margherita Gerolimetto; Stefano Magrini
  3. Missing ordinal covariates with informative selection By Alfonso Miranda; Sophia Rabe-Hesketh
  4. Localized Level Crossing Random Walk Test Robust to the Presence of Structural Breaks. By Vitali Alexeev; Alex Maynard
  5. A factor-augmented probit model for business cycle analysis By Christophe Bellégo; Laurent Ferrara
  6. Nonlinearity and Temporal Dependence By Xiaohong Chen; Lars P. Hansen; Marine Carrasco
  7. Implied Risk-Neutral Probability Density Functions from Option Prices: A Comparison of Estimation Methods By Rihab Bedoui; Haykel Hamdi
  8. Variable Selection for Market Basket Analysis By Dippold, Katrin; Hruschka, Harald
  9. Asymmetric Time Aggregation and its Potential Benefits for Forecasting Annual Data By Kunst, Robert M.; Franses, Philip Hans
  10. D-optimal and D-efficient Equivalent-Estimation Second-Order Split-Plot Designs By Macharia H.; Goos P.
  11. Codependence and Cointegration By Trenkler, Carsten; Weber, Enzo
  12. Testing the Consumption Based CAPM: Evidence from a New Approach By Paresh Kumar Narayan; Stephan Popp
  13. On the Sources of U.S. Stock Market Comovement By Weber, Enzo
  14. A Simultaneous Unobserved Components Analysis of US Output and the Great Moderation By Weber, Enzo
  15. Mirror, mirror, on the wall, who in this land is fairest of all? Revisiting the extended concentration index By Erreygers G.; Clarke Ph.; Van Ourti T.
  16. Forecasting volatility in the presence of the leverage effect By Rémi Rhodes; Vincent Vargas; Jean-Christophe Domenge
  17. Econometría de evaluación de impacto By Luis García Núñez
  18. Financial Contagion, Vulnerability and Information Flow: Empirical Identification By Weber, Enzo
  19. Does Respondent Perception of the Status Quo Matter in Non-Market Valuation with Choice Experiments? An Application to New Zealand Freshwater Streams By Dan Marsh; Bentry Mkwara; Riccardo Scarpa
  20. Econometric methods for research in education By Costas Meghir; Steven Rivkin

  1. By: Xin Jin; John M Maheu
    Abstract: This paper proposes new dynamic component models of realized covariance (RCOV) matrices based on recent work in time-varying Wishart distributions. The specifications are linked to returns for a joint multivariate model of returns and covariance dynamics that is both easy to estimate and forecast. Realized covariance matrices are constructed for 5 stocks using high-frequency intraday prices based on positive semi-definite realized kernel estimates. The models are compared based on a term-structure of density forecasts of returns for multiple forecast horizons. Relative to multivariate GARCH models that use only daily returns, the joint RCOV and return models provide significant improvements in density forecasts from forecast horizons of 1 day to 3 months ahead. Global minimum variance portfolio selection is improved for forecast horizons up to 3 weeks out.
    Keywords: eigenvalues, dynamic conditional correlation, predictive likelihoods, MCMC
    JEL: C11 C32 C53
    Date: 2010–07–16
  2. By: Margherita Gerolimetto (Department of Statistics, University of Venice Cà Foscari); Stefano Magrini (Department of Economics, University of Venice Cà Foscari)
    Abstract: Conditional distributions for the analysis of convergence are usually estimated using a standard kernel smoother but this is known to be biased. Hyndman et al. (1996) thus suggest a conditional density estimator with a mean function specified by a local polynomial smoother, i.e. one with better bias properties. However, even in this case, the estimated conditional mean might be incorrect when observations are spatially dependent. Consequently, in this paper we study per capita income inequalities among European Functional Regions and U.S. Metropolitan Statistical Areas through a distribution dynamics approach in which the conditional mean is estimated via a procedure that allows for spatial dependence (Gerolimetto and Magrini, 2009).
    Keywords: Regional convergence, Distribution dynamics, Nonparametric smoothing, Spatial dependence
    JEL: R10 O40 C14 C21
    Date: 2010
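The baseline at issue in this abstract is the standard kernel smoother for a conditional mean, whose bias motivates the local-polynomial refinement of Hyndman et al. (1996). As a rough illustration only (not the authors' code; the data, bandwidth and grid below are made up), a Nadaraya-Watson estimator can be sketched as:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Kernel-smoothed conditional mean E[y | x] on a grid (Gaussian kernel).

    This is the standard estimator whose bias properties the
    local-polynomial approach discussed in the abstract improves upon.
    """
    # Pairwise kernel weights: one row per grid point, one column per observation
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Toy data: relative income at time t vs t+1, with persistence toward the mean
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, 200)                  # relative income at t
y = 0.3 + 0.7 * x + rng.normal(0, 0.05, 200)    # relative income at t+1
grid = np.linspace(0.6, 1.4, 9)
m_hat = nadaraya_watson(grid, x, y, h=0.1)
```

A spatially dependent sample would additionally correlate the errors across observations, which is exactly the complication the Gerolimetto-Magrini procedure addresses.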
  3. By: Alfonso Miranda (Department of Quantitative Social Science, Institute of Education, University of London. 20 Bedford Way, London WC1H 0AL, UK.); Sophia Rabe-Hesketh (Graduate School of Education and Graduate Group in Biostatistics, University of California, Berkeley, USA. Institute of Education, University of London, London, UK.)
    Abstract: This paper considers the problem of parameter estimation in a model for a continuous response variable y when an important ordinal explanatory variable x is missing for a large proportion of the sample. Non-missingness of x, or sample selection, is correlated with the response variable and/or with the unobserved values the ordinal explanatory variable takes when missing. We suggest solving the endogenous selection, or 'not missing at random' (NMAR), problem by modelling the informative selection mechanism, the ordinal explanatory variable, and the response variable together. The use of the method is illustrated by re-examining the problem of the ethnic gap in school achievement at age 16 in England using linked data from the National Pupil database (NPD), the Longitudinal Study of Young People in England (LSYPE), and the Census 2001.
    Keywords: Missing covariate, sample selection, latent class models, ordinal variables, NMAR
    JEL: C13 C35 I21
    Date: 2010–07–14
  4. By: Vitali Alexeev (School of Economics and Finance, University of Tasmania, Australia); Alex Maynard (Department of Economics, University of Guelph, Canada.)
    Abstract: We propose a modified version of the nonparametric level crossing random walk test, in which the crossing level is determined locally. This modification results in a test that is robust to unknown multiple structural breaks in the level and slope of the trend function under both the null and alternative hypotheses. No knowledge regarding the number or timing of the breaks is required. An algorithm is proposed to select the degree of localization in order to maximize bootstrapped power in a proximate model. A computational procedure is then developed to adjust the critical values for the effect of this selection procedure by replicating it under the null hypothesis. The test is applied to Canadian nominal inflation and nominal interest rate series with implications for the Fisher hypothesis.
    Keywords: Level crossing; random walk; structural breaks; unit root; robustness
    JEL: C12 C14 C22
    Date: 2010
  5. By: Christophe Bellégo; Laurent Ferrara
    Abstract: Dimension reduction of large data sets has recently been the topic of many research papers dealing with macroeconomic modelling. Dynamic factor models, in particular, have proved useful for GDP nowcasting and short-term forecasting. In this paper, we put forward an innovative factor-augmented probit model for analyzing the business cycle. Factor estimation is carried out either by standard statistical methods or by allowing a richer dynamic behaviour. An application to euro area data demonstrates the ability of the model to detect recessions over the period 1974-2008.
    Date: 2010
  6. By: Xiaohong Chen (Cowles Foundation, Yale University); Lars P. Hansen (Dept. of Economics and Statistics, University of Chicago); Marine Carrasco (Dept. of Economics, University of Montreal)
    Abstract: Nonlinearities in the drift and diffusion coefficients influence temporal dependence in diffusion models. We study this link using three measures of temporal dependence: rho-mixing, beta-mixing and alpha-mixing. Stationary diffusions that are rho-mixing have mixing coefficients that decay exponentially to zero. When they fail to be rho-mixing, they are still beta-mixing and alpha-mixing; but coefficient decay is slower than exponential. For such processes we find transformations of the Markov states that have finite variances but infinite spectral densities at frequency zero. The resulting spectral densities behave like those of stochastic processes with long memory. Finally we show how state-dependent, Poisson sampling alters the temporal dependence.
    Keywords: Diffusion, Strong dependence, Long memory, Poisson sampling, Quadratic forms
    JEL: C12 C13 C22 C50
    Date: 2009–10
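A worked toy example of the exponential decay that characterises rho-mixing diffusions: the Ornstein-Uhlenbeck process, sampled at interval dt, has autocorrelation exp(-kappa*k*dt) at lag k. The sketch below (illustrative parameters, not from the paper) checks this against a simulated path using the exact AR(1) discretisation:

```python
import numpy as np

# Ornstein-Uhlenbeck: a textbook rho-mixing diffusion whose autocorrelation
# decays exponentially at the mean-reversion rate kappa (values illustrative)
kappa, dt, n = 0.5, 0.1, 200_000
phi = np.exp(-kappa * dt)       # exact AR(1) coefficient of the sampled process
sig = np.sqrt(1.0 - phi ** 2)   # innovation sd giving unit stationary variance

rng = np.random.default_rng(2)
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = rng.normal()             # draw from the stationary distribution
for t in range(1, n):
    x[t] = phi * x[t - 1] + sig * eps[t]

lag = 10
rho_hat = np.corrcoef(x[:-lag], x[lag:])[0, 1]   # sample autocorrelation
rho_theory = np.exp(-kappa * lag * dt)           # = phi ** lag, about 0.61
```

The slower-than-exponential decay the abstract describes arises for diffusions that fail to be rho-mixing, which this simple linear example cannot exhibit.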
  7. By: Rihab Bedoui; Haykel Hamdi
    Abstract: This paper compares the goodness-of-fit of eight option-based approaches used to extract risk-neutral probability density functions from high-frequency CAC 40 index options during a normal and a troubled period. Our findings show that the kernel estimator generates a strong volatility smile with respect to moneyness, and the kernel smile's shape varies with the chosen time to maturity. The mixture of log-normals, Edgeworth expansion, Hermite polynomial, jump diffusion and Heston models are more closely aligned and have heavier tails than the log-normal distribution. Moreover, according to the goodness-of-fit criteria we compute, the jump diffusion model provides a much better fit than the other models in the period just before the crisis for relatively short maturities. However, during this same period, the mixture of log-normals model performs better for maturities longer than three months. Furthermore, in the troubled period and the period just after the crisis, we find that the semi-parametric models fit observed option prices most accurately across all maturities, with only a minimal difference relative to the mixture of log-normals model.
    Keywords: Risk-neutral density, mixture of log-normal distributions, Edgeworth expansions, Hermite polynomials, tree-based methods, kernel regression, Heston’s stochastic volatility model, jump diffusion model
    JEL: C02 C14 C65 G13
    Date: 2010
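Of the parametric methods compared in this abstract, the mixture of log-normals is the simplest to sketch: the risk-neutral density is a weighted sum of two log-normal densities, and an option price is the discounted integral of the payoff against that density. The parameters below are hypothetical, not calibrated to CAC 40 data:

```python
import numpy as np

def lognormal_pdf(s, mu, sigma):
    """Density of a log-normal with log-mean mu and log-sd sigma."""
    return (np.exp(-(np.log(s) - mu) ** 2 / (2 * sigma ** 2))
            / (s * sigma * np.sqrt(2 * np.pi)))

def mixture_rnd(s, w, mu1, sig1, mu2, sig2):
    """Two-component log-normal mixture risk-neutral density."""
    return w * lognormal_pdf(s, mu1, sig1) + (1 - w) * lognormal_pdf(s, mu2, sig2)

def call_price(strike, r, tau, s_grid, density):
    """Discounted risk-neutral expectation of the call payoff (Riemann sum)."""
    ds = s_grid[1] - s_grid[0]
    payoff = np.maximum(s_grid - strike, 0.0)
    return np.exp(-r * tau) * np.sum(payoff * density) * ds

# Hypothetical parameters: a calm component and a heavier-tailed crash component
s = np.linspace(1.0, 12000.0, 24000)
q = mixture_rnd(s, w=0.7, mu1=np.log(3800), sig1=0.15, mu2=np.log(3400), sig2=0.30)
c = call_price(3800.0, r=0.03, tau=0.25, s_grid=s, density=q)
```

In an actual estimation, the mixture parameters would be chosen to minimise the pricing error over a cross-section of observed option prices; the fitted density's heavy left tail is what distinguishes it from the single log-normal benchmark.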
  8. By: Dippold, Katrin; Hruschka, Harald
    Keywords: Market basket analysis; cross category effects; variable selection; multivariate logit model; pseudo likelihood estimation
    JEL: C13 C52 L81 M31
    Date: 2010–02
  9. By: Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria and Department of Economics, University of Vienna, Vienna, Austria); Franses, Philip Hans (Erasmus School of Economics, Econometrics, Erasmus University Rotterdam, Rotterdam, The Netherlands)
    Abstract: For many economic time-series variables that are observed regularly and frequently, for example weekly, the underlying activity is not distributed uniformly across the year. For the aim of predicting annual data, one may consider temporal aggregation into larger subannual units based on an activity time scale instead of calendar time. Such a scheme may strike a balance between annual modelling (which processes little information) and modelling at the finest available frequency (which may lead to an excessive parameter dimension), and it may also outperform modelling calendar time units (with some months or quarters containing more information than others). We suggest an algorithm that performs an approximate inversion of the inherent seasonal time deformation. We illustrate the procedure using weekly data for temporary staffing services.
    Keywords: Seasonality, time deformation, prediction, time series
    JEL: C22 C53
    Date: 2010–07
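The core idea, aggregating into units of roughly equal cumulative activity rather than equal calendar length, can be sketched by cutting the cumulative-activity curve at equally spaced levels. This is only a crude illustration of the time-deformation inversion, with a made-up seasonal pattern:

```python
import numpy as np

def activity_bins(weekly, n_bins):
    """Aggregate weekly observations into bins of roughly equal cumulative
    activity (rather than equal calendar length) by cutting the cumulative
    activity curve at equally spaced levels."""
    cum = np.cumsum(weekly)
    cuts = np.linspace(0.0, cum[-1], n_bins + 1)[1:-1]
    edges = np.searchsorted(cum, cuts)
    return [seg.sum() for seg in np.split(weekly, edges)]

# Made-up seasonal pattern with activity concentrated mid-year
weeks = np.arange(52)
activity = 1.0 + np.sin(np.pi * weeks / 52) ** 2
bins = activity_bins(activity, 4)
```

Calendar quarters would split the same year into four 13-week blocks of unequal activity; the bins above instead equalise activity at the cost of unequal calendar length, which is the trade-off the abstract describes.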
  10. By: Macharia H.; Goos P.
    Abstract: Industrial experiments often involve factors which are hard to change or costly to manipulate and thus make it impossible to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical agricultural split-plot designs, but also some second-order split-plot response surface designs), ordinary least squares (OLS) estimates are equivalent to GLS estimates. These designs are called equivalent-estimation designs. As an alternative to these equivalent-estimation designs, one can use D-optimal designs which guarantee efficient estimation of the fixed effects of the statistical model that is appropriate given the split-plot structure. We explore the relationship between equivalent-estimation and D-optimal split-plot designs for a second-order response surface model and propose an algorithm for generating D-efficient equivalent-estimation split-plot designs. This approach allows for a flexible choice of the number of hard-to-change factors, the number of easy-to-change factors, the number of whole plots and the total sample size.
    Date: 2010–05
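For readers unfamiliar with why split-plot designs generally require GLS: the whole-plot structure induces a compound-symmetry error covariance, and GLS weights observations by its inverse. A minimal numpy sketch (illustrative sizes and variance components, not the authors' algorithm):

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Split-plot error structure: a whole-plot error shared by all runs in a plot,
# plus an independent sub-plot error (sizes and variances are illustrative)
n_plots, n_sub = 4, 3
n = n_plots * n_sub
Z = np.kron(np.eye(n_plots), np.ones((n_sub, 1)))   # maps whole plots to runs
sigma_w2, sigma_e2 = 2.0, 1.0
V = sigma_e2 * np.eye(n) + sigma_w2 * (Z @ Z.T)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([1.0, 0.5])
     + Z @ rng.normal(0.0, np.sqrt(sigma_w2), n_plots)   # whole-plot errors
     + rng.normal(0.0, np.sqrt(sigma_e2), n))            # sub-plot errors

beta_gls = gls(X, y, V)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

For an equivalent-estimation design, beta_gls and beta_ols would coincide by construction, which is exactly the property that makes such designs attractive when the variance components are unknown.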
  11. By: Trenkler, Carsten; Weber, Enzo
    Abstract: We introduce the idea of common serial correlation features among non-stationary, cointegrated variables. That is, the time series do not only trend together in the long run, but adjustment restores equilibrium immediately in the period following a deviation. Allowing for delayed re-equilibration, we extend the framework to codependence. The restrictions derived for VECMs exhibiting the common feature are checked by LR and GMM-type tests. Alongside, we provide corrected maximum codependence orders and discuss identification. The concept is applied to US and European interest rate data, examining the capability of the Fed and ECB to control overnight money market rates.
    Keywords: VAR; serial correlation common features; codependence; cointegration
    JEL: C32 E52
    Date: 2009–10–21
  12. By: Paresh Kumar Narayan; Stephan Popp
    Abstract: The goal of this paper is to test the consumption based CAPM. To achieve this goal, we propose a new test capable of examining the unit root null hypothesis in cases of non-trending data. We find that the real interest rate series and the consumption growth rate series for Australia, Canada, Denmark, France, Italy, the USA, the UK, and Norway are non-stationary. This finding is consistent with the expectations of the consumption based CAPM.
    Date: 2010–07–16
  13. By: Weber, Enzo
    Abstract: This paper disentangles direct spillovers and common factors as sources of correlations in simultaneous heteroscedastic systems. While these different components are not identifiable by standard means without restrictions, it is shown that they can be pinned down by specifying the variances of the latent idiosyncratic and common shocks as ARCH-type processes. Applying an adapted Kalman filter estimation method to Dow and Nasdaq stock returns, predominant spillovers from the Dow and substantial rising factor exposure are found. While the latter is shown to prevail in the recent global financial crisis, volatility in the dot-com bubble period was driven by Nasdaq shocks.
    Keywords: Simultaneous System; Latent Factor; Identification; Spillover; EGARCH
    Date: 2010–03–16
  14. By: Weber, Enzo
    Abstract: In an unobserved components framework of US output trend and cycle, this paper seeks to determine the causal interaction between permanent and transitory innovations. For the purpose of identification, strategies of augmenting the cyclical dynamics as well as allowing for shifts in volatility are proposed. In the early 1980s, substantial predominance of cycle shocks gives way to strong negative spillovers of trend impulses, consistent with real business cycle theories. The coincident reduction of macroeconomic volatility mainly traces back to pronounced dampening of transitory disturbances. This ascribes an important role to the mitigation of policy interventions in explaining the Great Moderation.
    Keywords: Unobserved Components; Trend; Cycle; Identification; Great Moderation
    JEL: C32 E32
    Date: 2009–07–10
  15. By: Erreygers G.; Clarke Ph.; Van Ourti T.
    Abstract: This paper explores three alternative indices for measuring health inequalities in a way that takes into account attitudes towards inequality. First, we revisit the extended concentration index, which has been proposed to generalise the value judgements implicit in the standard concentration index. We then examine two alternative measures which have desirable mirror properties. One of these indices applies symmetric weights, a property of the standard concentration index. We also examine the bias that arises when all three measures are applied to small samples, propose a correction for this small-sample bias, and use Monte Carlo simulations to check whether it works. We empirically compare the different indices for under-five mortality rates in developing countries.
    Date: 2010–07
  16. By: Rémi Rhodes (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS : UMR7534 - Université Paris Dauphine - Paris IX); Vincent Vargas (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS : UMR7534 - Université Paris Dauphine - Paris IX); Jean-Christophe Domenge (Laboratoire de Physique Théorique de la Matière Condensée - Aucune)
    Abstract: We define a simple and tractable method for adding the leverage effect to general volatility predictions. As an application, we compare volatility predictions with and without the leverage effect on the S&P 500 index during the period 2002-2010.
    Date: 2010–07–13
  17. By: Luis García Núñez (Departamento de Economía - Pontificia Universidad Católica del Perú)
    Abstract: In recent years, program evaluation methods have become very popular in applied microeconomics. The variety of these methods responds to specific problems, which are normally determined by the data available and the impact the researcher aims to measure. This paper summarizes the main methods in the current literature, emphasizing the assumptions under which the average treatment effect and the average treatment effect on the treated are identified. After each section, some applications of these methods are briefly presented. This document is a didactic presentation aimed at advanced students and applied researchers who wish to learn the basics of these techniques.
    Keywords: Causal inference, program evaluation, regression discontinuity, instrumental variables, matching
    JEL: C13 C14 C31
    Date: 2010
  18. By: Weber, Enzo
    Abstract: This paper proposes a new approach to modelling financial transmission effects. In simultaneous systems of stock returns, fundamental shocks are identified through heteroscedasticity. The size of contemporaneous spillovers is determined in the fashion of smooth transition regression by the innovations' variances and (negative) signs, both representing typical crisis-related magnitudes. Thereby, contagion describes higher inward transmission in times of foreign crisis, whereas vulnerability is defined as increased susceptibility to foreign shocks in times of domestic turmoil. The application to major American stock indices confirms US dominance and demonstrates that volatility and sign of the equity returns significantly govern spillover size.
    Keywords: Contagion; Vulnerability; Identification; Smooth Transition Regression
    JEL: C32 G15
    Date: 2009–07–10
  19. By: Dan Marsh (University of Waikato); Bentry Mkwara (University of Waikato); Riccardo Scarpa (University of Waikato)
    Abstract: In environmental valuation studies with stated preference methods, researchers often provide descriptions of status quo conditions which may differ from those perceived by respondents. Ignoring this difference in utility baselines may affect the magnitude of utility changes and hence bias the implied estimates of benefits from the proposed environmental policies. We investigate this issue using data from a choice experiment on a community's willingness to pay for water quality improvements in streams. More than 60 percent of respondents perceived the quality of water in streams to be better than the one described in our scenario. Our results show that respondents who could provide details of their perception of the status quo displayed stronger preferences for water quality improvements - hence a higher marginal willingness to pay - than their counterparts. Respondents who opted for their own status quo description displayed a higher inclination to remain in the status quo, while their counterparts displayed the opposite tendency. We argue this might be linked to the amount of knowledge each group displayed about the status quo: a kind of reluctance to leave what one knows well.
    Keywords: choice experiments; fixed status quo; people’s perceived status quo; status quo effect; willingness to pay.
    JEL: C51 Q25 Q51
    Date: 2010–07–15
  20. By: Costas Meghir (Institute for Fiscal Studies and University College London); Steven Rivkin
    Abstract: This paper reviews some of the econometric methods that have been used in the economics of education. The focus is on understanding how the assumptions made to justify and implement such methods relate to the underlying economic model and the interpretation of the results. We start by considering the estimation of the returns to education both within the context of a dynamic discrete choice model inspired by Willis and Rosen (1979) and in the context of the Mincer model. We discuss the relationship between the econometric assumptions and economic behaviour. We then discuss methods that have been used in the context of assessing the impact of education quality, the teacher contribution to pupils' achievement and the effect of school quality on housing prices. In the process we also provide a summary of some of the main results in this literature.
    Date: 2010–05

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.