nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒06‒04
twelve papers chosen by
Sune Karlsson
Orebro University

  1. Dynamic panels with predetermined regressors: likelihood-based estimation and Bayesian averaging with an application to cross-country growth By Enrique Moral-Benito
  2. Estimation of the long memory parameter in non stationary models: A Simulation Study By Mohamed Boutahar; Rabeh Khalfaoui
  3. Thirty Years of Heteroskedasticity-Robust Inference By James MacKinnon
  4. Principal Components Instrumental Variable Estimation By Winkelried, D.; Smith, R.J.
  5. Variance Clustering Improved Dynamic Conditional Correlation MGARCH Estimators By Gian Piero Aielli; Massimiliano Caporin
  6. Tests for Convergence Clubs By Corrado, L.; Weeks, M.
  7. Exploring ICA for time series decomposition By Antonio García Ferrer; Ester González Prieto; Daniel Peña
  8. Exact likelihood computation for nonlinear DSGE models with heteroskedastic innovations By Gianni Amisano; Oreste Tristani
  9. Empirical Implementation of Nonparametric First-Price Auction Models By Daniel J. Henderson; John A. List; Daniel L. Millimet; Christopher F. Parmeter; Michael K. Price
  10. Fact or friction: Jumps at ultra high frequency By Kim Christensen; Roel Oomen; Mark Podolskij
  11. Beyond the DSGE straightjacket By Pesaran, M. H.; Smith, R. P.
  12. Unpredictability in Economic Analysis, Econometric Modeling and Forecasting By David F. Hendry

  1. By: Enrique Moral-Benito (Banco de España)
    Abstract: This paper discusses likelihood-based estimation of linear panel data models with general predetermined variables and individual-specific effects. The resulting (pseudo) maximum likelihood estimator is asymptotically equivalent to standard GMM but tends to have smaller finite-sample biases as illustrated in simulation experiments. Moreover, the availability of such a likelihood function allows applying the Bayesian apparatus to this class of panel data models. Combining the aforementioned estimator with Bayesian model averaging methods we estimate empirical growth models simultaneously considering endogenous regressors and model uncertainty. Empirical results indicate that only the investment ratio seems to robustly cause long-run economic growth. Moreover, the estimated rate of convergence is not significantly different from zero.
    Keywords: dynamic panel estimation, maximum likelihood, weak instruments, growth regressions, Bayesian model averaging
    JEL: C11 C33 O40
    Date: 2011–05
  2. By: Mohamed Boutahar (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579); Rabeh Khalfaoui (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579)
    Abstract: In this paper we perform a Monte Carlo study of three well-known semiparametric estimators of the long memory fractional parameter. We study the efficiency of the Geweke and Porter-Hudak, Gaussian semiparametric, and wavelet ordinary least squares estimators in both stationary and non-stationary models. We consider adequate data tapers to compute the non-stationary estimates. The Monte Carlo simulation study is based on different sample sizes. We show that for d belonging to [1/4, 1.25) the Haar estimator outperforms the others with respect to mean squared error. The estimation methods are applied to an energy data set as an empirical illustration.
    Keywords: wavelets; long memory; tapering; non-stationarity; volatility.
    Date: 2011–05–23
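The Geweke and Porter-Hudak estimator compared above regresses the log periodogram on a known function of frequency, with the long memory parameter d recovered from the slope. A minimal numpy sketch of that idea (the bandwidth rule m = sqrt(n) is an illustrative choice, not the paper's):

```python
# Hedged sketch of the Geweke-Porter-Hudak (GPH) log-periodogram estimator
# of the long-memory parameter d. The bandwidth m = sqrt(n) is illustrative.
import numpy as np

def gph_estimate(x, m=None):
    """Regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)) over the
    first m Fourier frequencies; the estimate of d is minus the slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))
    # Periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    # OLS slope of log-periodogram on the GPH regressor
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope

# Example: white noise has d = 0, so the estimate should be near zero.
rng = np.random.default_rng(0)
d_hat = gph_estimate(rng.standard_normal(4096))
```

For a series with genuine long memory the same regression yields a positive d; the wavelet OLS estimators studied in the paper replace the periodogram with wavelet-scale variances.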
  3. By: James MacKinnon (Queen's University)
    Abstract: White (1980) marked the beginning of a new era for inference in econometrics. It introduced the revolutionary idea of inference that is robust to heteroskedasticity of unknown form, an idea that was very soon extended to other forms of robust inference and also led to many new estimation methods. This paper discusses the development of heteroskedasticity-robust inference since 1980. There have been two principal lines of investigation. One approach has been to modify White's original estimator to improve its finite-sample properties, and the other has been to use bootstrap methods. The relation between these two approaches, and some ways in which they may be combined, are discussed. Finally, a simulation experiment compares various methods and shows how far heteroskedasticity-robust inference has come in just over thirty years.
    Keywords: wild bootstrap, HCCME, power, finite-sample
    JEL: C12 C15 C20
    Date: 2011–05
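The HC-type estimators surveyed above share a common sandwich form, differing only in how the squared OLS residuals are rescaled. A hedged numpy sketch (the simulation design and names are our own, not the paper's):

```python
# Hedged sketch of White-type heteroskedasticity-consistent (HC) covariance
# estimators HC0-HC3; the simulated example below is illustrative only.
import numpy as np

def hc_covariance(X, resid, kind="HC3"):
    """Sandwich estimator (X'X)^{-1} X' diag(w) X (X'X)^{-1},
    where w_i rescales the squared OLS residual u_i^2."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)  # leverage values h_i
    u2 = resid ** 2
    if kind == "HC0":
        w = u2                       # White's original estimator
    elif kind == "HC1":
        w = u2 * n / (n - k)         # degrees-of-freedom correction
    elif kind == "HC2":
        w = u2 / (1.0 - h)           # leverage adjustment
    elif kind == "HC3":
        w = u2 / (1.0 - h) ** 2      # jackknife-like adjustment
    else:
        raise ValueError(kind)
    meat = X.T @ (X * w[:, None])
    return XtX_inv @ meat @ XtX_inv

# Usage: OLS with heteroskedastic errors, then HC3 standard errors.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
u = rng.standard_normal(n) * (1.0 + np.abs(X[:, 1]))  # heteroskedastic
y = X @ np.array([1.0, 2.0]) + u
beta = np.linalg.lstsq(X, y, rcond=None)[0]
se_hc3 = np.sqrt(np.diag(hc_covariance(X, y - X @ beta)))
```

The wild bootstrap discussed in the paper would resample by multiplying each residual by an auxiliary random sign before recomputing the statistic; the sandwich above is only the analytic branch of the comparison.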
  4. By: Winkelried, D.; Smith, R.J.
    Abstract: Instrumental variable estimators can be severely biased in finite samples when the degree of overidentification is high or when the instruments are weakly correlated with the endogenous regressors. This paper proposes an estimator based on the use of the principal components of the instruments as a means of dealing with these issues. By promoting parsimony, the proposed estimator can exhibit considerably lower bias, often without giving up asymptotic efficiency. To make the estimator operational, a simple but flexible rule to select the relevant components for estimation is suggested. Simulation evidence shows that this approach yields significant finite sample improvements over other instrumental variable estimators.
    Date: 2011–01–31
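The core idea above, replacing a large instrument set with a few of its principal components before two-stage least squares, can be sketched in a few lines of numpy. Everything below (the component count r, the factor-structure simulation, all names) is an illustrative assumption, not the paper's selection rule:

```python
# Hedged sketch of principal-components IV: use the first r principal
# components of the instrument matrix Z as instruments in 2SLS.
import numpy as np

def pciv(y, X, Z, r):
    """2SLS using the first r principal components of Z as instruments."""
    Zc = Z - Z.mean(axis=0)
    # Principal component scores via SVD: F = U * s
    U, s, _ = np.linalg.svd(Zc, full_matrices=False)
    F = U[:, :r] * s[:r]
    # First stage: project X on the components; second stage: OLS on fits
    X_hat = F @ np.linalg.lstsq(F, X, rcond=None)[0]
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Simulated example: one endogenous regressor, 20 instruments driven by
# two latent factors, endogeneity through the common error v.
rng = np.random.default_rng(2)
n, L = 1000, 20
f = rng.standard_normal((n, 2))                  # latent factors
Lam = rng.standard_normal((2, L))
Z = f @ Lam + 0.5 * rng.standard_normal((n, L))
v = rng.standard_normal(n)
x = f @ np.array([1.0, -1.0]) + v
y = 1.5 * x + v + rng.standard_normal(n)         # true coefficient 1.5
beta_ols = (x @ y) / (x @ x)                     # biased by endogeneity
beta_pciv = pciv(y, x[:, None], Z, r=2)
```

In this design the two components recover the factor space of the 20 instruments, so the parsimonious estimator stays close to the true coefficient while OLS is biased upward.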
  5. By: Gian Piero Aielli (University of Padova); Massimiliano Caporin (University of Padova)
    Abstract: It is well known that estimated GARCH dynamics exhibit common patterns. Starting from this fact, we extend the Dynamic Conditional Correlation (DCC) model by allowing for a clustering structure of the univariate GARCH parameters. The model can be estimated in two steps, the first devoted to the clustering structure and the second focusing on the correlation parameters. Unlike traditional two-step DCC estimation, we obtain large-system feasibility of the joint estimation of the whole set of the model's parameters. We also present a new approach to the clustering of GARCH processes, which embeds the asymptotic properties of the univariate quasi-maximum-likelihood GARCH estimators into a Gaussian mixture clustering algorithm. Unlike other GARCH clustering techniques, our method logically leads to the selection of the optimal number of clusters.
    Keywords: dynamic conditional correlations, time series clustering, multivariate GARCH, composite likelihood.
    JEL: C32 C53 C51 C52
    Date: 2011–05
  6. By: Corrado, L.; Weeks, M.
    Abstract: In many applications of convergence testing the number of cross-sectional units is large and the number of time periods is small. In these situations tests founded upon an omnibus null hypothesis suffer from a number of problems. In this paper we consider a broad class of tests of convergence based on multivariate time series and panel data methodologies, and track a gradual progression away from tests based on an omnibus null towards sequential tests and tests founded upon multiple pairwise comparisons. In a previous study, Corrado, Martin and Weeks (2005) test for regional convergence across the European Union, allowing for an endogenous selection of regional clusters using a multivariate test for stationarity. Given that the time series are relatively short, there are potential problems in basing inference on asymptotic results for stationarity tests. To circumvent this problem we bootstrap the stationarity test and explore the robustness of the cluster outcomes. In general our results show that the size distortion which afflicts the asymptotic tests, resulting in a bias towards finding less convergence, is resolved when we apply the bootstrap-generated critical values. To interpret the composition of the resulting convergence clusters, the latter are tested against a variety of possible groupings suggested by recent theories and hypotheses of regional growth and convergence.
    Keywords: Multivariate stationarity, bootstrap tests, regional convergence
    JEL: C51 R11 R15
    Date: 2011–01–26
  7. By: Antonio García Ferrer; Ester González Prieto; Daniel Peña
    Abstract: In this paper, we apply independent component analysis (ICA) for prediction and signal extraction in multivariate time series data. We compare the performance of three ICA procedures, JADE, SOBI, and FOTBI, which estimate the components by exploiting either the non-Gaussianity of the data, their temporal structure, or both. Monte Carlo simulation experiments are carried out to investigate how well these algorithms extract components such as trend, cycle, and seasonality. Moreover, we empirically test the performance of the three ICA procedures at capturing the dynamic relationships among the industrial production index (IPI) time series of four European countries. We also compare the accuracy of IPI forecasts based on a few JADE, SOBI, and FOTBI components at different time horizons. According to the results, FOTBI seems to be a good starting point for automatic time series signal extraction procedures, and it also provides quite accurate forecasts for the IPIs.
    Keywords: ICA, Signal extraction, Multivariate time series, Forecasting
    Date: 2011–05
  8. By: Gianni Amisano (DG-Research, European Central Bank, Kaiserstrasse 29, D-60311, Frankfurt am Main, Germany and Department of Economics University of Brescia.); Oreste Tristani (DG-Research, European Central Bank, Kaiserstrasse 29, D-60311, Frankfurt am Main, Germany.)
    Abstract: Phenomena such as the Great Moderation have increased the attention of macro-economists towards models where shock processes are not (log-)normal. This paper studies a class of discrete-time rational expectations models where the variance of exogenous innovations is subject to stochastic regime shifts. We first show that, up to a second-order approximation using perturbation methods, regime switching in the variances has an impact only on the intercept coefficients of the decision rules. We then demonstrate how to derive the exact model likelihood for the second-order approximation of the solution when there are as many shocks as observable variables. We illustrate the applicability of the proposed solution and estimation methods in the case of a small DSGE model. JEL Classification: E0, C63.
    Keywords: DSGE models, second-order approximation, regime switching, time-varying volatility.
    Date: 2011–05
  9. By: Daniel J. Henderson; John A. List; Daniel L. Millimet; Christopher F. Parmeter; Michael K. Price
    Abstract: Nonparametric estimators provide a flexible means of uncovering salient features of auction data. Although these estimators are popular in the literature, many key features necessary for their proper implementation have yet to be uncovered. Here we provide several suggestions for nonparametric estimation of first-price auction models. Specifically, we show how to impose monotonicity of the equilibrium bidding strategy, a key property of structural auction models that is not guaranteed in standard nonparametric estimation. We further develop methods for automatic bandwidth selection. Finally, we discuss how to impose monotonicity in auctions with differing numbers of bidders, reserve prices, and auction-specific characteristics. Finite-sample performance is examined using simulated data as well as experimental auction data.
    JEL: C12 C14 D44
    Date: 2011–05
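The authors' own procedure for imposing monotonicity is not reproduced here; as a generic illustration of the constraint, one standard device for forcing a nonparametrically estimated bid function to be nondecreasing in valuations is isotonic regression via the pool-adjacent-violators algorithm. A minimal numpy sketch, with illustrative names:

```python
# Hedged sketch: pool-adjacent-violators (PAV) isotonic regression, a
# generic way to impose monotonicity on a fitted curve. This is an
# illustration of the constraint, not the authors' estimator.
import numpy as np

def isotonic_fit(y):
    """Return the nondecreasing sequence closest to y in squared error."""
    blocks = []  # (sum, count) of pooled blocks
    for v in y:
        s, c = float(v), 1
        # Merge backwards while the previous block mean exceeds this one
        while blocks and blocks[-1][0] / blocks[-1][1] > s / c:
            ps, pc = blocks.pop()
            s += ps
            c += pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each block takes its pooled mean
    return np.array(out)

# A local violation of monotonicity gets pooled away:
fit = isotonic_fit(np.array([1.0, 3.0, 2.0]))  # -> [1.0, 2.5, 2.5]
```

Applied to an estimated bidding strategy evaluated on a grid of valuations, this projection leaves already-monotone regions untouched and flattens only the violating segments.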
  10. By: Kim Christensen (Aarhus University and CREATES); Roel Oomen (Deutsche Bank, London); Mark Podolskij (University of Heidelberg and CREATES)
    Abstract: In this paper, we demonstrate that jumps in financial asset prices are not nearly as common as generally thought, and that they account for only a very small proportion of total return variation. We base our investigation on an extensive set of ultra high-frequency equity and foreign exchange rate data recorded at millisecond precision, allowing us to view the price evolution at a microscopic level. We show that, both in theory and in practice, traditional measures of jump variation based on low-frequency tick data tend to spuriously attribute a burst of volatility to the jump component, thereby severely overstating the true variation coming from jumps. Indeed, our estimates based on tick data suggest that the jump variation is an order of magnitude smaller. This finding has a number of important implications for asset pricing and risk management, and we illustrate this with a delta hedging example of an option trader who is short gamma. Our econometric analysis is built around a pre-averaging theory that allows us to work at the highest available frequency, where the data are polluted by microstructure noise. We extend the theory in a number of directions important for jump estimation and testing. This also reveals that pre-averaging has a built-in robustness property to outliers in high-frequency data, and allows us to show that some of the few remaining jumps at tick frequency are in fact induced by data-cleaning routines aimed at removing the outliers.
    Keywords: jump variation, high-frequency data, market microstructure noise, pre-averaging, realised variation, outliers.
    JEL: C10 C80
    Date: 2011–05–26
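A stylized sketch of the pre-averaging idea (not the authors' exact estimator): average noisy returns over a local window with weight g(x) = min(x, 1-x) before squaring, then subtract an estimate of the remaining noise bias. The window k_n = sqrt(n) and the continuous-limit constants psi1 = 1, psi2 = 1/12 are illustrative tuning choices:

```python
# Stylized pre-averaged realized variance under additive microstructure
# noise. Window length, weight function, and constants are illustrative
# assumptions, not the paper's exact specification.
import numpy as np

def preaveraged_rv(log_prices):
    """Pre-averaged realized variance with a correction for noise bias."""
    r = np.diff(log_prices)                       # noisy log-returns
    n = len(r)
    kn = int(np.sqrt(n))                          # pre-averaging window
    j = np.arange(1, kn)
    g = np.minimum(j / kn, 1.0 - j / kn)          # triangular weight
    ybar = np.convolve(r, g[::-1], mode="valid")  # overlapping local averages
    psi1, psi2 = 1.0, 1.0 / 12.0                  # limits of sum(g'^2), int g^2
    noise_var = np.sum(r ** 2) / (2.0 * n)        # RV/(2n) noise estimate
    return (np.sum(ybar ** 2) / (kn * psi2)
            - (psi1 / psi2) * (n / kn ** 2) * noise_var)

# Simulated example: unit integrated variance plus i.i.d. noise. Raw RV
# is badly inflated by the noise; the pre-averaged estimate is not.
rng = np.random.default_rng(3)
n = 23400
efficient = np.cumsum(rng.standard_normal(n + 1) / np.sqrt(n))  # IV = 1
noisy = efficient + 0.005 * rng.standard_normal(n + 1)
rv_raw = np.sum(np.diff(noisy) ** 2)
prv = preaveraged_rv(noisy)
```

Local averaging shrinks the noise while preserving the diffusive signal, which is also the source of the outlier robustness the abstract mentions: a single contaminated tick enters only through damped window averages.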
  11. By: Pesaran, M. H.; Smith, R. P.
    Abstract: Academic macroeconomics and the research departments of central banks have come to be dominated by Dynamic Stochastic General Equilibrium (DSGE) models based on micro-foundations of optimising representative agents with rational expectations. We argue that the dominance of this particular sort of DSGE model, and the resistance of some in the profession to alternatives, has become a straitjacket that restricts empirical and theoretical experimentation and inhibits innovation, and that the profession should embrace a more flexible approach to macroeconometric modelling. We describe one possible approach.
    JEL: C1 E1
    Date: 2011–04–13
  12. By: David F. Hendry
    Abstract: Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. We note the implications of unanticipated shifts for forecasting, economic analyses of efficient markets, inter-temporal derivations, and general-to-specific model selection, tackling outliers and non-constancy by impulse-indicator saturation, and contrast the potential success in modeling breaks with the major difficulties confronting forecasting.
    Keywords: Unpredictability, 'Black Swans', distributional shifts, forecasting, model selection
    JEL: C51 C22
    Date: 2011

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.