nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒10‒31
23 papers chosen by
Sune Karlsson
Orebro University

  1. Nested forecast model comparisons: a new approach to testing equal accuracy By Todd E. Clark; Michael W. McCracken
  2. Nonparametric methods for volatility density estimation By Bert van Es; Peter Spreij; Harry van Zanten
  3. The Multivariate k-Nearest Neighbor Model for Dependent Variables : One-Sided Estimation and Forecasting By Dominique Guegan; Patrick Rakotomarolahy
  4. In-sample tests of predictive ability: a new approach By Todd E. Clark; Michael W. McCracken
  5. A blocking and regularization approach to high dimensional realized covariance estimation By Nikolaus Hautsch; Lada M. Kyj; Roel C.A. Oomen
  6. Weak and Strong Cross Section Dependence and Estimation of Large Panels. By Alexander Chudik; M. Hashem Pesaran; Elisa Tosetti
  7. Testing the Correlated Random Coefficient Model By James J. Heckman; Daniel A. Schmierer; Sergio S. Urzua
  8. Cointegration tests of purchasing power parity By Wallace, Frederick
  9. Generalized single-index models: The EFM approach By Xia Cui; Wolfgang Karl Härdle; Lixing Zhu
  10. The Weak Instrument Problem of the System GMM Estimator in Dynamic Panel Data Models By Maurice J.G. Bun; Frank Windmeijer
  11. Detecting the Presence of Informed Price Trading Via Structural Break Tests By Jose Olmo; Keith Pilbeam; William Pouliot
  12. Well-Posedness of Measurement Error Models for Self-Reported Data By Yonghong An and Yingyao Hu
  13. Evaluation of Nonlinear time-series models for real-time business cycle analysis of the Euro area By Monica Billio; Laurent Ferrara; Dominique Guegan; Gian Luigi Mazzi
  14. The Cross-Entropy Method With Patching For Rare-Event Simulation Of Large Markov Chains By Bahar Kaynar; Ad Ridder
  15. Stochastic volatility By Torben G. Andersen; Luca Benzoni
  16. Are market makers uninformed and passive? Signing trades in the absence of quotes By Michel van der Wel; Albert J. Menkveld; Asani Sarkar
  17. Inflation Expectations: Does the Market Beat Professional Forecasts? By Makram El-Shagi
  18. Macroeconomic forecasting with real-time data: an empirical comparison By Heij, C.; Dijk, D.J.C. van; Groenen, P.J.F.
  19. Negative Data in DEA: A Simple Proportional Distance Function Approach By Kristiaan Kerstens; Ignace Van de Woestyne
  20. Statistical Inference for Multidimensional Inequality Indices By Abul Naga, Ramses
  21. Estimating the Effects of Large Shareholders Using a Geographic Instrument By Bo Becker; Henrik Cronqvist; Rüdiger Fahlenbrach
  22. Who's Who in Patents. A Bayesian approach By Nicolas CARAYOL (GREThA UMR CNRS 5113); Lorenzo CASSI (CES, Université Paris 1 Panthéon Sorbonne - CNRS)
  23. The effects of monetary policy on unemployment dynamics under model uncertainty: Evidence from the US and the euro area. By Carlo Altavilla; Matteo Ciccarelli

  1. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper develops bootstrap methods for testing whether, in a finite sample, competing out-of-sample forecasts from nested models are equally accurate. Most prior work on forecast tests for nested models has focused on a null hypothesis of equal accuracy in population - basically, whether coefficients on the extra variables in the larger, nesting model are zero. We instead use an asymptotic approximation that treats the coefficients as non-zero but small, such that, in a finite sample, forecasts from the small model are expected to be as accurate as forecasts from the large model. Under that approximation, we derive the limiting distributions of pairwise tests of equal mean square error, and develop bootstrap methods for estimating critical values. Monte Carlo experiments show that our proposed procedures have good size and power properties for the null of equal finite-sample forecast accuracy. We illustrate the use of the procedures with applications to forecasting stock returns and inflation.
    Keywords: Economic forecasting
    Date: 2009
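The null of equal finite-sample accuracy can be illustrated with a deliberately simple device: resampling the centred squared-error differential of two competing forecasts. The function below is a hedged sketch under that simplification (a plain i.i.d. bootstrap of loss differentials), not the fixed-regressor bootstrap the authors develop; `equal_mse_bootstrap` and its inputs are illustrative names.

```python
import numpy as np

def equal_mse_bootstrap(e_small, e_large, n_boot=2000, seed=0):
    """Bootstrap p-value for equal out-of-sample MSE of two forecasts.

    e_small, e_large: forecast errors of the nested (small) and nesting
    (large) model. This is a plain i.i.d. bootstrap of the centred loss
    differential -- an illustrative stand-in for the paper's procedure.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(e_small) ** 2 - np.asarray(e_large) ** 2  # loss differential
    stat = d.mean()
    d_c = d - d.mean()                    # impose the null of equal MSE
    boot = np.array([rng.choice(d_c, size=d_c.size, replace=True).mean()
                     for _ in range(n_boot)])
    return stat, float((np.abs(boot) >= np.abs(stat)).mean())
```

A positive statistic with a small p-value suggests the larger model forecasts more accurately in this sample.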
  2. By: Bert van Es; Peter Spreij; Harry van Zanten
    Abstract: Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We motivate and review several nonparametric methods for estimating the density of the volatility process. Both models based on discretely sampled continuous-time processes and discrete-time models are discussed. The key insight for the analysis is a transformation of the volatility density estimation problem into a deconvolution model for which standard methods exist. Three types of nonparametric density estimators are reviewed: the Fourier-type deconvolution kernel density estimator, a wavelet deconvolution density estimator and a penalized projection estimator. The performance of these estimators is compared.
    Keywords: stochastic volatility models, deconvolution, density estimation, kernel estimator, wavelets, minimum contrast estimation, mixing
    Date: 2009–10
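The Fourier-type deconvolution estimator mentioned first can be sketched directly: divide the empirical characteristic function of the noisy observations by the (known) noise characteristic function and invert over a truncated frequency band. The sketch below assumes Gaussian measurement noise and a sinc-kernel cutoff; the names and defaults are illustrative, not from the paper.

```python
import numpy as np

def deconv_density(y, x_grid, noise_sd, cutoff=2.0, nt=801):
    """Fourier-type deconvolution kernel density estimate.

    Observations y = x + eps with eps ~ N(0, noise_sd^2). Divides the
    empirical characteristic function by the noise characteristic
    function and inverts it, truncating frequencies at `cutoff`
    (a sinc-kernel bandwidth). Illustrative one-dimensional sketch.
    """
    t = np.linspace(-cutoff, cutoff, nt)
    phi_emp = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical cf
    phi_eps = np.exp(-0.5 * (noise_sd * t) ** 2)          # Gaussian noise cf
    integrand = phi_emp / phi_eps                          # deconvolution step
    dt = t[1] - t[0]
    est = [(np.exp(-1j * t * x) * integrand).sum().real * dt / (2 * np.pi)
           for x in np.atleast_1d(x_grid)]
    return np.array(est)
```

The cutoff plays the role of the kernel bandwidth: too small gives a biased estimate, too large amplifies the noise through the inverted characteristic function.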
  3. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: Forecasting current-quarter GDP is a permanent task in central banks, and many models have been proposed for this problem. Building on new results on the asymptotic normality of the multivariate k-nearest neighbor regression estimator, we propose a new approach to forecasting economic indicators, including GDP. For dependent mixing data sets, we prove the asymptotic normality of the multivariate k-nearest neighbor regression estimator under weak conditions, which provides confidence intervals for point forecasts. We present an application to economic indicators of the euro area and compare our method with classical ARMA-GARCH modelling.
    Keywords: Multivariate k-nearest neighbor, asymptotic normality of the regression, mixing time series, confidence intervals, forecasts, economic indicators, euro area.
    Date: 2009–07
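The core of the k-nearest-neighbor forecast is easy to sketch: embed the series into lag vectors, find the k past vectors closest to the most recent one, and average their successors. This minimal version (illustrative names, Euclidean distance, uniform weights) omits the asymptotic confidence intervals that are the paper's contribution.

```python
import numpy as np

def knn_forecast(series, k=5, lags=2):
    """One-step-ahead forecast by multivariate k-nearest neighbours.

    Embeds the series into vectors of `lags` consecutive values, finds
    the k past vectors closest (Euclidean) to the most recent one, and
    averages their successors. A minimal sketch of the k-NN regression
    idea.
    """
    x = np.asarray(series, dtype=float)
    # embedding rows are (x_t, ..., x_{t+lags-1}); target is x_{t+lags}
    emb = np.array([x[t:t + lags] for t in range(len(x) - lags)])
    targets = x[lags:]
    query = x[-lags:]                          # most recent pattern
    dist = np.linalg.norm(emb - query, axis=1)
    idx = np.argsort(dist)[:k]                 # k nearest past patterns
    return targets[idx].mean()
```

On a constant series the forecast is the constant; on a linear trend with k=1 it returns the successor of the closest earlier pattern.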
  4. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper presents analytical, Monte Carlo, and empirical evidence linking in-sample tests of predictive content and out-of-sample forecast accuracy. Our approach focuses on the negative effect that finite-sample estimation error has on forecast accuracy despite the presence of significant population-level predictive content. Specifically, we derive simple-to-use in-sample tests that test not only whether a particular variable has predictive content but also whether this content is estimated precisely enough to improve forecast accuracy. Our tests are asymptotically non-central chi-square or non-central normal. We provide a convenient bootstrap method for computing the relevant critical values. In the Monte Carlo and empirical analysis, we compare the effectiveness of our testing procedure with more common testing procedures.
    Keywords: Economic forecasting
    Date: 2009
  5. By: Nikolaus Hautsch; Lada M. Kyj; Roel C.A. Oomen
    Abstract: We introduce a regularization and blocking estimator for well-conditioned high-dimensional daily covariances using high-frequency data. Using the Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008a) kernel estimator, we estimate the covariance matrix block-wise and regularize it. A data-driven grouping of assets of similar trading frequency ensures the reduction of data loss due to refresh time sampling. In an extensive simulation study mimicking the empirical features of the S&P 1500 universe we show that the 'RnB' estimator yields efficiency gains and outperforms competing kernel estimators for varying liquidity settings, noise-to-signal ratios, and dimensions. An empirical application of forecasting daily covariances of the S&P 500 index confirms the simulation results.
    Keywords: covariance estimation, blocking, realized kernel, regularization, microstructure, asynchronous trading
    JEL: C14 C22
    Date: 2009–10
  6. By: Alexander Chudik (Universidad de Salamanca, Campus Miguel de Unamuno, Salamanca, E-23007 Salamanca, España.); M. Hashem Pesaran (Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, Cambridge, CB3 9DD, UK.); Elisa Tosetti (Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, Cambridge, CB3 9DD, UK.)
    Abstract: This paper introduces the concepts of time-specific weak and strong cross section dependence. A double-indexed process is said to be cross sectionally weakly dependent at a given point in time, t, if its weighted average along the cross section dimension (N) converges to its expectation in quadratic mean as N increases without bound, for all weights that satisfy certain ‘granularity’ conditions. The relationship with the notions of weak and strong common factors is investigated, and an application to the estimation of panel data models with an infinite number of weak factors and a finite number of strong factors is also considered. The paper concludes with a set of Monte Carlo experiments in which the small sample properties of estimators based on principal components and CCE estimators are investigated and compared under various assumptions on the nature of the unobserved common effects.
    Keywords: Panels, Strong and Weak Cross Section Dependence, Weak and Strong Factors.
    JEL: C10 C31 C33
    Date: 2009–10
  7. By: James J. Heckman; Daniel A. Schmierer; Sergio S. Urzua
    Abstract: The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.
    JEL: C31
    Date: 2009–10
  8. By: Wallace, Frederick
    Abstract: In recent work, Im, Lee, and Enders (2006) use stationary instrumental variables to test for cointegrating relationships. The advantage of their approach is that the t-statistics are asymptotically standard normal, so the familiar critical values of the normal distribution may be used to assess significance. The test thus avoids the nuisance parameter problem in single-equation regressions for cointegration. Using an updated version of the data set developed by Taylor (2002), the ILE test is compared to three single-equation alternatives in testing for purchasing power parity: an error correction model, an autoregressive distributed lag model, and the Engle-Granger two-step procedure. The regressions with instruments provide evidence supportive of PPP for some countries, but the empirical results differ across tests and the choice of instrument can affect the results.
    Keywords: Cointegration; purchasing power parity
    JEL: C20 F31
    Date: 2009–10–01
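For reference, the Engle-Granger two-step procedure that the ILE test is compared against can be sketched in a few lines: a cointegrating OLS regression followed by a Dickey-Fuller regression on its residuals. The returned t-statistic must be judged against Engle-Granger (not standard normal) critical values; this is exactly the nuisance-parameter issue the abstract notes the ILE test avoids. The function name is illustrative.

```python
import numpy as np

def engle_granger_t(y, x):
    """Engle-Granger two-step cointegration check (sketch).

    Step 1: OLS of y on x (with constant) gives residuals e_t.
    Step 2: Dickey-Fuller regression delta e_t = rho * e_{t-1} + u_t.
    Returns the t-statistic on rho (no augmentation lags).
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                    # cointegrating residuals
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)      # DF slope
    u = de - rho * lag
    se = np.sqrt((u @ u) / (len(u) - 1) / (lag @ lag))
    return rho / se
```

Strongly negative values point toward cointegration of the pair.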
  9. By: Xia Cui; Wolfgang Karl Härdle; Lixing Zhu
    Abstract: Generalized single-index models are natural extensions of linear models and circumvent the so-called curse of dimensionality. They are becoming increasingly popular in many scientific fields including biostatistics, medicine, economics and financial econometrics. Estimating and testing the model index coefficients beta is one of the most important objectives in the statistical analysis. However, the commonly used constraint on the index coefficients, ||beta|| = 1, represents a non-regular problem: the true index is on the boundary of the unit ball. In this paper we introduce the EFM approach, a method of estimating functions, to study the generalized single-index model. The procedure is to first relax the equality constraint to one with (d - 1) components of beta lying in an open unit ball, and then to construct the associated (d - 1) estimating functions by projecting the score function onto the linear space spanned by the residuals, with the unknown link estimated by kernel estimating functions. Root-n consistency and asymptotic normality of the estimator obtained from solving the resulting estimating equations are established, and a Wilks-type theorem for testing the index is demonstrated. A noticeable result is that our estimator for beta has smaller or equal limiting variance than the estimator of Carroll et al. (1997). A fixed-point iterative scheme for computing this estimator is proposed. This algorithm involves only one-dimensional nonparametric smoothers, thereby avoiding the data sparsity problem caused by high model dimensionality. Numerical studies based on simulations and applications suggest that this new estimating system is quite powerful and easy to implement.
    Keywords: Generalized single-index model, index coefficients, estimating equations, asymptotic properties, iteration
    JEL: C02 C13 C14 C21
    Date: 2009–10
  10. By: Maurice J.G. Bun (University of Amsterdam); Frank Windmeijer (University of Bristol)
    Abstract: The system GMM estimator for dynamic panel data models combines moment conditions for the model in first differences with moment conditions for the model in levels. It has been shown to improve on the GMM estimator in the first-differenced model in terms of bias and root mean squared error. However, we show that in the covariance-stationary panel data AR(1) model the expected values of the concentration parameters in the differenced and levels equations for the cross section at time t are the same when the variances of the individual heterogeneity and idiosyncratic errors are equal. This indicates a weak instrument problem also for the equation in levels. We show that the biases of 2SLS relative to OLS are then similar for the equations in differences and levels, as are the size distortions of the Wald tests. These results are shown to extend to the panel data GMM estimators.
    Keywords: Dynamic Panel Data; System GMM; Weak Instruments
    JEL: C12 C13 C23
    Date: 2009–10–09
  11. By: Jose Olmo (Department of Economics, City University, London); Keith Pilbeam (Department of Economics, City University, London); William Pouliot (Department of Economics, City University, London and Management School, University of Liverpool)
    Abstract: The occurrence of abnormal returns before unscheduled announcements is usually identified with informed price movements. The detection of such observations, which lie beyond the range of returns generated by the normal day-to-day activity of financial markets, is therefore a concern both for regulators monitoring the proper functioning of financial markets and for investors concerned about their portfolios. In this article we introduce a novel method to detect informed price movements via structural break tests for the intercept of an extended CAPM model describing the risk premium of financial returns. These tests are based on a U-statistic type process that is sensitive to changes in the intercept occurring very early in the evaluation period and that can be used to construct a consistent estimator of the timing of the change. As a byproduct, we show that estimators of the timing of change constructed from standard CUSUM statistics are inconsistent and therefore fail to provide useful information about the presence of informed price movements.
    Keywords: CUSUM tests; ECAPM; Informed Price Movements; Insider Trading; Linear Regression Models; Structural Change; U-statistics
    JEL: C14 G11 G12 G14 G28 G38
    Date: 2009–10
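As a baseline for the comparison the abstract draws, the classical CUSUM statistic for an intercept shift can be sketched as the maximum standardized partial sum of demeaned residuals. This is the standard statistic the authors contrast with their U-statistic process, not their proposed test; the function name and the simple variance proxy are illustrative.

```python
import numpy as np

def cusum_stat(resid):
    """Standardised CUSUM of regression residuals.

    Returns max_k |S_k| / (sigma * sqrt(n)), where S_k is the partial
    sum of demeaned residuals and sigma their sample standard
    deviation. Large values suggest an intercept shift.
    """
    e = np.asarray(resid, dtype=float)
    e = e - e.mean()
    s = np.cumsum(e)                    # partial sums
    sigma = e.std(ddof=1)
    return np.abs(s).max() / (sigma * np.sqrt(e.size))
```

Under the no-break null the statistic behaves like the sup of a Brownian bridge, with a 5% critical value of roughly 1.36.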
  12. By: Yonghong An and Yingyao Hu
    Abstract: It is widely acknowledged that the inverse problem of estimating the distribution of a latent variable X* from an observed sample of X, a contaminated measurement of X*, is ill-posed. This paper shows that a property of self-reporting errors, observed in validation studies, is that the probability of reporting the truth is nonzero conditional on the true values; furthermore, this property implies that measurement error models for self-reported data are in fact well-posed. We also illustrate that classical measurement error models may be conditionally well-posed given prior information on the distribution of the latent variable X*.
    Date: 2009–10
  13. By: Monica Billio (Università Ca' Foscari di Venezia - Dipartimento di Scienze Economiche); Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, DGEI-DAMEP - Banque de France); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Gian Luigi Mazzi (Eurostat - Office Statistique des Communautés Européennes)
    Abstract: In this paper, we assess the ability of Markov-switching and threshold models to identify turning points of economic cycles. Using vintage data updated on a monthly basis, we compare their ability to detect ex post the occurrence of turning points of the classical business cycle, evaluate the stability over time of the signal emitted by the models, and assess their ability to detect recession signals in real time. To this end, we have built a historical vintage database for the euro area going back to 1970 for two monthly macroeconomic variables of major importance for the short-term economic outlook, namely the Industrial Production Index and the Unemployment Rate.
    Keywords: Business cycle, Euro zone, Markov switching model, SETAR model, unemployment, industrial production.
    Date: 2009–08
  14. By: Bahar Kaynar (VU University Amsterdam); Ad Ridder (VU University Amsterdam)
    Abstract: There are various importance sampling schemes for estimating rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The experimental results show considerable improvements in both the running time of the algorithm and the variance of the estimator.
    Keywords: Cross-Entropy; Rare Events; Importance Sampling; Large-Scale Markov Chains
    JEL: C6
    Date: 2009–09–30
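The cross-entropy idea can be shown on a textbook single-parameter example: tilting the rate of an exponential density toward a rare tail event and estimating the probability with likelihood-ratio weights. This is far simpler than the state-partitioned Markov-chain scheme of the paper; all names and defaults below are illustrative.

```python
import numpy as np

def ce_rare_event(gamma=20.0, n=10_000, rho=0.1, iters=15, seed=1):
    """Cross-entropy estimate of P(X > gamma) for X ~ Exp(1).

    Iteratively tilts the sampling rate v toward the rare set using the
    top rho-fraction of samples, then estimates the probability with
    likelihood-ratio weights f_1(x)/f_v(x) = exp(-x*(1-v))/v.
    """
    rng = np.random.default_rng(seed)
    v = 1.0                                   # current sampling rate
    for _ in range(iters):
        x = rng.exponential(1.0 / v, size=n)
        level = np.quantile(x, 1 - rho)       # elite threshold
        if level >= gamma:
            break
        elite = x[x >= level]
        w = np.exp(-elite * (1 - v)) / v      # likelihood ratios
        v = w.sum() / (w * elite).sum()       # CE update for Exp family
    x = rng.exponential(1.0 / v, size=n)
    w = np.exp(-x * (1 - v)) / v
    return np.mean(w * (x >= gamma))
```

With gamma = 20 the true probability is exp(-20), about 2e-9, which crude Monte Carlo with this sample size would almost never hit; the tilted estimator recovers it with small relative error.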
  15. By: Torben G. Andersen; Luca Benzoni
    Abstract: Given the importance of return volatility for a number of practical financial management decisions, the efforts to provide good real-time estimates and forecasts of current and future volatility have been extensive. The main framework used in this context involves stochastic volatility models. In a broad sense, this model class includes GARCH, but we focus on a narrower set of specifications in which volatility follows its own random process, as is common in models originating within financial economics. The distinguishing feature of these specifications is that volatility, being inherently unobservable and subject to independent random shocks, is not measurable with respect to observable information. In what follows, we refer to these models as genuine stochastic volatility models. Much modern asset pricing theory is built on continuous-time models. The natural concept of volatility within this setting is that of genuine stochastic volatility. For example, stochastic-volatility (jump-) diffusions have provided a useful tool for a wide range of applications, including the pricing of options and other derivatives, the modeling of the term structure of risk-free interest rates, and the pricing of foreign currencies and defaultable bonds. The increased use of intraday transaction data for construction of so-called realized volatility measures provides additional impetus for considering genuine stochastic volatility models. As we demonstrate below, the realized volatility approach is closely associated with the continuous-time stochastic volatility framework of financial economics. There are some unique challenges in dealing with genuine stochastic volatility models. For example, volatility is truly latent and this feature complicates estimation and inference. Further, the presence of an additional state variable - volatility - renders the model less tractable from an analytic perspective. We examine how such challenges have been addressed through development of new estimation methods and imposition of model restrictions allowing for closed-form solutions while remaining consistent with the dominant empirical features of the data.
    Date: 2009
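The realized volatility measures referred to above are straightforward to compute: sum the squared intraday log returns. A minimal sketch (illustrative name, no microstructure-noise correction):

```python
import numpy as np

def realized_variance(prices):
    """Realized variance from intraday prices: sum of squared log returns.

    As the sampling interval shrinks (absent microstructure noise), this
    converges to the integrated variance of the underlying
    continuous-time stochastic volatility process.
    """
    p = np.asarray(prices, dtype=float)
    r = np.diff(np.log(p))          # intraday log returns
    return np.sum(r**2)
```

For a log-price random walk with per-step standard deviation s and n steps, the expected realized variance is close to n * s^2.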
  16. By: Michel van der Wel; Albert J. Menkveld; Asani Sarkar
    Abstract: We develop a new likelihood-based approach to signing trades in the absence of quotes. This approach is as efficient as existing Markov-chain Monte Carlo methods, but more than ten times faster. It can handle the occurrence of multiple trades at the same time and allows for analysis of settings in which trade times are observed with noise. We apply this method to a high-frequency data set of thirty-year U.S. Treasury futures to investigate the role of the market maker. Most theory characterizes the market maker as an uninformed, passive supplier of liquidity. Our findings suggest, however, that some market makers actively demand liquidity for a substantial part of the day and that they are informed speculators.
    Keywords: Electronic trading of securities ; Liquidity (Economics) ; Speculation ; Futures
    Date: 2009
  17. By: Makram El-Shagi
    Abstract: This paper compares expected inflation to (econometric) inflation forecasts based on a number of forecasting techniques from the literature, using a panel of ten industrialized countries during the period 1988 to 2007. To capture expected inflation we develop a recursive filtering algorithm which extracts unexpected inflation from real interest rate data, even in the presence of diverse risks and a potential Mundell-Tobin effect. The extracted unexpected inflation is compared to the forecasting errors of ten econometric forecasts. Besides the standard AR(p) and ARMA(1,1) models, which are known to perform best on average, we also employ several Phillips-curve-based approaches, VARs, dynamic factor models and two simple model averaging approaches.
    Keywords: Inflation Expectations, Rational Expectations, Inflation Forecasting
    JEL: E31 E37
    Date: 2009–10
  18. By: Heij, C.; Dijk, D.J.C. van; Groenen, P.J.F. (Erasmus Econometric Institute)
    Abstract: Macroeconomic forecasting is not an easy task, in particular if future growth rates are forecasted in real time. This paper compares various methods to predict the growth rate of US Industrial Production (IP) and of the Composite Coincident Index (CCI) of the Conference Board, over the coming month, quarter, and half year. It turns out that future IP growth rates can be forecasted in real time from ten leading indicators, by means of the Composite Leading Index (CLI) or, even somewhat better, by principal components regression. This amends earlier negative findings for IP by Diebold and Rudebusch. For CCI, on the other hand, simple autoregressive models do not provide significantly less accurate forecasts than single-equation and bivariate vector autoregressive models with the CLI. This amends some of the more positive results for CCI recently reported by the Conference Board. Not surprisingly, all forecast methods improve considerably if 'ex post' data are used, after possible data updates and revisions.
    Keywords: vintage data; leading indicators; forecast evaluation; recessions; industrial production; composite coincident index
    Date: 2009–10–19
  19. By: Kristiaan Kerstens (CNRS-LEM (UMR 8179), IÉSEG School of Management); Ignace Van de Woestyne (Hogeschool Universiteit Brussel, Brussels, Belgium)
    Abstract: The need to adapt Data Envelopment Analysis (DEA) and other frontier models to negative data has been a rather neglected issue in the literature. Silva Portela, Thanassoulis, and Simpson (2004) proposed a variation on the directional distance function, a very general distance function that is dual to the profit function, to accommodate eventual negative data. In this contribution, we suggest a simple variation on the proportional distance function that can do the same job.
    Keywords: DEA, negative data, directional distance function
    Date: 2009–04
  20. By: Abul Naga, Ramses
    Abstract: We use the delta method to derive the large sample distribution of multidimensional inequality indices. We also present a simple method for computing standard errors and obtain explicit formulas in the context of two families of indices.
    Keywords: multidimensional inequality indices; large sample distributions; standard errors
    Date: 2009
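The delta method described here has a direct generic implementation: numerically approximate the gradient of the index function at the estimate and form sqrt(grad' Cov grad). The sketch below uses a generic function `g` rather than one of the paper's specific inequality indices; the names are illustrative.

```python
import numpy as np

def delta_method_se(theta_hat, cov, g, eps=1e-6):
    """Delta-method standard error of g(theta_hat).

    theta_hat: parameter estimate (vector); cov: its estimated
    covariance matrix; g: smooth scalar function of the parameters.
    Uses central differences for the gradient and returns
    sqrt(grad' Cov grad).
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    cov = np.asarray(cov, dtype=float)
    grad = np.zeros_like(theta_hat)
    for i in range(theta_hat.size):
        d = np.zeros_like(theta_hat)
        d[i] = eps
        grad[i] = (g(theta_hat + d) - g(theta_hat - d)) / (2 * eps)
    return float(np.sqrt(grad @ cov @ grad))
```

For a ratio g(t) = t0/t1 at (1, 2) with independent variances (0.01, 0.04), the gradient is (0.5, -0.25) and the variance is 0.25*0.01 + 0.0625*0.04 = 0.005.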
  21. By: Bo Becker (Harvard Business School, Finance Unit); Henrik Cronqvist (Claremont McKenna College, Robert Day School of Economics and Finance); Rüdiger Fahlenbrach (Ohio State University, Fisher College of Business and Ecole polytechnique Fédérale de Lausanne)
    Abstract: Large shareholders may play an important role for firm performance and policies, but identifying this empirically presents a challenge due to the endogeneity of ownership structures. We develop and test an empirical framework which allows us to separate selection from treatment effects of large shareholders. Individual blockholders tend to hold blocks in public firms located close to where they reside. Using this empirical observation, we develop an instrument - the density of wealthy individuals near a firm's headquarters - for the presence of a large, non-managerial individual shareholder in a firm. These shareholders have a large impact on firms, controlling for selection effects.
    Date: 2009–09
  22. By: Nicolas CARAYOL (GREThA UMR CNRS 5113); Lorenzo CASSI (CES, Université Paris 1 Panthéon Sorbonne - CNRS)
    Abstract: This paper proposes a Bayesian methodology to treat the who's who problem arising in individual-level data sets such as patent data. We assess the usefulness of this methodology on the set of all French inventors appearing on EPO applications from 1978 to 2003.
    Keywords: Patents, homonymy, Bayes rule
    JEL: C81 C88 O30
    Date: 2009
  23. By: Carlo Altavilla (University of Naples Parthenope, Via Medina, 40 - 80133 Naples, Italy.); Matteo Ciccarelli (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper explores the role that imperfect knowledge of the structure of the economy plays in the uncertainty surrounding the effects of rule-based monetary policy on unemployment dynamics in the euro area and the US. We employ a Bayesian model averaging procedure on a wide range of models which differ in several dimensions to account for the uncertainty that the policymaker faces when setting monetary policy and evaluating its effect on the real economy. We find evidence of a high degree of dispersion across models in both policy rule parameters and impulse response functions. Moreover, monetary policy shocks have very similar recessionary effects on the two economies, with a different role played by the participation rate in the transmission mechanism. Finally, we show that a policymaker who does not take model uncertainty into account and selects results on the basis of a single model may come to misleading conclusions not only about the transmission mechanism, but also about the differences between the euro area and the US, which are on average small.
    Keywords: Monetary policy, Model uncertainty, Bayesian model averaging, Unemployment gap, Taylor rule.
    JEL: C11 E24 E52 E58
    Date: 2009–09

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.