nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒08‒08
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. A Nonlinear Panel Unit Root Test under Cross Section Dependence By Mario Cerrato; Christian de Peretti; Rolf Larsson; Nick Sarantis
  2. A New Simple Test Against Spurious Long Memory Using Temporal Aggregation By Kuswanto, Heri
  3. Forecast evaluation of small nested model sets. By Kirstin Hubrich; Kenneth D. West
  4. Stochastische Überlagerung mit Hilfe der Mischungsverteilung [Stochastic Superposition by Means of the Mixture Distribution] By Gerd Ronning
  5. Optimal Prediction Pools. By John Geweke; Gianni Amisano
  6. Nonparametric Identification and Estimation of Nonadditive Hedonic Models By Heckman, James J.; Matzkin, Rosa; Nesheim, Lars
  7. A Note on the Application of EC2SLS and EC3SLS Estimators in Panel Data Models By Badi H. Baltagi; Long Liu
  8. Identifying the elasticity of substitution with biased technical change. By Miguel A. León-Ledesma; Peter McAdam; Alpo Willman
  9. Another Look at the Identification at Infinity of Sample Selection Models By d'Haultfoeuille, Xavier; Maurel, Arnaud
  10. On the density distribution across space: a probabilistic approach By Ilenia Epifani; Rosella Nicolini
  11. Long Memory and Tail dependence in Trading Volume and Volatility By Eduardo Rossi; Paolo Santucci de Magistris
  12. Are more data always better for factor analysis? Results for the euro area, the six largest euro area countries and the UK. By Giovanni Caggiano; George Kapetanios; Vincent Labhard
  13. Forecasting the World Economy in the Short-Term. By Audrone Jakaitiene; Stéphane Dées
  14. The forecasting power of international yield curve linkages. By Michele Modugno; Kleopatra Nikolaou
  15. How Accurate are Government Forecasts of Economic Fundamentals? By Chang, C-L.; Franses, Ph.H.B.F.; McAleer, M.
  16. Bayes reliability measures of Lognormal and inverse Gaussian distributions under ML-II ε-contaminated class of prior distributions By Sinha, Pankaj; Jayaraman, Prabha

  1. By: Mario Cerrato; Christian de Peretti; Rolf Larsson; Nick Sarantis
    Abstract: We propose a nonlinear heterogeneous panel unit root test for testing the null hypothesis of unit root processes against the alternative that allows a proportion of units to be generated by globally stationary ESTAR processes and the remaining non-zero proportion to be generated by unit root processes. The proposed test is simple to implement and accommodates cross sectional dependence. We show that the distribution of the test statistic is free of nuisance parameters as (N, T) → ∞. Monte Carlo simulation shows that our test has correct size and, under the hypothesis that data are generated by globally stationary ESTAR processes, has better power than the recent test proposed in Pesaran [2007]. An application to a panel of bilateral real exchange rate series with the US Dollar from the 20 major OECD countries is provided.
    Keywords: Nonlinear panel unit root tests, cross sectional dependence.
    JEL: C12 C15 C22 C23 F31
    Date: 2009–07
  2. By: Kuswanto, Heri
    Abstract: We have developed a new test against spurious long memory based on the invariance of the long memory parameter to aggregation. Using the local Whittle estimator, the statistic takes the supremum among combinations of paired aggregated series. Simulations show that the test performs well in finite samples and is able to distinguish long memory from spurious processes with excellent power. Moreover, an empirical application gives further evidence that the observed long memory in German stock returns is spurious.
    Keywords: Local-Whittle method, Spurious long memory, Change point, Aggregation
    JEL: C12 C22
    Date: 2009–08
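The aggregation-invariance idea underlying this test can be illustrated with a minimal sketch. This is not the paper's actual statistic (which takes a supremum over paired aggregated series); it only shows the two ingredients the abstract names: a local Whittle estimate of the memory parameter d, and temporal aggregation. Under true long memory, d should be roughly invariant to aggregation; under spurious long memory (e.g. short memory with breaks) it typically is not. Function names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Local Whittle estimate of the long-memory parameter d (Robinson, 1995).

    Minimizes R(d) = log(mean(lam^{2d} I(lam))) - 2d * mean(log lam)
    over the first m Fourier frequencies.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)            # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

def aggregate(x, k):
    """Non-overlapping temporal aggregation at level k (sums of k consecutive obs.)."""
    n = (len(x) // k) * k
    return np.asarray(x[:n]).reshape(-1, k).sum(axis=1)
```

A true ARFIMA(0, d, 0) series should then yield similar `local_whittle_d` values before and after aggregation, which is the invariance the test exploits.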
  3. By: Kirstin Hubrich (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Kenneth D. West (University of Wisconsin, Madison, Department of Economics,  1180 Observatory Drive,  Madison, WI 53706, USA.)
    Abstract: We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic, and White’s (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation. JEL Classification: C32, C53, E37.
    Keywords: Out-of-sample, prediction, testing, multiple model comparisons, inflation forecasting.
    Date: 2009–03
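As a rough illustration of the ingredients (not the paper's exact procedure or critical values), the sketch below computes Clark and West (2007) adjusted MSPE differences for a small set of nesting alternatives and the maximum t-statistic across them. It uses a simple iid standard error where a HAC estimator would be appropriate for multi-step or overlapping forecasts; all function names are hypothetical.

```python
import numpy as np

def clark_west_adjusted(y, f_bench, f_alts):
    """Clark-West (2007) adjusted MSPE differences for nested comparisons.

    y: realized values; f_bench: benchmark forecasts;
    f_alts: sequence of forecast series from larger models nesting the benchmark.
    Returns a (T, k) array of per-period adjusted loss differences.
    """
    y = np.asarray(y, float)
    fb = np.asarray(f_bench, float)
    cols = []
    for fa in f_alts:
        fa = np.asarray(fa, float)
        # adjusted difference: (y - fb)^2 - [(y - fa)^2 - (fb - fa)^2]
        cols.append((y - fb) ** 2 - ((y - fa) ** 2 - (fb - fa) ** 2))
    return np.column_stack(cols)

def max_t_stat(adj):
    """t-statistic of the mean adjusted difference per alternative; return the max.

    Uses an iid standard error for simplicity; serially correlated forecast
    errors would call for a HAC variance estimate instead.
    """
    n, _ = adj.shape
    means = adj.mean(axis=0)
    se = adj.std(axis=0, ddof=1) / np.sqrt(n)
    return np.max(means / se)
```

The paper's contribution is the simultaneous (rather than sequential) comparison and the reference distribution for this maximum; the sketch only shows how the adjusted differences are formed.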
  4. By: Gerd Ronning
    Abstract: The paper considers the effect of additive and multiplicative measurement errors on the estimation of linear models. We assume that such measurement errors have been applied to the micro data on purpose in order to protect them against re-identification. In particular, measurement errors with a bimodal mixture distribution are analyzed. First the case of cross-section data is treated. Then, for panel data, both the "naive" estimator ("within" estimator, mixed effects estimator) and IV estimators are considered. In particular, the effect of autocorrelation of regressors in short panels is discussed.
    Keywords: bimodal mixture distribution
    JEL: F2 F43
    Date: 2009–03
  5. By: John Geweke (Departments of Statistics and Economics, University of Iowa, Iowa City, IA, USA.); Gianni Amisano (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components. JEL Classification: C11, C53.
    Keywords: forecasting, GARCH, log scoring, Markov mixture, model combination, S&P 500 returns, stochastic volatility.
    Date: 2009–03
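The optimization described in this abstract, choosing pool weights to maximize the average log predictive score (which is concave in the weights on the simplex), can be sketched as follows. This is a minimal illustration under a softmax reparametrization, not the authors' code; the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_pool_weights(pred_densities):
    """Weights maximizing the average log predictive score of a linear pool.

    pred_densities: (T, n) array of p_i(y_t), the predictive density each
    model i assigned to the realized outcome y_t. The log score
    sum_t log(sum_i w_i p_it) is concave in w on the simplex.
    """
    P = np.asarray(pred_densities, float)
    n = P.shape[1]

    def neg_log_score(theta):
        w = np.exp(theta - theta.max())   # softmax keeps w on the simplex
        w /= w.sum()
        return -np.mean(np.log(P @ w))

    res = minimize(neg_log_score, np.zeros(n), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()
```

Consistent with the abstract's point, several models can receive positive weight even when one of them would dominate by posterior probability alone.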
  6. By: Heckman, James J. (University of Chicago); Matzkin, Rosa (University of California, Los Angeles); Nesheim, Lars (University College London)
    Abstract: This paper studies the identification and estimation of preferences and technologies in equilibrium hedonic models. In it, we identify nonparametric structural relationships with nonadditive heterogeneity. We determine what features of hedonic models can be identified from equilibrium observations in a single market under weak assumptions about the available information. We then consider use of additional information about structural functions and heterogeneity distributions. Separability conditions facilitate identification of consumer marginal utility and firm marginal product functions. We also consider how identification is facilitated using multimarket data.
    Keywords: hedonic models, hedonic equilibrium, nonadditive models, identification, non-parametric estimation
    JEL: C14 D41 D58
    Date: 2009–07
  7. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio, One UTSA Circle, TX 78249-0633)
    Abstract: Baltagi and Li (1992) showed that for estimating a single equation in a simultaneous panel data model, EC2SLS has more instruments than G2SLS. Although these extra instruments are redundant in White (1986) terminology, they may yield different estimates and standard errors in empirical studies with finite N and T. We illustrate this using the crime data of Cornwell and Trumbull (1994). We show that the standard errors of EC2SLS are smaller than those of G2SLS for this example. In general, we prove that the asymptotic variance of G2SLS differs from that of EC2SLS by a positive semi-definite matrix. Although this difference tends to zero as the sample size tends to infinity, in small samples it may be different from zero and can lead to gains in small sample efficiency. This proof is extended to the system equations 3SLS counterparts.
    Keywords: Instrumental variables; panel data
    JEL: C13
    Date: 2009–07
  8. By: Miguel A. León-Ledesma (Department of Economics, Keynes College, University of Kent, Canterbury, Kent CT2 7NP, United Kingdom.); Peter McAdam (Corresponding author: Research Department, European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Alpo Willman (Research Department, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: Despite being critical parameters in many economic fields, the received wisdom in the theoretical and empirical literatures states that joint identification of the elasticity of capital-labor substitution and technical bias is infeasible. This paper challenges that pessimistic interpretation. Putting the new approach of "normalized" production functions at the heart of a Monte Carlo analysis, we identify the conditions under which identification is feasible and robust. The key result is that jointly modeling the production function and first-order conditions is superior to single-equation approaches in terms of robustly capturing production and technical parameters, especially when merged with "normalization". Our results have fundamental implications for production-function estimation under non-neutral technical change, for understanding the empirical relevance of normalization, and for the variability underlying past empirical studies. JEL Classification: C22, E23, O30, O51.
    Keywords: Constant Elasticity of Substitution, Factor-Augmenting Technical Change, Normalization, Factor Income share, Identification, Monte Carlo.
    Date: 2009–01
  9. By: d'Haultfoeuille, Xavier (CREST-INSEE); Maurel, Arnaud (ENSAE-CREST)
    Abstract: It is often believed that without an instrument, endogenous sample selection models are identified only if a covariate with a large support is available (see Chamberlain, 1986, and Lewbel, 2007). We propose a new identification strategy mainly based on the condition that the selection variable becomes independent of the covariates when the outcome, not one of the covariates, tends to infinity. No large support on the covariates is required. Moreover, we prove that this condition is testable. We finally show that our strategy can also be applied to the identification of generalized Roy models.
    Keywords: identification at infinity, sample selection model, Roy model
    JEL: C21
    Date: 2009–07
  10. By: Ilenia Epifani; Rosella Nicolini
    Abstract: This paper aims at providing a Bayesian parametric framework for tackling the accessibility problem across space in urban theory. Adopting continuous variables in a probabilistic setting, we are able to associate the density distribution with Kendall's tau index and replicate the general issues related to the role of proximity in a more general context. In addition, by referring to the Beta and Gamma distributions, we are able to introduce a differentiation feature in each spatial unit without relying on any a priori definition of territorial units. We also provide an empirical application of our theoretical setting to study the density distribution of the population across Massachusetts.
    Keywords: Agglomerations, Bayesian inference, Distance, Gibbs sampling, Kendall's tau index, Population density.
    JEL: C40 R14
    Date: 2009–07–22
  11. By: Eduardo Rossi (Dipartimento di economia politica e metodi quantitativi, University of Pavia, Italy.); Paolo Santucci de Magistris (Dipartimento di economia politica e metodi quantitativi, University of Pavia, Italy)
    Abstract: This paper investigates long-run dependencies of volatility and volume, supposing that they are driven by the same information process. Log-realized volatility and log-volume are characterized by upper and lower tail dependence, where the positive tail dependence is mainly due to the jump component. The possibility that volume and volatility are driven by a common fractionally integrated stochastic trend, as the Mixture Distribution Hypothesis prescribes, is rejected. We model the two series with a bivariate Fractionally Integrated VAR specification. The joint density is parameterized by means of different copula functions, which provide flexibility in modeling the dependence in the extremes and are computationally convenient. Finally, we present a simulation exercise to validate the model.
    Keywords: Realized Volatility, Trading Volume, Fractional Cointegration, Tail dependence, Copula Modeling
    JEL: C32 G12
    Date: 2009–07–13
  12. By: Giovanni Caggiano (Department of Economics, University of Padua, Via del Santo 33, 35123 Padova, Italy.); George Kapetanios (Department of Economics, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom.); Vincent Labhard (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: Factor based forecasting has been at the forefront of developments in the macroeconometric forecasting literature in the recent past. Despite the flurry of activity in the area, a number of specification issues such as the choice of the number of factors in the forecasting regression, the benefits of combining factor-based forecasts and the choice of the dataset from which to extract the factors remain partly unaddressed. This paper provides a comprehensive empirical investigation of these issues using data for the euro area, the six largest euro area countries, and the UK. JEL Classification: C10, C15, C53.
    Keywords: Factors, Large Datasets, Forecast Combinations.
    Date: 2009–05
  13. By: Audrone Jakaitiene (Institute of Mathematics and Informatics, Akademijos st. 4, LT-08663 Vilnius, Lithuania.); Stéphane Dées (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: Forecasting the world economy is a difficult task given the complex interrelationships within and across countries. This paper proposes a number of approaches to forecast short-term changes in selected world economic variables and aims, first, at ranking various forecasting methods in terms of forecast accuracy and, second, at checking whether methods forecasting directly aggregate variables (direct approaches) outperform methods based on the aggregation of country-specific forecasts (bottom-up approaches). Overall, all methods perform better than a simple benchmark for short horizons (up to three months ahead). Among the forecasting approaches used, factor models appear to perform the best. Moreover, direct approaches outperform bottom-up ones for real variables, but not for prices. Finally, when country-specific forecasts are adjusted to match direct forecasts at the aggregate levels (top-down approaches), the forecast accuracy is neither improved nor deteriorated (i.e. top-down and bottom-up approaches are broadly equivalent in terms of country-specific forecast accuracy). JEL Classification: C53, C32, E37, F17.
    Keywords: Factor models, Forecasts, Time series models.
    Date: 2009–06
  14. By: Michele Modugno (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Kleopatra Nikolaou (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: This paper investigates whether information from foreign yield curves helps forecast domestic yield curves out-of-sample. A nested methodology to forecast yield curves in domestic and international settings is applied to three major countries (the US, Germany and the UK). This novel methodology is based on dynamic factor models, the EM algorithm and the Kalman filter. The domestic model is compared vis-à-vis an international one, where information from foreign yield curves is allowed to enrich the information set of the domestic yield curve. The results have interesting and original implications. They reveal clear international dependency patterns, strong enough to improve forecasts for Germany and, to a lesser extent, the UK. The US yield curve exhibits more independent behaviour. In this way, the paper also generalizes anecdotal evidence on international interest rate linkages to the whole yield curve. JEL Classification: F31.
    Keywords: Yield curve forecast, Dynamic factor model, EM algorithm, International linkages.
    Date: 2009–04
  15. By: Chang, C-L.; Franses, Ph.H.B.F.; McAleer, M. (Erasmus Econometric Institute)
    Abstract: A government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among others. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an econometric model is non-replicable. Governments typically provide non-replicable forecasts (or, expert forecasts) of economic fundamentals, such as the inflation rate and real GDP growth rate. In this paper, we develop a methodology to evaluate non-replicable forecasts. We argue that in order to do so, one needs to retrieve from the non-replicable forecast its replicable component, and that it is the difference in accuracy between these two that matters. An empirical example to forecast economic fundamentals for Taiwan shows the relevance of the proposed methodological approach. Our main finding is that it is the undocumented knowledge of the Taiwanese government that reduces forecast errors substantially.
    Keywords: government forecasts; generated regressors; replicable government forecasts; non-replicable government forecasts; initial forecasts; revised forecasts
    JEL: E37
    Date: 2009–07–23
  16. By: Sinha, Pankaj; Jayaraman, Prabha
    Abstract: In this paper we employ the ML-II ε-contaminated class of priors to study the sensitivity of Bayes reliability measures for an Inverse Gaussian (IG) distribution and a Lognormal (LN) distribution to misspecification in the prior. The numerical illustrations suggest that the reliability measures of both distributions are not sensitive to a moderate amount of misspecification in prior distributions belonging to the ML-II ε-contaminated class.
    Keywords: Bayes reliability; ML-II ε-contaminated prior
    JEL: C13 C44 C02 C46 C11 C01
    Date: 2009–07–29

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.