nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒04‒29
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Simple Wald tests of the fractional integration parameter: an overview of new results By Juan Jose Dolado; Jesus Gonzalo; Laura Mayoral
  2. Long memory or shifting means? A new approach and application to realised volatility By Eduardo Mendes; Les Oxley; William Rea; Marco Reale
  3. The Spectral Representation of Markov-Switching Arma Models By Beatrice Pataracchia
  4. Improving Upon the Marginal Empirical Distribution Functions when the Copula is Known By Segers, J.J.J.; Akker, R. van den; Werker, B.J.M.
  5. Possibly Ill-behaved Posteriors in Econometric Models By Lennart Hoogerheide; Herman K. van Dijk
  6. How much structure in empirical models? By Canova, Fabio
  8. Consumer preferences and demand systems By Barnett, William A.; Serletis, Apostolos
  9. A non-parametric method to nowcast the Euro Area IPI By Laurent Ferrara; Thomas Raffinot
  10. The dynamics of economics functions: modelling and forecasting the yield curve By Clive G. Bowsher; Roland Meeks
  11. Minimizing Bias in Selection on Observables Estimators When Unconfoundedness Fails By Daniel Millimet; Rusty Tchernis
  12. JBendge: An Object-Oriented System for Solving, Estimating and Selecting Nonlinear Dynamic Models By Viktor Winschel; Markus Krätzig
  13. Modeling Expectations with Noncausal Autoregressions By Lanne, Markku; Saikkonen, Pentti
  14. Adaptive Experimental Design Using the Propensity Score By Hahn, Jinyong; Hirano, Keisuke; Karlan, Dean
  15. Interpreting long-horizon estimates in predictive regressions By Erik Hjalmarsson
  16. Estimating fundamental cross-section dispersion from fixed event forecasts By Jonas Dovern; Ulrich Fritsche
  17. The Realisation of Finite-Sample Frequency-Selective Filters By Prof D.S.G. Pollock
  18. Wavelets unit root test vs DF test: A further investigation based on Monte Carlo experiments By Ibrahim Ahamada; Philippe Jolivaldt
  19. Forecasting Inflation in China By Mehrotra , Aaron; Sánchez-Fung, José R.
  20. Should We Trust the Empirical Evidence from Present Value Models of the Current Account? By Mercereau, Benoît; Miniane, Jacques Alain
  21. Economists, Incentives, Judgment, and Empirical Work By Colander, David C.
  22. Component-based structural equation modelling By TENENHAUS, Michel

  1. By: Juan Jose Dolado; Jesus Gonzalo; Laura Mayoral
    Abstract: This paper presents an overview of some new results regarding an easily implementable Wald test-statistic (EFDF test) of the null hypotheses that a time-series process is I(1) or I(0) against fractional I(d) alternatives, with d∈(0,1), allowing for unknown deterministic components and serial correlation in the error term. Specifically, we argue that the EFDF test has better power properties under fixed alternatives than other available tests for fractional roots, and we analyze how to implement this test when the deterministic components or the long-memory parameter are subject to structural breaks.
    Keywords: Fractional processes, Deterministic components, Power, Structural breaks
    JEL: C12 C22
    Date: 2008–01
  2. By: Eduardo Mendes; Les Oxley (University of Canterbury); William Rea; Marco Reale
    Abstract: It is now recognised that long memory and structural change can be confused because the statistical properties of time series of lengths typical of financial and econometric series are similar for both models. We propose a new set of methods aimed at distinguishing between long memory and structural change. The approach, which utilises the computationally efficient methods based upon Atheoretical Regression Trees (ART), establishes through simulation the bivariate distribution of the fractional integration parameter, d, with regime length for simulated fractionally integrated series. This bivariate distribution is then compared with the data for the time series. We also combine ART with the established goodness of fit test for long memory series due to Beran. We apply these methods to the realized volatility series of 16 stocks in the Dow Jones Industrial Average. We show that in these series the value of the fractional integration parameter is not constant with time. The mathematical consequence of this is that the definition of H-self-similarity is violated. We present evidence that these series have structural breaks.
    Keywords: Long-range dependence; Strong dependence; Global dependence; Hurst phenomena
    JEL: C22
    Date: 2008–01–29
  3. By: Beatrice Pataracchia
    Abstract: In this paper we propose a method to derive the spectral representation for a particular class of nonlinear models: Markov Switching ARMA models. The procedure simply relies on the application of the Riesz-Fisher Theorem, which describes the spectral density as the Fourier transform of the autocovariance function. We explicitly show the analytical structure of the spectral density in the simple Markov Switching AR(1) case. Finally, a monetary policy application of a Markov Switching VAR(4) is presented.
    Keywords: Multivariate ARMA models; Regime-switching models; Markov switching models; Frequency Domain
    JEL: C32 C44 E52
    Date: 2008–03
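    As a generic illustration of the Riesz-Fisher relation the abstract invokes (the spectral density as the Fourier transform of the autocovariance function), here is a minimal numerical check for a plain AR(1); this is an illustrative sketch, not code or a model from the paper:

```python
import numpy as np

# Illustrative sketch (not from the paper): for a stationary AR(1),
# x_t = phi*x_{t-1} + e_t, the spectral density equals the Fourier
# transform of the autocovariance function.

def ar1_spectral_density(omega, phi, sigma2=1.0):
    # Closed form: f(w) = sigma^2 / (2*pi*(1 - 2*phi*cos(w) + phi^2))
    return sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(omega) + phi ** 2))

def spectral_from_autocov(omega, phi, sigma2=1.0, K=200):
    # f(w) = (1/2pi) * sum_k gamma(k)*exp(-i*w*k), truncated at |k| <= K,
    # with gamma(k) = sigma^2 * phi^|k| / (1 - phi^2) for the AR(1).
    gamma0 = sigma2 / (1 - phi ** 2)
    k = np.arange(1, K + 1)
    return (gamma0 + 2 * np.sum(gamma0 * phi ** k * np.cos(omega * k))) / (2 * np.pi)

print(ar1_spectral_density(1.0, 0.5))   # closed form
print(spectral_from_autocov(1.0, 0.5))  # truncated autocovariance sum
```

    The two routes agree to numerical precision, which is the content of the theorem in this linear special case; the paper's contribution is extending such a representation to the Markov Switching setting.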
  4. By: Segers, J.J.J.; Akker, R. van den; Werker, B.J.M. (Tilburg University, Center for Economic Research)
    Abstract: At the heart of the copula methodology in statistics is the idea of separating marginal distributions from the dependence structure. However, as shown in this paper, this separation is not to be taken for granted: in the model where the copula is known and the marginal distributions are completely unknown, the empirical distribution functions are semiparametrically efficient if and only if the copula is the independence copula. Incorporating the knowledge of the copula into a nonparametric likelihood yields an estimation procedure which by simulations is shown to outperform the empirical distribution functions, the amount of improvement depending on the copula. Although the known-copula model is arguably artificial, it provides an instructive stepping stone to the more general model of a parametrically specified copula and arbitrary margins.
    Keywords: independence copula;nonparametric maximum likelihood estimator;score function;semiparametric efficiency;tangent space
    JEL: C14
    Date: 2008
  5. By: Lennart Hoogerheide (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: Highly non-elliptical posterior distributions may occur in several econometric models, in particular when the likelihood information is allowed to dominate and data information is weak. We explain the issue of highly non-elliptical posteriors in a model for the effect of education on income using data from the well-known Angrist and Krueger (1991) study and discuss how a so-called Information Matrix or Jeffreys' prior may be used as a 'regularization prior' that in combination with the likelihood yields posteriors with desirable properties. We further consider an 8-dimensional bimodal posterior distribution in a 2-regime mixture model for real US GNP growth. In order to perform a Bayesian posterior analysis using indirect sampling methods in these models, one has to find a good candidate density. In a recent paper – Hoogerheide, Kaashoek and Van Dijk (2007) – a class of neural network functions was introduced as candidate densities in case of non-elliptical posteriors. In the present paper, the connection between canonical model structures, non-elliptical credible sets, and more sophisticated neural network simulation techniques is explored. In all examples considered in this paper – a bimodal distribution of Gelman and Meng (1991) and posteriors in IV and mixture models – the mixture of Student's t distributions is clearly a much better candidate than a single Student's t candidate, yielding far more precise estimates of posterior means after the same amount of computing time, whereas the Student's t candidate almost completely misses substantial parts of the parameter space.
    Keywords: instrumental variables; vector error correction model; mixture model; importance sampling; Markov chain Monte Carlo; neural network
    JEL: C11 C15 C45
    Date: 2008–04–08
  6. By: Canova, Fabio
    Abstract: This chapter highlights the problems that structural methods and SVAR approaches have when estimating DSGE models and examining their ability to capture important features of the data. We show that structural methods are subject to severe identification problems due, in large part, to the nature of DSGE models. The problems can be patched up in a number of ways, but solved only if DSGEs are completely reparametrized or respecified. The potential misspecification of the structural relationships gives Bayesian methods an edge over classical ones in structural estimation. SVAR approaches may face invertibility problems, but simple diagnostics can help to detect and remedy them. A pragmatic empirical approach ought to use the flexibility of SVARs against potential misspecification of the structural relationships, but must firmly tie SVARs to the class of DSGE models which could have generated the data.
    Keywords: DSGE models; Identification; Invertibility; SVAR models
    JEL: C10 C52 E32 E50
    Date: 2008–04
  7. By: Mohamed Boutahar; Gilles Dufrénot; Anne Peguin-Feissolle (all GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579)
    Abstract: This paper generalizes the standard long memory modeling by assuming that the long memory parameter d is stochastic and time varying: we introduce a STAR process on this parameter characterized by a logistic function. We propose an estimation method for this model. Some simulation experiments are conducted. The empirical results suggest that this new model offers an interesting competing framework for describing the persistent dynamics of some financial series.
    Keywords: Long-memory, Logistic function, STAR
    Date: 2008–04–23
  8. By: Barnett, William A.; Serletis, Apostolos
    Abstract: This paper is an up-to-date survey of the state-of-the-art in consumer demand modelling. We review and evaluate advances in a number of related areas, including different approaches to empirical demand analysis, such as the differential approach, the locally flexible functional forms approach, the semi-nonparametric approach, and a nonparametric approach. We also address estimation issues, including sampling theoretic and Bayesian estimation methods, and discuss the limitations of the currently common approaches. Finally, we highlight the challenge inherent in achieving economic regularity, for consistency with the assumptions of the underlying neoclassical economic theory, as well as econometric regularity, when variables are nonstationary.
    Keywords: Representative consumer; Engel curves; rank; flexible functional forms; parametric tests; nonparametric tests; theoretical regularity
    JEL: C14 C50 C30 C11
    Date: 2008–04–22
  9. By: Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, DGEI-DAMEP - Banque de France); Thomas Raffinot (CPR-Asset Management - CPR Asset Management)
    Abstract: Non-parametric methods have proved to be of great interest in the statistical literature for forecasting stationary time series, but very few applications have been proposed in the econometrics literature. In this paper, our aim is to test whether non-parametric statistical procedures based on a kernel method can improve on classical linear models in nowcasting the Euro area manufacturing industrial production index (IPI), using business surveys released by the European Commission. Moreover, we use bootstrap replications to estimate confidence intervals for the nowcasts.
    Keywords: Non-parametric, Kernel, nowcasting, bootstrap, Euro area IPI.
    Date: 2008–04
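    A generic kernel nowcast of the kind the abstract describes can be sketched as a Nadaraya-Watson regression with a Gaussian kernel; the estimator choice, the bandwidth, and the toy data below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def kernel_nowcast(X, y, x_new, h=1.0):
    """Nadaraya-Watson kernel regression: the nowcast is a weighted average
    of past outcomes y, with Gaussian-kernel weights that decay with the
    distance between past predictor readings X and the current reading x_new."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d2 = np.sum((X - x_new) ** 2, axis=1)   # squared distances to x_new
    w = np.exp(-0.5 * d2 / h ** 2)          # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

# Hypothetical example: past survey balances (X) and IPI growth outcomes (y).
X_hist = [[0.0], [1.0], [2.0]]
y_hist = [0.1, 0.5, 0.9]
print(kernel_nowcast(X_hist, y_hist, np.array([1.0]), h=0.5))
```

    The bandwidth h governs the bias-variance trade-off: small h reproduces the nearest past episode, large h shrinks the nowcast towards the sample mean. Bootstrap confidence intervals, as in the paper, would resample the (X, y) pairs and recompute the nowcast.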
  10. By: Clive G. Bowsher; Roland Meeks
    Abstract: We introduce the class of Functional Signal plus Noise (FSN) models, which provides a new, general method for modelling and forecasting time series of economic functions. The underlying, continuous economic function (or "signal") is a natural cubic spline whose dynamic evolution is driven by a cointegrated vector autoregression for the ordinates (or "y-values") at the knots of the spline. The natural cubic spline provides flexible cross-sectional fit and results in a linear, state space model. This FSN model achieves dimension reduction, provides a coherent description of the observed yield curve and its dynamics as the cross-sectional dimension N becomes large, and can feasibly be estimated and used for forecasting when N is large. The integration and cointegration properties of the model are derived. The FSN models are then applied to forecasting 36-dimensional yield curves for US Treasury bonds at the one month ahead horizon. The method consistently outperforms the Diebold and Li (2006) and random walk forecasts on the basis of both mean square forecast error criteria and economically relevant loss functions derived from the realised profits of pairs trading algorithms. The analysis also highlights in a concrete setting the dangers of attempts to infer the relative economic value of model forecasts on the basis of their associated mean square forecast errors.
    Keywords: Time-series analysis ; Forecasting ; Mathematical models ; Macroeconomics - Econometric models
    Date: 2008
  11. By: Daniel Millimet (Southern Methodist University); Rusty Tchernis (Indiana University Bloomington)
    Abstract: We characterize the bias of propensity score based estimators of common average treatment effect parameters in the case of selection on unobservables. We then propose a new minimum biased estimator of the average treatment effect. We assess the finite sample performance of our estimator using simulated data, as well as a timely application examining the causal effect of the School Breakfast Program on childhood obesity. We find our new estimator to be quite advantageous in many situations, even when selection is only on observables.
    Keywords: Treatment Effects, Propensity Score, Bias, Unconfoundedness, Selection on Unobservables
    JEL: C21 C52
    Date: 2008–04
  12. By: Viktor Winschel; Markus Krätzig
    Abstract: We present an object-oriented software framework for specifying, solving, and estimating nonlinear dynamic general equilibrium (DSGE) models. The implemented solution methods for finding the unknown policy function are the standard linearization around the deterministic steady state, and a function iterator using a multivariate global Chebyshev polynomial approximation with the Smolyak operator to overcome the curse of dimensionality. The operator is also useful for numerical integration, and we use it for the integrals arising in rational expectations and in nonlinear state space filters. The estimation step is done by a parallel Metropolis-Hastings (MH) algorithm, using a linear or nonlinear filter. Implemented are the Kalman, Extended Kalman, Particle, Smolyak Kalman, Smolyak Sum, and Smolyak Kalman Particle filters. The MH sampling step can be interactively monitored and controlled by sequence and statistics plots. The number of parallel threads can be adjusted to benefit from multiprocessor environments. JBendge is based on the framework JStatCom, which provides a standardized application interface. All tasks are supported by an elaborate multi-threaded graphical user interface (GUI) with project management and data handling facilities.
    Keywords: Dynamic Stochastic General Equilibrium (DSGE) Models, Bayesian Time Series Econometrics, Java, Software Development
    JEL: C11 C13 C15 C32 C52 C63 C68 C87
    Date: 2008–04
  13. By: Lanne, Markku; Saikkonen, Pentti
    Abstract: This paper is concerned with univariate noncausal autoregressive models and their potential usefulness in economic applications. We argue that noncausal autoregressive models are especially well suited for modeling expectations. Unlike conventional causal autoregressive models, they explicitly show how the considered economic variable is affected by expectations and how expectations are formed. Noncausal autoregressive models can also be used to examine the related issue of backward-looking or forward-looking dynamics of an economic variable. We show in the paper how the parameters of a noncausal autoregressive model can be estimated by the method of maximum likelihood and how related test procedures can be obtained. Because noncausal autoregressive models cannot be distinguished from conventional causal autoregressive models by second order properties or Gaussian likelihood, a detailed discussion of their specification is provided. Motivated by economic applications, we explicitly use a forward-looking autoregressive polynomial in the formulation of the model. This is different from the practice used in the previous statistics literature on noncausal autoregressions and, in addition to its economic motivation, it is also convenient from a statistical point of view. In particular, it facilitates obtaining likelihood based diagnostic tests for the specified orders of the backward-looking and forward-looking autoregressive polynomials. Such test procedures are not only useful in the specification of the model but also in testing economically interesting hypotheses such as whether the considered variable only exhibits forward-looking behavior. As an empirical application, we consider modeling U.S. inflation dynamics, which, according to our results, are purely forward-looking.
    Keywords: Noncausal autoregression; expectations; inflation persistence
    JEL: C52 E31 C22
    Date: 2008
  14. By: Hahn, Jinyong; Hirano, Keisuke; Karlan, Dean
    Abstract: Many social experiments are run in multiple waves, or are replications of earlier social experiments. In principle, the sampling design can be modified in later stages or replications to allow for more efficient estimation of causal effects. We consider the design of a two-stage experiment for estimating an average treatment effect, when covariate information is available for experimental subjects. We use data from the first stage to choose a conditional treatment assignment rule for units in the second stage of the experiment. This amounts to choosing the propensity score, the conditional probability of treatment given covariates. We propose to select the propensity score to minimize the asymptotic variance bound for estimating the average treatment effect. Our procedure can be implemented simply using standard statistical software and has attractive large-sample properties.
    JEL: C90 C42 C93 C01
    Date: 2008–04–15
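    To illustrate the kind of objective involved, here is a hedged sketch, not the authors' exact procedure: the asymptotic variance bound for the average treatment effect contains terms of the form sigma1(x)^2/p(x) + sigma0(x)^2/(1-p(x)), and minimizing this pointwise in p gives a Neyman-type allocation:

```python
import numpy as np

def neyman_propensity(sigma1, sigma0):
    """Pointwise minimizer of sigma1^2/p + sigma0^2/(1-p) over p in (0,1):
    p* = sigma1 / (sigma1 + sigma0). A sketch of the variance-minimizing
    idea, not the paper's exact design rule."""
    sigma1, sigma0 = np.asarray(sigma1, float), np.asarray(sigma0, float)
    return sigma1 / (sigma1 + sigma0)

# If treated outcomes are noisier (sigma1 = 2) than control outcomes
# (sigma0 = 1) at some covariate value, treat with probability 2/3 there.
print(neyman_propensity(2.0, 1.0))
```

    The logic of the two-stage design is then that sigma1(x) and sigma0(x) are unknown a priori, so the first-stage data are used to estimate them before choosing the second-stage propensity score.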
  15. By: Erik Hjalmarsson
    Abstract: This paper analyzes the asymptotic properties of long-horizon estimators under both the null hypothesis and an alternative of predictability. Asymptotically, under the null of no predictability, the long-run estimator is an increasing deterministic function of the short-run estimate and the forecasting horizon. Under the alternative of predictability, the conditional distribution of the long-run estimator, given the short-run estimate, is no longer degenerate and the expected pattern of coefficient estimates across horizons differs from that under the null. Importantly, however, under the alternative, highly endogenous regressors, such as the dividend-price ratio, tend to deviate much less than exogenous regressors, such as the short interest rate, from the pattern expected under the null, making it more difficult to distinguish between the null and the alternative.
    Date: 2008
  16. By: Jonas Dovern (The Kiel Institute for the World Economy (IfW)); Ulrich Fritsche (Department for Economics and Politics, University of Hamburg, and DIW Berlin)
    Abstract: A couple of recent papers have shifted the focus towards disagreement among professional forecasters. When dealing with survey data that is sampled at a higher than annual frequency and that includes only fixed event forecasts (e.g. expectations of average annual growth rates), measures of disagreement across forecasters are naturally distorted by a component that mainly reflects the time-varying forecast horizon. We use data from the Survey of Professional Forecasters, which reports both fixed event and fixed horizon forecasts, to evaluate different methods for extracting the "fundamental" component of disagreement. Based on the paper's results we suggest two methods to estimate dispersion measures from panels of fixed event forecasts: a moving average transformation of the underlying forecasts and estimation with constant forecast-horizon effects. Both methods are easy to handle and deliver equally well performing results, which show a surprisingly high correlation (up to 0.94) with the true dispersion.
    Keywords: survey data, dispersion, disagreement, fixed event forecasts
    JEL: C22 C32 E37
    Date: 2008–05
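    One common way to remove the horizon component from a pair of fixed event forecasts is a horizon-weighted average of the current-year and next-year forecasts; the weighting scheme below is a standard approximation stated as an assumption, not necessarily the authors' exact transformation:

```python
def fixed_horizon_approx(f_current, f_next, month):
    """Approximate 12-month-ahead forecast from two fixed-event
    (calendar-year) forecasts. month is 1..12; the current-year forecast
    is weighted by the share of the coming 12 months that still falls in
    the current calendar year."""
    w = (13 - month) / 12.0
    return w * f_current + (1.0 - w) * f_next

# In January the current-year forecast gets full weight; by mid-year the
# two annual forecasts are blended roughly equally.
print(fixed_horizon_approx(2.0, 3.0, 1))
print(fixed_horizon_approx(2.0, 3.0, 7))
```

    Applying such a transformation forecaster by forecaster before computing cross-section dispersion removes the mechanical seasonal pattern that the raw fixed event panel would otherwise impose.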
  17. By: Prof D.S.G. Pollock
    Abstract: This paper shows how a frequency-selective filter that is applicable to short trended data sequences can be implemented via a frequency-domain approach. A filtered sequence can be obtained by multiplying the Fourier ordinates of the data by the ordinates of the frequency response of the filter and by applying the inverse Fourier transform to carry the product back into the time domain. Using this technique, it is possible, within the constraints of a finite sample, to design an ideal frequency-selective filter that will preserve all elements within a specified range of frequencies and that will remove all elements outside it. Approximations to ideal filters that are implemented in the time domain are commonly based on truncated versions of the infinite sequences of coefficients derived from the Fourier transforms of rectangular frequency response functions. An alternative to truncating an infinite sequence of coefficients is to wrap it around a circle of a circumference equal in length to the data sequence and to add the overlying coefficients. The coefficients of the wrapped filter can also be obtained by applying a discrete Fourier transform to a set of ordinates sampled from the frequency response function. Applying the coefficients to the data via circular convolution produces results that are identical to those obtained by a multiplication in the frequency domain, which constitutes a more efficient approach.
    Keywords: Linear filtering; Frequency-domain analysis
    JEL: C22
    Date: 2008–04
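    The frequency-domain procedure described in the abstract, multiplying the Fourier ordinates of the data by an ideal rectangular frequency response and transforming back, can be sketched generically as follows; the band edges and the test signal are invented for the example:

```python
import numpy as np

def ideal_bandpass(y, low, high):
    """Frequency-domain filtering: transform the data, multiply the Fourier
    ordinates by an ideal (0/1) frequency response, and apply the inverse
    transform to return to the time domain. Frequencies in cycles/sample."""
    freqs = np.abs(np.fft.fftfreq(len(y)))
    response = ((freqs >= low) & (freqs <= high)).astype(float)
    return np.real(np.fft.ifft(np.fft.fft(y) * response))

# Example: keep a slow cycle, remove a fast one.
n = 128
t = np.arange(n)
slow = np.sin(2 * np.pi * 4 * t / n)    # 4 cycles over the sample
fast = np.sin(2 * np.pi * 32 * t / n)   # 32 cycles over the sample
filtered = ideal_bandpass(slow + fast, 0.0, 0.1)  # passband keeps only 'slow'
```

    Because both components sit exactly on Fourier frequencies of the finite sample, the separation is exact here, which is the sense in which an ideal finite-sample frequency-selective filter is attainable; this operation is equivalent to circular convolution with the wrapped filter coefficients.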
  18. By: Ibrahim Ahamada (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I); Philippe Jolivaldt (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I)
    Abstract: A unit root test based on wavelet theory was recently proposed (Gençay and Fan, 2007). Although the new test is supposed to be robust to the initial value, we show, by contrast, that the initial value has significant effects on its size and power. We also find that the wavelet unit root test and the ADF test are equally efficient once the data are corrected for the initial value. Our approach is based on Monte Carlo experiments.
    Keywords: Unit root tests, wavelets, Monte Carlo experiments, size-power curve.
    Date: 2008–03
  19. By: Mehrotra , Aaron (BOFIT); Sánchez-Fung, José R. (BOFIT)
    Abstract: This paper forecasts inflation in China over a 12-month horizon. The analysis runs 15 alternative models and finds that only those considering many predictors via a principal component display a better relative forecasting performance than the univariate benchmark.
    Keywords: inflation forecasting; data-rich environment; principal components; China
    JEL: C53 E31
    Date: 2008–04–21
  20. By: Mercereau, Benoît; Miniane, Jacques Alain
    Abstract: The present value model of the current account has been very popular, as it provides an optimal benchmark to which actual current account series have often been compared. We show why persistence in observed current account data makes the estimated optimal series very sensitive to small-sample estimation error, making it close to impossible to determine whether the paths of the two series truly bear any relation to each other. Moreover, the standard Wald test of the model will falsely accept or reject the model with substantial probability. Monte Carlo simulations and estimations using annual and quarterly data from five OECD countries strongly support our predictions. In particular, we conclude that two important consensus results in the literature – that the optimal series is highly correlated with the actual series, but substantially less volatile – are not statistically robust.
    Keywords: Current account, present value model, model evaluation
    JEL: C11 C52 F32 F41
    Date: 2008
  21. By: Colander, David C.
    Abstract: This paper asks the question: Why has the “general-to-specific” cointegrated VAR approach as developed in Europe had only limited success in the US as a tool for doing empirical macroeconomics, where what might be called a “theory comes first” approach dominates? The reason this paper highlights is the incompatibility of the European approach with the US focus on the journal publication metric for advancement. Specifically, the European “general-to specific” cointegrated VAR approach requires researcher judgment to be part of the analysis, and the US focus on a journal publication metric discourages such research methods. The US “theory comes first” approach fits much better with the journal publication metric.
    Keywords: Incentives, empirical work, econometrics, methodology, cointegration, VAR
    JEL: B4
    Date: 2008
  22. By: TENENHAUS, Michel
    Abstract: In this research, the authors explore the use of ULS-SEM (unweighted least squares structural equation modelling), PLS (partial least squares), GSCA (generalized structured component analysis), path analysis on block principal components, and path analysis on block scales on customer satisfaction data.
    Keywords: Component-based SEM; covariance-based SEM; GSCA; path analysis; PLS path modelling; Structural Equation Modelling; Unweighted Least Squares
    JEL: C10 C23
    Date: 2008–01–01

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.