nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒08‒02
23 papers chosen by
Sune Karlsson
Orebro University

  1. A Simple Nonparametric Estimator for the Distribution of Random Coefficients By Patrick Bajari; Jeremy T. Fox; Kyoo il Kim; Stephen P. Ryan
  2. Exact Maximum Likelihood estimation for the BL-GARCH model under elliptical distributed innovations By Abdou Kâ Diongue; Dominique Guegan; Rodney C. Wolff
  3. Wavelet Method for Locally Stationary Seasonal Long Memory Processes By Dominique Guegan; Zhiping Lu
  4. Breaks or Long Memory Behaviour: An Empirical Investigation By Lanouar Charfeddine; Dominique Guegan
  5. Change analysis of dynamic copula for measuring dependence in multivariate financial data By Dominique Guegan; Jing Zhang
  6. The sensitivity of DSGE models' results to data detrending By Simona Delle Chiaie
  7. Bayesian Clustering of Categorical Time Series Using Finite Mixtures of Markov Chain Models By Sylvia Frühwirth-Schnatter; Christoph Pamminger
  8. R/S analysis and DFA: finite sample properties and confidence intervals By Kristoufek, Ladislav
  9. Volatility Models: From GARCH to Multi-Horizon Cascades By Alexander Subbotin; Thierry Chauveau; Kateryna Shapovalova
  10. A Note on Adapting Propensity Score Matching and Selection Models to Choice Based Samples By Heckman, James J.; Todd, Petra E.
  11. An I(2) Cointegration Model With Piecewise Linear Trends: Likelihood Analysis And Application By Takamitsu Kurita; Heino Bohn Nielsen; Anders Rahbek
  12. Simulating WTP Values from Random-Coefficient Models By Maurus Rischatsch
  13. Distinguishing between short and long range dependence: Finite sample properties of rescaled range and modified rescaled range By Kristoufek, Ladislav
  14. The Standard Biplot By Michael Greenacre
  15. Very simple marginal effects in some discrete choice models By Kevin J. Denny
  16. The propagation of regional recessions By James D. Hamilton; Michael T. Owyang
  17. Evaluating Marginal Policy Changes and the Average Effect of Treatment for Individuals at the Margin By Pedro Carneiro; James J. Heckman; Edward J. Vytlacil
  18. Can Parameter Instability Explain the Meese-Rogoff Puzzle? By Philippe Bacchetta; Eric van Wincoop; Toni Beutler
  19. Dynamic Cost-offsets of Prescription Drug Expenditures: Panel Data Analysis Using a Copula-based Hurdle Model By Partha Deb; Pravin K. Trivedi; David M. Zimmer
  20. Estimating Treatment Effects from Contaminated Multi-Period Education Experiments: The Dynamic Impacts of Class Size Reductions By Weili Ding; Steven F. Lehrer
  21. Does money matter in inflation forecasting? By Jane M. Binner; Peter Tino; Jonathan Tepper; Richard G. Anderson; Barry Jones; Graham Kendall
  22. Predicting Elections from Biographical Information about Candidates By Armstrong, J. Scott; Graefe, Andreas
  23. A panel Granger-causality test of endogenous vs. exogenous growth By Jochen Hartwig

  1. By: Patrick Bajari; Jeremy T. Fox; Kyoo il Kim; Stephen P. Ryan
    Abstract: We propose a simple nonparametric mixtures estimator for recovering the joint distribution of parameter heterogeneity in economic models, such as the random coefficients logit. The estimator is based on linear regression subject to linear inequality constraints, and is robust, easy to program and computationally attractive compared to alternative estimators for random coefficient models. We prove consistency and provide the rate of convergence under deterministic and stochastic choices for the sieve approximating space. We present a Monte Carlo study and an empirical application to dynamic programming discrete choice with a serially-correlated unobserved state variable.
    JEL: C01 C14 C25 C31 C35 I21 I28 L0 O1 O15
    Date: 2009–07
  2. By: Abdou Kâ Diongue (UFR SAT - Université Gaston Berger - Université Gaston Berger de Saint-Louis); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Rodney C. Wolff (School of Mathematical Sciences - Queensland University of Technology)
    Abstract: In this paper, we discuss the class of Bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; thus we examine the BL-GARCH model in a general setting under some non-Normal distributions. We investigate some probabilistic properties of this model and we propose and implement a maximum likelihood estimation (MLE) methodology. To evaluate the small-sample performance of this method for the various models, a Monte Carlo study is conducted. Finally, within-sample estimation properties are studied using S&P 500 daily returns, when the features of interest manifest as volatility clustering and leverage effects.
    Keywords: BL-GARCH process - elliptical distribution - leverage effects - Maximum Likelihood - Monte Carlo method - volatility clustering
    Date: 2009
  3. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Zhiping Lu (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, ECNU - East China Normal University)
    Abstract: Long memory processes have been extensively studied over the past decades. When dealing with financial and economic data, seasonality and time-varying long-range dependence can often be observed, and thus some kind of non-stationarity can exist inside financial data sets. To account for such phenomena, we propose a new class of stochastic process: the locally stationary k-factor Gegenbauer process. We describe a procedure for consistently estimating the time-varying parameters by applying the discrete wavelet packet transform (DWPT). The robustness of the algorithm is investigated through a simulation study. An application based on the error correction term of a fractional cointegration analysis of the Nikkei Stock Average 225 index is proposed.
    Keywords: Discrete wavelet packet transform ; Gegenbauer process ; Nikkei Stock Average 225 index ; non-stationarity ; ordinary least square estimation
    Date: 2009–03
  4. By: Lanouar Charfeddine (OEP - Université de Marne-la-Vallée); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: Are structural break models true switching models or long memory processes? The answer to this question remains ambiguous, and many papers in recent years have dealt with the problem. For instance, Diebold and Inoue (2001) and Granger and Hyung (2004) show, under specific conditions, that switching models and long memory processes can be easily confused. In this paper, using several generating models (the mean-plus-noise model, the STOchastic Permanent BREAK model, the Markov switching model, the TAR model, the sign model and the Structural CHange (SCH) model) and several estimation techniques (the GPH technique, the Exact Local Whittle (ELW) and wavelet methods), we show that, while the answer is quite clear in some cases, it is more ambiguous in others. Using French and American inflation rates, we show that these series cannot be characterized by the same class of models. The main result of this study suggests that estimating the long memory parameter without taking into account the existence of breaks in the data sets may lead to misspecification and to overestimation of the true parameter.
    Keywords: Structural breaks models, spurious long memory behavior, inflation series.
    Date: 2009–04
  5. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Jing Zhang (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, ECNU - East China Normal University)
    Abstract: This paper proposes a new approach to measuring dependence in multivariate financial data. Data in finance and insurance often cover a long time period, so economic factors may induce changes inside the dependence structure. Recently, two methods using copulas have been proposed to analyze such changes. The first approach investigates changes in the copula's parameters. The second tests for changes of copula by determining the best-fitting copula over moving windows. In this paper we take into account the non-stationarity of the data and analyze: (1) changes in the parameters while the copula family remains static; (2) changes in the copula family itself. We propose a series of tests based on conditional copulas and goodness-of-fit (GOF) tests to decide the type of change, and further give the corresponding change analysis. We illustrate our approach with the Standard & Poor's 500 and Nasdaq indices, and provide dynamic risk measures.
    Keywords: Dynamic copula - goodness-of-fit test - change-point - time-varying parameter - VaR - ES
    Date: 2009
  6. By: Simona Delle Chiaie (Oesterreichische Nationalbank, Economic Studies Division, P.O. Box 61, A-1010 Vienna)
    Abstract: This paper aims to shed light on potential pitfalls of different data filtering and detrending procedures for the estimation of stationary DSGE models. For this purpose, a medium-sized New Keynesian model such as the one developed by Smets and Wouters (2003) is used to assess the sensitivity of the structural estimates to preliminary data transformations. To examine the question, we focus on two widely used detrending and filtering methods, the HP filter and linear detrending. After comparing the properties of the business cycle components, we estimate the model through Bayesian techniques using in turn the two different sets of transformed data. Empirical findings show that the posterior distributions of the structural parameters are rather sensitive to the choice of detrending. As a consequence, both the magnitude and the persistence of theoretical responses to shocks depend upon preliminary filtering.
    Keywords: DSGE models; Filters; Trends; Bayesian estimates
    JEL: E3
    Date: 2009–07–20
  7. By: Sylvia Frühwirth-Schnatter (Department of Applied Statistics, Johannes Kepler University Linz, Austria); Christoph Pamminger (Department of Applied Statistics, Johannes Kepler University Linz, Austria)
    Abstract: Two approaches for model-based clustering of categorical time series based on time-homogeneous first-order Markov chains are discussed. For Markov chain clustering the individual transition probabilities are fixed to a group-specific transition matrix. In a new approach called Dirichlet multinomial clustering the rows of the individual transition matrices deviate from the group mean and follow a Dirichlet distribution with unknown group-specific hyperparameters. Estimation is carried out through Markov chain Monte Carlo. Various well-known clustering criteria are applied to select the number of groups. An application to a panel of Austrian wage mobility data leads to an interesting segmentation of the Austrian labor market.
    Keywords: Markov chain Monte Carlo, model-based clustering, panel data, transition matrices, labor market, wage mobility
    Date: 2009–07
  8. By: Kristoufek, Ladislav
    Abstract: We focus on the finite sample properties of the two most widely used methods of Hurst exponent H estimation, R/S analysis and DFA. Even though both methods have been applied to many types of financial assets, only a few papers have dealt with their finite sample properties, which are crucial as they differ significantly from the asymptotic ones. Recently, R/S analysis has been shown to overestimate H when compared with DFA. However, we show on random time series with lengths from 2^9 to 2^17 that even though the R/S estimates are indeed significantly higher than the asymptotic limit of 0.5, they remain very close to the expected values proposed by Anis & Lloyd, and their estimated standard deviations are lower than those of DFA. DFA estimates, on the other hand, are very close to 0.5. The results suggest that R/S remains a useful and robust method even when compared to the newer DFA method, which is usually preferred in the recent literature.
    Keywords: rescaled range analysis; detrended fluctuation analysis; Hurst exponent; long-range dependence; confidence intervals
    JEL: C12 C59 C01
    Date: 2009–07–20
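As a rough illustration of the classical R/S procedure examined above, the rescaled range can be averaged over dyadic window sizes and H read off as a log-log slope. This is a simplified sketch only, not the paper's implementation: the window scheme is an assumption of this sketch, and the Anis & Lloyd finite-sample correction is deliberately omitted (which is exactly why the estimate on iid noise lands somewhat above 0.5).

```python
import numpy as np

def rs_hurst(x, min_n=16):
    """Estimate the Hurst exponent H by classical R/S analysis.

    The series is split into non-overlapping windows of length n; for each
    window the rescaled range R/S is computed and averaged across windows,
    and H is the slope of log(mean R/S) against log(n).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    ns, rs_means = [], []
    n = min_n
    while n <= N // 2:
        rs_vals = []
        for start in range(0, N - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviations from the window mean
            r = dev.max() - dev.min()       # range of the cumulative deviations
            s = w.std(ddof=0)               # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        ns.append(n)
        rs_means.append(np.mean(rs_vals))
        n *= 2                              # dyadic window sizes, as in the paper's 2^k lengths
    slope, _ = np.polyfit(np.log(ns), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(0)
h = rs_hurst(rng.standard_normal(2**13))    # iid noise: asymptotic H = 0.5
```

On iid noise the estimate sits a little above 0.5 at these sample sizes, consistent with the finite-sample upward bias the abstract describes.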
  9. By: Alexander Subbotin (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Thierry Chauveau (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Kateryna Shapovalova (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: We overview different methods of modeling the volatility of stock prices and exchange rates, focusing on their ability to reproduce the empirical properties of the corresponding time series. The properties of price fluctuations vary across the time scales of observation, and the adequacy of different models for describing price dynamics at several time horizons simultaneously is the central topic of this study. We propose a detailed survey of recent volatility models accounting for multiple horizons. These models are based on different and sometimes competing theoretical concepts. They belong either to the GARCH or to the stochastic volatility model family and often borrow methodological tools from statistical physics. We compare their properties and comment on their practical usefulness and prospects.
    Keywords: Volatility modeling, GARCH, stochastic volatility, volatility cascade, multiple horizons in volatility.
    Date: 2009–05
  10. By: Heckman, James J. (University of Chicago); Todd, Petra E. (University of Pennsylvania)
    Abstract: The probability of selection into treatment plays an important role in matching and selection models. However, this probability often cannot be consistently estimated because of choice-based sampling designs with unknown sampling weights. This note establishes that the selection and matching procedures can nevertheless be implemented using propensity scores fit on choice-based samples with misspecified weights, because the odds ratio of the propensity score fit on the choice-based sample is monotonically related to the odds ratio of the true propensity score.
    Keywords: choice-based sampling, matching models, propensity scores, selection models
    JEL: C52
    Date: 2009–07
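The note's central monotonicity fact can be illustrated numerically: under choice-based sampling, the odds of the fitted score are proportional to the odds of the true score, so the ranking of units (and hence nearest-neighbor matches) is preserved. This is a constructed toy example, not the paper's data; the constant k below stands in for an unknown sampling-odds ratio.

```python
import numpy as np

# Under choice-based sampling with misspecified weights, the estimated
# propensity score p* satisfies odds(p*) = k * odds(p) for some constant k.
# Because that transformation is strictly increasing in p, matching on p*
# pairs exactly the same units as matching on the true score p.
rng = np.random.default_rng(1)
p_true = rng.uniform(0.05, 0.95, size=200)    # hypothetical true propensity scores

k = 3.0                                       # hypothetical unknown sampling-odds ratio
odds = p_true / (1 - p_true)
p_star = (k * odds) / (1 + k * odds)          # score recovered from the biased sample

# The ordering of units is identical under either score.
same_order = np.array_equal(np.argsort(p_true), np.argsort(p_star))
```

Sorting by the misweighted score reproduces the sort by the true score, which is the sense in which matching "goes through" despite the unknown weights.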
  11. By: Takamitsu Kurita (Faculty of Economics, Fukuoka University); Heino Bohn Nielsen (Department of Economics, University of Copenhagen); Anders Rahbek (Department of Economics, University of Copenhagen)
    Abstract: This paper presents a likelihood analysis of the I(2) cointegrated vector autoregression with piecewise linear deterministic terms. The limiting behavior of the maximum likelihood estimators is derived and used to obtain the limiting distribution of the likelihood ratio statistic for the cointegration ranks, extending the results for I(2) models with a linear trend in Nielsen and Rahbek (2007) and for I(1) models with piecewise linear trends in Johansen, Mosconi, and Nielsen (2000). The asymptotic theory provided here also extends the results in Johansen, Juselius, Frydman, and Goldberg (2009), where asymptotic inference is discussed in detail for one of the cointegration parameters. As an illustration, an empirical analysis of US consumption, income and wealth, 1965-2008, is performed, emphasizing the importance of a change in nominal price trends after 1980.
    Keywords: Cointegration, I(2); piecewise linear trends; likelihood analysis; US consumption
    JEL: C32
    Date: 2009–07
  12. By: Maurus Rischatsch (Socioeconomic Institute, University of Zurich)
    Abstract: Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained an increasing importance in applied work as they allow for taste heterogeneity. This paper discusses the parametrical derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a known distribution.
    Keywords: willingness-to-pay, discrete choice, simulation, random-coefficient models
    JEL: C15 C25
    Date: 2009–07
  13. By: Kristoufek, Ladislav
    Abstract: The most widely used estimators of the Hurst exponent for the detection of long-range dependence are biased by the presence of short-range dependence in the underlying time series. We present confidence interval estimates for the rescaled range and the modified rescaled range, and show that the difference in their expected values and confidence intervals enables us to use both methods together to clearly distinguish between the two types of processes. The estimates are further applied to the Dow Jones Industrial Average between 1944 and 2009 and show that returns do not exhibit any long-range dependence, whereas volatility exhibits both short-range and long-range dependence in the underlying process.
    Keywords: rescaled range; modified rescaled range; Hurst exponent; long-range dependence; confidence intervals
    JEL: G14 G10 C49 C01
    Date: 2009–07–01
  14. By: Michael Greenacre
    Abstract: Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. 
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
    Keywords: Biplot, contributions, correspondence analysis, discriminant analysis, MANOVA, principal component analysis, scaling, singular-value decomposition, weighting
    JEL: C19 C88
    Date: 2009–07
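The generic SVD construction that every biplot starts from can be sketched in a few lines. This is only the baseline decomposition, not Greenacre's standard-biplot scaling itself, which additionally introduces point weights and contribution-based rescaling; the small data matrix is invented for illustration.

```python
import numpy as np

# A biplot decomposes a (centred) data matrix X as a product of two matrices
# whose rows serve as coordinates: here rows get principal coordinates
# F = U * s and columns get standard coordinates G = V, so that the scalar
# products F @ G.T reproduce the centred data.
X = np.array([[2.0, 0.5, 1.0],
              [1.0, 1.5, 0.0],
              [0.0, 1.0, 2.0],
              [1.5, 0.5, 1.0]])
Xc = X - X.mean(axis=0)                 # column-centre the data matrix

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F = U * s                               # row (case) coordinates
G = Vt.T                                # column (variable) coordinates

# Scalar products of row and column points reconstruct the centred data;
# plotting the first two columns of F and G gives the 2-D biplot.
approx = F @ G.T
```

Because the decomposition F @ G.T is not unique (any factor can be shifted between F and G), different software packages scale the two point sets differently, which is precisely the confusion the standard biplot is designed to remove.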
  15. By: Kevin J. Denny (School of Economics & Geary Institute University College Dublin)
    Abstract: I show a simple back-of-the-envelope method for calculating marginal effects in binary choice and count data models. The approach suggested here focuses attention on marginal effects at different points in the distribution of the dependent variable rather than representative points in the joint distribution of the explanatory variables. For binary models, if the mean of the dependent variable is between 0.4 and 0.6 then dividing the logit coefficient by 4 or multiplying the probit coefficient by 0.4 should be moderately accurate.
    Keywords: marginal effects, binary choice, count data
    Date: 2009–07–21
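The back-of-the-envelope rule follows directly from the link functions: the logit marginal effect at probability p is beta*p*(1-p), which equals beta/4 at p = 0.5, while the probit marginal effect at the mean is beta times the standard normal density at 0, roughly 0.4*beta. A quick numerical check of both constants:

```python
import math

# Marginal effect of a one-unit change in a regressor with coefficient beta,
# evaluated at predicted probability p.
def logit_margin(beta, p):
    # derivative of the logistic CDF is p * (1 - p)
    return beta * p * (1 - p)

def probit_margin_at_mean(beta):
    # derivative of the normal CDF at its median is phi(0) = 1 / sqrt(2*pi)
    return beta * 1.0 / math.sqrt(2 * math.pi)

beta = 1.0
approx_logit = logit_margin(beta, 0.5)        # 0.25, i.e. beta / 4
approx_probit = probit_margin_at_mean(beta)   # ~0.3989, i.e. about 0.4 * beta
```

At p = 0.4 or 0.6 the logit factor p*(1-p) is 0.24, barely below 0.25, which is why the divide-by-4 rule stays moderately accurate across that whole range.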
  16. By: James D. Hamilton; Michael T. Owyang
    Abstract: This paper develops a framework for inferring common Markov-switching components in a panel data set with large cross-section and time-series dimensions. We apply the framework to studying similarities and differences across U.S. states in the timing of business cycles. We hypothesize that there exists a small number of cluster designations, with individual states in a given cluster sharing certain business cycle characteristics. We find that although oil-producing and agricultural states can sometimes experience a separate recession from the rest of the United States, for the most part, differences across states appear to be a matter of timing, with some states entering recession or recovering before others.
    Keywords: Business cycles ; Recessions
    Date: 2009
  17. By: Pedro Carneiro; James J. Heckman; Edward J. Vytlacil
    Abstract: This paper develops methods for evaluating marginal policy changes. We characterize how the effects of marginal policy changes depend on the direction of the policy change, and show that marginal policy effects are fundamentally easier to identify and to estimate than conventional treatment parameters. We develop the connection between marginal policy effects and the average effect of treatment for persons on the margin of indifference between participation in treatment and nonparticipation, and use this connection to analyze both parameters. We apply our analysis to estimate the effect of marginal changes in tuition on the return to going to college.
    JEL: C14
    Date: 2009–07
  18. By: Philippe Bacchetta; Eric van Wincoop; Toni Beutler
    Abstract: The empirical literature on nominal exchange rates shows that the current exchange rate is often a better predictor of future exchange rates than a linear combination of macroeconomic fundamentals. This result is behind the famous Meese-Rogoff puzzle. In this paper we evaluate whether parameter instability can account for this puzzle. We consider a theoretical reduced-form relationship between the exchange rate and fundamentals in which parameters are either constant or time varying. We calibrate the model to data for exchange rates and fundamentals and conduct the exact same Meese-Rogoff exercise with data generated by the model. Our main finding is that the impact of time-varying parameters on the prediction performance is either very small or goes in the wrong direction. To help interpret the findings, we derive theoretical results on the impact of time-varying parameters on the out-of-sample forecasting performance of the model. We conclude that it is not time-varying parameters, but rather small sample estimation bias, that explains the Meese-Rogoff puzzle.
    Keywords: exchange rate forecasting; time-varying coefficients
    JEL: F31 F37 F41
    Date: 2009–07
  19. By: Partha Deb; Pravin K. Trivedi; David M. Zimmer
    Abstract: This paper presents a new multivariate copula-based modeling approach for analyzing cost-offsets between drug and nondrug expenditures. Estimates are based on panel data from the Medical Expenditure Panel Survey (MEPS) with quarterly measures of medical expenditures. The approach allows for nonlinear dynamic dependence between drug and nondrug expenditures as well as asymmetric contemporaneous dependence. The specification uses the standard hurdle model with two significant extensions. First, it is adapted to the bivariate case. Second, because the cost-offset hypothesis is inherently dynamic, the bivariate hurdle framework is extended to accommodate dynamic relationships between drug and nondrug spending. The econometric analysis is implemented for six different groups defined by specific health conditions. There is evidence of modest cost-offsets of expenditures on prescribed drugs.
    JEL: C3 C33 C35 I11
    Date: 2009–07
  20. By: Weili Ding; Steven F. Lehrer
    Abstract: This paper introduces an empirical strategy to estimate dynamic treatment effects in randomized trials that provide treatment in multiple stages and in which various noncompliance problems arise, such as attrition and selective transitions between treatment and control groups. Our approach is applied to the highly influential four-year randomized class size study, Project STAR. We find benefits from attending a small class in all cognitive subject areas in kindergarten and first grade. We do not find any statistically significant dynamic benefits from continuous treatment versus never attending small classes following grade one. Finally, statistical tests support accounting for both selective attrition and noncompliance with treatment assignment.
    JEL: C31 I21
    Date: 2009–07
  21. By: Jane M. Binner; Peter Tino; Jonathan Tepper; Richard G. Anderson; Barry Jones; Graham Kendall
    Abstract: This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques that are new to macroeconomics: recurrent neural networks and kernel recursive least squares regression. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
    Keywords: Forecasting ; Inflation (Finance) ; Monetary theory
    Date: 2009
  22. By: Armstrong, J. Scott; Graefe, Andreas
    Abstract: Using the index method, we developed the PollyBio model to predict election outcomes. The model, based on 49 cues about candidates’ biographies, was used to predict the outcomes of the 28 U.S. presidential elections from 1900 to 2008. Using a simple heuristic, it correctly predicted the winner in 25 of the 28 elections. In predicting the two-party vote shares for the last four elections, from 1996 to 2008, the model’s out-of-sample forecasts yielded a lower forecasting error than 12 benchmark models. By relying on different information and including more variables than traditional models, PollyBio improves the accuracy of election forecasting. It is particularly helpful for forecasting open-seat elections and can also help parties select the candidates running for office.
    Keywords: forecasting; unit weighting; Dawes rule; differential weighting
    JEL: C53 D72
    Date: 2009–06–23
  23. By: Jochen Hartwig (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: The paper proposes a new test of endogenous vs. exogenous growth theories based on the Granger-causality methodology and applies it to a panel of 20 OECD countries. The test yields divergent evidence with respect to physical and human capital. For physical capital, the test results favor Solow-type exogenous growth theory over AK-type endogenous growth models. On the other hand, the test results lend support to human capital oriented endogenous growth models – like the Uzawa-Lucas model – rather than to the human capital augmented Solow model.
    Keywords: Exogenous growth, endogenous growth, panel Granger-causality tests
    JEL: C12 C23 O41
    Date: 2009–07
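For intuition, the bivariate time-series building block of a Granger-causality test compares a restricted autoregression (lags of y only) with an unrestricted one (adding lags of x) via an F statistic. This sketch is not the paper's panel test, which pools such regressions across the 20 OECD countries; the simulated series below are constructed purely for illustration.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """Bivariate Granger-causality F-test: does x help predict y?

    Restricted model regresses y_t on its own lags; the unrestricted model
    adds lags of x. Large F values reject the null of non-causality.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    Y = y[lags:]
    # design matrices: constant + own lags, then additionally lags of x
    Zr = np.column_stack([np.ones(T - lags)] +
                         [y[lags - k:T - k] for k in range(1, lags + 1)])
    Zu = np.column_stack([Zr] +
                         [x[lags - k:T - k] for k in range(1, lags + 1)])
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    df = len(Y) - Zu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

rng = np.random.default_rng(2)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):                 # y is driven by lagged x
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(y, x)                  # x -> y: large F, causality detected
f_yx = granger_f(x, y)                  # y -> x: small F, no causality
```

The asymmetry between f_xy and f_yx is the pattern the paper's test looks for in the growth data: which of investment and output (or schooling and output) systematically leads the other.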

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.