nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒02‒13
thirty-six papers chosen by
Sune Karlsson
Stockholm School of Economics

  1. Unemployment dynamics and NAIRU estimates for CEECs: A univariate approach By Camarero, M.; Carrión, J.Ll.; Tamarit, C.
  2. Permanent vs Transitory Components and Economic Fundamentals By Anthony Garratt; Donald Robertson; Stephen Wright
  3. Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and non-standard asymptotics By Jean-Marie Dufour
  4. Exact Multivariate Tests of Asset Pricing Models with Stable Asymmetric Distributions By Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
  5. Interpolation and Backdating with A Large Information Set By Angelini, Elena; Henry, Jérôme; Marcellino, Massimiliano
  6. Similarities and Convergence in G7 Cycles By Canova, Fabio; Ciccarelli, Matteo; Ortega, Eva
  7. Small Sample Confidence Intervals for Multivariate Impulse Response Functions at Long Horizons By Pesavento, Elena; Rossi, Barbara
  8. Federal Funds Rate Prediction By Sarno, Lucio; Thornton, Daniel L; Valente, Giorgio
  9. Optimal Forecast Combination Under Regime Switching By Elliott, Graham; Timmermann, Allan G
  10. Model-based Clustering of Multiple Time Series By Frühwirth-Schnatter, Sylvia; Kaufmann, Sylvia
  11. A Variance Ratio Related Prediction Tool with Application to the NYSE Index 1825-2002 By Kandel, Shmuel; Zilca, Shlomo
  12. Business Cycle Turning Points: Mixed-Frequency Data with Structural Breaks By Konstantin A., KHOLODILIN; Wension Vincent, YAO
  13. Predicting Electoral College Victory Probabilities from State Probability Data By Ray C. Fair
  14. Predictive regressions with panel data By Hjalmarsson, Erik
  15. On the Predictability of Global Stock Returns By Hjalmarsson, Erik
  16. Hierarchical mixture modelling with normalized inverse Gaussian priors. By Antonio Lijoi; Ramsés H. Mena; Igor Prünster
  17. Contributions to the understanding of Bayesian consistency. By Antonio Lijoi; Igor Prünster; Stephen G. Walker
  18. On consistency of nonparametric normal mixtures for Bayesian density estimation. By Antonio Lijoi; Igor Prünster; Stephen G. Walker
  19. On rates of convergence for posterior distributions in infinite–dimensional models. By Antonio Lijoi; Igor Prünster; Stephen G. Walker
  20. A strong law of large numbers for capacities. By Fabio Maccheroni; Massimo Marinacci
  21. Bayesian Econometrics By Poirier, Dale J; Tobias, Justin
  22. Bayesian Analysis of Structural Effects in an Ordered Equation System By Li, Mingliang; Tobias, Justin
  23. Microsimulations in the Presence of Heterogeneity By Constantijn W.A. Panis
  24. Random Scenario Forecasts Versus Stochastic Forecasts By Shripad Tuljapurkar; Ronald D. Lee; Qi Li
  25. Robust forecasting of mortality and fertility rates: a functional data approach By Rob J. Hyndman; Md. Shahid Ullah
  26. Forecasting age-specific breast cancer mortality using functional data models By Bircan Erbas; Rob J. Hyndman; Dorota M. Gertig
  27. Using Out-of-Sample Mean Squared Prediction Errors to Test the Martingale Difference Hypothesis By Todd E. Clark; Kenneth D. West
  28. Predicting Customer Retention and Profitability by Using Random Forests and Regression Forests Techniques By B. LARIVIÈRE; D. VAN DEN POEL
  29. The relative importance of symmetric and asymmetric shocks and the determination of the exchange rate By G. PEERSMAN
  30. Exponential Weighting and Random-Matrix-Theory-Based Filtering of Financial Covariance Matrices for Portfolio Optimization By Marc Potters; Szilard Pafka; Imre Kondor
  31. Noise dressing of financial correlation matrices By Marc Potters; Jean-Philippe Bouchaud; Laurent Laloux; Pierre Cizeau
  32. Asymptotic Distribution Theory of Empirical Rank-dependent Measures of Inequality By Rolf Aaberge
  33. Nonparametric Tests of Optimizing Behavior in Public Service Provision: Methodology and an Application to Local Public Safety By Laurens Cherchye; Bruno De Borger; Tom Van Puyenbroeck
  34. Bootstrapping GMM Estimators for Time Series By Atsushi Inoue; Mototsugu Shintani
  35. Evaluating Density Forecasts via the Copula Approach By Xiaohong Chen; Yanqin Fan
  36. Factor Analysis Regression By Reinhold Kosfeld; Jorgen Lauridsen

  1. By: Camarero, M.; Carrión, J.Ll.; Tamarit, C. (Universitat de Barcelona)
    Abstract: In this paper we test for the hysteresis versus the natural rate hypothesis on the unemployment rates of the EU new members using unit root tests that account for the presence of level shifts. As a by-product, the analysis proceeds to the estimation of a NAIRU measure from a univariate point of view. The paper also focuses on the precision of these NAIRU estimates, studying the two sources of inaccuracy that derive from the estimation of the break points and of the autoregressive parameters. The results point to breaks associated with institutional changes implementing market-oriented reforms. Moreover, the degree of persistence in unemployment varies dramatically among the individual countries depending on the stage reached in the transition process.
    JEL: C22 C23 E24
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:bar:bedcje:2005131&r=ecm
  2. By: Anthony Garratt (School of Economics, Mathematics & Statistics, Birkbeck College); Donald Robertson; Stephen Wright (School of Economics, Mathematics & Statistics, Birkbeck College)
    Abstract: Any non-stationary series can be decomposed into permanent (or "trend") and transitory (or "cycle") components. Typically some atheoretic pre-filtering procedure is applied to extract the permanent component. This paper argues that analysis of the fundamental underlying stationary economic processes should instead be central to this process. We present a new derivation of multivariate Beveridge-Nelson permanent and transitory components, whereby the latter can be derived explicitly as a weighting of observable stationary processes. This allows far clearer economic interpretations. Different assumptions on the fundamental stationary processes result in distinctly different results; but this reflects deep economic uncertainty. We illustrate with an example using Garratt et al's (2003a) small VECM model of the UK economy.
    Keywords: Multivariate Beveridge-Nelson, VECM, Economic Fundamentals, Decomposition.
    JEL: C1 C32 E0 E32 E37
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:bbk:bbkefp:0412&r=ecm
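    Sketch: the univariate special case of a Beveridge-Nelson decomposition, assuming the first differences follow an AR(1); the paper's multivariate, VECM-based construction is more general, and the data and names below are illustrative only.
        import numpy as np

        def bn_decomposition_ar1(y):
            # Beveridge-Nelson split when dy_t - mu = phi*(dy_{t-1} - mu) + e_t:
            # permanent component tau_t = y_t + phi/(1-phi)*(dy_t - mu)
            dy = np.diff(y)
            mu = dy.mean()
            d = dy - mu
            phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])  # OLS AR(1) slope
            trend = y[1:] + phi / (1.0 - phi) * (dy - mu)
            return trend, y[1:] - trend  # permanent and transitory components

        rng = np.random.default_rng(1)
        y = np.cumsum(0.2 + rng.standard_normal(500))  # simulated I(1) series
        trend, cycle = bn_decomposition_ar1(y)
        print(round(cycle.std(), 3))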
  3. By: Jean-Marie Dufour
    Abstract: The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
    Keywords: Monte Carlo test, maximized Monte Carlo test, finite sample test, exact test, nuisance parameter, bounds, bootstrap, parametric bootstrap, simulated annealing, asymptotics, nonstandard asymptotic distribution
    JEL: C12 C15 C2 C52 C22
    Date: 2005–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2005s-02&r=ecm
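    Sketch: a minimal illustration of the basic Monte Carlo test idea described above (not code from the paper); the statistic, sample size and simulated null are arbitrary choices for the example, and a maximized MC test would additionally maximize this p-value over the nuisance-parameter space.
        import numpy as np

        def tstat(z):
            # absolute t-statistic for H0: mean = 0
            return abs(z.mean() / (z.std(ddof=1) / np.sqrt(len(z))))

        def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=0):
            # Dwass/Barnard Monte Carlo p-value: simulate the statistic under the
            # null and count how often it is at least as large as the observed value
            rng = np.random.default_rng(seed)
            sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
            return (np.sum(sims >= stat_obs) + 1) / (n_rep + 1)

        rng = np.random.default_rng(42)
        x = rng.normal(0.3, 1.0, size=25)              # toy sample
        p = mc_pvalue(tstat(x), lambda r: tstat(r.normal(size=25)), n_rep=999)
        print(round(p, 3))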
  4. By: Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
    Abstract: In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative - possibly asymmetric - heavy tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, from which exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
    Keywords: capital asset pricing model; mean-variance efficiency; non-normality; multivariate linear regression; stable distribution; skewness; kurtosis; asymmetry; uniform linear hypothesis; exact test; Monte Carlo test; nuisance parameter; specification test; diagnostics
    JEL: C3 C12 C33 C15 G1 G12 G14
    Date: 2005–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2005s-03&r=ecm
  5. By: Angelini, Elena; Henry, Jérôme; Marcellino, Massimiliano
    Abstract: Existing methods for data interpolation or backdating are either univariate or based on a very limited number of series, due to data and computing constraints that were binding until the recent past. Nowadays large datasets are readily available, and models with hundreds of parameters can be estimated quickly. We model these large datasets with a factor model, and develop an interpolation method that exploits the estimated factors as an efficient summary of all the available information. The method is compared with existing standard approaches from a theoretical point of view, by means of Monte Carlo simulations, and also when applied to actual macroeconomic series. The results indicate that our method is more robust to model misspecification, although traditional multivariate methods also work well, while univariate approaches are systematically outperformed. When interpolated series are subsequently used in econometric analyses, biases can emerge, depending on the type of interpolation, but they can again be reduced with multivariate approaches, including factor-based ones.
    Keywords: Factor model; Interpolation; Kalman filter; spline
    JEL: C32 C43 C82
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4533&r=ecm
  6. By: Canova, Fabio; Ciccarelli, Matteo; Ortega, Eva
    Abstract: This Paper examines the properties of G-7 cycles using a multicountry Bayesian panel VAR model with time variations, unit specific dynamics and cross country interdependences. We demonstrate the presence of a significant world cycle and show that country specific indicators play a much smaller role. We detect differences across business cycle phases but, apart from an increase in synchronicity in the late 1990s, find little evidence of major structural changes. We also find no evidence of the existence of a euro area specific cycle or of its emergence in the 1990s.
    Keywords: Bayesian methods; business cycle; G7; indicators; panel data
    JEL: C11 E32
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4534&r=ecm
  7. By: Pesavento, Elena; Rossi, Barbara
    Abstract: Existing methods for constructing confidence bands for multivariate impulse response functions depend on auxiliary assumptions on the order of integration of the variables. Thus, they may have poor coverage at long lead times when variables are highly persistent. Solutions that have been proposed in the literature may be computationally challenging. The goal of this Paper is to propose a simple method for constructing confidence bands for impulse response functions that is not pointwise and that is robust to the presence of highly persistent processes. The method uses alternative approximations based on local-to-unity asymptotic theory and allows the lead time of the impulse response function to be a fixed fraction of the sample size. These devices provide better approximations in small samples. Monte Carlo simulations show that our method tends to have better coverage properties at long horizons than existing methods. We also investigate the properties of the various methods in terms of the length of their confidence bands. Finally, we show, with empirical applications, that our method may provide different economic interpretations of the data. Applications to real GDP and to nominal versus real sources of fluctuations in exchange rates are discussed.
    Keywords: impulse response functions; local to unity asymptotics; persistence; VARs
    JEL: C12 C32 F40
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4536&r=ecm
  8. By: Sarno, Lucio; Thornton, Daniel L; Valente, Giorgio
    Abstract: We examine the forecasting performance of a range of time-series models of the daily US effective federal funds (FF) rate recently proposed in the literature. We find that: (i) most of the models and predictor variables considered produce satisfactory one-day-ahead forecasts of the FF rate; (ii) the best forecasting model is a simple univariate model where the future FF rate is forecast using the current difference between the FF rate and its target; (iii) combining the forecasts from various models generally yields modest improvements on the best performing model. These results have a natural interpretation and clear policy implications.
    Keywords: federal funds rate; forecasting; nonlinearity; term structure
    JEL: E43 E47
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4587&r=ecm
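    Sketch: one way to cast the spread-based univariate model described in (ii) as a simple regression, using simulated data; the OLS specification, variable names and toy data-generating process are illustrative assumptions, not the authors' exact model.
        import numpy as np

        def fit_spread_model(ff, target):
            # regress the next-day change in the funds rate on today's
            # deviation from target: ff_{t+1} - ff_t = a + b*(ff_t - target_t) + e
            spread = ff[:-1] - target[:-1]
            X = np.column_stack([np.ones_like(spread), spread])
            beta, *_ = np.linalg.lstsq(X, np.diff(ff), rcond=None)
            return beta  # (a, b)

        rng = np.random.default_rng(2)
        target = np.repeat([2.0, 2.25, 2.5], 100)          # toy target path
        ff = target + 0.05 * rng.standard_normal(target.size)
        a, b = fit_spread_model(ff, target)
        print(ff[-1] + a + b * (ff[-1] - target[-1]))      # one-day-ahead forecast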
  9. By: Elliott, Graham; Timmermann, Allan G
    Abstract: This Paper proposes a new forecast combination method that lets the combination weights be driven by regime switching in a latent state variable. An empirical application that combines forecasts from survey data and time series models finds that the proposed regime switching combination scheme performs well for a variety of macroeconomic variables. Monte Carlo simulations shed light on the type of data generating processes for which the proposed combination method can be expected to perform better than a range of alternative combination schemes. Finally, we show how time-variations in the combination weights arise when the target variable and the predictors share a common factor structure driven by a hidden Markov process.
    Keywords: forecast combination; Markov switching; survey data; time-varying combination weights
    JEL: C53
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4649&r=ecm
  10. By: Frühwirth-Schnatter, Sylvia; Kaufmann, Sylvia
    Abstract: We propose to exploit the attractiveness of pooling relatively short time series that display similar dynamics, without restricting the pooling to a single group. We suggest estimating the appropriate grouping of time series simultaneously along with the group-specific model parameters. We cast estimation into the Bayesian framework and use Markov chain Monte Carlo simulation methods. We discuss model identification and base model selection on marginal likelihoods. A simulation study documents the efficiency gains in estimation and forecasting that are realized when appropriately grouping the time series of a panel. Two economic applications illustrate the usefulness of the method, including extensions to Markov switching within clusters and to heterogeneity within clusters, respectively.
    Keywords: clustering; Markov chain Monte Carlo; Markov Switching; mixture modelling; panel data
    JEL: C11 C33 E32
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4650&r=ecm
  11. By: Kandel, Shmuel; Zilca, Shlomo
    Abstract: Cochrane’s variance ratio is a leading tool for detection of deviations from random walks in financial asset prices. This Paper develops a variance ratio related regression model that can be used for prediction. We suggest a comprehensive framework for our model, including model identification, model estimation and selection, bias correction, model diagnostic check, and an inference procedure. We use our model to study and model mean reversion in the NYSE index in the period 1825-2002. We demonstrate that in addition to mean reversion, our model can generate other characteristic properties of financial asset prices, such as short-term persistence and volatility clustering of unconditional returns.
    Keywords: mean reversion; persistence; variance ratio
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4729&r=ecm
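    Sketch: the basic variance ratio computed from overlapping q-period log returns, with a simulated random walk as a toy example; the paper's regression-based prediction tool builds on, but goes well beyond, this statistic.
        import numpy as np

        def variance_ratio(prices, q):
            # Var of q-period log returns over q times the Var of 1-period
            # returns; values below one point to mean reversion
            r = np.diff(np.log(prices))
            rq = np.array([r[t:t + q].sum() for t in range(len(r) - q + 1)])
            return rq.var(ddof=1) / (q * r.var(ddof=1))

        rng = np.random.default_rng(3)
        p = np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(3000)))
        print([round(variance_ratio(p, q), 2) for q in (2, 5, 10, 20)])  # near 1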
  12. By: Konstantin A., KHOLODILIN; Wension Vincent, YAO
    Abstract: This paper develops dynamic factor models with regime switching to account for the decreasing volatility of the U.S. economy observed since the mid-1980s. Apart from the Markov switching capturing the cyclical fluctuations, an additional type of regime switching is introduced to allow variances to switch between distinct regimes. The resulting four-regime models extend the univariate analysis currently used in the literature on the structural break in conditional volatility to multivariate time series. Besides the dynamic factor model using data with a single (monthly) frequency, we employ the additional information incorporated in mixed-frequency data, which include not only the monthly component series but also such an important quarterly series as real GDP. The evaluation of six different nonlinear models suggests that the probabilities derived from all the models comply with NBER business cycle dating and detect a one-time shift from the high-variance to the low-variance state in February 1984. In addition, we find that: mixed-frequency models outperform single-frequency models; restricted models outperform unrestricted models; four-regime switching models outperform two-regime switching models.
    Keywords: Volatility; Structural break; Composite coincident indicator; Dynamic factor model; Markov switching; Mixed-frequency data
    JEL: E32 C10
    Date: 2004–09–15
    URL: http://d.repec.org/n?u=RePEc:ctl:louvir:2004024&r=ecm
  13. By: Ray C. Fair (Cowles Foundation, Yale University)
    Abstract: A method is proposed in this paper for predicting Electoral College victory probabilities from state probability data. A "ranking" assumption about dependencies across states is made that greatly simplifies the analysis. The method is used to analyze state probability data from the Intrade political betting market. The Intrade prices of various contracts are quite close to what would be expected under the ranking assumption. Under the joint hypothesis that the Intrade price ranking is correct and the ranking assumption is correct, President Bush should not have won any state ranked below a state that he lost. He did not win any such state. The ranking assumption is also consistent with the fact that the two parties spent essentially nothing in most states in 2004.
    Keywords: Electoral College victory probabilities, political betting markets
    JEL: C10
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1496&r=ecm
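    Sketch: under the ranking assumption, state outcomes are perfectly dependent, so a candidate carries exactly the states whose win probability exceeds a common uniform draw, and the Electoral College victory probability is the win probability of the pivotal state. The three-state example below is hypothetical, not Intrade data.
        def ec_victory_probability(states, needed=270):
            # states: list of (win_probability, electoral_votes) tuples
            ordered = sorted(states, key=lambda s: s[0], reverse=True)
            total = 0
            for p, ev in ordered:
                total += ev
                if total >= needed:
                    return p        # probability attached to the pivotal state
            return 0.0              # cannot reach the required electoral votes

        print(ec_victory_probability([(0.9, 200), (0.6, 80), (0.3, 60)]))  # 0.6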
  14. By: Hjalmarsson, Erik (Department of Economics)
    Abstract: This paper analyzes econometric inference in predictive regressions in a panel data setting. In a traditional time-series framework, estimation and testing are often made difficult by the endogeneity and near persistence of many forecasting variables; tests of whether the dividend-price ratio predicts stock returns are a prototypical example. I show that, by pooling the data, these econometric issues can be dealt with more easily. When no individual intercepts are included in the pooled regression, the pooled estimator has an asymptotically normal distribution and standard tests can be performed. However, when fixed effects are included in the specification, a second order bias in the fixed effects estimator arises from the endogeneity and persistence of the regressors. A new estimator based on recursive demeaning is proposed and its asymptotic normality is derived; the procedure requires no knowledge of the degree of persistence in the regressors and thus sidesteps the main inferential problems in the time-series case. Since forecasting regressions are typically applied to financial or macroeconomic data, the traditional panel data assumption of cross-sectional independence is likely to be violated. New methods for dealing with common factors in the data are therefore also developed. The analytical results derived in the paper are supported by Monte Carlo evidence.
    Keywords: Predictive regression; panel data; pooled regression; cross-sectional dependence; stock return predictability; fully modified estimation; local-to-unity
    JEL: C22 C23
    Date: 2005–02–02
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0160&r=ecm
  15. By: Hjalmarsson, Erik (Department of Economics)
    Abstract: Stock return predictability is a central issue in empirical finance. Yet no comprehensive study of international data has been performed to test the predictive ability of lagged explanatory variables. In fact, most stylized facts are based on U.S. stock-market data. In this paper, I test for stock return predictability in the largest and most comprehensive data set analyzed so far, using four common forecasting variables: the dividend- and earnings-price ratios, the short interest rate, and the term spread. The data contain over 20,000 monthly observations from 40 international markets, including markets in 22 of the 24 OECD countries. I also develop new asymptotic results for long-run regressions with overlapping observations. I show that rather than using auto-correlation robust standard errors, the standard t-statistic can simply be divided by the square root of the forecasting horizon to correct for the effects of the overlap in the data. Further, when the regressors are persistent and endogenous, the long-run OLS estimator suffers from the same problems as does the short-run OLS estimator, and similar corrections and test procedures as those proposed by Campbell and Yogo (2003) for the short-run case should also be used in the long-run; again, the resulting test statistics should be scaled due to the overlap. The empirical analysis conducts time-series regressions for individual countries as well as pooled regressions. The results indicate that the short interest rate and the term spread are fairly robust predictors of stock returns in OECD countries. The predictive abilities of both the short rate and the term spread are short-run phenomena; in particular, there is only evidence of predictability at one and 12-month horizons. In contrast to the interest rate variables, no strong or consistent evidence of predictability is found when considering the earnings- and dividend-price ratios as predictors. Any evidence that is found is primarily seen at the long-run horizon of 60 months. Neither of these predictors yields any consistent predictive power for the OECD countries. The interest rate variables also have out-of-sample predictive power that is economically significant; the welfare gains to a log-utility investor who uses the predictive ability of these variables to make portfolio decisions are substantial.
    Keywords: Predictive regressions; long-horizon regressions; panel data; stock return predictability
    JEL: C22 C23 G12 G15
    Date: 2005–02–02
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0161&r=ecm
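    Sketch: a long-horizon regression with overlapping q-period returns in which, following the scaling result described above, the OLS t-statistic is divided by the square root of the horizon rather than computed with autocorrelation-robust standard errors; the data-generating process and names are illustrative.
        import numpy as np

        def longrun_tstat(r, x, q):
            # regress the cumulative return over t+1,...,t+q on the predictor x_t
            T = len(r)
            rq = np.array([r[t + 1:t + 1 + q].sum() for t in range(T - q)])
            X = np.column_stack([np.ones(T - q), x[:T - q]])
            beta, *_ = np.linalg.lstsq(X, rq, rcond=None)
            resid = rq - X @ beta
            s2 = resid @ resid / (len(rq) - 2)
            se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
            return (beta[1] / se) / np.sqrt(q)   # horizon-scaled t-statistic

        rng = np.random.default_rng(4)
        x = rng.standard_normal(600)
        r = 0.05 * np.concatenate(([0.0], x[:-1])) + rng.standard_normal(600)
        print(round(longrun_tstat(r, x, q=12), 2))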
  16. By: Antonio Lijoi; Ramsés H. Mena; Igor Prünster
    Abstract: In recent years the Dirichlet process prior has experienced a great success in the context of Bayesian mixture modelling. The idea of overcoming discreteness of its realizations by exploiting it in hierarchical models, combined with the development of suitable sampling techniques, represents one of the reasons for its popularity. In this paper we propose the normalized inverse Gaussian process as an alternative to the Dirichlet process to be used in Bayesian hierarchical models. The normalized inverse Gaussian prior is constructed via its finite-dimensional distributions. This prior, though sharing the discreteness property of the Dirichlet prior, is characterized by a more elaborate and sensible clustering which makes use of all the information contained in the data. While in the Dirichlet case the mass assigned to each observation depends solely on the number of times it occurred, for the normalized inverse Gaussian prior the weight of a single observation heavily depends on the whole number of ties in the sample. Moreover, expressions corresponding to relevant statistical quantities, such as a priori moments and the predictive distributions, are as tractable as those arising from the Dirichlet process. This implies that well-established sampling schemes can be easily extended to cover hierarchical models based upon the normalized inverse Gaussian process. The mixture of normalized inverse Gaussian process and the mixture of Dirichlet process are compared by means of two examples involving mixtures of normals.
    Keywords: Bayesian nonparametrics; Density estimation; Dirichlet process; Inverse Gaussian distribution; Mixture models; Predictive distribution; Semiparametric inference
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:icr:wpmath:12-2004&r=ecm
  17. By: Antonio Lijoi; Igor Prünster; Stephen G. Walker
    Abstract: Consistency of Bayesian nonparametric procedures has been the focus of a considerable amount of research. Here we deal with strong consistency for Bayesian density estimation. An awkward consequence of inconsistency is pointed out. We investigate reasons for inconsistency and precisely identify the notion of “data tracking”. Specific examples in which this phenomenon cannot occur are discussed. When it can happen, we show how and where things can go wrong, in particular the type of sets where the posterior can put mass.
    Keywords: Bayesian consistency; Density estimation; Hellinger distance; Weak neighborhood
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:icr:wpmath:13-2004&r=ecm
  18. By: Antonio Lijoi; Igor Prünster; Stephen G. Walker
    Abstract: The past decade has seen a remarkable development in the area of Bayesian nonparametric inference both from a theoretical and applied perspective. As for the latter, the celebrated Dirichlet process has been successfully exploited within Bayesian mixture models leading to many interesting applications. As for the former, some new discrete nonparametric priors have been recently proposed in the literature: their natural use is as alternatives to the Dirichlet process in a Bayesian hierarchical model for density estimation. When using such models for concrete applications, an investigation of their statistical properties is mandatory. Among them a prominent role is to be assigned to consistency. Indeed, strong consistency of Bayesian nonparametric procedures for density estimation has been the focus of a considerable amount of research and, in particular, much attention has been devoted to the normal mixture of Dirichlet process. In this paper we improve on previous contributions by establishing strong consistency of the mixture of Dirichlet process under fairly general conditions: besides the usual Kullback–Leibler support condition, consistency is achieved by finiteness of the mean of the base measure of the Dirichlet process and an exponential decay of the prior on the standard deviation. We show that the same conditions are sufficient for mixtures based on priors more general than the Dirichlet process as well. This leads to the easy establishment of consistency for many recently proposed mixture models.
    Keywords: Bayesian nonparametrics; Density estimation; Mixture of Dirichlet process; Normal mixture model; Random discrete distribution; Strong consistency
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:icr:wpmath:23-2004&r=ecm
  19. By: Antonio Lijoi; Igor Prünster; Stephen G. Walker
    Abstract: This paper introduces a new approach to the study of rates of convergence for posterior distributions. It is a natural extension of a recent approach to the study of Bayesian consistency. Crucially, no sieve or entropy measures are required and so rates do not depend on the rate of convergence of the corresponding sieve maximum likelihood estimator. In particular, we improve on current rates for mixture models.
    Keywords: Hellinger consistency; mixture of Dirichlet process; posterior distribution; rates of convergence
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:icr:wpmath:24-2004&r=ecm
  20. By: Fabio Maccheroni; Massimo Marinacci
    Abstract: We consider a totally monotone capacity on a Polish space and a sequence of bounded p.i.i.d. random variables. We show that, on a full set, any cluster point of empirical averages lies between the lower and the upper Choquet integrals of the random variables, provided either the random variables or the capacity are continuous.
    Keywords: Capacities; Choquet integral; Strong law of large numbers
    Date: 2004–06
    URL: http://d.repec.org/n?u=RePEc:icr:wpmath:28-2004&r=ecm
  21. By: Poirier, Dale J; Tobias, Justin
    Abstract: Basic principles of Bayesian statistics and econometrics are reviewed. The topics covered include point and interval estimation, hypothesis testing, prediction, model building and choice of prior. We also review in very general terms recent advances in computational methods and illustrate the use of these techniques with an application.
    Date: 2005–02–08
    URL: http://d.repec.org/n?u=RePEc:isu:genres:12245&r=ecm
  22. By: Li, Mingliang; Tobias, Justin
    Abstract: We describe a new simulation-based algorithm for Bayesian estimation of structural effects in models where the outcome of interest and an endogenous treatment variable are ordered. Our algorithm makes use of a reparameterization, suggested by Nandram and Chen (1996) in the context of a single equation ordered-probit model, which significantly improves the mixing of the standard Gibbs sampler. We illustrate the improvements afforded by this new algorithm in a generated data experiment and also make use of our methods in an empirical application. Specifically, we take data from the National Longitudinal Survey of Youth (NLSY) and investigate the impact of maternal alcohol consumption on early infant health. Our results show clear evidence that the health outcomes of infants whose mothers drink while pregnant are worse than the outcomes of infants whose mothers never consumed alcohol while pregnant. In addition, the estimated parameters clearly suggest the need to control for the endogeneity of maternal alcohol consumption.
    Date: 2005–02–08
    URL: http://d.repec.org/n?u=RePEc:isu:genres:12247&r=ecm
  23. By: Constantijn W.A. Panis (RAND)
    Abstract: This paper develops a method that improves researchers’ ability to account for behavioral responses to policy change in microsimulation models. Current microsimulation models are relatively simple, in part because of the technical difficulty of accounting for unobserved heterogeneity. This is all the more problematic because data constraints typically force researchers to limit their forecasting models to relatively few, mostly time-invariant explanatory covariates, so that much of the variation across individuals is unobserved. Furthermore, failure to account for unobservables often leads to biased estimates of structural parameters, which are critically important for measuring behavioral responses. This paper develops a theoretical approach to incorporate (univariate and multivariate) unobserved heterogeneity into microsimulation models; illustrates computer algorithms to efficiently implement heterogeneity in continuous and limited dependent variable models; and evaluates the importance of unobserved heterogeneity by conducting Monte Carlo simulations.
    Date: 2003–05
    URL: http://d.repec.org/n?u=RePEc:mrr:papers:wp048&r=ecm
  24. By: Shripad Tuljapurkar (Stanford University); Ronald D. Lee (University of California); Qi Li (Stanford University)
    Abstract: Probabilistic population forecasts are useful because they describe uncertainty in a quantitatively useful way. One approach (that we call LT) uses historical data to estimate stochastic models (e.g., a time series model) of vital rates, and then makes forecasts. Another (we call it RS) began as a kind of randomized scenario: we consider its simplest variant, in which expert opinion is used to make probability distributions for terminal vital rates, and smooth trajectories are followed over time. We use analysis and examples to show several key differences between these methods: serial correlations in the forecast are much smaller in LT; the variance in LT models of vital rates (especially fertility) is much higher than in RS models that are based on official expert scenarios; trajectories in LT are much more irregular than in RS; probability intervals in LT tend to widen faster over forecast time. Newer versions of RS have been developed that reduce or eliminate some of these differences.
    Date: 2004–04
    URL: http://d.repec.org/n?u=RePEc:mrr:papers:wp073&r=ecm
  25. By: Rob J. Hyndman; Md. Shahid Ullah
    Abstract: We propose a new method for forecasting age-specific mortality and fertility rates observed over time. Our approach allows for smooth functions of age, is robust to outlying years due to wars and epidemics, and provides a modelling framework that is easily adapted to allow for constraints and other information. We combine ideas from functional data analysis, nonparametric smoothing and robust statistics to form a methodology that is widely applicable to any functional time series data, and to age-specific mortality and fertility in particular. We show that our model is a generalization of the Lee-Carter model commonly used in mortality and fertility forecasting. The methodology is applied to French mortality data and Australian fertility data, and we show that the forecasts obtained are superior to those from the Lee-Carter method and several of its variants.
    Keywords: Fertility Forecasting, Functional Data, Mortality Forecasting, Nonparametric Smoothing, Principal Components, Robustness.
    JEL: J11 C53 C14 C32
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2005-2&r=ecm
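    Sketch: the classical Lee-Carter model fitted by an SVD of centred log mortality rates with a random-walk-with-drift forecast of the period index; the functional approach above generalizes this by smoothing the curves, using several principal components and robust estimation. The synthetic data are purely illustrative.
        import numpy as np

        def lee_carter_forecast(log_mx, h=20):
            # log_mx: (n_years, n_ages) matrix of log age-specific rates
            a = log_mx.mean(axis=0)                    # average age pattern
            U, s, Vt = np.linalg.svd(log_mx - a, full_matrices=False)
            k = U[:, 0] * s[0]                         # period index k_t
            b = Vt[0]                                  # age loadings b_x
            drift = (k[-1] - k[0]) / (len(k) - 1)      # random walk with drift
            k_future = k[-1] + drift * np.arange(1, h + 1)
            return a + np.outer(k_future, b)           # forecast log rates

        rng = np.random.default_rng(5)
        trend = -0.01 * np.arange(50)[:, None] * np.linspace(0.5, 1.5, 10)
        log_mx = -3.0 + trend + 0.02 * rng.standard_normal((50, 10))
        print(lee_carter_forecast(log_mx, h=5).shape)   # (5, 10)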
  26. By: Bircan Erbas; Rob J. Hyndman; Dorota M. Gertig
    Abstract: Accurate estimates of future age-specific incidence and mortality are critical for the allocation of resources to breast cancer control programs and the evaluation of screening programs. The purpose of this study is to apply functional data analysis techniques to model age-specific breast cancer mortality time trends, and to forecast the entire age-specific mortality function using a state-space approach. We use yearly unadjusted breast cancer mortality rates in Australia, from 1921 to 2001, in 5-year age groups (45 to 85+). We use functional data analysis techniques where mortality and incidence are modeled as curves with age as a functional covariate varying by time. The data are smoothed using nonparametric methods and then decomposed (using principal components analysis) to estimate basis functions that represent the functional curve. Period effects from the fitted functions are forecast and then multiplied by the basis functions, resulting in a forecast mortality curve with prediction intervals. To forecast, we adopt a state-space approach and an extension of the Pegels modeling framework for selecting among exponential smoothing methods. Overall, breast cancer mortality rates in Australia remained relatively stable from 1960 to the late 1990s but declined over the last few years. A set of K=4 basis functions minimized the mean integrated squared forecasting error (MISFE) and accounts for 99.3% of variation around the mean mortality curve. Twenty-year forecasts suggest a continued decline at a slower rate, stabilizing beyond 2010; by age, forecasts show a decline in all age groups, with the greatest decline among older women. We illustrate the utility of a new modelling and forecasting approach to breast cancer mortality rates using a functional model of age. The methods have the potential to incorporate important covariates such as Hormone Replacement Therapy (HRT) and interventions that represent mammographic screening. This would be particularly useful for evaluating the impact of screening on mortality and incidence from breast cancer.
    Keywords: Mortality, Breast Cancer, Forecasting, Functional Data Analysis, Exponential Smoothing
    JEL: I12 J11 C53
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2005-3&r=ecm
  27. By: Todd E. Clark; Kenneth D. West
    Abstract: We consider using out-of-sample mean squared prediction errors (MSPEs) to evaluate the null that a given series follows a zero mean martingale difference against the alternative that it is linearly predictable. Under the null of no predictability, the population MSPE of the null “no change” model equals that of the linear alternative. We show analytically and via simulations that despite this equality, the alternative model’s sample MSPE is expected to be greater than the null’s. For rolling regression estimators of the alternative model’s parameters, we propose and evaluate an asymptotically normal test that properly accounts for the upward shift of the sample MSPE of the alternative model. Our simulations indicate that our proposed procedure works well.
    JEL: C22 C53
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0305&r=ecm
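    Sketch: an out-of-sample comparison of a "no change" (zero) forecast with a rolling-regression alternative, using an adjustment for the alternative's estimation noise in the spirit of the test described above; the window length, predictor timing and names are illustrative assumptions.
        import numpy as np

        def mspe_adjusted_tstat(y, xlag, window=120):
            # xlag[t] is the predictor known before y[t] is realized
            f = []
            for t in range(window, len(y)):
                X = np.column_stack([np.ones(window), xlag[t - window:t]])
                beta, *_ = np.linalg.lstsq(X, y[t - window:t], rcond=None)
                yhat = beta[0] + beta[1] * xlag[t]
                # null squared error minus (alternative squared error net of
                # the pure estimation-noise term yhat**2)
                f.append(y[t] ** 2 - ((y[t] - yhat) ** 2 - yhat ** 2))
            f = np.array(f)
            return f.mean() / (f.std(ddof=1) / np.sqrt(len(f)))

        rng = np.random.default_rng(6)
        y = rng.standard_normal(400)       # martingale-difference series
        xlag = rng.standard_normal(400)    # irrelevant predictor
        print(round(mspe_adjusted_tstat(y, xlag), 2))  # compare with N(0,1) quantiles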
  28. By: B. LARIVIÈRE; D. VAN DEN POEL
    Abstract: In an era of strong customer relationship management (CRM) emphasis, firms strive to build valuable relationships with their existing customer base. In this study we attempt to better understand three important measures of customer outcome: next buy, partial defection and customers’ profitability evolution. By means of random forests techniques we investigate a broad set of explanatory variables, including past customer behavior, observed customer heterogeneity and some typical variables related to intermediaries. We analyze a real-life sample of 100,000 customers taken from the data warehouse of a large European financial services company. Two types of random forests techniques are employed to analyze the data: random forests are used for binary classification, whereas regression forests are applied for the models with linear dependent variables. Our research findings demonstrate that both random forests techniques provide a better fit for the estimation and validation sample compared to ordinary linear regression and logistic regression models. Furthermore, we find evidence that the same set of variables has a different impact on buying versus defection versus profitability behavior. Our findings suggest that past customer behavior is more important to generate repeat purchasing and favorable profitability evolutions, while the intermediary’s role has a greater impact on the customers’ defection proneness. Finally, our results demonstrate the benefits of analyzing different customer outcome variables simultaneously, since an extended investigation of the next buy - partial defection - customer profitability triad indicates that one cannot fully understand a particular outcome without understanding the other related behavioral outcome variables.
    Keywords: Data mining, Customer relationship management, Customer retention and profitability, Random forests and regression forests.
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:04/282&r=ecm
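    Sketch: the two-pronged modelling strategy described above on synthetic stand-in data, using scikit-learn's random forest classifier for the binary retention outcome and its regression forest for the continuous profitability outcome; the features, sample size and thresholds are made up for illustration.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        X = rng.standard_normal((5000, 8))                  # behavioural features
        churn = (X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(5000) > 0.8).astype(int)
        profit_change = 2.0 * X[:, 2] + X[:, 3] + rng.standard_normal(5000)

        X_tr, X_te, c_tr, c_te, p_tr, p_te = train_test_split(
            X, churn, profit_change, test_size=0.3, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, c_tr)
        reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, p_tr)
        print("churn accuracy:", round(clf.score(X_te, c_te), 3))
        print("profitability R^2:", round(reg.score(X_te, p_te), 3))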
  29. By: G. PEERSMAN
    Abstract: This paper shows how sign restrictions can be used to identify symmetric and asymmetric shocks in a simple two-country structural VAR. Specifically, the effects of symmetric and asymmetric supply, demand and monetary policy shocks as well as pure exchange rate shocks are estimated. The results can be used to deal with two issues. First, it is possible to estimate the relative importance of symmetric, asymmetric and pure exchange rate shocks across two countries or areas, which provides information about the degree of business cycle synchronization. Second, it is also possible to evaluate the relative importance of these shocks in determining exchange rate fluctuations, which can deliver answers to questions like ’Is the exchange rate a shock absorber or source of shocks?’. Evidence is provided for the UK versus the Euro area and compared with the US as a benchmark.
    Keywords: exchange rates, symmetric and asymmetric shocks, vector autoregressions
    JEL: C32 E42 F31 F33
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:05/286&r=ecm
  30. By: Marc Potters (Science & Finance, Capital Fund Management); Szilard Pafka; Imre Kondor
    Abstract: We introduce a covariance matrix estimator that both takes into account the heteroskedasticity of financial returns (by using an exponentially weighted moving average) and reduces the effective dimensionality of the estimation (and hence measurement noise) via techniques borrowed from random matrix theory. We calculate the spectrum of large exponentially weighted random matrices (whose upper band edge needs to be known for the implementation of the estimation) analytically, by a procedure analogous to that used for standard random matrices. Finally, we illustrate, on empirical data, the superiority of the newly introduced estimator in a portfolio optimization context over both the method of exponentially weighted moving averages and the uniformly-weighted random-matrix-theory-based filtering.
    JEL: G10
    URL: http://d.repec.org/n?u=RePEc:sfi:sfiwpa:500050&r=ecm
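    Sketch: an exponentially weighted correlation matrix followed by eigenvalue clipping at the Marchenko-Pastur upper edge for i.i.d. data, (1 + sqrt(N/T))^2; the paper derives the analogous edge for exponentially weighted matrices, so the edge and decay parameter used here are simplifying assumptions.
        import numpy as np

        def ewma_correlation(returns, decay=0.94):
            # exponentially weighted correlation of a (T, N) return matrix
            T, _ = returns.shape
            w = decay ** np.arange(T - 1, -1, -1)
            w /= w.sum()
            xc = returns - w @ returns
            cov = (xc * w[:, None]).T @ xc
            d = np.sqrt(np.diag(cov))
            return cov / np.outer(d, d)

        def rmt_filter(corr, T, N):
            # clip eigenvalues below the i.i.d. Marchenko-Pastur upper edge
            lam, vec = np.linalg.eigh(corr)
            noise = lam < (1.0 + np.sqrt(N / T)) ** 2
            if noise.any():
                lam[noise] = lam[noise].mean()     # keep the trace roughly unchanged
            filtered = (vec * lam) @ vec.T
            d = np.sqrt(np.diag(filtered))
            return filtered / np.outer(d, d)

        rng = np.random.default_rng(8)
        R = rng.standard_normal((500, 50))             # toy i.i.d. "returns"
        C = rmt_filter(ewma_correlation(R), T=500, N=50)
        print(round(np.linalg.eigvalsh(C).max(), 2))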
  31. By: Marc Potters (Science & Finance, Capital Fund Management); Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;); Laurent Laloux (Science & Finance, Capital Fund Management); Pierre Cizeau (Science & Finance, Capital Fund Management)
    Abstract: We show that results from the theory of random matrices are potentially of great interest to understand the statistical structure of the empirical correlation matrices appearing in the study of price fluctuations. The central result of the present study is the remarkable agreement between the theoretical prediction (based on the assumption that the correlation matrix is random) and empirical data concerning the density of eigenvalues associated with the time series of the different stocks of the S&P 500 (or other major markets). In particular the present study raises serious doubts on the blind use of empirical correlation matrices for risk management.
    JEL: G10
    URL: http://d.repec.org/n?u=RePEc:sfi:sfiwpa:500051&r=ecm
  32. By: Rolf Aaberge (Statistics Norway)
    Abstract: A major aim of most income distribution studies is to make comparisons of income inequality across time for a given country and/or to compare and rank different countries according to the level of income inequality. However, most of these studies lack information on sampling errors, which makes it difficult to judge the significance of the attained rankings. The purpose of this paper is to derive the asymptotic properties of the empirical rank-dependent family of inequality measures. A favourable feature of this family of inequality measures is that it includes the Gini coefficients, and that any member of this family can be given an explicit and simple expression in terms of the Lorenz curve. By relying on a result of Doksum (1974) it is easily demonstrated that the empirical Lorenz curve, regarded as a stochastic process, converges to a Gaussian process. Moreover, this result forms the basis of the derivation of the asymptotic properties of the empirical rank-dependent measures of inequality.
    Keywords: The Lorenz curve; the Gini coefficient; rank-dependent measures of inequality; nonparametric estimation methods; asymptotic distribution theory.
    JEL: C14 D
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:402&r=ecm
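    Sketch: the empirical Lorenz curve and the Gini coefficient, the best-known member of the rank-dependent family analysed above; the lognormal sample is a toy example, and the sampling-error results that are the paper's subject are not computed here.
        import numpy as np

        def lorenz_curve(income):
            # cumulative income shares at cumulative population shares
            x = np.sort(np.asarray(income, dtype=float))
            return np.arange(1, len(x) + 1) / len(x), np.cumsum(x) / x.sum()

        def gini(income):
            # closed-form sample Gini based on ranks of the sorted incomes
            x = np.sort(np.asarray(income, dtype=float))
            n = len(x)
            ranks = np.arange(1, n + 1)
            return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

        rng = np.random.default_rng(9)
        sample = rng.lognormal(mean=0.0, sigma=0.8, size=10000)
        print(round(gini(sample), 3))   # roughly 0.43 for this lognormal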
  33. By: Laurens Cherchye; Bruno De Borger; Tom Van Puyenbroeck
    Abstract: We develop a positive non-parametric model of public sector production that allows us to test whether an implicit procedure of cost minimization at shadow prices can rationalize the outcomes of public sector activities. The basic model focuses on multiple C-outputs and does not imply any explicit or implicit assumption regarding the trade-offs between the different inputs (in terms of relative shadow prices) or outputs (in terms of relative valuation). The proposed methodology is applied to a cross-section sample of 546 Belgian municipal police forces. Drawing on detailed task-allocation data and controlling, among others, for the presence of state police forces, the cost minimization hypothesis is found to provide a good fit of the data. Imposing additional structure on output valuation, derived from available ordinal information, yields equally convincing goodness-of-fit results. By contrast, we find that aggregating the labor input over task specializations, a common practice in efficiency assessments of police departments, entails a significantly worse fit of the data.
    Keywords: Public agencies, optimizing behavior, nonparametric production, local police departments
    JEL: C14 C61 D21 D24
    Date: 2004
    URL: http://d.repec.org/n?u=RePEc:wpe:papers:ces0416&r=ecm
  34. By: Atsushi Inoue (North Carolina State University); Mototsugu Shintani (Department of Economics, Vanderbilt University)
    Abstract: This paper establishes that the bootstrap provides asymptotic refinements for the generalized method of moments estimator of overidentified linear models when autocorrelation structures of moment functions are unknown. When moment functions are uncorrelated after finite lags, Hall and Horowitz (1996) showed that errors in the rejection probabilities of the symmetrical t test and the test of overidentifying restrictions based on the bootstrap are O(T^-1). In general, however, such a parametric rate cannot be obtained with the heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimator since it converges at a nonparametric rate that is slower than T^-1/2. By taking into account the HAC covariance matrix estimator in the Edgeworth expansion, we show that the bootstrap provides asymptotic refinements when kernels whose characteristic exponent is greater than two are used. Moreover, we find that the order of the bootstrap approximation error can be made arbitrarily close to o(T^-1) provided moment conditions are satisfied. The bootstrap approximation thus improves upon the first-order asymptotic approximation even when there is a general autocorrelation. A Monte Carlo experiment shows that the bootstrap improves the accuracy of inference on regression parameters in small samples. We apply our bootstrap method to inference about the parameters in the monetary policy reaction function.
    Keywords: Asymptotic refinements, block bootstrap, dependent data, Edgeworth expansions, instrumental variables
    JEL: C12 C22 C32
    Date: 2001–12
    URL: http://d.repec.org/n?u=RePEc:van:wpaper:0129&r=ecm
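    Sketch: the moving-block resampling step that underlies bootstrap methods for dependent data; applying it to a full GMM estimator with a HAC weighting matrix, as the paper does, involves considerably more machinery, so the AR(1) mean example and block length below are illustrative only.
        import numpy as np

        def moving_block_bootstrap(data, block_len, rng):
            # concatenate randomly chosen overlapping blocks until length T
            T = data.shape[0]
            n_blocks = int(np.ceil(T / block_len))
            starts = rng.integers(0, T - block_len + 1, size=n_blocks)
            idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
            return data[idx]

        rng = np.random.default_rng(10)
        y = np.zeros(500)
        for t in range(1, 500):                       # AR(1) with coefficient 0.5
            y[t] = 0.5 * y[t - 1] + rng.standard_normal()
        boot_means = [moving_block_bootstrap(y, 12, rng).mean() for _ in range(999)]
        print(round(np.std(boot_means), 3))           # bootstrap s.e. of the mean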
  35. By: Xiaohong Chen (Department of Economics, New York University); Yanqin Fan (Department of Economics, Vanderbilt University)
    Abstract: In this paper, we develop a general approach for constructing simple tests for correct density forecasts, or equivalently, for i.i.d. uniformity of appropriately transformed random variables. It is based on nesting a series of i.i.d. uniform random variables into a class of copula-based stationary Markov processes. As such, it can be used to test for i.i.d. uniformity against alternative processes that exhibit a wide variety of marginal properties and temporal dependence properties, including skewed and fat-tailed marginal distributions, asymmetric dependence, and positive tail dependence. In addition, we develop tests for the dependence structure of the forecasting model that are robust to possible misspecification of the marginal distribution.
    Keywords: Density forecasts, Gaussian copula, probability integral transform, nonlinear time series
    JEL: C22 C52 C53
    Date: 2002–10
    URL: http://d.repec.org/n?u=RePEc:van:wpaper:0225&r=ecm
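    Sketch: probability integral transforms of the realizations under the issued density forecasts, checked for marginal uniformity with a Kolmogorov-Smirnov test and for first-order serial correlation; the paper's copula-based tests address the full i.i.d. uniformity hypothesis jointly, so this is only the simplest ingredient, with an artificial N(0,1) forecaster as the example.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        y = rng.standard_normal(500)                       # realized data
        forecast_cdf = stats.norm(loc=0.0, scale=1.0).cdf  # issued density forecast
        u = forecast_cdf(y)                                # PIT series u_t = F_t(y_t)
        print(stats.kstest(u, "uniform"))                  # marginal uniformity check
        print(round(np.corrcoef(u[:-1], u[1:])[0, 1], 3))  # crude serial dependence check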
  36. By: Reinhold Kosfeld (Department of Economics, University of Kassel); Jorgen Lauridsen (University of Southern Denmark, Department of Economics)
    Abstract: In the presence of multicollinearity, principal component regression (PCR) is sometimes suggested for the estimation of the regression coefficients of a multiple regression model. Due to ambiguities of interpretation introduced by the orthogonal transformation of the set of explanatory variables, the method has not yet gained wide acceptance. Factor analysis regression (FAR) provides a model-based estimation method which is particularly tailored to overcome multicollinearity in an errors-in-variables setting. In this paper we present a new FAR estimator that proves to be unbiased and consistent for the coefficient vector of a multiple regression model given the parameters of the measurement model. The behaviour of feasible FAR estimators in the general case of completely unknown model parameters is studied in comparison with the OLS estimator by means of Monte Carlo simulation.
    Keywords: Factor Analysis Regression, Multicollinearity, Factor model, Errors in Variables
    JEL: C13 C20 C51
    Date: 2004–05
    URL: http://d.repec.org/n?u=RePEc:kas:wpaper:2004-57&r=ecm
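    Sketch: the principal component regression benchmark discussed in the abstract, on two deliberately collinear regressors; FAR replaces the components with factors estimated from an explicit measurement (errors-in-variables) model, which is not reproduced here, and the data and component count are illustrative.
        import numpy as np

        def pcr(y, X, n_components):
            # regress y on the leading principal components of standardized X,
            # then map the coefficients back to the original variables
            mean, std = X.mean(axis=0), X.std(axis=0)
            Xs = (X - mean) / std
            _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
            Z = np.column_stack([np.ones(len(y)), Xs @ Vt[:n_components].T])
            gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
            slopes = (Vt[:n_components].T @ gamma[1:]) / std
            return gamma[0] - mean @ slopes, slopes      # intercept, slopes

        rng = np.random.default_rng(12)
        x1 = rng.standard_normal(300)
        x2 = x1 + 0.05 * rng.standard_normal(300)        # severe multicollinearity
        X = np.column_stack([x1, x2])
        y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.standard_normal(300)
        print(pcr(y, X, n_components=1))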

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.