nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒04‒24
nineteen papers chosen by
Sune Karlsson
Orebro University

  1. Chain ladder method: Bayesian bootstrap versus classical bootstrap By Gareth W. Peters; Mario V. Wüthrich; Pavel V. Shevchenko
  2. A distribution-free transform of the residuals sample autocorrelations with application to model checking By Miguel A. Delgado; Carlos Velasco
  3. Optimal predictions of powers of conditionally heteroskedastic processes By Francq, Christian; Zakoian, Jean-Michel
  4. A Semiparametric Panel Model for Unbalanced Data with Application to Climate Change in the United Kingdom By Atak, Alev; Linton, Oliver B.; Xiao, Zhijie
  5. Supervised Principal Components and Factor Instrumental Variables. An Application to Violent Crime Trends in the US, 1982-2005. By Travaglini, Guido
  6. Explicit Solutions for the Asymptotically-Optimal Bandwidth in Cross Validation By Karim Abadir; Michel Lubrano
  7. The Power of Some Standard Tests of Stationarity against Changes in the Unconditional Variance. By Ibrahim Ahamada; Mohamed Boutahar
  8. Fitting high-dimensional Copulae to Data By Ostap Okhrin
  9. Hermite Regression Analysis of Multi-Modal Count Data By David E. Giles
  10. Forecasting Nonlinear Aggregates and Aggregates with Time-varying Weights By Helmut Luetkepohl
  11. The Duration of Trade Revisited: Continuous-Time vs. Discrete-Time Hazards By Hess, Wolfgang; Persson, Maria
  12. On the Stationarity of Current Account Deficits in the European Union By Mark J. Holmes; Jesús Otero; Theodore Panagiotidis
  13. Classical vs Wavelet-Based Filters: Comparative Study and Application to the Business Cycle. By Ibrahim Ahamada; Philippe Jolivaldt
  14. A Direct Test of Rational Bubbles By Friedrich Geiecke; Mark Trede
  15. A Goodness-of-fit Test for Copulas By Wanling Huang; Artem Prokhorov
  16. How Risky Is the Value at Risk? By Roxana Chiriac; Winfried Pohlmeier
  17. How to use data swapping to create useful dummy data for panel datasets By Jacobebbinghaus, Peter; Müller, Dana; Orban, Agnes
  18. Protesting or Justifying? A Latent Class Model for Contingent Valuation with Attitudinal Data By Cunha-e-Sa, Maria Antonieta; Madureira, Livia; Nunes, Luis Catela; Otrachshenko, Vladimir
  19. Tapping the Supercomputer Under Your Desk: Solving Dynamic Equilibrium Models with Graphics Processors By Eric M. Aldrich; Jesús Fernández-Villaverde; A. Ronald Gallant; Juan F. Rubio-Ramírez

  1. By: Gareth W. Peters; Mario V. Wüthrich; Pavel V. Shevchenko
    Abstract: The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure is developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or, in this case, simulate directly from the likelihood model. The use of a bootstrap procedure allows us to generate samples from the intractable likelihood without requiring distributional assumptions; this is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then obtain predictive Bayesian capital estimates, the Value at Risk (VaR) and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1004.2548&r=ecm
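    Code sketch: a minimal Python illustration of the ABC rejection idea underlying the abstract above — accept parameter draws whose simulated summary statistics fall close to the observed ones. The prior, simulator and summary statistics are hypothetical stand-ins, not the authors' DFCL/bootstrap setup.

      import numpy as np

      rng = np.random.default_rng(0)

      # Observed data (stand-in): i.i.d. draws from an unknown location-scale model.
      observed = rng.normal(loc=2.0, scale=1.5, size=200)

      def summary(x):
          # Summary statistics used to compare simulated and observed data.
          return np.array([np.mean(x), np.std(x)])

      def prior_draw():
          # Hypothetical prior: uniform location and scale.
          return rng.uniform(0.0, 5.0), rng.uniform(0.1, 3.0)

      def simulate(mu, sigma, n):
          # Stand-in simulator; in the paper this role is played by a bootstrap
          # resampling scheme that avoids parametric assumptions.
          return rng.normal(mu, sigma, size=n)

      s_obs = summary(observed)
      tol = 0.1
      accepted = []
      for _ in range(50_000):
          mu, sigma = prior_draw()
          s_sim = summary(simulate(mu, sigma, observed.size))
          if np.linalg.norm(s_sim - s_obs) < tol:      # ABC rejection step
              accepted.append((mu, sigma))

      accepted = np.array(accepted)
      print("accepted draws:", accepted.shape[0])
      print("approximate posterior means (mu, sigma):", accepted.mean(axis=0))
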
  2. By: Miguel A. Delgado; Carlos Velasco
    Abstract: We propose an asymptotically distribution-free transform of the sample autocorrelations of residuals in general parametric time series models, possibly non-linear in variables. The residuals autocorrelation function is the basic model checking tool in time series analysis, but it is useless when its distribution is incorrectly approximated because the effects of parameter estimation or of unnoticed higher order serial dependence have not been taken into account. The limiting distribution of residuals sample autocorrelations may be difficult to derive, particularly when the underlying innovations are not independent. However, the transformation we propose is easy to implement and the resulting transformed sample autocorrelations are asymptotically distributed as independent standard normals, providing a useful and intuitive device for model checking by taking over the role of the standard sample autocorrelations. We also discuss in detail alternatives to the classical Box-Pierce and Bartlett's Tp-process tests, showing that our transform entails no efficiency loss under Gaussianity. The finite sample performance of the procedures is examined in the context of a Monte Carlo experiment for the two goodness-of-fit tests discussed in the article. The proposed methodology is applied to modeling the autocovariance structure of the well known chemical process temperature reading data already used for the illustration of other statistical procedures.
    Keywords: Residuals autocorrelation function, Asymptotically pivotal statistics, Nonlinear in variables models, Long memory, Higher order serial dependence, Recursive residuals, Model checking, Local alternatives
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we101707&r=ecm
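    Code sketch: the classical residual-autocorrelation diagnostics (Box-Pierce/Ljung-Box) against which the proposed transform is benchmarked, using statsmodels on simulated data; the fitted model and lag choices are illustrative assumptions, and the paper's distribution-free transform itself is not reproduced here.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(1)
      # Illustrative data: an AR(1) process.
      y = np.zeros(500)
      for t in range(1, 500):
          y[t] = 0.6 * y[t - 1] + rng.normal()

      # Fit a candidate model and inspect the residual autocorrelations.
      res = ARIMA(y, order=(1, 0, 0)).fit()
      lb = acorr_ljungbox(res.resid, lags=[5, 10, 20], boxpierce=True)
      # Note: these p-values ignore the effect of parameter estimation, which is
      # precisely the problem the paper's transform is designed to remove.
      print(lb)
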
  3. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: In conditionally heteroskedastic models, the optimal prediction of powers, or logarithms, of the absolute process has a simple expression in terms of the volatility process and an expectation involving the independent process. A standard procedure for estimating this prediction is to estimate the volatility by Gaussian quasi-maximum likelihood (QML) in a first step, and to use empirical means based on rescaled innovations to estimate the expectation in a second step. This paper proposes an alternative one-step procedure, based on an appropriate non-Gaussian QML estimation of the model, and establishes the asymptotic properties of the two approaches. Their performances are compared for finite-order GARCH models and for the infinite ARCH. For the standard GARCH(p,q) and the Asymmetric Power GARCH(p,q), it is shown that the asymptotic relative efficiency (ARE) of the estimators only depends on the prediction problem and some moments of the independent process. An application to indexes of major stock exchanges is proposed.
    Keywords: APARCH; Infinite ARCH; Conditional Heteroskedasticity; Efficiency of estimators; GARCH; Prediction; Quasi Maximum Likelihood Estimation
    JEL: C13 C22 C01
    Date: 2010–04–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22155&r=ecm
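    Code sketch: a compact illustration of the two-step procedure described above — Gaussian QML estimation of a GARCH(1,1), then an empirical moment of the rescaled innovations to predict a power of the absolute process. The power r, the simulated Student-t innovations and all parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)

      # Simulate y_t = sigma_t * eta_t with sigma_t^2 = w + a*y_{t-1}^2 + b*sigma_{t-1}^2.
      T, w0, a0, b0 = 3000, 0.1, 0.1, 0.85
      y, sig2 = np.zeros(T), np.zeros(T)
      sig2[0] = w0 / (1 - a0 - b0)
      y[0] = np.sqrt(sig2[0]) * rng.standard_t(df=8) / np.sqrt(8 / 6)   # non-Gaussian eta
      for t in range(1, T):
          sig2[t] = w0 + a0 * y[t - 1] ** 2 + b0 * sig2[t - 1]
          y[t] = np.sqrt(sig2[t]) * rng.standard_t(df=8) / np.sqrt(8 / 6)

      def garch_sig2(params, y):
          w, a, b = params
          s2 = np.empty_like(y)
          s2[0] = y.var()
          for t in range(1, y.size):
              s2[t] = w + a * y[t - 1] ** 2 + b * s2[t - 1]
          return s2

      def gaussian_qml(params, y):
          s2 = garch_sig2(params, y)
          return 0.5 * np.sum(np.log(s2) + y ** 2 / s2)

      # Step 1: Gaussian QML estimation of the volatility parameters.
      fit = minimize(gaussian_qml, x0=np.array([0.05, 0.05, 0.9]), args=(y,),
                     bounds=[(1e-6, None), (1e-6, 1), (1e-6, 1)])
      s2_hat = garch_sig2(fit.x, y)

      # Step 2: empirical moment of the rescaled innovations for the chosen power r.
      r = 1.0
      eta_hat = y / np.sqrt(s2_hat)
      m_r = np.mean(np.abs(eta_hat) ** r)

      # One-step-ahead prediction: E(|y_{T+1}|^r | past) = sigma_{T+1}^r * E(|eta|^r).
      w, a, b = fit.x
      sig2_next = w + a * y[-1] ** 2 + b * s2_hat[-1]
      print("QML estimates:", fit.x)
      print("predicted E|y_{T+1}|^r:", sig2_next ** (r / 2) * m_r)
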
  4. By: Atak, Alev; Linton, Oliver B.; Xiao, Zhijie
    Abstract: This paper is concerned with developing a semiparametric panel model to explain the trend in UK temperatures and other weather outcomes over the last century. We work with the monthly averaged maximum and minimum temperatures observed at twenty-six Meteorological Office stations. The data form an unbalanced panel. We allow the trend to evolve in a nonparametric way so that we obtain a fuller picture of the evolution of common temperature in the medium timescale. Profile likelihood estimators (PLE) are proposed and their statistical properties are studied. The proposed PLE has improved asymptotic properties compared with the sequential two-step estimators. Finally, forecasting based on the proposed model is studied.
    Keywords: Global warming; Kernel estimation; Semiparametric; Trend analysis
    JEL: C13 C14 C21
    Date: 2010–03–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22079&r=ecm
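    Code sketch: a minimal illustration of estimating a common nonparametric time trend from an unbalanced panel by kernel smoothing (Nadaraya-Watson); the panel below is simulated, the station effects are removed crudely by demeaning, and the paper's profile-likelihood step is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated unbalanced panel: each "station" observes a common smooth trend
      # plus a station effect and noise, over a random subset of months.
      months = np.arange(1200)                       # 100 years of monthly data
      trend = 0.0008 * months + 0.3 * np.sin(2 * np.pi * months / 600)
      rows = []
      for station in range(26):
          obs = np.sort(rng.choice(months, size=rng.integers(400, 1200), replace=False))
          alpha = rng.normal(scale=0.5)              # station fixed effect
          y = trend[obs] + alpha + rng.normal(scale=0.8, size=obs.size)
          rows.append((obs, y - y.mean()))           # demean to strip the station effect

      t_all = np.concatenate([r[0] for r in rows]).astype(float)
      y_all = np.concatenate([r[1] for r in rows])

      def nw_trend(t_grid, t_obs, y_obs, h):
          # Nadaraya-Watson estimator of the common trend with a Gaussian kernel.
          out = np.empty(t_grid.size)
          for i, t0 in enumerate(t_grid):
              w = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)
              out[i] = np.sum(w * y_obs) / np.sum(w)
          return out

      grid = np.linspace(0, months[-1], 200)
      print(nw_trend(grid, t_all, y_all, h=36.0)[:5])   # estimated trend on a coarse grid
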
  5. By: Travaglini, Guido
    Abstract: Supervised Principal Component Analysis (SPCA) and Factor Instrumental Variables (FIV) are competing methods aimed at estimating models affected by regressor collinearity and at detecting a reduced-size instrument set from a large database, possibly dominated by non-exogeneity and weakness. While the first method stresses the role of regressors by taking account of their data-induced tie with the endogenous variable, the second places absolute relevance on the data-induced structure of the covariance matrix and selects the true common factors as instruments by means of formal statistical procedures. Theoretical analysis and Monte Carlo simulations demonstrate that FIV is more efficient than SPCA and standard Generalized Method of Moments (GMM) even when the instruments are few and possibly weak. The preferred FIV estimation is then applied to a large dataset to test the more recent theories on the determinants of total violent crime and homicide trends in the United States for the period 1982-2005. Demographic variables, and especially abortion, law enforcement and unchecked gun availability, are found to be the most significant determinants.
    Keywords: Principal Components; Instrumental Variables; Generalized Method of Moments; Crime; Law and Order.
    JEL: C22 K14 C01
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22077&r=ecm
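    Code sketch: a small illustration of the instrument-reduction idea — extract a few principal components (factors) from a large instrument set and use them in two-stage least squares. The simulated data, dimensions and the single endogenous regressor are illustrative; the paper's SPCA/FIV selection criteria are not implemented.

      import numpy as np

      rng = np.random.default_rng(4)
      n, L, k = 500, 40, 3                       # observations, raw instruments, factors kept

      # Simulate a large instrument set driven by a few common factors.
      F = rng.normal(size=(n, k))
      Z = F @ rng.normal(size=(k, L)) + rng.normal(scale=0.5, size=(n, L))
      u = rng.normal(size=n)
      x = F @ np.array([1.0, -0.5, 0.3]) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
      y = 2.0 * x + u + rng.normal(size=n)

      # Extract the first k principal components of the (standardized) instruments.
      Zs = (Z - Z.mean(0)) / Z.std(0)
      _, _, Vt = np.linalg.svd(Zs, full_matrices=False)
      PC = Zs @ Vt[:k].T

      # 2SLS with the principal components as instruments (constant included).
      X = np.column_stack([np.ones(n), x])
      W = np.column_stack([np.ones(n), PC])
      X_hat = W @ np.linalg.lstsq(W, X, rcond=None)[0]    # first-stage fitted values
      beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]     # second stage
      print("2SLS estimate of the slope (true value 2.0):", beta[1])
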
  6. By: Karim Abadir (Imperial College London - Imperial College London); Michel Lubrano (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579)
    Abstract: Least squares cross-validation (CV) methods are often used for automated bandwidth selection. We show that they share a common structure which has an explicit asymptotic solution. Using the framework of density estimation, we consider unbiased, biased, and smoothed CV methods. We show that, with a Student t(nu) kernel which includes the Gaussian as a special case, the CV criterion becomes asymptotically equivalent to a simple polynomial. This leads to optimal-bandwidth solutions that dominate the usual CV methods, definitely in terms of simplicity and speed of calculation, but also often in terms of integrated squared error because of the robustness of our asymptotic solution. We present simulations to illustrate these features and to give practical guidance on the choice of nu.
    Keywords: bandwidth choice; cross validation; nonparametric density estimation; analytical solution
    Date: 2010–04–13
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00472750_v1&r=ecm
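    Code sketch: a minimal illustration of unbiased (least-squares) cross-validation for a Gaussian-kernel density estimator, solved by grid search and compared with Silverman's rule of thumb; the paper's explicit asymptotic solution and Student t(nu) kernels are not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)
      x = rng.normal(size=400)                    # illustrative sample
      n = x.size
      d = x[:, None] - x[None, :]                 # pairwise differences

      def phi(z, s=1.0):
          return np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2 * np.pi))

      def ucv(h):
          # Unbiased CV criterion for a Gaussian kernel:
          # int f_hat^2 - (2/n) * sum_i f_hat_{-i}(x_i).
          term1 = phi(d / h, s=np.sqrt(2)).sum() / (n ** 2 * h)
          off_diag = phi(d / h).sum() - n * phi(0.0)          # exclude i == j
          term2 = 2.0 * off_diag / (n * (n - 1) * h)
          return term1 - term2

      grid = np.linspace(0.05, 1.5, 300)
      h_cv = grid[np.argmin([ucv(h) for h in grid])]
      h_silverman = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
      print("UCV bandwidth:", round(h_cv, 3), " Silverman:", round(h_silverman, 3))
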
  7. By: Ibrahim Ahamada (Centre d'Economie de la Sorbonne); Mohamed Boutahar (GREQAM - Université Aix-Marseille II)
    Abstract: Abrupt changes in the unconditional variance of returns have been recently revealed in many empirical studies. In this paper, we show that traditional KPSS-based tests have low power against nonstationarities stemming from changes in the unconditional variance. More precisely, we show that even under very strong abrupt changes in the unconditional variance, the asymptotic moments of the statistics of these tests remain unchanged. To overcome this problem, we use some CUSUM-based tests adapted for small samples. These tests do not compete with KPSS-based tests and can be considered complementary. CUSUM-based tests confirm the presence of strong abrupt changes in the unconditional variance of stock returns, whereas KPSS-based tests do not. Consequently, traditional stationary models are not always appropriate to describe stock returns. Finally, we show how a model allowing abrupt changes in the unconditional variance is well suited to CAC 40 stock returns.
    Keywords: KPSS test, panel stationarity test, unconditional variance, abrupt changes, stock returns, size-power curve.
    JEL: C12 C15 C23
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:10028&r=ecm
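    Code sketch: a small Monte Carlo illustration of the paper's point — simulate returns with an abrupt break in the unconditional variance and compare rejection rates of the KPSS test (statsmodels) with a CUSUM-of-squares statistic of the Inclán-Tiao type. Sample size, break size and the assumed asymptotic 5% critical value (about 1.358) are illustrative.

      import warnings
      import numpy as np
      from statsmodels.tsa.stattools import kpss

      warnings.simplefilter("ignore")     # silence KPSS p-value interpolation warnings
      rng = np.random.default_rng(6)
      T, reps = 400, 200
      kpss_rej, cusum_rej = 0, 0

      for _ in range(reps):
          # Zero-mean returns with an abrupt variance change at mid-sample.
          x = np.concatenate([rng.normal(scale=1.0, size=T // 2),
                              rng.normal(scale=3.0, size=T // 2)])

          # KPSS test for level stationarity (5% level).
          stat, pval, _, _ = kpss(x, regression="c", nlags="auto")
          kpss_rej += pval < 0.05

          # Inclan-Tiao-type CUSUM-of-squares statistic for a variance break.
          ck = np.cumsum(x ** 2) / np.sum(x ** 2)
          dk = ck - np.arange(1, T + 1) / T
          cusum_rej += np.sqrt(T / 2) * np.abs(dk).max() > 1.358   # assumed 5% critical value

      print("KPSS rejection rate:", kpss_rej / reps)
      print("CUSUM-of-squares rejection rate:", cusum_rej / reps)
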
  8. By: Ostap Okhrin
    Abstract: This paper gives an overview of copula theory from a practical perspective. We consider different methods of copula estimation and different goodness-of-fit tests for model selection. In the GoF section we apply Kolmogorov-Smirnov and Cramér-von-Mises type tests and calculate the power of these tests under different assumptions. A novelty of this paper is that all procedures are carried out in dimensions higher than two and, in contrast to other papers, we consider not only simple Archimedean and Gaussian copulae but also hierarchical Archimedean copulae. An empirical section supports the theory.
    Keywords: copula, multivariate distribution, Archimedean copula, GoF
    JEL: C13 C14 C50
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-022&r=ecm
  9. By: David E. Giles (Department of Economics, University of Victoria)
    Abstract: We discuss the modeling of count data whose empirical distribution is both multi-modal and overdispersed, and propose the Hermite distribution with covariates introduced through the conditional mean. The model is readily estimated by maximum likelihood, and nests the Poisson model as a special case. The Hermite regression model is applied to data for the number of banking and currency crises in IMF-member countries, and is found to out-perform the Poisson and negative binomial models.
    Keywords: Count data, multi-modal data, over-dispersion, financial crises
    JEL: C16 C25 G15 G21
    Date: 2010–04–13
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:1001&r=ecm
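    Code sketch: a minimal illustration of the Hermite count distribution (X = Y1 + 2*Y2 with independent Poissons), its pmf via the standard recurrence, and maximum-likelihood fitting of the two parameters on simulated i.i.d. counts; covariates entering through the conditional mean, as in the paper, are omitted.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)

      def hermite_pmf(nmax, a, b):
          # p_0 = exp(-(a+b));  n*p_n = a*p_{n-1} + 2*b*p_{n-2}
          p = np.zeros(nmax + 1)
          p[0] = np.exp(-(a + b))
          for n in range(1, nmax + 1):
              p[n] = (a * p[n - 1] + (2 * b * p[n - 2] if n >= 2 else 0.0)) / n
          return p

      # Simulate Hermite counts: a Poisson(a) plus twice a Poisson(b).
      a_true, b_true = 1.2, 0.9
      y = rng.poisson(a_true, 1000) + 2 * rng.poisson(b_true, 1000)

      def negloglik(theta):
          a, b = np.exp(theta)                    # keep parameters positive
          p = hermite_pmf(y.max(), a, b)
          return -np.sum(np.log(p[y] + 1e-300))

      fit = minimize(negloglik, x0=np.log([1.0, 0.5]))
      print("ML estimates (a, b):", np.exp(fit.x))   # b -> 0 recovers the Poisson special case
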
  10. By: Helmut Luetkepohl
    Abstract: Despite the fact that many aggregates are nonlinear functions and the aggregation weights of many macroeconomic aggregates are time-varying, much of the literature on forecasting aggregates considers the case of linear aggregates with fixed, time-invariant aggregation weights. In this study a framework for nonlinear contemporaneous aggregation with possibly stochastic or time-varying weights is developed, and different predictors for an aggregate are compared theoretically as well as with simulations. Two examples based on European unemployment and inflation series are used to illustrate the virtue of the theoretical setup and the forecasting results.
    Keywords: Forecasting, stochastic aggregation, autoregression, moving average, vector autoregressive process
    JEL: C32
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2010/11&r=ecm
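    Code sketch: a small illustration contrasting two predictors of an aggregate with time-varying weights — forecasting the aggregate directly with an AR(1) versus aggregating component AR(1) forecasts with the next-period weights taken as known. The simulated components, the weight path and the AR orders are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      T = 400

      # Two component series and a time-varying aggregation weight on component 1.
      x1, x2 = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          x1[t] = 0.7 * x1[t - 1] + rng.normal()
          x2[t] = 0.3 * x2[t - 1] + rng.normal()
      w = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(T) / 100)
      agg = w * x1 + (1 - w) * x2

      def ar1_forecast(y):
          # OLS AR(1) fitted on y, then a one-step forecast of the next value.
          X = np.column_stack([np.ones(y.size - 1), y[:-1]])
          b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
          return b[0] + b[1] * y[-1]

      # Predictor 1: forecast the aggregate directly.
      f_direct = ar1_forecast(agg[:-1])
      # Predictor 2: forecast the components and aggregate with the period-T weight.
      f_agg = w[-1] * ar1_forecast(x1[:-1]) + (1 - w[-1]) * ar1_forecast(x2[:-1])

      print("direct:", f_direct, " aggregated:", f_agg, " realized:", agg[-1])
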
  11. By: Hess, Wolfgang (Lund University); Persson, Maria (Research Institute of Industrial Economics (IFN))
    Abstract: The recent literature on the duration of trade has predominantly analyzed the determinants of trade flow durations using Cox proportional hazards models. The purpose of this paper is to show why it is inappropriate to analyze the duration of trade with continuous-time models such as the Cox model, and to propose alternative discrete-time models which are more suitable for estimation. Briefly, the Cox model has three major drawbacks when applied to large trade data sets. First, it faces problems in the presence of many tied duration times, leading to biased coefficient estimates and standard errors. Second, it is difficult to properly control for unobserved heterogeneity, which can result in spurious duration dependence and parameter bias. Third, the Cox model imposes the restrictive and empirically questionable assumption of proportional hazards. By contrast, with discrete-time models there is no problem handling ties; unobserved heterogeneity can be controlled for without difficulty; and the restrictive proportional hazards assumption can easily be bypassed. By replicating an influential study by Besedeš and Prusa from 2006, but employing discrete-time models as well as the original Cox model, we find empirical support for each of these arguments against the Cox model. Moreover, when comparing estimation results obtained from a Cox model and our preferred discrete-time specification, we find significant differences in both the predicted hazard rates and the estimated effects of explanatory variables on the hazard. In other words, the choice between models affects the conclusions that can be drawn.
    Keywords: Duration of Trade; Continuous-Time versus Discrete-Time Hazard Models; Proportional Hazards; Unobserved Heterogeneity
    JEL: C41 F10 F14
    Date: 2010–04–13
    URL: http://d.repec.org/n?u=RePEc:hhs:iuiwop:0829&r=ecm
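    Code sketch: a minimal illustration of the discrete-time alternative advocated above — expand duration data to spell-period format and fit a binary-response (logit) hazard model with period dummies and a covariate using statsmodels. The simulated durations, the logit rather than complementary log-log link, and the omission of unobserved-heterogeneity terms are illustrative simplifications.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(9)

      # Simulate durations for trade relationships with one covariate.
      n = 2000
      z = rng.normal(size=n)
      records = []
      for i in range(n):
          for t in range(1, 11):                                       # follow up to 10 periods
              h = 1 / (1 + np.exp(-(-2.0 + 0.15 * t - 0.5 * z[i])))    # true discrete hazard
              fail = rng.uniform() < h
              records.append((i, t, z[i], int(fail)))
              if fail:
                  break

      pp = pd.DataFrame(records, columns=["id", "period", "z", "event"])

      # Discrete-time hazard: logit of the event indicator on period dummies + covariate.
      X = pd.get_dummies(pp["period"], prefix="t", drop_first=True).astype(float)
      X["z"] = pp["z"]
      X = sm.add_constant(X)
      fit = sm.Logit(pp["event"], X).fit(disp=0)
      print(fit.params[["const", "z"]])     # baseline-hazard intercept and covariate effect
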
  12. By: Mark J. Holmes (Department of Economics, Waikato University, New Zealand); Jesús Otero (Facultad de Economía, Universidad del Rosario, Colombia); Theodore Panagiotidis (Department of Economics, University of Macedonia, Greece)
    Abstract: In this paper, we test for the stationarity of EU current account deficits. Our testing strategy addresses two key concerns with regard to unit root panel data testing, namely (i) the identification of which panel members are stationary, and (ii) the presence of cross-sectional dependence. For this purpose, we employ an AR-based bootstrap approach to the Hadri (2000) test. While there is only mixed evidence that current account stationarity applies when examining individual countries, this does not appear to be the case when considering panels comprising both EU and non-EU members.
    Keywords: Heterogeneous dynamic panels, current account stationarity, mean reversion, panel stationarity test
    JEL: C33 F32 F41
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:05_10&r=ecm
  13. By: Ibrahim Ahamada (Centre d'Economie de la Sorbonne); Philippe Jolivaldt (Centre d'Economie de la Sorbonne)
    Abstract: In this article, we compare the performance of the Hodrick-Prescott and Baxter-King filters with a method of filtering based on the multi-resolution properties of wavelets. We show that overall the three methods remain comparable if the theoretical cyclical component is defined in the usual waveband, ranging between six and thirty-two quarters. However, the approach based on wavelets provides information about the business cycle, for example its stability over time, which the other two filters do not provide. Based on Monte Carlo simulation experiments, our method applied to US GDP growth-rate data shows that the estimate of the business-cycle component is richer in information than that deduced from the level of GDP, and includes additional information about the post-1980 period of great moderation.
    Keywords: HP filter, wavelets, Monte Carlo simulation, break, business cycles.
    JEL: C15 C22 C65 E32
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:10027&r=ecm
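    Code sketch: a small illustration comparing the HP filter cycle (statsmodels) with a wavelet-based band-pass component obtained by keeping selected detail levels of a discrete wavelet transform (PyWavelets). The series is simulated quarterly data, and the choice of wavelet and of the detail levels roughly matching the 6-32 quarter band is an illustrative assumption.

      import numpy as np
      import pywt
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(10)
      T = 256                                          # quarterly observations
      t = np.arange(T)
      # Simulated log-GDP-like series: trend + cycle (about 24 quarters) + noise.
      y = 0.005 * t + 0.02 * np.sin(2 * np.pi * t / 24) + 0.01 * rng.normal(size=T)

      # HP filter cycle with the usual quarterly smoothing parameter.
      hp_cycle, hp_trend = hpfilter(y, lamb=1600)

      # Wavelet multiresolution: keep detail levels whose scales roughly span 8-32 quarters.
      coeffs = pywt.wavedec(y, "db4", level=5)
      keep = [np.zeros_like(c) for c in coeffs]
      keep[-3] = coeffs[-3]        # detail level 3: roughly 8-16 quarters
      keep[-4] = coeffs[-4]        # detail level 4: roughly 16-32 quarters
      wavelet_cycle = pywt.waverec(keep, "db4")[:T]

      print("correlation of the two cycle estimates:",
            np.corrcoef(hp_cycle, wavelet_cycle)[0, 1])
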
  14. By: Friedrich Geiecke; Mark Trede
    Abstract: The recent introduction of new derivatives with future dividend payments as underlyings allows to construct a direct test of rational bubbles. We suggest a simple, new method to calculate the fundamental value of stock indices. Using this approach, bubbles become observable. We calculate the time series of the bubble component of the Euro-Stoxx 50 index and investigate its properties. Using a formal hypothesis test we find that the behavior of the bubble is compatible with rationality.
    Keywords: brand equity, price premium, hedonic regression, Bayesian estimation, dynamic linear model
    JEL: A
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:1310&r=ecm
  15. By: Wanling Huang (Concordia University); Artem Prokhorov (Concordia University and CIREQ)
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:crd:wpaper:10002&r=ecm
  16. By: Roxana Chiriac (University of Konstanz, CoFE); Winfried Pohlmeier (University of Konstanz, CoFE, ZEW, RCEA)
    Abstract: The recent financial crisis has raised numerous questions about the accuracy of value-at-risk (VaR) as a tool to quantify extreme losses. In this paper we present empirical evidence from assessing the out-of-sample performance and robustness of VaR before and during the recent financial crisis with respect to the choice of sampling window, return distributional assumptions and stochastic properties of the underlying financial assets. Moreover, we develop a new data-driven approach that is based on the principle of optimal combination and that provides robust and precise VaR forecasts for periods when they are needed most, such as the recent financial crisis.
    Keywords: Value at Risk, model risk, optimal forecast combination
    JEL: C21 C5 G28 G32
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:07_10&r=ecm
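    Code sketch: a minimal illustration of out-of-sample VaR evaluation with a rolling window — two simple forecasters (historical simulation and a normal approximation) and their empirical violation rates. Window length, confidence level and the simulated returns are illustrative, and the paper's optimal forecast combination is not implemented.

      import numpy as np

      rng = np.random.default_rng(11)
      T, window, alpha = 1500, 250, 0.01

      # Simulated returns with a volatile second half (a crude stand-in for a crisis period).
      r = np.concatenate([rng.normal(scale=0.01, size=T // 2),
                          rng.normal(scale=0.03, size=T // 2)])

      hits_hs, hits_norm = [], []
      for t in range(window, T):
          sample = r[t - window:t]
          var_hs = -np.quantile(sample, alpha)                    # historical-simulation VaR
          var_norm = -(sample.mean() - 2.326 * sample.std())      # normal VaR, z_{0.01} ~ -2.326
          hits_hs.append(r[t] < -var_hs)
          hits_norm.append(r[t] < -var_norm)

      print("nominal level:", alpha)
      print("violation rate, historical simulation:", np.mean(hits_hs))
      print("violation rate, normal VaR:", np.mean(hits_norm))
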
  17. By: Jacobebbinghaus, Peter (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Müller, Dana (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Orban, Agnes
    Abstract: "Many research data centres (RDCs) provide access to micro data by means of onsite use and remote execution of programs. An efficient usage of these modes of data access requires the researchers to have dummy data, which allows them to familiarize with the real data. These dummy data must be anonymous and look the same as the original data, but they do not have to render valid results. For complex datasets such as panel data or linked data, the creation of useful dummy data is not trivial. In this paper we suggest to use data swapping with constraints in order to keep some consistency and correlation between variables within crosssections and over time. It is easy to be implemented even for datasets with many variables and many survey waves." (author's abstract, IAB-Doku) ((en))
    Keywords: testing procedures, data access, data preparation - test, IAB-Betriebspanel, panel data, IAB-Betriebs-Historik-Panel
    Date: 2010–04–13
    URL: http://d.repec.org/n?u=RePEc:iab:iabfme:201003_en&r=ecm
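    Code sketch: a small illustration of data swapping with constraints using pandas — within groups defined by a stratification variable, each unit's entire panel path of a sensitive variable is exchanged with that of another unit, so cross-sectional distributions per stratum and within-unit dynamics are preserved while individual records become non-identifiable. The variable names, grouping rule and simulated panel are hypothetical, not the IAB implementation.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(12)

      # Hypothetical establishment panel: 3 waves, a stratum (size class) and a sensitive variable.
      ids = np.repeat(np.arange(200), 3)
      wave = np.tile([2007, 2008, 2009], 200)
      stratum = np.repeat(rng.integers(0, 4, size=200), 3)         # constant per unit
      employment = np.repeat(rng.lognormal(3, 1, size=200), 3) * rng.uniform(0.9, 1.1, 600)
      df = pd.DataFrame({"id": ids, "wave": wave, "stratum": stratum, "employment": employment})

      # Swap the whole employment path between units of the same stratum.
      unit = df.drop_duplicates("id")[["id", "stratum"]]
      mapping = {}
      for s, grp in unit.groupby("stratum"):
          donors = rng.permutation(grp["id"].to_numpy())
          mapping.update(dict(zip(grp["id"], donors)))             # id -> donor id

      donor = df["id"].map(mapping)
      emp_by_unit = df.set_index(["id", "wave"])["employment"]
      df["employment_swapped"] = emp_by_unit.loc[list(zip(donor, df["wave"]))].to_numpy()
      print(df.head(6))
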
  18. By: Cunha-e-Sa, Maria Antonieta; Madureira, Livia; Nunes, Luis Catela; Otrachshenko, Vladimir
    Abstract: This article develops a latent class model for estimating willingness-to-pay for public goods using simultaneously contingent valuation (CV) and attitudinal data capturing protest attitudes related to the lack of trust in public institutions providing those goods. A measure of the social cost associated with protest responses and the consequent loss in potential contributions for providing the public good is proposed. The presence of potential justification biases is further considered, that is, the possibility that for psychological reasons the response to the CV question affects the answers to the attitudinal questions. The results from our empirical application suggest that psychological factors should not be ignored in CV estimation for policy purposes, as accounting for them allows a correct identification of protest responses.
    JEL: C35 C85 Q51
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:unl:unlfep:wp547&r=ecm
  19. By: Eric M. Aldrich; Jesús Fernández-Villaverde; A. Ronald Gallant; Juan F. Rubio-Ramírez
    Abstract: This paper shows how to build algorithms that use graphics processing units (GPUs) installed in most modern computers to solve dynamic equilibrium models in economics. In particular, we rely on the compute unified device architecture (CUDA) of NVIDIA GPUs. We illustrate the power of the approach by solving a simple real business cycle model with value function iteration. We document improvements in speed of around 200 times and suggest that even further gains are likely.
    JEL: C87 E0
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15909&r=ecm
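    Code sketch: a compact CPU illustration of the value function iteration the paper accelerates on GPUs — a deterministic growth-model Bellman equation solved on a capital grid with fully vectorized array operations, which is exactly the kind of data-parallel maximization that maps naturally onto a GPU (for example by swapping numpy for a GPU array library). Grid size, parameters and the deterministic simplification are illustrative.

      import numpy as np

      # Simple deterministic growth model: V(k) = max_k' { log(c) + beta * V(k') },
      # with c = k^alpha + (1 - delta) * k - k'.
      alpha, beta, delta = 0.33, 0.96, 0.1
      nk = 1000
      k_ss = (alpha * beta / (1 - beta * (1 - delta))) ** (1 / (1 - alpha))
      grid = np.linspace(0.2 * k_ss, 1.8 * k_ss, nk)

      # Consumption for every (k, k') pair; infeasible choices get -inf utility.
      c = grid[:, None] ** alpha + (1 - delta) * grid[:, None] - grid[None, :]
      u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

      V = np.zeros(nk)
      for it in range(1000):
          # The maximization over k' for all k at once is embarrassingly parallel:
          # this is the step that is offloaded to the GPU in the paper's setting.
          V_new = np.max(u + beta * V[None, :], axis=1)
          if np.max(np.abs(V_new - V)) < 1e-7:
              break
          V = V_new

      policy = grid[np.argmax(u + beta * V[None, :], axis=1)]
      print("converged after", it + 1, "iterations")
      print("policy at the analytic steady state (approx. k_ss):",
            policy[np.argmin(np.abs(grid - k_ss))])
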

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.