nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒03‒19
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Bootstrap-Assisted Unit Root Testing With Piecewise Locally Stationary Errors By Yeonwoo Rho; Xiaofeng Shao
  2. Bias Reduction by Imputation for Linear Panel Data Models with Nonrandom Missing By Goeun Lee; Chirok Han
  3. Continuous partition-of-unity copulas and their application to risk management By Dietmar Pfeifer; Andreas Mändle; Olena Ragulina; Côme Girschig
  4. Inference on time-invariant variables using panel data: a pretest estimator By Jean-Bernard Chatelain; Kirsten Ralf
  5. An Overview of Modified Semiparametric Memory Estimation Methods By Busch, Marie; Sibbertsen, Philipp
  6. A Multi-country Approach to Analysing the Euro Area Output Gap By Florian Huber; Philipp Piribauer
  7. Geographically Weighted Regression with Multidimensional Locations and Subscripts: Based on Dependent Data By Zihao Yuan; Xiaolin Wu
  8. Skewness-Adjusted Bootstrap Confidence Intervals and Confidence Bands for Impulse Response Functions By Daniel Grabowski; Anna Staszewska-Bystrova; Peter Winker
  9. Negative Binomial Autoregressive Process By Yang Lu; Christian Gourieroux
  10. Finite Sample Theory and Bias Correction of Maximum Likelihood Estimators in the EGARCH Model By Antonis Demos; Dimitra Kyriakopoulou
  11. Finite Sample Theory and Bias Correction of MLEs in the EGARCH Model (Technical Appendix I) By Antonis Demos; Dimitra Kyriakopoulou
  12. Finite Sample Theory and Bias Correction of MLEs in the EGARCH Model (Technical Appendix II) By Antonis Demos; Dimitra Kyriakopoulou
  13. An Operational (Preasymptotic) Measure of Fat-tailedness By Nassim Nicholas Taleb
  14. Comparing Asset Pricing Models: Distance-based Metrics and Bayesian Interpretations By Zhongzhi Lawrence He
  15. Parametric models for biomarkers based on flexible size distributions By Davillas, Apostolos; Jones, Andrew M.
  16. Radial Basis Functions Neural Networks for Nonlinear Time Series Analysis and Time-Varying Effects of Supply Shocks By KANAZAWA, Nobuyuki
  17. Semiparametric detection of changes in long range dependence By Fabrizio Iacone; Stepana Lazarova
  18. Forecasting dynamically asymmetric fluctuations of the U.S. business cycle By Emilio Zanetti Chini
  19. Exploring the relationship between money stock and GDP in the Euro Area via a bootstrap test for Granger-causality in the frequency domain By Matteo Farné; Angela Montanari
  20. Bayesian factor models for probabilistic cause of death assessment with verbal autopsies By Tsuyoshi Kunihama; Zehang Richard Li; Samuel J. Clark; Tyler H. McCormick

  1. By: Yeonwoo Rho; Xiaofeng Shao
    Abstract: In unit root testing, a piecewise locally stationary process is adopted to accommodate nonstationary errors that can have both smooth and abrupt changes in second- or higher-order properties. Under this framework, the limiting null distributions of the conventional unit root test statistics are derived and shown to contain a number of unknown parameters. To circumvent the difficulty of direct consistent estimation, we propose to use the dependent wild bootstrap to approximate the non-pivotal limiting null distributions and provide a rigorous theoretical justification for bootstrap consistency. The proposed method is compared through finite sample simulations with the recolored wild bootstrap procedure, which was developed for errors that follow a heteroscedastic linear process. Further, a combination of autoregressive sieve recoloring with the dependent wild bootstrap is shown to perform well. The validity of the dependent wild bootstrap in a nonstationary setting is demonstrated for the first time, showing the possibility of extensions to other inference problems associated with locally stationary processes.
    Date: 2018–02
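    [Editor's note] For readers unfamiliar with the method, the dependent wild bootstrap perturbs residuals with an auxiliary sequence that is dependent over short lags, so that local dependence in the errors is preserved in the resample. A minimal sketch, assuming standard normal multipliers with a simple moving-average dependence; the function name and block length `l` are illustrative, not from the paper:

```python
import numpy as np

def dependent_wild_bootstrap(residuals, l, rng):
    """One dependent-wild-bootstrap draw: multiply the residuals by a
    dependent N(0, 1) multiplier sequence, built here as a normalized
    moving average so multipliers within a window of length l correlate."""
    n = len(residuals)
    z = rng.standard_normal(n + l - 1)
    kernel = np.ones(l) / np.sqrt(l)          # keeps Var(w_t) = 1
    w = np.convolve(z, kernel, mode="valid")  # length n, locally dependent
    return residuals * w

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
e_star = dependent_wild_bootstrap(e, l=10, rng=rng)
```

Repeating the draw many times and recomputing the test statistic on each pseudo-series approximates its null distribution.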
  2. By: Goeun Lee (Department of Economics, Korea University, Seoul, Republic of Korea); Chirok Han (Department of Economics, Korea University, Seoul, Republic of Korea)
    Abstract: When no variables are observed for endogenous non-respondents of panel data, bias correction is available only for a limited class of instrumental variable estimators, which require strong conditions for consistency and often suffer from substantial efficiency loss. In this paper we introduce a convenient alternative method of imputing the missing explanatory variables and then using standard bias-correction procedures for sample selection. Various bias-corrected estimators are derived and their performances are compared by Monte Carlo experiments. Results verify efficiency loss by the instrumental variable estimators and suggest that the imputation method is practically useful if it is applied to first-difference regression.
    Keywords: Attrition, missing, nonresponse, bias-correction, panel data, imputation
    JEL: C23
    Date: 2018
  3. By: Dietmar Pfeifer; Andreas Mändle; Olena Ragulina; Côme Girschig
    Abstract: In this paper we discuss a natural extension of infinite discrete partition-of-unity copulas to continuous partition-of-unity copulas, which were recently introduced in the literature, with possible applications in risk management and other fields. We present a simple general algorithm to generate such copulas on the basis of the empirical copula from high-dimensional data sets. In particular, our constructions also allow for positive tail dependence, which is sometimes a desirable property of data-driven copula modelling, in particular for internal models under Solvency II.
    Date: 2018–03
  4. By: Jean-Bernard Chatelain (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur - ESCE - International business school)
    Abstract: This paper proposes a new pretest estimator of panel data models including time-invariant variables based on the Mundlak-Krishnakumar estimator and an "unrestricted" Hausman-Taylor estimator. Furthermore, the paper evaluates the biases of currently used estimators: repeated between, ordinary least squares, two-stage restricted between, Oaxaca-Geisler estimator, fixed effect vector decomposition, and generalized least squares. Some of these may lead to erroneous conclusions regarding the statistical significance of the estimated parameter values of time-invariant variables, especially when time-invariant variables are correlated with the individual effects.
    Keywords: Time-invariant variables, panel data, time-series cross-sections, pretest estimator, Mundlak estimator, Hausman-Taylor estimator
    Date: 2018–02
  5. By: Busch, Marie; Sibbertsen, Philipp
    Abstract: Several modified estimation methods of the memory parameter have been introduced in the past years. They aim to decrease the upward bias of the memory parameter in cases of low frequency contaminations or an additive noise component, especially in situations with a short-memory process being contaminated. In this paper, we provide an overview and compare the performance of nine semiparametric estimation methods. Among them are two standard methods, four modified approaches to account for low frequency contaminations and three procedures developed for perturbed fractional processes. We conduct an extensive Monte Carlo study for a variety of parameter constellations and several DGPs. Furthermore, an empirical application of the log-absolute return series of the S&P 500 shows that the estimation results combined with a long-memory test indicate a spurious long-memory process.
    Keywords: Spurious Long Memory; Semiparametric estimation; Low frequency contamination; Perturbation; Monte Carlo simulation
    Date: 2018–03
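    [Editor's note] As background, semiparametric memory estimators of this kind work from the lowest periodogram ordinates only. A minimal sketch of the classic, unmodified GPH log-periodogram regression, the kind of standard method the survey starts from; the bandwidth rule `m = sqrt(n)` is purely illustrative:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak (GPH) log-periodogram estimate of the memory
    parameter d: regress log I(lambda_j) on log(4 sin^2(lambda_j / 2))
    over the first m Fourier frequencies; d is minus the slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                   # illustrative bandwidth choice
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)    # periodogram ordinates
    reg = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(reg, np.log(I), 1)[0]
    return -slope

rng = np.random.default_rng(1)
d_hat = gph_estimate(rng.standard_normal(2048))  # near 0 for white noise
```

The modified estimators surveyed in the paper alter this regression (e.g. by adding trimming or perturbation terms) to reduce the upward bias under contamination.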
  6. By: Florian Huber; Philipp Piribauer (WIFO)
    Abstract: We develop a multivariate dynamic factor model that exploits euro area country-specific information on output and inflation to estimate an area-wide measure of the output gap. In the proposed multi-country framework we moreover allow for flexible stochastic volatility specifications for both the error variances and the innovations to the latent quantities, in order to deal with potential changes in the commonalities of business cycle movements. By tracing the relative importance of the common euro area output gap component in explaining movements in both output and inflation over time, the paper provides valuable insights into the evolution of the degree of synchronicity of the country-specific business cycles. In an out-of-sample forecasting exercise, the paper shows that the proposed approach performs well compared with other well-known benchmark specifications.
    Keywords: European Business Cycles, Dynamic factor model, Forecasting
    Date: 2018–03–13
  7. By: Zihao Yuan; Xiaolin Wu
    Abstract: Leading terms of the conditional bias and variance are useful when investigating the mean-square properties of locally weighted polynomial regression. In this paper, we consider a geographically weighted regression model with d-dimensional locations and M-dimensional subscripts. Under a given dependence structure and an assumption of stationarity, based on a weight matrix controlled by a Euclidean distance function with scale parameters, we analyze the leading terms of the conditional bias and variance of the locally weighted pooled least squares estimator at each location. Furthermore, using the mean integrated squared error, we demonstrate how the design of the scale parameters affects the globally optimal bandwidth.
    Date: 2018–03
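    [Editor's note] The core computation in geographically weighted regression is a separate weighted least-squares fit at each target location, with kernel weights decaying in distance. A minimal sketch assuming a Gaussian kernel with a single scale parameter `h` (the paper's setting with multidimensional subscripts and several scale parameters is richer):

```python
import numpy as np

def gwr_fit_at(target, locs, X, y, h):
    """Weighted least squares at one target location: observations closer
    in Euclidean distance receive larger Gaussian kernel weights."""
    d = np.linalg.norm(locs - target, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)           # Gaussian kernel, bandwidth h
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(5)
locs = rng.uniform(0, 1, size=(400, 2))       # 2-dimensional locations
x1 = rng.standard_normal(400)
X = np.column_stack([np.ones(400), x1])
y = 1.0 + 2.0 * x1 + 0.1 * rng.standard_normal(400)
beta = gwr_fit_at(np.array([0.5, 0.5]), locs, X, y, h=0.5)
```

Repeating the fit over a grid of target locations traces out spatially varying coefficients; the choice of `h` drives the bias-variance trade-off the abstract analyzes.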
  8. By: Daniel Grabowski (University of Giessen); Anna Staszewska-Bystrova (University of Lodz); Peter Winker (University of Giessen)
    Abstract: This article investigates the construction of skewness-adjusted confidence intervals and joint confidence bands for impulse response functions from vector autoregressive models. Three different implementations of the skewness adjustment are investigated. The methods are based on a bootstrap algorithm that adjusts mean and skewness of the bootstrap distribution of the autoregressive coefficients before the impulse response functions are computed. Using extensive Monte Carlo simulations, the methods are shown to improve the coverage accuracy in small and medium sized samples and for unit root processes for both known and unknown lag orders.
    Keywords: Bootstrap, confidence intervals, joint confidence bands, vector autoregression
    JEL: C15 C32
    Date: 2018
  9. By: Yang Lu (Centre d'Economie de l'Université de Paris Nord (CEPN)); Christian Gourieroux (University of Toronto and Toulouse School of Economics)
    Abstract: We introduce Negative Binomial Autoregressive (NBAR) processes for (univariate and bivariate) count time series. The univariate NBAR process is defined jointly with an underlying intensity process, which is autoregressive gamma. The resulting count process is Markov, with negative binomial conditional and marginal distributions. The process is then extended to the bivariate case with a Wishart autoregressive matrix intensity process. The NBAR processes are Compound Autoregressive, which allows for simple stationarity condition and quasi-closed form nonlinear forecasting formulas at any horizon, as well as a computationally tractable generalized method of moment estimator. The model is applied to a pairwise analysis of weekly occurrence counts of a contagious disease between the greater Paris region and other French regions.
    Keywords: Compound Autoregressive, Poisson-gamma conjugacy
    JEL: C32
    Date: 2018–03
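    [Editor's note] Under our reading of the construction, the univariate NBAR process can be simulated from its Poisson-gamma building blocks: an autoregressive gamma intensity (a Poisson mixing draw followed by a gamma draw) with conditionally Poisson counts, which yields negative binomial conditional distributions. A sketch with illustrative parameter names `delta`, `rho`, `c`:

```python
import numpy as np

def simulate_nbar(T, delta, rho, c, rng):
    """Counts y_t ~ Poisson(lambda_t); the intensity lambda_t follows an
    autoregressive gamma (ARG) process: a Poisson mixing draw followed
    by a gamma draw, giving intensity autocorrelation rho."""
    lam = np.empty(T)
    lam[0] = delta * c / (1 - rho)            # stationary mean as start value
    for t in range(1, T):
        z = rng.poisson(rho * lam[t - 1] / c)
        lam[t] = rng.gamma(delta + z, scale=c)
    return rng.poisson(lam)

rng = np.random.default_rng(7)
y = simulate_nbar(2000, delta=1.5, rho=0.6, c=1.0, rng=rng)
```

The simulated counts are overdispersed (variance above the mean), consistent with the negative binomial marginals the abstract describes.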
  10. By: Antonis Demos; Dimitra Kyriakopoulou
    Abstract: We derive analytical expressions for bias approximations of the maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of the EGARCH(1,1) parameters, which enable us to correct the bias of all estimators. The bias correction mechanism is constructed under two methods that are analytically described. We also evaluate the residual bootstrapped estimator as a measure of performance. Monte Carlo simulations indicate that, for given sets of parameter values, the bias corrections work satisfactorily for all parameters. The proposed full-step estimator performs better than the classical one and is also faster than the bootstrap. The results can also be used to formulate the approximate Edgeworth distribution of the estimators.
    Keywords: Exponential GARCH, maximum likelihood estimation, finite sample properties, bias approximations, bias correction, Edgeworth expansion, bootstrap
    JEL: C13 C22
    Date: 2018–02–23
  11. By: Antonis Demos; Dimitra Kyriakopoulou
    Date: 2018–02–23
  12. By: Antonis Demos; Dimitra Kyriakopoulou
    Date: 2018–02–23
  13. By: Nassim Nicholas Taleb
    Abstract: This note presents an operational measure of fat-tailedness for univariate probability distributions, taking values in $[0,1]$, where 0 is maximally thin-tailed (Gaussian) and 1 is maximally fat-tailed. Among other uses, 1) it helps assess the sample size $n$ needed to establish statistical significance, 2) it allows practical comparisons across classes of fat-tailed distributions, and 3) it helps understand some inconsistent attributes of the lognormal, depending on the parametrization of its scale parameter. The literature is rich for what concerns asymptotic behavior, but there is a large void for finite values of $n$, those needed for operational purposes. Conventional measures of fat-tailedness, namely 1) the tail index for the power law class and 2) kurtosis for finite-moment distributions, fail to apply to some distributions and do not allow comparisons across classes and parametrizations, that is, between power laws outside the Lévy-stable basin, or power laws to distributions in other classes, or power laws for different numbers of summands. How can one compare a sum of 100 Student t random variables with 3 degrees of freedom to one in a Lévy-stable or a lognormal class? How can one compare a sum of 100 Student t with 3 degrees of freedom to a single Student t with 2 degrees of freedom? We propose an operational and heuristic measure that allows us to compare $n$-summed independent variables under all distributions with finite first moment. The method is based on the rate of convergence of the Law of Large Numbers for finite sums, $n$-summands specifically. We obtain either explicit expressions or simulation results and bounds for the lognormal, exponential, Pareto, and Student t distributions in their various calibrations, in addition to the general Pearson classes.
    Date: 2018–02
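    [Editor's note] The measure rests on how fast the mean absolute deviation $M(n)$ of an $n$-sum grows relative to the Gaussian $\sqrt{n}$ benchmark. A simulation sketch of this idea using the normalization kappa = 2 - log(n/n0) / log(M(n)/M(n0)), which gives 0 for the Gaussian and 1 in the maximally fat-tailed case; the note's exact definition and calibrations should be consulted before use:

```python
import numpy as np

def kappa(draw, n0=1, n=30, sims=200_000, seed=0):
    """Heuristic preasymptotic fat-tailedness in [0, 1]: 0 when the mean
    absolute deviation of n-sums grows like sqrt(n) (Gaussian), 1 when
    it grows like n (maximally fat-tailed)."""
    rng = np.random.default_rng(seed)
    def mad_of_sum(k):
        s = draw(rng, (sims, k)).sum(axis=1)
        return np.mean(np.abs(s - s.mean()))
    return 2 - np.log(n / n0) / np.log(mad_of_sum(n) / mad_of_sum(n0))

k_gauss = kappa(lambda rng, size: rng.standard_normal(size))  # close to 0
k_t3 = kappa(lambda rng, size: rng.standard_t(3, size))       # strictly positive
```

The same routine can compare any two distributions with finite first moment at a chosen number of summands, which is the practical use case the abstract emphasizes.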
  14. By: Zhongzhi Lawrence He
    Abstract: In light of the power problems of statistical tests and undisciplined use of alpha-based statistics to compare models, this paper proposes a unified set of distance-based performance metrics, derived as the square root of the sum of squared alphas and squared standard errors. The Bayesian investor views model performance as the shortest distance between his dogmatic belief (model-implied distribution) and complete skepticism (data-based distribution) in the model, and favors models that produce low dispersion of alphas with high explanatory power. In this view, the momentum factor is a crucial addition to the five-factor model of Fama and French (2015), alleviating his prior concern of model mispricing by -8% to 8% per annum. The distance metrics complement the frequentist p-values with a diagnostic tool to guard against bad models.
    Date: 2018–03
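    [Editor's note] The basic metric described above can be computed directly from time-series regressions of the test assets on the factors. A minimal sketch on simulated zero-alpha data (the paper's weighting scheme and Bayesian interpretation go beyond this):

```python
import numpy as np

def alpha_distance(R, F):
    """sqrt( sum of squared alphas + sum of squared alpha standard errors )
    from OLS time-series regressions of each asset on the factors."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])
    k = X.shape[1]
    xtx_inv_00 = np.linalg.inv(X.T @ X)[0, 0]
    dist2 = 0.0
    for i in range(N):
        b = np.linalg.lstsq(X, R[:, i], rcond=None)[0]
        resid = R[:, i] - X @ b
        s2 = resid @ resid / (T - k)          # residual variance
        dist2 += b[0] ** 2 + s2 * xtx_inv_00  # alpha^2 + se(alpha)^2
    return np.sqrt(dist2)

rng = np.random.default_rng(6)
F = rng.standard_normal((600, 3))                  # 3 factors
B = rng.standard_normal((3, 10))                   # factor loadings
R = F @ B + 0.5 * rng.standard_normal((600, 10))   # zero-alpha test assets
d = alpha_distance(R, F)                           # small when factors span returns
```

Comparing `d` across candidate factor models favors those producing low alpha dispersion with high explanatory power, as the abstract describes.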
  15. By: Davillas, Apostolos; Jones, Andrew M.
    Abstract: Recent advances in social science surveys include collection of biological samples. Although biomarkers offer a large potential for social science and economic research, they impose a number of statistical challenges, often being distributed asymmetrically with heavy tails. Using data from the UK Household Panel Survey (UKHLS), we illustrate the comparative performance of a set of flexible parametric distributions, which allow for a wide range of skewness and kurtosis: the four-parameter generalized beta of the second kind (GB2), the three-parameter generalized gamma (GG) and their three-, two- or one-parameter nested and limiting cases. Commonly used blood-based biomarkers for inflammation, diabetes, cholesterol and stress-related hormones are modelled. Although some of the three-parameter distributions nested within the GB2 outperform the latter for most of the biomarkers considered, the GB2 can be used as a guide for choosing among competing parametric distributions for biomarkers. Going “beyond the mean” to estimate tail probabilities, we find that GB2 performs fairly well with some disparities at the very high levels of HbA1c and fibrinogen. Commonly used OLS models are shown to perform worse than almost all the flexible distributions.
    Date: 2018–03–08
  16. By: KANAZAWA, Nobuyuki
    Abstract: I propose a flexible nonlinear method for studying the time series properties of macroeconomic variables. In particular, I focus on a class of Artificial Neural Networks (ANN) called the Radial Basis Functions (RBF). To assess the validity of the RBF approach in the macroeconomic time series analysis, I conduct a Monte Carlo experiment using the data generated from a nonlinear New Keynesian (NK) model. I find that the RBF estimator can uncover the structure of the nonlinear NK model from the simulated data whose length is as small as 300 periods. Finally, I apply the RBF estimator to the quarterly US data and show that the response of the macroeconomic variables to a positive supply shock exhibits a substantial time variation. In particular, the positive supply shocks are found to have significantly weaker expansionary effects during the zero lower bound periods as well as periods between 2003 and 2004. The finding is consistent with a basic NK model, which predicts that the higher real interest rate due to the monetary policy inaction weakens the effects of supply shocks.
    Keywords: Neural Networks, Radial Basis Functions, Zero Lower Bound, Supply Shocks
    JEL: C45 E31
    Date: 2018–03
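    [Editor's note] The regression backbone of the RBF approach can be sketched as ridge-regularized least squares on Gaussian basis functions. A minimal univariate illustration with hand-picked centers, bandwidth, and penalty (all illustrative choices, not the paper's):

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial basis features: phi_j(x) = exp(-(x - c_j)^2 / (2 width^2))."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

def rbf_fit(x, y, centers, width, ridge=1e-3):
    """Ridge-regularized least squares on the RBF features."""
    Phi = rbf_design(x, centers, width)
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

rng = np.random.default_rng(8)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + 0.1 * rng.standard_normal(400)
centers = np.linspace(-3, 3, 15)
w = rbf_fit(x, y, centers, width=0.6)
y_hat = rbf_design(x, centers, 0.6) @ w        # smooth nonlinear fit
```

In a time-series application, lagged variables enter as inputs, so the fitted surface captures state-dependent (time-varying) responses of the kind the abstract reports.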
  17. By: Fabrizio Iacone (University of York); Stepana Lazarova (Queen Mary University of London)
    Abstract: We consider changes in the degree of persistence of a process when the degree of persistence is characterized as the order of integration of a strongly dependent process. To avoid the risk of incorrectly specifying the data generating process, we employ local Whittle estimates, which use only frequencies local to zero. The limit distribution of the test statistic under the null is not standard, but it is well known in the literature. A Monte Carlo study shows that this inference procedure performs well in finite samples.
    Keywords: Long memory, persistence, break, local Whittle estimate
    JEL: C22
    Date: 2017–08–18
  18. By: Emilio Zanetti Chini (Department of Economics and Management, University of Pavia)
    Abstract: The Generalized Smooth Transition Auto-Regression (GSTAR) parametrizes the joint asymmetry in the duration and length of cycles in macroeconomic time series by using particular generalizations of the logistic function. The symmetric smooth transition and linear auto-regressions are special cases of the new parametrization. A test for the null hypothesis of dynamic symmetry is discussed. Two case studies indicate that dynamic asymmetry is a key feature of the U.S. economy. Our model beats its competitors in point forecasting, but this superiority becomes less evident in density forecasting and in uncertain forecasting environments.
    Keywords: Density forecasts, Econometric modelling, Evaluating forecasts, Generalized logistic, Industrial production, Nonlinear time series, Point forecasts, Statistical tests, Unemployment.
    JEL: C22 C51 C52
    Date: 2018–03
  19. By: Matteo Farn\'e; Angela Montanari
    Abstract: The question of the relationship between money stock and GDP in the Euro Area is still under debate. In this paper we address the theme by resorting to Granger-causality spectral estimation and inference in the frequency domain. We propose a new bootstrap test of unconditional and conditional Granger-causality, as well as of their difference, to catch particularly prominent causality cycles. The null hypothesis is that each causality, or causality difference, is equal to its median across frequencies. In a dedicated simulation study, we show that our tool is able to disambiguate causalities significantly larger than the median even in the presence of a rich causality structure. Our results hold provided the stationary bootstrap of Politis & Romano (1994) is consistent for the underlying stochastic process. By this method, we point out that in the Euro Area money and output co-implied before the financial crisis of 2008, while after the crisis the only significant direction is from money to output, with a shortened period.
    Date: 2018–03
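    [Editor's note] The stationary bootstrap of Politis & Romano (1994), on which the test's validity rests, resamples blocks with geometrically distributed lengths so that the pseudo-series remains stationary. A minimal sketch of one resample, with expected block length 1/p as the tuning parameter:

```python
import numpy as np

def stationary_bootstrap(x, p, rng):
    """One stationary-bootstrap pseudo-series: concatenate blocks with
    uniformly random start points and Geometric(p) lengths (expected
    length 1/p), wrapping circularly at the end of the sample."""
    n = len(x)
    out = np.empty(n, dtype=x.dtype)
    i = 0
    while i < n:
        start = rng.integers(n)
        length = min(int(rng.geometric(p)), n - i)
        out[i:i + length] = np.take(x, np.arange(start, start + length), mode="wrap")
        i += length
    return out

rng = np.random.default_rng(9)
x = np.sin(np.linspace(0, 20, 300)) + 0.3 * rng.standard_normal(300)
x_star = stationary_bootstrap(x, p=0.1, rng=rng)
```

Recomputing the frequency-domain causality estimates on many such resamples yields the null distribution against which the observed causality at each frequency is compared.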
  20. By: Tsuyoshi Kunihama (Kwansei Gakuin University); Zehang Richard Li (University of Washington); Samuel J. Clark (Ohio State University); Tyler H. McCormick (University of Washington)
    Abstract: The distribution of deaths by cause provides crucial information for public health planning, response, and evaluation. About 60% of deaths globally are not registered or given a cause, which limits our ability to understand the epidemiology of affected populations. Verbal autopsy (VA) surveys are increasingly used in such settings to collect information on the signs, symptoms, and medical history of people who have recently died. This article develops a novel Bayesian method for estimation of population distributions of deaths by cause using verbal autopsy data. The proposed approach is based on a multivariate probit model where associations among items in questionnaires are flexibly induced by latent factors. We measure the strength of conditional dependence of symptoms with causes. Using the Population Health Metrics Research Consortium labeled data that include both VA and medically certified causes of death, we assess the performance of the proposed method. Further, we propose a method to identify important questionnaire items that are highly associated with causes of death. This framework provides insights that will simplify future data collection.
    Keywords: Bayesian latent model; Cause of death; Conditional dependence; Multivariate data; Verbal autopsies; Survey data
    Date: 2018–03

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.