nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒08‒17
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Nonparametric Euler Equation Identification and Estimation By Escanciano, J C.; Hoderlein, S.; Lewbel, A.; Linton, O.; Srisuma, S.
  2. Testing error distribution by kernelized Stein discrepancy in multivariate time series models By Donghang Luo; Ke Zhu; Huan Gong; Dong Li
  3. Identification of Volatility Proxies as Expectations of Squared Financial Return By Sucarrat, Genaro
  4. An EM algorithm for fitting a new class of mixed exponential regression models with varying dispersion By Tzougas, George; Karlis, Dimitris
  5. Tail risk forecasting using Bayesian realized EGARCH models By Vica Tendenan; Richard Gerlach; Chao Wang
  6. Identification of Time Preferences in Dynamic Discrete Choice Models: Exploiting Choice Restrictions By Schneider, Ulrich
  7. Adaptiveness of the empirical distribution of residuals in semi-parametric conditional location scale models By Christian Francq; Jean-Michel Zakoïan
  8. Large dynamic covariance matrices: enhancements based on intraday data By Gianluca De Nard; Robert F. Engle; Olivier Ledoit; Michael Wolf
  9. Rate-Optimality of Consistent Distribution-Free Tests of Independence Based on Center-Outward Ranks and Signs By Hongjian Shi; Marc Hallin; Mathias Drton; Fang Han
  10. Utilizing Two Types of Survey Data to Enhance the Accuracy of Labor Supply Elasticity Estimation By Cheng Chou; Ruoyao Shi
  11. Convergence rate of estimators of clustered panel models with misclassification By Andreas Dzemski; Ryo Okui
  12. Monte-Carlo Simulation Studies in Survey Statistics – An Appraisal By Jan Pablo Burgard; Patricia Dörr; Ralf Münnich
  13. Time Inhomogeneous Multivariate Markov Chains: Detecting and Testing Multiple Structural Breaks Occurring at Unknown Dates By Bruno Damásio; João Nicolau
  14. Do Any Economists Have Superior Forecasting Skills? By Qu, Ritong; Timmermann, Allan; Zhu, Yinchu
  15. Design-Based Uncertainty for Quasi-Experiments By Ashesh Rambachan; Jonathan Roth
  16. Cheating with (recursive) models By Eliaz, Kfir; Spiegler, Ran; Weiss, Yair
  17. Too similar to combine? On negative weights in forecast combination By Radchenko, Peter; Vasnev, Andrey; Wang, Wendun
  18. Global Representation of LATE Model: A Separability Result By Yu-Chang Chen; Haitian Xie
  19. The economic drivers of volatility and uncertainty By Andrea Carriero; Francesco Corsello; Massimiliano Marcellino
  20. Revisiting income convergence with DF-Fourier tests: old evidence with a new test By Silva Lopes, Artur

  1. By: Escanciano, J C.; Hoderlein, S.; Lewbel, A.; Linton, O.; Srisuma, S.
    Abstract: We consider nonparametric identification and estimation of pricing kernels, or equivalently of marginal utility functions up to scale, in consumption-based asset pricing Euler equations. Ours is the first paper to prove nonparametric identification of Euler equations under low-level conditions (without imposing functional restrictions or just assuming completeness). We also propose a novel nonparametric estimator based on our identification analysis, which combines standard kernel estimation with the computation of a matrix eigenvector problem. Our estimator avoids the ill-posed inverse issues associated with nonparametric instrumental variables estimators. We derive limiting distributions for our estimator and for relevant associated functionals. A Monte Carlo study shows satisfactory finite-sample performance for our estimators.
    Keywords: Euler equations, marginal utility, pricing kernel, Fredholm equations, integral equations, nonparametric identification, asset pricing
    JEL: C14 D91 E21 G12
    Date: 2020–07–04
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2064&r=all
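    The estimator can be sketched in a few lines: estimate the conditional-expectation operator on a grid by kernel smoothing, then read off marginal utility as an eigenvector. A minimal sketch, assuming a scalar state, simulated data, a Gaussian kernel, and a rule-of-thumb bandwidth (all illustrative; the paper's estimator and conditions are more general):
```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = rng.normal(0.0, 1.0, T)                  # state, e.g. consumption growth
R = np.exp(0.05 + 0.1 * rng.normal(size=T))  # gross asset return

grid = np.linspace(-2.5, 2.5, 40)
dx = grid[1] - grid[0]
h = 1.06 * x.std() * T ** (-1 / 5)           # rule-of-thumb bandwidth

def K(u):                                    # Gaussian kernel
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

# A[i, j] approximates E[R_{t+1} f(X_{t+1} = grid_j | X_t = grid_i)] dx,
# so that A g discretizes the integral operator in the Euler equation
# g(x) = beta * E[R' g(X') | X = x].
A = np.empty((len(grid), len(grid)))
for i, xi in enumerate(grid):
    w = K((x[:-1] - xi) / h)
    w /= w.sum()
    for j, xj in enumerate(grid):
        A[i, j] = np.sum(w * R[1:] * K((x[1:] - xj) / h) / h) * dx

# Marginal utility (up to scale) is the eigenvector of A for the largest
# real eigenvalue; the discount factor is the reciprocal eigenvalue.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
g = vecs[:, k].real
g = g if g.sum() > 0 else -g                 # sign-normalize
beta_hat = 1.0 / vals.real[k]
print(beta_hat)
```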
  2. By: Donghang Luo; Ke Zhu; Huan Gong; Dong Li
    Abstract: Knowing the error distribution is important in many multivariate time series applications. To alleviate the risk of error distribution mis-specification, testing methodologies are needed to detect whether the chosen error distribution is correct. However, the majority of existing tests only deal with the multivariate normal distribution for some special multivariate time series models, and thus cannot be used to test for the heavy-tailed and skewed error distributions often observed in applications. In this paper, we construct a new consistent test for general multivariate time series models based on the kernelized Stein discrepancy. To account for estimation uncertainty and unobserved initial values, a bootstrap method is provided to calculate the critical values. Our new test is easy to implement for a large scope of multivariate error distributions, and its importance is illustrated with simulated and real data.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.00747&r=all
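    A minimal sketch of a kernelized-Stein-discrepancy goodness-of-fit test, assuming a N(0, I) null, an RBF kernel with the median heuristic, and a simple Rademacher multiplier bootstrap (the paper applies this idea to estimated model residuals and accounts for estimation effects, which this sketch does not):
```python
import numpy as np

def ksd_matrix(X, sigma2):
    """Stein-kernel matrix for the N(0, I) null; score s(x) = -x."""
    n, d = X.shape
    S = -X                                    # score of the null density
    D = X[:, None, :] - X[None, :, :]         # pairwise differences
    sq = (D ** 2).sum(-1)
    Kxy = np.exp(-sq / (2 * sigma2))          # RBF kernel
    term1 = S @ S.T
    term2 = np.einsum('id,ijd->ij', S, D) / sigma2
    term3 = -np.einsum('jd,ijd->ij', S, D) / sigma2
    term4 = d / sigma2 - sq / sigma2 ** 2
    return (term1 + term2 + term3 + term4) * Kxy

def ksd_test(X, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    sigma2 = np.median(sq[sq > 0])            # median-heuristic bandwidth
    U = ksd_matrix(X, sigma2)
    np.fill_diagonal(U, 0.0)                  # U-statistic: drop diagonal
    stat = U.sum() / (n * (n - 1))
    boot = np.empty(n_boot)
    for b in range(n_boot):                   # Rademacher multiplier bootstrap
        e = rng.choice([-1.0, 1.0], n)
        boot[b] = e @ U @ e / (n * (n - 1))
    return stat, (boot >= stat).mean()        # statistic and p-value

rng = np.random.default_rng(1)
print(ksd_test(rng.normal(size=(300, 2))))         # correct null: high p
print(ksd_test(rng.standard_t(3, size=(300, 2))))  # heavy tails: reject
```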
  3. By: Sucarrat, Genaro
    Abstract: Volatility proxies like Realised Volatility (RV) are extensively used to assess the forecasts of squared financial return produced by Autoregressive Conditional Heteroscedasticity (ARCH) models. But are volatility proxies identified as expectations of the squared return? If not, then the results of these comparisons can be misleading, even if the proxy is unbiased. Here, a tripartite distinction between strong, semi-strong and weak identification of a volatility proxy as an expectation of squared return is introduced. The definition implies that semi-strong and weak identification can be studied and corrected for via a multiplicative transformation. Well-known tests can be used to check for identification and bias, and Monte Carlo simulations show they are well-sized and powerful, even in fairly small samples. As an illustration, twelve volatility proxies used in three seminal studies are revisited. Half of the proxies do not satisfy either semi-strong or weak identification, but their corrected transformations do. Correcting for identification does not always reduce the bias of the proxy, so there is a tradeoff between the choice of correction and the resulting bias.
    Keywords: GARCH models, financial time-series econometrics, volatility forecasting, Realised Volatility
    JEL: C18 C22 C53 C58
    Date: 2020–07–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:101953&r=all
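    The identification-correcting transformation is multiplicative, so a minimal unconditional version can be sketched directly; the simulated proxy and the moment condition used to estimate the correction below are illustrative assumptions (the paper's definitions and tests are more refined):
```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
sigma2 = np.exp(rng.normal(-1, 0.5, T))             # latent variance
r = np.sqrt(sigma2) * rng.normal(size=T)            # returns
rv = 0.7 * sigma2 * np.exp(rng.normal(0, 0.2, T))   # biased, noisy proxy

# Multiplicative correction: choose c so that E[r^2 - c*rv] = 0.
c_hat = np.mean(r ** 2) / np.mean(rv)
rv_corrected = c_hat * rv

# Simple bias check (a t-test of E[r^2 - proxy] = 0 could follow).
print(np.mean(r ** 2 - rv), np.mean(r ** 2 - rv_corrected))
```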
  4. By: Tzougas, George; Karlis, Dimitris
    Abstract: Regression modelling involving heavy-tailed response distributions, which have heavier tails than the exponential distribution, has become increasingly popular in many insurance settings, including non-life insurance. Mixed Exponential models are a natural choice for the distribution of heavy-tailed claim sizes since their tails are not exponentially bounded. This paper introduces a general family of mixed Exponential regression models with varying dispersion which can efficiently capture the tail behaviour of losses. Our main achievement is an Expectation-Maximization (EM)-type algorithm that facilitates maximum likelihood (ML) estimation for our class of mixed Exponential models and allows for regression specifications for both the mean and dispersion parameters. Finally, a real data application based on motor insurance data illustrates the versatility of the proposed EM-type algorithm.
    Keywords: mixed exponential distributions; EM algorithm; regression models for the mean and dispersion parameters; non-life insurance; heavy-tailed losses
    JEL: C1
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:104027&r=all
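    A stripped-down version of the algorithm, for a two-component mixture of exponentials without covariates (the paper's class additionally lets the mean and dispersion depend on covariates through regression specifications; data and starting values here are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.exponential(1.0, 700), rng.exponential(8.0, 300)])

pi, mu = np.array([0.5, 0.5]), np.array([0.5, 5.0])   # starting values
for _ in range(200):
    # E-step: posterior probability of each component for each claim
    dens = pi / mu * np.exp(-y[:, None] / mu)
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixing weights and component means
    pi = gamma.mean(axis=0)
    mu = (gamma * y[:, None]).sum(axis=0) / gamma.sum(axis=0)

print(pi, mu)   # roughly (0.7, 0.3) and (1.0, 8.0)
```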
  5. By: Vica Tendenan; Richard Gerlach; Chao Wang
    Abstract: This paper develops a Bayesian framework for the realized exponential generalized autoregressive conditional heteroskedasticity (realized EGARCH) model, which can incorporate multiple realized volatility measures for the modelling of a return series. The realized EGARCH model is extended by adopting a standardized Student-t and a standardized skewed Student-t distribution for the return equation. Different types of realized measures, such as sub-sampled realized variance, sub-sampled realized range, and realized kernel, are considered in the paper. The Bayesian Markov chain Monte Carlo (MCMC) estimation employs the robust adaptive Metropolis (RAM) algorithm in the burn-in period and the standard random-walk Metropolis in the sampling period. The Bayesian estimators show more favourable results than maximum likelihood estimators in a simulation study. We test the proposed models with several indices to forecast one-step-ahead Value at Risk (VaR) and Expected Shortfall (ES) over a period of 1000 days. Rigorous tail risk forecast evaluations show that the realized EGARCH models employing the standardized skewed Student-t distribution and incorporating the sub-sampled realized range are favoured, compared to a range of models.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.05147&r=all
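    The burn-in adaptation can be sketched as follows, using Vihola's (2012) RAM rank-one update of the proposal's Cholesky factor; the toy bivariate-normal target stands in for the realized EGARCH posterior and is purely an assumption for illustration:
```python
import numpy as np

def log_target(x):            # placeholder for the model log-posterior
    return -0.5 * (x[0] ** 2 - 1.6 * x[0] * x[1] + x[1] ** 2) / (1 - 0.8 ** 2)

rng = np.random.default_rng(0)
d, target_acc = 2, 0.234
x = np.zeros(d)
S = np.eye(d)                 # Cholesky factor of proposal covariance
lp = log_target(x)
for n in range(1, 5001):
    u = rng.normal(size=d)
    prop = x + S @ u
    lp_prop = log_target(prop)
    alpha = np.exp(min(0.0, lp_prop - lp))    # acceptance probability
    if rng.uniform() < alpha:
        x, lp = prop, lp_prop
    # RAM adaptation: rank-one update of S S' toward the target
    # acceptance rate, with step size decaying at rate n^{-2/3}.
    eta = min(1.0, d * n ** (-2 / 3))
    uu = np.outer(u, u) / (u @ u)
    M = S @ (np.eye(d) + eta * (alpha - target_acc) * uu) @ S.T
    S = np.linalg.cholesky(M)

print(S @ S.T)                # approximates a scaled target covariance
```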
  6. By: Schneider, Ulrich
    Abstract: I study the identification of time preferences in dynamic discrete choice models. Time preferences play a crucial role in these models, as they affect inference and counterfactual analysis. Previous literature has shown that observed choice probabilities do not identify the exponential discount factor in general. Recent identification results rely on specific forms of exogenous variation that impact transition probabilities but not instantaneous utilities. Although such variation allows for set identification of the respective parameter, point identification is only achieved in limited cases. To circumvent this shortcoming, I focus on models in which economic decision-makers might be restricted in their choice sets. I show that time preferences can be identified provided that there is variation in the probability of being restricted that does not affect utilities or transition probabilities. The derived exclusion restrictions are easy to interpret and potentially fulfilled in many empirical applications.
    Keywords: discount factor; identification; dynamic discrete choice
    JEL: C14 C23 C61
    Date: 2019–03–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102137&r=all
  7. By: Christian Francq (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE ParisTech - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique); Jean-Michel Zakoïan
    Abstract: This paper addresses the problem of deriving the asymptotic distribution of the empirical distribution function $\hat F_n$ of the residuals in a general class of time series models, including conditional mean and conditional heteroscedasticity, whose independent and identically distributed errors have unknown distribution $F$. We show that, for a large class of time series models (including the standard ARMA-GARCH), the asymptotic distribution of $\sqrt{n}\{\hat F_n(\cdot) - F(\cdot)\}$ is impacted by the estimation but does not depend on the model parameters. It is thus neither asymptotically estimation free, as is the case for purely linear models, nor asymptotically model dependent, as is the case for some nonlinear models. The asymptotic stochastic equicontinuity is also established. We consider an application to the estimation of the conditional Value-at-Risk.
    Date: 2020–07–14
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02898909&r=all
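    The Value-at-Risk application follows directly: filter the data, take the empirical quantile of the standardized residuals, and scale by the predicted volatility. A minimal sketch with a GARCH(1,1) filter and assumed (not estimated) parameters, both illustrative:
```python
import numpy as np

rng = np.random.default_rng(0)
T = 3000
omega, a, b = 0.05, 0.08, 0.90          # assumed GARCH(1,1) parameters
eps = rng.standard_t(7, T)              # true errors, unknown in practice
sig2 = np.empty(T); sig2[0] = omega / (1 - a - b)
r = np.empty(T); r[0] = np.sqrt(sig2[0]) * eps[0]
for t in range(1, T):
    sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * eps[t]

resid = r / np.sqrt(sig2)               # standardized residuals
q05 = np.quantile(resid, 0.05)          # empirical 5% quantile of F_n-hat
VaR_next = np.sqrt(omega + a * r[-1] ** 2 + b * sig2[-1]) * q05
print(VaR_next)                         # one-step-ahead 5% conditional VaR
```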
  8. By: Gianluca De Nard; Robert F. Engle; Olivier Ledoit; Michael Wolf
    Abstract: Modeling and forecasting dynamic (or time-varying) covariance matrices has many important applications in finance, such as Markowitz portfolio selection. A popular tool to this end is the multivariate GARCH model. Historically, such models did not perform well in large dimensions due to the so-called curse of dimensionality. The recent DCC-NL model of Engle et al. (2019) is able to overcome this curse via nonlinear shrinkage estimation of the unconditional correlation matrix. In this paper, we show how performance can be increased further by using open/high/low/close (OHLC) price data instead of simply using daily returns. A key innovation, for the improved modeling of not only dynamic variances but also of dynamic covariances, is the concept of a regularized return, obtained from a volatility proxy in conjunction with a smoothed sign (function) of the observed return.
    Keywords: Dynamic conditional correlations, intraday data, Markowitz portfolio selection, multivariate GARCH, nonlinear shrinkage
    JEL: C13 C58 G11
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:356&r=all
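    The exact functional form of the regularized return is not given in the abstract, so the sketch below is an assumption for illustration only: a smoothed sign (here tanh, with an arbitrary smoothing constant) scaled by the square root of an OHLC-based variance proxy:
```python
import numpy as np

def regularized_return(r, proxy_var, kappa=1.0):
    """Smoothed sign of r times the square root of a volatility proxy
    (e.g. a range-based variance estimate from OHLC data)."""
    scale = np.sqrt(proxy_var)
    return scale * np.tanh(kappa * r / scale)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 5)
proxy = rng.uniform(0.5, 2.0, 5)        # stand-in for an OHLC proxy
print(regularized_return(r, proxy))
```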
  9. By: Hongjian Shi; Marc Hallin; Mathias Drton; Fang Han
    Abstract: Rank correlations have found many innovative applications in the last decade. In particular, suitable versions of rank correlations have been used for consistent tests of independence between pairs of random variables. The use of ranks is especially appealing for continuous data as tests become distribution-free. However, the traditional concept of ranks relies on ordering data and is, thus, tied to univariate observations. As a result, it has long remained unclear how one may construct distribution-free yet consistent tests of independence between multivariate random vectors. This is the problem we address in this paper, in which we lay out a general framework for designing dependence measures that give tests of multivariate independence that are not only consistent and distribution-free but which we also prove to be statistically efficient. Our framework leverages the recently introduced concept of center-outward ranks and signs, a multivariate generalization of traditional ranks, and adopts a common standard form for dependence measures that encompasses many popular measures from the literature. In a unified study, we derive a general asymptotic representation of center-outward test statistics under independence, extending to the multivariate setting the classical Hájek asymptotic representation results. This representation permits a direct calculation of limiting null distributions for the proposed test statistics. Moreover, it facilitates a local power analysis that provides strong support for the center-outward approach to multivariate ranks by establishing, for the first time, the rate-optimality of center-outward tests within families of Konijn alternatives.
    Keywords: Multivariate ranks and signs; Le Cam’s third lemma; Hájek representation; independence test; multivariate dependence measure; center-outward ranks and signs
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/309233&r=all
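    Empirical center-outward ranks and signs can be computed by optimally coupling the sample to a regular grid on the unit ball, which reduces to a linear assignment problem; a minimal two-dimensional sketch (the rings-times-directions grid layout follows the usual recipe; the data are illustrative):
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def center_outward(X, n_rings, n_dirs):
    """Center-outward ranks/signs of X via optimal coupling to a grid."""
    n = n_rings * n_dirs
    assert len(X) == n
    radii = np.arange(1, n_rings + 1) / (n_rings + 1)
    angles = 2 * np.pi * np.arange(n_dirs) / n_dirs
    grid = np.array([[r * np.cos(a), r * np.sin(a)]
                     for r in radii for a in angles])
    cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # optimal coupling
    F = np.empty_like(X)
    F[rows] = grid[cols]
    return F            # norm of F = radial rank, direction of F = sign

rng = np.random.default_rng(0)
X = rng.normal(size=(8 * 10, 2))
F = center_outward(X, n_rings=8, n_dirs=10)
print(np.linalg.norm(F, axis=1)[:5])           # radial ranks in (0, 1)
```
    An independence test of the kind studied in the paper would then plug these ranks and signs for each of the two random vectors into a dependence statistic.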
  10. By: Cheng Chou (University of Leicester); Ruoyao Shi (Department of Economics, University of California Riverside)
    Abstract: We argue that, despite its nonclassical measurement errors, hours worked in the Current Population Survey (CPS) can still be utilized, if done properly, to enhance the overall accuracy of the estimator of labor supply parameters based on the American Time Use Survey (ATUS). We propose such an estimator: a weighted average of the two-stage least squares estimator based on the CPS and a non-standard estimator based on the ATUS.
    Keywords: labor supply elasticity, averaging estimator, bias-variance trade-off, measurement error
    JEL: C13 C21 C26 C52 C81 J22
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202018&r=all
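    The generic bias-variance logic behind such an averaging estimator can be sketched directly; all inputs below (point estimates, variances, and a bias guess for the CPS-based estimator) are illustrative assumptions, not the paper's estimator:
```python
import numpy as np

def averaging_estimator(b_atus, v_atus, b_cps, v_cps, bias_cps):
    # Weight on the (possibly biased, lower-variance) CPS estimator that
    # minimizes the approximate MSE of the combination, assuming the two
    # estimators are independent and the ATUS one is unbiased.
    w = v_atus / (v_atus + v_cps + bias_cps ** 2)
    return (1 - w) * b_atus + w * b_cps

print(averaging_estimator(0.30, 0.04, 0.22, 0.01, bias_cps=0.05))
```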
  11. By: Andreas Dzemski; Ryo Okui
    Abstract: We study k-means clustering estimation of panel data models with a latent group structure, $N$ units, and $T$ time periods under long-panel asymptotics. We show that the group-specific coefficients can be estimated at the parametric $\sqrt{NT}$ rate even if error variances diverge as $T \to \infty$ and some units are asymptotically misclassified. This limit case approximates empirically relevant settings and is not covered by existing asymptotic results.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.04708&r=all
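    A minimal sketch of k-means clustering estimation in this spirit: cluster units on their individual time-series OLS slopes, then re-estimate a common slope within each estimated group (the paper's estimator and asymptotics are more general; data-generating values are illustrative):
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N, T, G = 60, 80, 2
groups = rng.integers(0, G, N)
beta_true = np.array([1.0, -1.0])[groups]
x = rng.normal(size=(N, T))
y = beta_true[:, None] * x + rng.normal(size=(N, T))

# Unit-specific OLS slopes, then k-means on the slopes
b_i = (x * y).sum(1) / (x ** 2).sum(1)
labels = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(
    b_i.reshape(-1, 1))

# Pooled within-group re-estimation of the group-specific coefficient
for g in range(G):
    m = labels == g
    print(g, (x[m] * y[m]).sum() / (x[m] ** 2).sum())
```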
  12. By: Jan Pablo Burgard; Patricia Dörr; Ralf Münnich
    Abstract: Innovations in statistical methodology are often accompanied by Monte-Carlo studies. In the context of survey statistics, two types of inference have to be considered. First, there is the classical randomization inference used for developments in statistical modelling. Second, survey data are typically gathered using random sampling schemes from a finite population, in which case the sampling inference under a finite population model drives statistical conclusions. For empirical analyses, in general, mainly survey data are available, so the question arises how best to conduct the simulation study accompanying the empirical research. In addition, economists and social scientists often fit statistical models to survey data where the statistical inference is based on the classical randomization approach derived from the model assumptions. This confounds classical randomization with sampling inference, and the question arises under which circumstances, if any, the sampling design can then be ignored. In both fields of research, official statistics and (micro-)econometrics, Monte-Carlo studies generally seek to deliver additional information on an estimator's distribution. The two named inferences obviously impact distributional assumptions and, hence, must be distinguished in the Monte-Carlo set-up. Both the conclusions to be drawn and the comparability of research results therefore depend on the inferential assumptions and the correspondingly adapted simulation study. The present paper gives an overview of the different types of inference, and combinations thereof, that may be applicable to survey data. Additionally, further types of Monte-Carlo methods are elaborated to provide answers under mixed types of randomization in the survey context as well as under statistical modelling using survey data. The aim is to provide a common understanding of Monte-Carlo based studies using survey data, including a thorough discussion of the advantages and disadvantages of the different types and their appropriate evaluation.
    Keywords: Monte-Carlo simulation, survey sampling, randomization inference, model inference
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:trr:wpaper:202004&r=all
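    The distinction between the two inferences shows up directly in how a simulation is set up: a model-based study regenerates the population in every replication, while a design-based study fixes one finite population and redraws only the sample. A minimal sketch (population model and sampling scheme are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
Npop, n, R = 10000, 500, 1000

def draw_population():
    x = rng.lognormal(0, 1, Npop)
    return x, 2.0 + 0.5 * x + rng.normal(0, 1, Npop)

means_model, means_design = [], []
x_fix, y_fix = draw_population()            # one fixed finite population
for _ in range(R):
    xm, ym = draw_population()              # model-based: new population
    means_model.append(ym[rng.choice(Npop, n, replace=False)].mean())
    s = rng.choice(Npop, n, replace=False)  # design-based: new sample only
    means_design.append(y_fix[s].mean())

# Model-based variability includes population regeneration; design-based
# variability reflects the sampling design alone.
print(np.var(means_model), np.var(means_design))
```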
  13. By: Bruno Damásio; João Nicolau
    Abstract: Markov chain models are used in many applications and different areas of study. Usually a Markov chain model is assumed to be homogeneous in the sense that the transition probabilities are time invariant. Yet ignoring the inhomogeneous nature of a stochastic process by disregarding the presence of structural breaks can lead to misleading conclusions. Several methodologies have been proposed for detecting structural breaks in a Markov chain; however, these methods have some limitations, notably that they can only test directly for the presence of a single structural break. This paper proposes a new methodology for detecting and testing the presence of multiple structural breaks in a Markov chain occurring at unknown dates.
    Keywords: Inhomogeneous Markov chain, structural breaks, time-varying probabilities
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:ise:remwps:wp01362020&r=all
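    For a single break, the standard building block is a sup-likelihood-ratio scan over candidate break dates; a minimal sketch for a two-state chain follows (the paper's methodology handles multiple breaks directly, which this sketch does not; data and the trimming fraction are illustrative):
```python
import numpy as np

def loglik(chain, S):
    """Maximized log-likelihood of a homogeneous S-state Markov chain."""
    C = np.zeros((S, S))
    for a, b in zip(chain[:-1], chain[1:]):
        C[a, b] += 1
    P = C / np.maximum(C.sum(1, keepdims=True), 1)
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(C > 0, C * np.log(P), 0.0).sum()

def sup_lr(chain, S, trim=0.15):
    T = len(chain)
    full = loglik(chain, S)
    stats = {}
    for tau in range(int(trim * T), int((1 - trim) * T)):
        stats[tau] = 2 * (loglik(chain[:tau + 1], S)
                          + loglik(chain[tau:], S) - full)
    tau_hat = max(stats, key=stats.get)
    return tau_hat, stats[tau_hat]

rng = np.random.default_rng(0)
P1 = np.array([[0.9, 0.1], [0.1, 0.9]])   # pre-break transition matrix
P2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # post-break transition matrix
chain = [0]
for t in range(1, 600):
    P = P1 if t < 300 else P2
    chain.append(rng.choice(2, p=P[chain[-1]]))
print(sup_lr(np.array(chain), S=2))        # break date near t = 300
```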
  14. By: Qu, Ritong; Timmermann, Allan; Zhu, Yinchu
    Abstract: To answer this question, we develop new testing methods for identifying superior forecasting skills in settings with arbitrarily many forecasters, outcome variables, and time periods. Our methods allow us to address whether any economists had superior forecasting skills for any variables or at any point in time, while carefully controlling for the role of "luck", which can give rise to false discoveries when large numbers of forecasts are evaluated. We propose new hypotheses and test statistics that can be used to identify specialist, generalist, and event-specific skills in forecasting performance. We apply our new methods to a large set of Bloomberg survey forecasts of US economic data and show that, overall, there is very little evidence that any individual forecaster can beat a simple equal-weighted average of peer forecasts.
    Keywords: Bloomberg survey; Economic forecasting; multiple testing; superior predictive skills
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14112&r=all
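    Controlling for luck across many forecasters is, at its core, a multiple-testing problem; a minimal reality-check-style sketch with a recentred bootstrap of loss differentials follows (the paper's tests additionally handle many variables and periods jointly, and serially dependent data would call for a block bootstrap; data here are simulated with no true skill):
```python
import numpy as np

rng = np.random.default_rng(0)
T, J, B = 200, 50, 1000
# Loss differentials: benchmark loss minus forecaster j's loss;
# a positive mean means forecaster j beats the benchmark.
d = rng.normal(0.0, 1.0, size=(T, J))

tstats = np.sqrt(T) * d.mean(0) / d.std(0, ddof=1)
stat = tstats.max()                        # best apparent forecaster

boot = np.empty(B)
for b in range(B):                         # i.i.d. bootstrap, recentred
    idx = rng.integers(0, T, T)
    db = d[idx] - d.mean(0)                # impose the null of no skill
    boot[b] = (np.sqrt(T) * db.mean(0) / db.std(0, ddof=1)).max()

# Without the max-over-J correction, the best of 50 lucky forecasters
# would look significant far too often.
print(stat, np.quantile(boot, 0.95), (boot >= stat).mean())
```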
  15. By: Ashesh Rambachan; Jonathan Roth
    Abstract: Social scientists are often interested in estimating causal effects in settings where all units in the population are observed (e.g. all 50 US states). Design-based approaches, which view the treatment as the random object of interest, may be more appealing than standard sampling-based approaches in such contexts. This paper develops a design-based theory of uncertainty suitable for quasi-experimental settings, in which the researcher estimates the treatment effect as if treatment were randomly assigned, but in reality treatment probabilities may depend in unknown ways on the potential outcomes. We first study the properties of the simple difference-in-means (SDIM) estimator, which is unbiased for a finite-population, design-based analog to the average treatment effect on the treated (ATT) if treatment probabilities are uncorrelated with the potential outcomes in a finite-population sense. We further derive expressions for the variance of the SDIM estimator and a central limit theorem under sequences of finite populations with growing sample size. We then show how our results can be applied to analyze the distribution and estimand of difference-in-differences (DiD) and two-stage least squares (2SLS) estimators from a design-based perspective when treatment is not completely randomly assigned.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.00602&r=all
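    The SDIM estimator and its conservative Neyman-style variance take a familiar form; a minimal sketch (the design-based justification and its quasi-experimental extensions are the paper's contribution; the data are illustrative):
```python
import numpy as np

def sdim(y, w):
    """Difference in means with a conservative Neyman variance estimate."""
    y1, y0 = y[w == 1], y[w == 0]
    att_hat = y1.mean() - y0.mean()
    var_hat = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return att_hat, np.sqrt(var_hat)

rng = np.random.default_rng(0)
w = rng.integers(0, 2, 50)              # e.g. treatment across 50 US states
y = 1.5 * w + rng.normal(size=50)
print(sdim(y, w))                       # estimate and standard error
```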
  16. By: Eliaz, Kfir; Spiegler, Ran; Weiss, Yair
    Abstract: To what extent can misspecified models generate false estimated correlations? We focus on models that take the form of a recursive system of linear regression equations. Each equation is fitted to minimize the sum of squared errors against an arbitrarily large sample. We characterize the maximal pairwise correlation that this procedure can predict given a generic objective covariance matrix, subject to the constraint that the estimated model does not distort the mean and variance of individual variables. We show that as the number of variables in the model grows, the false pairwise correlation can become arbitrarily close to one, regardless of the true correlation.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14100&r=all
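    With standardized variables, the correlation a recursive chain predicts between its endpoints is the product of the fitted adjacent coefficients, so a valid covariance matrix can make the model predict a sizeable correlation where the true one is zero; a minimal three-variable sketch (the covariance matrix is an illustrative assumption; the paper shows longer chains can push the false correlation toward one):
```python
import numpy as np

# Chain x1 -> x2 -> x3; the true covariance has corr(x1, x3) = 0.
Sigma = np.array([[1.0, 0.7, 0.0],
                  [0.7, 1.0, 0.7],
                  [0.0, 0.7, 1.0]])   # positive definite (min eigenvalue > 0)

b = Sigma[0, 1] / Sigma[0, 0]         # population OLS of x2 on x1
c = Sigma[1, 2] / Sigma[1, 1]         # population OLS of x3 on x2
implied = b * c                       # model-implied corr(x1, x3)
print('true:', Sigma[0, 2], 'model-implied:', implied)   # 0.0 vs 0.49
```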
  17. By: Radchenko, Peter; Vasnev, Andrey; Wang, Wendun
    Abstract: This paper provides the first thorough investigation of the negative weights that can emerge when combining forecasts. The usual practice in the literature is to ignore or trim negative weights, i.e., set them to zero. This default strategy has its merits, but it is not optimal. We study the problem from a variety of different angles, and the main conclusion is that negative weights emerge when highly correlated forecasts with similar variances are combined. In this situation, the estimated weights have large variances, and trimming reduces the variance of the weights and improves the combined forecast. The threshold of zero is arbitrary and can be improved. We propose an optimal trimming threshold, i.e., an additional tuning parameter to improve forecasting performance. The effects of optimal trimming are demonstrated in simulations. In the empirical example using the European Central Bank Survey of Professional Forecasters, we find that the new strategy performs exceptionally well and can deliver improvements of more than 10% for inflation, up to 20% for GDP growth, and more than 20% for unemployment forecasts relative to the equal-weight benchmark.
    Keywords: Forecast combination; Optimal weights; Negative weight; Trimming
    Date: 2020–07–28
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/22956&r=all
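    The mechanics are easy to reproduce: variance-minimizing weights are proportional to the inverse covariance matrix times a vector of ones, and trimming sets weights below a threshold to zero before renormalizing; a minimal sketch (the covariance matrix and threshold are illustrative; the paper's optimal threshold is estimated):
```python
import numpy as np

def optimal_weights(Sigma):
    w = np.linalg.solve(Sigma, np.ones(len(Sigma)))
    return w / w.sum()                    # optimal; may be negative

def trimmed_weights(Sigma, threshold=0.0):
    w = optimal_weights(Sigma)
    w = np.where(w < threshold, 0.0, w)   # trim at the threshold
    return w / w.sum()

# Two highly correlated forecasts with similar variances:
Sigma = np.array([[1.00, 1.02],
                  [1.02, 1.10]])
print(optimal_weights(Sigma))             # [ 1.333, -0.333]: negative weight
print(trimmed_weights(Sigma, 0.0))        # [1., 0.]: default zero trimming
```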
  18. By: Yu-Chang Chen; Haitian Xie
    Abstract: This paper studies the latent index representation of the conditional LATE model, making explicit the role of covariates in treatment selection. We find that if the directions of the monotonicity condition are the same across all values of the conditioning covariate, which is often assumed in the literature, then the treatment choice equation has to satisfy a separability condition between the instrument and the covariate. This global representation result establishes testable restrictions imposed on the way covariates enter the treatment choice equation. We later extend the representation theorem to incorporate multiple ordered levels of treatment.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.08106&r=all
  19. By: Andrea Carriero (Queen Mary, University of London); Francesco Corsello (Bank of Italy); Massimiliano Marcellino (Università Bocconi, Milano)
    Abstract: We introduce a time-series model for a large set of variables in which the structural shocks identified are employed to simultaneously explain the evolution of both the level (conditional mean) and the volatility (conditional variance) of the variables. Specifically, the total volatility of macroeconomic variables is first decomposed into two separate components: an idiosyncratic component, and a component common to all of the variables. Then, the common volatility component, often interpreted as a measure of uncertainty, is further decomposed into three parts, respectively driven by the volatilities of the demand, supply and monetary/financial shocks. From a methodological point of view, the model is an extension of the homoscedastic Multivariate Autoregressive Index (MAI) model (Reinsel, 1983) to the case of time-varying volatility. We derive the conditional posterior distribution of the coefficients needed to perform estimations via Gibbs sampling. By estimating the model with US data, we find that the common component of volatility is substantial, and it explains at least 50 per cent of the overall volatility for most variables. The relative contribution of the demand, supply and financial volatilities to the common volatility component is variable specific and often time-varying, and some interesting patterns emerge.
    Keywords: Multivariate autoregressive Index models, stochastic volatility, reduced rank regressions, Bayesian VARs, factor models, structural analysis
    JEL: C15 C32 C38 C51 E30
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1285_20&r=all
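    As a much simpler stand-in for the paper's model, the size of a common volatility component can be gauged from the first principal component of the log squared series; the sketch below conveys only the decomposition idea and is not the paper's Bayesian MAI estimator (all data-generating values are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 500, 8
h_common = np.cumsum(0.1 * rng.normal(size=T))         # common log-volatility
loads = rng.uniform(0.5, 1.5, n)
h_idio = np.cumsum(0.05 * rng.normal(size=(T, n)), 0)  # idiosyncratic parts
Y = np.exp((loads * h_common[:, None] + h_idio) / 2) * rng.normal(size=(T, n))

z = np.log(Y ** 2 + 1e-12)              # log squared series ~ log-volatility
z -= z.mean(0)
U, s, Vt = np.linalg.svd(z, full_matrices=False)
common_share = s[0] ** 2 / (s ** 2).sum()
print(common_share)   # share of volatility variation that is common
```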
  20. By: Silva Lopes, Artur
    Abstract: Motivated by the purpose to assess the income convergence hypothesis, a simple new Fourier-type unit root test of the Dickey-Fuller family is introduced and analysed. In spite of a few shortcomings that it shares with rival tests, the proposed test generally improves upon them in terms of power performance in small samples. The empirical results that it produces for a recent and updated sample of data for 25 countries clearly contrast with previous evidence produced by the Fourier approach and, more generally, they also contradict a recent wave of optimism concerning income convergence, as they are mostly unfavourable to it.
    Keywords: income convergence; unit root tests; structural breaks
    JEL: C22 F43 O47
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102208&r=all
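    The generic form of a Fourier-augmented Dickey-Fuller regression is easy to state: regress the differenced series on its lagged level, a constant, and a single sin/cos frequency, and use the t-ratio on the lagged level; a minimal sketch (the paper's specific variant and its nonstandard critical values, which must be obtained by simulation, are not reproduced):
```python
import numpy as np

def df_fourier_tstat(y, k=1):
    """t-ratio on the lagged level in a DF regression with one Fourier
    frequency k absorbing smooth structural change."""
    T = len(y)
    t = np.arange(1, T)                      # index for observations 2..T
    dy = np.diff(y)
    X = np.column_stack([y[:-1],
                         np.ones(T - 1),
                         np.sin(2 * np.pi * k * t / T),
                         np.cos(2 * np.pi * k * t / T)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se                      # compare to simulated CVs

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))          # a random walk: unit root
print(df_fourier_tstat(y))
```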

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.