nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒12‒07
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. To infinity and beyond: Efficient computation of ARCH(∞) models By Morten Ørregaard Nielsen; Antoine L. Noël
  2. Persistent and Rough Volatility By Liu, Xiaobin; Shi, Shuping; Yu, Jun
  3. Nonparametric Instrumental Regression with Right Censored Duration Outcomes By Beyhum, Jad; Florens, Jean-Pierre; Van Keilegom, Ingrid
  4. Modelling Realized Covariance Matrices: a Class of Hadamard Exponential Models By L. Bauwens; E. Otranto
  5. Testing and Dating Structural Changes in Copula-based Dependence Measures By Florian Stark; Sven Otto
  6. Fractionally integrated Log-GARCH with application to value at risk and expected shortfall By Yuanhua Feng; Jan Beran; Sebastian Letmathe; Sucharita Ghosh
  7. When Should We (Not) Interpret Linear IV Estimands as LATE? By Tymon Słoczyński
  8. The inclusive synthetic control method By Stefano, Roberta di; Mellace, Giovanni
  9. Policy choice in experiments with unknown interference By Davide Viviano
  10. Identifying the effect of a mis-classified, binary, endogenous regressor By Francis J. DiTraglia; Camilo Garcia-Jimeno
  11. Subjects, Trials, and Levels: Statistical Power in Conjoint Experiments By Stefanelli, Alberto; Lukac, Martin
  12. Interpreting Big Data in the Macro Economy: A Bayesian Mixed Frequency Estimator By David Kohns; Arnab Bhattacharjee
  13. Weak Diffusion Limit of Real-Time GARCH Models: The Role of Current Return Information By Ding, Y.
  14. MULTILEVEL MODELING FOR ECONOMISTS: WHY, WHEN AND HOW By Aleksey Oshchepkov; Anna Shirokanova
  15. Matching Theory and Evidence on Covid-19 using a Stochastic Network SIR Model By Pesaran, M. H.; Yang, C. F.
  16. Channeling Fisher: randomization tests and the statistical insignificance of seemingly significant experimental results By Young, Alwyn
  17. Tracking change-points in multivariate extremes By Miguel de Carvalho; Manuele Leonelli; Alex Rossi
  18. Reducing bias in difference-in-differences models using entropy balancing By Matthew Cefalu; Brian G. Vegetabile; Michael Dworsky; Christine Eibner; Federico Girosi
  19. Distance-based measures of spatial concentration: Introducing a relative density function By Gabriel Lang; Eric Marcon; Florence Puech

  1. By: Morten Ørregaard Nielsen (Queen's University and CREATES); Antoine L. Noël (Queen's University)
    Abstract: This paper provides an exact algorithm for efficient computation of the time series of conditional variances, and hence the likelihood function, of models that have an ARCH(∞) representation. This class of models includes, e.g., the fractionally integrated generalized autoregressive conditional heteroskedasticity (FIGARCH) model. Our algorithm is a variation of the fast fractional difference algorithm of Jensen and Nielsen (2014). It takes advantage of the fast Fourier transform (FFT) to achieve an order of magnitude improvement in computational speed. The efficiency of the algorithm allows estimation (and simulation/bootstrapping) of ARCH(∞) models, even with very large data sets and without the truncation of the filter commonly applied in the literature. In Monte Carlo simulations, we show that the elimination of the truncation of the filter reduces the bias of the quasi-maximum-likelihood estimators and improves out-of-sample forecasting. Our results are illustrated in two empirical examples.
    Keywords: Circular convolution theorem, conditional heteroskedasticity, fast Fourier transform, FIGARCH, truncation
    JEL: C22 C58 C63 C87
    Date: 2020–11–23
    URL: http://d.repec.org/n?u=RePEc:aah:create:2020-13&r=all
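    Illustration: a minimal Python sketch, not the authors' code, of the circular-convolution idea: the ARCH(∞) recursion sigma_t^2 = omega + sum_j lam_j * eps_{t-j}^2 is evaluated for all t in one pass via a zero-padded FFT convolution. The weight sequence `lam` is a placeholder for whatever ARCH(∞) representation (e.g. FIGARCH) is of interest.

```python
import numpy as np

def arch_inf_variances_fft(eps2, lam, omega):
    """Conditional variances sigma2[t] = omega + sum_{j=1}^{t} lam[j-1] * eps2[t-j],
    evaluated for all t in one pass via a zero-padded FFT convolution.
    Illustrative only; lam stands in for any ARCH(infinity) weight sequence."""
    T = len(eps2)
    n = 1
    while n < 2 * T:                          # pad so the circular convolution given by
        n *= 2                                # the FFT coincides with the linear convolution
    conv = np.fft.irfft(np.fft.rfft(eps2, n) * np.fft.rfft(lam[:T], n), n)
    sigma2 = np.empty(T)
    sigma2[0] = omega                         # no past squared returns available at t = 0
    sigma2[1:] = omega + conv[:T - 1]
    return sigma2

# toy check against the O(T^2) direct recursion
rng = np.random.default_rng(0)
T = 2_000
eps2 = rng.standard_normal(T) ** 2
lam = 0.1 * np.arange(1.0, T + 1) ** -1.4     # hyperbolically decaying weights
direct = np.array([0.05 + lam[:t] @ eps2[:t][::-1] for t in range(T)])
assert np.allclose(arch_inf_variances_fft(eps2, lam, 0.05), direct)
```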
  2. By: Liu, Xiaobin (Zhejiang University); Shi, Shuping (Macquarie University); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper contributes to an ongoing debate on volatility dynamics. We introduce a discrete-time fractional stochastic volatility (FSV) model based on the fractional Gaussian noise. The new model has the same limit as the fractional integrated stochastic volatility (FISV) model under the in-fill asymptotic scheme. We study the theoretical properties of both models and introduce a memory signature plot for a model-free initial assessment. A simulated maximum likelihood (SML) method, which maximizes the time-domain log-likelihoods obtained by the importance sampling technique, is employed to estimate the model parameters. Simulation studies suggest that the SML method can accurately estimate both models. Our empirical analysis of several financial assets reveals that volatility is both persistent and rough. It is persistent in the sense that the estimated autoregressive coefficients of the log volatilities are very close to unity, which explains the observed long-range dependence of volatility. It is rough as the estimated Hurst (fractional) parameters of the FSV (FISV) model are significantly less than one half (zero), which is consistent with the findings of the recent literature on ‘rough volatility’.
    Keywords: Fractional Brownian motion; stochastic volatility; memory signature plot; long memory; asymptotic; variance-covariance matrix; rough volatility
    JEL: C15 C22 C32
    Date: 2020–11–03
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2020_023&r=all
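    Illustration: a stylized Python sketch, loosely in the spirit of the model class discussed above and not taken from the paper. Fractional Gaussian noise is simulated exactly via the Cholesky factor of its autocovariance matrix and used to drive a persistent AR(1) log-volatility process; all parameter values (H, phi, sigma_v) are illustrative assumptions.

```python
import numpy as np

def fgn(H, T, rng):
    """Exact simulation of fractional Gaussian noise with Hurst parameter H via the
    Cholesky factor of its autocovariance matrix (O(T^3), fine for modest T)."""
    k = np.arange(T)
    acf = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))
    cov = acf[np.abs(k[:, None] - k[None, :])]       # Toeplitz covariance matrix
    return np.linalg.cholesky(cov) @ rng.standard_normal(T)

# Stylized discrete-time SV: persistent AR(1) log-volatility driven by rough fGn shocks.
rng = np.random.default_rng(1)
T, H, phi, sigma_v = 1_000, 0.1, 0.99, 0.3           # illustrative values only
v = fgn(H, T, rng)
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma_v * v[t]           # persistent and rough log-volatility
returns = np.exp(h / 2) * rng.standard_normal(T)     # returns with stochastic volatility
```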
  3. By: Beyhum, Jad; Florens, Jean-Pierre; Van Keilegom, Ingrid
    Abstract: This paper analyzes the effect of a discrete treatment Z on a duration T. The treatment is not randomly assigned. The confounding issue is addressed using a discrete instrumental variable explaining the treatment and independent of the error term of the model. Our framework is nonparametric and allows for random right censoring. This specification generates a nonlinear inverse problem and the average treatment effect is derived from its solution. We provide local and global identification properties that rely on a nonlinear system of equations. We propose an estimation procedure to solve this system and derive rates of convergence and conditions under which the estimator is asymptotically normal. When censoring makes identification fail, we develop partial identification results. Our estimators exhibit good finite sample properties in simulations. We also apply our methodology to the Illinois Reemployment Bonus Experiment.
    Keywords: Duration Models; Endogeneity; Instrumental variable; Nonseparability; Partial identification
    Date: 2020–11–20
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:124931&r=all
  4. By: L. Bauwens; E. Otranto
    Abstract: Time series of realized covariance matrices can be modelled in the conditional autoregressive Wishart model family via dynamic correlations or via dynamic covariances. Extended parameterizations of these models are proposed, which imply a specific and time-varying impact parameter of the lagged realized covariance (or correlation) on the next conditional covariance (or correlation) of each asset pair. The proposed extensions guarantee the positive definiteness of the conditional covariance or correlation matrix with simple parametric restrictions, while keeping the number of parameters fixed or linear with respect to the number of assets. An empirical study on twenty-nine assets reveals that the extended models have better forecasting performance than their simpler versions.
    Keywords: realized covariances; dynamic covariances and correlations; Hadamard exponential matrix
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:202007&r=all
  5. By: Florian Stark; Sven Otto
    Abstract: This paper is concerned with testing and dating structural breaks in the dependence structure of multivariate time series. We consider a cumulative sum (CUSUM) type test for constant copula-based dependence measures, such as Spearman's rank correlation and quantile dependencies. The asymptotic null distribution is not known in closed form and critical values are estimated by an i.i.d. bootstrap procedure. We analyze size and power properties in a simulation study under different dependence measure settings, such as skewed and fat-tailed distributions. To date break points and to decide whether two estimated break locations belong to the same break event, we propose a pivot confidence interval procedure. Finally, we apply the test to the historical data of ten large financial firms during the last financial crisis from 2002 to mid-2013.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.05036&r=all
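    Illustration: a simplified Python caricature of a CUSUM-type test for a constant Spearman rank correlation with bootstrap critical values. The statistic and the pair-resampling bootstrap below are stand-ins chosen for brevity, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def cusum_spearman(x, y, trim=0.1):
    """Max over candidate break points k of a weighted difference between the Spearman
    correlation before and after k (a simplified CUSUM-type statistic)."""
    n = len(x)
    lo, hi = int(trim * n), int((1 - trim) * n)
    stats = []
    for k in range(lo, hi):
        r1 = spearmanr(x[:k], y[:k]).correlation
        r2 = spearmanr(x[k:], y[k:]).correlation
        stats.append(np.sqrt(n) * (k / n) * (1 - k / n) * abs(r1 - r2))
    return max(stats), lo + int(np.argmax(stats))

def iid_bootstrap_pvalue(x, y, B=199, seed=0):
    """i.i.d. resampling of (x, y) pairs to approximate the null distribution of the
    statistic under constant dependence (illustrative, not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    t_obs, _ = cusum_spearman(x, y)
    n = len(x)
    t_boot = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        t_boot.append(cusum_spearman(x[idx], y[idx])[0])
    return np.mean(np.array(t_boot) >= t_obs)

# usage: stat, k_hat = cusum_spearman(x, y); p = iid_bootstrap_pvalue(x, y)
```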
  6. By: Yuanhua Feng (Paderborn University); Jan Beran (University of Konstanz); Sebastian Letmathe (Paderborn University); Sucharita Ghosh (Swiss Federal Research Institute WSL)
    Abstract: Volatility modelling is applied in a wide variety of disciplines, notably finance and the environmental and social sciences, where modelling conditional variability is of interest, e.g. for incremental data. We introduce a new long memory volatility model, called FI-Log-GARCH. Conditions for stationarity and existence of fourth moments are obtained. It is shown that any power of the squared returns shares the same memory parameter. Asymptotic normality of sample means is proved. The practical performance of the proposal is illustrated by an application to one-day rolling forecasts of the VaR (value at risk) and ES (expected shortfall). Comparisons with FIGARCH, FIEGARCH and FIAPARCH models are made using a criterion based on different traffic light tests. The results of this paper indicate that the FI-Log-GARCH often outperforms the other models and thus provides a useful alternative to existing long memory volatility models.
    Keywords: FI-Log-GARCH, stationary solutions, finite fourth moments, covariance structure, rolling forecasting VaR and ES, traffic light test of ES
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:pdn:ciepap:137&r=all
  7. By: Tymon Słoczyński
    Abstract: In this paper I revisit the interpretation of the linear instrumental variables (IV) estimand as a weighted average of conditional local average treatment effects (LATEs). I focus on a practically relevant situation in which additional covariates are required for identification but the reduced-form and first-stage regressions are possibly misspecified as a result of neglected heterogeneity in the effects of the instrument. If we also allow for conditional monotonicity, i.e. the existence of compliers but no defiers at some covariate values and the existence of defiers but no compliers elsewhere, then the weights on some conditional LATEs are negative and the IV estimand is no longer interpretable as a causal effect. Even if monotonicity holds unconditionally, the IV estimand is not interpretable as the unconditional LATE parameter unless the groups that are encouraged and not encouraged to get treated are roughly equal sized.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.06695&r=all
  8. By: Stefano, Roberta di (Department of Methods and Models for Economics); Mellace, Giovanni (Department of Business and Economics)
    Abstract: The Synthetic Control Method (SCM) estimates the causal effect of a policy intervention in a panel data setting with only a few treated units and control units. The treated outcome in the absence of the intervention is recovered by a weighted average of the control units. The latter must not be affected by the intervention, either directly or indirectly. We introduce the inclusive synthetic control method (iSCM), a novel and intuitive synthetic control modification that allows including units potentially affected directly or indirectly by an intervention in the donor pool. Our method is well suited for applications with multiple treated units where including treated units in the donor pool substantially improves the pre-intervention fit and/or for applications where some of the units in the donor pool might be affected by spillover effects. Our iSCM is very easy to implement, and any synthetic control type estimation and inference procedure can be used. Finally, as an illustrative empirical example, we re-estimate the causal effect of German reunification on GDP per capita allowing for spillover effects from West Germany to Austria.
    Keywords: Synthetic Control Method; spillover effects; causal inference
    JEL: C21 C23 C31 C33
    Date: 2020–11–25
    URL: http://d.repec.org/n?u=RePEc:hhs:sdueko:2020_014&r=all
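    Illustration: for readers unfamiliar with the machinery, the Python sketch below shows the basic synthetic-control step that the iSCM builds on: choosing non-negative donor weights summing to one so that the weighted donors track the treated unit's pre-intervention outcomes. It is a generic SCM sketch, not the iSCM adjustment itself.

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(Y_treated_pre, Y_donors_pre):
    """Non-negative donor weights summing to one that minimize the pre-intervention
    distance between the treated unit and its synthetic counterpart.
    Y_treated_pre: (T0,) pre-intervention outcomes of the treated unit.
    Y_donors_pre:  (T0, J) pre-intervention outcomes of the J donor units."""
    J = Y_donors_pre.shape[1]
    objective = lambda w: np.sum((Y_treated_pre - Y_donors_pre @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(objective, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints=cons, method='SLSQP')
    return res.x

# The synthetic post-intervention outcome is Y_donors_post @ w, and the estimated
# effect is Y_treated_post - Y_donors_post @ w.
```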
  9. By: Davide Viviano
    Abstract: This paper discusses experimental design for inference and estimation of individualized treatment allocation rules in the presence of unknown interference, with units being organized into large independent clusters. The contribution is two-fold. First, we design a short pilot study with few clusters for testing whether baseline interventions are welfare-maximizing, with its rejection motivating larger-scale experimentation. Second, we introduce an adaptive randomization procedure to estimate welfare-maximizing individual treatment allocation rules valid under unobserved interference. We propose non-parametric estimators of direct treatment and marginal spillover effects, which serve for hypothesis testing and policy design. We discuss the asymptotic properties of the estimators and small sample regret guarantees of the estimated policy. Finally, we illustrate the method's advantage in simulations calibrated to an existing experiment on information diffusion.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.08174&r=all
  10. By: Francis J. DiTraglia (Department of Economics University of Oxford); Camilo Garcia-Jimeno (Federal Reserve Bank of Chicago)
    Abstract: This paper studies identification of the effect of a mis-classified, binary, endogenous regressor when a discrete-valued instrumental variable is available. We begin by showing that the only existing point identification result for this model is incorrect. We go on to derive the sharp identified set under mean independence assumptions for the instrument and measurement error. The resulting bounds are novel and informative, but fail to point identify the effect of interest. This motivates us to consider alternative and slightly stronger assumptions: we show that adding second and third moment independence assumptions suffices to identify the model.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.07272&r=all
  11. By: Stefanelli, Alberto; Lukac, Martin (London School of Economics and Political Science)
    Abstract: Conjoint analysis is an experimental technique that has become quite popular for understanding people's decisions in multi-dimensional decision-making processes. Despite the importance of power analysis for experimental techniques, the current literature has largely disregarded statistical power considerations when designing conjoint experiments. The main goal of this article is to provide researchers and practitioners with a practical tool to calculate the statistical power of conjoint experiments. To this end, we first conducted an extensive literature review to understand how conjoint experiments are designed and to gauge the plausible effect sizes found in the literature. Second, we formulate a data generating model that is sufficiently flexible to accommodate a wide range of conjoint designs and hypothesized effects. Third, we present the results of an extensive series of simulation experiments based on the previously formulated data generating process. Our results show that, even with a relatively large sample size and number of trials, conjoint experiments are not suited to drawing inferences for designs with large numbers of experimental conditions and relatively small effect sizes. Specifically, Type S and Type M errors are especially pronounced for experimental designs with relatively small effective sample sizes (< 3000) or a high number of levels (> 15) that find small but statistically significant effects (< 0.03). The proposed online tool based on the simulation results can be used by researchers to perform power analysis of their designs and hence arrive at adequately powered future conjoint experiments.
    Date: 2020–11–18
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:spkcy&r=all
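    Illustration: a toy Python power simulation in the spirit of the article, for a single binary conjoint attribute with a hypothesized average marginal component effect (AMCE) and no respondent-level clustering. All numbers (effect size 0.03, sample sizes, number of trials) are illustrative assumptions, not the authors' design.

```python
import numpy as np
from scipy import stats

def conjoint_power(n_respondents, n_trials, amce=0.03, alpha=0.05, n_sims=1000, seed=0):
    """Monte Carlo power for detecting a single AMCE in a forced-choice conjoint,
    approximated with a two-sample z-test on choice rates by attribute level
    (a deliberately simplified data-generating process that ignores clustering)."""
    rng = np.random.default_rng(seed)
    n = n_respondents * n_trials                 # effective number of rated profiles
    crit = stats.norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        level = rng.integers(0, 2, n)            # randomized binary attribute level
        p = 0.5 + amce * level                   # choice probability by level
        y = rng.random(n) < p                    # simulated choices
        m1, m0 = y[level == 1].mean(), y[level == 0].mean()
        se = np.sqrt(m1 * (1 - m1) / (level == 1).sum() + m0 * (1 - m0) / (level == 0).sum())
        rejections += abs(m1 - m0) / se > crit
    return rejections / n_sims

print(conjoint_power(1000, 5))   # e.g. power for 1000 respondents x 5 trials
```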
  12. By: David Kohns; Arnab Bhattacharjee (Centre for Energy Economics Research and Policy, Heriot-Watt University)
    Abstract: Big Data sources, such as Google Trends, are increasingly being used to augment nowcast models. An often neglected issue in this literature, which is especially pertinent to policy environments, is the interpretability of the Big Data source included in the model. We provide a Bayesian modeling framework which is able to handle all the usual econometric issues involved in combining Big Data with traditional macroeconomic time series, such as mixed frequencies and ragged edges, while remaining computationally simple and allowing for a high degree of interpretability. In our model, we explicitly account for the possibility that the Big Data and macroeconomic data sets included have different degrees of sparsity. We test our methodology by investigating whether Google Trends in real time improve the nowcast fit of US real GDP growth compared to traditional macroeconomic time series. We find that search terms improve both point forecast accuracy and forecast density calibration, not only before official information is released but also later into GDP reference quarters. Our transparent methodology shows that the increased fit stems from search terms acting as early warning signals of large turning points in GDP.
    Keywords: Big Data; Machine Learning; Interpretability; Illusion of Sparsity; Density Nowcast; Google Search Terms
    JEL: C31 C53
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:hwc:wpaper:010&r=all
  13. By: Ding, Y.
    Abstract: We prove that Real-time GARCH (RT-GARCH) models converge to the same type of stochastic differential equations as the standard GARCH models as the length of sampling interval goes to zero. The additional parameter of RT-GARCH can be interpreted as current information risk premium. We show RT-GARCH has the same limiting stationary distribution and shares the same asymptotic properties for volatility filtering and forecast as standard GARCH. Simulation results confirm the current information parameter decreases with the length of sampling interval and hence, GARCH and RT-GARCH models behave increasingly similar for high frequency data. Moreover, empirical results show the current information risk premium has increased significantly after the 2008 financial crisis for S&P 500 index returns.
    Keywords: GARCH, RT-GARCH, SV, diffusion limit, high frequency data
    JEL: C22 C32 C58
    Date: 2020–11–25
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:20112&r=all
  14. By: Aleksey Oshchepkov (National Research University Higher School of Economics); Anna Shirokanova (National Research University Higher School of Economics)
    Abstract: Multilevel modeling (MLM, also known as hierarchical linear modeling, HLM) is a methodological framework widely used in the social sciences to analyze data with a hierarchical structure, where lower units of aggregation are ‘nested’ in higher units, including longitudinal data. In economics, however, MLM is used very rarely. Instead, economists rely on separate econometric techniques, including cluster-robust standard errors and fixed effects models. In this paper, we review the methodological literature and contrast the econometric techniques typically used in economics with the analysis of hierarchical data using MLM. Our review suggests that the economic techniques are generally less convenient, flexible, and efficient than MLM. An important limitation of MLM, however, is its inability to deal with the omitted variable problem at the lowest level of the data, while standard economic techniques may be complemented by quasi-experimental methods mitigating this problem. It is unlikely, though, that this limitation can explain and justify the rare use of MLM in economics. Overall, we conclude that MLM has been unreasonably ignored in economics, and we encourage economists to apply this framework by providing ‘when and how’ guidelines.
    Keywords: multilevel modeling, hierarchical linear modeling, mixed effects, random effects, fixed effects, random coefficients, clustering of errors
    JEL: C18 C50 C33 A12
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:hig:wpaper:233/ec/2020&r=all
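    Illustration: a minimal 'how' example in Python (the paper is a methodological review and prescribes no software): a random-intercept multilevel model and the pooled-OLS alternative with cluster-robust standard errors, both fit with statsmodels. The file name and variable names (y, x, region) are placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to contain an outcome y, an individual-level covariate x,
# and a grouping variable region identifying the higher-level units.
df = pd.read_csv("data.csv")          # placeholder data set

# Multilevel (random-intercept) model: intercepts vary across regions.
mlm = smf.mixedlm("y ~ x", data=df, groups=df["region"]).fit()
print(mlm.summary())

# Common econometric alternative: pooled OLS with cluster-robust standard errors.
ols = smf.ols("y ~ x", data=df).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(ols.summary())
```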
  15. By: Pesaran, M. H.; Yang, C. F.
    Abstract: This paper develops an individual-based stochastic network SIR model for the empirical analysis of the Covid-19 pandemic. It derives moment conditions for the number of infected and active cases for single as well as multigroup epidemic models. These moment conditions are used to investigate identification and estimation of recovery and transmission rates. The paper then proposes simple moment-based rolling estimates and shows them to be fairly robust to the well-known under-reporting of infected cases. Empirical evidence on six European countries matches the simulated outcomes once the under-reporting of infected cases is addressed. It is estimated that the number of reported cases could be between 3 and 9 times lower than the actual numbers. Counterfactual analyses using calibrated models for Germany and the UK show that early intervention in managing the infection is critical for bringing the reproduction numbers below unity in a timely manner.
    Keywords: Covid-19, multigroup SIR model, basic and effective reproduction numbers, rolling window estimates of the transmission rate, method of moments, calibration and counterfactual analysis
    JEL: C13 C15 C31 D85 I18 J18
    Date: 2020–11–11
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:20102&r=all
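    Illustration: a bare-bones chain-binomial SIR simulator in Python, useful for seeing how the transmission rate beta and the recovery rate gamma drive the basic reproduction number R0 = beta/gamma. Network structure, multiple groups and the paper's moment conditions are omitted, and all parameter values are illustrative.

```python
import numpy as np

def stochastic_sir(n=10_000, i0=10, beta=0.3, gamma=0.1, T=200, seed=0):
    """Chain-binomial SIR: each day a susceptible is infected with probability
    1 - exp(-beta * I/n) and each infected individual recovers with probability gamma.
    Homogeneous mixing only; a stand-in for the richer network model in the paper."""
    rng = np.random.default_rng(seed)
    S, I, R = n - i0, i0, 0
    path = [(S, I, R)]
    for _ in range(T):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / n))
        new_rec = rng.binomial(I, gamma)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        path.append((S, I, R))
    return np.array(path)

path = stochastic_sir()
print("peak active cases:", path[:, 1].max(), " final recovered:", path[-1, 2])
```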
  16. By: Young, Alwyn
    Abstract: I follow R. A. Fisher's The Design of Experiments (1935), using randomization statistical inference to test the null hypothesis of no treatment effects in a comprehensive sample of 53 experimental papers drawn from the journals of the American Economic Association. In the average paper, randomization tests of the significance of individual treatment effects find 13% to 22% fewer significant results than are found using authors’ methods. In joint tests of multiple treatment effects appearing together in tables, randomization tests yield 33% to 49% fewer statistically significant results than conventional tests. Bootstrap and jackknife methods support and confirm the randomization results.
    JEL: C12 C90
    Date: 2019–05–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:101401&r=all
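    Illustration: a basic Fisher-style randomization test of the sharp null of no treatment effect for a completely randomized binary treatment, in Python. The paper applies far richer versions (randomization-t statistics, strata, multiple treatments), which this sketch does not attempt to reproduce.

```python
import numpy as np

def randomization_test(y, d, n_perm=10_000, seed=0):
    """Randomization (permutation) p-value for the sharp null of no treatment effect:
    re-randomize the treatment labels, recompute the difference in means, and compare
    with the observed difference. Assumes a completely randomized binary treatment d."""
    rng = np.random.default_rng(seed)
    obs = y[d == 1].mean() - y[d == 0].mean()
    count = 0
    for _ in range(n_perm):
        d_perm = rng.permutation(d)
        count += abs(y[d_perm == 1].mean() - y[d_perm == 0].mean()) >= abs(obs)
    return (count + 1) / (n_perm + 1)        # add-one adjustment for an exact-style p-value

# toy example with no true effect: the p-value is roughly uniform across replications
rng = np.random.default_rng(1)
y = rng.standard_normal(200)
d = rng.permutation(np.repeat([0, 1], 100))
print(randomization_test(y, d))
```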
  17. By: Miguel de Carvalho; Manuele Leonelli; Alex Rossi
    Abstract: In this paper we devise a statistical method for tracking and modeling change-points on the dependence structure of multivariate extremes. The methods are motivated by and illustrated on a case study on crypto-assets.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.05067&r=all
  18. By: Matthew Cefalu; Brian G. Vegetabile; Michael Dworsky; Christine Eibner; Federico Girosi
    Abstract: This paper illustrates the use of entropy balancing in difference-in-differences analyses when pre-intervention outcome trends suggest a possible violation of the parallel trends assumption. We describe a set of assumptions under which weighting to balance intervention and comparison groups on pre-intervention outcome trends leads to consistent difference-in-differences estimates even when pre-intervention outcome trends are not parallel. Simulation results verify that entropy balancing of pre-intervention outcome trends can remove bias when the parallel trends assumption is not directly satisfied, and thus may enable researchers to use difference-in-differences designs in a wider range of observational settings than previously acknowledged.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.04826&r=all
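    Illustration: entropy balancing in its generic (Hainmueller-style) form solves a convex dual problem for comparison-group weights that exactly match chosen moments of the intervention group; the paper applies the same idea to pre-intervention outcome trends. Below is a Python sketch of that generic step, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_balance(X_comparison, target_moments):
    """Comparison-group weights (non-negative, summing to one) whose weighted means of the
    columns of X_comparison equal target_moments, found via the entropy-balancing dual.
    The balance variables could, for example, be pre-intervention outcome trends."""
    Xc = X_comparison - target_moments            # centre so the balance target becomes zero
    dual = lambda lam: logsumexp(-Xc @ lam)       # convex dual objective in lam
    lam = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS").x
    w = np.exp(-Xc @ lam)                         # unnormalized entropy-balancing weights
    return w / w.sum()

# toy check: weighted comparison-group means reproduce the intervention-group means
rng = np.random.default_rng(0)
X_comp = rng.normal(0.0, 1.0, size=(500, 3))
target = np.array([0.2, -0.1, 0.4])
w = entropy_balance(X_comp, target)
print(np.allclose(w @ X_comp, target, atol=1e-4))
```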
  19. By: Gabriel Lang (MIA-Paris - Mathématiques et Informatique Appliquées - AgroParisTech - INRA - Institut National de la Recherche Agronomique); Eric Marcon (UMR ECOFOG - Ecologie des forêts de Guyane - Cirad - Centre de Coopération Internationale en Recherche Agronomique pour le Développement - INRA - Institut National de la Recherche Agronomique - AgroParisTech - UG - Université de Guyane - CNRS - Centre National de la Recherche Scientifique - UA - Université des Antilles); Florence Puech (RITM - Réseaux Innovation Territoires et Mondialisation - UP11 - Université Paris-Sud - Paris 11)
    Abstract: For a decade, distance-based methods have been widely employed and constantly improved in the field of spatial economics. These methods are a very useful tool for accurately evaluating the spatial distribution of plants or retail stores, for example (Duranton and Overman, 2008). In this paper, we introduce a new distance-based statistical measure for evaluating the spatial concentration of economic activities. To our knowledge, the m function is the first relative density function to be proposed in the economics literature. This tool supplements the typology of distance-based methods recently drawn up by Marcon and Puech (2012). By considering several theoretical and empirical examples, we show the advantages and the limits of the m function for detecting spatial structures in economics.
    Keywords: Agglomeration, Aggregation, Spatial Concentration, Point Patterns, Economic Geography
    Date: 2019–10–24
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01082178&r=all
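    Illustration: a crude distance-based localisation summary in Python, included only to convey the flavour of distance-based methods; it is not the paper's m function. For each establishment it reports the share of neighbours within radius r that belong to the same sector.

```python
import numpy as np

def same_sector_share(coords, sector, r):
    """For each establishment, the share of its neighbours within distance r that belong
    to the same sector: a simple cumulative, distance-based summary for intuition only
    (not the relative density m function proposed in the paper)."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    shares = np.full(n, np.nan)
    for i in range(n):
        nbrs = (dist[i] <= r) & (np.arange(n) != i)
        if nbrs.any():
            shares[i] = np.mean(sector[nbrs] == sector[i])
    return shares

# toy example: 200 establishments in two sectors located uniformly on the unit square
rng = np.random.default_rng(0)
coords = rng.random((200, 2))
sector = rng.integers(0, 2, 200)
print(np.nanmean(same_sector_share(coords, sector, r=0.1)))   # approx. 0.5 under randomness
```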

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.