nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒01‒04
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions? By Tamara Broderick; Ryan Giordano; Rachael Meager
  2. Testing for Structural Change of Predictive Regression Model to Threshold Predictive Regression Model By Fukang Zhu; Mengya Liu; Shiqing Ling; Zongwu Cai
  3. Covariate Balance Weighting Methods in Estimating Treatment Effects: An Empirical Comparison By Mingfeng Zhan; Zongwu Cai; Ying Fang; Ming Lin
  4. Historical Instruments and Contemporary Endogenous Regressors By Gregory P. Casey; Marc P. B. Klemp
  5. Inference on Risk Premia in Continuous-Time Asset Pricing Models By Yacine Aït-Sahalia; Jean Jacod; Dacheng Xiu
  6. Spurious relationships in high dimensional systems with strong or mild persistence By Pitarakis, Jean-Yves; Gonzalo, Jesús
  7. Exact Distribution of the F-statistic under Heteroskedasticity of Unknown Form for Improved Inference By Jianghao Chu; Tae-Hwy Lee; Aman Ullah; Haifeng Xu
  8. Frequency Domain Local Bootstrap in long memory time series By Arteche González, Jesús María
  9. Double machine learning for sample selection models By Michela Bia; Martin Huber; Luk\'a\v{s} Laff\'ers
  10. Uncovering regimes in out of sample forecast errors from predictive regressions By Pitarakis, Jean-Yves; Gonzalo, Jesús; Da Silva Neto, Anibal Emiliano
  11. A General and Efficient Method for Solving Regime-Switching DSGE Models By Julien Albertini; Stéphane Moyen
  12. Out of sample predictability in predictive regressions with many predictor candidates By Pitarakis, Jean-Yves; Gonzalo, Jesús
  13. A Bayesian Dynamic Compositional Model for Large Density Combinations in Finance By Roberto Casarin; Stefano Grassi; Francesco Ravazzolo; Herman K. van Dijk
  14. COVID-19: Reduced forms have gone viral, but what do they tell us? By Léa BOU SLEIMAN; Germain GAUTHIER
  15. Forecasting Value-at-Risk and Expected Shortfall in Large Portfolios: a General Dynamic Factor Approach By Marc Hallin; Carlos Trucíos
  16. Tests of Conditional Predictive Ability: Existence, Size, and Power By Michael W. McCracken
  17. Happy times: identification from ordered response data By Shuo Liu; Nick Netzer
  18. Predicting Recessions with a Frontier Measure of Output Gap: An Application to Italian Economy By Camilla Mastromarco; Léopold Simar; Valentin Zelenyuk

  1. By: Tamara Broderick; Ryan Giordano; Rachael Meager
    Abstract: We propose a method to assess the sensitivity of econometric analyses to the removal of a small fraction of the sample. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide a finite-sample metric to approximately compute the number (or fraction) of observations that has the greatest influence on a given result when dropped. We call our resulting metric the Approximate Maximum Influence Perturbation. Our approximation is automatically computable and works for common estimators (including OLS, IV, GMM, MLE, and variational Bayes). We provide explicit finite-sample error bounds on our approximation for linear and instrumental variables regressions. At minimal computational cost, our metric provides an exact finite-sample lower bound on sensitivity for any estimator, so any non-robustness our metric finds is conclusive. We demonstrate that the Approximate Maximum Influence Perturbation is driven by a low signal-to-noise ratio in the inference problem, is not reflected in standard errors, does not disappear asymptotically, and is not a product of misspecification. Several empirical applications show that even 2-parameter linear regression analyses of randomized trials can be highly sensitive. While we find some applications are robust, in others the sign of a treatment effect can be changed by dropping less than 1% of the sample even when standard errors are small.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.14999&r=all
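    A rough Python sketch of the first-order idea for OLS, under the linear approximation described in the abstract; the function name and example data are illustrative, not the authors' code:

      import numpy as np

      def amip_ols(X, y, coef, alpha=0.01):
          """Approximate worst-case change in beta[coef] from dropping a
          fraction alpha of the sample (first-order approximation)."""
          XtX_inv = np.linalg.inv(X.T @ X)
          beta = XtX_inv @ (X.T @ y)
          resid = y - X @ beta
          # delta[i] ~ change in beta[coef] if observation i is dropped
          delta = -(X @ XtX_inv[:, coef]) * resid
          k = max(1, int(alpha * len(y)))
          # drop the k observations that push the coefficient down the most
          worst = np.sort(delta)[:k]
          return beta[coef], beta[coef] + worst.sum()

      # Example: a simulated two-parameter treatment regression
      rng = np.random.default_rng(0)
      n = 1000
      X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
      y = 0.1 * X[:, 1] + rng.standard_normal(n)
      full, perturbed = amip_ols(X, y, coef=1, alpha=0.01)
      print(f"full sample: {full:.3f}; after dropping 1%: {perturbed:.3f}")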
  2. By: Fukang Zhu (School of Mathematics, Jilin University, Changchun, Jilin 130012, China); Mengya Liu (School of Mathematics, Jilin University, Changchun, Jilin 130012, China); Shiqing Ling (Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: This paper investigates two test statistics for structural changes and thresholds in predictive regression models. The generalized likelihood ratio (GLR) test is proposed for the stationary predictor and the generalized F test is suggested for the unit root predictor. Under the null hypothesis of no structural change and no threshold, it is shown that the GLR test statistic converges to a function of a centered Gaussian process, and the generalized F test statistic converges to a function of Brownian motions. A bootstrap method is proposed to obtain the critical values of the test statistics. Simulation studies and a real example are given to assess the performance of the tests.
    Keywords: Bootstrap method; Generalized F test; Generalized likelihood ratio test; Predictive regression; Structural change; Threshold model
    JEL: C12 C22 C58 G12
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202021&r=all
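    A generic sketch of how bootstrapped critical values for a sup-type threshold test can be obtained; this uses a plain sup-Wald statistic and a residual bootstrap imposing the null, rather than the paper's GLR and generalized F statistics, and the grid and trimming choices are illustrative:

      import numpy as np

      def sup_wald(y, x, q, grid):
          """Sup over candidate thresholds of the Wald statistic for a regime split."""
          stats = []
          for g in grid:
              d = (q <= g).astype(float)
              Z = np.column_stack([np.ones_like(x), x, d, d * x])
              beta = np.linalg.lstsq(Z, y, rcond=None)[0]
              e = y - Z @ beta
              s2 = e @ e / (len(y) - Z.shape[1])
              V = s2 * np.linalg.inv(Z.T @ Z)
              b, Vb = beta[2:], V[2:, 2:]          # regime-shift coefficients
              stats.append(b @ np.linalg.solve(Vb, b))
          return max(stats)

      def bootstrap_pvalue(y, x, q, B=499, trim=0.15, seed=0):
          grid = np.quantile(q, np.linspace(trim, 1 - trim, 20))
          stat = sup_wald(y, x, q, grid)
          # impose the null: fit the no-threshold model, resample its residuals
          Z0 = np.column_stack([np.ones_like(x), x])
          b0 = np.linalg.lstsq(Z0, y, rcond=None)[0]
          e0 = y - Z0 @ b0
          rng = np.random.default_rng(seed)
          boot = [sup_wald(Z0 @ b0 + rng.choice(e0, len(y), replace=True), x, q, grid)
                  for _ in range(B)]
          return stat, np.mean([b >= stat for b in boot])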
  3. By: Mingfeng Zhan (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Ying Fang (The Wang Yanan Institute for Studies in Economics and Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China); Ming Lin (The Wang Yanan Institute for Studies in Economics and Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China)
    Abstract: We conduct a series of simulations to compare the finite sample performance of average treatment effect estimators based on four recently proposed methodologies: the covariate balancing propensity score method, the stable balance weighting approach, the calibration balance weighting procedure, and the integrated propensity score method. Simulation results show that the four covariate balance weighting methods generally perform better than the conventional method, maximum likelihood estimation without covariate balancing, and that among the four it is difficult to tell which covariate balance weighting method dominates the others.
    Keywords: Covariate balance; Propensity score; Treatment effects
    JEL: C3 C5
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202020&r=all
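    For orientation, a minimal sketch of the conventional benchmark in this comparison: an inverse-probability-weighted ATE with a logit propensity score fit by maximum likelihood (scikit-learn assumed available). The covariate balancing methods compared in the paper instead choose the score or weights so that weighted covariate means are equal across treatment groups:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ipw_ate(X, d, y):
          """Hajek-normalized inverse probability weighting estimate of the ATE."""
          p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
          w1, w0 = d / p, (1 - d) / (1 - p)
          return (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()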
  4. By: Gregory P. Casey; Marc P. B. Klemp
    Abstract: We provide a simple framework for interpreting instrumental variable regressions when there is a gap in time between the impact of the instrument and the measurement of the endogenous variable, highlighting a particular violation of the exclusion restriction that can arise in this setting. In the presence of this violation, conventional IV regressions do not consistently estimate a structural parameter of interest. Building on our framework, we develop a simple empirical method to estimate the long-run effect of the endogenous variable. We use our bias correction method to examine the role of institutions in economic development, following Acemoglu et al. (2001). We find long-run coefficients that are smaller than the coefficients from the existing literature, demonstrating the quantitative importance of our framework.
    Keywords: long-run economic development, instrumental variable regression
    JEL: C10 C30 O10 O40
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8716&r=all
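    A bare-bones just-identified IV sketch to fix notation (z the historical instrument, x the contemporary endogenous regressor, y the outcome); the paper's point is that if z also shifted past values of x that affect y directly, this coefficient no longer recovers the structural long-run effect without their proposed correction:

      import numpy as np

      def tsls(y, x, z):
          """Just-identified IV slope: cov(z, y) / cov(z, x)."""
          z_c = z - z.mean()
          return (z_c @ y) / (z_c @ x)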
  5. By: Yacine Aït-Sahalia; Jean Jacod; Dacheng Xiu
    Abstract: We develop and implement asymptotic theory to conduct inference on continuous-time asset pricing models using individual equity returns sampled at high frequencies over an increasing time horizon. We study the identification and estimation of risk premia for the continuous and jump components of risks. Our results generalize the Fama-MacBeth two-pass regression approach from the classical discrete-time factor setting to a continuous-time factor model with general dynamics for the factors, idiosyncratic components and factor loadings, while accounting for the fact that the inputs of the second-pass regression are themselves estimated in the first pass.
    JEL: C51 C52 C58 G12
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28140&r=all
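    A sketch of the classical discrete-time Fama-MacBeth two-pass regression that the paper generalizes to continuous time; the naive standard error below ignores the first-pass estimation error that the paper's asymptotic theory accounts for:

      import numpy as np

      def fama_macbeth(R, F):
          """R: T x N panel of excess returns; F: T x K factor realizations."""
          T, N = R.shape
          X = np.column_stack([np.ones(T), F])
          # pass 1: time-series regressions give each asset's factor loadings
          betas = np.linalg.lstsq(X, R, rcond=None)[0][1:].T          # N x K
          # pass 2: cross-sectional regression of returns on loadings, each period
          Z = np.column_stack([np.ones(N), betas])
          lambdas = np.array([np.linalg.lstsq(Z, R[t], rcond=None)[0]
                              for t in range(T)])
          prem = lambdas.mean(axis=0)                    # zero-beta rate + premia
          se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)  # ignores pass-1 error
          return prem, se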
  6. By: Pitarakis, Jean-Yves; Gonzalo, Jesús
    Abstract: This paper is concerned with the interactions of persistence and dimensionality in the context of the eigenvalue estimation problem of large covariance matrices arising in cointegration and principal component analysis. Following a review of the early and more recent developments in this area, we investigate the behaviour of these eigenvalues in a VAR setting that blends pure unit root, local to unit root and mildly integrated components. Our results highlight the seriousness of spurious relationships that may arise in such Big Data environments even when the degree of persistence of the variables involved is mild and affects only a small proportion of a large data matrix, with important implications for forecasts based on principal component regressions and related methods. We argue that first differencing prior to principal component analysis may be suitable even in stationary or nearly stationary environments.
    Keywords: Principal Components; High Dimensional Covariances; Persistence; Spurious Factors; Spurious Cointegration
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31553&r=all
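    A small simulation in the spirit of the paper's warning, with illustrative parameter choices: fifty independent near-unit-root series share no common factor, yet the leading eigenvalue of the sample covariance of the levels suggests one; first differencing removes the artefact:

      import numpy as np

      rng = np.random.default_rng(1)
      T, N, rho = 200, 50, 0.98
      x = np.zeros((T, N))
      for t in range(1, T):
          x[t] = rho * x[t - 1] + rng.standard_normal(N)

      def top_share(z):
          """Share of total variance carried by the first principal component."""
          ev = np.linalg.eigvalsh(np.cov(z, rowvar=False))
          return ev[-1] / ev.sum()

      print("levels     :", round(top_share(x), 3))                 # spuriously large
      print("differences:", round(top_share(np.diff(x, axis=0)), 3))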
  7. By: Jianghao Chu (Ford Motor Co); Tae-Hwy Lee (Department of Economics, University of California Riverside); Aman Ullah (University of California Riverside); Haifeng Xu (Xiamen University)
    Abstract: The exact finite sample distribution of the $F$ statistic using the heteroskedasticity-consistent (HC) covariance matrix estimators of the regression parameter estimators is unknown. In this paper, we derive the exact finite sample distribution of the $F$ ($=t^2$) statistic for a single linear restriction on the regression parameters. We show that the $F$ statistic can be expressed as a ratio of quadratic forms, and therefore its exact cumulative distribution under the null hypothesis can be derived from the result of Imhof (1961). A numerical calculation is carried out for the exact distribution of the $F$ statistic using various HC covariance matrix estimators, and the rejection probability under the null hypothesis (size) based on the exact distribution is examined. The results show that inference based on the exact finite sample distribution is remarkably reliable, while, in comparison, the use of the $F$-table leads to serious over-rejection when the sample is small or leveraged/unbalanced. An empirical application highlights that using the exact distribution of the $F$ statistic will increase the accuracy of inference in empirical research.
    Keywords: Heteroskedasticity, Finite sample theory, Imhof distribution
    JEL: C1 C2
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202027&r=all
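    A sketch of the Imhof (1961) step (SciPy assumed available), with A and B standing for the numerator and denominator quadratic forms so that P(F > c) = P(u'(A - cB)u > 0) for u ~ N(0, I); the central case is shown, and the construction of A and B for specific HC estimators is omitted:

      import numpy as np
      from scipy.integrate import quad

      def imhof_prob_positive(lams):
          """P(sum_j lams[j] * chi2_1 > 0) via Imhof's inversion formula."""
          lams = lams[np.abs(lams) > 1e-12]
          def integrand(u):
              theta = 0.5 * np.arctan(lams * u).sum()
              rho = np.exp(0.25 * np.log1p((lams * u) ** 2).sum())
              return np.sin(theta) / (u * rho)
          val, _ = quad(integrand, 0, np.inf, limit=200)
          return 0.5 + val / np.pi

      def exact_tail(A, B, c):
          """P(F > c) with F = (u'Au)/(u'Bu), u ~ N(0, I): P(u'(A - cB)u > 0)."""
          return imhof_prob_positive(np.linalg.eigvalsh(A - c * B))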
  8. By: Arteche González, Jesús María
    Abstract: Bootstrap techniques in the frequency domain have proved to be effective tools for approximating the distribution of many statistics of weakly dependent (short memory) series. However, their validity under long memory has not yet been analysed. This paper proposes a Frequency Domain Local Bootstrap (FDLB) based on resampling a locally studentised version of the periodogram in a neighbourhood of the frequency of interest. A bound on the Mallows distance between the distributions of the original and bootstrap periodograms is offered for stationary and non-stationary long memory series. This result is in turn used to justify the use of the FDLB for some statistics such as the average periodogram or the Local Whittle (LW) estimator. Finally, the finite sample behaviour of the FDLB for the LW estimator is analysed in a Monte Carlo study, comparing its performance with rival alternatives.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:48980&r=all
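    A sketch of the resampling step, assuming an illustrative rectangular neighbourhood and local-mean studentization; the paper's bandwidth and studentization choices may differ:

      import numpy as np

      def periodogram(x):
          n = len(x)
          d = np.fft.fft(x - x.mean())[1:n // 2 + 1]
          return np.abs(d) ** 2 / (2 * np.pi * n)

      def fdlb_resample(I, half_width=5, seed=None):
          """One bootstrap draw of the periodogram near each Fourier frequency."""
          rng = np.random.default_rng(seed)
          m = len(I)
          I_star = np.empty(m)
          for j in range(m):
              lo, hi = max(0, j - half_width), min(m, j + half_width + 1)
              block = I[lo:hi]
              f_hat = block.mean()                 # local spectrum estimate
              eps = block / f_hat                  # locally studentized ordinates
              I_star[j] = f_hat * rng.choice(eps)  # resample within the neighbourhood
          return I_star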
  9. By: Michela Bia; Martin Huber; Luk\'a\v{s} Laff\'ers
    Abstract: This paper considers treatment evaluation when outcomes are only observed for a subpopulation due to sample selection or outcome attrition/non-response. For identification, we combine a selection-on-observables assumption for treatment assignment with either selection-on-observables or instrumental variable assumptions concerning the outcome attrition/sample selection process. To control in a data-driven way for potentially high dimensional pre-treatment covariates that motivate the selection-on-observables assumptions, we adapt the double machine learning framework to sample selection problems. That is, we make use of (a) Neyman-orthogonal and doubly robust score functions, which imply the robustness of treatment effect estimation to moderate regularization biases in the machine learning-based estimation of the outcome, treatment, or sample selection models and (b) sample splitting (or cross-fitting) to prevent overfitting bias. We demonstrate that the proposed estimators are asymptotically normal and root-n consistent under specific regularity conditions concerning the machine learners. The estimator is available in the causalweight package for the statistical software R.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.00745&r=all
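    A stylized cross-fitting sketch of the Neyman-orthogonal (doubly robust) ATE score under selection-on-observables, omitting the sample-selection correction that is the paper's contribution; the learners and fold count are illustrative, and the authors' actual estimator is in the R causalweight package:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
      from sklearn.model_selection import KFold

      def dml_ate(X, d, y, folds=2, seed=0):
          psi = np.zeros(len(y))
          for train, test in KFold(folds, shuffle=True, random_state=seed).split(X):
              # propensity score, trimmed away from 0 and 1
              p = RandomForestClassifier(random_state=seed).fit(
                  X[train], d[train]).predict_proba(X[test])[:, 1].clip(0.01, 0.99)
              mu = {}
              for a in (0, 1):                      # outcome models by treatment arm
                  idx = train[d[train] == a]
                  mu[a] = RandomForestRegressor(random_state=seed).fit(
                      X[idx], y[idx]).predict(X[test])
              dt, yt = d[test], y[test]
              # AIPW score: robust to moderate regularization bias in p and mu
              psi[test] = (mu[1] - mu[0]
                           + dt * (yt - mu[1]) / p
                           - (1 - dt) * (yt - mu[0]) / (1 - p))
          return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))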
  10. By: Pitarakis, Jean-Yves; Gonzalo, Jesús; Da Silva Neto, Anibal Emiliano
    Abstract: We introduce a set of test statistics for assessing the presence of regimes in out of sample forecast errors produced by recursively estimated linear predictive regressions that can accommodate multiple highly persistent predictors. Our test statistics are designed to be robust to the chosen starting window size and are shown to be both consistent and locally powerful. Their limiting null distributions are also free of nuisance parameters and hence robust to the degree of persistence of the predictors. Our methods are subsequently applied to the predictability of the value premium, whose dynamics are shown to be characterised by state dependence.
    Keywords: Thresholds; Cusum; Out Of Sample Forecast Errors; Predictability; Predictive Regressions
    JEL: C58 C53 C22 C12
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31555&r=all
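    A stylized sketch of the raw ingredients: recursive one-step forecast errors from an expanding estimation window, summarized by a CUSUM-type statistic (the paper's statistics use different normalizations designed to be robust to the starting window size):

      import numpy as np

      def recursive_errors(y, x, k0):
          """One-step-ahead errors, estimation window growing from k0."""
          errs = []
          for t in range(k0, len(y) - 1):
              Z = np.column_stack([np.ones(t), x[:t]])
              b = np.linalg.lstsq(Z, y[1:t + 1], rcond=None)[0]
              errs.append(y[t + 1] - b[0] - b[1] * x[t])
          return np.array(errs)

      def cusum_stat(e):
          """Max absolute standardized partial sum of demeaned squared errors."""
          u = e ** 2 - (e ** 2).mean()
          s = np.cumsum(u) / (np.std(u, ddof=1) * np.sqrt(len(u)))
          return np.abs(s).max()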
  11. By: Julien Albertini (Univ Lyon, Université Lumière Lyon 2, GATE UMR 5824, F-69130 Ecully, France); Stéphane Moyen (Deutsche Bundesbank)
    Abstract: This paper provides a general representation of endogenous and threshold-based regime switching models and develops an efficient numerical solution method. The regime switching is triggered endogenously when some variables cross threshold conditions that can themselves be regime-dependent. We illustrate our approach using an RBC model with state-dependent government spending policies. It is shown that regime-switching models involve strong nonlinearities and discontinuities in the dynamics of the model. However, our numerical solution, based on simulation and projection methods with regime-dependent policy rules, is accurate and fast enough to take all of these challenging aspects into account efficiently. Several alternative specifications of the model and the method are studied.
    Keywords: Regime-switching, RBC model, simulation, accuracy
    JEL: E3 J6
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:gat:wpaper:2035&r=all
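    A toy simulation of the structure being solved, with purely illustrative numbers: the policy regime switches when the lagged state crosses a threshold that itself depends on the current regime (the paper solves, rather than simulates, models of this form):

      import numpy as np

      rng = np.random.default_rng(2)
      T, rho = 200, 0.9
      y = np.zeros(T)
      g = np.zeros(T, dtype=int)
      thresh = {0: -0.5, 1: 0.0}   # regime-dependent thresholds (illustrative)
      boost = {0: 0.0, 1: 0.3}     # extra spending while the rule is active
      for t in range(1, T):
          g[t] = 1 if y[t - 1] < thresh[g[t - 1]] else 0
          y[t] = rho * y[t - 1] + boost[g[t]] + 0.1 * rng.standard_normal()
      print("share of periods in the high-spending regime:", g.mean())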
  12. By: Pitarakis, Jean-Yves; Gonzalo, Jesús
    Abstract: This paper is concerned with detecting the presence of out of sample predictability in linear predictive regressions with a potentially large set of candidate predictors. We propose a procedure based on out of sample MSE comparisons that is implemented in a pairwise manner using one predictor at a time, resulting in an aggregate test statistic that is standard normally distributed under the null hypothesis of no linear predictability. Predictors can be highly persistent, purely stationary, or a combination of both. Upon rejection of the null hypothesis we subsequently introduce a predictor screening procedure designed to identify the most active predictors.
    Keywords: High Dimensional Predictability; Predictive Regressions; Forecasting
    JEL: C53 C52 C32 C12
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:31554&r=all
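    A stylized sketch of the pairwise screening idea: standardized out of sample loss differentials of each candidate predictor against a recursive-mean benchmark. The paper's aggregate statistic and its standard normal limit rest on specific sample-split and normalization choices not reproduced here:

      import numpy as np

      def screen_predictors(y, X, k0):
          """Standardized OOS MSE gain of each predictor vs. the recursive mean."""
          stats = []
          for j in range(X.shape[1]):
              d = []
              for t in range(k0, len(y) - 1):
                  Z = np.column_stack([np.ones(t), X[:t, j]])
                  b = np.linalg.lstsq(Z, y[1:t + 1], rcond=None)[0]
                  e1 = y[t + 1] - (b[0] + b[1] * X[t, j])  # predictor-based forecast
                  e0 = y[t + 1] - y[1:t + 1].mean()        # benchmark forecast
                  d.append(e0 ** 2 - e1 ** 2)
              d = np.asarray(d)
              stats.append(np.sqrt(len(d)) * d.mean() / d.std(ddof=1))
          return np.asarray(stats)   # large values flag active predictors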
  13. By: Roberto Casarin (University Ca' Foscari of Venice, Italy); Stefano Grassi (University of Rome Tor Vergata, Italy); Francesco Ravazzolo (Free University of Bozen-Bolzano, Italy; BI Norwegian Business School, Norway; Rimini Centre for Economic Analysis); Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam, The Netherlands; Norges Bank, Norway; Tinbergen Institute, The Netherlands; Rimini Centre for Economic Analysis)
    Abstract: A Bayesian dynamic compositional model is introduced that can deal with combining a large set of predictive densities. It extends the mixture of experts and the smoothly mixing regression models by allowing for combination weight dependence across models and time. A compositional model with logistic-normal noise is specified for the latent weight dynamics, and the class-preserving property of the logistic-normal is used to reduce the dimension of the latent space and to build a compositional factor model. The projection used in the dimensionality reduction is based on a dynamic clustering process which partitions the large set of predictive densities into a smaller number of subsets. We exploit the state space form of the model to provide an efficient inference procedure based on particle MCMC. The approach is applied to track the Standard & Poor's 500 index by combining 3712 predictive densities, based on 1856 US individual stocks, clustered into a relatively small number of model sets. For the period 2007-2009, which included the financial crisis, substantial predictive gains are obtained, in particular in the tails, using Value-at-Risk. Similar predictive gains are obtained for the US Treasury Bill yield using a large set of macroeconomic variables. Evidence obtained on model set incompleteness and dynamic patterns in the financial clusters provides valuable signals for improved modelling and more effective economic and financial decisions.
    Keywords: Density Combination, Large Set of Predictive Densities, Compositional Factor Models, Nonlinear State Space, Bayesian Inference
    JEL: C11 C15 C53 E37
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:20-27&r=all
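    A heavily simplified sketch of the combination structure only: time-varying weights on the simplex generated as a softmax (logistic-normal) transform of Gaussian random-walk states. The compositional factor reduction, the clustering step, and the particle-MCMC inference are all omitted, and the density values below are placeholders:

      import numpy as np

      rng = np.random.default_rng(3)
      T, K = 100, 3                     # periods, predictive densities to combine
      z = np.zeros((T, K))
      for t in range(1, T):
          z[t] = z[t - 1] + 0.1 * rng.standard_normal(K)   # latent weight states
      w = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True) # weights on the simplex
      dens = rng.uniform(0.1, 1.0, (T, K))   # placeholder predictive density values
      combined = (w * dens).sum(axis=1)      # pooled predictive density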
  14. By: Léa BOU SLEIMAN (CREST, Polytechnique.); Germain GAUTHIER (CREST, Polytechnique.)
    Abstract: This paper discusses mitigation policy evaluation in the context of a pandemic. We take the SIRD model as a benchmark for epidemic dynamics and describe the theoretical implications of these dynamics for reduced form estimation. We show that, without additional theoretical structure, agnostic reduced form estimations that have been used in the literature are subject to an omitted variable bias, which in turn affects both the identification of treatment effects and counterfactual analysis.
    Date: 2020–12–16
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2020-32&r=all
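    A minimal discrete-time SIRD simulation of the kind used as the benchmark, with illustrative parameter values: a mitigation policy that lowers the transmission rate changes current infections, which feed back into future dynamics; this evolving epidemic state is the variable that naive reduced forms omit:

      import numpy as np

      def sird(T, beta, gamma=0.1, delta=0.01, i0=1e-4):
          S, I, R, D = 1 - i0, i0, 0.0, 0.0
          path = []
          for t in range(T):
              new_inf = beta(t) * S * I        # policy enters via the contact rate
              rec, dth = gamma * I, delta * I
              S, I = S - new_inf, I + new_inf - rec - dth
              R, D = R + rec, D + dth
              path.append((S, I, R, D))
          return np.array(path)

      # lockdown from day 30: transmission rate falls from 0.3 to 0.15
      path = sird(120, beta=lambda t: 0.3 if t < 30 else 0.15)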
  15. By: Marc Hallin; Carlos Trucíos
    Abstract: Beyond their importance from a regulatory policy point of view, Value-at-Risk (VaR) and Expected Shortfall (ES) play an important role in risk management, portfolio allocation, capital level requirements, trading systems, and hedging strategies. Unfortunately, due to the curse of dimensionality, their accurate estimation in large portfolios is quite a challenge. To tackle this problem, we propose a filtered historical simulation method in which high-dimensional conditional covariance matrices are estimated via a general dynamic factor model with infinite-dimensional factor space and conditionally heteroscedastic factors. The procedure is applied to a panel with concentration ratio close to one. Back-testing and scoring results indicate that both VaR and ES are accurately estimated under our method, which outperforms alternative approaches available in the literature.
    Keywords: conditional covariance; high-dimensional time series; large panels; risk measures; volatility
    JEL: C10 C32 C53 G17 G32
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/315983&r=all
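    A univariate sketch of filtered historical simulation, assuming the Python arch package is available; the paper performs the filtering with a general dynamic factor model so that the approach scales to large portfolios, whereas here the filter is a plain GARCH(1,1):

      import numpy as np
      from arch import arch_model   # assumed available

      def fhs_var_es(returns, level=0.01, n_sim=10000, seed=0):
          # devolatilize returns (scaled by 100 per arch convention)
          res = arch_model(100 * returns, vol="GARCH", p=1, q=1).fit(disp="off")
          z = res.resid / res.conditional_volatility   # standardized residuals
          sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
          rng = np.random.default_rng(seed)
          # resample residuals, rescale by the one-step volatility forecast
          sim = sigma * rng.choice(np.asarray(z), n_sim, replace=True) / 100
          var = -np.quantile(sim, level)
          es = -sim[sim <= -var].mean()
          return var, es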
  16. By: Michael W. McCracken
    Abstract: We investigate a test of conditional predictive ability described in Giacomini and White (2006; Econometrica). Our main goal is simply to demonstrate that data-generating processes satisfying the null hypothesis exist and, in doing so, to clarify just how unlikely it is for this hypothesis to hold. We do so using a simple example of point forecasting under quadratic loss. We then provide simulation evidence on the size and power of the test. While the test can be accurately sized we find that power is typically low.
    Keywords: prediction; out-of-sample; inference
    JEL: C53 C12 C52
    Date: 2020–12–18
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:89216&r=all
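    A compact sketch of the one-step Giacomini-White statistic: with loss differentials dL and a q-vector of test functions h_t (here a constant and the lagged differential, an illustrative choice), the Wald statistic for E[h_t dL_{t+1}] = 0 is chi-squared with q degrees of freedom under the null:

      import numpy as np
      from scipy.stats import chi2

      def gw_test(dL):
          """One-step test of E[dL_{t+1} | F_t] = 0 with h_t = (1, dL_t)."""
          h = np.column_stack([np.ones(len(dL) - 1), dL[:-1]])
          Z = h * dL[1:, None]                    # h_t * dL_{t+1}
          n, q = Z.shape
          Zbar = Z.mean(axis=0)
          Omega = Z.T @ Z / n                     # one-step variance estimator
          stat = n * Zbar @ np.linalg.solve(Omega, Zbar)
          return stat, chi2.sf(stat, df=q)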
  17. By: Shuo Liu; Nick Netzer
    Abstract: Surveys are an important tool in economics and in the social sciences more broadly. However, methods used to analyse ordinal survey data (e.g., ordered probit) rely on strong and often unjustified distributional assumptions. In this paper, we propose using survey response times to solve that problem. Our main identifying assumption is that individual response time is decreasing in the distance between the value of the latent variable and an indecision threshold. This assumption is supported by a large body of evidence on chronometric effects in psychology and neuroscience. We provide conditions under which the expected value of the latent variable (e.g., average happiness) can be compared across groups, even without making distributional assumptions. By applying it to an online survey experiment, we show how our method can be implemented in practice and gives rise to new insights.
    Keywords: Surveys, ordinal data, response times, non-parametric identification
    JEL: C14 D60 D91 I31
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:371&r=all
  18. By: Camilla Mastromarco (Dipartimento di Scienze dell'Economia, Università degli Studi del Salento.); Léopold Simar (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain.); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia)
    Abstract: Despite its long and great history, developed institutions, and high levels of physical and human capital, the Italian economy has been fairly stagnant during the last three decades. In this paper, we merge two streams of literature: nonparametric methods to estimate the frontier efficiency of an economy, which allow us to develop a new measure of the output gap, and nonparametric methods to estimate the probability of an economic recession. To illustrate the new framework we use quarterly data for Italy from 1995 to 2019, and find that our model, using either the nonparametric or the linear probit model, is able to provide useful insights.
    Keywords: Forecasting, Output Gap, Robust Nonparametric Frontier, Generalized Nonparametric Quasi-Likelihood Method, Italian recession
    JEL: C5 C14 C13 C32 D24 E37 O4
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:qld:uqcepa:153&r=all

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.