nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒10‒12
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Homogeneous Approach to Testing for Granger Non-Causality in Heterogeneous Panels By Arturas Juodis; Yiannis Karavias; Vasilis Sarafidis
  2. Nonparametric Identification and Estimation of Panel Quantile Models with Sample Selection By Sungwon Lee
  3. Nonparametric Tests for Conditional Quantile Independence with Duration Outcomes By Sungwon Lee
  4. TippingSens: An R Shiny Application to Facilitate Sensitivity Analysis for Causal Inference Under Confounding By Haensch, Anna-Carolina; Drechsler, Jörg; Bernhard, Sarah
  5. Identification and Confidence Regions for Treatment Effect and its Distribution under Stochastic Dominance By Sungwon Lee
  6. Canonical Correlation-based Model Selection for the Multilevel Factors By In Choi; Rui Lin; Yongcheol Shin
  7. Online Appendix for Canonical Correlation-based Model Selection for the Multilevel Factors By In Choi; Rui Lin; Yongcheol Shin
  8. Comparison of Variable Selection Methods for Time-to-Event Data in High-Dimensional Settings By J. Gilhodes; Florence Dalenc; Jocelyn Gal; C. Zemmour; Eve Leconte; Jean Marie Boher; Thomas Filleron
  9. Non asymptotic controls on a stochastic algorithm for superquantile approximation By Gadat, Sébastien; Costa, Manon
  10. Optimal probabilistic forecasts: When do they work? By Ruben Loaiza-Maya; Gael M. Martin; David T. Frazier; Worapree Maneesoonthorn; Andres Ramirez Hassan
  11. Gravity Models and the Law of Large Numbers By Colin Jareb; Sergey K. Nigai
  12. A multinomial and rank-ordered logit model with inter- and intra-individual heteroscedasticity By Anoek Castelein; Dennis Fok; Richard Paap
  13. Identifying Behavioral Responses to Tax Reforms: New Insights and a New Approach By Katrine Marie Jakobsen; Jakob Egholt Søgaard
  14. Gravity-Model Estimation with Time-Interval Data: Revisiting the Impact of Free Trade Agreements By Peter H. Egger; Mario Larch; Yoto V. Yotov
  15. Optimal and robust combination of forecasts via constrained optimization and shrinkage By Roccazzella, Francesco; Gambetti, Paolo; Vrins, Frédéric
  16. A Suggestion for a Dynamic Multi Factor Model (DMFM) By Heather D. Gibson; Stephen G. Hall; George S. Tavlas

  1. By: Arturas Juodis; Yiannis Karavias; Vasilis Sarafidis
    Abstract: This paper develops a new method for testing for Granger non-causality in panel data models with large cross-sectional (N) and time series (T) dimensions. The method is valid in models with homogeneous or heterogeneous coefficients. The novelty of the proposed approach lies in the fact that under the null hypothesis, the Granger-causation parameters are all equal to zero, and thus they are homogeneous. Therefore, we put forward a pooled least-squares (fixed effects type) estimator for these parameters only. Pooling over cross-sections guarantees that the estimator has a root NT convergence rate. In order to account for the well-known "Nickell bias", the approach makes use of the Split Panel Jackknife method. Subsequently, a Wald test is proposed, which is based on the bias-corrected estimator. Finite-sample evidence shows that the resulting approach performs well in a variety of settings and outperforms existing procedures. Using a panel data set of 350 U.S. banks observed during 56 quarters, we test for Granger non-causality between banks' profitability and cost efficiency.
    Keywords: panel data, Granger causality, VAR, "Nickell bias", bias correction, fixed effects
    JEL: C12 C13 C23 C33
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-32&r=all
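    Illustrative sketch: a minimal numerical version of the pooled fixed-effects estimator with a half-panel jackknife bias correction described in the abstract above. This is not the authors' exact estimator; the simulated DGP, step choices, and variable names are illustrative, and the Wald step is omitted.
```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 40
rho, beta = 0.5, 0.0            # beta = 0: Granger non-causality holds

# simulate a dynamic panel with individual fixed effects
alpha = rng.normal(size=N)
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + beta * x[:, t - 1] + rng.normal(size=N)

def pooled_fe(y, x):
    """Pooled (within) OLS of y_it on [y_{i,t-1}, x_{i,t-1}]."""
    Z = np.stack([y[:, :-1], x[:, :-1]], axis=-1)
    w = y[:, 1:]
    Z = Z - Z.mean(axis=1, keepdims=True)        # within transformation
    w = w - w.mean(axis=1, keepdims=True)
    return np.linalg.lstsq(Z.reshape(-1, 2), w.reshape(-1), rcond=None)[0]

# half-panel jackknife: theta_jk = 2*theta_full - (theta_first + theta_second)/2
theta_full = pooled_fe(y, x)
h = T // 2
theta_jk = 2 * theta_full - 0.5 * (pooled_fe(y[:, :h], x[:, :h]) +
                                   pooled_fe(y[:, h:], x[:, h:]))
print("uncorrected (rho, beta):", theta_full.round(3))
print("jackknife-corrected    :", theta_jk.round(3))
```
    The uncorrected within estimate of rho shows the Nickell bias; the jackknife-corrected estimate moves it back toward the true value, which is what makes the subsequent Wald test well behaved.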
  2. By: Sungwon Lee (Department of Economics, Sogang University, Seoul)
    Abstract: This paper develops nonparametric panel quantile regression models with sample selection. The class of models allows the unobserved heterogeneity to be correlated with time-varying regressors in a time-invariant manner. I adopt the correlated random effects approach proposed by Mundlak (1978) and Chamberlain (1980), and the control function approach to correct the sample selection bias. The class of models is general and flexible enough to incorporate many empirical issues, such as endogeneity of regressors and censoring. Identification of the model requires that T≥3, where T is the number of time periods, and that there is an excluded variable that affects the selection probability. Based on the identification result, this paper proposes sieve two-step estimation to estimate the model parameters. This paper also establishes the asymptotic theory for the sieve two-step estimators, including consistency, convergence rates, and asymptotic normality of functionals.
    Keywords: Sample selection, panel data, quantile regression, nonseparable models, correlated random effects, control function approach, nonparametric identification, sieve two-step estimation.
    JEL: C14 C21 C23
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:2012&r=all
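    Illustrative sketch: a schematic two-step version of the idea in the abstract above, using a parametric inverse-Mills control function as a stand-in for the paper's sieve-based, distribution-free construction. The DGP, the pooled probit first stage, and the Mundlak means are all illustrative assumptions.
```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
N, T = 500, 3                                  # T >= 3, as identification requires
x = rng.normal(size=(N, T))
z = rng.normal(size=(N, T))                    # excluded variable shifting selection only
a = rng.normal(size=N)                         # time-invariant unobserved heterogeneity
s = (0.5 * z + 0.3 * x + a[:, None] + rng.normal(size=(N, T))) > 0
y = np.where(s, 1.0 * x + a[:, None] + rng.normal(size=(N, T)), np.nan)

xbar = np.repeat(x.mean(axis=1, keepdims=True), T, axis=1)  # Mundlak/Chamberlain device

# step 1: pooled probit for selection, with the instrument z excluded from the outcome
W = sm.add_constant(np.column_stack([x.ravel(), xbar.ravel(), z.ravel()]))
probit = sm.Probit(s.ravel().astype(float), W).fit(disp=0)
idx = W @ probit.params
cf = norm.pdf(idx) / norm.cdf(idx)             # inverse-Mills control function

# step 2: quantile regression on the selected sample, control function included
sel = s.ravel()
X2 = sm.add_constant(np.column_stack([x.ravel()[sel], xbar.ravel()[sel], cf[sel]]))
fit = QuantReg(y.ravel()[sel], X2).fit(q=0.5)
print(fit.params.round(3))                     # slope on x should be near 1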
  3. By: Sungwon Lee (Department of Economics, Sogang University, Seoul)
    Abstract: It is common for empirical studies to specify models with many covariates to eliminate the omitted variable bias, even if some of them are potentially irrelevant. In the case where models are nonparametrically specified, such a practice results in the curse of dimensionality. Therefore, it is important in nonparametric models to drop irrelevant variables and consider a small number of covariates to improve the precision of estimation. This paper develops nonparametric significance tests for censored quantile regression models with duration outcomes. The null hypothesis is characterized by a conditional moment restriction. I adopt the integrated conditional moment (ICM) approach, which was developed by Bierens (1982, 1990), to construct test statistics. The testing procedure does not require estimation of the alternative models. Two test statistics are considered: one is the Kolmogorov-Smirnov type statistic and the other is the Cramér-von Mises type statistic. These test statistics are functionals of a stochastic process which converges weakly to a centered Gaussian process. The test has non-trivial power against local alternatives at the parametric rate. A subsampling procedure is proposed to obtain critical values.
    Keywords: Conditional quantile independence, nonparametric tests, duration outcomes, integrated conditional moment, empirical processes
    JEL: C12 C14 C41
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:2013&r=all
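    Illustrative sketch: a bare-bones ICM statistic in the Bierens tradition, with Kolmogorov-Smirnov and Cramér-von Mises functionals and subsampling critical values. In the paper the residuals come from an estimated censored quantile model; here they are simply drawn under the null, so this only shows the mechanics of the test statistic.
```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=n)
u = rng.normal(size=n)              # residuals; under H0, E[u | X] = 0

tgrid = np.linspace(-3, 3, 61)

def icm_process(u, X):
    # Z(t) = n^{-1/2} sum_i u_i exp(t * arctan(X_i)); arctan bounds X (Bierens)
    return np.array([(u * np.exp(t * np.arctan(X))).sum() / np.sqrt(len(u))
                     for t in tgrid])

Z = icm_process(u, X)
cvm = np.mean(Z ** 2)               # Cramér-von Mises functional
ks = np.max(np.abs(Z))              # Kolmogorov-Smirnov functional

# subsampling critical values: recompute the statistic on subsamples of size b < n
b, B = 50, 500
subs = np.empty(B)
for r in range(B):
    i = rng.choice(n, size=b, replace=False)
    subs[r] = np.mean(icm_process(u[i], X[i]) ** 2)
print(f"CvM = {cvm:.3f}, KS = {ks:.3f}, "
      f"subsampled 95% CvM critical value = {np.quantile(subs, 0.95):.3f}")
```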
  4. By: Haensch, Anna-Carolina; Drechsler, Jörg (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Bernhard, Sarah (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany])
    Abstract: "Most strategies for causal inference based on quasi-experimental or observational data critically rely on the assumption of unconfoundedness. If this assumption is suspect, sensitivity analysis can be a viable tool to evaluate the impact of confounding on the analysis of interest. One of the earliest proposals for such a sensitivity analysis was suggested by Rosenbaum/ Rubin (1983). However, while it is straightforward to obtain estimates for the causal effect under specific assumptions regarding an unobserved confounder, conducting a full sensitivity analysis based on a range of parameter settings is unwieldy based on the simple forking tables which Rosenbaum and Rubin used. To tackle the multiple parameter problem of the Rosenbaum-Rubin approach, we present an interactive R Shiny application called TippingSens, which visualizes the impact of various parameter settings on the estimated causal effect. Borrowing from the literature on tipping point analysis, the flexible app facilitates manipulating all parameters simultaneously. We demonstrate the usefulness of our app by conducting a sensitivity analysis for a quasi-experiment measuring the effect of vocational training programs on unemployed men. The online supplement accompanying this paper provides a step-by-step introduction to the app using the original illustrative example from Rosenbaum/Rubin (1983)." (Author's abstract, IAB-Doku) ((en))
    JEL: C31 C87
    URL: http://d.repec.org/n?u=RePEc:iab:iabdpa:202029&r=all
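    Illustrative sketch: TippingSens itself is an R Shiny app; the loop below only shows the grid-and-tipping-point logic in Python, using a deliberately simplified confounding-bias formula (adjusted = naive - delta * dp) rather than the exact Rosenbaum-Rubin parameterization. The naive effect and parameter ranges are hypothetical.
```python
import numpy as np

naive_effect = 2.0   # hypothetical unadjusted effect of training on earnings

# sensitivity parameters for a binary unobserved confounder U:
#   delta = effect of U on the outcome
#   dp    = difference in P(U = 1) between treated and controls
deltas = np.linspace(0.0, 4.0, 41)
dps = np.linspace(0.0, 1.0, 101)

# schematic bias adjustment over the whole parameter grid at once
adjusted = naive_effect - np.outer(deltas, dps)

# tipping point: smallest prevalence difference at which the effect flips sign
for i in range(0, len(deltas), 10):
    row = adjusted[i] <= 0
    if row.any():
        print(f"delta = {deltas[i]:.1f}: effect tips once dp >= {dps[np.argmax(row)]:.2f}")
    else:
        print(f"delta = {deltas[i]:.1f}: effect never tips")
```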
  5. By: Sungwon Lee (Department of Economics, Sogang University, Seoul)
    Abstract: This paper considers identification of treatment effect and its distribution under some distributional assumptions. I assume that a binary treatment is endogenously determined. The main identification objects are the quantile treatment effect and the distribution of the treatment effect. To examine the identification problems, I construct a counterfactual model without specifying an underlying economic model and apply Manski's approach (Manski (1990)) to find the quantile treatment effects. For the distribution of the treatment effect, I follow the approach proposed by Fan and Park (2010). Some distributional assumptions called stochastic dominance are imposed on the model. Stochastic dominance assumptions are consistent with economic theories in many areas and the results show that those distributional assumptions help to tighten the bounds on the parameters of interest. This paper also provides confidence regions for identified sets that are pointwise consistent in level. An empirical study on the return to college is provided. The empirical results confirm that the stochastic dominance assumptions improve the bounds on the distribution of the treatment effect.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:2011&r=all
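    Illustrative sketch: the Fan-Park approach builds on Makarov-type bounds on the distribution of Y1 - Y0 given the two marginals. The snippet below computes those bounds from empirical CDFs on simulated outcomes; the stochastic dominance restrictions and confidence regions that are the paper's contribution are omitted.
```python
import numpy as np

rng = np.random.default_rng(3)
y1 = rng.normal(1.0, 1.0, size=2000)    # treated outcomes (illustrative draws)
y0 = rng.normal(0.0, 1.0, size=2000)    # control outcomes

def ecdf(sample, pts):
    return np.searchsorted(np.sort(sample), pts, side="right") / len(sample)

grid = np.linspace(-5.0, 6.0, 500)

def bounds_on_te_cdf(delta):
    d = ecdf(y1, grid) - ecdf(y0, grid - delta)
    lower = max(d.max(), 0.0)           # Makarov lower bound on P(Y1 - Y0 <= delta)
    upper = 1.0 + min(d.min(), 0.0)     # Makarov upper bound
    return lower, upper

for delta in (0.0, 1.0, 2.0):
    lo, up = bounds_on_te_cdf(delta)
    print(f"P(Y1 - Y0 <= {delta}): [{lo:.3f}, {up:.3f}]")
```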
  6. By: In Choi (Department of Economics, Sogang University, Seoul); Rui Lin (Department of Economics, University of York.); Yongcheol Shin (Department of Economics, University of York.)
    Abstract: A great deal of research effort has been devoted to the analysis of the multilevel factor model. To date, however, limited progress has been made on the development of coherent inference for identifying the number of the global factors. We propose a novel approach based on the canonical correlation analysis to identify the number of the global factors. We develop the canonical correlations difference (CCD), which is constructed as the difference between the cross-group averages of the adjacent canonical correlations between factors. We prove that CCD is a consistent selection criterion. Via Monte Carlo simulations, we show that CCD always selects the number of global factors correctly even in small samples. Further, CCD outperforms the existing approaches in the presence of serially correlated and weakly cross-sectionally correlated idiosyncratic errors as well as correlated local factors. Finally, we demonstrate the utility of our framework with an application to the multilevel asset pricing model for the stock return data of 12 industries in the U.S.
    Keywords: Multilevel Factor Models, Principal Components, Canonical Correlation Difference, Multilevel Asset Pricing Models
    JEL: C52 G12
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:2008&r=all
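    Illustrative sketch: a two-group version of the canonical-correlation idea from the abstract above. Factors shared across groups produce canonical correlations near one, local factors do not, and the largest adjacent difference locates the number of global factors. The within-group factor count k is taken as a known upper bound, and the DGP is illustrative; the paper's CCD averages over many group pairs.
```python
import numpy as np

rng = np.random.default_rng(4)
T, N, r_true = 200, 50, 2
G = rng.normal(size=(T, r_true))                     # global factors
loc1, loc2 = rng.normal(size=(T, 1)), rng.normal(size=(T, 1))  # local factors
X1 = np.hstack([G, loc1]) @ rng.normal(size=(3, N)) + rng.normal(size=(T, N))
X2 = np.hstack([G, loc2]) @ rng.normal(size=(3, N)) + rng.normal(size=(T, N))

def pc_factors(X, k):
    U, _, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return U[:, :k]                       # first k principal-component factors

k = 3
Fa, Fb = pc_factors(X1, k), pc_factors(X2, k)
# singular values of Fa'Fb (orthonormal columns) = canonical correlations
cc = np.linalg.svd(Fa.T @ Fb, compute_uv=False)      # sorted descending
ccd = cc[:-1] - cc[1:]                               # differences between adjacent ones
print("canonical correlations:", cc.round(3))
print("estimated number of global factors:", int(np.argmax(ccd)) + 1)
```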
  7. By: In Choi (Department of Economics, Sogang University, Seoul); Rui Lin (Department of Economics, University of York.); Yongcheol Shin (Department of Economics, University of York.)
    Abstract: We provide additional simulation results and theoretical derivations. Section I provides the simulation results for the performance of the alternative selection criteria for estimating the number of local factors. Section II provides the proofs for Lemmas in Section 4.1.1. Section III describes the detailed estimation algorithms of alternative approaches for selecting the number of global factors. Section IV presents the additional empirical results, showing that the popular systematic risk factors, smb and hml, proposed by Fama and French (1993), do not explain the within and the between correlations. Section V investigates the finite sample performance of the existing model selection criteria that ignore the multilevel structure and demonstrates that the existing selection criteria will produce unreliable inference in finite samples.
    Keywords: Multilevel Factor Models, Principal Components, Canonical Correlation Difference, Multilevel Asset Pricing Models
    JEL: C52 G12
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:2009&r=all
  8. By: J. Gilhodes (Institut Claudius Regaud); Florence Dalenc (Institut Claudius Regaud); Jocelyn Gal (UNICANCER/CAL - Centre de Lutte contre le Cancer Antoine Lacassagne [Nice] - UNICANCER - UCA - Université Côte d'Azur); C. Zemmour (Institut Paoli-Calmettes - Fédération nationale des Centres de lutte contre le Cancer (FNCLCC)); Eve Leconte (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Jean Marie Boher (Institut Paoli-Calmettes - Fédération nationale des Centres de lutte contre le Cancer (FNCLCC)); Thomas Filleron (Institut Claudius Regaud)
    Abstract: Over the last decades, molecular signatures have become increasingly important in oncology and are opening up a new area of personalized medicine. Nevertheless, biological relevance and statistical tools necessary for the development of these signatures have been called into question in the literature. Here, we investigate six typical selection methods for high-dimensional settings and survival endpoints, including LASSO and some of its extensions, component-wise boosting, and random survival forests (RSF). A resampling algorithm based on data splitting was used on nine high-dimensional simulated datasets to assess selection stability on training sets and the intersection between selection methods. Prognostic performances were evaluated on respective validation sets. Finally, an application to a real breast cancer dataset is presented. The false discovery rate (FDR) was high for each selection method, and the intersection between lists of predictors was very poor. RSF selects many more variables than the other methods and thus becomes less efficient on validation sets. Due to the complex correlation structure in genomic data, stability in the selection procedure is generally poor for selected predictors, but can be improved with a higher training sample size. In a very high-dimensional setting, we recommend the LASSO-pcvl method since it outperforms other methods by reducing the number of selected genes and minimizing FDR in most scenarios. Nevertheless, this method still gives a high rate of false positives. Further work is thus necessary to propose new methods to overcome this issue where numerous predictors are present. Multidisciplinary discussion between clinicians and statisticians is necessary to ensure both statistical and biological relevance of the predictors included in molecular signatures.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02934793&r=all
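    Illustrative sketch: a toy version of the resampling-based selection-stability exercise, using an L1-penalized Cox model from the third-party lifelines package (assumed installed) as a stand-in for the methods compared in the paper; LASSO-pcvl itself is not implemented here. Note that lifelines' soft penalty does not yield exact zeros, so a threshold is used to declare a variable "selected".
```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumed available; lasso-style penalty below

rng = np.random.default_rng(5)
n, p = 200, 30
X = rng.normal(size=(n, p))
hazard = np.exp(0.8 * X[:, 0] - 0.8 * X[:, 1])     # only 2 truly prognostic "genes"
df = pd.DataFrame(X, columns=[f"g{j}" for j in range(p)])
df["time"] = rng.exponential(1.0 / hazard)
df["event"] = (rng.uniform(size=n) < 0.8).astype(int)   # random censoring

# resampling: refit a penalized Cox model on random half-splits and record how
# often each gene is selected -- a simple selection-stability measure
B, freq = 20, np.zeros(p)
for _ in range(B):
    half = rng.choice(n, size=n // 2, replace=False)
    cph = CoxPHFitter(penalizer=0.3, l1_ratio=1.0)   # lasso-type penalty
    cph.fit(df.iloc[half], duration_col="time", event_col="event")
    freq += (cph.params_.abs().values > 0.1)         # threshold: no exact zeros

stability = pd.Series(freq / B, index=df.columns[:p]).sort_values(ascending=False)
print(stability.head())
```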
  9. By: Gadat, Sébastien; Costa, Manon
    Abstract: In this work, we study a new recursive stochastic algorithm for the joint estimation of the quantile and the superquantile of an unknown distribution. The novelty of this algorithm is to use the Cesàro averaging of the quantile estimation inside the recursive approximation of the superquantile. We provide some sharp non-asymptotic bounds on the quadratic risk of the superquantile estimator for different step size sequences. We also prove new non-asymptotic Lp-controls on the Robbins-Monro algorithm for quantile estimation and its averaged version. Finally, we derive a central limit theorem for our joint procedure using the diffusion approximation point of view hidden behind our stochastic algorithm.
    Date: 2020–09–25
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:124761&r=all
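    Illustrative sketch: the joint recursion described in the abstract above, with a Robbins-Monro quantile step, its Cesàro average, and a superquantile recursion built around the averaged quantile. The step-size exponent and the Gaussian stream are illustrative choices, not the paper's tuning.
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
alpha = 0.95                        # level of the quantile/superquantile (CVaR)
theta, theta_bar, s = 0.0, 0.0, 0.0

for n in range(1, 200_001):
    x = rng.normal()                # stream of i.i.d. observations
    gamma = n ** -0.75              # step size with exponent in (1/2, 1)
    theta -= gamma * ((x <= theta) - alpha)       # Robbins-Monro quantile step
    theta_bar += (theta - theta_bar) / n          # Cesàro (running) average
    # superquantile recursion built around the *averaged* quantile estimate
    target = theta_bar + max(x - theta_bar, 0.0) / (1.0 - alpha)
    s += (target - s) / n

q = norm.ppf(alpha)
print(f"quantile: {theta_bar:.3f} (true {q:.3f})")
print(f"superquantile: {s:.3f} (true {norm.pdf(q) / (1 - alpha):.3f})")
```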
  10. By: Ruben Loaiza-Maya; Gael M. Martin; David T. Frazier; Worapree Maneesoonthorn; Andres Ramirez Hassan
    Abstract: Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we reinvestigate the practice of using proper scoring rules to produce probabilistic forecasts that are 'optimal' according to a given score, and assess when their out-of-sample accuracy is superior to alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that only when a predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward, will this approach to forecasting reap benefits. Subject to this compatibility however, the superiority of the optimal forecast will be greater, the greater is the degree of misspecification. We explore these issues under a range of different scenarios, and using both artificially simulated and empirical data.
    Keywords: coherent predictions, linear predictive pools, predictive distributions, proper scoring rules, stochastic volatility with jumps, testing equal predictive ability
    JEL: C18 C53 C58
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-33&r=all
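    Illustrative sketch: a toy version of score-based 'optimal' forecasting under misspecification. A zero-mean Gaussian predictive class (deliberately wrong for fat-tailed t3 data) is tuned separately under CRPS and the log score, using the known closed-form CRPS for the Gaussian; each rule picks its own predictive, and out-of-sample CRPS favours the CRPS-optimized one. The paper's settings (e.g. stochastic volatility with jumps) are far richer.
```python
import numpy as np
from scipy.stats import norm, t
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
y_train = t.rvs(df=3, size=2000, random_state=rng)   # true DGP: fat-tailed
y_test = t.rvs(df=3, size=2000, random_state=rng)

def crps_gauss(y, mu, sigma):
    # closed-form CRPS of a N(mu, sigma^2) predictive at observation y
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def avg_log_score(y, sigma):        # negatively oriented: lower is better
    return -norm.logpdf(y, 0.0, sigma).mean()

# 'optimize' the misspecified Gaussian predictive under each scoring rule
sig_crps = minimize_scalar(lambda s: crps_gauss(y_train, 0.0, s).mean(),
                           bounds=(0.1, 10.0), method="bounded").x
sig_log = minimize_scalar(lambda s: avg_log_score(y_train, s),
                          bounds=(0.1, 10.0), method="bounded").x

print(f"sigma chosen by CRPS: {sig_crps:.3f}, by log score: {sig_log:.3f}")
print(f"out-of-sample CRPS: {crps_gauss(y_test, 0.0, sig_crps).mean():.4f} "
      f"vs {crps_gauss(y_test, 0.0, sig_log).mean():.4f}")
```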
  11. By: Colin Jareb; Sergey K. Nigai
    Abstract: Modern quantitative theories of international trade rely on the probabilistic representation of technology and the assumption of the Law of Large Numbers (LLN), which ensures that when the number of traded goods goes to infinity, trade flows can be expressed via a deterministic gravity equation that is log-linear in exporter-specific, importer-specific and bilateral trade cost components. This paper shows that when the number of traded goods is finite, the gravity equation has a structural stochastic component not related to the fundamental gravity forces. It provides a novel explanation of the differences in the goodness of fit of gravity models across different sectors observed in the data. It also suggests that when the LLN does not hold, the welfare gains from trade have a considerable stochastic component and should be characterized via distributions rather than point estimates. We use sectoral trade data and Monte Carlo simulations to develop a procedure with minimal data requirements that allows estimation of intervals for the welfare gains from trade.
    Keywords: trade gravity, Law of Large Numbers, gains from trade
    JEL: F10 F60 F14 F17
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8548&r=all
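    Illustrative sketch: a Monte Carlo of an Eaton-Kortum-style economy with a finite number of goods K, showing the stochastic component of trade shares that the LLN would otherwise wash out. All parameter values (countries, Fréchet shape, trade costs, technology levels) are illustrative.
```python
import numpy as np

rng = np.random.default_rng(8)
C, theta = 4, 4.0                                  # countries, Fréchet shape
Tpar = np.array([1.0, 1.5, 2.0, 2.5])              # technology levels
tau = 1.0 + 0.3 * rng.uniform(size=(C, C))         # iceberg trade costs
np.fill_diagonal(tau, 1.0)

def trade_shares(K):
    """Bilateral trade shares when each origin draws K Fréchet productivities."""
    z = (rng.exponential(size=(C, K)) / Tpar[:, None]) ** (-1.0 / theta)
    cost = tau[:, :, None] / z[None, :, :]         # cost[n, i, k] of good k from i in n
    src = cost.argmin(axis=1)                      # cheapest source per good
    return np.stack([(src == i).mean(axis=1) for i in range(C)], axis=1)

for K in (50, 5000):
    draws = np.stack([trade_shares(K) for _ in range(200)])
    print(f"K = {K:>4}: std of share pi[0,1] across simulations = {draws[:, 0, 1].std():.4f}")
```
    With few goods the simulated trade shares vary substantially across draws even though the gravity fundamentals are fixed; as K grows the dispersion shrinks toward the deterministic gravity prediction.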
  12. By: Anoek Castelein (Erasmus University Rotterdam); Dennis Fok (Erasmus University Rotterdam); Richard Paap (Erasmus University Rotterdam)
    Abstract: The heteroscedastic logit model is useful to describe choices of individuals when the randomness in the choice-making varies over time. For example, during surveys individuals may become fatigued and start responding more randomly to questions as the survey proceeds. Or when completing a ranking amongst multiple alternatives, individuals may be unable to accurately assign middle and bottom ranks. The standard heteroscedastic logit model accommodates such behavior by allowing for changes in the signal-to-noise ratio via a time-varying scale parameter. In the current literature, this time-variation is assumed equal across individuals. Hence, each individual is assumed to become fatigued at the same time, or assumed to be able to accurately assign exactly the same ranks. In most cases, this assumption is too stringent. In this paper, we generalize the heteroscedastic logit model by allowing for differences across individuals. We develop a multinomial and a rank-ordered logit model in which the time-variation in an individual-specific scale parameter follows a Markov process. In case individual differences exist, our models alleviate biases and make more efficient use of data. We validate the models using a Monte Carlo study and illustrate them using data on discrete choice experiments and political preferences. These examples document that inter- and intra-individual heteroscedasticity both exist.
    Keywords: Scale, Heterogeneity, Markov, Logit scaling, Logit mixture, Dynamics, Conjoint, Fatigue, Markov switching
    JEL: C23 C25 C10
    Date: 2020–09–29
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20200069&r=all
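    Illustrative sketch: evaluating the likelihood of one individual's choice sequence when the logit scale follows a two-state Markov process ('attentive' vs 'fatigued'), via the standard forward recursion. The paper estimates such parameters; here they are fixed and the DGP is illustrative.
```python
import numpy as np

rng = np.random.default_rng(9)
T, J = 10, 3                             # choice occasions, alternatives
beta = np.array([1.0, -0.5])
X = rng.normal(size=(T, J, 2))           # observed attributes
V = X @ beta                             # deterministic utilities, (T, J)
choices = np.array([rng.choice(J, p=np.exp(v) / np.exp(v).sum()) for v in V])

lam = np.array([1.0, 0.3])               # scale in 'attentive' vs 'fatigued' state
P = np.array([[0.9, 0.1],                # Markov transition matrix of the
              [0.2, 0.8]])               # latent scale state
pi0 = np.array([0.9, 0.1])               # initial state distribution

def choice_prob(v, y, scale):
    ev = np.exp(scale * v)               # heteroscedastic (scaled) logit
    return ev[y] / ev.sum()

# forward algorithm: integrate the latent scale path out of the likelihood
a = pi0 * np.array([choice_prob(V[0], choices[0], l) for l in lam])
for t in range(1, T):
    obs = np.array([choice_prob(V[t], choices[t], l) for l in lam])
    a = (a @ P) * obs
print(f"likelihood of this individual's choice sequence: {a.sum():.3e}")
```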
  13. By: Katrine Marie Jakobsen (CEBI, Department of Economics, University of Copenhagen); Jakob Egholt Søgaard (CEBI, Department of Economics, University of Copenhagen)
    Abstract: We revisit the identification of behavioral responses to tax reforms and develop a new approach that allows for graphical validation of identifying assumptions and representation of treatment effects. Considering typical tax reforms, such as a reduction in the top income tax, we show that the state-of-the-art estimation strategy relies on an assumption that trend differences in income across the income distribution remain constant in the absence of reforms. Similar to the pre-trend validation of differences-in-differences studies, this identifying assumption of constant trend differentials can be validated by comparing the evolution of income in untreated parts of the income distribution over time. We illustrate the importance of our new validation approach by studying a number of tax reforms in Denmark, and we show how violations of the identifying assumption may drive the estimates obtained from the state-of-the-art strategy.
    Keywords: Tax Reforms, Behavioral Responses, Identification, Validation
    JEL: C14 H30 J22
    Date: 2020–09–21
    URL: http://d.repec.org/n?u=RePEc:kud:kucebi:2023&r=all
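    Illustrative sketch: the constant-trend-differentials check from the abstract above, on hypothetical numbers. If the year-on-year change in the income differential between untreated groups drifts away from zero, the identifying assumption is suspect.
```python
import numpy as np

rng = np.random.default_rng(10)
years = np.arange(2000, 2011)
t = years - years[0]

# mean log income in two untreated parts of the income distribution
low = 10.0 + 0.020 * t + rng.normal(0, 0.002, len(t))
mid = 11.0 + 0.025 * t + rng.normal(0, 0.002, len(t))

# identifying assumption: the trend differential between income groups is
# constant absent reforms; inspect its year-on-year changes in untreated years
differential = mid - low
print("yearly changes in the trend differential:",
      np.round(np.diff(differential), 4))
# persistent non-zero changes (about 0.005 here) would flag a violation
```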
  14. By: Peter H. Egger; Mario Larch; Yoto V. Yotov
    Abstract: We challenge the common practice of estimating gravity equations with time-interval data in order to capture dynamic-adjustment effects to trade-policy changes. Instead, we point to a series of advantages of using consecutive-year data recognizing dynamic-adjustment effects. Our analysis reveals that, relative to time-interval data, the proposed approach avoids downward-biased effect estimates due to the distribution of trade-policy events during an event window as well as due to anticipation (pre-interval) and delayed (post-interval) effects, and it improves the efficiency of effect estimates due to the use of more data.
    Keywords: structural gravity, trade policy, free trade agreements, interval data
    JEL: F10 F14
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8553&r=all
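    Illustrative sketch: PPML gravity estimates on consecutive-year versus 5-year-interval data from a simulated panel with a phased-in FTA effect. Pair and year fixed effects are omitted for brevity, so this mainly shows the efficiency loss from discarding years; the bias channels (anticipation and delayed effects around interval boundaries) need the paper's fuller setup.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_pairs, n_years = 300, 21
entry = rng.integers(3, 18, size=n_pairs)          # staggered FTA entry years

rows = []
for i in range(n_pairs):
    since = np.arange(n_years) - entry[i]
    phase_in = 0.5 * np.clip(since / 4.0, 0.0, 1.0)   # effect builds over 4 years
    trade = rng.poisson(np.exp(1.0 + phase_in))
    rows.append(np.column_stack([trade, (since >= 0), np.arange(n_years)]))
data = np.vstack(rows).astype(float)

def ppml(d):
    """PPML of trade on an FTA dummy (fixed effects omitted for brevity)."""
    res = sm.GLM(d[:, 0], sm.add_constant(d[:, 1]),
                 family=sm.families.Poisson()).fit()
    return res.params[1], res.bse[1]

interval = data[np.isin(data[:, 2], np.arange(0, n_years, 5))]  # 5-year snapshots
for label, d in (("consecutive years", data), ("5-year intervals ", interval)):
    b, se = ppml(d)
    print(f"{label}: FTA coefficient = {b:.3f} (s.e. {se:.3f})")
```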
  15. By: Roccazzella, Francesco; Gambetti, Paolo; Vrins, Frédéric
    Keywords: forecast combination; robust methods; optimal combination; machine learning
    Date: 2020–01–01
    URL: http://d.repec.org/n?u=RePEc:ajf:louvlf:2020006&r=all
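    Illustrative sketch: since no abstract is provided, the snippet below only illustrates the generic idea named in the title — combining forecasts by constrained optimization with shrinkage toward the equal-weight benchmark. The data, loss, and shrinkage intensity are all assumptions, not the authors' method.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
T_obs, k = 300, 4
y = rng.normal(size=T_obs)                        # target series
# hypothetical competing forecasts: signal plus errors of varying accuracy
F = y[:, None] + rng.normal(size=(T_obs, k)) * np.array([0.5, 0.7, 1.0, 1.5])

w_eq = np.full(k, 1.0 / k)
lam = 0.5                                         # shrinkage intensity (tuning choice)

def objective(w):
    # in-sample MSE plus shrinkage toward the equal-weight benchmark
    return np.mean((y - F @ w) ** 2) + lam * np.sum((w - w_eq) ** 2)

res = minimize(objective, w_eq, method="SLSQP",
               bounds=[(0.0, 1.0)] * k,           # no negative weights
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("combination weights:", res.x.round(3))
```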
  16. By: Heather D. Gibson (Bank of Greece); Stephen G. Hall (University of Leicester, Bank of Greece and University of Pretoria); George S. Tavlas (Bank of Greece)
    Abstract: We provide a new way of deriving a number of dynamic unobserved factors from a set of variables. We show how standard principal components may be expressed in state space form and estimated using the Kalman filter. To illustrate our procedure we perform two exercises. First, we use it to estimate a measure of the current-account imbalances among northern and southern euro-area countries that developed during the period leading up to the outbreak of the euro-area crisis, before looking at adjustment in the post-crisis period. Second, we show how these dynamic factors can improve forecasting of the euro-dollar exchange rate.
    Keywords: Principal Components; Factor Models; Underlying activity; Forecasts
    JEL: E3 G01 G14 G21
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:bog:wpaper:282&r=all
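    Illustrative sketch: a one-factor version of the idea in the abstract above — take loadings from static principal components, put the factor in state-space form, and run a Kalman filter. The state and noise parameters (phi, q, r) are fixed here rather than estimated, and the DGP is illustrative.
```python
import numpy as np

rng = np.random.default_rng(13)
T, N = 300, 10
f = np.zeros(T)
for t in range(1, T):                              # AR(1) latent factor
    f[t] = 0.8 * f[t - 1] + rng.normal()
lam_true = rng.normal(size=N)
X = np.outer(f, lam_true) + rng.normal(size=(T, N))
Xc = X - X.mean(0)

# step 1: loadings from static principal components (SVD)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
lam = Vt[0]                                        # first PC loading vector

# step 2: put the factor in state-space form and run a Kalman filter
phi, q, r = 0.8, 1.0, 1.0                          # assumed state/noise parameters
fhat, P = 0.0, 1.0
path = np.empty(T)
for t in range(T):
    fpred, Ppred = phi * fhat, phi ** 2 * P + q              # predict
    S = Ppred * np.outer(lam, lam) + r * np.eye(N)           # innovation covariance
    K = Ppred * lam @ np.linalg.inv(S)                       # Kalman gain (1 x N)
    fhat = fpred + K @ (Xc[t] - lam * fpred)                 # update
    P = (1.0 - K @ lam) * Ppred
    path[t] = fhat
# the sign of a principal component is arbitrary, so compare up to sign
print("corr(filtered factor, true factor):",
      abs(np.corrcoef(path, f)[0, 1]).round(3))
```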

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.