nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒04‒26
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A robust specification test in linear panel data models By Beste Hamiye Beyaztas; Soutir Bandyopadhyay; Abhijit Mandal
  2. Estimating and Improving Dynamic Treatment Regimes With a Time-Varying Instrumental Variable By Shuxiao Chen; Bo Zhang
  3. A method for evaluating the rank condition for CCE estimators By Ignace De Vos; Gerdie Everaert; Vasilis Sarafidis
  4. Identification of Peer Effects with Misspecified Peer Groups: Missing Data and Group Uncertainty By Christiern Rose
  5. Automatic Double Machine Learning for Continuous Treatment Effects By Sylvia Klosin
  6. A New Framework for Estimation of Unconditional Quantile Treatment Effects: The Residualized Quantile Regression (RQR) Model By Borgen, Nicolai T.; Haupt, Andreas; Wiborg, Øyvind N.
  7. CATE meets ML - The Conditional Average Treatment Effect and Machine Learning By Daniel Jacob
  8. Under the same (Chole)sky: DNK models, timing restrictions and recursive identification of monetary policy shocks By Giovanni Angelini; Marco M. Sorge
  9. Arbitrage Pricing Theory, the Stochastic Discount Factor and Estimation of Risk Premia from Portfolios By M. Hashem Pesaran; Ron P. Smith
  10. Structural Models: Inception and Frontier By Sebastian Galiani; Juan Pantano
  11. Methodologies for assessing government efficiency By O’Loughlin, Caitlin; Simar, Léopold; Wilson, Paul
  12. GARCH-UGH: A bias-reduced approach for dynamic extreme Value-at-Risk estimation in financial time series By Hibiki Kaibuchi; Yoshinori Kawasaki; Gilles Stupfler
  13. A Necessary Condition for Semiparametric Efficiency of Experimental Designs By Hisatoshi Tanaka
  14. Backtesting Systemic Risk Forecasts using Multi-Objective Elicitability By Tobias Fissler; Yannick Hoga
  15. Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting By Eric Benhamou; David Saltiel; Serge Tabachnik; Sui Kai Wong; François Chareyron

  1. By: Beste Hamiye Beyaztas; Soutir Bandyopadhyay; Abhijit Mandal
    Abstract: The presence of outlying observations may adversely affect statistical testing procedures, resulting in unstable test statistics and unreliable inferences depending on the distortion in parameter estimates. Despite the adverse effects of outliers in panel data models, only a few robust testing procedures are available for model specification. In this paper, a new weighted-likelihood-based robust specification test is proposed to determine the appropriate approach in panel data including individual-specific components. The proposed test is shown to have the same asymptotic distribution as the commonly used Hausman specification test under the null hypothesis of a random effects specification. The finite sample properties of the robust testing procedure are illustrated by means of Monte Carlo simulations and economic-growth data from the member countries of the Organisation for Economic Co-operation and Development. Our results reveal that the robust specification test exhibits improved size and power in the presence of contamination.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.07723&r=
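    For reference, the classical (non-robust) Hausman test, whose asymptotic null distribution the paper's weighted-likelihood statistic shares, can be computed as follows. A minimal sketch on simulated data, assuming the linearmodels package; this is not the authors' code.
```python
# Minimal sketch of the classical Hausman specification test (fixed vs.
# random effects) on a toy simulated panel; assumes linearmodels.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(0)
N, T = 200, 10
entity = np.repeat(np.arange(N), T)
time = np.tile(np.arange(T), N)
alpha = rng.normal(size=N)                         # individual effects
x = rng.normal(size=N * T) + 0.5 * alpha[entity]   # regressor correlated with effects
y = 1.0 + 2.0 * x + alpha[entity] + rng.normal(size=N * T)
df = pd.DataFrame({"y": y, "x": x},
                  index=pd.MultiIndex.from_arrays([entity, time]))

fe = PanelOLS.from_formula("y ~ x + EntityEffects", df).fit()
re = RandomEffects.from_formula("y ~ 1 + x", df).fit()

# H = (b_FE - b_RE)^2 / (V_FE - V_RE) ~ chi2(1) under the RE null.
# (In finite samples V_FE - V_RE can fail to be positive; here it is.)
H = (fe.params["x"] - re.params["x"]) ** 2 / (fe.cov.loc["x", "x"] - re.cov.loc["x", "x"])
print("Hausman H =", round(H, 2), " p =", round(1 - stats.chi2.cdf(H, df=1), 4))
```
    With x correlated with the individual effects, the RE estimator is inconsistent and the test rejects, as intended.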
  2. By: Shuxiao Chen; Bo Zhang
    Abstract: Estimating dynamic treatment regimes (DTRs) from retrospective observational data is challenging as some degree of unmeasured confounding is often expected. In this work, we develop a framework of estimating properly defined "optimal" DTRs with a time-varying instrumental variable (IV) when unmeasured covariates confound the treatment and outcome, rendering the potential outcome distributions only partially identified. We derive a novel Bellman equation under partial identification, use it to define a generic class of estimands (termed IV-optimal DTRs), and study the associated estimation problem. We then extend the IV-optimality framework to tackle the policy improvement problem, delivering IV-improved DTRs that are guaranteed to perform no worse and potentially better than a pre-specified baseline DTR. Importantly, our IV-improvement framework opens up the possibility of strictly improving upon DTRs that are optimal under the no unmeasured confounding assumption (NUCA). We demonstrate via extensive simulations the superior performance of IV-optimal and IV-improved DTRs over the DTRs that are optimal only under the NUCA. In a real data example, we embed retrospective observational registry data into a natural, two-stage experiment with noncompliance using a time-varying IV and estimate useful IV-optimal DTRs that assign mothers to high-level or low-level neonatal intensive care units based on their prognostic variables.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.07822&r=
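    The partial-identification flavour of the problem can be seen already in a single stage: with an instrument and a bounded outcome, worst-case (Manski-type) bounds on a potential-outcome mean can be intersected across instrument levels. The toy sketch below is my illustration of that logic, not the authors' Bellman-equation framework.
```python
# Toy single-stage illustration of partial identification with an IV:
# Manski-style worst-case bounds on E[Y(1)] for a bounded outcome,
# intersected across instrument levels. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, n)                  # binary instrument
u = rng.normal(size=n)                     # unmeasured confounder
d = (0.5 * z + u + rng.normal(size=n) > 0).astype(int)  # treatment take-up
y = np.clip(d + u + rng.normal(size=n), -3, 3)          # bounded outcome
y_lo, y_hi = -3.0, 3.0                     # known outcome bounds

bounds = []
for zval in (0, 1):
    m = z == zval
    p0 = 1 - d[m].mean()                   # P(D=0 | Z=zval)
    eyd = (y[m] * d[m]).mean()             # E[Y*D | Z=zval]
    # impute the unobserved Y(1) of untreated units at its worst cases
    bounds.append((eyd + y_lo * p0, eyd + y_hi * p0))

lower = max(b[0] for b in bounds)          # independence of the IV lets us
upper = min(b[1] for b in bounds)          # intersect the bounds across z
print(f"E[Y(1)] is bounded within [{lower:.2f}, {upper:.2f}]")
```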
  3. By: Ignace De Vos; Gerdie Everaert; Vasilis Sarafidis
    Abstract: This paper proposes a binary classifier to evaluate the so-called rank condition (RC), which is required for consistency of the Common Correlated Effects (CCE) estimator of Pesaran (2006). The RC postulates that the number of unobserved factors, m, is not larger than the rank of the unobserved matrix of average factor loadings, rho. When this condition fails, the CCE estimator is generally inconsistent. Despite the obvious importance of the RC, to date this condition could not be verified. The difficulty is that, because the factor loadings are unobserved, rho cannot be evaluated or estimated directly. The key insight in the present paper is that rho can be established from the rank of the matrix of cross-sectional averages of observables. As a result, rho can be estimated consistently using procedures already available for determining the true rank of an unknown matrix. Similarly, m can be estimated consistently from the data using existing methods. A binary classifier that evaluates the RC is constructed by comparing the estimates of m and rho. The classifier correctly determines whether the RC is satisfied or not, with probability 1 as (N,T) grow to infinity.
    Keywords: Panel data, common factors, common correlated effects approach, rank condition
    JEL: C13 C33 C52
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:21/1013&r=all
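    The classifier's logic lends itself to a short sketch: estimate the number of factors m from the panel, estimate rho from the rank of the matrix of cross-sectional averages, and compare. The threshold and eigenvalue-ratio rules below are illustrative stand-ins for the formal rank and factor-number estimators the paper relies on.
```python
# Sketch of the proposed binary classifier: flag the rank condition as
# satisfied when m_hat <= rho_hat. Simulated panel with m = 2 factors.
import numpy as np

rng = np.random.default_rng(2)
N, T, m = 500, 100, 2
F = rng.normal(size=(T, m))                     # common factors
gamma = rng.normal(size=(N, m))                 # loadings in x
lam = rng.normal(size=(N, m))                   # loadings in y
x = F @ gamma.T + rng.normal(size=(T, N))
y = 0.5 * x + F @ lam.T + rng.normal(size=(T, N))

# rho_hat: rank of the T x 2 matrix of cross-sectional averages of (y, x)
Zbar = np.column_stack([y.mean(axis=1), x.mean(axis=1)])
s = np.linalg.svd(Zbar - Zbar.mean(axis=0), compute_uv=False)
rho_hat = int((s / s[0] > 0.1).sum())           # illustrative cutoff

# m_hat: eigenvalue-ratio estimator on the pooled covariance matrix
eig = np.sort(np.linalg.eigvalsh(np.cov(np.hstack([y, x]), rowvar=False)))[::-1]
m_hat = int(np.argmax(eig[:10] / eig[1:11]) + 1)

print(f"m_hat = {m_hat}, rho_hat = {rho_hat}, RC satisfied: {m_hat <= rho_hat}")
```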
  4. By: Christiern Rose
    Abstract: We consider identification of peer effects under peer group misspecification. Our model of group misspecification allows for missing data and peer group uncertainty. Missing data can take the form of some individuals being completely absent from the data; the researcher need not have any information on these individuals and may not even know that they are missing. We show that peer effects are nevertheless identifiable if these individuals are missing completely at random, and propose a GMM estimator which jointly estimates the sampling probability and peer effects. In practice this means that the researcher need only have access to an individual/household level sample with group identifiers. The researcher may also be uncertain as to which peer group is relevant for the outcome under study. We show that peer effects are nevertheless identifiable provided that the candidate peer groups are nested within one another (e.g. classroom, grade, school), and propose a non-linear least squares estimator. We conduct a Monte Carlo experiment to demonstrate our identification results and the performance of the proposed estimators in a setting tailored to real data (the Dartmouth roommate data).
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.10365&r=
  5. By: Sylvia Klosin
    Abstract: In this paper, we introduce and prove asymptotic normality for a new nonparametric estimator of continuous treatment effects. Specifically, we estimate the average dose-response function - the expected value of an outcome of interest at a particular level of the treatment. We utilize tools from both the double debiased machine learning (DML) and the automatic double machine learning (ADML) literatures to construct our estimator. Our estimator utilizes a novel debiasing method that leads to favorable theoretical stability and balancing properties. In simulations, our estimator performs well compared to current methods.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.10334&r=
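    The target object, the average dose-response function theta(t) = E[mu(t, X)], is easy to state in code. The sketch below is the naive plug-in (regression-adjustment) baseline that the paper's debiased estimator is designed to improve upon, using a generic sklearn learner.
```python
# Plug-in baseline for the average dose-response function: fit a
# flexible regression of Y on (T, X), then average its predictions
# over the sample at each fixed dose t. Not the paper's estimator.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.normal(size=(n, p))
T = X[:, 0] + rng.normal(size=n)                 # continuous, confounded dose
Y = np.sin(T) + X[:, 0] + rng.normal(size=n)     # true curve: sin(t)

model = GradientBoostingRegressor().fit(np.column_stack([T, X]), Y)

def dose_response(t):
    # average the prediction over the empirical covariate distribution
    return model.predict(np.column_stack([np.full(n, t), X])).mean()

for t in (-1.0, 0.0, 1.0):
    print(f"theta_hat({t:+.1f}) = {dose_response(t):+.3f}  (truth {np.sin(t):+.3f})")
```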
  6. By: Borgen, Nicolai T.; Haupt, Andreas; Wiborg, Øyvind N.
    Abstract: The identification of unconditional quantile treatment effects (QTE) has become increasingly popular within social sciences. However, current methods to identify unconditional QTEs of continuous treatment variables are incomplete. Contrary to popular belief, the unconditional quantile regression model introduced by Firpo, Fortin, and Lemieux (2009) does not identify QTE, while the propensity score framework of Firpo (2007) allows for only a binary treatment variable, and the generalized quantile regression model of Powell (2020) is infeasible with high-dimensional fixed effects. This paper introduces a two-step approach to estimate unconditional QTEs in which the treatment variable is first regressed on the control variables, followed by a quantile regression of the outcome on the residualized treatment variable. Unlike much of the literature on quantile regression, this two-step residualized quantile regression framework is easy to understand, computationally fast, and can include high-dimensional fixed effects.
    Date: 2021–04–14
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:42gcb&r=
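    The two-step RQR procedure maps directly to code: residualize the treatment on the controls, then run quantile regressions of the outcome on the residualized treatment. A minimal statsmodels sketch on simulated data (not the authors' implementation):
```python
# Residualized quantile regression (RQR) in two steps, as described in
# the abstract, on a toy data set with heterogeneous noise.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
c = rng.normal(size=n)                          # control variable
t = 0.8 * c + rng.normal(size=n)                # treatment depends on control
y = t + c + rng.normal(size=n) * (1 + 0.5 * (t > 0))  # heteroskedastic outcome
df = pd.DataFrame({"y": y, "t": t, "c": c})

# Step 1: residualize the treatment with respect to the controls
df["t_res"] = smf.ols("t ~ c", df).fit().resid

# Step 2: quantile regression of the outcome on the residualized treatment
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ t_res", df).fit(q=q)
    print(f"QTE estimate at q={q}: {fit.params['t_res']:.3f}")
```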
  7. By: Daniel Jacob
    Abstract: For treatment effects - one of the core issues in modern econometric analysis - prediction and estimation are two sides of the same coin. As it turns out, machine learning methods are the tool of choice for generalized prediction models. Combined with econometric theory, they allow us to estimate not only the average but a personalized treatment effect - the conditional average treatment effect (CATE). In this tutorial, we give an overview of novel methods, explain them in detail, and apply them via Quantlets in real data applications. As two empirical examples, we study the effect that microcredit availability has on the amount of money borrowed and whether 401(k) pension plan eligibility has an impact on net financial assets. The presented toolbox of methods contains meta-learners, like the Doubly-Robust, R-, T- and X-learner, and methods that are specially designed to estimate the CATE, like causal BART and the generalized random forest. In both the microcredit and 401(k) examples, we find a positive treatment effect for all observations but conflicting evidence of treatment effect heterogeneity. An additional simulation study, where the true treatment effect is known, allows us to compare the different methods and to observe patterns and similarities.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.09935&r=
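    As a taste of the surveyed toolbox, the T-learner is the simplest meta-learner: fit separate outcome models on treated and control units and difference their predictions. A short sklearn sketch (the tutorial itself works through Quantlets):
```python
# T-learner for the CATE: separate outcome models for treated and
# control units; their prediction difference estimates tau(x).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n, p = 4000, 5
X = rng.normal(size=(n, p))
D = rng.integers(0, 2, n)                       # randomized binary treatment
tau = 1.0 + X[:, 0]                             # heterogeneous true effect
Y = X[:, 1] + D * tau + rng.normal(size=n)

mu1 = RandomForestRegressor().fit(X[D == 1], Y[D == 1])
mu0 = RandomForestRegressor().fit(X[D == 0], Y[D == 0])
cate_hat = mu1.predict(X) - mu0.predict(X)

print("corr(true tau, estimated CATE):",
      np.corrcoef(tau, cate_hat)[0, 1].round(3))
```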
  8. By: Giovanni Angelini; Marco M. Sorge
    Abstract: Recent structural VAR studies of the monetary transmission mechanism have voiced concerns about the use of recursive identification schemes based on short-run exclusion restrictions. We trace out the effects on impulse propagation of informational constraints embodying classical Cholesky-timing restrictions in otherwise standard Dynamic New Keynesian (DNK) models. By reinforcing internal propagation mechanisms and enlarging a model's equilibrium state space, timing restrictions may produce a non-trivial moving average component of the equilibrium representation, making finite order VARs a poor approximation of true adjustment paths to monetary impulses, albeit correctly identified. They can even serve as an independent source of model-based nonfundamentalness, thereby hampering shock identification via VAR methods. This notwithstanding, restricted DNK models are shown to feature (i) invertible equilibrium representations for the observables and (ii) fast-converging VAR coefficient matrices under empirically tenable parameterizations. This alleviates concerns about identification and lag truncation bias: low-order Cholesky-VARs do well at uncovering the transmission of monetary impulses in a truly Cholesky world.
    JEL: C3 E3
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:bol:bodewp:wp1160&r=
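    The empirical strategy under scrutiny, a low-order VAR with recursive (Cholesky) identification and the policy rate ordered last, can be sketched in a few lines; simulated data and statsmodels stand in here for an actual DNK-based exercise.
```python
# Cholesky-identified VAR: estimate a low-order VAR and compute
# orthogonalized impulse responses to the (ordered-last) policy shock.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T = 500
data = np.zeros((T, 3))                         # [output gap, inflation, rate]
A = np.array([[0.5, 0.1, -0.2],
              [0.2, 0.5, -0.1],
              [0.1, 0.1,  0.4]])                # stable toy VAR(1) dynamics
for t in range(1, T):
    data[t] = A @ data[t - 1] + rng.normal(scale=0.5, size=3)
df = pd.DataFrame(data, columns=["gap", "infl", "rate"])

res = VAR(df).fit(maxlags=4, ic="aic")
irf = res.irf(20)                               # orthogonalized via Cholesky
# responses of all three variables to the policy shock, first 5 horizons
print(irf.orth_irfs[:5, :, 2].round(3))
```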
  9. By: M. Hashem Pesaran; Ron P. Smith
    Abstract: The arbitrage pricing theory (APT) attributes differences in expected returns to exposure to systematic risk factors, which are typically assumed to be strong. In this paper we consider two aspects of the APT. Firstly, we relate the factors in the statistical factor model to a theoretically consistent set of factors defined by their conditional covariation with the stochastic discount factor (m_t) used to price securities within inter-temporal asset pricing models. We show that risk premia arise from non-zero correlation of observed factors with m_t, and that pricing errors arise from the correlation of the errors in the statistical factor model with m_t. Secondly, we compare estimates of factor risk premia using portfolios with the ones obtained using individual securities, and show that the identification conditions in terms of the strength of the factor are the same and that, in general, no clear cut ranking of the small sample bias of the two estimators is possible.
    Keywords: arbitrage pricing theory, stochastic discount factor, portfolios, factor strength, identification of risk premia, two-pass regressions, Fama-MacBeth
    JEL: C38 G12
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_9001&r=
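    The two-pass (Fama-MacBeth) estimator of factor risk premia analyzed in the paper can be sketched as follows: first-pass time-series regressions deliver betas; second-pass cross-sectional regressions each period deliver premia estimates that are then averaged. Simulated returns, not the authors' data:
```python
# Two-pass / Fama-MacBeth estimation of factor risk premia on a toy
# panel of excess returns with two strong factors.
import numpy as np

rng = np.random.default_rng(7)
T, n, k = 600, 100, 2
F = rng.normal(size=(T, k)) + np.array([0.05, 0.03])   # factors, nonzero premia
beta = rng.normal(loc=1.0, scale=0.3, size=(n, k))     # true exposures
R = F @ beta.T + rng.normal(scale=0.5, size=(T, n))    # excess returns

# Pass 1: time-series regression of each asset on the factors
Xf = np.column_stack([np.ones(T), F])
beta_hat = np.linalg.lstsq(Xf, R, rcond=None)[0][1:].T  # n x k betas

# Pass 2: cross-sectional regression each period, then average over time
Xb = np.column_stack([np.ones(n), beta_hat])
lam = np.array([np.linalg.lstsq(Xb, R[t], rcond=None)[0][1:] for t in range(T)])
print("risk premia estimates:", lam.mean(axis=0).round(3), "(truth ~ [0.05 0.03])")
```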
  10. By: Sebastian Galiani; Juan Pantano
    Abstract: We discuss the past, present and future of the structural approach in empirical microeconomics, starting with its inception in the 1970s and 1980s. Our focus is on the use of the structural approach in labor economics, broadly defined to include population economics, human capital and related fields. In the hope of reaching a wider audience that might not be as familiar with the pillars of the structural approach, we first provide an overview of well-known features, setting the stage for a more up-to-date discussion of current developments. We discuss how to identify the need for a structural model, and key steps involved in how to formulate one. We also discuss issues of identification and estimation and highlight advantages and disadvantages of this approach, including the controversial issue of external validity. We then describe the current frontier of this approach, which increasingly reflects integration efforts with “design-based” strategies. This integration provides opportunities both to validate structural models and to enhance the credibility of their identification. We highlight why, whenever possible, it is best to pursue both of these goals, reserving some of the credible exogenous variation for identification and some for validation. While quasi-experimental variation can be useful in pursuit of both of these goals, we discuss why RCTs provide a first-best opportunity in terms of out-of-sample validation. We conclude with thoughts about the future of the structural approach.
    JEL: C01 C52
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28698&r=
  11. By: O’Loughlin, Caitlin; Simar, Léopold (Université catholique de Louvain, LIDAM/ISBA, Belgium); Wilson, Paul
    Abstract: Nonparametric methods are widely used for assessing the performance of firms and other organizations in the private and public sectors. Typically, FDH or DEA estimators are used, which estimate the attainable set and its efficient boundary by enveloping the cloud of observed units in the appropriate input-output space. The statistical properties of these estimators have been established, and inference is available using appropriate nonparametric techniques. In particular, hypotheses on model structure and the production process can be tested using recent theoretical results, including central limit theorems on the limiting distribution of means of efficiency scores. This chapter shows how these results can be used to test the equality of mean efficiency, the convexity of production sets, and separability with respect to environmental factors, and in addition to analyze the dynamics of the production process over time. An empirical illustration uses the various results and tests to examine the performance of municipal governments in the U.S. in providing local public goods.
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021002&r=all
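    Of the two envelopment estimators, FDH is the simpler to sketch: the input-efficiency of a unit is the largest proportional input contraction such that some observed unit still weakly dominates it. A single-input, single-output toy example (illustrative only, not the chapter's tests):
```python
# Free Disposal Hull (FDH) input-efficiency scores for a toy
# single-input, single-output sample.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(1, 10, size=n)                  # input
y = np.sqrt(x) * rng.uniform(0.5, 1.0, n)       # output with inefficiency

def fdh_input_efficiency(x, y):
    theta = np.ones(len(y))
    for i in range(len(y)):
        dominators = y >= y[i]                  # units producing at least y_i
        theta[i] = (x[dominators] / x[i]).min() # max proportional contraction
    return theta

theta = fdh_input_efficiency(x, y)              # theta <= 1, 1 = on the hull
print("mean FDH efficiency score:", theta.mean().round(3))
```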
  12. By: Hibiki Kaibuchi; Yoshinori Kawasaki; Gilles Stupfler
    Abstract: The Value-at-Risk (VaR) is a widely used instrument in financial risk management. Estimating the VaR of loss return distributions at extreme levels is an important question in financial applications, both from operational and regulatory perspectives; in particular, the dynamic estimation of extreme VaR given the recent past has received substantial attention. We propose here a two-step bias-reduced estimation methodology called GARCH-UGH (Unbiased Gomes-de Haan), whereby financial returns are first filtered using an AR-GARCH model, and then a bias-reduced estimator of extreme quantiles is applied to the standardized residuals to estimate one-step-ahead dynamic extreme VaR. Our results indicate that the GARCH-UGH estimates are more accurate than those obtained by combining conventional AR-GARCH filtering and extreme value estimates, from the perspective of both in-sample and out-of-sample backtesting of historical daily returns on several financial time series.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.09879&r=
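    The two-step structure can be sketched with off-the-shelf pieces: an AR-GARCH filter (here via the arch package) followed by an extreme-value quantile estimator on the standardized residuals. A plain Hill/Weissman estimator stands in below for the paper's bias-reduced UGH step.
```python
# Two-step sketch in the spirit of GARCH-UGH: (1) AR(1)-GARCH(1,1)
# filtering, (2) extreme-value quantile of the standardized residuals.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(9)
r = rng.standard_t(df=4, size=2000)              # toy heavy-tailed returns (%)

# Step 1: filter the returns and extract standardized residuals
res = arch_model(r, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
z = res.std_resid.dropna().to_numpy()

# Step 2: Hill tail index and Weissman extreme quantile of the loss -z
loss = np.sort(-z)[::-1]                         # descending order statistics
k = 100                                          # tail sample size (illustrative)
gamma = np.mean(np.log(loss[:k]) - np.log(loss[k]))
alpha = 0.999
q_z = loss[k] * (k / (len(loss) * (1 - alpha))) ** gamma

# One-step-ahead VaR: rescale by the forecast conditional mean and volatility
f = res.forecast(horizon=1, reindex=False)
var_next = -f.mean.to_numpy()[0, 0] + np.sqrt(f.variance.to_numpy()[0, 0]) * q_z
print(f"one-step-ahead {alpha:.1%} VaR estimate: {var_next:.2f}%")
```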
  13. By: Hisatoshi Tanaka (School of Political Science and Economics, Waseda University)
    Abstract: The efficiency of estimation depends not only on the estimation method, but also on the distribution of the data. In statistical experiments, statisticians can at least partially design the data generating process to obtain high estimation performance. In this paper, a necessary condition for a semiparametrically efficient experimental design is proposed. A formula to determine the efficient distribution of input variables is derived. An application to the optimal bid design problem of contingent valuation survey experiments is presented.
    Keywords: Optimal Design, Semiparametric Efficiency, Binary Response Model, Contingent Valuation Survey Experiments
    URL: http://d.repec.org/n?u=RePEc:wap:wpaper:2024&r=
  14. By: Tobias Fissler; Yannick Hoga
    Abstract: Backtesting risk measure forecasts requires identifiability (for model calibration and validation) and elicitability (for model comparison). We show that the three widely-used systemic risk measures conditional value-at-risk (CoVaR), conditional expected shortfall (CoES) and marginal expected shortfall (MES), which measure the risk of a position $Y$ given that a reference position $X$ is in distress, fail to be identifiable and elicitable on their own. As a remedy, we establish the joint identifiability of CoVaR, MES and (CoVaR, CoES) together with the value-at-risk (VaR) of the reference position $X$. While this resembles the situation of the classical risk measures expected shortfall (ES) and VaR concerning identifiability, a joint elicitability result fails. Therefore, we introduce a completely novel notion of multivariate scoring functions equipped with some order, which are therefore called multi-objective scores. We introduce and investigate corresponding notions of multi-objective elicitability, which may prove beneficial in various applications beyond finance. In particular, we prove that conditional elicitability of two functionals implies joint multi-objective elicitability with respect to the lexicographic order on $\mathbb{R}^2$, which makes it applicable in the context of CoVaR, MES or (CoVaR, CoES), together with VaR. We describe corresponding comparative backtests of Diebold-Mariano type, for two-sided and 'one and a half'-sided hypotheses, which respect the particularities of the lexicographic order and which can be used in a regulatory setting. We demonstrate the viability of these backtesting approaches in simulations and in an empirical application to DAX 30 and S&P 500 returns.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.10673&r=
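    The lexicographic logic of the proposed comparative backtests can be illustrated in miniature: run a Diebold-Mariano test on the first score component and consult the second component only when the first is statistically tied. The score differences below are simulated placeholders, not the paper's (CoVaR, VaR) scores.
```python
# Lexicographic comparative backtest in miniature: Diebold-Mariano
# t-tests on two score components, second used only to break ties.
import numpy as np
from scipy import stats

def dm_tstat(d):
    # Diebold-Mariano statistic for a score-difference series d_t
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(10)
T = 1000
# simulated per-period score differences (method A minus method B)
d1 = rng.normal(loc=0.00, scale=1.0, size=T)    # first objective: tied
d2 = rng.normal(loc=-0.1, scale=1.0, size=T)    # second objective: A better

t1, t2 = dm_tstat(d1), dm_tstat(d2)
crit = stats.norm.ppf(0.975)
if abs(t1) > crit:
    verdict = "A better" if t1 < 0 else "B better"
else:                                            # lexicographic tie-break
    verdict = ("A better" if t2 < 0 else "B better") if abs(t2) > crit else "tied"
print(f"t1={t1:.2f}, t2={t2:.2f} -> {verdict} (lower scores are better)")
```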
  15. By: Eric Benhamou; David Saltiel; Serge Tabachnik; Sui Kai Wong; François Chareyron
    Abstract: Model-free reinforcement learning (RL) has achieved meaningful results in stable environments but, to this day, remains problematic in regime-changing environments like financial markets. In contrast, model-based RL is able to capture some fundamental and dynamical concepts of the environment but suffers from cognitive bias. In this work, we propose to combine the best of the two techniques by selecting among various model-based approaches thanks to model-free deep reinforcement learning. Beyond past performance and volatility, we include additional contextual information such as macro and risk appetite signals to account for implicit regime changes. We also adapt traditional RL methods to real-life situations by considering only past data for the training sets; hence we cannot use future information in our training data, as K-fold cross-validation would imply. Building on traditional statistical methods, we use "walk-forward analysis", defined by successive training and testing based on expanding periods, to assert the robustness of the resulting agent. Finally, we test the statistical significance of differences between our models and more traditional ones with a two-tailed t-test. Our experimental results show that our approach outperforms traditional financial baseline portfolio models such as the Markowitz model in almost all evaluation metrics commonly used in financial mathematics, namely net performance, Sharpe and Sortino ratios, maximum drawdown, and maximum drawdown over volatility.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.10483&r=
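    The walk-forward analysis the authors rely on, successive training and testing on expanding periods, amounts to a simple split generator; a sketch with illustrative window sizes:
```python
# Walk-forward splits: an expanding training window followed by the
# next out-of-sample test block, so no future data leaks into training.
import numpy as np

def walk_forward_splits(n_obs, initial_train, test_size):
    """Yield (train_idx, test_idx) pairs with an expanding training window."""
    start = initial_train
    while start + test_size <= n_obs:
        yield np.arange(0, start), np.arange(start, start + test_size)
        start += test_size

# e.g. 10 years of daily data: train on >= 4 years, test in 1-year blocks
for train, test in walk_forward_splits(2520, 1008, 252):
    print(f"train [0, {train[-1]:4d}]  ->  test [{test[0]:4d}, {test[-1]:4d}]")
```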

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.