
on Econometrics 
By:  De Gooijer, Jan G.; Reichardt, Hugo 
Abstract:  For linear regression models, we propose and study a multi-step kernel density-based estimator that is adaptive to unknown error distributions. We establish asymptotic normality and almost sure convergence. An efficient EM algorithm is provided to implement the proposed estimator. We also compare its finite-sample performance with five other adaptive estimators in an extensive Monte Carlo study of eight error distributions. Our method generally attains high mean-square-error efficiency. An empirical example illustrates the gain in efficiency of the new adaptive method when making statistical inference about the slope parameters in three linear regressions. 
Keywords:  adaptive estimation; EM algorithm; kernel density estimate; least squares estimate; linear regression 
JEL:  J1 
Date:  2021–12–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:115083&r= 
By:  Paul S. Koh 
Abstract:  This paper considers the estimation of static discrete games of complete information under pure strategy Nash equilibrium and no assumptions on the equilibrium selection rule, which is often viewed as computationally difficult due to the need for simulation of latent variables and repeated pointwise testing over a large number of candidate points in the parameter space. We propose computationally attractive approaches that avoid simulation and grid search by characterizing identifying restrictions in closed forms and formulating the identification problem as mathematical programming problems. We show that, under standard assumptions, the inequality restrictions proposed by Andrews, Berry, and Jia (2004) can be expressed in terms of closed-form multinomial logit probabilities, and the corresponding identified set is convex. When actions are binary, the sharp identified set can be characterized using a finite number of closed-form inequalities. We also propose a simple approach to constructing confidence sets for the identified sets. Two real-data experiments are used to illustrate that our methodology can be several orders of magnitude faster than existing approaches. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.05002&r= 
By:  Koki Fusejima; Takuya Ishihara; Masayuki Sawada 
Abstract:  Diagnostic tests with many covariates have been the norm for validating regression discontinuity designs (RDD). With many covariates, however, the test procedure lacks validity due to the multiple testing problem. Testable restrictions are designed to verify a single identification restriction; therefore, a single joint null hypothesis should be tested. In a meta-analysis of the top five economics publications, the joint null hypothesis of local randomization is massively over-rejected. This study provides joint testing procedures based on the newly shown joint asymptotic normality of RDD estimators. Simulation evidence demonstrates their favorable performance over the Bonferroni correction for fewer than 10 covariates. However, neither the Bonferroni correction nor our procedure guarantees size control with a larger number of covariates. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.04345&r= 
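The size distortion from per-covariate testing can be seen in a small stand-alone Monte Carlo sketch (a generic illustration under an idealized null of covariate balance, not the authors' procedure; the joint test here is a plain Wald-type chi-square statistic on independent z-statistics):

```python
import math
import random

random.seed(1)

K = 8          # number of covariates tested for balance
alpha = 0.05
reps = 4000

def chi2_sf_df8(x):
    # Survival function of chi-square with 8 df (closed form for even df):
    # P(X > x) = exp(-x/2) * sum_{k=0}^{3} (x/2)^k / k!
    h = x / 2.0
    return math.exp(-h) * sum(h ** k / math.factorial(k) for k in range(4))

naive = bonf = joint = 0
for _ in range(reps):
    z = [random.gauss(0.0, 1.0) for _ in range(K)]        # null: balance holds
    pvals = [1 - math.erf(abs(t) / math.sqrt(2)) for t in z]  # two-sided p-values
    if min(pvals) < alpha:      naive += 1   # uncorrected "reject if any test rejects"
    if min(pvals) < alpha / K:  bonf += 1    # Bonferroni correction
    if chi2_sf_df8(sum(t * t for t in z)) < alpha: joint += 1  # joint Wald-type test

print(naive / reps, bonf / reps, joint / reps)
```

With 8 covariates the uncorrected procedure rejects the true joint null roughly a third of the time, while both the Bonferroni correction and the joint test hold size near 5%.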
By:  Loïc Cantin (CREST, 5 Avenue Henri Le Chatelier, 91120 Palaiseau, France); Christian Francq (CREST, 5 Avenue Henri Le Chatelier, 91120 Palaiseau, France); Jean-Michel Zakoïan (CREST, 5 Avenue Henri Le Chatelier, 91120 Palaiseau, France) 
Abstract:  We propose a two-step semiparametric estimation approach for the dynamic Conditional VaR (CoVaR), from which other important systemic risk measures such as the Delta-CoVaR can be derived. The CoVaR allows one to define reserves for a given financial entity, in order to limit exceeding losses when the system is in distress. We assume that all financial returns in the system follow semiparametric GARCH-type models. Our estimation method relies on the fact that the dynamic CoVaR is the product of the volatility of the financial entity's return and a conditional quantile term involving the innovations of the different returns. We show that the latter quantity can be easily estimated from residuals of the GARCH-type models estimated by Quasi-Maximum Likelihood (QML). The study of the asymptotic behaviour of the corresponding estimator and the derivation of asymptotic confidence intervals for the dynamic CoVaR are the main purposes of the paper. Our theoretical results are illustrated via Monte Carlo experiments and real financial time series. 
Keywords:  conditional CoVaR and Delta-CoVaR, empirical distribution of bivariate residuals, model-free estimation risk, multivariate risks. 
Date:  2022–01–24 
URL:  http://d.repec.org/n?u=RePEc:crs:wpaper:202211&r= 
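The two-step logic described above — a volatility term times a conditional quantile of innovations — can be sketched in a stand-alone simulation (a hypothetical DGP with unit volatilities, where simulated innovations stand in for the GARCH residuals; this is an illustration of the idea, not the authors' estimator):

```python
import random

random.seed(0)

# Simulate standardized innovations (eta_x, eta_y) for two entities with
# positive dependence through a common factor (hypothetical DGP).
n = 20000
common = [random.gauss(0.0, 1.0) for _ in range(n)]
eta_x = [0.6 * c + 0.8 * random.gauss(0.0, 1.0) for c in common]
eta_y = [0.6 * c + 0.8 * random.gauss(0.0, 1.0) for c in common]

def empirical_quantile(xs, p):
    """Plain order-statistic quantile."""
    s = sorted(xs)
    return s[int(p * len(s))]

# Step 1 (in the paper: residuals from QML-estimated GARCH-type models);
# here the innovations are observed directly. VaR of entity x (left tail):
alpha = 0.05
var_x = empirical_quantile(eta_x, alpha)

# Step 2: conditional quantile of eta_y given that entity x is in distress.
distress_y = [y for x, y in zip(eta_x, eta_y) if x <= var_x]
covar_innov = empirical_quantile(distress_y, alpha)

# Dynamic CoVaR_t would be sigma_{y,t} * covar_innov; sigma_{y,t} = 1 here.
print(round(var_x, 2), round(covar_innov, 2))
```

Because the innovations are positively dependent, the conditional quantile under distress is more extreme than the unconditional VaR, which is exactly the margin the CoVaR captures.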
By:  Urga, Giovanni; Wang, Fa 
Abstract:  This paper proposes maximum (quasi)likelihood estimation for high dimensional factor models with regime switching in the loadings. The model parameters are estimated jointly by an EM algorithm, which in the current context only requires iteratively calculating regime probabilities and principal components of the weighted sample covariance matrix. When regime dynamics are taken into account, smoothed regime probabilities are calculated using a recursive algorithm. Consistency, convergence rates and limit distributions of the estimated loadings and the estimated factors are established under weak cross-sectional and temporal dependence as well as heteroscedasticity. It is worth noting that, due to the high dimension, regime switching can be identified consistently right after the switching point with only one observation. Simulation results show good performance of the proposed method. An application to the FRED-MD dataset demonstrates the potential of the proposed method for quick detection of business cycle turning points. 
Keywords:  Factor model, Regime switching, Maximum likelihood, High dimension, EM algorithm, Turning points 
JEL:  C13 C38 C55 
Date:  2022–05–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:113172&r= 
By:  Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Xu, Yongdeng (Cardiff Business School) 
Abstract:  Maximum Likelihood (ML) shows both lower power and higher bias in small-sample Monte Carlo experiments than Indirect Inference (II), and II's higher power comes from its use of the model-restricted distribution of the auxiliary model coefficients (Le et al. 2016). We show here that II's higher power causes it to have lower bias, because false parameter values are rejected more frequently under II; this greater rejection frequency is partly offset by a lower tendency for ML to choose unrejected false parameters as estimates, due again to its lower power allowing greater competition from rival unrejected parameter sets. 
Keywords:  Bias, Indirect Inference, Maximum Likelihood 
JEL:  C12 C32 C52 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:cdf:wpaper:2022/10&r= 
By:  Miranda-Agrippino, Silvia (Bank of England); Ricco, Giovanni (University of Warwick) 
Abstract:  IV methods have become the leading approach to identify the effects of macroeconomic shocks. Conditions for identification generally involve all the shocks in the VAR even when only a subset of them is of interest. This paper provides more general conditions that only involve the shocks of interest and the properties of the instrument of choice. We introduce a heuristic and a formal test to guide the specification of the empirical models, and provide formulas for the bias when the conditions are violated. We apply our results to the study of the transmission of conventional and unconventional monetary policy shocks. 
Keywords:  Identification with external instruments; structural VAR; invertibility; monetary policy shocks 
JEL:  C32 C36 E30 E52 
Date:  2022–04–14 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0973&r= 
By:  Jiti Gao; Bin Peng; Yayi Yan 
Abstract:  In this paper, we propose a simple dependent wild bootstrap procedure to establish valid inference for a wide class of panel data models, including those with interactive fixed effects. The proposed method allows the error components to have weak correlation over both dimensions, as well as heteroskedasticity. The asymptotic properties are established under a set of simple and general conditions, bridging the literature on bootstrap methods and the literature on HAC approaches for panel data models. The new findings fill some gaps left by the large literature on block-bootstrap-based panel data studies. Finally, we show the superiority of our approach over several natural competitors using extensive numerical studies. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.00577&r= 
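A minimal stand-alone sketch of the dependent-wild-bootstrap idea (a generic textbook-style variant with Rademacher block weights applied to toy AR(1) panel residuals, not the authors' procedure):

```python
import random
import statistics

random.seed(4)

# Toy panel of residuals u[i][t] with serial correlation within each unit.
N, T, L = 30, 200, 10          # units, periods, bootstrap block length
u = []
for i in range(N):
    e, row = 0.0, []
    for t in range(T):
        e = 0.5 * e + random.gauss(0.0, 1.0)   # AR(1) errors
        row.append(e)
    u.append(row)

def dwb_draw(u, L):
    """One dependent-wild-bootstrap draw: a single time-indexed weight
    process, constant within blocks of length L and shared by all units,
    so both serial and cross-sectional dependence are preserved."""
    T = len(u[0])
    w = []
    for t in range(T):
        if t % L == 0:
            xi = random.choice([-1.0, 1.0])    # Rademacher block weight
        w.append(xi)
    return [[w[t] * row[t] for t in range(T)] for row in u]

# Bootstrap distribution of the grand mean of the residuals:
draws = [statistics.mean(x for row in dwb_draw(u, L) for x in row)
         for _ in range(300)]
print(round(statistics.stdev(draws), 3))
```

With positively serially correlated errors, block weights (L > 1) yield a larger bootstrap standard error for the mean than unit-length weights, which is how the procedure mimics a HAC-type variance.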
By:  Tatiana Komarova; William Matcham 
Abstract:  We introduce multivariate ordered discrete response models that exhibit non-lattice structures. From the perspective of behavioral economics, these models correspond to broad bracketing in decision making, whereas lattice models, which researchers typically estimate in practice, correspond to narrow bracketing. There is also a class of hierarchical models, which nests lattice models and is a special case of non-lattice models. Hierarchical models correspond to sequential decision making and can be represented by binary decision trees. In each of these cases, we specify latent processes as a sum of an index of covariates and an unobserved error, with unobservables for different latent processes potentially correlated. This additional dependence further complicates the identification of model parameters in non-lattice models. We give conditions sufficient to guarantee identification under the independence of errors and covariates, compare these conditions to what is required to attain identification in lattice models and outline an estimation approach. Finally, we provide simulations and empirical examples, through which we discuss the case when unobservables follow a distribution from a known parametric family, focusing on popular probit specifications. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.05779&r= 
By:  Silvia Goncalves; Ana María Herrera; Lutz Kilian; Elena Pesavento 
Abstract:  Many empirical studies estimate impulse response functions that depend on the state of the economy. Most of these studies rely on a variant of the local projection (LP) approach to estimate the state-dependent impulse response functions. Despite its widespread application, the asymptotic validity of the LP approach to estimating state-dependent impulse responses has not been established to date. We formally derive this result for a structural state-dependent vector autoregressive process. The model only requires the structural shock of interest to be identified. A sufficient condition for the consistency of the state-dependent LP estimator of the response function is that the first- and second-order conditional moments of the structural shocks are independent of current and future states, given the information available at the time the shock is realized. This rules out models in which the state of the economy is a function of current or future realizations of the outcome variable of interest, as is often the case in applied work. Even when the state is a function of past values of this variable only, consistency may hold only at short horizons. 
Keywords:  local projection; state-dependent impulse responses; threshold; identification; nonlinear VAR 
JEL:  C22 C32 C51 
Date:  2022–05–06 
URL:  http://d.repec.org/n?u=RePEc:fip:feddwp:94175&r= 
By:  C. Angelo Guevara 
Abstract:  Crawford et al.'s (2021) article on the estimation of discrete choice models with unobserved or latent consideration sets presents a unified framework to address the problem in practice by using "sufficient sets", defined as a combination of past observed choices. The proposed approach rests on a reinterpretation of a consistency result by McFadden (1978) for the problem of sampling of alternatives, but the usage of that result in Crawford et al. (2021) is imprecise in an important respect. It is stated that consistency would be attained if any subset of the true consideration set is used for estimation, but McFadden (1978) shows that, in general, one needs a sampling correction that depends on the protocol used to draw the choice set. This note derives the sampling correction that is required when the choice set for estimation is built from past choices. It then formalizes the conditions under which such a correction fulfills the uniform conditioning property and can therefore be ignored when building practical estimators, such as the ones analyzed by Crawford et al. (2021). 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.00852&r= 
By:  Xiaoyu Cheng 
Abstract:  When sample data are governed by an unknown sequence of independent but possibly non-identical distributions, the data-generating process (DGP) in general cannot be perfectly identified from the data. For making decisions in the face of such uncertainty, this paper presents a novel approach that studies how the data can best be used to robustly improve decisions: no matter which DGP governs the uncertainty, one can make a better decision than without using the data. I show that common inference methods, e.g., maximum likelihood and Bayesian updating, cannot achieve this goal. To address this, I develop new updating rules that lead to robustly better decisions either asymptotically almost surely or in finite samples with a pre-specified probability. They are easy to implement, as they are given by simple extensions of standard statistical procedures in the case where the possible DGPs are all independent and identically distributed. Finally, I show that the new updating rules also lead to more intuitive conclusions in existing economic models such as asset pricing under ambiguity. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.04573&r= 
By:  Toni M. Whited 
Abstract:  I discuss various ways in which inference based on the estimation of the parameters of statistical models (reduced-form estimation) can be combined with inference based on the estimation of the parameters of economic models (structural estimation). I discuss five basic categories of integration: directly combining the two methods, using statistical models to simplify structural estimation, using structural estimation to extend the validity of reduced-form results, using reduced-form techniques to assess the external validity of structural estimations, and using structural estimation as a sample selection remedy. I illustrate each of these methods with examples from corporate finance, banking, and personal finance. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.01175&r= 
By:  Chao Wang (Indiana University, Department of Economics); Stefan Weiergraeber (Indiana University, Department of Economics); Ruli Xiao (Indiana University, Department of Economics) 
Abstract:  We study the identification of dynamic discrete choice models with hyperbolic discounting using a terminating action. We provide novel identification results for both sophisticated and naive agents’ discount factors and their utilities in a finite horizon framework under the assumption of a stationary flow utility. In contrast to existing identification strategies, we do not require observing the final period for the sophisticated agent. Moreover, we avoid normalizing the flow utility of a reference action for both the sophisticated and the naive agent. We propose two simple estimators and show that they perform well in simulations. 
Keywords:  hyperbolic discounting, dynamic discrete choice model, identification 
Date:  2022–06 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2022010&r= 
By:  Ravan Moret; Andrew G. Chapple 
Abstract:  In observational studies, confounding variables that affect both the exposure and an outcome of interest are a general concern. It is well known that failure to control adequately for confounding variables can worsen inference on an exposure's effect on the outcome. In this paper, we explore how exposure effect inference changes when non-confounding covariates are added to the assumed logistic regression model after the set of all true confounders is included. This is done via an exhaustive simulation study with thousands of randomly generated scenarios to make general statements about over-adjusting in logistic regression. Our results show that, in general, adding non-confounders to the regression model decreases the mean squared error for non-null exposure effects. The probabilities of both type I and type II errors also decrease with the addition of more covariates, given that all true confounders are controlled for. 
Keywords:  regression model, confounding covariates, type I errors, type II errors 
JEL:  C12 C13 C15 
Date:  2022–03–05 
URL:  http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2022_05&r= 
By:  Toru Kitagawa; Weining Wang; Mengshan Xu 
Abstract:  This paper develops a novel method for policy choice in a dynamic setting where the available data is a multivariate time series. Building on the statistical treatment choice framework, we propose Time-series Empirical Welfare Maximization (T-EWM) methods to estimate an optimal policy rule for the current period or over multiple periods by maximizing an empirical welfare criterion constructed using nonparametric potential outcome time series. We characterize conditions under which T-EWM consistently learns a policy choice that is optimal in terms of conditional welfare given the time-series history. We then derive a non-asymptotic upper bound for conditional welfare regret and its minimax lower bound. To illustrate the implementation and uses of T-EWM, we perform simulation studies and apply the method to estimate optimal monetary policy rules from macroeconomic time-series data. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.03970&r= 
By:  Hikaru Kawarazaki (Graduate school in Economics at University College London); Minhaj Mahmud (Asian Development Bank); Yasuyuki Sawada (Faculty of Economics, The University of Tokyo); Mai Seki (Ritsumeikan University); Kazuma Takakura (Graduate School of Economics, The University of Tokyo) 
Abstract:  In the already very rich and crowded literature on education interventions, the use of test scores to capture students' cognitive abilities has been the norm when measuring impact. We show that even in randomized controlled trials (RCTs), estimated treatment effects on the true latent abilities can still be biased towards zero, because test scores are censored at zero and full marks. This paper employs sui generis data from a field experiment in Bangladesh, as well as data sets from existing highly cited studies in developing countries, to illustrate theoretically and empirically that this remaining classical sample selection problem exists. We suggest three concrete ways to correct such bias: first, employ conventional sample selection correction methods; second, use tests designed with an extensive set of questions ranging from easy to challenging levels, which allow students to answer to the maximum of their ability; and third, incorporate each student's completion time in estimation. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2022cf1194&r= 
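The censoring mechanism behind this attenuation result can be illustrated with a tiny simulation (all numbers hypothetical): capping latent abilities at "full marks" shrinks the estimated treatment effect toward zero even under randomization.

```python
import random
import statistics

random.seed(5)

# Latent abilities; a (hypothetical) treatment raises them by 0.5 SD.
n = 20000
control = [random.gauss(0.0, 1.0) for _ in range(n)]
treated = [random.gauss(0.5, 1.0) for _ in range(n)]

full_marks = 1.0   # the test cannot distinguish abilities above this ceiling

def censor(xs, top):
    return [min(x, top) for x in xs]

# Difference in means on latent abilities vs. on censored test scores:
effect_latent = statistics.mean(treated) - statistics.mean(control)
effect_scores = (statistics.mean(censor(treated, full_marks))
                 - statistics.mean(censor(control, full_marks)))
print(round(effect_latent, 2), round(effect_scores, 2))
```

More of the treated group sits at the ceiling, so the score-based estimate understates the latent treatment effect despite perfect randomization.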
By:  Paul S. Koh 
Abstract:  This paper studies identification and estimation of dynamic games when the underlying information structure is unknown to the researcher. To tractably characterize the set of model predictions while maintaining weak assumptions on players' information, we introduce Markov correlated equilibrium, a dynamic analog of Bayes correlated equilibrium. The set of Markov correlated equilibrium predictions coincides with the set of Markov perfect equilibrium predictions that can arise when the players might observe more signals than assumed by the analyst. We characterize the sharp identified sets under varying assumptions on what the players minimally observe. We also propose computational strategies for dealing with the non-convexities that arise in dynamic environments. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.03706&r= 
By:  Daido Kido 
Abstract:  The effect of treatments is often heterogeneous, depending on observable characteristics, and it is necessary to exploit such heterogeneity to devise individualized treatment rules. Existing estimation methods for such individualized treatment rules assume that the available experimental or observational data derive from the target population in which the estimated policy is implemented. However, this assumption often fails in practice because useful data are limited. In this case, social planners must rely on data generated in a source population that differs from the target population. Unfortunately, existing estimation methods do not necessarily work as expected in this new setting, and strategies that can achieve a reasonable goal in such a situation are required. In this paper, I study Distributionally Robust Policy Learning (DRPL), which formalizes ambiguity about the target population and adapts to the worst-case scenario in the ambiguity set. It is shown that DRPL with a Wasserstein-distance-based characterization of ambiguity provides simple intuitions and a simple estimation method. I then develop an estimator for the distributionally robust policy and evaluate its theoretical performance. An empirical application shows that DRPL outperforms the naive approach in the target population. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.04637&r= 
By:  Taras Bodnar; Vilhelm Niklasson; Erik Thorsén 
Abstract:  In this paper, a new way to integrate volatility information for estimating value at risk (VaR) and conditional value at risk (CVaR) of a portfolio is suggested. The new method is developed from the perspective of Bayesian statistics and it is based on the idea of volatility clustering. By specifying the hyperparameters in a conjugate prior based on two different rolling window sizes, it is possible to quickly adapt to changes in volatility and automatically specify the degree of certainty in the prior. This constitutes an advantage in comparison to existing Bayesian methods that are less sensitive to such changes in volatilities and also usually lack standardized ways of expressing the degree of belief. We illustrate our new approach using both simulated and empirical data. Compared to some other well known homoscedastic and heteroscedastic models, the new method provides a good alternative for risk estimation, especially during turbulent periods where it can quickly adapt to changing market conditions. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.01444&r= 
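The conjugate-updating idea can be sketched with a zero-mean normal/inverse-gamma model (a simplified stand-in for the paper's two-window construction; the prior strength `a0` and the plug-in normal quantile are arbitrary illustrative choices, not the authors' specification):

```python
import math
import random
import statistics

random.seed(2)

# Hypothetical returns: a calm regime followed by a volatility spike
# (volatility clustering).
returns = [random.gauss(0.0, 0.01) for _ in range(250)] \
        + [random.gauss(0.0, 0.03) for _ in range(20)]

def var_conjugate(r, long_w=250, short_w=20):
    """One-step 5% VaR from a zero-mean normal/inverse-gamma conjugate model:
    the prior is anchored on a long rolling window and updated with a short
    one, so the estimate adapts quickly when recent volatility changes."""
    long_var = statistics.pvariance(r[-long_w:])
    short = r[-short_w:]
    # Inverse-gamma prior IG(a0, b0) centred on the long-window variance;
    # a0 sets the prior weight (hypothetical choice).
    a0 = 10.0
    b0 = (a0 - 1.0) * long_var
    # Conjugate update with the short window (zero-mean normal likelihood):
    a_n = a0 + len(short) / 2.0
    b_n = b0 + 0.5 * sum(x * x for x in short)
    post_var = b_n / (a_n - 1.0)            # posterior mean of the variance
    z = 1.6449                              # 5% normal quantile (plug-in)
    return -z * math.sqrt(post_var)

print(round(var_conjugate(returns), 4))
```

After the spike, the posterior VaR is noticeably more conservative than a plug-in VaR built from the long window alone, which is the adaptation-to-turbulence property the abstract emphasizes.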
By:  Kastoryano, Stephen (University of Reading) 
Abstract:  This paper proposes a causal decomposition framework for settings in which an initial regime randomization influences the timing of a treatment duration. The initial randomization and the treatment in turn affect a duration outcome of interest. Our empirical application considers the survival of individuals on the kidney transplant waitlist. Upon entering the waitlist, individuals with an AB blood type, who are universal recipients, are effectively randomized to a regime with a higher propensity to rapidly receive a kidney transplant. Our dynamic potential outcomes framework allows us to identify the pre-transplant effect of the blood type, and the transplant effects depending on blood type. We further develop dynamic assumptions which build on the LATE framework and allow researchers to separate effects for different population substrata. Our main empirical result is that AB blood type candidates display a higher pre-transplant mortality. We provide evidence that this effect is due to behavioural changes rather than biological differences. 
Keywords:  dynamic treatment effects, survival models, expectation effects, kidney transplant 
JEL:  C22 C41 I12 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp15314&r= 
By:  James Mitchell; Aubrey Poon; Dan Zhu 
Abstract:  Quantile regression methods are increasingly used to forecast tail risks and uncertainties in macroeconomic outcomes. This paper reconsiders how to construct predictive densities from quantile regressions. We compare a popular two-step approach that fits a specific parametric density to the quantile forecasts with a nonparametric alternative that lets the 'data speak.' Simulation evidence and an application revisiting GDP growth uncertainties in the US demonstrate the flexibility of the nonparametric approach when constructing density forecasts from both frequentist and Bayesian quantile regressions, and identify its ability to unmask deviations from symmetric and unimodal densities. The dominant macroeconomic narrative becomes one of the evolution, over the business cycle, of multimodalities rather than asymmetries in the predictive distribution of GDP growth when conditioned on financial conditions. 
Keywords:  Density Forecasts; Quantile Regressions; Financial Conditions 
JEL:  C53 E32 E37 E44 
Date:  2022–05–09 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwq:94160&r= 
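The contrast between the two routes can be illustrated with a toy example (hypothetical skewed "GDP growth" data; the parametric step here is a crude normal fit through two quantiles, a much simpler stand-in for the parametric densities the paper considers):

```python
import random

random.seed(3)

# A skewed "GDP growth" sample (hypothetical: mixture with a heavy left tail).
sample = [random.gauss(2.0, 1.0) if random.random() < 0.85
          else random.gauss(-3.0, 1.5) for _ in range(50000)]

levels = [0.05, 0.25, 0.5, 0.75, 0.95]
s = sorted(sample)
q = {p: s[int(p * len(s))] for p in levels}   # the "quantile forecasts"

# Two-step parametric route: fit a normal through the 25%/75% quantiles.
iqr_z = 0.6745                                 # Phi^{-1}(0.75)
mu = 0.5 * (q[0.25] + q[0.75])
sigma = (q[0.75] - q[0.25]) / (2 * iqr_z)
norm_q05 = mu - 1.6449 * sigma                 # implied 5% quantile

# The nonparametric route keeps the raw quantiles, so its 5% point is q[0.05].
print(round(q[0.05], 2), round(norm_q05, 2))
```

The symmetric parametric fit badly understates the left tail relative to the raw quantile forecast, which is the kind of deviation from symmetric, unimodal densities the nonparametric route is designed to preserve.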
By:  A. H. Nzokem; V. T. Montshiwa 
Abstract:  The paper investigates the rich class of Generalized Tempered Stable distributions, an alternative to the Normal distribution and the $\alpha$-Stable distribution for modelling asset returns and many physical and economic systems. First, we explore some important properties of the Generalized Tempered Stable (GTS) distribution. The theoretical tools developed are used to perform an empirical analysis. The GTS distribution is fitted using three indexes: S&P 500, SPY ETF and Bitcoin BTC. The Fractional Fourier Transform (FRFT) technique evaluates the probability density function and its derivatives in the maximum likelihood procedure. Based on the three sample data sets, the Kolmogorov–Smirnov (KS) goodness-of-fit test shows that the GTS distribution fits both sides of the underlying distribution for the SPY ETF index and Bitcoin BTC returns. Regarding the S&P 500 index, the Tempered Stable distribution fits the right side of the underlying distribution, while the compound Poisson distribution fits the left side. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.00586&r= 
By:  Paul S. Koh 
Abstract:  Empirically, many strategic settings are characterized by stable outcomes in which players' decisions are publicly observed, yet no player takes the opportunity to deviate. To analyze such situations in the presence of incomplete information, we build an empirical framework by introducing a novel solution concept that we call Bayes stable equilibrium. Our framework allows the researcher to be agnostic about players' information and the equilibrium selection rule. The Bayes stable equilibrium identified set collapses to the complete information pure strategy Nash equilibrium identified set under strong assumptions on players' information. Furthermore, all else equal, it is weakly tighter than the Bayes correlated equilibrium identified set. We also propose computationally tractable approaches for estimation and inference. In an application, we study the strategic entry decisions of McDonald's and Burger King in the US. Our results highlight the identifying power of informational assumptions and show that the Bayes stable equilibrium identified set can be substantially tighter than the Bayes correlated equilibrium identified set. In a counterfactual experiment, we examine the impact of increasing access to healthy food on the market structures in Mississippi food deserts. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.04990&r= 
By:  Christopher Adjaho; Timothy Christensen 
Abstract:  We consider the problem of learning treatment (or policy) rules that are externally valid in the sense that they have welfare guarantees in target populations that are similar to, but possibly different from, the experimental population. We allow for shifts in both the distribution of potential outcomes and covariates between the experimental and target populations. This paper makes two main contributions. First, we provide a formal sense in which policies that maximize social welfare in the experimental population remain optimal for the "worst-case" social welfare when the distribution of potential outcomes (but not covariates) shifts. Hence, policy learning methods that have good regret guarantees in the experimental population, such as empirical welfare maximization, are externally valid with respect to a class of shifts in potential outcomes. Second, we develop methods for policy learning that are robust to shifts in the joint distribution of potential outcomes and covariates. Our methods may be used with experimental or observational data. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.05561&r= 
By:  Wichert, Oliver (Tilburg University, School of Economics and Management) 
Date:  2022 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiutis:f926ab90382b4aa5953286f937dc5fa1&r= 
By:  Gizem Koşar; Cormac O'Dea 
Abstract:  A growing literature uses now widely available data on beliefs and expectations in the estimation of structural models. In this chapter, we review this literature, with an emphasis on models of individual and household behavior. We first show how expectations data have been used to relax strong assumptions about beliefs and outline how they can be used in estimation to substitute for, or as a complement to, data on choices. Next, we discuss the literature that uses different types of expectations data in the estimation of structural models. We conclude by noting directions for future research. 
Keywords:  expectations data; beliefs; household surveys; structural models; hypotheticals; choice expectations; stated-preference data 
JEL:  D84 C51 C13 D83 
Date:  2022–05–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:94270&r= 
By:  Eric Auerbach; Yong Cai 
Abstract:  We are interested in the distribution of treatment effects for an experiment where units are randomized to treatment but outcomes are measured for pairs of units. For example, we might measure risk sharing links between households enrolled in a microfinance program, employment relationships between workers and firms exposed to a trade shock, or bids from bidders to items assigned to an auction format. Such a doubly randomized experimental design may be appropriate when there are social interactions, market externalities, or other spillovers across units assigned to the same treatment. Or it may describe a natural or quasi-experiment given to the researcher. In this paper, we propose a new empirical strategy based on comparing the eigenvalues of the outcome matrices associated with each treatment. Our proposal is based on a new matrix analog of the Fréchet–Hoeffding bounds that play a key role in the standard theory. We first use this result to bound the distribution of treatment effects. We then propose a new matrix analog of quantile treatment effects based on the difference in the eigenvalues. We call this analog spectral treatment effects. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.01246&r= 
By:  Corrado Monti; Marco Pangallo; Gianmarco De Francisci Morales; Francesco Bonchi 
Abstract:  Agent-Based Models (ABMs) are used in several fields to study the evolution of complex systems from micro-level assumptions. However, ABMs typically cannot estimate agent-specific (or "micro") variables: this is a major limitation which prevents ABMs from harnessing micro-level data availability and which greatly limits their predictive power. In this paper, we propose a protocol to learn the latent micro-variables of an ABM from data. The first step of our protocol is to reduce an ABM to a probabilistic model, characterized by a computationally tractable likelihood. This reduction follows two general design principles: balance of stochasticity and data availability, and replacement of unobservable discrete choices with differentiable approximations. The protocol then proceeds by maximizing the likelihood of the latent variables via a gradient-based expectation maximization algorithm. We demonstrate our protocol by applying it to an ABM of the housing market, in which agents with different incomes bid higher prices to live in high-income neighborhoods. We show that the obtained model allows accurate estimates of the latent variables, while preserving the general behavior of the ABM, and that our estimates can be used for out-of-sample forecasting. Our protocol can be seen as an alternative to black-box data assimilation methods that forces the modeler to lay bare the assumptions of the model, to think about the inferential process, and to spot potential identification problems. 
Date:  2022–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2205.05052&r= 
By:  Sonan Memon (Pakistan Institute of Development Economics) 
Abstract:  Machine Learning (henceforth ML) refers to the set of algorithms and computational methods which enable computers to learn patterns from training data without being explicitly programmed to do so. ML uses training data to learn patterns by estimating a mathematical model and making out-of-sample predictions based on new or unseen input data. ML has a tremendous capacity to discover complex, flexible and, crucially, generalisable structure in training data. 
Keywords:  Machine Learning, Economists, Introduction 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:pid:kbrief:2021:33&r= 