nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒02‒07
eight papers chosen by
Sune Karlsson
Örebro universitet

  1. Black-box Bayesian inference for economic agent-based models By Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian
  2. Causal Inference from Hypothetical Evaluations By B. Douglas Bernheim; Daniel Bjorkegren; Jeffrey Naecker; Michael Pollmann
  3. Limiting Spectral Distribution of High-dimensional Hayashi-Yoshida Estimator of Integrated Covariance Matrix By Arnab Chakrabarti; Rituparna Sen
  4. Time-Varying Linear Transformation Models with Fixed Effects and Endogeneity for Short Panels By Senay Sokullu; Irene Botosaru; Chris Muris
  5. Auction Throttling and Causal Inference of Online Advertising Effects By George Gui; Harikesh Nair; Fengshi Niu
  6. Identifying High-Frequency Shocks with Bayesian Mixed-Frequency VARs By Alessia Paccagnini; Fabio Parla
  7. Identifying the Distribution of Welfare from Discrete Choice By Bart Capéau; Liebrecht De Sadeleer
  8. On the Aggregation of Probability Assessments: Regularized Mixtures of Predictive Densities for Eurozone Inflation and Real Interest Rates By Francis X. Diebold; Minchul Shin; Boyuan Zhang

  1. By: Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian
    Abstract: Simulation models, in particular agent-based models, are gaining popularity in economics. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviors of complex systems, give them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. A number of recent works (Grazzini et al., 2017; Platt, 2020, 2021) have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions and/or (b) typically require many hundreds of thousands of simulations. These limitations make them unsuitable for large-scale simulations in economics and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of simulation-efficient black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. We present a number of benchmarking experiments in which we demonstrate that neural-network-based black-box methods provide state-of-the-art parameter inference for economic simulation models and, crucially, are compatible with generic multivariate time-series data. In addition, we suggest appropriate assessment criteria for use in future benchmarking of approximate Bayesian inference procedures for economic simulation models.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:amz:wpaper:2022-05&r=
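    A minimal, hedged sketch of the neural posterior estimation idea described above: draw parameters from the prior, run the simulator, and train a conditional density estimator q(theta | x) on the resulting pairs, so that evaluating it at the observed data yields an approximate posterior. The sketch assumes PyTorch, uses a toy AR(1) process as a stand-in "agent-based" simulator, and fits a diagonal-Gaussian posterior family in place of the richer estimators (e.g., normalizing flows) typically used; all names and data are illustrative.

      # Neural posterior estimation (NPE), minimal version.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      def simulator(theta):
          """Toy AR(1) 'agent-based' simulator: x_t = theta * x_{t-1} + eps."""
          T = 50
          x = torch.zeros(theta.shape[0], T)
          for t in range(1, T):
              x[:, t] = theta.squeeze(-1) * x[:, t - 1] + 0.5 * torch.randn(theta.shape[0])
          return x

      # 1. Draw parameters from the prior and simulate a data set for each.
      n_sims = 2000
      theta = torch.rand(n_sims, 1) * 2 - 1          # Uniform(-1, 1) prior
      x = simulator(theta)

      # 2. Conditional density estimator q(theta | x): an MLP mapping each
      #    simulated series to the mean and log-sd of a Gaussian posterior.
      net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      for epoch in range(500):
          mu, log_sd = net(x).chunk(2, dim=-1)
          loss = -torch.distributions.Normal(mu, log_sd.exp()).log_prob(theta).mean()
          opt.zero_grad(); loss.backward(); opt.step()

      # 3. Amortized inference: evaluate the trained network at "observed" data.
      x_obs = simulator(torch.tensor([[0.7]]))
      with torch.no_grad():
          mu, log_sd = net(x_obs).chunk(2, dim=-1)
      print(f"approx. posterior: mean {mu.item():.2f}, sd {log_sd.exp().item():.2f}")

    A single training run amortizes inference over any future observed series, which is part of what makes such methods attractive for expensive simulators.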
  2. By: B. Douglas Bernheim; Daniel Bjorkegren; Jeffrey Naecker; Michael Pollmann
    Abstract: This paper explores methods for inferring the causal effects of treatments on choices by combining data on real choices with hypothetical evaluations. We propose a class of estimators, identify conditions under which they yield consistent estimates, and derive their asymptotic distributions. The approach is applicable in settings where standard methods cannot be used (e.g., due to the absence of helpful instruments, or because the treatment has not been implemented). It can recover heterogeneous treatment effects more comprehensively, and can improve precision. We provide proof of concept using data generated in a laboratory experiment and through a field application.
    JEL: C13 D12
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:29616&r=
  3. By: Arnab Chakrabarti; Rituparna Sen
    Abstract: In this paper, the estimation of the integrated covariance matrix from high-frequency data, for a high-dimensional stock price process, is considered. The Hayashi-Yoshida covolatility estimator is an improvement over realized covolatility for asynchronous data and works well in low dimensions. However, it becomes inconsistent and unreliable in high-dimensional settings. We study the bulk spectrum of this matrix and establish its connection to the spectrum of the true covariance matrix in the limiting case where the dimension goes to infinity. The results are illustrated with simulation studies in finite, but high, dimensional cases. An application to tick-by-tick data on 50 stocks is presented.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.00119&r=
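    For reference, the Hayashi-Yoshida estimator of the covariation of two asynchronously observed processes sums products of increments whose observation intervals overlap: HY(X, Y) = sum over i, j of dX_i * dY_j * 1{(t_{i-1}, t_i] and (s_{j-1}, s_j] overlap}. Below is a minimal NumPy sketch for a single pair of series (the paper's object is the full high-dimensional matrix of such pairwise estimates); data and names are illustrative.

      # Hayashi-Yoshida covolatility for one asynchronously observed pair.
      import numpy as np

      def hayashi_yoshida(t_x, x, t_y, y):
          """HY estimate of <X, Y> from sorted observation times and log-prices."""
          dx, dy = np.diff(x), np.diff(y)
          hy = 0.0
          for i in range(len(dx)):
              for j in range(len(dy)):
                  # do intervals (t_x[i], t_x[i+1]] and (t_y[j], t_y[j+1]] overlap?
                  if t_x[i] < t_y[j + 1] and t_y[j] < t_x[i + 1]:
                      hy += dx[i] * dy[j]
          return hy

      # Correlated Brownian pair observed at non-synchronized random times.
      rng = np.random.default_rng(0)
      grid = np.linspace(0, 1, 1001)
      w = rng.standard_normal((2, 1000)) * np.sqrt(1 / 1000)
      X = np.concatenate([[0.0], np.cumsum(0.5 * w[0] + 0.5 * w[1])])
      Y = np.concatenate([[0.0], np.cumsum(w[1])])            # true <X, Y> = 0.5
      ix = np.sort(rng.choice(1001, 300, replace=False)); ix[0], ix[-1] = 0, 1000
      iy = np.sort(rng.choice(1001, 400, replace=False)); iy[0], iy[-1] = 0, 1000
      print(hayashi_yoshida(grid[ix], X[ix], grid[iy], Y[iy]))  # close to 0.5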
  4. By: Senay Sokullu; Irene Botosaru; Chris Muris
    Abstract: This paper considers a class of fixed-T nonlinear panel models with time-varying link function, fixed effects, and endogenous regressors. We establish sufficient conditions for the identification of the regression coefficients, the time-varying link function, the distribution of counterfactual outcomes, and certain (time-varying) average partial effects. We propose estimators for these objects and study their asymptotic properties. We show the relevance of our model by estimating the effect of teaching practices on student attainment as measured by test scores on standardized tests in mathematics and science. We use data from the Trends in International Mathematics and Science Study, and show that both traditional and modern teaching practices have positive effects of similar magnitudes on the performance of U.S. students on standardized tests in math and science.
    Date: 2022–01–24
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:22/756&r=
  5. By: George Gui; Harikesh Nair; Fengshi Niu
    Abstract: Causally identifying the effect of digital advertising is challenging, because experimentation is expensive, and observational data lacks random variation. This paper identifies a pervasive source of naturally occurring, quasi-experimental variation in user-level ad exposure in digital advertising campaigns. It shows how this variation can be utilized by ad-publishers to identify the causal effect of advertising campaigns. The variation pertains to auction throttling, a probabilistic method of budget pacing that is widely used to spread an ad-campaign's budget over its deployed duration, so that the campaign's budget is not exceeded or overly concentrated in any one period. The throttling mechanism is implemented by computing a participation probability based on the campaign's budget spending rate and then including the campaign in a random subset of available ad-auctions each period according to this probability. We show that access to logged participation probabilities enables identifying the local average treatment effect (LATE) of the ad-campaign. We present a new estimator that leverages this identification strategy and outline a bootstrap estimator for quantifying its variability. We apply our method to ad-campaign data from JD.com, which uses such throttling for budget pacing. We show our estimate is statistically different from estimates derived using other standard observational methods, such as OLS and two-stage least squares estimators based on auction participation as an instrumental variable.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.15155&r=
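    The identification idea can be illustrated with a small, hedged simulation: randomized auction participation Z (with logged probability p) acts as an instrument for ad exposure D, and reweighting by the known probabilities yields a Wald-type LATE estimate. This illustrates the identification strategy only, not the paper's estimator; all data below are simulated and names are illustrative.

      # IPW Wald ratio using logged throttling probabilities (NumPy only).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      p = rng.uniform(0.2, 0.9, n)           # logged participation probabilities
      z = rng.random(n) < p                  # throttling: enter the auction or not
      d = z & (rng.random(n) < 0.6)          # ad shown only if the auction is entered
      y = 1.0 + 0.3 * d + 0.5 * p + rng.standard_normal(n)    # true LATE = 0.3

      # Because p is correlated with the outcome, a naive z=1 vs z=0 contrast is
      # biased; weighting by the *known* probabilities removes the imbalance.
      w1, w0 = z / p, (1 - z) / (1 - p)
      itt_y = np.average(y, weights=w1) - np.average(y, weights=w0)
      itt_d = np.average(d, weights=w1) - np.average(d, weights=w0)
      print(f"Wald/LATE estimate: {itt_y / itt_d:.3f}")       # approx. 0.3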
  6. By: Alessia Paccagnini (University College Dublin and CAMA); Fabio Parla (Bank of Lithuania)
    Abstract: We contribute to research on mixed-frequency regressions by introducing an innovative Bayesian approach. We impose a Normal-inverse Wishart prior by adding a set of auxiliary dummies in estimating a Mixed-Frequency VAR. We identify a high-frequency shock in a Monte Carlo experiment and in an illustrative example with an uncertainty shock for the U.S. economy. As the main finding, we document a “temporal aggregation bias” when we adopt a common low-frequency model instead of estimating a mixed-frequency framework. The bias is amplified in the case of a large mismatch between the high-frequency shock and the low-frequency business cycle variables.
    Keywords: Bayesian mixed-frequency VAR, MIDAS, Monte Carlo, uncertainty shocks, macro-financial linkages
    JEL: C32 E44 E52
    Date: 2021–12–29
    URL: http://d.repec.org/n?u=RePEc:lie:wpaper:97&r=
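    The "auxiliary dummies" device mentioned in the abstract is, in the standard single-frequency case, the familiar trick of implementing a Normal-inverse Wishart prior by appending artificial observations to the data, so that least squares on the augmented sample returns the posterior mean of the VAR coefficients. Below is a hedged NumPy sketch for a bivariate VAR(1) with Minnesota-style dummies (tightness lam, random-walk prior mean, constants omitted); it follows the textbook construction, not the paper's mixed-frequency setup.

      # NIW prior via dummy observations: OLS on (Y*, X*) gives the posterior mean.
      import numpy as np

      rng = np.random.default_rng(2)
      n, T, lam = 2, 120, 0.2

      # Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t.
      A_true = np.array([[0.6, 0.1], [0.0, 0.8]])
      y = np.zeros((T, n))
      for t in range(1, T):
          y[t] = y[t - 1] @ A_true.T + 0.3 * rng.standard_normal(n)
      Y, X = y[1:], y[:-1]                   # regression form: Y = X A' + E

      sigma = Y.std(axis=0)                  # scale proxies for each variable
      # First dummy block shrinks own lags toward 1 and cross lags toward 0
      # with tightness lam; second block pins down the error-covariance scale.
      Y_d = np.vstack([np.diag(sigma) / lam, np.diag(sigma)])
      X_d = np.vstack([np.diag(sigma) / lam, np.zeros((n, n))])

      Y_star, X_star = np.vstack([Y, Y_d]), np.vstack([X, X_d])
      A_post = np.linalg.lstsq(X_star, Y_star, rcond=None)[0].T
      print(A_post)                          # posterior-mean VAR coefficients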
  7. By: Bart Capéau; Liebrecht De Sadeleer
    Abstract: Empirical welfare analyses often impose stringent parametric assumptions on individuals' preferences and neglect unobserved preference heterogeneity. In this paper, we develop a framework to conduct individual and social welfare analysis for discrete choice that does not suffer from these drawbacks. We first adapt the class of individual welfare measures introduced by Fleurbaey (2009) to settings where individual choice is discrete. Allowing for unrestricted, unobserved preference heterogeneity, these measures become random variables. We then show that the distribution of these objects can be derived from choice probabilities, which can be estimated nonparametrically from cross-sectional data. In addition, we derive nonparametric results for the joint distribution of welfare and welfare differences, as well as for social welfare. The former is an important tool in determining whether the winners of a price change belong disproportionately to those groups who were initially well-off. An empirical illustration demonstrates the relevance of the methods and the importance of considering welfare instead of income.
    Keywords: discrete choice, nonparametric welfare analysis, individual welfare, social welfare, money metric utility, compensating variation, equivalent variation
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/338552&r=
  8. By: Francis X. Diebold; Minchul Shin; Boyuan Zhang
    Abstract: We propose methods for constructing regularized mixtures of density forecasts. We explore a variety of objectives and regularization penalties, and we use them in a substantive exploration of Eurozone inflation and real interest rate density forecasts. All individual inflation forecasters (even the ex post best forecaster) are outperformed by our regularized mixtures. From the Great Recession onward, the optimal regularization tends to move density forecasts' probability mass from the centers to the tails, correcting for overconfidence.
    JEL: C01 C53
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:29635&r=
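    The general recipe can be illustrated compactly: choose mixture weights on the simplex to maximize the average log predictive score, minus a penalty that shrinks the weights toward equal weighting. The objective and ridge-type penalty below are illustrative (the paper explores several variants); the sketch assumes NumPy and SciPy, and all forecasts are simulated.

      # Regularized mixture of Gaussian density forecasts (illustrative).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      T, K = 200, 3                          # periods, forecasters

      # Each forecaster reports a Gaussian density per period; realizations
      # are drawn so that forecaster 0 is correctly specified ex post.
      mus = rng.normal(0.0, 1.0, (T, K))
      sds = np.array([0.8, 1.0, 1.5])
      y = mus[:, 0] + 0.8 * rng.standard_normal(T)

      def gauss_pdf(y, mu, sd):
          return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

      dens = gauss_pdf(y[:, None], mus, sds) # T x K predictive densities at y

      def objective(v, gamma=1.0):
          w = np.exp(v) / np.exp(v).sum()    # softmax keeps w on the simplex
          log_score = np.log(dens @ w).mean()
          penalty = gamma * ((w - 1 / K) ** 2).sum()   # shrink toward 1/K
          return -log_score + penalty

      res = minimize(objective, np.zeros(K), method="BFGS")
      w_hat = np.exp(res.x) / np.exp(res.x).sum()
      print("regularized mixture weights:", w_hat.round(3))

    Shrinking the weights toward 1/K mixes in the more dispersed forecasters, one simple way such regularization can move the combined forecast's probability mass toward the tails.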

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.