nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒02‒13
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Semiparametric Bayesian doubly robust causal estimation By Luo, Yu; Graham, Daniel J.; McCoy, Emma J.
  2. More Efficient Estimation of Multiplicative Panel Data Models in the Presence of Serial Correlation By Nicholas Brown; Jeffrey Wooldridge
  3. A Model Specification Test for Nonlinear Stochastic Diffusions with Delay By Zongwu Cai; Hongwei Mei; Rui Wang
  4. Difference-in-Differences via Common Correlated Effects By Nicholas Brown; Kyle Butts; Joakim Westerlund
  5. Inference for Large Panel Data with Many Covariates By Markus Pelger; Jiacheng Zou
  6. General Conditions for Valid Inference in Multi-Way Clustering By Luther Yap
  7. Penalized Model Averaging for High Dimensional Quantile Regressions By Haowen Bao; Zongwu Cai; Yuying Sun
  8. A Framework for Generalization and Transportation of Causal Estimates Under Covariate Shift By Apoorva Lal; Wenjing Zheng; Simon Ejdemyr
  9. Spectral and post-spectral estimators for grouped panel data models By Denis Chetverikov; Elena Manresa
  10. Randomization Test for the Specification of Interference Structure By Tadao Hoshino; Takahide Yanagi
  11. Treatment Effect Analysis for Pairs with Endogenous Treatment Takeup By Mate Kormos; Robert P. Lieli; Martin Huber
  12. Truncated Poisson-Dirichlet approximation for Dirichlet process hierarchical models By Zhang, Junyi; Dassios, Angelos
  13. High-frequency realized stochastic volatility model By Watanabe, Toshiaki; Nakajima, Jouchi
  14. Model Averaging for Asymptotically Optimal Combined Forecasts By Yi-Ting Chen; Chu-An Liu
  15. A non-Normal framework for price discovery: The independent component based information shares measure By Sebastiano Michele Zema
  16. Stochastic Langevin Monte Carlo for (weakly) log-concave posterior distributions By Crespo, Marelys; Gadat, Sébastien; Gendre, Xavier
  17. Best, worst, and Best&worst choice probabilities for logit and reverse logit models By André de Palma; Karim Kilani
  18. Feature Selection for Personalized Policy Analysis By Maria Nareklishvili; Nicholas Polson; Vadim Sokolov
  19. The chi-square standardization, combined with Box-Cox transformation, is a valid alternative to transforming to logratios in compositional data analysis By Michael Greenacre
  20. Calibrating Agent-based Models to Microdata with Graph Neural Networks By Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian

  1. By: Luo, Yu; Graham, Daniel J.; McCoy, Emma J.
    Abstract: Frequentist semiparametric theory has been used extensively to develop doubly robust (DR) causal estimation. DR estimation combines outcome regression (OR) and propensity score (PS) models in such a way that correct specification of just one of the two models is enough to obtain consistent parameter estimation. An equivalent Bayesian solution, however, is not straightforward, as there is no obvious distributional framework for the joint OR and PS model, and the DR approach involves a semiparametric estimating equation framework without a fully specified likelihood. In this paper, we develop a fully semiparametric Bayesian framework for DR causal inference by bridging a nonparametric Bayesian procedure with empirical likelihood via semiparametric linear regression. Instead of specifying a fully probabilistic model, this procedure is realized only through relevant moment conditions. Crucially, this allows the posterior distribution of the causal parameter to be simulated via Markov chain Monte Carlo methods. We show that the posterior distribution of the causal estimator satisfies consistency and the Bernstein–von Mises theorem when either the OR or PS model is correctly specified. Simulation studies suggest that our proposed method is doubly robust and can achieve the desired coverage rate. We also apply this novel Bayesian method to a real data example to assess the impact of speed cameras on car collisions in England.
    Keywords: Bayesian estimation; causal inference; double robustness; empirical likelihood; propensity score adjustments; semiparametric inference
    JEL: C1
    Date: 2022–12–30
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:117944&r=ecm
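    As context for the double robustness property, here is a minimal frequentist sketch of the augmented IPW (AIPW) estimator that DR causal estimation builds on; the Python model choices and names are illustrative, not the authors' Bayesian procedure.

      # Illustrative doubly robust (AIPW) point estimate: consistent if either
      # the outcome regressions or the propensity score model is correct.
      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      def aipw_ate(y, t, X):
          ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
          m1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)
          m0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)
          scores = m1 - m0 + t * (y - m1) / ps - (1 - t) * (y - m0) / (1 - ps)
          return scores.mean()

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 3))
      t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
      y = 2.0 * t + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=2000)
      print(aipw_ate(y, t, X))   # close to the true effect of 2.0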
  2. By: Nicholas Brown (Queen's University); Jeffrey Wooldridge (Michigan State University)
    Abstract: We provide a systematic approach to obtaining an estimator asymptotically more efficient than the popular fixed effects Poisson (FEP) estimator for panel data models with multiplicative heterogeneity in the conditional mean. In particular, we derive the optimal instrumental variables under appealing "working" second-moment assumptions that allow underdispersion, overdispersion, and general patterns of serial correlation. Because parameters in the optimal instruments must be estimated, we argue for combining our new moment conditions with those that define the FEP estimator to obtain a generalized method of moments (GMM) estimator no less efficient than either the FEP estimator or the estimator using the new instruments. A simulation study shows that the overidentified GMM estimator behaves well in terms of bias and often delivers nontrivial efficiency gains, even when the working second-moment assumptions fail. We apply the new estimator to modeling firm patent filings and spending on R&D, and find nontrivial reductions in standard errors using the new estimator.
    Keywords: fixed effects Poisson, serial correlation, optimal instruments, generalized method of moments
    JEL: C23
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1497&r=ecm
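    For readers unfamiliar with the FEP estimator being improved upon, a short sketch of its moment conditions under the multiplicative-heterogeneity model follows; the simulated design is illustrative only.

      # Fixed effects Poisson (FEP) moment conditions: for each unit i,
      # sum_t x_it * (y_it - n_i * p_it(b)) = 0, where n_i = sum_t y_it and
      # p_it(b) = exp(x_it'b) / sum_s exp(x_is'b); the heterogeneity drops out.
      import numpy as np
      from scipy.optimize import root

      def fep_score(beta, y, X):
          g = np.zeros_like(beta)
          for yi, Xi in zip(y, X):              # yi is (T,), Xi is (T, k)
              w = np.exp(Xi @ beta)
              g += Xi.T @ (yi - yi.sum() * w / w.sum())
          return g

      rng = np.random.default_rng(1)
      beta0 = np.array([0.8, -0.4])
      X = rng.normal(size=(500, 6, 2))          # 500 units, 6 periods
      c = rng.gamma(1.0, size=(500, 1))         # multiplicative heterogeneity
      y = rng.poisson(c * np.exp(X @ beta0))
      print(root(fep_score, x0=np.zeros(2), args=(y, X)).x)   # approx beta0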
  3. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Hongwei Mei (Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409, USA); Rui Wang (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: The paper investigates model specification problems for nonlinear stochastic differential equations with delay (SDDEs). Compared to model specification for conventional stochastic diffusions without delay, the observed sequence does not admit a Markovian structure, so the classical testing procedures fail. To overcome this difficulty, we propose a moment estimator based on the ergodicity of SDDEs and establish its asymptotic properties. Based on the proposed moment estimator, a testing procedure is derived for the model specification testing problem. In particular, the limiting distributions of the proposed test statistic are derived under the null hypothesis, and the test power is obtained under some specific alternative hypotheses. Finally, a Monte Carlo simulation is conducted to illustrate the finite-sample performance of the proposed test.
    Keywords: Model specification test, Stochastic differential equation with delay, Moment estimator, Ergodicity, Invariant measure, Non-Markovian property.
    JEL: C58 C12 C32
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202301&r=ecm
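    To make the delayed, non-Markovian dynamics concrete, here is a toy Euler-Maruyama simulation of an SDDE whose drift depends on the lagged state; the specification dX_t = -a X_{t-tau} dt + s dW_t and all parameter values are illustrative.

      # Simulate dX_t = -a * X_{t-tau} dt + s dW_t by Euler-Maruyama.
      import numpy as np

      def simulate_sdde(a=1.0, s=0.5, tau=1.0, dt=0.01, T=50.0, seed=2):
          rng = np.random.default_rng(seed)
          lag, n = int(tau / dt), int(T / dt)
          x = np.zeros(n + lag)                 # flat initial path on [-tau, 0]
          for t in range(lag, n + lag - 1):
              drift = -a * x[t - lag]           # drift uses the lagged state
              x[t + 1] = x[t] + drift * dt + s * np.sqrt(dt) * rng.normal()
          return x[lag:]

      path = simulate_sdde()
      print(path.mean(), path.std())            # ergodic moments feed the estimator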
  4. By: Nicholas Brown (Queen's University); Kyle Butts (University of Colorado Boulder, Economics Department); Joakim Westerlund (Lund University and Deakin University)
    Abstract: We study the effect of treatment on an outcome when parallel trends hold conditional on an interactive fixed effects structure. In contrast to the majority of the literature, we propose identification using time-varying covariates. We assume the untreated outcomes and covariates follow a common correlated effects (CCE) model, where the covariates are linear in the same common time effects. We then demonstrate consistent estimation of the treatment effect coefficients by imputing the untreated potential outcomes in post-treatment time periods. Our method accounts for treatment affecting the distribution of the control variables and is valid when the number of pre-treatment time periods is small. We also decompose the overall treatment effect into estimable direct and mediated components.
    Keywords: difference-in-differences, interactive fixed effects, fixed-T, imputation
    JEL: C31 C33 C38
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1496&r=ecm
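    A deliberately simplified imputation sketch in the CCE spirit: proxy the common time effects by cross-section averages over never-treated units, fit the untreated outcome model, impute Y(0) for treated observations, and average the gaps. It assumes some never-treated units and pooled slopes, so it is only a caricature of the paper's estimator.

      import numpy as np

      def cce_did_att(Y, X, D):
          # Y, X, D are (N, T) outcomes, covariates, treatment indicators.
          ctrl = D.sum(axis=1) == 0                      # never-treated units
          ybar, xbar = Y[ctrl].mean(0), X[ctrl].mean(0)  # CCE factor proxies
          Z = np.vstack([np.column_stack([X[i], xbar, ybar, np.ones_like(ybar)])
                         for i in np.where(ctrl)[0]])
          b = np.linalg.lstsq(Z, Y[ctrl].ravel(), rcond=None)[0]
          gaps = []
          for i in np.where(~ctrl)[0]:
              Zi = np.column_stack([X[i], xbar, ybar, np.ones_like(ybar)])
              gaps.append((Y[i] - Zi @ b)[D[i] == 1])    # observed minus imputed Y(0)
          return np.concatenate(gaps).mean()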
  5. By: Markus Pelger; Jiacheng Zou
    Abstract: This paper proposes a new method for covariate selection in large dimensional panels. We develop the inferential theory for large dimensional panel data with many covariates by combining post-selection inference with a new multiple testing method specifically designed for panel data. Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for any regularized estimator. Based on our panel localization procedure, we control family-wise error rates for covariate discovery and can test unordered and nested families of hypotheses for large cross-sections. As an easy-to-use and practically relevant procedure, we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values from a generalized LASSO that allows priors to be incorporated. In an empirical study, we select a small number of asset pricing factors that explain a large cross-section of investment strategies. Our method dominates the benchmarks out-of-sample due to its better control of false rejections and detections.
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.00292&r=ecm
  6. By: Luther Yap
    Abstract: This paper proves a new central limit theorem for a sample that exhibits multi-way dependence and heterogeneity across clusters. Statistical inference for situations where there is both multi-way dependence and cluster heterogeneity has thus far been an open issue. Existing theory for multi-way clustering inference requires identical distributions across clusters (implied by the so-called separate exchangeability assumption). Yet no such homogeneity requirement is needed in the existing theory for one-way clustering. The new result therefore theoretically justifies the view that multi-way clustering is a more robust version of one-way clustering, consistent with applied practice. The result is applied to linear regression, where it is shown that a standard plug-in variance estimator is valid for inference.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.03805&r=ecm
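    The plug-in variance estimator whose validity the paper establishes can be written, in its familiar inclusion-exclusion form, as V = V_rows + V_cols - V_intersection; a compact OLS sketch follows, with variable names that are illustrative.

      import numpy as np

      def cluster_vcov(X, u, groups):
          # One-way cluster-robust sandwich for OLS with residuals u.
          bread = np.linalg.inv(X.T @ X)
          meat = np.zeros((X.shape[1], X.shape[1]))
          for g in np.unique(groups):
              s = (X[groups == g] * u[groups == g, None]).sum(axis=0)
              meat += np.outer(s, s)
          return bread @ meat @ bread

      def twoway_vcov(X, u, g1, g2):
          # Inclusion-exclusion over the two cluster dimensions.
          inter = np.array([f"{a}-{b}" for a, b in zip(g1, g2)])
          return (cluster_vcov(X, u, g1) + cluster_vcov(X, u, g2)
                  - cluster_vcov(X, u, inter))

    With OLS residuals u and cluster assignments g1 and g2 (say, firm and time), the square roots of the diagonal of twoway_vcov give the two-way clustered standard errors.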
  7. By: Haowen Bao (School of Economics and Management, University of Chinese Academy of Sciences and Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Yuying Sun (School of Economics and Management, University of Chinese Academy of Sciences and Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China)
    Abstract: This paper proposes a new penalized model averaging method for high dimensional quantile regressions based on quasi-maximum likelihood estimation, which determines optimal combination weights and yields sparseness over various potential covariates simultaneously. The proposed weight choice criterion is based on the Kullback-Leibler loss with penalties, and reduces to a Mallows-type criterion for the asymmetric Laplace density. Both the dimension of covariates and the number of possibly misspecified candidate models are allowed to diverge with the sample size. The asymptotic optimality and convergence rate of the selected weights are derived, even when all candidate models are misspecified. We further extend the analysis to ultra-high dimensional scenarios and establish the corresponding asymptotic optimality. Simulation studies and an empirical application to stock return forecasting illustrate that the proposed method outperforms existing methods.
    JEL: C51 C52 C53
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202302&r=ecm
  8. By: Apoorva Lal; Wenjing Zheng; Simon Ejdemyr
    Abstract: Randomized experiments are an excellent tool for estimating internally valid causal effects with the sample at hand, but their external validity is frequently debated. While classical results on the estimation of Population Average Treatment Effects (PATE) implicitly assume random selection into experiments, this is typically far from true in many medical, social-scientific, and industry experiments. When the experimental sample differs from the target sample along observable or unobservable dimensions, experimental estimates may be of limited use for policy decisions. We begin by decomposing the extrapolation bias from estimating the Target Average Treatment Effect (TATE) using the Sample Average Treatment Effect (SATE) into covariate shift, overlap, and effect modification components, which researchers can reason about in order to diagnose the severity of extrapolation bias. Next, we cast covariate shift as a sample selection problem and propose estimators that re-weight the doubly-robust scores from experimental subjects to estimate treatment effects in the overall sample (=: generalization) or in an alternate target sample (=: transportation). We implement these estimators in the open-source R package causalTransportR, illustrate their performance in a simulation study, and discuss diagnostics to evaluate performance.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.04776&r=ecm
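    The transportation step can be sketched as reweighting the experiment's doubly robust scores by the odds of target membership given covariates; the selection model and function names below are illustrative, not the causalTransportR implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def transport_ate(scores, X_exp, X_target):
          # scores: per-subject doubly robust scores from the experiment.
          X = np.vstack([X_exp, X_target])
          s = np.r_[np.zeros(len(X_exp)), np.ones(len(X_target))]   # 1 = target
          p = LogisticRegression(max_iter=1000).fit(X, s).predict_proba(X_exp)[:, 1]
          w = p / (1 - p)                # odds reweight the experiment to the target
          return np.average(scores, weights=w)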
  9. By: Denis Chetverikov; Elena Manresa
    Abstract: In this paper, we develop spectral and post-spectral estimators for grouped panel data models. Both estimators are consistent in the asymptotics where the number of observations $N$ and the number of time periods $T$ simultaneously grow large. In addition, the post-spectral estimator is $\sqrt{NT}$-consistent and asymptotically normal with mean zero under the assumption of well-separated groups even if $T$ is growing much slower than $N$. The post-spectral estimator has, therefore, theoretical properties that are comparable to those of the grouped fixed-effect estimator developed by Bonhomme and Manresa (2015). In contrast to the grouped fixed-effect estimator, however, our post-spectral estimator is computationally straightforward.
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2212.13324&r=ecm
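    As a rough illustration of the spectral step (a generic spectral-clustering stand-in, not the authors' exact estimator): group memberships can be recovered by k-means on the leading eigenvectors of the units' similarity matrix.

      import numpy as np
      from sklearn.cluster import KMeans

      def spectral_groups(Y, G):
          # Y is (N, T); returns estimated group labels in {0, ..., G-1}.
          S = Y @ Y.T / Y.shape[1]
          _, vecs = np.linalg.eigh(S)
          return KMeans(n_clusters=G, n_init=10).fit_predict(vecs[:, -G:])

      rng = np.random.default_rng(9)
      g = np.repeat([0, 1, 2], 60)              # three groups of 60 units
      Y = np.array([[1.0], [0.0], [-1.0]])[g] + 0.5 * rng.normal(size=(180, 40))
      print(spectral_groups(Y, G=3)[:10])       # labels recover the groups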
  10. By: Tadao Hoshino; Takahide Yanagi
    Abstract: This study considers testing the specification of spillover effects in causal inference. We focus on experimental settings in which the treatment assignment mechanism is known to researchers and develop a new randomization test utilizing a hierarchical relationship between different exposures. Compared with existing approaches, our approach is essentially applicable to any null exposure specification and produces powerful test statistics without a priori knowledge of the true interference structure. As empirical illustrations, we revisit two existing social network experiments: one on farmers' insurance adoption and the other on anti-conflict education programs.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.05580&r=ecm
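    The mechanics of any such test follow the same skeleton: re-draw assignments from the known design and compare the observed statistic with its randomization distribution. The statistic and design below are illustrative; the paper's contribution lies in which statistics remain valid under a given null exposure specification.

      import numpy as np

      def randomization_pvalue(y, z, stat, redraw, n_draws=999, seed=3):
          rng = np.random.default_rng(seed)
          t_obs = stat(y, z)
          t_sim = np.array([stat(y, redraw(rng)) for _ in range(n_draws)])
          return (1 + np.sum(t_sim >= t_obs)) / (1 + n_draws)

      rng = np.random.default_rng(4)
      z = rng.binomial(1, 0.5, size=200)               # known assignment mechanism
      y = 0.3 * z + rng.normal(size=200)
      diff = lambda y, z: y[z == 1].mean() - y[z == 0].mean()
      print(randomization_pvalue(y, z, diff, lambda r: r.permutation(z)))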
  11. By: Mate Kormos; Robert P. Lieli; Martin Huber
    Abstract: We study causal inference in a setting in which units consisting of pairs of individuals (such as married couples) are assigned randomly to one of four categories: a treatment targeted at pair member A, a potentially different treatment targeted at pair member B, joint treatment, or no treatment. The setup includes the important special case in which the pair members are the same individual targeted by two different treatments A and B. Allowing for endogenous non-compliance, including coordinated treatment takeup, as well as interference across treatments, we derive the causal interpretation of various instrumental variable estimands using weaker monotonicity conditions than in the literature. In general, coordinated treatment takeup makes it difficult to separate treatment interaction from treatment effect heterogeneity. We provide auxiliary conditions and various bounding strategies that may help zero in on causally interesting parameters. As an empirical illustration, we apply our results to a program randomly offering two different treatments, namely tutoring and financial incentives, to first year college students, in order to assess the treatments' effects on academic performance.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.04876&r=ecm
  12. By: Zhang, Junyi; Dassios, Angelos
    Abstract: The Dirichlet process was introduced by Ferguson in 1973 for use in Bayesian nonparametric inference problems. A lot of work has been done based on the Dirichlet process, making it the most fundamental prior in Bayesian nonparametric statistics. Since the construction of the Dirichlet process involves an infinite number of random variables, simulation-based methods are hard to implement, and various finite approximations of the Dirichlet process have been proposed to solve this problem. In this paper, we construct a new random probability measure called the truncated Poisson–Dirichlet process. It sorts the components of a Dirichlet process in descending order according to their random weights, then applies a truncation to obtain a finite approximation to the distribution of the Dirichlet process. Since the approximation is based on a decreasing sequence of random weights, it has a lower truncation error compared to existing methods based on the stick-breaking process. We then develop a blocked Gibbs sampler based on the Hamiltonian Monte Carlo method to explore the posterior of the truncated Poisson–Dirichlet process. The method is illustrated with a normal mean mixture model and the Caron–Fox network model. Numerical implementations are provided to demonstrate the effectiveness and performance of our algorithm.
    JEL: C1
    Date: 2023–01–04
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:117690&r=ecm
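    The ranked-weights intuition can be checked numerically: the Poisson-Dirichlet weights are the stick-breaking (GEM) weights sorted in decreasing order, so truncating a sorted sequence discards less mass than truncating the unsorted sticks. This simulation only illustrates that comparison, not the authors' sampler.

      import numpy as np

      def gem_weights(theta, n, rng):
          # Stick-breaking: w_k = v_k * prod_{j<k} (1 - v_j), v_j ~ Beta(1, theta).
          v = rng.beta(1.0, theta, size=n)
          return v * np.cumprod(np.r_[1.0, 1.0 - v[:-1]])

      rng = np.random.default_rng(5)
      w = gem_weights(theta=5.0, n=5000, rng=rng)   # near-exhaustive stick-breaking
      K = 20
      print("unsorted truncation error:", 1 - w[:K].sum())
      print("sorted truncation error:  ", 1 - np.sort(w)[::-1][:K].sum())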
  13. By: Watanabe, Toshiaki; Nakajima, Jouchi
    Abstract: A new high-frequency realized stochastic volatility model is proposed. Unlike the standard daily-frequency stochastic volatility model, the high-frequency stochastic volatility model is fit to intraday returns and extensively incorporates intraday volatility patterns. The daily realized volatility calculated from intraday returns is incorporated into the high-frequency stochastic volatility model while accounting for the bias in the daily realized volatility caused by microstructure noise. The volatility of intraday returns is assumed to consist of an autoregressive process, a seasonal component capturing the intraday volatility pattern, and an announcement component responding to macroeconomic announcements. A Bayesian method via Markov chain Monte Carlo is developed for the analysis of the proposed model. The empirical analysis using 5-minute returns of E-mini S&P 500 futures provides evidence that our high-frequency realized stochastic volatility model improves in-sample model fit and volatility forecasting over existing models.
    Keywords: Bayesian analysis, High-frequency data, Markov chain Monte Carlo, Realized volatility, Stochastic volatility model, Volatility forecasting
    JEL: C22 C53 C58 G17
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:hit:hiasdp:hias-e-127&r=ecm
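    A toy version of the model's building blocks may help fix ideas: intraday returns with an AR(1) daily log-volatility, a deterministic diurnal pattern, and the daily realized variance computed from the intraday returns. All parameter values are illustrative, and the announcement component and microstructure-noise correction are omitted.

      import numpy as np

      rng = np.random.default_rng(10)
      days, m = 250, 78                             # 78 five-minute intervals per day
      season = 0.5 * np.cos(2 * np.pi * np.arange(m) / m)   # U-shaped diurnal pattern
      h, rv = 0.0, np.empty(days)
      for d in range(days):
          h = 0.95 * h + 0.2 * rng.normal()         # AR(1) daily log-volatility
          sig = np.exp(0.5 * (h + season))          # intraday vol = daily x seasonal
          r = sig * rng.normal(size=m)              # 5-minute returns
          rv[d] = np.sum(r ** 2)                    # realized variance for day d
      print(rv.mean())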
  14. By: Yi-Ting Chen (National Taiwan University); Chu-An Liu (Institute of Economics, Academia Sinica, Taipei, Taiwan)
    Abstract: We propose a model-averaging (MA) method for constructing asymptotically optimal combined forecasts. The asymptotic optimality is defined in terms of approximating an unknown conditional-mean sequence based on the local-to-zero asymptotics. Unlike existing methods, our method is designed for combining a set of forecast sequences generated from a set of predictive regressions, which is more general than combining a set of single forecasts. This design generates essential features that are not shared by related existing methods, and the resulting asymptotically optimal weights may be consistently estimated under suitable conditions. We also assess the forecasting performance of our method using simulation data and real data.
    Keywords: Asymptotic optimality, forecast combination, model averaging
    JEL: C18 C41 C54
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:21-a002&r=ecm
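    For orientation, the basic forecast-combination step that MA methods build on can be written as constrained least squares over candidate forecasts (a Granger-Ramanathan-style sketch, not the paper's local-to-zero criterion).

      import numpy as np

      def combination_weights(F, y):
          # Minimize ||y - F w||^2 subject to sum(w) = 1 via the KKT system.
          M = F.shape[1]
          A = np.block([[F.T @ F, np.ones((M, 1))], [np.ones((1, M)), np.zeros((1, 1))]])
          return np.linalg.solve(A, np.r_[F.T @ y, 1.0])[:M]

      rng = np.random.default_rng(6)
      y = rng.normal(size=300)
      F = np.column_stack([y + rng.normal(scale=s, size=300) for s in (0.5, 1.0, 2.0)])
      w = combination_weights(F, y)
      print(w, ((y - F @ w) ** 2).mean())           # more accurate forecasts get more weight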
  15. By: Sebastiano Michele Zema
    Abstract: I propose a new measure of price discovery, which I refer to as the Independent Component based Information Share (IC-IS). This measure is a variant of the widespread Information Share, with the main difference that it does not suffer from the same identification issues. Under the assumptions of non-normality and independence of the shocks, a rather general theoretical framework leading to the estimation of the IC-IS is illustrated. After testing the robustness of the proposed measure to different non-Normal distributions in a simulated environment, I present an empirical exercise encompassing different price discovery applications.
    Keywords: vector error correction models (VECMs); information shares; market microstructure; independent component analysis; pseudo maximum likelihood; price discovery
    Date: 2023–01–09
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/03&r=ecm
  16. By: Crespo, Marelys; Gadat, Sébastien; Gendre, Xavier
    Abstract: In this paper, we investigate a continuous time version of the Stochastic Langevin Monte Carlo method, introduced in [39], that incorporates a stochastic sampling step inside the traditional overdamped Langevin diffusion. This method is popular in machine learning for sampling posterior distributions. We pay specific attention to the computational cost in terms of $n$ (the number of observations that produce the posterior distribution) and $d$ (the dimension of the ambient space where the parameter of interest lives). We derive our analysis in the weakly convex framework, which is parameterized with the help of the Kurdyka–Łojasiewicz (KL) inequality and permits handling vanishing-curvature settings, far less restrictive than the simple strongly convex case. We establish that the final horizon of simulation needed to obtain an $\varepsilon$-approximation (in terms of entropy) is of the order $(d \log(n)^2)^{(1+r)^2} \left[ \log^2(\varepsilon^{-1}) + n^2 d^{2(1+r)} \log^{4(1+r)}(n) \right]$ with a Poissonian subsampling of parameter $\left( n (d \log^2(n))^{1+r} \right)^{-1}$, where the parameter $r$ is involved in the KL inequality and varies between 0 (strongly convex case) and 1 (limiting Laplace situation).
    Date: 2023–01–16
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:127747&r=ecm
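    A minimal discrete-time sketch of the subsampled Langevin idea the paper analyzes, here as stochastic gradient Langevin dynamics on a toy Gaussian posterior; the uniform mini-batching stands in for the paper's Poissonian subsampling, and all parameter values are illustrative.

      # Stochastic gradient Langevin dynamics on a toy Gaussian posterior.
      import numpy as np

      rng = np.random.default_rng(7)
      data = rng.normal(loc=2.0, size=5000)
      n, step, batch = len(data), 1e-4, 50

      def grad_log_post(theta, idx):
          # Unbiased estimate of the log-posterior gradient, N(0, 10^2) prior.
          return -theta / 100.0 + (n / len(idx)) * np.sum(data[idx] - theta)

      theta, draws = 0.0, []
      for _ in range(20000):
          idx = rng.integers(0, n, size=batch)   # subsample stands in for Poisson sampling
          theta += step * grad_log_post(theta, idx) + np.sqrt(2 * step) * rng.normal()
          draws.append(theta)
      print(np.mean(draws[5000:]))               # close to the posterior mean of about 2.0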
  17. By: André de Palma (THEMA, CNRS, CY Cergy Paris Université); Karim Kilani (LIRSA, Conservatoire National des Arts et Métiers, HESAM Université)
    Abstract: This paper builds upon the work of Professor Marley, who, since the beginning of his long research career, has proposed rigorous axiomatics in the area of probabilistic choice models. Our study concentrates on models that can be applied to best and worst choice scaling experiments. We focus on those among these models that are based on strong assumptions about the underlying ranking of the alternatives with which the individual is assumed to be endowed when making the choice. Taking advantage of an inclusion-exclusion identity that we showed a few years ago, we propose a variety of best-worst choice probability models that could be implemented in software packages that are flourishing in this field.
    Keywords: Best-worst scaling experiments, Logit model, Random utility models, Reverse logit model
    Date: 2022–12–27
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03913928&r=ecm
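    One standard member of the family the paper studies is the maxdiff ("best-worst") logit, with P(best = b, worst = w) = exp(v_b - v_w) / sum over k != l of exp(v_k - v_l); a direct transcription follows.

      import numpy as np

      def best_worst_prob(v, b, w):
          # Probability that alternative b is chosen best and w worst.
          v = np.asarray(v, dtype=float)
          diffs = np.exp(v[:, None] - v[None, :])   # exp(v_k - v_l) for all pairs
          np.fill_diagonal(diffs, 0.0)              # exclude k == l
          return np.exp(v[b] - v[w]) / diffs.sum()

      v = [1.0, 0.2, -0.5, 0.0]
      print(best_worst_prob(v, b=0, w=2))           # the most likely best-worst pair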
  18. By: Maria Nareklishvili; Nicholas Polson; Vadim Sokolov
    Abstract: In this paper, we propose Forest-PLS, a feature selection method for analyzing policy effect heterogeneity in a more flexible and comprehensive manner than is typically available with conventional methods. In particular, our method is able to capture policy effect heterogeneity both within and across subgroups of the population defined by observable characteristics. To achieve this, we employ partial least squares to identify target components of the population and causal forests to estimate personalized policy effects across these components. We show that the method is consistent and yields asymptotically normally distributed policy effect estimates. To demonstrate the efficacy of our approach, we apply it to data from the Pennsylvania Reemployment Bonus Experiments, conducted in 1988–1989. The analysis reveals that financial incentives can motivate some young non-white individuals to enter the labor market. However, these incentives may also provide a temporary financial cushion for others, dissuading them from actively seeking employment. Our findings highlight the need for targeted, personalized measures for young non-white male participants.
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.00251&r=ecm
  19. By: Michael Greenacre
    Abstract: The approach to analysing compositional data with a fixed sum constraint has been dominated by the use of logratio transformations, to ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to permit the logarithmic transformation. A simpler approach is to use the chi-square standardization that is inherent in correspondence analysis. Combined with the Box-Cox power transformation, this standardization defines chi-square distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, and can thus be considered equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chi-square standardization as close as possible to transforming by logratios, without having to substitute the zeros. Especially in the field of high-dimensional "omics" data, this alternative presents such a high level of coherence and isometry as to be a valid, and much simpler, approach to the analysis of compositional data.
    Keywords: Box-Cox transformation, chi-square distance, correspondence analysis, isometry, logratios, Procrustes analysis, subcompositional coherence
    JEL: C19 C88
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1857&r=ecm
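    The limiting argument is easy to verify numerically: the row-centred Box-Cox transform (x^a - 1)/a converges to the centred logratio (clr) as the power a tends to zero, so distances built on it approach logratio distances. Uniform weights are used below for simplicity, whereas the chi-square standardization weights by column masses.

      import numpy as np

      def centred_boxcox(X, a):
          B = (X ** a - 1.0) / a
          return B - B.mean(axis=1, keepdims=True)

      def clr(X):
          L = np.log(X)
          return L - L.mean(axis=1, keepdims=True)

      rng = np.random.default_rng(8)
      X = rng.dirichlet(np.ones(6), size=10)        # strictly positive compositions
      for a in (1.0, 0.5, 0.1, 0.01):
          gap = np.abs(centred_boxcox(X, a) - clr(X)).max()
          print(f"alpha={a}: max deviation from clr = {gap:.4f}")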
  20. By: Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian
    Abstract: Calibrating agent-based models (ABMs) to data is among the most fundamental requirements to ensure the model fulfils its desired purpose. In recent years, simulation-based inference methods have emerged as powerful tools for performing this task when the model likelihood function is intractable, as is often the case for ABMs. In some real-world use cases of ABMs, both the observed data and the ABM output consist of the agents' states and their interactions over time. In such cases, there is a tension between the desire to make full use of the rich information content of such granular data on the one hand, and the need to reduce the dimensionality of the data to prevent difficulties associated with high-dimensional learning tasks on the other. A possible resolution is to construct lower-dimensional time-series through the use of summary statistics describing the macrostate of the system at each time point. However, a poor choice of summary statistics can result in an unacceptable loss of information from the original dataset, dramatically reducing the quality of the resulting calibration. In this work, we instead propose to learn parameter posteriors associated with granular microdata directly using temporal graph neural networks. We will demonstrate that such an approach offers highly compelling inductive biases for Bayesian inference using the raw ABM microstates as output.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:amz:wpaper:2022-30&r=ecm

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.