nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒05‒20
seventeen papers chosen by
Sune Karlsson, Örebro universitet


  1. Implied probability kernel block bootstrap for time series moment condition models By Paulo Parente; Richard J. Smith
  2. Stratifying on Treatment Status By Jinyong Hahn; John Ham; Geert Ridder; Shuyang Sheng
  3. Partial Identification of Heteroskedastic Structural VARs: Theory and Bayesian Inference By Helmut Lütkepohl; Fei Shang; Luis Uzeda; Tomasz Woźniak
  4. Multiply-Robust Causal Change Attribution By Victor Quintas-Martinez; Mohammad Taha Bahadori; Eduardo Santiago; Jeff Mu; Dominik Janzing; David Heckerman
  5. Testing Mechanisms By Soonwoo Kwon; Jonathan Roth
  6. The modified conditional sum-of-squares estimator for fractionally integrated models By Mustafa R. Kılınç; Michael Massmann
  7. Overfitting Reduction in Convex Regression By Zhiqiang Liao; Sheng Dai; Eunji Lim; Timo Kuosmanen
  8. Matching to Suppliers in the Production Network: an Empirical Framework By Alonso Alfaro-Urena; Paolo Zacchia
  9. Two-step Estimation of Network Formation Models with Unobserved Heterogeneities and Strategic Interactions By Shaomin Wu
  10. Characterisation and Calibration of Multiversal Models By Cantone, Giulio Giacomo; Tomaselli, Venera
  11. Deep learning for multivariate volatility forecasting in high-dimensional financial time series. By Rei Iwafuchi; Yasumasa Matsuda
  12. Examiner and Judge Designs in Economics: A Practitioner's Guide By Eric Chyn; Brigham Frandsen; Emily C. Leslie
  13. Recovering Overlooked Information in Categorical Variables with LLMs: An Application to Labor Market Mismatch By Yi Chen; Hanming Fang; Yi Zhao; Zibo Zhao
  14. Belief Bias Identification By Pedro Gonzalez-Fernandez
  15. Ups and (Draw)Downs By Tommaso Proietti
  16. Optimal parallel sequential change detection under generalized performance measures By Lu, Zexian; Chen, Yunxiao; Li, Xiaoou
  17. On the Asymmetric Volatility Connectedness By Abdulnasser Hatemi-J

  1. By: Paulo Parente; Richard J. Smith
    Abstract: This article generalizes and extends the kernel block bootstrap (KBB) method of Parente and Smith (2018, 2021) to provide a comprehensive treatment of its use for GMM estimation and inference in time-series models formulated in terms of moment conditions. KBB procedures that employ bootstrap distributions with generalised empirical likelihood implied probabilities as probability mass points are also considered. The first-order asymptotic validity of new KBB estimators and test statistics for over-identifying moments, additional moment constraints and parametric restrictions is established. Their empirical distributions may serve as practical alternative approximations to those of GMM estimators and statistics and to other bootstrap distributions in the extant literature. Simulation experiments reveal that critical values arising from the empirical distributions of some KBB test statistics are more accurate than those from standard first-order asymptotic theory.
    Date: 2024–04–25
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:08/24&r=ecm
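The core resampling idea can be sketched in a few lines: draw overlapping blocks of the series, taper each block with a kernel, and recompute the statistic on each pseudo-series. The sketch below (Python/NumPy, Bartlett taper, sample mean as the statistic) is illustrative only; the paper's full KBB additionally uses generalised empirical likelihood implied probabilities as resampling weights, which are omitted here.

```python
import numpy as np

def kernel_block_bootstrap_means(x, block_len, n_boot, rng=None):
    """Bootstrap the sample mean of a time series by resampling
    overlapping blocks, each tapered with a Bartlett (triangular)
    kernel. Simplified sketch of the kernel-block-bootstrap idea."""
    rng = np.random.default_rng(rng)
    n = len(x)
    # triangular taper over the block, normalised to average 1
    w = 1.0 - np.abs(np.arange(block_len) - (block_len - 1) / 2) / ((block_len + 1) / 2)
    w = w / w.sum() * block_len
    starts = np.arange(n - block_len + 1)   # admissible block start points
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n_blocks, replace=True)
        series = np.concatenate([x[s:s + block_len] * w for s in picks])[:n]
        means[b] = series.mean()
    return means
```

The empirical distribution of the returned means then serves as the bootstrap approximation to the sampling distribution of the statistic.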
  2. By: Jinyong Hahn; John Ham; Geert Ridder; Shuyang Sheng
    Abstract: We investigate the estimation of treatment effects from a sample that is stratified on the binary treatment status. In the case of unconfounded assignment where the potential outcomes are independent of the treatment given covariates, we show that standard estimators of the average treatment effect are inconsistent. In the case of an endogenous treatment and a binary instrument, we show that the IV estimator is inconsistent for the local average treatment effect. In both cases, we propose simple alternative estimators that are consistent in stratified samples, assuming that the fraction treated in the population is known or can be estimated.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.04700&r=ecm
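The reweighting idea behind such alternative estimators can be illustrated simply: when the sample is stratified on treatment status, weight each unit by the ratio of its stratum's population share to its sample share, so that weighted statistics are consistent for population quantities. The functions below are a hypothetical sketch of this logic, not the authors' exact estimators.

```python
import numpy as np

def stratification_weights(d, pop_treated_share):
    """Weights that undo stratification on a binary treatment status:
    each unit gets (population share of its stratum) divided by
    (sample share of its stratum). Illustrative sketch only."""
    d = np.asarray(d, dtype=float)
    samp_share = d.mean()
    return np.where(d == 1,
                    pop_treated_share / samp_share,
                    (1 - pop_treated_share) / (1 - samp_share))

def weighted_ols(y, X, w):
    """Weighted least squares coefficients (X should include an
    intercept column); usable with the weights above."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

With half the sample treated but a population treated share of 0.3, treated units receive weight 0.6 and controls 1.4.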
  3. By: Helmut Lütkepohl (Freie Universität Berlin and DIW Berlin); Fei Shang (South China University of Technology and Yuexiu Capital Holdings Group); Luis Uzeda (Bank of Canada); Tomasz Woźniak (University of Melbourne)
    Abstract: We consider structural vector autoregressions identified through stochastic volatility. Our focus is on whether a particular structural shock is identified by heteroskedasticity without the need to impose any sign or exclusion restrictions. Three contributions emerge from our exercise: (i) a set of conditions under which the matrix containing structural parameters is partially or globally unique; (ii) a statistical procedure to assess the validity of the conditions mentioned above; and (iii) a shrinkage prior distribution for conditional variances centred on a hypothesis of homoskedasticity. Such a prior ensures that the evidence for identifying a structural shock comes only from the data and is not favoured by the prior. We illustrate our new methods using a U.S. fiscal structural model.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.11057&r=ecm
  4. By: Victor Quintas-Martinez; Mohammad Taha Bahadori; Eduardo Santiago; Jeff Mu; Dominik Janzing; David Heckerman
    Abstract: Comparing two samples of data, we observe a change in the distribution of an outcome variable. In the presence of multiple explanatory variables, how much of the change can be explained by each possible cause? We develop a new estimation strategy that, given a causal model, combines regression and re-weighting methods to quantify the contribution of each causal mechanism. Our proposed methodology is multiply robust, meaning that it still recovers the target parameter under partial misspecification. We prove that our estimator is consistent and asymptotically normal. Moreover, it can be incorporated into existing frameworks for causal attribution, such as Shapley values, which will inherit the consistency and large-sample distribution properties. Our method demonstrates excellent performance in Monte Carlo simulations, and we show its usefulness in an empirical application.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.08839&r=ecm
  5. By: Soonwoo Kwon; Jonathan Roth
    Abstract: Economists are often interested in the mechanisms by which a particular treatment affects an outcome. This paper develops tests for the "sharp null of full mediation" that the treatment $D$ operates on the outcome $Y$ only through a particular conjectured mechanism (or set of mechanisms) $M$. A key observation is that if $D$ is randomly assigned and has a monotone effect on $M$, then $D$ is a valid instrumental variable for the local average treatment effect (LATE) of $M$ on $Y$. Existing tools for testing the validity of the LATE assumptions can thus be used to test the sharp null of full mediation when $M$ and $D$ are binary. We develop a more general framework that allows one to test whether the effect of $D$ on $Y$ is fully explained by a potentially multi-valued and multi-dimensional set of mechanisms $M$, allowing for relaxations of the monotonicity assumption. We further provide methods for lower-bounding the size of the alternative mechanisms when the sharp null is rejected. An advantage of our approach relative to existing tools for mediation analysis is that it does not require stringent assumptions about how $M$ is assigned; on the other hand, our approach helps to answer different questions than traditional mediation analysis by focusing on the sharp null rather than estimating average direct and indirect effects. We illustrate the usefulness of the testable implications in two empirical applications.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.11739&r=ecm
  6. By: Mustafa R. Kılınç; Michael Massmann
    Abstract: In this paper, we analyse the influence of estimating a constant term on the bias of the conditional sum-of-squares (CSS) estimator in a stationary or non-stationary type-II ARFIMA ($p_1$, $d$, $p_2$) model. We derive expressions for the estimator's bias and show that the leading term can be easily removed by a simple modification of the CSS objective function. We call this new estimator the modified conditional sum-of-squares (MCSS) estimator. We show theoretically and by means of Monte Carlo simulations that its performance relative to that of the CSS estimator is markedly improved even for small sample sizes. Finally, we revisit three classical short datasets that have in the past been described by ARFIMA($p_1$, $d$, $p_2$) models with constant term, namely the post-World War II real GNP data, the extended Nelson-Plosser data, and the Nile data.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.12882&r=ecm
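For context, the plain (unmodified) CSS estimator of a type-II ARFIMA(0, d, 0) can be sketched as follows: fractionally difference the series at each candidate d and minimise the sum of squared residuals. The grid-search sketch below is illustrative only; the paper's MCSS modification of the objective function, which removes the leading bias term when a constant is estimated, is not reproduced here.

```python
import numpy as np

def frac_diff(y, d):
    """Type-II fractional difference: e_t = sum_{j=0}^{t} pi_j(d) * y_{t-j},
    with pi_0 = 1 and the recursion pi_j = pi_{j-1} * (j - 1 - d) / j."""
    n = len(y)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return np.array([pi[:t + 1][::-1] @ y[:t + 1] for t in range(n)])

def css_estimate_d(y, grid=np.linspace(-0.49, 1.49, 199)):
    """Plain CSS estimator of d for an ARFIMA(0, d, 0): minimise the
    conditional sum of squared residuals over a grid of d values."""
    sos = [np.sum(frac_diff(y, d) ** 2) for d in grid]
    return grid[int(np.argmin(sos))]
```

Because the type-II operators compose exactly on finite samples, a series with memory d can be simulated by applying `frac_diff` with `-d` to white noise.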
  7. By: Zhiqiang Liao; Sheng Dai; Eunji Lim; Timo Kuosmanen
    Abstract: Convex regression is a method for estimating an unknown function $f_0$ from a data set of $n$ noisy observations when $f_0$ is known to be convex. This method has played an important role in operations research, economics, machine learning, and many other areas. It has been empirically observed that the convex regression estimator produces inconsistent estimates of $f_0$ and extremely large subgradients near the boundary of the domain of $f_0$ as $n$ increases. In this paper, we provide theoretical evidence of this overfitting behaviour. We also prove that the penalised convex regression estimator, one of the variants of the convex regression estimator, exhibits overfitting behaviour. To eliminate this behaviour, we propose two new estimators by placing a bound on the subgradients of the estimated function. We further show that our proposed estimators do not exhibit the overfitting behaviour by proving that (a) they converge to $f_0$ and (b) their subgradients converge to the gradient of $f_0$, both uniformly over the domain of $f_0$ with probability one as $n \rightarrow \infty$. We apply the proposed methods to compute the cost frontier function for Finnish electricity distribution firms and confirm their superior performance in predictive power over some existing methods.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.09528&r=ecm
  8. By: Alonso Alfaro-Urena; Paolo Zacchia
    Abstract: This paper develops a framework for the empirical analysis of the determinants of input supplier choice on the extensive margin using firm-to-firm transaction data. Building on a theoretical model of production network formation, we characterize the assumptions that enable a transformation of the multinomial logit likelihood function from which the seller fixed effects, which encode the seller marginal costs, vanish. This transformation conditions, for each subnetwork restricted to one supplier industry, on the out-degree of sellers (a sufficient statistic for the seller fixed effect) and the in-degree of buyers (which is pinned down by technology and by “make-or-buy” decisions). This approach delivers a consistent estimator for the effect of dyadic explanatory variables, which in our model are interpreted as matching frictions, on the supplier choice probability. The estimator is easy to implement and in Monte Carlo simulations it outperforms alternatives based on group fixed effects. In an empirical application about the effect of a major Costa Rican infrastructural project on firm-to-firm connections, our approach yields estimates typically much smaller in magnitude than those from naive multinomial logit.
    Keywords: Production network, Supplier choice, Conditional logit, Infrastructures
    JEL: C25 L11 R12 R15
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp775&r=ecm
  9. By: Shaomin Wu
    Abstract: In this paper, I characterize the network formation process as a static game of incomplete information, where the latent payoff of forming a link between two individuals depends on the structure of the network, as well as on private information about agents' attributes. I allow agents' private unobserved attributes to be correlated with observed attributes through individual fixed effects. Using data from a single large network, I propose a two-step estimator for the model primitives. In the first step, I estimate agents' equilibrium beliefs about other people's choice probabilities. In the second step, I plug the first-step estimator into the conditional choice probability expression and estimate the model parameters and the unobserved individual fixed effects jointly by maximum likelihood. Assuming that the observed attributes are discrete, I show that the first-step estimator is uniformly consistent with rate $N^{-1/4}$, where $N$ is the total number of linking proposals. I also show that the second-step estimator converges asymptotically to a normal distribution at the same rate.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.12581&r=ecm
  10. By: Cantone, Giulio Giacomo; Tomaselli, Venera
    Abstract: The Multiverse is a framework for multi-model estimation in which data are fit on many connected specifications of the same abstract model instead of a single specification or a small selection. Unlike canonical multi-model analysis, in the Multiverse the probabilities of the specifications being included in the analysis are never assumed independent of each other. Grounded in this consideration, this study provides a compact statistical characterisation of the process of eliciting the specifications in Multiverse Analysis and conceptually adjacent methods, connecting previous insights from meta-analytical Statistics, model averaging, Network Theory, Information Theory, and Causal Inference. The calibration of multiversal estimates is treated with reference to the adoption of Bayesian Model Averaging versus alternatives. In an application, we check the theory that Bayesian Model Averaging reduces error for well-specified multiversal models but amplifies it when a collider variable is included in the multiversal model. In well-specified models, alternatives do not perform significantly better than uniform weighting of the estimates, so the adoption of a gold standard remains ambiguous. Normative implications of misinterpreting the epistemic value of Multiverse Analysis are discussed, along with connections between the proposed characterisation and future directions of research.
    Date: 2024–04–24
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:pa98g&r=ecm
  11. By: Rei Iwafuchi; Yasumasa Matsuda
    Abstract: The market for investment trusts of large-scale portfolios, including index funds, continues to grow, and high-dimensional volatility estimation is essential for assessing the risks of such portfolios. However, multivariate volatility models suitable for high-dimensional data have not been extensively studied. This paper introduces a new framework based on the Spatial AR model, which provides fast and stable estimation, and demonstrates its application through simulations using historical data from the S&P 500.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:141&r=ecm
  12. By: Eric Chyn; Brigham Frandsen; Emily C. Leslie
    Abstract: This article provides empirical researchers with an introduction and guide to research designs based on variation in judge and examiner tendencies to administer treatments or other interventions. We review the basic theory behind the research design, outline the assumptions under which the design identifies causal effects, describe empirical tests of those assumptions, and discuss tradeoffs associated with choices researchers must make for estimation. We demonstrate concepts and best practices concretely in an empirical case study that uses an examiner tendency research design to study the effects of pre-trial detention.
    JEL: C21 C26 C31 C54 K14
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32348&r=ecm
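The workhorse construction in this design is the leave-one-out examiner leniency instrument: each case is instrumented with the average treatment rate of its assigned examiner, computed excluding the case itself. Below is a minimal sketch (just-identified IV with no controls); real applications add covariates and cluster-robust inference.

```python
import numpy as np

def leave_one_out_leniency(judge_id, d):
    """Leave-one-out mean treatment rate of each unit's judge/examiner,
    the standard 'examiner stringency' instrument."""
    judge_id = np.asarray(judge_id)
    d = np.asarray(d, dtype=float)
    z = np.empty_like(d)
    for j in np.unique(judge_id):
        m = judge_id == j
        tot, cnt = d[m].sum(), m.sum()
        z[m] = (tot - d[m]) / (cnt - 1)   # exclude the unit's own case
    return z

def iv_wald(y, d, z):
    """Just-identified IV (Wald) estimate: cov(z, y) / cov(z, d)."""
    z_c = z - z.mean()
    return (z_c @ (y - y.mean())) / (z_c @ (d - d.mean()))
```

Each judge needs at least two cases for the leave-one-out average to be defined; screening on caseload is a standard preliminary step.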
  13. By: Yi Chen; Hanming Fang; Yi Zhao; Zibo Zhao
    Abstract: Categorical variables have no intrinsic ordering, and researchers often adopt a fixed-effect (FE) approach in empirical analysis. However, this approach has two significant limitations: it overlooks textual labels associated with the categorical variables; and it produces unstable results when there are only limited observations in a category. In this paper, we propose a novel method that utilizes recent advances in large language models (LLMs) to recover overlooked information in categorical variables. We apply this method to investigate labor market mismatch. Specifically, we task LLMs with simulating the role of a human resources specialist to assess the suitability of an applicant with specific characteristics for a given job. Our main findings can be summarized in three parts. First, using comprehensive administrative data from an online job posting platform, we show that our new match quality measure is positively correlated with several traditional measures in the literature, and at the same time, we highlight the LLM's capability to provide additional information conditional on the traditional measures. Second, we demonstrate the broad applicability of the new method with survey data containing significantly less information than the administrative data, which makes it impossible to compute most of the traditional match quality measures. Our LLM measure successfully replicates most of the salient patterns observed in a hard-to-access administrative dataset using easily accessible survey data. Third, we investigate the gender gap in match quality and explore whether gender stereotypes exist in the hiring process. We simulate an audit study, examining whether revealing gender information to LLMs influences their assessment. We show that when gender information is disclosed to the GPT, the model deems females better suited for traditionally female-dominated roles.
    JEL: C55 J16 J24 J31
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32327&r=ecm
  14. By: Pedro Gonzalez-Fernandez
    Abstract: This paper proposes a unified theoretical model to identify and test a comprehensive set of probabilistic updating biases within a single framework. The model achieves separate identification by focusing on the updating of belief distributions, rather than classic point-belief measurements. Testing the model in a laboratory experiment reveals significant heterogeneity at the individual level: All tested biases are present, and each participant exhibits at least one identifiable bias. Notably, motivated-belief biases (optimism and pessimism) and sequence-related biases (gambler's fallacy and hot hand fallacy) are identified as key drivers of biased inference. Moreover, at the population level, base rate neglect emerges as a persistent influence. This study contributes to the belief-updating literature by providing a methodological toolkit for researchers examining links between different conflicting biases, or exploring connections between updating biases and other behavioural phenomena.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.09297&r=ecm
  15. By: Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata")
    Abstract: The concept of drawdown quantifies the potential loss in the value of a financial asset when it deviates from its historical peak. It plays an important role in evaluating market risk, portfolio construction, assessing risk-adjusted performance and trading strategies. This paper introduces a novel measurement framework that produces, along with the drawdown and its dual (the drawup), two Markov chain processes representing the current lead time with respect to the running maximum and minimum, i.e., the number of time units elapsed since the most recent peak or trough. Under relatively unrestrictive assumptions regarding the returns process, the chains are homogeneous and ergodic. We show that, together with the distribution of asset returns, they determine the properties of the drawdown and drawup time series, in terms of size, serial correlation, persistence and duration. Furthermore, they form the foundation of a new algorithm for dating peaks and troughs of the price process delimiting bear and bull market phases. The other contributions of this paper deal with out-of-sample prediction and robust estimation of the drawdown.
    Keywords: Financial time series; risk measures; dating bear and bull markets
    JEL: C22 C58 E32
    Date: 2024–05–03
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:576&r=ecm
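The two objects at the core of the framework, the drawdown and the lead time with respect to the running maximum, are straightforward to compute from a price series; the drawup and the trough lead time are symmetric. A minimal sketch:

```python
import numpy as np

def drawdown_and_leadtime(price):
    """Relative drawdown (loss from the running maximum) and lead time
    (periods elapsed since the most recent running maximum). The
    drawup/trough versions are obtained symmetrically from the minimum."""
    price = np.asarray(price, dtype=float)
    run_max = np.maximum.accumulate(price)
    dd = (run_max - price) / run_max        # relative drawdown
    lead = np.empty(len(price), dtype=int)
    k = 0
    for t, at_peak in enumerate(price >= run_max):
        k = 0 if at_peak else k + 1         # reset counter at a new peak
        lead[t] = k
    return dd, lead
```

For the path 100, 110, 99, 104.5, 121 the drawdown is 10% then 5% after the peak at 110, and the lead-time chain resets to zero at each new maximum.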
  16. By: Lu, Zexian; Chen, Yunxiao; Li, Xiaoou
    Abstract: This paper considers the detection of change points in parallel data streams, a problem widely encountered when analyzing large-scale real-time streaming data. Each stream may have its own change point, at which its data has a distributional change. With sequentially observed data, a decision maker needs to declare whether changes have already occurred to the streams at each time point. Once a stream is declared to have changed, it is deactivated permanently so that its future data will no longer be collected. This is a compound decision problem in the sense that the decision maker may want to optimize certain compound performance metrics that concern all the streams as a whole. Thus, the decisions are not independent for different streams. Our contribution is three-fold. First, we propose a general framework for compound performance metrics that includes the ones considered in the existing works as special cases and introduces new ones that connect closely with the performance metrics for single-stream sequential change detection and large-scale hypothesis testing. Second, data-driven decision procedures are developed under this framework. Finally, optimality results are established for the proposed decision procedures. The proposed methods and theory are evaluated by simulation studies and a case study.
    Keywords: large-scale inference; multiple change detection; sequential analysis; multiple hypothesis testing
    JEL: C1
    Date: 2022–12–22
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118348&r=ecm
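A common building block for such procedures is a per-stream sequential statistic, e.g., a one-sided CUSUM, with a stream deactivated once its statistic crosses a threshold. The single-stream sketch below is illustrative only; the paper's contribution lies in how alarms across streams are combined under compound performance metrics, which is not reproduced here.

```python
def cusum_detect(stream, drift=0.5, threshold=8.0):
    """One-sided CUSUM for an upward mean shift in a single stream:
    accumulate (x - drift), floored at zero, and alarm when the
    statistic exceeds the threshold. Returns the first alarm time
    (index) or None if no alarm occurs."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + x - drift)
        if s > threshold:
            return t
    return None
```

In the parallel setting, one such statistic runs per stream, and the compound decision rule governs which thresholds to use and when to declare changes jointly.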
  17. By: Abdulnasser Hatemi-J
    Abstract: Connectedness measures the degree to which a time-series variable spills volatility over to other variables relative to the rate at which it receives volatility from them. The idea is based on the percentage of variance decomposition from one variable to the others, estimated using a VAR model. Diebold and Yilmaz (2012, 2014) suggested estimating this simple and useful measure of percentage risk spillover impact. Their method is, however, symmetric by nature. The current paper offers an alternative asymmetric approach for measuring the direction of volatility spillover, based on estimating the asymmetric variance decompositions introduced by Hatemi-J (2011, 2014). This approach accounts explicitly for asymmetry in the estimations, which accords better with reality. An application is provided to capture the potential asymmetric volatility spillover impacts between the three largest financial markets in the world.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.12997&r=ecm
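The asymmetry enters by decomposing the underlying shocks into cumulative positive and negative components before the variance decomposition is computed. A minimal sketch of that first step (the subsequent VAR estimation and forecast-error variance decomposition are omitted):

```python
import numpy as np

def asymmetric_components(returns):
    """Split a shock/return series into cumulative positive and negative
    components, the decomposition underlying asymmetric causality and
    connectedness analysis. Sketch of the first step only."""
    e = np.asarray(returns, dtype=float)
    pos = np.cumsum(np.maximum(e, 0.0))   # cumulative positive shocks
    neg = np.cumsum(np.minimum(e, 0.0))   # cumulative negative shocks
    return pos, neg
```

Each component series is then fed into its own VAR, and spillovers are read off the variance decompositions separately for good and bad news.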

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.