nep-ecm New Economics Papers
on Econometrics
Issue of 2025–12–08
twelve papers chosen by
Sune Karlsson, Örebro universitet


  1. Semiparametric Estimation of Fractional Integration: An Evaluation of Local Whittle Methods By Jason R. Blevins
  2. Bayesian Nonparametric Models for Conditional Densities Based on Orthogonal Polynomials By Andriy Norets; Marco Stenborg Petterson
  3. Moderate Time-Varying Parameter VARs By Celani, Alessandro; Pedini, Luca
  4. Empirical Likelihood for Random Forests and Ensembles By Harold D. Chiang; Yukitoshi Matsushita; Taisuke Otsu
  5. On Evolution-Based Models for Experimentation Under Interference By Sadegh Shirani; Mohsen Bayati
  6. Long memory in the marginalized time series of a VAR revisited By del Barrio Castro, Tomas; Sanso Rossello, Andreu; Sibbertsen, Philipp
  7. Understanding IV Versus OLS Estimates of Treatment Effects and the Coefficient Difference Check By Bjerk, David J.
  8. Efficiency Bound for Social Interaction Models with Network Structures By Ryota Ishikawa
  9. Misaligned by Design: Incentive Failures in Machine Learning By David Autor; Andrew Caplin; Daniel J. Martin; Philip Marx
  10. The Quantum Network of Assets: A Non-Classical Framework for Market Correlation and Structural Risk By Hui Gong; Akash Sedai; Francesca Medda
  11. Beyond the single binary choice format for eliciting willingness to accept: Evidence from a field study on onshore wind farms By Fanghella, Valeria; Fezzi, Carlo; Schleich, Joachim; Sebi, Carine
  12. Institutional Learning and Volatility Transmission in ASEAN Equity Markets: A Network-Integrated Regime-Dependent Approach By Junlin Yang

  1. By: Jason R. Blevins
    Abstract: Fractionally integrated time series exhibiting long memory are commonly found in economics, finance, and related fields. Semiparametric methods for estimating the memory parameter $d$ have proven to be effective and robust, but practitioners face difficulties arising from the availability of multiple estimators with different valid parameter ranges and the choice of bandwidth parameter $m$. This paper provides a comprehensive evaluation of local Whittle methods from Robinson's (1995) foundational estimator through the exact local Whittle approaches of Shimotsu and Phillips (2005) and Shimotsu (2010), where theoretical advances have expanded the feasible range of memory parameters and improved efficiency. Using a new implementation in Python, PyELW, we replicate key empirical and Monte Carlo results from the literature, providing external validation for both the original findings and the software implementation. We extend these empirical applications to demonstrate how method choice can affect substantive conclusions about persistence. Based on comprehensive simulation comparisons and empirical evidence, we provide practical guidance for applied researchers on how and when to use each method.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.15689
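    As a hedged illustration of the method family surveyed above (a from-scratch sketch, not PyELW's actual API), here is Robinson's (1995) local Whittle objective in Python, concentrated in the memory parameter d and minimized over a bounded range:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def local_whittle(x, m):
          """Estimate d from the first m Fourier frequencies (Robinson, 1995)."""
          n = len(x)
          lam = 2 * np.pi * np.arange(1, m + 1) / n    # Fourier frequencies
          I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)  # periodogram

          def R(d):  # concentrated local Whittle objective
              return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

          # note: standard asymptotics hold only on a restricted range of d,
          # which is one motivation for the exact local Whittle variants
          return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

      rng = np.random.default_rng(0)
      d_hat = local_whittle(rng.standard_normal(2000), m=int(2000 ** 0.65))  # near 0

    The bandwidth m = n^0.65 here is only a common illustrative choice; guidance on choosing m and on when to prefer the exact local Whittle variants is the paper's substantive contribution.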
  2. By: Andriy Norets (Brown University); Marco Stenborg Petterson (University of Naples Federico II and CSEF)
    Abstract: The paper considers a nonparametric Bayesian model for conditional densities. The model is a mixture of orthogonal polynomials with a prior on the number of components. The use of orthogonal polynomials allows for a great deal of flexibility in applications while maintaining useful approximation properties. We provide the posterior contraction rate in the case of Legendre polynomials. The proposed algorithm permits cross-dimensional moves, allowing it to choose the optimal number of terms in the series expansion conditional on a penalty parameter. We also provide Monte Carlo simulations showing how well the model approximates known distributions, even in finite samples.
    Keywords: Bayesian nonparametrics, orthogonal polynomials, variable dimensions model
    JEL: C11 C14 C13
    Date: 2024–12–01
    URL: https://d.repec.org/n?u=RePEc:sef:csefwp:744
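    To illustrate the approximation property the abstract appeals to (a deterministic Legendre projection only, not the authors' Bayesian sampler), a smooth density on [-1, 1] can be expanded in Legendre polynomials via Gauss-Legendre quadrature:

      import numpy as np
      from numpy.polynomial import legendre as L

      def legendre_coeffs(f, K):
          # c_k = (2k+1)/2 * integral of f(x) P_k(x) over [-1, 1]
          x, w = L.leggauss(4 * K)              # quadrature nodes and weights
          P = L.legvander(x, K - 1)             # P_0, ..., P_{K-1} at the nodes
          return (np.arange(K) + 0.5) * ((P.T * f(x)) @ w)

      f = lambda x: 0.75 * (1 - x ** 2)         # a smooth density on [-1, 1]
      c = legendre_coeffs(f, K=8)
      grid = np.linspace(-1, 1, 201)
      max_err = np.max(np.abs(L.legval(grid, c) - f(grid)))  # tiny for smooth f

    In the mixture model described above, the number of terms K carries a prior and is sampled; here it is fixed purely for illustration.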
  3. By: Celani, Alessandro (Örebro University School of Business); Pedini, Luca (Fondazione ENI Enrico Mattei (FEEM))
    Abstract: This paper proposes a parsimonious reparametrization for time-varying parameter models that captures smooth dynamics through a low-dimensional state process combined with B-spline weights. We apply this framework to TVP-VARs, yielding Moderate TVP-VARs that retain the interpretability of standard specifications while mitigating overfitting. Monte Carlo evidence shows faster estimation, lower bias, and strong robustness to knot placement. In U.S. macroeconomic data, moderate specifications recover meaningful long-run movements, produce stable impulse responses and deliver superior density forecasts and predictive marginal likelihoods relative to conventional TVP-VARs, particularly in high-dimensional settings.
    Keywords: Time-Varying Parameter models; High-dimensional Vector Autoregressions; Stochastic Volatility; B-splines; Macroeconomic Forecasting
    JEL: C11 C33 C53
    Date: 2025–12–02
    URL: https://d.repec.org/n?u=RePEc:hhs:oruesi:2025_016
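    A minimal sketch of the reparametrization idea, assuming an illustrative knot grid (the paper's prior and estimation steps are not shown): a time-varying coefficient path is a B-spline design matrix times a low-dimensional weight vector.

      import numpy as np
      from scipy.interpolate import BSpline

      T, k, n_knots = 200, 3, 8                 # sample size, cubic splines
      t_grid = np.linspace(0.0, 1.0, T)         # time rescaled to [0, 1]
      inner = np.linspace(0.0, 1.0, n_knots)
      knots = np.r_[np.repeat(inner[0], k), inner, np.repeat(inner[-1], k)]
      B = BSpline.design_matrix(t_grid, knots, k).toarray()  # T x (n_knots + k - 1)

      w = np.random.default_rng(1).normal(size=B.shape[1])   # low-dim weights
      beta_t = B @ w        # one smooth coefficient path; a TVP-VAR stacks many

    Estimating a handful of weights per coefficient instead of T latent states is what makes the specification "moderate".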
  4. By: Harold D. Chiang; Yukitoshi Matsushita; Taisuke Otsu
    Abstract: We develop an empirical likelihood (EL) framework for random forests and related ensemble methods, providing a likelihood-based approach to quantify their statistical uncertainty. Exploiting the incomplete $U$-statistic structure inherent in ensemble predictions, we construct an EL statistic that is asymptotically chi-squared when subsampling induced by incompleteness is not overly sparse. Under sparser subsampling regimes, the EL statistic tends to over-cover due to loss of pivotality; we therefore propose a modified EL that restores pivotality through a simple adjustment. Our method retains key properties of EL while remaining computationally efficient. Theory for honest random forests and simulations demonstrate that modified EL achieves accurate coverage and practical reliability relative to existing inference methods.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.13934
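    For readers unfamiliar with the EL machinery, here is a minimal sketch of Owen-style empirical likelihood for a scalar mean (the paper's incomplete-U-statistic version for forests is more involved):

      import numpy as np
      from scipy.optimize import brentq

      def el_stat(x, mu):
          """-2 log EL ratio for H0: E[X] = mu; asymptotically chi-squared(1)."""
          z = x - mu
          n = len(z)
          lo = (1 / n - 1) / z.max()            # bracket keeping all weights valid
          hi = (1 / n - 1) / z.min()
          lam = brentq(lambda l: np.mean(z / (1 + l * z)), lo + 1e-8, hi - 1e-8)
          return 2 * np.sum(np.log1p(lam * z))

      rng = np.random.default_rng(0)
      stat = el_stat(rng.standard_normal(500), 0.0)  # compare to chi2(1) quantile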
  5. By: Sadegh Shirani; Mohsen Bayati
    Abstract: Causal effect estimation in networked systems is central to data-driven decision making. In such settings, interventions on one unit can spill over to others, and in complex physical or social systems, the interaction pathways driving these interference structures remain largely unobserved. We argue that for identifying population-level causal effects, it is not necessary to recover the exact network structure; instead, it suffices to characterize how those interactions contribute to the evolution of outcomes. Building on this principle, we study an evolution-based approach that investigates how outcomes change across observation rounds in response to interventions, hence compensating for missing network information. Using an exposure-mapping perspective, we give an axiomatic characterization of when the empirical distribution of outcomes follows a low-dimensional recursive equation, and identify minimal structural conditions under which such evolution mappings exist. We frame this as a distributional counterpart to difference-in-differences. Rather than assuming parallel paths for individual units, it exploits parallel evolution patterns across treatment scenarios to estimate counterfactual trajectories. A key insight is that treatment randomization plays a role beyond eliminating latent confounding; it induces an implicit sampling from hidden interference channels, enabling consistent learning about heterogeneous spillover effects. We highlight causal message passing as an instantiation of this method in dense networks while extending to more general interference structures, including influencer networks where a small set of units drives most spillovers. Finally, we discuss the limits of this approach, showing that strong temporal trends or endogenous interference can undermine identification.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21675
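    A purely illustrative toy (all functional forms hypothetical, not the authors' model) of the abstract's key object: if the mean outcome evolves by one low-dimensional recursion indexed by the treatment fraction pi, counterfactual trajectories can be compared without observing the network.

      import numpy as np

      def step(mu, pi, a=0.7, b=1.5):
          # hypothetical evolution mapping for the mean outcome
          return a * mu + b * pi

      T = 10
      paths = {pi: [0.0] for pi in (0.0, 0.3, 1.0)}
      for pi, path in paths.items():
          for _ in range(T):
              path.append(step(path[-1], pi))
      tte = paths[1.0][-1] - paths[0.0][-1]     # total-treatment-effect analogue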
  6. By: del Barrio Castro, Tomas; Sanso Rossello, Andreu; Sibbertsen, Philipp
    Abstract: In this paper we provide an alternative explanation for the presence of long memory in the marginalized time series of an autoregressive system, in situations earlier explored by Bauwens, Chevillon and Laurent (2023) and Chevillon, Hecq, and Laurent (2018): the near cancellation of the damped trend shared by all the time series of the VAR(1) used in these papers with the MA(1) part of the data generating process followed by the marginalized time series. For a given time dimension T, the long memory observed in the marginalized time series depends on the number of time series in the VAR(1) system, but not on the specific value of the main diagonal of the VAR(1) coefficient matrix, as stated in Chevillon, Hecq, and Laurent (2018) and Bauwens, Chevillon and Laurent (2023). Our results are based on the properties of circulant matrices and the Vector Moving Average representation of the VAR(1) model used in those two papers. Finally, a Monte Carlo experiment supports our analytical findings.
    Keywords: Long Memory, Marginalized Time Series, Damped Trend
    JEL: C3
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:han:dpaper:dp-742
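    An illustrative simulation (the coefficient values are ours, not the papers'): a stable VAR(1) with a circulant coefficient matrix whose marginalized first component shows the slowly decaying autocorrelations that can be mistaken for long memory.

      import numpy as np
      from scipy.linalg import circulant

      N, T = 50, 5000
      A = circulant(np.r_[0.6, np.full(N - 1, 0.39 / (N - 1))])  # row sums 0.99
      rng = np.random.default_rng(2)
      x, y = np.zeros(N), np.empty(T)
      for t in range(T):
          x = A @ x + rng.standard_normal(N)
          y[t] = x[0]                           # the marginalized series

      acf = [np.corrcoef(y[:-h], y[h:])[0, 1] for h in (1, 10, 50, 100)]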
  7. By: Bjerk, David J. (Claremont McKenna College)
    Abstract: This article derives an equation characterizing the difference between OLS and IV coefficients under potentially heterogeneous treatment effects. This leads to what I call the Coefficient Difference Check, which consists of checking that the difference between the estimated OLS and IV coefficients has the same sign as the expected selection effect. I show that failures of this check can arise because the IV is invalid, the expected selection story is incorrect, or there are particular heterogeneous treatment effects that imply the IV estimate is both “fragile” and a more biased estimate of the ATT than OLS. Failures of this check are relatively common in the literature. I describe best practices given such failures.
    Keywords: adjudicator propensity to treat IV, judge fixed-effects, average treatment-on-the-treated, heterogeneous treatment effects, selection, instrumental variables, examiner tendency IV, returns to schooling
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18274
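    A worked numerical sketch of the check on simulated data (names and magnitudes illustrative): with positive selection into treatment and a valid instrument, the OLS coefficient should exceed the IV coefficient.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      z = rng.binomial(1, 0.5, n)               # valid binary instrument
      u = rng.standard_normal(n)                # unobserved confounder
      d = (0.5 * z + u + rng.standard_normal(n) > 0.5).astype(float)
      y = 1.0 * d + u + rng.standard_normal(n)  # true effect 1, positive selection

      ols = np.cov(y, d)[0, 1] / np.cov(y, d)[1, 1]
      iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]   # Wald estimator
      check_passes = (ols - iv) > 0             # sign matches expected selection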
  8. By: Ryota Ishikawa (Graduate School of Economics, Waseda University)
    Abstract: Bramoullé et al. (2009) considered a linear social interaction model with network structures under complete information. However, their model is not appropriate when individual outcomes are not completely observed, or not precisely predictable, by the other individuals in the same group. In this paper, we consider a linear social interaction model with network structures under incomplete information and derive the efficiency bound. The efficiency bound for this model has not been derived before. We also provide a sufficient condition for the existence of the efficiency bound.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:wap:wpaper:2524
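    For context, a minimal simulation of the complete-information benchmark of Bramoullé et al. (2009) that the abstract departs from (parameter values illustrative); the incomplete-information model studied in the paper replaces realized peer outcomes with expectations.

      import numpy as np

      rng = np.random.default_rng(4)
      n, lam, beta = 200, 0.3, 1.0
      Adj = rng.binomial(1, 0.05, (n, n))       # random directed network
      np.fill_diagonal(Adj, 0)
      G = Adj / np.maximum(Adj.sum(1, keepdims=True), 1)  # row-normalized
      X, eps = rng.standard_normal(n), rng.standard_normal(n)
      # y = lam * G y + beta * X + eps, solved via its reduced form
      y = np.linalg.solve(np.eye(n) - lam * G, beta * X + eps)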
  9. By: David Autor; Andrew Caplin; Daniel J. Martin; Philip Marx
    Abstract: The cost of error in many high-stakes settings is asymmetric: misdiagnosing pneumonia when absent is an inconvenience, but failing to detect it when present can be life-threatening. Accordingly, artificial intelligence (AI) models used to assist such decisions are frequently trained with asymmetric loss functions that incorporate human decision-makers' trade-offs between false positives and false negatives. In two focal applications, we show that this standard alignment practice can backfire. In both cases, it would be better to train the machine learning model with a loss function that ignores the human’s objective and then adjust predictions ex post according to that objective. We rationalize this result using an economic model of incentive design with endogenous information acquisition. The key insight from our theoretical framework is that machine classifiers perform not one but two incentivized tasks: choosing how to classify and learning how to classify. We show that while the adjustments engineers use correctly incentivize choosing, they can simultaneously reduce the incentives to learn. Our formal treatment of the problem reveals that methods embraced for their intuitive appeal can in fact misalign human and machine objectives in predictable ways.
    JEL: C1 D8
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34504
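    A toy contrast of the two pipelines the abstract compares (illustrative, using scikit-learn rather than the paper's applications): (a) reweight the training loss by the cost ratio, versus (b) fit an unweighted model and move the decision threshold ex post. For a calibrated model with costs c_fn and c_fp, the ex post rule predicts positive when p > c_fp / (c_fp + c_fn).

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      X = rng.standard_normal((5000, 3))
      p = 1 / (1 + np.exp(-(X @ np.array([1.0, -1.0, 0.5]))))
      y = rng.binomial(1, p)
      c_fn, c_fp = 5.0, 1.0                     # a missed positive is 5x worse

      weighted = LogisticRegression(class_weight={0: c_fp, 1: c_fn}).fit(X, y)
      plain = LogisticRegression().fit(X, y)
      yhat_a = weighted.predict(X)                                  # pipeline (a)
      yhat_b = plain.predict_proba(X)[:, 1] > c_fp / (c_fp + c_fn)  # pipeline (b)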
  10. By: Hui Gong; Akash Sedai; Francesca Medda
    Abstract: Classical correlation matrices capture only linear and pairwise co-movements, leaving higher-order, nonlinear, and state-dependent interactions of financial markets unrepresented. This paper introduces the Quantum Network of Assets (QNA), a density-matrix-based framework that embeds cross-asset dependencies into a quantum-information representation. The approach does not assume physical quantum effects but uses the mathematical structure of density operators, entropy, and mutual information to describe market organisation at a structural level. Within this framework we define two structural measures: the Entanglement Risk Index (ERI), which summarises global non-separability and the compression of effective market degrees of freedom, and the Quantum Early-Warning Signal (QEWS), which tracks changes in entropy to detect latent information build-up. These measures reveal dependency geometry that classical covariance-based tools cannot capture. Using NASDAQ-100 data from 2024–2025, we show that quantum entropy displays smoother evolution and clearer regime distinctions than classical entropy, and that ERI rises during periods of structural tightening even when volatility remains low. Around the 2025 US tariff announcement, QEWS shows a marked pre-event increase in structural tension followed by a sharp collapse after the announcement, indicating that structural transitions can precede price movements without implying predictive modelling. QNA therefore provides a structural diagnostic of market fragility, regime shifts, and latent information flow. The framework suggests new directions for systemic risk research by linking empirical asset networks with tools from quantum information theory.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21515
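    One simple construction in this spirit (an assumption on our part; the paper's exact mapping may differ): normalize a correlation matrix by its trace to obtain a unit-trace positive semidefinite matrix, i.e. a density matrix, and compute its von Neumann entropy.

      import numpy as np

      rng = np.random.default_rng(6)
      returns = rng.standard_normal((250, 20))  # placeholder for asset returns
      C = np.corrcoef(returns, rowvar=False)    # classical correlation matrix
      rho = C / np.trace(C)                     # unit trace, PSD: a density matrix
      evals = np.clip(np.linalg.eigvalsh(rho), 1e-12, None)
      S = -np.sum(evals * np.log(evals))        # von Neumann entropy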
  11. By: Fanghella, Valeria; Fezzi, Carlo; Schleich, Joachim; Sebi, Carine
    Abstract: This study assesses the incentive compatibility of different elicitation formats for estimating willingness to accept (WTA) in the field. We assess the convergent validity of standard and theory-driven (i.e., based on mechanism-design theory) versions of the double-bounded binary choice (DB) and open-ended (OE) formats against the single binary choice (SBC) format. Our empirical application, developed in collaboration with a major energy company, is based on estimating compensation for the installation of wind farms in respondents' municipalities of residence. We find strong evidence against convergent validity for both versions of the OE format. In comparison, both versions of the DB format, especially the theory-driven version, yield WTA estimates similar to those of the SBC, ranging from near zero for supporters of wind power to €1500–€1800 for opponents. Finally, we introduce a novel econometric approach that allows the utility of compensation to be non-linear when estimating WTA (and WTP) from binary choices.
    Keywords: contingent valuation, willingness to accept, wind farm, mechanism design
    JEL: C10 C93 D60 H41 Q51
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:rwirep:331884
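    For reference, the standard SBC estimation step looks as follows (a textbook sketch with simulated data, not the authors' non-linear-utility estimator): a respondent accepts when WTA <= bid, so with WTA ~ N(mu, sigma^2) the acceptance probability is Phi((bid - mu) / sigma), a probit in the bid.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n, mu, sigma = 2000, 1600.0, 600.0
      bid = rng.choice([500, 1000, 1500, 2000, 2500], n).astype(float)
      accept = (mu + sigma * rng.standard_normal(n) <= bid).astype(int)

      fit = sm.Probit(accept, sm.add_constant(bid)).fit(disp=0)
      a, b = fit.params
      mu_hat, sigma_hat = -a / b, 1 / b         # mean WTA and scale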
  12. By: Junlin Yang
    Abstract: This paper investigates how institutional learning and regional spillovers shape volatility dynamics in ASEAN equity markets. Using daily data for Indonesia, Malaysia, the Philippines, and Thailand from 2010 to 2024, we construct a high-frequency institutional learning index via a MIDAS-EPU approach. Unlike existing studies that treat institutional quality as a static background characteristic, this paper models institutions as a dynamic mechanism that reacts to policy shocks, information pressure, and crisis events. Building on this perspective, we introduce two new volatility frameworks: the Institutional Response Dynamics Model (IRDM), which embeds crisis memory, policy shocks, and information flows; and the Network-Integrated IRDM (N-IRDM), which incorporates dynamic-correlation and institutional-similarity networks to capture cross-market transmission. Empirical results show that institutional learning amplifies short-run sensitivity to shocks yet accelerates post-crisis normalization. Crisis-memory terms explain prolonged volatility clustering, while network interactions improve tail behavior and short-horizon forecasts. Robustness checks using placebo and lagged networks indicate that spillovers reflect a strong regional common factor rather than dependence on specific correlation topologies. Diebold-Mariano and ENC-NEW tests confirm that the N-IRDM significantly outperforms baseline GARCH benchmarks. The findings highlight a dual role of institutions and offer policy insights on transparency enhancement, macroprudential communication, and coordinated regional governance.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.19824
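    A minimal sketch of the Diebold-Mariano comparison used above, assuming squared-error losses and a simple truncated long-run variance (the paper's exact implementation may differ):

      import numpy as np
      from scipy.stats import norm

      def dm_test(e1, e2, h=1):
          """DM statistic for equal predictive accuracy of two forecast errors."""
          d = e1 ** 2 - e2 ** 2                 # loss differential
          n = len(d)
          gamma = [np.cov(d[k:], d[:n - k])[0, 1] for k in range(h)]
          lrv = gamma[0] + 2 * sum(gamma[1:])   # h-1 autocovariance lags
          stat = d.mean() / np.sqrt(lrv / n)
          return stat, 2 * norm.sf(abs(stat))   # two-sided p-value

      rng = np.random.default_rng(8)
      stat, p = dm_test(rng.standard_normal(500), 1.1 * rng.standard_normal(500))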

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.