nep-ecm New Economics Papers
on Econometrics
Issue of 2025–12–15
29 papers chosen by
Sune Karlsson, Örebro universitet


  1. Low-Rank Estimation of Nonlinear Panel Data Models By Kan Yao
  2. Unbiased Estimation of Multi-Way Gravity Models By Lucas Resende; Guillaume Lecué; Lionel Wilner; Philippe Choné
  3. Treatment Effects in the Regression Discontinuity Model with Counterfactual Cutoff and Distorted Running Variables By Moyu Liao
  4. Robust Inference Methods for Latent Group Panel Models under Possible Group Non-Separation By Oguzhan Akgun; Ryo Okui
  5. Identification, estimation and inference in Panel Vector Autoregressions using external instruments By Raimondo Pala
  6. Unobserved Heterogeneous Spillover Effects in Instrumental Variable Models By Huan Wu
  7. Threshold Tensor Factor Model in CP Form By Stevenson Bolivar; Rong Chen; Yuefeng Han
  8. Consistent boundaries for the one-step-ahead forecast error criterion and the AIC in vector autoregressions By Tarek Jouini
  9. ReLU-Based and DNN-Based Generalized Maximum Score Estimators By Xiaohong Chen; Wayne Yuan Gao; Likang Wen
  10. Estimation of High-dimensional Nonlinear Vector Autoregressive Models By Yuefeng Han; Likai Chen; Wei Biao Wu
  11. Improved Inference for Nonparametric Regression By Giuseppe Cavaliere; Sílvia Gonçalves; Morten Ørregaard Nielsen; Edoardo Zanelli
  12. Estimation in high-dimensional linear regression: Post-Double-Autometrics as an alternative to Post-Double-Lasso By Sullivan Hué; Sébastien Laurent; Ulrich Aiounou; Emmanuel Flachaire
  13. Generalized method of moments with partially missing data By Grigory Franguridi; Hyungsik Roger Moon
  14. Microfoundations and the Causal Interpretation of Price-Exposure Designs By Luca Moreno-Louzada; Guilherme Figueira; Pedro Picchetti
  15. Identification of Multivariate Measurement Error Models By Yingyao Hu
  16. A GAMLSS-based Optimal Quantile estimator for Stochastic Frontiers By Francesco Vidoli; Elisa Fusco
  17. A Generalized Control Function Approach to Production Function Estimation By Ulrich Doraszelski; Lixiong Li
  18. Persistent Anomalies and Nonstandard Errors By Coqueret, Guillaume; Pérignon, Christophe
  19. Beyond Parallel Trends: An Identification-Strategy-Robust Approach to Causal Inference with Panel Data By Brantly Callaway; Derek Dyal; Pedro H. C. Sant'Anna; Emmanuel S. Tsyawo
  20. Limit Theorems for Network Data without Metric Structure By Wen Jiang; Yachen Wang; Zeqi Wu; Xingbai Xu
  21. Explainable Machine Learning for Macroeconomic and Financial Nowcasting: A Decision-Grade Framework for Business and Policy By Luca Attolico
  22. Is UWLS Really Better for Medical Research? By Sanghyun Hong; W. Robert Reed
  23. An Unobserved Components Based Test for Asset Price Bubbles By Astill, Sam; Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert
  24. Transferable Utility Matching Beyond Logit: Computation and Estimation with General Heterogeneity By Alfred Galichon; Antoine Jacquet; Georgy Salakhutdinov
  25. Discrete Choice with Endogenous Peer Selection By Nail Kashaev; Natalia Lazzati
  26. Opening the Black Box: Nowcasting Singapore's GDP Growth and its Explainability By Luca Attolico
  27. Visibility-Graph Asymmetry as a Structural Indicator of Volatility Clustering By Michał Sikorski
  28. The Three-Dimensional Decomposition of Volatility Memory By Ziyao Wang; A. Alexandre Trindade; Svetlozar T. Rachev
  29. Re(Visiting) Time Series Foundation Models in Finance By Eghbal Rahimikia; Hao Ni; Weiguan Wang

  1. By: Kan Yao
    Abstract: This paper investigates nonlinear panel regression models with interactive fixed effects and introduces a general framework for parameter estimation under potentially non-convex objective functions. We propose a computationally feasible two-step estimation procedure. In the first step, nuclear-norm regularization (NNR) is used to obtain preliminary estimators of the coefficients of interest, factors, and factor loadings. The second step involves an iterative procedure for post-NNR inference, improving the convergence rate of the coefficient estimator. We establish the asymptotic properties of both the preliminary and iterative estimators. We also study the determination of the number of factors. Monte Carlo simulations demonstrate the effectiveness of the proposed methods in determining the number of factors and estimating the model parameters. In our empirical application, we apply the proposed approach to study the cross-market arbitrage behavior of U.S. nonfinancial firms.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21948
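    The first-step nuclear-norm regularization can be pictured with a simple proximal update: soft-threshold the singular values of the low-rank (factor times loading) component. The sketch below is illustrative only, a basic alternating scheme for a linear panel y_it = x_it'beta + L_it + e_it, not the paper's exact two-step procedure; all names and tuning values are placeholders.
      import numpy as np

      def svt(M, tau):
          # Singular value soft-thresholding: the proximal operator of
          # tau * (nuclear norm), the core of nuclear-norm regularization.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def nnr_panel(Y, X, tau, n_iter=500, step=1e-3):
          # Y: (N, T) outcomes; X: (N, T, K) regressors; tau: penalty level.
          # step must be small relative to the data scale (placeholder value).
          N, T, K = X.shape
          beta, L = np.zeros(K), np.zeros((N, T))
          for _ in range(n_iter):
              R = Y - np.tensordot(X, beta, axes=([2], [0])) - L   # residuals
              beta = beta + step * np.tensordot(X, R, axes=([0, 1], [0, 1]))
              L = svt(L + step * R, step * tau)                    # prox step
          return beta, L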
  2. By: Lucas Resende; Guillaume Lecué; Lionel Wilner; Philippe Choné
    Abstract: Maximum likelihood estimators, such as the Poisson Pseudo-Maximum Likelihood (PPML), suffer from the incidental parameter problem: a bias in the estimation of structural parameters that arises from the joint estimation of structural and nuisance parameters. To address this issue in multi-way gravity models, we propose a novel, asymptotically unbiased estimator. Our method reframes the estimation as a series of classification tasks and is agnostic to both the number and structure of fixed effects. In sparse data environments, common in the network formation literature, it is also computationally faster than PPML. We provide empirical evidence that our estimator yields more accurate point estimates and confidence intervals than PPML and its bias-correction strategies. These improvements hold even under model misspecification and are more pronounced in sparse settings. While PPML remains competitive in dense, low-dimensional data, our approach offers a robust alternative for multi-way models that scales efficiently with sparsity. The method is applied to estimate the effect of a policy reform on spatial accessibility to health care in France.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.02203
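    For orientation, the PPML benchmark the paper improves on can be run in a few lines; here the fixed effects enter as explicit dummies, which is only viable at toy scale (dedicated high-dimensional routines are used in practice). The data-generating process below is made up for illustration.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 30                                   # 30 exporters x 30 importers
      df = pd.DataFrame([(i, j) for i in range(n) for j in range(n) if i != j],
                        columns=["exp", "imp"])
      df["lndist"] = rng.normal(size=len(df))
      df["trade"] = rng.poisson(np.exp(0.5 - 1.0 * df["lndist"]))  # elasticity -1

      FE = pd.get_dummies(df[["exp", "imp"]].astype("category"), drop_first=True)
      X = sm.add_constant(pd.concat([df[["lndist"]], FE], axis=1)).astype(float)
      fit = sm.GLM(df["trade"], X, family=sm.families.Poisson()).fit()
      print(fit.params["lndist"])              # near -1 in this dense design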
  3. By: Moyu Liao
    Abstract: We develop a new framework for evaluating the total policy effect in regression discontinuity designs (RDD), incorporating both the direct effect of treatment on outcomes and the indirect effect arising from distortions in the running variable when treatment becomes available. Our identification strategy combines a conditional parallel trend assumption to recover untreated potential outcomes with a local invariance assumption that characterizes how the running variable responds to counterfactual policy cutoffs. These components allow us to identify and estimate counterfactual treatment effects for any proposed threshold. We construct a nonparametric estimator for the total effect, derive its asymptotic distribution, and propose bootstrap inference procedures. Finally, we apply our framework to the Italian Domestic Stability Pact, where population-based fiscal rules generate both behavioral responses and running-variable distortions.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.22886
  4. By: Oguzhan Akgun; Ryo Okui
    Abstract: This paper presents robust inference methods for general linear hypotheses in linear panel data models with latent group structure in the coefficients. We employ a selective conditional inference approach, deriving the conditional distribution of coefficient estimates given the group structure estimated from the data. Our procedure provides valid inference under possible violations of group separation, where distributional properties of group-specific coefficients remain unestablished. Furthermore, even when group separation does hold, our method demonstrates superior finite-sample properties compared to traditional asymptotic approaches. This improvement stems from our procedure's ability to account for statistical uncertainty in the estimation of group structure. We demonstrate the effectiveness of our approach through Monte Carlo simulations and apply the methods to two datasets: (i) the relationship between income and democracy, and (ii) the cyclicality of firm-level R&D investment.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.18550
  5. By: Raimondo Pala
    Abstract: This paper proposes an identification strategy, inspired by the SVAR-IV literature, that uses external instruments to identify PVARs, and discusses associated issues of identification, estimation, and inference. I introduce a form of local average treatment effect - the $\mu$-LATE - which arises when a continuous instrument targets a binary treatment. Under standard assumptions of independence, exclusion, and monotonicity, I show that externally instrumented PVARs estimate the $\mu$-LATE. Monte Carlo simulations illustrate that confidence sets based on the Anderson-Rubin statistic deliver reliable coverage for impulse responses. As an application, I instrument state-level military spending with the state's share of national spending to estimate the dynamic fiscal multiplier. I find multipliers above unity, with effects concentrated in the contemporaneous year and persisting into the following year.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.19372
  6. By: Huan Wu
    Abstract: This paper develops a general framework for identifying causal effects in settings with spillovers, where both outcomes and endogenous treatment decisions are influenced by peers within a known group. It introduces the generalized local average controlled spillover and direct effects (LACSEs and LACDEs), which extend the local average treatment effect framework to settings with spillovers and establish sufficient conditions for their point identification without restricting the cardinality of the support of instrumental variables. These conditions clarify the necessity of commonly imposed restrictions to achieve point identification with binary instruments in related studies. The paper then defines the marginal controlled spillover and direct effects (MCSEs and MCDEs), which naturally extend the marginal treatment effect framework to settings with spillovers and are nonparametrically point identified from continuous variation in instruments. These marginal effects serve as building blocks for a broad class of policy-relevant treatment effects, including some causal spillover parameters in the related literature. Semiparametric and parametric estimators are developed, and an application using Add Health data reveals heterogeneity in education spillovers within best-friend networks.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.22643
  7. By: Stevenson Bolivar; Rong Chen; Yuefeng Han
    Abstract: This paper proposes a new Threshold Tensor Factor Model in Canonical Polyadic (CP) form for tensor time series. By integrating a thresholding autoregressive structure for the latent factor process into the tensor factor model in CP form, the model captures regime-switching dynamics in the latent factor processes while retaining the parsimony and interpretability of low-rank tensor representations. We develop estimation procedures for the model and establish the theoretical properties of the resulting estimators. Numerical experiments and a real-data application illustrate the practical performance and usefulness of the proposed framework.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.19796
  8. By: Tarek Jouini (Department of Economics, University of Windsor)
    Abstract: We propose an upper bound for the asymptotic approximation of the one-step-ahead forecast mean squared error (MSE) in infinite-order vector autoregression (VAR) settings, i.e., VAR(infinity). Once minimized over a truncation-lag of small order o(T^(1/3)), where T is the sample size, it yields a consistent truncation of the autoregression associated with the efficient one-step forecast error covariance matrix. When the infinite-order process degenerates to a finite-order VAR, we show that the resulting truncation is strongly consistent (eventually asymptotically), given a parameter epsilon >= 2. We particularly note that when epsilon tends to infinity, our order-selection criterion (upper bound) becomes inconsistent, with a variant of it reducing to the Akaike information criterion (AIC). Thus, unlike the final prediction error (FPE) criterion and AIC, our criteria have the good sampling property of being consistent, like those of Hannan and Quinn, and of Schwarz. Compared to conventional criteria, our model-selection procedures not only better handle the multivariate dynamic structure of the time series data, through a compound penalty term that we specify, but also tend to avoid model overfitting in large samples, and hence the singularity problems encountered in practice. Variants of our primary criterion, which in small samples are less parsimonious than AIC in large systems, are also proposed. Besides being strongly consistent asymptotically, they tend to select the actual data-generating process (DGP) most of the time in small samples, as shown with Monte Carlo (MC) simulations.
    Keywords: infinite-order autoregression, truncation-lag, order-selection criterion, time series, strongly consistent asymptotically, Monte Carlo simulation.
    JEL: C13 C14 C15 C18 C22 C24 C32 C34 C51 C52 C53 C62 C63 C82 C83
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:wis:wpaper:2506
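    For context, the conventional criteria the proposed upper bound is benchmarked against (AIC, BIC, HQ, FPE) are available off the shelf; a minimal example with a VAR(1) data-generating process:
      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(1)
      T, k = 500, 3
      A = 0.4 * np.eye(k)                  # true DGP: a VAR(1)
      y = np.zeros((T, k))
      for t in range(1, T):
          y[t] = y[t - 1] @ A.T + rng.normal(size=k)

      sel = VAR(y).select_order(maxlags=12)
      print(sel.selected_orders)           # {'aic': ..., 'bic': ..., 'hqic': ..., 'fpe': ...}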
  9. By: Xiaohong Chen; Wayne Yuan Gao; Likang Wen
    Abstract: We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, while the original maximum score estimator cannot be applied. We establish the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality for the RMS estimator under order-$s$ Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNN.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.19121
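    A minimal sketch of the idea: replace the indicator in Manski's maximum score criterion with a Lipschitz ramp built from two ReLUs, then hand the smoothed criterion to a gradient-based optimizer. The ramp below is one possible ReLU composition and the tuning constants are placeholders; the paper's exact encoding and its DNN reformulation are richer.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      n = 2000
      X = rng.normal(size=(n, 3))
      b0 = np.array([1.0, -0.5, 2.0])                 # true index coefficients
      y = (X @ b0 + rng.logistic(size=n) >= 0).astype(float)

      relu = lambda t: np.maximum(t, 0.0)
      ramp = lambda t, h: relu(t / h) - relu(t / h - 1.0)   # Lipschitz step proxy

      def neg_score(theta, h=0.1):
          # Smoothed maximum score criterion; scale normalization b[0] = 1.
          b = np.concatenate(([1.0], theta))
          return -np.mean((2 * y - 1) * ramp(X @ b, h))

      res = minimize(neg_score, x0=np.zeros(2), method="BFGS")
      print(np.concatenate(([1.0], res.x)))           # roughly b0, up to noise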
  10. By: Yuefeng Han; Likai Chen; Wei Biao Wu
    Abstract: High-dimensional vector autoregressive (VAR) models have numerous applications in fields such as econometrics, biology, and climatology. While prior research has mainly focused on linear VAR models, these approaches can be restrictive in practice. To address this, we introduce a high-dimensional non-parametric sparse additive model, providing a more flexible framework. Our method employs basis expansions to construct high-dimensional nonlinear VAR models. We derive convergence rates and model selection consistency for least squares estimators, considering dependence measures of the processes, error moment conditions, sparsity, and basis expansions. Our theory significantly extends prior linear VAR models by incorporating both non-Gaussianity and non-linearity. As a key contribution, we derive sharp Bernstein-type inequalities for tail probabilities in both non-sub-Gaussian linear and nonlinear VAR processes, which match the classical Bernstein inequality for independent random variables. Additionally, we present numerical experiments that support our theoretical findings and demonstrate the advantages of the nonlinear VAR model for a gene expression time series dataset.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.18641
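    The basis-expansion construction can be mimicked in a few lines: expand each lagged coordinate in a small polynomial basis and fit one lasso regression per equation. This is only a schematic version of the estimator studied in the paper; the basis choice and penalty are placeholders.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(3)
      T, k = 600, 5
      y = np.zeros((T, k))
      for t in range(1, T):                # DGP: own-lag nonlinearity only
          y[t] = 0.3 * np.tanh(y[t - 1]) + rng.normal(scale=0.5, size=k)

      def basis(v, deg=3):                 # polynomial basis, one coordinate
          return np.hstack([v ** d for d in range(1, deg + 1)])

      Z = np.hstack([basis(y[:-1, [j]]) for j in range(k)])   # (T-1, k*deg)
      coefs = np.array([LassoCV(cv=5).fit(Z, y[1:, j]).coef_ for j in range(k)])
      print((np.abs(coefs) > 1e-6).sum(axis=1))   # active basis terms per equation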
  11. By: Giuseppe Cavaliere; Sílvia Gonçalves; Morten Ørregaard Nielsen; Edoardo Zanelli
    Abstract: Nonparametric regression estimators, including those employed in regression-discontinuity designs (RDD), are central to the economist's toolbox. Their application, however, is complicated by the presence of asymptotic bias, which undermines coverage accuracy of conventional confidence intervals. Extant solutions to the problem include debiasing methods, such as the widely applied robust bias-corrected (RBC) confidence interval of Calonico et al. (2014, 2018). We show that this interval is equivalent to a prepivoted interval based on an invalid residual-based bootstrap method. Specifically, prepivoting performs an implicit bias correction while adjusting the nonparametric regression estimator's standard error to account for the additional uncertainty introduced by debiasing. This idea can also be applied to other bootstrap schemes, leading to new implicit bias corrections and corresponding standard error adjustments. We propose a prepivoted interval based on a bootstrap that generates observations from nonparametric regression estimates at each regressor value and show how it can be implemented as an RBC-type interval without the need for resampling. Importantly, we show that the new interval is shorter than the existing RBC interval. For example, with the Epanechnikov kernel, the length is reduced by 17%, while maintaining accurate coverage probability. This result holds irrespective of: (a) the evaluation point being in the interior or on the boundary; (b) the use of 'small' or 'large' bandwidths; (c) the distribution of the regressor and the error term.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.00566
  12. By: Sullivan Hué; Sébastien Laurent; Ulrich Aiounou; Emmanuel Flachaire
    Abstract: Post-Double-Lasso is becoming the most popular method for estimating linear regression models with many covariates when the purpose is to obtain an accurate estimate of a parameter of interest, such as an average treatment effect. However, this method can suffer from substantial omitted variable bias in finite samples. We propose a new method called Post-Double-Autometrics, which is based on Autometrics, and show that this method outperforms Post-Double-Lasso. Its use in a standard application of economic growth sheds new light on the hypothesis of convergence from poor to rich economies.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21257
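    For reference, the Post-Double-Lasso benchmark that the Autometrics-based alternative is compared against amounts to double selection plus OLS; a minimal version on simulated data, with a cross-validated lasso penalty:
      import numpy as np
      from sklearn.linear_model import LassoCV
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n, p = 500, 100
      X = rng.normal(size=(n, p))
      d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # treatment
      y = 1.0 * d + X[:, 0] + rng.normal(size=n)         # true effect: 1.0

      sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)   # outcome lasso
      sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)   # treatment lasso
      W = X[:, np.union1d(sel_y, sel_d)]                      # union of controls
      fit = sm.OLS(y, sm.add_constant(np.column_stack([d, W]))).fit()
      print(fit.params[1], fit.bse[1])                        # coefficient on d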
  13. By: Grigory Franguridi; Hyungsik Roger Moon
    Abstract: We consider a generalized method of moments framework in which a part of the data vector is missing for some units in a completely unrestricted, potentially endogenous way. In this setup, the parameters of interest are usually only partially identified. We characterize the identified set for such parameters using the support function of the convex set of moment predictions consistent with the data. This identified set is sharp, valid for both continuous and discrete data, and straightforward to estimate. We also propose a statistic for testing hypotheses and constructing confidence regions for the true parameter, show that standard nonparametric bootstrap may not be valid, and suggest a fix using the bootstrap for directionally differentiable functionals of Fang and Santos (2019). A set of Monte Carlo simulations demonstrates that both our estimator and the confidence region perform well when samples are moderately large and the data have bounded supports.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21988
  14. By: Luca Moreno-Louzada; Guilherme Figueira; Pedro Picchetti
    Abstract: This paper studies regional exposure designs that use commodity prices as instruments to estimate local effects of aggregate shocks. Unlike standard shift-share designs that leverage differential exposure to many shocks, the price-exposure design relies on exogenous variation from a single shock, leading to challenges for both identification and inference. We motivate the design using a multi-sector labor model. Under the model and a potential outcomes framework, we characterize the 2SLS and TWFE estimands as weighted averages of region- and sector-specific effects plus contamination terms driven by the covariance structure of prices and by general-equilibrium output responses. We derive conditions under which these estimands have a clear causal interpretation and provide simple sensitivity analysis procedures for violations. Finally, we show that standard inference procedures suffer from an overrejection problem in price-exposure designs. We derive a new standard error estimator and show its desirable finite-sample properties through Monte Carlo simulations. In an application to gold mining and homicides in the Amazon, the price-exposure standard errors are roughly twice as large as conventional clustered standard errors, making the main effect statistically insignificant.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.10076
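    The design can be made concrete with a bare-bones simulation: a single aggregate price series interacted with fixed regional exposure shares serves as the instrument in 2SLS. The sketch illustrates the estimand only, not the paper's corrected standard errors; all numbers are made up.
      import numpy as np

      rng = np.random.default_rng(12)
      R, T = 50, 30
      price = rng.normal(size=T).cumsum()               # single aggregate price series
      share = rng.uniform(size=(R, 1))                  # fixed regional exposure
      z = (share * price).ravel()                       # instrument: share x price
      u = rng.normal(size=R * T)                        # confounder
      d = z + u + rng.normal(size=R * T)                # endogenous local variable
      y = 0.8 * d + u + rng.normal(size=R * T)          # true effect: 0.8

      Z = np.column_stack([np.ones(R * T), z])
      d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # first stage
      X2 = np.column_stack([np.ones(R * T), d_hat])
      beta = np.linalg.lstsq(X2, y, rcond=None)[0]      # second stage
      print(beta[1])                                    # ~0.8; OLS of y on d is biased up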
  15. By: Yingyao Hu
    Abstract: This paper develops new identification results for multidimensional continuous measurement-error models where all observed measurements are contaminated by potentially correlated errors and none provides an injective mapping of the latent distribution. Using third-order cross moments, the paper constructs a three-way tensor whose unique decomposition, guaranteed by Kruskal's theorem, identifies the factor loading matrices. Starting with a linear structure, the paper recovers the full distribution of latent factors by constructing suitable measurements and applying scalar or multivariate versions of Kotlarski's identity. As a result, the joint distribution of the latent vector and measurement errors is fully identified without requiring injective measurements, showing that multivariate latent structure can be recovered in broader settings than previously believed. Under injectivity, the paper also provides user-friendly testable conditions for identification. Finally, this paper provides general identification results for nonlinear models using a newly defined generalized Kruskal rank (the signal rank) of integral operators. These results have wide applicability in empirical work involving noisy or indirect measurements, including factor models, survey data with reporting errors, mismeasured regressors in econometrics, and multidimensional latent-trait models in psychology and marketing, potentially enabling more robust estimation and interpretation when clean measurements are unavailable.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.02970
  16. By: Francesco Vidoli (Dipartimento di Economia, Società, Politica, Università degli Studi di Urbino Carlo Bo); Elisa Fusco (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università degli Studi di Firenze)
    Abstract: Efficiency in public services is an equity issue: inefficiency diverts resources from vulnerable populations who depend on public provision, while inaccurate measurement risks confounding structural disadvantage with managerial failure. To address these issues, this paper proposes a new stochastic frontier estimator that combines Generalized Additive Models for Location, Scale and Shape (GAMLSS) with a data-driven optimal quantile criterion. By modelling the full conditional distribution of production outputs/costs, the approach captures non-linearity, heteroskedasticity and asymmetric inefficiency that traditional parametric frontier models cannot accommodate. Monte Carlo experiments, spanning linear, non-linear and endogenous inefficiency designs, show that the GAMLSS optimal quantile estimator systematically outperforms standard SFA and Fan-type corrections. An application to municipal waste management in Italy confirms its empirical advantages, revealing substantial heterogeneity in cost levels and dispersion. Results demonstrate that distributional flexibility is essential for fair benchmarking and targeted policy design in heterogeneous public service sectors.
    Keywords: Stochastic Frontier Analysis; Quantile Regression; Generalized Additive Models for Location, Scale and Shape; Municipal Waste Management
    JEL: C14 C23 D24 Q53
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:fir:econom:wp2025_12
  17. By: Ulrich Doraszelski; Lixiong Li
    Abstract: We develop a generalized control function approach to production function estimation. Our approach accommodates settings in which productivity evolves jointly with other unobservable factors such as latent demand shocks and the invertibility assumption underpinning the traditional proxy variable approach fails. We provide conditions under which the output elasticity of the variable input -- and hence the markup -- is nonparametrically point-identified. A Neyman orthogonal moment condition ensures oracle efficiency of our GMM estimator. A Monte Carlo exercise shows a large bias for the traditional approach that decreases rapidly and nearly vanishes for our generalized control function approach.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21578
  18. By: Coqueret, Guillaume (EMLYON Business School); Pérignon, Christophe (HEC Paris - Finance Department)
    Abstract: This article presents a framework for rigorous inference that accounts for the many methodological choices involved in testing asset pricing anomalies. We demonstrate that running multiple paths on the same original dataset inherently results in high correlation across outcomes, which significantly alters inference. In contrast, path-specific resampling greatly reduces outcome correlations and tightens the confidence interval of the estimated average return. Jointly accounting for across-path and within-path variability allows the variance of the average return to be decomposed into a standard error, a nonstandard error, and a correlation term. In our empirical analysis, we identify 29 persistent anomalies for which the 95% confidence intervals of their average returns exclude zero. Our tests also indicate that, for most anomalies, nonstandard errors dwarf standard errors and, in turn, are the primary determinants of the width of confidence intervals for multi-path average effects.
    Keywords: Asset pricing anomalies; p-hacking; multi-path inference; resampling; research replicability; nonstandard errors
    JEL: C12 C18 C51 G12
    Date: 2025–06–02
    URL: https://d.repec.org/n?u=RePEc:ebg:heccah:1578
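    The decomposition has a simple numerical counterpart: with one estimate per (methodological path, bootstrap resample), the variance of the multi-path average splits exactly into an average within-path piece and a cross-path covariance piece, while the dispersion of path-level estimates captures the nonstandard component. The labels follow the abstract; the paper's estimator details may differ.
      import numpy as np

      rng = np.random.default_rng(5)
      P, B = 20, 1000                              # paths x bootstrap resamples
      common = rng.normal(scale=0.02, size=B)      # shared sampling noise
      est = (rng.normal(scale=0.03, size=(P, 1))   # path-specific level
             + common                              # common across paths
             + rng.normal(scale=0.01, size=(P, B)))  # idiosyncratic noise

      avg = est.mean(axis=0)                       # multi-path average, per resample
      cov = np.cov(est)                            # P x P across-resample covariance
      se2 = np.trace(cov) / P**2                   # average within-path variance
      corr = (cov.sum() - np.trace(cov)) / P**2    # correlation term
      nse2 = est.mean(axis=1).var(ddof=1)          # nonstandard: across-path spread
      print(avg.var(ddof=1), se2 + corr, nse2)     # first two coincide exactly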
  19. By: Brantly Callaway; Derek Dyal; Pedro H. C. Sant'Anna; Emmanuel S. Tsyawo
    Abstract: In this paper, we propose a new approach to causal inference with panel data. Instead of using panel data to adjust for differences in the distribution of unobserved heterogeneity between the treated and comparison groups, we use panel data to search for "close comparison groups" -- groups that are similar to the treated group in terms of pre-treatment outcomes. Then, we compare the outcomes of the treated group to the outcomes of these close comparison groups in post-treatment periods. We show that this approach is often identification-strategy-robust in the sense that our approach recovers the ATT under many different non-nested panel data identification strategies, including difference-in-differences, change-in-changes, or lagged outcome unconfoundedness, among several others. We provide related, though non-nested, results under "time homogeneity", where outcomes do not systematically change over time for any comparison group. Our strategy asks more of the research design -- namely that there exist close comparison groups or time homogeneity (neither of which is required for most existing panel data approaches to causal inference) -- but, when available, leads to more credible inferences.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21977
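    A toy version of the "close comparison group" idea: rank untreated groups by the distance of their pre-treatment outcome paths to the treated group's, keep the closest few, and compare post-period means. Purely illustrative; the paper formalizes closeness and the identification argument.
      import numpy as np

      rng = np.random.default_rng(11)
      G, Tpre = 12, 4
      pre = rng.normal(size=(G, Tpre)).cumsum(axis=1)   # pre-treatment outcomes
      post = pre[:, -1] + rng.normal(scale=0.3, size=G) # untreated post outcomes
      treated = 0
      post[treated] += 1.0                              # true ATT = 1.0

      dist = np.linalg.norm(pre[1:] - pre[treated], axis=1)
      close = 1 + np.argsort(dist)[:3]                  # three closest groups
      print(post[treated] - post[close].mean())         # close-comparison ATT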
  20. By: Wen Jiang; Yachen Wang; Zeqi Wu; Xingbai Xu
    Abstract: This paper develops limit theorems for random variables with network dependence, without requiring individuals in the network to be located in a Euclidean or metric space. This distinguishes our approach from most existing limit theorems in network econometrics, which are based on weak dependence concepts such as strong mixing, near-epoch dependence, and $\psi$-dependence. By relaxing the assumption of an underlying metric space, our theorems can be applied to a broader range of network data, including financial and social networks. To derive the limit theorems, we generalize the concept of functional dependence (also known as physical dependence) from time series to random variables with network dependence. Using this framework, we establish several inequalities, a law of large numbers, and central limit theorems. Furthermore, we verify the conditions for these limit theorems based on primitive assumptions for spatial autoregressive models, which are widely used in network data analysis.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.17928
  21. By: Luca Attolico
    Abstract: Macroeconomic nowcasting sits at the intersection of traditional econometrics, data-rich information systems, and AI applications in business, economics, and policy. Machine learning (ML) methods are increasingly used to nowcast quarterly GDP growth, but adoption in high-stakes settings requires that predictive accuracy be matched by interpretability and robust uncertainty quantification. This article reviews recent developments in macroeconomic nowcasting and compares econometric benchmarks with ML approaches in data-rich and shock-prone environments, emphasizing the use of nowcasts as decision inputs rather than as mere error-minimization exercises. The discussion is organized along three axes. First, we contrast penalized regressions, dimension-reduction techniques, tree ensembles, and neural networks with autoregressive models, Dynamic Factor Models, and Random Walks, emphasizing how each family handles small samples, collinearity, mixed frequencies, and regime shifts. Second, we examine explainability tools (intrinsic measures and model-agnostic XAI methods), focusing on temporal stability, sign coherence, and their ability to sustain credible economic narratives and nowcast revisions. Third, we analyze non-parametric uncertainty quantification via block bootstrapping for predictive intervals and confidence bands on feature importance under serial dependence and ragged edge. We translate these elements into a reference workflow for "decision-grade" nowcasting systems, including vintage management, time-aware validation, and automated reliability audits, and we outline a research agenda on regime-dependent model comparison, bootstrap design for latent components, and temporal stability of explanations. Explainable ML and uncertainty quantification emerge as structural components of a responsible forecasting pipeline, not optional refinements.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.00399
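    The moving block bootstrap mentioned for predictive intervals under serial dependence is easy to sketch: resample overlapping blocks of forecast errors and read interval endpoints off the bootstrap distribution. The block length, coverage level, and interval construction below are illustrative choices, not the survey's prescriptions.
      import numpy as np

      def moving_block_bootstrap(x, block_len, n_boot, rng):
          # Resample a series by concatenating random overlapping blocks,
          # preserving short-range serial dependence within blocks.
          n = len(x)
          blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)
          k = int(np.ceil(n / block_len))
          out = np.empty((n_boot, n))
          for b in range(n_boot):
              idx = rng.integers(0, n - block_len + 1, size=k)
              out[b] = np.concatenate(blocks[idx])[:n]
          return out

      rng = np.random.default_rng(6)
      errs = 0.05 * rng.normal(size=200).cumsum()       # dependent forecast errors
      boot = moving_block_bootstrap(errs, 10, 2000, rng)
      point = 1.5                                       # some point nowcast
      lo, hi = point + np.quantile(boot[:, -1], [0.05, 0.95])
      print(lo, hi)                                     # illustrative 90% interval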
  22. By: Sanghyun Hong (University of Canterbury); W. Robert Reed (University of Canterbury)
    Abstract: This study evaluates the performance of the Unrestricted Weighted Least Squares (UWLS) estimator in meta-analyses of medical research. Using a large-scale simulation approach, it addresses the limitations of model selection criteria in small-sample contexts. Prior research using the Cochrane Database of Systematic Reviews (CDSR) reported that UWLS outperformed Random Effects (RE) and, in some cases, Fixed Effect (FE) estimators when assessed using AIC and BIC. However, we show that idiosyncratic characteristics of the CDSR datasets, notably their small sample sizes and weak-signal settings (where key parameters are often small in magnitude), undermine the reliability of AIC and BIC for model selection. Accordingly, we simulate 108,000 datasets mirroring the original CDSR data. This allows us to know the true model parameters and evaluate the estimators more accurately. While all estimators performed similarly with respect to bias and efficiency, RE consistently produced more accurate standard errors than UWLS, making confidence intervals and hypothesis testing more reliable. The comparison with FE was less clear. We therefore recommend continued use of the RE estimator as a reliable general-purpose approach for medical research, with the choice between UWLS and FE made in light of the likely extent of effect heterogeneity in the data.
    Keywords: Meta-analysis, Unrestricted Weighted Least Squares, Fixed Effect, Random Effects, Medical Research, Cochrane Database of Systematic Reviews, Replication, Robustness Check, Pre-Registration
    JEL: C18 B4 I1
    Date: 2025–11–01
    URL: https://d.repec.org/n?u=RePEc:cbt:econwp:25/13
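    The three estimators compared in the study are short enough to state in code: FE is the precision-weighted mean, UWLS shares its point estimate but rescales the standard error by the root mean squared error of the t-statistic regression, and RE uses a DerSimonian-Laird heterogeneity variance. A compact sketch on simulated effects (the DGP values are placeholders):
      import numpy as np

      def meta(eff, se):
          w = 1.0 / se**2
          fe = np.sum(w * eff) / np.sum(w)         # FE / UWLS point estimate
          fe_se = np.sqrt(1.0 / np.sum(w))
          k = len(eff)
          mse = np.sum(w * (eff - fe)**2) / (k - 1)
          uwls_se = fe_se * np.sqrt(mse)           # UWLS rescales the FE SE
          Q = np.sum(w * (eff - fe)**2)            # DerSimonian-Laird RE below
          tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
          wr = 1.0 / (se**2 + tau2)
          re = np.sum(wr * eff) / np.sum(wr)
          return fe, fe_se, uwls_se, re, np.sqrt(1.0 / np.sum(wr))

      rng = np.random.default_rng(7)
      k, true, tau = 15, 0.2, 0.1                  # small-k, weak-signal setting
      se = rng.uniform(0.05, 0.3, size=k)
      eff = true + rng.normal(scale=tau, size=k) + rng.normal(scale=se)
      print(meta(eff, se))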
  23. By: Astill, Sam; Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert
    Abstract: The general solution of the standard stock pricing equation commonly employed in the finance literature decomposes the price of an asset into the sum of a fundamental price and a bubble component that is explosive in expectation. Despite this, the extant literature on bubble detection focuses almost exclusively on modelling asset prices using a single time-varying autoregressive process, a model which is not consistent with the general solution of the stock pricing equation. We consider a different approach, based on an unobserved components time series model whose components correspond to the fundamental and bubble parts of the general solution. Based on the locally best invariant testing principle, we derive a statistic for testing the null hypothesis that no bubble component is present, against the alternative that a bubble episode occurs in a given subsample of the data. In order to take an ambivalent stance on the possible number and timing of the bubble episodes, our proposed test is based on the maximum of a doubly recursive implementation of this statistic over all possible break dates. Simulation results show that our proposed tests can be significantly more powerful than the industry standard tests developed by Phillips, Shi and Yu (2015).
    Keywords: rational bubbles; unobserved components model; locally best invariant testing principle; double recursion
    Date: 2025–12–04
    URL: https://d.repec.org/n?u=RePEc:esy:uefcwp:42258
  24. By: Alfred Galichon; Antoine Jacquet; Georgy Salakhutdinov
    Abstract: We present a general framework for matching with transferable utility (TU) that accommodates arbitrary heterogeneity without relying on the logit structure. The optimal assignment problem is characterized by a tractable linear programming formulation, allowing flexible error distributions and correlation patterns. We introduce an iterative algorithm that solves large-scale assignment problems with guaranteed convergence and an intuitive economic interpretation, and we show how the same structure supports a simulated moment-matching estimator of the systematic surplus. Experiments using simulated data demonstrate the algorithm's scalability and the estimator's consistency under correct specification, as well as systematic bias arising from logit misspecification.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.23116
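    The LP structure is easy to exhibit: with finite types and a simulated surplus matrix, the optimal TU assignment is the classic linear assignment problem. This shows the optimization skeleton only, not the paper's large-scale iterative algorithm or its moment-matching estimator; the Gumbel draws are just one admissible error distribution.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(8)
      n = 6
      phi = rng.normal(size=(n, n))        # systematic surplus Phi[x, y]
      eps = 0.5 * rng.gumbel(size=(n, n))  # one admissible error distribution
      surplus = phi + eps

      rows, cols = linear_sum_assignment(surplus, maximize=True)
      print(list(zip(rows, cols)), surplus[rows, cols].sum())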
  25. By: Nail Kashaev; Natalia Lazzati
    Abstract: We develop a continuous-time peer-effect discrete choice model where peers that affect the preferences of a given agent are randomly selected based on their previous choices. We characterize the equilibrium behavior and study the empirical content of the model. In the model, changes in the choices of peers affect both the set of peers the agent pays attention to and her preferences over the alternatives. We exploit variation in choices coupled with variation in the size of the set of potential peers to recover agents' preferences and the peer selection mechanism. These nonparametric identification results do not rely on exogenous variation of covariates.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.21446
  26. By: Luca Attolico
    Abstract: Timely assessment of current conditions is essential, especially for small, open economies such as Singapore, where external shocks transmit rapidly to domestic activity. We develop a real-time nowcasting framework for quarterly GDP growth using a high-dimensional panel of approximately 70 economic and financial indicators over 1990Q1-2023Q2. The analysis covers penalized regressions, dimensionality-reduction methods, ensemble learning algorithms, and neural architectures, benchmarked against a Random Walk, an AR(3), and a Dynamic Factor Model. The pipeline preserves temporal ordering through an expanding-window walk-forward design with Bayesian hyperparameter optimization, and uses moving block-bootstrap procedures both to construct prediction intervals and to obtain confidence bands for feature-importance measures. It adopts model-specific and XAI-based explainability tools. A Model Confidence Set procedure identifies statistically superior learners, which are then combined through simple, weighted, and exponentially weighted schemes; the resulting time-varying weights provide an interpretable representation of model contributions. Predictive ability is assessed via Giacomini-White tests. Empirical results show that penalized regressions, dimensionality-reduction models, and GRU networks consistently outperform all benchmarks, with RMSFE reductions of roughly 40-60%; aggregation delivers further gains. Feature-attribution methods highlight industrial production, external trade, and labor-market indicators as dominant drivers of Singapore's short-run growth dynamics.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.02092
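    The expanding-window walk-forward design at the heart of the pipeline reduces to a simple loop: at each origin, fit on all data through t and nowcast t+1, so no future information enters training. The model, sample sizes, and data below are placeholders.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(9)
      T, p = 120, 20                       # ~30 years of quarters, 20 indicators
      X = rng.normal(size=(T, p))
      y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=T)

      preds, start = [], 60                # burn-in sample for the first fit
      for t in range(start, T - 1):
          model = Ridge(alpha=1.0).fit(X[: t + 1], y[: t + 1])   # expanding window
          preds.append(model.predict(X[[t + 1]])[0])             # nowcast t + 1

      rmsfe = np.sqrt(np.mean((y[start + 1:] - np.array(preds)) ** 2))
      print(rmsfe)                         # compare against AR / DFM benchmarks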
  27. By: Michał Sikorski
    Abstract: Volatility clustering is one of the most robust stylized facts of financial markets, yet it is typically detected using moment-based diagnostics or parametric models such as GARCH. This paper shows that clustered volatility also leaves a clear imprint on the time-reversal symmetry of horizontal visibility graphs (HVGs) constructed on absolute returns in physical time. For each time point, we compute the maximal forward and backward visibility distances, $L^{+}(t)$ and $L^{-}(t)$, and use their empirical distributions to build a visibility-asymmetry fingerprint comprising the Kolmogorov-Smirnov distance, variance difference, entropy difference, and a ratio of extreme visibility spans. In a Monte Carlo study, these HVG asymmetry features sharply separate volatility-clustered GARCH(1,1) dynamics from i.i.d. Gaussian noise and from randomly shuffled GARCH series that preserve the marginal distribution but destroy temporal dependence; a simple linear classifier based on the fingerprint achieves about 90% in-sample accuracy. Applying the method to daily S&P 500 data reveals a pronounced forward-backward imbalance, including a variance difference $\Delta\mathrm{Var}$ that exceeds the simulated GARCH values by two orders of magnitude and vanishes after shuffling. Overall, the visibility-graph asymmetry fingerprint emerges as a simple, model-free, and geometrically interpretable indicator of volatility clustering and time irreversibility in financial time series.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.02352
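    The forward and backward visibility distances are straightforward to compute directly from the definition of horizontal visibility (every value strictly between two points must lie below the smaller endpoint). A naive O(n^2) sketch of the fingerprint's ingredients, with a heavy-tailed series standing in for absolute returns:
      import numpy as np
      from scipy.stats import ks_2samp

      def max_vis_forward(x):
          # L(t) = largest s - t such that s is horizontally visible from t,
          # i.e. every value strictly between lies below min(x[t], x[s]).
          n, L = len(x), np.zeros(len(x), dtype=int)
          for t in range(n):
              m = -np.inf                      # running max of interior values
              for s in range(t + 1, n):
                  if m < min(x[t], x[s]):
                      L[t] = s - t
                  m = max(m, x[s])
                  if m >= x[t]:                # nothing further can be visible
                      break
          return L

      rng = np.random.default_rng(10)
      x = np.abs(rng.standard_t(df=4, size=1000))  # proxy for absolute returns
      Lf = max_vis_forward(x)                      # forward distances L+(t)
      Lb = max_vis_forward(x[::-1])[::-1]          # backward distances L-(t)
      print(ks_2samp(Lf, Lb).statistic, Lf.var() - Lb.var())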
  28. By: Ziyao Wang; A. Alexandre Trindade; Svetlozar T. Rachev
    Abstract: This paper develops a three-dimensional decomposition of volatility memory into orthogonal components of level, shape, and tempo. The framework unifies regime-switching, fractional-integration, and business-time approaches within a single canonical representation that identifies how each dimension governs persistence strength, long-memory form, and temporal speed. We establish conditions for existence, uniqueness, and ergodicity of this decomposition and show that all GARCH-type processes arise as special cases. Empirically, applications to SPY and EURUSD (2005-2024) reveal that volatility memory is state-dependent: regime and tempo gates dominate in equities, while fractional-memory gates prevail in foreign exchange. The unified tri-gate model jointly captures these effects. By formalizing volatility dynamics through a level-shape-tempo structure, the paper provides a coherent link between information flow, market activity, and the evolving memory of financial volatility.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.02166
  29. By: Eghbal Rahimikia; Hao Ni; Weiguan Wang
    Abstract: Financial time series forecasting is central to trading, portfolio optimization, and risk management, yet it remains challenging due to noisy, non-stationary, and heterogeneous data. Recent advances in time series foundation models (TSFMs), inspired by large language models, offer a new paradigm for learning generalizable temporal representations from large and diverse datasets. This paper presents the first comprehensive empirical study of TSFMs in global financial markets. Using a large-scale dataset of daily excess returns across diverse markets, we evaluate zero-shot inference, fine-tuning, and pre-training from scratch against strong benchmark models. We find that off-the-shelf pre-trained TSFMs perform poorly in zero-shot and fine-tuning settings, whereas models pre-trained from scratch on financial data achieve substantial forecasting and economic improvements, underscoring the value of domain-specific adaptation. Increasing the dataset size, incorporating synthetic data augmentation, and applying hyperparameter tuning further enhance performance.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.18578

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.