| NEP: New Economics Papers on Econometrics |
| By: | Timothy J. Vogelsang |
| Abstract: | This paper develops inference methods for ratios of deterministic trend slopes in systems of pairs of time series. Hypotheses based on linear cross-equation restrictions are considered with particular interest in tests that trend ratios are equal across pairs of trending series. Tests of equal ratios can be used for the empirical assessment of climate models through comparisons of trend ratios (amplification ratios) of model generated temperature series and observed temperature series. The analysis in this paper builds on the estimation and inference methods developed by Vogelsang and Nawaz (2017, Journal of Time Series Analysis) for a single pair of trending time series. Because estimators of ratios can have poor finite sample properties when the trend slopes are small relative to variation around the trends, tests of equal trend ratios are restated in terms of products of trend slopes, leading to inference that is less affected by small trend slopes. Asymptotic theory is developed that can be used to generate critical values. For tests of equal trend ratios, finite sample performance is assessed using simulations. Practical advice is provided for empirical practitioners. An empirical application compares amplification ratios (trend ratios) across a set of five groups of observed global temperature series. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.23482 |
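A minimal sketch of the product restatement mentioned in the abstract above, in assumed notation: let pair $i$ consist of two trending series with slopes $\beta_i$ and $\gamma_i$, so the trend (amplification) ratio is $\beta_i/\gamma_i$. For two pairs, equality of ratios can be rewritten without division, provided the denominators are nonzero:

$$ H_0:\ \frac{\beta_1}{\gamma_1} = \frac{\beta_2}{\gamma_2} \quad\Longleftrightarrow\quad \beta_1\gamma_2 - \beta_2\gamma_1 = 0. $$

The product form on the right is what keeps inference well behaved when the slopes $\gamma_i$ are small relative to the variation around the trends.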
| By: | Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Yifeng Chen (Department of Economics, Nanyang Technological University, Singapore 639798); Seok Young Hong (Department of Economics, Nanyang Technological University, Singapore 639798); Daniel Tsvetanov (Norwich Business School, University of East Anglia, Norwich NR4 7TJ, UK) |
| Abstract: | We develop an empirical likelihood framework for testing return predictability in the conditional mean and conditional quantiles. A unified chi-square limit theory is established across a broad spectrum of predictor persistence, including stationary, mildly integrated, nearly integrated, unit-root, and mildly explosive cases. We provide two complementary approaches to handle the unknown intercept: (i) a sample-splitting approach under relaxed regularity conditions and (ii) a new two-stage method that improves efficiency and accommodates quantile inference, where sample-splitting is infeasible. We examine the finite-sample bias of the two-stage method, and propose a bias-correction scheme and gradually saturated weights that improve performance under high persistence. Simulation evidence demonstrates that our tests exhibit competitive size and power across persistence classes, with notable gains in quantile predictability. An empirical application to the U.S. stock market shows modest evidence of mean predictability, whereas quantile-based inference reveals stronger and economically relevant predictability in the tails of the return distribution. |
| Keywords: | Predictive Mean Regression; Predictive Quantile Regression; Empirical Likelihood; Bartlett Bias Correction. |
| JEL: | C12 C32 C51 C52 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:kan:wpaper:202609 |
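For context, the generic empirical likelihood construction behind tests of this kind (standard notation, not specific to the paper): with estimating function $g(X_t, \theta)$ satisfying $E[g(X_t, \theta_0)] = 0$,

$$ R(\theta) = \max\Big\{ \prod_{t=1}^{n} n w_t \;:\; w_t \ge 0,\ \sum_{t=1}^{n} w_t = 1,\ \sum_{t=1}^{n} w_t\, g(X_t, \theta) = 0 \Big\}, $$

and $-2\log R(\theta_0)$ converges to a chi-square limit under regularity conditions. The paper's contribution is a unified chi-square limit of this type across the persistence regimes listed above, together with the intercept treatments and bias corrections.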
| By: | Martin Huber; Sarina Joy Oberhänsli |
| Abstract: | We propose a difference-in-differences (DiD) framework with mediation for possibly multivalued discrete or continuous treatments and mediators, aimed at identifying the direct effect of the treatment on the outcome (net of effects operating through the mediator), the indirect effect via the mediator, and the joint effects of treatment and mediator, consistent with the framework of dynamic treatment effects. Identification relies on a conditional parallel trends assumption imposed on the mean potential outcome across treatment and mediator states, or (depending on the causal parameter) additionally on the mean potential outcomes and potential mediator distributions across treatment states. We propose ATET estimators for repeated cross sections and panel data within the double/debiased machine learning framework, which allows for data-driven control of covariates, and we establish their asymptotic normality under standard regularity conditions. We investigate the finite-sample performance of the proposed methods in a simulation study and illustrate our approach in an empirical application to the US National Longitudinal Survey of Youth, estimating the direct effect of health care coverage on general health as well as the indirect effect operating through routine checkups. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.23877 |
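One illustrative way to write a parallel trends restriction of the kind described above (the notation is ours, not necessarily the paper's exact condition): for potential outcomes $Y_t(d, m)$ indexed by treatment $d$ and mediator $m$,

$$ E[Y_t(0,0) - Y_{t-1}(0,0) \mid D = d, M = m, X] = E[Y_t(0,0) - Y_{t-1}(0,0) \mid D = 0, M = 0, X], $$

so that, conditional on covariates $X$, untreated-unmediated outcome trends are common across treatment-mediator cells.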
| By: | Alain Guay; Dalibor Stevanovic |
| Abstract: | This paper develops a spectral framework for identification, estimation, and inference in non-Gaussian Structural Vector Autoregressive (SVAR) models using higher-order cumulants. Under independence or the absence of cross-cumulants, cumulant tensors of whitened innovations admit an orthogonal decomposition whose singular vectors recover the structural shocks. Identification is therefore governed by the spectral geometry of the population cumulant tensor. In particular, separation of tensor singular values provides a quantitative measure of identification strength through explicit perturbation bounds linking estimation error to the inverse singular-value gap. This characterization yields asymptotic normality under strong identification and nonstandard limits under local-to-weak identification sequences. We derive asymptotic distributions for tensor SVD estimators and show how statistically identified subsystems can be completed using conventional structural restrictions. Monte Carlo experiments and empirical applications illustrate the finite-sample properties and empirical relevance of the approach. |
| Keywords: | Non-Gaussian SVAR, tensor decomposition, cumulants |
| JEL: | C12 C32 C51 |
| Date: | 2026–03–09 |
| URL: | https://d.repec.org/n?u=RePEc:cir:cirwor:2026s-02 |
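A minimal numerical sketch of the objects the abstract describes (hypothetical bivariate design with chi-square shocks; not the authors' estimator): form the third-order cumulant tensor of whitened innovations and inspect the singular values of its mode-1 unfolding, whose separation the paper links to identification strength.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 2, 5000
    # Hypothetical independent, non-Gaussian structural shocks (centered chi-square)
    eps = (rng.chisquare(3, size=(T, n)) - 3.0) / np.sqrt(6.0)
    B = np.array([[1.0, 0.5], [0.3, 1.0]])           # hypothetical impact matrix
    u = eps @ B.T                                     # reduced-form innovations

    # Whiten the innovations so their sample covariance is the identity
    L = np.linalg.cholesky(np.cov(u.T))
    e = u @ np.linalg.inv(L).T

    # Third-order cumulant tensor C[i, j, k] = E[e_i e_j e_k] (zero mean)
    C = np.einsum('ti,tj,tk->ijk', e, e, e) / T

    # Mode-1 unfolding and SVD: the singular-value gap gauges identification
    # strength, and the left singular vectors estimate the orthogonal rotation
    M = C.reshape(n, n * n)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    print("singular values of the unfolded cumulant tensor:", s)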
| By: | Nathan Canen; Shantanu Chadha |
| Abstract: | In the linear-in-means model, endogeneity arises naturally due to the reflection problem. A common solution is to use Instrumental Variables (IVs) based on higher-order network links, such as using friends-of-friends' characteristics. We first show that such instruments are unlikely to work well in many applied settings: in very sparse or very dense networks, friends-of-friends may be similar to the original links. This implies that the IVs may be weak or their first-stage estimand may be undefined. For a class of random graphs, we use random graph theory and characterize regimes where such instruments perform well, and when they would not. We show how weak-IV robust inference can be adapted to this environment, and how scaling the network can help. We provide extensive Monte Carlo simulations and revisit empirical applications, showing the prevalence of such issues in empirical practice, and how our results restore valid inference. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.24215 |
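A small simulation of the dense-network channel flagged above (hypothetical Erdos-Renyi design): when links are dense, friends-of-friends exposure is nearly collinear with friends' exposure, leaving little independent variation for the higher-order instrument.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    x = rng.normal(size=n)                         # exogenous characteristics

    for p in (0.01, 0.5):                          # sparse vs dense link probability
        A = (rng.random((n, n)) < p).astype(float)
        np.fill_diagonal(A, 0.0)
        deg = A.sum(axis=1)
        deg[deg == 0] = 1.0                        # guard isolated nodes
        G = A / deg[:, None]                       # row-normalized adjacency
        z1 = G @ x                                 # friends' characteristics
        z2 = G @ G @ x                             # friends-of-friends instrument
        # High correlation means the higher-order instrument adds little
        # variation beyond the one-step exposure, i.e. a weak first stage.
        print(f"p={p}: corr(friends, friends-of-friends) = "
              f"{np.corrcoef(z1, z2)[0, 1]:.3f}")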
| By: | Gianna Fenaroli |
| Abstract: | Two key identifying assumptions used to justify difference-in-differences are parallel trends and no anticipation, yet both may fail in practice. I propose a class of assumptions on anticipation and derive closed-form, sharp bounds on the average treatment effect on the treated while simultaneously relaxing parallel trends. Deviations from both assumptions are jointly disciplined using observed pre-trends. When some anticipation is imposed, the identified set under joint deviations can be shorter than under parallel trends violations alone. These bounds inform a sensitivity analysis assessing the robustness of qualitative conclusions to anticipation and parallel trends violations. I illustrate with an empirical application. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00868 |
| By: | Aleksei Nemtyrev; Otilia Boldea |
| Abstract: | Local projection (LP) and structural vector autoregression (SVAR) are commonly employed to estimate dynamic causal effects of macroeconomic policies at multiple horizons. With enough lags as controls, LP estimators have little bias but their variance can increase with the horizon due to accumulating additional shocks. Because they employ fewer lags or suffer from local misspecification, SVAR estimators typically incur higher bias, but their variance decreases with the horizon due to exponentiation. We propose to target the LP estimators towards their SVAR counterparts - constructed with fewer lags than LP at each horizon - to reduce their variance at the cost of incurring some bias. The resulting targeted LP estimator is a linear combination of the LP and SVAR estimators. We propose choosing this linear combination optimally to minimize the mean-squared error of the new estimator. Our simulations show that, under a locally misspecified SVAR model, targeting substantially reduces the LP variance at longer horizons while maintaining near-nominal coverage in small samples when a double bootstrap is employed. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00248 |
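A stylized version of the targeting idea (our simplification, ignoring the LP-SVAR covariance): at horizon $h$, combine

$$ \hat\beta_h(\lambda) = (1-\lambda)\,\hat\beta^{\mathrm{LP}}_h + \lambda\,\hat\beta^{\mathrm{SVAR}}_h, $$

and if LP is unbiased with variance $V_h$ while SVAR has bias $b_h$ and variance $v_h$, then $\mathrm{MSE}(\lambda) = (1-\lambda)^2 V_h + \lambda^2 (v_h + b_h^2)$ is minimized at $\lambda^\ast = V_h / (V_h + v_h + b_h^2)$: more weight shifts to the SVAR at long horizons, where $V_h$ grows.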
| By: | Yanqin Fan; Carlos A. Manzanares; Hyeonseok Park; Yuan Qi |
| Abstract: | This paper develops a sensitivity analysis of the surrogacy assumption for the surrogate index approach in Athey et al. [2025b]. We introduce "Weighted Surrogate Indices" (WSIs), the analog of the surrogate index under the surrogacy assumption. We show that under comparability, the ATE on WSI identifies the ATE on the long-term outcome when a copula of the treatment and the long-term outcome conditional on baseline covariates and surrogates is known. When the copula is unknown, we establish the identified set of the ATE on the long-term outcome. Furthermore, we construct debiased estimators of the ATE for any given copula and develop asymptotically valid inference in both point-identified and partially identified cases. Using data from a poverty alleviation program in Pakistan, we demonstrate the importance of sensitivity checks as well as the usefulness of our approach. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00580 |
| By: | Federico A. Bugni; Ivan A. Canay; Deborah Kim |
| Abstract: | Induced order statistics (IOS) arise when sample units are reordered according to the value of an auxiliary variable, and the associated responses are analyzed in that induced order. IOS play a central role in applications where the goal is to approximate the conditional distribution of an outcome at a fixed covariate value using observations whose covariates lie closest to that point, including regression discontinuity designs, k-nearest-neighbor methods, and distributionally robust optimization. Existing asymptotic results allow the dimension of the IOS vector to grow with the sample size only under smoothness conditions that are often too restrictive for practical data-generating processes. In particular, these conditions rule out boundary points, which are central to regression discontinuity designs. This paper develops general convergence rates for IOS under primitive and comparatively weak assumptions. We derive sharp marginal rates for the approximation of the target conditional distribution in Hellinger and total variation distances under quadratic mean differentiability and show how these marginal rates translate into joint convergence rates for the IOS vector. Our results are widely applicable: they rely on a standard smoothness condition and accommodate both interior and boundary conditioning points, as required in regression discontinuity and related settings. In the supplementary appendix, we provide complementary results under a Taylor/Hölder remainder condition. Our results reveal a clear trade-off between smoothness and speed of convergence, identify regimes in which Hellinger and total variation distances behave differently, and provide explicit growth conditions on the number of nearest neighbors. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.07255 |
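To fix ideas about the objects studied above (notation assumed): given pairs $(X_i, Y_i)_{i=1}^{n}$ and a conditioning point $x_0$, relabel the sample so that $|X_{(1)} - x_0| \le \dots \le |X_{(n)} - x_0|$; the induced order statistics are the concomitant responses $Y_{[1]}, \dots, Y_{[k]}$ attached to the $k$ covariates nearest $x_0$. The question the paper addresses is how fast the joint law of $(Y_{[1]}, \dots, Y_{[k]})$ approaches that of $k$ i.i.d. draws from the conditional distribution of $Y$ given $X = x_0$ as $n \to \infty$ with $k$ growing.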
| By: | Wei Ma; Zeqi Wu; Zheng Zhang |
| Abstract: | In modern randomized experiments, large-scale data collection increasingly yields rich baseline covariates and auxiliary information from multiple sources. Such information offers opportunities for more precise treatment effect estimation, but it also raises the challenge of integrating heterogeneous information coherently without compromising validity. Covariate-adaptive randomization (CAR) is widely used to improve covariate balance at the design stage, but it typically balances only a small set of covariates used to form strata, making covariate adjustment at the analysis stage essential for more efficient estimation of treatment effects. Beyond standard covariate adjustment, it is often desirable to incorporate auxiliary information, including cross-stratum information, predictions from various machine learning models, and external data from historical trials or real-world sources. While this auxiliary information is widely available, existing covariate adjustment methods under CAR primarily exploit within-stratum covariates and do not provide a coherent mechanism for integrating it. We propose a unified calibration framework that integrates such information through an information proxy vector and calibration weights defined by a convex optimization problem. The resulting estimator recovers many recent covariate adjustment procedures as special cases while providing a systematic mechanism for both internal and external information borrowing within a single framework. We establish large-sample validity and a no-harm efficiency guarantee, showing that incorporating additional information sources cannot increase asymptotic variance, and we extend the theory to settings in which both the number of strata and the number of information sources grow with the sample size. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.07055 |
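A minimal quadratic-loss instance of the calibration idea (hypothetical design and proxies; the paper's convex program is more general): choose weights close to one for treated units so that their weighted proxy mean exactly matches the full-sample mean, borrowing cross-group information through the proxy vector.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1000
    H = rng.normal(size=(n, 3))            # information-proxy vector per unit
    treated = rng.random(n) < 0.5
    Ht = H[treated]
    target = H.mean(axis=0)                # full-sample proxy means to match

    # Minimize sum_i (w_i - 1)^2 subject to the weighted proxy mean over
    # treated units equaling the full-sample target (Lagrange closed form).
    Hc = Ht - target                       # centered proxies
    lam = np.linalg.solve(Hc.T @ Hc, Hc.sum(axis=0))
    w = 1.0 - Hc @ lam

    print("calibrated proxy mean:", (w[:, None] * Ht).sum(axis=0) / w.sum())
    print("target:               ", target)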
| By: | Dobrislav Dobrev; Paweł J. Szerszeń |
| Abstract: | Replacing erroneous observations with missing values is known to mitigate outlier-induced distortions in state-space model inference. Yet, in economic data, outliers can be small and difficult to detect, while still occurring in temporal clusters and generating persistent distortions. We therefore put forward an unsupervised approach for exogenously randomized substitution of missing data (RMDX), designed as an ensemble-averaging enhancement that can be used to improve the robustness of any filter, including against more elusive outliers. Our bias-variance decomposition theory for RMDX ensemble averaging establishes that, under mild regularity conditions on the influence of outliers, the missing data randomization rate acts as a regularization parameter, which can be set optimally to minimize mean squared error loss using standard cross-validation. We corroborate these theoretical results using Monte Carlo simulations, which show that RMDX ensemble averaging can substantially enhance the performance of commonly used robust filters, including ones that rely on supervised missing data substitution upon exceeding outlier detection thresholds. As anticipated, the gains are most pronounced in the presence of patches of moderately sized outliers that are difficult to mitigate. To further assess empirical relevance in economics, we also document that RMDX-enhanced filters perform favorably in widely used state-space models for extracting inflation trends, where clusters of measurement outliers in inflation data are known to pose an extra challenge. |
| Keywords: | State-space models; outlier-robust filtering and forecasting; missing data randomization; bagging and ensemble averaging; bias-variance tradeoff. |
| JEL: | C15 C22 C53 E37 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:gwc:wpaper:2026-004 |
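A toy rendering of the RMDX idea on a local-level model (our simplification; the paper covers general state-space filters and chooses the randomization rate by cross-validation): run a filter on many copies of the series with observations randomly set to missing, then ensemble-average the filtered paths.

    import numpy as np

    rng = np.random.default_rng(2)
    T = 200
    state = np.cumsum(rng.normal(0.0, 0.1, T))       # random-walk trend (truth)
    y = state + rng.normal(0.0, 0.5, T)              # noisy observations
    y[80:85] += 3.0                                   # a patch of moderate outliers

    def kalman_local_level(y, q=0.01, r=0.25):
        """Filter for y_t = mu_t + e_t, mu_t = mu_{t-1} + w_t; NaN means missing."""
        mu, P = 0.0, 10.0
        out = np.empty(len(y))
        for t, obs in enumerate(y):
            P += q                                    # time update
            if not np.isnan(obs):                     # measurement update if observed
                K = P / (P + r)
                mu += K * (obs - mu)
                P *= 1.0 - K
            out[t] = mu
        return out

    def rmdx(y, rate=0.2, n_draws=50):
        """Ensemble-average the filter over randomly masked copies of the data."""
        est = np.zeros(len(y))
        for _ in range(n_draws):
            y_masked = y.copy()
            y_masked[rng.random(len(y)) < rate] = np.nan
            est += kalman_local_level(y_masked)
        return est / n_draws

    print("MSE, plain filter: ", np.mean((kalman_local_level(y) - state) ** 2))
    print("MSE, RMDX ensemble:", np.mean((rmdx(y) - state) ** 2))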
| By: | Millimet, Daniel (Southern Methodist University); Paloyo, Alfredo (University of Wollongong) |
| Abstract: | Empirical researchers often replace latent constructs with composite indices, treating this as a neutral data-processing step. We prove this is a consequential identification choice. Our (near) impossibility theorem demonstrates that no linear index can guarantee consistent estimates of all parameters in multiple regressions. We show that proxy indices induce residual confounding, while popular weighting schemes introduce nonclassical measurement error with method-dependent biases. Using simulations and 2024 U.S. election data, we reveal that substantive conclusions are often artifacts of the chosen index. We argue that measurement models require the same formal scrutiny as other identification strategies. |
| Keywords: | latent variables, composite indices, measurement models, errors-in-variables, proxy variables, political economy, community health |
| JEL: | C13 C18 C43 D72 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18454 |
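A short simulation of the residual-confounding channel described above (hypothetical data-generating process, not the authors' design): when two components with different effects are collapsed into an equal-weights index, the coefficient on another regressor correlated with one component is biased.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    c1 = rng.normal(size=n)
    c2 = 0.6 * c1 + rng.normal(size=n)       # correlated components
    x = 0.5 * c2 + rng.normal(size=n)        # regressor tied to c2 only
    y = 1.0 * c1 + 0.2 * c2 + 1.0 * x + rng.normal(size=n)

    def ols(y, X):
        X = np.column_stack([np.ones(len(y))] + list(X))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    b_full = ols(y, [c1, c2, x])             # components entered separately
    b_index = ols(y, [(c1 + c2) / 2, x])     # equal-weights composite index
    print("coef on x, components controlled:", round(b_full[3], 3))   # ~1.0
    print("coef on x, index controlled:     ", round(b_index[2], 3))  # biased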
| By: | Fitzgerald, Jack (Vrije Universiteit Amsterdam); Adema, Joop; Fiala, Lenka; Kujansuu, Essi; Valenta, David (University of Ottawa) |
| Abstract: | Recent literature shows that when regression models are estimated on variables transformed with 'log-like' functions such as the inverse hyperbolic sine or ln(Z + 1) transformations, one can obtain (semi-)elasticity estimates of any magnitude by linearly re-scaling the input variable(s) before transformation. We systematically re-analyze the replication data of 46 papers whose main conclusions are defended by log-like specifications. Our replication findings motivate new theoretical and simulation results showing that in log-like specifications, unit scale can be used to overfit data, creating an uncontrolled multiple hypothesis testing problem that frequently yields spuriously significant results. In particular, 38% of the estimates we re-analyze sit in a 'sweet spot', where both upward and downward re-scalings of variables' units before transformation shrink test statistics. Consequently, published estimates in this literature are statistically significant over 40% more frequently than in the general economics literature. We find that modest changes to model specification yield different statistical significance conclusions for 14-37% of estimates defending papers' main claims. We also show that for 99.8% of estimates, variables transformed with log-like functions do not meet data requirements for log-like specifications from a methodological recommendation cited by all papers in our replication sample. We synthesize and harmonize methodological guidelines and advocate for more robust alternative specifications, including normalized estimands, Poisson regression, and quantile regression. |
| Date: | 2026–03–16 |
| URL: | https://d.repec.org/n?u=RePEc:osf:metaar:juda7_v1 |
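A compact demonstration of the unit-scale sensitivity documented above (hypothetical data with a 30 percent mass at zero): the same comparison on arcsinh-transformed outcomes yields materially different coefficients as the input units are re-scaled before transformation.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    d = rng.integers(0, 2, n)                       # binary treatment
    z = np.exp(rng.normal(1.0 + 0.3 * d, 1.0))      # positive skewed outcome
    z[rng.random(n) < 0.3] = 0.0                    # zeros, the usual reason
                                                    # for using arcsinh

    for c in (0.01, 1.0, 100.0):                    # linear unit re-scalings
        y = np.arcsinh(c * z)
        beta = y[d == 1].mean() - y[d == 0].mean()  # OLS slope of y on d
        print(f"scale c={c:>6}: arcsinh coefficient = {beta:.3f}")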
| By: | Joachim Wilde (Osnabrueck University, Department of Economics, Rolandstr. 8, 49069 Osnabrueck, Germany); Sarah Forstinger (Osnabrueck University, Department of Economics, Rolandstr. 8, 49069 Osnabrueck, Germany) |
| Abstract: | The key assumption of normally distributed error terms is usually not tested in empirical practice when using ordered probit models. Therefore, an artificial regression version of the LM test against the class of Pearson distributions is derived that can be implemented more easily than the well-known matrix version. A comprehensive simulation study analyses the properties of the LM test and of the t-statistics in the artificial regression that correspond to skewness and fat tails, respectively. For most designs a large power against skewness and a moderate power against fat tails are found. However, the t-statistics against skewness and fat tails exhibit notable size distortions. Therefore, new double indicators are proposed. The simulation results indicate that the double indicators avoid the size distortions and exhibit power characteristics similar to the original statistics for most designs. |
| Keywords: | ordered probit model, normality assumption, Lagrange multiplier test, artificial regression |
| JEL: | C25 |
| Date: | 2026–03–11 |
| URL: | https://d.repec.org/n?u=RePEc:iee:wpaper:wp0127 |
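For context on the device (a generic outer-product-of-gradients construction, not necessarily the paper's exact regression): stack the per-observation score contributions of the restricted ordered probit, augmented with the extra directions implied by the Pearson family (skewness and fat tails), regress a vector of ones on them, and use

$$ LM = n R_u^2, $$

the uncentered $R^2$ from that artificial regression, which is asymptotically chi-square with degrees of freedom equal to the number of extra directions.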
| By: | Peter Kyungtae Park |
| Abstract: | Shift-share designs are gaining popularity in political science. This article introduces what shift-share designs are, reviews their application in the literature, synthesizes recent methodological developments, and discusses their potential utility in the field. Although shift-share designs have a long history of use in economics, their causal properties only recently began to be understood. Articles in political science tend to be aware of these developments, but do not fully discuss and test identifying assumptions and sometimes apply the methods incorrectly. Most articles rely on the share exogeneity framework, suggesting that the shifter exogeneity framework is underutilized despite its comparable prevalence in economics. I illustrate the shifter exogeneity framework and develop auxiliary theoretical results that are potentially useful in applying the framework in political science settings. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00135 |
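The basic object, for readers new to the design (standard notation): a shift-share instrument for unit $i$ is

$$ z_i = \sum_{k} s_{ik}\, g_k, $$

with exposure shares $s_{ik}$ and common shocks (shifters) $g_k$. The share-exogeneity framework treats the $s_{ik}$ as the exogenous component, while the shifter-exogeneity framework discussed above instead treats the $g_k$ as (quasi-)randomly assigned, conditioning on the shares.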
| By: | Jevan Cherniwchan; Nouri Najjar |
| Abstract: | Decompositions are a common method for quantifying within- and across-agent contributions to aggregate economic dynamics. We show that the standard practice of applying decompositions to sample data yields biased estimates of these contributions, and for common sample designs, these biases can be addressed by reformulating the decomposition as an estimation problem and applying standard statistical techniques. An application to India suggests sample bias meaningfully changes our understanding of how firm dynamics contribute to productivity growth. We also demonstrate that our method enables the study of settings traditionally impeded by data limitations, such as productivity and firm dynamics in Sub-Saharan Africa. |
| Keywords: | Decomposition; Sample Bias; Economic Dynamics; Firm Dynamics; Productivity |
| JEL: | C18 D24 E24 O47 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:mcm:deptwp:2026-02 |
| By: | Batuhan Koyuncu; Byeungchun Kwon; Marco Jacopo Lombardi; Fernando Perez-Cruz; Hyun Song Shin |
| Abstract: | This article introduces the BIS Time-series Regression Oracle (BISTRO), a general purpose time series model for macroeconomic forecasting. Its edge over traditional econometric approaches lies in its ability to deal with generic unconditional and conditional forecasting tasks without requiring the model to be adjusted to the macroeconomic task being tackled. Building on the transformer architecture underlying LLMs, BISTRO is fine-tuned on the large repository of macroeconomic data maintained at the BIS. We show that BISTRO provides reliable unconditional forecasts for key macroeconomic aggregates and illustrate how using it for conditional forecasting can help unveil patterns of nonlinearity in the data. |
| Keywords: | forecasting, scenarios, large language models |
| JEL: | C32 C45 C55 C87 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:bis:biswps:1337 |
| By: | Roger Koenker; Jiaying Gu |
| Abstract: | Two strategies are explored for robustifying classical denoising procedures for the Gaussian sequence model. First, the Hodges and Lehmann (1952) restricted Bayes approach is used to reduce sensitivity to the specification of the initial prior distribution. Second, alternatives to the Gaussian noise assumption are explored. In both cases proposals of Huber (1964) and Mallows (1978) play a crucial role. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00704 |
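The Huber (1964) proposal referenced above replaces exact Gaussianity with an $\varepsilon$-contamination neighborhood,

$$ F = (1-\varepsilon)\,\Phi + \varepsilon\, H, \qquad H \text{ arbitrary}, $$

and seeks rules with good worst-case behavior over such neighborhoods; the paper explores analogous robustifications of denoising for the Gaussian sequence model.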
| By: | Amir Ahmadi, Pooyan; Matthes, Christian; Wang, Mu-Chun |
| Abstract: | The effects of monetary policy shocks are regularly estimated using high-frequency surprises in asset prices around central bank meetings as an instrument. These studies, insofar as they explicitly model the relationship between instrument and structural shock, assume a constant relationship between the instrument and the monetary policy shock. By allowing for time variation in this relationship, we show that only a few distinct periods are informative about monetary policy shocks. Therefore, we build a narrative for instrument-based identification. For the instrument in Gertler & Karadi (2015), the effect on the (log) price level is almost 50 percent larger than the standard specification would suggest. |
| Keywords: | High-Frequency Identification, Instruments, Monetary Policy |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:bubdps:338091 |
| By: | Guy Tchuente |
| Abstract: | This paper develops a unifying theory of peer effects that treats the peer aggregator (the social norm mapping peers' actions into a scalar exposure) as the central behavioral primitive. We formulate peer influence as a norm game in which payoffs depend on own action and an exposure index, and we provide equilibrium existence and uniqueness for a broad class of aggregators. Using economically interpretable axioms, we organize commonly used exposure maps into a small taxonomy that nests linear-in-means, CES (peer-preference) norms, and smooth "attention-to-salient-peers" aggregators; rank-based quantile norms are treated as a complementary class. Building on this unification, we show that each aggregator induces an operator that governs how exogenous variation propagates through the network. Linear-in-means corresponds to constant transport (the adjacency matrix), recovering the classic friends-of-friends instrument families. For nonlinear norms, the operator becomes state- and preference-dependent and is characterized by the Jacobian of the exposure map evaluated at an exogenous predictor. This perspective yields geometry-induced instruments that exploit heterogeneity in marginal influence and nonredundant paths, and these can remain informative when one-step moments or adjacency-power instruments become weak. Monte Carlo evidence and an application to NetHealth illustrate the practical implications across alternative aggregators and outcomes. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.23594 |
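Two members of the taxonomy described above, in assumed notation with peer weights $w_{ij}$: the linear-in-means norm and a CES (peer-preference) norm,

$$ e_i = \sum_{j} w_{ij}\, y_j \qquad\text{versus}\qquad e_i = \Big( \sum_{j} w_{ij}\, y_j^{\rho} \Big)^{1/\rho}, $$

the first inducing a constant (adjacency-based) transport operator, the second a state-dependent Jacobian, which is what generates the geometry-induced instruments.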
| By: | Hillmert, Steffen |
| Abstract: | Directed acyclic graphs (DAGs) have become popular as graphical representations of causal relationships. In practice, DAGs have proven to be particularly helpful for selecting appropriate control variables in causally oriented analytical models. While confirming such usefulness, this paper also aims to highlight another, often neglected aspect: the potential for causal diagrams to support the formulation of theories and corresponding hypotheses. This is particularly the case when diagrams have certain graphical properties, and suggestions are offered regarding how this can be achieved. Examples are drawn from the field of life-course research with the intention of better integrating the visual techniques prevalent in life-course research with DAG-style causal diagrams. While standard causal diagrams may not pay sufficient attention to certain relevant aspects, graphically enhanced causal diagrams can be quite productive for theory development and the analysis of existing life-course data. They are also useful for conceptualising new causally oriented studies. This paper illustrates suitable approaches with original and adapted visualisations. |
| Date: | 2026–02–22 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:z2754_v1 |
| By: | Wilmer Martinez-Rivera; Manuel Dario Hernandez-Bejarano |
| Abstract: | This document introduces a novel business-cycle turning-point analysis method that leverages the nonparametric coincident profile tool to construct confidence intervals for turning-point dates. The method generalizes the coincident profile tool by providing a matrix of coincident relationships among a set of variables; we refer to this object as the coincident matrix. Through a numerical study and two empirical applications, one using economic data from the United States and the other from Colombia, we demonstrate the accuracy of the method in identifying turning points, closely aligning with the reference cycle in each case. In addition, in our analysis of United States economic data, we conduct a pseudo-out-of-sample analysis that further validates the method's superior performance in predicting turning-point dates. |
| Keywords: | Business cycles, Turning points, Non-parametric test, Coincident Profile, Confidence intervals |
| JEL: | C14 C15 E32 E37 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:bdr:borrec:1348 |
| By: | Hassan Hamie; Jinane Jouni; Vladimir Hlasny |
| Abstract: | Technical issues with income data and heterogeneous statistical approaches for addressing them lead to discrepancies in poverty estimates across studies. This study assesses how alternative parametric models perform at estimating poverty headcount ratios under various degrees of data granularity, and various thresholds for poverty. We use 982 surveys from 57 countries and years 1963–2023, across most world regions, spanning low‑income conflict‑affected to high‑income contexts. The regimes of data availability include individual-level microdata, grouped data at the level of income deciles, and a pair of basic distributional statistics – mean and Gini. Our findings show that model flexibility enhances our ability to capture income distributions accurately, with three- and four-parameter models generally providing the closest poverty estimates. However, even simpler two-parameter models, such as lognormal and Fisk, sometimes perform well on grouped data or basic distributional data, for instance in high poverty-line, broad poverty, low-income, and low inequality settings. The analysis highlights that additional data or higher model complexity does not always improve poverty estimation, and in some cases, grouped data can yield more reliable point estimates than raw microdata. Higher-parametric models do not always outperform parsimonious models, particularly when it comes to the precision of estimates in limited-data environments. GB2 and beta 2 estimates exhibit inflated standard errors when estimated on grouped data, while parsimonious models – e.g., Dagum and Singh–Maddala – are often more balanced. These results offer practical guidance to practitioners for selecting appropriate models according to data availability, balancing model complexity, and ensuring robust poverty measurement. |
| JEL: | D31 I32 N35 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:lis:liswps:913 |
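As an example of the most data-scarce regime above (mean and Gini only), the two-parameter lognormal model pins down the headcount ratio in closed form: with mean $m$, Gini $G$, and poverty line $z$,

$$ \sigma = \sqrt{2}\,\Phi^{-1}\!\Big(\frac{1+G}{2}\Big), \qquad \mu = \ln m - \frac{\sigma^2}{2}, \qquad H(z) = \Phi\!\Big(\frac{\ln z - \mu}{\sigma}\Big). $$

Richer models such as GB2 trade this parsimony for flexibility, at the cost of the inflated standard errors the abstract notes under grouped data.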
| By: | Gabriele Fiorentini (Università di Firenze and RCEA); Alessandro Galesi (Idealista); Rodrigo Peña (CEMFI, Centro de Estudios Monetarios y Financieros); Gabriel Pérez Quirós (Banco de España); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros) |
| Abstract: | We show that the Laubach and Williams (2003) model and its variants in Holston, Laubach and Williams (2017, 2023) cannot estimate the natural rate with finite precision when either the IS curve or the Phillips curve is flat. To solve this unobservability, we propose a simple augmented model with a mean-reverting interest rate gap that considerably narrows the natural rate’s confidence bands in those empirically relevant situations. We also assess the ability of the corporate risk premium and the share of working age population to explain movements in the natural rate, but they generate filtered estimates that fluctuate too much. |
| Keywords: | Demographics, Kalman filter, observability, risk appetite. |
| JEL: | E43 E52 C32 C52 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:cmf:wpaper:wp2026_2603 |
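A stylized rendering of the augmentation described above (illustrative, not the paper's full specification): decompose the real rate as

$$ r_t = r_t^{*} + \tilde r_t, \qquad \tilde r_t = \rho\, \tilde r_{t-1} + \varepsilon_t, \quad |\rho| < 1, $$

so the mean-reverting gap $\tilde r_t$ provides an extra observable link to the natural rate $r_t^{*}$ even when the IS or Phillips curve is flat.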
| By: | Masahiro Kato |
| Abstract: | This study proposes the General Bayes framework for policy learning. We consider decision problems in which a decision-maker chooses an action from an action set to maximize its expected welfare. Typical examples include treatment choice and portfolio selection. In such problems, the statistical target is a decision rule, and the prediction of each outcome $Y(a)$ is not necessarily of primary interest. We formulate this policy learning problem by loss-based Bayesian updating. Our main technical device is a squared-loss surrogate for welfare maximization. We show that maximizing empirical welfare over a policy class is equivalent to minimizing a scaled squared error in the outcome difference, up to a quadratic regularization controlled by a tuning parameter $\zeta>0$. This rewriting yields a General Bayes posterior over decision rules that admits a Gaussian pseudo-likelihood interpretation. We clarify two Bayesian interpretations of the resulting generalized posterior, a working Gaussian view and a decision-theoretic loss-based view. As one implementation example, we introduce neural networks with tanh-squashed outputs. Finally, we provide theoretical guarantees in a PAC-Bayes style. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.23672 |
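The generalized-posterior object underlying the framework (standard General Bayes form; the paper's specific loss is the squared surrogate for welfare): given a loss $L_n(\pi)$ over decision rules $\pi$ and prior $\Pi$,

$$ \Pi_\zeta(d\pi \mid \text{data}) \propto \exp\{-\zeta\, L_n(\pi)\}\, \Pi(d\pi), \qquad \zeta > 0, $$

and when $L_n$ is a squared error, the exponential term can be read as a Gaussian pseudo-likelihood, which is the working interpretation mentioned above.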