on Forecasting |
By: | Arkadiusz Lipiecki; Bartosz Uniejewski |
Abstract: | Quantifying the uncertainty of forecasting models is essential to assess and mitigate the risks associated with data-driven decisions, especially in volatile domains such as electricity markets. Machine learning methods can provide highly accurate electricity price forecasts, critical for informing the decisions of market participants. However, these models often lack uncertainty estimates, which limits the ability of decision makers to avoid unnecessary risks. In this paper, we propose a novel method for generating probabilistic forecasts from ensembles of point forecasts, called Isotonic Quantile Regression Averaging (iQRA). Building on the established framework of Quantile Regression Averaging (QRA), we introduce stochastic order constraints that improve forecast accuracy and reliability while reducing computational cost. In an extensive forecasting study of the German day-ahead electricity market, we show that iQRA consistently outperforms state-of-the-art postprocessing methods in terms of both reliability and sharpness. It produces well-calibrated prediction intervals across multiple confidence levels, providing superior reliability to all benchmark methods, particularly coverage-based conformal prediction. In addition, isotonic regularization decreases the complexity of the quantile regression problem and offers a hyperparameter-free approach to variable selection.
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.15079 |
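The stochastic-order constraint at the heart of iQRA requires predicted quantiles to be non-decreasing in the quantile level. A minimal illustration of that requirement (toy numbers, plain NumPy; a simple sorting rearrangement standing in for the paper's full isotonic estimator) is the repair of crossed quantiles:

```python
import numpy as np

# Quantile predictions for two delivery hours at levels (0.1, 0.5, 0.9).
# The first row has crossed quantiles (q10 > q50), violating stochastic
# order; sorting each row restores a valid, monotone quantile set.
raw = np.array([[42.0, 40.5, 41.0],
                [30.0, 31.0, 33.0]])
fixed = np.sort(raw, axis=1)
print(fixed)
```

The already-ordered second row is left unchanged, so the fix is a no-op on valid forecasts.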
By: | Zhiren Ma; Qian Zhao; Riquan Zhang; Zhaoxing Gao |
Abstract: | This paper proposes a novel diffusion-index model for forecasting when predictors are high-dimensional matrix-valued time series. We apply an $\alpha$-PCA method to extract low-dimensional matrix factors and build a bilinear regression linking future outcomes to these factors, estimated via iterative least squares. To handle weak factor structures, we introduce a supervised screening step to select informative rows and columns. Theoretical properties, including consistency and asymptotic normality, are established. Simulations and real data show that our method significantly improves forecast accuracy, with the screening procedure providing additional gains over standard benchmarks in out-of-sample mean squared forecast error. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.04259 |
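The matrix-factor step can be sketched with a PCA-style eigen-decomposition of averaged row and column second-moment matrices (a simplified stand-in for the paper's $\alpha$-PCA: the $\alpha$ weighting, screening step, and iterative least squares are omitted; all sizes are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, q, k1, k2 = 200, 10, 8, 2, 2

# simulate a bilinear factor structure X_t = R F_t C' + noise
R = rng.normal(size=(p, k1))
C = rng.normal(size=(q, k2))
F = rng.normal(size=(T, k1, k2))
X = np.einsum('pi,tij,qj->tpq', R, F, C) + 0.1 * rng.normal(size=(T, p, q))

# loading estimates: top eigenvectors of averaged second-moment matrices
M_row = np.mean([x @ x.T for x in X], axis=0)
M_col = np.mean([x.T @ x for x in X], axis=0)
R_hat = np.linalg.eigh(M_row)[1][:, -k1:]   # eigh sorts ascending
C_hat = np.linalg.eigh(M_col)[1][:, -k2:]

# low-dimensional matrix factors recovered by bilinear projection
F_hat = np.einsum('pi,tpq,qj->tij', R_hat, X, C_hat)
print(F_hat.shape)
```

The extracted `F_hat` would then feed a bilinear regression of the future outcome on the factors.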
By: | Haojie Liu; Zihan Lin |
Abstract: | We introduce Galerkin-ARIMA, a novel time-series forecasting framework that integrates Galerkin projection techniques with the classical ARIMA model to capture potentially nonlinear dependencies in lagged observations. By replacing the fixed linear autoregressive component with a spline-based basis expansion, Galerkin-ARIMA flexibly approximates the underlying relationship among past values via ordinary least squares, while retaining the moving-average structure and Gaussian innovation assumptions of ARIMA. We derive closed-form solutions for both the AR and MA components using two-stage Galerkin projections, establish conditions for asymptotic unbiasedness and consistency, and analyze the bias-variance trade-off under basis-size growth. Complexity analysis reveals that, for moderate basis dimensions, our approach can substantially reduce computational cost compared to maximum-likelihood ARIMA estimation. Through extensive simulations on four synthetic processes, including noisy ARMA, seasonal, trend-AR, and nonlinear recursion series, we demonstrate that Galerkin-ARIMA matches or closely approximates ARIMA's forecasting accuracy while achieving orders-of-magnitude speedups in rolling forecasting tasks. These results suggest that Galerkin-ARIMA offers a powerful, efficient alternative for modeling complex time series dynamics in high-volume or real-time applications.
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.07469 |
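The core idea of replacing the linear AR component with a basis expansion fit by ordinary least squares can be shown in a few lines. The sketch below uses a polynomial basis of the lagged value rather than the paper's spline basis, and a nonlinear toy series of my own choosing; because the linear model is nested in the basis expansion, the in-sample fit can only improve:

```python
import numpy as np

rng = np.random.default_rng(1)

# nonlinear AR(1) toy series: x_t = 0.8*sin(x_{t-1}) + noise
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * np.sin(x[t - 1]) + 0.1 * rng.normal()

lag, y = x[:-1], x[1:]

# basis expansion of the lagged value, fit by ordinary least squares
# (polynomial basis as a stand-in for the paper's spline basis)
B = np.vander(lag, 4)                       # columns: lag^3, lag^2, lag, 1
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
fitted = B @ beta

# plain linear AR(1) fit for comparison
lin_fit = np.polyval(np.polyfit(lag, y, 1), lag)
print(np.mean((y - fitted) ** 2), np.mean((y - lin_fit) ** 2))
```

In the full method the MA part is then fit in a second Galerkin/OLS stage on the residuals.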
By: | Christopher Clayton; Antonio Coppola |
Abstract: | We examine whether and how granular, real-time predictive models should be integrated into central banks' macroprudential toolkit. First, we develop a tractable framework that formalizes the tradeoff regulators face when choosing between implementing models that forecast systemic risk accurately but have uncertain causal content and models with the opposite profile. We derive the regulator's optimal policy in a setting in which private portfolios react endogenously to the regulator's model choice and policy rule. We show that even purely predictive models can generate welfare gains for a regulator, and that predictive precision and knowledge of causal impacts of policy interventions are complementary. Second, we introduce a deep learning architecture tailored to financial holdings data--a graph transformer--and we discuss why it is optimally suited to this problem. The model learns vector embedding representations for both assets and investors by explicitly modeling the relational structure of holdings, and it attains state-of-the-art predictive accuracy in out-of-sample forecasting tasks including trade prediction. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.18747 |
By: | Yihao Ang; Qiang Wang; Qiang Huang; Yifan Bao; Xinyu Xi; Anthony K. H. Tung; Chen Jin; Zhiyong Huang |
Abstract: | Synthetic time series are essential tools for data augmentation, stress testing, and algorithmic prototyping in quantitative finance. However, in cryptocurrency markets, characterized by 24/7 trading, extreme volatility, and rapid regime shifts, existing Time Series Generation (TSG) methods and benchmarks often fall short, jeopardizing practical utility. Most prior work (1) targets non-financial or traditional financial domains, (2) focuses narrowly on classification and forecasting while neglecting crypto-specific complexities, and (3) lacks critical financial evaluations, particularly for trading applications. To address these gaps, we introduce \textsf{CTBench}, the first comprehensive TSG benchmark tailored for the cryptocurrency domain. \textsf{CTBench} curates an open-source dataset from 452 tokens and evaluates TSG models across 13 metrics spanning 5 key dimensions: forecasting accuracy, rank fidelity, trading performance, risk assessment, and computational efficiency. A key innovation is a dual-task evaluation framework: (1) the \emph{Predictive Utility} task measures how well synthetic data preserves temporal and cross-sectional patterns for forecasting, while (2) the \emph{Statistical Arbitrage} task assesses whether reconstructed series support mean-reverting signals for trading. We benchmark eight representative models from five methodological families over four distinct market regimes, uncovering trade-offs between statistical fidelity and real-world profitability. Notably, \textsf{CTBench} offers model ranking analysis and actionable guidance for selecting and deploying TSG models in crypto analytics and strategy development. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02758 |
By: | Chi-Sheng Chen; Aidan Hung-Wen Tsai |
Abstract: | The rise of decentralized finance (DeFi) has created a growing demand for accurate yield and performance forecasting to guide liquidity allocation strategies. In this study, we benchmark six models, XGBoost, Random Forest, LSTM, Transformer, quantum neural networks (QNN), and quantum support vector machines with quantum feature maps (QSVM-QNN), on one year of historical data from 28 Curve Finance pools. We evaluate model performance on test MAE, RMSE, and directional accuracy. Our results show that classical ensemble models, particularly XGBoost and Random Forest, consistently outperform both deep learning and quantum models. XGBoost achieves the highest directional accuracy (71.57%) with a test MAE of 1.80, while Random Forest attains the lowest test MAE of 1.77 and 71.36% accuracy. In contrast, quantum models underperform with directional accuracy below 50% and higher errors, highlighting current limitations in applying quantum machine learning to real-world DeFi time series data. This work offers a reproducible benchmark and practical insights into model suitability for DeFi applications, emphasizing the robustness of classical methods over emerging quantum approaches in this domain. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02685 |
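Directional accuracy, one of the three evaluation metrics in this benchmark, is simply the share of periods where the forecast gets the sign of the change right. A minimal implementation (toy series; not the authors' code):

```python
import numpy as np

def directional_accuracy(y_true, y_pred):
    """Fraction of steps where predicted and realized changes share a sign."""
    true_dir = np.sign(np.diff(y_true))
    pred_dir = np.sign(np.diff(y_pred))
    return float(np.mean(true_dir == pred_dir))

y_true = np.array([1.0, 1.2, 1.1, 1.3, 1.25])
y_pred = np.array([1.0, 1.15, 1.2, 1.28, 1.2])
print(directional_accuracy(y_true, y_pred))  # 3 of 4 directions correct
```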
By: | Imad Talhartit (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Research in Finance, Audit and Governance of Organizations (LARFAGO) - National School of Business and Management – ENCG Settat, Hassan The First University, Settat, Morocco.); Sanae Ait Jillali (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Research in Finance, Audit and Governance of Organizations (LARFAGO) - National School of Business and Management – ENCG Settat, Hassan The First University, Settat, Morocco.); Mounime El Kabbouri (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Research in Finance, Audit and Governance of Organizations (LARFAGO) - National School of Business and Management – ENCG Settat, Hassan The First University, Settat, Morocco.) |
Abstract: | This study is part of an empirical and quantitative approach aimed at improving stock market fluctuation forecasting through the application of artificial intelligence models. More specifically, it evaluates the performance of two methods based on Long Short-Term Memory (LSTM) neural networks, one of the most powerful algorithms for analyzing financial time series. The first method is grounded in a classic LSTM model, while the second incorporates hyperparameter optimization using the Particle Swarm Optimization (PSO) metaheuristic method, allowing for better convergence and enhanced prediction accuracy. The study is conducted on ten stocks representing the US S&P 500 index, with historical data spanning several decades, collected via the Investing.com and Yahoo Finance platforms. The empirical results demonstrate a clear superiority of the LSTM-PSO model regarding predictive accuracy, with significant reductions in errors (MSE, RMSE, MAE, MSLE, and RMSLE) compared to the traditional model. These findings emphasize the advantages of combining artificial intelligence and algorithmic optimization for handling complex financial data. In the global context of digitization and automation of investment decisions, this research contributes significantly to the development of reliable predictive systems. Finally, the study raises the question of whether this methodological framework could be effectively adapted to emerging markets, such as the Moroccan Stock Market, where financial environments are characterized by lower trading volumes, different volatility patterns, and more limited historical data. This opens up avenues for future research into the challenges and opportunities of applying advanced AI-based forecasting models in less mature financial markets. |
Keywords: | stock market forecasting, artificial intelligence, LSTM neural networks, Particle Swarm Optimization, financial time series, predictive modeling |
Date: | 2025–07–18 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05177777 |
By: | Peter Haan; Chen Sun; Felix Weinhardt; Georg Weizsäcker |
Abstract: | Different methods of eliciting long-run expectations yield data that differ in how well they predict economic choices. We ask members of a wide population sample to make a 10-year investment decision and to forecast stock market returns in one of two formats: they either predict the average of annual growth rates over the next 10 years, or they predict the total, cumulative growth that occurs over the 10-year period. Results show that total 10-year forecasts are more pessimistic than average annual forecasts, but they better predict experimental portfolio choices and real-world stock market participation.
Keywords: | Household finance, long-run predictions, survey experiments |
JEL: | D01 D14 D84 D9 |
Date: | 2025–07–28 |
URL: | https://d.repec.org/n?u=RePEc:bdp:dpaper:0070 |
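The arithmetic linking the two elicitation formats is compounding: an average annual growth rate r over 10 years implies cumulative growth of (1 + r)^10 - 1, which exceeds the linear extrapolation 10r that respondents may implicitly use (illustrative rate, not a number from the study):

```python
# convert an average annual growth rate into total 10-year growth
r = 0.07                           # 7% average annual growth (illustrative)
cumulative = (1 + r) ** 10 - 1     # compounded total growth
naive = 10 * r                     # linear (non-compounded) extrapolation
print(round(cumulative, 4), naive)
```

At 7% per year the compounded total is about 97%, well above the naive 70%, which is one mechanical reason the two formats can elicit different beliefs.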
By: | Donia Besher; Anirban Sengupta; Tanujit Chakraborty |
Abstract: | Forecasting Climate Policy Uncertainty (CPU) is essential as policymakers strive to balance economic growth with environmental goals. High levels of CPU can slow down investments in green technologies, make regulatory planning more difficult, and increase public resistance to climate reforms, especially during times of economic stress. This study addresses the challenge of forecasting the US CPU index by building the Bayesian Structural Time Series (BSTS) model with a large set of covariates, including economic indicators, financial cycle data, and public sentiments captured through Google Trends. The key strength of the BSTS model lies in its ability to efficiently manage a large number of covariates through its dynamic feature selection mechanism based on the spike-and-slab prior. To validate the effectiveness of the selected features of the BSTS model, an impulse response analysis is performed. The results show that macro-financial shocks impact CPU in different ways over time. Numerical experiments are performed to evaluate the performance of the BSTS model with exogenous variables on the US CPU dataset over different forecasting horizons. The empirical results confirm that BSTS consistently outperforms classical and deep learning frameworks, particularly for semi-long-term and long-term forecasts. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.12276 |
By: | Imen Mahmoud; Andrei Velichko |
Abstract: | This study proposes a novel methodological framework integrating a LightGBM regression model and genetic algorithm (GA) optimization to systematically evaluate the contribution of COVID-19-related indicators to Bitcoin return prediction. The primary objective was not merely to forecast Bitcoin returns but rather to determine whether including pandemic-related health data significantly enhances prediction accuracy. A comprehensive dataset comprising daily Bitcoin returns and COVID-19 metrics (vaccination rates, hospitalizations, testing statistics) was constructed. Predictive models, trained with and without COVID-19 features, were optimized using GA over 31 independent runs, allowing robust statistical assessment. Performance metrics (R2, RMSE, MAE) were statistically compared through distribution overlaps and Mann-Whitney U tests. Permutation Feature Importance (PFI) analysis quantified individual feature contributions. Results indicate that COVID-19 indicators significantly improved model performance, particularly in capturing extreme market fluctuations (R2 increased by 40%, RMSE decreased by 2%, both highly significant statistically). Among COVID-19 features, vaccination metrics, especially the 75th percentile of fully vaccinated individuals, emerged as dominant predictors. The proposed methodology extends existing financial analytics tools by incorporating public health signals, providing investors and policymakers with refined indicators to navigate market uncertainty during systemic crises. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.00078 |
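The statistical comparison across the 31 independent GA runs can be sketched with a hand-rolled Mann-Whitney U test on two metric distributions (illustrative synthetic R^2 values, not the paper's data; normal approximation for the p-value):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical R^2 scores from 31 runs each, with / without COVID-19 features
r2_with = rng.normal(0.42, 0.03, 31)
r2_without = rng.normal(0.30, 0.03, 31)

# Mann-Whitney U: number of pairs where a 'with' run beats a 'without' run
U = np.sum(r2_with[:, None] > r2_without[None, :])
n = m = 31
mu = n * m / 2                            # mean of U under H0
sigma = np.sqrt(n * m * (n + m + 1) / 12) # sd of U under H0 (no ties)
z = (U - mu) / sigma                      # large z => reject equality
print(int(U), round(float(z), 2))
```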
By: | Alberto Vindas-Quesada (Department of Economic Research, Central Bank of Costa Rica); Carlos Brenes-Soto (Department of Economic Research, Central Bank of Costa Rica); Adriana Sandí-Esquivel (Economic Division, Central Bank of Costa Rica); Susan Jiménez-Montero (Department of Economic Research, Central Bank of Costa Rica) |
Abstract: | This document presents the methodology that the Central Bank of Costa Rica uses to evaluate and select the univariate models for short-horizon inflation forecasting. The methodology consists of quantifying several properties that are deemed desirable in forecasting models, assigning scores, and combining them to obtain a final score. Given the recent inflation dynamics, the robustness of the model selection to the evaluation period is analyzed. Because the selection proves sensitive to this period, regularly repeating the selection process is recommended.
Keywords: | Inflation, Forecasting, Stochastic Volatility, GARCH, Evaluation, Pronóstico |
JEL: | E31 E37 C52 C53 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:apk:nottec:2405 |
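The score-and-combine selection step described above amounts to a weighted aggregation of per-property scores. A minimal sketch (model names, property scores, and weights are all illustrative, not the Bank's actual values):

```python
import numpy as np

# rows: candidate univariate models; columns: desirable properties
# (e.g. accuracy, stability, unbiasedness) scored on a common scale
models = ['ARMA', 'GARCH', 'SV', 'RW']
scores = np.array([[0.8, 0.6, 0.7],
                   [0.7, 0.9, 0.6],
                   [0.9, 0.7, 0.8],
                   [0.5, 0.8, 0.9]])
weights = np.array([0.5, 0.3, 0.2])   # relative importance of each property

final = scores @ weights              # combined final score per model
best = models[int(np.argmax(final))]
print(best, final.round(3))
```

Re-running the same aggregation over a different evaluation period can change `scores`, and hence the selected model, which is the sensitivity the note documents.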
By: | Hess, Dieter; Simon, Frederik; Weibels, Sebastian |
Abstract: | We predict earnings for forecast horizons of up to five years by using the entire set of Compustat financial statement data as input and providing it to state-of-the-art machine learning models capable of approximating arbitrary functional forms. Our approach improves prediction one year ahead by an average of 11% compared to the traditional linear approach that performs best. This superior performance is consistent across a variety of evaluation metrics as well as different firm subsamples and translates into more profitable investment strategies. Extensive model interpretation reveals that income statement variables, especially different definitions of earnings, are by far the most important predictors. Conversely, we find that while income statement variables decline in relevance, balance sheet information becomes more significant as the forecast horizon extends. Lastly, we show that the influence of interactions and non-linearities on the machine learning forecast is modest, but substantial differences between firm subsamples exist.
Keywords: | Earnings Forecasts, Cross-Sectional Earnings Models, Machine Learning |
JEL: | G11 G12 G17 G31 G32 M40 M41 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:zbw:cfrwps:323935 |
By: | Yu Shi; Zongliang Fu; Shuo Chen; Bohan Zhao; Wei Xu; Changshui Zhang; Jian Li |
Abstract: | The success of large-scale pre-training paradigm, exemplified by Large Language Models (LLMs), has inspired the development of Time Series Foundation Models (TSFMs). However, their application to financial candlestick (K-line) data remains limited, often underperforming non-pre-trained architectures. Moreover, existing TSFMs often overlook crucial downstream tasks such as volatility prediction and synthetic data generation. To address these limitations, we propose Kronos, a unified, scalable pre-training framework tailored to financial K-line modeling. Kronos introduces a specialized tokenizer that discretizes continuous market information into token sequences, preserving both price dynamics and trade activity patterns. We pre-train Kronos using an autoregressive objective on a massive, multi-market corpus of over 12 billion K-line records from 45 global exchanges, enabling it to learn nuanced temporal and cross-asset representations. Kronos excels in a zero-shot setting across a diverse set of financial tasks. On benchmark datasets, Kronos boosts price series forecasting RankIC by 93% over the leading TSFM and 87% over the best non-pre-trained baseline. It also achieves a 9% lower MAE in volatility forecasting and a 22% improvement in generative fidelity for synthetic K-line sequences. These results establish Kronos as a robust, versatile foundation model for end-to-end financial time series analysis. Our pre-trained model is publicly available at https://github.com/shiyu-coder/Kronos. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02739 |
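The tokenization step that makes K-line data digestible for an autoregressive model can be illustrated by quantile-binning continuous returns into a small vocabulary (a deliberately simple stand-in; Kronos' actual tokenizer also preserves trade-activity patterns and is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0, 0.02, 1000)   # toy per-bar log returns

# discretize continuous movements into a 16-token vocabulary via
# empirical quantile bins, so each token is roughly equally frequent
n_tokens = 16
edges = np.quantile(returns, np.linspace(0, 1, n_tokens + 1)[1:-1])
tokens = np.digitize(returns, edges)
print(tokens.min(), tokens.max(), len(np.unique(tokens)))
```

The resulting integer sequence is what an autoregressive next-token objective can be trained on.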
By: | Easton, Peter; Kapons, Martin (Tilburg University, School of Economics and Management); Monahan, S.; Schütt, Harm (Tilburg University, School of Economics and Management); Weisbrod, Eric H. |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:tiu:tiutis:47b3dffb-5015-44a5-8519-f1c5c94a835c |
By: | Christian Conrad; Zeno Enders; Gernot Müller |
Abstract: | Under inflation forecast targeting, central banks such as the ECB adjust policy to keep expected inflation on target. We evaluate the ECB’s inflation forecasts: they are unbiased and efficient but contain little information at forecast horizons beyond three quarters. In a New Keynesian model with transmission lags, inflation forecast targeting is indeed effective in stabilizing inflation—provided there is no forward-looking behavior—though the information content of forecasts is unrealistically high. In the presence of forward-looking behavior, the information content declines because monetary policy becomes more effective in meeting the target, but inflation is best stabilized by targeting current inflation. |
Keywords: | inflation targeting, inflation forecast targeting, monetary policy, inflation forecast, information content, target horizon, ECB |
JEL: | C53 E52 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12006 |
By: | Mahdi Goldani |
Abstract: | In statistical modeling, prediction and explanation are two fundamental objectives. When the primary goal is forecasting, it is important to account for the inherent uncertainty associated with estimating unknown outcomes. Traditionally, confidence intervals constructed using standard deviations have served as a formal means to quantify this uncertainty and evaluate the closeness of predicted values to their true counterparts. This approach reflects an implicit aim to capture the behavioral similarity between observed and estimated values. However, advances in similarity-based approaches present promising alternatives to conventional variance-based techniques, particularly in contexts characterized by large datasets or a high number of explanatory variables. This study investigates which methods, traditional or similarity-based, are capable of producing narrower confidence intervals under comparable conditions, thereby offering more precise and informative intervals. The dataset consists of 42 U.S. mega-cap companies. Because the high number of features induces interdependencies among predictors, Ridge Regression is applied to address this issue. The findings indicate that the variance-based method and LCSS exhibit the highest coverage among the analyzed methods, although they produce broader intervals. Conversely, DTW, Hausdorff, and TWED deliver narrower intervals, positioning them as the most precise methods, despite their medium coverage rates. Ultimately, the trade-off between interval width and coverage underscores the necessity for context-aware decision making when selecting similarity-based methods for confidence interval estimation in time series analysis.
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.16655 |
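The width-versus-coverage trade-off the study evaluates can be made concrete by scoring two interval rules on the same data (synthetic data and a plain variance-based interval only; the similarity-based intervals from the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0, 1, 500)            # realized values
pred = y + rng.normal(0, 0.5, 500)   # toy point predictions

def coverage_and_width(lo, hi, y):
    """Empirical coverage rate and mean interval width."""
    cover = float(np.mean((y >= lo) & (y <= hi)))
    width = float(np.mean(hi - lo))
    return cover, width

s = np.std(pred - y)                 # residual standard deviation
c95, w95 = coverage_and_width(pred - 1.96 * s, pred + 1.96 * s, y)
c50, w50 = coverage_and_width(pred - 0.67 * s, pred + 0.67 * s, y)
print(round(c95, 2), round(c50, 2))
```

The wider interval buys coverage at the price of informativeness, which is exactly the trade-off the paper weighs when ranking DTW, Hausdorff, and TWED against the variance-based baseline.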
By: | Maciej Wysocki; Paweł Sakowski
Abstract: | This paper investigates the important problem of appropriate variance-covariance matrix estimation in Modern Portfolio Theory. We propose a novel framework for variance-covariance matrix estimation for portfolio optimization, based on deep learning models. We employ long short-term memory (LSTM) recurrent neural networks (RNN) along with two probabilistic deep learning models, DeepVAR and GPVAR, for the task of one-day-ahead multivariate forecasting. We then use these forecasts to optimize portfolios of stocks and cryptocurrencies. Our analysis presents results across different combinations of observation windows and rebalancing periods to compare the performance of classical and deep learning variance-covariance estimation methods. The conclusions of the study are that although the strategies' (portfolios') performance differed significantly across parameter combinations, the best results in terms of the information ratio and annualized returns are generally obtained with the LSTM-RNN models. Moreover, longer observation windows translate into better performance of the deep learning models, indicating that these methods require longer windows to efficiently capture the long-term dependencies of the variance-covariance matrix structure. Strategies with less frequent rebalancing typically perform better than those with the shortest rebalancing windows across all considered methods.
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.14999 |
By: | Oguzhan Akgun; Alain Pirotte; Giovanni Urga; Zhenlin Yang |
Abstract: | This paper proposes a selective inference procedure for testing equal predictive ability in panel data settings with unknown heterogeneity. The framework allows predictive performance to vary across unobserved clusters and accounts for the data-driven selection of these clusters using the Panel Kmeans Algorithm. A post-selection Wald-type statistic is constructed, and valid $p$-values are derived under general forms of autocorrelation and cross-sectional dependence in forecast loss differentials. The method accommodates conditioning on covariates or common factors and permits both strong and weak dependence across units. Simulations demonstrate the finite-sample validity of the procedure and show that it has very high power. An empirical application to exchange rate forecasting using machine learning methods illustrates the practical relevance of accounting for unknown clusters in forecast evaluation. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.14621 |
By: | Stefan Nagel |
Abstract: | Return prediction with Random Fourier Features (RFF)—a very large number, P, of nonlinear transformations of a small number, K, of predictor variables—has become popular recently. Surprisingly, this approach appears to yield a successful out-of-sample stock market index timing strategy even when trained in rolling windows as small as T = 12 months with P in the thousands. However, when P ≫ T, the RFF-based forecast becomes a weighted average of the T training sample returns, with weights determined by the similarity between the predictor vectors in the training data and the current predictor vector. In short training windows, similarity primarily reflects temporal proximity, so the forecast reduces to a recency-weighted average of the T return observations in the training data—essentially a momentum strategy. Moreover, because similarity declines with predictor volatility, the result is a volatility-timed momentum strategy. The strong performance of the RFF-based strategy thus stems not from its ability to extract predictive signals from the training data, but from the fact that a volatility-timed momentum strategy happened to perform well in historical data. This point becomes clear when applying the same method to artificial data in which returns exhibit reversals rather than momentum: the RFF approach still constructs the same volatility-timed momentum strategy, which then performs poorly.
JEL: | G12 G14 G17 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34104 |
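The central algebraic point — that the P ≫ T ridgeless RFF forecast is a similarity-weighted average of the T training returns — follows from the minimum-norm least-squares solution, and can be verified numerically (random toy data; dimensions chosen to mirror the T = 12, P in the thousands setting):

```python
import numpy as np

rng = np.random.default_rng(3)
T, K, P = 12, 3, 4000           # short window, few predictors, many features

X = rng.normal(size=(T, K))      # training predictors (12 months)
y = rng.normal(size=T)           # training returns
x_new = rng.normal(size=K)       # current predictor vector

# random Fourier features
W = rng.normal(size=(K, P))
b = rng.uniform(0, 2 * np.pi, P)
phi = lambda x: np.sqrt(2.0 / P) * np.cos(x @ W + b)

# ridgeless (minimum-norm) regression in feature space
Phi = phi(X)
beta = np.linalg.pinv(Phi) @ y
f1 = phi(x_new) @ beta

# equivalent form: forecast = similarity weights applied to training returns
weights = phi(x_new) @ Phi.T @ np.linalg.pinv(Phi @ Phi.T)
f2 = weights @ y
print(np.isclose(f1, f2))
```

Since `weights` depends only on feature similarity between `x_new` and each training month, whatever pattern those weights happen to encode (here, recency and volatility timing) is the entire strategy.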
By: | Gustavo Silva Araujo; José Ignacio Ándres Bergallo; Flávio de Freitas Val |
Abstract: | This study revisits the question "Do inflation-linked bonds contain information about future inflation?" posed by Vicente and Guillen (2013), by addressing two critical issues related to the breakeven inflation rate (BEIR) that were not considered by the authors: the inflation lag embedded in inflation-indexed securities and the seasonality inherent in inflation indices. The analysis evaluates three methods for calculating the BEIR for the purpose of forecasting inflation: the one used by Vicente and Guillen (2013), which we refer to as Naïve BEIR; an alternative measure derived from government bonds but incorporating corrections for inflation lag and seasonality (Bond Market BEIR, BM-BEIR); and a BEIR based on the Futures Market (Futures Market BEIR, FM-BEIR), which also includes these adjustments. The results show that BM-BEIR significantly outperforms Naïve BEIR for short-term horizons (3 and 6 months), closely tracking actual inflation. Moreover, both BM-BEIR and FM-BEIR match the predictive accuracy of survey expectations, while offering the advantage of frequent updates, highlighting their utility for short-term inflation forecasting and decision-making.
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:bcb:wpaper:625 |
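Before the lag and seasonality corrections the paper introduces, a breakeven inflation rate is just the exact-compounding Fisher relation between a nominal and an inflation-linked yield (illustrative yields, not Brazilian market data):

```python
# breakeven inflation from nominal and inflation-linked bond yields
y_nominal = 0.12    # nominal bond yield (illustrative)
y_real = 0.06       # inflation-linked (real) bond yield (illustrative)
beir = (1 + y_nominal) / (1 + y_real) - 1
print(round(beir, 6))
```

The paper's contribution is to adjust this raw quantity for the indexation lag and the seasonal pattern of the inflation index before using it as a forecast.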
By: | Dan Li; Vassili Kitsios; David Newth; Terence John O'Kane |
Abstract: | This paper introduces a Bayesian hierarchical modeling framework within a fully probabilistic setting for crop yield estimation, model selection, and uncertainty forecasting under multiple future greenhouse gas emission scenarios. By informing on regional agricultural impacts, this approach addresses broader risks to global food security. Extending an established multivariate econometric crop-yield model to incorporate country-specific error variances, the framework systematically relaxes restrictive homogeneity assumptions and enables transparent decomposition of predictive uncertainty into contributions from climate models, emission scenarios, and crop model parameters. In both in-sample and out-of-sample analyses focused on global wheat production, the results demonstrate significant improvements in calibration and probabilistic accuracy of yield projections. These advances provide policymakers and stakeholders with detailed, risk-sensitive information to support the development of more resilient and adaptive agricultural and climate strategies in response to escalating climate-related risks. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.21559 |
By: | Yogeshwar Bharat (Shiv Nadar University); Rajeswari Sengupta (Indira Gandhi Institute of Development Research); Gautham Udupa (Centre For Advanced Financial Research And Learning) |
Abstract: | Core inflation is widely tracked as a measure of trend inflation, but it does not forecast headline inflation well. In this paper, we use disaggregated, state-level inflation data from India to construct a 'cleaned' core inflation measure. We do this by stripping out the passthrough of past food inflation from the raw core inflation measure. We estimate the passthrough using local projection with global supply-side instruments in order to achieve better identification. We further find that our 'cleaned' core inflation measure generates better forecasts of headline inflation beyond a six-month horizon, compared to the raw core measure.
Keywords: | Inflation forecasting, Core inflation, Headline inflation, State-level inflation |
JEL: | E31 E37 E52 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:ind:igiwpp:2025-021 |
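The cleaning step — stripping the estimated food-inflation passthrough out of raw core — can be sketched with a plain OLS regression (a stand-in for the paper's local-projection/IV estimation; all series are simulated with a known 0.3 passthrough):

```python
import numpy as np

rng = np.random.default_rng(11)
T = 240
food = rng.normal(5.0, 2.0, T)                  # food inflation (toy, %)
# raw core inherits lagged food-inflation passthrough plus its own level
core = 4.0 + 0.3 * np.roll(food, 1) + rng.normal(0, 0.5, T)
core[0] = 4.0

lagged_food = np.roll(food, 1)[1:]
y = core[1:]

# estimate the passthrough coefficient and subtract its contribution,
# keeping the mean level of core intact
X = np.column_stack([np.ones(T - 1), lagged_food])
b = np.linalg.lstsq(X, y, rcond=None)[0]
cleaned = y - b[1] * (lagged_food - lagged_food.mean())
print(round(float(b[1]), 3))
```

By construction the cleaned series is uncorrelated with lagged food inflation, which is the property that lets it track the underlying trend more cleanly.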
By: | Kenneth D. West; Kurt G. Lunsford |
Abstract: | We study the use of a misspecified overdifferenced model to forecast the level of a stationary scalar time series. Let x(t) be the series, and let bias be the sample average of a series of forecast errors. Then, the bias of forecasts of x(t) generated by a misspecified overdifferenced ARMA model for Δx(t) will tend to be smaller in magnitude than the bias of forecasts of x(t) generated by a correctly specified model for x(t). Formally, let P be the number of forecasts. The bias from the model for Δx(t) has a variance that is O(1/P^2), while the variance of the bias from the model for x(t) generally is O(1/P). With a driftless random walk as our baseline overdifferenced model, we confirm this theoretical result with simulations and empirical work: random walk bias is generally one-tenth to one-half that of an appropriately specified model fit to levels. |
JEL: | C22 C53 E37 E47 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34112 |
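The telescoping argument behind the result is easy to see in simulation: the random-walk bias is (x(P) − x(0))/P, whose variance shrinks like 1/P², while the correctly specified model's bias is an average of P innovations, with variance of order 1/P (AR(1) toy process with parameters of my choosing):

```python
import numpy as np

rng = np.random.default_rng(5)

def sim_bias(P=200, phi=0.7, mu=1.0, reps=500):
    """Variance, across replications, of the average one-step forecast error
    from (a) a driftless random-walk forecast and (b) the true AR(1)."""
    rw_bias, ar_bias = [], []
    for _ in range(reps):
        x = np.empty(P + 1)
        x[0] = mu
        e = rng.normal(size=P + 1)
        for t in range(1, P + 1):
            x[t] = mu + phi * (x[t - 1] - mu) + e[t]
        err_rw = x[1:] - x[:-1]                        # RW: forecast = last value
        err_ar = x[1:] - (mu + phi * (x[:-1] - mu))    # correctly specified AR(1)
        rw_bias.append(err_rw.mean())                  # telescopes to (x_P - x_0)/P
        ar_bias.append(err_ar.mean())                  # average of P innovations
    return np.var(rw_bias), np.var(ar_bias)

v_rw, v_ar = sim_bias()
print(v_rw < v_ar)
```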
By: | Massimiliano Caporin (Department of Statistical Sciences, University of Padova, Via Cesare Battisti 241, 35121 Padova, Italy); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Sowmya Subramaniam (Indian Institute of Management Lucknow, Prabandh Nagar off Sitapur Road, Lucknow, Uttar Pradesh 226013, India); Hudson S. Torrent (Department of Statistics, Universidade Federal do Rio Grande do Sul Porto Alegre, 91509-900, Brazil) |
Abstract: | We use a mixed-frequency non-parametric causality-in-quantiles test to detect predictability from daily newspaper-article-based indexes of supply bottlenecks to the conditional distributions of the monthly inflation rate and its volatility for China, the European Monetary Union (EMU), the United Kingdom (UK) and the United States (US). Based on a sample period of January 2010 to December 2024, we find that the causal impact of supply bottlenecks on inflation volatility is consistently observed across the four economies, while the same is particularly strong for the inflation rates of the EMU and the UK. The second-moment impact is further emphasized in a forecasting set-up, as we detect a statistically significant impact of these supply chain constraints in the prediction of the lower quantiles of inflation volatility. Our findings have important implications for monetary policy decisions.
Keywords: | Inflation, Inflation Volatility, Supply Bottlenecks, Mixed-Frequency, Nonparametric Causality-in-Quantiles Test |
JEL: | C22 C53 E23 E31 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:pre:wpaper:202526 |
By: | Michael McGrane |
Abstract: | In this paper, I present a dynamic term structure model of interest rates that features a shifting endpoint and incorporates survey forecasts of interest rates to sharpen the model’s implied forecasts and estimate trend interest rates. I present a new estimate of trend interest rates from the model as well as the model’s estimates of term premiums. I conduct an out-of-sample forecast analysis with the model and find that it significantly outperforms a standard dynamic term structure model with no shifting endpoint and only slightly underperforms a random walk model.
JEL: | E43 E47 G12 |
Date: | 2025–08–05 |
URL: | https://d.repec.org/n?u=RePEc:cbo:wpaper:60888 |
By: | Gorodnichenko, Yuriy (University of California, Berkeley); Vasudevan, Vittal (UC Berkeley) |
Abstract: | Using short- and long-term macroeconomic forecasts, we estimate the cost of the Russian full-scale invasion of Ukraine for countries in Eastern Europe, the Caucasus, and Central Asia. Shortly after the Russian attack, the projected cost (cumulative over six years) stood at $2.44 trillion for the region. Professional forecasters predicted a dramatic increase in macroeconomic uncertainty, significant spillover effects, some hysteresis effects, as well as a changing nature of business cycles. We also use the war shock to study how professional forecasters acquire and process information. Our results point to state dependence as well as an important role of forward information in shaping the macroeconomic outlook of professional forecasters.
Keywords: | defense, event analysis, geoeconomics, Ukraine, forecasting, conflict, military, uncertainty |
JEL: | F51 C53 E3 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18017 |