Papers on Forecasting
By: | Yunhua Pei; John Cartlidge; Anandadeep Mandal; Daniel Gold; Enrique Marcilio; Riccardo Mazzon |
Abstract: | Accurate financial market forecasting requires diverse data sources, including historical price trends, macroeconomic indicators, and financial news, each contributing unique predictive signals. However, existing methods often process these modalities independently or fail to effectively model their interactions. In this paper, we introduce Cross-Modal Temporal Fusion (CMTF), a novel transformer-based framework that integrates heterogeneous financial data to improve predictive accuracy. Our approach employs attention mechanisms to dynamically weight the contribution of different modalities, along with a specialized tensor interpretation module for feature extraction. To facilitate rapid model iteration in industry applications, we incorporate a mature auto-training scheme that streamlines optimization. When applied to real-world financial datasets, CMTF demonstrates improvements over baseline models in forecasting stock price movements and provides a scalable and effective solution for cross-modal integration in financial market prediction. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13522 |
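The paper's code is not reproduced here, but the modality-weighting idea can be sketched. Below is a minimal PyTorch sketch of attention-weighted fusion across three data sources; the module name, dimensions, and scoring layer are illustrative assumptions, not the actual CMTF architecture.

# Minimal sketch of attention-weighted fusion across data modalities
# (price history, macro indicators, news embeddings). All module and
# dimension names are illustrative; the actual CMTF architecture is
# not public here.
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar relevance score per modality

    def forward(self, modalities: list[torch.Tensor]) -> torch.Tensor:
        # modalities: list of (batch, dim) embeddings, one per data source
        stacked = torch.stack(modalities, dim=1)             # (batch, M, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, M, 1)
        return (weights * stacked).sum(dim=1)                # (batch, dim)

if __name__ == "__main__":
    batch, dim = 8, 64
    price, macro, news = (torch.randn(batch, dim) for _ in range(3))
    fused = ModalityFusion(dim)([price, macro, news])
    print(fused.shape)  # torch.Size([8, 64])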
By: | Hannah O’Keeffe; Katerina Petrova |
Abstract: | In this paper, we propose a component-based dynamic factor model for nowcasting GDP growth. We combine ideas from “bottom-up” approaches, which utilize the national income accounting identity through modelling and predicting sub-components of GDP, with a dynamic factor (DF) model, which is suitable for dimension reduction as well as parsimonious real-time monitoring of the economy. The advantages of the new model are twofold: (i) in contrast to existing dynamic factor models, it respects the GDP accounting identity; (ii) in contrast to existing “bottom-up” approaches, it models all GDP components jointly through the dynamic factor model, inheriting its main advantages. An additional advantage of the resulting CBDF approach is that it generates nowcast densities and impact decompositions for each component of GDP as a by-product. We present a comprehensive forecasting exercise, where we evaluate the model’s performance in terms of point and density forecasts, and we compare it to existing models (e.g. the model of Almuzara, Baker, O’Keeffe, and Sbordone (2023)) currently used by the New York Fed, as well as the model of Higgins (2014) currently used by the Atlanta Fed. We demonstrate that, on average, the point nowcast performance (in terms of RMSE) of the standard DF model can be improved by 15 percent and its density nowcast performance (in terms of log-predictive scores) can be improved by 20 percent over a large historical sample. |
Keywords: | Dynamic factor model; GDP nowcasting |
JEL: | C32 C38 C53 |
Date: | 2025–04–01 |
URL: | https://d.repec.org/n?u=RePEc:fip:fednsr:99906 |
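The identity-respecting aggregation is the distinctive ingredient and can be sketched compactly. The snippet below nowcasts each GDP expenditure component with a stand-in AR(1) and aggregates with expenditure shares, so the GDP nowcast satisfies the accounting identity by construction; the real CBDF model does this jointly inside a dynamic factor model, and the shares and AR(1) dynamics here are assumptions.

# Sketch of the "identity-respecting" idea: nowcast GDP expenditure
# components separately, then aggregate with national-accounts shares
# so the GDP nowcast satisfies the accounting identity by construction.
# The AR(1) component nowcasts are a stand-in for the joint factor model.
import numpy as np

rng = np.random.default_rng(0)
T = 80
components = ["C", "I", "G", "NX"]
shares = np.array([0.68, 0.18, 0.17, -0.03])  # illustrative expenditure shares

def simulate_ar1(phi, sigma, n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def ar1_nowcast(x):
    # OLS estimate of x_t = phi * x_{t-1} + e_t, then one-step prediction
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return phi * x[-1]

growth = {c: simulate_ar1(0.5, 0.8, T) for c in components}
component_nowcasts = np.array([ar1_nowcast(growth[c]) for c in components])

# GDP nowcast respects the identity: weighted sum of component nowcasts.
gdp_nowcast = shares @ component_nowcasts
for c, g in zip(components, component_nowcasts):
    print(f"{c}: {g:+.2f}")
print(f"GDP (identity-consistent): {gdp_nowcast:+.2f}")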
By: | Anand, Vaibhav |
Abstract: | Forecasts play a key role in guiding short-term adaptations, from pre-treating roads before snow to evacuating people before hurricanes. However, it is unclear how improvements in forecast skill should shape optimal adaptation. I develop a theoretical model for forecast-based prevention and provide three key insights. First, better forecasts lead to higher, yet less frequent, adaptation investments. Second, risk preferences matter less as improved forecasts resolve more uncertainty. Third, the average adaptation declines for highly risk-averse decision-makers but may rise for less risk-averse ones. These findings highlight the need to align resource allocation and planning with forecast skill, especially given varying levels of trust in forecasts. |
Date: | 2025–03–14 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:tvwhz_v2 |
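The first insight can be illustrated with a standard cost-loss toy model. In the sketch below, a decision-maker chooses a prevention level a to maximize p*L*(1 - e^(-a)) - a given forecast probability p; sharper forecasts push p toward 0 or 1, so investment becomes larger but rarer. The damage function, the blend-based notion of skill, and all parameter values are assumptions for illustration, not the paper's model.

# Toy cost-loss illustration: sharper forecasts concentrate probability
# near 0 or 1, so the decision-maker invests less often but more per
# event. Damage reduction L*(1 - exp(-a)) and all parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
L = 10.0          # loss if the event hits unprotected
base_rate = 0.2   # climatological event probability
n = 100_000

def simulate(skill):
    # skill in [0, 1]: blend of the true state and the climatological
    # rate, a crude stand-in for forecast sharpness.
    state = rng.random(n) < base_rate
    p = skill * state + (1 - skill) * base_rate
    # Optimal prevention maximizes p*L*(1 - e^{-a}) - a => a* = ln(pL) when pL > 1
    a = np.where(p * L > 1, np.log(np.maximum(p * L, 1.0)), 0.0)
    acts = a > 0
    return acts.mean(), a[acts].mean() if acts.any() else 0.0

for skill in (0.0, 0.5, 0.9):
    freq, size = simulate(skill)
    print(f"skill={skill:.1f}: invests {freq:5.1%} of days, avg investment {size:.2f}")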
By: | Apró, William Zoltán |
Abstract: | Traditional IT project forecasting methods rely on siloed, retrospective data (e.g., Jira ticket histories), leaving teams unprepared for evolving risks such as shifting customer demands, accumulating technical debt, or new regulatory mandates. Studies show that 60% of IT projects exceed budgets due to unplanned scope changes, exposing the limitations of reactive approaches. We introduce Agentic AI for Proactive IT Forecasting (AAPIF), a novel framework that integrates intelligence-grade premise valuation with multi-source data fusion to proactively forecast project outcomes across technical, business, and market contexts. Unlike static models, AAPIF dynamically weights input data, such as customer requirements, organizational context, and compliance signals, based on reliability (freshness, credibility) and relevance (contribution weights C_i), and continuously refines predictions using reinforcement learning. Key contributions: a mathematical model computing confidence-weighted success probabilities, achieving 89% accuracy, a 32% improvement over Random Forest baselines; and actionable intelligence protocols that reduce data collection errors by 45%, utilizing premise valuation (e.g., stakeholder alignment scoring) and automated risk alerts. In a fintech case study, AAPIF reduced unplanned scope changes by 37% through risk prediction (e.g., "72% likelihood of API scalability issues in Q3") and strategic recommendations (e.g., "Reassign three developers to refactor modules"). By transforming raw data into strategic foresight, AAPIF empowers project managers to become proactive architects of success rather than reactive troubleshooters. |
Keywords: | Agentic AI; IT project forecasting; premise valuation; Agile project management; predictive analytics; risk mitigation |
Date: | 2025–04–21 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:jtvqu_v1 |
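The abstract does not spell out the confidence-weighting formula, but its general shape can be sketched. The toy below computes a reliability-discounted, contribution-weighted evidence score and maps it to a success probability through a logistic link; the functional form, premise names, and all numbers are assumptions, not the paper's actual model.

# Hedged sketch of a confidence-weighted success score in the spirit of
# AAPIF: each premise carries a reliability score (freshness,
# credibility) and a contribution weight C_i; the weighted evidence is
# mapped to a success probability. Functional form is an assumption.
import math

premises = [
    # (name, signal in [0,1], reliability in [0,1], contribution weight C_i)
    ("stakeholder_alignment", 0.8, 0.90, 0.35),
    ("tech_debt_level",       0.4, 0.70, 0.25),
    ("compliance_readiness",  0.6, 0.95, 0.20),
    ("market_stability",      0.5, 0.60, 0.20),
]

def success_probability(premises):
    # Reliability-discounted, contribution-weighted evidence score,
    # centered at 0.5 and passed through a logistic link.
    num = sum(c * r * s for _, s, r, c in premises)
    den = sum(c * r for _, s, r, c in premises)
    evidence = num / den
    return 1.0 / (1.0 + math.exp(-8.0 * (evidence - 0.5)))  # slope 8 assumed

print(f"P(success) = {success_probability(premises):.2f}")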
By: | Nikhil Shivakumar Nayak; Michael P. Brenner |
Abstract: | This study investigates enhancing option pricing by extending the Black-Scholes model to include stochastic volatility and interest rate variability within the Partial Differential Equation (PDE). The PDE is solved using the finite difference method. The extended Black-Scholes model and a machine learning-based LSTM model are developed and evaluated for pricing Google stock options. Both models were backtested using historical market data. While the LSTM model exhibited higher predictive accuracy, the finite difference method demonstrated superior computational efficiency. This work provides insights into model performance under varying market conditions and emphasizes the potential of hybrid approaches for robust financial modeling. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.03175 |
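The finite-difference half of the comparison is standard and short enough to sketch. The code below prices a European call by marching the Black-Scholes PDE backward in time on an explicit grid and checks the result against the closed-form price; it keeps volatility and rates constant, whereas the paper's extended PDE makes them variable.

# Minimal sketch: price a European call by solving the Black-Scholes
# PDE with an explicit finite-difference scheme, checked against the
# closed-form formula. Constant sigma and r keep the sketch short; the
# paper extends the PDE with stochastic volatility and rates.
import math
import numpy as np

K, T, r, sigma = 100.0, 1.0, 0.05, 0.20
s_max, n_s, n_t = 300.0, 300, 5000     # n_t large enough for explicit stability

ds = s_max / n_s
dt = T / n_t
S = np.linspace(0.0, s_max, n_s + 1)
V = np.maximum(S - K, 0.0)             # terminal payoff

for n in range(n_t):                   # march backward from T to 0
    tau = (n + 1) * dt                 # time to maturity after this step
    i = np.arange(1, n_s)
    delta = (V[i + 1] - V[i - 1]) / (2 * ds)
    gamma = (V[i + 1] - 2 * V[i] + V[i - 1]) / ds**2
    V[i] += dt * (0.5 * sigma**2 * S[i]**2 * gamma + r * S[i] * delta - r * V[i])
    V[0] = 0.0                                   # S = 0 boundary
    V[-1] = s_max - K * math.exp(-r * tau)       # deep in-the-money boundary

def bs_call(s0):
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s0 * N(d1) - K * math.exp(-r * T) * N(d2)

s0 = 100.0
print(f"FD price: {np.interp(s0, S, V):.3f}  closed form: {bs_call(s0):.3f}")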
By: | Matthew Smith; Francisco Alvarez |
Abstract: | Machine learning (ML) is becoming an essential tool in economics, offering powerful methods for prediction, classification, and decision-making. This paper provides an intuitive introduction to two widely used families of ML models: tree-based methods (decision trees, Random Forests, boosting techniques) and neural networks. The goal is to equip practitioners with a clear understanding of how these models work, their strengths and limitations, and their applications in economics. Additionally, we briefly discuss some other methods, such as support vector machines (SVMs) and Shapley values, highlighting their relevance in economic research. Rather than providing an exhaustive survey, this paper focuses on practical insights to help economists effectively apply ML in their work. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:fda:fdaddt:2025-03 |
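As a taste of the tree-based workflow the paper introduces, the sketch below fits a Random Forest on synthetic data and reads off feature importances; the variable names and data-generating process are illustrative only.

# Tree-based workflow in miniature: fit a Random Forest on a synthetic
# "economic" dataset and inspect feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))   # e.g. income, education, age, region index
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=n)  # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"out-of-sample R^2: {rf.score(X_te, y_te):.3f}")
print("feature importances:", np.round(rf.feature_importances_, 3))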
By: | Fredy Pokou (CRIStAL, INOCS); Jules Sadefo Kamdem (MRE); François Benhmad (MRE) |
Abstract: | In an environment of increasingly volatile financial markets, the accurate estimation of risk remains a major challenge. Traditional econometric models, such as GARCH and its variants, are based on assumptions that are often too rigid to adapt to the complexity of current market dynamics. To overcome these limitations, we propose a hybrid framework for Value-at-Risk (VaR) estimation, combining GARCH volatility models with deep reinforcement learning. Our approach incorporates directional market forecasting using the Double Deep Q-Network (DDQN) model, treating the task as an imbalanced classification problem. This architecture enables the dynamic adjustment of risk-level forecasts according to market conditions. Empirical validation on daily Eurostoxx 50 data covering periods of crisis and high volatility shows a significant improvement in the accuracy of VaR estimates, as well as a reduction in both the number of breaches and capital requirements, while respecting regulatory risk thresholds. The ability of the model to adjust risk levels in real time reinforces its relevance to modern, proactive risk management. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16635 |
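The GARCH half of the hybrid can be sketched directly; the reinforcement-learning half cannot in a few lines, so it appears only as a labeled placeholder. The snippet below filters volatility with a GARCH(1,1) recursion under assumed parameters and computes a parametric 99% VaR; rl_adjustment is a hypothetical stand-in for the DDQN's state-dependent scaling.

# Baseline half of the hybrid: GARCH(1,1) volatility filtering plus a
# parametric 99% VaR. The DDQN adjustment is a labeled placeholder.
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=5, size=1000) * 0.01   # fat-tailed daily returns

omega, alpha, beta = 1e-6, 0.08, 0.90              # assumed GARCH(1,1) params
z99 = 2.326                                        # 99% normal quantile

var_t = np.empty_like(returns)
sigma2 = returns.var()
for t, r in enumerate(returns):
    var_t[t] = z99 * np.sqrt(sigma2)               # one-day-ahead 99% VaR
    sigma2 = omega + alpha * r**2 + beta * sigma2  # GARCH update

def rl_adjustment(state):
    # Placeholder for the DDQN's market-state-dependent scaling; the
    # paper learns this mapping, here it is just the identity.
    return 1.0

adjusted_var = var_t * rl_adjustment(state=None)
breaches = (returns < -adjusted_var).mean()
print(f"99% VaR breach rate: {breaches:.2%} (target 1%)")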
By: | Nicola Borri; Denis Chetverikov; Yukun Liu; Aleh Tsyvinski |
Abstract: | We show that higher-order terms of the common sparse linear factors, together with their interactions, can effectively subsume the factor zoo. We propose a forward-selection Fama-MacBeth procedure as a method to estimate a high-dimensional stochastic discount factor model, isolating the most relevant higher-order factors. Applying this approach to terms derived from six widely used factors (the Fama-French five-factor model and the momentum factor), we show that the resulting higher-order model with only a small number of selected higher-order terms significantly outperforms traditional benchmarks both in-sample and out-of-sample. Moreover, it effectively subsumes a majority of the factors from the extensive factor zoo, suggesting that the pricing power of most zoo factors is attributable to their exposure to higher-order terms of common linear factors. |
JEL: | G0 |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33663 |
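The construction of higher-order terms and the greedy selection loop can be sketched, with one substitution: the sketch below scores candidates by plain OLS R-squared against a toy target, whereas the paper scores them with Fama-MacBeth cross-sectional regressions. Factor names follow the Fama-French-plus-momentum set; everything else is simulated.

# Schematic of the building blocks: expand six base factors into
# higher-order terms (squares and pairwise interactions), then greedily
# forward-select the terms that best explain a toy target. OLS R^2
# stands in for the Fama-MacBeth scoring used in the paper.
import itertools
import numpy as np

rng = np.random.default_rng(3)
T, names = 600, ["MKT", "SMB", "HML", "RMW", "CMA", "MOM"]
F = rng.normal(size=(T, len(names)))

terms, term_names = [], []
for i, j in itertools.combinations_with_replacement(range(len(names)), 2):
    terms.append(F[:, i] * F[:, j])
    term_names.append(f"{names[i]}*{names[j]}")
X = np.column_stack([F] + terms)
all_names = names + term_names

y = 0.5 * F[:, 0] + 0.3 * F[:, 0] * F[:, 2] + rng.normal(0, 0.5, T)  # toy target

def r2(cols):
    Xc = X[:, cols]
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1 - resid.var() / y.var()

selected = []
for _ in range(3):  # select three terms
    best = max((c for c in range(X.shape[1]) if c not in selected),
               key=lambda c: r2(selected + [c]))
    selected.append(best)
    print(f"selected {all_names[best]:10s}  R^2 = {r2(selected):.3f}")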
By: | Kensei Nakamura; Shohei Yanagita |
Abstract: | When a collective decision maker presents a menu of uncertain prospects to her group members, each member's choice depends on their predictions about payoff-relevant states. In reality, however, these members hold different predictions; more precisely, they have different prior beliefs about states and predictions about the information they will receive. In this paper, we develop an axiomatic framework to examine collective decision making under such disagreements. First, we characterize two classes of representations: Bewley multiple learning (BML) representations, which are unanimity rules among predictions, and justifiable multiple learning (JML) representations, where a single prediction has veto power. Furthermore, we characterize a general class of representations called hierarchical multiple learning representations, which includes BML and JML representations as special cases. Finally, motivated by the fact that these representations violate completeness or transitivity due to multiple predictions, we propose a rationalization procedure for constructing complete and transitive preferences from them. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.04368 |
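The contrast between the two representations reduces, in a static toy, to "all predictions agree" versus "one prediction suffices". The sketch below evaluates two acts under three members' priors; it abstracts entirely from the paper's dynamic prediction structure and axiomatics.

# Toy illustration: under Bewley multiple learning (BML), act f is
# ranked above g only if every prediction (prior) agrees; under
# justifiable multiple learning (JML), one favorable prediction
# suffices. Acts are state-contingent payoffs over three states.
import numpy as np

priors = np.array([            # three members' predictions over 3 states
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])
f = np.array([4.0, 1.0, 0.0])  # risky act f
g = np.array([2.0, 2.0, 2.0])  # safe act g

eu_f, eu_g = priors @ f, priors @ g
print("E[f] per prior:", eu_f, " E[g] per prior:", eu_g)
print("BML ranks f over g:", bool(np.all(eu_f >= eu_g)))   # unanimity rule
print("JML ranks f over g:", bool(np.any(eu_f >= eu_g)))   # one prediction suffices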
By: | Richard K. Crump; Stefano Eusepi; Emanuel Moench; Bruce Preston |
Abstract: | Using a novel and unique panel dataset of individual-level professional forecasts at short, medium, and very-long horizons, we provide new stylized facts about survey forecasts. We present direct evidence that forecasters use multivariate models in an environment with imperfect information about the current state, leading to heterogeneous non-stationary expectations about the long run. We show that forecast revisions are consistent with the predictions of a multivariate unobserved trend and cycle model. Our results suggest that models of expectations formation that are either univariate, stationary, or both are inherently misspecified, and that macroeconomic modelling should reconsider the conventional assumption that agents operate in a well-understood stationary environment. |
Keywords: | expectations formation; shifting endpoint models; imperfect information; survey forecasts |
JEL: | D83 D84 |
Date: | 2025–04–01 |
URL: | https://d.repec.org/n?u=RePEc:fip:fednsr:99868 |
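The shifting-endpoint mechanism can be sketched with the simplest non-stationary filtering problem. Below, a Kalman filter tracks a random-walk trend in noisy data, so the long-horizon forecast (the "endpoint") moves with every observation, unlike a stationary benchmark whose long-run forecast is a fixed mean. The local-level model and its parameters are illustrative stand-ins for the paper's multivariate trend-cycle setup.

# Minimal shifting-endpoint sketch: filter a random-walk trend from
# noisy data; the long-horizon forecast equals the current trend
# estimate, so it shifts with each observation, unlike a stationary
# model's fixed long-run mean. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
T = 200
trend = np.cumsum(rng.normal(0, 0.1, T))           # random-walk trend
y = trend + rng.normal(0, 0.5, T)                  # observed series

# Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t
q, h = 0.01, 0.25                                  # state and obs. variances
mu, P = 0.0, 1.0
endpoints = np.empty(T)
for t in range(T):
    P += q                                         # predict
    K = P / (P + h)                                # Kalman gain
    mu += K * (y[t] - mu)                          # update trend estimate
    P *= (1 - K)
    endpoints[t] = mu                              # long-horizon forecast = trend

print(f"final trend estimate (shifting endpoint): {endpoints[-1]:+.2f}")
print(f"stationary model long-run forecast:       {y.mean():+.2f}")
print(f"true final trend:                         {trend[-1]:+.2f}")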
By: | Alejandro Lopez-Lira; Yuehua Tang; Mingyin Zhu |
Abstract: | Large language models (LLMs) cannot be trusted for economic forecasts during periods covered by their training data. We provide the first systematic evaluation of LLMs' memorization of economic and financial data, including major economic indicators, news headlines, stock returns, and conference calls. Our findings show that LLMs can perfectly recall the exact numerical values of key economic variables from before their knowledge cutoff dates. This recall appears to be randomly distributed across different dates and data types. This selective perfect memory creates a fundamental issue -- when testing forecasting capabilities before their knowledge cutoff dates, we cannot distinguish whether LLMs are forecasting or simply accessing memorized data. Explicit instructions to respect historical data boundaries fail to prevent LLMs from achieving recall-level accuracy in forecasting tasks. Further, LLMs seem exceptional at reconstructing masked entities from minimal contextual clues, suggesting that masking provides inadequate protection against motivated reasoning. Our findings raise concerns about using LLMs to forecast historical data or backtest trading strategies, as their apparent predictive success may merely reflect memorization rather than genuine economic insight. Any application where future knowledge would change LLMs' outputs can be affected by memorization. In contrast, consistent with the absence of data contamination, LLMs cannot recall data after their knowledge cutoff date. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.14765 |
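Stripped of any particular model API, the evaluation logic looks like the sketch below: pose a "forecast" for a pre-cutoff date, then flag memorization when the answer matches the historical truth too closely. query_model is a hypothetical placeholder rather than a real client, and the single data point is illustrative.

# Sketch of the evaluation logic: recall-level accuracy on a "forecast"
# of pre-cutoff data flags memorization. `query_model` is a stand-in
# for whatever LLM API is used; no real endpoint is assumed here.
def query_model(prompt: str) -> float:
    # Hypothetical LLM call; replace with a real client. Returning the
    # truth here simulates the paper's finding of perfect recall.
    return 3.7

historical_truth = {("unemployment_rate", "2023-05"): 3.7}  # illustrative value

def recall_test(indicator: str, date: str, tolerance: float = 0.05) -> bool:
    prompt = (f"You are at {date}, with no knowledge of later data. "
              f"Forecast the US {indicator} for {date}.")
    answer = query_model(prompt)
    truth = historical_truth[(indicator, date)]
    return abs(answer - truth) <= tolerance

print("memorization suspected:", recall_test("unemployment_rate", "2023-05"))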
By: | SeungJae Hwang |
Abstract: | This paper examines the empirical failure of uncovered interest parity (UIP) and proposes a structural explanation based on a mean-reverting risk premium. We define a realized premium as the deviation between observed exchange rate returns and the interest rate differential, and demonstrate its strong mean-reverting behavior across multiple horizons. Motivated by this pattern, we model the risk premium using an Ornstein-Uhlenbeck (OU) process embedded within a stochastic differential equation for the exchange rate. Our model yields closed-form approximations for future exchange rate distributions, which we evaluate using coverage-based backtesting. Applied to USD/KRW data from 2010 to 2025, the model shows strong predictive performance at both short-term and long-term horizons, while underperforming at intermediate (3-month) horizons and showing conservative behavior in the tails of long-term forecasts. These results suggest that exchange rate deviations from UIP may reflect structured, forecastable dynamics rather than pure noise, and point to future modeling improvements via regime-switching or time-varying volatility. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.06028 |
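The closed-form forecast distributions follow from standard Ornstein-Uhlenbeck results, and the coverage-based backtest is easy to sketch against simulated data; the parameters below are illustrative, not estimates from the paper's USD/KRW sample.

# OU risk premium with closed-form conditional distributions, plus a
# coverage check of the implied 90% forecast intervals on simulated data.
import numpy as np

rng = np.random.default_rng(11)
kappa, theta, sigma = 2.0, 0.0, 0.05    # mean reversion, long-run level, vol
h = 1 / 12                              # one-month forecast horizon (years)
n = 5000

x0 = rng.normal(theta, sigma / np.sqrt(2 * kappa), n)    # stationary draws

# Closed-form conditional moments of the OU process at horizon h.
mean_h = theta + (x0 - theta) * np.exp(-kappa * h)
var_h = sigma**2 * (1 - np.exp(-2 * kappa * h)) / (2 * kappa)

x_h = mean_h + np.sqrt(var_h) * rng.normal(size=n)       # exact simulation
z90 = 1.645
inside = np.abs(x_h - mean_h) <= z90 * np.sqrt(var_h)
print(f"90% interval empirical coverage: {inside.mean():.1%}")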
By: | Jinhui Li; Wenjia Xie; Luis Seco |
Abstract: | This study introduces a dynamic investment framework to enhance portfolio management in volatile markets, offering clear advantages over traditional static strategies. It evaluates four conventional approaches under dynamic conditions: equal weighted, minimum variance, maximum diversification, and equal risk contribution. Using K-means clustering, the market is segmented into ten volatility-based states, with transitions forecasted by a Bayesian Markov switching model employing Dirichlet priors and Gibbs sampling. This enables real-time asset allocation adjustments. Tested across two asset sets, the dynamic portfolio consistently achieves significantly higher risk-adjusted returns and substantially higher total returns, outperforming most static methods. By integrating classical optimization with machine learning and Bayesian techniques, this research provides a robust strategy for optimizing investment outcomes in unpredictable market environments. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.02841 |
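The state-identification step can be sketched with K-means on rolling volatility plus a Dirichlet-smoothed transition-count matrix, which is the conjugate core of a Bayesian Markov-switching update; the full Gibbs sampler is omitted, the paper's ten states are reduced to three for brevity, and the returns are simulated rather than real asset data.

# Cluster rolling volatility into regimes with K-means, then estimate
# state-transition probabilities with a Dirichlet-smoothed count matrix
# (posterior mean under a symmetric Dirichlet prior).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
returns = rng.normal(0, 1 + 0.5 * np.sin(np.linspace(0, 20, 1500)), 1500)

window = 20
vol = np.array([returns[t - window:t].std() for t in range(window, len(returns))])

k = 3
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vol.reshape(-1, 1))

alpha = 1.0                               # symmetric Dirichlet prior
counts = np.full((k, k), alpha)
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1
transition = counts / counts.sum(axis=1, keepdims=True)  # posterior mean
print(np.round(transition, 2))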