nep-for New Economics Papers
on Forecasting
Issue of 2025–03–17
twenty papers chosen by
Rob J Hyndman, Monash University


  1. Forecasting realized volatility in the stock market: a path-dependent perspective By Xiangdong Liu; Sicheng Fu; Shaopeng Hong
  2. Quantifying Cryptocurrency Unpredictability: A Comprehensive Study of Complexity and Forecasting By Francesco Puoti; Fabrizio Pittorino; Manuel Roveri
  3. Trend-encoded Probabilistic Multi-order Model: A Non-Machine Learning Approach for Enhanced Stock Market Forecasts By Peiwan Wang; Chenhao Cui; Yong Li
  4. Data Transparency and GDP Growth Forecast Errors By Gatti, Roberta V.; Lederman, Daniel; Islam, Asif Mohammed; Nguyen, Ha; Lotfi, Rana Mohamed Amr Mohamed Nabil; Mousa, Mennatallah Emam Mohamed Sayed
  5. Predicting Insurance Penetration Rate in Ghana Using the Autoregressive Integrated Moving Average (ARIMA) Model By Thomas Gyima-Adu; Godwin Gidisu
  6. White Noise and Its Misapplications: Impacts on Time Series Model Adequacy and Forecasting By Hossein Hassani; Leila Marvian Mashhad; Manuela Royer-Carenzi; Mohammad Reza Yeganegi; Nadejda Komendantova
  7. Using quantile time series and historical simulation to forecast financial risk multiple steps ahead By Richard Gerlach; Antonio Naimoli; Giuseppe Storti
  8. OrderFusion: Encoding Orderbook for Probabilistic Intraday Price Prediction By Runyao Yu; Yuchen Tao; Fabian Leimgruber; Tara Esterl; Jochen L. Cremer
  9. Considering Labor Informality in Forecasting Poverty and Inequality: A Microsimulation Model for Latin American and Caribbean Countries By Montoya Munoz, Kelly Yelitza; Olivieri, Sergio Daniel; Silveira Braga, Cicero Augusto
  10. The Role of Deep Learning in Financial Asset Management: A Systematic Review By Pedro Reis; Ana Paula Serra; João Gama
  11. The Uncertainty of Machine Learning Predictions in Asset Pricing By Yuan Liao; Xinjie Ma; Andreas Neuhierl; Linda Schilling
  12. Contrastive Similarity Learning for Market Forecasting: The ContraSim Framework By Nicholas Vinden; Raeid Saqur; Zining Zhu; Frank Rudzicz
  13. Misspecification-Robust Shrinkage and Selection for VAR Forecasts and IRFs By Oriol González-Casasús; Frank Schorfheide
  14. Forecasting drug overdose mortality by age in the United States at the national and county levels By Bottcher, Lucas; Chou, Tom; D'Orsogna, Maria Rita
  15. Harnessing the Power of Artificial Intelligence to Forecast Startup Success: An Empirical Evaluation of the SECURE AI Model By Morande, Swapnil; Arshi, Tahseen; Gul, Kanwal; Amini, Mitra
  16. Market-Derived Financial Sentiment Analysis: Context-Aware Language Models for Crypto Forecasting By Hamid Moradi-Kamali; Mohammad-Hossein Rajabi-Ghozlou; Mahdi Ghazavi; Ali Soltani; Amirreza Sattarzadeh; Reza Entezari-Maleki
  17. Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation By Jaskaran Singh Walia; Aarush Sinha; Srinitish Srinivasan; Srihari Unnikrishnan
  18. Regression and Forecasting of U.S. Stock Returns Based on LSTM By Shicheng Zhou; Zizhou Zhang; Rong Zhang; Yuchen Yin; Chia Hong Chang; Qinyan Shen
  19. Bounded Foresight Equilibrium in Large Dynamic Economies with Heterogeneous Agents and Aggregate Shocks By Bilal Islah; Bar Light
  20. Struggling with the Rain: Weather Variability and Food Insecurity Forecasting in Mauritania By Blanchard, Paul Baptiste; Ishizawa Escudero, Oscar Anil; Humbert, Thibaut; Van Der Borght, Rafael

  1. By: Xiangdong Liu; Sicheng Fu; Shaopeng Hong
    Abstract: Volatility forecasting in financial markets has received increasing attention from scholars. In this paper, we propose a new volatility forecasting model that combines the heterogeneous autoregressive (HAR) model with a family of path-dependent volatility models (HAR-PD). The model exploits the long- and short-term memory properties of price data to capture volatility and trend features. By integrating path-dependent volatility features into the HAR model family framework, we develop a new set of volatility forecasting models. We also propose a HAR-REQ model, based on the empirical quartile as a threshold, which exhibits stronger forecasting ability than the HAR-REX model. The predictive performance of the HAR-PD model family is then evaluated by statistical tests using data from the Chinese stock market and compared with the basic HAR model family. The empirical results show that the HAR-PD model family achieves higher forecasting accuracy than the underlying HAR model family, and robustness tests confirm its significant predictive power.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.00851
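    Code sketch: The HAR family underlying these extensions regresses next-period realized volatility on daily, weekly, and monthly averages of past realized volatility; the following minimal Python sketch shows that baseline HAR-RV step. The column construction, plain OLS fit, and one-step horizon are illustrative assumptions and do not reproduce the paper's path-dependent HAR-PD specifications.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def har_one_step(rv: pd.Series) -> float:
        """One-step-ahead forecast from a baseline HAR-RV regression (illustrative)."""
        df = pd.DataFrame({
            "rv_d": rv,                     # daily RV
            "rv_w": rv.rolling(5).mean(),   # weekly average
            "rv_m": rv.rolling(22).mean(),  # monthly average
        })
        df["target"] = rv.shift(-1)         # next-day RV
        train = df.dropna()
        fit = sm.OLS(train["target"], sm.add_constant(train[["rv_d", "rv_w", "rv_m"]])).fit()
        x_new = np.array([[1.0, rv.iloc[-1], rv.rolling(5).mean().iloc[-1], rv.rolling(22).mean().iloc[-1]]])
        return float(fit.predict(x_new)[0])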
  2. By: Francesco Puoti; Fabrizio Pittorino; Manuel Roveri
    Abstract: This paper offers a thorough examination of the univariate predictability of cryptocurrency time series. By exploiting a combination of complexity measures and model predictions, we explore the cryptocurrency time-series forecasting task, focusing on the USD exchange rates of Litecoin, Binance Coin, Bitcoin, Ethereum, and XRP. On one hand, to assess the complexity and randomness of these time series, a comparative analysis is performed using Brownian and colored noises as benchmarks. The results obtained from the complexity-entropy causality plane and power density spectrum analysis reveal that cryptocurrency time series exhibit characteristics closely resembling those of Brownian noise when analyzed in a univariate context. On the other hand, the application of a wide range of statistical, machine learning, and deep learning models for time-series forecasting demonstrates the low predictability of cryptocurrencies. Notably, our analysis reveals that simpler models, such as naive models, consistently outperform the more complex machine and deep learning ones in forecasting accuracy across different forecast horizons and time windows. The combined study of complexity and forecasting accuracy highlights the difficulty of predicting the cryptocurrency market. These findings provide valuable insights into the inherent characteristics of cryptocurrency data and highlight the need to reassess the challenges associated with predicting cryptocurrency price movements.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.09079
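    Code sketch: One way to see the "naive models win" finding is to benchmark a last-value (random-walk) forecast against a fitted model on a held-out window, as in the following Python sketch. The ARIMA(1,1,1) order, the 50-observation test window, and the MAE metric are assumptions for illustration, not the paper's full model suite.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    def naive_vs_arima(prices: pd.Series, n_test: int = 50) -> dict:
        """Compare MAE of a naive last-value forecast with an ARIMA forecast."""
        train, test = prices[:-n_test], prices[-n_test:]
        naive_fc = np.repeat(train.iloc[-1], n_test)                     # random-walk forecast
        arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=n_test)
        mae = lambda fc: float(np.mean(np.abs(test.values - np.asarray(fc))))
        return {"naive_mae": mae(naive_fc), "arima_mae": mae(arima_fc)}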
  3. By: Peiwan Wang; Chenhao Cui; Yong Li
    Abstract: In recent years, the dominance of machine learning in stock market forecasting has been evident. While these models have shown decreasing prediction errors, their robustness across different datasets has been a concern. A successful stock market prediction model minimizes prediction errors and shows robustness across various datasets, indicating superior forecasting performance. This study introduces a novel multiple-lag-order probabilistic model based on trend encoding (TeMoP) that enhances stock market predictions through a probabilistic approach. Results across stock indexes from nine countries demonstrate that the TeMoP outperforms state-of-the-art machine learning models in prediction accuracy and stability.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.08144
  4. By: Gatti, Roberta V.; Lederman, Daniel; Islam, Asif Mohammed; Nguyen, Ha; Lotfi, Rana Mohamed Amr Mohamed Nabil; Mousa, Mennatallah Emam Mohamed Sayed
    Abstract: This paper examines the role of a country's data transparency in explaining gross domestic product (GDP) growth forecast errors. It reports four sets of results that have not been reported in the existing literature. First, forecast errors (the difference between forecasted and realized GDP growth) are large. Globally, between 2010 and 2020, the average same-year forecast error was 1.3 percentage points for the World Bank's forecasts published in January of each year, and 1.5 percentage points for the International Monetary Fund's January forecasts. Second, the Middle East and North Africa region has the largest forecast errors compared to other regions. Third, data capacity and transparency significantly explain forecast errors. On average, an improvement in a country's Statistical Capacity Index, a measure of data capacity and transparency, is associated with a decline in absolute forecast errors: a one standard deviation increase in the log of the Statistical Capacity Index is associated with a decline in absolute forecast errors of 0.44 percentage point for World Bank forecasts and 0.49 percentage point for International Monetary Fund forecasts. The results are robust to a battery of control variables and robustness checks. Fourth, the overall data ecosystem, not just those elements related to GDP growth forecasting, is important for the accuracy of GDP growth forecasts. Finally, GDP growth forecasts from the World Bank are more accurate and less optimistic than those from the International Monetary Fund and the private sector.
    Date: 2023–04–12
    URL: https://d.repec.org/n?u=RePEc:wbk:wbrwps:10406
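    Code sketch: The headline estimate (a decline of roughly 0.44-0.49 percentage point in absolute forecast errors per standard deviation of the log Statistical Capacity Index) comes from regressions of absolute GDP growth forecast errors on the log of the index with controls. A stripped-down version of such a regression is sketched below; the column names, the income control, and the clustered standard errors are assumptions, and the paper's actual specification includes further controls and robustness checks.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def forecast_error_regression(df: pd.DataFrame):
        """OLS of absolute forecast errors on log Statistical Capacity Index (illustrative).

        Expected columns: abs_error, sci, gdp_pc, year, country.
        """
        data = df.assign(log_sci=np.log(df["sci"]))
        model = smf.ols("abs_error ~ log_sci + np.log(gdp_pc) + C(year)", data=data)
        return model.fit(cov_type="cluster", cov_kwds={"groups": data["country"]})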
  5. By: Thomas Gyima-Adu; Godwin Gidisu
    Abstract: Ghana records a low insurance penetration rate of 1.05%, compared with some of its African counterparts: South Africa has a penetration rate of 17%, followed by Namibia at 6.3%. This means there is substantial room for improvement. More troubling, with Ghana hovering around 1% as of 2018, the rate amounts to only a small share of Gross Domestic Product. This research models and forecasts the insurance penetration rate in Ghana using the Autoregressive Integrated Moving Average (ARIMA) technique. The results indicate that ARIMA(3, 1, 0) is the appropriate model for insurance penetration in Ghana. The forecasts could also serve as an advisory signal of the need to re-strategize as a country: determining the future pattern of insurance penetration points to remedies that could increase the number of insured in the future.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.07841
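    Code sketch: The selected ARIMA(3, 1, 0) specification can be fitted and used for forecasting in a few lines, as sketched below. The annual frequency and the five-year horizon are illustrative assumptions.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    def penetration_forecast(rates: pd.Series, horizon: int = 5) -> pd.Series:
        """Forecast insurance penetration (%) with an ARIMA(3,1,0) model (illustrative)."""
        fit = ARIMA(rates, order=(3, 1, 0)).fit()
        return fit.forecast(steps=horizon)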
  6. By: Hossein Hassani; Leila Marvian Mashhad; Manuela Royer-Carenzi (I2M - Institut de Mathématiques de Marseille - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Mohammad Reza Yeganegi; Nadejda Komendantova
    Abstract: This paper contributes to time series analysis by discussing the empirical properties of white noise and their implications for model selection. It illustrates the ways in which the standard assumptions about white noise typically fail in practice, with special emphasis on striking differences in the sample ACF and PACF. Such findings prove particularly important when assessing model adequacy and discerning between the residuals of different models, especially ARMA processes. The study also addresses issues with testing procedures, such as the Ljung–Box test, used to select the correct time series model. By improving the understanding of the features of white noise, this work enhances the accuracy of model diagnostics in real forecasting practice, giving it applied value in time series analysis and signal processing.
    Keywords: time series analysis, model selection, Hassani -1/2 theorem, white noise, ARMA, Gaussian, Ljung-Box test
    Date: 2025–02–05
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04937317
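    Code sketch: A typical adequacy check of the kind discussed in the paper applies the Ljung-Box test to fitted residuals, as in the sketch below. The ARMA(1,1) example order and the lag of 10 are assumptions for illustration.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    def residual_whiteness(y: pd.Series) -> pd.DataFrame:
        """Ljung-Box test on ARMA(1,1) residuals; small p-values signal remaining autocorrelation."""
        resid = ARIMA(y, order=(1, 0, 1)).fit().resid
        return acorr_ljungbox(resid, lags=[10], return_df=True)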
  7. By: Richard Gerlach; Antonio Naimoli; Giuseppe Storti
    Abstract: A method for quantile-based, semi-parametric historical simulation estimation of multiple step ahead Value-at-Risk (VaR) and Expected Shortfall (ES) models is developed. It uses the quantile loss function, analogous to how the quasi-likelihood is employed by standard historical simulation methods. The returns data are scaled by the estimated quantile series, then resampling is employed to estimate the forecast distribution one and multiple steps ahead, allowing tail risk forecasting. The proposed method is applicable to any data or model where the relationship between VaR and ES does not change over time and can be extended to allow a measurement equation incorporating realized measures, thus including Realized GARCH and Realized CAViaR type models. Its finite sample properties, and its comparison with existing historical simulation methods, are evaluated via a simulation study. A forecasting study assesses the relative accuracy of the 1% and 2.5% VaR and ES one-day-ahead and ten-day-ahead forecasting results for the proposed class of models compared to several competitors.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.20978
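    Code sketch: The following heavily simplified Python sketch conveys the mechanics of quantile-scaled historical simulation: returns are standardised by a quantile series, the standardised returns are resampled to build a multi-step-ahead distribution, and VaR and ES are read off its tail. Here a rolling empirical quantile stands in for the paper's semi-parametric quantile model, and the window length and resample size are assumptions.

    import numpy as np

    def hs_var_es(returns: np.ndarray, alpha: float = 0.01, h: int = 10,
                  window: int = 250, n_sim: int = 10000, seed: int = 0):
        """Multi-step VaR/ES via quantile-scaled historical simulation (simplified)."""
        rng = np.random.default_rng(seed)
        # rolling empirical alpha-quantile as a crude scaling series
        q = np.array([np.quantile(returns[t - window:t], alpha)
                      for t in range(window, len(returns))])
        z = returns[window:] / np.abs(q)                  # standardised returns
        # resample h-step cumulative standardised returns
        paths = rng.choice(z, size=(n_sim, h), replace=True).sum(axis=1)
        fut = paths * np.abs(q[-1])                       # rescale by the latest quantile
        var = np.quantile(fut, alpha)
        es = fut[fut <= var].mean()
        return var, es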
  8. By: Runyao Yu; Yuchen Tao; Fabian Leimgruber; Tara Esterl; Jochen L. Cremer
    Abstract: Efficient and reliable probabilistic prediction of intraday electricity prices is essential to manage market uncertainties and support robust trading strategies. However, current methods often suffer from parameter inefficiencies, as they fail to fully exploit the potential of modeling interdependencies between bids and offers in the orderbook, requiring a large number of parameters for representation learning. Furthermore, these methods face the quantile crossing issue, where upper quantiles fall below lower quantiles, resulting in unreliable probabilistic predictions. To address these two challenges, we propose an encoding method called OrderFusion and design a hierarchical multi-quantile head. OrderFusion encodes the orderbook into a 2.5D representation, which is processed by a tailored jump cross-attention backbone to capture the interdependencies of bids and offers, enabling parameter-efficient learning. The head sets the median quantile as an anchor and predicts multiple quantiles hierarchically, ensuring reliability by enforcing monotonicity between quantiles through non-negative functions. Extensive experiments and ablation studies are conducted on four price indices (60-min ID3, 60-min ID1, 15-min ID3, and 15-min ID1) using the German orderbook over three years to ensure a fair evaluation. The results confirm that our design choices improve overall performance, offering a parameter-efficient and reliable solution for probabilistic intraday price prediction.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.06830
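    Code sketch: The non-crossing idea behind the hierarchical multi-quantile head can be expressed compactly: predict the median as an anchor and add (or subtract) cumulative non-negative increments to obtain upper (or lower) quantiles, so monotonicity holds by construction. The PyTorch sketch below uses softplus increments and a five-quantile layout as assumptions; it is not the OrderFusion architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HierarchicalQuantileHead(nn.Module):
        """Anchored median plus non-negative increments -> non-crossing quantiles."""
        def __init__(self, d_in: int):
            super().__init__()
            self.median = nn.Linear(d_in, 1)   # anchor quantile (q50)
            self.up = nn.Linear(d_in, 2)       # increments towards q75, q95
            self.down = nn.Linear(d_in, 2)     # decrements towards q25, q5

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            q50 = self.median(h)
            upper = q50 + torch.cumsum(F.softplus(self.up(h)), dim=-1)    # q75, q95
            lower = q50 - torch.cumsum(F.softplus(self.down(h)), dim=-1)  # q25, q5
            # ordered output: q5, q25, q50, q75, q95
            return torch.cat([lower.flip([-1]), q50, upper], dim=-1)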
  9. By: Montoya Munoz, Kelly Yelitza; Olivieri, Sergio Daniel; Silveira Braga, Cicero Augusto
    Abstract: Economists have long been interested in measuring the poverty and distributional impacts of macroeconomic projections and shocks. Microsimulation models have been widely used to estimate these distributional effects, since they account for several transmission channels through which macroeconomic forecasts could affect individuals and households. This paper extends previous microsimulation methodology by introducing more flexibility in labor earnings, considering intra-sectoral variation according to formality status, and assessing the effect on forecasts of country-level poverty, inequality, and other distributive indicators. The results indicate that the proposed methodology accurately estimates the intensity of poverty in the most immediate years regardless of how labor income is simulated. However, allowing for more intra-sectoral variation in labor income leads to more accurate projections of poverty and across the income distribution, with gains in performance in the medium term, especially in atypical years such as 2020.
    Date: 2023–06–22
    URL: https://d.repec.org/n?u=RePEc:wbk:wbrwps:10497
  10. By: Pedro Reis; Ana Paula Serra; João Gama
    Abstract: This review systematically examines deep learning applications in financial asset management. Unlike prior reviews, this study focuses on identifying emerging trends, such as the integration of explainable artificial intelligence (XAI) and deep reinforcement learning (DRL), and their transformative potential. It highlights new developments, including hybrid models (e.g., transformer-based architectures) and the growing use of alternative data sources such as ESG indicators and sentiment analysis. These advancements challenge traditional financial paradigms and set the stage for a deeper understanding of the evolving landscape. We use the Scopus database to select the most relevant articles published from 2018 to 2023. The inclusion criteria encompassed articles that explicitly apply deep learning models within financial asset management. We excluded studies focused on physical assets. This review also outlines our methodology for evaluating the relevance and impact of the included studies, including data sources and analytical methods. Our search identified 934 articles, with 612 meeting the inclusion criteria based on their focus and methodology. The synthesis of results from these articles provides insights into the effectiveness of deep learning models in improving portfolio performance and price forecasting accuracy. The review highlights the broad applicability and potential enhancements deep learning offers to financial asset management. Despite some limitations due to the scope of model application and variation in methodological rigour, the overall evidence supports deep learning as a valuable tool in this field. Our systematic review underscores the progressive integration of deep learning in financial asset management, suggesting a trajectory towards more sophisticated and impactful applications.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.01591
  11. By: Yuan Liao; Xinjie Ma; Andreas Neuhierl; Linda Schilling
    Abstract: Machine learning in asset pricing typically predicts expected returns as point estimates, ignoring uncertainty. We develop new methods to construct forecast confidence intervals for expected returns obtained from neural networks. We show that neural network forecasts of expected returns share the same asymptotic distribution as classic nonparametric methods, enabling a closed-form expression for their standard errors. We also propose a computationally feasible bootstrap to obtain the asymptotic distribution. We incorporate these forecast confidence intervals into an uncertainty-averse investment framework. This provides an economic rationale for shrinkage implementations of portfolio selection. Empirically, our methods improve out-of-sample performance.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.00549
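    Code sketch: A generic resampling recipe conveys the flavour of attaching confidence intervals to machine-learning return forecasts: refit the predictor on bootstrap samples and take quantiles of the resulting forecasts. The sketch below is a plain nonparametric bootstrap and does not reproduce the paper's asymptotic standard errors or its specific bootstrap procedure.

    import numpy as np

    def bootstrap_forecast_interval(fit_predict, X, y, x_new,
                                    n_boot: int = 200, level: float = 0.95, seed: int = 0):
        """fit_predict(X_train, y_train, x_new) -> scalar expected-return forecast."""
        rng = np.random.default_rng(seed)
        n = len(y)
        draws = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)          # resample rows with replacement
            draws.append(fit_predict(X[idx], y[idx], x_new))
        lo, hi = np.quantile(draws, [(1 - level) / 2, 1 - (1 - level) / 2])
        return float(lo), float(hi)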
  12. By: Nicholas Vinden; Raeid Saqur; Zining Zhu; Frank Rudzicz
    Abstract: We introduce the Contrastive Similarity Space Embedding Algorithm (ContraSim), a novel framework for uncovering the global semantic relationships between daily financial headlines and market movements. ContraSim operates in two key stages: (I) Weighted Headline Augmentation, which generates augmented financial headlines along with a semantic fine-grained similarity score, and (II) Weighted Self-Supervised Contrastive Learning (WSSCL), an extended version of classical self-supervised contrastive learning that uses the similarity metric to create a refined weighted embedding space. This embedding space clusters semantically similar headlines together, facilitating deeper market insights. Empirical results demonstrate that integrating ContraSim features into financial forecasting tasks improves classification accuracy from WSJ headlines by 7%. Moreover, leveraging an information density analysis, we find that the similarity spaces constructed by ContraSim intrinsically cluster days with homogeneous market movement directions, indicating that ContraSim captures market dynamics independent of ground truth labels. Additionally, ContraSim enables the identification of historical news days that closely resemble the headlines of the current day, providing analysts with actionable insights to predict market trends by referencing analogous past events.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.16023
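    Code sketch: The weighted self-supervised contrastive stage can be illustrated with a toy objective in which pairwise similarity weights scale how strongly headline embeddings are pulled together under an InfoNCE-style normaliser. The temperature and the exact loss form below are assumptions, not the ContraSim/WSSCL definition.

    import torch
    import torch.nn.functional as F

    def weighted_contrastive_loss(z: torch.Tensor, w: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
        """z: (n, d) embeddings; w: (n, n) pairwise similarity weights in [0, 1]."""
        z = F.normalize(z, dim=-1)
        n = z.size(0)
        eye = torch.eye(n, device=z.device, dtype=torch.bool)
        logits = (z @ z.t() / temperature).masked_fill(eye, -1e9)   # exclude self-pairs
        log_p = F.log_softmax(logits, dim=-1)                       # per-row distribution
        w = w.masked_fill(eye, 0.0)
        w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)         # normalise weights per row
        return -(w * log_p).sum(dim=-1).mean()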
  13. By: Oriol González-Casasús; Frank Schorfheide
    Abstract: VARs are often estimated with Bayesian techniques to cope with model dimensionality. The posterior means define a class of shrinkage estimators, indexed by hyperparameters that determine the relative weight on maximum likelihood estimates and prior means. In a Bayesian setting, it is natural to choose these hyperparameters by maximizing the marginal data density. However, this is undesirable if the VAR is misspecified. In this paper, we derive asymptotically unbiased estimates of the multi-step forecasting risk and the impulse response estimation risk to determine hyperparameters in settings where the VAR is (potentially) misspecified. The proposed criteria can be used to jointly select the optimal shrinkage hyperparameter, VAR lag length, and to choose among different types of multi-step-ahead predictors; or among IRF estimates based on VARs and local projections. The selection approach is illustrated in a Monte Carlo study and an empirical application.
    JEL: C11 C32 C52 C53
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33474
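    Code sketch: The idea of choosing the shrinkage hyperparameter by a forecasting-risk criterion rather than the marginal data density can be caricatured with a ridge-penalised VAR whose penalty is picked by pseudo-out-of-sample multi-step mean squared error. This is only a crude proxy; the paper's criterion is an asymptotically unbiased risk estimate, and the lag length, grid, and evaluation window below are assumptions.

    import numpy as np

    def select_shrinkage(Y: np.ndarray, p: int = 4, h: int = 4,
                         grid=(0.01, 0.1, 1.0, 10.0), n_oos: int = 40) -> float:
        """Pick a ridge penalty for a VAR(p) by pseudo-out-of-sample h-step MSE (illustrative)."""
        def make_xy(data):
            X = np.hstack([data[p - l - 1:len(data) - l - 1] for l in range(p)])
            return np.hstack([np.ones((len(X), 1)), X]), data[p:]

        def fit(data, lam):
            X, Z = make_xy(data)
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)

        def iterate(B, data, steps):
            hist = data.copy()
            for _ in range(steps):
                x = np.concatenate([[1.0], hist[-p:][::-1].ravel()])
                hist = np.vstack([hist, x @ B])
            return hist[-1]

        errors = {lam: [] for lam in grid}
        for t in range(len(Y) - n_oos - h, len(Y) - h):
            for lam in grid:
                B = fit(Y[:t], lam)
                errors[lam].append(np.mean((iterate(B, Y[:t], h) - Y[t + h - 1]) ** 2))
        return min(grid, key=lambda lam: np.mean(errors[lam]))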
  14. By: Bottcher, Lucas; Chou, Tom; D'Orsogna, Maria Rita
    Abstract: The drug overdose crisis in the United States continues to intensify. Fatalities have increased five-fold since 1999, reaching a record high of 108,000 deaths in 2021. The epidemic has unfolded through distinct waves of different drug types, uniquely impacting various age, gender, race, and ethnic groups in specific geographical areas. One major challenge in designing effective interventions is forecasting age-specific overdose patterns at the local level so that prevention and preparedness can be effectively delivered. We develop a forecasting method that assimilates observational data obtained from the CDC WONDER database with an age-structured model of addiction and overdose mortality. We apply our method nationwide and to three select areas: Los Angeles County, Cook County, and the five boroughs of New York City, providing forecasts of drug-overdose mortality and estimates of relevant epidemiological quantities, such as mortality and age-specific addiction rates.
    Date: 2023–09–28
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:yfdj5_v1
  15. By: Morande, Swapnil; Arshi, Tahseen; Gul, Kanwal; Amini, Mitra
    Abstract: This pioneering study employs machine learning to predict startup success, addressing the long-standing challenge of deciphering entrepreneurial outcomes amidst uncertainty. Integrating the multidimensional SECURE framework for holistic opportunity evaluation with AI's pattern recognition prowess, the research puts forth a novel analytics-enabled approach to illuminate success determinants. Rigorously constructed predictive models demonstrate remarkable accuracy in forecasting success likelihood, validated through comprehensive statistical analysis. The findings reveal AI’s immense potential in bringing evidence-based objectivity to the complex process of opportunity assessment. On the theoretical front, the research enriches entrepreneurship literature by bridging the knowledge gap at the intersection of structured evaluation tools and data science. On the practical front, it empowers entrepreneurs with an analytical compass for decision-making and helps investors make prudent funding choices. The study also informs policymakers to optimize conditions for entrepreneurship. Overall, it lays the foundation for a new frontier of AI-enabled, data-driven entrepreneurship research and practice. However, acknowledging AI’s limitations, the synthesis underscores the persistent relevance of human creativity alongside data-backed insights. With high predictive performance and multifaceted implications, the SECURE-AI model represents a significant stride toward an analytics-empowered paradigm in entrepreneurship management.
    Date: 2023–08–29
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:p3gyb_v1
  16. By: Hamid Moradi-Kamali; Mohammad-Hossein Rajabi-Ghozlou; Mahdi Ghazavi; Ali Soltani; Amirreza Sattarzadeh; Reza Entezari-Maleki
    Abstract: Financial Sentiment Analysis (FSA) traditionally relies on human-annotated sentiment labels to infer investor sentiment and forecast market movements. However, inferring the potential market impact of words based on their human-perceived intentions is inherently challenging. We hypothesize that historical market reactions to words offer a more reliable indicator of their potential impact on markets than subjective sentiment interpretations by human annotators. To test this hypothesis, a market-derived labeling approach is proposed that assigns tweet labels based on ensuing short-term price trends, enabling the language model to capture the relationship between textual signals and market dynamics directly. A domain-specific language model was fine-tuned on these labels, achieving up to an 11% improvement in short-term trend prediction accuracy over traditional sentiment-based benchmarks. Moreover, by incorporating market and temporal context through prompt-tuning, the proposed context-aware language model achieved an accuracy of 89.6% on a curated dataset of 227 Bitcoin-related news events with significant market impacts. Aggregating daily tweet predictions into trading signals, our method outperformed traditional fusion models (which combine sentiment-based and price-based predictions), challenging the assumption that sentiment-based signals are inferior to price-based predictions in forecasting market movements. Backtesting these signals across three distinct market regimes yielded robust Sharpe ratios of up to 5.07 in trending markets and 3.73 in neutral markets. Our findings demonstrate that language models can serve as effective short-term market predictors. This paradigm shift underscores the untapped capabilities of language models in financial decision-making and opens new avenues for market prediction applications.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.14897
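    Code sketch: Market-derived labeling replaces human sentiment annotation with labels computed from the price path after each tweet. A minimal version is sketched below; the 60-minute horizon and the +/-0.1% neutrality band are assumptions for illustration.

    import pandas as pd

    def market_derived_labels(tweets: pd.DataFrame, prices: pd.Series,
                              horizon: str = "60min", band: float = 0.001) -> pd.Series:
        """Label each tweet by the sign of the short-term return after its timestamp.

        tweets: DataFrame with a 'timestamp' column; prices: time-indexed close series.
        """
        def label(ts):
            p0 = prices.asof(ts)
            p1 = prices.asof(ts + pd.Timedelta(horizon))
            r = (p1 - p0) / p0
            return "up" if r > band else ("down" if r < -band else "neutral")
        return tweets["timestamp"].apply(label)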
  17. By: Jaskaran Singh Walia; Aarush Sinha; Srinitish Srinivasan; Srihari Unnikrishnan
    Abstract: Financial bond yield forecasting is challenging due to data scarcity, nonlinear macroeconomic dependencies, and evolving market conditions. In this paper, we propose a novel framework that leverages Causal Generative Adversarial Networks (CausalGANs) and Soft Actor-Critic (SAC) reinforcement learning (RL) to generate high-fidelity synthetic bond yield data for four major bond categories (AAA, BAA, US10Y, Junk). By incorporating 12 key macroeconomic variables, we preserve essential market properties and ensure statistical fidelity. To transform this market-dependent synthetic data into actionable insights, we employ a fine-tuned Large Language Model (LLM), Qwen2.5-7B, which generates trading signals (BUY/HOLD/SELL), risk assessments, and volatility projections. We use automated, human, and LLM evaluations, all of which demonstrate that our framework improves forecasting performance over existing methods, with statistical validation via predictive accuracy, MAE evaluation (0.103%), profit/loss evaluation (60% profit rate), LLM evaluation (3.37/5), and expert assessments scoring 4.67 out of 5. The reinforcement-learning-enhanced synthetic data generation achieves the lowest Mean Absolute Error of 0.103, demonstrating its effectiveness in replicating real-world bond market dynamics. Our approach not only enhances data-driven trading strategies but also provides a scalable, high-fidelity synthetic financial data pipeline for risk and volatility management and investment decision-making. This work establishes a bridge between synthetic data generation, LLM-driven financial forecasting, and language model evaluation, contributing to AI-driven financial decision-making.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.17011
  18. By: Shicheng Zhou; Zizhou Zhang; Rong Zhang; Yuchen Yin; Chia Hong Chang; Qinyan Shen
    Abstract: This paper analyses the investment returns of three stock sectors, Manuf, Hitec, and Other, in the U.S. stock market, based on the Fama-French three-factor model, the Carhart four-factor model, and the Fama-French five-factor model, in order to test the validity of these three models for the three sectors of the market. The LSTM model is also used to explore additional factors affecting stock returns. The empirical results show that the Fama-French five-factor model has better validity for the three market segments under study, and that the LSTM model is able to capture factors affecting the returns of certain industries and can better regress and predict the stock returns of the relevant industries.
    Keywords: Fama-French model; Carhart model; factor model; LSTM model
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.05210
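    Code sketch: The LSTM regression described above maps a sliding window of past returns (and factor exposures) to the next-period return. A bare-bones PyTorch version is sketched below; the window length, layer sizes, and training settings are assumptions.

    import torch
    import torch.nn as nn

    class ReturnLSTM(nn.Module):
        """LSTM over a window of features, regressing the next-period return."""
        def __init__(self, n_features: int, hidden: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out, _ = self.lstm(x)              # x: (batch, window, n_features)
            return self.head(out[:, -1, :]).squeeze(-1)

    # Typical training step (mean squared error regression):
    # model = ReturnLSTM(n_features=5)
    # opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # loss = nn.functional.mse_loss(model(x_batch), y_batch); loss.backward(); opt.step()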
  19. By: Bilal Islah; Bar Light
    Abstract: Large dynamic economies with heterogeneous agents and aggregate shocks are central to many important applications, yet their equilibrium analysis remains computationally challenging. This is because the standard solution concept, rational expectations equilibrium, requires agents to predict the evolution of the full cross-sectional distribution of state variables, leading to an extreme curse of dimensionality. In this paper, we introduce a novel equilibrium concept, N-Bounded Foresight Equilibrium (N-BFE), and establish its existence under mild conditions. In N-BFE, agents optimize over an infinite horizon but form expectations about key economic variables only for the next N periods. Beyond this horizon, they assume that economic variables remain constant and use a predetermined continuation value. This equilibrium notion reduces computational complexity and draws a direct parallel to lookahead policies in reinforcement learning, where agents make near-term calculations while relying on approximate valuations beyond a computationally feasible horizon. At the same time, it lowers cognitive demands on agents while better aligning with the behavioral literature by incorporating time inconsistency and limited attention, all while preserving desired forward-looking behavior and ensuring that agents still respond to policy changes. Importantly, in N-BFE, forecast errors arise endogenously. We measure the foresight errors for different foresight horizons and show that foresight significantly influences the variation in endogenous equilibrium variables, distinguishing our findings from traditional risk aversion or precautionary savings channels. This variation arises from a feedback mechanism between individual decision-making and equilibrium variables, where increased foresight induces greater non-stationarity in agents' decisions and, consequently, in economic variables.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.16536
  20. By: Blanchard, Paul Baptiste; Ishizawa Escudero, Oscar Anil; Humbert, Thibaut; Van Der Borght, Rafael
    Abstract: Weather-related shocks and climate variability contribute to hampering progress toward poverty reduction in Sub-Saharan Africa. Droughts have a direct impact on weather-dependent livelihoods and the potential to affect key dimensions of household welfare, including food consumption. Yet the ability to forecast food insecurity for intervention planning remains limited, and current approaches rely mainly on qualitative methods. This paper incorporates microeconomic estimates of the effect of rainy season quality on food consumption into a catastrophe risk modeling approach to develop a novel framework for early forecasting of food insecurity at sub-national levels. The model relies on the three usual components of catastrophe risk models, adapted to estimate the impact of rainy season quality on food insecurity: natural hazards, household vulnerability, and exposure. The paper applies this framework in the context of rural Mauritania and optimizes the model calibration with a machine learning procedure. The model can produce fairly accurate lean-season food insecurity predictions very early in the agricultural season (October-November), that is, six to eight months ahead of the lean season. Comparisons of model predictions with survey-based estimates yield a mean absolute error of 1.2 percentage points at the national level and a high degree of correlation at the regional level (0.84).
    Date: 2023–05–30
    URL: https://d.repec.org/n?u=RePEc:wbk:wbrwps:10457

This nep-for issue is ©2025 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.