New Economics Papers on Forecasting
By: | Gael M. Martin; David T. Frazier; Worapree Maneesoonthorn; Ruben Loaiza-Maya; Florian Huber; Gary Koop; John Maheu; Didier Nibbering; Anastasios Panagiotelis |
Abstract: | The Bayesian statistical paradigm provides a principled and coherent approach to probabilistic forecasting. Uncertainty about all unknowns that characterize any forecasting problem -- model, parameters, latent states -- is factored into the forecast distribution, with forecasts conditioned only on what is known or observed. Allied with the elegance of the method, Bayesian forecasting is now underpinned by the burgeoning field of Bayesian computation, which enables Bayesian forecasts to be produced for virtually any problem, no matter how large or complex. The current state of play in Bayesian forecasting is the subject of this review. The aim is to provide readers with an overview of modern approaches to the field, set in some historical context. Whilst our primary focus is on applications in the fields of economics and finance, and their allied disciplines, sufficient general details about implementation are provided to aid and inform all investigators. (An illustrative statement of the predictive density appears after this entry.) |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.03471&r=for |
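The object the abstract alludes to is the Bayesian predictive distribution, in which parameter (and, via model averaging, model) uncertainty is integrated out rather than fixed at point estimates. As a reference point only, this is the textbook one-step-ahead formulation, not anything specific to the paper:

p(y_{T+1} \mid y_{1:T}) = \int p(y_{T+1} \mid \theta, y_{1:T}) \, p(\theta \mid y_{1:T}) \, d\theta ,

and, when a set of candidate models M_1, \dots, M_K is entertained,

p(y_{T+1} \mid y_{1:T}) = \sum_{k=1}^{K} p(y_{T+1} \mid y_{1:T}, M_k) \, p(M_k \mid y_{1:T}) .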
By: | Mboya, Mwasi; Sibbertsen, Philipp |
Abstract: | We develop methods to obtain optimal forecasts under long memory in the presence of a discrete structural break, based on different weighting schemes for the observations. We observe significant changes in the forecasts when long-range dependence is taken into account. Using Monte Carlo simulations, we confirm that our methods substantially improve the forecasting performance under long memory. We further present an empirical application to inflation rates that emphasizes the importance of our methods. (A minimal long-memory forecasting sketch follows this entry.) |
Keywords: | Long memory; Forecasting; Structural break; Optimal weight; ARFIMA model |
JEL: | C12 C22 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:han:dpaper:dp-705&r=for |
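To make the long-memory mechanics concrete, here is a minimal sketch, not the authors' method: it simulates a plain ARFIMA(0, d, 0) process and forms a one-step forecast from its truncated AR(infinity) representation using only the fractional-differencing weights. The break-adjusted weighting schemes that are the paper's contribution are not shown, and the memory parameter d = 0.3 is an arbitrary choice.

# Minimal sketch: forecasting a long-memory ARFIMA(0, d, 0) process from its
# truncated AR(infinity) representation. The weights pi_j come from the
# fractional-differencing expansion (1 - L)^d = sum_j pi_j L^j, with
# pi_0 = 1 and pi_j = pi_{j-1} * (j - 1 - d) / j.
import numpy as np

def ar_inf_weights(d, n):
    """AR(inf) coefficients pi_j of the operator (1 - L)^d."""
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return pi

def simulate_arfima(d, n, rng):
    """Generate ARFIMA(0, d, 0) via its truncated MA(inf) representation."""
    # MA(inf) weights psi_j of (1 - L)^{-d}: psi_0 = 1, psi_j = psi_{j-1}(j-1+d)/j
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(n)
    return np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

def forecast_one_step(x, d):
    """One-step forecast: x_{T+1|T} = -sum_{j>=1} pi_j x_{T+1-j}."""
    pi = ar_inf_weights(d, len(x) + 1)
    return -(pi[1:] @ x[::-1])

rng = np.random.default_rng(0)
x = simulate_arfima(d=0.3, n=500, rng=rng)
print("one-step-ahead forecast:", forecast_one_step(x, d=0.3))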
By: | Emmanuel Alanis; Sudheer Chava; Agam Shah |
Abstract: | Using a comprehensive sample of 2,585 bankruptcies from 1990 to 2019, we benchmark the performance of various machine learning models in predicting financial distress of publicly traded U.S. firms. We find that gradient boosted trees outperform other models in one-year-ahead forecasts. Variable permutation tests show that excess stock returns, idiosyncratic risk, and relative size are the most important variables for predictions. Textual features derived from corporate filings do not improve performance materially. In a credit competition model that accounts for the asymmetric cost of default misclassification, the survival random forest is able to capture large dollar profits. (An illustrative boosted-trees sketch with a permutation test follows this entry.) |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12051&r=for |
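The following is a minimal, hypothetical sketch of the two techniques the abstract names: a gradient-boosted tree classifier evaluated out of sample and a variable permutation test for feature importance. It uses synthetic, class-imbalanced data from scikit-learn's make_classification in place of the paper's bankruptcy sample, and scikit-learn's GradientBoostingClassifier rather than whichever implementation the authors benchmark; the survival random forest and the credit-competition profit calculation are not reproduced.

# Minimal sketch on synthetic data (not the paper's sample or exact models):
# fit a gradient-boosted tree classifier for distress prediction and rank
# covariates by permutation importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-ins for covariates such as excess returns, idiosyncratic risk, size;
# the positive class is rare, mimicking the low base rate of bankruptcy.
X, y = make_classification(n_samples=5000, n_features=8, n_informative=4,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"out-of-sample AUC: {auc:.3f}")

# Permutation test: average drop in AUC when each feature is shuffled.
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.4f}")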