New Economics Papers on Forecasting
By: | Konstantin Boss; Andre Groeger; Tobias Heidland; Finja Krueger; Conghan Zheng |
Abstract: | We develop monthly refugee flow forecasting models for 150 origin countries to the EU27, using machine learning and high-dimensional data, including digital trace data from Google Trends. Comparing different models and forecasting horizons and validating them out-of-sample, we find that an ensemble forecast combining Random Forest and Extreme Gradient Boosting algorithms consistently outperforms the alternatives at forecast horizons between 3 and 12 months. For large refugee flow corridors, this holds in a parsimonious model based exclusively on Google Trends variables, which has the advantage of close-to-real-time availability. We provide practical recommendations on how our approach can enable ahead-of-period refugee forecasting applications. (An illustrative sketch of such an ensemble forecast follows this entry.)
Keywords: | forecasting, refugee flows, asylum seekers, European Union, machine learning, Google Trends
JEL: | C53 C55 F22 |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:1387&r=for |
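To make the ensemble idea concrete, here is a minimal sketch, not the authors' code, of a point forecast that averages a Random Forest and an XGBoost regressor trained on lagged predictors. The synthetic data, the equal ensemble weights, the horizon, and all hyperparameters are assumptions for illustration; it presumes scikit-learn and xgboost are installed.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): equal-weight
# ensemble of Random Forest and XGBoost for an h-month-ahead forecast.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_months, n_features = 120, 20            # hypothetical monthly predictors (e.g. Google Trends indices)
X = rng.normal(size=(n_months, n_features))
y = 2.0 * X[:, 0] + rng.normal(size=n_months)   # placeholder target: monthly refugee flows

h = 3                                     # forecast horizon in months (assumed)
X_train, y_train = X[:-h], y[h:]          # learn to predict y_{t+h} from predictors at t
x_latest = X[-1:]                         # most recent predictor vector

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
xgb = XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=0).fit(X_train, y_train)

# equal-weight ensemble of the two point forecasts
forecast = 0.5 * rf.predict(x_latest) + 0.5 * xgb.predict(x_latest)
print(f"{h}-month-ahead ensemble forecast:", forecast[0])
```

In practice the ensemble weights and hyperparameters would be chosen by out-of-sample validation across horizons, as the abstract describes.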
By: | Joseph de Vilmarest; Nicklas Werge |
Abstract: | In this note, we address the problem of probabilistic forecasting using an adaptive volatility method based on classical time-varying volatility models and stochastic optimization algorithms. These principles were successfully applied in the recent M6 financial forecasting competition, both for probabilistic forecasting and for investment decision-making, under the team name AdaGaussMC. The key points of our strategy are: (a) apply a univariate time-varying volatility model, called AdaVol, (b) obtain probabilistic forecasts of future returns, and (c) optimize the competition metrics using stochastic gradient-based algorithms. We claim that the frugality of these methods implies their robustness and consistency. (An illustrative sketch of a stochastic-gradient volatility update follows this entry.)
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.01855&r=for |
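As a rough illustration of steps (a) and (b), the sketch below updates a time-varying variance estimate by stochastic gradient descent on the Gaussian negative log-likelihood, then issues a Gaussian forecast for the next return. This is not the AdaVol algorithm itself; the log-variance parameterization, learning rate, initialization, and synthetic returns are all assumptions.

```python
# Generic time-varying volatility via SGD (illustrative, not AdaVol).
import numpy as np

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=5, size=1000)   # synthetic daily returns

theta = np.log(np.var(returns[:50]))   # log-variance parameter, warm-started on early data
eta = 0.05                             # learning rate (assumed; tuning matters in practice)

for r in returns:
    sigma2 = np.exp(theta)
    # gradient of the Gaussian negative log-likelihood 0.5*theta + r^2/(2*exp(theta)) w.r.t. theta
    grad = 0.5 - r**2 / (2.0 * sigma2)
    theta -= eta * grad                # one stochastic gradient step per observation

# probabilistic forecast for the next return: N(0, exp(theta))
print("forecast std of next return:", np.sqrt(np.exp(theta)))
```

Working in log-variance keeps the gradient scale-free and the variance positive without any projection step.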
By: | Hakan Pabuccu; Adrian Barbu |
Abstract: | This work investigates the importance of feature selection for improving the forecasting performance of machine learning algorithms for financial data. Artificial neural networks (ANN), convolutional neural networks (CNN), long short-term memory (LSTM) networks, as well as linear models were applied for forecasting purposes. The Feature Selection with Annealing (FSA) algorithm was used to select features from about 1000 possible predictors obtained from 26 technical indicators with specific periods and their lags. In addition, the Boruta feature selection algorithm was applied as a baseline feature selection method. The dependent variables consisted of daily logarithmic returns and daily trends of ten financial data sets, including cryptocurrency and different stocks. Experiments indicate that the FSA algorithm increased the performance of ML models regardless of the problem type. The FSA hybrid machine learning models showed better performance in 10 out of 10 data sets for regression and in 8 out of 10 data sets for classification. None of the hybrid Boruta models outperformed the hybrid FSA models. However, the BOR-CNN model's performance was comparable to the best model's for 4 out of 10 data sets for regression estimates. The BOR-LR and BOR-CNN models showed performance comparable to the best hybrid FSA models in 2 out of 10 data sets for classification. FSA was observed to improve model performance, yielding both better performance metrics and lower computation time by providing a lower-dimensional input feature space. (A simplified sketch of annealing-style feature selection follows this entry.)
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.02223&r=for |
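The following simplified sketch conveys the flavor of annealing-style feature selection: linear weights are fit by gradient descent while the features with the smallest absolute coefficients are progressively dropped until k remain, after which a model is refit on the survivors. It is not the FSA implementation used in the paper; the schedule, learning rate, target size k, and synthetic data are all assumptions.

```python
# Simplified annealing-style feature selection (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, p, k = 500, 100, 10                     # samples, candidate features, target feature count
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.normal(size=n)  # synthetic log returns

keep = np.arange(p)                        # indices of features still in the model
w = np.zeros(p)
lr, epochs = 0.01, 200
for epoch in range(epochs):
    resid = X[:, keep] @ w[keep] - y
    w[keep] -= lr * X[:, keep].T @ resid / n        # gradient step on squared error
    # annealing schedule: shrink the kept set linearly toward k features
    target = max(k, p - int((p - k) * (epoch + 1) / epochs))
    if len(keep) > target:
        order = np.argsort(-np.abs(w[keep]))        # rank surviving features by |weight|
        keep = keep[order[:target]]

model = LinearRegression().fit(X[:, keep], y)       # refit a final model on the selected features
print("selected feature indices:", sorted(keep.tolist()))
```

In the hybrid setups the abstract describes, the surviving columns would feed a downstream learner (linear model, CNN, or LSTM) rather than the simple refit shown here.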