on Forecasting
By: | Jianying Xie |
Abstract: | One of the most important questions in finance is whether stock returns can be predicted. This research develops a new multivariate model, which includes the dividend yield, earnings-to-price ratio, book-to-market ratio, and consumption-wealth ratio as explanatory variables, for predicting future stock returns. The model's forecasting performance is assessed through empirical analysis of S&P 500 quarterly data from Q1 1952 to Q4 2019 and S&P 500 monthly data from December 1920 to December 2019. The results show that the new multivariate model has predictive power for future stock returns. When compared with benchmark models, the new multivariate model most often performs best in terms of Root Mean Squared Error (RMSE). |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.01873&r= |
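The out-of-sample comparison described in the abstract above can be sketched as a rolling-origin (expanding-window) exercise: refit the predictive regression at each origin, forecast one step ahead, and compare RMSEs against the historical-mean benchmark. This is a minimal illustration on synthetic data; the predictor names, window size, and data-generating process are assumptions, not taken from the paper.

```python
import numpy as np

def rolling_origin_rmse(X, y, min_train=40):
    """One-step-ahead RMSE from an expanding (rolling-origin) window.

    X: (T, k) matrix of lagged predictors (e.g. dividend yield, E/P,
    B/M, consumption-wealth ratio); y: (T,) subsequent returns.
    An intercept is added automatically, so passing a (T, 0) matrix
    gives the historical-mean benchmark.
    """
    errors = []
    for t in range(min_train, len(y)):
        Xt = np.column_stack([np.ones(t), X[:t]])       # add intercept
        beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
        pred = np.concatenate(([1.0], X[t])) @ beta     # forecast y[t]
        errors.append(y[t] - pred)
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic stand-in for the quarterly data: weak predictability in one regressor.
rng = np.random.default_rng(0)
T, k = 120, 4
X = rng.normal(size=(T, k))
y = 0.2 * X[:, 0] + rng.normal(scale=0.5, size=T)

rmse_multi = rolling_origin_rmse(X, y)                  # multivariate model
rmse_mean = rolling_origin_rmse(np.empty((T, 0)), y)    # historical-mean benchmark
print(rmse_multi, rmse_mean)
```

With real data, `X` would hold the four lagged valuation ratios and `y` the realized index returns, and the same loop yields the RMSE comparison the abstract reports.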
By: | Shijia Song; Handong Li |
Abstract: | Under the dynamic conditional score (DCS) framework, we propose a parametric forecasting model for Value-at-Risk based on the normal inverse Gaussian distribution (hereinafter NIG-DCS-VaR), which incorporates intraday information into daily VaR forecasts. The NIG specifies an appropriate distribution for returns, and the semi-additivity of the NIG parameters makes it feasible to improve the estimate of the daily return distribution using intraday returns; the VaR can then be obtained explicitly as a quantile of the re-estimated distribution of daily returns. We conduct an empirical analysis on two main indexes of the Chinese stock market; a variety of backtesting approaches, as well as the model confidence set approach, show that the VaR forecasts of the NIG-DCS model generally outperform those of realized GARCH (RGARCH) models. Especially when the risk level is relatively high, NIG-DCS-VaR beats RGARCH-VaR in terms of coverage ability and independence. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02492&r= |
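The final step in the abstract above, reading VaR off as a quantile of the estimated NIG return distribution, can be sketched with SciPy. Note that SciPy's `norminvgauss` uses an `(a, b, loc, scale)` parameterisation rather than the `(alpha, beta, mu, delta)` convention common in the literature, and the parameter values below are purely illustrative; the paper's DCS recursion for updating those parameters is not shown.

```python
from scipy.stats import norminvgauss

def nig_var(a, b, loc, scale, level=0.01):
    """VaR at the given tail level as minus the lower quantile of a
    normal inverse Gaussian return distribution (SciPy parameterisation:
    a > |b| controls tail heaviness, b skewness)."""
    return -norminvgauss.ppf(level, a, b, loc=loc, scale=scale)

# Illustrative parameters: slightly left-skewed, heavy-tailed daily returns.
var_1pct = nig_var(a=1.5, b=-0.3, loc=0.0005, scale=0.01, level=0.01)
var_5pct = nig_var(a=1.5, b=-0.3, loc=0.0005, scale=0.01, level=0.05)
print(var_1pct, var_5pct)
```

The semi-additivity the abstract mentions is what lets intraday estimates be aggregated into daily parameters before this quantile is taken.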
By: | Andrey Davydenko (Independent Researcher, Antalya, Turkey); Paul Goodwin (The Management School, University of Bath, Bath, BA2 7AY, United Kingdom) |
Abstract: | Measuring bias is important as it helps identify flaws in quantitative forecasting methods or judgmental forecasts. It can, therefore, potentially help improve forecasts. Despite this, bias tends to be under-represented in the literature: many studies focus solely on measuring accuracy. Methods for assessing bias in single series are relatively well-known and well-researched, but for datasets containing thousands of observations for multiple series, the methodology for measuring and reporting bias is less obvious. We compare alternative approaches against a number of criteria when rolling-origin point forecasts are available for different forecasting methods and for multiple horizons over multiple series. We focus on relatively simple, yet interpretable and easy-to-implement metrics and visualization tools that are likely to be applicable in practice. To study the statistical properties of alternative measures we use theoretical concepts and simulation experiments based on artificial data with predetermined features. We describe the difference between mean and median bias, describe the connection between metrics for accuracy and bias, provide suitable bias measures depending on the loss function used to optimise forecasts, and suggest which measures for accuracy should be used to accompany bias indicators. We propose several new measures and provide our recommendations on how to evaluate forecast bias across multiple series. |
Keywords: | forecasting, forecast bias, mean bias, median bias, MPE, AvgRel-metrics, AvgRelAME, AvgRelAMdE, RelAME, RelMdE, AvgRelME, AvgRelMdE, OPc, Mean Percentage Error, MAD/MEAN ratio, Overestimation Percentage corrected, OPc-diagram, OPc-boxplot, AvgRel-prefix, RelAMdE, RelME, absolute mean error, absolute median error, AvgRelRMSE, AvgRelMAE, AvgRelMSE, AvgRel-boxplots, statistical graphics, forecast evaluation workflow, FEW, FEW-L1, FEW-L2, pooled prediction-realization diagram, prediction-realization diagram, criteria for error measures, construct validity, target loss function, point forecast evaluation setup, PFES, forecasting competitions, testing for bias, geometric mean, optimal correction of forecasts, symmetric quadratic loss, symmetric linear loss, absolute mean scaled error, LnQ, ease of communication, ease of interpretation, ease of implementation, scale-independence, time series analysis, rolling-origin evaluation, inventory control, relative root mean squared error, RelRMSE, relative performance, forecast evaluation setup, data science, forecast density, mean-unbiasedness, median-unbiasedness, binomial test, Wilcoxon signed rank test, boxplots, double-scale plots |
Date: | 2021–08–20 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03359179&r= |
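The distinction the abstract above draws between mean and median bias, and its relative (AvgRel-prefixed) measures across series, can be sketched as follows. The `avg_rel_ame` function is one plausible reading of the paper's AvgRelAME (a geometric mean across series of absolute-mean-error ratios against a benchmark); the exact definitions should be taken from the paper itself.

```python
import numpy as np

def mean_bias(errors):
    """Mean error (actual minus forecast): positive means the method
    under-forecasts on average."""
    return float(np.mean(errors))

def median_bias(errors):
    """Median error: robust to a few large misses, so it can differ
    in sign from the mean bias under skewed error distributions."""
    return float(np.median(errors))

def avg_rel_ame(errors_method, errors_bench):
    """Geometric mean, across series, of |mean error| ratios versus a
    benchmark (illustrative reading of the paper's AvgRelAME).
    Values below 1 indicate less mean bias than the benchmark."""
    ratios = [abs(np.mean(m)) / abs(np.mean(b))
              for m, b in zip(errors_method, errors_bench)]
    return float(np.exp(np.mean(np.log(ratios))))

# Two toy series: the method halves the benchmark's mean bias on both.
method = [np.array([1.0, 1.0, 2.0]), np.array([1.0, 3.0])]
bench = [np.array([2.0, 2.0, 4.0]), np.array([2.0, 6.0])]
score = avg_rel_ame(method, bench)
print(score)
```

The geometric mean is the natural way to average ratio-valued measures across series, since it treats a halving and a doubling of relative bias symmetrically.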
By: | Shijia Song; Handong Li |
Abstract: | Constructing a more effective value at risk (VaR) prediction model has long been a goal in financial risk management. In this paper, we propose a novel parametric approach and provide a standard paradigm for the modeling process. We establish a dynamic conditional score (DCS) model based on high-frequency data and a generalized distribution (GD), namely the GD-DCS model, to improve forecasts of daily VaR. The model assumes that intraday returns at different moments are independent of each other and follow the same kind of GD, whose dynamic parameters are driven by the DCS. By predicting the motion law of the time-varying parameters, the conditional distribution of intraday returns is determined; the bootstrap method is then used to simulate daily returns. An empirical analysis using data from the Chinese stock market shows that the Weibull-Pareto-DCS model incorporating high-frequency data is superior to traditional benchmark models, such as RGARCH, in predicting VaR at high risk levels, showing that this approach improves risk measurement tools. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02953&r= |
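The bootstrap step in the abstract above, simulating daily returns from the distribution of intraday returns and reading VaR off the empirical quantile, can be sketched as below. This replaces the paper's fitted generalized distribution with a plain empirical resample, so it illustrates only the aggregation-and-quantile step; the number of intraday intervals and the data are assumptions.

```python
import numpy as np

def bootstrap_daily_var(intraday_returns, n_per_day, level=0.01,
                        n_sim=10000, seed=0):
    """Simulate daily returns by resampling intraday returns (assumed
    i.i.d., as in the abstract) and summing n_per_day of them, then
    take VaR as minus the empirical lower quantile."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(intraday_returns, size=(n_sim, n_per_day))
    daily = draws.sum(axis=1)           # one simulated daily return per row
    return float(-np.quantile(daily, level))

# Heavy-tailed stand-in for intraday returns (Student-t, 48 intervals/day).
rng = np.random.default_rng(1)
intraday = rng.standard_t(df=4, size=5000) * 0.002
var99 = bootstrap_daily_var(intraday, n_per_day=48, level=0.01)
print(var99)
```

In the paper's setup, the resampling would instead draw from the DCS-driven conditional generalized distribution at each intraday moment rather than from the raw sample.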
By: | K. S. Naik |
Abstract: | Since the 1990s, there have been significant advances in technology and e-commerce, leading to an exponential increase in demand for cashless payment solutions. This has driven demand for credit cards, bringing with it the possibility of higher credit defaults and hence higher delinquency rates over time. The purpose of this research paper is to build a contemporary credit scoring model to forecast credit defaults for unsecured lending (credit cards) by employing machine learning techniques. Because much of the customer payments data available to lenders for forecasting credit defaults is imbalanced (skewed), owing to the limited subset of default instances, predictive modelling is challenging. In this research, we address this challenge by deploying the Synthetic Minority Oversampling Technique (SMOTE), a proven technique for correcting such imbalances in a dataset. Running the research dataset through seven different machine learning models, we find that the Light Gradient Boosting Machine (LGBM) classifier outperforms the other six classification techniques. Our research thus indicates that the LGBM classifier is better equipped to deliver higher learning speeds and better efficiency, and to manage larger data volumes. We expect that deploying this model will enable better and more timely prediction of credit defaults for decision-makers in commercial lending institutions and banks. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02206&r= |
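The core idea of SMOTE, as used in the abstract above, is to synthesize new minority-class (default) samples by interpolating between a minority point and one of its nearest minority-class neighbours. The paper uses the standard SMOTE from the literature (available in libraries such as imbalanced-learn); the minimal re-implementation below is illustrative only, and the data and parameter choices are assumptions.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: for each synthetic sample, pick a random
    minority point, pick one of its k nearest minority neighbours, and
    interpolate between them with a uniform random weight."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class (self-distance excluded).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        u = rng.random()                       # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class standing in for the rare default instances.
X_min = np.random.default_rng(2).normal(size=(20, 3))
X_new = smote(X_min, n_new=30)
print(X_new.shape)
```

Because each synthetic point lies on a segment between two real minority points, oversampling enriches the default class without simply duplicating rows, which is what lets classifiers such as LGBM learn the minority boundary better.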