New Economics Papers on Forecasting
By: | Sokol, Andrej |
Abstract: | I propose a new model, conditional quantile regression (CQR), that generates density forecasts consistent with a specific view of the future evolution of some variables. This addresses a shortcoming of existing quantile regression-based models, for example the at-risk framework popularised by Adrian et al. (2019), when they are used in settings that require forecasts to be conditional on a set of technical assumptions, such as most forecasting processes within central banks and similar institutions. Through an application to house price inflation in the euro area, I show that CQR provides a viable alternative to existing approaches to conditional density forecasting, notably Bayesian VARs, with considerable advantages in terms of flexibility and additional insights that do not come at the cost of forecasting performance. JEL Classification: C22, C53, E37, R31 |
Keywords: | at-risk, conditional forecasting, density forecast evaluation, house prices, quantile regression |
Date: | 2021–12 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212624&r= |
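The CQR model itself is not spelled out in the abstract, but the quantile-regression ingredient it builds on is easy to sketch. Below is a minimal, purely illustrative example, not Sokol's CQR: separate quantile regressions of the target on conditioning variables are fitted and then evaluated at an assumed future path, so the fitted quantiles trace out a density forecast consistent with that path. The data, variable names, and the assumed conditioning values are all hypothetical.

```python
# Illustrative sketch only: quantile regressions of a target on conditioning
# variables, evaluated at an assumed future path. This is not Sokol's CQR
# model; data and variable names are made up for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200

# Hypothetical conditioning variables (e.g. a policy rate and income growth).
X = pd.DataFrame({
    "rate": rng.normal(2.0, 1.0, T),
    "income_growth": rng.normal(1.5, 0.5, T),
})
# Hypothetical target, e.g. house price inflation, with fat-tailed shocks.
y = 0.5 * X["income_growth"] - 0.8 * X["rate"] + rng.standard_t(5, T)

X_c = sm.add_constant(X)
quantiles = [0.05, 0.25, 0.50, 0.75, 0.95]

# One regression per quantile; together they trace out a forecast density.
fits = {q: sm.QuantReg(y, X_c).fit(q=q) for q in quantiles}

# A "technical assumption" for the forecast horizon (illustrative values).
assumed_path = pd.DataFrame({"const": [1.0], "rate": [3.0], "income_growth": [1.0]})
conditional_quantiles = {q: fits[q].predict(assumed_path)[0] for q in quantiles}
print(conditional_quantiles)
```

A known caveat of stacking independent quantile regressions like this is that the fitted quantiles can cross; turning them into a coherent conditional density is part of what a model such as CQR has to address.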
By: | Leland Farmer; Emi Nakamura; Jón Steinsson |
Abstract: | Forecasts of professional forecasters are anomalous: they are biased, forecast errors are autocorrelated, and forecast revisions predict forecast errors. Sticky or noisy information models seem like unlikely explanations for these anomalies: professional forecasters pay attention constantly and have precise knowledge of the data in question. We propose that these anomalies arise because professional forecasters don’t know the model that generates the data. We show that Bayesian agents learning about hard-to-learn features of the data generating process (low frequency behavior) can generate all the prominent aggregate anomalies emphasized in the literature. We show this for two applications: professional forecasts of nominal interest rates for the sample period 1980–2019 and CBO forecasts of GDP growth for the sample period 1976–2019. Our learning model for interest rates also provides an explanation for deviations from the expectations hypothesis of the term structure that does not rely on time-variation in risk premia. |
JEL: | E37 E47 G12 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:29495&r= |
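The mechanism the abstract describes, rational agents learning slowly about low-frequency features of the data-generating process, can be illustrated with a toy example of my own construction, not the paper's model: a Bayesian agent knows the AR(1) persistence but must learn the long-run mean, which the data reveal only slowly, so forecast errors retain a predictable component.

```python
# Toy illustration (not the paper's model): a Bayesian agent knows the AR(1)
# persistence but must learn the long-run mean, which the data reveal slowly.
import numpy as np

rng = np.random.default_rng(1)
T, rho, mu_true, sigma = 400, 0.95, 5.0, 0.5

# Simulate the data-generating process.
y = np.zeros(T)
y[0] = mu_true
for t in range(1, T):
    y[t] = mu_true + rho * (y[t - 1] - mu_true) + sigma * rng.standard_normal()

# Agent's prior over the unknown long-run mean (assumed values).
mu_mean, mu_var = 0.0, 1.0
errors = []
for t in range(1, T):
    # One-step-ahead forecast given current beliefs about the mean.
    forecast = mu_mean + rho * (y[t - 1] - mu_mean)
    errors.append(y[t] - forecast)
    # Conjugate normal update: y[t] - rho*y[t-1] = (1-rho)*mu + noise.
    obs, h = y[t] - rho * y[t - 1], 1.0 - rho
    post_var = 1.0 / (1.0 / mu_var + h**2 / sigma**2)
    mu_mean = post_var * (mu_mean / mu_var + h * obs / sigma**2)
    mu_var = post_var

errors = np.asarray(errors)
# Slow learning leaves a predictable component: in this toy setup the errors
# are biased and mildly serially correlated despite optimal updating.
print("mean error:", errors.mean())
print("lag-1 autocorrelation:", np.corrcoef(errors[1:], errors[:-1])[0, 1])
```

In the paper the hard-to-learn feature is the low-frequency behaviour of interest rates and GDP growth rather than a fixed mean, but the logic is similar: optimal updating combined with weak identification of long-run properties leaves systematic structure in forecast errors.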
By: | Clements, Adam (Queensland University of Technology, Australia); Hurn, Stan (Queensland University of Technology, Australia); Volkov, Vladimir (Tasmanian School of Business & Economics, University of Tasmania) |
Abstract: | Forecasting intraday trading volume is an important problem in economics and finance. One influential approach to this problem is the non-linear Component Multiplicative Error Model (CMEM), which captures time series dependence and intraday periodicity in volume. While that model is well suited to dealing with a non-negative time series, it is relatively cumbersome to implement. This paper proposes a system of linear equations, estimated by ordinary least squares, that provides forecasting performance at least as good as that of the CMEM. This linear specification can easily be applied to model any time series that exhibits diurnal behaviour. |
Keywords: | Volume, forecasting, high-frequency data, CMEM, diurnal |
JEL: | C22 G00 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:tas:wpaper:38716&r= |
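The abstract does not reproduce the proposed system, but the flavour of a linear, OLS-estimated specification for a diurnal series can be sketched as follows; the simulated data, variable names, and choice of regressors are my assumptions rather than the authors' model. Lags capture persistence while time-of-day dummies soak up the intraday periodicity.

```python
# Hedged sketch (not the authors' exact system): intraday volume regressed on
# its own lags plus time-of-day dummies, estimated in a single OLS pass.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
days, bins = 60, 26            # e.g. 26 fifteen-minute bins per trading day
n = days * bins
tod = np.tile(np.arange(bins), days)

# Simulate volume with a U-shaped diurnal pattern plus persistence.
diurnal = 1.0 + 0.8 * np.cos(2 * np.pi * tod / bins) ** 2
vol = np.empty(n)
vol[0] = diurnal[0]
for t in range(1, n):
    vol[t] = 0.5 * vol[t - 1] + diurnal[t] + 0.3 * rng.standard_normal() ** 2

df = pd.DataFrame({"vol": vol, "tod": tod})
df["lag1"] = df["vol"].shift(1)
df["lag_day"] = df["vol"].shift(bins)    # same bin on the previous day
df = df.dropna()

# Time-of-day dummies capture the periodic component linearly.
X = pd.get_dummies(df["tod"], prefix="bin", drop_first=True, dtype=float)
X[["lag1", "lag_day"]] = df[["lag1", "lag_day"]]
X = sm.add_constant(X)

ols = sm.OLS(df["vol"], X).fit()
print(ols.params[["const", "lag1", "lag_day"]])
```

Because everything enters linearly, estimation reduces to ordinary least squares, which is the kind of simplicity gain over the multiplicative CMEM that the abstract emphasises.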
By: | Luxuan Yang; Ting Gao; Yubin Lu; Jinqiao Duan; Tao Liu |
Abstract: | With the fast development of modern deep learning techniques, the study of dynamic systems and of neural networks increasingly benefit each other in many ways. Since uncertainties often arise in real-world observations, stochastic differential equations (SDEs) play an important role. More specifically, in this paper we use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series that exhibit large jumps and substantial shifts in their probability distribution. Our contributions are as follows. First, we use the phase space reconstruction method to extract the intrinsic dimension of the time series data and thereby determine the input structure of our forecasting model. Second, we explore SDEs driven by $\alpha$-stable L\'evy motion to model the time series data and solve the problem through neural network approximation. Third, we construct an attention mechanism to achieve multi-time-step prediction. Finally, we illustrate our method by applying it to stock market time series prediction and show that it outperforms several baseline deep learning models. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.13164&r= |
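Two of the ingredients named in the abstract, the phase-space (time-delay) embedding used to build the model's inputs and the α-stable Lévy noise that produces large jumps, can each be sketched in a few lines. The snippet below illustrates only those ingredients, with illustrative parameter values and a toy SDE of my own choosing; it is not the authors' full neural-SDE-with-attention model.

```python
# Illustrative only: (i) a time-delay ("phase space") embedding and
# (ii) an Euler-type simulation of an SDE driven by alpha-stable Levy noise.
import numpy as np
from scipy.stats import levy_stable

def delay_embed(x, dim, tau):
    """Rows are delay vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# (ii) dX_t = -theta * X_t dt + sigma dL_t, with alpha-stable increments that
# generate the occasional large jumps the abstract refers to.
alpha, theta, sigma, dt, T = 1.7, 0.5, 0.3, 0.01, 2000
jumps = levy_stable.rvs(alpha, 0.0, scale=dt ** (1.0 / alpha), size=T, random_state=7)
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * jumps[t]

# (i) Each row of `inputs` is one lagged input vector for a downstream
# forecasting network; `dim` would come from the reconstruction step.
inputs = delay_embed(x, dim=3, tau=5)
print(inputs.shape)   # (1990, 3)
```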
By: | Joshua C. C. Chan; Gary Koop; Xuewen Yu |
Abstract: | Many popular specifications for Vector Autoregressions (VARs) with multivariate stochastic volatility are not invariant to the way the variables are ordered, due to the use of a Cholesky decomposition for the error covariance matrix. We show that the order-invariance problem in existing approaches is likely to become more serious in large VARs. We propose a specification that avoids this Cholesky decomposition. We show that the presence of multivariate stochastic volatility allows for identification of the proposed model and prove that it is invariant to ordering. We develop a Markov chain Monte Carlo algorithm that allows for Bayesian estimation and prediction. In exercises involving artificial and real macroeconomic data, we demonstrate that the choice of variable ordering can have non-negligible effects on empirical results. In a macroeconomic forecasting exercise involving VARs with 20 variables, we find that our order-invariant approach leads to the best forecasts, and that some choices of variable ordering can lead to poor forecasts under a conventional, non-order-invariant approach. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.07225&r= |
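The ordering problem the abstract starts from is easy to see numerically: a Cholesky factorisation of the error covariance matrix is not invariant to permuting the variables, so any model or prior written in terms of that factor inherits the ordering. The sketch below is a generic numerical illustration of this point, not the authors' proposed specification.

```python
# Small numerical illustration (not the paper's model) of why Cholesky-based
# error covariance specifications depend on variable ordering.
import numpy as np

# An arbitrary 3-variable error covariance matrix (illustrative values).
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 2.0, 0.7],
                  [0.3, 0.7, 1.5]])

L = np.linalg.cholesky(Sigma)            # Sigma = L L'

# Reorder the variables as (3, 1, 2), factorise, then undo the reordering.
perm = [2, 0, 1]
P = np.eye(3)[perm]
L_perm = np.linalg.cholesky(P @ Sigma @ P.T)
L_back = P.T @ L_perm @ P                # maps the permuted factor back

# Both factorisations reproduce Sigma...
print(np.allclose(L @ L.T, Sigma), np.allclose(L_back @ L_back.T, Sigma))
# ...but the factors themselves differ, so anything specified in terms of the
# Cholesky factor (shrinkage priors, volatility processes on its elements)
# changes with the ordering of the variables.
print(np.allclose(L, L_back))
```

The paper's contribution is a stochastic-volatility specification that dispenses with the Cholesky factor altogether while remaining identified and provably order-invariant.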