on Forecasting |
By: | Gustavo A. Sánchez (StataCorp) |
Abstract: | After one fits regression models, it is quite common to produce out-of-sample forecasts to evaluate the predictive accuracy of the model or simply to estimate the expected behavior of one or more dependent variables (assuming that the model is valid beyond the estimation sample). I selected a few examples to illustrate some of the tools available in Stata to produce single or joint forecasts based on parameter estimates from a set of regression models. |
Date: | 2013–05–13 |
URL: | http://d.repec.org/n?u=RePEc:boc:msug13:06&r=for |
By: | Poghosyan, K. |
Abstract: | The thesis deals with structural and reduced-form modeling and forecasting of key macroeconomic variables (real growth of GDP, inflation, exchange rate, and policy interest rate). The central part of the thesis (Chapters 2-4) consists of three chapters. Chapter 2 considers the structural DSGE model and its forecasting possibilities. Chapter 3 considers the dynamic factor model (DFM) and its forecasting possibilities. Finally, Chapter 4 compares these two popular forecasting approaches, and describes which modeling approach gives more accurate and reliable forecasting results. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:12-5590845&r=for |
By: | Johannes Tang Kristensen (Aarhus University and CREATES) |
Abstract: | The use of large-dimensional factor models in forecasting has received much attention in the literature, with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs to be taken when choosing which variables to include in the model. A number of different approaches to determining these variables have been put forward. These are, however, often based on ad hoc procedures or abandon the underlying theoretical factor model. In this paper we take a different approach to the problem by using the LASSO as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model which is better suited for forecasting compared to the traditional principal components (PC) approach. We provide an asymptotic analysis of the estimator and illustrate its merits empirically in a forecasting experiment based on US macroeconomic data. Overall we find that, compared to PC, we obtain improvements in forecasting accuracy and thus find the method to be an important alternative to PC. |
Keywords: | Forecasting, Factor Models, Principal Components Analysis, LASSO |
JEL: | C38 C53 E27 E37 |
Date: | 2013–03–07 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-22&r=for |
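The two-step idea described in the abstract above — LASSO to pick variables and obtain sparse loadings, then principal components on the selected subset to form diffusion indexes for a forecast regression — can be sketched roughly as follows. This is an illustrative toy, not the paper's estimator; the simulated data, the penalty `alpha=0.1`, and the choice of two components are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated panel: T observations of N candidate predictors and a target y;
# only the first 5 predictors actually matter (hypothetical setup).
T, N = 200, 50
X = rng.standard_normal((T, N))
y = X[:, :5].sum(axis=1) + rng.standard_normal(T)

# Step 1: LASSO screens the predictors, zeroing out irrelevant loadings.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Step 2: extract principal-component factors from the selected subset only,
# giving a sparser, more parsimonious factor model than PC on all N series.
pca = PCA(n_components=2)
factors = pca.fit_transform(X[:, selected])

# Step 3: a forecast regression on the sparse factors (in-sample OLS here).
Z = np.column_stack([np.ones(T), factors])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ beta
```

In a real forecasting experiment the LASSO penalty would be tuned (e.g. by information criterion or cross-validation) and the final regression fit on a training window only.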
By: | Laurent Ferrara; Clément Marsilli; Juan-Pablo Ortega |
Abstract: | The Great Recession endured by the main industrialized countries during the period 2008-2009, in the wake of the financial and banking crisis, has pointed out the major role of the financial sector in macroeconomic fluctuations. In this paper, we reconsider macrofinancial linkages by assessing the leading role of the daily volatility of two major financial variables, namely commodity and stock prices, in their ability to anticipate US GDP growth. For this purpose, an extended MIDAS model is proposed that allows the forecasting of the quarterly growth rate using exogenous variables sampled at various higher frequencies. Empirical results show that using both daily financial volatilities and monthly industrial production is helpful for predicting quarterly GDP growth over the Great Recession period. |
Keywords: | GDP forecasting, financial volatility, MIDAS approach |
JEL: | C53 E37 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:drm:wpaper:2013-19&r=for |
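The core MIDAS mechanics the abstract above refers to — collapsing a block of high-frequency observations into a single low-frequency regressor through a parametric lag-weight function — can be sketched as below. All numbers are simulated and hypothetical; the exponential Almon weight function is one common MIDAS choice, not necessarily the exact specification of the paper's extended model.

```python
import numpy as np

def exp_almon_weights(k, theta1, theta2):
    """Exponential Almon lag weights over k high-frequency lags (sum to 1)."""
    j = np.arange(1, k + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

rng = np.random.default_rng(1)

# Illustrative data: 40 quarters, each with 60 daily volatility observations
# (hypothetical numbers, not the paper's commodity/stock price data).
n_q, k = 40, 60
daily_vol = rng.gamma(2.0, 0.5, size=(n_q, k))

# MIDAS step: weight each quarter's daily block into one regressor, then run
# a standard low-frequency regression of quarterly GDP growth on it.
w = exp_almon_weights(k, theta1=0.05, theta2=-0.002)
x_midas = daily_vol @ w                       # one weighted value per quarter
gdp = 0.5 - 0.3 * x_midas + 0.1 * rng.standard_normal(n_q)

X = np.column_stack([np.ones(n_q), x_midas])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)
```

In practice the weight parameters `theta1`, `theta2` are estimated jointly with the regression coefficients by nonlinear least squares rather than fixed as here.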
By: | Andrew C Worthington; Helen Higgs |
Keywords: | wholesale spot electricity prices, generation mix, emission controls and taxation |
JEL: | Q48 C33 D40 Q41 |
Date: | 2013–06 |
URL: | http://d.repec.org/n?u=RePEc:gri:fpaper:finance:201306&r=for |
By: | Chia-Lin Chang; David E. Allen; Michael McAleer (University of Canterbury); Teodosio Perez Amaral |
Abstract: | The papers in this special issue of Mathematics and Computers in Simulation are substantially revised versions of the papers that were presented at the 2011 Madrid International Conference on “Risk Modeling and Management” (RMM2011). The papers cover the following topics: currency hedging strategies using dynamic multivariate GARCH, risk management of risk under the Basel Accord: A Bayesian approach to forecasting value-at-risk of VIX futures, fast clustering of GARCH processes via Gaussian mixture models, GFC-robust risk management under the Basel Accord using extreme value methodologies, volatility spillovers from the Chinese stock market to economic neighbours, a detailed comparison of Value-at-Risk estimates, the dynamics of BRICS's country risk ratings and domestic stock markets, U.S. stock market and oil price, forecasting value-at-risk with a duration-based POT method, and extreme market risk and extreme value theory. |
Keywords: | Currency hedging strategies, Basel Accord, risk management, forecasting, VIX futures, fast clustering, mixture models, extreme value methodologies, volatility spillovers, Value-at-Risk, country risk ratings, BRICS, extreme market risk |
JEL: | C14 C32 C53 C58 G11 G32 |
Date: | 2013–06–26 |
URL: | http://d.repec.org/n?u=RePEc:cbt:econwp:13/22&r=for |
By: | Chia-Lin Chang (Department of Applied Economics Department of Finance National Chung Hsing University Taiwan); David E. Allen (School of Accounting, Finance and Economics Edith Cowan University Australia); Michael McAleer (Econometric Institute Erasmus School of Economics Erasmus University Rotterdam and Tinbergen Institute The Netherlands and Department of Quantitative Economics Complutense University of Madrid Spain and Institute of Economic Research Kyoto University Japan); Teodosio Perez Amaral (Department of Quantitative Economics Complutense University of Madrid Spain) |
Abstract: | The papers in this special issue of Mathematics and Computers in Simulation are substantially revised versions of the papers that were presented at the 2011 Madrid International Conference on “Risk Modelling and Management” (RMM2011). The papers cover the following topics: currency hedging strategies using dynamic multivariate GARCH, risk management of risk under the Basel Accord: A Bayesian approach to forecasting value-at-risk of VIX futures, fast clustering of GARCH processes via Gaussian mixture models, GFC-robust risk management under the Basel Accord using extreme value methodologies, volatility spillovers from the Chinese stock market to economic neighbours, a detailed comparison of Value-at-Risk estimates, the dynamics of BRICS's country risk ratings and domestic stock markets, U.S. stock market and oil price, forecasting value-at-risk with a duration-based POT method, and extreme market risk and extreme value theory. |
Keywords: | Currency hedging strategies, Basel Accord, risk management, forecasting, VIX futures, fast clustering, mixture models, extreme value methodologies, volatility spillovers, Value-at-Risk, country risk ratings, BRICS, extreme market risk. |
JEL: | C14 C32 C53 C58 G11 G32 |
Date: | 2013–07 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:872&r=for |
By: | Ardelean, Vlad; Pleier, Thomas |
Abstract: | Nonparametric prediction of time series is a viable alternative to parametric prediction, since parametric prediction relies on the correct specification of the process, its order, and the distribution of the innovations. Often these are not known and have to be estimated from the data. Another source of nuisance can be the occurrence of outliers. By using nonparametric methods we circumvent both problems: the specification of the process and the occurrence of outliers. In this article we compare the predictive power of parametric prediction, semiparametric prediction, and nonparametric methods such as support vector machines and pattern recognition. To measure predictive power we use the MSE. Furthermore, we test whether the increase in predictive power is statistically significant. |
Keywords: | Parametric prediction, Nonparametric prediction, Support Vector Regression, Outliers |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:052013&r=for |
By: | Leonardo Becchetti (University of Rome "Tor Vergata"); Rocco Ciciretti (University of Rome "Tor Vergata"); Iftekhar Hasan (Fordham University and Bank of Finland) |
Abstract: | Idiosyncratic volatility (IV) is regarded as a measure of firm-specific information and has been shown to be correlated with ex post lower stock returns. We explore the nexus between IV and corporate social responsibility (CSR) and document that IV is positively correlated with net aggregate CSR and is negatively correlated with a CSR-specific risk factor (namely stakeholder risk) and with the standard error of the absolute earning forecast error. Our findings show that: (i) less (more) reliance on market information (firm-specific information) implies more difficulty in predictive accuracy; (ii) the negative correlation between IV and exposure to the above-mentioned CSR risk dimension helps to explain the puzzle of the negative excess returns of high-IV portfolios widely documented in the literature. Our findings are consistent with the hypothesis that CSR reduces flexibility in responding to productive shocks via reduction of stakeholders’ wellbeing, thereby making earnings less predictable in conventional ways, even though they are less exposed to the risk of conflicts with stakeholders. |
Keywords: | idiosyncratic volatility, corporate social responsibility, earning forecast bias. |
JEL: | D84 E44 F30 G17 C53 |
Date: | 2013–07–04 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:285&r=for |
By: | Nick Beyler |
Keywords: | Physical Activity, NHANES, accelerometry, validation study, estimated generalized least squares |
JEL: | C |
Date: | 2013–06–30 |
URL: | http://d.repec.org/n?u=RePEc:mpr:mprres:7811&r=for |
By: | Asger Lunde (Aarhus University and CREATES); Anne Floor Brix (Aarhus University and CREATES) |
Abstract: | In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods without noise correction is studied. Second, a noise-robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability to correctly account for the noise is investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF-based estimator in practice. |
Keywords: | GMM estimation, Heston model, high-frequency data, integrated volatility, market microstructure noise, prediction-based estimating functions, realized variance, realized kernel |
JEL: | C13 C22 C51 |
Date: | 2013–02–07 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-23&r=for |
By: | Ben Jann (University of Bern) |
Abstract: | Tables of estimated regression coefficients, usually accompanied by additional information such as standard errors, t statistics, p-values, confidence intervals, or significance stars, have long been the preferred way of communicating results from statistical models. In recent years, however, the limits of this form of exposition have been increasingly recognized. For example, interpretation of regression tables can be very challenging in the presence of complications such as interaction effects, categorical variables, or nonlinear functional forms. Furthermore, while these issues might still be manageable in the case of linear regression, interpretational difficulties can be overwhelming in nonlinear models (for example, logistic regression). To facilitate sensible interpretation of these models, one must often compute additional results such as marginal effects, predictive margins, or contrasts. Moreover, smart graphical displays of results can be very valuable in making complex relations accessible. A number of helpful commands geared at supporting these tasks have been recently introduced in Stata, making elaborate interpretation and communication of regression results possible without much extra effort. Examples of these commands are margins, contrasts, and marginsplot. In my talk, I will discuss the capabilities of these commands and present a range of examples illustrating their use. |
Date: | 2013–07–03 |
URL: | http://d.repec.org/n?u=RePEc:boc:dsug13:11&r=for |
By: | Gregor Wergen |
Abstract: | We study the statistics of record-breaking events in daily stock prices of 366 stocks from the Standard and Poor's 500 stock index. Both the record events in the daily stock prices themselves and the records in the daily returns are discussed. In both cases we try to describe the record statistics of the stock data with simple theoretical models. The daily returns are compared to i.i.d. random variables, and the stock prices are modeled using a biased random walk, for which the record statistics are known. These models agree partly with the behavior of the stock data, but we also identify several interesting deviations. Most importantly, the number of records in the stocks appears to be systematically decreased in comparison with the random walk model. Considering the autoregressive AR(1) process, we can predict the record statistics of the daily stock prices more accurately. We also compare the stock data with simulations of the record statistics of the more complicated GARCH(1,1) model, which, in combination with the AR(1) model, gives the best agreement with the observational data. To better understand our findings, we discuss the survival and first-passage times of stock prices on certain intervals and analyze the correlations between the individual record events. After recapitulating some recent results for the record statistics of ensembles of N stocks, we also present some new observations for the weekly distributions of record events. |
Date: | 2013–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1307.2048&r=for |
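The basic object in the abstract above — counting upper records, i.e. observations strictly above every previous value, for a biased random walk versus an AR(1) return series — can be sketched as follows. The drift, noise scale, and AR coefficient are hypothetical values chosen for illustration, not estimates from the stock data.

```python
import numpy as np

def count_records(series):
    """Number of upper records: entries strictly above all previous values.
    The first observation counts as a record by convention."""
    running_max = np.maximum.accumulate(series)
    return 1 + int(np.sum(series[1:] > running_max[:-1]))

rng = np.random.default_rng(2)
n, drift = 1000, 0.02

# Biased random walk as a toy stock-price model (assumed parameters):
# positive drift makes records far more frequent than in the driftless case.
walk = np.cumsum(drift + rng.standard_normal(n))

# AR(1) series as a toy return model; for a stationary sequence the expected
# record count grows only logarithmically in n.
phi = 0.3
ret = np.empty(n)
ret[0] = rng.standard_normal()
for t in range(1, n):
    ret[t] = phi * ret[t - 1] + rng.standard_normal()

n_rec_walk = count_records(walk)
n_rec_ret = count_records(ret)
```

Comparing such simulated record counts against the empirical counts from price and return series is the kind of model-versus-data check the paper performs with its random walk, AR(1), and GARCH(1,1) benchmarks.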