Papers on Econometric Time Series
By: | Gregor Zens; Maximilian Böck
Abstract: | This paper investigates the role of high-dimensional information sets in the context of Markov switching models with time-varying transition probabilities. Markov switching models are commonly employed in empirical macroeconomic research and policy work. However, the information used to model the switching process is usually limited drastically to ensure stability of the model. Increasing the number of included variables to enlarge the information set might even result in decreasing precision of the model. Moreover, it is often not clear a priori which variables are actually relevant for informing the switching behavior. Building strongly on recent contributions in the field of dynamic factor analysis, we introduce a general type of Markov switching autoregressive model for non-linear time series analysis. Large numbers of time series are allowed to inform the switching process through a factor structure. This factor-augmented Markov switching (FAMS) model overcomes estimation issues that are likely to arise in previous assessments of the modeling framework. The result is more accurate estimates of the switching behavior as well as improved model fit. The performance of the FAMS model is illustrated in a simulated data example as well as in a US business cycle application.
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.13194&r=all |
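A minimal illustrative sketch of the factor-augmented switching mechanism (not the paper's estimation procedure): a single factor extracted from a large panel by principal components drives the transition probabilities of a two-state Markov switching AR(1) through a logistic link, and filtered regime probabilities are computed with a Hamilton filter at fixed, hypothetical parameter values.

import numpy as np

def pca_factor(panel):
    """First principal component of a (T x N) standardized panel."""
    z = (panel - panel.mean(0)) / panel.std(0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]                                   # (T,) factor series

def tvtp_hamilton_filter(y, f, mu=(0.5, -1.0), phi=(0.3, 0.3), sigma=(0.5, 1.5),
                         gamma=((2.0, 1.0), (2.0, -1.0))):
    """Filtered P(regime = 0 | data) with logistic time-varying transition probabilities."""
    T = len(y)
    prob = np.empty(T)
    xi = np.array([0.5, 0.5])                          # initial regime probabilities
    prob[0] = xi[0]
    for t in range(1, T):
        p00 = 1.0 / (1.0 + np.exp(-(gamma[0][0] + gamma[0][1] * f[t])))
        p11 = 1.0 / (1.0 + np.exp(-(gamma[1][0] + gamma[1][1] * f[t])))
        P = np.array([[p00, 1.0 - p00], [1.0 - p11, p11]])    # rows index the origin state
        pred = P.T @ xi                                        # predicted regime probabilities
        dens = np.array([np.exp(-0.5 * ((y[t] - mu[s] - phi[s] * y[t - 1]) / sigma[s]) ** 2)
                         / sigma[s] for s in (0, 1)])
        post = pred * dens
        xi = post / post.sum()
        prob[t] = xi[0]
    return prob

# toy usage: a simulated 50-variable panel informs the switching of a simulated series
rng = np.random.default_rng(0)
panel = rng.standard_normal((300, 50))
y = np.cumsum(rng.standard_normal(300)) * 0.1
print(tvtp_hamilton_filter(y, pca_factor(panel))[-5:])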
By: | Peter C. B. Phillips; Zhentao Shi |
Abstract: | The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks -- the most common trends that appear in macroeconomic data and current modeling methodology. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.00175&r=all |
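A minimal sketch of ordinary HP filtering and the boosted (iterated) HP filter described above: the smoother is applied repeatedly to the remaining cycle, so that after m iterations the cycle equals (I - S)^m y. The smoothing parameter and the fixed iteration count are illustrative; the paper instead automates the number of iterations with a data-determined stopping criterion.

import numpy as np
from scipy import sparse

def hp_smoother_matrix(T, lam=1600.0):
    """Return the linear operator S with trend = S @ y for the standard HP filter."""
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(T - 2, T))   # second differences
    A = sparse.eye(T) + lam * (D.T @ D)
    return np.linalg.inv(A.toarray())          # dense inverse is fine for small T

def boosted_hp(y, lam=1600.0, iterations=5):
    """Iterate the HP smoother on the cycle: c_m = (I - S) c_{m-1}, trend = y - c_m."""
    S = hp_smoother_matrix(len(y), lam)
    c = y - S @ y                               # ordinary HP cycle (m = 1)
    for _ in range(iterations - 1):
        c = c - S @ c                           # re-smooth the remaining cycle
    return y - c, c                             # (trend, cycle)

# toy usage: random-walk-plus-noise series
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
trend, cycle = boosted_hp(y, lam=1600.0, iterations=5)
print(trend[:3], cycle[:3])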
By: | Kapetanios, George (King's College, London); Millard, Stephen (Bank of England); Petrova, Katerina (St Andrews University); Price, Simon (Essex Business School) |
Abstract: | We re-examine the great ratios associated with balanced growth models and ask whether they have remained constant over time. Having first looked at whether Kaldor’s stylised facts still apply to the UK data, we employ a nonparametric methodology that allows for slowly varying coefficients to estimate trends over time. We formally test for stable relationships in the great ratios with a new statistical test based on these nonparametric estimators, designed to detect time-varying cointegrating relationships. Small-sample properties of the test are explored in a Monte Carlo exercise. Generally, we find little evidence for cointegration when parameters are constant, but strong evidence when allowing for time variation. The implication is that in macroeconometric models (including DSGE models), provision should be made to explicitly accommodate such shifting long-run relationships.
Keywords: | Time variation; great ratios; cointegration |
JEL: | C14 C26 C51 O40 |
Date: | 2019–04–18 |
URL: | http://d.repec.org/n?u=RePEc:boe:boeewp:0789&r=all |
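The slowly-varying-coefficient idea can be illustrated with a simple kernel smoother. The hedged sketch below estimates a time-varying mean of a simulated "great ratio" in rescaled time; the Gaussian kernel, the bandwidth, and the simulated series are illustrative assumptions, and the paper's cointegration test statistic itself is not reproduced.

import numpy as np

def local_constant_path(x, bandwidth=0.1):
    """Nadaraya-Watson estimate of a slowly varying mean in rescaled time r = t/T."""
    T = len(x)
    grid = np.arange(T) / T
    est = np.empty(T)
    for i, r in enumerate(grid):
        w = np.exp(-0.5 * ((grid - r) / bandwidth) ** 2)   # Gaussian kernel weights
        est[i] = np.sum(w * x) / np.sum(w)
    return est

# toy usage: a ratio that drifts slowly instead of staying constant
rng = np.random.default_rng(2)
T = 400
ratio = 0.6 + 0.1 * np.sin(np.arange(T) / T * np.pi) + 0.02 * rng.standard_normal(T)
print(local_constant_path(ratio)[::100])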
By: | A Clements; D Preve |
Abstract: | The standard heterogeneous autoregressive (HAR) model is perhaps the most popular benchmark model for forecasting return volatility. It is often estimated using raw realized variance (RV) and ordinary least squares (OLS). However, given the stylized facts of RV and the well-known properties of OLS, this combination should be far from ideal. One goal of this paper is to investigate how the predictive accuracy of the HAR model depends on the choice of estimator, transformation, and forecasting scheme made by the market practitioner. Another goal is to examine the effect of replacing its volatility proxy based on high-frequency data (RV) with a proxy based on free and publicly available low-frequency data (the logarithmic range). In an out-of-sample study covering three major stock market indices over 16 years, it is found that simple remedies systematically outperform not only standard HAR but also state-of-the-art HARQ forecasts, and that HAR models using the logarithmic range can often produce forecasts of similar quality to those based on RV.
Keywords: | Volatility forecasting; Realized variance; HAR model; HARQ model; Robust regression; Box-Cox transformation; Forecast comparisons; QLIKE loss; Model confidence set |
JEL: | C22 C51 C52 C53 C58 |
Date: | 2019–04–12 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2019_01&r=all |
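A minimal sketch of the standard HAR regression estimated by OLS, here applied to log realized variance (a log transformation being one of the simple remedies the paper considers instead of raw RV). The simulated series and the 1/5/22-day lag conventions are illustrative.

import numpy as np

def har_design(rv, weekly=5, monthly=22):
    """Build HAR regressors: constant, daily lag, weekly-average and monthly-average lags."""
    T = len(rv)
    rows, y = [], []
    for t in range(monthly, T):
        rows.append([1.0,
                     rv[t - 1],
                     rv[t - weekly:t].mean(),
                     rv[t - monthly:t].mean()])
        y.append(rv[t])
    return np.array(rows), np.array(y)

# toy usage on a simulated, slowly mean-reverting log-RV series
rng = np.random.default_rng(3)
log_rv = np.zeros(1000)
for t in range(1, 1000):
    log_rv[t] = 0.9 * log_rv[t - 1] + 0.3 * rng.standard_normal()
X, y = har_design(log_rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # OLS estimates (const, daily, weekly, monthly)
x_next = np.array([1.0, log_rv[-1], log_rv[-5:].mean(), log_rv[-22:].mean()])
print(beta, x_next @ beta)                         # one-step-ahead forecast of log RV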
By: | A Clements; M Doolan |
Abstract: | The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that are capable of placing varying emphasis on losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
Keywords: | Multivariate volatility, combination forecasts, forecast evaluation, model confidence set |
JEL: | C22 G00 |
Date: | 2018–12–11 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2018_02&r=all |
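A hedged sketch of a loss-based combination weighting scheme in the spirit described above, in which weights place geometrically more emphasis on recent forecast losses within the estimation window. The discount factor, the exponential mapping from discounted losses to weights, and the toy losses are illustrative assumptions, not necessarily the paper's exact schemes.

import numpy as np

def discounted_loss_weights(losses, discount=0.95, eta=1.0):
    """losses: (window, n_models) past losses; returns combination weights (n_models,)."""
    window = losses.shape[0]
    d = discount ** np.arange(window - 1, -1, -1)      # most recent loss gets weight 1
    avg = (d[:, None] * losses).sum(0) / d.sum()       # discounted average loss per model
    w = np.exp(-eta * (avg - avg.min()))               # lower loss maps to larger weight
    return w / w.sum()

# toy usage: model 2 has deteriorated recently, so it receives less weight
rng = np.random.default_rng(4)
losses = np.column_stack([rng.gamma(2.0, 1.0, 100),
                          np.linspace(1.0, 4.0, 100) + rng.gamma(1.0, 0.5, 100)])
print(discounted_loss_weights(losses))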
By: | Guglielmo Maria Caporale; Daria Teterkina |
Abstract: | This paper compares volatility forecasts for the RTS Index (the main index of the Russian stock market) generated by alternative models: option-implied volatility forecasts based on the Black-Scholes model, ARCH/GARCH-type model forecasts, and forecasts combining the two using a mixing strategy based on either a simple average or a weighted average, with the weights determined according to two different criteria (either minimizing forecast errors or maximizing information content). Various forecasting performance tests are carried out which suggest that both implied volatility and combination methods using a simple average outperform ARCH/GARCH-type models in terms of forecasting accuracy.
Keywords: | option-implied volatility, ARCH-type models, mixed strategies |
JEL: | C22 G12 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7612&r=all |
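The two combination ideas mentioned above can be sketched in a few lines: a simple average of the implied-volatility and GARCH-type forecasts, and a weighted average whose weight is chosen to minimize in-sample squared forecast errors (the error-minimizing criterion). All inputs below are simulated placeholders, and the information-content criterion is not shown.

import numpy as np

def error_minimizing_weight(iv_fc, garch_fc, realized):
    """Weight w on the IV forecast minimizing sum((realized - w*iv - (1-w)*garch)^2)."""
    diff = iv_fc - garch_fc
    w = np.sum((realized - garch_fc) * diff) / np.sum(diff ** 2)
    return np.clip(w, 0.0, 1.0)                        # restrict to a convex combination

# toy usage with simulated forecasts and realized volatility
rng = np.random.default_rng(5)
true_vol = 0.2 + 0.05 * rng.standard_normal(250)
iv_fc = true_vol + 0.02 * rng.standard_normal(250)     # implied-volatility forecast
garch_fc = true_vol + 0.04 * rng.standard_normal(250)  # ARCH/GARCH-type forecast
w = error_minimizing_weight(iv_fc, garch_fc, true_vol)
simple_avg = 0.5 * iv_fc + 0.5 * garch_fc
weighted = w * iv_fc + (1 - w) * garch_fc
print(w, np.mean((true_vol - simple_avg) ** 2), np.mean((true_vol - weighted) ** 2))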
By: | Suwanhirunkul, Suwijak; Masih, Mansur |
Abstract: | International diversification of equity holdings is important for investors who want to reduce the risk of capital loss. However, the evidence on international diversification benefits is inconclusive. This research is an initial attempt to assess the international diversification benefits of global Islamic equity (in the World, US, EU and Asia Pacific regions) from the perspective of conventional Southeast Asian investors. We use the relatively advanced and robust techniques of MGARCH-DCC and wavelet coherence. We find that, for Southeast Asia as a whole, very-short-term investors obtain clear diversification benefits from Islamic equities in the World, Europe and US regions but not in the Asia Pacific region. Short-term and medium-term investors face limited diversification benefits, while long-term investors benefit from Islamic Europe and US equity. Each Southeast Asian country has varying benefits at different investment horizons. The results of this study provide guidance for investors with different investment horizons on effectively diversifying their conventional stocks with Islamic equity.
Keywords: | portfolio diversification, Islamic equity, MGARCH-DCC, Wavelet coherence, Southeast Asia |
JEL: | C22 C58 G11 G15 |
Date: | 2018–12–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93542&r=all |
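A minimal sketch of the DCC correlation recursion underlying the MGARCH-DCC technique named above: given standardized (de-volatilized) residuals, the quasi-correlation matrix is updated as Q_t = (1-a-b)*Qbar + a*e_{t-1}e_{t-1}' + b*Q_{t-1} and rescaled to a correlation matrix. The parameters a and b are fixed at illustrative values rather than estimated, and wavelet coherence is not shown.

import numpy as np

def dcc_correlations(std_resid, a=0.05, b=0.93):
    """std_resid: (T, N) standardized residuals; returns (T, N, N) conditional correlations."""
    T, N = std_resid.shape
    Qbar = np.corrcoef(std_resid.T)                    # unconditional correlation target
    Q = Qbar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:
            e = std_resid[t - 1][:, None]
            Q = (1 - a - b) * Qbar + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)                      # rescale Q to a correlation matrix
    return R

# toy usage: two weakly correlated return series
rng = np.random.default_rng(6)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=500)
print(dcc_correlations(z)[-1])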
By: | Andrés García Medina; Graciela González-Farías
Abstract: | We determine the number of statistically significant factors in a forecast model using a test based on random matrix theory. The forecast model is of the reduced rank regression (RRR) type; in particular, we choose a variant that can be seen as canonical correlation analysis (CCA). As empirical data, we use cryptocurrency series at hourly frequency, with variable selection carried out using an information-theoretic criterion. The results are consistent with the usual visual inspection, with the advantage that the subjective element is avoided. Furthermore, the computational cost is minimal compared to the cross-validation approach.
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.00545&r=all |
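A hedged sketch of counting significant factors in a CCA-type reduced rank regression: sample canonical correlations between predictors and targets are compared with those obtained after shuffling the target rows. This shuffle-based null is a simple stand-in for the random-matrix-theory threshold used in the paper, and all data below are simulated placeholders.

import numpy as np

def canonical_correlations(X, Y):
    """Sample canonical correlations between the columns of X and Y."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
    A = np.linalg.solve(np.linalg.cholesky(Sxx), Sxy)          # whiten the X side
    B = np.linalg.solve(np.linalg.cholesky(Syy), A.T).T        # whiten the Y side
    return np.linalg.svd(B, compute_uv=False)                  # canonical correlations

def n_significant_factors(X, Y, n_shuffles=200, seed=0):
    rng = np.random.default_rng(seed)
    cc = canonical_correlations(X, Y)
    null_max = np.array([canonical_correlations(X, Y[rng.permutation(len(Y))])[0]
                         for _ in range(n_shuffles)])
    threshold = np.quantile(null_max, 0.95)                    # 95% quantile of the shuffle null
    return int(np.sum(cc > threshold))

# toy usage: Y loads on two linear combinations of X plus noise
rng = np.random.default_rng(7)
X = rng.standard_normal((500, 8))
Y = X[:, :2] @ rng.standard_normal((2, 5)) + rng.standard_normal((500, 5))
print(n_significant_factors(X, Y))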
By: | Bazhenov, Timofey; Fantazzini, Dean |
Abstract: | This work proposes to forecast the Realized Volatility (RV) and the Value-at-Risk (VaR) of the most liquid Russian stocks using GARCH, ARFIMA and HAR models, including both the implied volatility computed from options prices and Google Trends data. The in-sample analysis showed that only the implied volatility had a significant effect on the realized volatility across most stocks and estimated models, whereas Google Trends did not have any significant effect. The out-of-sample analysis highlighted that models including the implied volatility improved their forecasting performance, whereas models including internet search activity worsened their performance in several cases. Moreover, simple HAR and ARFIMA models without additional regressors often reported the best forecasts for the daily realized volatility and for the daily Value-at-Risk at the 1% probability level, thus showing that efficiency gains more than compensate for any possible model misspecification and parameter biases. Our empirical evidence shows that, in the case of Russian stocks, Google Trends does not capture any additional information beyond that already included in the implied volatility.
Keywords: | Forecasting; Realized Volatility; Value-at-Risk; Implied Volatility; Google Trends; GARCH; ARFIMA; HAR |
JEL: | C22 C51 C53 G17 G32 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93544&r=all |
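A short sketch of the HAR-X idea: the HAR regressors are augmented with a lagged implied-volatility term, and the resulting RV forecast is mapped into a 1% daily Value-at-Risk under an assumed zero-mean conditional normal return distribution. All series and parameter values are simulated placeholders rather than the paper's Russian stock data.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T = 600
log_rv = np.full(T, -9.0)
iv = np.full(T, -9.0)                                   # placeholder implied-volatility regressor
for t in range(1, T):
    log_rv[t] = -1.35 + 0.85 * log_rv[t - 1] + 0.3 * rng.standard_normal()
    iv[t] = log_rv[t] + 0.1 * rng.standard_normal()

# HAR-X design: constant, daily, weekly, monthly lagged log-RV plus lagged implied volatility
X, y = [], []
for t in range(22, T):
    X.append([1.0, log_rv[t - 1], log_rv[t - 5:t].mean(), log_rv[t - 22:t].mean(), iv[t - 1]])
    y.append(log_rv[t])
X, y = np.array(X), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # OLS estimates

# one-step-ahead RV forecast and the implied 1% daily VaR (reported as a positive loss quantile)
x_next = np.array([1.0, log_rv[-1], log_rv[-5:].mean(), log_rv[-22:].mean(), iv[-1]])
rv_forecast = np.exp(x_next @ beta)
var_1pct = -norm.ppf(0.01) * np.sqrt(rv_forecast)
print(rv_forecast, var_1pct)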
By: | Arturas Juodis (Faculty of Economics and Business, University of Groningen); Yiannis Karavias (Department of Economics, University of Birmingham) |
Abstract: | The power of Granger non-causality tests in panel data depends on the type of the alternative hypothesis: feedback from other variables might be homogeneous, homogeneous within groups, or heterogeneous across different panel units. Existing tests have power against only one of these alternatives and may fail to reject the null hypothesis if the specified type of alternative is incorrect. This paper proposes a new Union-Intersection (UI) test which has correct size and good power against any type of alternative. The UI test is based on an existing test which is powerful against heterogeneous alternatives and a new Wald-type test which is powerful against homogeneous alternatives. The Wald test is designed to have good size and power properties for moderate to large time series dimensions and is based on a bias-corrected split-panel jackknife-type estimator. Evidence from simulations confirms that the new UI test provides power against any direction of the alternative.
Keywords: | Panel Data, Granger Causality, VAR |
JEL: | C13 C33 |
Date: | 2019–04–24 |
URL: | http://d.repec.org/n?u=RePEc:lie:wpaper:59&r=all |
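A heavily simplified sketch of the homogeneous-alternative building block: a pooled fixed-effects Granger regression combined with a half-panel (split-panel) jackknife bias correction and a Wald-type statistic. The single-lag structure, the homoskedastic variance estimate, and all simulated inputs are illustrative assumptions and do not reproduce the paper's UI test.

import numpy as np

def fe_granger_coef(y, x):
    """Within-groups estimate of (rho, beta) in y_it = a_i + rho*y_i,t-1 + beta*x_i,t-1 + e_it."""
    Y, X = [], []
    for i in range(y.shape[0]):
        dep = y[i, 1:]
        reg = np.column_stack([y[i, :-1], x[i, :-1]])
        Y.append(dep - dep.mean())                       # within transformation per unit
        X.append(reg - reg.mean(0))
    Y, X = np.concatenate(Y), np.vstack(X)
    theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ theta
    cov = np.linalg.inv(X.T @ X) * resid.var()           # simplified homoskedastic covariance
    return theta, cov

def jackknife_wald(y, x):
    T = y.shape[1]
    theta_full, cov = fe_granger_coef(y, x)
    theta_a, _ = fe_granger_coef(y[:, : T // 2], x[:, : T // 2])
    theta_b, _ = fe_granger_coef(y[:, T // 2:], x[:, T // 2:])
    theta_bc = 2 * theta_full - 0.5 * (theta_a + theta_b)   # split-panel jackknife correction
    wald = theta_bc[1] ** 2 / cov[1, 1]                     # test beta = 0 (1 degree of freedom)
    return theta_bc, wald

# toy usage: x Granger-causes y homogeneously across units
rng = np.random.default_rng(9)
N, T = 50, 60
x = rng.standard_normal((N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = 0.4 * y[:, t - 1] + 0.2 * x[:, t - 1] + rng.standard_normal(N)
print(jackknife_wald(y, x))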
By: | Tetsuya Takaishi |
Abstract: | Recent studies have found that the log-volatility of asset returns exhibits roughness. This study investigates the roughness, or anti-persistence, of Bitcoin volatility. Using multifractal detrended fluctuation analysis, we obtain the generalized Hurst exponent of the log-volatility increments and find that it is less than $1/2$, which indicates that the log-volatility increments are rough. Furthermore, we find that the generalized Hurst exponent is not constant. This observation indicates that the log-volatility has a multifractal property. Using shuffled time series of the log-volatility increments, we infer that the source of multifractality partly comes from the distributional properties of the increments.
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.12346&r=all |
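A compact sketch of multifractal detrended fluctuation analysis (MFDFA), the technique named above, estimating the generalized Hurst exponent h(q) as the scaling slope of the q-th order fluctuation function. The scales, the q grid, and the i.i.d. toy input (for which h(q) should be close to 1/2) are illustrative choices.

import numpy as np

def mfdfa_hurst(x, scales, qs, order=1):
    """Return h(q) for each q via polynomial-detrended fluctuation analysis."""
    profile = np.cumsum(x - np.mean(x))
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        rms = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.mean((seg - trend) ** 2))          # squared fluctuation of this segment
        rms = np.array(rms)
        for i, q in enumerate(qs):
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms))) # logarithmic average for q = 0
            else:
                F[i, j] = np.mean(rms ** (q / 2.0)) ** (1.0 / q)
    # h(q): slope of log F_q(s) against log s
    return np.array([np.polyfit(np.log(scales), np.log(F[i]), 1)[0] for i in range(len(qs))])

# toy usage: i.i.d. increments should give h(q) close to 0.5 for all q
rng = np.random.default_rng(10)
x = rng.standard_normal(5000)
print(mfdfa_hurst(x, scales=np.array([16, 32, 64, 128, 256]),
                  qs=np.array([-4.0, -2.0, 0.0, 2.0, 4.0])))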
By: | Freyaldenhoven, Simon (Federal Reserve Bank of Philadelphia) |
Abstract: | I extend the theory on factor models by incorporating local factors into the model. Local factors only affect an unknown subset of the observed variables. This implies a continuum of eigenvalues of the covariance matrix, as is commonly observed in applications. I derive which factors are pervasive enough to be economically important and which factors are pervasive enough to be estimable using the common principal component estimator. I then introduce a new class of estimators to determine the number of those relevant factors. Unlike existing estimators, my estimators use not only the eigenvalues of the covariance matrix, but also its eigenvectors. I find strong evidence of local factors in a large panel of US macroeconomic indicators. |
Keywords: | high-dimensional data; factor models; weak factors; local factors; sparsity |
JEL: | C38 C52 |
Date: | 2019–04–19 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:19-23&r=all |
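As an illustration of why eigenvectors carry information about local factors, the hedged sketch below computes, for a simulated panel with one pervasive and one local factor, the leading eigenvalues of the correlation matrix together with the inverse participation ratio of each eigenvector. The concentration diagnostic is a generic one, not the estimator proposed in the paper, and all data are simulated placeholders.

import numpy as np

def eigen_diagnostics(panel, k=5):
    """Top-k eigenvalues of the sample correlation matrix and loading-concentration measures."""
    z = (panel - panel.mean(0)) / panel.std(0)
    vals, vecs = np.linalg.eigh(np.corrcoef(z.T))
    order = np.argsort(vals)[::-1][:k]
    evals = vals[order]
    ipr = np.array([np.sum(vecs[:, j] ** 4) for j in order])   # high IPR = concentrated loadings
    return evals, ipr

# toy usage: one pervasive factor plus one local factor affecting only 10 of 100 series
rng = np.random.default_rng(11)
T, N = 500, 100
g = rng.standard_normal((T, 1))                    # pervasive factor
l = rng.standard_normal((T, 1))                    # local factor
load_g = rng.standard_normal((1, N))
load_l = np.zeros((1, N))
load_l[0, :10] = 2.0                               # local factor hits only the first 10 series
panel = g @ load_g + l @ load_l + rng.standard_normal((T, N))
print(eigen_diagnostics(panel, k=3))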