on Econometric Time Series |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Zhentao Shi (The Chinese University of Hong Kong) |
Abstract: | The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L_2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks – the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research.
The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the boosted HP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility. |
Keywords: | Boosting, Cycles, Empirical macroeconomics, Hodrick-Prescott filter, Machine learning, Nonstationary time series, Trends, Unit root processes |
JEL: | C22 E20 |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2192&r=all |
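The iterated smoother described in the abstract can be sketched in a few lines: apply the HP smoother, then re-smooth the residual (cyclical) component and add the result back into the trend. The sketch below is an illustrative pure-Python dense-solve implementation, not the authors' code; it replaces their data-driven stopping criterion with a fixed iteration count, and the penalty value 1600 is only the conventional quarterly-data default the abstract alludes to.

```python
def hp_trend(y, lam):
    """HP trend: argmin_t sum (y_t - t_t)^2 + lam * sum (second diff of t)^2,
    i.e. solve (I + lam * D'D) t = y, with D the second-difference matrix."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            if d[i] == 0.0:
                continue
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    # Solve A t = y by Gaussian elimination with partial pivoting
    # (fine for short series; a banded solver would be used in practice).
    b = list(y)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * t[c] for c in range(r + 1, n))
        t[r] = (b[r] - s) / A[r][r]
    return t

def boosted_hp(y, lam=1600.0, iterations=5):
    """L2-boosting of the HP filter: repeatedly smooth the residual and
    accumulate each smoothed piece into the trend estimate."""
    trend = [0.0] * len(y)
    resid = list(y)
    for _ in range(iterations):
        step = hp_trend(resid, lam)
        trend = [a + s for a, s in zip(trend, step)]
        resid = [a - s for a, s in zip(resid, step)]
    return trend, resid
```

Because the HP smoother reproduces linear series exactly, `hp_trend` returns a straight line unchanged, and each boosting pass shrinks the residual further, which is the sense in which iteration removes more of a stubborn trend.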
By: | Violetta Dalla (National and Kapodistrian University of Athens); Liudas Giraitis (Queen Mary, University of London); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | Commonly used tests to assess evidence for the absence of autocorrelation in a univariate time series or serial cross-correlation between time series rely on procedures whose validity holds for i.i.d. data. When the series are not i.i.d., the size of correlogram and cumulative Ljung-Box tests can be significantly distorted. This paper adapts standard correlogram and portmanteau tests to accommodate hidden dependence and non-stationarities involving heteroskedasticity, thereby uncoupling these tests from limiting assumptions that reduce their applicability in empirical work. To enhance the Ljung-Box test for non-i.i.d. data, a new cumulative test is introduced. The asymptotic size of these tests is unaffected by hidden dependence and heteroskedasticity in the series. Related extensions are provided for testing cross-correlation at various lags in bivariate time series. Tests for the i.i.d. property of a time series are also developed. An extensive Monte Carlo study confirms good performance in both size and power for the new tests. Applications to real data reveal that standard tests frequently produce spurious evidence of serial correlation. |
Keywords: | Serial correlation, Cross-correlation, Heteroskedasticity, Martingale differences |
JEL: | C12 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2194&r=all |
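The size distortion the abstract describes can be seen by comparing the classical autocorrelation t-statistic with a self-normalized version whose denominator uses the lagged products themselves; the latter is the style of heteroskedasticity-robust standardization the paper motivates, though the sketch below is a generic illustration and not necessarily the paper's exact statistic.

```python
import math

def standard_t(y, k):
    """Classical lag-k autocorrelation t-statistic, sqrt(n) * rho_hat(k),
    whose nominal N(0,1) size is valid only under i.i.d. data."""
    n = len(y)
    m = sum(y) / n
    e = [v - m for v in y]
    num = sum(e[t] * e[t - k] for t in range(k, n))
    den = sum(v * v for v in e)
    return math.sqrt(n) * num / den

def robust_t(y, k):
    """Self-normalized lag-k statistic: standardizing by the products
    e_t * e_{t-k} themselves makes the variance estimate adapt to
    heteroskedasticity instead of assuming a constant variance."""
    n = len(y)
    m = sum(y) / n
    e = [v - m for v in y]
    prods = [e[t] * e[t - k] for t in range(k, n)]
    return sum(prods) / math.sqrt(sum(p * p for p in prods))
```

Under i.i.d. homoskedastic data the two statistics behave similarly; under conditional heteroskedasticity the classical denominator understates the sampling variability, which is the mechanism behind the spurious rejections reported in the abstract.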
By: | Matteo Bonato (Department of Economics and Econometrics, University of Johannesburg, Auckland Park, South Africa; IPAG Business School, 184 Boulevard Saint-Germain, 75006 Paris, France); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa; IPAG Business School, 184 Boulevard Saint-Germain, 75006 Paris, France); Chi Keung Marco Lau (Huddersfield Business School, University of Huddersfield, Huddersfield, HD1 3DH, United Kingdom); Shixuan Wang (Department of Economics, University of Reading, Reading, RG6 6AA, United Kingdom) |
Abstract: | In this paper, we use intraday futures market data on gold and oil to compute returns, realized volatility, volatility jumps, realized skewness and realized kurtosis. Using these daily metrics associated with the two markets over the period December 2, 1997 to May 26, 2017, we conduct linear, nonparametric, and time-varying (rolling) tests of causality, with the latter two approaches motivated by the existence of nonlinearity and structural breaks. While there is hardly any evidence of spillovers between the returns of these two markets, strong evidence of bidirectional causality is detected for realized volatility, which seems to result from volatility jumps. Evidence of spillovers is also detected for the crash risk variable, i.e., realized skewness, and for realized kurtosis as well, with the effect on the latter being relatively stronger. Moreover, based on a moments-based test of causality, evidence of co-volatility is deduced, whereby we find that extreme positive and negative returns of gold and oil tend to drive the volatilities in these markets. Our results have important implications not only for investors, but also for policymakers. |
Keywords: | Gold and Oil Markets, Linear, Nonparametric and Time-Varying Causality Tests, Moments-Based Spillovers |
JEL: | C32 Q02 |
Date: | 2019–08 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:201966&r=all |
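The daily metrics named in the abstract are built from intraday returns; a minimal sketch using the standard realized-moment formulas is below. These are the textbook definitions (realized variance as the sum of squared returns, with scaled third and fourth moments), not necessarily the paper's exact estimators, and the jump-separation step is omitted.

```python
import math

def realized_moments(intraday_returns):
    """Daily realized variance, realized skewness and realized kurtosis
    from a day's intraday returns (standard realized-moment formulas)."""
    r = intraday_returns
    n = len(r)
    rv = sum(x * x for x in r)                         # realized variance
    rskew = math.sqrt(n) * sum(x ** 3 for x in r) / rv ** 1.5
    rkurt = n * sum(x ** 4 for x in r) / rv ** 2
    return rv, rskew, rkurt
```

Realized skewness captures the crash-risk asymmetry the abstract refers to (negative for days dominated by large down moves), while realized kurtosis flags days with extreme tail activity.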
By: | Samuel Asante Gyamerah |
Abstract: | In this paper, applications of three GARCH-type models (sGARCH, iGARCH, and tGARCH) with the Student t-distribution, Generalized Error distribution (GED), and Normal Inverse Gaussian (NIG) distribution are examined. This development allows for the modeling of volatility clustering effects and of the leptokurtic and skewed distributions in the return series of Bitcoin. Compared with the other two distributions, the normal inverse Gaussian distribution adequately captured the fat tails and skewness in all the GARCH-type models. The tGARCH model was the best model, as it described the asymmetric occurrence of shocks in the Bitcoin market; that is, the response of investors to the same amount of good and bad news is distinct. From the empirical results, it can be concluded that tGARCH-NIG was the best model to estimate the volatility in the return series of Bitcoin. Generally, it would be optimal to use the NIG distribution in GARCH-type models since the time series of most cryptocurrencies are leptokurtic. |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.04903&r=all |
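The volatility-clustering mechanism shared by all three models in the abstract is the GARCH(1,1) conditional-variance recursion; a minimal sketch of the sGARCH case is below. (tGARCH adds an extra term that lets negative shocks raise volatility more than positive ones; that asymmetry, and the NIG innovation distribution, are not reproduced here.)

```python
def garch_filter(returns, omega, alpha, beta, sigma2_init):
    """Conditional-variance path of a standard GARCH(1,1) (the sGARCH case):
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    Returns one variance per observation, starting from sigma2_init."""
    sigma2 = [sigma2_init]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

A large squared return feeds into the next period's variance through `alpha`, and `beta` makes that elevation persist, which is exactly the clustering effect the abstract describes; iGARCH corresponds to the boundary case `alpha + beta = 1`.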
By: | Qi Wang; José E. Figueroa-López; Todd Kuffner
Abstract: | Volatility estimation based on high-frequency data is key to accurately measure and control the risk of financial assets. A Lévy process with infinite jump activity and microstructure noise is considered one of the simplest, yet accurate enough, models for financial data at high-frequency. Utilizing this model, we propose a "purposely misspecified" posterior of the volatility obtained by ignoring the jump-component of the process. The misspecified posterior is further corrected by a simple estimate of the location shift and re-scaling of the log likelihood. Our main result establishes a Bernstein-von Mises (BvM) theorem, which states that the proposed adjusted posterior is asymptotically Gaussian, centered at a consistent estimator, and with variance equal to the inverse of the Fisher information. In the absence of microstructure noise, our approach can be extended to inferences of the integrated variance of a general Itô semimartingale. Simulations are provided to demonstrate the accuracy of the resulting credible intervals, and the frequentist properties of the approximate Bayesian inference based on the adjusted posterior. |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.04853&r=all |
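The flavor of a BvM-style Gaussian approximate posterior can be illustrated with the jump-free Gaussian working model: the (mis)specified MLE of sigma^2 is the realized variance over the horizon, and the limiting posterior variance is the inverse Fisher information, 2*sigma^4/n. The sketch below shows only this baseline interval; the paper's location-shift and re-scaling corrections for jumps and microstructure noise are not reproduced.

```python
import math

def volatility_credible_interval(increments, dt, level_z=1.96):
    """Approximate interval for sigma^2 under the Gaussian working model
    X_{t+dt} - X_t ~ N(0, sigma^2 * dt).  The MLE is the realized variance
    per unit time, and the asymptotic (BvM-style) posterior is Gaussian
    with variance 2 * sigma^4 / n (inverse Fisher information)."""
    n = len(increments)
    sigma2_hat = sum(x * x for x in increments) / (n * dt)
    half_width = level_z * sigma2_hat * math.sqrt(2.0 / n)
    return sigma2_hat - half_width, sigma2_hat + half_width
```

In the actual infinite-activity jump setting this naive interval is miscentered, which is precisely why the paper's adjustment of the misspecified posterior is needed before the BvM theorem applies.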
By: | Jarosław Klamut; Tomasz Gubiec
Abstract: | We introduce a new family of Continuous Time Random Walks (CTRW) with long-term memory within consecutive waiting times. This memory is introduced into the model by assuming that the consecutive waiting times are themselves an analog of a CTRW. In this way, we obtain a 'Continuous Time Random Walk in Continuous Time Random Walk'. Surprisingly, this type of process, with long-term memory only within the waiting times, can successfully describe the slowly decaying nonlinear autocorrelation function of stock market returns. The model achieves this result without any dependence between consecutive price changes, which demonstrates the crucial role of inter-event times in the volatility clustering phenomenon. |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.04986&r=all |
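A CTRW separates what moves (i.i.d. price jumps) from when it moves (random waiting times), and observed returns come from sampling the event-driven path on a regular clock-time grid. The sketch below is only this baseline walk with independent heavy-tailed waiting times; the paper's key ingredient, nesting a second CTRW inside the waiting times to create long memory, is not reproduced, and the Pareto tail index is an arbitrary illustrative choice.

```python
import random

def simulate_ctrw(n_events, tail_alpha=1.5, seed=7):
    """Minimal CTRW: i.i.d. Pareto waiting times between i.i.d. +/-1 price
    jumps.  Returns the event-driven path as (time, price) pairs."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    path = [(t, x)]
    for _ in range(n_events):
        t += rng.paretovariate(tail_alpha)   # heavy-tailed waiting time
        x += rng.choice((-1.0, 1.0))         # independent price jump
        path.append((t, x))
    return path

def sample_on_clock(path, dt, n_samples):
    """Sample the event-driven path on an equally spaced clock-time grid,
    carrying the last observed price forward between events."""
    out, i = [], 0
    for k in range(n_samples):
        tk = k * dt
        while i + 1 < len(path) and path[i + 1][0] <= tk:
            i += 1
        out.append(path[i][1])
    return out
```

Because the jumps here are independent, any clustering in the clock-time returns of such a model must come from the waiting-time structure alone, which is the point the abstract makes about inter-event times.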