nep-ets New Economics Papers
on Econometric Time Series
Issue of 2010‒04‒17
24 papers chosen by
Yong Yin
SUNY at Buffalo

  1. Beyond Panel Unit Root Tests: Using Multiple Testing to Determine the Non-Stationarity Properties of Individual Series in a Panel By MOON, H.R.; PERRON, Benoit
  2. A Quasi-Locally Most Powerful Test for Correlation in the Conditional Variance of Positive Data By Brendan P.M. McCabe; Gael Martin; Keith Freeland
  3. Stochastic Volatility Model with Leverage and Asymmetrically Heavy-tailed Error Using GH Skew Student's t-distribution By Jouchi Nakajima; Yasuhiro Omori
  4. Components of bull and bear markets: bull corrections and bear rallies By John M Maheu; Thomas H McCurdy; Yong Song
  5. Intraday Dynamics of Volatility and Duration: Evidence from the Chinese Stock Market By Chun Liu; John M Maheu
  6. Decision Making in Hard Times: What is a Recession, Why Do We Care and How Do We Know When We Are in One? By Kevin Lee
  7. The Behaviour of the Dickey-Fuller Test in the Case of Noisy Data: To What Extent Can We Trust the Outcome? By Stephen Hall
  8. Linear and Non-linear Causality Test in a LSTAR model - wavelet decomposition in a non-linear environment By Li, Yushu; Shukur, Ghazi
  9. A Bootstrap Test for Causality with Endogenous Lag Length Choice - theory and application in finance By Hacker, R. Scott; Hatemi-J, Abdulnasser
  10. Long cycles in growth: explorations using new frequency domain techniques with US data By Crowley, Patrick M
  11. A Bootstrap Neural Network Based Heterogeneous Panel Unit Root Test: Application to Exchange Rates By Christian de Peretti; Carole Siani; Mario Cerrato
  12. Block Structure Multivariate Stochastic Volatility Models By Asai, M.; Caporin, M.
  13. Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments By Franses, Ph.H.B.F.; McAleer, M.J.; Legerstee, R.
  14. Multi-regime models for nonlinear nonstationary time series By Francesco Battaglia; Mattheos K. Protopapas
  15. Outliers in GARCH models and the estimation of risk measures By Aurea Grané; Helena Veiga
  16. A New Solution to Time Series Inference in Spurious Regression Problems By Hrishikesh D. Vinod
  17. Stochastic Search Variable Selection in Vector Error Correction Models with an Application to a Model of the UK Macroeconomy By Markus Jochmann; Gary Koop; Roberto Leon-Gonzalez; Rodney W. Strachan
  18. Estimation of the characteristics of a Lévy process observed at arbitrary frequency By Johanna Kappus; Markus Reiß
  19. Forecasting Realized Volatility with Linear and Nonlinear Models By Michael McAleer; Marcelo Cunha Medeiros
  20. Unit Roots, Level Shifts and Trend Breaks in Per Capita Output: A Robust Evaluation By Mohitosh Kejriwal; Claude Lopez
  21. Model selection, estimation and forecasting in VAR models with short-run and long-run restrictions By Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid
  22. Hidden Markov models with t components. Increased persistence and other aspects By Bulla, Jan
  23. A robust version of the KPSS test based on ranks By Matteo Pelagatti; Pranab Sen
  24. Dating the Timeline of Financial Bubbles During the Subprime Crisis By Peter C. B. Phillips; Jun Yu

  1. By: MOON, H.R.; PERRON, Benoit
    Abstract: Most panel unit root tests are designed to test the joint null hypothesis of a unit root for each individual series in a panel. After a rejection, it will often be of interest to identify which series can be deemed to be stationary and which series can be deemed nonstationary. Researchers will sometimes carry out this classification on the basis of n individual (univariate) unit root tests based on some ad hoc significance level. In this paper, we demonstrate how to use the false discovery rate (FDR) in evaluating I(1)/I(0) classifications based on individual unit root tests when the size of the cross section (n) and time series (T) dimensions are large. We report results from a simulation experiment and illustrate the methods on two data sets.
    Keywords: False discovery rate, Multiple testing, unit root tests, panel data
    JEL: C32 C33 C44
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:mtl:montec:10-2010&r=ets
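    Sketch: a minimal illustration of the classification idea, assuming individual ADF tests and Benjamini-Hochberg FDR control (the paper's exact FDR procedure may differ); `panel` is a hypothetical T x n array with one series per column.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def classify_fdr(panel, q=0.05):
            """Label each column I(0)/I(1) via FDR-controlled ADF tests."""
            pvals = np.array([adfuller(col)[1] for col in panel.T])
            n = len(pvals)
            order = np.argsort(pvals)
            # Benjamini-Hochberg: largest k with p_(k) <= q*k/n
            below = pvals[order] <= q * np.arange(1, n + 1) / n
            k = np.nonzero(below)[0].max() + 1 if below.any() else 0
            labels = np.array(["I(1)"] * n, dtype=object)
            labels[order[:k]] = "I(0)"   # unit root rejected => stationary
            return labels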
  2. By: Brendan P.M. McCabe; Gael Martin; Keith Freeland
    Abstract: A test is derived for short-memory correlation in the conditional variance of strictly positive, skewed data. The test is quasi-locally most powerful (QLMP) under the assumption of conditionally gamma data. Analytical asymptotic relative efficiency calculations show that an alternative test, based on the first-order autocorrelation coefficient of the squared data, has negligible relative power to detect correlation in the conditional variance. Finite sample simulation results confirm the poor performance of the squares-based test for fixed alternatives, as well as demonstrating the poor performance of the test based on the first-order autocorrelation coefficient of the raw (levels) data. Robustness of the QLMP test, both to misspecification of the conditional distribution and misspecification of the dynamics, is also demonstrated using simulation. The test is illustrated using financial trade durations data.
    Keywords: Locally most powerful test; quasi-likelihood; asymptotic relative efficiency; durations data; gamma distribution; Weibull distribution.
    JEL: C12 C16 C22
    Date: 2010–02–09
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2010-2&r=ets
  3. By: Jouchi Nakajima; Yasuhiro Omori
    Abstract: Bayesian analysis of a stochastic volatility model with a generalized hyperbolic (GH) skew Student's t-error distribution is described, in which asymmetric heavy-tailedness as well as leverage effects are considered. An efficient Markov chain Monte Carlo estimation method is developed that exploits a normal variance-mean mixture representation of the error distribution with an inverse gamma mixing distribution. The proposed method is illustrated using simulated data and daily TOPIX and S&P500 stock returns. In the empirical study, models are compared for the stock returns on the basis of the marginal likelihood, and strong evidence of leverage and asymmetric heavy-tailedness is found. Further, a prior sensitivity analysis is conducted to investigate whether the results are robust to the choice of priors.
    Keywords: generalized hyperbolic skew Student's t-distribution, Markov chain Monte Carlo, Mixing distribution, State space model, Stochastic volatility, Stock returns
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd09-124&r=ets
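    Sketch: one standard normal variance-mean mixture representation of a GH skew Student's t error, for simulation only; the centering convention is an assumption here and the paper's exact parameterization may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        def gh_skew_t(n, nu=10.0, beta=-0.5):
            """eps = mu + beta*z + sqrt(z)*u with z ~ InvGamma(nu/2, nu/2)
            and u ~ N(0,1); mu is chosen so the draws have mean zero."""
            z = 1.0 / rng.gamma(nu / 2.0, scale=2.0 / nu, size=n)
            u = rng.standard_normal(n)
            mu = -beta * nu / (nu - 2.0)   # since E[z] = nu/(nu-2)
            return mu + beta * z + np.sqrt(z) * u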
  4. By: John M Maheu; Thomas H McCurdy; Yong Song
    Abstract: Existing methods of partitioning the market index into bull and bear regimes do not identify market corrections or bear market rallies. In contrast, our probabilistic model of the return distribution allows for rich and heterogeneous intra-regime dynamics. We focus on the characteristics and dynamics of bear market rallies and bull market corrections, including, for example, the probability of transition from a bear market rally into a bull market versus back to the primary bear state. A Bayesian estimation approach accounts for parameter and regime uncertainty and provides probability statements regarding future regimes and returns. A Value-at-Risk example illustrates the economic value of our approach.
    Keywords: Markov switching, Gibbs sampling, turning points
    JEL: C22 C51 C52 G1
    Date: 2010–04–06
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-402&r=ets
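    Sketch: a hypothetical four-state Markov chain in the spirit of the abstract, with two states per primary regime; the transition matrix and per-state return parameters are illustrative, not the paper's estimates.

        import numpy as np

        rng = np.random.default_rng(1)

        # States: 0 = bear, 1 = bear rally, 2 = bull correction, 3 = bull.
        # Row 1 shows the transition featured in the abstract: a bear rally
        # can move on to the bull state (0.05) or fall back to the primary
        # bear state (0.15).
        P = np.array([[0.90, 0.08, 0.00, 0.02],
                      [0.15, 0.80, 0.00, 0.05],
                      [0.05, 0.00, 0.80, 0.15],
                      [0.02, 0.00, 0.08, 0.90]])
        mean = np.array([-0.10, 0.06, -0.06, 0.08])   # per-state return mean
        sd   = np.array([ 1.8,  1.2,  1.4,  0.9])     # per-state return sd

        def simulate(n, s=3):
            states, rets = np.empty(n, dtype=int), np.empty(n)
            for t in range(n):
                s = rng.choice(4, p=P[s])
                states[t], rets[t] = s, rng.normal(mean[s], sd[s])
            return states, rets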
  5. By: Chun Liu; John M Maheu
    Abstract: We propose a new joint model of intraday returns and durations to study the dynamics of several Chinese stocks. We include IBM from the U.S. market for comparison purposes. Flexible innovation distributions are used for durations and returns, and the total variance of returns is decomposed into different volatility components associated with different transaction horizons. Our new model strongly dominates existing specifications in the literature. The conditional hazard functions are non-monotonic and there is strong evidence for different volatility components. Although diurnal patterns, volatility components, and market microstructure implications are similar across the markets, there are interesting differences. Durations for lightly traded Chinese stocks tend to carry more information than those for heavily traded stocks. Chinese investors usually have longer investment horizons, which may be explained by the specific trading rules in China.
    Keywords: market microstructure, transaction horizon, high-frequency data, ACD, GARCH
    JEL: C22 C11 G10
    Date: 2010–04–06
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-401&r=ets
  6. By: Kevin Lee
    Abstract: Defining a recessionary event as one which impacts adversely on individuals’ economic well-being, the paper argues that recession is a multi-faceted phenomenon whose meaning differs from person to person as it impacts on their decision-making in real time. It argues that recession is best represented through the calculation of nowcasts of recession event probabilities. A variety of such probabilities are produced using a real-time data set for the US, focusing on the likelihood of various recessionary events over 1986q1-2008q4 and on prospects beyond the end of the sample.
    Keywords: Recession; Probability Forecasts; Real Time
    JEL: E52 E58
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:09/22&r=ets
  7. By: Stephen Hall
    Abstract: We examine the behaviour of the Dickey-Fuller (DF) test in the case of noisy data using Monte Carlo simulation. The findings show clearly that the size distortion of the DF test becomes larger as the noise in the data increases.
    Keywords: Hypothesis testing; Unit root test; Monte Carlo Analysis.
    JEL: C01 C12 C15
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:09/18&r=ets
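    Sketch: a minimal Monte Carlo in the spirit of the abstract, assuming additive white measurement noise on a driftless random walk; the paper's exact design may differ.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)

        def rejection_rate(noise_sd, T=200, reps=1000, alpha=0.05):
            """Share of nominal 5%-level DF rejections when the truth is a
            random walk observed with additive measurement noise."""
            hits = 0
            for _ in range(reps):
                walk = np.cumsum(rng.standard_normal(T))      # true unit root
                y = walk + noise_sd * rng.standard_normal(T)  # noisy data
                hits += adfuller(y, regression="c")[1] < alpha
            return hits / reps

        for sd in (0.0, 1.0, 3.0):   # distortion grows with the noise level
            print(sd, rejection_rate(sd))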
  8. By: Li, Yushu (CAFO, Växjö University); Shukur, Ghazi (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology)
    Abstract: In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are presented in a non-linear environment that is modelled using a logistic smooth transition autoregressive (LSTAR) function. We use both linear and non-linear causality tests to investigate the unidirectional causality relationship and compare the power of these tests. The linear test is the commonly used Granger causality test. The non-linear test is a non-parametric test based on Baek and Brock (1992) and Hiemstra and Jones (1994). When implementing the non-linear test, we use separately the original data, the linear VAR filtered residuals, and the wavelet decomposed series based on wavelet multiresolution analysis (MRA). The VAR filtered residuals and the wavelet decomposition series are used to extract the non-linear structure of the original data. The simulation results show that the non-parametric test based on the wavelet decomposition series (a model-free approach) has the highest power to explore the causality relationship in the non-linear models.
    Keywords: Granger causality; LSTAR model; Wavelet multiresolution; Monte Carlo simulation
    JEL: C01 C10 C51 C52
    Date: 2010–04–10
    URL: http://d.repec.org/n?u=RePEc:hhs:cesisp:0227&r=ets
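    Sketch: wavelet multiresolution decomposition with PyWavelets, assuming a db4 wavelet; this only produces the decomposed series fed into the non-parametric test, whose Baek-Brock/Hiemstra-Jones statistic is not reproduced here.

        import numpy as np
        import pywt

        def mra(x, wavelet="db4", level=4):
            """Split x into a smooth S_level plus details D_level..D1 that
            sum (approximately) back to x."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            parts = []
            for i in range(len(coeffs)):
                kept = [c if j == i else np.zeros_like(c)
                        for j, c in enumerate(coeffs)]
                parts.append(pywt.waverec(kept, wavelet)[: len(x)])
            return parts   # parts[0] is the smooth, the rest are details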
  9. By: Hacker, R. Scott (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology); Hatemi-J, Abdulnasser (UAE University)
    Abstract: Granger causality tests are among the most popular empirical applications of time series analysis. Several new tests have been developed in the literature that can deal with different data generating processes. In all existing theoretical papers it is assumed that the lag length is known a priori. However, in applied research the lag length has to be selected before testing for causality. This paper suggests that in investigating the effectiveness of various Granger causality testing methodologies, including those using bootstrapping, the lag length choice should be endogenized, by which we mean the data-driven preselection of lag length should be taken into account. We provide and evaluate a Granger-causality bootstrap test which may be used with data that may or may not be integrated, and compare its performance to that of the analogous asymptotic test. The suggested bootstrap test performs well and appears to be robust to the ARCH effects that usually characterize financial data. The test is applied to the causal impact of the US financial market on the market of the United Arab Emirates.
    Keywords: Causality; VAR Model; Stability; Endogenous Lag; ARCH; Leverages
    JEL: C15 C32 G11
    Date: 2010–04–10
    URL: http://d.repec.org/n?u=RePEc:hhs:cesisp:0223&r=ets
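    Sketch: a simplified residual bootstrap for Granger non-causality with the lag length re-selected inside every replication, which is the endogenized choice the abstract argues for; the paper's leveraged bootstrap differs in detail, and the AR order p imposed under the null is an assumption here.

        import numpy as np
        from statsmodels.tsa.api import VAR
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(0)

        def wald_stat(data):
            # Lag length chosen by AIC on each (re)sampled data set.
            res = VAR(data).fit(maxlags=8, ic="aic")
            return res.test_causality(0, [1], kind="wald").test_statistic

        def bootstrap_pvalue(data, p=4, reps=199):
            stat = wald_stat(data)
            # Impose the null: rebuild column 0 from its own AR(p) fit with
            # resampled residuals, so column 1 cannot Granger-cause it.
            ar = AutoReg(data[:, 0], lags=p).fit()
            const, coefs = ar.params[0], ar.params[1:]
            boot = []
            for _ in range(reps):
                e = rng.choice(ar.resid, size=len(data))
                y = data[:, 0].copy()
                for t in range(p, len(data)):
                    y[t] = const + coefs @ y[t - p:t][::-1] + e[t]
                boot.append(wald_stat(np.column_stack([y, data[:, 1]])))
            return np.mean(np.array(boot) >= stat)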
  10. By: Crowley, Patrick M (College of Business, Texas A&M University)
    Abstract: In his celebrated 1966 Econometrica article, Granger first hypothesized that there is a ‘typical’ spectral shape for an economic variable. This ‘typical’ shape implies decreasing levels of energy as frequency increases, which in turn implies an extremely long cycle in economic fluctuations, particularly in growth. Spectral analysis, however, rests on assumptions that render these basic frequency domain techniques inappropriate for analysing non-stationary economic data. In this paper, three recent frequency domain methods for extracting cycles from non-stationary data are applied to US real GNP data to analyse fluctuations in economic growth. The findings, among others, are that these more recent frequency domain techniques provide no evidence to support the ‘typical’ spectral shape, nor an extremely long growth cycle à la Granger.
    Keywords: business cycles; growth cycles; frequency domain; spectral analysis; long cycles; Granger; wavelet analysis; Hilbert-Huang Transform (HHT); empirical mode decomposition (EMD); non-stationarity
    JEL: C13 C14 O47
    Date: 2010–02–21
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2010_006&r=ets
  11. By: Christian de Peretti; Carole Siani; Mario Cerrato
    Abstract: This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series against the US Dollar for the 20 major OECD countries is provided to investigate Purchasing Power Parity (PPP). The combination of neural networks and bootstrapping significantly changes the findings of the economic study in favour of PPP.
    Keywords: Artificial neural network, panel unit root test, bootstrap, Monte Carlo experiments, exchange rates.
    JEL: C12 C15 C22 C23 F31
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2010_05&r=ets
  12. By: Asai, M.; Caporin, M. (Erasmus Econometric Institute)
    Abstract: Most multivariate variance models suffer from a common problem, the “curse of dimensionality”. For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, whose purpose is to balance the interpretability and efficiency needed by model users against the computational problems that may emerge when the number of assets is quite large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models.
    Keywords: block structures; multivariate stochastic volatility; curse of dimensionality
    Date: 2009–12–17
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765017523&r=ets
  13. By: Franses, Ph.H.B.F.; McAleer, M.J.; Legerstee, R. (Erasmus Econometric Institute)
    Abstract: Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Keywords: macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison
    Date: 2010–03–30
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765018604&r=ets
  14. By: Francesco Battaglia; Mattheos K. Protopapas
    Abstract: Nonlinear nonstationary models for time series are considered, where the series is generated from an autoregressive equation whose coefficients change according to both time and the delayed values of the series itself, switching between several regimes. The transition from one regime to the next may be discontinuous (self-exciting threshold model), smooth (smooth transition model) or continuous linear (piecewise linear threshold model). A genetic algorithm for identifying and estimating such models is proposed, and its behavior is evaluated through a simulation study and application to temperature data and a financial index.
    Date: 2010–01–28
    URL: http://d.repec.org/n?u=RePEc:com:wpaper:026&r=ets
  15. By: Aurea Grané; Helena Veiga
    Abstract: In this paper we focus on the impact of additive level outliers on the calculation of risk measures, such as minimum capital risk requirements, and compare four alternatives for reducing the estimation biases of these measures. The first three proposals detect and correct outliers before estimating the risk measures with the GARCH(1,1) model, while the fourth fits a Student’s t-distributed GARCH(1,1) model directly to the data. The former group includes the proposal of Grané and Veiga (2010), a detection procedure based on wavelets with hard- or soft-thresholding filtering, and the well-known method of Franses and Ghijsels (1999). The first results, based on Monte Carlo experiments, reveal that the presence of outliers can severely bias the minimum capital risk requirement estimates calculated using the GARCH(1,1) model. The message from the second set of results, both empirical and simulation-based, is that outlier detection and filtering generate more accurate minimum capital risk requirements than the fourth alternative. Moreover, the detection procedure based on wavelets with hard-thresholding filtering performs very well in attenuating the effects of outliers and generating accurate minimum capital risk requirements out-of-sample, even in highly volatile periods.
    Keywords: Minimum capital risk requirements, Outliers, Wavelets
    JEL: C22 C5 G13
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws100502&r=ets
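    Sketch: simulating additive level outliers in GARCH(1,1) returns and fitting both a Gaussian and a Student's t GARCH(1,1) (the paper's fourth alternative) with the arch package; all parameter values are illustrative, not the paper's.

        import numpy as np
        from arch import arch_model

        rng = np.random.default_rng(0)

        # Simulate GARCH(1,1) returns, then inject additive level outliers.
        n, omega, a, b = 2000, 0.05, 0.08, 0.90
        h, r = np.empty(n), np.empty(n)
        h[0] = omega / (1 - a - b)
        r[0] = np.sqrt(h[0]) * rng.standard_normal()
        for t in range(1, n):
            h[t] = omega + a * r[t - 1] ** 2 + b * h[t - 1]
            r[t] = np.sqrt(h[t]) * rng.standard_normal()
        r[rng.choice(n, size=5, replace=False)] += 10.0   # additive outliers

        gauss = arch_model(r, p=1, q=1).fit(disp="off")
        studt = arch_model(r, p=1, q=1, dist="t").fit(disp="off")
        print(gauss.params, studt.params, sep="\n")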
  16. By: Hrishikesh D. Vinod (Fordham University, Department of Economics)
    Abstract: Phillips (1986) provides asymptotic theory for regressions that relate nonstationary time series including those integrated of order 1, I(1). A practical implication of the literature on spurious regression is that one cannot trust the usual confidence intervals. In the absence of prior knowledge that two series are cointegrated, it is therefore recommended that after carrying out unit root tests we work with differenced or detrended series instead of original data in levels. We propose a new alternative for obtaining confidence intervals based on the Maximum Entropy bootstrap explained in Vinod and Lopez-de-Lacalle (2009). An extensive Monte Carlo simulation shows that our proposal can provide more reliable conservative confidence intervals than traditional, differencing and block bootstrap (BB) intervals.
    Keywords: Bootstrap, simulation, confidence intervals
    JEL: C12 C15 C22 C51
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:frd:wpaper:dp2010-01&r=ets
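    Sketch: the spurious regression problem the paper addresses, in its textbook form: two independent random walks, yet the OLS t-test rejects far too often. The maximum entropy bootstrap itself is available in Vinod's meboot package for R and is not reimplemented here.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        reject = 0
        for _ in range(1000):
            y = np.cumsum(rng.standard_normal(200))   # independent walks
            x = np.cumsum(rng.standard_normal(200))
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            reject += abs(fit.tvalues[1]) > 1.96
        print(reject / 1000)   # far above the nominal 0.05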
  17. By: Markus Jochmann (Department of Economics, University of Strathclyde); Gary Koop (Department of Economics, University of Strathclyde); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies); Rodney W. Strachan (University of Queensland)
    Abstract: This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
    Keywords: Bayesian, cointegration, model averaging, model selection, Markov chain Monte Carlo
    JEL: C11 C32 C52
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:0919&r=ets
  18. By: Johanna Kappus; Markus Reiß
    Abstract: A Lévy process is observed at time points of distance delta until time T. We construct an estimator of the Lévy-Khinchine characteristics of the process and derive optimal rates of convergence simultaneously in T and delta. We thereby encompass the usual low- and high-frequency assumptions and also obtain asymptotics in the mid-frequency regime.
    Keywords: Lévy process, Lévy-Khinchine characteristics, Nonparametric estimation, Inverse problem, Optimal rates of convergence
    JEL: G13 C14
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-015&r=ets
  19. By: Michael McAleer (Econometric Institute, Erasmus University Rotterdam); Marcelo Cunha Medeiros (Department of Economics PUC-Rio)
    Abstract: In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intra-day returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in this paper.
    Keywords: Financial econometrics, volatility forecasting, neural networks, nonlinear models, realized volatility, bagging.
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:rio:texdis:568&r=ets
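    Sketch: computing daily realized variance from intra-day prices and fitting a HAR-type regression, one common linear benchmark; the paper's exact linear, neural network and bagging specifications are not reproduced here.

        import numpy as np
        import statsmodels.api as sm

        def realized_variance(intraday_prices):
            """Sum of squared intra-day log returns (e.g. 5-minute data)."""
            r = np.diff(np.log(intraday_prices))
            return np.sum(r ** 2)

        def fit_har(rv):
            """Regress next-day RV on daily, weekly (5-day) and monthly
            (22-day) average RV."""
            t_range = range(21, len(rv) - 1)
            d = rv[21:-1]
            w = np.array([rv[t - 4:t + 1].mean() for t in t_range])
            m = np.array([rv[t - 21:t + 1].mean() for t in t_range])
            X = sm.add_constant(np.column_stack([d, w, m]))
            return sm.OLS(rv[22:], X).fit()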
  20. By: Mohitosh Kejriwal; Claude Lopez
    Abstract: Determining whether per capita output can be characterized by a stochastic trend is complicated by the fact that infrequent breaks in trend can bias standard unit root tests towards non-rejection of the unit root hypothesis. The bulk of the existing literature has focused on the application of unit root tests allowing for structural breaks in the trend function under the trend stationary alternative but not under the unit root null. These tests, however, provide little information regarding the existence and number of trend breaks. Moreover, these tests suffer from serious power and size distortions due to the asymmetric treatment of breaks under the null and alternative hypotheses. This paper estimates the number of breaks in trend employing procedures that are robust to the unit root/stationarity properties of the data. Our analysis of the per-capita GDP for OECD countries thereby permits a robust classification of countries according to the “growth shift”, “level shift” and “linear trend” hypotheses. In contrast to the extant literature, unit root tests conditional on the presence or absence of breaks do not provide evidence against the unit root hypothesis.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:cin:ucecwp:2010-02&r=ets
  21. By: Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid
    Abstract: We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and U.S. macroeconomic aggregates growth rates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
    Date: 2010–03–29
    URL: http://d.repec.org/n?u=RePEc:fgv:epgewp:704&r=ets
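    Sketch: the commonly used baseline the abstract compares against (select the lag length by an information criterion, then test for the cointegrating rank), using statsmodels; `data` is a hypothetical T x k array of I(1) macro series, and the paper's hybrid two-step criterion is not reproduced here.

        from statsmodels.tsa.vector_ar.vecm import (VECM, select_order,
                                                    select_coint_rank)

        lag = select_order(data, maxlags=8, deterministic="ci").aic
        rank = select_coint_rank(data, det_order=0, k_ar_diff=lag).rank
        res = VECM(data, k_ar_diff=lag, coint_rank=rank,
                   deterministic="ci").fit()
        print(res.predict(steps=4))   # short-horizon forecasts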
  22. By: Bulla, Jan
    Abstract: Hidden Markov models have been applied in many different fields during the last decades, including econometrics and finance. However, the lion’s share of the investigated models consists of Markovian mixtures of Gaussian distributions. We present an extension to conditional t-distributions, including models with unequal distribution types in different states. It is shown that the extended models, on the one hand, reproduce various stylized facts of daily returns better than the common Gaussian model. On the other hand, robustness to outliers and persistence of the visited states increase significantly.
    Keywords: Hidden Markov model; Markov-switching model; state persistence; t-distribution; daily returns
    JEL: C51 C52 C22 E44
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:21830&r=ets
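    Sketch: simulating a two-state hidden Markov model with Student's t emissions; the transition probabilities, scales and degrees of freedom are illustrative, not estimates from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        P = np.array([[0.98, 0.02],      # persistent calm state
                      [0.03, 0.97]])     # persistent turbulent state
        scale = np.array([0.6, 1.8])     # per-state volatility scale
        df    = np.array([8.0, 4.0])     # heavier tail in turbulence

        def simulate(n, s=0):
            states, y = np.empty(n, dtype=int), np.empty(n)
            for t in range(n):
                s = rng.choice(2, p=P[s])
                states[t], y[t] = s, scale[s] * rng.standard_t(df[s])
            return states, y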
  23. By: Matteo Pelagatti (Department of Statistics, Università degli Studi di Milano-Bicocca); Pranab Sen (Department of Statistics and Operations Research, University of North Carolina at Chapel Hill)
    Abstract: This paper proposes a test of the null hypothesis of stationarity that is robust to the presence of fat-tailed errors. The test statistic is a modified version of the KPSS statistic, in which ranks substitute the original observations. The rank KPSS statistic has the same limiting distribution as the standard KPSS statistic under the null and diverges under I(1) alternatives. It features good power both under thin-tailed and fat-tailed distributions and it turns out to be a valid alternative to the original KPSS and the recently proposed Index KPSS (de Jong et al. 2007).
    Keywords: Stationarity testing, Time series, Robustness, Rank statistics, Empirical processes
    JEL: C12 C14 C22
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:mis:wpaper:20090701&r=ets
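    Sketch: the rank KPSS idea, computing the standard KPSS level-stationarity statistic on the ranks of the observations; the Bartlett bandwidth rule is an assumption here, and the paper's normalizations may differ.

        import numpy as np
        from scipy.stats import rankdata

        def rank_kpss(x, lags=None):
            r = rankdata(x)
            T = len(r)
            e = r - r.mean()               # demeaned ranks
            S = np.cumsum(e)               # partial sums
            if lags is None:
                lags = int(4 * (T / 100.0) ** 0.25)
            lrv = e @ e / T                # Newey-West long-run variance
            for k in range(1, lags + 1):
                w = 1.0 - k / (lags + 1.0)
                lrv += 2.0 * w * (e[k:] @ e[:-k]) / T
            return np.sum(S ** 2) / (T ** 2 * lrv)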
  24. By: Peter C. B. Phillips (Yale University, University of Auckland, University of York & Singapore Management University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: A recursive regression methodology is used to analyze the bubble characteristics of various financial time series during the subprime crisis. The methods provide a technology for identifying bubble behavior and consistent dating of their origination and collapse. Seven relevant financial series are investigated, including three financial assets (the Nasdaq index, home price index and asset-backed commercial paper), two commodities (the crude oil price and platinum price), one bond rate (Baa), and one exchange rate (Pound/USD). Statistically significant bubble characteristics are found in all of these series. The empirical estimates of the origination and collapse dates suggest an interesting migration mechanism among the financial variables: a bubble first emerged in the equity market during mid-1995 lasting to the end of 2000, followed by a bubble in the real estate market between January 2001 and July 2007 and in the mortgage market between November 2005 and August 2007. After the subprime crisis erupted, the phenomenon migrated selectively into the commodity market and the foreign exchange market, creating bubbles which subsequently burst at the end of 2008, just as the effects on the real economy and economic growth became manifest. Our empirical estimates of the origination and collapse dates support strongly the general features of the scenario of this crisis put forward in a recent study by Caballero, Farhi and Gourinchas (2008).
    Keywords: Financial bubbles, Crashes, Date stamping, Explosive behavior, Mildly explosive process, Subprime crisis, Timeline
    JEL: C15 G12
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:18-2009&r=ets
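    Sketch: the forward recursive right-tailed ADF sequence that underlies sup-ADF bubble dating; origination (collapse) is dated when the sequence first exceeds (falls back below) a right-tail critical value, which must come from simulation and is omitted here.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def recursive_adf(y, min_frac=0.3):
            """ADF t-statistics over forward-expanding samples y[:r]."""
            T = len(y)
            r0 = int(min_frac * T)
            return np.array([adfuller(y[:r], regression="c",
                                      maxlag=1, autolag=None)[0]
                             for r in range(r0, T + 1)])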

This nep-ets issue is ©2010 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.