New Economics Papers on Econometric Time Series
By: | Joseph P. Byrne and Roger Perman |
Abstract: | Since Perron (1989) the time series literature has emphasised the importance of testing for structural breaks in typical economic data sets and highlighted the implications of such breaks when testing for unit root processes. In this paper we survey recent developments in testing for unit roots while taking account of possible structural breaks. In doing so we discuss the distinction between treating structural break dates as exogenously determined, an approach initially adopted in the literature, and determining break dates endogenously. That is, we differentiate between testing for breaks when the break date is known and when it is assumed to be unknown. Also important is the distinction between discrete breaks and gradual breaks. Additionally, we describe tests for both single and multiple breaks and discuss some of the pitfalls of the latter.
JEL: | C12 C32 |
URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2006_10&r=ets |
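As a concrete illustration of the endogenous-break approach the survey discusses, the following sketch (not the authors' code; the series and settings are hypothetical) runs an ADF-type regression with an intercept-shift dummy over all admissible break dates and keeps the date that minimises the t-statistic on the lagged level, in the spirit of Zivot and Andrews.

    import numpy as np
    import statsmodels.api as sm

    def za_intercept_break(y, trim=0.15, lags=1):
        """Endogenous intercept-break ADF test: minimal t-stat on the lagged level."""
        y = np.asarray(y, dtype=float)
        T = len(y)
        dy = np.diff(y)
        best_t, best_tb = np.inf, None
        for tb in range(int(trim * T), int((1 - trim) * T)):   # candidate break dates
            idx = np.arange(lags, T - 1)
            cols = [y[idx],                                     # lagged level y_{t-1}
                    (idx + 1 > tb).astype(float),               # intercept-shift dummy DU_t
                    idx + 1.0]                                  # linear trend
            cols += [dy[idx - k] for k in range(1, lags + 1)]   # lagged differences
            res = sm.OLS(dy[idx], sm.add_constant(np.column_stack(cols))).fit()
            if res.tvalues[1] < best_t:                         # t-stat on the lagged level
                best_t, best_tb = res.tvalues[1], tb
        return best_t, best_tb      # best_t is compared with Zivot-Andrews critical values

    # hypothetical series: a random walk with a level shift half-way through
    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(size=300)); y[150:] += 5.0
    print(za_intercept_break(y))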
By: | Jokipii, Terhi (Bank of Finland and Trinity College Dublin); Lucey, Brian (Institute for International Integration Studies, Trinity College Dublin)
Abstract: | Making use of ten years of daily data, this paper examines whether banking sector co-movements between the three largest Central and Eastern European Countries (CEECs) can be attributed to contagion or to interdependence. Our tests based on simple unadjusted correlation analysis uncover evidence of contagion between all pairs of countries. Adjusting for market volatility during turmoil, however, produces different results. We then find contagion from the Czech Republic to Hungary during this time, but all other cross-market co-movements are attributable rather to strong cross-market linkages. In addition, we construct a set of dummy variables to try to capture the impact of macroeconomic news on these markets. Controlling for own-country fundamentals, we discover that the correlations diminish between the Czech Republic and Poland, but that coefficients for all pairs remain substantial and significant. Finally, we address the problems of simultaneous equations, omitted variables and heteroskedasticity, and adjust our data accordingly. We confirm our previous findings. Our tests provide evidence in favour of parameter instability, again signifying the existence of contagion arising from problems in the Czech Republic affecting Hungary during much of 1996.
Keywords: | contagion; interdependence; macroeconomic news; banking sector; stock returns |
JEL: | F30 F40 G15 |
Date: | 2006–07–03 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2006_015&r=ets |
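The volatility adjustment mentioned in the abstract can be illustrated with the standard correction that scales a turmoil-period correlation for the mechanical rise in the source market's variance. The sketch below is not the authors' code; the return series and the crisis window are hypothetical.

    import numpy as np

    def adjusted_correlation(src, dst, turmoil):
        """src, dst: return series; turmoil: boolean mask marking the crisis window."""
        rho = np.corrcoef(src[turmoil], dst[turmoil])[0, 1]        # unadjusted crisis correlation
        delta = np.var(src[turmoil]) / np.var(src[~turmoil]) - 1   # relative rise in source variance
        return rho / np.sqrt(1.0 + delta * (1.0 - rho ** 2))       # volatility-adjusted correlation

    # hypothetical example: the 'source' market turns turbulent in a sub-period
    rng = np.random.default_rng(1)
    src = rng.normal(size=2500); src[500:750] *= 3.0
    dst = 0.4 * src + rng.normal(size=2500)
    mask = np.zeros(2500, dtype=bool); mask[500:750] = True
    print(np.corrcoef(src[mask], dst[mask])[0, 1], adjusted_correlation(src, dst, mask))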
By: | Crowley, Patrick (College of Business, Texas A&M University); Maraun, Douglas (Nonlinear Dynamics Group, Physics Institute, University of Potsdam); Mayes, David (Monetary Policy and Research Department, Bank of Finland)
Abstract: | Using recent advances in time-varying spectral methods, this research analyses the growth cycles of the core of the euro area in terms of frequency content and phasing of cycles. The methodology uses the continuous wavelet transform (CWT) and also Hilbert wavelet pairs in the setting of a non-decimated discrete wavelet transform in order to analyse bivariate time series in terms of conventional frequency domain measures from spectral analysis. The findings are that coherence and phasing between the three core members of the euro area (France, Germany and Italy) have increased since the launch of the euro.
Keywords: | time-varying spectral analysis; coherence; phase; business cycles; EMU; growth cycles; Hilbert transform; wavelet analysis
JEL: | C19 C63 C65 E32 E39 E58 F40 |
Date: | 2006–09–27 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2006_018&r=ets |
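A minimal sketch of the kind of time-frequency objects involved, assuming PyWavelets is available: two continuous wavelet transforms with a complex Morlet wavelet, their cross-wavelet spectrum and the phase difference by scale. The growth series are simulated for illustration, and the smoothing needed for coherence proper is omitted.

    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    t = np.arange(160)                                   # 40 years of quarterly observations
    g1 = np.sin(2 * np.pi * t / 32) + 0.5 * rng.normal(size=t.size)        # simulated growth cycle
    g2 = np.sin(2 * np.pi * (t - 4) / 32) + 0.5 * rng.normal(size=t.size)  # same cycle, lagged

    scales = np.arange(2, 64)
    W1, freqs = pywt.cwt(g1, scales, 'cmor1.5-1.0')      # complex Morlet CWT
    W2, _ = pywt.cwt(g2, scales, 'cmor1.5-1.0')
    cross = W1 * np.conj(W2)                             # cross-wavelet spectrum
    phase = np.angle(cross)                              # lead/lag by scale and time
    print(phase.shape, np.mean(phase[freqs < 1.0 / 16])) # average phase at business-cycle scales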
By: | Crowley, Patrick (Bank of Finland Research and College of Business, Texas A&M University); Lee, Jim (College of Business, Texas A&M University)
Abstract: | This article analyses the frequency components of European business cycles using real GDP by employing multiresolution decomposition (MRD) with the use of maximal overlap discrete wavelet transforms (MODWT). Static wavelet variance and correlation analysis is performed, and phasing is studied using co-correlation with the euro area by scale. Lastly, dynamic conditional correlation GARCH models are used to obtain dynamic correlation estimates by scale against the EU to evaluate synchronicity of cycles through time. The general findings are that euro area members fall into one of three categories: i) high and dynamic correlations at all frequency cycles (eg France, Belgium, Germany), ii) low static and dynamic correlations, with little sign of convergence occurring (eg Greece), and iii) low static correlation but convergent dynamic correlations (eg Finland and Ireland).
Keywords: | business cycles; growth cycles; European Union; multiresolution analysis; wavelets; co-correlation; dynamic correlation |
JEL: | C65 E32 O52 |
Date: | 2005–05–11 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_012&r=ets |
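A minimal sketch of scale-by-scale correlation, using PyWavelets' stationary (non-decimated) wavelet transform as a stand-in for the MODWT; the GDP series, wavelet choice and number of levels are illustrative assumptions, and the dynamic (DCC-GARCH) step is not shown.

    import numpy as np
    import pywt

    rng = np.random.default_rng(3)
    n, level = 256, 4                                    # pywt.swt needs len divisible by 2**level
    euro = np.cumsum(rng.normal(size=n))                 # simulated euro-area activity (log level)
    member = 0.6 * euro + np.cumsum(rng.normal(size=n))  # simulated member country

    def details(x):
        # detail coefficients by level from the non-decimated transform of the growth rates
        growth = np.diff(x, prepend=x[0])
        return [d for _, d in pywt.swt(growth, 'db4', level=level)]

    for j, (d_e, d_m) in enumerate(zip(details(euro), details(member))):
        print(f"coefficient level {j} (pywt.swt ordering): corr = {np.corrcoef(d_e, d_m)[0, 1]:.2f}")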
By: | Männistö, Hanna-Leena (Bank of Finland Research)
Abstract: | To develop forecasting procedures with a forward-looking dynamic general equilibrium model, we built a small New Keynesian model and calibrated it to euro area data. It was essential in this context that we allowed for long-run growth in GDP. We brought additional asset price equations, based on the expectations hypothesis and the Gordon growth model, into the standard open economy model in order to extract information on private sector long-run expectations about fundamentals and to combine that information into the macroeconomic forecast. We propose a method of transforming the model for forecasting use in such a way as to match, in an economically meaningful way, the short-term forecast levels, especially of the model's jump variables, to the parameters affecting the long-run trends of the key macroeconomic variables. More specifically, in the model we have used for illustrative purposes, we pinned down the long-run inflation expectations and the domestic and foreign potential growth rates using the model's steady state solution in combination with forward-looking information, assumed to be contained in up-to-date financial market data. Consequently, our proposed solution preserves consistency with market expectations and results, as a favourable by-product, in forecast paths with no jumps in the initial forecast period. Furthermore, no ad hoc re-calibration is called for in the proposed forecasting procedures, which is clearly an advantage from the point of view of transparency in communication.
Keywords: | forecasting; New Keynesian model; DSGE model; rational expectations; open economy |
JEL: | E17 E30 E31 F41 |
Date: | 2005–10–11 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_021&r=ets |
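The Gordon-growth step can be illustrated with a one-line calculation: given an observed price, dividend and required return, the model's steady-state pricing equation pins down an implied long-run growth expectation. The numbers below are hypothetical, not the paper's calibration.

    def implied_growth(price, dividend, required_return):
        # Gordon model: P = D * (1 + g) / (r - g)  =>  g = (r * P - D) / (P + D)
        return (required_return * price - dividend) / (price + dividend)

    # hypothetical inputs: an equity index level, its dividend and a required return of 7%
    print(implied_growth(price=100.0, dividend=3.0, required_return=0.07))   # ~0.039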
By: | Vuorenmaa, Tommi (Department of Economics, University of Helsinki)
Abstract: | This paper investigates the dependence of average stock market volatility on the timescale, or the time interval used to measure price changes, a dependence often referred to as the scaling law. The scaling factor, on the other hand, refers to the elasticity of the volatility measure with respect to the timescale. This paper studies, in particular, whether the scaling factor differs from the one implied by a simple random walk model and whether it has remained stable over time. It also explores possible underlying reasons for the observed behaviour of volatility in terms of the heterogeneity of stock market players and the periodicity of intraday volatility. The data consist of volatility series of Nokia Oyj at the Helsinki Stock Exchange at five-minute frequency over the period from January 4, 1999 to December 30, 2002. The paper uses wavelet methods to decompose stock market volatility at different timescales. Wavelet methods are particularly well motivated in the present context due to their superior ability to describe local properties of time series. The results are, in general, consistent with multiscaling in Finnish stock markets. Furthermore, the scaling factor and the long-memory parameters of the volatility series are not constant over time, nor consistent with a random walk model. Interestingly, the evidence also suggests that, for a significant part, the behaviour of volatility is accounted for by an intraday volatility cycle referred to as the New York effect. Long-memory features emerge more clearly in the data over the period around the burst of the IT bubble and may, consequently, be an indication of irrational exuberance on the part of investors.
Keywords: | long-memory; scaling; stock market; volatility; wavelets |
JEL: | C14 C22 |
Date: | 2005–10–11 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_027&r=ets |
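A minimal sketch of the scaling-law calculation, not the author's code: aggregate hypothetical five-minute returns to coarser timescales, measure volatility at each timescale, and estimate the scaling factor as the slope of a log-log regression (0.5 is the random-walk benchmark for the standard deviation).

    import numpy as np

    rng = np.random.default_rng(4)
    r5 = rng.standard_t(df=4, size=20_000) * 0.001       # hypothetical five-minute returns

    scales = np.array([1, 2, 4, 8, 16, 32])              # timescales in five-minute units
    vol = [np.std(r5[: r5.size // k * k].reshape(-1, k).sum(axis=1)) for k in scales]
    slope, intercept = np.polyfit(np.log(scales), np.log(vol), 1)
    print(f"estimated scaling factor: {slope:.3f} (random walk benchmark: 0.5)")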
By: | Crowley, Patrick (College of Business, Texas A&M University)
Abstract: | Wavelet analysis, although used extensively in disciplines such as signal processing, engineering, medical sciences, physics and astronomy, has not yet fully entered the economics discipline. In this discussion paper, wavelet analysis is introduced in an intuitive manner, and the existing economics and finance literature that utilises wavelets is reviewed. Extensive examples of exploratory wavelet analysis are given, many using Canadian, US and Finnish industrial production data. Finally, potential future applications of wavelet analysis in economics are discussed.
Keywords: | statistical methodology; multiresolution analysis; wavelets; business cycles; economic growth |
JEL: | C19 C65 C87 E32 |
Date: | 2005–01–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_001&r=ets |
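A minimal sketch of an exploratory multiresolution decomposition, assuming PyWavelets: a simulated industrial production series is split into additive components by scale with a discrete wavelet transform; the wavelet and number of levels are illustrative choices.

    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    ip = np.cumsum(rng.normal(size=512)) + 5 * np.sin(np.arange(512) * 2 * np.pi / 64)

    wavelet, level = 'sym8', 5
    coeffs = pywt.wavedec(ip, wavelet, level=level)

    components = []                                      # smooth S5 followed by details D5..D1
    for j in range(len(coeffs)):
        keep = [c if i == j else np.zeros_like(c) for i, c in enumerate(coeffs)]
        components.append(pywt.waverec(keep, wavelet)[: ip.size])

    print([round(float(np.var(c)), 2) for c in components])   # variance contribution by scale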
By: | Chollete, Lorán (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Heinen, Andreas (Dept. of Statistics and Econometrics, Universidad Carlos III de Madrid) |
Abstract: | How common and how persistent are turbulent periods? We address these questions by developing and applying a dynamic dependence framework. In order to answer the first question we estimate an unconditional mixture model of normal copulas, based on both economic and econometric justification. In order to answer the second question, we develop and estimate a hidden Markov model of copulas, which allows for dynamic clustering of correlations. These models permit one to infer the relative importance of turbulent and quiescent periods in international markets. Empirically, the three most striking findings are as follows. First, for the unconditional model, turbulent regimes are more common. Second, the conditional copula model dominates the unconditional model. Third, turbulent regimes tend to be more persistent.
Keywords: | International Markets; Turbulence; Hidden Markov Model; Copula |
JEL: | C14 C22 C50 F30 G15 |
Date: | 2006–10–11 |
URL: | http://d.repec.org/n?u=RePEc:hhs:nhhfms:2006_010&r=ets |
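A minimal sketch of the unconditional part of the framework, not the authors' estimator: the log-likelihood of a two-regime mixture of bivariate Gaussian copulas ("quiescent" low-correlation versus "turbulent" high-correlation), evaluated on probability-integral-transformed returns. The hidden Markov dynamics are omitted and the data are simulated.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def gauss_copula_logpdf(u, v, rho):
        x, y = norm.ppf(u), norm.ppf(v)
        return (-0.5 * np.log(1 - rho ** 2)
                + (2 * rho * x * y - rho ** 2 * (x ** 2 + y ** 2)) / (2 * (1 - rho ** 2)))

    def neg_loglik(theta, u, v):
        rho_lo, rho_hi, logit_p = theta
        p = 1.0 / (1.0 + np.exp(-logit_p))                # weight of the turbulent regime
        dens = (p * np.exp(gauss_copula_logpdf(u, v, rho_hi))
                + (1 - p) * np.exp(gauss_copula_logpdf(u, v, rho_lo)))
        return -np.sum(np.log(dens))

    # hypothetical probability-integral-transformed returns and a crude fit
    rng = np.random.default_rng(6)
    z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
    u, v = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
    res = minimize(neg_loglik, x0=[0.2, 0.8, 0.0], args=(u, v),
                   bounds=[(-0.95, 0.95), (-0.95, 0.95), (-5, 5)])
    print(res.x)                                          # (rho_quiet, rho_turbulent, logit weight)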
By: | Panicos Demetriades; Michail Karoglou; Siong Hook Law |
Abstract: | This paper employs several newly proposed techniques to identify the number and timing of structural breaks in the variance dynamics of stock market returns. These techniques are applied to five East Asian emerging markets, all of which liberalised their financial markets during the period under consideration. It is shown that the detected breakdates in the volatility of stock market returns do not correspond to official liberalisation dates; as a result, the use of official liberalisation dates as breakdates is likely to result in inaccurate inference. By using data-driven techniques to detect multiple structural changes, a richer, and inevitably more accurate, pattern of volatility dynamics emerges in comparison to focussing on official liberalisation dates.
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:06/13&r=ets |
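One widely used data-driven device for locating variance breaks, shown here purely for illustration and not necessarily among the techniques the paper employs, is the centred cumulative sum of squares (Inclan-Tiao type) statistic; multiple breaks are usually found by applying it iteratively to sub-samples. The returns series below is simulated.

    import numpy as np

    def cusum_of_squares_break(r):
        r = np.asarray(r, dtype=float) - np.mean(r)
        C = np.cumsum(r ** 2)                             # cumulative sum of squares
        T = r.size
        D = C / C[-1] - np.arange(1, T + 1) / T           # centred CUSUM of squares
        stat = np.sqrt(T / 2.0) * np.max(np.abs(D))       # compare with ~1.358 (5% asymptotic)
        return stat, int(np.argmax(np.abs(D)))            # statistic and candidate break location

    rng = np.random.default_rng(7)
    r = np.r_[rng.normal(0, 1, 1000), rng.normal(0, 2, 1000)]   # variance doubles at t = 1000
    print(cusum_of_squares_break(r))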
By: | Patrick J. Kehoe |
Abstract: | The common approach to evaluating a model in the structural VAR literature is to compare the impulse responses from structural VARs run on the data to the theoretical impulse responses from the model. The Sims-Cogley-Nason approach instead compares the structural VARs run on the data to identical structural VARs run on data from the model of the same length as the actual data. Chari, Kehoe, and McGrattan (2006) argue that the inappropriate comparison made by the common approach is the root of the problems in the SVAR literature. In practice, the problems can be solved simply. Switching from the common approach to the Sims-Cogley-Nason approach basically involves changing a few lines of computer code and a few lines of text. This switch will vastly increase the value of the structural VAR literature for economic theory. |
JEL: | C32 C51 C52 E13 E17 E21 E27 E32 E37 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12575&r=ets |
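A minimal sketch of the comparison the paper advocates, with a hypothetical simulate_from_model standing in for the user's structural model: the same finite-order VAR is fitted both to the data and to many model-generated samples of the same length, and the two sets of small-sample impulse responses are compared.

    import numpy as np
    from statsmodels.tsa.api import VAR

    def simulate_from_model(T, rng):
        # hypothetical placeholder: a bivariate VAR(1) standing in for "the economic model"
        A = np.array([[0.7, 0.1], [0.0, 0.5]])
        y = np.zeros((T, 2))
        for t in range(1, T):
            y[t] = A @ y[t - 1] + rng.normal(size=2)
        return y

    T, lags, horizon, reps = 180, 4, 20, 200
    rng = np.random.default_rng(8)

    data = simulate_from_model(T, rng)                    # stand-in for the actual data
    irf_data = VAR(data).fit(lags).irf(horizon).irfs      # (horizon+1, k, k) responses from the data

    irf_model = np.mean([VAR(simulate_from_model(T, rng)).fit(lags).irf(horizon).irfs
                         for _ in range(reps)], axis=0)   # small-sample model-implied responses
    print(np.max(np.abs(irf_data - irf_model)))           # discrepancy between the two sets of IRFs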
By: | F. Laurini; J. A. Tawn |
Abstract: | Generalised autoregressive conditional heteroskedastic (GARCH) processes have wide application in financial modelling. To characterise the extreme values of this process, the extremal index is required. Mikosch and Starica (2000) derive the extremal index for the squared GARCH(1,1) process. Here we propose an algorithm for the evaluation of the extremal index and of the limiting distribution of the size of clusters of extremes for GARCH(1,1) processes with t-distributed innovations, and tabulate values of these characteristics for a range of parameters of the GARCH(1,1) process. This algorithm also enables properties of other cluster functionals to be evaluated.
Keywords: | clusters, extreme value theory, extremal index, finance, GARCH, multivariate regular variation |
JEL: | C15 C32 C53 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:par:dipeco:2006-se01&r=ets |
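The authors' algorithm evaluates the extremal index numerically; as an illustrative cross-check only, the sketch below simulates a GARCH(1,1) process with Student-t innovations and applies a simple blocks estimator of the extremal index. Parameter values, threshold and block length are assumptions.

    import numpy as np

    def simulate_garch_t(n, omega=1e-6, alpha=0.10, beta=0.85, df=5, seed=0):
        rng = np.random.default_rng(seed)
        eps = rng.standard_t(df, size=n) * np.sqrt((df - 2) / df)   # unit-variance t shocks
        x = np.empty(n)
        s2 = omega / (1 - alpha - beta)                   # start at the unconditional variance
        for t in range(n):
            x[t] = np.sqrt(s2) * eps[t]
            s2 = omega + alpha * x[t] ** 2 + beta * s2
        return x

    def blocks_extremal_index(x, q=0.99, block=100):
        u = np.quantile(x, q)
        exceed = x > u
        blocks = exceed[: x.size // block * block].reshape(-1, block)
        return blocks.any(axis=1).sum() / exceed.sum()    # clusters per exceedance

    x = simulate_garch_t(200_000)
    print(blocks_extremal_index(x))                       # well below 1, i.e. clustered extremes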
By: | L. Grossi; G. Morelli |
Abstract: | In order to cope with the stylized facts of financial time series, many models have been proposed within the GARCH family (e.g. EGARCH, GJR-GARCH, QGARCH, FIGARCH, LSTGARCH) and among stochastic volatility models (e.g. SV). Generally, all these models tend to produce very similar results as regards forecasting performance, and most of the time it is difficult to choose the most appropriate specification. In addition, all these models are very sensitive to the presence of atypical observations. The purpose of this paper is to provide the user with new robust model selection procedures for financial models which downweight or eliminate the effect of atypical observations. The extreme case is when outliers are treated as missing data. In this paper we extend the theory of missing data to the family of GARCH models and show how to robustify the loglikelihood to make it insensitive to the presence of outliers. The suggested procedure enables us both to detect atypical observations and to select the best models in terms of forecasting performance.
Keywords: | GARCH models, extreme value, robust estimation |
JEL: | C16 C22 C53 G15 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:par:dipeco:2006-se02&r=ets |
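A minimal sketch of the missing-data idea, not the authors' exact procedure: a GARCH(1,1) Gaussian log-likelihood in which flagged outliers contribute nothing to the likelihood and their squared return is replaced by its conditional expectation in the variance recursion. The data, outlier rule and parameter values are hypothetical.

    import numpy as np

    def robust_garch_loglik(params, r, is_outlier):
        omega, alpha, beta = params
        s2 = np.var(r)                                    # start the recursion at the sample variance
        ll = 0.0
        for t in range(r.size):
            if is_outlier[t]:
                r2 = s2                                   # missing observation: use E[r_t^2 | past] = s2
            else:
                r2 = r[t] ** 2
                ll += -0.5 * (np.log(2 * np.pi) + np.log(s2) + r2 / s2)
            s2 = omega + alpha * r2 + beta * s2
        return ll

    rng = np.random.default_rng(9)
    r = rng.normal(scale=0.01, size=1000); r[500] = 0.15  # one injected outlier
    flags = np.abs(r) > 10 * np.std(r)                    # crude outlier flag for illustration
    print(robust_garch_loglik((1e-6, 0.05, 0.90), r, flags))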
By: | Oleg Korenok (Department of Economics, VCU School of Business); Stanislav Radchenko (Department of Economics, University of North Carolina at Charlotte) |
Abstract: | This paper proposes to model the error term in a smooth transition autoregressive target zone model as Gaussian with stochastic volatility (STARTZ-SV) or as Student-t with GARCH volatility (STARTZ-TGARCH). Using the dynamics of the Norwegian krone exchange rate index, we show that both models produce standardized residuals that are closer to the assumed distributions and do not produce a hump in the estimated marginal distribution of the exchange rate, which is more consistent with theoretical predictions. We apply the developed models to test whether the dynamics of the oil price can be well approximated by Krugman's target zone model. Our estimates of the conditional volatility and the marginal distribution reject the target zone hypothesis.
Keywords: | target zone, oil price, exchange rate, stochastic volatility, griddy Gibbs, smooth transition |
JEL: | C52 Q38 F31 |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:vcu:wpaper:0505&r=ets |
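A generic smooth-transition sketch, not the exact STARTZ specification estimated in the paper: the deviation from a central parity behaves like a random walk in the middle of the band, while mean reversion switches on smoothly near the edges through an exponential transition function. All parameter values are illustrative.

    import numpy as np

    def simulate_star_target_zone(n, phi=-0.3, gamma=25.0, band=0.02, sigma=0.003, seed=0):
        rng = np.random.default_rng(seed)
        x = np.zeros(n)                                   # log deviation from the central parity
        for t in range(1, n):
            G = 1.0 - np.exp(-gamma * (x[t - 1] / band) ** 2)   # ~0 mid-band, ~1 near the edges
            x[t] = x[t - 1] + phi * G * x[t - 1] + sigma * rng.normal()
        return x

    x = simulate_star_target_zone(2000)
    print(x.min(), x.max())                               # reversion keeps the rate near the band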
By: | Eickmeier, Sandra |
Abstract: | This paper seeks to assess comovements and heterogeneity in the euro area by fitting a non-stationary dynamic factor model (Bai and Ng, 2004), augmented with a structural factor setup (Forni and Reichlin, 1998), to a large set of euro-area macroeconomic variables observed between 1982 and 2003. This framework allows us to estimate stationary and non-stationary common factors and idiosyncratic components, to identify the structural shocks behind the common factors and to assess their transmission to individual EMU countries. Our most important findings are the following. EMU countries share five common trends. However, the non-stationarity of individual countries' key macroeconomic variables is not driven by pervasive (common) shocks alone: most countries' output and inflation are also affected by long-lasting idiosyncratic shocks. Unweighted dispersion is primarily due to idiosyncratic shocks rather than the asymmetric spread of common shocks. However, the latter seems to be the main driving force of weighted output dispersion at the end of the 1980s, the beginning of the 1990s and again from 1999 on, and of weighted inflation dispersion in the mid-1980s and the mid-1990s. To examine the transmission of common shocks to individual EMU countries in more detail, we identify five structural common shocks, namely two euro-area supply shocks, one euro-area demand shock, one common monetary policy shock and a US shock. We find similar output and inflation responses across countries (with some exceptions), and similarity generally increases with the horizon.
Keywords: | Dynamic factor models, sign restrictions, common trends, common cycles, international business cycles, EMU, output and inflation differentials |
JEL: | C3 E32 E5 F00 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdp1:4793&r=ets |
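A minimal sketch of the Bai-Ng (2004) idea used here, not the paper's estimation code: principal-components factors are extracted from the first-differenced panel and then cumulated to recover possibly non-stationary common factors and idiosyncratic components. The panel is simulated and the standardisation choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(10)
    T, N, r = 88, 60, 5                                    # quarterly panel with five common trends
    common_trends = np.cumsum(rng.normal(size=(T, r)), axis=0)
    loadings = rng.normal(size=(r, N))
    X = common_trends @ loadings + np.cumsum(rng.normal(scale=0.5, size=(T, N)), axis=0)

    dX = np.diff(X, axis=0)
    dX = (dX - dX.mean(axis=0)) / dX.std(axis=0)           # standardise the differenced panel
    U, S, Vt = np.linalg.svd(dX, full_matrices=False)
    f_hat = U[:, :r] * S[:r]                               # stationary factors of the differenced data
    F_hat = np.cumsum(f_hat, axis=0)                       # cumulate to recover common trends (up to rotation)
    lam_hat = Vt[:r]                                       # estimated loadings
    idio = np.cumsum(dX, axis=0) - F_hat @ lam_hat         # cumulated idiosyncratic components
    print(F_hat.shape, lam_hat.shape, idio.shape)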
By: | Abramov, Vyacheslav; Klebaner, Fima |
Abstract: | In this paper we study volatility functions. Our main assumption is that the volatility is deterministic or stochastic but driven by a Brownian motion independent of the stock. We propose a forecasting method and check its consistency with option pricing theory. To estimate the unknown volatility function we use the approach of Goldentayer, Klebaner and Liptser, based on filters for the estimation of an unknown function from its noisy observations. One of the main assumptions is that the volatility is a continuous function, with derivative satisfying some smoothness conditions. The two forecasting methods correspond to first- and second-order filters: the first-order filter tracks the unknown function, while the second-order filter tracks the function and its derivative. The quality of forecasting therefore depends on the type of volatility function: if oscillations of volatility around its average are frequent, the first-order filter seems appropriate; otherwise the second-order filter is better. Further, in deterministic volatility models the price of options is given by the Black-Scholes formula with averaged future volatility (Hull and White, 1987; Stein and Stein, 1991). This enables us to compare the implied volatility with the averaged estimated historical volatility. This comparison is done for five companies and shows that the implied volatility and the historical volatilities are not statistically related.
Keywords: | Non-constant volatility; approximating and forecasting volatility; Black-Scholes formula; best linear predictor |
JEL: | G13 |
Date: | 2006–06–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:207&r=ets |
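The pricing relation invoked in the abstract can be sketched directly: with a deterministic volatility path, a European call is priced by Black-Scholes with the root-mean-square of volatility over the option's remaining life. The volatility path below is hypothetical, not an estimate from the paper.

    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    t_grid = np.linspace(0.0, 0.5, 1000)                  # half a year, in years
    sigma_path = 0.2 + 0.05 * np.sin(4 * np.pi * t_grid)  # hypothetical deterministic volatility
    sigma_avg = np.sqrt(np.mean(sigma_path ** 2))         # root-mean-square future volatility
    print(black_scholes_call(S=100.0, K=100.0, T=0.5, r=0.03, sigma=sigma_avg))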
By: | Scalas, Enrico; Kim, Kyungsik |
Abstract: | This paper illustrates a procedure for fitting financial data with alpha-stable distributions. After using all the available methods to estimate the distribution parameters, one can qualitatively select the best estimate and run some goodness-of-fit tests on it, in order to quantitatively assess its quality. It turns out that, for the two investigated data sets (MIB30 and DJIA from 2000 to the present), an alpha-stable fit of log-returns is reasonably good.
Keywords: | finance; statistical methods; stable distributions |
JEL: | C14 C16 G00 |
Date: | 2006–08–23 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:336&r=ets |
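A minimal sketch of the goodness-of-fit step, assuming SciPy's levy_stable distribution: a Kolmogorov-Smirnov test of returns against an alpha-stable law with candidate parameters. The returns are simulated here rather than taken from the MIB30 or DJIA, and the parameters would in practice come from one of the estimation methods the paper compares. Evaluating the stable cdf is numerically costly, so a modest sample is used.

    import numpy as np
    from scipy import stats

    alpha, beta, loc, scale = 1.7, 0.0, 0.0, 0.01          # candidate parameter estimate (assumed)
    rng = np.random.default_rng(11)
    returns = stats.levy_stable.rvs(alpha, beta, loc=loc, scale=scale, size=500, random_state=rng)

    ks = stats.kstest(returns, stats.levy_stable(alpha, beta, loc=loc, scale=scale).cdf)
    print(ks.statistic, ks.pvalue)                         # a large p-value means the fit is not rejected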
By: | Leeb, Hannes; Pötscher, Benedikt M. |
Abstract: | We consider the problem of estimating the unconditional distribution of a post-model-selection estimator. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion like AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least-squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate the unconditional distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for the distribution. These lower bounds are shown to approach 1/2 or even 1 in large samples, depending on the situation considered. Similar impossibility results are also obtained for the distribution of linear functions (e.g., predictors) of the post-model-selection estimator. |
Keywords: | Inference after model selection; Post-model-selection estimator; Pre-test estimator; Selection of regressors; Akaike's information criterion AIC; Thresholding; Model uncertainty; Consistency; Uniform consistency; Lower risk bound. |
JEL: | C20 C13 C52 C12 C51 |
Date: | 2005–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:72&r=ets |
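A minimal sketch illustrating the object studied, not the impossibility result itself: the simulated finite-sample distribution of a pre-test estimator, where a nuisance regressor is kept only if its t-statistic clears a threshold and the coefficient of interest is then re-estimated in the selected model. The design is hypothetical.

    import numpy as np

    rng = np.random.default_rng(12)
    n, reps, crit = 50, 5000, 1.96
    beta1, beta2 = 1.0, 0.3                               # beta2 is "small", so selection matters

    est = np.empty(reps)
    for i in range(reps):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
        X = np.column_stack([x1, x2])
        b, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        se_b2 = np.sqrt(rss[0] / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
        if abs(b[1] / se_b2) > crit:                      # keep x2: report the full-model estimate
            est[i] = b[0]
        else:                                             # drop x2: re-estimate the restricted model
            est[i] = (x1 @ y) / (x1 @ x1)

    print(est.mean(), est.std())                          # `est` holds draws from the target distribution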