nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒01‒14
thirteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Regional Unemployment Forecasting Using Structural Component Models With Spatial Autocorrelation By Katharina Hampel; Marcus Kunz; Norbert Schanne; Ruediger Wapler; Antje Weyh
  2. Quantile Forecasts of Daily Exchange Rate Returns from Forecasts of Realized Volatility By Clements, Michael P.; Galvão, Ana Beatriz; Kim, Jae H.
  3. Testing for seasonal unit roots in heterogeneous panels in the presence of cross section dependence By Giulietti, Monica; Otero, Jesus; Smith, Jeremy
  4. A Monte Carlo Evaluation of the Efficiency of the PCSE Estimator By Xiujian Chen; Shu Lin; W. Robert Reed
  5. Forecasting using a large number of predictors - Is Bayesian regression a valid alternative to principal components? By Christine De Mol; Domenico Giannone; Lucrezia Reichlin
  6. Comovements in volatility in the euro money market By Nuno Cassola; Claudio Morana
  7. Simple (but effective) tests of long memory versus structural breaks By Katsumi Shimotsu
  8. Estimation of Approximate Factor Models: Is it Important to have a Large Number of Variables? By Chris Heaton; Victor Solo
  9. Forecasting volatility and volume in the Tokyo stock market : long memory, fractality and regime switching By Lux, Thomas; Kaizoji, Taisei
  10. The Markov-Switching Multifractal Model of asset returns : GMM estimation and linear forecasting of volatility By Lux, Thomas
  11. A Smooth Transition to the Unit Root Distribution via the Chi-Square Distribution with Interval Estimation for Nearly Integrated Autoregressive Processes By Chen, Willa; Deo, Rohit
  12. Computational Intelligence in Exchange-Rate Forecasting By Andreas S. Andreou; George A. Zombanakis
  13. A New Method for Combining Detrending Techniques with Application to Business Cycle Synchronization of the New EU Members By Zsolt Darvas; Gábor Vadas

  1. By: Katharina Hampel; Marcus Kunz; Norbert Schanne; Ruediger Wapler; Antje Weyh
    Abstract: Labour-market policies are increasingly being decided on a regional level. This implies that institutions have an increased need for regional forecasts as a guideline for their decision-making process. Therefore, we forecast regional unemployment in the 176 German labour market districts. We use an augmented structural component (SC) model and compare the results from this model with those from basic SC and autoregressive integrated moving average (ARIMA) models. Basic SC models lack two important dimensions: First, they only use level, trend, seasonal and cyclical components, although former periods of the dependent variable generally have a significant influence on the current value. Second, as spatial units become smaller, the influence of “neighbour effects” becomes more important. In this paper we augment the SC model for structural breaks, autoregressive components and spatial autocorrelation. Using unemployment data from the Federal Employment Services in Germany for the period December 1997 to August 2005, we first estimate basic SC models with components for structural breaks and ARIMA models for each spatial unit separately. In a second stage, autoregressive components are added into the SC model. Third, spatial autocorrelation is introduced into the SC model. We assume that unemployment in adjacent districts is not independent for two reasons: One source of spatial autocorrelation may be that the effect of certain determinants of unemployment is not limited to the particular district but also spills over to neighbouring districts. Second, factors may exist which influence a whole region but are not fully captured by exogenous variables and are reflected in the residuals. We test the quality of the forecasts from the basic models and the augmented SC model by ex-post estimation for the period September 2004 to August 2005. First results show that the SC model with autoregressive elements and spatial autocorrelation is superior to basic SC and ARIMA models in most of the German labour market districts.
    Date: 2006–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p196&r=ets
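A minimal sketch of the kind of augmented SC model described in item 1, using the unobserved-components machinery in statsmodels; the district series, the neighbour average standing in for the spatial autocorrelation term, and all settings are illustrative placeholders, not the authors' specification:

```python
# Structural components (level, trend, seasonal) plus an AR(1) term, with a
# neighbour-district average as an exogenous proxy for spatial spillovers.
# Simulated placeholder data; the paper's 176-district panel is not used here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 93                                            # months, Dec 1997 - Aug 2005
district = np.cumsum(rng.normal(0, 1, n)) + 10 * np.sin(2 * np.pi * np.arange(n) / 12)
neighbour_avg = district + rng.normal(0, 0.5, n)  # hypothetical adjacent-district mean

model = sm.tsa.UnobservedComponents(
    district,
    level="local linear trend",          # level and trend components
    seasonal=12,                         # monthly seasonality
    autoregressive=1,                    # the added autoregressive component
    exog=neighbour_avg.reshape(-1, 1),   # crude spatial-spillover proxy
)
res = model.fit(disp=False)
# An ex-post evaluation would compare res.forecast(12, exog=...) with held-out data.
```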
  2. By: Clements, Michael P. (University of Warwick); Galvão, Ana Beatriz (Queen Mary, University of London); Kim, Jae H. (Monash University)
    Abstract: Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the HAR model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts.
    Keywords: realized volatility ; quantile forecasting ; MIDAS ; HAR ; exchange rates
    JEL: C32 C53 F37
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:777&r=ets
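The two quantile constructions contrasted in item 2 are easy to sketch; the volatility forecasts below are placeholders for HAR/MIDAS model output, and the sample is simulated:

```python
# (i) quantile from a distributional assumption (here normal) applied to a
# volatility forecast; (ii) quantile from the empirical distribution of past
# standardized returns. Both scale the alpha-quantile by forecast volatility.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
returns = 0.6 * rng.standard_t(df=5, size=1000)  # hypothetical daily FX returns
sigma_hat = np.full(1000, returns.std())         # stand-in volatility forecasts

alpha = 0.05
q_normal = sigma_hat[-1] * norm.ppf(alpha)       # (i) Gaussian quantile
z = returns / sigma_hat                          # standardized returns
q_empirical = sigma_hat[-1] * np.quantile(z[-500:], alpha)  # (ii) rolling empirical
print(q_normal, q_empirical)
```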
  3. By: Giulietti, Monica (Aston Business School); Otero, Jesus (Universidad del Rosario, Colombia); Smith, Jeremy (University of Warwick)
    Abstract: This paper presents two alternative methods for modifying the HEGY-IPS test in the presence of cross-sectional dependence. In general, the bootstrap method (BHEGY-IPS) has greater power than the method suggested by Pesaran (2007) (CHEGY-IPS), although for large T and a high degree of cross-sectional dependence the CHEGY-IPS test dominates the BHEGY-IPS test.
    Keywords: Heterogeneous dynamic panels ; Monte Carlo ; seasonal unit roots ; cross sectional dependence
    JEL: C12 C15 C22 C23
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:784&r=ets
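A bootstrap for the panel test in item 3 must preserve cross-section dependence; one standard device, sketched below on placeholder data, is to resample time blocks jointly across all units (the HEGY regressions run on each bootstrap panel are omitted):

```python
# Moving-block bootstrap of a panel: whole rows (time periods) are resampled
# in blocks, so contemporaneous correlation across units survives the draw.
import numpy as np

rng = np.random.default_rng(2)
T, N, block = 120, 10, 12               # periods, units, block length (illustrative)
panel = rng.normal(size=(T, N))         # placeholder residual panel

def block_bootstrap(panel, block, rng):
    T = panel.shape[0]
    n_blocks = int(np.ceil(T / block))
    starts = rng.integers(0, T - block + 1, size=n_blocks)
    rows = np.concatenate([np.arange(s, s + block) for s in starts])[:T]
    return panel[rows]                  # rows drawn jointly across all units

boot_panel = block_bootstrap(panel, block, rng)
```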
  4. By: Xiujian Chen; Shu Lin; W. Robert Reed (University of Canterbury)
    Abstract: Panel data characterized by groupwise heteroscedasticity, cross-sectional correlation, and AR(1) serial correlation pose problems for econometric analyses. It is well known that the asymptotically efficient FGLS estimator (Parks) sometimes performs poorly in finite samples. In a widely cited paper, Beck and Katz (1995) claim that their estimator (PCSE) is able to produce more accurate coefficient standard errors without any loss in efficiency in “practical research situations.” This study disputes that claim. We find that the PCSE estimator is usually less efficient than Parks -- and substantially so -- except when the number of time periods is close to the number of cross-sections.
    Keywords: Panel data estimation; Monte Carlo analysis; FGLS; Parks; PCSE; finite sample
    JEL: C15 C23
    Date: 2006–11–03
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:06/14&r=ets
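A sketch of the sort of data-generating process behind the Monte Carlo in item 4; the Parks (FGLS) and PCSE estimators themselves are omitted, and the design values are illustrative rather than the paper's:

```python
# Panel errors with groupwise heteroscedasticity (unit-specific variances),
# cross-sectional correlation (common correlation 0.5) and AR(1) dynamics.
import numpy as np

rng = np.random.default_rng(3)
T, N, rho = 25, 10, 0.5
sd = np.linspace(0.5, 2.0, N)                   # groupwise heteroscedasticity
corr = 0.5 * np.ones((N, N)) + 0.5 * np.eye(N)  # unit diagonal, 0.5 off-diagonal
Sigma = np.outer(sd, sd) * corr
L = np.linalg.cholesky(Sigma)

e = np.zeros((T, N))
for t in range(1, T):
    e[t] = rho * e[t - 1] + L @ rng.normal(size=N)  # AR(1) error panel

x = rng.normal(size=(T, N))
y = 1.0 + 2.0 * x + e                           # true slope = 2
xd, yd = x - x.mean(), y - y.mean()
print(np.sum(xd * yd) / np.sum(xd * xd))        # pooled OLS slope for reference
```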
  5. By: Christine De Mol (Universite Libre de Bruxelles – ECARES, Av. F. D. Roosevelt, 50 – CP 114, 1050 Bruxelles, Belgium.); Domenico Giannone (Universite Libre de Bruxelles – ECARES, Av. F. D. Roosevelt, 50 – CP 114, 1050 Bruxelles, Belgium.); Lucrezia Reichlin (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper considers Bayesian regression with normal and double-exponential priors as forecasting methods based on large panels of time series. We show that, empirically, these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices. Moreover, we study the asymptotic properties of Bayesian regression under a Gaussian prior, under the assumption that the data are quasi-collinear, to establish a criterion for setting parameters in a large cross-section. JEL Classification: C11, C13, C33, C53.
    Keywords: Bayesian VAR, ridge regression, Lasso regression, principal components, large cross-sections.
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060700&r=ets
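The equivalence exploited in item 5 is that Bayesian regression under a Gaussian prior is ridge regression; a toy comparison on a one-factor simulated panel follows (the penalty lam is an arbitrary illustrative choice):

```python
# Ridge (Gaussian-prior) forecast versus a principal-component forecast from
# the same large panel; with strong factor structure the two correlate highly.
import numpy as np

rng = np.random.default_rng(4)
T, N = 200, 100
factor = rng.normal(size=T)
X = np.outer(factor, rng.normal(size=N)) + rng.normal(size=(T, N))
y = factor + 0.5 * rng.normal(size=T)

lam = 10.0                                        # illustrative prior scale
beta = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
yhat_ridge = X @ beta

_, _, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = X @ Vt[0]                                 # first principal component
yhat_pc = f_hat * (f_hat @ y) / (f_hat @ f_hat)

print(np.corrcoef(yhat_ridge, yhat_pc)[0, 1])     # typically close to 1 here
```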
  6. By: Nuno Cassola (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Claudio Morana (International Centre for Economic Research (ICER, Torino) and University of Piemonte Orientale, Faculty of Economics and Quantitative Methods, Via Perrone 18, 28100, Novara, Italy.)
    Abstract: This paper assesses the sources of volatility persistence in Euro Area money market interest rates and the existence of linkages relating volatility dynamics. The main findings of the study are as follows. Firstly, there is evidence of stationary long memory, of similar degree, in all series. Secondly, there is evidence of fractional cointegration relationships relating all series, except the overnight rate. Two common long memory factors are found to drive the temporal evolution of the volatility processes. The first factor shows how persistent volatility shocks are transmitted along the term structure, while the second factor points to excess persistent volatility at the longer end of the yield curve, relative to the shortest end. Finally, impulse response analysis and forecast error variance decomposition point to forward transmission of shocks only, involving the closest maturities. JEL Classification: C14, C63, E41.
    Keywords: Money market interest rates, liquidity effect, realized volatility, fractional integration and cointegration, fractional vector error correction model.
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060703&r=ets
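The long-memory parameter d behind the findings in item 6 can be estimated semiparametrically; below is a log-periodogram (Geweke/Porter-Hudak) sketch applied to a simulated ARFIMA(0,d,0) series rather than the euro money market volatilities:

```python
# GPH estimator: regress the log periodogram on log(4 sin^2(freq/2)) over the
# first m Fourier frequencies; the slope estimates -d.
import numpy as np

def gph_d(x, m=None):
    n = len(x)
    m = m or int(np.sqrt(n))
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = np.log(4 * np.sin(freqs / 2) ** 2)
    return -np.polyfit(reg, np.log(I), 1)[0]

rng = np.random.default_rng(5)
d_true, n = 0.3, 2000
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k   # MA weights of (1 - L)^(-d)
x = np.convolve(rng.normal(size=n), psi)[:n]     # ARFIMA(0, d, 0) sample
print(gph_d(x))                                  # noisy, but near 0.3
```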
  7. By: Katsumi Shimotsu (Department of Economics, Queen's University)
    Abstract: This paper proposes two simple tests that are based on certain time domain properties of I(d) processes. First, if a time series follows an I(d) process, then each subsample of the time series also follows an I(d) process with the same value of d. Second, if a time series follows an I(d) process, then its dth differenced series follows an I(0) process. Simple as they may sound, these properties provide useful tools to distinguish between true and spurious I(d) processes. In the first test, we split the sample into b subsamples, estimate d for each subsample, and compare them with the estimate of d from the full sample. In the second test, we estimate d, use the estimate to take the dth difference of the sample, and apply the KPSS test and Phillips-Perron test to the differenced data and its partial sum. Both tests are applicable to both stationary and nonstationary I(d) processes. Simulations show that the proposed tests have good power against the spurious long memory models considered in the literature. The tests are applied to the daily realized volatility of the S&P 500 index.
    Keywords: long memory, fractional integration, structural breaks, realized volatility
    JEL: C12 C13 C14 C22
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1101&r=ets
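Item 7's second test is straightforward to sketch: estimate d (for instance with the gph_d helper above), take the d-th fractional difference via the binomial expansion of (1-L)^d, and test the result for I(0); here the true d = 1 is used directly on a simulated random walk:

```python
# Fractionally difference the series by d, then run KPSS on the result; under
# a true I(d) process the differenced series should look I(0).
import numpy as np
from statsmodels.tsa.stattools import kpss

def frac_diff(x, d):
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k        # weights of (1 - L)^d
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=500))              # I(1) sample, so d = 1
u = frac_diff(x, 1.0)                            # should be close to white noise
stat, pvalue, *_ = kpss(u, regression="c", nlags="auto")
print(stat, pvalue)                              # I(0) should not be rejected
```

The first test in the abstract amounts to running an estimator such as gph_d on b subsamples and comparing the estimates with the full-sample value.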
  8. By: Chris Heaton (Department of Economics, Macquarie University); Victor Solo (University of New South Wales)
    Abstract: The use of principal component techniques to estimate approximate factor models with large cross-sectional dimension is now well established. However, recent work by Inklaar, Jacobs and Romp (2003) and Boivin and Ng (2005) has cast some doubt on the importance of a large cross-sectional dimension for the precision of the estimates. This paper presents some new theory for approximate factor model estimation. Consistency is proved and rates of convergence are derived under conditions that allow for a greater degree of cross-correlation in the model disturbances than previously published results. The rates of convergence depend on the rate at which the cross-sectional correlation of the model disturbances grows as the cross-sectional dimension grows. The consequences for applied economic analysis are discussed.
    Keywords: Factor analysis, time series models, principal components
    JEL: C13 C32 C43 C53
    Date: 2006–09
    URL: http://d.repec.org/n?u=RePEc:mac:wpaper:0605&r=ets
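The principal-component estimator studied in item 8 is a few lines of linear algebra; the sketch below checks, on simulated data with r = 2 factors, that the estimated factors span the true factor space:

```python
# With X a T x N panel, the factor estimates are sqrt(T) times the leading
# eigenvectors of XX'; loadings would follow by regressing X on the factors.
import numpy as np

rng = np.random.default_rng(7)
T, N, r = 100, 200, 2
F = rng.normal(size=(T, r))                     # true factors
Lam = rng.normal(size=(N, r))                   # true loadings
X = F @ Lam.T + rng.normal(size=(T, N))         # idiosyncratic disturbances

_, eigvec = np.linalg.eigh(X @ X.T)             # eigenvalues in ascending order
F_hat = np.sqrt(T) * eigvec[:, -r:][:, ::-1]    # top-r eigenvectors, rescaled

proj = F_hat @ np.linalg.lstsq(F_hat, F, rcond=None)[0]
print(np.linalg.norm(F - proj) / np.linalg.norm(F))   # small when N, T are large
```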
  9. By: Lux, Thomas; Kaizoji, Taisei
    Abstract: We investigate the predictability of both volatility and volume for a large sample of Japanese stocks. The particular emphasis of this paper is on assessing the performance of long memory time series models in comparison to their short-memory counterparts. Since long memory models should have a particular advantage over long forecasting horizons, we consider predictions of up to 100 days ahead. In most respects, the long memory models (ARFIMA, FIGARCH and the recently introduced multifractal model) dominate GARCH and ARMA models. However, while FIGARCH and ARFIMA also produce quite a number of cases with dramatic forecast failures, the multifractal model does not suffer from this shortcoming and its performance practically always improves upon the naïve forecast provided by historical volatility. As a somewhat surprising result, we also find that, for FIGARCH and ARFIMA models, pooled estimates (i.e. averages of parameter estimates from a sample of time series) give much better results than individually estimated models.
    Keywords: forecasting, long memory models, volume, volatility
    JEL: C22 C53 G12
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:5160&r=ets
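The pooled-estimate result in item 9 is easy to picture for GARCH: average the per-series parameter estimates, then iterate the variance recursion forward. The parameter values below are hypothetical pooled averages, not estimates from the Tokyo sample, and the estimation step is omitted:

```python
# Multi-step GARCH(1,1) variance forecasts under fixed (pooled) parameters:
# the forecast reverts to omega / (1 - alpha - beta) at rate (alpha + beta).
import numpy as np

omega, alpha, beta = 0.05, 0.08, 0.90      # hypothetical pooled estimates
r_last, sig2_last = 1.2, 1.0               # last return and variance of one series

def garch_forecast(h, omega, alpha, beta, r_last, sig2_last):
    sig2 = omega + alpha * r_last**2 + beta * sig2_last   # one step ahead
    out = [sig2]
    for _ in range(h - 1):
        out.append(omega + (alpha + beta) * out[-1])      # E[r^2] = sigma^2
    return np.array(out)

print(garch_forecast(100, omega, alpha, beta, r_last, sig2_last)[-1])  # 100 days out
```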
  10. By: Lux, Thomas
    Abstract: Multifractal processes have recently been proposed as a new formalism for modelling the time series of returns in finance. The major attraction of these processes is their ability to generate various degrees of long memory in different powers of returns - a feature that has been found in virtually all financial data. Initial difficulties stemming from non-stationarity and the combinatorial nature of the original model have been overcome by the introduction of an iterative Markov-switching multifractal model in Calvet and Fisher (2001) which allows for estimation of its parameters via maximum likelihood and Bayesian forecasting of volatility. However, applicability of MLE is restricted to cases with a discrete distribution of volatility components. From a practical point of view, ML also becomes computationally unfeasible for large numbers of components even if they are drawn from a discrete distribution. Here we propose an alternative GMM estimator together with linear forecasts which in principle is applicable for any continuous distribution with any number of volatility components. Monte Carlo studies show that GMM performs reasonably well for the popular Binomial and Lognormal models and that the loss incurred with linear compared to optimal forecasts is small. Extending the number of volatility components beyond what is feasible with MLE leads to gains in forecasting accuracy for some time series.
    Keywords: Markov-switching, multifractal, forecasting, volatility, GMM estimation
    JEL: C20 G12
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:5164&r=ets
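A simulator for the binomial MSM model that item 10's GMM estimator targets, using the Calvet-Fisher transition frequencies; all parameter values are illustrative:

```python
# k volatility components, each switching between m0 and 2 - m0 (mean one)
# with component-specific frequencies; returns scale with the product of the
# components. GMM would match moments of log absolute returns.
import numpy as np

rng = np.random.default_rng(8)
k, m0, b, gamma_k = 8, 1.4, 2.0, 0.5       # hypothetical parameter values
T, sigma = 2000, 1.0

# switching probabilities, slowest (i = 1) to fastest (i = k) component
gamma = 1 - (1 - gamma_k) ** (b ** (np.arange(1, k + 1) - k))

M = rng.choice([m0, 2 - m0], size=k)       # initial multipliers
r = np.empty(T)
for t in range(T):
    switch = rng.random(k) < gamma         # components that renew this period
    M[switch] = rng.choice([m0, 2 - m0], size=int(switch.sum()))
    r[t] = sigma * np.sqrt(M.prod()) * rng.normal()

print(np.mean(np.abs(r)))                  # one of many sample moments GMM could use
```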
  11. By: Chen, Willa; Deo, Rohit
    Abstract: The distribution of the t-statistic in autoregressive (AR) processes is discontinuous near the unit root, causing problems for interval estimation. We show that the likelihood ratio test (RLRT) based on the restricted likelihood circumvents this problem. Chen and Deo (2006) show that irrespective of the AR coefficient, the error in the chi-square approximation to the RLRT distribution for stationary AR(1) processes is -0.5n^{-1}(G_3(.) - G_1(.)) + O(n^{-2}), where G_s is the c.d.f. of a chi-square distribution with s degrees of freedom. In this paper, the non-standard asymptotic distribution of the RLRT for the unit root boundary value is obtained and shown to be almost identical to that of the chi-square in the right tail. Together, the above two results imply that the chi-square distribution approximates the RLRT distribution very well even for nearly integrated series and transitions smoothly to the unit root distribution. The chi-square based confidence intervals obtained by inverting the RLRT thus have almost correct coverage near the unit root and have width shrinking to zero with increasing sample size. Related work by Francke and de Vos (2006) suggests the RLRT intervals may also be close to uniformly most accurate invariant. A simulation study supports the theory presented in the paper.
    Keywords: Curvature; boundary value; trend stationary; restricted likelihood; confidence interval
    JEL: C22 C12
    Date: 2006–12–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:1215&r=ets
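The interval-by-inversion idea in item 11 is sketched below; for brevity the ordinary exact Gaussian AR(1) likelihood (innovation variance concentrated out) stands in for the restricted likelihood the paper actually uses:

```python
# Invert the likelihood-ratio test over a grid of AR coefficients, keeping
# every rho whose LRT statistic is below the chi-square(1) critical value.
import numpy as np
from scipy.stats import chi2

def ar1_loglik(x, rho):
    n = len(x)
    ss = (1 - rho**2) * x[0] ** 2 + np.sum((x[1:] - rho * x[:-1]) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * ss / n) + 1) + 0.5 * np.log(1 - rho**2)

rng = np.random.default_rng(9)
x = np.empty(500)
x[0] = rng.normal() / np.sqrt(1 - 0.95**2)     # stationary start
for t in range(1, 500):
    x[t] = 0.95 * x[t - 1] + rng.normal()      # nearly integrated AR(1)

grid = np.linspace(-0.99, 0.99, 1981)
ll = np.array([ar1_loglik(x, r) for r in grid])
lrt = 2 * (ll.max() - ll)
ci = grid[lrt <= chi2.ppf(0.95, df=1)]         # 95% interval by test inversion
print(ci.min(), ci.max())
```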
  12. By: Andreas S. Andreou (University of Cyprus); George A. Zombanakis (Bank of Greece)
    Abstract: This paper applies computational intelligence methods to exchange rate forecasting. In particular, it employs neural network methodology in order to predict developments of the Euro exchange rate versus the U.S. Dollar and the Japanese Yen. Following a study of our series using traditional as well as specialized non-parametric methods, together with Monte Carlo simulations, we employ selected Neural Networks (NNs) trained to forecast rate fluctuations. Despite the fact that the data series have been shown by the Rescaled Range Statistic (R/S) analysis to exhibit random behaviour, their internal dynamics have been successfully captured by certain NN topologies, thus yielding accurate predictions of the two exchange-rate series.
    Keywords: Exchange-rate forecasting, Neural networks
    JEL: C53
    Date: 2006–11
    URL: http://d.repec.org/n?u=RePEc:bog:wpaper:49&r=ets
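The R/S analysis mentioned in item 12 is simple to reproduce; the sketch below estimates the Hurst exponent of a simulated series (a value near 0.5 indicating the "random behaviour" the abstract refers to):

```python
# Average the rescaled range over non-overlapping windows of several sizes;
# the slope of log(R/S) against log(window size) estimates the Hurst exponent.
import numpy as np

def rs(x):
    dev = np.cumsum(x - x.mean())              # cumulative deviations from mean
    return (dev.max() - dev.min()) / x.std()   # range over standard deviation

rng = np.random.default_rng(10)
x = rng.normal(size=4096)                      # placeholder return series

sizes = [64, 128, 256, 512, 1024]
avg_rs = [np.mean([rs(x[i:i + n]) for i in range(0, len(x) - n + 1, n)])
          for n in sizes]
print(np.polyfit(np.log(sizes), np.log(avg_rs), 1)[0])  # ~0.5 for white noise
```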
  13. By: Zsolt Darvas (Corvinus University of Budapest); Gábor Vadas (Magyar Nemzeti Bank)
    Abstract: Decomposing output into trend and cyclical components is an uncertain exercise and depends on the method applied. It is an especially dubious task for countries undergoing large structural changes, such as transition countries. Despite their deficiencies, univariate detrending methods are frequently adopted in both policy-oriented and academic research. This paper proposes a new procedure for combining univariate detrending techniques which is based on the revisions of the estimated output gaps, adjusted by the variance of and the correlation among the output gaps. The procedure is applied to the study of the similarity of business cycles between the euro area and the new EU Member States.
    Keywords: combination, detrending, new EU members, OCA, output gap, revision
    JEL: C22 E32
    Date: 2005–08–15
    URL: http://d.repec.org/n?u=RePEc:mkg:wpaper:0505&r=ets
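A toy version of the combination exercise in item 13: two univariate output gaps (HP filter and linear trend) are combined with inverse-variance weights, a crude stand-in for the paper's revision-based weighting:

```python
# Combine HP-filter and linear-trend output gaps of a placeholder log-output
# series; the weighting rule here is illustrative, not the authors' procedure.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(11)
t = np.arange(80)
y = 0.02 * t + np.cumsum(rng.normal(0, 0.01, 80))   # placeholder log output

gap_hp, _ = hpfilter(y, lamb=1600)                  # HP cycle (quarterly lambda)
gap_lin = y - np.polyval(np.polyfit(t, y, 1), t)    # linear-trend gap

gaps = np.column_stack([gap_hp, gap_lin])
w = 1 / gaps.var(axis=0)
w /= w.sum()                                        # inverse-variance weights
gap_combined = gaps @ w
print(w, gap_combined[-4:])
```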

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.