
on Econometric Time Series 
By:  Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (University of Sydney) 
Abstract:  An important issue in modelling economic time series is whether key unobserved components representing trends, seasonality and calendar effects are deterministic or evolutive. We address this question by applying a recently proposed Bayesian variable selection methodology to an encompassing linear mixed model that features, along with deterministic effects, additional random explanatory variables that account for the evolution of the underlying level, slope, seasonality and trading days. Variable selection is performed by estimating the posterior model probabilities using a suitable Gibbs sampling scheme. The paper conducts an extensive empirical application on a large and representative set of monthly time series concerning industrial production and retail turnover. We find strong support for the presence of stochastic trends in the series, either in the form of a time-varying level, or, less frequently, of a stochastic slope, or both. Seasonality is a more stable component: in only 70% of the cases were we able to select at least one stochastic trigonometric cycle out of the six possible cycles. Most frequently, the time variation is found in correspondence with the fundamental and the first harmonic cycles. An interesting and intuitively plausible finding is that the probability of estimating time-varying components increases with the sample size available. However, even for very large sample sizes we were unable to find stochastically varying calendar effects. 
Keywords:  Bayesian model selection, stationarity, unit roots, stochastic trends, variable selection. 
JEL:  E32 E37 C53 
Date:  2011–09–02 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201130&r=ets 
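The Gibbs-sampling variable selection the abstract describes can be illustrated with a toy spike-and-slab sampler over inclusion indicators. This is a minimal sketch, not the paper's encompassing mixed model: the two-regressor design, slab variance, noise variance and prior inclusion probability below are all hypothetical settings chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y depends on x0 only; x1 is irrelevant.
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Hypothetical prior settings: slab variance, noise variance,
# prior inclusion probability.
slab_var, sigma2, p_incl = 10.0, 0.25, 0.5
draws = 2000
gamma = np.ones(2, dtype=int)   # inclusion indicators
counts = np.zeros(2)

def log_marginal(active):
    """Log marginal likelihood of y given the active column set
    (constants common to all models dropped)."""
    if not active.any():
        return -0.5 * y @ y / sigma2
    Xa = X[:, active]
    V = Xa.T @ Xa / sigma2 + np.eye(active.sum()) / slab_var
    b = Xa.T @ y / sigma2
    mu = np.linalg.solve(V, b)
    _, logdet = np.linalg.slogdet(V)
    return (0.5 * (b @ mu) - 0.5 * logdet
            - 0.5 * active.sum() * np.log(slab_var) - 0.5 * y @ y / sigma2)

# Gibbs sweep over the indicators: each gamma_j is drawn from its
# full conditional given the other indicators.
for _ in range(draws):
    for j in range(2):
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = 1, 0
        l1 = log_marginal(g1.astype(bool)) + np.log(p_incl)
        l0 = log_marginal(g0.astype(bool)) + np.log(1 - p_incl)
        d = np.clip(l0 - l1, -700.0, 700.0)
        gamma[j] = rng.random() < 1.0 / (1.0 + np.exp(d))
    counts += gamma

post = counts / draws  # posterior inclusion probabilities
```

With the strong signal on `x0`, its posterior inclusion probability is essentially one, while the irrelevant `x1` is rarely selected; posterior model probabilities follow from the joint frequencies of the indicator draws.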
By:  Yushu Li (Centre for Labour Market Policy Research (CAFO), Department of Economics and Statistics) 
Abstract:  Detecting turning points in unimodal time series has various applications to series with cyclic periods. Related techniques are widely explored in the field of statistical surveillance, that is, online turning point detection procedures. This paper first presents a power-controlled turning point detection method based on the theory of the likelihood ratio test in statistical surveillance. Next, we show how outliers influence the performance of this methodology. Because the surveillance system is sensitive to outliers, we finally present a wavelet multiresolution analysis (MRA) based outlier elimination approach, which can be combined with the online turning point detection process, thereby alleviating the false alarm problem introduced by the outliers. 
Keywords:  Unimodal, Turning point, Statistical surveillance, Outlier, Wavelet multiresolution, Threshold. 
JEL:  C12 C15 C22 
Date:  2011–07–15 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201129&r=ets 
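The wavelet-based outlier elimination step can be sketched with a one-level Haar detail transform and a universal threshold; a full multiresolution analysis would iterate this over several scales. The series, outlier magnitudes and interpolation rule below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy cyclic series with two injected additive outliers.
t = np.arange(256)
x = np.sin(2 * np.pi * t / 64) + rng.normal(scale=0.1, size=256)
x[50] += 5.0
x[180] -= 5.0

# One-level Haar detail coefficients (a minimal stand-in for a full
# multiresolution analysis): each coefficient covers one sample pair.
d = (x[0::2] - x[1::2]) / np.sqrt(2)

# Universal threshold with a robust MAD noise estimate; isolated
# outliers produce detail coefficients far above it.
sigma = np.median(np.abs(d - np.median(d))) / 0.6745
thr = sigma * np.sqrt(2 * np.log(d.size))
flagged_pairs = np.where(np.abs(d) > thr)[0]

# Replace each flagged sample pair by local linear interpolation
# from its undamaged neighbours.
clean = x.copy()
for p in flagged_pairs:
    i, j = 2 * p, 2 * p + 1
    lo, hi = max(i - 1, 0), min(j + 1, x.size - 1)
    clean[i] = clean[j] = 0.5 * (x[lo] + x[hi])
```

Feeding `clean` rather than `x` to an online surveillance scheme is the sense in which outlier elimination reduces false alarms: the spikes that would otherwise trigger the likelihood ratio statistic are removed before monitoring.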
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting during the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other. 
Keywords:  Autometrics, economic forecasting, Marginal Bridge estimator, neural network, nonlinear time series model, Wilcoxon's signed-rank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201128&r=ets 
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally identified. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. He called this procedure QuickNet, and we shall compare its performance to two other procedures which are built on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where this is not done. 
Keywords:  artificial neural network, forecast comparison, model selection, nonlinear autoregressive model, nonlinear time series, root mean square forecast error, Wilcoxon's signed-rank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201127&r=ets 
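White's linearisation idea, as discussed in the two abstracts above, can be sketched in a few lines: draw candidate hidden units with random input weights, so that estimating the network reduces to selecting among fixed regressors. The data-generating process, number of candidates and greedy selection rule here are illustrative assumptions, not QuickNet's exact algorithm or the papers' setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonlinear (ESTAR-type) AR(1) data, a hypothetical test process.
n = 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = (0.3 * y[t - 1] + 0.6 * y[t - 1] * np.exp(-y[t - 1] ** 2)
            + 0.2 * rng.normal())

Y, lag = y[1:], y[:-1].reshape(-1, 1)

# Candidate hidden units with randomly drawn weights: estimation
# becomes *selection* among these fixed nonlinear regressors.
K = 30
W = rng.normal(size=(1, K)) * 3
b = rng.normal(size=K)
H = np.tanh(lag @ W + b)          # candidate hidden-unit outputs
X = np.column_stack([lag, H])     # linear lag plus candidates

# Greedy forward selection by in-sample mean squared error.
selected = [0]                    # always keep the linear lag
beta0 = np.linalg.lstsq(X[:, selected], Y, rcond=None)[0]
linear_mse = np.mean((Y - X[:, selected] @ beta0) ** 2)
for _ in range(3):                # add up to three hidden units
    scores = []
    for j in range(X.shape[1]):
        if j in selected:
            scores.append(np.inf)
            continue
        cols = selected + [j]
        beta = np.linalg.lstsq(X[:, cols], Y, rcond=None)[0]
        scores.append(np.mean((Y - X[:, cols] @ beta) ** 2))
    selected.append(int(np.argmin(scores)))

beta = np.linalg.lstsq(X[:, selected], Y, rcond=None)[0]
fit_mse = np.mean((Y - X[:, selected] @ beta) ** 2)
```

In the papers, this selection step is carried out by QuickNet, Autometrics or the Marginal Bridge Estimator rather than the naive greedy search shown here, and the comparison is made on out-of-sample root mean square forecast error rather than in-sample fit.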
By:  Oleg Sokolinskiy (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) 
Abstract:  This paper develops a novel approach to modeling and forecasting realized volatility (RV) measures based on copula functions. Copula-based time series models can capture relevant characteristics of volatility such as nonlinear dynamics and long-memory type behavior in a flexible yet parsimonious way. In an empirical application to daily volatility for S&P 500 index futures, we find that the copula-based RV (CRV) model outperforms conventional forecasting approaches for one-day-ahead volatility forecasts in terms of accuracy and efficiency. Among the copula specifications considered, the Gumbel CRV model achieves the best forecast performance, which highlights the importance of asymmetry and upper tail dependence for modeling volatility dynamics. Although we find substantial variation in the copula parameter estimates over time, conditional copulas do not improve the accuracy of volatility forecasts. 
Keywords:  Nonlinear dependence; long memory; copulas; volatility forecasting 
JEL:  C22 C53 
Date:  2011–09–05 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110125&r=ets 
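The role of upper tail dependence highlighted in the abstract can be made concrete with the standard Gumbel copula formulas (these are textbook expressions, not the CRV model itself): the copula CDF and its upper tail dependence coefficient, which is zero at the independence case θ = 1 and grows with θ.

```python
import math

def gumbel_cdf(u, v, theta):
    """Gumbel copula CDF:
    C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1.
    At theta = 1 this reduces to the independence copula u * v."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1 / theta)))

def upper_tail_dependence(theta):
    """Upper tail dependence coefficient of the Gumbel copula:
    lambda_U = 2 - 2^(1/theta)."""
    return 2 - 2 ** (1 / theta)
```

For example, θ = 2 gives λ_U = 2 − √2 ≈ 0.586: large realized volatilities on consecutive days remain dependent even far in the upper tail, which is exactly the feature a symmetric copula without tail dependence would miss.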
By:  Eo, Yunjong; Morley, James 
Abstract:  We propose a new approach to constructing confidence sets for the timing of structural breaks. The confidence sets are based on marginal "fiducial" distributions of break dates, which are simulated from the likelihood function via Markov chain Monte Carlo methods. We show that the confidence sets can be related to a sequence of average exponential likelihood ratio tests and are, therefore, admissible. Using Monte Carlo analysis, we compare the finite-sample performance of our proposed approach to standard asymptotic and bootstrap confidence sets for break dates given structural breaks of the kind hypothesized for macroeconomic data. The confidence sets based on marginal fiducial distributions perform best overall in terms of short length and accurate coverage. For our application, we investigate the nature and timing of structural breaks in postwar U.S. real GDP. Based on our proposed approach, we find much tighter 95% confidence sets for the timing of the so-called "Great Moderation" than have been previously reported. 
Keywords:  Markov chain Monte Carlo; Coverage Accuracy and Expected Length; Confidence Intervals and Sets; Structural Breaks; Bootstrap Methods; Fiducial Inference 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:syd:wpaper:2123/7761&r=ets 
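The idea of a distribution over break dates derived from the likelihood can be sketched in a simplified form. The paper simulates marginal fiducial distributions via MCMC; the stand-in below instead normalizes the profile likelihood of a single mean-shift break date directly (known unit variance, hypothetical data), then takes the smallest set of dates reaching 95% mass.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean-shift series with a single break at t = 60 (illustrative data).
n, tau0 = 100, 60
y = np.concatenate([rng.normal(0.0, 1.0, tau0),
                    rng.normal(1.5, 1.0, n - tau0)])

# Profile log-likelihood of the break date with known unit variance:
# at each candidate date, plug in the two segment means.
cands = np.arange(5, n - 5)
loglik = np.empty(cands.size)
for k, tau in enumerate(cands):
    a, b = y[:tau], y[tau:]
    rss = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    loglik[k] = -0.5 * rss

# Normalize to a distribution over break dates, then keep the
# highest-density dates until 95% of the mass is covered.
w = np.exp(loglik - loglik.max())
w /= w.sum()
order = np.argsort(w)[::-1]
keep = order[: np.searchsorted(np.cumsum(w[order]), 0.95) + 1]
conf_set = np.sort(cands[keep])
```

The resulting `conf_set` is typically much shorter than the full candidate range, mirroring the paper's finding that likelihood-based sets for break dates can be tight; the MCMC machinery in the paper handles the harder multi-break, unknown-variance case.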
By:  Andreea Halunga; Chris D. Orme; Takashi Yamagata 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1118&r=ets 
By:  Antonio E. Noriega; Daniel Ventosa-Santaulària 
Abstract:  It has been found that the t-statistic for testing the null of no relationship between two independent variables diverges asymptotically under a wide variety of nonstationary data generating processes. This paper introduces a simple method which guarantees convergence of this t-statistic to a pivotal limit distribution, when there are drifts in the integrated processes generating the data, thus allowing asymptotic inference. This method can be used to distinguish a genuine relationship from a spurious one among integrated (I(1) and I(2)) processes. Simulation experiments show that the test has good properties in small samples. When applying the proposed procedure to real data (including the marriages and mortality data of Yule), we do not find (spurious) significant relationships between the variables. 
Keywords:  Spurious Regression, Integrated Process, Detrending, Asymptotic Theory, Cointegration, Monte Carlo Experiments. 
JEL:  C12 C15 C22 C46 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:bdm:wpaper:201105&r=ets 
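The divergence of the spurious-regression t-statistic is easy to reproduce by simulation. The sketch below regresses one independent random walk with drift on another and shows the average absolute t-statistic growing with the sample size; it illustrates only the problem, not the paper's proposed correction.

```python
import numpy as np

rng = np.random.default_rng(4)

def t_stat(n):
    """OLS slope t-statistic from regressing one independent random
    walk with drift 0.1 on another (intercept included)."""
    x = np.cumsum(0.1 + rng.normal(size=n))
    y = np.cumsum(0.1 + rng.normal(size=n))
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# Average |t| over 50 replications at two sample sizes: although the
# two series are independent by construction, |t| grows with n
# instead of settling into a pivotal distribution.
small = np.mean([abs(t_stat(100)) for _ in range(50)])
large = np.mean([abs(t_stat(2000)) for _ in range(50)])
```

Any fixed critical value is therefore eventually exceeded, which is why naive t-tests between trending series "find" relationships that do not exist; the paper's contribution is a transformation under which the statistic instead converges to a pivotal limit.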
By:  Peter Fuleky (UHERO, and Department of Economics, University of Hawaii at Manoa) 
Abstract:  When estimating the parameters of a process, researchers can choose the reference unit of time (unit period) for their study. Frequently, they set the unit period equal to the observation interval. However, I show that decoupling the unit period from the observation interval facilitates the comparison of parameter estimates across studies with different data sampling frequencies. If the unit period is standardized (for example annualized) across these studies, then the parameters will represent the same attributes of the underlying process, and their interpretation will be independent of the sampling frequency. 
Keywords:  Unit Period, Sampling Frequency, Bias, Time Series. 
JEL:  C13 C22 C51 C82 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:hae:wpaper:20114&r=ets 
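The point about standardizing the unit period can be illustrated with an AR(1) process (the numbers below are hypothetical, not from the paper): per-observation persistence coefficients estimated at different sampling frequencies look different, but annualized they describe the same underlying process.

```python
def annualized_persistence(rho, periods_per_year):
    """Convert a per-observation AR(1) coefficient to a per-year
    coefficient: the unit period becomes one year regardless of the
    sampling frequency."""
    return rho ** periods_per_year

# The same underlying process observed at two frequencies
# (hypothetical values): per-observation coefficients differ...
rho_monthly = 0.95
rho_quarterly = 0.95 ** 3   # one quarter = three monthly steps

# ...but the annualized coefficients coincide.
annual_m = annualized_persistence(rho_monthly, 12)
annual_q = annualized_persistence(rho_quarterly, 4)
```

Here 0.95 per month and 0.857 per quarter both annualize to about 0.54 per year, so studies sampling the same process at different frequencies become directly comparable once the unit period is standardized.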