
on Econometric Time Series 
By:  Gary Koop; Simon Potter 
Abstract:  Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible parametric model that accommodates virtually any of these specifications, and does so in a simple way that allows for straightforward Bayesian inference. The basic idea underlying our model is that it adds two concepts to a standard state space framework. These ideas are ordering and distance. By ordering the data in different ways, we can accommodate a wide range of nonlinear time series models. By allowing the state equation variances to depend on the distance between observations, the parameters can evolve in a wide variety of ways, allowing for models that exhibit abrupt change as well as those that permit a gradual evolution of parameters. We show how our model will (approximately) nest almost every popular model in the regime-switching and structural break literatures. Bayesian econometric methods for inference in this model are developed. Because we stay within a state space framework, these methods are relatively straightforward and draw on the existing literature. We use artificial data to show the advantages of our approach and then provide two empirical illustrations involving the modeling of real GDP growth. 
Keywords:  Time-series analysis ; Econometric models ; Economic forecasting 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:285&r=ets 
By:  Ralph D. Snyder; Gael M. Martin; Phillip Gould; Paul D. Feigin 
Abstract:  This paper compares two alternative models for autocorrelated count time series. The first model can be viewed as a 'single source of error' discrete state space model, in which a time-varying parameter is specified as a function of lagged counts, with no additional source of error introduced. The second model is the more conventional 'dual source of error' discrete state space model, in which the time-varying parameter is driven by a random autocorrelated process. Using the nomenclature of the literature, the two representations can be viewed as observation-driven and parameter-driven respectively, with the distinction between the two models mimicking that between analogous models for other non-Gaussian data such as financial returns and trade durations. The paper demonstrates that when adopting a conditional Poisson specification, the two models have vastly different dispersion/correlation properties, with the dual source model having properties that are a much closer match to the empirical properties of observed count series than are those of the single source model. Simulation experiments are used to measure the finite sample performance of maximum likelihood (ML) estimators of the parameters of each model, and ML-based predictors, with ML estimation implemented for the dual source model via a deterministic hidden Markov chain approach. Most notably, the numerical results indicate that despite the very different properties of the two models, predictive accuracy is reasonably robust to misspecification of the state space form. 
Keywords:  Discrete state-space model; single source of error model; hidden Markov 
JEL:  C13 C22 C46 C53 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20074&r=ets 
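The contrast between the two representations can be illustrated with a toy conditional-Poisson simulation (a hedged sketch: the intensity recursions and parameter values below are illustrative choices, not the paper's specifications):

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's method: multiply uniforms until the product falls below exp(-lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_single_source(T, rng, omega=1.0, alpha=0.5):
    # observation-driven: the intensity is a function of the lagged count,
    # with no additional error source
    y, lam = [], 1.0
    for _ in range(T):
        yt = poisson_draw(rng, lam)
        y.append(yt)
        lam = omega + alpha * yt
    return y

def simulate_dual_source(T, rng, phi=0.9, sigma=0.3):
    # parameter-driven: the log-intensity follows its own Gaussian AR(1)
    y, x = [], 0.0
    for _ in range(T):
        x = phi * x + sigma * rng.gauss(0.0, 1.0)
        y.append(poisson_draw(rng, math.exp(x)))
    return y

def dispersion(y):
    # variance/mean ratio; 1 for an i.i.d. Poisson sample
    m = sum(y) / len(y)
    v = sum((yi - m) ** 2 for yi in y) / (len(y) - 1)
    return v / m
```

Both toy processes are overdispersed relative to an i.i.d. Poisson, but their dispersion and autocorrelation profiles differ, which is the property contrast the paper studies in detail.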
By:  Alfred A Haug; Christie Smith (Reserve Bank of New Zealand) 
Abstract:  Traditional vector autoregressions derive impulse responses using iterative techniques that may compound specification errors. Local projection techniques are robust to this problem, and Monte Carlo evidence suggests they provide reliable estimates of the true impulse responses. We use local linear projections to investigate the dynamic properties of a model for a small open economy, New Zealand. We compare impulse responses from local projections to those from standard techniques, and consider the implications for monetary policy. We pay careful attention to the dimensionality of the model, and focus on the effects of policy on GDP, interest rates, prices and the exchange rate. 
JEL:  C51 E52 F41 
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:nzb:nzbdps:2007/09&r=ets 
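The local projection idea can be sketched in a few lines: the impulse response at each horizon h is read off a separate direct regression of the future value on the current one, rather than obtained by iterating a fitted model (an illustrative univariate AR(1) sketch; the paper works with a multivariate open-economy system):

```python
import random

def ols_slope(x, y):
    # bivariate OLS slope (with intercept), computed in closed form
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def local_projection_irf(y, horizons):
    # one regression per horizon: y_{t+h} on y_t; the slope is the
    # impulse response at h, with no iteration of a fitted model
    irf = []
    for h in horizons:
        irf.append(ols_slope(y[:len(y) - h], y[h:]))
    return irf

# simulated AR(1) with phi = 0.8, so the true IRF is 0.8^h
rng = random.Random(0)
phi = 0.8
y = [0.0]
for _ in range(4999):
    y.append(phi * y[-1] + rng.gauss(0.0, 1.0))

irf = local_projection_irf(y, [1, 2, 3])
```

Because each horizon has its own regression, a misspecified one-step model does not contaminate longer horizons by compounding, which is the robustness property the abstract refers to.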
By:  Robin G. de Vilder; Marcel P. Visser 
Abstract:  High frequency data are often used to construct proxies for the daily volatility in discrete time volatility models. This paper introduces a calculus for such proxies, making it possible to compare and optimize them. The two distinguishing features of the approach are (1) a simple continuous time extension of discrete time volatility models and (2) an abstract definition of volatility proxy. The theory is applied to eighteen years' worth of S&P 500 index data. It is used to construct a proxy that outperforms realized volatility. 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:pse:psecon:200711&r=ets 
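The idea of comparing volatility proxies can be illustrated by contrasting the squared daily return with realized variance built from intraday returns (a toy sketch under a constant-volatility assumption; the paper's proxy calculus is far more general):

```python
import random

def simulate_day(rng, sigma, n_intraday):
    # n_intraday i.i.d. returns whose variances sum to the daily variance sigma^2
    step = sigma / n_intraday ** 0.5
    return [rng.gauss(0.0, step) for _ in range(n_intraday)]

def proxies(rng, n_days=500, sigma=0.01, n_intraday=78):
    sq_day, rv = [], []
    for _ in range(n_days):
        r = simulate_day(rng, sigma, n_intraday)
        sq_day.append(sum(r) ** 2)             # noisy proxy: squared daily return
        rv.append(sum(ri ** 2 for ri in r))    # realized variance proxy
    return sq_day, rv

def mse(proxy, target):
    # mean squared error of a proxy for the (here known) daily variance
    return sum((p - target) ** 2 for p in proxy) / len(proxy)

rng = random.Random(1)
sq_day, rv = proxies(rng)
true_var = 0.01 ** 2
```

Both proxies are unbiased for the daily variance, but realized variance averages away intraday noise and so has a far smaller measurement error, which is the kind of ranking the paper's calculus formalizes and optimizes.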
By:  Christian Gillitzer (Reserve Bank of Australia); Jonathan Kearns (Reserve Bank of Australia) 
Abstract:  This paper demonstrates that factor-based forecasts for key Australian macroeconomic series can outperform standard time-series benchmarks. In practice, however, the advantages of using large panels of data to construct the factors typically come at the cost of using less timely series, thereby delaying when the forecasts can be made. To produce more timely forecasts it is possible to use a narrower data panel, though this will possibly result in less accurate factor estimates and so less accurate forecasts. We demonstrate this trade-off between accuracy and timeliness with out-of-sample forecasts. With the sole exception of consumer price inflation, the forecasts do not become less accurate as they utilise less information by excluding less timely series. So while factor forecasts have large data requirements, we show that these should not prevent their practical use when timely forecasts are needed. 
Keywords:  forecasting; factor models; Australia 
JEL:  C53 E27 E37 
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:rba:rbardp:rdp200703&r=ets 
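The timeliness/accuracy trade-off can be sketched with a toy factor panel, using a cross-sectional average as a crude stand-in for the first principal component (an illustrative simplification with made-up parameters; the paper estimates factors properly from large data panels):

```python
import random

def make_panel(rng, T=300, N=40, phi=0.8):
    # panel x_it = loading_i * f_t + idiosyncratic noise, f_t an AR(1) factor
    f = [0.0]
    for _ in range(T - 1):
        f.append(phi * f[-1] + rng.gauss(0.0, 1.0))
    loadings = [0.5 + rng.random() for _ in range(N)]
    panel = [[loadings[i] * f[t] + rng.gauss(0.0, 1.0) for i in range(N)]
             for t in range(T)]
    return f, panel

def avg_factor(panel, n_series):
    # crude factor estimate: cross-sectional average of the first n_series
    return [sum(row[:n_series]) / n_series for row in panel]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((x - mb) ** 2 for x in b) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

rng = random.Random(7)
f, panel = make_panel(rng)
wide = avg_factor(panel, 40)    # full panel: accurate but hostage to slow series
narrow = avg_factor(panel, 5)   # timely subset: available sooner, noisier
```

Averaging more series cancels more idiosyncratic noise, so the wide-panel estimate tracks the true factor more closely; the paper's empirical point is that for most series this accuracy loss from the timely subset is small.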
By:  Francesco Audrino; Peter Bühlmann 
Abstract:  We propose a flexible GARCH-type model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. there are many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison to other approaches, and we present some supporting asymptotic arguments. 
Keywords:  Boosting, B-splines, Conditional variance, Financial time series, GARCH model, Volatility 
JEL:  C13 C14 C22 C51 C53 C63 
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2007:200711&r=ets 
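For reference, the conditional-variance recursion of the standard GARCH(1,1) that such flexible B-spline models generalize is (a minimal sketch; the parameter values in the example are arbitrary):

```python
def garch11_variance(returns, omega, alpha, beta):
    # standard GARCH(1,1) recursion: h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    # initialized at the unconditional variance omega / (1 - alpha - beta)
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

The flexible model replaces this fixed parametric map from (r_{t-1}, h_{t-1}) to h_t with a B-spline basis expansion fitted by boosting, so the recursion above is the special case it nests.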
By:  Renee Fry; Adrian Pagan 
Abstract:  The paper looks at estimation of structural VARs with sign restrictions. Since sign restrictions do not generate a unique model, it is necessary to find some way of summarizing the information they yield. Existing methods present impulse responses from different models, and it is argued that they should come from a common model; if this is not done, the shocks implicit in the impulse responses will not be orthogonal. A method is described that tries to resolve this difficulty. It works with a common model whose impulse responses are as close as possible to the median values of the impulse responses (taken over the range of models satisfying the sign restrictions). Using a simple demand and supply model, it is shown that there is no reason to think that sign restrictions will generate better quantitative estimates of the effects of shocks than existing methods such as assuming a system is recursive. 
Date:  2007–04–13 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:20078&r=ets 
By:  Liu, Ruipeng; Di Matteo, Tiziana; Lux, Thomas 
Abstract:  In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multiscaling properties by estimating a simple specification of the Markov switching multifractal model (MSM). In order to see how well the estimated models capture the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q = 1, 2) for both empirical data and simulated data of the estimated MSM models. In most cases the multifractal model appears to generate 'apparent' long memory in agreement with the empirical scaling laws. 
Keywords:  scaling, generalized Hurst exponent, multifractal model, GMM estimation 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:5534&r=ets 
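Estimating the scaling exponents H(q) amounts to regressing the log q-th absolute moment of increments on the log time lag (a hedged sketch on a simulated random walk, for which H(q) = 1/2 at every q; the lag grid is an arbitrary choice):

```python
import math
import random

def hurst_q(x, q, taus=(1, 2, 4, 8, 16)):
    # generalized Hurst exponent from the scaling law
    #   E|x(t+tau) - x(t)|^q  ~  tau^(q*H(q)):
    # fit log E|dx|^q against log tau by OLS and divide the slope by q
    logs_tau, logs_m = [], []
    for tau in taus:
        diffs = [abs(x[t + tau] - x[t]) ** q for t in range(len(x) - tau)]
        logs_tau.append(math.log(tau))
        logs_m.append(math.log(sum(diffs) / len(diffs)))
    n = len(taus)
    mt, mm = sum(logs_tau) / n, sum(logs_m) / n
    slope = (sum((a - mt) * (b - mm) for a, b in zip(logs_tau, logs_m))
             / sum((a - mt) ** 2 for a in logs_tau))
    return slope / q

rng = random.Random(3)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + rng.gauss(0.0, 1.0))
```

For multifractal data, H(q) varies with q; the paper compares such empirical estimates with the same statistic computed on data simulated from the fitted MSM model.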
By:  Herwartz, Helmut 
Abstract:  The paper provides Monte Carlo evidence on the performance of general-to-specific and specific-to-general selection of explanatory variables in linear (auto)regressions. In small samples the former is markedly inefficient in terms of ex-ante forecasting performance. 
Keywords:  Model selection, specification testing, Lagrange multiplier tests 
JEL:  C22 C51 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:5537&r=ets 
By:  Kilin, Fiodar 
Abstract:  This paper compares the performance of three methods for pricing vanilla options in models with known characteristic function: (1) direct integration, (2) the Fast Fourier Transform (FFT), and (3) the fractional FFT. The most important application of this comparison is the choice of the fastest method for the calibration of stochastic volatility models, e.g. the Heston, Bates and Barndorff-Nielsen-Shephard models, or Lévy models with stochastic time. We show that using an additional caching technique makes calibration with the direct integration method at least seven times faster than calibration with the fractional FFT method. 
Keywords:  Stochastic Volatility Models; Calibration; Numerical Integration; Fast Fourier Transform 
JEL:  G13 
Date:  2006–12–31 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2975&r=ets 
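The direct-integration method can be sketched for the simplest model with a known characteristic function, Black-Scholes, where a closed form is available as a check (a minimal sketch without the caching the paper studies; the integration limit and grid are ad hoc choices):

```python
import cmath
import math

def bs_cf(u, s0, r, sigma, T):
    # characteristic function of ln S_T under Black-Scholes dynamics
    mu = math.log(s0) + (r - 0.5 * sigma ** 2) * T
    return cmath.exp(1j * u * mu - 0.5 * sigma ** 2 * u ** 2 * T)

def call_direct(s0, K, r, sigma, T, umax=200.0, n=20000):
    # vanilla call by direct numerical integration of the Gil-Pelaez
    # inversion formula: C = S*P1 - K*exp(-rT)*P2
    k = math.log(K)
    du = umax / n
    p1 = p2 = 0.0
    denom = bs_cf(-1j, s0, r, sigma, T)  # equals E[S_T] = S*exp(rT)
    for i in range(1, n + 1):
        u = i * du
        f2 = (cmath.exp(-1j * u * k) * bs_cf(u, s0, r, sigma, T) / (1j * u)).real
        f1 = (cmath.exp(-1j * u * k) * bs_cf(u - 1j, s0, r, sigma, T)
              / (1j * u * denom)).real
        w = 0.5 if i == n else 1.0  # crude end-point weight
        p1 += w * f1 * du
        p2 += w * f2 * du
    P1 = 0.5 + p1 / math.pi
    P2 = 0.5 + p2 / math.pi
    return s0 * P1 - K * math.exp(-r * T) * P2

def bs_closed_form(s0, K, r, sigma, T):
    # Black-Scholes closed form for the same call, used only as a check
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * N(d1) - K * math.exp(-r * T) * N(d2)
```

In calibration the same integrand is evaluated for many strikes at fixed model parameters, which is why caching the strike-independent pieces, the speed-up the paper quantifies, pays off.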
By:  Chen, Willa; Deo, Rohit 
Abstract:  The restricted likelihood (RL) of an autoregressive (AR) process of order one with intercept/trend possesses enormous advantages, such as yielding estimates with significantly reduced bias, powerful unit root tests, small curvature, a well-behaved likelihood ratio test (RLRT) near the unit root and confidence intervals with good coverage. Here we consider the RLRT for the sum of the coefficients in AR(p) processes with intercept/trend. We show that the limit of the leading error term in the chi-square approximation to the RLRT distribution is finite as the unit root is approached, implying a uniformly good approximation over the entire parameter space and well-behaved interval inference for nearly integrated processes. We extend the correspondence between the stationary AR coefficients and the partial autocorrelations to the unit root case and provide a simple unified representation of the RL for both stationary and integrated AR processes which eliminates the singularity at the unit root. The resulting parameter space is shown to be the bounded p-dimensional hypercube (-1,1]×(-1,1)^{p-1}, thus simplifying the optimisation. Confidence intervals for the sum of the AR coefficients are easily obtained from the RLRT as they are equivalent to intervals for a simple bounded function of the partial autocorrelations. An empirical application to the Nelson-Plosser data is provided. 
Keywords:  curvature; confidence interval; autoregressive; near unit root; Bartlett correction 
JEL:  C10 C22 C12 
Date:  2007–04–23 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3002&r=ets 
By:  Albu, LucianLiviu 
Abstract:  The study demonstrates how nonlinear modelling can be used to investigate the behaviour of dynamic economic systems. Adequate nonlinear models can offer more refined solutions to currently unsolved problems or ambiguities in economics. Beginning with a short presentation of the simplest nonlinear models, we then demonstrate how the dynamics of complex systems, such as the economic system, can be explained on the basis of more advanced nonlinear models and specific simulation techniques. We consider nonlinear models only as an alternative to the stochastic linear models in economics. The conventional explanations of the behaviour of the economic system often contradict the empirical evidence. We try to demonstrate that small modifications to the standard linear form of some economic models make the simulated behaviour of the system more complex and consequently more realistic. Finally, a few applications of nonlinear models to the study of the inflation-unemployment relationship, potentially useful for further empirical studies, are presented. 
Keywords:  nonlinear model; continuous time map; strange attractor; fractal dimension; natural unemployment 
JEL:  E32 E27 C63 C02 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3100&r=ets 
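The simplest nonlinear models mentioned, one-dimensional maps, already exhibit the sensitivity to initial conditions that makes such dynamics complex (a standard logistic-map illustration, not a model taken from the paper):

```python
def logistic_map(x0, r, n):
    # iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# sensitive dependence: two starting points 1e-9 apart in the chaotic regime r = 4
a = logistic_map(0.3, 4.0, 60)
b = logistic_map(0.3 + 1e-9, 4.0, 60)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

At r = 2.5 the same map converges to the stable fixed point 1 - 1/r, while at r = 4 nearby trajectories separate exponentially; this qualitative change under a small modification of a simple deterministic rule is the phenomenon the study exploits.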
By:  Di Iorio, Francesca; Fachin, Stefano 
Abstract:  In this paper we propose panel cointegration tests allowing for breaks and cross-section dependence based on the Continuous-Path Block bootstrap. Simulation evidence shows that the proposed panel tests have satisfactory size and power properties, hence improving considerably on asymptotic tests applied to individual series. As an empirical illustration we examine investment and saving for a panel of European countries over the 1960-2002 period, finding, contrary to the results of most individual tests, that the hypothesis of a long-run relationship with breaks is compatible with the data. 
Keywords:  Panel cointegration; continuous-path block bootstrap; breaks; Feldstein-Horioka puzzle. 
JEL:  C23 
Date:  2007–05–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3139&r=ets 
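The flavour of a block bootstrap that keeps the resampled path continuous can be sketched by resampling moving blocks of first differences and cumulating them (a hedged sketch in the spirit of, not an implementation of, the paper's Continuous-Path Block bootstrap; block length and series are arbitrary):

```python
import random

def continuous_path_block_resample(y, block_len, rng):
    # resample *differences* in moving blocks, then cumulate from the
    # original starting value, so the bootstrap path has no jumps at
    # block joins while short-run dependence within blocks is preserved
    d = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(d)
    blocks_needed = -(-n // block_len)  # ceiling division
    resampled = []
    for _ in range(blocks_needed):
        start = rng.randrange(n - block_len + 1)
        resampled.extend(d[start:start + block_len])
    resampled = resampled[:n]
    path = [y[0]]
    for di in resampled:
        path.append(path[-1] + di)
    return path
```

Repeating the resample and recomputing the test statistic on each bootstrap path gives critical values that remain valid under the cross-section dependence and breaks the asymptotic individual tests handle poorly.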
By:  Vanessa Berenguer-Rico (Faculty of Economics, Juan Carlos III.); Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona) 
Abstract:  In this paper we model the multicointegration relation, allowing for one structural break. Since multicointegration is a particular case of polynomial or I(2) cointegration, our proposal can also be applied in these cases. The paper proposes the use of a residual-based Dickey-Fuller class of statistic that accounts for one known or unknown structural break. The finite sample performance of the proposed statistic is investigated using Monte Carlo simulations, which reveal that the statistic has good properties in terms of empirical size and power. We complete the study with an empirical application to the sustainability of the US external deficit. Contrary to existing evidence, allowing for one structural break leads us to conclude in favour of the sustainability of the US external deficit. 
Keywords:  I(2) processes, multicointegration, polynomial cointegration, structural break, sustainability of external deficit. 
JEL:  C12 C22 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:ira:wpaper:200709&r=ets 
By:  Syed A. Basher (Department of Economics, York University); Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona) 
Abstract:  This paper reexamines the null of stationarity of the real exchange rate for a panel of seventeen developed OECD countries during the post-Bretton Woods era. Our analysis simultaneously considers both the presence of cross-section dependence and multiple structural breaks, features that have not received much attention in previous panel methods of long-run PPP. Empirical results indicate that there is little evidence in favor of the PPP hypothesis when the analysis does not account for structural breaks. This conclusion is reversed when structural breaks are considered in the computation of the panel statistics. We also compute point estimates of the half-life separately for the idiosyncratic and common factor components and find that it is always below one year. 
Keywords:  Purchasing power parity, Half-lives, Panel unit root tests, Multiple structural breaks, Cross-section dependence. 
JEL:  C32 C33 E31 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:ira:wpaper:200710&r=ets 
By:  Jun Ma 
Abstract:  This paper presents a closed-form asymptotic variance-covariance matrix of the Maximum Likelihood Estimators (MLE) for the GARCH(1,1) model. Starting from the standard asymptotic result, a closed-form expression for the information matrix of the MLE is derived via a local approximation. The closed-form variance-covariance matrix of the MLE for the GARCH(1,1) model can be obtained by inverting the information matrix. Monte Carlo simulation experiments show that this closed-form expression works well in the admissible region of parameters. 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:udb:wpaper:uwec200611r&r=ets 
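The object being approximated is the curvature of the Gaussian GARCH(1,1) log-likelihood: the negative Hessian of this function at the MLE is the observed information whose closed-form counterpart the paper derives (a minimal sketch of the likelihood recursion only; the parameter values in the example are illustrative):

```python
import math
import random

def garch11_loglik(params, returns):
    # Gaussian GARCH(1,1) log-likelihood with the variance recursion
    # h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}, started at the
    # unconditional variance
    omega, alpha, beta = params
    h = omega / (1.0 - alpha - beta)
    ll = 0.0
    for r in returns:
        ll += -0.5 * (math.log(2.0 * math.pi) + math.log(h) + r * r / h)
        h = omega + alpha * r * r + beta * h
    return ll

rng = random.Random(11)
rets = [rng.gauss(0.0, 1.0) for _ in range(300)]
ll = garch11_loglik((0.05, 0.05, 0.9), rets)
```

In practice the information matrix is often obtained by numerically differentiating this function twice; a closed-form expression replaces that numerical step and can then be inverted for asymptotic standard errors.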
By:  Jun Ma; Charles Nelson; Richard Startz 
Abstract:  This paper shows that the Zero-Information-Limit-Condition (ZILC) formulated by Nelson and Startz (2006) holds in the GARCH(1,1) model. As a result, the GARCH estimate tends to have too small a standard error relative to the true one when the ARCH parameter is small, even when the sample size becomes very large. In combination with an upward bias in the GARCH estimate, the small standard error will often lead to the spurious inference that volatility is highly persistent when it is not. We develop an empirical strategy to deal with this issue and show how it applies to real datasets. 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:udb:wpaper:uwec200614p&r=ets 
By:  Kum Hwa Oh; Eric Zivot; Drew Creal 
Abstract:  Many researchers believe that the Beveridge-Nelson decomposition leads to permanent and transitory components whose shocks are perfectly negatively correlated. Indeed, some even consider it to be a property of the decomposition. We demonstrate that the Beveridge-Nelson decomposition does not provide definitive information about the correlation between permanent and transitory shocks in an unobserved components model. Given an ARIMA model describing the evolution of U.S. real GDP, we show that there are many state space representations that generate the Beveridge-Nelson decomposition. These include unobserved components models with perfectly correlated shocks and partially correlated shocks. In our applications, the only knowledge we have about the correlation is that it lies in a restricted interval that does not include zero. Although the filtered estimates of the trend and cycle are identical for models with different correlations, the observationally equivalent unobserved components models produce different smoothed estimates. 
Date:  2006–07 
URL:  http://d.repec.org/n?u=RePEc:udb:wpaper:uwec200616&r=ets 
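For the simplest ARIMA(1,1,0) case, the Beveridge-Nelson trend has a closed form, trend_t = y_t + (phi/(1-phi))(dy_t - mu), the level plus all expected future growth above the mean (an illustrative sketch with made-up parameters; the paper's point concerns the many state space representations consistent with this decomposition):

```python
import random

def bn_decompose_ar1(y, phi, mu):
    # Beveridge-Nelson decomposition for dy_t = mu + phi*(dy_{t-1} - mu) + e_t:
    #   trend_t = y_t + sum_{j>=1} E[dy_{t+j} - mu | t]
    #           = y_t + (phi / (1 - phi)) * (dy_t - mu)
    trend, cycle = [], []
    for t in range(1, len(y)):
        dy = y[t] - y[t - 1]
        tau = y[t] + (phi / (1.0 - phi)) * (dy - mu)
        trend.append(tau)
        cycle.append(y[t] - tau)
    return trend, cycle

# simulate an ARIMA(1,1,0) "log GDP"-style path
rng = random.Random(5)
phi, mu = 0.5, 0.2
y, dy = [0.0], mu
for _ in range(2000):
    dy = mu + phi * (dy - mu) + rng.gauss(0.0, 1.0)
    y.append(y[-1] + dy)

trend, cycle = bn_decompose_ar1(y, phi, mu)
```

The trend and cycle above are functions of the observed data alone; the paper shows that many unobserved components models, with very different shock correlations, deliver exactly this same filtered decomposition.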
By:  Ying Gu; Eric Zivot 
Abstract:  In this paper, the efficient method of moments (EMM) estimation using a semi-nonparametric (SNP) auxiliary model is employed to determine the best fitting model for the volatility dynamics of the U.S. weekly three-month interest rate. A variety of volatility models are considered, including one-factor diffusion models, two-factor and three-factor stochastic volatility (SV) models, non-Gaussian diffusion models with Stable distributed errors, and a variety of Markov regime switching (RS) models. The advantage of using EMM estimation is that all of the proposed structural models can be evaluated with respect to a common auxiliary model. We find that a continuous-time two-factor SV model, a continuous-time three-factor SV model, and a discrete-time RS-in-volatility model with a level effect can well explain the salient features of the short rate as summarized by the auxiliary model. We also show that either an SV model with a level effect or an RS model with a level effect, but not both, is needed to explain the data. Our EMM estimates of the level effect are much lower than unity, but around 1/2 after incorporating the SV effect or the RS effect. 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:udb:wpaper:uwec200617&r=ets 