nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒05‒12
nineteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. A flexible approach to parametric inference in nonlinear time series models By Gary Koop; Simon Potter
  2. An Assessment of Alternative State Space Models for Count Time Series By Ralph D. Snyder; Gael M. Martin; Phillip Gould; Paul D. Feigin
  3. Local linear impulse responses for a small open economy By Alfred A Haug; Christie Smith
  4. Proxies for daily volatility By Robin G. de Vilder; Marcel P. Visser
  5. Forecasting with Factors: The Accuracy of Timeliness By Christian Gillitzer; Jonathan Kearns
  6. Splines for Financial Volatility By Francesco Audrino; Peter Bühlmann
  7. Some Issues in Using Sign Restrictions for Identifying Structural VARs By Renee Fry; Adrian Pagan
  8. True and Apparent Scaling: The Proximity of the Markov-Switching Multifractal Model to Long-Range Dependence By Liu, Ruipeng; Di Matteo, Tiziana; Lux, Thomas
  9. A note on model selection in (time series) regression models - General-to-specific or specific-to-general? By Herwartz, Helmut
  10. Accelerating the calibration of stochastic volatility models By Kilin, Fiodar
  11. The Chi-square Approximation of the Restricted Likelihood Ratio Test for the Sum of Autoregressive Coefficients with Interval Estimation By Chen, Willa; Deo, Rohit
  12. Non-linear models: applications in economics By Albu, Lucian-Liviu
  13. Cointegration testing in dependent panels with breaks By Di Iorio, Francesca; Fachin, Stefano
  14. Multicointegration, polynomial cointegration and I(2) cointegration with structural breaks. An application to the sustainability of the US external deficit. By Vanessa Berenguer-Rico; Josep Lluís Carrion-i-Silvestre
  15. Another Look at the Null of Stationary Real Exchange Rates. Panel Data with Structural Breaks and Cross-section Dependence By Syed A. Basher; Josep Lluís Carrion-i-Silvestre
  16. A Closed-Form Asymptotic Variance-Covariance Matrix for the Maximum Likelihood Estimator of the GARCH(1,1) Model By Jun Ma
  17. Spurious Inference in the GARCH(1,1) Model When It Is Weakly Identified By Jun Ma; Charles Nelson; Richard Startz
  18. The Relationship between the Beveridge-Nelson Decomposition and Unobserved Component Models with Correlated Shocks By Kum Hwa Oh; Eric Zivot; Drew Creal
  19. A Comparison of Univariate Stochastic Volatility Models for U.S. Short Rates Using EMM Estimation By Ying Gu; Eric Zivot

  1. By: Gary Koop; Simon Potter
    Abstract: Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible parametric model that accommodates virtually any of these specifications, and does so in a simple way that allows for straightforward Bayesian inference. The basic idea underlying our model is that it adds two concepts to a standard state space framework. These ideas are ordering and distance. By ordering the data in different ways, we can accommodate a wide range of nonlinear time series models. By allowing the state equation variances to depend on the distance between observations, the parameters can evolve in a wide variety of ways, allowing for models that exhibit abrupt change as well as those that permit a gradual evolution of parameters. We show how our model will (approximately) nest almost every popular model in the regime-switching and structural break literatures. Bayesian econometric methods for inference in this model are developed. Because we stay within a state space framework, these methods are relatively straightforward and draw on the existing literature. We use artificial data to show the advantages of our approach and then provide two empirical illustrations involving the modeling of real GDP growth.
    Keywords: Time-series analysis ; Econometric models ; Economic forecasting
    Date: 2007
  2. By: Ralph D. Snyder; Gael M. Martin; Phillip Gould; Paul D. Feigin
    Abstract: This paper compares two alternative models for autocorrelated count time series. The first model can be viewed as a 'single source of error' discrete state space model, in which a time-varying parameter is specified as a function of lagged counts, with no additional source of error introduced. The second model is the more conventional 'dual source of error' discrete state space model, in which the time-varying parameter is driven by a random autocorrelated process. Using the nomenclature of the literature, the two representations can be viewed as observation-driven and parameter-driven respectively, with the distinction between the two models mimicking that between analogous models for other non-Gaussian data such as financial returns and trade durations. The paper demonstrates that when adopting a conditional Poisson specification, the two models have vastly different dispersion/correlation properties, with the dual source model having properties that are a much closer match to the empirical properties of observed count series than are those of the single source model. Simulation experiments are used to measure the finite sample performance of maximum likelihood (ML) estimators of the parameters of each model, and ML-based predictors, with ML estimation implemented for the dual source model via a deterministic hidden Markov chain approach. Most notably, the numerical results indicate that despite the very different properties of the two models, predictive accuracy is reasonably robust to misspecification of the state space form.
    Keywords: Discrete state-space model; single source of error model; hidden Markov
    JEL: C13 C22 C46 C53
    Date: 2007–05
  3. By: Alfred A Haug; Christie Smith (Reserve Bank of New Zealand)
    Abstract: Traditional vector autoregressions derive impulse responses using iterative techniques that may compound specification errors. Local projection techniques are robust to this problem, and Monte Carlo evidence suggests they provide reliable estimates of the true impulse responses. We use local linear projections to investigate the dynamic properties of a model for a small open economy, New Zealand. We compare impulse responses from local projections to those from standard techniques, and consider the implications for monetary policy. We pay careful attention to the dimensionality of the model, and focus on the effects of policy on GDP, interest rates, prices and the exchange rate.
    JEL: C51 E52 F41
    Date: 2007–04
  4. By: Robin G. de Vilder; Marcel P. Visser
    Abstract: High frequency data are often used to construct proxies for the daily volatility in discrete time volatility models. This paper introduces a calculus for such proxies, making it possible to compare and optimize them. The two distinguishing features of the approach are (1) a simple continuous time extension of discrete time volatility models and (2) an abstract definition of a volatility proxy. The theory is applied to eighteen years' worth of S&P 500 index data. It is used to construct a proxy that outperforms realized volatility.
    Date: 2007
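As a concrete illustration of the kind of proxy discussed in item 4, realized variance, the sum of squared intraday returns, is the canonical high-frequency volatility proxy. The sketch below checks that it recovers the true daily variance of a simulated Brownian price path; all parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch: realized variance as a daily volatility proxy.
# For a Brownian return path with true daily variance sigma^2, the sum
# of squared intraday returns converges to sigma^2 as the sampling
# frequency grows. Values below are hypothetical.
import random, math

def realized_variance(intraday_returns):
    """Sum of squared intraday returns over one day."""
    return sum(r * r for r in intraday_returns)

rng = random.Random(7)
sigma = 0.01            # true daily return standard deviation (1%)
m = 10000               # intraday observations in the day
rets = [rng.gauss(0.0, sigma / math.sqrt(m)) for _ in range(m)]

rv = realized_variance(rets)
daily_vol_proxy = math.sqrt(rv)   # should be close to sigma = 0.01
```

The estimation error of realized variance shrinks like 1/sqrt(m), which is why higher-frequency sampling (absent microstructure noise) gives a sharper proxy.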
  5. By: Christian Gillitzer (Reserve Bank of Australia); Jonathan Kearns (Reserve Bank of Australia)
    Abstract: This paper demonstrates that factor-based forecasts for key Australian macroeconomic series can outperform standard time-series benchmarks. In practice, however, the advantage of using large panels of data to construct the factors typically comes at the cost of using less timely series, thereby delaying when the forecasts can be made. To produce more timely forecasts it is possible to use a narrower data panel, though this will possibly result in less accurate factor estimates and so less accurate forecasts. We demonstrate this trade-off between accuracy and timeliness with out-of-sample forecasts. With the sole exception of consumer price inflation, the forecasts do not become less accurate as they utilise less information by excluding less timely series. So while factor forecasts have large data requirements, we show that these should not prevent their practical use when timely forecasts are needed.
    Keywords: forecasting; factor models; Australia
    JEL: C53 E27 E37
    Date: 2007–04
  6. By: Francesco Audrino; Peter Bühlmann
    Abstract: We propose a flexible GARCH-type model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison to other approaches, and we present some supporting asymptotic arguments.
    Keywords: Boosting, B-splines, Conditional variance, Financial time series, GARCH model, Volatility
    JEL: C13 C14 C22 C51 C53 C63
    Date: 2007–04
  7. By: Renee Fry; Adrian Pagan
    Abstract: The paper looks at estimation of structural VARs with sign restrictions. Since sign restrictions do not generate a unique model, it is necessary to find some way of summarizing the information they yield. Existing methods present impulse responses from different models, and it is argued that they should come from a common model. If this is not done, the shocks implicit in the impulse responses will not be orthogonal. A method is described that tries to resolve this difficulty. It works with a common model whose impulse responses are as close as possible to the median values of the impulse responses (taken over the range of models satisfying the sign restrictions). Using a simple demand and supply model, it is shown that there is no reason to think that sign restrictions will generate better quantitative estimates of the effects of shocks than existing methods such as assuming a system is recursive.
    Date: 2007–04–13
  8. By: Liu, Ruipeng; Di Matteo, Tiziana; Lux, Thomas
    Abstract: In this paper, we consider daily financial data on a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal model (MSM). In order to see how well the estimated models capture the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q = 1, 2) for both the empirical data and simulated data from the estimated MSM models. In most cases the multifractal model appears to generate 'apparent' long memory in agreement with the empirical scaling laws.
    Keywords: scaling, generalized Hurst exponent, multifractal model, GMM estimation
    Date: 2007
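The scaling exponents H(q) that item 8 estimates come from the scaling law E|X(t+tau) - X(t)|^q ~ tau^(qH(q)): regress the log of the q-th absolute moment of increments on the log of the lag. A minimal Python sketch, checked on a Gaussian random walk, for which H(q) = 0.5 at every q; the series and all values are assumptions for illustration, not the paper's data.

```python
# Sketch: generalized Hurst exponent H(q) from moment scaling.
# Estimate E|x[t+tau] - x[t]|^q for several lags tau, then take the
# OLS slope of log-moment on log-lag and divide by q.
import math, random

def hurst_q(x, q=2.0, taus=(1, 2, 4, 8, 16, 32)):
    """Estimate H(q) from the scaling of q-th order increment moments."""
    logt, logm = [], []
    for tau in taus:
        diffs = [abs(x[i + tau] - x[i]) ** q for i in range(len(x) - tau)]
        logt.append(math.log(tau))
        logm.append(math.log(sum(diffs) / len(diffs)))
    n = len(taus)
    mt, mm = sum(logt) / n, sum(logm) / n
    slope = (sum((a - mt) * (b - mm) for a, b in zip(logt, logm))
             / sum((a - mt) ** 2 for a in logt))
    return slope / q

# Gaussian random walk: no long memory, so H(2) should be near 0.5.
random.seed(42)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
h2 = hurst_q(walk, q=2.0)
```

A multifractal series would instead show H(q) varying with q; the paper's point is that estimated MSM models reproduce such "apparent" scaling even though their true long-run dependence is Markovian.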
  9. By: Herwartz, Helmut
    Abstract: The paper provides Monte Carlo evidence on the performance of general-to-specific and specific-to-general selection of explanatory variables in linear (auto)regressions. In small samples the former is markedly inefficient in terms of ex-ante forecasting performance.
    Keywords: Model selection, specification testing, Lagrange multiplier tests
    JEL: C22 C51
    Date: 2007
  10. By: Kilin, Fiodar
    Abstract: This paper compares the performance of three methods for pricing vanilla options in models with a known characteristic function: (1) direct integration, (2) the Fast Fourier Transform (FFT), (3) the fractional FFT. The most important application of this comparison is the choice of the fastest method for the calibration of stochastic volatility models, e.g. the Heston, Bates and Barndorff-Nielsen-Shephard models, or Lévy models with stochastic time. We show that using an additional caching technique makes calibration with the direct integration method at least seven times faster than calibration with the fractional FFT method.
    Keywords: Stochastic Volatility Models; Calibration; Numerical Integration; Fast Fourier Transform
    JEL: G13
    Date: 2006–12–31
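For readers unfamiliar with the "direct integration" approach that item 10 benchmarks, the sketch below prices a European call by numerically integrating a characteristic function (a Gil-Pelaez-style inversion). It uses the Black-Scholes characteristic function, for which a closed form exists as a check; a Heston or Lévy model is handled by swapping in its characteristic function. This is a hedged illustration with hypothetical parameters, not the paper's cached implementation.

```python
# Sketch: European call pricing by direct integration of the
# characteristic function of ln(S_T): C = S0*P1 - K*exp(-r*T)*P2,
# P_j = 1/2 + (1/pi) * Integral Re[exp(-i*u*ln K) * phi_j(u) / (i*u)] du,
# with phi_2 = phi and phi_1(u) = phi(u - i) / phi(-i).
import cmath, math

def bs_charfn(u, s0, r, sigma, t):
    """Black-Scholes characteristic function of ln(S_T)."""
    m = math.log(s0) + (r - 0.5 * sigma ** 2) * t
    return cmath.exp(1j * u * m - 0.5 * sigma ** 2 * u ** 2 * t)

def call_price(s0, k, r, t, phi, n=20000, umax=200.0):
    """Price a call by midpoint-rule integration of the inversion formula."""
    du = umax / n
    lnk = math.log(k)
    denom = phi(-1j)            # normalises P1 (equals S0*exp(r*t) under BS)
    p1 = p2 = 0.0
    for i in range(1, n + 1):
        u = (i - 0.5) * du      # midpoint rule; integrand is finite at u -> 0
        p2 += (cmath.exp(-1j * u * lnk) * phi(u) / (1j * u)).real * du
        p1 += (cmath.exp(-1j * u * lnk) * phi(u - 1j) / (1j * u * denom)).real * du
    p1 = 0.5 + p1 / math.pi
    p2 = 0.5 + p2 / math.pi
    return s0 * p1 - k * math.exp(-r * t) * p2

# Hypothetical inputs: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year.
phi = lambda u: bs_charfn(u, 100.0, 0.05, 0.2, 1.0)
price = call_price(100.0, 100.0, 0.05, 1.0, phi)
# The Black-Scholes closed form gives about 10.4506 for these inputs.
```

The caching idea the paper exploits is that, during calibration, the characteristic-function evaluations that do not depend on the strike can be computed once and reused across strikes, which is what makes direct integration competitive with FFT methods.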
  11. By: Chen, Willa; Deo, Rohit
    Abstract: The restricted likelihood (RL) of an autoregressive (AR) process of order one with intercept/trend possesses enormous advantages, such as yielding estimates with significantly reduced bias, powerful unit root tests, small curvature, a well-behaved likelihood ratio test (RLRT) near the unit root and confidence intervals with good coverage. Here we consider the RLRT for the sum of the coefficients in AR(p) processes with intercept/trend. We show that the limit of the leading error term in the chi-square approximation to the RLRT distribution is finite as the unit root is approached, implying a uniformly good approximation over the entire parameter space and well-behaved interval inference for nearly integrated processes. We extend the correspondence between the stationary AR coefficients and the partial autocorrelations to the unit root case and provide a simple unified representation of the RL for both stationary and integrated AR processes which eliminates the singularity at the unit root. The resulting parameter space is shown to be the bounded p-dimensional hypercube (-1,1]×(-1,1)^{p-1}, thus simplifying the optimisation. Confidence intervals for the sum of the AR coefficients are easily obtained from the RLRT as they are equivalent to intervals for a simple bounded function of the partial autocorrelations. An empirical application to the Nelson-Plosser data is provided.
    Keywords: curvature; confidence interval; autoregressive; near unit root; Bartlett correction
    JEL: C10 C22 C12
    Date: 2007–04–23
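Item 11's simplified parameter space rests on the classical one-to-one correspondence between partial autocorrelations in (-1,1) and stationary AR coefficients. A minimal Python sketch of that map via the Levinson-Durbin recursion; the paper's extension lets the first partial autocorrelation reach 1, covering the unit-root case. The numerical examples are illustrative.

```python
# Sketch: mapping partial autocorrelations [r_1, ..., r_p] to AR(p)
# coefficients via the Levinson-Durbin recursion
#   phi_{k,j} = phi_{k-1,j} - r_k * phi_{k-1,k-j},  phi_{k,k} = r_k.
# Every point of (-1,1)^p yields a stationary AR(p); item 11 extends the
# first coordinate to (-1,1] so that r_1 = 1 gives the integrated case.
def pacf_to_ar(pacs):
    """Return AR coefficients [phi_1, ..., phi_p] for the given
    partial autocorrelations [r_1, ..., r_p]."""
    phi = []
    for k, r in enumerate(pacs, start=1):
        phi = [phi[j] - r * phi[k - 2 - j] for j in range(k - 1)] + [r]
    return phi

# AR(1): the coefficient equals the first partial autocorrelation.
ar1 = pacf_to_ar([0.5])
# AR(2) from partial autocorrelations (0.5, 0.3):
# phi_1 = 0.5 - 0.3*0.5 = 0.35, phi_2 = 0.3.
ar2 = pacf_to_ar([0.5, 0.3])
```

Optimising the restricted likelihood over this bounded hypercube, rather than over the raw AR coefficients, is what removes the awkward stationarity constraints and the singularity at the unit root.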
  12. By: Albu, Lucian-Liviu
    Abstract: The study demonstrates how non-linear modelling can be used to investigate the behaviour of dynamic economic systems. Using adequate non-linear models can be a good way to find more refined solutions to currently unsolved problems or ambiguities in economics. Beginning with a short presentation of the simplest non-linear models, we then demonstrate how the dynamics of complex systems, such as the economic system, can be explained on the basis of more advanced non-linear models and specific simulation techniques. We consider non-linear models only as an alternative to the stochastic linear models in economics. Conventional explanations of the behaviour of the economic system often contradict the empirical evidence. We try to demonstrate that small modifications to the standard linear form of some economic models make the simulated behaviour of the system more complex and consequently more realistic. Finally, a few applications of non-linear models to the study of the inflation-unemployment relationship, potentially useful for further empirical studies, are presented.
    Keywords: non-linear model; continuous time map; strange attractor; fractal dimension; natural unemployment
    JEL: E32 E27 C63 C02
    Date: 2006
  13. By: Di Iorio, Francesca; Fachin, Stefano
    Abstract: In this paper we propose panel cointegration tests allowing for breaks and cross-section dependence, based on the continuous-path block bootstrap. Simulation evidence shows that the proposed panel tests have satisfactory size and power properties, hence improving considerably on asymptotic tests applied to individual series. As an empirical illustration we examine investment and saving for a panel of European countries over the 1960-2002 period, finding, contrary to the results of most individual tests, that the hypothesis of a long-run relationship with breaks is compatible with the data.
    Keywords: Panel cointegration; continuous-path block bootstrap; breaks; Feldstein-Horioka Puzzle.
    JEL: C23
    Date: 2007–05–09
  14. By: Vanessa Berenguer-Rico (Faculty of Economics, Juan Carlos III.); Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona)
    Abstract: In this paper we model the multicointegration relation, allowing for one structural break. Since multicointegration is a particular case of polynomial or I(2) cointegration, our proposal can also be applied in these cases. The paper proposes the use of a residual-based Dickey-Fuller class of statistic that accounts for one known or unknown structural break. The finite sample performance of the proposed statistic is investigated using Monte Carlo simulations, which reveal that the statistic shows good properties in terms of empirical size and power. We complete the study with an empirical application to the sustainability of the US external deficit. Contrary to existing evidence, the consideration of one structural break leads us to conclude in favour of the sustainability of the US external deficit.
    Keywords: I(2) processes, multicointegration, polynomial cointegration, structural break, sustainability of external deficit.
    JEL: C12 C22
    Date: 2007–05
  15. By: Syed A. Basher (Department of Economics. York University.); Josep Lluís Carrion-i-Silvestre (Faculty of Economics, University of Barcelona.)
    Abstract: This paper re-examines the null of stationarity of real exchange rates for a panel of seventeen developed OECD countries during the post-Bretton Woods era. Our analysis simultaneously considers both the presence of cross-section dependence and multiple structural breaks, which have not received much attention in previous panel methods for long-run PPP. Empirical results indicate that there is little evidence in favor of the PPP hypothesis when the analysis does not account for structural breaks. This conclusion is reversed when structural breaks are considered in the computation of the panel statistics. We also compute point estimates of the half-life separately for the idiosyncratic and common factor components and find that it is always below one year.
    Keywords: Purchasing power parity, Half-lives, Panel unit root tests, Multiple structural breaks, Cross-section dependence.
    JEL: C32 C33 E31
    Date: 2007–05
  16. By: Jun Ma
    Abstract: This paper presents a closed-form asymptotic variance-covariance matrix of the Maximum Likelihood Estimators (MLE) for the GARCH(1,1) model. Starting from the standard asymptotic result, a closed form expression for the information matrix of the MLE is derived via a local approximation. The closed form variance-covariance matrix of MLE for the GARCH(1,1) model can be obtained by inverting the information matrix. The Monte Carlo simulation experiments show that this closed form expression works well in the admissible region of parameters.
    Date: 2006–10
  17. By: Jun Ma; Charles Nelson; Richard Startz
    Abstract: This paper shows that the Zero-Information-Limit-Condition (ZILC) formulated by Nelson and Startz (2006) holds in the GARCH(1,1) model. As a result, the GARCH estimate tends to have too small a standard error relative to the true one when the ARCH parameter is small, even when sample size becomes very large. In combination with an upward bias in the GARCH estimate, the small standard error will often lead to the spurious inference that volatility is highly persistent when it is not. We develop an empirical strategy to deal with this issue and show how it applies to real datasets.
    Date: 2007–03
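The GARCH(1,1) recursion at issue in items 16 and 17 is simple to simulate, and simulation makes the persistence and fat-tail properties concrete. A Python sketch with hypothetical parameters (persistence alpha + beta = 0.95, chosen only for illustration, not estimated from data):

```python
# Sketch: simulating the GARCH(1,1) model
#   r_t = sqrt(h_t) * z_t,   h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
# with z_t standard normal. Parameter values are hypothetical.
import random, math

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Return n simulated returns and their conditional variances."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    rs, hs = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        r = math.sqrt(h) * z
        rs.append(r)
        hs.append(h)
        h = omega + alpha * r * r + beta * h
    return rs, hs

returns, cond_vars = simulate_garch11(50000, omega=0.05, alpha=0.10, beta=0.85)

# Unconditional variance is omega/(1 - alpha - beta) = 1 here, and the
# model-implied kurtosis, 3(1-(a+b)^2)/(1-(a+b)^2-2a^2), is about 3.8:
# fat tails despite conditionally normal shocks.
mean_h = sum(cond_vars) / len(cond_vars)
m2 = sum(r * r for r in returns) / len(returns)
m4 = sum(r ** 4 for r in returns) / len(returns)
kurtosis = m4 / (m2 * m2)
```

The weak-identification problem the paper studies arises precisely when alpha is near zero: then beta barely affects the likelihood, yet standard asymptotics report a deceptively small standard error for it.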
  18. By: Kum Hwa Oh; Eric Zivot; Drew Creal
    Abstract: Many researchers believe that the Beveridge-Nelson decomposition leads to permanent and transitory components whose shocks are perfectly negatively correlated. Indeed, some even consider it to be a property of the decomposition. We demonstrate that the Beveridge-Nelson decomposition does not provide definitive information about the correlation between permanent and transitory shocks in an unobserved components model. Given an ARIMA model describing the evolution of U.S. real GDP, we show that there are many state space representations that generate the Beveridge-Nelson decomposition. These include unobserved components models with perfectly correlated shocks and partially correlated shocks. In our applications, the only knowledge we have about the correlation is that it lies in a restricted interval that does not include zero. Although the filtered estimates of the trend and cycle are identical for models with different correlations, the observationally equivalent unobserved components models produce different smoothed estimates.
    Date: 2006–07
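The perfect-correlation belief that item 18 examines is easy to see in the simplest case. For an AR(1) in first differences, dy_t = mu + phi*(dy_{t-1} - mu) + eps_t, the Beveridge-Nelson trend is tau_t = y_t + [phi/(1-phi)]*(dy_t - mu), and its increments equal mu + eps_t/(1-phi): one shock drives both trend and cycle. A Python sketch verifying this identity on simulated data; the parameter values are hypothetical.

```python
# Sketch: Beveridge-Nelson decomposition for an AR(1) in differences.
# Trend increments should equal mu + eps_t/(1-phi), a random walk with
# drift driven by (a scaling of) the same shock that moves the cycle.
import random

mu, phi = 0.5, 0.4
rng = random.Random(1)
eps = [rng.gauss(0.0, 1.0) for _ in range(5000)]

# Simulate y_t with dy_t = mu + phi*(dy_{t-1} - mu) + eps_t.
dy, y = [], [0.0]
prev = mu                          # start the differences at their mean
for e in eps:
    d = mu + phi * (prev - mu) + e
    dy.append(d)
    y.append(y[-1] + d)
    prev = d

# Beveridge-Nelson trend and cycle.
c = phi / (1.0 - phi)
trend = [y[t + 1] + c * (dy[t] - mu) for t in range(len(dy))]
cycle = [-c * (dy[t] - mu) for t in range(len(dy))]

# Largest deviation of the trend increment from mu + eps_t/(1-phi):
inc_err = max(abs(trend[t] - trend[t - 1] - mu - eps[t] / (1.0 - phi))
              for t in range(1, len(trend)))
```

The paper's point is that this tight link is a feature of the BN construction, not of the data: unobserved-components models with different (imperfect) shock correlations can reproduce the same BN filtered estimates.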
  19. By: Ying Gu; Eric Zivot
    Abstract: In this paper, the efficient method of moments (EMM) estimation using a semi-nonparametric (SNP) auxiliary model is employed to determine the best-fitting model for the volatility dynamics of the U.S. weekly three-month interest rate. A variety of volatility models are considered, including one-factor diffusion models, two-factor and three-factor stochastic volatility (SV) models, non-Gaussian diffusion models with Stable distributed errors, and a variety of Markov regime-switching (RS) models. The advantage of using EMM estimation is that all of the proposed structural models can be evaluated with respect to a common auxiliary model. We find that a continuous-time two-factor SV model, a continuous-time three-factor SV model, and a discrete-time RS-in-volatility model with a level effect can well explain the salient features of the short rate as summarized by the auxiliary model. We also show that either an SV model with a level effect or an RS model with a level effect, but not both, is needed to explain the data. Our EMM estimates of the level effect are much lower than unity, but around 1/2 after incorporating the SV effect or the RS effect.
    Date: 2006–08

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.