on Econometric Time Series |
By: | Anders Rahbek (University of Copenhagen and CREATES); Heino Bohn Nielsen (University of Copenhagen) |
Abstract: | We propose a discrete-time multivariate model where lagged levels of the process enter both the conditional mean and the conditional variance. In this way we allow for the empirically observed persistence in time series such as interest rates, often implying unit roots, while at the same time maintaining stationarity despite such unit roots. Specifically, the model bridges vector autoregressions and multivariate ARCH models, with the residuals in the latter replaced by lagged levels. An empirical illustration using recent US term structure data is given in which the individual interest rates have unit roots and no finite first-order moments, but remain strictly stationary and ergodic, while they co-move in the sense that their spread has no unit root. The model thus allows for volatility-induced stationarity, and the paper shows conditions under which the multivariate process is strictly stationary and geometrically ergodic. Interestingly, these conditions include the case of unit roots and a reduced-rank structure in the conditional mean, known from linear co-integration to imply non-stationarity. Asymptotic theory of the maximum likelihood estimators for a particular structured case (so-called self-exciting) is provided, and it is shown that square-root T convergence to Gaussian distributions applies despite unit roots and the absence of finite first- and higher-order moments. Monte Carlo simulations confirm the usefulness of the asymptotics in finite samples. |
Keywords: | Vector Autoregression, Unit-Root, Reduced Rank, Volatility Induced Stationarity, Term Structure, Double Autoregression |
JEL: | C32 |
Date: | 2012–06–08 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-29&r=ets |
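The level-feedback mechanism described in the abstract above can be illustrated with a minimal univariate sketch. The parameterisation below (a self-exciting special case with Gaussian errors) is a hypothetical illustration, not the authors' multivariate specification:

```python
import numpy as np

# Lagged LEVEL enters both the conditional mean and the conditional
# variance (an assumed univariate special case):
#   x_t = rho * x_{t-1} + eps_t * sqrt(omega + alpha * x_{t-1}^2)
rng = np.random.default_rng(0)
rho, omega, alpha = 1.0, 1.0, 0.9   # unit root in the conditional mean
T = 10_000
x = np.zeros(T)
for t in range(1, T):
    sigma = np.sqrt(omega + alpha * x[t - 1] ** 2)
    x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
# Despite rho = 1, the level-dependent volatility keeps pulling the
# path back towards zero ("volatility induced stationarity").
```

With rho = 1 and alpha = 0, the recursion is an ordinary random walk; the level-dependent variance term is what restores strict stationarity.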
By: | Johannes Tang Kristensen (Aarhus University and CREATES) |
Abstract: | Macroeconomic forecasting using factor models estimated by principal components has become a popular research topic, with many theoretical and applied contributions in the literature. In this paper we attempt to address an often neglected issue in these models: the problem of outliers in the data. Most papers take an ad-hoc approach to this problem and simply screen datasets prior to estimation and remove anomalous observations. We investigate whether forecasting performance can be improved by using the original unscreened dataset and replacing principal components with a robust alternative. We propose an estimator based on least absolute deviations (LAD) as this alternative and establish a tractable method for computing the estimator. In addition, we demonstrate the robustness features of the estimator through a number of Monte Carlo simulation studies. Finally, we apply our proposed estimator in a simulated real-time forecasting exercise to test its merits. We use a newly compiled dataset of US macroeconomic series spanning the period 1971:2–2011:4. Our findings suggest that the chosen treatment of outliers does affect forecasting performance and that in many cases improvements can be made using a robust estimator such as our proposed LAD estimator. |
Keywords: | Forecasting, Factor Models, Principal Components Analysis, Robust Estimation, Least Absolute Deviations |
JEL: | C38 C53 E37 |
Date: | 2012–06–08 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-28&r=ets |
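For a single factor, an LAD alternative to principal components can be sketched as alternating weighted-median updates, since each one-regressor LAD problem has a weighted-median solution. This is a simplified illustration of the idea only; the paper's actual estimator, normalisation, and multi-factor treatment may differ:

```python
import numpy as np

def weighted_median(values, weights):
    # Minimiser of sum_i weights_i * |values_i - m| over m.
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def lad_one_factor(X, iters=50):
    # Alternating LAD fit of X[t, i] ~ f[t] * lam[i]: each update of
    # f[t] (or lam[i]) is a weighted median, because
    # sum_i |X[t,i] - lam_i f| = sum_i |lam_i| * |X[t,i]/lam_i - f|.
    T, N = X.shape
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    lam = Vt[0].copy()          # initialise with plain PCA loadings
    f = np.zeros(T)
    for _ in range(iters):
        for t in range(T):
            m = lam != 0
            f[t] = weighted_median(X[t, m] / lam[m], np.abs(lam[m]))
        for i in range(N):
            m = f != 0
            lam[i] = weighted_median(X[m, i] / f[m], np.abs(f[m]))
        lam = lam / np.linalg.norm(lam)   # fix the scale
    return f, lam

# Demo: one factor plus noise and a few gross outliers.
rng = np.random.default_rng(1)
T, N = 100, 20
f_true = rng.standard_normal(T)
lam_true = rng.standard_normal(N)
X = np.outer(f_true, lam_true) + 0.1 * rng.standard_normal((T, N))
X[5, 3] += 50.0     # outliers that would distort plain PCA
X[40, 7] -= 80.0
f_hat, lam_hat = lad_one_factor(X)
corr = abs(np.corrcoef(f_hat, f_true)[0, 1])
```

Because the L1 criterion grows linearly rather than quadratically in the residual, the two gross outliers have bounded influence on the fitted factor.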
By: | Genaro Sucarrat (BI Norwegian Business School); Álvaro Escribano (Universidad Carlos III de Madrid) |
Abstract: | Exponential models of autoregressive conditional heteroscedasticity (ARCH) are attractive in empirical analysis because they guarantee the non-negativity of volatility, and because they enable richer autoregressive dynamics. However, the currently available models exhibit stability only for a limited number of conditional densities, and the available estimation and inference methods in the case where the conditional density is unknown are valid only under very restrictive assumptions. Here, we provide results and simple methods that readily enable consistent estimation and inference of univariate and multivariate power log-GARCH(P,Q) models with time-varying correlations under very general and non-restrictive assumptions, via vector ARMA(P,Q) representations. Augmented by explanatory or exogenous regressors in the volatility specification(s), our empirical applications show that the models are particularly suited for complex modelling problems where many series and/or variables are involved. |
Keywords: | power ARCH; exponential GARCH; log-GARCH; multivariate GARCH; time-varying correlations |
JEL: | C22 C32 C51 C52 |
Date: | 2010–12–24 |
URL: | http://d.repec.org/n?u=RePEc:imd:wpaper:wp2010-25&r=ets |
By: | Phillip Wild (School of Economics, The University of Queensland); John Foster (School of Economics, The University of Queensland) |
Abstract: | In this paper, we present three nonparametric trispectrum tests that can establish whether the spectral decomposition of kurtosis of high-frequency financial asset price time series is consistent with the assumptions of Gaussianity, linearity and time reversibility. The detection of nonlinear and time-irreversible probabilistic structure has important implications for the choice and implementation of a range of models of the evolution of asset prices, including the Black-Scholes-Merton (BSM) option pricing model, ARCH/GARCH and stochastic volatility models. We apply the tests to a selection of high-frequency Australian (ASX) stocks. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:qld:uq2004:466&r=ets |
By: | Trenkler, Carsten; Weber, Enzo |
Abstract: | This paper investigates whether codependence restrictions can be uniquely imposed on VAR and VEC models via the so-called pseudo-structural form used in the literature. Codependence of order q is given if a linear combination of autocorrelated variables eliminates the serial correlation after q lags. Importantly, maximum likelihood estimation and likelihood ratio testing are only possible if the codependence restrictions can be uniquely imposed. Applying the pseudo-structural form, our study reveals that this is not generally the case, but that unique imposition is guaranteed in several important special cases. Moreover, we discuss further issues, in particular upper bounds of the codependence order. |
Keywords: | Codependence; VAR; cointegration; pseudo-structural form; serial correlation common features |
JEL: | C32 |
Date: | 2012–06–11 |
URL: | http://d.repec.org/n?u=RePEc:bay:rdwiwi:24776&r=ets |
By: | Uwe Hassler; Paulo M.M. Rodrigues; Antonio Rubia |
Abstract: | In this paper we derive a quantile regression approach to formally test for long memory in time series. We propose both individual and joint quantile tests, which are useful for determining the order of integration along the different percentiles of the conditional distribution and therefore allow us to address the overall hypothesis of fractional integration more robustly. The null distributions of these tests obey standard laws (e.g., standard normal) and are free of nuisance parameters. The finite sample validity of the approach is established through Monte Carlo simulations, showing, for instance, large power gains over several alternative procedures under non-Gaussian errors. An empirical application of the testing procedure to different measures of daily realized volatility is presented. Our analysis reveals several interesting features, but the main finding is that the suitability of a long-memory model with a constant order of integration around 0.4 cannot be rejected along the different percentiles of the distribution, which provides strong support for the existence of long memory in realized volatility from a completely new perspective. |
JEL: | C12 C22 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:ptu:wpaper:w201207&r=ets |
By: | Hayakawa, Kazuhiko (Hiroshima University); Pesaran, M. Hashem (University of Cambridge) |
Abstract: | This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models by Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial due to the incidental parameters problem that arises, and its implications for estimation and inference. We approach the problem by working with a mis-specified homoskedastic model. It is shown that the transformed maximum likelihood estimator continues to be consistent even in the presence of cross-sectional heteroskedasticity. We also obtain standard errors that are robust to cross-sectional heteroskedasticity of unknown form. By means of Monte Carlo simulation, we investigate the finite sample behavior of the transformed maximum likelihood estimator and compare it with various GMM estimators proposed in the literature. Simulation results reveal that, in terms of median absolute errors and accuracy of inference, the transformed likelihood estimator outperforms the GMM estimators in almost all cases. |
Keywords: | dynamic panels, cross-sectional heteroskedasticity, Monte Carlo simulation, GMM estimation |
JEL: | C12 C13 C23 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp6583&r=ets |
By: | Cristina G. de la Fuente; Pedro Galeano; Michael P. Wiper |
Abstract: | Financial returns often present moderate skewness and high kurtosis. As a consequence, it is natural to look for a model that is flexible enough to capture these characteristics. The proposal is to undertake inference for a generalized autoregressive conditional heteroskedastic (GARCH) model, where the innovations are assumed to follow a skew slash distribution. Both classical and Bayesian inference are carried out. Simulations and a real data example illustrate the performance of the proposed methodology. |
Keywords: | Financial returns, GARCH model, Kurtosis, Skew slash distribution, Skewness |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws121108&r=ets |
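A GARCH model with skew-slash innovations can be simulated directly. The construction below uses one common form of the skew slash distribution — an Azzalini skew-normal numerator divided by an independent Uniform(0,1)^(1/q) — as an illustrative assumption; the paper's exact parameterisation may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

def skew_slash(n, delta=0.5, q=5.0):
    # Skew-slash draws (assumed construction): the uniform divisor
    # fattens the tails, delta controls the asymmetry.
    z0 = np.abs(rng.standard_normal(n))
    z1 = rng.standard_normal(n)
    sn = delta * z0 + np.sqrt(1.0 - delta**2) * z1   # Azzalini skew-normal
    u = rng.uniform(size=n)
    return sn / u ** (1.0 / q)

# GARCH(1,1) recursion with skew-slash innovations:
#   r_t = sigma_t * eps_t,
#   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
omega, alpha, beta = 0.05, 0.05, 0.90
T = 5_000
eps = skew_slash(T)
r = np.zeros(T)
sig2 = np.full(T, omega / (1 - alpha - beta))   # start at unconditional level
for t in range(1, T):
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * eps[t]
```

With q = 5 the innovation has finite fourth moment but markedly excess kurtosis, which is the feature of financial returns the abstract points to.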
By: | Marcus Scheiblecker (WIFO) |
Abstract: | This paper proposes a cumulative error correction model where the summing weights follow a geometrically decreasing function of prior deviations from equilibrium and are estimated from the data. It is shown that this approach lies between the traditional error correction model – where no weight is given to deviations from the steady state prior to the most recent period – and the error correction model based on the idea of multicointegration. Unlike the multicointegration approach of Granger and Lee (1989), the presented form of accumulation does not change the order of integration of the series. Based on this model type, the relationship between US private consumption and the real disposable income of private households is estimated. The short-run forces offsetting last period's deviations are much smaller than a VEC and a conventional single-equation ECM suggest. Furthermore, the proposed model outperforms both in terms of forecasting power. |
Date: | 2012–06–14 |
URL: | http://d.repec.org/n?u=RePEc:wfo:wpaper:y:2012:i:431&r=ets |
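The geometrically weighted cumulation of equilibrium errors has a simple recursive form. The sketch below fixes the decay parameter for illustration (in the paper the weights are estimated from the data):

```python
import numpy as np

def discounted_ecm_term(ecm, gamma):
    # Geometrically weighted sum of past equilibrium errors,
    #   S_t = ecm_t + gamma*ecm_{t-1} + gamma^2*ecm_{t-2} + ...,
    # computed recursively as S_t = gamma * S_{t-1} + ecm_t.
    # gamma = 0 recovers the single most recent deviation, i.e. the
    # correction term of a traditional ECM.
    S = np.zeros(len(ecm))
    for t in range(len(ecm)):
        S[t] = (gamma * S[t - 1] if t > 0 else 0.0) + ecm[t]
    return S

# A unit deviation from equilibrium fades geometrically:
S = discounted_ecm_term(np.array([1.0, 0.0, 0.0, 0.0]), gamma=0.5)
# S is [1, 0.5, 0.25, 0.125]
```

Because the cumulation is a stable (|gamma| < 1) filter rather than a plain running sum, it leaves the order of integration of the series unchanged, consistent with the abstract's contrast to multicointegration.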
By: | John M. Maheu (Department of Economics, University of Toronto, Canada; RCEA, Italy); Yong Song (CenSoC, University of Technology, Sydney, Australia; RCEA, Italy) |
Abstract: | This paper develops an efficient approach to model and forecast time-series data with an unknown number of change-points. Using a conjugate prior and conditional on time-invariant parameters, the predictive density and the posterior distribution of the change-points have closed forms. The conjugate prior is further modeled as hierarchical to exploit the information across regimes. This framework allows breaks in the variance, the regression coefficients or both. Regime duration can be modelled as a Poisson distribution. A new efficient Markov chain Monte Carlo sampler draws the parameters as one block from the posterior distribution. An application to a Canadian inflation time series shows the gains in forecasting precision that our model provides. |
Keywords: | multiple change-points, regime duration, inflation targeting, predictive density, MCMC |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:27_12&r=ets |
By: | K.M. Abadir (Imperial College London, UK); W. Distaso (Imperial College London, UK); L. Giraitis (Queen Mary, University of London, UK); H.L. Koul (Michigan State University, USA) |
Abstract: | We establish asymptotic normality of weighted sums of stationary linear processes with general triangular array weights and when the innovations in the linear process are martingale differences. The results are obtained under minimal conditions on the weights and as long as the process of conditional variances of innovations is covariance stationary with lag k auto-covariances tending to zero, as k tends to infinity. We also obtain weak convergence of weighted partial sum processes. The results are applicable to linear processes that have short or long memory or exhibit seasonal long memory behavior. In particular they are applicable to GARCH and ARCH(∞) models. They are also useful in deriving asymptotic normality of kernel estimators of a nonparametric regression function when errors may have long memory. |
Keywords: | Linear process, weighted sum, Lindeberg-Feller |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:23_12&r=ets |
By: | Karim M. Abadir (Imperial College London, UK); Rolf Larsson (Uppsala University, Sweden) |
Abstract: | We derive the relation between the biases of correlograms and of estimates of auto-regressive AR(k) representations of stationary series, and we illustrate it with a simple AR example. The new relation allows for k to vary with the sample size, which is a representation that can be used for most stationary processes. As a result, the biases of the estimators of such processes can now be quantified explicitly and in a unified way. |
Keywords: | Auto-correlation function (ACF) and correlogram, autoregressive (AR) representation, least-squares bias |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:24_12&r=ets |
By: | Shu-Ping Shi (Australian National University, Australia); Yong Song (University of Technology Sydney, Australia) |
Abstract: | This paper proposes an infinite hidden Markov model (iHMM) to detect, date stamp, and estimate speculative bubbles. Three features make this new approach attractive to practitioners. First, the iHMM is capable of capturing the nonlinear dynamics of different types of bubble behaviors, as it allows an infinite number of regimes. Second, the implementation of this procedure is straightforward, as the detection, dating, and estimation of bubbles are done simultaneously in a coherent Bayesian framework. Third, the iHMM, by assuming hierarchical structures, is parsimonious and superior in out-of-sample forecasting. Two empirical applications are presented: one to the Argentinian money base, exchange rate, and consumer prices from January 1983 to November 1989; and the other to the U.S. oil price from April 1983 to December 2010. We find prominent results that have not been uncovered by existing finite hidden Markov models. Model comparison shows that the iHMM is strongly supported by the predictive likelihood. |
Keywords: | speculative bubbles, infinite hidden Markov model, Dirichlet process |
JEL: | C11 C15 |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:26_12&r=ets |
By: | Yong Song (University of Technology, Sydney, Australia; RCEA, Italy) |
Abstract: | This paper proposes an infinite hidden Markov model to integrate the regime switching and the structural break dynamics in a single, coherent Bayesian framework. Two parallel hierarchical structures, one governing the transition probabilities and another governing the parameters of the conditional data density, keep the model parsimonious and improve forecasts. This flexible approach allows for regime persistence and estimates the number of states automatically. A global identification methodology for structural changes versus regime switching is presented. An application to U.S. real interest rates compares the new model to existing parametric alternatives. |
Keywords: | Markov switching, structural break, Dirichlet process, infinite hidden Markov model |
JEL: | C11 C53 |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:28_12&r=ets |
By: | Yves Dominicy; Siegfried Hörmann; Hiroaki Ogata; David Veredas |
Abstract: | We establish the asymptotic normality of marginal sample quantiles for S–mixing vector stationary processes. S–mixing is a recently introduced and widely applicable notion of dependence. Results of some Monte Carlo simulations are given. |
Keywords: | Quantiles; S-mixing |
Date: | 2012–06 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/119605&r=ets |
By: | Elena Andreou; Eric Ghysels; Constantinos Kourouyiannis |
Abstract: | Financial time series often undergo periods of structural change that yield biased estimates or forecasts of volatility and thereby of risk management measures. We show that, in the context of GARCH diffusion models, ignoring structural breaks in the leverage coefficient and the constant can lead to biased and inefficient AR-RV and GARCH-type volatility estimates. Similarly, we find that volatility forecasts based on AR-RV and GARCH-type models that take structural breaks into account, by estimating the parameters only in the post-break period, significantly outperform those that ignore them. Hence, we propose a Flexible Forecast Combination method that takes into account not only information from different volatility models, but from different subsamples as well. The method consists of two main steps: first, it splits the estimation period into subsamples based on structural breaks detected by a change-point test; second, it forecasts volatility by weighting information from all subsamples so as to minimize a particular loss function, such as the Squared Error or QLIKE. An empirical application using the S&P 500 Index shows that our approach performs better, especially in periods of high volatility, than a large set of individual volatility models, simple averaging methods, and Forecast Combinations under Regime Switching. |
Keywords: | forecast, combinations, volatility, structural breaks |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:ucy:cypeua:08-2012&r=ets |
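The combination step under a squared-error loss reduces to a least-squares fit of the target on the candidate forecasts. The sketch below is a simplified illustration under that assumption only (the paper's method also covers QLIKE loss and subsample-specific candidate models); the variable names and the toy break scenario are hypothetical:

```python
import numpy as np

def combine_forecasts(F, target):
    # Squared-error-optimal combination weights: minimize
    # sum_t (target_t - F_t @ w)^2 by unconstrained least squares.
    w, *_ = np.linalg.lstsq(F, target, rcond=None)
    return w

# Toy example: two candidate volatility forecasts, one estimated on
# the full sample and one on an assumed post-break subsample.
rng = np.random.default_rng(3)
T = 200
true_vol = 1.0 + 0.5 * rng.uniform(size=T)
f_full = true_vol + 0.3 * rng.standard_normal(T)   # noisier candidate
f_post = true_vol + 0.1 * rng.standard_normal(T)   # sharper candidate
F = np.column_stack([f_full, f_post])
w = combine_forecasts(F, true_vol)
combo = F @ w
mse_combo = np.mean((true_vol - combo) ** 2)
mse_full = np.mean((true_vol - f_full) ** 2)
```

In-sample, the least-squares combination can never do worse than any single candidate, since each candidate corresponds to a feasible weight vector.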
By: | Elena Andreou; Constantinos Kourouyiannis; Andros Kourtellos |
Abstract: | The paper deals with the problem of model uncertainty in forecasting volatility, using forecast combinations and a flexible family of asymmetric loss functions that allow for the possibility that an investor would attach different preferences to high vis-à-vis low volatility periods. Using daily as well as 5-minute data for US and major international stock market indices, we provide volatility forecasts by minimizing the Homogeneous Robust Loss function of the Realized Volatility and the combined forecast. Our findings show that forecast combinations based on the homogeneous robust loss function significantly outperform simple forecast combination methods, especially during the recent financial crisis. |
Keywords: | asymmetric loss functions, forecast combinations, realized volatility, volatility forecasting |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:ucy:cypeua:07-2012&r=ets |
By: | Chan, Joshua; Strachan, Rodney |
Abstract: | In recent years state space models, particularly the linear Gaussian version, have become the standard framework for analyzing macroeconomic and financial data. However, many theoretically motivated models imply non-linear or non-Gaussian specifications, or both. Existing methods for estimating such models are computationally intensive, and often cannot be applied to models with more than a few states. Building upon recent developments in precision-based algorithms, we propose a general approach to estimating high-dimensional non-linear, non-Gaussian state space models. The baseline algorithm approximates the conditional distribution of the states by a multivariate Gaussian or t density, which is then used for posterior simulation. We further develop this baseline algorithm to construct more sophisticated samplers with attractive properties: one based on the accept-reject Metropolis-Hastings (ARMH) algorithm, and another adaptive collapsed sampler inspired by the cross-entropy method. To illustrate the proposed approach, we investigate the effect of the zero lower bound on interest rates on the monetary transmission mechanism. |
Keywords: | integrated likelihood; accept-reject Metropolis-Hastings; cross-entropy; liquidity trap; zero lower bound |
JEL: | C32 E52 C15 C11 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:39360&r=ets |
By: | Simwaka, Kisu |
Abstract: | The concept of fractional cointegration (Engle and Granger, 1987) generalizes traditional cointegration to the long-memory framework. In this paper, we extend the fractional cointegration model of Johansen (2008) and propose a time-varying framework in which the fractional cointegrating relationship varies over time. The Johansen (2008) fractional cointegration setup is then treated as a special case of our model. |
Keywords: | Time-varying fractional cointegration |
JEL: | C32 C10 |
Date: | 2012–06–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:39505&r=ets |