New Economics Papers on Econometric Time Series
By: | Rasmus Tangsgaard Varneskov (Aarhus University and CREATES) |
Abstract: | This paper extends the class of generalized flat-top realized kernels, introduced in Varneskov (2011), to the multivariate case, where quadratic covariation of non-synchronously observed asset prices is estimated in the presence of market microstructure noise that is allowed to exhibit serial dependence and to be correlated with the efficient price process. Estimators in this class are shown to possess desirable statistical properties such as consistency, asymptotic normality, and asymptotic unbiasedness at an optimal n^(1/4)-convergence rate. A finite sample correction based on projections of symmetric matrices ensures positive (semi-)definiteness without altering the asymptotic properties of the class of estimators. The finite sample correction admits non-linear transformations of the estimated covariance matrix such as correlations and realized betas, and it can be used in portfolio optimization problems. These transformations are all shown to inherit the desirable asymptotic properties of the generalized flat-top realized kernels. A simulation study shows that the class of estimators has a superior finite sample tradeoff between bias and root mean squared error relative to competing estimators. Lastly, two small empirical applications to high frequency stock market data illustrate the bias reduction relative to competing estimators in estimating correlations, realized betas, and mean-variance frontiers, as well as the use of the new estimators in the dynamics of hedging. |
Keywords: | Bias Reduction, Nonparametric Estimation, Market Microstructure Noise, Portfolio Optimization, Quadratic Covariation, Realized Beta. |
JEL: | C14 C15 G11 |
Date: | 2011–09–27 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2011-35&r=ets |
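To make the mechanics behind the abstract above concrete, here is a minimal sketch of a flat-top realized kernel estimate of quadratic covariation for two synchronized, noisy price series. The linear taper, the bandwidth rule, the simulated noise, and the absence of refresh-time synchronization are all illustrative assumptions; the generalized flat-top class and the positive semi-definite correction studied in the paper are more elaborate.

```python
# A minimal sketch of a flat-top realized kernel estimator of quadratic covariation.
import numpy as np

def realized_autocovariance(returns, h):
    """h-th realized autocovariance matrix of an (n x d) array of returns."""
    n = returns.shape[0]
    if h >= 0:
        return returns[h:].T @ returns[:n - h]
    return realized_autocovariance(returns, -h).T

def flat_top_weight(x, c=0.1):
    """Flat-top taper: equal to 1 near the origin, then decaying linearly to 0."""
    if x <= c:
        return 1.0
    return max(0.0, (1.0 - x) / (1.0 - c))

def flat_top_realized_kernel(returns, bandwidth):
    """Kernel-weighted sum of realized autocovariances (d x d matrix)."""
    rk = realized_autocovariance(returns, 0).astype(float)
    for h in range(1, bandwidth + 1):
        w = flat_top_weight(h / (bandwidth + 1))
        gamma = realized_autocovariance(returns, h)
        rk += w * (gamma + gamma.T)
    return rk

# Example: two noisy, correlated price series observed once per second for a day.
rng = np.random.default_rng(0)
n, rho = 23400, 0.5
step_cov = np.array([[1.0, rho], [rho, 1.0]]) / n          # daily QV = [[1, .5], [.5, 1]]
efficient = rng.multivariate_normal(np.zeros(2), step_cov, size=n).cumsum(axis=0)
noisy = efficient + 0.001 * rng.standard_normal((n, 2))    # i.i.d. microstructure noise
returns = np.diff(noisy, axis=0)

print(flat_top_realized_kernel(returns, bandwidth=int(n ** 0.5)))
```

Note that a raw kernel estimate of this kind is not guaranteed to be positive semi-definite in finite samples, which is exactly the gap the paper's projection-based correction is designed to close.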
By: | Markus Eberhardt (University of Oxford) |
Abstract: | Stata already has an extensive range of built-in and user-written commands for analyzing xt (cross-sectional time-series) data. However, most of these commands do not take into account important features of the data relating to their time-series properties or cross-sectional dependence. This talk reviews the recent literature concerned with these features with reference to the types of data in which they arise. Most of the talk will be spent discussing and illustrating various Stata commands for analyzing these types of data, including several new user-written commands. The talk should be of general interest to users of xt data and of particular interest to researchers with panel datasets in which countries or regions are the unit of analysis and there is also a substantial time-series element. Over the past two decades, a literature dedicated to the analysis of macro panel data has concerned itself with some of the idiosyncrasies of this type of data, including variable nonstationarity and cointegration, as well as with the investigation of possible parameter heterogeneity across panel members and its implications for estimation and inference. Most recently, this literature has turned its attention to concerns over cross-sectional dependence, which can arise either in the form of unobservable global shocks that differ in their impact across countries (for example, the recent financial crisis) or as spillover effects (again, unobservable) between a subset of countries or regions. |
Date: | 2011–09–26 |
URL: | http://d.repec.org/n?u=RePEc:boc:usug11:22&r=ets |
By: | Barbara Rossi |
Abstract: | The forecasting literature has identified two important, broad issues. The first stylized fact is that the predictive content is unstable over time; the second is that in-sample predictive content does not necessarily translate into out-of-sample predictive ability, nor does it ensure the stability of the predictive relation over time. The objective of this chapter is to understand what we have learned about forecasting in the presence of instabilities, especially regarding the two questions above. The empirical evidence raises a multitude of questions. If in-sample tests provide poor guidance to out-of-sample forecasting ability, what should researchers do? If there are statistically significant instabilities in Granger-causality relationships, how do researchers establish whether there is any Granger-causality at all? And if there is substantial instability in predictive relationships, how do researchers establish which model is the "best" forecasting model? Finally, if a model forecasts poorly, why is that, and how should researchers proceed to improve their forecasting models? In this chapter, we answer these questions by discussing various methodologies for inference as well as estimation that have been recently proposed in the literature. We also provide an empirical analysis of the usefulness of the existing methodologies using an extensive database of macroeconomic predictors of output growth and inflation. |
JEL: | C53 C22 C01 E2 E27 E37 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:duk:dukeec:11-20&r=ets |
By: | Wolfgang Rinnergschwentner; Gottfried Tappeiner; Janette Walde |
Abstract: | This paper picks up on a model developed by Philipov and Glickman (2006) for modeling multivariate stochastic volatility via Wishart processes. MCMC simulation from the posterior distribution is employed to fit the model. However, erroneous mathematical transformations in the full conditionals lead to an incorrect implementation of the approach. We adjust the model, upgrade the analysis, and investigate the statistical properties of the estimators using an extensive Monte Carlo study. Employing a Gibbs sampler in combination with a Metropolis-Hastings algorithm, inference for the time-dependent covariance matrix is feasible with appropriate statistical properties. |
Keywords: | Bayesian time series; Stochastic covariance; Time-varying correlation; Markov Chain Monte Carlo |
JEL: | C01 C11 C63 |
Date: | 2011–08 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2011-19&r=ets |
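The remedy described in the abstract above rests on combining a Gibbs sampler with a Metropolis-Hastings step. The skeleton below shows that generic Metropolis-within-Gibbs pattern on a deliberately simple stand-in posterior (Gaussian likelihood with flat priors), not on the Philipov-Glickman Wishart process model itself; target, proposal scale, and priors are illustrative assumptions.

```python
# A generic Metropolis-within-Gibbs skeleton: blocks with tractable full
# conditionals are drawn directly (Gibbs), the rest via random-walk MH.
import numpy as np

rng = np.random.default_rng(1)

def log_target(mu, log_sigma, data):
    """Toy log-posterior: Gaussian likelihood, flat priors (illustration only)."""
    sigma = np.exp(log_sigma)
    return -len(data) * log_sigma - 0.5 * np.sum((data - mu) ** 2) / sigma ** 2

data = rng.normal(1.5, 2.0, size=200)
mu, log_sigma = 0.0, 0.0
draws = []
for it in range(5000):
    # Gibbs step: the full conditional of mu given sigma is Gaussian.
    sigma = np.exp(log_sigma)
    mu = rng.normal(data.mean(), sigma / np.sqrt(len(data)))
    # Metropolis-Hastings step for log_sigma (no closed-form conditional assumed).
    prop = log_sigma + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_target(mu, prop, data) - log_target(mu, log_sigma, data):
        log_sigma = prop
    draws.append((mu, np.exp(log_sigma)))

print(np.mean(draws, axis=0))   # posterior means of mu and sigma
```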
By: | Md Atikur Rahman Khan; D.S. Poskitt |
Abstract: | In this paper we propose a new methodology for selecting the window length in Singular Spectrum Analysis, in which the window length is determined from the data prior to the commencement of modeling. The selection procedure is based on statistical tests designed to test the convergence of the autocovariance function. A classical time series portmanteau-type statistic and two test statistics derived using a conditional moment principle are considered. The first two are applicable to short-memory processes, and the third is applicable to both short- and long-memory processes. We derive the asymptotic distribution of the statistics under fairly general regularity conditions and show that the criteria will identify true convergence with a finite window length with probability one as the sample size increases. Results obtained using Monte Carlo simulation indicate the relevance of the asymptotic theory, even in relatively small samples, and that the conditional moment tests will choose a window length consistent with the Whitney embedding theorem. Application to observations on the Southern Oscillation Index shows how observed experimental behaviour can be reflected in features seen in real-world data sets. |
Keywords: | Portmanteau type test, Conditional moment test, Asymptotic distribution, Linear regular process, Singular spectrum analysis, Embedding |
JEL: | C12 C22 C52 |
Date: | 2011–09 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2011-22&r=ets |
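As a loose illustration of the idea in the abstract above — choosing the window length from the convergence of the autocovariance function — the sketch below picks the smallest lag beyond which a block of sample autocorrelations is jointly insignificant according to a Ljung-Box-type statistic. The block size, significance level, and chi-squared approximation are assumptions of this simple analogue; the paper's portmanteau and conditional moment tests and their asymptotic theory differ.

```python
# A rough data-driven window-length rule based on "converged" autocovariances.
import numpy as np
from scipy import stats

def sample_autocorr(x, max_lag):
    x = np.asarray(x) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[h:] * x[:-h]) / denom for h in range(1, max_lag + 1)])

def window_length(x, max_lag=50, block=10, alpha=0.05):
    """Smallest L such that autocorrelations at lags L+1,...,L+block look like zero."""
    n = len(x)
    rho = sample_autocorr(x, max_lag + block)
    for L in range(1, max_lag + 1):
        lags = np.arange(L + 1, L + block + 1)
        q = n * (n + 2) * np.sum(rho[L:L + block] ** 2 / (n - lags))
        if stats.chi2.sf(q, df=block) > alpha:      # fail to reject: treat as converged
            return L
    return max_lag

# AR(1) example: autocovariances decay geometrically.
rng = np.random.default_rng(2)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + e[t]
print("selected window length:", window_length(x))
```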
By: | Daisuke Nagakura; Toshiaki Watanabe |
Abstract: | We call the realized variance (RV) calculated with observed prices contaminated by (market) microstructure noise (MN) the noise-contaminated RV (NCRV), and refer to the bias component in the NCRV associated with the MN as the MN component. This paper develops a state space method for estimating the integrated variance (IV) and the MN component. We represent the NCRV in a state space form and show that, although the state space form parameters are not identifiable, they can be expressed as functions of identifiable parameters. We illustrate how to estimate these parameters. We apply the proposed method to yen/dollar exchange rate data and find that most of the variation in the NCRV is due to the MN component. The proposed method also serves as a convenient way of estimating a general class of continuous-time stochastic volatility (SV) models in the presence of MN. |
Keywords: | Realized Variance, Integrated Variance, Microstructure Noise, State Space, Identification, Exchange Rate |
JEL: | C13 C22 C53 |
Date: | 2011–08 |
URL: | http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11-200&r=ets |
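A minimal state-space sketch in the spirit of the abstract above: treat the daily noise-contaminated realized variance as a persistent latent IV component plus an additive noise-induced component, and extract the latent IV with a Kalman filter. The AR(1) state, the Gaussian errors, and the known, fixed parameters are assumptions made for illustration, not the paper's representation or identification scheme.

```python
# Kalman filtering of a latent IV component out of a noisy daily RV series.
import numpy as np

def kalman_filter(y, phi, c, q, r):
    """State: x_t = c + phi*x_{t-1} + w_t, w~N(0,q); observation: y_t = x_t + v_t, v~N(0,r)."""
    n = len(y)
    x, p = c / (1 - phi), q / (1 - phi ** 2)      # start at the stationary moments
    filtered = np.empty(n)
    for t in range(n):
        x, p = c + phi * x, phi ** 2 * p + q      # predict
        k = p / (p + r)                           # Kalman gain
        x, p = x + k * (y[t] - x), (1 - k) * p    # update
        filtered[t] = x
    return filtered

# Simulate a persistent IV process and a noise-contaminated RV, then filter.
rng = np.random.default_rng(3)
n, phi, c = 500, 0.95, 0.05
iv = np.empty(n)
iv[0] = c / (1 - phi)
for t in range(1, n):
    iv[t] = c + phi * iv[t - 1] + 0.05 * rng.standard_normal()
ncrv = iv + 0.3 * rng.standard_normal(n)          # noise-induced component dominates
iv_hat = kalman_filter(ncrv, phi, c, q=0.05 ** 2, r=0.3 ** 2)
print("RMSE of filtered IV:", np.sqrt(np.mean((iv_hat - iv) ** 2)))
```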
By: | Joanna Janczura; Sebastian Orzel; Agnieszka Wylomanska |
Abstract: | Classical financial models are based on standard Brownian diffusion-type processes. However, some real market data (such as interest or exchange rates) exhibit characteristic periods of constant values. Moreover, in the case of financial data, the assumption of normality is often not satisfied. In such cases the popular Vasicek model, a mathematical system describing the evolution of interest rates based on the Ornstein-Uhlenbeck process, does not seem applicable. Therefore we propose an alternative approach based on a combination of the popular Ornstein-Uhlenbeck process with a stable distribution and with subdiffusion systems that exhibit such characteristic behavior. The probability density function of the proposed process can be described by a Fokker-Planck type equation, and therefore it can be examined as an extension of the basic Ornstein-Uhlenbeck model. In this paper we propose a parameter estimation method and calibrate the subordinated Vasicek model to interest rate data. |
Keywords: | Vasicek model; Ornstein-Uhlenbeck process; Alpha-stable distribution; Subdiffusion; Estimation; Calibration; Interest rates |
JEL: | C16 C51 E43 E47 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1103&r=ets |
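A simulation sketch of the kind of model described in the abstract above: an Ornstein-Uhlenbeck (Vasicek-type) process driven by alpha-stable noise, read on the operational time given by an inverse stable subordinator, which is what produces the characteristic constant-value periods. The parameter values, the Euler discretization, and the first-passage construction of the inverse subordinator are illustrative assumptions, not the paper's estimation or calibration method.

```python
# Subordinated alpha-stable Ornstein-Uhlenbeck (Vasicek-type) sample path.
import numpy as np
from scipy.stats import levy_stable

seed = 4
n, dt = 5000, 0.01
theta, mu, sigma = 1.0, 0.05, 0.02         # mean-reversion speed, long-run level, scale
alpha_noise, alpha_sub = 1.8, 0.9          # stability index of driving noise / subordinator

# 1) Vasicek/OU dynamics driven by symmetric alpha-stable increments (Euler scheme;
#    stable increments over dt scale like dt**(1/alpha)).
noise = levy_stable.rvs(alpha_noise, 0.0, size=n - 1, random_state=seed)
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dt ** (1 / alpha_noise) * noise[t - 1]

# 2) Operational clock: a totally skewed stable subordinator (beta=1, alpha<1, so the
#    increments are positive) and its inverse obtained by first passage.
incr = levy_stable.rvs(alpha_sub, 1.0, size=n, random_state=seed + 1) * dt ** (1 / alpha_sub)
s = np.cumsum(incr)                        # nondecreasing subordinator path
t_grid = np.linspace(0, s[-1], n)          # physical time
e = np.searchsorted(s, t_grid)             # inverse subordinator (index form)
subordinated = x[np.minimum(e, n - 1)]     # constant stretches appear where s jumps

print(subordinated[:10])
```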
By: | Ronny Nilsson; Gyorgy Gyomai |
Abstract: | This paper reports on the revision properties of different de-trending and smoothing methods (cycle estimation methods), including PAT with MCD smoothing, a double Hodrick-Prescott (HP) filter and the Christiano-Fitzgerald (CF) filter. The different cycle estimation methods are rated on their revision performance in a simulated real-time experiment. Our goal is to find a robust method that gives early and stable turning point signals. The revision performance of the methods has been evaluated according to bias, overall revision size and signal stability measures. In a second phase, we investigate whether revision performance is improved by using stabilizing forecasts or by changing the cycle estimation window from the baseline 6 and 96 months (i.e. filtering out high-frequency noise with a cycle length shorter than 6 months and removing trend components with a cycle length longer than 96 months) to 12 and 120 months. The results show that, for all tested time series, the PAT de-trending method is outperformed by both the HP and CF filters. In addition, the results indicate that the HP filter outperforms the CF filter in turning point signal stability but has a weaker performance in absolute numerical precision. Short-horizon stabilizing forecasts tend to improve the revision characteristics of both methods, and the changed filter window also delivers more robust turning point estimates. |
Date: | 2011–05–27 |
URL: | http://d.repec.org/n?u=RePEc:oec:stdaaa:2011/4-en&r=ets |
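A small sketch of the band-pass idea described in the abstract above: isolate cycles between roughly 6 and 96 months either with a double Hodrick-Prescott filter (one pass removes the long-run trend, a second pass smooths out the high-frequency noise) or with the Christiano-Fitzgerald filter directly. The simulated series and the lambda-to-cutoff rule lambda = (2*sin(pi/tau))**-4 (the usual 50%-gain heuristic) are assumptions of this sketch rather than details taken from the paper.

```python
# Double-HP and Christiano-Fitzgerald extraction of a 6-96 month cycle band.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.filters.cf_filter import cffilter

def hp_lambda(cutoff_months):
    """HP smoothing parameter whose gain is 0.5 at the given cutoff period."""
    return (2 * np.sin(np.pi / cutoff_months)) ** -4

rng = np.random.default_rng(5)
months = 240
t = np.arange(months)
series = 0.02 * t + np.sin(2 * np.pi * t / 40) + 0.3 * rng.standard_normal(months)

# Double HP: remove trend components slower than 96 months, then noise faster than 6 months.
cycle_plus_noise, _ = hpfilter(series, lamb=hp_lambda(96))
_, double_hp_cycle = hpfilter(cycle_plus_noise, lamb=hp_lambda(6))

# Christiano-Fitzgerald band-pass over the same 6-96 month band.
cf_cycle, _ = cffilter(series, low=6, high=96, drift=True)

print(np.corrcoef(double_hp_cycle, cf_cycle)[0, 1])
```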
By: | Peter Fuleky (UHERO and Department of Economics, University of Hawaii); Carl S. Bonham (UHERO and Department of Economics, University of Hawaii) |
Abstract: | We extend the existing literature on small mixed frequency single factor models by allowing for multiple factors, considering indicators in levels, and allowing for cointegration among the indicators. We capture the cointegrating relationships among the indicators by common factors modeled as stochastic trends. We show that the stationary single-factor model frequently used in the literature is misspecified if the data set contains common stochastic trends. We find that taking advantage of common stochastic trends improves forecasting performance over a stationary single-factor model. The common-trends factor model outperforms the stationary single-factor model at all analyzed forecast horizons on a root mean squared error basis. Our results suggest that when the constituent indicators are integrated and cointegrated, modeling common stochastic trends, as opposed to eliminating them, will improve forecasts. |
Keywords: | Dynamic Factor Model, Mixed Frequency Samples, Common Trends, Forecasting, Tourism Industry |
JEL: | E37 C32 C53 L83 |
Date: | 2011–06–13 |
URL: | http://d.repec.org/n?u=RePEc:hai:wpaper:201110&r=ets |
By: | Peter Fuleky (UHERO and Department of Economics, University of Hawaii) |
Abstract: | When estimating the parameters of a process, researchers can choose the reference unit of time (unit period) for their study. Frequently, they set the unit period equal to the observation interval. However, I show that decoupling the unit period from the observation interval facilitates the comparison of parameter estimates across studies with different data sampling frequencies. If the unit period is standardized (for example annualized) across these studies, then the parameters will represent the same attributes of the underlying process, and their interpretation will be independent of the sampling frequency. |
Keywords: | Unit Period, Sampling Frequency, Bias, Time Series |
JEL: | C13 C22 C51 C82 |
Date: | 2011–08–14 |
URL: | http://d.repec.org/n?u=RePEc:hai:wpaper:201111&r=ets |
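A numerical illustration of the point made in the abstract above: estimate an AR(1) coefficient from the same underlying process sampled monthly and quarterly, then express the mean-reversion parameter per year (unit period = one year) rather than per observation interval. The annualized figures agree across sampling frequencies while the raw AR(1) coefficients do not. The AR(1)/Ornstein-Uhlenbeck setup is an assumed example, not the paper's application.

```python
# Same process, two sampling frequencies: only the annualized parameter is comparable.
import numpy as np

def fit_ar1(x):
    """OLS slope of x_t on x_{t-1} (zero-mean process, no intercept)."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

rng = np.random.default_rng(6)
kappa_true = 1.2                      # annual mean-reversion rate of the latent OU process
n_years, steps_per_year = 200, 12
dt = 1.0 / steps_per_year
n = n_years * steps_per_year

# Simulate the OU process on a monthly grid (exact discretization, unit volatility).
phi_m = np.exp(-kappa_true * dt)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_m * x[t - 1] + np.sqrt((1 - phi_m ** 2) / (2 * kappa_true)) * rng.standard_normal()

monthly, quarterly = x, x[::3]
for label, series, delta in [("monthly", monthly, 1 / 12), ("quarterly", quarterly, 1 / 4)]:
    phi_hat = fit_ar1(series)
    print(f"{label}: AR(1) coefficient = {phi_hat:.3f}, "
          f"annualized mean reversion = {-np.log(phi_hat) / delta:.3f}")
```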
By: | Todd Clark; Michael W. McCracken |
Abstract: | This paper surveys recent developments in the evaluation of point forecasts. Taking West’s (2006) survey as a starting point, we briefly cover the state of the literature as of the time of West’s writing. We then focus on recent developments, including advancements in the evaluation of forecasts at the population level (based on true, unknown model coefficients), the evaluation of forecasts in the finite sample (based on estimated model coefficients), and the evaluation of conditional versus unconditional forecasts. We present original results in a few subject areas: the optimization of power in determining the split of a sample into in-sample and out-of-sample portions; whether the accuracy of inference in evaluation of multistep forecasts can be improved with the judicious choice of HAC estimator (it can); and the extension of West’s (1996) theory results for population-level, unconditional forecast evaluation to the case of conditional forecast evaluation. |
Keywords: | Forecasting ; Time-series analysis |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1120&r=ets |
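A compact sketch of the basic out-of-sample exercise discussed in the survey above: split the sample, produce recursive one-step forecasts from two competing models, and compare them through the squared-error loss differential with a HAC (Newey-West) variance. This is the generic Diebold-Mariano-style calculation under an assumed data-generating process; it is not the population-level or finite-sample tests the survey covers, and for nested models the inference would need the adjustments discussed there.

```python
# Recursive out-of-sample forecast comparison with a HAC-studentized loss differential.
import numpy as np

def newey_west_var(d, lags):
    """HAC (Bartlett-kernel) estimate of the variance of the mean of d."""
    d = d - d.mean()
    n = len(d)
    v = np.sum(d ** 2) / n
    for h in range(1, lags + 1):
        w = 1 - h / (lags + 1)
        v += 2 * w * np.sum(d[h:] * d[:-h]) / n
    return v / n

rng = np.random.default_rng(7)
n, split = 400, 200
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = rng.standard_normal()
y[1:] = 0.5 * x[:-1] + rng.standard_normal(n - 1)          # x has predictive content for y

e1, e2 = [], []
for t in range(split, n - 1):
    f1 = y[:t + 1].mean()                                  # benchmark: recursive mean
    beta = np.sum(x[:t] * y[1:t + 1]) / np.sum(x[:t] ** 2) # recursive predictive regression
    f2 = beta * x[t]
    e1.append(y[t + 1] - f1)
    e2.append(y[t + 1] - f2)

d = np.array(e1) ** 2 - np.array(e2) ** 2                  # squared-error loss differential
t_stat = d.mean() / np.sqrt(newey_west_var(d, lags=4))
print("mean loss differential:", round(d.mean(), 4), " t-statistic:", round(t_stat, 2))
```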
By: | Todd Clark; Michael W. McCracken |
Abstract: | This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two-step procedure is conservative while the one-step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting U.S. real GDP growth. |
Keywords: | Forecasting |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1121&r=ets |
By: | Markus Jochmann (Newcastle University, UK; The Rimini Centre for Economic Analysis (RCEA), Italy); Gary Koop (University of Strathclyde, UK; The Rimini Centre for Economic Analysis (RCEA), Italy) |
Abstract: | We develop methods for Bayesian inference in vector error correction models which are subject to a variety of switches in regime (e.g. Markov switches in regime or structural breaks). An important aspect of our approach is that we allow both the cointegrating vectors and the number of cointegrating relationships to change when the regime changes. We show how Bayesian model averaging or model selection methods can be used to deal with the high-dimensional model space that results. Our methods are used in an empirical study of the Fisher effect. |
Keywords: | Bayesian, Markov switching, structural breaks, cointegration, model averaging |
JEL: | C11 C32 C52 |
Date: | 2011–09 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:40_11&r=ets |
By: | Vitor Castro (University of Coimbra and NIPE) |
Abstract: | This paper tries to identify, for the first time, a chronology for the Portuguese stock market cycle and test for the presence of duration dependence in bull and bear markets. A duration dependent Markov-switching model is estimated over monthly growth rates of the Portuguese Stock Index for the period 1989-2010. Six episodes of bull/bear markets are identified during that period, as well as the presence of positive duration dependence in bear but not in bull markets. |
Keywords: | stock market cycles; bull and bear markets; duration dependence; Markov-switching. |
JEL: | E32 G19 C41 C24 |
Date: | 2011–09 |
URL: | http://d.repec.org/n?u=RePEc:gmf:wpaper:2011-17&r=ets |
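A tiny simulation of what "positive duration dependence" in the abstract above means in a two-state (bull/bear) switching setting: the probability of exiting the bear state increases with the time already spent in it, while the bull state's exit probability is held constant. The logistic form and all parameter values are illustrative assumptions, not the estimates in the paper.

```python
# Two-state regime simulation with duration-dependent exit probability in the bear state.
import numpy as np

def exit_prob(state, duration):
    if state == 0:                                   # bear: positive duration dependence
        return 1 / (1 + np.exp(2.5 - 0.15 * duration))
    return 0.10                                      # bull: no duration dependence

rng = np.random.default_rng(8)
state, duration, spells = 0, 1, {0: [], 1: []}
for _ in range(100_000):
    if rng.uniform() < exit_prob(state, duration):
        spells[state].append(duration)               # record the completed spell length
        state, duration = 1 - state, 1
    else:
        duration += 1

print("mean bear spell:", np.mean(spells[0]), " mean bull spell:", np.mean(spells[1]))
```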
By: | Miquel Montero |
Abstract: | The Continuous-Time Random Walk (CTRW) formalism can be adapted to encompass stochastic processes with memory. In this article we show how the random combination of two different unbiased CTRWs can give rise to a process with a clear drift, if one of them is a CTRW with memory. If one identifies the other as noise, the effect can be thought of as a kind of stochastic resonance. The ultimate origin of this phenomenon is the same as that of Parrondo's paradox in game theory. |
Date: | 2011–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1107.2346&r=ets |
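The abstract above relates the drift to Parrondo's paradox. For reference, the sketch below simulates the classic capital-dependent Parrondo setup with its textbook parameters: game A and game B are each losing on their own, yet a random mixture of the two drifts upward. This reproduces only the game-theory paradox the abstract invokes; the paper's CTRW-with-memory construction is analogous in spirit but not implemented here.

```python
# Classic Parrondo's paradox: two losing games whose random mixture wins.
import numpy as np

rng = np.random.default_rng(9)
eps = 0.005

def play(game, capital):
    if game == "A":
        p = 0.5 - eps                                # game A: slightly unfair coin
    else:
        p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)   # game B: capital-dependent
    return 1 if rng.uniform() < p else -1

def average_final_capital(strategy, n_games=5000, n_runs=100):
    totals = []
    for _ in range(n_runs):
        capital = 0
        for _ in range(n_games):
            game = strategy if strategy in ("A", "B") else rng.choice(["A", "B"])
            capital += play(game, capital)
        totals.append(capital)
    return np.mean(totals)

for strategy in ("A", "B", "random"):
    print(strategy, average_final_capital(strategy))
```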