New Economics Papers on Econometric Time Series
By: | Torben G. Andersen (Northwestern University, NBER, and CREATES); Nicola Fusari (Northwestern University); Viktor Todorov (Northwestern University) |
Abstract: | We develop a new parametric estimation procedure for option panels observed with error, which relies on asymptotic approximations assuming an ever-increasing set of observed option prices in the moneyness-maturity (cross-sectional) dimension, but with a fixed time span. We develop consistent estimators of the parameter vector and of the dynamic realization of the state vector that governs the option price dynamics. The estimators converge stably to a mixed-Gaussian law, and we develop feasible estimators for the limiting variance. We provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and the one obtained nonparametrically from high-frequency data on the underlying asset. We further construct new formal tests of the model fit for specific regions of the volatility surface and for the stability of the risk-neutral dynamics over a given period of time. A large-scale Monte Carlo study indicates that the inference procedures work well for empirically realistic model specifications and sample sizes. In an empirical application to S&P 500 index options we extend the popular double-jump stochastic volatility model to allow for time-varying risk premia of extreme events, i.e., jumps, as well as a more flexible relation between the risk premia and the level of risk. We show that both extensions provide a significantly improved characterization, both statistically and economically, of observed option prices. |
Keywords: | Option Pricing, Inference, Risk Premia, Jumps, Latent State Vector, Stochastic Volatility, Specification Testing, Stable Convergence. |
JEL: | C51 C52 G12 |
Date: | 2011–05–29 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-11&r=ets |
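The backbone of the estimation procedure sketched in the abstract above is a least-squares fit of model option prices to a large observed cross-section, jointly over the parameter vector and the latent state on a given day. A minimal toy sketch of that idea in Python (the pricing function toy_price and all parameter values are hypothetical stand-ins, not the paper's double-jump model):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def toy_price(k, tau, a, b, v):
    """Hypothetical smooth 'option price' in log-moneyness k and maturity tau,
    a stand-in for a real risk-neutral pricing model with spot variance v."""
    return np.exp(-a * k**2 / (v * tau)) * (b + v * np.sqrt(tau))

# one trading day: a large moneyness-maturity cross-section observed with error
k = np.repeat(np.linspace(-0.2, 0.2, 40), 5)
tau = np.tile([0.1, 0.25, 0.5, 1.0, 2.0], 40)
obs = toy_price(k, tau, 1.5, 0.05, 0.04) + 0.002 * rng.standard_normal(k.size)

def ssq(p):
    """Sum of squared pricing errors over the day's cross-section."""
    a, b, v = p[0], p[1], abs(p[2]) + 1e-8   # keep the variance state positive
    return np.sum((obs - toy_price(k, tau, a, b, v))**2)

# joint least squares over the parameters (a, b) and the latent state V_t
fit = minimize(ssq, x0=[1.0, 0.1, 0.02], method="Nelder-Mead")
print("estimated (a, b, V_t):", fit.x)
```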
By: | Mark Podolskij (Heidelberg University and CREATES); Katrin Wasmuth (Heidelberg University) |
Abstract: | This paper presents a goodness-of-fit test for the volatility function of an SDE driven by a Gaussian process with stationary and centered increments. Under rather weak assumptions on the Gaussian process, we provide a procedure for testing whether the unknown volatility function lies in a given linear functional space or not. This testing problem is highly non-trivial, because the volatility function is not identifiable in our model. The underlying fractional diffusion is assumed to be observed at high frequency on a fixed time interval, and the test statistic is based on weighted power variations. Our test statistic is consistent against any fixed alternative. |
Keywords: | central limit theorem, goodness-of-fit tests, high frequency observations, fractional diffusions, stable convergence. |
JEL: | C10 C13 C14 |
Date: | 2012–04–16 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-13&r=ets |
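The test statistic above is built from weighted power variations of high-frequency increments. A toy computation of such an object (the path, volatility function, weight, and scaling below are illustrative choices, and the driver here is Brownian rather than a general Gaussian process with stationary increments):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
t = np.linspace(0.0, 1.0, n + 1)

# toy path: dX_s = sigma(X_s) dB_s via an Euler scheme, with sigma(x) = 2 + sin(x)
x = np.zeros(n + 1)
db = rng.standard_normal(n) * np.sqrt(1.0 / n)
for i in range(n):
    x[i + 1] = x[i] + (2 + np.sin(x[i])) * db[i]

def weighted_power_variation(path, p, weight):
    """n^(p/2 - 1) * sum_i weight(t_i) * |x_{t_{i+1}} - x_{t_i}|^p."""
    dx = np.diff(path)
    return dx.size ** (p / 2 - 1) * np.sum(weight(t[:-1]) * np.abs(dx) ** p)

# with p = 2 and unit weight this approximates integral_0^1 sigma(X_s)^2 ds
print(weighted_power_variation(x, 2.0, lambda s: np.ones_like(s)))
```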
By: | Niels Haldrup (Aarhus University and CREATES); Robinson Kruse (Leibniz Universität Hannover and CREATES); Timo Teräsvirta (Aarhus University and CREATES); Rasmus T. Varneskov (Aarhus University and CREATES) |
Abstract: | One of the most influential research fields in econometrics over the past decades concerns unit root testing in economic time series. In macroeconomics, much of the interest in the area originates from the fact that when unit roots are present, shocks to the time series processes have a persistent effect, with resulting policy implications. From a statistical perspective, on the other hand, the presence of unit roots has dramatic implications for econometric model building, estimation, and inference if the so-called spurious regression problem is to be avoided. The present paper provides a selective review of contributions to the field of unit root testing over the past three decades. We discuss the nature of stochastic and deterministic trend processes, including break processes, that are likely to affect unit root inference. A range of the most popular unit root tests are presented, and their modifications to situations with breaks are discussed. We also review some results on unit root testing within the framework of non-linear processes. |
Keywords: | Unit roots, nonlinearity, structural breaks. |
JEL: | C2 C22 |
Date: | 2012–04–18 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-14&r=ets |
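For readers new to the area surveyed above, the canonical test it covers is easy to run; a quick illustration using the augmented Dickey-Fuller test from statsmodels on a simulated random walk versus a stationary AR(1):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
e = rng.standard_normal(500)

random_walk = np.cumsum(e)                 # unit root: should NOT reject
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]       # stationary: should reject

for name, x in [("random walk", random_walk), ("AR(0.5)", ar1)]:
    stat, pval, *_ = adfuller(x, regression="c", autolag="AIC")
    print(f"{name}: ADF stat = {stat:.2f}, p-value = {pval:.3f}")
```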
By: | Neil Shephard; Dacheng Xiu |
Abstract: | Estimating the covariance and correlation between assets using high-frequency data is challenging due to market microstructure and Epps effects. In this paper we extend Xiu's univariate QML approach to the multivariate case, carrying out inference as if the observations arise from an asynchronously observed vector scaled Brownian model observed with error. Under stochastic volatility the resulting QML estimator is positive semi-definite, uses all available data, and is consistent and asymptotically mixed normal. The quasi-likelihood is computed using a Kalman filter and optimised using a relatively simple EM algorithm which scales well with the number of assets. We derive the theoretical properties of the estimator and prove that it achieves the efficient rate of convergence. We show how to make it achieve the non-parametric efficiency bound for this problem. The estimator is also analysed using Monte Carlo methods and applied to equity data with differing levels of liquidity. |
Keywords: | EM algorithm, Kalman filter, Market microstructure noise, Non-synchronous data, Portfolio optimisation, Quadratic variation, Quasi-likelihood, Semimartingale, Volatility |
JEL: | C14 C58 D53 D81 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:oxf:wpaper:604&r=ets |
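A univariate sketch of the QML idea described above: treat noisy high-frequency prices as a random walk (scaled Brownian motion) observed with i.i.d. error, run a Kalman filter to obtain the Gaussian quasi-likelihood, and maximise it over the volatility and noise variances. The multivariate, asynchronous case handled in the paper adds the EM step; this shows only the filtering core, with illustrative sample sizes and parameter values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, dt = 4_680, 1 / 4_680              # one "day" of 5-second observations
sigma2_true, omega2_true = 0.02, 1e-7
x = np.cumsum(np.sqrt(sigma2_true * dt) * rng.standard_normal(n))
y = x + np.sqrt(omega2_true) * rng.standard_normal(n)

def negloglik(params):
    s2, w2 = np.exp(params)            # log-parametrisation keeps both > 0
    a, p, ll = y[0], w2, 0.0           # filter state, state variance, log-lik
    for obs in y[1:]:
        p_pred = p + s2 * dt           # predict
        f = p_pred + w2                # innovation variance
        v = obs - a                    # innovation
        ll += -0.5 * (np.log(f) + v * v / f)
        kg = p_pred / f                # Kalman gain, then update
        a += kg * v
        p = p_pred * (1 - kg)
    return -ll

fit = minimize(negloglik, x0=np.log([0.01, 1e-6]), method="Nelder-Mead")
print("estimated (sigma^2, omega^2):", np.exp(fit.x))
```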
By: | Jennifer L. Castle; David F. Hendry; Michael P. Clements |
Abstract: | We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies such as intercept corrections or differencing when location shifts occur, as in the recent financial crisis. |
Keywords: | Model selection, Factor models, Forecasting, Impulse-indicator saturation, Autometrics |
JEL: | C51 C22 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:oxf:wpaper:600&r=ets |
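A minimal sketch of the factor-forecasting step discussed above: extract principal components from a (simulated, stand-in) macro panel and use them in a direct h-step-ahead forecasting regression. The Autometrics selection and impulse-indicator saturation central to the paper are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, h = 200, 30, 4
f = np.cumsum(rng.standard_normal((T, 2)) * 0.1, axis=0)     # two latent factors
lam = rng.standard_normal((2, N))
panel = f @ lam + rng.standard_normal((T, N))                # observed panel
target = f[:, 0] + 0.5 * rng.standard_normal(T)              # series to forecast

z = (panel - panel.mean(0)) / panel.std(0)                   # standardise
eigval, eigvec = np.linalg.eigh(z.T @ z / T)
pcs = z @ eigvec[:, -2:]                                     # top-2 principal components

# direct h-step forecast: regress target_{t+h} on the factors at time t
X = np.column_stack([np.ones(T - h), pcs[:-h]])
beta, *_ = np.linalg.lstsq(X, target[h:], rcond=None)
forecast = np.array([1.0, *pcs[-1]]) @ beta
print("h-step factor forecast:", forecast)
```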
By: | D.S. Poskitt; Gael M. Martin; Simone D. Grose |
Abstract: | This paper investigates the use of bootstrap-based bias correction of semi-parametric estimators of the long memory parameter in fractionally integrated processes. The re-sampling method involves the application of the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long memory parameter. Theoretical justification for using the bootstrap techniques to bias adjust log periodogram and semi-parametric local Whittle estimators of the memory parameter is provided. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. For reasonably large sample sizes, the empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to nominal, and closer than that of intervals based on the comparable analytically adjusted estimators. The precision of inferences (as measured by interval length) is also greater when the bias correction is performed via the bootstrap rather than analytically. |
Keywords: | Analytical bias correction, bootstrap bias correction, confidence interval, coverage, precision, log periodogram estimator, local Whittle estimator. |
JEL: | C18 C22 C52 |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2012-8&r=ets |
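A compact sketch of the pre-filtered sieve bootstrap bias correction applied to the log-periodogram (GPH) estimator of the memory parameter d, simplified in several ways relative to the paper (small sample, crude bandwidth and sieve-order choices, few replications, no analytical comparison):

```python
import numpy as np

rng = np.random.default_rng(5)

def frac_coefs(d, n):
    """MA coefficients of (1 - L)^(-d): psi_k = psi_{k-1} (k - 1 + d) / k."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return psi

def frac_filter(x, d):
    """Apply (1 - L)^(-d) to x; negative d performs fractional differencing."""
    return np.convolve(x, frac_coefs(d, x.size))[:x.size]

def gph(x, m):
    """Log-periodogram (GPH) regression over the first m Fourier frequencies."""
    n = x.size
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    reg -= reg.mean()
    return reg @ (np.log(I) - np.log(I).mean()) / (reg @ reg)

n, d_true, p, B = 512, 0.3, 4, 199
m = int(n ** 0.65)
x = frac_filter(rng.standard_normal(n), d_true)       # ARFIMA(0, d, 0) sample
d_hat = gph(x, m)

u = frac_filter(x, -d_hat)                            # pre-filter by d_hat
Y = u[p:]                                             # AR(p) sieve via OLS
X = np.column_stack([u[p - j:n - j] for j in range(1, p + 1)])
phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ phi

d_boot = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=n, replace=True)
    ub = np.zeros(n)
    for t in range(p, n):
        ub[t] = phi @ ub[t - p:t][::-1] + e[t]        # rebuild the sieve series
    d_boot[b] = gph(frac_filter(ub, d_hat), m)        # re-integrate, re-estimate

d_bc = 2 * d_hat - d_boot.mean()                      # bias-corrected estimate
print(f"GPH d_hat = {d_hat:.3f}, bootstrap bias-corrected = {d_bc:.3f}")
```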
By: | D.S. Poskitt; Wenying Yao |
Abstract: | In this article we investigate the theoretical behaviour of finite-lag VAR(n) models fitted to time series that in truth come from an infinite-order VAR(∞) data generating mechanism. We show that the overall error can be broken down into two basic components: an estimation error that stems from the difference between the parameter estimates and their population ensemble VAR(n) counterparts, and an approximation error that stems from the difference between the VAR(n) and the true VAR(∞). The two sources of error are shown to be present in other performance indicators previously employed in the literature to characterize so-called truncation effects. Our theoretical analysis indicates that the magnitude of the estimation error exceeds that of the approximation error, but experimental results based upon a prototypical real business cycle model indicate that in practice the approximation error approaches its asymptotic position far more slowly than does the estimation error, their relative orders of magnitude notwithstanding. The experimental results suggest that, with sample sizes and lag lengths like those commonly employed in practice, VAR(n) models are likely to exhibit serious errors of both types when attempting to replicate the dynamics of the true underlying process, and that inferences based on VAR(n) models can be very untrustworthy. |
Keywords: | VAR, estimation error, approximation error, RBC model |
JEL: | C18 C32 C52 C54 E37 |
Date: | 2012–04–19 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2012-11&r=ets |
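The setting above is easy to emulate: data from a process with a VAR(∞) representation, here a bivariate VARMA(1,1) with illustrative coefficient matrices, approximated by finite VAR(n) fits of increasing order using statsmodels:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T = 400
A = np.array([[0.5, 0.1], [0.0, 0.4]])     # AR matrix
M = np.array([[0.3, 0.0], [0.2, 0.3]])     # MA matrix -> VAR(inf) representation
e = rng.standard_normal((T + 1, 2))
y = np.zeros((T + 1, 2))
for t in range(1, T + 1):
    y[t] = A @ y[t - 1] + e[t] + M @ e[t - 1]

# fit truncated VAR(n) approximations of increasing order
for n in (1, 2, 4, 8):
    res = VAR(y[1:]).fit(n)
    print(f"VAR({n}): AIC = {res.aic:.3f}")
```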
By: | D.S. Poskitt; Simone D. Grose; Gael M. Martin |
Abstract: | This paper investigates the accuracy of bootstrap-based inference in the case of long memory fractionally integrated processes. The re-sampling method is based on the semi-parametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semi-parametric estimate of the long memory parameter is also explored. Higher-order improvements yielded by both forms of re-sampling are demonstrated using Edgeworth expansions for a broad class of linear statistics. The methods are then applied to the problem of estimating the sampling distribution of the sample mean under long memory, in an experimental setting. The pre-filtered version of the bootstrap is shown to avoid the distinct underestimation of the sampling variance of the mean which the raw sieve method exhibits in finite samples, the higher-order accuracy of the latter notwithstanding. |
Keywords: | Bias, bootstrap-based inference, Edgeworth expansion, pre-filtered sieve bootstrap, sampling distribution. |
JEL: | C18 C22 C52 |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2012-9&r=ets |
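A sketch of the raw sieve bootstrap for the sampling distribution of the sample mean under long memory: fit an AR(p) sieve, resample residuals, rebuild series, and collect bootstrap means. The pre-filtered variant the paper recommends, and the Edgeworth analysis, are omitted; sample size, sieve order, and replication count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, B = 500, 8, 499

# long-memory data: ARFIMA(0, d, 0) with d = 0.3 via its MA expansion
d = 0.3
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
x = np.convolve(rng.standard_normal(n), psi)[:n]

# AR(p) sieve fitted by least squares (intercept + p lags)
Y = x[p:]
X = np.column_stack([np.ones(Y.size)] + [x[p - j:n - j] for j in range(1, p + 1)])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b

means = np.empty(B)
for i in range(B):
    e = rng.choice(resid, size=n, replace=True)
    xb = np.zeros(n)
    for t in range(p, n):
        xb[t] = b[0] + b[1:] @ xb[t - p:t][::-1] + e[t]
    means[i] = xb.mean()

print("sieve bootstrap s.e. of the mean:", means.std())
```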
By: | Mark J Jensen; John M Maheu |
Abstract: | In this paper we extend the parametric asymmetric stochastic volatility (ASV) model, in which returns are correlated with volatility, by flexibly modeling the bivariate distribution of the return and volatility innovations nonparametrically. Its novelty lies in modeling the joint conditional return-volatility distribution with an infinite mixture of bivariate Normal distributions with zero mean vectors but unknown mixture weights and covariance matrices. This semiparametric ASV model nests stochastic volatility models whose innovations are distributed as either Normal or Student-t distributions, and the response of volatility to unexpected return shocks is more general than the fixed asymmetric response of the parametric ASV model. The unknown mixture parameters are modeled with a Dirichlet Process prior. This prior ensures a parsimonious, finite posterior mixture that best represents the distribution of the innovations, and a straightforward sampler of the conditional posteriors. We develop a Bayesian Markov chain Monte Carlo sampler to fully characterize the parametric and distributional uncertainty. Nested model comparisons and out-of-sample predictions with cumulative marginal likelihoods and one-day-ahead predictive log-Bayes factors between the semiparametric and parametric versions of the ASV model show that the semiparametric model forecasts empirical market returns more accurately. A major reason is how volatility responds to an unexpected market movement. When the market is tranquil, expected volatility reacts to a negative (positive) price shock by rising (initially declining, but then rising when the positive shock is large). When the market is volatile, however, the degree of asymmetry and the size of the response in expected volatility are muted. In other words, when times are good, no news is good news, but when times are bad, neither good nor bad news matters with regard to volatility. |
Keywords: | Bayesian nonparametrics, cumulative Bayes factor, Dirichlet process mixture, infinite mixture model, leverage effect, marginal likelihood, MCMC, non-normal, stochastic volatility, volatility-return relationship |
JEL: | C11 C14 C53 C58 |
Date: | 2012–04–20 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-453&r=ets |
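An illustrative draw from the kind of nonparametric prior used above: a Dirichlet process mixture of mean-zero bivariate normals, approximated by a truncated stick-breaking construction. The base measure, concentration parameter, and truncation level below are arbitrary choices for the sketch, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(8)
alpha, K, n = 2.0, 50, 1000            # DP concentration, truncation, draws

# stick-breaking weights: w_k = v_k * prod_{j<k} (1 - v_j)
v = rng.beta(1, alpha, size=K)
w = v * np.concatenate([[1.0], np.cumprod(1 - v[:-1])])

# atom covariance matrices from a simple base measure (an inverse-Wishart
# would be the conventional choice; scaled identity + noise keeps this short)
covs = []
for _ in range(K):
    a = rng.standard_normal((2, 2)) * 0.3
    covs.append(np.eye(2) * rng.gamma(2.0, 0.5) + a @ a.T)

# sample mixture components, then (return, log-volatility) innovation pairs
comp = rng.choice(K, size=n, p=w / w.sum())
draws = np.array([rng.multivariate_normal(np.zeros(2), covs[k]) for k in comp])
print("implied correlation of innovations:", np.corrcoef(draws.T)[0, 1].round(3))
```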
By: | Lennart Hoogerheide (VU University Amsterdam); Anne Opschoor (Erasmus University Amsterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam) |
Abstract: | A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution - typically a posterior distribution, of which we only require a kernel - in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis-Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner. Second, we introduce a permutation-augmented MitISEM approach. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model. |
Keywords: | mixture of Student-t distributions; importance sampling; Kullback-Leibler divergence; Expectation Maximization; Metropolis-Hastings algorithm; predictive likelihood; DCC GARCH; mixture GARCH; instrumental variables |
JEL: | C11 C22 C26 |
Date: | 2012–03–23 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20120026&r=ets |
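A heavily simplified MitISEM-flavoured sketch: adapt a single Student-t candidate to a target kernel by importance-weighted moment updates, then use it for importance sampling. The actual method fits a mixture of Student-t densities via importance-weighted EM; the target kernel and all tuning constants below are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def log_target(x):
    """Hypothetical skewed target kernel (known only up to a constant)."""
    return -0.5 * x**2 + np.log1p(0.9 * np.tanh(2 * x))

mu, scale, nu, N = 0.0, 2.0, 5.0, 20_000
for _ in range(10):                     # adaptation iterations
    x = mu + scale * rng.standard_t(nu, size=N)
    logw = log_target(x) - stats.t.logpdf(x, nu, loc=mu, scale=scale)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu = w @ x                          # importance-weighted mean update
    scale = np.sqrt(w @ (x - mu)**2)    # crude importance-weighted scale update

# use the adapted candidate for an IS estimate of the target mean
x = mu + scale * rng.standard_t(nu, size=N)
logw = log_target(x) - stats.t.logpdf(x, nu, loc=mu, scale=scale)
w = np.exp(logw - logw.max())
w /= w.sum()
print("IS estimate of target mean:", (w @ x).round(3))
```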
By: | James Morley (School of Economics, The University of New South Wales); Jeremy Piger (University of Oregon); Pao-Lin Tien (Wesleyan University) |
Abstract: | We consider the extent to which different time-series models can generate simulated data with the same business cycle features that are evident in U.S. real GDP. We focus our analysis on whether multivariate linear models can improve on the previously documented failure of univariate linear models to replicate certain key business cycle features. We find that a particular nonlinear Markov-switching specification with an explicit “bounceback” effect continues to outperform linear models, even when the models incorporate variables such as the unemployment rate, inflation, interest rates, and the components of GDP. These results are robust to simulated data generated using either Normal or bootstrapped disturbances, as well as to allowing for a one-time structural break in the variance of shocks to real GDP growth. |
Keywords: | Business cycle features, nonlinear dynamics, multivariate models. |
JEL: | C52 E30 |
Date: | 2012–03 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2012-23&r=ets |
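A sketch of the kind of "bounceback" Markov-switching process the paper favours: recessions cut mean growth, and recently visited recession states feed back positively, producing a post-recession growth spurt. All parameter values are illustrative, not estimates:

```python
import numpy as np

rng = np.random.default_rng(10)
T, m = 400, 6
p00, p11 = 0.95, 0.80                  # regime persistence probabilities
mu0, mu1, lam, sig = 0.8, -1.2, 0.25, 0.6

# simulate the two-state Markov chain (S_t = 1 marks a recession)
s = np.zeros(T, dtype=int)
for t in range(1, T):
    stay = p11 if s[t - 1] else p00
    s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]

g = np.empty(T)
for t in range(T):
    bounce = lam * s[max(0, t - m):t].sum()      # bounceback term
    g[t] = mu0 + mu1 * s[t] + bounce + sig * rng.standard_normal()

print("mean growth in recessions:", g[s == 1].mean().round(2))
print("mean growth just after recessions:",
      g[(s == 0) & (np.roll(s, 1) == 1)].mean().round(2))
```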
By: | Yulei Luo; Jun Nie; Eric R. Young |
Abstract: | This technical paper considers ways to capture uncertainty in the context of so-called "state-space" models. State-space models are powerful tools commonly used in macroeconomics, international economics, and finance. They can generate estimates of an underlying, ultimately unobserved variable—such as the natural rate of unemployment—based on the movements of other variables that are observed and have some relationship to the unobserved variable. The paper shows how several macroeconomic models can be mapped to the state-space framework, thus helping quantify uncertainty about the true model (model uncertainty) or about the amount of information available when decisions are made (state uncertainty). |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp12-02&r=ets |
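For reference, the generic linear state-space form the paper builds on (standard notation, not the paper's specific models):

    x_{t+1} = A x_t + w_{t+1},   w_t ~ N(0, Q)   (transition; x_t unobserved)
    y_t     = C x_t + v_t,       v_t ~ N(0, R)   (measurement; y_t observed)

The Kalman filter then delivers E[x_t | y_1, ..., y_t], e.g. an estimate of the natural rate of unemployment given observed series; model uncertainty enters through doubt about (A, C, Q, R), and state uncertainty through doubt about x_t itself.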
By: | Peter Martey Addo (Centre d'Economie de la Sorbonne); Monica Billio (Ca' Foscari university - Department of Economics); Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics) |
Abstract: | We provide a signal modality analysis to characterize and detect nonlinearity schemes in the US Industrial Production Index time series. The analysis is achieved by using the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy based method using wavelet-based surrogates. A complex Morlet wavelet is employed to detect and characterize the US business cycle. A comprehensive analysis of the feasibility of this approach is provided. Our results coincide with the business cycles peaks and troughs dates published by the National Bureau of Economic Research (NBER). |
Keywords: | Nonlinearity analysis, surrogates, Delay Vector Variance (DVV) method, wavelets, business cycle, embedding parameters. |
JEL: | C14 C22 C40 E32 |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:12023&r=ets |
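A rough sketch of the delay vector variance (DVV) idea used above: embed the series into delay vectors, and for neighbourhoods of increasing radius measure how variable the one-step-ahead "targets" of similar delay vectors are; low normalised target variance at small radii signals predictability and determinism. The embedding dimension, radii grid, and toy signal are illustrative (the paper selects embedding parameters via a differential-entropy method not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(11)
n, m = 600, 3
x = np.sin(0.2 * np.arange(n)) + 0.1 * rng.standard_normal(n)   # toy signal

emb = np.column_stack([x[i:n - m + i] for i in range(m)])  # delay vectors
tgt = x[m:]                                                # one-step targets
D = squareform(pdist(emb))
radii = np.quantile(D[D > 0], np.linspace(0.02, 0.5, 10))

sigma2 = tgt.var()
for r in radii:
    variances = []
    for i in range(len(tgt)):
        nb = (D[i] <= r) & (np.arange(len(tgt)) != i)
        if nb.sum() >= 5:                      # need enough neighbours
            variances.append(tgt[nb].var())
    if variances:
        print(f"radius {r:.2f}: normalised target variance "
              f"{np.mean(variances) / sigma2:.3f}")
```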
By: | Gregor Wergen; Satya N. Majumdar; Gregory Schehr |
Abstract: | We study the statistics of the number of records R_{n,N} for N identical and independent symmetric discrete-time random walks of n steps in one dimension, all starting at the origin at step 0. At each time step, each walker jumps by a random length drawn independently from a symmetric and continuous distribution. We consider two cases: (I) when the variance \sigma^2 of the jump distribution is finite and (II) when \sigma^2 is divergent as in the case of Lévy flights with index 0 < \mu < 2. In both cases we find that the mean record number <R_{n,N}> grows universally as \sim \alpha_N \sqrt{n} for large n, but with a very different behavior of the amplitude \alpha_N for N > 1 in the two cases. We find that for large N, \alpha_N \approx 2 \sqrt{\log N} independently of \sigma^2 in case I. In contrast, in case II, the amplitude approaches an N-independent constant for large N, \alpha_N \approx 4/\sqrt{\pi}, independently of 0 < \mu < 2. For finite \sigma^2 we argue, and this is confirmed by our numerical simulations, that the full distribution of (R_{n,N}/\sqrt{n} - 2 \sqrt{\log N}) \sqrt{\log N} converges to a Gumbel law as n \to \infty and N \to \infty. In case II, our numerical simulations indicate that the distribution of R_{n,N}/\sqrt{n} converges, for n \to \infty and N \to \infty, to a universal nontrivial distribution, independently of \mu. We discuss the applications of our results to the study of the record statistics of 366 daily stock prices from the Standard & Poor's 500 index. |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1204.5039&r=ets |
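The case-I prediction above is easy to probe numerically: for N independent Gaussian random walks, the mean record number should grow like \alpha_N \sqrt{n} with \alpha_N ≈ 2 \sqrt{\log N} for large N. A quick simulation (the agreement is only rough at these sizes, since the approximation kicks in slowly in both n and N):

```python
import numpy as np

rng = np.random.default_rng(12)
n, N, trials = 2000, 50, 200

counts = np.empty(trials)
for k in range(trials):
    walks = np.cumsum(rng.standard_normal((N, n)), axis=1)
    u = np.concatenate([[0.0], walks.max(axis=0)])   # cross-walker max, origin included
    run_max = np.maximum.accumulate(u)
    # a record occurs whenever the cross-walker maximum strictly increases
    counts[k] = 1 + np.sum(u[1:] > run_max[:-1])

print("simulated <R_{n,N}> / sqrt(n):", counts.mean() / np.sqrt(n))
print("2 sqrt(log N) prediction     :", 2 * np.sqrt(np.log(N)))
```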
By: | Gayatri Tilak; Tamas Szell; Remy Chicheportiche; Anirban Chakraborti |
Abstract: | The aim of this article is to briefly review, and to present new studies of, the correlations and co-movements of stocks, so as to understand their "seasonalities" and market evolution. Using intraday data for the CAC40, we begin by reasserting the findings of Allez and Bouchaud [New J. Phys. 13, 025010 (2011)]: the average correlation between stocks increases throughout the day. We then use multidimensional scaling (MDS) to generate maps and visualize the dynamic evolution of the stock market during the day. We do not find any marked difference in the structure of the market during a day. Another aim is to use daily data for MDS studies, and to visualize or detect specific sectors in a market and periods of crisis. We suggest that this type of visualization may be used to identify potential pairs of stocks for "pairs trading". |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1204.5103&r=ets |
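A sketch of the visualisation pipeline described above: turn a stock correlation matrix into the standard distance d_ij = sqrt(2(1 - rho_ij)) and project it into the plane with multidimensional scaling. Simulated one-factor returns stand in for the CAC40 data used in the article:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(13)
T, n_stocks = 250, 15
common = rng.standard_normal((T, 1))
returns = 0.6 * common + rng.standard_normal((T, n_stocks))   # one-factor toy returns

rho = np.corrcoef(returns.T)
dist = np.sqrt(2.0 * (1.0 - rho))                # correlation distance matrix

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords[:5])                                # 2-D map coordinates per stock
```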