Econometric Time Series
http://lists.repec.org/mailman/listinfo/nep-ets
2016-05-21
An Explicit Formula for Likelihood Function for Gaussian Vector Autoregressive Moving-Average Model Conditioned on Initial Observables with Application to Model Calibration
http://d.repec.org/n?u=RePEc:arx:papers:1604.08677&r=ets
We derive an explicit formula for the likelihood function of a Gaussian VARMA model conditioned on initial observables where the moving-average (MA) coefficients are scalar. For fixed MA coefficients the likelihood function is optimized over the autoregressive parameters $\Phi$ by a closed-form formula generalizing the regression calculation of the VAR model, with the introduction of an inner product defined by the MA coefficients. We show that the assumption of scalar MA coefficients is not restrictive and that this formulation of the VARMA model shares many attractive features of the VAR and MA models. The gradient and Hessian can be computed analytically. The likelihood function is preserved under the root inversion maps of the MA coefficients. We discuss constraints on the gradient of the likelihood function with moving-average unit roots. With the help of the FFT the likelihood function can be computed in $O((kp+1)^2T +ckT\log(T))$ time. Numerical calibration is required only for the scalar MA parameters. The approach can be generalized to include additional drifts as well as integrated components. We discuss a relationship with the Borodin-Okounkov formula and the case of infinite MA components.
Du Nguyen
2016-04
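As a minimal illustration of the conditional-likelihood idea, here is the innovation recursion and conditional Gaussian log-likelihood for a univariate ARMA(1,1) (a scalar-MA special case); the paper's closed-form step for $\Phi$ and the FFT evaluation are not reproduced, and all parameter values below are illustrative.

```python
import numpy as np

def carma_loglik(y, phi, theta, sigma2):
    """Conditional Gaussian log-likelihood of an ARMA(1,1) model.

    Innovations are filtered recursively, conditioning on the first
    observation and a zero pre-sample innovation.
    """
    e = np.zeros(len(y))
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    e = e[1:]  # drop the conditioning observation
    n = len(e)
    return -0.5 * (n * np.log(2 * np.pi * sigma2) + np.sum(e**2) / sigma2)

# Simulate an ARMA(1,1) series and compare likelihood values.
rng = np.random.default_rng(0)
T, phi0, theta0 = 2000, 0.6, 0.3
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi0 * y[t - 1] + eps[t] + theta0 * eps[t - 1]

ll_true = carma_loglik(y, 0.6, 0.3, 1.0)   # true parameters
ll_bad = carma_loglik(y, 0.2, 0.3, 1.0)    # misspecified AR parameter
```

At the true parameters the conditional log-likelihood should exceed the value at a misspecified AR coefficient on a long simulated sample.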
A new structural stochastic volatility model of asset pricing and its stylized facts
http://d.repec.org/n?u=RePEc:arx:papers:1604.08824&r=ets
Building on a prominent agent-based model, we present a new structural stochastic volatility asset pricing model of fundamentalists vs. chartists in which prices are determined by excess demand. Specifically, this allows for modelling stochastic interactions between agents, based on a herding process corrected by price misalignment, and for incorporating strong noise components in the agents' demand. The model's parameters are estimated using the method of simulated moments, where the moments reflect the basic properties of the daily returns of a stock market index. In addition, for the first time we apply a (parametric) bootstrap method in a setting where the switching between strategies is modelled using a discrete choice approach. As we demonstrate, the resulting dynamics replicate a rich set of the stylized facts of daily financial data, including heavy tails, volatility clustering, long memory in absolute returns, the absence of autocorrelation in raw returns, volatility-volume correlations, aggregate Gaussianity, concave price impact and extreme price events.
Radu T. Pruna
Maria Polukarov
Nicholas R. Jennings
2016-04
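A compact simulation in the spirit of the model family described (discrete-choice switching between fundamentalists and chartists, price moved by excess demand, herding corrected by squared misalignment). All parameter values and functional details below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
phi, chi = 0.2, 0.9              # fundamentalist / chartist demand slopes (illustrative)
beta, a_h, a_m = 1.0, 1.3, 12.0  # choice intensity, herding, misalignment weights
mu = 0.01                        # market impact of excess demand
s_f, s_c = 0.7, 1.9              # strong noise in agents' demand
p = np.zeros(T)
n_f = np.full(T, 0.5)            # fraction of fundamentalists
for t in range(2, T):
    # attractiveness of the fundamentalist strategy: herding term
    # corrected by squared price misalignment (fundamental value = 0)
    a = a_h * (2 * n_f[t - 1] - 1) + a_m * p[t - 1] ** 2
    n_f[t] = 1.0 / (1.0 + np.exp(-beta * a))       # discrete choice switching
    d_f = phi * (0.0 - p[t - 1]) + s_f * rng.standard_normal()
    d_c = chi * (p[t - 1] - p[t - 2]) + s_c * rng.standard_normal()
    p[t] = p[t - 1] + mu * (n_f[t] * d_f + (1 - n_f[t]) * d_c)
r = np.diff(p)                   # simulated returns
```

In a moment-matching exercise, statistics of `r` (tail index, autocorrelations of absolute returns, etc.) would be compared with those of an observed index.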
Unbiased Monte Carlo Simulation of Diffusion Processes
http://d.repec.org/n?u=RePEc:arx:papers:1605.01998&r=ets
Monte Carlo simulations of diffusion processes often introduce bias in the final result, due to time discretization. Using an auxiliary Poisson process, it is possible to run simulations which are unbiased. In this article, we propose such a Monte Carlo scheme which converges to the exact value. We manage to keep the simulation variance finite in all cases, so that the strong law of large numbers guarantees the convergence. Moreover, the simulation noise is a decreasing function of the Poisson process intensity. Our method handles multidimensional processes with nonconstant drifts and nonconstant variance-covariance matrices. It also encompasses stochastic interest rates.
Louis Paulot
2016-05
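The paper's construction uses an auxiliary Poisson process; as a hedged illustration of the same debiasing-by-randomization idea, here is a single-term randomized multilevel (Rhee-Glynn-type) estimator for geometric Brownian motion, which is a different but related construction, not the authors' scheme. Model and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0   # GBM: true E[X_T] = x0 * exp(mu * T)

def milstein_pair(level):
    """Coupled Milstein estimates of X_T on 2^level (fine) and
    2^(level-1) (coarse) steps, sharing Brownian increments."""
    nf = 2 ** level
    h = T / nf
    dw = rng.standard_normal(nf) * np.sqrt(h)
    def path(incs, dt):
        x = x0
        for d in incs:
            x *= 1 + mu * dt + sigma * d + 0.5 * sigma**2 * (d * d - dt)
        return x
    fine = path(dw, h)
    if level == 0:
        return fine, 0.0
    coarse = path(dw[0::2] + dw[1::2], 2 * h)
    return fine, coarse

r = 2.0 ** -1.5                 # geometric distribution over discretization levels
M = 10000
draws = np.empty(M)
for m in range(M):
    level = rng.geometric(1 - r) - 1      # P(level = l) = (1 - r) * r**l
    p_l = (1 - r) * r ** level
    f, c = milstein_pair(level)
    draws[m] = (f - c) / p_l              # single-term debiased estimator
est = draws.mean()
```

Randomizing over the number of time steps removes the discretization bias in expectation, at the price of extra (here finite) variance; the paper's Poisson-based scheme achieves unbiasedness by a different mechanism.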
Forecasting time series with structural breaks with Singular Spectrum Analysis, using a general form of recurrent formula
http://d.repec.org/n?u=RePEc:arx:papers:1605.02188&r=ets
This study extends and evaluates the forecasting performance of the Singular Spectrum Analysis (SSA) technique using a general non-linear form for the recurrent formula. We consider 24 series measuring the monthly seasonally adjusted industrial production of important sectors of the German, French and UK economies. The forecasting performance is evaluated by comparing the new proposed model with basic SSA and SSA bootstrap forecasting, especially when there is evidence of structural breaks in both the in-sample and out-of-sample periods. According to the root mean-square error (RMSE), SSA using the general recurrent formula outperforms both basic SSA and SSA bootstrap forecasting at horizons of up to a year. We found no significant difference between these methods in predicting the direction of change. It is therefore suggested that users choose the SSA model with the general recurrent formula when there are structural breaks in the series.
Donya Rahmani
Saeed Heravi
Hossein Hassani
Mansi Ghodsi
2016-05
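For reference, a minimal sketch of basic SSA with the standard *linear* recurrent formula (the paper's general non-linear recurrent form is not reproduced), applied to a noiseless sinusoid where the recurrence is exact:

```python
import numpy as np

def ssa_forecast(y, L, r, steps):
    """Basic SSA: embed, truncate the SVD, reconstruct by diagonal
    averaging, and forecast with the linear recurrent formula built
    from the leading left singular vectors."""
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = U[:, :r]                         # leading left singular vectors
    pi = P[-1, :]                        # their last coordinates
    nu2 = np.sum(pi**2)
    R = (P[:-1, :] @ pi) / (1.0 - nu2)   # recurrent coefficients, length L-1
    # diagonal averaging of the rank-r reconstruction
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
    rec = np.array([np.mean(np.diag(Xr[:, ::-1], k))
                    for k in range(K - 1, -L, -1)])
    out = list(rec)
    for _ in range(steps):               # recurrent forecast
        out.append(np.dot(R, out[-(L - 1):]))
    return np.array(out[N:])

t = np.arange(200)
y = np.sin(2 * np.pi * t / 17)
fc = ssa_forecast(y, L=40, r=2, steps=10)
truth = np.sin(2 * np.pi * np.arange(200, 210) / 17)
```

The proposed extension replaces the fixed linear recurrence `R` with a more general form intended to cope with structural breaks.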
Value-at-Risk: The Effect of Autoregression in a Quantile Process
http://d.repec.org/n?u=RePEc:arx:papers:1605.04940&r=ets
Value-at-Risk (VaR) is an institutional measure of risk favored by financial regulators. VaR may be interpreted as a quantile of future portfolio values conditional on the information available, where the most commonly used quantile is 95%. Here we examine Conditional Autoregressive Value at Risk (CAViaR), first introduced by Engle and Manganelli (2001). CAViaR suggests that negative/positive returns are not i.i.d. and that there is significant autocorrelation. The model is tested using data from 1986-1999 and 1999-2009 for GM, IBM, XOM and SPX, and then validated via the dynamic quantile test. Results suggest that the tails (upper/lower quantiles) of the return distribution behave differently from the core.
Khizar Qureshi
2016-03
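A sketch of the symmetric absolute value CAViaR recursion, fitted here by a crude random search over coefficients minimizing the quantile (pinball) loss; the paper's estimation and dynamic quantile test are more elaborate, and the simulated data and search ranges below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# GARCH(1,1)-type returns as test data (parameters illustrative)
T = 2000
h = np.ones(T)
y = np.zeros(T)
for t in range(1, T):
    h[t] = 0.05 + 0.10 * y[t - 1] ** 2 + 0.85 * h[t - 1]
    y[t] = np.sqrt(h[t]) * rng.standard_normal()

alpha = 0.05

def sav_var(params, y):
    """Symmetric absolute value CAViaR recursion for the alpha-quantile."""
    b0, b1, b2 = params
    f = np.empty(len(y))
    f[0] = np.quantile(y[:100], alpha)   # empirical start-up value
    for t in range(1, len(y)):
        f[t] = b0 + b1 * f[t - 1] + b2 * abs(y[t - 1])
    return f

def pinball(y, f, alpha):
    u = y - f
    return np.mean(u * (alpha - (u < 0)))

# crude random search over the CAViaR coefficients
best, best_loss = None, np.inf
for _ in range(2000):
    cand = (rng.uniform(-3, 0), rng.uniform(0, 0.99), rng.uniform(-0.5, 0))
    loss = pinball(y, sav_var(cand, y), alpha)
    if loss < best_loss:
        best, best_loss = cand, loss

f = sav_var(best, y)
hit_rate = np.mean(y < f)    # fraction of VaR exceedances, should be near alpha
```

A dynamic quantile test would then check that the exceedance indicators are serially uncorrelated and unpredictable.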
Bootstrapping pre-averaged realized volatility under market microstructure noise
http://d.repec.org/n?u=RePEc:cir:cirwor:2016s-25&r=ets
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach, where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that the leading martingale part in the pre-averaged returns is kn-dependent, with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the “blocks of blocks” bootstrap method is not valid when volatility is time-varying. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first-order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first-order asymptotic theory. An empirical application illustrates its use in practice.
Ulrich Hounyo
Sílvia Gonçalves
Nour Meddahi
Block bootstrap, high frequency data, market microstructure noise, pre-averaging, realized volatility, wild bootstrap
2016-05-09
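A minimal sketch of the pre-averaging estimator that the bootstrap targets, with overlapping blocks and the standard weight g(x) = min(x, 1-x); the wild blocks-of-blocks bootstrap itself is not reproduced, and the simulation parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 23400                        # one day of one-second observations
sigma, omega = 0.2, 5e-4         # daily vol and noise std (illustrative)
X = np.cumsum(sigma * np.sqrt(1.0 / n) * rng.standard_normal(n))
Y = X + omega * rng.standard_normal(n)   # log-prices with microstructure noise
r = np.diff(Y)

theta = 0.5
kn = int(np.ceil(theta * np.sqrt(n)))    # pre-averaging window, kn ~ theta*sqrt(n)
jj = np.arange(0, kn + 1)
g = np.minimum(jj / kn, 1 - jj / kn)     # weight function, g(0) = g(1) = 0
psi1 = kn * np.sum(np.diff(g) ** 2)      # Riemann approximations of the
psi2 = np.sum(g ** 2) / kn               # usual psi1, psi2 constants

# overlapping pre-averaged returns: weighted sums of kn-1 consecutive
# returns (the weight vector is symmetric, so no reversal is needed)
Yb = np.convolve(r, g[1:kn], mode="valid")
omega2 = np.sum(r ** 2) / (2 * len(r))   # noise variance estimate
theta_n = kn / np.sqrt(n)
# pre-averaged realized volatility with the usual noise bias correction
iv_hat = np.sum(Yb ** 2) / (kn * psi2) - psi1 * omega2 / (theta_n ** 2 * psi2)
```

The bootstrap proposed in the paper resamples these heavily overlapping, kn-dependent squared pre-averaged returns while preserving their heterogeneity under stochastic volatility.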
The Rational Inattention Filter
http://d.repec.org/n?u=RePEc:cpr:ceprdp:11237&r=ets
Dynamic rational inattention problems used to be difficult to solve. This paper provides simple, analytical results for dynamic rational inattention problems. We start from the benchmark rational inattention problem: an agent tracks a variable of interest that follows a Gaussian process, chooses how to pay attention to this variable, and aims to minimize, say, the mean squared error subject to a constraint on information flow, as in Sims (2003). We prove that if the variable of interest follows an ARMA(p,q) process, the optimal signal is about a linear combination of {X(t),...,X(t-p+1)} and {e(t),...,e(t-q+1)}, where X(t) denotes the variable of interest and e(t) denotes its period t innovation. The optimal signal weights can be computed from a simple extension of the Kalman filter: the usual Kalman filter equations in combination with first-order conditions for the optimal signal weights. We provide several analytical results regarding those signal weights. We also prove the equivalence of several different formulations of the information flow constraint. We conclude with general equilibrium applications from macroeconomics.
Mackowiak, Bartosz Adam
Matejka, Filip
Wiederholt, Mirko
Kalman filter; Macroeconomics; rational inattention
2016-04
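For intuition, the benchmark problem for an AR(1) state with a mean-squared-error objective has a closed-form steady state when the information-flow constraint binds each period: the posterior variance is the prior variance shrunk by 2^(-2*kappa). The symbols and parameter values below are mine, not the paper's, and the optimal signal-weight computation is not reproduced.

```python
import numpy as np

rho, sigma_e = 0.9, 1.0    # AR(1) persistence and innovation std
kappa = 1.0                # information-flow limit, bits per period

# closed form: Sigma_post = 2^(-2 kappa) * Sigma_prior, with
# Sigma_prior = rho^2 * Sigma_post + sigma_e^2
shrink = 2.0 ** (-2 * kappa)
post_closed = shrink * sigma_e**2 / (1 - rho**2 * shrink)

# same fixed point by iterating the Kalman-style variance recursion
post = 1.0
for _ in range(200):
    prior = rho**2 * post + sigma_e**2   # predict
    post = shrink * prior                # update at full capacity

# realized information flow in bits: 0.5 * log2(prior var / posterior var)
flow = 0.5 * np.log2((rho**2 * post_closed + sigma_e**2) / post_closed)
```

The realized flow equals the capacity kappa by construction, which is the sense in which the constraint binds in steady state.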
Tractable Likelihood-Based Estimation of Non-Linear DSGE Models Using Higher-Order Approximations
http://d.repec.org/n?u=RePEc:eca:wpaper:2013/228887&r=ets
This paper discusses a tractable approach for computing the likelihood function of non-linear Dynamic Stochastic General Equilibrium (DSGE) models that are solved using second- and third-order accurate approximations. In contrast to particle filters, the method requires no stochastic simulations; it is hence much faster and thus suitable for the estimation of medium-scale models. The method assumes that the number of exogenous innovations equals the number of observables. Given an assumed vector of initial states, the exogenous innovations can thus be inferred recursively from the observables, which allows the likelihood function to be computed easily. Initial states and model parameters are estimated by maximizing the likelihood function. Numerical examples suggest that the method provides reliable estimates of model parameters and of latent state variables, even for highly non-linear economies with big shocks.
Robert Kollmann
likelihood-based estimation of non-linear DSGE models; higher-order approximations; pruning; latent state variables
2016-03
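A linear first-order sketch of the inversion idea (the paper works with second- and third-order approximations): with as many innovations as observables and a given initial state, the shocks can be recovered recursively and the Gaussian likelihood evaluated directly. The state-space matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# first-order state space: x_t = A x_{t-1} + B e_t,  y_t = C x_t
A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([[1.0, 0.0], [0.3, 1.0]])
C = np.array([[1.0, 0.0], [0.0, 1.0]])

T = 50
e_true = rng.standard_normal((T, 2))
x = np.zeros(2)
ys = []
for t in range(T):
    x = A @ x + B @ e_true[t]
    ys.append(C @ x)
ys = np.array(ys)

# with as many shocks as observables, invert the innovations recursively
CB_inv = np.linalg.inv(C @ B)
x_hat = np.zeros(2)              # assumed (here: true) initial state
e_rec = np.empty((T, 2))
for t in range(T):
    e_rec[t] = CB_inv @ (ys[t] - C @ A @ x_hat)
    x_hat = A @ x_hat + B @ e_rec[t]

# Gaussian log-likelihood of the recovered innovations (up to constants),
# with the log-Jacobian of the observables-to-innovations map
ll = -0.5 * np.sum(e_rec**2) - T * np.log(np.abs(np.linalg.det(C @ B)))
```

With the true initial state, the recovered innovations coincide with the true ones; in estimation, the initial state is treated as an additional parameter.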
A Bootstrap Stationarity Test for Predictive Regression Invalidity
http://d.repec.org/n?u=RePEc:esy:uefcwp:16666&r=ets
We examine how the familiar spurious regression problem can manifest itself in the context of recently proposed predictability tests. For these tests to provide asymptotically valid inference, account has to be taken of the degree of persistence of the putative predictors. Failure to do so can lead to spurious over-rejections of the no predictability null hypothesis. A number of methods have been developed to achieve this. However, these approaches all make an underlying assumption that any predictability in the variable of interest is purely attributable to the predictors under test, rather than to any unobserved persistent latent variables, themselves uncorrelated with the predictors being tested. We show that where this assumption is violated, something that could very plausibly happen in practice, sizeable (spurious) rejections of the null can occur in cases where the variables under test are not valid predictors. In response, we propose a screening test for predictive regression invalidity based on a stationarity testing approach. In order to allow for an unknown degree of persistence in the putative predictors, and for both conditional and unconditional heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We establish the asymptotic validity of this bootstrap test, which entails establishing a conditional invariance principle along with its bootstrap counterpart, both of which appear to be new to the literature and are likely to have important applications beyond the present context. We also show how our bootstrap test can be used, in conjunction with extant predictability tests, to deliver a two-step feasible procedure. Monte Carlo simulations suggest that our proposed bootstrap methods work well in finite samples. An illustration employing U.S. stock returns data demonstrates the practical usefulness of our procedures.
Georgiev, Iliyan
Harvey, David I
Leybourne, Stephen J
Taylor, A M Robert
Predictive regression; causality; persistence; spurious regression; stationarity test; fixed regressor wild bootstrap; conditional distribution.
2015-11
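A stripped-down sketch of a fixed regressor wild bootstrap applied to a KPSS-type stationarity statistic: deterministic regressors (here just the mean) are held fixed and the demeaned residuals are rescaled by external N(0,1) draws. This is a generic illustration of the bootstrap scheme, not the authors' full procedure, which handles persistent putative predictors.

```python
import numpy as np

rng = np.random.default_rng(6)

def kpss_stat(u):
    """KPSS-type stationarity statistic for a demeaned series."""
    e = u - u.mean()
    S = np.cumsum(e)
    return np.sum(S**2) / (len(u) ** 2 * np.mean(e**2))

y = rng.standard_normal(300)      # stationary series under the null
stat = kpss_stat(y)

# fixed regressor wild bootstrap: keep the deterministic part fixed and
# rescale the demeaned residuals by independent standard normal draws
e = y - y.mean()
boot = np.array([kpss_stat(e * rng.standard_normal(len(e)))
                 for _ in range(999)])
p_value = np.mean(boot >= stat)
```

The wild rescaling preserves (un)conditional heteroskedasticity patterns in the residuals, which is why this scheme is used instead of an i.i.d. resampling.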
The Influence of Additive Outliers on the Performance of Information Criteria to Detect Nonlinearity
http://d.repec.org/n?u=RePEc:han:dpaper:dp-575&r=ets
In this paper the performance of information criteria and of a test against SETAR nonlinearity in outlier-contaminated time series is investigated. Additive outliers can seriously influence the properties of the underlying time series and hence of linearity tests, resulting in spurious test decisions in favour of nonlinearity. Using simulation studies, the performance of the information criteria SIC and WIC as an alternative to linearity tests is assessed in time series with different degrees of persistence and different outlier magnitudes. For uncontaminated series and a small sample size the performance of SIC and WIC is similar to that of the linearity test at the $5\%$ and $10\%$ significance levels, respectively. For an increasing number of observations the size of SIC and WIC tends to zero. In contaminated series the size of the test and of the information criteria increases with the outlier magnitude and the degree of persistence. SIC and WIC clearly outperform the test in larger samples and at larger outlier magnitudes. The power of the test and of the information criteria depends on the sample size and on the difference between the regimes: the more distinct the regimes and the larger the sample, the higher the power. Additive outliers decrease the power in distinct regimes in small samples and in intermediate regimes in large samples, but increase the power in similar regimes. Due to their higher robustness in terms of size, information criteria are a valuable alternative to linearity tests in outlier-contaminated time series.
Rinke, Saskia
Additive Outliers, Nonlinear Time Series, Information Criteria, Linearity Test, Monte Carlo
2016-04
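A minimal sketch of model selection by information criterion between a linear AR(1) and a SETAR(2;1,1) alternative, on simulated data with distinct regimes; WIC and the outlier contamination experiments are not reproduced, and the data-generating parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
y = np.zeros(T)
for t in range(1, T):                     # SETAR(2;1,1) with distinct regimes
    slope = 0.7 if y[t - 1] <= 0 else -0.5
    y[t] = slope * y[t - 1] + rng.standard_normal()

x, z = y[:-1], y[1:]
n = len(z)

def sic(rss, k):
    """Schwarz information criterion for a Gaussian regression fit."""
    return n * np.log(rss / n) + k * np.log(n)

# linear AR(1) fit (no intercept, matching the DGP)
b = np.sum(x * z) / np.sum(x * x)
sic_ar = sic(np.sum((z - b * x) ** 2), 1)

# SETAR fit: grid over candidate thresholds (inner quantiles of the data)
best = np.inf
for thr in np.quantile(x, np.linspace(0.15, 0.85, 29)):
    lo, hi = x <= thr, x > thr
    rss = 0.0
    for m in (lo, hi):
        bm = np.sum(x[m] * z[m]) / np.sum(x[m] ** 2)
        rss += np.sum((z[m] - bm * x[m]) ** 2)
    best = min(best, sic(rss, 3))         # two slopes plus the threshold
sic_setar = best
```

With regimes this distinct and 500 observations, the criterion should select the SETAR specification; adding large additive outliers to `y` is what distorts this comparison in the paper's experiments.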
Dynamic Factor Models with infinite-dimensional factor space: asymptotic analysis
http://d.repec.org/n?u=RePEc:mod:recent:115&r=ets
Factor models, all particular cases of the Generalized Dynamic Factor Model (GDFM) introduced in Forni, Hallin, Lippi and Reichlin (2000), have become extremely popular in the theory and practice of large panels of time series data. The asymptotic properties (consistency and rates) of the corresponding estimators have been studied in Forni, Hallin, Lippi and Reichlin (2004). Those estimators, however, rely on Brillinger’s dynamic principal components, and thus involve two-sided filters, which leads to rather poor forecasting performances. No such problem arises with estimators based on standard (static) principal components, which have been dominant in this literature. On the other hand, the consistency of those static estimators requires the assumption that the space spanned by the factors has finite dimension, which severely restricts the generality afforded by the GDFM. This paper derives the asymptotic properties of a semiparametric estimator of the loadings and common shocks based on one-sided filters recently proposed by Forni, Hallin, Lippi and Zaffaroni (2015). Consistency and exact rates of convergence are obtained for this estimator, under a general class of GDFMs that does not require a finite-dimensional factor space. A Monte Carlo experiment and an empirical exercise on US macroeconomic data corroborate those theoretical results and demonstrate the excellent performance of those estimators in out-of-sample forecasting.
Mario Forni
Marc Hallin
Marco Lippi
Paolo Zaffaroni
High-dimensional time series, generalized dynamic factor models, vector processes with singular spectral density, one-sided representations of dynamic factor models, consistency and rates
2015-09
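For context, here is the static principal-components benchmark that the abstract contrasts with the one-sided GDFM estimator, on a simulated one-factor panel; the paper's semiparametric one-sided estimator is not reproduced, and the simulation design is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N, T = 50, 200
f = np.zeros(T)
for t in range(1, T):                    # common factor, AR(1)
    f[t] = 0.8 * f[t - 1] + rng.standard_normal()
lam = rng.standard_normal(N)             # factor loadings
X = np.outer(f, lam) + rng.standard_normal((T, N))   # panel, T x N

# static principal components: leading eigenvector of the sample covariance
S = X.T @ X / T
w, V = np.linalg.eigh(S)
f_hat = X @ V[:, -1]                     # estimated factor, up to sign/scale
corr = np.abs(np.corrcoef(f_hat, f)[0, 1])
```

This static estimator is consistent only when the factor space is finite-dimensional, which is precisely the restriction the paper's one-sided GDFM estimator avoids.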
VAR Information and the Empirical Validation of DSGE Models
http://d.repec.org/n?u=RePEc:mod:recent:119&r=ets
A shock of interest can be recovered, either exactly or with a good approximation, by means of standard VAR techniques even when the structural MA representation is non-invertible or non-fundamental. We propose a measure of how informative a VAR model is for a specific shock of interest. We show how to use this measure to validate the shock transmission mechanisms of DSGE models through VARs. In an application, we validate a theory of news shocks. The theory does remarkably well for all variables, but understates the long-run effects of technology news on TFP.
Mario Forni
Luca Gambetti
Luca Sala
invertibility, non-fundamentalness, news shocks, DSGE model validation, structural VAR
2016-04
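A crude quantity in the spirit of the informativeness measure described: the R-squared from projecting a known structural shock on estimated VAR innovations, which should be near one in an invertible simulated example. This is my simplified illustration, not the authors' statistic; matrices and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
A = np.array([[0.5, 0.2], [0.1, 0.6]])
B = np.array([[1.0, 0.0], [0.5, 1.0]])   # impact matrix (invertible case)
e = rng.standard_normal((T, 2))          # structural shocks
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + B @ e[t]

# OLS VAR(1) residuals
Ylag, Yc = y[:-1], y[1:]
Ahat = np.linalg.lstsq(Ylag, Yc, rcond=None)[0].T
u = Yc - Ylag @ Ahat.T

# informativeness for the first structural shock: R^2 from projecting it
# on the space spanned by the VAR innovations
s = e[1:, 0]
coef = np.linalg.lstsq(u, s, rcond=None)[0]
r2 = 1 - np.sum((s - u @ coef) ** 2) / np.sum((s - s.mean()) ** 2)
```

Under non-invertibility the VAR innovations would span the shock only imperfectly, and an R-squared of this kind would fall below one.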
Local Explosion Modelling by Noncausal Process
http://d.repec.org/n?u=RePEc:pra:mprapa:71105&r=ets
The noncausal autoregressive process with heavy-tailed errors possesses a nonlinear causal dynamics, which allows for local explosions or asymmetric cycles often observed in economic and financial time series. It provides a new model for multiple local explosions in a strictly stationary framework. The causal predictive distribution displays surprising features, such as the existence of higher moments than for the marginal distribution, or the presence of a unit root in the Cauchy case. Aggregating such models can yield complex dynamics with local and global explosions as well as variation in the rate of explosion. The asymptotic behavior of a vector of sample autocorrelations is studied in a semi-parametric noncausal AR(1) framework with Pareto-like tails, and diagnostic tests are proposed. Empirical results based on the Nasdaq composite price index are provided.
Gouriéroux, Christian
Zakoian, Jean-Michel
Causal innovation; Explosive bubble; Heavy-tailed errors; Noncausal process; Stable process
2016-05-05
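A minimal simulation of a strictly stationary noncausal AR(1) with Cauchy errors, X_t = rho * X_{t+1} + eps_t, built backwards in time so that X_t depends on future errors; the burn-in length and rho are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(10)
rho, T, burn = 0.8, 1000, 200
eps = rng.standard_cauchy(T + burn)      # heavy-tailed (Cauchy) errors

# noncausal AR(1): X_t = rho * X_{t+1} + eps_t, i.e. X_t = sum_j rho^j eps_{t+j}
x = np.zeros(T + burn)
for t in range(T + burn - 2, -1, -1):    # recurse backwards from the far end
    x[t] = rho * x[t + 1] + eps[t]
x = x[:T]                                # drop the start-up segment
```

Plotting `x` shows the bubble-like local explosions followed by crashes that motivate this model class, despite the process being strictly stationary.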
Testing constancy of unconditional variance in volatility models by misspecification and specification tests
http://d.repec.org/n?u=RePEc:qut:auncer:2015_06&r=ets
The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the resulting test is vastly superior to that of the misspecification test, and its size distortion is minimal. The test has reasonable power even in very short time series, and would thus also serve as a test of constant variance in conditional mean models. An application to exchange rate returns is included.
Annastiina Silvennoinen
Timo Terasvirta
autoregressive conditional heteroskedasticity, modelling volatility, testing parameter constancy, time-varying GARCH
2015-10-28
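One simple way to test constancy of the unconditional variance *before* fitting a GARCH model is an LM-type regression of the squared observations on a low-order time polynomial. This is a generic sketch of the idea, not the authors' statistic; the variance-break design below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 1000
sig = np.where(np.arange(T) < T // 2, 1.0, 2.0)   # std doubles mid-sample
y = sig * rng.standard_normal(T)

# regress squared observations on a cubic polynomial in rescaled time
tau = np.arange(T) / T
Z = np.column_stack([np.ones(T), tau, tau**2, tau**3])
y2 = y**2
coef = np.linalg.lstsq(Z, y2, rcond=None)[0]
resid = y2 - Z @ coef
r2 = 1 - np.sum(resid**2) / np.sum((y2 - y2.mean()) ** 2)
lm = T * r2    # approximately chi-square with 3 dof under constant variance
               # (the 1% critical value of chi2(3) is about 11.34)
```

Under the deterministic variance shift the statistic should far exceed the 1% critical value; on a constant-variance series it would be small.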
Bias-Corrected Common Correlated Effects Pooled Estimation in Homogeneous Dynamic Panels
http://d.repec.org/n?u=RePEc:rug:rugwps:16/920&r=ets
This paper extends the Common Correlated Effects Pooled (CCEP) estimator designed by Pesaran (2006) to dynamic homogeneous models. For static panels, this estimator is consistent as the number of cross-sections (N) goes to infinity, irrespective of the time series dimension (T). However, it suffers from a large bias in dynamic models when T is fixed (Everaert and De Groote, 2016). We develop a bias-corrected CCEP estimator based on an asymptotic bias expression that is valid for a multi-factor error structure, provided that a sufficient number of cross-sectional averages, and lags thereof, are added to the model. We show that the resulting CCEPbc estimator is consistent as N tends to infinity, both for fixed T and as T grows large, and derive its limiting distribution. Monte Carlo experiments show that our bias correction performs very well. It is nearly unbiased, even when T and/or N are small, and hence offers a strong improvement over the severely biased CCEP estimator. CCEPbc is also found to be superior to alternative bias correction methods available in the literature in terms of bias, variance and inference.
Ignace De Vos
Gerdie Everaert
Dynamic panel data, bias, bias-correction, common correlated effects, unobserved common factors, cross-section dependence, lagged dependent variable
2016-04
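A static-panel sketch of the CCEP idea: project out the cross-sectional averages of y and x (which proxy the unobserved common factor) and then pool the OLS step. The paper's dynamic bias correction is not reproduced, and the simulation design is illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
N, T, beta = 50, 50, 1.0
f = rng.standard_normal(T)               # unobserved common factor
y_list, x_list = [], []
for i in range(N):
    gam, lam = rng.standard_normal(), rng.standard_normal()
    x = lam * f + rng.standard_normal(T)        # regressor loads on the factor
    y = beta * x + gam * f + rng.standard_normal(T)
    x_list.append(x)
    y_list.append(y)
Xp = np.array(x_list)                    # N x T
Yp = np.array(y_list)

# CCEP: augment with the cross-sectional averages of y and x, i.e.
# project them (and an intercept) out, then pool the OLS step
ybar, xbar = Yp.mean(0), Xp.mean(0)
Z = np.column_stack([np.ones(T), ybar, xbar])
P = np.eye(T) - Z @ np.linalg.pinv(Z)    # annihilator of the averages
num = den = 0.0
for i in range(N):
    xt, yt = P @ Xp[i], P @ Yp[i]
    num += xt @ yt
    den += xt @ xt
beta_ccep = num / den
```

In this static design the estimator is close to the true slope; adding a lagged dependent variable with small T is what generates the bias that the paper's CCEPbc estimator corrects.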