
on Econometric Time Series 
By:  Du Nguyen 
Abstract:  We derive an explicit formula for the likelihood function of a Gaussian VARMA model, conditioned on initial observables, where the moving-average (MA) coefficients are scalar. For fixed MA coefficients the likelihood function is optimized over the autoregressive variables $\Phi$ by a closed-form formula generalizing the regression calculation of the VAR model, with the introduction of an inner product defined by the MA coefficients. We show the assumption of scalar MA coefficients is not restrictive, and this formulation of the VARMA model shares many nice features of the VAR and MA models. The gradient and Hessian can be computed analytically. The likelihood function is preserved under the root inversion maps of the MA coefficients. We discuss constraints on the gradient of the likelihood function with moving-average unit roots. With the help of the FFT the likelihood function can be computed in $O((kp+1)^2T + ckT\log(T))$ time. Numerical calibration is required for the scalar MA variables only. The approach can be generalized to include additional drifts as well as integrated components. We discuss a relationship with the Borodin-Okounkov formula and the case of infinitely many MA components. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1604.08677&r=ets 
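The paper's closed-form optimization over $\Phi$ is not reproduced here, but the conditional-likelihood recursion it builds on can be sketched in the simplest univariate ARMA(1,1) case (a minimal illustration under our own simplifying assumptions — one lag, zero initial innovation — not the paper's vector formulation):

```python
import numpy as np

def conditional_loglik(y, phi, theta, sigma2):
    """Gaussian log-likelihood of an ARMA(1,1) model, conditional on the
    first observation and a zero initial innovation (illustrative only)."""
    e_prev, ll = 0.0, 0.0
    for t in range(1, len(y)):
        # recursive residual: e_t = y_t - phi*y_{t-1} - theta*e_{t-1}
        e_t = y[t] - phi * y[t - 1] - theta * e_prev
        ll += -0.5 * (np.log(2 * np.pi * sigma2) + e_t**2 / sigma2)
        e_prev = e_t
    return ll
```

For fixed theta, maximizing this over phi is a (weighted) least-squares problem, which is the structure the paper exploits in closed form in the vector case.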
By:  Radu T. Pruna; Maria Polukarov; Nicholas R. Jennings 
Abstract:  Building on a prominent agent-based model, we present a new structural stochastic volatility asset pricing model of fundamentalists vs. chartists where the prices are determined based on excess demand. Specifically, this allows for modelling stochastic interactions between agents, based on a herding process corrected by a price misalignment, and incorporating strong noise components in the agents' demand. The model's parameters are estimated using the method of simulated moments, where the moments reflect the basic properties of the daily returns of a stock market index. In addition, for the first time we apply a (parametric) bootstrap method in a setting where the switching between strategies is modelled using a discrete choice approach. As we demonstrate, the resulting dynamics replicate a rich set of the stylized facts of daily financial data including: heavy tails, volatility clustering, long memory in absolute returns, as well as the absence of autocorrelation in raw returns, volatility-volume correlations, aggregate Gaussianity, concave price impact and extreme price events. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1604.08824&r=ets 
By:  Louis Paulot 
Abstract:  Monte Carlo simulations of diffusion processes often introduce bias in the final result, due to time discretization. Using an auxiliary Poisson process, it is possible to run simulations which are unbiased. In this article, we propose such a Monte Carlo scheme which converges to the exact value. We manage to keep the simulation variance finite in all cases, so that the strong law of large numbers guarantees the convergence. Moreover, the simulation noise is a decreasing function of the Poisson process intensity. Our method handles multidimensional processes with non-constant drifts and non-constant variance-covariance matrices. It also encompasses stochastic interest rates. 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1605.01998&r=ets 
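The paper's scheme is specific to diffusions, but the general debiasing idea — randomizing the truncation level with an auxiliary counting variable so that the estimator's expectation equals the exact limit — can be illustrated on a toy series (a sketch of the randomized-truncation device in the spirit of Rhee and Glynn; the geometric level distribution is our choice for simplicity, whereas the paper uses a Poisson process):

```python
import numpy as np

rng = np.random.default_rng(0)

def unbiased_limit_estimate(increments, p=0.3, n_draws=100_000):
    """Single-term randomized-truncation estimator of S = sum_k d_k.
    Draw a geometric level N and return d_N / P(N = n): this is unbiased
    for S whenever the reweighted increments remain summable."""
    N = rng.geometric(p, size=n_draws)        # levels N = 1, 2, ...
    d = increments(N)                         # d_k = k-th increment
    pmf = p * (1 - p) ** (N - 1)              # P(N = k)
    return np.mean(d / pmf)

# toy target: S = sum_k 2^{-k} = 1, estimated without ever summing the tail
est = unbiased_limit_estimate(lambda k: 0.5 ** k)
```

In the diffusion setting the increments would be differences of discretization levels rather than an explicit series, but the unbiasedness argument is the same.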
By:  Donya Rahmani; Saeed Heravi; Hossein Hassani; Mansi Ghodsi 
Abstract:  This study extends and evaluates the forecasting performance of the Singular Spectrum Analysis (SSA) technique using a general nonlinear form for the recurrent formula. In this study, we consider 24 series measuring the monthly seasonally adjusted industrial production of important sectors of the German, French and UK economies. This is tested by comparing the performance of the newly proposed model with basic SSA and SSA bootstrap forecasting, especially when there is evidence of structural breaks in both in-sample and out-of-sample periods. According to the root mean-square error (RMSE), SSA using the general recursive formula outperforms both basic SSA and bootstrap forecasting at horizons of up to a year. We found no significant difference in predicting the direction of change between these methods. Therefore, it is suggested that the SSA model with the general recurrent formula should be chosen by users in the case of structural breaks in the series. 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1605.02188&r=ets 
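The generalized recurrent forecasting formula is the paper's contribution, but the basic SSA decomposition it extends is standard and compact (an illustrative sketch; the window length L and rank are user choices, not values from the paper):

```python
import numpy as np

def ssa_reconstruct(y, L, rank):
    """Basic SSA: embed the series into an L x K trajectory matrix,
    truncate its SVD at `rank`, and diagonal-average back to a series."""
    T = len(y); K = T - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # rank-r approximation
    out = np.zeros(T); counts = np.zeros(T)
    for j in range(K):                                    # diagonal averaging
        out[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return out / counts
```

Forecasts are then generated by applying a recurrent formula to the reconstructed components; the paper replaces the usual linear recurrence with a general nonlinear form.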
By:  Khizar Qureshi 
Abstract:  Value-at-Risk (VaR) is an institutional measure of risk favored by financial regulators. VaR may be interpreted as a quantile of future portfolio values conditional on the information available, where the most common quantile used is 95%. Here we demonstrate Conditional Autoregressive Value at Risk (CAViaR), first introduced by Engle and Manganelli (2001). CAViaR suggests that negative/positive returns are not i.i.d., and that there is significant autocorrelation. The model is tested using data from 1986-1999 and 1999-2009 for GM, IBM, XOM and SPX, and then validated via the dynamic quantile test. Results suggest that the tails (upper/lower quantiles) of a distribution of returns behave differently than the core. 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1605.04940&r=ets 
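For concreteness, the symmetric absolute value specification of CAViaR and the quantile (pinball) loss used to fit it can be sketched as follows (function names and parameter values are illustrative; Engle and Manganelli also propose asymmetric and indirect-GARCH specifications):

```python
import numpy as np

def caviar_sav(returns, beta, var0):
    """Symmetric absolute value CAViaR recursion:
    VaR_t = b0 + b1 * VaR_{t-1} + b2 * |r_{t-1}|."""
    b0, b1, b2 = beta
    var = np.empty(len(returns)); var[0] = var0
    for t in range(1, len(returns)):
        var[t] = b0 + b1 * var[t - 1] + b2 * abs(returns[t - 1])
    return var

def pinball_loss(returns, var, tau=0.05):
    """Quantile regression loss; minimizing it over beta fits the
    tau-quantile path directly, without a distributional assumption."""
    u = returns - var
    return np.mean((tau - (u < 0)) * u)
```

Estimation amounts to minimizing `pinball_loss` over the recursion parameters, which is why the approach models the quantile itself rather than the whole return distribution.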
By:  Ulrich Hounyo; Sílvia Gonçalves; Nour Meddahi 
Abstract:  The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach, where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that the leading martingale part in the pre-averaged returns is $k_n$-dependent with $k_n$ growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the “blocks of blocks” bootstrap method is not valid when volatility is time-varying. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first-order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first-order asymptotic theory. An empirical application illustrates its use in practice. 
Keywords:  Block bootstrap, high-frequency data, market microstructure noise, pre-averaging, realized volatility, wild bootstrap 
Date:  2016–05–09 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2016s25&r=ets 
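The pre-averaged returns at the heart of the approach can be computed over all overlapping blocks in a few lines (a sketch assuming the common weight function g(x) = min(x, 1-x); the bootstrap construction itself is more involved):

```python
import numpy as np

def preaveraged_returns(r, kn):
    """Pre-averaged returns over all overlapping blocks of kn - 1
    consecutive high-frequency returns, weighted by g(j/kn)."""
    j = np.arange(1, kn) / kn
    g = np.minimum(j, 1 - j)                  # triangular kernel g(x) = min(x, 1-x)
    n = len(r)
    return np.array([g @ r[i:i + kn - 1] for i in range(n - kn + 2)])
```

Because consecutive windows share kn - 2 returns, the resulting series is locally dependent — exactly the $k_n$-dependence that motivates block resampling in the paper.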
By:  Mackowiak, Bartosz Adam; Matejka, Filip; Wiederholt, Mirko 
Abstract:  Dynamic rational inattention problems used to be difficult to solve. This paper provides simple, analytical results for dynamic rational inattention problems. We start from the benchmark rational inattention problem. An agent tracks a variable of interest that follows a Gaussian process. The agent chooses how to pay attention to this variable. The agent aims to minimize, say, the mean squared error subject to a constraint on information flow, as in Sims (2003). We prove that if the variable of interest follows an ARMA(p,q) process, the optimal signal is about a linear combination of {X(t),...,X(t-p+1)} and {e(t),...,e(t-q+1)}, where X(t) denotes the variable of interest and e(t) denotes its period t innovation. The optimal signal weights can be computed from a simple extension of the Kalman filter: the usual Kalman filter equations in combination with first-order conditions for the optimal signal weights. We provide several analytical results regarding those signal weights. We also prove the equivalence of several different formulations of the information flow constraint. We conclude with general equilibrium applications from macroeconomics. 
Keywords:  Kalman filter; Macroeconomics; rational inattention 
JEL:  C61 D83 E30 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:11237&r=ets 
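The "simple extension of the Kalman filter" builds on the standard scalar filter, which for an AR(1) state observed with noise reads as follows (illustrative; the paper's first-order conditions for the optimal signal weights are not reproduced here):

```python
import numpy as np

def kalman_ar1(y, rho, q, r):
    """Scalar Kalman filter for x_t = rho*x_{t-1} + w_t, y_t = x_t + v_t,
    with Var(w) = q and Var(v) = r; returns filtered means and variances."""
    xhat, P = 0.0, q / (1 - rho**2)       # start from the stationary prior
    means, varis = [], []
    for obs in y:
        xhat, P = rho * xhat, rho**2 * P + q   # predict
        K = P / (P + r)                        # Kalman gain
        xhat = xhat + K * (obs - xhat)         # update
        P = (1 - K) * P
        means.append(xhat); varis.append(P)
    return np.array(means), np.array(varis)
```

In the rational inattention problem the observation noise variance r is not given but chosen optimally subject to the information-flow constraint, which is where the paper's extension enters.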
By:  Robert Kollmann 
Abstract:  This paper discusses a tractable approach for computing the likelihood function of nonlinear Dynamic Stochastic General Equilibrium (DSGE) models that are solved using second- and third-order accurate approximations. In contrast to particle filters, the method requires no stochastic simulations; it is hence much faster and thus suitable for the estimation of medium-scale models. The method assumes that the number of exogenous innovations equals the number of observables. Given an assumed vector of initial states, the exogenous innovations can thus be inferred recursively from the observables, which makes the likelihood function easy to compute. Initial states and model parameters are estimated by maximizing the likelihood function. Numerical examples suggest that the method provides reliable estimates of model parameters and of latent state variables, even for highly nonlinear economies with big shocks. 
Keywords:  likelihood-based estimation of nonlinear DSGE models; higher-order approximations; pruning; latent state variables 
JEL:  C63 C68 E37 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/228887&r=ets 
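The recursive inference of innovations from observables is easy to illustrate in a linear state-space toy where the state itself is observed (the paper's setting is nonlinear second- and third-order approximations; this sketch only shows the square-system inversion logic, with names of our own choosing):

```python
import numpy as np

def invert_innovations(y, A, B, x0):
    """With as many shocks as observables (here y_t = x_t for simplicity),
    innovations follow recursively from e_t = B^{-1}(x_t - A x_{t-1}),
    given the assumed initial state x0."""
    Binv = np.linalg.inv(B)
    x_prev, shocks = x0, []
    for x in y:
        shocks.append(Binv @ (x - A @ x_prev))
        x_prev = x
    return np.array(shocks)
```

Once the innovations are recovered, the Gaussian likelihood is a simple function of them; maximizing it over (x0, parameters) is the estimation step described in the abstract.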
By:  Georgiev, Iliyan; Harvey, David I; Leybourne, Stephen J; Taylor, A M Robert 
Abstract:  We examine how the familiar spurious regression problem can manifest itself in the context of recently proposed predictability tests. For these tests to provide asymptotically valid inference, account has to be taken of the degree of persistence of the putative predictors. Failure to do so can lead to spurious over-rejections of the no-predictability null hypothesis. A number of methods have been developed to achieve this. However, these approaches all make an underlying assumption that any predictability in the variable of interest is purely attributable to the predictors under test, rather than to any unobserved persistent latent variables, themselves uncorrelated with the predictors being tested. We show that where this assumption is violated, something that could very plausibly happen in practice, sizeable (spurious) rejections of the null can occur in cases where the variables under test are not valid predictors. In response, we propose a screening test for predictive regression invalidity based on a stationarity testing approach. In order to allow for an unknown degree of persistence in the putative predictors, and for both conditional and unconditional heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We establish the asymptotic validity of this bootstrap test, which entails establishing a conditional invariance principle along with its bootstrap counterpart, both of which appear to be new to the literature and are likely to have important applications beyond the present context. We also show how our bootstrap test can be used, in conjunction with extant predictability tests, to deliver a two-step feasible procedure. Monte Carlo simulations suggest that our proposed bootstrap methods work well in finite samples. An illustration employing U.S. stock returns data demonstrates the practical usefulness of our procedures. 
Keywords:  Predictive regression; causality; persistence; spurious regression; stationarity test; fixed regressor wild bootstrap; conditional distribution. 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:16666&r=ets 
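A single fixed-regressor wild bootstrap draw — keeping the regressors fixed and multiplying residuals by random signs so that any heteroskedasticity pattern is preserved — can be sketched as follows (a textbook variant with Rademacher multipliers; the paper's procedure is built around a stationarity statistic, not plain OLS):

```python
import numpy as np

rng = np.random.default_rng(1)

def wild_bootstrap_sample(y, X):
    """One fixed-regressor wild bootstrap draw: fit OLS, then rebuild
    y* from fitted values plus sign-flipped (Rademacher) residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    v = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher multipliers
    return X @ beta + resid * v
```

Repeating the draw many times and recomputing the test statistic on each y* yields the bootstrap distribution against which the original statistic is compared.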
By:  Rinke, Saskia 
Abstract:  In this paper the performance of information criteria and of a test against SETAR nonlinearity in outlier-contaminated time series is investigated. Additive outliers can seriously influence the properties of the underlying time series, and hence of linearity tests, resulting in spurious test decisions of nonlinearity. Using simulation studies, the performance of the information criteria SIC and WIC as an alternative to linearity tests is assessed in time series with different degrees of persistence and different outlier magnitudes. For uncontaminated series and a small sample size the performance of SIC and WIC is similar to the performance of the linearity test at the $5\%$ and $10\%$ significance level, respectively. For an increasing number of observations the size of SIC and WIC tends to zero. In contaminated series the size of the test and of the information criteria increases with the outlier magnitude and the degree of persistence. SIC and WIC clearly outperform the test for larger samples and larger outlier magnitudes. The power of the test and of the information criteria depends on the sample size and on the difference between the regimes. The more distinct the regimes and the larger the sample, the higher the power. Additive outliers decrease the power in distinct regimes in small samples and in intermediate regimes in large samples, but increase the power in similar regimes. Due to their higher robustness in terms of size, information criteria are a valuable alternative to linearity tests for outlier-contaminated time series. 
Keywords:  Additive Outliers, Nonlinear Time Series, Information Criteria, Linearity Test, Monte Carlo 
JEL:  C15 C22 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp575&r=ets 
By:  Mario Forni; Marc Hallin; Marco Lippi; Paolo Zaffaroni 
Abstract:  Factor models, all particular cases of the Generalized Dynamic Factor Model (GDFM) introduced in Forni, Hallin, Lippi and Reichlin (2000), have become extremely popular in the theory and practice of large panels of time series data. The asymptotic properties (consistency and rates) of the corresponding estimators have been studied in Forni, Hallin, Lippi and Reichlin (2004). Those estimators, however, rely on Brillinger’s dynamic principal components, and thus involve two-sided filters, which leads to rather poor forecasting performances. No such problem arises with estimators based on standard (static) principal components, which have been dominant in this literature. On the other hand, the consistency of those static estimators requires the assumption that the space spanned by the factors has finite dimension, which severely restricts the generality afforded by the GDFM. This paper derives the asymptotic properties of a semiparametric estimator of the loadings and common shocks based on one-sided filters recently proposed by Forni, Hallin, Lippi and Zaffaroni (2015). Consistency and exact rates of convergence are obtained for this estimator, under a general class of GDFMs that does not require a finite-dimensional factor space. A Monte Carlo experiment and an empirical exercise on US macroeconomic data corroborate those theoretical results and demonstrate the excellent performance of those estimators in out-of-sample forecasting. 
Keywords:  High-dimensional time series. Generalized dynamic factor models. Vector processes with singular spectral density. One-sided representations of dynamic factor models. Consistency and rates 
JEL:  C0 C01 E0 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:mod:recent:115&r=ets 
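The static principal-components estimator against which the one-sided GDFM estimator is contrasted can be written in a few lines (a standard sketch of the benchmark, not the Forni-Hallin-Lippi-Zaffaroni estimator itself):

```python
import numpy as np

def pc_factors(X, r):
    """Static principal-components estimator of an approximate factor
    model X = F L' + e: factors are the top-r eigenvectors of XX'/T,
    normalized so that F'F/T = I."""
    X = X - X.mean(0)
    T = X.shape[0]
    vals, vecs = np.linalg.eigh(X @ X.T / T)   # eigenvalues in ascending order
    F = np.sqrt(T) * vecs[:, ::-1][:, :r]      # top-r eigenvectors, rescaled
    L = X.T @ F / T                            # loadings by cross-sectional regression
    return F, L
```

Its consistency requires a finite-dimensional factor space — precisely the restriction the paper's one-sided estimator dispenses with.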
By:  Mario Forni; Luca Gambetti; Luca Sala 
Abstract:  A shock of interest can be recovered, either exactly or with a good approximation, by means of standard VAR techniques even when the structural MA representation is non-invertible or non-fundamental. We propose a measure of how informative a VAR model is for a specific shock of interest. We show how to use such a measure for the validation of the shock transmission mechanisms of DSGE models through VARs. In an application, we validate a theory of news shocks. The theory does remarkably well for all variables, but understates the long-run effects of technology news on TFP. 
Keywords:  invertibility, non-fundamentalness, news shocks, DSGE model validation, structural VAR 
JEL:  C32 E32 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:mod:recent:119&r=ets 
By:  Gouriéroux, Christian; Zakoian, JeanMichel 
Abstract:  The noncausal autoregressive process with heavy-tailed errors possesses a nonlinear causal dynamics, which allows for local explosions or asymmetric cycles often observed in economic and financial time series. It provides a new model for multiple local explosions in a strictly stationary framework. The causal predictive distribution displays surprising features, such as the existence of higher moments than for the marginal distribution, or the presence of a unit root in the Cauchy case. Aggregating such models can yield complex dynamics with local and global explosions as well as variation in the rate of explosion. The asymptotic behavior of a vector of sample autocorrelations is studied in a semiparametric noncausal AR(1) framework with Pareto-like tails, and diagnostic tests are proposed. Empirical results based on the Nasdaq composite price index are provided. 
Keywords:  Causal innovation; Explosive bubble; Heavy-tailed errors; Noncausal process; Stable process 
JEL:  C13 C22 C52 
Date:  2016–05–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:71105&r=ets 
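A noncausal AR(1) with heavy-tailed errors is easy to simulate by running the recursion backward in time, which is how the local explosions arise (an illustrative sketch; the value of rho and the Cauchy error choice are ours, and the zero terminal condition is handled by discarding a burn-in segment):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_noncausal_ar1(rho, T, burn=200):
    """Simulate the noncausal AR(1) y_t = rho * y_{t+1} + e_t with Cauchy
    errors by iterating backward from a zero terminal value; the `burn`
    observations nearest the terminal condition are discarded."""
    e = rng.standard_cauchy(T + burn)
    y = np.zeros(T + burn)
    for t in range(T + burn - 2, -1, -1):
        y[t] = rho * y[t + 1] + e[t]
    return y[:T]

path = simulate_noncausal_ar1(0.8, 500)
```

Read forward in time, the simulated path shows episodes that build up geometrically and then collapse — the bubble-like pattern the abstract refers to — while remaining strictly stationary.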
By:  Annastiina Silvennoinen (QUT); Timo Terasvirta (CREATES) 
Abstract:  The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test, and the size distortion is minimal. The test has reasonable power even in very short time series. It would thus serve as a test of constant variance in conditional mean models. An application to exchange rate returns is included. 
Keywords:  autoregressive conditional heteroskedasticity, modelling volatility, testing parameter constancy, time-varying GARCH 
JEL:  C32 C52 
Date:  2015–10–28 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2015_06&r=ets 
By:  Ignace De Vos; Gerdie Everaert 
Abstract:  This paper extends the Common Correlated Effects Pooled (CCEP) estimator designed by Pesaran (2006) to dynamic homogeneous models. For static panels, this estimator is consistent as the number of cross-sections (N) goes to infinity, irrespective of the time series dimension (T). However, it suffers from a large bias in dynamic models when T is fixed (Everaert and De Groote, 2016). We develop a bias-corrected CCEP estimator based on an asymptotic bias expression that is valid for a multifactor error structure provided that a sufficient number of cross-sectional averages, and lags thereof, are added to the model. We show that the resulting CCEPbc estimator is consistent as N tends to infinity, both for fixed T and for T growing large, and derive its limiting distribution. Monte Carlo experiments show that our bias correction performs very well. It is nearly unbiased, even when T and/or N are small, and hence offers a strong improvement over the severely biased CCEP estimator. CCEPbc is also found to be superior to alternative bias correction methods available in the literature in terms of bias, variance and inference. 
Keywords:  Dynamic panel data, bias, bias-correction, common correlated effects, unobserved common factors, cross-section dependence, lagged dependent variable 
JEL:  C23 C13 C15 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:16/920&r=ets 