
on Econometric Time Series 
By:  Torben G. Andersen (Northwestern University and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES) 
Abstract:  This paper studies the properties of standard predictive regressions in model economies, characterized through persistent vector autoregressive dynamics for the state variables and the associated series of interest. In particular, we consider a setting where all, or a subset, of the variables may be fractionally integrated, and note that this induces a spurious regression problem. We then propose a new inference and testing procedure, the local spectrum (LCM) approach, for the joint significance of the regressors, which is robust against the variables having different integration orders. The LCM procedure is based on (semi)parametric fractional filtering and band spectrum regression using a suitably selected set of frequency ordinates. We establish the asymptotic properties and explain how they differ from and extend existing procedures. Using these new inference and testing techniques, we explore the implications of assuming VAR dynamics in predictive regressions for the realized return variation. Standard least squares predictive regressions indicate that popular financial and macroeconomic variables carry valuable information about return volatility. In contrast, we find no significant evidence using our robust LCM procedure, indicating that prior conclusions may be premature. In fact, if anything, our results suggest the reverse causality, i.e., rising volatility predates adverse innovations to key macroeconomic variables. Simulations are employed to illustrate the relevance of the theoretical arguments for finite-sample inference. 
Keywords:  Endogeneity Bias, Fractional Integration, Frequency Domain Inference, Hypothesis Testing, Spurious Inference, Stochastic Volatility, VAR Models 
JEL:  C13 C14 C32 C52 C53 G12 
Date:  2018–02–27 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201809&r=all 
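The band spectrum regression step at the core of the LCM procedure can be illustrated with a stripped-down sketch: regress the discrete Fourier transform of the dependent variable on that of the regressor over a chosen band of frequency ordinates. This omits the paper's fractional filtering and ordinate-selection theory entirely; the simulated series, the band size `m`, and the slope are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)  # true slope is 0.5

# Discrete Fourier transforms of regressand and regressor
fy = np.fft.fft(y)
fx = np.fft.fft(x)

# Band spectrum regression: run (complex) least squares only over a selected
# band of frequency ordinates, here the m lowest nonzero frequencies
m = 128
idx = np.arange(1, m + 1)
wy, wx = fy[idx], fx[idx]
beta_hat = np.real(np.sum(np.conj(wx) * wy) / np.sum(np.conj(wx) * wx))
```

Restricting the estimation to a frequency band is what allows inference to focus on ordinates where the different integration orders do not contaminate the regression.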
By:  Liang, Chong; Schienle, Melanie 
Abstract:  We provide a shrinkage-type methodology which allows for simultaneous model selection and estimation of vector error correction models (VECM) when the dimension is large and can increase with sample size. Model determination is treated as a joint selection problem of cointegrating rank and autoregressive lags under respective practically valid sparsity assumptions. We show consistency of the selection mechanism by the resulting Lasso-VECM estimator under very general assumptions on dimension, rank and error terms. Moreover, with the computational complexity of a linear programming problem only, the procedure remains computationally tractable in high dimensions. We demonstrate the effectiveness of the proposed approach by a simulation study and an empirical application to recent CDS data after the financial crisis. 
Keywords:  High-dimensional time series, VECM, Cointegration rank and lag selection, Lasso, Credit Default Swap 
JEL:  C32 C52 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:124&r=all 
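The paper's Lasso-VECM estimator jointly selects cointegrating rank and lags; as a much simpler illustration of the shrinkage mechanism, the sketch below runs a plain Lasso, via cyclic coordinate descent with soft-thresholding, on lag selection in a univariate autoregression. The penalty level `lam` and the data-generating process are illustrative assumptions, not the paper's data-driven tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 400, 6  # sample size and number of candidate lags

# Simulate an autoregression in which only lags 1 and 2 matter
y = np.zeros(T + p)
for t in range(p, T + p):
    y[t] = 0.4 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()
Y = y[p:]
X = np.column_stack([y[p - j:-j] for j in range(1, p + 1)])

# Lasso via cyclic coordinate descent with soft-thresholding
lam = 60.0  # penalty level (illustrative, not a data-driven choice)
beta = np.zeros(p)
for _ in range(200):
    for j in range(p):
        r = Y - X @ beta + X[:, j] * beta[j]   # partial residual
        z = X[:, j] @ r
        beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / (X[:, j] @ X[:, j])

selected = np.nonzero(beta)[0] + 1  # lag indices surviving the penalty
```

The soft-threshold update is what zeroes out redundant lags, giving simultaneous selection and estimation in one pass.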
By:  Changli He (Tianjin University of Finance and Economics); Jian Kang (Tianjin University of Finance and Economics); Timo Teräsvirta (CREATES and Aarhus University, C.A.S.E., Humboldt-Universität zu Berlin); Shuhua Zhang (Tianjin University of Finance and Economics) 
Abstract:  In this paper we introduce an autoregressive model with seasonal dummy variables in which the coefficients of the seasonal dummies vary smoothly and deterministically over time. The error variance of the model is seasonally heteroskedastic and multiplicatively decomposed, the decomposition being similar to that in well-known ARCH and GARCH models. This variance is also allowed to be smoothly and deterministically time-varying. Under regularity conditions, consistency and asymptotic normality of the maximum likelihood estimators of the parameters of this model are proved. A test of constancy of the seasonal coefficients is derived. The test is generalised to specifying the parametric structure of the model. A test of constancy over time of the heteroskedastic error variance is presented. The purpose of building this model is to use it for describing changing seasonality in the well-known monthly central England temperature series. More specifically, the idea is to find out in which way and by how much the monthly temperatures are varying over time during the period of more than 240 years, if they are varying at all. Misspecification tests are applied to the estimated model and the findings are discussed. 
Keywords:  global warming, nonlinear time series, changing seasonality, smooth transition, testing constancy 
JEL:  C22 C51 C52 Q54 
Date:  2018–04–25 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201815&r=all 
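A minimal sketch of deterministically time-varying seasonal dummy coefficients, assuming a single logistic (smooth-transition) function of time and estimating by OLS on dummy-transition interactions; the authors' model, with its multiplicative variance decomposition and ML estimation, is considerably richer. The transition speed, seasonal patterns, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, S = 1200, 12  # a century of monthly observations

t = np.arange(T)
season = t % S
# Smooth deterministic logistic transition over the sample, as in
# smooth-transition models
g = 1.0 / (1.0 + np.exp(-8.0 * (t / T - 0.5)))

# Seasonal coefficients drift from an early- to a late-sample pattern
delta0 = np.sin(2 * np.pi * np.arange(S) / S)
delta1 = 1.5 * np.sin(2 * np.pi * np.arange(S) / S)
y = (1 - g) * delta0[season] + g * delta1[season] + 0.3 * rng.standard_normal(T)

# Regress on seasonal dummies and their interactions with the transition
D = np.zeros((T, S))
D[t, season] = 1.0
X = np.hstack([D, D * g[:, None]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
shift_hat = beta[S:]  # estimated change in each month's seasonal coefficient
```

Testing constancy of the seasonal coefficients amounts to testing whether the interaction coefficients (here `shift_hat`) are jointly zero.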
By:  Jean-Jacques Forneron 
Abstract:  This paper proposes a Sieve Simulated Method of Moments (Sieve-SMM) estimator for the parameters and the distribution of the shocks in nonlinear dynamic models where the likelihood and the moments are not tractable. An important concern with SMM, which matches sample moments with simulated moments, is that a parametric distribution is required, but economic quantities that depend on this distribution, such as welfare and asset prices, can be sensitive to misspecification. The Sieve-SMM estimator addresses this issue by flexibly approximating the distribution of the shocks with a Gaussian and tails mixture sieve. The asymptotic framework provides consistency, rate of convergence and asymptotic normality results, extending existing sieve estimation theory to a new framework with more general dynamics and latent variables. Monte Carlo simulations illustrate the finite sample properties of the estimator. Two empirical applications highlight the importance of the distribution of the shocks for estimates and counterfactuals. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01456&r=all 
By:  Conrad, Christian; Schienle, Melanie 
Abstract:  We consider the problem of testing for an omitted multiplicative long-term component in GARCH-type models. Under the alternative, there is a two-component model with a short-term GARCH component that fluctuates around a smoothly time-varying long-term component which is driven by the dynamics of an explanatory variable. We suggest a Lagrange Multiplier statistic for testing the null hypothesis that the variable has no explanatory power. We derive the asymptotic theory for our test statistic and investigate its finite sample properties by Monte Carlo simulation. Our test also covers the mixed-frequency case in which the returns are observed at a higher frequency than the explanatory variable. The usefulness of our procedure is illustrated by empirical applications to S&P 500 return data. 
Keywords:  GARCH-MIDAS, LM Test, Long-Term Volatility, Mixed-Frequency Data, Volatility Component Models 
JEL:  C53 C58 E32 G12 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:121&r=all 
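An LM-type check in this spirit can be sketched as follows: under the null of no omitted long-term component, the squared standardized returns should be unpredictable by the candidate explanatory variable, so T times the R-squared of an auxiliary regression is compared with a chi-square critical value. The GARCH recursion, the use of the true conditional variance in place of fitted values, and the single-regressor auxiliary form are simplifying assumptions, not the authors' statistic.

```python
import numpy as np

rng = np.random.default_rng(12)
T = 3000

# Simulate returns from a plain GARCH(1,1); x is a candidate long-term driver
# that, under the null, has no explanatory power for volatility
x = rng.standard_normal(T)
r = np.empty(T)
sigma2 = np.empty(T)
sigma2[0] = 1.0  # unconditional variance of this parametrization
for t in range(T):
    if t > 0:
        sigma2[t] = 0.05 + 0.10 * r[t - 1] ** 2 + 0.85 * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# LM-type statistic: regress the 'volatility surprises' on the candidate
# variable; under H0, T * R^2 is approximately chi-square(1)
u = r**2 / sigma2 - 1.0  # true sigma2 stands in for fitted values here
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, u, rcond=None)
resid = u - X @ b
r2 = 1.0 - resid @ resid / np.sum((u - u.mean()) ** 2)
lm_stat = T * r2  # compare with e.g. 3.84, the 5% chi-square(1) critical value
```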
By:  Alexander Heinemann 
Abstract:  This paper studies the joint inference on conditional volatility parameters and innovation moments by means of the bootstrap, in order to test for the existence of moments of GARCH(p,q) processes. We propose a residual bootstrap to mimic the joint distribution of the quasi-maximum likelihood estimators and the empirical moments of the residuals, and prove its validity. A bootstrap-based test for the existence of moments is proposed, which provides asymptotically correctly sized tests without losing consistency. It is simple to implement and extends to other GARCH-type settings. A simulation study demonstrates the test's size and power properties in finite samples, and an empirical application illustrates the testing approach. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01808&r=all 
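The logic of bootstrapping a residual-based moment condition can be sketched for GARCH(1,1) with Gaussian innovations, where the fourth moment of the process exists iff E[(αz² + β)²] < 1. Below, fixed parameter values and i.i.d. standard normal "residuals" stand in for QMLE output; the paper's procedure bootstraps the estimators and residual moments jointly, which this sketch deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical GARCH(1,1) estimates and residuals standing in for QMLE output
alpha, beta = 0.08, 0.90
z = rng.standard_normal(2000)

# E[y^4] < infinity iff E[(alpha*z^2 + beta)^2] < 1, so we bootstrap the
# empirical counterpart of that moment from the residuals
def moment_stat(res):
    return np.mean((alpha * res**2 + beta) ** 2)

stat = moment_stat(z)
B = 999
boot = np.array([moment_stat(rng.choice(z, size=z.size, replace=True))
                 for _ in range(B)])
p_value = np.mean(boot >= 1.0)  # bootstrap evidence against moment existence
```

A small p-value here would indicate the resampled moment rarely breaches the existence boundary, i.e., no evidence against a finite fourth moment.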
By:  Leopoldo Catania (Aarhus University and CREATES); Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata") 
Abstract:  The prediction of volatility is of primary importance for business applications in risk management, asset allocation and pricing of derivative instruments. This paper proposes a novel measurement model which takes into consideration the possibly time-varying interaction of realized volatility and asset returns, according to a bivariate model aimed at capturing the main stylised facts: (i) the long memory of the volatility process, (ii) the heavy-tailedness of the returns distribution, and (iii) the negative dependence of volatility and daily market returns. We assess the relevance of "volatility in volatility" and time-varying "leverage" effects in the out-of-sample forecasting performance of the model, and evaluate the density forecasts of the future level of market volatility. The empirical results illustrate that our specification can outperform the benchmark HAR-RV, in terms of both point and density forecasts. 
Keywords:  realized volatility, forecasting, leverage effect, volatility in volatility 
Date:  2019–02–06 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:450&r=all 
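The HAR-RV benchmark the authors compare against is itself easy to sketch: regress next-day realized variance on its most recent daily value and its weekly and monthly averages. The AR(1) stand-in for the realized-variance series is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1500

# Simulate a persistent series as a stand-in for (log) realized variance
rv = np.empty(T)
rv[0] = 0.0
for t in range(1, T):
    rv[t] = 0.97 * rv[t - 1] + 0.3 * rng.standard_normal()

def lagged_mean(x, h):
    """Means of h consecutive observations; element i averages x[i:i+h]."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[h:] - c[:-h]) / h

# HAR-RV: regress tomorrow's RV on daily, weekly (5d) and monthly (22d) terms,
# all aligned so each regressor uses information up to day t-1 only
d = rv[21:-1]
w = lagged_mean(rv, 5)[17:-1]
m = lagged_mean(rv, 22)[:-1]
y = rv[22:]
X = np.column_stack([np.ones_like(d), d, w, m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The three slope coefficients together capture the slowly decaying (long-memory-like) autocorrelation that makes HAR-RV a strong point-forecast benchmark.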
By:  Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES) 
Abstract:  We provide unifying inference theory for parametric nonlinear factor models based on a panel of noisy observations. The panel has a large cross-section and a time span that may be either small or large. Moreover, we incorporate an additional source of information provided by noisy observations on some known functions of the factor realizations. The estimation is carried out via penalized least squares, i.e., by minimizing the L_2 distance between observations from the panel and their model-implied counterparts, augmented by a penalty for the deviation of the extracted factors from the noisy signals for them. When the time dimension is fixed, the limit distribution of the parameter vector is mixed Gaussian with conditional variance depending on the path of the factor realizations. On the other hand, when the time span is large, the convergence rate is faster and the limit distribution is Gaussian with a constant variance. In this case, however, we incur an incidental parameter problem since, at each point in time, we need to recover the concurrent factor realizations. This leads to an asymptotic bias that is absent in the setting with a fixed time span. In either scenario, the limit distribution of the estimates for the factor realizations is mixed Gaussian, but is related to the limiting distribution of the parameter vector only in the scenario with a fixed time horizon. Although the limit behavior is very different for the small versus large time span, we develop a feasible inference theory that applies, without modification, in either case. Hence, the user need not take a stand on the relative size of the time dimension of the panel. Similarly, we propose a time-varying data-driven weighting of the penalty in the objective function, which enhances efficiency by adapting to the relative quality of the signal for the factor realizations. 
Keywords:  Asymptotic Bias, Incidental Parameter Problem, Inference, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Volatility 
JEL:  C51 C52 G12 
Date:  2018–01–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201803&r=all 
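A linearized sketch of the penalized least squares idea: with loadings B, a cross-section of observations y, and a noisy external signal s on the factors, minimizing ||y − Bf||² + λ||f − s||² over f has a closed form. The paper treats nonlinear models, joint parameter estimation, and a data-driven time-varying λ; the dimensions, λ, and noise levels below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 200, 2  # cross-section size, number of factors

B = rng.standard_normal((N, K))      # loadings, treated as known (linearized)
f_true = np.array([1.0, -0.5])
y = B @ f_true + 0.5 * rng.standard_normal(N)
s = f_true + 0.2 * rng.standard_normal(K)  # noisy signal on the factors

lam = 5.0  # penalty weight on deviation from the signal
# Penalized least squares: min_f ||y - B f||^2 + lam * ||f - s||^2
# First-order condition gives (B'B + lam*I) f = B'y + lam*s
f_hat = np.linalg.solve(B.T @ B + lam * np.eye(K), B.T @ y + lam * s)
```

The weighting idea in the paper corresponds to letting `lam` vary over time with the quality of the signal `s`.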
By:  Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Velasco (Universidad Carlos III de Madrid) 
Abstract:  We consider large N,T panel data models with fixed effects, a common factor allowing for cross-section dependence, and persistent data and shocks, which are assumed fractionally integrated. In a basic setup, the main interest is in the fractional parameter of the idiosyncratic component, which is estimated in first differences after factor removal by projection on the cross-section average. The pooled conditional-sum-of-squares estimate is root-NT consistent, but the normal asymptotic distribution might not be centered, requiring the time series dimension to grow faster than the cross-section size for correction. We develop tests of homogeneity of the dynamics, including the degree of integration, that have nontrivial power under local departures from the null hypothesis for a non-negligible fraction of cross-section units. A simulation study shows that our estimates and tests have good performance even in moderately small panels. 
Keywords:  Fractional integration, panel data, factor models, long memory, homogeneity test 
JEL:  C22 C23 
Date:  2018–03–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201811&r=all 
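The fractional differencing underlying such fractionally integrated panels can be sketched directly from the binomial expansion of (1 − L)^d; applying the truncated (type-II) filter with order −d and then d recovers the innovations exactly, since the two coefficient sequences convolve to the identity. The series length and the value of d below are illustrative.

```python
import numpy as np

def frac_diff(x, d):
    """Apply the truncated fractional difference filter (1 - L)^d."""
    T = x.size
    w = np.empty(T)
    w[0] = 1.0
    for k in range(1, T):
        w[k] = w[k - 1] * (k - 1 - d) / k   # binomial coefficients of (1-L)^d
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(T)])

rng = np.random.default_rng(11)
eps = rng.standard_normal(500)
d = 0.4
x = frac_diff(eps, -d)   # fractionally integrate white noise to order d
back = frac_diff(x, d)   # fractional differencing recovers the innovations
```

For d = 1 the filter collapses to the ordinary first difference, which is the boundary case between the stationary and nonstationary fractional regions.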
By:  Huang, Wenxin (Antai College of Economics and Management, Shanghai Jiao Tong University); Jin, Sainan (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University) 
Abstract:  We consider a panel cointegration model with latent group structures that allows for heterogeneous long-run relationships across groups. We extend Su, Shi, and Phillips’ (2016) classifier-Lasso (C-Lasso) method to nonstationary panels and allow for the presence of endogeneity in both the stationary and nonstationary regressors in the model. In addition, we allow the dimension of the stationary regressors to diverge with the sample size. We show that we can identify the individuals’ group membership and estimate the group-specific long-run cointegrated relationships simultaneously. We demonstrate the desirable property of uniform classification consistency and the oracle properties of both the C-Lasso estimators and their post-Lasso versions. The special case of dynamic penalized least squares is also studied. Simulations show superb finite sample performance in both classification and estimation. In an empirical application, we study potentially heterogeneous behavior in testing the validity of the long-run purchasing power parity (PPP) hypothesis in the post-Bretton Woods period 1975–2014, covering 99 countries. We identify two groups in the period 1975–1998 and three in the period 1999–2014. The results confirm that at least some countries favor the long-run PPP hypothesis in the post-Bretton Woods period. 
Keywords:  Classifier-Lasso; Dynamic OLS; Heterogeneity; Latent group structure; Nonstationarity; Penalized least squares; Panel cointegration; Purchasing power parity 
JEL:  C13 C33 C51 F31 
Date:  2018–11–20 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2019_003&r=all 
By:  Niels Haldrup (Aarhus University and CREATES); Carsten P. T. Rosenskjold (Aarhus University and CREATES) 
Abstract:  The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality in which multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common to all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects the mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, they are not designed to be orthogonal but can be dependent and may possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares procedure based on cross-section regressions together with a separate model for the dynamics of the factors. Second, a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is cast in state space form. We demonstrate the methodology on US and French mortality data. We find that the model provides a good fit of the relevant factors, and a forecast comparison with a range of benchmark models shows that, especially for longer horizons, variants of the parametric factor model have excellent forecast performance. 
Keywords:  Mortality Forecasting, Term Structure of Mortality, Factor Modelling, Cointegration 
JEL:  C1 C22 J10 J11 G22 
Date:  2018–01–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201806&r=all 
By:  Murasawa, Yasutomo 
Abstract:  The consumption Euler equation implies that the output growth rate and the real interest rate are of the same order of integration; thus, if the real interest rate is I(1), then so is the output growth rate, with possible cointegration, and log output is I(2). This paper extends the multivariate Beveridge-Nelson decomposition to such a case, and develops a Bayesian method to obtain error bands. The paper applies the method to US data to estimate the natural rates (or their permanent components) and gaps of output, inflation, interest, and unemployment jointly, and finds that allowing for cointegration gives much bigger estimates of all gaps. 
Keywords:  Natural rate, Output gap, Trend-cycle decomposition, Trend inflation, Unit root, Vector error correction model (VECM) 
JEL:  C11 C32 C82 E32 
Date:  2019–02–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:91979&r=all 
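The univariate Beveridge-Nelson decomposition conveys the mechanics: if output growth is AR(1) with persistence φ and mean μ, the permanent component is the long-horizon forecast, trend_t = y_t + φ/(1 − φ)(Δy_t − μ), and the gap is the remainder. The paper's multivariate, cointegrated, Bayesian treatment is far more general; the AR(1) parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
T, phi, mu = 600, 0.6, 0.5

# Simulate log output whose growth rate is AR(1), so the level is I(1)
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + 0.4 * rng.standard_normal()
y = np.cumsum(dy)

# Estimate the AR(1) growth dynamics
x0, x1 = dy[:-1], dy[1:]
phi_hat = np.cov(x0, x1)[0, 1] / np.var(x0)
mu_hat = np.mean(dy)

# Beveridge-Nelson: the trend is the limit of the long-horizon forecast
trend = y + (phi_hat / (1 - phi_hat)) * (dy - mu_hat)
gap = y - trend  # transitory (cyclical) component
```

Allowing for cointegration in the multivariate version changes the implied long-horizon forecasts, which is exactly why the paper finds larger gap estimates.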
By:  Eric Beutner; Alexander Heinemann; Stephan Smeekes 
Abstract:  In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions and formally verify them for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643), who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant applications of their theory. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01622&r=all 
By:  Torben G. Andersen (Northwestern University, NBER, and CREATES); Martin Thyrsgaard (Aarhus University and CREATES); Viktor Todorov (Northwestern University) 
Abstract:  We develop a nonparametric test for deciding whether return volatility exhibits time-varying intraday periodicity using a long time series of high-frequency data. Our null hypothesis, commonly adopted in work on volatility modeling, is that volatility follows a stationary process combined with a constant time-of-day periodic component. We first construct time-of-day volatility estimates and studentize the high-frequency returns with these periodic components. If the intraday volatility periodicity is invariant over time, then the distribution of the studentized returns should be identical across the trading day. Consequently, the test is based on comparing the empirical characteristic function of the studentized returns across the trading day. The limit distribution of the test depends on the error in recovering volatility from discrete return data and on the empirical process error associated with estimating volatility moments through their sample counterparts. Critical values are computed via an easy-to-implement simulation. In an empirical application to S&P 500 index returns, we find strong evidence of variation in the intraday volatility pattern, driven in part by the current level of volatility. When market volatility is elevated, the period preceding the market close constitutes a significantly higher fraction of total daily integrated volatility than is the case during low market volatility regimes. 
Keywords:  high-frequency data, periodicity, semimartingale, specification test, stochastic volatility 
JEL:  C51 C52 G12 
Date:  2018–01–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201805&r=all 
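The studentization step of the test can be sketched on a simulated return panel with a fixed U-shaped intraday volatility pattern: estimate time-of-day volatility from the panel, studentize, and check that the studentized returns have the same second moment in every intraday bin. The actual test compares empirical characteristic functions and accounts for estimation error; the pattern shape and panel dimensions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
days, bins = 500, 39  # e.g. 10-minute returns over a 6.5-hour trading day

# Deterministic U-shaped intraday volatility pattern
u = np.linspace(0.0, 1.0, bins)
tod = 1.0 + 0.8 * (2.0 * u - 1.0) ** 2
r = tod[None, :] * rng.standard_normal((days, bins))

# Time-of-day volatility estimates from the panel of high-frequency returns
tod_hat = np.sqrt(np.mean(r**2, axis=0))

# Studentize: under a time-invariant periodicity, the studentized returns
# should have the same (unit-variance) distribution in every intraday bin
z = r / tod_hat[None, :]
bin_vars = np.var(z, axis=0)
```

Under the alternative of time-varying periodicity, the studentized distributions would differ across bins, which the characteristic-function comparison detects.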
By:  John B. Donaldson; Rajnish Mehra 
Abstract:  We evaluate the properties of mean reversion and mean aversion in asset prices and returns as commonly characterized in the finance literature. The study is undertaken within a class of well-known dynamic stochastic general equilibrium models and shows that the mean reversion/aversion distinction is largely artificial. We then propose an alternative measure, the ‘Average Crossing Time’, which both unifies these concepts and provides an alternative characterization. Ceteris paribus, mean reverting processes have a relatively shorter average crossing time than mean averting processes. 
JEL:  C13 C53 E3 E44 E47 G1 G12 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:25519&r=all 
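The ‘Average Crossing Time’ lends itself to a direct sketch: count the periods between successive crossings of the mean and average the gaps; a more strongly mean-reverting process should cross more often. The AR(1) laws of motion below are illustrative stand-ins for the paper's equilibrium price processes.

```python
import numpy as np

rng = np.random.default_rng(8)

def average_crossing_time(x, level=0.0):
    """Mean number of periods between successive crossings of `level`."""
    sign = np.sign(x - level)
    crossings = np.nonzero(sign[1:] * sign[:-1] < 0)[0]
    return np.mean(np.diff(crossings)) if crossings.size > 1 else np.inf

def simulate_ar1(phi, T=20000):
    x = np.empty(T)
    x[0] = 0.0
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# Stronger mean reversion (smaller phi) should yield a shorter crossing time
act_fast = average_crossing_time(simulate_ar1(0.2))
act_slow = average_crossing_time(simulate_ar1(0.95))
```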
By:  Su, Liangjun (School of Economics, Singapore Management University); Miao, Ke (School of Economics, Singapore Management University); Jin, Sainan (School of Economics, Singapore Management University) 
Abstract:  We consider the estimation and inference in approximate factor models with random missing values. We show that, with the low-rank structure of the common component, we can estimate the factors and factor loadings consistently with the missing values replaced by zeros. We establish the asymptotic distributions of the resulting estimators and of those based on the EM algorithm. We also propose a cross-validation-based method to determine the number of factors in factor models with or without missing values and justify its consistency. Simulations demonstrate that our cross-validation method is robust to fat tails in the error distribution and significantly outperforms some existing popular methods in terms of the percentage of correct determinations of the number of factors. An application to factor-augmented regression models shows that a proper treatment of the missing values can improve the out-of-sample forecast of some macroeconomic variables. 
Keywords:  Cross-validation; Expectation-Maximization (EM) algorithm; Factor models; Matrix completion; Missing at random; Principal component analysis; Singular value decomposition 
JEL:  C23 C33 C38 
Date:  2019–01–15 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2019_004&r=all 
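The zero-replacement idea can be sketched as follows: fill the missing entries with zeros, rescale by the observed fraction, and apply PCA via the SVD; under random missingness the factor space is still well recovered. The dimensions, noise level, and observation probability below are illustrative assumptions, and the simple rescaling stands in for the paper's formal treatment.

```python
import numpy as np

rng = np.random.default_rng(9)
N, T, K = 100, 200, 2

# Approximate factor model with idiosyncratic noise
L = rng.standard_normal((N, K))
F = rng.standard_normal((T, K))
X = L @ F.T + 0.5 * rng.standard_normal((N, T))

# Random missingness: each entry observed independently with probability p
p = 0.7
W = rng.random((N, T)) < p
X_filled = np.where(W, X, 0.0) / W.mean()  # zeros for missing, rescaled

# Principal components via SVD of the zero-filled panel
U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
F_hat = Vt[:K].T * np.sqrt(T)  # estimated factor space (up to rotation)

# Recovery check: R^2 from projecting the true factors on the estimated space
proj = F_hat @ np.linalg.lstsq(F_hat, F, rcond=None)[0]
r2 = 1.0 - np.sum((F - proj) ** 2) / np.sum(F**2)
```

Zero-filling turns the missingness into extra (heteroskedastic) noise on the low-rank component, which PCA averages out as the panel grows.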