New Economics Papers on Econometric Time Series
By: | Torben G. Andersen (Northwestern University and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES) |
Abstract: | This paper studies the properties of standard predictive regressions in model economies, characterized through persistent vector autoregressive dynamics for the state variables and the associated series of interest. In particular, we consider a setting where all, or a subset, of the variables may be fractionally integrated, and note that this induces a spurious regression problem. We then propose a new inference and testing procedure, the local spectrum (LCM) approach, for the joint significance of the regressors, which is robust against the variables having different integration orders. The LCM procedure is based on (semi-)parametric fractional filtering and band spectrum regression using a suitably selected set of frequency ordinates. We establish the asymptotic properties and explain how they differ from and extend those of existing procedures. Using these new inference and testing techniques, we explore the implications of assuming VAR dynamics in predictive regressions for the realized return variation. Standard least squares predictive regressions indicate that popular financial and macroeconomic variables carry valuable information about return volatility. In contrast, we find no significant evidence using our robust LCM procedure, indicating that prior conclusions may be premature. In fact, if anything, our results suggest reverse causality: rising volatility precedes adverse innovations to key macroeconomic variables. Simulations are employed to illustrate the relevance of the theoretical arguments for finite-sample inference. |
Keywords: | Endogeneity Bias, Fractional Integration, Frequency Domain Inference, Hypothesis Testing, Spurious Inference, Stochastic Volatility, VAR Models |
JEL: | C13 C14 C32 C52 C53 G12 |
Date: | 2018–02–27 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-09&r=all |
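To fix ideas, here is a minimal numpy sketch of the two ingredients of the LCM procedure described above: fractional filtering followed by band spectrum regression on a selected set of frequency ordinates. The function names are hypothetical, the integration orders are taken as known (the paper estimates them semiparametrically), and the bandwidth m is left to the user, so this illustrates the mechanics rather than the authors' estimator.

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional filter (1 - L)^d via its binomial expansion (O(n^2))."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

def lcm_band_regression(y, X, d_y, d_x, m):
    """Band spectrum regression of filtered y on filtered X over ordinates 1..m."""
    yf = frac_diff(y, d_y)
    Xf = np.column_stack([frac_diff(X[:, j], d_x[j]) for j in range(X.shape[1])])
    j = np.arange(1, m + 1)                       # selected ordinates, zero frequency trimmed
    wy = np.fft.fft(yf)[j]
    wX = np.fft.fft(Xf, axis=0)[j, :]
    # stack real and imaginary parts and run least squares on the band
    A = np.vstack([wX.real, wX.imag])
    b = np.concatenate([wy.real, wy.imag])
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta
```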
By: | Liang, Chong; Schienle, Melanie |
Abstract: | We provide a shrinkage-type methodology which allows for simultaneous model selection and estimation of vector error correction models (VECM) when the dimension is large and can increase with sample size. Model determination is treated as a joint selection problem over the cointegrating rank and the autoregressive lags, under practically valid sparsity assumptions on each. We show consistency of the selection mechanism of the resulting Lasso-VECM estimator under very general assumptions on dimension, rank, and error terms. Moreover, since its computational complexity is only that of a linear programming problem, the procedure remains tractable in high dimensions. We demonstrate the effectiveness of the proposed approach by a simulation study and an empirical application to recent CDS data after the financial crisis. |
Keywords: | High-dimensional time series, VECM, Cointegration rank and lag selection, Lasso, Credit Default Swap |
JEL: | C32 C52 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitwps:124&r=all |
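A stylized sketch of the idea, assuming scikit-learn: an equation-by-equation l1-penalized fit of the VECM, with the cointegrating rank read off from the singular values of the shrunken Pi matrix. The authors' estimator selects rank and lags jointly through a tailored penalty with linear-programming complexity; the helper below is only a simplified stand-in.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_vecm(Y, p, alpha=0.01):
    """Equation-by-equation Lasso fit of the VECM
       dY_t = Pi @ Y_{t-1} + sum_{j=1..p} Gamma_j @ dY_{t-j} + u_t."""
    dY = np.diff(Y, axis=0)                      # (T, K) first differences
    T, K = dY.shape
    # regressors: lagged level and p lagged differences, aligned with dY[p:]
    Z = np.column_stack(
        [Y[p:T]] + [dY[p - j:T - j] for j in range(1, p + 1)]
    )
    target = dY[p:T]
    B = np.vstack([Lasso(alpha=alpha, max_iter=10_000)
                   .fit(Z, target[:, k]).coef_ for k in range(K)])
    Pi, Gammas = B[:, :K], B[:, K:]
    # crude rank check: count non-negligible singular values of the shrunken Pi
    rank = int((np.linalg.svd(Pi, compute_uv=False) > 1e-2).sum())
    return Pi, Gammas, rank
```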
By: | Changli He (Tianjin University of Finance and Economics); Jian Kang (Tianjin University of Finance and Economics); Timo Teräsvirta (CREATES and Aarhus University, C.A.S.E, Humboldt-Universität zu Berlin); Shuhua Zhang (Tianjin University of Finance and Economics) |
Abstract: | In this paper we introduce an autoregressive model with seasonal dummy variables whose coefficients vary smoothly and deterministically over time. The error variance of the model is seasonally heteroskedastic and multiplicatively decomposed, the decomposition being similar to that in the well-known ARCH and GARCH models. This variance is also allowed to be smoothly and deterministically time-varying. Under regularity conditions, consistency and asymptotic normality of the maximum likelihood estimators of the parameters of this model are proved. A test of constancy of the seasonal coefficients is derived, and generalised to one for specifying the parametric structure of the model. A test of constancy over time of the heteroskedastic error variance is also presented. The purpose of building this model is to describe changing seasonality in the well-known monthly Central England Temperature series; more specifically, the idea is to find out in which way, and by how much, the monthly temperatures have varied over a period of more than 240 years, if at all. Misspecification tests are applied to the estimated model and the findings discussed. |
Keywords: | global warming, nonlinear time series, changing seasonality, smooth transition, testing constancy |
JEL: | C22 C51 C52 Q54 |
Date: | 2018–04–25 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-15&r=all |
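The model class is easy to simulate. Below is a minimal sketch, under assumed functional forms, of an AR(1) with seasonal dummy coefficients and a seasonal error variance that both move with a single logistic transition in rescaled time; the paper's specification and its tests are richer, and all names here are hypothetical.

```python
import numpy as np

def logistic_transition(t_over_T, gamma, c):
    # smooth, deterministic transition in rescaled time, G: [0,1] -> (0,1)
    return 1.0 / (1.0 + np.exp(-gamma * (t_over_T - c)))

def simulate_tv_seasonal_ar(T, S, delta0, delta1, phi, sigma0, sigma1,
                            gamma=10.0, c=0.5, seed=0):
    """Simulate y_t = [delta0_s + delta1_s G(t/T)] + phi y_{t-1} + e_t for season s,
       with e_t ~ N(0, sigma_t^2) and sigma_t^2 also moving with G."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        s = t % S                                  # season index
        G = logistic_transition(t / T, gamma, c)
        mean = delta0[s] + delta1[s] * G + phi * y[t - 1]
        sd = np.sqrt(sigma0[s] + sigma1[s] * G)    # seasonally heteroskedastic, time-varying
        y[t] = mean + sd * rng.standard_normal()
    return y

# example call with 12 seasons over 600 periods (all values illustrative)
y = simulate_tv_seasonal_ar(600, 12, np.zeros(12), np.linspace(0, 1, 12),
                            0.5, np.ones(12), 0.5 * np.ones(12))
```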
By: | Jean-Jacques Forneron |
Abstract: | This paper proposes a Sieve Simulated Method of Moments (Sieve-SMM) estimator for the parameters and the distribution of the shocks in nonlinear dynamic models where the likelihood and the moments are not tractable. An important concern with SMM, which matches sample moments with simulated moments, is that a parametric distribution is required, yet economic quantities that depend on this distribution, such as welfare and asset prices, can be sensitive to its misspecification. The Sieve-SMM estimator addresses this issue by flexibly approximating the distribution of the shocks with a Gaussian-and-tails mixture sieve. The asymptotic framework provides consistency, rate-of-convergence, and asymptotic normality results, extending existing sieve estimation theory to a new framework with more general dynamics and latent variables. Monte Carlo simulations illustrate the finite-sample properties of the estimator. Two empirical applications highlight the importance of the distribution of the shocks for estimates and counterfactuals. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.01456&r=all |
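A toy illustration of the SMM mechanics with a flexible shock distribution, assuming scipy: an AR(1) whose shocks follow a mean-zero two-component Gaussian mixture (a crude two-term stand-in for the paper's Gaussian-and-tails mixture sieve), estimated by matching a small set of moments with identity weighting. All names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def sample_moments(x):
    """A small moment vector: mean, variance, third moment, lag-1 autocovariance."""
    return np.array([x.mean(), x.var(), (x ** 3).mean(), (x[1:] * x[:-1]).mean()])

def simulate_ar1_mixture(theta, draws):
    """AR(1) with a mean-zero two-component Gaussian mixture shock."""
    rho, w, mu, s1, s2 = theta                       # 0 < w < 1 not enforced here
    z, comp = draws                                  # pre-drawn N(0,1) and U(0,1) arrays
    mu2 = -w * mu / (1 - w)                          # second mean enforces E[eps] = 0
    eps = np.where(comp < w, mu + s1 * z, mu2 + s2 * z)
    y = np.zeros(len(eps))
    for t in range(1, len(eps)):
        y[t] = rho * y[t - 1] + eps[t]
    return y

def smm_estimate(data, draws, theta0):
    """Minimize the distance between sample and simulated moments (identity weight)."""
    m_hat = sample_moments(data)
    def q(theta):
        d = m_hat - sample_moments(simulate_ar1_mixture(theta, draws))
        return d @ d
    return minimize(q, theta0, method='Nelder-Mead').x
```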
By: | Conrad, Christian; Schienle, Melanie |
Abstract: | We consider the problem of testing for an omitted multiplicative long-term component in GARCH-type models. Under the alternative, there is a two-component model in which a short-term GARCH component fluctuates around a smoothly time-varying long-term component driven by the dynamics of an explanatory variable. We suggest a Lagrange Multiplier statistic for testing the null hypothesis that the variable has no explanatory power. We derive the asymptotic theory for our test statistic and investigate its finite-sample properties by Monte Carlo simulation. Our test also covers the mixed-frequency case in which the returns are observed at a higher frequency than the explanatory variable. The usefulness of our procedure is illustrated by empirical applications to S&P 500 return data. |
Keywords: | GARCH-MIDAS, LM test, Long-Term Volatility, Mixed-Frequency Data, Volatility Component Models |
JEL: | C53 C58 E32 G12 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitwps:121&r=all |
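A generic LM-type check in the spirit of the test, assuming the arch package is available: fit a GARCH(1,1) under the null, then regress the demeaned squared standardized residuals on lagged explanatory variables and use T*R^2 as an asymptotically chi-squared statistic. This is not the authors' exact statistic, and the mixed-frequency alignment of the regressors is left to the caller.

```python
import numpy as np
from arch import arch_model

def lm_test_long_term(returns, x_lags):
    """LM-type check for an omitted multiplicative long-run component.
       `x_lags` holds lagged explanatory variables aligned with `returns`;
       T*R^2 is compared to a chi2 critical value with df = columns of x_lags."""
    res = arch_model(returns, p=1, q=1).fit(disp='off')      # GARCH(1,1) under the null
    u = (res.resid / res.conditional_volatility) ** 2 - 1.0  # demeaned squared std. residuals
    X = np.column_stack([np.ones(len(u)), x_lags])
    beta, *_ = np.linalg.lstsq(X, u, rcond=None)
    fitted = X @ beta
    r2 = 1 - ((u - fitted) ** 2).sum() / ((u - u.mean()) ** 2).sum()
    return len(u) * r2
```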
By: | Alexander Heinemann |
Abstract: | This paper studies joint inference on conditional volatility parameters and innovation moments by means of the bootstrap, in order to test for the existence of moments of GARCH(p,q) processes. We propose a residual bootstrap to mimic the joint distribution of the quasi-maximum likelihood estimators and the empirical moments of the residuals, and prove its validity. A bootstrap-based test for the existence of moments is proposed, which provides asymptotically correctly sized tests without losing consistency. It is simple to implement and extends to other GARCH-type settings. A simulation study demonstrates the test's size and power properties in finite samples, and an empirical application illustrates the testing approach. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.01808&r=all |
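A stylized version of the scheme, assuming the arch package: estimate the GARCH(1,1) by QMLE, evaluate the sample analogue of the moment-existence condition E[(alpha z^2 + beta)^k] < 1, and bootstrap it by resampling standardized residuals, re-simulating, and re-estimating. The paper's statistic and bootstrap scheme differ in their details; all helper names are hypothetical.

```python
import numpy as np
from arch import arch_model

def simulate_garch(omega, alpha, beta, z):
    """Generate a GARCH(1,1) path driven by the resampled residuals z."""
    T = len(z)
    h, y = np.empty(T), np.empty(T)
    h[0] = omega / max(1.0 - alpha - beta, 1e-6)   # start at unconditional variance
    y[0] = np.sqrt(h[0]) * z[0]
    for t in range(1, T):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
        y[t] = np.sqrt(h[t]) * z[t]
    return y

def moment_stat(alpha, beta, z, k):
    """Sample analogue of E[(alpha z^2 + beta)^k]; values below one indicate
       that the 2k-th moment of the GARCH(1,1) process exists."""
    return np.mean((alpha * z ** 2 + beta) ** k)

def residual_bootstrap(returns, k=2, B=199, seed=0):
    rng = np.random.default_rng(seed)
    res = arch_model(returns, p=1, q=1).fit(disp='off')
    a, b, w = res.params['alpha[1]'], res.params['beta[1]'], res.params['omega']
    z = res.resid / res.conditional_volatility
    z = (z - z.mean()) / z.std()                   # centred, scaled residuals
    stat = moment_stat(a, b, z, k)
    boot = np.empty(B)
    for i in range(B):
        zs = rng.choice(z, size=len(z), replace=True)
        r = arch_model(simulate_garch(w, a, b, zs), p=1, q=1).fit(disp='off')
        zb = r.resid / r.conditional_volatility
        zb = (zb - zb.mean()) / zb.std()
        boot[i] = moment_stat(r.params['alpha[1]'], r.params['beta[1]'], zb, k)
    return stat, boot                              # compare stat to bootstrap quantiles
```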
By: | Leopoldo Catania (Aarhus University and CREATES); Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata") |
Abstract: | The prediction of volatility is of primary importance for business applications in risk management, asset allocation, and pricing of derivative instruments. This paper proposes a novel measurement model which takes into consideration the possibly time-varying interaction of realized volatility and asset returns, via a bivariate model aimed at capturing the main stylised facts: (i) the long memory of the volatility process, (ii) the heavy-tailedness of the returns distribution, and (iii) the negative dependence between volatility and daily market returns. We assess the relevance of "volatility in volatility" and time-varying "leverage" effects for the out-of-sample forecasting performance of the model, and evaluate the density forecasts of the future level of market volatility. The empirical results illustrate that our specification can outperform the benchmark HAR-RV, in terms of both point and density forecasts. |
Keywords: | realized volatility, forecasting, leverage effect, volatility in volatility |
Date: | 2019–02–06 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:450&r=all |
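The HAR-RV benchmark the authors compare against is simple to state in code. A minimal sketch of a one-step-ahead log-HAR forecast (daily, weekly, and monthly averages of past log realized variance, following Corsi's specification); the function name is hypothetical.

```python
import numpy as np

def har_rv_forecast(rv):
    """One-step-ahead HAR-RV benchmark: regress log RV on its daily value and
       its weekly (5-day) and monthly (22-day) averages. Needs len(rv) > 22."""
    lrv = np.log(rv)
    n = len(lrv)
    d = lrv[21:n - 1]                                                # lrv_{t-1}
    w = np.array([lrv[i - 4:i + 1].mean() for i in range(21, n - 1)])  # 5-day mean
    m = np.array([lrv[i - 21:i + 1].mean() for i in range(21, n - 1)]) # 22-day mean
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = lrv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_last = np.array([1.0, lrv[-1], lrv[-5:].mean(), lrv[-22:].mean()])
    return np.exp(x_last @ beta)                   # point forecast of next-day RV
```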
By: | Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES) |
Abstract: | We provide unifying inference theory for parametric nonlinear factor models based on a panel of noisy observations. The panel has a large cross-section and a time span that may be either small or large. Moreover, we incorporate an additional source of information provided by noisy observations on some known functions of the factor realizations. The estimation is carried out via penalized least squares, i.e., by minimizing the L_2 distance between observations from the panel and their model-implied counterparts, augmented by a penalty for the deviation of the extracted factors from the noisy signals for them. When the time dimension is fixed, the limit distribution of the parameter vector is mixed Gaussian with conditional variance depending on the path of the factor realizations. On the other hand, when the time span is large, the convergence rate is faster and the limit distribution is Gaussian with a constant variance. In this case, however, we incur an incidental parameter problem since, at each point in time, we need to recover the concurrent factor realizations. This leads to an asymptotic bias that is absent in the setting with a fixed time span. In either scenario, the limit distribution of the estimates for the factor realizations is mixed Gaussian, but is related to the limiting distribution of the parameter vector only in the scenario with a fixed time horizon. Although the limit behavior is very different for the small versus large time span, we develop a feasible inference theory that applies, without modification, in either case. Hence, the user need not take a stand on the relative size of the time dimension of the panel. Similarly, we propose a time-varying data-driven weighting of the penalty in the objective function, which enhances efficiency by adapting to the relative quality of the signal for the factor realizations. |
Keywords: | Asymptotic Bias, Incidental Parameter Problem, Inference, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Volatility |
JEL: | C51 C52 G12 |
Date: | 2018–01–10 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-03&r=all |
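The per-period estimation step has a compact form. A minimal sketch, assuming scipy and a user-supplied model function: recover the time-t factor realization by minimizing the squared distance between the panel observations and their model-implied counterparts, plus a weighted penalty tying the factor to its noisy signal, with the weight playing the role of the paper's data-driven, time-varying penalty weight. All names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def extract_factor(panel_t, model_fn, theta, signal_t, weight_t):
    """Penalized least squares for the time-t factor realization S_t:
       minimize ||panel_t - model_fn(S_t, theta)||^2 + weight_t * ||S_t - signal_t||^2."""
    def objective(S):
        fit = ((panel_t - model_fn(S, theta)) ** 2).sum()
        penalty = weight_t * ((S - signal_t) ** 2).sum()
        return fit + penalty
    return minimize(objective, x0=signal_t, method='Nelder-Mead').x
```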
By: | Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Velasco (Universidad Carlos III de Madrid) |
Abstract: | We consider large N,T panel data models with fixed effects, a common factor allowing for cross-section dependence, and persistent data and shocks, which are assumed fractionally integrated. In a basic setup, the main interest is in the fractional parameter of the idiosyncratic component, which is estimated in first differences after factor removal by projection on the cross-section average. The pooled conditional-sum-of-squares estimate is root-NT consistent, but the asymptotic normal distribution might not be centered, requiring the time series dimension to grow faster than the cross-section size for correction. We develop tests of homogeneity of the dynamics, including the degree of integration, that have non-trivial power under local departures from the null hypothesis affecting a non-negligible fraction of the cross-section units. A simulation study shows that our estimates and tests perform well even in moderately small panels. |
Keywords: | Fractional integration, panel data, factor models, long memory, homogeneity test |
JEL: | C22 C23 |
Date: | 2018–03–12 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-11&r=all |
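A heavily simplified sketch of the basic setup, assuming scipy: difference the panel, remove the common factor by projecting on the cross-section average, and minimize a pooled conditional sum of squares over the common integration order d. The fractional filter repeats the binomial-expansion helper used in the first sketch above; the paper's estimator and its bias correction are considerably more involved.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Apply the fractional filter (1 - L)^d via its binomial expansion."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

def pooled_css_d(panel):
    """Pooled CSS estimate of the common order d of the idiosyncratic components,
       after defactoring first differences via their cross-section average."""
    dX = np.diff(panel, axis=0)                    # work in first differences
    avg = dX.mean(axis=1, keepdims=True)           # cross-section average proxies the factor
    load = (dX * avg).sum(0) / (avg ** 2).sum()    # unit-wise projection coefficients
    e = dX - avg * load                            # defactored differences, roughly I(d-1)
    def css(d):
        # residual sum of squares after filtering each unit by (1-L)^(d-1)
        return sum((frac_diff(e[:, i], d - 1) ** 2).sum() for i in range(e.shape[1]))
    return minimize_scalar(css, bounds=(0.05, 1.45), method='bounded').x
```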
By: | Huang, Wenxin (Antai College of Economics and Management, Shanghai Jiao Tong University); Jin, Sainan (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University) |
Abstract: | We consider a panel cointegration model with latent group structures that allows for heterogeneous long-run relationships across groups. We extend Su, Shi, and Phillips’ (2016) classifier-Lasso (C-Lasso) method to nonstationary panels and allow for the presence of endogeneity in both the stationary and nonstationary regressors in the model. In addition, we allow the dimension of the stationary regressors to diverge with the sample size. We show that we can identify the individuals’ group membership and estimate the group-specific long-run cointegrated relationships simultaneously. We demonstrate the desirable property of uniform classification consistency and the oracle properties of both the C-Lasso estimators and their post-Lasso versions. The special case of dynamic penalized least squares is also studied. Simulations show superb finite-sample performance in both classification and estimation. In an empirical application, we study the potential heterogeneous behavior in testing the validity of the long-run purchasing power parity (PPP) hypothesis in the post-Bretton Woods period from 1975 to 2014, covering 99 countries. We identify two groups in the period 1975-1998 and three in the period 1999-2014. The results confirm that at least some countries favor the long-run PPP hypothesis in the post-Bretton Woods period. |
Keywords: | Classifier Lasso; Dynamic OLS; Heterogeneity; Latent group structure; Nonstationarity; Penalized least squares; Panel cointegration; Purchasing power parity |
JEL: | C13 C33 C51 F31 |
Date: | 2018–11–20 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2019_003&r=all |
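The distinctive ingredient of the C-Lasso is its multiplicative penalty, which shrinks each unit's slope toward one of K group centres. A minimal sketch of the penalized criterion for a single cross-section unit (the full estimator iterates between unit-level slopes and group centres); names are hypothetical.

```python
import numpy as np

def c_lasso_objective(beta_i, alphas, ssr_i, lam):
    """Classifier-Lasso criterion for unit i: its sum of squared residuals plus
       lam * prod_k ||beta_i - alpha_k||. The product is zero whenever beta_i
       coincides with some group centre, which is what drives classification."""
    penalty = lam * np.prod([np.linalg.norm(beta_i - a) for a in alphas])
    return ssr_i + penalty
```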
By: | Niels Haldrup (Aarhus University and CREATES); Carsten P. T. Rosenskjold (Aarhus University and CREATES) |
Abstract: | The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality in which multiple factors influence the age groups differently via parametric loading functions. We identify four different factors: a factor common to all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects the mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, they are not designed to be orthogonal but can be dependent, and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to those for the dynamic Nelson-Siegel term structure model: first, a two-step nonlinear least squares procedure based on cross-section regressions, combined with a separate model for the dynamics of the factors; second, a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put in state-space form. We demonstrate the methodology for US and French mortality data. We find that the model provides a good fit of the relevant factors, and a forecast comparison with a range of benchmark models shows that, especially for longer horizons, variants of the parametric factor model have excellent forecast performance. |
Keywords: | Mortality Forecasting, Term Structure of Mortality, Factor Modelling, Cointegration |
JEL: | C1 C22 J10 J11 G22 |
Date: | 2018–01–12 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-06&r=all |
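A stylized version of the first estimation step, assuming scipy and a simplified three-factor variant (common level, infant-mortality decay, accident hump; the paper uses four factors): for each year, fit the cross-section of log mortality rates by nonlinear least squares on parametric loading functions. All functional forms and names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def loadings(age, lam1, mu, s):
    """Stylized parametric loading functions: a level common to all ages, an
       exponentially decaying infant-mortality term, and a Gaussian 'accident
       hump' centred on young adulthood."""
    return np.column_stack([
        np.ones_like(age, dtype=float),            # common factor
        np.exp(-lam1 * age),                       # infant mortality, decays with age
        np.exp(-0.5 * ((age - mu) / s) ** 2),      # accident hump
    ])

def fit_cross_section(age, log_mx):
    """Step one of a two-step procedure: for one year, regress log mortality on
       the loading functions to extract that year's factor values."""
    def model(a, f0, f1, f2, l1, m, sg):
        return loadings(a, l1, m, sg) @ np.array([f0, f1, f2])
    p, _ = curve_fit(model, age, log_mx, p0=[-5.0, 2.0, 0.5, 0.1, 20.0, 8.0])
    return p                                       # factors and loading parameters
```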
By: | Murasawa, Yasutomo |
Abstract: | The consumption Euler equation implies that the output growth rate and the real interest rate are of the same order of integration; thus if the real interest rate is I(1), then so is the output growth rate with possible cointegration, and log output is I(2). This paper extends the multivariate Beveridge-Nelson decomposition to such a case, and develops a Bayesian method to obtain error bands. The paper applies the method to US data to estimate the natural rates (or their permanent components) and gaps of output, inflation, interest, and unemployment jointly, and finds that allowing for cointegration gives much bigger estimates of all gaps. |
Keywords: | Natural rate, Output gap, Trend-cycle decomposition, Trend inflation, Unit root, Vector error correction model (VECM) |
JEL: | C11 C32 C82 E32 |
Date: | 2019–02–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:91979&r=all |
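For intuition, the standard multivariate Beveridge-Nelson decomposition when the growth rates follow a stationary VAR(1) has a one-line closed form; the paper's contribution is the extension to I(1) growth rates with possible cointegration (i.e., I(2) log output), plus Bayesian error bands, which this sketch does not cover. The function name is hypothetical.

```python
import numpy as np

def bn_cycle(x, A, mu):
    """Beveridge-Nelson cycle for levels whose growth rates follow a VAR(1),
       x_t - mu = A (x_{t-1} - mu) + e_t. The gap of each accumulated series is
       -A (I - A)^{-1} (x_t - mu); the BN trend is the observed level minus it."""
    K = A.shape[0]
    long_run = A @ np.linalg.inv(np.eye(K) - A)    # sum of A^j for j >= 1
    return -(x - mu) @ long_run.T                  # one row of gaps per period
```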
By: | Eric Beutner; Alexander Heinemann; Stephan Smeekes |
Abstract: | In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions, and formally verify these assumptions for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant applications of their theory. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.01622&r=all |
By: | Torben G. Andersen (Northwestern University, NBER, and CREATES); Martin Thyrsgaard (Aarhus University and CREATES); Viktor Todorov (Northwestern University) |
Abstract: | We develop a nonparametric test for deciding whether return volatility exhibits time-varying intraday periodicity using a long time-series of high-frequency data. Our null hypothesis, commonly adopted in work on volatility modeling, is that volatility follows a stationary process combined with a constant time-of-day periodic component. We first construct time-of-day volatility estimates and studentize the high-frequency returns with these periodic components. If the intraday volatility periodicity is invariant over time, then the distribution of the studentized returns should be identical across the trading day. Consequently, the test is based on comparing the empirical characteristic function of the studentized returns across the trading day. The limit distribution of the test depends on the error in recovering volatility from discrete return data and the empirical process error associated with estimating volatility moments through their sample counterparts. Critical values are computed via easy-to-implement simulation. In an empirical application to S&P 500 index returns, we find strong evidence for variation in the intraday volatility pattern driven in part by the current level of volatility. When market volatility is elevated, the period preceding the market close constitutes a significantly higher fraction of the total daily integrated volatility than is the case during low market volatility regimes. |
Keywords: | high-frequency data, periodicity, semimartingale, specification test, stochastic volatility |
JEL: | C51 C52 G12 |
Date: | 2018–01–12 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-05&r=all |
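The skeleton of the test statistic is short. A minimal numpy sketch, with hypothetical names: estimate the time-of-day volatility profile, studentize the high-frequency returns, and take the largest absolute discrepancy between the empirical characteristic functions of the studentized returns across intraday bins; in the paper, critical values for this comparison are obtained by simulation.

```python
import numpy as np

def periodicity_stat(returns):
    """`returns` is (days, intraday bins). Large values signal a time-varying
       intraday volatility pattern; critical values come from simulation."""
    tod = np.sqrt((returns ** 2).mean(axis=0))     # time-of-day volatility estimate
    z = returns / tod                              # studentized returns, per bin
    u = np.linspace(0.5, 2.0, 4)                   # grid for the ECF argument
    # ECF of each intraday bin, averaged over days: shape (grid, bins)
    ecf = np.exp(1j * u[:, None, None] * z[None]).mean(axis=1)
    # sup over grid points and bin pairs of absolute ECF differences
    return np.abs(ecf[:, :, None] - ecf[:, None, :]).max()
```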
By: | John B. Donaldson; Rajnish Mehra |
Abstract: | We evaluate the properties of mean reversion and mean aversion in asset prices and returns as commonly characterized in the finance literature. The study is undertaken within a class of well-known dynamic stochastic general equilibrium models and shows that the mean reversion/aversion distinction is largely artificial. We then propose an alternative measure, the ‘average crossing time’, which both unifies these concepts and provides an alternative characterization. Ceteris paribus, mean-reverting processes have a shorter average crossing time than mean-averting processes. |
JEL: | C13 C53 E3 E44 E47 G1 G12 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:25519&r=all |
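The average crossing time concept admits a simple empirical counterpart. An illustrative implementation, with hypothetical names: the average number of periods a series spends on one side of a reference level before crossing back, so that stronger mean reversion shows up as a shorter average crossing time.

```python
import numpy as np

def average_crossing_time(x, level=None):
    """Average spell length of the series on one side of `level` (its sample
       mean by default); shorter spells indicate stronger mean reversion."""
    level = x.mean() if level is None else level
    sign = np.sign(x - level)
    crossings = np.where(np.diff(sign) != 0)[0] + 1   # indices where the side flips
    spells = np.diff(np.concatenate(([0], crossings, [len(x)])))
    return spells.mean()
```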
By: | Su, Liangjun (School of Economics, Singapore Management University); Miao, Ke (School of Economics, Singapore Management University); Jin, Sainan (School of Economics, Singapore Management University) |
Abstract: | We consider estimation and inference in approximate factor models with random missing values. We show that, given the low-rank structure of the common component, we can estimate the factors and factor loadings consistently with the missing values replaced by zeros. We establish the asymptotic distributions of the resulting estimators and of those based on the EM algorithm. We also propose a cross-validation-based method to determine the number of factors in factor models with or without missing values, and justify its consistency. Simulations demonstrate that our cross-validation method is robust to fat tails in the error distribution and significantly outperforms some existing popular methods in terms of the percentage of correctly determined factor numbers. An application to factor-augmented regression models shows that a proper treatment of the missing values can improve the out-of-sample forecast of some macroeconomic variables. |
Keywords: | Cross-validation; Expectation-Maximization (EM) algorithm; Factor models; Matrix completion; Missing at random; Principal component analysis; Singular value decomposition |
JEL: | C23 C33 C38 |
Date: | 2019–01–15 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2019_004&r=all |
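A minimal sketch of the zero-replacement and EM ideas, using plain numpy SVD in place of the paper's full treatment: start from principal components on the zero-filled panel and iterate by re-imputing the missing cells with the current rank-r common component. Names and the fixed iteration count are illustrative.

```python
import numpy as np

def factors_with_missing(X, mask, r, n_iter=50):
    """EM-style estimation of an approximate factor model with missing data.
       `mask` is True where X is observed; missing entries are first set to zero."""
    Z = np.where(mask, X, 0.0)                     # missing values replaced by zeros
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        C = (U[:, :r] * s[:r]) @ Vt[:r]            # rank-r common component
        Z = np.where(mask, X, C)                   # E-step: impute missing entries
    F = U[:, :r] * np.sqrt(X.shape[0])             # factors (up to rotation/scale)
    L = (Vt[:r].T * s[:r]) / np.sqrt(X.shape[0])   # loadings
    return F, L, C
```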