nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒08‒31
fifteen papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. When Bias Contributes to Variance: True Limit Theory in Functional Coefficient Cointegrating Regression By Peter C.B. Phillips; Ying Wang
  2. Nonparametric Inference of Jump Autocorrelation By Kwok, Simon
  3. High-Dimensional VARs with Common Factors By Ke Miao; Peter C.B. Phillips; Liangjun Su
  4. Short-term forecasting of the COVID-19 pandemic using Google Trends data: Evidence from 158 countries By Fantazzini, Dean
  5. Uniform and Lp Convergences of Nonparametric Estimation for Diffusion Models By Ruijun Bu; Jihyun Kim; Bin Wang
  6. Nowcasting with large Bayesian vector autoregressions By Cimadomo, Jacopo; Giannone, Domenico; Lenza, Michele; Sokol, Andrej; Monti, Francesca
  7. Common Bubble Detection in Large Dimensional Financial Systems By Ye Chen; Peter C.B. Phillips; Shuping Shi
  8. Local Projection Inference is Simpler and More Robust Than You Think By José Luis Montiel Olea; Mikkel Plagborg-Møller
  9. Permutation-based tests for discontinuities in event studies By Federico A. Bugni; Jia Li
  10. Dynamic Factor Trees and Forests – A Theory-led Machine Learning Framework for Non-Linear and State-Dependent Short-Term U.S. GDP Growth Predictions By Daniel Wochner
  11. Generating Trading Signals by ML algorithms or time series ones? By Omid Safarzadeh
  12. Detecting bearish and bullish markets in financial time series using hierarchical hidden Markov models By Lennart Oelschläger; Timo Adam
  13. Structural modeling and forecasting using a cluster of dynamic factor models By Glocker, Christian; Kaniovski, Serguei
  14. The role of global economic policy uncertainty in predicting crude oil futures volatility: Evidence from a two-factor GARCH-MIDAS model By Peng-Fei Dai; Xiong Xiong; Wei-Xing Zhou
  15. News on Stock Market Returns and Conditional Volatility in Nigeria: An EGARCH-in-Mean Approach. By Okpara, Godwin Chigozie

  1. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Ying Wang (The University of Auckland)
    Abstract: Limit distribution theory in the econometric literature for functional coefficient cointegrating (FCC) regression is shown to be incorrect in important ways, influencing rates of convergence, distributional properties, and practical work. In FCC regression the cointegrating coefficient vector \beta(.) is a function of a covariate z_t. The true limit distribution of the local level kernel estimator of \beta(.) is shown to have multiple forms, each form depending on the bandwidth rate in relation to the sample size n and with an optimal convergence rate of n^{3/4}, which is achieved by letting the bandwidth have order 1/n^{1/2} when z_t is scalar. Unlike stationary regression and contrary to the existing literature on FCC regression, the correct limit theory reveals that component elements from the bias and variance terms in the kernel regression can both contribute to variability in the asymptotics depending on the bandwidth behavior in relation to the sample size. The trade-off between bias and variance that is a common feature of kernel regression consequently takes a different and more complex form in FCC regression whereby balance is achieved via the dual source of variation in the limit with an associated common convergence rate. The error in the literature arises because the random variability of the bias term has been neglected in earlier research. In stationary regression this random variability is of smaller order and can correctly be neglected in asymptotic analysis, but with consequences for finite sample performance. In nonstationary regression, this variability typically has larger order due to the nonstationary regressor, and its omission leads to deficiencies and partial failure in the asymptotics reported in the literature. Existing results are shown to hold only in scalar covariate FCC regression and only when the bandwidth has order larger than 1/n and smaller than 1/n^{1/2}.
The correct results in cases of a multivariate covariate z_t are substantially more complex and are not covered by any existing theory. Implications of the findings for inference, confidence interval construction, bandwidth selection, and stability testing for the functional coefficient are discussed. A novel self-normalized t-ratio statistic is developed which is robust with respect to bandwidth order and persistence in the regressor, enabling improved testing and confidence interval construction. Simulations show superior performance of this robust statistic, corroborating the finite sample relevance of the new limit theory in both stationary and nonstationary regressions.
    Keywords: Bandwidth selection, Bias variability, Functional coefficient cointegration, Kernel regression, Nonstationarity, Robust inference, Sandwich matrix
    JEL: C14 C22
    Date: 2020–08
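The local level kernel estimator discussed above is a kernel-weighted least squares fit. A minimal sketch, assuming a Gaussian kernel, a scalar covariate, an illustrative data-generating process, and the bandwidth of order 1/n^{1/2} mentioned in the abstract (all simulation choices are this sketch's assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)             # stationary covariate entering beta(.)
x = np.cumsum(rng.standard_normal(n))  # I(1) cointegrating regressor
y = (1.0 + 0.5 * z**2) * x + rng.standard_normal(n)  # y_t = beta(z_t) x_t + u_t

def local_level_beta(y, x, z, z0, h):
    """Kernel-weighted least squares estimate of beta(z0)."""
    k = np.exp(-0.5 * ((z - z0) / h) ** 2)  # Gaussian kernel weights
    return np.sum(k * x * y) / np.sum(k * x * x)

h = n ** -0.5  # bandwidth of order 1/sqrt(n), the rate-optimal choice cited above
beta_hat = local_level_beta(y, x, z, 0.0, h)  # true beta(0) = 1
```

With a nonstationary regressor the local signal is strong, so the point estimate sits close to the true value even with this very small bandwidth.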
  2. By: Kwok, Simon
    Abstract: Understanding the jump dynamics of market prices is important for asset pricing and risk management. Despite their analytical tractability, parametric models may impose unrealistic restrictions on the temporal dependence structure of jumps. In this paper, we introduce a nonparametric inference procedure for the presence of jump autocorrelation in the DGP. Our toolkit includes (i) an omnibus test that jointly detects the autocorrelation of stationary jumps over all lags, and (ii) a jump autocorrelogram that enables visualization and pointwise inference of jump autocorrelation. We establish asymptotic normality and local power of our procedure for a rich set of local alternatives (e.g., self-exciting and/or self-inhibitory jumps). Under a unified framework that combines infill and long-span asymptotics, the joint test for jump autocorrelations is robust to the choice of sampling scheme and different degrees of jump activity. A simulation study confirms its robustness property and reveals its competitive size and power performance relative to existing tests. In an empirical study on high-frequency stock returns, our procedure uncovers a wide array of jump autocorrelation profiles for different stocks in different time periods.
    Keywords: jump autocorrelation, self-excited jumps, nonparametric inference, financial contagion, high-frequency returns
    Date: 2020–08
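A jump autocorrelogram of the kind described can be roughed out by thresholding high-frequency returns and computing the autocorrelation of the resulting jump series. The threshold rule and data below are this sketch's assumptions; the paper's statistic is more refined:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
ret = 0.01 * rng.standard_normal(n)        # diffusive returns
arrivals = rng.random(n) < 0.02            # iid jump arrivals (no self-excitation)
ret = ret + arrivals * 0.1 * rng.standard_normal(n)

def jump_autocorrelogram(ret, threshold, max_lag):
    """Autocorrelation of thresholded (jump) returns at lags 1..max_lag."""
    jumps = np.where(np.abs(ret) > threshold, ret, 0.0)
    jc = jumps - jumps.mean()
    denom = np.sum(jc**2)
    return np.array([np.sum(jc[l:] * jc[:-l]) / denom
                     for l in range(1, max_lag + 1)])

# ad hoc threshold at four robust (MAD-based) diffusive standard deviations
thr = 4 * np.median(np.abs(ret)) / 0.6745
acf = jump_autocorrelogram(ret, thr, max_lag=5)
```

With iid jump arrivals, as simulated here, the autocorrelogram should be flat near zero; self-exciting jumps would show up as positive low-lag spikes.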
  3. By: Ke Miao (School of Economics, Fudan University); Peter C.B. Phillips (Cowles Foundation, Yale University); Liangjun Su (School of Economics and Management, Tsinghua University)
    Abstract: This paper studies high-dimensional vector autoregressions (VARs) augmented with common factors that allow for strong cross section dependence. Models of this type provide a convenient mechanism for accommodating the interconnectedness and temporal co-variability that are often present in large dimensional systems. We propose an ℓ1-nuclear-norm regularized estimator and derive non-asymptotic upper bounds for the estimation errors as well as large sample asymptotics for the estimates. A singular value thresholding procedure is used to determine the correct number of factors with probability approaching one. Both the LASSO estimator and the conservative LASSO estimator are employed to improve estimation precision. The conservative LASSO estimates of the non-zero coefficients are shown to be asymptotically equivalent to the oracle least squares estimates. Simulations demonstrate that our estimators perform reasonably well in finite samples given the complex high dimensional nature of the model with multiple unobserved components. In an empirical illustration we apply the methodology to explore the dynamic connectedness in the volatilities of financial asset prices and the transmission of investor fear. The findings reveal that a large proportion of connectedness is due to common factors. Conditional on the presence of these common factors, the results still document remarkable connectedness due to the interactions between the individual variables, thereby supporting a common factor augmented VAR specification.
    Keywords: Common factors, Connectedness, Cross-sectional dependence, Investor fear, High-dimensional VAR, Nuclear-norm regularization
    JEL: C13 C33 C38 C51
    Date: 2020–08
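The singular value thresholding idea for picking the number of factors can be illustrated on a noisy low-rank matrix: count the singular values that exceed a noise-level threshold. The data and the particular threshold constant below are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 200, 50, 3
low_rank = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))  # r factors
noisy = low_rank + 0.1 * rng.standard_normal((n, p))

def svt_rank(mat, threshold):
    """Estimate rank by counting singular values above a threshold."""
    s = np.linalg.svd(mat, compute_uv=False)
    return int(np.sum(s > threshold))

# threshold a bit above sigma * (sqrt(n) + sqrt(p)), a standard noise bound
thr = 1.5 * 0.1 * (np.sqrt(n) + np.sqrt(p))
rank_hat = svt_rank(noisy, thr)
```

Here the three factor singular values sit far above the noise spectrum, so thresholding recovers the factor number exactly.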
  4. By: Fantazzini, Dean
    Abstract: The ability of Google Trends data to forecast the number of new daily cases and deaths of COVID-19 is examined using a dataset of 158 countries. The analysis includes the computations of lag correlations between confirmed cases and Google data, Granger causality tests, and an out-of-sample forecasting exercise with 18 competing models with a forecast horizon of 14 days ahead. This evidence shows that Google-augmented models outperform the competing models for most of the countries. This is significant because Google data can complement epidemiological models during difficult times like the ongoing COVID-19 pandemic, when official statistics may not be fully reliable and/or are published with a delay. Moreover, real-time tracking with online data is one of the instruments that can be used to keep the situation under control when national lockdowns are lifted and economies gradually reopen.
    Keywords: Covid-19; Google Trends; VAR; ARIMA; ARIMA-X; ETS; LASSO; SIR model
    JEL: C22 C32 C51 C53 G17 I18 I19
    Date: 2020–08
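The lag-correlation computation mentioned in the abstract amounts to correlating the target with lagged copies of the leading indicator and scanning for the best lag. A minimal sketch on synthetic series (the seven-day lead and noise level are assumptions of this illustration, not findings of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
raw = rng.standard_normal(n + 7)                 # synthetic search-interest series
cases = raw[:-7] + 0.5 * rng.standard_normal(n)  # cases respond with a 7-day delay
searches = raw[7:]                               # contemporaneous search series

def lag_corr(searches, cases, lag):
    """Correlation between searches lagged by `lag` days and current cases."""
    if lag == 0:
        return float(np.corrcoef(searches, cases)[0, 1])
    return float(np.corrcoef(searches[:-lag], cases[lag:])[0, 1])

corrs = {lag: lag_corr(searches, cases, lag) for lag in range(15)}
best_lag = max(corrs, key=corrs.get)  # lag with the strongest correlation
```

Scanning the lag profile this way recovers the built-in seven-day lead of the search series over the case counts.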
  5. By: Ruijun Bu; Jihyun Kim; Bin Wang
    Abstract: We obtain the uniform convergence rates of nonparametric kernel estimators of the local time, the drift and volatility functions as well as their derivatives, of discretely sampled diffusion processes. Moreover, modified kernel estimators of the drift and volatility functions are considered and their Lp convergence rates are obtained. Our asymptotic results apply to recurrent diffusions, which include both stationary and nonstationary cases. Our sampling scheme is two-dimensional, with the sampling interval shrinking to zero and the time span increasing to infinity jointly.
    Keywords: recurrent, diffusion, kernel, uniform convergence, Lp convergence.
    JEL: C14 C22 C58
    Date: 2020–07
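The standard kernel estimators of drift and squared volatility for a discretely sampled diffusion are Nadaraya-Watson averages of increments and squared increments. A minimal sketch on a simulated Ornstein-Uhlenbeck process (the process, bandwidth, and sampling scheme are this sketch's assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, N = 0.01, 100_000
x = np.empty(N + 1)
x[0] = 0.0
dw = np.sqrt(dt) * rng.standard_normal(N)
for i in range(N):                    # Euler scheme for dX = -X dt + dW
    x[i + 1] = x[i] - x[i] * dt + dw[i]

def nw_drift_vol(x, dt, x0, h):
    """Nadaraya-Watson estimates of drift and squared volatility at x0."""
    lev, dx = x[:-1], np.diff(x)
    k = np.exp(-0.5 * ((lev - x0) / h) ** 2)  # Gaussian kernel weights
    mu = np.sum(k * dx / dt) / np.sum(k)
    sig2 = np.sum(k * dx**2 / dt) / np.sum(k)
    return mu, sig2

mu_at_1, _ = nw_drift_vol(x, dt, 1.0, h=0.2)    # true drift at x=1 is -1
_, sig2_at_0 = nw_drift_vol(x, dt, 0.0, h=0.2)  # true squared volatility is 1
```

The infill/long-span flavour of the paper's asymptotics corresponds to dt shrinking while N*dt grows, which is exactly what makes both estimators consistent here.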
  6. By: Cimadomo, Jacopo; Giannone, Domenico; Lenza, Michele; Sokol, Andrej; Monti, Francesca
    Abstract: Monitoring economic conditions in real time, or nowcasting, is among the key tasks routinely performed by economists. Nowcasting entails some key challenges, which also characterise modern Big Data analytics, often referred to as the three "Vs": the large number of time series continuously released (Volume), the complexity of the data covering various sectors of the economy, published in an asynchronous way and with different frequencies and precision (Variety), and the need to incorporate new information within minutes of its release (Velocity). In this paper, we explore alternative routes to bring Bayesian Vector Autoregressive (BVAR) models up to these challenges. We find that BVARs are able to effectively handle the three Vs and produce, in real time, accurate probabilistic predictions of US economic activity and, in addition, a meaningful narrative by means of scenario analysis.
    JEL: E32 E37 C01 C33 C53
    Keywords: Big Data, business cycles, forecasting, mixed frequencies, real time, scenario analysis
    Date: 2020–08
  7. By: Ye Chen (Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University); Shuping Shi (Macquarie University)
    Abstract: Price bubbles in multiple assets are sometimes nearly coincident in occurrence. Such near-coincidence is strongly suggestive of co-movement in the associated asset prices and likely driven by certain factors that are latent in the financial or economic system with common effects across several markets. Can we detect the presence of such common factors at the early stages of their emergence? To answer this question, we build a factor model that includes I(1), mildly explosive, and stationary factors to capture normal, exuberant, and collapsing phases in such phenomena. The I(1) factor models the primary driving force of market fundamentals. The explosive and stationary factors model latent forces that underlie the formation and destruction of asset price bubbles, which typically exist only for subperiods of the sample. The paper provides an algorithm for testing the presence of and date-stamping the origination and termination of price bubbles determined by latent factors in a large-dimensional system embodying many markets. Asymptotics of the bubble test statistic are given under the null of no common bubbles and the alternative of a common bubble across these markets. We prove consistency of a factor bubble detection process for the origination and termination dates of the common bubble. Simulations show good finite sample performance of the testing algorithm in terms of its successful detection rates. Our methods are applied to real estate markets covering 89 major cities in China over the period January 2003 to March 2013. Results suggest the presence of three common bubble episodes in what are known as China's Tier 1 and Tier 2 cities over the sample period. There appears to be little evidence of a common bubble in Tier 3 cities.
    Keywords: Common Bubbles, Mildly Explosive Process, Factor Analysis, Date Stamping, Real Estate Market
    JEL: C12 C13 C58
    Date: 2020–08
  8. By: José Luis Montiel Olea; Mikkel Plagborg-Møller
    Abstract: Applied macroeconomists often compute confidence intervals for impulse responses using local projections, i.e., direct linear regressions of future outcomes on current covariates. This paper proves that local projection inference robustly handles two issues that commonly arise in applications: highly persistent data and the estimation of impulse responses at long horizons. We consider local projections that control for lags of the variables in the regression. We show that lag-augmented local projections with normal critical values are asymptotically valid uniformly over (i) both stationary and non-stationary data, and also over (ii) a wide range of response horizons. Moreover, lag augmentation obviates the need to correct standard errors for serial correlation in the regression residuals. Hence, local projection inference is arguably both simpler than previously thought and more robust than standard autoregressive inference, whose validity is known to depend sensitively on the persistence of the data and on the length of the horizon.
    Date: 2020–07
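A lag-augmented local projection is just an OLS regression of the future outcome on the current value, controlling for a lag. A minimal sketch for an AR(1), where the horizon-h impulse response is rho^h (the data-generating process is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho = 1000, 0.9
y = np.empty(n)
y[0] = 0.0
eps = rng.standard_normal(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + eps[t]

def lag_augmented_lp(y, h):
    """OLS of y_{t+h} on y_t, controlling for y_{t-1} (lag augmentation)."""
    yy = y[h + 1:]                                         # y_{t+h}
    X = np.column_stack([np.ones(yy.size), y[1:-h], y[:-h - 1]])
    coef = np.linalg.lstsq(X, yy, rcond=None)[0]
    return coef[1]                                         # IRF estimate at horizon h

irf4 = lag_augmented_lp(y, 4)  # true impulse response is 0.9**4 = 0.6561
```

The point of the paper is that, with the extra lag included, ordinary standard errors and normal critical values for this coefficient are valid even for persistent data and long horizons.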
  9. By: Federico A. Bugni; Jia Li
    Abstract: We propose using a permutation test to detect discontinuities in an underlying economic model at a cutoff point. Relative to the existing literature, we show that this test is well suited for event studies based on time-series data. The test statistic measures the distance between the empirical distribution functions of observed data in two local subsamples on the two sides of the cutoff. Critical values are computed via a standard permutation algorithm. Under a high-level condition that the observed data can be coupled by a collection of conditionally independent variables, we establish the asymptotic validity of the permutation test, allowing the sizes of the local subsamples either to be fixed or to grow to infinity. In the latter case, we also establish that the permutation test is consistent. We demonstrate that our high-level condition can be verified in a broad range of problems in the infill asymptotic time-series setting, which justifies using the permutation test to detect jumps in economic variables such as volatility, trading activity, and liquidity. An empirical illustration on a recent sample of daily S&P 500 returns is provided.
    Date: 2020–07
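The test statistic described (distance between empirical CDFs of the two local subsamples, with permutation critical values) can be sketched directly. The two-sample Kolmogorov-Smirnov distance and simulated data below are illustrative assumptions:

```python
import numpy as np

def perm_test(left, right, n_perm=999, seed=7):
    """Permutation test for a distributional break between two local subsamples.

    Statistic: max distance between the two empirical CDFs (two-sample KS)."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([left, right])
    n_l = left.size

    def ks(a, b):
        grid = np.sort(pooled)
        fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
        fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
        return np.max(np.abs(fa - fb))

    observed = ks(left, right)
    exceed = 0
    for _ in range(n_perm):              # relabel observations at random
        perm = rng.permutation(pooled)
        if ks(perm[:n_l], perm[n_l:]) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(8)
pre = rng.standard_normal(60)            # observations left of the cutoff
post = rng.standard_normal(60) + 1.5     # discontinuity at the cutoff
stat, pval = perm_test(pre, post)
```

With a genuine break of this size the permutation p-value is essentially at its lower bound, while under the null it would be approximately uniform.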
  10. By: Daniel Wochner (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: Machine Learning models are often considered to be “black boxes” that leave little room for the incorporation of theory (cf. e.g. Mukherjee, 2017; Veltri, 2017). This article proposes so-called Dynamic Factor Trees (DFT) and Dynamic Factor Forests (DFF) for macroeconomic forecasting, which synthesize the recent machine learning, dynamic factor model and business cycle literature within a unified statistical machine learning framework for model-based recursive partitioning proposed in Zeileis, Hothorn and Hornik (2008). DFTs and DFFs are non-linear and state-dependent forecasting models, which reduce to the standard Dynamic Factor Model (DFM) as a special case and allow us to embed theory-led factor models in powerful tree-based machine learning ensembles conditional on the state of the business cycle. The out-of-sample forecasting experiment for short-term U.S. GDP growth predictions combines three distinct FRED-datasets, yielding a balanced panel with over 375 indicators from 1967 to 2018 (FRED, 2019; McCracken & Ng, 2016, 2019a, 2019b). Our results provide strong empirical evidence in favor of the proposed DFTs and DFFs and show that they significantly improve the predictive performance of DFMs by almost 20% in terms of MSFE. Interestingly, the improvements materialize in both expansionary and recessionary periods, suggesting that DFTs and DFFs tend to perform not only sporadically but systematically better than DFMs. Our findings are fairly robust to a number of sensitivity tests and hold exciting avenues for future research.
    Keywords: Forecasting, Machine Learning, Regression Trees and Forests, Dynamic Factor Model, Business Cycles, GDP Growth, United States
    JEL: C45 C51 C53 E32 O47
    Date: 2020–05
  11. By: Omid Safarzadeh
    Abstract: This research investigates the efficiency of online learning algorithms for generating trading signals. Technical indicators were computed from high-frequency stock prices, and trading signals were generated by an ensemble of random forests. A Kalman filter was likewise used to signal trading positions. Comparing time-series methods with machine learning methods, the results favor the Kalman filter over random forests for online prediction of stock prices.
    Date: 2020–07
  12. By: Lennart Oelschläger; Timo Adam
    Abstract: Financial markets exhibit alternating periods of rising and falling prices. Stock traders seeking to make profitable investment decisions have to account for those trends, where the goal is to accurately predict switches from bullish towards bearish markets and vice versa. Popular tools for modeling financial time series are hidden Markov models, where a latent state process is used to explicitly model switches among different market regimes. In their basic form, however, hidden Markov models are not capable of capturing both short- and long-term trends, which can lead to a misinterpretation of short-term price fluctuations as changes in the long-term trend. In this paper, we demonstrate how hierarchical hidden Markov models can be used to draw a comprehensive picture of financial markets, which can contribute to the development of more sophisticated trading strategies. The feasibility of the suggested approach is illustrated in two real-data applications, where we model data from two major stock indices, the Deutscher Aktienindex and the Standard & Poor's 500.
    Date: 2020–07
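The basic (non-hierarchical) building block of this approach is a two-state Gaussian hidden Markov model whose states are decoded, e.g. by the Viterbi algorithm, into bullish and bearish regimes. A minimal sketch with assumed (known) parameters; the paper's hierarchical extension adds a coarser latent layer on top:

```python
import numpy as np

def viterbi(obs, means, sds, trans, init):
    """Most likely state path for a Gaussian-emission hidden Markov model."""
    n, S = obs.size, means.size
    logA = np.log(trans)
    logp = (-0.5 * ((obs[:, None] - means) / sds) ** 2
            - np.log(sds) - 0.5 * np.log(2 * np.pi))  # (n, S) emission log-densities
    delta = np.log(init) + logp[0]
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        cand = delta[:, None] + logA          # scores for (from, to) transitions
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], np.arange(S)] + logp[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(n - 1, 0, -1):             # backtrack
        path[t - 1] = back[t, path[t]]
    return path

# bullish (state 0: small drift, calm) vs bearish (state 1: negative drift, volatile)
rng = np.random.default_rng(9)
trans = np.array([[0.98, 0.02], [0.03, 0.97]])
means, sds = np.array([0.001, -0.002]), np.array([0.008, 0.02])
states = np.empty(1000, dtype=int)
states[0] = 0
for t in range(1, 1000):
    states[t] = int(rng.random() >= trans[states[t - 1], 0])
returns = means[states] + sds[states] * rng.standard_normal(1000)

decoded = viterbi(returns, means, sds, trans, np.array([0.5, 0.5]))
accuracy = float(np.mean(decoded == states))
```

The persistence in the transition matrix is what lets the decoder distinguish genuine regime switches from short-term fluctuations, the misinterpretation the hierarchical version is designed to avoid.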
  13. By: Glocker, Christian; Kaniovski, Serguei
    Abstract: We propose a modeling approach involving a series of small-scale dynamic factor models. They are connected to each other within a cluster, whose linkages are derived from Granger-causality tests. This approach merges the benefits of large-scale macroeconomic and small-scale factor models, rendering our Cluster of Dynamic Factor Models (CDFM) useful for model-consistent nowcasting and forecasting on a larger scale. While the CDFM has a simple structure and is easy to replicate, its forecasts are more precise than those of a wide range of competing models and those of professional forecasters. Moreover, the CDFM allows forecasters to introduce their own judgment and hence produce conditional forecasts.
    Keywords: Forecasting, Dynamic factor model, Granger causality, Structural modeling
    JEL: C22 C53 C55 E37
    Date: 2020–07
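The Granger-causality linkages used to connect the cluster can be tested with a standard F comparison of restricted and unrestricted autoregressions. A minimal bivariate sketch with an assumed data-generating process (the CDFM itself links factor models this way, which this sketch does not reproduce):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for 'x Granger-causes y': AR(lags) for y with vs without x lags."""
    n = y.size
    Y = y[lags:]
    X_r = np.column_stack([np.ones(n - lags)]
                          + [y[lags - i: n - i] for i in range(1, lags + 1)])
    X_u = np.column_stack([X_r]
                          + [x[lags - i: n - i] for i in range(1, lags + 1)])
    ssr = lambda X: float(np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0])**2))
    ssr_r, ssr_u = ssr(X_r), ssr(X_u)
    df = n - lags - X_u.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / df)

rng = np.random.default_rng(10)
n = 500
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

f_xy = granger_f(y, x)  # x helps predict y: large F
f_yx = granger_f(x, y)  # y carries no extra information about x: modest F
```

Running such tests over all series pairs yields the directed linkage structure from which the cluster of factor models is assembled.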
  14. By: Peng-Fei Dai (TJU); Xiong Xiong (TJU); Wei-Xing Zhou (ECUST)
    Abstract: This paper aims to examine whether the global economic policy uncertainty (GEPU) and uncertainty changes have different impacts on crude oil futures volatility. We establish single-factor and two-factor models under the GARCH-MIDAS framework to investigate the predictive power of GEPU and GEPU changes excluding and including realized volatility. The findings show that the models with rolling-window specification perform better than those with fixed-span specification. For single-factor models, the GEPU index and its changes, as well as realized volatility, are consistently effective factors in predicting the volatility of crude oil futures. Specifically, GEPU changes have stronger predictive power than the GEPU index. For two-factor models, GEPU is not an effective forecast factor for the volatility of WTI crude oil futures or Brent crude oil futures. The two-factor model with GEPU changes contains more information and exhibits stronger forecasting ability for crude oil futures market volatility than the single-factor models. The GEPU changes are indeed the main source of long-term volatility of the crude oil futures.
    Date: 2020–07
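A GARCH-MIDAS conditional variance multiplies a slow-moving long-run component (a MIDAS-weighted average of a low-frequency driver such as EPU) by a unit-mean short-run GARCH(1,1) component. The parameter values, exponential long-run specification, and synthetic data below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

def beta_weights(K, w=3.0):
    """MIDAS Beta(1, w) lag weights, normalized to sum to one."""
    k = np.arange(1, K + 1)
    raw = (1 - k / K) ** (w - 1)
    return raw / raw.sum()

def garch_midas_variance(returns, epu, alpha=0.06, beta=0.91,
                         theta=0.1, m=0.0, K=12):
    """Two-component variance: long-run tau from MIDAS-weighted EPU,
    short-run g from a unit-mean GARCH(1,1) recursion."""
    w = beta_weights(K)
    n = returns.size
    tau, g, var = np.empty(n), np.ones(n), np.empty(n)
    for t in range(n):
        lagged = epu[max(0, t - K):t]
        # most recent observation first; pad early periods with the first value
        hist = np.concatenate([lagged[::-1], np.full(K - lagged.size, epu[0])])
        tau[t] = np.exp(m + theta * np.dot(w, hist))
        if t > 0:
            g[t] = (1 - alpha - beta) + alpha * returns[t - 1]**2 / tau[t - 1] \
                   + beta * g[t - 1]
        var[t] = tau[t] * g[t]
    return var

rng = np.random.default_rng(11)
n = 500
epu = np.abs(rng.standard_normal(n))  # synthetic standardized uncertainty series
ret = 0.01 * rng.standard_normal(n)
var = garch_midas_variance(ret, epu)
```

A two-factor version of the kind compared in the paper would enter a second driver (e.g. realized volatility) additively inside the long-run exponential.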
  15. By: Okpara, Godwin Chigozie
    Abstract: This paper explores the relationship between news on stock market returns and conditional volatility in Nigeria. To determine this relationship, the researcher employed the exponential generalized autoregressive conditional heteroskedasticity in mean (EGARCH-M) model, since it accommodates asymmetry and the leverage property. The results of the analysis show that there is a significant relationship between stock market returns and conditional volatility; that the persistence of shocks in the market takes a short time to die out; and that stock market volatility is less sensitive to market events, while the asymmetric effect is positive and significant, indicating that good news lowers volatility in Nigeria. In light of these findings, the researcher suggests that the Nigerian Stock Exchange should ensure that company-specific information is reliable, transparent, and speedily disseminated. Also, since good news already lowers volatility and the cost of capital in the economy, the government should avoid unnecessary modifications of policies that are capable of changing the market trading pattern. These measures, the researcher believes, will reduce information asymmetry and enhance the sensitivity of volatility to market events.
    Keywords: Stock returns, EGARCH in mean, information asymmetry, bad news, good news
    JEL: C32 E32 F65
    Date: 2020–08–12
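The EGARCH-in-mean structure used here combines a log-variance recursion with asymmetric news impact and a variance term in the return equation. A minimal simulation sketch with assumed parameter values (a conventional asymmetry sign is used for illustration; the paper estimates the model rather than simulating it):

```python
import numpy as np

def egarch_step(log_h, z, omega=-0.1, alpha=0.2, gamma=-0.1, beta=0.95):
    """One EGARCH(1,1) log-variance update; gamma != 0 creates asymmetry,
    so shocks of equal size but opposite sign move volatility differently."""
    return omega + beta * log_h + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z

def simulate_egarch_m(n, mu=0.0, lam=0.05, seed=12):
    """Simulate returns r_t = mu + lam * h_t + sqrt(h_t) z_t ('in-mean' term lam)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    log_h = np.empty(n)
    log_h[0] = 0.0
    for t in range(1, n):
        log_h[t] = egarch_step(log_h[t - 1], z[t - 1])
    h = np.exp(log_h)
    return mu + lam * h + np.sqrt(h) * z, h

returns, h = simulate_egarch_m(2000)
```

Because the recursion works on log h_t, positivity of the conditional variance is automatic, and the sign of the asymmetry coefficient directly determines whether good or bad news raises volatility.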

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.