nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒11‒09
twenty papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Asymptotic F test in Regressions with Observations Collected at High Frequency over Long Span By Pellatt, Daniel; Sun, Yixiao
  2. Error-correction factor models for high-dimensional cointegrated time series By Tu, Yundong; Yao, Qiwei; Zhang, Rongmao
  3. Asymptotic Properties of the Maximum Likelihood Estimator in Endogenous Regime-Switching Models By Chaojun Li; Yan Liu
  4. A Functional-Coefficient VAR Model for Dynamic Quantiles with Constructing Financial Network By Zongwu Cai; Xiyuan Liu
  5. Time-varying Forecast Combination for High-Dimensional Data By Bin Chen; Kenwin Maung
  6. Modeling Long Cycles By Kang, Natasha; Marmer, Vadim
  7. The use of scaling properties to detect relevant changes in financial time series: a new visual warning tool By Ioannis P. Antoniades; Giuseppe Brandi; L. G. Magafas; T. Di Matteo
  8. A random forest-based approach to identifying the most informative seasonality tests By Ollech, Daniel; Webel, Karsten
  9. Bayesian state space models in macroeconometrics By Joshua C.C. Chan; Rodney W. Strachan
  10. Piecewise-Linear Approximations and Filtering for DSGE Models with Occasionally Binding Constraints By S. Borağan Aruoba; Pablo Cuba-Borda; Kenji Higa-Flores; Frank Schorfheide; Sergio Villalvazo
  11. Measuring Uncertainty and Its Effects in the COVID-19 Era By Andrea Carriero; Todd E. Clark; Massimiliano Marcellino; Elmar Mertens
  12. Forecasting Economic Activity Using the Yield Curve: Quasi-Real-Time Applications for New Zealand, Australia and the US By Todd Henry; Peter C.B. Phillips
  13. Forecasting Consumer Price Index Inflation in India: Vector Error Correction Mechanism Vs. Dynamic Factor Model Approach for Non-Stationary Time Series. By Bhattacharya, Rudrani; Kapoor, Mrigankshi
  14. Cryptocurrency portfolio optimization with multivariate normal tempered stable processes and Foster-Hart risk By Tetsuo Kurosaki; Young Shin Kim
  15. What do we gain from Seasonal Adjustment of the Indian Index of Industrial Production (IIP)? By Pandey, Radhika; Sapre, Amey; Sinha, Pramod
  16. Macroeconometric Forecasting Using a Cluster of Dynamic Factor Models By Christian Glocker; Serguei Kaniovski
  17. Real-time forecasting of the Australian macroeconomy using flexible Bayesian VARs By Bo Zhang; Bao H. Nguyen
  18. A comparison of monthly global indicators for forecasting growth By Christiane Baumeister; Pierre Guérin
  19. Real-Time Density Nowcasts of US Inflation: A Model-Combination Approach By Edward S. Knotek; Saeed Zaman
  20. High-dimensional covariance matrix estimation By Lam, Clifford

  1. By: Pellatt, Daniel; Sun, Yixiao
    Abstract: This paper proposes tests of linear hypotheses when the variables may be continuous-time processes with observations collected at a high sampling frequency over a long span. Utilizing series long run variance (LRV) estimation in place of the traditional kernel LRV estimation, we develop easy-to-implement and more accurate F tests in both stationary and nonstationary environments. The nonstationary environment accommodates endogenous regressors that are general semimartingales. The F tests can be implemented in exactly the same way as in the usual discrete-time setting. The F tests are, therefore, robust to the continuous-time or discrete-time nature of the data. Simulations demonstrate the improved size accuracy and competitive power of the F tests relative to existing continuous-time testing procedures and their improved versions. The F tests are of practical interest as recent work by Chang et al. (2018) demonstrates that traditional inference methods can become invalid and produce spurious results when continuous-time processes are observed on finer grids over a long span.
    Keywords: Social and Behavioral Sciences, continuous time model, F distribution, high frequency regression, long run variance estimation
    Date: 2020–10–29
  2. By: Tu, Yundong; Yao, Qiwei; Zhang, Rongmao
    Abstract: Cointegration inferences often rely on a correct specification for the short-run dynamic vector autoregression. However, this specification is unknown, a priori. A lag length that is too small leads to an erroneous inference as a result of the misspecification. In contrast, using too many lags leads to a dramatic increase in the number of parameters, especially when the dimension of the time series is high. In this paper, we develop a new methodology which adds an error-correction term for the long-run equilibrium to a latent factor model in order to model the short-run dynamic relationship. The inferences use the eigenanalysis-based methods to estimate the cointegration and latent factor process. The proposed error-correction factor model does not require an explicit specification of the short-run dynamics, and is particularly effective for high-dimensional cases, in which the standard error-correction model suffers from overparametrization. In addition, the model improves the predictive performance of the pure factor model. The asymptotic properties of the proposed methods are established when the dimension of the time series is either fixed or diverging slowly as the length of the time series goes to infinity. Lastly, the performance of the model is evaluated using both simulated and real data sets.
    Keywords: cointegration; eigenanalysis; factor models; nonstationary processes; vector time series
    JEL: C1
    Date: 2020–07–01
  3. By: Chaojun Li; Yan Liu
    Abstract: This study proves the asymptotic properties of the maximum likelihood estimator (MLE) in a wide range of endogenous regime-switching models. This class of models extends the constant state transition probability in Markov-switching models to a time-varying probability that includes information from observations. A feature of importance in this proof is the mixing rate of the state process conditional on the observations, which is time varying owing to the time-varying transition probabilities. Consistency and asymptotic normality follow from the almost deterministic geometric decaying bound of the mixing rate. Relying on low-level assumptions that have been shown to hold in general, this study provides theoretical foundations for statistical inference in most endogenous regime-switching models in the literature. As an empirical application, an endogenous regime-switching autoregressive conditional heteroscedasticity (ARCH) model is estimated and analyzed with the obtained inferential results.
    Date: 2020–10
  4. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Xiyuan Liu (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: The degree of interdependences among holdings of financial sectors and its varying patterns play important roles in forming systemic risks within a financial system. In this article, we propose a VAR model of conditional quantiles with functional coefficients to construct a novel class of dynamic network system, of which the interdependences among tail risks such as Value-at-Risk are allowed to vary with a variable of general economy. Methodologically, we develop an easy-to-implement two-stage procedure to estimate functionals in the dynamic network system by the local linear smoothing technique. We establish the consistency and the asymptotic normality of the proposed estimator under time series settings. The simulation studies are conducted to show that our new methods work fairly well. The potential of the proposed estimation procedures is demonstrated by an empirical study of constructing and estimating a new type of dynamic financial network.
    Keywords: Dynamic financial network; Functional coefficient models; Multivariate conditional quantile models; Nonparametric estimation; VAR modeling
    JEL: C14 C58 C45 G32
    Date: 2020–10
  5. By: Bin Chen; Kenwin Maung
    Abstract: In this paper, we propose a new nonparametric estimator of time-varying forecast combination weights. When the number of individual forecasts is small, we study the asymptotic properties of the local linear estimator. When the number of candidate forecasts exceeds or diverges with the sample size, we consider penalized local linear estimation with the group SCAD penalty. We show that the estimator exhibits the oracle property and correctly selects relevant forecasts with probability approaching one. Simulations indicate that the proposed estimators outperform existing combination schemes when structural changes exist. Two empirical studies on inflation forecasting and equity premium prediction highlight the merits of our approach relative to other popular methods.
    Date: 2020–10
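    The local estimation idea in this abstract can be sketched in a few lines. The snippet below is a simplified local-constant (Nadaraya-Watson) version of kernel-weighted combination weights, not the authors' penalized local linear estimator with the group SCAD penalty; the Gaussian kernel, the bandwidth, and the function name are illustrative assumptions:

```python
import numpy as np

def tv_combination_weights(y, F, t0, bandwidth=0.1):
    """Kernel-weighted least squares estimate of combination weights at
    rescaled time t0 in (0, 1): minimize sum_t K((t/T - t0)/h) * (y_t - F_t'w)^2.
    F is a (T x m) matrix of individual forecasts, y the (T,) target."""
    T = len(y)
    u = (np.arange(1, T + 1) / T - t0) / bandwidth
    k = np.exp(-0.5 * u ** 2)                 # Gaussian kernel weights
    sw = np.sqrt(k)                           # weighted least squares via rescaling
    w, *_ = np.linalg.lstsq(sw[:, None] * F, sw * y, rcond=None)
    return w
```

    With constant true weights the estimate at any interior t0 recovers them; with structural change, re-estimating at different t0 traces the drifting weights.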
  6. By: Kang, Natasha; Marmer, Vadim
    Abstract: Recurrent boom-and-bust cycles are a salient feature of economic and financial history. Cycles found in the data are stochastic, often highly persistent, and span substantial fractions of the sample size. We refer to such cycles as “long”. In this paper, we develop a novel approach to modeling cyclical behavior specifically designed to capture long cycles. We show that existing inferential procedures may produce misleading results in the presence of long cycles, and propose a new econometric procedure for the inference on the cycle length. Our procedure is asymptotically valid regardless of the cycle length. We apply our methodology to a set of macroeconomic and financial variables for the U.S. We find evidence of long stochastic cycles in the standard business cycle variables, as well as in credit and house prices. However, we rule out the presence of stochastic cycles in asset market data. Moreover, according to our result, financial cycles as characterized by credit and house prices tend to be twice as long as business cycles.
    Keywords: Stochastic cycles, autoregressive processes, local-to-unity asymptotics, confidence sets, business cycle, financial cycle
    JEL: C12 C22 C5 E32 E44
    Date: 2020–10–25
  7. By: Ioannis P. Antoniades; Giuseppe Brandi; L. G. Magafas; T. Di Matteo
    Abstract: The dynamical evolution of multiscaling in financial time series is investigated using time-dependent Generalized Hurst Exponents (GHE), $H_q$, for various values of the parameter $q$. Using $H_q$, we introduce a new visual methodology to algorithmically detect critical changes in the scaling of the underlying complex time-series. The methodology involves the degree of multiscaling at a particular time instance, the multiscaling trend which is calculated by the Change-Point Analysis method, and a rigorous evaluation of the statistical significance of the results. Using this algorithm, we have identified particular patterns in the temporal co-evolution of the different $H_q$ time-series. These GHE patterns distinguish, in a statistically robust way, not only between time periods of uniscaling and multiscaling, but also among different types of multiscaling: symmetric multiscaling (M) and asymmetric multiscaling (A). We apply the visual methodology to time series comprising daily close prices of four stock market indices: two major ones (S&P 500 and NIKKEI) and two peripheral ones (Athens Stock Exchange General Index and Bombay-SENSEX). Results show that multiscaling varies greatly with time: time periods of strong multiscaling behavior and time periods of uniscaling behavior alternate, while transitions from uniscaling to multiscaling behavior occur before critical market events, such as stock market bubbles. Moreover, particular asymmetric multiscaling patterns appear during critical stock market eras and provide useful information about market conditions. In particular, they can be used as 'fingerprints' of a turbulent market period as well as provide warning signals for an upcoming stock market 'bubble'. The applied visual methodology also appears to distinguish between exogenous and endogenous stock market crises, based on the observed patterns before the actual events.
    Date: 2020–10
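    The scaling relation behind the Generalized Hurst Exponent can be illustrated compactly. The sketch below estimates $H_q$ from the power-law scaling of the $q$-th order moments of increments, $E|X(t+\tau)-X(t)|^q \sim \tau^{qH_q}$; it omits the paper's change-point analysis and significance testing, and the function name and scale range are assumptions:

```python
import numpy as np

def generalized_hurst(x, q=2.0, taus=range(1, 20)):
    """Estimate the generalized Hurst exponent H_q from the scaling of
    q-th order absolute increments: E|X(t+tau) - X(t)|^q ~ tau^(q * H_q)."""
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    # slope of log K_q(tau) against log tau equals q * H_q
    slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]
    return slope / q

# A Brownian motion is uniscaling: H_q is close to 0.5 for every q
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(100_000))
h2 = generalized_hurst(bm, q=2)
```

    Multiscaling shows up as a nontrivial dependence of the estimated $H_q$ on $q$; computing this on a rolling window gives the time-dependent GHE series the authors track.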
  8. By: Ollech, Daniel; Webel, Karsten
    Abstract: Virtually every seasonal adjustment software includes an ensemble of seasonality tests for assessing whether a given time series is in fact a candidate for seasonal adjustment. However, such tests are certain to produce either the same result or conflicting results, raising the question if there is a method that is capable of identifying the most informative tests in order (1) to eliminate the seemingly non-informative ones in the former case and (2) to find a final decision in the more severe latter case. We argue that identifying the seasonal status of a given time series is essentially a classification problem and, thus, can be solved with machine learning methods. Using simulated seasonal and non-seasonal ARIMA processes that are representative of the Bundesbank's time series database, we compare certain popular methods with respect to accuracy, interpretability and availability of unbiased variable importance measures and find random forests of conditional inference trees to be the method which best balances these key requirements. Applying this method to the seasonality tests implemented in the seasonal adjustment software JDemetra+ finally reveals that the modified QS and Friedman tests yield by far the most informative results.
    Keywords: binary classification, conditional inference trees, correlated predictors, JDemetra+, simulation study, supervised machine learning
    JEL: C12 C14 C22 C45 C63
    Date: 2020
  9. By: Joshua C.C. Chan; Rodney W. Strachan
    Abstract: State space models play an important role in macroeconometric analysis and the Bayesian approach has been shown to have many advantages. This paper outlines recent developments in state space modelling applied to macroeconomics using Bayesian methods. We outline the directions of recent research, specifically the problems being addressed and the solutions proposed. After presenting a general form for the linear Gaussian model, we discuss the interpretations and virtues of alternative estimation routines and their outputs. This discussion includes the Kalman filter and smoother, and precision-based algorithms. As the advantages of using large models have become better understood, a focus has developed on dimension reduction and computational advances to cope with high-dimensional parameter spaces. We give an overview of a number of recent advances in these directions. Many models suggested by economic theory are either non-linear or non-Gaussian, or both. We discuss work on the particle filtering approach to such models as well as other techniques that use various approximations - to either the time t state and measurement equations or to the full posterior for the states - to obtain draws.
    Keywords: State space model, filter, smoother, non-linear, non-Gaussian, high-dimension, dimension reduction.
    JEL: C11 C22 E32
    Date: 2020–10
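    As a concrete reference point for the linear Gaussian case discussed in this abstract, here is a minimal Kalman filter for the local level model. This is a textbook sketch, not code from the paper; the function name and the diffuse initialization values are assumptions:

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model:
       y_t = alpha_t + eps_t,        eps_t ~ N(0, sigma_eps2)
       alpha_{t+1} = alpha_t + eta_t, eta_t ~ N(0, sigma_eta2).
    Returns the filtered state estimates E[alpha_t | y_1..y_t]."""
    a, p = a0, p0                      # diffuse prior on the initial state
    filtered = []
    for yt in y:
        v = yt - a                     # one-step-ahead prediction error
        f = p + sigma_eps2             # prediction error variance
        k = p / f                      # Kalman gain
        a = a + k * v                  # filtering update of the state mean
        p = p * (1 - k) + sigma_eta2   # predicted state variance for t+1
        filtered.append(a)
    return np.array(filtered)
```

    In steady state this recursion reduces to an exponentially weighted moving average whose smoothing constant depends on the signal-to-noise ratio sigma_eta2 / sigma_eps2.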
  10. By: S. Borağan Aruoba; Pablo Cuba-Borda; Kenji Higa-Flores; Frank Schorfheide; Sergio Villalvazo
    Abstract: We develop an algorithm to construct approximate decision rules that are piecewise-linear and continuous for DSGE models with an occasionally binding constraint. The functional form of the decision rules allows us to derive a conditionally optimal particle filter (COPF) for the evaluation of the likelihood function that exploits the structure of the solution. We document the accuracy of the likelihood approximation and embed it into a particle Markov chain Monte Carlo algorithm to conduct Bayesian estimation. Compared with a standard bootstrap particle filter, the COPF significantly reduces the persistence of the Markov chain, improves the accuracy of Monte Carlo approximations of posterior moments, and drastically speeds up computations. We use the techniques to estimate a small-scale DSGE model to assess the effects of the government spending portion of the American Recovery and Reinvestment Act in 2009 when interest rates reached the zero lower bound.
    JEL: C5 E4 E5
    Date: 2020–10
  11. By: Andrea Carriero; Todd E. Clark; Massimiliano Marcellino; Elmar Mertens
    Abstract: We measure the effects of the COVID-19 outbreak on macroeconomic and financial uncertainty, and we assess the consequences of the latter for key economic variables. We use a large, heteroskedastic vector autoregression (VAR) in which the error volatilities share two common factors, interpreted as macro and financial uncertainty, in addition to idiosyncratic components. Macro and financial uncertainty are allowed to contemporaneously affect the macroeconomy and financial conditions, with changes in the common component of the volatilities providing contemporaneous identifying information on uncertainty. We also consider an extended version of the model, based on a latent state approach to accommodating outliers in volatility, to reduce the influence of extreme observations from the COVID period. The estimates we obtain yield very large increases in macroeconomic and financial uncertainty over the course of the COVID-19 period. These increases have contributed to the downturn in economic and financial conditions, but with both models, the contributions of uncertainty are small compared to the overall movements in many macroeconomic and financial indicators. That implies that the downturn is driven more by other dimensions of the COVID crisis than shocks to aggregate uncertainty (as measured by our method).
    Keywords: Bayesian VARs; stochastic volatility; pandemics; COVID-19
    JEL: E32 E44 C11 C55
    Date: 2020–10–23
  12. By: Todd Henry (University of Auckland); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: Inversion of the yield curve has come to be viewed as a leading recession indicator. Unsurprisingly, some recent instances of inversion have attracted attention from economic commentators and policymakers about possible impending recessions. Using a variety of time series models and recent innovations in econometric method, this paper conducts quasi-real-time forecasting exercises to investigate whether the predictive capability of the yield curve extends to forecasting economic activity in general and whether removing the term premium component from yields affects forecast accuracy. The empirical findings for the US, Australia, and New Zealand show that forecast performance is not improved either by augmenting simplistic models with information from the yield curve or by making such a decomposition of yields. Results from similar research exercises in previous work in the literature are mixed. The results of the present analysis suggest possible explanations that reconcile these conflicting results.
    Keywords: Forecasting, Inversion, Recession indicator, Yield curve
    JEL: C53 E43
    Date: 2020–10
  13. By: Bhattacharya, Rudrani (National Institute of Public Finance and Policy); Kapoor, Mrigankshi (Birla Institute of Technology and Science)
    Abstract: Short- to medium-term forecasting of the inflation rate is important for decision making by economic agents and for the timely implementation of monetary policy. In this study, we develop two alternative forecasting models for Year-on-Year (YOY) inflation in the Consumer Price Index (CPI) in India using a large number of macroeconomic indicators. The YOY CPI inflation and its predictive indicators are found to be non-stationary and cointegrated. To address this issue, we employ a Vector Error Correction Model (VECM) and a Dynamic Factor Model (DFM) modified for non-stationary time series to forecast CPI inflation. We find that in terms of Root Mean Square Error (RMSE), the VECM model performs marginally better than the DFM model. However, both models are found to have the same predictive accuracy using the Diebold-Mariano test.
    Keywords: CPI Inflation ; India ; Forecasting ; Vector Error Correction Model ; Dynamic Factor Model
    JEL: C32 C53
    Date: 2020–10
  14. By: Tetsuo Kurosaki; Young Shin Kim
    Abstract: We study portfolio optimization of four major cryptocurrencies. Our time series model is a generalized autoregressive conditional heteroscedasticity (GARCH) model with multivariate normal tempered stable (MNTS) distributed residuals used to capture the non-Gaussian cryptocurrency return dynamics. Based on the time series model, we optimize the portfolio in terms of Foster-Hart risk. These sophisticated techniques have not yet been documented in the context of cryptocurrencies. Statistical tests suggest that the MNTS distributed GARCH model fits cryptocurrency returns better than the competing GARCH-type models. We find that Foster-Hart optimization yields a more profitable portfolio with a better risk-return balance than the prevailing approach.
    Date: 2020–10
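    The GARCH building block of the authors' model can be sketched as follows. This is a plain univariate GARCH(1,1) variance recursion for illustration only; the paper's model uses multivariate normal tempered stable residuals and Foster-Hart risk optimization, neither of which is reproduced here, and the function name and initialization are assumptions:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
       sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(r))
    sigma2[0] = omega / (1 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

    Replacing the Gaussian innovations of a standard GARCH with MNTS draws, as the authors do, changes the distribution of the standardized residuals but leaves this variance recursion intact.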
  15. By: Pandey, Radhika (National Institute of Public Finance and Policy); Sapre, Amey (National Institute of Public Finance and Policy); Sinha, Pramod (National Institute of Public Finance and Policy)
    Abstract: In this paper we conduct a seasonal adjustment (SA) of the 2011-12 base series of the Index of Industrial Production (IIP). We use the X-13 ARIMA-SEATS iterative process and follow an indirect approach of first identifying seasonality at the product level and then recompiling the manufacturing index with seasonally adjusted series. The SA process shows identifiable seasonality in 206/405 (about half) of the items, spread within the broad NIC groups of food, beverages, textiles, leather and apparel. Seasonally adjusted levels also provide a smooth, low-fluctuation series that can be used for extrapolation in the advance and provisional estimate stages of GDP estimation. However, the SA process reveals several data quality issues of inexplicable outliers, growth rates and changes in the pattern of individual items. While seasonal adjustment has advantages, the process pre-supposes pristine data quality and, given the trends shown by item-level data, both the SA and actual IIP are inadequate in explaining the growth performance of the manufacturing sector.
    Keywords: Seasonal Adjustment ; X-13 ARIMA-SEATS ; Index of Industrial Production ; Fluctuations ; India
    JEL: C43 C50 P44
    Date: 2020–10
  16. By: Christian Glocker; Serguei Kaniovski
    Abstract: We propose a modelling approach involving a series of small-scale factor models. They are connected to each other within a cluster, whose linkages are derived from Granger-causality tests. GDP forecasts are established across the production, income and expenditure accounts within a disaggregated approach. This method merges the benefits of large-scale macroeconomic and small-scale factor models, rendering our Cluster of Dynamic Factor Models (CDFM) useful for model-consistent forecasting on a large scale. While the CDFM has a simple structure, its forecasts outperform those of a wide range of competing models and of professional forecasters. Moreover, the CDFM allows forecasters to introduce their own judgment and hence produce conditional forecasts.
    Keywords: Forecasting, Dynamic factor model, Granger causality, Structural modeling
    Date: 2020–10–27
  17. By: Bo Zhang; Bao H. Nguyen
    Abstract: This paper evaluates the real-time forecast performance of alternative Bayesian Vector Autoregressive (VAR) models for the Australian macroeconomy. To this end, we construct an updated vintage database and estimate a set of model specifications with different covariance structures. The results suggest that a large VAR model with 20 variables tends to outperform a small VAR model when forecasting GDP growth, CPI inflation and the unemployment rate. We find consistent evidence that the models with more flexible error covariance structures forecast GDP growth and inflation better than the standard VAR, while the standard VAR does better than its counterparts for the unemployment rate. The results are robust under alternative priors and when the data includes the early stage of the COVID-19 crisis.
    Keywords: Australia, real-time forecast, Non-Gaussian, Stochastic Volatility
    JEL: C11 C32 C53 C55
    Date: 2020–10
  18. By: Christiane Baumeister; Pierre Guérin
    Abstract: This paper evaluates the predictive content of a set of alternative monthly indicators of global economic activity for nowcasting and forecasting quarterly world GDP using mixed-frequency models. We find that a recently proposed indicator that covers multiple dimensions of the global economy consistently produces substantial improvements in forecast accuracy, while other monthly measures have more mixed success. This global economic conditions indicator also contains valuable information for assessing the current and future state of the economy for a set of individual countries and groups of countries. We use this indicator to track the evolution of the nowcasts for the US, the OECD area, and the world economy during the coronavirus pandemic and quantify the main factors driving the nowcasts.
    Keywords: MIDAS models, global economic conditions, world GDP growth, nowcasting, forecasting, mixed frequency
    JEL: C22 C52 E37
    Date: 2020–10
  19. By: Edward S. Knotek; Saeed Zaman
    Abstract: We develop a flexible modeling framework to produce density nowcasts for US inflation at a trading-day frequency. Our framework: (1) combines individual density nowcasts from three classes of parsimonious mixed-frequency models; (2) adopts a novel flexible treatment in the use of the aggregation function; and (3) permits dynamic model averaging via the use of weights that are updated based on learning from past performance. Together these features provide density nowcasts that can accommodate non-Gaussian properties. We document the competitive properties of the nowcasts generated from our framework using high-frequency real-time data over the period 2000-2015.
    Keywords: mixed-frequency models; inflation; density nowcasts; density combinations
    JEL: C15 C53 E3 E37
    Date: 2020–10–22
  20. By: Lam, Clifford
    Abstract: Covariance matrix estimation plays an important role in statistical analysis in many fields, including (but not limited to) portfolio allocation and risk management in finance, graphical modeling, clustering for gene discovery in bioinformatics, and Kalman filtering and factor analysis in economics. In this paper, we give a selective review of covariance and precision matrix estimation when the matrix dimension can be diverging with, or even larger than, the sample size. Two broad categories of regularization methods are presented. The first category exploits an assumed structure of the covariance or precision matrix for consistent estimation. The second category shrinks the eigenvalues of a sample covariance matrix, knowing from random matrix theory that such eigenvalues are biased from the population counterparts when the matrix dimension grows at the same rate as the sample size. This article is categorized under: Statistical and Graphical Methods of Data Analysis > Analysis of High Dimensional Data; Statistical and Graphical Methods of Data Analysis > Multivariate Analysis; Statistical and Graphical Methods of Data Analysis > Nonparametric Methods.
    Keywords: Structured covariance estimation; sparsity; low rank plus sparse; factor model; shrinkage
    JEL: C1
    Date: 2020–03–01
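    The second category of methods in this abstract (eigenvalue shrinkage) can be illustrated with the simplest linear shrinkage of the sample covariance toward a scaled identity target. The fixed shrinkage weight `delta` and the function name are assumptions for illustration; data-driven rules such as Ledoit-Wolf estimate the weight from the data:

```python
import numpy as np

def shrink_covariance(X, delta=0.5):
    """Linear shrinkage of the sample covariance toward a scaled identity:
       Sigma_hat = (1 - delta) * S + delta * mu * I,  mu = trace(S) / p.
    Pulls the extreme sample eigenvalues toward their grand mean, which
    guarantees a well-conditioned estimate even when p > n."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    S = np.cov(X, rowvar=False)        # p x p sample covariance
    mu = np.trace(S) / p               # average sample eigenvalue
    return (1 - delta) * S + delta * mu * np.eye(p)
```

    When p exceeds n the sample covariance is singular, while the shrunk estimate remains positive definite by construction, since every eigenvalue is bounded below by delta * mu.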

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.