nep-ets New Economics Papers
on Econometric Time Series
Issue of 2019‒02‒18
eighteen papers chosen by
Jaqueson K. Galimberti
KOF Swiss Economic Institute

  1. Structural Breaks in Time Series By Alessandro Casini; Pierre Perron
  2. Estimating Multiple Breaks in Nonstationary Autoregressive Models By Pang, Tianxiao; Du, Lingjie; Chong, Terence Tai Leung
  3. Time-Varying General Dynamic Factor Models and the Measurement of Financial Connectedness By Matteo Barigozzi; Marc Hallin; Stefano Soccorsi
  4. Bayesian Nonparametric Adaptive Spectral Density Estimation for Financial Time Series By Nick James; Roman Marchant; Richard Gerlach; Sally Cripps
  5. A Horse Race in High Dimensional Space By Paolo Andreini; Donato Ceci
  6. Dynamic Tobit models By Harvey, A.; Liao, Y.
  7. Resuscitating the co-fractional model of Granger (1986) By Federico Carlini; Paolo Santucci de Magistris
  8. Integer-valued stochastic volatility By Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos; Touche, Nassim
  9. Testing for Changes in Forecasting Performance By Pierre Perron; Yohei Yamamoto
  10. Cross-temporal aggregation: Improving the forecast accuracy of hierarchical electricity consumption By Spiliotis, Evangelos; Petropoulos, Fotios; Kourentzes, Nikolaos; Assimakopoulos, Vassilios
  11. Improved methods for combining point forecasts for an asymmetrically distributed variable By Ozer Karagedikli; Shaun P. Vahey; Elizabeth C. Wakerly
  12. Multivariate Bayesian Predictive Synthesis in Macroeconomic Forecasting By Kenichiro McAlinn; Knut Are Aastveit; Jouchi Nakajima; Mike West
  13. Long memory via networking By Susanne M. Schennach
  14. Direct determination approach for the multifractal detrending moving average analysis By Hai-Chuan Xu; Gao-Feng Gu; Wei-Xing Zhou
  15. Robust Bayesian inference for set-identified models By Raffaella Giacomini; Toru Kitagawa
  16. Kernel block bootstrap By Paulo Parente; Richard J. Smith
  17. Hawkes processes for credit indices time series analysis: How random are trades arrival times? By Achraf Bahamou; Maud Doumergue; Philippe Donnat
  18. Sup-ADF-style bubble-detection methods under test By Verena Monschang; Bernd Wilfling

  1. By: Alessandro Casini (Boston University); Pierre Perron (Boston University)
    Abstract: This chapter covers methodological issues related to estimation, testing and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered. The first part summarizes and updates developments described in an earlier review, Perron (2006), with the exposition drawing heavily on Perron (2008). Additions are included for recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models and methods of inference based on a continuous record asymptotic framework. Our focus is on the so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct usefulness in practice, as opposed to issues that are mostly of theoretical interest.
    Keywords: Change-point, linear models, testing, confidence intervals, trends, stationary and integrated regressors, factor models, Lasso, forecasts.
    JEL: C22
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2019-002&r=all
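    Not from the chapter, but as a minimal illustration of the canonical off-line testing problem it surveys: a sup-Wald test for a single change in mean, scanning candidate break dates over a trimmed range. Function names and the trimming constant are illustrative, and critical values (e.g. from Andrews, 1993) are not computed here.
```python
import numpy as np

def sup_wald_mean_break(y, trim=0.15):
    """Sup-Wald statistic for a one-time change in the mean of y.

    Scans candidate break dates in the trimmed middle of the sample and
    returns the largest Wald statistic with the implied break date.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    best_stat, best_k = -np.inf, None
    for k in range(int(trim * T), int((1 - trim) * T)):
        m1, m2 = y[:k].mean(), y[k:].mean()
        rss = ((y[:k] - m1) ** 2).sum() + ((y[k:] - m2) ** 2).sum()
        s2 = rss / (T - 2)                      # residual variance under the break
        wald = (m1 - m2) ** 2 / (s2 * (1 / k + 1 / (T - k)))
        if wald > best_stat:
            best_stat, best_k = wald, k
    return best_stat, best_k

# Toy data: mean shifts from 0 to 1 at t = 100.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(sup_wald_mean_break(y))                   # estimated break date near 100
```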
  2. By: Pang, Tianxiao; Du, Lingjie; Chong, Terence Tai Leung
    Abstract: Chong (1995) and Bai (1997) proposed a sample splitting method to estimate a multiple-break model. However, their studies focused on stationary time series models, where the identification of the first break depends on the magnitude and the duration of the break, and a testing procedure is needed to assist the estimation of the remaining breaks in subsamples split by the break points found earlier. In this paper, we focus on nonstationary multiple-break autoregressive models. Unlike the stationary case, we show that the duration of a break does not affect whether it is identified first. Rather, identification depends on the stochastic order of magnitude of the signal strength of the break when the break magnitude is constant, and also on the square of the break magnitude when it is shrinking. Since the subsamples usually have different stochastic orders in nonstationary autoregressive models with breaks, one can therefore determine which break will be identified first. We apply this finding to the models proposed in Phillips and Yu (2011), Phillips et al. (2011) and Phillips et al. (2015a, 2015b). We provide an estimation procedure as well as the asymptotic theory for the model.
    Keywords: Change point, Financial bubble, Least squares estimator, Mildly explosive, Mildly integrated.
    JEL: C22
    Date: 2018–08–23
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:92074&r=all
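    A toy version of the sample-splitting idea in an AR(1), assuming least-squares estimation of one break date at a time; the recursion hint in the comments is a simplification, not the paper's identification argument.
```python
import numpy as np

def ar1_break_date(y, min_seg=20):
    """Least-squares break-date estimate in a (possibly nonstationary) AR(1):
    fit y_t = rho * y_{t-1} + e_t on each side of every candidate split and
    pick the split minimizing the total sum of squared residuals."""
    def ssr(seg):
        x, z = seg[:-1], seg[1:]
        rho = (x @ z) / (x @ x)
        return ((z - rho * x) ** 2).sum()
    return min(range(min_seg, len(y) - min_seg),
               key=lambda k: ssr(y[:k]) + ssr(y[k:]))

# Toy data: mildly explosive regime, then a stationary one (break at t = 150).
rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(1, 300):
    rho = 1.02 if t < 150 else 0.7
    y[t] = rho * y[t - 1] + rng.normal()
k1 = ar1_break_date(y)
print(k1)                                       # first estimated break, near 150
# Sample-splitting step: re-apply the estimator within y[:k1] and y[k1:]
# to search for further breaks, as in the sequential procedure.
```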
  3. By: Matteo Barigozzi; Marc Hallin; Stefano Soccorsi
    Abstract: Ripple effects in financial markets associated with crashes, systemic risk and contagion are characterized by non-trivial lead-lag dynamics that are crucial for understanding how crises spread and, therefore, central in risk management. In the spirit of Diebold and Yilmaz (2014), we investigate connectedness among financial firms via an analysis of impulse response functions of adjusted intraday log-ranges to market shocks, using network theory methods. Motivated by overwhelming evidence that the interdependence structure of financial markets varies over time, we base that analysis on the so-called time-varying General Dynamic Factor Model proposed by Eichler et al. (2011), which extends to the locally stationary context the framework developed by Forni et al. (2000) under stationarity assumptions. The estimation methods in Eichler et al. (2011), however, present the major drawback of involving two-sided filters, which make it impossible to recover impulse response functions. We therefore introduce a novel approach extending to the time-varying context the one-sided method of Forni et al. (2017). The resulting estimators of time-varying impulse response functions are shown to be consistent, hence can be used in the analysis of (time-varying) connectedness. Our empirical analysis on a large and strongly comoving panel of intraday price ranges of US stocks indicates that large increases in mid- to long-run connectedness are associated with the main financial turmoils.
    Keywords: Dynamic factor models, volatility, financial crises, contagion, financial connectedness, high-dimensional time series, panel data, time-varying models, local stationarity.
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/283963&r=all
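    The connectedness side of the paper builds on Diebold and Yilmaz (2014). As a rough, fixed-coefficient stand-in for the time-varying factor machinery (which this sketch does not attempt), a generalized forecast-error variance decomposition table for a stationary VAR(1) can be computed as follows; all names and parameter values are illustrative.
```python
import numpy as np

def connectedness_table(A, Sigma, H=10):
    """Diebold-Yilmaz-style connectedness sketch for the VAR(1)
    y_t = A y_{t-1} + u_t with Var(u_t) = Sigma: generalized forecast-error
    variance decompositions at horizon H, row-normalized so that entry
    (i, j) is the share of i's H-step variance attributable to shocks in j."""
    n = len(A)
    Phis = [np.linalg.matrix_power(A, h) for h in range(H)]   # MA coefficients
    num = np.zeros((n, n))
    den = np.zeros(n)
    for Phi in Phis:
        PS = Phi @ Sigma
        num += (PS ** 2) / np.diag(Sigma)      # generalized FEVD numerators
        den += np.diag(PS @ Phi.T)             # H-step forecast-error variances
    table = num / den[:, None]
    return table / table.sum(axis=1, keepdims=True)

# Toy two-variable system: variable 1 loads on lagged variable 2.
A = np.array([[0.5, 0.3], [0.0, 0.6]])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
print(np.round(connectedness_table(A, Sigma), 2))
```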
  4. By: Nick James; Roman Marchant; Richard Gerlach; Sally Cripps
    Abstract: Discrimination between non-stationarity and long-range dependency is a difficult and long-standing issue in modelling financial time series. This paper uses an adaptive spectral technique which jointly models the non-stationarity and dependency of financial time series in a non-parametric fashion, assuming that the time series consists of a finite but unknown number of locally stationary processes, the locations of which are also unknown. The model allows a non-parametric estimate of the dependency structure by modelling the auto-covariance function in the spectral domain. All our estimates are made within a Bayesian framework, where we use a Reversible Jump Markov Chain Monte Carlo algorithm for inference. We study the frequentist properties of our estimates via a simulation study, and present a novel way of generating time series data from a nonparametric spectrum. Results indicate that our techniques perform well across a range of data generating processes. We apply our method to a number of real examples and our results indicate that several financial time series exhibit both long-range dependency and non-stationarity.
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1902.03350&r=all
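    One ingredient that is easy to illustrate is generating a Gaussian series from a given spectral density by spectral synthesis. This is a generic sketch, not the paper's algorithm, and the normalization is only approximate at the boundary frequencies.
```python
import numpy as np

def simulate_from_spectrum(spec_fn, T, seed=0):
    """Draw a zero-mean Gaussian series whose spectral density on [-pi, pi]
    is spec_fn, by sampling complex-normal Fourier coefficients with
    variance 2*pi*T*f(w_k) and inverting the FFT."""
    rng = np.random.default_rng(seed)
    w = 2 * np.pi * np.fft.rfftfreq(T)              # frequencies in [0, pi]
    sd = np.sqrt(2 * np.pi * T * spec_fn(w) / 2)    # std per real/imag component
    coefs = sd * (rng.normal(size=len(w)) + 1j * rng.normal(size=len(w)))
    coefs[0] = 0.0                                  # enforce a zero mean
    if T % 2 == 0:
        coefs[-1] = coefs[-1].real * np.sqrt(2)     # Nyquist term must be real
    return np.fft.irfft(coefs, n=T)

# Test case: AR(1) spectral density with phi = 0.8 (illustrative only).
ar1_spec = lambda w: 1.0 / (2 * np.pi * np.abs(1 - 0.8 * np.exp(-1j * w)) ** 2)
x = simulate_from_spectrum(ar1_spec, T=4096, seed=2)
print(round(np.corrcoef(x[:-1], x[1:])[0, 1], 2))   # lag-1 autocorrelation near 0.8
```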
  5. By: Paolo Andreini (University of Rome "Tor Vergata"); Donato Ceci (University of Rome "Tor Vergata" & Bank of Italy)
    Abstract: In this paper, we study the predictive power of dense and sparse estimators in a high dimensional space. We propose a new forecasting method, called Elastically Weighted Principal Components Analysis (EWPCA), that selects the variables, with respect to the target variable, taking into account the collinearity among the data using the Elastic Net soft thresholding. Then, we weight the selected predictors using the Elastic Net regression coefficients, and we finally apply principal component analysis to the new “elastically” weighted data matrix. We compare this method to common benchmarks and other methods for forecasting macroeconomic variables in a data-rich environment, divided into dense representations, such as Dynamic Factor Models and Ridge regressions, and sparse representations, such as LASSO regression. All these models are adapted to take into account the linear dependency of the macroeconomic time series. Moreover, to estimate the hyperparameters of these models, including the EWPCA, we propose a new procedure called “brute force”. This method allows us to treat all the hyperparameters of the model uniformly and to take the longitudinal feature of the time-series data into account. Our findings can be summarized as follows. First, the “brute force” method to estimate the hyperparameters is more stable and gives better forecasting performance, in terms of MSFE, than the traditional criteria used in the literature to tune the hyperparameters. This result holds for all sample sizes and forecasting horizons. Secondly, our two-step forecasting procedure enhances the forecasts’ interpretability. Lastly, the EWPCA leads to better forecasting performance, in terms of mean square forecast error (MSFE), than the other sparse and dense methods or a naïve benchmark, at different forecast horizons and sample sizes.
    Keywords: Variable selection, High-dimensional time series, Dynamic factor models, Shrinkage methods, Cross-validation
    JEL: C22 C52 C53
    Date: 2019–02–14
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:452&r=all
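    A minimal reading of the two-step EWPCA idea as described in the abstract, assuming scikit-learn; the function name and tuning constants are placeholders (the paper tunes hyperparameters with its “brute force” procedure, not reproduced here).
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import StandardScaler

def ewpca_factors(X, y, n_factors=3, alpha=0.1, l1_ratio=0.5):
    """Two-step sketch: (1) an elastic-net regression of the target on all
    standardized predictors drops those with zero coefficients; (2) the
    survivors are scaled by |coefficient| and PCA is applied to the
    'elastically' weighted matrix."""
    Xs = StandardScaler().fit_transform(X)
    coef = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(Xs, y).coef_
    keep = coef != 0
    Xw = Xs[:, keep] * np.abs(coef[keep])          # weighted, selected predictors
    return PCA(n_components=min(n_factors, int(keep.sum()))).fit_transform(Xw)

# Toy data: only the first 5 of 50 predictors matter.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=200)
f = ewpca_factors(X, y)
print(f.shape)          # factors to feed into a downstream forecasting regression
```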
  6. By: Harvey, A.; Liao, Y.
    Abstract: Score-driven models provide a solution to the problem of modelling time series when the observations are subject to censoring and location and/or scale may change over time. The method applies to generalized-t and EGB2 distributions, as well as to the normal distribution. A set of Monte Carlo experiments shows that the score-driven model provides good forecasts even when the true model is parameter-driven. The viability of the new models is illustrated by fitting them to data on Chinese stock returns.
    Keywords: Censored distributions, dynamic conditional score model, EGARCH models, logistic distribution, generalized t distribution
    JEL: C22 C24
    Date: 2019–02–02
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1913&r=all
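    A stripped-down score-driven location filter for left-censored Gaussian data, as a sketch of the dynamic Tobit idea; the paper also covers generalized-t and EGB2 distributions and time-varying scale, and all parameter values below are illustrative.
```python
import numpy as np
from scipy.stats import norm

def tobit_dcs_filter(y, c=0.0, sigma=1.0, omega=0.0, phi=0.95, kappa=0.1):
    """Score-driven location filter for observations left-censored at c.

    The location mu evolves as an AR(1) driven by the score of the
    censored-normal log-likelihood: the usual Gaussian score when y_t is
    above the censoring point, the inverse-Mills-ratio term when censored.
    """
    mu = np.empty(len(y) + 1)
    mu[0] = omega
    for t, yt in enumerate(y):
        if yt > c:                                # uncensored observation
            score = (yt - mu[t]) / sigma ** 2
        else:                                     # observation censored at c
            z = (c - mu[t]) / sigma
            score = -norm.pdf(z) / (sigma * norm.cdf(z))
        mu[t + 1] = omega + phi * (mu[t] - omega) + kappa * score
    return mu[:-1]

# Toy data: slowly moving latent location, censored at zero.
rng = np.random.default_rng(4)
latent = np.cumsum(rng.normal(0, 0.05, 500))
y = np.maximum(rng.normal(latent, 1.0), 0.0)
print(tobit_dcs_filter(y)[:5])                    # filtered location estimates
```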
  7. By: Federico Carlini (Università della Svizzera italiana); Paolo Santucci de Magistris (LUISS University and Aarhus University and CREATES)
    Abstract: We study the theoretical properties of the model for fractional cointegration proposed by Granger (1986), namely the FVECM_{d,b}. First, we show that the stability of any discrete-time stochastic system of the type Pi(L)Y_t = e_t can be assessed by means of the argument principle under mild regularity conditions on Pi(L), where L is the lag operator. Second, we prove that, under stability, the FVECM_{d,b} allows for a representation of the solution that demonstrates the fractional and co-fractional properties, and we find a closed-form expression for the impulse response functions. Third, we prove that the model is identified for any combination of number of lags and cointegration rank, while still being able to generate polynomial co-fractionality. In light of these properties, we show that the asymptotic properties of the maximum likelihood estimator reconcile with those of the FCVAR_{d,b} model studied in Johansen and Nielsen (2012). Finally, an empirical illustration is provided.
    Keywords: Fractional cointegration, Granger representation theorem, Stability, Identification, Impulse Response Functions, Profile Maximum Likelihood
    JEL: C01 C02 C58 G12 G13
    Date: 2019–01–02
    URL: http://d.repec.org/n?u=RePEc:aah:create:2019-02&r=all
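    The argument-principle stability check is easy to illustrate in the simplest scalar case: count the zeros of the characteristic function inside the unit circle as the winding number of its image of the circle. The sketch below is generic; the conditions for the FVECM_{d,b} itself are more delicate.
```python
import numpy as np

def zeros_inside_unit_circle(f, n_grid=4096):
    """Argument principle: the number of zeros of an analytic function f
    inside the unit circle equals the winding number of f(e^{i*theta})
    around the origin (assuming no zeros on the circle itself)."""
    z = np.exp(2j * np.pi * np.arange(n_grid) / n_grid)
    phase = np.angle(f(z))
    d = np.diff(np.r_[phase, phase[:1]])
    d = (d + np.pi) % (2 * np.pi) - np.pi       # wrap increments to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

# Pi(z) = 1 - 0.5 z: root at z = 2, outside the circle -> stable AR(1).
print(zeros_inside_unit_circle(lambda z: 1 - 0.5 * z))   # 0
# Pi(z) = 1 - 1.5 z: root at z = 2/3, inside the circle -> unstable.
print(zeros_inside_unit_circle(lambda z: 1 - 1.5 * z))   # 1
```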
  8. By: Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos; Touche, Nassim
    Abstract: We propose a novel class of count time series models, the mixed Poisson integer-valued stochastic volatility models. The proposed specification, which can be considered as an integer-valued analogue of the discrete-time stochastic volatility model, encompasses a wide range of conditional distributions of counts. We study its probabilistic structure and develop an easily adaptable Markov chain Monte Carlo algorithm, based on the Griddy-Gibbs approach, that can accommodate any conditional distribution that belongs to that class. We demonstrate this by considering the cases of the Poisson and negative binomial distributions. The methodology is applied to simulated and real data.
    Keywords: Griddy-Gibbs, Markov chain Monte Carlo, mixed Poisson parameter-driven models, stochastic volatility, integer-valued GARCH.
    JEL: C13 C15 C32 C35 C58
    Date: 2019–02–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91962&r=all
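    A sketch of the model class and of a single Griddy-Gibbs draw, assuming a Poisson conditional distribution, a known latent path and a flat prior on the persistence parameter; the paper's full MCMC updates all parameters and the latent volatilities.
```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the model class: y_t | lambda_t ~ Poisson(lambda_t), where the
# log-intensity h_t = log(lambda_t) follows a Gaussian AR(1).
T, mu, phi, sig = 500, 1.0, 0.9, 0.3
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = rng.poisson(np.exp(h))

# One Griddy-Gibbs update of phi given the latent path h: evaluate the
# conditional posterior on a grid, then sample by inverse CDF on the grid.
grid = np.linspace(-0.99, 0.99, 199)
resid = h[1:, None] - mu - grid * (h[:-1, None] - mu)
logpost = -0.5 * (resid ** 2).sum(axis=0) / sig ** 2    # flat prior on (-1, 1)
w = np.exp(logpost - logpost.max())
phi_draw = grid[np.searchsorted(np.cumsum(w) / w.sum(), rng.uniform())]
print(round(phi_draw, 2))                               # should sit near 0.9
```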
  9. By: Pierre Perron (Boston University); Yohei Yamamoto (Hitotsubashi University)
    Abstract: We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests be carried out with a fixed scheme to attain the best power. This ensures a maximum difference between the fitted in-sample and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi’s (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in- and out-of-sample periods. Since an actual forecast failure need not occur exactly at m, we consider a variety of tests: maximizing the GR test over values of m within a pre-specified range; a Double sup-Wald (DSW) test which, for each m, performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald (TLSW) and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and empirical applications illustrate the relevance of our findings in practice.
    Keywords: forecast breakdown, non-monotonic power, structural change, out-of-sample forecast
    JEL: C14 C22
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2019-003&r=all
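    A toy version of the DSW construction described above: for each candidate split m, a sup-Wald over out-of-sample break dates, maximized over m. The loss series, trimming and m-grid below are illustrative, and critical values must come from simulation.
```python
import numpy as np

def wald_mean_change(x, k):
    """Wald statistic for a change in the mean of x at position k."""
    m1, m2 = x[:k].mean(), x[k:].mean()
    rss = ((x[:k] - m1) ** 2).sum() + ((x[k:] - m2) ** 2).sum()
    s2 = rss / (len(x) - 2)
    return (m1 - m2) ** 2 / (s2 * (1 / k + 1 / (len(x) - k)))

def double_sup_wald(losses, m_grid, trim=0.15):
    """DSW-style statistic: for each split m between in- and out-of-sample
    losses, take a sup-Wald over break dates in the out-of-sample part,
    then maximize over m."""
    stats = []
    for m in m_grid:
        out = losses[m:]
        lo, hi = int(trim * len(out)), int((1 - trim) * len(out))
        stats.append(max(wald_mean_change(out, k) for k in range(lo, hi)))
    return max(stats)

# Toy loss series whose mean deteriorates late in the sample.
rng = np.random.default_rng(6)
losses = np.concatenate([rng.normal(1.0, 1, 150), rng.normal(1.8, 1, 50)])
print(round(double_sup_wald(losses, m_grid=range(80, 121, 10)), 2))
```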
  10. By: Spiliotis, Evangelos; Petropoulos, Fotios; Kourentzes, Nikolaos; Assimakopoulos, Vassilios
    Abstract: Achieving high accuracy in load forecasting requires the selection of appropriate forecasting models, able to capture the special characteristics of energy consumption time series. When hierarchies of load from different sources are considered together, the complexity increases further; for example, when forecasting both at system and region level. Not only is the model selection problem expanded to multiple time series, but we also require aggregation consistency of the forecasts across levels. Although hierarchical forecasting can address the aggregation consistency concerns, it does not resolve the model selection uncertainty. To address this, we rely on Multiple Temporal Aggregation, which has been shown to mitigate the model selection problem for low-frequency time series. We propose a modification for high-frequency time series and combine conventional cross-sectional hierarchical forecasting with multiple temporal aggregation. The effect of incorporating temporal aggregation in hierarchical forecasting is empirically assessed using a real data set from five bank branches, demonstrating superior accuracy, aggregation consistency and reliable automatic forecasting.
    Keywords: Temporal aggregation; Hierarchical forecasting; Electricity load; Exponential smoothing; MAPA
    JEL: C4 C53 D8 D81 L94
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91762&r=all
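    The multiple-temporal-aggregation ingredient can be sketched in a few lines: forecast non-overlapping temporal aggregates of the series with simple exponential smoothing and combine the implied base-frequency rates. This ignores the cross-sectional hierarchy the paper also reconciles; aggregation levels and the smoothing constant are illustrative.
```python
import numpy as np

def ses(x, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = x[0]
    for v in x[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

def mta_forecast(y, levels=(1, 2, 4, 8)):
    """Multiple-temporal-aggregation sketch: forecast the series at several
    aggregation levels and average the implied per-period rates. Combining
    across levels hedges against choosing the wrong model at any single
    frequency."""
    per_period = []
    for k in levels:
        n = (len(y) // k) * k
        agg = y[len(y) - n:].reshape(-1, k).sum(axis=1)   # non-overlapping sums
        per_period.append(ses(agg) / k)                   # back to the base rate
    return np.mean(per_period)

rng = np.random.default_rng(7)
y = 10 + rng.normal(0, 1, 240)
print(round(mta_forecast(y), 2))    # approx. 10, the per-period level
```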
  11. By: Ozer Karagedikli; Shaun P. Vahey; Elizabeth C. Wakerly
    Abstract: Many studies have found that combining forecasts improves predictive accuracy. An often-used approach developed by Granger and Ramanathan (GR, 1984) utilises a linear-Gaussian regression model to combine point forecasts. This paper generalises their approach for an asymmetrically distributed target variable. Our copula point forecast combination methodology involves fitting marginal distributions for the target variable and the individual forecasts being combined, and then estimating the correlation parameters capturing linear dependence between the target and the experts’ predictions. If the target variable and the experts’ predictions are individually Gaussian distributed, our copula point combination reproduces the GR combination. We illustrate our methodology with two applications examining quarterly forecasts for the Federal Funds rate and for US output growth, respectively. The copula point combinations outperform the forecasts from the individual experts in both applications, with gains in root mean squared forecast error in the region of 40% for the Federal Funds rate and 4% for output growth relative to the GR combination. The fitted marginal distribution for the interest rate exhibits strong asymmetry.
    Keywords: Forecast combination, Copula modelling, Interest rates, Vulnerable economic growth
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-15&r=all
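    A minimal Gaussian-copula point combination in the spirit described above, using rank-based marginals and an in-sample back-transform; the paper's implementation (fitted parametric marginals, out-of-sample evaluation) is richer, and all names below are illustrative.
```python
import numpy as np
from scipy.stats import norm

def normal_scores(x):
    """Rank-based probability-integral transform to standard-normal scores."""
    ranks = x.argsort().argsort() + 1
    return norm.ppf(ranks / (len(x) + 1))

def copula_combine(y, F):
    """Gaussian-copula point combination (in-sample sketch).

    y : (T,) realized target; F : (T, k) individual point forecasts.
    Transform everything to normal scores, estimate the linear (Gaussian)
    conditional mean by OLS, and map the fitted scores back through the
    empirical quantile function of the target. With Gaussian marginals
    this collapses to the Granger-Ramanathan regression combination."""
    zy = normal_scores(y)
    Z = np.column_stack([normal_scores(F[:, j]) for j in range(F.shape[1])])
    Z1 = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z1, zy, rcond=None)
    return np.quantile(y, norm.cdf(Z1 @ beta))   # back to the target's scale

# Toy data: asymmetric (gamma) target, two noisy experts.
rng = np.random.default_rng(8)
y = rng.gamma(2.0, 1.0, 300)
F = np.column_stack([y + rng.normal(0, s, 300) for s in (0.5, 1.0)])
print(np.round(copula_combine(y, F)[:5], 2))
```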
  12. By: Kenichiro McAlinn; Knut Are Aastveit; Jouchi Nakajima; Mike West
    Abstract: We present new methodology and a case study in the use of a class of Bayesian predictive synthesis (BPS) models for multivariate time series forecasting. This extends the foundational BPS framework to the multivariate setting, with detailed application in the topical and challenging context of multi-step macroeconomic forecasting in a monetary policy setting. BPS evaluates, sequentially and adaptively over time, varying forecast biases and facets of miscalibration of individual forecast densities for multiple time series and, critically, their time-varying interdependencies. We define BPS methodology for a new class of dynamic multivariate latent factor models implied by BPS theory. Structured dynamic latent factor BPS is here motivated by the application context: sequential forecasting of multiple US macroeconomic time series with forecasts generated from several traditional econometric time series models. The case study highlights the potential of BPS to improve forecasts of multiple series at multiple forecast horizons, and its use in learning dynamic relationships among forecasting models or agents.
    Keywords: Agent opinion analysis, Bayesian forecasting, Dynamic latent factor models, Dynamic SURE models, Macroeconomic forecasting, Multivariate density forecast combination
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0073&r=all
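    A scalar caricature of dynamic synthesis: combine agent point forecasts with time-varying weights that follow a random walk, tracked by a Kalman filter. The actual BPS framework synthesizes whole forecast densities with dynamic latent factors; everything below is an illustrative simplification.
```python
import numpy as np

def dynamic_synthesis_filter(y, F, q=1e-3, r=1.0):
    """Combine agent forecasts F (T x k) for target y with time-varying
    weights theta_t following a random walk, via a standard Kalman filter.
    q and r are illustrative state- and observation-noise variances."""
    T, k = F.shape
    X = np.column_stack([np.ones(T), F])       # intercept plus agent forecasts
    theta = np.zeros(k + 1)
    P = np.eye(k + 1)
    fitted = np.empty(T)
    for t in range(T):
        P = P + q * np.eye(k + 1)              # random-walk state evolution
        x = X[t]
        fitted[t] = x @ theta                  # one-step synthesized forecast
        S = x @ P @ x + r
        K = P @ x / S
        theta = theta + K * (y[t] - fitted[t]) # update weights once y_t is seen
        P = P - np.outer(K, x @ P)
    return fitted

# Toy data: two experts with different noise levels tracking a slow cycle.
rng = np.random.default_rng(9)
T = 300
truth = np.sin(np.arange(T) / 20)
F = np.column_stack([truth + rng.normal(0, s, T) for s in (0.3, 0.6)])
y = truth + rng.normal(0, 0.2, T)
print(round(np.mean((dynamic_synthesis_filter(y, F) - y) ** 2), 3))
```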
  13. By: Susanne M. Schennach (Institute for Fiscal Studies and Brown University)
    Abstract: Many time series exhibit “long memory”: their autocorrelation function decays slowly with lag. This behavior has traditionally been modeled via unit roots or fractional Brownian motion and explained via aggregation of heterogeneous processes, nonlinearity, learning dynamics, regime switching or structural breaks. This paper identifies a different and complementary mechanism for long memory generation by showing that it can naturally arise when a large number of simple linear homogeneous economic subsystems with short memory are interconnected to form a network such that the outputs of the subsystems are fed into the inputs of others. This networking picture yields a type of aggregation that is not merely additive, resulting in a collective behavior that is richer than that of individual subsystems. Interestingly, the long memory behavior is found to be almost entirely determined by the geometry of the network, while being relatively insensitive to the specific behavior of individual agents.
    Keywords: Long memory, fractionally integrated processes, spectral dimension, networks, fractals.
    Date: 2018–07–27
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:49/18&r=all
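    The mechanism is easy to demo: wire many short-memory linear nodes into a random network, feed outputs into inputs, and inspect the autocorrelations of the aggregate output. The network below is a toy (the paper ties the decay law to the network's geometry, its spectral dimension); sizes and sparsity are illustrative.
```python
import numpy as np

rng = np.random.default_rng(10)

# Random sparse network of linear subsystems; rows are normalized so the
# system is stable (spectral radius bounded by the maximum row sum, 0.95).
n, T = 200, 5000
A = (rng.uniform(size=(n, n)) < 0.02).astype(float)
A *= 0.95 / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

x = np.zeros(n)
agg = np.empty(T)
for t in range(T):
    x = A @ x + rng.normal(size=n)     # each node: simple linear dynamics
    agg[t] = x.mean()                  # aggregate (network) output

# Sample autocorrelations of the aggregate; the decay pattern reflects the
# geometry of the network rather than any single node's dynamics.
acf = [np.corrcoef(agg[:-k], agg[k:])[0, 1] for k in (1, 5, 20, 50, 100)]
print(np.round(acf, 3))
```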
  14. By: Hai-Chuan Xu (ECUST); Gao-Feng Gu (ECUST); Wei-Xing Zhou (ECUST)
    Abstract: In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent $\tau(q)$ is related to the partition function and the multifractal spectrum $f(\alpha)$ can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional $p$-model, the two-dimensional $p$-model and fractional Brownian motions. We find that both approaches have comparable performance in unveiling the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum $f(\alpha)$ can be directly determined using the new approach at a lower computational cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1902.04437&r=all
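    For reference, the traditional MF-DMA estimate of $h(q)$ (from which $\tau(q) = q h(q) - 1$) can be sketched as follows with a backward moving average; the paper's contribution, the direct determination of $f(\alpha)$ via a canonical measure, is not reproduced here.
```python
import numpy as np

def mfdma_hq(x, q_list, scales):
    """Traditional MF-DMA sketch (backward moving average): h(q) is the
    slope of log F_q(n) against log n over the chosen scales."""
    profile = np.cumsum(x - x.mean())
    Fv_by_scale = []
    for n in scales:
        ma = np.convolve(profile, np.ones(n) / n, mode="valid")
        eps = profile[n - 1:] - ma                       # detrended residuals
        segs = eps[: (len(eps) // n) * n].reshape(-1, n)
        Fv_by_scale.append(np.sqrt((segs ** 2).mean(axis=1)))
    hq = []
    for q in q_list:
        logF = [np.log((Fv ** q).mean() ** (1 / q)) if q != 0
                else np.log(Fv).mean() for Fv in Fv_by_scale]
        hq.append(np.polyfit(np.log(scales), logF, 1)[0])
    return np.array(hq)

# Monofractal test case: white noise, so h(q) should be near 0.5 for all q.
rng = np.random.default_rng(11)
x = rng.normal(size=2 ** 14)
print(np.round(mfdma_hq(x, [-2, 2], [16, 32, 64, 128, 256]), 2))
```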
  15. By: Raffaella Giacomini (Institute for Fiscal Studies and cemmap and UCL); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London)
    Abstract: This paper reconciles the asymptotic disagreement between Bayesian and frequentist inference in set-identified models by adopting a multiple-prior (robust) Bayesian approach. We propose new tools for Bayesian inference in set-identified models. We show that these tools have a well-defined posterior interpretation in finite samples and are asymptotically valid from the frequentist perspective. The main idea is to construct a prior class that removes the source of the disagreement: the need to specify an unrevisable prior. The corresponding class of posteriors can be summarized by reporting the ‘posterior lower and upper probabilities’ of a given event and/or the ‘set of posterior means’ and the associated ‘robust credible region’. We show that the set of posterior means is a consistent estimator of the true identified set and the robust credible region has the correct frequentist asymptotic coverage for the true identified set if it is convex. Otherwise, the method can be interpreted as providing posterior inference about the convex hull of the identified set. For impulse-response analysis in set-identified Structural Vector Autoregressions, the new tools can be used to overcome or quantify the sensitivity of standard Bayesian inference to the choice of an unrevisable prior.
    Keywords: multiple priors, identified set, credible region, consistency, asymptotic coverage, identifying restrictions, impulse-response analysis
    Date: 2018–11–07
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:61/18&r=all
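    For an interval-identified scalar, the objects the paper proposes to report are easy to sketch from posterior draws of the identified-set endpoints; the equal-tail region below is a crude simplification of the paper's (smallest) robust credible region, and the toy mapping is illustrative.
```python
import numpy as np

def robust_bayes_summary(l_draws, u_draws, alpha=0.1):
    """Multiple-prior (robust) Bayes sketch for an interval-identified scalar.

    Given posterior draws of the identified-set endpoints [l(theta), u(theta)]
    (functions of the point-identified reduced-form parameter theta), report
    the set of posterior means and a robust credible region: an interval
    containing the identified set with posterior probability about 1 - alpha.
    """
    post_mean_set = (l_draws.mean(), u_draws.mean())
    # Crude equal-tail construction; the paper targets the smallest region.
    region = (np.quantile(l_draws, alpha / 2), np.quantile(u_draws, 1 - alpha / 2))
    return post_mean_set, region

# Toy example: theta has posterior N(1, 0.1^2); identified set [theta-1, theta+1].
rng = np.random.default_rng(12)
theta = rng.normal(1.0, 0.1, 10000)
print(robust_bayes_summary(theta - 1.0, theta + 1.0))
```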
  16. By: Paulo Parente (Institute for Fiscal Studies); Richard J. Smith (Institute for Fiscal Studies)
    Abstract: This article introduces and investigates the properties of a new bootstrap method for time-series data, the kernel block bootstrap. The bootstrap method, although akin to the tapered block bootstrap of Paparoditis and Politis (2001), offers an improvement over it by admitting kernels with unbounded support. Given a suitable choice of kernel, a kernel block bootstrap estimator of the spectrum at zero that is asymptotically close to the optimal Parzen (1957) estimator is possible. The paper shows the large-sample validity of the kernel block bootstrap and derives the higher-order bias and variance of the kernel block bootstrap variance estimator. Like the tapered block bootstrap variance estimator, the kernel block bootstrap estimator has a favourable higher-order bias property. Simulations based on the designs of Paparoditis and Politis (2001) indicate that the kernel block bootstrap may be efficacious in practice.
    Keywords: Bias; Confidence interval; Resampling; Kernel function; Spectral density estimation; Time series; Variance estimation
    Date: 2018–07–25
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:48/18&r=all
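    A sketch of the weighted-block idea with a triangular (Bartlett-type) taper, i.e. closer to the tapered block bootstrap than to the paper's unbounded-support kernels; normalizing the weights to unit root-mean-square keeps the variance estimator on the right scale. Block length and replication counts are illustrative.
```python
import numpy as np

def kernel_block_bootstrap_var(x, block_len=25, n_boot=2000, seed=0):
    """Kernel-weighted block bootstrap estimate of T * Var(sample mean).

    Blocks of length block_len are drawn with replacement; observations
    inside each block are weighted by a triangular taper rescaled to unit
    root-mean-square, downweighting the block edges."""
    rng = np.random.default_rng(seed)
    T, l = len(x), block_len
    u = (np.arange(l) + 0.5) / l
    w = 1.0 - np.abs(2.0 * u - 1.0)                  # triangular taper
    w *= np.sqrt(l / (w ** 2).sum())                 # unit-RMS normalization
    xc = x - x.mean()
    n_blocks = T // l
    starts = rng.integers(0, T - l + 1, size=(n_boot, n_blocks))
    means = np.array([
        np.concatenate([w * xc[s:s + l] for s in row]).mean()
        for row in starts
    ])
    return (n_blocks * l) * means.var()

# MA(1) test case: the long-run variance is (1 + 0.7)^2 = 2.89.
rng = np.random.default_rng(13)
e = rng.normal(size=5001)
x = e[1:] + 0.7 * e[:-1]
print(round(kernel_block_bootstrap_var(x), 2))
```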
  17. By: Achraf Bahamou; Maud Doumergue; Philippe Donnat
    Abstract: Targeting a better understanding of credit market dynamics, the authors have studied a stochastic model named the Hawkes process. Describing trades arrival times, this kind of model allows for the capture of self-excitement and mutual interaction phenomena. The authors propose here a simple yet conclusive method for fitting multidimensional Hawkes processes with exponential kernels, based on maximum likelihood non-convex optimization. The method was successfully tested on simulated data, then used on new publicly available real trading data for three European credit indices, thus enabling quantification of self-excitement as well as volume impacts or cross-index influences.
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1902.03714&r=all
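    A self-contained sketch of exponential-kernel Hawkes fitting by maximum likelihood in one dimension, using the standard O(n) recursion and Ogata thinning for simulation; the paper works with multidimensional processes on real credit-index trades, and all parameter values here are illustrative.
```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, t, T):
    """Negative log-likelihood of a 1-D Hawkes process with intensity
    mu + sum_{t_j < t} alpha * exp(-beta * (t - t_j)), via the O(n)
    recursion R_i = exp(-beta * dt_i) * (R_{i-1} + 1)."""
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0 or alpha >= beta:  # enforce stationarity
        return np.inf
    R, loglik = 0.0, 0.0
    for i in range(len(t)):
        if i > 0:
            R = np.exp(-beta * (t[i] - t[i - 1])) * (R + 1.0)
        loglik += np.log(mu + alpha * R)
    loglik -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - t)))
    return -loglik

def intensity(t, events, mu, alpha, beta):
    if not events:
        return mu
    return mu + alpha * np.exp(-beta * (t - np.array(events))).sum()

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning: the intensity just after the current time bounds the
    intensity until the next event, since the kernel only decays."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        if rng.uniform() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=500.0, seed=14)
fit = minimize(hawkes_neg_loglik, x0=[0.3, 0.5, 1.0],
               args=(events, 500.0), method="Nelder-Mead")
print(np.round(fit.x, 2))   # estimates should be near (0.5, 0.8, 1.5)
```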
  18. By: Verena Monschang; Bernd Wilfling
    Abstract: In this paper we analyze the performance of supremum augmented Dickey-Fuller (SADF), generalized SADF (GSADF), and backward SADF (BSADF) tests, as introduced by Phillips et al. (International Economic Review 56:1043-1078, 2015) for detecting and date-stamping financial bubbles. In Monte Carlo simulations, we show that the SADF and GSADF tests may reveal substantial size distortions under typical financial-market characteristics (like the empirically well-documented leverage effect). We consider the rational bubble specification suggested by Rotermann and Wilfling (Applied Economics Letters 25:1091-1096, 2018) that is able to generate realistic stock-price dynamics (in terms of level trajectories and volatility paths). Simulating stock-price trajectories that contain these parametric bubbles, we demonstrate that the SADF and GSADF tests can have extremely low power under a wide range of bubble-parameter constellations. In an empirical analysis, we use NASDAQ data covering a time span of 45 years and find that the outcomes of the bubble date-stamping procedure (based on the BSADF test) are sensitive to the data frequency chosen by the econometrician.
    Keywords: Stock markets, present-value model, rational bubble, explosiveness, SADF and GSADF tests, bubble detection, date-stamping
    JEL: C15 C32 C58 G15
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:7819&r=all
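    The SADF statistic itself is short to sketch with statsmodels: the supremum of ADF t-statistics over forward-expanding windows anchored at the first observation. Right-tailed critical values, and the GSADF/BSADF variants, require simulation and are omitted; the minimum window fraction and lag choice below are illustrative.
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf(y, r0=0.3, step=5):
    """SADF statistic: sup of ADF t-statistics over expanding windows
    y[0:k], for k from the minimum window int(r0 * T) up to T."""
    T = len(y)
    stats = [adfuller(y[:k], maxlag=1, regression="c", autolag=None)[0]
             for k in range(int(r0 * T), T + 1, step)]
    return max(stats)

# Toy price path: a random-walk segment followed by a mildly explosive one.
rng = np.random.default_rng(15)
rw = np.cumsum(rng.normal(size=150))
bubble = [rw[-1]]
for _ in range(50):
    bubble.append(1.03 * bubble[-1] + rng.normal())
y = np.concatenate([rw, bubble[1:]])
print(round(sadf(y), 2))   # large positive values indicate explosiveness
```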

This nep-ets issue is ©2019 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.