nep-ets New Economics Papers
on Econometric Time Series
Issue of 2018‒12‒24
Eighteen papers chosen by
Jaqueson K. Galimberti
KOF Swiss Economic Institute

  1. Co-movements in Market Prices and Fundamentals: A Semiparametric Multivariate GARCH Approach By Loann D. Desboulets
  2. A switching self-exciting jump diffusion process for stock prices By Donatien Hainaut; Franck Moraux
  3. Inference in Second-Order Identified Models By Prosper Dovonon; Frank Kleibergen; Alastair Hall
  4. HAR Testing for Spurious Regression in Trend By Peter C.B. Phillips; Yonghui Zhang; Xiaohu Wang
  5. Testing for randomness in a random coefficient autoregression model By Lajos Horvath; Lorenzo Trapani
  6. Testing Fractional Unit Roots with Non-linear Smooth Break Approximations using Fourier functions By Gil-Alana, Luis A.; Yaya, OlaOluwa S
  7. High dimensional semiparametric moment restriction models By Chaohua Dong; Jiti Gao; Oliver Linton
  8. Calibrating rough volatility models: a convolutional neural network approach By Henry Stone
  9. Merger and Acquire of Series: A New Approach of Time Series Modeling By Jitendra Kumar; Varun Agiwal
  10. Testing for strict stationarity in a random coefficient autoregressive model By Lorenzo Trapani
  11. Size matters: Estimation sample length and electricity price forecasting accuracy By Carlo Fezzi; Luca Mosetti
  12. A Frequency-Domain Approach to Dynamic Macroeconomic Models By Tan, Fei
  13. The influence of renewables on electricity price forecasting: a robust approach By Luigi Grossi; Fany Nan
  14. FFORMA: Feature-based forecast model averaging By Pablo Montero-Manso; George Athanasopoulos; Rob J Hyndman; Thiyanga S Talagala
  15. Fundamentalness, Granger Causality and Aggregation By Mario Forni; Luca Gambetti; Luca Sala
  16. Testing explosive bubbles with time-varying volatility By David Harvey; Stephen Leybourne; Yang Zu
  17. Sequential testing for structural stability in approximate factor models By Matteo Barigozzi; Lorenzo Trapani
  18. High and Low Intraday Commodity Prices: A Fractional Integration and Cointegration Approach By Yaya, OlaOluwa S; Gil-Alana, Luis A.

  1. By: Loann D. Desboulets (Aix-Marseille Univ., CNRS, EHESS, Centrale Marseille, AMSE)
    Abstract: In this paper we investigate Multivariate GARCH models to assess the co-movements between the stock prices of American firms listed on the main markets and fundamentals. Co-movements can be seen as correlations. The latter are usually estimated via standard GARCH models such as the Dynamic Conditional Correlation (Engle, 2002) or the Baba-Engle-Kraft-Kroner (Baba et al., 1990). Nevertheless, more flexible models such as the Orthogonal GARCH of Alexander (2001) can be used as well. We also introduce a new Semi-parametric Orthogonal GARCH as a natural non-linear extension of the Orthogonal GARCH. A Monte Carlo simulation is conducted to evaluate the finite sample performance of each model before applying them to the data. Empirical results show evidence that during crises, prices are less correlated with fundamentals than in normal periods.
    Keywords: non-parametric, Multivariate GARCH, dynamic correlation, PCA
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:1851&r=ets
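A minimal numerical sketch of the Orthogonal GARCH idea referenced in this entry (Alexander, 2001): PCA of the returns, univariate GARCH(1,1) variance filters on the components, and a rotation back to the asset space to obtain dynamic correlations. The GARCH parameters and simulated data below are illustrative assumptions; this is not the semiparametric estimator proposed in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1000, 4
chol = np.linalg.cholesky(0.3 * np.ones((n, n)) + 0.7 * np.eye(n))
returns = rng.standard_normal((T, n)) @ chol.T          # toy correlated returns

# 1) PCA: rotate demeaned returns onto orthogonal components
x = returns - returns.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
W = eigvec[:, np.argsort(eigval)[::-1]]                 # loadings, leading PCs first
pcs = x @ W                                             # principal component series

# 2) Univariate GARCH(1,1) variance filter for each component
#    (omega, alpha, beta are fixed illustrative values, not estimates)
def garch_filter(u, omega=0.05, alpha=0.08, beta=0.90):
    h = np.empty_like(u)
    h[0] = u.var()
    for t in range(1, len(u)):
        h[t] = omega + alpha * u[t - 1] ** 2 + beta * h[t - 1]
    return h

H = np.column_stack([pcs[:, j].var() * garch_filter(pcs[:, j] / pcs[:, j].std())
                     for j in range(n)])

# 3) Conditional covariances of the assets: Sigma_t = W diag(h_t) W'
Sigma = np.einsum('ij,tj,kj->tik', W, H, W)
sd = np.sqrt(np.einsum('tii->ti', Sigma))
corr = Sigma / (sd[:, :, None] * sd[:, None, :])        # dynamic correlation matrices
print(corr[-1].round(2))
```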
  2. By: Donatien Hainaut (ESC Rennes School of Business); Franck Moraux (CREM - Centre de recherche en économie et management - UNICAEN - Université de Caen Normandie - NU - Normandie Université - UR1 - Université de Rennes 1 - UNIV-RENNES - Université de Rennes - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This study proposes a new Markov switching process with clustering effects. In this approach, a hidden Markov chain with a finite number of states modulates the parameters of a self-excited jump process combined with a geometric Brownian motion. Each regime corresponds to a particular economic cycle determining the expected return, the diffusion coefficient and the long-run frequency of clustered jumps. We first study the theoretical properties of this process and propose a sequential Monte-Carlo method to filter the hidden state variables. We next develop a Markov Chain Monte-Carlo procedure to fit the model to the S&P 500. Finally, we analyse the impact of such jump clustering on implied volatilities of European options.
    Keywords: switching regime,Hawkes process,self-excited jumps
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-01909772&r=ets
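The following sketch simulates one ingredient of the model described in this entry, under illustrative parameters: a single-regime geometric Brownian motion overlaid with self-excited (Hawkes) jumps generated by Ogata's thinning. The hidden Markov chain, the particle filter and the MCMC estimation developed in the paper are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 1.0, 1 / 2500                 # one year, fine time grid
mu, sigma = 0.05, 0.2                 # GBM drift and diffusion coefficient
lam0, eta, kappa = 20.0, 15.0, 40.0   # baseline intensity, excitation, decay (eta/kappa < 1)
jump_mean, jump_sd = -0.02, 0.01      # jump sizes in the log-price

# Hawkes jump times via Ogata's thinning (the intensity decays between jumps,
# so the current intensity is a valid upper bound for the next candidate)
jump_times, t = [], 0.0
while t < T:
    lam_bar = lam0 + sum(eta * np.exp(-kappa * (t - s)) for s in jump_times)
    t += rng.exponential(1.0 / lam_bar)
    lam_t = lam0 + sum(eta * np.exp(-kappa * (t - s)) for s in jump_times)
    if t < T and rng.uniform() * lam_bar <= lam_t:
        jump_times.append(t)

# Overlay the jumps on a geometric Brownian motion log-price path
n = int(T / dt)
grid = np.linspace(0.0, T, n + 1)
logS = np.zeros(n + 1)
for i in range(1, n + 1):
    increment = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    jumps = sum(rng.normal(jump_mean, jump_sd)
                for s in jump_times if grid[i - 1] < s <= grid[i])
    logS[i] = logS[i - 1] + increment + jumps

price = 100.0 * np.exp(logS)
print(f"{len(jump_times)} clustered jumps, terminal price {price[-1]:.2f}")
```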
  3. By: Prosper Dovonon; Frank Kleibergen; Alastair Hall
    Abstract: We explore the local power properties of different test statistics for conducting inference in moment condition models that only identify the parameters locally to second order. We consider the conventional Wald and LM statistics, and also the Generalized Anderson Rubin (GAR) statistic (Anderson and Rubin, 1949; Dufour, 1997; Staiger and Stock, 1997; Stock and Wright, 2000), KLM statistic (Kleibergen, 2002, 2005) and the GMM extension of Moreira’s (2003) (GMM-M) conditional likelihood ratio statistic. The GAR, KLM and GMM-M statistics are so-called “identification robust” since their (conditional) limiting distribution is the same under first-order, weak and therefore also second-order identification. For inference about the model specification, we consider the identification-robust J statistic (Kleibergen, 2005) and the GAR statistic. Interestingly, we find that the limiting distribution of the Wald statistic under local alternatives not only depends on the distance to the null hypothesis but also on the convergence rate of the Jacobian. We specifically analyze two empirically relevant models with second-order identification. In the panel autoregressive model of order one, our analysis indicates that the Wald test of a unit root value of the autoregressive parameter has better power compared to the corresponding GAR test which, in turn, dominates the KLM, GMM-M and LM tests. For the conditionally heteroskedastic factor model, we compare Kleibergen’s (2005) J and the GAR statistics to Hansen’s (1982) overidentifying restrictions test (previously analyzed in this context by Dovonon and Renault, 2013) and find the power ranking depends on the sample size. Collectively, our results suggest that tests with meaningful power can be conducted in second-order identified models.
    Keywords: Generalized Method of Moments estimation,First-order identification failure,Identification-robust inference,
    Date: 2018–12–17
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2018s-36&r=ets
  4. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Yonghui Zhang (Renmin University of China); Xiaohu Wang (The Chinese University of Hong Kong)
    Abstract: The usual t test, the t test based on heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators, and the heteroskedasticity and autocorrelation robust (HAR) test are three statistics that are widely used in applied econometric work. The use of these significance tests in trend regression is of particular interest given the potential for spurious relationships in trend formulations. Following a longstanding tradition in the spurious regression literature, this paper investigates the asymptotic and finite sample properties of these test statistics in several spurious regression contexts, including regression of stochastic trends on time polynomials and regressions among independent random walks. Concordant with existing theory (Phillips, 1986, 1998; Sun, 2004, 2014), the usual t test and HAC standardized test fail to control size as the sample size n → ∞ in these spurious formulations, whereas HAR tests converge to well-defined limit distributions in each case and therefore have the capacity to be consistent and control size. However, it is shown that when the number of trend regressors K → ∞, all three statistics, including the HAR test, diverge and fail to control size as n → ∞. These findings are relevant to high dimensional nonstationary time series regressions.
    Keywords: HAR inference, Karhunen-Loève representation, Spurious regression, t-statistics
    JEL: C12 C14 C23
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2153&r=ets
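A small Monte Carlo sketch of the spurious trend regression problem this paper studies: regressing an independent random walk on a linear time trend yields rejection rates of the usual and HAC (Newey-West) t-tests far above the nominal 5% level. This only illustrates the size distortion; the paper's HAR analysis and asymptotic theory are not reproduced.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, reps, crit = 200, 1000, 1.96
trend = np.arange(1, n + 1)
X = sm.add_constant(trend)

rej_ols, rej_hac = 0, 0
for _ in range(reps):
    y = rng.standard_normal(n).cumsum()          # driftless random walk, no true trend
    fit_ols = sm.OLS(y, X).fit()
    fit_hac = sm.OLS(y, X).fit(cov_type="HAC",
                               cov_kwds={"maxlags": int(n ** (1 / 3))})
    rej_ols += abs(fit_ols.tvalues[1]) > crit    # usual t-test on the trend coefficient
    rej_hac += abs(fit_hac.tvalues[1]) > crit    # HAC (Newey-West) t-test

print(f"rejection rate, usual t-test: {rej_ols / reps:.2f}")
print(f"rejection rate, HAC t-test:   {rej_hac / reps:.2f}")
```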
  5. By: Lajos Horvath; Lorenzo Trapani
    Abstract: We propose a test to discern between an ordinary autoregressive model, and a random coefficient one. To this end, we develop a full-fledged estimation theory for the variances of the idiosyncratic innovation and of the random coefficient, based on a two-stage WLS approach. Our results hold irrespective of whether the series is stationary or nonstationary, and, as an immediate result, they afford the construction of a test for "relevant" randomness. Further, building on these results, we develop a randomised test statistic for the null that the coefficient is non-random, as opposed to the alternative of a standard RCA(1) model. Monte Carlo evidence shows that the test has the correct size and very good power for all cases considered. MSC 2010 subject classifications: Primary 62G10, 62H25; secondary 62M10.
    Keywords: Random Coefficient AutoRegression, WLS estimator, randomised test.
    URL: http://d.repec.org/n?u=RePEc:not:notgts:18/03&r=ets
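A sketch of the moment idea underlying variance estimation in an RCA(1) model, on simulated data and with a plain least-squares second stage rather than the authors' two-stage WLS theory or randomised test: since Var(y_t | y_{t-1}) = sig_e^2 + sig_b^2 * y_{t-1}^2, squared first-stage OLS residuals regressed on y_{t-1}^2 give rough estimates of the two variances, and sig_b^2 near zero points to a fixed-coefficient AR(1).
```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, sig_b, sig_e = 2000, 0.5, 0.3, 1.0
y = np.zeros(T)
for t in range(1, T):
    # RCA(1): y_t = (phi + b_t) y_{t-1} + e_t with b_t ~ N(0, sig_b^2)
    y[t] = (phi + sig_b * rng.standard_normal()) * y[t - 1] + sig_e * rng.standard_normal()

y_lag, y_now = y[:-1], y[1:]
phi_hat = (y_lag @ y_now) / (y_lag @ y_lag)          # first-stage OLS
resid2 = (y_now - phi_hat * y_lag) ** 2

Z = np.column_stack([np.ones(T - 1), y_lag ** 2])    # regress e_t^2 on 1 and y_{t-1}^2
sig_e2_hat, sig_b2_hat = np.linalg.lstsq(Z, resid2, rcond=None)[0]
print(f"phi_hat={phi_hat:.3f}, sig_e^2={sig_e2_hat:.3f}, sig_b^2={sig_b2_hat:.3f}")
```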
  6. By: Gil-Alana, Luis A.; Yaya, OlaOluwa S
    Abstract: In this paper we present a testing procedure for fractional orders of integration in the context of non-linear terms approximated by Fourier functions. The procedure is a natural extension of the linear method proposed in Robinson (1994) and similar to the one proposed in Cuestas and Gil-Alana (2016) based on Chebyshev polynomials in time. The test statistic has an asymptotic standard normal distribution and several Monte Carlo experiments conducted in the paper show that it performs well in finite samples. Various applications using real life time series, such as US unemployment rates, US GNP and Purchasing Power Parity (PPP) of G7 countries are presented at the end of the paper.
    Keywords: Fractional unit root; Chebyshev polynomial; Monte Carlo simulation; Nonlinearity; Smooth break; Fourier transform
    JEL: C12 C15 C22
    Date: 2018–11–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:90516&r=ets
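An illustration of the Fourier-approximation step described in this entry, on simulated data: a single-frequency sine/cosine pair absorbs a smooth (logistic) level shift, after which a fractional unit-root test such as Robinson (1994) would be applied to the residuals. The test itself is not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
t = np.arange(1, T + 1)
smooth_break = 2.0 / (1.0 + np.exp(-0.05 * (t - 150)))     # gradual level shift
y = smooth_break + rng.standard_normal(T)

k = 1                                                       # single Fourier frequency
X = np.column_stack([np.ones(T),
                     np.sin(2 * np.pi * k * t / T),
                     np.cos(2 * np.pi * k * t / T)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
detrended = y - X @ beta          # the fractional unit-root test would be run on this
print("Fourier coefficients:", beta.round(3))
print("residual variance:", detrended.var().round(3))
```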
  7. By: Chaohua Dong; Jiti Gao; Oliver Linton
    Abstract: We consider nonlinear moment restriction semiparametric models where both the dimension of the parameter vector and the number of restrictions are divergent with sample size and an unknown smooth function is involved. We propose an estimation method based on the sieve generalized method of moments (sieve-GMM). We establish consistency and asymptotic normality for the estimated quantities when the number of parameters increases modestly with sample size. We also consider the case where the number of potential parameters/covariates is very large, i.e., increases rapidly with sample size, but the true model exhibits sparsity. We use a penalized sieve GMM approach to select the relevant variables, and establish the oracle property of our method in this case. We also provide new results for inference. We propose several new test statistics for the over-identification and establish their large sample properties. We provide a simulation study and an application to data from the NLSY79 used by Carneiro et al. [14].
    Keywords: generalized method of moments, high dimensional models, moment restriction, over-identification, penalization, sieve method, sparsity.
    JEL: C12 C14 C22 C30
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-23&r=ets
  8. By: Henry Stone
    Abstract: In this paper we use convolutional neural networks to find the Hölder exponent of simulated sample paths of the rBergomi model, a recently proposed stock price model used in mathematical finance. We contextualise this as a calibration problem, thereby providing a very practical and useful application.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.05315&r=ets
  9. By: Jitendra Kumar; Varun Agiwal
    Abstract: The present paper proposes an autoregressive time series model to study the behaviour of the merger-and-acquire concept, which is as important as other available approaches such as structural breaks, de-trending, etc. The main motivation behind the newly proposed merged autoregressive (M-AR) model is to study the impact of a merger on the parameters as well as on the acquired series. First, we set out the estimation framework using classical least squares and the posterior distribution under a Bayesian approach with different loss functions. Then, we obtain the Bayes factor, a full Bayesian significance test and credible intervals to assess the significance of the merger series. A simulation study as well as an empirical study is presented.
    Keywords: Autoregressive model, Break point, Merger series, Bayesian inference.
    JEL: C32 G34 C11
    Date: 2018–12–16
    URL: http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2018_16&r=ets
  10. By: Lorenzo Trapani
    Abstract: We propose a procedure to decide between the null hypothesis of (strict) stationarity and the alternative of non-stationarity, in the context of a Random Coefficient AutoRegression (RCAR). The procedure is based on randomising a diagnostic which diverges to positive infinity under the null, and drifts to zero under the alternative. Thence, we propose a randomised test which can be used directly and - building on it - a decision rule to discern between the null and the alternative. The procedure can be applied under very general circumstances: albeit developed for an RCAR model, it can be used in the case of a standard AR(1) model, without requiring any modifications or prior knowledge. Also, the test works (again with no modification or prior knowledge being required) in the presence of infinite variance, and in general requires minimal assumptions on the existence of moments.
    Keywords: Random Coefficient AutoRegression, Stationarity, Unit Root, Heavy Tails, Randomised Tests.
    URL: http://d.repec.org/n?u=RePEc:not:notgts:18/02&r=ets
  11. By: Carlo Fezzi; Luca Mosetti
    Abstract: Electricity price forecasting models are typically estimated via rolling windows, i.e. by using only the most recent observations. Nonetheless, the current literature does not provide much guidance on how to select the size of such windows. This paper shows that determining the appropriate window prior to estimation dramatically improves forecasting performances. In addition, it proposes a simple two-step approach to choose the best performing models and window sizes. The value of this methodology is illustrated by analyzing hourly datasets from two large power markets with a selection of ten different forecasting models. Incidentally, our empirical application reveals that simple models, such as the linear regression, can perform surprisingly well if estimated on extremely short samples.
    Keywords: electricity price forecasting, day-ahead market, parameter instability, bandwidth selection, artificial neural networks
    JEL: C22 C45 C51 C53 Q47
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:trn:utwprg:2018/10&r=ets
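A toy sketch of the underlying idea in this entry, on simulated data: re-estimate a simple autoregression on rolling windows of several candidate lengths and keep the length with the best out-of-sample accuracy on a validation segment. The paper's two-step procedure, its ten forecasting models and the electricity-price datasets are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(11)
T = 1200
y = np.zeros(T)
phi = np.where(np.arange(T) < 800, 0.9, 0.3)     # parameter instability mid-sample
for t in range(1, T):
    y[t] = 1.0 + phi[t] * y[t - 1] + rng.standard_normal()

def rolling_mae(y, window, start, end):
    """One-step-ahead MAE of an AR(1)-with-constant re-fitted on a rolling window."""
    errs = []
    for t in range(start, end):
        y_now, y_lag = y[t - window + 1:t], y[t - window:t - 1]
        X = np.column_stack([np.ones(len(y_lag)), y_lag])
        c, b = np.linalg.lstsq(X, y_now, rcond=None)[0]
        errs.append(abs(y[t] - (c + b * y[t - 1])))
    return np.mean(errs)

candidates = [50, 100, 200, 400, 800]
scores = {w: rolling_mae(y, w, start=900, end=1100) for w in candidates}
best = min(scores, key=scores.get)
print({w: round(s, 3) for w, s in scores.items()})
print("selected window length:", best)
```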
  12. By: Tan, Fei
    Abstract: This article is concerned with frequency-domain analysis of dynamic linear models under the hypothesis of rational expectations. We develop a unified framework for conveniently solving and estimating these models. Unlike existing strategies, our starting point is to obtain the model solution entirely in the frequency domain. This solution method is applicable to a wide class of models and permits straightforward construction of the spectral density for performing likelihood-based inference. To cope with potential model uncertainty, we also generalize the well-known spectral decomposition of the Gaussian likelihood function to a composite version implied by several competing models. Taken together, these techniques yield fresh insights into the model’s theoretical and empirical implications beyond what conventional time-domain approaches can offer. We illustrate the proposed framework using a prototypical new Keynesian model with fiscal details and two distinct monetary-fiscal policy regimes. The model is simple enough to deliver an analytical solution that makes the policy effects transparent under each regime, yet still able to shed light on the empirical interactions between U.S. monetary and fiscal policies along different frequencies.
    Keywords: solution method, analytic function, Bayesian inference, spectral density, monetary and fiscal policy
    JEL: C32 C51 C52 C65 E63 H63
    Date: 2018–10–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:90487&r=ets
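A sketch of the frequency-domain (Whittle) approximation to the Gaussian likelihood that such an approach builds on, written out for a scalar AR(1) on simulated data; the paper's solution method for rational-expectations models and its composite-likelihood extension are not reproduced.
```python
# Whittle approximation: logL(phi, s2) ~ -sum_j [ log f(w_j) + I(w_j)/f(w_j) ],
# where I is the periodogram and f(w) = s2 / (2*pi*|1 - phi*exp(-i*w)|^2).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T, phi_true = 500, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

freqs = 2 * np.pi * np.arange(1, T // 2) / T                            # Fourier frequencies
I = np.abs(np.fft.fft(y - y.mean())[1:T // 2]) ** 2 / (2 * np.pi * T)   # periodogram

def neg_whittle(params):
    phi, s2 = params
    if abs(phi) >= 0.999 or s2 <= 0:
        return np.inf
    f = s2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs)) ** 2)   # AR(1) spectral density
    return np.sum(np.log(f) + I / f)

res = minimize(neg_whittle, x0=[0.2, 1.5], method="Nelder-Mead")
print("Whittle estimates (phi, sigma^2):", np.round(res.x, 3))
```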
  13. By: Luigi Grossi (University of Verona); Fany Nan (European Commission's Joint Research Centre (JRC))
    Abstract: In this paper a robust approach to modelling electricity spot prices is introduced. Differently from what has recently been done in the literature on electricity price forecasting, where attention has mainly been drawn to the prediction of spikes, the focus of this contribution is on the robust estimation of nonlinear SETARX models (Self-Exciting Threshold Auto Regressive models with eXogenous regressors). In this way, parameter estimates are not, or only slightly, influenced by the presence of extreme observations, and the large majority of prices, which are not spikes, can be better forecasted. A Monte Carlo study is carried out in order to select the best weighting function for Generalized M-estimators of SETAR processes. A robust procedure to select and estimate nonlinear processes for electricity prices is introduced, including robust tests for stationarity and nonlinearity and robust information criteria. The application of the procedure to the Italian electricity market reveals the forecasting superiority of the robust GM-estimator based on the polynomial weighting function with respect to the non-robust Least Squares estimator. Finally, the introduction of external regressors in the robust estimation of SETARX processes contributes to the improvement of the forecasting ability of the model.
    Keywords: Electricity Price, Nonlinear Time Series, Price Forecasting, Robust GM-Estimator, Spikes, Threshold Models
    JEL: C13 C15 C22 C53 Q47
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ieb:wpaper:doc2018-10&r=ets
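A compact sketch of the generic GM (iteratively reweighted least squares) idea on simulated data, using Huber-type weights and a known threshold rather than the paper's polynomial weighting function, threshold search, robust tests or information criteria: spike observations are down-weighted when fitting a two-regime SETAR(1).
```python
import numpy as np

rng = np.random.default_rng(9)
T, thr = 1500, 0.0
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= thr else 0.3
    eps = rng.standard_normal()
    if rng.uniform() < 0.01:                      # occasional extreme "spikes"
        eps += rng.choice([-8.0, 8.0])
    y[t] = phi * y[t - 1] + eps

y_lag, y_now = y[:-1], y[1:]
low = y_lag <= thr
X = np.column_stack([np.ones(T - 1), y_lag * low, y_lag * ~low])   # regime-specific slopes

beta = np.linalg.lstsq(X, y_now, rcond=None)[0]   # start from plain OLS
for _ in range(20):                               # IRLS with Huber weights
    resid = y_now - X @ beta
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))   # robust (MAD) scale
    u = np.abs(resid / scale)
    w = np.minimum(1.0, 1.345 / np.maximum(u, 1e-12))              # Huber weight function
    beta = np.linalg.lstsq(X * np.sqrt(w)[:, None], y_now * np.sqrt(w), rcond=None)[0]

print("robust estimates (const, phi_low, phi_high):", beta.round(3))
```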
  14. By: Pablo Montero-Manso; George Athanasopoulos; Rob J Hyndman; Thiyanga S Talagala
    Abstract: We propose an automated method for obtaining weighted forecast combinations using time series features. The proposed approach involves two phases. First, we use a collection of time series to train a meta-model to assign weights to various possible forecasting methods with the goal of minimizing the average forecasting loss obtained from a weighted forecast combination. The inputs to the meta-model are features extracted from each series. In the second phase, we forecast new series using a weighted forecast combination where the weights are obtained from our previously trained meta-model. Our method outperforms a simple forecast combination, and outperforms all of the most popular individual methods in the time series forecasting literature. The approach achieved second position in the M4 competition.
    Keywords: time series feature, forecast combination, XGBoost, M4 competition, meta-learning.
    JEL: C10 C14 C22
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-19&r=ets
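A simplified sketch of the two-phase, feature-based combination idea on simulated series, using a scikit-learn classifier and three toy features in place of FFORMA's XGBoost meta-learner, custom loss and feature set: the meta-model is trained to predict which basic method forecasts a series best, and its class probabilities serve as combination weights for a new series.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(13)

def simulate_series(n=80):
    phi = rng.uniform(-0.5, 0.95)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y + rng.uniform(0, 2) * np.arange(n) / n          # occasional mild trend

def forecasts(train):
    """Pool of three basic one-step forecasts: naive, mean, AR(1)."""
    y_lag, y_now = train[:-1], train[1:]
    phi = (y_lag @ y_now) / (y_lag @ y_lag + 1e-12)
    return np.array([train[-1], train.mean(), phi * train[-1]])

def features(train):
    d = np.diff(train)
    acf1 = np.corrcoef(train[:-1], train[1:])[0, 1]
    return np.array([acf1, d.std() / (train.std() + 1e-12), train[-1] - train.mean()])

# Phase 1: train the meta-learner on which method wins per series
X_meta, y_meta = [], []
for _ in range(500):
    y = simulate_series()
    train, actual = y[:-1], y[-1]
    X_meta.append(features(train))
    y_meta.append(np.argmin(np.abs(forecasts(train) - actual)))
meta = GradientBoostingClassifier().fit(np.array(X_meta), np.array(y_meta))

# Phase 2: weight the pool for a new series by the predicted probabilities
y_new = simulate_series()
train = y_new[:-1]
proba = meta.predict_proba(features(train).reshape(1, -1))[0]
weights = np.zeros(3)
weights[meta.classes_] = proba                               # align with class labels
combo = weights @ forecasts(train)
print("weights:", weights.round(2), "combined forecast:", round(combo, 2), "actual:", round(y_new[-1], 2))
```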
  15. By: Mario Forni; Luca Gambetti; Luca Sala
    Abstract: The testing procedure suggested in Canova and Sahneh (2018) is essentially the same as the one proposed in Forni and Gambetti (2014), the only difference being the use of the Geweke, Meese and Dent (1983) version of Sims' (1972) test in place of a standard Granger causality test. The two procedures produce similar results, both for small and large samples, and perform remarkably well in detecting non-fundamentalness. Neither method has anything to do with the problem of aggregation. A “structural aggregate model” does not exist.
    Keywords: Non-fundamentalness, Granger causality, aggregation, structural VAR
    JEL: C32 E32
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:mod:recent:139&r=ets
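A sketch of the kind of Granger-causality check for non-fundamentalness discussed in this entry (in the spirit of Forni and Gambetti, 2014), on a simulated non-invertible MA(1): the residuals of an autoregressive approximation are regressed on lags of an outside variable that observes the true shock, and a significant F-test flags non-fundamentalness. Neither paper's empirical application is reproduced.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(17)
T, p, q = 600, 4, 2
e = rng.standard_normal(T + 1)
y = e[1:] + 2.0 * e[:-1]                    # non-invertible MA(1): shocks not recoverable from past y
x = e[1:] + 0.3 * rng.standard_normal(T)    # outside information correlated with the true shock

# Residuals of an AR(p) approximation of y (the econometrician's "shocks")
Y = y[p:]
X_ar = sm.add_constant(np.column_stack([y[p - i:T - i] for i in range(1, p + 1)]))
u = sm.OLS(Y, X_ar).fit().resid

# Do lags of the outside variable help predict those residuals?  Here they should.
u_dep = u[q:]
Z = sm.add_constant(np.column_stack([x[p + q - i:T - i] for i in range(1, q + 1)]))
fit = sm.OLS(u_dep, Z).fit()
print(fit.f_test(np.eye(q + 1)[1:]))        # joint F-test on the lag coefficients
```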
  16. By: David Harvey; Stephen Leybourne; Yang Zu
    Abstract: This paper considers the problem of testing for an explosive bubble in financial data in the presence of time-varying volatility. We propose a weighted least squares-based variant of the Phillips, Wu and Yu (2011) test for explosive autoregressive behaviour. We find that such an approach has appealing asymptotic power properties, with the potential to deliver substantially greater power than the established OLS-based approach for many volatility and bubble settings. Given that the OLS-based test can outperform the weighted least squares-based test for other volatility and bubble specifications, we also suggest a union of rejections procedure that succeeds in capturing the better power available from the two constituent tests for a given alternative. Our approach involves a nonparametric kernel-based volatility function estimator for computation of the weighted least squares-based statistic, together with the use of a wild bootstrap procedure applied jointly to both individual tests, delivering a powerful testing procedure that is asymptotically size-robust to a wide range of time-varying volatility specifications.
    Keywords: Rational bubble; Explosive autoregression; Time-varying volatility; Weighted least squares; Right-tailed unit root testing.
    URL: http://d.repec.org/n?u=RePEc:not:notgts:18/05&r=ets
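For reference, a sketch of the OLS-based Phillips, Wu and Yu (2011) sup-ADF statistic on a simulated bubble episode: right-tailed Dickey-Fuller t-statistics are computed on forward-expanding samples and their supremum is the test statistic. The weighted least squares variant, the kernel volatility estimator and the wild bootstrap union procedure proposed in the paper are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(21)
T = 400
y = np.zeros(T)
for t in range(1, T):
    rho = 1.02 if 200 <= t < 260 else 1.0          # short explosive episode
    y[t] = rho * y[t - 1] + rng.standard_normal()

def df_tstat(x):
    """Dickey-Fuller t-statistic from  dx_t = a + b*x_{t-1} + err."""
    dx, x_lag = np.diff(x), x[:-1]
    X = np.column_stack([np.ones(len(x_lag)), x_lag])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ dx
    resid = dx - X @ beta
    s2 = resid @ resid / (len(dx) - 2)
    return beta[1] / np.sqrt(s2 * XtX_inv[1, 1])

r0 = int(0.1 * T)                                  # minimum window length
stats = [df_tstat(y[:n]) for n in range(r0, T + 1)]
print(f"SADF statistic = {max(stats):.2f} "
      "(right-tailed; compare with critical values simulated under the unit-root null)")
```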
  17. By: Matteo Barigozzi; Lorenzo Trapani
    Abstract: We develop an on-line monitoring procedure to detect a change in a large approximate factor model. Our statistics are based on a well-known property of the (r + 1)-th eigenvalue of the sample covariance matrix of the data (having defined r as the number of common factors): whilst under the null the (r + 1)-th eigenvalue is bounded, under the alternative of a change (either in the loadings, or in the number of factors itself) it becomes spiked. Given that the sample eigenvalue cannot be estimated consistently under the null, we regularise the problem by randomising the test statistic in conjunction with sample conditioning, obtaining a sequence of i.i.d., asymptotically chi-square statistics which are then employed to build the monitoring scheme. Numerical evidence shows that our procedure works very well in finite samples, with a very small probability of false detections and tight detection times in the presence of a genuine change-point.
    Keywords: large factor model, change-point, sequential testing, randomised tests.
    URL: http://d.repec.org/n?u=RePEc:not:notgts:18/04&r=ets
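A sketch of the eigenvalue property the monitoring scheme exploits, on simulated data: with r common factors the (r+1)-th eigenvalue of the sample covariance matrix stays bounded, but it becomes spiked once a new factor enters mid-sample. The randomisation, conditioning and sequential decision rule of the paper are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(19)
N, T, r, t_break = 100, 400, 2, 200
loadings = rng.standard_normal((N, r))
new_loading = rng.standard_normal(N)
factors = rng.standard_normal((T, r))
new_factor = rng.standard_normal(T) * (np.arange(T) >= t_break)     # appears mid-sample
X = factors @ loadings.T + np.outer(new_factor, new_loading) + rng.standard_normal((T, N))

# Monitor the (r+1)-th eigenvalue on expanding samples
for n in range(100, T + 1, 50):
    eigvals = np.linalg.eigvalsh(np.cov(X[:n].T))[::-1]             # descending order
    print(f"t = {n:3d}:  (r+1)-th eigenvalue = {eigvals[r]:.2f}")
```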
  18. By: Yaya, OlaOluwa S; Gil-Alana, Luis A.
    Abstract: This paper examines the behaviour of high and low prices of four commodities, namely crude oil, natural gas, gold and silver, and of the corresponding ranges using both daily and intraday data at various frequencies. For this purpose, it applies fractional integration and cointegration techniques; in particular, an FCVAR model is estimated to capture both the long-run equilibrium relationships between high and low commodity prices, referred to as the range, and the long-memory properties of their linear combination. Fractional cointegration is found in all cases, with the range showing stationary and nonstationary patterns and changing substantially across frequencies. The findings may assist investors in improving their trading strategies since high and low prices serve as entry and exit signals in the market.
    Keywords: Commodity prices, intraday, fractional integration, fractional cointegration, FCVAR
    JEL: C22 C32 G11 G15
    Date: 2018–12–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:90518&r=ets
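A sketch of a standard semiparametric ingredient consistent with the theme of this entry: a Geweke and Porter-Hudak (1983) log-periodogram estimate of the fractional integration order d, applied to a simulated long-memory series standing in for the high-low range. The FCVAR system and the commodity price data of the paper are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(23)
T, d_true = 2000, 0.35

# ARFIMA(0, d, 0) via a truncated MA(inf): psi_0 = 1, psi_j = psi_{j-1}*(j-1+d)/j
psi = np.ones(T)
for j in range(1, T):
    psi[j] = psi[j - 1] * (j - 1 + d_true) / j
eps = rng.standard_normal(2 * T)
y = np.array([psi @ eps[s - T + 1:s + 1][::-1] for s in range(T, 2 * T)])

# GPH log-periodogram regression over the first m ~ sqrt(T) Fourier frequencies
m = int(np.sqrt(T))
freqs = 2 * np.pi * np.arange(1, m + 1) / T
I = np.abs(np.fft.fft(y - y.mean())[1:m + 1]) ** 2 / (2 * np.pi * T)
X = np.column_stack([np.ones(m), np.log(4 * np.sin(freqs / 2) ** 2)])
const, slope = np.linalg.lstsq(X, np.log(I), rcond=None)[0]
print(f"GPH estimate of d: {-slope:.3f} (true value {d_true})")
```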

This nep-ets issue is ©2018 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.