nep-ets New Economics Papers
on Econometric Time Series
Issue of 2019‒04‒15
thirteen papers chosen by
Jaqueson K. Galimberti
KOF Swiss Economic Institute

  1. A PRIMER ON BOOTSTRAP TESTING OF HYPOTHESES IN TIME SERIES MODELS: WITH AN APPLICATION TO DOUBLE AUTOREGRESSIVE MODELS By Giuseppe Cavaliere; Anders Rahbek
  2. The Endo-Exo Problem in High Frequency Financial Price Fluctuations and Rejecting Criticality By Spencer Wheatley; Alexander Wehrli; Didier Sornette
  3. Bayesian prediction of jumps in large panels of time series data By Angelos Alexopoulos; Petros Dellaportas; Omiros Papaspiliopoulos
  4. Tests for conditional heteroscedasticity with functional data and goodness-of-fit tests for FGARCH models By Rice, Gregory; Wirjanto, Tony; Zhao, Yuqian
  5. Estimating Impulse Response Functions When the Shock Series Is Observed By Choi, Chi-Young; Chudik, Alexander
  6. Local Polynomial Estimation of Time-Varying Parameters in Nonlinear Models By Dennis Kristensen; Young Jun Lee
  7. From (Martingale) Schrodinger bridges to a new class of Stochastic Volatility Models By Pierre Henry-Labordere
  8. Tests of Conditional Predictive Ability: Some Simulation Evidence By McCracken, Michael W.
  9. Empirical Asset Pricing via Machine Learning By Shihao Gu; Bryan T. Kelly; Dacheng Xiu
  10. Another Look at Calendar Anomalies By Evanthia Chatzitzisi; Stilianos Fountas; Theodore Panagiotidis
  11. Monthly art market returns By BOCART Fabian Y.R.P.; GHYSELS Eric; HAFNER Christian
  12. Bitcoin Price Prediction: An ARIMA Approach By Amin Azari
  13. Deconstructing the yield curve By Crump, Richard K.; Gospodinov, Nikolay

  1. By: Giuseppe Cavaliere (Department of Economics, University of Bologna, Italy); Anders Rahbek (Department of Economics, University of Copenhagen, Denmark)
    Abstract: In this paper we discuss the general application of the bootstrap as a tool for statistical inference in econometric time series models. We do this by considering the implementation of bootstrap inference in the class of double-autoregressive [DAR] models discussed in Ling (2004). DAR models are particularly interesting for illustrating implementation of the bootstrap in time series: first, standard asymptotic inference is usually difficult to implement due to the presence of nuisance parameters under the null hypothesis; second, inference involves testing whether one or more parameters are on the boundary of the parameter space; third, under the alternative hypothesis, fourth or even second order moments may not exist. In most of these cases, the bootstrap is not considered an appropriate tool for inference. Conversely, taking tests of (non-)stationarity as an illustration, we show that although a standard bootstrap based on unrestricted parameter estimation is invalid, a correct implementation of a bootstrap based on restricted parameter estimation (restricted bootstrap) is first-order valid; that is, it is able to replicate, under the null hypothesis, the correct limiting null distribution. Importantly, we also show that the behaviour of this bootstrap under the alternative hypothesis may be different because of possible lack of finite second-order moments of the bootstrap innovations. This feature makes - for some parameter configurations - the restricted bootstrap unable to replicate the null asymptotic distribution when the null is false. We show that this drawback can be fixed by using a new 'hybrid' bootstrap, where the parameter estimates used to construct the bootstrap data are obtained with the null imposed, while the bootstrap innovations are sampled with replacement from the unrestricted residuals.
We show that this bootstrap, novel in this framework, mimics the correct asymptotic null distribution irrespective of whether the null is true or false. Throughout the paper, we use a number of examples from the bootstrap time series literature to illustrate the importance of properly defining and analyzing the bootstrap generating process and associated bootstrap statistics.
    Keywords: Bootstrap; Hypothesis testing; Double-Autoregressive models; Parameter on the boundary; Infinite Variance
    JEL: C32
    Date: 2019–04–02
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1903&r=all
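The restricted-versus-hybrid distinction in this abstract can be made concrete in a few lines. The sketch below is illustrative only: it assumes a DAR(1) with parameter values passed in by hand rather than estimated, and the function names (`simulate_dar1`, `hybrid_bootstrap_sample`) are hypothetical, not the authors' code. The key point it mirrors is that the bootstrap data are generated under the restricted (null) parameter while the innovations are drawn from the unrestricted residuals.

```python
import random

def simulate_dar1(n, phi, omega, alpha, seed=0):
    """Simulate a DAR(1): x_t = phi*x_{t-1} + eps_t*sqrt(omega + alpha*x_{t-1}^2)."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        eps = rng.gauss(0.0, 1.0)
        x.append(phi * x[-1] + eps * (omega + alpha * x[-1] ** 2) ** 0.5)
    return x

def hybrid_bootstrap_sample(x, phi_restricted, omega, alpha, phi_unrestricted, seed=1):
    """Hybrid bootstrap: the bootstrap DGP uses the restricted (null-imposed)
    parameter, but innovations are resampled from the *unrestricted* residuals."""
    # Standardized residuals from the unrestricted fit.
    resid = []
    for t in range(1, len(x)):
        scale = (omega + alpha * x[t - 1] ** 2) ** 0.5
        resid.append((x[t] - phi_unrestricted * x[t - 1]) / scale)
    rng = random.Random(seed)
    xb = [x[0]]
    for _ in range(len(x) - 1):
        eta = rng.choice(resid)  # draw with replacement
        xb.append(phi_restricted * xb[-1] + eta * (omega + alpha * xb[-1] ** 2) ** 0.5)
    return xb
```

In practice one would recompute the test statistic on many such bootstrap samples to form the bootstrap null distribution.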
  2. By: Spencer Wheatley (ETH Zurich); Alexander Wehrli (ETH Zurich); Didier Sornette (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC); Swiss Finance Institute)
    Abstract: The endo-exo problem lies at the heart of statistical identification in many fields of science, and is often plagued by spurious strong-and-long memory due to improper treatment of trends, shocks and shifts in the data. A class of models that has been shown to be useful in discerning exogenous and endogenous activity is the Hawkes process. This class of point processes has enjoyed great recent popularity and rapid development within the quantitative finance literature, with particular focus on the study of market microstructure and high frequency price fluctuations. We show that there are important lessons from older fields like time series analysis and econometrics that should also be applied in financial point process modelling. In particular, we emphasize the importance of appropriately treating trends and shocks for the identification of the strength and length of memory in the system. We exploit the powerful Expectation Maximization (EM) algorithm and objective statistical criteria (BIC) to select the flexibility of the deterministic background intensity. With these methods, we strongly reject the hypothesis that the considered financial markets are critical at univariate and bivariate microstructural levels.
    Keywords: mid-price changes, trade times, Hawkes process, endogeneity, criticality, Expectation-Maximization, BIC, non-stationarity, ARMA point process, spurious inference, external shocks
    JEL: C01 C40 C52
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1857&r=all
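For intuition on the criticality question the abstract rejects: for a Hawkes process with exponential kernel, the branching ratio α/β is the average number of events triggered per event, and the process is subcritical below 1 and critical at 1. A minimal stdlib sketch (illustrative only; the paper's EM/BIC estimation is not reproduced here):

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a Hawkes process with exponential kernel:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def branching_ratio(alpha, beta):
    """Average number of offspring per event; values near 1 indicate
    near-criticality, values >= 1 an explosive (supercritical) process."""
    return alpha / beta
```

With `alpha = 0.5`, `beta = 1.0` the branching ratio is 0.5: each event triggers half an event on average, so the market in this toy parameterization is comfortably subcritical.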
  3. By: Angelos Alexopoulos; Petros Dellaportas; Omiros Papaspiliopoulos
    Abstract: We take a new look at the problem of disentangling the volatility and jumps processes in a panel of stock daily returns. We first provide an efficient computational framework that deals with the stochastic volatility model with Poisson-driven jumps in a univariate scenario that offers a competitive inference alternative to the existing implementation tools. This methodology is then extended to a large set of stocks in which it is assumed that the unobserved jump intensities of each stock co-evolve in time through a dynamic factor model. A carefully designed sequential Monte Carlo algorithm provides out-of-sample empirical evidence that our suggested model outperforms, with respect to predictive Bayes factors, models that do not exploit the panel structure of stocks.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.05312&r=all
  4. By: Rice, Gregory; Wirjanto, Tony; Zhao, Yuqian
    Abstract: Functional data objects that are derived from high-frequency financial data often exhibit volatility clustering characteristic of conditionally heteroscedastic time series. Versions of functional generalized autoregressive conditionally heteroscedastic (FGARCH) models have recently been proposed to describe such data, but so far basic diagnostic tests for these models are not available. We propose two portmanteau type tests to measure conditional heteroscedasticity in the squares of financial asset return curves. A complete asymptotic theory is provided for each test, and we further show how they can be applied to model residuals in order to evaluate the adequacy of, and aid in order selection for, FGARCH models. Simulation results show that both tests have good size and power to detect conditional heteroscedasticity and model mis-specification in finite samples. In an application, the proposed tests reveal that intra-day asset return curves exhibit conditional heteroscedasticity. Additionally, we find that this conditional heteroscedasticity cannot be explained by the magnitude of inter-daily returns alone, but that it can be adequately modeled by an FGARCH(1,1) model.
    Keywords: Functional time series, Heteroscedasticity testing, Model diagnostic checking, High-frequency volatility models, Intra-day asset price
    JEL: C12 C32 C58 G10
    Date: 2019–03–31
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:93048&r=all
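The scalar analogue of a portmanteau test for conditional heteroscedasticity is a Ljung-Box-type statistic computed on the squared series; the paper's functional version replaces scalars with projections of curves. A stdlib-only sketch of the scalar analogue (illustrative; critical values and the functional machinery are omitted):

```python
def autocorr(z, h):
    """Sample autocorrelation of z at lag h (common-mean estimator)."""
    n = len(z)
    m = sum(z) / n
    denom = sum((v - m) ** 2 for v in z)
    num = sum((z[t] - m) * (z[t - h] - m) for t in range(h, n))
    return num / denom

def portmanteau_squares(x, max_lag):
    """Ljung-Box-type statistic on the squared series: large values signal
    conditional heteroscedasticity (ARCH-type effects)."""
    sq = [v * v for v in x]
    n = len(sq)
    return n * (n + 2) * sum(autocorr(sq, h) ** 2 / (n - h)
                             for h in range(1, max_lag + 1))
```

Under the null of no conditional heteroscedasticity the statistic is asymptotically chi-squared with `max_lag` degrees of freedom, which is how a rejection threshold would be set.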
  5. By: Choi, Chi-Young (University of Texas at Arlington); Chudik, Alexander (Federal Reserve Bank of Dallas)
    Abstract: We compare the finite sample performance of a variety of consistent approaches to estimating Impulse Response Functions (IRFs) in a linear setup when the shock of interest is observed. Although there is no uniformly superior approach, iterated approaches turn out to perform well in terms of root mean-squared error (RMSE) in diverse environments and sample sizes. For smaller sample sizes, parsimonious specifications are preferred over full specifications with all ‘relevant’ variables.
    Keywords: Observed shock; Impulse-response functions; Monte Carlo experiments; Finite sample performance
    JEL: C13 C50
    Date: 2019–03–04
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:353&r=all
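One of the non-iterated approaches typically compared in such horse races, the local-projection estimator, is easy to sketch: the horizon-h impulse response is the OLS slope from regressing y at time t+h on the observed shock at time t. The code below is a generic stdlib illustration of that idea, not the authors' preferred iterated estimator:

```python
def ols_slope(y, x):
    """Slope of the simple OLS regression of y on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def local_projection_irf(y, shock, max_h):
    """IRF at horizon h = slope from regressing y_{t+h} on the observed shock s_t."""
    irf = []
    for h in range(max_h + 1):
        yh = y[h:]                   # y shifted forward by h
        sh = shock[:len(y) - h]      # aligned shock observations
        irf.append(ols_slope(yh, sh))
    return irf
```

In larger samples one would add lagged controls to each horizon regression; this bare version shows only the alignment logic.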
  6. By: Dennis Kristensen; Young Jun Lee
    Abstract: We develop a novel asymptotic theory for local polynomial (quasi-) maximum-likelihood estimators of time-varying parameters in a broad class of nonlinear time series models. Under weak regularity conditions, we show the proposed estimators are consistent and follow normal distributions in large samples. Our conditions impose weaker smoothness and moment conditions on the data-generating process and its likelihood compared to existing theories. Furthermore, the bias terms of the estimators take a simpler form. We demonstrate the usefulness of our general results by applying our theory to local (quasi-)maximum-likelihood estimators of time-varying VAR, ARCH and GARCH models, and Poisson autoregressions. For the first three models, we are able to substantially weaken the conditions found in the existing literature. For the Poisson autoregression, existing theories cannot be applied, while our novel approach allows us to analyze it.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.05209&r=all
  7. By: Pierre Henry-Labordere (SOCIETE GENERALE)
    Abstract: Following closely the construction of the Schrodinger bridge, we build a new class of Stochastic Volatility Models exactly calibrated to market instruments such as vanillas, options on realized variance, or VIX options. These models differ strongly from the well-known local stochastic volatility models; in particular, the instantaneous volatility-of-volatility of the associated naked SVMs is not modified once calibrated to market instruments. They can be interpreted as a martingale version of the Schrodinger bridge. The numerical calibration is performed using a dynamic-like version of the Sinkhorn algorithm. We finally highlight a striking relation with Dyson non-colliding Brownian motions.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.04554&r=all
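The Sinkhorn algorithm the abstract builds on is, in its basic static entropic-optimal-transport form, a pair of alternating marginal rescalings of the kernel matrix K = exp(-cost/eps). The sketch below shows only that basic version; the paper's dynamic, martingale-constrained variant is considerably more involved:

```python
import math

def sinkhorn(cost, mu, nu, eps, iters=200):
    """Basic entropic-OT Sinkhorn: alternately rescale K = exp(-cost/eps)
    so that the coupling's row/column marginals match mu and nu."""
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Entropic-optimal coupling.
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

After convergence the returned coupling has (approximately) the prescribed marginals, which is the calibration constraint the paper imposes on market instruments.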
  8. By: McCracken, Michael W. (Federal Reserve Bank of St. Louis)
    Abstract: In this note we provide simulation evidence on the size and power of tests of predictive ability described in Giacomini and White (2006). Our goals are modest but non-trivial. First, we establish that there exist data generating processes that satisfy the null hypotheses of equal finite-sample (un)conditional predictive ability. We then consider various parameterizations of these DGPs as a means of evaluating the size and power properties of the proposed tests. While some of our results reinforce those in Giacomini and White (2006), others do not. We recommend against using the fixed scheme when conducting these tests and provide evidence that very large bandwidths are sometimes required when estimating long-run variances.
    Keywords: prediction; out-of-sample; inference
    JEL: C12 C52 C53
    Date: 2019–03–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2019-011&r=all
  9. By: Shihao Gu (University of Chicago - Booth School of Business); Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Dacheng Xiu (University of Chicago - Booth School of Business)
    Abstract: We synthesize the field of machine learning with the canonical problem of empirical asset pricing: measuring asset risk premia. In the familiar empirical setting of cross section and time series stock return prediction, we perform a comparative analysis of methods in the machine learning repertoire, including generalized linear models, dimension reduction, boosted regression trees, random forests, and neural networks. At the broadest level, we find that machine learning offers an improved description of expected return behavior relative to traditional forecasting methods. Our implementation establishes a new standard for accuracy in measuring risk premia summarized by an unprecedented out-of-sample return prediction R2. We identify the best performing methods (trees and neural nets) and trace their predictive gains to allowance of nonlinear predictor interactions that are missed by other methods. Lastly, we find that all methods agree on the same small set of dominant predictive signals that includes variations on momentum, liquidity, and volatility. Improved risk premia measurement through machine learning can simplify the investigation into economic mechanisms of asset pricing and justifies its growing role in innovative financial technologies.
    Keywords: Machine Learning, Big Data, Return Prediction, Cross-Section of Returns, Ridge Regression, (Group) Lasso, Elastic Net, Random Forest, Gradient Boosting, (Deep) Neural Networks, Fintech
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1871&r=all
  10. By: Evanthia Chatzitzisi (Department of Economics, University of Macedonia); Stilianos Fountas (Department of Economics, University of Macedonia); Theodore Panagiotidis (Department of Economics, University of Macedonia)
    Abstract: We employ daily aggregate and sectoral S&P500 data to shed further light on the day-of-the-week anomaly using GARCH and EGARCH models. We obtain the following results: First, there is strong evidence for day-of-the-week effects in all sectors, implying that these effects are part of a wide phenomenon affecting the entire market structure. Second, using rolling regressions, we find that significant seasonality represents a small proportion of the total sample. Third, using a logit setup, we examine the impact of four factors, namely recessions, uncertainty, trading volume and bearish sentiment, on seasonality. We find that recessions and uncertainty have explanatory power for anomalies whereas trading volume does not.
    Keywords: day-of-the-week effect, GARCH, calendar anomalies, S&P500 Index, sectors, rolling regression, logit.
    JEL: C32
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:mcd:mcddps:2019_02&r=all
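As a descriptive first pass at a day-of-the-week effect, before any of the GARCH/EGARCH modelling the paper actually performs, one can simply average returns by weekday. A stdlib sketch of that preliminary check (illustrative only):

```python
def weekday_means(returns, weekdays):
    """Average return per trading weekday (0=Mon .. 4=Fri); a descriptive
    first check for a day-of-the-week effect."""
    sums = {d: 0.0 for d in range(5)}
    counts = {d: 0 for d in range(5)}
    for r, d in zip(returns, weekdays):
        sums[d] += r
        counts[d] += 1
    # Only report weekdays that actually occur in the sample.
    return {d: sums[d] / counts[d] for d in range(5) if counts[d] > 0}
```

Differences across these means motivate, but do not establish, a calendar anomaly; the conditional-variance dynamics require the GARCH-type models used in the paper.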
  11. By: BOCART Fabian Y.R.P. (Artnet, New York); GHYSELS Eric (Kenan-Flagler Business School); HAFNER Christian (ISBA and CORE, Université catholique de Louvain)
    Abstract: We provide an innovative methodological contribution to the measurement of returns on infrequently traded assets using a novel approach to repeat-sales regression estimation. The model for price indices we propose allows for correlation with other markets, typically with higher liquidity and high frequency trading. Using the new econometric approach, we propose a monthly art market index, as well as sub-indices from Impressionist, Modern, Post-War, and Contemporary paintings based on repeated sales at a monthly frequency. The correlations enable us to update the art index via observed transactions in other markets that have a link with the art market.
    Keywords: art index, repeated sales, correlation
    JEL: C14 C43 Z11
    Date: 2018–09–10
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2018028&r=all
  12. By: Amin Azari
    Abstract: Bitcoin is considered the most valuable currency in the world. Besides being highly valuable, its value has also experienced a steep increase, from around 1 dollar in 2010 to around 18,000 dollars in 2017. As a result, it has attracted considerable attention in recent years in a diverse set of fields, including economics and computer science. The former mainly focuses on studying how it affects the market, determining the reasons behind its price fluctuations, and predicting its future prices. The latter mainly focuses on its vulnerabilities, scalability, and other techno-crypto-economic issues. Here, we aim at revealing the usefulness of the traditional autoregressive integrated moving average (ARIMA) model in predicting the future value of bitcoin by analyzing the price time series over a 3-year period. On the one hand, our empirical studies reveal that this simple scheme is efficient in sub-periods in which the behavior of the time series is almost unchanged, especially when it is used for short-term prediction, e.g., one day ahead. On the other hand, when we train the ARIMA model over the full 3-year period, during which the bitcoin price has experienced different behaviors, or when we use it for long-term prediction, we observe that it introduces large prediction errors. In particular, the ARIMA model is unable to capture the sharp fluctuations in the price, e.g., the volatility at the end of 2017. This calls for more features to be extracted and used along with the price for a more accurate prediction. We further investigate bitcoin price prediction using an ARIMA model trained over a large dataset, together with a limited test window of the bitcoin price of length $w$, as inputs. Our study sheds light on the interaction between prediction accuracy, the choice of $(p, d, q)$, and the window size $w$.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.05315&r=all
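A stripped-down member of the ARIMA family discussed above is an AR(1) fitted to log returns, i.e. an ARIMA(1,1,0) on log prices. The closed-form OLS sketch below is illustrative only and does not reproduce the paper's $(p,d,q)$ selection or its rolling test window:

```python
import math

def log_returns(prices):
    """Differenced log prices: z_t = log(p_t / p_{t-1})."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def fit_ar1(z):
    """Closed-form OLS fit of z_t = c + phi * z_{t-1}; an AR(1) on
    differenced logs is the d=1 special case of the ARIMA family."""
    x, y = z[:-1], z[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast_next_price(prices):
    """One-step-ahead price forecast from the fitted AR(1) on log returns."""
    z = log_returns(prices)
    c, phi = fit_ar1(z)
    return prices[-1] * math.exp(c + phi * z[-1])
```

Consistent with the abstract's finding, such a model tracks calm sub-periods but has no mechanism for regime shifts like the late-2017 volatility burst.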
  13. By: Crump, Richard K. (Federal Reserve Bank of New York); Gospodinov, Nikolay (Federal Reserve Bank of Atlanta)
    Abstract: We investigate the factor structure of the term structure of interest rates and argue that characterizing the minimal dimension of the data-generating process is more challenging than currently appreciated. To circumvent these difficulties, we introduce a novel nonparametric bootstrap that is robust to general forms of time and cross-sectional dependence and conditional heteroskedasticity of unknown form. We show that our bootstrap procedure is asymptotically valid and exhibits excellent finite-sample properties in simulations. We demonstrate the applicability of our results in two empirical exercises: First, we show that measures of equity market tail risk and the state of the macroeconomy predict bond returns beyond the level or slope of the yield curve; second, we provide a bootstrap-based bias correction and confidence intervals for the probability of recession based on the shape of the yield curve. Our results apply more generally to all assets with a finite maturity structure.
    Keywords: term structure of interest rates; factor models; principal components; bond risk premiums; resampling-based inference
    Date: 2019–04–01
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:884&r=all

This nep-ets issue is ©2019 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.