nep-ets New Economics Papers
on Econometric Time Series
Issue of 2021‒03‒15
eleven papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Minimax MSE Bounds and Nonlinear VAR Prewhitening for Long-Run Variance Estimation Under Nonstationarity By Alessandro Casini; Pierre Perron
  2. Non-Parametric Estimation of Spot Covariance Matrix with High-Frequency Data By Mustafayeva, Konul; Wang, Weining
  3. Theory of Evolutionary Spectra for Heteroskedasticity and Autocorrelation Robust Inference in Possibly Misspecified and Nonstationary Models By Alessandro Casini
  4. Theory of Low Frequency Contamination from Nonstationarity and Misspecification: Consequences for HAR Inference By Alessandro Casini; Taosong Deng; Pierre Perron
  5. Local asymptotic normality of general conditionally heteroskedastic and score-driven time-series models By Francq, Christian; Zakoian, Jean-Michel
  6. Standing on the Shoulders of Machine Learning: Can We Improve Hypothesis Testing? By Gary Cornwall; Jeff Chen; Beau Sauley
  7. Improved Estimation of Dynamic Models of Conditional Means and Variances By Wang, Weining; Wooldridge, Jeffrey M.; Xu, Mengshan
  8. On Cointegration and Cryptocurrency Dynamics By Keilbar, Georg; Zhang, Yanfen
  9. A data-driven P-spline smoother and the P-Spline-GARCH models By Feng, Yuanhua; Härdle, Wolfgang Karl
  10. Simultaneous Bandwidths Determination for DK-HAC Estimators and Long-Run Variance Estimation in Nonparametric Settings By Federico Belotti; Alessandro Casini; Leopoldo Catania; Stefano Grassi; Pierre Perron
  11. Estimating the causal effect of an intervention in a time series setting: the C-ARIMA approach By Fiammetta Menchetti; Fabrizio Cipollini; Fabrizia Mealli

  1. By: Alessandro Casini; Pierre Perron
    Abstract: We establish new mean-squared error (MSE) bounds for long-run variance (LRV) estimation that are valid for both stationary and nonstationary sequences and are sharper than those previously established. The key element in constructing these bounds is the use of restrictions on the degree of nonstationarity. Unlike previous bounds, whether derived under stationarity or nonstationarity, the new bounds show how nonstationarity influences the bias-variance trade-off and depend on the form of nonstationarity. The bounds are established for double kernel long-run variance estimators; the corresponding bounds for classical long-run variance estimators follow as a special case. We use them to construct new data-dependent methods for selecting the bandwidths of double kernel heteroskedasticity and autocorrelation consistent (DK-HAC) estimators. These account more flexibly for nonstationarity and lead to tests with good finite-sample performance, especially good power when existing LRV estimators lead to tests having little or no power. The second contribution is a nonparametric nonlinear VAR prewhitened LRV estimator. Unlike previous prewhitening procedures, which are known to be unstable, it accounts explicitly for nonstationarity. Its consistency, rate of convergence, and MSE bounds are established. The prewhitened DK-HAC estimators lead to tests with good finite-sample size while maintaining good monotonic power.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.02235&r=all
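    A minimal Python sketch (not from the paper, assuming only numpy) of the stationary baseline it sharpens: a Bartlett-kernel long-run variance estimator with simple linear AR(1) prewhitening. The paper's DK-HAC weights, MSE-optimal bandwidths, and nonlinear VAR prewhitening are not reproduced here; the bandwidth and parameter values below are illustrative.

      import numpy as np

      def bartlett_lrv(v, bandwidth):
          """Classical kernel (Newey-West/Bartlett) long-run variance of a scalar series."""
          v = np.asarray(v, dtype=float) - np.mean(v)
          T = len(v)
          lrv = np.dot(v, v) / T                       # lag-0 autocovariance
          for k in range(1, int(bandwidth) + 1):
              w = 1.0 - k / (bandwidth + 1.0)          # Bartlett kernel weight
              gamma_k = np.dot(v[k:], v[:-k]) / T      # sample autocovariance at lag k
              lrv += 2.0 * w * gamma_k
          return lrv

      def prewhitened_lrv(v, bandwidth):
          """Linear AR(1) prewhitening: fit an AR(1), estimate the residual LRV, recolor."""
          v = np.asarray(v, dtype=float) - np.mean(v)
          rho = np.dot(v[1:], v[:-1]) / np.dot(v[:-1], v[:-1])   # OLS AR(1) coefficient
          resid = v[1:] - rho * v[:-1]
          return bartlett_lrv(resid, bandwidth) / (1.0 - rho) ** 2

      rng = np.random.default_rng(0)
      e = rng.standard_normal(500)
      x = np.empty(500)
      x[0] = e[0]
      for t in range(1, 500):
          x[t] = 0.7 * x[t - 1] + e[t]                 # AR(1); true LRV = 1 / (1 - 0.7)^2
      print(bartlett_lrv(x, 12), prewhitened_lrv(x, 12))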
  2. By: Mustafayeva, Konul; Wang, Weining
    Abstract: Estimating spot covariance is an important issue to study, especially with the increasing availability of high-frequency financial data. We study the estimation of spot covariance using a kernel method for high-frequency data. In particular, we first consider the kernel-weighted version of the realized covariance estimator for a price process governed by a continuous multivariate semimartingale. Next, we extend it to the threshold kernel estimator of the spot covariances when the underlying price process is a discontinuous multivariate semimartingale with finite activity jumps. We derive the asymptotic distribution of the estimators for both fixed and shrinking bandwidth. The estimator in a setting with jumps has the same rate of convergence as the estimator for diffusion processes without jumps. A simulation study examines the finite sample properties of the estimators. In addition, we study an application of the estimator in the context of covariance forecasting. We discover that the forecasting model with our estimator outperforms a benchmark model in the literature.
    Keywords: high-frequency data,kernel estimation,jump,forecasting covariance matrix
    JEL: C00
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020025&r=all
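    A minimal Python sketch (not from the paper, assuming only numpy) of the kernel-weighted realized covariance idea for the continuous case: increments of the log price are weighted by a kernel centered at the target time. The threshold modification for jumps and the bandwidth asymptotics studied in the paper are not reproduced; the simulated diffusion and bandwidth below are illustrative.

      import numpy as np

      def spot_cov(times, logp, tau, h):
          """Kernel-weighted realized covariance (spot covariance) at time tau.
          times: (n+1,) observation times; logp: (n+1, d) log prices; h: bandwidth."""
          dX = np.diff(logp, axis=0)                              # price increments
          t0 = times[:-1]                                         # left endpoint of each increment
          w = np.exp(-0.5 * ((t0 - tau) / h) ** 2) / (h * np.sqrt(2 * np.pi))  # Gaussian kernel
          return (dX * w[:, None]).T @ dX                         # sum_i K_h(t_i - tau) dX_i dX_i'

      # simulated two-asset diffusion with time-varying volatility as a quick check
      rng = np.random.default_rng(1)
      n = 23400                                                   # one observation per second over 6.5 hours
      t = np.linspace(0.0, 1.0, n + 1)
      vol = 0.2 * (1.0 + 0.5 * np.sin(2 * np.pi * t[:-1]))        # deterministic time-varying volatility
      dW = rng.standard_normal((n, 2)) * np.sqrt(1.0 / n)
      dX = vol[:, None] * dW
      dX[:, 1] = 0.6 * dX[:, 0] + np.sqrt(1 - 0.6 ** 2) * dX[:, 1]   # instantaneous correlation 0.6
      X = np.vstack([np.zeros(2), np.cumsum(dX, axis=0)])
      print(spot_cov(t, X, tau=0.5, h=0.05))                      # roughly vol(0.5)^2 * [[1, 0.6], [0.6, 1]]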
  3. By: Alessandro Casini
    Abstract: We develop a theory of evolutionary spectra for heteroskedasticity and autocorrelation robust (HAR) inference when the data may not satisfy second-order stationarity. Nonstationarity is a common feature of economic time series which may arise either from parameter variation or model misspecification. In such a context, the theories that support HAR inference are either not applicable or do not provide accurate approximations. HAR tests standardized by existing long-run variance estimators then may display size distortions and little or no power. This issue can be more severe for methods that use long bandwidths (i.e., fixed-b HAR tests). We introduce a class of nonstationary processes that have a time-varying spectral representation which evolves continuously except at a finite number of time points. We present an extension of the classical heteroskedasticity and autocorrelation consistent (HAC) estimators that applies two smoothing procedures. One is over the lagged autocovariances, akin to classical HAC estimators, and the other is over time. The latter element is important to flexibly account for nonstationarity. We name them double kernel HAC (DK-HAC) estimators. We show the consistency of the estimators and obtain an optimal DK-HAC estimator under the mean squared error (MSE) criterion. Overall, HAR tests standardized by the proposed DK-HAC estimators are competitive with fixed-b HAR tests, when the latter work well, with regards to size control even when there is strong dependence. Notably, in those empirically relevant situations in which previous HAR tests are undersized and have little or no power, the DK-HAC estimator leads to tests that have good size and power.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.02981&r=all
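    A schematic Python sketch (not from the paper, assuming only numpy) of the double smoothing idea: products v_t v_{t-k} are first smoothed over time with one kernel, then the resulting local autocovariances are averaged and weighted over lags with a second kernel. The kernels, normalizations, and bandwidths b1 and b2 below are illustrative and differ from the paper's DK-HAC estimator and its optimal choices.

      import numpy as np

      def double_kernel_lrv(v, b1, b2):
          """Schematic double-kernel LRV estimate: Gaussian smoothing over time,
          Bartlett weighting over lags."""
          v = np.asarray(v, dtype=float)
          v = v - v.mean()
          T = len(v)
          u = np.arange(T) / T                               # rescaled time
          lag_max = int(np.floor(1.0 / b1))
          J = 0.0
          for k in range(-lag_max, lag_max + 1):
              wk = max(0.0, 1.0 - abs(k) * b1)               # Bartlett weight over lags
              if wk == 0.0:
                  continue
              a = abs(k)
              prod = v[a:] * v[:T - a]                       # products v_t v_{t-|k|}
              tp = u[a:]                                     # rescaled time of each product
              c_uk = np.empty(T - a)                         # time-smoothed local autocovariance
              for j, uj in enumerate(tp):
                  w = np.exp(-0.5 * ((tp - uj) / b2) ** 2)   # Gaussian weights over time
                  c_uk[j] = np.sum(w * prod) / np.sum(w)
              J += wk * np.mean(c_uk)                        # average over time, weight over lags
          return J

      rng = np.random.default_rng(2)
      e = rng.standard_normal(400) * np.where(np.arange(400) < 200, 1.0, 2.0)  # variance break
      x = np.empty(400)
      x[0] = e[0]
      for t in range(1, 400):
          x[t] = 0.5 * x[t - 1] + e[t]
      print(double_kernel_lrv(x, b1=0.1, b2=0.08))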
  4. By: Alessandro Casini; Taosong Deng; Pierre Perron
    Abstract: We establish theoretical and analytical results about the low frequency contamination induced by general nonstationarity for estimates such as the sample autocovariance and the periodogram, and deduce consequences for heteroskedasticity and autocorrelation robust (HAR) inference. We show that for short-memory nonstationary data these estimates exhibit features akin to long memory. We present explicit expressions for the asymptotic bias of these estimates. This bias increases with the degree of heterogeneity in the data and is responsible for generating low frequency contamination, or simply for making the time series exhibit long memory features. The sample autocovariances display hyperbolic rather than exponential decay, while the periodogram becomes unbounded near the origin. We distinguish cases where this contamination occurs only as a small-sample problem from cases where the contamination continues to hold asymptotically. We show theoretically that nonparametric smoothing over time is robust to low frequency contamination, in that the sample local autocovariance and the local periodogram are unlikely to exhibit long memory features. Simulations confirm that our theory provides useful approximations. Since the autocovariances and the periodogram are key elements for HAR inference, our results provide new insights on the debate between consistent (small-bandwidth) versus inconsistent (long/fixed-b bandwidth) long-run variance (LRV) estimation for LRV estimation-based inference.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.01604&r=all
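    A quick simulation sketch (not from the paper, assuming only numpy) of the contamination mechanism described above: an i.i.d. series with an unaccounted mean shift, demeaned over the full sample, has sample autocovariances that stay away from zero at long lags and so mimic long memory.

      import numpy as np

      rng = np.random.default_rng(3)
      T = 2000
      eps = rng.standard_normal(T)                      # i.i.d. noise: genuinely short memory
      shift = np.where(np.arange(T) < T // 2, 0.0, 1.0)
      y = eps + shift                                   # short-memory noise plus a mean shift

      def sample_acov(x, k):
          x = x - x.mean()                              # full-sample demeaning, as in standard HAC inputs
          return np.dot(x[k:], x[:-k]) / len(x)

      for k in (1, 10, 50, 100, 200):
          print(k, round(sample_acov(eps, k), 3), round(sample_acov(y, k), 3))
      # the autocovariances of y stay near 0.25 out to long lags, decaying only slowly,
      # while those of eps are negligible at every lag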
  5. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: The paper establishes the Local Asymptotic Normality (LAN) property for general conditionally heteroskedastic time series models of multiplicative form, $\epsilon_t=\sigma_t(\boldsymbol{\theta}_0)\eta_t$, where the volatility $\sigma_t(\boldsymbol{\theta}_0)$ is a parametric function of the past observations $\{\epsilon_{s}, s<t\}$.
    Keywords: APARCH; Asymmetric Student-$t$ distribution; Beta-$t$-GARCH; Conditional heteroskedasticity; LAN in time series; Quadratic mean differentiability.
    JEL: C51
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:106542&r=all
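    A minimal Python sketch (not from the paper, assuming only numpy) of one member of the multiplicative class covered by the result: a GARCH(1,1) volatility with standardized Student-t innovations, simulated in the form epsilon_t = sigma_t(theta_0) * eta_t. Score-driven specifications such as Beta-t-GARCH replace the squared-return recursion below; the parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      T, omega, alpha, beta, nu = 1000, 0.05, 0.10, 0.85, 7.0
      eta = rng.standard_t(nu, size=T) * np.sqrt((nu - 2.0) / nu)   # standardized Student-t, unit variance
      eps = np.empty(T)
      sig2 = np.empty(T)
      sig2[0] = omega / (1.0 - alpha - beta)                        # start at the unconditional variance
      for t in range(T):
          if t > 0:
              sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
          eps[t] = np.sqrt(sig2[t]) * eta[t]                        # multiplicative form eps_t = sigma_t * eta_t
      print(eps.std(), np.sqrt(omega / (1.0 - alpha - beta)))       # sample vs unconditional standard deviation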
  6. By: Gary Cornwall; Jeff Chen; Beau Sauley
    Abstract: In this paper we update the hypothesis testing framework by drawing upon modern computational power and classification models from machine learning. We show that a simple classification algorithm such as a boosted decision stump can be used to fully recover the size-power trade-off for any single test statistic. This recovery implies an equivalence, under certain conditions, between the basic building block of modern machine learning and hypothesis testing. Second, we show that more complex algorithms such as the random forest and the gradient boosted machine can serve as mapping functions in place of the traditional null distribution. This allows multiple test statistics and other information to be evaluated simultaneously, forming a pseudo-composite hypothesis test. Moreover, we show how practitioners can make the relative costs of Type I and Type II errors explicit to contextualize the test within a specific decision framework. To illustrate this approach we revisit the case of testing for unit roots, a difficult problem in time series econometrics for which existing tests are known to exhibit low power. Using a simulation framework common to the literature, we show that this approach can improve the overall accuracy of the traditional unit root test(s) by seventeen percentage points, and the sensitivity by thirty-six percentage points.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.01368&r=all
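    A minimal Python sketch (not from the paper, assuming numpy, statsmodels, and scikit-learn) of the stump-as-test idea applied to unit roots: simulate series under the null (rho = 1) and under stationary alternatives, compute the ADF statistic for each, and let a depth-one decision tree learn a cutoff on that statistic. The paper's random forest and gradient boosting extensions, cost weighting, and simulation design are not reproduced; the accuracy reported below is in-sample.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller
      from sklearn.tree import DecisionTreeClassifier

      def simulate_ar1(rho, T, rng):
          e = rng.standard_normal(T)
          x = np.empty(T)
          x[0] = e[0]
          for t in range(1, T):
              x[t] = rho * x[t - 1] + e[t]
          return x

      rng = np.random.default_rng(5)
      stats, labels = [], []
      for _ in range(500):
          rho = 1.0 if rng.random() < 0.5 else rng.uniform(0.8, 0.99)   # unit root vs stationary alternative
          x = simulate_ar1(rho, 200, rng)
          stats.append(adfuller(x, regression="c", autolag="AIC")[0])   # ADF test statistic only
          labels.append(int(rho < 1.0))                                 # 1 = stationary

      X = np.array(stats).reshape(-1, 1)
      y = np.array(labels)
      stump = DecisionTreeClassifier(max_depth=1).fit(X, y)             # the basic building block: a single stump
      print("learned cutoff on the ADF statistic:", stump.tree_.threshold[0])
      print("in-sample classification accuracy:", stump.score(X, y))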
  7. By: Wang, Weining; Wooldridge, Jeffrey M.; Xu, Mengshan
    Abstract: Modelling dynamic conditional heteroscedasticity is the daily routine in time series econometrics. We propose weighted conditional moment estimation to potentially improve the efficiency of the QMLE (quasi maximum likelihood estimation). The weights of the conditional moments are selected based on the analytical form of the optimal instruments, and we nominally decide the optimal instrument based on the third and fourth moments of the underlying error term. This approach is motivated by the idea of general estimation equations (GEE). We also provide an analysis of the efficiency of QMLE for the location and variance parameters. Simulations and applications are conducted to show the better performance of our estimators.
    JEL: C00
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020021&r=all
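    A minimal Python sketch (not from the paper, assuming only numpy) of the textbook idea behind such efficiency gains: when the conditional variance of the error varies with the regressors, weighting the moment conditions by its inverse (treated as known here for simplicity) yields a more efficient estimator than the unweighted one. The paper's optimal instruments, which also exploit third and fourth moments of the error, are not reproduced.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 2000
      x = rng.uniform(1.0, 3.0, n)
      sigma2 = 0.5 * x ** 2                                  # conditional variance depends on x
      y = 1.0 + 2.0 * x + rng.standard_normal(n) * np.sqrt(sigma2)

      X = np.column_stack([np.ones(n), x])
      beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]        # unweighted moment condition (OLS)

      w = 1.0 / sigma2                                       # weights from the conditional variance
      Xw = X * w[:, None]
      beta_wls = np.linalg.solve(Xw.T @ X, Xw.T @ y)         # weighted conditional moment estimator

      print(beta_ols, beta_wls)                              # both consistent; the weighted one is less variable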
  8. By: Keilbar, Georg; Zhang, Yanfen
    Abstract: This paper aims to model the joint dynamics of cryptocurrencies in a nonstationary setting. In particular, we analyze the role of cointegration relationships within a large system of cryptocurrencies in a vector error correction model (VECM) framework. To enable analysis in a dynamic setting, we propose the COINtensity VECM, a nonlinear VECM specification accounting for a varying system-wide cointegration exposure. Our results show that cryptocurrencies are indeed cointegrated, with a cointegration rank of four. We also find that all currencies are affected by these long-term equilibrium relations. A simple statistical arbitrage trading strategy is proposed, showing great in-sample performance.
    Keywords: Cointegration,VECM,Nonstationarity,Cryptocurrencies
    JEL: C00
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020012&r=all
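    A minimal Python sketch (not from the paper, assuming numpy and statsmodels) of the rank-determination step underlying the analysis: a Johansen trace test on simulated series that share one common stochastic trend. The paper's COINtensity VECM with time-varying cointegration exposure and its trading strategy are not reproduced; the simulated data stand in for cryptocurrency log prices.

      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(7)
      T = 1000
      trend = np.cumsum(rng.standard_normal(T))               # one common stochastic trend
      noise = rng.standard_normal((T, 3))
      series = trend[:, None] + noise                         # three I(1) series sharing the trend

      res = coint_johansen(series, det_order=0, k_ar_diff=1)  # Johansen trace test
      for r, (stat, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):
          print(f"H0: rank <= {r}: trace statistic {stat:.1f} vs 5% critical value {cv:.1f}")
      # three series driven by one common trend imply an expected cointegration rank of 2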
  9. By: Feng, Yuanhua; Härdle, Wolfgang Karl
    Abstract: Penalized spline smoothing of time series and its asymptotic properties are studied. A data-driven algorithm for selecting the smoothing parameter is developed. The proposal is applied to define a semiparametric extension of the well-known Spline-GARCH, called a P-Spline-GARCH, based on the log-data transformation of the squared returns. It is shown that the error process is now exponentially strong mixing with finite moments of all orders. Asymptotic normality of the P-spline smoother in this context is proved. Practical relevance of the proposal is illustrated by data examples and simulation. The proposal is further applied to value at risk and expected shortfall.
    Keywords: P-spline smoother,smoothing parameter selection,P-Spline-GARCH,strong mixing,value at risk,expected shortfall
    JEL: C14 C51
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020016&r=all
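    A minimal Python sketch (not from the paper, assuming only numpy) of a data-driven penalized smoother: the discrete-penalty (Whittaker-type) special case of a P-spline, with the smoothing parameter chosen by generalized cross-validation. The paper's algorithm, its application to log squared returns in the P-Spline-GARCH, and the asymptotic theory are not reproduced; the simulated data are illustrative.

      import numpy as np

      def penalized_smoother(y, lam, d=2):
          """Minimize ||y - f||^2 + lam * ||D_d f||^2; return the fit and its GCV score."""
          n = len(y)
          D = np.diff(np.eye(n), n=d, axis=0)          # d-th order difference matrix
          A = np.eye(n) + lam * D.T @ D
          f = np.linalg.solve(A, y)
          edf = np.trace(np.linalg.inv(A))             # effective degrees of freedom
          rss = np.sum((y - f) ** 2)
          gcv = n * rss / (n - edf) ** 2               # generalized cross-validation score
          return f, gcv

      rng = np.random.default_rng(8)
      n = 300
      t = np.linspace(0.0, 1.0, n)
      y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)

      lams = 10.0 ** np.arange(-1, 6)
      scores = [penalized_smoother(y, lam)[1] for lam in lams]
      print("GCV-selected smoothing parameter:", lams[int(np.argmin(scores))])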
  10. By: Federico Belotti; Alessandro Casini; Leopoldo Catania; Stefano Grassi; Pierre Perron
    Abstract: We consider the derivation of data-dependent simultaneous bandwidths for double kernel heteroskedasticity and autocorrelation consistent (DK-HAC) estimators. In addition to the usual smoothing over lagged autocovariances for classical HAC estimators, the DK-HAC estimator also applies smoothing over the time direction. We obtain the optimal bandwidths that jointly minimize the global asymptotic MSE criterion and discuss the trade-off between bias and variance with respect to smoothing over lagged autocovariances and over time. Unlike the MSE results of Andrews (1991), we establish how nonstationarity affects the bias-variance trade-off. We use the plug-in approach to construct data-dependent bandwidths for the DK-HAC estimators and compare them with the DK-HAC estimators from Casini (2021) that use data-dependent bandwidths obtained from a sequential MSE criterion. The former performs better in terms of size control, especially with stationary and close to stationary data. Finally, we consider long-run variance estimation under the assumption that the series is a function of a nonparametric estimator rather than of a semiparametric estimator that enjoys the usual T^(1/2) rate of convergence. Thus, we also establish the validity of consistent long-run variance estimation in nonparametric parameter estimation settings.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.00060&r=all
  11. By: Fiammetta Menchetti; Fabrizio Cipollini; Fabrizia Mealli
    Abstract: The potential outcomes approach to causal inference, most commonly known as the Rubin Causal Model (RCM), is a framework that allows one to define the causal effect of a treatment (or "intervention") as a contrast of potential outcomes, to discuss the assumptions under which such causal effects can be identified from available data, and to develop methods for estimating causal effects under these assumptions. In recent years, several methods have been developed under the RCM to estimate causal effects in time series settings. However, none of these make use of ARIMA models, which are instead very common in the econometrics literature. In this paper, we propose a novel approach, C-ARIMA, to define and estimate the causal effect of an intervention in a time series setting under the RCM. We check the validity of the proposed method with an extensive simulation study, comparing its performance against a standard intervention analysis approach. In the empirical application, we use C-ARIMA to assess the causal effect of a new price policy on supermarket sales.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.06740&r=all
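    A minimal Python sketch (not from the paper, assuming numpy and statsmodels) of the counterfactual-forecast logic that ARIMA-based intervention analysis shares: fit an ARIMA model on the pre-intervention period only, forecast the post-intervention period, and read the pointwise effect off the difference between observed and forecast values. The paper's C-ARIMA estimands, assumptions, and inference are not reproduced; the data, model order, and effect size below are simulated and illustrative.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(9)
      T, T0, effect = 200, 150, 3.0                            # intervention occurs at t = T0
      e = rng.standard_normal(T)
      y = np.empty(T)
      y[0] = e[0]
      for t in range(1, T):
          y[t] = 0.6 * y[t - 1] + e[t]
      y[T0:] += effect                                         # true additive causal effect

      pre, post = y[:T0], y[T0:]
      fit = ARIMA(pre, order=(1, 0, 0)).fit()                  # model the pre-intervention period only
      counterfactual = fit.get_forecast(steps=T - T0).predicted_mean   # forecast absent the intervention
      tau_hat = post - counterfactual                          # pointwise causal effect estimates
      print("estimated average effect:", tau_hat.mean(), "(true effect:", effect, ")")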

This nep-ets issue is ©2021 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.