nep-ets New Economics Papers
on Econometric Time Series
Issue of 2022‒02‒21
eight papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. A machine learning search for optimal GARCH parameters By Luke De Clerk; Sergey Savel'ev
  2. A New Test for Multiple Predictive Regression By Ke-Li Xu; Junjie Guo
  3. Stationary GE-Process and its Application in Analyzing Gold Price Data By Debasis Kundu
  4. A Test for Kronecker Product Structure Covariance Matrix By Patrik Guggenberger; Frank Kleibergen; Sophocles Mavroeidis
  5. Determining the number of factors in high-dimensional generalized latent factor models By Chen, Yunxiao; Li, Xiaoou
  6. The Prior Adaptive Group Lasso and the Factor Zoo By Kristoffer Pons Bertelsen
  7. Asymptotic properties of Bayesian inference in linear regression with a structural break By Kenichi Shimizu
  8. High-dimensional, multiscale online changepoint detection By Chen, Yudong; Wang, Tengyao; Samworth, Richard J.

  1. By: Luke De Clerk; Sergey Savel'ev
    Abstract: Here, we use Machine Learning (ML) algorithms to improve the efficiency of fitting GARCH model parameters to empirical data. We employ an Artificial Neural Network (ANN) to predict the parameters of these models. We present a fitting algorithm for GARCH-normal(1,1) models that uses the ANN to predict one of the model's parameters, $\alpha_1$, and then uses the analytical expressions for the fourth-order standardised moment, $\Gamma_4$, and the unconditional second-order moment, $\sigma^2$, to fit the other two parameters, $\beta_1$ and $\alpha_0$, respectively. The speed and ease of implementation of this approach allow for real-time tracking of GARCH parameters. We further show that different inputs to the ANN, namely higher-order standardised moments and the autocovariance of the time series, can also be used for fitting model parameters, though not always with the same level of accuracy.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.03286&r=
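    A minimal Python sketch of the moment-based back-out step described above, assuming the textbook closed-form expressions for the unconditional variance and kurtosis of a GARCH-normal(1,1) process; the fixed value of alpha1 merely stands in for the ANN prediction and is not taken from the paper:

      import numpy as np

      def garch11_from_moments(alpha1, gamma4, sigma2):
          """Back out (alpha0, beta1) of a GARCH-normal(1,1) from alpha1, the
          fourth standardised moment gamma4 and the unconditional variance
          sigma2, using the standard moment formulas
              sigma2 = alpha0 / (1 - alpha1 - beta1)
              gamma4 = 3*(1 - (alpha1 + beta1)**2)
                         / (1 - (alpha1 + beta1)**2 - 2*alpha1**2)."""
          if gamma4 <= 3:
              raise ValueError("gamma4 must exceed 3 (excess kurtosis required)")
          # Solve the kurtosis equation for s = (alpha1 + beta1)**2.
          s = 1.0 - 2.0 * gamma4 * alpha1**2 / (gamma4 - 3.0)
          if not 0.0 < s < 1.0:
              raise ValueError("no stationary solution for these inputs")
          beta1 = np.sqrt(s) - alpha1
          alpha0 = sigma2 * (1.0 - alpha1 - beta1)
          return alpha0, beta1

      # Illustration: alpha1 would come from the ANN in the paper; gamma4 and
      # sigma2 are the empirical kurtosis and variance of the return series.
      returns = np.random.default_rng(0).standard_t(df=8, size=5000) * 0.01
      alpha1_hat = 0.08                      # placeholder for the ANN output
      gamma4_hat = np.mean(returns**4) / np.var(returns)**2
      sigma2_hat = np.var(returns)
      print(garch11_from_moments(alpha1_hat, gamma4_hat, sigma2_hat))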
  2. By: Ke-Li Xu (Department of Economics, Indiana University); Junjie Guo (School of Finance, Central University of Finance and Economics, Beijing, China)
    Abstract: We consider inference for predictive regressions with multiple predictors. Extant tests for predictability may perform unsatisfactorily and tend to discover spurious predictability as the number of predictors increases. We propose a battery of new instrumental-variables based tests which involve enforcement or partial enforcement of the null hypothesis in variance estimation. A test based on the few-predictors-at-a-time parsimonious system approach is recommended. Monte Carlo experiments demonstrate remarkable finite-sample performance regardless of the number of predictors and their persistence properties. An empirical application to equity premium predictability is provided.
    Keywords: Curse of dimensionality, Lagrange-multipliers test, persistence, predictive regression, return predictability
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2022001&r=
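    The over-rejection that motivates these tests can be probed with a short simulation. The sketch below (a stylized illustration, not the authors' instrumental-variables procedure) generates returns that are unpredictable by construction, with persistent predictors whose innovations are correlated with the return shock, and reports how often a conventional F-test rejects the null of no predictability at the 5% level:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def rejection_rate(n_obs=200, n_pred=10, rho=0.98, corr=-0.95,
                         n_sim=2000, level=0.05):
          """Share of simulations in which an OLS F-test rejects no-predictability
          even though all true predictive coefficients are zero.  Predictors are
          persistent AR(1) processes whose innovations share a common shock that
          is correlated with the return shock (Stambaugh-type endogeneity)."""
          rejections = 0
          for _ in range(n_sim):
              common = rng.standard_normal(n_obs + 1)
              eta = rng.standard_normal((n_obs + 1, n_pred))
              v = np.sqrt(0.5) * common[:, None] + np.sqrt(0.5) * eta  # predictor shocks
              u = corr * common + np.sqrt(1 - corr**2) * rng.standard_normal(n_obs + 1)
              x = np.zeros((n_obs + 1, n_pred))
              for t in range(1, n_obs + 1):
                  x[t] = rho * x[t - 1] + v[t]
              y = u[1:]                                      # returns: beta = 0
              X = np.column_stack([np.ones(n_obs), x[:-1]])  # lagged predictors
              beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
              rss1 = np.sum((y - X @ beta_hat) ** 2)
              rss0 = np.sum((y - y.mean()) ** 2)
              f = ((rss0 - rss1) / n_pred) / (rss1 / (n_obs - n_pred - 1))
              if f > stats.f.ppf(1 - level, n_pred, n_obs - n_pred - 1):
                  rejections += 1
          return rejections / n_sim

      for k in (1, 5, 10):
          print(f"{k:2d} persistent predictors: rejection rate "
                f"{rejection_rate(n_pred=k):.3f} (nominal 0.05)")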
  3. By: Debasis Kundu
    Abstract: In this paper we introduce a new discrete-time, continuous-state-space stationary process $\{X_n; n = 1, 2, \ldots \}$ such that $X_n$ follows a two-parameter generalized exponential (GE) distribution. Joint distribution functions, characterizations and some dependence properties of this new process are investigated. The GE-process has three unknown parameters, two shape parameters and one scale parameter, and for this reason it is more flexible than the existing exponential process. In the presence of the scale parameter, if the two shape parameters are equal, the maximum likelihood estimators of the unknown parameters can be obtained by solving one non-linear equation; if the two shape parameters are arbitrary, they can be obtained by solving a two-dimensional optimization problem. Two synthetic data sets and one real gold-price data set are analyzed to illustrate the performance of the proposed model in practice. Finally, some generalizations are indicated.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.02568&r=
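    The marginal of the process is the two-parameter generalized exponential distribution $F(x; \alpha, \lambda) = (1 - e^{-\lambda x})^{\alpha}$. As a small illustration of the kind of likelihood computation involved (fitting only the marginal GE distribution, not the authors' stationary GE-process), the sketch below obtains maximum likelihood estimates by profiling out the shape parameter, whose MLE has a closed form for fixed scale, so that only a one-dimensional search remains:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def fit_ge(x):
          """Maximum-likelihood fit of the generalized exponential distribution
          F(x) = (1 - exp(-lam*x))**alpha, x > 0.  For fixed lam the MLE of
          alpha is alpha_hat(lam) = -n / sum(log(1 - exp(-lam*x_i))), so only a
          one-dimensional search over lam is needed."""
          x = np.asarray(x, dtype=float)
          n = len(x)

          def alpha_hat(lam):
              return -n / np.sum(np.log1p(-np.exp(-lam * x)))

          def neg_profile_loglik(lam):
              a = alpha_hat(lam)
              s = np.sum(np.log1p(-np.exp(-lam * x)))
              return -(n * np.log(a) + n * np.log(lam) - lam * x.sum() + (a - 1) * s)

          # Heuristic search interval for the scale parameter.
          res = minimize_scalar(neg_profile_loglik, bounds=(1e-6, 100.0 / x.mean()),
                                method="bounded")
          return alpha_hat(res.x), res.x

      # Check on synthetic GE data via inverse-CDF sampling.
      rng = np.random.default_rng(0)
      alpha_true, lam_true = 2.0, 1.5
      u = rng.uniform(size=2000)
      sample = -np.log1p(-u ** (1.0 / alpha_true)) / lam_true
      print(fit_ge(sample))   # should be close to (2.0, 1.5)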
  4. By: Patrik Guggenberger; Frank Kleibergen; Sophocles Mavroeidis
    Abstract: We propose a test for a covariance matrix to have Kronecker Product Structure (KPS). KPS implies a reduced rank restriction on a certain transformation of the covariance matrix, and the new procedure is an adaptation of the Kleibergen and Paap (2006) reduced rank test. Deriving the limiting distribution of the Wald-type test statistic proves challenging, partly because of the singularity of the covariance matrix estimator that appears in the weighting matrix. We show that the test statistic has a $\chi^2$ limiting null distribution with degrees of freedom equal to the number of restrictions tested. Local asymptotic power results are derived. Monte Carlo simulations reveal good size and power properties of the test. Re-examining fifteen highly cited papers conducting instrumental variable regressions, we find that KPS is not rejected in 56 out of 118 specifications at the 5% nominal size.
    Date: 2021–12–06
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:962&r=
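    The reduced rank restriction mentioned in the abstract can be made concrete through the Van Loan-Pitsianis rearrangement: a covariance matrix equals a Kronecker product A ⊗ B exactly when its rearranged version has rank one. The sketch below (an illustration of that restriction only, not the authors' Wald test or its limiting theory) rearranges a covariance matrix and inspects the singular values:

      import numpy as np

      def rearrange(sigma, p, q):
          """Van Loan-Pitsianis rearrangement: view the (p*q) x (p*q) matrix as
          a p x p grid of q x q blocks and stack vec(block_ij) as the rows of a
          p**2 x q**2 matrix.  sigma = kron(A, B)  <=>  the result has rank 1."""
          rows = []
          for i in range(p):
              for j in range(p):
                  block = sigma[i * q:(i + 1) * q, j * q:(j + 1) * q]
                  rows.append(block.reshape(-1, order="F"))   # vec of the block
          return np.vstack(rows)

      rng = np.random.default_rng(0)
      p, q = 3, 4

      # A covariance matrix with exact Kronecker product structure ...
      A = np.cov(rng.standard_normal((p, 50)))
      B = np.cov(rng.standard_normal((q, 50)))
      kps = np.kron(A, B)

      # ... and an unstructured covariance matrix of the same dimension.
      M = rng.standard_normal((p * q, 2 * p * q))
      unstructured = M @ M.T / (2 * p * q)

      for name, sigma in [("KPS", kps), ("unstructured", unstructured)]:
          s = np.linalg.svd(rearrange(sigma, p, q), compute_uv=False)
          # Under KPS every singular value beyond the first is (numerically) zero;
          # the paper turns this restriction into a formal Wald-type statistic.
          print(f"{name:12s} second/first singular value = {s[1] / s[0]:.2e}")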
  5. By: Chen, Yunxiao; Li, Xiaoou
    Abstract: As a generalization of the classical linear factor model, generalized latent factor models are useful for analysing multivariate data of different types, including binary choices and counts. This paper proposes an information criterion to determine the number of factors in generalized latent factor models. The consistency of the proposed information criterion is established under a high-dimensional setting, where both the sample size and the number of manifest variables grow to infinity, and data may have many missing values. An error bound is established for the parameter estimates, which plays an important role in establishing the consistency of the proposed information criterion. This error bound improves several existing results and may be of independent theoretical interest. We evaluate the proposed method by a simulation study and an application to Eysenck’s personality questionnaire.
    Keywords: generalized latent factor model; joint maximum likelihood estimator; high-dimensional data; information criteria; selection consistency
    JEL: C1
    Date: 2021–08–23
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:111574&r=
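    For intuition, the sketch below applies the same estimate-then-penalize logic in the simplest linear Gaussian case: rank-k fits are computed by singular value decomposition and the selected number of factors minimizes a fit-plus-penalty criterion whose penalty grows with both the sample size and the number of manifest variables. This is only a stylized analogue in the spirit of the classical linear criteria, not the joint-maximum-likelihood criterion of the paper, which also covers binary and count data and missing values:

      import numpy as np

      def choose_num_factors(Y, k_max=10):
          """Select the number of factors by minimizing
              IC(k) = log(SSR_k / (n*p)) + k * (n + p) / (n*p) * log(min(n, p)),
          where SSR_k is the residual sum of squares of the best rank-k fit."""
          n, p = Y.shape
          Yc = Y - Y.mean(axis=0)
          s = np.linalg.svd(Yc, compute_uv=False)   # singular values of the data
          total = np.sum(s ** 2)
          ics = []
          for k in range(1, k_max + 1):
              ssr = total - np.sum(s[:k] ** 2)      # residual of best rank-k fit
              penalty = k * (n + p) / (n * p) * np.log(min(n, p))
              ics.append(np.log(ssr / (n * p)) + penalty)
          return int(np.argmin(ics)) + 1

      # Synthetic check: 3 latent factors, 200 observations, 100 manifest variables.
      rng = np.random.default_rng(0)
      n, p, k_true = 200, 100, 3
      F = rng.standard_normal((n, k_true))
      L = rng.standard_normal((p, k_true))
      Y = F @ L.T + rng.standard_normal((n, p))
      print(choose_num_factors(Y))   # expected: 3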
  6. By: Kristoffer Pons Bertelsen (Aarhus University and CREATES)
    Abstract: This paper develops and presents the prior adaptive group lasso (pag-lasso) for generalized linear models. The pag-lasso is an extension of the prior lasso, which allows existing information to be used in the lasso estimation. We show that the estimator exhibits properties similar to the adaptive group lasso. The performance of the pag-lasso estimator is illustrated in a Monte Carlo study. The estimator is used to select the set of relevant risk factors in asset pricing models while requiring that the chosen factors price the test assets as well as the unselected factors. The study shows that the pag-lasso yields a set of factors that explains the time variation in the returns while delivering estimated pricing errors close to zero. We find that canonical low-dimensional factor models from the asset pricing literature are insufficient to price the cross section of the test assets together with the remaining traded factors; the number of pricing factors required at any given time is closer to 20.
    Keywords: Asset Pricing, Factor Selection, Factor Zoo, High-Dimensional Modeling, Prior Information, Variable Selection
    JEL: C13 C33 C38 C51 C55 C58 G12
    Date: 2022–01–24
    URL: http://d.repec.org/n?u=RePEc:aah:create:2022-05&r=
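    One way to see how prior information can enter such an estimator is through group-specific penalty weights: groups that prior evidence marks as relevant receive smaller weights and are shrunk less. The sketch below implements a plain weighted group lasso for the Gaussian case by proximal gradient descent; it illustrates that mechanism only and is not the pag-lasso estimator or its tuning as developed in the paper:

      import numpy as np

      def weighted_group_lasso(X, y, groups, weights, lam, n_iter=2000):
          """Minimize  0.5/n * ||y - X b||^2 + lam * sum_g weights[g] * ||b_g||_2
          by proximal gradient descent (ISTA) with block soft-thresholding.
          `groups` maps each column of X to a group label; a smaller weights[g]
          (e.g. reflecting prior evidence for group g) means weaker shrinkage."""
          n, p = X.shape
          b = np.zeros(p)
          step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
          labels = np.unique(groups)
          for _ in range(n_iter):
              grad = -X.T @ (y - X @ b) / n
              z = b - step * grad
              for g in labels:
                  idx = groups == g
                  norm = np.linalg.norm(z[idx])
                  shrink = max(0.0, 1.0 - step * lam * weights[g] / norm) if norm > 0 else 0.0
                  b[idx] = shrink * z[idx]
          return b

      # Toy example: 5 groups of 4 predictors, only group 0 is truly active.
      rng = np.random.default_rng(0)
      n, p = 200, 20
      X = rng.standard_normal((n, p))
      groups = np.repeat(np.arange(5), 4)
      beta_true = np.zeros(p); beta_true[:4] = 1.0
      y = X @ beta_true + rng.standard_normal(n)

      weights = np.ones(5); weights[0] = 0.2   # "prior" evidence favouring group 0
      b_hat = weighted_group_lasso(X, y, groups, weights, lam=0.3)
      print(np.round(b_hat, 2))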
  7. By: Kenichi Shimizu
    Abstract: This paper studies large-sample properties of a Bayesian approach to inference about the slope parameters γ in linear regression models with a structural break. In contrast to the conventional approach to inference about γ, which does not take into account the uncertainty about the unknown break location τ, the Bayesian approach that we consider incorporates such uncertainty. Our main theoretical contribution is a Bernstein-von Mises type theorem (Bayesian asymptotic normality) for γ under a wide class of priors, which essentially indicates an asymptotic equivalence between conventional frequentist and Bayesian inference. Consequently, a frequentist researcher could look at credible intervals of γ to check robustness with respect to the uncertainty about τ. Simulation studies show that the conventional confidence intervals for γ tend to undercover in finite samples, whereas the credible intervals offer more reasonable coverage in general. As the sample size increases, the two methods coincide, as predicted by our theoretical results. Using data from Paye and Timmermann (2006) on stock return prediction, we illustrate that the traditional confidence intervals for γ might understate the true sampling uncertainty.
    Keywords: Structural break, Bernstein-von Mises theorem, Sensitivity check, Model averaging
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2022_05&r=
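    The model-averaging logic can be illustrated in a small Gaussian example: place a discrete uniform prior on the break date, weight each candidate date by its profile likelihood, and mix the conditional posteriors of the slope with those weights, so that break-date uncertainty widens the interval for the slope. The sketch below uses a normal approximation to each conditional posterior and is a stylized illustration only, not the prior class or the Bernstein-von Mises analysis of the paper:

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated regression with one break in the slope at tau_true.
      n, tau_true = 200, 120
      x = rng.standard_normal(n)
      y = np.where(np.arange(n) < tau_true, 0.5, 1.5) * x + rng.standard_normal(n)

      def fit_at_break(tau):
          """OLS of the two-regime model for a candidate break date tau; returns the
          post-break slope estimate, its variance, and the residual sum of squares."""
          X = np.column_stack([x * (np.arange(n) < tau), x * (np.arange(n) >= tau)])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          ssr = resid @ resid
          var_slope = ssr / (n - 2) * np.linalg.inv(X.T @ X)[1, 1]
          return beta[1], var_slope, ssr

      # Discrete uniform prior over candidate break dates; profile-likelihood
      # weights (the parameter count is the same for every tau, so a BIC-type
      # penalty term would cancel anyway).
      candidates = np.arange(20, n - 20)
      fits = [fit_at_break(tau) for tau in candidates]
      loglik = np.array([-0.5 * n * np.log(ssr / n) for _, _, ssr in fits])
      w = np.exp(loglik - loglik.max())
      w /= w.sum()

      # Model-averaged posterior for the post-break slope: a normal mixture over tau.
      idx = rng.choice(len(candidates), size=20000, p=w)
      means = np.array([b for b, _, _ in fits])
      sds = np.sqrt(np.array([v for _, v, _ in fits]))
      draws = rng.normal(means[idx], sds[idx])
      lo, hi = np.percentile(draws, [2.5, 97.5])
      print(f"posterior mean {draws.mean():.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")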
  8. By: Chen, Yudong; Wang, Tengyao; Samworth, Richard J.
    Abstract: We introduce a new method for high-dimensional, online changepoint detection in settings where a p-variate Gaussian data stream may undergo a change in mean. The procedure works by performing likelihood ratio tests against simple alternatives of different scales in each coordinate, and then aggregating test statistics across scales and coordinates. The algorithm is online in the sense that both its storage requirements and worst-case computational complexity per new observation are independent of the number of previous observations; in practice, it may even be significantly faster than this. We prove that the patience, or average run length under the null, of our procedure is at least at the desired nominal level, and provide guarantees on its response delay under the alternative that depend on the sparsity of the vector of mean change. Simulations confirm the practical effectiveness of our proposal, which is implemented in the R package ocd, and we also demonstrate its utility on a seismology data set.
    Keywords: average run length; detection delay; high-dimensional changepoint detection; online algorithm; sequential method
    JEL: C1
    Date: 2022–01–23
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:113665&r=
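    The R package ocd implements the procedure. As a language-consistent toy (not the ocd algorithm itself or its calibrated thresholds), the Python sketch below keeps a fixed-length buffer, scores the most recent observations at dyadic scales in every coordinate with a Gaussian likelihood-ratio statistic, aggregates by taking the maximum, and flags a change once the aggregate exceeds a threshold; storage per update does not grow with the length of the stream:

      import numpy as np

      class ToyOnlineDetector:
          """Toy high-dimensional online mean-change detector: for each coordinate,
          score the last w observations at dyadic scales w = 1, 2, ..., w_max with
          a Gaussian likelihood-ratio statistic (sum / sqrt(w))**2, aggregate by
          the maximum over coordinates and scales, and flag a change when the
          aggregate exceeds `threshold`.  Storage is O(p * w_max) regardless of
          how long the stream runs."""

          def __init__(self, p, w_max=64, threshold=30.0):
              self.buffer = np.zeros((w_max, p))     # ring buffer of recent data
              self.w_max, self.t = w_max, 0
              self.scales = [2 ** k for k in range(int(np.log2(w_max)) + 1)]
              self.threshold = threshold

          def update(self, x_new):
              """Feed one new p-vector; return True if a mean change is declared."""
              self.buffer[self.t % self.w_max] = x_new
              self.t += 1
              stat = 0.0
              for w in self.scales:
                  if w > self.t:
                      break
                  # Sum of the w most recent observations in every coordinate.
                  idx = np.arange(self.t - w, self.t) % self.w_max
                  s = self.buffer[idx].sum(axis=0) / np.sqrt(w)
                  stat = max(stat, (s ** 2).max())
              return stat > self.threshold

      # Stream with a sparse mean change (first 3 of 100 coordinates) at time 500.
      rng = np.random.default_rng(0)
      p, change_time = 100, 500
      det = ToyOnlineDetector(p)
      for t in range(1, 1001):
          x = rng.standard_normal(p)
          if t > change_time:
              x[:3] += 1.0                           # sparse shift in the mean
          if det.update(x):
              print(f"change declared at time {t} (true change after {change_time})")
              break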

This nep-ets issue is ©2022 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.