nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒06‒27
23 papers chosen by
Sune Karlsson
Örebro universitet

  1. Heteroskedastic Proxy Vector Autoregressions: Testing for Time-Varying Impulse Responses in the Presence of Multiple Proxies By Martin Bruns; Helmut Lütkepohl
  2. Further Improvements of Finite Sample Approximation of Central Limit Theorems for Envelopment Estimators By Léopold Simar; Valentin Zelenyuk; Shirong Zhao
  3. Computation for latent variable model estimation: a unified stochastic proximal framework By Zhang, Siliang; Chen, Yunxiao
  4. Estimation and Inference by Stochastic Optimization By Jean-Jacques Forneron
  5. A Robust Permutation Test for Subvector Inference in Linear Regressions By Xavier D'Haultf{\oe}uille; Purevdorj Tuvaandorj
  6. EM estimation for the bivariate mixed exponential regression model By Chen, Zezhun; Dassios, Angelos; Tzougas, George
  7. Leverage, Influence, and the Jackknife in Clustered Regression Models: Reliable Inference Using summclust By James G. MacKinnon; Morten {\O}rregaard Nielsen; Matthew D. Webb
  8. Proportional Incremental Cost Probability Functions and their Frontiers By Fève, Frédérique; Florens, Jean-Pierre; Simar, Léopold
  9. Robust and Agnostic Learning of Conditional Distributional Treatment Effects By Nathan Kallus; Miruna Oprescu
  10. The Use and Mis-Use of SVARs for Validating DSGE Models By Paul Levine; Joseph Pearlman; Alessio Volpicella; Bo Yang
  11. We modeled long memory with just one lag! By Bauwens, Luc; Chevillon, Guillaume; Laurent, Sébastien
  12. Spillover Effects in Empirical Corporate Finance: Choosing the Proxy for the Treatment Intensity By Fabiana Gomez; David Pacini
  13. A single risk approach to the semiparametric copula competing risks model By Simon M. S. Lo; Ralf A. Wilke
  14. Nonparametric Identification of Incomplete Information Discrete Games with Non-equilibrium Behaviors By Erhao Xie
  15. Parameters identification for an inverse problem arising from a binary option using a Bayesian inference approach By Yasushi Ota; Yu Jiang; Daiki Maki
  16. Nowcasting Growth using Google Trends Data: A Bayesian Structural Time Series Model By Bhattacharjee, Arnab; Kohns, David
  17. 2SLS with Multiple Treatments By Manudeep Bhuller; Henrik Sigstad
  18. Efficient Score Computation and Expectation-Maximization Algorithm in Regime-Switching Models By Chaojun Li; Shi Qiu
  19. Stochastic Frontier Analysis for Healthcare, with Illustrations in R By Robin C. Sickles; Zhichao Wang; Valentin Zelenyuk
  20. The transmission of financial shocks and leverage of financial institutions: An endogenous regime switching framework By Kirstin Hubrich; Daniel F. Waggoner
  21. Graph-Based Methods for Discrete Choice By Kiran Tomlinson; Austin R. Benson
  22. Confidence Intervals for Recursive Journal Impact Factors By Johannes König; David I. Stern; Richard S.J. Tol
  23. HARNet: A Convolutional Neural Network for Realized Volatility Forecasting By Rafael Reisenhofer; Xandro Bayer; Nikolaus Hautsch

  1. By: Martin Bruns; Helmut Lütkepohl
    Abstract: We propose a test for time-varying impulse responses in heteroskedastic structural vector autoregressions that can be used when the shocks are identified by external proxy variables as a group. The test can be used even if the shocks are not identified individually. The asymptotic analysis is supported by small sample simulations which show good properties of the test. An investigation of the impact of productivity shocks in a small macroeconomic model for the U.S. illustrates the importance of the issue for empirical work.
    Keywords: Structural vector autoregression, proxy VAR, heteroskedasticity, productivity shocks
    JEL: C32
    Date: 2022
  2. By: Léopold Simar (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain, Voie du Roman Pays 20, B1348 Louvain-la-Neuve, Belgium); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia); Shirong Zhao (School of Finance, Dongbei University of Finance and Economics, Dalian, Liaoning 116025)
    Abstract: A simple yet easy-to-implement method is proposed to further improve the finite sample approximation of the recently developed central limit theorems for aggregates of envelopment estimators. Focusing on the simple mean efficiency, we propose using the bias-corrected individual efficiency estimates to improve the variance estimator. Extensive Monte Carlo experiments confirm that, for relatively small sample sizes (≤ 100), in both low and especially high dimensions, our new method combined with the data sharpening method generally provides better ‘coverage’ (of the true values by the estimated confidence intervals) than the previously developed approaches.
    Keywords: Efficiency, Non-parametric Efficiency Estimators, Data Envelopment Analysis, Free Disposal Hull
    JEL: C1 C3
    Date: 2022–04
  3. By: Zhang, Siliang; Chen, Yunxiao
    Abstract: Latent variable models have been playing a central role in psychometrics and related fields. In many modern applications, the inference based on latent variable models involves one or several of the following features: (1) the presence of many latent variables, (2) the observed and latent variables being continuous, discrete, or a combination of both, (3) constraints on parameters, and (4) penalties on parameters to impose model parsimony. The estimation often involves maximizing an objective function based on a marginal likelihood/pseudo-likelihood, possibly with constraints and/or penalties on parameters. Solving this optimization problem is highly non-trivial, due to the complexities brought by the features mentioned above. Although several efficient algorithms have been proposed, a unified computational framework that takes all these features into account has been lacking. In this paper, we fill that gap. Specifically, we provide a unified formulation for the optimization problem and then propose a quasi-Newton stochastic proximal algorithm. Theoretical properties of the proposed algorithms are established. The computational efficiency and robustness are shown by simulation studies under various settings for latent variable model estimation.
    Keywords: latent variable models; penalized estimator; stochastic approximation; proximal algorithm; quasi-Newton methods; Polyak-Ruppert averaging
    JEL: C1
    Date: 2022–05–07
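    As a rough illustration of the kind of stochastic proximal update the abstract describes, here is a toy Python sketch for a lasso-penalized least-squares problem with Polyak-Ruppert averaging. The function names, batch size, and step-size schedule are illustrative assumptions; the paper's unified framework covers far more general latent variable objectives and a quasi-Newton variant.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (the L1 penalty)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def stochastic_proximal_lasso(X, y, lam, n_iter=2000, seed=0):
        """Toy stochastic proximal gradient for lasso-penalized least
        squares: a gradient step on a random mini-batch, then the
        proximal (soft-thresholding) step, with Polyak-Ruppert
        averaging of the iterates."""
        rng = np.random.default_rng(seed)
        n, k = X.shape
        beta = np.zeros(k)
        avg = np.zeros(k)
        for it in range(1, n_iter + 1):
            idx = rng.integers(0, n, size=32)            # mini-batch
            grad = X[idx].T @ (X[idx] @ beta - y[idx]) / len(idx)
            step = 0.5 / (1.0 + 0.01 * it)               # decreasing step size
            beta = soft_threshold(beta - step * grad, step * lam)
            avg += (beta - avg) / it                     # Polyak-Ruppert average
        return avg
    ```

    The proximal step here is the closed-form soft-thresholding operator for the L1 penalty; other penalties or constraints swap in their own proximal operators.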
  4. By: Jean-Jacques Forneron
    Abstract: In non-linear estimations, it is common to assess sampling uncertainty by bootstrap inference. For complex models, this can be computationally intensive. This paper combines optimization with resampling: turning stochastic optimization into a fast resampling device. Two methods are introduced: a resampled Newton-Raphson (rNR) and a resampled quasi-Newton (rqN) algorithm. Both produce draws that can be used to compute consistent estimates, confidence intervals, and standard errors in a single run. The draws are generated by a gradient and Hessian (or an approximation) computed from batches of data that are resampled at each iteration. The proposed methods transition quickly from optimization to resampling when the objective is smooth and strictly convex. Simulated and empirical applications illustrate the properties of the methods on large scale and computationally intensive problems. Comparisons with frequentist and Bayesian methods highlight the features of the algorithms.
    Date: 2022–05
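    To make the idea of "optimization as a resampling device" concrete: in the special, quadratic case of OLS, a full Newton step on a resampled batch lands exactly on that resample's estimate, so the iterates are bootstrap draws. The sketch below illustrates only this degenerate case (the paper's rNR and rqN algorithms target general non-linear objectives, where this equivalence is only approximate); the function name is ours.

    ```python
    import numpy as np

    def resampled_newton_ols(y, X, n_draws=500, seed=0):
        """Each iteration takes a full Newton step on a freshly
        resampled batch. For the quadratic OLS objective that step
        solves the resampled normal equations exactly, so the iterates
        behave like bootstrap draws; their mean and spread give a point
        estimate and standard errors in a single run."""
        rng = np.random.default_rng(seed)
        n = len(y)
        draws = np.empty((n_draws, X.shape[1]))
        for b in range(n_draws):
            idx = rng.integers(0, n, size=n)   # resample rows with replacement
            Xb, yb = X[idx], y[idx]
            draws[b] = np.linalg.solve(Xb.T @ Xb, Xb.T @ yb)
        return draws.mean(axis=0), draws.std(axis=0, ddof=1)
    ```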
  5. By: Xavier D'Haultf{\oe}uille; Purevdorj Tuvaandorj
    Abstract: We develop a new permutation test for inference on a subvector of coefficients in linear models. The test is exact when the regressors and the error terms are independent. Then, we show that the test is consistent and has power against local alternatives when the independence condition is relaxed, under two main conditions. The first is a slight reinforcement of the usual absence of correlation between the regressors and the error term. The second is that the number of strata, defined by values of the regressors not involved in the subvector test, is small compared to the sample size. Simulations and an empirical illustration suggest that the test has good power in practice.
    Date: 2022–05
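    A stripped-down version of the basic mechanism: permute the regressor of interest and recompute its coefficient to build a reference distribution. This ignores the paper's stratification and studentization refinements and is exact only under full independence of regressors and errors; names and defaults are illustrative.

    ```python
    import numpy as np

    def permutation_test_coef(y, x, Z, n_perm=999, seed=0):
        """Permutation p-value for the coefficient on x in a regression
        of y on a constant, x, and controls Z. Under independence of
        the regressors and the errors, permuting x leaves the null
        distribution of the estimate unchanged."""
        rng = np.random.default_rng(seed)

        def coef(xv):
            X = np.column_stack([np.ones(len(y)), xv, Z])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[1]

        t_obs = abs(coef(x))
        exceed = sum(abs(coef(rng.permutation(x))) >= t_obs
                     for _ in range(n_perm))
        return (1 + exceed) / (1 + n_perm)   # permutation p-value
    ```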
  6. By: Chen, Zezhun; Dassios, Angelos; Tzougas, George
    Abstract: In this paper, we present a new family of bivariate mixed exponential regression models for taking into account the positive correlation between the cost of claims from motor third party liability bodily injury and property damage in a versatile manner. Furthermore, we demonstrate how maximum likelihood estimation of the model parameters can be achieved via a novel Expectation-Maximization algorithm. The implementation of two members of this family, namely the bivariate Pareto (or Exponential-Inverse Gamma) and the bivariate Exponential-Inverse Gaussian regression models, is illustrated by a real data application which involves fitting motor insurance data from a European motor insurance company.
    Keywords: bivariate claim size modeling; regression models for the marginal means and dispersion parameters; motor third party liability insurance; expectation-maximization algorithm
    JEL: C1
    Date: 2022–05–17
  7. By: James G. MacKinnon; Morten {\O}rregaard Nielsen; Matthew D. Webb
    Abstract: Cluster-robust inference is widely used in modern empirical work in economics and many other disciplines. When data are clustered, the key unit of observation is the cluster. We propose measures of "high-leverage" clusters and "influential" clusters for linear regression models. The measures of leverage and partial leverage, and functions of them, can be used as diagnostic tools to identify datasets and regression designs in which cluster-robust inference is likely to be challenging. The measures of influence can provide valuable information about how the results depend on the data in the various clusters. We also show how to calculate two jackknife variance matrix estimators, CV3 and CV3J, as a byproduct of our other computations. All these quantities, including the jackknife variance estimators, are computed in a new Stata package called summclust that summarizes the cluster structure of a dataset.
    Date: 2022–05
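    The leverage and influence measures have simple analogues that can be computed directly; the sketch below is a simplified Python illustration (summclust itself is a Stata package, and we omit the partial-leverage and CV3/CV3J jackknife computations).

    ```python
    import numpy as np

    def cluster_leverage_influence(y, X, cluster):
        """Per-cluster leverage and influence for OLS. Leverage of a
        cluster is the trace of its diagonal block of the hat matrix
        (the traces sum to the number of regressors); influence is the
        change in the coefficient vector when the cluster is deleted
        (leave-one-cluster-out)."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        out = {}
        for g in np.unique(cluster):
            m = cluster == g
            leverage = np.trace(X[m] @ XtX_inv @ X[m].T)
            beta_g = np.linalg.inv(X[~m].T @ X[~m]) @ X[~m].T @ y[~m]
            out[g] = (leverage, beta - beta_g)
        return beta, out
    ```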
  8. By: Fève, Frédérique; Florens, Jean-Pierre; Simar, Léopold (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: The econometric analysis of cost functions is based on the analysis of the conditional distribution of the cost Y given the level of the outputs X ∈ R^p_+ and given a set of environment variables Z ∈ R^d. The model basically describes the conditional distribution of Y given X ≥ x and Z = z. In many applications, the dimension of Z is naturally large, and a fully nonparametric specification of the model is limited by the curse of dimensionality. Most of the approaches so far are based on two-stage estimation when the frontier level does not depend on the value of Z. But even in the case of separability of the frontier, the estimation procedure suffers from several problems, mainly due to the inherent bias of the estimated efficiency scores and the poor rates of convergence of the frontier estimates. In this paper we suggest an alternative semi-parametric model which avoids the drawbacks of the two-stage methods. It is based on a class of models called Proportional Incremental Cost Functions (PICF), adapted to our setup from the Cox proportional hazards models extensively used in survival analysis for duration models. We define the PICF model, then examine its properties and propose a semi-parametric estimation. Modeling in this way, we avoid the first-stage nonparametric estimation of the frontier and avoid the curse of dimensionality, keeping the parametric √n rates of convergence for the parameters of interest. We are also able to derive √n-consistent estimators of the conditional order-m robust frontiers (which, by contrast to the full frontier, may depend on Z) and we prove the Gaussian asymptotic properties of the resulting estimators. We illustrate the flexibility and the power of the procedure with some simulated examples and also with some real data sets.
    Keywords: Cost efficiency ; Nonparametric robust frontier ; Proportional hazard model ; Environmental variables
    JEL: C10 C14 C51 D22
    Date: 2022–05–01
  9. By: Nathan Kallus; Miruna Oprescu
    Abstract: The conditional average treatment effect (CATE) is the best point prediction of individual causal effects given individual baseline covariates and can help personalize treatments. However, as CATE only reflects the (conditional) average, it can wash out potential risks and tail events, which are crucially relevant to treatment choice. In aggregate analyses, this is usually addressed by measuring distributional treatment effect (DTE), such as differences in quantiles or tail expectations between treatment groups. Hypothetically, one can similarly fit covariate-conditional quantile regressions in each treatment group and take their difference, but this would not be robust to misspecification or provide agnostic best-in-class predictions. We provide a new robust and model-agnostic methodology for learning the conditional DTE (CDTE) for a wide class of problems that includes conditional quantile treatment effects, conditional super-quantile treatment effects, and conditional treatment effects on coherent risk measures given by $f$-divergences. Our method is based on constructing a special pseudo-outcome and regressing it on baseline covariates using any given regression learner. Our method is model-agnostic in the sense that it can provide the best projection of CDTE onto the regression model class. Our method is robust in the sense that even if we learn these nuisances nonparametrically at very slow rates, we can still learn CDTEs at rates that depend on the class complexity and even conduct inferences on linear projections of CDTEs. We investigate the performance of our proposal in simulation studies, and we demonstrate its use in a case study of 401(k) eligibility effects on wealth.
    Date: 2022–05
  10. By: Paul Levine (University of Surrey); Joseph Pearlman (City University); Alessio Volpicella (University of Surrey); Bo Yang (Swansea University)
    Abstract: This paper studies the potential ability of an SVAR to match impulse response functions of a well-established estimated DSGE model. We study the invertibility (fundamentalness) problem setting out conditions for the RE solution of a linearized Gaussian NK-DSGE model to be invertible taking into account the information sets of agents. We then estimate an SVAR by generating artificial data from the theoretical model. A measure of approximate invertibility, where information can be imperfect, is constructed. Based on the VAR(1) representation of the DSGE model, we compare three forms of SVAR-identification restrictions; zero, sign and bounds on the forecast error variance, for mapping the reduced form residuals of the empirical model to the structural shocks of interest. Separating out two reasons why SVARs may not recover the impulse responses to structural shocks of the DGP, namely non-invertibility and inappropriate identification restrictions, is then the main objective of the paper.
    JEL: C11 C18 C32 E32
    Date: 2022–06
  11. By: Bauwens, Luc (Université catholique de Louvain, LIDAM/CORE, Belgium); Chevillon, Guillaume; Laurent, Sébastien
    Abstract: We build on two contributions that have found conditions for large dimensional networks or systems to generate long memory in their individual components, and provide a methodology for modeling and forecasting series displaying long range dependence. We model long memory properties within a vector autoregressive system of order 1 and consider Bayesian estimation or ridge regression. For these, we derive a theory-driven parametric setting that informs a prior distribution or a shrinkage target. Our proposal significantly outperforms univariate time series long memory models when forecasting a daily volatility measure for 250 US company stocks, as well as seasonally adjusted monthly streamflow series recorded at 97 locations of the Columbia river basin.
    Keywords: Bayesian estimation ; Ridge regression ; Vector autoregressive model ; Forecasting
    Date: 2022–04–03
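    The shrinkage-toward-a-target mechanism is easy to state: ridge regression with a non-zero target b0 solves min_b ||y − Xb||² + λ||b − b0||². A minimal sketch, with the theory-driven target left as an input (function name is ours):

    ```python
    import numpy as np

    def ridge_to_target(X, y, lam, b0):
        """Ridge regression shrinking toward a target b0 rather than
        zero. First-order condition: (X'X + lam*I) b = X'y + lam*b0."""
        k = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(k),
                               X.T @ y + lam * b0)
    ```

    At λ = 0 this is OLS; as λ grows the estimate collapses onto the target, which in the paper is derived from the theory linking large systems to long memory.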
  12. By: Fabiana Gomez; David Pacini
    Abstract: The existing literature indicates that spillovers lead to a complicated bias in the estimation of treatment effects in empirical corporate finance. We show that, under simple random treatment assignment, such a complicated bias is simplified if the proxy chosen for the group-level treatment intensity is the leave-one-out average treatment. This choice brings two advantages: first, it facilitates the diagnosis of the bias and, second, it facilitates the interpretation of the average spillover effect on the treated. These two advantages justify the use of the leave-one-out average treatment as the preferred proxy for the treatment intensity. We illustrate these advantages in the context of measuring the effect of credit supply contractions on firms’ employment decisions.
    Date: 2022–05–23
  13. By: Simon M. S. Lo; Ralf A. Wilke
    Abstract: A typical situation in competing risks analysis is that the researcher is only interested in a subset of risks. This paper considers a dependent competing risks model in which the distribution of one risk follows a parametric or semi-parametric model, while the model for the other risks is unknown. Identifiability is shown for popular classes of parametric models and for the semiparametric proportional hazards model. The identifiability of the parametric models does not require a covariate, while the semiparametric model requires at least one. Estimation approaches are suggested and shown to be $\sqrt{n}$-consistent. Applicability and attractive finite sample performance are demonstrated with the help of simulations and data examples.
    Date: 2022–05
  14. By: Erhao Xie
    Abstract: In the literature that estimates discrete games with incomplete information, researchers usually impose two assumptions. First, either the payoff function or the distribution of private information or both are restricted to follow some parametric functional forms. Second, players’ behaviors are assumed to be consistent with the Bayesian Nash equilibrium. This paper jointly relaxes both assumptions. The framework non-parametrically specifies both the payoff function and the distribution of private information. In addition, each player’s belief about other players’ behaviors is also modeled as a nonparametric function. I allow this belief function to be any probability distribution over other players’ action sets. This specification nests the equilibrium assumption when each player’s belief corresponds to other players’ actual choice probabilities. It also allows non-equilibrium behaviors when some players’ beliefs are biased or incorrect. Under the above framework, this paper first derives a testable implication of the equilibrium condition. It then obtains the identification results for the payoff function, the belief function and the distribution of private information.
    Keywords: Econometric and statistical methods
    JEL: C57
    Date: 2022–05
  15. By: Yasushi Ota; Yu Jiang; Daiki Maki
    Abstract: The no-arbitrage property provides a simple method for pricing financial derivatives. However, arbitrage opportunities exist among different markets in various fields, even if only for a very short time. Knowing that an arbitrage opportunity exists, we can adopt a financial trading strategy. This paper investigates inverse option problems (IOP) in the extended Black-Scholes model. We identify the model coefficients from the measured data and attempt to find arbitrage opportunities in different financial markets using a Bayesian inference approach, which is presented as an IOP solution. The posterior probability density function of the parameters is computed from the measured data. The statistics of the unknown parameters are estimated by a Markov Chain Monte Carlo (MCMC) algorithm, which explores the posterior state space. The efficient sampling strategy of the MCMC algorithm enables us to solve inverse problems by the Bayesian inference technique. Our numerical results indicate that the Bayesian inference approach can simultaneously estimate the unknown trend and volatility coefficients from the measured data.
    Date: 2022–05
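    A generic random-walk Metropolis sampler of the kind used to explore such posteriors (the paper's likelihood and priors are specific to the extended Black-Scholes setup; this sketch shows only the sampling machinery, with illustrative names and defaults):

    ```python
    import numpy as np

    def random_walk_mh(log_post, theta0, n_iter=5000, step=0.1, seed=0):
        """Random-walk Metropolis: propose theta + step * N(0,1) and
        accept with probability min(1, posterior ratio). Returns the
        chain of draws for a scalar parameter."""
        rng = np.random.default_rng(seed)
        theta, lp = theta0, log_post(theta0)
        draws = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance
                theta, lp = prop, lp_prop
            draws.append(theta)
        return np.array(draws)
    ```

    A usage pattern: pass a log-posterior for, say, a log-volatility parameter, discard a burn-in portion of the chain, and summarize the remaining draws.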
  16. By: Bhattacharjee, Arnab; Kohns, David
    Abstract: This paper investigates the benefits of internet search data in the form of Google Trends for nowcasting real U.S. GDP growth in real time through the lens of mixed frequency Bayesian Structural Time Series (BSTS) models. We augment and enhance both the model and the methodology to make them better suited to nowcasting with a large number of potential covariates. Specifically, we allow state variances to shrink towards zero to avoid overfitting, extend the SSVS (spike-and-slab variable selection) prior to the more flexible normal-inverse-gamma prior, which stays agnostic about the underlying model size, and adapt the horseshoe prior to the BSTS framework. The application to nowcasting GDP growth as well as a simulation study demonstrate that the horseshoe-prior BSTS improves markedly upon the SSVS and the original BSTS model, with the largest gains in dense data-generating processes. Our application also shows that a large dimensional set of search terms is able to improve nowcasts early in a quarter, before other macroeconomic data become available. Search terms with high inclusion probability have good economic interpretation, reflecting leading signals of economic anxiety and wealth effects.
    Keywords: global-local priors, Google trends, non-centred state space, shrinkage
    JEL: C11 C22 C55 E37 E66
    Date: 2022–05
  17. By: Manudeep Bhuller; Henrik Sigstad
    Abstract: We study what two-stage least squares (2SLS) identifies in models with multiple treatments and multiple instruments under treatment effect heterogeneity. Two testable conditions are shown to be necessary and sufficient for 2SLS to identify a positively weighted sum of individual treatment effects: monotonicity and no cross effects. For just-identified models, these conditions imply that (i) each instrument affects exactly one treatment choice and (ii) choice behavior can be described by single-peaked preferences (for ordered treatments) or by preferences where the excluded treatment is always either the best or the next-best alternative (for unordered treatments). For overidentified models, these conditions need to hold only on average across realizations of the instruments. The conditions are satisfied in a single-index threshold-crossing model under an easily testable linearity condition. We illustrate how our results can be used to assess the validity of 2SLS with multiple treatments in applications on the returns to educational choices and feedback effects in judicial decision-making.
    Date: 2022–05
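    For reference, 2SLS with multiple treatments is mechanically the usual projection estimator; what the paper characterizes is when its probability limit is a positively weighted sum of treatment effects. A minimal sketch (illustrative names; W should include a constant):

    ```python
    import numpy as np

    def tsls(y, D, Z, W):
        """Two-stage least squares with (possibly multiple) treatments
        D, excluded instruments Z, and exogenous controls W. Returns
        coefficients on [D, W]."""
        X = np.column_stack([D, W])       # endogenous + exogenous regressors
        Zfull = np.column_stack([Z, W])   # instruments + exogenous regressors
        # first stage: fitted values of each column of X on the instruments
        Xhat = Zfull @ np.linalg.lstsq(Zfull, X, rcond=None)[0]
        # second stage: regress y on the fitted values
        beta, *_ = np.linalg.lstsq(Xhat, y, rcond=None)
        return beta
    ```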
  18. By: Chaojun Li; Shi Qiu
    Abstract: This study proposes an efficient algorithm for score computation in regime-switching models and, derived from it, an efficient expectation-maximization (EM) algorithm. Unlike existing algorithms, this algorithm does not rely on forward-backward filtering for smoothed regime probabilities and involves only forward computation. Moreover, the score algorithm readily extends to computing the Hessian matrix.
    Date: 2022–05
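    The forward-only computation builds on the standard Hamilton filter, which delivers filtered regime probabilities and the likelihood in a single forward pass; a sketch of that building block (not the authors' score or EM algorithm):

    ```python
    import numpy as np

    def hamilton_filter(densities, P, pi0):
        """Forward (Hamilton) filter for a regime-switching model.

        densities: (T, K) conditional densities f(y_t | regime k)
        P: (K, K) transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
        pi0: (K,) initial regime distribution
        Returns filtered regime probabilities and the log-likelihood.
        """
        T, K = densities.shape
        filt = np.zeros((T, K))
        loglik = 0.0
        pred = pi0                      # predicted regime probabilities
        for t in range(T):
            joint = pred * densities[t]
            lik_t = joint.sum()         # likelihood contribution of y_t
            loglik += np.log(lik_t)
            filt[t] = joint / lik_t     # filtered probabilities
            pred = filt[t] @ P          # one-step-ahead prediction
        return filt, loglik
    ```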
  19. By: Robin C. Sickles (Department of Economics, Rice University, Houston, TX 77251-1892, USA); Zhichao Wang (School of Economics, University of Queensland, Brisbane, Qld 4072, Australia); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia)
    Abstract: In this chapter, we provide a brief overview of the stochastic frontier analysis (SFA) in the context of analysing healthcare, with a focus on hospitals, where it has received most attention. We start with the classical SFA model of Aigner, Lovell and Schmidt (1977) and then consider many of its popular extensions and generalizations in both cross-sectional and panel data (mainly published in Journal of Econometrics, Journal of Business & Economic Statistics and Journal of Productivity Analysis). We also briefly discuss semi-parametric and non-parametric generalizations, spatial frontiers, and Bayesian SFA. Whenever possible, we refer the readers to various applications of these general methods to healthcare, and for hospitals in particular. Finally, we also illustrate some of these methods for real data on public hospitals in Queensland, Australia, as well as provide practical guidance and references for their computational implementations via R.
    Keywords: Stochastic frontier analysis, R, healthcare, hospital, Queensland
    Date: 2022–05
  20. By: Kirstin Hubrich; Daniel F. Waggoner
    Abstract: We conduct a novel empirical analysis of the role of leverage of financial institutions for the transmission of financial shocks to the macroeconomy. For that purpose we develop an endogenous regime-switching structural vector autoregressive model with time-varying transition probabilities that depend on the state of the economy. We propose new identification techniques for regime switching models. Recently developed theoretical models emphasize the role of bank balance sheets for the build-up of financial instabilities and the amplification of financial shocks. We build a market-based measure of leverage of financial institutions employing institution-level data and find empirical evidence that real effects of financial shocks are amplified by the leverage of financial institutions in the financial-constraint regime. We also find evidence of heterogeneity in how financial institutions, including depository financial institutions, global systemically important banks and selected nonbank financial institutions, affect the transmission of shocks to the macroeconomy. Our results confirm the leverage ratio as a useful indicator from a policy perspective.
    Keywords: Regime switching models; Time-varying transition probabilities; Financial shocks; Leverage; Bank and nonbank financial institutions; Heterogeneity
    JEL: C11 C32 C53 C55 E44 G21
    Date: 2022–06–01
  21. By: Kiran Tomlinson; Austin R. Benson
    Abstract: Choices made by individuals have widespread impacts: for instance, people choose between political candidates to vote for, between social media posts to share, and between brands to purchase. Moreover, data on these choices are increasingly abundant. Discrete choice models are a key tool for learning individual preferences from such data. Additionally, social factors like conformity and contagion influence individual choice. Existing methods for incorporating these factors into choice models do not account for the entire social network and require hand-crafted features. To overcome these limitations, we use graph learning to study choice in networked contexts. We identify three ways in which graph learning techniques can be used for discrete choice: learning chooser representations, regularizing choice model parameters, and directly constructing predictions from a network. We design methods in each category and test them on real-world choice datasets, including county-level 2016 US election results and Android app installation and usage data. We show that incorporating social network structure can improve the predictions of the standard econometric choice model, the multinomial logit. We provide evidence that app installations are influenced by social context, but we find no such effect on app usage among the same participants, which instead is habit-driven. In the election data, we highlight the additional insights a discrete choice framework provides over classification or regression, the typical approaches. On synthetic data, we demonstrate the sample complexity benefit of using social information in choice models.
    Date: 2022–05
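    The baseline being improved upon, the multinomial logit, has choice probabilities given by a softmax over utilities; a two-line sketch:

    ```python
    import numpy as np

    def mnl_probs(V):
        """Multinomial logit choice probabilities: a row-wise softmax
        over the utility matrix V of shape (n choosers, J alternatives)."""
        e = np.exp(V - V.max(axis=1, keepdims=True))   # stabilized softmax
        return e / e.sum(axis=1, keepdims=True)
    ```

    Graph-based approaches like those in the abstract then enter by parameterizing or regularizing the utilities V using the social network.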
  22. By: Johannes König (Australian National University); David I. Stern (University of Kassel); Richard S.J. Tol (Vrije Universiteit Amsterdam)
    Abstract: We compute confidence intervals for recursive impact factors, which take into account that some citations are more prestigious than others, as well as for the associated ranks of journals, applying the methods to the population of economics journals. The Quarterly Journal of Economics is clearly the journal with the greatest impact; the confidence interval for its rank includes only one. Based on the simple bootstrap, the remainder of the “Top 5” journals are in the top 6 together with the Journal of Finance, while the Xie et al. (2009) and Mogstad et al. (2022) methods generally broaden estimated confidence intervals, particularly for mid-ranking journals. All methods agree that most apparent differences in journal quality are, in fact, insignificant.
    Keywords: Bibliometrics, citation analysis, publishing, bootstrapping
    JEL: C71
    Date: 2022–04–30
  23. By: Rafael Reisenhofer; Xandro Bayer; Nikolaus Hautsch
    Abstract: Despite the impressive success of deep neural networks in many application areas, neural network models have so far not been widely adopted in the context of volatility forecasting. In this work, we aim to bridge the conceptual gap between established time series approaches, such as the Heterogeneous Autoregressive (HAR) model, and state-of-the-art deep neural network models. The newly introduced HARNet is based on a hierarchy of dilated convolutional layers, which facilitates an exponential growth of the receptive field of the model in the number of model parameters. HARNets allow for an explicit initialization scheme such that, before optimization, a HARNet yields the same predictions as the respective baseline HAR model. Particularly when considering the QLIKE error as a loss function, we find that this approach significantly stabilizes the optimization of HARNets. We evaluate the performance of HARNets with respect to three different stock market indexes. Based on this evaluation, we formulate clear guidelines for the optimization of HARNets and show that HARNets can substantially improve upon the forecasting accuracy of their respective HAR baseline models. In a qualitative analysis of the filter weights learnt by a HARNet, we report clear patterns regarding the predictive power of past information. Among information from the previous week, yesterday and the day before, yesterday's volatility contributes by far the most to today's realized volatility forecast. Moreover, within the previous month, the importance of single weeks diminishes almost linearly when moving further into the past.
    Date: 2022–05
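    The HAR baseline that HARNet is initialized to replicate regresses today's realized volatility on daily, weekly (5-day), and monthly (22-day) averages of past volatility; a minimal OLS sketch (function names are illustrative):

    ```python
    import numpy as np

    def har_features(rv):
        """Build the standard HAR regressors from a realized-volatility
        series: constant, yesterday's RV, and 5- and 22-day averages."""
        T = len(rv)
        rows, y = [], []
        for t in range(22, T):
            rows.append([1.0,
                         rv[t - 1],              # daily
                         rv[t - 5:t].mean(),     # weekly
                         rv[t - 22:t].mean()])   # monthly
            y.append(rv[t])
        return np.array(rows), np.array(y)

    def fit_har(rv):
        """OLS fit of the HAR model; returns [const, daily, weekly, monthly]."""
        X, y = har_features(rv)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta
    ```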

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.