nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒06‒19
twenty-two papers chosen by
Sune Karlsson
Örebro universitet

  1. Doubly Robust Uniform Confidence Bands for Group-Time Conditional Average Treatment Effects in Difference-in-Differences By Shunsuke Imai; Lei Qin; Takahide Yanagi
  2. Fast and Order-invariant Inference in Bayesian VARs with Non-Parametric Shocks By Florian Huber; Gary Koop
  3. Statistical Estimation for Covariance Structures with Tail Estimates using Nodewise Quantile Predictive Regression Models By Christis Katsouris
  4. Approximate Bayesian Computation for Partially Identified Models By Alvarez, Luis Antonio
  5. Efficient Semiparametric Estimation of Average Treatment Effects Under Covariate Adaptive Randomization By Ahnaf Rafi
  6. INAR approximation of bivariate linear birth and death process By Chen, Zezhun Chen; Dassios, Angelos; Tzougas, George
  7. Adaptive posterior distributions for covariance matrix learning in Bayesian inversion problems for multioutput signals By Curbelo Benitez, Ernesto Angel; Martino, Luca; Llorente Fernandez, Fernando; Delgado Gómez, David
  8. Limited Monotonicity and the Combined Compliers LATE By Nadja van ’t Hoff; Arthur Lewbel; Giovanni Mellace
  9. Semiparametrically Optimal Cointegration Test By Bo Zhou
  10. Further Improvements of Finite Sample Approximation of Central Limit Theorems for Weighted and Unweighted Malmquist Productivity Indices By Valentin Zelenyuk; Shirong Zhao
  11. Grenander-type Density Estimation under Myerson Regularity By Haitian Xie
  12. Hierarchical DCC-HEAVY Model for High-Dimensional Covariance Matrices By Emilija Dzuverovic; Matteo Barigozzi
  13. Incorporating Short Data into Large Mixed-Frequency VARs for Regional Nowcasting By Gary Koop; Stuart McIntyre; James Mitchell; Aubrey Poon; Ping Wu
  14. Individualized Conformal By Fernando Delbianco; Fernando Tohmé
  15. Precision versus Shrinkage: A Comparative Analysis of Covariance Estimation Methods for Portfolio Allocation By Sumanjay Dutta; Shashi Jain
  16. Calibration and Validation of Macroeconomic Simulation Models: A General Protocol by Causal Search By Mario Martinoli; Alessio Moneta; Gianluca Pallante
  17. Detecting and dating possibly distinct structural breaks in the covariance structure of financial assets By Mugrabi, Farah Daniela
  18. Studying the Welfare State by Analysing Time-Series-Cross-Section Data By Federico Podestà
  19. A Novel Robust Method for Estimating the Covariance Matrix of Financial Returns with Applications to Risk Management By Leccadito, Arturo; Staino, Alessandro; Toscano, Pietro
  20. A Multilevel Factor Model for Economic Activity with Observation Driven Dynamic Factors By Mariia Artemova; Francisco Blasques; Siem Jan Koopman
  21. Copula Variational LSTM for High-dimensional Cross-market Multivariate Dependence Modeling By Jia Xu; Longbing Cao
  22. Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models By Yichi Zhang; Mihai Cucuringu; Alexander Y. Shestopaloff; Stefan Zohren

  1. By: Shunsuke Imai; Lei Qin; Takahide Yanagi
    Abstract: This study considers a panel data analysis to examine the heterogeneity in treatment effects with respect to a pre-treatment covariate of interest in the staggered difference-in-differences setting in Callaway and Sant'Anna (2021). Under a set of standard identification conditions, a doubly robust estimand conditional on the covariate identifies the group-time conditional average treatment effect given the covariate. Given this identification result, we propose a three-step estimation procedure based on nonparametric local linear regressions and parametric estimation methods, and develop a doubly robust inference method to construct a uniform confidence band of the group-time conditional average treatment effect function.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.02185&r=ecm
  2. By: Florian Huber; Gary Koop
    Abstract: The shocks which hit macroeconomic models such as Vector Autoregressions (VARs) have the potential to be non-Gaussian, exhibiting asymmetries and fat tails. This consideration motivates the VAR developed in this paper, which uses a Dirichlet process mixture (DPM) to model the shocks. However, we do not follow the obvious strategy of simply modeling the VAR errors with a DPM, since this would lead to computationally infeasible Bayesian inference in larger VARs and potentially a sensitivity to the way the variables are ordered in the VAR. Instead, we develop a particular additive error structure inspired by Bayesian nonparametric treatments of random effects in panel data models. We show that this leads to a model which allows for computationally fast and order-invariant inference in large VARs with nonparametric shocks. Our empirical results with nonparametric VARs of various dimensions show that nonparametric treatment of the VAR errors is particularly useful in periods such as the financial crisis and the pandemic.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.16827&r=ecm
  3. By: Christis Katsouris
    Abstract: This paper considers the specification of covariance structures with tail estimates. We focus on two aspects: (i) the estimation of the VaR-CoVaR risk matrix in the case of a larger number of time series observations than assets in a portfolio, using quantile predictive regression models without assuming the presence of nonstationary regressors; and (ii) the construction of a novel variable selection algorithm, the so-called Feature Ordering by Centrality Exclusion (FOCE), which is based on an assumption-lean regression framework, has no tuning parameters and is proved to be consistent under general sparsity assumptions. We illustrate the usefulness of our proposed methodology with numerical studies of real and simulated datasets when modelling systemic risk in a network.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.11282&r=ecm
  4. By: Alvarez, Luis Antonio
    Abstract: Partial identification is a prominent feature of several economic models. Such prevalence has spurred a large literature on valid set estimation under partial identification from a frequentist viewpoint. From the Bayesian perspective, it is well known that, under partial identification, the asymptotic validity of Bayesian credible sets for conducting frequentist inference, which is ensured by several Bernstein–von Mises theorems available in the literature, breaks down. Existing solutions to this problem require either knowledge of the map between the distribution of the data and the identified set, which is generally unavailable in more complex models, or modifications to the methodology that hinder the Bayesian interpretability of the proposed solution. In this paper, I show how one can leverage Approximate Bayesian Computation (ABC), a Bayesian methodology designed for settings where evaluation of the model likelihood is infeasible, to reestablish the asymptotic validity of Bayesian credible sets in conducting frequentist inference, whilst preserving the core interpretation of the Bayesian approach and dispensing with knowledge of the map between data and identified set. Specifically, I show in a simple, yet encompassing, setting how, by calibrating the main tuning parameter of the ABC methodology, one could hope to achieve asymptotic frequentist coverage. Based on my findings, I then propose a semiautomatic algorithm for selecting this parameter and constructing valid confidence sets. This is a work in progress. In future versions, I intend to present further theoretical results, Monte Carlo simulations and an empirical application on the Economics of Networks.
    Keywords: Approximate Bayesian Computation; Partial Identification; Tuning parameter selection
    JEL: C11
    Date: 2023–03–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:117339&r=ecm
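The "main tuning parameter" the abstract refers to is, in the simplest version of ABC, the acceptance tolerance of a rejection sampler. As a purely illustrative sketch (not the paper's algorithm, and with all function names hypothetical), rejection ABC can be written as:

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, summary, eps, n_draws=10000, rng=None):
    """Generic rejection ABC: keep prior draws whose simulated summary
    statistic lies within tolerance eps of the observed summary."""
    rng = np.random.default_rng(rng)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)               # draw parameter from the prior
        sim = simulator(theta, rng)              # simulate data given theta
        if abs(summary(sim) - s_obs) <= eps:     # accept if summaries are close
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal with known variance.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(
    observed=data,
    prior_sampler=lambda r: r.uniform(-5, 5),
    simulator=lambda th, r: r.normal(th, 1.0, size=100),
    summary=np.mean,
    eps=0.1,
)
```

Shrinking `eps` trades acceptance rate for accuracy; calibrating this trade-off to restore frequentist coverage is the question the paper studies.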
  5. By: Ahnaf Rafi
    Abstract: Experiments that use covariate adaptive randomization (CAR) are commonplace in applied economics and other fields. In such experiments, the experimenter first stratifies the sample according to observed baseline covariates and then assigns treatment randomly within these strata so as to achieve balance according to pre-specified stratum-specific target assignment proportions. In this paper, we compute the semiparametric efficiency bound for estimating the average treatment effect (ATE) in such experiments with binary treatments allowing for the class of CAR procedures considered in Bugni, Canay, and Shaikh (2018, 2019). This is a broad class of procedures and is motivated by those used in practice. The stratum-specific target proportions play the role of the propensity score conditional on all baseline covariates (and not just the strata) in these experiments. Thus, the efficiency bound is a special case of the bound in Hahn (1998), but conditional on all baseline covariates. Additionally, this efficiency bound is shown to be achievable under the same conditions as those used to derive the bound by using a cross-fitted Nadaraya-Watson kernel estimator to form nonparametric regression adjustments.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.08340&r=ecm
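The regression adjustment in the abstract can be previewed with a plain (not cross-fitted) Nadaraya-Watson version: fit a kernel regression of the outcome on the covariate within each arm and average the fitted difference. This is only a hypothetical toy under pure randomization, not the paper's cross-fitted estimator:

```python
import numpy as np

def nw_regression(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def ate_regression_adjustment(y, t, x, h=0.3):
    """ATE via regression adjustment: average m1(X) - m0(X),
    with each arm's regression function fit by Nadaraya-Watson."""
    m1 = nw_regression(x[t == 1], y[t == 1], x, h)
    m0 = nw_regression(x[t == 0], y[t == 0], x, h)
    return float(np.mean(m1 - m0))

# Toy randomized experiment with a constant treatment effect of 2.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 1, n)
t = (rng.uniform(size=n) < 0.5).astype(int)
y = np.sin(2 * x) + 2.0 * t + rng.normal(0, 0.2, n)
est = ate_regression_adjustment(y, t, x)
```

Cross-fitting, as in the paper, would fit `nw_regression` on one fold and evaluate it on the other to avoid own-observation bias.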
  6. By: Chen, Zezhun Chen; Dassios, Angelos; Tzougas, George
    Abstract: In this paper, we propose a new type of univariate and bivariate integer-valued autoregressive model of order one (INAR(1)) to approximate univariate and bivariate linear birth and death processes with constant rates. Under a specific parametric setting, the dynamics of the transition probabilities and the probability generating function of the INAR(1) model converge to those of the birth and death process as the length of the subintervals goes to 0. Due to the simplicity of its Markov structure, maximum likelihood estimation is feasible for the INAR(1) model, which is not the case for bivariate and multivariate birth and death processes. This means that statistical inference for a bivariate birth and death process can be achieved via maximum likelihood estimation of a bivariate INAR(1) model.
    JEL: C1
    Date: 2023–05–15
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118769&r=ecm
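The univariate building block here is the textbook INAR(1) recursion X_t = α ∘ X_{t-1} + ε_t, where ∘ is binomial thinning (each of the X_{t-1} individuals survives independently with probability α) and ε_t is a Poisson innovation. A minimal simulation sketch, with the immigration-death mapping of rates to (α, λ) used only as an assumed illustration:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, x0=0, rng=None):
    """Simulate X_t = alpha ∘ X_{t-1} + eps_t with binomial thinning
    and Poisson(lam) innovations."""
    rng = np.random.default_rng(rng)
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # alpha ∘ X_{t-1}
        x[t] = survivors + rng.poisson(lam)        # plus new arrivals
    return x

# Illustrative mapping for an immigration-death process: arrival rate nu,
# per-individual death rate mu, subinterval length dt.
nu, mu, dt = 5.0, 1.0, 0.1
alpha = np.exp(-mu * dt)
lam = (nu / mu) * (1 - np.exp(-mu * dt))
path = simulate_inar1(alpha, lam, n=5000, x0=0, rng=42)
# Stationary mean of this chain is lam / (1 - alpha) = nu / mu = 5.
```

The bivariate model in the paper couples two such recursions; its likelihood remains tractable because each transition is still a finite convolution of thinning and innovation terms.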
  7. By: Curbelo Benitez, Ernesto Angel; Martino, Luca; Llorente Fernandez, Fernando; Delgado Gómez, David
    Abstract: In this work, we propose an adaptive importance sampling (AIS) scheme for multivariate Bayesian inversion problems, which is based on two main ideas: the inference procedure is divided into two parts and the variables of interest are split into two blocks. We assume that the observations are generated from a complex multivariate non-linear function perturbed by correlated Gaussian noise. We estimate both the unknown parameters of the multivariate non-linear model and the covariance matrix of the noise. In the first part of the proposed inference scheme, a novel AIS technique called adaptive target AIS (ATAIS) is designed, which alternates iteratively between an IS technique over the parameters of the non-linear model and a frequentist approach for the covariance matrix of the noise. In the second part of the proposed inference scheme, a prior density over the covariance matrix is considered and the cloud of samples obtained by ATAIS is recycled and re-weighted to obtain a complete Bayesian study of the model parameters and covariance matrix. Two numerical examples are presented that show the benefits of the proposed approach.
    Keywords: Bayesian Inversion; Importance Sampling; Covariance Matrix; Tempering; Sequence Of Posteriors
    Date: 2023–05–30
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:37391&r=ecm
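ATAIS itself is more elaborate, but its primitive, self-normalized importance sampling, is worth keeping in mind: draw from a proposal, weight by the (unnormalized) target over the proposal, and normalize. A generic sketch with hypothetical names:

```python
import numpy as np

def snis(log_target, proposal_sampler, log_proposal, n, rng=None):
    """Self-normalized importance sampling: sample from the proposal and
    reweight by target/proposal; weights are normalized to sum to one."""
    rng = np.random.default_rng(rng)
    x = proposal_sampler(rng, n)
    log_w = log_target(x) - log_proposal(x)
    w = np.exp(log_w - log_w.max())   # subtract the max for numerical stability
    w /= w.sum()
    return x, w

# Toy target: N(3, 1), known only up to a constant; proposal: N(0, 3).
x, w = snis(
    log_target=lambda x: -0.5 * (x - 3.0) ** 2,
    proposal_sampler=lambda r, n: r.normal(0.0, 3.0, n),
    log_proposal=lambda x: -0.5 * (x / 3.0) ** 2,
    n=50000, rng=0,
)
mean_est = float(np.sum(w * x))   # posterior-mean estimate, close to 3
```

The "recycling" step in the abstract amounts to re-weighting an existing cloud like `(x, w)` under a new target, rather than drawing fresh samples.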
  8. By: Nadja van ’t Hoff (University of Southern Denmark); Arthur Lewbel (Boston College); Giovanni Mellace (University of Southern Denmark)
    Abstract: We consider estimating a local average treatment effect given an endogenous binary treatment and two or more valid binary instruments. We propose a novel limited monotonicity assumption that is generally weaker than alternative monotonicity assumptions considered in the literature, and allows for a great deal of choice heterogeneity. Using this limited monotonicity, we define and identify the Combined Complier Local Average Treatment Effect (CC-LATE), which is arguably a more policy relevant parameter than the weighted average of LATEs identified by Two Stage Least Squares. We apply our results to estimate the effect of learning one’s HIV status on protective behaviors.
    Keywords: Instrumental variable, Local Average Treatment Effect, monotonicity, multiple instruments
    JEL: C21 C26
    Date: 2023–05–24
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:1059&r=ecm
  9. By: Bo Zhou
    Abstract: This paper addresses the issue of semiparametric efficiency for cointegration rank testing in finite-order vector autoregressive models, where the innovation distribution is considered an infinite-dimensional nuisance parameter. Our asymptotic analysis relies on Le Cam's theory of limit experiments, which in this context takes the form of a Locally Asymptotically Brownian Functional (LABF). By leveraging the structural version of LABF, an Ornstein-Uhlenbeck experiment, we develop the asymptotic power envelopes of asymptotically invariant tests for both cases with and without a time trend. We propose feasible tests based on a nonparametrically estimated density and demonstrate that their power can achieve the semiparametric power envelopes, making them semiparametrically optimal. We validate the theoretical results through large-sample simulations and illustrate satisfactory size control and excellent power performance of our tests in small samples. In both cases, with and without a time trend, we show that a remarkable amount of additional power can be obtained from non-Gaussian distributions.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.08880&r=ecm
  10. By: Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia); Shirong Zhao (School of Finance, Dongbei University of Finance and Economics, Dalian, Liaoning 116025)
    Abstract: Various methods have recently been proposed to further improve the finite sample performance of the central limit theorems (CLTs) developed for the simple mean and aggregate efficiency estimated via non-parametric frontier efficiency methods. We thoroughly investigate whether these methods are also effective in improving the finite sample performance of the recently developed CLTs for the simple mean and aggregate Malmquist Productivity Indices (MPIs). Extensive Monte Carlo experiments confirm that the method from Simar et al. (2023a) is useful for the simple mean and aggregate MPI in relatively small sample sizes (e.g., up to around 50, perhaps 100) and especially for large dimensions. Interestingly, we find that the better performance of the data sharpening method from Nguyen et al. (2022) observed in the context of efficiency is not obvious in the context of productivity. Finally, we use a well-known empirical data set to illustrate the differences across the existing methods to guide practitioners.
    Keywords: Malmquist Productivity Index, Non-parametric Efficiency Estimators, Data Envelopment Analysis, Inference
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:qld:uqcepa:186&r=ecm
  11. By: Haitian Xie
    Abstract: This study presents a novel approach to the density estimation of private values from second-price auctions, diverging from the conventional use of smoothing-based estimators. We introduce a Grenander-type estimator, constructed based on a shape restriction in the form of a convexity constraint. This constraint corresponds to the renowned Myerson regularity condition in auction theory, which is equivalent to the concavity of the revenue function for selling the auction item. Our estimator is nonparametric and does not require any tuning parameters. Under mild assumptions, we establish cube-root consistency and show that the estimator asymptotically follows a scaled Chernoff distribution. Moreover, we demonstrate that the estimator achieves the minimax optimal convergence rate.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.09052&r=ecm
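For readers unfamiliar with the family: the classic Grenander estimator of a nonincreasing density is the left derivative of the least concave majorant of the empirical CDF, and it is tuning-free in exactly the sense the abstract describes. The paper's estimator imposes a convexity constraint instead; the sketch below shows only the simplest monotone case:

```python
import numpy as np

def grenander(samples):
    """Grenander estimator of a nonincreasing density on [0, inf):
    slopes of the least concave majorant (LCM) of the empirical CDF.
    Returns (knots, piecewise-constant density on each knot interval)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    pts_x = np.concatenate(([0.0], x))
    pts_y = np.arange(n + 1) / n
    # Build the LCM with a monotone stack: keep slopes strictly decreasing.
    stack = [0]
    for i in range(1, n + 1):
        while len(stack) >= 2:
            a, b = stack[-2], stack[-1]
            s_ab = (pts_y[b] - pts_y[a]) / (pts_x[b] - pts_x[a])
            s_bi = (pts_y[i] - pts_y[b]) / (pts_x[i] - pts_x[b])
            if s_ab <= s_bi:
                stack.pop()   # point b lies below the chord a-i: drop it
            else:
                break
        stack.append(i)
    knots = pts_x[stack]
    dens = np.diff(pts_y[stack]) / np.diff(knots)
    return knots, dens

# Example: a standard exponential density is nonincreasing on [0, inf).
rng = np.random.default_rng(0)
knots, dens = grenander(rng.exponential(1.0, size=2000))
```

The output density is nonincreasing by construction and integrates to one over the knot intervals, with no bandwidth to choose.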
  12. By: Emilija Dzuverovic; Matteo Barigozzi
    Abstract: We introduce a new HD DCC-HEAVY class of hierarchical-type factor models for conditional covariance matrices of high-dimensional returns, employing the corresponding realized measures built from higher-frequency data. The modelling approach features sophisticated asymmetric dynamics in covariances coupled with straightforward estimation and forecasting schemes, independent of the cross-sectional dimension of the assets under consideration. Empirical analyses suggest the HD DCC-HEAVY models have a better in-sample fit, and deliver statistically and economically significant out-of-sample gains relative to the standard benchmarks and existing hierarchical factor models. The results are robust under different market conditions.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.08488&r=ecm
  13. By: Gary Koop; Stuart McIntyre; James Mitchell; Aubrey Poon; Ping Wu
    Abstract: Interest in regional economic issues coupled with advances in administrative data is driving the creation of new regional economic data. Many of these data series could be useful for nowcasting regional economic activity, but they suffer from a short (albeit constantly expanding) time series which makes incorporating them into nowcasting models problematic. Regional nowcasting is already challenging because the release delay on regional data tends to be greater than that at the national level, and "short" data imply a "ragged edge" at both the beginning and the end of regional data sets, which adds a further complication. In this paper, via an application to the UK, we develop methods to include a wide range of short data into a regional mixed-frequency VAR model. These short data include hitherto unexploited regional VAT turnover data. We address the problem of the ragged edge at both the beginning and end of our sample by estimating regional factors using different missing data algorithms that we then incorporate into our mixed-frequency VAR model. We find that nowcasts of regional output growth are generally improved when we condition them on the factors, but only when the regional nowcasts are produced before the national (UK-wide) output growth data are published.
    Keywords: Regional data; Mixed-frequency data; Missing data; Nowcasting; Factors; Bayesian methods; Real-time data; Vector autoregressions
    JEL: C32 C53 E37
    Date: 2023–05–08
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:96086&r=ecm
  14. By: Fernando Delbianco (Universidad Nacional del Sur/CONICET); Fernando Tohmé (Universidad Nacional del Sur/CONICET)
    Abstract: The problem of individualized prediction can be addressed using variants of conformal prediction, obtaining the intervals to which the actual values of the variables of interest belong. Here we present a method based on detecting the observations that may be relevant for a given question and then using simulated controls to yield the intervals for the predicted values. This method is shown to be adaptive and able to detect the presence of latent relevant variables.
    Keywords: Conformal Prediction, Individualized Inference, Split and Jackknife Distribution-Free Inference.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:aoz:wpaper:247&r=ecm
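As background for the variants the abstract builds on, split conformal prediction is the standard starting point: fit any model on one half of the data, take the (1-α) quantile of absolute residuals on the other half, and use it as the interval half-width. A generic sketch (the model and names here are illustrative, not the authors' method):

```python
import numpy as np

def split_conformal_interval(x_train, y_train, x_cal, y_cal, x_new, fit_predict, alpha=0.1):
    """Split conformal prediction: calibrate the interval half-width as the
    ceil((n+1)(1-alpha))-th order statistic of calibration residuals."""
    predict = fit_predict(x_train, y_train)
    resid = np.abs(y_cal - predict(x_cal))
    n = len(resid)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(resid)[min(k, n) - 1]
    mu = predict(x_new)
    return mu - q, mu + q

# Toy base model: ordinary least squares.
def ols_fit(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda xn: beta[0] + beta[1] * xn

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 1000)
lo, hi = split_conformal_interval(x[:500], y[:500], x[500:], y[500:], np.array([0.0]), ols_fit)
```

The guarantee is marginal (1-α) coverage; the paper's contribution concerns making such intervals informative for an individual observation.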
  15. By: Sumanjay Dutta; Shashi Jain
    Abstract: In this paper, we perform a comprehensive study of different covariance and precision matrix estimation methods in the context of minimum variance portfolio allocation. The set of models studied by us can be broadly categorized as: Gaussian Graphical Model (GGM) based methods, Shrinkage Methods, Thresholding and Random Matrix Theory (RMT) based methods. Among these, GGM methods estimate the precision matrix directly while the other approaches estimate the covariance matrix. We perform a synthetic experiment to study the network learning and sample complexity performance of GGM methods. Thereafter, we compare all the covariance and precision matrix estimation methods in terms of their predictive ability for daily, weekly and monthly horizons. We consider portfolio risk as an indicator of estimation error and employ it as a loss function for comparison of the methods under consideration. We find that GGM methods outperform shrinkage and other approaches. Our observations for the performance of GGM methods are consistent with the synthetic experiment. We also propose a new criterion for the hyperparameter tuning of GGM methods. Our tuning approach outperforms the existing methodology in the synthetic setup. We further perform an empirical experiment where we study the properties of the estimated precision matrix. The properties of the estimated precision matrices calculated using our tuning approach are in agreement with the algorithm performances observed in the synthetic experiment and the empirical experiment for predictive ability performance comparison. Apart from this, we perform another synthetic experiment which demonstrates the direct relation between estimation error of the precision matrix and portfolio risk.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.11298&r=ecm
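The portfolio-risk loss function in the comparison rests on a closed form: the minimum variance portfolio is w ∝ Σ⁻¹1, normalized to sum to one, with Σ replaced by an estimate. A toy sketch using a simple linear shrinkage toward a scaled identity, which is only a simplified stand-in for the shrinkage estimators studied in the paper:

```python
import numpy as np

def shrinkage_covariance(returns, delta=0.2):
    """Linear shrinkage of the sample covariance toward a scaled identity
    target; delta is the (here fixed, illustrative) shrinkage intensity."""
    s = np.cov(returns, rowvar=False)
    target = np.eye(s.shape[0]) * np.trace(s) / s.shape[0]
    return (1 - delta) * s + delta * target

def min_variance_weights(cov):
    """Closed-form minimum variance portfolio: w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.02, size=(250, 10))   # 250 days of returns, 10 assets
w = min_variance_weights(shrinkage_covariance(r))
```

A Gaussian graphical model approach would estimate Σ⁻¹ directly (e.g. with an l1 penalty), skipping the matrix inversion above; that difference is the crux of the paper's comparison.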
  16. By: Mario Martinoli; Alessio Moneta; Gianluca Pallante
    Abstract: We propose a general protocol for the calibration and validation of complex simulation models by an approach based on the discovery and comparison of causal structures. The key idea is that configurations of parameters of a given theoretical model are selected by minimizing a distance index between two structural models: one estimated from the data generated by the theoretical model, the other estimated from a set of observed data. Validation is conceived as a measure of matching between the theoretical and the empirical causal structure. Causal structures are identified by combining structural vector autoregressions with independent component analysis, so as to avoid a priori restrictions. We use the model confidence set as a tool to measure the uncertainty associated with the alternative configurations of parameters and causal structures. We illustrate the procedure by applying it to a large-scale macroeconomic agent-based model, namely the ''dystopian Schumpeter-meeting-Keynes'' model.
    Keywords: Calibration; Validation; Simulation models; SVAR models; Causal inference; Model confidence sets; Independent component analysis.
    Date: 2022–10–24
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2022/33&r=ecm
  17. By: Mugrabi, Farah Daniela (Université catholique de Louvain, LIDAM/LFIN, Belgium)
    Abstract: This paper aims to identify and date contagion by accounting for possibly distinct structural breaks in the covariance structure of financial assets. We propose an efficient three-step procedure that applies the Lagrange Multiplier test, in particular the SupLM statistic, to the DCC-GARCH model parameters. Monte Carlo experiments show that our procedure possesses good power and accurately detects the location of the true breaking points. We explore contagion between the government bond and stock markets of advanced and emerging economies. Evidence of common shifts in the covariance structure coincides with the European Sovereign Debt Crisis, the Taper Tantrum that originated in the United States in mid-2013 and the Covid-19 pandemic.
    Keywords: Contagion, emerging markets, unknown structural breaks, Lagrange Multiplier test, DCC-GARCH model
    JEL: C32 C15 G15
    Date: 2023–03–01
    URL: http://d.repec.org/n?u=RePEc:ajf:louvlf:2023001&r=ecm
  18. By: Federico Podestà
    Abstract: For a few decades now, quantitative researchers interested in studying welfare states have been analysing time-series-cross-section (TSCS) data relatively regularly. Given that welfare state researchers operate within an observational data framework, they seek to exploit the characteristics of TSCS data to make causal inferences. However, this objective remains quite difficult. Accordingly, the chapter aims to critically illustrate some of the most relevant TSCS techniques used in recent years. Much of the chapter regards TSCS regression, as it is the most widely used econometric tool for estimating causal effects regarding several welfare state features in a TSCS setting. The concluding part of the chapter regards the synthetic control method. This method requires a dedicated section because, although it has been widely used in numerous strands of research, it has arguably not yet been sufficiently exploited for the study of social policy.
    Keywords: time-series-cross-section analysis; welfare state; causal inference; regression; synthetic control method.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:fbk:wpaper:2023-03&r=ecm
  19. By: Leccadito, Arturo (Université catholique de Louvain, LIDAM/LFIN, Belgium); Staino, Alessandro; Toscano, Pietro
    Abstract: In this paper we introduce the dynamic Gerber model (DGC) and compare its performance in predicting VaR and ES with that of alternative parametric, nonparametric and semiparametric methods for estimating the variance-covariance matrix of returns. Based on ES backtests, the DGC method produces, overall, accurate ES forecasts. Furthermore, we use the Model Confidence Set (MCS) procedure to identify the superior set of models (SSM). For all the portfolios and VaR/ES confidence levels we consider, the DGC is found to belong to the SSM.
    Keywords: VaR ; ES ; Gerber statistic ; parametric methods ; nonparametric methods ; semiparametric methods
    Date: 2022–11–29
    URL: http://d.repec.org/n?u=RePEc:ajf:louvlf:2022011&r=ecm
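For orientation, the simplest nonparametric baseline among methods of this kind is historical VaR/ES: the empirical loss quantile and the mean loss beyond it. A minimal sketch (illustrative only; the paper's covariance-based forecasts are model-driven, not historical):

```python
import numpy as np

def historical_var_es(returns, level=0.975):
    """Historical VaR and ES at the given confidence level, both reported
    as positive loss numbers: VaR is the empirical loss quantile, ES the
    average loss beyond it."""
    losses = -np.asarray(returns, dtype=float)
    var = float(np.quantile(losses, level))
    es = float(losses[losses >= var].mean())
    return var, es

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 10000)   # toy daily returns, 1% volatility
var, es = historical_var_es(r, 0.975)
```

Backtesting frameworks such as the ES backtests cited in the abstract compare forecasts like these against realized losses over a rolling window.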
  20. By: Mariia Artemova (Vrije Universiteit Amsterdam); Francisco Blasques (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam)
    Abstract: We analyze the role of industrial and non-industrial production sectors in the US economy by adopting a novel multilevel factor model. The proposed model is suitable for high-dimensional panels of economic time series and allows for interdependence structures across multiple sectors. The estimation procedure is based on a multistep least squares method which is simple and fast in its implementation. By analyzing the shock propagation process throughout the network of interconnections, we corroborate some of the key findings about the role of industrial production in the US economy, quantify the importance of propagation effects and shed new light on dynamic sectoral linkages.
    Keywords: Dynamic factor model, Interconnectedness, Output growth.
    JEL: C22 C32 C38 C51
    Date: 2023–04–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20230021&r=ecm
  21. By: Jia Xu; Longbing Cao
    Abstract: We address an important yet challenging problem: modeling high-dimensional dependencies across multivariates, such as financial indicators in heterogeneous markets. In reality, a market couples and influences others over time, and the financial variables of a market are also coupled. We make the first attempt to integrate variational sequential neural learning with copula-based dependence modeling to characterize both temporal observable and latent variable-based dependence degrees and structures across non-normal multivariates. Our variational neural network WPVC-VLSTM models variational sequential dependence degrees and structures across multivariate time series by variational long short-term memory networks and a regular vine copula. The regular vine copula models non-normal and long-range distributional couplings across multiple dynamic variables. WPVC-VLSTM is verified in terms of both technical significance and portfolio forecasting performance. It outperforms benchmarks including linear models, stochastic volatility models, deep neural networks, and variational recurrent networks in cross-market portfolio forecasting.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.08778&r=ecm
  22. By: Yichi Zhang; Mihai Cucuringu; Alexander Y. Shestopaloff; Stefan Zohren
    Abstract: In multivariate time series systems, key insights can be obtained by discovering lead-lag relationships inherent in the data, which refer to the dependence between two time series shifted in time relative to one another, and which can be leveraged for the purposes of control, forecasting or clustering. We develop a clustering-driven methodology for the robust detection of lead-lag relationships in lagged multi-factor models. Within our framework, the envisioned pipeline takes as input a set of time series and creates an enlarged universe of extracted subsequence time series from each input time series using a sliding window approach. We then apply various clustering techniques (e.g., K-means++ and spectral clustering), employing a variety of pairwise similarity measures, including nonlinear ones. Once the clusters have been extracted, lead-lag estimates across clusters are aggregated to enhance the identification of the consistent relationships in the original universe. Since multivariate time series are ubiquitous in a wide range of domains, we demonstrate that our method is not only able to robustly detect lead-lag relationships in financial markets, but can also yield insightful results when applied to an environmental data set.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.06704&r=ecm
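The pairwise primitive underlying pipelines like this is a cross-correlation scan: for a pair of series, find the shift that maximizes the absolute correlation. A hypothetical single-pair sketch (the paper's contribution is the clustering-based aggregation on top of such estimates, not this scan itself):

```python
import numpy as np

def lead_lag(x, y, max_lag=20):
    """Return the lag l in [-max_lag, max_lag] maximizing |corr(x_t, y_{t+l})|,
    together with that correlation. Positive l means y follows x."""
    n = len(x)
    best_lag, best_corr = 0, 0.0
    for l in range(-max_lag, max_lag + 1):
        if l >= 0:
            a, b = x[: n - l], y[l:]
        else:
            a, b = x[-l:], y[:l]
        c = np.corrcoef(a, b)[0, 1]
        if abs(c) > abs(best_corr):
            best_lag, best_corr = l, c
    return best_lag, float(best_corr)

# Toy data: a common factor drives both series; y lags x by 5 steps.
rng = np.random.default_rng(0)
n, true_lag = 2000, 5
f = rng.normal(size=n + true_lag)
x = f[true_lag:] + 0.2 * rng.normal(size=n)   # x sees the factor early
y = f[:n] + 0.2 * rng.normal(size=n)          # y sees it true_lag steps later
lag, corr = lead_lag(x, y, max_lag=20)
```

Running this scan over every subsequence pair and clustering the results is, roughly, the enlarged-universe idea the abstract describes.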

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.