on Econometrics |
By: | Juodis, Arturas; Sarafidis, Vasilis |
Abstract: | This paper develops a novel Method of Moments approach for panel data models with endogenous regressors and unobserved common factors. The proposed approach does not require explicitly estimating a large number of parameters in either the time-series or the cross-sectional dimension (T and N, respectively). Hence, it is free from the incidental parameter problem. In particular, the proposed approach suffers neither from "Nickell bias" of order O(1/T), nor from bias terms of order O(1/N). Therefore, it can operate under substantially weaker restrictions than existing large-T procedures. Two alternative GMM estimators are analysed; one makes use of a fixed number of "averaged estimating equations" à la Anderson and Hsiao (1982), whereas the other makes use of "stacked estimating equations", the total number of which increases at the rate O(T). It is demonstrated that both estimators are consistent and asymptotically mixed-normal as N goes to infinity for any value of T. Low-level conditions that ensure local and global identification in this setup are examined using several examples. |
Keywords: | Common Factors; GMM; Incidental Parameter Problem; Endogenous Regressors; U-statistic |
JEL: | C13 C15 C23 |
Date: | 2020–12–23 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:104906&r=all |
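As a point of reference for the GMM machinery in the abstract above, the sketch below implements a standard two-step linear GMM estimator with instruments. It is a generic template only: the factor-robust averaged/stacked estimating equations proposed by Juodis and Sarafidis are not reproduced, and the toy data-generating process is purely illustrative.

```python
# Generic two-step linear GMM with instruments -- NOT the authors' factor-robust
# moment conditions, just a standard template to fix ideas.
import numpy as np

def gmm_linear(y, X, Z):
    """Two-step GMM for y = X b + u using instrument matrix Z (cols(Z) >= cols(X))."""
    n = len(y)
    W = np.linalg.inv(Z.T @ Z / n)                        # step 1: 2SLS-type weight matrix
    b1 = np.linalg.solve(X.T @ Z @ W @ Z.T @ X, X.T @ Z @ W @ Z.T @ y)
    u = y - X @ b1
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / n         # covariance of moment contributions
    W2 = np.linalg.inv(S)                                 # step 2: efficient weight matrix
    return np.linalg.solve(X.T @ Z @ W2 @ Z.T @ X, X.T @ Z @ W2 @ Z.T @ y)

# Toy check with one endogenous regressor and two instruments
rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + v
u = 0.8 * v + rng.normal(size=n)                          # endogeneity via the common shock v
y = 2.0 * x + u
print(gmm_linear(y, x[:, None], z))                       # should be close to 2.0
```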
By: | Juodis, Arturas; Sarafidis, Vasilis |
Abstract: | This paper is an online supplementary appendix to "An Incidental Parameters Free Inference Approach for Panels with Common Shocks". Section S.1 of the present Supplementary Appendix studies the properties of the proposed GMM estimators under fixed T asymptotics. Section S.2 analyses the effect of transforming the model in terms of time-specific cross-sectional averages on the proposed estimating equations. Section S.3 considers identification-robust inference, building upon the idea of Anderson and Rubin (1949) and Stock and Wright (2000). Finally, Section S.4 discusses local and global identification for the panel AR(1) model and reports additional Monte Carlo results for this model. |
Keywords: | Common Factors; GMM; Incidental Parameter Problem; Endogenous Regressors; U-statistic |
JEL: | C13 C15 C18 C33 |
Date: | 2020–12–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:104908&r=all |
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Shahnaz Parsaeian (University of Kansas); Aman Ullah (University of California, Riverside) |
Abstract: | Hashem Pesaran has made many seminal contributions, among others, to time-series econometrics on estimation and forecasting under structural breaks; see Pesaran and Timmermann (2005, 2007), Pesaran et al. (2006), and Pesaran et al. (2013). In this paper we focus on the estimation of regression parameters under multiple structural breaks with heteroskedasticity across regimes. We propose a combined estimator of the regression parameters that combines the restricted estimator, which imposes no break in the parameters, with the unrestricted estimator, which allows for breaks. The operational optimal combination weight is between zero and one. The analytical finite-sample risk is derived, and it is shown that the risk of the proposed combined estimator is lower than that of the unrestricted estimator for any break size and break points. Further, we show that the combined estimator outperforms the unrestricted estimator in terms of mean squared forecast error. Properties of the estimator are also demonstrated in simulations. Finally, empirical illustrations of parameter estimation and forecasting are presented using macroeconomic and financial data sets. |
Keywords: | Structural breaks, Combined estimator |
JEL: | C13 C32 C53 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202101&r=all |
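The sketch below illustrates the idea of combining a restricted (no-break) estimator with an unrestricted (post-break) estimator through a data-dependent weight in (0, 1). The weight rule used here is a simple placeholder, not the operational optimal weight derived by Lee, Parsaeian and Ullah, and the single known break point and parameter values are assumptions of the toy example.

```python
# Combined (shrinkage-type) estimator under one known break point. The weight
# below is a placeholder, NOT the paper's operational optimal weight.
import numpy as np

rng = np.random.default_rng(1)
T, brk = 200, 120
X = rng.normal(size=(T, 2))
beta1, beta2 = np.array([1.0, 0.5]), np.array([1.3, 0.2])          # small break
y = np.empty(T)
y[:brk] = X[:brk] @ beta1 + rng.normal(scale=1.0, size=brk)
y[brk:] = X[brk:] @ beta2 + rng.normal(scale=2.0, size=T - brk)     # heteroskedastic regimes

ols = lambda yy, XX: np.linalg.lstsq(XX, yy, rcond=None)[0]
b_restricted = ols(y, X)                        # ignores the break
b_unrestricted = ols(y[brk:], X[brk:])          # post-break regime (relevant for forecasting)

# Placeholder weight: shrink more toward the restricted fit when the estimated
# break is small relative to the sampling noise of the unrestricted estimator.
diff = b_unrestricted - b_restricted
resid = y[brk:] - X[brk:] @ b_unrestricted
var_u = np.var(resid) * np.trace(np.linalg.inv(X[brk:].T @ X[brk:]))
w = min(1.0, var_u / (diff @ diff + var_u))
b_combined = w * b_restricted + (1 - w) * b_unrestricted
print("weight:", round(w, 3), "combined estimate:", b_combined)
```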
By: | Xiangqian Sun; Xing Yan; Qi Wu |
Abstract: | We propose a multivariate generative model to capture the complex dependence structure often encountered in business and financial data. Our model features heterogeneous and asymmetric tail dependence between all pairs of individual dimensions while also allowing heterogeneity and asymmetry in the tails of the marginals. A significant merit of our model structure is that it is not prone to error propagation in the parameter estimation process, and hence remains very scalable as the dimension of the dataset grows large. However, likelihood methods are infeasible for parameter estimation in our case due to the lack of a closed-form density function. Instead, we devise a novel moment learning algorithm to learn the parameters. To demonstrate the effectiveness of the model and its estimator, we test them on simulated as well as real-world datasets. Results show that this framework gives better finite-sample performance compared to the copula-based benchmarks as well as recent similar models. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.13132&r=all |
By: | Aman Ullah (Department of Economics, University of California Riverside); Tao Wang (University of California Riverside); Weixin Yao (University of California Riverside) |
Abstract: | Most research on panel data focuses on mean or quantile regression, while there is little research on regression methods based on the mode. In this paper, we propose a new model, fixed effects modal regression for panel data, in which we model how the conditional mode of the response variable depends on the covariates, and we employ a kernel-based objective function to simplify the computation. The proposed modal regression can complement the mean and quantile regressions, and provides a better measure of central tendency and better prediction performance when the data are skewed. We present a linear dummy modal regression (LDMR) method and a pseudo-demodeing two-step (PDTS) method to estimate the proposed modal regression. The computations can be easily implemented using a modified modal-expectation-maximization (MEM) algorithm. We investigate the asymptotic properties of the modal estimators under some mild regularity conditions when the number of individuals, N, and the number of time periods, T, go to infinity. The optimal bandwidths, of order (NT)^(-1/7), are obtained by minimizing the asymptotic weighted mean squared errors. Monte Carlo simulations and two real data analyses of a public capital productivity study and a carbon dioxide (CO2) emissions study are presented to demonstrate the finite sample performance of the newly proposed modal regression. |
Keywords: | Asymptotic property, Fixed effects, MEM algorithm, Modal regression, Panel data, Estimation. |
JEL: | C01 C14 C23 C51 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202102&r=all |
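The following is a minimal sketch of the kernel-based objective behind modal regression and of a modal-EM-style iteration for a linear model: it alternates Gaussian-kernel weights with weighted least squares. It is a pooled cross-sectional illustration with an arbitrary bandwidth, not the fixed-effects LDMR or PDTS estimators of the paper.

```python
# Modal-EM iteration for linear modal regression: maximize (1/n) sum_i K_h(y_i - x_i'b)
# with a Gaussian kernel by alternating kernel weights (E-step) and weighted least
# squares (M-step). Pooled sketch only; not the paper's fixed-effects estimators.
import numpy as np

def modal_regression(y, X, h, n_iter=100):
    b = np.linalg.lstsq(X, y, rcond=None)[0]            # start from OLS
    for _ in range(n_iter):
        r = y - X @ b
        w = np.exp(-0.5 * (r / h) ** 2)                  # Gaussian kernel weights
        Xw = X * w[:, None]
        b = np.linalg.solve(Xw.T @ X, Xw.T @ y)          # weighted least squares
    return b

rng = np.random.default_rng(2)
n = 3000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.lognormal(mean=0.0, sigma=0.8, size=n)         # right-skewed noise: mode < mean
y = X @ np.array([1.0, 2.0]) + eps

# Under right-skewed noise the modal intercept should sit below the OLS intercept,
# since the modal fit targets the conditional mode rather than the conditional mean.
print("OLS (mean):  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Modal (mode):", modal_regression(y, X, h=0.5))
```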
By: | Koohyun Kwon; Soonwoo Kwon |
Abstract: | We consider the problem of adaptive inference on a regression function at a point under a multivariate nonparametric regression setting. The regression function belongs to a H\"older class and is assumed to be monotone with respect to some or all of the arguments. We derive the minimax rate of convergence for confidence intervals (CIs) that adapt to the underlying smoothness, and provide an adaptive inference procedure that obtains this minimax rate. The procedure differs from that of Cai and Low (2004), intended to yield shorter CIs under practically relevant specifications. The proposed method applies to general linear functionals of the regression function, and is shown to have favorable performance compared to existing inference procedures. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.14219&r=all |
By: | Koohyun Kwon; Soonwoo Kwon |
Abstract: | We provide an inference procedure for the sharp regression discontinuity design (RDD) under monotonicity, with possibly multiple running variables. Specifically, we consider the case where the true regression function is monotone with respect to (all or some of) the running variables and assumed to lie in a Lipschitz smoothness class. Such a monotonicity condition is natural in many empirical contexts, and the Lipschitz constant has an intuitive interpretation. We propose a minimax two-sided confidence interval (CI) and an adaptive one-sided CI. For the two-sided CI, the researcher is required to choose a Lipschitz constant defining the smoothness class in which she believes the true regression function to lie. This is the only tuning parameter, and the resulting CI has uniform coverage and obtains the minimax optimal length. The one-sided CI can be constructed to maintain coverage over all monotone functions, providing maximum credibility in terms of the choice of the Lipschitz constant. Moreover, the monotonicity makes it possible for the (excess) length of the CI to adapt to the true Lipschitz constant of the unknown regression function. Overall, the proposed procedures make it easy to see under what conditions on the underlying regression function the given estimates are significant, which can add more transparency to research using RDD methods. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.14216&r=all |
By: | Bakk, Zsuzsa; Kuha, Jouni |
Abstract: | In this article we provide an overview of existing approaches for relating latent class membership to external variables of interest. We extend the work of Nylund-Gibson et al. (Structural Equation Modeling: A Multidisciplinary Journal, 2019, 26, 967), who summarize models with distal outcomes, by also providing an overview of the most recommended modeling options for models with covariates and for larger models with multiple latent variables. We exemplify the modeling approaches using data from the General Social Survey, for a model with a distal outcome in which underlying model assumptions are violated and for a model with multiple latent variables. We discuss software availability and provide example syntax for the real data examples in Latent GOLD. |
Keywords: | covariates; distal outcome; latent class analysis; three-step estimation; two-step estimation |
JEL: | C1 |
Date: | 2020–11–16 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:107564&r=all |
By: | Rahul Singh |
Abstract: | Negative control is a strategy for learning the causal relationship between treatment and outcome in the presence of unmeasured confounding. The treatment effect can nonetheless be identified if two auxiliary variables are available: a negative control treatment (which has no effect on the actual outcome), and a negative control outcome (which is not affected by the actual treatment). These auxiliary variables can also be viewed as proxies for a traditional set of control variables, and they bear resemblance to instrumental variables. I propose a new family of non-parametric algorithms for learning treatment effects with negative controls. I consider treatment effects of the population, of sub-populations, and of alternative populations. I allow for data that may be discrete or continuous, and low-, high-, or infinite-dimensional. I impose the additional structure of the reproducing kernel Hilbert space (RKHS), a popular non-parametric setting in machine learning. I prove uniform consistency and provide finite sample rates of convergence. I evaluate the estimators in simulations. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.10315&r=all |
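The sketch below shows only the generic kernel ridge regression primitive in an RKHS (closed-form coefficients from a regularized kernel system), which is the kind of building block such estimators rest on. It is not the proposed negative-control causal estimators themselves, and the Gaussian kernel, length-scale and penalty are arbitrary choices.

```python
# Generic kernel ridge regression in an RKHS with a Gaussian kernel -- a basic
# building block only, not the negative-control causal estimators of the paper.
import numpy as np

def gaussian_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def kernel_ridge_fit(X, y, lam=1e-2, lengthscale=1.0):
    K = gaussian_kernel(X, X, lengthscale)
    return np.linalg.solve(K + len(y) * lam * np.eye(len(y)), y)   # dual coefficients

def kernel_ridge_predict(X_train, alpha, X_new, lengthscale=1.0):
    return gaussian_kernel(X_new, X_train, lengthscale) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)
alpha = kernel_ridge_fit(X, y)
print(kernel_ridge_predict(X, alpha, np.array([[0.5]])))   # roughly sin(1.0)
```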
By: | Sayar Karmakar; Stefan Richter; Wei Biao Wu |
Abstract: | A general class of time-varying regression models is considered in this paper. We estimate the regression coefficients by using local linear M-estimation. For these estimators, weak Bahadur representations are obtained and are used to construct simultaneous confidence bands. For practical implementation, we propose a bootstrap-based method to circumvent the slow logarithmic convergence of the theoretical simultaneous bands. Our results substantially generalize and unify the treatments of several time-varying regression and auto-regression models. The performance for ARCH and GARCH models is studied in simulations, and a few real-life applications of our study are presented through the analysis of some popular financial datasets. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.13157&r=all |
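A minimal sketch of local linear estimation of time-varying coefficients in y_t = x_t' beta(t/T) + e_t is given below, using least squares as the special case of M-estimation. The kernel and bandwidth are arbitrary, and the paper's Bahadur representations and bootstrap simultaneous confidence bands are not implemented.

```python
# Local linear least-squares estimation of a time-varying slope. This is the LS
# special case only; the paper covers general M-estimation plus bootstrap bands.
import numpy as np

def local_linear_beta(y, X, u0, bandwidth):
    T, k = X.shape
    u = np.arange(T) / T
    w = np.maximum(0.0, 1 - ((u - u0) / bandwidth) ** 2)   # Epanechnikov-type weights
    D = np.hstack([X, X * (u - u0)[:, None]])              # local level and local slope terms
    Dw = D * w[:, None]
    coef = np.linalg.solve(Dw.T @ D, Dw.T @ y)
    return coef[:k]                                         # estimate of beta(u0)

rng = np.random.default_rng(4)
T = 1000
x = rng.normal(size=T)
u = np.arange(T) / T
beta = 1 + np.sin(2 * np.pi * u)                            # smoothly varying slope
y = beta * x + 0.5 * rng.normal(size=T)
for u0 in (0.25, 0.5, 0.75):
    print(u0, local_linear_beta(y, x[:, None], u0, bandwidth=0.1),
          "true:", 1 + np.sin(2 * np.pi * u0))
```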
By: | Hafner, Christian (Université catholique de Louvain, LIDAM/ISBA, Belgium); Wang, Linqi (Université catholique de Louvain, LIDAM/LFIN, Belgium) |
Abstract: | This paper proposes a new algorithm for dynamic portfolio selection that takes a sector structure into account. We consider regularization with respect to within and between sector variation of portfolio weights, additional to sparsity and transaction cost controls. Our model includes two special cases as benchmarks: a dynamic conditional correlation model with shrinkage estimation of the unconditional covariance matrix, and the equally weighted portfolio. We propose an algorithm for estimation of the model parameters and calibration of the penalty terms based on cross-validation. In an empirical study, we find that the within-sector penalty has by far the highest contribution to the reduction of out-of-sample volatility of portfolio returns. Our model improves both the pure DCC with shrinkage and the equally-weighted portfolio out-of-sample. |
Keywords: | dynamic conditional correlation, cross-validation, shrinkage, industry sectors |
JEL: | C14 C43 Z11 |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020032&r=all |
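The sketch below only reproduces two static ingredients mentioned as benchmarks: a shrinkage estimate of the covariance matrix and the global minimum-variance weights it implies, compared with the equally weighted portfolio. The paper's dynamic, sector-penalized algorithm and the DCC dynamics are not shown, and the simulated return process is an assumption of the example.

```python
# Static benchmarks only: shrinkage covariance estimate, minimum-variance weights,
# and the equally weighted (1/N) portfolio. The sector-penalized dynamic algorithm
# of the paper is not implemented here.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)
n_obs, n_assets = 500, 20
cov_true = 0.04 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(np.zeros(n_assets), cov_true, size=n_obs)

sigma = LedoitWolf().fit(returns).covariance_      # shrinkage covariance estimate
ones = np.ones(n_assets)
w_gmv = np.linalg.solve(sigma, ones)
w_gmv /= w_gmv.sum()                               # global minimum-variance weights
w_ew = ones / n_assets                             # equally weighted benchmark

vol = lambda w: np.sqrt(w @ sigma @ w)
print("GMV in-sample vol:", round(vol(w_gmv), 4), " 1/N vol:", round(vol(w_ew), 4))
```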
By: | Plassier, Vincent (Université catholique de Louvain); Portier, François; Segers, Johan (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | Consider the problem of learning a large number of response functions simultaneously based on the same input variables. The training data consist of a single independent random sample of the input variables drawn from a common distribution together with the associated responses. The input variables are mapped into a high-dimensional linear space, called the feature space, and the response functions are modelled as linear functionals of the mapped features, with coefficients calibrated via ordinary least squares. We provide convergence guarantees on the worst-case excess prediction risk by controlling the convergence rate of the excess risk uniformly in the response function. The dimension of the feature map is allowed to tend to infinity with the sample size. The collection of response functions, although potentially infinite, is supposed to have a finite Vapnik–Chervonenkis dimension. The bound derived can be applied when building multiple surrogate models in a reasonable computing time. |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020019&r=all |
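A minimal sketch of the setting: many responses regressed on the same mapped features, with all coefficient vectors obtained in one ordinary least-squares solve. The polynomial feature map and simulated data are arbitrary choices for illustration, and none of the paper's risk bounds are involved.

```python
# Many responses, one feature map, one joint least-squares solve.
import numpy as np

rng = np.random.default_rng(6)
n, n_responses = 500, 50
x = rng.uniform(-1, 1, size=(n, 2))
Phi = np.column_stack([np.ones(n), x, x**2, x[:, :1] * x[:, 1:]])   # illustrative feature map

B_true = rng.normal(size=(Phi.shape[1], n_responses))
Y = Phi @ B_true + 0.1 * rng.normal(size=(n, n_responses))

B_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)     # all response functions at once
print("max coefficient error:", np.abs(B_hat - B_true).max())
```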
By: | Gressani, Oswaldo (Université catholique de Louvain, LIDAM/ISBA, Belgium); Lambert, Philippe (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | Multiple linear regression is among the cornerstones of statistical model building. Whether from a descriptive or inferential perspective, it is certainly the most widespread approach to analyze the influence of a collection of explanatory variables on a response. The straightforward interpretability in conjunction with the simple and elegant mathematics of least squares created room for a well-appreciated toolbox with a ubiquitous presence in various scientific fields. In this article, the linear dependence assumption of the response variable with respect to the covariates is relaxed and replaced by an additive architecture of univariate smooth functions of predictor variables. An approximate Bayesian approach combining Laplace approximations and P-splines is used for inference in this additive partial linear model class. The analytical availability of the gradient and Hessian of the posterior penalty vector allows for a fast and efficient exploration of the penalty space, which in turn yields accurate point and set estimates of latent field variables. Different simulation settings confirm the statistical performance of the Laplace-P-spline approach, and the methodology is applied to mortality data. |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020020&r=all |
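The following is a basic penalized-spline smoother in the spirit of the P-spline ingredient, using a truncated-power cubic basis with a ridge penalty on the spline coefficients. The paper's Laplace-approximation machinery for fast Bayesian inference on the penalty is not reproduced, and the knot placement and penalty value are ad hoc.

```python
# Penalized-spline smoother for one covariate: truncated-power cubic basis plus a
# ridge penalty on the spline coefficients (intercept and linear term unpenalized).
import numpy as np

def penalized_spline_fit(x, y, n_knots=20, lam=1.0):
    knots = np.quantile(x, np.linspace(0.05, 0.95, n_knots))
    B = np.column_stack([np.ones_like(x), x,
                         np.maximum(0.0, x[:, None] - knots[None, :]) ** 3])
    P = np.eye(B.shape[1]); P[0, 0] = P[1, 1] = 0.0     # penalize spline terms only
    coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return knots, coef

def penalized_spline_predict(x_new, knots, coef):
    B = np.column_stack([np.ones_like(x_new), x_new,
                         np.maximum(0.0, x_new[:, None] - knots[None, :]) ** 3])
    return B @ coef

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=400)
knots, coef = penalized_spline_fit(x, y, lam=0.1)
print(penalized_spline_predict(np.array([0.25, 0.75]), knots, coef))   # near +1 and -1
```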
By: | Lombardi, Stefano (VATT); van den Berg, Gerard J. (University of Groningen); Vikström, Johan (IFAU - Institute for Evaluation of Labour Market and Education Policy) |
Abstract: | This paper builds on the Empirical Monte Carlo simulation approach developed by Huber et al. (2013) to study the estimation of Timing-of-Events (ToE) models. We exploit rich Swedish data of unemployed job-seekers with information on participation in a training program to simulate placebo treatment durations. We first use these simulations to examine which covariates are key confounders to be included in selection models. The joint inclusion of specific short-term employment history indicators (notably, the share of time spent in employment), together with baseline socio-economic characteristics, regional and inflow timing information, is important to deal with selection bias. Next, we omit subsets of explanatory variables and estimate ToE models with discrete distributions for the ensuing systematic unobserved heterogeneity. In many cases the ToE approach provides accurate effect estimates, especially if time-varying variation in the unemployment rate of the local labor market is taken into account. However, assuming too many or too few support points for unobserved heterogeneity may lead to large biases. Information criteria, in particular those penalizing parameter abundance, are useful to select the number of support points. |
Keywords: | duration analysis; unemployment; propensity score; matching; training; employment |
JEL: | C14 C15 C41 J64 |
Date: | 2020–12–29 |
URL: | http://d.repec.org/n?u=RePEc:hhs:ifauwp:2020_026&r=all |
By: | Ricardo P. Masini; Marcelo C. Medeiros; Eduardo F. Mendes |
Abstract: | In this paper we survey the most recent advances in supervised machine learning and high-dimensional models for time series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feed-forward and recurrent versions, and tree-based methods, such as random forests and boosted trees. We also consider ensemble and hybrid models that combine ingredients from different alternatives. Tests for superior predictive ability are briefly reviewed. Finally, we discuss applications of machine learning in economics and finance and provide an illustration with high-frequency financial data. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.12802&r=all |
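As a tiny illustration of two of the families reviewed in the survey, the sketch below forecasts a simulated series one step ahead from its own lags with a penalized linear regression (Lasso) and a tree ensemble (random forest). The hyperparameters, lag length and data-generating process are arbitrary.

```python
# One-step-ahead forecasting from lagged values: Lasso vs. random forest.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
T, p = 600, 5
y = np.zeros(T)
for t in range(2, T):                                   # AR(2) with a mild nonlinearity
    y[t] = 0.5 * y[t-1] - 0.3 * y[t-2] + 0.1 * np.tanh(y[t-1]) + rng.normal(scale=0.5)

X = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])   # lag matrix (lags 1..p)
target = y[p:]
split = len(target) - 100                               # hold out the last 100 observations
models = {"lasso": Lasso(alpha=0.01),
          "random forest": RandomForestRegressor(n_estimators=200, random_state=0)}
for name, m in models.items():
    m.fit(X[:split], target[:split])
    mse = np.mean((m.predict(X[split:]) - target[split:]) ** 2)
    print(name, "out-of-sample MSE:", round(mse, 4))
```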
By: | Sla{\dj}ana Babi\'c; Christophe Ley; Lorenzo Ricci; David Veredas |
Abstract: | Economic and financial crises are characterised by unusually large events. These tail events co-move because of linear and/or nonlinear dependencies. We introduce TailCoR, a metric that combines (and disentangles) these linear and nonlinear dependencies. TailCoR between two variables is based on the tail interquantile range of a simple projection. It is dimension-free, it performs well in small samples, and no optimisations are needed. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.14817&r=all |
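The sketch below is one plausible reading of the construction described in the abstract: standardize the pair, project onto the 45-degree line, and compare a tail interquantile range of the projection with a central one. It illustrates the idea only; the exact TailCoR formula and normalization used by the authors are not reproduced here.

```python
# Illustrative tail-vs-centre interquantile-range ratio of a 45-degree projection.
# NOT the authors' exact TailCoR definition.
import numpy as np

def tail_iqr_measure(x, y, tail=0.95, centre=0.75):
    zx = (x - np.median(x)) / (np.quantile(x, 0.75) - np.quantile(x, 0.25))
    zy = (y - np.median(y)) / (np.quantile(y, 0.75) - np.quantile(y, 0.25))
    proj = (zx + zy) / np.sqrt(2)                        # projection on the 45-degree line
    tail_range = np.quantile(proj, tail) - np.quantile(proj, 1 - tail)
    centre_range = np.quantile(proj, centre) - np.quantile(proj, 1 - centre)
    return tail_range / centre_range                     # larger under tail co-movement

rng = np.random.default_rng(9)
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
gauss = rng.multivariate_normal([0, 0], corr, size=5000)
heavy = rng.standard_t(df=3, size=(5000, 2)) @ np.linalg.cholesky(corr).T
print("Gaussian pair:   ", round(tail_iqr_measure(*gauss.T), 3))
print("Heavy-tailed pair:", round(tail_iqr_measure(*heavy.T), 3))
```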
By: | Shaoxin Hong (Center for Economic Research, Shandong University, Jinan 250100, Shandong, China); Zhenyi Zhang (International School of Economics and Management, Capital University of Economics and Business, Beijing, Beijing 100070, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA) |
Abstract: | In this paper, we propose a Cramer-von Mises-type test statistic for testing heteroskedasticity in predictive regression when regressors are nonstationary. A Monte Carlo simulation study is conducted to illustrate the finite sample performance of the proposed test statistic, and a real empirical example is examined. |
Keywords: | Cramer-von Mises test statistic; Heteroskedasticity; Nonstationarity; Predictive regressions; Specification test. |
JEL: | C12 C22 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:kan:wpaper:202101&r=all |
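The following is a generic Cramer-von Mises-type statistic built from partial sums of centred squared residuals, shown only to fix ideas. It is not the authors' statistic for predictive regressions with nonstationary regressors, and in practice its critical values would have to be obtained by simulation or bootstrap.

```python
# Generic CvM-type functional of the partial-sum process of centred squared
# residuals -- an illustration of the statistic's flavour, not the paper's test.
import numpy as np

def cvm_heteroskedasticity_stat(residuals):
    e2 = residuals**2
    d = e2 - e2.mean()                                  # centred squared residuals
    partial = np.cumsum(d) / (np.std(e2, ddof=1) * np.sqrt(len(d)))
    return np.mean(partial**2)                          # CvM-type functional

rng = np.random.default_rng(10)
n = 500
e_homo = rng.normal(size=n)
e_hetero = rng.normal(size=n) * np.linspace(0.5, 2.0, n)   # variance drifts upward
print("homoskedastic:  ", round(cvm_heteroskedasticity_stat(e_homo), 3))
print("heteroskedastic:", round(cvm_heteroskedasticity_stat(e_hetero), 3))
```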
By: | Candelon, Bertrand; Luisi, Angelo |
Keywords: | Global VAR, Structural VAR, Likelihood Ratio Test, Interdependence |
JEL: | C12 C32 C52 E44 H63 |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:ajf:louvlf:2020009&r=all |
By: | Andersson, Jonas (Dept. of Business and Management Science, Norwegian School of Economics); Olden, Andreas (Dept. of Business and Management Science, Norwegian School of Economics); Rusina, Aija (Dept. of Business and Management Science, Norwegian School of Economics) |
Abstract: | In this paper we investigate the EM-estimator of the model by Caudill et al. (2005). The purpose of the model is to identify items, e.g. individuals or companies, that are wrongly classified as honest; an example of this is the detection of tax evasion. Normally, we observe two groups of items, labeled fraudulent and honest, but suspect that many of the observationally honest items are, in fact, fraudulent. The items observed as honest are therefore divided into two unobserved groups, honestH, representing the truly honest, and honestF, representing the items that are observed as honest but are actually fraudulent. By using a multinomial logit model and assuming commonality between the observed fraudulent and the unobserved honestF, Caudill et al. (2005) present a method that uses the EM-algorithm to separate them. By means of a Monte Carlo study, we investigate how well the method performs, and under what circumstances. We also study how well bootstrapped standard errors estimate the standard deviation of the parameter estimators. |
Keywords: | Fraud detection; EM-algorithm; multinomial logit model; Monte Carlo study |
JEL: | C00 C10 |
Date: | 2020–12–31 |
URL: | http://d.repec.org/n?u=RePEc:hhs:nhhfms:2020_015&r=all |
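A much simplified EM sketch of the idea is given below: items labelled honest are treated as a mixture of truly honest items and undetected fraudulent ones, with a single logistic model for true fraud and the detection probability treated as known to keep the example well identified. It is in the spirit of the approach, not Caudill et al.'s (2005) multinomial logit specification, and all parameter values are assumptions of the toy example.

```python
# Simplified EM for the "observed honest = honestH + honestF" idea: a single logit
# for true fraud, with a known detection probability. Not the Caudill et al. model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 5000
x = rng.normal(size=(n, 2))
true_fraud = rng.binomial(1, 1 / (1 + np.exp(-(x @ np.array([1.0, -1.5])))))
p_detect = 0.4                                           # treated as known in this sketch
labelled_fraud = true_fraud * rng.binomial(1, p_detect, size=n)

post = np.where(labelled_fraud == 1, 1.0, 0.3)           # initial P(true fraud | data)
clf = LogisticRegression(C=1e6)                          # near-unpenalized logit
for _ in range(30):
    # M-step: weighted logit on "soft" fraud labels (rows duplicated with weights)
    clf.fit(np.vstack([x, x]), np.r_[np.ones(n), np.zeros(n)],
            sample_weight=np.r_[post, 1 - post])
    # E-step: posterior fraud probability for items labelled honest
    pi = clf.predict_proba(x)[:, 1]
    num = (1 - p_detect) * pi
    post = np.where(labelled_fraud == 1, 1.0, num / (num + (1 - pi)))

print("fraud-model coefficients:", clf.coef_.ravel())    # should approach (1.0, -1.5)
```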
By: | Yaquan Zhang; Qi Wu; Nanbo Peng; Min Dai; Jing Zhang; Hu Wang |
Abstract: | The essence of multivariate sequential learning is all about how to extract dependencies in data. These data sets, such as hourly medical records in intensive care units and multi-frequency phonetic time series, oftentimes exhibit not only strong serial dependencies in the individual components (the "marginal" memory) but also non-negligible memories in the cross-sectional dependencies (the "joint" memory). Because of the multivariate complexity in the evolution of the joint distribution that underlies the data generating process, we take a data-driven approach and construct a novel recurrent network architecture, termed Memory-Gated Recurrent Networks (mGRN), with gates explicitly regulating two distinct types of memories: the marginal memory and the joint memory. Through a combination of comprehensive simulation studies and empirical experiments on a range of public datasets, we show that our proposed mGRN architecture consistently outperforms state-of-the-art architectures targeting multivariate time series. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.13121&r=all |
By: | Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Lu, Yang |
Abstract: | This paper studies multivariate mixtures with Wishart-Gamma mixing distribution. After having recalled the definition and main properties of Wishart distributions for random symmetric positive definite matrices, it is shown how they can be used to extend Gamma distributions to the multivariate case, by considering the joint distribution of the diagonal terms. The resulting distribution, which we call the Wishart-Gamma distribution, appears to be particularly useful to model correlated random effects in multivariate frequency, severity and duration models, leading to a closed-form likelihood function and posterior ratemaking formula. Three main applications are discussed to demonstrate the versatility of the Wishart-Gamma mixture models: (i) experience rating with several policies or guarantees per policyholder, (ii) experience rating taking into account the correlation between claim frequency and severity components, and (iii) dependence modeling between time-to-payment and amount of payment in micro-loss reserving when the ultimate payment is subject to censoring. Besides introducing the Wishart and Wishart-Gamma distributions, we are also among the first to employ techniques such as fractional integrals and symbolic calculation in the non-life actuarial literature. |
Keywords: | Multivariate Gamma distribution ; Laplace transform ; fractional calculus ; symbolic calculation ; Bayesian ratemaking ; credibility ; mixed Poisson distribution ; frailty model |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020016&r=all |
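The short check below illustrates the construction described: the diagonal of a Wishart random matrix yields correlated Gamma-distributed components (each marginal a scaled chi-square). The degrees of freedom and scale matrix are arbitrary example values.

```python
# Diagonal of a Wishart matrix = correlated Gamma (scaled chi-square) components.
import numpy as np
from scipy.stats import wishart

nu = 6.0                                         # degrees of freedom (example value)
Scale = np.array([[1.0, 0.6], [0.6, 2.0]])       # Wishart scale matrix (example value)
draws = wishart(df=nu, scale=Scale).rvs(size=100_000)    # shape (100000, 2, 2)
diag = draws[:, [0, 1], [0, 1]]                  # diagonal entries of each draw

# Marginally, W_ii is Scale_ii * chi2(nu), i.e. Gamma(shape=nu/2, scale=2*Scale_ii).
print("empirical means:", diag.mean(axis=0), " theoretical:", nu * np.diag(Scale))
print("empirical correlation of the diagonal terms:", np.corrcoef(diag.T)[0, 1])
```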
By: | Anna Baiardi (Erasmus University Rotterdam); Andrea A. Naghi (Erasmus University Rotterdam) |
Abstract: | A new and rapidly growing econometric literature is making advances in the problem of using machine learning (ML) methods for causal inference questions. Yet, the empirical economics literature has not started to fully exploit the strengths of these modern methods. We revisit influential empirical studies with causal machine learning methods and identify several advantages of using these techniques. We show that these advantages and their implications are empirically relevant and that the use of these methods can improve the credibility of causal analysis. |
Keywords: | Machine learning, causal inference, average treatment effects, heterogeneous treatment effects |
JEL: | D04 C01 C21 |
Date: | 2021–01–04 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20210001&r=all |
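As one concrete member of the toolbox the paper revisits, the sketch below computes a cross-fitted double/debiased machine learning estimate of a treatment effect in a partially linear model, using random forests as an arbitrary nuisance learner. It is a generic illustration with simulated data, not a replication of any of the empirical studies revisited in the paper.

```python
# Cross-fitted double ML for a partially linear model: partial out X from both the
# outcome and the treatment, then regress residuals on residuals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(14)
n = 4000
X = rng.normal(size=(n, 5))
propensity = 1 / (1 + np.exp(-X[:, 0]))
D = rng.binomial(1, propensity).astype(float)            # treatment depends on X
Y = 1.5 * D + np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)   # true effect = 1.5

res_Y, res_D = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    mY = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], Y[train])
    mD = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], D[train])
    res_Y[test] = Y[test] - mY.predict(X[test])           # partial out X from the outcome
    res_D[test] = D[test] - mD.predict(X[test])           # partial out X from the treatment
theta = (res_D @ res_Y) / (res_D @ res_D)                 # residual-on-residual regression
print("DML estimate of the treatment effect:", round(theta, 3))   # close to 1.5
```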
By: | N'Golo Kone |
Abstract: | The maximum diversification portfolio as defined by Choueifaty (2011) depends on the vector of asset volatilities and the inverse of the covariance matrix of asset returns. In practice, these two quantities need to be replaced by their sample statistics. The estimation error associated with the use of these sample statistics may be amplified due to (near) singularity of the covariance matrix in financial markets with many assets. This in turn may lead to the selection of portfolios that are far from optimal with respect to standard portfolio performance measures. To address this problem, we investigate three regularization techniques, namely the ridge, the spectral cut-off, and the Landweber-Fridman approaches, in order to stabilize the inverse of the covariance matrix. These regularization schemes involve a tuning parameter that needs to be chosen. In light of this fact, we propose a data-driven method for selecting the tuning parameter. We show that the selected portfolio by regularization is asymptotically efficient with respect to the diversification ratio. In empirical and Monte Carlo experiments, the resulting regularized rules are compared to several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy in terms of in-sample and out-of-sample Sharpe ratio performance, and it is shown that our method yields significant Sharpe ratio improvements. |
Keywords: | Portfolio selection, Maximum diversification, Regularization |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1450&r=all |
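The sketch below illustrates the ridge-regularized route: the unconstrained maximum-diversification weights are proportional to the inverse covariance matrix times the vector of volatilities, with a ridge term stabilizing the inverse when the sample covariance is ill-conditioned. The ridge parameter is fixed ad hoc here, whereas the paper proposes a data-driven choice and also studies spectral cut-off and Landweber-Fridman regularization.

```python
# Ridge-stabilized maximum-diversification weights and the diversification ratio.
import numpy as np

def max_diversification_weights(returns, ridge=0.0):
    S = np.cov(returns, rowvar=False)
    sigma = np.sqrt(np.diag(S))                       # asset volatilities
    w = np.linalg.solve(S + ridge * np.eye(S.shape[0]), sigma)
    return w / w.sum()

def diversification_ratio(w, returns):
    S = np.cov(returns, rowvar=False)
    return (w @ np.sqrt(np.diag(S))) / np.sqrt(w @ S @ w)

rng = np.random.default_rng(12)
n_obs, n_assets = 120, 100                            # nearly as many assets as observations:
common = rng.normal(size=(n_obs, 1))                  # the sample covariance is ill-conditioned
returns = 0.5 * common + rng.normal(scale=0.3, size=(n_obs, n_assets))

w = max_diversification_weights(returns, ridge=0.05)  # ridge value fixed ad hoc here
print("diversification ratio:", round(diversification_ratio(w, returns), 3))
```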
By: | Cl\'ement de Chaisemartin; Xavier D'Haultfoeuille |
Abstract: | We study linear regressions with period and group fixed effects, with several treatment variables. We show that under a parallel trends assumption, the coefficient of each treatment identifies the sum of two terms. The first term is a weighted sum of the average effect of that treatment in each group and period, with weights that may be negative. The second term is a weighted sum of the average effect of the other treatments in each group and period, with weights that may again be negative. Accordingly, the treatment coefficients in those regressions are not robust to heterogeneous effects across groups and over time, and may also be contaminated by the effect of other treatments. We propose an alternative estimator that does not suffer from those issues. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.10077&r=all |
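A stripped-down numerical check of the weighting problem, for a single binary treatment in a balanced panel, is sketched below: the two-way fixed-effects coefficient weights cell-level effects in proportion to the residual of the treatment on group and period fixed effects, and these weights can be negative. The paper's general results cover several treatments and their cross-contamination, and propose an alternative estimator, none of which is implemented here.

```python
# Weights behind a two-way FE coefficient with a single binary staggered treatment:
# proportional to the FE residual of the treatment, evaluated on treated cells.
import numpy as np

rng = np.random.default_rng(13)
G, T = 20, 10
groups = np.repeat(np.arange(G), T)
periods = np.tile(np.arange(T), G)
adoption = rng.integers(2, T, size=G)                       # staggered adoption dates
D = (periods >= adoption[groups]).astype(float)

# Residualize D on group and period fixed effects (balanced panel: double demeaning)
D_mat = D.reshape(G, T)
eps = (D_mat - D_mat.mean(axis=1, keepdims=True)
             - D_mat.mean(axis=0, keepdims=True) + D_mat.mean())

w = np.where(D_mat == 1, eps, 0.0)
w = w / w.sum()                                             # weights on treated (g,t) cells
print("share of treated cells with negative weight:",
      round((w[D_mat == 1] < 0).mean(), 3))
print("sum of negative weights:", round(w[w < 0].sum(), 3))
```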
By: | Hafner, Christian (Université catholique de Louvain, LIDAM/ISBA, Belgium); Herwartz, Helmut |
Abstract: | A model for dynamic independent component analysis is introduced where the dynamics are driven by the score of the pseudo likelihood with respect to the rotation angle of model innovations. While conditional second moments are invariant with respect to rotations, higher conditional moments are not, which may have important implications for applications. The pseudo maximum likelihood estimator of the model is shown to be consistent and asymptotically normally distributed. A simulation study reports good finite sample properties of the estimator, including the case of a mis-specification of the innovation density. In an application to a bivariate exchange rate series of the Euro and the British Pound against the US Dollar, it is shown that the model-implied conditional portfolio kurtosis largely aligns with narratives on financial stress as a result of the global financial crisis in 2008, the European sovereign debt crisis (2010-2013) and early rumours signalling that the UK would leave the European Union (2017). These insights are consistent with a recently proposed model that associates portfolio kurtosis with a geopolitical risk factor. |
Keywords: | structural vector autoregressions, multivariate GARCH, portfolio selection, risk management |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020031&r=all |
By: | Monica Billio (Department of Economics, University Of Venice Cà Foscari); Roberto Casarin (Department of Economics, University Of Venice Cà Foscari); Michele Costola (Department of Economics, University Of Venice Cà Foscari); Matteo Iacopini (Vrije Universiteit Amsterdam) |
Abstract: | Network models represent a useful tool to describe the complex set of financial relationships among heterogeneous firms in the system. In this paper, we propose a new semiparametric model for temporal multilayer causal networks with both intra- and inter-layer connectivity. A Bayesian model with a hierarchical mixture prior distribution is assumed to capture heterogeneity in the response of the network edges to a set of risk factors including the European COVID-19 cases. We measure the financial connectedness arising from the interactions between two layers defined by stock returns and volatilities. In the empirical analysis, we study the topology of the network before and after the spreading of the COVID-19 disease. |
Keywords: | Multilayer networks, financial markets, COVID-19 |
JEL: | C11 C58 G10 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2021:05&r=all |