New Economics Papers on Econometrics
By: | Manuel Denzer (Johannes Gutenberg University Mainz); Constantin Weiser (Johannes Gutenberg University Mainz) |
Abstract: | We propose a new method to detect weak identification in instrumental variable (IV) models. The method is based on the asymptotic normality of the estimated coefficients on the endogenous variables in the structural equation in the presence of strong identification. The resulting test is therefore more flexible than previous tests: it does not depend on a specific class of models but applies to a variety of linear and non-linear IV models, or mixtures of them, that can be estimated by the generalized method of moments (GMM). Moreover, our proposed test does not rely on assumptions of homoscedasticity or the absence of autocorrelation. For linear models estimated by two-stage least squares (2SLS), our novel test yields the same qualitative conclusions as the commonly applied test on the excluded instruments in the reduced form. By adopting the weak identification definitions of Stock and Yogo (2005), we provide critical values for our test by means of a comprehensive Monte Carlo simulation. This enables applied econometricians to make case-by-case decisions regarding weak identification in non-homoscedastic linear models by using pair bootstrapping procedures. Moreover, we show how our insights can be applied to assess weak identification in a specific non-linear IV model.
Keywords: | Weak identification, Weak instruments, Endogeneity, Bootstrap
JEL: | C26 C36 |
Date: | 2021–10–05 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:2107&r= |
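The paper's own test statistic and critical values are not reproduced here, but a minimal sketch of the ingredients it builds on can be useful: a simulated linear IV design with a weak instrument, the first-stage F statistic on the excluded instrument, and its pair-bootstrap distribution (resampling (y, x, z) rows jointly, so no homoscedasticity assumption is needed). All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
pi = 0.15                                    # small first-stage coefficient -> weakly identified design
z = rng.normal(size=n)
u = rng.normal(size=n)
x = pi * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 * x + u + rng.normal(size=n)

def first_stage_F(z, x):
    """F statistic of the excluded instrument in the first-stage regression x = a + b*z + v."""
    Z = np.column_stack([np.ones_like(z), z])
    b, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ b
    s2 = resid @ resid / (len(x) - 2)
    var_b = s2 * np.linalg.inv(Z.T @ Z)[1, 1]
    return b[1] ** 2 / var_b

F_hat = first_stage_F(z, x)

# Pair bootstrap: resample (y, x, z) rows jointly, which keeps any heteroscedasticity intact.
B = 999
F_boot = np.empty(B)
for r in range(B):
    idx = rng.integers(0, n, size=n)
    F_boot[r] = first_stage_F(z[idx], x[idx])

print(f"first-stage F on the original sample : {F_hat:.2f}")
print(f"5% quantile of the bootstrapped F    : {np.quantile(F_boot, 0.05):.2f}")
```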
By: | Johannes S. Kunz (Monash University); Kevin E. Staub (University of Melbourne); Rainer Winkelmann (University of Zurich) |
Abstract: | Many applied settings in empirical economics require estimation of a large number of individual effects, like teacher effects or location effects; in health economics, prominent examples include patient effects, doctor effects, or hospital effects. Increasingly, these effects are the object of interest of the estimation, and predicted effects are often used for further descriptive and regression analyses. To avoid imposing distributional assumptions on these effects, they are typically estimated via fixed effects methods. In short panels, the conventional maximum likelihood estimator for fixed effects binary response models provides poor estimates of these individual effects since the finite sample bias is typically substantial. We present a bias-reduced fixed effects estimator that provides better estimates of the individual effects in these models by removing the first-order asymptotic bias. An additional, practical advantage of the estimator is that it provides finite predictions for all individual effects in the sample, including those for which the corresponding dependent variable has identical outcomes in all time periods (either all zeros or all ones); for these, the maximum likelihood prediction is infinite. We illustrate the approach in simulation experiments and in an application to health care utilization. A Stata estimation command is available at [Github:brfeglm](https://github.com/JohannesSKunz/brfeglm)
Keywords: | Incidental parameter bias, Perfect prediction, Fixed effects, Panel data, Bias reduction |
JEL: | C23 C25 I11 I18 |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:ajr:sodwps:2021-05&r= |
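A hedged sketch of the two problems the bias-reduced estimator addresses, not of the estimator itself: in a simulated short-panel fixed-effects logit, units whose outcome is constant over time have infinite maximum likelihood effect estimates, and the dummy-variable ML slope suffers from the incidental parameter bias. The data-generating values and the use of a generic optimizer are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
N, T, beta_true = 300, 4, 1.0
alpha = rng.normal(size=N)                      # individual effects
x = rng.normal(size=(N, T))
y = (rng.uniform(size=(N, T)) < expit(alpha[:, None] + beta_true * x)).astype(float)

# Units with identical outcomes in all periods: their ML fixed-effect estimate is +/- infinity.
all_same = (y.sum(axis=1) == 0) | (y.sum(axis=1) == T)
print(f"units with all-0 or all-1 outcomes (infinite ML effect): {all_same.sum()} of {N}")

keep = ~all_same
xk, yk = x[keep], y[keep]
Nk = int(keep.sum())

def negloglik(theta):
    # Negative log-likelihood of the logit model with a dummy (intercept) per retained unit.
    beta, a = theta[0], theta[1:]
    eta = a[:, None] + beta * xk
    return (yk * np.logaddexp(0, -eta) + (1 - yk) * np.logaddexp(0, eta)).sum()

res = minimize(negloglik, np.zeros(1 + Nk), method="L-BFGS-B")
print(f"true beta = {beta_true}, dummy-variable ML beta = {res.x[0]:.2f} "
      "(typically noticeably above the truth when T is small)")
```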
By: | Yi Qian; Hui Xie |
Abstract: | Unlike random sampling, selective sampling draws units based on the outcome values, such as over-sampling rare events in choice outcomes and extreme activities on continuous and count outcomes. Despite its high cost effectiveness for marketing research, such endogenously selected samples must be carefully analyzed to avoid selection bias. We introduce a unified and efficient approach based on semiparametric odds ratio (SOR) models applicable to categorical, continuous and count response data collected using selective sampling. Unlike extant sampling-adjusting methods and Heckman-type selection models, the proposed approach requires neither modeling selection mechanisms nor imposing parametric distributional assumptions on the response variables, eliminating both sources of mis-specification bias. Using this approach, one can quantify and test for the relationships among variables as if samples had been collected via random sampling, simplifying bias correction of endogenously selected samples. We evaluate and illustrate the method using extensive simulation studies and two real data examples: endogenously stratified sampling for linear/nonlinear regressions to identify drivers of the share-of-wallet outcome for cigarette smokers, and truncated and on-site samples for count data models of store shopping demand. The evaluation shows that selective sampling followed by applying the SOR approach reduces the required sample size by more than 70% compared with random sampling, and that in a wide range of selective sampling scenarios SOR offers novel solutions outperforming extant methods for selective samples, with opportunities to make better managerial decisions.
JEL: | C01 C1 C5 C8 |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28801&r= |
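The SOR approach itself is not sketched here; the snippet below only illustrates, under an assumed linear model, the selection bias that outcome-based sampling creates for naive OLS and how a standard inverse-probability-weighted adjustment (one of the extant sampling-adjusting methods the paper contrasts itself with) removes it when the selection probabilities are known.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 20000
x = rng.normal(size=N)
y = 1.0 + 2.0 * x + rng.normal(size=N)

# Outcome-based selection: extreme outcomes are over-sampled relative to typical ones.
p_sel = np.where(np.abs(y - y.mean()) > 2.0, 0.9, 0.05)
sel = rng.uniform(size=N) < p_sel
xs, ys, ps = x[sel], y[sel], p_sel[sel]

def ols(x, y, w=None):
    """(Weighted) least squares of y on a constant and x."""
    w = np.ones_like(y) if w is None else w
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print("full random sample            :", np.round(ols(x, y), 3))
print("selective sample, naive OLS   :", np.round(ols(xs, ys), 3))
print("selective sample, 1/p weighted:", np.round(ols(xs, ys, 1 / ps), 3))
```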
By: | Yu, Jun (School of Economics, Singapore Management University) |
Abstract: | This paper proposes a class of state-space models where the state equation is a local-to-unity process. The large sample theory is obtained for the least squares (LS) estimator of the autoregressive (AR) parameter in the AR representation of the model under two sets of conditions. In the first set of conditions, the error term in the observation equation is independent and identically distributed (iid), and the error term in the state equation is stationary and fractionally integrated with memory parameter H ∈ (0, 1). It is shown that both the rate of convergence and the asymptotic distribution of the LS estimator depend on H. In the second set of conditions, the error term in the observation equation is independent but not necessarily identically distributed, and the error term in the state equation is strong mixing. When both error terms are iid, we also develop the asymptotic theory for an instrumental variable estimator. Special cases of our models are discussed.
Keywords: | State-space; Local-to-unity; O-U process; Fractional O-U process; Fractional Brownian motion; Fractional integration; Instrumental variable |
JEL: | C12 C22 G01 |
Date: | 2021–05–05 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2021_004&r= |
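A small simulation of a special case of the model, assuming iid errors in both equations (the paper's conditions are far more general, allowing fractionally integrated or strong-mixing state errors): the latent state is local-to-unity with rho_n = 1 + c/n, and the LS estimator of the AR coefficient is computed from the observed series.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ls(n, c, reps=2000):
    """LS estimate of the AR(1) coefficient of the observed series y_t = x_t + e_t,
    where the latent state is local-to-unity: x_t = (1 + c/n) x_{t-1} + u_t."""
    rho_n = 1 + c / n
    est = np.empty(reps)
    for r in range(reps):
        u = rng.normal(size=n)
        e = rng.normal(size=n)          # iid error in the observation equation
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho_n * x[t - 1] + u[t]
        y = x + e
        est[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    return rho_n, est

for n in (100, 400, 1600):
    rho_n, est = simulate_ls(n, c=-5)
    print(f"n={n:5d}  rho_n={rho_n:.4f}  mean LS estimate={est.mean():.4f}")
```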
By: | AmirEmad Ghassami; Numair Sani; Yizhen Xu; Ilya Shpitser |
Abstract: | In many applications, researchers are interested in the direct and indirect causal effects of an intervention on an outcome of interest. Mediation analysis offers a rigorous framework for the identification and estimation of such causal quantities. In the case of binary treatment, efficient estimators for the direct and indirect effects are derived by Tchetgen Tchetgen and Shpitser (2012). These estimators are based on influence functions and possess desirable multiple robustness properties. However, they are not readily applicable when treatments are continuous, which is the case in several settings, such as drug dosage in medical applications. In this work, we extend the influence function-based estimator of Tchetgen Tchetgen and Shpitser (2012) to deal with continuous treatments by utilizing a kernel smoothing approach. We first demonstrate that our proposed estimator preserves the multiple robustness property of the estimator in Tchetgen Tchetgen and Shpitser (2012). Then we show that under certain mild regularity conditions, our estimator is asymptotically normal. Our estimation scheme allows for high-dimensional nuisance parameters that can be estimated at slower rates than the target parameter. Additionally, we utilize cross-fitting, which allows for weaker smoothness requirements for the nuisance functions. |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.09254&r= |
By: | Farbmacher, Helmut; Tauchmann, Harald |
Abstract: | This paper demonstrates that popular linear fixed-effects panel-data estimators are biased and inconsistent when applied in a discrete-time hazard setting, that is, one in which the outcome variable is a binary dummy indicating an absorbing state. This holds even if the data-generating process is fully consistent with the linear discrete-time hazard model. In addition to conventional survival bias, these estimators suffer from another, frequently severe, source of bias that originates from the data transformation itself and, unlike survival bias, is present even in the absence of any unobserved heterogeneity. We suggest an alternative estimation strategy: instrumental variables estimation using first differences of the exogenous variables as instruments for their levels. Monte Carlo simulations and an empirical application substantiate our theoretical results.
Keywords: | linear probability model, individual fixed effects, discrete-time hazard, absorbing state, survival bias, instrumental variables estimation
JEL: | C23 C25 C41 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032021&r= |
By: | Victor Chernozhukov; Chen Huang; Weining Wang |
Abstract: | This paper studies the estimation of network connectedness with a focally sparse structure. We aim to uncover the network effect with a flexible sparse deviation from a predetermined adjacency matrix; this sparse deviation structure can be regarded as latent or misspecified linkages. To obtain high-quality estimators of the parameters of interest, we propose a double regularized high-dimensional generalized method of moments (GMM) framework, which also facilitates inference. Theoretical results on consistency and asymptotic normality are provided, accounting for general spatial and temporal dependence of the underlying data-generating processes. Simulations demonstrate the good performance of our proposed procedure. Finally, we apply the methodology to study the spatial network effect of stock returns.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.07424&r= |
By: | Frank Kleibergen (University of Amsterdam, Kennesaw State University); Zhaoguo Zhan (University of Amsterdam, Kennesaw State University) |
Abstract: | We propose the double robust Lagrange multiplier (DRLM) statistic for testing hypotheses specified on the pseudo-true value of the structural parameters in the generalized method of moments. The pseudo-true value is defined as the minimizer of the population continuous updating objective function and equals the true value of the structural parameter in the absence of misspecification. The (bounding) chi-squared limiting distribution of the DRLM statistic is robust to both misspecification and weak identification of the structural parameters, hence its name. To emphasize its importance for applied work, we use the DRLM test to analyze the return on education, which is often perceived to be weakly identified, using data from Card (1995) where misspecification occurs in case of treatment heterogeneity; and to analyze the risk premia associated with risk factors proposed in Adrian et al. (2014) and He et al. (2017), where both misspecification and weak identification need to be addressed.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.08345&r= |
By: | Edwin Fourrier-Nicolai (Aix Marseille Univ, CNRS, AMSE, Marseille, France and Toulouse School of Economics, Université Toulouse Capitole, Toulouse, France); Michel Lubrano (School of Economics, Jiangxi University of Finance and Economics & Aix-Marseille Univ., CNRS, AMSE) |
Abstract: | The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes, leading to problems in the tails. We propose a series of parametric models in a Bayesian framework. A first solution consists in modelling the underlying income distribution using simple densities for which the quantile function has a closed analytical form. This solution is extended by considering a mixture model for the underlying income distribution; in this case, however, the quantile function is only semi-explicit and has to be evaluated numerically. The alternative solution consists in directly fitting a functional form to the Lorenz curve and deriving its first-order derivative to obtain the corresponding quantile function. We compare these models first by Monte Carlo simulations and second using UK data from the Family Expenditure Survey, where we devote particular attention to the analysis of subgroups.
Keywords: | Bayesian inference, growth incidence curve, inequality
JEL: | C11 D31 I31 |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:aim:wpaimx:2131&r= |
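For reference, the distribution-free growth incidence curve that the Bayesian parametric and Lorenz-curve approaches are designed to improve on can be computed directly from empirical quantiles. The sketch below does this on simulated lognormal incomes (all distributional choices are illustrative); it is exactly in the far tails of such curves that the nonparametric estimator becomes erratic at usual sample sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two simulated income distributions (lognormal), period 1 and period 2.
y1 = rng.lognormal(mean=0.0, sigma=0.6, size=800)
y2 = rng.lognormal(mean=0.05, sigma=0.7, size=800)   # growth plus rising inequality

p = np.linspace(0.01, 0.99, 99)
q1 = np.quantile(y1, p)
q2 = np.quantile(y2, p)

# Distribution-free growth incidence curve: growth rate of each quantile between periods.
gic = q2 / q1 - 1

for pc in (0.05, 0.25, 0.50, 0.75, 0.95, 0.99):
    i = np.argmin(np.abs(p - pc))
    print(f"quantile {pc:.2f}: growth {100 * gic[i]:.1f}%")
```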
By: | Maurice J. G. Bun (De Nederlandse Bank and University of Amsterdam); Frank Kleibergen (De Nederlandse Bank and University of Amsterdam) |
Abstract: | We use identification robust tests to show that difference, level and non-linear moment conditions, as proposed by Arellano and Bond (1991), Arellano and Bover (1995), Blundell and Bond (1998) and Ahn and Schmidt (1995) for the linear dynamic panel data model, do not separately identify the autoregressive parameter when its true value is close to one and the variance of the initial observations is large. We prove that combinations of these moment conditions, however, do so when there are more than three time series observations. This identification then solely results from a set of so-called robust moment conditions. These robust moments are spanned by the combined difference, level and non-linear moment conditions and only depend on differenced data. We show that, when only the robust moments contain identifying information on the autoregressive parameter, the discriminatory power of the Kleibergen (2005) LM test using the combined moments is identical to the largest rejection frequencies that can be obtained from solely using the robust moments. This shows that the KLM test implicitly uses the robust moments when only they contain information on the autoregressive parameter.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.08346&r= |
By: | Atsushi Inoue; Tong Li; Qi Xu |
Abstract: | This paper proposes a new framework to evaluate unconditional quantile effects (UQE) in a data combination model. The UQE measures the effect of a marginal counterfactual change in the unconditional distribution of a covariate on quantiles of the unconditional distribution of a target outcome. Under rank similarity and conditional independence assumptions, we provide a set of identification results for UQEs when the target covariate is continuously distributed and when it is discrete, respectively. Based on these identification results, we propose semiparametric estimators and establish their large sample properties under primitive conditions. Applying our method to a variant of Mincer's earnings function, we study the counterfactual quantile effect of actual work experience on income. |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.09445&r= |
By: | Adeola Oyenubi |
Abstract: | This paper considers the sensitivity of Genetic Matching (GenMatch) to the choice of balance measure. It explores the performance of a newly introduced distributional balance measure that is similar to the KS test but is more evenly sensitive to imbalance across the support. This measure was introduced by Goldman & Kaplan (2008) (the GK measure). This matters because the rationale behind distributional balance measures is their ability to provide a broader description of balance. I also consider the performance of multivariate balance measures, namely distance covariance and distance correlation. This is motivated by the fact that balance for causal inference ideally refers to balance in the joint density, and balance in a set of univariate distributions does not necessarily imply balance in the joint distribution. Simulation results show that GK dominates the KS test in terms of bias and mean squared error (MSE), and that the distance correlation measure dominates all other measures in terms of bias and MSE. These results have two important implications for the choice of balance measure: (i) even sensitivity across the support is important, and not all distributional measures have this property; (ii) multivariate balance measures can improve the performance of matching estimators.
Keywords: | Genetic matching, balance measures, causal inference, Machine learning |
JEL: | I38 H53 C21 D13 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:rza:wpaper:840&r= |
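Neither GenMatch nor the GK measure is implemented below; the sketch only contrasts the two kinds of balance checks the paper discusses, a univariate KS statistic per covariate and a multivariate distance correlation between the covariates and the treatment indicator, on simulated data where one covariate differs between groups only in spread.

```python
import numpy as np
from scipy.stats import ks_2samp

def dcor(X, Y):
    """Sample distance correlation between arrays X (n x p) and Y (n x q)."""
    def centered(M):
        D = np.sqrt(((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1))
        return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()
    a, b = centered(X), centered(Y)
    dcov2 = (a * b).mean()
    return np.sqrt(dcov2 / np.sqrt((a * a).mean() * (b * b).mean()))

rng = np.random.default_rng(4)
n = 400
treat = rng.integers(0, 2, size=n)
x1 = rng.normal(loc=0.4 * treat, size=n)           # mean imbalance
x2 = rng.normal(scale=1 + 0.5 * treat, size=n)     # variance imbalance only (means equal)

# Univariate balance check: two-sample KS statistic per covariate.
for name, x in [("x1", x1), ("x2", x2)]:
    print(name, "KS statistic:", round(ks_2samp(x[treat == 1], x[treat == 0]).statistic, 3))

# Multivariate balance check: distance correlation between the covariates and treatment.
X = np.column_stack([x1, x2])
print("distance correlation (X, treat):", round(dcor(X, treat[:, None].astype(float)), 3))
```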
By: | Dolado, Juan J.; Rachinger, Heiko; Velasco, Carlos |
Abstract: | We consider a single-step Lagrange Multiplier (LM) test for joint breaks (at known or unknown dates) in the long memory parameter, the short-run dynamics and the level of a fractionally integrated time-series process. The regression version of this test is easily implementable and allows one to identify the specific sources of the break when the null hypothesis of parameter stability is rejected. However, its size and power properties are sensitive to the correct specification of short-run dynamics under the null. To address this problem, we propose a slight modification of the LM test (labeled LMW-type test) which also makes use of some information under the alternative (in the spirit of a Wald test). This test shares the same limiting distribution as the LM test under the null and local alternatives but achieves higher power by facilitating the correct specification of the short-run dynamics under the null and any alternative (either local or fixed). Monte Carlo simulations provide support for these theoretical results. An empirical application, concerning the origin of shifts in the long-memory properties of forward discount rates in five G7 countries, illustrates the usefulness of the proposed LMW-type test.
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:15435&r= |
By: | Kiran Tomlinson; Johan Ugander; Austin R. Benson |
Abstract: | Standard methods in preference learning involve estimating the parameters of discrete choice models from data of selections (choices) made by individuals from a discrete set of alternatives (the choice set). While there are many models for individual preferences, existing learning methods overlook how choice set assignment affects the data. Often, the choice set itself is influenced by an individual's preferences; for instance, a consumer choosing a product from an online retailer is often presented with options from a recommender system that depend on information about the consumer's preferences. Ignoring these assignment mechanisms can mislead choice models into making biased estimates of preferences, a phenomenon that we call choice set confounding; we demonstrate the presence of such confounding in widely-used choice datasets. To address this issue, we adapt methods from causal inference to the discrete choice setting. We use covariates of the chooser for inverse probability weighting and/or regression controls, accurately recovering individual preferences in the presence of choice set confounding under certain assumptions. When such covariates are unavailable or inadequate, we develop methods that take advantage of structured choice set assignment to improve prediction. We demonstrate the effectiveness of our methods on real-world choice data, showing, for example, that accounting for choice set confounding makes choices observed in hotel booking and commute transportation more consistent with rational utility-maximization. |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.07959&r= |
By: | Igor Custodio João (Vrije Universiteit Amsterdam); Andre Lucas (Vrije Universiteit Amsterdam); Julia Schaumburg (Vrije Universiteit Amsterdam) |
Abstract: | We introduce a new method for dynamic clustering of panel data with dynamics for cluster location and shape, cluster composition, and for the number of clusters. Whereas current techniques typically result in (economically) too many switches, our method results in economically more meaningful dynamic clustering patterns. It does so by extending standard cross-sectional clustering techniques using shrinkage towards previous cluster means. In this way, the different cross-sections in the panel are tied together, substantially reducing short-lived switches of units between clusters (flickering) and the birth and death of incidental, economically less meaningful clusters. In a Monte Carlo simulation, we study how to set the penalty parameter in a data-driven way. A systemic risk surveillance example for business model classification in the global insurance industry illustrates how the new method works empirically. |
Keywords: | dynamic clustering, shrinkage, cluster membership persistence, Silhouette index, insurance |
JEL: | G22 C33 C38 |
Date: | 2021–05–10 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20210040&r= |
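The paper's penalized clustering objective is not reproduced here; the following is a crude stand-in that conveys the idea of shrinkage towards previous cluster means: per-period k-means is initialized at, and its centroids blended with, the previous period's means, which dampens flickering cluster assignments. The shrinkage weight lam is an illustrative tuning parameter, not the paper's data-driven penalty.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
T, n, k, lam = 6, 150, 3, 0.5          # lam: shrinkage toward last period's means (illustrative)

# Panel of 2-d features: three slowly drifting clusters.
base = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
data = [np.vstack([base[j] + 0.1 * t + rng.normal(scale=0.6, size=(n // k, 2))
                   for j in range(k)]) for t in range(T)]

prev_centers = None
for t, Xt in enumerate(data):
    if prev_centers is None:
        centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xt).cluster_centers_
    else:
        # Fit k-means started at last period's means, then shrink the new means toward them.
        km = KMeans(n_clusters=k, init=prev_centers, n_init=1, random_state=0).fit(Xt)
        centers = (1 - lam) * km.cluster_centers_ + lam * prev_centers
    labels = np.argmin(((Xt[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    prev_centers = centers
    print(f"t={t}: cluster sizes {np.bincount(labels, minlength=k)}")
```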
By: | Clément de Chaisemartin
Abstract: | I consider the estimation of the average treatment effect (ATE), in a population that can be divided into $G$ groups, and such that one has unbiased and uncorrelated estimators of the conditional average treatment effect (CATE) in each group. These conditions are for instance met in stratified randomized experiments. I first assume that the outcome is homoscedastic, and that each CATE is bounded in absolute value by $B$ standard deviations of the outcome, for some known constant $B$. I derive, across all linear combinations of the CATEs' estimators, the estimator of the ATE with the lowest worst-case mean-squared error. This optimal estimator assigns a weight equal to group $g$'s share in the population to the most precisely estimated CATEs, and a weight proportional to one over the CATE's variance to the least precisely estimated CATEs. This optimal estimator is feasible: the weights only depend on known quantities. I then allow for heteroskedasticity and for positive correlations between the estimators. This latter condition is often met in differences-in-differences designs, where the CATEs are estimators of the effect of having been treated for a certain number of time periods. In that case, the optimal estimator is no longer feasible, as it depends on unknown quantities, but a feasible estimator can easily be constructed by replacing those unknown quantities by estimators. |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.08766&r= |
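The minimax-optimal weights derived in the paper interpolate between two polar combinations of the group-level CATE estimators; the sketch below, with purely illustrative numbers, computes those two poles, the share-weighted (unbiased, possibly high-variance) and the inverse-variance-weighted (low-variance, biased when CATEs differ) estimators of the ATE, to show the trade-off the optimal estimator resolves.

```python
import numpy as np

rng = np.random.default_rng(6)

shares = np.array([0.5, 0.3, 0.2])              # group shares in the population
cate = np.array([1.0, 2.0, 0.5])                # true conditional average treatment effects
se = np.array([0.05, 0.10, 0.80])               # standard errors of the group-level estimators

cate_hat = cate + se * rng.normal(size=3)       # unbiased, uncorrelated CATE estimators

# Share-weighted combination: unbiased for the ATE but can have high variance
# when one group's CATE is imprecisely estimated.
ate_share = shares @ cate_hat
var_share = (shares ** 2) @ (se ** 2)

# Inverse-variance weighting: lower variance, but biased for the ATE
# whenever the CATEs differ across groups.
w_iv = (1 / se ** 2) / (1 / se ** 2).sum()
ate_iv = w_iv @ cate_hat
var_iv = (w_iv ** 2) @ (se ** 2)

print(f"true ATE                : {shares @ cate:.3f}")
print(f"share-weighted estimate : {ate_share:.3f}  (sd {np.sqrt(var_share):.3f})")
print(f"precision-weighted est. : {ate_iv:.3f}  (sd {np.sqrt(var_iv):.3f})")
```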
By: | Dong, Xueqi; Liu, Shuo Li |
Abstract: | Likelihood functions are central to statistical inference. For discrete choice data, conventional likelihood functions are specified by random utility (RU) models, such as logit and tremble, which generate choice stochasticity through an "error", or, equivalently, a random preference. For risky discrete choice, this paper explores an alternative method to construct the likelihood function: Rational Expectation Stochastic Choice (RESC). In line with Machina (1985), the subject optimally and deterministically chooses a stochastic choice function among all possible stochastic choice functions; the choice stochasticity can be explained by risk aversion and the relaxation of the reduction of compound lotteries. The model maximizes a simple two-layer expectation that disentangles risk and randomization, in a similar spirit to Klibanoff et al. (2005), where ambiguity and risk are disentangled. The model is applied to an experiment, where we do not commit to a particular stochastic choice function but let the data speak. In RESC, well-developed decision analysis methods for measuring attitudes toward objective probability can also be applied to measure attitudes toward the implied choice probability. Stochastic choice functions are structurally estimated, and standard discrimination tests are used to compare the goodness of fit of RESC and different RUs. The RU benchmarks are Expected Utility+logit and other leading contenders for describing decision under risk. The results suggest the statistical superiority of RESC over "error" rules. With weakly fewer parameters, RESC outperforms the different benchmark RU models for 30%-89% of subjects, while RU models outperform RESC for 0%-2% of subjects. Similar statistical superiority is replicated in a second set of experimental data.
Keywords: | Experiment; Likelihood Function; Maximum Likelihood Identification; Risk Aversion Parameter; Clarke Test; Discrimination of Stochastic Choice Functions
JEL: | D8 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:107678&r= |
By: | María Florencia Gabrielli (CONICET. Universidad Nacional de Cuyo.); Manuel Willington (Universidad Adolfo Ibañez) |
Abstract: | We propose a structural method for estimating the revenue losses associated with bidding rings in symmetric and asymmetric first-price auctions. It is based on the structural analysis of auction data and is consistent with antitrust damage assessment methodologies: we build a but-for (competitive) scenario and estimate the differences between the two scenarios. We show in a Monte Carlo exercise that our methodology performs very well in moderately sized samples. We apply it to the Ohio milk data set analyzed by Porter and Zona [1999] and find that damages are around 7%. Damages can be assessed without any information about unaffected markets.
Keywords: | Collusion; First-price auctions; Damages
JEL: | C1 C4 C7 D44 L4 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:aoz:wpaper:5&r= |
By: | D'Haultfoeuille, Xavier; Gaillac, Christophe; Maurel, Arnaud |
Abstract: | In this paper, we build a new test of rational expectations based on the marginal distributions of realizations and subjective beliefs. This test is widely applicable, including in the common situation where realizations and beliefs are observed in two different datasets that cannot be matched. We show that whether one can rationalize rational expectations is equivalent to the distribution of realizations being a mean-preserving spread of the distribution of beliefs. The null hypothesis can then be rewritten as a system of many moment inequality and equality constraints, for which tests have been recently developed in the literature. The test is robust to measurement errors under some restrictions and can be extended to account for aggregate shocks. Finally, we apply our methodology to test for rational expectations about future earnings. While individuals tend to be right on average about their future earnings, our test strongly rejects rational expectations.
Keywords: | Rational expectations; Test; Subjective expectations; Data combination
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:125599&r= |
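A back-of-the-envelope version of the paper's identification idea, without the formal moment-inequality inference: realizations can be rationalized by rational expectations only if they are a mean-preserving spread of beliefs, i.e. the means are equal and the integrated-CDF inequalities E[(x - realized)^+] >= E[(x - beliefs)^+] hold at every x. The sketch checks the empirical counterparts on simulated, unmatched samples.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unmatched samples: subjective beliefs about future earnings and realized earnings.
beliefs = rng.normal(loc=30.0, scale=4.0, size=1000)
realized = rng.normal(loc=30.0, scale=7.0, size=1200)   # same mean, larger dispersion

# Mean-preserving spread check on a grid of evaluation points x.
grid = np.linspace(min(beliefs.min(), realized.min()),
                   max(beliefs.max(), realized.max()), 200)
lhs = np.array([np.maximum(x - realized, 0).mean() for x in grid])
rhs = np.array([np.maximum(x - beliefs, 0).mean() for x in grid])

print(f"mean difference (realized - beliefs): {realized.mean() - beliefs.mean():.3f}")
print(f"largest violation of the spread inequalities: {np.maximum(rhs - lhs, 0).max():.4f}")
```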
By: | Debopam Bhattacharya; Tatiana Komarova |
Abstract: | The econometric literature on program evaluation and optimal treatment choice takes functionals of outcome distributions as target welfare, and ignores program impacts on unobserved utilities, including utilities of those whose outcomes may be unaffected by the intervention. We show that in the practically important setting of discrete choice, under general preference heterogeneity and income effects, the distribution of indirect utility is nonparametrically identified from average demand. This enables cost-benefit analysis and treatment targeting based on social welfare and planners' distributional preferences, while also allowing for general unobserved heterogeneity in individual preferences. We demonstrate theoretical connections between utilitarian social welfare and Hicksian compensation. An empirical application illustrates our results.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.08689&r= |
By: | Antonio Peyrache (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia); Maria C. A. Silva (CEGE - Católica Porto Business School)
Abstract: | Network Data Envelopment Analysis (DEA) has become a widely researched topic in the DEA literature. In this paper we consider one of the simplest network models: parallel network DEA models. We briefly review a large body of literature that relates to these network models. Then we proceed to discuss existing models and point out some of their pitfalls. Finally, we propose an approach that attempts to resolve these pitfalls, recognising that when one computes a decision making unit (DMU) efficiency score and wants to decompose it into the divisional/process efficiencies, there is a component of allocative inefficiency. We develop our models at three levels of aggregation: the sub-unit (production division/process), the DMU (firm) and the industry. For each level we measure inefficiency using the directional distance function, and we relate the different levels to each other by proposing a decomposition into exhaustive and mutually exclusive components. We illustrate the application of our models to the case of Portuguese hospitals and we also propose avenues for future research, since most of the topics addressed in this paper relate not only to parallel network models but to general network structures.
Keywords: | Data Envelopment Analysis; Multi-Level Networks; Parallel Networks; Directional Distance Function; Efficiency. |
Date: | 2021–02 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:159&r= |
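As a point of reference for the directional distance function used throughout the paper, the sketch below solves a standard single-level, constant-returns-to-scale DEA program for each DMU with scipy's linear programming routine; the parallel-network decomposition into sub-unit, DMU and industry components is not reproduced, and the data are a made-up toy example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs, 2 inputs (rows of X), 1 output (rows of Y).
X = np.array([[2.0, 3.0, 4.0, 5.0, 6.0],
              [4.0, 2.0, 5.0, 3.0, 6.0]])
Y = np.array([[3.0, 3.0, 5.0, 4.0, 5.0]])
n_in, n_dmu = X.shape

def ddf(j, gx=None, gy=None):
    """CRS directional distance function for DMU j:
    max beta  s.t.  Y @ lam >= Y[:, j] + beta*gy,  X @ lam <= X[:, j] - beta*gx,  lam >= 0."""
    gx = X[:, j] if gx is None else gx            # default direction: the DMU's own input/output mix
    gy = Y[:, j] if gy is None else gy
    c = np.concatenate([[-1.0], np.zeros(n_dmu)])          # minimize -beta, i.e. maximize beta
    A_out = np.hstack([gy[:, None], -Y])                   # beta*gy - Y@lam <= -y0
    A_in = np.hstack([gx[:, None], X])                     # beta*gx + X@lam <= x0
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([-Y[:, j], X[:, j]])
    bounds = [(0, None)] * (1 + n_dmu)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for j in range(n_dmu):
    print(f"DMU {j}: directional inefficiency = {ddf(j):.3f}")
```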
By: | Canova, Fabio |
Abstract: | I examine the properties of cross sectional estimates of multipliers, elasticities, or pass-throughs when the data are generated by a conventional multi-unit time series specification. A number of important biases plague such estimates; the most relevant one occurs when the cross section is not dynamically homogeneous. I suggest methods that can deal with this problem and show the magnitude of the biases that cross sectional estimators display in an experimental setting. I contrast average time series and average cross sectional estimates of local fiscal multipliers for US states.
Keywords: | Cross sectional methods; dynamic heterogeneity; fiscal multipliers; Monetary pass-through; partial pooling |
JEL: | E0 H6 H7 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:15330&r= |
By: | Dodo Natatou Moutari; Hassane Abba Mallam; Diakarya Barro; Bisso Saley |
Abstract: | This study aims to widen the sphere of practical applicability of the HAC model combined with the ARMA-APARCH volatility forecasting model and extreme value theory. A sequential process for modeling the VaR of a portfolio based on the ARMA-APARCH-EVT-HAC model is discussed. The empirical analysis, conducted with data from international stock market indices, clearly illustrates the performance and accuracy of modeling based on HACs.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.09473&r= |
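Only the EVT step of the ARMA-APARCH-EVT-HAC pipeline is sketched here, and on raw simulated heavy-tailed returns rather than on ARMA-APARCH-filtered residuals; the HAC (hierarchical Archimedean copula) aggregation across assets is not reproduced. The peaks-over-threshold VaR formula and the threshold choice are the usual textbook ones.

```python
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(8)
returns = student_t.rvs(df=4, size=5000, random_state=rng) * 0.01   # heavy-tailed returns
losses = -returns

# EVT step: fit a generalized Pareto distribution to losses above a high threshold.
u = np.quantile(losses, 0.95)
exceed = losses[losses > u] - u
xi, _, scale = genpareto.fit(exceed, floc=0)      # shape xi and scale of the GPD tail

# Peaks-over-threshold VaR at level alpha:
# VaR_alpha = u + (scale/xi) * ((n/Nu * (1 - alpha))**(-xi) - 1)
n, Nu, alpha = len(losses), len(exceed), 0.99
var_evt = u + (scale / xi) * ((n / Nu * (1 - alpha)) ** (-xi) - 1)
var_emp = np.quantile(losses, alpha)

print(f"99% VaR, empirical quantile: {var_emp:.4f}")
print(f"99% VaR, GPD tail estimate : {var_evt:.4f}")
```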