
on Econometrics 
By:  Joshua B. Gilbert; Zachary Himmelsbach; James Soland; Mridul Joshi; Benjamin W. Domingue 
Abstract:  Analyses of heterogeneous treatment effects (HTE) are common in applied causal inference research. However, when outcomes are latent variables assessed via psychometric instruments such as educational tests, standard methods ignore the potential HTE that may exist among the individual items of the outcome measure. Failing to account for "item-level" HTE (IL-HTE) can lead to both estimated standard errors that are too small and identification challenges in the estimation of treatment-by-covariate interaction effects. We demonstrate how Item Response Theory (IRT) models that estimate a treatment effect for each assessment item can both address these challenges and provide new insights into HTE generally. This study articulates the theoretical rationale for the IL-HTE model and demonstrates its practical value using data from 20 randomized controlled trials in economics, education, and health. Our results show that the IL-HTE model reveals item-level variation masked by average treatment effects, provides more accurate statistical inference, allows for estimates of the generalizability of causal effects, resolves identification problems in the estimation of interaction effects, and provides estimates of standardized treatment effect sizes corrected for attenuation due to measurement error.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.00161&r= 
By:  Emanuele Bacchiocchi; Toru Kitagawa 
Abstract:  In this paper we propose a class of structural vector autoregressions (SVARs) characterized by structural breaks (SVAR-WB). Together with standard restrictions on the parameters and on functions of them, we also consider constraints across the different regimes. Such constraints can be either (a) in the form of stability restrictions, indicating that not all the parameters or impulse responses are subject to structural changes, or (b) in terms of inequalities regarding particular characteristics of the SVAR-WB across the regimes. We show that all these kinds of restrictions provide benefits in terms of identification. We derive conditions for point and set identification of the structural parameters of the SVAR-WB, mixing equality, sign, rank and stability restrictions, as well as constraints on forecast error variances (FEVs). As point identification, when achieved, holds locally but not globally, there will be a set of isolated structural parameters that are observationally equivalent in the parametric space. In this respect, both common frequentist and Bayesian approaches produce unreliable inference: the former focuses on just one of these observationally equivalent points, while the latter exhibits a non-vanishing sensitivity to the prior. To overcome these issues, we propose alternative approaches for estimation and inference that account for all admissible observationally equivalent structural parameters. Moreover, we develop a pure Bayesian and a robust Bayesian approach for doing inference in set-identified SVAR-WBs. Both the theory of identification and inference are illustrated through a set of examples and an empirical application on the transmission of US monetary policy over the great inflation and great moderation regimes.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.04973&r= 
By:  Jiti Gao; Bin Peng; Yayi Yan 
Abstract:  In this paper, we propose a robust estimation and inferential method for high-dimensional panel data models. Specifically, (1) we investigate the case where the number of regressors can grow faster than the sample size, (2) we pay particular attention to non-Gaussian, serially and cross-sectionally correlated and heteroskedastic error processes, and (3) we develop an estimation method for the high-dimensional long-run covariance matrix using a thresholded estimator. Methodologically and technically, we develop two Nagaev-type concentration inequalities: one for a partial sum and the other for a quadratic form, subject to a set of easily verifiable conditions. Leveraging these two inequalities, we also derive a non-asymptotic bound for the LASSO estimator, achieve asymptotic normality via the nodewise LASSO regression, and establish a sharp convergence rate for the thresholded heteroskedasticity and autocorrelation consistent (HAC) estimator. Our study thus provides the relevant literature with a complete toolkit for conducting inference about the parameters of interest involved in a high-dimensional panel data framework. We also demonstrate the practical relevance of these theoretical results by investigating a high-dimensional panel data model with interactive fixed effects. Moreover, we conduct extensive numerical studies using simulated and real data examples.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.07420&r= 
By:  Junho Choi; Ryo Okui 
Abstract:  This paper concerns the estimation of linear panel data models with endogenous regressors and a latent group structure in the coefficients. We consider instrumental variables estimation of the group-specific coefficient vector. We show that direct application of the K-means algorithm to the generalized method of moments objective function does not yield unique estimates. We newly develop and theoretically justify two-stage estimation methods that apply the K-means algorithm to a regression of the dependent variable on predicted values of the endogenous regressors. The results of Monte Carlo simulations demonstrate that two-stage estimation with the first stage modeled using a latent group structure achieves good classification accuracy, even if the true first-stage regression is fully heterogeneous. We apply our estimation methods to revisit the relationship between income and democracy.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.08687&r= 
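To make the two-stage idea concrete, here is a minimal, hypothetical numerical sketch in Python (NumPy only). It simplifies the paper's joint estimator down to unit-by-unit 2SLS slope estimates followed by one-dimensional K-means on those slopes; the data-generating parameters, group sizes, and instrument design are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 60, 50
groups = np.array([0] * 30 + [1] * 30)      # latent group labels (hypothetical)
beta = np.where(groups == 0, 1.0, -1.0)     # group-specific slopes

slopes = []
for i in range(N):
    z = rng.normal(size=T)                  # instrument
    u = rng.normal(size=T)                  # endogeneity-inducing error
    x = z + 0.5 * u + rng.normal(size=T)    # endogenous regressor
    y = beta[i] * x + u + rng.normal(size=T)
    # first stage: project x on z; second stage: OLS of y on the fitted values
    xhat = z * (z @ x) / (z @ z)
    slopes.append((xhat @ y) / (xhat @ xhat))
slopes = np.array(slopes)

# Lloyd's K-means (K = 2) applied to the unit-level 2SLS slopes
c = np.array([slopes.min(), slopes.max()])
for _ in range(50):
    lab = (np.abs(slopes - c[0]) > np.abs(slopes - c[1])).astype(int)
    c = np.array([slopes[lab == k].mean() for k in (0, 1)])
```

With well-separated group slopes, the recovered labels should match the latent groups up to label switching.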
By:  Luis A. F. Alvarez; Bruno Ferman 
Abstract:  Gonçalves and Ng (2024) propose an interesting and simple way to improve counterfactual imputation methods when errors are predictable. For unconditional analyses, this approach yields smaller mean-squared error and tighter prediction intervals in large samples, even if the dependence of the errors is misspecified. For conditional analyses, this approach corrects the bias of standard methods, and provides valid asymptotic inference, if the dependence of the errors is correctly specified. In this comment, we first discuss how the assumptions imposed on the errors depend on the model and estimator adopted. This enables researchers to assess the validity of the assumptions imposed on the structure of the errors, and the relevant information set for conditional analyses. We then propose a simple sensitivity analysis in order to quantify the amount of misspecification of the dependence structure of the errors required for the conclusions of conditional analyses to be changed.
Keywords:  treatment effect; synthetic control; sensitivity analysis 
JEL:  C22 C23 C52 
Date:  2024–05–22 
URL:  http://d.repec.org/n?u=RePEc:spa:wpaper:2024wpecon16&r= 
By:  Ron P. Smith; M. Hashem Pesaran 
Abstract:  The risk premia of traded factors are the sum of factor means and a parameter vector we denote by phi, which is identified from the cross-section regression of the alphas of individual securities on the vector of factor loadings. If phi is non-zero, one can construct "phi-portfolios" which exploit the systematic components of non-zero alpha. We show that for known values of betas and when phi is non-zero there exist phi-portfolios that dominate mean-variance portfolios. The paper then proposes a two-step bias-corrected estimator of phi and derives its asymptotic distribution, allowing for idiosyncratic pricing errors, weak missing factors, and weak error cross-sectional dependence. Small sample results from extensive Monte Carlo experiments show that the proposed estimator has the correct size with good power properties. The paper also provides an empirical application to a large number of U.S. securities, with risk factors selected from a large number of potential risk factors according to their strength, and constructs phi-portfolios and compares their Sharpe ratios to those of mean-variance and S&P 500 portfolios.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.02217&r= 
By:  Veldhuis, Sebastian (Department of Economics, University of Klagenfurt); Wagner, Martin (Bank of Slovenia, Ljubljana and Institute for Advanced Studies, Vienna) 
Abstract:  We consider integrated modified least squares estimation for systems of cointegrating multivariate polynomial regressions, i.e., systems of regressions that include deterministic variables, integrated processes and products of these variables as regressors. The errors are allowed to be correlated across equations, over time and with the regressors. Since, under restrictions on the parameters or in case of non-identical regressors across equations, integrated modified OLS and GLS estimation do not, in general, coincide, we discuss in detail restricted integrated generalized least squares estimators and inference based upon them. Furthermore, we develop asymptotically pivotal fixed-b inference, available only in case of full design and for specific hypotheses.
Keywords:  Integrated modified estimation, cointegrating multivariate polynomial regression, fixed-b inference, generalized least squares
JEL:  C12 C13 C32 
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihswps:number54&r= 
By:  Zacharias Psaradakis; Martin Sola; Nicola Spagnolo; Patricio Yunis 
Abstract:  We examine the small-sample accuracy of impulse responses obtained using local projections (LP) and vector autoregressive (VAR) models. In view of the fact that impulse responses are differences between multi-step predictors, we propose to assess the relative performance of impulse-response estimators using tests for equal predictive accuracy. In our Monte Carlo experiments, LP-based and VAR-based estimators are found to be equally accurate in large samples under a mean squared error risk function. VAR-based estimators tend to have an advantage over LP-based ones in small and moderately sized samples, particularly at long horizons.
Keywords:  Local projections, Predictive accuracy, VAR models. 
JEL:  C32 C53 
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:udt:wpecon:2024_01&r= 
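As a hedged illustration of the LP side of this comparison (not the paper's Monte Carlo design), the following NumPy sketch estimates impulse responses of a simulated AR(1) by local projections, i.e., OLS regressions of y_{t+h} on y_t. For this process the true impulse response at horizon h is rho^h; the sample size and rho = 0.5 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, T, H = 0.5, 5000, 4
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]        # AR(1) with true IRF rho**h

def lp_irf(y, h):
    # local projection at horizon h: OLS of y_{t+h} on [1, y_t]; the slope
    # is the LP impulse-response estimate
    X = np.column_stack([np.ones(len(y) - h), y[: len(y) - h]])
    b = np.linalg.lstsq(X, y[h:], rcond=None)[0]
    return b[1]

irf = [lp_irf(y, h) for h in range(H + 1)]
```

In a large sample the LP estimates should sit close to 1, 0.5, 0.25, ... across horizons.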
By:  Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W. 
Abstract:  In this paper, we propose two novel frameworks to incorporate auxiliary information about connectivity among entities (i.e., network information) into the estimation of large covariance matrices. The current literature either completely ignores this kind of network information (e.g., thresholding and shrinkage) or utilizes some simple network structure under very restrictive settings (e.g., banding). In the era of big data, we can easily get access to auxiliary information about the complex connectivity structure among entities. Depending on the features of the auxiliary network information at hand and the structure of the covariance matrix, we provide two different frameworks correspondingly: the Network Guided Thresholding and the Network Guided Banding. We show that both Network Guided estimators have optimal convergence rates over a larger class of sparse covariance matrices. Simulation studies demonstrate that they generally outperform other pure statistical methods, especially when the true covariance matrix is sparse and the auxiliary network contains genuine information. Empirically, we apply our method to the estimation of the covariance matrix with the help of many financial linkage data of asset returns to attain the global minimum variance (GMV) portfolio.
Keywords:  Banding, Big Data, Large Covariance Matrix, Network, Thresholding 
JEL:  C13 C58 G11 
Date:  2024–05–20 
URL:  http://d.repec.org/n?u=RePEc:cam:camjip:2416&r= 
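A minimal sketch of what a network-guided thresholding rule could look like — an assumption-laden toy version, not the authors' exact estimator: off-diagonal covariance entries are zeroed unless they exceed a threshold or correspond to a link in the auxiliary network.

```python
import numpy as np

def network_guided_threshold(S, A, tau):
    """Hard-threshold the off-diagonal entries of a sample covariance S at tau,
    but always retain entries where the 0/1 network adjacency A indicates a
    link between entities; diagonal variances are never thresholded.
    (Illustrative sketch only.)"""
    keep = (np.abs(S) >= tau) | A.astype(bool)
    np.fill_diagonal(keep, True)
    return np.where(keep, S, 0.0)

# toy 3-asset example: assets 0 and 1 are linked in the auxiliary network
S = np.array([[1.00, 0.05, 0.30],
              [0.05, 1.00, 0.02],
              [0.30, 0.02, 1.00]])
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
S_hat = network_guided_threshold(S, A, tau=0.1)
```

Here the small (0, 1) entry survives because of the network link, the large (0, 2) entry survives the threshold, and the small unlinked (1, 2) entry is set to zero.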
By:  Wang, Tengyao; Dobriban, Edgar; Gataric, Milana; Samworth, Richard J. 
Abstract:  We propose a new method for high-dimensional semi-supervised learning problems based on the careful aggregation of the results of a low-dimensional procedure applied to many axis-aligned random projections of the data. Our primary goal is to identify important variables for distinguishing between the classes; existing low-dimensional methods can then be applied for final class assignment. To this end, we score projections according to their class-distinguishing ability; for instance, motivated by a generalized Rayleigh quotient, we can compute the traces of estimated whitened between-class covariance matrices on the projected data. This enables us to assign an importance weight to each variable for a given projection, and to select our signal variables by aggregating these weights over high-scoring projections. Our theory shows that the resulting SharpSSL algorithm is able to recover the signal coordinates with high probability when we aggregate over sufficiently many random projections and when the base procedure estimates the diagonal entries of the whitened between-class covariance matrix sufficiently well. For the Gaussian EM base procedure, we provide a new analysis of its performance in semi-supervised settings that controls the parameter estimation error in terms of the proportion of labeled data in the sample. Numerical results on both simulated data and a real colon tumor dataset support the excellent empirical performance of the method.
Keywords:  semi-supervised learning; high-dimensional statistics; sparsity; random projection; ensemble learning
JEL:  C1 
Date:  2024–05–20 
URL:  https://d.repec.org/n?u=RePEc:ehl:lserod:122552&r= 
By:  Lonjezo Sithole 
Abstract:  I propose a locally robust semiparametric framework for estimating causal effects using the popular examiner IV design, in the presence of many examiners and possibly many covariates relative to the sample size. The key ingredient of this approach is an orthogonal moment function that is robust to biases and local misspecification from the first-step estimation of the examiner IV. I derive the orthogonal moment function and show that it delivers multiple robustness, where the outcome model or at least one of the first-step components is misspecified but the estimating equation remains valid. The proposed framework not only allows for estimation of the examiner IV in the presence of many examiners and many covariates relative to sample size, using a wide range of nonparametric and machine learning techniques including LASSO, Dantzig, neural networks and random forests, but also delivers root-n consistent estimation of the parameter of interest under mild assumptions.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2404.19144&r= 
By:  David M. Ritzwoller; Vasilis Syrgkanis 
Abstract:  We propose a method for constructing a confidence region for the solution to a conditional moment equation. The method is built around a class of algorithms for nonparametric regression based on subsampled kernels. This class includes random forest regression. We bound the error in the confidence region's nominal coverage probability, under the restriction that the conditional moment equation of interest satisfies a local orthogonality condition. The method is applicable to the construction of confidence regions for conditional average treatment effects in randomized experiments, among many other similar problems encountered in applied economics and causal inference. As a byproduct, we obtain several new order-explicit results on the concentration and normal approximation of high-dimensional $U$-statistics.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.07860&r= 
By:  Nathaniel T. Wilcox 
Abstract:  Experimental and behavioral economists, as well as psychologists, commonly assume conditional independence of choices when constructing likelihood functions for structural estimation of choice functions. I test this assumption using data from a new experiment designed for this purpose. Within the limits of the experiment's identifying restriction and designed power to detect deviations from conditional independence, conditional independence is not rejected. In naturally occurring data, concerns about violations of conditional independence are certainly proper and well-taken (for well-known reasons). However, when an experimenter employs the particular experimental mechanisms and designs used here, the findings suggest that conditional independence is an acceptable assumption for analyzing data so generated.
Keywords:  Alternation, Conditional Independence, Choice Under Risk, Discrete Choice, Persistence, Random Problem Selection
JEL:  C22 C25 C91 D81 
Date:  2024 
URL:  http://d.repec.org/n?u=RePEc:apl:wpaper:2415&r= 
By:  Rajveer Jat; Daanish Padha 
Abstract:  We forecast a single time series using a high-dimensional set of predictors. When these predictors share common underlying dynamics, an approximate latent factor model provides a powerful characterization of their comovements (Bai, 2003). These latent factors succinctly summarize the data and can also be used for prediction, alleviating the curse of dimensionality in high-dimensional prediction exercises; see Stock & Watson (2002a). However, forecasting using these latent factors suffers from two potential drawbacks. First, not all pervasive factors among the set of predictors may be relevant, and using all of them can lead to inefficient forecasts. The second shortcoming is the assumption of linear dependence of predictors on the underlying factors. The first issue can be addressed by using some form of supervision, which leads to the omission of irrelevant information. One example is the three-pass regression filter proposed by Kelly & Pruitt (2015). We extend their framework to cases where the form of dependence might be nonlinear by developing a new estimator, which we refer to as the Kernel Three-Pass Regression Filter (K3PRF). This alleviates the aforementioned second shortcoming. The estimator is computationally efficient and performs well empirically. The short-term performance matches or exceeds that of established models, while the long-term performance shows significant improvement.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.07292&r= 
By:  Zacharias Psaradakis; Martin Sola; Francisco Rapetti; Patricio Yunis 
Abstract:  We consider the relationship between stock prices, volatility and consumer sentiment. The analysis is based on a new multivariate model defined as a time-varying mixture of dynamic models in which instantaneous relationships among variables are allowed and the mixing weights have a threshold-type structure. We discuss issues related to the stability of the model and estimation of its parameters. Our empirical results show that consumer sentiment significantly affects the S&P 500 price–dividend ratio and market volatility in at least one of the two regimes identified by the model, regimes which are associated with endogenously determined low and high consumer sentiment.
Keywords:  Consumer sentiment, Mixture models, Price–dividend ratio, Threshold, Timevarying weights, Volatility. 
JEL:  C32 G12 
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:udt:wpecon:2024_02&r= 
By:  Joshua Brault 
Abstract:  In this paper, I develop a population-based Markov chain Monte Carlo (MCMC) algorithm known as parallel tempering to estimate dynamic stochastic general equilibrium (DSGE) models. Parallel tempering approximates the posterior distribution of interest using a family of Markov chains with tempered posteriors. At each iteration, two randomly selected chains in the ensemble are proposed to swap parameter vectors, after which each chain mutates via Metropolis-Hastings. The algorithm results in a fast-mixing MCMC, particularly well suited for problems with irregular posterior distributions. Also, due to its global nature, the algorithm can be initialized directly from the prior distributions. I provide two empirical examples with complex posteriors: a New Keynesian model with equilibrium indeterminacy and the Smets-Wouters model with more diffuse prior distributions. In both examples, parallel tempering overcomes the inherent estimation challenge, providing extremely consistent estimates across different runs of the algorithm with large effective sample sizes. I provide code compatible with Dynare mod files, making this routine straightforward for DSGE practitioners to implement.
Keywords:  Econometric and statistical methods, Economic models 
JEL:  C11 C15 E10 
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:2413&r= 
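The swap-then-mutate loop described above can be sketched in a few lines of Python on a toy bimodal target — a two-component Gaussian mixture standing in for an irregular DSGE posterior. The temperature ladder, proposal scale, and restriction of swaps to adjacent chains are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # bimodal toy target: equal mixture of N(-4, 1) and N(4, 1), up to a constant
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])   # temperature ladder (beta=1 is the target)
x = np.zeros(len(betas))
draws = []
for it in range(20000):
    # Metropolis-Hastings mutation of each tempered chain
    for k, b in enumerate(betas):
        prop = x[k] + rng.normal(0.0, 2.0)
        if np.log(rng.uniform()) < b * (log_post(prop) - log_post(x[k])):
            x[k] = prop
    # propose swapping the states of a randomly chosen adjacent pair of chains
    k = rng.integers(len(betas) - 1)
    log_a = (betas[k] - betas[k + 1]) * (log_post(x[k + 1]) - log_post(x[k]))
    if np.log(rng.uniform()) < log_a:
        x[k], x[k + 1] = x[k + 1], x[k]
    draws.append(x[0])                     # record the cold (target) chain
draws = np.array(draws[2000:])             # discard burn-in
```

A plain random-walk sampler with this proposal scale would tend to get stuck in one mode; the hot chains cross the low-density valley easily and swaps propagate that mixing down to the cold chain, so both modes appear in the retained draws.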
By:  Sullivan Hué; Christophe Hurlin; Yang Lu
Abstract:  We propose an original two-part, duration-severity approach for backtesting Expected Shortfall (ES). While Probability Integral Transform (PIT) based ES backtests have gained popularity, they have yet to allow for separate testing of the frequency and severity of Value-at-Risk (VaR) violations. This is a crucial aspect, as ES measures the average loss in the event of such violations. To overcome this limitation, we introduce a backtesting framework that relies on the sequence of inter-violation durations and the sequence of severities in case of violations. By leveraging the theory of (bivariate) orthogonal polynomials, we derive orthogonal moment conditions satisfied by these two sequences. Our approach includes a straightforward, model-free Wald test, which encompasses various unconditional and conditional coverage backtests for both VaR and ES. This test aids in identifying any misspecified components of the internal model used by banks to forecast ES. Moreover, it can be extended to analyze other systemic risk measures such as Marginal Expected Shortfall. Simulation experiments indicate that our test exhibits good finite sample properties for realistic sample sizes. Through application to two stock indices, we demonstrate how our methodology provides insights into the reasons for rejections in testing ES validity.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.02012&r= 
By:  W. Benedikt Schmal 
Abstract:  Natural language processing tools have become frequently used in social sciences such as economics, political science, and sociology. Many publications apply topic modeling to elicit latent topics in text corpora and their development over time. Here, most publications rely on visual inspections and draw inference on changes, structural breaks, and developments over time. We suggest using univariate time series econometrics to introduce more quantitative rigor that can strengthen the analyses. In particular, we discuss the econometric topics of non-stationarity as well as structural breaks. This paper serves as a comprehensive practitioner's guide that provides researchers in the social and life sciences as well as the humanities with concise advice on how to implement econometric time series methods to thoroughly investigate topic prevalences over time. We provide coding advice for the statistical software R throughout the paper. The application of the discussed tools to a sample dataset completes the analysis.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2404.18499&r= 
By:  H. Peter Boswijk; Jun Yu; Yang Zu 
Abstract:  Based on a continuous-time stochastic volatility model with a linear drift, we develop a test for explosive behavior in financial asset prices at a low frequency when prices are sampled at a higher frequency. The test exploits the volatility information in the high-frequency data. The method consists of devolatizing log-asset price increments with realized volatility measures and performing a supremum-type recursive Dickey-Fuller test on the devolatized sample. The proposed test has a nuisance-parameter-free asymptotic distribution and is easy to implement. We study the size and power properties of the test in Monte Carlo simulations. A real-time date-stamping strategy based on the devolatized sample is proposed for the origination and conclusion dates of the explosive regime. Conditions under which the real-time date-stamping strategy is consistent are established. The test and the date-stamping strategy are applied to study explosive behavior in cryptocurrency and stock markets.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.02087&r= 
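Setting the devolatizing step aside, the supremum-type recursive Dickey-Fuller statistic itself can be sketched as follows: a simplified forward-recursive sup-DF computed on raw simulated data, where the window grid, minimum window fraction, and explosive coefficient are all illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def df_stat(y):
    # t-statistic on b from the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ b
    s2 = (e @ e) / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

def sup_df(y, r0=0.3):
    # supremum of DF statistics over expanding windows [0, r*n], r in [r0, 1]
    n = len(y)
    return max(df_stat(y[: int(np.ceil(n * r))]) for r in np.linspace(r0, 1.0, 30))

rng = np.random.default_rng(0)
e = rng.standard_normal(300)
rw = np.cumsum(e)                            # unit-root (no-bubble) benchmark
expl = np.empty(300)
expl[0] = e[0]
for t in range(1, 300):
    expl[t] = 1.03 * expl[t - 1] + e[t]      # mildly explosive regime

s_rw, s_ex = sup_df(rw), sup_df(expl)
```

The explosive series produces a sup-DF statistic far above that of the random walk, which is the signal the recursive test exploits.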
By:  Savi Virolainen 
Abstract:  Linear structural vector autoregressive models can be identified statistically without imposing restrictions on the model if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model consisting of two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty. The introduced methods are implemented in the accompanying R package sstvars.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2404.19707&r= 
By:  Ben Deaner; Hyejin Ku 
Abstract:  In economic program evaluation, it is common to obtain panel data in which outcomes are indicators that an individual has reached an absorbing state. For example, they may indicate whether an individual has exited a period of unemployment, passed an exam, left a marriage, or had their parole revoked. The parallel trends assumption that underpins difference-in-differences generally fails in such settings. We suggest identifying conditions that are analogous to those of difference-in-differences but apply to hazard rates rather than mean outcomes. These alternative assumptions motivate estimators that retain the simplicity and transparency of standard diff-in-diff, and we suggest analogous specification tests. Our approach can be adapted to general linear restrictions between the hazard rates of different groups, motivating duration analogues of the triple differences and synthetic control methods. We apply our procedures to examine the impact of a policy that increased the generosity of unemployment benefits, using a cross-cohort comparison.
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2405.05220&r= 
By:  Zhang, Siliang; Kuha, Jouni; Steele, Fiona 
Abstract:  We define a model for the joint distribution of multiple continuous latent variables which includes a model for how their correlations depend on explanatory variables. This is motivated by and applied to social scientific research questions in the analysis of intergenerational help and support within families, where the correlations describe reciprocity of help between generations and complementarity of different kinds of help. We propose an MCMC procedure for estimating the model which maintains the positive definiteness of the implied correlation matrices, and describe theoretical results which justify this approach and facilitate efficient implementation of it. The model is applied to data from the UK Household Longitudinal Study to analyse exchanges of practical and financial support between adult individuals and their non-coresident parents.
Keywords:  Bayesian estimation; covariance matrix modelling; item response theory models; positive definite matrices; two-step estimation
JEL:  C1 
Date:  2024–05–28 
URL:  https://d.repec.org/n?u=RePEc:ehl:lserod:123698&r= 
By:  Kojevnikov, Denis (Tilburg University, School of Economics and Management); Song, Kyungchul 
Date:  2023 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiutis:aca0631e4f8a45c7af3a4e1942712e47&r= 
By:  Kaizhao Liu; Jose Blanchet; Lexing Ying; Yiping Lu 
Abstract:  Bootstrap is a popular methodology for simulating input uncertainty. However, it can be computationally expensive when the number of samples is large. We propose a new approach called Orthogonal Bootstrap that reduces the number of required Monte Carlo replications. We decompose the target being simulated into two parts: the non-orthogonal part, which has a closed-form result known as the Infinitesimal Jackknife, and the orthogonal part, which is easier to simulate. We theoretically and numerically show that Orthogonal Bootstrap significantly reduces the computational cost of Bootstrap while improving empirical accuracy and maintaining the same width of the constructed interval.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2404.19145&r= 
By:  David M. Kaplan (University of Missouri); Qian Wu (Southwestern University of Finance and Economics) 
Abstract:  The famous Blinder-Oaxaca decomposition estimates the statistically "explained" proportion of a between-group difference in means, but ordinal variables have no mean. A common approach assigns cardinal values 1, 2, 3, ... to the ordinal categories and runs the conventional OLS-based decomposition. Surprisingly, we show such results are numerically identical to a decomposition of the survival function when estimating the counterfactual using OLS-based distribution regression, even if the cardinalization is wrong. Still, reporting the counterfactual helps transparency and wide-sense replication, and to mitigate functional form misspecification, we describe and implement a nonparametric estimator. Empirically, we decompose U.S. rural-urban differences in mental health.
Keywords:  Blinder-Oaxaca decomposition, counterfactual distribution, distribution regression, survival function
JEL:  C25 I14 
Date:  2024–05 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:2404&r= 
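For readers unfamiliar with the mechanics, a minimal simulated example of the standard (cardinal, OLS-based) Blinder-Oaxaca decomposition is sketched below; the data-generating process, group labels, and coefficient values are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
# hypothetical groups A and B: same slope, different intercepts and covariate means
xA = rng.normal(1.0, 1.0, n)
xB = rng.normal(0.0, 1.0, n)
yA = 2.0 + 1.5 * xA + rng.normal(0.0, 1.0, n)
yB = 1.0 + 1.5 * xB + rng.normal(0.0, 1.0, n)

def ols(x, y):
    # OLS of y on [1, x]; returns (intercept, slope)
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

bB = ols(xB, yB)                              # reference-group coefficients
gap = yA.mean() - yB.mean()                   # total mean difference
explained = (xA.mean() - xB.mean()) * bB[1]   # part "explained" by covariates
unexplained = gap - explained                 # coefficient (intercept) part
```

By construction the true gap is 2.5, of which 1.5 is due to the covariate difference and 1.0 to the intercept difference, and the decomposition recovers this split.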
By:  Battulga Gankhuu 
Abstract:  This study introduces marginal density functions of the general Bayesian Markov-Switching Vector Autoregressive (MS-VAR) process. In the case of the Bayesian MS-VAR process, we provide closed-form density functions and Monte Carlo simulation algorithms, including the importance sampling method. The Monte Carlo simulation method departs from the previous simulation methods because it removes the duplication in a regime vector.
Date:  2024–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2404.11235&r= 