nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒06‒17
twenty-six papers chosen by
Sune Karlsson, Örebro universitet


  1. Estimating Heterogeneous Treatment Effects with Item-Level Outcome Data: Insights from Item Response Theory By Joshua B. Gilbert; Zachary Himmelsbach; James Soland; Mridul Joshi; Benjamin W. Domingue
  2. SVARs with breaks: Identification and inference By Emanuele Bacchiocchi; Toru Kitagawa
  3. Robust Inference for High-Dimensional Panel Data Models By Jiti Gao; Bin Peng; Yayi Yan
  4. Latent group structure in linear panel data models with endogenous regressors By Junho Choi; Ryo Okui
  5. On “Imputation of Counterfactual Outcomes when the Errors are Predictable”: Discussions on Misspecification and Suggestions of Sensitivity Analyses By Luis A. F. Alvarez; Bruno Ferman
  6. Identifying and exploiting alpha in linear asset pricing models with strong, semi-strong, and latent factors By Ron P. Smith; M. Hashem Pesaran
  7. Integrated Modified Least Squares Estimation and (Fixed-b) Inference for Systems of Cointegrating Multivariate Polynomial Regressions By Veldhuis, Sebastian; Wagner, Martin
  8. Predictive Accuracy of Impulse Responses Estimated Using Local Projections and Vector Autoregressions By Zacharias Psaradakis; Martin Sola; Nicola Spagnolo; Patricio Yunis
  9. Should We Augment Large Covariance Matrix Estimation with Auxiliary Network Information? By Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W.
  10. Sharp-SSL: Selective high-dimensional axis-aligned random projections for semi-supervised learning By Wang, Tengyao; Dobriban, Edgar; Gataric, Milana; Samworth, Richard J.
  11. A Locally Robust Semiparametric Approach to Examiner IV Designs By Lonjezo Sithole
  12. Uniform Inference for Subsampled Moment Regression By David M. Ritzwoller; Vasilis Syrgkanis
  13. Conditional Independence in a Binary Choice Experiment By Nathaniel T. Wilcox
  14. Kernel Three Pass Regression Filter By Rajveer Jat; Daanish Padha
  15. The Role of Consumer Sentiment in the Stock Market: A Multivariate Dynamic Mixture Model with Threshold Effects By Zacharias Psaradakis; Martin Sola; Francisco Rapetti; Patricio Yunis
  16. Parallel Tempering for DSGE Estimation By Joshua Brault
  17. Backtesting Expected Shortfall: Accounting for both duration and severity with bivariate orthogonal polynomials By Sullivan Hué; Christophe Hurlin; Yang Lu
  18. Quantitative Tools for Time Series Analysis in Natural Language Processing: A Practitioners Guide By W. Benedikt Schmal
  19. Testing for an Explosive Bubble using High-Frequency Volatility By H. Peter Boswijk; Jun Yu; Yang Zu
  20. Identification by non-Gaussianity in structural threshold and smooth transition vector autoregressive models By Savi Virolainen
  21. Causal Duration Analysis with Diff-in-Diff By Ben Deaner; Hyejin Ku
  22. Modelling correlation matrices in multivariate data, with application to reciprocity and complementarity of child-parent exchanges of support By Zhang, Siliang; Kuha, Jouni; Steele, Fiona
  23. Econometric inference on a large Bayesian game with heterogeneous beliefs By Kojevnikov, Denis; Song, Kyungchul
  24. Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty By Kaizhao Liu; Jose Blanchet; Lexing Ying; Yiping Lu
  25. Ordinal Decomposition By David M. Kaplan; Qian Wu
  26. Bayesian Markov-Switching Vector Autoregressive Process By Battulga Gankhuu

  1. By: Joshua B. Gilbert; Zachary Himmelsbach; James Soland; Mridul Joshi; Benjamin W. Domingue
    Abstract: Analyses of heterogeneous treatment effects (HTE) are common in applied causal inference research. However, when outcomes are latent variables assessed via psychometric instruments such as educational tests, standard methods ignore the potential HTE that may exist among the individual items of the outcome measure. Failing to account for "item-level" HTE (IL-HTE) can lead to both estimated standard errors that are too small and identification challenges in the estimation of treatment-by-covariate interaction effects. We demonstrate how Item Response Theory (IRT) models that estimate a treatment effect for each assessment item can both address these challenges and provide new insights into HTE generally. This study articulates the theoretical rationale for the IL-HTE model and demonstrates its practical value using data from 20 randomized controlled trials in economics, education, and health. Our results show that the IL-HTE model reveals item-level variation masked by average treatment effects, provides more accurate statistical inference, allows for estimates of the generalizability of causal effects, resolves identification problems in the estimation of interaction effects, and provides estimates of standardized treatment effect sizes corrected for attenuation due to measurement error.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00161&r=
  2. By: Emanuele Bacchiocchi; Toru Kitagawa
    Abstract: In this paper we propose a class of structural vector autoregressions (SVARs) characterized by structural breaks (SVAR-WB). Together with standard restrictions on the parameters and on functions of them, we also consider constraints across the different regimes. Such constraints can be either (a) in the form of stability restrictions, indicating that not all the parameters or impulse responses are subject to structural changes, or (b) in terms of inequalities regarding particular characteristics of the SVAR-WB across the regimes. We show that all these kinds of restrictions provide benefits in terms of identification. We derive conditions for point and set identification of the structural parameters of the SVAR-WB, mixing equality, sign, rank and stability restrictions, as well as constraints on forecast error variances (FEVs). Since point identification, when achieved, holds locally but not globally, there will be a set of isolated structural parameters that are observationally equivalent in the parametric space. In this respect, common frequentist and Bayesian approaches both produce unreliable inference: the former focuses on just one of these observationally equivalent points, while the latter suffers from a non-vanishing sensitivity to the prior. To overcome these issues, we propose alternative approaches for estimation and inference that account for all admissible observationally equivalent structural parameters. Moreover, we develop a pure Bayesian and a robust Bayesian approach for doing inference in set-identified SVAR-WBs. Both the theory of identification and the inference procedures are illustrated through a set of examples and an empirical application on the transmission of US monetary policy over the great inflation and great moderation regimes.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.04973&r=
  3. By: Jiti Gao; Bin Peng; Yayi Yan
    Abstract: In this paper, we propose a robust estimation and inferential method for high-dimensional panel data models. Specifically, (1) we investigate the case where the number of regressors can grow faster than the sample size, (2) we pay particular attention to non-Gaussian, serially and cross-sectionally correlated and heteroskedastic error processes, and (3) we develop an estimation method for the high-dimensional long-run covariance matrix using a thresholded estimator. Methodologically and technically, we develop two Nagaev-type concentration inequalities: one for a partial sum and the other for a quadratic form, subject to a set of easily verifiable conditions. Leveraging these two inequalities, we also derive a non-asymptotic bound for the LASSO estimator, achieve asymptotic normality via the node-wise LASSO regression, and establish a sharp convergence rate for the thresholded heteroskedasticity and autocorrelation consistent (HAC) estimator. Our study thus provides the relevant literature with a complete toolkit for conducting inference about the parameters of interest involved in a high-dimensional panel data framework. We also demonstrate the practical relevance of these theoretical results by investigating a high-dimensional panel data model with interactive fixed effects. Moreover, we conduct extensive numerical studies using simulated and real data examples.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.07420&r=
  4. By: Junho Choi; Ryo Okui
    Abstract: This paper concerns the estimation of linear panel data models with endogenous regressors and a latent group structure in the coefficients. We consider instrumental variables estimation of the group-specific coefficient vector. We show that direct application of the Kmeans algorithm to the generalized method of moments objective function does not yield unique estimates. We develop and theoretically justify new two-stage estimation methods that apply the Kmeans algorithm to a regression of the dependent variable on predicted values of the endogenous regressors. The results of Monte Carlo simulations demonstrate that two-stage estimation with the first stage modeled using a latent group structure achieves good classification accuracy, even if the true first-stage regression is fully heterogeneous. We apply our estimation methods to revisit the relationship between income and democracy.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.08687&r=
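As a rough illustration of the two-stage idea described in this abstract, the toy simulation below estimates a pooled first stage, computes unit-specific slopes of the outcome on the predicted endogenous regressor, and clusters those slopes with a one-dimensional k-means. This is a deliberately simplified sketch (no intercepts, two groups, pooled first stage), not the authors' estimator; all parameter values are made up.

```python
# Sketch: two-stage grouping for a panel IV model y_it = theta_g * x_it + u_it
# with endogenous x_it and instrument z_it.
import random

random.seed(0)
N, T = 40, 50
true_group = [0 if i < N // 2 else 1 for i in range(N)]
theta = [1.0, 3.0]  # group-specific coefficients

data = []
for i in range(N):
    for _ in range(T):
        z = random.gauss(0, 1)                # instrument
        v = random.gauss(0, 1)                # first-stage error
        x = 0.8 * z + v                       # endogenous regressor
        u = 0.5 * v + random.gauss(0, 1)      # error correlated with x through v
        y = theta[true_group[i]] * x + u
        data.append((i, y, x, z))

# Stage 1: pooled first-stage slope of x on z (no intercept, for simplicity)
pi_hat = (sum(x * z for _, _, x, z in data)
          / sum(z * z for _, _, _, z in data))

# Stage 2: unit-specific slope of y on the predicted regressor xhat = pi_hat * z
slopes = []
for i in range(N):
    rows = [(yy, pi_hat * z) for j, yy, _, z in data if j == i]
    slopes.append(sum(yy * xh for yy, xh in rows)
                  / sum(xh * xh for _, xh in rows))

# One-dimensional k-means with two clusters on the unit-specific slopes
centers = [min(slopes), max(slopes)]
for _ in range(20):
    assign = [0 if abs(s - centers[0]) <= abs(s - centers[1]) else 1 for s in slopes]
    for g in (0, 1):
        members = [s for s, a in zip(slopes, assign) if a == g]
        if members:
            centers[g] = sum(members) / len(members)

accuracy = sum(a == g for a, g in zip(assign, true_group)) / N
print(round(max(accuracy, 1 - accuracy), 2))  # classification accuracy, up to label switching
```

Clustering the second-stage slopes, rather than raw GMM objective values, is what makes group membership recoverable here.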
  5. By: Luis A. F. Alvarez; Bruno Ferman
    Abstract: Gonçalves and Ng (2024) propose an interesting and simple way to improve counterfactual imputation methods when errors are predictable. For unconditional analyses, this approach yields smaller mean-squared error and tighter prediction intervals in large samples, even if the dependence of the errors is misspecified. For conditional analyses, this approach corrects the bias of standard methods, and provides valid asymptotic inference, if the dependence of the errors is correctly specified. In this comment, we first discuss how the assumptions imposed on the errors depend on the model and estimator adopted. This enables researchers to assess the validity of the assumptions imposed on the structure of the errors, and the relevant information set for conditional analyses. We then propose a simple sensitivity analysis in order to quantify the amount of misspecification on the dependence structure of the errors required for the conclusions of conditional analyses to be changed.
    Keywords: treatment effect; synthetic control; sensitivity analysis
    JEL: C22 C23 C52
    Date: 2024–05–22
    URL: http://d.repec.org/n?u=RePEc:spa:wpaper:2024wpecon16&r=
  6. By: Ron P. Smith; M. Hashem Pesaran
    Abstract: The risk premia of traded factors are the sum of factor means and a parameter vector, which we denote by phi, identified from the cross-section regression of the alphas of individual securities on the vector of factor loadings. If phi is non-zero, one can construct "phi-portfolios" which exploit the systematic components of non-zero alpha. We show that for known values of betas and when phi is non-zero there exist phi-portfolios that dominate mean-variance portfolios. The paper then proposes a two-step bias-corrected estimator of phi and derives its asymptotic distribution allowing for idiosyncratic pricing errors, weak missing factors, and weak error cross-sectional dependence. Small-sample results from extensive Monte Carlo experiments show that the proposed estimator has the correct size with good power properties. The paper also provides an empirical application to a large number of U.S. securities, with risk factors selected from a large number of potential risk factors according to their strength, constructs phi-portfolios, and compares their Sharpe ratios to those of mean-variance and S&P 500 portfolios.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.02217&r=
  7. By: Veldhuis, Sebastian (Department of Economics, University of Klagenfurt); Wagner, Martin (Bank of Slovenia, Ljubljana and Institute for Advanced Studies, Vienna)
    Abstract: We consider integrated modified least squares estimation for systems of cointegrating multivariate polynomial regressions, i.e., systems of regressions that include deterministic variables, integrated processes and products of these variables as regressors. The errors are allowed to be correlated across equations, over time and with the regressors. Since, under restrictions on the parameters or in case of non-identical regressors across equations, integrated modified OLS and GLS estimation do not, in general, coincide, we discuss in detail restricted integrated generalized least squares estimators and inference based upon them. Furthermore, we develop asymptotically pivotal fixed-b inference, available only in case of full design and for specific hypotheses.
    Keywords: Integrated modified estimation, cointegrating multivariate polynomial regression, fixed-b inference, generalized least squares
    JEL: C12 C13 C32
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:ihs:ihswps:number54&r=
  8. By: Zacharias Psaradakis; Martin Sola; Nicola Spagnolo; Patricio Yunis
    Abstract: We examine the small-sample accuracy of impulse responses obtained using local projections (LP) and vector autoregressive (VAR) models. In view of the fact that impulse responses are differences between multistep predictors, we propose to assess the relative performance of impulse-response estimators using tests for equal predictive accuracy. In our Monte Carlo experiments, LP-based and VAR-based estimators are found to be equally accurate in large samples under a mean squared error risk function. VAR-based estimators tend to have an advantage over LP-based ones in small and moderately sized samples, particularly at long horizons.
    Keywords: Local projections, Predictive accuracy, VAR models.
    JEL: C32 C53
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:udt:wpecon:2024_01&r=
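The comparison device used in this paper, a test for equal predictive accuracy, can be sketched with a minimal Diebold-Mariano-style statistic under squared-error loss. This is an illustration of the testing idea only (it uses the plain sample variance rather than a HAC estimator, and the two "methods" are just simulated error sequences), not the authors' experimental design.

```python
# Sketch: t-type test of equal predictive accuracy under squared-error loss.
import math
import random

def dm_statistic(errors_a, errors_b):
    """Statistic on the loss differential d_t = e_{a,t}^2 - e_{b,t}^2.

    Simplified: uses the plain sample variance, ignoring the serial
    correlation that arises with multi-step forecast errors."""
    d = [a * a - b * b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

random.seed(1)
# pretend method A (say, one impulse-response estimator) has noisier errors than method B
e_a = [random.gauss(0, 1.5) for _ in range(200)]
e_b = [random.gauss(0, 1.0) for _ in range(200)]
stat = dm_statistic(e_a, e_b)
print(round(stat, 2))  # a large positive value favours method B
```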
  9. By: Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W.
    Abstract: In this paper, we propose two novel frameworks to incorporate auxiliary information about connectivity among entities (i.e., network information) into the estimation of large covariance matrices. The current literature either completely ignores this kind of network information (e.g., thresholding and shrinkage) or utilizes some simple network structure under very restrictive settings (e.g., banding). In the era of big data, we can easily get access to auxiliary information about the complex connectivity structure among entities. Depending on the features of the auxiliary network information at hand and the structure of the covariance matrix, we provide two corresponding frameworks: the Network Guided Thresholding and the Network Guided Banding. We show that both Network Guided estimators have optimal convergence rates over a larger class of sparse covariance matrices. Simulation studies demonstrate that they generally outperform other purely statistical methods, especially when the true covariance matrix is sparse and the auxiliary network contains genuine information. Empirically, we apply our method to the estimation of the covariance matrix using a large set of financial linkage data on asset returns to attain the global minimum variance (GMV) portfolio.
    Keywords: Banding, Big Data, Large Covariance Matrix, Network, Thresholding
    JEL: C13 C58 G11
    Date: 2024–05–20
    URL: http://d.repec.org/n?u=RePEc:cam:camjip:2416&r=
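A toy version of the Network Guided Thresholding idea can be written in a few lines: threshold small off-diagonal covariance entries as usual, but exempt pairs that are linked in the auxiliary network, so genuine connections survive. This is an illustration of the mechanism only, not the authors' estimator or their rate-optimal tuning.

```python
# Sketch: thresholding a covariance estimate, guided by an auxiliary network.
def network_guided_threshold(cov, adj, tau):
    """Zero out small off-diagonal covariances, except for network-linked pairs."""
    p = len(cov)
    out = [row[:] for row in cov]
    for i in range(p):
        for j in range(p):
            if i != j and not adj[i][j] and abs(cov[i][j]) < tau:
                out[i][j] = 0.0
    return out

# a 3x3 sample covariance and an auxiliary network linking entities 0 and 1
cov = [[1.00, 0.05, 0.30],
       [0.05, 1.00, 0.02],
       [0.30, 0.02, 1.00]]
adj = [[0, 1, 0],
       [1, 0, 0],
       [0, 0, 0]]
est = network_guided_threshold(cov, adj, tau=0.10)
# linked small entry kept; unlinked small entry zeroed; large entry kept
print(est[0][1], est[1][2], est[0][2])
```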
  10. By: Wang, Tengyao; Dobriban, Edgar; Gataric, Milana; Samworth, Richard J.
    Abstract: We propose a new method for high-dimensional semi-supervised learning problems based on the careful aggregation of the results of a low-dimensional procedure applied to many axis-aligned random projections of the data. Our primary goal is to identify important variables for distinguishing between the classes; existing low-dimensional methods can then be applied for final class assignment. To this end, we score projections according to their class-distinguishing ability; for instance, motivated by a generalized Rayleigh quotient, we can compute the traces of estimated whitened between-class covariance matrices on the projected data. This enables us to assign an importance weight to each variable for a given projection, and to select our signal variables by aggregating these weights over high-scoring projections. Our theory shows that the resulting Sharp-SSL algorithm is able to recover the signal coordinates with high probability when we aggregate over sufficiently many random projections and when the base procedure estimates the diagonal entries of the whitened between-class covariance matrix sufficiently well. For the Gaussian EM base procedure, we provide a new analysis of its performance in semi-supervised settings that controls the parameter estimation error in terms of the proportion of labeled data in the sample. Numerical results on both simulated data and a real colon tumor dataset support the excellent empirical performance of the method.
    Keywords: semi-supervised learning; high-dimensional statistics; sparsity; random projection; ensemble learning
    JEL: C1
    Date: 2024–05–20
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:122552&r=
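The projection-scoring-and-aggregation loop described in this abstract can be sketched in a supervised toy setting: score each axis-aligned random projection by the diagonally whitened between-class mean differences of its coordinates, then aggregate coordinate weights over the high-scoring projections. This simplified sketch (fully labeled data, diagonal whitening, made-up dimensions) illustrates the mechanism, not the Sharp-SSL algorithm itself.

```python
# Sketch: variable selection via scored axis-aligned random projections.
import random

random.seed(4)
n, p, s = 100, 40, 3      # per-class sample size, dimension, number of signal coordinates
X0 = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
X1 = [[random.gauss(1.5 if j < s else 0.0, 1) for j in range(p)] for _ in range(n)]

def coord_score(j):
    """Squared standardized mean difference for coordinate j (a diagonal
    entry of a whitened between-class covariance estimate)."""
    m0 = sum(row[j] for row in X0) / n
    m1 = sum(row[j] for row in X1) / n
    pooled = (sum((row[j] - m0) ** 2 for row in X0)
              + sum((row[j] - m1) ** 2 for row in X1)) / (2 * n - 2)
    return (m1 - m0) ** 2 / pooled

scores = [coord_score(j) for j in range(p)]    # cached once in this sketch

d, B = 5, 200                                  # projection size, number of projections
projections = []
for _ in range(B):
    idx = random.sample(range(p), d)           # an axis-aligned random projection
    projections.append((sum(scores[j] for j in idx), idx))
projections.sort(reverse=True)

weights = [0.0] * p
for _, idx in projections[:B // 10]:           # aggregate over high-scoring projections
    for j in idx:
        weights[j] += scores[j]

selected = sorted(range(p), key=lambda j: weights[j], reverse=True)[:s]
print(sorted(selected))
```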
  11. By: Lonjezo Sithole
    Abstract: I propose a locally robust semiparametric framework for estimating causal effects using the popular examiner IV design, in the presence of many examiners and possibly many covariates relative to the sample size. The key ingredient of this approach is an orthogonal moment function that is robust to biases and local misspecification from the first-step estimation of the examiner IV. I derive the orthogonal moment function and show that it delivers multiple robustness: the estimating equation remains valid even when the outcome model or at least one of the first-step components is misspecified. The proposed framework not only allows for estimation of the examiner IV in the presence of many examiners and many covariates relative to the sample size, using a wide range of nonparametric and machine learning techniques including LASSO, Dantzig, neural networks and random forests, but also delivers root-n consistent estimation of the parameter of interest under mild assumptions.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.19144&r=
  12. By: David M. Ritzwoller; Vasilis Syrgkanis
    Abstract: We propose a method for constructing a confidence region for the solution to a conditional moment equation. The method is built around a class of algorithms for nonparametric regression based on subsampled kernels. This class includes random forest regression. We bound the error in the confidence region's nominal coverage probability, under the restriction that the conditional moment equation of interest satisfies a local orthogonality condition. The method is applicable to the construction of confidence regions for conditional average treatment effects in randomized experiments, among many other similar problems encountered in applied economics and causal inference. As a by-product, we obtain several new order-explicit results on the concentration and normal approximation of high-dimensional $U$-statistics.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.07860&r=
  13. By: Nathaniel T. Wilcox
    Abstract: Experimental and behavioral economists, as well as psychologists, commonly assume conditional independence of choices when constructing likelihood functions for structural estimation of choice functions. I test this assumption using data from a new experiment designed for this purpose. Within the limits of the experiment’s identifying restriction and designed power to detect deviations from conditional independence, conditional independence is not rejected. In naturally occurring data, concerns about violations of conditional independence are certainly proper and well-taken (for well-known reasons). However, when an experimenter employs the particular experimental mechanisms and designs used here, the findings suggest that conditional independence is an acceptable assumption for analyzing data so generated.
    Keywords: Alternation, Conditional Independence, Choice Under Risk, Discrete Choice, Persistence, Random Problem Selection
    JEL: C22 C25 C91 D81
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:apl:wpaper:24-15&r=
  14. By: Rajveer Jat; Daanish Padha
    Abstract: We forecast a single time series using a high-dimensional set of predictors. When these predictors share common underlying dynamics, an approximate latent factor model provides a powerful characterization of their co-movements (Bai, 2003). These latent factors succinctly summarize the data and can also be used for prediction, alleviating the curse of dimensionality in high-dimensional prediction exercises, see Stock & Watson (2002a). However, forecasting using these latent factors suffers from two potential drawbacks. First, not all pervasive factors among the set of predictors may be relevant, and using all of them can lead to inefficient forecasts. The second shortcoming is the assumption of linear dependence of predictors on the underlying factors. The first issue can be addressed by using some form of supervision, which leads to the omission of irrelevant information. One example is the three-pass regression filter proposed by Kelly & Pruitt (2015). We extend their framework to cases where the form of dependence might be nonlinear by developing a new estimator, which we refer to as the Kernel Three-Pass Regression Filter (K3PRF). This alleviates the aforementioned second shortcoming. The estimator is computationally efficient and performs well empirically. The short-term performance matches or exceeds that of established models, while the long-term performance shows significant improvement.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.07292&r=
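The linear three-pass regression filter of Kelly & Pruitt (2015), which the paper extends with kernels, can be sketched in a small simulation: pass 1 regresses each predictor on a proxy (here, the target itself), pass 2 runs period-by-period cross-section regressions to extract the factor, and pass 3 regresses the target on that factor. This is a bare-bones linear sketch with made-up data, not the K3PRF.

```python
# Sketch: linear three-pass regression filter with an auto-proxy.
import math
import random

random.seed(0)
T, N = 200, 50
f = [random.gauss(0, 1) for _ in range(T)]            # relevant latent factor
g = [random.gauss(0, 1) for _ in range(T)]            # irrelevant pervasive factor
lf = [random.gauss(0, 1) for _ in range(N)]
lg = [random.gauss(0, 1) for _ in range(N)]
X = [[f[t] * lf[j] + g[t] * lg[j] + 0.5 * random.gauss(0, 1) for j in range(N)]
     for t in range(T)]
y = [f[t] + 0.2 * random.gauss(0, 1) for t in range(T)]   # target loads only on f

ybar = sum(y) / T
yc = [v - ybar for v in y]
colmean = [sum(X[t][j] for t in range(T)) / T for j in range(N)]
Xc = [[X[t][j] - colmean[j] for j in range(N)] for t in range(T)]

# Pass 1: time-series slope of each predictor on the proxy (the target itself)
syy = sum(v * v for v in yc)
phi = [sum(Xc[t][j] * yc[t] for t in range(T)) / syy for j in range(N)]
# Pass 2: period-by-period cross-section regression of X_t on the loadings
spp = sum(v * v for v in phi)
F = [sum(Xc[t][j] * phi[j] for j in range(N)) / spp for t in range(T)]
# Pass 3: predictive regression of the target on the estimated factor
beta = sum(F[t] * yc[t] for t in range(T)) / sum(v * v for v in F)
fit = [ybar + beta * F[t] for t in range(T)]

mfit = sum(fit) / T
corr = (sum((fit[t] - mfit) * yc[t] for t in range(T))
        / math.sqrt(sum((fit[t] - mfit) ** 2 for t in range(T)) * syy))
print(round(corr, 2))  # the filtered fit tracks the relevant factor, not g
```

The supervision in pass 1 is what screens out the irrelevant pervasive factor g; a plain principal-components forecast would load on both factors.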
  15. By: Zacharias Psaradakis; Martin Sola; Francisco Rapetti; Patricio Yunis
    Abstract: We consider the relationship between stock prices, volatility and consumer sentiment. The analysis is based on a new multivariate model defined as a time-varying mixture of dynamic models in which instantaneous relationships among variables are allowed and the mixing weights have a threshold-type structure. We discuss issues related to the stability of the model and estimation of its parameters. Our empirical results show that consumer sentiment significantly affects the S&P 500 price–dividend ratio and market volatility in at least one of the two regimes identified by the model, regimes that are associated with endogenously determined low and high consumer sentiment.
    Keywords: Consumer sentiment, Mixture models, Price–dividend ratio, Threshold, Time-varying weights, Volatility.
    JEL: C32 G12
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:udt:wpecon:2024_02&r=
  16. By: Joshua Brault
    Abstract: In this paper, I develop a population-based Markov chain Monte Carlo (MCMC) algorithm known as parallel tempering to estimate dynamic stochastic general equilibrium (DSGE) models. Parallel tempering approximates the posterior distribution of interest using a family of Markov chains with tempered posteriors. At each iteration, two randomly selected chains in the ensemble are proposed to swap parameter vectors, after which each chain mutates via Metropolis-Hastings. The algorithm results in a fast-mixing MCMC, particularly well suited for problems with irregular posterior distributions. Also, due to its global nature, the algorithm can be initialized directly from the prior distributions. I provide two empirical examples with complex posteriors: a New Keynesian model with equilibrium indeterminacy and the Smets-Wouters model with more diffuse prior distributions. In both examples, parallel tempering overcomes the inherent estimation challenge, providing extremely consistent estimates across different runs of the algorithm with large effective sample sizes. I provide code compatible with Dynare mod files, making this routine straightforward for DSGE practitioners to implement.
    Keywords: Econometric and statistical methods, Economic models
    JEL: C11 C15 E10
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:24-13&r=
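The swap-then-mutate loop described in this abstract can be illustrated on a bimodal toy target, where a single Metropolis chain would get stuck in one mode. This minimal sketch is not the paper's DSGE implementation: the target, temperature ladder, and proposal scale are all made up for illustration.

```python
# Sketch: parallel tempering on a bimodal one-dimensional target.
import math
import random

random.seed(2)

def log_target(x):
    # a bimodal "posterior": equal mixture of N(-4, 1) and N(4, 1),
    # computed via log-sum-exp for numerical stability
    l1 = -0.5 * (x + 4) ** 2
    l2 = -0.5 * (x - 4) ** 2
    m = max(l1, l2)
    return m + math.log(0.5 * math.exp(l1 - m) + 0.5 * math.exp(l2 - m))

betas = [1.0, 0.5, 0.2, 0.05]      # temperatures; beta = 1 is the chain of interest
state = [0.0] * len(betas)
cold_draws = []
for _ in range(20000):
    # propose to swap the states of a random adjacent pair of chains
    k = random.randrange(len(betas) - 1)
    log_ratio = (betas[k] - betas[k + 1]) * (log_target(state[k + 1]) - log_target(state[k]))
    if math.log(random.random()) < log_ratio:
        state[k], state[k + 1] = state[k + 1], state[k]
    # Metropolis mutation of each chain against its tempered target
    for j, b in enumerate(betas):
        prop = state[j] + random.gauss(0, 1.0)
        if math.log(random.random()) < b * (log_target(prop) - log_target(state[j])):
            state[j] = prop
    cold_draws.append(state[0])

share_right = sum(x > 0 for x in cold_draws) / len(cold_draws)
print(round(share_right, 2))  # near 0.5 when the cold chain mixes across both modes
```

The hot chains explore the flattened target freely, and accepted swaps carry their states down the ladder, which is why the cold chain escapes a single mode.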
  17. By: Sullivan Hué; Christophe Hurlin; Yang Lu
    Abstract: We propose an original two-part, duration-severity approach for backtesting Expected Shortfall (ES). While Probability Integral Transform (PIT) based ES backtests have gained popularity, they have yet to allow for separate testing of the frequency and severity of Value-at-Risk (VaR) violations. This is a crucial aspect, as ES measures the average loss in the event of such violations. To overcome this limitation, we introduce a backtesting framework that relies on the sequence of inter-violation durations and the sequence of severities in case of violations. By leveraging the theory of (bivariate) orthogonal polynomials, we derive orthogonal moment conditions satisfied by these two sequences. Our approach includes a straightforward, model-free Wald test, which encompasses various unconditional and conditional coverage backtests for both VaR and ES. This test aids in identifying any mis-specified components of the internal model used by banks to forecast ES. Moreover, it can be extended to analyze other systemic risk measures such as Marginal Expected Shortfall. Simulation experiments indicate that our test exhibits good finite sample properties for realistic sample sizes. Through application to two stock indices, we demonstrate how our methodology provides insights into the reasons for rejections in testing ES validity.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.02012&r=
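The raw ingredients of the paper's approach, the sequence of inter-violation durations and the sequence of severities, are easy to construct from a return series and a VaR forecast. The sketch below builds them and runs only the simplest frequency check (a Kupiec-style z-test that violations occur at rate alpha); the paper's contribution, joint moment conditions from bivariate orthogonal polynomials, is not implemented here, and the data are simulated.

```python
# Sketch: violation sequence, durations, and severities for a VaR/ES backtest.
import math
import random

random.seed(5)
alpha, T = 0.05, 1000
returns = [random.gauss(0, 1) for _ in range(T)]
var_forecast = -1.645          # the 5% quantile of N(0, 1): a correctly specified VaR

violations = [r < var_forecast for r in returns]
durations, severities, last = [], [], -1
for t, v in enumerate(violations):
    if v:
        durations.append(t - last)                # time since the previous violation
        severities.append(var_forecast - returns[t])  # loss beyond the VaR level
        last = t

# frequency leg only: under a correct model, violations are Bernoulli(alpha)
n = sum(violations)
z = (n / T - alpha) / math.sqrt(alpha * (1 - alpha) / T)
print(n, round(z, 2))  # |z| small: violation frequency consistent with alpha
```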
  18. By: W. Benedikt Schmal
    Abstract: Natural language processing tools have become frequently used in social sciences such as economics, political science, and sociology. Many publications apply topic modeling to elicit latent topics in text corpora and their development over time. Here, most publications rely on visual inspections and draw inference on changes, structural breaks, and developments over time. We suggest using univariate time series econometrics to introduce more quantitative rigor that can strengthen the analyses. In particular, we discuss the econometric topics of non-stationarity as well as structural breaks. This paper serves as a comprehensive practitioner's guide to provide researchers in the social and life sciences as well as the humanities with concise advice on how to implement econometric time series methods to thoroughly investigate topic prevalences over time. We provide coding advice for the statistical software R throughout the paper. The application of the discussed tools to a sample dataset completes the analysis.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.18499&r=
  19. By: H. Peter Boswijk; Jun Yu; Yang Zu
    Abstract: Based on a continuous-time stochastic volatility model with a linear drift, we develop a test for explosive behavior in financial asset prices at a low frequency when prices are sampled at a higher frequency. The test exploits the volatility information in the high-frequency data. The method consists of devolatizing log-asset price increments with realized volatility measures and performing a supremum-type recursive Dickey-Fuller test on the devolatized sample. The proposed test has a nuisance-parameter-free asymptotic distribution and is easy to implement. We study the size and power properties of the test in Monte Carlo simulations. A real-time date-stamping strategy based on the devolatized sample is proposed for the origination and conclusion dates of the explosive regime. Conditions under which the real-time date-stamping strategy is consistent are established. The test and the date-stamping strategy are applied to study explosive behavior in cryptocurrency and stock markets.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.02087&r=
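The mechanics described in this abstract can be stylized in a short simulation: devolatize the log-price increments by a volatility measure, rebuild a standardized path, and take the supremum of forward-recursive Dickey-Fuller t-statistics (a SADF-type statistic). This sketch uses the true simulated volatility in place of a realized-volatility estimate, omits lag augmentation, and says nothing about critical values; those details follow the paper.

```python
# Sketch: sup of recursive DF statistics on a devolatized log-price path.
import math
import random

random.seed(3)
T = 300
vol = [0.5 + 0.4 * math.sin(t / 20) ** 2 for t in range(T)]   # time-varying volatility

# log price: unit root up to t = 200, mildly explosive afterwards
y = [0.0]
for t in range(1, T):
    rho = 1.0 if t < 200 else 1.04
    y.append(rho * y[-1] + vol[t] * random.gauss(0, 1))

# devolatize the increments and rebuild a standardized path
z = [0.0]
for t in range(1, T):
    z.append(z[-1] + (y[t] - y[t - 1]) / vol[t])

def df_tstat(series):
    """t-statistic on rho - 1 in: diff(y_t) = (rho - 1) * y_{t-1} + e_t
    (no constant, no lag augmentation)."""
    num = sum((series[t] - series[t - 1]) * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    b = num / den
    resid = [(series[t] - series[t - 1]) - b * series[t - 1] for t in range(1, len(series))]
    s2 = sum(e * e for e in resid) / (len(series) - 2)
    return b / math.sqrt(s2 / den)

# supremum of forward-expanding-window DF statistics on the devolatized path
sadf = max(df_tstat(z[:k]) for k in range(50, T + 1))
print(round(sadf, 2))  # a large positive value flags the explosive episode
```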
  20. By: Savi Virolainen
    Abstract: Linear structural vector autoregressive models can be identified statistically without imposing restrictions on the model if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model consisting of two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty. The introduced methods are implemented in the accompanying R package sstvars.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.19707&r=
  21. By: Ben Deaner; Hyejin Ku
    Abstract: In economic program evaluation, it is common to obtain panel data in which outcomes are indicators that an individual has reached an absorbing state. For example, they may indicate whether an individual has exited a period of unemployment, passed an exam, left a marriage, or had their parole revoked. The parallel trends assumption that underpins difference-in-differences generally fails in such settings. We suggest identifying conditions that are analogous to those of difference-in-differences but apply to hazard rates rather than mean outcomes. These alternative assumptions motivate estimators that retain the simplicity and transparency of standard diff-in-diff, and we suggest analogous specification tests. Our approach can be adapted to general linear restrictions between the hazard rates of different groups, motivating duration analogues of the triple differences and synthetic control methods. We apply our procedures to examine the impact of a policy that increased the generosity of unemployment benefits, using a cross-cohort comparison.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.05220&r=
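As a toy calculation of the paper's core idea, the difference-in-differences logic can be applied to discrete-time hazard rates, exits divided by the at-risk population in each cell, rather than to mean outcomes. The example below takes differences of log hazards (a proportional-trends variant; the paper works with general linear restrictions between hazard rates), and all counts are made up.

```python
# Sketch: diff-in-diff on (log) hazard rates in a 2x2 design.
import math

# (at_risk, exits) by group and period
cells = {
    ("control", "pre"): (1000, 100),   # hazard 0.10
    ("control", "post"): (900, 135),   # hazard 0.15
    ("treated", "pre"): (800, 96),     # hazard 0.12
    ("treated", "post"): (704, 141),   # hazard approx. 0.20
}

def hazard(group, period):
    at_risk, exits = cells[(group, period)]
    return exits / at_risk

# diff-in-diff on log hazards: the treatment effect as a proportional hazard shift
did_log = ((math.log(hazard("treated", "post")) - math.log(hazard("treated", "pre")))
           - (math.log(hazard("control", "post")) - math.log(hazard("control", "pre"))))
print(round(math.exp(did_log), 2))  # hazard ratio attributable to treatment
```

Working with hazards rather than the absorbing-state indicator itself is what rescues the diff-in-diff logic: the mean of the indicator mechanically drifts toward one as individuals are absorbed, so parallel trends in means generally fails.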
  22. By: Zhang, Siliang; Kuha, Jouni; Steele, Fiona
    Abstract: We define a model for the joint distribution of multiple continuous latent variables which includes a model for how their correlations depend on explanatory variables. This is motivated by and applied to social scientific research questions in the analysis of intergenerational help and support within families, where the correlations describe reciprocity of help between generations and complementarity of different kinds of help. We propose an MCMC procedure for estimating the model which maintains the positive definiteness of the implied correlation matrices, and describe theoretical results which justify this approach and facilitate efficient implementation of it. The model is applied to data from the UK Household Longitudinal Study to analyse exchanges of practical and financial support between adult individuals and their non-coresident parents.
    Keywords: Bayesian estimation; covariance matrix modelling; item response theory models; positive definite matrices; two-step estimation
    JEL: C1
    Date: 2024–05–28
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:123698&r=
  23. By: Kojevnikov, Denis (Tilburg University, School of Economics and Management); Song, Kyungchul
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:tiu:tiutis:aca0631e-4f8a-45c7-af3a-4e1942712e47&r=
  24. By: Kaizhao Liu; Jose Blanchet; Lexing Ying; Yiping Lu
    Abstract: Bootstrap is a popular methodology for simulating input uncertainty. However, it can be computationally expensive when the number of samples is large. We propose a new approach called Orthogonal Bootstrap that reduces the number of required Monte Carlo replications. We decompose the target being simulated into two parts: the non-orthogonal part, which has a closed-form result known as the Infinitesimal Jackknife, and the orthogonal part, which is easier to simulate. We theoretically and numerically show that Orthogonal Bootstrap significantly reduces the computational cost of Bootstrap while improving empirical accuracy and maintaining the same width of the constructed interval.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.19145&r=
  25. By: David M. Kaplan (University of Missouri); Qian Wu (Southwestern University of Finance and Economics)
    Abstract: The famous Blinder-Oaxaca decomposition estimates the statistically "explained" proportion of a between-group difference in means, but ordinal variables have no mean. A common approach assigns cardinal values 1, 2, 3, ... to the ordinal categories and runs the conventional OLS-based decomposition. Surprisingly, we show such results are numerically identical to a decomposition of the survival function when estimating the counterfactual using OLS-based distribution regression, even if the cardinalization is wrong. Still, reporting the counterfactual helps transparency and wide-sense replication, and to mitigate functional form misspecification, we describe and implement a nonparametric estimator. Empirically, we decompose U.S. rural-urban differences in mental health.
    Keywords: Blinder-Oaxaca decomposition, counterfactual distribution, distribution regression, survival function
    JEL: C25 I14
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:2404&r=
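The OLS-based Blinder-Oaxaca decomposition that this paper takes as its starting point splits the gap in group means into an "explained" part (covariate differences priced at one group's coefficients) and an "unexplained" remainder. The sketch below runs it on a cardinalized ordinal outcome coded 1, 2, 3, with one covariate and invented group labels; it illustrates only the conventional decomposition, not the paper's survival-function or nonparametric results.

```python
# Sketch: two-fold Blinder-Oaxaca decomposition with one covariate.
def ols(x, y):
    """Simple regression with intercept: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# cardinalized ordinal outcome (categories coded 1, 2, 3) and one covariate per group
x_a = [1, 2, 3, 4, 5, 6]; y_a = [1, 2, 2, 3, 3, 3]   # group A (e.g., urban)
x_b = [1, 1, 2, 2, 3, 4]; y_b = [1, 1, 2, 2, 2, 3]   # group B (e.g., rural)

_, b_a = ols(x_a, y_a)                                # group A's slope prices the covariate gap
gap = sum(y_a) / len(y_a) - sum(y_b) / len(y_b)
explained = b_a * (sum(x_a) / len(x_a) - sum(x_b) / len(x_b))
unexplained = gap - explained
print(round(gap, 3), round(explained, 3), round(unexplained, 3))
```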
  26. By: Battulga Gankhuu
    Abstract: This study introduces marginal density functions of the general Bayesian Markov-Switching Vector Autoregressive (MS-VAR) process. For the Bayesian MS-VAR process, we provide closed-form density functions and Monte Carlo simulation algorithms, including the importance sampling method. The Monte Carlo simulation method departs from previous simulation methods in that it removes duplication in the regime vector.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.11235&r=

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.