nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒09‒05
24 papers chosen by
Sune Karlsson
Örebro universitet

  1. Doubly Robust Estimation of Local Average Treatment Effects Using Inverse Probability Weighted Regression Adjustment By Tymon Słoczyński; S. Derya Uysal; Jeffrey M. Wooldridge
  2. A Tale of Two Panel Data Regressions By Dennis Shen; Peng Ding; Jasjeet Sekhon; Bin Yu
  3. High Dimensional Generalised Penalised Least Squares By Ilias Chronopoulos; Katerina Chrysikou; George Kapetanios
  4. Differentially Private Estimation via Statistical Depth By Ryan Cumings-Menon
  5. Forecasting Algorithms for Causal Inference with Panel Data By Jacob Goldin; Julian Nyarko; Justin Young
  6. Misclassification in Difference-in-differences Models By Augustine Denteh; Désiré Kédagni
  7. Sparse Bayesian State-Space and Time-Varying Parameter Models By Sylvia Frühwirth-Schnatter; Peter Knaus
  8. BONuS: Multiple Multivariate Testing with a Data-Adaptive Test Statistic By Yang, Chiao-Yu; Lei, Lihua; Ho, Nhat; Fithian, William
  9. Estimation of group structures in panel models with individual fixed effects By Mammen, Enno; Wilke, Ralf A.; Zapp, Kristina Maria
  10. Time Series Prediction under Distribution Shift using Differentiable Forgetting By Stefanos Bennett; Jase Clarkson
  11. The Effect of Omitted Variables on the Sign of Regression Coefficients By Matthew A. Masten; Alexandre Poirier
  12. Simultaneity in Binary Outcome Models with an Application to Employment for Couples By Bo E. Honoré; Luojia Hu; Ekaterini Kyriazidou; Martin Weidner
  13. A Flexible Predictive Density Combination for Large Financial Data Sets in Regular and Crisis Periods By Roberto Casarin; Stefano Grassi; Francesco Ravazzolo; Herman K. van Dijk
  14. Detecting common bubbles in multivariate mixed causal-noncausal models By Gianluca Cubadda; Alain Hecq; Elisa Voisin
  15. Quantile Regression Analysis of Censored Data with Selection: An Application to Willingness-to-Pay Data By Victor Champonnois; Olivier Chanel; Costin Protopopescu
  16. Tangential Wasserstein Projections By Florian Gunsilius; Meng Hsuan Hsieh; Myung Jin Lee
  17. Parallel Trends and Dynamic Choices By Philip Marx; Elie Tamer; Xun Tang
  18. Estimation of Heterogeneous Treatment Effects Using Quantile Regression with Interactive Fixed Effects By Ruofan Xu; Jiti Gao; Tatsushi Oka; Yoon-Jae Whang
  19. On Deep Generative Modeling in Economics: An Application with Public Procurement Data By Marcelin Joanis; Andrea Lodi; Igor Sadoune
  20. Missing Values and the Dimensionality of Expected Returns By Andrew Y. Chen; Jack McCoy
  21. Risk modeling with option-implied correlations and score-driven dynamics By Marco Piña; Rodrigo Herrera
  22. Conformal Prediction Bands for Two-Dimensional Functional Time Series By Niccolò Ajroldi; Jacopo Diquigiovanni; Matteo Fontana; Simone Vantini
  23. Change point detection in dynamic Gaussian graphical models: the impact of COVID-19 pandemic on the US stock market By Beatrice Franzolini; Alexandros Beskos; Maria De Iorio; Warrick Poklewski Koziell; Karolina Grzeszkiewicz
  24. Extending the procedure of Engelberg et al. (2009) to surveys with varying interval-widths By Becker, Christoph; Dürsch, Peter; Eife, Thomas A.; Glas, Alexander

  1. By: Tymon Słoczyński; S. Derya Uysal; Jeffrey M. Wooldridge
    Abstract: We revisit the problem of estimating the local average treatment effect (LATE) and the local average treatment effect on the treated (LATT) when control variables are available, either to render the instrumental variable (IV) suitably exogenous or to improve precision. Unlike previous approaches, our doubly robust (DR) estimation procedures use quasi-likelihood methods weighted by the inverse of the IV propensity score - so-called inverse probability weighted regression adjustment (IPWRA) estimators. By properly choosing models for the propensity score and outcome models, fitted values are ensured to be in the logical range determined by the response variable, producing DR estimators of LATE and LATT with appealing small sample properties. Inference is relatively straightforward both analytically and using the nonparametric bootstrap. Our DR LATE and DR LATT estimators work well in simulations. We also propose a DR version of the Hausman test that compares different estimates of the average treatment effect on the treated (ATT) under one-sided noncompliance.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.01300&r=
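    Illustration: a minimal Python sketch of the IPWRA idea on simulated data: model the instrument propensity score, fit the reduced-form and first-stage regressions in each instrument arm with inverse-propensity weights, and take the ratio of the implied mean differences. This is not the authors' procedure; the data, model choices, and variable names are hypothetical, and a logit first stage is used only to echo the abstract's point that fitted values should respect the response's logical range.
      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      X = rng.normal(size=(n, 3))                                    # controls
      Z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))                # binary instrument
      D = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 1] + 2 * Z - 1))))  # treatment
      Y = 1.0 * D + X.sum(axis=1) + rng.normal(size=n)               # outcome, true LATE = 1

      # Step 1: model the instrument propensity score P(Z = 1 | X).
      ps = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]

      # Step 2: weighted regression adjustment within each instrument arm.
      def arm_means(target, model_cls):
          # Fit `target` on X separately for Z = 1 and Z = 0, weighting by the
          # inverse instrument propensity, then average fits over the full sample.
          m1 = model_cls().fit(X[Z == 1], target[Z == 1], sample_weight=1 / ps[Z == 1])
          m0 = model_cls().fit(X[Z == 0], target[Z == 0], sample_weight=1 / (1 - ps[Z == 0]))
          if hasattr(m1, "predict_proba"):
              return m1.predict_proba(X)[:, 1].mean(), m0.predict_proba(X)[:, 1].mean()
          return m1.predict(X).mean(), m0.predict(X).mean()

      y1, y0 = arm_means(Y, LinearRegression)    # reduced form: Z on Y
      d1, d0 = arm_means(D, LogisticRegression)  # first stage: Z on D, fits stay in [0, 1]
      late_hat = (y1 - y0) / (d1 - d0)
      print(f"IPWRA-style LATE estimate: {late_hat:.3f}")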
  2. By: Dennis Shen; Peng Ding; Jasjeet Sekhon; Bin Yu
    Abstract: A central goal in social science is to evaluate the causal effect of a policy. In this pursuit, researchers often organize their observations in a panel data format, where a subset of units are exposed to a policy (treatment) for some time periods while the remaining units are unaffected (control). The spread of information across time and space motivates two general approaches to estimate and infer causal effects: (i) unconfoundedness, which exploits time series patterns, and (ii) synthetic controls, which exploits cross-sectional patterns. Although conventional wisdom decrees that the two approaches are fundamentally different, we show that they yield numerically identical estimates under several popular settings that we coin the symmetric class. We study the two approaches for said class under a generalized regression framework and argue that valid inference relies on both correlation patterns. Accordingly, we construct a mixed confidence interval that captures the uncertainty across both time and space. We illustrate its advantages over inference procedures that only account for one dimension using data-inspired simulations and empirical applications. Building on these insights, we advocate for panel data agnostic (PANDA) regression--rooted in model checking and based on symmetric estimators and mixed confidence intervals--when the data generating process is unknown.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.14481&r=
  3. By: Ilias Chronopoulos; Katerina Chrysikou; George Kapetanios
    Abstract: In this paper we develop inference for high dimensional linear models with serially correlated errors. We examine the Lasso under the assumption of strong mixing in the covariates and error process, allowing for fatter tails in their distributions. Because the Lasso estimator performs poorly under such circumstances, we estimate the parameters of interest via a GLS Lasso and extend the asymptotic properties of the Lasso to these more general conditions. Our theoretical results indicate that the non-asymptotic bounds for stationary dependent processes are sharper, while the rate of the Lasso under general conditions appears slower as $T,p\to \infty$. We further employ the debiased Lasso to perform inference uniformly on the parameters of interest. Monte Carlo results support the proposed estimator, as it has significant efficiency gains over traditional methods.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.07055&r=
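    Illustration: a minimal sketch of a feasible GLS Lasso under an assumed AR(1) error process (a pilot Lasso, an autocorrelation estimate from its residuals, Cochrane-Orcutt-style quasi-differencing, then a second Lasso). The paper's estimator and theory are more general; the data and tuning values below are hypothetical.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      T, p, rho = 400, 200, 0.7
      beta = np.zeros(p)
      beta[:5] = 1.0                                   # sparse truth
      X = rng.normal(size=(T, p))
      u = np.zeros(T)
      for t in range(1, T):                            # AR(1) errors
          u[t] = rho * u[t - 1] + rng.normal()
      y = X @ beta + u

      # Step 1: pilot Lasso on the raw data.
      pilot = Lasso(alpha=0.1).fit(X, y)
      res = y - pilot.predict(X)

      # Step 2: estimate the error autocorrelation from the pilot residuals.
      rho_hat = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)

      # Step 3: quasi-difference the data and refit the Lasso (the GLS Lasso step).
      y_gls = y[1:] - rho_hat * y[:-1]
      X_gls = X[1:] - rho_hat * X[:-1]
      gls_lasso = Lasso(alpha=0.1).fit(X_gls, y_gls)
      print("rho_hat:", round(rho_hat, 2), "| nonzero coefficients:",
            int(np.sum(gls_lasso.coef_ != 0)))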
  4. By: Ryan Cumings-Menon
    Abstract: Constructing a differentially private (DP) estimator requires deriving the maximum influence of an observation, which can be difficult in the absence of exogenous bounds on the input data or the estimator, especially in high dimensional settings. This paper shows that standard notions of statistical depth, i.e., halfspace depth and regression depth, are particularly advantageous in this regard, both in the sense that the maximum influence of a single observation is easy to analyze and that this value is typically low. This is used to motivate new approximate DP location and regression estimators using the maximizers of these two notions of statistical depth. A more computationally efficient variant of the approximate DP regression estimator is also provided. Also, to avoid requiring that users specify a priori bounds on the estimates and/or the observations, variants of these DP mechanisms are described that satisfy random differential privacy (RDP), which is a relaxation of differential privacy provided by Hall, Wasserman, and Rinaldo (2013). We also provide simulations of the two DP regression methods proposed here. The proposed estimators appear to perform favorably relative to the existing DP regression methods we consider in these simulations when either the sample size is at least 100-200 or the privacy-loss budget is sufficiently high.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.12602&r=
  5. By: Jacob Goldin; Julian Nyarko; Justin Young
    Abstract: Conducting causal inference with panel data is a core challenge in social science research. Advances in forecasting methods can facilitate this task by more accurately predicting the counterfactual evolution of a treated unit had treatment not occurred. In this paper, we draw on a newly developed deep neural architecture for time series forecasting (the N-BEATS algorithm). We adapt this method from conventional time series applications by incorporating leading values of control units to predict a "synthetic" untreated version of the treated unit in the post-treatment period. We refer to the estimator derived from this method as SyNBEATS, and find that it significantly outperforms traditional two-way fixed effects and synthetic control methods across a range of settings. We also find that SyNBEATS attains comparable or more accurate performance relative to more recent panel estimation methods such as matrix completion and synthetic difference in differences. Our results highlight how advances in the forecasting literature can be harnessed to improve causal inference in panel settings.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.03489&r=
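    Illustration: a toy version of the counterfactual-forecasting logic described above, with a ridge regression on contemporaneous control-unit outcomes standing in for the N-BEATS forecaster. This is not the SyNBEATS estimator; the data are simulated and the stand-in model is chosen purely for brevity.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(2)
      T, T0, n_ctrl, tau = 120, 100, 20, 2.0        # T0 = last pre-treatment period
      factor = np.cumsum(rng.normal(size=T))        # common trend
      controls = factor[:, None] + rng.normal(size=(T, n_ctrl))
      treated = factor + rng.normal(size=T)
      treated[T0:] += tau                           # treatment effect after T0

      # Train the forecaster on pre-treatment periods only.
      model = Ridge(alpha=1.0).fit(controls[:T0], treated[:T0])

      # Forecast the untreated ("synthetic") path post-treatment and form the ATT.
      counterfactual = model.predict(controls[T0:])
      att_hat = np.mean(treated[T0:] - counterfactual)
      print(f"Estimated ATT: {att_hat:.2f} (simulated truth: {tau})")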
  6. By: Augustine Denteh; Désiré Kédagni
    Abstract: The difference-in-differences (DID) design is one of the most popular methods in empirical economics research. However, there is almost no work examining what the DID method identifies in the presence of a misclassified treatment variable. This paper fills this gap by studying the identification of treatment effects in DID designs when the treatment is misclassified. Misclassification arises in various ways, including when the timing of policy intervention is ambiguous or when researchers need to infer treatment from auxiliary data. We show that the DID estimand is biased and recovers a weighted average of the average treatment effects on the treated (ATT) in two subpopulations -- the correctly classified and misclassified units. The DID estimand may yield the wrong sign in some cases and is otherwise attenuated. We provide bounds on the ATT when the researcher has access to information on the extent of misclassification in the data. We demonstrate our theoretical results using simulations and provide two empirical applications to guide researchers in performing sensitivity analysis using our proposed methods.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.11890&r=
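    Illustration: a small simulation of the attenuation result described above. With classical (random) misclassification of the treatment indicator, the canonical two-period DiD estimate shrinks towards zero. This sketch is illustrative only and is not the authors' bounding procedure.
      import numpy as np

      rng = np.random.default_rng(3)
      n, tau, mis_rate = 100_000, 1.0, 0.25
      true_d = rng.binomial(1, 0.5, n)                   # true treatment group
      flip = rng.binomial(1, mis_rate, n)
      obs_d = np.where(flip == 1, 1 - true_d, true_d)    # recorded (misclassified) group

      y0 = rng.normal(size=n) + true_d                            # pre-period outcome
      y1 = y0 + 0.5 + tau * true_d + 0.1 * rng.normal(size=n)     # post-period, effect tau

      def did(d):
          return (y1[d == 1] - y0[d == 1]).mean() - (y1[d == 0] - y0[d == 0]).mean()

      print(f"DiD with true treatment status:     {did(true_d):.3f}")  # close to 1.0
      print(f"DiD with recorded treatment status: {did(obs_d):.3f}")   # attenuated (about 0.5 here)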
  7. By: Sylvia Frühwirth-Schnatter; Peter Knaus
    Abstract: In this chapter, we review variance selection for time-varying parameter (TVP) models for univariate and multivariate time series within a Bayesian framework. We show how both continuous as well as discrete spike-and-slab shrinkage priors can be transferred from variable selection for regression models to variance selection for TVP models by using a non-centered parametrization. We discuss efficient MCMC estimation and provide an application to US inflation modeling.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.12147&r=
  8. By: Yang, Chiao-Yu (UC Berkeley); Lei, Lihua (Stanford U); Ho, Nhat (UT Austin); Fithian, William (UC Berkeley)
    Abstract: We propose a new adaptive empirical Bayes framework, the Bag-Of-Null-Statistics (BONuS) procedure, for multiple testing where each hypothesis testing problem is itself multivariate or nonparametric. BONuS is an adaptive and interactive knockoff-type method that helps improve the testing power while controlling the false discovery rate (FDR), and is closely connected to the "counting knockoffs" procedure analyzed in Weinstein et al. (2017). Contrary to procedures that start with a p-value for each hypothesis, our method analyzes the entire data set to adaptively estimate an optimal p-value transform based on an empirical Bayes model. Despite the extra adaptivity, our method controls FDR in finite samples even if the empirical Bayes model is incorrect or the estimation is poor. An extension, the Double BONuS procedure, validates the empirical Bayes model to guard against power loss due to model misspecification.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:4031&r=
  9. By: Mammen, Enno; Wilke, Ralf A.; Zapp, Kristina Maria
    Abstract: The fixed effects (FE) panel model is one of the main econometric tools in empirical economic research. A major practical limitation is that the parameters on time-constant covariates are not identifiable. This paper presents a new approach to grouping FE in the linear panel model to reduce their dimensionality and ensure identifiability. By using unsupervised nonparametric density-based clustering, cluster patterns, including their location and number, are not restricted. The approach works with large data structures (units and groups) and only clusters units that are sufficiently similar, while leaving others as unclustered atoms. Asymptotic theory and rates of convergence are presented. With the help of simulations and an application to economic data, it is shown that the suggested method performs well and gives more insightful and efficient results than conventional panel models.
    Keywords: Panel Data, Statistical Learning, Regularisation, Endogeneity
    JEL: C14 C23 C38
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:22023&r=
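    Illustration: a sketch of the general idea above on simulated data: recover unit fixed effects from a within (FE) regression, then group them with a density-based clustering method that leaves isolated units as unclustered atoms. DBSCAN is used here only as a stand-in for the authors' nonparametric procedure, and all tuning values are hypothetical.
      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(5)
      N, T, beta = 300, 10, 1.5
      alpha = rng.choice([-2.0, 0.0, 3.0], size=N)       # three latent FE groups
      x = rng.normal(size=(N, T))
      y = alpha[:, None] + beta * x + 0.3 * rng.normal(size=(N, T))

      # Within (FE) estimate of beta, then unit effects as residual means.
      xd, yd = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
      beta_hat = (xd * yd).sum() / (xd ** 2).sum()
      alpha_hat = (y - beta_hat * x).mean(axis=1)

      # Density-based grouping of the estimated fixed effects; label -1 = atom.
      labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(alpha_hat.reshape(-1, 1))
      print("groups found:", sorted(set(labels) - {-1}),
            "| unclustered atoms:", int((labels == -1).sum()))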
  10. By: Stefanos Bennett; Jase Clarkson
    Abstract: Time series prediction is often complicated by distribution shift which demands adaptive models to accommodate time-varying distributions. We frame time series prediction under distribution shift as a weighted empirical risk minimisation problem. The weighting of previous observations in the empirical risk is determined by a forgetting mechanism which controls the trade-off between the relevancy and effective sample size that is used for the estimation of the predictive model. In contrast to previous work, we propose a gradient-based learning method for the parameters of the forgetting mechanism. This speeds up optimisation and therefore allows more expressive forgetting mechanisms.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.11486&r=
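    Illustration: a minimal sketch of gradient-based learning of an exponential-forgetting factor, in the spirit of the abstract above: observation weights lambda^(age) enter a weighted least-squares fit, and lambda is chosen by gradient descent on a one-step-ahead loss via automatic differentiation (PyTorch). The model, data, and tuning choices are hypothetical, and the paper's forgetting mechanisms are more general.
      import torch

      torch.manual_seed(0)
      T = 300
      t = torch.arange(T, dtype=torch.float32)
      beta_t = torch.where(t < 200, torch.tensor(1.0), torch.tensor(-1.0))  # shift at t = 200
      x = torch.randn(T)
      y = beta_t * x + 0.1 * torch.randn(T)

      raw = torch.tensor(0.0, requires_grad=True)      # lambda = sigmoid(raw) in (0, 1)
      opt = torch.optim.Adam([raw], lr=0.05)

      for step in range(200):
          lam = torch.sigmoid(raw)
          loss = 0.0
          for s in range(T - 50, T):                   # one-step-ahead loss, last 50 periods
              age = torch.arange(s - 1, -1, -1, dtype=torch.float32)
              w = lam ** age                           # forgetting weights on past observations
              beta_hat = (w * x[:s] * y[:s]).sum() / ((w * x[:s] ** 2).sum() + 1e-8)
              loss = loss + (y[s] - beta_hat * x[s]) ** 2
          opt.zero_grad()
          loss.backward()
          opt.step()

      print(f"learned forgetting factor: {torch.sigmoid(raw).item():.3f}")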
  11. By: Matthew A. Masten; Alexandre Poirier
    Abstract: Omitted variables are a common concern in empirical research. We show that "Oster's delta" (Oster 2019), a commonly reported measure of regression coefficient robustness to the presence of omitted variables, does not capture sign changes in the parameter of interest. Specifically, we show that any time this measure is large--suggesting that omitted variables may be unimportant--a much smaller value can actually reverse the sign of the parameter of interest. Relatedly, we show that selection bias adjusted estimands can be extremely sensitive to the choice of the sensitivity parameter. Specifically, researchers commonly compute a bias adjustment under the assumption that Oster's delta equals one. Under the alternative assumption that delta is very close to one, but not exactly equal to one, we show that the bias can instead be arbitrarily large. To address these concerns, we propose a modified measure of robustness that accounts for such sign changes, and discuss best practices for assessing sensitivity to omitted variables. We demonstrate this sign flipping behavior in an empirical application to social capital and the rise of the Nazi party, where we show how it can overturn conclusions about robustness, and how our proposed modifications can be used to regain robustness. We implement our proposed methods in the companion Stata module regsensitivity for easy use in practice.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.00552&r=
  12. By: Bo E. Honoré; Luojia Hu; Ekaterini Kyriazidou; Martin Weidner
    Abstract: Two of Peter Schmidt's many contributions to econometrics have been to introduce a simultaneous logit model for bivariate binary outcomes and to study estimation of dynamic linear fixed effects panel data models using short panels. In this paper, we study a dynamic panel data version of the bivariate model introduced in Schmidt and Strauss (1975) that allows for lagged dependent variables and fixed effects as in Ahn and Schmidt (1995). We combine a conditional likelihood approach with a method of moments approach to obtain an estimation strategy for the resulting model. We apply this estimation strategy to a simple model for the intra-household relationship in employment. Our main conclusion is that the within-household "correlation" in employment differs significantly by the ethnicity composition of the couple even after one allows for unobserved household specific heterogeneity.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.07343&r=
  13. By: Roberto Casarin (University Ca' Foscari of Venice); Stefano Grassi (University of Rome Tor Vergata); Francesco Ravazzolo (BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: A flexible predictive density combination is introduced for large financial data sets which allows for model set incompleteness. Dimension reduction procedures that include learning allocate the large sets of predictive densities and combination weights to relatively small subsets. Given the representation of the probability model in extended nonlinear state-space form, efficient simulation-based Bayesian inference is proposed using parallel dynamic clustering as well as nonlinear filtering, implemented on graphics processing units. The approach is applied to combine predictive densities based on a large number of individual US stock returns of daily observations over a period that includes the Covid-19 crisis period. Evidence on dynamic cluster composition, weight patterns and model set incompleteness gives valuable signals for improved modelling. This enables higher predictive accuracy and better assessment of uncertainty and risk for investment fund management.
    Keywords: Density Combination, Large Set of Predictive Densities, Dynamic Factor Models, Nonlinear state-space, Bayesian Inference
    JEL: C11 C15 C53 E37
    Date: 2022–08–09
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20220053&r=
  14. By: Gianluca Cubadda; Alain Hecq; Elisa Voisin
    Abstract: This paper proposes methods to investigate whether the bubble patterns observed in individual series are common to various series. We detect the non-linear dynamics using recent mixed causal and noncausal models. Both a likelihood ratio test and information criteria are investigated, the former having better performance in our Monte Carlo simulations. Implementing our approach on three commodity prices, we do not find evidence of commonalities, although some series look very similar.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.11557&r=
  15. By: Victor Champonnois (Aix-Marseille Univ, CNRS, AMSE, Marseille, France.); Olivier Chanel (Aix-Marseille Univ, CNRS, AMSE, Marseille, France.); Costin Protopopescu (Aix-Marseille Univ, CNRS, AMSE, Marseille, France.)
    Abstract: Recurring statistical issues such as censoring, selection and heteroskedasticity often impact the analysis of observational data. We investigate the potential advantages of models based on quantile regression (QR) for addressing these issues, with a particular focus on willingness-to-pay data. We gather analytical arguments showing how QR can tackle these issues. We show by means of a Monte Carlo experiment how censored QR (CQR)-based methods perform compared to standard models. We empirically contrast four models on flood risk data. Our findings confirm that selection-censored models based on QR are useful for simultaneously tackling issues often present in observational data.
    Keywords: Censored Quantile Regression; Contingent Valuation; Flood; Monte Carlo Experiment; Quantile Regression; Selection Model; Willingness to Pay
    JEL: C15 C9 C21
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:2214&r=
  16. By: Florian Gunsilius; Meng Hsuan Hsieh; Myung Jin Lee
    Abstract: We develop a notion of projections between sets of probability measures using the geometric properties of the 2-Wasserstein space. It is designed for general multivariate probability measures, is computationally efficient to implement, and provides a unique solution in regular settings. The idea is to work on regular tangent cones of the Wasserstein space using generalized geodesics. Its structure and computational properties make the method applicable in a variety of settings, from causal inference to the analysis of object data. An application to estimating causal effects yields a generalization of the notion of synthetic controls to multivariate data with individual-level heterogeneity, as well as a way to estimate optimal weights jointly over all time periods.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.14727&r=
  17. By: Philip Marx; Elie Tamer; Xun Tang
    Abstract: Difference-in-differences (or DiD) is a commonly used method for estimating treatment effects, and parallel trends is its main identifying assumption: the trend in mean untreated outcomes must be independent of the observed treatment status. In observational settings, treatment is often a dynamic choice made or influenced by rational actors, such as policy-makers, firms, or individual agents. This paper relates the parallel trends assumption to economic models of dynamic choice, which allow for dynamic selection motives such as learning or optimal stopping. In these cases, we clarify the implications of parallel trends on agent behavior, and we show how such features can lead to violations of parallel trends even in simple settings where anticipation concerns are ruled out or mean untreated outcomes are stationary. Finally, we consider some identification results under alternative assumptions that accommodate these features of dynamic choice.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.06564&r=
  18. By: Ruofan Xu; Jiti Gao; Tatsushi Oka; Yoon-Jae Whang
    Abstract: We study the estimation of heterogeneous effects of group-level policies, using quantile regression with interactive fixed effects. Our approach can identify distributional policy effects, particularly effects on inequality, under a type of difference-in-differences assumption. We provide asymptotic properties of our estimators and an inferential method. We apply the model to evaluate the effect of the minimum wage policy on earnings between 1967 and 1980 in the United States. Our results suggest that the minimum wage policy had a significant negative impact on between-inequality but little effect on within-inequality.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.03632&r=
  19. By: Marcelin Joanis; Andrea Lodi; Igor Sadoune
    Abstract: We propose a solution based on deep generative modeling to the problem of sampling synthetic instances of public procurement auctions from observed data. Our contribution is twofold. First, we overcome the challenges inherent to the replication of multi-level structures commonly seen in auction data, and second, we provide a specific validation procedure to evaluate the faithfulness of the resulting synthetic distributions. More generally, we argue that the generation of reliable artificial data enables research design improvements in applications ranging from inference to simulation crafting. In that regard, applied and social sciences can benefit from generative methods that alleviate the hardship of artificial sampling from the highly structured qualitative distributions so characteristic of real-world data. As we dive deep into the technicalities of such algorithms, this paper can also serve as a general guide to density estimation for discrete distributions.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.12255&r=
  20. By: Andrew Y. Chen; Jack McCoy
    Abstract: Combining 100+ cross-sectional predictors requires either dropping 90% of the data or imputing missing values. We compare imputation using the expectation-maximization algorithm with simple ad-hoc methods. Surprisingly, expectation-maximization and ad-hoc methods lead to similar results. This similarity happens because predictors are largely independent: Correlations cluster near zero and more than 10 principal components are required to span 50% of total variance. Independence implies observed predictors are uninformative about missing predictors, making ad-hoc methods valid. In an out-of-sample principal components (PC) regression test, 50 PCs are required to capture equal-weighted long-short expected returns (30 PCs value-weighted), regardless of the imputation method.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.13071&r=
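    Illustration: a minimal sketch contrasting ad-hoc mean imputation with an EM-style iterative imputation on (near-)independent predictors, using scikit-learn's IterativeImputer as a stand-in for the paper's EM algorithm. The data are simulated; with weakly correlated predictors the two methods give very similar imputations, echoing the abstract's point.
      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer, SimpleImputer

      rng = np.random.default_rng(4)
      n, p = 2000, 20
      X = rng.normal(size=(n, p))              # nearly independent "predictors"
      mask = rng.random((n, p)) < 0.3          # 30% of entries missing at random
      X_obs = np.where(mask, np.nan, X)

      X_mean = SimpleImputer(strategy="mean").fit_transform(X_obs)
      X_em = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_obs)

      def rmse(A):
          return np.sqrt(np.mean((A[mask] - X[mask]) ** 2))

      print(f"RMSE, mean imputation:      {rmse(X_mean):.3f}")
      print(f"RMSE, iterative (EM-style): {rmse(X_em):.3f}")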
  21. By: Marco Piña; Rodrigo Herrera
    Abstract: In this paper we make use of option-implied volatilities to build a time-varying implied correlation matrix. We then use this matrix to estimate jointly the covariance matrix of the returns and the implied covariance matrix dynamics. Finally, we perform a backtest and show that the proposed model can effectively use the risk-neutral information to model the variance of the returns and to forecast the Value-at-Risk. Our results show that the model obtains results comparable to the benchmark while considerably reducing the number of estimated parameters.
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:932&r=
  22. By: Niccolò Ajroldi; Jacopo Diquigiovanni; Matteo Fontana; Simone Vantini
    Abstract: Conformal Prediction (CP) is a versatile nonparametric framework used to quantify uncertainty in prediction problems. In this work, we extend the method to time series of functions defined on a bivariate domain, proposing for the first time a distribution-free technique that can be applied to time-evolving surfaces. To obtain meaningful and efficient prediction regions, CP must be coupled with an accurate forecasting algorithm; for this reason, we extend the theory of autoregressive processes in Hilbert space to allow for functions with a bivariate domain. Given the novelty of the subject, we present estimation techniques for the functional autoregressive (FAR) model. A simulation study investigates how different point predictors affect the resulting prediction bands. Finally, we explore the benefits and limits of the proposed approach on a real dataset of daily Sea Level Anomaly observations for the Black Sea over the last twenty years.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.13656&r=
  23. By: Beatrice Franzolini; Alexandros Beskos; Maria De Iorio; Warrick Poklewski Koziell; Karolina Grzeszkiewicz
    Abstract: Reliable estimates of volatility and correlation are fundamental in economics and finance for understanding the impact of macroeconomic events on the market and guiding future investments and policies. Dependence across financial returns is likely to be subject to sudden structural changes, especially in correspondence with major global events, such as the COVID-19 pandemic. In this work, we are interested in capturing abrupt changes over time in the dependence across US industry stock portfolios, over a time horizon that covers the COVID-19 pandemic. The selected stocks give a comprehensive picture of the US stock market. To this end, we develop a Bayesian multivariate stochastic volatility model based on a time-varying sequence of graphs capturing the evolution of the dependence structure. The model builds on the Gaussian graphical model and random change point literatures. In particular, we treat the number and position of change points, and the graphs, as objects of posterior inference, allowing for sparsity in graph recovery and change point detection. The high dimension of the parameter space poses complex computational challenges. However, the model admits a hidden Markov model formulation, which leads to an efficient computational strategy based on a combination of sequential Monte Carlo and Markov chain Monte Carlo techniques. The model and the computational approach are widely applicable, beyond the scope of the application of interest in this work.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.00952&r=
  24. By: Becker, Christoph; Dürsch, Peter; Eife, Thomas A.; Glas, Alexander
    Abstract: The approach by Engelberg, Manski, and Williams (2009) to convert probabilistic survey responses into continuous probability distributions implicitly assumes that the question intervals are equally wide. Almost all recently established household surveys have intervals of varying widths. Applying the standard approach to surveys with varying widths gives implausible and potentially misleading results. This note shows how the approach of Engelberg et al. (2009) can be adjusted to account for intervals of unequal width.
    Keywords: Survey methods, probabilistic questions, density forecasts
    JEL: C18 C82 C83
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:052022&r=
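    Illustration: a generic sketch of the underlying task: convert a respondent's reported interval probabilities into a continuous distribution while respecting the actual (unequal) interval bounds, here by fitting a normal CDF to the bin probabilities by least squares. This is not the authors' adjustment of the Engelberg et al. (2009) procedure (which builds on specific parametric families); the question and numbers are hypothetical.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Hypothetical inflation-expectation question with intervals of varying width.
      edges = np.array([-np.inf, 0.0, 2.0, 4.0, 8.0, 12.0, np.inf])   # percent
      probs = np.array([0.05, 0.30, 0.40, 0.20, 0.05, 0.00])          # reported probabilities

      def objective(theta):
          mu, log_sigma = theta
          cdf = norm.cdf(edges, loc=mu, scale=np.exp(log_sigma))
          implied = np.diff(cdf)             # implied probability of each interval
          return np.sum((implied - probs) ** 2)

      res = minimize(objective, x0=np.array([2.0, 0.0]))
      mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
      print(f"fitted mean: {mu_hat:.2f} percent, fitted std dev: {sigma_hat:.2f} points")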

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.