nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒05‒03
25 papers chosen by
Sune Karlsson
Örebro universitet

  1. Truncated sum-of-squares estimation of fractional time series models with generalized power law trend By Javier Hualde; Morten Ørregaard Nielsen
  2. Social Networks with Mismeasured Links By Arthur Lewbel; Xi Qu; Xun Tang
  3. Nonparametric Test for Volatility in Clustered Multiple Time Series By Erniel B. Barrios; Paolo Victor T. Redondo
  4. Sequential Search Models: A Pairwise Maximum Rank Approach By Jiarui Liu
  5. Daily news sentiment and monthly surveys: A mixed–frequency dynamic factor model for nowcasting consumer confidence By Andres Algaba; Samuel Borms; Kris Boudt; Brecht Verbeken
  6. Nonparametric Difference-in-Differences in Repeated Cross-Sections with Continuous Treatments By Xavier D'Haultfoeuille; Stefan Hoderlein; Yuya Sasaki
  7. Changepoint detection in random coefficient autoregressive models By Lajos Horvath; Lorenzo Trapani
  8. Weak Instrumental Variables: Limitations of Traditional 2SLS and Exploring Alternative Instrumental Variable Estimators By Aiwei Huang; Madhurima Chandra; Laura Malkhasyan
  9. CATE meets ML: Conditional average treatment effect and machine learning By Jacob, Daniel
  10. Bayesian Local Projections By Miranda-Agrippino, Silvia; Ricco, Giovanni
  11. Generalized Linear Models with Structured Sparsity Estimators By Mehmet Caner
  12. Loss-Based Variational Bayes Prediction By David T. Frazier; Ruben Loaiza-Maya; Gael M. Martin; Bonsoo Koo
  13. Fractional Dickey-Fuller test with or without prehistorical influence By BENSALMA, Ahmed
  14. K-expectiles clustering By Wang, Bingling; Li, Yingxing; Härdle, Wolfgang
  15. Performance of Empirical Risk Minimization for Linear Regression with Dependent Data By Christian Brownlees; Guðmundur Stefán Guðmundsson
  16. The Mean Squared Prediction Error Paradox By Pincheira, Pablo; Hardy, Nicolas
  17. Valid Heteroskedasticity Robust Testing By Pötscher, Benedikt M.; Preinerstorfer, David
  18. Maximum Likelihood Bunching Estimators of the ETI By Aronsson, Thomas; Jenderny, Katharina; Lanot, Gauthier
  19. Addressing Sample Selection Bias for Machine Learning Methods By Dylan Brewer; Alyssa Carlson
  20. Stochastic Gradient Variational Bayes and Normalizing Flows for Estimating Macroeconomic Models By Ramis Khbaibullin; Sergei Seleznev
  21. Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules By Yusuke Narita; Kohei Yata
  22. Estimating Future VaR from Value Samples and Applications to Future Initial Margin By Narayan Ganesan; Bernhard Hientzsch
  23. A Gaussian Process Model of Cross-Category Dynamics in Brand Choice By Ryan Dew; Yuhao Fan
  24. (When) should you adjust inferences for multiple hypothesis testing? By Davide Viviano; Kaspar Wuthrich; Paul Niehaus
  25. Is it MOLS or COLS? By Parmeter, Christopher F.

  1. By: Javier Hualde (Universidad Pública de Navarra); Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: We consider truncated (or conditional) sum-of-squares estimation of a parametric fractional time series model with an additive deterministic structure. The latter consists of both a drift term and a generalized power law trend. The memory parameter of the stochastic component and the power parameter of the deterministic trend component are both considered unknown real numbers to be estimated and belonging to arbitrarily large compact sets. Thus, our model captures different forms of nonstationarity and noninvertibility as well as a very flexible deterministic specification. As in related settings, the proof of consistency (which is a prerequisite for proving asymptotic normality) is challenging due to non-uniform convergence of the objective function over a large admissible parameter space and due to the competition between stochastic and deterministic components. As expected, parameter estimates related to the deterministic component are shown to be consistent and asymptotically normal only for parts of the parameter space depending on the relative strength of the stochastic and deterministic components. In contrast, we establish consistency and asymptotic normality of parameter estimates related to the stochastic component for the entire parameter space. Furthermore, the asymptotic distribution of the latter estimates is unaffected by the presence of the deterministic component, even when this is not consistently estimable. We also include a small Monte Carlo simulation to illustrate our results.
    Keywords: Asymptotic normality, Consistency, Deterministic trend, Fractional process, Generalized polynomial trend, Generalized power law trend, Noninvertibility, Nonstationarity, Sum-of-squares estimation
    JEL: C22
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1458&r=
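    A minimal conditional (truncated) sum-of-squares sketch, for flavour only: it estimates the memory parameter d of a pure fractional process by minimizing the truncated sum of squared fractional-difference residuals, omitting the paper's drift and generalized power law trend. All function names and the simulated data-generating process below are illustrative assumptions, not the authors' code.
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(y, d):
    """Apply the truncated fractional-difference filter (1 - L)^d to y."""
    n = len(y)
    pi = np.ones(n)
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j   # expansion coefficients of (1 - L)^d
    # residual_t = sum_{j=0}^{t} pi_j * y_{t-j}, truncated at the sample start
    return np.array([pi[: t + 1] @ y[t::-1] for t in range(n)])

def css_objective(d, y):
    """Truncated (conditional) sum of squared fractional-difference residuals."""
    return np.sum(frac_diff(y, d) ** 2)

# Simulate a pure fractional process with d = 0.3 by applying (1 - L)^{-d} to noise
rng = np.random.default_rng(0)
n, d_true = 500, 0.3
eps = rng.standard_normal(n)
psi = np.ones(n)
for j in range(1, n):
    psi[j] = psi[j - 1] * (j - 1 + d_true) / j
y = np.array([psi[: t + 1] @ eps[t::-1] for t in range(n)])

res = minimize_scalar(css_objective, bounds=(-0.49, 1.49), args=(y,), method="bounded")
print("CSS estimate of d:", round(res.x, 3))
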
  2. By: Arthur Lewbel (Boston College); Xi Qu (Shanghai Jiao Tong University); Xun Tang (Rice University)
    Abstract: We consider estimation of peer effects in social network models where some network links are incorrectly measured. We show that if the number of mismeasured links does not grow too quickly with the sample size, then standard instrumental variables estimators that ignore the measurement error remain consistent, and standard asymptotic inference methods remain valid. These results hold even when measurement errors in the links are correlated with regressors, or with the model errors. Monte Carlo simulations and real data experiments confirm our results in finite samples. These findings imply that researchers can ignore small amounts of measurement errors in networks.
    Keywords: Social networks, Peer effects, Misclassified links, Missing links, Mismeasured network
    JEL: C31 C51
    Date: 2021–04–28
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:1031&r=
  3. By: Erniel B. Barrios; Paolo Victor T. Redondo
    Abstract: Contagion arising from clustering of multiple time series, such as stock market indicators, can further complicate the nature of volatility, causing a parametric test (which relies on an asymptotic distribution) to suffer from size and power distortions. We propose a test of volatility based on the bootstrap method for multiple time series, intended to account for the possible presence of a contagion effect. While the test is fairly robust to distributional assumptions, it depends on the nature of volatility. The test is correctly sized even in cases where the time series are almost nonstationary. The test is also powerful, especially when the time series are stationary in mean and volatility is contained in only a few clusters. We illustrate the method on global stock price data.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14412&r=
  4. By: Jiarui Liu
    Abstract: This paper studies sequential search models that (1) incorporate unobserved product quality, which can be correlated with endogenous observable characteristics (such as price) and endogenous search cost variables (such as product rankings in online search intermediaries); and (2) do not require researchers to know the true distribution of the match value between consumers and products. A likelihood approach to estimate such models gives biased results. Therefore, I propose a new estimator -- pairwise maximum rank (PMR) estimator -- for both preference and search cost parameters. I show that the PMR estimator is consistent using only data on consumers' search order among one pair of products rather than data on consumers' full consideration set or final purchase. Additionally, we can use the PMR estimator to test for the true match value distribution in the data. In the empirical application, I apply the PMR estimator to quantify the effect of rankings in Expedia hotel search using two samples of the data set, to which consumers are randomly assigned. I find the position effect to be $0.11-$0.36, and the effect estimated using the sample with randomly generated rankings is close to the effect estimated using the sample with endogenous rankings. Moreover, I find that the true match value distribution in the data is unlikely to be N(0,1). Likelihood estimation ignoring endogeneity gives an upward bias of at least $1.17; misspecification of match value distribution as N(0,1) gives an upward bias of at least $2.99.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.13865&r=
  5. By: Andres Algaba (Faculty of Social Sciences and Solvay Business School, Vrije Universiteit Brussel, Pleinlaan 2, 1010 Brussel, Belgium); Samuel Borms (Faculty of Social Sciences and Solvay Business School, Institute of Financial Analysis, University of Neuchâtel, Switzerland.); Kris Boudt (Solvay Business School, Vrije Universiteit Brussel; Department of Economics, Ghent University; School of Business and Economics, Vrije Universiteit Amsterdam); Brecht Verbeken (Faculty of Social Sciences and Solvay Business School.)
    Abstract: Policymakers, firms, and investors closely monitor traditional survey–based consumer confidence indicators and treat them as an important piece of economic information. We propose a latent factor model for the vector of monthly survey–based consumer confidence and daily sentiment embedded in economic media news articles. The proposed mixed–frequency dynamic factor model framework uses a novel covariance matrix specification. Model estimation and real–time filtering of the latent consumer confidence index are computationally simple. In a Monte Carlo simulation study and an empirical application concerning Belgian consumer confidence, we document the economically significant accuracy gains obtained by including daily news sentiment in the dynamic factor model for nowcasting consumer confidence.
    Keywords: dynamic factor model, mixed-frequency, nowcasting, sentiment index, Sentometrics, state space
    JEL: C32 C51 C53 C55
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:nbb:reswpp:202102-396&r=
  6. By: Xavier D'Haultfoeuille; Stefan Hoderlein; Yuya Sasaki
    Abstract: This paper studies the identification of causal effects of a continuous treatment using a new difference-in-difference strategy. Our approach allows for endogeneity of the treatment, and employs repeated cross-sections. It requires an exogenous change over time which affects the treatment in a heterogeneous way, stationarity of the distribution of unobservables and a rank invariance condition on the time trend. On the other hand, we do not impose any functional form restrictions or an additive time trend, and we are invariant to the scaling of the dependent variable. Under our conditions, the time trend can be identified using a control group, as in the binary difference-in-differences literature. In our scenario, however, this control group is defined by the data. We then identify average and quantile treatment effect parameters. We develop corresponding nonparametric estimators and study their asymptotic properties. Finally, we apply our results to the effect of disposable income on consumption.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14458&r=
  7. By: Lajos Horvath; Lorenzo Trapani
    Abstract: We propose a family of CUSUM-based statistics to detect the presence of changepoints in the deterministic part of the autoregressive parameter in a Random Coefficient AutoRegressive (RCA) sequence. In order to ensure the ability to detect breaks at sample endpoints, we thoroughly study weighted CUSUM statistics, analysing the asymptotics for virtually all possible weighting schemes, including the standardised CUSUM process (for which we derive a Darling-Erdős theorem) and even heavier weights (studying the so-called Rényi statistics). Our results are valid irrespective of whether the sequence is stationary or not, and no prior knowledge of stationarity or lack thereof is required. Technically, our results require strong approximations which, in the nonstationary case, are entirely new. Similarly, we allow for heteroskedasticity of unknown form in both the error term and in the stochastic part of the autoregressive coefficient, proposing a family of test statistics which are robust to heteroskedasticity, without requiring any prior knowledge as to the presence or type thereof. Simulations show that our procedures work very well in finite samples. We complement our theory with applications to financial, economic and epidemiological time series.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.13440&r=
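    For flavour, a generic CUSUM-of-partial-sums sketch for a single change in mean; it is not the paper's RCA-specific weighted statistic, and the simulated break and the critical value quoted in the comment are illustrative assumptions.
import numpy as np

def cusum_statistic(x):
    """max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)) and the maximising k."""
    n = len(x)
    s = np.cumsum(x)
    sigma = np.std(x, ddof=1)
    k = np.arange(1, n + 1)
    stats = np.abs(s - k / n * s[-1]) / (sigma * np.sqrt(n))
    return stats.max(), stats.argmax() + 1

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(0.8, 1.0, 200)])
stat, khat = cusum_statistic(x)
print(f"CUSUM statistic = {stat:.2f}, candidate break at t = {khat}")
# Under no break, the statistic behaves like the sup of a Brownian bridge
# (approximate 5% critical value about 1.36).
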
  8. By: Aiwei Huang; Madhurima Chandra; Laura Malkhasyan
    Abstract: Instrumental variables estimation has gained considerable traction in recent decades as a tool for causal inference, particularly amongst empirical researchers. This paper makes three contributions. First, we provide a detailed theoretical discussion on the properties of the standard two-stage least squares estimator in the presence of weak instruments and introduce and derive two alternative estimators. Second, we conduct Monte-Carlo simulations to compare the finite-sample behavior of the different estimators, particularly in the weak-instruments case. Third, we apply the estimators to a real-world context; we employ the different estimators to calculate returns to schooling.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12370&r=
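    A hedged sketch of the textbook two-stage least squares estimator on simulated data with a deliberately weak instrument (first-stage coefficient pi = 0.05), compared with naive OLS; it does not implement the alternative estimators derived in the paper, and the data-generating process is an assumption.
import numpy as np

rng = np.random.default_rng(2)
n, pi = 1000, 0.05                            # small pi makes the instrument weak
z = rng.standard_normal(n)                    # instrument
u = rng.standard_normal(n)                    # structural error
v = 0.8 * u + 0.6 * rng.standard_normal(n)    # first-stage error correlated with u
x = pi * z + v                                # endogenous regressor
beta_true = 1.0
y = beta_true * x + u

# First stage: regress x on (1, z) and keep fitted values
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Second stage: regress y on (1, fitted x); naive OLS for comparison
beta_2sls = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0][1]
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]

print(f"true beta = {beta_true}, OLS = {beta_ols:.2f}, 2SLS = {beta_2sls:.2f}")
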
  9. By: Jacob, Daniel
    Abstract: For treatment effects - one of the core issues in modern econometric analysis - prediction and estimation are flip-sides of the same coin. As it turns out, machine learning methods are the tool for generalized prediction models. Combined with econometric theory, they allow us to estimate not only the average but a personalized treatment effect - the conditional average treatment effect (CATE). In this tutorial, we give an overview of novel methods, explain them in detail, and apply them via Quantlets in real data applications. We study the effect that microcredit availability has on the amount of money borrowed and whether 401(k) pension plan eligibility has an impact on net financial assets, as two empirical examples. The presented toolbox of methods contains metalearners, like the Doubly-Robust, the R-, T- and X-learner, and methods that are specially designed to estimate the CATE, like the causal BART and the generalized random forest. In both the microcredit and the 401(k) examples, we find a positive treatment effect for all observations but diverse evidence of treatment effect heterogeneity. An additional simulation study, where the true treatment effect is known, allows us to compare the different methods and to observe patterns and similarities.
    Keywords: Causal Inference, CATE, Machine Learning, Tutorial
    JEL: C00
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021005&r=
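    An illustrative sketch of one of the metalearners surveyed in the tutorial, the T-learner: fit separate outcome models on treated and control units and difference the predictions. The random-forest learners and the simulated data-generating process are assumptions for illustration, not the paper's Quantlets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.standard_normal((n, p))
D = rng.binomial(1, 0.5, n)              # randomised treatment indicator
tau = 1.0 + 0.5 * X[:, 0]                # heterogeneous true treatment effect
y = X[:, 1] + tau * D + rng.standard_normal(n)

# T-learner: one outcome model per treatment arm, CATE = difference of predictions
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[D == 1], y[D == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[D == 0], y[D == 0])
cate_hat = m1.predict(X) - m0.predict(X)

print("mean CATE estimate (true average effect 1.0):", round(cate_hat.mean(), 2))
print("correlation with true tau:", round(np.corrcoef(cate_hat, tau)[0, 1], 2))
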
  10. By: Miranda-Agrippino, Silvia (Bank of England, CfM and CEPR); Ricco, Giovanni (University of Warwick, OFCE-Sciences Po and CEPR)
    Abstract: We propose a Bayesian approach to Local Projections that optimally addresses the empirical bias-variance trade-off inherent in the choice between VARs and LPs. Bayesian Local Projections (BLP) regularise the LP regression models by using informative priors, thus estimating impulse response functions potentially better able to capture the properties of the data as compared to iterative VARs. In doing so, BLP preserve the flexibility of LPs to empirical model misspecifications while retaining a degree of estimation uncertainty comparable to a Bayesian VAR with standard macroeconomic priors. As a regularised direct forecast, this framework is also a valuable alternative to BVARs for multivariate out-of-sample projections.
    Keywords: Local Projections; VARs
    JEL: C11 C14
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1348&r=
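    As a point of reference, a plain (frequentist) local-projection sketch: for each horizon h, regress y_{t+h} on the period-t shock and lags of y, and read the impulse response off the shock coefficient. BLP would instead regularise these regressions with informative priors; the AR(1) data-generating process and lag choice below are assumptions.
import numpy as np

def local_projection_irf(y, shock, H=12, lags=4):
    """OLS local projections: the IRF at horizon h is the shock coefficient."""
    irf = np.full(H + 1, np.nan)
    T = len(y)
    for h in range(H + 1):
        rows = range(lags, T - h)
        Y = np.array([y[t + h] for t in rows])
        X = np.array([[1.0, shock[t]] + [y[t - l] for l in range(1, lags + 1)]
                      for t in rows])
        irf[h] = np.linalg.lstsq(X, Y, rcond=None)[0][1]
    return irf

rng = np.random.default_rng(4)
T = 400
shock = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + shock[t]      # true impulse response at horizon h is 0.7**h

print(np.round(local_projection_irf(y, shock, H=5), 2))
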
  11. By: Mehmet Caner
    Abstract: In this paper, we introduce structured sparsity estimators in Generalized Linear Models. Structured sparsity estimators in the least squares loss are introduced by Stucky and van de Geer (2018) recently for fixed design and normal errors. We extend their results to debiased structured sparsity estimators with Generalized Linear Model based loss. Structured sparsity estimation means penalized loss functions with a possible sparsity structure used in the chosen norm. These include weighted group lasso, lasso and norms generated from convex cones. The significant difficulty is that it is not clear how to prove two oracle inequalities. The first one is for the initial penalized Generalized Linear Model estimator. Since it is not clear how a particular feasible-weighted nodewise regression may fit in an oracle inequality for penalized Generalized Linear Model, we need a second oracle inequality to get oracle bounds for the approximate inverse for the sample estimate of second-order partial derivative of Generalized Linear Model. Our contributions are fivefold: 1. We generalize the existing oracle inequality results in penalized Generalized Linear Models by proving the underlying conditions rather than assuming them. One of the key issues is the proof of a sample one-point margin condition and its use in an oracle inequality. 2. Our results cover even non sub-Gaussian errors and regressors. 3. We provide a feasible weighted nodewise regression proof which generalizes the results in the literature from a simple l_1 norm usage to norms generated from convex cones. 4. We realize that norms used in feasible nodewise regression proofs should be weaker or equal to the norms in penalized Generalized Linear Model loss. 5. We can debias the first step estimator via getting an approximate inverse of the singular-sample second order partial derivative of Generalized Linear Model loss.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14371&r=
  12. By: David T. Frazier; Ruben Loaiza-Maya; Gael M. Martin; Bonsoo Koo
    Abstract: We propose a new method for Bayesian prediction that caters for models with a large number of parameters and is robust to model misspecification. Given a class of high-dimensional (but parametric) predictive models, this new approach constructs a posterior predictive using a variational approximation to a loss-based, or Gibbs, posterior that is directly focused on predictive accuracy. The theoretical behavior of the new prediction approach is analyzed and a form of optimality demonstrated. Applications to both simulated and empirical data using high-dimensional Bayesian neural network and autoregressive mixture models demonstrate that the approach provides more accurate results than various alternatives, including misspecified likelihood-based predictions.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14054&r=
  13. By: BENSALMA, Ahmed
    Abstract: Recently, a generalization of the standard Dickey-Fuller test to the fractional case has been proposed. The proposed test, called the fractional Dickey-Fuller test, can be applied to a sample generated from a type I or a type II fractional process. Depending on whether the test is applied to a sample generated from a type I or a type II process, it is referred to as a test with or without prehistorical influence, respectively. The first and main objective of this paper is to study the impact of a pre-sample on the finite-sample null distribution. In fact, the recently proposed test is built on a composite null hypothesis rather than a simple one. The second objective is to highlight the theoretical justifications for the choice of the composite null hypothesis. All the theoretical results are illustrated with simulated and real data sets. Furthermore, to facilitate the reproducibility of our simulation data and figures, we provide all the necessary supplementary material, consisting of EViews programs.
    Keywords: ARFIMA; fractional integration; Dickey-Fuller test; fractional Dickey-Fuller test; type I and type II fractional Brownian motion.
    JEL: C12 C15 C4 C5
    Date: 2021–04–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:107408&r=
  14. By: Wang, Bingling; Li, Yingxing; Härdle, Wolfgang
    Abstract: K-means clustering is one of the most widely used partitioning algorithms in cluster analysis due to its simplicity and computational efficiency, but it may not provide ideal clustering results when applied to data with non-spherically shaped clusters. By considering an asymmetrically weighted distance, we propose K-expectile clustering and search for clusters via a greedy algorithm that minimizes the within-cluster τ-variance. We provide algorithms based on two schemes: fixed-τ clustering and adaptive-τ clustering. Validated by simulation results, our method has enhanced performance on data with asymmetrically shaped clusters or clusters with a complicated structure. Applications of our method show that fixed-τ clustering can bring some flexibility to segmentation with decent accuracy, while adaptive-τ clustering may yield better performance.
    Keywords: clustering, expectiles, asymmetric quadratic loss, image segmentation
    JEL: C00
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021003&r=
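    A minimal fixed-τ K-expectile clustering sketch: a Lloyd-type loop in which cluster centres are coordinatewise τ-expectiles and points are assigned by asymmetric quadratic loss. It is only meant to convey the idea; the paper's greedy algorithm and adaptive-τ scheme are more elaborate, and all names and data below are illustrative assumptions.
import numpy as np

def expectile(x, tau, tol=1e-8):
    """Coordinatewise tau-expectile via iterative asymmetric reweighting."""
    m = x.mean(axis=0)
    for _ in range(200):
        w = np.where(x > m, tau, 1 - tau)
        m_new = (w * x).sum(axis=0) / w.sum(axis=0)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m

def asym_loss(x, c, tau):
    """Asymmetric quadratic loss of points x around a centre c."""
    d = x - c
    return np.sum(np.where(d > 0, tau, 1 - tau) * d ** 2, axis=-1)

def k_expectiles(X, k, tau=0.5, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin([asym_loss(X, c, tau) for c in centres], axis=0)
        centres = np.array([expectile(X[labels == j], tau) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels, centres

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(4, 1, (150, 2))])
labels, centres = k_expectiles(X, k=2, tau=0.7)
print("cluster sizes:", np.bincount(labels))
print("centres:\n", np.round(centres, 2))
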
  15. By: Christian Brownlees; Guðmundur Stefán Guðmundsson
    Abstract: This paper establishes oracle inequalities for the prediction risk of the empirical risk minimizer for large-dimensional linear regression. We generalize existing results by allowing the data to be dependent and heavy-tailed. The analysis covers both the cases of identically and heterogeneously distributed observations. Our analysis is nonparametric in the sense that the relationship between the regressand and the regressors is assumed to be unknown. The main results of this paper indicate that the empirical risk minimizer achieves the optimal performance (up to a logarithmic factor) in a dependent data setting.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12127&r=
  16. By: Pincheira, Pablo; Hardy, Nicolas
    Abstract: In this paper, we show that traditional comparisons of Mean Squared Prediction Error (MSPE) between two competing forecasts may be highly controversial. This is so because when some specific conditions of efficiency are not met, the forecast displaying the lowest MSPE will also display the lowest correlation with the target variable. Given that violations of efficiency are usual in the forecasting literature, this opposite behavior in terms of accuracy and correlation with the target variable may be a fairly common empirical finding that we label here as "the MSPE Paradox." We characterize "Paradox zones" in terms of differences in correlation with the target variable and conduct some simple simulations to show that these zones may be non-empty sets. Finally, we illustrate the relevance of the Paradox with two empirical applications.
    Keywords: Mean Squared Prediction Error, Correlation, Forecasting, Time Series, Random Walk.
    JEL: C1 C10 C12 C18 C2 C22 C4 C40 C5 C52 C53 C58 E0 E00 E30 E31 E37 E44 E47 E52 E58 F30 F31 F37 G00 G12 G15 G17 Q0 Q00 Q02 Q1 Q2 Q3 Q33 Q4 Q43 Q47
    Date: 2021–04–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:107403&r=
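    A tiny simulation in the spirit of the Paradox: a heavily shrunk (inefficient) forecast beats a noisy but informative forecast on MSPE while being far less correlated with the target. The numerical setup is an assumption for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(6)
T = 100_000
y = rng.standard_normal(T)                           # target variable
f_shrunk = 0.05 * y + 0.2 * rng.standard_normal(T)   # nearly ignores the signal
f_noisy = y + 1.1 * rng.standard_normal(T)           # unbiased but noisy

for name, f in [("shrunk", f_shrunk), ("noisy", f_noisy)]:
    mspe = np.mean((y - f) ** 2)
    corr = np.corrcoef(y, f)[0, 1]
    print(f"{name:6s}  MSPE = {mspe:.2f}  corr(target, forecast) = {corr:.2f}")
# The shrunk forecast wins on MSPE yet is far less correlated with the target.
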
  17. By: Pötscher, Benedikt M.; Preinerstorfer, David
    Abstract: Tests based on heteroskedasticity robust standard errors are an important technique in econometric practice. Choosing the right critical value, however, is not all that simple: Conventional critical values based on asymptotics often lead to severe size distortions; and so do existing adjustments including the bootstrap. To avoid these issues, we suggest using smallest size-controlling critical values, the generic existence of which we prove in this article. Furthermore, sufficient and often also necessary conditions for their existence are given that are easy to check. Granted their existence, these critical values are the canonical choice: larger critical values result in unnecessary power loss, whereas smaller critical values lead to over-rejections under the null hypothesis, make spurious discoveries more likely, and thus are invalid. We suggest algorithms to numerically determine the proposed critical values and provide implementations in accompanying software. Finally, we numerically study the behavior of the proposed testing procedures, including their power properties.
    Keywords: Heteroskedasticity, Robustness, Tests, Size of a test
    JEL: C12 C14 C20
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:107420&r=
  18. By: Aronsson, Thomas (Department of Economics, Umeå University); Jenderny, Katharina (Department of Economics, Umeå University); Lanot, Gauthier (Department of Economics, Umeå University)
    Abstract: We propose a maximum likelihood method to improve the bunching approach of estimating the elasticity of taxable income (ETI), and derive estimators for several model settings such as bunching with optimization frictions, notches, and heterogeneity in the ETI. Modelling optimization frictions explicitly, our estimators fit the data of several published studies very well. In the presence of a notch, the results can differ substantially from those obtained using the polynomial approach. If there is heterogeneity in the ETI, the elasticity among those who bunch exceeds the average elasticity in the population.
    Keywords: Bunching Estimators; Elasticity of Taxable Income; Income Tax
    JEL: C51 H24 H31
    Date: 2021–04–22
    URL: http://d.repec.org/n?u=RePEc:hhs:umnees:0987&r=
  19. By: Dylan Brewer (School of Economics, Georgia Institute of Technology); Alyssa Carlson (Department of Economics, University of Missouri)
    Abstract: We study approaches for adjusting machine learning methods when the training sample differs from the prediction sample on unobserved dimensions. The machine learning literature predominantly assumes selection only on observed dimensions. Common suggestions are to re-weight or control for variables that influence selection as solutions to selection on observables. Simulation results indicate that common machine learning practices such as re-weighting or controlling for variables that influence selection into the training or testing sample often worsen sample selection bias. We suggest two control-function approaches that remove the effects of selection bias before training and find that they reduce mean-squared prediction error in simulations with a high degree of selection. We apply these approaches to predicting the vote share of the incumbent in gubernatorial elections using previously observed re-election bids. We find that ignoring selection on unobservables leads to substantially higher predicted vote shares for the incumbent than when the control function approach is used.
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:2102&r=
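    A stylized Heckman-style control-function sketch for selection on unobservables: fit a probit for selection into the training sample, add the inverse Mills ratio as a regressor, and compare the estimated slope with naive training on the selected sample. This is a simplified illustration of the general idea, not the authors' exact estimators; the data-generating process is assumed.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 5000
x = rng.standard_normal(n)                  # predictor
z = rng.standard_normal(n)                  # selection shifter excluded from the outcome
u = rng.standard_normal(n)                  # outcome error
s = (0.5 * z - 0.7 * x + 0.8 * u + rng.standard_normal(n)) > 0   # selection depends on u
y = 1.0 + 2.0 * x + u

# Selection equation (probit) on the full sample, then the inverse Mills ratio
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(s.astype(float), W).fit(disp=0)
xb = W @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Outcome models trained on the selected sample only
naive = sm.OLS(y[s], sm.add_constant(x[s])).fit()
ctrl = sm.OLS(y[s], sm.add_constant(np.column_stack([x[s], imr[s]]))).fit()

print("naive slope:", round(naive.params[1], 2),
      " control-function slope:", round(ctrl.params[1], 2), " (true slope 2.0)")
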
  20. By: Ramis Khbaibullin (Bank of Russia, Russian Federation); Sergei Seleznev (Bank of Russia, Russian Federation)
    Abstract: We illustrate the ability of the stochastic gradient variational Bayes algorithm, which is a very popular machine learning tool, to work with macrodata and macromodels. Choosing two approximations (mean-field and normalizing flows), we test properties of algorithms for a set of models and show that these models can be estimated fast despite the presence of estimated hyperparameters. Finally, we discuss the difficulties and possible directions of further research.
    Keywords: Stochastic gradient variational Bayes, normalizing flows, mean-field approximation, sparse Bayesian learning, BVAR, Bayesian neural network, DFM.
    JEL: C11 C32 C45 E17
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:bkr:wpaper:wps61&r=
  21. By: Yusuke Narita; Kohei Yata
    Abstract: Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest. The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors compared to alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, where more than $10 billion worth of relief funding is allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding has little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12909&r=
  22. By: Narayan Ganesan; Bernhard Hientzsch
    Abstract: Predicting future values at risk (fVaR) is an important problem in finance. Such quantities arise in the modelling of future initial margin requirements for counterparty credit risk and of future market risk VaR. One is also interested in derived quantities such as: i) Dynamic Initial Margin (DIM) and Margin Value Adjustment (MVA) in the counterparty risk context; and ii) risk weighted assets (RWA) and Capital Value Adjustment (KVA) for market risk. This paper describes several methods that can be used to predict fVaRs. We begin with the nested MC empirical quantile method as a benchmark, but it is too computationally intensive for routine use. We review several known methods and discuss their novel applications to the problem at hand. The techniques considered include computing percentiles from distributions (Normal and Johnson) that were matched to parametric moments or percentile estimates, quantile regression methods, and others with more specific assumptions or requirements. We also consider how limited inner simulations can be used to improve the performance of these techniques. The paper also provides illustrations, results, and visualizations of intermediate and final results for the various approaches and methods.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.11768&r=
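    A minimal nested Monte Carlo sketch of a future VaR profile, in the spirit of the benchmark method mentioned above: outer paths simulate the risk factor to the future date, inner paths simulate the P&L over the margin period, and the empirical quantile at each outer node gives the fVaR. Lognormal dynamics and a one-unit linear position are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(8)
S0, mu, sigma = 100.0, 0.02, 0.25
t, dt = 1.0, 10 / 252                     # future date and margin period of risk
n_outer, n_inner, alpha = 500, 2000, 0.99

# Outer simulation: risk factor at the future date t
Z = rng.standard_normal(n_outer)
S_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)

# Inner simulation: P&L of a one-unit long position over [t, t + dt]
Zi = rng.standard_normal((n_outer, n_inner))
S_next = S_t[:, None] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Zi)
pnl = S_next - S_t[:, None]

# Future VaR at each outer node = empirical alpha-quantile of the loss distribution
fvar = np.quantile(-pnl, alpha, axis=1)
print("mean future VaR:", round(fvar.mean(), 2))
print("2.5%-97.5% range across outer paths:", np.round(np.quantile(fvar, [0.025, 0.975]), 2))
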
  23. By: Ryan Dew; Yuhao Fan
    Abstract: Understanding individual customers' sensitivities to prices, promotions, brand, and other aspects of the marketing mix is fundamental to a wide swath of marketing problems, including targeting and pricing. Companies that operate across many product categories have a unique opportunity, insofar as they can use purchasing data from one category to augment their insights in another. Such cross-category insights are especially crucial in situations where purchasing data may be rich in one category, and scarce in another. An important aspect of how consumers behave across categories is dynamics: preferences are not stable over time, and changes in individual-level preference parameters in one category may be indicative of changes in other categories, especially if those changes are driven by external factors. Yet, despite the rich history of modeling cross-category preferences, the marketing literature lacks a framework that flexibly accounts for correlated dynamics, or the cross-category interlinkages of individual-level sensitivity dynamics. In this work, we propose such a framework, leveraging individual-level, latent, multi-output Gaussian processes to build a nonparametric Bayesian choice model that allows information sharing of preference parameters across customers, time, and categories. We apply our model to grocery purchase data, and show that our model detects interesting dynamics of customers' price sensitivities across multiple categories. Managerially, we show that capturing correlated dynamics yields substantial predictive gains, relative to benchmarks. Moreover, we find that capturing correlated dynamics can have implications for understanding changes in consumers' preferences over time, and developing targeted marketing strategies based on those dynamics.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.11702&r=
  24. By: Davide Viviano; Kaspar Wuthrich; Paul Niehaus
    Abstract: The use of multiple hypothesis testing adjustments varies widely in applied economic research, without consensus on when and how it should be done. We provide a game-theoretic foundation for this practice. Adjustments are often appropriate in our model when research influences multiple policy decisions. While control of classical notions of compound error rates can emerge in special cases, the appropriate adjustments generally depend on the nature of scale economies in the research production function and on economic interactions between policy decisions. When research examines multiple outcomes this motivates their aggregation into sufficient statistics for policy-making rather than multiple testing adjustments.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.13367&r=
  25. By: Parmeter, Christopher F.
    Abstract: This paper assesses the terminology of modified and corrected ordinary least squares (MOLS/COLS) in efficiency analysis. These two approaches, while different, are often conflated.
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:oeg:wpaper:2021/04&r=
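    A small sketch contrasting the two intercept corrections for a deterministic production frontier: COLS shifts the OLS intercept by the maximum residual so the frontier envelops the data, while MOLS shifts it by an estimated mean of the one-sided inefficiency term (here assumed half-normal). The data-generating process is an illustrative assumption, not the paper's.
import numpy as np

rng = np.random.default_rng(9)
n = 500
x = rng.uniform(1, 5, n)                        # log input
u = np.abs(rng.normal(0, 0.3, n))               # one-sided inefficiency
y = 1.0 + 0.6 * x - u                           # log output lies below the frontier

X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

b_cols = b.copy()
b_cols[0] += resid.max()                        # COLS: shift so all residuals are <= 0

# MOLS under a half-normal inefficiency assumption: Var(u) = sigma_u^2 (pi - 2) / pi
sigma_u = np.sqrt(np.mean(resid ** 2) * np.pi / (np.pi - 2))
b_mols = b.copy()
b_mols[0] += sigma_u * np.sqrt(2 / np.pi)       # add the estimated E[u]

print("OLS intercept:", round(b[0], 2), " COLS:", round(b_cols[0], 2),
      " MOLS:", round(b_mols[0], 2), " (true frontier intercept 1.0)")
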

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.