nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒10‒16
seventeen papers chosen by
Sune Karlsson, Örebro universitet


  1. Sensitivity Analysis for Linear Estimands By Jacob Dorn; Luther Yap
  2. Break-Point Date Estimation for Nonstationary Autoregressive and Predictive Regression Models By Christis Katsouris
  3. Boosting GMM with Many Instruments When Some Are Invalid or Irrelevant By Hao Hao; Tae-Hwy Lee
  4. Reduced-rank Envelope Vector Autoregressive Models By S. Yaser Samadi; Wiranthe B. Herath
  5. Efficient estimation of regression models with user-specified parametric model for heteroskedasticity By Chaudhuri, Saraswata; Renault, Eric
  6. Estimation and Testing of Forecast Rationality with Many Moments By Tae-Hwy Lee; Tao Wang
  7. Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach By Zhou, Yunzhe; Qi, Zhengling; Shi, Chengchun; Li, Lexin
  8. Testing for Stationary or Persistent Coefficient Randomness in Predictive Regressions By Mikihito Nishi
  9. Identifying spatial interdependence in panel data with large N and small T By Deborah Gefang; Stephen G. Hall; George S. Tavlas
  10. Mixed-Effects Methods for Search and Matching Research By John M. Abowd; Kevin L. McKinney
  11. Nonlinear Granger Causality using Kernel Ridge Regression By Wojciech "Victor" Fulmyk
  12. Identification Using Higher-Order Moments Restrictions By Philippe Andrade; Filippo Ferroni; Leonardo Melosi
  13. Empirical Analysis of Network Effects in Nonlinear Pricing Data By Liang Chen; Yao Luo
  14. Nonparametric estimation of k-modal taste heterogeneity for group level agent-based mixed logit By Xiyuan Ren; Joseph Y. J. Chow
  15. Interpreting IV Estimators in Information Provision Experiments By Vod Vilfort; Whitney Zhang
  16. Total-effect Test May Erroneously Reject So-called "Full" or "Complete" Mediation By TingXuan Han; Luxi Zhang; Xinshu Zhao; Ke Deng
  17. Adjusting for Scale-Use Heterogeneity in Self-Reported Well-Being By Daniel J. Benjamin; Kristen Cooper; Ori Heffetz; Miles S. Kimball; Jiannan Zhou

  1. By: Jacob Dorn; Luther Yap
    Abstract: We propose a novel sensitivity analysis framework for linear estimands when identification failure can be viewed as seeing the wrong distribution of outcomes. Our family of assumptions bounds the density ratio between the observed and true conditional outcome distributions. This framework links naturally to selection models, generalizes existing assumptions for the Regression Discontinuity (RD) and Inverse Propensity Weighting (IPW) estimands, and provides a novel nonparametric perspective on violations of identification assumptions for ordinary least squares (OLS). Our sharp partial identification results extend existing results for IPW to cover other estimands and assumptions that allow even unbounded likelihood ratios, yielding a simple and unified characterization of bounds under assumptions like c-dependence of Masten and Poirier (2018). The sharp bounds can be written as a simple closed-form moment of the data, the nuisance functions estimated in the primary analysis, and the conditional outcome quantile function. We find our method does well in simulations even when targeting a discontinuous and nearly infinite bound.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.06305&r=ecm
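    A minimal Python sketch of the kind of closed-form sharp bound the paper characterizes, for the special case of bounding a single mean when the density ratio between the observed and true outcome distributions lies in [1/lambda, lambda]; the simulated data and function names are illustrative, not the authors' estimator for general linear estimands:

      import numpy as np

      def mean_bounds(y, lam):
          # Sharp bounds on the true mean when the density ratio
          # dP_true/dP_obs is assumed to lie in [1/lam, lam].
          y = np.sort(np.asarray(y, dtype=float))
          n = len(y)
          # Upper bound: weight 1/lam below the tau-quantile and lam
          # above it, with tau = lam/(1 + lam) so weights average to one.
          tau = lam / (1.0 + lam)
          k = int(np.floor(tau * n))
          w = np.where(np.arange(n) < k, 1.0 / lam, lam)
          w /= w.mean()                    # renormalize in the sample
          upper = np.mean(w * y)
          lower = np.mean(w[::-1] * y)     # mirror image for the lower bound
          return lower, upper

      rng = np.random.default_rng(0)
      print(mean_bounds(rng.normal(size=2000), lam=1.5))

    The bounds collapse to the sample mean as lambda approaches 1 and widen as the permitted density-ratio band grows.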
  2. By: Christis Katsouris
    Abstract: In this article, we study the statistical and asymptotic properties of break-point estimators in nonstationary autoregressive and predictive regression models when testing for a single structural break at an unknown location in the full sample. Moreover, we investigate how the persistence properties of the covariates and the location of the break-point affect the limiting distribution of the proposed break-point estimators.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.13915&r=ecm
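    For readers new to break-date estimation, a textbook least-squares estimator for a single break in an AR(1) coefficient is sketched below with simulated data; the paper's contribution concerns the asymptotics of such estimators under nonstationarity, which this sketch does not address:

      import numpy as np

      def break_date_ar1(y, trim=0.15):
          # Least-squares break-date estimator: choose the split that
          # minimizes the combined SSR of regime-specific AR(1) fits.
          x, z = y[:-1], y[1:]
          n = len(z)
          best_k, best_ssr = None, np.inf
          for k in range(int(trim * n), int((1 - trim) * n)):
              ssr = 0.0
              for xs, zs in ((x[:k], z[:k]), (x[k:], z[k:])):
                  b = xs @ zs / (xs @ xs)        # OLS slope per regime
                  ssr += np.sum((zs - b * xs) ** 2)
              if ssr < best_ssr:
                  best_k, best_ssr = k, ssr
          return best_k

      rng = np.random.default_rng(1)
      y = np.zeros(600)
      for t in range(1, 600):
          rho = 0.95 if t < 300 else 0.5         # true break at t = 300
          y[t] = rho * y[t - 1] + rng.normal()
      print(break_date_ar1(y))                   # close to 300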
  3. By: Hao Hao (Ford Motor Company); Tae-Hwy Lee (Department of Economics, University of California Riverside)
    Abstract: When the endogenous variable is an unknown function of observable instruments, its conditional mean can be approximated using sieve functions of the observable instruments. We propose a novel instrument selection method, Double-criteria Boosting (DB), that consistently selects only valid and relevant instruments from a large set of candidate instruments. Monte Carlo simulations compare GMM using DB with other methods, such as GMM using Lasso, and show that DB-GMM gives lower bias and RMSE. In the empirical application to automobile demand, the DB-GMM estimator suggests a more elastic estimate of the price elasticity of demand than the standard 2SLS estimator.
    Keywords: Causal inference with high dimensional instruments, Irrelevant instruments, Invalid instruments, Instrument Selection, Machine Learning, Boosting.
    JEL: C1 C2 C3 C5
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202309&r=ecm
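    A hedged sketch of the boosting idea behind instrument selection: greedily add the candidate instrument most correlated with the current first-stage residual, stopping at a noise threshold. This illustrates only the relevance criterion; the paper's double-criteria rule additionally screens out invalid instruments, and the data and threshold below are assumptions:

      import numpy as np

      def greedy_first_stage(x, Z):
          # L2-boosting-style greedy selection for the first stage.
          n, p = Z.shape
          selected, resid = [], x - x.mean()
          while True:
              corr = np.abs(Z.T @ resid)
              j = int(np.argmax(corr))
              if corr[j] < np.sqrt(2 * n * np.log(p)):   # noise cutoff
                  return selected
              selected.append(j)
              Zs = Z[:, selected]
              resid = x - Zs @ np.linalg.lstsq(Zs, x, rcond=None)[0]

      rng = np.random.default_rng(2)
      n = 500
      Z = rng.normal(size=(n, 30))                 # 30 candidates
      x = Z[:, 0] + 0.5 * Z[:, 3] + rng.normal(size=n)
      print(greedy_first_stage(x, Z))              # typically [0, 3]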
  4. By: S. Yaser Samadi; Wiranthe B. Herath
    Abstract: The standard vector autoregressive (VAR) model suffers from overparameterization, which is a serious issue for high-dimensional time series data as it restricts the number of variables and lags that can be incorporated into the model. Several statistical methods, such as the reduced-rank model for multivariate (multiple) time series (Velu et al., 1986; Reinsel and Velu, 1998; Reinsel et al., 2022) and the envelope VAR model (Wang and Ding, 2018), provide solutions for reducing the dimension of the VAR parameter space. However, these methods can be inefficient in extracting relevant information from complex data, either because they fail to distinguish between relevant and irrelevant information or because they handle the rank deficiency problem inefficiently. We incorporate the idea of envelope models into the reduced-rank VAR model to tackle these challenges simultaneously, and propose a new parsimonious version of the classical VAR model called the reduced-rank envelope VAR (REVAR) model. The proposed REVAR model combines the strengths of the reduced-rank VAR and envelope VAR models and leads to significant gains in efficiency and accuracy. The asymptotic properties of the proposed estimators are established under different error assumptions. Simulation studies and real data analysis are conducted to evaluate and illustrate the proposed method.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.12902&r=ecm
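    A minimal sketch of the reduced-rank half of the proposal: fit an unrestricted VAR(1) by OLS and project onto the leading singular directions of the fitted values (classical reduced-rank regression). The envelope refinement that the paper combines with this step is not reproduced here, and the simulated process is an assumption:

      import numpy as np

      def reduced_rank_var1(Y, r):
          X, Z = Y[:-1], Y[1:]                         # lags, responses
          B_ols = np.linalg.lstsq(X, Z, rcond=None)[0]
          _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
          P = Vt[:r].T @ Vt[:r]                        # rank-r projector
          return B_ols @ P

      rng = np.random.default_rng(3)
      k, n = 8, 400
      A = rng.normal(size=(k, 2)) @ rng.normal(size=(2, k))
      A *= 0.8 / np.linalg.norm(A, 2)                  # keep the VAR stable
      Y = np.zeros((n, k))
      for t in range(1, n):
          Y[t] = Y[t - 1] @ A + rng.normal(size=k)
      print(np.linalg.matrix_rank(reduced_rank_var1(Y, r=2)))   # 2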
  5. By: Chaudhuri, Saraswata (Department of Economics, McGill University & Cireq, Montreal); Renault, Eric (Department of Economics, University of Warwick)
    Abstract: Several modern textbooks report that, thanks to the availability of heteroskedasticity-robust standard errors, one observes the near-death of Weighted Least Squares (WLS) in cross-sectional applied work. We argue in this paper that it is actually possible to estimate regression parameters at least as precisely as Ordinary Least Squares (OLS) and WLS, even when using a misspecified parametric model for conditional heteroskedasticity. Our analysis is valid for a general regression framework (including Instrumental Variables and Nonlinear Regression) as long as the regression is defined by a conditional expectation condition. The key is to acknowledge, as first pointed out by Cragg (1992), that when the user-specified heteroskedasticity model is misspecified, WLS has to be modified depending on a choice of some univariate target for estimation. Moreover, targeted WLS can be improved by properly combining moment equations for OLS and WLS respectively. Efficient GMM must be regularized to take into account the possible multicollinearity of the estimating equations when the error terms are actually nearly homoskedastic.
    Keywords: asymptotic optimality; misspecification; nuisance parameters; weighted least squares
    JEL: C12 C13 C21
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1473&r=ecm
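    The following sketch illustrates the paper's starting point: WLS built on a deliberately misspecified parametric skedastic model still estimates the regression coefficients consistently, and a sandwich variance keeps inference valid. The data are simulated, and the paper's targeted and combined OLS/WLS moment estimators go beyond this:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 2000
      x = rng.normal(size=n)
      X = np.column_stack([np.ones(n), x])
      y = X @ np.array([1.0, 2.0]) + np.exp(0.5 * np.abs(x)) * rng.normal(size=n)

      # Step 1: OLS, then fit the (wrong) variance model exp(a + c*x);
      # the true skedastic function depends on |x|, not x.
      b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
      g = np.linalg.lstsq(X, np.log((y - X @ b_ols) ** 2 + 1e-12), rcond=None)[0]

      # Step 2: WLS with the estimated (misspecified) weights.
      sw = np.exp(-0.5 * X @ g)              # 1 / sigma-hat
      Xw, yw = X * sw[:, None], y * sw
      b_wls = np.linalg.lstsq(Xw, yw, rcond=None)[0]

      # Step 3: heteroskedasticity-robust (sandwich) standard errors.
      u = yw - Xw @ b_wls
      bread = np.linalg.inv(Xw.T @ Xw)
      V = bread @ (Xw.T @ (Xw * (u ** 2)[:, None])) @ bread
      print(b_wls, np.sqrt(np.diag(V)))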
  6. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Tao Wang (Department of Economics, University of Victoria, Canada)
    Abstract: In this paper, we utilize the P-GMM (Cheng and Liao, 2015) moment selection procedure to select valid and relevant moments for estimating and testing forecast rationality under the flexible loss proposed by Elliott et al. (2005). We motivate the moment selection in a large-dimensional setting, explain the fundamental mechanism of the P-GMM moment selection procedure, and elucidate how to implement it in the context of forecast rationality while allowing for potentially invalid moment conditions. A set of Monte Carlo simulations is conducted to examine the finite sample performance of P-GMM estimation in integrating the information available in instruments into both estimation and testing, and a real data analysis using data from the Survey of Professional Forecasters issued by the Federal Reserve Bank of Philadelphia is presented to further illustrate the practical value of the suggested methodology. The results indicate that the P-GMM post-selection estimator of the forecaster's attitude is comparable to the oracle estimator in using the available information efficiently. The power of the accompanying rationality and symmetry tests utilizing P-GMM estimation is substantially increased by reducing the influence of uninformative instruments. When a forecast user estimates and tests for rationality of forecasts produced by others, such as the Greenbook, the P-GMM moment selection procedure can assist in achieving consistent and more efficient outcomes.
    Keywords: Forecast rationality; Moment selection; P-GMM; Relevance; Validity.
    JEL: C10 C36 C53 E17
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202307&r=ecm
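    Under the Elliott et al. (2005) quad-quad loss, the asymmetry parameter alpha has a simple linear GMM estimator from the moment condition E[v_t (1{e_t < 0} - alpha) |e_t|] = 0. A sketch with assumed instruments and simulated forecast errors follows; the P-GMM step that selects among many candidate instruments is not implemented here:

      import numpy as np

      def ekt_alpha(e, V):
          # Linear GMM for alpha under quad-quad (p = 2) loss.
          g1 = V.T @ (np.abs(e) * (e < 0))
          g2 = V.T @ np.abs(e)
          W = np.linalg.inv(V.T @ V / len(e))   # first-step weight matrix
          return (g2 @ W @ g1) / (g2 @ W @ g2)

      rng = np.random.default_rng(5)
      T = 1000
      e = rng.normal(size=T) + 0.4              # forecast errors, shifted
      V = np.column_stack([np.ones(T), rng.normal(size=T)])
      print(ekt_alpha(e, V))   # below 0.5: the bias is rationalized only
                               # by an asymmetric loss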
  7. By: Zhou, Yunzhe; Qi, Zhengling; Shi, Chengchun; Li, Lexin
    Abstract: In this article, we propose a novel pessimism-based Bayesian learning method for optimal dynamic treatment regimes in the offline setting. When the coverage condition does not hold, which is common for offline data, existing solutions produce sub-optimal policies. The pessimism principle addresses this issue by discouraging recommendation of actions that are less explored conditional on the state. However, nearly all pessimism-based methods rely on a key hyper-parameter that quantifies the degree of pessimism, and the performance of these methods can be highly sensitive to the choice of this parameter. We propose to integrate the pessimism principle with Thompson sampling and Bayesian machine learning for optimizing the degree of pessimism. We derive a credible set whose boundary uniformly lower-bounds the optimal Q-function, and thus we do not require additional tuning of the degree of pessimism. We develop a general Bayesian learning method that works with a range of models, from Bayesian linear basis models to Bayesian neural networks. We develop a computational algorithm based on variational inference, which is highly efficient and scalable. We establish the theoretical guarantees of the proposed method, and show empirically that it outperforms existing state-of-the-art solutions through both simulations and a real data example.
    JEL: C1
    Date: 2023–01–20
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118233&r=ecm
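    A one-stage toy version of the core idea, with an assumed model and simulated data: fit a Bayesian linear model for the outcome and choose the action that maximizes a lower posterior quantile of Q(s, a) rather than the posterior mean, so pessimism needs no separately tuned hyper-parameter beyond the credible level. The paper's variational algorithm and multi-stage guarantees are not reproduced:

      import numpy as np

      def pessimistic_action(X, a, y, x_new, n_actions=2, q=0.10):
          # Posterior over a linear Q, then argmax of a lower quantile.
          Phi = np.column_stack([X, np.eye(n_actions)[a]])
          A = Phi.T @ Phi + np.eye(Phi.shape[1])         # ridge prior
          mu = np.linalg.solve(A, Phi.T @ y)
          s2 = np.mean((y - Phi @ mu) ** 2)
          draws = np.random.default_rng(6).multivariate_normal(
              mu, s2 * np.linalg.inv(A), size=500)       # posterior draws
          lows = []
          for act in range(n_actions):
              phi = np.concatenate([x_new, np.eye(n_actions)[act]])
              lows.append(np.quantile(draws @ phi, q))   # pessimism
          return int(np.argmax(lows))

      rng = np.random.default_rng(7)
      n = 300
      X = rng.normal(size=(n, 2))
      a = rng.integers(0, 2, size=n)
      y = X[:, 0] + 0.5 * a + rng.normal(size=n)         # action 1 better
      print(pessimistic_action(X, a, y, x_new=np.zeros(2)))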
  8. By: Mikihito Nishi
    Abstract: This study considers tests for coefficient randomness in predictive regressions. Our focus is on how such tests are influenced by the persistence of the random coefficient. We find that when the random coefficient is stationary, or I(0), Nyblom's (1989) LM test loses its optimality (in terms of power), which is established against the alternative of an integrated, or I(1), random coefficient. We demonstrate this by constructing tests that are more powerful than the LM test when the random coefficient is stationary, although these tests are dominated in terms of power by the LM test when the random coefficient is integrated. This implies that the best test for coefficient randomness differs from context to context, and practitioners should take into account the persistence of the potentially random coefficient and choose from several tests accordingly. In particular, we show through theoretical and numerical investigations that the product of the LM test and a Wald-type test proposed in this paper is preferable when there is no prior information on the persistence of the potentially random coefficient. This point is illustrated by an empirical application using U.S. stock return data.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.04926&r=ecm
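    The Nyblom (1989) LM statistic at the center of the paper can be computed from cumulated score sums. A sketch for a single regressor with simulated data follows; the paper's alternative Wald-type test and the product-test recommendation are not shown:

      import numpy as np

      def nyblom_lm(y, x):
          # LM statistic for constancy of beta in y = beta*x + u,
          # against random-walk variation in beta.
          T = len(y)
          u = y - (x @ y / (x @ x)) * x
          f = x * u                      # scores
          S = np.cumsum(f)               # partial sums of scores
          return (S @ S) / (T * (f @ f))

      rng = np.random.default_rng(8)
      T = 500
      x = rng.normal(size=T)
      beta_rw = np.cumsum(0.05 * rng.normal(size=T))   # I(1) coefficient
      print(nyblom_lm(1.0 * x + rng.normal(size=T), x),       # stable: small
            nyblom_lm(beta_rw * x + rng.normal(size=T), x))   # I(1): large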
  9. By: Deborah Gefang; Stephen G. Hall; George S. Tavlas
    Abstract: This paper develops a simple two-stage variational Bayesian algorithm to estimate panel spatial autoregressive models in which N, the number of cross-sectional units, is much larger than T, the number of time periods, without restricting the spatial effects through a predetermined weighting matrix. We use Dirichlet-Laplace priors for variable selection and parameter shrinkage. Without imposing any a priori structure on the spatial linkages between variables, we let the data speak for themselves. Extensive Monte Carlo studies show that our method is super-fast and that the estimated spatial weights matrices strongly resemble the true ones. As an illustration, we investigate the spatial interdependence of European Union regional gross value added growth rates. In addition to a clear pattern of predominant country clusters, we uncover a number of important between-country spatial linkages that have yet to be documented in the literature. This new procedure for estimating spatial effects is of particular relevance for researchers and policy makers alike.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03740&r=ecm
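    As a rough stand-in for the paper's two-stage variational Bayes with Dirichlet-Laplace shrinkage, the sketch below estimates each row of an unrestricted spatial weights matrix by lasso, which is feasible when N >> T; it ignores the simultaneity that the paper's Bayesian treatment handles, so it is illustrative only and all data choices are assumptions:

      import numpy as np
      from sklearn.linear_model import LassoCV

      def estimate_spatial_weights(Y):
          # Row-by-row shrinkage fit of an N x N weights matrix
          # from a T x N panel with T << N.
          T, N = Y.shape
          W = np.zeros((N, N))
          for i in range(N):
              others = np.delete(np.arange(N), i)
              W[i, others] = LassoCV(cv=3).fit(Y[:, others], Y[:, i]).coef_
          return W

      rng = np.random.default_rng(9)
      T, N = 20, 50
      W_true = np.zeros((N, N))
      W_true[np.arange(1, N), np.arange(N - 1)] = 0.4     # sparse links
      Y = rng.normal(size=(T, N)) @ np.linalg.inv(np.eye(N) - W_true).T
      W_hat = estimate_spatial_weights(Y)
      print(round(W_hat[1, 0], 2))        # roughly recovers the 0.4 link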
  10. By: John M. Abowd; Kevin L. McKinney
    Abstract: We study mixed-effects methods for estimating equations containing person and firm effects. In economics such models are usually estimated using fixed-effects methods. Recent enhancements to those fixed-effects methods include corrections to the bias in estimating the covariance matrix of the person and firm effects, which we also consider.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.15445&r=ecm
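    The fixed-effects versus mixed-effects contrast the paper studies can be seen in miniature: mixed-effects (BLUP) estimates of person and firm effects are shrunken versions of the fixed-effects dummies, which a damped sparse least-squares solve approximates. A single damping factor stands in for properly estimated variance components, and the data are simulated:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(10)
      n_w, n_f, n_obs = 200, 30, 2000
      wid = rng.integers(0, n_w, n_obs)                 # worker ids
      fid = rng.integers(0, n_f, n_obs)                 # firm ids
      theta, psi = rng.normal(0, 1, n_w), rng.normal(0, 0.5, n_f)
      y = theta[wid] + psi[fid] + rng.normal(0, 1, n_obs)

      rows = np.arange(n_obs)
      D = sp.csr_matrix((np.ones(n_obs), (rows, wid)), shape=(n_obs, n_w))
      F = sp.csr_matrix((np.ones(n_obs), (rows, fid)), shape=(n_obs, n_f))
      X = sp.hstack([D, F]).tocsr()                     # dummy design

      fe = lsqr(X, y)[0]              # fixed effects (AKM-style)
      me = lsqr(X, y, damp=1.0)[0]    # ridge = crude mixed-effects BLUP
      print(np.corrcoef(fe[:n_w], theta)[0, 1],
            np.corrcoef(me[:n_w], theta)[0, 1])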
  11. By: Wojciech "Victor" Fulmyk
    Abstract: I introduce a novel algorithm and accompanying Python library, named mlcausality, designed for the identification of nonlinear Granger causal relationships. This novel algorithm uses a flexible plug-in architecture that enables researchers to employ any nonlinear regressor as the base prediction model. Subsequently, I conduct a comprehensive performance analysis of mlcausality when the prediction regressor is the kernel ridge regressor with the radial basis function kernel. The results demonstrate that mlcausality employing kernel ridge regression achieves competitive AUC scores across a diverse set of simulated data. Furthermore, mlcausality with kernel ridge regression yields more finely calibrated $p$-values in comparison to rival algorithms. This enhancement enables mlcausality to attain superior accuracy scores when using intuitive $p$-value-based thresholding criteria. Finally, mlcausality with the kernel ridge regression exhibits significantly reduced computation times compared to existing nonlinear Granger causality algorithms. In fact, in numerous instances, this innovative approach achieves superior solutions within computational timeframes that are an order of magnitude shorter than those required by competing algorithms.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.05107&r=ecm
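    The core comparison is easy to reproduce generically with scikit-learn: predict y from its own lags, then from its own lags plus lags of x, and compare out-of-sample errors. This sketch is not the mlcausality API; its plug-in architecture, p-value calibration, and testing procedure are not shown:

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      def lagmat(v, lags):
          return np.column_stack([v[lags - j:len(v) - j]
                                  for j in range(1, lags + 1)])

      def granger_kr(x, y, lags=2):
          # Restricted (own lags) vs unrestricted (+ lags of x) fits.
          target = y[lags:]
          Zr = lagmat(y, lags)
          Zu = np.hstack([Zr, lagmat(x, lags)])
          split = len(target) // 2
          mses = []
          for Z in (Zr, Zu):
              m = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
              m.fit(Z[:split], target[:split])
              mses.append(np.mean((target[split:] - m.predict(Z[split:])) ** 2))
          return mses

      rng = np.random.default_rng(11)
      T = 600
      x = rng.normal(size=T)
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = 0.3 * y[t - 1] + 0.8 * np.tanh(x[t - 1]) + 0.5 * rng.normal()
      print(granger_kr(x, y))      # unrestricted MSE is clearly lower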
  12. By: Philippe Andrade; Filippo Ferroni; Leonardo Melosi
    Abstract: We exploit inequality restrictions on higher-order moments of the distribution of structural shocks to sharpen their identification. We show that these constraints can be treated as necessary conditions and used to shrink the set of admissible rotations. We illustrate the usefulness of this approach by showing, through simulations, how it can dramatically improve the identification of monetary policy shocks when combined with widely used sign-restriction schemes. We then apply our methodology to two empirical questions: the effects of monetary policy shocks in the U.S. and the effects of sovereign bond spread shocks in the euro area. In both cases, using higher-moment restrictions significantly sharpens identification. After a shock to euro area government bond spreads, monetary policy quickly turns expansionary, corporate borrowing conditions worsen on impact, the real economy and the labor market of the euro area contract appreciably, and returns on German government bonds fall, likely reflecting investors' flight to quality.
    Keywords: shock identification; Skewness; Kurtosis; VAR; Sign restrictions; monetary shocks; Euro area
    JEL: C32 E27 E32
    Date: 2023–08–18
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:96666&r=ecm
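    The mechanics of using higher moments as necessary conditions can be sketched in a bivariate example: draw random rotations of a Cholesky factor, keep only those satisfying the sign restrictions, and further require positive skewness of the first implied shock, which shrinks the admissible set. The data-generating choices below are assumptions for illustration:

      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(12)
      n = 5000
      e = np.column_stack([rng.exponential(size=n) - 1,    # skewed shock
                           rng.normal(size=n)])
      u = e @ np.array([[1.0, 0.5], [0.3, 1.0]]).T         # reduced form

      P = np.linalg.cholesky(np.cov(u.T))
      kept = []
      for _ in range(2000):
          Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))     # random rotation
          B = P @ Q                                        # candidate impact
          shocks = u @ np.linalg.inv(B).T
          if B[0, 0] > 0 and B[1, 0] > 0:                  # sign restrictions
              if skew(shocks[:, 0]) > 0.5:                 # higher-moment check
                  kept.append(B[1, 0] / B[0, 0])
      print(len(kept), np.percentile(kept, [5, 50, 95]))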
  13. By: Liang Chen; Yao Luo
    Abstract: Network effects, i.e., the dependence of an agent's utility on other agents' choices, appear in many contracting situations. Assessing them empirically faces two challenges: an endogeneity problem in contract choice and a reflection problem in network effects. This paper proposes a nonparametric approach that tackles both challenges by exploiting restrictions from both the demand and supply sides. We illustrate our methodology in the yellow pages advertising industry. Using advertising purchases and nonlinear price schedules from seven directories in Toronto, we find positive network effects, which account for a substantial portion of the publisher's profit and businesses' surpluses. Finally, we conduct counterfactuals to assess the overall and distributional welfare effects of the nonlinear pricing scheme relative to an alternative linear pricing scheme, with and without network effects.
    Keywords: Identification, Asymmetric Information, Network Effects, Nonlinear Pricing
    JEL: L11 L12 L13
    Date: 2023–09–25
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-758&r=ecm
  14. By: Xiyuan Ren; Joseph Y. J. Chow
    Abstract: Estimating agent-specific taste heterogeneity with a large information and communication technology (ICT) dataset requires both model flexibility and computational efficiency. We propose a group-level agent-based mixed (GLAM) logit approach that is estimated with inverse optimization (IO) and group-level market shares. The model is theoretically consistent with the RUM framework, while the estimation method is a nonparametric approach that fits market-level datasets, overcoming the limitations of existing approaches. A case study of New York statewide travel mode choice is conducted with a synthetic population dataset provided by Replica Inc., which contains the mode choices of 19.53 million residents on two typical weekdays, one in Fall 2019 and another in Fall 2021. Individual mode choices are grouped into market-level shares per census block-group OD pair and four population segments, resulting in 120,740 group-level agents. We calibrate the GLAM logit model on the 2019 dataset and compare it to several benchmark models: mixed logit (MXL), conditional mixed logit (CMXL), and individual parameter logit (IPL). The results show that the empirical taste distribution estimated by GLAM logit can be either unimodal or multimodal, which is infeasible for MXL/CMXL and hard to achieve in IPL. The GLAM logit model outperforms the benchmark models on the 2021 dataset, improving overall accuracy from 82.35% to 89.04% and the pseudo R-squared from 0.4165 to 0.5788. Moreover, the value-of-time (VOT) and mode preferences retrieved from GLAM logit align with our empirical knowledge (e.g., the VOT of the NotLowIncome population in NYC is $28.05/hour; public transit and walking are preferred in NYC). The agent-specific taste parameters are essential for the policymaking of statewide transportation projects.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.13159&r=ecm
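    The group-level logic can be illustrated with the classical share inversion: with logit shares per group, log share differences recover group-specific utilities exactly, and a per-group regression on mode attributes returns group-level tastes. This is a noiseless toy; the paper's inverse-optimization estimator and the k-modal nonparametric step are not reproduced:

      import numpy as np

      rng = np.random.default_rng(14)
      G, J, K = 120, 4, 2                   # groups, modes, attributes
      Xattr = rng.normal(size=(J, K))       # e.g., travel time and cost
      beta_g = rng.normal(size=(G, K)) + np.array([1.0, -1.0])

      V = beta_g @ Xattr.T                                  # utilities
      S = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)  # logit shares

      delta = np.log(S) - np.log(S[:, [0]])     # inversion vs base mode 0
      Xd = (Xattr - Xattr[0])[1:]               # differenced attributes
      beta_hat = np.linalg.lstsq(Xd, delta[:, 1:].T, rcond=None)[0].T
      print(np.abs(beta_hat - beta_g).max())    # essentially zero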
  15. By: Vod Vilfort; Whitney Zhang
    Abstract: A growing literature measures "belief effects" -- that is, the effect of a change in beliefs on one's actions -- using information provision experiments, where the provision of information is used as an instrument for beliefs. We show that in passive control design experiments with heterogeneous belief effects, using information provision as an instrument may not produce a positive weighted average of belief effects. We propose a "mover instrumental variables" (MIV) framework and estimator that attains a positive weighted average of belief effects by inferring the direction of belief updating using the prior. Relative to our preferred MIV, commonly used specifications in the literature produce a form of MIV that overweights individuals with larger prior errors; additionally, some specifications may require additional assumptions to generate positive weights.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.04793&r=ecm
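    For reference, the kind of specification the paper scrutinizes looks like the following 2SLS sketch with simulated beliefs and heterogeneous effects; the paper's mover instrumental variables reweighting, which restores positive weights, is not implemented here:

      import numpy as np

      rng = np.random.default_rng(15)
      n = 4000
      prior = rng.normal(size=n)
      Z = rng.integers(0, 2, size=n)                  # info provision
      belief = prior + Z * 0.6 * (0.0 - prior)        # update toward signal
      theta = 1.0 + 0.5 * rng.normal(size=n)          # heterogeneous effects
      y = theta * belief + rng.normal(size=n)

      # 2SLS of the action on posterior beliefs, instrumenting with the
      # treatment indicator interacted with the prior.
      inst = np.column_stack([np.ones(n), Z, Z * prior])
      X = np.column_stack([np.ones(n), belief])
      Xhat = inst @ np.linalg.lstsq(inst, X, rcond=None)[0]   # first stage
      b = np.linalg.lstsq(Xhat, y, rcond=None)[0]
      print(b[1])     # some weighted average of the individual thetas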
  16. By: TingXuan Han (Tsinghua University); Luxi Zhang (University of Macau); Xinshu Zhao (University of Macau); Ke Deng (Tsinghua University)
    Abstract: The procedure for establishing mediation, i.e., determining that an independent variable X affects a dependent variable Y through some mediator M, has been under debate. The classic causal-steps approach requires that a "total effect" be significant, now also known as statistically acknowledged. It has been shown that the total-effect test can erroneously reject competitive mediation and is superfluous for establishing complementary mediation. Little is known about the last type, indirect-only mediation, aka "full" or "complete" mediation, in which the indirect (ab) path passes the statistical partition test while the direct-and-remainder (d) path fails. This study 1) proves that the total-effect test can erroneously reject indirect-only mediation, including both sub-types, assuming least squares estimation (LSE) with the F-test or the Sobel test; 2) provides a simulation that duplicates the mathematical proofs and extends the conclusion to the LAD-Z test; 3) provides two real-data examples, one for each sub-type, to illustrate the mathematical conclusion; 4) in view of the mathematical findings, proposes revisiting the concepts, theories, and techniques of mediation analysis and other causal dissection analyses, and showcases a more comprehensive alternative, process-and-product analysis (PAPA).
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.08910&r=ecm
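    The paper's central claim is easy to see in simulation: with a zero direct path and a moderate indirect path, the Sobel test on the ab path can be clearly significant while the total-effect test fails to reject, so requiring a significant total effect would erroneously rule out indirect-only mediation. The effect sizes below are assumptions chosen to make the point visible:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(16)
      n = 200
      x = rng.normal(size=n)
      m = 0.35 * x + rng.normal(size=n)        # a path
      y = 0.35 * m + rng.normal(size=n)        # b path; direct path = 0

      tot = stats.linregress(x, y)             # total-effect test
      apath = stats.linregress(x, m)
      X = np.column_stack([np.ones(n), m, x])  # y ~ m + x for the b path
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      resid = y - X @ beta
      cov = resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)
      b, se_b = beta[1], np.sqrt(cov[1, 1])

      z = apath.slope * b / np.hypot(apath.slope * se_b, b * apath.stderr)
      p_sobel = 2 * stats.norm.sf(abs(z))      # Sobel test on the ab path
      print(f"total-effect p = {tot.pvalue:.3f}, Sobel p = {p_sobel:.4f}")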
  17. By: Daniel J. Benjamin; Kristen Cooper; Ori Heffetz; Miles S. Kimball; Jiannan Zhou
    Abstract: Analyses of self-reported well-being (SWB) survey data may be confounded if people use response scales differently. We use calibration questions, designed to have the same objective answer across respondents, to measure dimensional (i.e., specific to an SWB dimension) and general (i.e., common across questions) scale-use heterogeneity. In a sample of ~3,350 MTurkers, we find substantial heterogeneity of both kinds, correlated with demographics. We develop a theoretical framework and econometric approaches to quantify and adjust for this heterogeneity. We apply our new estimators in several standard SWB applications. Adjusting for general scale-use heterogeneity changes results in some cases.
    JEL: C83 D60 D63 D90 D91 I14 I31
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31728&r=ecm
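    A toy version of the calibration idea: two calibration questions with a known common answer identify a per-respondent affine scale-use correction, which can then be inverted to adjust the SWB report. The paper's framework is richer, and all numbers below are assumptions:

      import numpy as np

      rng = np.random.default_rng(17)
      n = 1000
      shift = rng.normal(0, 0.8, n)                 # scale-use location
      stretch = np.exp(0.3 * rng.normal(size=n))    # scale-use intensity
      true_swb = rng.normal(5, 1, n)
      swb_resp = shift + stretch * true_swb         # reported SWB

      truth = np.array([3.0, 7.0])                  # objective answers
      resp = (shift[:, None] + stretch[:, None] * truth
              + 0.2 * rng.normal(size=(n, 2)))      # calibration responses

      # Invert each respondent's estimated affine scale use.
      slope = (resp[:, 1] - resp[:, 0]) / (truth[1] - truth[0])
      icept = resp[:, 0] - slope * truth[0]
      swb_adj = (swb_resp - icept) / slope
      print(np.corrcoef(swb_resp, true_swb)[0, 1],
            np.corrcoef(swb_adj, true_swb)[0, 1])   # adjusted is higher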

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.