nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒10‒30
eleven papers chosen by
Sune Karlsson, Örebro universitet


  1. Trimmed Mean Group Estimation of Average Treatment Effects in Ultra Short T Panels under Correlated Heterogeneity By Pesaran, M. H.; Yang, L.
  2. Least squares estimation in nonlinear cohort panels with learning from experience By Alexander Mayer; Michael Massmann
  3. Identification analysis in models with unrestricted latent variables: Fixed effects and initial conditions By Andrew Chesher; Adam Rosen; Yuanqi Zhang
  4. The Second-order Bias and Mean Squared Error of Quantile Regression Estimators By Tae-Hwy Lee; Aman Ullah; He Wang
  5. Tests of no cross-sectional error dependence in panel quantile regressions By Demetrescu, Matei; Hosseinkouchack, Mehdi; Rodrigues, Paulo M. M.
  6. Uniform Priors for Impulse Responses By Jonas E. Arias; Juan F. Rubio-Ramirez; Daniel F. Waggoner
  7. Bayesian correction for missing rich using a Pareto II tail with unknown threshold: Combining EU-SILC and WID data By Mathias Silva; Michel Lubrano
  8. Off-policy confidence interval estimation with confounded Markov decision process By Shi, Chengchun; Zhu, Jin; Shen, Ye; Luo, Shikai; Zhu, Hongtu; Song, Rui
  9. A detection analysis for temporal memory patterns at different time-scales By Fabio Vanni; David Lambert
  10. Optimal Conditional Inference in Adaptive Experiments By Jiafeng Chen; Isaiah Andrews
  11. Estimating and Applying Autoregression Models via Their Eigensystem Representation By Leo Krippner

  1. By: Pesaran, M. H.; Yang, L.
    Abstract: Under correlated heterogeneity, the commonly used two-way fixed effects estimator is biased and can lead to misleading inference. This paper proposes a new trimmed mean group (TMG) estimator which is consistent at the irregular rate of n^(1/3) even if the time dimension of the panel is as small as the number of its regressors. Extensions to panels with time effects are provided, and a Hausman-type test of correlated heterogeneity is proposed. Small sample properties of the TMG estimator (with and without time effects) are investigated by Monte Carlo experiments and shown to be satisfactory, and the estimator performs better than other trimmed estimators proposed in the literature. The proposed test of correlated heterogeneity is also shown to have the correct size and satisfactory power. The utility of the TMG approach is illustrated with an empirical application. (An illustrative sketch follows at the end of this entry.)
    Keywords: Correlated heterogeneity, irregular estimators, two-way fixed effects, FE-TE, tests of correlated heterogeneity, calorie demand
    JEL: C21 C23
    Date: 2023–10–16
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2364&r=ecm
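    Sketch: The mean-group idea behind the TMG estimator can be illustrated with a minimal Python sketch: estimate unit-specific slopes by OLS and average them, dropping units whose design matrices are nearly singular, as easily happens in ultra short T panels. The trimming rule and threshold below are illustrative assumptions, not the paper's criterion.
      import numpy as np

      def trimmed_mean_group(y, X, trim_quantile=0.10):
          """Generic trimmed mean-group estimator (illustrative trimming rule only).

          y: (N, T) outcomes; X: (N, T, k) regressors. Units whose X'X determinant
          falls in the lowest `trim_quantile` are dropped before averaging the
          unit-specific OLS slopes.
          """
          N, T, k = X.shape
          betas, dets = [], []
          for i in range(N):
              Xi, yi = X[i], y[i]
              XtX = Xi.T @ Xi
              dets.append(np.linalg.det(XtX))
              betas.append(np.linalg.pinv(XtX) @ Xi.T @ yi)
          betas, dets = np.array(betas), np.array(dets)
          keep = dets > np.quantile(dets, trim_quantile)   # trim near-singular units
          return betas[keep].mean(axis=0)

      # Small simulation with correlated heterogeneity: unit slopes depend on the unit's x-means.
      rng = np.random.default_rng(0)
      N, T, k = 500, 3, 2
      X = rng.normal(size=(N, T, k))
      beta_i = 1.0 + 0.5 * X.mean(axis=1)                  # heterogeneous slopes, correlated with X
      y = np.einsum('ntk,nk->nt', X, beta_i) + rng.normal(size=(N, T))
      print(trimmed_mean_group(y, X))                      # close to the average slope of 1.0 for each regressor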
  2. By: Alexander Mayer; Michael Massmann
    Abstract: We discuss techniques of estimation and inference for nonlinear cohort panels with learning from experience, showing, inter alia, the consistency and asymptotic normality of the nonlinear least squares estimator employed in the seminal paper by Malmendier and Nagel (2016). Potential pitfalls for hypothesis testing are identified and solutions proposed. Monte Carlo simulations verify the properties of the estimator and corresponding test statistics in finite samples, while an application to a panel of survey expectations demonstrates the usefulness of the theory developed.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.08982&r=ecm
  3. By: Andrew Chesher; Adam Rosen; Yuanqi Zhang
    Abstract: Many structural econometric models include latent variables on whose probability distributions one may wish to place minimal restrictions. Leading examples in panel data models are individual-specific variables sometimes treated as “fixed effects” and, in dynamic models, initial conditions. This paper presents a generally applicable method for characterizing sharp identified sets when models place no restrictions on the probability distribution of certain latent variables and no restrictions on their covariation with other variables. Endogenous explanatory variables can be easily accommodated. Examples of application to some static and dynamic binary, ordered and multiple discrete choice panel data models are presented.
    Date: 2023–10–11
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:20/23&r=ecm
  4. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Aman Ullah (Department of Economics, University of California, Riverside); He Wang (University of International Business and Economics, Beijing)
    Abstract: The finite sample theory using higher order asymptotics provides better approximations of the bias and mean squared error (MSE) for a class of estimators. Rilstone, Srivastava and Ullah (1996) provided the second-order bias results of conditional mean regression. This paper develops new analytical results on the second-order bias up to order O(N⁻¹) and MSE up to order O(N⁻²) of the conditional quantile regression estimators. First, we provide the general results on the second-order bias and MSE of conditional quantile estimators. The second-order bias result enables an improved bias correction and thus improved quantile estimation. In particular, we show that the second-order bias is much larger towards the tails of the conditional density than near the median, and therefore the benefit of the second-order bias correction is greater when we are interested in the deeper tail quantiles, e.g., for the study of financial risk management. The higher-order MSE result for the quantile estimation also enables us to better understand the sources of estimation uncertainty. Next, we consider three special cases of the general results, for the unconditional quantile estimation, for the conditional quantile regression with a binary covariate, and for the instrumental variable quantile regression (IVQR). For each of these special cases, we provide the second-order bias and MSE to illustrate their behavior, which depends on certain parameters and distributional characteristics. The Monte Carlo simulation indicates that the bias is larger at the extreme low and high tail quantiles, and the second-order bias corrected estimator behaves better than the uncorrected ones in both conditional and unconditional quantile estimation. The second-order bias corrected estimators are numerically much closer to the true parameters of the data generating processes. Since the higher-order bias and MSE decrease as the sample size increases or as the regression error variance decreases, the benefits of the finite sample theory are more apparent when there are larger sampling errors in estimation. The empirical application of the theory to the predictive quantile regression model in finance highlights the benefit of the proposed second-order bias reduction. (An illustrative sketch follows at the end of this entry.)
    Keywords: Check-loss, Dirac delta function, Quantile regression, Second-order bias, MSE.
    JEL: C13 C33 C52
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202313&r=ecm
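    Sketch: The qualitative pattern reported in the abstract, that quantile estimators are more biased in the tails than at the median, can be checked with a small Monte Carlo on unconditional sample quantiles. This does not implement the paper's analytical O(N⁻¹) bias expressions or the proposed correction; sample sizes and quantile levels are illustrative choices.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      N, reps = 50, 20_000
      taus = [0.05, 0.50, 0.95]
      true_q = {tau: norm.ppf(tau) for tau in taus}        # true standard-normal quantiles

      bias = {tau: 0.0 for tau in taus}
      for _ in range(reps):
          x = rng.normal(size=N)
          for tau in taus:
              bias[tau] += np.quantile(x, tau) - true_q[tau]

      for tau in taus:
          print(f"tau={tau:4.2f}  Monte Carlo bias of the sample quantile: {bias[tau] / reps:+.4f}")
      # The absolute bias is noticeably larger at tau=0.05 and tau=0.95 than at the median.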
  5. By: Demetrescu, Matei; Hosseinkouchack, Mehdi; Rodrigues, Paulo M. M.
    Abstract: This paper argues that cross-sectional dependence (CSD) is an indicator of misspecification in panel quantile regression (QR) rather than just a nuisance that may be accounted for with panel-robust standard errors. This motivates the development of a novel test for panel QR misspecification based on detecting CSD. The test possesses a standard normal limiting distribution under joint N, T asymptotics with restrictions on the relative rate at which N and T go to infinity. A finite-sample correction improves the applicability of the test for panels with larger N. An empirical application to housing markets illustrates the use of the proposed cross-sectional dependence test. (An illustrative sketch follows at the end of this entry.)
    Keywords: Cross-unit correlation, conditional quantile, factor model, exogeneity
    JEL: C12 C23
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:1041&r=ecm
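    Sketch: The basic idea can be illustrated by fitting a quantile regression unit by unit and measuring the cross-sectional correlation of quantile residual indicators with a CD-type statistic in the spirit of Pesaran's cross-section dependence test. This is not the authors' test, which has its own centering, finite-sample correction, and joint N, T asymptotics; the data-generating process with an omitted common factor is an illustrative assumption.
      import numpy as np
      from scipy.stats import norm
      from statsmodels.regression.quantile_regression import QuantReg

      def cd_statistic(u):
          """Pesaran-style CD statistic from an (N, T) matrix of residual indicators."""
          N, T = u.shape
          R = np.corrcoef(u)                               # pairwise cross-unit correlations over time
          iu = np.triu_indices(N, k=1)
          return np.sqrt(2 * T / (N * (N - 1))) * R[iu].sum()

      rng = np.random.default_rng(2)
      N, T, tau = 20, 200, 0.25
      x = rng.normal(size=(N, T))
      common = rng.normal(size=T)                          # omitted common factor -> cross-sectional dependence
      y = 1.0 + 0.5 * x + 0.8 * common + rng.normal(size=(N, T))

      ind = np.empty((N, T))
      for i in range(N):
          Xi = np.column_stack([np.ones(T), x[i]])
          fit = QuantReg(y[i], Xi).fit(q=tau)
          ind[i] = (y[i] <= Xi @ fit.params) - tau         # quantile residual indicator, mean ~0 if well specified

      cd = cd_statistic(ind)
      print(f"CD = {cd:.2f}, two-sided p-value ~ {2 * norm.sf(abs(cd)):.3f}")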
  6. By: Jonas E. Arias; Juan F. Rubio-Ramirez; Daniel F. Waggoner
    Abstract: There has been a call for caution when using the conventional method for Bayesian inference in set-identified structural vector autoregressions on the grounds that the uniform prior over the set of orthogonal matrices could be nonuniform for individual impulse responses or other quantities of interest. This paper challenges this call by formally showing that, when the focus is on joint inference, the uniform prior over the set of orthogonal matrices is not only sufficient but also necessary for inference based on a uniform joint prior distribution over the identified set for the vector of impulse responses. In addition, we show how to use the conventional method to conduct inference based on a uniform joint prior distribution for the vector of impulse responses. We generalize our results to vectors of objects of interest beyond impulse responses. (An illustrative sketch follows at the end of this entry.)
    Keywords: Bayesian; SVARs; uniform prior; sign restrictions
    JEL: C11 C33 E47
    Date: 2023–09–28
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:96956&r=ecm
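    Sketch: The "uniform prior over the set of orthogonal matrices" refers to the Haar measure; a standard way to sample from it is the QR decomposition of a matrix of independent standard normal draws with a sign normalization. The sketch below shows only that sampling step, not the full set-identified SVAR algorithm; the Cholesky mapping mentioned in the closing comment is the usual convention, stated here as an assumption.
      import numpy as np

      def draw_haar_orthogonal(n, rng):
          """Draw an n x n orthogonal matrix uniformly (Haar measure) via QR of a Gaussian matrix."""
          Z = rng.normal(size=(n, n))
          Q, R = np.linalg.qr(Z)
          return Q * np.sign(np.diag(R))     # normalize column signs so the draw is Haar-uniform

      rng = np.random.default_rng(3)
      Q = draw_haar_orthogonal(3, rng)
      print(np.round(Q @ Q.T, 12))           # identity up to rounding: Q is orthogonal

      # In a set-identified SVAR, each draw Q maps reduced-form parameters into one candidate
      # structural model, e.g. impact responses B_chol @ Q for a Cholesky factor B_chol of the
      # reduced-form error covariance; sign and other restrictions are then checked draw by draw.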
  7. By: Mathias Silva (ENS Lyon, France and Aix Marseille Univ, CNRS, AMSE, Marseille, France); Michel Lubrano (Aix Marseille Univ, CNRS, AMSE, Marseille, France)
    Abstract: Survey data are known for under-reporting rich households while providing extensive information on contextual variables. Tax data provide a better representation of top incomes at the expense of lacking any contextual variables. So the literature has developed several methods to combine the two sources of information. For Pareto imputation, the question is how to choose the Pareto model for the right tail of the income distribution. The Pareto I model has the advantage of simplicity. But Jenkins (2017) promoted the use of the Pareto II for its nicer properties, reviewing three different approaches to correct for missing top incomes. In this paper, we propose a Bayesian approach to combine tax and survey data, using a Pareto II tail. We build on the extreme value literature to develop a compound model where the lower part of the income distribution is approximated with a Bernstein polynomial truncated density estimate while the upper part is represented by a Pareto II. This provides a way to estimate the threshold at which the Pareto II starts. Then WID tax data are used to build prior information for the Pareto coefficient in the form of a gamma prior density to be combined with the likelihood function. We apply the methodology to the EU-SILC data set to decompose the Gini index. Finally, we analyse the impact of top income correction on the Growth Incidence Curve between 2008 and 2018 for a group of 23 European countries. (An illustrative sketch follows at the end of this entry.)
    Keywords: Bayesian inference, Pareto II, profile likelihood, Bernstein density estimation, top income correction, EU-SILC
    JEL: C11 D31 D63 I31
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:2320&r=ecm
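    Sketch: One ingredient of the approach, combining a gamma prior for the Pareto II (Lomax) shape with the likelihood of incomes above a threshold, can be sketched with a grid approximation of the posterior. This is not the authors' full compound model: there is no Bernstein polynomial lower part, the threshold is taken as given, the scale is fixed, and the prior hyperparameters are illustrative stand-ins for the WID-based prior.
      import numpy as np
      from scipy.stats import lomax, gamma

      rng = np.random.default_rng(4)

      # Simulated incomes above the threshold, measured as exceedances over it (Pareto II / Lomax).
      scale_true, shape_true = 30_000.0, 2.5
      exceedances = lomax.rvs(shape_true, scale=scale_true, size=2_000, random_state=rng)

      # Gamma prior for the Pareto II shape (hypothetical hyperparameters; prior mean a/b = 2.5).
      a_prior, b_prior = 5.0, 2.0
      grid = np.linspace(0.5, 6.0, 2_000)
      dg = grid[1] - grid[0]

      # Posterior over the shape by grid approximation, with the scale treated as known.
      loglik = np.array([lomax.logpdf(exceedances, c, scale=scale_true).sum() for c in grid])
      logpost = loglik + gamma.logpdf(grid, a_prior, scale=1.0 / b_prior)
      post = np.exp(logpost - logpost.max())
      post /= post.sum() * dg                              # normalize on the grid

      post_mean = (grid * post).sum() * dg
      print(f"posterior mean of the Pareto II shape: {post_mean:.2f} (true value {shape_true})")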
  8. By: Shi, Chengchun; Zhu, Jin; Shen, Ye; Luo, Shikai; Zhu, Hongtu; Song, Rui
    Abstract: This article is concerned with constructing a confidence interval for a target policy’s value offline based on pre-collected observational data in infinite horizon settings. Most of the existing works assume no unmeasured variables exist that confound the observed actions. This assumption, however, is likely to be violated in real applications such as healthcare and technological industries. In this article, we show that with some auxiliary variables that mediate the effect of actions on the system dynamics, the target policy’s value is identifiable in a confounded Markov decision process. Based on this result, we develop an efficient off-policy value estimator that is robust to potential model misspecification and provide rigorous uncertainty quantification. Our method is justified by theoretical results and by simulated and real datasets obtained from ridesharing companies. A Python implementation of the proposed procedure is available at https://github.com/Mamba413/cope. (An illustrative sketch follows at the end of this entry.)
    Keywords: reinforcement learning; off-policy evaluation; statistical inference; unmeasured confounders; infinite horizons; ridesourcing platforms
    JEL: C1
    Date: 2022–10–05
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:115774&r=ecm
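    Sketch: For readers new to off-policy evaluation, the object being estimated (a target policy's long-run value from data logged under a different behaviour policy) can be illustrated with a plain model-based estimator in a small tabular MDP. This generic sketch ignores the paper's central issues of unmeasured confounding, mediators, and confidence-interval construction; for the authors' procedure see the linked repository. All names and parameter values below are illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      nS, nA, gamma = 3, 2, 0.9

      # A small ground-truth MDP: transition probabilities P[s, a, s'] and mean rewards R[s, a].
      P = rng.dirichlet(np.ones(nS), size=(nS, nA))
      R = rng.normal(size=(nS, nA))

      # A uniform behaviour policy logs the data; the target policy always plays action 0.
      behaviour = np.full((nS, nA), 1.0 / nA)
      target = np.zeros((nS, nA))
      target[:, 0] = 1.0

      # Generate one long offline trajectory under the behaviour policy.
      T, s, data = 50_000, 0, []
      for _ in range(T):
          a = rng.choice(nA, p=behaviour[s])
          s_next = rng.choice(nS, p=P[s, a])
          data.append((s, a, R[s, a] + rng.normal(scale=0.1), s_next))
          s = s_next

      # Model-based off-policy evaluation: estimate P and R from the log, then solve for the value.
      counts, rew_sum = np.zeros((nS, nA, nS)), np.zeros((nS, nA))
      for s, a, r, s_next in data:
          counts[s, a, s_next] += 1
          rew_sum[s, a] += r
      n_sa = np.maximum(counts.sum(axis=2), 1)
      P_hat, R_hat = counts / n_sa[:, :, None], rew_sum / n_sa

      P_pi = np.einsum('sa,sat->st', target, P_hat)        # state transitions under the target policy
      r_pi = (target * R_hat).sum(axis=1)
      v_hat = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
      print("estimated target-policy values by state:", np.round(v_hat, 3))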
  9. By: Fabio Vanni; David Lambert
    Abstract: This paper introduces a novel methodology that utilizes latency to unveil time-series dependence patterns. A customized statistical test detects memory dependence in event sequences by analyzing their inter-event time distributions. Synthetic experiments based on the renewal-aging property assess the impact of observer latency on the renewal property. Our test uncovers memory patterns across diverse time scales, emphasizing the event sequence's probability structure beyond correlations. The time-series analysis produces a statistical test and graphical plots which help detect dependence patterns among events at different time scales, if any. Furthermore, the test evaluates the renewal assumption through aging experiments, offering valuable applications in time-series analysis within economics. (An illustrative sketch follows at the end of this entry.)
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.12034&r=ecm
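    Sketch: The role of observer latency can be illustrated by comparing ordinary inter-event times with "aged" waiting times measured from random observation instants to the next event: for a memoryless (Poisson) event sequence the two distributions coincide, while non-exponential or dependent inter-event times make them differ. The two-sample Kolmogorov-Smirnov comparison below is a stand-in for the authors' customized test, and the simulated processes are illustrative.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(6)

      def aged_waiting_times(event_times, n_obs, rng):
          """Waiting time from a random observation instant (observer latency) to the next event."""
          t_obs = rng.uniform(0, event_times[-1], size=n_obs)
          idx = np.searchsorted(event_times, t_obs)
          return event_times[idx] - t_obs

      iet_exp = rng.exponential(1.0, size=100_000)         # memoryless: Poisson event sequence
      iet_par = rng.pareto(2.5, size=100_000) + 1.0        # heavy-tailed inter-event times

      for name, iet in [("exponential", iet_exp), ("pareto", iet_par)]:
          events = np.cumsum(iet)
          aged = aged_waiting_times(events, 20_000, rng)
          stat, p = ks_2samp(iet, aged)
          print(f"{name:12s} KS statistic {stat:.3f}  p-value {p:.3g}")
      # Aged waiting times are close to the ordinary inter-event times only in the memoryless case.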
  10. By: Jiafeng Chen; Isaiah Andrews
    Abstract: We study batched bandit experiments and consider the problem of inference conditional on the realized stopping time, assignment probabilities, and target parameter, where all of these may be chosen adaptively using information up to the last batch of the experiment. Absent further restrictions on the experiment, we show that inference using only the results of the last batch is optimal. When the adaptive aspects of the experiment are known to be location-invariant, in the sense that they are unchanged when we shift all batch-arm means by a constant, we show that there is additional information in the data, captured by one additional linear function of the batch-arm means. In the more restrictive case where the stopping time, assignment probabilities, and target parameter are known to depend on the data only through a collection of polyhedral events, we derive computationally tractable and optimal conditional inference procedures. (An illustrative sketch follows at the end of this entry.)
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.12162&r=ecm
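    Sketch: The headline result, that without further restrictions one should use only the last batch, is simple to operationalize: conditional on the realized adaptive choices, the last batch is a fresh sample, so a standard interval computed on that batch alone is valid. The sketch below is a generic illustration of last-batch inference in a two-arm batched bandit, not the optimal conditional procedures derived for the location-invariant and polyhedral cases; the adaptive assignment rule is an illustrative assumption.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      true_means = np.array([0.0, 0.3])                    # two arms
      batch_sizes = [100, 100, 400]

      # Batched bandit: after each batch, tilt assignment toward the arm that currently looks best.
      probs, history = np.array([0.5, 0.5]), []
      for n in batch_sizes:
          arms = rng.choice(2, size=n, p=probs)
          rewards = rng.normal(true_means[arms], 1.0)
          history.append((arms, rewards))
          batch_means = [rewards[arms == a].mean() if (arms == a).any() else 0.0 for a in range(2)]
          probs = np.where(np.arange(2) == int(np.argmax(batch_means)), 0.8, 0.2)

      # Inference using only the last batch, conditional on the realized assignment probabilities.
      arms_last, rewards_last = history[-1]
      arm = 1                                              # target parameter: the mean of arm 1
      x = rewards_last[arms_last == arm]
      se = x.std(ddof=1) / np.sqrt(len(x))
      lo, hi = x.mean() - norm.ppf(0.975) * se, x.mean() + norm.ppf(0.975) * se
      print(f"last-batch estimate for arm {arm}: {x.mean():.3f}, 95% CI ({lo:.3f}, {hi:.3f})")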
  11. By: Leo Krippner
    Abstract: This article introduces the eigensystem autoregression (EAR) framework, which allows an AR model to be specified, estimated, and applied directly in terms of its eigenvalues and eigenvectors. EAR estimation can therefore impose various constraints on AR dynamics that would not be possible within standard linear estimation. Examples are restricting eigenvalue magnitudes to control the rate of mean reversion, additionally imposing that eigenvalues be real and positive to avoid pronounced oscillatory behavior, and eliminating the possibility of explosive episodes in a time-varying AR. The EAR framework also produces closed-form AR forecasts and associated variances, and forecasts and data may be decomposed into components associated with the AR eigenvalues to provide additional diagnostics for assessing the model. (An illustrative sketch follows at the end of this entry.)
    Keywords: autoregression, lag polynomial, eigenvalues, eigenvectors, companion matrix
    JEL: C22 C53 C63
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2023-47&r=ecm
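    Sketch: The representation builds on the companion-matrix form of an AR(p): the eigenvalues of the companion matrix govern persistence and oscillation, and powers of the companion matrix give closed-form multi-step forecasts. The sketch below shows that standard decomposition after an ordinary OLS fit, not the EAR estimation framework itself, which parameterizes and constrains the eigenvalues directly.
      import numpy as np

      def companion(phi):
          """Companion matrix of an AR(p) with coefficients phi = (phi_1, ..., phi_p)."""
          p = len(phi)
          C = np.zeros((p, p))
          C[0, :] = phi
          C[1:, :-1] = np.eye(p - 1)
          return C

      # Simulate a zero-mean AR(2) and estimate it by OLS.
      rng = np.random.default_rng(8)
      phi_true, T = np.array([1.2, -0.4]), 2_000
      y = np.zeros(T)
      for t in range(2, T):
          y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + rng.normal()

      Y, X = y[2:], np.column_stack([y[1:-1], y[:-2]])
      phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

      C = companion(phi_hat)
      eigvals = np.linalg.eigvals(C)
      print("estimated AR coefficients:", np.round(phi_hat, 3))
      print("eigenvalue moduli:", np.round(np.abs(eigvals), 3))      # moduli < 1: stationary, mean-reverting

      # Closed-form h-step forecasts from the companion form, with state s_T = (y_T, y_{T-1})'.
      s = np.array([y[-1], y[-2]])
      forecasts = [(np.linalg.matrix_power(C, h) @ s)[0] for h in range(1, 6)]
      print("1-5 step forecasts:", np.round(forecasts, 3))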

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.