nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒10‒18
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Jackknife empirical likelihood: small bandwidth, sparse network and high-dimension asymptotics By Matsushita, Yukitoshi; Otsu, Taisuke
  2. \beta-Intact-VAE: Identifying and Estimating Causal Effects under Limited Overlap By Pengzhou Wu; Kenji Fukumizu
  3. Fixed $T$ Estimation of Linear Panel Data Models with Interactive Fixed Effects By Ayden Higgins
  4. Nonparametric Tests of Conditional Independence for Time Series By Xiaojun Song; Haoyu Wei
  5. Estimating High Dimensional Monotone Index Models by Iterative Convex Optimization By Shakeeb Khan; Xiaoying Lan; Elie Tamer
  6. Many Proxy Controls By Ben Deaner
  7. Partial Identification of Marginal Treatment Effects with discrete instruments and misreported treatment By Santiago Acerenza
  8. Efficient Estimation in NPIV Models: A Comparison of Various Neural Networks-Based Estimators By Jiafeng Chen; Xiaohong Chen; Elie Tamer
  9. Dyadic Double/Debiased Machine Learning for Analyzing Determinants of Free Trade Agreements By Harold D Chiang; Yukun Ma; Joel Rodrigue; Yuya Sasaki
  10. Deep Learning of Potential Outcomes By Bernard Koch; Tim Sainburg; Pablo Geraldo; Song Jiang; Yizhou Sun; Jacob Gates Foster
  11. The time-varying evolution of inflation risks By Korobilis, Dimitris; Landau, Bettina; Musso, Alberto; Phella, Anthoulla
  12. Choice probabilities and correlations in closed-form route choice models: specifications and drawbacks By Fiore Tinessa; Vittorio Marzano; Andrea Papola
  13. A time-varying skewness model for Growth-at-Risk By Martin Iseringhausen

  1. By: Matsushita, Yukitoshi; Otsu, Taisuke
    Abstract: This paper sheds light on inference problems for statistical models under alternative or nonstandard asymptotic frameworks from the perspective of jackknife empirical likelihood. Examples include small bandwidth asymptotics for semiparametric inference and goodness-of-fit testing, sparse network asymptotics, many covariates asymptotics for regression models, and many-weak instruments asymptotics for instrumental variable regression. We first establish Wilks’ theorem for the jackknife empirical likelihood statistic for a general semiparametric inference problem under the conventional asymptotics. We then show that the jackknife empirical likelihood statistic may lose asymptotic pivotalness under the above nonstandard asymptotic frameworks, and argue that these phenomena can be understood as the emergence, at first order, of Efron and Stein’s (1981) bias of the jackknife variance estimator. Finally, we propose a modification of the jackknife empirical likelihood to recover asymptotic pivotalness under both the conventional and nonstandard asymptotics. Our modification works for all the above examples and provides a unified framework for investigating nonstandard asymptotic problems.
    JEL: C1
    Date: 2020–10–05
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:106488&r=
  2. By: Pengzhou Wu; Kenji Fukumizu
    Abstract: As an important problem in causal inference, we discuss the identification and estimation of treatment effects (TEs) under limited overlap; that is, when subjects with certain features belong to a single treatment group. We use a latent variable to model a prognostic score, which is widely used in biostatistics and is sufficient for TEs; i.e., we build a generative prognostic model. We prove that the latent variable recovers a prognostic score and that the model identifies individualized treatment effects. The model is then learned as \beta-Intact-VAE, a new type of variational autoencoder (VAE). We derive TE error bounds that enable representations balanced for treatment groups conditioned on individualized features. The proposed method is compared with recent methods using (semi-)synthetic datasets.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.05225&r=
  3. By: Ayden Higgins
    Abstract: This paper studies the estimation of linear panel data models with interactive fixed effects, where one dimension of the panel, typically time, may be fixed. To this end, a novel transformation is introduced that reduces the model to a lower dimension and, in doing so, relieves the model of incidental parameters in the cross-section. The central result of this paper demonstrates that transforming the model and then applying the principal component (PC) estimator of Bai (2009) delivers $\sqrt{n}$ consistent estimates of regression slope coefficients with $T$ fixed. Moreover, these estimates are shown to be asymptotically unbiased in the presence of cross-sectional dependence, serial dependence, and with the inclusion of dynamic regressors, in stark contrast to the usual case. The large $n$, large $T$ properties of this approach are also studied, where many of these results carry over to the case in which $n$ grows sufficiently fast relative to $T$. Transforming the model also proves useful beyond estimation, a point illustrated by showing that, with $T$ fixed, the eigenvalue ratio test of Ahn and Horenstein (2013) provides a consistent test for the number of factors when applied to the transformed model.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.05579&r=
  4. By: Xiaojun Song; Haoyu Wei
    Abstract: We propose consistent nonparametric tests of conditional independence for time series data. Our methods are motivated by the difference between the joint conditional cumulative distribution function (CDF) and the product of conditional CDFs. This difference is transformed into a proper conditional moment restriction (CMR), which forms the basis for our testing procedure. Our test statistics are then constructed using the integrated moment restrictions that are equivalent to the CMR. We establish the asymptotic behavior of the test statistics under the null, the alternative, and sequences of local alternatives converging to conditional independence at the parametric rate. Our tests are implemented with the assistance of a multiplier bootstrap. Monte Carlo simulations are conducted to evaluate the finite-sample performance of the proposed tests. We apply our tests to examine the predictability of the equity risk premium using the variance risk premium for different horizons and find varying degrees of nonlinear predictability at mid-run and long-run horizons.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.04847&r=
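    The multiplier-bootstrap step used to obtain critical values can be illustrated in miniature. The sketch below is not the authors' statistic: it bootstraps a simple normalized sum of mean-zero influence contributions (here a hypothetical array `psi`) with i.i.d. standard normal multipliers, which conveys the mechanics of approximating a null distribution without re-estimating anything:

```python
import numpy as np

def multiplier_bootstrap_pvalue(psi, n_boot=999, seed=0):
    """Approximate the null distribution of T = |n^{-1/2} * sum_i psi_i|
    by redrawing T* = |n^{-1/2} * sum_i xi_i * (psi_i - psi_bar)| with
    i.i.d. standard normal multipliers xi_i, and return the p-value."""
    rng = np.random.default_rng(seed)
    n = len(psi)
    t_obs = abs(psi.sum() / np.sqrt(n))
    xi = rng.standard_normal((n_boot, n))          # multipliers, one row per draw
    t_star = np.abs((xi * (psi - psi.mean())).sum(axis=1) / np.sqrt(n))
    return (1 + (t_star >= t_obs).sum()) / (n_boot + 1)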
  5. By: Shakeeb Khan; Xiaoying Lan; Elie Tamer
    Abstract: In this paper we propose a new approach to estimating large-dimensional monotone index models. This class of models has been popular in the applied and theoretical econometrics literatures, as it includes discrete choice, nonparametric transformation, and duration models. The main advantage of our approach is computational: in comparison, rank estimation procedures such as those proposed in Han (1987) and Cavanagh and Sherman (1998) optimize a nonsmooth, nonconvex objective function, and finding a global maximum becomes increasingly difficult with a large number of regressors, which makes such procedures particularly unsuitable for big data models. For our semiparametric model of increasing dimension, we propose a new algorithm-based estimator involving the method of sieves and establish its asymptotic properties. The algorithm uses an iterative procedure whose key step exploits a strictly convex objective function. Our main results generalize those in, e.g., Dominitz and Sherman (2005) and Toulis and Airoldi (2017), who consider algorithm-based estimators for models of fixed dimension.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.04388&r=
  6. By: Ben Deaner
    Abstract: A recent literature considers causal inference using noisy proxies for unobserved confounding factors. The proxies are divided into two sets that are independent conditional on the confounders: one set of proxies are `negative control treatments' and the other are `negative control outcomes'. Existing work applies to low-dimensional settings with a fixed number of proxies and confounders. In this work we consider linear models with many proxy controls and possibly many confounders. A key insight is that if each group of proxies is strictly larger than the number of confounding factors, then a matrix of nuisance parameters has a low-rank structure and a vector of nuisance parameters has a sparse structure. We can exploit the rank restriction and sparsity to reduce the number of free parameters to be estimated. The number of unobserved confounders is not known a priori, but we show that it is identified, and we apply penalization methods to adapt to this quantity. We provide a closed-form estimator as well as a doubly robust estimator that must be evaluated using numerical methods. We provide conditions under which our doubly robust estimator is uniformly root-$n$ consistent and asymptotically centered normal, and under which our suggested confidence intervals have asymptotically correct coverage. We provide simulation evidence that our methods achieve better performance than existing approaches in high dimensions, particularly when the number of proxies is substantially larger than the number of confounders.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.03973&r=
  7. By: Santiago Acerenza
    Abstract: This paper provides partial identification results for the marginal treatment effect ($MTE$) when the binary treatment variable is potentially misreported and the instrumental variable is discrete. Identification results are derived under different sets of nonparametric assumptions. The identification results are illustrated in identifying the marginal treatment effects of food stamps on health.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.06285&r=
  8. By: Jiafeng Chen; Xiaohong Chen; Elie Tamer
    Abstract: We investigate the computational performance of Artificial Neural Networks (ANNs) in semi-nonparametric instrumental variables (NPIV) models with high-dimensional covariates that are relevant to empirical work in economics. We focus on efficient estimation of and inference on expectation functionals (such as weighted average derivatives) and use optimal criterion-based procedures (sieve minimum distance, or SMD) and novel efficient score-based procedures (ES). Both classes of procedures use ANNs to approximate the unknown function. We then provide a detailed practitioner's recipe for implementing these two classes of estimators, which involves the choice of tuning parameters both for the unknown functions (which include conditional expectations) and for the estimation of the optimal weights in SMD and of the Riesz representers used with the ES estimators. Next, we conduct a large set of Monte Carlo experiments comparing finite-sample performance in complicated designs that involve a large set of regressors (up to 13 continuous) and various underlying nonlinearities and covariate correlations. Some of the takeaways from our results: 1) tuning and optimization are delicate, especially as the problem is nonconvex; 2) the various ANN architectures do not seem to matter for the designs we consider, and given proper tuning, ANN methods perform well; 3) stable inference is more difficult to achieve with ANN estimators; 4) optimal SMD-based estimators perform adequately; 5) there seems to be a gap between implementation and approximation theory. Finally, we apply ANN NPIV to estimate average price elasticities and average derivatives in two demand examples.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.06763&r=
  9. By: Harold D Chiang; Yukun Ma; Joel Rodrigue; Yuya Sasaki
    Abstract: This paper presents novel methods and theories for estimation and inference about parameters in econometric models using machine learning of nuisance parameters when data are dyadic. We propose a dyadic cross-fitting method to remove over-fitting biases under arbitrary dyadic dependence. Together with the use of Neyman orthogonal scores, this novel cross-fitting method enables root-$n$ consistent estimation and inference that is robust to dyadic dependence. We illustrate an application of our general framework to high-dimensional network link formation models. Applying this method to empirical data on international economic networks, we reexamine determinants of free trade agreements (FTAs), viewed as links formed in dyads composed of world economies. We document that standard methods may lead to misleading conclusions for numerous classic determinants of FTA formation, due to biased point estimates or standard errors that are too small.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.04365&r=
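    The node-level sample splitting behind dyadic cross-fitting can be sketched in a few lines. The code below is an illustration of the splitting logic only, not the authors' estimator; the function name and the choice of k are hypothetical. Nodes are partitioned into k groups; for each pair of groups, dyads touching those groups form the evaluation set, and nuisances would be fit only on dyads whose endpoints avoid both groups, so no node appears in both sets:

```python
import numpy as np
from itertools import combinations_with_replacement

def dyadic_cross_fit_splits(n_nodes, k=4, seed=0):
    """Sketch of node-level splitting for dyadic data: partition nodes
    into k groups; for each (unordered) pair of groups (a, b), evaluate
    on dyads whose endpoint groups are exactly {a, b} and train on dyads
    whose endpoints avoid both a and b."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, k, size=n_nodes)
    dyads = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    splits = []
    for a, b in combinations_with_replacement(range(k), 2):
        train = [(i, j) for (i, j) in dyads
                 if group[i] not in (a, b) and group[j] not in (a, b)]
        evaluate = [(i, j) for (i, j) in dyads
                    if {group[i], group[j]} == ({a, b} if a != b else {a})]
        splits.append((train, evaluate))
    return splits
```

    Because every dyad's pair of endpoint groups matches exactly one group pair, the evaluation sets partition all dyads, while each training set shares no node with its evaluation set — the property that breaks the dyadic dependence between the nuisance fit and the dyads it is evaluated on.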
  10. By: Bernard Koch; Tim Sainburg; Pablo Geraldo; Song Jiang; Yizhou Sun; Jacob Gates Foster
    Abstract: This review systematizes the emerging literature on causal inference using deep neural networks under the potential outcomes framework. It provides an intuitive introduction to how deep learning can be used to estimate/predict heterogeneous treatment effects and to extend causal inference to settings where confounding is non-linear, time-varying, or encoded in text, networks, and images. To maximize accessibility, we also introduce prerequisite concepts from causal inference and deep learning. The survey differs from other treatments of deep learning and causal inference in its sharp focus on observational causal estimation, its extended exposition of key algorithms, and its detailed tutorials for implementing, training, and selecting among deep estimators in TensorFlow 2, available at github.com/kochbj/Deep-Learning-for-Causal-Inference.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.04442&r=
  11. By: Korobilis, Dimitris; Landau, Bettina; Musso, Alberto; Phella, Anthoulla
    Abstract: This paper develops a Bayesian quantile regression model with time-varying parameters (TVPs) for forecasting inflation risks. The proposed parametric methodology bridges the empirically established benefits of TVP regressions for forecasting inflation with the ability of quantile regression to flexibly model the whole distribution of inflation. In order to make our approach accessible and empirically relevant for forecasting, we derive an efficient Gibbs sampler by transforming the state-space form of the TVP quantile regression into an equivalent high-dimensional regression form. An application of this methodology points to a good forecasting performance of quantile regressions with TVPs augmented with specific credit- and money-based indicators for the prediction of the conditional distribution of inflation in the euro area, both in the short and the longer run, and specifically for tail risks.
    JEL: C11 C22 C52 C53 C55 E31 E37 E51
    Keywords: Bayesian shrinkage, euro area, Horseshoe, inflation tail risks, MCMC, quantile regression, time-varying parameters
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212600&r=
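    The quantile-regression ingredient rests on the check (pinball) loss: the tau-quantile is the value that minimizes its expected value. A minimal numpy illustration of that fact (not the paper's Gibbs sampler; the standard normal data and the grid search are assumptions for the example):

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss: tau * u for u >= 0, (tau - 1) * u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

# The tau-quantile of a sample minimizes the average pinball loss,
# so a grid search over candidate values recovers it.
rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
grid = np.linspace(-3.0, 3.0, 601)
avg_loss = [pinball_loss(y - q, 0.9).mean() for q in grid]
q90 = grid[int(np.argmin(avg_loss))]  # close to the 0.9 sample quantile
```

    Quantile regression replaces the constant q with a linear index in covariates, and the TVP extension of the paper lets those coefficients drift over time.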
  12. By: Fiore Tinessa; Vittorio Marzano; Andrea Papola
    Abstract: This paper investigates the performance, in terms of choice probabilities and correlations, of existing and new specifications of closed-form route choice models with flexible correlation patterns, namely the Link Nested Logit (LNL), the Paired Combinatorial Logit (PCL) and the more recent Combination of Nested Logit (CoNL) models. Following a consolidated track in the literature, the choice probabilities and correlations of the Multinomial Probit (MNP) model of Daganzo and Sheffi (1977) are taken as the target. Laboratory experiments on small/medium-size networks are illustrated, also leveraging a procedure for practical calculation of the correlations of any GEV model, proposed by Marzano (2014). Results show that models with inherent limitations in their coverage of the domain of feasible correlations yield unsatisfactory performance, whilst the specifications of the CoNL proposed in the paper fit both MNP correlations and probabilities best. The performance of the models is appreciably improved by introducing lower bounds on the nesting parameters. Overall, the paper provides guidance for the practical application of the tested models.
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2110.07224&r=
  13. By: Martin Iseringhausen (ESM)
    Abstract: This paper studies macroeconomic risks in a panel of advanced economies based on a stochastic volatility model in which macro-financial conditions shape the predictive growth distribution. We find sizable time variation in the skewness of these distributions, conditional on the macro-financial environment. Tightening financial conditions signal increasing downside risk in the short term, but this link reverses at longer horizons. When forecasting downside risk, the proposed model, on average, outperforms existing approaches based on quantile regression and a GARCH model, especially at short horizons. In forecasting upside risk, it improves the average accuracy across all horizons up to four quarters ahead. The suggested approach can inform policy makers' assessment of macro-financial vulnerabilities by providing a timely signal of shifting risks and a quantification of their magnitude.
    Keywords: Bayesian analysis, downside risk, macro-financial linkages, time variation
    JEL: C11 C23 C53 E44
    Date: 2021–06–10
    URL: http://d.repec.org/n?u=RePEc:stm:wpaper:49&r=

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.