New Economics Papers on Econometrics
By: | Ganesh Karapakula |
Abstract: | In this paper, I try to tame "Basu's elephants" (data with extreme selection on observables). I propose new practical large-sample and finite-sample methods for estimating and inferring heterogeneous causal effects (under unconfoundedness) in the empirically relevant context of limited overlap. I develop a general principle called "Stable Probability Weighting" (SPW) that can be used as an alternative to the widely used Inverse Probability Weighting (IPW) technique, which relies on strong overlap. I show that IPW (or its augmented version), when valid, is a special case of the more general SPW (or its doubly robust version), which adjusts for the extremeness of the conditional probabilities of the treatment states. The SPW principle can be implemented using several existing large-sample parametric, semiparametric, and nonparametric procedures for conditional moment models. In addition, I provide new finite-sample results that apply when unconfoundedness is plausible within fine strata. Since IPW estimation relies on the problematic reciprocal of the estimated propensity score, I develop a "Finite-Sample Stable Probability Weighting" (FPW) set-estimator that is unbiased in a sense. I also propose new finite-sample inference methods for testing a general class of weak null hypotheses. The associated computationally convenient methods, which can be used to construct valid confidence sets and to bound the finite-sample confidence distribution, are of independent interest. My large-sample and finite-sample frameworks extend to the setting of multivalued treatments. |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.05703&r=ecm |
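To see the instability the abstract above targets, here is a minimal numpy sketch of the standard IPW estimator of the average treatment effect (not the paper's SPW method); the simulated propensity score and the ad hoc trimming threshold are illustrative assumptions.

```python
# Illustrative sketch (not the paper's SPW method): the standard IPW
# estimator of the average treatment effect, whose reliance on 1/e(X)
# is exactly what breaks down under limited overlap.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-4 * x))           # propensity score, near 0/1 for |x| large
t = rng.binomial(1, e)                 # treatment under unconfoundedness
y = 1.0 * t + x + rng.normal(size=n)   # true ATE = 1

# IPW: the weights 1/e and 1/(1-e) explode when overlap is weak.
ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))

# A common (ad hoc) fix: trim units with extreme propensities.
keep = (e > 0.05) & (e < 0.95)
ipw_trim = np.mean(t[keep] * y[keep] / e[keep]
                   - (1 - t[keep]) * y[keep] / (1 - e[keep]))
print(ipw, ipw_trim)
```

When e(X) approaches 0 or 1, a handful of reciprocal weights dominate the sum; trimming stabilizes the estimate but changes the estimand, which is the kind of trade-off that motivates alternatives such as SPW.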
By: | Christophe Bellégo; David Benatia; Vincent Dortet-Bernardet
Abstract: | This paper studies the identification, estimation, and inference of long-term (binary) treatment effect parameters when a balanced panel is unavailable or covers only a subset of the available data. We develop a new estimator: the chained difference-in-differences, which leverages the overlapping structure of many unbalanced panel data sets. This approach consists of efficiently aggregating a collection of short-term treatment effects estimated on multiple incomplete panels. Our estimator accommodates (1) multiple time periods, (2) variation in treatment timing, (3) treatment effect heterogeneity, and (4) general missing data patterns. We establish the asymptotic properties of the proposed estimator and discuss identification and efficiency gains in comparison to existing methods. Finally, we illustrate its relevance through (i) numerical simulations and (ii) an application to the effects of an innovation policy in France.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.01085&r=ecm |
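A minimal pandas sketch of the chaining idea as we read it from the abstract above (not the authors' estimator, which also handles staggered timing and heterogeneity): estimate a 2x2 DiD on each pair of adjacent periods using only the units observed in both, then sum the short-run estimates. The long-format schema (columns id, period, y, and a time-invariant treated indicator) is an assumption.

```python
# Sketch of the chaining idea on an unbalanced panel (assumed schema:
# columns id, period, y, treated).
import pandas as pd

def did_2x2(df, t0, t1):
    """2x2 DiD using only units observed in both t0 and t1."""
    both = df.pivot(index="id", columns="period", values="y")[[t0, t1]].dropna()
    treated = df.drop_duplicates("id").set_index("id")["treated"]
    d = both[t1] - both[t0]                       # within-unit change
    g = treated.loc[d.index]
    return d[g == 1].mean() - d[g == 0].mean()

def chained_did(df, periods):
    """Chain adjacent short-run DiDs into a long-run effect."""
    return sum(did_2x2(df, a, b) for a, b in zip(periods[:-1], periods[1:]))

# usage: chained_did(df, periods=sorted(df["period"].unique()))
```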
By: | Xiaohong Chen; Yuan Liao; Weichen Wang |
Abstract: | General nonlinear sieve learnings are classes of nonlinear sieves that can approximate nonlinear functions of high-dimensional variables much more flexibly than various linear sieves (or series). This paper considers general nonlinear sieve quasi-likelihood ratio (GN-QLR) inference on expectation functionals of time series data, where the functionals of interest are based on nonparametric functions that satisfy conditional moment restrictions and are learned using multilayer neural networks. While the asymptotic normality of the estimated functionals depends on some unknown Riesz representer of the functional space, we show that the optimally weighted GN-QLR statistic is asymptotically chi-square distributed, regardless of whether the expectation functional is regular (root-$n$ estimable) or not. This holds when the data are weakly dependent and satisfy a beta-mixing condition. We apply our method to off-policy evaluation in reinforcement learning by formulating the Bellman equation as a conditional moment restriction, so that we can make inference about the state-specific value functional using the proposed GN-QLR method with time series data. In addition, estimation of averaged partial means and averaged partial derivatives in nonparametric instrumental variables and quantile IV models is presented as a leading example. Finally, a Monte Carlo study shows the finite-sample performance of the procedure.
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.00092&r=ecm |
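The full GN-QLR procedure is beyond a short sketch, but two of its ingredients can be previewed: a neural-network sieve g learned from the simplest conditional moment restriction E[Y - g(X) | X] = 0, and a plug-in estimate of the expectation functional E[g(X)]. Everything below (architecture, learning rate, DGP) is an illustrative assumption, not the paper's setup.

```python
# Toy sketch of the ingredients only (not the GN-QLR statistic itself):
# a one-hidden-layer network ("nonlinear sieve") fit by least squares,
# then a plug-in estimate of the expectation functional theta = E[g(X)].
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
x = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(2 * x[:, 0]) + 0.3 * rng.normal(size=n)

k, lr = 32, 0.05
W1, b1 = rng.normal(size=(1, k)) / np.sqrt(k), np.zeros(k)
w2, b2 = rng.normal(size=k) / np.sqrt(k), 0.0

for _ in range(2_000):                 # full-batch gradient descent
    h = np.tanh(x @ W1 + b1)           # hidden layer, (n, k)
    g = h @ w2 + b2                    # sieve estimate of E[Y | X]
    grad_g = 2 * (g - y) / n           # gradient of mean squared error
    w2 -= lr * h.T @ grad_g
    b2 -= lr * grad_g.sum()
    back = np.outer(grad_g, w2) * (1 - h**2)
    W1 -= lr * x.T @ back
    b1 -= lr * back.sum(axis=0)

g = np.tanh(x @ W1 + b1) @ w2 + b2
theta_hat = g.mean()                   # plug-in estimate of E[g(X)]
print(theta_hat)                       # E[sin(2X)] = 0 for X ~ U(-2, 2)
```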
By: | Glynn, Adam; Rueda, Miguel; Schuessler, Julian (Aarhus University)
Abstract: | Post-instrument covariates are often included as controls in IV analyses to address a violation of the exclusion restriction. However, we show that such analyses are subject to biases unless strong assumptions hold. Using linear constant-effects models, we present asymptotic bias formulas for three estimators (with and without measurement error): IV with post-instrument covariates, IV without post-instrument covariates, and OLS. In large samples, and when the model provides a reasonable approximation, these formulas sometimes allow the analyst to bracket the parameter of interest with two estimators and to choose the estimator with the least asymptotic bias. We illustrate these points with a discussion of Acemoglu, Johnson, and Robinson (2001).
Date: | 2023–01–13 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:axn4t&r=ecm |
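A stylized simulation of the bracketing logic (our construction, not the paper's bias formulas): the exclusion restriction fails because the instrument z also affects y through a post-instrument covariate m, which is observed with error, so the three estimators land on different sides of the truth.

```python
# Compare OLS, IV without the post-instrument covariate, and IV with it
# (via Frisch-Waugh partialling-out). True effect of d on y is 1.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)
u = rng.normal(size=n)                      # unobserved confounder
d = z + u + rng.normal(size=n)              # treatment
m = z + rng.normal(size=n)                  # post-instrument covariate
y = 1.0 * d + m + u + rng.normal(size=n)    # exclusion violated via m
m_obs = m + rng.normal(size=n)              # covariate measured with error

def resid(v, c):                            # partial out control c (with constant)
    X = np.column_stack([np.ones_like(c), c])
    return v - X @ np.linalg.lstsq(X, v, rcond=None)[0]

ols = np.cov(d, y)[0, 1] / np.var(d)
iv_no_m = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]
yr, dr, zr = resid(y, m_obs), resid(d, m_obs), resid(z, m_obs)
iv_with_m = np.cov(zr, yr)[0, 1] / np.cov(zr, dr)[0, 1]
print(ols, iv_no_m, iv_with_m)              # which side of 1 does each fall on?
```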
By: | Matias D. Cattaneo; Max H. Farrell; Michael Jansson; Ricardo Masini |
Abstract: | The density weighted average derivative (DWAD) of a regression function is a canonical parameter of interest in economics. Classical first-order large-sample distribution theory for kernel-based DWAD estimators relies on tuning parameter restrictions and model assumptions that lead to an asymptotic linear representation of the point estimator. Such conditions can be restrictive, and the resulting distributional approximation may not be representative of the underlying sampling distribution of the statistic of interest; in particular, it is not robust to bandwidth choices. Small bandwidth asymptotics offers an alternative, more general distributional approximation for kernel-based DWAD estimators that allows for, but does not require, asymptotic linearity. The resulting inference procedures based on small bandwidth asymptotics were found to exhibit superior finite-sample performance in simulations, but no formal theory justifying that empirical success is available in the literature. Employing Edgeworth expansions, this paper shows that small bandwidth asymptotics lead to inference procedures with demonstrably superior higher-order distributional properties relative to procedures based on asymptotic linear approximations.
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.00277&r=ecm |
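For concreteness, here is the classical kernel-based DWAD point estimator in its Powell-Stock-Stoker form for scalar X with a Gaussian kernel; the paper's contribution concerns the distribution theory for such estimators, not this formula, and the bandwidth below is an arbitrary choice.

```python
# DWAD point estimator: theta = E[f(X) g'(X)] = -2 E[Y f'(X)], estimated
# with a leave-one-out kernel density-derivative estimate of f'.
import numpy as np

rng = np.random.default_rng(0)
n, h = 2_000, 0.5                        # bandwidth h is the delicate choice
x = rng.normal(size=n)
y = x + rng.normal(size=n)               # g(x) = x, so theta = E[f(X)]

u = (x[:, None] - x[None, :]) / h        # pairwise scaled differences
kprime = -u * np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)   # K'(u), Gaussian K
np.fill_diagonal(kprime, 0.0)            # leave-one-out
fprime = kprime.sum(axis=1) / ((n - 1) * h**2)         # f'(x_i) estimates
theta_hat = -2 * np.mean(y * fprime)
print(theta_hat)                         # target: 1 / (2 * sqrt(pi)) ~ 0.2821
```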
By: | Weifeng Jin |
Abstract: | Non-causal processes have been drawing attention recently in macroeconomics and finance for their ability to display nonlinear behaviors such as asymmetric dynamics, volatility clustering, and local explosiveness. In this paper, we investigate the statistical properties of empirical conditional quantiles of non-causal processes. Specifically, we show that the quantile autoregression (QAR) estimates for non-causal processes do not remain constant across different quantiles, in contrast to their causal counterparts. Furthermore, we demonstrate that non-causal autoregressive processes admit nonlinear representations for conditional quantiles given past observations. Exploiting these properties, we propose three novel strategies for testing non-causality in non-Gaussian processes within the QAR framework. The tests are constructed either by verifying the constancy of the slope coefficients or by applying a misspecification test of the linear QAR model over different quantiles of the process. Some numerical experiments are included to examine the finite-sample performance of the testing strategies, where we compare different specification tests for dynamic quantiles with the Kolmogorov-Smirnov constancy test. The new methodology is applied to some time series from financial markets to investigate the presence of speculative bubbles. The extension of the approach based on the specification tests to AR processes driven by innovations with heteroskedasticity is studied through simulations. The performance of QAR estimates of non-causal processes at extreme quantiles is also explored.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.02937&r=ecm |
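The first testing strategy — checking slope constancy across quantiles — can be previewed with statsmodels. The sketch below obtains a non-causal series by time-reversing a causal AR(1) with heavy-tailed (Cauchy) innovations; the quantile grid and sample size are arbitrary choices, and this is a diagnostic plot-in-numbers, not the paper's formal test.

```python
# Fit a QAR(1) at several quantiles and compare the slope estimates:
# roughly constant for the causal series, drifting for the reversed one.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n, rho = 3_000, 0.7
eps = rng.standard_cauchy(size=n)
x = np.zeros(n)
for t in range(1, n):                    # causal AR(1) with Cauchy noise
    x[t] = rho * x[t - 1] + eps[t]

def qar1_slopes(series, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    series = np.ascontiguousarray(series)
    y, ylag = series[1:], series[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    return {q: QuantReg(y, X).fit(q=q).params[1] for q in quantiles}

print("causal:    ", qar1_slopes(x))         # slopes ~ rho at all quantiles
print("non-causal:", qar1_slopes(x[::-1]))   # slopes vary across quantiles
```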
By: | Böhl, Gregor |
Abstract: | This paper proposes a Differential-Independence Mixture Ensemble (DIME) sampler for the Bayesian estimation of macroeconomic models. It allows sampling from particularly challenging, high-dimensional black-box posterior distributions which may also be computationally expensive to evaluate. DIME is a "Swiss Army knife", combining the advantages of a broad class of gradient-free global multi-start optimizers with the properties of Markov chain Monte Carlo. This includes (i) fast burn-in and convergence absent any prior numerical optimization or initial guesses, (ii) good performance for multimodal distributions, (iii) a large number of chains (the "ensemble") running in parallel, and (iv) an endogenous proposal density generated from the state of the full ensemble, which (v) respects the bounds of the prior distribution. I show that the number of parallel chains scales well with the number of necessary ensemble iterations. DIME is used to estimate a medium-scale heterogeneous-agent New Keynesian ("HANK") model with liquid and illiquid assets, for the first time allowing the households' preference parameters to be included in the estimation. The results mildly point towards a less accentuated role of household heterogeneity for empirical macroeconomic dynamics.
Keywords: | Bayesian Estimation, Monte Carlo Methods, Heterogeneous Agents, Global Optimization, Swiss Army Knife |
JEL: | C11 C13 C15 E10 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:zbw:imfswp:177&r=ecm |
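DIME itself is not reproduced here, but the differential-evolution ensemble move it builds on is easy to sketch: each chain proposes a jump along the difference of two other chains' states. This is a simplified, ter Braak-style variant with sequential updates, not the DIME algorithm.

```python
# Generic differential-evolution ensemble MCMC step (a sketch of the
# sampler family, not DIME): proposals are scaled chain differences.
import numpy as np

def de_mc_step(chains, log_post, gamma=None, eps=1e-4, rng=np.random):
    """One sweep over an ensemble of chains, shape (n_chains, dim)."""
    n, dim = chains.shape
    g = gamma if gamma is not None else 2.38 / np.sqrt(2 * dim)
    out = chains.copy()
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = chains[i] + g * (chains[a] - chains[b]) + eps * rng.standard_normal(dim)
        if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
            out[i] = prop
    return out

# Usage on a toy posterior:
log_post = lambda th: -0.5 * np.sum(th**2)            # standard normal
ens = np.random.default_rng(0).normal(size=(30, 5))   # 30 chains, dim 5
for _ in range(500):
    ens = de_mc_step(ens, log_post)
```

The proposal needs no gradients and adapts its shape to the posterior through the ensemble itself, which is what makes this family attractive for black-box macro likelihoods.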
By: | Antonis Demos (www.aueb.gr/users/demos) |
Abstract: | Here we investigate the statistical properties of two normal asymmetric SV models with possibly time-varying risk premia. In fact, we investigate two popular autoregressive stochastic volatility specifications. Although these seem very similar, it turns out that they possess quite different statistical properties. The derived properties can be employed to develop tests or to check stationarity of various orders, which is important for the asymptotic properties of various estimators.
Date: | 2023–01–19 |
URL: | http://d.repec.org/n?u=RePEc:aue:wpaper:2303&r=ecm |
By: | Hartwig, Benny |
Abstract: | This paper investigates the ability of several generalized Bayesian vector autoregressions to cope with the extreme COVID-19 observations and discusses their impact on prior calibration for inference and forecasting purposes. It shows that the preferred model interprets the pandemic episode as a rare event rather than a persistent increase in macroeconomic volatility. For forecasting, however, the choice among outlier-robust error structures matters less when a large cross-section of information is used. Beyond the error structure, this paper shows that the standard Minnesota prior calibration is an important source of changing macroeconomic transmission channels during the pandemic, altering the predictability of real and nominal variables. To alleviate this sensitivity, an outlier-robust prior calibration is proposed.
Keywords: | forecasting, multivariate t errors, common time-varying volatility, outlier-robust prior calibration |
JEL: | C11 C51 C53 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:522022&r=ecm |
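For readers unfamiliar with the prior being recalibrated, a sketch of one common Minnesota-prior parameterization follows (variants abound; the hyperparameter names and default values here are illustrative): the prior standard deviation of the coefficient on lag l of variable j in equation i shrinks in l and is scaled by residual volatilities sigma, which is one channel through which pandemic outliers leak into the prior.

```python
# One textbook Minnesota-prior scale scheme (illustrative parameterization).
import numpy as np

def minnesota_prior_std(sigma, lags, lam1=0.2, lam2=0.5, lam3=1.0):
    """V[i, j, l]: prior std dev of the coefficient on lag l+1 of
    variable j in equation i; cross-variable terms are tightened."""
    k = len(sigma)
    V = np.empty((k, k, lags))
    for l in range(1, lags + 1):
        for i in range(k):
            for j in range(k):
                tight = lam1 / l**lam3
                if i != j:
                    tight *= lam2 * sigma[i] / sigma[j]
                V[i, j, l - 1] = tight
    return V

# sigma typically comes from univariate AR residuals; COVID outliers
# inflate these scales, which is the sensitivity the paper documents.
print(minnesota_prior_std(sigma=np.array([1.0, 2.5]), lags=2)[:, :, 0])
```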
By: | Ruben Dewitte (Ghent University); Catherine Fuss (Economics and Research Department); Angelos Theodorakopoulos (Aston Business School)
Abstract: | Productivity is influenced by several firm-level factors, many of them latent. When left unexplained, this latent heterogeneity can lead to mismeasurement of productivity differences between groups of firms. We propose a flexible, semi-parametric extension of current production function estimation techniques that uses finite mixture models to control for latent firm-specific productivity determinants. We establish the performance of the proposed methodology through a Monte Carlo analysis and demonstrate its empirical applicability by estimating export productivity premia, and their robustness to latent heterogeneity, from firm-level data. Our results highlight that latent heterogeneity distorts export premia estimates and their contribution to aggregate productivity growth. The proposed approach delivers robust estimates of productivity differences between firm groups, regardless of the availability of productivity determinants in the data.
Keywords: | finite mixture model, productivity estimation, productivity distribution, latent productivity determinants
JEL: | C13 C14 D24 L11 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:nbb:reswpp:202212-428&r=ecm |
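A stylized fragment of the mixture idea (not the authors' full production-function estimator): treat log-productivity as a finite Gaussian mixture so that latent firm groups are recovered rather than observed. The two-group DGP below is invented for illustration.

```python
# Fit a finite Gaussian mixture to simulated log-productivity and recover
# the latent group structure without observing group membership.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
omega = np.concatenate([rng.normal(0.0, 0.3, 700),    # latent group A
                        rng.normal(1.0, 0.3, 300)])   # latent group B
gm = GaussianMixture(n_components=2, random_state=0).fit(omega.reshape(-1, 1))
groups = gm.predict(omega.reshape(-1, 1))             # posterior group labels
print(gm.means_.ravel(), np.bincount(groups))
```

Comparing group means without such a control would attribute the latent component to whatever observable split (e.g., exporter status) happens to correlate with it — the distortion the abstract describes.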
By: | Erik Kole (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) |
Abstract: | To investigate how economies, financial markets or institutions can deal with stress, we often analyze the effects of shocks conditional on a recession or a bear market. Markov-switching vector autoregressive (MSVAR) models are perfectly suited for such analyses because they combine gradual movements with sudden switches. In this paper, we develop a comprehensive methodology to conduct these analyses. We derive first and second moments conditional only on the regime distribution and propose impulse response functions for both moments. By formulating the MSVAR as an extended linear non-Gaussian VAR, all results are in closed form. We illustrate our methods with an application to stock and bond return predictability. We show how forecasts of means, volatilities and (auto-)correlations depend on the regimes. The effect of shocks becomes highly nonlinear, and they propagate via different channels. During bear markets, shocks have stronger effects on means and volatilities and die out more slowly.
Keywords: | Markov-switching VAR, moments, impulse response analysis, bull and bear markets
JEL: | C32 C58 G01 G17 |
Date: | 2022–04–25 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20210080&r=ecm |
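Two of the simpler regime-dependent objects behind such analyses can be computed in a few lines (toy quantities, not the paper's closed-form conditional moments): the ergodic regime distribution implied by the transition matrix, and each regime's own fixed point. All parameter values below are made up.

```python
# Ergodic regime probabilities and within-regime fixed points of a
# two-regime MS-VAR(1): y_t = c_s + A_s y_{t-1} + eps_t.
import numpy as np

P = np.array([[0.95, 0.05],       # P[s, s']: bull/bear transition matrix
              [0.10, 0.90]])
# Ergodic distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

c = [np.array([0.5, 0.2]), np.array([-0.8, 0.1])]   # regime intercepts
A = [0.3 * np.eye(2), 0.6 * np.eye(2)]              # regime VAR matrices
# Fixed point implied by regime s's own dynamics (i.e., conditional on
# staying in s) -- a simpler object than the paper's conditional moments.
fixed_points = [np.linalg.solve(np.eye(2) - A[s], c[s]) for s in range(2)]
print(pi, fixed_points)
```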
By: | Fabio Canova; Kenneth Sæterhagen Paulsen |
Abstract: | Dynamic equilibrium models are specified to track time series with unit root-like behavior. Thus, unit roots are typically introduced and the optimality conditions adjusted. This step requires tedious algebra and often leads to algebraic mistakes, especially in models with several unit roots. We propose a symbolic algorithm that simplifies the step of rendering non-stationary models stationary. It is easy to implement and works whether trends are stochastic or deterministic, exogenous or endogenous. Three examples illustrate the mechanics and the properties of the approach. A comparison with existing methods is provided.
Keywords: | DSGE models, unit roots, endogenous growth, symbolic computation |
Date: | 2021–12–20 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2021_18&r=ecm |
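The substitution such an algorithm automates can be illustrated with sympy on a capital accumulation equation: rescale trending variables by the trend A_t (here A_t = g A_{t-1}) and verify that the trend cancels. The paper's symbolic algorithm is, of course, far more general than this hand-rolled example.

```python
# Stationarize K_t = (1 - delta) K_{t-1} + I_t by the substitutions
# K_t = k_t A_t, I_t = i_t A_t with trend A_t = g A_{t-1}.
import sympy as sp

k, klag, i, delta, g = sp.symbols("k k_lag i delta g", positive=True)
A, Alag = sp.symbols("A A_lag", positive=True)

eq = sp.Eq(k * A, (1 - delta) * klag * Alag + i * A)
eq = eq.subs(A, g * Alag)                       # impose the trend process
stationary = sp.Eq(sp.expand(eq.lhs / (g * Alag)),
                   sp.expand(eq.rhs / (g * Alag)))
print(stationary)                               # k = i + (1 - delta)*k_lag/g
```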
By: | Francesco Fusari (University of Surrey) |
Abstract: | This paper proposes a new strategy for the identification of monetary policy shocks in structural vector autoregressions (SVARs). It combines traditional sign restrictions with external variable constraints on high-frequency monetary surprises and central bank’s macroeconomic projections. I use it to characterize the transmission of US monetary policy over the period 1965-2007. First, I find that contractionary monetary policy shocks unequivocally decrease output, sharpening the ambiguous implications of standard sign-restricted SVARs. Second, I show that the identified structural models are consistent with narrative sign restrictions and restrictions on the monetary policy equation. On the contrary, the shocks identified through these alternative methodologies turn out to be correlated with the information set of the central bank and to weakly comove with monetary surprises. Finally, I implement an algorithm for robust Bayesian inference in set-identified SVARs, providing further evidence in support of my identification strategy. |
JEL: | E52 C51 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:sur:surrec:0123&r=ecm |
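The accept/reject core of standard sign-restriction identification, which the paper augments with external-variable constraints (omitted below), looks as follows; the variable ordering and the example covariance matrix are invented for illustration.

```python
# Standard impact-sign-restriction draw: rotate the Cholesky factor by a
# random orthonormal Q and keep rotations matching the sign pattern.
import numpy as np

def draw_candidates(Sigma, check_signs, n_draws=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    B0 = np.linalg.cholesky(Sigma)
    kept = []
    for _ in range(n_draws):
        Q, R = np.linalg.qr(rng.standard_normal(Sigma.shape))
        Q = Q @ np.diag(np.sign(np.diag(R)))   # normalize for uniqueness
        impact = B0 @ Q                        # candidate impact matrix
        if check_signs(impact):
            kept.append(impact)
    return kept

# e.g. rows = [rate, output, prices]; a contractionary MP shock (column 0)
# raises the rate and lowers output and prices on impact.
check = lambda M: M[0, 0] > 0 and M[1, 0] < 0 and M[2, 0] < 0
Sigma = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]])
print(len(draw_candidates(Sigma, check)))
```

The paper's extra step would discard, from the kept set, rotations whose implied shocks fail the high-frequency-surprise and projection constraints.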
By: | Kenneth Sæterhagen Paulsen; Tuva Marie Fastbø; Tobias Ingebrigtsen |
Abstract: | We propose a novel copula approach to producing density forecasts of economic aggregates by combining models that use disaggregate data. Our copula approach is more flexible than existing techniques because it is applicable to any econometric model that produces density forecasts. We construct a set of Monte Carlo studies to investigate the properties of the suggested approach. In our empirical application, we use the Norwegian index for goods consumption (VKI) and the Norwegian consumer price index for underlying inflation (CPI-ATE). We find that the copula approach compares well with alternative methods in recursive out-of-sample estimation.
Keywords: | Aggregate forecast, disaggregates, density forecast, copula |
JEL: | C53 E27 |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2022_5&r=ecm |
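One way to picture the approach (a Gaussian copula and the dependence parameter are assumed here; the paper's copula choice may differ): join the marginal density-forecast draws of the disaggregates through a copula, then sum the joined draws to obtain the aggregate's forecast density.

```python
# Combine two disaggregate density forecasts into an aggregate forecast
# density via a Gaussian copula (illustrative marginals and rho).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m1 = rng.normal(2.0, 1.0, size=5_000)     # density-forecast draws, component 1
m2 = rng.gamma(2.0, 1.0, size=5_000)      # density-forecast draws, component 2

rho = 0.4                                  # assumed dependence between components
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5_000)
u = norm.cdf(z)                            # Gaussian copula -> correlated uniforms
agg = (np.quantile(m1, u[:, 0])            # map uniforms through each marginal
       + np.quantile(m2, u[:, 1]))
print(np.mean(agg), np.std(agg))           # draws from the aggregate density
```

Because only the marginal draws are needed, any forecasting model per disaggregate can be slotted in, which is the flexibility the abstract emphasizes.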
By: | Im, K S.; Pesaran, M. H.; Shin, Y. |
Abstract: | This article is our personal perspective on the IPS test and the subsequent developments of unit root and cointegration tests in dynamic panels with and without cross-section dependence. In this note, we discuss the main idea behind the test and the publication process that led to Im, Pesaran and Shin (2003). |
Keywords: | Dickey and Fuller statistic, stationarity, panel unit root tests, prevalence of unit roots. |
JEL: | C01 C23 |
Date: | 2023–01–11 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:2310&r=ecm |
By: | Princewill Okoroafor; Vaishnavi Gupta; Robert Kleinberg; Eleanor Goh |
Abstract: | Estimating the empirical distribution of a scalar-valued data set is a basic and fundamental task. In this paper, we tackle the problem of estimating an empirical distribution in a setting with two challenging features. First, the algorithm does not directly observe the data; instead, it only asks a limited number of threshold queries about each sample. Second, the data are not assumed to be independent and identically distributed; instead, we allow for an arbitrary process generating the samples, including an adaptive adversary. These considerations are relevant, for example, when modeling a seller experimenting with posted prices to estimate the distribution of consumers' willingness to pay for a product: offering a price and observing a consumer's purchase decision is equivalent to asking a single threshold query about their value, and the distribution of consumers' values may be non-stationary over time, as early adopters may differ markedly from late adopters. Our main result quantifies, to within a constant factor, the sample complexity of estimating the empirical CDF of a sequence of elements of $[n]$, up to $\varepsilon$ additive error, using one threshold query per sample. The complexity depends only logarithmically on $n$, and our result can be interpreted as extending the existing logarithmic-complexity results for noisy binary search to the more challenging setting where noise is non-stochastic. Along the way to designing our algorithm, we consider a more general model in which the algorithm is allowed to make a limited number of simultaneous threshold queries on each sample. We solve this problem using Blackwell's Approachability Theorem and the exponential weights method. As a side result of independent interest, we characterize the minimum number of simultaneous threshold queries required by deterministic CDF estimation algorithms. |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.05682&r=ecm |
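As a naive baseline — emphatically not the paper's algorithm, whose adaptive queries achieve logarithmic dependence on n — one can ask each sample a single uniformly random threshold query and average the binary answers per threshold:

```python
# One threshold query per sample: each sample reveals only 1{sample <= t}
# for a uniformly random t; average the answers and enforce monotonicity.
import numpy as np

def one_query_cdf(stream, n_vals, rng=None):
    rng = rng or np.random.default_rng(0)
    hits = np.zeros(n_vals)
    asks = np.zeros(n_vals)
    for s in stream:
        t = rng.integers(n_vals)           # uniformly random threshold query
        asks[t] += 1
        hits[t] += (s <= t)                # the single bit we observe about s
    est = np.divide(hits, asks, out=np.zeros(n_vals), where=asks > 0)
    return np.maximum.accumulate(est)      # crude monotone correction

cdf = one_query_cdf(np.random.default_rng(1).integers(0, 100, 50_000), 100)
```

With i.i.d. data this needs many samples per threshold, and against non-stochastic or adaptive streams it degrades further — the gap the paper's adaptive, approachability-based scheme is designed to close.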
By: | Alex Rees-Jones; Ao Wang |
Abstract: | We present a general approach to experimentally testing candidate reference points. This approach builds from Prospect Theory’s prediction that an increase in payoffs is perfectly offset by an equivalent increase in the reference point. Violation of this prediction can be tested with modifications to existing econometric techniques in experiments of a particular design. The resulting approach to testing theories of the reference point is minimally parametric, robust to broad classes of heterogeneity, yet still implementable in comparatively small sample sizes. We demonstrate the application of this approach in an experiment that tests the role of salience in setting reference points. |
JEL: | C14 D9 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:30773&r=ecm |