on Econometrics
By: | Adam Lee; Geert Mesters |
Abstract: | All parameters in linear simultaneous equations models can be identified (up to permutation and scale) if the underlying structural shocks are independent and if at most one of them is Gaussian. Unfortunately, existing inference methods that exploit such a non-Gaussian identifying assumption suffer from size distortions when the true shocks are close to Gaussian. To address this weak non-Gaussian problem, we develop a robust semi-parametric inference method that yields valid confidence intervals for the structural parameters of interest regardless of the distance to Gaussianity. We treat the densities of the structural shocks non-parametrically and construct identification robust tests based on the efficient score function. The approach is shown to be applicable for a broad class of linear simultaneous equations models in cross-sectional and panel data settings. A simulation study and an empirical study for production function estimation highlight the practical relevance of the methodology. |
Keywords: | Weak identification, semiparametric modeling, independent component analysis, simultaneous equations. |
JEL: | C12 C14 C30 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1792&r= |
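The identification argument in the entry above, independent structural shocks of which at most one is Gaussian, is the same one exploited by ICA-based point estimators of simultaneous equations models. As a purely illustrative sketch (not the paper's robust efficient-score inference), the following Python snippet recovers an assumed 2x2 mixing matrix from simulated Laplace shocks using scikit-learn's FastICA; all numbers are placeholders.

```python
# Illustrative only: ICA-based point identification of a linear simultaneous
# equations model with independent non-Gaussian shocks (up to permutation/scale).
# This is NOT the paper's efficient-score inference; it only shows the
# identification idea the paper builds on.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
eps = rng.laplace(size=(n, 2))         # independent, non-Gaussian structural shocks
A = np.array([[1.0, 0.5],
              [0.8, 1.0]])             # assumed "true" mixing matrix
y = eps @ A.T                          # observed data: y_t = A eps_t

ica = FastICA(n_components=2, random_state=0)
shocks_hat = ica.fit_transform(y)      # estimated structural shocks
A_hat = ica.mixing_                    # estimated mixing matrix, up to sign/order
print(np.round(A_hat, 2))
```

Such point estimates deteriorate as the shocks approach Gaussianity, which is precisely the weak non-Gaussianity problem the paper's identification-robust tests are designed to handle.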
By: | Xiaohong Chen (Cowles Foundation, Yale University); Timothy M. Christensen (Department of Economics, New York University); Sid Kankanala (Department of Economics, Yale University) |
Abstract: | We introduce computationally simple, data-driven procedures for estimation and inference on a structural function $h_0$ and its derivatives in nonparametric models using instrumental variables. Our first procedure is a bootstrap-based, data-driven choice of sieve dimension for sieve nonparametric instrumental variables (NPIV) estimators. When implemented with this data-driven choice, sieve NPIV estimators of $h_0$ and its derivatives are adaptive: they converge at the best possible (i.e., minimax) sup-norm rate, without having to know the smoothness of $h_0$, degree of endogeneity of the regressors, or instrument strength. Our second procedure is a data-driven approach for constructing honest and adaptive uniform confidence bands (UCBs) for $h_0$ and its derivatives. Our data-driven UCBs guarantee coverage for $h_0$ and its derivatives uniformly over a generic class of data-generating processes (honesty) and contract at, or within a logarithmic factor of, the minimax sup-norm rate (adaptivity). As such, our data-driven UCBs deliver asymptotic efficiency gains relative to UCBs constructed via the usual approach of undersmoothing. In addition, both our procedures apply to nonparametric regression as a special case. We use our procedures to estimate and perform inference on a nonparametric gravity equation for the intensive margin of firm exports and find evidence against common parameterizations of the distribution of unobserved firm productivity.
Keywords: | Honest and adaptive uniform confidence bands, Minimax sup-norm rate-adaptive estimation, Nonparametric instrumental variables, Bootstrap |
JEL: | C13 C14 C36 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2292&r= |
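For readers unfamiliar with sieve NPIV, the basic estimator underlying the entry above is two-stage least squares carried out in sieve spaces. The sketch below is a minimal fixed-dimension version with illustrative polynomial bases and simulated data; the paper's actual contributions, the bootstrap-based data-driven choice of the sieve dimension and the honest adaptive bands, are not reproduced.

```python
# A minimal fixed-dimension sieve NPIV estimator (2SLS in sieve spaces).
# The data-driven choice of J and the uniform confidence bands from the entry
# above are not implemented; this only shows the estimator they build around.
import numpy as np

def poly_basis(v, dim):
    """Simple polynomial sieve basis [1, v, v^2, ...]."""
    return np.vander(v, N=dim, increasing=True)

def sieve_npiv(y, x, w, J=4, K=6):
    """Estimate h0 in y = h0(x) + u with E[u | w] = 0 and x endogenous."""
    Psi = poly_basis(x, J)                     # basis for h0(x)
    B = poly_basis(w, K)                       # instrument sieve, K >= J
    Pb = B @ np.linalg.pinv(B.T @ B) @ B.T     # projection onto instrument sieve
    coef = np.linalg.pinv(Psi.T @ Pb @ Psi) @ (Psi.T @ Pb @ y)
    return lambda xg: poly_basis(xg, J) @ coef

# Toy design with endogeneity through a common error component
rng = np.random.default_rng(1)
n = 2000
w = rng.uniform(-1, 1, n)
v = rng.normal(size=n)
x = 0.8 * w + 0.5 * v                          # endogenous regressor
y = np.sin(1.5 * x) + v + rng.normal(scale=0.3, size=n)

h_hat = sieve_npiv(y, x, w)
grid = np.linspace(-1, 1, 5)
print(np.round(h_hat(grid), 2), np.round(np.sin(1.5 * grid), 2))
```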
By: | Josep Lluís Carrion-i-Silvestre (AQR-IREA, University of Barcelona); María Dolores Gadea (University of Zaragoza) |
Abstract: | The paper proposes a sequential statistical procedure to test for the presence of level shifts affecting bounded time series, regardless of their order of integration. The paper shows that bounds are relevant for the statistic that assumes that the time series are integrated of order one, whereas they do not affect the limiting distribution of the statistic defined for time series that are integrated of order zero. The paper proposes a union rejection statistic for bounded processes that does not require information about the order of integration of the stochastic processes. The model specification is general enough to allow for structural breaks that can affect the level of the time series and/or the bounds that limit its evolution. Monte Carlo simulations indicate that the procedure works well in finite samples. An empirical application focusing on the evolution of the Swiss franc-euro exchange rate illustrates the usefulness of the proposal.
Keywords: | Structural breaks, bounded processes, changing bounds
JEL: | C12 C22
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:aqr:wpaper:202106&r= |
By: | Szymon Sacher; Laura Battaglia; Stephen Hansen |
Abstract: | Latent variable models are becoming increasingly popular in economics for high-dimensional categorical data such as text and surveys. Often the resulting low-dimensional representations are plugged into downstream econometric models that ignore the statistical structure of the upstream model, which presents serious challenges for valid inference. We show how Hamiltonian Monte Carlo (HMC) implemented with parallelized automatic differentiation provides a computationally efficient, easy-to-code, and statistically robust solution for this problem. Via a series of applications, we show that modeling integrated structure can non-trivially affect inference and that HMC appears to markedly outperform current approaches to inference in integrated models. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.08112&r= |
By: | Milda Norkute (Bank of Lithuania, Vilnius University); Joakim Westerlund (Lund University, Deakin University); Ovidijus Stauskas (Lund University)
Abstract: | In this study, we revisit the factor analytical (FA) approach for (near unit root) dynamic panel data models, whose asymptotic distribution has been shown to be normal and well centered at zero without the need for valid instruments or correction for bias. It is therefore very appealing. The question is: does the appeal of FA, which so far has only been documented for fixed effects panels, extend to panels with incidental trends? This is an important question, because many persistent variables are trending. The answer turns out to be negative. In particular, while consistent, the asymptotic normality of FA breaks down when an exact unit root is present, which limits its applicability.
Keywords: | Dynamic panel data models, Unit root, Factor analytical method. |
JEL: | C12 C13 C33 |
Date: | 2021–07–29 |
URL: | http://d.repec.org/n?u=RePEc:lie:wpaper:91&r= |
By: | Seisho Sato (Graduate School of Economics, University of Tokyo); Naoto Kunitomo (Gendai-Finance-Center, Tokyo Keizai University) |
Abstract: | In this study, we investigate a new smoothing approach to estimate the hidden states of random variables and to handle multiple noisy non-stationary time series data. Kunitomo and Sato (2021) have developed a new method to solve the smoothing problem of hidden random variables, and the resulting separating information maximum likelihood (SIML) method enables the handling of multivariate non-stationary time series. We continue to investigate the filtering problem. In particular, we propose the backward SIML smoothing method and the multi-step smoothing method to address the initial value issue. The resulting filtering methods can be interpreted in the time and frequency domains. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:cfi:fseres:cf517&r= |
By: | David Kohns; Tibor Szendrei |
Abstract: | This paper extends the idea of decoupling shrinkage and sparsity for continuous priors to Bayesian Quantile Regression (BQR). The procedure follows two steps: in the first step, we shrink the quantile regression posterior through state-of-the-art continuous priors, and in the second step, we sparsify the posterior through an efficient variant of the adaptive lasso, the signal adaptive variable selection (SAVS) algorithm. We propose a new variant of SAVS which automates the choice of penalisation through quantile-specific loss functions that are valid in high dimensions. We show in large-scale simulations that our selection procedure decreases bias irrespective of the true underlying degree of sparsity in the data, compared to the un-sparsified regression posterior. We apply our two-step approach to a high-dimensional growth-at-risk (GaR) exercise. The prediction accuracy of the un-sparsified posterior is retained while yielding interpretable quantile-specific variable selection results. Our procedure can be used to communicate to policymakers which variables drive downside risk to the macroeconomy.
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.08498&r= |
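The second (sparsification) step described in the entry above builds on the signal adaptive variable selection post-processing rule. The sketch below implements the standard SAVS of Ray and Bhattacharya (2018), not the quantile-specific variant the paper proposes; the design matrix and posterior mean are illustrative placeholders.

```python
# A minimal sketch of the generic SAVS step used to sparsify a shrinkage
# posterior mean. The quantile-specific penalty proposed in the entry above is
# not reproduced here.
import numpy as np

def savs(beta_post_mean, X):
    """Signal adaptive variable selection: soft-threshold the posterior mean."""
    beta = np.asarray(beta_post_mean, dtype=float)
    col_norm2 = np.sum(X**2, axis=0)                    # ||x_j||^2
    penalty = 1.0 / np.maximum(np.abs(beta), 1e-12)**2  # mu_j = |beta_j|^{-2}
    thresholded = np.maximum(np.abs(beta) * col_norm2 - penalty, 0.0)
    return np.sign(beta) * thresholded / col_norm2

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
beta_bar = np.array([0.9, -0.6, 0.01, 0.005, 0.0])  # shrunk but dense posterior mean
print(savs(beta_bar, X))                            # small coefficients become exactly zero
```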
By: | Silvia De Nicolò; Maria Rosaria Ferrante; Silvia Pacei
Abstract: | Income inequality measures are biased in small samples, generally leading to underestimation. After investigating the nature of the bias, we propose a bias-correction framework for a large class of inequality measures comprising the Gini index and the Generalized Entropy and Atkinson families, accounting for complex survey designs. The proposed methodology is based on Taylor expansions and the generalized linearization method, and does not require any parametric assumption on the income distribution, making it very flexible. Design-based performance evaluation of the suggested correction has been carried out using data taken from the EU-SILC survey. Results show a noticeable bias reduction for all measures. A bootstrap variance estimation proposal and a distributional analysis follow in order to provide a comprehensive overview of the behavior of inequality estimators in small samples. Results on the estimators' distributions show increasing positive skewness and leptokurtosis at decreasing sample sizes, confirming the non-applicability of classical asymptotic results in small samples and suggesting the development of alternative methods of inference.
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.08950&r= |
By: | Jiafeng Chen; David M. Ritzwoller |
Abstract: | This paper studies the estimation of long-term treatment effects through the combination of short-term experimental and long-term observational datasets. In particular, we consider settings in which only short-term outcomes are observed in an experimental sample with exogenously assigned treatment, both short-term and long-term outcomes are observed in an observational sample where treatment assignment may be confounded, and the researcher is willing to assume that the causal relationships between treatment assignment and the short-term and long-term outcomes share the same unobserved confounding variables in the observational sample. We derive the efficient influence function for the average causal effect of treatment on long-term outcomes in each of the models that we consider and characterize the corresponding asymptotic semiparametric efficiency bounds.
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.14405&r= |
By: | Jaime Sevilla; Alexandra Mayn |
Abstract: | The Y-test is a useful tool for detecting missing confounders in the context of a multivariate regression. However, it is rarely used in practice since it requires identifying multiple conditionally independent instruments, which is often impossible. We propose a heuristic test which relaxes the independence requirement. We then show how to apply this heuristic test to a price-demand problem and a firm loan-productivity problem. We conclude that the test is informative when the variables are linearly related with Gaussian additive noise, but it can be misleading in other contexts. Still, we believe that the test can be a useful concept for falsifying a proposed control set.
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.09765&r= |
By: | Jesús Otero (Facultad de Economía, Universidad del Rosario, Colombia); Theodore Panagiotidis (Department of Economics, University of Macedonia, Greece; Rimini Centre for Economic Analysis); Georgios Papapanagiotou (Department of Economics, University of Macedonia, Greece) |
Abstract: | We undertake Monte Carlo simulation experiments to examine the effect of changing the frequency of observations and the data span on the Phillips, Shi, and Yu (2015) Generalised Supremum ADF (GSADF) test for explosive behaviour. We find that when a series is characterised by multiple (periodically collapsing) bubbles, decreasing the frequency of observations is associated with profound power losses for the test. We illustrate the effects of temporal aggregation by examining two real house price datasets, namely the S&P Case-Shiller real house prices and the international real house price indices available at the Federal Reserve Bank of Dallas.
Keywords: | Exuberant/explosive behaviour, bubbles, Monte Carlo, house prices |
JEL: | C15 C22 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:21-13&r= |
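To fix ideas, the snippet below computes a simplified sup-ADF statistic (an expanding-window, right-tailed Dickey-Fuller test without lag augmentation) on a simulated bubble series and on a temporally aggregated version of it. It is only a stylized illustration of the aggregation experiment described above; the paper uses the full PSY GSADF statistic, which takes a double supremum over window start and end points, together with appropriate critical values.

```python
# Stylized illustration: a simplified sup-ADF statistic at two sampling
# frequencies. Not the full GSADF procedure of Phillips, Shi and Yu (2015).
import numpy as np

def adf_stat(y):
    """Right-tailed ADF t-statistic with no lag augmentation (DF regression)."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    coef, _, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

def sadf(y, r0=0.1):
    """Sup-ADF over expanding windows (a simplified PSY-type statistic)."""
    n = len(y)
    w0 = int(np.floor(r0 * n))
    return max(adf_stat(y[:w]) for w in range(w0, n + 1))

rng = np.random.default_rng(3)
n = 400
y = np.zeros(n)
for t in range(1, n):                       # random walk with one explosive episode
    rho = 1.03 if 200 <= t < 240 else 1.0
    y[t] = rho * y[t - 1] + rng.normal()

print("high-frequency sup-ADF:", round(sadf(y), 2))
print("aggregated (every 5th obs) sup-ADF:", round(sadf(y[::5]), 2))
```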
By: | Felix Brunner; Ruben Hipp |
Abstract: | We estimate sectoral spillovers around the Great Moderation with the help of forecast error variance decomposition tables. Obtaining such tables in high dimensions is challenging since they are functions of the estimated vector autoregressive coefficients and the residual covariance matrix. In a simulation study, we compare various regularization methods for both and conduct a comprehensive analysis of their performance. We show that standard estimators of large connectedness tables lead to biased results and high estimation uncertainty, both of which can be mitigated by regularization. To explore possible causes of the Great Moderation, we apply a cross-validated estimator to sectoral spillovers of industrial production in the US from 1972 to 2007. We find that a handful of sectors considerably decreased their outgoing links, which hints at a complementary explanation for the Great Moderation.
Keywords: | Business fluctuations and cycles; Econometric and statistical methods |
JEL: | C52 E27 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:21-37&r= |
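The connectedness tables referred to above are forecast error variance decompositions in the Diebold-Yilmaz tradition. The sketch below shows, for a small VAR(1) with assumed coefficient and covariance matrices, how the generalized FEVD table is formed once estimates of both objects are in hand; the regularized estimation of those objects in high dimensions is the paper's subject and is not reproduced here.

```python
# Generalized FEVD (connectedness) table for a VAR(1), following Pesaran-Shin /
# Diebold-Yilmaz. A and Sigma are illustrative assumptions, not estimates.
import numpy as np

def gfevd_table(A, Sigma, H=10):
    """theta[i, j]: share of i's H-step forecast error variance due to shock j."""
    k = A.shape[0]
    Theta = [np.linalg.matrix_power(A, h) for h in range(H)]  # VAR(1) MA coefficients
    sigma_jj = np.diag(Sigma)
    num = np.zeros((k, k))
    den = np.zeros(k)
    for Th in Theta:
        num += (Th @ Sigma) ** 2 / sigma_jj          # (e_i' Theta_h Sigma e_j)^2 / sigma_jj
        den += np.diag(Th @ Sigma @ Th.T)            # e_i' Theta_h Sigma Theta_h' e_i
    table = num / den[:, None]
    return table / table.sum(axis=1, keepdims=True)  # row-normalize as in Diebold-Yilmaz

A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.1],
              [0.0, 0.3, 0.6]])
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
print(np.round(gfevd_table(A, Sigma), 2))
```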
By: | Wen Su |
Abstract: | In an era when derivatives are becoming increasingly popular, risk management has gradually become the core content of modern finance. In order to study how to accurately estimate the volatility of the S&P 500 index, after introducing the theoretical background of several methods, this paper uses the historical volatility method, the GARCH model and the implied volatility method to estimate the real volatility. At the same time, two ways of adjusting the estimation window, rolling and increasing, are also considered. Unbiasedness and goodness-of-fit tests are used to evaluate these methods. The empirical results show that the implied volatility is the best estimator of the real volatility. The rolling estimation window is recommended when using the historical volatility. By contrast, an increasing estimation window is preferable when using the GARCH model.
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.09273&r= |
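As a minimal illustration of one of the three estimators compared above, the snippet below computes historical volatility under the two window schemes the paper considers, rolling and increasing; the GARCH and implied-volatility estimators are not reproduced, and the window length and data are illustrative assumptions.

```python
# Historical volatility with a rolling (fixed-length) window versus an
# increasing (expanding) window. Data and window length are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
returns = pd.Series(rng.normal(scale=0.01, size=1000))   # stand-in for S&P 500 log returns

window = 60  # trading days; an assumption, not a value from the paper
rolling_vol = returns.rolling(window).std() * np.sqrt(252)     # annualized, fixed window
expanding_vol = returns.expanding(window).std() * np.sqrt(252) # annualized, increasing window

print(pd.DataFrame({"rolling": rolling_vol, "expanding": expanding_vol}).tail())
```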
By: | Barkowski, Scott |
Abstract: | I argue that the interpretation of nonlinear difference-in-differences models depends on the form of the parallel trends assumption. When parallel trends are assumed in the natural scale of the dependent variable, the treatment effect is the interaction effect (a cross-difference). If they are assumed in the transformed scale, it is a single difference. I further note that assuming parallel trends in one scale implies they do not hold in the other, except in special cases. Finally, I consider log-linear (and related) difference-in-differences models and provide a constant form of the treatment effect that is comparable across applications with different parallel trends assumptions.
Keywords: | Difference-in-differences; Nonlinear Models; Model Interpretation; Identification; Probit; Logit; Log-linear; Semilogarithmic |
JEL: | C21 C23 C25 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:108975&r= |
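A small numeric example makes the distinction in the entry above concrete for a probit specification: under parallel trends in the latent index, the effect of interest is the interaction coefficient itself (a single difference on the index scale), whereas under parallel trends in the natural scale it is the cross-difference of predicted probabilities. All coefficient values below are illustrative assumptions.

```python
# Probit DiD: index-scale single difference vs natural-scale cross-difference.
# Coefficients are illustrative placeholders.
from scipy.stats import norm

a, b, g, d = -0.2, 0.3, 0.4, 0.5   # constant, post, treated group, interaction

def p(post, treated):
    """Predicted probability from the probit index."""
    return norm.cdf(a + b * post + g * treated + d * post * treated)

cross_difference = (p(1, 1) - p(0, 1)) - (p(1, 0) - p(0, 0))  # natural-scale contrast
single_difference_index = d                                    # index-scale contrast

print(round(cross_difference, 3), single_difference_index)
```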
By: | Gadat, Sébastien; Bercu, Bernard; Bigot, Jérémie; Siviero, Emilia |
Abstract: | We introduce a new second-order stochastic algorithm to estimate the entropically regularized optimal transport cost between two probability measures. The source measure can be arbitrarily chosen, either absolutely continuous or discrete, while the target measure is assumed to be discrete. To solve the semi-dual formulation of such a regularized and semi-discrete optimal transportation problem, we propose to consider a stochastic Gauss-Newton algorithm that uses a sequence of data sampled from the source measure. This algorithm is shown to be adaptive to the geometry of the underlying convex optimization problem with no important hyperparameter to be accurately tuned. We establish the almost sure convergence and the asymptotic normality of various estimators of interest that are constructed from this stochastic Gauss-Newton algorithm. We also analyze their non-asymptotic rates of convergence for the expected quadratic risk in the absence of strong convexity of the underlying objective function. The results of numerical experiments on simulated data are also reported to illustrate the finite sample properties of this Gauss-Newton algorithm for stochastic regularized optimal transport, and to show its advantages over the use of the stochastic gradient descent, stochastic Newton and ADAM algorithms.
Keywords: | Stochastic optimization; Stochastic Gauss-Newton algorithm; Optimal transport; Entropic regularization; Convergence of random variables.
Date: | 2021–07–20 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:125790&r= |
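For context, the sketch below implements the simpler first-order baseline that the entry above improves upon: averaged stochastic gradient ascent on the semi-dual of the entropically regularized semi-discrete problem (in the spirit of Genevay et al., 2016), with a squared-Euclidean cost. The paper's stochastic Gauss-Newton step, which additionally uses second-order information, is not reproduced; the step size, regularization level and measures are illustrative assumptions.

```python
# Averaged stochastic gradient ascent on the semi-dual of entropic semi-discrete
# optimal transport. A first-order baseline, not the paper's Gauss-Newton method.
import numpy as np

rng = np.random.default_rng(5)
m = 10
y = rng.normal(size=(m, 2))            # discrete target support points
nu = np.full(m, 1.0 / m)               # target weights
eps = 0.1                              # entropic regularization parameter

def grad_semi_dual(x, v):
    """Stochastic gradient of the semi-dual objective at one source sample x."""
    cost = np.sum((y - x) ** 2, axis=1)            # c(x, y_j), squared Euclidean
    logits = (v - cost) / eps + np.log(nu)
    chi = np.exp(logits - logits.max())
    chi /= chi.sum()                               # softmax weights
    return nu - chi

v = np.zeros(m)
v_avg = np.zeros(m)
for t in range(1, 20001):
    x = rng.normal(loc=0.5, size=2)                # sample from the source measure
    v += (1.0 / np.sqrt(t)) * grad_semi_dual(x, v) # SGD ascent step
    v_avg += (v - v_avg) / t                       # Polyak-Ruppert averaging

print(np.round(v_avg, 3))                          # estimated dual potentials
```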
By: | Martin Bruns (University of East Anglia); Michele Piffer (King's College London) |
Abstract: | We extend the Smooth Transition Vector Autoregressive model to allow for identification via a combination of external instruments and sign restrictions, while estimating rather than calibrating the parameters ruling the nonlinearity of the model. We hence offer an alternative to using the recursive identification with selected calibrated parameters, which is the main approach currently available. We use the model to study how the effects of monetary policy shocks change over the business cycle. We show that financial variables, inflation and output respond to a monetary shock more in a recession than in an expansion, in line with the predictions from the financial accelerator literature.
Keywords: | Nonlinear models, proxy SVARs, monetary policy shocks, sign restrictions. |
JEL: | C32 E52 |
Date: | 2021–08–05 |
URL: | http://d.repec.org/n?u=RePEc:uea:ueaeco:2021-07&r= |
By: | Filip Premik (Group for Research in Applied Economics (GRAPE)) |
Abstract: | We investigate the immediate effects of the introduction of a large-scale child benefit program on the labor supply of household members in Poland. Due to the nonrandom eligibility and universal character of the program, standard evaluation estimators are likely to be inconsistent. To address these issues, we propose a novel approach which combines difference-in-difference (DID) propensity score based methods with the covariate balancing propensity score (CBPS) of Imai and Ratkovic (2014). The DID part solves potential problems with non-parallel outcome dynamics in treated and non-treated subpopulations resulting from the non-experimental character of the data, whereas the CBPS is expected to significantly reduce the bias from systematic differences between treated and untreated subpopulations. We also account for potential heterogeneity among households by estimating a range of local average treatment effects which jointly provide a reliable view of the overall impact. We find that the program has a minor impact on labor supply in the periods following its introduction. There is evidence of a small encouraging effect on hours worked by treated mothers of school-age children, both sole and married. Additionally, the program may influence the intra-household division of duties among parents of the youngest children, as suggested by a simultaneous slight decline in participating mothers' probability of working and a small increase in treated fathers' hours worked.
Keywords: | child benefits, labor supply, program evaluation, difference-in-difference estimation, covariate balancing propensity score |
JEL: | C21 C23 I38 J22 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:fme:wpaper:53&r= |
By: | Cem Cakmakli (Koc University); Verda Ozturk (Duke University) |
Abstract: | We propose a joint modeling strategy for timing the joint distribution of the returns and their volatility. We do this by incorporating the potentially asymmetric links into the system of ‘independent’ predictive regressions of returns and volatility, allowing for asymmetric cross-correlations, denoted as instantaneous leverage effects, in addition to cross-autocorrelations between returns and volatility, denoted as intertemporal leverage effects. We show that while the conventional intertemporal leverage effects bear little economic value, our results point to the sizeable value of exploiting the contemporaneous asymmetric link between returns and volatility. Specifically, a mean-variance investor would be willing to pay several hundred basis points to switch from the strategies based on conventional predictive regressions of mean and volatility in isolation of each other to the joint models of returns and its volatility, taking the link between these two moments into account. Moreover, our findings are robust to various effects documented in the literature. |
Keywords: | Economic value, system of equations, leverage timing, market timing, volatility timing. |
JEL: | C30 C52 C53 C58 G11 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:koc:wpaper:2110&r= |
By: | Mathur, Maya B; VanderWeele, Tyler |
Abstract: | Meta-analyses contribute critically to cumulative science, but they can produce misleading conclusions if their constituent primary studies are biased, for example by unmeasured confounding in nonrandomized studies. We provide practical guidance on how meta-analysts can address confounding and other biases that affect studies' internal validity, focusing primarily on sensitivity analyses that help quantify how biased the meta-analysis estimates might be. We review a number of sensitivity analysis methods to do so, especially recent developments that are straightforward to implement and interpret and that use somewhat less stringent statistical assumptions than earlier methods. We give recommendations for how these methods could be applied in practice and illustrate using a previously published meta-analysis. Sensitivity analyses can provide informative quantitative summaries of evidence strength, and we suggest reporting them routinely in meta-analyses of potentially biased studies. This recommendation in no way diminishes the importance of defining study eligibility criteria that reduce bias and of characterizing studies’ risks of bias qualitatively. |
Date: | 2021–07–30 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:v7dtq&r= |
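One simple, widely used quantity from the sensitivity-analysis literature surveyed above is the E-value of VanderWeele and Ding (2017): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need to have with both treatment and outcome to fully explain away an observed association. The meta-analytic extensions discussed in the entry are not reproduced here; the sketch only computes the basic point-estimate E-value for an assumed risk ratio.

```python
# Point-estimate E-value (VanderWeele & Ding, 2017) for a risk ratio.
# The meta-analysis-specific sensitivity analyses from the entry above build on
# this kind of quantity but are not implemented here.
import math

def e_value(rr):
    """E-value for a point estimate on the risk-ratio scale."""
    rr = 1.0 / rr if rr < 1 else rr       # treat protective estimates symmetrically
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(1.8), 2))   # e.g. an observed RR of 1.8 gives an E-value of 3.0
```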
By: | Allan W. Gregory (Queen's University); James McNeil (Dalhousie University); Gregor W. Smith (Queen's University) |
Abstract: | An SVAR in US federal spending, federal revenue, and GDP is a standard setting for the study of the impact of fiscal shocks. An appealing feature of identifying a fiscal shock with an external instrument is that one can find the effects of that shock without fully identifying the SVAR. But we show that fully or almost fully instrumenting the SVAR allows one to overidentify the model by restricting the shock covariances to be zero. In this application the overidentifying restrictions are not rejected. Compared to the unrestricted case the restricted SVAR yields (a) greater precision in estimating impulse response functions and multipliers and (b) smaller estimated effects of government spending shocks on output growth. |
Keywords: | structural vector autoregression, fiscal policy, external instruments |
JEL: | E62 C36 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1461&r= |
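As background for the entry above, the snippet below illustrates the basic external-instrument (proxy) identification step in an SVAR: with a single valid instrument, the impact column of the instrumented shock is proportional to the covariance between the reduced-form residuals and the instrument. The paper's further step, instrumenting all (or almost all) shocks and imposing zero shock covariances as overidentifying restrictions, is not reproduced; all data below are simulated placeholders.

```python
# Proxy-SVAR identification of one shock's impact column from an external
# instrument. Simulated placeholder data; not the paper's full procedure.
import numpy as np

rng = np.random.default_rng(6)
n, k = 500, 3
shocks = rng.normal(size=(n, k))                   # structural shocks
B = np.array([[1.0, 0.0, 0.0],
              [0.4, 1.0, 0.0],
              [0.3, 0.5, 1.0]])                    # true impact matrix (assumed)
u = shocks @ B.T                                   # reduced-form residuals u_t = B eps_t
z = 0.8 * shocks[:, 0] + rng.normal(scale=0.5, size=n)   # instrument for shock 1 only

cov_uz = u.T @ z / n                               # proportional to the first column of B
b1 = cov_uz / cov_uz[0]                            # normalize: unit impact on variable 1
print(np.round(b1, 2), B[:, 0])                    # estimated vs true impact column
```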