on Econometrics
By: | Giovanni Angelini; Giuseppe Cavaliere; Enzo D'Innocenzo; Luca De Angelis |
Abstract: | In this paper we propose a new time-varying econometric model, called Time-Varying Poisson AutoRegressive with eXogenous covariates (TV-PARX), suited to model and forecast time series of counts. We show that the score-driven framework is particularly suitable to recover the evolution of time-varying parameters and provides the required flexibility to model and forecast time series of counts characterized by convoluted nonlinear dynamics and structural breaks. We study the asymptotic properties of the TV-PARX model and prove that, under mild conditions, maximum likelihood estimation (MLE) yields strongly consistent and asymptotically normal parameter estimates. Finite-sample performance and forecasting accuracy are evaluated through Monte Carlo simulations. The empirical usefulness of the time-varying specification of the proposed TV-PARX model is shown by analyzing the number of new daily COVID-19 infections in Italy and the number of corporate defaults in the US. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.11003&r= |
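As a rough illustration of the score-driven updating that the abstract describes, the sketch below simulates a Poisson count series whose log-intensity is updated by the prediction error (the scaled score of the Poisson log-likelihood). It is a minimal toy with hypothetical parameter values and function name, not the authors' TV-PARX implementation, and it omits exogenous covariates.

```python
import math
import random

def simulate_tv_parx(n, omega=0.1, alpha=0.05, beta=0.9, seed=0):
    """Simulate a score-driven Poisson autoregression: the log-intensity
    f_t is updated by the prediction error y_t - lambda_t (the scaled
    score of the Poisson log-likelihood)."""
    rng = random.Random(seed)
    f = omega / (1 - beta)            # start at the unconditional level
    ys, lams = [], []
    for _ in range(n):
        lam = math.exp(f)
        # Poisson draw by CDF inversion (fine for moderate lambda)
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        ys.append(k)
        lams.append(lam)
        f = omega + beta * f + alpha * (k - lam)   # score-driven update
    return ys, lams
```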
By: | Grigory Franguridi |
Abstract: | For the kernel estimator of the quantile density function (the derivative of the quantile function), I show how to perform the boundary bias correction, establish the rate of strong uniform consistency of the bias-corrected estimator, and construct the confidence bands that are asymptotically exact uniformly over the entire domain $[0,1]$. The proposed procedures rely on the pivotality of the studentized bias-corrected estimator and known anti-concentration properties of the Gaussian approximation for its supremum. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.09004&r= |
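The uncorrected baseline version of the estimator discussed above can be sketched by kernel-smoothing the normalized spacings of the order statistics; this illustrative snippet deliberately omits the paper's boundary bias correction and uniform confidence bands, so its accuracy deteriorates near u = 0 and u = 1.

```python
import math
import random

def quantile_density(sample, u, h=0.1):
    """Kernel estimate of the quantile density q(u) = Q'(u), obtained by
    Gaussian-kernel smoothing of the normalized order-statistic spacings
    n*(X_(i) - X_(i-1)). No boundary correction is applied."""
    xs = sorted(sample)
    n = len(xs)
    num = den = 0.0
    for i in range(1, n):
        w = math.exp(-0.5 * ((u - i / n) / h) ** 2)   # Gaussian weight
        num += w * n * (xs[i] - xs[i - 1])
        den += w
    return num / den
```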
By: | Bartolucci, Francesco; Pigini, Claudia; Valentini, Francesco |
Abstract: | We propose a test for state dependence in the fixed-effects ordered logit model, based on the combination of the Quadratic Exponential model with the popular Blow-Up and Cluster procedure, used to estimate the fixed-effects ordered logit model. The test exhibits satisfactory size and power properties in simulation, for data generated according to models where persistence lies either in the latent or observed response variable. |
Keywords: | Conditional Maximum Likelihood; Fixed effects; Ordered panel data; Quadratic Exponential model; State dependence |
JEL: | C12 C23 C25 |
Date: | 2022–07–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:113890&r= |
By: | Jörg Breitung; Dominik Wied |
Abstract: | This paper considers a linear regression model with an endogenous regressor which is not normally distributed. It is shown that the corresponding coefficient can be consistently estimated without external instruments by adding a rank-based transformation of the regressor to the model and performing standard OLS estimation. In contrast to other approaches, our nonparametric control function approach does not rely on a conformably specified copula. Furthermore, the approach allows for the presence of additional exogenous regressors which may be (linearly) correlated with the endogenous regressor(s). Consistency and further asymptotic properties of the estimator are considered and the estimator is compared with copula based approaches by means of Monte Carlo simulations. An empirical application on wage data of the US current population survey demonstrates the usefulness of our method. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.09246&r= |
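A minimal sketch of the idea, under an assumed data-generating process and with hypothetical function names: the outcome is regressed by OLS on the endogenous regressor plus its normal-scores (rank-based) transform, which acts as a nonparametric control function.

```python
import random
from statistics import NormalDist

def ols(X, y):
    """OLS coefficients via the normal equations (Gaussian elimination)."""
    n, k = len(y), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for j in range(k):                      # forward elimination with pivoting
        p = max(range(j, k), key=lambda r: abs(A[r][j]))
        A[j], A[p] = A[p], A[j]
        c[j], c[p] = c[p], c[j]
        for r in range(j + 1, k):
            f = A[r][j] / A[j][j]
            for s in range(j, k):
                A[r][s] -= f * A[j][s]
            c[r] -= f * c[j]
    beta = [0.0] * k
    for j in reversed(range(k)):            # back substitution
        beta[j] = (c[j] - sum(A[j][s] * beta[s] for s in range(j + 1, k))) / A[j][j]
    return beta

def rank_control_ols(x, y):
    """Regress y on [1, x, g(x)], where g(x) is the normal-scores (rank)
    transform of x; g(x) serves as a nonparametric control function."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    rank = [0] * n
    for r, i in enumerate(order):
        rank[i] = r + 1
    nd = NormalDist()
    g = [nd.inv_cdf(rank[i] / (n + 1)) for i in range(n)]
    return ols([[1.0, x[i], g[i]] for i in range(n)], y)
```

In a simulation where the endogeneity runs through the normal score of a non-normally distributed regressor, plain OLS is badly biased while the augmented regression recovers the true slope.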
By: | Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton) |
Abstract: | T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions. |
Keywords: | Asymptotic expansions, Confidence interval construction, Explosive autoregression, LIML, Reduced rank regression, Simultaneous equation models, Weak identification regression, MA unit root, Trend regression, Wald statistic |
JEL: | C23 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2333&r= |
By: | Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton) |
Abstract: | Limit theory is developed for least squares regression estimation of a model involving time trend polynomials and a moving average error process with a unit root. Models with these features can arise from data manipulation such as overdifferencing and model features such as the presence of multicointegration. The impact of such features on the asymptotic equivalence of least squares and generalized least squares is considered. Problems of rank deficiency that are induced asymptotically by the presence of time polynomials in the regression are also studied, focusing on the impact that singularities have on hypothesis testing using Wald statistics and matrix normalization. The paper is largely pedagogical but contains new results, notational innovations, and procedures for dealing with rank deficiency that are useful in cases of wider applicability. |
Keywords: | Asymptotic deficiency, Asymptotic equivalence, Hypothesis testing, Least squares regression, MA unit root, Trend regression, Wald statistic |
JEL: | C23 |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2332&r= |
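One claim in the abstract, that data manipulation such as overdifferencing can produce an MA unit root, is easy to check numerically: differencing a trend-stationary series y_t = a + b*t + e_t yields b + e_t - e_{t-1}, an MA(1) with a unit root whose lag-1 autocorrelation is -1/2. The check below is a stdlib-only illustration, not taken from the paper.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, n))
    den = sum((xi - m) ** 2 for xi in xs)
    return num / den

rng = random.Random(0)
# trend-stationary series y_t = 2 + 0.1*t + e_t with white-noise e_t
e = [rng.gauss(0, 1) for _ in range(5001)]
y = [2 + 0.1 * t + e[t] for t in range(5001)]
# overdifferencing: dy_t = 0.1 + e_t - e_{t-1}, an MA(1) with a unit root
dy = [y[t] - y[t - 1] for t in range(1, 5001)]
rho1 = lag1_autocorr(dy)
```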
By: | John Gardner |
Abstract: | A recent literature has shown that when adoption of a treatment is staggered and average treatment effects vary across groups and over time, difference-in-differences regression does not identify an easily interpretable measure of the typical effect of the treatment. In this paper, I extend this literature in two ways. First, I provide some simple underlying intuition for why difference-in-differences regression does not identify a group×period average treatment effect. Second, I propose an alternative two-stage estimation framework, motivated by this intuition. In this framework, group and period effects are identified in a first stage from the sample of untreated observations, and average treatment effects are identified in a second stage by comparing treated and untreated outcomes, after removing these group and period effects. The two-stage approach is robust to treatment-effect heterogeneity under staggered adoption, and can be used to identify a host of different average treatment effect measures. It is also simple, intuitive, and easy to implement. I establish the theoretical properties of the two-stage approach and demonstrate its effectiveness and applicability using Monte Carlo evidence and an example from the literature. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.05943&r= |
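The two-stage logic described in the abstract can be sketched on a toy staggered-adoption panel: stage one fits additive group and period effects on the untreated observations (here by simple alternating means), and stage two averages the resulting residuals over treated observations. This is an illustrative simplification with hypothetical names, not the author's estimator or its inference procedure.

```python
def two_stage_did(data, iters=500):
    """data: list of (group, period, treated, y) tuples.
    Stage 1: fit additive group + period effects on untreated rows by
    alternating means. Stage 2: average stage-1 residuals over treated
    rows (an overall ATT under staggered adoption)."""
    groups = sorted({g for g, _, _, _ in data})
    periods = sorted({t for _, t, _, _ in data})
    a = {g: 0.0 for g in groups}   # group effects
    b = {t: 0.0 for t in periods}  # period effects
    untreated = [(g, t, y) for g, t, d, y in data if not d]
    for _ in range(iters):
        for g in groups:
            resid = [y - b[t] for gg, t, y in untreated if gg == g]
            a[g] = sum(resid) / len(resid)
        for t in periods:
            resid = [y - a[g] for g, tt, y in untreated if tt == t]
            b[t] = sum(resid) / len(resid)
    treated = [y - a[g] - b[t] for g, t, d, y in data if d]
    return sum(treated) / len(treated)
```

On noiseless staggered data with heterogeneous effects, the procedure recovers the average effect across treated cells exactly, which is precisely where a single two-way FE regression coefficient can go wrong.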
By: | Undral Byambadalai |
Abstract: | This paper studies identification and inference of the welfare gain that results from switching from one policy (such as the status quo policy) to another policy. The welfare gain is not point identified in general when data are obtained from an observational study or a randomized experiment with imperfect compliance. I characterize the sharp identified region of the welfare gain and obtain bounds under various assumptions on the unobservables with and without instrumental variables. Estimation and inference of the lower and upper bounds are conducted using orthogonalized moment conditions to deal with the presence of infinite-dimensional nuisance parameters. I illustrate the analysis by considering hypothetical policies of assigning individuals to job training programs using experimental data from the National Job Training Partnership Act Study. Monte Carlo simulations are conducted to assess the finite sample performance of the estimators. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.04314&r= |
By: | Dennis Lim; Wenjie Wang; Yichong Zhang |
Abstract: | We consider a linear combination of jackknife Anderson-Rubin (AR) and orthogonalized Lagrange multiplier (LM) tests for inference in IV regressions with many weak instruments and heteroskedasticity. We choose the weight in the linear combination based on a decision-theoretic rule that is adaptive to the identification strength. Under both weak and strong identification, the proposed linear combination test controls asymptotic size and is admissible. Under strong identification, we further show that our linear combination test is the uniformly most powerful test against local alternatives among all tests that are constructed based on the jackknife AR and LM tests only and invariant to sign changes. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.11137&r= |
By: | Helton Saulo; Roberto Vila; Giovanna V. Borges; Marcelo Bourguignon |
Abstract: | Univariate normal regression models are statistical tools widely applied in many areas of economics. Nevertheless, income data have asymmetric behavior and are best modeled by non-normal distributions. The modeling of income plays an important role in determining workers' earnings, as well as being an important research topic in labor economics. Thus, the objective of this work is to propose parametric quantile regression models based on two important asymmetric income distributions, namely, the Dagum and Singh-Maddala distributions. The proposed quantile models are based on reparameterizations of the original distributions by inserting a quantile parameter. We present the reparameterizations, some properties of the distributions, and the quantile regression models with their inferential aspects. We proceed with Monte Carlo simulation studies that evaluate the performance of maximum likelihood estimation and analyze the empirical distribution of two types of residuals. The Monte Carlo results show that both models behave as expected. We apply the proposed quantile regression models to a household income data set provided by the National Institute of Statistics of Chile and show that both models fit the data well. We therefore conclude that the results favor the use of the Singh-Maddala and Dagum quantile regression models for positive, asymmetric data such as income data. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.06558&r= |
By: | Jia Li (Singapore Management University); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton); Shuping Shi (Macquarie University); Jun Yu (Singapore Management University) |
Abstract: | This paper explores weak identification issues arising in commonly used models of economic and financial time series. Two highly popular configurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identification-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identification in many realized volatility and trading volume series. The identification-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data. |
Keywords: | Realized volatility, Weak identification, Disjoint confidence sets, Trading volume, Long memory |
JEL: | C12 C13 C58 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2334&r= |
By: | Laura Coroneo; Fabrizio Iacone; Fabio Profumo |
Abstract: | We apply fixed-b and fixed-m asymptotics to tests of equal predictive accuracy and of encompassing for density forecasts. We verify in an original Monte Carlo design that fixed-smoothing asymptotics delivers correctly sized tests in this framework, even when only a small number of out-of-sample observations is available. We use the proposed density forecast comparison tests with fixed-smoothing asymptotics to assess the predictive ability of density forecasts from the European Central Bank's Survey of Professional Forecasters (ECB SPF). |
Keywords: | density forecast comparison, ECB SPF, Diebold-Mariano test, forecast encompassing, fixed-smoothing asymptotics |
JEL: | C12 C22 E17 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:22/03&r= |
By: | Guillaume Allaire Pouliot |
Abstract: | We produce methodology for regression analysis when the geographic locations of the independent and dependent variables do not coincide, in which case we speak of misaligned data. We develop and investigate two complementary methods for regression analysis with misaligned data that circumvent the need to estimate or specify the covariance of the regression errors. We carry out a detailed reanalysis of Maccini and Yang (2009) and find economically significant quantitative differences but sustain most qualitative conclusions. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.04082&r= |
By: | Anton Skrobotov |
Abstract: | This review discusses methods of testing for explosive bubbles in time series. A large number of recently developed testing methods under various assumptions on the error innovations are covered. The review also considers the methods for dating explosive (bubble) regimes. Special attention is devoted to time-varying volatility in the errors. Moreover, the modelling of possible relationships between time series with explosive regimes is discussed. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.08249&r= |
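A minimal version of the recursive (SADF-type) testing idea surveyed in the review can be sketched with a forward-expanding ADF regression: compute the ADF t-statistic over expanding samples and take the supremum. Critical values for such statistics are nonstandard and omitted here, so the snippet is purely illustrative.

```python
import random

def adf_t(y):
    """ADF t-statistic (no lag augmentation): regress dy_t on a constant
    and y_{t-1}, and return the t-ratio on y_{t-1}."""
    n = len(y) - 1
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(n)]
    mx, mdy = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((x[i] - mx) * (dy[i] - mdy) for i in range(n))
    b = sxy / sxx
    a = mdy - b * mx
    sse = sum((dy[i] - a - b * x[i]) ** 2 for i in range(n))
    se = (sse / (n - 2) / sxx) ** 0.5
    return b / se

def sup_adf(y, min_window=30):
    """Forward recursive (SADF-type) statistic: sup of the ADF t-ratio
    over expanding samples anchored at the first observation."""
    return max(adf_t(y[:w]) for w in range(min_window, len(y) + 1))
```

Applied to a simulated mildly explosive series versus a pure random walk, the statistic separates the two sharply.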
By: | Tommasi, Denni (University of Bologna); Zhang, Lina (University of Amsterdam) |
Abstract: | In cases of non-compliance with a prescribed treatment, estimates of causal effects typically rely on instrumental variables. However, when participation is also misreported, this approach can be severely biased. We provide an instrumental variable method that researchers can use to identify the true heterogeneous treatment effects in data that include both non-compliance and misclassification of treatment status. Our method can be used regardless of whether the treatment is misclassified because it is missing at random, missing not at random, or generally mismeasured. We conclude by using a dedicated Stata command, ivreg2m, to assess the returns to education in the United Kingdom. |
Keywords: | treatment effect, causality, non-differential misclassification, weighted average of LATEs, endogeneity, program evaluation |
JEL: | C14 C21 C26 C35 C51 |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp15427&r= |
By: | Guillaume Allaire Pouliot; Zhen Xie |
Abstract: | We provide an analytical characterization of the model flexibility of the synthetic control method (SCM) in the familiar form of degrees of freedom. We obtain estimable information criteria. These may be used to circumvent cross-validation when selecting either the weighting matrix in the SCM with covariates, or the tuning parameter in model averaging or penalized variants of SCM. We assess the impact of car license rationing in Tianjin and make a novel use of SCM; while a natural match is available, it and other donors are noisy, inviting the use of SCM to average over approximately matching donors. The very large number of candidate donors calls for model averaging or penalized variants of SCM and, with short pre-treatment series, model selection via information criteria outperforms selection via cross-validation. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.02943&r= |
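Plain synthetic-control weights, the object whose flexibility the paper characterizes, can be sketched as a simplex-constrained least-squares fit to the pre-treatment outcomes. The exponentiated-gradient solver and all parameter values below are illustrative choices, not the paper's information-criteria machinery.

```python
import math
import random

def scm_weights(y1, Y0, iters=20000, lr=0.01):
    """Synthetic-control weights on the simplex (w_j >= 0, sum to 1),
    minimizing the pre-treatment squared error ||y1 - Y0 w||^2 by
    exponentiated gradient descent (renormalizing keeps w on the simplex).
    y1: treated unit's pre-treatment outcomes; Y0: T x J donor matrix."""
    T, J = len(Y0), len(Y0[0])
    w = [1.0 / J] * J
    for _ in range(iters):
        fit = [sum(Y0[t][j] * w[j] for j in range(J)) for t in range(T)]
        grad = [sum(2.0 * (fit[t] - y1[t]) * Y0[t][j] for t in range(T))
                for j in range(J)]
        w = [w[j] * math.exp(-lr * grad[j]) for j in range(J)]
        s = sum(w)
        w = [wj / s for wj in w]
    return w
```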
By: | Regis Barnichon; Geert Mesters |
Abstract: | The evaluation of macroeconomic policy decisions has traditionally relied on the formulation of a specific economic model. In this work, we show that two statistics are sufficient to detect, and often even correct, non-optimal policies, i.e., policies that do not minimize the loss function. The two sufficient statistics are (i) the effects of policy shocks on the policy objectives, and (ii) forecasts for the policy objectives conditional on the policy decision. Both statistics can be estimated without relying on a specific model. We illustrate the method by studying US monetary policy decisions. |
Keywords: | optimal policy; impulse responses; forecasting |
JEL: | C14 C32 E32 E52 |
Date: | 2022–04–27 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedfwp:94570&r= |
By: | Amanda Coston; Edward H. Kennedy |
Abstract: | Historically used in settings where the outcome is rare or data collection is expensive, outcome-dependent sampling is relevant to many modern settings where data is readily available for a biased sample of the target population, such as public administrative data. Under outcome-dependent sampling, common effect measures such as the average risk difference and the average risk ratio are not identified, but the conditional odds ratio is. Aggregation of the conditional odds ratio is challenging since summary measures are generally not identified. Furthermore, the marginal odds ratio can be larger (or smaller) than all conditional odds ratios. This so-called non-collapsibility of the odds ratio is avoidable if we use an alternative aggregation to the standard arithmetic mean. We provide a new definition of collapsibility that makes this choice of aggregation method explicit, and we demonstrate that the odds ratio is collapsible under geometric aggregation. We describe how to partially identify, estimate, and do inference on the geometric odds ratio under outcome-dependent sampling. Our proposed estimator is based on the efficient influence function and therefore has doubly robust-style properties. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.09016&r= |
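The non-collapsibility that motivates the paper can be verified in a small hypothetical logistic model: the conditional odds ratio is identical in both covariate strata, yet the marginal odds ratio (with the covariate averaged out) differs, attenuated toward 1. The paper's geometric aggregation and outcome-dependent-sampling estimator are not reproduced here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    return p / (1.0 - p)

# Hypothetical model: logit P(Y=1 | X, Z) = -1 + 1.5*X + 2*Z, with binary
# exposure X and a balanced binary covariate Z independent of X.
b = 1.5
cond_or = {z: odds(sigmoid(-1 + b + 2 * z)) / odds(sigmoid(-1 + 2 * z))
           for z in (0, 1)}                          # exp(b) in each stratum
p1 = 0.5 * (sigmoid(-1 + b) + sigmoid(-1 + b + 2))   # P(Y=1 | X=1), Z averaged
p0 = 0.5 * (sigmoid(-1) + sigmoid(-1 + 2))           # P(Y=1 | X=0), Z averaged
marg_or = odds(p1) / odds(p0)                        # marginal odds ratio
```

Since odds(sigmoid(z)) = exp(z) exactly, both conditional odds ratios equal exp(1.5), while the marginal odds ratio computed from the Z-averaged risks is strictly smaller.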
By: | Gregory Benton; Wesley J. Maddox; Andrew Gordon Wilson |
Abstract: | A broad class of stochastic volatility models are defined by systems of stochastic differential equations. While these models have seen widespread success in domains such as finance and statistical climatology, they typically lack an ability to condition on historical data to produce a true posterior distribution. To address this fundamental limitation, we show how to re-cast a class of stochastic volatility models as a hierarchical Gaussian process (GP) model with specialized covariance functions. This GP model retains the inductive biases of the stochastic volatility model while providing the posterior predictive distribution given by GP inference. Within this framework, we take inspiration from well studied domains to introduce a new class of models, Volt and Magpie, that significantly outperform baselines in stock and wind speed forecasting, and naturally extend to the multitask setting. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.06544&r= |
By: | Kara Karpman (Department of Statistics and Data Science, Cornell University); Samriddha Lahiry (Department of Statistics and Data Science, Cornell University); Diganta Mukherjee (Sampling and Official Statistics Unit, Indian Statistical Institute Kolkata); Sumanta Basu (Department of Statistics and Data Science, Cornell University) |
Abstract: | In the post-crisis era, financial regulators and policymakers are increasingly interested in data-driven tools to measure systemic risk and to identify systemically important firms. Granger Causality (GC) based techniques to build networks among financial firms using time series of their stock returns have received significant attention in recent years. Existing GC network methods model conditional means, and do not distinguish between connectivity in lower and upper tails of the return distribution - an aspect crucial for systemic risk analysis. We propose statistical methods that measure connectivity in the financial sector using system-wide tail-based analysis and are able to distinguish between connectivity in lower and upper tails of the return distribution. This is achieved using bivariate and multivariate GC analysis based on regular and Lasso penalized quantile regressions, an approach we call quantile Granger causality (QGC). By considering centrality measures of these financial networks, we can assess the build-up of systemic risk and identify risk propagation channels. We provide an asymptotic theory of QGC estimators under a quantile vector autoregressive model, and show its benefit over regular GC analysis on simulated data. We apply our method to the monthly stock returns of large U.S. firms and demonstrate that lower tail based networks can detect systemically risky periods in historical data with higher accuracy than mean-based networks. In a similar analysis of large Indian banks, we find that upper and lower tail networks convey different information and have the potential to distinguish between periods of high connectivity that are governed by positive vs. negative news in the market. |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2207.10705&r= |