nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒04‒27
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Testing for the appropriate level of clustering in linear regression models By James G. MacKinnon; Morten Ørregaard Nielsen; Matthew D. Webb
  2. Sampling properties of the Bayesian posterior mean with an application to WALS estimation By Giuseppe De Luca; Jan R. Magnus; Franco Peracchi
  3. Modelling Non-stationary 'Big Data' By Jennifer Castle; Jurgen Doornik; David Hendry
  4. An Indirect Proof for the Asymptotic Properties of VARMA Model Estimators By Guy Melard
  5. Robust Discovery of Regression Models By Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
  6. On event studies and distributed-lags in two-way fixed effects models: Identification, equivalence, and generalization By Schmidheiny, Kurt; Siegloch, Sebastian
  7. Analyzing Differences between Scenarios By David F. Hendry; Felix Pretis
  8. Some Unpleasant Markup Arithmetic: Production Function Elasticities and their Estimation from Production Data By Steve Bond; Arshia Hashemi; Greg Kaplan; Piotr Zoch
  9. Decomposing the Fiscal Multiplier By James Cloyne; Òscar Jordà; Alan M. Taylor
  10. Spanning analysis of stock market anomalies under Prospect Stochastic Dominance By Stelios Arvanitis; O. Scaillet; Nikolas Topaloglou
  11. Beyond Cobb-Douglas: Flexibly Estimating Matching Functions with Unobserved Matching Efficiency By Fabian Lange; Theodore Papageorgiou
  12. The Triple Difference Estimator By Olden, Andreas; Møen, Jarle
  13. Identification of Monetary Policy Shocks from FOMC Transcripts By Nataliia Ostapenko
  14. Invertibility Condition of the Fisher Information Matrix of a VARMAX Process and the Tensor Sylvester Matrix By André Klein; Guy Melard

  1. By: James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University and CREATES); Matthew D. Webb (Carleton University)
    Abstract: Reliable inference with clustered data has received a great deal of attention in recent years. The overwhelming majority of this research assumes that the cluster structure is known. This assumption is very strong, because there are often several possible ways in which a dataset could be clustered. We propose two tests for the correct level of clustering. One test focuses on inference about a single coefficient, and the other on inference about two or more coefficients. We also prove the asymptotic validity of a wild bootstrap implementation. The proposed tests work for a null hypothesis of either no clustering or "fine" clustering against alternatives of "coarser" clustering. We also propose a sequential testing procedure to determine the appropriate level of clustering. Simulations suggest that the bootstrap tests perform very well under the null hypothesis and can have excellent power. An empirical example suggests that using our tests leads to sensible inferences.
    Keywords: CRVE, grouped data, clustered data, cluster-robust variance estimator, robust inference, wild bootstrap, wild cluster bootstrap
    JEL: C15 C21 C23
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1428&r=all
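    Sketch: a minimal illustration (in Python, a stylized stand-in for the authors' tests, not their implementation) of comparing cluster-robust variance estimates at a "fine" and a "coarse" level, with a wild cluster bootstrap at the fine level under the null; all data and the choice of statistic are assumptions for illustration.

      import numpy as np

      def crve(X, resid, clusters):
          # Cluster-robust "sandwich" variance estimator (Liang-Zeger).
          bread = np.linalg.inv(X.T @ X)
          meat = np.zeros((X.shape[1], X.shape[1]))
          for g in np.unique(clusters):
              sg = X[clusters == g].T @ resid[clusters == g]
              meat += np.outer(sg, sg)
          return bread @ meat @ bread

      rng = np.random.default_rng(0)
      n, n_fine = 1000, 50
      fine = np.repeat(np.arange(n_fine), n // n_fine)  # 50 fine clusters of 20
      coarse = fine // 5                                # 10 coarse clusters
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      u = y - X @ beta
      # Statistic: coarse-level minus fine-level variance estimate of the slope.
      tau = crve(X, u, coarse)[1, 1] - crve(X, u, fine)[1, 1]

      # Wild cluster bootstrap at the fine level, valid under the null of
      # fine clustering: flip residual signs cluster by cluster.
      taus = []
      for _ in range(999):
          v = rng.choice([-1.0, 1.0], size=n_fine)[fine]
          yb = X @ beta + u * v
          ub = yb - X @ np.linalg.lstsq(X, yb, rcond=None)[0]
          taus.append(crve(X, ub, coarse)[1, 1] - crve(X, ub, fine)[1, 1])
      print(np.mean(np.abs(taus) >= np.abs(tau)))  # bootstrap p-value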
  2. By: Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam); Franco Peracchi (Georgetown University and EIEF)
    Abstract: Many statistical and econometric learning methods rely on Bayesian ideas, often applied or reinterpreted in a frequentist setting. Two leading examples are shrinkage estimators and model averaging estimators, such as weighted-average least squares (WALS). In many instances, the accuracy of these learning methods in repeated samples is assessed using the variance of the posterior distribution of the parameters of interest given the data. This may be permissible when the sample size is large because, under the conditions of the Bernstein–von Mises theorem, the posterior variance agrees asymptotically with the frequentist variance. In finite samples, however, things are less clear. In this paper we explore this issue by first considering the frequentist properties (bias and variance) of the posterior mean in the important case of the normal location model, which consists of a single observation on a univariate Gaussian distribution with unknown mean and known variance. Based on these results, we derive new estimators of the frequentist bias and variance of the WALS estimator in finite samples. We then study the finite-sample performance of the proposed estimators by a Monte Carlo experiment with a design derived from a real-data application on the effect of abortion on crime rates.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:eie:wpaper:2003&r=all
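    Sketch: the normal location model with a conjugate normal prior (an assumption for illustration; the paper's WALS priors differ), contrasting the constant posterior variance with the frequentist bias and variance of the posterior mean in repeated samples.

      import numpy as np

      # One observation x ~ N(theta, 1) with prior theta ~ N(0, tau2):
      # the posterior mean is shrink * x with shrink = tau2 / (1 + tau2).
      rng = np.random.default_rng(0)
      theta, tau2, reps = 1.0, 2.0, 100_000
      shrink = tau2 / (1.0 + tau2)

      x = theta + rng.normal(size=reps)     # repeated samples
      post_mean = shrink * x

      print(post_mean.mean() - theta)       # frequentist bias ~ -theta/(1+tau2)
      print(post_mean.var())                # frequentist variance ~ shrink**2
      print(shrink)                         # posterior variance: tau2/(1+tau2)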
  3. By: Jennifer Castle; Jurgen Doornik; David Hendry
    Abstract: Seeking substantive relationships among vast numbers of spurious connections when modelling Big Data requires an appropriate approach. Big Data are useful if they can increase the probability that the data generation process is nested in the postulated model, increase the power of specification and mis-specification tests, and yet do not raise the chances of adventitious significance. Simply choosing the best-fitting equation, or trying hundreds of empirical fits and selecting a preferred one, perhaps contradicted by others that go unreported, is not going to lead to a useful outcome. Wide-sense non-stationarity (including both distributional shifts and integrated data) must be taken into account. The paper discusses the use of principal components analysis to identify cointegrating relations as a route to handling that aspect of non-stationary big data, along with saturation to handle distributional shifts, and models the monthly UK unemployment rate, using both macroeconomic and Google Trends data, searching over 3000 explanatory variables and yet identifying a parsimonious, well-specified and theoretically interpretable model specification.
    Keywords: Cointegration; Big Data; Model Selection; Outliers; Indicator Saturation; Autometrics
    JEL: C51 Q54
    Date: 2020–04–15
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:905&r=all
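    Sketch: principal components on two simulated I(1) series that share one stochastic trend; the smallest-eigenvalue direction approximates the cointegrating combination. Illustrative only, not the paper's Autometrics-based procedure, and the data-generating process is made up.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 2000
      trend = np.cumsum(rng.normal(size=T))     # shared stochastic trend
      y1 = trend + rng.normal(size=T)
      y2 = 0.5 * trend + rng.normal(size=T)
      Y = np.column_stack([y1, y2])
      Yc = Y - Y.mean(axis=0)

      eigval, eigvec = np.linalg.eigh(Yc.T @ Yc / T)
      w = eigvec[:, 0]                    # smallest-variance direction
      print(w)                            # ~ (1, -2)/sqrt(5) up to sign
      print(np.std(Yc @ w), np.std(y1))   # combination is far less variable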
  4. By: Guy Melard
    Abstract: In this paper, we establish, in an indirect way, strong consistency and asymptotic normality of a Gaussian quasi-maximum likelihood estimator for the parameters of a causal, invertible, and identifiable vector autoregressive-moving average (VARMA) model. The proof is based on similar results for a much wider class of VARMA models with time-dependent coefficients, thus in the context of non-stationary and non-homoscedastic time series. For that reason, the proof avoids spectral analysis arguments and does not make use of ergodicity. The results are of course also applicable to ARMA models.
    Keywords: non-stationary process; multivariate time series; time-varying models; identifiability; ARMA models
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/304272&r=all
  5. By: Jennifer L. Castle (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Magdalen College, University of Oxford); Jurgen A. Doornik (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Climate Econometrics, Nuffield College, University of Oxford); David F. Hendry (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Climate Econometrics, Nuffield College, University of Oxford)
    Abstract: Since complete and correct a priori specifications of models for observational data never exist, model selection is unavoidable in that context. The target of selection needs to be the process generating the data for the variables under analysis, while retaining the objective of the study, often a theory-based formulation. Successful selection requires robustness against many potential problems jointly, including outliers and shifts; omitted variables; incorrect distributional shape; non-stationarity; misspecified dynamics; and non-linearity, as well as inappropriate exogeneity assumptions. The aim is to seek parsimonious final representations that retain the relevant information, are well specified, encompass alternative models, and evaluate the validity of the study. Our approach to doing so inevitably leads to more candidate variables than observations, handled by iteratively switching between contracting and expanding multi-path searches, here programmed in Autometrics. We investigate the ability of indicator saturation to discriminate between measurement errors and outliers, between outliers and large observations arising from non-linear responses (illustrated by artificial data), and to identify apparent outliers due to alternative distributional assumptions. We illustrate the approach by exploring empirical models of the Boston housing market and of UK inflation (both tackling outliers and non-linearities that can distort other estimation methods). We re-analyze the ‘local instability’ in the robust method of least median of squares shown by Hettmansperger and Sheather (1992), using indicator saturation to explain their findings.
    Keywords: Model Selection; Robustness; Outliers; Location Shifts; Indicator Saturation; Autometrics.
    JEL: C51 C22
    Date: 2020–04–15
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:2004&r=all
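    Sketch: split-sample impulse-indicator saturation on simulated data with one planted outlier; a bare-bones stand-in for the multi-path search in Autometrics, with the data, window split, and t-threshold all assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 100
      x = rng.normal(size=T)
      y = 1.0 + 0.5 * x + rng.normal(size=T)
      y[40] += 6.0                       # planted outlier

      def retained(idx):
          # Fit y on (1, x) plus an impulse dummy for each index in idx;
          # keep the dummies with |t| > 2.5.
          D = np.zeros((T, len(idx)))
          D[idx, np.arange(len(idx))] = 1.0
          Z = np.column_stack([np.ones(T), x, D])
          b = np.linalg.lstsq(Z, y, rcond=None)[0]
          u = y - Z @ b
          s2 = (u @ u) / (T - Z.shape[1])
          se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
          t = b[2:] / se[2:]
          return idx[np.abs(t) > 2.5]

      halves = [np.arange(0, T // 2), np.arange(T // 2, T)]
      kept = np.concatenate([retained(h) for h in halves])
      print(retained(kept))              # should flag observation 40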
  6. By: Schmidheiny, Kurt; Siegloch, Sebastian
    Abstract: We discuss important properties and pitfalls of panel-data event study designs. We derive three main results. First, binning of effect window endpoints is a practical necessity and key for identification of dynamic treatment effects. Second, event study designs with binned endpoints and distributed-lag models are numerically identical, leading to the same parameter estimates after correct reparametrization. Third, classic dummy variable event study designs can be generalized to models that account for multiple events of different sign and intensity of the treatment, which are particularly interesting for research in labor economics and public finance. We show the practical relevance of our methodological points in a replication study.
    Keywords: event study, distributed-lag, applied microeconomics, credibility revolution
    JEL: C23 C51 H00 J08
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:20017&r=all
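    Sketch: constructing event-time dummies with binned endpoints, the paper's first point; the effect window [-2, 3], the normalization at event time -1, and the small made-up panel are assumptions for illustration.

      import numpy as np
      import pandas as pd

      df = pd.DataFrame({
          "unit": np.repeat(np.arange(5), 10),
          "year": np.tile(np.arange(2000, 2010), 5),
          "event_year": np.repeat([2003, 2005, 2004, 2007, 2002], 10),
      })
      lo, hi = -2, 3                     # effect-window endpoints
      df["event_time"] = (df["year"] - df["event_year"]).clip(lo, hi)  # binning
      dummies = pd.get_dummies(df["event_time"], prefix="e")
      dummies = dummies.drop(columns="e_-1")   # normalize at event time -1
      print(dummies.sum())               # endpoint bins absorb distant periods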
  7. By: David F. Hendry (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Climate Econometrics, Nuffield College, University of Oxford); Felix Pretis (University of Victoria, Canada)
    Abstract: Comparisons between alternative scenarios are used in many disciplines from macroeconomics to climate science to help with planning future responses. Differences between scenario paths are often interpreted as signifying likely differences between outcomes that would materialise in reality. However, even when using correctly specified statistical models of the in-sample data generation process, additional conditions are needed to sustain inferences about differences between scenario paths. We consider two questions in scenario analyses: First, does testing the difference between scenarios yield additional insight beyond simple tests conducted on the model estimated in-sample? Second, when does the estimated scenario difference yield unbiased estimates of the true difference in outcomes? Answering the first question, we show that the calculation of uncertainties around scenario differences raises difficult issues since the underlying in-sample distributions are identical for both ‘potential’ outcomes when the reported paths are deterministic functions. Under these circumstances, a scenario comparison adds little beyond testing for the significance of the perturbed variable in the estimated model. Resolving the second question, when models include multiple covariates, inferences about scenario differences depend on the relationships between the conditioning variables, especially their invariance to the interventions. Tests for invariance based on automatic detection of structural breaks can help identify in-sample invariance of models to evaluate likely constancy in projected scenarios. Applications of scenario analyses to the impact of unemployment on the UK’s wage share, and of climate change on agricultural growth, illustrate the concepts.
    Date: 2020–04–22
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:2005&r=all
  8. By: Steve Bond; Arshia Hashemi; Greg Kaplan; Piotr Zoch
    Abstract: The ratio estimator of a firm’s markup is the ratio of the output elasticity of a variable input to that input’s cost share in revenue. This note raises issues that concern identification and estimation of markups using the ratio estimator. Concerning identification: (i) if the revenue elasticity is used in place of the output elasticity, then the estimand underlying the ratio estimator does not contain any information about the markup; (ii) if any part of the input bundle is either used to influence demand, or is neither fully fixed nor fully flexible, then the estimand underlying the ratio estimator is not equal to the markup. Concerning estimation: (i) even with data on output quantities, it is challenging to obtain consistent estimates of output elasticities when firms have market power; (ii) without data on output quantities, as is typically the case, it is not possible to obtain consistent estimates of output elasticities when firms have market power and markups are heterogeneous. These issues cast doubt over whether anything useful can be learned about heterogeneity or trends in markups from recent attempts to apply the ratio estimator in settings without output quantity data.
    Keywords: Markups, Output Elasticity, Revenue Elasticity, Production Functions
    JEL: D2 D4 L1 L2
    Date: 2020–04–20
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:906&r=all
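    Sketch: the ratio estimator under an assumed Cobb-Douglas technology with made-up numbers, plus a comment on identification point (i) above.

      # Cobb-Douglas in the variable input: the output elasticity theta is
      # taken as known here; in practice it must be estimated.
      theta = 0.6                              # output elasticity of labor
      wage, labor, revenue = 1.0, 60.0, 150.0
      cost_share = wage * labor / revenue      # = 0.4
      markup = theta / cost_share              # ratio estimator: 1.5
      print(markup)

      # Point (i) above: for a flexible, optimally chosen input the revenue
      # elasticity equals the cost share, so replacing the output elasticity
      # with the revenue elasticity makes the ratio identically 1, whatever
      # the markup.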
  9. By: James Cloyne; Òscar Jordà; Alan M. Taylor
    Abstract: Unusual circumstances often coincide with unusual fiscal policy actions. Much attention has been paid to estimates of how fiscal policy affects the macroeconomy, but these are typically average treatment effects. In practice, the fiscal “multiplier” at any point in time depends on the monetary policy response. Using the IMF fiscal consolidations dataset for identification and a new decomposition-based approach, we show how to evaluate these monetary-fiscal effects. In the data, the fiscal multiplier varies considerably with monetary policy: it can be zero, or as large as 2, depending on the monetary offset. We show how to decompose the typical macro impulse response function into (1) the direct effect of the intervention on the outcome; (2) the indirect effect due to changes in how other covariates affect the outcome when there is an intervention; and (3) a composition effect due to differences in covariates between treated and control subpopulations. This Blinder-Oaxaca-type decomposition provides a convenient way to evaluate the effects of policy, state-dependence, and balance conditions for identification.
    Keywords: state-dependence; identification; fiscal policy; interest rates; Blinder-Oaxaca decomposition; balance; local projections
    JEL: H20 E32 C54 E62 H5 N10 C99
    Date: 2020–03–27
    URL: http://d.repec.org/n?u=RePEc:fip:fedfwp:87713&r=all
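    Sketch: a Blinder-Oaxaca-type split of a treated-control mean gap into a coefficient effect (direct plus indirect) and a composition effect, on simulated data; a toy cross-sectional version, not the paper's impulse-response decomposition.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      d = rng.integers(0, 2, n)                       # treatment indicator
      x = rng.normal(loc=0.3 * d, scale=1.0, size=n)  # covariates differ by group
      y = 1.0 + 0.8 * d + (0.5 + 0.2 * d) * x + rng.normal(size=n)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      X1 = np.column_stack([np.ones((d == 1).sum()), x[d == 1]])
      X0 = np.column_stack([np.ones((d == 0).sum()), x[d == 0]])
      b1, b0 = ols(X1, y[d == 1]), ols(X0, y[d == 0])

      gap = y[d == 1].mean() - y[d == 0].mean()
      xbar1, xbar0 = X1.mean(axis=0), X0.mean(axis=0)
      coef_effect = xbar1 @ (b1 - b0)        # direct + indirect effects
      composition = (xbar1 - xbar0) @ b0     # covariate-distribution effect
      print(gap, coef_effect + composition)  # the two terms sum to the gap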
  10. By: Stelios Arvanitis (Athens University of Economics and Business - Department of Economics); O. Scaillet (University of Geneva GSEM and GFRI; Swiss Finance Institute; University of Geneva - Research Center for Statistics); Nikolas Topaloglou (Athens University of Economics and Business)
    Abstract: We develop and implement methods for determining whether introducing new securities or relaxing investment constraints improves the investment opportunity set for prospect investors. We formulate a new testing procedure for prospect spanning for two nested portfolio sets based on subsampling and Linear Programming. In an application, we use the prospect spanning framework to evaluate whether well-known anomalies are spanned by standard factors. We find that many of the strategies considered expand the opportunity set of prospect-type investors and thus have real economic value for them. In-sample and out-of-sample results prove remarkably consistent in identifying genuine anomalies for prospect investors.
    Keywords: Nonparametric test, prospect stochastic dominance efficiency, prospect spanning, market anomaly, Linear Programming.
    JEL: C12 C14 C44 C58 D81 G11
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2018&r=all
  11. By: Fabian Lange (McGill University); Theodore Papageorgiou (Boston College)
    Abstract: Exploiting results from the literature on non-parametric identification, we make three methodological contributions to the empirical literature estimating the matching function, commonly used to map unemployment and vacancies into hires. First, we show how to non-parametrically identify the matching function. Second, we estimate the matching function allowing for unobserved matching efficiency, without imposing the usual independence assumption between matching efficiency and search on either side of the labor market. Third, we allow for multiple types of jobseekers and consider an "augmented" Beveridge curve that includes them. Our estimated elasticity of hires with respect to vacancies is procyclical and varies between 0.15 and 0.3. This is substantially lower than common estimates, suggesting that a significant bias stems from the commonly used independence assumption. Moreover, variation in match efficiency accounts for much of the decline in hires during the Great Recession.
    Keywords: non-parametric identification, matching function, matching efficiency
    JEL: C14 C78 C10 J64
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2020-025&r=all
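    Sketch: why the independence assumption matters, on simulated data: when (log) matching efficiency comoves with vacancies, the OLS vacancy elasticity in a Cobb-Douglas matching regression is biased upward. The data-generating process and parameter values are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      T, s = 500, 0.3                            # true vacancy elasticity 0.3
      log_m = rng.normal(size=T)                 # matching efficiency
      log_v = 0.8 * log_m + rng.normal(size=T)   # efficiency shifts vacancies
      log_u = rng.normal(size=T)
      log_h = log_m + (1 - s) * log_u + s * log_v

      X = np.column_stack([np.ones(T), log_u, log_v])
      b = np.linalg.lstsq(X, log_h, rcond=None)[0]
      print(b[2])   # estimated vacancy elasticity, biased well above 0.3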
  12. By: Olden, Andreas (Dept. of Business and Management Science, Norwegian School of Economics); Møen, Jarle (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: Triple difference has become a widely used estimator in empirical work. A close reading of articles in top economics journals reveals that the use of the estimator to a large extent rests on intuition. The identifying assumptions are neither formally derived nor generally agreed on. We give a complete presentation of the triple difference estimator, and show that even though the estimator can be computed as the difference between two difference-in-differences estimators, it does not require two parallel trend assumptions to have a causal interpretation. The reason is that the difference between two biased difference-in-differences estimators will be unbiased as long as the bias is the same in both estimators. This requires only one parallel trend assumption to hold.
    Keywords: Triple difference; difference-in-difference-in-differences; difference-in-differences; DID; DiDiD; parallel trend assumption
    JEL: C10 C18 C21
    Date: 2020–04–22
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2020_001&r=all
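    Sketch: the triple-difference estimate computed as the difference of two difference-in-differences estimates on a toy 2x2x2 table of cell means (all numbers made up).

      import numpy as np

      # means[group B, state s, period t]: B=1 is the affected demographic,
      # s=1 the treated state, t=1 the post period.
      means = np.zeros((2, 2, 2))
      means[1, 1, 1] = 5.0   # treatment effect shows up only in this cell
      means += np.arange(2).reshape(2, 1, 1) * 1.0   # group level
      means += np.arange(2).reshape(1, 2, 1) * 2.0   # state trend, same for groups
      means += np.arange(2).reshape(1, 1, 2) * 3.0   # common time shock

      def did(m):   # difference-in-differences on a 2x2 of cell means
          return (m[1, 1] - m[1, 0]) - (m[0, 1] - m[0, 0])

      ddd = did(means[1]) - did(means[0])   # DiD for B=1 minus DiD for B=0
      print(ddd)                            # recovers 5.0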
  13. By: Nataliia Ostapenko
    Abstract: I propose a new approach to identifying exogenous monetary policy shocks that requires neither priors on the underlying macroeconomic structure nor any observation of monetary policy actions. My approach entails directly estimating the unexpected changes in the federal funds rate as those which cannot be predicted from the internal Federal Open Market Committee (FOMC) discussions. I employ deep learning and basic machine learning regressors to predict the effective federal funds rate from the FOMC's discussions without imposing any time-series structure. A standard three-variable structural vector autoregression (SVAR) with my new measure shows that economic activity and inflation decline in response to a monetary policy shock.
    Keywords: monetary policy, identification, shock, deep learning, FOMC, transcripts
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:mtk:febawb:123&r=all
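    Sketch: the prediction-error idea, with TF-IDF features and ridge regression standing in for the paper's deep-learning models; the three transcripts and rates below are placeholders, not real FOMC data.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import Ridge

      transcripts = [
          "inflation pressures are building and growth is strong",
          "labor markets remain weak and downside risks dominate",
          "conditions are broadly balanced with moderate growth",
      ]
      ffr = np.array([5.25, 1.00, 3.00])   # effective funds rate (made up)

      X = TfidfVectorizer().fit_transform(transcripts)
      model = Ridge(alpha=1.0).fit(X, ffr)
      shocks = ffr - model.predict(X)      # unpredicted component = "shock"
      print(shocks)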
  14. By: André Klein; Guy Melard
    Abstract: In this paper, the invertibility condition for the asymptotic Fisher information matrix of a controlled vector autoregressive moving average stationary process, VARMAX, is stated in a theorem. It is shown that the Fisher information matrix of a VARMAX process is invertible if the VARMAX matrix polynomials have no common eigenvalue. Contrary to what was previously claimed in a VARMA framework, the converse is not true. We make use of tensor Sylvester matrices, since checking whether matrix polynomials share an eigenvalue is most easily done in that way. A tensor Sylvester matrix is a block Sylvester matrix with blocks obtained by Kronecker products of the polynomial coefficients with an identity matrix, on the left for one polynomial and on the right for the other. The results are illustrated by numerical computations.
    Keywords: Tensor Sylvester matrix; Matrix polynomial; Common eigenvalues; Fisher information matrix; Stationary VARMAX process
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/304274&r=all
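    Sketch: assembling a tensor Sylvester matrix from Kronecker products of the coefficient blocks, in the Sylvester-matrix layout; the ordering and signs here may differ from the paper's exact definition, and the coefficient matrices are made up. Singularity then signals a common eigenvalue of A(z) and B(z).

      import numpy as np

      def tensor_sylvester(A, B):
          # A, B: lists of n x n coefficient matrices of A(z) = sum_i A[i] z^i
          # (degree p) and B(z) = sum_j B[j] z^j (degree q). Build q block-rows
          # of A_i (x) I_n and p block-rows of I_n (x) B_j, shifted as in a
          # scalar Sylvester matrix.
          n = A[0].shape[0]
          p, q = len(A) - 1, len(B) - 1
          m = n * n
          S = np.zeros((m * (p + q), m * (p + q)))
          for r in range(q):
              for i, Ai in enumerate(A):
                  S[r*m:(r+1)*m, (r+i)*m:(r+i+1)*m] = np.kron(Ai, np.eye(n))
          for r in range(p):
              for j, Bj in enumerate(B):
                  S[(q+r)*m:(q+r+1)*m, (r+j)*m:(r+j+1)*m] = np.kron(np.eye(n), Bj)
          return S

      n = 2
      A = [np.eye(n), np.array([[0.5, 0.1], [0.0, 0.3]])]   # A(z) = I + A1 z
      B = [np.eye(n), np.array([[0.2, 0.0], [0.1, 0.4]])]   # B(z) = I + B1 z
      S = tensor_sylvester(A, B)
      print(S.shape, np.linalg.matrix_rank(S))  # full rank: no common eigenvalue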

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.