nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒09‒28
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. IV Estimation of Spatial Dynamic Panels with Interactive Effects: Large Sample Theory and an Application on Bank Attitude Toward Risk By Cui, Guowei; Sarafidis, Vasilis; Yamagata, Takashi
  2. Local Composite Quantile Regression for Regression Discontinuity By Xiao Huang; Zhaoguo Zhan
  3. Correcting for Misclassified Binary Regressors Using Instrumental Variables By Steven J. Haider; Melvin Stephens Jr.
  4. Exact Computation of Maximum Rank Correlation Estimator By Youngki Shin; Zvezdomir Todorov
  5. Estimating DSGE Models: Recent Advances and Future Challenges By Jesús Fernández-Villaverde; Pablo A. Guerrón-Quintana
  6. Identification of Structural VAR Models via Independent Component Analysis: A Performance Evaluation Study By Alessio Moneta; Gianluca Pallante
  7. Forecasting Low Frequency Macroeconomic Events with High Frequency Data By Ana B. Galvão; Michael T. Owyang
  8. Two-Stage Least Squares Random Forests with an Application to Angrist and Evans (1998) By Biewen, Martin; Kugler, Philipp
  9. Optimal Model Selection in RDD and Related Settings Using Placebo Zones By Kettlewell, Nathan; Siminski, Peter
  10. Measuring Monetary Policy with Residual Sign Restrictions at Known Shock Dates By Harald Badinger; Stefan Schiman
  11. Stochastic approximation algorithms for superquantiles estimation By Bercu, B.; Costa, Manon; Gadat, Sébastien
  12. Understanding Persistence By Morgan Kelly
  13. Accelerating Peak Dating in a Dynamic Factor Markov-Switching Model By Bram van Os; Dick van Dijk

  1. By: Cui, Guowei; Sarafidis, Vasilis; Yamagata, Takashi
    Abstract: The present paper develops a new Instrumental Variables (IV) estimator for spatial, dynamic panel data models with interactive effects under large N and T asymptotics. For this class of models, the only approaches available in the literature are based on quasi-maximum likelihood estimation. The approach put forward in this paper is appealing from both a theoretical and a practical point of view for a number of reasons. Firstly, the proposed IV estimator is linear in the parameters of interest and it is computationally inexpensive. Secondly, the IV estimator is free from asymptotic bias. In contrast, existing QML estimators suffer from incidental parameter bias, depending on the magnitude of unknown parameters. Thirdly, the IV estimator retains the attractive feature of Method of Moments estimation in that it can accommodate endogenous regressors, so long as external exogenous instruments are available. The IV estimator is consistent and asymptotically normal as both N,T tend to infinity, such that N/T converges to a bounded constant. The proposed methodology is employed to study the determinants of risk attitude of banking institutions. The results of our analysis provide evidence that the more risk-sensitive capital regulation that was introduced by the Dodd-Frank Act in 2011 has succeeded in influencing banks’ behaviour in a substantial manner.
    Keywords: Panel data, instrumental variables, state dependence, social interactions, common factors, large N and T asymptotics, bank risk behaviour, capital regulation
    JEL: C33 C36 C38 G21
    Date: 2020–08–18
  2. By: Xiao Huang; Zhaoguo Zhan
    Abstract: We introduce local composite quantile regression (LCQR) to causal inference in regression discontinuity (RD) designs. Kai et al. (2010) establish the efficiency properties of LCQR; we show that its good boundary performance translates into accurate estimation of treatment effects in RD under a variety of data-generating processes. Moreover, we propose a bias-corrected and standard-error-adjusted t-test for inference, which leads to confidence intervals with good coverage probabilities. A bandwidth selector is also discussed. For illustration, we conduct a simulation study and revisit a classic example from Lee (2008). A companion R package rdcqr is developed.
    Date: 2020–09
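    The composite idea behind LCQR can be illustrated with a toy sketch: several quantile levels share one local slope, and their check losses are pooled. This is only a hedged illustration, not the rdcqr implementation: kernel weighting, bias correction, and bandwidth selection are omitted, and the uniform window, quantile levels, and grid search below are illustrative assumptions.

    ```python
    import random

    random.seed(1)

    # data: y = 2*x + noise; we fit a shared local slope near x0
    # using a uniform kernel window of half-width h
    n, x0, h = 800, 0.0, 0.5
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [2.0 * x + random.gauss(0, 0.3) for x in xs]
    local = [(x, y) for x, y in zip(xs, ys) if abs(x - x0) <= h]

    taus = [0.25, 0.5, 0.75]  # quantile levels pooled by composite QR

    def check_loss(u, tau):
        # quantile "check" loss: tau*u for u >= 0, (tau - 1)*u for u < 0
        return (tau - (u < 0)) * u

    def empirical_quantile(values, tau):
        vs = sorted(values)
        return vs[min(int(tau * len(vs)), len(vs) - 1)]

    def cqr_objective(b):
        # for a fixed common slope b, the optimal tau-specific intercept
        # is the tau-quantile of the partial residuals y - b*(x - x0)
        resid = [y - b * (x - x0) for x, y in local]
        total = 0.0
        for tau in taus:
            a = empirical_quantile(resid, tau)
            total += sum(check_loss(r - a, tau) for r in resid)
        return total

    # grid search over the shared slope (a stand-in for a real optimizer)
    slope = min((cqr_objective(b), b) for b in
                [1.0 + 0.05 * k for k in range(41)])[1]
    ```

    Pooling several quantile levels into one slope is what gives CQR its efficiency gain over a single-quantile fit; the recovered slope here should be close to the true value of 2.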
  3. By: Steven J. Haider; Melvin Stephens Jr.
    Abstract: Estimators that exploit an instrumental variable to correct for misclassification in a binary regressor typically assume that the misclassification rates are invariant across all values of the instrument. We show that this assumption is invalid in routine empirical settings. We derive a new estimator that is consistent when misclassification rates vary across values of the instrumental variable. In cases where identification is weak, our moments can be combined with bounds to provide a confidence set for the parameter of interest.
    JEL: C18 C26
    Date: 2020–09
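    The attenuation the paper addresses is easy to reproduce. The sketch below applies the textbook correction under *known* misclassification rates, which is exactly the knife-edge assumption the authors relax with instrumental variables; it is not their estimator, and all parameter values are illustrative.

    ```python
    import random

    random.seed(2)

    n, beta = 20000, 1.5
    a0, a1 = 0.1, 0.2  # P(T=1 | T*=0) and P(T=0 | T*=1), assumed known

    t_true = [1 if random.random() < 0.4 else 0 for _ in range(n)]
    # observed binary regressor with misclassification
    t_obs = [1 - t if random.random() < (a1 if t else a0) else t
             for t in t_true]
    y = [beta * t + random.gauss(0, 1) for t in t_true]

    def mean(v):
        return sum(v) / len(v)

    def cov(u, v):
        mu, mv = mean(u), mean(v)
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

    # naive OLS slope on the misclassified regressor is attenuated:
    # plim b_naive = beta * (1 - a0 - a1) * Var(T*) / Var(T)
    b_naive = cov(y, t_obs) / cov(t_obs, t_obs)

    # recover P(T* = 1) from E[T] = a0 + (1 - a0 - a1) * P(T* = 1),
    # then invert the attenuation factor
    p = (mean(t_obs) - a0) / (1 - a0 - a1)
    b_corr = b_naive * cov(t_obs, t_obs) / ((1 - a0 - a1) * p * (1 - p))
    ```

    The corrected slope recovers the true coefficient; the naive slope is biased toward zero, which is the bias the paper's IV moments remove without assuming the rates are known or constant.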
  4. By: Youngki Shin; Zvezdomir Todorov
    Abstract: In this paper we provide an exact computation algorithm for the maximum rank correlation estimator using the mixed integer programming (MIP) approach. We construct a new constrained optimization problem by transforming all indicator functions into binary parameters to be estimated and show that it is equivalent to the original problem. Using a modern MIP solver, we apply the proposed method to an empirical example and to Monte Carlo simulations. We also consider an application to best subset rank prediction and show that the original optimization problem can be reformulated as a MIP. We derive a non-asymptotic bound for the tail probability of the predictive performance measure.
    Date: 2020–09
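    The rank correlation objective itself is simple to write down; the paper's contribution is maximizing it exactly via MIP. The sketch below only evaluates the objective on a coarse grid (a stand-in for the MIP solver), with the usual normalization that the coefficient on the first regressor equals one; the data-generating process is illustrative.

    ```python
    import random

    random.seed(3)

    n = 60
    x1 = [random.gauss(0, 1) for _ in range(n)]
    x2 = [random.gauss(0, 1) for _ in range(n)]
    # outcome is a monotone transform of a linear index; the true
    # coefficient on x2 (with that on x1 normalized to 1) is 0.5
    y = [(a + 0.5 * b) ** 3 for a, b in zip(x1, x2)]

    def mrc(b2):
        # maximum rank correlation objective: count pairs ordered
        # concordantly by the outcome and by the index x1 + b2 * x2
        idx = [a + b2 * b for a, b in zip(x1, x2)]
        return sum(1 for i in range(n) for j in range(n)
                   if y[i] > y[j] and idx[i] > idx[j])

    grid = [k / 20 for k in range(-20, 21)]  # candidate b2 in [-1, 1]
    scores = {b: mrc(b) for b in grid}
    ```

    Because the objective is a sum of indicator functions, it is piecewise constant in b2, which is why grid or pairwise-swap heuristics can miss the exact maximizer and why the MIP reformulation is valuable. At the true coefficient the index reproduces the outcome ranking exactly, so the objective attains its global bound there.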
  5. By: Jesús Fernández-Villaverde; Pablo A. Guerrón-Quintana
    Abstract: We review the current state of the estimation of DSGE models. After introducing a general framework for dealing with DSGE models, the state-space representation, we discuss how to evaluate the moments or the likelihood function implied by such a structure. We discuss, in varying degrees of detail, recent advances in the field, such as the tempered particle filter, approximate Bayesian computation, Hamiltonian Monte Carlo, variational inference, and machine learning: methods that show much promise but have not yet been fully explored by the DSGE community. We conclude by outlining three future challenges for this line of research.
    JEL: C11 C13 E30
    Date: 2020–08
  6. By: Alessio Moneta; Gianluca Pallante
    Abstract: Independent Component Analysis (ICA) is a statistical method that transforms a set of random variables into their least dependent linear combinations. Under the assumption that the observed data are mixtures of non-Gaussian and independent processes, ICA is able to recover the underlying components up to a scale and order indeterminacy. Its application to structural vector autoregressive (SVAR) models allows the researcher to recover the impact of independent structural shocks on the observed series from estimated residuals. We analyze different ICA estimators, recently proposed within the field of SVAR identification, and compare their performance in recovering structural coefficients. Moreover, after suggesting an algorithm that solves the ICA indeterminacy problem, we assess the size distortions of the estimators in hypothesis testing. We conduct our analysis by focusing on distributional scenarios that move gradually closer to the Gaussian case, the case in which ICA methods fail to recover the independent components. In terms of the statistical properties of the ICA estimators, we find no evidence that any single method outperforms all the others. We finally present an empirical illustration using US data to identify the effects of government spending and tax cuts on economic activity, thus providing an example where ICA techniques can be used for hypothesis testing.
    Keywords: Independent Component Analysis; Identification; Structural VAR; Impulse response functions; Non-Gaussianity; Generalized normal distribution
    Date: 2020–09–12
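    The identification idea can be demonstrated in a two-shock toy model: residual covariance cannot distinguish between orthogonal rotations, but non-Gaussianity can. The sketch below recovers a mixing angle by maximizing total excess kurtosis over rotations; this kurtosis-grid approach is an illustration of the principle only, not one of the ICA estimators evaluated in the paper.

    ```python
    import math
    import random

    random.seed(4)

    n = 6000
    # independent, non-Gaussian (uniform) structural shocks
    e1 = [random.uniform(-1, 1) for _ in range(n)]
    e2 = [random.uniform(-1, 1) for _ in range(n)]

    theta0 = 0.6  # "unknown" structural mixing angle
    c0, s0 = math.cos(theta0), math.sin(theta0)
    # reduced-form residuals: an orthogonal mixture of the shocks,
    # observationally equivalent to any other rotation up to 2nd moments
    u1 = [c0 * a - s0 * b for a, b in zip(e1, e2)]
    u2 = [s0 * a + c0 * b for a, b in zip(e1, e2)]

    def excess_kurtosis(z):
        m = sum(z) / len(z)
        v = sum((t - m) ** 2 for t in z) / len(z)
        return sum((t - m) ** 4 for t in z) / len(z) / v ** 2 - 3.0

    def non_gaussianity(theta):
        # un-rotate by theta; the components are maximally non-Gaussian
        # (and independent) when theta equals the true mixing angle
        c, s = math.cos(theta), math.sin(theta)
        r1 = [c * a + s * b for a, b in zip(u1, u2)]
        r2 = [c * b - s * a for a, b in zip(u1, u2)]
        return abs(excess_kurtosis(r1)) + abs(excess_kurtosis(r2))

    angles = [k * math.pi / 180 for k in range(90)]  # grid over [0, pi/2)
    theta_hat = max(angles, key=non_gaussianity)
    ```

    The scale and order indeterminacy the abstract mentions appears here as the fact that any angle shifted by a multiple of pi/2 recovers the same components, relabeled and possibly sign-flipped.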
  7. By: Ana B. Galvão; Michael T. Owyang
    Abstract: High-frequency financial and economic activity indicators are usually time-aggregated before forecasts of low-frequency macroeconomic events, such as recessions, are computed. We propose a mixed-frequency modelling alternative that delivers high-frequency probability forecasts (including their confidence bands) for these low-frequency events. The new approach is compared with single-frequency alternatives using loss functions adequate for rare-event forecasting. We provide evidence that: (i) the weekly-sampled term spread improves on its monthly-sampled counterpart in predicting NBER recessions, (ii) the predictive content of the spread and the Chicago Fed National Financial Conditions Index (NFCI) is supplementary to economic activity for one-year-ahead forecasts of contractions, and (iii) a weekly activity index can date the 2020 business cycle peak two months in advance using mixed-frequency filtering.
    Keywords: mixed frequency models; recession; financial indicators; weekly activity index; event probability forecasting
    JEL: C25 C53 E32
    Date: 2020–09
  8. By: Biewen, Martin (University of Tuebingen); Kugler, Philipp (Institut für Angewandte Wirtschaftsforschung (IAW))
    Abstract: We develop two-stage least squares (2SLS) estimation in the general framework of Athey et al. (Generalized Random Forests, Annals of Statistics, Vol. 47, 2019) and provide a software implementation for R and C++. We use the method to revisit the classic application of instrumental variables in Angrist and Evans (Children and Their Parents' Labor Supply: Evidence from Exogenous Variation in Family Size, American Economic Review, Vol. 88, 1998). The two-stage least squares random forest allows one to investigate local heterogeneous effects that cannot be investigated using ordinary 2SLS.
    Keywords: machine learning, generalized random forests, fertility, instrumental variable estimation
    JEL: C26 C55 J22 J13 C14
    Date: 2020–08
  9. By: Kettlewell, Nathan (University of Technology, Sydney); Siminski, Peter (University of Technology, Sydney)
    Abstract: We propose a new model-selection algorithm for Regression Discontinuity Design, Regression Kink Design, and related IV estimators. Candidate models are assessed within a 'placebo zone' of the running variable, where the true effects are known to be zero. The approach yields an optimal combination of bandwidth, polynomial order, and any other choice parameters. It can also inform choices between classes of models (e.g. RDD versus cohort-IV) and other specification choices, such as covariates, kernel, or weights. We use the approach to evaluate changes in Minimum Supervised Driving Hours in the Australian state of New South Wales. We also re-evaluate evidence on the effects of Head Start and the Minimum Legal Drinking Age. We conclude with practical advice for researchers, including implications of treatment effect heterogeneity.
    Keywords: regression discontinuity, regression kink, graduated driver licensing
    JEL: C13 C52 I18
    Date: 2020–08
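    A stylized version of the placebo-zone idea: estimate the "effect" at cutoffs where none exists, and pick the bandwidth whose placebo estimates are closest to zero. The DGP, uniform kernel, RMSE criterion, and candidate set below are illustrative assumptions; the paper's algorithm also selects polynomial order and other choice parameters.

    ```python
    import random

    random.seed(5)

    n = 4000
    # running variable on [0, 10] with a true jump of 1.0 at cutoff 8
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [0.3 * x + (1.0 if x >= 8 else 0.0) + random.gauss(0, 0.5)
          for x in xs]

    def local_linear_rd(cut, h):
        # difference of two one-sided local linear fits at the cutoff
        def fit_at_cut(points):
            m = len(points)
            if m < 2:
                return None
            sx = sum(p[0] for p in points)
            sy = sum(p[1] for p in points)
            sxx = sum(p[0] ** 2 for p in points)
            sxy = sum(p[0] * p[1] for p in points)
            den = m * sxx - sx * sx
            if den == 0:
                return None
            b = (m * sxy - sx * sy) / den
            return (sy - b * sx) / m + b * cut
        left = [(x, y) for x, y in zip(xs, ys) if cut - h <= x < cut]
        right = [(x, y) for x, y in zip(xs, ys) if cut <= x <= cut + h]
        fl, fr = fit_at_cut(left), fit_at_cut(right)
        return None if fl is None or fr is None else fr - fl

    # placebo zone [1, 6]: no true jump, so good models estimate ~0 there
    placebo_cuts = [1 + 0.5 * k for k in range(11)]
    bandwidths = [0.25, 0.5, 1.0, 2.0]

    def placebo_rmse(h):
        ests = [e for e in (local_linear_rd(c, h) for c in placebo_cuts)
                if e is not None]
        return (sum(e * e for e in ests) / len(ests)) ** 0.5

    best_h = min(bandwidths, key=placebo_rmse)   # selected in the zone
    effect = local_linear_rd(8.0, best_h)        # applied at the real cutoff
    ```

    The placebo zone turns bandwidth choice into an out-of-sample exercise: the criterion is computed entirely where the truth is known, then carried over to the genuine cutoff.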
  10. By: Harald Badinger (Department of Economics, Vienna University of Economics and Business); Stefan Schiman (Austrian Institute of Economic Research (WIFO))
    Abstract: We propose a novel identification strategy to measure monetary policy in a structural VAR. It is based exclusively on known past policy shocks, which are uncovered from high-frequency data, and it does not rely on any theoretical a priori restrictions. Our empirical analysis for the euro area reveals that interest rate decisions of the ECB have surprised financial markets at least fifteen times since 1999. This information is used to restrict the sign and magnitude of the structural residuals of the policy rule equation at these shock dates. Despite its highly agnostic nature, this approach achieves strong identification, suggesting that unexpected ECB decisions have an immediate impact on the short-term money market rate, the narrow money stock, commodity prices, consumer prices, and the euro-dollar exchange rate, and that real output responds gradually. Our close-to-assumption-free approach obtains as an outcome what traditional sign restrictions on impulse responses impose as an assumption.
    Keywords: Structural VAR, Set Identification, Monetary Policy, ECB
    JEL: C32 E52 N14
    Date: 2020–07
  11. By: Bercu, B.; Costa, Manon; Gadat, Sébastien
    Abstract: This paper is devoted to two different two-time-scale stochastic approximation algorithms for superquantile estimation. We investigate the asymptotic behavior of a Robbins-Monro estimator and its convexified version. Our main contribution is to establish the almost sure convergence, the quadratic strong law, and the law of the iterated logarithm for our estimates via a martingale approach. A joint asymptotic normality result is also provided. Our theoretical analysis is illustrated by numerical experiments on real datasets.
    Date: 2020–09
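    The two-time-scale structure can be sketched directly: a fast Robbins-Monro step tracks the quantile while a slow averaging step tracks the superquantile (expected value beyond the quantile). The step-size exponent and the uniform data stream below are illustrative assumptions; the convexified variant and the limit theory are not reproduced.

    ```python
    import random

    random.seed(6)

    alpha = 0.8      # tail level: superquantile = E[X | X > q_alpha]
    q, s = 0.0, 0.0  # quantile and superquantile iterates

    for k in range(1, 200001):
        x = random.uniform(0.0, 1.0)  # stream of i.i.d. observations
        # fast time scale: Robbins-Monro step for the alpha-quantile
        q += k ** -0.75 * (alpha - (1.0 if x <= q else 0.0))
        # slow time scale: running average of the plug-in superquantile
        # term q + (x - q)^+ / (1 - alpha)
        s += (q + max(x - q, 0.0) / (1.0 - alpha) - s) / k
    ```

    For uniform draws on [0, 1] with alpha = 0.8, the quantile iterate approaches 0.8 and the superquantile iterate approaches 0.9, without ever storing the sample.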
  12. By: Morgan Kelly
    Abstract: A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. These studies typically combine unusually high t statistics with severe spatial autocorrelation in residuals, suggesting that some findings may be artefacts of underestimating standard errors or of fitting spatial trends. For 25 studies in leading journals, I apply three basic robustness checks against spatial trends and find that effect sizes typically fall by over half, leaving most well-known results insignificant at conventional levels. Turning to standard errors, there is currently no data-driven method for selecting an appropriate spatial HAC kernel. The paper proposes a simple procedure in which a kernel with a highly flexible functional form is estimated by maximum likelihood. After correction, standard errors tend to rise substantially for cross-sectional studies but to fall for panels. Overall, credible identification strategies tend to perform no better than naive regressions. Although the focus here is on historical persistence, the methods apply to regressions using spatial data more generally.
    Keywords: Deep origins; Robustness checks; Spatial noise; Explanatory variables; Standard errors
    Date: 2020–09
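    The spatial-trend critique is easy to reproduce in miniature: two causally unrelated variables that share a smooth trend in a coordinate look strongly "significant" in a naive regression, and residualizing on the coordinate deflates the t-statistic. Everything below (a single coordinate, a linear trend, the noise scale) is an illustrative assumption, not the paper's procedure.

    ```python
    import math
    import random

    random.seed(7)

    n = 300
    lat = [random.uniform(0, 1) for _ in range(n)]
    # two variables that are causally unrelated but share a smooth
    # north-south trend (a stand-in for spatial autocorrelation)
    x = [l + random.gauss(0, 0.3) for l in lat]
    y = [l + random.gauss(0, 0.3) for l in lat]

    def tstat(dep, reg):
        # t-statistic on the slope of a simple OLS regression
        m = len(reg)
        mx, my = sum(reg) / m, sum(dep) / m
        sxx = sum((a - mx) ** 2 for a in reg)
        b = sum((a - mx) * (c - my) for a, c in zip(reg, dep)) / sxx
        resid = [c - my - b * (a - mx) for a, c in zip(reg, dep)]
        s2 = sum(r * r for r in resid) / (m - 2)
        return b / math.sqrt(s2 / sxx)

    def detrend(v):
        # residualize on the spatial coordinate (linear trend here)
        ml, mv = sum(lat) / n, sum(v) / n
        b = (sum((l - ml) * (a - mv) for l, a in zip(lat, v))
             / sum((l - ml) ** 2 for l in lat))
        return [a - mv - b * (l - ml) for l, a in zip(lat, v)]

    t_naive = tstat(y, x)                       # spuriously large
    t_trend = tstat(detrend(y), detrend(x))     # deflated by detrending
    ```

    This is the mildest of the robustness checks the abstract describes; the paper's HAC correction addresses residual spatial dependence that a simple trend does not absorb.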
  13. By: Bram van Os (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam)
    Abstract: The dynamic factor Markov-switching (DFMS) model introduced by Chauvet (1998) has proven to be a powerful framework to measure the business cycle. We extend the DFMS model by allowing for time-varying transition probabilities, with the aim of accelerating the real-time dating of turning points between expansion and recession regimes. Time-variation of the transition probabilities is brought about endogenously using the accelerated score-driven approach and exogenously using the term spread. In a real-time application using the four components of The Conference Board’s Coincident Economic Index for the period 1959-2020, we find that signaling power for recessions is significantly improved.
    Keywords: business cycles, turning points, Markov-Switching, time-varying transition probabilities, generalized autoregressive score model
    JEL: E32 C32
    Date: 2020–09–15

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.