nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒09‒10
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Spatial Dynamic Panel Data Models with Correlated Random Effects By Li, Liyao; Yang, Zhenlin
  2. Partial effects estimation for fixed-effects logit panel data models By Francesco Bartolucci; Claudia Pigini
  3. Design-based Analysis in Difference-In-Differences Settings with Staggered Adoption By Susan Athey; Guido Imbens
  4. A Unified Framework for Efficient Estimation of General Treatment Models By Chunrong Ai; Oliver Linton; Kaiji Motegi; Zheng Zhang
  5. A Practical Approach to Testing Calibration Strategies By Yongquan Cao; Grey Gordon
  6. Extrapolating Treatment Effects in Multi-Cutoff Regression Discontinuity Designs By Matias D. Cattaneo; Luke Keele; Rocio Titiunik; Gonzalo Vazquez-Bare
  7. Robustness of Multistep Forecasts and Predictive Regressions at Intermediate and Long Horizons By Chevillon, Guillaume
  8. Uniform Inference in High-Dimensional Gaussian Graphical Models By Sven Klaassen; Jannis K\"uck; Martin Spindler
  9. Measuring the origins of macroeconomic uncertainty By Haroon Mumtaz
  10. LASSO-Type Penalization in the Framework of Generalized Additive Models for Location, Scale and Shape By Andreas Groll; Julien Hambuckers; Thomas Kneib; Nikolaus Umlauf
  11. Stable Predictions across Unknown Environments By Kuang, Kun; Xiong, Ruoxuan; Cui, Peng; Athey, Susan; Li, Bo
  12. Truncated sum of squares estimation of fractional time series models with deterministic trends By Hualde, Javier; Ørregaard Nielsen, Morten
  13. Comovements in the Real Activity of Developed and Emerging Economies: A Test of Global versus Specific International Factors By Djogbenou, Antoine A.
  14. Intertemporal Similarity of Economic Time Series By Franses, Ph.H.B.F.; Wiemann, T.
  15. Connecting Sharpe ratio and Student t-statistic, and beyond By Eric Benhamou
  16. Inference with Large Clustered Datasets By MacKinnon, James G.
  17. Local Linear Forests By Rina Friedberg; Julie Tibshirani; Susan Athey; Stefan Wager
  18. Is there a cult of statistical significance in Agricultural Economics? By Rommel, Jens; Weltin, Meike
  19. Adaptive inference in heteroskedastic fractional time series models By Cavaliere, Giuseppe; Ørregaard Nielsen, Morten; Taylor, A.M. Robert

  1. By: Li, Liyao (School of Economics, Singapore Management University); Yang, Zhenlin (School of Economics, Singapore Management University)
    Abstract: In this paper, M-estimation and inference methods are developed for spatial dynamic panel data models with correlated random effects, based on short panels. The unobserved individual-specific effects are assumed to be correlated with the observed time-varying regressors linearly or in a linearizable way, giving the so-called correlated random effects model, which allows the estimation of effects of time-invariant regressors. The unbiased estimating functions are obtained by adjusting the conditional quasi-scores given the initial observations, leading to M-estimators that are consistent, asymptotically normal, and free from the initial conditions except the process starting time. By decomposing the estimating functions into sums of terms uncorrelated given idiosyncratic errors, a hybrid method is developed for consistently estimating the variance-covariance matrix of the M-estimators, which again depends only on the process starting time. Monte Carlo results demonstrate that the proposed methods perform well in finite samples.
    Keywords: Adjusted quasi score; Dynamic panels; Correlated random effects; Initial conditions; Martingale difference; Spatial effects; Short panels
    JEL: C10 C13 C15 C21 C23
    Date: 2018–08–23
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2018_015&r=ecm
  2. By: Francesco Bartolucci (Dipartimento di Economia Universita' di Perugia); Claudia Pigini (Dipartimento di Scienze Economiche e Sociali - Universita' Politecnica delle Marche)
    Abstract: We develop a multiple-step procedure for the estimation of point and average partial effects in fixed-effects logit panel data models that admit sufficient statistics for the incidental parameters. In these models, estimates of the individual effects are not directly available and have to be recovered by means of an additional step. We also derive a standard error formulation for the average partial effects. We study the finite-sample properties of the proposed estimator by simulation and provide an application based on unionised workers.
    Keywords: Partial effects, Logit model, Quadratic Exponential model, Conditional Maximum Likelihood
    JEL: C12 C23 C25
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:anc:wpaper:431&r=ecm
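    For context, the partial effect being averaged is standard logit algebra (this display is background, not the paper's multiple-step estimator): with
      \[
      P(y_{it}=1 \mid x_{it}, \alpha_i) = \Lambda(\alpha_i + x_{it}'\beta), \qquad \Lambda(u) = \frac{e^u}{1+e^u},
      \]
      the effect of a continuous covariate x_{it,k} is
      \[
      \frac{\partial P(y_{it}=1 \mid x_{it}, \alpha_i)}{\partial x_{it,k}} = \Lambda(\alpha_i + x_{it}'\beta)\bigl[1 - \Lambda(\alpha_i + x_{it}'\beta)\bigr]\beta_k,
      \]
      which depends on the individual effect \alpha_i; conditional maximum likelihood delivers \hat\beta but not the \alpha_i, hence the additional recovery step.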
  3. By: Susan Athey; Guido Imbens
    Abstract: In this paper we study estimation of and inference for average treatment effects in a setting with panel data. We focus on the setting where units, e.g., individuals, firms, or states, adopt the policy or treatment of interest at a particular point in time, and then remain exposed to this treatment at all times afterwards. We take a design perspective where we investigate the properties of estimators and procedures given assumptions on the assignment process. We show that under random assignment of the adoption date the standard Difference-In-Differences estimator is an unbiased estimator of a particular weighted average causal effect. We characterize the properties of this estimand, and show that the standard variance estimator is conservative.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.05293&r=ecm
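    In the staggered-adoption setting, the standard Difference-In-Differences estimator referred to above is typically the coefficient \hat\tau from a two-way fixed effects regression (a standard formulation offered for context, not quoted from the paper):
      \[
      Y_{it} = \alpha_i + \beta_t + \tau W_{it} + \varepsilon_{it},
      \]
      where W_{it} = 1 from unit i's adoption date onwards and 0 before; under random assignment of adoption dates, \hat\tau is unbiased for a particular weighted average of causal effects.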
  4. By: Chunrong Ai; Oliver Linton; Kaiji Motegi; Zheng Zhang
    Abstract: This paper presents a weighted optimization framework that unifies binary, multi-valued and continuous treatments, as well as mixtures of discrete and continuous treatments, under unconfounded treatment assignment. With a general loss function, the framework includes the average, quantile and asymmetric least squares causal effects of treatment as special cases. For this general framework, we first derive the semiparametric efficiency bound for the causal effect of treatment, extending the existing bound results to a wider class of models. We then propose a generalized optimization estimator for the causal effect, with weights estimated by solving an expanding set of equations. Under some sufficient conditions, we establish consistency and asymptotic normality of the proposed estimator and show that it attains our semiparametric efficiency bound, thereby extending the existing literature on efficient estimation of causal effects to a wider class of applications. Finally, we discuss estimation of some causal effect functionals, such as the treatment effect curve and the average outcome. To evaluate the finite sample performance of the proposed procedure, we conduct a small scale simulation study and find that the proposed estimator has practical value. To illustrate the applicability of the procedure, we revisit the literature on campaign advertising and campaign contributions. Unlike existing procedures, which produce mixed results, we find no evidence of an effect of campaign advertising on campaign contributions.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.04936&r=ecm
  5. By: Yongquan Cao (Indiana University); Grey Gordon (Indiana University)
    Abstract: A calibration strategy tries to match target moments using a model's parameters. We propose tests for determining whether this is possible. The tests use moments at random parameter draws to assess whether the target moments are similar to the computed ones (evidence of existence) or appear to be outliers (evidence of non-existence). Our experiments show the tests are effective at detecting both existence and non-existence in a non-linear model. Multiple calibration strategies can be quickly tested using just one set of simulated data. Applying our approach to indirect inference allows for the testing of many auxiliary model specifications simultaneously. Code is provided.
    Keywords: Calibration, GMM, Indirect Inference, Existence, Misspecification, Outlier Detection, Data Mining
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2016-004&r=ecm
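    A minimal sketch of the outlier-detection idea in Python, assuming a user-supplied simulate_moments(theta) routine and a prior sampler draw_theta(rng); the nearest-neighbor criterion and all names are illustrative, and the paper's actual test statistics may differ:
      import numpy as np

      def calibration_outlier_score(target, simulate_moments, draw_theta,
                                    n_draws=1000, k=10, seed=0):
          """Score how far the target moments lie from moments the model
          can generate; large values suggest no parameter vector matches."""
          rng = np.random.default_rng(seed)
          sims = np.array([simulate_moments(draw_theta(rng))
                           for _ in range(n_draws)])
          mu, sd = sims.mean(axis=0), sims.std(axis=0)
          z = (sims - mu) / sd                 # standardized moment cloud
          zt = (np.asarray(target) - mu) / sd  # standardized target moments

          def knn_dist(point, cloud):
              return np.sort(np.linalg.norm(cloud - point, axis=1))[k]

          d_target = knn_dist(zt, z)
          d_typical = np.median([knn_dist(z[i], np.delete(z, i, axis=0))
                                 for i in range(min(200, n_draws))])
          return d_target / d_typical
    A ratio well above one says the target moments sit far from anything the model produced at the sampled parameters, which is evidence against existence.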
  6. By: Matias D. Cattaneo; Luke Keele; Rocio Titiunik; Gonzalo Vazquez-Bare
    Abstract: Regression discontinuity (RD) designs are viewed as one of the most credible identification strategies for program evaluation and causal inference. However, RD treatment effect estimands are necessarily local, making the extrapolation of these effects a critical open question. We introduce a new method for extrapolation of RD effects that exploits the presence of multiple cutoffs, and is therefore design-based. Our approach relies on an easy-to-interpret identifying assumption that mimics the idea of `common trends' in differences-in-differences designs. We illustrate our methods with a study of the effect of a cash transfer program for post-education attendance in Colombia.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.04416&r=ecm
  7. By: Chevillon, Guillaume (ESSEC Research Center, ESSEC Business School)
    Abstract: This paper studies the properties of multi-step projections and forecasts obtained using either iterated or direct methods. The models considered are local asymptotic: they allow for a near unit root and a local-to-zero drift. We treat short-, intermediate- and long-term forecasting by considering the horizon in relation to the observable sample size. We show the implications of our results for models of predictive regressions used in the financial literature, establishing that direct projection methods at intermediate and long horizons are robust to potential misspecification of the serial correlation of the regression errors. We therefore recommend, for better global power in predictive regressions, a combination of test statistics with and without autocorrelation correction.
    Keywords: Multi-step Forecasting; Predictive Regressions; Local Asymptotics; Dynamic Misspecification; Finite Samples; Long Horizons
    JEL: C22 C52 C53
    Date: 2017–07–23
    URL: http://d.repec.org/n?u=RePEc:ebg:essewp:dr-17010&r=ecm
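    To fix ideas with the simplest case, consider an AR(1) y_t = \rho y_{t-1} + \varepsilon_t (a stylized illustration, not the paper's local-asymptotic framework). The iterated h-step forecast powers up the one-step estimate, while the direct forecast regresses y_t on y_{t-h}:
      \[
      \hat y^{\,\text{iter}}_{T+h|T} = \hat\rho^{\,h}\, y_T, \qquad \hat y^{\,\text{dir}}_{T+h|T} = \hat\rho_h\, y_T .
      \]
      The direct method never compounds a misspecified short-run dynamic h times, which is the intuition for its robustness at longer horizons.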
  8. By: Sven Klaassen; Jannis K\"uck; Martin Spindler
    Abstract: Graphical models have become a very popular tool for representing dependencies within a large set of variables and are key for representing causal structures. We provide results for uniform inference on high-dimensional graphical models, with the number of target parameters possibly much larger than the sample size. This is particularly important when certain features or structures of a causal model should be recovered. Our results highlight how, in high-dimensional settings, graphical models can be estimated and recovered with modern machine learning methods in complex data sets. We also demonstrate in a simulation study that our procedure has good small-sample properties.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.10532&r=ecm
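    For background, a common estimation step for high-dimensional Gaussian graphical models is nodewise lasso regression (Meinshausen-Bühlmann neighborhood selection), sketched below in Python; the paper's contribution, uniformly valid inference across many edge parameters, requires debiasing steps this sketch omits:
      import numpy as np
      from sklearn.linear_model import LassoCV

      def neighborhood_selection(X):
          """Recover the edge set by lasso-regressing each variable
          on all the others; a nonzero coefficient suggests an edge."""
          n, p = X.shape
          edges = set()
          for j in range(p):
              others = np.delete(np.arange(p), j)
              fit = LassoCV(cv=5).fit(X[:, others], X[:, j])
              for kk, coef in zip(others, fit.coef_):
                  if abs(coef) > 1e-8:
                      edges.add((min(j, kk), max(j, kk)))  # undirected
          return sorted(edges)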
  9. By: Haroon Mumtaz (Queen Mary University of London)
    Abstract: This paper extends the procedure developed by Jurado et al. (2015) to allow the estimation of measures of uncertainty that can be attributed to specific structural shocks. This enables researchers to investigate the 'origin' of a change in overall macroeconomic uncertainty. To demonstrate the proposed method we consider two applications. First, we estimate UK macroeconomic uncertainty due to external shocks and show that this component has become increasingly important over time for overall uncertainty. Second, we estimate US macroeconomic uncertainty conditioned on monetary policy shocks, with the results suggesting that while policy uncertainty was important during the early 1980s, recent contributions are estimated to be modest.
    Keywords: FAVAR, Stochastic volatility, Proxy VAR, Uncertainty measurement
    JEL: C2 C11 E3
    Date: 2018–08–15
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:864&r=ecm
  10. By: Andreas Groll; Julien Hambuckers; Thomas Kneib; Nikolaus Umlauf
    Abstract: For numerous applications it is of interest to provide full probabilistic forecasts, which are able to assign probabilities to each predicted outcome. Therefore, attention is constantly shifting from conditional mean models to probabilistic distributional models capturing location, scale, shape (and other aspects) of the response distribution. One of the most established models for distributional regression is the generalized additive model for location, scale and shape (GAMLSS). In high-dimensional data set-ups, classical fitting procedures for the GAMLSS often become rather unstable, and methods for variable selection are desirable. We therefore propose a regularization approach for high-dimensional data set-ups in the framework of GAMLSS. It is designed for linear covariate effects and is based on L1-type penalties. The following three penalization options are provided: the conventional least absolute shrinkage and selection operator (LASSO) for metric covariates, and both group and fused LASSO for categorical predictors. The methods are investigated both for simulated data and for two real data examples, namely Munich rent data and data on extreme operational losses from the Italian bank UniCredit.
    Keywords: GAMLSS, distributional regression, model selection, LASSO, fused LASSO
    JEL: C13 C15 C18
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2018-16&r=ecm
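    Schematically, with linear predictors for each distribution parameter, the approach maximizes an L1-penalized GAMLSS likelihood of the form (a schematic display; the group and fused variants replace the plain L1 norms for categorical predictors):
      \[
      \ell_{\text{pen}} = \sum_{i=1}^{n} \log f\bigl(y_i;\, \mu_i,\, \sigma_i,\, \nu_i\bigr) \;-\; \lambda_\mu \lVert \beta_\mu \rVert_1 \;-\; \lambda_\sigma \lVert \beta_\sigma \rVert_1 \;-\; \lambda_\nu \lVert \beta_\nu \rVert_1 ,
      \]
      where \mu_i, \sigma_i and \nu_i depend on covariates through the coefficient vectors \beta_\mu, \beta_\sigma and \beta_\nu.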
  11. By: Kuang, Kun (Tsinghua University); Xiong, Ruoxuan (Stanford University); Cui, Peng (Tsinghua University); Athey, Susan (Stanford University); Li, Bo (Tsinghua University)
    Abstract: In many important machine learning applications, the training distribution used to learn a probabilistic classifier differs from the testing distribution on which the classifier will be used to make predictions. Traditional methods correct the distribution shift by reweighting the training data with the ratio of the density between test and training data. In many applications training takes place without prior knowledge of the testing distribution on which the algorithm will be applied in the future. Recently, methods have been proposed to address the shift by learning causal structure, but those methods rely on the diversity of multiple training datasets to achieve good performance, and have complexity limitations in high dimensions. In this paper, we propose a novel Deep Global Balancing Regression (DGBR) algorithm to jointly optimize a deep auto-encoder model for feature selection and a global balancing model for stable prediction across unknown environments. The global balancing model constructs balancing weights that facilitate estimation of partial effects of features (holding fixed all other features), a problem that is challenging in high dimensions, and thus helps to identify stable, causal relationships between features and outcomes. The deep auto-encoder model is designed to reduce the dimensionality of the feature space, thus making global balancing easier. We show, both theoretically and with empirical experiments, that our algorithm can make stable predictions across unknown environments. Our experiments on both synthetic and real world datasets demonstrate that our DGBR algorithm outperforms the state-of-the-art methods for stable prediction across unknown environments.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:3695&r=ecm
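    The traditional reweighting referred to above multiplies each training observation by the covariate density ratio
      \[
      w(x) = \frac{p_{\text{test}}(x)}{p_{\text{train}}(x)},
      \]
      which is infeasible when the test distribution is unknown at training time; removing that requirement is the motivation for the global balancing weights.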
  12. By: Hualde, Javier; Ørregaard Nielsen, Morten
    Abstract: We consider truncated (or conditional) sum of squares estimation of a parametric model composed of a fractional time series and an additive generalized polynomial trend. Both the memory parameter, which characterizes the behaviour of the stochastic component of the model, and the exponent parameter, which drives the shape of the deterministic component, are not only treated as unknown real numbers, but are also allowed to lie in arbitrarily large (but finite) intervals. Thus, our model captures different forms of nonstationarity and noninvertibility. As in related settings, the proof of consistency (which is a prerequisite for proving asymptotic normality) is challenging due to non-uniform convergence of the objective function over a large admissible parameter space, but, in addition, our framework is substantially more involved due to the competition between stochastic and deterministic components. We establish consistency and asymptotic normality under quite general circumstances, finding that results differ crucially depending on the relative strength of the deterministic and stochastic components.
    Keywords: Financial Economics
    Date: 2017–05
    URL: http://d.repec.org/n?u=RePEc:ags:quedwp:274702&r=ecm
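    A representative specification consistent with the abstract (the notation here is ours, and the paper's exact formulation may differ) combines a power trend with a type II fractional process:
      \[
      y_t = \beta\, t^{\phi} + x_t, \qquad \Delta^{\delta} x_t = \varepsilon_t\, 1\{t \ge 1\},
      \]
      with the memory parameter \delta and the trend exponent \phi both unknown and allowed to lie in arbitrarily large (but finite) intervals.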
  13. By: Djogbenou, Antoine A.
    Abstract: Although globalization has shaped the world economy in recent decades, emerging economies have experienced impressive growth compared to developed economies, suggesting a decoupling between developed and emerging business cycles. Using activity variables for developed and emerging economies, we investigate whether this assertion is supported by the observed data. Based on a two-level factor model, we assume these activity variables can be decomposed into a global component, an emerging or developed common component, and idiosyncratic national shocks. We propose a statistical test for the null hypothesis of a one-level specification, in which it is irrelevant to distinguish between emerging and developed latent factors, against the two-level alternative. This paper provides a theoretical justification and simulations that document the testing procedure. An application of the test to a panel of developed and emerging countries yields strong statistical evidence of decoupling.
    Keywords: Financial Economics, Research Methods/ Statistical Methods
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:ags:quedwp:274718&r=ecm
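    A generic two-level factor decomposition of the kind being tested (the notation is illustrative) writes activity variable i of group g (developed or emerging) as
      \[
      X_{it} = \lambda_i' G_t + \gamma_i' F_t^{(g)} + e_{it},
      \]
      where G_t are global factors, F_t^{(g)} are group-specific factors and e_{it} are idiosyncratic national shocks; the one-level null sets the group-specific term to zero.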
  14. By: Franses, Ph.H.B.F.; Wiemann, T.
    Abstract: This paper adapts the non-parametric Dynamic Time Warping (DTW) technique in an application to examine the temporal alignment and similarity across economic time series. DTW has important advantages over existing measures in economics as it alleviates concerns regarding a pre-defined fixed temporal alignment of series. For example, in contrast to current methods, DTW can capture alternations between leading and lagging relationships of series. We illustrate DTW in a study of US states’ business cycles around the Great Recession, and find considerable evidence that temporal alignments across states are dynamic. Through cluster analysis, we further document state-varying recoveries from the recession.
    Keywords: Business cycles, Non-parametric method, Dynamic Time Warping
    JEL: C14 C50 C87 E32
    Date: 2018–08–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:109916&r=ecm
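    For readers unfamiliar with the technique, a minimal dynamic-programming implementation of the classical DTW distance in Python (textbook DTW with absolute-difference costs; the paper's empirical choices, such as step constraints or windowing, may differ):
      import numpy as np

      def dtw_distance(x, y):
          """Classical dynamic time warping distance between 1-D series."""
          n, m = len(x), len(y)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(x[i - 1] - y[j - 1])
                  # Best of a match, an insertion, or a deletion.
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                       D[i - 1, j - 1])
          return D[n, m]

      # Two series with the same cycle but shifted timing score as close:
      a = np.sin(np.linspace(0, 6, 80))
      b = np.sin(np.linspace(0, 6, 80) - 1.0)
      print(dtw_distance(a, b))
    Because the alignment path is chosen by the data, a state that leads the national cycle in one episode and lags it in the next is still matched correctly, which a fixed-lag correlation cannot do.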
  15. By: Eric Benhamou
    Abstract: The Sharpe ratio is widely used in asset management to compare and benchmark funds and asset managers. It is the ratio of the excess return to the strategy's standard deviation. However, the ingredients of the Sharpe ratio, namely the expected returns and the volatilities, are unknown and must be estimated statistically, so the Sharpe ratio reported by funds is prone to statistical estimation error. Lo (2002) and Mertens (2002) derive explicit expressions for the statistical distribution of the Sharpe ratio using standard asymptotic theory under several sets of assumptions (independent normally distributed, and identically distributed, returns). In this paper, we provide the exact distribution of the Sharpe ratio for independent normally distributed returns: up to a rescaling factor, the Sharpe ratio statistic follows a non-central Student t distribution, whose characteristics have been widely studied by statisticians. The asymptotic behavior of our distribution recovers the result of Lo (2002). We also show that the empirical Sharpe ratio is asymptotically optimal in the sense that it achieves the Cramér-Rao bound. We then study the empirical Sharpe ratio under AR(1) assumptions and investigate the effect of the compounding period on the Sharpe ratio (for instance, computing an annual Sharpe ratio from monthly data). We finally provide a general formula for the case of heteroskedasticity and autocorrelation.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.04233&r=ecm
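    The identity behind the exact result is elementary: for n i.i.d. normal excess returns with sample mean \hat\mu and sample standard deviation \hat\sigma,
      \[
      \widehat{SR} = \frac{\hat\mu}{\hat\sigma} = \frac{t}{\sqrt{n}}, \qquad t = \frac{\hat\mu}{\hat\sigma / \sqrt{n}},
      \]
      so \sqrt{n}\,\widehat{SR} is a one-sample t statistic and follows a non-central Student t distribution with n - 1 degrees of freedom and non-centrality parameter \sqrt{n}\,SR.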
  16. By: MacKinnon, James G.
    Abstract: Inference using large datasets is not nearly as straightforward as conventional econometric theory suggests when the disturbances are clustered, even with very small intra-cluster correlations. The information contained in such a dataset grows much more slowly with the sample size than it would if the observations were independent. Moreover, inferences become increasingly unreliable as the dataset gets larger. These assertions are based on an extensive series of estimations undertaken using a large dataset taken from the U.S. Current Population Survey.
    Keywords: Financial Economics
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:ags:quedwp:274691&r=ecm
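    The slow growth of information is the familiar design-effect phenomenon. As background (the standard Moulton/Kish approximation, not the paper's derivation): with clusters of m observations, intra-cluster correlation \rho, and a cluster-invariant regressor, the coefficient variance is inflated by
      \[
      1 + (m - 1)\rho,
      \]
      so the effective sample size is roughly n / [1 + (m - 1)\rho]; with m = 1000 and \rho = 0.01, a dataset delivers only about a tenth of its nominal information.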
  17. By: Rina Friedberg; Julie Tibshirani; Susan Athey; Stefan Wager
    Abstract: Random forests are a powerful method for non-parametric regression, but are limited in their ability to fit smooth signals, and can show poor predictive performance in the presence of strong, smooth effects. Taking the perspective of random forests as an adaptive kernel method, we pair the forest kernel with a local linear regression adjustment to better capture smoothness. The resulting procedure, local linear forests, enables us to improve on asymptotic rates of convergence for random forests with smooth signals, and provides substantial gains in accuracy on both real and simulated data.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.11408&r=ecm
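    A minimal sketch of the idea in Python, using scikit-learn's RandomForestRegressor as the source of kernel weights (the authors' grf-style implementation uses honest trees and a tuned ridge penalty, which this sketch omits):
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def forest_weights(forest, X_train, x0):
          """Forest kernel: how often each training point shares a leaf with x0."""
          leaves = forest.apply(X_train)                # (n, n_trees) leaf ids
          leaves0 = forest.apply(x0.reshape(1, -1))[0]  # x0's leaf in each tree
          w = np.zeros(len(X_train))
          for b in range(leaves.shape[1]):
              mates = leaves[:, b] == leaves0[b]
              w[mates] += 1.0 / mates.sum()             # normalize within leaf
          return w / leaves.shape[1]

      def llf_predict(forest, X, y, x0, ridge=1e-3):
          """Weighted local linear (ridge) regression centered at x0."""
          w = forest_weights(forest, X, x0)
          A = np.hstack([np.ones((len(X), 1)), X - x0])  # intercept + slopes
          AW = A * w[:, None]
          theta = np.linalg.solve(A.T @ AW + ridge * np.eye(A.shape[1]),
                                  AW.T @ y)
          return theta[0]                # intercept = prediction at x0

      # Usage: forest = RandomForestRegressor(n_estimators=200).fit(X, y)
      #        yhat0 = llf_predict(forest, X, y, x0)
    The local slopes absorb smooth trends inside each neighborhood, which is exactly where plain forests, as piecewise-constant predictors, lose accuracy.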
  18. By: Rommel, Jens; Weltin, Meike
    Abstract: In an analysis of articles published in ten years of the American Economic Review, Deirdre McCloskey and Stephen Ziliak have shown that economists often fail to adequately distinguish economic and statistical significance. In this paper, we briefly review their arguments and develop a ten-item questionnaire on statistical practice in the Agricultural Economics community. We apply our questionnaire to the 2015 volumes of the American Journal of Agricultural Economics, the European Review of Agricultural Economics, the Journal of Agricultural Economics, and the American Economic Review. We specifically focus on the “sizeless stare” and the neglect of economic significance. Our initial results indicate that there is room for improvement in statistical practice. Empirical papers rarely consider the power of statistical tests or run simulations. The economic consequences of estimation results are often not adequately addressed. We discuss the implications of our findings for the publication process and teaching in Agricultural Economics.
    Keywords: Research Methods/ Statistical Methods
    Date: 2017–08–15
    URL: http://d.repec.org/n?u=RePEc:ags:gewi17:261998&r=ecm
  19. By: Cavaliere, Giuseppe; Ørregaard Nielsen, Morten; Taylor, A.M. Robert
    Keywords: Financial Economics
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:ags:quedwp:274716&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.