nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒05‒02
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Adaptive Elastic Net GMM Estimation with Many Invalid Moment Conditions: Simultaneous Model and Moment Selection By Yoonseok Lee; Mehmet Caner; Xu Han
  2. Estimation of Heterogeneous Panels with Structural Breaks By Badi H. Baltagi; Qu Feng; Chihwa Kao
  3. Estimation and Identification of Change Points in Panel Models with Nonstationary or Stationary Regressors and Error Term By Badi H. Baltagi; Chihwa Kao; Long Liu
  4. Random Effects, Fixed Effects and Hausman’s Test for the Generalized Mixed Regressive Spatial Autoregressive Panel By Badi H. Baltagi; Long Liu
  5. WILLINGNESS TO PAY CONFIDENCE INTERVAL ESTIMATION METHODS: COMPARISONS AND EXTENSIONS By Valerio Gatta; Edoardo Marcucci; Luisa Scaccia
  6. A parametric test to discriminate between a linear regression model and a linear latent growth model By Marco Barnabani
  7. "On Testing for Sphericity with Non-normality in a Fixed Effects Panel Data Model By Badi H. Baltagi; Chihwa Kao; Bin Peng
  8. Nonstationary ARCH and GARCH with t-Distributed Innovations By Rasmus Søndergaard Pedersen; Anders Rahbek
  9. A Varying-Coefficient Panel Data Model with Fixed Effects: Theory and an Application to U.S. Commercial Banks By Guohua Feng; Jiti Gao; Bin Peng; Xiaohui Zhang
  10. Forecasting Compositional Time Series: A State Space Approach By Ralph D. Snyder; J. Keith Ord; Anne B. Koehler; Keith R. McLaren; Adrian Beaumont
  11. Seasonal Unit Roots and Structural Breaks in agricultural time series: Monthly exports and domestic supply in Argentina By Mendez Parra, Maximiliano
  12. Bayesian Analysis of Econometric Time Series Models Using Hybrid Integration Rules By Ajax R. B. Moreira; Dani Gamerman
  13. A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction By Christoph Bergmeir; Rob J Hyndman; Bonsoo Koo
  14. Overidentification in Regular Models By Xiaohong Chen; Andres Santos
  15. Information Measures for Nonparametric Kernel Estimation By Neshat Beheshti; Jeffrey S. Racine; Ehsan S. Soofi
  16. Estimation of a Weights Matrix for Determining Spatial Effects By Elcyon Caiado Rocha Lima; Paulo Brígido Rocha Macedo
  17. Forecasting Coherent Volatility Breakouts By Didenko, Alexander; Dubovikov, Michael; Poutko, Boris
  18. Term Structure Dynamics, Macro-Finance Factors and Model Uncertainty By Byrne, Joseph; Cao, Shuo; Korobilis, Dimitris
  19. “Powered to Detect Small Effect Sizes”: You keep saying that. I do not think it means what you think it means. By Michael Sanders; Aisling Ní Chonaire
  20. The Meaning of Failed Replications: A Review and Proposal By Clemens, Michael A.

  1. By: Yoonseok Lee (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Mehmet Caner (North Carolina State University, Department of Economics); Xu Han (City University of Hong Kong, Department of Economics and Finance)
    Abstract: This paper develops the adaptive elastic net GMM estimator in large dimensional models with many possibly invalid moment conditions, where both the number of structural parameters and the number of moment conditions may increase with the sample size. The basic idea is to conduct the standard GMM estimation combined with two penalty terms: the quadratic regularization and the adaptively weighted lasso shrinkage. The new estimation procedure consistently selects both the nonzero structural parameters and the valid moment conditions. At the same time, it uses information only from the valid moment conditions to estimate the selected structural parameters and thus achieves the standard GMM efficiency bound as if we knew the valid moment conditions ex ante. It is shown that the quadratic regularization is important for obtaining the efficient estimator. We also study the tuning parameter choice, and show that selection consistency still holds without assuming Gaussianity. We apply the new estimation procedure to dynamic panel data models, where both the time and cross section dimensions are large. The new estimator is robust to possible serial correlation in the regression error terms.
    Keywords: Adaptive Elastic Net, GMM, many invalid moments, large dimensional models, efficiency bound, tuning parameter choice, dynamic panel
    JEL: C13 C23
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:177&r=ecm
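    A minimal numerical sketch of the two-penalty construction described above, assuming a toy linear model with instruments; the moment-selection layer of the actual estimator is omitted here and all tuning values are illustrative:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n, p = 500, 3
      Z = rng.normal(size=(n, 6))                    # six moment conditions
      X = Z[:, :p] + 0.5 * rng.normal(size=(n, p))   # regressors correlated with Z
      beta_true = np.array([1.0, 0.5, 0.0])          # third coefficient truly zero
      y = X @ beta_true + rng.normal(size=n)

      def objective(beta, lam1, lam2, w):
          """GMM criterion plus adaptively weighted lasso and ridge penalties."""
          g = Z * (y - X @ beta)[:, None]            # n x 6 moment contributions
          gbar = g.mean(axis=0)
          W = np.linalg.inv(g.T @ g / n)             # continuously updated weight matrix
          return n * gbar @ W @ gbar + lam1 * (w * np.abs(beta)).sum() + lam2 * beta @ beta

      beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # pilot estimate
      w_adapt = 1.0 / (np.abs(beta0) + 1e-6)         # adaptive lasso weights
      fit = minimize(objective, beta0, args=(0.1, 0.05, w_adapt), method="Nelder-Mead")
      print(fit.x)                                   # third entry shrunk toward zero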
  2. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Qu Feng (Division of Economics, School of Humanities and Social Sciences, Nanyang Technological University); Chihwa Kao (Center for Policy Research, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244)
    Abstract: This paper extends Pesaran's (2006) work on common correlated effects (CCE) estimators for large heterogeneous panels with a general multifactor error structure by allowing for unknown common structural breaks. Structural breaks, due for example to new policy implementation or major technological shocks, are more likely to occur over a longer time span. Consequently, ignoring structural breaks may lead to inconsistent estimation and invalid inference. We propose a general framework that includes heterogeneous panel data models and structural break models as special cases. The least squares method proposed by Bai (1997a, 2010) is applied to estimate the common change points, and the consistency of the estimated change points is established. We find that the CCE estimator has the same asymptotic distribution as if the true change points were known. Additionally, Monte Carlo simulations are used to verify the main results of this paper.
    Keywords: Heterogeneous Panels, Cross-sectional Dependence, Structural Breaks, Common Correlated Effects
    JEL: C23 C33
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:179&r=ecm
  3. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Chihwa Kao (Department of Economics, Center for Policy Research, 426 Eggers Hall, Syracuse University, Syracuse, NY 13244); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio)
    Abstract: This paper studies the estimation of change points in panel models. We extend Bai (2010) and Feng, Kao and Lazarová (2009) to cases where the regressors and the error term are stationary or nonstationary, and where the change point may or may not be present. We prove consistency and derive the asymptotic distributions of the Ordinary Least Squares (OLS) and First Difference (FD) estimators. We find that the FD estimator is robust in all cases considered.
    Keywords: Panel Data, Change Point, Consistency, Nonstationarity
    JEL: C12 C13 C22
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:178&r=ecm
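    A sketch of First Difference change-point estimation in the spirit of this abstract, under an assumed design with I(1) regressors and errors; the break date is the candidate that minimizes the pooled sum of squared residuals of the differenced model, and all names and values are illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      N, T, k0 = 50, 40, 25                            # panel size and true break date
      x = rng.normal(size=(N, T)).cumsum(axis=1)       # I(1) regressor
      beta = np.where(np.arange(T) < k0, 1.0, 2.0)     # slope shifts at k0
      y = beta * x + rng.normal(size=(N, T)).cumsum(axis=1)  # I(1) error term

      dy, dx = np.diff(y, axis=1), np.diff(x, axis=1)  # first differences

      def pooled_ssr(k):
          """Pooled SSR with separate slopes before and after candidate break k."""
          ssr = 0.0
          for sy, sx in ((dy[:, :k], dx[:, :k]), (dy[:, k:], dx[:, k:])):
              b = (sx * sy).sum() / (sx ** 2).sum()
              ssr += ((sy - b * sx) ** 2).sum()
          return ssr

      k_hat = min(range(5, T - 5), key=pooled_ssr)
      print(k_hat)                                     # should land near k0 = 25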
  4. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio)
    Abstract: This paper suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi, Egger and Pfaffermayr (2013) by the inclusion of a spatial lag dependent variable. The estimation method utilizes the Generalized Moments method suggested by Kapoor, Kelejian, and Prucha (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
    Keywords: Panel Data; Fixed Effects; Random Effects; Spatial Model; Hausman Test
    JEL: C12 C13 C23
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:174&r=ecm
  5. By: Valerio Gatta (University of Roma Tre and CREI); Edoardo Marcucci (University of Roma Tre and CREI); Luisa Scaccia (University of Macerata)
    Abstract: This paper systematically compares methods to build confidence intervals for willingness to pay measures in a discrete choice context. It contributes to the literature by including methods developed in other research fields. Monte Carlo simulations are used to assess the performance of all the methods considered. The various scenarios evaluated reveal a certain skewness in the estimated willingness to pay distribution. This should be reflected in the confidence intervals. Results show that the commonly used Delta method, producing symmetric intervals around the point estimate, often fails to account for such a skewness. Both the Fieller method and the likelihood ratio test inversion method produce more realistic confidence intervals. Some bootstrap methods also perform reasonably well. Finally, empirical data are used to illustrate an application of the methods considered.
    Keywords: Confidence intervals, willingness to pay, discrete choice models, elasticities, standard errors
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:rcr:wpaper:03_14&r=ecm
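    A small sketch of the Delta/Fieller contrast with made-up logit estimates, where willingness to pay is the negative ratio of an attribute coefficient to the cost coefficient; the Fieller interval comes from inverting the t-test of b_attr + theta * b_cost = 0:

      import numpy as np
      from scipy.stats import norm

      b_attr, b_cost = 0.8, -2.0                 # illustrative coefficient estimates
      V = np.array([[0.04, 0.01],                # their (made-up) covariance matrix
                    [0.01, 0.09]])
      wtp = -b_attr / b_cost
      z = norm.ppf(0.975)

      # Delta method: symmetric interval from a first-order variance approximation.
      grad = np.array([-1.0 / b_cost, b_attr / b_cost ** 2])
      se = np.sqrt(grad @ V @ grad)
      print("delta  :", wtp - z * se, wtp + z * se)

      # Fieller: roots of a quadratic in theta; the interval need not be symmetric.
      a = b_cost ** 2 - z ** 2 * V[1, 1]
      b = 2 * (b_attr * b_cost - z ** 2 * V[0, 1])
      c = b_attr ** 2 - z ** 2 * V[0, 0]
      print("fieller:", sorted(np.roots([a, b, c])))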
  6. By: Marco Barnabani (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze)
    Abstract: In longitudinal studies with subjects measured repeatedly across time, an important problem is how to choose between a linear regression model and a linear latent growth model as the process generating the data. Approaches based both on information criteria and on asymptotic hypothesis tests on the variances of “random” components are widely used but not completely satisfactory. In the paper we propose a finite sample parametric test based on the trace of the product of estimates of two variance covariance matrices, one defined when data come from a linear regression model, the other defined when data come from a linear latent growth model. The sampling distribution of the test statistic so defined depends on the model generating the data. It can be a “standard” F-distribution or a linear combination of F-distributions. In the paper a unified sampling distribution based on a generalized F-distribution is proposed. The knowledge of this distribution allows us to make inference in a classical hypothesis testing framework. The test statistic can be used by itself to discriminate between the two models and/or, duly modified, it can be used to test randomness of single components of the linear latent growth model, avoiding the boundary problem of the likelihood ratio test statistic. Moreover, it can be used in conjunction with indicators based on information criteria, giving estimates of the probability of accepting or rejecting the model chosen.
    Keywords: Linear Mixed Models; Longitudinal data; Generalized F-distribution; Hypothesis testing.
    JEL: C23
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2015_04&r=ecm
  7. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Bin Peng (Department of Economics, Maxwell School, Syracuse University)
    Abstract: Building upon the work of Chen et al. (2010), this paper proposes a test for sphericity of the variance-covariance matrix in a fixed effects panel data regression model without the normality assumption on the disturbances.
    Keywords: Sphericity; Panel Data; Cross-sectional Dependence; John Test
    JEL: C13 C33
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:176&r=ecm
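    For concreteness, a John-type U statistic computed on simulated residuals; the paper's actual contribution, the limiting theory without normality, is not reproduced in this sketch and the design is illustrative:

      import numpy as np

      rng = np.random.default_rng(2)
      N, T = 20, 100                         # cross-section and time dimensions
      u = rng.normal(size=(T, N))            # stand-in for within-regression residuals

      S = np.cov(u, rowvar=False)            # N x N cross-sectional covariance
      R = S / (np.trace(S) / N)              # scale out the average variance
      U = np.trace((R - np.eye(N)) @ (R - np.eye(N))) / N
      print(U)                               # small under sphericity; grows with cross-sectional dependence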
  8. By: Rasmus Søndergaard Pedersen (Department of Economics, University of Copenhagen); Anders Rahbek (Department of Economics, University of Copenhagen)
    Abstract: Consistency and asymptotic normality are established for the maximum likelihood estimators in the nonstationary ARCH and GARCH models with general t-distributed innovations. The results hold for joint estimation of the (G)ARCH effects and the degrees of freedom parameter of the t-distribution. With T denoting the sample size, √T-convergence is shown to hold, with closed-form expressions for the multivariate covariances.
    Keywords: ARCH, GARCH, asymptotic normality, asymptotic theory, consistency, t-distribution, maximum likelihood, nonstationarity.
    JEL: C32
    Date: 2015–04–24
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1507&r=ecm
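    A sketch of joint maximum likelihood in (omega, alpha, beta, nu) for a GARCH(1,1) with standardized t innovations; the simulated design below is stationary, whereas the paper's theory also covers the nonstationary region, and all values are illustrative:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import t as student_t

      rng = np.random.default_rng(3)
      T = 2000
      omega, alpha, beta, nu = 0.1, 0.1, 0.85, 5.0
      eps = rng.standard_t(nu, size=T) * np.sqrt((nu - 2) / nu)   # unit-variance t
      x, h = np.empty(T), np.empty(T)
      h[0] = omega / (1 - alpha - beta)
      x[0] = np.sqrt(h[0]) * eps[0]
      for s in range(1, T):
          h[s] = omega + alpha * x[s - 1] ** 2 + beta * h[s - 1]
          x[s] = np.sqrt(h[s]) * eps[s]

      def negloglik(theta):
          """Joint negative log-likelihood in (omega, alpha, beta, nu)."""
          w, a, b, v = theta
          if min(w, a, b) <= 0 or v <= 2:
              return np.inf
          h = np.empty(T)
          h[0] = x.var()
          for s in range(1, T):
              h[s] = w + a * x[s - 1] ** 2 + b * h[s - 1]
          z = x / np.sqrt(h)
          c = np.sqrt(v / (v - 2))           # rescale to a unit-variance t density
          return -(student_t.logpdf(z * c, v) + np.log(c) - 0.5 * np.log(h)).sum()

      fit = minimize(negloglik, x0=[0.05, 0.05, 0.9, 8.0], method="Nelder-Mead")
      print(fit.x)                           # estimates of (omega, alpha, beta, nu)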
  9. By: Guohua Feng; Jiti Gao; Bin Peng; Xiaohui Zhang
    Abstract: In this paper, we propose a panel data semiparametric varying-coefficient model in which covariates (variables affecting the coefficients) are purely categorical. This model has two features: first, fixed effects are included to allow for correlation between individual unobserved heterogeneity and the regressors; second, it allows for cross-sectional dependence through a general spatial error dependence structure. We derive a semiparametric estimator for our model by using a modified within transformation, and then establish the asymptotic and finite-sample properties of this estimator. Finally, we illustrate our model by analysing the effects of state-level banking regulations on the returns to scale of commercial banks in the U.S. Our empirical results suggest that returns to scale are higher in more regulated states than in less regulated states.
    Keywords: Categorical variable; estimation theory; nonlinear panel data model; returns to scale.
    JEL: C23 C51 D24 G21
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-9&r=ecm
  10. By: Ralph D. Snyder; J. Keith Ord; Anne B. Koehler; Keith R. McLaren; Adrian Beaumont
    Abstract: A method is proposed for forecasting compositional time series such as the market shares of multiple brands. Its novel feature is that it relies on multi-series adaptations of exponential smoothing combined with the log-ratio transformation for the conversion of proportions onto the real line. It is designed to produce forecasts that are non-negative and sum to one; are invariant to the choice of the base series in the log-ratio transformation; recognize and exploit features such as serial dependence and non-stationary movements in the data; allow for the possibility of non-uniform interactions between the series; and contend with series that start late, finish early, or have values close to zero. Relying on an appropriate multivariate innovations state space model, it can be used to generate prediction distributions in addition to point forecasts and to compute the probabilities of market share increases together with prediction intervals. A shared structure between the series in the multivariate model is used to avoid the usual proliferation of parameters. The forecasting method is illustrated using data on the annual market shares of the major (groups of) brands in the U.S. automobile market over the period 1961-2013.
    Keywords: Exponential smoothing; Proportions; Prediction intervals; Automobile sales; Market shares.
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-11&r=ecm
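    The key device, the log-ratio transform and its inverse, is easy to illustrate. In this sketch a naive drift forecast stands in for the paper's multivariate exponential smoothing; the inverse map guarantees forecast shares that are non-negative and sum to one:

      import numpy as np

      rng = np.random.default_rng(4)
      T, K = 60, 3
      shares = rng.dirichlet(alpha=[8, 5, 3], size=T)   # toy market shares, rows sum to 1

      # Additive log-ratio transform with the last series as base: maps the
      # simplex onto the real line, where standard time series methods apply.
      z = np.log(shares[:, :-1] / shares[:, -1:])

      # Placeholder forecast on the transformed scale: random walk with drift.
      z_hat = z[-1] + (z[-1] - z[0]) / (T - 1)

      # Inverse transform: forecasts are non-negative and sum to one by construction.
      e = np.exp(np.append(z_hat, 0.0))
      share_forecast = e / e.sum()
      print(share_forecast, share_forecast.sum())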
  11. By: Mendez Parra, Maximiliano
    Abstract: Monthly time-series data on agricultural commodities tend to present strong and particular patterns of seasonality. The presence of zero values in some of the seasons is not explained by an absence of reporting but is the result of actual features of agricultural processes. Seasonal unit root tests have never been applied to data that exhibit these characteristics, with a consequent lack of critical values to be used in the inference. Monte Carlo simulations are performed to obtain critical values that can be used for this type of data. In addition, seasonal unit root tests under unknown structural breaks have never been applied to monthly time series of any kind, with the associated absence of critical values to be used in the testing procedure. Monte Carlo simulations are also performed to tabulate these critical values. It is observed that the presence of zero values does not invalidate the available critical values, with or without unknown structural breaks; the values obtained here for the monthly seasonal unit root tests under unknown structural breaks can be used in any other kind of exercise. A seasonal unit root test with more power is also considered and critical values are obtained to perform the inference. The ability of the seasonal unit root tests to select the right break date is analysed, with some results diverging from previous findings. An application of these techniques to the monthly quantities of exports and domestic supply of three agricultural commodities in Argentina between 1994 and 2008, which exhibit the patterns of seasonality described, is presented. Although some evidence of stochastic seasonality is found in some of these series, in general a deterministic approach can adequately describe their seasonality.
    Keywords: Monthly; Unit Root; seasonality; seasonal; Argentina; commodities; test; cointegration
    JEL: C12 C22 C4 Q13
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:63831&r=ecm
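    The tabulation logic is generic: simulate the test statistic under the null many times and read off percentiles. A stripped-down sketch with an ordinary (non-seasonal) Dickey-Fuller t statistic; the paper applies the same idea to monthly seasonal tests with zero values and structural breaks:

      import numpy as np

      rng = np.random.default_rng(5)

      def df_t_stat(T):
          """t-statistic on rho in Dy_t = rho * y_{t-1} + e_t under a unit root."""
          y = rng.normal(size=T).cumsum()
          dy, ylag = np.diff(y), y[:-1]
          rho = (ylag @ dy) / (ylag @ ylag)
          resid = dy - rho * ylag
          se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
          return rho / se

      stats = np.array([df_t_stat(200) for _ in range(5000)])
      print(np.percentile(stats, [1, 5, 10]))   # simulated 1%, 5% and 10% critical values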
  12. By: Ajax R. B. Moreira; Dani Gamerman
    Abstract: This paper is concerned with Bayesian inference procedures for commonly used time series models. In particular, the dynamic or state-space models, the time-varying vector autoregressive model and the structural vector autoregressive model are considered in detail. Inference procedures are based on a hybrid integration scheme where state parameters are analytically integrated and hyperparameters are integrated by Markov chain Monte Carlo methods. Credibility regions for forecasts and impulse responses are then derived. The procedures are illustrated using real data from the Brazilian economy.
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:0105&r=ecm
  13. By: Christoph Bergmeir; Rob J Hyndman; Bonsoo Koo
    Abstract: One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). However, when it comes to time series forecasting, because of the inherent serial correlation and potential non-stationarity of the data, its application is not straightforward, and it is often omitted by practitioners in favor of an out-of-sample (OOS) evaluation. In this paper, we show that the particular setup in which time series forecasting is usually performed using Machine Learning methods renders the use of standard K-fold CV possible. We present theoretical insights supporting our arguments. Furthermore, we present a simulation study where we show empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
    Keywords: cross-validation, time series, auto regression.
    JEL: C52 C53 C22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-10&r=ecm
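    A sketch of the setup the argument relies on: once the series is embedded into a matrix of lagged values, rows behave like (nearly) uncorrelated observations and standard K-fold CV can be compared with an out-of-sample split; scikit-learn is assumed for brevity:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(6)
      T, p = 500, 3
      y = np.zeros(T)
      for s in range(1, T):                 # a stationary AR(1) series
          y[s] = 0.7 * y[s - 1] + rng.normal()

      # Embed into a lagged design matrix: row t holds (y_{t-1}, ..., y_{t-p}).
      X = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
      target = y[p:]

      cv_mse = -cross_val_score(LinearRegression(), X, target,
                                scoring="neg_mean_squared_error",
                                cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()

      split = int(0.8 * len(target))        # conventional out-of-sample split
      m = LinearRegression().fit(X[:split], target[:split])
      oos_mse = np.mean((m.predict(X[split:]) - target[split:]) ** 2)
      print(cv_mse, oos_mse)                # both close to the innovation variance 1.0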
  14. By: Xiaohong Chen (Cowles Foundation, Yale University); Andres Santos (Dept. of Economics, University of California, San Diego)
    Abstract: In models defined by unconditional moment restrictions, specification tests are possible and estimators can be ranked in terms of efficiency whenever the number of moment restrictions exceeds the number of parameters. We show that a similar relationship between potential refutability of a model and semiparametric efficiency is present in a much broader class of settings. Formally, we show a condition we name local overidentification is required for both specification tests to have power against local alternatives and for the existence of both efficient and inefficient estimators of regular parameters. Our results immediately imply semiparametric conditional moment restriction models are typically locally overidentified, and hence their proper specification is locally testable. We further study nonparametric conditional moment restriction models and obtain a simple characterization of local overidentification in that context. As a result, we are able to determine when nonparametric conditional moment restriction models are locally testable, and when plug-in and two stage estimators of regular parameters are semiparametrically efficient.
    Keywords: Overidentification, Semiparametric efficiency, Specification testing, Nonparametric conditional moment restrictions
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1999&r=ecm
  15. By: Neshat Beheshti; Jeffrey S. Racine; Ehsan S. Soofi
    Abstract: This paper addresses the following question: How much information do the kernel function and the bandwidth provide for nonparametric kernel estimation? The question is addressed by showing that kernel estimation of a cumulative distribution function (CDF) is an information processing procedure that transforms the empirical cumulative distribution function into a smooth estimate. The information processing channel is the kernel function itself, which is a conditional distribution with a data point as its location parameter and the bandwidth as its scale parameter. The output of the procedure is the kernel estimate of the CDF, a marginal distribution constructed as the sample average of the kernel functions over the data points. This framework provides a lower bound for the entropy of the kernel estimate of the distribution in terms of the entropy of the kernel function and the bandwidth, an input information measure for kernel smoothing, and a measure of information for kernel estimation. Several well-known kernel functions are compared according to these information measures.
    Keywords: information diagnostics, Kernel selection, entropy, mutual information.
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2015-03&r=ecm
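    The information-processing view has a compact computational core: the smooth CDF estimate is the sample average of kernel CDFs, each centred at a data point with the bandwidth as scale. A minimal Gaussian-kernel sketch (the entropy bounds themselves are not computed here):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      data = rng.normal(size=200)
      h = 1.06 * data.std() * len(data) ** (-1 / 5)   # rule-of-thumb bandwidth

      def kernel_cdf(x, data, h):
          """Smooth CDF estimate: average of Gaussian kernel CDFs at the data points."""
          return norm.cdf((x[:, None] - data[None, :]) / h).mean(axis=1)

      grid = np.linspace(-3, 3, 7)
      print(kernel_cdf(grid, data, h))   # smooth version of the empirical CDF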
  16. By: Elcyon Caiado Rocha Lima; Paulo Brígido Rocha Macedo
    Abstract: Spatial dependence results from the existence of spillover effects such as the impact of the price of one housing unit on the price of its adjacent neighbors. One way to account for spatial dependence is to specify spatial lag models in which a spatially lagged variable is assumed to play a role in explaining the variation of the original dependent variable. Most studies use a priori non-sample information in the construction of the spatial weights matrix which serves as a spatial lag operator. In contrast, this study assumes no a priori value for the spatial weights matrix in the estimation of spillover effects. We adopt both a classical maximum likelihood approach and a Bayesian Sampling-Importance-Resampling (SIR) procedure to estimate the weights matrix and the significance of spatial dependence. We apply the two estimation procedures to data on housing prices in the city of Belo Horizonte, Brazil, and compare the results with those derived by fixing the weights a priori. The analysis shows that the likelihood function of the weights matrix parameters has a well-defined peak, and that the estimated distance-decay parameter is quite different from standard a priori assumptions such as the “all-or-nothing” decay within a cut-off distance or the “inverse distance” decay adopted in the empirical literature.
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:0087&r=ecm
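    A rough sketch of the idea of estimating, rather than fixing, the distance-decay parameter of the weights matrix; a grid search over a least squares criterion, with the spatial lag coefficient held at its true value, stands in for the paper's ML and Bayesian SIR procedures, and the design is entirely illustrative:

      import numpy as np

      rng = np.random.default_rng(8)
      n = 100
      coords = rng.uniform(size=(n, 2))                 # locations of housing units
      d = np.linalg.norm(coords[:, None] - coords[None], axis=2)

      def weights(phi):
          """Row-standardised distance-decay weights, W_ij proportional to d_ij^(-phi)."""
          with np.errstate(divide="ignore"):
              W = d ** -phi
          np.fill_diagonal(W, 0.0)
          return W / W.sum(axis=1, keepdims=True)

      rho, phi_true = 0.5, 2.0                          # spatial lag and true decay
      y = np.linalg.solve(np.eye(n) - rho * weights(phi_true), rng.normal(size=n))

      ssr = lambda phi: np.sum((y - rho * weights(phi) @ y) ** 2)
      grid = np.arange(0.5, 4.01, 0.25)
      print(grid[np.argmin([ssr(p) for p in grid])])    # rough decay estimate, near 2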
  17. By: Didenko, Alexander; Dubovikov, Michael; Poutko, Boris
    Abstract: The paper develops an algorithm for making long-term (up to three months ahead) predictions of volatility reversals based on long memory properties of financial time series. Fractal dimension, computed from a sequence of minimal covers with decreasing scale, is used to decompose volatility into two dynamic components: specific and structural. We introduce two separate models for these components, based on different principles and capable of catching long uptrends in volatility. To test the statistical significance of the algorithm's predictive ability, we introduce several estimators of conditional and unconditional probabilities of reversals in the observed and predicted dynamic components of volatility. Our results could be used for forecasting points of market transition to an unstable state.
    Keywords: stock market; price risk; fractal dimension; market crash; ARCH-GARCH; range-based volatility models; multi-scale volatility; volatility reversals; technical analysis.
    JEL: C14 C49 C5 C58
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:63708&r=ecm
  18. By: Byrne, Joseph; Cao, Shuo; Korobilis, Dimitris
    Abstract: This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
    Keywords: Term Structure of Interest Rates; Nelson-Siegel; Dynamic Model Averaging; Bayesian Methods; Term Premia.
    JEL: C32 C52 E43 E47 G11 G17
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:63844&r=ecm
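    For reference, the Nelson-Siegel structure the paper builds on maps three latent factors (level, slope, curvature) into yields through fixed loadings in maturity; a small sketch with the decay value popularised by Diebold and Li (2006) and made-up factor values:

      import numpy as np

      def ns_loadings(maturities, lam=0.0609):
          """Nelson-Siegel loadings for level, slope and curvature factors."""
          m = np.asarray(maturities, dtype=float)       # maturities in months
          slope = (1 - np.exp(-lam * m)) / (lam * m)
          return np.column_stack([np.ones_like(m), slope, slope - np.exp(-lam * m)])

      factors = np.array([5.0, -1.0, 0.5])              # illustrative level, slope, curvature
      print(ns_loadings([3, 12, 36, 120]) @ factors)    # fitted yields by maturity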
  19. By: Michael Sanders; Aisling Ní Chonaire
    Abstract: Randomised trials in education research are a valuable and increasingly common part of the research landscape. Choosing a sample size large enough to detect an effect but small enough to keep the trial workable is a vital part of the design. In the absence of a crystal ball, rules of thumb are often relied upon. In this paper, we criticise commonly used rules of thumb and show that the effect sizes that can realistically be expected in education research are much more modest than studies are powered to detect. This has important implications for future trials, which should arguably be larger, and for the interpretation of prior, underpowered research.
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:bri:cmpowp:15/337&r=ecm
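    The arithmetic behind the argument is easy to reproduce; a normal-approximation sample size formula for a two-arm trial shows how fast the required sample grows as the detectable standardized effect shrinks:

      import numpy as np
      from scipy.stats import norm

      def n_per_arm(effect_size, power=0.8, alpha=0.05):
          """Sample size per arm for a two-sample test of means, normal approximation."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return int(np.ceil(2 * (z / effect_size) ** 2))

      for d in (0.5, 0.2, 0.1, 0.05):                   # standardized effect sizes
          print(d, n_per_arm(d))                        # 63, 393, 1570, 6280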
  20. By: Clemens, Michael A. (Center for Global Development)
    Abstract: The welcome rise of replication tests in economics has not been accompanied by a single, clear definition of replication. A discrepant replication, in current usage of the term, can signal anything from an unremarkable disagreement over methods to scientific incompetence or misconduct. This paper proposes an unambiguous definition of replication, one that reflects currently common but unstandardized use. It contrasts this definition with decades of unsuccessful attempts to standardize terminology, and argues that many prominent results described as replication tests – in labor, development, and other fields of economics – should not be described as such. Adopting this definition can improve incentives for researchers, encouraging more and better replication tests.
    Keywords: replication, robustness, transparency, open data, ethics, reproducible, replicate, misconduct, fraud, error, code, registry
    JEL: B40 C18 C80
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9000&r=ecm

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.