nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒04‒22
twenty-six papers chosen by
Sune Karlsson
Örebro universitet

  1. Estimation of Cross-Sectional Dependence in Large Panels By Jiti Gao; Guangming Pan; Yanrong Yang; Bo Zhang
  2. Distribution Regression in Duration Analysis: an Application to Unemployment Spells By Miguel A. Delgado; Andrés García-Suaza; Pedro H. C. Sant'Anna
  3. VC - A method for estimating time-varying coefficients in linear models By Schlicht, Ekkehart
  4. Direction Selection in Stochastic Directional Distance Functions By Ferrier, Gary D.; Johnson, Andrew L.; Layer, Kevin; Sickles, Robin C.
  5. Pólya-Gamma Data Augmentation to address Non-conjugacy in the Bayesian Estimation of Mixed Multinomial Logit Models By Prateek Bansal; Rico Krueger; Michel Bierlaire; Ricardo A. Daziano; Taha H. Rashidi
  6. A Simple and Trustworthy Asymptotic t Test in Difference-in-Differences Regressions By Liu, Cheng; Sun, Yixiao
  7. Estimation of Impulse Response Functions When Shocks are Observed at a Higher Frequency than Outcome Variables By Chudik, Alexander; Georgiadis, Georgios
  8. Likelihood ratio Haar variance stabilization and normalization for Poisson and other non-Gaussian noise removal By Fryzlewicz, Piotr
  9. On the construction of confidence intervals for ratios of expectations By Alexis Derumigny; Lucas Girard; Yannick Guyonvarch
  10. A Generalized Continuous-Multinomial Response Model with a t-distributed Error Kernel By Subodh Dubey; Prateek Bansal; Ricardo A. Daziano; Erick Guerra
  11. Bayesian Risk Forecasting for Long Horizons By Agnieszka Borowska; Lennart Hoogerheide; Siem Jan Koopman
  12. Forecast Density Combinations with Dynamic Learning for Large Data Sets in Economics and Finance By Roberto Casarin; Stefano Grassi; Francesco Ravazzolo; Herman K. van Dijk
  13. Mostly Harmless Simulations? Using Monte Carlo Studies for Estimator Selection By Advani, Arun; Kitagawa, Toru; Sloczynski, Tymon
  14. Testing for Moderate Explosiveness in the Presence of Drift By Guo, Gangzheng; Wang, Shaoping; Sun, Yixiao
  15. Estimation of Weak Factor Models By Yoshimasa Uematsu; Takashi Yamagata
  16. Likelihood Evaluation of Models with Occasionally Binding Constraints By Pablo Cuba-Borda; Luca Guerrieri; Matteo Iacoviello; Molin Zhong
  17. Ridge regularization for Mean Squared Error Reduction in Regression with Weak Instruments By Karthik Rajkumar
  18. Heteroscedasticity and Autocorrelation Robust F and t Tests in Stata By Ye, Xiaoqing; Sun, Yixiao
  19. Non-structural Analysis of Productivity Growth for the Industrialized Countries: A Jackknife Model Averaging Approach By Isaksson, Anders; Shang, Chenjun; Sickles, Robin C.
  20. Subgeometric ergodicity and $\beta$-mixing By Mika Meitz; Pentti Saikkonen
  21. Sharp Bounds for the Marginal Treatment Effect with Sample Selection By Vitor Possebom
  22. A Performance Analysis of Some New Meta-Analysis Estimators Designed to Correct Publication Bias By Sanghyun Hong; W. Robert Reed
  23. Identification of Noncausal Models by Quantile Autoregressions By Alain Hecq; Li Sun
  24. Bayesian modelling for binary outcomes in the regression discontinuity design By Geneletti, Sara; Baio, Gianluca; O'Keeffe, Aidan; Ricciardi, Federico
  25. A Dynamic Bayesian Model for Interpretable Decompositions of Market Behaviour By Théophile Griveau-Billion; Ben Calderhead
  26. A memory-based method to select the number of relevant components in Principal Component Analysis By Anshul Verma; Pierpaolo Vivo; Tiziana Di Matteo

  1. By: Jiti Gao; Guangming Pan; Yanrong Yang; Bo Zhang
    Abstract: Accurate estimation of the extent of cross-sectional dependence in large panel data analysis is paramount to further statistical analysis of the data under study. Grouping data with only weak cross-sectional dependence together often results in less efficient dimension reduction and worse forecasting. This paper describes cross-sectional dependence among a large number of objects (time series) via a factor model and parameterizes its extent in terms of the strength of the factor loadings. A new joint estimation method, benefiting from the unique feature of dimension reduction for high-dimensional time series, is proposed for the parameter representing this extent and for the other parameters involved in the estimation procedure. Moreover, a joint asymptotic distribution for the pair of estimators is established. Simulations illustrate the effectiveness of the proposed estimation method in finite samples. Applications to cross-country macro variables and stock returns from the S&P 500 are studied.
    Date: 2019–04
  2. By: Miguel A. Delgado; Andrés García-Suaza; Pedro H. C. Sant'Anna
    Abstract: This article proposes estimation and inference procedures for distribution regression models with randomly right-censored data. The proposal generalizes classical duration models to a situation where slope coefficients can vary with the elapsed duration, and is suitable for discrete, continuous or mixed outcomes. Given that in general distribution regression coefficients do not have clear economic interpretation, we also propose consistent and asymptotically normal estimators for the average distribution marginal effects. Finite sample properties of the proposed method are studied by means of Monte Carlo experiments. Finally, we apply our proposed tools to study the effect of unemployment benefits on unemployment duration. Our results suggest that, on average, an increase in unemployment benefits is associated with a nonlinear, non-monotone effect on the unemployment duration distribution and that such an effect is more pronounced for workers subjected to liquidity constraints.
    Date: 2019–04
  3. By: Schlicht, Ekkehart
    Abstract: This paper describes a moments estimator for a standard state-space model with coefficients generated by a random walk. This estimator does not require that disturbances are normally distributed, but if they are, the proposed estimator is asymptotically equivalent to the maximum likelihood estimator.
    Keywords: time-series analysis,linear model,state-space estimation,time-varying coefficients,moments estimation
    JEL: C2 C22 C32 C51 C52
    Date: 2019
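The model class at issue can be written compactly. A standard state-space formulation of a linear regression with coefficients generated by a random walk (notation assumed here, not taken from the paper) is:

```latex
\begin{aligned}
y_t &= x_t'\beta_t + u_t,     & u_t &\sim \text{iid}\,(0,\sigma^2), \\
\beta_{t+1} &= \beta_t + v_t, & v_t &\sim \text{iid}\,(0,\Sigma_v),
\end{aligned}
```

The objects to estimate are the coefficient paths β_t and the variance parameters σ² and Σ_v; a moments estimator targets these without requiring u_t and v_t to be Gaussian.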
  4. By: Ferrier, Gary D. (U of Arkansas); Johnson, Andrew L. (Texas A&M U and Osaka U); Layer, Kevin (Rice U); Sickles, Robin C. (Rice U)
    Abstract: Researchers rely on the distance function to model multiple product production using multiple inputs. A stochastic directional distance function (SDDF) allows for noise in potentially all input and output variables, yet when estimated, the direction selected will affect the functional estimates because deviations from the estimated function are minimized in the specified direction. Specifically, the parameters of the parametric SDDF are point identified when the direction is specified; we show that the parameters of the parametric SDDF are set identified when multiple directions are considered. Further, the set of identified parameters can be narrowed via data-driven approaches to restrict the directions considered. We demonstrate a similar narrowing of the identified parameter set for a shape constrained nonparametric method, where the shape constraints impose standard features of a cost function such as monotonicity and convexity. Our Monte Carlo simulation studies reveal significant improvements, as measured by out of sample radial mean squared error, in functional estimates when we use a directional distance function with an appropriately selected direction and the errors are uncorrelated across variables. We show that these benefits increase as the correlation in error terms across variables increases. This correlation is a type of endogeneity that is common in production settings. From our Monte Carlo simulations we conclude that selecting a direction that is approximately orthogonal to the estimated function in the central region of the data gives significantly better estimates relative to the directions commonly used in the literature. For practitioners, our results imply that selecting a direction vector that has non-zero components for all variables that may have measurement error provides a significant improvement in the estimator's performance. 
We illustrate these results using cost and production data from three random samples of approximately 500 US hospitals operating in 2007, 2008, and 2009, respectively, and find that the shape constrained nonparametric methods provide a significant increase in flexibility over second order local approximation parametric methods.
    Date: 2018–10
  5. By: Prateek Bansal; Rico Krueger; Michel Bierlaire; Ricardo A. Daziano; Taha H. Rashidi
    Abstract: The standard Gibbs sampler for Mixed Multinomial Logit (MMNL) models involves sampling from the conditional densities of the utility parameters using the Metropolis-Hastings (MH) algorithm, owing to the unavailability of a conjugate prior for the logit kernel. To address this non-conjugacy concern, we propose applying the Pólya-Gamma data augmentation (PG-DA) technique to MMNL estimation. The posterior estimates of the augmented and the default Gibbs sampler are similar in the two-alternative scenario (binary choice), but we encounter empirical identification issues with more alternatives ($J \geq 3$).
    Date: 2019–04
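The augmentation rests on the Pólya-Gamma integral identity of Polson, Scott and Windle (2013), which makes the logit kernel conditionally Gaussian:

```latex
\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}}
  \;=\; 2^{-b}\, e^{\kappa\psi} \int_{0}^{\infty} e^{-\omega\psi^{2}/2}\, p(\omega)\, d\omega,
  \qquad \kappa = a - \tfrac{b}{2},
```

where p(ω) is the density of ω ∼ PG(b, 0). Conditional on the augmented variable ω, the likelihood is proportional to a Gaussian in ψ, so a normal prior on the utility parameters becomes conditionally conjugate.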
  6. By: Liu, Cheng; Sun, Yixiao
    Abstract: We propose an asymptotically valid t test that uses Student's t distribution as the reference distribution in a difference-in-differences regression. For the asymptotic variance estimation, we adopt the clustering-by-time approach to accommodate cross-sectional dependence. This approach often assumes the clusters to be independent across time, but we allow them to be temporally dependent. The proposed t test is based on a special heteroscedasticity and autocorrelation robust (HAR) variance estimator. We target the type I and type II errors and develop a testing-oriented method to select the underlying smoothing parameter. By capturing the estimation uncertainty of the HAR variance estimator, the t test has more accurate size than the corresponding normal test and is just as powerful as the latter. Compared to the nonstandard test developed in the literature, the standard t test is just as accurate but much more convenient to use. Model-based and empirical-data-based Monte Carlo simulations show that the t test works quite well in finite samples.
    Keywords: Social and Behavioral Sciences, Basis Functions, Difference-in-Differences, Fixed-smoothing Asymptotics, Heteroscedasticity and Autocorrelation Robust, Student's t distribution, t test
    Date: 2019–03–12
  7. By: Chudik, Alexander (Federal Reserve Bank of Dallas); Georgiadis, Georgios (European Central Bank)
    Abstract: This paper proposes mixed-frequency distributed-lag (MFDL) estimators of impulse response functions (IRFs) in a setup where (i) the shock of interest is observed, (ii) the impact variable of interest is observed at a lower frequency (as a temporally aggregated or sequentially sampled variable), (iii) the data-generating process (DGP) is given by a VAR model at the frequency of the shock, and (iv) the full set of relevant endogenous variables entering the DGP is unknown or unobserved. Consistency and asymptotic normality of the proposed MFDL estimators is established, and their small-sample performance is documented by a set of Monte Carlo experiments. The proposed approach is then applied to estimate the daily pass-through of changes in crude oil prices observed at a daily frequency to U.S. gasoline consumer prices observed at a weekly frequency. We find that the pass-through is fast, with about 28% of the crude oil price changes passed through to retail gasoline prices within five working days, and that the speed of the pass-through has increased over time.
    Keywords: Mixed frequencies; temporal aggregation; impulse response functions; estimation and inference; VAR models
    JEL: C22
    Date: 2019–03–15
  8. By: Fryzlewicz, Piotr
    Abstract: We propose a methodology for denoising, variance-stabilizing and normalizing signals whose varying mean and variance are linked via a single parameter, such as Poisson or scaled chi-squared. Our key observation is that the signed and square-rooted generalized log-likelihood ratio test for the equality of the local means is approximately distributed as standard normal under the null. We use these test statistics within the Haar wavelet transform at each scale and location, referring to them as the likelihood ratio Haar (LRH) coefficients of the data. In the denoising algorithm, the LRH coefficients are used as thresholding decision statistics, which enables the use of thresholds suitable for i.i.d. Gaussian noise. In the variance-stabilizing and normalizing algorithm, the LRH coefficients replace the standard Haar coefficients in the Haar basis expansion. We prove the consistency of our LRH smoother for Poisson counts with a near-parametric rate, and various numerical experiments demonstrate the good practical performance of our methodology.
    Keywords: variance-stabilizing transform; Haar-Fisz; Anscombe transform; log transform; Box-Cox transform; Gaussianization.
    JEL: C1
    Date: 2017–06–26
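For the Poisson case the building block is easy to sketch: the LRH coefficient of two neighbouring segments is the signed square root of the generalized likelihood-ratio statistic for equality of their means, which is approximately standard normal under the null. A minimal sketch (our own notation and helper, not the paper's code):

```python
import numpy as np

def lrh_coefficient(left, right):
    """Signed square-rooted Poisson likelihood-ratio statistic for the
    equality of the local means of two equal-length segments.
    Sketch of the idea behind the paper's LRH coefficients; the exact
    construction in the paper may differ in details."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    m = left.size
    a, b = left.sum(), right.sum()
    if a + b == 0:
        return 0.0
    lam1, lam2 = a / m, b / m          # segment MLEs
    lam0 = (a + b) / (2 * m)           # pooled MLE under the null
    # Poisson GLR; terms with zero counts contribute 0 (0 * log 0 := 0).
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    glr = 2 * (xlogy(a, lam1 / lam0) + xlogy(b, lam2 / lam0))
    return np.sign(lam1 - lam2) * np.sqrt(max(glr, 0.0))
```

Equal segments give a coefficient of exactly zero, and under the null the statistic is approximately N(0, 1), which is what licenses Gaussian-style thresholding.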
  9. By: Alexis Derumigny; Lucas Girard; Yannick Guyonvarch
    Abstract: In econometrics, many parameters of interest can be written as ratios of expectations. The main approach to construct confidence intervals for such parameters is the delta method. However, this asymptotic procedure yields intervals that may not be relevant for small sample sizes or, more generally, in a sequence-of-model framework that allows the expectation in the denominator to decrease to $0$ with the sample size. In this setting, we prove a generalization of the delta method for ratios of expectations and the consistency of the nonparametric percentile bootstrap. We also investigate finite-sample inference and show a partial impossibility result: nonasymptotic uniform confidence intervals can be built for ratios of expectations but not at every level. Based on this, we propose an easy-to-compute index to appraise the reliability of the intervals based on the delta method. Simulations and an application illustrate our results and the practical usefulness of our rule of thumb.
    Date: 2019–04
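The delta-method baseline that the paper starts from is a short computation. A minimal sketch of a confidence interval for E[X]/E[Y] (function name and interface are our own):

```python
import numpy as np
from statistics import NormalDist

def ratio_ci(x, y, level=0.95):
    """Delta-method confidence interval for E[X]/E[Y].
    Textbook sketch of the standard procedure whose small-sample
    reliability the paper's index is meant to appraise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    mx, my = x.mean(), y.mean()
    r = mx / my
    S = np.cov(x, y)                           # 2x2 sample covariance (ddof=1)
    grad = np.array([1.0 / my, -mx / my**2])   # gradient of g(a, b) = a / b
    se = np.sqrt(grad @ S @ grad / n)
    z = NormalDist().inv_cdf((1 + level) / 2)
    return r - z * se, r + z * se
```

The interval widens rapidly as the denominator mean approaches zero, which is exactly the regime where the paper shows the asymptotic approximation can fail.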
  10. By: Subodh Dubey; Prateek Bansal; Ricardo A. Daziano; Erick Guerra
    Abstract: In multinomial response models, idiosyncratic variations in the indirect utility are generally modeled using Gumbel or normal distributions. This study makes a strong case to substitute these thin-tailed distributions with a t-distribution. First, we demonstrate that a model with a t-distributed error kernel better estimates and predicts preferences, especially in class-imbalanced datasets. Our proposed specification also implicitly accounts for decision-uncertainty behavior, i.e. the degree of certainty that decision-makers hold in their choices relative to the variation in the indirect utility of any alternative. Second, after applying a t-distributed error kernel in a multinomial response model for the first time, we extend this specification to a generalized continuous-multinomial (GCM) model and derive its full-information maximum likelihood estimator. The likelihood involves an open-form expression of the cumulative density function of the multivariate t-distribution, which we propose to compute using a combination of the composite marginal likelihood method and the separation-of-variables approach. Third, we establish finite sample properties of the GCM model with a t-distributed error kernel (GCM-t) and highlight its superiority over the GCM model with a normally-distributed error kernel (GCM-N) in a Monte Carlo study. Finally, we compare GCM-t and GCM-N in an empirical setting related to preferences for electric vehicles (EVs). We observe that accounting for decision-uncertainty behavior in GCM-t results in lower elasticity estimates and a higher willingness to pay for improving the EV attributes than those of the GCM-N model. These differences are relevant in making policies to expedite the adoption of EVs.
    Date: 2019–04
  11. By: Agnieszka Borowska (VU Amsterdam); Lennart Hoogerheide (VU Amsterdam); Siem Jan Koopman (VU Amsterdam)
    Abstract: We present an accurate and efficient method for Bayesian forecasting of two financial risk measures, Value-at-Risk and Expected Shortfall, for a given volatility model. We obtain precise forecasts of the tail of the distribution of returns not only for the 10-days-ahead horizon required by the Basel Committee but even for long horizons, like one-month or one-year-ahead. The latter has recently attracted considerable attention due to the different properties of short term risk and long run risk. The key insight behind our importance sampling based approach is the sequential construction of marginal and conditional importance densities for consecutive periods. We report substantial accuracy gains for all the considered horizons in empirical studies on two datasets of daily financial returns, including a highly volatile period of the recent financial crisis. To illustrate the flexibility of the proposed construction method, we present how it can be adjusted to the frequentist case, for which we provide counterparts of both Bayesian applications.
    Keywords: Bayesian inference, forecasting, importance sampling, numerical accuracy, long run risk, Value-at-Risk, Expected Shortfall
    JEL: C32
    Date: 2019–02–22
  12. By: Roberto Casarin (University Ca' Foscari of Venice); Stefano Grassi (University of Rome `Tor Vergata'); Francesco Ravazzolo (BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: A flexible forecast density combination approach is introduced that can deal with large data sets. It extends the mixture of experts approach by allowing for model set incompleteness and dynamic learning of combination weights. A dimension reduction step is introduced using a sequential clustering mechanism that allocates the large set of forecast densities into a small number of subsets, and the combination weights of the large set of densities are modelled as a dynamic factor model with a number of factors equal to the number of subsets. The forecast density combination is represented as a large finite mixture in nonlinear state space form. An efficient simulation-based Bayesian inferential procedure is proposed using parallel sequential clustering and filtering, implemented on graphics processing units. The approach is applied to track the Standard & Poor's 500 index, combining more than 7000 forecast densities based on 1856 US individual stocks that are clustered into a relatively small number of subsets. Substantial forecast and economic gains are obtained, in particular in the tails, using Value-at-Risk. Using a large macroeconomic data set of 142 series, similar forecast gains, including probabilities of recession, are obtained from multivariate forecast density combinations of US real GDP, inflation, Treasury Bill yield and employment. Evidence obtained on the dynamic patterns in the financial as well as macroeconomic clusters provides valuable signals useful for improved modelling and more effective economic and financial policies.
    Keywords: Forecast combinations, Particle filters, Bayesian inference, State Space Models, Sequential Monte Carlo
    JEL: C11 C14 C15
    Date: 2019–04–01
  13. By: Advani, Arun (University of Warwick, CAGE, and Institute for Fiscal Studies); Kitagawa, Toru (University College London and cemmap); Sloczynski, Tymon (Brandeis University and IZA)
    Abstract: We consider two recent suggestions for how to perform an empirically motivated Monte Carlo study to help select a treatment effect estimator under unconfoundedness. We show theoretically that neither is likely to be informative except under restrictive conditions that are unlikely to be satisfied in many contexts. To test empirical relevance, we also apply the approaches to a real-world setting where estimator performance is known. Both approaches are worse than random at selecting estimators which minimise absolute bias. They are better when selecting estimators that minimise mean squared error. However, using a simple bootstrap is at least as good and often better. For now researchers would be best advised to use a range of estimators and compare estimates for robustness.
    Keywords: empirical Monte Carlo studies, program evaluation, selection on observables, treatment effects
    JEL: C15 C21 C25 C52
    Date: 2019
  14. By: Guo, Gangzheng; Wang, Shaoping; Sun, Yixiao
    Abstract: This paper considers a moderately explosive first-order autoregressive process with drift, where the autoregressive root approaches unity from the right at a certain rate. We first develop a test for the null of moderate explosiveness under independent and identically distributed errors. We show that the t statistic is asymptotically standard normal regardless of whether the errors are Gaussian. This result is in sharp contrast with the existing literature, wherein nonstandard limiting distributions are obtained under different model assumptions. When the errors are weakly dependent, we show that the t statistic based on a heteroskedasticity and autocorrelation robust standard error follows Student's t distribution in large samples. Monte Carlo simulations show that our tests have satisfactory size and power performance in finite samples. Applying the asymptotic t test to ten major stock indexes in the pre-2008 financial exuberance period, we find that most indexes were only mildly explosive or not explosive at all, which implies that the bout of irrational exuberance was not as serious as previously thought.
    Keywords: Social and Behavioral Sciences, Heteroskedasticity and Autocorrelation Robust Standard Error, Irrational Exuberance, Local to Unity, Moderate Explosiveness, Student's t Distribution, Unit Root.
    Date: 2018–07–09
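For intuition, the t statistic in question is the usual one from an OLS regression of the series on an intercept and its own lag. A simulation sketch (illustrative only: it tests the unit-root value ρ₀ = 1 rather than the paper's moderately explosive null, and does not implement the HAR standard error):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, rho = 200, 0.1, 1.05          # mildly explosive AR(1) with drift
y = np.zeros(n)
for t in range(1, n):
    y[t] = c + rho * y[t - 1] + rng.standard_normal()

# OLS of y_t on (1, y_{t-1})
X = np.column_stack([np.ones(n - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ coef
s2 = resid @ resid / (n - 1 - 2)
var_coef = s2 * np.linalg.inv(X.T @ X)
rho_hat, se_rho = coef[1], np.sqrt(var_coef[1, 1])
t_stat = (rho_hat - 1.0) / se_rho    # here H0: rho = 1, for illustration
```

Because the regressor grows explosively, ρ̂ is estimated very precisely and the t statistic is large, which is why detecting explosiveness is easy while pinning down its exact degree requires the paper's asymptotic theory.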
  15. By: Yoshimasa Uematsu; Takashi Yamagata
    Abstract: In this paper, we propose a novel consistent estimation method for the approximate factor model of Chamberlain and Rothschild (1983) with large cross-sectional and time-series dimensions (N and T, respectively). Their model assumes that the r (≪N) largest eigenvalues of the data covariance matrix grow as N rises, without specifying each diverging rate. This is weaker than the typical assumption of recent factor models, in which all of the r largest eigenvalues diverge proportionally to N; models of the weaker kind are frequently referred to as weak factor models. We extend the sparse orthogonal factor regression (SOFAR) proposed by Uematsu et al. (2019) to the consistent estimation of the weak factor structure, where the k-th largest eigenvalue grows proportionally to N^{α_k} with some unknown exponents 0 < α_k ≤ 1.
    Date: 2019–04
  16. By: Pablo Cuba-Borda; Luca Guerrieri; Matteo Iacoviello; Molin Zhong
    Abstract: Applied researchers interested in estimating key parameters of DSGE models face an array of choices regarding numerical solution and estimation methods. We focus on the likelihood evaluation of models with occasionally binding constraints. We document how solution approximation errors and likelihood misspecification, related to the treatment of measurement errors, can interact and compound each other.
    Keywords: Measurement error ; Solution error ; Occasionally binding constraints ; Particle filter
    JEL: C32 C53 C63
    Date: 2019–04–19
  17. By: Karthik Rajkumar
    Abstract: In this paper, I show that classic two-stage least squares (2SLS) estimates are highly unstable with weak instruments. I propose a ridge estimator (ridge IV) and show that it is asymptotically normal even with weak instruments, whereas 2SLS is severely distorted and unbounded. I motivate the ridge IV estimator as a convex optimization problem with a GMM objective function and an L2 penalty. I show that ridge IV leads to sizable mean squared error reductions theoretically and validate these results in a simulation study inspired by the data designs of papers published in the American Economic Review.
    Date: 2019–04
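The structure of such an estimator is easy to sketch: add an L2 penalty to the 2SLS/GMM normal equations. A minimal sketch (our own function; the paper's exact weighting and penalty tuning may differ):

```python
import numpy as np

def ridge_iv(y, X, Z, lam):
    """Ridge-regularized IV: minimize the 2SLS objective plus an L2
    penalty lam * ||beta||^2. With lam = 0 this reduces to plain 2SLS.
    Sketch of the general idea, not the paper's exact estimator."""
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto instruments
    A = X.T @ PZ @ X + lam * np.eye(X.shape[1])     # penalized normal equations
    return np.linalg.solve(A, X.T @ PZ @ y)
```

The penalty keeps the matrix A well conditioned even when the first-stage correlation between X and Z is weak, which is the source of 2SLS's instability.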
  18. By: Ye, Xiaoqing; Sun, Yixiao
    Abstract: In this article, we consider time series OLS and IV regressions and introduce a new pair of commands, har and hart, which implement a more accurate class of heteroscedasticity and autocorrelation robust (HAR) F and t tests. These tests represent part of the recent progress on HAR inference. The F and t tests are based on the convenient F and t approximations and are more accurate than the conventional chi-squared and normal approximations. The underlying smoothing parameters are selected to target the type I and type II errors, the two fundamental objects in every hypothesis testing problem. The estimation command har and the post-estimation test command hart allow for both kernel HAR variance estimators and orthonormal series HAR variance estimators. In addition, we introduce another pair of new commands, gmmhar and gmmhart, which implement the recently developed F and t tests in a two-step GMM framework. For this command we opt for the orthonormal series HAR variance estimator based on the Fourier bases, as it allows us to develop convenient F and t approximations as in the first-step GMM framework. Finally, we present several examples to demonstrate the use of these commands.
    Keywords: Social and Behavioral Sciences
    Date: 2018–07–09
  19. By: Isaksson, Anders (United Nations Industrial Development Organization); Shang, Chenjun (Freddie Mac); Sickles, Robin C. (Rice U)
    Abstract: Various structural and non-structural models of productivity growth have been proposed in the literature. In either class of models, predictive measurements of productivity and efficiency are obtained. This paper examines the model averaging approaches of Hansen and Racine (2012), which can provide a vehicle to weight predictions (in the form of productivity and efficiency measurements) from different non-structural methods. We first describe the jackknife model averaging estimator proposed by Hansen and Racine (2012) and illustrate how to apply the technique to a set of competing stochastic frontier estimators. The derived method is then used to analyze productivity and efficiency dynamics in 25 highly-industrialized countries over the period 1990 to 2014. Through the empirical application, we show that the model averaging method provides relatively stable estimates, in comparison to standard model selection methods that simply select one model with the highest measure of goodness of fit.
    JEL: C14 C23 O40
    Date: 2018–06
  20. By: Mika Meitz; Pentti Saikkonen
    Abstract: It is well known that stationary geometrically ergodic Markov chains are $\beta$-mixing (absolutely regular) with geometrically decaying mixing coefficients. Furthermore, for initial distributions other than the stationary one, geometric ergodicity implies $\beta$-mixing under suitable moment assumptions. In this note we show that similar results hold also for subgeometrically ergodic Markov chains. In particular, for both stationary and other initial distributions, subgeometric ergodicity implies $\beta$-mixing with subgeometrically decaying mixing coefficients. Although this result is simple it should prove very useful in obtaining rates of mixing in situations where geometric ergodicity can not be established. To illustrate our results we derive new subgeometric ergodicity and $\beta$-mixing results for the self-exciting threshold autoregressive model.
    Date: 2019–04
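For reference, one common definition of the β-mixing (absolute regularity) coefficients of a strictly stationary process (notation assumed here) is

```latex
\beta(n) \;=\; \mathbb{E}\Big[\, \sup_{A \in \mathcal{F}_{n}^{\infty}}
  \big| P\big(A \mid \mathcal{F}_{-\infty}^{0}\big) - P(A) \big| \,\Big],
\qquad \mathcal{F}_{s}^{t} = \sigma(X_s, \dots, X_t).
```

Geometric ergodicity delivers β(n) ≤ Cρⁿ for some ρ < 1, whereas the subgeometric rates covered by this note decay more slowly, for example polynomially, β(n) ≤ Cn^{-κ}.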
  21. By: Vitor Possebom
    Abstract: I analyze treatment effects in situations when agents endogenously select into the treatment group and into the observed sample. As a theoretical contribution, I propose pointwise sharp bounds for the marginal treatment effect (MTE) of interest within the always-observed subpopulation under monotonicity assumptions. Moreover, I impose an extra mean dominance assumption to tighten the previous bounds. I further discuss how to identify those bounds when the support of the propensity score is either continuous or discrete. Using these results, I estimate bounds for the MTE of the Job Corps Training Program on hourly wages for the always-employed subpopulation and find that it is decreasing in the likelihood of attending the program within the Non-Hispanic group. For example, the Average Treatment Effect on the Treated is between $0.33 and $0.99, while the Average Treatment Effect on the Untreated is between $0.71 and $3.00.
    Date: 2019–04
  22. By: Sanghyun Hong; W. Robert Reed (University of Canterbury)
    Abstract: Publication selection bias is widely recognized as a serious challenge to the validity of meta-analyses. This study analyses the performance of three new estimators designed to correct publication bias: the weighted average of the adequately powered (WAAP) estimator of Stanley et al. (2017), and two estimators proposed by Andrews & Kasy (2019), which we call AK1 and AK2. With respect to bias, we find that none of these is consistently superior to the commonly used PET-PEESE estimator. With respect to mean squared error, we find that Andrews & Kasy's AK1 estimator does consistently better than other estimators except when publication bias is focused solely on the sign, as opposed to the significance, of an effect. With respect to coverage rates, we find that all the estimators perform consistently poorly, so that hypothesis tests about the mean true effect are unreliable. We also find that effect heterogeneity generally worsens estimator performance, and that its adverse impact grows as heterogeneity increases. This is particularly of concern for meta-analyses in business and economics, where I² values, a measure of heterogeneity, are often 90 percent or higher. Finally, we find that the type of simulation environment used in the Monte Carlo experiments significantly impacts estimator performance. A better understanding of what makes an “appropriate” simulation environment for analysing meta-analysis estimators would be a potentially productive subject for future research.
    Keywords: Meta-analysis, publication bias, WAAP, Andrews-Kasy, Monte Carlo, Simulations
    JEL: B41 C15 C18
    Date: 2019–04–01
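The PET-PEESE benchmark that the new estimators are compared against is itself a short pair of weighted regressions. A minimal sketch (our own code; conventions such as the t > 1.96 switch between PET and PEESE vary across applications):

```python
import numpy as np

def pet_peese(effects, ses):
    """PET-PEESE corrected-effect sketch: regress effects on their
    standard errors (PET), weighted by precision; if the intercept is
    significant, re-estimate with squared standard errors (PEESE) and
    report that intercept as the bias-corrected mean effect."""
    w = 1.0 / ses**2                           # precision weights
    def wls_intercept(reg):
        X = np.column_stack([np.ones_like(effects), reg])
        W = np.diag(w)
        XtWX = X.T @ W @ X
        b = np.linalg.solve(XtWX, X.T @ W @ effects)
        resid = effects - X @ b
        s2 = (w * resid**2).sum() / (len(effects) - 2)
        se_b0 = np.sqrt(s2 * np.linalg.inv(XtWX)[0, 0])
        return b[0], b[0] / se_b0
    b0, t0 = wls_intercept(ses)                # PET step
    if abs(t0) > 1.96:
        b0, _ = wls_intercept(ses**2)          # PEESE step
    return b0
```

The intercept is the estimated effect a study with zero standard error would report, which is why it serves as the publication-bias-corrected mean effect.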
  23. By: Alain Hecq; Li Sun
    Abstract: We propose a model selection criterion to detect purely causal from purely noncausal models in the framework of quantile autoregressions (QAR). We also present asymptotics for the i.i.d. case with regularly varying distributed innovations in QAR. This new modelling perspective is appealing for investigating the presence of bubbles in economic and financial time series, and is an alternative to approximate maximum likelihood methods. We illustrate our analysis using hyperinflation episodes in Latin American countries.
    Date: 2019–04
  24. By: Geneletti, Sara; Baio, Gianluca; O'Keeffe, Aidan; Ricciardi, Federico
    Keywords: MR/K014838/1
    JEL: C1
    Date: 2019–03–28
  25. By: Théophile Griveau-Billion; Ben Calderhead
    Abstract: We propose a heterogeneous simultaneous graphical dynamic linear model (H-SGDLM), which extends the standard SGDLM framework to incorporate a heterogeneous autoregressive realised volatility (HAR-RV) model. This novel approach creates a GPU-scalable multivariate volatility estimator, which decomposes multiple time series into economically meaningful variables to explain the endogenous and exogenous factors driving the underlying variability. This decomposition goes beyond the classic one-step-ahead prediction: we investigate inferences up to one month into the future using stocks, FX futures and ETF futures, demonstrating superior performance in terms of accuracy on large moves, longer-term prediction and consistency over time.
    Date: 2019–04
  26. By: Anshul Verma; Pierpaolo Vivo; Tiziana Di Matteo
    Abstract: We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This new method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long memory effects. In comparison with other available methods present in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying basic idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, and stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation.
    Date: 2019–04
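The heuristic criteria the paper benchmarks against are simple to state. A sketch of the cumulative-variance rule on a correlation matrix (our own code; the paper's memory-based criterion itself requires its factor-model machinery and is not reproduced here):

```python
import numpy as np

def n_components_cumvar(X, threshold=0.9):
    """Baseline heuristic: keep the smallest number of principal
    components whose eigenvalues explain at least `threshold` of the
    total variance of the correlation matrix of X (columns = variables)."""
    C = np.corrcoef(X, rowvar=False)                  # correlation matrix
    eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]    # descending eigenvalues
    cum = np.cumsum(eigvals) / eigvals.sum()          # cumulative variance share
    return int(np.searchsorted(cum, threshold) + 1)
```

The threshold is exactly the subjective choice the paper's method is designed to avoid: different thresholds can give very different component counts on long-memory data.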

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.