nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒12‒09
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bootstrapping Non-Stationary Stochastic Volatility By Peter Boswijk; Giuseppe Cavaliere; Iliyan Georgiev; Anders Rahbek
  2. Regression Discontinuity Design under Self-selection By Sida Peng; Yang Ning
  3. Dissertation R.C.M. van Aert By van Aert, Robbie Cornelis Maria
  4. Estimation of the Parameters of Symmetric Stable ARMA and ARMA-GARCH Models By Aastha M. Sathe; N. S. Upadhye
  5. Uniform inference for bounds on the distribution and quantile functions of treatment effects in randomized experiments By Antonio F. Galvao; Thomas Parker
  6. A Flexible Mixed-Frequency Vector Autoregression with a Steady-State Prior By Sebastian Ankargren; Måns Unosson; Yukai Yang
  7. Data-driven transformations and survey-weighting for linear mixed models By Patricia Dörr; Jan Pablo Burgard
  8. Estimation of Partially Linear Spatial Autoregressive Models with Autoregressive Disturbances By Takaki Sato
  9. Hybrid quantile estimation for asymmetric power GARCH models By Guochang Wang; Ke Zhu; Guodong Li; Wai Keung Li
  10. Low sample size and regression: A Monte Carlo approach By Riveros Gavilanes, John Michael
  11. Time-Varying Income Elasticities of Healthcare Expenditure for the OECD and Eurozone By Isabel Casas; Jiti Gao; Bin Peng; Shangyu Xie
  12. The negative binomial-inverse Gaussian regression model with an application to insurance ratemaking By Tzougas, G.; Hoon, W. L.; Lim, J. M.
  13. A Scrambled Method of Moments By Jean-Jacques Forneron
  14. P-uniform* By van Aert, Robbie Cornelis Maria; van Assen, Marcel A. L. M.
  15. Sample size calculations in economic RCTs: following clinical studies? By Gruener, Sven
  16. Comparing Forecasts of Extremely Large Conditional Covariance Matrices By Moura, Guilherme V.; Ruiz, Esther; Santos, André A. P.
  17. Incorporating side information into Robust Matrix Factorization with Quantile Random Forest under Bayesian framework By Babkin, Andrey
  18. The ordinary business of macroeconometric modeling: working on the Fed-MIT-Penn model (1964-1974) By Cherrier, Beatrice; Backhouse, Roger

  1. By: Peter Boswijk (University of Amsterdam); Giuseppe Cavaliere (University of Bologna and Exeter Business School); Iliyan Georgiev (University of Bologna); Anders Rahbek (University of Copenhagen)
    Abstract: To what extent can the bootstrap be applied to conditional mean models – such as regression or time series models – when the volatility of the innovations is random and possibly non-stationary? In fact, the volatility of many economic and financial time series displays persistent changes and possible non-stationarity. However, the theory of the bootstrap for such models has focused on deterministic changes of the unconditional variance and little is known about the performance and the validity of the bootstrap when the volatility is driven by a non-stationary stochastic process. This includes near-integrated exogenous volatility processes as well as near-integrated GARCH processes, where the conditional variance has a diffusion limit; a further important example is the case where volatility exhibits infrequent jumps. This paper fills this gap in the literature by developing conditions for bootstrap validity in time series and regression models with non-stationary, stochastic volatility. We show that in such cases the distribution of bootstrap statistics (conditional on the data) is random in the limit. Consequently, the conventional approaches to proofs of bootstrap consistency, based on the notion of weak convergence in probability of the bootstrap statistic, fail to deliver the required validity results. Instead, we use the concept of 'weak convergence in distribution' to develop and establish novel conditions for validity of the wild bootstrap, conditional on the volatility process. We apply our results to several testing problems in the presence of non-stationary stochastic volatility, including testing in a location model, testing for structural change using CUSUM-type functionals, and testing for a unit root in autoregressive models. Importantly, we show that sufficient conditions for conditional wild bootstrap validity include the absence of statistical leverage effects, i.e., correlation between the error process and its future conditional variance. The results of the paper are illustrated using Monte Carlo simulations, which indicate that a wild bootstrap approach leads to size control even in small samples.
    Keywords: Bootstrap, Non-stationary stochastic volatility, Random limit measures, Weak convergence in Distribution
    JEL: C32
    Date: 2019–12–01
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190083&r=all
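    Illustration (editorial, not from the paper): a minimal Python sketch of a wild bootstrap test of H0: mu = 0 in the location model y_t = mu + sigma_t*eps_t, with Rademacher multipliers and a random-walk log-volatility standing in for the non-stationary volatility process; all names and numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def t_stat(y):
            """Studentized sample mean."""
            n = len(y)
            return np.sqrt(n) * y.mean() / y.std(ddof=1)

        def wild_bootstrap_pvalue(y, n_boot=999):
            """Two-sided wild-bootstrap p-value using Rademacher multipliers."""
            t_obs = t_stat(y)
            resid = y - y.mean()                           # centred (restricted) residuals
            t_boot = np.empty(n_boot)
            for b in range(n_boot):
                w = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights keep |resid| fixed
                t_boot[b] = t_stat(resid * w)
            return np.mean(np.abs(t_boot) >= np.abs(t_obs))

        # Errors with near-integrated (random-walk) log-volatility; H0 is true here.
        n = 200
        log_vol = np.cumsum(0.1 * rng.standard_normal(n))
        y = 0.0 + np.exp(log_vol) * rng.standard_normal(n)
        print("wild bootstrap p-value:", wild_bootstrap_pvalue(y))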
  2. By: Sida Peng; Yang Ning
    Abstract: In Regression Discontinuity (RD) design, self-selection leads to different distributions of covariates on the two sides of the policy intervention, which essentially violates the continuity of potential outcomes assumption. The standard RD estimand becomes difficult to interpret due to the existence of an indirect effect, i.e. the effect due to self-selection. We show that the direct causal effect of interest can still be recovered under a class of estimands. Specifically, we consider a class of weighted average treatment effects tailored for potentially different target populations. We show that a special case of our estimands can recover the average treatment effect under the conditional independence assumption of Angrist and Rokkanen (2015), and another example is the estimand recently proposed in Frölich and Huber (2018). We propose a set of estimators through a weighted local linear regression framework and prove the consistency and asymptotic normality of the estimators. Our approach can be further extended to the fuzzy RD case. In simulation exercises, we compare the performance of our estimator with the standard RD estimator. Finally, we apply our method to two empirical data sets: the U.S. House elections data in Lee (2008) and a novel data set from Microsoft Bing on Generalized Second Price (GSP) auctions.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.09248&r=all
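    Illustration (editorial): the estimators above build on weighted local linear regression; the sketch below shows only the standard sharp-RD local linear estimator with a triangular kernel, with a placeholder argument where tailoring weights could enter. The DGP, bandwidth and names are assumptions.

        import numpy as np

        def local_linear_intercept(x, y, h, extra_weights=None):
            """Triangular-kernel local linear fit at x = 0; returns the fitted intercept."""
            k = np.clip(1 - np.abs(x) / h, 0, None)        # triangular kernel weights
            if extra_weights is not None:                  # placeholder for tailoring weights
                k = k * extra_weights
            X = np.column_stack([np.ones_like(x), x])
            W = np.diag(k)
            return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[0]

        def sharp_rd_estimate(running, outcome, cutoff=0.0, h=0.5):
            """Difference of local-linear intercepts just above and below the cutoff."""
            x = running - cutoff
            above, below = x >= 0, x < 0
            return (local_linear_intercept(x[above], outcome[above], h)
                    - local_linear_intercept(x[below], outcome[below], h))

        rng = np.random.default_rng(1)
        x = rng.uniform(-1, 1, 2000)
        tau = 0.3                                          # true jump at the cutoff
        y = 1 + 0.5 * x + tau * (x >= 0) + 0.1 * rng.standard_normal(2000)
        print("RD estimate:", sharp_rd_estimate(x, y))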
  3. By: van Aert, Robbie Cornelis Maria
    Abstract: More and more scientific research gets published nowadays, calling for statistical methods that enable researchers to get an overview of the literature in a particular research field. For that purpose, meta-analysis methods were developed that can be used for statistically combining the effect sizes from independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in primary studies’ true effect sizes. Accurate estimation of both the meta-analytic effect size and the between-study variance in true effect size is crucial since the results of meta-analyses are often used for policy making. Publication bias refers to situations where the publication of a primary study depends on its results, and it distorts the results of a meta-analysis. We developed new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although the methods perform well in many conditions, these and the other existing methods are shown not to perform well when researchers use questionable research practices. Additionally, when publication bias is absent or limited, traditional methods that do not correct for publication bias outperform p-uniform and p-uniform*. Surprisingly, our pre-registered study of a large-scale data set consisting of 83 meta-analyses and 499 systematic reviews published in the fields of psychology and medicine found no strong evidence for the presence of publication bias. We also developed two methods for meta-analyzing a statistically significant published original study and a replication of that study, which reflects a situation often encountered by researchers. One method is frequentist whereas the other is Bayesian. Both methods are shown to perform better than traditional meta-analytic methods that do not take the statistical significance of the original study into account. Analytical studies of both methods also show that sometimes the original study is better discarded for optimal estimation of the true effect size. In addition, we developed a program for determining the required sample size in a replication, analogous to power analysis in null hypothesis testing. Computing the required sample size with the method revealed that large sample sizes (approximately 650 participants) are required to be able to distinguish a zero from a small true effect. Finally, in the last two chapters we derived a new multi-step estimator for the between-study variance in primary studies’ true effect sizes, and examined the statistical properties of two methods (the Q-profile and generalized Q-statistic methods) for computing the confidence interval of the between-study variance in true effect size. We proved that the multi-step estimator converges to the Paule-Mandel estimator, which is nowadays one of the recommended methods to estimate the between-study variance in true effect sizes. Two Monte Carlo simulation studies showed that the coverage probabilities of the Q-profile and generalized Q-statistic methods can be substantially below the nominal coverage rate if the assumptions underlying the random-effects meta-analysis model are violated.
    Date: 2018–06–05
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:eqhjd&r=all
  4. By: Aastha M. Sathe; N. S. Upadhye
    Abstract: In this article, we first propose the modified Hannan-Rissanen method for estimating the parameters of the autoregressive moving average (ARMA) process with symmetric stable noise and symmetric stable generalized autoregressive conditional heteroskedastic (GARCH) noise. Next, we propose the modified empirical characteristic function method for the estimation of GARCH parameters with symmetric stable noise. Further, we show the efficiency, accuracy, and simplicity of our methods through Monte Carlo simulations. Finally, we apply our proposed methods to model financial data.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.09985&r=all
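    Illustration (editorial): the paper modifies the Hannan-Rissanen method for symmetric stable noise; the sketch below shows only the classical least-squares two-step Hannan-Rissanen idea being modified, checked on a simulated Gaussian ARMA(1,1). Everything here is an illustrative assumption, not the authors' estimator.

        import numpy as np

        def hannan_rissanen(y, p, q, long_ar=20):
            """Step 1: long AR fit to proxy innovations; step 2: LS on lagged y and residuals."""
            y = np.asarray(y, float)
            # Step 1: long autoregression to estimate the innovations
            Xar = np.column_stack([y[long_ar - j - 1: len(y) - j - 1] for j in range(long_ar)])
            yar = y[long_ar:]
            phi_long, *_ = np.linalg.lstsq(Xar, yar, rcond=None)
            eps = np.zeros_like(y)
            eps[long_ar:] = yar - Xar @ phi_long
            # Step 2: regress y_t on its own lags and on lagged estimated innovations
            m = max(p, q)
            rows = range(long_ar + m, len(y))
            X = np.array([[y[t - j] for j in range(1, p + 1)] +
                          [eps[t - j] for j in range(1, q + 1)] for t in rows])
            coef, *_ = np.linalg.lstsq(X, y[long_ar + m:], rcond=None)
            return coef[:p], coef[p:]                 # (AR coefficients, MA coefficients)

        rng = np.random.default_rng(2)
        n, phi, theta = 5000, 0.6, 0.3
        e = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]
        print(hannan_rissanen(y, 1, 1))               # should be close to (0.6, 0.3)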
  5. By: Antonio F. Galvao; Thomas Parker
    Abstract: This paper develops a novel approach to uniform inference for functions that bound the distribution and quantile functions of heterogeneous treatment effects in randomized experiments when only marginal treatment and control distributions are observed and the joint distribution of outcomes is unobserved. These bounds are nonlinear maps of the marginal distribution functions of control and treatment outcomes, and statistical inference methods for nonlinear maps usually rely on smoothness through a type of differentiability. We show that the maps from marginal distributions to bound functions are not differentiable, but uniform test statistics applied to the bound functions - such as Kolmogorov-Smirnov or Cramér-von Mises - are directionally differentiable. We establish the consistency and weak convergence of nonparametric plug-in estimates of the test statistics and show how they can be used to conduct inference for bounds uniformly over the distribution of treatment effects. We also establish the directional differentiability of minimax operators applied to general - that is, not only convex-concave - functions, which may be of independent interest. In addition, we develop detailed resampling techniques to conduct practical inference for the bounds or for the true distribution or quantile function of the treatment effect distribution. Finally, we apply our methods to the evaluation of a job training program.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.10215&r=all
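    Illustration (editorial): a classical example of bounds on the treatment-effect distribution that use only the two marginals are the Makarov-type bounds; the sketch below computes them from empirical CDFs. This is background on the kind of bound functions studied above, not the paper's inference procedure; the data and grid are assumptions.

        import numpy as np

        def ecdf(sample):
            s = np.sort(sample)
            return lambda x: np.searchsorted(s, x, side="right") / len(s)

        def makarov_bounds(y1, y0, delta, grid_size=500):
            """Lower/upper bounds on P(Y1 - Y0 <= delta) given only the two marginals."""
            F1, F0 = ecdf(y1), ecdf(y0)
            lo = min(y1.min(), y0.min()) - 1.0
            hi = max(y1.max(), y0.max()) + 1.0
            grid = np.linspace(lo, hi, grid_size)
            diff = F1(grid) - F0(grid - delta)
            lower = np.max(np.maximum(diff, 0.0))        # sup_y max{F1(y) - F0(y - delta), 0}
            upper = 1.0 + np.min(np.minimum(diff, 0.0))  # 1 + inf_y min{F1(y) - F0(y - delta), 0}
            return lower, upper

        rng = np.random.default_rng(3)
        y0 = rng.normal(0.0, 1.0, 5000)                # control outcomes
        y1 = rng.normal(0.5, 1.0, 5000)                # treated outcomes
        print(makarov_bounds(y1, y0, delta=0.0))       # bounds on P(treatment effect <= 0)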
  6. By: Sebastian Ankargren; Måns Unosson; Yukai Yang
    Abstract: We propose a Bayesian vector autoregressive (VAR) model for mixed-frequency data. Our model is based on the mean-adjusted parametrization of the VAR and allows for an explicit prior on the 'steady states' (unconditional means) of the included variables. Based on recent developments in the literature, we discuss extensions of the model that improve the flexibility of the modeling approach. These extensions include a hierarchical shrinkage prior for the steady-state parameters, and the use of stochastic volatility to model heteroskedasticity. We put the proposed model to use in a forecast evaluation using US data consisting of 10 monthly and 3 quarterly variables. The results show that the predictive ability typically benefits from using mixed-frequency data, and that improvements can be obtained for both monthly and quarterly variables. We also find that the steady-state prior generally enhances the accuracy of the forecasts, and that accounting for heteroskedasticity by means of stochastic volatility usually provides additional improvements, although not for all variables.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.09151&r=all
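    For reference (editorial note), the mean-adjusted ("steady-state") parametrization referred to in the abstract is commonly written as follows, with the prior placed directly on the unconditional mean (notation assumed, not copied from the paper):

        y_t - \mu = \sum_{j=1}^{p} \Pi_j (y_{t-j} - \mu) + u_t, \quad u_t \sim N(0, \Sigma_t), \quad \mu \sim N(\underline{\mu}, \underline{\Omega}_{\mu}),

    so that shrinkage on \mu encodes prior views about the variables' steady states, and \Sigma_t may be time-varying under stochastic volatility.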
  7. By: Patricia Dörr; Jan Pablo Burgard
    Abstract: Many variables that social and economic researchers seek to analyze through regression analysis violate normality assumptions. A standard remedy in that case is the logarithmic transformation. However, taking logarithms is not always sufficient to re-establish model assumptions. A more general approach is to determine a family of transformations and to estimate the appropriate parameter of such a transformation. This can also be done in mixed effects models, which can account for unobserved heterogeneity in grouped data. When the analyzed data are gathered from a complex survey whose design is informative for the model - something that is difficult to rule out a priori - estimates from transformed linear mixed models can be biased. Since the bias also affects the transformation parameter, the distortion of the population parameters is even more problematic than in standard regression. In standard regression, survey weights are used to account for the design. To the best of our knowledge, none of the existing algorithms allows survey weights to be included in these transformed linear mixed models. This paper adapts a recently suggested algorithm so that survey weights can be included in Box-Cox or dual-transformed mixed models. A simulation study demonstrates the need to account for an informative survey design.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:trr:wpaper:201916&r=all
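    Illustration (editorial): a minimal Python sketch of one ingredient, choosing a Box-Cox transformation parameter by a survey-weighted (pseudo-likelihood) profile likelihood. The paper embeds this in a linear mixed model with random effects; the fixed-effects-only setup, the DGP and the weights below are simplifying assumptions.

        import numpy as np

        def boxcox(y, lam):
            return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

        def weighted_profile_loglik(lam, y, X, w):
            """Survey-weighted profile log-likelihood of the Box-Cox parameter."""
            z = boxcox(y, lam)
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)      # weighted least squares
            resid = z - X @ beta
            sigma2 = np.sum(w * resid**2) / np.sum(w)
            return (-0.5 * np.sum(w) * np.log(sigma2)             # weighted Gaussian fit
                    + (lam - 1.0) * np.sum(w * np.log(y)))        # Jacobian of the transform

        rng = np.random.default_rng(4)
        n = 1000
        x = rng.uniform(0, 1, n)
        X = np.column_stack([np.ones(n), x])
        y = np.exp(0.5 + 1.0 * x + 0.3 * rng.standard_normal(n))  # log scale is "true"
        w = rng.uniform(0.5, 2.0, n)                               # illustrative survey weights

        grid = np.linspace(-1, 1, 41)
        best = grid[np.argmax([weighted_profile_loglik(l, y, X, w) for l in grid])]
        print("estimated Box-Cox lambda:", best)                   # should be near 0 (log)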
  8. By: Takaki Sato
    Abstract: This study considers semiparametric partially linear spatial autoregressive models with autoregressive disturbances that contain an unspecified nonparametric component and allow for spatial lags in both the dependent variables and the disturbances. Approximating the nonparametric function by basis functions, we propose a three-step estimation procedure for the model. We also establish the consistency and asymptotic normality of the proposed estimators. The finite sample performance of the proposed estimators is then examined using Monte Carlo simulations. As an empirical application, we use the proposed model and estimation method to analyze Boston housing price data to evaluate the effect of air pollution on the value of owner-occupied homes.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:104&r=all
  9. By: Guochang Wang; Ke Zhu; Guodong Li; Wai Keung Li
    Abstract: Asymmetric power GARCH models have been widely used to study the higher order moments of financial returns, while their quantile estimation has rarely been investigated. This paper introduces a simple monotonic transformation of the conditional quantile function to make quantile regression tractable. The asymptotic normality of the resulting quantile estimators is established under either stationarity or non-stationarity. Moreover, based on the estimation procedure, new tests for strict stationarity and asymmetry are also constructed. This is the first attempt at quantile estimation for non-stationary ARCH-type models in the literature. The usefulness of the proposed methodology is illustrated by simulation results and real data analysis.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.09343&r=all
  10. By: Riveros Gavilanes, John Michael
    Abstract: This article performs simulations with different small samples, considering the regression techniques of OLS, Jackknife, Bootstrap, Lasso and Robust Regression, in order to establish the best approach in terms of lower bias and statistical significance with a pre-specified data generating process (DGP). The methodology consists of a DGP with 5 variables and 1 constant parameter, regressed across the simulations on a set of random normally distributed variables with sample sizes of 6, 10, 20 and 500. Using the expected values for each sample size, the accuracy of the estimators was calculated in terms of the relative bias for each technique. The results indicate that the Jackknife approach is more suitable for small sample sizes, as stated by Speed (1994), while the Bootstrap approach proved sensitive to small sample sizes, indicating that it might not be suitable for establishing significant relationships in the regressions. The Monte Carlo simulations also showed that when a significant relationship is found in small samples, this relationship tends to remain significant when the sample size is increased.
    Keywords: Small sample size; Statistical significance; Regression; Simulations; Bias
    JEL: C15 C19 C63
    Date: 2019–11–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:97017&r=all
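    Illustration (editorial): a minimal Python sketch of this kind of Monte Carlo exercise, comparing the simulated bias of OLS and jackknife coefficient estimates across small sample sizes. The DGP is simplified (a constant and two regressors) and is not the paper's.

        import numpy as np

        rng = np.random.default_rng(5)
        beta_true = np.array([1.0, 0.5, -0.3])          # constant plus two slopes

        def ols(X, y):
            return np.linalg.lstsq(X, y, rcond=None)[0]

        def jackknife(X, y):
            """Jackknife bias-corrected OLS coefficients."""
            n = len(y)
            full = ols(X, y)
            loo = np.array([ols(np.delete(X, i, 0), np.delete(y, i)) for i in range(n)])
            return n * full - (n - 1) * loo.mean(axis=0)

        def simulate(n, reps=2000):
            bias_ols = np.zeros(3)
            bias_jack = np.zeros(3)
            for _ in range(reps):
                X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
                y = X @ beta_true + rng.standard_normal(n)
                bias_ols += ols(X, y) - beta_true
                bias_jack += jackknife(X, y) - beta_true
            return bias_ols / reps, bias_jack / reps

        for n in (6, 10, 20):
            print(n, simulate(n))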
  11. By: Isabel Casas; Jiti Gao; Bin Peng; Shangyu Xie
    Abstract: The dynamics of the income elasticity of health expenditure are considered for the OECD and the Eurozone over the period 1995-2014. Motivated by several modelling challenges, this paper studies a class of non-linear cointegration panel data models, controlling for cross-section dependence and a certain form of endogeneity. Using the corresponding methods, our empirical analyses show a slight increase in the income elasticity of healthcare expenditure over the years, but with values still under 1, meaning that healthcare is not a luxury good in the OECD and the Eurozone.
    Keywords: health expenditure, income elasticity, nonparametric kernel smoothing, nonstationarity
    JEL: C14 C23 H51
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2019-28&r=all
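    Illustration (editorial): a minimal Python sketch of the kernel-smoothing idea behind time-varying coefficients, estimating beta(t/T) in y_t = x_t'beta(t/T) + e_t by kernel-weighted least squares. The paper's estimator additionally handles nonstationary panels, cross-section dependence and endogeneity; the DGP and bandwidth below are assumptions.

        import numpy as np

        def tv_coef(y, X, tau, h=0.1):
            """Kernel-weighted least squares around the rescaled time point tau in (0,1)."""
            T = len(y)
            u = np.arange(1, T + 1) / T
            k = np.exp(-0.5 * ((u - tau) / h) ** 2)      # Gaussian kernel weights
            W = np.diag(k)
            return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

        rng = np.random.default_rng(6)
        T = 1000
        x = rng.standard_normal(T)
        beta_path = 0.8 + 0.4 * np.arange(1, T + 1) / T  # "elasticity" rising over time
        y = beta_path * x + 0.2 * rng.standard_normal(T)
        X = x.reshape(-1, 1)
        for tau in (0.2, 0.5, 0.8):
            print(tau, tv_coef(y, X, tau))               # should track 0.8 + 0.4*tau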
  12. By: Tzougas, G.; Hoon, W. L.; Lim, J. M.
    Abstract: This paper presents the Negative Binomial-Inverse Gaussian regression model for approximating the number of claims as an alternative to mixed Poisson regression models that have been widely used in various disciplines, including actuarial applications. The Negative Binomial-Inverse Gaussian regression model can be considered a plausible model for highly dispersed claim count data, and this is the first time it has been used in a statistical or actuarial context. Our main contribution is a simple Expectation-Maximization type algorithm for maximum likelihood estimation of the model. Finally, a real data application using motor insurance data is examined, and both the a priori and a posteriori, or Bonus-Malus, premium rates resulting from the Negative Binomial-Inverse Gaussian model are calculated via the net premium principle and compared to those determined by the Negative Binomial Type I and the Poisson-Inverse Gaussian regression models that have been traditionally used for a priori and a posteriori ratemaking.
    Keywords: Negative binomial-inverse Gaussian regression model; EM algorithm; Motor third party liability insurance; Ratemaking
    JEL: E6
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:101728&r=all
  13. By: Jean-Jacques Forneron
    Abstract: Quasi-Monte Carlo (qMC) methods are a powerful alternative to classical Monte-Carlo (MC) integration. Under certain conditions, they can approximate the desired integral at a faster rate than the usual Central Limit Theorem rate, resulting in more accurate estimates. This paper explores these methods in a simulation-based estimation setting with an emphasis on the scramble of Owen (1995). For cross-sections and short panels, the resulting Scrambled Method of Moments simply replaces the random number generator with the scramble (available in most software packages) to reduce simulation noise. Scrambled Indirect Inference estimation is also considered. For time series, qMC may not apply directly because of a curse of dimensionality on the time dimension. A simple algorithm and a class of moments which circumvent this issue are described. Asymptotic results are given for each algorithm. Monte-Carlo examples illustrate these results in finite samples, including an income process with "lots of heterogeneity."
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.09128&r=all
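    Illustration (editorial): a minimal Python sketch of the scrambling idea, replacing pseudo-random draws by scrambled Sobol points (as implemented in scipy.stats.qmc) when simulating a moment, here E[exp(Z)] with Z standard normal. The target, seed and sample size are illustrative assumptions, not the paper's design.

        import numpy as np
        from scipy.stats import norm, qmc

        true_value = np.exp(0.5)                  # E[exp(Z)] for Z ~ N(0,1)
        m = 12                                    # use 2^m simulation draws
        n = 2 ** m

        rng = np.random.default_rng(7)
        mc_est = np.exp(rng.standard_normal(n)).mean()        # plain Monte Carlo

        sobol = qmc.Sobol(d=1, scramble=True, seed=7)
        u = sobol.random_base2(m).ravel()                     # scrambled Sobol points in (0,1)
        qmc_est = np.exp(norm.ppf(u)).mean()                  # map uniforms to normals, average

        print("plain MC error:     ", abs(mc_est - true_value))
        print("scrambled qMC error:", abs(qmc_est - true_value))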
  14. By: van Aert, Robbie Cornelis Maria; van Assen, Marcel A. L. M.
    Abstract: Publication bias is a major threat to the validity of a meta-analysis, resulting in overestimated effect sizes. P-uniform is a meta-analysis method that corrects estimates for publication bias, but the method overestimates the average effect size in the presence of heterogeneity in true effect sizes (i.e., between-study variance). We propose an extension and improvement of the p-uniform method called p-uniform*. P-uniform* improves upon p-uniform in three important ways, as it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size in case of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of between-study variance in true effect sizes. We compared the statistical properties of p-uniform* with the selection model approach of Hedges (1992) as implemented in the R package “weightr” and with the random-effects model in both an analytical and a Monte-Carlo simulation study. Results revealed that the statistical properties of p-uniform* and the selection model approach were generally comparable and outperformed the random-effects model if publication bias was present. We demonstrate that both methods estimate the average true effect size rather well with two or more primary studies in a meta-analysis, and the between-study variance rather well with ten or more primary studies. However, neither method performs well if the meta-analysis only includes statistically significant studies. We offer recommendations for correcting meta-analyses for publication bias in practice, and provide an R package and an easy-to-use web application for applying p-uniform*.
    Date: 2018–10–02
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:zqjr9&r=all
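    Illustration (editorial): a minimal Python sketch of the basic p-uniform idea that p-uniform* extends: conditional on being statistically significant, the probabilities of the observed estimates are uniform at the true effect, so the effect can be estimated by equating a Fisher-type statistic of those conditional probabilities to its expectation. This is not the authors' p-uniform* implementation (see their R package and web application); normality with known standard errors and the simulated numbers are assumptions.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        def conditional_log_probs(delta, est, se, alpha=0.05):
            """log P(estimate > est_i | estimate significant; true effect delta), per study."""
            crit = norm.ppf(1 - alpha / 2) * se           # critical effect size per study
            return norm.logsf((est - delta) / se) - norm.logsf((crit - delta) / se)

        def p_uniform_estimate(est, se):
            """delta at which sum(-log q_i) equals its expected value k under uniform q_i."""
            k = len(est)
            g = lambda d: -np.sum(conditional_log_probs(d, est, se)) - k
            return brentq(g, -5.0, 5.0)

        rng = np.random.default_rng(8)
        true_effect, k = 0.4, 30
        se = np.full(k, 0.15)
        est = true_effect + se * rng.standard_normal(k)
        sig = est > norm.ppf(0.975) * se                  # only "published" significant studies
        print("naive mean of significant estimates:", est[sig].mean())
        print("p-uniform style estimate:           ", p_uniform_estimate(est[sig], se[sig]))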
  15. By: Gruener, Sven
    Abstract: Clinical studies and economic experiments are often conducted as randomized controlled trials. In contrast to clinical drug trials, sample size calculations have rarely been carried out by experimental economists. Using simple examples for illustration purposes, I discuss the pros and cons of using sample size calculations in experimental economics.
    Date: 2018–07–04
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:43zbg&r=all
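    Illustration (editorial): the standard sample-size calculation used in clinical trials that the paper contrasts with practice in experimental economics, here for a two-arm comparison of means under a normal approximation; the effect size and power are illustrative choices.

        import math
        from scipy.stats import norm

        def n_per_arm(effect, sd, alpha=0.05, power=0.8):
            """Participants per arm to detect a mean difference `effect` with given power."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return 2 * ((z_a + z_b) * sd / effect) ** 2

        # Detecting a 0.2-SD ("small") effect at 80% power and 5% size needs roughly 393 per arm.
        print(math.ceil(n_per_arm(effect=0.2, sd=1.0)))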
  16. By: Moura, Guilherme V.; Ruiz, Esther; Santos, André A. P.
    Abstract: Modelling and forecasting high dimensional covariance matrices is a key challenge in data-rich environments involving even thousands of time series, since most of the available models suffer from the curse of dimensionality. In this paper, we challenge some popular multivariate GARCH (MGARCH) and Stochastic Volatility (MSV) models by fitting them to forecast the conditional covariance matrices of financial portfolios with dimension up to 1000 assets observed daily over a 30-year time span. The time evolution of the conditional variances and covariances estimated by the different models is compared and evaluated in the context of a portfolio selection exercise. We conclude that, in a realistic context in which transaction costs are taken into account, modelling the covariance matrices as latent Wishart processes delivers more stable optimal portfolio compositions and, consequently, higher Sharpe ratios.
    Keywords: Stochastic Volatility; Risk-Adjusted Return; Portfolio Turnover; Minimum-Variance Portfolio; Garch; Covariance Forecasting
    JEL: G17 C53
    Date: 2019–11–30
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:29291&r=all
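    Illustration (editorial): a minimal Python sketch of how a conditional covariance forecast feeds the portfolio selection exercise mentioned above, via global minimum-variance weights w = S^{-1}1 / (1'S^{-1}1). The covariance "forecast" here is only a shrunk sample estimate, and the dimensions and data are illustrative assumptions.

        import numpy as np

        def min_variance_weights(cov):
            """Global minimum-variance portfolio weights for a given covariance matrix."""
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)
            return w / w.sum()

        rng = np.random.default_rng(9)
        n_assets, n_days = 50, 500
        returns = 0.01 * rng.standard_normal((n_days, n_assets))
        sample_cov = np.cov(returns, rowvar=False)
        forecast = 0.9 * sample_cov + 0.1 * np.diag(np.diag(sample_cov))   # simple shrinkage
        w = min_variance_weights(forecast)
        print("weights sum to", w.sum(), "; largest position", w.max())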
  17. By: Babkin, Andrey
    Abstract: Incorporating side information into Robust Matrix Factorization with Quantile Random Forest under Bayesian framework
    Date: 2019–08–22
    URL: http://d.repec.org/n?u=RePEc:osf:frenxi:b8jke&r=all
  18. By: Cherrier, Beatrice; Backhouse, Roger
    Abstract: The FMP model exemplifies the Keynesian models later criticized by Lucas, Sargent and others as conceptually flawed. For economists in the 1960s such models were “big science”, posing organizational as well as theoretical and empirical problems. It was part of an even larger industry in which the messiness for which such models were later criticized was endorsed as enabling modelers to be guided by data and as offering the flexibility needed to undertake policy analysis and to analyze the consequences of events. Practices that critics considered fatal weaknesses, such as intercept adjustments or fudging, were what clients paid for as the macroeconometric modeling industry went private.
    Date: 2018–10–15
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:39xkz&r=all

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.