nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒06‒08
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Dynamic Panel Data Models Using ε-Contamination By Baltagi, Badi H.; Bresson, Georges; Chaturvedi, Anoop; Lacroix, Guy
  2. Bootstrap Inference for Quantile Treatment Effects in Randomized Experiments with Matched Pairs By Jiang, Liang; Liu, Xiaobin; Zhang, Yichong
  3. The Log-GARCH Model via ARMA Representations By Sucarrat, Genaro
  4. Consistent estimation of panel data sample selection models By Sergi Jiménez-Martín; José M. Labeaga; Majid al Sadoon
  5. Predicting the Long-term Stock Market Volatility: A GARCH-MIDAS Model with Variable Selection By Tong Fang; Tae-Hwy Lee; Zhi Su
  6. Adaptive quadrature schemes for Bayesian inference via active learning By López Santiago, Javier; Delgado Gómez, David; Elvira Arregui, Víctor; Martino, Luca; Llorente Fernandez, Fernando
  7. Nonparametric Expected Shortfall Forecasting Incorporating Weighted Quantiles By Giuseppe Storti; Chao Wang
  8. Scale-, time- and asset-dependence of Hawkes process estimates on high frequency price changes By Alexander Wehrli; Spencer Wheatley; Didier Sornette
  9. Inference on Achieved Signal Noise Ratio By Steven E. Pav
  10. When are Google data useful to nowcast GDP? An approach via pre-selection and shrinkage By Laurent Ferrara; Anna Simoni
  11. A stochastic dominance test under survey nonresponse with an application to comparing trust levels in Lebanese public institutions By Fakih, Ali; Makdissi, Paul; Marrouch, Walid; Tabri, Rami V.; Yazbeck, Myra
  12. Real-Time Detection of Regimes of Predictability in the U.S. Equity Premium By Harvey, David I; Leybourne, Stephen J; Sollis, Robert; Taylor, AM Robert
  13. Measuring wage inequality under right censoring By Paulo M.M. Rodrigues; João Nicolau; Pedro Raposo

  1. By: Baltagi, Badi H. (Syracuse University); Bresson, Georges (University of Paris 2); Chaturvedi, Anoop (University of Allahabad); Lacroix, Guy (Université Laval)
    Abstract: This paper extends the work of Baltagi et al. (2018) to the popular dynamic panel data model. We investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, we consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner's (1986) g-priors for the variance-covariance matrices. We propose a general "toolbox" for a wide range of specifications which includes the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman-Taylor world and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, we compare the finite sample properties of our proposed estimator to those of standard classical estimators. The paper contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
    Keywords: robust Bayesian estimator, g-priors, type-II maximum likelihood posterior density, ε-contamination, panel data, dynamic model, two-stage hierarchy
    JEL: C11 C23 C26
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13214&r=all
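    Sketch: In generic notation (not necessarily the paper's), the ε-contamination class of priors takes the form
      \Gamma(\pi_0, \varepsilon) = \{ \pi : \pi(\theta) = (1 - \varepsilon)\,\pi_0(\theta) + \varepsilon\, q(\theta), \ q \in Q \},
    where \pi_0 is the elicited base prior, Q is a class of contaminating distributions, and \varepsilon quantifies the doubt placed on \pi_0; robustness is then assessed over all priors in \Gamma.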
  2. By: Jiang, Liang (Fudan University); Liu, Xiaobin (Zhejiang University); Zhang, Yichong (School of Economics, Singapore Management University)
    Abstract: This paper examines inference for quantile treatment effects (QTEs) in randomized experiments with matched-pairs designs (MPDs). We derive the limiting distribution of the QTE estimator under MPDs and highlight the difficulty of analytical inference due to parameter tuning. We show that a naive weighted bootstrap fails to approximate the limiting distribution of the QTE estimator under MPDs because it ignores the dependence structure within the matched pairs. We then propose two bootstrap methods that can consistently approximate that limiting distribution: the gradient bootstrap and the weighted bootstrap of the inverse propensity score weighted (IPW) estimator. The gradient bootstrap is free of tuning parameters but requires the knowledge of pairs’ identities. The weighted bootstrap of the IPW estimator does not require such knowledge but involves one tuning parameter. Both methods are straightforward to implement and able to provide pointwise confidence intervals and uniform confidence bands that achieve exact limiting rejection probabilities under the null. We illustrate their finite sample performance using both simulations and a well-known dataset on microfinance.
    Keywords: Bootstrap inference; matched pairs; quantile treatment effect; randomized control trials
    JEL: C14 C21
    Date: 2020–05–25
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2020_015&r=all
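    Sketch: A minimal weighted bootstrap of an IPW-type QTE estimator on simulated matched-pairs data (the DGP and all names are illustrative; the paper's version reweights by a nonparametrically estimated propensity score with one tuning parameter, and its gradient bootstrap is not shown):

      import numpy as np

      def ipw_qte(y, d, w, tau):
          """QTE as the difference of weighted tau-quantiles of treated and controls."""
          def wquant(v, wt):
              o = np.argsort(v)
              cw = np.cumsum(wt[o]) / wt[o].sum()
              return v[o][np.searchsorted(cw, tau)]
          return wquant(y[d == 1], w[d == 1]) - wquant(y[d == 0], w[d == 0])

      rng = np.random.default_rng(0)
      n = 500                                  # number of matched pairs
      x = rng.normal(size=n)                   # pair-level covariate shared within a pair
      y1 = 1.0 + x + rng.normal(size=n)        # treated outcomes: true QTE = 1 at every tau
      y0 = x + rng.normal(size=n)              # control outcomes
      y = np.concatenate([y1, y0])
      d = np.concatenate([np.ones(n, dtype=int), np.zeros(n, dtype=int)])

      tau = 0.5
      qte_hat = ipw_qte(y, d, np.ones_like(y), tau)

      # Weighted bootstrap: perturb every observation with an i.i.d. exponential weight.
      boot = np.array([ipw_qte(y, d, rng.exponential(size=y.size), tau)
                       for _ in range(999)])
      print(f"QTE({tau}) = {qte_hat:.3f}, 95% CI = "
            f"[{np.quantile(boot, 0.025):.3f}, {np.quantile(boot, 0.975):.3f}]")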
  3. By: Sucarrat, Genaro
    Abstract: The log-GARCH model provides a flexible framework for the modelling of economic uncertainty, financial volatility and other positively valued variables. Its exponential specification ensures that fitted volatilities are positive, allows for flexible dynamics, simplifies inference when parameters are equal to zero under the null, and the log-transform makes the model robust to jumps or outliers. An additional advantage is that the model admits ARMA-like representations. This means log-GARCH models can readily be estimated by means of widely available software, and it makes a vast range of well-known time-series results and methods applicable. This chapter provides an overview of the log-GARCH model and its ARMA representation(s), and of how estimation can be implemented in practice. After the introduction, we delineate the univariate log-GARCH model with volatility asymmetry ("leverage"), and show how its (nonlinear) ARMA representation is obtained. Next, stationary covariates ("X") are added, before a first-order specification with asymmetry is illustrated empirically. Then we turn our attention to multivariate log-GARCH-X models. We start by presenting the multivariate specification in its general form, but quickly turn our focus to specifications that can be estimated equation-by-equation - even in the presence of Dynamic Conditional Correlations (DCCs) of unknown form. Next, a multivariate non-stationary log-GARCH-X model is formulated, in which the X-covariates can be stationary and/or nonstationary. A common critique directed towards the log-GARCH model is that its ARCH terms may not exist in the presence of inliers. A separate section is devoted to how this can be handled in practice. Next, the generalisation of log-GARCH models to logarithmic Multiplicative Error Models (MEMs) is made explicit. Finally, the chapter concludes.
    Keywords: Financial return, volatility, ARCH, exponential GARCH, log-GARCH, Multivariate GARCH
    JEL: C22 C32 C51 C58
    Date: 2018–08–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:100386&r=all
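    Sketch: The ARMA representation is easily made explicit in the first-order case without asymmetry or covariates (generic notation). With \epsilon_t = \sigma_t z_t and
      \log\sigma_t^2 = \omega + \alpha \log\epsilon_{t-1}^2 + \beta \log\sigma_{t-1}^2,
    let y_t = \log\epsilon_t^2 and u_t = \log z_t^2 - E[\log z_t^2]. Substituting \log\sigma_t^2 = y_t - \log z_t^2 and rearranging gives
      y_t = [\omega + (1 - \beta) E(\log z_t^2)] + (\alpha + \beta)\, y_{t-1} + u_t - \beta u_{t-1},
    an ARMA(1,1) in \log\epsilon_t^2, which is why widely available ARMA routines can be used for estimation.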
  4. By: Sergi Jiménez-Martín; José M. Labeaga; Majid al Sadoon
    Abstract: We analyse the properties of classical (fixed effects, first-differences and random effects) as well as generalised method of moments-instrumental variables estimators in either static or dynamic panel data sample selection models. We show that correlation of the unobserved errors is not sufficient for inconsistency to arise; rather, the presence of common (and/or non-independent) non-deterministic covariates in the selection and outcome equations is generally necessary.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:fda:fdaddt:2020-06&r=all
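    Sketch: A toy Monte Carlo in the spirit of the claim above, contrasting the within estimator on the selected subsample when the selection and outcome equations share no covariates with the case where the outcome covariate also drives selection (the DGP is entirely illustrative, not the authors'):

      import numpy as np

      rng = np.random.default_rng(1)
      N, T, beta = 2000, 6, 1.0
      alpha = rng.normal(size=(N, 1))                 # individual effects
      x = rng.normal(size=(N, T))                     # outcome covariate
      z = rng.normal(size=(N, T))                     # selection covariate
      ev = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=(N, T))
      e, v = ev[..., 0], ev[..., 1]                   # correlated equation errors
      y = beta * x + alpha + e                        # outcome equation

      def fe_on_selected(s):
          """Within (fixed-effects) slope estimate using selected observations only."""
          num = den = 0.0
          for i in range(N):
              sel = s[i]
              if sel.sum() < 2:
                  continue
              xd = x[i, sel] - x[i, sel].mean()
              num += xd @ (y[i, sel] - y[i, sel].mean())
              den += xd @ xd
          return num / den

      s_distinct = z + alpha + v > 0                  # no common covariates
      s_common = x + z + alpha + v > 0                # x enters both equations
      print("FE, distinct covariates:", round(fe_on_selected(s_distinct), 3))
      print("FE, common covariate x :", round(fe_on_selected(s_common), 3))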
  5. By: Tong Fang (Shandong University); Tae-Hwy Lee (Department of Economics, University of California Riverside); Zhi Su (Central University of Finance and Economics)
    Abstract: We consider a GARCH-MIDAS model with short-term and long-term volatility components, in which the long-term volatility component depends on many macroeconomic and financial variables. We select the variables that exhibit the strongest effects on the long-term stock market volatility via maximizing the penalized log-likelihood function with an Adaptive-Lasso penalty. The GARCH-MIDAS model with variable selection enables us to incorporate many variables in a single model without estimating a large number of parameters. In the empirical analysis, three variables (namely, housing starts, default spread and realized volatility) are selected from a large set of macroeconomic and financial variables. The recursive out-of-sample forecasting evaluation shows that variable selection significantly improves the predictive ability of the GARCH-MIDAS model for the long-term stock market volatility.
    Keywords: Stock market volatility, GARCH-MIDAS model, Variable selection, Penalized maximum likelihood, Adaptive-Lasso
    JEL: C32 C51 C53 G12
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202009&r=all
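    Sketch: In generic notation, GARCH-MIDAS decomposes the conditional variance into a short-term GARCH(1,1) component g_{i,t} and a long-term component \tau_t driven by low-frequency variables through MIDAS weights:
      r_{i,t} = \sqrt{\tau_t\, g_{i,t}}\, \epsilon_{i,t}, \qquad \log\tau_t = m + \sum_{k=1}^{K} \theta_k \sum_{j=1}^{J} \varphi_j(\omega_k)\, X_{k,t-j}.
    Variable selection then maximizes the log-likelihood minus an Adaptive-Lasso penalty \lambda \sum_k \hat{w}_k |\theta_k|, with data-driven weights such as \hat{w}_k = |\tilde{\theta}_k|^{-\gamma} from a first-stage estimate, so that irrelevant \theta_k are shrunk exactly to zero; the paper's exact specification may differ in details.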
  6. By: López Santiago, Javier; Delgado Gómez, David; Elvira Arregui, Víctor; Martino, Luca; Llorente Fernandez, Fernando
    Abstract: Numerical integration and emulation are fundamental topics across scientific fields. Computation of an integral by replacing the integrand with a surrogate function is a recurring problem in applied mathematics and statistics. In this work, we introduce novel adaptive quadrature schemes based on an active learning procedure. We introduce an interpolative approach for building a surrogate posterior density, combining it with Monte Carlo sampling methods and other quadrature rules. The nodes of the quadrature are sequentially chosen by maximizing a suitable acquisition function which takes into account the current approximation of the posterior and the positions of the nodes. It is important to remark that the maximization of the acquisition function does not require additional evaluations of the true posterior. We also provide an equivalent importance sampling interpretation which allows the design of extended techniques. The resulting methods are powerful tools which supply a numerical approximation of the integrals of interest and an emulator of the complex posterior distribution. The novel framework does not require a probabilistic interpretation of the constructed emulator, allowing the use of more general kernel-basis functions than other schemes in the literature. For instance, we also consider the use of non-symmetric nearest neighbor (NN) bases, which ensures that the interpolation coefficients are always positive. Several theoretical results are provided and discussed. Numerical results show the advantage of the proposed approach. One of the numerical experiments considers the inference problem for a complex astronomical model, with the goal of revealing the number of planets orbiting other stars.
    Keywords: Active Learning; Experimental Design; Bayesian Quadrature; Monte Carlo Methods; Emulation; Numerical Integration
    Date: 2020–05–29
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:30537&r=all
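    Sketch: A toy version of the sequential node-selection loop described above, with a piecewise-linear surrogate and a distance-weighted acquisition (the paper's kernel and NN bases and its acquisition function are richer):

      import numpy as np

      def posterior_unnorm(x):
          """Toy unnormalized posterior (bimodal)."""
          return np.exp(-0.5 * (x - 1) ** 2) + 0.7 * np.exp(-0.5 * ((x + 2) / 0.6) ** 2)

      grid = np.linspace(-6, 6, 2001)
      nodes = [-4.0, 0.0, 4.0]                       # initial design
      vals = [posterior_unnorm(t) for t in nodes]

      for _ in range(15):
          order = np.argsort(nodes)
          surrogate = np.interp(grid, np.array(nodes)[order], np.array(vals)[order])
          # Acquisition: surrogate value times distance to the nearest node, so new
          # nodes go where the posterior is large AND the design is sparse; note
          # that maximizing it needs no extra evaluation of the true posterior.
          dist = np.min(np.abs(grid[:, None] - np.array(nodes)[None, :]), axis=1)
          x_new = grid[np.argmax(surrogate * dist)]
          nodes.append(x_new)                        # one true evaluation per iteration
          vals.append(posterior_unnorm(x_new))

      order = np.argsort(nodes)
      surrogate = np.interp(grid, np.array(nodes)[order], np.array(vals)[order])
      print("Approximate normalizing constant:", np.trapz(surrogate, grid))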
  7. By: Giuseppe Storti; Chao Wang
    Abstract: A new semi-parametric Expected Shortfall (ES) estimation and forecasting framework is proposed. The proposed approach is based on a two-step estimation procedure. The first step involves the estimation of Value-at-Risk (VaR) at different levels through a set of quantile time series regressions. Then, the ES is computed as a weighted average of the estimated quantiles. The quantile weighting structure is parsimoniously parameterized by means of a Beta function whose coefficients are optimized by minimizing a joint VaR and ES loss function of the Fissler-Ziegel class. The properties of the proposed approach are first evaluated with an extensive simulation study using various data generating processes. Two forecasting studies with different out-of-sample sizes are conducted, one of which focuses on the 2008 Global Financial Crisis (GFC) period. The proposed models are applied to 7 stock market indices and their forecasting performances are compared to those of a range of parametric, non-parametric and semi-parametric models, including GARCH, Conditional AutoRegressive Expectile (CARE; Taylor, 2008), joint VaR and ES quantile regression models (Taylor, 2019) and a simple average of quantiles. The results of the forecasting experiments provide clear evidence in support of the proposed models.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.04868&r=all
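    Sketch: The weighted-quantile step can be illustrated as follows (generic notation; in the paper the Beta coefficients are optimized by minimizing a joint VaR and ES loss of the Fissler-Ziegel class, which is omitted here):

      import numpy as np
      from scipy.stats import beta

      alpha_es = 0.025                            # ES probability level
      taus = np.linspace(0.0025, alpha_es, 10)    # quantile levels in the tail

      a, b = 2.0, 5.0                             # Beta shapes; estimated in the paper
      w = beta.pdf(taus / alpha_es, a, b)         # Beta-density weights over the levels
      w /= w.sum()

      # Tail quantile (VaR) forecasts would come from quantile regressions; toy values:
      var_forecasts = np.array([-4.1, -3.8, -3.5, -3.3, -3.1,
                                -3.0, -2.9, -2.8, -2.7, -2.6])
      print("ES forecast as weighted average of quantiles:", w @ var_forecasts)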
  8. By: Alexander Wehrli (ETH Zürich); Spencer Wheatley (ETH Zürich); Didier Sornette (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC); Swiss Finance Institute)
    Abstract: The statistical estimate of the branching ratio η of the Hawkes model, when fitted to windows of mid-price changes, has been reported to approach criticality (η = 1) as the fitting window becomes large. In this study -- using price changes from the EUR/USD currency pair traded on the Electronic Broking Services (EBS) interbank trading platform and the S&P 500 E-mini futures contract traded at the Chicago Mercantile Exchange (CME) -- it is shown that the estimated branching ratio depends little upon window size and is usually far from criticality. This is done by controlling for exogenous non-stationarities/heterogeneities at inter- and intraday scales, using information criteria to select the degree of flexibility of the Hawkes immigration intensity (either piecewise constant or adaptive logspline), estimated with an expectation maximization (EM) algorithm. The bias incurred by keeping the immigration intensity constant is small for time scales up to two hours, but can become as high as 0.3 for windows spanning days. This emphasizes the importance of choosing an appropriate model for the immigration intensity in the application of Hawkes processes to financial data and elsewhere. The branching ratio is also found to have an intraday seasonality, appearing to be higher at times when market activity is dominated by supposedly reflexive automated decisions and a lack of fundamental news and trading. The insights into the microstructure of the two considered markets derived from our Hawkes process fits suggest that equity futures exhibit more complex non-stationary features, are more endogenous and persistent, and are traded at higher speed than spot foreign exchange. We complement our point process study with EM estimates of integer-valued autoregressive (INAR) time series models at even longer scales of months. Transferring our methodologies to the aggregate bin-count setting confirms that even at these very long scales, criticality can be rejected.
    Keywords: Hawkes process; Integer-valued autoregressive process; Econometrics; High frequency financial data; Market microstructure; Spurious inference; Nonstationarity; EM algorithm
    JEL: C01 C40 C52
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2039&r=all
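    Sketch: For reference, the univariate Hawkes model under study has conditional intensity
      \lambda(t) = \mu(t) + \sum_{t_i < t} \phi(t - t_i),
    where \mu(t) is the immigration (exogenous) intensity (piecewise constant or adaptive logspline in the paper) and \phi is the self-excitation kernel. The branching ratio is \eta = \int_0^\infty \phi(s)\, ds, the expected number of directly triggered events per event; \eta = 1 marks criticality, and wrongly holding \mu(t) constant inflates \hat{\eta} towards it.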
  9. By: Steven E. Pav
    Abstract: We describe a procedure to perform approximate inference on the achieved signal-noise ratio of the Markowitz Portfolio under Gaussian i.i.d. returns. The procedure relies on a statistic similar to the Sharpe Ratio Information Criterion. Testing indicates the procedure is somewhat conservative, but otherwise works well for reasonable values of sample and asset universe sizes.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.06171&r=all
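    Sketch: In generic notation, with sample moments \hat{\mu}, \hat{\Sigma} of the asset returns, the sample Markowitz portfolio is \hat{w} \propto \hat{\Sigma}^{-1} \hat{\mu}, and its achieved signal-noise ratio is the population Sharpe ratio of those estimated weights,
      \psi(\hat{w}) = \hat{w}^{\top}\mu / \sqrt{\hat{w}^{\top} \Sigma\, \hat{w}},
    a random quantity (through \hat{w}) bounded above by the optimal \sqrt{\mu^{\top}\Sigma^{-1}\mu}; the paper's procedure performs approximate inference on it.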
  10. By: Laurent Ferrara; Anna Simoni
    Abstract: We analyse whether, and when, a large set of Google search data can be useful to increase GDP nowcasting accuracy once we control for information contained in official variables. We put forward a new approach that combines variable pre-selection and Ridge regularization, and we provide theoretical results on the asymptotic behaviour of the estimator. Empirical results for the euro area show that Google data convey useful information for pseudo-real-time nowcasting of GDP growth during the first four weeks of the quarter, when macroeconomic information is lacking. However, as soon as official data become available, their relative nowcasting power vanishes. In addition, a true real-time analysis confirms that Google data constitute a reliable alternative when official data are lacking.
    Keywords: Nowcasting, Big data, Google search data, Sure Independence Screening, Ridge Regularization
    JEL: C53 C55 E37
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:drm:wpaper:2020-11&r=all
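    Sketch: The pre-selection-plus-Ridge pipeline can be sketched with scikit-learn on fictitious data (the paper's Sure Independence Screening and the tuning of both steps differ in details):

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(2)
      T, p, k = 80, 300, 20                   # quarters, Google categories, variables kept
      X = rng.normal(size=(T, p))             # standardized search indicators (toy)
      gdp = X[:, :3] @ np.array([0.5, -0.3, 0.4]) + 0.5 * rng.normal(size=T)

      # Step 1: pre-selection. Keep the k regressors with the largest absolute
      # marginal correlation with the target (SIS-style screening).
      corr = np.abs(X.T @ (gdp - gdp.mean()))
      keep = np.argsort(corr)[-k:]

      # Step 2: Ridge regularization on the screened set, then nowcast.
      model = Ridge(alpha=10.0).fit(X[:-1][:, keep], gdp[:-1])
      print("Nowcast for the last quarter:", model.predict(X[-1:, keep])[0])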
  11. By: Fakih, Ali; Makdissi, Paul; Marrouch, Walid; Tabri, Rami V.; Yazbeck, Myra
    Abstract: Stochastic dominance comparisons of distributions based on ordinal data arise in many areas of economics. This paper develops a testing procedure for such comparisons under survey sampling from large finite populations with nonresponse using the worst-case bounds of the distributions. The advantage of using these bounds in distributional comparisons is that conclusions are robust to the nature of the nonresponse-generating mechanism. While these bounds on the distributions are often too wide in practice, we show that they can be informative for distributional comparisons in an empirical analysis. This paper examines the dynamics of trust in Lebanese public institutions using the 2013 World Values Survey as well as the 2016 and 2018 waves of the Arab Barometer, and finds convincing evidence of a decrease in confidence in most public institutions between 2013 and 2016.
    Keywords: Empirical Likelihood; Stochastic Dominance Test; Ordinal Variables; Survey Nonresponse
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2020-05&r=all
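    Sketch: The worst-case bounds are of the Manski type. With response probability p and F_R the distribution of the ordinal outcome among respondents, the population CDF is bracketed by
      p F_R(y) <= F(y) <= p F_R(y) + (1 - p)  for all y,
    the two extremes corresponding to all nonrespondents sitting at the top or at the bottom of the scale. A dominance ranking established between such bounds therefore holds under any nonresponse mechanism, which is the robustness property the test exploits.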
  12. By: Harvey, David I; Leybourne, Stephen J; Sollis, Robert; Taylor, AM Robert
    Abstract: We propose new real-time monitoring procedures for the emergence of end-of-sample predictive regimes using sequential implementations of standard (heteroskedasticity-robust) regression t-statistics for predictability applied over relatively short time periods. The procedures we develop can also be used for detecting historical regimes of temporary predictability. Our proposed methods are robust to both the degree of persistence and endogeneity of the regressors in the predictive regression and to certain forms of heteroskedasticity in the shocks. We discuss how the monitoring procedures can be designed such that their false positive rate can be set by the practitioner at the start of the monitoring period using detection rules based on information obtained from the data in a training period. We use these new monitoring procedures to investigate the presence of regime changes in the predictability of the U.S. equity premium at the one-month horizon by traditional macroeconomic and financial variables, and by binary technical analysis indicators. Our results suggest that the one-month ahead equity premium has temporarily been predictable, displaying so-called 'pockets of predictability', and that these episodes of predictability could have been detected in real-time by practitioners using our proposed methodology.
    Keywords: Predictive regression; persistence; temporary predictability; subsampling; U.S. equity premium
    Date: 2020–06–03
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:27775&r=all
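    Sketch: The core ingredient, a sequence of heteroskedasticity-robust predictive t-statistics over short rolling windows, is easy to compute (toy DGP; the paper's detection rules calibrate the threshold in a training period rather than using the naive cut-off below):

      import numpy as np
      import statsmodels.api as sm

      def rolling_tstats(y, x, window=60):
          """HC-robust t-stats for b in y_t = a + b x_t + u_t over rolling windows;
          y should be the next-period return and x the lagged predictor."""
          out = []
          for end in range(window, len(y) + 1):
              fit = sm.OLS(y[end - window:end],
                           sm.add_constant(x[end - window:end])).fit(cov_type="HC3")
              out.append(fit.tvalues[1])
          return np.array(out)

      rng = np.random.default_rng(3)
      T = 400
      x = rng.normal(size=T)               # a persistent predictor would be more realistic
      r = rng.normal(size=T)               # toy equity premium, unpredictable by design
      t_seq = rolling_tstats(r[1:], x[:-1])    # align r_{t+1} with x_t
      flagged = np.abs(t_seq) > 2.57           # naive stand-in for a calibrated threshold
      print("windows flagged as predictable:", flagged.sum(), "of", flagged.size)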
  13. By: Paulo M.M. Rodrigues; João Nicolau; Pedro Raposo
    Abstract: In this paper we investigate potential changes which may have occurred over the last two decades in the probability mass of the right tail of the wage distribution, through the analysis of the corresponding tail index. Specifically, a conditional tail index estimator is introduced which explicitly allows for right-tail censoring (top-coding), a feature of the widely used Current Population Survey (CPS) as well as of other surveys. Ignoring the top-coding may lead to inconsistent estimates of the tail index and to under- or overstatement of inequality and of its evolution over time. Thus, having a tail index estimator that explicitly accounts for this sample characteristic is important for better understanding and computing the tail index dynamics in the censored right tail of the wage distribution. The contribution of this paper is threefold: i) we introduce a conditional tail index estimator that explicitly handles the top-coding problem, and evaluate its finite sample performance and compare it with competing methods; ii) we highlight that the factor values used to adjust the top-coded wage have changed over time and depend on the characteristics of individuals, occupations and industries, and propose suitable values; and iii) we provide an in-depth empirical analysis of the dynamics of the US wage distribution's right tail using the public-use CPS database from 1992 to 2017.
    JEL: C18 C24 E24 J11 J31
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w202008&r=all
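    Sketch: A toy illustration of why top-coding matters for tail-index estimation: the naive Hill estimator applied to a censored sample is distorted relative to the uncensored one (the paper's conditional estimator corrects for this; the DGP below is illustrative):

      import numpy as np

      def hill(x, k):
          """Hill estimator of the tail index from the k largest observations."""
          xs = np.sort(x)
          return 1.0 / np.mean(np.log(xs[-k:] / xs[-k - 1]))

      rng = np.random.default_rng(4)
      n, k, alpha_true = 100_000, 2_000, 2.5
      wages = rng.pareto(alpha_true, size=n) + 1.0   # Pareto right tail, index 2.5

      cap = np.quantile(wages, 0.99)                 # top-code the top 1%, as in the CPS
      wages_capped = np.minimum(wages, cap)

      print("Hill, uncensored:", round(hill(wages, k), 3))
      print("Hill, top-coded :", round(hill(wages_capped, k), 3))   # visibly distorted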

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.