nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒08‒19
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Relevant moment selection under mixed identification strength By Prosper Dovonon; Firmin Doko Tchatoka; Michael Aguessy
  2. Exploiting Information from Singletons in Panel Data Analysis: A GMM Approach By Bruno, Randolph Luca; Magazzini, Laura; Stampini, Marco
  3. Regression Discontinuity with Integer Running Variable and Non-Integer Cutoff: Dental Care Program Effect on Expenditure By Lee, M-j.; Park, S-s.; Shim, H-c.
  4. A Vine-copula extension for the HAR model By Martin Magris
  5. Old-fashioned parametric models are still the best. A comparison of Value-at-Risk approaches in several volatility states. By Mateusz Buczyński; Marcin Chlebus
  6. Long-Run Effects of Dynamically Assigned Treatments: A New Methodology and an Evaluation of Training Effects on Earnings By van den Berg, Gerard J.; Vikström, Johan
  7. A Quantile-based Asset Pricing Model By Ando, Tomohiro; Bai, Jushan; Nishimura, Mitohide; Yu, Jun
  8. Heterogeneous Endogenous Effects in Networks By Sida Peng
  9. Tail risk interdependence By Polanski, Arnold; Stoja, Evarist; Chiu, Ching-Wai (Jeremy)
  10. Practical Significance, Meta-Analysis and the Credibility of Economics By Stanley, T. D.; Doucouliagos, Chris
  11. Estimating Dynamic Discrete Choice Panel Models Using Irregularly Spaced Data By Chen, Maolong; Myers, Robert J.; Hu, Chaoran
  12. Detecting the Hot Hand: Tests of Randomness Against Streaky Alternatives in Bernoulli Sequences By David M. Ritzwoller; Joseph P. Romano
  13. Regression Discontinuity Designs with a Continuous Treatment By Yingying DONG; Ying-Ying LEE; Michael GOU
  14. A Residual-Based Cointegration test with a Fourier Approximation By Yilanci, Veli
  15. Assessing IMF Lending: a Model of Sample Selection By Nicolas Mäder; Jean-Guillaume Poulain; Julien Reynaud
  16. Agglomerative Fast Super-Paramagnetic Clustering By Lionel Yelibi; Tim Gebbie
  17. Machine Learning for Forecasting Excess Stock Returns – The Five-Year-View By Ioannis Kyriakou; Parastoo Mousavi; Jens Perch Nielsen; Michael Scholz

  1. By: Prosper Dovonon (Economics Department, Concordia University); Firmin Doko Tchatoka (School of Economics, University of Adelaide); Michael Aguessy (Economics Department, Concordia University)
    Abstract: This paper proposes a moment selection method for moment condition models with mixed identification strength, that is, models whose moment conditions include moment functions that are local to zero uniformly over the parameter set. We show that the relevant moment selection procedure of Hall et al. (2007) is inconsistent in this setting, as it does not explicitly account for the rate of convergence of parameter estimation of the candidate models, which may vary. We introduce a new moment selection procedure based on a criterion that sequentially evaluates the rate of convergence of the candidate model's parameter estimate and the entropy of the estimator's asymptotic distribution. The benchmark estimator that we consider is the two-step efficient generalized method of moments (GMM) estimator, which is known to be efficient in this framework as well. A family of penalization functions is introduced that guarantees the consistency of the selection procedure. The finite sample performance of the proposed method is assessed through Monte Carlo simulations.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:adl:wpaper:2019-04&r=all
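    To fix ideas, here is a minimal sketch of the two-step efficient GMM estimator that the abstract above takes as its benchmark, for a simple linear IV model. All data, names and numbers are illustrative assumptions, not taken from the paper.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    z = rng.normal(size=(n, 3))                             # instruments
    x = z @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)  # single regressor
    y = 2.0 * x + rng.normal(size=n)                        # true coefficient: 2
    X = x[:, None]

    def gmm_beta(W):
        """Closed-form linear GMM: minimizes the quadratic form in Z'(y - Xb)."""
        A = X.T @ z @ W @ z.T @ X
        b = X.T @ z @ W @ z.T @ y
        return np.linalg.solve(A, b)

    W1 = np.linalg.inv(z.T @ z / n)          # step 1: 2SLS weighting matrix
    b1 = gmm_beta(W1)
    u = y - X @ b1                           # first-step residuals
    S = (z * (u**2)[:, None]).T @ z / n      # moment variance estimate
    b2 = gmm_beta(np.linalg.inv(S))          # step 2: efficient weighting
    print(b1, b2)                            # both close to 2
    ```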
  2. By: Bruno, Randolph Luca (University College London); Magazzini, Laura (University of Verona); Stampini, Marco (Inter-American Development Bank)
    Abstract: We propose a novel procedure, built within a Generalized Method of Moments framework, which exploits unpaired observations (singletons) to increase the efficiency of longitudinal fixed effect estimates. The approach increases estimation efficiency while properly tackling the bias due to unobserved time-invariant characteristics. We assess its properties by means of Monte Carlo simulations, and apply it to a traditional Total Factor Productivity regression, showing efficiency gains of approximately 8-9 percent.
    Keywords: singletons, panel data, efficient estimation, unobserved heterogeneity, GMM
    JEL: C23 C33 C51
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12465&r=all
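    As background to the abstract above, a toy demonstration of why singletons are lost by the standard within (fixed effects) transformation: demeaning a group of size one produces an all-zero row. The data frame below is invented for the example; the paper's GMM moment conditions are what let such observations contribute.
    ```python
    import pandas as pd

    df = pd.DataFrame({
        "id": [1, 1, 2, 2, 3],           # unit 3 is observed only once (a singleton)
        "x":  [1.0, 2.0, 3.0, 4.0, 5.0],
        "y":  [1.1, 2.3, 2.9, 4.2, 5.5],
    })
    # Within transformation: deviations from unit-specific means
    within = df.groupby("id")[["x", "y"]].transform(lambda c: c - c.mean())
    print(within)    # the singleton's row is identically zero, so a standard
                     # fixed effects regression discards its information
    ```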
  3. By: Lee, M-j.; Park, S-s.; Shim, H-c.
    Abstract: In regression discontinuity (RD), the treatment is determined by whether a continuous running variable G crosses a known cutoff c. However, G is often observed only as a rounded-down integer S (e.g., birth year observed instead of birth date), and c is not an integer. In this case, the "cutoff sample" (the observations with the same S value around c) cannot be used, because it is not clear whether their G actually crossed c. This paper shows that if the distribution of the measurement error e ≡ G − S is specified, then despite the non-integer c, the cutoff sample can be used fruitfully both in estimating the treatment effect and in testing the distributional assumption on e. In particular, there are good reasons to believe that e is uniform on [0,1], not least because its construction mirrors a popular way in which pseudo-uniform random numbers are generated in simulation studies. Also, whereas two-step estimation has been proposed in the RD literature, we show that the treatment effect can be estimated with single-step OLS/IVE, as in typical RD with G observed. A simulation study and an empirical analysis of the effects of a dental care support program on dental expenditure are provided.
    Keywords: regression discontinuity; integer running variable; non-integer cutoff;
    JEL: C21 C24 I18
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:19/16&r=all
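    A small simulation sketch of the setup in the abstract above, under the assumed uniform rounding error: the latent running variable G is observed only as its integer part S, and within the cutoff sample the treated share pins down the cutoff's fractional part. All numbers are illustrative.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, c = 100_000, 1970.4                 # non-integer cutoff (illustrative)
    S = rng.integers(1960, 1981, size=n)   # observed integer (e.g. birth year)
    e = rng.uniform(size=n)                # assumed U[0,1) measurement error
    G = S + e                              # latent continuous running variable
    D = (G >= c).astype(float)             # treatment status

    # In the cutoff sample S == 1970, whether G crossed c depends only on e,
    # so the treated share identifies P(e >= 0.4) = 0.6 under uniformity.
    mask = S == 1970
    print(D[mask].mean())                  # approximately 0.6
    ```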
  4. By: Martin Magris
    Abstract: The heterogeneous autoregressive (HAR) model is revised by modeling the joint distribution of the four partial-volatility terms it involves, namely today's, yesterday's, last week's and last month's volatility components. The joint distribution relies on a (C-) Vine copula construction, allowing volatility forecasts to be conveniently extracted based on the conditional expectation of today's volatility given its past terms. The proposed empirical application involves more than seven years of high-frequency transaction prices for ten stocks and evaluates the in-sample, out-of-sample and one-step-ahead forecast performance of our model for daily realized-kernel measures. The model proposed in this paper is shown to outperform the HAR counterpart under different models for marginal distributions, copula construction methods, and forecasting settings.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1907.08522&r=all
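    For reference, a minimal sketch of the baseline HAR regression that the paper's vine-copula construction generalizes: today's realized volatility regressed on its daily, weekly (5-day) and monthly (22-day) components. The simulated series below is a stand-in for a realized-kernel measure.
    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    rv = np.abs(rng.normal(size=2000)) + 0.1     # stand-in realized volatility

    def back_mean(x, h):
        """Average of the h observations preceding each t (t = 22, ..., T-1)."""
        return np.array([x[t - h:t].mean() for t in range(22, len(x))])

    y = rv[22:]
    X = np.column_stack([np.ones_like(y),
                         back_mean(rv, 1),        # daily lag
                         back_mean(rv, 5),        # weekly component
                         back_mean(rv, 22)])      # monthly component
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)                                   # (const, beta_d, beta_w, beta_m)
    ```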
  5. By: Mateusz Buczyński (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
    Abstract: Numerous advances in the modelling techniques of Value-at-Risk (VaR) have provided financial institutions with a wide scope of market risk approaches. Yet it remains unknown which of the models should be used depending on the state of volatility. In this article we present backtesting results for 1% and 2.5% VaR of six indexes from emerging and developed countries, using several of the best-known VaR models, among them GARCH, EVT, CAViaR and FHS with multiple sets of parameters. The backtesting procedure is based on the excess ratio, the Kupiec test and the Christoffersen test for multiple thresholds and cost functions. The added value of this article is that we compare the models in four different scenarios, with different states of volatility in the training and testing samples. The results indicate that the model least affected by changes in volatility is GARCH(1,1) with standardized Student's t-distribution. Non-parametric techniques (e.g. CAViaR with a GARCH setup (see Engle and Manganelli, 2001) or FHS with a skewed normal distribution) perform very well in testing periods with low volatility, but relatively worse in turbulent periods. We also discuss an automatic method for setting the threshold of the extreme distribution in EVT models, as well as several ensembling methods for VaR, among which the minimum of the best models proves very effective - in particular the minimum of GARCH(1,1) with standardized Student's t-distribution and either EVT or CAViaR models.
    Keywords: Value-at-Risk, GARCH, Extreme Value Theory, Filtered Historical Simulation, CAViaR, market risk, forecast comparison
    JEL: G32 C52 C53 C58
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2019-12&r=all
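    As an illustration of the backtesting machinery mentioned above, a minimal sketch of the Kupiec proportion-of-failures test: given a series of VaR violation indicators, it checks whether the hit rate matches the nominal level. The violation series below is fabricated for the example.
    ```python
    import numpy as np
    from scipy.stats import chi2

    def kupiec_pof(hits, p):
        """Kupiec LR test of unconditional coverage; returns (statistic, p-value)."""
        T, x = len(hits), int(np.sum(hits))
        pi = x / T                                      # observed hit rate
        ll0 = (T - x) * np.log(1 - p) + x * np.log(p)   # log-likelihood under H0
        ll1 = (T - x) * np.log(1 - pi) + x * np.log(pi)
        lr = -2 * (ll0 - ll1)
        return lr, chi2.sf(lr, df=1)

    hits = np.zeros(500, dtype=bool)
    hits[::50] = True                 # 10 violations in 500 days (2% hit rate)
    print(kupiec_pof(hits, p=0.01))   # tests 2% observed against 1% nominal
    ```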
  6. By: van den Berg, Gerard J. (University of Bristol); Vikström, Johan (IFAU)
    Abstract: We propose and implement a new method to estimate treatment effects in settings where individuals need to be in a certain state (e.g. unemployment) to be eligible for a treatment, treatments may commence at different points in time, and the outcome of interest is realized after the individual has left the initial state. An example concerns the effect of training on earnings in subsequent employment. Any evaluation needs to take into account that some of those who are not trained at a certain time in unemployment will leave unemployment before training, while others will be trained later. We are interested in effects of the treatment at a certain elapsed duration compared to "no treatment at any subsequent duration". We prove identification under unconfoundedness and propose inverse probability weighting estimators. A key feature is that the weights given to outcome observations of the non-treated depend on the remaining time in the initial state. We study earnings effects of WIA training in the US and long-run effects of a training program for unemployed workers in Sweden.
    Keywords: treatment effects, dynamic treatment evaluation, program evaluation, duration analysis, matching, unemployment, employment
    JEL: C14 J3
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12470&r=all
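    For orientation, a schematic inverse probability weighting estimator for a static binary treatment under unconfoundedness; the paper's estimators extend this idea by letting the weights on non-treated outcomes depend on the remaining time in the initial state, which this toy omits.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    x = rng.normal(size=(n, 2))                               # confounders
    p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))          # true propensity
    d = rng.uniform(size=n) < p                               # treatment indicator
    y = 1.0 * d + x[:, 0] + rng.normal(size=n)                # true effect: 1

    ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]  # estimated propensity
    ate = np.mean(d * y / ps) - np.mean((1 - d) * y / (1 - ps))
    print(ate)                                                # close to 1
    ```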
  7. By: Ando, Tomohiro (Melbourne University); Bai, Jushan (Columbia University); Nishimura, Mitohide (Nikko Asset Management Co. Ltd); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: It is well-known that the standard estimators of the risk premium in asset pricing models are biased when some price factors are omitted. To address this problem, we propose a novel quantile-based asset pricing model and a new estimation method. Our new asset pricing model allows for the risk premium to be quantile-dependent and our estimation method is applicable to models with unobserved factors. It avoids biased estimation results and always ensures a positive risk premium. The method is applied to the U.S., Japan, and U.K. stock markets. The empirical analysis demonstrates the clear benefits of our approach.
    Keywords: Five-factor model; Quantile-based asset pricing model; Risk premium
    JEL: G12 G15
    Date: 2019–07–13
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2019_015&r=all
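    To illustrate the quantile dimension of the model above, a basic quantile regression of returns on a single observed factor, where the loading varies across quantiles. The paper's model additionally handles unobserved factors, which this sketch does not attempt; all data are simulated.
    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 2000
    factor = rng.normal(size=n)                                 # observed factor
    ret = 0.5 * factor + (1 + 0.5 * np.abs(factor)) * rng.normal(size=n)

    X = sm.add_constant(factor)
    for tau in (0.1, 0.5, 0.9):
        fit = sm.QuantReg(ret, X).fit(q=tau)
        print(tau, fit.params)        # the factor loading differs across quantiles
    ```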
  8. By: Sida Peng
    Abstract: This paper proposes a new method to identify leaders and followers in a network. Prior work uses spatial autoregression models (SARs), which implicitly assume that each individual in the network has the same peer effects on others. Mechanically, they conclude that the key player in the network is the one with the highest centrality. However, when some individuals are more influential than others, centrality may fail to be a good measure. I develop a model that allows for individual-specific endogenous effects and propose a two-stage LASSO procedure to identify influential individuals in a network. Under a sparsity assumption, namely that only a subset of individuals (which can grow with the sample size n) is influential, I show that my 2SLSS estimator for individual-specific endogenous effects is consistent and asymptotically normal. I also develop robust inference, including uniformly valid confidence intervals. These results carry through to scenarios where the influential individuals are not sparse. I extend the analysis to allow for multiple types of connections (multiple networks) and show how to use the sparse group LASSO to detect which of the connection types is most influential. Simulation evidence shows that my estimator has good finite sample performance. I further apply my method to the data in Banerjee et al. (2013), where the proposed procedure identifies leaders and effective networks.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.00663&r=all
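    A toy illustration of the sparsity idea only: LASSO recovering which few individuals carry nonzero influence in a linear network model with observed signals. This is not the paper's two-stage 2SLSS procedure, which instruments the endogenous peer outcomes; every name and number here is an assumption for the example.
    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(14)
    n, reps = 50, 400
    A = (rng.uniform(size=(n, n)) < 0.1).astype(float)   # random adjacency matrix
    np.fill_diagonal(A, 0.0)
    beta = np.zeros(n)
    beta[:3] = 0.8                                       # 3 influential individuals

    S = rng.normal(size=(reps, n))                       # observed individual signals
    X = np.vstack([A * s for s in S])                    # X[i, j] = A[i, j] * s[j]
    y = X @ beta + 0.5 * rng.normal(size=reps * n)       # outcomes

    fit = Lasso(alpha=0.02).fit(X, y)
    print(np.flatnonzero(fit.coef_ > 0.1))               # recovers indices 0, 1, 2
    ```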
  9. By: Polanski, Arnold (University of East Anglia); Stoja, Evarist (University of Bristol); Chiu, Ching-Wai (Jeremy) (Bank of England)
    Abstract: We present a framework focused on the interdependence of high-dimensional tail events. This framework allows us to analyse and quantify tail interdependence at different levels of extremity, to decompose it into systemic and residual parts, and to measure the contribution of a constituent to the interdependence of a system. In particular, tail interdependence can capture simultaneous distress of the constituents of a (financial or economic) system and measure its systemic risk. We investigate systemic distress in several financial datasets, confirming some known stylized facts and uncovering some new findings. Further, we devise statistical tests of interdependence in the tails and outline some additional extensions.
    Keywords: Co-exceedance; systemic distress; risk contribution; extreme risk interdependence
    JEL: C32 G01
    Date: 2019–08–05
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0815&r=all
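    A small sketch of co-exceedance counting, the basic building block behind tail interdependence measures of this kind: how often do all series in a system fall below their own tail quantiles simultaneously, relative to the independence benchmark? The data are simulated with a common factor.
    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    T, k, q = 20_000, 4, 0.05
    common = rng.normal(size=(T, 1))                     # shared shock
    X = 0.6 * common + 0.8 * rng.normal(size=(T, k))     # correlated "returns"

    thresholds = np.quantile(X, q, axis=0)               # per-series 5% tail cutoffs
    exceed = X < thresholds                              # tail-event indicators
    joint = exceed.all(axis=1).mean()                    # co-exceedance frequency
    print(joint, q ** k)                                 # well above the 0.05^4
                                                         # independence benchmark
    ```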
  10. By: Stanley, T. D. (Deakin University); Doucouliagos, Chris (Deakin University)
    Abstract: Recently, there has been much discussion about replicability and credibility. By integrating the full research record, increasing statistical power, reducing bias and enhancing credibility, meta-analysis is widely regarded as 'best evidence'. Through Monte Carlo simulation, closely calibrated on the typical conditions found among 6,700 economics research papers, we find that conventional meta-analysis methods often produce large biases and high rates of false positives. Nonetheless, the routine application of meta-regression analysis and considerations of practical significance largely restore research credibility.
    Keywords: meta-analysis, meta-regression, publication bias, credibility, simulations
    JEL: C10 C12 C13 C40
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12458&r=all
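    For context, a sketch of the funnel-asymmetry/precision-effect (FAT-PET) meta-regression that routine meta-regression analysis applies: reported effects regressed on their standard errors by WLS, where the slope proxies publication selection and the intercept is the selection-corrected effect. The simulated "studies" below are illustrative, not the paper's 6,700 papers.
    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    m = 200
    se = rng.uniform(0.05, 0.5, size=m)                  # study standard errors
    eff = 0.2 + 0.8 * se + se * rng.normal(size=m)       # selection inflates the
                                                         # imprecise studies

    X = sm.add_constant(se)                              # FAT-PET: eff = b0 + b1*SE
    fit = sm.WLS(eff, X, weights=1 / se**2).fit()
    print(fit.params)      # b0 near the true effect 0.2, b1 the bias term
    ```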
  11. By: Chen, Maolong; Myers, Robert J.; Hu, Chaoran
    Keywords: Research Methods/ Statistical Methods
    Date: 2019–06–25
    URL: http://d.repec.org/n?u=RePEc:ags:aaea19:291216&r=all
  12. By: David M. Ritzwoller; Joseph P. Romano
    Abstract: We consider the problem of testing for randomness against streaky alternatives in Bernoulli sequences. In particular, we study tests of randomness (i.e., that trials are i.i.d.) which choose as test statistics (i) the difference between the proportions of successes that directly follow k consecutive successes and k consecutive failures or (ii) the difference between the proportion of successes following k consecutive successes and the proportion of successes. The asymptotic distributions of these test statistics and their permutation distributions are derived under randomness and under general models of streakiness, which allows us to evaluate their local asymptotic power. The results are applied to revisit tests of the "hot hand fallacy" implemented on data from a basketball shooting experiment, whose conclusions are disputed by Gilovich, Vallone, and Tversky (1985) and Miller and Sanjurjo (2018a). While multiple testing procedures reveal that one shooter can be inferred to exhibit shooting significantly inconsistent with randomness, supporting the existence of positive dependence in basketball shooting, we find that participants in a survey of basketball players over-estimate an average player's streakiness, corroborating the empirical support for the hot hand fallacy.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.01406&r=all
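    To make the test statistics concrete, a sketch of statistic (i) from the abstract for k = 1, together with its permutation distribution under randomness: the proportion of successes directly following a success minus the proportion following a failure, recomputed over shuffles of the sequence.
    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def streak_stat(x, k=1):
        """P(success | k prior successes) - P(success | k prior failures)."""
        after_s = [x[t] for t in range(k, len(x)) if all(x[t - k:t])]
        after_f = [x[t] for t in range(k, len(x)) if not any(x[t - k:t])]
        return np.mean(after_s) - np.mean(after_f)

    x = rng.uniform(size=100) < 0.5                # i.i.d. Bernoulli trials
    obs = streak_stat(x)
    perm = [streak_stat(rng.permutation(x)) for _ in range(999)]
    pval = np.mean([abs(p) >= abs(obs) for p in perm])
    print(obs, pval)                               # no streakiness to detect here
    ```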
  13. By: Yingying DONG; Ying-Ying LEE; Michael GOU
    Abstract: Many empirical applications of regression discontinuity (RD) designs involve a continuous treatment. This paper establishes identification and bias-corrected robust inference for such RD designs. Causal identification is achieved by utilizing changes in the distribution of the continuous treatment at the RD threshold (including the usual mean change as a special case). Applying the proposed approach, we estimate the impacts of capital holdings on bank failure in the pre-Great Depression era. Our RD design takes advantage of the minimum capital requirements, which change discontinuously with town size. We find that increased capital has no impact on the long-run failure rates of banks.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:19058&r=all
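    For reference, the "usual mean change" special case mentioned above, estimated by a schematic local linear regression on each side of the threshold. Simulated data and a fixed, hand-picked bandwidth; the paper's bias-corrected robust inference is not reproduced here.
    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 5000
    r = rng.uniform(-1, 1, size=n)               # running variable, cutoff at 0
    y = 0.5 * (r >= 0) + r + rng.normal(scale=0.3, size=n)   # true jump: 0.5

    h = 0.2                                      # fixed bandwidth (illustrative)
    w = np.abs(r) <= h
    X = np.column_stack([np.ones(w.sum()),       # local linear with separate
                         (r >= 0)[w],            # intercepts and slopes on
                         r[w],                   # each side of the cutoff
                         (r * (r >= 0))[w]])
    fit = sm.OLS(y[w], X).fit()
    print(fit.params[1])                         # estimated jump, approx. 0.5
    ```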
  14. By: Yilanci, Veli
    Abstract: This paper proposes a residual-based cointegration test in the presence of smooth structural changes approximated by a Fourier function. The test offers a simple way to accommodate an unknown number and form of structural breaks, and has good size and power properties in the presence of breaks.
    Keywords: cointegration test; Fourier function; structural breaks
    JEL: C12
    Date: 2019–07–31
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95395&r=all
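    A sketch of the residual-based idea with a single-frequency Fourier approximation of smooth breaks: sine and cosine terms enter the cointegrating regression, and the residuals are then tested for a unit root. Note that the standard ADF critical values implied below are not the appropriate ones; the paper tabulates the correct distribution.
    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(10)
    T, k = 300, 1                                 # sample size, Fourier frequency
    t = np.arange(1, T + 1)
    x = np.cumsum(rng.normal(size=T))             # I(1) regressor
    y = 1.5 * x + 2 * np.sin(2 * np.pi * k * t / T) + rng.normal(size=T)

    F = np.column_stack([np.ones(T),                    # cointegrating regression
                         np.sin(2 * np.pi * k * t / T), # with smooth-break terms
                         np.cos(2 * np.pi * k * t / T),
                         x])
    resid = sm.OLS(y, F).fit().resid
    print(adfuller(resid, regression="n")[0])     # unit-root statistic on residuals
    ```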
  15. By: Nicolas Mäder; Jean-Guillaume Poulain; Julien Reynaud
    Abstract: Extending previous work on the determinants of IMF lending in an interconnected world, we introduce a model of sample selection in which both the selection and size dimensions of individual IMF arrangements are modeled within a unified econometric framework. We allow for unobserved heterogeneity to create an additional channel for sample selection at the country level. The results suggest that higher external financing needs, larger exchange rate depreciation, lower GDP growth, and deteriorated global financial conditions are associated with larger individual IMF arrangements. Using the estimated parameters, Monte Carlo simulation of a wide spectrum of global shock scenarios suggests that the distribution of potential aggregate IMF lending exhibits a substantial right tail. Our approach may provide insightful input to broader policy discussions on the adequacy of IMF resources.
    Date: 2019–07–19
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:19/157&r=all
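    As a simplified reference point, a two-step sample-selection (Heckman-type) sketch: a probit for whether an arrangement occurs, then a size equation augmented with the inverse Mills ratio. All data are simulated; the paper's unified framework additionally models country-level unobserved heterogeneity.
    ```python
    import numpy as np
    from scipy.stats import norm
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 5000
    z = rng.normal(size=n)                          # selection covariate
    x = rng.normal(size=n)                          # size covariate
    u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
    select = (0.5 * z + u) > 0                      # an arrangement occurs
    size = 1.0 + 2.0 * x + e                        # its (latent) size

    # Step 1: probit for selection, then the inverse Mills ratio
    Z = sm.add_constant(z)
    probit = sm.Probit(select.astype(float), Z).fit(disp=0)
    mills = norm.pdf(Z @ probit.params) / norm.cdf(Z @ probit.params)

    # Step 2: size equation on the selected sample, correcting for selection
    X2 = sm.add_constant(np.column_stack([x[select], mills[select]]))
    print(sm.OLS(size[select], X2).fit().params)    # slope on x close to 2
    ```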
  16. By: Lionel Yelibi; Tim Gebbie
    Abstract: We consider the problem of fast time-series data clustering. Building on previous work modeling the correlation-based Hamiltonian of spin variables, we present a fast, computationally inexpensive agglomerative algorithm. The method is tested on synthetic correlated time series and on noisy synthetic datasets with built-in cluster structure to demonstrate that the algorithm produces meaningful non-trivial results. We argue that ASPC can reduce compute time and resource usage costs for large-scale clustering while running serially, and hence has no obvious parallelization requirement. The algorithm can be an effective choice for state detection in online learning in a fast non-linear data environment, because it requires no prior information about the number of clusters.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.00951&r=all
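    For orientation, a generic agglomerative clustering run on a correlation-based distance between synthetic time series, the kind of task the paper's method targets. This uses standard average linkage from scipy, not the paper's super-paramagnetic algorithm.
    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(12)
    T, n_per = 500, 10
    f1, f2 = rng.normal(size=(2, T))                  # two latent "sectors"
    series = np.vstack([0.8 * f + 0.6 * rng.normal(size=(n_per, T))
                        for f in (f1, f2)])           # 20 correlated series

    corr = np.corrcoef(series)
    dist = np.sqrt(np.clip(2 * (1 - corr), 0, None))  # correlation distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    print(fcluster(Z, t=2, criterion="maxclust"))     # recovers the two groups
    ```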
  17. By: Ioannis Kyriakou (Cass Business School, City, University of London, UK); Parastoo Mousavi (Cass Business School, City, University of London, UK); Jens Perch Nielsen (Cass Business School, City, University of London, UK); Michael Scholz (University of Graz, Austria)
    Abstract: In this paper, we apply machine learning to forecast stock returns in excess of different benchmarks, including the short-term interest rate, long-term interest rate, earnings-by-price ratio, and inflation. In particular, we adopt and implement a fully nonparametric smoother with the covariates and the smoothing parameter chosen by cross-validation. We find that for both one-year and five-year returns, the term spread is, overall, the most powerful predictor of excess stock returns. Different combinations of covariates then achieve higher predictability at different forecast horizons. Nevertheless, the set of earnings-by-price and term spread predictors under the inflation benchmark strikes the right balance between the one-year and five-year horizons.
    Keywords: Benchmark; Cross-validation; Prediction; Stock returns; Long-term forecasts; Overlapping returns; Autocorrelation
    JEL: C14 C53 C58 G17 G22
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:grz:wpaper:2019-06&r=all
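    To make the core device concrete, a minimal sketch of a fully nonparametric (Nadaraya-Watson) smoother with its bandwidth chosen by leave-one-out cross-validation, on simulated data; the paper applies this kind of validated smoother to excess stock returns over the various benchmarks.
    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n = 400
    x = rng.uniform(-2, 2, size=n)
    y = np.sin(2 * x) + rng.normal(scale=0.3, size=n)

    def nw(x0, xs, ys, h):
        """Nadaraya-Watson estimate at points x0, Gaussian kernel, bandwidth h."""
        w = np.exp(-0.5 * ((x0[:, None] - xs[None, :]) / h) ** 2)
        return (w * ys).sum(axis=1) / w.sum(axis=1)

    def loo_cv(h):
        """Leave-one-out cross-validation criterion for bandwidth h."""
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)                      # drop own observation
        yhat = (w * y).sum(axis=1) / w.sum(axis=1)
        return np.mean((y - yhat) ** 2)

    grid = np.linspace(0.05, 1.0, 40)
    h_star = grid[np.argmin([loo_cv(h) for h in grid])]
    print(h_star, nw(np.array([0.0]), x, y, h_star))  # fit near sin(0) = 0
    ```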

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.