Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2019-08-19
Relevant moment selection under mixed identification strength
http://d.repec.org/n?u=RePEc:adl:wpaper:2019-04&r=ecm
This paper proposes a moment selection method for moment condition models with mixed identification strength, that is, models whose moment conditions include moment functions that are local to zero uniformly over the parameter set. We show that the relevant moment selection procedure of Hall et al. (2007) is inconsistent in this setting because it does not explicitly account for the rate of convergence of parameter estimation, which may vary across candidate models. We introduce a new moment selection procedure based on a criterion that sequentially evaluates the rate of convergence of the candidate model's parameter estimate and the entropy of the estimator's asymptotic distribution. The benchmark estimator that we consider is the two-step efficient generalized method of moments (GMM) estimator, which remains efficient in this framework. A family of penalization functions is introduced that guarantees the consistency of the selection procedure. The finite-sample performance of the proposed method is assessed through Monte Carlo simulations.
Prosper Dovonon
Firmin Doko Tchatoka
Michael Aguessy
2019-04
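As a rough illustration of the benchmark estimator the abstract mentions, here is a minimal two-step efficient GMM sketch for a linear instrumental-variables model on simulated data. All variable names and the data-generating process are illustrative, not the authors' code:

```python
import numpy as np

def two_step_gmm(y, X, Z):
    """Two-step efficient GMM for y = X b + u with instruments Z,
    using the moment conditions E[Z'u] = 0. Illustrative sketch."""
    n = len(y)
    G = Z.T @ X / n
    g = Z.T @ y / n
    # Step 1: preliminary estimate with weight matrix (Z'Z/n)^{-1}
    W1 = np.linalg.inv(Z.T @ Z / n)
    b1 = np.linalg.solve(G.T @ W1 @ G, G.T @ W1 @ g)
    # Step 2: re-weight by the inverse of the estimated moment covariance
    u = y - X @ b1
    m = Z * u[:, None]
    W2 = np.linalg.inv(m.T @ m / n)
    return np.linalg.solve(G.T @ W2 @ G, G.T @ W2 @ g)

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 3))                    # three instruments
x = z @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)               # true coefficient = 2
b = two_step_gmm(y, x[:, None], z)
```

The second step is what makes the estimator efficient: it replaces the ad-hoc first-step weight with the inverse covariance of the sample moments.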
Exploiting Information from Singletons in Panel Data Analysis: A GMM Approach
http://d.repec.org/n?u=RePEc:iza:izadps:dp12465&r=ecm
We propose a novel procedure, built within a Generalized Method of Moments framework, which exploits unpaired observations (singletons) to increase the efficiency of longitudinal fixed-effects estimates. The approach increases estimation efficiency while properly tackling the bias due to unobserved time-invariant characteristics. We assess its properties by means of Monte Carlo simulations, and apply it to a traditional Total Factor Productivity regression, showing efficiency gains of approximately 8-9 percent.
Bruno, Randolph Luca
Magazzini, Laura
Stampini, Marco
singletons, panel data, efficient estimation, unobserved heterogeneity, GMM
2019-07
Regression Discontinuity with Integer Running Variable and Non-Integer Cutoff: Dental Care Program Effect on Expenditure
http://d.repec.org/n?u=RePEc:yor:hectdg:19/16&r=ecm
In regression discontinuity (RD), the treatment is determined by whether a continuous running variable G crosses a known cutoff c. Often, however, G is observed only as a rounded-down integer S (e.g., birth year observed instead of birth date), while c is not an integer. In this case, the "cutoff sample" (the observations sharing the same value of S around c) cannot be used, because it is unclear whether their G actually crossed c. This paper shows that if the distribution of the measurement error e ≡ G − S is specified, then despite the non-integer c, the cutoff sample can be used fruitfully in estimating the treatment effect and in testing the distributional assumption on e. In particular, there are good reasons to believe that e is uniform on [0,1], not least because e closely resembles the way pseudo-uniform random numbers are generated in simulation studies. Also, whereas two-step estimation has been proposed in the RD literature, we show that the treatment effect can be estimated with single-step OLS/IVE, as in typical RD with G observed. A simulation study and an empirical analysis of the effects of a dental care support program on dental expenditure are provided.
Lee, M-j.;
Park, S-s.;
Shim, H-c.;
regression discontinuity; integer running variable; non-integer cutoff;
2019-07
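A quick simulation illustrating the role of the uniform rounding-error assumption in the abstract above: with G = S + e and e uniform on [0,1), the share of the cutoff sample S = floor(c) that truly crossed a non-integer cutoff c is 1 − (c − floor(c)). All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
S = rng.integers(1950, 1960, size=n)   # observed rounded-down integer (e.g. birth year)
e = rng.random(n)                       # rounding error, uniform on [0, 1)
G = S + e                               # latent continuous running variable
c = 1955.3                              # non-integer cutoff

# Cutoff sample: observations whose integer value straddles c
cutoff_sample = S == np.floor(c)
share_crossed = np.mean(G[cutoff_sample] >= c)
implied = 1 - (c - np.floor(c))         # 0.7 under the uniform assumption
```

Under the uniform assumption, `share_crossed` converges to `implied`, which is what lets the cutoff sample be used despite the rounding.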
A Vine-copula extension for the HAR model
http://d.repec.org/n?u=RePEc:arx:papers:1907.08522&r=ecm
The heterogeneous autoregressive (HAR) model is revised by modeling the joint distribution of the four partial-volatility terms it involves, namely today's, yesterday's, last week's and last month's volatility components. The joint distribution relies on a (C-)Vine copula construction, which allows volatility forecasts to be conveniently extracted as the conditional expectation of today's volatility given its past terms. The empirical application involves more than seven years of high-frequency transaction prices for ten stocks and evaluates the in-sample, out-of-sample and one-step-ahead forecast performance of our model for daily realized-kernel measures. The model proposed in this paper is shown to outperform the HAR counterpart under different models for the marginal distributions, copula construction methods, and forecasting settings.
Martin Magris
2019-07
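For reference, the baseline HAR regression that the vine-copula model extends projects next-day realized volatility on its daily value and its weekly (5-day) and monthly (22-day) trailing averages. A self-contained sketch on simulated data (not the paper's code or data):

```python
import numpy as np

def har_design(rv, weekly=5, monthly=22):
    """Build the HAR regressors (constant, daily RV, 5-day mean,
    22-day mean), aligned so y is next-day RV. Illustrative sketch."""
    T = len(rv)
    rows, ys = [], []
    for t in range(monthly - 1, T - 1):
        rows.append([1.0,
                     rv[t],                                 # daily
                     rv[t - weekly + 1: t + 1].mean(),      # weekly
                     rv[t - monthly + 1: t + 1].mean()])    # monthly
        ys.append(rv[t + 1])
    return np.array(rows), np.array(ys)

rng = np.random.default_rng(1)
# Simulated persistent volatility series: AR(1) in logs
logv = np.zeros(3000)
for t in range(1, 3000):
    logv[t] = 0.95 * logv[t - 1] + 0.2 * rng.normal()
rv = np.exp(logv)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
```

The vine-copula extension replaces this linear conditional mean with the conditional expectation implied by a joint distribution of the four components.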
Old-fashioned parametric models are still the best. A comparison of Value-at-Risk approaches in several volatility states.
http://d.repec.org/n?u=RePEc:war:wpaper:2019-12&r=ecm
Numerous advances in the modelling techniques of Value-at-Risk (VaR) have provided financial institutions with a wide range of market risk approaches, yet it remains unknown which of the models should be used depending on the state of volatility. In this article we present backtesting results for 1% and 2.5% VaR of six indexes from emerging and developed countries, using several of the best-known VaR models, among them GARCH, EVT, CAViaR and FHS with multiple sets of parameters. The backtesting procedure is based on the excess ratio and on the Kupiec and Christoffersen tests for multiple thresholds and cost functions. The added value of this article is that we compare the models in four different scenarios, with different states of volatility in the training and testing samples. The results indicate that the model least affected by changes in volatility is GARCH(1,1) with the standardized Student's t-distribution. Non-parametric techniques (e.g. CAViaR with a GARCH setup (see Engle and Manganelli, 2001) or FHS with a skewed normal distribution) perform very well in testing periods with low volatility, but are relatively worse in turbulent periods. We also discuss an automatic method for setting the threshold of the extreme distribution for EVT models, as well as several ensembling methods for VaR, among which the minimum of the best models proves very effective, in particular the minimum of GARCH(1,1) with the standardized Student's t-distribution and either EVT or CAViaR models.
Mateusz Buczyński
Marcin Chlebus
Value-at-Risk, GARCH, Extreme Value Theory, Filtered Historical Simulation, CAViaR, market risk, forecast comparison
2019
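The Kupiec test used in the backtesting above is the unconditional-coverage (proportion-of-failures) likelihood ratio. A minimal sketch; the day counts below are illustrative:

```python
from math import log

def kupiec_pof(x, T, p):
    """Kupiec proportion-of-failures LR statistic: x VaR violations
    out of T days, nominal coverage p. Asymptotically chi-squared
    with 1 degree of freedom under correct coverage."""
    pi = x / T
    ll0 = (T - x) * log(1 - p) + (x * log(p) if x > 0 else 0.0)
    ll1 = ((T - x) * log(1 - pi) if x < T else 0.0) + \
          (x * log(pi) if x > 0 else 0.0)
    return -2.0 * (ll0 - ll1)

CHI2_1_95 = 3.841  # 5% critical value of chi-squared with 1 df

# 250 trading days at 1% VaR: 3 violations is close to the expected 2.5,
# while 12 violations clearly rejects correct coverage.
lr_ok = kupiec_pof(3, 250, 0.01)
lr_bad = kupiec_pof(12, 250, 0.01)
```

The Christoffersen test adds an independence component on top of this coverage check, penalizing clustered violations.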
Long-Run Effects of Dynamically Assigned Treatments: A New Methodology and an Evaluation of Training Effects on Earnings
http://d.repec.org/n?u=RePEc:iza:izadps:dp12470&r=ecm
We propose and implement a new method to estimate treatment effects in settings where individuals need to be in a certain state (e.g. unemployment) to be eligible for a treatment, treatments may commence at different points in time, and the outcome of interest is realized after the individual has left the initial state. An example is the effect of training on earnings in subsequent employment. Any evaluation needs to take into account that some of those who are not trained at a certain time in unemployment will leave unemployment before being trained, while others will be trained later. We are interested in effects of the treatment at a certain elapsed duration compared to "no treatment at any subsequent duration". We prove identification under unconfoundedness and propose inverse probability weighting estimators. A key feature is that the weights given to outcome observations of the non-treated depend on the remaining time in the initial state. We study earnings effects of WIA training in the US and long-run effects of a training program for unemployed workers in Sweden.
van den Berg, Gerard J.
Vikström, Johan
treatment effects, dynamic treatment evaluation, program evaluation, duration analysis, matching, unemployment, employment
2019-07
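For intuition only, here is the standard static inverse-probability-weighting comparison that the paper generalizes. The paper's actual estimator additionally makes the weights on non-treated outcomes depend on the remaining time in the initial state, which this toy example does not model; data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(size=n)
pscore = 1 / (1 + np.exp(-x))              # true propensity P(D=1 | x)
d = rng.random(n) < pscore                  # treatment indicator
y = 1.0 * d + 0.5 * x + rng.normal(size=n)  # true treatment effect = 1

# IPW estimate of the average treatment effect under unconfoundedness:
# weight treated outcomes by 1/p(x) and untreated ones by 1/(1-p(x)).
ate_ipw = np.mean(d * y / pscore) - np.mean((~d) * y / (1 - pscore))
```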
A Quantile-based Asset Pricing Model
http://d.repec.org/n?u=RePEc:ris:smuesw:2019_015&r=ecm
It is well-known that the standard estimators of the risk premium in asset pricing models are biased when some price factors are omitted. To address this problem, we propose a novel quantile-based asset pricing model and a new estimation method. Our new asset pricing model allows for the risk premium to be quantile-dependent and our estimation method is applicable to models with unobserved factors. It avoids biased estimation results and always ensures a positive risk premium. The method is applied to the U.S., Japan, and U.K. stock markets. The empirical analysis demonstrates the clear benefits of our approach.
Ando, Tomohiro
Bai, Jushan
Nishimura, Mitohide
Yu, Jun
Five-factor model; Quantile-based asset pricing model; Risk premium
2019-07-13
Heterogeneous Endogenous Effects in Networks
http://d.repec.org/n?u=RePEc:arx:papers:1908.00663&r=ecm
This paper proposes a new method to identify leaders and followers in a network. Prior work uses spatial autoregression models (SARs), which implicitly assume that each individual in the network exerts the same peer effect on others. Mechanically, they conclude that the key player in the network is the one with the highest centrality. However, when some individuals are more influential than others, centrality may fail to be a good measure. I develop a model that allows for individual-specific endogenous effects and propose a two-stage LASSO procedure to identify influential individuals in a network. Under a sparsity assumption, namely that only a subset of individuals (whose number may grow with the sample size n) is influential, I show that my 2SLSS estimator for individual-specific endogenous effects is consistent and asymptotically normal. I also develop robust inference, including uniformly valid confidence intervals. These results carry through to scenarios where the influential individuals are not sparse. I extend the analysis to allow for multiple types of connections (multiple networks) and show how to use the sparse group LASSO to detect which of the connection types is most influential. Simulation evidence shows that my estimator has good finite-sample performance. I further apply my method to the data in Banerjee et al. (2013), where the proposed procedure is able to identify leaders and effective networks.
Sida Peng
2019-08
Tail risk interdependence
http://d.repec.org/n?u=RePEc:boe:boeewp:0815&r=ecm
We present a framework focused on the interdependence of high-dimensional tail events. This framework allows us to analyse and quantify tail interdependence at different levels of extremity, decompose it into systemic and residual parts, and measure the contribution of a constituent to the interdependence of a system. In particular, tail interdependence can capture simultaneous distress of the constituents of a (financial or economic) system and measure its systemic risk. We investigate systemic distress in several financial datasets, confirming some known stylized facts and discovering new ones. Further, we devise statistical tests of interdependence in the tails and outline some additional extensions.
Polanski, Arnold
Stoja, Evarist
Chiu, Ching-Wai (Jeremy)
Co-exceedance; systemic distress; risk contribution; extreme risk interdependence
2019-08-05
Practical Significance, Meta-Analysis and the Credibility of Economics
http://d.repec.org/n?u=RePEc:iza:izadps:dp12458&r=ecm
Recently, there has been much discussion about replicability and credibility. By integrating the full research record, increasing statistical power, reducing bias and enhancing credibility, meta-analysis is widely regarded as 'best evidence'. Through Monte Carlo simulation, closely calibrated on the typical conditions found among 6,700 economics research papers, we find that large biases and high rates of false positives will often be found by conventional meta-analysis methods. Nonetheless, the routine application of meta-regression analysis and considerations of practical significance largely restore research credibility.
Stanley, T. D.
Doucouliagos, Chris
meta-analysis, meta-regression, publication bias, credibility, simulations
2019-07
Estimating Dynamic Discrete Choice Panel Models Using Irregularly Spaced Data
http://d.repec.org/n?u=RePEc:ags:aaea19:291216&r=ecm
Chen, Maolong
Myers, Robert J.
Hu, Chaoran
Research Methods / Statistical Methods
2019-06-25
Detecting the Hot Hand: Tests of Randomness Against Streaky Alternatives in Bernoulli Sequences
http://d.repec.org/n?u=RePEc:arx:papers:1908.01406&r=ecm
We consider the problem of testing for randomness against streaky alternatives in Bernoulli sequences. In particular, we study tests of randomness (i.e., that trials are i.i.d.) which choose as test statistics (i) the difference between the proportions of successes that directly follow k consecutive successes and k consecutive failures or (ii) the difference between the proportion of successes following k consecutive successes and the proportion of successes. The asymptotic distributions of these test statistics and their permutation distributions are derived under randomness and under general models of streakiness, allowing us to evaluate their local asymptotic power. The results are applied to revisit tests of the "hot hand fallacy" implemented on data from a basketball shooting experiment, the conclusions of which are disputed between Gilovich, Vallone, and Tversky (1985) and Miller and Sanjurjo (2018a). While multiple testing procedures reveal that one shooter can be inferred to exhibit shooting significantly inconsistent with randomness, supporting the existence of positive dependence in basketball shooting, we find that participants in a survey of basketball players over-estimate an average player's streakiness, corroborating the empirical support for the hot hand fallacy.
David M. Ritzwoller
Joseph P. Romano
2019-08
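A minimal sketch of test statistic (ii) from the abstract and its permutation distribution; the sequence, run length k, and permutation count below are illustrative:

```python
import numpy as np

def streak_stat(seq, k):
    """Statistic (ii): proportion of successes directly following k
    consecutive successes, minus the overall success proportion.
    Returns None if no run of k successes is followed by a trial."""
    seq = np.asarray(seq)
    idx = [t for t in range(k, len(seq)) if seq[t - k:t].all()]
    if not idx:
        return None
    return seq[idx].mean() - seq.mean()

def permutation_pvalue(seq, k, n_perm=2000, seed=0):
    """One-sided permutation p-value under randomness: shuffling is
    exchangeable under the i.i.d. null, so the statistic's permutation
    distribution serves as the reference."""
    rng = np.random.default_rng(seed)
    obs = streak_stat(seq, k)
    draws = [streak_stat(rng.permutation(seq), k) for _ in range(n_perm)]
    draws = np.array([s for s in draws if s is not None])
    return np.mean(draws >= obs)

# A strongly streaky sequence should yield a small p-value
streaky = np.array([1] * 10 + [0] * 10 + [1] * 10 + [0] * 10)
p = permutation_pvalue(streaky, k=3)
```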
Regression Discontinuity Designs with a Continuous Treatment
http://d.repec.org/n?u=RePEc:eti:dpaper:19058&r=ecm
Many empirical applications of regression discontinuity (RD) designs involve a continuous treatment. This paper establishes identification and bias-corrected robust inference for such RD designs. Causal identification is achieved by utilizing changes in the distribution of the continuous treatment at the RD threshold (including the usual mean change as a special case). Applying the proposed approach, we estimate the impacts of capital holdings on bank failure in the pre-Great Depression era. Our RD design takes advantage of the minimum capital requirements, which change discontinuously with town size. We find that increased capital has no impact on the long-run failure rates of banks.
Yingying DONG
Ying-Ying LEE
Michael GOU
2019-08
A Residual-Based Cointegration test with a Fourier Approximation
http://d.repec.org/n?u=RePEc:pra:mprapa:95395&r=ecm
This paper proposes a residual-based cointegration test in the presence of smooth structural changes approximated by a Fourier function. The test offers a simple way to accommodate an unknown number and form of structural breaks and has good size and power properties in the presence of breaks.
Yilanci, Veli
cointegration test; Fourier function; structural breaks.
2019-07-31
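A sketch of the test's main ingredient: augmenting the cointegrating regression with single-frequency Fourier terms to absorb smooth structural change, then examining the residuals. The critical values for the residual unit-root test are nonstandard and must come from the paper, so this sketch stops at the residuals; the data below are simulated for illustration:

```python
import numpy as np

def fourier_coint_residuals(y, x, k=1):
    """Residuals from a cointegrating regression augmented with a
    single-frequency Fourier component capturing smooth breaks:
    y_t = a + b x_t + c sin(2*pi*k*t/T) + d cos(2*pi*k*t/T) + u_t."""
    T = len(y)
    t = np.arange(1, T + 1)
    X = np.column_stack([
        np.ones(T), x,
        np.sin(2 * np.pi * k * t / T),
        np.cos(2 * np.pi * k * t / T),
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(3)
T = 500
x = np.cumsum(rng.normal(size=T))                # I(1) regressor
t = np.arange(1, T + 1)
smooth_break = 2.0 * np.sin(2 * np.pi * t / T)   # gradual structural shift
y = 1.5 * x + smooth_break + rng.normal(size=T)  # cointegrated with a break

u = fourier_coint_residuals(y, x, k=1)
```

Because the Fourier terms soak up the smooth shift, the residuals here are stationary noise; without them, the break would contaminate the residuals and distort a standard residual-based test.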
Assessing IMF Lending: a Model of Sample Selection
http://d.repec.org/n?u=RePEc:imf:imfwpa:19/157&r=ecm
Extending previous work on the determinants of IMF lending in an interconnected world, we introduce a model of sample selection in which both the selection and size dimensions of individual IMF arrangements are presented within a unified econometric framework. We allow for unobserved heterogeneity to create an additional channel for sample selection at the country level. The results suggest that higher external financing needs, larger exchange rate depreciation, lower GDP growth, and deteriorated global financial conditions are associated with larger individual IMF arrangements. Using the estimated parameters, Monte Carlo simulation of a wide spectrum of global shock scenarios suggests that the distribution of potential aggregate IMF lending exhibits a substantial right tail. Our approach may provide insightful input to broader policy discussions on the adequacy of IMF resources.
Nicolas Mäder
Jean-Guillaume Poulain
Julien Reynaud
2019-07-19
Agglomerative Fast Super-Paramagnetic Clustering
http://d.repec.org/n?u=RePEc:arx:papers:1908.00951&r=ecm
We consider the problem of fast time-series data clustering. Building on previous work modeling the correlation-based Hamiltonian of spin variables, we present a fast, inexpensive agglomerative algorithm. The method is tested on synthetic correlated time series and on noisy synthetic data-sets with built-in cluster structure to demonstrate that the algorithm produces meaningful non-trivial results. We argue that the algorithm (ASPC) can reduce computing time and resource costs for large-scale clustering while running serially, and hence has no obvious need for parallelization. The algorithm can be an effective choice for state detection in online learning in a fast, non-linear data environment because it requires no prior information about the number of clusters.
Lionel Yelibi
Tim Gebbie
2019-08
Machine Learning for Forecasting Excess Stock Returns – The Five-Year-View
http://d.repec.org/n?u=RePEc:grz:wpaper:2019-06&r=ecm
In this paper, we apply machine learning to forecast stock returns in excess of different benchmarks, including the short-term interest rate, the long-term interest rate, the earnings-by-price ratio, and inflation. In particular, we adopt and implement a fully nonparametric smoother with the covariates and the smoothing parameter chosen by cross-validation. We find that for both one-year and five-year returns, the term spread is, overall, the most powerful predictive variable for excess stock returns. Different combinations of covariates can then achieve higher predictability at different forecast horizons. Nevertheless, the set of earnings-by-price and term-spread predictors under the inflation benchmark strikes the right balance between the one-year and five-year horizons.
Ioannis Kyriakou
Parastoo Mousavi
Jens Perch Nielsen
Michael Scholz
Benchmark; Cross-validation; Prediction; Stock returns; Long-term forecasts; Overlapping returns; Autocorrelation
2019-08
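As an illustration of choosing a smoothing parameter by cross-validation, here is a minimal leave-one-out cross-validated Nadaraya-Watson smoother on simulated data. The paper's estimator, covariates, and validation criterion differ; everything below is illustrative:

```python
import numpy as np

def nw_fit(x_train, y_train, x_eval, h):
    """Nadaraya-Watson regression with a Gaussian kernel and bandwidth h."""
    d = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return (w @ y_train) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth on the grid minimizing leave-one-out squared error."""
    best_h, best_err = None, np.inf
    n = len(x)
    for h in grid:
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i          # drop observation i
            pred = nw_fit(x[mask], y[mask], x[i:i + 1], h)[0]
            err += (y[i] - pred) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-2, 2, size=200))
y = np.sin(x) + 0.3 * rng.normal(size=200)    # noisy nonlinear signal
h = loo_cv_bandwidth(x, y, grid=[0.05, 0.1, 0.2, 0.4, 0.8])
fit = nw_fit(x, y, x, h)
```

Leave-one-out validation guards against the bandwidth being tuned to reproduce the noise rather than the signal, which is the same trade-off the paper's cross-validated smoother manages.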