New Economics Papers on Econometrics |
By: | Gianfranco Piras (Regional Research Institute, West Virginia University); Ingmar R. Prucha (Department of Economics, University of Maryland) |
Abstract: | This paper explores the properties of pre-test strategies in estimating a linear Cliff-Ord-type spatial model when the researcher is unsure about the nature of the spatial dependence. More specifically, the paper explores the finite sample properties of the pre-test estimators introduced in Florax et al. (2003), which are based on Lagrange Multiplier (LM) tests, within the context of a Monte Carlo study. The performance of those estimators is compared with that of the maximum likelihood (ML) estimator of the encompassing model. We find that, even in a very simple setting, the bias of the estimates generated by pre-testing strategies can be very large in some cases and the empirical size of tests can differ substantially from the nominal size. This is in contrast to the ML estimator. |
Keywords: | Cliff-Ord, spatial, model, Lagrange multiplier, Monte Carlo |
JEL: | C4 C5 |
Date: | 2013–10 |
URL: | http://d.repec.org/n?u=RePEc:rri:wpaper:2013wp07&r=ecm |
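As a concrete illustration of the pre-test logic examined by Piras and Prucha, the following is a minimal sketch of the LM-based decision rule popularized by Florax et al. (2003): estimate OLS, inspect the standard LM tests for spatial lag and spatial error dependence, and let the robust LM variants break ties. The function assumes the four p-values have already been computed from the OLS residuals (e.g., by standard spatial-econometrics software); the helper name and the 5% cut-off are illustrative, not the authors' code.

```python
# Hedged sketch of an LM-based pre-test decision rule (hypothetical helper,
# not the paper's implementation); p-values are assumed to come from OLS
# residual diagnostics computed elsewhere.

def pretest_model_choice(p_lm_lag, p_lm_error, p_rlm_lag, p_rlm_error, alpha=0.05):
    """Return the model a pre-test strategy would estimate next."""
    lag_sig, err_sig = p_lm_lag < alpha, p_lm_error < alpha
    if not lag_sig and not err_sig:
        return "OLS"                                   # no evidence of spatial dependence
    if lag_sig and not err_sig:
        return "spatial lag (ML)"
    if err_sig and not lag_sig:
        return "spatial error (ML)"
    # both significant: let the robust LM tests break the tie
    return "spatial lag (ML)" if p_rlm_lag < p_rlm_error else "spatial error (ML)"

print(pretest_model_choice(0.01, 0.20, 0.03, 0.40))    # -> spatial lag (ML)
```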
By: | Isabelle Charlier; Davy Paindaveine; Jérôme Saracco |
Abstract: | Charlier, Paindaveine, and Saracco (2014) recently introduced a nonparametric estimator of conditional quantiles based on optimal quantization, but almost exclusively focused on its theoretical properties. In this paper, (i) we discuss its practical implementation (by proposing in particular a method to properly select the corresponding smoothing parameter, namely the number of quantizers) and (ii) we investigate how its finite-sample performances compare with those of its classical kernel or nearest-neighbor competitors. Monte Carlo studies reveal that the quantization-based estimator competes well in all cases and tends to dominate its competitors for non-uniformly distributed covariates. We also treat a real data set. While the paper mostly focuses on the case of a univariate covariate, we also briefly discuss the multivariate case and provide an illustration for bivariate regressors. |
Keywords: | conditional quantiles; optimal quantization; nonparametric regression |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/174929&r=ecm |
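To fix ideas, here is a rough sketch of the quantization idea behind the Charlier-Paindaveine-Saracco estimator, with k-means standing in for optimal quantization and the number of quantizers chosen arbitrarily; it is not the authors' implementation or their selection rule for the smoothing parameter.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantization_quantile(x_grid, X, Y, tau=0.5, n_quantizers=10, seed=0):
    """Crude quantization-based estimate of the tau-quantile of Y given X = x."""
    X = np.asarray(X, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_quantizers, n_init=10, random_state=seed).fit(X)
    cells = km.predict(np.asarray(x_grid, dtype=float).reshape(-1, 1))
    # empirical Y-quantile within the quantization cell containing each x
    return np.array([np.quantile(Y[km.labels_ == c], tau) for c in cells])

rng = np.random.default_rng(0)
X = rng.exponential(1.0, 2000)                        # non-uniformly distributed covariate
Y = np.sin(X) + 0.3 * rng.standard_normal(2000)
print(quantization_quantile([0.5, 1.0, 2.0], X, Y, tau=0.25))
```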
By: | Harry Vander Elst; David Veredas |
Abstract: | We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyse, in a thorough Monte Carlo study, different combinations of quantile- and median-based realized volatilities and four estimators of realized correlations under three synchronization schemes. Their finite sample properties are studied under four data generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that pre-averaged disentangled estimators provide a precise, computationally efficient and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Moreover, the gain is not only statistical but also financial. A minimum variance portfolio application shows the superiority of the disentangled realized estimators in terms of numerous performance metrics. |
Keywords: | Realized measures, Noise, Jumps, Synchronization |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:es142416&r=ecm |
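A stylized sketch of the disentangling idea on synchronized, noise-free returns: estimate each volatility with a jump-robust measure (here the median realized variance of Andersen, Dobrev and Schaumburg, 2012) and the correlations separately (here the plain realized correlation), then recombine. This particular pairing is an illustrative assumption, not the combination the paper recommends.

```python
import numpy as np

def med_rv(r):
    """Jump-robust median realized variance (Andersen, Dobrev & Schaumburg, 2012)."""
    n = len(r)
    c = np.pi / (6 - 4 * np.sqrt(3) + np.pi)
    meds = np.median(np.abs(np.column_stack([r[:-2], r[1:-1], r[2:]])), axis=1)
    return c * n / (n - 2) * np.sum(meds ** 2)

def disentangled_cov(returns):
    """Recombine separately estimated volatilities and realized correlations."""
    vols = np.sqrt([med_rv(returns[:, j]) for j in range(returns.shape[1])])
    corr = np.corrcoef(returns, rowvar=False)          # realized correlation matrix
    return np.outer(vols, vols) * corr

rng = np.random.default_rng(1)
r = 0.001 * rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 390)  # toy intraday returns
print(disentangled_cov(r))
```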
By: | Michael Creel (Universitat Autònoma de Barcelona and MOVE); Dennis Kristensen (University College London, CeMMaP, and CREATES) |
Abstract: | We develop novel methods for estimation and filtering of continuous-time models with stochastic volatility and jumps using so-called Approximate Bayesian Computation, which builds likelihoods based on limited information. The proposed estimators and filters are computationally attractive relative to standard likelihood-based versions since they rely on low-dimensional auxiliary statistics and so avoid the computation of high-dimensional integrals. Despite their computational simplicity, we find that the estimators and filters perform well in practice and lead to precise estimates of model parameters and latent variables. We show how the methods can incorporate intra-daily information to improve estimation and filtering. In particular, the availability of realized volatility measures helps us learn about parameters and latent states. The method is employed in the estimation of a flexible stochastic volatility model for the dynamics of the S&P 500 equity index. We find evidence of the presence of a dynamic jump rate and in favor of a structural break in parameters at the time of the recent financial crisis. We find evidence that possible measurement error in the log price is small and has little effect on parameter estimates. Smoothing shows that, recently, volatility and the jump rate have returned to the low levels of 2004-2006. |
Keywords: | Approximate Bayesian Computation, continuous-time processes, filtering, indirect inference, jumps, realized volatility, stochastic volatility |
JEL: | C13 C14 C15 C33 G17 |
Date: | 2014–11–08 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-30&r=ecm |
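The following toy rejection-ABC sketch illustrates the general mechanism of matching low-dimensional auxiliary statistics instead of evaluating an intractable likelihood. The simulator (a crude discrete-time stochastic-volatility model), the prior ranges, the auxiliary statistics and the acceptance rule are all placeholder choices, not the estimator or filter proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=300):
    """Toy discrete-time SV model: log-volatility AR(1) with persistence phi, scale sigma."""
    phi, sigma = theta
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = phi * h[t - 1] + sigma * rng.standard_normal()
    return np.exp(h / 2) * rng.standard_normal(n)

def aux_stats(r):
    """Low-dimensional auxiliary statistics: variance, kurtosis, first autocorrelation of squares."""
    r2 = r ** 2
    return np.array([np.var(r),
                     np.mean(r ** 4) / np.var(r) ** 2,
                     np.corrcoef(r2[:-1], r2[1:])[0, 1]])

s_obs = aux_stats(simulate((0.95, 0.3)))                      # stand-in for observed data

thetas = np.column_stack([rng.uniform(0.5, 0.999, 3000),      # prior draws for (phi, sigma)
                          rng.uniform(0.05, 1.0, 3000)])
dist = np.array([np.linalg.norm(aux_stats(simulate(t)) - s_obs) for t in thetas])
accepted = thetas[dist <= np.quantile(dist, 0.02)]            # keep the closest 2% of draws
print(accepted.mean(axis=0))                                  # crude ABC posterior mean
```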
By: | Joseph Cummins (Department of Economics, University of California Riverside) |
Abstract: | This paper addresses the problem of model misspecification bias when estimating cohort-level determinants of the child height-for-age z-score (HAZ) using data from the Demographic and Health Surveys (DHS). I show that the combination of DHS survey design and the biological realities of child health in developing countries creates an artifact that can strongly bias regression estimates when identification relies on seasonal, annual or spatio-temporal variation associated with a subject's birth cohort. I formalize the econometric problem and show that flexible specifications of the HAZ-age profile can greatly mitigate the bias. When regression models can exploit within-cohort variation in the covariate of interest, appropriate fixed-effects models can effectively purge the bias. I also provide Monte Carlo evidence that DHS-recommended inference strategies produce standard errors that are too small when estimating birth cohort determinants of HAZ. |
Keywords: | Demographic and Health Surveys; Height; Z-score; Child Health |
JEL: | O15 C23 I15 |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201417&r=ecm |
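To see the mechanism Cummins describes, here is a small simulated sketch in which a cohort-level covariate has no true effect on HAZ, the HAZ-age profile is nonlinear, and age at measurement is tied to birth cohort by the survey design: a linear age control leaves a spurious estimate, while a flexible month-of-age profile largely removes it. All functional forms and magnitudes are invented for illustration and are not calibrated to DHS data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
birth_month = rng.integers(0, 60, n)             # birth cohort: months before the survey
age = birth_month                                # age at measurement equals cohort by design
adopter = rng.integers(0, 2, n)                  # regions exposed to a hypothetical policy
z = adopter * (birth_month >= 36)                # cohort-level covariate with a true effect of zero
haz = -2 * (1 - np.exp(-age / 12)) + 0.5 * rng.standard_normal(n)   # nonlinear HAZ-age profile

def coef_on_z(controls):
    """OLS coefficient on z given a list of control arrays."""
    X = np.column_stack([z] + controls)
    return np.linalg.lstsq(X, haz, rcond=None)[0][0]

age_dummies = (age[:, None] == np.arange(60)).astype(float)     # flexible month-of-age profile
print(coef_on_z([np.ones(n), age.astype(float)]),   # linear age control: spurious estimate
      coef_on_z([age_dummies]))                     # flexible profile: close to zero
```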
By: | Aman Ullah (Department of Economics, University of California Riverside); Alan T.K. Wan (City University of Hong Kong); Huansha Wang (University of California, Riverside); Xinyu Zhang (Chinese Academy of Sciences); Guohua Zou (Chinese Academy of Sciences) |
Abstract: | In recent years, the suggestion of combining models as an alternative to selecting a single model from a frequentist perspective has been advanced in a number of studies. In this paper, we propose a new semi-parametric estimator of regression coefficients, which is in the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b) but with different biasing factors. We prove that the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semi-parametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretations and applications of both semi-parametric and ridge estimators. Furthermore, we demonstrate that these estimators based on model averaging weights can have properties superior to the well-known feasible generalized ridge estimator in a large region of the parameter space. Two empirical examples are presented. |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201412&r=ecm |
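As a side-by-side illustration of the two objects the paper links, the sketch below computes a generalized ridge estimator with a diagonal matrix of biasing factors and a weighted average of restricted least-squares estimators. The particular weights and biasing factors are arbitrary placeholders chosen only to display the two forms; they do not reproduce the paper's correspondence between model-averaging weights and biasing factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.standard_normal(n)

# Generalized ridge: (X'X + K)^{-1} X'y with a diagonal matrix of biasing factors.
K = np.diag([0.0, 0.0, 5.0, 5.0])                      # illustrative biasing factors
b_ridge = np.linalg.solve(X.T @ X + K, X.T @ y)

# Model averaging: weighted combination of restricted least-squares fits of submodels.
def restricted_ols(cols):
    b = np.zeros(k)
    b[list(cols)] = np.linalg.lstsq(X[:, list(cols)], y, rcond=None)[0]
    return b

weights = {(0, 1): 0.6, (0, 1, 2, 3): 0.4}             # illustrative model-averaging weights
b_avg = sum(w * restricted_ols(c) for c, w in weights.items())

print(np.round(b_ridge, 3), np.round(b_avg, 3))
```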
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Huiyu Huang (GMO Emerging Markets) |
Abstract: | In predicting quantiles of daily S&P 500 returns, we consider how to use high-frequency 5-minute data. We examine methods that incorporate the high-frequency information either indirectly, through combining forecasts (using forecasts generated from returns sampled at different intra-day intervals), or directly, by combining high-frequency information into one model. We consider subsample averaging, bootstrap averaging, and forecast averaging methods for the indirect case, and factor models with a principal component approach for both cases. We show that, in forecasting the daily S&P 500 index return quantile (the VaR is simply its negative), using high-frequency information is beneficial, often substantially so, and particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which serve as different ways of forming the ensemble average from high-frequency intraday information, provide excellent forecasting performance compared to using just low-frequency daily information. |
Keywords: | VaR, Quantiles, Subsample averaging, Bootstrap averaging, Forecast combination, High-frequency data. |
JEL: | C53 G32 C22 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201409&r=ecm |
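A bare-bones sketch of the indirect (forecast-averaging) idea: build realized volatilities from intraday returns sampled at several frequencies, turn each into a naive location-scale quantile forecast of the daily return, and average the forecasts. The simulated data, the forecasting rule and the equal weights are placeholder assumptions; the paper's models and weighting schemes are richer.

```python
import numpy as np

rng = np.random.default_rng(0)
days, m = 500, 78                                   # 78 five-minute returns per trading day
intraday = 0.001 * rng.standard_normal((days, m)) * np.linspace(1, 2, days)[:, None]
daily = intraday.sum(axis=1)                        # daily returns

def realized_var(day_returns, step):
    """Realized variance from returns sampled every `step` five-minute intervals."""
    agg = day_returns.reshape(-1, step).sum(axis=1)
    return np.sum(agg ** 2)

tau, steps = 0.05, [1, 2, 3, 6]                     # 5-, 10-, 15- and 30-minute sampling
forecasts = []
for s in steps:
    sigma = np.sqrt([realized_var(intraday[t], s) for t in range(days - 1)])
    z = np.quantile(daily[:-1] / sigma, tau)        # in-sample standardized quantile
    forecasts.append(z * sigma[-1])                 # naive one-day-ahead quantile forecast
print(np.mean(forecasts))                           # forecast-averaged 5% quantile (VaR is its negative)
```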
By: | Tetsuya Takaishi |
Abstract: | A spin model is used for simulations of financial markets. To determine the return volatility in the spin financial market we use the GARCH model, which is often used for volatility estimation in empirical finance. We apply Bayesian inference, performed by the Markov Chain Monte Carlo method, to the parameter estimation of the GARCH model. We find that the volatility determined by the GARCH model exhibits the "volatility clustering" also observed in real financial markets. Using the volatility determined by the GARCH model, we examine the mixture-of-distributions hypothesis (MDH) suggested for asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover, we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics. |
Date: | 2014–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1409.0118&r=ecm |
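A compact sketch of the Bayesian GARCH(1,1) step described in the abstract, estimated by random-walk Metropolis on an arbitrary return series; the flat priors on the admissible region, the proposal scales and the chain length are placeholder choices rather than the paper's settings.

```python
import numpy as np

def garch_loglik(theta, r):
    """Gaussian GARCH(1,1) log-likelihood; theta = (omega, alpha, beta)."""
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf                               # outside the admissible region
    h = np.empty_like(r)
    h[0] = np.var(r)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)                 # placeholder return series

theta = np.array([1e-5, 0.05, 0.90])                 # starting values
ll = garch_loglik(theta, r)
draws = []
for _ in range(3000):                                # random-walk Metropolis
    prop = theta + rng.normal(0.0, [1e-6, 0.02, 0.02])
    ll_prop = garch_loglik(prop, r)
    if np.log(rng.uniform()) < ll_prop - ll:         # flat prior on the admissible region
        theta, ll = prop, ll_prop
    draws.append(theta.copy())
print(np.mean(draws[1500:], axis=0))                 # posterior means after burn-in
```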
By: | Yoonseok Lee (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Sung Jae Jun (Pennsylvania State University); Youngki Shin (University of Western Ontario) |
Abstract: | We propose sharp identifiable bounds on the distribution functions of potential outcomes using a panel with fixed T. We allow for the possibility that the statistical randomization of treatment assignments is not achieved until unobserved heterogeneity is properly controlled for. We use certain stationarity assumptions to obtain the bounds. Dynamics in the treatment decisions are allowed as long as the stationarity assumptions are satisfied. In particular, we present an example where our assumptions are satisfied and the treatment decision at the present time may depend on the treatments and the observed outcomes of the past. As an empirical illustration, we study the effect of smoking during pregnancy on infant birth weights. We find that, for the group of switchers, the birth weight distribution with smoking is first-order stochastically dominated by that with non-smoking. |
Keywords: | Treatment Effects, Dynamic Treatment Decisions, Partial Identification, Unobserved Heterogeneity, Stochastic Dominance, Panel Data |
JEL: | C12 C21 C23 |
Date: | 2014–07 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:169&r=ecm |
By: | Dedy Dwi Prastyo; Wolfgang Karl Härdle |
Abstract: | Using a local adaptive Forward Intensities Approach (FIA), we investigate multiperiod corporate defaults and other delisting schemes. The proposed approach is fully data-driven and is based on local adaptive estimation and the selection of optimal estimation windows. Time-dependent model parameters are derived by a sequential testing procedure that yields adapted predictions at every time point. Applying the proposed method to monthly data on 2,000 U.S. public firms over a sample period from 1991 to 2011, we estimate default probabilities over various prediction horizons. The prediction performance is evaluated against the global FIA that employs all past observations. For the six-month prediction horizon, the local adaptive FIA performs with the same accuracy as the benchmark. The default prediction power is improved for longer horizons (one to three years). Our local adaptive method can be applied to any other specification of forward intensities. |
Keywords: | Accuracy ratio; Forward default intensity; Local adaptive; Multiperiod prediction |
JEL: | C41 C53 C58 G33 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-040&r=ecm |
By: | Mauricio Romero; Álvaro Riascos; Diego Jara |
Abstract: | Multiple choice exams are frequently used as an efficient and objective instrument to evaluate knowledge. Nevertheless, they are more vulnerable to answer copying than tests based on open questions. Several statistical tests (known as indices) have been proposed to detect cheating, but to the best of our knowledge they all lack a mathematical support that guarantees optimality in any sense. This work aims at filling this void by deriving the uniformly most powerful (UMP) test assuming the response distribution is known. In practice we must estimate a behavioral model that yields a response distribution for each question. We calculate the empirical type-I and type-II error rates for several indices, which assume different behavioral models, using simulations based on real data from twelve nationwide multiple choice exams taken by 5th and 9th graders in Colombia. We find that the index with the highest power among those studied, subject to the restriction of preserving the type-I error, is the one that uses a nominal response model for item answering, conditions on the answers of the individual suspected of being the source of the copying, and calculates critical values via a normal approximation. This index was first studied by Wollack (1997) and later by W. Van der Linden and Sotaridona (2006) and is superior to the indices studied and developed by Wesolowsky (2000) and Frary, Tideman, and Watts (1977). Furthermore, we compare the performance of the indices across examination rooms with different levels of proctoring and find that increasing the level of proctoring can reduce copying by as much as 50%, and that simple strategies, such as having different students answer different portions of the test at different times, can also reduce cheating by over 50%. Finally, a Bonferroni-type false discovery rate procedure is used to detect massive cheating. The application is straightforward, and we believe it could be used to make entire examination rooms retake an exam under stricter surveillance conditions. |
Keywords: | Index, Answer Copying, False Discovery Rate, Neyman-Pearson Lemma |
JEL: | C19 I20 |
Date: | 2014–08–13 |
URL: | http://d.repec.org/n?u=RePEc:col:000089:012061&r=ecm |
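A simplified sketch of the kind of conditional, normal-approximation index the abstract singles out (in the spirit of Wollack's omega): condition on the source's answers, take model-implied per-item probabilities that the suspected copier would match them, and compare the observed match count to its null mean and variance. Here the match probabilities are taken as given rather than estimated from a nominal response model, and all data are simulated.

```python
import numpy as np
from scipy.stats import norm

def copy_index(copier, source, match_prob):
    """Z-statistic for the number of answer matches between a suspected copier and a source.

    match_prob[i] is the model-implied probability that the copier, answering
    independently, would give the source's answer on item i.
    """
    matches = np.sum(copier == source)
    mu, sd = np.sum(match_prob), np.sqrt(np.sum(match_prob * (1 - match_prob)))
    z = (matches - mu) / sd
    return z, 1 - norm.cdf(z)                        # one-sided p-value, normal approximation

rng = np.random.default_rng(0)
n_items = 60
source = rng.integers(0, 4, n_items)                          # source's chosen options
match_prob = rng.uniform(0.2, 0.5, n_items)                   # placeholder model probabilities
copier = np.where(rng.uniform(size=n_items) < 0.4,            # copies roughly 40% of items
                  source, rng.integers(0, 4, n_items))
print(copy_index(copier, source, match_prob))
```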
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Weiping Yang (Capital One Financial Research) |
Abstract: | This paper considers Granger-causality in conditional quantiles and examines the potential of improving conditional quantile forecasting by accounting for such a causal relationship between financial markets. We consider Granger-causality in distribution by testing whether the copula function of a pair of two financial markets is the independent copula. Among returns on stock markets in the US, Japan and the UK, we find significant Granger-causality in distribution. For a pair of financial markets where a dependent (conditional) copula is found, we invert the conditional copula to obtain the conditional quantiles. Dependence between the returns of two financial markets is modeled using a parametric copula. Different copula functions are compared to test for Granger-causality in distribution and in quantiles. We find significant Granger-causality in the different quantiles of the conditional distributions between foreign stock markets and the US stock market. Granger-causality from foreign stock markets to the US stock market is more significant from the UK than from Japan, while causality from the US stock market to the UK and Japanese stock markets is almost equally significant. |
Keywords: | Contagion in Financial Markets, Copula Functions, Inverting Conditional Copula, Granger-causality in Conditional Quantiles |
JEL: | C5 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201406&r=ecm |
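A minimal illustration of inverting a conditional copula to obtain conditional quantiles, using a Gaussian copula (for which the inversion has a closed form) and empirical marginals; the copula family, the parameter value and the simulated data are assumptions for illustration only and are not specific to the paper's markets.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_cond_quantile(tau, x, X, Y, rho):
    """tau-quantile of Y given X = x under a Gaussian copula with parameter rho,
    using empirical marginals for X and Y."""
    u = (np.sum(X <= x) + 0.5) / (len(X) + 1)                    # empirical CDF of X at x
    # invert the conditional copula C(v | u) = tau in closed form for the Gaussian case
    v = norm.cdf(rho * norm.ppf(u) + np.sqrt(1 - rho ** 2) * norm.ppf(tau))
    return np.quantile(Y, v)                                     # invert the empirical CDF of Y

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], 5000)
X, Y = z[:, 0], np.exp(z[:, 1])                                  # dependent pair, non-Gaussian Y
for tau in (0.05, 0.5, 0.95):
    print(tau, gaussian_copula_cond_quantile(tau, 1.0, X, Y, rho=0.7))
```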