on Econometrics
By: | Juan Carlos Escanciano (Universidad de Navarra); Carlos Velasco (Universidad Carlos III) |
Abstract: | This paper proposes an omnibus test of a generalized version of the martingale difference hypothesis (MDH). This generalized hypothesis includes the usual MDH, testing for constancy of conditional moments such as conditional homoscedasticity (ARCH effects), and testing for directional predictability. Here we propose a unified approach for dealing with all of these testing problems. These hypotheses are long-standing problems in econometric time series analysis and have typically been tested using sample autocorrelations or, in the spectral domain, the periodogram. Since these hypotheses also cover nonlinear predictability, tests based on such second-order statistics are inconsistent against uncorrelated processes in the alternative hypothesis. To circumvent this problem we introduce pairwise integrated regression functions as measures of linear and nonlinear dependence. With our test there is no need to choose a lag order depending on the sample size, to smooth the data, or to formulate a parametric alternative model. Moreover, our test is robust to higher-order dependence, in particular to conditional heteroskedasticity. Under general dependence the asymptotic null distribution depends on the data generating process, so a bootstrap procedure is considered, and a Monte Carlo study examines its finite-sample performance. Finally, we investigate the martingale and conditional heteroskedasticity properties of the Pound/Dollar exchange rate. |
JEL: | C12 |
URL: | http://d.repec.org/n?u=RePEc:una:unccee:wp0606&r=ecm |
By: | Javier Hualde (Universidad de Navarra); Carlos Velasco (Universidad Carlos III) |
Abstract: | We propose tests of the null of a spurious relationship against the alternative of fractional cointegration among the components of a vector of fractionally integrated time series. Our test statistics have an asymptotic chi-square distribution under the null and rely on GLS-type corrections which control for the short-run correlation of the weakly dependent components of the fractionally integrated processes. We emphasize corrections based on nonparametric modelling of the innovations' autocorrelation, relaxing important conditions which are standard in the literature and, in particular, allowing (asymptotically) stationary or nonstationary processes to be considered simultaneously. Relatively weak conditions on the corresponding short-run and memory parameter estimates are assumed. The new tests are consistent, with a divergence rate that, in most cases, as we show in a simple situation, depends on the degree of cointegration. Finite-sample properties of the tests are analysed by means of a Monte Carlo experiment. |
JEL: | C12 C13 C22 |
URL: | http://d.repec.org/n?u=RePEc:una:unccee:wp0806&r=ecm |
By: | Alberto Abadie; Guido W. Imbens |
Abstract: | Matching estimators are widely used for the evaluation of programs or treatments. Often researchers use bootstrapping methods for inference. However, no formal justification for the use of the bootstrap has been provided. Here we show that the bootstrap is in general not valid, even in the simple case with a single continuous covariate when the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias. Due to the extreme non-smoothness of nearest neighbor matching, the standard conditions for the bootstrap are not satisfied, leading the bootstrap variance to diverge from the actual variance. Simulations confirm the difference between actual and nominal coverage rates for bootstrap confidence intervals predicted by the theoretical calculations. To our knowledge, this is the first example of a root-N consistent and asymptotically normal estimator for which the bootstrap fails to work. |
JEL: | C14 C21 C52 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0325&r=ecm |
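The failure described in the preceding abstract can be illustrated numerically. The sketch below is not the authors' design: it simulates a simple setting with one covariate and a zero treatment effect, computes a single nearest-neighbour matching estimator, and compares its Monte Carlo variance with the average variance reported by a standard nonparametric bootstrap. The sample size and replication counts are arbitrary illustrative choices.

```python
import numpy as np

def nn_match_att(x, w, y):
    """Average effect of treatment on the treated with single nearest-neighbour
    matching (with replacement) on one covariate."""
    xt, yt = x[w == 1], y[w == 1]
    xc, yc = x[w == 0], y[w == 0]
    idx = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)  # nearest control for each treated unit
    return np.mean(yt - yc[idx])

rng = np.random.default_rng(0)

def simulate(n=200):
    x = rng.uniform(size=n)
    w = rng.integers(0, 2, size=n)
    y = x + rng.standard_normal(n)          # true treatment effect is zero
    return x, w, y

# Monte Carlo variance of the estimator versus the average bootstrap variance.
mc_est, boot_var = [], []
for _ in range(200):
    x, w, y = simulate()
    mc_est.append(nn_match_att(x, w, y))
    reps = []
    for _ in range(100):
        b = rng.integers(0, len(x), size=len(x))   # standard nonparametric bootstrap resample
        reps.append(nn_match_att(x[b], w[b], y[b]))
    boot_var.append(np.var(reps))
print(f"Monte Carlo variance: {np.var(mc_est):.4f}, mean bootstrap variance: {np.mean(boot_var):.4f}")
```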
By: | Kleijnen,Jack P.C. (Tilburg University, Center for Economic Research) |
Abstract: | Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these classic assumptions in simulation practice? (ii) How can these assumptions be tested? (iii) If assumptions are violated, can the simulation's I/O data be transformed such that the assumptions hold? (iv) If not, which alternative statistical methods can then be applied? |
Keywords: | metamodels; experimental designs; generalized least squares; multivariate analysis; normality; jackknife; bootstrap; heteroscedasticity; common random numbers; validation |
JEL: | C0 C1 C9 C15 C44 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:dgr:kubcen:200650&r=ecm |
By: | Javier Hualde (Universidad de Navarra); Peter Robinson (London School of Economics) |
Abstract: | A semiparametric bivariate fractionally cointegrated system is considered, integration orders possibly being unknown and I(0) unobservable inputs having nonparametric spectral density. Two kinds of estimate of the cointegrating parameter ν are considered, one involving inverse spectral weighting and the other, unweighted statistics with a spectral estimate at frequency zero. We establish under quite general conditions the asymptotic distributional properties of the estimates of ν, both in case of "strong cointegration" (when the difference between integration orders of observables and cointegrating errors exceeds 1/2) and in case of "weak cointegration" (when that difference is less than 1/2), which includes the case of (asymptotically) stationary observables. Across both cases, the same Wald test statistic has the same standard null χ² limit distribution, irrespective of whether integration orders are known or estimated. The regularity conditions include unprimitive ones on the integration orders and spectral density estimates, but we check these under more primitive conditions on particular estimates. Finite-sample properties are examined in a Monte Carlo study. |
JEL: | C32 |
URL: | http://d.repec.org/n?u=RePEc:una:unccee:wp0706&r=ecm |
By: | Eva Boj del Val, Mª Mercè Claramunt Bielsa and Josep Fortiana Gregori (Universitat de Barcelona) |
Abstract: | Distance-based regression is a prediction method consisting of two steps: from distances between observations we obtain latent variables which, in turn, serve as the regressors in an ordinary least squares linear model. Distances are computed from the actually observed predictors by means of a suitable dissimilarity function. Since the latent variables are, in general, nonlinearly related to the response, the usual F tests cannot be used for predictor selection. In this paper we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a non-parametric bootstrap method to estimate their p-values. We include a numerical example with automobile insurance data. |
Keywords: | Distance-based regression; Predictor selection; Non-parametric bootstrap; Automobile insurance data. |
JEL: | C12 C14 C15 G22 C80 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:bar:bedcje:2006154&r=ecm |
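A minimal sketch of the two-step procedure described in the preceding abstract: latent variables are obtained from a distance matrix by classical scaling (principal coordinates) and then used as OLS regressors. The data, the Euclidean dissimilarity, and the number of retained components are illustrative assumptions, and the predictor-selection tests proposed in the paper are not implemented here.

```python
import numpy as np

def distance_based_regression(D, y, n_components):
    """Two-step distance-based regression: (1) obtain latent coordinates from the
    distance matrix via classical scaling (principal coordinates);
    (2) regress the response on those coordinates by ordinary least squares."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centring matrix
    G = -0.5 * J @ (D ** 2) @ J                    # doubly centred inner-product matrix
    eigval, eigvec = np.linalg.eigh(G)
    order = np.argsort(eigval)[::-1][:n_components]
    coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
    X = np.column_stack([np.ones(n), coords])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, coords

# Hypothetical example with Euclidean distances computed from two observed predictors.
rng = np.random.default_rng(2)
Z = rng.standard_normal((100, 2))
y = Z[:, 0] - 0.5 * Z[:, 1] + 0.1 * rng.standard_normal(100)
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
beta, coords = distance_based_regression(D, y, n_components=2)
print(beta)
```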
By: | James H. Stock; Mark W. Watson |
Abstract: | The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees-of-freedom adjustment), applied to the fixed effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than two) as the number of entities n increases. We provide a bias-adjusted HR estimator that is (nT)^{1/2}-consistent under any sequences (n, T) in which n and/or T increase to ∞. |
JEL: | C23 C12 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0323&r=ecm |
By: | Heiss, Florian |
Abstract: | In applied microeconometric panel data analyses, time-constant random effects and first-order Markov chains are the most prevalent structures for accounting for intertemporal correlations in limited dependent variable models. An example from health economics shows that adding a simple autoregressive error term leads to a more plausible and parsimonious model which also captures the dynamic features better. The computational problems encountered in the estimation of such models - and of a broader class formulated in the framework of nonlinear state space models - hamper their widespread use. This paper discusses the application of different nonlinear filtering approaches developed in the time-series literature to these models and suggests that a straightforward algorithm based on sequential Gaussian quadrature can be expected to perform well in this setting. This conjecture is impressively confirmed by an extensive analysis of the example application. |
Keywords: | LDV models; panel data; state space; numerical integration; health |
JEL: | C15 C23 C35 I10 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:lmu:muenec:1157&r=ecm |
By: | Manuel Gómez (School of Economics, Universidad de Guanajuato); Daniel Ventosa-Santaularia (School of Economics, Universidad de Guanajuato) |
Abstract: | We further investigate the properties of the Dickey-Fuller (DF) test when the data generating process of the variable under consideration is in fact mean stationary with breaks. Asymptotic theory, Monte Carlo simulations and empirical evidence reveal that, under plausible values of the number and size of the breaks such as those found empirically, the DF test tends to reject the unit root null hypothesis. |
Keywords: | Dickey-Fuller test, Mean Stationary Process, Structural Breaks. |
JEL: | C12 C22 E31 |
URL: | http://d.repec.org/n?u=RePEc:gua:wpaper:em200601&r=ecm |
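The setting studied in the preceding abstract is easy to reproduce: generate a series that is stationary around a mean with a break and apply the Dickey-Fuller test to it. The sketch below uses statsmodels' adfuller with zero augmentation lags; the break location, break size, and sample size are arbitrary illustrative choices, not those examined in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Mean-stationary series with a single break in the mean (illustrative values).
rng = np.random.default_rng(42)
n = 200
mu = np.where(np.arange(n) < n // 2, 0.0, 2.0)   # level shift halfway through the sample
y = mu + rng.standard_normal(n)

# Dickey-Fuller regression with a constant and no augmentation lags.
stat, pvalue, *_ = adfuller(y, maxlag=0, regression="c", autolag=None)
print(f"DF statistic: {stat:.2f}, p-value: {pvalue:.3f}")
```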
By: | Bernard M.S. van Praag (Universiteit van Amsterdam, CESifo, IZA, & SCHOLAR); Ada Ferrer-i-Carbonell (Universiteit van Amsterdam, SCHOLAR, AIAS Amsterdam Institute of Labour Studies) |
Abstract: | In this paper we propose an alternative approach to the estimation of ordered response models. We show that the probit method may be replaced by a simple OLS approach, called P(robit)OLS, without any loss of efficiency. This method can be generalized to the analysis of panel data. For large-scale examples with random or fixed effects we found that computing time was reduced from 90 minutes to less than one minute. Conceptually, the method removes the gap between traditional multivariate models and discrete variable models. |
Keywords: | Categorical data; Ordered probit model; Ordered response models; Subjective data; Subjective well-being |
JEL: | C25 |
Date: | 2006–05–22 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060047&r=ecm |
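A minimal sketch of the probit-adapted OLS idea described in the preceding abstract, under the usual construction: category thresholds are obtained from the sample cumulative frequencies, each observation is replaced by the conditional mean of a standard normal variate within its interval, and OLS is run on the transformed response. This illustrates the general idea, not necessarily the authors' exact estimator; the function name and the data-generating process are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def pols_transform(y_ordinal):
    """Replace each ordered category by the conditional expectation of a standard
    normal variate within the interval implied by the sample category frequencies."""
    cats, counts = np.unique(y_ordinal, return_counts=True)
    cum = np.concatenate(([0.0], np.cumsum(counts) / counts.sum()))
    lower, upper = norm.ppf(cum[:-1]), norm.ppf(cum[1:])
    # E[Z | a < Z < b] = (phi(a) - phi(b)) / (Phi(b) - Phi(a)) for standard normal Z
    cond_mean = (norm.pdf(lower) - norm.pdf(upper)) / (norm.cdf(upper) - norm.cdf(lower))
    return cond_mean[np.searchsorted(cats, y_ordinal)]

# Hypothetical example: an ordered 5-point response driven by one regressor.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
latent = 0.8 * x + rng.standard_normal(500)
y = np.digitize(latent, np.quantile(latent, [0.2, 0.4, 0.6, 0.8]))  # categories 0..4
z = pols_transform(y)
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, z, rcond=None)[0]          # simple OLS on the transformed response
print(f"POLS slope estimate: {beta[1]:.3f}")
```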
By: | Richard K. Crump; V. Joseph Hotz; Guido W. Imbens; Oscar A. Mitnik |
Abstract: | A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability, following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases, however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is of the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is of the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. By sacrificing some generality and focusing on these two specific null hypotheses, we derive tests that are straightforward to implement. |
JEL: | C14 C21 C52 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0324&r=ecm |
By: | Stephen Gordon; Michel Truchon |
Abstract: | We approach the social choice problem as one of optimal statistical inference. If individual voters or judges observe the true order on a set of alternatives with error, then it is possible to use the set of individual rankings to make probability statements about the correct social order. Given the posterior distribution over orders and a suitably chosen loss function, an optimal order is one that minimises expected posterior loss. The paper develops a statistical model describing the behaviour of judges and discusses Markov Chain Monte Carlo estimation. We also discuss criteria for choosing appropriate loss functions. We apply our methods to a well-known problem: determining the correct ranking of figure skaters competing at the Olympic Games. |
Keywords: | Vote aggregation, ranking rules, figure skating, Bayesian methods, optimal inference, Markov Chain Monte Carlo |
JEL: | D71 C11 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:lvl:lacicr:0624&r=ecm |
By: | Michel Truchon; Stephen Gordon |
Abstract: | If individual voters observe the true ranking on a set of alternatives with error, then the social choice problem, that is, the problem of aggregating their observations, is one of statistical inference. This study develops a statistical methodology that can be used to evaluate the properties of a given aggregation rule. These techniques are then applied to some well-known rules. |
Keywords: | Vote aggregation, ranking rules, figure skating, maximum likelihood, optimal inference, Monte Carlo, Kemeny, Borda |
JEL: | D71 C15 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:lvl:lacicr:0625&r=ecm |
By: | Michel Truchon |
Abstract: | Drissi-Bakhkhat and Truchon ["Maximum Likelihood Approach to Vote Aggregation with Variable Probabilities," Social Choice and Welfare, 23, (2004), 161-185] extend the Condorcet-Kemeny-Young maximum likelihood approach to vote aggregation by relaxing the assumption that the probability of correctly ordering two alternatives is the same for all pairs of alternatives. They let this probability increase with the distance between the two alternatives in the true order, to reflect the intuition that a judge or voter is more prone to errors when confronted with two comparable alternatives than when confronted with a good alternative and a bad one. In this note, it is shown that, for a suitably chosen probability function, the maximum likelihood rule coincides with the Borda rule, thus partially reconciling the Borda and Condorcet methods. |
Keywords: | Vote aggregation, ranking rules, maximum likelihood, Borda |
JEL: | D71 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:lvl:lacicr:0623&r=ecm |
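For reference, the Borda rule that the note relates to the maximum likelihood approach is simple to compute from individual rankings. The sketch below is a generic implementation with hypothetical ballots, not the probability model derived in the paper.

```python
from collections import defaultdict

def borda_ranking(rankings):
    """Aggregate individual rankings (best-to-worst lists) with the Borda rule:
    an alternative ranked r-th among m alternatives scores m - r points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking, start=1):
            scores[alternative] += m - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ballots from three judges over four alternatives.
ballots = [["a", "b", "c", "d"], ["b", "a", "c", "d"], ["a", "c", "b", "d"]]
print(borda_ranking(ballots))   # expected: ['a', 'b', 'c', 'd']
```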
By: | Roger Lord (Faculty of Economics and Business Economics, Erasmus Universiteit Rotterdam); Remmert Koekkoek (Robeco Alternative Investments); Dick van Dijk (Faculty of Economics and Business Economics, Erasmus Universiteit Rotterdam) |
Abstract: | When using an Euler discretisation to simulate a mean-reverting square root process, one runs into the problem that, while the process itself is guaranteed to be nonnegative, the discretisation is not. Although an exact and efficient simulation algorithm exists for this process, at present this is not the case for the Heston stochastic volatility model, where the variance is modelled as a square root process. Consequently, when using an Euler discretisation, one must carefully think about how to fix negative variances. Our contribution is threefold. Firstly, we unify all Euler fixes into a single general framework. Secondly, we introduce the new full truncation scheme, tailored to minimise the upward bias found when pricing European options. Thirdly, we numerically compare all Euler fixes to a recent quasi-second-order scheme of Kahl and Jäckel and the exact scheme of Broadie and Kaya. The choice of fix is found to be extremely important. The full truncation scheme by far outperforms all biased schemes in terms of bias and root-mean-squared error, and hence should be the preferred discretisation method for simulation of the Heston model and extensions thereof. |
Keywords: | Stochastic volatility; Heston; square root process; Euler-Maruyama; discretisation; strong convergence; weak convergence; boundary behaviour |
JEL: | C63 G13 |
Date: | 2006–05–18 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060046&r=ecm |
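A minimal sketch of the full truncation fix described in the preceding abstract: the variance state is allowed to go negative, but only its positive part enters the drift and diffusion of the Euler step. The parameter values and the Monte Carlo pricing example are arbitrary illustrative choices.

```python
import numpy as np

def heston_full_truncation_paths(s0, v0, kappa, theta, sigma, rho, r, T, n_steps, n_paths, seed=0):
    """Simulate Heston terminal prices with a full truncation Euler scheme:
    the variance used in drift and diffusion is floored at zero, while the
    variance state itself may become negative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, np.log(s0))
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_plus = np.maximum(v, 0.0)                     # full truncation: floor only inside the scheme
        s += (r - 0.5 * v_plus) * dt + np.sqrt(v_plus * dt) * z1
        v += kappa * (theta - v_plus) * dt + sigma * np.sqrt(v_plus * dt) * z2
    return np.exp(s)

# Hypothetical parameters: price a European call by Monte Carlo (r = 0.02, T = 1).
ST = heston_full_truncation_paths(s0=100, v0=0.04, kappa=2.0, theta=0.04,
                                  sigma=0.5, rho=-0.7, r=0.02, T=1.0,
                                  n_steps=250, n_paths=100_000)
call = np.exp(-0.02 * 1.0) * np.maximum(ST - 100.0, 0.0).mean()
print(f"Full truncation Euler call price: {call:.3f}")
```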
By: | Roberto Patuelli (Vrije Universiteit Amsterdam); Daniel A. Griffith (University of Texas at Dallas); Michael Tiefelsdorf (University of Texas at Dallas); Peter Nijkamp (Vrije Universiteit Amsterdam) |
Abstract: | Socio-economic interrelationships among regions can be measured in terms of economic flows, migration, or physical geographically-based measures, such as distance or length of shared areal unit boundaries. In general, proximity and openness tend to favour a similar economic performance among adjacent regions. Therefore, proper forecasting of socio-economic variables, such as employment, requires an understanding of spatial (or spatio-temporal) autocorrelation effects associated with a particular geographic configuration of a system of regions. Several spatial econometric techniques have been developed in recent years to identify spatial interaction effects within a parametric framework. Alternatively, newly devised spatial filtering techniques aim to achieve this end as well through the use of a semi-parametric approach. Experiments presented in this paper deal with the analysis of, and accounting for, spatial autocorrelation by means of spatial filtering techniques for data pertaining to regional unemployment in Germany. The available data set comprises information about the share of unemployed workers in 439 German districts (the NUTS-III regional aggregation level). Results based upon an eigenvector spatial filter model formulation (that is, the use of orthogonal map pattern components), constructed for the 439 German districts, are presented, with an emphasis on their consistency over several years. Insights obtained by applying spatial filtering to the database are also discussed. |
Keywords: | spatial autocorrelation; spatial filtering; unemployment; Germany |
JEL: | C14 C21 C23 R23 |
Date: | 2006–05–22 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060049&r=ecm |
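A minimal sketch of eigenvector spatial filtering in the spirit of the preceding abstract: eigenvectors of the doubly centred spatial weight matrix (the orthogonal map pattern components) are extracted and a subset of them is added as regressors in an OLS model. The rook-contiguity grid, the synthetic outcome, and the fixed number of retained eigenvectors are assumptions made purely for illustration; the eigenvector selection used for the 439 German districts in the paper is more involved.

```python
import numpy as np

def spatial_filter_regression(W, X, y, n_eigs=5):
    """Eigenvector spatial filtering: extract eigenvectors of the doubly centred
    spatial weight matrix M W M and add a subset of them as extra OLS regressors."""
    n = W.shape[0]
    Ws = 0.5 * (W + W.T)                            # symmetrise the weight matrix
    M = np.eye(n) - np.ones((n, n)) / n
    eigval, eigvec = np.linalg.eigh(M @ Ws @ M)
    order = np.argsort(eigval)[::-1][:n_eigs]       # patterns with the strongest positive autocorrelation
    E = eigvec[:, order]
    Xf = np.column_stack([np.ones(n), X, E])
    beta, *_ = np.linalg.lstsq(Xf, y, rcond=None)
    return beta, E

# Hypothetical example: a small rook-contiguity grid and a synthetic outcome.
rng = np.random.default_rng(3)
side = 10
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if 0 <= i + di < side and 0 <= j + dj < side:
                W[k, (i + di) * side + (j + dj)] = 1.0
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)
beta, E = spatial_filter_regression(W, x[:, None], y, n_eigs=5)
print(beta[:2])
```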
By: | Monica Hernandez (University of Birmingham); Stephen Pudney (Institute for Social and Economic Research) |
Abstract: | We consider the problem of modelling welfare participation when measurement error may affect simulated welfare entitlement. We identify a flaw in past implementations of the ML approach and develop a more appropriate one. A model of welfare participation is estimated for British pensioners, linking the probability of participation to the value of benefit entitlement and incorporating the nonlinear rule relating entitlement to the household's income and financial assets. The model is used to evaluate the claim costs incurred by participants. When we allow for measurement errors in income and assets, estimated claim costs are substantially reduced. |
Keywords: | benefits, means-tested benefits, measurement error, take-up |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:ese:iserwp:2006-29&r=ecm |
By: | René van den Brink (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); Robert P. Gilles (Department of Economics, Virginia Tech, Virginia) |
Abstract: | A ranking method assigns to every weighted directed graph a (weak) ordering of the nodes. In this paper we axiomatize the ranking method that ranks the nodes according to their outflow using four independent axioms. This outflow ranking method generalizes the ranking by outdegree for directed graphs. Furthermore, we compare our axioms with other axioms discussed in the literature. |
Keywords: | Decision analysis; Weighted directed graph; Ranking method |
JEL: | C65 D71 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060044&r=ecm |
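The outflow ranking method itself is straightforward to state in code: nodes of a weighted directed graph are ordered by the sum of the weights on their outgoing arcs. The small graph below is a hypothetical example; the axiomatic characterization is, of course, the content of the paper.

```python
def outflow_ranking(weights):
    """Rank the nodes of a weighted directed graph by their total outflow,
    i.e. the sum of the weights on their outgoing arcs."""
    nodes = set(weights) | {j for succ in weights.values() for j in succ}
    outflow = {i: sum(weights.get(i, {}).values()) for i in nodes}
    return sorted(nodes, key=outflow.get, reverse=True), outflow

# Hypothetical weighted digraph: weights[i][j] is the weight of the arc i -> j.
graph = {"a": {"b": 2.0, "c": 1.0}, "b": {"c": 0.5}, "c": {}}
ranking, outflow = outflow_ranking(graph)
print(ranking, outflow)   # a (3.0) ranked before b (0.5), which is ranked before c (0.0)
```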