
on Econometrics 
By:  Ron Mittelhammer (Washington State University); George Judge (University of California, Berkeley and Giannini Foundation); Douglas Miller (Purdue University); N. Scott Cardell (Salford Systems, Inc., San Diego, CA) 
Abstract:  This paper introduces a new class of estimators based on minimization of the Cressie-Read (CR) power divergence measure for binary choice models, where neither a parameterized distribution nor a parameterization of the mean is specified explicitly in the statistical model. By incorporating sample information in the form of conditional moment conditions and estimating choice probabilities by optimizing a member of the set of divergence measures in the CR family, a new class of nonparametric estimators emerges that requires less a priori model structure than conventional parametric estimators such as probit or logit. Asymptotic properties are derived under general regularity conditions, and finite sample properties are illustrated by Monte Carlo sampling experiments. Except for some special cases in which the general regularity conditions do not hold, the estimators have asymptotic normal distributions, similar to conventional parametric estimators of the binary choice model. The sampling experiments focus on the mean square errors in the choice probability predictions and the probability derivatives with respect to the response variable values. The simulation results suggest that estimators within the CR class are more robust than conventional methods of estimation across varying probability distributions underlying the Bernoulli process. The size and power of test statistics based on the asymptotics of the CR-based estimators exhibit behavior similar to those based on conventional parametric methods. Overall, the new class of nonparametric estimators for the binary response model is a promising and potentially more robust alternative to the parametric methods often used in empirical practice. 
Keywords:  nonparametric binary response models and estimators, conditional moment equations, finite sample bias and precision, squared error loss, response variables, Cressie-Read statistic, information theoretic methods 
Date:  2005–08–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:agrebk:998&r=ecm 
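For reference, the Cressie-Read power divergence family the abstract refers to is commonly written (for gamma not in {0, -1}, with the remaining members defined by continuity) as:

```latex
I(\mathbf{p}, \mathbf{q}, \gamma)
  = \frac{1}{\gamma(\gamma+1)} \sum_{i=1}^{n}
    p_i \left[ \left( \frac{p_i}{q_i} \right)^{\gamma} - 1 \right]
```

The limit as gamma tends to 0 gives the Kullback-Leibler divergence, sum of p_i ln(p_i/q_i), and the limit as gamma tends to -1 gives its reverse, sum of q_i ln(q_i/p_i), which underlies empirical-likelihood-type estimators. This is the standard definition from the information-theoretic literature; the paper's specific parameterization may differ in scaling conventions.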
By:  Nadia Solaro (University of Milan-Bicocca, Milan, Italy); Pier Alda Ferrari (Department of Economics, Business and Statistics) 
Abstract:  In this paper we examine maximum likelihood estimation procedures in multilevel models for two-level nesting structures. Usually, for fixed effects and variance components estimation, level-one error terms and random effects are assumed to be normally distributed. Nevertheless, in some circumstances this assumption might not be realistic, especially as concerns the random effects. Thus we assume that the random effects follow the family of multivariate exponential power (MEP) distributions; subsequently, by means of Monte Carlo simulation procedures, we study the robustness of maximum likelihood estimators under the normality assumption when the random effects are actually MEP distributed. 
Keywords:  Hierarchical data, ML and REML estimation, Multilevel model, Multivariate exponential power distribution, 
Date:  2005–10–03 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1013&r=ecm 
By:  John Stachurski (Department of Economics, University of Melbourne) 
Abstract:  This paper studies a Monte Carlo algorithm for computing distributions of state variables when the underlying model is a Markov process. It is shown that the L1 error of the estimator always converges to zero with probability one, and often at a parametric rate. A related technique for computing stationary distributions is also investigated. 
Keywords:  Distributions, Markov processes, simulation. 
JEL:  C15 C22 C63 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:615&r=ecm 
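The kind of estimator studied above can be illustrated with a minimal sketch: simulate independent paths of a Markov process and estimate the distribution of the state by empirical frequencies. The AR(1) law of motion and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import bisect
import random

def simulate_path(x0, T, rho=0.9, sigma=0.1):
    # One draw of X_T for the AR(1) Markov process X_{t+1} = rho*X_t + sigma*W_t
    x = x0
    for _ in range(T):
        x = rho * x + sigma * random.gauss(0.0, 1.0)
    return x

def mc_cdf_estimate(x0, T, grid, n=5000):
    # Empirical CDF of X_T over the grid, from n independent simulated paths
    draws = sorted(simulate_path(x0, T) for _ in range(n))
    return [bisect.bisect_right(draws, g) / n for g in grid]
```

By the Glivenko-Cantelli argument sketched in such papers, the empirical distribution converges to the true distribution of X_T with probability one as n grows.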
By:  Erik Hjalmarsson 
Abstract:  This paper considers the estimation of average autoregressive roots near unity in panels where the time series have heterogeneous local-to-unity parameters. The pooled estimator is shown to have a potentially severe bias, and a robust median-based procedure is proposed instead. This median estimator has a small asymptotic bias that can be eliminated almost completely by a bias correction procedure. The asymptotic normality of the estimator is proved. The methods proposed in the paper provide a useful way of summarizing the persistence in a panel data set, as well as a complement to more traditional panel unit root tests. 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:852&r=ecm 
By:  Lesley Chiou (Department of Economics, Occidental College); Joan Walker (Center for Transportation Studies, Boston University) 
Abstract:  We present examples based on actual and synthetic datasets to illustrate how simulation methods can mask identification problems in the estimation of mixed logit models. Typically, simulation methods approximate an integral (without a closed form) by taking draws from the underlying distribution of the random variable of integration. Our examples reveal how a low number of draws can generate estimates that appear identified, but in fact, are either not theoretically identified by the model or not empirically identified by the data. We also show how intelligent drawing techniques require fewer draws than pseudo-random draws to uncover identification issues. 
Keywords:  simulation methods, discrete choice 
JEL:  C15 C25 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:occ:wpaper:5&r=ecm 
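The simulation step the abstract describes — approximating the mixing integral by averaging conditional logit probabilities over R draws — can be sketched as follows. The one-dimensional normal mixing distribution and all parameter values are hypothetical, chosen only to illustrate the role of the number of draws.

```python
import math
import random

def logit_prob(beta, x_alt, chosen):
    # Conditional logit probability of the chosen alternative given taste beta
    utils = [beta * x for x in x_alt]
    m = max(utils)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utils]
    return exps[chosen] / sum(exps)

def simulated_mixed_logit_prob(x_alt, chosen, mu, sigma, R=500):
    # Approximate the mixing integral by averaging over R draws beta ~ N(mu, sigma^2)
    total = 0.0
    for _ in range(R):
        beta_r = random.gauss(mu, sigma)
        total += logit_prob(beta_r, x_alt, chosen)
    return total / R
```

With a small R the simulated log-likelihood can look well-behaved even when the model is not identified, which is exactly the pitfall the paper documents.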
By:  Erik Hjalmarsson 
Abstract:  I develop new asymptotic results for long-horizon regressions with overlapping observations. I show that rather than using autocorrelation robust standard errors, the standard t-statistic can simply be divided by the square root of the forecasting horizon to correct for the effects of the overlap in the data. Further, when the regressors are persistent and endogenous, the long-run OLS estimator suffers from the same problems as does the short-run OLS estimator, and similar corrections and test procedures as those proposed for the short-run case should also be used in the long run. In addition, I show that under an alternative of predictability, long-horizon estimators have a slower rate of convergence than short-run estimators and their limiting distributions are nonstandard and fundamentally different from those under the null hypothesis. These asymptotic results are supported by simulation evidence and suggest that under standard econometric specifications, short-run inference is generally preferable to long-run inference. The theoretical results are illustrated with an application to long-run stock-return predictability. 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:853&r=ecm 
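The overlap correction described above — dividing the standard t-statistic by the square root of the horizon — can be sketched in a few lines. The regression setup (OLS with intercept, horizon q) and variable names are illustrative assumptions, not the paper's exact specification.

```python
import math

def ols_tstat(y, x):
    # Slope t-statistic for the OLS regression y = a + b*x + e
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return b / se

def long_horizon_tstat(returns, predictor, q):
    # Overlapping q-period return sums regressed on the predictor;
    # the naive t-stat is scaled by 1/sqrt(q) to correct for the overlap
    y = [sum(returns[t:t + q]) for t in range(len(returns) - q + 1)]
    x = predictor[:len(y)]
    return ols_tstat(y, x) / math.sqrt(q)
```

Under the null of no predictability (and the paper's conditions), the scaled statistic is approximately standard normal despite the overlapping observations.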
By:  Lokshin Boris (METEOR) 
Abstract:  This paper compares the performance of three recently proposed estimators for dynamic panel data models (bias-corrected LSDV, MLE and MDE) along with GMM. Using Monte Carlo simulations, we find that the MLE and bias-corrected estimators have the smallest bias and are good alternatives to GMM. System GMM outperforms the rest in ‘difficult’ designs. Unfortunately, the bias-corrected estimator is not reliable in these designs, which may limit its applicability. 
Keywords:  econometrics; 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2006012&r=ecm 
By:  Erik Hjalmarsson 
Abstract:  I show that the test procedure derived by Campbell and Yogo (2005, Journal of Financial Economics, forthcoming) for regressions with nearly integrated variables can be interpreted as the natural t-test resulting from a fully modified estimation with near-unit-root regressors. This clearly establishes the methods of Campbell and Yogo as an extension of previous unit-root results. 
Keywords:  Regression analysis 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:854&r=ecm 
By:  Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Giuseppe Porro (Department of Economics and Statistics, University of Trieste, Italy) 
Abstract:  In this paper we describe some applications of the Random Recursive Partitioning (RRP) method. This method generates a proximity matrix which can be used in nonparametric hot-deck missing data imputation, classification, prediction, average treatment effect estimation and, more generally, in matching problems. RRP is a Monte Carlo procedure that randomly generates non-empty recursive partitions of the data and evaluates the proximity between observations as the empirical frequency with which they fall in the same cell of these random partitions over all the replications. RRP also works in the presence of missing data and is invariant under monotonic transformations of the data. No other formal properties of the method are known yet, therefore Monte Carlo experiments are provided in order to explore the performance of the method. Companion software is available in the form of a package for the R statistical environment. 
Keywords:  recursive partitioning, average treatment effect estimation, classification, missing data imputation, 
Date:  2006–02–21 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1022&r=ecm 
By:  Peter Haan; Arne Uhlendorff 
Abstract:  In this paper we suggest a Stata routine for multinomial logit models with unobserved heterogeneity using maximum simulated likelihood based on Halton sequences. The purpose of this paper is twofold: First, we provide a description of the technical implementation of the estimation routine and discuss its properties. Further, we compare our estimation routine to the Stata program gllamm which solves integration using Gauss-Hermite quadrature or Bayesian adaptive quadrature. For the analysis we draw on multilevel data about schooling. Our empirical findings show that the estimation techniques lead to approximately the same estimation results. The advantage of simulation over Gauss-Hermite quadrature is a marked reduction in computational time for integrals with higher dimensions. Bayesian quadrature, however, leads to very stable results with only a few quadrature points, thus the computational advantage of Halton-based simulation vanishes in our example with one and two dimensional integrals. 
Keywords:  multinomial logit model, panel data, unobserved heterogeneity, maximum simulated likelihood, Halton sequences 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp573&r=ecm 
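The Halton draws mentioned above can be generated with a short radical-inverse routine. This is the generic textbook construction, not the authors' Stata implementation.

```python
def halton(n, base):
    # n-th element (1-indexed) of the van der Corput sequence in the given base:
    # reflect the base-b digits of n around the radix point
    f, r = 1.0, 0.0
    i = n
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_sequence(n_draws, bases=(2, 3)):
    # First n_draws points of a multidimensional Halton sequence,
    # one coprime base per dimension
    return [tuple(halton(i, b) for b in bases) for i in range(1, n_draws + 1)]
```

Because consecutive points fill the unit interval far more evenly than pseudo-random draws, far fewer draws are typically needed for a given simulation accuracy.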
By:  Cuong Le Van (Cermsem, Universite Paris 1 Pantheon-Sorbonne); John Stachurski (Institute of Economic Research, Kyoto University) 
Abstract:  For Markovian economic models, longrun equilibria are typically identified with the stationary (invariant) distributions generated by the model. In this paper we provide new sufficient conditions for continuity in the map from parameters to these equilibria. Several existing results are shown to be special cases of our theorem. 
Keywords:  Markov processes, parametric continuity. 
JEL:  C61 C62 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:616&r=ecm 
By:  Kreider, Brent 
Abstract:  This paper derives easy-to-compute identification regions for the population's true rate of health insurance coverage in the presence of household reporting errors. These regions reduce the degree of uncertainty about the unknown parameter compared with Horowitz and Manski's (1995) nonparametric contaminated sampling bounds. 
Keywords:  health insurance, contaminated sampling, nonparametric bounds, classification error 
JEL:  C1 I1 
Date:  2006–04–15 
URL:  http://d.repec.org/n?u=RePEc:isu:genres:12588&r=ecm 
By:  Palm Franz C.; Smeekes Stephan; Urbain Jean-Pierre (METEOR) 
Abstract:  In this paper we study and compare the properties of several bootstrap unit root tests recently proposed in the literature. The tests are Dickey-Fuller (DF) or augmented DF tests, either based on residuals from an autoregression and the use of the block bootstrap (Paparoditis & Politis, 2003) or on first-differenced data and the use of the stationary bootstrap (Swensen, 2003a) or sieve bootstrap (Psaradakis, 2001; Chang & Park, 2003). We extend the analysis by interchanging the data transformations (differences versus residuals), the types of bootstrap and the presence or absence of a correction for autocorrelation in the tests. We prove that two sieve bootstrap tests based on residuals remain asymptotically valid, thereby completing the proofs of validity for all the types of DF bootstrap tests. In contrast to the literature, which basically focuses on a comparison of the bootstrap tests with an asymptotic test, we compare the bootstrap tests among themselves using response surfaces for their size and power in a simulation study. We also investigate how the tests behave when accounting for a deterministic trend, even in the absence of such a trend in the data. This study leads to the following conclusions: (i) augmented DF tests are always preferred to standard DF tests; (ii) the sieve bootstrap performs slightly better than the block bootstrap; (iii) difference-based and residual-based tests behave similarly in terms of size, although the latter appear more powerful. The results for the response surfaces allow us to make statements about the behaviour of the bootstrap tests as sample size increases. 
Keywords:  Economics ; 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2006014&r=ecm 
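A minimal sieve-bootstrap DF test in the spirit of the difference-based tests discussed above can be sketched as follows. This is a deliberately simplified sketch (AR(1) sieve, no deterministic terms, no-constant DF regression), not any of the authors' exact procedures.

```python
import random

def df_stat(y):
    # Dickey-Fuller t-statistic from the no-constant regression dy_t = rho*y_{t-1} + e_t
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(x * x for x in ylag)
    rho = sum(x * d for x, d in zip(ylag, dy)) / sxx
    rss = sum((d - rho * x) ** 2 for x, d in zip(ylag, dy))
    se = (rss / (len(dy) - 1) / sxx) ** 0.5
    return rho / se

def sieve_bootstrap_pvalue(y, B=199):
    # Sieve bootstrap for the DF test: fit an AR(1) sieve to the differenced
    # data, resample centred residuals, rebuild I(1) pseudo-series under the
    # unit-root null, and compare DF statistics
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    phi = sum(dy[t] * dy[t - 1] for t in range(1, len(dy))) / sum(d * d for d in dy[:-1])
    res = [dy[t] - phi * dy[t - 1] for t in range(1, len(dy))]
    rbar = sum(res) / len(res)
    res = [r - rbar for r in res]  # centre the residuals before resampling
    stat = df_stat(y)
    count = 0
    for _ in range(B):
        e = [random.choice(res) for _ in range(len(y))]
        dstar = [0.0]
        for t in range(1, len(y)):
            dstar.append(phi * dstar[-1] + e[t])
        ystar = [0.0]
        for d in dstar[1:]:
            ystar.append(ystar[-1] + d)  # cumulate: impose the unit-root null
        if df_stat(ystar) <= stat:
            count += 1
    return (count + 1) / (B + 1)
```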
By:  Stewart,Trevor; Strijbosch,Leo; Moors,Hans; Batenburg,Paul van (Tilburg University, Center for Economic Research) 
Abstract:  The exact expression for the convolution of gamma distributions with different scale parameters is quite complicated. The approximation by means of another gamma distribution is shown to be remarkably accurate for wide ranges of the parameter values, in particular if more than two variables are involved. 
Keywords:  approximating distributions;convolution;gamma distribution 
JEL:  C16 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200627&r=ecm 
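One simple way to approximate the convolution by another gamma distribution is to match the first two moments of the sum. Whether this is the approximation studied in the paper is not stated in the abstract, so treat this as an illustrative variant.

```python
import random

def gamma_sum_approx(shapes, scales):
    # Sum of independent Gamma(k_i, theta_i) variables has
    # mean = sum k_i*theta_i and variance = sum k_i*theta_i^2.
    # Match both with a single Gamma(k, theta): k = mean^2/var, theta = var/mean.
    mean = sum(k * s for k, s in zip(shapes, scales))
    var = sum(k * s * s for k, s in zip(shapes, scales))
    return mean * mean / var, var / mean  # (shape, scale)

def simulate_sum(shapes, scales):
    # One Monte Carlo draw of the exact convolution, for checking the approximation
    return sum(random.gammavariate(k, s) for k, s in zip(shapes, scales))
```

With equal scale parameters the approximation is exact; the interest is in how well it holds when the scales differ.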
By:  Richard K. Crump (University of California at Berkeley); V. Joseph Hotz (University of California at Los Angeles); Guido W. Imbens (University of California at Berkeley); Oscar A. Mitnik (University of Miami and IZA Bonn) 
Abstract:  A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases, however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses, we derive tests that are straightforward to implement. 
Keywords:  average treatment effects, causality, unconfoundedness, treatment effect heterogeneity 
JEL:  C14 C21 C52 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2091&r=ecm 
By:  Bartholdy, Jan (Department of Business Studies); Olson, Dennis (American University of Sharjah); Peare, Paula (Department of Business Studies) 
Abstract:  This paper analyses whether it is possible to perform an event study on a small stock exchange with thinly traded stocks. The main conclusion is that event studies can be performed provided that certain adjustments are made. First, a minimum of 25 events appears necessary to obtain acceptable size and power in statistical tests. Second, trade-to-trade returns should be used. Third, one should not expect to consistently detect abnormal performance of less than about 1% (or perhaps even 2%), unless the sample contains primarily thickly traded stocks. Fourth, nonparametric tests are generally preferable to parametric tests of abnormal performance. Fifth, researchers should present separate results for thickly and thinly traded stock groups. Finally, when non-normality, event-induced variance, unknown event days, and problems of very thin trading are all considered simultaneously, no one test statistic or type of test statistic dominates the others. 
Keywords:  Event studies; Thin trading 
Date:  2006–04–25 
URL:  http://d.repec.org/n?u=RePEc:hhb:aaracc:07003&r=ecm 
By:  M. Hashem Pesaran; Ron P. Smith; Takashi Yamagata; Liudmyla Hvozdyk 
Abstract:  In this paper we adopt a new approach to testing for purchasing power parity, PPP, that is robust to base country effects, cross-section dependence, and aggregation. We test for PPP applying a pairwise approach to the disaggregated data set recently analysed by Imbs, Mumtaz, Ravn and Rey (2005, QJE). We consider a variety of tests applied to all 66 possible pairs of real exchange rates among the 12 countries and estimate the proportion of the pairs that are stationary, for the aggregates and each of the 19 commodity groups. To deal with small sample problems, we use a factor augmented sieve bootstrap approach and present bootstrap pairwise estimates of the proportions that are stationary. The bootstrapped rejection frequencies of 26%-49% based on unit root tests suggest some evidence in favour of PPP in the case of the disaggregate data, as compared to 6%-14% based on aggregate price series. 
Keywords:  Purchasing Power Parity, Panel Data, Pairwise Approach, Cross Section Dependence. 
JEL:  C23 F31 F41 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0634&r=ecm 
By:  John Stachurski (Department of Economics, University of Melbourne) 
Abstract:  This note considers finite state Markov chains satisfying an overlapping supports condition. While the overlapping supports condition is known to be necessary and sufficient for stability of these chains, the result is typically presented in a more general context. As such, one objective of the note is to provide an exposition, along with simple proofs corresponding to the finite case. Second, the note provides an additional equivalent condition which should be useful in applications. 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:614&r=ecm 
By:  Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Giuseppe Porro (Department of Economics and Statistics, University of Trieste, Italy) 
Abstract:  In this paper we introduce the Random Recursive Partitioning (RRP) method. This method generates a proximity matrix which can be used in applications like average treatment effect estimation in observational studies. RRP is a Monte Carlo method that randomly generates non-empty recursive partitions of the data and evaluates the proximity between two observations as the empirical frequency with which they fall in the same cell of these random partitions over all the replications. From the proximity matrix it is possible to derive both graphical and analytical tools to evaluate the extent of the common support between two datasets. The RRP method is ``honest'' in that it does not match observations ``at any cost'': if two datasets are separated, the method clearly states it. This method is invariant under affine transformations of the data and hence is an equal percent bias reduction (EPBR) method when data come from ellipsoidal and symmetric distributions. Average treatment effect estimators derived from the proximity matrix seem to be competitive compared to more commonly used methods (like, e.g., Mahalanobis full match with calipers within propensity scores) even outside the hypotheses leading to EPBR. The RRP method does not require a particular structure of the data, and for this reason it can be applied when distances like Mahalanobis or Euclidean are not suitable. As a method working on the original data (i.e. on a multidimensional space instead of a one-dimensional measure), RRP is affected by the curse of dimensionality when the number of continuous covariates is too high. Asymptotic properties, as well as the behaviour of the RRP method under different data distributions, are explored using Monte Carlo methods. 
Keywords:  average treatment effect, recursive partitioning, matching estimators, observational studies, 
Date:  2006–02–06 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1018&r=ecm 
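The core RRP loop — draw random recursive partitions and define proximity as the frequency with which two observations land in the same cell — can be sketched as follows. The particular splitting rule (uniform random cut on a random coordinate) and stopping threshold are illustrative assumptions, not the authors' specification.

```python
import random

def random_partition(data, min_cell=2):
    # One random recursive partition: split indices at a random cut
    # on a random coordinate until cells reach the minimum size
    def split(idx):
        if len(idx) <= min_cell:
            return [idx]
        j = random.randrange(len(data[0]))
        vals = sorted(set(data[i][j] for i in idx))
        if len(vals) < 2:
            return [idx]
        cut = random.uniform(vals[0], vals[-1])
        left = [i for i in idx if data[i][j] <= cut]
        right = [i for i in idx if data[i][j] > cut]
        if not left or not right:
            return [idx]
        return split(left) + split(right)
    return split(list(range(len(data))))

def rrp_proximity(data, replications=200):
    # Proximity(i, j) = fraction of random partitions in which
    # observations i and j fall in the same cell
    n = len(data)
    counts = [[0] * n for _ in range(n)]
    for _ in range(replications):
        for cell in random_partition(data):
            for a in cell:
                for b in cell:
                    counts[a][b] += 1
    return [[c / replications for c in row] for row in counts]
```

Observations from well-separated clusters end up with near-zero proximity, which is what lets the method flag a lack of common support rather than matching at any cost.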
By:  Maximilian Auffhammer (University of California, Berkeley) 
Abstract:  The United States Energy Information Administration publishes annual forecasts of nationally aggregated energy consumption, production, prices, intensity and GDP. These government-issued forecasts often serve as reference cases in the calibration of the simulation and econometric models on which climate and energy policy are based. This study tests for rationality of published EIA forecasts under symmetric and asymmetric loss. We find strong empirical evidence of asymmetric loss for oil, coal and gas prices as well as natural gas consumption, GDP and energy intensity. 
Keywords:  Forecasting, Asymmetric Loss, Energy Intensity, Energy Information Administration, 
Date:  2005–12–16 
URL:  http://d.repec.org/n?u=RePEc:cdl:agrebk:1009&r=ecm 
By:  Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Davide La Torre (Department of Economics, Business and Statistics, University of Milan, IT) 
Abstract:  Several methods are currently available to simulate paths of the Brownian motion. In particular, paths of the BM can be simulated using the properties of the increments of the process, as in the Euler scheme, or as the limit of a random walk, or via L^2 decomposition like the Kac-Siegert/Karhunen-Loève series. In this paper we first propose an IFSM (Iterated Function Systems with Maps) operator whose fixed point is the trajectory of the BM. We then use this representation of the process to simulate its trajectories. The resulting simulated trajectories are self-affine, continuous and fractal by construction. This fact produces more realistic trajectories than other schemes, in the sense that their geometry is closer to that of the true BM's trajectories. The IFSM trajectory of the BM can then be used to generate more realistic solutions of stochastic differential equations. 
Keywords:  iterated function systems, Brownian motion, simulation of stochastic differential equations, 
Date:  2006–01–13 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1016&r=ecm 
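For comparison, the standard increment-based (Euler/random-walk) simulation of BM mentioned in the abstract looks like the sketch below; the IFSM construction itself is the paper's contribution and is not reproduced here.

```python
import math
import random

def brownian_path(T=1.0, n=1000):
    # Euler-type scheme using independent Gaussian increments:
    # B_{t+dt} = B_t + sqrt(dt) * N(0, 1), starting from B_0 = 0
    dt = T / n
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + math.sqrt(dt) * random.gauss(0.0, 1.0))
    return path
```

Such paths are only piecewise linear between grid points, which is the geometric shortcoming the IFSM representation is meant to address.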
By:  Charpentier,Arthur; Segers,Johan (Tilburg University, Center for Economic Research) 
Abstract:  Convergence of a sequence of bivariate Archimedean copulas to another Archimedean copula or to the comonotone copula is shown to be equivalent to convergence of the corresponding sequence of Kendall distribution functions. No extra differentiability conditions on the generators are needed. 
Keywords:  Archimedean copula;generator;Kendall distribution function 
JEL:  C14 C16 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200628&r=ecm 
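As background (standard definitions, not specific to this paper): a bivariate Archimedean copula with generator phi and its Kendall distribution function are given by

```latex
C(u, v) = \varphi^{[-1]}\bigl(\varphi(u) + \varphi(v)\bigr),
\qquad
K_{\varphi}(t) = t - \frac{\varphi(t)}{\varphi'(t^{+})}, \quad t \in (0, 1)
```

where phi is convex, decreasing, with phi(1) = 0, and phi'(t+) denotes the right-hand derivative. K_phi is the distribution function of the random variable C(U, V); the abstract's result is that copula convergence and convergence of these K functions are equivalent, with no extra smoothness assumed on the generators.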
By:  Helena Veiga 
Abstract:  This paper compares empirically the forecasting performance of a continuous time stochastic volatility model with two volatility factors (SV2F) to a set of alternative models (GARCH, FIGARCH, HYGARCH, FIEGARCH and Component GARCH). We use two loss functions and two out-of-sample periods in the forecasting evaluation. The two out-of-sample periods are characterized by different patterns of volatility: the volatility is rather low and constant over the first period but shows a significant increase over the second out-of-sample period. The empirical results indicate that the performance of the alternative models depends on the characteristics of the out-of-sample periods and on the forecasting horizons. By contrast, the forecasting performance of the SV2F model seems to be unaffected by these two factors, since the model provides the most accurate volatility forecasts according to the loss functions we consider. 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws062509&r=ecm 
By:  Erik Hjalmarsson 
Abstract:  Using Monte Carlo simulations, I show that typical out-of-sample forecast exercises for stock returns are unlikely to produce any evidence of predictability, even when there is in fact predictability and the correct model is estimated. 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:855&r=ecm 
By:  Eric Ghysels; Jonathan H. Wright 
Abstract:  Surveys of forecasters, containing respondents' predictions of future values of growth, inflation and other key macroeconomic variables, receive a lot of attention in the financial press, from investors, and from policy makers. They are apparently widely perceived to provide useful information about agents' expectations. Nonetheless, these survey forecasts suffer from the crucial disadvantage that they are often quite stale, as they are released only infrequently, such as on a quarterly basis. In this paper, we propose methods for using asset price data to construct daily forecasts of upcoming survey releases, which we can then evaluate. Our methods allow us to estimate what professional forecasters would predict if they were asked to make a forecast each day, making it possible to measure the effects of events and news announcements on expectations. We apply these methods to forecasts for several macroeconomic variables from both the Survey of Professional Forecasters and Consensus Forecasts. 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:200610&r=ecm 