
on Econometrics 
By:  Isabel Molina; J.N.K. Rao 
Abstract:  We propose to estimate nonlinear small area population quantities by using Empirical Best (EB) estimators based on a nested error model. EB estimators are obtained by Monte Carlo approximation. We focus on poverty indicators as particular nonlinear quantities of interest, but the proposed methodology is applicable to general nonlinear quantities. Small sample properties of EB estimators are analyzed by model-based and design-based simulation studies. Results show large reductions in mean squared error relative to direct estimators and estimators obtained by simulated censuses. An application is also given to estimate poverty incidences and poverty gaps in Spanish provinces by sex with mean squared errors estimated by parametric bootstrap. In the Spanish data, results show a significant reduction in coefficient of variation of the proposed EB estimators over direct estimators for most domains. 
Keywords:  Empirical best estimator, Parametric bootstrap, Poverty mapping, Small area estimation 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws091505&r=ecm 
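The Monte Carlo EB step in the abstract above can be sketched for a single small area under the nested error model y_ij = x_ij'beta + u_i + e_ij. This is an illustrative simplification, not the authors' code: the fitted parameters beta, sigma_u, sigma_e are taken as given, the poverty line z and the function name are placeholders, and the area effect is drawn from its conditional distribution given the sampled data, which is what distinguishes the EB estimator from a naive simulated census.

```python
import numpy as np

rng = np.random.default_rng(0)

def eb_poverty_incidence(x_s, y_s, x_r, beta, sigma_u, sigma_e, z, L=200):
    """Monte Carlo EB estimate of a small-area poverty incidence F0 = P(y < z)
    under a nested error model y_ij = x_ij'beta + u_i + e_ij.
    x_s, y_s: sampled units in the area; x_r: covariates of non-sampled units.
    Illustrative sketch: model parameters are treated as known/estimated."""
    n = len(y_s)
    gamma = sigma_u**2 / (sigma_u**2 + sigma_e**2 / n)
    # conditional distribution of the area effect u_i given the sample
    mu_u = gamma * np.mean(y_s - x_s @ beta)
    sd_u = sigma_u * np.sqrt(1.0 - gamma)
    draws = []
    for _ in range(L):
        u = rng.normal(mu_u, sd_u)
        y_r = x_r @ beta + u + rng.normal(0.0, sigma_e, size=len(x_r))
        y_census = np.concatenate([y_s, y_r])   # sampled values are held fixed
        draws.append(np.mean(y_census < z))     # census poverty incidence
    return float(np.mean(draws))
```

Averaging the census-level indicator over the L conditional draws approximates the conditional expectation that defines the EB estimator.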
By:  Marmer, Vadim 
Abstract:  This paper presents tests for the null hypothesis of no regime switching in Hamilton's (1989) regime switching model. The test procedures exploit similarities between regime switching models, autoregressions with measurement errors, and finite mixture models. The proposed tests are computationally simple and, contrary to likelihood-based tests, have a standard distribution under the null. When the methodology is applied to US GDP growth rates, no strong evidence of regime switching is found. 
Keywords:  regime switching, LM tests, GMM, matching methods, GDP growth rates 
Date:  2009–11–02 
URL:  http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer200959&r=ecm 
By:  Valentina Corradi; Norman R. Swanson 
Abstract:  This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, the authors first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. Then, they construct accuracy assessment tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of their tests, the authors also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rates are compared. 
Keywords:  Econometric models - Evaluation; Stochastic analysis 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedpwp:0929&r=ecm 
By:  David Powell 
Abstract:  Quantile treatment effects are difficult to estimate in the presence of fixed effects. Panel data are used when fixed effects or differences are necessary to identify the parameters of interest. The inclusion of fixed effects or differencing the data, however, redefines the quantiles. This paper introduces a quantile estimator for panel data which conditions on the fixed effect for identification purposes but allows the parameters to be interpreted in the same manner as cross-sectional quantile estimates. The quantiles are unconditional in the fixed effect and are defined by the "total residual," including the fixed effect. 
Keywords:  Quantile regression, panel data, fixed effects, instrumental variables 
JEL:  C13 C31 C33 C51 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ran:wpaper:710&r=ecm 
By:  Vo Phuong Mai Le; Patrick Minford; Michael Wickens 
Abstract:  We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments with data moments. We compare these with the method of Indirect Inference to which they are closely related. We illustrate the comparison with contrasting assessments of a two-country model in two recent papers. We conclude that Indirect Inference is the proper end point of the puzzles methodology. 
Keywords:  Bootstrap, US-EU Model, DSGE, VAR, Indirect Inference, Wald Statistic, Anomaly, Puzzle. 
JEL:  C12 C32 C52 E1 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:san:cdmacp:0903&r=ecm 
By:  Miller, Douglas (University of California, Davis); Cameron, A. Colin (University of California, Davis); Gelbach, Jonah (University of Arizona) 
Abstract:  In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to studies in the empirical literature where two-way clustering is present. 
JEL:  C12 C21 C23 
Date:  2009–05 
URL:  http://d.repec.org/n?u=RePEc:ecl:ucdeco:099&r=ecm 
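The two-way variance estimator described above combines three one-way sandwich estimators: cluster on the first dimension, cluster on the second, and subtract the estimator that clusters on the intersection of the two. A minimal numpy sketch for OLS (function names are ours; no small-sample degrees-of-freedom corrections are applied):

```python
import numpy as np

def cluster_meat(X, u, ids):
    """Sum over clusters of (X_g' u_g)(X_g' u_g)' for one clustering dimension."""
    k = X.shape[1]
    B = np.zeros((k, k))
    for g in np.unique(ids):
        m = ids == g
        s = X[m].T @ u[m]
        B += np.outer(s, s)
    return B

def twoway_cluster_vcov(X, y, id1, id2):
    """Two-way cluster-robust OLS variance: V = V1 + V2 - V12, where V12
    clusters on the intersection of the two dimensions (sketch of the
    Cameron-Gelbach-Miller construction)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    # integer label for each (id1, id2) intersection cell
    inter = np.unique(np.column_stack([id1, id2]), axis=0, return_inverse=True)[1]
    meat = (cluster_meat(X, u, id1) + cluster_meat(X, u, id2)
            - cluster_meat(X, u, inter))
    return bread @ meat @ bread
```

Subtracting the intersection term avoids double-counting observations that share both cluster identifiers.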
By:  Marmer, Vadim 
Abstract:  Implications of nonlinearity, nonstationarity and misspecification are considered from a forecasting perspective. Our model allows for small departures from the martingale difference sequence hypothesis by including a nonlinear component, formulated as a general, integrable transformation of the I(1) predictor. We assume that the true generating mechanism is unknown to the econometrician, who is therefore forced to use some approximating functions. It is shown that in this framework linear regression techniques lead to spurious forecasts. Improvements in forecast accuracy are possible with properly chosen nonlinear transformations of the predictor. The paper derives the limiting distribution of the forecasts' MSE. In the case of square integrable approximants, it depends on the L2-distance between the nonlinear component and the approximating function. Optimal forecasts are available for a given class of approximants. 
Keywords:  Forecasting; integrated time series; misspecified models; nonlinear transformations; stock returns 
Date:  2009–11–03 
URL:  http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer200960&r=ecm 
By:  Bijwaard, Govert (NIDI  Netherlands Interdisciplinary Demographic Institute); Ridder, Geert (University of Southern California) 
Abstract:  Ridder and Woutersen (2003) have shown that under a weak condition on the baseline hazard there exist root-N consistent estimators of the parameters in a semiparametric Mixed Proportional Hazard model with a parametric baseline hazard and unspecified distribution of the unobserved heterogeneity. We extend the Linear Rank Estimator (LRE) of Tsiatis (1990) and Robins and Tsiatis (1991) to this class of models. The optimal LRE is a two-step estimator. We propose a simple first-step estimator that is close to optimal if there is no unobserved heterogeneity. The efficiency gain associated with the optimal LRE increases with the degree of unobserved heterogeneity. 
Keywords:  mixed proportional hazard, linear rank estimation, counting process 
JEL:  C41 C14 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp4543&r=ecm 
By:  Gary Koop (Department of Economics, University of Strathclyde and RCEA); Dimitris Korobilis (Department of Economics, University of Strathclyde and RCEA) 
Abstract:  There is a large literature on forecasting inflation using the generalized Phillips curve (i.e. using forecasting models where inflation depends on past inflation, the unemployment rate and other predictors). The present paper extends this literature through the use of econometric methods which incorporate dynamic model averaging. These not only allow for coefficients to change over time (i.e. the marginal effect of a predictor for inflation can change), but also allows for the entire forecasting model to change over time (i.e. different sets of predictors can be relevant at different points in time). In an empirical exercise involving quarterly US inflation, we find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark approaches (e.g. random walk or recursive OLS forecasts) and more sophisticated approaches such as those using time varying coefficient models. 
Keywords:  Option Pricing; Modular Neural Networks; Nonparametric Methods 
JEL:  E31 E37 C11 C53 
Date:  2009–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:34_09&r=ecm 
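The model-averaging step can be illustrated with a Raftery-style forgetting-factor update of the model probabilities, in which each candidate model's weight decays toward uniformity and is then updated by its one-step predictive density. This is a sketch of the weighting recursion only, assuming the per-model predictive densities have already been computed; it omits the time-varying-parameter filtering inside each model.

```python
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """Dynamic model averaging probabilities with forgetting factor alpha:
    pi_{t|t-1,k} is proportional to pi_{t-1|t-1,k}**alpha, then updated by
    model k's one-step predictive density pred_dens[t, k].
    pred_dens: array of shape (T, K) with positive density evaluations.
    Returns the (T, K) array of weights used to forecast at each t."""
    T, K = pred_dens.shape
    pi = np.full(K, 1.0 / K)          # start from equal model probabilities
    out = np.empty((T, K))
    for t in range(T):
        pred = pi ** alpha            # forgetting: shrink toward uniform
        pred /= pred.sum()
        out[t] = pred                 # weights available before seeing y_t
        pi = pred * pred_dens[t]      # Bayesian update with predictive density
        pi /= pi.sum()
    return out
```

With alpha close to one the weights evolve slowly; smaller alpha lets the relevant predictor set change faster, which is the feature the abstract emphasizes.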
By:  Andersson, Jonas (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Møen, Jarle (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration) 
Abstract:  Two measures of an error-ridden explanatory variable make it possible to solve the classical errors-in-variables problem by using one measure as an instrument for the other. It is well known that a second IV estimate can be obtained by reversing the roles of the two measures. We explore a simple estimator that is the linear combination of these two estimates that minimizes the asymptotic mean squared error. In a Monte Carlo study we show that the gain in precision is significant compared to using only one of the original IV estimates. The proposed estimator also compares well with full information maximum likelihood under normality. 
Keywords:  Measurement errors; Classical Errors-in-Variables; multiple indicator method; Instrumental variable techniques 
JEL:  C13 C30 C80 
Date:  2009–09–15 
URL:  http://d.repec.org/n?u=RePEc:hhs:nhhfms:2009_010&r=ecm 
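The construction described above can be sketched as follows: each error-ridden measure instruments the other, giving two IV slopes, and the combination weight is chosen to minimize the variance of the weighted average. In this illustrative sketch the (co)variances of the two estimates are obtained by a nonparametric bootstrap rather than the paper's asymptotic formulas, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def iv_pair(y, x1, x2):
    """Two IV slope estimates from two noisy measures of the same regressor:
    each measure serves as the instrument for the other."""
    b1 = np.cov(x2, y)[0, 1] / np.cov(x2, x1)[0, 1]   # regress on x1, instrument x2
    b2 = np.cov(x1, y)[0, 1] / np.cov(x1, x2)[0, 1]   # regress on x2, instrument x1
    return b1, b2

def combined_iv(y, x1, x2, B=500):
    """Linear combination w*b1 + (1-w)*b2 with the variance-minimizing weight,
    the joint variance of (b1, b2) estimated by bootstrap resampling."""
    n = len(y)
    b1, b2 = iv_pair(y, x1, x2)
    boot = np.array([iv_pair(y[idx], x1[idx], x2[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(B))])
    V = np.cov(boot.T)
    w = (V[1, 1] - V[0, 1]) / (V[0, 0] + V[1, 1] - 2 * V[0, 1])
    return w * b1 + (1 - w) * b2
```

Since both estimates are consistent for the same slope, the combination inherits consistency while reducing variance relative to either one alone.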
By:  Sébastien Laurent; Jeroen V.K. Rombouts; Francesco Violante 
Abstract:  A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. However, little is known about the ranking of multivariate volatility models in terms of their forecasting ability. The ranking of multivariate volatility models is inherently problematic because it requires the use of a proxy for the unobservable volatility matrix, and this substitution may severely affect the ranking. We address this issue by investigating the properties of the ranking with respect to alternative statistical loss functions used to evaluate model performances. We provide conditions on the functional form of the loss function that ensure the proxy-based ranking to be consistent for the true one, i.e. the ranking that would be obtained if the true variance matrix were observable. We identify a large set of loss functions that yield a consistent ranking. In a simulation study, we sample data from a continuous time multivariate diffusion process and compare the ordering delivered by both consistent and inconsistent loss functions. We further discuss the sensitivity of the ranking to the quality of the proxy and the degree of similarity between models. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH specifications, is provided. 
Keywords:  Volatility, multivariate GARCH, Matrix norm, Loss function, Model confidence set 
JEL:  C10 C32 C51 C52 C53 G10 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:0948&r=ecm 
By:  Xin Jin; John M Maheu 
Abstract:  This paper proposes a new dynamic model of realized covariance (RCOV) matrices based on recent work in time-varying Wishart distributions. The specifications can be linked to returns for a joint multivariate model of returns and covariance dynamics that is both easy to estimate and forecast. Realized covariance matrices are constructed for 5 stocks using high-frequency intraday prices based on positive semidefinite realized kernel estimates. We extend the model to capture the strong persistence properties in RCOV. Out-of-sample performance based on statistical and economic metrics shows the importance of this extension. We discuss which features of the model are necessary to provide improvements over a traditional multivariate GARCH model that only uses daily returns. 
Keywords:  eigenvalues, dynamic conditional correlation, predictive likelihoods, MCMC 
JEL:  C11 C32 C53 
Date:  2009–11–10 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa382&r=ecm 
By:  Öller, LE; Stockhammar, P 
Abstract:  Normality is often mechanically and without sufficient reason assumed in econometric models. In this paper three important and significantly heteroscedastic GDP series are studied. Heteroscedasticity is removed, and the distributions of the filtered series are then compared to the Normal, the Normal-Mixture and the Normal-Asymmetric Laplace (NAL) distributions. The NAL represents a reduced and empirical form of the Aghion and Howitt (1992) model for economic growth, based on Schumpeter's idea of creative destruction. Statistical properties of the NAL distributions are provided, and it is shown that the NAL competes well with the alternatives. 
Keywords:  The Aghion-Howitt model; asymmetric innovations; mixed normal asymmetric Laplace distribution; Kernel density estimation; Method of Moments estimation. 
JEL:  C16 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:18581&r=ecm 
By:  Markus Jochmann (Department of Economics, University of Strathclyde) 
Abstract:  This paper develops stochastic search variable selection (SSVS) for zero-inflated count models, which are commonly used in health economics. This allows for either model averaging or model selection in situations with many potential regressors. The proposed techniques are applied to a data set from Germany considering the demand for health care. A package for the free statistical software environment R is provided. 
Keywords:  Bayesian, model selection, model averaging, count data, zero-inflation, demand for health care 
JEL:  C11 C25 I11 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:str:wpaper:0923&r=ecm 
By:  Öller, LE; Stockhammar, P 
Abstract:  The distribution of differences in logarithms of the Dow Jones share index is compared to the normal (N), normal mixture (NM) and a weighted sum of a normal and an Asymmetric Laplace distribution (NAL). It is found that the NAL fits best. We came to this result by studying samples with high, medium and low volatility, thus circumventing strong heteroscedasticity in the entire series. The NAL distribution also fitted economic growth, thus revealing a new analogy between financial data and real growth. 
Keywords:  Density forecasting; heteroscedasticity; mixed Normal Asymmetric Laplace distribution; Method of Moments estimation; connection with economic growth. 
JEL:  C20 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:18582&r=ecm 
By:  Andres Fernandez; Norman R. Swanson 
Abstract:  In this paper, the authors empirically assess the extent to which early release inefficiency and definitional change affect prediction precision. In particular, they carry out a series of ex-ante prediction experiments in order to examine: the marginal predictive content of the revision process, the tradeoffs associated with predicting different releases of a variable, the importance of particular forms of definitional change, which the authors call "definitional breaks," and the rationality of early releases of economic variables. An important feature of their rationality tests is that they are based solely on the examination of ex-ante predictions, rather than being based on in-sample regression analysis, as are many tests in the extant literature. Their findings point to the importance of making real-time datasets available to forecasters, as the revision process has marginal predictive content, and because predictive accuracy increases when multiple releases of data are used when specifying and estimating prediction models. The authors also present new evidence that early releases of money are rational, whereas prices and output are irrational. Moreover, they find that regardless of which release of the price variable one specifies as the "target" variable to be predicted, using only "first release" data in model estimation and prediction construction yields mean square forecast error (MSFE) "best" predictions. On the other hand, models estimated and implemented using "latest available release" data are MSFE-best for predicting all releases of money. The authors argue that these contradictory findings are due to the relevance of definitional breaks in the data generating processes of the variables that they examine. 
In an empirical analysis, they examine the real-time predictive content of money for income, and they find that vector autoregressions with money do not perform significantly worse than autoregressions when predicting output during the last 20 years. 
Keywords:  Economic forecasting ; Econometrics 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedpwp:0928&r=ecm 
By:  Domenico Giannone (ECARES, Université Libre de Bruxelles and CEPR); Lucrezia Reichlin (London Business School and CEPR); Saverio Simonelli (Università di Napoli Federico II, EUI and CSEF) 
Abstract:  This paper assesses the role of surveys for the early estimates of GDP in the euro area in a model-based automated procedure which exploits the timeliness of their release. The analysis is conducted using both a historical evaluation and a real-time case study on the current conjuncture. 
Keywords:  Forecasting; factor model; real time data; large data sets; survey 
JEL:  E52 C33 C53 
Date:  2009–11–06 
URL:  http://d.repec.org/n?u=RePEc:sef:csefwp:240&r=ecm 
By:  Oliver Blaskowitz; Helmut Herwartz 
Abstract:  It is commonly accepted that information is helpful if it can be exploited to improve a decision making process. In economics, decisions are often based on forecasts of up- or downward movements of the variable of interest. We point out that directional forecasts can provide a useful framework to assess the economic forecast value when loss functions (or success measures) are properly formulated to account for realized signs and realized magnitudes of directional movements. We discuss a general approach to evaluate (directional) forecasts which is simple to implement, robust to outlying or unreasonable forecasts, and which provides an economically interpretable loss/success functional framework. As such, the measure of directional forecast value is a readily available alternative to the commonly used squared error loss criterion. 
Keywords:  Directional forecasts, directional forecast value, forecast evaluation, economic forecast value, mean squared forecast error, mean absolute forecast error 
JEL:  C52 E17 E27 E37 E47 F17 F37 F47 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009052&r=ecm 
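One simple member of the loss/success family discussed above weights each directional call by the realized magnitude of the move, so that only the sign of the forecast matters; this is what makes the criterion robust to outlying forecasts. A minimal sketch, with the squared-error benchmark for comparison (function names are ours, not the authors'):

```python
import numpy as np

def directional_value(forecast_moves, realized_moves):
    """Directional forecast value: average realized magnitude earned by
    correctly signed calls minus that lost on wrongly signed calls."""
    f = np.asarray(forecast_moves, dtype=float)
    r = np.asarray(realized_moves, dtype=float)
    return float(np.mean(np.sign(f) * r))   # sign-weighted realized movement

def msfe(forecast_moves, realized_moves):
    """Mean squared forecast error benchmark for comparison."""
    f = np.asarray(forecast_moves, dtype=float)
    r = np.asarray(realized_moves, dtype=float)
    return float(np.mean((f - r) ** 2))
```

For example, forecasts [1, -1] against realized moves [2, -3] score (2 + 3) / 2 = 2.5, since both calls are correctly signed; a wildly large forecast with the right sign is not penalized, unlike under squared error loss.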
By:  Di Novi, Cinzia 
Abstract:  This paper proposes a specification of Wooldridge's (1995) two-step estimation method in which selectivity bias is due to two sources rather than one. The main objective of the paper is to show how the method can be applied in practice. The application concerns an important problem in health economics on which there exists a large literature: the presence of adverse selection in private health insurance markets. The data for the empirical application are drawn from the 2003/2004 Medical Expenditure Panel Survey in conjunction with the 2002 National Health Interview Survey. 
JEL:  I11 D82 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:uca:ucapdv:137&r=ecm 
By:  Stéphane Auray (Université Lille 3 (GREMARS), Université de Sherbrooke (GREDI) and CIRPÉE); Aurélien Eyquem (GATE, UMR 5824, Université de Lyon and Ecole Normale Supérieure Lettres et Sciences Humaines, France); Frédéric JouneauSion (Universites Lille Nord de France) 
Abstract:  This paper proposes a structural approach to growth modeling relying on random returns to scale. An RBC-like model in which returns to scale may be strictly increasing or decreasing depending on shocks is explicitly derived. We show that the relevant components of usual macroeconomic models (including capital, growth and various relative prices) are all related to a Random Autoregressive Coefficient (RCA) model. Recent work on the extreme behavior of dependent processes emphasizes some properties of this model that are noteworthy from the economic viewpoint. First, the model typically displays fat-tail behavior even if the shocks do not. Second, records (both historically high and low points) are less frequent than in the usual stationary case but tend to appear in clusters. We show that both fat tails and clustering of extreme values are consistent with arbitrarily small variations of the autoregressive coefficient around the usual unit-root case. Distinguishing an RCA model from a more usual constant-coefficient autoregressive model on available macro data may therefore be difficult and typically requires very long data sets. To this end, we propose a direct test based on the annual sequence of real wages in England recorded from the thirteenth century onwards. The test clearly rejects the constant AR model and supports the random coefficient hypothesis. 
Keywords:  Economic growth, extremal behavior, dependent processes 
JEL:  C22 C46 N13 O41 O47 
Date:  2009–09–01 
URL:  http://d.repec.org/n?u=RePEc:shr:wpaper:0917&r=ecm 
By:  Antonio Merlo (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania) 
Abstract:  Stochastic sequential bargaining games (Merlo and Wilson (1995, 1998)) have found wide applications in various fields including political economy and macroeconomics due to their flexibility in explaining delays in reaching agreement. In this paper, we present new results in nonparametric identification of such models under different scenarios of data availability. First, with complete data on players’ decisions, the sizes of the surplus to be shared (cakes) and the agreed allocations, both the mapping from states to the total surplus (i.e. the "cake function") and the players’ common discount rate are identified, if the unobservable state variable (USV) is independent of observable ones (OSV), and the total surplus is strictly increasing in the USV conditional on the OSV. Second, when the cake size is only observed under agreements and is additively separable in OSV and USV, the contribution by OSV is identified provided the USV distribution satisfies some distributional exclusion restrictions. Third, if data only report when an agreement is reached but never report the cake sizes, we propose a simple algorithm that exploits exogenously given shape restrictions on the cake function and the independence of USV from OSV to recover all rationalizable probabilities for reaching an agreement under counterfactual state transitions. Numerical examples show the set of rationalizable counterfactual outcomes so recovered can be informative. 
Keywords:  Nonparametric identification, noncooperative bargaining, stochastic sequential bargaining, rationalizable counterfactual outcomes 
JEL:  C14 C35 C73 C78 
Date:  2009–10–15 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:09037&r=ecm 
By:  Bo E. Honoré (Department of Economics, Princeton University); Aureo de Paula (Department of Economics, University of Pennsylvania) 
Abstract:  This paper studies the identification of a simultaneous equation model involving duration measures. It proposes a game theoretic model in which durations are determined by strategic agents. In the absence of strategic motives, the model delivers a version of the generalized accelerated failure time model. In its most general form, the system resembles a classical simultaneous equation model in which endogenous variables interact with observable and unobservable exogenous components to characterize an economic environment. In this paper, the endogenous variables are the individually chosen equilibrium durations. Even though a unique solution to the game is not always attainable in this context, the structural elements of the economic system are shown to be semiparametrically identified. We also present a brief discussion of estimation ideas and a set of simulation studies on the model. 
Keywords:  duration, empirical games, identification 
JEL:  C10 C30 C41 
Date:  2009–11–04 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:09039&r=ecm 
By:  Masato Ubukata (Graduate School of Economics, Osaka University) 
Abstract:  The objective of this paper is to examine effects of realized covariance matrix estimators based on intraday returns on large-scale minimum-variance equity portfolio optimization. We empirically assess out-of-sample performance of portfolios with different covariance matrix estimators: the realized covariance matrix estimators and Bayesian shrinkage estimators based on the past monthly and daily returns. The main results are: (1) the realized covariance matrix estimators using the past intraday returns yield a lower standard deviation of the large-scale portfolio returns than the Bayesian shrinkage estimators based on the monthly and daily historical returns; (2) gains to switching to strategies using the realized covariance matrix estimators are higher for an investor with higher relative risk aversion; and (3) the better portfolio performance of the realized covariance approach implied by ex-post returns in excess of the risk-free rate, the standard deviations of the excess returns, the return per unit of risk (Sharpe ratio) and the switching fees seems to be robust to the level of transaction costs. 
Keywords:  Large-scale portfolio selection; Realized covariance matrix; Intraday data 
JEL:  G11 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:osk:wpaper:0930&r=ecm 
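The basic realized covariance construction behind the abstract above, together with the global minimum-variance weights it feeds into, can be sketched as follows. This is the plain sum of outer products of intraday return vectors; the paper's estimators additionally address microstructure noise, which is omitted here.

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Daily realized covariance matrix from intraday return vectors:
    RC = sum_t r_t r_t', with r_t the N-vector of returns over interval t.
    intraday_returns: array of shape (T_intervals, N_assets)."""
    R = np.asarray(intraday_returns, dtype=float)
    return R.T @ R

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

In the paper's setting the weights are recomputed as new covariance estimates arrive, and the resulting portfolio return standard deviations are compared across estimators.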
By:  Velden, M. van de; Takane, Y. (Erasmus Econometric Institute) 
Abstract:  Two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach, missing values are imputed in such a way that the generalized canonical correlation analysis objective function does not increase in subsequent steps. Convergence is achieved when the value of the objective function remains constant. By means of a simulation study, we assess the performance of the new methods. We compare the results with those of two available methods: the missing-data passive method, introduced in Gifi's homogeneity analysis framework, and the GENCOM algorithm developed by Green and Carroll. 
Keywords:  generalized canonical correlation analysis; missing values 
Date:  2009–11–02 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765017106&r=ecm 
By:  Cathy Ning (Department of Economics, Ryerson University, Toronto, Canada); Dinghai Xu (Department of Economics, University of Waterloo, Waterloo, Ontario, Canada); Tony Wirjanto (School of Accounting & Finance and Department of Statistics & Actuarial Science,University of Waterloo, Waterloo, Ontario, Canada) 
Abstract:  Volatility clustering is a well-known stylized feature of financial asset returns. In this paper, we investigate the asymmetric pattern of volatility clustering on both the stock and foreign exchange rate markets. To this end, we employ copula-based semiparametric univariate time-series models that accommodate the clusters of both large and small volatilities in the analysis. Using daily realized volatilities of the individual company stocks, stock indices and foreign exchange rates constructed from high frequency data, we find that volatility clustering is strongly asymmetric in the sense that clusters of large volatilities tend to be much stronger than those of small volatilities. In addition, the asymmetric pattern of volatility clusters continues to be visible even when the clusters are allowed to be changing over time, and the volatility clusters themselves remain persistent even after forty days. 
Keywords:  Volatility clustering, Copulas, Realized volatility, High-frequency data. 
JEL:  C51 G32 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:rye:wpaper:wp006&r=ecm 
By:  Schluter, Christian (University of Southampton); Wahba, Jackline (University of Southampton) 
Abstract:  We consider the issue of illegal migration from Mexico to the US, and examine whether the lack of legal status causally impacts on outcomes, specifically wages and remitting behavior. These outcomes are of particular interest given the extent of legal and illegal migration, and the resulting financial flows. We formalize this question and highlight the principal empirical problem using a potential outcome framework with endogenous selection. The selection bias is captured by a control function, which is estimated nonparametrically. The framework for remitting is extended to allow for endogenous regressors (e.g. wages). We propose a new reparametrisation of the control function, which is linear in case of a normal error structure, and test linearity. Using Mexican Migration Project data, we find considerable and robust illegality effects on wages, the penalty being about 12% in the 1980s and 22% in the 1990s. For the latter period, the selection bias is not created by a normal error structure; wrongly imposing normality overestimates the illegality effect on wages by 50%, while wrongly ignoring selection leads to a 50% underestimate. In contrast to these wage penalties, legal status appears to have mixed effects on remitting behavior. 
Keywords:  nonparametric estimation, control functions, selection, counterfactuals, illegality effects, illegal migration, intermediate outcomes, Mexican Migration Project 
JEL:  J61 J30 J40 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp4527&r=ecm 
By:  Laurens CHERCHYE; Thomas DEMUYNCK; Bram DE ROCK 
Abstract:  Focusing on the testable implications on the equilibrium manifold, we show that the rationalizability problem is NP-complete. Subsequently, we present an integer programming (IP) approach to characterizing general equilibrium models. This approach avoids the use of the Tarski-Seidenberg algorithm for quantifier elimination that is commonly used in the literature. The IP approach naturally applies to settings with any number of observations, which is attractive for empirical applications. In addition, it can easily be adjusted to analyze the testable implications of alternative general equilibrium models (that include, e.g., public goods, externalities and/or production). Further, we show that the IP framework can easily address recoverability questions (pertaining to the structural model that underlies the observed equilibrium behavior), and account for empirical issues when bringing the IP methodology to the data (such as goodness-of-fit and power). Finally, we show how to develop easy-to-implement heuristics that give a quick (but possibly inconclusive) answer to whether or not the data satisfy the general equilibrium models. 
Keywords:  General equilibrium, equilibrium manifold, exchange economies, production economies, NP-completeness, nonparametric restrictions, GARP, integer programming. 
JEL:  C60 D10 D51 
Date:  2009–07 
URL:  http://d.repec.org/n?u=RePEc:ete:ceswps:ces09.14&r=ecm 
By:  Kurt Schmidheiny (Universitat Pompeu Fabra); Marius Brülhart (Université de Lausanne) 
Abstract:  It is well understood that the two most popular empirical models of location choice, conditional logit and Poisson, return identical coefficient estimates when the regressors are not individual-specific. We show that these two models differ starkly in terms of their implied predictions. The conditional logit model represents a zero-sum world, in which one region's gain is the other regions' loss. In contrast, the Poisson model implies a positive-sum economy, in which one region's gain is no other region's loss. We also show that all intermediate cases can be represented as a nested logit model with a single outside option. The nested logit turns out to be a linear combination of the conditional logit and Poisson models. Conditional logit and Poisson elasticities mark the polar cases and can therefore serve as boundary values in applied research. 
Keywords:  firm location, residential choice, conditional logit, nested logit, Poisson count model 
JEL:  C25 R3 H73 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:ieb:wpaper:2009/10/doc200914&r=ecm 
By:  Moore, Rebecca (University of Georgia); Bishop, Richard C. (University of Wisconsin); Provencher, Bill (University of Wisconsin); Champ, Patricia (US Forest Service) 
Abstract:  In this paper we develop an econometric model of willingness to pay that integrates data on respondent uncertainty regarding their own willingness to pay. The integration is utility-consistent and does not involve calibrating the contingent responses to actual payment data, so the approach can "stand alone". In an application to a valuation study related to whooping crane restoration, we find that this model generates a statistically lower expected WTP than the standard CV model. Moreover, the WTP function estimated with this model is not statistically different from that estimated using actual payment data, suggesting that when properly analyzed using data on respondent uncertainty, contingent valuation decisions can simulate actual payment decisions. This method allows for more reliable estimates of WTP that incorporate respondent uncertainty without the need for collecting comparable actual payment data. 
Date:  2009–05 
URL:  http://d.repec.org/n?u=RePEc:ecl:wisagr:537&r=ecm 