
on Econometrics 
By:  Giovanni Mellace (Faculty of Economics, University of Rome "Tor Vergata"); Roberto Rocci (Faculty of Economics, University of Rome "Tor Vergata") 
Abstract:  The aim of the paper is to relax the distributional assumptions on the error terms that are often imposed in parametric sample selection models to estimate causal effects when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribution of the error terms with a mixture of Gaussians. We propose an EM-type algorithm for ML estimation. In a simulation study we show that our estimator has lower MSE than the ML and two-step Heckman estimators for every non-normal error distribution considered. Finally, we provide an application to the Job Corps training program. 
Keywords:  Causal inference, principal stratification, mixture models, EM algorithm, sample selection. 
JEL:  C10 C13 C31 C34 
Date:  2011–05–02 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:194&r=ecm 
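The core numerical tool in the abstract above, an EM algorithm for a Gaussian mixture, can be sketched in a few lines. This is only a generic two-component univariate mixture fitted to synthetic "error terms", not the authors' selection-model estimator; all data, starting values, and variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic non-normal "error terms": a skewed two-component sample
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.0, 1.0, 700)])

# Initial mixture weights, means, and standard deviations
pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = pi * norm_pdf(x[:, None], mu, sd)            # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

The fitted means should recover the two generating components.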
By:  David E. Giles (Department of Economics, University of Victoria) 
Abstract:  In this paper we consider the asymptotic properties of the Instrumental Variables (IV) estimator of the parameters in a linear regression model with some random regressors, and other regressors that are dummy variables. The latter have the special property that the number of non-zero values is fixed, and does not increase with the sample size. We prove that the IV estimator of the coefficient vector for the dummy variables is inconsistent, while that for the other regressors is weakly consistent under standard assumptions. However, the usual estimator for the asymptotic covariance matrix of the IV estimator for all of the coefficients retains its usual consistency. The t-test statistics for the dummy variable coefficients are still asymptotically standard normal, despite the inconsistency of the associated IV coefficient estimator. These results extend the earlier results of Hendry and Santos (2005), which relate to a fixed-regressor model, in which the dummy variables are non-zero for just a single observation, and OLS estimation is used. 
Keywords:  Dummy variables; indicator variables; instrumental variables; inconsistent estimator 
JEL:  C20 C49 
Date:  2011–05–04 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:1106&r=ecm 
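The setting above can be mimicked in a small simulation: the linear IV estimator beta = (Z'X)^{-1} Z'y applied where one dummy regressor has a fixed number of non-zero entries regardless of the sample size. The data-generating process below is hypothetical, not the paper's; it only illustrates that the coefficient on the continuous endogenous regressor is estimated consistently while the dummy instruments itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Endogenous regressor x, valid instrument z, structural error u
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + 0.8 * u + rng.normal(size=n)   # correlated with u: OLS would be biased
# Dummy with a FIXED number of non-zero values, whatever the sample size
d = np.zeros(n)
d[:5] = 1.0
y = 1.0 + 2.0 * x + 3.0 * d + u

# IV estimator: instruments are (1, z, d); the dummy serves as its own instrument
X = np.column_stack([np.ones(n), x, d])
Z = np.column_stack([np.ones(n), z, d])
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
```

With n = 5000 the slope on x is close to its true value of 2; the dummy coefficient, based on only five effective observations, stays noisy however large n gets.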
By:  Jason R. Blevins (Department of Economics, Ohio State University) 
Abstract:  This paper develops methods for estimating dynamic structural microeconomic models with serially correlated latent state variables. The proposed estimators are based on sequential Monte Carlo methods, or particle filters, and simultaneously estimate both the structural parameters and the trajectory of the unobserved state variables for each observational unit in the dataset. We focus on two important special cases: single agent dynamic discrete choice models and dynamic games of incomplete information. The methods are applicable to both discrete and continuous state space models. We first develop a broad nonlinear state space framework which includes as special cases many dynamic structural models commonly used in applied microeconomics. Next, we discuss the nonlinear filtering problem that arises due to the presence of a latent state variable and show how it can be solved using sequential Monte Carlo methods. We then turn to estimation of the structural parameters and consider two approaches: an extension of the standard full-solution maximum likelihood procedure (Rust, 1987) and an extension of the two-step estimation method of Bajari, Benkard, and Levin (2007), in which the structural parameters are estimated using revealed preference conditions. Finally, we introduce an extension of the classic bus engine replacement model of Rust (1987) and use it both to carry out a series of Monte Carlo experiments and to provide empirical results using the original data. 
Keywords:  dynamic discrete choice, latent state variables, serial correlation, sequential Monte Carlo methods, particle filtering 
JEL:  C13 C15 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:osu:osuewp:1101&r=ecm 
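The sequential Monte Carlo machinery the abstract refers to can be illustrated with a bootstrap particle filter on a toy linear-Gaussian state space (where the exact Kalman filter would also apply). The structural-estimation layer is omitted and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 100, 1000                       # time periods, particles
rho, sig_s, sig_y = 0.9, 0.5, 1.0      # AR(1) state and measurement noise

# Simulate a latent AR(1) state and noisy observations of it
s = np.zeros(T)
for t in range(1, T):
    s[t] = rho * s[t - 1] + sig_s * rng.normal()
y = s + sig_y * rng.normal(size=T)

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(0.0, sig_s / np.sqrt(1 - rho ** 2), N)  # stationary init
est = np.zeros(T)
for t in range(T):
    particles = rho * particles + sig_s * rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2            # measurement density
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles             # filtered mean E[s_t | y_1..y_t]
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling
```

The filtered means track the latent state more closely than the raw observations do.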
By:  Wolfgang Polasek (Institute for Advanced Studies, Austria; University of Porto, Portugal; The Rimini Centre for Economic Analysis (RCEA), Italy) 
Abstract:  The Hodrick-Prescott (HP) method was originally developed to smooth time series, i.e. to extract a smooth (long-term) component. We show that the HP smoother can be viewed as a Bayesian linear model with a strong prior on the smoothness component. This Bayesian approach can be extended in a linear-model setup to both conjugate and non-conjugate models estimated by MCMC. The Bayesian HP smoothing model is also extended to a spatial smoothing model: spatial neighbors are defined for each observation, and a smoothness prior is used in the same way as for the HP filter in time series. The new smoothing approaches are applied to the (textbook) airline passenger data for time series and to the problem of smoothing spatial regional data. This approach opens up a new class of model-based smoothers for time series and spatial models. 
Keywords:  Hodrick-Prescott (HP) smoothers, Spatial econometrics, MCMC estimation, Airline passenger time series, Spatial smoothing of regional data, NUTS: nomenclature of territorial units for statistics 
JEL:  C11 C15 C52 E17 R12 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:25_11&r=ecm 
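The penalized-least-squares view of the HP smoother mentioned above (equivalently, the posterior mean under a smoothness prior) has a simple closed form: minimizing ||y - tau||^2 + lambda ||D2 tau||^2, with D2 the second-difference operator, gives tau = (I + lambda D2'D2)^{-1} y. A minimal sketch on an invented series; lambda = 1600 is the conventional quarterly value.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """HP trend as ridge-type penalized least squares."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2, n) second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(3)
t = np.arange(120)
y = 0.05 * t + np.sin(t / 6.0) + 0.3 * rng.normal(size=120)  # invented data
trend = hp_filter(y)
```

The trend is markedly smoother than the input (its second differences shrink) and, because (I + lam D2'D2) maps the constant vector to itself, the sample mean is preserved exactly.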
By:  BOUEZMARNI, Taoufik (University of Sherbrooke, Canada); VAN BELLEGEM, Sébastien (Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium; Toulouse School of Economics, France) 
Abstract:  The paper introduces a new nonparametric estimator of the spectral density, obtained by smoothing the periodogram with the probability density of a Beta random variable (Beta kernel). The estimator is proved to be bounded for short memory data, and diverges at the origin for long memory data. The convergence in probability of the relative error and Monte Carlo simulations suggest that the estimator automatically adapts to the long- or short-range dependence of the process. A cross-validation procedure is also studied in order to select the nuisance parameter of the estimator. Illustrations on historical as well as most recent returns and absolute returns of the S&P500 index show the reasonable performance of the estimator, and show that the data-driven estimator is a valuable tool for detecting long memory as well as hidden periodicities in stock returns. 
Keywords:  spectral density, long range dependence, nonparametric estimation, periodogram, kernel smoothing, Beta kernel, cross-validation 
JEL:  C14 C22 
Date:  2011–01–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2011004&r=ecm 
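The idea of smoothing the periodogram with a Beta kernel can be sketched as follows. This is a simplified reading of the construction, with Fourier frequencies mapped to (0, 1) and an ad hoc fixed bandwidth; it is not the authors' exact estimator and ignores their cross-validated bandwidth choice.

```python
import numpy as np
from math import lgamma

def beta_pdf(u, a, b):
    """Beta(a, b) density evaluated on an array u in (0, 1)."""
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1) * np.log(u) + (b - 1) * np.log(1 - u))

def beta_smoothed_spectrum(x, bw=0.05):
    n = len(x)
    # Periodogram |dft|^2 / (2 pi n) at Fourier frequencies in (0, pi)
    dft = np.fft.rfft(x - x.mean())
    I = (np.abs(dft[1:n // 2]) ** 2) / (2 * np.pi * n)
    u = np.arange(1, n // 2) / (n // 2)   # frequencies rescaled to (0, 1)
    est = np.empty_like(I)
    for j, uj in enumerate(u):
        # Beta kernel centered (in mode) near uj; shape varies with uj,
        # which is what lets the kernel adapt near the origin
        w = beta_pdf(u, uj / bw + 1, (1 - uj) / bw + 1)
        est[j] = (w @ I) / w.sum()
    return est

rng = np.random.default_rng(4)
e = rng.normal(size=2048)                 # white noise: flat true spectrum
f_hat = beta_smoothed_spectrum(e)
```

For white noise the true spectral density is sigma^2 / (2 pi), and the smoothed estimate hovers around that level across frequencies.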
By:  Kenneth G. Stewart (Department of Economics, University of Victoria) 
Abstract:  Interpreted as an instrumental variables estimator, nonlinear least squares constructs its instruments optimally from the explanatory variables using the nonlinear specification of the regression function. This has implications for the use of GMM estimators in nonlinear regression models, including systems of nonlinear regressions, where the explanatory variables are exogenous or predetermined and so serve as their own instruments, and where the restrictions under test are the only source of overidentification. In such situations the use of GMM test criteria involves a suboptimal construction of instruments; the use of optimally constructed instruments leads to conventional non-GMM test criteria. These implications are illustrated with two empirical examples, one a classic study of models of the short-term interest rate. 
Keywords:  optimal instruments, nonlinear regression, generalized method of moments 
JEL:  C12 C13 
Date:  2011–05–04 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:1107&r=ecm 
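The observation that NLS builds its instruments from the regression function corresponds to the moment conditions G(beta)'(y - f(beta)) = 0 with G = df/dbeta. A Gauss-Newton sketch for a hypothetical exponential regression (not one of the paper's examples) makes the construction concrete.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(0, 2, n)
y = 1.5 * np.exp(0.7 * x) + 0.1 * rng.normal(size=n)  # invented model y = a*exp(b*x)

# Log-linear OLS starting values: log y is roughly log a + b x
A = np.column_stack([np.ones(n), x])
c0, c1 = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
a, b = np.exp(c0), c1

# Gauss-Newton: solve the moment conditions G'(y - f) = 0, where the
# "instruments" G = [df/da, df/db] are built from the regressors
for _ in range(25):
    f = a * np.exp(b * x)
    G = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    da, db = np.linalg.solve(G.T @ G, G.T @ (y - f))
    a, b = a + da, b + db
```

The iteration converges to estimates close to the generating values (a, b) = (1.5, 0.7).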
By:  Armelle Guillou (IRMA  Institut de Recherche Mathématique Avancée  CNRS : UMR7501  Université de Strasbourg); Stéphane Loisel (SAF  Laboratoire de Sciences Actuarielle et Financière  Université Claude Bernard  Lyon I : EA2429); Gilles Stupfler (IRMA  Institut de Recherche Mathématique Avancée  CNRS : UMR7501  Université de Strasbourg) 
Abstract:  We present a new model of loss processes in insurance. The process is a couple $(N, \, L)$ where $N$ is a univariate Markov-modulated Poisson process (MMPP) and $L$ is a multivariate loss process whose behaviour is driven by $N$. We prove the strong consistency of the maximum likelihood estimator of the parameters of this model, and present an EM algorithm to compute it in practice. The method is illustrated with simulations and real sets of insurance data. 
Date:  2011–04–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00589696&r=ecm 
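A Markov-modulated Poisson process can be illustrated by simulation. The sketch below discretizes time into unit intervals, so it is only a caricature of the continuous-time MMPP in the paper; it also omits the loss process L and the EM estimator, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two-state Markov-modulated Poisson process, discretized: a hidden
# Markov chain selects the Poisson intensity in each period
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])      # transition matrix of the hidden chain
lam = np.array([1.0, 8.0])        # claim intensity in each state
T = 1000

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
counts = rng.poisson(lam[states])  # claim counts per period
```

Mixing over regimes makes the counts overdispersed relative to a plain Poisson (variance exceeds the mean), which is one practical signature of modulation.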
By:  Joseph P. Romano; Michael Wolf 
Abstract:  Many postulated relations in finance imply that expected asset returns should monotonically increase in a certain characteristic. To examine the validity of such a claim, one typically considers a finite number of return categories, ordered according to the underlying characteristic. A standard approach is to simply test for a difference in expected returns between the highest and the lowest return category. However, such an approach can be misleading, since the relation of expected returns could be flat, or even decreasing, in the range of intermediate categories. A new test, taking the entire range of categories into account, has been proposed by Patton and Timmermann (2010). Unfortunately, the test is based on an additional assumption that can be violated in many applications of practical interest. As a consequence, it can be quite likely for the test to ‘establish’ strict monotonicity of expected asset returns when such a relation actually does not exist. We offer some alternative tests which do not share this problem. The behavior of the various tests is illustrated via Monte Carlo studies. We also present empirical applications to real data. 
Keywords:  Bootstrap, CAPM, monotonicity tests, systematic relation 
JEL:  C12 G12 G14 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:017&r=ecm 
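The pitfall motivating the paper, that a top-minus-bottom comparison can look cleanly positive while an intermediate category is flat, is easy to reproduce on synthetic data. The sketch below computes only the two descriptive statistics; it is not the Patton and Timmermann (2010) test or the authors' alternatives, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
# Expected returns in 5 ordered categories; the middle pair is flat,
# so the relation is NOT strictly monotonic
true_means = [0.2, 0.4, 0.4, 0.6, 0.8]
data = [m + 0.5 * rng.normal(size=200) for m in true_means]

# "Top minus bottom" looks clearly positive ...
top_bottom = np.mean(data[-1]) - np.mean(data[0])

# ... but the smallest consecutive difference, which uses the whole
# range of categories, sits near zero at the flat pair
diffs = np.array([np.mean(data[i + 1]) - np.mean(data[i]) for i in range(4)])
min_diff = diffs.min()
```

A test built on top_bottom alone would "establish" monotonicity here, while any statistic involving min_diff reveals the flat segment.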
By:  Shin Kanaya (Dept. of Economics, Nuffield College and Oxford-Man Institute); Taisuke Otsu (Cowles Foundation, Yale University) 
Abstract:  This paper studies large and moderate deviation properties of a realized volatility statistic of high-frequency financial data. We establish a large deviation principle for the realized volatility when the number of high-frequency observations in a fixed time interval increases to infinity. Our large deviation result can be used to evaluate tail probabilities of the realized volatility. We also derive a moderate deviation rate function for a standardized realized volatility statistic. The moderate deviation result is useful for assessing the validity of normal approximations based on the central limit theorem. In particular, it clarifies that there exists a tradeoff between the accuracy of the normal approximations and the path regularity of an underlying volatility process. Our large and moderate deviation results complement the existing asymptotic theory on high-frequency data. In addition, the paper contributes to the literature on large deviation theory by extending the theory to a high-frequency data environment. 
Keywords:  Realized volatility, Large deviation, Moderate deviation 
JEL:  C10 C20 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1798&r=ecm 
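Realized volatility itself is simply the sum of squared high-frequency returns, which converges to the integrated variance as the number of intraday observations grows. A constant-volatility sketch, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(8)
# One trading day of high-frequency log returns under constant volatility:
# n intraday increments of a Brownian motion with integrated variance sigma^2
n, sigma = 23400, 0.2                          # e.g. one observation per second
r = sigma / np.sqrt(n) * rng.normal(size=n)    # intraday log returns
rv = np.sum(r ** 2)                            # realized volatility (variance form)
```

With n this large, rv is within a small sampling error of sigma^2 = 0.04; the deviation results in the paper characterize exactly how fast those tail errors vanish.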
By:  Konstantopoulos, Spyros (Michigan State University) 
Abstract:  Meta-analytic methods have been widely applied in education, medicine, and the social sciences. Much meta-analytic data is hierarchically structured, since effect size estimates are nested within studies, and studies can in turn be nested within level-3 units such as laboratories or investigators, and so forth. Thus, multilevel models are a natural framework for analyzing meta-analytic data. This paper discusses the application of a Fisher scoring method in two- and three-level meta-analysis that takes into account random variation at the second and third levels. The usefulness of the model is demonstrated using data that provide information about school calendar types. SAS PROC MIXED and HLM can be used to compute the estimates of fixed effects and variance components. 
Keywords:  meta-analysis, multilevel models, random effects 
JEL:  C00 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5678&r=ecm 
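A random-effects meta-analysis of the kind described above can be sketched with the DerSimonian-Laird moment estimator of the between-study variance, used here as a simple stand-in for the paper's Fisher scoring approach; the effect sizes and sampling variances below are invented.

```python
import numpy as np

# Study-level effect sizes y with known sampling variances v
y = np.array([0.10, 0.30, 0.35, 0.65, 0.45, 0.15])
v = np.array([0.04, 0.02, 0.05, 0.03, 0.04, 0.06])

# Fixed-effect (inverse-variance) pooling and Cochran's Q
w = 1 / v
theta_fixed = (w @ y) / w.sum()
Q = w @ (y - theta_fixed) ** 2

# DerSimonian-Laird moment estimator of the between-study variance tau^2
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooling: weights incorporate tau^2
w_star = 1 / (v + tau2)
theta_re = (w_star @ y) / w_star.sum()
se_re = np.sqrt(1 / w_star.sum())
```

Adding tau^2 to every weight pulls the pooled estimate toward an unweighted average and widens its standard error, reflecting genuine between-study heterogeneity.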
By:  Edoardo Marcucci (University of Roma Tre, Italy); Valerio Gatta (Sapienza University of Rome, Italy) 
Abstract:  This paper investigates alternative methods for accounting for preference heterogeneity in choice experiments. The main interest lies in assessing the different results obtained when investigating heterogeneity in various ways. This comparison can be performed on the basis of model performance and, more interestingly, by evaluating willingness-to-pay measures. The results of a preference heterogeneity analysis depend on the methods used to search for it. Socioeconomic variables can be interacted with attributes and/or alternative-specific constants. Similarly, one can consider different subsets of the data (strata variables) and estimate a multinomial logit model for each of them. Heterogeneity in preferences can be investigated by including it in the systematic component of utility or in the stochastic one. Mixed logit and latent class models are examples of the first approach. The former, in its random-coefficient specification, allows for random taste variation by assuming a specific distribution of the attribute coefficients over the population, and captures additional heterogeneity by allowing parameters to vary across individuals both randomly and systematically with observable variables. In other words, it accounts for heterogeneity in the mean and variance of the distribution of the random parameters due to individual characteristics. Latent class models capture heterogeneity by considering a discrete underlying distribution of tastes: a small number of mass points represent the unobserved segments, or behavioral groups, within which preferences are assumed homogeneous. The probability of membership in a latent class can additionally be made a function of individual characteristics. Alternatively, heterogeneity can be incorporated through the random component of utility. The covariance heterogeneity model adopts this second approach: it is a generalization of the nested logit model and can be used to explain heteroscedastic error structures in the data. 
It allows the inclusive value parameter to be a function of choice-alternative attributes and/or individual characteristics. Another method extends the multinomial logit model by integrating unobserved heterogeneity through random error components distributed according to a tree structure. An interesting improvement in modeling preference heterogeneity is its simultaneous inclusion in both the systematic and stochastic parts; a valid example is the inclusion of an error component in a random-coefficient specification of the mixed multinomial logit model. The empirical data used for comparing the various methods relate to departure airport choice in a multi-airport region. The study area covers two regions in central Italy, Marche and Emilia-Romagna, and four airports: Ancona, Rimini, Forlì and Bologna. A fractional factorial experimental design was adopted to construct a four-alternative choice set and five hypothetical choice exercises in each questionnaire. The selection of the potentially most important attributes and their levels was based on previous research. 
Keywords:  heterogeneity, airport choice, stated preferences, discrete choice model. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:rcr:wpaper:03_11&r=ecm 
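The random-coefficient mixed logit described in the abstract computes choice probabilities by averaging standard logit probabilities over draws of the random parameters. A simulated sketch for a hypothetical four-airport choice; all attribute values and coefficients are invented and the specification is much simpler than the paper's.

```python
import numpy as np

rng = np.random.default_rng(9)
# Attributes of 4 hypothetical departure airports (all numbers invented)
price = np.array([60.0, 45.0, 80.0, 55.0])    # fare in euros
access = np.array([30.0, 90.0, 20.0, 60.0])   # access time in minutes

# Random taste for price across travelers: beta_price ~ N(mu, sd^2);
# the access-time coefficient is kept fixed for simplicity
mu_price, sd_price, b_access = -0.05, 0.02, -0.01

R = 5000
draws = rng.normal(mu_price, sd_price, size=(R, 1))
v = draws * price + b_access * access          # (R, 4) utilities per draw
ev = np.exp(v - v.max(axis=1, keepdims=True))  # stabilized softmax
p = (ev / ev.sum(axis=1, keepdims=True)).mean(axis=0)  # simulated probabilities
```

Each row of the inner softmax is an ordinary logit probability conditional on one draw of the price coefficient; averaging over draws integrates out the taste heterogeneity.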
By:  Doppelhofer, Gernot (Dept. of Economics, Norwegian School of Economics and Business Administration); Weeks, Melvyn (University of Cambridge) 
Abstract:  This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture model that allows for fat-tailed errors relative to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight of the eighteen variables found to be significantly related to economic growth by Sala-i-Martin et al. (2004) are sensitive to deviations from benchmark model averaging. For example, the GDP shares of mining and of government consumption are no longer robust or economically significant once deviations from the normal benchmark assumptions are allowed. The paper identifies outlying observations (most notably Botswana) in explaining economic growth in a cross-section of countries. 
Keywords:  Determinants of Economic Growth; Robust Model Averaging; Heteroscedasticity; Outliers; Mixture models. 
JEL:  C11 C21 C52 O47 O50 
Date:  2011–02–07 
URL:  http://d.repec.org/n?u=RePEc:hhs:nhheco:2011_003&r=ecm 
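Fat-tailed errors of the kind used in the paper's mixture modeling can be handled through the scale-mixture-of-normals representation of the Student-t distribution, which yields an EM algorithm that downweights outlying observations. This sketch fits a simple t-error regression on invented data; it is not the paper's model-averaging procedure.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # fat-tailed errors

# Student-t regression via EM: in the scale-mixture representation the
# E-step gives each observation a weight (nu+1)/(nu + r^2/s^2), so
# large residuals are automatically downweighted
nu = 3.0
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting values
s2 = np.mean((y - X @ beta) ** 2)
for _ in range(100):
    r2 = (y - X @ beta) ** 2
    w = (nu + 1) / (nu + r2 / s2)                  # E-step weights
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)     # weighted LS (M-step)
    s2 = np.mean(w * (y - X @ beta) ** 2)
```

The fitted coefficients stay close to the generating values (1, 2) even though the t(3) errors produce occasional extreme observations that would inflate the variance of plain OLS.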