
New Economics Papers on Econometrics
By:  Dennis Kristensen (Department of Economics, Columbia University and CREATES); Anders Rahbek (Department of Economics, University of Copenhagen and CREATES) 
Abstract:  In this paper, we consider a general class of vector error correction models which allow for asymmetric and nonlinear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest, as parameters of nonlinear components vanish under the null. To address the latter type of testing, we use so-called sup tests, which here require the development of new (uniform) weak convergence results. These results are potentially useful in general for the analysis of nonstationary nonlinear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and nonstandard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature, due to the impact of the estimated cointegration relations. With respect to testing, this makes implementation involved, and bootstrap versions of the tests are proposed in order to facilitate their usage. The asymptotic results regarding the QML estimators extend results in Kristensen and Rahbek (2010, Journal of Econometrics), where symmetric nonlinear error correction was considered. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes. 
Keywords:  Nonlinear error correction, cointegration, testing nonlinearity, nonlinear time series, sup tests, vanishing parameters 
JEL:  C30 C32 
Date:  2010–01–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201068&r=ecm 
By:  Shin Kanaya (Department of Economics, OxfordMan Institute and Nuffield College); Dennis Kristensen (Department of Economics, Columbia University, and CREATES) 
Abstract:  A two-step estimation method of stochastic volatility models is proposed: in the first step, we estimate the (unobserved) instantaneous volatility process using the estimator of Kristensen (2010, Econometric Theory 26). In the second step, standard estimation methods for fully observed diffusion processes are employed, but with the filtered volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and we give theoretical results for both. The resulting estimators of the drift and diffusion terms of the volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions these vanish asymptotically, and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties of the proposed estimators. 
Keywords:  Realized spot volatility, stochastic volatility, kernel estimation, nonparametric, semiparametric 
JEL:  C14 C32 
Date:  2010–01–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201067&r=ecm 
By:  D. F. BENOIT; D. VAN DEN POEL 
Abstract:  In this article, we develop a Bayesian method for quantile regression in the case of dichotomous response data. The frequentist approach to this type of regression has proven problematic in both optimizing the objective function and making inference on the regression parameters. By accepting additional distributional assumptions on the error terms, the proposed Bayesian method sets the problem in a parametric framework in which these problems are avoided; i.e., it is relatively straightforward to calculate point predictions of the estimators with their corresponding credible intervals. To test the applicability of the method, we ran two Monte Carlo experiments and applied it to Horowitz's (1993) often-studied work-trip mode choice dataset. Compared to previous estimates for the latter dataset, the proposed method interestingly leads to a different economic interpretation. 
Keywords:  quantile regression, binary regression, maximum score, asymmetric Laplace distribution, Bayesian inference, work-trip mode choice 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:10/662&r=ecm 
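The key device behind Bayesian quantile regression of this kind is the asymmetric Laplace distribution (ALD): its log-density is, up to constants, the negative of the quantile "check" loss, so maximizing an ALD likelihood is equivalent to minimizing the usual quantile-regression objective. A minimal numerical illustration of that identity (a sketch, not the authors' code; the function names are ours):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile regression 'check' loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def ald_loglik(u, tau, sigma=1.0):
    """Log-density of the asymmetric Laplace distribution (location 0).

    Up to the normalizing constant, it equals -check_loss(u, tau) / sigma,
    which is why an ALD likelihood turns quantile regression into a
    parametric (and hence Bayesian-friendly) problem.
    """
    return np.log(tau * (1 - tau) / sigma) - check_loss(u, tau) / sigma

# The log-likelihood peaks exactly where the check loss is zero (u = 0).
u = np.linspace(-3, 3, 7)          # [-3, -2, -1, 0, 1, 2, 3]
ll = ald_loglik(u, tau=0.25)
```

For tau = 0.25 the loss penalizes negative residuals three times as heavily as positive ones, which is what skews the fitted line toward the lower quartile.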
By:  Olivier Ledoit; Michael Wolf 
Abstract:  Applied researchers often test for the difference of the variance of two investment strategies; in particular, when the investment strategies under consideration aim to implement the global minimum variance portfolio. A popular tool to this end is the F-test for the equality of variances. Unfortunately, this test is not valid when the returns are correlated, have tails heavier than the normal distribution, or are of a time series nature. Instead, we propose the use of robust inference methods. In particular, we suggest constructing a studentized time series bootstrap confidence interval for the ratio of the two variances and declaring the two variances different if the value one is not contained in the obtained interval. This approach has the advantage that one can simply resample from the observed data, as opposed to some null-restricted data. A simulation study demonstrates the improved finite-sample performance compared to existing methods. 
Keywords:  Bootstrap, HAC inference, variance 
JEL:  C12 C14 C22 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:516&r=ecm 
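The bootstrap logic can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' procedure: it uses a plain circular block bootstrap percentile interval for the variance ratio, whereas the paper studentizes the statistic with a HAC standard error; resampling both series with the same indices preserves their cross-dependence.

```python
import numpy as np

def block_bootstrap_var_ratio_ci(x, y, block_len=10, n_boot=2000,
                                 alpha=0.05, seed=0):
    """Circular block bootstrap percentile CI for Var(x)/Var(y).

    Simplified sketch: a percentile interval in place of the paper's
    studentized interval. Declare the variances different at level
    alpha if 1 lies outside the returned interval.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        # Wrap-around blocks; the SAME indices for x and y keep their
        # cross-sectional dependence intact.
        idx = (starts[:, None] + np.arange(block_len)[None, :]).ravel() % n
        idx = idx[:n]
        stats[b] = np.var(x[idx], ddof=1) / np.var(y[idx], ddof=1)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage: two correlated return series with equal variance.
rng = np.random.default_rng(42)
z = rng.standard_normal((1000, 2))
x = z[:, 0]
y = 0.8 * z[:, 0] + 0.6 * z[:, 1]   # Var(y) = 0.64 + 0.36 = 1 as well
lo, hi = block_bootstrap_var_ratio_ci(x, y)
equal_variances_rejected = not (lo <= 1.0 <= hi)
```

Note that because x and y are strongly correlated, the interval is much tighter than an F-test's independence-based sampling distribution would suggest, which is exactly the failure mode the abstract describes.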
By:  Landajo, Manuel; Presno, María José 
Abstract:  The framework of stationarity testing is extended to allow a generic smooth trend function estimated nonparametrically. The asymptotic behavior of the pseudo-Lagrange Multiplier test is analyzed in this setting. The proposed implementation delivers a consistent test whose limiting null distribution is standard normal. Theoretical analyses are complemented with simulation studies and some empirical applications. 
Keywords:  Time series; stationarity testing; limiting distribution; nonparametric regression; nonparametric hypothesis testing 
JEL:  C14 C22 
Date:  2010–10–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25659&r=ecm 
By:  Olivier Ledoit; Michael Wolf 
Abstract:  Many applied problems require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator which is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage. 
Keywords:  Large-dimensional asymptotics, nonlinear shrinkage, rotation equivariance 
JEL:  C13 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:515&r=ecm 
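The equivalence the abstract mentions, between shrinking the covariance matrix toward a multiple of the identity and shrinking the sample eigenvalues linearly toward their grand mean while keeping the eigenvectors, is easy to verify numerically. The sketch below uses a fixed shrinkage weight `w` for illustration; choosing the optimal intensity (linearly or, as in this paper, nonlinearly) is the hard part the literature addresses.

```python
import numpy as np

def linear_shrinkage(S, w):
    """Shrink sample covariance S toward mu * I with fixed weight w in [0, 1]:
    S_shrunk = (1 - w) * S + w * mu * I, where mu = trace(S) / p.
    """
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1 - w) * S + w * mu * np.eye(p)

# Verify the eigenvalue view: same eigenvectors, eigenvalues moved
# linearly toward the grand mean.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))          # n = 50 observations, p = 10
S = np.cov(X, rowvar=False)
S_shrunk = linear_shrinkage(S, w=0.3)

lam, V = np.linalg.eigh(S)
lam_shrunk, V_shrunk = np.linalg.eigh(S_shrunk)
mu = np.trace(S) / 10
expected = np.sort((1 - 0.3) * lam + 0.3 * mu)
same_eigenvalues = np.allclose(np.sort(lam_shrunk), expected)
# Eigenvectors agree up to sign, so |V' V_shrunk| should be the identity.
eigvecs_match = np.allclose(np.abs(V.T @ V_shrunk), np.eye(10), atol=1e-8)
```

Since every eigenvalue is pulled toward the grand mean, the condition number of the shrunk matrix is never worse than that of the sample covariance matrix, which is the practical payoff in ill-conditioned settings.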
By:  Leslie G. Godfrey 
Abstract:  The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite sample significance levels than asymptotic critical values. 
Keywords:  non-nested models, heteroskedasticity-robust, wild bootstrap 
JEL:  C12 C15 C52 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:10/22&r=ecm 
By:  Baltagi, Badi H. (Syracuse University); Bresson, Georges (University of Paris 2) 
Abstract:  This paper proposes maximum likelihood estimators for panel seemingly unrelated regressions with both spatial lag and spatial error components. We study the general case where spatial effects are incorporated via spatial error terms and via a spatial lag dependent variable, and where the heterogeneity in the panel is incorporated via an error component specification. We generalize the approach of Wang and Kockelman (2007) and propose joint and conditional Lagrange Multiplier tests for spatial autocorrelation and random effects for this spatial SUR panel model. The small sample performance of the proposed estimators and tests is examined using Monte Carlo experiments. An empirical application to hedonic housing prices in Paris illustrates these methods. The proposed specification uses a system of three SUR equations corresponding to three types of flats within 80 districts of Paris over the period 1990-2003. We test for spatial effects and heterogeneity and find reasonable estimates of the shadow prices for housing characteristics. 
Keywords:  hedonic housing prices, Lagrange multiplier tests, maximum likelihood, panel spatial dependence, spatial lag, spatial error, SUR 
JEL:  C31 C33 R21 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5227&r=ecm 
By:  Vladimir Panov 
Abstract:  Suppose a high-dimensional random vector X can be represented as a sum of two components: a signal S, which belongs to some low-dimensional subspace, and a noise component N. This paper presents a new approach for estimating the signal subspace, based on the ideas of Non-Gaussian Component Analysis. Our approach avoids the technical difficulties that usually arise in similar methods: it requires neither estimation of the inverse covariance matrix of X nor estimation of the covariance matrix of N. 
Keywords:  dimension reduction, non-Gaussian components, NGCA 
JEL:  C13 C14 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010050&r=ecm 
By:  Tatiana Komarova; Thomas Severini; Elie Tamer (Institute for Fiscal Studies and Northwestern University) 
Abstract:  <p>We introduce a notion of median uncorrelation that is a natural extension of mean (linear) uncorrelation. A scalar random variable Y is median uncorrelated with a k-dimensional random vector X if and only if the slope from an LAD regression of Y on X is zero. Using this simple definition, we characterize properties of median uncorrelated random variables, and introduce a notion of multivariate median uncorrelation. We provide measures of median uncorrelation that are similar to the linear correlation coefficient and the coefficient of determination. We also extend this median uncorrelation to other loss functions. Just as two-stage least squares exploits mean uncorrelation between an instrument vector and the error to derive consistent estimators for parameters in linear regressions with endogenous regressors, the main result of this paper shows how a median uncorrelation assumption between an instrument vector and the error can similarly be used to derive consistent estimators in these linear models with endogenous regressors. We also show how median uncorrelation can be used in linear panel models with quantile restrictions and in linear models with measurement errors.</p> 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:26/10&r=ecm 
By:  Shimotsu, Katsumi 
Abstract:  Semiparametric estimation of a bivariate fractionally cointegrated system is considered. We propose a two-step procedure that accommodates both (asymptotically) stationary (d < 1/2) and nonstationary (d >= 1/2) stochastic trends and/or equilibrium errors. A tapered version of the local Whittle estimator of Robinson (2008) is used as the first-stage estimator, and the second-stage estimator employs the exact local Whittle approach of Shimotsu and Phillips (2005). The consistency and asymptotic distribution of the two-step estimator are derived. The estimator of the memory parameters has the same Gaussian asymptotic distribution in both the stationary and nonstationary cases. The convergence rate and the asymptotic distribution of the estimator of the cointegrating vector are affected by the difference between the memory parameters. Further, this estimator has a Gaussian asymptotic distribution when the difference between the memory parameters is less than 1/2. 
Keywords:  discrete Fourier transform, fractional cointegration, long memory, nonstationarity, semiparametric estimation, Whittle likelihood 
JEL:  C22 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201011&r=ecm 
By:  Arie Beresteanu (Institute for Fiscal Studies and Duke); Ilya Molchanov; Francesca Molinari (Institute for Fiscal Studies and Cornell University) 
Abstract:  <p>We provide a tractable characterization of the sharp identification region of the parameters θ in a broad class of incomplete econometric models. Models in this class have set-valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous-move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ<sub>1</sub>, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in Θ<sub>1</sub>. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.</p><p>This paper is a revised version of CWP27/09.</p> 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:25/10&r=ecm 
By:  Nicolas Debarsy (CERPE  Centre de Recherches en Economie Régionale et Politique Economique  Facultés Universitaires Notre Dame de la Paix); Cem Ertur (LEO  Laboratoire d'économie d'Orleans  CNRS : UMR6221  Université d'Orléans); James P. Lesage (Texas State University  Texas State University) 
Abstract:  There is a great deal of literature regarding the asymptotic properties of various approaches to estimating simultaneous space-time panel models, but little attention has been paid to how the model estimates should be interpreted. The motivation for use of space-time panel models is that they can provide us with information not available from cross-sectional spatial regressions. LeSage and Pace (2009) show that cross-sectional simultaneous spatial autoregressive models can be viewed as a limiting outcome of a dynamic space-time autoregressive process. A valuable aspect of dynamic space-time panel data models is that the own- and cross-partial derivatives that relate changes in the explanatory variables to those that arise in the dependent variable are explicit. This allows us to employ parameter estimates from these models to quantify dynamic responses over time and space as well as space-time diffusion impacts. We illustrate our approach using the demand for cigarettes over the 30-year period 1963-1992, where the motivation for spatial dependence is a bootlegging effect whereby buyers of cigarettes near state borders purchase in neighboring states if there is a price advantage to doing so. 
Keywords:  Dynamic space-time panel data model; MCMC estimation; dynamic responses over time and space 
Date:  2010–08–25 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00525740_v1&r=ecm 
By:  Croux, C.; Gelper, S.; Mahieu, K. (Tilburg University, Center for Economic Research) 
Abstract:  This article presents a control chart for time series data, based on the one-step-ahead forecast errors of the Holt-Winters forecasting method. We use robust techniques to prevent outliers from affecting the estimation of the control limits of the chart. Moreover, robustness is important to maintain the reliability of the control chart after the occurrence of alarm observations. The properties of the new control chart are examined in a simulation study and in a real data example. 
Keywords:  Control chart; Holt-Winters; Nonstationary time series; Outlier detection; Robustness; Statistical process control. 
JEL:  C44 C53 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2010107&r=ecm 
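The idea of charting one-step-ahead forecast errors with robustly estimated limits can be sketched as follows. This is an illustration under our own simplifying assumptions, not the authors' method: it uses Holt's linear-trend smoothing (Holt-Winters without the seasonal component) with fixed smoothing constants, and sets control limits from the median and MAD of an in-control window of errors so that outliers do not inflate the limits.

```python
import numpy as np

def holt_forecast_errors(y, alpha=0.3, beta=0.1):
    """One-step-ahead forecast errors from Holt's linear trend method."""
    level, trend = y[0], y[1] - y[0]
    errors = []
    for t in range(2, len(y)):
        forecast = level + trend
        errors.append(y[t] - forecast)
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return np.array(errors)

def control_limits(errors, k=3.0):
    """Robust limits: median +/- k * (scaled MAD). The 1.4826 factor makes
    the MAD consistent for the standard deviation under normality."""
    med = np.median(errors)
    mad = 1.4826 * np.median(np.abs(errors - med))
    return med - k * mad, med + k * mad

# Toy series: linear trend plus noise, with one shifted observation.
rng = np.random.default_rng(1)
y = 10 + 0.5 * np.arange(200) + rng.normal(0, 1, 200)
y[150] += 8.0                           # out-of-control point at t = 150
e = holt_forecast_errors(y)             # e[i] is the error for y[i + 2]
lo, hi = control_limits(e[:100])        # limits from an in-control window
alarms = np.where((e > hi) | (e < lo))[0]
```

The shifted observation at t = 150 shows up as a large forecast error (index 148 of `e`) and triggers an alarm, while the robust limits stay close to what the clean noise level implies.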
By:  R. KRUSE; M. FRÖMMEL; L. MENKHOFF; P. SIBBERTSEN 
Abstract:  Nonlinear modeling of adjustments to purchasing power parity has recently gained much attention. However, a huge body of the empirical literature applies ESTAR models and neglects the existence of other competing nonlinear models. Among these, the Markov Switching AR (MSAR) model has a strong substantiation in international finance. Our contribution to the literature is fivefold. First, we compare ESTAR and MSAR models from a unit root perspective. Second, to this end, we propose a new unit root test against MSAR. Third, we study the case of misspecified alternatives in a Monte Carlo setup with real-world parameter constellations: the ESTAR unit root test is not indicative, while the MSAR unit root test is robust. Fourth, we consider the case of correctly specified alternatives and observe low power for the ESTAR unit root test but not for the MSAR test. Fifth, an empirical application to real exchange rates suggests that they may indeed be explained by Markov Switching dynamics rather than ESTAR. 
Keywords:  Real exchange rates, unit root test, ESTAR, Markov Switching, PPP 
JEL:  C12 C22 F31 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:10/667&r=ecm 
By:  Andrzej Jarosz 
Abstract:  I apply the method of planar diagrammatic expansion to solve the problem of finding the mean spectral density of the non-Hermitian time-lagged covariance estimator for a system of i.i.d. Gaussian random variables. I confirm the result in a much simpler way using a recent conjecture about non-Hermitian random matrix models with rotationally symmetric spectra. I conjecture and test numerically a form of finite-size corrections to the mean spectral density featuring the complementary error function. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1010.2981&r=ecm 
By:  Richard Ashley 
Abstract:  The volatility clustering frequently observed in financial/economic time series is often ascribed to GARCH and/or stochastic volatility models. This paper demonstrates the usefulness of reconceptualizing the usual definition of conditional heteroscedasticity as the (h = 1) special case of h-step-ahead conditional heteroscedasticity, where the conditional volatility in period t depends on observable variables up through period t - h. Here it is shown that, for h > 1, h-step-ahead conditional heteroscedasticity arises, necessarily and endogenously, from nonlinear serial dependence in a time series; whereas one-step-ahead conditional heteroscedasticity (i.e., h = 1) requires multiple and heterogeneously skedastic innovation terms. Consequently, the best response to observed volatility clustering may often be to model the nonlinear serial dependence which is likely causing it, rather than 'tacking on' an ad hoc volatility model. Even where such nonlinear modeling is infeasible, or where volatility is quantified using, say, a model-free implied volatility measure rather than squared returns, these results suggest a reconsideration of the usefulness of lag-one terms in volatility models. An application to observed daily stock returns is given. 
Keywords:  nonlinearity; nonlinear serial dependence; conditional heteroscedasticity; ARCH models; GARCH models. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:vpi:wpaper:e0723&r=ecm 
By:  Mapa, Dennis S.; Cayton, Peter Julian; Lising, Mary Therese 
Abstract:  Financial institutions hold risks in their investments that can potentially affect their ability to serve their clients. For banks to weigh their risks, the Value-at-Risk (VaR) methodology is used, which involves studying the distribution of losses and formulating a statistic from this distribution. From the myriad of models, this paper proposes a method of formulating VaR using the Generalized Pareto distribution (GPD) with time-varying parameters driven by explanatory variables (TiVEx), in a peaks-over-thresholds (POT) model. The time-varying parameters are linked to the linear predictor variables through link functions. To estimate the parameters of the linear predictors, maximum likelihood estimation is used, with the time-varying parameters substituted into the likelihood function of the GPD. The test series used for the paper was the Philippine Peso-US Dollar exchange rate over the horizon January 2, 1997 to March 13, 2009. Explanatory variables used were GARCH volatilities, quarter dummies, the number of holiday-weekends passed, and an annual trend. Three selected permutations of TiVEx-POT models obtained by dropping covariates were also estimated. Results show that econometric models and static POT models performed better in predicting losses from exchange rate risk, but simple TiVEx models have potential as part of the VaR modelling philosophy, since they have consistent green status on the number of exceptions and lower quadratic loss values. 
Keywords:  Value-at-Risk; Extreme Value Theory; Generalized Pareto Distribution; Time-Varying Parameters; Use of Explanatory Variables; GARCH modeling; Peaks-over-Thresholds Model 
JEL:  G12 C53 C22 C01 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25772&r=ecm 
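The static POT baseline the paper compares against can be sketched compactly. The snippet below is an illustration under our own assumptions, not the paper's estimator: it fits the GPD to threshold exceedances by the method of moments (the paper uses maximum likelihood with time-varying parameters) and inverts the standard POT tail formula for the VaR quantile.

```python
import numpy as np

def gpd_pot_var(losses, threshold, p=0.99):
    """Static peaks-over-thresholds VaR sketch.

    Method-of-moments GPD fit to the exceedances over `threshold`
    (valid for shape xi < 1/2), then the usual POT quantile formula
    VaR_p = u + (sigma/xi) * (((1 - p)/zeta)^(-xi) - 1),
    where zeta is the empirical exceedance probability.
    """
    exc = losses[losses > threshold] - threshold
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)          # MoM shape estimate
    sigma = m * (1.0 - xi)                # MoM scale estimate
    zeta = len(exc) / len(losses)
    var_p = threshold + (sigma / xi) * (((1 - p) / zeta) ** (-xi) - 1.0)
    return var_p, xi, sigma

# Sanity check on exponential losses (a GPD tail with shape xi = 0):
# the true 99% quantile of Exp(1) is -ln(0.01), about 4.605.
rng = np.random.default_rng(7)
losses = rng.exponential(scale=1.0, size=100_000)
var99, xi_hat, sigma_hat = gpd_pot_var(losses, threshold=2.0, p=0.99)
```

The TiVEx extension described in the abstract would replace the constant `xi` and `sigma` with link functions of the explanatory variables (GARCH volatility, calendar dummies, and so on) inside the GPD likelihood.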
By:  Fabio Fornari (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Wolfgang Lemke (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) 
Abstract:  We forecast recession probabilities for the United States, Germany and Japan. The predictions are based on the widely used probit approach, but the dynamics of the regressors are endogenized using a VAR. The combined model is called a 'ProbVAR'. At any point in time, the ProbVAR allows one to generate conditional recession probabilities for any sequence of forecast horizons, while remaining as easy to implement as traditional probit regressions. The slope of the yield curve turns out to be a successful predictor, but forecasts can be markedly improved by adding other financial variables such as the short-term interest rate, stock returns or corporate bond spreads. The forecasting performance is very good for the United States: in the out-of-sample exercise (1995 to 2009), the best ProbVAR specification correctly identifies the ex-post classification of recessions and non-recessions 95% of the time for the one-quarter forecast horizon and 87% of the time for the four-quarter horizon. Moreover, the ProbVAR turns out to significantly improve upon survey forecasts. Relative to the good performance reached for the United States, the ProbVAR forecasts are slightly worse for Germany and considerably inferior for Japan. JEL Classification: C25, C32, E32, E37. 
Keywords:  Recessions, forecasting, probit, VAR. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101255&r=ecm 
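The probit-on-VAR-forecast mechanics can be sketched in a few lines. This is a plug-in illustration under our own assumed coefficients, not the paper's estimated model: it iterates a VAR(1) for the regressors h steps ahead and feeds the point forecast into a probit, ignoring the VAR forecast uncertainty that a full treatment would integrate over.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probvar_recession_prob(x_t, A, c, beta, beta0, h):
    """Plug-in 'ProbVAR'-style forecast: iterate x_{t+1} = c + A x_t
    forward h steps, then evaluate the probit for the recession
    indicator at the forecast of the regressors."""
    x = np.asarray(x_t, dtype=float)
    for _ in range(h):
        x = c + A @ x
    return norm_cdf(beta0 + beta @ x)

# Hypothetical numbers: regressors = (yield-curve slope, stock return).
A = np.array([[0.9, 0.0],
              [0.1, 0.3]])
c = np.array([0.1, 0.0])
beta = np.array([-1.5, -0.5])   # steeper slope -> lower recession prob
beta0 = -1.0
p1 = probvar_recession_prob([1.0, 0.02], A, c, beta, beta0, h=1)
p4 = probvar_recession_prob([1.0, 0.02], A, c, beta, beta0, h=4)
```

With these (made-up) dynamics the slope sits at its steady state, so recession probabilities stay low and decline slightly with the horizon; the point of the exercise is only that one model delivers a full term structure of probabilities.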
By:  Nikolai Dokuchaev 
Abstract:  This short note suggests a heuristic method for detecting dependence in random time series that can be used when this dependence is relatively weak and such that traditional methods are not effective. The method requires comparing certain functionals of the sample characteristic functions with the same functionals computed for benchmark time series with a known degree of correlation. Some experiments for financial time series are presented. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1010.2576&r=ecm 
By:  K. DE WITTE; M. VERSCHELDE 
Abstract:  Various applications require multilevel settings (e.g., for estimating fixed and random effects). However, due to the curse of dimensionality, the literature on nonparametric efficiency analysis has not yet explored the estimation of performance drivers in highly multilevel settings; it lacks models particularly designed for multilevel estimation. This paper suggests a semiparametric two-stage framework in which, in a first stage, nonparametric efficiency estimators are determined, so that no a priori information on the production possibility set is required. In a second stage, a semiparametric Generalized Additive Mixed Model (GAMM) examines the sign and significance of both discrete and continuous background characteristics. The proper working of the procedure is illustrated with simulated data. Finally, the model is applied to real-life data. In particular, using the proposed robust two-stage approach, we examine a claim by the Dutch Ministry of Education that three of the twelve Dutch provinces provide lower-quality education. When properly controlling for abilities, background variables, peer group and ability track effects, we do not observe differences in educational attainment among the provinces. 
Keywords:  Productivity estimation; Multilevel setting; Generalized Additive Mixed Model; Education; Social segregation 
JEL:  C14 C25 I21 
Date:  2010–07 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:10/657&r=ecm 
By:  Antonio Bassanetti (Bank of Italy); Michele Caivano (Bank of Italy); Alberto Locarno (Bank of Italy) 
Abstract:  The aim of the paper is to estimate a reliable quarterly time series of potential output for the Italian economy, exploiting four alternative approaches: a Bayesian unobserved component method, a univariate time-varying autoregressive model, a production function approach and a structural VAR. Based on a wide range of evaluation criteria, all methods generate output gaps that accurately describe the Italian business cycle over the past three decades. All output gap measures are subject to non-negligible revisions when new data become available. Nonetheless, they still prove to be informative about the current cyclical phase and, unlike the evidence reported in most of the literature, helpful at predicting inflation compared with simple benchmarks. We also assess the performance of output gap estimates obtained by combining the four original indicators, using either equal weights or Bayesian averaging, showing that the resulting measures (i) are less sensitive to revisions; (ii) are at least as good as the originals at tracking business cycle fluctuations; and (iii) are more accurate as inflation predictors. 
Keywords:  potential output, business cycle, Phillips curve, output gap 
JEL:  E37 C52 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_771_10&r=ecm 
By:  Marina Theodosiou (Central Bank of Cyprus) 
Abstract:  In the current paper, we investigate the bias introduced through calendar time sampling of the price process of financial assets. We analyze results from a Monte Carlo simulation which point to the conclusion that the multitude of jumps reported in the literature might be, to a large extent, an artifact of the bias introduced through the previous-tick sampling scheme used for the time homogenization of the price series. We advocate the use of Akima cubic splines as an alternative to the popular previous-tick method. Monte Carlo simulation results confirm the suitability of Akima cubic splines in high-frequency applications and their advantages over other calendar time sampling schemes, such as linear interpolation and the previous-tick method. Empirical results from the FX market complement the analysis. 
Keywords:  Sampling schemes, previous-tick method, quadratic variation, jumps, stochastic volatility, realized measures, high-frequency data 
JEL:  C12 C14 G10 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cyb:wpaper:20107&r=ecm 
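The previous-tick scheme under scrutiny here is simple to state: at each calendar-time grid point, take the most recent observed price. A minimal numpy sketch (our own illustration; SciPy's `Akima1DInterpolator` would be one drop-in way to implement the Akima-spline alternative the paper advocates):

```python
import numpy as np

def previous_tick_sample(tick_times, tick_prices, grid):
    """Calendar-time sampling by the previous-tick method: at each grid
    point, return the price of the most recent tick at or before it."""
    idx = np.searchsorted(tick_times, grid, side="right") - 1
    idx = np.clip(idx, 0, len(tick_prices) - 1)  # guard grid points before the first tick
    return tick_prices[idx]

# Irregular ticks mapped onto a regular 1-second grid.
t = np.array([0.0, 0.4, 1.7, 2.1, 3.9])
p = np.array([100.0, 100.2, 99.9, 100.1, 100.3])
grid = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sampled = previous_tick_sample(t, p, grid)
# grid point 1.0 -> last tick at 0.4 (100.2); 2.0 -> tick at 1.7 (99.9); etc.
```

The step-function character of this scheme is the source of the bias the abstract discusses: the sampled path is artificially flat between ticks and then jumps, which jump-detection statistics can mistake for genuine price jumps.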
By:  Lam, K.Y.; Koning, A.J.; Franses, Ph.H.B.F. 
Abstract:  In this paper we consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we do not need to compute high-dimensional integrals. We extend the approximation technique proposed by Henery (1981) for the Thurstone-Mosteller-Daniels model to any Thurstone order statistics model, and we show that this yields a unified approach. Moreover, our approach also allows for the analysis of any partial ranking. Partial rankings are essential in practical conjoint analysis for collecting data efficiently and relieving respondents' task burden. 
Keywords:  conjoint experiments; partial rankings; Thurstone order statistics model 
Date:  2010–10–12 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765020937&r=ecm 
By:  Erwan Gautier (LEMNA  Laboratoire d'économie et de management de Nantes Atlantique  Université de Nantes : EA4272, BF  Banque de France  Banque de France); Hervé Le Bihan (BF  Banque de France  Banque de France) 
Abstract:  A recent strand of empirical work uses (S,s) models with time-varying stochastic bands to describe infrequent adjustments of prices and other variables. The present paper examines some properties of this model, which encompasses most microfounded adjustment rules rationalizing infrequent changes. We illustrate that this model is flexible enough to fit data characterized by infrequent adjustment and variable adjustment size. We show that, to the extent that there is variability in the size of adjustments (e.g., if both small and large price changes are observed), (i) a large band parameter is needed to fit the data, and (ii) the average band of inaction underlying the model may differ strikingly from the typical observed size of adjustment. The paper thus provides a rationalization for a recurrent empirical result: very large estimated values for the parameters measuring the band of inaction. 
Date:  2010–10–14 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00526295_v1&r=ecm 
By:  Richard Blundell (Institute for Fiscal Studies and University College London); Rosa Matzkin (Institute for Fiscal Studies and UCLA) 
Abstract:  <p>The control function approach (Heckman and Robb (1985)) in a system of linear simultaneous equations provides a convenient procedure to estimate one of the functions in the system using reduced form residuals from the other functions as additional regressors. The conditions on the structural system under which this procedure can be used in nonlinear and nonparametric simultaneous equations has thus far been unknown. In this note, we define a new property of functions called control function separability and show it provides a complete characterization of the structural systems of simultaneous equations in which the control function procedure is valid.</p> 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:28/10&r=ecm 
By:  Hanming Fang (Department of Economics, University of Pennsylvania); Yang Wang (Department of Economics, Lafayette College) 
Abstract:  We extend the semiparametric estimation method for dynamic discrete choice models using Hotz and Miller’s (1993) conditional choice probability (CCP) approach to the setting where individuals may have hyperbolic discounting time preferences and may be naive about their time inconsistency. We illustrate the proposed estimation method with an empirical application of adult women’s decisions to undertake mammography to evaluate the importance of present bias and naivety in the underutilization of this preventive health care. Our results show evidence for both present bias and naivety. 
Keywords:  Time Inconsistent Preferences, Intrapersonal Games, Dynamic Discrete Choices, Preventive Care 
JEL:  C14 I1 
Date:  2010–10–04 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:10033&r=ecm 
By:  Steven F. Koch (Department of Economics, University of Pretoria) 
Abstract:  The research presented here considers the performance of the Fractional Multinomial Logit (FMNL) model in explaining expenditure shares using data from the 2005/06 South African Income and Expenditure Survey. The results suggest that the FMNL performs favourably when the dataset is large enough, but that it does not perform as well when the dataset is limited. Expenditure elasticities were also estimated and compared to those from a QUAIDS model. The resulting expenditure shares are fairly similar across model specifications; however, the FMNL model does incorporate additional curvature, which is easily observed when comparing the QUAIDS elasticities to the FMNL elasticities. 
Keywords:  Expenditure Shares, Fractional Multinomial Logit. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:201021&r=ecm 
By:  L. Spadafora; G. P. Berman; F. Borgonovi 
Abstract:  In the Black-Scholes context we consider the probability distribution function (PDF) of financial returns implied by the volatility smile, and we study the relation between the decay of its tails and the fitting parameters of the smile. We show that, using a scaling law derived from data, it is possible to obtain a new fitting procedure for the volatility smile that also accounts for the exponential decay of the real PDF of returns observed in financial markets. Our study finds application in risk management, where characterizing the tails of the financial returns PDF plays a central role in risk estimation. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1010.2184&r=ecm 
By:  Carolyn Heinrich; Alessandro Maffioli; Gonzalo Vázquez 
Abstract:  The use of microeconometric techniques to estimate the effects of development policies has become a common approach not only for scholars, but also for policymakers engaged in designing, implementing and evaluating projects in different fields. Among these techniques, Propensity-Score Matching (PSM) is increasingly applied in the policy evaluation community. This technical note provides a guide to the key aspects of implementing the PSM methodology for an audience of practitioners interested in understanding its applicability to specific evaluation problems. The note summarizes the basic conditions under which PSM can be used to estimate the impact of a program and the data required. It explains how the Conditional Independence Assumption, combined with the Overlap Condition, reduces selection bias when participation in a program is determined by observable characteristics. It also describes different matching algorithms and some tests to assess the quality of the matching. Case studies are used throughout to illustrate important concepts in impact evaluation and PSM. In the annexes, the note provides an outline of the main technical aspects and a list of statistical and econometric software for implementing PSM. 
Keywords:  Policy Evaluation, Microeconometrics, Propensity-Score Matching, Average Treatment Effect on the Treated, Development Effectiveness 
JEL:  C21 C40 H43 O12 O22 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:idb:spdwps:1005&r=ecm 
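The core matching step that such a guide describes can be sketched briefly. This is our own minimal illustration, not the note's code: one-nearest-neighbor matching on a given propensity score, with replacement and an optional caliper, to estimate the Average Treatment effect on the Treated (ATT). In practice the scores would first be estimated, e.g. by a logit of treatment on observables.

```python
import numpy as np

def att_nearest_neighbor(ps, y, treated, caliper=None):
    """ATT via 1-nearest-neighbor propensity-score matching with replacement.

    ps      : propensity scores (however estimated)
    y       : outcomes
    treated : 0/1 treatment indicator
    caliper : if given, treated units with no control within it are dropped
    """
    ps, y, treated = map(np.asarray, (ps, y, treated))
    ps_c, y_c = ps[treated == 0], y[treated == 0]
    effects = []
    for p_i, y_i in zip(ps[treated == 1], y[treated == 1]):
        d = np.abs(ps_c - p_i)
        j = np.argmin(d)
        if caliper is not None and d[j] > caliper:
            continue  # no acceptable match for this treated unit
        effects.append(y_i - y_c[j])
    return float(np.mean(effects))

# Toy data: selection into treatment depends on the score (so a naive
# treated-vs-control mean difference is biased), true unit effect = 2.0.
rng = np.random.default_rng(3)
ps = rng.uniform(0.1, 0.9, 2000)
treated = (rng.uniform(size=2000) < ps).astype(int)
y = 5 * ps + 2.0 * treated + rng.normal(0, 0.1, 2000)
att = att_nearest_neighbor(ps, y, treated)
```

Because matching compares treated and control units with (nearly) identical scores, the `5 * ps` confounding term cancels within each pair and the estimate recovers the true effect; this is the Conditional Independence Assumption plus Overlap Condition at work.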