
on Econometrics 
By:  Dennis Kristensen (Department of Economics, Columbia University, NY) 
Abstract:  We propose novel misspecification tests of semiparametric and fully parametric univariate diffusion models based on the estimators developed in Kristensen (Journal of Econometrics, 2010). We first demonstrate that, given a preliminary estimator of either the drift or the diffusion term in a diffusion model, nonparametric kernel estimators of the remaining term can be obtained. We then propose misspecification tests of semiparametric and fully parametric diffusion models that compare estimators of the transition density under the relevant null and alternative. The asymptotic distributions of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov bootstrap versions of the test statistics are proposed to improve on the finite-sample approximations. The finite-sample properties of the estimators are examined in a simulation study. 
Keywords:  diffusion process; kernel estimation; nonparametric; specification testing; semiparametric; transition density 
JEL:  C12 C13 C14 C22 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1010&r=ecm 
By:  BAUWENS, Luc (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); ROMBOUTS, Jeroen (Institute of Applied Economics at HEC Montréal) 
Abstract:  Change-point models are useful for modeling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate correctly the number of change points in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib's method. This method requires selecting a value in the parameter space at which the computation is done. We explain in detail how to perform Bayesian inference for a change-point dynamic regression model and how to compute its marginal likelihood. Motivated by our results from three empirical illustrations, a simulation study shows that Chib's method is robust with respect to the choice of the parameter value used in the computations, among the posterior mean, mode and quartiles. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood. 
Keywords:  BIC, change-point model, Chib's method, marginal likelihood 
JEL:  C11 C22 C53 
Date:  2009–10–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2009061&r=ecm 
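[Editor's note] For readers unfamiliar with Chib's method, the abstract's reference to "selecting a value in the parameter space" stems from the basic marginal likelihood identity (Chib, 1995), which holds at any point $\theta^{*}$:

```latex
\log m(y) \;=\; \log f(y \mid \theta^{*}) \;+\; \log \pi(\theta^{*}) \;-\; \log \pi(\theta^{*} \mid y)
```

Here $f$, $\pi(\theta)$ and $\pi(\theta \mid y)$ denote the likelihood, prior and posterior; the posterior ordinate $\pi(\theta^{*} \mid y)$ is estimated from the MCMC output, so $\theta^{*}$ is conventionally taken at a high-density point such as the posterior mean or mode. Since the identity holds at every $\theta^{*}$, the robustness across mean, mode and quartiles reported above is what one would hope for in practice.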
By:  Joanne S. Ercolani 
Abstract:  This paper considers a fractional noise model in continuous time and examines the asymptotic properties of a feasible frequency domain maximum likelihood estimator of the long memory parameter. The feasible estimator is one that maximises an approximation to the likelihood function (the approximation arises from the fact that the spectral density function involves the finite truncation of an infinite summation). It is therefore of interest to explore the conditions required of this approximation to ensure the consistency and asymptotic normality of the estimator. It is shown that the truncation parameter has to be a function of the sample size and that the optimal rate is different for stocks and flows and is a function of the long memory parameter itself. The results of a simulation exercise are provided to assess the small sample properties of the estimator. 
Keywords:  Continuous time models, long memory processes 
JEL:  C22 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:bir:birmec:1009&r=ecm 
By:  Davide Ferrari; Sandra Paterlini 
Abstract:  We consider a new robust parametric estimation procedure, which minimizes an empirical version of the Havrda-Charvát-Tsallis entropy. The resulting estimator adapts according to the discrepancy between the data and the assumed model by tuning a single constant q, which controls the trade-off between robustness and efficiency. The method is applied to expected return and volatility estimation of financial asset returns under multivariate normality. Theoretical properties, ease of implementability and empirical results on simulated and financial data make it a valid alternative to classic robust estimators and semiparametric minimum divergence methods based on kernel smoothing. 
Keywords:  q-entropy, robust estimation, power-divergence, financial returns 
JEL:  C13 G11 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:mod:depeco:0623&r=ecm 
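[Editor's note] The robustness-efficiency trade-off controlled by q can be illustrated in a few lines. The sketch below is not the authors' estimator: it assumes a maximum Lq-likelihood-type objective for a Gaussian location with unit scale, a contaminated sample, and a crude grid search, all purely for illustration. At q = 1 the objective reduces to the ordinary log-likelihood; q < 1 down-weights low-density observations such as outliers.

```python
import math
import random

random.seed(42)
# Contaminated sample: 95% N(0,1) observations plus 5% gross outliers at 10
data = [random.gauss(0, 1) for _ in range(190)] + [10.0] * 10

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lq(u, q):
    # Deformed logarithm L_q; recovers log(u) as q -> 1
    return math.log(u) if q == 1.0 else (u ** (1.0 - q) - 1.0) / (1.0 - q)

def mlq_location(sample, q):
    # Maximum Lq-likelihood estimate of the location (scale fixed at 1),
    # found by a crude grid search over [-1, 2]
    grid = [i / 100.0 for i in range(-100, 201)]
    return max(grid, key=lambda mu: sum(lq(normal_pdf(x, mu), q) for x in sample))

mle = mlq_location(data, q=1.0)     # q = 1: ordinary MLE, dragged toward the outliers
robust = mlq_location(data, q=0.8)  # q < 1: low-density points are down-weighted
print(mle, robust)
```

With this seed the q = 1 estimate sits well above zero (pulled by the outliers), while the q = 0.8 estimate stays near the true location of the clean component.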
By:  Delphine Cassart; Marc Hallin; Davy Paindaveine 
Abstract:  Rank-based inference and, in particular, R-estimation, is a red thread running through Jana Jurečková's entire scientific career, starting with her dissertation in 1967, where she laid the foundations of a point-estimation counterpart to Jaroslav Hájek's celebrated theory of rank tests. Cross-information quantities in that context play an essential role. In location/regression problems, these quantities take the form $\int_0^1 \varphi(u)\varphi_g(u)\,du$, where $\varphi$ is a score function and $\varphi_g(u) := -g'(G^{-1}(u))/g(G^{-1}(u))$ is the log-derivative of the unknown actual underlying density g computed at the quantile $G^{-1}(u)$; in other models, they involve more general scores. Such quantities appear in the local powers of rank tests and the asymptotic variance of R-estimators. Estimating them consistently is a delicate problem that has been extensively considered in the literature. We provide here a new, flexible, and very general method for that problem, which furthermore applies well beyond the traditional case of regression models. 
Keywords:  Rank tests, R-estimation, cross-information, local power, asymptotic variance. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2010_010&r=ecm 
By:  S. Bordignon; D. Raggi 
Abstract:  The goal of this paper is to analyze and forecast realized volatility through nonlinear and highly persistent dynamics. In particular, we propose a model that simultaneously captures long memory and nonlinearities, in which level and persistence shift through a Markov switching dynamics. We consider an efficient Markov chain Monte Carlo (MCMC) algorithm to estimate parameters, the latent process and predictive densities. The in-sample results show that both long memory and nonlinearities are significant and improve the description of the data. The out-of-sample results at several forecast horizons show that introducing these nonlinearities produces superior forecasts over those obtained from nested models. 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:bol:bodewp:694&r=ecm 
By:  Yves Dominicy; David Veredas 
Abstract:  We introduce an inference method based on quantile matching, which is useful in situations where the density function does not have a closed form (but is simple to simulate) and/or moments do not exist. Functions of theoretical quantiles, which depend on the parameters of the assumed probability law, are matched with sample quantiles, which depend on the observations. Since the theoretical quantiles may not be available analytically, the optimization is based on simulations. We illustrate the method with the estimation of alpha-stable distributions. A thorough Monte Carlo study and an illustration with 22 financial indexes show the usefulness of the method. 
Keywords:  Quantiles, simulated methods, alpha-stable distribution, fat tails. 
JEL:  C32 G14 E44 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2010_008&r=ecm 
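[Editor's note] The idea of matching sample quantiles against simulated theoretical quantiles can be sketched in a few lines. The toy below is an illustrative assumption, not the authors' implementation: it uses a Gaussian target (so we can check the answer), three quantiles, common random numbers for the simulation, and a crude grid search in place of a proper optimizer.

```python
import random

random.seed(0)
# "Observed" data from the law we pretend has no tractable density: N(2, 3)
observed = [random.gauss(2.0, 3.0) for _ in range(2000)]
# Base draws used to simulate theoretical quantiles; reusing them across
# parameter values (common random numbers) keeps the objective stable
base = [random.gauss(0.0, 1.0) for _ in range(2000)]

def quantiles(xs, probs=(0.25, 0.5, 0.75)):
    s = sorted(xs)
    return [s[int(p * (len(s) - 1))] for p in probs]

target = quantiles(observed)  # sample quantiles to be matched

def objective(mu, sigma):
    # Squared distance between sample and simulated quantiles
    sim = quantiles([mu + sigma * z for z in base])
    return sum((a - b) ** 2 for a, b in zip(target, sim))

# Crude grid search; a real application would use a proper optimizer
grid = [(m / 10, s / 10) for m in range(0, 41, 2) for s in range(10, 51, 2)]
mu_hat, sigma_hat = min(grid, key=lambda p: objective(*p))
print(mu_hat, sigma_hat)
```

The recovered pair lands close to the true (2.0, 3.0), showing how the quantile-matching objective identifies location and scale without ever evaluating a density.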
By:  BOUEZMARNI, Taoufik; ROMBOUTS, Jeroen (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); TAAMOUTI, Abderrahim 
Keywords:  nonparametric tests, conditional independence, Granger noncausality, Bernstein density copula, bootstrap, finance, volatility asymmetry, leverage effect, volatility feedback effect, macroeconomics 
JEL:  C12 C14 C15 C19 G1 G12 E3 E4 E52 
Date:  2009–06–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2009041&r=ecm 
By:  HEINEN, Andréas (Departamento de Estadistica, Universidad Carlos III de Madrid, Spain); VALDESOGO, Alfonso (CREA, University of Luxembourg, Luxembourg) 
Abstract:  We propose a new dynamic model for volatility and dependence in high dimensions that allows for departures from the normal distribution, both in the marginals and in the dependence. The dependence is modeled with a dynamic canonical vine copula, which can be decomposed into a cascade of bivariate conditional copulas. Due to this decomposition, the model does not suffer from the curse of dimensionality. The canonical vine autoregressive (CAVA) model captures asymmetries in the dependence structure. The model is applied to 95 S&P500 stocks. For the marginal distributions, we use non-Gaussian GARCH models that are designed to capture skewness and kurtosis. By conditioning on the market index and on sector indexes, the dependence structure is much simplified and the model can be considered as a nonlinear version of the CAPM or of a market model with sector effects. The model is shown to deliver good forecasts of Value-at-Risk. 
Keywords:  asymmetric dependence, high dimension, multivariate copula, multivariate GARCH, Value-at-Risk 
JEL:  C32 C53 G10 
Date:  2009–11–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2009069&r=ecm 
By:  ROMBOUTS, Jeroen V.K.; STENTOFT, Lars 
Keywords:  Bayesian inference, option pricing, finite mixture models, outofsample prediction, GARCH models 
JEL:  C11 C15 C22 G13 
Date:  2009–03–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2009013&r=ecm 
By:  Wang, Yafeng; Graham, Brett 
Abstract:  We propose a data-constrained generalized maximum entropy (GME) estimator for discrete sequential-move games of perfect information, which can be easily implemented with optimization software offering high-level interfaces such as GAMS. Unlike most other work on the estimation of complete-information games, the method we propose is data constrained and requires neither simulation nor normally distributed random preference shocks. We formulate the GME estimation as a (convex) mixed-integer nonlinear optimization problem (MINLP), a class of problems for which solution methods have developed considerably over the last few years. The model is identified with only weak scale and location normalizations, and Monte Carlo evidence demonstrates that the estimator can perform well in moderately sized samples. As an application, we study social security acceptance decisions in dual-career households. 
Keywords:  Game-Theoretic Econometric Models; Sequential-Move Game; Generalized Maximum Entropy; Mixed-Integer Nonlinear Programming 
JEL:  C13 C01 
Date:  2009–12–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:21331&r=ecm 
By:  Adam Clements (QUT); Annastiina Silvennoinen (QUT) 
Abstract:  Volatility forecasts are important inputs into financial decisions such as portfolio allocation. While the forecasts are often used in such economic applications, the parameters of these models are traditionally estimated within a statistical framework. This leads to an inconsistency between the loss function under which the model is estimated and that under which it is applied. This paper examines the impact of the choice of loss function on model performance in a portfolio allocation setting. It is found that employing a utility-based estimation criterion is preferred over likelihood estimation; however, a simple mean squared error criterion performs in a similar manner. These findings have obvious implications for the manner in which volatility models are estimated when one wishes to inform the portfolio allocation decision. 
Keywords:  Volatility, utility, portfolio allocation, realized volatility, MIDAS 
JEL:  C22 G11 
Date:  2010–03–10 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2010_01&r=ecm 
By:  Ivan Faiella (Bank of Italy) 
Abstract:  While there is wide consensus on using survey weights when estimating population parameters, it is not clear what to do when using survey data for analytic purposes (i.e. with the objective of making inference about model parameters). In the model-based framework (MB), under the hypothesis that the underlying model is correctly specified, using survey weights in regression analysis potentially involves a loss of efficiency. In a design-based perspective (DB), weighted estimates are both design consistent and can provide robustness to model misspecification. In this paper, I suggest that the choice of whether to use survey weights can be framed as a regression diagnostic: the survey data analyst should check whether the design information included in the survey weights has explanatory power in describing the model outcome. To accomplish this task, a set of econometric tests is suggested, which could be supplemented by the analysis of model features under the two strategies. 
Keywords:  survey methods, model evaluation and testing 
JEL:  C42 C52 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_739_10&r=ecm 
By:  Timotheos Angelidis; Alexandros Benos; Stavros Degiannakis 
Abstract:  We evaluate the performance of an extensive family of ARCH models in modelling daily Value-at-Risk (VaR) of perfectly diversified portfolios in five stock indices, using a number of distributional assumptions and sample sizes. We find, first, that leptokurtic distributions are able to produce better one-step-ahead VaR forecasts; second, the choice of sample size is important for the accuracy of the forecast, whereas the specification of the conditional mean makes little difference. Finally, the ARCH structure producing the most accurate forecasts is different for every portfolio and specific to each equity index. 
Keywords:  Value at Risk, GARCH estimation, Backtesting, Volatility forecasting, Quantile Loss Function. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:uop:wpaper:0048&r=ecm 
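[Editor's note] To see why leptokurtic innovation distributions matter for one-step-ahead VaR, recall that given a volatility forecast, the alpha-level VaR is minus the alpha-quantile of the assumed return distribution. The sketch below is illustrative, not from the paper: the 2% daily volatility forecast and the Student-t with 5 degrees of freedom are assumptions, and the t quantile is obtained by simulation so the snippet needs only the standard library.

```python
import math
import random
import statistics

alpha = 0.01          # 1% VaR level
sigma_fcst = 0.02     # hypothetical one-step-ahead volatility forecast (2% daily)

# Normal innovations: VaR is minus the alpha-quantile of N(0, sigma^2)
z_alpha = statistics.NormalDist().inv_cdf(alpha)
var_normal = -sigma_fcst * z_alpha

# Student-t(5) innovations: simulate t = Z / sqrt(chi2_nu / nu), take the
# empirical alpha-quantile, then rescale so the innovations have unit variance
random.seed(7)
nu = 5
draws = sorted(
    random.gauss(0, 1) / math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(nu)) / nu)
    for _ in range(100_000)
)
t_alpha = draws[int(alpha * len(draws))] / math.sqrt(nu / (nu - 2))
var_t = -sigma_fcst * t_alpha

print(var_normal, var_t)  # the fat-tailed t implies the larger 1% VaR
```

At the same volatility forecast, the fat-tailed t innovation produces a noticeably larger 1% VaR than the normal, which is the mechanism behind the abstract's first finding.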
By:  Antonio Merlo (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania) 
Abstract:  Stochastic sequential bargaining games (Merlo and Wilson (1995, 1998)) have found wide applications in various fields including political economy and macroeconomics due to their flexibility in explaining delays in reaching an agreement. In this paper, we present new results in nonparametric identification of such models under different scenarios of data availability. First, we give conditions for an observed distribution of players’ decisions and agreed allocations of the surplus, or the "cake", to be rationalized by a sequential bargaining model. We show the common discount rate is identified, provided the surplus is monotonic in unobservable states (USV) given observed ones (OSV). Then the mapping from states to surplus, or the "cake function", is also recovered under appropriate normalizations. Second, when the cake is only observed under agreements, the discount rate and the impact of observable states on the cake can be identified, if the distribution of USV satisfies some exclusion restrictions and the cake is additively separable in OSV and USV. Third, if data only report when an agreement is reached but never report the size of the cake, we propose a simple algorithm that exploits shape restrictions on the cake function and the independence of USV to recover all rationalizable probabilities for agreements under counterfactual state transitions. Numerical examples show the set of rationalizable counterfactual outcomes so recovered can be informative. 
Keywords:  Nonparametric identification, noncooperative bargaining, stochastic sequential bargaining, rationalizable counterfactual outcomes 
JEL:  C14 C35 C73 C78 
Date:  2009–10–15 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:10008&r=ecm 
By:  Jaromir Benes; Marianne Johnson; Kevin Clinton; Troy Matheson; Douglas Laxton 
Abstract:  This paper outlines a simple approach for incorporating extraneous predictions into structural models. The method allows the forecaster to combine predictions derived from any source in a way that is consistent with the underlying structure of the model. The method is flexible enough that predictions can be up-weighted or down-weighted on a case-by-case basis. We illustrate the approach using a small quarterly structural model and real-time data for the United States. 
Keywords:  Economic forecasting, Economic indicators, Economic models, Monetary policy 
Date:  2010–03–09 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:10/56&r=ecm 
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR 8174 - Université Panthéon-Sorbonne - Paris I; EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics); Justin Leroux (Institute for Applied Economics - HEC Montréal) 
Abstract:  We propose a novel methodology for forecasting chaotic systems which uses information on local Lyapunov exponents (LLEs) to improve upon existing predictors by correcting for their inevitable bias. Using simulations of the Rössler, Lorenz and Chua attractors, we find that accuracy gains can be substantial. Also, we show that the candidate selection problem identified in Guégan and Leroux (2009a,b) can be solved irrespective of the value of the LLEs. An important corollary follows: the focal value of zero, which traditionally distinguishes order from chaos, plays no role whatsoever when forecasting deterministic systems. 
Keywords:  Chaos theory, forecasting, Lyapunov exponent, Lorenz attractor, Rössler attractor, Chua attractor, Monte Carlo simulations. 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00462454_v1&r=ecm 
By:  Christian Belzil (Department of Economics, Ecole Polytechnique  CNRS : UMR7176  Polytechnique  X, ENSAE  École Nationale de la Statistique et de l'Administration Économique  ENSAE, IZA  Institute for the Study of Labor); J. Hansen (IZA  Institute for the Study of Labor, CIREQ  Centre Interuniversitaire de Recherche en Economie Quantitative, CIRANO  Montréal, Department of Economics, Concordia University  Concordia University) 
Abstract:  We investigate if, and under which conditions, the distinction between dictatorial and incentive-based policy interventions affects the capacity of Instrumental Variable (IV) methods to estimate the relevant treatment effect parameter of an outcome equation. The analysis is set in a nontrivial framework, in which the right-hand side variable of interest is affected by selectivity, and the error term is driven by a sequence of unobserved life-cycle endogenous choices. We show that, for a wide class of outcome equations, incentive-based policies may be designed so as to generate a sufficient degree of post-intervention randomization (a lesser degree of selection on individual endowments among the subpopulation affected). This helps the instrument fulfill the orthogonality condition. However, for the same class of outcome equations, dictatorial policies that enforce minimum consumption cannot meet this condition. We illustrate these concepts within a calibrated dynamic life-cycle model of human capital accumulation, and focus on the estimation of the returns to schooling using instruments generated from mandatory schooling reforms and education subsidies. We show how the nature of the skill accumulation process (substitutability vs. complementarity) may play a fundamental role in interpreting IV estimates of the returns to schooling. 
Keywords:  Returns to schooling, Instrumental Variable methods, Dynamic Discrete Choice, Dynamic Programming, Local Average Treatment Effects. 
Date:  2010–03–15 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00463877_v1&r=ecm 
By:  Dong Heon Kim (Department of Economics, Korea University) 
Abstract:  This paper characterizes the nonlinear relation between oil price changes and GDP growth, focusing on panel data for various industrialized countries. Toward this end, the paper extends a flexible nonlinear inference to the panel data analysis, where the random error components are incorporated into the flexible approach. The paper reports clear evidence of nonlinearity in the panel and confirms earlier claims in the literature: oil price increases are much more important than decreases, and previous upheaval in oil prices causes the marginal effect of any given oil price change to be reduced. Our results suggest that the nonlinear oil-macroeconomy relation is generally observable across industrialized countries and that it is desirable to use the nonlinear function of oil price changes for GDP forecasts. 
Keywords:  Oil shock; Nonlinear flexible inference; Panel data; Error components model; Economic fluctuation 
JEL:  E32 C33 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:iek:wpaper:1007&r=ecm 
By:  SBRANA, Giacomo; SILVESTRINI, Andrea 
Keywords:  contemporaneous aggregation, forecasting 
JEL:  C10 C32 C43 C52 
Date:  2009–03–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2009020&r=ecm 
By:  Angrist, Joshua (MIT); Pischke, Jörn-Steffen (London School of Economics) 
Abstract:  This essay reviews progress in empirical economics since Leamer's (1983) critique. Leamer highlighted the benefits of sensitivity analysis, a procedure in which researchers show how their results change with changes in specification or functional form. Sensitivity analysis has had a salutary but not a revolutionary effect on econometric practice. As we see it, the credibility revolution in empirical work can be traced to the rise of a design-based approach that emphasizes the identification of causal effects. Design-based studies typically feature either real or natural experiments and are distinguished by their prima facie credibility and by the attention investigators devote to making the case for a causal interpretation of the findings their designs generate. Design-based studies are most often found in the microeconomic fields of Development, Education, Environment, Labor, Health, and Public Finance, but are still rare in Industrial Organization and Macroeconomics. We explain why IO and Macro would do well to embrace a design-based approach. Finally, we respond to the charge that the design-based revolution has overreached. 
Keywords:  structural models, research design, natural experiments, quasi-experiments 
JEL:  C01 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp4800&r=ecm 