nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒03‒28
twenty-one papers chosen by
Sune Karlsson
Orebro University

  1. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models By Dennis Kristensen
  2. On marginal likelihood computation in change-point models By BAUWENS, Luc; ROMBOUTS, Jeroen
  3. On the Asymptotic Properties of a Feasible Estimator of the Continuous Time Long Memory Parameter By Joanne S. Ercolani
  4. Efficient and robust estimation for financial returns: an approach based on q-entropy By Davide Ferrari; Sandra Paterlini
  5. On the Estimation of Cross-Information Quantities in Rank-Based Inference By Delphine Cassart; Marc Hallin; Davy Paindaveine
  6. Long memory and nonlinearities in realized volatility: a Markov switching approach. By S. Bordignon; D. Raggi
  7. The Method of Simulated Quantiles By Yves Dominicy; David Veredas
  8. A nonparametric copula based test for conditional independence with applications to Granger causality By BOUEZMARNI, Taoufik; ROMBOUTS, Jeroen; TAAMOUTI, Abderrahim
  9. Asymmetric CAPM dependence for large dimensions: the Canonical Vine Autoregressive Model By HEINEN, Andréas; VALDESOGO, Alfonso
  10. Bayesian option pricing using mixed normal heteroskedasticity models By ROMBOUTS, Jeroen V.K.; STENTOFT, Lars
  11. Generalized Maximum Entropy estimation of discrete sequential move games of perfect information By Wang, Yafeng; Graham, Brett
  12. Estimation of a volatility model and portfolio allocation By Adam Clements; Annastiina Silvennoinen
  13. The use of survey weights in regression analysis By Ivan Faiella
  14. The Use of GARCH Models in VaR Estimation By Timotheos Angelidis; Alexandros Benos; Stavros Degiannakis
  15. Identification of Stochastic Sequential Bargaining Models, Second Version By Antonio Merlo; Xun Tang
  16. Structural Models in Real Time By Jaromir Benes; Marianne Johnson; Kevin Clinton; Troy Matheson; Douglas Laxton
  17. Predicting chaos with Lyapunov exponents: Zero plays no role in forecasting chaotic systems By Dominique Guegan; Justin Leroux
  18. The distinction between dictatorial and incentive policy interventions and its implication for IV estimation By Christian Belzil; J. Hansen
  19. What is an oil shock? Panel data evidence By Dong Heon Kim
  20. What do we know about comparing aggregate and disaggregate forecasts? By SBRANA, Giacomo; SILVESTRINI, Andrea
  21. The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics By Angrist, Joshua; Pischke, Jörn-Steffen

  1. By: Dennis Kristensen (Department of Economics, Columbia University, New York)
    Abstract: We propose novel misspecification tests of semiparametric and fully parametric univariate diffusion models based on the estimators developed in Kristensen (Journal of Econometrics, 2010). We first demonstrate that given a preliminary estimator of either the drift or the diffusion term in a diffusion model, nonparametric kernel estimators of the remaining term can be obtained. We then propose misspecification tests of semiparametric and fully parametric diffusion models that compare estimators of the transition density under the relevant null and alternative. The asymptotic distributions of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov bootstrap versions of the test statistics are proposed to improve on the finite-sample approximations. The finite sample properties of the estimators are examined in a simulation study. (An illustrative sketch of the kernel step follows this entry.)
    Keywords: diffusion process; kernel estimation; nonparametric; specification testing; semiparametric; transition density
    JEL: C12 C13 C14 C22
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1010&r=ecm
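    The kernel step described in item 1 can be illustrated with a generic, textbook-style estimator: regress normalized squared increments of the process on its lagged level with a Nadaraya-Watson smoother to recover the squared diffusion function. The sketch below (Python) is only a rough stand-in for the estimators of Kristensen (2010); the simulated model, bandwidth rule and evaluation grid are assumptions made for illustration.

```python
# Illustrative sketch: nonparametric kernel estimate of the squared diffusion
# function sigma^2(x) of a univariate diffusion dX = mu(X)dt + sigma(X)dW, via
# Nadaraya-Watson regression of normalized squared increments on lagged levels.
# Generic textbook estimator, not the exact estimator of the paper.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a mean-reverting process with state-dependent volatility
# (all model choices below are assumptions made for the illustration).
n, dt = 50_000, 1.0 / 252
x = np.empty(n)
x[0] = 0.1
mu = lambda s: 2.0 * (0.1 - s)                    # drift
sig = lambda s: 0.2 * np.sqrt(np.abs(s) + 0.05)   # diffusion (to be recovered)
for t in range(n - 1):
    x[t + 1] = x[t] + mu(x[t]) * dt + sig(x[t]) * np.sqrt(dt) * rng.standard_normal()

# Kernel regression of (X_{t+dt} - X_t)^2 / dt on X_t approximates sigma^2(x).
levels = x[:-1]
sq_incr = np.diff(x) ** 2 / dt
h = 1.06 * levels.std() * len(levels) ** (-1 / 5)  # rule-of-thumb bandwidth

def sigma2_hat(grid_x):
    w = np.exp(-0.5 * ((grid_x[:, None] - levels[None, :]) / h) ** 2)
    return (w * sq_incr).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), 20)
for g, est in zip(grid, sigma2_hat(grid)):
    print(f"x={g:6.3f}  sigma^2_hat={est:8.5f}  true={sig(g)**2:8.5f}")
```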
  2. By: BAUWENS, Luc (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); ROMBOUTS, Jeroen (Institute of Applied Economics at HEC Montréal)
    Abstract: Change-point models are useful for modeling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate correctly the number of change points in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib's method. This method requires selecting a value in the parameter space at which the computation is done. We explain in detail how to perform Bayesian inference for a change-point dynamic regression model and how to compute its marginal likelihood. Motivated by our results from three empirical illustrations, a simulation study shows that Chib's method is robust with respect to the choice of the parameter value used in the computations, among the posterior mean, mode and quartiles. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood. (A toy illustration of Chib's identity follows this entry.)
    Keywords: BIC, change-point model, Chib's method, marginal likelihood
    JEL: C11 C22 C53
    Date: 2009–10–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2009061&r=ecm
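    The marginal likelihood computation referred to in item 2 rests on Chib's basic identity, log m(y) = log f(y | theta*) + log p(theta*) - log p(theta* | y), evaluated at a chosen point theta*. The sketch below illustrates the identity in a conjugate normal toy model where every ordinate has a closed form, so the result can be checked against the analytic marginal likelihood; in change-point models the posterior ordinate would instead be estimated from (reduced-run) Gibbs output, and robustness to the choice of theta* is exactly what the paper studies. The model and prior below are assumptions of the sketch.

```python
# Illustrative sketch: Chib's marginal likelihood identity
#   log m(y) = log f(y | theta*) + log pi(theta*) - log pi(theta* | y)
# in a conjugate normal toy model (known variance), where every term has a
# closed form and the answer can be checked against the analytic marginal
# likelihood. Real change-point applications estimate the posterior ordinate
# from Gibbs output; the model below is only an assumption of the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma = 50, 1.0
m0, v0 = 0.0, 4.0                      # prior: mu ~ N(m0, v0)
y = rng.normal(loc=0.7, scale=sigma, size=n)

# Conjugate posterior mu | y ~ N(mn, vn)
vn = 1.0 / (1.0 / v0 + n / sigma**2)
mn = vn * (m0 / v0 + y.sum() / sigma**2)

theta_star = mn                        # evaluate the identity at the posterior mean

log_lik = stats.norm.logpdf(y, loc=theta_star, scale=sigma).sum()
log_prior = stats.norm.logpdf(theta_star, loc=m0, scale=np.sqrt(v0))
log_post = stats.norm.logpdf(theta_star, loc=mn, scale=np.sqrt(vn))
log_ml_chib = log_lik + log_prior - log_post

# Check: analytic marginal likelihood, y ~ N(m0*1, sigma^2*I + v0*J)
cov = sigma**2 * np.eye(n) + v0 * np.ones((n, n))
log_ml_exact = stats.multivariate_normal.logpdf(y, mean=np.full(n, m0), cov=cov)

print(f"Chib identity : {log_ml_chib:.4f}")
print(f"Analytic      : {log_ml_exact:.4f}")
```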
  3. By: Joanne S. Ercolani
    Abstract: This paper considers a fractional noise model in continuous time and examines the asymptotic properties of a feasible frequency domain maximum likelihood estimator of the long memory parameter. The feasible estimator is one that maximises an approximation to the likelihood function (the approximation arises from the fact that the spectral density function involves the finite truncation of an infinite summation). It is therefore of interest to explore the conditions required of this approximation to ensure the consistency and asymptotic normality of the estimator. It is shown that the truncation parameter has to be a function of the sample size and that the optimal rate is different for stocks and flows and is a function of the long memory parameter itself. The results of a simulation exercise are provided to assess the small sample properties of the estimator. (A related frequency-domain sketch follows this entry.)
    Keywords: Continuous time models, long memory processes
    JEL: C22
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:10-09&r=ecm
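    Item 3 concerns a continuous-time feasible frequency-domain estimator; as a loose, discrete-time analogue, the sketch below implements a standard local Whittle estimator of the memory parameter d, where the number of Fourier frequencies m plays the role of a truncation/bandwidth parameter that must grow with the sample size. The ARFIMA-type simulation, the choice m = n^0.65 and the optimizer are assumptions of the sketch, not the paper's estimator.

```python
# Illustrative sketch: a local Whittle estimator of the long memory parameter d,
# using the periodogram at the first m Fourier frequencies. This standard
# discrete-time estimator is shown only to illustrate how a frequency-domain
# objective and a truncation parameter m (growing with n) enter the problem;
# it is not the continuous-time feasible estimator of the paper.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def arfima0d0(n, d, n_ma=2000):
    """Approximate ARFIMA(0,d,0) via a truncated MA(inf) representation (assumption)."""
    k = np.arange(1, n_ma + 1)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))  # MA coefficients
    eps = rng.standard_normal(n + n_ma)
    return np.convolve(eps, psi, mode="full")[n_ma:n_ma + n]

n, d_true = 4096, 0.3
x = arfima0d0(n, d_true)

# Periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1, ..., n/2 - 1
j = np.arange(1, n // 2)
lam = 2 * np.pi * j / n
I = np.abs(np.fft.fft(x - x.mean())[1:n // 2]) ** 2 / (2 * np.pi * n)

m = int(n ** 0.65)                     # truncation/bandwidth choice (assumption)

def local_whittle(d):
    return np.log((lam[:m] ** (2 * d) * I[:m]).mean()) - 2 * d * np.log(lam[:m]).mean()

d_hat = minimize_scalar(local_whittle, bounds=(-0.49, 0.49), method="bounded").x
print(f"true d = {d_true}, local Whittle d_hat = {d_hat:.3f}")
```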
  4. By: Davide Ferrari; Sandra Paterlini
    Abstract: We consider a new robust parametric estimation procedure, which minimizes an empirical version of the Havrda-Charvát-Tsallis entropy. The resulting estimator adapts according to the discrepancy between the data and the assumed model by tuning a single constant q, which controls the trade-off between robustness and efficiency. The method is applied to expected return and volatility estimation of financial asset returns under multivariate normality. Theoretical properties, ease of implementation and empirical results on simulated and financial data make it a valid alternative to classic robust estimators and semi-parametric minimum divergence methods based on kernel smoothing. (A related Lq-likelihood sketch follows this entry.)
    Keywords: q-entropy, robust estimation, power-divergence, financial returns
    JEL: C13 G11
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:mod:depeco:0623&r=ecm
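    The q-entropy procedure in item 4 is closely related to maximum Lq-likelihood estimation, in which the logarithm in the likelihood is replaced by the deformed logarithm L_q(u) = (u^(1-q) - 1)/(1 - q); q = 1 recovers ordinary ML, while q < 1 downweights observations that are unlikely under the working model. The sketch below applies that idea to the mean and volatility of contaminated returns under a normal working model; the data, the value q = 0.9 and the exact objective are assumptions and need not coincide with the paper's procedure.

```python
# Illustrative sketch: robust estimation of a normal mean/volatility by maximum
# Lq-likelihood, replacing log u with L_q(u) = (u^(1-q) - 1)/(1 - q). With q = 1
# this is ordinary ML; q < 1 downweights observations that are unlikely under the
# working normal model (e.g. outliers). Data and q are assumptions of the sketch.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
r = np.concatenate([rng.normal(0.0005, 0.01, 950),   # "ordinary" returns
                    rng.normal(0.0, 0.08, 50)])      # heavy-tailed contamination

def Lq(u, q):
    return np.log(u) if q == 1.0 else (u ** (1.0 - q) - 1.0) / (1.0 - q)

def neg_lq_lik(params, q):
    mu, log_sig = params
    dens = stats.norm.pdf(r, loc=mu, scale=np.exp(log_sig))
    return -Lq(dens, q).sum()

start = np.array([r.mean(), np.log(r.std())])
mle = minimize(neg_lq_lik, start, args=(1.0,), method="Nelder-Mead").x
mlqe = minimize(neg_lq_lik, start, args=(0.9,), method="Nelder-Mead").x

print(f"MLE  : mu={mle[0]: .5f}  sigma={np.exp(mle[1]):.5f}")
print(f"MLqE : mu={mlqe[0]: .5f}  sigma={np.exp(mlqe[1]):.5f} (q=0.9, less outlier-driven)")
```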
  5. By: Delphine Cassart; Marc Hallin; Davy Paindaveine
    Abstract: Rank-based inference and, in particular, R-estimation, is a red thread running through Jana Jurečková's entire scientific career, starting with her dissertation in 1967, where she laid the foundations of a point-estimation counterpart to Jaroslav Hájek's celebrated theory of rank tests. Cross-information quantities in that context play an essential role. In location/regression problems, these quantities take the form ∫₀¹ ϕ(u) ϕ_g(u) du, where ϕ is a score function and ϕ_g(u) := −g′(G⁻¹(u))/g(G⁻¹(u)) is the log-derivative of the unknown actual underlying density g computed at the quantile G⁻¹(u); in other models, they involve more general scores. Such quantities appear in the local powers of rank tests and the asymptotic variance of R-estimators. Estimating them consistently is a delicate problem that has been extensively considered in the literature. We provide here a new, flexible, and very general method for that problem, which furthermore applies well beyond the traditional case of regression models. (A numerical illustration of the cross-information integral follows this entry.)
    Keywords: Rank tests, R-estimation, cross-information, local power, asymptotic variance.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2010_010&r=ecm
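    To make the object in item 5 concrete, the sketch below simply evaluates the cross-information integral for a known reference density g (standard normal) and Wilcoxon scores, where the closed-form value sqrt(3/pi) is available as a check. The paper's contribution is estimating such quantities when g is unknown; the score function, density and sign convention for phi_g are assumptions made here for illustration.

```python
# Illustrative sketch: numerical evaluation of a cross-information quantity
#   J = integral_0^1 phi(u) * phi_g(u) du,  phi_g(u) = -g'(G^{-1}(u)) / g(G^{-1}(u)),
# for Wilcoxon scores phi(u) = sqrt(12)*(u - 1/2) and a standard normal density g,
# in which case phi_g(u) = Phi^{-1}(u) and the exact value is sqrt(3/pi) ~ 0.977.
# The paper tackles the harder problem of estimating J when g is unknown; this
# sketch only evaluates the definition for a known g (an assumption made here).
import numpy as np
from scipy import stats
from scipy.integrate import quad

phi = lambda u: np.sqrt(12.0) * (u - 0.5)   # Wilcoxon score function

# Substitute u = G(x), with G the standard normal cdf: J = E[phi(G(X)) * X], X ~ N(0,1)
integrand = lambda x: phi(stats.norm.cdf(x)) * x * stats.norm.pdf(x)
J, _ = quad(integrand, -np.inf, np.inf)

print(f"numerical J = {J:.6f}, closed form sqrt(3/pi) = {np.sqrt(3.0 / np.pi):.6f}")
```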
  6. By: S. Bordignon; D. Raggi
    Abstract: The goal of this paper is to analyze and forecast realized volatility through nonlinear and highly persistent dynamics. In particular, we propose a model that simultaneously captures long memory and nonlinearities, in which level and persistence shift through a Markov switching dynamics. We consider an efficient Markov chain Monte Carlo (MCMC) algorithm to estimate parameters, the latent process and predictive densities. The in-sample results show that both long memory and nonlinearities are significant and improve the description of the data. The out-of-sample results at several forecast horizons show that introducing these nonlinearities produces superior forecasts over those obtained from nested models.
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:bol:bodewp:694&r=ecm
  7. By: Yves Dominicy; David Veredas
    Abstract: We introduce an inference method based on quantile matching, which is useful in situations where the density function does not have a closed form (but is simple to simulate from) and/or moments do not exist. Functions of theoretical quantiles, which depend on the parameters of the assumed probability law, are matched with sample quantiles, which depend on observations. Since the theoretical quantiles may not be available analytically, the optimization is based on simulations. We illustrate the method with the estimation of alpha-stable distributions. A thorough Monte Carlo study and an application to 22 financial indexes show the usefulness of the method. (A simplified quantile-matching sketch follows this entry.)
    Keywords: Quantiles, simulated methods, alpha-stable distribution, fat tails.
    JEL: C32 G14 E44
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2010_008&r=ecm
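    The logic of item 7 in miniature: fix a vector of probability levels, compute the corresponding sample quantiles, and choose the parameters so that quantiles of data simulated from the candidate model (with fixed random draws, so the objective is deterministic in the parameters) match them. The sketch below does this for a Cauchy location-scale family, the symmetric stable case with alpha = 1; the probability levels, distance and optimizer are assumptions of the sketch, and the paper's general alpha-stable application is not reproduced.

```python
# Illustrative sketch: estimation by matching sample quantiles with quantiles of
# simulated data ("method of simulated quantiles"). The model here is a Cauchy
# location-scale family (no finite moments, so moment matching is unavailable);
# the levels, distance and optimizer are assumptions of the sketch.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = 1.5 + 2.0 * rng.standard_cauchy(5_000)     # true loc = 1.5, scale = 2.0

probs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
q_sample = np.quantile(data, probs)

# Common random numbers: fix the simulation draws once so the objective is
# deterministic in (loc, scale).
base_draws = rng.standard_cauchy(50_000)

def distance(theta):
    loc, log_scale = theta
    sim = loc + np.exp(log_scale) * base_draws
    return np.sum((q_sample - np.quantile(sim, probs)) ** 2)

# Start from median and half the interquartile range (equals the Cauchy scale)
iqr = np.subtract(*np.quantile(data, [0.75, 0.25]))
theta0 = np.array([np.median(data), np.log(iqr / 2)])
res = minimize(distance, theta0, method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(f"loc_hat={loc_hat:.3f} (true 1.5), scale_hat={scale_hat:.3f} (true 2.0)")
```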
  8. By: BOUEZMARNI, Taoufik; ROMBOUTS, Jeroen (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); TAAMOUTI, Abderrahim
    Keywords: nonparametric tests, conditional independence, Granger non-causality, Bernstein density copula, bootstrap, finance, volatility asymmetry, leverage effect, volatility feedback effect, macroeconomics
    JEL: C12 C14 C15 C19 G1 G12 E3 E4 E52
    Date: 2009–06–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2009041&r=ecm
  9. By: HEINEN, Andréas (Departamento de Estadistica, Universidad Carlos III de Madrid, Spain); VALDESOGO, Alfonso (CREA, University of Luxembourg, Luxembourg)
    Abstract: We propose a new dynamic model for volatility and dependence in high dimensions, that allows for departures from the normal distribution, both in the marginals and in the dependence. The dependence is modeled with a dynamic canonical vine copula, which can be decomposed into a cascade of bivariate conditional copulas. Due to this decomposition, the model does not suffer from the curse of dimensionality. The canonical vine autoregressive (CAVA) captures asymmetries in the dependence structure. The model is applied to 95 S&P500 stocks. For the marginal distributions, we use non-Gaussian GARCH models, that are designed to capture skewness and kurtosis. By conditioning on the market index and on sector indexes, the dependence structure is much simplified and the model can be considered as a non-linear version of the CAPM or of a market model with sector effects. The model is shown to deliver good forecasts of Value-at-Risk.
    Keywords: asymmetric dependence, high dimension, multivariate copula, multivariate GARCH, Value-at-Risk
    JEL: C32 C53 G10
    Date: 2009–11–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2009069&r=ecm
  10. By: ROMBOUTS, Jeroen V.K.; STENTOFT, Lars
    Keywords: Bayesian inference, option pricing, finite mixture models, out-of-sample prediction, GARCH models
    JEL: C11 C15 C22 G13
    Date: 2009–03–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2009013&r=ecm
  11. By: Wang, Yafeng; Graham, Brett
    Abstract: We propose a data-constrained generalized maximum entropy (GME) estimator for discrete sequential move games of perfect information which can be easily implemented on optimization software with high-level interfaces such as GAMS. Unlike most other work on the estimation of complete information games, the method we propose is data constrained and does not require simulation or normally distributed random preference shocks. We formulate the GME estimation as a (convex) mixed-integer nonlinear optimization problem (MINLP), a class of problems for which solution methods have developed considerably over the last few years. The model is identified with only weak scale and location normalizations, and Monte Carlo evidence demonstrates that the estimator performs well in moderately sized samples. As an application, we study Social Security acceptance decisions in dual career households.
    Keywords: Game-Theoretic Econometric Models; Sequential-Move Game; Generalized Maximum Entropy; Mixed-Integer Nonlinear Programming
    JEL: C13 C01
    Date: 2009–12–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:21331&r=ecm
  12. By: Adam Clements (QUT); Annastiina Silvennoinen (QUT)
    Abstract: Volatility forecasts are important inputs into financial decisions such as portfolio allocation. While the forecasts are often used in such economic applications, the parameters of these models are traditionally estimated within a statistical framework. This leads to an inconsistency between the loss function under which the model is estimated and the one under which it is applied. This paper examines the impact of the choice of loss function on model performance in a portfolio allocation setting. It is found that employing a utility-based estimation criterion is preferred to likelihood estimation, although a simple mean squared error criterion performs in a similar manner. These findings have obvious implications for the manner in which volatility models are estimated when one wishes to inform the portfolio allocation decision. (A stylized sketch of the two estimation criteria follows this entry.)
    Keywords: Volatility, utility, portfolio allocation, realized volatility, MIDAS
    JEL: C22 G11
    Date: 2010–03–10
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2010_01&r=ecm
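    Item 12's point, that the estimation criterion can be statistical or economic, can be illustrated with a deliberately simple volatility model: the sketch below tunes an EWMA smoothing parameter either by minimizing the MSE of the variance forecast against squared returns or by maximizing the realized mean-variance utility of a volatility-timing allocation. The EWMA model, the weight rule, the assumed expected return and the risk aversion are all assumptions of the sketch, not the paper's specification.

```python
# Illustrative sketch: the same volatility model tuned under a statistical loss
# (MSE of the variance forecast against squared returns) versus the economic loss
# it will be used for (realized mean-variance utility of a volatility-timing
# allocation). EWMA model, weight rule and risk aversion are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

# Simulate returns with slowly moving volatility (assumed data-generating process).
T = 3000
log_vol = np.zeros(T)
for t in range(1, T):
    log_vol[t] = 0.98 * log_vol[t - 1] + 0.10 * rng.standard_normal()
sigma = 0.01 * np.exp(log_vol)
r = 0.0003 + sigma * rng.standard_normal(T)

def ewma_var(lam):
    h = np.empty(T)
    h[0] = r.var()
    for t in range(1, T):
        h[t] = lam * h[t - 1] + (1 - lam) * r[t - 1] ** 2
    return h

def stat_loss(lam):                       # statistical criterion: forecast MSE
    return np.mean((r**2 - ewma_var(lam)) ** 2)

def neg_utility(lam, gamma=5.0):          # economic criterion: mean-variance utility
    h = ewma_var(lam)
    w = np.clip(0.0003 / (gamma * h), 0, 2)   # volatility-timing weight on the risky asset
    rp = w * r
    return -(rp.mean() - 0.5 * gamma * rp.var())

lam_stat = minimize_scalar(stat_loss, bounds=(0.80, 0.999), method="bounded").x
lam_util = minimize_scalar(neg_utility, bounds=(0.80, 0.999), method="bounded").x
print(f"lambda under MSE loss    : {lam_stat:.3f}")
print(f"lambda under utility loss: {lam_util:.3f}")
```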
  13. By: Ivan Faiella (Bank of Italy)
    Abstract: While there is wide consensus on using survey weights when estimating population parameters, it is not clear what to do when using survey data for analytic purposes (i.e. with the objective of making inference about model parameters). In the model-based framework (MB), under the hypothesis that the underlying model is correctly specified, using survey weights in regression analysis potentially involves a loss of efficiency. In a design-based perspective (DB), weighted estimates are design consistent and can provide robustness to model mis-specification. In this paper, I suggest that the decision to use survey weights can be treated as a regression diagnostic exercise: the survey data analyst should check whether the design information included in the survey weights has explanatory power for the model outcome. To accomplish this task, a set of econometric tests is suggested, which could be supplemented by the analysis of model features under the two strategies. (A sketch of one such diagnostic follows this entry.)
    Keywords: survey methods, model evaluation and testing
    JEL: C42 C52
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_739_10&r=ecm
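    One common diagnostic in the spirit of item 13, going back to DuMouchel and Duncan (1983), asks whether the design information in the weights adds explanatory power: fit the unweighted regression, add interactions of the regressors with (a function of) the weights, and F-test the added terms. The sketch below implements that check on simulated data; the data-generating process and the particular weight terms are assumptions, and this is not necessarily the exact test battery proposed in the paper.

```python
# Illustrative sketch: a DuMouchel-Duncan style check of whether survey weights
# carry information for the regression. Fit y ~ X (unweighted), then y ~ X plus
# weight terms and their interactions with X, and F-test the added terms; a
# significant statistic suggests weighted (or respecified) estimation. Data are
# simulated; this is one diagnostic in the paper's spirit, not its exact tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2000
x = rng.normal(size=n)
w = np.exp(rng.normal(size=n))                                   # survey weights (assumed)
y = 1.0 + 0.5 * x + 0.3 * x * np.log(w) + rng.normal(size=n)     # weights matter here

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid, X.shape[1]

X_r = np.column_stack([np.ones(n), x])                               # restricted model
X_f = np.column_stack([np.ones(n), x, np.log(w), x * np.log(w)])     # + weight terms

rss_r, k_r = rss(X_r, y)
rss_f, k_f = rss(X_f, y)
F = ((rss_r - rss_f) / (k_f - k_r)) / (rss_f / (n - k_f))
p = stats.f.sf(F, k_f - k_r, n - k_f)
print(f"F = {F:.2f}, p-value = {p:.4f}  (small p: weights appear informative)")
```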
  14. By: Timotheos Angelidis; Alexandros Benos; Stavros Degiannakis
    Abstract: We evaluate the performance of an extensive family of ARCH models in modelling daily Value-at-Risk (VaR) of perfectly diversified portfolios in five stock indices, using a number of distributional assumptions and sample sizes. We find, first, that leptokurtic distributions are able to produce better one-step-ahead VaR forecasts; second, the choice of sample size is important for the accuracy of the forecast, whereas the specification of the conditional mean makes little difference. Finally, the ARCH structure producing the most accurate forecasts is different for every portfolio and specific to each equity index. (A bare-bones GARCH VaR sketch follows this entry.)
    Keywords: Value at Risk, GARCH estimation, Backtesting, Volatility forecasting, Quantile Loss Function.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:uop:wpaper:0048&r=ecm
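    A bare-bones version of the exercise in item 14: fit a GARCH(1,1) with normal innovations by quasi-maximum likelihood and convert the one-step-ahead variance forecast into a 99% VaR. The simulated returns, the normal innovations, the zero conditional mean and the optimizer settings are assumptions of the sketch; the paper compares a much larger family of ARCH specifications, error distributions and sample sizes and backtests the resulting forecasts.

```python
# Illustrative sketch: GARCH(1,1) quasi-ML estimation and a one-step-ahead 99% VaR.
# Simulated data, normal innovations and a zero conditional mean are assumptions;
# the paper evaluates many ARCH specifications and error distributions.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate a GARCH(1,1) path (omega, alpha, beta assumed for the illustration)
T, omega, alpha, beta = 3000, 1e-6, 0.08, 0.90
r = np.empty(T)
h = omega / (1 - alpha - beta)
for t in range(T):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h

def garch_filter(params, r):
    omega, alpha, beta = params
    h = np.empty(len(r))
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

def neg_loglik(params, r):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                      # crude positivity/stationarity penalty
    h = garch_filter(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r**2 / h)

res = minimize(neg_loglik, x0=[1e-5, 0.05, 0.90], args=(r,), method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = res.x

# One-step-ahead variance forecast and 99% VaR for a long position (zero mean)
h_T = garch_filter(res.x, r)[-1]
h_next = omega_hat + alpha_hat * r[-1] ** 2 + beta_hat * h_T
var_99 = -stats.norm.ppf(0.01) * np.sqrt(h_next)
print(f"omega={omega_hat:.2e}, alpha={alpha_hat:.3f}, beta={beta_hat:.3f}, 99% VaR={var_99:.4%}")
```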
  15. By: Antonio Merlo (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: Stochastic sequential bargaining games (Merlo and Wilson (1995, 1998)) have found wide applications in various fields including political economy and macroeconomics due to their flexibility in explaining delays in reaching an agreement. In this paper, we present new results in nonparametric identification of such models under different scenarios of data availability. First, we give conditions for an observed distribution of players’ decisions and agreed allocations of the surplus, or the "cake", to be rationalized by a sequential bargaining model. We show the common discount rate is identified, provided the surplus is monotonic in unobservable states (USV) given observed ones (OSV). Then the mapping from states to surplus, or the "cake function", is also recovered under appropriate normalizations. Second, when the cake is only observed under agreements, the discount rate and the impact of observable states on the cake can be identified, if the distribution of USV satisfies some exclusion restrictions and the cake is additively separable in OSV and USV. Third, if data only report when an agreement is reached but never report the size of the cake, we propose a simple algorithm that exploits shape restrictions on the cake function and the independence of USV to recover all rationalizable probabilities for agreements under counterfactual state transitions. Numerical examples show the set of rationalizable counterfactual outcomes so recovered can be informative.
    Keywords: Nonparametric identification, non-cooperative bargaining, stochastic sequential bargaining, rationalizable counterfactual outcomes
    JEL: C14 C35 C73 C78
    Date: 2009–10–15
    URL: http://d.repec.org/n?u=RePEc:pen:papers:10-008&r=ecm
  16. By: Jaromir Benes; Marianne Johnson; Kevin Clinton; Troy Matheson; Douglas Laxton
    Abstract: This paper outlines a simple approach for incorporating extraneous predictions into structural models. The method allows the forecaster to combine predictions derived from any source in a way that is consistent with the underlying structure of the model. The method is flexible enough that predictions can be up-weighted or down-weighted on a case-by-case basis. We illustrate the approach using a small quarterly structural model and real-time data for the United States.
    Keywords: Economic forecasting, Economic indicators, Economic models, Monetary policy
    Date: 2010–03–09
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:10/56&r=ecm
  17. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Justin Leroux (Institute for Applied Economics - HEC MONTRÉAL)
    Abstract: We propose a novel methodology for forecasting chaotic systems which uses information on local Lyapunov exponents (LLEs) to improve upon existing predictors by correcting for their inevitable bias. Using simulations of the Rössler, Lorenz and Chua attractors, we find that accuracy gains can be substantial. Also, we show that the candidate selection problem identified in Guégan and Leroux (2009a,b) can be solved irrespective of the value of the LLEs. An important corollary follows: the focal value of zero, which traditionally distinguishes order from chaos, plays no role whatsoever when forecasting deterministic systems. (A toy LLE computation follows this entry.)
    Keywords: Chaos theory, forecasting, Lyapunov exponent, Lorenz attractor, Rössler attractor, Chua attractor, Monte Carlo simulations.
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00462454_v1&r=ecm
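    For readers unfamiliar with the quantity item 17 conditions on, the sketch below computes finite-horizon local Lyapunov exponents of the logistic map from the derivative along the orbit; positive values signal local divergence of nearby trajectories. This only illustrates what an LLE is, with the map, parameter and horizon chosen as assumptions; the paper's bias-correction forecasting scheme is not reproduced.

```python
# Illustrative sketch: finite-horizon local Lyapunov exponents (LLEs) of the
# logistic map x_{t+1} = a*x_t*(1 - x_t), computed as the average of log|f'(x)|
# over short windows along the orbit. Positive values indicate local sensitivity
# to initial conditions. Map, parameter and horizon are assumptions; the paper's
# forecasting correction based on LLEs is not reproduced here.
import numpy as np

a, n, horizon = 4.0, 10_000, 20
x = np.empty(n)
x[0] = 0.123
for t in range(n - 1):
    x[t + 1] = a * x[t] * (1.0 - x[t])

log_deriv = np.log(np.abs(a * (1.0 - 2.0 * x)))   # log|f'(x_t)| along the orbit

# Local Lyapunov exponents over non-overlapping windows of length `horizon`
lle = log_deriv[: (n // horizon) * horizon].reshape(-1, horizon).mean(axis=1)

print(f"global Lyapunov exponent ~ {log_deriv.mean():.3f} (log 2 = {np.log(2):.3f} for a=4)")
print(f"share of windows with positive LLE: {(lle > 0).mean():.2%}")
print(f"LLE range: [{lle.min():.3f}, {lle.max():.3f}]")
```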
  18. By: Christian Belzil (Department of Economics, Ecole Polytechnique - CNRS : UMR7176 - Polytechnique - X, ENSAE - École Nationale de la Statistique et de l'Administration Économique - ENSAE, IZA - Institute for the Study of Labor); J. Hansen (IZA - Institute for the Study of Labor, CIREQ - Centre Interuniversitaire de Recherche en Economie Quantitative, CIRANO - Montréal, Department of Economics, Concordia University - Concordia University)
    Abstract: We investigate if, and under which conditions, the distinction between dictatorial and incentive-based policy interventions affects the capacity of Instrumental Variable (IV) methods to estimate the relevant treatment effect parameter of an outcome equation. The analysis is set in a non-trivial framework, in which the right-hand-side variable of interest is affected by selectivity, and the error term is driven by a sequence of unobserved life-cycle endogenous choices. We show that, for a wide class of outcome equations, incentive-based policies may be designed so as to generate a sufficient degree of post-intervention randomization (a lesser degree of selection on individual endowments among the sub-population affected). This helps the instrument fulfill the orthogonality condition. However, for the same class of outcome equations, dictatorial policies that enforce minimum consumption cannot meet this condition. We illustrate these concepts within a calibrated dynamic life-cycle model of human capital accumulation, and focus on the estimation of the returns to schooling using instruments generated from mandatory schooling reforms and education subsidies. We show how the nature of the skill accumulation process (substitutability vs. complementarity) may play a fundamental role in interpreting IV estimates of the returns to schooling.
    Keywords: Returns to schooling, Instrumental Variable methods, Dynamic Discrete Choice, Dynamic Programming, Local Average Treatment Effects.
    Date: 2010–03–15
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00463877_v1&r=ecm
  19. By: Dong Heon Kim (Department of Economics, Korea University)
    Abstract: This paper characterizes the nonlinear relation between oil price change and GDP growth, focusing on panel data for various industrialized countries. Toward this end, the paper extends a flexible nonlinear inference approach to panel data analysis, in which the random error components are incorporated into the flexible approach. The paper reports clear evidence of nonlinearity in the panel and confirms earlier claims in the literature: oil price increases are much more important than decreases, and previous upheaval in oil prices reduces the marginal effect of any given oil price change. Our result suggests that the nonlinear oil-macroeconomy relation is generally observable across industrialized countries and that it is desirable to use the nonlinear function of oil price change for GDP forecasts.
    Keywords: Oil shock; Nonlinear flexible inference; Panel data; Error components model; Economic fluctuation
    JEL: E32 C33
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:iek:wpaper:1007&r=ecm
  20. By: SBRANA, Giacomo; SILVESTRINI, Andrea
    Keywords: contemporaneous aggregation, forecasting
    JEL: C10 C32 C43 C52
    Date: 2009–03–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2009020&r=ecm
  21. By: Angrist, Joshua (MIT); Pischke, Jörn-Steffen (London School of Economics)
    Abstract: This essay reviews progress in empirical economics since Leamer's (1983) critique. Leamer highlighted the benefits of sensitivity analysis, a procedure in which researchers show how their results change with changes in specification or functional form. Sensitivity analysis has had a salutary but not a revolutionary effect on econometric practice. As we see it, the credibility revolution in empirical work can be traced to the rise of a design-based approach that emphasizes the identification of causal effects. Design-based studies typically feature either real or natural experiments and are distinguished by their prima facie credibility and by the attention investigators devote to making the case for a causal interpretation of the findings their designs generate. Design-based studies are most often found in the microeconomic fields of Development, Education, Environment, Labor, Health, and Public Finance, but are still rare in Industrial Organization and Macroeconomics. We explain why IO and Macro would do well to embrace a design-based approach. Finally, we respond to the charge that the design-based revolution has overreached.
    Keywords: structural models, research design, natural experiments, quasi-experiments
    JEL: C01
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4800&r=ecm

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.