nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒03‒24
thirteen papers chosen by
Sune Karlsson
Orebro University

  1. Forecasts of US Short-term Interest Rates: A Flexible Forecast Combination Approach By Guidolin, Massimo; Timmermann, Allan G
  2. No-Arbitrage Semi-Martingale Restrictions for Continuous-Time Volatility Models subject to Leverage Effects, Jumps and i.i.d. Noise: Theory and Testable Distributional Implications By Torben G. Andersen; Tim Bollerslev; Dobrislav Dobrev
  3. Heavy tails and electricity prices: Do time series models with non-Gaussian noise forecast better? By Weron, Rafal; Misiorek, Adam
  4. Quantile-Based Nonparametric Inference for First-Price Auctions By Marmer, Vadim; Shneyerov, Artyom
  5. On Econometric Analysis of Structural Systems with Permanent and Transitory Shocks and Exogenous Variables By Adrian Pagan; M. Hashem Pesaran
  7. Modeling Long Memory and Structural Breaks in Conditional Variances: An Adaptive FIGARCH Approach By Richard T. Baillie; Claudio Morana
  8. Are Correlations Constant Over Time? Application of the CC-TRIGt-test to Return Series from Different Asset Classes. By Matthias Fischer
  10. Estimation Risk Effects on Backtesting For Parametric Value-at-Risk Models By Juan Carlos Escanciano; Jose Olmo
  11. Modeling Intersection Driving Behaviors: A Hidden Markov Model Approach By Xi Zou; David Levinson
  12. A measure of association (correlation) in nominal data (contingency tables), using determinants By Colignatus, Thomas
  13. A Proposal to Distinguish State Dependence and Unobserved Heterogeneity in Binary Brand Choice Models By José M. Labeaga; Mercedes Martos-Partal

  1. By: Guidolin, Massimo; Timmermann, Allan G
    Abstract: This paper develops a flexible approach to combine forecasts of future spot rates with forecasts from time-series models or macroeconomic variables. We find empirical evidence that accounting for both regimes in interest rate dynamics and combining forecasts from different models helps improve the out-of-sample forecasting performance for US short-term rates. Imposing restrictions from the expectations hypothesis on the forecasting model is found to help at long forecasting horizons.
    Keywords: forecast combinations; term structure of interest rates
    JEL: C53 G12
    Date: 2007–03
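    The combination idea itself is simple to sketch. The snippet below is a minimal illustration only, not the paper's regime-switching scheme: it estimates combination weights by the classic Granger-Ramanathan regression of realized outcomes on the individual forecasts (the toy data and all names are mine).

    ```python
    import numpy as np

    def combination_weights(y, forecasts):
        """Granger-Ramanathan combination: regress realized outcomes on
        the individual forecasts (plus an intercept) over a training
        window; the fitted coefficients serve as combination weights."""
        X = np.column_stack([np.ones(len(y)), forecasts])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def combine(coef, forecasts):
        """Combined forecast: intercept plus weighted individual forecasts."""
        return coef[0] + np.asarray(forecasts) @ coef[1:]

    # Toy data: two biased, noisy forecasters of a common target.
    rng = np.random.default_rng(0)
    target = rng.normal(5.0, 1.0, size=200)
    f1 = target + 0.5 + rng.normal(0.0, 0.3, size=200)  # biased upward
    f2 = target - 0.8 + rng.normal(0.0, 0.3, size=200)  # biased downward
    coef = combination_weights(target, np.column_stack([f1, f2]))
    combined = combine(coef, np.column_stack([f1, f2]))
    ```

    In-sample, the regression-based combination can never do worse than either forecaster used raw, since each raw forecast is one point in the space the regression searches over.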
  2. By: Torben G. Andersen; Tim Bollerslev; Dobrislav Dobrev
    Abstract: We develop a sequential procedure to test the adequacy of jump-diffusion models for return distributions. We rely on intraday data and nonparametric volatility measures, along with a new jump detection technique and appropriate conditional moment tests, for assessing the import of jumps and leverage effects. A novel robust-to-jumps approach is utilized to alleviate microstructure frictions for realized volatility estimation. Size and power of the procedure are explored through Monte Carlo methods. Our empirical findings support the jump-diffusive representation for S&P500 futures returns but reveal it is critical to account for leverage effects and jumps to maintain the underlying semi-martingale assumption.
    JEL: C15 C22 C52 C80 G10
    Date: 2007–03
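    One ingredient the authors build on can be sketched directly: comparing realized variance with jump-robust bipower variation, in the spirit of the Barndorff-Nielsen and Shephard relative jump measure. This is a simplified textbook version, not the paper's sequential procedure, and the simulated numbers are purely illustrative.

    ```python
    import numpy as np

    def realized_var(r):
        """Realized variance: sum of squared intraday returns."""
        return float(np.sum(r**2))

    def bipower_var(r):
        """Bipower variation, robust to jumps: scaled sum of products of
        adjacent absolute returns, with mu1 = E|Z| = sqrt(2/pi)."""
        mu1 = np.sqrt(2.0 / np.pi)
        return float(np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1**2)

    def relative_jump(r):
        """Relative jump measure (RV - BV) / RV: large when jumps are present."""
        rv = realized_var(r)
        return (rv - bipower_var(r)) / rv

    # 78 five-minute returns for one trading day, then the same day
    # with a single large jump added to one interval.
    rng = np.random.default_rng(42)
    smooth = rng.normal(0.0, 0.001, size=78)
    jumpy = smooth.copy()
    jumpy[40] += 0.02  # a 20-sigma jump
    ```

    The jump inflates realized variance far more than bipower variation, which is exactly what makes the gap between the two usable as a jump statistic.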
  3. By: Weron, Rafal; Misiorek, Adam
    Abstract: This paper is a continuation of our earlier studies on short-term price forecasting of California electricity prices with time series models. Here we focus on whether models with heavy-tailed innovations perform better in terms of forecasting accuracy than their Gaussian counterparts. Consequently, we limit the range of analyzed models to autoregressive time series approaches that have been found to perform well for pre-crash California power market data. We expand them by allowing for heavy-tailed innovations in the form of α-stable or generalized hyperbolic noise.
    Keywords: Electricity; price forecasting; heavy tails; time series; α-stable distribution; generalized hyperbolic distribution
    JEL: C53 C46 C22 Q40
    Date: 2007–03
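    The effect the authors exploit is easy to demonstrate: an autoregression driven by Student-t innovations produces far more extreme spikes than its Gaussian twin. A minimal sketch — the model and numbers are illustrative, not the paper's calibrated specification:

    ```python
    import numpy as np

    def simulate_ar1(phi, innovations):
        """AR(1) path x_t = phi * x_{t-1} + eps_t, started at zero."""
        x = np.zeros(len(innovations))
        for t in range(1, len(x)):
            x[t] = phi * x[t - 1] + innovations[t]
        return x

    rng = np.random.default_rng(7)
    n = 20000
    gauss = rng.standard_normal(n)
    # Student-t with 3 degrees of freedom, rescaled to unit variance
    # (Var of t_nu is nu / (nu - 2) for nu > 2).
    heavy = rng.standard_t(df=3, size=n) / np.sqrt(3.0)

    # Count "spikes": innovations beyond 4 standard deviations.
    spikes_gauss = int(np.sum(np.abs(gauss) > 4.0))
    spikes_heavy = int(np.sum(np.abs(heavy) > 4.0))
    ```

    With 20,000 draws the Gaussian series produces a handful of 4-sigma events at most, while the t(3) series produces on the order of a hundred — the kind of spikes electricity prices routinely show.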
  4. By: Marmer, Vadim; Shneyerov, Artyom
    Abstract: We propose a quantile-based nonparametric approach to inference on the probability density function (PDF) of the private values in first-price sealed-bid auctions with independent private values. Our method of inference is based on a fully nonparametric kernel-based estimator of the quantiles and PDF of observable bids. Our estimator attains the optimal rate of Guerre, Perrigne, and Vuong (2000), and is also asymptotically normal with the appropriate choice of the bandwidth. As an application, we consider the problem of inference on the optimal reserve price.
    Keywords: First-price auctions; independent private values; nonparametric estimation; kernel estimation; quantiles; optimal reserve price
    JEL: C14 D44
    Date: 2006–10
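    The bid-side ingredients of such an estimator are straightforward to sketch: a Gaussian-kernel estimate of the bid density and the empirical bid quantiles. This is a generic kernel sketch under assumptions of mine (uniform toy bids, no boundary correction), not the authors' construction.

    ```python
    import numpy as np

    def kernel_density(x, bids, h):
        """Gaussian-kernel estimate of the bid density g at points x,
        with bandwidth h."""
        x = np.atleast_1d(np.asarray(x, dtype=float))
        u = (x[None, :] - bids[:, None]) / h
        return np.exp(-0.5 * u**2).mean(axis=0) / (h * np.sqrt(2.0 * np.pi))

    def bid_quantile(bids, tau):
        """Empirical tau-quantile of the observed bid distribution."""
        return float(np.quantile(bids, tau))

    # Illustrative bids, uniform on [0, 1].
    rng = np.random.default_rng(1)
    bids = rng.uniform(0.0, 1.0, size=2000)
    grid = np.linspace(-0.5, 1.5, 801)
    g_hat = kernel_density(grid, bids, h=0.05)
    ```

    The estimated density is nonnegative and integrates to approximately one over a grid that covers the support, which is the basic sanity check for any kernel PDF estimate.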
  5. By: Adrian Pagan (Queensland University of Technology); M. Hashem Pesaran (CIMF, Cambridge University and IZA)
    Abstract: This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah (1989), and shows that structural equations for which there are known permanent shocks must have no error correction terms present in them, thereby freeing up the latter to be used as instruments in estimating their parameters. The proposed approach is illustrated by a re-examination of the identification scheme used in a monetary model by Wickens and Motta (2001), and in a well-known paper by Gali (1992), which deals with the construction of an IS-LM model with supply-side effects. We show that the latter imposes more short-run restrictions than are needed because of a failure to fully utilize the cointegration information.
    Keywords: permanent shocks, structural identification, error correction models, IS-LM models
    JEL: C30 C32 E10
    Date: 2007–02
  6. By: M Ballin; M Scanu; Paola Vicard
    Abstract: A class of estimators based on the dependency structure of a multivariate variable of interest and the survey design is defined. The dependency structure is the one described by a Bayesian network. This class admits ratio-type estimators as a subclass identified by a particular dependency structure. A Monte Carlo simulation shows that the estimator corresponding to the population dependency structure is more efficient than the alternatives. We also show how this class adapts to the problem of integrating information from two surveys through the probability-updating machinery of Bayesian networks.
    Keywords: Graphical models, probability update, survey design
  7. By: Richard T. Baillie (Michigan State University and Queen Mary, University of London); Claudio Morana (Michigan State University, Università del Piemonte Orientale and ICER)
    Abstract: This paper introduces a new long memory volatility process, denoted by Adaptive FIGARCH, or A-FIGARCH, which is designed to account for both long memory and structural change in the conditional variance process. Structural change is modeled by allowing the intercept to follow a slowly varying function, specified by Gallant's (1984) flexible functional form. A Monte Carlo study finds that the A-FIGARCH model outperforms the standard FIGARCH model when structural change is present, and performs at least as well in the absence of structural instability. An empirical application to stock market volatility is also included to illustrate the usefulness of the technique.
    Keywords: FIGARCH, Long memory, Structural change, Stock market volatility
    JEL: C15 C22 F31
    Date: 2007–03
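    The structural-change ingredient is concrete enough to sketch: the variance intercept follows a flexible Fourier form in the spirit of Gallant. Below, a simplified GARCH(1,1) recursion with such an intercept — the actual A-FIGARCH adds fractional differencing, which I omit, and all parameter values are illustrative.

    ```python
    import numpy as np

    def fourier_intercept(T, omega0, coeffs):
        """Flexible Fourier form for the variance intercept:
        omega_t = omega0 + sum_j (g_j*sin(2*pi*j*t/T) + d_j*cos(2*pi*j*t/T))."""
        t = np.arange(T)
        omega = np.full(T, omega0, dtype=float)
        for j, (g, d) in enumerate(coeffs, start=1):
            omega += g * np.sin(2 * np.pi * j * t / T) + d * np.cos(2 * np.pi * j * t / T)
        return omega

    def garch_variance(eps, omega, alpha, beta):
        """GARCH(1,1) recursion with a time-varying intercept:
        sigma2_t = omega_t + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}."""
        sigma2 = np.empty(len(eps))
        sigma2[0] = omega[0] / (1.0 - alpha - beta)  # rough unconditional start
        for t in range(1, len(eps)):
            sigma2[t] = omega[t] + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2

    rng = np.random.default_rng(3)
    T = 1000
    omega = fourier_intercept(T, omega0=0.1, coeffs=[(0.02, 0.02), (0.01, -0.01)])
    eps = rng.standard_normal(T) * 0.5
    sigma2 = garch_variance(eps, omega, alpha=0.05, beta=0.90)
    ```

    Keeping the Fourier coefficients small relative to omega0 guarantees a positive intercept path, and hence a positive conditional variance throughout the recursion.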
  8. By: Matthias Fischer
    Abstract: A new test for constant correlation is proposed. Based on the bivariate Student-t distribution, it is derived as a Lagrange multiplier (LM) test. Whereas most traditional tests (e.g. Jennrich, 1970; Tang, 1995; Goetzmann, Li & Rouwenhorst, 2005) specify the unknown correlations as piecewise constant, our model setup for the correlation coefficient is based on trigonometric functions. Applying this test to assets from different financial markets (stocks, exchange rates, metals), we find empirical evidence that many of the correlations vary over time.
    Keywords: Lagrange multiplier test, constant correlation, trigonometric functions.
    JEL: C22 C32 G12
    Date: 2007–03
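    The trigonometric alternative is easy to visualize in simulation: let the correlation follow rho_t = rho0 + a*sin(2*pi*t/T), which nests the constant-correlation null at a = 0. A toy sketch — the LM statistic itself is not derived here, and all numbers are illustrative:

    ```python
    import numpy as np

    def trig_rho(T, rho0, a):
        """Correlation path rho_t = rho0 + a*sin(2*pi*t/T);
        a = 0 recovers the constant-correlation null."""
        t = np.arange(T)
        return rho0 + a * np.sin(2 * np.pi * t / T)

    def simulate_pair(rho, rng):
        """Bivariate standard normals with per-period correlation rho_t."""
        z1 = rng.standard_normal(len(rho))
        z2 = rng.standard_normal(len(rho))
        return z1, rho * z1 + np.sqrt(1.0 - rho**2) * z2

    rng = np.random.default_rng(5)
    T = 4000
    rho = trig_rho(T, rho0=0.0, a=0.6)
    x, y = simulate_pair(rho, rng)

    # The sample correlation flips sign between the two halves of the
    # sample, which no constant-correlation model can reproduce.
    first = float(np.corrcoef(x[: T // 2], y[: T // 2])[0, 1])
    second = float(np.corrcoef(x[T // 2 :], y[T // 2 :])[0, 1])
    ```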
  9. By: Caterina Conigliani; A Tancredi
    Abstract: We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution, and in particular to model accurately the tail of the distribution, which is highly influential in estimating the population mean. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging: instead of choosing a single parametric model, we specify a set of plausible models for costs and estimate the mean cost with its posterior expectation, which can be obtained as a weighted mean of the posterior expectations under each model, with weights given by the posterior model probabilities. The results are compared with those obtained with a semi-parametric approach that does not require any assumption about the distribution of costs.
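    The model-averaged mean cost described above is a one-liner once per-model posterior summaries are available: weight each model's posterior mean by its posterior model probability. A minimal sketch, where the inputs (log marginal likelihoods and per-model means) are placeholders of mine, not the paper's clinical data:

    ```python
    import numpy as np

    def bma_posterior_mean(log_marglik, prior_probs, model_means):
        """Bayesian model averaging: posterior model probabilities
        w_m proportional to p(y | M_m) * p(M_m); the averaged mean is
        sum_m w_m * E[cost | y, M_m]. Uses log-sum-exp for stability."""
        a = np.asarray(log_marglik, dtype=float) + np.log(prior_probs)
        a -= a.max()
        w = np.exp(a) / np.exp(a).sum()
        return w, float(w @ np.asarray(model_means, dtype=float))

    # Three candidate cost models (e.g. log-normal, gamma, normal) with
    # placeholder evidence values; the first fits the data best.
    weights, avg_cost = bma_posterior_mean(
        log_marglik=[-100.0, -104.0, -130.0],
        prior_probs=[1 / 3, 1 / 3, 1 / 3],
        model_means=[1200.0, 1500.0, 900.0],
    )
    ```

    With equal priors the weights reduce to normalized marginal likelihoods, so the best-fitting model dominates the averaged mean while worse models contribute in proportion to their evidence.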
  10. By: Juan Carlos Escanciano; Jose Olmo (City University, London)
    Abstract: One of the implications of the creation of the Basel Committee on Banking Supervision was the implementation of Value-at-Risk (VaR) as the standard tool for measuring market risk. Thereby the correct specification of parametric VaR models became of crucial importance in order to provide accurate and reliable risk measures. If the underlying risk model is not correctly specified, VaR estimates understate or overstate risk exposure. This can have dramatic consequences for the stability and reputation of financial institutions or lead to sub-optimal capital allocation. We show that the use of the standard unconditional backtesting procedures to assess VaR models is completely misleading. These tests do not consider the impact of estimation risk and therefore use wrong critical values to assess market risk. The purpose of this paper is to quantify such estimation risk in a very general class of dynamic parametric VaR models and to correct standard backtesting procedures to provide valid inference in specification analyses. A Monte Carlo study illustrates our theoretical findings in finite samples. Finally, an application to the S&P500 Index shows the importance of this correction and its impact on capital requirements as imposed by the Basel Accord, and on the choice of dynamic parametric models for risk management.
    Keywords: Backtesting; Basel Accord; Model Risk; Risk Management; Value at Risk; Conditional Quantile
    JEL: C52 C22 G21 G32
    Date: 2007–03
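    The standard unconditional backtest the authors critique is Kupiec's proportion-of-failures LR test, which compares the observed violation rate with the nominal coverage p and ignores estimation risk entirely — exactly the omission the paper corrects. A sketch of the uncorrected statistic:

    ```python
    import numpy as np

    def kupiec_lr(violations, T, p):
        """Kupiec unconditional-coverage LR statistic for a VaR level p:
        LR = 2 * [logL(x/T) - logL(p)], asymptotically chi2(1) *if*
        estimation risk is ignored (the paper shows the resulting
        critical values are generally wrong)."""
        x = violations

        def loglik(q):
            out = 0.0
            if x < T:
                out += (T - x) * np.log(1.0 - q)
            if x > 0:
                out += x * np.log(q)
            return out

        # at the boundaries x = 0 or x = T the MLE log-likelihood is 0
        ll_hat = 0.0 if x in (0, T) else loglik(x / T)
        return 2.0 * (ll_hat - loglik(p))
    ```

    Comparing the statistic against the chi2(1) 5% critical value 3.84 reproduces the naive test: with T = 1000 days at p = 0.01, exactly 10 violations gives LR = 0, while 25 violations comfortably rejects.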
  11. By: Xi Zou; David Levinson (Nexus (Networks, Economics, and Urban Systems) Research Group, Department of Civil Engineering, University of Minnesota)
    Abstract: Driving behaviors at intersections are complex because drivers must perceive more traffic events than in normal road driving and are therefore exposed to more errors with safety consequences. Drivers make real-time responses in a stochastic manner. This paper presents our study using Hidden Markov Models (HMM) to model driving behaviors at intersections. Observed vehicle movement data are used to build the model. A single HMM is used to cluster the vehicle movements when they are close to an intersection. The re-estimated clustered HMMs provide better prediction of the vehicle movements than traditional car-following models. Only through (non-turning) vehicles on major roads are considered in this paper.
    JEL: R41
    Date: 2006
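    The workhorse computation behind such HMMs is the forward algorithm, which scores an observation sequence under given model parameters. A self-contained discrete-observation sketch — the two states and symbols below are hypothetical stand-ins for driving regimes and discretized speed readings, not the authors' specification:

    ```python
    import numpy as np

    def forward_loglik(pi0, A, B, obs):
        """Scaled forward algorithm: log-likelihood of a symbol sequence
        under an HMM with initial distribution pi0 (S,), transition
        matrix A (S, S), and emission matrix B (S, O)."""
        pi0, A, B = map(np.asarray, (pi0, A, B))
        alpha = pi0 * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha = alpha / s
        return float(loglik)

    # Hypothetical model: state 0 = "proceeding", state 1 = "yielding";
    # symbol 0 = a fast speed reading, symbol 1 = a slow one.
    pi0 = [1.0, 0.0]
    A = [[0.9, 0.1], [0.2, 0.8]]
    B = [[0.9, 0.1], [0.2, 0.8]]
    ```

    Clustering trajectories then amounts to scoring each observed movement sequence under candidate HMMs and assigning it to the model with the highest log-likelihood.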
  12. By: Colignatus, Thomas
    Abstract: Nominal data currently lack a correlation coefficient, such as has already been defined for real data. A measure is possible using the determinant, with the useful interpretation that the determinant gives the ratio between volumes. With M an n × m contingency table and n ≤ m, the suggested measure is r = Sqrt[det[A.A']] with A = Normalized[M]. With M an n1 × n2 × ... × nk contingency matrix, we can construct a matrix of pairwise correlations R so that the overall correlation is r = det[R]. For a 2 × 2 matrix the measure gives the normal correlation coefficient when the nominal categories are replaced with {1, -1}.
    Keywords: association; correlation; contingency table; volume ratio; determinant; nonparametric methods; nominal data; nominal scale; categorical data; Fisher’s exact test; odds ratio; tetrachoric correlation coefficient; phi; Cramer’s V; Pearson; contingency coefficient; uncertainty coefficient; Theil’s U; eta; meta-analysis; Simpson’s paradox; causality; statistical independence
    JEL: C10
    Date: 2007–03–20
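    The abstract's formula is concrete enough to implement. The sketch below assumes "Normalized[M]" means each row of M scaled to unit Euclidean length — that reading is a guess on my part, so treat this as one plausible interpretation rather than the paper's exact definition.

    ```python
    import numpy as np

    def nominal_correlation(M):
        """r = Sqrt[det[A.A']] with A = M row-normalized to unit
        Euclidean length (assumed reading of Normalized[M]); expects
        an n x m table with n <= m and no all-zero rows."""
        M = np.asarray(M, dtype=float)
        A = M / np.linalg.norm(M, axis=1, keepdims=True)
        det = max(np.linalg.det(A @ A.T), 0.0)  # clip tiny negative round-off
        return float(np.sqrt(det))
    ```

    Under this reading the measure behaves sensibly at the extremes: a diagonal table (perfect association) gives r = 1, and a table with proportional rows (statistical independence) gives r = 0, since proportional rows make the Gram determinant vanish.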
  13. By: José M. Labeaga; Mercedes Martos-Partal
    Abstract: This paper uses binary choice models that specify four possible sources of observed regularity in the consumer brand choice decision on each purchase occasion: namely, state dependence, observed heterogeneity, unobserved heterogeneity, and correlation effects. The objective is to distinguish correctly among the effects of these four variables. The estimation method proposed is an alternative to the most commonly used estimation methods in marketing choice models. We consider that the alternative method appropriately controls for observed heterogeneity and for unobserved heterogeneity correlated with the state dependence variable because of the way the state dependence variable is built. The model is used for the first time in marketing, following the methodology proposed by Chamberlain (1984). A relationship for unobserved heterogeneity is specified, taking into account the correlation among unobserved heterogeneity and other choice determinants. In this way, we split the influence of household state dependence and tastes on brand choice. The findings are very conclusive. We find that because the individual effects and the covariates are correlated, traditional estimation methods cannot be used to split state dependence and unobserved heterogeneity. The proposed model is found to yield better measures of predictive performance than the conventional model. The results are found to be robust across categories of laundry detergent and have significant implications for marketing policy.

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.