
New Economics Papers on Econometrics 
By:  Guidolin, Massimo; Timmermann, Allan G 
Abstract:  This paper develops a flexible approach to combine forecasts of future spot rates with forecasts from time-series models or macroeconomic variables. We find empirical evidence that accounting for both regimes in interest rate dynamics and combining forecasts from different models helps improve the out-of-sample forecasting performance for US short-term rates. Imposing restrictions from the expectations hypothesis on the forecasting model is found to help at long forecasting horizons. 
Keywords:  forecast combinations; term structure of interest rates 
JEL:  C53 G12 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:6188&r=ecm 
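The combination step itself is simple enough to sketch. The numbers below are invented for illustration, and inverse-MSE weighting is just one common simple scheme from the forecast-combination literature, not necessarily the one the paper adopts:

```python
# hypothetical one-step-ahead short-rate forecasts from three candidate models
forecasts = {"random walk": 5.10, "macro VAR": 4.90, "regime switching": 5.25}
# invented recent out-of-sample MSEs for each model
mse = {"random walk": 0.040, "macro VAR": 0.025, "regime switching": 0.030}

# combination weights proportional to inverse MSE
inv = {k: 1.0 / v for k, v in mse.items()}
total = sum(inv.values())
weights = {k: inv[k] / total for k in inv}

combined = sum(weights[k] * forecasts[k] for k in forecasts)
print(round(combined, 3))  # 5.069
```

The combined forecast necessarily lies between the most extreme individual forecasts, with the better-performing models pulled toward more heavily.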
By:  Torben G. Andersen; Tim Bollerslev; Dobrislav Dobrev 
Abstract:  We develop a sequential procedure to test the adequacy of jump-diffusion models for return distributions. We rely on intraday data and nonparametric volatility measures, along with a new jump detection technique and appropriate conditional moment tests, for assessing the import of jumps and leverage effects. A novel robust-to-jumps approach is utilized to alleviate microstructure frictions for realized volatility estimation. Size and power of the procedure are explored through Monte Carlo methods. Our empirical findings support the jump-diffusive representation for S&P500 futures returns but reveal it is critical to account for leverage effects and jumps to maintain the underlying semimartingale assumption. 
JEL:  C15 C22 C52 C80 G10 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:12963&r=ecm 
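The robust-to-jumps idea rests on comparing realized variance, which absorbs jumps, with bipower variation, which does not. A minimal sketch on simulated intraday returns (the sample path and jump size are invented, and the paper's actual detection statistic is considerably more refined):

```python
import math
import random

random.seed(7)

# simulated 1-minute intraday returns: Gaussian diffusion plus one injected jump
returns = [random.gauss(0.0, 0.001) for _ in range(390)]
returns[200] += 0.02  # the jump

# realized variance picks up the squared jump; bipower variation is robust to it
rv = sum(r * r for r in returns)
bv = (math.pi / 2) * sum(abs(a) * abs(b) for a, b in zip(returns[1:], returns[:-1]))

# a crude estimate of the jump contribution to total variation
jump_part = max(rv - bv, 0.0)
print(rv > bv)
```

The gap rv − bv is the raw ingredient behind bipower-type jump tests: it stays near zero on jump-free days and turns sharply positive when a jump is present.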
By:  Weron, Rafal; Misiorek, Adam 
Abstract:  This paper is a continuation of our earlier studies on short-term price forecasting of California electricity prices with time series models. Here we focus on whether models with heavy-tailed innovations perform better in terms of forecasting accuracy than their Gaussian counterparts. Consequently, we limit the range of analyzed models to autoregressive time series approaches that have been found to perform well for pre-crash California power market data. We expand them by allowing for heavy-tailed innovations in the form of α-stable or generalized hyperbolic noise. 
Keywords:  Electricity; price forecasting; heavy tails; time series; α-stable distribution; generalized hyperbolic distribution 
JEL:  C53 C46 C22 Q40 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2292&r=ecm 
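The modelling idea can be sketched in a few lines: fit an autoregression by least squares to a series whose innovations are heavy-tailed. Student-t innovations stand in here for the α-stable and generalized hyperbolic laws of the paper, since they can be generated from the standard library alone; the AR coefficient, sample size, and seed are illustrative assumptions.

```python
import math
import random

random.seed(42)

def student_t(df):
    # Student-t draw: a standard normal over the root of a scaled chi-square
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(chi2 / df)

# simulate an AR(1) series with heavy-tailed (t with 3 d.o.f.) innovations
phi_true, n = 0.7, 2000
x = [0.0]
for _ in range(n):
    x.append(phi_true * x[-1] + student_t(3))

# least-squares AR(1) estimate: phi_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(a * b for a, b in zip(x[1:], x[:-1]))
den = sum(a * a for a in x[:-1])
phi_hat = num / den
print(round(phi_hat, 2))
```

Least squares remains consistent here because t(3) innovations still have finite variance; the heavy tails matter mainly for density forecasts and prediction intervals, which is exactly where the paper's comparison bites.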
By:  Marmer, Vadim; Shneyerov, Artyom 
Abstract:  We propose a quantile-based nonparametric approach to inference on the probability density function (PDF) of the private values in first-price sealed-bid auctions with independent private values. Our method of inference is based on a fully nonparametric kernel-based estimator of the quantiles and PDF of observable bids. Our estimator attains the optimal rate of Guerre, Perrigne, and Vuong (2000), and is also asymptotically normal with the appropriate choice of the bandwidth. As an application, we consider the problem of inference on the optimal reserve price. 
Keywords:  First-price auctions; independent private values; nonparametric estimation; kernel estimation; quantiles; optimal reserve price 
JEL:  C14 D44 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1977&r=ecm 
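The quantile relation behind this kind of estimator can be checked in a toy case. Under GPV-style identification the value quantile satisfies Q_v(α) = Q_b(α) + α·Q_b′(α)/(n−1); with uniform[0,1] values and the symmetric equilibrium bid b = (n−1)v/n this recovers Q_v(α) = α exactly. The finite-difference derivative below is a deliberately crude stand-in for the paper's kernel quantile estimator:

```python
n = 4  # number of bidders

def bid_quantile(a):
    # equilibrium bid quantile for uniform[0,1] private values: b = (n-1)/n * v
    return (n - 1) / n * a

def value_quantile(a, h=1e-4):
    # Q_v(a) = Q_b(a) + a * Q_b'(a) / (n - 1), derivative by central difference
    dq = (bid_quantile(a + h) - bid_quantile(a - h)) / (2 * h)
    return bid_quantile(a) + a * dq / (n - 1)

# with uniform values the recovered value quantile is the identity: Q_v(a) = a
for a in (0.25, 0.5, 0.9):
    print(round(value_quantile(a), 4))  # prints 0.25, 0.5, 0.9
```

In practice both Q_b and its derivative come from kernel smoothing of observed bids, and the bandwidth choice governs the asymptotic normality result the abstract mentions.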
By:  Adrian Pagan (Queensland University of Technology); M. Hashem Pesaran (CIMF, Cambridge University and IZA) 
Abstract:  This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah (1989), and shows that structural equations for which there are known permanent shocks must have no error correction terms present in them, thereby freeing up the latter to be used as instruments in estimating their parameters. The proposed approach is illustrated by a re-examination of the identification scheme used in a monetary model by Wickens and Motta (2001), and in a well-known paper by Gali (1992) which deals with the construction of an IS-LM model with supply-side effects. We show that the latter imposes more short-run restrictions than are needed because of a failure to fully utilize the cointegration information. 
Keywords:  permanent shocks, structural identification, error correction models, IS-LM models 
JEL:  C30 C32 E10 
Date:  2007–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2634&r=ecm 
By:  M Ballin; M Scanu; Paola Vicard 
Abstract:  A class of estimators based on the dependency structure of a multivariate variable of interest and the survey design is defined. The dependency structure is the one described by Bayesian networks. This class admits ratio-type estimators as a subclass identified by a particular dependency structure. A Monte Carlo simulation shows that adopting the estimator corresponding to the population structure is more efficient than the alternatives. We also show how this class adapts to the problem of integrating information from two surveys through the probability-updating system of Bayesian networks. 
Keywords:  Graphical models, probability update, survey design 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0054&r=ecm 
By:  Richard T. Baillie (Michigan State University and Queen Mary, University of London); Claudio Morana (Michigan State University, Università del Piemonte Orientale and ICER) 
Abstract:  This paper introduces a new long memory volatility process, denoted by Adaptive <i>FIGARCH</i>, or <i>AFIGARCH</i>, which is designed to account for both long memory and structural change in the conditional variance process. Structural change is modeled by allowing the intercept to follow a slowly varying function, specified by Gallant (1984)'s flexible functional form. A Monte Carlo study finds that the <i>AFIGARCH</i> model outperforms the standard <i>FIGARCH</i> model when structural change is present, and performs at least as well in the absence of structural instability. An empirical application to stock market volatility is also included to illustrate the usefulness of the technique. 
Keywords:  <i>FIGARCH</i>, Long memory, Structural change, Stock market volatility 
JEL:  C15 C22 F31 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp593&r=ecm 
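The adaptive part of the model can be illustrated in simplified form. The sketch below grafts a Gallant-style flexible Fourier intercept onto a plain GARCH(1,1) recursion rather than the paper's FIGARCH long-memory recursion, and all parameter values and the return path are invented:

```python
import math

def adaptive_intercept(t, T, w0, gammas, deltas):
    # Gallant-style flexible Fourier form for a slowly varying intercept
    s = 2 * math.pi * t / T
    return w0 + sum(g * math.sin((j + 1) * s) + d * math.cos((j + 1) * s)
                    for j, (g, d) in enumerate(zip(gammas, deltas)))

def cond_variance(returns, w0, gammas, deltas, alpha, beta):
    # GARCH(1,1) recursion with the time-varying intercept: a deliberate
    # simplification of the paper's FIGARCH fractional-differencing recursion
    T = len(returns)
    h = [w0 / (1 - alpha - beta)]  # start at an unconditional-style level
    for t in range(1, T):
        w_t = adaptive_intercept(t, T, w0, gammas, deltas)
        h.append(w_t + alpha * returns[t - 1] ** 2 + beta * h[-1])
    return h

# illustrative parameters; a flat return path just traces the recursion shape
h = cond_variance([0.0] * 200, w0=0.05, gammas=[0.02], deltas=[0.01],
                  alpha=0.05, beta=0.90)
print(len(h))  # 200
```

Even in this stripped-down form the key feature survives: the intercept drifts smoothly over the sample, so the baseline variance level shifts without changing the dynamics parameters, which is how the model separates structural change from (long-memory) persistence.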
By:  Matthias Fischer 
Abstract:  A new test for constant correlation is proposed. Based on the bivariate Student-t distribution, this test is derived as a Lagrange multiplier (LM) test. Whereas most of the traditional tests (e.g. Jennrich, 1970, Tang, 1995 and Goetzmann, Li & Rouwenhorst, 2005) specify the unknown correlations as piecewise constant, our model setup for the correlation coefficient is based on trigonometric functions. Applying this test to assets from different financial markets (stocks, exchange rates, metals), we find empirical evidence that many of the correlations vary over time. 
Keywords:  Lagrange multiplier test, constant correlation, trigonometric functions. 
JEL:  C22 C32 G12 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007012&r=ecm 
By:  Caterina Conigliani; A Tancredi 
Abstract:  We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution, and in particular to model accurately the tail of the distribution, which is highly influential in estimating the population mean. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging: instead of choosing a single parametric model, we specify a set of plausible models for costs and estimate the mean cost by its posterior expectation, which can be obtained as a weighted mean of the posterior expectations under each model, with weights given by the posterior model probabilities. The results are compared with those obtained with a semiparametric approach that does not require any assumption about the distribution of costs. 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0064&r=ecm 
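The model-averaging step is mechanically simple, and every number below is invented purely for illustration. Given each candidate cost model's posterior mean and marginal likelihood (evidence), the BMA estimate is the evidence-weighted average of the per-model means; with equal prior model probabilities, the posterior model probabilities are proportional to the evidences.

```python
# hypothetical candidate models for the cost distribution, each with an
# invented marginal likelihood (evidence) and posterior mean cost
models = {
    "gamma":        {"evidence": 1.0e-52, "posterior_mean": 4100.0},
    "log-normal":   {"evidence": 4.0e-52, "posterior_mean": 4650.0},
    "log-logistic": {"evidence": 5.0e-52, "posterior_mean": 4900.0},
}

# posterior model probabilities under equal prior model weights
total = sum(m["evidence"] for m in models.values())
weights = {k: m["evidence"] / total for k, m in models.items()}

# Bayesian-model-averaged mean cost
bma_mean = sum(weights[k] * models[k]["posterior_mean"] for k in models)
print(round(bma_mean, 1))  # 4720.0
```

The point the abstract makes is visible even here: models with similar fit but different tail behavior can imply quite different mean costs, and averaging propagates that model uncertainty into the final estimate instead of conditioning on a single winner.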
By:  Juan Carlos Escanciano; Jose Olmo (City University, London) 
Abstract:  One of the implications of the creation of the Basel Committee on Banking Supervision was the implementation of Value-at-Risk (VaR) as the standard tool for measuring market risk. The correct specification of parametric VaR models thereby became of crucial importance for providing accurate and reliable risk measures. If the underlying risk model is not correctly specified, VaR estimates understate or overstate risk exposure, which can have dramatic consequences for the stability and reputation of financial institutions or lead to suboptimal capital allocation. We show that the use of standard unconditional backtesting procedures to assess VaR models is completely misleading: these tests do not consider the impact of estimation risk and therefore use wrong critical values to assess market risk. The purpose of this paper is to quantify such estimation risk in a very general class of dynamic parametric VaR models and to correct standard backtesting procedures to provide valid inference in specification analyses. A Monte Carlo study illustrates our theoretical findings in finite samples. Finally, an application to the S&P500 Index shows the importance of this correction, its impact on the capital requirements imposed by the Basel Accord, and its implications for the choice of dynamic parametric models for risk management. 
Keywords:  Backtesting; Basel Accord; Model Risk; Risk Management; Value at Risk; Conditional Quantile 
JEL:  C52 C22 G21 G32 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2007005&r=ecm 
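The standard unconditional backtest the paper takes issue with is typically Kupiec's likelihood-ratio test of correct unconditional coverage, which treats the VaR forecast as if it were known rather than estimated. A minimal version follows; the sample figures are illustrative:

```python
import math

def kupiec_lr(T, x, p):
    # LR test of H0: violation probability equals p, given x violations
    # in T days (assumes 0 < x < T); asymptotically chi-square(1) under H0
    pi_hat = x / T
    log_h0 = (T - x) * math.log(1 - p) + x * math.log(p)
    log_h1 = (T - x) * math.log(1 - pi_hat) + x * math.log(pi_hat)
    return -2.0 * (log_h0 - log_h1)

# 5 violations of a 1% VaR over 250 trading days (illustrative numbers)
lr = kupiec_lr(250, 5, 0.01)
print(round(lr, 3))  # 1.957
print(lr > 3.841)    # reject at 5%? (chi-square(1) critical value) -> False
```

The paper's argument is that the chi-square(1) critical value used here is only valid when the VaR is known; once the VaR comes from an estimated dynamic model, the test's actual size differs, and the critical values need correcting.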
By:  Xi Zou; David Levinson (Nexus (Networks, Economics, and Urban Systems) Research Group, Department of Civil Engineering, University of Minnesota) 
Abstract:  Driving behaviors at intersections are complex because drivers have to perceive more traffic events than in normal road driving and are thus exposed to more errors with safety consequences. Drivers make real-time responses in a stochastic manner. This paper presents our study using Hidden Markov Models (HMMs) to model driving behaviors at intersections. Observed vehicle movement data are used to build the model. A single HMM is used to cluster the vehicle movements when they are close to the intersection. The re-estimated clustered HMMs provide better prediction of the vehicle movements than traditional car-following models. Only through vehicles on major roads are considered in this paper. 
JEL:  R41 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:nex:wpaper:hiddenmarkov&r=ecm 
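The clustering and prediction machinery rests on standard HMM likelihood evaluation. Below is a minimal forward-algorithm sketch with a hypothetical two-state model; the states, observation symbols, and all probabilities are invented for illustration:

```python
def forward(obs, start, trans, emit):
    # forward algorithm: probability of an observation sequence under an HMM
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in alpha) * emit[s][o]
                 for s in start}
    return sum(alpha.values())

# hypothetical two-state model: a vehicle is either "cruising" or "yielding",
# and we observe coarse speed classes as it approaches the intersection
start = {"cruise": 0.6, "yield": 0.4}
trans = {"cruise": {"cruise": 0.8, "yield": 0.2},
         "yield":  {"cruise": 0.3, "yield": 0.7}}
emit  = {"cruise": {"fast": 0.7, "slow": 0.3},
         "yield":  {"fast": 0.1, "slow": 0.9}}

p = forward(["fast", "slow", "slow"], start, trans, emit)
print(round(p, 4))  # 0.1164
```

Clustering trajectories with an HMM amounts to comparing such sequence likelihoods under candidate models and assigning each observed movement to the model that scores it highest.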
By:  Colignatus, Thomas 
Abstract:  Nominal data currently lack a correlation coefficient, such as has already been defined for real data. A measure is possible using the determinant, with the useful interpretation that the determinant gives the ratio between volumes. With M an n × m contingency table and n ≤ m, the suggested measure is r = Sqrt[det[A.A']] with A = Normalized[M]. With M an n1 × n2 × ... × nk contingency matrix, we can construct a matrix of pairwise correlations R so that the overall correlation is r = det[R]. For a 2 × 2 matrix the measure gives the normal correlation coefficient when the nominal categories are replaced with {-1, 1}. 
Keywords:  association; correlation; contingency table; volume ratio; determinant; nonparametric methods; nominal data; nominal scale; categorical data; Fisher’s exact test; odds ratio; tetrachoric correlation coefficient; phi; Cramer’s V; Pearson; contingency coefficient; uncertainty coefficient; Theil’s U; eta; meta-analysis; Simpson’s paradox; causality; statistical independence 
JEL:  C10 
Date:  2007–03–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2339&r=ecm 
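The determinant measure can be sketched directly. The row normalization below (each row of the contingency table scaled to unit Euclidean length) is our assumption about what Normalized[M] does, chosen so that det[A.A'] has the volume-ratio interpretation the abstract describes: it is 0 when the rows are proportional (independence) and 1 when they are orthogonal (perfect association).

```python
import math

def row_normalize(M):
    # scale each row of the contingency table to unit Euclidean length
    # (our assumed reading of Normalized[M])
    return [[x / math.sqrt(sum(v * v for v in row)) for x in row] for row in M]

def det(A):
    # determinant by Laplace expansion; fine for the small matrices here
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def nominal_correlation(M):
    # r = Sqrt[det[A.A']] with A = Normalized[M]; assumes n <= m
    A = row_normalize(M)
    gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in A] for r1 in A]
    return math.sqrt(max(det(gram), 0.0))  # clamp tiny negative round-off

# independence: rows are proportional, so the spanned "volume" collapses
print(round(nominal_correlation([[10, 10], [10, 10]]), 6))  # 0.0
# perfect association: orthogonal rows span the full unit volume
print(round(nominal_correlation([[10, 0], [0, 10]]), 6))    # 1.0
```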
By:  José M. Labeaga; Mercedes MartosPartal 
Abstract:  This paper uses binary choice models that specify four possible sources of observed regularity in the consumer brand choice decision across purchase occasions: state dependence, observed heterogeneity, unobserved heterogeneity and correlation effects. The objective is to distinguish correctly among the effects of these four sources. The estimation method proposed is an alternative to the estimation methods most commonly used in marketing choice models. We argue that the alternative method appropriately controls for observed heterogeneity, and for unobserved heterogeneity correlated with the state dependence variable, because of the way the state dependence variable is built. The methodology, proposed by Chamberlain (1984), is applied for the first time in marketing. A relationship for unobserved heterogeneity is specified, taking into account the correlation between unobserved heterogeneity and other choice determinants. In this way, we separate the influence of household state dependence and tastes on brand choice. The findings are very conclusive: because the individual effects and the covariates are correlated, traditional estimation methods cannot be used to separate state dependence from unobserved heterogeneity. The proposed model is found to yield better measures of predictive performance than the conventional model. The results are robust across categories of laundry detergent and have significant implications for marketing policy. 
URL:  http://d.repec.org/n?u=RePEc:fda:fdaddt:200702&r=ecm 