nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒05‒13
twelve papers chosen by
Sune Karlsson
Orebro University

  1. Simple Tests for Cointegration in Dependent Panels with Structural Breaks By Westerlund, Joakim; Edgerton, David
  2. Calculation of Multivariate Normal Probabilities by Simulation, with Applications to Maximum Simulated Likelihood Estimation By Lorenzo Cappellari; Stephen P. Jenkins
  3. Varying coefficient GARCH versus local constant volatility modeling. Comparison of the predictive power By Jörg Polzehl; Vladimir Spokoiny
  4. Forward and reverse representations for Markov chains By Grigori Milstein; John Schoenmakers; Vladimir Spokoiny
  5. In Search of Non-Gaussian Components of a High-Dimensional Distribution By Gilles Blanchard; Motoaki Kawanabe; Masashi Sugiyama; Vladimir Spokoiny; Klaus-Robert Müller
  6. Spatial aggregation of local likelihood estimates with applications to classification By Denis Belomestny; Vladimir Spokoiny
  7. On Priors on Cointegrating Spaces By Rodney W. Strachan
  8. Granger-causality in Markov Switching Models By Monica Billio; Silvestro Di Sanzo
  9. Approximate Solutions to Dynamic Models – Linear Methods By Harald Uhlig
  10. Spectral calibration of exponential Lévy Models By Denis Belomestny; Markus Reiß
  11. Discrete Choice Survey Experiments: A Comparison Using Flexible Models By Siikamaki, Juha; Layton, David F.
  12. Forecasting interest rate swap spreads using domestic and international risk factors: Evidence from linear and non-linear models By Costas Milas; Ilias Lekkos; Theodore Panagiotidis

  1. By: Westerlund, Joakim (Department of Economics, Lund University); Edgerton, David (Department of Economics, Lund University)
    Abstract: This paper develops two very simple tests for the null hypothesis of no cointegration in panel data. The tests are general enough to allow for heteroskedastic and serially correlated errors, unit-specific time trends, cross-sectional dependence and an unknown structural break in both the intercept and slope of the cointegrating regression, which may be located at different dates for different units. The limiting distributions of the tests are derived, and are found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small-sample properties of the tests.
    Keywords: Panel Cointegration; Cointegration Test; Structural Break; Cross-Sectional Dependence; Common Factor
    JEL: C12 C32 C33
    Date: 2006–05–04
  2. By: Lorenzo Cappellari (Catholic University of Milan, ISER and IZA Bonn); Stephen P. Jenkins (ISER, University of Essex, DIW Berlin and IZA Bonn)
    Abstract: We discuss methods for calculating multivariate normal probabilities by simulation and two new Stata programs for this purpose: -mdraws- for deriving draws from the standard uniform density using either Halton or pseudo-random sequences, and an egen function -mvnp()- for calculating the probabilities themselves. Several illustrations show how the programs may be used for maximum simulated likelihood estimation.
    Keywords: simulation estimation, maximum simulated likelihood, multivariate probit, Halton sequences, pseudo-random sequences, multivariate normal, GHK simulator
    JEL: C15 C51 C87
    Date: 2006–05
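    The ingredients behind -mdraws- and -mvnp()- can be illustrated outside Stata. Below is a minimal Python sketch (function names are my own, not the paper's) of a base-2 Halton sequence and a GHK-style simulator for a bivariate normal orthant probability; the actual Stata programs are far more general:

    ```python
    from statistics import NormalDist

    def halton(index, base):
        """index-th element (1-based) of the Halton low-discrepancy sequence."""
        result, f = 0.0, 1.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    def ghk_bivariate(a1, a2, rho, n=2000):
        """GHK-style simulator for P(Z1 < a1, Z2 < a2) under correlation rho,
        using Halton draws for the truncated first component."""
        nd = NormalDist()
        p1 = nd.cdf(a1)
        total = 0.0
        for i in range(1, n + 1):
            z1 = nd.inv_cdf(halton(i, 2) * p1)  # draw Z1 truncated to (-inf, a1)
            total += p1 * nd.cdf((a2 - rho * z1) / (1 - rho ** 2) ** 0.5)
        return total / n
    ```

    For rho = 0 the estimator reduces to Phi(a1)Phi(a2) exactly; for a1 = a2 = 0 and rho = 0.5 the true orthant probability is 1/4 + arcsin(0.5)/(2*pi) = 1/3, which the Halton-based estimate closely approximates.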
  3. By: Jörg Polzehl; Vladimir Spokoiny
    Abstract: GARCH models are widely used in financial econometrics. However, we show by means of a simple simulation example that the GARCH approach may lead to serious model misspecification if the assumption of stationarity is violated. In particular, the well-known integrated GARCH effect can be explained by nonstationarity of the time series. We then introduce a more general class of GARCH models with time-varying coefficients and present an adaptive procedure which can estimate the GARCH coefficients as a function of time. We also discuss a simpler semiparametric model in which the beta-parameter is fixed. Finally, we compare the performance of the parametric, time-varying nonparametric and semiparametric GARCH(1,1) models and the locally constant model from Polzehl and Spokoiny (2002) by means of simulated and real data sets using different forecasting criteria. Our results indicate that the simple locally constant model outperforms the other models in almost all cases. The GARCH(1,1) model also demonstrates relatively good forecasting performance over short forecasting horizons. However, its application to long-term forecasting seems questionable because of possible misspecification of the model parameters.
    Keywords: varying coefficient GARCH, adaptive weights
    JEL: C14 C22 C53
    Date: 2006–04
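    To fix ideas, a GARCH(1,1) recursion with a time-varying intercept (one simple form of the nonstationarity discussed above) can be simulated in a few lines of Python; the parameter values here are purely illustrative, not taken from the paper:

    ```python
    import random

    def simulate_tv_garch(n, alpha, beta, omega_path, seed=0):
        """Simulate a GARCH(1,1) process whose intercept omega varies over time:
        sigma2_t = omega_t + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}."""
        rng = random.Random(seed)
        eps = []
        sig2 = omega_path[0] / (1 - alpha - beta)  # start at the initial unconditional variance
        for t in range(n):
            e = rng.gauss(0, 1) * sig2 ** 0.5
            eps.append(e)
            sig2 = omega_path[t] + alpha * e * e + beta * sig2
        return eps

    # A level shift in omega halfway through mimics a nonstationary series
    omega = [0.1] * 500 + [0.5] * 500
    returns = simulate_tv_garch(1000, 0.1, 0.8, omega)
    ```

    Fitting a stationary GARCH(1,1) to such a series is what can produce the spurious integrated-GARCH effect the abstract refers to.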
  4. By: Grigori Milstein; John Schoenmakers; Vladimir Spokoiny
    Abstract: In this paper we carry over the concept of reverse probabilistic representations developed in Milstein, Schoenmakers, Spokoiny (2004) for diffusion processes, to discrete time Markov chains. We outline the construction of reverse chains in several situations and apply this to processes which are connected with jump-diffusion models and finite state Markov chains. By combining forward and reverse representations we then construct transition density estimators for chains which have root-N accuracy in any dimension and consider some applications.
    Keywords: transition density estimation, forward and reverse Markov chains, Monte Carlo simulation, estimation of risk
    JEL: C13 C15
    Date: 2006–05
  5. By: Gilles Blanchard; Motoaki Kawanabe; Masashi Sugiyama; Vladimir Spokoiny; Klaus-Robert Müller
    Abstract: Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (Non-Gaussian Component Analysis), is essentially based on a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector belonging to the low-dimensional non-Gaussian target subspace, up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method.
    Keywords: non-Gaussian components, dimension reduction
    JEL: C14 C51
    Date: 2005–10
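    The final PCA step can be illustrated in isolation: given a family of vectors lying near a low-dimensional target subspace, the leading eigenvector of their scatter matrix recovers the subspace direction. A self-contained sketch using power iteration (the input vectors below are hypothetical, not actual NGCA output):

    ```python
    def leading_direction(vectors, iters=100):
        """Power iteration on the scatter matrix S = sum_v v v^T: the leading
        eigenvector approximates the common direction of the family of vectors."""
        d = len(vectors[0])
        S = [[sum(v[i] * v[j] for v in vectors) for j in range(d)] for i in range(d)]
        x = [1.0] * d
        for _ in range(iters):
            y = [sum(S[i][j] * x[j] for j in range(d)) for i in range(d)]
            norm = sum(c * c for c in y) ** 0.5
            x = [c / norm for c in y]
        return x

    # Hypothetical vectors scattered around span{(1, 0)}
    vecs = [[1.0, 0.01], [0.9, -0.02], [1.1, 0.03]]
    direction = leading_direction(vecs)  # close to the unit vector (1, 0)
    ```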
  6. By: Denis Belomestny; Vladimir Spokoiny
    Abstract: This paper presents a new method for spatially adaptive local likelihood estimation which applies to a broad class of nonparametric models, including the Gaussian, Poisson and binary response models. The main idea of the method is, given a sequence of local likelihood estimates ("weak" estimates), to construct a new aggregated estimate whose pointwise risk is of the order of the smallest risk among all "weak" estimates. We also propose a new approach to selecting the parameters of the procedure by prescribing the behavior of the resulting estimate in the simple parametric situation. We establish a number of important theoretical results concerning the optimality of the aggregated estimate. In particular, our "oracle" result claims that its risk is, up to a logarithmic multiplier, equal to the smallest risk for the given family of estimates. The performance of the procedure is illustrated by application to the classification problem. A numerical study demonstrates its good performance on simulated and real-life examples.
    Keywords: adaptive weights, local likelihood, exponential family, classification
    JEL: C13 C14
    Date: 2006–04
  7. By: Rodney W. Strachan (Keele University, Department of Economics)
    Abstract: The focus of inference in Bayesian cointegration analysis has recently shifted from the cointegrating vectors to the cointegrating space. Two recent papers - Strachan and Inder (2004) and Villani (2004) - present uniform priors for the cointegrating space using different specifications for identification of the cointegrating vectors. This note clarifies the links between these approaches and shows that while the implied priors on the cointegrating space are identical, the posteriors have very different forms and this difference has implications for the inferences that can be obtained and for computational ease. Central to explaining these results is the specification of the adjustment coefficients under different identifying restrictions. The discussion extends to results on the priors in Geweke (1996) and Kleibergen and Paap (2002) and the interpretation of cointegrating vectors with linear identifying restrictions.
    Keywords: Bayesian cointegration; Grassmann manifold; Weak exogeneity; Identifying restrictions.
    JEL: C11 C32 C52
    Date: 2004–06
  8. By: Monica Billio (Department of Economics, University Of Venice Cà Foscari); Silvestro Di Sanzo (Departamento de Fundamentos del Analisis Economico, Universidad de Alicante)
    Abstract: In this paper we propose a new parametrisation of transition probabilities that allows us to characterize and test Granger-causality in Markov switching models by means of an appropriate specification of the transition matrix. Tests for independence are also provided. We illustrate our methodology with an empirical application. In particular, we investigate the causality and interdependence between financial and economic cycles using a bivariate Markov switching model. When applied to U.S. data, we find that financial variables are useful for forecasting the direction of aggregate economic activity, and vice versa.
    Keywords: Granger Causality, Markov Chains, Switching Models
    JEL: C53 C32
    Date: 2006
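    One natural restriction of the kind such tests exploit: if the two regime chains are independent, the joint 4-state transition matrix factorizes as the Kronecker product of the two marginal 2-state matrices, so departures from this structure are evidence of interdependence. A minimal sketch with hypothetical transition probabilities (not estimates from the paper):

    ```python
    def kron(P, Q):
        """Kronecker product of two transition matrices: the joint transition
        matrix of two chains evolving independently."""
        n, m = len(P), len(Q)
        return [[P[i][k] * Q[j][l] for k in range(n) for l in range(m)]
                for i in range(n) for j in range(m)]

    P = [[0.9, 0.1], [0.2, 0.8]]  # hypothetical chain for the economic cycle
    Q = [[0.7, 0.3], [0.4, 0.6]]  # hypothetical chain for the financial cycle
    joint = kron(P, Q)            # 4x4 joint transition matrix; each row sums to 1
    ```

    Testing whether an unrestricted joint transition matrix is consistent with this product form is one way to frame the independence hypothesis.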
  9. By: Harald Uhlig
    Abstract: Linear methods are often used to compute approximate solutions to dynamic models, as these models often cannot be solved analytically. Linear methods are very popular, as they can easily be implemented. Also, they provide a useful starting point for understanding more elaborate numerical methods. The approach is described here first for the example of a simple real business cycle model, including how to easily generate the log-linearized equations needed before solving the linear system. For a general framework, formulas are provided for calculating the recursive law of motion. The algorithm described here is implemented in the "toolkit" programs.
    Keywords: numerical methods, linear solution method, loglinearization, dynamic stochastic general equilibrium methods, recursive law of motion
    JEL: C60 C61 C63 E32
    Date: 2006–04
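    In the scalar case, the method of undetermined coefficients behind such toolkits reduces to picking the stable root of a quadratic in the coefficient P of the recursive law of motion x_t = P x_{t-1}, obtained from a log-linearized equation of the form F x_{t+1} + G x_t + H x_{t-1} = 0. A sketch with hypothetical coefficients:

    ```python
    def stable_root(F, G, H):
        """Solve F*P**2 + G*P + H = 0 and return the stable root (|P| < 1),
        as in the scalar case of the method of undetermined coefficients."""
        disc = (G * G - 4 * F * H) ** 0.5
        roots = [(-G + disc) / (2 * F), (-G - disc) / (2 * F)]
        return next(r for r in roots if abs(r) < 1)

    # Hypothetical coefficients of a log-linearized capital accumulation equation
    P = stable_root(1.0, -2.6, 0.96)  # the other root, ~2.15, is explosive
    ```

    Selecting the root inside the unit circle is what rules out explosive solutions; in the multivariate case this generalizes to solving a matrix quadratic.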
  10. By: Denis Belomestny; Markus Reiß
    Abstract: The calibration of financial models has become a rather important topic in recent years, mainly because of the need to price increasingly complex options in a consistent way. The choice of the underlying model is crucial for the good performance of any calibration procedure. Recent empirical evidence suggests that more complex models, taking into account such phenomena as jumps in stock prices, smiles in implied volatilities and so on, should be considered. Among the most popular such models are Lévy models, which on the one hand are able to produce complex behaviour of stock time series, including jumps and heavy tails, and on the other hand remain tractable with respect to option pricing. The work on calibration methods for financial models based on Lévy processes has mainly focused on certain parametrisations of the underlying Lévy process, with the notable exception of Cont and Tankov (2004). Since the characteristic triplet of a Lévy process is a priori an infinite-dimensional object, the parametric approach is always exposed to the problem of misspecification, in particular when there is no inherent economic foundation for the parameters and they are only used to generate different shapes of possible jump distributions. In this work we propose and test a non-parametric calibration algorithm which is based on the inversion of the explicit pricing formula via Fourier transforms and a regularisation in the spectral domain. Using the Fast Fourier Transform, the procedure is fast, easy to implement and yields good results in simulations, in view of the severe ill-posedness of the underlying inverse problem.
    Keywords: European option, jump diffusion, minimax rates, severely ill-posed, nonlinear inverse problem, spectral cut-off
    JEL: G13 C14
    Date: 2006–04
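    The spectral cut-off idea itself is simple to illustrate on a toy signal: transform to the Fourier domain, zero every coefficient above a cut-off frequency, and transform back. The sketch below is a generic illustration of this regularisation (using a naive DFT for self-containedness), not the paper's calibration estimator:

    ```python
    import cmath
    import math

    def dft(x, sign):
        """Naive discrete Fourier transform (sign=-1) or its unscaled inverse (sign=+1)."""
        n = len(x)
        return [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    def spectral_cutoff(y, keep):
        """Regularise by zeroing all Fourier coefficients above frequency `keep`."""
        n = len(y)
        Y = dft(y, -1)
        for k in range(n):
            if min(k, n - k) > keep:  # symmetric frequency index
                Y[k] = 0
        return [v.real / n for v in dft(Y, +1)]

    # A low-frequency signal contaminated by a high-frequency component
    n = 16
    clean = [math.cos(2 * math.pi * t / n) for t in range(n)]
    noisy = [clean[t] + 0.5 * math.cos(2 * math.pi * 5 * t / n) for t in range(n)]
    recovered = spectral_cutoff(noisy, keep=1)  # removes the frequency-5 component
    ```

    The choice of the cut-off level is the regularisation parameter: too low discards signal, too high lets the ill-posedness amplify noise.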
  11. By: Siikamaki, Juha (Resources for the Future); Layton, David F.
    Abstract: This study investigates the convergent validity of discrete choice contingent valuation (CV) and contingent rating/ranking (CR) methods using flexible econometric methods. Our results suggest that CV and CR can produce consistent data (achieve convergent validity) when respondents’ preferred choices and the same changes in environmental quality are considered. We also find that CR models that go beyond modeling the preferred choice and include additional ranks cannot be pooled with the CV models. Accounting for preference heterogeneity via random coefficient models and their flexible structure does not make rejection of the hypothesis of convergent validity less likely, but instead rejects the hypothesis to about the same degree or perhaps more frequently than the fixed parameter models commonly used in the literature.
    Keywords: valuation, stated preferences, data pooling, random coefficients, Rayleigh, habitat conservation
    Date: 2006–04–26
  12. By: Costas Milas (Keele University, Centre for Economic Research and School of Economic and Management Studies); Ilias Lekkos (Research Department, Eurobank Ergasias, Greece); Theodore Panagiotidis (Department of Economics, Loughborough University, UK)
    Abstract: This paper explores the ability of factor models to predict the dynamics of US and UK interest rate swap spreads within a linear and a non-linear framework. We reject linearity for the US and UK swap spreads in favour of a regime-switching smooth transition vector autoregressive (STVAR) model, where the switching between regimes is controlled by the slope of the US term structure of interest rates. We compare the ability of the STVAR model to predict swap spreads with that of a non-linear nearest-neighbours model as well as that of linear AR and VAR models. We find some evidence that the non-linear models predict better than the linear ones. At short horizons, the nearest-neighbours (NN) model predicts better than the STVAR model US swap spreads in periods of increasing risk conditions and UK swap spreads in periods of decreasing risk conditions. At long horizons, the STVAR model increases its forecasting ability over the linear models, whereas the NN model does not outperform the rest of the models.
    Keywords: Interest rate swap spreads, term structure of interest rates, factor models, regime switching, smooth transition models, nearest-neighbours, forecasting.
    JEL: C51 C52 C53 E43
    Date: 2006–04
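    In a smooth transition model of this kind, the weight on each regime is typically a logistic function of the transition variable (here, the slope of the US term structure). A minimal sketch with hypothetical parameter values:

    ```python
    import math

    def transition(s, gamma, c):
        """Logistic transition weight G(s) in (0, 1): gamma sets the smoothness
        of the regime switch and c the threshold of the transition variable s."""
        return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

    # Schematically, the STVAR fit is a G-weighted mix of two linear regimes:
    #   y_t = (1 - G(s_t)) * regime1(x_t) + G(s_t) * regime2(x_t)
    ```

    At s = c the two regimes get equal weight; as gamma grows, the model approaches an abrupt threshold switch.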

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.