nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒06‒02
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. A Simple Efficient Instrumental Variable Estimator in Panel AR(p) Models By Kazuhiko Hayakawa
  2. Comparing smooth transition and Markov switching autoregressive models of US Unemployment By Philippe J. Deschamps
  3. Dynamic Panel Data Models with Cross Section Dependence and Heteroscedasticity By Kazuhiko Hayakawa
  4. A note on the coefficient of determination in regression models with infinite-variance variables By Kim, Jeong-ryeol; Loretan, Michael Stanislaus
  5. Diagnostic Tests of Cross Section Independence for Nonlinear Panel Data Models By Hsiao, C.; Pesaran, M.H.; Pick, A.
  6. A look into the factor model black box - publication lags and the role of hard and soft data in forecasting GDP By Marta Bańbura; Gerhard Rünstler
  7. Reconsidering the role of monetary indicators for euro area inflation from a Bayesian perspective using group inclusion probabilities By Scharnagl, Michael; Schumacher, Christian
  8. Wavelet Analysis and Denoising: New Tools for Economists By Iolanda Lo Cascio
  9. Econometric analyses with backdated data - unified Germany and the euro area By Elena Angelini; Massimiliano Marcellino
  10. Does implied volatility reflect a wider information set than econometric forecasts? By Ralf Becker; Adam Clements; James Curchin
  11. Credit Risk Monte Carlo Simulation Using Simplified CreditMetrics' Model: the joint use of importance sampling and descriptive sampling By Jaqueline Terra Moura Marins; Eduardo Saliby
  12. Volatility modelling and accurate minimum capital risk requirements: a comparison among several approaches By Aurea Grane; Helena Veiga
  13. Estimating the Structural Credit Risk Model When Equity Prices Are Contaminated by Trading Noises By Duan, Jin-Chuan; Fulop, Andras
  14. Out-of-the-Money Monte Carlo Simulation Option Pricing: the joint use of Importance Sampling and Descriptive Sampling By Jaqueline Terra Moura Marins; Eduardo Saliby; Joséte Florencio do Santos
  15. Inference in a Synchronization Game with Social Interactions By Aureo de Paula

  1. By: Kazuhiko Hayakawa
    Abstract: In this paper, we show that for panel AR(p) models with iid errors, an instrumental variable (IV) estimator with instruments in the backward orthogonal deviation has the same asymptotic distribution as the infeasible optimal IV estimator when both N and T, the dimensions of the cross section and the time series, are large. If we assume that the errors are normally distributed, the asymptotic variance of the proposed IV estimator is shown to attain the lower bound when both N and T are large. A simulation study is conducted to assess the estimator.
    Keywords: panel AR(p) models, the optimal instruments, the backward orthogonal deviation
    JEL: C12 C23
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:hst:hstdps:d07-213&r=ecm
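    The paper's estimator instruments with backward orthogonal deviations; as a simpler, textbook stand-in that illustrates the same IV idea in a dynamic panel, here is a sketch of the classic Anderson-Hsiao estimator for a panel AR(1) (this is not the paper's estimator, and all parameter values below are illustrative):

```python
import numpy as np

def anderson_hsiao_iv(y):
    """Anderson-Hsiao IV estimator for the panel AR(1) model
    y[i, t] = mu_i + rho * y[i, t-1] + eps[i, t].
    First-differencing removes the fixed effect mu_i, and the level
    y[i, t-2] instruments the differenced lag.  y has shape (N, T)."""
    dy = y[:, 2:] - y[:, 1:-1]        # Delta y_t for t = 2..T-1
    dylag = y[:, 1:-1] - y[:, :-2]    # Delta y_{t-1}
    z = y[:, :-2]                     # instrument: y_{t-2} in levels
    return (z * dy).sum() / (z * dylag).sum()

# Simulate a panel with rho = 0.5, fixed effects, and iid N(0,1) errors.
rng = np.random.default_rng(0)
N, T, rho = 2000, 10, 0.5
mu = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = mu + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = mu + rho * y[:, t - 1] + rng.normal(size=N)
rho_hat = anderson_hsiao_iv(y)
```

    With large N the estimate lands close to the true rho; the paper's point is that a particular choice of instruments can make such an IV estimator asymptotically efficient as both N and T grow.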
  2. By: Philippe J. Deschamps (Department of Quantitative Economics)
    Abstract: Logistic smooth transition and Markov switching autoregressive models of a logistic transform of the monthly US unemployment rate are estimated by Markov chain Monte Carlo methods. The Markov switching model is identified by constraining the first autoregression coefficient to differ across regimes. The transition variable in the LSTAR model is the lagged seasonal difference of the unemployment rate. Out of sample forecasts are obtained from Bayesian predictive densities. Although both models provide very similar descriptions, Bayes factors and predictive efficiency tests (both Bayesian and classical) favor the smooth transition model.
    Keywords: Logistic smooth transition autoregressions; Hidden Markov models; Density forecasts; Markov chain Monte Carlo; Bridge sampling; Unemployment rate
    JEL: C11 C22 C53 E24 E27
    Date: 2007–05–24
    URL: http://d.repec.org/n?u=RePEc:fri:dqewps:wp0007&r=ecm
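    The LSTAR mechanics in this abstract can be sketched in a few lines. The following toy (our own notation, not the paper's code; the paper's transition variable is the lagged seasonal difference of unemployment) shows the logistic transition function and one step of a two-regime LSTAR(1):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """LSTAR transition function G(s) = 1 / (1 + exp(-gamma * (s - c))).
    gamma controls how abrupt the regime change is; c is its location."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def lstar_step(y_lag, s, phi, theta, gamma, c):
    """Conditional mean of a two-regime LSTAR(1): a smooth mix of two
    AR(1) regimes, with weight G(s) on the second regime."""
    g = logistic_transition(s, gamma, c)
    return (phi[0] + phi[1] * y_lag) * (1 - g) + \
           (theta[0] + theta[1] * y_lag) * g

# At s = c the model sits exactly halfway between the two regimes.
g_mid = logistic_transition(0.0, gamma=5.0, c=0.0)
```

    A Markov switching model replaces the observed transition variable s with a latent two-state Markov chain, which is why the two specifications can describe the same data similarly yet differ in forecasts.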
  3. By: Kazuhiko Hayakawa
    Abstract: In this paper, we show that the bias-corrected first-difference (BCFD) estimator suggested by Chowdhury (1987) can be applied to the case where the error terms are cross-sectionally dependent and heteroscedastic. By deriving the finite sample bias of the BCFD estimator, we find that the BCFD estimator has small bias when T, the dimension of the time series, is not very large and ρ, the autoregressive parameter, is close to one. Simulation results show that the BCFD estimator performs better than existing estimators, especially when T is not very large.
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:hst:hstdps:d07-212&r=ecm
  4. By: Kim, Jeong-ryeol; Loretan, Michael Stanislaus
    Abstract: Since Mandelbrot's seminal work (1963), alpha-stable distributions with infinite variance have been regarded as a more realistic distributional assumption than the normal distribution for some economic variables, especially financial data. After providing a brief survey of theoretical results on estimation and hypothesis testing in regression models with infinite-variance variables, we examine the statistical properties of the coefficient of determination in regression models with infinite-variance variables. These properties differ in several important aspects from those in the well-known finite variance case. In the infinite-variance case when the regressor and error term share the same index of stability, the coefficient of determination has a nondegenerate asymptotic distribution on the entire [0,1] interval, and the probability density function of this distribution is unbounded at 0 and 1. We provide closed-form expressions for the cumulative distribution function and probability density function of this limit random variable. In an empirical application, we revisit the Fama-MacBeth two-stage regression and show that in the infinite variance case the coefficient of determination of the second-stage regression converges to zero asymptotically.
    Keywords: Regression models, alpha-stable distributions, infinite variance, coefficient of determination, Fama-MacBeth regression, Monte Carlo simulation
    JEL: C12 C13 C21 G12
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:5574&r=ecm
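    The nondegenerate limit of R^2 described above is easy to see in a small Monte Carlo experiment (a sketch of our own, assuming SciPy's `levy_stable` generator; alpha, the sample size, and the unit slope are arbitrary choices):

```python
import numpy as np
from scipy.stats import levy_stable

def r_squared(x, y):
    """OLS R^2 of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Regressor and error share the same stability index alpha < 2, so both
# have infinite variance; the true slope is 1.
rng = np.random.default_rng(1)
alpha, n, reps = 1.5, 1000, 200
r2 = []
for _ in range(reps):
    x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    e = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    r2.append(r_squared(x, x + e))
r2 = np.array(r2)
# Unlike the finite-variance case, R^2 does not concentrate on a point:
# its sampling distribution stays spread out over [0, 1] as n grows.
```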
  5. By: Hsiao, C.; Pesaran, M.H.; Pick, A.
    Abstract: In this paper we discuss tests for residual cross section dependence in nonlinear panel data models. The tests are based on average pair-wise residual correlation coefficients. In nonlinear models, the definition of the residual is ambiguous and we consider two approaches: deviations of the observed dependent variable from its expected value and generalized residuals. We show the asymptotic consistency of the cross section dependence (CD) test of Pesaran (2004). In Monte Carlo experiments it emerges that the CD test has the correct size for any combination of N and T whereas the LM test relies on T large relative to N. We then analyze the roll-call votes of the 104th U.S. Congress and find considerable dependence between the votes of the members of Congress.
    Keywords: Cross-section dependence, nonlinear panel data model.
    JEL: C12 C33 C35
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0716&r=ecm
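    The CD statistic the abstract refers to is built from average pairwise residual correlations and is standard normal under the null of cross-section independence. A minimal sketch (our simulated data, not the authors' application):

```python
import numpy as np
from scipy.stats import norm

def cd_test(resid):
    """Pesaran (2004) CD statistic from an (N, T) residual matrix:
    CD = sqrt(2T / (N(N-1))) * sum over i < j of the pairwise sample
    correlations; asymptotically N(0, 1) under independence."""
    N, T = resid.shape
    corr = np.corrcoef(resid)            # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()
    pval = 2.0 * norm.sf(abs(cd))
    return cd, pval

# Under independence the statistic looks like a standard normal draw.
rng = np.random.default_rng(2)
cd, pval = cd_test(rng.normal(size=(30, 100)))
# A common factor induces strong dependence and an enormous CD value.
f = rng.normal(size=(1, 100))
cd2, _ = cd_test(f + 0.5 * rng.normal(size=(30, 100)))
```

    For nonlinear models the paper's point is that `resid` itself is ambiguous — it can be the observed outcome minus its expected value, or a generalized residual — but the statistic is computed the same way either way.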
  6. By: Marta Bańbura (ECARES, Université Libre de Bruxelles, Avenue Franklin D. Roosevelt 50, B-1050 Brussels, Belgium.); Gerhard Rünstler (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: We derive forecast weights and uncertainty measures for assessing the role of individual series in a dynamic factor model (DFM) to forecast euro area GDP from monthly indicators. The use of the Kalman filter allows us to deal with publication lags when calculating the above measures. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts. However, this is discovered only if their more timely publication is properly taken into account. Differences in publication lags play a very important role and should be considered in forecast evaluation. JEL Classification: E37, C53.
    Keywords: Dynamic factor models, forecasting, filter weights.
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070751&r=ecm
  7. By: Scharnagl, Michael; Schumacher, Christian
    Abstract: This paper addresses the relative importance of monetary indicators for forecasting inflation in the euro area in a Bayesian framework. Bayesian Model Averaging (BMA) based on predictive likelihoods provides a framework that allows for the estimation of inclusion probabilities of a particular variable, that is, the probability of that variable being in the forecast model. A novel aspect of the paper is the discussion of group-wise inclusion probabilities, which helps to address the empirical question whether the group of monetary variables is relevant for forecasting euro area inflation. In our application, we consider about thirty monetary and non-monetary indicators for inflation. Using this data, BMA provides inclusion probabilities and weights for Bayesian forecast combination. The empirical results for euro area data show that monetary aggregates and non-monetary indicators together play an important role for forecasting inflation, whereas the isolated information content of both groups is limited. Forecast combination can only partly outperform single-indicator benchmark models.
    Keywords: inflation forecasting, monetary indicators, Bayesian Model Averaging, inclusion probability
    JEL: C11 C52 E31 E37
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:5573&r=ecm
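    Inclusion probabilities — individual and group-wise — can be illustrated with a toy BMA over all subsets of a few regressors. This sketch weights models by exp(-BIC/2) as a rough marginal-likelihood approximation (the paper uses predictive likelihoods instead; variable names and the simulated data are ours):

```python
import numpy as np
from itertools import combinations

def bma_inclusion(X, y, names):
    """Enumerate every subset of the k regressors, weight each model by
    exp(-BIC/2), and return per-variable inclusion probabilities: the
    total weight of all models that contain the variable."""
    n, k = X.shape
    models, logws = [], []
    for size in range(k + 1):
        for idx in combinations(range(k), size):
            Xm = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
            beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
            rss = ((y - Xm @ beta) ** 2).sum()
            bic = n * np.log(rss / n) + (len(idx) + 1) * np.log(n)
            models.append(set(idx))
            logws.append(-0.5 * bic)
    w = np.exp(np.array(logws) - np.max(logws))
    w /= w.sum()
    incl = {names[j]: sum(wi for wi, m in zip(w, models) if j in m)
            for j in range(k)}
    return incl, w, models

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 4))
y = 1.0 * X[:, 0] + rng.normal(size=n)   # only the first variable matters
incl, w, models = bma_inclusion(X, y, ["m1", "x1", "x2", "x3"])
# Group-wise inclusion: total weight of all models containing at least
# one member of a (hypothetical) "monetary" group {m1, x1}.
group_prob = sum(wi for wi, m in zip(w, models) if m & {0, 1})
```

    By construction the group probability is at least as large as any member's individual inclusion probability, which is what makes the group-wise version informative when individual signals are diluted.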
  8. By: Iolanda Lo Cascio (Queen Mary, University of London)
    Abstract: This paper surveys the techniques of wavelets analysis and the associated methods of denoising. The Discrete Wavelet Transform and its undecimated version, the Maximum Overlapping Discrete Wavelet Transform, are described. The methods of wavelets analysis can be used to show how the frequency content of the data varies with time. This allows us to pinpoint in time such events as major structural breaks. The sparse nature of the wavelets representation also facilitates the process of noise reduction by nonlinear wavelet shrinkage, which can be used to reveal the underlying trends in economic data. An application of these techniques to the UK real GDP (1873-2001) is described. The purpose of the analysis is to reveal the true structure of the data - including its local irregularities and abrupt changes - and the results are surprising.
    Keywords: Wavelets, Denoising, Structural breaks, Trend estimation
    JEL: C22 C14 C53
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp600&r=ecm
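    Wavelet shrinkage is simple to demonstrate with the Haar wavelet: decompose, soft-threshold the detail coefficients at the universal threshold, and reconstruct. A self-contained sketch (our toy signal, not the paper's GDP application; a serious implementation would use a dedicated wavelet library):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Invert one Haar DWT level."""
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def denoise(x, levels=4):
    """Multilevel Haar decomposition with soft thresholding of the
    detail coefficients at the universal threshold sigma*sqrt(2 log n)."""
    n = x.size
    details, a = [], x.copy()
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Estimate the noise scale from the finest details (MAD estimator).
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(n))
    for d in reversed(details):
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
        a = haar_idwt(a, d)
    return a

# A piecewise-constant "trend" plus noise: shrinkage should recover it.
rng = np.random.default_rng(4)
truth = np.repeat([0.0, 2.0, -1.0, 1.0], 64)    # length 256
noisy = truth + 0.4 * rng.normal(size=truth.size)
clean = denoise(noisy)
```

    Because the Haar representation of a blocky signal is sparse, thresholding kills mostly noise and preserves the abrupt level shifts — exactly the property that makes shrinkage attractive for locating structural breaks.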
  9. By: Elena Angelini (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Massimiliano Marcellino (IGIER, CEPR and IEP - Bocconi University, Via Sarfatti 25, 20136 Milano, Italy.)
    Abstract: In this paper we compare alternative approaches for the construction of time series of macroeconomic variables for Unified Germany prior to 1991, and then use them for the construction of corresponding time series for the euro area. The resulting series for Germany and the euro area are compared with existing ones on the basis of both descriptive statistics and results of econometric analyses conducted with the alternative time series. We find that more sophisticated time series methods for backdating can yield sizeable gains. JEL Classification: C32, C43, C82.
    Keywords: Backdating, Factor Model, Unified Germany, Euro Area.
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070752&r=ecm
  10. By: Ralf Becker; Adam Clements; James Curchin
    Abstract: Much research has addressed the relative performance of option implied volatilities and econometric model based forecasts in terms of forecasting asset return volatility. The general theme to come from this body of work is that implied volatility is a superior forecast. Some authors attribute this to the fact that option markets use a wider information set when forming their forecasts of volatility. This article considers this issue and determines whether S&P 500 implied volatility reflects a set of economic information beyond its impact on the prevailing level of volatility. It is found that, while implied volatility subsumes this information, as do model based forecasts, this is only due to its impact on the current or prevailing level of volatility. Therefore, it appears as though implied volatility does not reflect a wider information set than model based forecasts, implying that implied volatility forecasts simply reflect volatility persistence in much the same way as do econometric models.
    Keywords: Implied volatility, VIX, volatility forecasts, informational efficiency
    JEL: C12 C22 G00 G14
    Date: 2007–05–22
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2007-9&r=ecm
  11. By: Jaqueline Terra Moura Marins; Eduardo Saliby
    Abstract: Monte Carlo simulation is implemented in some of the main models for estimating portfolio credit risk, such as CreditMetrics, developed by Gupton, Finger and Bhatia (1997). As in any Monte Carlo application, credit risk simulation according to this model produces imprecise estimates. In order to improve precision, simulation sampling techniques other than traditional Simple Random Sampling become indispensable. Importance Sampling (IS) has already been successfully implemented by Glasserman and Li (2005) on a simplified version of CreditMetrics, in which only default risk is considered. This paper tries to improve even more the precision gains obtained by IS over the same simplified CreditMetrics' model. For this purpose, IS is here combined with Descriptive Sampling (DS), another simulation technique which has proved to be a powerful variance reduction procedure. IS combined with DS was successful in obtaining more precise results for credit risk estimates than its standard form.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:132&r=ecm
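    Descriptive Sampling, the second technique named in this abstract, replaces random uniforms with a deterministic, evenly spread set of values in random order before inverting the target CDF. A minimal illustration of the variance reduction (estimating E[Z^2] = 1 for a standard normal; the combination with Importance Sampling studied in the paper is not shown here):

```python
import numpy as np
from scipy.stats import norm

def srs_estimate(n, rng):
    """Simple Random Sampling: n iid uniforms pushed through the normal
    inverse CDF, estimating E[Z^2] = 1."""
    z = norm.ppf(rng.uniform(size=n))
    return np.mean(z ** 2)

def ds_estimate(n, rng):
    """Descriptive Sampling: the deterministic set (i - 0.5)/n of
    'uniforms' in random order, pushed through the same inverse CDF.
    The sample *values* are fixed; only their order is random."""
    u = (np.arange(n) + 0.5) / n
    rng.shuffle(u)        # order matters in multivariate settings only
    z = norm.ppf(u)
    return np.mean(z ** 2)

rng = np.random.default_rng(5)
n = 1000
ds_err = abs(ds_estimate(n, rng) - 1.0)
srs_errs = [abs(srs_estimate(n, rng) - 1.0) for _ in range(50)]
```

    The DS error is essentially a deterministic quadrature error and is far below the typical sampling error of SRS, which is the sense in which DS acts as a variance reduction technique.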
  12. By: Aurea Grane; Helena Veiga
    Abstract: In this paper we estimate, for several investment horizons, minimum capital risk requirements for short and long positions, using the unconditional distribution of three daily indexes futures returns and a set of GARCH-type and stochastic volatility models. We consider the possibility that errors follow a Student-t distribution in order to capture the kurtosis of the returns distributions. The results suggest that an accurate modeling of extreme returns obtained for long and short trading investment positions is possible with a simple autoregressive stochastic volatility model. Moreover, modeling volatility as a fractional integrated process produces, in general, excessive volatility persistence and consequently leads to large minimum capital risk requirement estimates. The performance of models is assessed with the help of out-of-sample tests, whose p-values are reported.
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws074713&r=ecm
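    The core computation — reading capital risk requirements for long and short positions off a simulated return distribution — can be sketched with a GARCH(1,1) under Student-t innovations (parameter values below are illustrative, not the paper's estimates):

```python
import numpy as np

def simulate_mcrr(horizon, n_paths, omega, alpha, beta, nu, seed=0):
    """Simulate horizon-day cumulative returns from a GARCH(1,1) with
    standardized Student-t innovations, and return the 1% quantile (a
    long-position capital requirement) and the 99% quantile (short)."""
    rng = np.random.default_rng(seed)
    t = rng.standard_t(nu, size=(n_paths, horizon))
    eps = t * np.sqrt((nu - 2.0) / nu)          # unit-variance innovations
    h = np.full(n_paths, omega / (1.0 - alpha - beta))  # unconditional var
    cum = np.zeros(n_paths)
    for d in range(horizon):
        r = np.sqrt(h) * eps[:, d]
        cum += r
        h = omega + alpha * r ** 2 + beta * h   # GARCH(1,1) recursion
    return np.quantile(cum, 0.01), np.quantile(cum, 0.99)

q_long, q_short = simulate_mcrr(horizon=10, n_paths=50_000,
                                omega=0.05, alpha=0.05, beta=0.90, nu=8)
```

    The t innovations fatten both tails, so the long and short requirements are larger in magnitude than a Gaussian model with the same variance would imply — the kurtosis effect the abstract refers to.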
  13. By: Duan, Jin-Chuan (Rotman School of Management, University of Toronto); Fulop, Andras (ESSEC Business School)
    Abstract: The transformed-data maximum likelihood estimation (MLE) method for structural credit risk models developed by Duan (1994) is extended to account for the fact that observed equity prices may have been contaminated by trading noises. With the presence of trading noises, the likelihood function based on the observed equity prices can only be evaluated via some nonlinear filtering scheme. We devise a particle filtering algorithm that is practical for conducting the MLE estimation of the structural credit risk model of Merton (1974). We implement the method on the Dow Jones 30 firms and on 100 randomly selected firms, and find that ignoring trading noises can lead to significantly over-estimating the firm’s asset volatility. The estimated magnitude of trading noise is in line with what a firm’s liquidity would predict, based on three common liquidity proxies. A simulation study is then conducted to ascertain the performance of the estimation method.
    Keywords: Credit Risk; Maximum Likelihood; Microstructure; Option Pricing; Particle Filtering
    JEL: C22
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:ebg:essewp:dr-06015&r=ecm
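    The particle filtering scheme mentioned here (propagate, weight by the observation likelihood, resample) can be sketched on a linear-Gaussian toy — a latent random walk observed with noise — where the exact Kalman filter is available to check the output. This is only an illustration of the filtering mechanics, not the paper's credit risk model:

```python
import numpy as np

def kalman_filter(y, q, r):
    """Exact filtered means for x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    with Var(w) = q, Var(v) = r, prior x_0 ~ N(0, 1)."""
    m, P = 0.0, 1.0
    means = []
    for obs in y:
        P = P + q                      # predict
        K = P / (P + r)                # Kalman gain
        m = m + K * (obs - m)          # update
        P = (1 - K) * P
        means.append(m)
    return np.array(means)

def particle_filter(y, q, r, n_part=5000, seed=0):
    """Bootstrap particle filter for the same model: propagate particles
    through the state equation, weight by the observation likelihood,
    then resample.  The same scheme applies when the state or
    observation equation is nonlinear and no exact filter exists."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)               # draw from the prior
    means = []
    for obs in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_part)   # propagate
        logw = -0.5 * (obs - x) ** 2 / r              # obs log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                   # filtered mean
        x = rng.choice(x, size=n_part, p=w)           # resample
    return np.array(means)

rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(0.0, 0.3, 40))
y = truth + rng.normal(0.0, 0.5, 40)
kf = kalman_filter(y, q=0.09, r=0.25)
pf = particle_filter(y, q=0.09, r=0.25)
```

    With enough particles the particle-filter means track the exact Kalman means closely, which is the property that makes the likelihood evaluation in the paper feasible.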
  14. By: Jaqueline Terra Moura Marins; Eduardo Saliby; Joséte Florencio do Santos
    Abstract: As in any Monte Carlo application, simulation option valuation produces imprecise estimates. In such an application, Descriptive Sampling (DS) has proven to be a powerful Variance Reduction Technique. However, this performance deteriorates as the probability of exercising an option decreases. In the case of out-of-the-money options, the solution is to use Importance Sampling (IS). Following this track, the joint use of IS and DS is deserving of attention. Here, we evaluate and compare the benefits of using the standard IS method with the joint use of IS and DS. We also investigate the influence of the problem dimensionality on the variance reduction achieved. Although the combination IS+DS showed gains over the standard IS implementation, the benefits in the case of out-of-the-money options were mainly due to the IS effect. On the other hand, the problem dimensionality did not affect the gains. Possible reasons for such results are discussed.
    Date: 2006–09
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:116&r=ecm
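    The IS effect the abstract highlights is easy to reproduce: for a deep out-of-the-money call, shift the sampling drift so that paths finish near the strike, and reweight by the likelihood ratio. A sketch under Black-Scholes assumptions (contract parameters are arbitrary; the closed-form price serves as the reference):

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes call price (closed form, for reference)."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return s0 * cdf(d1) - k * exp(-r * t) * cdf(d2)

def is_call_price(s0, k, r, sigma, t, n, mu, seed=0):
    """Monte Carlo price of a call with importance sampling: draw
    Z ~ N(mu, 1) instead of N(0, 1) so more paths finish in the money,
    and reweight each payoff by the likelihood ratio
    exp(-mu * Z + mu^2 / 2)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(mu, 1.0, n)
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    lr = np.exp(-mu * z + 0.5 * mu ** 2)
    return np.exp(-r * t) * np.mean(payoff * lr)

s0, k, r, sigma, t = 100.0, 150.0, 0.05, 0.2, 0.5   # deep OTM call
# Choose the drift shift so the average path ends near the strike.
mu = (log(k / s0) - (r - 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
est = is_call_price(s0, k, r, sigma, t, n=200_000, mu=mu)
ref = bs_call(s0, k, r, sigma, t)
```

    Plain Monte Carlo wastes almost every draw on zero payoffs here; the drift shift concentrates draws where the payoff is nonzero, which is why IS dominates in the out-of-the-money case.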
  15. By: Aureo de Paula (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies inference in a continuous-time game where an agent’s decision to quit an activity depends on the participation of other players. In equilibrium, similar actions can be explained not only by direct influences, but also by correlated factors. Our model can be seen as a simultaneous duration model with multiple decision makers and interdependent durations. We study the problem of determining existence and uniqueness of equilibrium stopping strategies in this setting. This paper provides results and conditions for the detection of these endogenous effects. First, we show that the presence of such effects is a necessary and sufficient condition for simultaneous exits. This allows us to set up a nonparametric test for the presence of such influences which is robust to multiple equilibria. Second, we provide conditions under which parameters in the game are identified. Finally, we apply the model to data on desertion in the Union Army during the American Civil War and find evidence of endogenous influences.
    Keywords: duration models; social interactions; empirical games; optimal stopping
    JEL: C10 C70 D70
    Date: 2004–10–01
    URL: http://d.repec.org/n?u=RePEc:pen:papers:07-017&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.