nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒12‒13
25 papers chosen by
Sune Karlsson
Örebro University

  1. Methods for Computing Marginal Data Densities from the Gibbs Output By Cristina Fuentes-Albero; Leonardo Melosi
  2. Sensitivity Analysis in Semiparametric Likelihood Models By Xiaohong Chen; Elie Tamer; Alexander Torgovitsky
  3. Rank Tests for Elliptical Graphical Modeling By Davy Paindaveine; Thomas Verdebout
  4. Testing Overidentifying Restrictions with Many Instruments and Heteroskedasticity By Norman R. Swanson; John C. Chao; Jerry A. Hausman; Whitney K. Newey; Tiemen Woutersen
  5. "Set Inference in Latent Variables Models" By Isabel Mourifie; Marc Henry
  6. Instrumental Variable Estimation with Heteroskedasticity and Many Instruments By Norman R. Swanson; John C. Chao; Jerry A. Hausman; Whitney K. Newey; Tiemen Woutersen
  7. Comparison of Bayesian Model Selection Criteria and Conditional Kolmogorov Test as Applied to Spot Asset Pricing Models By Xiangjin Shen; Hiroki Tsurumi
  8. Learning a bayesian network from ordinal data By Flaminia Musella
  9. Threshold estimation in price transmission analysis By Friederike Greb; Stephan von Cramon-Taubadel; Tatyana Krivobokova; Axel Munk
  10. Estimation of Equicorrelated Diffusions from Incomplete Data By Robert A. Jones; Mohammad Zanganeh
  11. An Econometric Study for Vine Copulas By Dominique Guegan; Pierre-André Maugis
  12. Stability of Long-run Relationships for Countries in Transition: A Hansen Test Study By Ewa Syczewska
  13. Predictive Inference for Integrated Volatility By Norman R. Swanson; Valentina Corradi; Walter Distaso
  14. Some critical remarks on Zhang's gamma test for independence By Klein, Ingo; Tinkl, Fabian
  15. Asymptotic Distribution of JIVE in a Heteroskedastic IV Regression with Many Instruments By Norman R. Swanson; John C. Chao; Jerry A. Hausman; Whitney K. Newey; Tiemen Woutersen
  16. Nonlinear Panel Data Models with Expected a Posteriori Values of Correlated Random Effects By Amaresh Tiwari; Franz Palm
  17. Approximated maximum likelihood estimation in multifractal random walks By Ola Løvsletten; Martin Rypdal
  18. Volatility in Discrete and Continuous Time Models: A Survey with New Evidence on Large and Small Jumps By Diep Duong; Norman R. Swanson
  19. Estimating Noncooperative and Cooperative Models of Bargaining: An Empirical Comparison By Masanori Mitsutsune; Takanori Adachi
  20. Markov Chains application to the financial-economic time series prediction By Vladimir Soloviev; Vladimir Saptsin; Dmitry Chabanenko
  21. Combining benchmarking and chain-linking for short-term regional forecasting By Ángel Cuevas; Enrique M. Quilis; Antoni Espasa
  22. A hierarchical Archimedean copula for portfolio credit risk modelling By Puzanova, Natalia
  23. Decomposing R2 with the Owen value By Hüttner, Frank; Sunder, Marco
  24. Quantifying survey expectations: What's wrong with the probability approach? By Breitung, Jörg; Schmeling, Maik
  25. Factor models By In Choi; Jörg Breitung

  1. By: Cristina Fuentes-Albero (Rutgers, The State University of New Jersey); Leonardo Melosi (London Business School)
    Abstract: We introduce two new methods for estimating the Marginal Data Density (MDD) from the Gibbs output, which are based on exploiting the analytical tractability condition. Such a condition requires that some parameter blocks can be analytically integrated out from the conditional posterior densities. Our estimators are applicable to densely parameterized time series models such as VARs or DFMs. An empirical application to six-variate VAR models shows that the bias of a fully computational estimator is sufficiently large to distort the implied model rankings. One estimator is fast enough to make multiple computations of MDDs in densely parameterized models feasible.
    Keywords: Marginal likelihood, Gibbs sampler, time series econometrics, Bayesian econometrics, reciprocal importance sampling
    JEL: C11 C15 C16
    Date: 2011–10–17
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201131&r=ecm
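    Code sketch (Python): the paper's two new estimators are not reproduced here; as a point of reference, the sketch below implements plain reciprocal importance sampling (the modified harmonic mean) on a toy conjugate model whose marginal data density is known in closed form. All names and parameter values are made up for illustration.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 50
      y = rng.normal(0.7, 1.0, n)          # data from y_i ~ N(mu, 1)

      # Conjugate toy model with prior mu ~ N(0, 1): the posterior is
      # analytic, so the "Gibbs output" is just draws from it.
      post_mean, post_var = y.sum() / (n + 1.0), 1.0 / (n + 1.0)
      draws = rng.normal(post_mean, np.sqrt(post_var), 10_000)

      log_lik = stats.norm.logpdf(y[:, None], draws, 1.0).sum(axis=0)
      log_prior = stats.norm.logpdf(draws, 0.0, 1.0)

      # Reciprocal importance sampling: 1/m(y) is estimated by the posterior
      # average of f(theta) / [p(y|theta) p(theta)], with f a normal
      # weighting density centered at the posterior mean.
      log_f = stats.norm.logpdf(draws, draws.mean(), draws.std())
      log_ratios = log_f - log_lik - log_prior
      log_mdd = -(np.logaddexp.reduce(log_ratios) - np.log(len(draws)))

      # Closed-form benchmark for this toy model: y ~ N(0, I + J).
      exact = stats.multivariate_normal.logpdf(y, np.zeros(n), np.eye(n) + 1.0)
      print(f"RIS log-MDD: {log_mdd:.3f}   exact: {exact:.3f}")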
  2. By: Xiaohong Chen (Cowles Foundation, Yale University); Elie Tamer (Dept. of Economics, Northwestern University); Alexander Torgovitsky (Dept. of Economics, Northwestern University)
    Abstract: We provide methods for inference on a finite dimensional parameter of interest, theta in Re^{d_theta}, in a semiparametric probability model when an infinite dimensional nuisance parameter, g, is present. We depart from the semiparametric literature in that we do not require that the pair (theta, g) is point identified and so we construct confidence regions for theta that are robust to non-point identification. This allows practitioners to examine the sensitivity of their estimates of theta to the specification of g in a likelihood setup. To construct these confidence regions for theta, we invert a profiled sieve likelihood ratio (LR) statistic. We derive the asymptotic null distribution of this profiled sieve LR, which is nonstandard when theta is not point identified (but is chi^2 distributed under point identification). We show that a simple weighted bootstrap procedure consistently estimates this complicated distribution's quantiles. Monte Carlo studies of a semiparametric dynamic binary response panel data model indicate that our weighted bootstrap procedure performs adequately in finite samples. We provide three empirical illustrations that contrast our confidence regions with those obtained using standard (less robust) methods.
    Keywords: Semiparametric models, Partial identification, Irregular functionals, Sieve likelihood ratio, Weighted bootstrap
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1836&r=ecm
  3. By: Davy Paindaveine; Thomas Verdebout
    Abstract: As a reaction to the restrictive Gaussian assumptions that are usually part of graphical models, Vogel and Fried [17] recently introduced elliptical graphical models, in which the vector of variables at hand is assumed to have an elliptical distribution. The present work introduces a class of rank tests in the context of elliptical graphical models. The proposed tests are valid under any elliptical density, and in particular do not require any moment assumption. They achieve local and asymptotic optimality under correctly specified densities. Their asymptotic properties are investigated both under the null and under sequences of local alternatives. Asymptotic relative efficiencies with respect to the corresponding pseudo-Gaussian competitors are derived, which allows us to show that, when based on normal scores, the proposed rank tests uniformly dominate the pseudo-Gaussian tests in the Pitman sense. The asymptotic results are confirmed through a Monte Carlo study.
    Keywords: conditional independence; graphical models; local asymptotic normality; pseudo-Gaussian tests; rank tests; scatter matrix; signed ranks
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/104766&r=ecm
  4. By: Norman R. Swanson (Rutgers University); John C. Chao (University of Maryland); Jerry A. Hausman (MIT); Whitney K. Newey (MIT); Tiemen Woutersen (Johns Hopkins University)
    Abstract: This paper gives a test of overidentifying restrictions that is robust to many instruments and heteroskedasticity. It is based on a jackknife version of the Sargan test statistic, having a numerator that is the objective function minimized by the JIVE2 estimator of Angrist, Imbens, and Krueger (1999). Correct asymptotic critical values are derived for this test when the number of instruments grows large, at a rate up to the sample size. It is also shown that the test is valid when the number of instruments is fixed and there is homoskedasticity. This test improves on recently proposed tests by allowing for heteroskedasticity and by avoiding assumptions on the instrument projection matrix. The asymptotic theory is based on the heteroskedasticity-robust many-instrument asymptotics of Chao et al. (2010).
    Keywords: heteroskedasticity, instrumental variables, jackknife estimation, many instruments, weak instruments
    JEL: C13 C31
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201118&r=ecm
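    Code sketch (Python): a stylized version of the jackknife idea behind the test, namely dropping the diagonal of the instrument projection matrix in the Sargan quadratic form. The studentization used below is a plug-in assumption for illustration only; the paper derives the correct critical values under many-instrument asymptotics. All data are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      n, K = 400, 30                        # many instruments relative to n
      Z = rng.normal(size=(n, K))
      u = rng.normal(size=n)                # structural error, E[Z'u] = 0
      x = Z @ rng.normal(0.3, 0.1, K) + 0.8 * u + 0.6 * rng.normal(size=n)
      y = 1.5 * x + u

      P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
      e = y - x * ((x @ P @ y) / (x @ P @ x))     # 2SLS residuals

      # Jackknife numerator: sum over i != j of P_ij e_i e_j, i.e. the
      # Sargan quadratic form with the own-observation terms removed.
      Pod = P - np.diag(np.diag(P))
      num = e @ Pod @ e
      var = 2.0 * np.sum(Pod**2 * np.outer(e**2, e**2))   # plug-in variance
      print("standardized jackknife overidentification statistic:",
            num / np.sqrt(var))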
  5. By: Ismael Mourifié (Département de sciences économiques, Université de Montréal); Marc Henry (Département de sciences économiques, Université de Montréal)
    Abstract: We propose a methodology for constructing valid confidence regions in incomplete models with latent variables satisfying moment equality restrictions. These include moment equality and inequality models with latent variables. The confidence regions are obtained by inverting tests based on the characterization of the identified set derived in Ekeland, Galichon, and Henry (2010). A valid bootstrap approximation of the distribution of the test statistic is derived under mild conditions and the confidence regions are shown to have correct asymptotic size.
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2011cf820&r=ecm
  6. By: Norman R. Swanson (Rutgers University); John C. Chao (University of Maryland); Jerry A. Hausman (MIT); Whitney K. Newey (MIT); Tiemen Woutersen (Johns Hopkins University)
    Abstract: This paper gives a relatively simple, well behaved solution to the problem of many instruments in heteroskedastic data. Such settings are common in microeconometric applications where many instruments are used to improve efficiency and allowance for heteroskedasticity is generally important. The solution is a Fuller (1977)-like estimator and standard errors that are robust to heteroskedasticity and many instruments. We show that the estimator has finite moments and high asymptotic efficiency in a range of cases. The standard errors are easy to compute, being like White’s (1982), with additional terms that account for many instruments. They are consistent under standard, many instrument, and many weak instrument asymptotics. Based on a series of Monte Carlo experiments, we find that the estimators perform as well as LIML or Fuller (1977) under homoskedasticity, and have much lower bias and dispersion under heteroskedasticity, in nearly all cases considered.
    Keywords: Instrumental Variables , Jackknife, Many Instruments, Heteroskedasticity
    JEL: C12 C13 C23
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201111&r=ecm
  7. By: Xiangjin Shen (Rutgers University); Hiroki Tsurumi (Rutgers University)
    Abstract: We compare Bayesian and sample theory model specification criteria. For the Bayesian criteria we use the deviance information criterion and the cumulative density of the mean squared errors of forecast. For the sample theory criterion we use the conditional Kolmogorov test. We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the conditional Kolmogorov test. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains the daily data.
    Keywords: Deviance information criterion, Markov chain Monte Carlo algorithms, Block bootstrap, Conditional Kolmogorov test, CIR and Vasicek models
    JEL: C1 C5 G0
    Date: 2011–06–07
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201126&r=ecm
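    Code sketch (Python): the deviance information criterion is computable directly from MCMC output as DIC = Dbar + pD with pD = Dbar - D(theta_bar). The sketch below uses a toy normal model whose posterior is known, standing in for genuine MCMC draws; the model and all values are made up.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      y = rng.normal(0.5, 1.0, 100)

      # Stand-in "MCMC output": posterior draws of mu in a N(mu, 1) model
      # with a flat prior, so the posterior is N(ybar, 1/n).
      draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 5_000)

      def deviance(mu):                     # D(mu) = -2 log p(y | mu)
          return -2.0 * stats.norm.logpdf(y[:, None], mu, 1.0).sum(axis=0)

      dbar = deviance(draws).mean()                 # posterior mean deviance
      dhat = deviance(np.array([draws.mean()]))[0]  # deviance at the mean
      p_d = dbar - dhat                     # effective number of parameters
      print("DIC =", dbar + p_d)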
  8. By: Flaminia Musella
    Abstract: Bayesian networks are graphical models that represent the joint distribution of a set of variables using directed acyclic graphs. When the dependence structure is unknown (or partially known), the network can be learnt from data. In this paper, we propose a constraint-based method to perform Bayesian network structural learning in the presence of ordinal variables. The new procedure, called OPC, is a variant of the PC algorithm that uses a nonparametric test appropriate for ordinal variables. We show that, in some situations, the OPC algorithm is more efficient than the PC algorithm.
    Keywords: Structural Learning, Monotone Association, Nonparametric Methods
    JEL: C14 C51
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:rtr:wpaper:0139&r=ecm
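    Code sketch (Python): the OPC procedure swaps the conditional-independence test inside the PC algorithm; the skeleton-search scaffolding itself looks roughly as below. A Fisher-z partial-correlation test stands in for the nonparametric ordinal test used by OPC, so this is only the generic PC step on synthetic continuous data.
      import numpy as np
      from itertools import combinations
      from scipy import stats

      def skeleton(data, alpha=0.05):
          """Remove edge (i, j) whenever X_i and X_j test as conditionally
          independent given some subset of i's other neighbours."""
          n, p = data.shape
          corr = np.corrcoef(data, rowvar=False)
          adj = ~np.eye(p, dtype=bool)

          def indep(i, j, S):
              if S:
                  idx = [i, j] + list(S)
                  prec = np.linalg.inv(corr[np.ix_(idx, idx)])
                  r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
              else:
                  r = corr[i, j]
              z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
              return 2 * (1 - stats.norm.cdf(abs(z))) > alpha

          for size in range(p - 1):
              for i in range(p):
                  for j in range(i + 1, p):
                      if not adj[i, j]:
                          continue
                      others = [k for k in range(p) if adj[i, k] and k != j]
                      for S in combinations(others, size):
                          if indep(i, j, S):
                              adj[i, j] = adj[j, i] = False
                              break
          return adj

      rng = np.random.default_rng(3)
      x = rng.normal(size=1000)
      y = x + rng.normal(size=1000)          # chain x -> y -> z
      z = y + rng.normal(size=1000)
      print(skeleton(np.column_stack([x, y, z])))   # the x-z edge is dropped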
  9. By: Friederike Greb (Georg-August-University Göttingen); Stephan von Cramon-Taubadel (Georg-August-University Göttingen); Tatyana Krivobokova (Georg-August-University Göttingen); Axel Munk (Georg-August-University Göttingen)
    Abstract: The threshold vector error correction model is an increasingly popular tool in the analysis of price transmission and market integration. In the literature, the profile likelihood estimator is the preferred choice for estimating this model. Yet, in certain settings this estimator performs poorly. In particular, if the true thresholds leave only a small set of observations in one of the regimes, if unknown model parameters are numerous or if differences between regimes are small, estimates can be biased and variable. Such settings are likely when studying price transmission. For simpler, but related threshold models an alternative estimator, the regularized Bayesian estimator, which does not exhibit these deficits, has been developed (Greb et al. 2011). In this article, we explore the properties of this estimator for threshold vector error correction models. Simulation results show that it clearly outperforms the profile likelihood estimator, especially in situations in which the profile likelihood estimator fails. Two empirical applications, a reassessment of the estimates in the seminal paper by Goodwin and Piggott (2001) and an analysis of price transmission between German and Spanish markets for pork, demonstrate the relevance of the new approach for price transmission analysis.
    Keywords: Bayesian estimator; market integration; spatial arbitrage; TVECM
    Date: 2011–12–06
    URL: http://d.repec.org/n?u=RePEc:got:gotcrc:103&r=ecm
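    Code sketch (Python): the profile likelihood estimator the paper benchmarks against reduces, in a univariate threshold autoregression, to a grid search over candidate thresholds with least squares in each regime. The sketch shows that mechanism on simulated data; the TVECM case is the multivariate analogue, and the regularized Bayesian estimator itself is not reproduced.
      import numpy as np

      rng = np.random.default_rng(4)
      T, gamma_true = 500, 0.0
      y = np.zeros(T)
      for t in range(1, T):                 # two-regime TAR(1) simulation
          b = 0.8 if y[t - 1] <= gamma_true else 0.2
          y[t] = b * y[t - 1] + rng.normal()
      ylag, ycur = y[:-1], y[1:]

      def ssr(gamma):                       # regime-wise least squares
          out = 0.0
          for mask in (ylag <= gamma, ylag > gamma):
              X = np.column_stack([np.ones(mask.sum()), ylag[mask]])
              beta, *_ = np.linalg.lstsq(X, ycur[mask], rcond=None)
              out += np.sum((ycur[mask] - X @ beta) ** 2)
          return out

      # Profile over candidates, trimmed so each regime keeps >= 15% of data.
      cand = np.sort(ylag)[int(0.15 * T): int(0.85 * T)]
      print("estimated threshold:", cand[np.argmin([ssr(g) for g in cand])])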
  10. By: Robert A. Jones (Simon Fraser University); Mohammad Zanganeh (Simon Fraser University)
    Abstract: The paper derives maximum likelihood parameter estimators for symmetrically correlated Wiener processes observed at discrete intervals. Such processes arise when pricing and determining Value-at-Risk for portfolio derivatives. Cases of driftless and mean-reverting state variables are considered. The procedure is applicable to samples with missing data of any pattern and to high dimensional systems. The estimation procedure is illustrated using a sample of stock prices.
    Keywords: Maximum likelihood; Equicorrelation; Correlated diffusions; Wiener process; Missing data
    JEL: C51 C58 G11 G21
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp11-03&r=ecm
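    Code sketch (Python): the core ingredient of such a likelihood is the equicorrelated Gaussian density, which has a closed-form determinant and inverse, so no large matrix inversion is needed. The sketch evaluates that density and checks it against the dense computation; drift, mean reversion and missing observations as treated in the paper are omitted.
      import numpy as np
      from scipy import stats

      def equicorr_loglik(x, sigma2, rho):
          """Log-density of x ~ N(0, sigma2 [(1-rho) I + rho J]), using the
          closed-form determinant and Sherman-Morrison inverse."""
          n = len(x)
          c = 1.0 + (n - 1.0) * rho
          logdet = n * np.log(sigma2) + (n - 1) * np.log1p(-rho) + np.log(c)
          quad = (np.sum(x**2) - rho * np.sum(x)**2 / c) / (sigma2 * (1 - rho))
          return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

      rng = np.random.default_rng(5)
      n, sigma2, rho = 8, 2.0, 0.4
      cov = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
      x = rng.multivariate_normal(np.zeros(n), cov)
      print(equicorr_loglik(x, sigma2, rho),                    # closed form
            stats.multivariate_normal.logpdf(x, np.zeros(n), cov))  # dense check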
  11. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Pierre-André Maugis (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: We present a new recursive algorithm to construct vine copulas based on an underlying tree structure. This new structure is interesting to compute multivariate distributions for dependent random variables. We prove the asymptotic normality of the vine copula parameter estimator and show that all vine copula parameter estimators have comparable variance. Both results are crucial to motivate any econometric work based on vine copulas. We provide an application of vine copulas to estimate the VaR of a portfolio, and show they offer significant improvement as compared to a benchmark estimator based on a GARCH model.
    Keywords: Vines, multivariate copulas, risk management
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00645799&r=ecm
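    Code sketch (Python): a pair-copula construction in its smallest interesting case, a three-dimensional C-vine whose building blocks are here taken to be bivariate Gaussian copulas (an assumption made for illustration; the paper's recursive algorithm covers general vines and studies the parameter estimators).
      import numpy as np
      from scipy import stats

      def log_c_gauss(u, v, rho):           # bivariate Gaussian copula density
          x, y = stats.norm.ppf(u), stats.norm.ppf(v)
          return (-0.5 * np.log(1 - rho**2)
                  + (2 * rho * x * y - rho**2 * (x**2 + y**2))
                  / (2 * (1 - rho**2)))

      def h_gauss(u, v, rho):               # conditional cdf ("h-function")
          x, y = stats.norm.ppf(u), stats.norm.ppf(v)
          return stats.norm.cdf((x - rho * y) / np.sqrt(1 - rho**2))

      def cvine3_logpdf(u1, u2, u3, rho12, rho13, rho23_1):
          """C-vine with trees (1,2), (1,3) and, conditionally, (2,3 | 1)."""
          return (log_c_gauss(u1, u2, rho12) + log_c_gauss(u1, u3, rho13)
                  + log_c_gauss(h_gauss(u2, u1, rho12),
                                h_gauss(u3, u1, rho13), rho23_1))

      print(cvine3_logpdf(0.3, 0.6, 0.8, 0.5, 0.4, 0.2))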
  12. By: Ewa Syczewska (Warsaw School of Economics)
    Abstract: We report the results of a Monte Carlo study of the Hansen Lc, MeanF and SupF stability tests for long-run relationships. The tests are related to the Phillips-Hansen and Hansen semiparametric methods, which involve kernel estimation of the long-run covariance matrix. We compare the effect of the choice of kernel on the performance of the tests, and also check the effect of misspecification of the model (lags for the disturbances) on the behaviour of the test statistics and of their percentiles. The results indicate that the Parzen kernel performs best, both in terms of efficiency and in terms of robustness to misspecification error.
    Keywords: Hansen stability tests, Phillips-Hansen estimators, long-run covariance matrix, spectral analysis, semiparametric estimates
    JEL: C01 C02 C12 C14 C46
    Date: 2011–12–04
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:58&r=ecm
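    Code sketch (Python): the tests build on kernel estimates of the long-run covariance matrix; the sketch shows the scalar version of such an estimate with the Parzen kernel that the study finds preferable. The bandwidth and data are made up for illustration.
      import numpy as np

      def parzen(x):                        # Parzen kernel weights
          x = abs(x)
          if x <= 0.5:
              return 1 - 6 * x**2 + 6 * x**3
          return 2 * (1 - x) ** 3 if x <= 1 else 0.0

      def long_run_variance(u, bandwidth):
          """Kernel (HAC) estimate of the long-run variance of a series."""
          u = u - u.mean()
          T = len(u)
          lrv = u @ u / T
          for j in range(1, int(bandwidth) + 1):
              lrv += 2.0 * parzen(j / bandwidth) * (u[j:] @ u[:-j]) / T
          return lrv

      rng = np.random.default_rng(6)
      e = rng.normal(size=2001)
      u = e[1:] + 0.5 * e[:-1]              # MA(1): true LRV = 1.5^2 = 2.25
      print(long_run_variance(u, bandwidth=12))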
  13. By: Norman R. Swanson (Rutgers University); Valentina Corradi (University of Warwick); Walter Distaso (Queen Mary)
    Abstract: In recent years, numerous volatility-based derivative products have been engineered. This has led to interest in constructing conditional predictive densities and confidence intervals for integrated volatility. In this paper, we propose nonparametric estimators of the aforementioned quantities, based on model free volatility estimators. We establish consistency and asymptotic normality for the feasible estimators and study their finite sample properties through a Monte Carlo experiment. Finally, using data from the New York Stock Exchange, we provide an empirical application to volatility directional predictability.
    Keywords: Diffusions, realized volatility measures, kernels, microstructure noise, jumps
    JEL: C22 C53 C14
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201109&r=ecm
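    Code sketch (Python): two of the model-free volatility estimators this literature builds on, realized variance and the jump-robust bipower variation, computed from one day of simulated intraday returns with a single injected jump. The numbers are illustrative only.
      import numpy as np

      rng = np.random.default_rng(7)
      M = 390                               # one-minute returns in a day
      r = rng.normal(0.0, 0.01 / np.sqrt(M), M)
      r[200] += 0.02                        # inject one jump

      rv = np.sum(r**2)                                          # realized variance
      bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
      print(f"RV = {rv:.6f}  BV = {bv:.6f}  jump part ~ {max(rv - bv, 0):.6f}")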
  14. By: Klein, Ingo; Tinkl, Fabian
    Abstract: Zhang (2008) defines the quotient correlation coefficient to test for dependence and tail dependence of bivariate random samples. He shows that the test statistics are asymptotically gamma distributed, and therefore calls the corresponding test the gamma test. We investigate the speed of convergence in a simulation study. Zhang discusses a rank-based version of this gamma test that depends on random numbers drawn from a standard Frechet distribution. We propose an alternative that does not depend on random numbers. We compare the size and the power of this alternative with the well-known t-test, the van der Waerden test and the Spearman rank test. Zhang also proposes his gamma test for situations where the dependence is neither strictly increasing nor strictly decreasing. In contrast to this, we show that the quotient correlation coefficient can only measure monotone patterns of dependence.
    Keywords: test on dependence, rank correlation test, Spearman's rho, copula, Lehmann ordering
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:faucse:872010&r=ecm
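    Code sketch (Python): the quotient correlation coefficient computed after a rank-based transform of each margin to unit Frechet. The rank transform is an assumption standing in for Zhang's use of random Frechet numbers, in the spirit of the alternative proposed above; the gamma-distributed critical values are not implemented.
      import numpy as np

      def quotient_corr(x, y):
          n = len(x)
          rx = np.argsort(np.argsort(x)) + 1.0      # ranks 1..n
          ry = np.argsort(np.argsort(y)) + 1.0
          fx = -1.0 / np.log(rx / (n + 1.0))        # unit-Frechet scores
          fy = -1.0 / np.log(ry / (n + 1.0))
          a, b = np.max(fy / fx), np.max(fx / fy)
          return (a + b - 2.0) / (a * b - 1.0)

      rng = np.random.default_rng(8)
      x = rng.normal(size=500)
      print(quotient_corr(x, 0.8 * x + rng.normal(size=500)),   # dependent
            quotient_corr(x, rng.normal(size=500)))             # independent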
  15. By: Norman R. Swanson (Rutgers University); John C. Chao (University of Maryland); Jerry A. Hausman (MIT); Whitney K. Newey (MIT); Tiemen Woutersen (Johns Hopkins University)
    Abstract: This paper derives the limiting distributions of alternative jackknife IV (JIV) estimators and gives formulae for accompanying consistent standard errors in the presence of heteroskedasticity and many instruments. The asymptotic framework includes the many instrument sequence of Bekker (1994) and the many weak instrument sequence of Chao and Swanson (2005). We show that JIV estimators are asymptotically normal; and that standard errors are consistent provided that √Kn/rn → 0, as n → ∞, where Kn and rn denote, respectively, the number of instruments and the rate of growth of the concentration parameter. This is in contrast to the asymptotic behavior of such classical IV estimators as LIML, B2SLS, and 2SLS, all of which are inconsistent in the presence of heteroskedasticity, unless Kn/rn → 0. We also show that the rate of convergence and the form of the asymptotic covariance matrix of the JIV estimators will in general depend on the strength of the instruments as measured by the relative orders of magnitude of rn and Kn.
    Keywords: heteroskedasticity , instrumental variables, jackknife estimation, many instruments, weak instruments
    JEL: C13 C31
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201110&r=ecm
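    Code sketch (Python): the point estimator underlying the analysis, in its JIVE2-type form, deletes the diagonal of the projection matrix so that each observation's fitted instrument does not use its own data. The contrast with 2SLS under many instruments and endogeneity is shown on synthetic data; the paper's standard error formulae are not reproduced.
      import numpy as np

      rng = np.random.default_rng(9)
      n, K = 400, 50
      Z = rng.normal(size=(n, K))
      u = rng.normal(size=n)
      x = Z @ rng.normal(0.15, 0.05, K) + 0.9 * u + 0.4 * rng.normal(size=n)
      y = 1.0 * x + u                       # true coefficient is 1.0

      P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
      Pod = P - np.diag(np.diag(P))         # drop own-observation terms

      beta_2sls = (x @ P @ y) / (x @ P @ x)
      beta_jive = (x @ Pod @ y) / (x @ Pod @ x)
      print(f"2SLS: {beta_2sls:.3f}   JIVE2-type: {beta_jive:.3f}")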
  16. By: Amaresh Tiwari; Franz Palm
    Abstract: We develop a two-step estimation procedure for nonlinear panel data models. Our approach combines the “correlated random effects” and the “control function” approaches to handle endogeneity of regressors that are correlated with both the unobserved heterogeneity and the idiosyncratic component. The novelty lies in integrating out the unobserved heterogeneity on which the structural equations are conditioned. The integration is performed with respect to the posterior distribution of the individual effects obtained from the first-stage reduced-form estimation. Our framework suggests separate tests for correlation between the unobserved heterogeneity and the covariates, and for correlation between the idiosyncratic component and the covariates. Average partial effects (APEs) of covariates are also easily obtained.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:rpp:wpaper:1113&r=ecm
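    Code sketch (Python): a linear analogue of combining a Mundlak-style correlated random effect with a control function, assumed here purely for illustration. The time-averaged instrument proxies the correlated effect and the first-stage residual controls for idiosyncratic endogeneity; the coefficients on these two added regressors separate the two correlation channels the abstract mentions. The paper's nonlinear, posterior-integration version is not attempted; all data-generating numbers are invented.
      import numpy as np

      rng = np.random.default_rng(10)
      N, T = 500, 4
      z = rng.normal(size=(N, T))
      a = rng.normal(size=(N, 1))                       # pure heterogeneity
      c = a + 0.5 * z.mean(axis=1, keepdims=True)       # correlated effect
      v = rng.normal(size=(N, T))                       # idiosyncratic shock
      x = z + c + v                                     # endogenous regressor
      y = 2.0 * x + c + 0.7 * v + rng.normal(size=(N, T))

      zbar = np.repeat(z.mean(axis=1, keepdims=True), T, axis=1)
      X1 = np.column_stack([np.ones(N * T), z.ravel(), zbar.ravel()])
      g, *_ = np.linalg.lstsq(X1, x.ravel(), rcond=None)
      vhat = x.ravel() - X1 @ g                         # control function

      X2 = np.column_stack([np.ones(N * T), x.ravel(), zbar.ravel(), vhat])
      b, *_ = np.linalg.lstsq(X2, y.ravel(), rcond=None)
      print("estimate of beta_x:", b[1])                # close to 2.0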
  17. By: Ola Løvsletten; Martin Rypdal
    Abstract: We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1112.0105&r=ecm
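    Code sketch (Python): not the Laplace-approximated likelihood itself, but a simulation of the multifractal random walk it targets, with the latent log-volatility drawn from a Gaussian with logarithmically decaying covariance. Parameter values are arbitrary.
      import numpy as np

      rng = np.random.default_rng(11)
      T, lam2, R, sigma = 1024, 0.02, 512, 1.0   # lam2: intermittency parameter

      # Latent log-volatility: Cov(w_s, w_t) = lam2 log(R / (|s-t| + 1)),
      # truncated at zero beyond the decorrelation scale R.
      lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
      cov = lam2 * np.log(np.maximum(R / (lags + 1.0), 1.0))
      mean = np.full(T, -lam2 * np.log(R))       # makes E[exp(2 w_t)] = 1
      w = rng.multivariate_normal(mean, cov + 1e-10 * np.eye(T), method="eigh")

      x = np.cumsum(sigma * rng.normal(size=T) * np.exp(w))   # MRW sample path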
  18. By: Diep Duong (Rutgers University); Norman R. Swanson (Rutgers University)
    Abstract: The topic of volatility measurement and estimation is central to financial and, more generally, time series econometrics. In this paper, we begin by surveying models of volatility, both discrete and continuous, and then we summarize some selected empirical findings from the literature. In particular, in the first sections of this paper, we discuss important developments in volatility models, with focus on time varying and stochastic volatility as well as nonparametric volatility estimation. The models discussed share the common feature that volatilities are unobserved, and belong to the class of missing variables. We then provide empirical evidence on "small" and "large" jumps from the perspective of their contribution to overall realized variation, using high frequency price return data on 25 stocks in the DOW 30. Our "small" and "large" jump variations are constructed at three truncation levels, using the extant methodology of Barndorff-Nielsen and Shephard (2006), Andersen, Bollerslev and Diebold (2007) and Aït-Sahalia and Jacod (2009a,b,c). Evidence of jumps is found in around 22.8% of the days during the 1993-2000 period, much higher than the corresponding figure of 9.4% during the 2001-2008 period. While the overall role of jumps is lessening, the role of large jumps has not decreased, and indeed, the relative role of large jumps, as a proportion of overall jumps, has actually increased in the 2000s.
    Keywords: Itô semi-martingale, realized volatility, jumps, multipower variation, tripower variation, truncated power variation, quarticity, infinite activity jumps
    JEL: C22 C58
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201117&r=ecm
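    Code sketch (Python): the truncation device used to separate diffusive variation from jump variation, with a second cutoff splitting "small" from "large" jumps. The cutoff constants are arbitrary illustrations of the alpha * delta^varpi rule, not the paper's calibrated levels.
      import numpy as np

      rng = np.random.default_rng(12)
      M = 78                                # five-minute returns in a day
      delta = 1.0 / M
      r = rng.normal(0.0, 0.15 * np.sqrt(delta), M)    # diffusive returns
      r[10] += 0.05                          # a smallish jump
      r[50] += 0.20                          # a large jump

      cut1 = 0.25 * delta**0.49             # diffusion vs. jump threshold
      cut2 = 4.0 * cut1                     # small vs. large jump threshold
      a = np.abs(r)
      cont = np.sum(r[a <= cut1] ** 2)
      small = np.sum(r[(a > cut1) & (a <= cut2)] ** 2)
      large = np.sum(r[a > cut2] ** 2)
      print(f"continuous ~ {cont:.5f}  small jumps ~ {small:.5f}  "
            f"large jumps ~ {large:.5f}")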
  19. By: Masanori Mitsutsune (Tokyo Metropolitan Government); Takanori Adachi (School of Economics, Nagoya University)
    Abstract: This paper examines the issue of model selection in studies of strategic situations. In particular, we compare estimation results from Adachi and Watanabe's (2008) noncooperative formulation of government formation with those from two alternative cooperative formulations. Although the estimates of the ministerial ranking are similar, statistical testing suggests that Adachi and Watanabe's (2008) noncooperative formulation fits the observed data best among the alternative models. This result implies that modeling the time structure of bargaining situations is crucially important, even at the risk of misspecifying the details of the game.
    Keywords: Model selection, Bargaining, Government formation, Structural estimation
    JEL: C52 C71 C72 C78 H19
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:799&r=ecm
  20. By: Vladimir Soloviev; Vladimir Saptsin; Dmitry Chabanenko
    Abstract: In this research, the technology of complex Markov chains is applied to predict financial time series. The main distinction between complex, or high-order, Markov chains and simple first-order ones is the existence of an aftereffect, or memory. The method builds predictions on a hierarchy of time discretization intervals and splices the prediction results obtained at the different frequency levels into a single output time series. The hierarchy of time discretizations makes it possible to exploit fractal properties of the given time series and to make predictions at the different frequencies of the series. Prediction results are presented for the world's major stock market indices.
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1111.5254&r=ecm
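    Code sketch (Python): a frequency-count high-order (here order-3) Markov chain on a discretized return series, predicting the most frequent successor of the current context. The paper's hierarchy of time scales and its splicing step are not shown; the discretization into U/D symbols is an invented simplification.
      import numpy as np
      from collections import Counter, defaultdict

      def fit_predict(states, order):
          counts = defaultdict(Counter)     # context tuple -> successor counts
          for t in range(order, len(states)):
              counts[tuple(states[t - order:t])][states[t]] += 1
          context = tuple(states[-order:])
          return counts[context].most_common(1)[0][0] if counts[context] else None

      rng = np.random.default_rng(13)
      returns = rng.normal(size=300)
      symbols = np.where(returns > 0, "U", "D").tolist()
      print("next-symbol forecast:", fit_predict(symbols, order=3))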
  21. By: Ángel Cuevas; Enrique M. Quilis; Antoni Espasa
    Abstract: In this paper we propose a methodology to estimate and forecast the GDP of the different regions of a country, providing quarterly profiles for the annual official observed data. The paper thus offers a new instrument for short-term monitoring that allows analysts to quantify the degree of synchronicity among regional business cycles. Technically, we combine time series models with benchmarking methods to forecast short-term quarterly indicators and to estimate quarterly regional GDPs, ensuring their temporal and transversal consistency with the National Accounts data. The methodology addresses the issue of non-additivity, taking into account the linked volume indexes used by the National Accounts, and provides an efficient combination of structural as well as short-term information. The methodology is illustrated by an application to the Spanish economy, providing real-time quarterly GDP estimates and forecasts at the regional level (i.e., with a minimum compilation delay with respect to the national quarterly GDP).
    Keywords: Forecasting, Spanish economy, Regional analysis, Benchmarking, Chain-linking
    JEL: C53 C43 C82 R11
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws114130&r=ecm
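    Code sketch (Python): the benchmarking half of the methodology in its simplest textbook form, an additive first-difference Denton adjustment that makes a quarterly indicator sum to the annual benchmarks while keeping its movements as smooth as possible. The paper's treatment of chain-linked (non-additive) indexes and transversal regional constraints is more involved and is not attempted here.
      import numpy as np

      def denton_additive(indicator, benchmarks):
          """Minimize the squared first differences of (x - indicator)
          subject to the annual sums of x matching the benchmarks."""
          T, A = len(indicator), np.kron(np.eye(len(benchmarks)), np.ones(4))
          D = (np.eye(T) - np.eye(T, k=-1))[1:]        # first differences
          Q = 2.0 * D.T @ D
          K = np.block([[Q, A.T], [A, np.zeros((len(benchmarks),) * 2)]])
          rhs = np.concatenate([Q @ indicator, benchmarks])
          return np.linalg.solve(K, rhs)[:T]           # solve the KKT system

      quarters = np.array([10.0, 11, 12, 13, 12, 12, 13, 14])
      annual = np.array([48.0, 53.0])                  # official benchmarks
      x = denton_additive(quarters, annual)
      print(x, x[:4].sum(), x[4:].sum())               # annual sums now match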
  22. By: Puzanova, Natalia
    Abstract: I introduce a novel, hierarchical model of tail dependent asset returns which can be particularly useful for measuring portfolio credit risk within the structural framework. To allow for a stronger dependence within sub-portfolios than between them, I utilise the concept of nested Archimedean copulas, but modify the nesting procedure to ensure the compatibility of copula generators by construction. This makes sampling straightforward. Moreover, I provide details on a particular specification based on a gamma mixture of powers. This model allows for lower tail dependence, resulting in a more conservative credit risk assessment than a comparable Gaussian model. I illustrate the extent of model risk when calculating VaR or Expected Shortfall for a credit portfolio.
    Keywords: portfolio credit risk, nested Archimedean copula, tail dependence, hierarchical dependence structure
    JEL: C46 C63 G21
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp2:201114&r=ecm
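    Code sketch (Python): the one-level building block of such models, Marshall-Olkin sampling of a Clayton (Archimedean) copula via a gamma frailty, checked against the known lower-tail dependence coefficient 2^(-1/theta). The paper's hierarchical nesting and its gamma mixture of powers are beyond this sketch.
      import numpy as np

      def sample_clayton(n, d, theta, rng):
          """U_j = psi(E_j / V) with generator psi(t) = (1 + t)^(-1/theta)
          and frailty V ~ Gamma(1/theta)."""
          V = rng.gamma(1.0 / theta, 1.0, size=(n, 1))
          E = rng.exponential(size=(n, d))
          return (1.0 + E / V) ** (-1.0 / theta)

      rng = np.random.default_rng(14)
      U = sample_clayton(100_000, 2, theta=2.0, rng=rng)
      q = 0.01                              # small quantile for the tail check
      emp = np.mean((U[:, 0] < q) & (U[:, 1] < q)) / q
      print(f"empirical lower-tail coefficient ~ {emp:.2f}, "
            f"theoretical limit {2 ** (-1 / 2.0):.2f}")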
  23. By: Hüttner, Frank; Sunder, Marco
    Abstract: We provide an axiomatization-based justification for applying the Owen value to decompose R2 in OLS models if prior knowledge can be used to form groups of regressor variables. The assumptions made by the axioms are not only plausible with respect to the variables but also clarify the meaning of the exogenous grouping of variables.
    Keywords: Shapley value, Owen value, OLS, variance decomposition, German Socio-Economic Panel
    JEL: C20
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:leiwps:100&r=ecm
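    Code sketch (Python): the Shapley layer of such a decomposition, computing each regressor group's share of R2 by averaging its marginal contribution over all orderings of the groups. The Owen value additionally splits each group's share among its members, which is omitted here; the data and grouping are invented.
      import numpy as np
      from itertools import permutations

      def r2(X, y, cols):
          if not cols:
              return 0.0
          Xs = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
          beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
          resid = y - Xs @ beta
          return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

      def shapley_r2_groups(X, y, groups):
          shares = np.zeros(len(groups))
          perms = list(permutations(range(len(groups))))
          for perm in perms:                # average marginal contributions
              used = []
              for g in perm:
                  before = r2(X, y, used)
                  used = used + list(groups[g])
                  shares[g] += r2(X, y, used) - before
          return shares / len(perms)

      rng = np.random.default_rng(15)
      X = rng.normal(size=(500, 4))
      y = X @ np.array([1.0, 0.5, 0.3, 0.0]) + rng.normal(size=500)
      print(shapley_r2_groups(X, y, groups=[(0, 1), (2,), (3,)]))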
  24. By: Breitung, Jörg; Schmeling, Maik
    Abstract: We study a matched sample of individual stock market forecasts consisting of both qualitative and quantitative forecasts. This allows us to test for the quality of forecast quantification methods by comparing quantified qualitative forecasts with actual quantitative forecasts. Focusing mainly on the widely used quantification framework advocated by Carlson and Parkin (1975), the so-called "probability approach", we find that quantified expectations derived from the probability approach display a surprisingly weak correlation with reported quantitative stock return forecasts. We trace the reason for this low correlation to the importance of asymmetric and time-varying thresholds, whereas individual heterogeneity across forecasters seems to play a minor role. Hence, our results suggest that qualitative survey data may not be a very useful device to obtain quantitative forecasts and we suggest ways to remedy this problem when designing qualitative surveys.
    Keywords: Quantification, Stock Market Expectations, Probability Approach, Heterogeneity
    JEL: C53 D84 G17
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-485&r=ecm
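    Code sketch (Python): the textbook Carlson-Parkin quantification with a symmetric, constant threshold delta, inverting the "up" and "down" shares of a survey under normality. The paper's point is precisely that this symmetric constant-threshold assumption is what fails in the data; the shares and delta below are invented.
      import numpy as np
      from scipy import stats

      def carlson_parkin(up, down, delta=0.01):
          """Respondents report 'up' if their N(mu_t, s_t^2) forecast exceeds
          +delta and 'down' if it is below -delta; invert for mu_t and s_t."""
          a = stats.norm.ppf(1.0 - np.asarray(up))    # (delta - mu) / s
          b = stats.norm.ppf(np.asarray(down))        # (-delta - mu) / s
          return -delta * (a + b) / (a - b), 2.0 * delta / (a - b)

      mu, s = carlson_parkin(up=[0.55, 0.40], down=[0.20, 0.35])
      print("quantified mean expectations:", mu)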
  25. By: In Choi (Department of Economics, Sogang University, Seoul); Jörg Breitung (Institute of Econometrics, University of Bonn, Adenauerallee 24-42, 53113 Bonn, Germany)
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:1121&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.