Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2014-07-21
Walter Frisch

Issues in Comparing Stochastic Volatility Models Using the Deviance Information Criterion
http://d.repec.org/n?u=RePEc:een:camaaa:2014-51&r=ore
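The abstract below estimates the observed-data likelihood of a stochastic volatility model by importance sampling. A minimal sketch of that idea, using the state transition density itself as a (naive) proposal so the weights reduce to the conditional likelihood; the paper's algorithms use far better proposals, and the parameter values here are purely illustrative:

```python
import numpy as np

def sv_loglik_is(y, mu, phi, sigma, n_draws=2000, seed=0):
    """Naive importance-sampling estimate of the observed-data
    log-likelihood of a basic SV model:
        y_t = exp(h_t / 2) * eps_t,
        h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t.
    Proposal = the state transition density (prior sampling), so each
    path's weight is the conditional likelihood p(y | h)."""
    rng = np.random.default_rng(seed)
    T = len(y)
    # simulate n_draws latent volatility paths from the state equation
    h = np.empty((n_draws, T))
    h[:, 0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(n_draws)
    for t in range(1, T):
        h[:, t] = mu + phi * (h[:, t - 1] - mu) + sigma * rng.standard_normal(n_draws)
    # log p(y | h) for each path: y_t ~ N(0, exp(h_t))
    logw = -0.5 * np.sum(np.log(2 * np.pi) + h + y**2 * np.exp(-h), axis=1)
    # log-mean-exp of the weights = IS estimate of log p(y)
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))
```

Averaging the weights on the log scale (log-mean-exp) avoids the underflow that would occur if the T-term likelihood products were formed directly.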
The deviance information criterion (DIC) has been widely used for Bayesian model comparison. In particular, a popular metric for comparing stochastic volatility models is the DIC based on the conditional likelihood (obtained by conditioning on the latent variables). However, some recent studies have argued against the use of the conditional DIC on both theoretical and practical grounds. We show via a Monte Carlo study that the conditional DIC tends to favor overfitted models, whereas the DIC calculated using the observed-data likelihood (obtained by integrating out the latent variables) seems to perform well. The main challenge in obtaining the latter DIC for stochastic volatility models is that the observed-data likelihoods are not available in closed form. To overcome this difficulty, we propose fast algorithms for estimating the observed-data likelihoods for a variety of stochastic volatility models using importance sampling. We demonstrate the methodology with an application involving daily returns on the Standard & Poor's (S&P) 500 index.
By Joshua C.C. Chan and Angelia L. Grant (2014-07)
Keywords: Bayesian model comparison, nonlinear state space, DIC, jumps, moving average, S&P 500

Estimating Long-Run PD, Asset Correlation, and Portfolio Level PD by Vasicek Models
http://d.repec.org/n?u=RePEc:pra:mprapa:57244&r=ore
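The abstract below exploits the fact that, in the one-factor Vasicek model, long-run PD and asset correlation have analytical solutions. A sketch of one such closed form, a method-of-moments fit based on probit-transformed default rates; this is an illustrative textbook version, not necessarily the paper's exact estimator:

```python
import math
from statistics import NormalDist, mean, pvariance

def vasicek_longrun_pd(default_rates):
    """Method-of-moments fit of the one-factor Vasicek model to a
    series of observed default rates DR_t:
        DR_t = Phi((Phi^{-1}(PD) - sqrt(rho) * Z_t) / sqrt(1 - rho)),
    so probit(DR_t) is normal with mean Phi^{-1}(PD)/sqrt(1-rho) and
    variance rho/(1-rho).  Inverting the sample moments gives
    closed-form estimates of long-run PD and asset correlation rho."""
    nd = NormalDist()
    x = [nd.inv_cdf(dr) for dr in default_rates]  # probit transform
    m, v = mean(x), pvariance(x)
    rho = v / (1 + v)                      # asset correlation
    pd = nd.cdf(m * math.sqrt(1 - rho))    # long-run PD
    return pd, rho
```

The formula makes the abstract's point explicit: the long-run PD depends on the dispersion of the systematic factor (through rho), not only on the average default rate.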
In this paper, we propose Vasicek-type models for estimating portfolio-level probability of default (PD). With these Vasicek models, the asset correlation and long-run PD for a risk-homogeneous portfolio both have analytical solutions, longer external time series for market and macroeconomic variables can be included, and the traditional asymptotic maximum likelihood approach can be shown to be equivalent to least-squares regression, which greatly simplifies parameter estimation. The analytical formula for long-run PD, for example, explicitly quantifies the contribution of uncertainty to an increase in long-run PD. We recommend the bootstrap approach to address the serial correlation issue in a time series sample. To validate the proposed models, we estimate the asset correlations for 13 industry sectors using corporate annual default rates from S&P for the years 1981-2011, and the long-run PD and asset correlation for a US commercial portfolio, using US delinquency rates for commercial and industrial loans from the US Federal Reserve.
By Bill Huajian Yang (2013-07-10)
Keywords: portfolio-level PD, long-run PD, asset correlation, time series, serial correlation, bootstrapping, binomial distribution, maximum likelihood, least-squares regression, Vasicek model

Markovian Equilibrium in a Model of Investment Under Imperfect Competition
http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01020398&r=ore
This paper develops and analyzes a dynamic model of partially irreversible investment under Cournot competition and stochastic evolution of demand. In this framework, I characterize the Markov perfect equilibrium in which players' strategies are continuous in the state variable. There exists a zone in the space of capacities, called the no-move zone, such that if the firms' capacities belong to this area, no firm invests or disinvests in equilibrium. Thereby, an initial asymmetry between the firms' capacities can be preserved. If the firms are outside this area, they invest in order to reach the no-move zone. The equilibrium has an efficiency property: the point of this area reached by the firms minimizes the investment cost of the whole industry.
By Thomas Fagart (2014-05)
Keywords: capacity investment and disinvestment, dynamic stochastic games, Markov perfect equilibrium, real option games

Adaptive Models and Heavy Tails
http://d.repec.org/n?u=RePEc:bbk:bbkefp:1409&r=ore
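Among the adaptive algorithms the abstract below says its framework nests is the forgetting-factor algorithm. A minimal sketch of that building block, recursive least squares with exponential forgetting for a time-varying AR(1); the values of the forgetting factor and initialization are illustrative assumptions:

```python
import numpy as np

def rls_ar1(y, lam=0.99, delta=10.0):
    """Recursive least squares with forgetting factor lam for a
    time-varying AR(1): y_t = a_t * y_{t-1} + e_t.  Older
    observations are discounted geometrically, so the coefficient
    estimate adapts to parameter drift."""
    a, p = 0.0, delta            # coefficient estimate and scaled variance
    path = []
    for t in range(1, len(y)):
        x = y[t - 1]
        k = p * x / (lam + x * p * x)   # gain
        a = a + k * (y[t] - a * x)      # update toward the new observation
        p = (p - k * x * p) / lam       # inflate variance: forget old data
        path.append(a)
    return path
```

The paper's score-driven generalization replaces the Gaussian update with one robust to Student-t innovations, which is what tempers the influence of outliers.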
This paper proposes a novel and flexible framework to estimate autoregressive models with time-varying parameters. Our setup nests various adaptive algorithms that are commonly used in the macroeconometric literature, such as learning-expectations and forgetting-factor algorithms. These are generalized along several directions: specifically, we allow for both Student-t distributed innovations and time-varying volatility. Meaningful restrictions are imposed on the model parameters so as to attain local stationarity and bounded mean values. The model is applied to the analysis of inflation dynamics. Allowing for heavy tails leads to a significant improvement in terms of fit and forecast. Moreover, it proves to be crucial for obtaining well-calibrated density forecasts.
By Davide Delle Monache and Ivan Petrella (2014-07)
Keywords: time-varying parameters, score-driven models, heavy tails, adaptive algorithms, inflation

A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process
http://d.repec.org/n?u=RePEc:cbt:econwp:14/19&r=ore
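For reference alongside the abstract below, the standard DCC recursion it analyzes can be sketched as follows. Note the two steps the paper distinguishes: the recursion itself produces a conditional covariance-type matrix Q_t of the shocks, and only the standardization step turns it into a correlation matrix R_t. The parameter values are illustrative:

```python
import numpy as np

def dcc_correlations(u, a=0.05, b=0.90):
    """Textbook DCC recursion on standardized shocks u (T x N):
        Q_t = (1 - a - b) * S + a * u_{t-1} u_{t-1}' + b * Q_{t-1},
    with S the unconditional correlation target.  Each Q_t is then
    standardized to a valid correlation matrix R_t."""
    T, N = u.shape
    S = np.corrcoef(u, rowvar=False)   # unconditional target
    Q = S.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)      # standardize: unit diagonal
        Q = (1 - a - b) * S + a * np.outer(u[t], u[t]) + b * Q
    return R
```

Because each term in the Q update is positive semi-definite, every R_t is a well-defined correlation matrix as long as a, b >= 0 and a + b < 1.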
One of the most widely used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process from which DCC can be derived has not yet been established, which has made the derivation of asymptotic properties of the quasi-maximum likelihood estimators (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived only under highly restrictive and unverifiable regularity conditions. This paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the corresponding stationarity and invertibility conditions. The derivation of DCC from a vector random coefficient moving average process raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardizing the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters.
By Christian M. Hafner and Michael McAleer (2014-07-09)
Keywords: dynamic conditional correlation, dynamic conditional covariance, vector random coefficient moving average, stationarity, invertibility, asymptotic properties

Estimating and Forecasting the Yield Curve Using a Markov Switching Dynamic Nelson and Siegel Model
http://d.repec.org/n?u=RePEc:bbk:bbkcam:1403&r=ore
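The single-regime curve that the abstract below generalizes is the Nelson-Siegel model, in which the yield at each maturity is a linear combination of level, slope, and curvature factors. A minimal sketch (the decay rate default is the value commonly used in the dynamic Nelson-Siegel literature, not taken from this paper):

```python
import math

def nelson_siegel_yield(tau, beta0, beta1, beta2, lam=0.0609):
    """Nelson-Siegel yield at maturity tau:
        y(tau) = b0 + b1*(1-e^{-x})/x + b2*((1-e^{-x})/x - e^{-x}),
    with x = lam * tau.  b0 is the level (long end), b0 + b1 the
    short end, and b2 controls the hump (curvature)."""
    x = lam * tau
    slope = (1 - math.exp(-x)) / x
    curv = slope - math.exp(-x)
    return beta0 + beta1 * slope + beta2 * curv
```

The Markov switching extension lets these factor dynamics (and hence the curve's shape) differ across discrete regimes such as recessions and booms.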
We estimate versions of the Nelson-Siegel model of the yield curve of U.S. government bonds using a Markov switching latent variable model that allows for discrete changes in the stochastic process followed by the interest rates. Our modelling approach is motivated by evidence suggesting the existence of breaks in the behaviour of the U.S. yield curve that depend, for example, on whether the economy is in a recession or a boom, or on the stance of monetary policy. Our model is parsimonious, relatively easy to estimate, and flexible enough to match the changing shapes of the yield curve over time. We also derive the discrete-time no-arbitrage restrictions for the Markov switching model. We compare the forecasting performance of these models with that of the standard dynamic Nelson and Siegel model and an extension that allows the decay rate parameter to be time-varying. We show that some parameterizations of our model with regime shifts outperform the single-regime Nelson and Siegel model and other standard empirical models of the yield curve.
By Constantino Hevia, Martin Gonzalez-Rozada, Martin Sola and Fabio Spagnolo (2014-07)
Keywords: yield curve, term structure of interest rates, Markov regime switching, maximum likelihood, risk premium

Meeting our D€STINY. A Disaggregated €uro area Short Term INdicator model to forecast GDP (Y) growth
http://d.repec.org/n?u=RePEc:bde:wpaper:1323&r=ore
In this paper we propose a new real-time forecasting model for euro area GDP growth, D€STINY, which attempts to bridge the existing gap in the literature between large- and small-scale dynamic factor models. By adopting a disaggregated modelling approach, D€STINY uses most of the information available for the euro area and the member countries (around 100 economic indicators), but without incurring the finite-sample problems of the large-scale methods, since all the estimated models are of a small scale. An empirical pseudo-real-time application for the period 2004-2013 shows that D€STINY's forecasting performance is clearly better than that of the standard alternative models and of the publicly available forecasts of other institutions. This is especially true for the period since the beginning of the crisis, which suggests that our approach may be more robust in periods of highly volatile data and to the possible presence of structural breaks in the sample.
By Pablo Burriel and María Isabel García-Belmonte (2013-12)
Keywords: business cycles, output growth, time series, Euro-STING model, large-scale model

Term structure estimation, liquidity-induced heteroskedasticity and the price of liquidity risk
http://d.repec.org/n?u=RePEc:bde:wpaper:1308&r=ore
Since the seminal paper of Vasicek and Fong (1982), term structures of interest rates have been fitted under the assumption that yields are cross-sectionally homoskedastic. We show that this assumption does not hold when there are differences in liquidity, even for bonds of the same issuer. Lower turnover implies higher volatility. In addition, a minimum tick size for bond price negotiation produces higher volatility for bonds approaching their maturity dates. To show these effects, we use data for Spanish sovereign bonds from 1988 to 2010, covering more than 700 bonds and 5,000 trading days. We estimate the out-of-sample error for each bond and day. The variance of these errors is found to be negatively correlated with each bond's turnover and duration, while the mean of the errors is found to be directly correlated with the estimated variance. As a result, we propose a modified Svensson (1994) yield curve model to fit the term structure, adding a liquidity term and estimating the parameters by weighted least squares to take the liquidity-induced heteroskedasticity into account.
By Emma Berenguer, Ricardo Gimeno and Juan M. Nave (2013-05)
Keywords: heteroskedasticity, liquidity premium, yield curve fitting, Spanish sovereign bonds
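The weighted least-squares step described in the abstract above can be sketched as follows: a Svensson curve with the decay rates held fixed, so the fit is linear in the remaining coefficients, and observation weights that downweight bonds with high liquidity-induced error variance. This is an illustrative simplification (it omits the paper's explicit liquidity regressor, and the decay rates are arbitrary):

```python
import numpy as np

def fit_svensson_wls(tau, y, w, lam1=0.5, lam2=0.1):
    """Weighted least-squares fit of the Svensson (1994) yield curve
    with fixed decay rates lam1, lam2.  tau, y, w are arrays of
    maturities, observed yields, and weights (e.g. inversely
    proportional to each bond's estimated error variance)."""
    x1, x2 = lam1 * tau, lam2 * tau
    f1 = (1 - np.exp(-x1)) / x1              # slope loading
    f2 = f1 - np.exp(-x1)                    # first curvature loading
    f3 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)  # second curvature loading
    X = np.column_stack([np.ones_like(tau), f1, f2, f3])
    sw = np.sqrt(w)                          # WLS = OLS on sqrt-weighted data
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta, X @ beta                    # coefficients and fitted yields
```

With the decay rates fixed, the problem stays linear; in practice they are usually estimated too, turning the fit into a nonlinear least-squares problem.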