Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2014-12-13, edited by Walter Frisch

A theory of pruning
http://d.repec.org/n?u=RePEc:ecb:ecbwps:20141696&r=ore
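The pruning idea can be sketched in a few lines for a scalar second-order approximation; the coefficients and the scalar setting below are assumed purely for illustration, not the paper's general formulas of arbitrary order:

```python
import numpy as np

# Sketch of second-order pruning (after Kim-Kim-Schaumburg-Sims 2008) for a
# scalar state x_t with an assumed second-order approximation
#   x_{t+1} ~= a*x_t + b*x_t**2 + sigma*eps_{t+1},   |a| < 1.
# Pruning simulates the first-order path separately and feeds only it into
# the quadratic term, which removes explosive higher-order feedback.
a, b, sigma, T = 0.9, 0.5, 0.1, 10_000
rng = np.random.default_rng(0)
eps = rng.standard_normal(T)

xf = np.zeros(T + 1)          # first-order component
xs = np.zeros(T + 1)          # pruned second-order correction
for t in range(T):
    xf[t + 1] = a * xf[t] + sigma * eps[t]
    xs[t + 1] = a * xs[t] + b * xf[t] ** 2   # quadratic term uses xf only
x_pruned = xf + xs            # stays stationary for any draw of shocks
```

The unpruned recursion x_{t+1} = a\*x_t + b\*x_t\*\*2 + sigma\*eps can explode once |x| exceeds roughly (1-a)/b; feeding only the first-order path into the quadratic term rules that out.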
Numerical simulations are often needed for dynamic, stochastic models in economics. Higher-order methods can be attractive, but bear the danger of generating explosive solutions in originally stationary models. Kim-Kim-Schaumburg-Sims (2008) proposed pruning to deal with this challenge for second-order approximations. In this paper, we provide a theory of pruning and formulas for pruning of any order. We relate it to results described by Judd (1998) on perturbing dynamical systems.
JEL Classification: C63, C02, C62
Lombardo, Giovanni; Uhlig, Harald (2014-07)
Keywords: numerical economics, numerical simulation, perturbation methods, pruning, Taylor expansion

Approximate Bayesian Computation in State Space Models
http://d.repec.org/n?u=RePEc:msh:ebswps:2014-20&r=ore
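The core ABC mechanism the paper builds on can be illustrated with a toy rejection sampler; the Gaussian model, the sample mean as summary statistic, and the tolerance below are assumptions for illustration, not the auxiliary-model score matching developed in the paper:

```python
import numpy as np

# Minimal ABC rejection sampler: infer the mean theta of a Gaussian,
# using the sample mean as the summary statistic.
rng = np.random.default_rng(1)
theta_true, n = 2.0, 100
y_obs = theta_true + rng.standard_normal(n)
s_obs = y_obs.mean()                      # observed summary statistic

draws, eps = 20_000, 0.05
theta_prior = rng.uniform(-5.0, 5.0, draws)          # draws from the prior
s_sim = theta_prior + rng.standard_normal((draws, n)).mean(axis=1)
accepted = theta_prior[np.abs(s_sim - s_obs) < eps]  # keep close matches
post_mean = accepted.mean()               # approximate posterior mean
```

The accepted draws approximate the posterior conditional on the summary statistic; inference is exact only when that statistic is sufficient, which motivates the paper's pursuit of asymptotic sufficiency.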
A new approach to inference in state space models is proposed, based on approximate Bayesian computation (ABC). ABC avoids evaluation of the likelihood function by matching observed summary statistics with statistics computed from data simulated from the true process; exact inference being feasible only if the statistics are sufficient. With finite sample sufficiency unattainable in the state space setting, we seek asymptotic sufficiency via the maximum likelihood estimator (MLE) of the parameters of an auxiliary model. We prove that this auxiliary model-based approach achieves Bayesian consistency, and that, in a precise limiting sense, the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score. In multiple parameter settings a separate treatment of scalar parameters, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality. Some attention is given to a structure in which the state variable is driven by a continuous time process, with exact inference typically infeasible in this case as a result of intractable transitions. The ABC method is demonstrated using the unscented Kalman filter as a fast and simple way of producing an approximation in this setting, with a stochastic volatility model for financial returns used for illustration.
Gael M. Martin; Brendan P.M. McCabe; Worapree Maneesoonthorn; Christian P. Robert (2014)
Keywords: likelihood-free methods, latent diffusion models, linear Gaussian state space models, asymptotic sufficiency, unscented Kalman filter, stochastic volatility

Volatility vs. downside risk: optimally protecting against drawdowns and maintaining portfolio performance
http://d.repec.org/n?u=RePEc:ven:wpaper:2014:18&r=ore
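The rule-based, backward-looking benchmark that the paper's optimized control is contrasted with can be sketched directly; the target level, estimation window, and synthetic return process below are assumed for illustration:

```python
import numpy as np

# Rule-based target-volatility strategy: scale exposure so that trailing
# realized volatility meets an annualized target, capped at full investment.
rng = np.random.default_rng(2)
T, target_vol, window = 1000, 0.10, 60           # 10% annualized target
# synthetic daily returns with a calm first half and a turbulent second half
daily_vol = np.where(np.arange(T) < T // 2, 0.08, 0.25) / np.sqrt(252)
returns = daily_vol * rng.standard_normal(T)

weights = np.ones(T)
for t in range(window, T):
    realized = returns[t - window:t].std() * np.sqrt(252)  # trailing ann. vol
    weights[t] = min(1.0, target_vol / realized)           # de-lever when high
strategy_returns = weights * returns
```

The rule involves no optimization: it simply de-levers after volatility has already risen, which is the gap the paper's multiperiod stochastic programming formulation aims to close.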
As a consequence of recent market conditions, an increasing number of investors are realizing the importance of controlling tail risk to reduce drawdowns and thus increase the possibility of achieving long-term objectives. Recently, so-called volatility control strategies and volatility target approaches to investment have gained considerable interest as strategies able to mitigate tail risk and produce better risk-adjusted returns. Essentially these are rule-based, backward-looking strategies in which no optimization is considered. In this contribution we focus on the role of volatility in downside risk reduction and, in particular, in tail risk reduction. The first contribution of our paper is to provide a viable way to integrate a target volatility approach into a multiperiod portfolio optimization model, through the introduction of a local volatility control approach. Our optimized volatility control is contrasted with existing rule-based target volatility strategies, in an out-of-sample simulation on real data, to assess the improvement that can be obtained from the optimization process. A second contribution of this work is to study the interaction between volatility control and downside risk control. We show that by combining the two tools we can enhance the possibility of achieving the desired performance objectives and, simultaneously, reduce the cost of hedging. The multiperiod portfolio optimization problem is formulated in a stochastic programming framework that provides the necessary flexibility for dealing with different constraints and multiple sources of risk.
Diana Barro; Elio Canestrelli; Fabio Lanza
Keywords: volatility, tail risk, stochastic programming, risk management

Preemption Games under Levy Uncertainty
http://d.repec.org/n?u=RePEc:tex:wpaper:131101&r=ore
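For intuition on entry thresholds, the classical no-jump benchmark can be computed in closed form. This is the textbook real-options case (geometric Brownian demand, a single monopolist entrant), not the Levy-jump setting the paper analyzes, and all parameter values are assumed:

```python
import math

# Monopolist entry threshold when demand X_t follows a geometric Brownian
# motion dX = mu*X dt + sigma*X dW, discount rate r > mu, sunk entry cost I.
mu, sigma, r, I = 0.02, 0.2, 0.06, 1.0

# beta is the positive root of 0.5*sigma^2*b*(b-1) + mu*b - r = 0
a2, a1, a0 = 0.5 * sigma ** 2, mu - 0.5 * sigma ** 2, -r
beta = (-a1 + math.sqrt(a1 ** 2 - 4 * a2 * a0)) / (2 * a2)

# entry is optimal at X* = beta/(beta-1) * (r - mu) * I, which exceeds the
# break-even (NPV) trigger (r - mu)*I because waiting has option value
x_star = beta / (beta - 1.0) * (r - mu) * I
```

In the paper's game, preemption pressure can push the leader to enter at or below this monopolist threshold, and positive demand jumps can lower it further.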
We study a stochastic version of Fudenberg and Tirole's preemption game. Two firms contemplate entering a new market with stochastic demand. The firms differ in their sunk costs of entry. If the demand process has no upward jumps, the low-cost firm enters first, and the high-cost firm follows. If the leader's optimization problem has an interior solution, the leader enters at the optimal threshold of a monopolist; otherwise, the leader enters earlier than the monopolist. If the demand admits positive jumps, then the optimal entry threshold of the leader can be lower than the monopolist's threshold even if the solution is interior; simultaneous entry can happen either as an equilibrium or as a coordination failure; and the high-cost firm can become the leader. We characterize subgame perfect equilibrium strategies in terms of stopping times and value functions. Analytical expressions for the value functions and the thresholds that define the stopping times are derived.
Svetlana Boyarchenko; Sergei Levendorskii (2011-05)
Keywords: stopping time games, preemption, Levy uncertainty

A Bayesian Latent Variable Mixture Model for Filtering Firm Profit Rate
http://d.repec.org/n?u=RePEc:epa:cepawp:2014-1&r=ore
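The filtering step can be illustrated with a minimal Gibbs sampler for a two-component latent-variable mixture separating signal from noise. For brevity the components below are normal with known scales, unlike the Laplace-type signal distribution estimated in the paper; all numbers are assumed:

```python
import numpy as np

# Gibbs sampling from a two-component mixture: alternate between sampling
# latent labels given parameters and parameters given labels.
rng = np.random.default_rng(3)
n = 600
z_true = rng.random(n) < 0.7                       # 70% signal observations
x = np.where(z_true, rng.normal(1.0, 0.5, n), rng.normal(-2.0, 2.0, n))

mu = np.array([0.0, 0.0])                          # component means (unknown)
sd = np.array([0.5, 2.0])                          # component scales (known)
p = 0.5                                            # mixing weight
for _ in range(300):                               # Gibbs sweeps
    # 1. latent labels given parameters (posterior responsibilities)
    like1 = p * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
    like2 = (1 - p) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
    z = rng.random(n) < like1 / (like1 + like2)
    # 2. component means given labels (flat prior -> normal posterior)
    for k, mask in enumerate([z, ~z]):
        m = mask.sum()
        if m:
            mu[k] = rng.normal(x[mask].mean(), sd[k] / np.sqrt(m))
    # 3. mixing weight given labels (uniform prior -> Beta posterior)
    p = rng.beta(1 + z.sum(), 1 + (~z).sum())
```

The sampled labels z play the role of the filter: observations with high signal responsibility form the subset from which the profit rate distribution is then characterized.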
By using Bayesian Markov chain Monte Carlo methods we select the proper subset of competitive firms and find striking evidence for Laplace-shaped firm profit rate distributions. Our approach enables us to extract more information from the data than previous research. We filter US firm-level data into signal and noise distributions by Gibbs sampling from a latent variable mixture distribution, extracting a sharply peaked, negatively skewed Laplace-type profit rate distribution. A Bayesian change point analysis yields the subset of large firms with symmetric and stationary Laplace distributed profit rates, adding to the evidence for statistical equilibrium at the economy-wide and sectoral levels.
Gregor Semieniuk; Ellis Scharfenaker (2014-02)
Keywords: firm competition, Laplace distribution, Gibbs sampler, profit rate, statistical equilibrium

Financial regimes and uncertainty shocks
http://d.repec.org/n?u=RePEc:bbk:bbkcam:1404&r=ore
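The regime mechanism can be caricatured in a few lines: an autoregression whose response to the same shock is five times larger once a credit-stress variable crosses a critical threshold. All numbers below are assumed and only the switching logic mirrors the paper's threshold VAR:

```python
import numpy as np

# Threshold-switching AR(1): the impact coefficient on an uncertainty shock
# depends on which side of the credit threshold the economy is on.
T, thresh = 400, 1.0
credit = np.linspace(0.0, 2.0, T)           # credit stress drifting upward
shock = np.zeros(T)
shock[50], shock[250] = 1.0, 1.0            # two identical uncertainty shocks

y = np.zeros(T)
rho, b_good, b_bad = 0.8, -0.2, -1.0        # impact 5x larger under distress
for t in range(1, T):
    b = b_bad if credit[t] > thresh else b_good   # regime switch
    y[t] = rho * y[t - 1] + b * shock[t]
```

The same unit shock produces an impact of -0.2 in the good-credit regime (t = 50) and -1.0 in the distressed regime (t = 250), the kind of state dependence a linear VAR averages away.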
Financial markets are central to the transmission of uncertainty shocks. This paper documents a new aspect of the interaction between the two by showing that uncertainty shocks have radically different macroeconomic implications depending on the state financial markets are in when the shocks occur. Using monthly US data, we estimate a nonlinear VAR where economic uncertainty is proxied by the (unobserved) volatility of the structural shocks, and a regime change occurs whenever credit conditions cross a critical threshold. An exogenous increase in uncertainty has recessionary effects in both good and bad credit regimes, but its impact on output is estimated to be five times larger when the economy is experiencing financial distress. Accounting for this nonlinearity, uncertainty explains about 1% of the peak fall in industrial production observed in the 2007-2009 recession.
Piergiorgio Alessandri; Haroon Mumtaz (2014-10)
Keywords: uncertainty, stochastic volatility, financial markets, threshold VAR

Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis
http://d.repec.org/n?u=RePEc:pra:mprapa:59973&r=ore
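A Monte Carlo harness of the kind used in such comparisons is easy to set out. A plain Dickey-Fuller t-test stands in for the Ng-Perron statistic below, purely to keep the sketch short; only the design (a unit root with MA(1) errors of either sign) follows the paper's setup:

```python
import numpy as np

# Rejection frequencies of a unit-root t-test under MA(1) errors.
def df_tstat(y):
    # Dickey-Fuller regression (no constant): delta_y on lagged level
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def rejection_rate(theta, reps=500, T=200, crit=-1.95, seed=4):
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(reps):
        e = rng.standard_normal(T + 1)
        u = e[1:] + theta * e[:-1]         # MA(1) errors
        y = np.cumsum(u)                   # true unit root: size experiment
        count += df_tstat(y) < crit
    return count / reps

size_neg = rejection_rate(-0.8)   # negative MA: large size distortion
size_pos = rejection_rate(0.5)    # positive MA: much closer to nominal size
```

Swapping in different spectral density estimators for the test statistic, as the paper does, amounts to replacing `df_tstat` while keeping this harness fixed.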
Ng and Perron (2001) designed a unit root test which incorporates the properties of the DF-GLS and Phillips-Perron tests. Ng and Perron claim that the test performs exceptionally well, especially in the presence of a negative moving average. However, the performance of the test depends heavily on the choice of spectral density estimator used in its construction. Various spectral density estimators are available in the literature, each with a crucial impact on the output of the test, but there is no clarity on which of them gives optimal size and power properties. This study evaluates the performance of the Ng-Perron test for different choices of spectral density estimator in the presence of negative and positive moving averages, using Monte Carlo simulations. The results for large samples show that: (a) in the presence of a positive moving average, the test with a kernel-based estimator gives good effective power and no size distortion; (b) in the presence of a negative moving average, the autoregressive estimator gives better effective power, but huge size distortions are observed in several specifications of the data generating process.
Malik, Muhammad Irfan; Rehman, Atiq-ur- (2014-11-17)
Keywords: Ng-Perron test, Monte Carlo, spectral density, unit root testing

LinRegInteractive: An R Package for the Interactive Interpretation of Linear Regression Models
http://d.repec.org/n?u=RePEc:bwu:schdps:sdp14014&r=ore
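The numerical approach the package takes to marginal effects can be mimicked in a few lines: evaluate the fitted predictor on a grid (a termplot) and differentiate it numerically, so non-constant effects need no model-specific code. The quadratic model below is an assumed illustration, not part of the package:

```python
import numpy as np

# Numerical marginal effects of a fitted model via central differences.
rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, 400)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + 0.1 * rng.standard_normal(400)

X = np.column_stack([np.ones_like(x), x, x ** 2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]         # OLS fit

def predict(v):
    return coef[0] + coef[1] * v + coef[2] * v ** 2

h = 1e-5
grid = np.linspace(-2, 2, 5)                         # termplot grid
marginal_num = (predict(grid + h) - predict(grid - h)) / (2 * h)
marginal_exact = coef[1] + 2 * coef[2] * grid        # analytic check
```

The central-difference derivative reproduces the analytic marginal effect to numerical precision, which is why a single numerical routine can serve all supported model classes.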
The package provides the generic function fxInteractive() to facilitate the interpretation of various kinds of regression models. It allows the user to observe the effects of variations in metric covariates in an interactive manner by means of termplots for different model classes. Currently linear regression models, generalized linear models, generalized additive models and linear mixed-effects models are supported. Due to the interactive approach, the function provides an intuitive understanding of the mechanics of a particular model and is therefore especially useful for educational purposes. Technically, the package is based on the package rpanel, and the only mandatory argument for the main function is an appropriate fitted-model object. Given this, the linear predictors, the marginal effects and, for generalized linear models, the responses are calculated automatically. For the marginal effects a numerical approach is used to handle non-constant marginal effects automatically. If there are two or more categorical covariates the corresponding effects are presented in a novel way. For publication purposes the user can customize the appearance of the termplots to a large extent. Tables of the effects and marginal effects can be printed to the R console, optionally as copy-and-paste-ready LaTeX code.
Martin Meermeyer (2014-11)

Assessing systemic fragility: A probabilistic perspective
http://d.repec.org/n?u=RePEc:zbw:safewp:70&r=ore
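The marginal building block of such indicators is the default probability implied by a CDS spread. A standard reduced-form back-of-envelope (flat spread, constant hazard, assumed recovery rate) reads the hazard as s/(1-R):

```python
import math

# Default probability implied by a CDS spread under a constant hazard rate.
def cds_implied_pd(spread_bps, recovery=0.4, horizon=5.0):
    hazard = spread_bps / 1e4 / (1.0 - recovery)     # annual default intensity
    return 1.0 - math.exp(-hazard * horizon)         # PD over the horizon

pd_bank = cds_implied_pd(300)     # a 300bp bank CDS (illustrative number)
pd_sov = cds_implied_pd(80)       # an 80bp sovereign CDS (illustrative number)
```

Joint and systemic measures of the kind derived in the paper are built on top of such marginal probabilities by adding a dependence structure across institutions.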
We outline a procedure for consistent estimation of marginal and joint default risk in the euro area financial system. We interpret the latter risk as the intrinsic financial system fragility and derive several systemic fragility indicators for euro area banks and sovereigns, based on CDS prices. Our analysis documents that although the fragility of the euro area banking system had started to deteriorate before Lehman Brothers' bankruptcy filing, investors did not expect the crisis to affect euro area sovereigns' solvency until September 2008. Since then, and especially after November 2009, joint sovereign default risk has outpaced the rise of systemic risk within the banking system.
Radev, Deyan (2014)
Keywords: banking stability, financial distress, tail risk, contagion

A Matlab program and user's guide for the fractionally cointegrated VAR model
http://d.repec.org/n?u=RePEc:qed:wpaper:1330&r=ore
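The defining ingredient of the FCVAR model is the fractional difference operator (1-L)^d. A minimal implementation of its truncated binomial expansion (a generic sketch, not code from the Matlab program) uses the coefficient recursion pi_0 = 1, pi_k = pi_{k-1} (k - 1 - d) / k:

```python
import numpy as np

# Fractional differencing (1-L)^d x_t = sum_k pi_k x_{t-k},
# truncated at the start of the sample.
def frac_diff(x, d):
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k   # binomial coefficient recursion
    return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])
```

Sanity checks: d = 1 reproduces the first difference and d = 0 the identity, while fractional 0 < d < 1 yields the slowly decaying weights that generate long memory.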
This manual describes the usage of the accompanying freely available Matlab program for estimation and testing in the fractionally cointegrated vector autoregressive (FCVAR) model. This program replaces an earlier Matlab program by Nielsen and Morin (2014); although the present program is not compatible with the earlier one, we encourage use of the new program.
Morten Ørregaard Nielsen; Michał Ksawery Popiel (2014-10)
Keywords: cofractional process, cointegration rank, computer program, fractional autoregressive model, fractional cointegration, fractional unit root, Matlab, VAR model

Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments
http://d.repec.org/n?u=RePEc:nbr:nberwo:20622&r=ore
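The SMM side of the comparison can be sketched generically: choose parameters to minimize a weighted distance between observed and simulated moments, holding the simulation draws fixed across evaluations. The toy location-scale model, grid search, and all numbers below are illustrative assumptions, not the paper's education model:

```python
import numpy as np

# Skeleton of a simulated method of moments (SMM) estimator.
rng = np.random.default_rng(6)
data = 1.5 + 0.8 * rng.standard_normal(1000)
m_obs = np.array([data.mean(), data.std()])          # observed moments

# common random numbers: fixed shocks keep the objective smooth in theta
shocks = np.random.default_rng(7).standard_normal(5000)

def smm_objective(theta, W=np.eye(2)):
    mu, sig = theta
    sim = mu + sig * shocks                          # simulate the model
    g = np.array([sim.mean(), sim.std()]) - m_obs    # moment discrepancies
    return g @ W @ g                                 # weighted quadratic norm

# crude grid search stands in for a proper optimizer
grid = [(mu, sig)
        for mu in np.linspace(0.0, 3.0, 61)
        for sig in np.linspace(0.1, 2.0, 39)]
theta_hat = min(grid, key=smm_objective)
```

The number of simulated draws and the weighting matrix W are exactly the kind of tuning parameters whose effect on SMM performance the paper investigates.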
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM.
Phillipp Eisenhauer; James J. Heckman; Stefano Mosso (2014-10)

Dealing with unobservable common trends in small samples: a panel cointegration approach
http://d.repec.org/n?u=RePEc:sas:wpaper:20145&r=ore
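The proposed augmentation with common time dummies is numerically equivalent to subtracting period cross-section means from every variable (a Frisch-Waugh argument). A small simulated panel with an assumed additive common trend shows the effect:

```python
import numpy as np

# Panel regression with an unobserved common trend: time dummies remove it.
rng = np.random.default_rng(8)
N, T = 20, 10
common_trend = np.cumsum(rng.standard_normal(T))    # unobserved common trend
x = rng.standard_normal((N, T)) + common_trend      # regressor loads on trend
y = 2.0 * x + 1.5 * common_trend + 0.1 * rng.standard_normal((N, T))

# subtracting period means == including a full set of time dummies
x_dm = x - x.mean(axis=0)
y_dm = y - y.mean(axis=0)
beta_hat = (x_dm.ravel() @ y_dm.ravel()) / (x_dm.ravel() @ x_dm.ravel())

# pooled OLS without the dummies is biased by the common trend
beta_naive = (x.ravel() @ y.ravel()) / (x.ravel() @ x.ravel())
```

This works here because the trend enters additively; when it enters with unit-specific loadings (interactive effects), the dummies no longer suffice, which is the hypothesis the paper's panel cointegration test is designed to check.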
Nonstationary panel models allowing for unobservable common trends have recently become very popular. However, standard methods, which are based on factor extraction or models augmented with cross-section averages, require large sample sizes, not always available in practice. In these cases we propose the simple and robust alternative of augmenting the panel regression with common time dummies. The underlying assumption of additive effects can be tested by means of a panel cointegration test, with no need to estimate a general interactive effects model. An application to modelling labour productivity growth in the four major European economies (France, Germany, Italy and the UK) illustrates the method.
Francesca Di Iorio; Stefano Fachin (2014-11)
Keywords: common trends, panel cointegration, TFP

Forecasting the Volatility of the Dow Jones Islamic Stock Market Index: Long Memory vs. Regime Switching
http://d.repec.org/n?u=RePEc:zbw:fmpwps:2&r=ore
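The short-memory GARCH(1,1) specification that serves as the baseline in such forecast comparisons has a simple recursion, with multi-step forecasts mean-reverting geometrically to the unconditional variance. The parameter values below are assumed for illustration:

```python
import numpy as np

# GARCH(1,1) variance updating and multi-step forecasting.
omega, alpha, beta = 0.05, 0.08, 0.90          # persistence alpha + beta < 1
uncond = omega / (1 - alpha - beta)            # unconditional variance

def garch_forecast(sigma2_t, r_t, horizon):
    """Variance forecasts for t+1 ... t+horizon."""
    f = [omega + alpha * r_t ** 2 + beta * sigma2_t]   # one-step update
    for _ in range(horizon - 1):
        f.append(omega + (alpha + beta) * f[-1])       # mean reversion
    return np.array(f)

path = garch_forecast(sigma2_t=4.0, r_t=3.0, horizon=200)
```

The geometric decay toward the unconditional variance is exactly the short-memory behavior that the long-memory and Markov-switching multifractal alternatives in the paper relax.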
The financial crisis has fueled interest in alternatives to traditional asset classes that might be less affected by large market gyrations and, thus, provide for a less volatile development of a portfolio. One attempt at selecting stocks that are less prone to extreme risks is observance of Islamic Sharia rules. In this light, we investigate the statistical properties of the Dow Jones Islamic Stock Market Index (DJIM) and explore its volatility dynamics using a number of up-to-date statistical models allowing for long memory and regime-switching dynamics. We find that the DJIM shares all stylized facts of traditional asset classes, and estimation results and forecasting performance for various volatility models are also in line with prevalent findings in the literature. Overall, the relatively new Markov-switching multifractal model performs best under the majority of time horizons and loss criteria. Long memory GARCH-type models always improve upon the short-memory GARCH specification, and additionally allowing for regime changes can further improve their performance.
Ben Nasr, Adnen; Lux, Thomas; Ajmi, Ahdi Noomen; Gupta, Rangan (2014)
Keywords: Islamic finance, volatility dynamics, long memory, multifractals

A Note on Values for Markovian Coalition Processes
http://d.repec.org/n?u=RePEc:hal:journl:halshs-00912889&r=ore
The Shapley value is defined as the average marginal contribution of a player, taken over all possible ways to form the grand coalition N when one starts from the empty coalition and adds players one by one. In a previous paper the authors proposed an allocation scheme for a general model of coalition formation where the evolution of the coalition of active players is ruled by a Markov chain and need not finish at the grand coalition. The aim of this note is to develop some explanations in the general context of discrete-time stochastic processes, exhibit new properties of the model, correct some inaccuracies in the original paper, and give a new version of the axiomatization.
Ulrich Faigle; Michel Grabisch (2013)
Keywords: coalitional game, coalition formation process, Shapley value
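The classical permutation definition that the note's Markov-chain scheme generalizes can be computed directly for small games; the three-player majority game below is a standard textbook example, not one from the note:

```python
import itertools

# Shapley value: average each player's marginal contribution over all
# orders of arrival at the grand coalition.
def shapley(players, v):
    phi = {i: 0.0 for i in players}
    perms = list(itertools.permutations(players))
    for order in perms:
        coalition = set()
        for i in order:
            before = v(frozenset(coalition))
            coalition.add(i)
            phi[i] += v(frozenset(coalition)) - before   # marginal contribution
    return {i: phi[i] / len(perms) for i in players}

# three-player majority game: a coalition wins (value 1) with >= 2 players
values = shapley([1, 2, 3], lambda S: 1.0 if len(S) >= 2 else 0.0)
```

By symmetry each player receives 1/3, and the values sum to v(N) = 1 (efficiency); the note's model replaces the uniform average over arrival orders with a Markov chain over coalitions that need not reach N.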