Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2015-05-22
Edited by Walter Frisch

Estimating rational stock-market bubbles with sequential Monte Carlo methods
http://d.repec.org/n?u=RePEc:cqe:wpaper:4015&r=ore
Considering the present-value stock-price model, we propose a new rational parametric bubble specification that is able to generate periodically recurring and stochastically deflating trajectories. Our bubble model is empirically more plausible than its predecessor variants and has neatly interpretable parameters. We transform our entire stock-price-bubble framework into a nonlinear state-space form and implement a fully-fledged estimation framework based on sequential Monte Carlo methods. This particle-filtering approach, originally stemming from the engineering literature, enables us (a) to obtain accurate parameter estimates, and (b) to reveal the (unobservable) trajectories of arbitrary rational bubble specifications. We fit our new bubble process to artificial and real-world data and demonstrate how to use the parameter estimates to compare important characteristics of historical bubbles that emerged in different stock markets.

Benedikt Rotermann, Bernd Wilfling (2015-05)
Keywords: present-value model, rational bubble, nonlinear state-space model, particle-filter estimation, EM algorithm

Forecasting volatility of wind power production
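The sequential Monte Carlo machinery behind the bubble paper above can be illustrated with a generic bootstrap particle filter. The sketch below is an illustration only, not the authors' bubble specification: it assumes a toy linear-Gaussian AR(1) state observed with noise (all parameter values are invented for the example) and shows the propagate–weight–resample cycle that recovers a latent trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (a stand-in for a latent bubble process):
#   x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)    latent state
#   y_t = x_t + v_t,            v_t ~ N(0, r)    observation
phi, q, r, T = 0.9, 0.5, 1.0, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

def bootstrap_filter(y, phi, q, r, n_particles=2000, seed=1):
    """Sequential importance resampling: propagate particles through the
    state equation, weight by the observation density, then resample."""
    rng = np.random.default_rng(seed)
    means = np.zeros(len(y))
    particles = rng.normal(0.0, 1.0, n_particles)
    for t in range(len(y)):
        # propagate through the state equation
        particles = phi * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # weight by the Gaussian observation density (up to a constant)
        logw = -0.5 * (y[t] - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)          # filtered state estimate
        particles = rng.choice(particles, size=n_particles, p=w)  # resample
    return means

filtered = bootstrap_filter(y, phi, q, r)
rmse_filter = np.sqrt(np.mean((filtered - x) ** 2))
rmse_naive = np.sqrt(np.mean((y - x) ** 2))       # using y directly
```

The resampling step discards low-weight particles and prevents weight degeneracy; the filtered mean tracks the unobservable state more closely than the raw observations do.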
http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-026&r=ore
The increasing share of wind energy in the portfolio of energy sources highlights its uncertainties due to changing weather conditions. To account for the uncertainty in predicting wind power production, this article examines the volatility forecasting abilities of different GARCH-type models for wind power production. Moreover, due to characteristic features of the wind power process, such as heteroscedasticity and nonlinearity, we also investigate the use of a Markov regime-switching GARCH (MRS-GARCH) model for forecasting the volatility of wind power. The realized volatility, which is derived from lower-scale data, serves as a benchmark for the latent volatility. We find that the MRS-GARCH model significantly outperforms traditional GARCH models in predicting the volatility of wind power, while the exponential GARCH model is superior among traditional GARCH models.

Zhiwei Shen, Matthias Ritter (2015-05)
Keywords: wind energy, volatility forecasting, GARCH models, Markov regime-switching, realized volatility

Quantile forecasts of inflation under model uncertainty
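The plain GARCH(1,1) recursion underlying all the models compared above can be sketched in a few lines. This is the single-regime baseline only (not MRS-GARCH or EGARCH), with parameter values invented for illustration; it shows the variance filter and why multi-step forecasts decay geometrically towards the unconditional variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
omega, alpha, beta = 0.1, 0.1, 0.8   # unconditional variance omega/(1-alpha-beta) = 1
T = 1000
r = np.zeros(T)
s2 = np.ones(T)
for t in range(1, T):
    s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.normal()

def garch11_filter(returns, omega, alpha, beta):
    """Recover the conditional variance path from the observed returns."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_forecast(last_r2, last_sigma2, omega, alpha, beta, horizon):
    """Multi-step variance forecast: future squared shocks are replaced by
    their expectations, so the path decays geometrically at rate alpha+beta
    towards the unconditional variance."""
    f = omega + alpha * last_r2 + beta * last_sigma2
    path = [f]
    for _ in range(horizon - 1):
        f = omega + (alpha + beta) * f
        path.append(f)
    return np.array(path)

sigma2 = garch11_filter(r, omega, alpha, beta)
forecast = garch11_forecast(r[-1] ** 2, sigma2[-1], omega, alpha, beta, horizon=200)
```

A regime-switching variant would run one such recursion per regime and mix them with filtered regime probabilities; that mixing is what the paper credits for the forecasting gains.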
http://d.repec.org/n?u=RePEc:pra:mprapa:64341&r=ore
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging methods into quantile regressions, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile regression BMA methods can help reduce uncertainty regarding outcomes of future inflation by providing superior predictive densities compared to mean regression models with and without BMA.

Korobilis, Dimitris (2015-04)
Keywords: Bayesian model averaging; quantile regression; inflation forecasts; fan charts

FloGARCH: Realizing long memory and asymmetries in returns volatility
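The building block of any quantile regression, Bayesian or not, is the check (pinball) loss. As a minimal sketch, unrelated to the paper's BMA machinery and using simulated stand-in data, the snippet below verifies that minimising the mean check loss over a constant recovers the empirical tau-quantile — the property that lets quantile regressions target individual quantiles of inflation.

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss of quantile regression:
    tau*(y-q) when y >= q, and (tau-1)*(y-q) otherwise."""
    u = y - q
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 5000)   # simulated stand-in for inflation outcomes
tau = 0.9

# Minimise the mean check loss over a grid of candidate constants
grid = np.linspace(-3.0, 3.0, 1201)
losses = np.array([pinball_loss(y, g, tau) for g in grid])
best = grid[np.argmin(losses)]
empirical_q = np.quantile(y, tau)   # the minimiser should match this
```

Replacing the constant with a linear index x'b and averaging over predictor sets (with posterior model weights) gives the quantile-regression BMA forecasts the abstract describes.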
http://d.repec.org/n?u=RePEc:nbb:reswpp:201504-280&r=ore
We introduce the class of FloGARCH models in this paper. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.

Harry Vander Elst (2015-04)
Keywords: Realized GARCH models, high-frequency data, long memory, realized measures

Robustly Strategic Consumption-Portfolio Rules with Informational Frictions
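The "realized measures" the FloGARCH class is built on are computed from intraday returns. As a generic sketch (not the FloGARCH model itself, and with an invented one-day simulation standing in for transaction data), the simplest such measure is the realized variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# One trading day of one-minute log-returns with a known daily variance
n_intraday = 390
true_daily_var = 1.5e-4
r = rng.normal(0.0, np.sqrt(true_daily_var / n_intraday), n_intraday)

def realized_variance(intraday_returns):
    """Sum of squared intraday returns: a consistent estimator of the
    day's integrated variance as the sampling frequency increases."""
    return float(np.sum(intraday_returns ** 2))

rv = realized_variance(r)
```

Realized-GARCH-type models then link a daily sequence of such measures to the latent conditional variance; long memory and leverage asymmetries are features of that linking dynamic, not of the measure itself.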
http://d.repec.org/n?u=RePEc:pra:mprapa:64312&r=ore
This paper provides a tractable continuous-time constant absolute risk aversion (CARA)-Gaussian framework to explore how the interactions of fundamental uncertainty, model uncertainty due to a preference for robustness (RB), and state uncertainty due to information-processing constraints (rational inattention, or RI) affect strategic consumption-portfolio rules and precautionary savings in the presence of uninsurable labor income. Specifically, after solving the model explicitly, I compute and compare the elasticities of strategic asset allocation and precautionary savings with respect to risk aversion, robustness, and inattention. Furthermore, for plausibly estimated and calibrated model parameters, I quantitatively analyze how the interactions of model uncertainty and state uncertainty affect the optimal share invested in the risky asset, and show that they can provide a potential explanation for the observed stockholding behavior of households with different education and income levels.

Luo, Yulei (2015)
Keywords: robustness, model uncertainty, rational inattention, uninsurable labor income, strategic asset allocation, precautionary savings

The Impact of Jumps and Leverage in Forecasting Co-Volatility
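For orientation, the frictionless textbook CARA-Gaussian portfolio rule — which the paper's robustness and inattention frictions then modify — is a one-liner. This is only the standard benchmark with illustrative numbers, not the paper's model:

```python
def cara_risky_demand(mu, rf, gamma, sigma):
    """Frictionless CARA-Gaussian benchmark: the dollar demand for the
    risky asset is (mu - rf) / (gamma * sigma**2), independent of wealth."""
    return (mu - rf) / (gamma * sigma ** 2)

# Illustrative numbers: 4% equity premium, 20% volatility, risk aversion 2
demand = cara_risky_demand(mu=0.06, rf=0.02, gamma=2.0, sigma=0.2)        # 0.5
more_averse = cara_risky_demand(mu=0.06, rf=0.02, gamma=4.0, sigma=0.2)   # halved
```

Robustness effectively raises the agent's fear of model misspecification and inattention adds noise to the perceived state; both push demand away from this benchmark, which is how the paper links the frictions to observed stockholding patterns.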
http://d.repec.org/n?u=RePEc:ucm:doicae:1502&r=ore
The paper investigates the impact of jumps in forecasting co-volatility, accommodating leverage effects. We modify the jump-robust two-time-scale covariance estimator of Boudt and Zhang (2013) such that the estimated matrix is positive definite. Using this approach we can disentangle the estimates of the integrated co-volatility matrix and jump variations from the quadratic covariation matrix. Empirical results for three stocks traded on the New York Stock Exchange indicate that the co-jumps of two assets have a significant impact on future co-volatility, but that the impact is negligible at weekly and monthly forecast horizons.

Manabu Asai, Michael McAleer (2015-02)
Keywords: co-volatility; forecasting; jump; leverage effects; realized covariance; threshold estimation

No such thing like perfect hammer: comparing different objective function specifications for optimal control
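The positive-definiteness problem the paper addresses can be made concrete with a generic sketch. The snippet below is not the Boudt–Zhang modification; it shows a plain realized covariance (positive semi-definite by construction) and one simple eigenvalue-clipping repair for an indefinite symmetric matrix of the kind that jump adjustments can produce (the example matrix is invented).

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Realized covariance: sum of outer products of intraday return vectors;
    positive semi-definite by construction."""
    return sum(np.outer(r, r) for r in intraday_returns)

def clip_to_psd(m, floor=0.0):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues — one simple way to repair an estimate."""
    sym = (m + m.T) / 2.0
    vals, vecs = np.linalg.eigh(sym)
    return vecs @ np.diag(np.clip(vals, floor, None)) @ vecs.T

# A plain realized covariance from simulated one-minute return vectors
rets = np.random.default_rng(0).normal(0.0, 0.01, size=(390, 3))
rc = realized_covariance(rets)

# An invented symmetric but indefinite matrix, as can arise after
# subtracting jump variation components
m = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, -0.9],
              [0.9, -0.9, 1.0]])
repaired = clip_to_psd(m)
```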
http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2015-005&r=ore
The linear-quadratic (LQ) optimization is a close-to-standard technique in the optimal control framework. LQ is very well researched and there are many extensions for more sophisticated scenarios like nonlinear models. Usually, the quadratic objective function is taken as a prerequisite for calculating derivative-based solutions of optimal control problems. However, it is not clear whether this framework is as universal as it is considered. In particular, we address the question of whether the objective function specification and the corresponding penalties applied are well suited in the case of a large exogenous shock, such as the one an economy can experience during the European debt crisis. While one can still efficiently minimize quadratic deviations in state and control variables around policy targets, the economy itself has to go through a period of turbulence with economic indicators, such as unemployment, inflation or public debt, changing considerably over time. In this study we test four alternative designs of the objective function: a least-median-of-squares-based approach, absolute deviations, and cubic and quartic objective functions. The analysis is performed on a small-scale model of the Austrian economy and finds that there is a certain trade-off between quickly finding an optimal solution using the LQ technique (reaching defined policy targets) and accounting for alternative objectives, such as limiting volatility in economic performance.

Dmitri Blueschke, Ivan Savin (2015-05-04)
Keywords: differential evolution, nonlinear optimization, optimal control, least median of squares, cubic optimization, quartic optimization

Forecasting Euro Area Macroeconomic Variables with Bayesian Adaptive Elastic Net
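Why the choice of objective matters under a crisis-sized shock can be seen by comparing how the candidate penalty shapes weight a large deviation against a small one. The numbers below are illustrative only (not taken from the paper's Austrian model):

```python
import numpy as np

# Deviations of a target variable (e.g. unemployment) from its policy target;
# the last entry mimics a crisis-sized shock
dev = np.array([0.1, 0.5, 4.0])

penalties = {
    "absolute": np.abs(dev),
    "quadratic": dev ** 2,
    "quartic": dev ** 4,
}

# How much more heavily is the crisis deviation penalised than the small one?
ratios = {name: p[-1] / p[0] for name, p in penalties.items()}
# absolute: 40x, quadratic: 1,600x, quartic: 2,560,000x
```

A quartic objective concentrates almost all the penalty on the crisis period, while absolute deviations treat all periods far more evenly — which is exactly the trade-off between hitting targets quickly and limiting volatility that the study explores (the non-quadratic cases then require derivative-free optimisers such as differential evolution).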
http://d.repec.org/n?u=RePEc:knz:dpteco:1512&r=ore
I use the adaptive elastic net in a Bayesian framework and test its forecasting performance against the lasso, adaptive lasso and elastic net (all used in a Bayesian framework) in a series of simulations, as well as in an empirical exercise for macroeconomic Euro area data. The results suggest that the elastic net is the best model among the four Bayesian methods considered. The adaptive lasso, on the other hand, shows the worst forecasting performance. The lasso is generally better than the adaptive lasso, but worse than the adaptive elastic net. The differences in the performance of these models become especially large when the number of regressors grows considerably relative to the number of available observations. The results point to the fact that the ridge regression component in the elastic net is responsible for its improvement in forecasting performance over the lasso. The adaptive shrinkage in some of the models does not seem to play a major role, and may even lead to a deterioration of the performance.

Sandra Stankiewicz (2015-05-13)
Keywords: elastic net, lasso, Bayesian, forecasting

Strategic behaviour in Schelling dynamics: Theory and experimental evidence
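To make the elastic net's two shrinkage components concrete, here is a frequentist coordinate-descent sketch (the paper works with Bayesian versions; this is only the underlying penalty, on invented simulated data). The l1 part enters through soft-thresholding and the ridge part through the update's denominator:

```python
import numpy as np

def elastic_net_cd(X, y, lam, alpha, n_sweeps=200):
    """Coordinate descent for the elastic net:
       min_b  1/(2n)||y - Xb||^2 + lam*alpha*||b||_1 + lam*(1-alpha)/2*||b||^2
    alpha=1 gives the lasso, alpha=0 ridge regression."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            resid_j = y - X @ b + X[:, j] * b[j]     # partial residual
            rho = X[:, j] @ resid_j / n
            # soft-thresholding handles the l1 term; the ridge term
            # shrinks through the denominator
            b[j] = np.sign(rho) * max(abs(rho) - lam * alpha, 0.0) \
                   / (col_sq[j] + lam * (1 - alpha))
    return b

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
b_true = np.zeros(p)
b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + rng.normal(0.0, 0.5, n)

b_hat = elastic_net_cd(X, y, lam=0.1, alpha=0.5)
```

The ridge component stabilises estimates when regressors are many and correlated — the regime in which the abstract reports the largest performance gaps.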
http://d.repec.org/n?u=RePEc:eec:wpaper:1504&r=ore
In this paper we experimentally test Schelling's (1971) segregation model and confirm the striking result of segregation. In addition, we extend Schelling's model theoretically by adding strategic behaviour and moving costs. We obtain a unique subgame perfect equilibrium in which rational agents facing moving costs may find it optimal not to move (anticipating other participants' movements). This equilibrium is far from full segregation. We run experiments for this extended Schelling model, and find that the percentage of fully segregated societies notably decreases with the cost of moving and that the degree of segregation depends on the distribution of strategic subjects.

Juan M. Benito-Ostolaza, Pablo Brañas-Garza, Penélope Hernández (2015-05)
Keywords: subgame perfect equilibrium, segregation, experimental games

A Two-Step Estimator for Missing Values in Probit Model Covariates
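The myopic baseline that the paper extends can be sketched as a minimal one-dimensional Schelling simulation. All details below (ring topology, neighbourhood radius, 50%-similarity threshold, random relocation) are illustrative choices, and the sketch deliberately omits the paper's additions — strategic anticipation and moving costs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two agent types (0 and 1) on a ring with some empty cells (-1)
N = 50
grid = rng.permutation(np.array([0] * 20 + [1] * 20 + [-1] * 10))

def same_type_share(grid, i, radius=2):
    """Share of same-type agents among the occupied neighbours of cell i."""
    nb = [grid[(i + d) % len(grid)] for d in range(-radius, radius + 1) if d != 0]
    nb = [t for t in nb if t != -1]
    return sum(t == grid[i] for t in nb) / len(nb) if nb else 1.0

def step(grid, threshold=0.5):
    """Pick one unhappy agent at random and move it to a random empty cell."""
    unhappy = [i for i in range(len(grid))
               if grid[i] != -1 and same_type_share(grid, i) < threshold]
    empty = [i for i in range(len(grid)) if grid[i] == -1]
    if unhappy and empty:
        i = unhappy[rng.integers(len(unhappy))]
        j = empty[rng.integers(len(empty))]
        grid[j], grid[i] = grid[i], -1
    return grid

def mean_similarity(grid):
    """Average same-type neighbour share over occupied cells."""
    occupied = [i for i in range(len(grid)) if grid[i] != -1]
    return float(np.mean([same_type_share(grid, i) for i in occupied]))

before = mean_similarity(grid)
for _ in range(2000):
    grid = step(grid)
after = mean_similarity(grid)   # typically rises well above `before`
```

Even with this mild 50% threshold, the unhappy-agents-move rule typically drives mean similarity up towards segregated clusters — the striking result the experiments confirm. The paper's strategic agents, who anticipate others' moves and pay moving costs, can rationally stay put, which is what breaks full segregation in equilibrium.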
http://d.repec.org/n?u=RePEc:hhs:oruesi:2015_003&r=ore
This paper includes a simulation study on the bias and MSE properties of a two-step probit model estimator for handling missing values in covariates by conditional imputation. In one smaller simulation it is compared with an asymptotically efficient estimator, and in one larger simulation it is compared with the probit ML on complete cases after listwise deletion. The simulation results favor the use of the two-step probit estimator and motivate further development of the methodology.

Laitila, Thomas; Wang, Lisha (2015-04-27)
Keywords: binary variable; imputation; OLS; heteroskedasticity
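The two-step idea — conditionally impute the missing covariate, then run probit ML — can be sketched against the listwise-deletion benchmark. Everything below is a generic illustration under invented simulation parameters, not the authors' exact estimator or design: step 1 uses an OLS conditional-mean imputation, step 2 a textbook Fisher-scoring probit.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def norm_cdf(x):
    erf = np.vectorize(math.erf)
    return 0.5 * (1.0 + erf(np.asarray(x, dtype=float) / math.sqrt(2.0)))

def probit_irls(X, y, n_iter=25):
    """Probit maximum likelihood via Fisher scoring (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ b
        Phi = np.clip(norm_cdf(eta), 1e-10, 1 - 1e-10)
        phi = np.exp(-0.5 * eta ** 2) / math.sqrt(2 * math.pi)
        score = X.T @ (phi * (y - Phi) / (Phi * (1 - Phi)))
        W = phi ** 2 / (Phi * (1 - Phi))
        b = b + np.linalg.solve(X.T @ (X * W[:, None]), score)
    return b

# Simulate probit data where covariate x2 is missing at random
n = 2000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(0.0, 0.6, n)
y = ((-0.2 + x1 + x2 + rng.normal(size=n)) > 0).astype(float)
miss = rng.random(n) < 0.4               # 40% of x2 unobserved

# Step 1: conditional imputation — OLS of x2 on x1 over complete cases
Z = np.column_stack([np.ones(n), x1])
coef, *_ = np.linalg.lstsq(Z[~miss], x2[~miss], rcond=None)
x2_filled = np.where(miss, Z @ coef, x2)

# Step 2: probit ML on the imputed data set (all n observations)
b_twostep = probit_irls(np.column_stack([np.ones(n), x1, x2_filled]), y)

# Benchmark: listwise deletion keeps only the complete cases
X_cc = np.column_stack([np.ones(n), x1, x2])[~miss]
b_listwise = probit_irls(X_cc, y[~miss])
```

The two-step estimator keeps all n observations, at the cost of the imputation-induced heteroskedasticity flagged in the keywords; the paper's simulations quantify that bias/MSE trade-off.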