Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
Operations Research
2017-07-23
Identification and Decompositions in Probit and Logit Models
http://d.repec.org/n?u=RePEc:inh:wpaper:2017-8&r=ore
Probit and logit models typically require a normalization on the error variance for model identification. This paper shows that in the context of sample mean probability decompositions, error variance normalizations preclude estimation of the effects of group differences in the latent variable model parameters. An empirical example is provided for a model in which the error variances are identified. This identification allows the effects of group differences in the latent variable model parameters to be estimated.
Chung Choe
SeEun Jung
Ronald L. Oaxaca
Decompositions, Probit, Logit, Identification
2017-07
Sufficient conditions of stochastic dominance for general transformations and its application in option strategy
http://d.repec.org/n?u=RePEc:zbw:ifwedp:201740&r=ore
A counterexample is presented to show that the sufficient condition for one transformation dominating another by second-degree stochastic dominance, proposed in Theorem 5 of Levy (Stochastic dominance and expected utility: Survey and analysis, 1992), does not hold. Then, by restricting the dominating transformation to be monotone, a revised exact sufficient condition for one transformation dominating another is given. Next, the stochastic dominance criteria proposed by Meyer (Stochastic dominance and transformations of random variables, 1989) and developed by Levy (1992) are extended to general transformations. Moreover, these criteria are further generalized to transformations of discrete random variables. Finally, the authors apply this method to analyze the transformations that result from holding a stock together with the corresponding call option.
Gao, Jianwei
Zhao, Feng
stochastic dominance, transformation, utility theory, option strategy
2017
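The second-degree criterion discussed in the abstract above can be checked mechanically for discrete random variables. The sketch below is our own illustration of the standard test (function name, tolerance, and example are ours, not the paper's): p second-degree dominates q when the running integral of F_p − F_q is nonpositive at every point of the common sorted support.

```python
import numpy as np

def ssd_dominates(x, p, q):
    """Check whether distribution p second-degree stochastically dominates
    distribution q on the common, sorted support x.

    p SSD-dominates q iff the integral of (F_p - F_q) up to every point is
    <= 0, i.e. every risk-averse expected-utility maximizer weakly prefers p.
    Since the CDFs are step functions, checking at support points suffices."""
    x, p, q = (np.asarray(a, dtype=float) for a in (x, p, q))
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    widths = np.diff(x)  # lengths of the intervals between support points
    integral = np.cumsum((Fp - Fq)[:-1] * widths)
    return bool(np.all(integral <= 1e-12))

# Example: a degenerate payoff at the mean SSD-dominates a
# mean-preserving spread, but not the other way around.
print(ssd_dominates([0, 1, 2], [0, 1, 0], [0.5, 0, 0.5]))   # True
print(ssd_dominates([0, 1, 2], [0.5, 0, 0.5], [0, 1, 0]))   # False
```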
Making the Grossman Model Stochastic: Investment in Health as a Stochastic Control Problem
http://d.repec.org/n?u=RePEc:cch:wpaper:170009&r=ore
It is well known that uncertainty is a key consideration in theoretical health economics. The literature has shown that uncertainty is a multifaceted concept, with the individual's optimal response depending on the formal nature of the uncertainty and the time horizon involved. This paper extends the literature by considering uncertainty with regard to the cumulative effect on health capital of ongoing health behaviours. It uses techniques of stochastic optimal control to analyze uncertainty that can be represented as a Wiener process and shows how, in a Grossman health investment framework, the optimal lifetime health investment trajectory might be affected.
Audrey Laporte
Brian Ferguson
2017-07
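A state equation of the kind described above can be simulated directly. The following sketch uses a stylized specification of our own, not the authors' model: health capital follows dH = (I(t) − δH) dt + σ dW, discretized by Euler-Maruyama.

```python
import numpy as np

def simulate_health(h0, invest, delta, sigma, T, dt, rng):
    """Euler-Maruyama simulation of a stylized stochastic health-capital
    path dH = (I(t) - delta*H) dt + sigma dW, where invest(t) is the
    health-investment policy and delta is the depreciation rate."""
    n = round(T / dt)
    h = np.empty(n + 1)
    h[0] = h0
    for t in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        h[t + 1] = h[t] + (invest(t * dt) - delta * h[t]) * dt + sigma * dw
    return h

# Sanity check: with sigma = 0 and H(0) at the steady state I/delta,
# the deterministic path stays flat.
rng = np.random.default_rng(0)
path = simulate_health(5.0, lambda t: 0.5, 0.1, 0.0, 1.0, 0.01, rng)
```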
Regional inequality in decentralized countries: a multi-country analysis using LIS
http://d.repec.org/n?u=RePEc:lis:liswps:697&r=ore
The aim of this paper is to analyze the regional disparities of six decentralized countries using LIS microdata. In order to determine the extent to which the territorial dimension explains income inequality, we carry out two complementary analyses. On the one hand, we perform the classical decomposition by population subgroups of different inequality measures. On the other hand, we implement a semiparametric decomposition analysis based on the method proposed by DiNardo, Fortin and Lemieux.
Javier Martín-Román
Luis Ayala
Juan Vicente
income inequality, regional inequality, subgroup decomposition, semiparametric decomposition
2017-05
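The classical subgroup decomposition mentioned in the abstract above has a closed form for generalized-entropy measures. As a minimal sketch (using the Theil T index; function and variable names are ours), total inequality splits exactly into a within-region and a between-region term:

```python
import numpy as np

def theil_t(y):
    """Theil T index of a positive income vector y."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    return float(np.mean(y / mu * np.log(y / mu)))

def theil_decompose(y, groups):
    """Exact subgroup decomposition of Theil T:
    T = sum_g s_g * T_g  +  sum_g s_g * log(mu_g / mu),
    where s_g is group g's share of total income, T_g the within-group
    Theil index, and mu_g the group mean."""
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    mu, total = y.mean(), y.sum()
    within = between = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        s_g = yg.sum() / total          # income share of group g
        within += s_g * theil_t(yg)
        between += s_g * np.log(yg.mean() / mu)
    return within, between
```

The identity is exact, so within + between reproduces the overall index; the between term is nonnegative by convexity of x log x.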
A Durbin-Levinson Regularized Estimator of High Dimensional Autocovariance Matrices
http://d.repec.org/n?u=RePEc:rtv:ceisrp:410&r=ore
We consider the problem of estimating the high-dimensional autocovariance matrix of a stationary random process, with the purpose of out-of-sample prediction and feature extraction. Several solutions to this problem have been proposed. In the nonparametric framework, the literature has concentrated on banding and tapering the sample autocovariance matrix. This paper proposes and evaluates an alternative approach based on regularizing the sample partial autocorrelation function, via a modified Durbin-Levinson algorithm that receives as input the banded and tapered partial autocorrelations and returns a sample autocovariance sequence which is positive definite. We show that the regularized estimator of the autocovariance matrix is consistent and establish its convergence rate. We then focus on constructing the optimal linear predictor and assess its properties. The computational complexity of the estimator is of the order of the square of the banding parameter, which renders our method scalable for high-dimensional time series. The performance of the autocovariance estimator and the corresponding linear predictor is evaluated by simulation and empirical applications.
Tommaso Proietti
Alessandro Giovannelli
Toeplitz systems; Optimal linear prediction; Partial autocorrelation function
2017-07-17
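The device described in the abstract above — mapping a regularized partial autocorrelation sequence back to autocovariances through a Durbin-Levinson-type recursion, so that positive definiteness is automatic — can be sketched as follows. This is a simplified illustration of the recursion only; the paper's estimator additionally bands and tapers the sample PACF before applying it.

```python
import numpy as np

def pacf_to_acov(pacf, gamma0=1.0):
    """Map a partial autocorrelation sequence (entries in (-1, 1)) to an
    autocovariance sequence gamma[0..K] via the Durbin-Levinson recursion.
    Any such PACF yields a positive-definite Toeplitz autocovariance
    matrix, which is what makes regularizing in the PACF domain safe."""
    K = len(pacf)
    gamma = np.empty(K + 1)
    gamma[0] = gamma0
    phi = np.empty(0)   # AR coefficients of the current order
    v = gamma0          # innovation (prediction-error) variance
    for k, kappa in enumerate(pacf):
        # Next autocovariance from the order-k coefficients:
        # gamma[k+1] = kappa * v_k + sum_j phi_{k,j} * gamma[k+1-j]
        gamma[k + 1] = kappa * v + phi @ gamma[k:0:-1]
        # Levinson update of the AR coefficients and innovation variance
        phi = np.concatenate([phi - kappa * phi[::-1], [kappa]])
        v *= 1.0 - kappa ** 2
    return gamma

# AR(1) check: pacf = (0.5, 0, 0) gives gamma_h = 0.5**h.
g = pacf_to_acov([0.5, 0.0, 0.0])
```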
Regulatory Learning: how to supervise machine learning models? An application to credit scoring
http://d.repec.org/n?u=RePEc:mse:cesdoc:17034&r=ore
The arrival of Big Data strategies is threatening the latest trends in financial regulation, which favour simpler models and greater comparability of the approaches chosen by financial institutions. Indeed, the intrinsically dynamic philosophy of Big Data strategies is almost incompatible with the current legal and regulatory framework, as illustrated in this paper. Besides, as presented in our application to credit scoring, the model selection may also evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies for switching from one model to another, and supervisory approaches that allow financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues raised by the Big Data environment, and in particular by machine learning models, highlighting the tensions in the current framework between data flows, the model selection process, and the necessity to generate appropriate outcomes.
Dominique Guegan
Bertrand Hassani
Financial Regulation; Algorithm; Big Data; Risk
2017-07
Estimating and accounting for the output gap with large Bayesian vector autoregressions
http://d.repec.org/n?u=RePEc:een:camaaa:2017-46&r=ore
We demonstrate how Bayesian shrinkage can address problems with utilizing large information sets to calculate trend and cycle via a multivariate Beveridge-Nelson (BN) decomposition. We illustrate our approach by estimating the U.S. output gap with large Bayesian vector autoregressions that include up to 138 variables. Because the BN trend and cycle are linear functions of historical forecast errors, we are also able to account for the estimated output gap in terms of different sources of information, as well as particular underlying structural shocks given identification restrictions. Our empirical analysis suggests that, in addition to output growth, the unemployment rate, CPI inflation, and, to a lesser extent, housing starts, consumption, stock prices, real M1, and the federal funds rate are important conditioning variables for estimating the U.S. output gap, with estimates largely robust to incorporating additional variables. Using standard identification restrictions, we find that the role of monetary policy shocks in driving the output gap is small, while oil price shocks explain about 10% of the variance over different horizons.
James Morley
Benjamin Wong
Beveridge-Nelson decomposition, output gap, Bayesian estimation, multivariate information
2017-07
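Because the BN trend is the long-horizon conditional forecast net of deterministic drift, the cycle is a linear function of current forecast errors, as the abstract above notes. A univariate sketch (an AR(1) in growth rates — far simpler than the paper's large Bayesian VARs, and purely our illustration) makes the mechanics concrete: the cycle has the closed form c_t = −φ/(1−φ) · (Δy_t − μ).

```python
import numpy as np

def bn_cycle_ar1(y, phi=None):
    """Beveridge-Nelson cycle of a series y whose first difference follows
    an AR(1) with coefficient phi. The BN trend is
    tau_t = lim_h E_t[y_{t+h} - h*mu], which for this model gives the
    cycle c_t = -phi/(1-phi) * (dy_t - mu)."""
    dy = np.diff(np.asarray(y, dtype=float))
    x = dy - dy.mean()
    if phi is None:
        # OLS estimate of the AR(1) coefficient on demeaned growth
        phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return -phi / (1.0 - phi) * x

# With phi = 0.5, -phi/(1-phi) = -1, so the cycle is minus the
# demeaned growth rate.
c = bn_cycle_ar1([0.0, 1.0, 3.0, 6.0], phi=0.5)
```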
Testing a parametric transformation model versus a nonparametric alternative
http://d.repec.org/n?u=RePEc:lec:leecon:17/15&r=ore
Despite an abundance of semiparametric estimators of the transformation model, no procedure has yet been proposed to test the hypothesis that the transformation function belongs to a finite-dimensional parametric family against a nonparametric alternative. In this paper we introduce a bootstrap test based on the integrated squared distance between a nonparametric estimator and a parametric null. As a special case, our procedure can be used to test the parametric specification of the integrated baseline hazard in a semiparametric mixed proportional hazard (MPH) model. We investigate the finite-sample performance of our test in a Monte Carlo study. Finally, we apply the proposed test to Kennan's strike duration data.
Arkadiusz Szydłowski
Specification testing, Transformation model, Duration model, Bootstrap, Rank estimation
2017-07
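The general recipe in the abstract above — compare a nonparametric estimate with the fitted parametric null through an integrated squared distance, then calibrate the statistic by bootstrap — can be sketched in a much simpler setting. The code below is our illustration of that principle with a CDF-based, Cramér-von Mises-type statistic and a parametric bootstrap; it is not the paper's transformation-model test, and all names are ours.

```python
import numpy as np

def isd_stat(x, cdf):
    """Integrated squared distance between the empirical CDF and a fitted
    parametric CDF, integrated against the empirical measure."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    ecdf = np.arange(1, n + 1) / n
    return n * np.mean((ecdf - cdf(xs)) ** 2)

def bootstrap_pvalue(x, fit, cdf, sampler, B=499, seed=0):
    """Parametric bootstrap p-value: refit the null model in each bootstrap
    sample (so the statistic's null distribution reflects estimation error)
    and compare the resampled statistics with the observed one."""
    rng = np.random.default_rng(seed)
    theta = fit(x)
    t_obs = isd_stat(x, lambda s: cdf(s, theta))
    t_boot = np.empty(B)
    for b in range(B):
        xb = sampler(theta, len(x), rng)   # draw from the fitted null
        tb = fit(xb)                       # refit under the null
        t_boot[b] = isd_stat(xb, lambda s: cdf(s, tb))
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)

# Usage: testing an exponential null by maximum likelihood.
fit = lambda s: 1.0 / np.mean(s)                       # MLE of the rate
cdf = lambda s, lam: 1.0 - np.exp(-lam * s)
sampler = lambda lam, n, rng: rng.exponential(1.0 / lam, n)
```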
War and Conflict in Economics: Theories, Applications, and Recent Trends
http://d.repec.org/n?u=RePEc:pra:mprapa:80277&r=ore
We review the main economic models of war and conflict. These models vary in details, but their implications are qualitatively consistent, highlighting key commonalities across a variety of conflict settings. Recent empirical literature, employing both laboratory and field data, in many cases confirms the basic implications of conflict theory. However, this literature also presents important challenges to the way economists traditionally model conflict. We finish our review by suggesting ways to address these challenges.
Kimbrough, Erik
Laughren, Kevin
Sheremeta, Roman
conflict, war, contest, all-pay auction, war of attrition
2017-07-19