NEP: New Economics Papers on Computational Economics
Issue of 2017‒07‒16
seven papers chosen by
By: Guopeng Song; Daniel Kowalczyk; Roel Leus
Abstract: We define and solve the robust machine availability problem in a parallel machine environment, which aims to minimize the number of identical machines required while completing all the jobs before a given deadline. Our formulation preserves a user-defined robustness level regarding possible deviations in the job durations. For better computational performance, a branch-and-price procedure is proposed based on a set covering reformulation. We use zero-suppressed binary decision diagrams (ZDDs) for solving the pricing problem, which enable us to manage the difficulty entailed by the robustness considerations as well as by the extra constraints imposed by branching decisions. Computational results show the effectiveness of a pricing solver with ZDDs compared with a MIP solver. (An illustrative sketch of a robust feasibility check follows this entry.)
Keywords: Parallel machine scheduling, Machine availability, Robust optimization, Branch and price, ZDD
Date: 2017–06
URL: http://d.repec.org/n?u=RePEc:ete:kbiper:585093&r=cmp
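The abstract does not spell out the uncertainty model, but "a user-defined robustness level regarding possible deviations in the job durations" is consistent with budgeted (Gamma-)robustness in the Bertsimas-Sim style. The sketch below is a minimal illustration under that assumption, not the paper's branch-and-price/ZDD machinery: a set of jobs fits on one machine if the nominal durations plus the Gamma largest deviations finish before the deadline, and a first-fit heuristic then gives a quick upper bound on the machine count. The function names and the toy data are invented for illustration.

```python
# Minimal sketch of a budgeted-robustness feasibility check for one machine,
# assuming Bertsimas-Sim style uncertainty (an assumption; the paper's exact
# model and its branch-and-price/ZDD pricing are not reproduced here).

def robust_fits(jobs, gamma, deadline):
    """jobs: list of (nominal_duration, max_deviation) pairs.
    Feasible if nominal load plus the gamma largest deviations <= deadline."""
    nominal = sum(p for p, _ in jobs)
    worst_devs = sorted((d for _, d in jobs), reverse=True)[:gamma]
    return nominal + sum(worst_devs) <= deadline

def first_fit_machines(jobs, gamma, deadline):
    """Greedy first-fit: a quick upper bound on the number of machines."""
    machines = []
    for job in sorted(jobs, key=lambda j: j[0] + j[1], reverse=True):
        for m in machines:
            if robust_fits(m + [job], gamma, deadline):
                m.append(job)
                break
        else:
            machines.append([job])
    return len(machines)

# Toy instance: (nominal duration, possible deviation) per job.
jobs = [(4, 2), (3, 1), (5, 3), (2, 1), (6, 2), (3, 2)]
print(first_fit_machines(jobs, gamma=2, deadline=10))  # -> 4 machines
```

The exact method would instead price robust-feasible job sets as columns of the set covering master problem; the heuristic above only bounds the optimum from above.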
By: B. Garbinti; J. Goupille-Lebret; T. Piketty
Abstract: This paper combines different sources and methods (income tax data, inheritance registers, national accounts, wealth surveys) in order to deliver consistent, unified wealth distribution series for France over the 1800-2014 period. We find a large decline of the top 10% wealth share from the 1910s to the 1980s, mostly to the benefit of the middle 40% of the distribution. Since the 1980s-90s, we observe a moderate rise of wealth concentration, with large fluctuations due to asset price movements. In effect, rising inequality in saving rates and rates of return pushes toward rising wealth concentration, in spite of the countervailing effect of housing prices. We develop a simple simulation model highlighting how the combination of unequal saving rates, rates of return and labor earnings leads to large multiplicative effects and high steady-state wealth concentration. Small changes in the key parameters matter a great deal for long-run inequality. We discuss the conditions under which rising concentration is likely to continue in the coming decades. (A toy version of the accumulation mechanism is sketched after this entry.)
Keywords: saving rate, steady-state, wealth inequality
JEL: D31 E21 N34
Date: 2017
URL: http://d.repec.org/n?u=RePEc:bfr:banfra:633&r=cmp
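The paper's simulation model is not reproduced in the abstract; the following is a minimal two-group sketch of the mechanism it describes: when the top group saves at a higher rate and earns a higher return, small parameter gaps compound multiplicatively into a high steady-state top wealth share. All parameter values are illustrative stand-ins, not the paper's estimates.

```python
# Toy two-group wealth accumulation: w' = ((1 + r) * w + s * y) / (1 + g),
# where g is aggregate growth (g must exceed both returns for this toy to
# converge to a finite steady state). Parameters are illustrative only.

def steady_state_top10_share(s_top, s_rest, r_top, r_rest, y_top, y_rest,
                             g=0.06, years=2000):
    w_top, w_rest = 1.0, 1.0  # per-capita wealth of the top 10% / bottom 90%
    for _ in range(years):
        w_top = ((1 + r_top) * w_top + s_top * y_top) / (1 + g)
        w_rest = ((1 + r_rest) * w_rest + s_rest * y_rest) / (1 + g)
    # aggregate share held by 10% of households vs the remaining 90%
    return 0.1 * w_top / (0.1 * w_top + 0.9 * w_rest)

base = steady_state_top10_share(0.25, 0.08, 0.05, 0.03, 3.0, 1.0)
# a small widening of the saving-rate and return gaps moves the steady state a lot
shifted = steady_state_top10_share(0.30, 0.06, 0.055, 0.025, 3.0, 1.0)
print(f"top-10% wealth share: {base:.2f} -> {shifted:.2f}")  # ~0.76 -> ~0.92
```

The jump from roughly 0.76 to 0.92 under modest parameter changes illustrates the abstract's point that steady-state concentration is highly sensitive to the saving-rate and return gaps.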
By: Roberto Roson (Department of Economics, University Of Venice Cà Foscari and IEFE, Bocconi University, Milan)
Abstract: This work analyzes some system-wide macroeconomic consequences of lower (sustainable) water availability, when global economic growth is postulated according to the Shared Socio-Economic Pathway 1 (SSP1), for the reference year 2050. After finding that the rather optimistic forecasts of economic development cannot be met in most water-scarce macro-regions, we assess the consequences that insufficient water resources would imply for the structure of the economy, welfare and the terms of trade. The analysis is undertaken by means of numerical simulations with a global computable general equilibrium model, under a set of alternative hypotheses. In particular, we consider whether (or not) the regional economic systems have a differentiated capability of adaptation (by means of innovation and modification of economic processes), and whether (or not) the scarce water resources can be allocated among industries such that more water is assigned where its economic value is greater. (A toy marginal-value allocation rule is sketched after this entry.)
Keywords: Water, Economic Growth, Shared Socio-economic Pathways, Computable General Equilibrium, Virtual Water Trade
JEL: C68 F18 F43 O11 Q01 Q25 Q32 Q56
Date: 2017
URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2017:07&r=cmp
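A CGE model cannot be condensed into a snippet, but the final point, assigning scarce water where its economic value is greatest, can be illustrated with a toy greedy allocator that hands each increment of water to the sector with the highest current marginal value (diminishing returns assumed). The sector names and value functions below are invented for illustration and are unrelated to the paper's calibration.

```python
# Toy "efficient" water allocation: give each unit of water to the sector
# with the highest marginal value. Value functions are hypothetical and
# concave, v(w) = alpha * sqrt(w), so marginal values diminish.

def marginal_value(alpha, allocated, step=1.0):
    # value of the next `step` units under v(w) = alpha * sqrt(w)
    return alpha * ((allocated + step) ** 0.5 - allocated ** 0.5)

def allocate(total_water, alphas, step=1.0):
    alloc = {sector: 0.0 for sector in alphas}
    remaining = total_water
    while remaining >= step:
        best = max(alphas, key=lambda s: marginal_value(alphas[s], alloc[s], step))
        alloc[best] += step
        remaining -= step
    return alloc

# Higher alpha = higher economic value per unit of water (invented numbers).
alphas = {"agriculture": 1.0, "manufacturing": 2.5, "services": 4.0}
print(allocate(total_water=100, alphas=alphas))
```

At the resulting allocation the marginal values are (approximately) equalized across sectors, which is the efficiency condition the abstract's inter-industry water reallocation scenario captures.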
By: Mariusz Tarnopolski
Abstract: The long-term dependence of Bitcoin (BTC), manifesting itself through a Hurst exponent $H>0.5$, is exploited in order to predict the future BTC/USD price. A Monte Carlo simulation with $10^5$ fractional Brownian motion realisations is performed as extensions of the historical data. The accuracy of statistical inferences is 20%. The most probable Bitcoin price in 180 days is 4537 USD. (A minimal fBm Monte Carlo sketch follows this entry.)
Date: 2017–07
URL: http://d.repec.org/n?u=RePEc:arx:papers:1707.03746&r=cmp
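The abstract gives only the ingredients (fractional Brownian motion with $H>0.5$, Monte Carlo extension of the historical series, mode of the terminal distribution); the calibration is not stated. Below is a minimal sketch of that idea: generate fractional Gaussian noise via a Cholesky factorisation of its covariance, extend a starting price, and read off the most frequent terminal price. The Hurst exponent, volatility, starting price, and path count here are placeholders, not the paper's values (the paper uses $10^5$ paths; fewer are used here for speed).

```python
import numpy as np

# Sketch of an fBm-based Monte Carlo price extension (illustrative only).

def fgn_cov(n, H):
    """Covariance matrix of fractional Gaussian noise with Hurst exponent H."""
    k = np.arange(n)
    lags = np.abs(k[:, None] - k[None, :]).astype(float)
    return 0.5 * ((lags + 1) ** (2 * H) - 2 * lags ** (2 * H)
                  + np.abs(lags - 1) ** (2 * H))

def simulate_terminal_prices(s0, H, sigma, horizon, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    # Cholesky factor turns iid normals into correlated fGn increments
    L = np.linalg.cholesky(fgn_cov(horizon, H) + 1e-10 * np.eye(horizon))
    z = rng.standard_normal((horizon, n_paths))
    increments = sigma * (L @ z)           # correlated daily log-returns
    return s0 * np.exp(np.cumsum(increments, axis=0)[-1])

# Placeholder calibration: H, sigma, and s0 are NOT the paper's estimates.
prices = simulate_terminal_prices(s0=2500.0, H=0.6, sigma=0.04,
                                  horizon=180, n_paths=10_000)
counts, edges = np.histogram(prices, bins=100)
mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"most probable price after 180 days: {mode:.0f} USD")
```

The "most probable price" is taken as the histogram mode of the simulated terminal distribution, mirroring the most-probable-price statement in the abstract.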
By: Victor Chernozhukov; Denis Chetverikov; Mert Demirer; Esther Duflo; Christian Hansen; Whitney Newey; James Robins
Abstract: We revisit the classic semiparametric problem of inference on a low-dimensional parameter θ_0 in the presence of high-dimensional nuisance parameters η_0. We depart from the classical setting by allowing η_0 to be so high-dimensional that the traditional assumptions, such as Donsker properties, that limit the complexity of the parameter space for this object break down. To estimate η_0, we consider the use of statistical or machine learning (ML) methods, which are particularly well-suited to estimation in modern, very high-dimensional cases. ML methods perform well by employing regularization to reduce variance and trading off regularization bias with overfitting in practice. However, both regularization bias and overfitting in estimating η_0 cause a heavy bias in estimators of θ_0 that are obtained by naively plugging ML estimators of η_0 into estimating equations for θ_0. This bias results in the naive estimator failing to be N^(-1/2)-consistent, where N is the sample size. We show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ_0 can be removed by using two simple, yet critical, ingredients: (1) Neyman-orthogonal moments/scores, which have reduced sensitivity to the nuisance parameters, to estimate θ_0, and (2) cross-fitting, which provides an efficient form of data-splitting. We call the resulting set of methods double or debiased ML (DML). We verify that DML delivers point estimators that concentrate in an N^(-1/2)-neighborhood of the true parameter values and are approximately unbiased and normally distributed, which allows construction of valid confidence statements. The generic statistical theory of DML is elementary and relies only on weak theoretical requirements, which admit the use of a broad array of modern ML methods for estimating the nuisance parameters, such as random forests, lasso, ridge, deep neural nets, boosted trees, and various hybrids and ensembles of these methods. We illustrate the general theory by deriving the properties of DML applied to learn the main regression parameter in a partially linear regression model, the coefficient on an endogenous variable in a partially linear instrumental variables model, the average treatment effect and the average treatment effect on the treated under unconfoundedness, and the local average treatment effect in an instrumental variables setting. In addition to these theoretical applications, we also illustrate the use of DML in three empirical examples. (A minimal cross-fitting sketch for the partially linear model follows this entry.)
JEL: C01
Date: 2017–06
URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:23564&r=cmp
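As a concrete illustration of ingredients (1) and (2) for the partially linear model Y = θ·D + g(X) + ε, the sketch below cross-fits random-forest estimates of E[Y|X] and E[D|X] and solves the Neyman-orthogonal score on the held-out residuals. This is a textbook rendering of the DML recipe on synthetic data, not the authors' code; the data-generating process is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# DML sketch for the partially linear model Y = theta*D + g(X) + eps,
# D = m(X) + v, with cross-fitted random forests (synthetic data).
rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.5
X = rng.standard_normal((n, p))
g = np.sin(X[:, 0]) + X[:, 1] ** 2          # nuisance in the outcome
m = 0.5 * X[:, 0] + np.cos(X[:, 2])         # nuisance in the treatment
D = m + rng.standard_normal(n)
Y = theta * D + g + rng.standard_normal(n)

res_y, res_d = np.empty(n), np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # fit nuisances on the training folds, residualise on the held-out fold
    ml_y = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], Y[train])
    ml_d = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], D[train])
    res_y[test] = Y[test] - ml_y.predict(X[test])
    res_d[test] = D[test] - ml_d.predict(X[test])

# Neyman-orthogonal score psi = (res_y - theta*res_d) * res_d, set to zero:
theta_hat = (res_d @ res_y) / (res_d @ res_d)
psi = (res_y - theta_hat * res_d) * res_d
se = np.sqrt(np.mean(psi ** 2)) / (np.sqrt(n) * np.mean(res_d ** 2))
print(f"theta_hat = {theta_hat:.3f} +/- {1.96 * se:.3f} (true theta = {theta})")
```

Both fixes are visible here: residualising against E[D|X] makes the score insensitive to small nuisance errors, and cross-fitting keeps each observation out of the sample used to fit its own nuisance estimates, which is what removes the overfitting bias.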
By: Jun E Rentschler (University College London, Institute for Sustainable Resources, UK / Oxford Institute for Energy Studies, Oxford, UK / National Graduate Institute for Policy Studies, Tokyo, Japan); Nobuhiro Hosoe (National Graduate Institute for Policy Studies, Tokyo, Japan)
Abstract: This study develops a computable general equilibrium model for Nigeria to study the impact of fossil fuel subsidy reform - and energy taxes - on key economic parameters, including consumption, income distribution, tax incidence, and fiscal efficiency. The model also examines the role of informality, tax evasion, and fuel smuggling, and shows that these factors can substantially strengthen the argument in favour of subsidy reform. The study shows that redistributing revenues from subsidy reform using uniform cash transfers has a strong progressive (i.e. pro-poor) distributional effect. Moreover, redistributing reform revenues by cutting pre-existing labour taxes not only increases fiscal efficiency, but also reduces the welfare losses associated with tax evasion, which in turn reduces the welfare costs of reform by up to 40%. Regardless of the method of revenue redistribution, reducing subsidies diminishes the incentives for fuel smuggling, and hence the welfare losses associated with it. (A stylised arithmetic check of the progressivity claim follows this entry.)
Date: 2017–07
URL: http://d.repec.org/n?u=RePEc:ngi:dpaper:17-05&r=cmp
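The progressivity of uniform cash transfers follows from simple arithmetic: subsidy benefits scale with fuel consumption (and hence, broadly, with income), while the recycled transfer is the same for everyone. The stylised check below uses invented household data; it is a back-of-the-envelope illustration, not the paper's CGE model.

```python
# Stylised check that uniform recycling of subsidy-reform revenue is
# progressive: richer households consume more subsidised fuel, so they
# lose more from reform, while everyone receives the same transfer.
# Incomes and fuel use are invented for illustration.

households = [
    {"income": 100, "fuel": 5},
    {"income": 300, "fuel": 12},
    {"income": 900, "fuel": 30},
]
subsidy_per_unit = 2.0

revenue = subsidy_per_unit * sum(h["fuel"] for h in households)
transfer = revenue / len(households)  # uniform cash transfer

for h in households:
    net = transfer - subsidy_per_unit * h["fuel"]  # transfer minus lost subsidy
    print(f"income {h['income']:>4}: net gain {net:+6.1f} "
          f"({100 * net / h['income']:+.1f}% of income)")
```

Running this prints a positive net gain for the poorest household and a small net loss for the richest, both in absolute terms and as a share of income, which is the pro-poor pattern the abstract reports.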
By: Jorge Marco (University of Girona); Renan Goetz (University of Girona)
Abstract: This study revisits the problem of the tragedy of the commons. Extracting agents participate in an evolutionary game in a complex social network and are subject to social pressure if they do not comply with the social norms. Social pressure depends on the dynamics of the resource, the network and the population of compliers. We analyze the influence the network structure has on the agents' behavior and determine the economic value of the intangible good - social pressure. For a socially optimal management of the resource, an initially high share of compliers is necessary but not sufficient. The analysis shows the extent to which the remaining level of the resource, the share of compliers and the size, density and local cohesiveness of the network contribute to overcoming the tragedy of the commons. The study suggests that the origin of the problem - shortsighted behavior - is also the starting point for a solution in the form of a one-time payment. A numerical analysis of a social network comprising 7500 agents and a realistic topological structure is performed using empirical data from the western La Mancha aquifer in Spain. (A toy imitation dynamic on a random network is sketched after this entry.)
Keywords: Tragedy of the Commons, Cooperation, Evolutionary Game, Social Network, Social Punishment
JEL: C71 D85 Q25
Date: 2017–07
URL: http://d.repec.org/n?u=RePEc:fem:femwpa:2017.35&r=cmp
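The evolutionary mechanism in the abstract, compliers versus over-extractors on a network with social pressure from complying neighbours and imitation of better-performing neighbours, can be sketched as below. The graph, payoffs, and update rule are simplified stand-ins for the paper's model: the resource dynamics are omitted, the network is a plain random graph rather than the paper's realistic topology, and all parameters are invented.

```python
import random

# Toy evolutionary game on a random graph: "compliers" extract little
# (payoff LOW); "defectors" extract a lot (payoff HIGH) but lose PRESSURE
# times the complying share of their neighbourhood. Agents imitate a
# random neighbour with a higher payoff. Illustrative stand-ins only.
random.seed(1)
N, P_EDGE, STEPS = 200, 0.05, 5000
LOW, HIGH, PRESSURE = 1.0, 3.0, 2.5  # complying beats defecting once the
                                     # local complier share exceeds
                                     # (HIGH - LOW) / PRESSURE = 0.8

neighbours = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            neighbours[i].add(j)
            neighbours[j].add(i)

complier = [random.random() < 0.9 for _ in range(N)]  # initially high share
initial_share = sum(complier) / N

def payoff(i):
    if complier[i]:
        return LOW
    nbrs = neighbours[i]
    if not nbrs:
        return HIGH
    return HIGH - PRESSURE * sum(complier[j] for j in nbrs) / len(nbrs)

for _ in range(STEPS):  # asynchronous imitation updates
    i = random.randrange(N)
    if neighbours[i]:
        j = random.choice(list(neighbours[i]))
        if payoff(j) > payoff(i):
            complier[i] = complier[j]

print(f"complier share: {initial_share:.2f} -> {sum(complier) / N:.2f}")
```

Lowering the initial complier share below the local threshold of 0.8 lets defection spread instead, a simple illustration of the abstract's point that a high initial share of compliers is necessary for escaping the tragedy of the commons.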