nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒02‒12
eleven papers chosen by



  1. Optimal policy design: A CGE approach By Martin Cicowiez; Bernard Decaluwé; Mustapha Nabli
  2. When does a disaster become a systemic event? Estimating indirect economic losses from natural disasters By Sebastian Poledna; Stefan Hochrainer-Stigler; Michael Gregor Miess; Peter Klimek; Stefan Schmelzer; Johannes Sorger; Elena Shchekinova; Elena Rovenskaya; JoAnne Linnerooth-Bayer; Ulf Dieckmann; Stefan Thurner
  3. Assessing the Economic and Social Impact of Tax and Transfer System Reforms: A General Equilibrium Microsimulation Approach By Peter, Benczur; Gabor, Katay; Aron, Kiss
  4. Machine learning for time series forecasting - a simulation study By Fischer, Thomas; Krauss, Christopher; Treichel, Alex
  5. A simple, deterministic, and efficient knowledge-driven heuristic for the vehicle routing problem By ARNOLD, Florian; SÖRENSEN, Kenneth
  6. Capital allocation under Fundamental Review of Trading Book By Luting Li; Hao Xing
  7. GDP-linked Bonds: Some Simulations on EU Countries By Nicolas Carnot; Stéphanie Pamies Sumner
  8. An exploratory study towards applying and demystifying deep learning classification on behavioral big data By DE CNUDDE, Sofie; MARTENS, David; PROVOST, Foster
  9. Big Data and Machine Learning in Government Projects: Expert Evaluation Case By Nikitinsky, Nikita; Shashev, Sergey; Kachurina, Polina; Bespalov, Aleksander
  10. Economic recommendation based on pareto efficient resource allocation By Zhang, Yongfeng; Zhang, Yi; Friedman, Daniel
  11. Robust optimization of uncertain multistage inventory systems with inexact data in decision rules By de Ruiter, Frans; Ben-Tal, A.; Brekelmans, Ruud; den Hertog, Dick

  1. By: Martin Cicowiez; Bernard Decaluwé; Mustapha Nabli
    Abstract: In this paper we extend an existing computable general equilibrium (CGE) model to perform optimal policy design exercises. Specifically, to an otherwise standard CGE model, we add an objective function that allows us to compute optimal values for selected policy variables. In turn, the CGE model operates as the constraint of the optimization problem. In addition, we illustrate the usefulness of the proposed approach to optimal policy design. For this purpose, we develop an exercise with real data from Argentina.
    Keywords: Computable General Equilibrium, Optimal Policy Design
    JEL: D58 E61
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lvl:mpiacr:2017-07&r=cmp
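    To illustrate the nested structure described above, here is a minimal Python sketch (not the authors' model; all functional forms, parameters, and the revenue target are invented for illustration): an outer optimizer chooses a policy variable while a toy one-sector equilibrium acts as the constraint.

      # Toy "CGE-as-constraint" setup: the outer optimizer picks a policy
      # variable (a consumption tax rate) to maximize household welfare; the
      # inner solver computes the equilibrium implied by that tax rate.
      import numpy as np
      from scipy.optimize import brentq, minimize_scalar

      A, alpha = 1.0, 0.6          # hypothetical technology parameters
      G_target = 0.15              # hypothetical revenue the government must raise

      def equilibrium(tax):
          """Solve a stylised one-good economy for labour L at a given tax rate."""
          def excess(L):
              wage = alpha * A * L ** (alpha - 1.0)   # marginal product of labour
              return wage / (1.0 + tax) - L           # stylised labour-supply closure
          L = brentq(excess, 1e-6, 10.0)
          output = A * L ** alpha
          revenue = tax / (1.0 + tax) * output
          return output - revenue, L, revenue          # consumption, labour, revenue

      def negative_welfare(tax):
          c, L, rev = equilibrium(tax)
          penalty = 1e3 * max(0.0, G_target - rev) ** 2   # enforce the revenue target
          return -(np.log(c) - 0.5 * L ** 2) + penalty

      opt = minimize_scalar(negative_welfare, bounds=(0.01, 0.9), method="bounded")
      print("optimal toy tax rate:", round(opt.x, 3))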
  2. By: Sebastian Poledna; Stefan Hochrainer-Stigler; Michael Gregor Miess; Peter Klimek; Stefan Schmelzer; Johannes Sorger; Elena Shchekinova; Elena Rovenskaya; JoAnne Linnerooth-Bayer; Ulf Dieckmann; Stefan Thurner
    Abstract: Reliable estimates of indirect economic losses arising from natural disasters are currently out of scientific reach. To address this problem, we propose a novel approach that combines a probabilistic physical damage catastrophe model with a new generation of macroeconomic agent-based models (ABMs). The ABM moves beyond the state of the art by exploiting large data sets from detailed national accounts, census data, business information, and other sources to simulate the interactions of millions of agents representing each natural person or legal entity in a national economy. The catastrophe model introduces a copula approach to assess flood losses, considering spatial dependencies of the flood hazard. These loss estimates are used in a damage scenario generator that provides input for the ABM, which then estimates indirect economic losses due to the event. For the first time, we are able to link environmental and economic processes in a computer simulation at this level of detail. We show that moderate disasters induce comparably small but positive short- to medium-term, and negative long-term economic impacts. Large-scale events, however, trigger a pronounced negative economic response immediately after the event and in the long term, while exhibiting a temporary short- to medium-term economic boost. We identify winners and losers in different economic sectors, including the fiscal consequences for the government. We quantify the critical disaster size beyond which the resilience of an economy to rebuild reaches its limits. Our results might be relevant for the management of the consequences of systemic events due to climate change and other disasters.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.09740&r=cmp
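    The copula step described in the abstract can be illustrated with a minimal Python sketch (the correlation matrix, loss marginals, and quantile level below are invented assumptions, not the paper's calibration):

      # Gaussian copula over regional flood losses: spatially close regions
      # tend to flood together, which fattens the tail of the aggregate loss.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_sim, n_regions = 100_000, 3

      corr = np.array([[1.0, 0.7, 0.4],        # hypothetical spatial correlations
                       [0.7, 1.0, 0.6],
                       [0.4, 0.6, 1.0]])
      chol = np.linalg.cholesky(corr)

      # Correlated normals -> uniforms -> lognormal loss marginals per region.
      z = rng.standard_normal((n_sim, n_regions)) @ chol.T
      u = stats.norm.cdf(z)
      losses = stats.lognorm.ppf(u, s=1.0, scale=np.array([50.0, 30.0, 20.0]))

      total = losses.sum(axis=1)
      print("99.5% aggregate loss, copula:     ", round(np.quantile(total, 0.995), 1))

      # Independence benchmark: shuffling each column destroys the dependence.
      indep = np.column_stack([rng.permutation(losses[:, j]) for j in range(n_regions)])
      print("99.5% aggregate loss, independent:", round(np.quantile(indep.sum(axis=1), 0.995), 1))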
  3. By: Peter, Benczur (European Commission – JRC); Gabor, Katay (European Commission – JRC); Aron, Kiss (European Commission)
    Abstract: We present a general-equilibrium behavioural microsimulation model designed to assess long-run macroeconomic, fiscal and social consequences of reforms to the tax and transfer system. Labour supply behaviour is assessed along both the extensive and intensive margins, by merging the discrete choice and the elasticity of taxable income approaches. General-equilibrium feedback effects are simulated by embedding microsimulation in a parsimonious macro model of a small open economy. We estimate and calibrate the model to Hungary, and then perform three sets of simulations. The first one explores the impact of personal income tax reductions that are identical in cost but different in structure. The second one compares three different tax shift scenarios, while the third one evaluates actual policy measures between 2008 and 2013. The results suggest that while a cut in the marginal tax rate of high-income individuals may boost output, it does not have a significant employment effect. On the other hand, programs like the Employee Tax Credit do have a significant employment effect. We find that the policy measures introduced since 2008 substantially increase income inequality in the long run; the contribution of the changes after 2010 is about four times that of the changes before 2010. Our results highlight that taking account of household heterogeneity is crucial in the analysis of the macroeconomic effects of tax and transfer reforms.
    Keywords: behavioural microsimulation; linked micro macro model; tax system; transfers
    JEL: H22 H31 C63
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:jrs:wpaper:201709&r=cmp
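    The extensive-margin, discrete-choice building block mentioned in the abstract can be sketched in a few lines of Python (the wage distribution, work-disutility distribution, tax rate, and in-work credit below are all invented for illustration):

      # Each individual picks hours from a discrete set via a logit over
      # reform-dependent net incomes; employment and revenue are re-aggregated.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 10_000
      gross_wage = rng.lognormal(mean=10.0, sigma=0.5, size=n)   # annual full-time gross
      work_disutility = rng.uniform(0.0, 8.0, size=n)            # heterogeneous tastes
      hours = np.array([0.0, 0.5, 1.0])                          # out / part-time / full-time

      def simulate(tax_rate, in_work_credit, beta=3e-4):
          earnings = np.outer(gross_wage, hours)
          net = earnings * (1.0 - tax_rate) + in_work_credit * (hours > 0)
          utility = beta * net - work_disutility[:, None] * hours
          probs = np.exp(utility - utility.max(axis=1, keepdims=True))
          probs /= probs.sum(axis=1, keepdims=True)
          employment = probs[:, 1:].sum(axis=1).mean()
          revenue = (probs * (earnings * tax_rate - in_work_credit * (hours > 0))).sum()
          return round(employment, 3), round(revenue / 1e6, 1)

      print("baseline (tax 35%, no credit):   ", simulate(0.35, 0.0))
      print("reform   (tax 35%, credit 1500): ", simulate(0.35, 1_500.0))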
  4. By: Fischer, Thomas; Krauss, Christopher; Treichel, Alex
    Abstract: We present a comprehensive simulation study to assess and compare the performance of popular machine learning algorithms for time series prediction tasks. Specifically, we consider the following algorithms: multilayer perceptron (MLP), logistic regression, naïve Bayes, k-nearest neighbors, decision trees, random forests, and gradient-boosting trees. These models are applied to time series from eight data generating processes (DGPs) - reflecting different linear and nonlinear dependencies (base case). Additional complexity is introduced by adding discontinuities and varying degrees of noise. Our findings reveal that advanced machine learning models are capable of approximating the optimal forecast very closely in the base case, with nonlinear models in the lead across all DGPs - particularly the MLP. By contrast, logistic regression is remarkably robust in the presence of noise, thus yielding the most favorable accuracy metrics on raw data, prior to preprocessing. When introducing adequate preprocessing techniques, such as first differencing and local outlier factor, the picture is reversed, and the MLP as well as other nonlinear techniques once again become the modeling techniques of choice.
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:022018&r=cmp
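    A minimal version of such a simulation experiment can be sketched in Python with scikit-learn (the single nonlinear DGP, the lag depth, and the model settings below are illustrative assumptions, not the paper's design):

      # Simulate a nonlinear autoregressive series, build lagged features, and
      # compare a linear and a nonlinear classifier on predicting the next move.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(42)
      T, lags = 5_000, 3
      x = np.zeros(T)
      for t in range(1, T):                        # nonlinear AR(1) plus noise
          x[t] = 0.9 * np.tanh(2.0 * x[t - 1]) + 0.3 * rng.standard_normal()

      X = np.column_stack([x[i:T - lags + i] for i in range(lags)])
      y = (x[lags:] > x[lags - 1:-1]).astype(int)   # 1 if the series moves up
      split = int(0.7 * len(y))

      models = [("logistic regression", LogisticRegression(max_iter=1000)),
                ("MLP", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                                      random_state=0))]
      for name, model in models:
          model.fit(X[:split], y[:split])
          acc = accuracy_score(y[split:], model.predict(X[split:]))
          print(f"{name:20s} out-of-sample accuracy: {acc:.3f}")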
  5. By: ARNOLD, Florian; SÖRENSEN, Kenneth
    Abstract: In this paper we develop a heuristic for the capacitated vehicle routing problem that revolves around three complementary local search operators, embedded in a guided local search framework. The efficiency of the operators is guaranteed by using knowledge, obtained through data mining, on the attributes of undesirable edges. In spite of its straightforward design and the fact that it is completely deterministic, the heuristic is competitive with the best heuristics in the literature in terms of accuracy and speed. Moreover, it can be readily extended to solve a wide range of vehicle routing problems, which we demonstrate by applying it to the multi-depot vehicle routing problem.
    Keywords: Vehicle routing problems, Heuristics, Metaheuristics
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2017012&r=cmp
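    For readers unfamiliar with the problem class, the following Python sketch shows a deterministic baseline for the capacitated VRP (nearest-neighbour construction plus a 2-opt pass that removes long, "undesirable" edges); it is far simpler than the paper's guided-local-search heuristic with three operators, and the instance is randomly generated for illustration:

      import numpy as np

      rng = np.random.default_rng(7)
      n, capacity = 20, 50
      coords = rng.uniform(0, 100, size=(n + 1, 2))        # index 0 is the depot
      demand = np.r_[0, rng.integers(5, 15, size=n)]
      dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)

      def construct_routes():
          """Capacity-limited nearest-neighbour construction."""
          unserved, routes = set(range(1, n + 1)), []
          while unserved:
              route, load, current = [], 0, 0
              while True:
                  feasible = [c for c in unserved if load + demand[c] <= capacity]
                  if not feasible:
                      break
                  nxt = min(feasible, key=lambda c: dist[current, c])
                  route.append(nxt); load += demand[nxt]
                  unserved.remove(nxt); current = nxt
              routes.append(route)
          return routes

      def two_opt(route):
          """Intra-route improvement: reverse segments while it shortens the tour."""
          tour = [0] + route + [0]
          improved = True
          while improved:
              improved = False
              for i in range(1, len(tour) - 2):
                  for j in range(i + 1, len(tour) - 1):
                      delta = (dist[tour[i - 1], tour[j]] + dist[tour[i], tour[j + 1]]
                               - dist[tour[i - 1], tour[i]] - dist[tour[j], tour[j + 1]])
                      if delta < -1e-9:
                          tour[i:j + 1] = tour[i:j + 1][::-1]
                          improved = True
          return tour[1:-1]

      routes = [two_opt(r) for r in construct_routes()]
      total = sum(dist[0, r[0]] + dist[r[-1], 0] +
                  sum(dist[a, b] for a, b in zip(r, r[1:])) for r in routes)
      print(len(routes), "routes, total distance", round(total, 1))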
  6. By: Luting Li; Hao Xing
    Abstract: The Fundamental Review of the Trading Book (FRTB) from the Basel Committee overhauls the regulatory framework for minimum capital requirements for market risk. Facing the tightened regulation, banks need to allocate their capital to each of their risk positions to evaluate the capital efficiency of their strategies. This paper proposes two computationally efficient allocation methods under the FRTB framework. Simulation analysis shows that both methods provide more liquidity-horizon-weighted, more stable, and less negative allocations than the standard methods under the current regulatory framework.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.07358&r=cmp
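    The abstract does not spell out the two proposed methods, but the general idea of allocating a tail-risk capital charge to positions can be illustrated with a standard Euler allocation of expected shortfall (the desk P&L distribution below is an invented assumption):

      # Allocate an expected-shortfall charge to desks by averaging each desk's
      # loss in the scenarios where the portfolio loss exceeds its VaR.
      import numpy as np

      rng = np.random.default_rng(3)
      n_sim, alpha = 200_000, 0.975                      # ES confidence level used by FRTB

      corr = np.array([[1.0, 0.3, 0.1],                  # hypothetical desk correlations
                       [0.3, 1.0, 0.5],
                       [0.1, 0.5, 1.0]])
      vol = np.array([2.0, 1.0, 3.0])
      losses = rng.multivariate_normal(np.zeros(3), corr * np.outer(vol, vol), size=n_sim)

      portfolio = losses.sum(axis=1)
      var = np.quantile(portfolio, alpha)
      tail = portfolio >= var

      es_total = portfolio[tail].mean()                  # portfolio expected shortfall
      euler = losses[tail].mean(axis=0)                  # per-desk Euler contributions
      print("portfolio ES:", round(es_total, 3))
      print("desk contributions:", np.round(euler, 3), "(sum =", round(euler.sum(), 3), ")")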
  7. By: Nicolas Carnot; Stéphanie Pamies Sumner
    Abstract: The economic and fiscal outlook has recently improved for European economies, raising the odds that high public debts inherited from the crisis will be gradually wound down in line with EU fiscal rules. This will, however, take time, and future debt trajectories remain exposed to significant uncertainties. In this context, this paper explores some implications of GDP-linked bonds (GLBs), an instrument for national debt management that has recently sparked growing interest. Based on the data and tools of the Commission Debt Sustainability Monitor, our results suggest significant potential benefits from GLBs in reducing debt uncertainties for all European economies. These benefits would be notably large in countries characterised by medium-to-high debt, high macroeconomic volatility and limited alternative tools to smooth shocks. A risk premium would not eliminate the debt-stabilisation benefits brought by GLBs. The fall in the probability of explosive debt paths could also reduce the premium demanded by investors on conventional bonds in high-debt countries. Issuing a fraction of debt as GLBs is, however, no substitute for sound economic and budgetary policies that curb national debts.
    JEL: H63 F34 E62
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:euf:dispap:073&r=cmp
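    The stabilising mechanism of GLBs can be seen in a stylised debt-dynamics simulation (a hedged sketch with invented parameters, not the Commission's Debt Sustainability Monitor toolkit):

      # Compare the dispersion of debt-to-GDP paths when the interest rate is
      # fixed versus indexed one-for-one to nominal GDP growth (a GDP-linked bond).
      import numpy as np

      rng = np.random.default_rng(11)
      n_paths, horizon = 10_000, 10
      d0, pb = 0.90, 0.01                  # initial debt ratio, primary balance (share of GDP)
      r_fix, g_bar, g_sigma = 0.03, 0.03, 0.02
      premium = 0.005                      # assumed extra premium demanded on GLBs

      g = g_bar + g_sigma * rng.standard_normal((n_paths, horizon))   # nominal growth

      def simulate(linked):
          d = np.full(n_paths, d0)
          for t in range(horizon):
              rate = (r_fix + (g[:, t] - g_bar) + premium) if linked else r_fix
              d = d * (1.0 + rate) / (1.0 + g[:, t]) - pb
          return d

      for linked in (False, True):
          d = simulate(linked)
          label = "GDP-linked  " if linked else "conventional"
          print(f"{label} debt/GDP after {horizon}y: mean {d.mean():.3f}, std {d.std():.3f}")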
  8. By: DE CNUDDE, Sofie; MARTENS, David; PROVOST, Foster
    Abstract: The superior performance of deep learning algorithms in fields such as computer vision and natural language processing has fueled increased interest in these algorithms in both research and practice. Many studies have since applied these algorithms to other machine learning contexts and other types of data in the hope of achieving comparably superior performance. This study follows that motivation and investigates the application of deep learning classification techniques to big behavioral data, comparing their predictive performance with 11 widely used shallow classifiers. In addition to the application to a new type of data and a structured comparison with commonly used classifiers, this study attempts to shed light on when and why deep learning techniques perform better. Regarding the specific characteristics of applying deep learning to this class of data, we demonstrate that an unsupervised pretraining step does not improve classification performance and that a tanh nonlinearity achieves the best predictive performance. The results from applying deep learning to 15 big behavioral data sets are as good as or better than those of traditionally used, shallow classifiers, but no significant performance improvement can be recorded. Investigating when deep learning performs better, we find that worse performance is obtained for data sets with low signal-from-noise separability. To gain insight into why deep learning generally performs well on this type of data, we investigate the value of the distributed, hierarchical characteristic of the learning process. The neurons in the distributed representation appear to identify more nuances in the many behavioral features than shallow classifiers do. We demonstrate these nuances in an intuitive manner and validate them through comparison with feature engineering techniques. This is the first study to apply and validate nonlinear deep learning classification on fine-grained, human-generated data while proposing efficient configuration settings for its practical implementation. As deep learning classification is often characterized as a black-box approach, we also provide a first attempt at disentangling when and why these techniques perform well.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2018002&r=cmp
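    The recommended configuration (no unsupervised pretraining, tanh nonlinearity) can be tried on synthetic sparse behavioural data with a few lines of scikit-learn; the data-generating process and model sizes below are invented for illustration:

      # Sparse, high-dimensional binary "behaviour" features with an XOR-like
      # target: a shallow linear model versus a small tanh multilayer network.
      import numpy as np
      from scipy import sparse
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(5)
      n, d = 10_000, 1_000
      X = sparse.random(n, d, density=0.01, format="csr", random_state=5,
                        data_rvs=lambda k: np.ones(k))       # binary behaviour matrix

      w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
      s1, s2 = X @ w1, X @ w2
      y = ((s1 * s2 + rng.standard_normal(n)) > 0).astype(int)   # nonlinear interaction

      split = int(0.8 * n)
      models = {"logistic regression": LogisticRegression(max_iter=2000),
                "tanh MLP": MLPClassifier(hidden_layer_sizes=(64,), activation="tanh",
                                          max_iter=500, random_state=0)}
      for name, model in models.items():
          model.fit(X[:split], y[:split])
          print(name, "accuracy:", round(model.score(X[split:], y[split:]), 3))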
  9. By: Nikitinsky, Nikita; Shashev, Sergey; Kachurina, Polina; Bespalov, Aleksander
    Abstract: In this paper, we present the Expert Hub System, designed to help governmental structures find the best experts in different areas of expertise for better reviewing of incoming grant proposals. The Expert Hub approach uses data from the Directorate of Science and Technology Programmes to define areas of expertise through topic modeling and clustering, and then relates experts to the corresponding areas and ranks them according to their proficiency in each. The paper also discusses the use of Big Data and Machine Learning in this Russian government project.
    Keywords: government project, Big Data, Machine Learning, expert evaluation, clustering
    JEL: O38
    Date: 2016–07–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:82865&r=cmp
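    The pipeline sketched in the abstract (topic modelling to define areas of expertise, then ranking experts within each area) can be illustrated with scikit-learn on a toy corpus; the documents and the number of topics below are invented, not the Directorate's data:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF

      docs = {"expert_A": "machine learning neural networks classification deep learning",
              "expert_B": "fluid dynamics turbulence simulation aerodynamics",
              "expert_C": "deep learning computer vision image classification",
              "expert_D": "turbulence modelling computational fluid dynamics"}
      names = list(docs)

      tfidf = TfidfVectorizer()
      X = tfidf.fit_transform(docs.values())

      nmf = NMF(n_components=2, init="nndsvda", random_state=0)
      weights = nmf.fit_transform(X)                 # experts x topics

      terms = tfidf.get_feature_names_out()
      for k, topic in enumerate(nmf.components_):
          top_terms = [terms[i] for i in topic.argsort()[::-1][:3]]
          ranking = sorted(zip(names, weights[:, k]), key=lambda p: -p[1])
          print(f"area {k} ({', '.join(top_terms)}):",
                [f"{name}: {w:.2f}" for name, w in ranking])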
  10. By: Zhang, Yongfeng; Zhang, Yi; Friedman, Daniel
    Abstract: A fundamentally important role of the Web economy is Online Resource Allocation (ORA) from producers to consumers, such as product allocation in E-commerce, job allocation in freelancing platforms, and driver allocation in P2P ride services. Since users have the freedom to choose, such allocations are not imposed but are usually delivered in the form of personalized recommendations, which users may refuse. Current recommendation approaches mostly provide allocations that match the preference of each individual user, instead of treating the Web application as a whole economic system in which users are mutually correlated through the allocations. This lack of a global view leads to Pareto inefficiency, i.e., the recommendations could be improved by making some users better off without hurting the others, meaning that the system has not achieved its best possible allocation. The problem is especially severe when the total amount of each resource is limited, so that allocating it to one (set of) user(s) leaves other users out. In this paper, we propose Pareto Efficient Economic Recommendation (PEER), in which the system provides the best possible (i.e., Pareto optimal) recommendations, where no user can gain further benefits without hurting the others. To this end, we propose a Multi-Objective Optimization (MOO) framework to maximize the surplus of each user simultaneously, and provide recommendations based on the resulting Pareto optima. To benefit the many existing recommendation algorithms, we further propose a Pareto Improvement Process (PIP) to turn their recommendations into Pareto efficient ones. Experiments on real-world datasets verify that PIP improves existing algorithms on recommendation performance and consumer surplus, while the direct PEER approach achieves the best performance on both aspects.
    Keywords: Pareto Efficiency, Online Resource Allocation, Multi-Objective Optimization, Economic Recommendation, Computational Economics
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:wzbmdn:spii2017503&r=cmp
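    The Pareto-efficiency criterion behind PEER can be made concrete with a tiny enumeration in Python (the surplus table below is invented; the paper itself works with a multi-objective optimization framework rather than brute force):

      # With limited items, an allocation is Pareto efficient if no other feasible
      # allocation raises one user's surplus without lowering another's.
      from itertools import permutations

      surplus = {"u1": {"A": 5, "B": 3, "C": 1},      # hypothetical user-item surplus
                 "u2": {"A": 4, "B": 4, "C": 2},
                 "u3": {"A": 2, "B": 1, "C": 3}}
      users, items = list(surplus), ["A", "B", "C"]   # one distinct item per user

      def profile(assignment):
          return tuple(surplus[u][i] for u, i in zip(users, assignment))

      allocations = {p: profile(p) for p in permutations(items)}

      def dominated(v):
          return any(all(w[k] >= v[k] for k in range(len(v))) and w != v
                     for w in allocations.values())

      for alloc, v in allocations.items():
          if not dominated(v):
              print(dict(zip(users, alloc)), "surplus profile", v)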
  11. By: de Ruiter, Frans (Tilburg University, School of Economics and Management); Ben-Tal, A. (Tilburg University, School of Economics and Management); Brekelmans, Ruud (Tilburg University, School of Economics and Management); den Hertog, Dick (Tilburg University, School of Economics and Management)
    Abstract: In production-inventory problems customer demand is often subject to uncertainty. Therefore, it is challenging to design production plans that satisfy both demand and a set of constraints on e.g. production capacity and required inventory levels. Adjustable robust optimization (ARO) is a technique to solve these dynamic (multistage) production-inventory problems. In ARO, the decision in each stage is a function of the data on the realizations of the uncertain demand gathered from the previous periods. These data, however, are often inaccurate; there is much evidence in the information management literature that data quality in inventory systems is often poor. Reliance on data “as is” may then lead to poor performance of “data-driven” methods such as ARO. In this paper, we remedy this weakness of ARO by introducing a model that treats past data itself as an uncertain model parameter. We show that computational tractability of the robust counterparts associated with this extension of ARO is still maintained. The benefits of the new model are demonstrated by a numerical test case of a well-studied production-inventory problem. Our approach is also applicable to other ARO models outside the realm of production-inventory planning.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:tiu:tiutis:fc2ce516-9c34-4389-830f-f6f816d4ed8d&r=cmp
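    The standard adjustable-robust-optimization setup that the paper extends (the paper additionally treats the observed data in the decision rule as inexact) can be sketched as a small linear program in Python; all costs, capacities, and the demand box below are invented:

      # Two-period robust production-inventory problem: second-period production
      # follows a linear decision rule in first-period demand, and the worst
      # case over a box uncertainty set is enforced at the box's vertices.
      from itertools import product
      from scipy.optimize import linprog
      import numpy as np

      I0, Q = 5.0, 30.0                          # initial stock, per-period capacity
      c_prod, c_hold = 1.0, 0.2                  # unit production and holding costs
      d_nominal, rho = np.array([20.0, 25.0]), 5.0

      # Variables x = [q1, y0, y1, tau];  q2(d1) = y0 + y1 * d1,  tau = worst-case cost.
      A_ub, b_ub = [], []
      for d1, d2 in product(*[(m - rho, m + rho) for m in d_nominal]):
          A_ub += [[0, -1, -d1, 0]];  b_ub += [0.0]              # q2 >= 0
          A_ub += [[0,  1,  d1, 0]];  b_ub += [Q]                # q2 <= Q
          A_ub += [[-1, -1, -d1, 0]]; b_ub += [I0 - d1 - d2]     # no stock-out at t = 2
          A_ub += [[c_prod + c_hold, c_prod + c_hold,            # production + holding
                    (c_prod + c_hold) * d1, -1]]                 #   cost <= tau
          b_ub += [-c_hold * I0 + c_hold * (d1 + d2)]

      res = linprog(c=[0, 0, 0, 1], A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, Q), (None, None), (None, None), (0, None)])
      q1, y0, y1, tau = res.x
      print(f"q1 = {q1:.2f},  q2(d1) = {y0:.2f} + {y1:.2f}*d1,  worst-case cost = {tau:.2f}")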

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.