nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒10‒05
nineteen papers chosen by
Stan Miles
Thompson Rivers University

  1. The merge of two worlds: Integrating artificial neural networks into agent-based electricity market simulation By Fraunholz, Christoph; Kraft, Emil; Keles, Dogan; Fichtner, Wolf
  2. An Adaptive Strategy for Connected Eco-Driving under Uncertain Traffic and Signal Conditions By Hao, Peng; Wei, Zhensong; Bai, Zhengwei; Barth, Matthew
  3. Europe Beyond Coal – An Economic and Climate Impact Assessment By Böhringer, Christoph; Rosendahl, Knut Einar
  4. Avoiding Root-Finding in the Krusell-Smith Algorithm Simulation By Ivo Bakota
  5. Population Ageing and the Impact of Later Retirement on the Pension System in China: An Applied Dynamic General Equilibrium Analysis By Xuejin Zuo; Xiujian Peng; Xin Yang; Philip Adams; Meifeng Wang
  6. A Deep Learning Approach to Estimate Forward Default Intensities By Marc-Aurèle Divernois
  7. Economic Implications of Global Energy Interconnection By Feng Shenghao; Philip Adams; Zhang Keyu; Peng Xiujian; Yang Jun
  8. Volatility Forecasting with 1-dimensional CNNs via transfer learning By Bernadett Aradi; G\'abor Petneh\'azi; J\'ozsef G\'all
  9. Agent based models in Mata: Modelling aggregate processes, like the spread of a disease By Maarten Buis
  10. Predictor-corrector interior-point algorithm for sufficient linear complementarity problems based on a new type of algebraic equivalent transformation technique By Darvay, Zsolt; Illés, Tibor; Rigó, Petra Renáta
  11. Supervised learning for the prediction of firm dynamics By Falco J. Bargagli-Stoffi; Jan Niederreiter; Massimo Riccaboni
  12. Probing the mechanism: lending rate setting in a data-driven agent-based model By Papadopoulos, Georgios
  13. Regularized Solutions to Linear Rational Expectations Models By Majid M. Al-Sadoon
  14. Platform Design when Sellers Use Pricing Algorithms By Johnson, Justin Pappas; Rhodes, Andrew; Wildenbeest, Matthijs
  15. microWELT: A Dynamic Microsimulation Model for the Study of Welfare Transfer Flows in Ageing Societies from a Comparative Welfare State Perspective By Martin Spielauer; Thomas Horvath; Marian Fink
  16. Optimal market making under partial information and numerical methods for impulse control games with applications By Diego Zabaljauregui
  17. Fancy graphics: Force-directed diagrams By Philippe van Kerm
  18. Man vs. Machine Learning: The Term Structure of Earnings Expectations and Conditional Biases By Jules H. van Binsbergen; Xiao Han; Alejandro Lopez-Lira
  19. R < 1 as an Economic Constraint: Can We “Expand the Frontier” in the Fight Against COVID-19? By Eric B. Budish

  1. By: Fraunholz, Christoph; Kraft, Emil; Keles, Dogan; Fichtner, Wolf
    Abstract: Machine learning and agent-based modeling are two popular tools in energy research. In this article, we propose an innovative methodology that combines these methods. For this purpose, we develop an electricity price forecasting technique using artificial neural networks and integrate the novel approach into the established agent-based electricity market simulation model PowerACE. In a case study covering ten interconnected European countries and a time horizon from 2020 until 2050 at hourly resolution, we benchmark the new forecasting approach against a simpler linear regression model as well as a naive forecast. Contrary to most of the related literature, we also evaluate the statistical significance of the superiority of one approach over another by conducting Diebold-Mariano hypothesis tests. Our major results can be summarized as follows. Firstly, in contrast to real-world electricity price forecasts, we find the naive approach to perform very poorly when deployed model-endogenously. Secondly, although the linear regression performs reasonably well, it is outperformed by the neural network approach. Thirdly, the use of an additional classifier for outlier handling substantially improves the forecasting accuracy, particularly for the linear regression approach. Finally, the choice of the model-endogenous forecasting method has a clear impact on simulated electricity prices. This latter finding is particularly crucial since these prices are a major result of electricity market models.
    Keywords: Agent-based simulation, Artificial neural network, Electricity price forecasting, Electricity market
    Date: 2020
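    The Diebold-Mariano test the authors apply can be sketched in a few lines. The toy series, squared-error loss, and omission of the long-run-variance correction below are illustrative assumptions, not the paper's setup:

    ```python
    import math
    import random

    def diebold_mariano(e1, e2):
        """Diebold-Mariano statistic for equal predictive accuracy under
        squared-error loss. Large |DM| values reject equal accuracy; this
        sketch omits the autocorrelation (long-run variance) correction
        that matters for multi-step forecasts."""
        d = [a * a - b * b for a, b in zip(e1, e2)]  # loss differential
        n = len(d)
        mean_d = sum(d) / n
        var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
        return mean_d / math.sqrt(var_d / n)

    # Toy example: a lag-1 "naive" forecast vs. a less noisy forecast.
    random.seed(0)
    truth = [math.sin(t / 5) for t in range(200)]
    naive = [0.0] + truth[:-1]                           # y_hat(t) = y(t-1)
    better = [y + random.gauss(0, 0.05) for y in truth]  # truth plus small noise
    e_naive = [f - y for f, y in zip(naive, truth)]
    e_better = [f - y for f, y in zip(better, truth)]
    print(diebold_mariano(e_naive, e_better))  # well above the 1.96 critical value
    ```

    In the paper the errors would come from the model-endogenous forecasts; here the clearly worse naive forecast shows up as a large positive statistic.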
  2. By: Hao, Peng; Wei, Zhensong; Bai, Zhengwei; Barth, Matthew
    Abstract: Connected and automated vehicle technology could bring about transformative reductions in traffic congestion, greenhouse gas emissions, air pollution, and energy consumption. Connected and automated vehicles (CAVs) can directly communicate with other vehicles and road infrastructure and use sensing technology and artificial intelligence to respond to traffic conditions and optimize fuel consumption. An eco-approach and departure application for connected and automated vehicles has been widely studied as a means of calculating the most energy-efficient speed profile and guiding a vehicle through signalized intersections without unnecessary stops and starts. Simulations using this application on roads with fixed-timing traffic signals have produced 12% reductions in fuel consumption and greenhouse gas emissions. But real-world traffic conditions are much more complex—uncertainties and the limited sensing range of automated vehicles create challenges for determining the most energy-efficient speed. To account for this uncertainty, researchers from the University of California, Riverside, propose a prediction-based, adaptive connected eco-driving strategy. The proposed strategy analyzes the possible upcoming traffic and signal scenarios based on historical data and live information collected from communication and sensing devices, and then chooses the most energy-efficient speed. This approach can be extended to accommodate different vehicle powertrains and types of roadway infrastructure. This research brief summarizes the findings and their implications.
    Keywords: Engineering, Autonomous vehicles, Connected vehicles, Ecodriving, Energy consumption, Machine learning, Microsimulation, Signalized intersections, Vehicle mix
    Date: 2020–09–01
  3. By: Böhringer, Christoph (Department of Business Administration, Economics and Law, University of Oldenburg); Rosendahl, Knut Einar (School of Economics and Business, Norwegian University of Life Sciences)
    Abstract: Several European countries have decided to phase out coal power generation. Emissions from electricity generation are already regulated by the EU Emissions Trading System (ETS), and in some countries, like Germany, the phaseout of coal will be accompanied by cancellation of emissions allowances. In this paper we examine the consequences of phasing out coal for the broader economy, the electricity sector, and CO2 emissions. We show analytically how the welfare impacts for a phaseout region depend on i) whether and how allowances are canceled, ii) whether other countries join phaseout policies, and iii) terms-of-trade effects in the ETS market. Based on numerical simulations with a computable general equilibrium model for the European economy, we quantify the economic and environmental impacts of alternative phaseout scenarios, considering both unilateral and multilateral phaseout. We find that terms-of-trade effects in the ETS market play an important role in the welfare effects across EU member states. For Germany, coal phaseout combined with unilateral cancellation of allowances is found to be welfare-improving if German citizens value emissions reductions at 65 Euro per ton or more.
    Keywords: Coal phaseout; emissions trading; electricity market
    JEL: D61 F18 H23 Q54
    Date: 2020–06–30
  4. By: Ivo Bakota
    Abstract: This paper proposes a novel method to compute the simulation part of the Krusell-Smith (1997, 1998) algorithm when agents can trade in more than one asset (for example, capital and bonds). The Krusell-Smith algorithm is used to solve general equilibrium models with both aggregate and uninsurable idiosyncratic risk, and can be used to solve bounded-rationality equilibria and to approximate rational expectations equilibria. When applied to a model with more than one financial asset, the standard algorithm has to impose equilibrium for each additional asset (that is, find the market-clearing price) in every simulated period. This procedure entails root-finding for each period, which is computationally very expensive. I show that it is possible to avoid this root-finding by not imposing equilibrium each period, but instead simulating the model without market clearing. The method updates the law of motion for asset prices by using Newton-like methods (Broyden's method) on the simulated excess demand, instead of imposing equilibrium for each period and running regressions on the clearing prices. Since the method avoids root-finding for each simulated time period, it leads to a significant reduction in computation time. In the example model, the proposed version of the algorithm leads to a 32% decrease in computation time, even when measured conservatively. This method could be especially useful for computing asset pricing models (for example, models with risky and safe assets) with both aggregate and uninsurable idiosyncratic risk, since methods that use linearization in the neighborhood of the aggregate steady state are considered less accurate than global solution methods for these particular types of models.
    Keywords: portfolio choice; heterogeneous agents; Krusell-Smith;
    JEL: E44 G12 C63
    Date: 2020–09
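    The paper's central trick, replacing per-period market clearing with a Broyden update on simulated excess demand, can be illustrated in one dimension, where Broyden's method reduces to the secant method. The excess-demand function below is a made-up toy, not the paper's model:

    ```python
    def excess_demand(price):
        """Toy aggregate excess demand; the market clears at price = 1."""
        return 1.0 / price - 1.0

    def broyden_1d(f, x0, x1, tol=1e-10, max_iter=50):
        """One-dimensional Broyden iteration (the secant method): no
        derivatives are evaluated, only successive function values."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if abs(f1) < tol:
                break
            # secant step: approximate the Jacobian from the last two points
            x0, f0, x1 = x1, f1, x1 - f1 * (x1 - x0) / (f1 - f0)
            f1 = f(x1)
        return x1

    p_star = broyden_1d(excess_demand, 0.5, 2.0)
    print(p_star)  # converges to 1.0
    ```

    In the full algorithm the analogue of the price is the coefficient vector of the law of motion for asset prices, and the analogue of `excess_demand` is excess demand averaged over the simulated path, so root-finding happens once per simulation rather than once per period.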
  5. By: Xuejin Zuo; Xiujian Peng; Xin Yang; Philip Adams; Meifeng Wang
    Abstract: China's population is rapidly ageing because of sustained low fertility and increasing life expectancy. At the end of 2019, people aged 65 and older accounted for 12.6 percent of the total population, compared to around seven percent in 2000, and the share is projected to reach 31 percent by 2050. Rapid ageing poses a major challenge to sustainable growth. The Chinese government is considering increasing the retirement age as a remedy to the challenge of population ageing. Using a dynamic general equilibrium model of the Chinese economy, this paper explores the implications of raising the retirement age for economic growth and pension sustainability in China over the period 2020 to 2100. In the baseline scenario, we assume that China maintains its current retirement age. The simulation results reveal that growth in the labour force would turn negative because of population ageing. Thus, China has to rely on technology improvement and capital stock increases to support its economic growth. Without reforming the current pension system, China's pension account will accumulate huge debts. The debt plus the interest obligation will put high pressure on the general government budget. By the end of this century, the general government budget deficit will reach 22 percent of GDP. In the policy scenario, we assume that China will gradually increase the retirement age from 58 to 65 years starting in 2020. The simulation results show that increasing the retirement age is a powerful policy in the short to medium term. It will boost China's economic growth and reduce the pension fund deficit significantly, because it will not only increase the labour force but also reduce the number of pensioners by delaying their access to the pension fund. However, the effectiveness of the policy depends on how much the labour force participation rate for people aged 58 to 65 can be increased.
    Keywords: Population ageing, retirement age, labour force participation, pension, economic growth, CGE model
    JEL: J11 J26 C68
    Date: 2020–04
  6. By: Marc-Aurèle Divernois (EPFL; Swiss Finance Institute)
    Abstract: This paper proposes a machine learning approach to estimate physical forward default intensities. Default probabilities are computed using artificial neural networks to estimate the intensities of the inhomogeneous Poisson processes governing the default process. The major contribution relative to previous literature is to allow the estimation of non-linear forward intensities by using neural networks instead of classical maximum likelihood estimation. The model specification allows easy replication of previous literature under a linearity assumption and shows the improvement that can be achieved.
    Keywords: Bankruptcy, Credit Risk, Default, Machine Learning, Neural Networks, Doubly Stochastic, Forward Poisson Intensities
    JEL: C22 C23 C53 C58 G33 G34
    Date: 2020–07
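    The mapping from forward intensities to default probabilities follows directly from the inhomogeneous-Poisson assumption; the sketch below uses invented numbers, and in the paper the intensities themselves come from a neural network:

    ```python
    import math

    def default_probabilities(forward_intensities, dt=1.0):
        """Convert piecewise-constant forward default intensities into
        per-period unconditional default probabilities: the firm must
        survive all earlier periods and then default in the current one."""
        probs, survival = [], 1.0
        for lam in forward_intensities:
            period_survival = math.exp(-lam * dt)
            probs.append(survival * (1.0 - period_survival))  # default now
            survival *= period_survival                       # survive onward
        return probs

    probs = default_probabilities([0.01, 0.02, 0.03])
    print(probs)
    ```

    By construction the probabilities plus the terminal survival probability sum to one, which is a useful sanity check on any estimated intensity curve.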
  7. By: Feng Shenghao; Philip Adams; Zhang Keyu; Peng Xiujian; Yang Jun
    Abstract: This study uses a Computable General Equilibrium (CGE) model to quantify the economic implications of the proposed Global Energy Interconnection (GEI) electricity system. Enhancements to the model for this study include: (a) a detailed and up-to-date electricity database; (b) a new fuel-factor nesting structure; (c) re-estimated values for the constant elasticity of substitution (CES) parameters between fossil fuel power generation and non-fossil fuel power generation; (d) a base-case (covering the years 2011 to 2050) consistent with the New Policy Scenario outlined in the World Energy Outlook 2018; and (e) the stylized characteristics of the operation of the GEI network. Modelling results suggest that, by 2050, compared to the base-case: (1) the GEI network will increase world GDP by 0.33 per cent; (2) all regions will benefit from GEI development; (3) world output of coal, oil and gas will fall by 1.4, 0.2 and 0.9 per cent, respectively; (4) the shares of renewable energy in total electricity and total primary energy will increase by 4.3 and 2.9 percentage points; and (5) global CO2 emissions will fall by 0.72 per cent.
    Keywords: GEI (global energy interconnection), CGE (computable general equilibrium), nesting structure, CES (constant elasticity of substitution), economic impacts
    JEL: C68 F17 Q43
    Date: 2020–09
  8. By: Bernadett Aradi; G\'abor Petneh\'azi; J\'ozsef G\'all
    Abstract: Volatility is a natural risk measure in finance as it quantifies the variation of stock prices. A frequently considered problem in mathematical finance is to forecast different estimates of volatility. What makes deep learning methods promising for volatility prediction is the fact that stock price returns satisfy some common properties, referred to as 'stylized facts'. Also, the amount of available data can be large, favoring the application of neural networks. We used 10 years of daily prices for hundreds of frequently traded stocks and compared different CNN architectures: some networks use only the considered stock, while another construction trains on many more series, excluding the considered stocks. Essentially, this is an application of transfer learning, and its performance turns out to be much better in terms of prediction error. We also compare our dilated causal CNNs to the classical ARIMA method using an automatic model selection procedure.
    Date: 2020–09
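    The dilated causal convolutions underlying the authors' CNNs can be illustrated without a deep-learning library; the sketch below shows only the convolution operation (in practice the kernel weights are learned, and many such layers are stacked with growing dilation):

    ```python
    def dilated_causal_conv(series, kernel, dilation):
        """Causal 1-D convolution: the output at time t uses only inputs
        at t, t - d, t - 2d, ... so no future information leaks in."""
        out = []
        for t in range(len(series)):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = t - j * dilation
                if idx >= 0:          # indices before the series start are skipped
                    acc += w * series[idx]
            out.append(acc)
        return out

    x = [1, 2, 3, 4, 5, 6]
    # A difference kernel with dilation 2 computes x[t] - x[t-2].
    print(dilated_causal_conv(x, [1.0, -1.0], dilation=2))
    # prints [1.0, 2.0, 2.0, 2.0, 2.0, 2.0]
    ```

    Causality is what makes these networks usable for forecasting: the prediction for tomorrow's volatility never sees tomorrow's data.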
  9. By: Maarten Buis (University of Konstanz)
    Abstract: An Agent-Based Model (ABM) is a simulation in which agents that each follow simple rules interact with one another and thus produce an often surprising outcome at the macro level. The purpose of an ABM is to explore mechanisms through which the actions of individual agents add up to a macro outcome, by varying the rules that agents have to follow or varying with whom an agent can interact (for example, varying the network). These models have many applications, such as the study of neighborhood segregation or the adoption of new technologies. However, the application that is currently most topical is the spread of a disease. In this talk, I will give an introduction to implementing an ABM in Mata, going through the simple models I (a sociologist, not an epidemiologist) used to make sense of what is happening with the COVID-19 pandemic.
    Date: 2020–09–11
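    A minimal version of such a disease-spread ABM fits in a few lines. This sketch is in Python rather than Mata, and all parameters are invented for illustration:

    ```python
    import random

    def run_abm(n=500, init_infected=5, p_meet=0.01, p_transmit=0.3,
                p_recover=0.1, steps=100, seed=1):
        """Minimal agent-based S-I-R simulation. Each step, every infected
        agent meets a few random others and may transmit; infected agents
        may then recover. Returns the infected count per step."""
        random.seed(seed)
        state = ["I"] * init_infected + ["S"] * (n - init_infected)
        history = []
        for _ in range(steps):
            infected = [i for i, s in enumerate(state) if s == "I"]
            for i in infected:
                for j in random.sample(range(n), int(p_meet * n)):
                    if state[j] == "S" and random.random() < p_transmit:
                        state[j] = "I"
                if random.random() < p_recover:
                    state[i] = "R"
            history.append(state.count("I"))
        return history

    curve = run_abm()
    print(max(curve))  # peak number of simultaneously infected agents
    ```

    Varying `p_meet` plays the role of varying with whom agents can interact, which is exactly the kind of experiment the talk describes.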
  10. By: Darvay, Zsolt; Illés, Tibor; Rigó, Petra Renáta
    Abstract: We propose a new predictor-corrector (PC) interior-point algorithm (IPA) for solving linear complementarity problems (LCPs) with P_*(κ)-matrices. The introduced IPA uses a new type of algebraic equivalent transformation (AET) on the centering equations of the system defining the central path. This technique was introduced by Darvay et al. [21] for linear optimization. The search direction discussed in this paper can be derived from a positive-asymptotic kernel function using the function φ(t)=t^2 in the new type of AET. We prove that the IPA has O((1+4κ)√n log((3nμ^0)/ε)) iteration complexity, where κ is an upper bound on the handicap of the input matrix. To the best of our knowledge, this is the first PC IPA for P_*(κ)-LCPs based on this search direction.
    Keywords: Predictor-corrector interior-point algorithm, P_*(κ)-linear complementarity problem, new search direction, polynomial iteration complexity
    JEL: C61
    Date: 2020–09–14
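    In generic notation, the AET idea can be stated compactly: the centering condition defining the central path is rewritten by applying an invertible function φ to both sides before linearizing. This is a schematic sketch using the paper's φ(t) = t²; the exact scaling and notation in the paper may differ:

    ```latex
    xs = \mu e, \quad x, s > 0
    \qquad \Longleftrightarrow \qquad
    \varphi\!\left(\frac{xs}{\mu}\right) = \varphi(e),
    \qquad \varphi(t) = t^2 .
    ```

    Different choices of φ yield different Newton systems, and hence different search directions, for the same central path.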
  11. By: Falco J. Bargagli-Stoffi; Jan Niederreiter; Massimo Riccaboni
    Abstract: Thanks to the increasing availability of granular, yet high-dimensional, firm-level data, machine learning (ML) algorithms have been successfully applied to address multiple research questions related to firm dynamics. In particular, supervised learning (SL), the branch of ML dealing with the prediction of labelled outcomes, has been used to better predict firms' performance. In this contribution, we illustrate a series of SL approaches for prediction tasks relevant at different stages of the company life cycle. The stages we focus on are (i) startup and innovation, (ii) growth and performance of companies, and (iii) firms' exit from the market. First, we review SL implementations to predict successful startups and R&D projects. Next, we describe how SL tools can be used to analyze company growth and performance. Finally, we review SL applications to better forecast financial distress and company failure. In the concluding section, we extend the discussion of SL methods in the light of targeted policies, result interpretability, and causality.
    Date: 2020–09
  12. By: Papadopoulos, Georgios
    Abstract: The mechanism underlying banks' interest-rate-setting behaviour is a key element in the study of economic systems, with important policy implications tied to the potential of monetary and, more recently, macroprudential policies to affect the real economy. In the agent-based modelling literature, lending rate setting has so far been modelled in an ad hoc manner, based almost exclusively on theoretical grounds, with the specifics usually chosen in an arbitrary fashion. This study tries to empirically identify the mechanism that approximates the observed patterns of consumer credit interest rates within a data-driven, agent-based model (ABM). The analysis suggests that there is heterogeneity across countries, both in the rule itself and in its specific parameters, and that a simple, borrower-risk-only mechanism often adequately approximates the historical series. More broadly, the validation exercise shows that the model is able to replicate the dynamics of several variables of interest, thus providing a way to bring ABMs "close to the data".
    Keywords: Agent-based modelling, Lending rate mechanism, Consumer credit, Model validation, Rule discovery
    JEL: C63 E21 E27 E43
    Date: 2020–09–04
  13. By: Majid M. Al-Sadoon
    Abstract: This paper proposes computational methods for regularized solutions to linear rational expectations models. The algorithm allows for regularization cross-sectionally as well as across frequencies. The algorithm is illustrated by a variety of examples.
    Date: 2020–09
  14. By: Johnson, Justin Pappas; Rhodes, Andrew; Wildenbeest, Matthijs
    Abstract: Using both economic theory and Artificial Intelligence (AI) pricing algorithms, we investigate the ability of a platform to design its marketplace to promote competition, improve consumer surplus, and even raise its own profits. We allow sellers to use Q-learning algorithms (a common reinforcement-learning technique from the computer-science literature) to devise pricing strategies in a setting with repeated interactions, and consider the effect of steering policies that reward firms that cut prices with additional exposure to consumers. Overall, the evidence from our experiments suggests that platform design decisions can meaningfully benefit consumers even when algorithmic collusion might otherwise emerge, but that achieving these gains may require more than the simplest steering policies when algorithms value the future highly. We also find that policies that raise consumer surplus can raise the profits of the platform, depending on the platform’s revenue model. Finally, we document several learning challenges faced by the algorithms.
    Date: 2020–09–08
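    To make the sellers' algorithm concrete, here is a stateless, single-agent simplification of Q-learning over a discrete price grid. The demand curve, grid, and learning parameters are illustrative assumptions; the paper's sellers interact repeatedly and condition on rivals' prices:

    ```python
    import random

    def q_learning_pricing(prices, demand, cost=0.0, episodes=5000,
                           alpha=0.1, eps=0.2, seed=3):
        """Epsilon-greedy Q-learning where the state is trivial and the
        action is the posted price; Q-values converge to expected profit."""
        random.seed(seed)
        q = {p: 0.0 for p in prices}
        for _ in range(episodes):
            if random.random() < eps:
                p = random.choice(prices)   # explore a random price
            else:
                p = max(q, key=q.get)       # exploit the current estimate
            profit = (p - cost) * demand(p)
            q[p] += alpha * (profit - q[p])  # standard Q-value update
        return max(q, key=q.get)

    # With linear demand d(p) = 1 - p, the profit-maximizing price is 0.5.
    grid = [round(0.1 * k, 1) for k in range(1, 10)]
    best = q_learning_pricing(grid, lambda p: max(0.0, 1.0 - p))
    print(best)
    ```

    In the paper's experiments each seller runs such a learner against the others simultaneously, which is what makes collusive outcomes possible; steering effectively changes the demand each seller faces.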
  15. By: Martin Spielauer (WIFO); Thomas Horvath; Marian Fink
    Abstract: This paper introduces the microWELT model. Starting from its objectives, we discuss design choices, the model architecture and key features. microWELT provides a demographic projection tool reproducing Eurostat population projections but adding details such as education, intergenerational transmission of education, fertility by education, partnership patterns, and mortality differentials by education. The model integrates transfer flows as captured by the National Transfer Account (NTA) and National Time Transfer Account (NTTA) accounting framework and calculates a set of indicators based on NTA literature. Individual accounts allow the study of transfers over the whole life cycle by cohorts and between generations.
    Keywords: Microsimulation, Welfare State, Demographic Change, National Transfer Accounts
    Date: 2020–09–21
  16. By: Diego Zabaljauregui
    Abstract: The topics treated in this thesis are inherently two-fold. The first part considers the problem of a market maker optimally setting bid/ask quotes over a finite time horizon, to maximize her expected utility. The intensities of the orders she receives depend not only on the spreads she quotes, but also on unobservable factors modelled by a hidden Markov chain. This stochastic control problem under partial information is solved by means of stochastic filtering, control, and PDMP theory. The value function is characterized as the unique continuous viscosity solution of its dynamic programming equation and numerically compared with its full information counterpart. The optimal full information spreads are shown to be biased when the exact market regime is unknown, as the market maker needs to adjust for additional regime uncertainty in terms of PnL sensitivity and observable order flow volatility. The second part deals with numerically solving nonzero-sum stochastic impulse control games. These offer a realistic and far-reaching modelling framework, but the difficulty in solving such problems has hindered their proliferation. A policy-iteration-type solver is proposed to solve an underlying system of quasi-variational inequalities, and it is validated numerically with reassuring results. Eventually, the focus is put on games with a symmetric structure and an improved algorithm is put forward. A rigorous convergence analysis is undertaken with natural assumptions on the players' strategies, which admit graph-theoretic interpretations in the context of weakly chained diagonally dominant matrices. The algorithm is used to compute with high precision equilibrium payoffs and Nash equilibria of otherwise too challenging problems, and even some whose results go beyond the scope of currently available theory.
    Date: 2020–09
  17. By: Philippe van Kerm (University of Luxembourg; Luxembourg Institute of Socio-Economic Research)
    Abstract: This short talk discusses and illustrates the implementation of force-directed diagrams in Stata. Force-directed layouts use simple stochastic simulation algorithms to position the nodes and edges of a graph in a two-way plot. They can be used in a range of data visualisation applications, such as network visualisation or the representation of clustering and relationships among observations in the data. We will discuss implementation, examine some examples, and weigh the pros and cons of using Stata for producing such displays.
    Date: 2020–09–11
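    For intuition, a force-directed layout of the Fruchterman-Reingold type can be written in a few lines. This Python sketch (not the talk's Stata implementation, and with made-up constants) applies pairwise repulsion, spring attraction along edges, and a cooling schedule:

    ```python
    import math
    import random

    def force_layout(edges, n, iters=200, k=1.0, seed=7):
        """Crude force-directed layout: every node pair repels, every
        edge attracts, and a shrinking step cap ("cooling") lets the
        positions settle."""
        random.seed(seed)
        pos = [[random.random(), random.random()] for _ in range(n)]
        for step in range(iters):
            disp = [[0.0, 0.0] for _ in range(n)]
            for i in range(n):                      # pairwise repulsion
                for j in range(i + 1, n):
                    dx = pos[i][0] - pos[j][0]
                    dy = pos[i][1] - pos[j][1]
                    d = math.hypot(dx, dy) or 1e-9
                    f = k * k / (d * d)             # push apart, fading with distance
                    disp[i][0] += f * dx; disp[i][1] += f * dy
                    disp[j][0] -= f * dx; disp[j][1] -= f * dy
            for i, j in edges:                      # spring attraction along edges
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                disp[i][0] -= dx / k; disp[i][1] -= dy / k
                disp[j][0] += dx / k; disp[j][1] += dy / k
            t = 0.1 * (1 - step / iters)            # cooling: cap movement per step
            for i in range(n):
                d = math.hypot(*disp[i]) or 1e-9
                pos[i][0] += disp[i][0] / d * min(d, t)
                pos[i][1] += disp[i][1] / d * min(d, t)
        return pos

    # A 4-node path graph: the layout settles into an unfolded chain.
    layout = force_layout([(0, 1), (1, 2), (2, 3)], 4)
    ```

    The stochastic element is only in the random initial positions, which is why such diagrams look slightly different on every run unless the seed is fixed.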
  18. By: Jules H. van Binsbergen; Xiao Han; Alejandro Lopez-Lira
    Abstract: We use machine learning to construct a statistically optimal and unbiased benchmark for firms' earnings expectations. We show that analyst expectations are on average biased upwards, and that this bias exhibits substantial time-series and cross-sectional variation. On average, the bias increases in the forecast horizon, and analysts revise their expectations downwards as earnings announcement dates approach. We find that analysts' biases are associated with negative cross-sectional return predictability, and that the short legs of many anomalies consist of firms for which the analysts' forecasts are excessively optimistic relative to our benchmark. Managers of companies with the largest upward bias in earnings forecasts are more likely to issue stock.
    JEL: D22 D83 D84 G11 G12 G14 G31
    Date: 2020–09
  19. R < 1 as an Economic Constraint: Can We “Expand the Frontier” in the Fight Against COVID-19?
    By: Eric B. Budish (University of Chicago)
    Abstract: This note suggests that we view R < 1 as an economic constraint.
    Date: 2020

This nep-cmp issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.