
on Computational Economics 
By:  Fraunholz, Christoph; Kraft, Emil; Keles, Dogan; Fichtner, Wolf 
Abstract:  Machine learning and agent-based modeling are two popular tools in energy research. In this article, we propose an innovative methodology that combines these methods. For this purpose, we develop an electricity price forecasting technique using artificial neural networks and integrate the novel approach into the established agent-based electricity market simulation model PowerACE. In a case study covering ten interconnected European countries and a time horizon from 2020 until 2050 at hourly resolution, we benchmark the new forecasting approach against a simpler linear regression model as well as a naive forecast. Contrary to most of the related literature, we also evaluate the statistical significance of the superiority of one approach over another by conducting Diebold-Mariano hypothesis tests. Our major results can be summarized as follows. Firstly, in contrast to real-world electricity price forecasts, we find the naive approach to perform very poorly when deployed model-endogenously. Secondly, although the linear regression performs reasonably well, it is outperformed by the neural network approach. Thirdly, the use of an additional classifier for outlier handling substantially improves the forecasting accuracy, particularly for the linear regression approach. Finally, the choice of the model-endogenous forecasting method has a clear impact on simulated electricity prices. This latter finding is particularly crucial since these prices are a major result of electricity market models. 
Keywords:  Agent-based simulation, Artificial neural network, Electricity price forecasting, Electricity market 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitiip:45&r=all 
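By way of illustration, the Diebold-Mariano comparison mentioned in the abstract can be sketched in a few lines of Python. The squared-error loss, the plain asymptotic variant of the test, and the synthetic error series are all illustrative assumptions here, not the paper's exact specification:

```python
import math
import random

def diebold_mariano(e1, e2):
    """DM statistic for squared-error loss: d_t = e1_t**2 - e2_t**2,
    DM = mean(d) / sqrt(var(d) / T), asymptotically standard normal."""
    d = [a * a - b * b for a, b in zip(e1, e2)]
    T = len(d)
    mean_d = sum(d) / T
    var_d = sum((x - mean_d) ** 2 for x in d) / (T - 1)
    return mean_d / math.sqrt(var_d / T)

random.seed(0)
# by construction, forecast 1 has systematically larger errors than forecast 2
e1 = [random.gauss(0, 2.0) for _ in range(500)]
e2 = [random.gauss(0, 1.0) for _ in range(500)]
dm = diebold_mariano(e1, e2)
```

A large positive DM statistic (beyond the usual 1.96 normal critical value) indicates that the second forecast is significantly more accurate under the chosen loss.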
By:  Hao, Peng; Wei, Zhensong; Bai, Zhengwei; Barth, Matthew 
Abstract:  Connected and automated vehicle technology could bring about transformative reductions in traffic congestion, greenhouse gas emissions, air pollution, and energy consumption. Connected and automated vehicles (CAVs) can directly communicate with other vehicles and road infrastructure and use sensing technology and artificial intelligence to respond to traffic conditions and optimize fuel consumption. An eco-approach and departure application for connected and automated vehicles has been widely studied as a means of calculating the most energy-efficient speed profile and guiding a vehicle through signalized intersections without unnecessary stops and starts. Simulations using this application on roads with fixed-timing traffic signals have produced 12% reductions in fuel consumption and greenhouse gas emissions. But real-world traffic conditions are much more complex: uncertainties and the limited sensing range of automated vehicles create challenges for determining the most energy-efficient speed. To account for this uncertainty, researchers from the University of California, Riverside, propose a prediction-based, adaptive connected eco-driving strategy. The proposed strategy analyzes the possible upcoming traffic and signal scenarios based on historical data and live information collected from communication and sensing devices, and then chooses the most energy-efficient speed. This approach can be extended to accommodate different vehicle powertrains and types of roadway infrastructure. This research brief summarizes findings from the research and provides research implications. View the NCST Project Webpage 
Keywords:  Engineering, Autonomous vehicles, Connected vehicles, Eco-driving, Energy consumption, Machine learning, Microsimulation, Signalized intersections, Vehicle mix 
Date:  2020–09–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:itsdav:qt0bd7g3cz&r=all 
By:  Böhringer, Christoph (Department of Business Administration, Economics and Law, University of Oldenburg); Rosendahl, Knut Einar (School of Economics and Business, Norwegian University of Life Sciences) 
Abstract:  Several European countries have decided to phase out coal power generation. Emissions from electricity generation are already regulated by the EU Emissions Trading System (ETS), and in some countries, like Germany, the phase-out of coal will be accompanied by cancellation of emissions allowances. In this paper we examine the consequences of phasing out coal for the broader economy, the electricity sector, and CO2 emissions. We show analytically how the welfare impacts for a phase-out region depend on i) whether and how allowances are canceled, ii) whether other countries join phase-out policies, and iii) terms-of-trade effects in the ETS market. Based on numerical simulations with a computable general equilibrium model for the European economy, we quantify the economic and environmental impacts of alternative phase-out scenarios, considering both unilateral and multilateral phase-out. We find that terms-of-trade effects in the ETS market play an important role for the welfare effects across EU member states. For Germany, coal phase-out combined with unilateral cancellation of allowances is found to be welfare-improving if German citizens value emissions reductions at 65 Euro per ton or more. 
Keywords:  Coal phase-out; emissions trading; electricity market 
JEL:  D61 F18 H23 Q54 
Date:  2020–06–30 
URL:  http://d.repec.org/n?u=RePEc:hhs:nlsseb:2020_005&r=all 
By:  Ivo Bakota 
Abstract:  This paper proposes a novel method to compute the simulation part of the Krusell-Smith (1997, 1998) algorithm when the agents can trade in more than one asset (for example, capital and bonds). The Krusell-Smith algorithm is used to solve general equilibrium models with both aggregate and uninsurable idiosyncratic risk, and can be used to solve bounded rationality equilibria and to approximate rational expectations equilibria. When applied to solve a model with more than one financial asset, in the simulation, the standard algorithm has to impose equilibrium for each additional asset (find the market-clearing price) for each period simulated. This procedure entails root-finding for each period, which is computationally very expensive. I show that it is possible to avoid this root-finding by not imposing equilibrium each period, but instead simulating the model without market clearing. The method updates the law of motion for asset prices by using Newton-like methods (Broyden's method) on the simulated excess demand, instead of imposing equilibrium for each period and running regressions on the clearing prices. Since the method avoids root-finding for each time period simulated, it leads to a significant reduction in computation time. In the example model, the proposed version of the algorithm leads to a 32% decrease in computational time, even when measured conservatively. This method could be especially useful in computing asset pricing models (for example, models with risky and safe assets) with both aggregate and uninsurable idiosyncratic risk, since methods which use linearization in the neighborhood of the aggregate steady state are considered to be less accurate than global solution methods for these particular types of models. 
Keywords:  portfolio choice; heterogeneous agents; Krusell-Smith 
JEL:  E44 G12 C63 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:cer:papers:wp669&r=all 
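The root-finding idea the paper seeks to avoid can be illustrated in miniature. In one dimension, Broyden's method reduces to the secant method: the Jacobian is approximated from successive function evaluations rather than computed analytically. The demand and supply curves below are illustrative assumptions, not the paper's model:

```python
# Find the market-clearing price p at which excess demand D(p) - S(p) = 0,
# without computing derivatives.

def excess_demand(p):
    demand = 10.0 / p      # stylized downward-sloping demand
    supply = 2.0 * p       # stylized upward-sloping supply
    return demand - supply

def broyden_scalar(f, x0, x1, tol=1e-10, max_iter=50):
    """One-dimensional Broyden (secant) update: approximate the slope
    from successive evaluations instead of differentiating."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        slope = (f1 - f0) / (x1 - x0)   # secant approximation to f'
        x0, f0 = x1, f1
        x1 = x1 - f1 / slope            # quasi-Newton step
        f1 = f(x1)
    return x1

p_star = broyden_scalar(excess_demand, 1.0, 2.0)
```

Here the market clears where 10/p = 2p, i.e. at p = sqrt(5); the paper's point is that such updates can be run on simulated excess demand across periods, rather than solving for the clearing price inside every period.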
By:  Xuejin Zuo; Xiujian Peng; Xin Yang; Philip Adams; Meifeng Wang 
Abstract:  China's population is rapidly ageing because of sustained low fertility and increasing life expectancy. At the end of 2019, people aged 65 and older accounted for 12.6 percent of the total population, compared to around seven percent in 2000, and this share will continue to increase, to 31 percent in 2050. Rapid ageing poses a major challenge to sustainable growth. The Chinese government is considering increasing the retirement age as a remedy to the challenge of population ageing. Using a dynamic general equilibrium model of the Chinese economy, this paper explores the implications of raising the retirement age for economic growth and pension sustainability in China over the period 2020 to 2100. In the baseline scenario, we assume that China maintains its current retirement age. The simulation results reveal that growth in the labour force would turn negative because of population ageing. Thus China has to rely on technology improvement and capital stock increases to support its economic growth. Without reforming the current pension system, China's pension account will accumulate huge debts. The debt plus the interest obligation will put high pressure on the general government budget. By the end of this century, the general government budget deficit will reach 22 percent of GDP. In the policy scenario, we assume that China will gradually increase the retirement age from 58 to 65 years old starting from 2020. The simulation results show that increasing the retirement age is a powerful policy in the short to medium term. It will boost China's economic growth and reduce the pension fund deficit significantly, because it will not only increase the labour force but also reduce the number of pensioners by delaying their access to the pension fund. However, the effectiveness of the policy depends on how much the labour force participation rate for people aged 58 to 65 can be increased. 
Keywords:  Population ageing, retirement age, labour force participation, pension, economic growth, CGE model 
JEL:  J11 J26 C68 
Date:  2020–04 
URL:  http://d.repec.org/n?u=RePEc:cop:wpaper:g303&r=all 
By:  MarcAurèle Divernois (EPFL; Swiss Finance Institute) 
Abstract:  This paper proposes a machine learning approach to estimate physical forward default intensities. Default probabilities are computed using artificial neural networks to estimate the intensities of the inhomogeneous Poisson processes governing the default process. The major contribution relative to previous literature is to allow the estimation of nonlinear forward intensities by using neural networks instead of classical maximum likelihood estimation. The model specification allows an easy replication of the previous literature under a linear assumption and shows the improvement that can be achieved. 
Keywords:  Bankruptcy, Credit Risk, Default, Machine Learning, Neural Networks, Doubly Stochastic, Forward Poisson Intensities 
JEL:  C22 C23 C53 C58 G33 G34 
Date:  2020–07 
URL:  http://d.repec.org/n?u=RePEc:chf:rpseri:rp2079&r=all 
By:  Feng Shenghao; Philip Adams; Zhang Keyu; Peng Xiujian; Yang Jun 
Abstract:  This study uses a Computable General Equilibrium (CGE) model to quantify the economic implications of the proposed Global Electricity Interconnection (GEI) electricity system. Enhancements to the model for this study include: (a) a detailed and up-to-date electricity database; (b) a new fuel-factor nesting structure; (c) re-estimated values for the constant elasticity of substitution (CES) parameters between fossil fuel power generation and non-fossil fuel power generation; (d) a base-case (for the years 2011-2050) consistent with the New Policy Scenario outlined in the World Energy Outlook 2018; and (e) the stylized characteristics of the operation of the GEI network. Modelling results suggest that, by 2050, compared to the base-case: (1) the GEI network will increase world GDP by 0.33 per cent; (2) all regions will benefit from GEI development; (3) world output of coal, oil and gas will fall by 1.4, 0.2 and 0.9 per cent, respectively; (4) the shares of renewable energy in total electricity and total primary energy will increase by 4.3 and 2.9 percentage points; and (5) global CO2 emissions will fall by 0.72 per cent. 
Keywords:  GEI (global energy interconnection), CGE (computable general equilibrium), nesting structure, CES (constant elasticity of substitution), economic impacts 
JEL:  C68 F17 Q43 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:cop:wpaper:g307&r=all 
By:  Bernadett Aradi; Gábor Petneházi; József Gáll 
Abstract:  Volatility is a natural risk measure in finance, as it quantifies the variation of stock prices. A frequently considered problem in mathematical finance is to forecast different estimates of volatility. What makes deep learning methods promising for the prediction of volatility is the fact that stock price returns satisfy some common properties, referred to as `stylized facts'. Also, the amount of data used can be high, favoring the application of neural networks. We used 10 years of daily prices for hundreds of frequently traded stocks and compared different CNN architectures: some networks use only the considered stock, but we also tried a construction which, for training, uses many more series, though not the considered stocks. Essentially, this is an application of transfer learning, and its performance turns out to be much better in terms of prediction error. We also compare our dilated causal CNNs to the classical ARIMA method using an automatic model selection procedure. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.05508&r=all 
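A dilated causal convolution, the building block of the architectures the paper compares, can be sketched in plain Python. The filter weights and input series below are illustrative; in a real model the filters are learned:

```python
def causal_dilated_conv(x, w, dilation):
    """Causal: y[t] combines x[t], x[t - dilation], x[t - 2*dilation], ...
    only, so the output at time t never looks into the future."""
    y = [0.0] * len(x)
    for t in range(len(x)):
        for i, wi in enumerate(w):
            lag = t - i * dilation
            if lag >= 0:
                y[t] += wi * x[lag]
    return y

series = [float(v) for v in range(8)]
# stacking layers with dilations 1, 2, 4 grows the receptive field
# exponentially: with kernel size 2 it covers 1 + (1 + 2 + 4) = 8 points
h = causal_dilated_conv(series, [0.5, 0.5], dilation=1)
h = causal_dilated_conv(h, [0.5, 0.5], dilation=2)
h = causal_dilated_conv(h, [0.5, 0.5], dilation=4)
```

The exponential growth of the receptive field with depth is what makes dilated causal CNNs attractive for long daily-return histories.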
By:  Maarten Buis (University of Konstanz) 
Abstract:  An Agent-Based Model (ABM) is a simulation in which agents that each follow simple rules interact with one another and thus produce an often surprising outcome at the macro level. The purpose of an ABM is to explore the mechanisms through which the actions of individual agents add up to a macro outcome, by varying the rules that agents follow or varying with whom an agent can interact (for example, varying the network). These models have many applications, like the study of segregation of neighborhoods or the adoption of new technologies. However, the application that is currently most topical is the spread of a disease. In this talk, I will give an introduction to implementing an ABM in Mata, by going through the simple models I (a sociologist, not an epidemiologist) used to make sense of what is happening with the COVID-19 pandemic. 
Date:  2020–09–11 
URL:  http://d.repec.org/n?u=RePEc:boc:usug20:03&r=all 
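A minimal agent-based epidemic model of the kind described in the talk might look as follows (the talk itself implements the analogous logic in Mata; all parameter values here are illustrative assumptions):

```python
import random

def run_abm(n_agents=500, n_days=100, contacts=5, p_transmit=0.05,
            recovery_days=10, seed=1):
    random.seed(seed)
    state = ['S'] * n_agents      # S: susceptible, I: infected, R: recovered
    days_sick = [0] * n_agents
    state[0] = 'I'                # one initial case
    for _ in range(n_days):
        infected = [i for i, s in enumerate(state) if s == 'I']
        if not infected:
            break                 # epidemic has died out
        for i in infected:
            # each infected agent meets a few randomly chosen others
            for j in random.sample(range(n_agents), contacts):
                if state[j] == 'S' and random.random() < p_transmit:
                    state[j] = 'I'
            days_sick[i] += 1
            if days_sick[i] >= recovery_days:
                state[i] = 'R'
    return state.count('S'), state.count('I'), state.count('R')

susceptible, infected, recovered = run_abm()
```

Varying `contacts` or `p_transmit` is exactly the kind of rule variation an ABM is built for: the macro outcome (epidemic size) emerges from the micro rules.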
By:  Darvay, Zsolt; Illés, Tibor; Rigó, Petra Renáta 
Abstract:  We propose a new predictor-corrector (PC) interior-point algorithm (IPA) for solving linear complementarity problems (LCPs) with P_*(κ)-matrices. The introduced IPA uses a new type of algebraic equivalent transformation (AET) on the centering equations of the system defining the central path. The new technique was introduced by Darvay et al. [21] for linear optimization. The search direction discussed in this paper can be derived from a positive-asymptotic kernel function using the function φ(t)=t^2 in the new type of AET. We prove that the IPA has O((1+4κ)√n log((3nμ^0)/ε)) iteration complexity, where κ is an upper bound of the handicap of the input matrix. To the best of our knowledge, this is the first PC IPA for P_*(κ)-LCPs which is based on this search direction. 
Keywords:  Predictor-corrector interior-point algorithm, P_*(κ)-linear complementarity problem, new search direction, polynomial iteration complexity 
JEL:  C61 
Date:  2020–09–14 
URL:  http://d.repec.org/n?u=RePEc:cvh:coecwp:2020/03&r=all 
By:  Falco J. Bargagli-Stoffi; Jan Niederreiter; Massimo Riccaboni 
Abstract:  Thanks to the increasing availability of granular, yet high-dimensional, firm-level data, machine learning (ML) algorithms have been successfully applied to address multiple research questions related to firm dynamics. Especially supervised learning (SL), the branch of ML dealing with the prediction of labelled outcomes, has been used to better predict firms' performance. In this contribution, we illustrate a series of SL approaches to be used for prediction tasks relevant at different stages of the company life cycle. The stages we focus on are (i) startup and innovation, (ii) growth and performance of companies, and (iii) firms' exit from the market. First, we review SL implementations to predict successful startups and R&D projects. Next, we describe how SL tools can be used to analyze company growth and performance. Finally, we review SL applications to better forecast financial distress and company failure. In the concluding section, we extend the discussion of SL methods in the light of targeted policies, result interpretability, and causality. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.06413&r=all 
By:  Papadopoulos, Georgios 
Abstract:  The mechanism underlying banks' interest rate setting behaviour is an important element in the study of economic systems, with important policy implications associated with the potential of monetary and, more recently, macroprudential policies to affect the real economy. In the agent-based modelling literature, lending rate setting has so far been modelled in an ad-hoc manner, based almost exclusively on theoretical grounds, with the specifics usually chosen in an arbitrary fashion. This study tries to empirically identify the mechanism that approximates the observed patterns of consumer credit interest rates within a data-driven, agent-based model (ABM). The analysis suggests that there is heterogeneity across countries, both in terms of the rule itself as well as its specific parameters, and that often a simple, borrower-risk-only mechanism adequately approximates the historical series. More broadly, the validation exercise shows that the model is able to replicate the dynamics of several variables of interest, thus providing a way to bring ABMs "close to the data". 
Keywords:  Agent-based modelling, Lending rate mechanism, Consumer credit, Model validation, Rule discovery 
JEL:  C63 E21 E27 E43 
Date:  2020–09–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:102749&r=all 
By:  Majid M. Al-Sadoon 
Abstract:  This paper proposes computational methods for regularized solutions to linear rational expectations models. The algorithm allows for regularization cross-sectionally as well as across frequencies, and is illustrated by a variety of examples. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.05875&r=all 
By:  Johnson, Justin Pappas; Rhodes, Andrew; Wildenbeest, Matthijs 
Abstract:  Using both economic theory and Artificial Intelligence (AI) pricing algorithms, we investigate the ability of a platform to design its marketplace to promote competition, improve consumer surplus, and even raise its own profits. We allow sellers to use Q-learning algorithms (a common reinforcement-learning technique from the computer-science literature) to devise pricing strategies in a setting with repeated interactions, and consider the effect of steering policies that reward firms that cut prices with additional exposure to consumers. Overall, the evidence from our experiments suggests that platform design decisions can meaningfully benefit consumers, even when algorithmic collusion might otherwise emerge, but that achieving these gains may require more than the simplest steering policies when algorithms value the future highly. We also find that policies that raise consumer surplus can raise the profits of the platform, depending on the platform's revenue model. Finally, we document several learning challenges faced by the algorithms. 
Date:  2020–09–08 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:124696&r=all 
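The Q-learning setup can be sketched as follows: two sellers repeatedly pick prices from a small grid, observe last period's price pair as the state, and update a Q-table from realized profits. The demand rule, price grid, and learning parameters below are illustrative assumptions, not the paper's calibration:

```python
import itertools
import random

prices = [1.0, 1.5, 2.0]               # discrete price grid (illustrative)
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration

def profit(p_own, p_rival):
    # stylized demand: the cheaper firm serves the whole market, ties split it
    if p_own < p_rival:
        return p_own
    if p_own == p_rival:
        return p_own / 2
    return 0.0

random.seed(0)
states = list(itertools.product(range(len(prices)), repeat=2))
# one Q-table per firm: state (last period's price pair) -> value per action
Q = [{s: [0.0] * len(prices) for s in states} for _ in range(2)]
state = (0, 0)
for _ in range(20000):
    acts = []
    for firm in range(2):
        if random.random() < eps:
            acts.append(random.randrange(len(prices)))   # explore
        else:
            q = Q[firm][state]
            acts.append(q.index(max(q)))                 # exploit
    nxt = (acts[0], acts[1])
    for firm in range(2):
        reward = profit(prices[acts[firm]], prices[acts[1 - firm]])
        target = reward + gamma * max(Q[firm][nxt])      # Q-learning update
        Q[firm][state][acts[firm]] += alpha * (target - Q[firm][state][acts[firm]])
    state = nxt
```

A steering policy of the kind the paper studies would modify `profit` so that the lower-priced firm gains extra exposure, which is what shifts the algorithms' learned equilibria.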
By:  Martin Spielauer (WIFO); Thomas Horvath; Marian Fink 
Abstract:  This paper introduces the microWELT model. Starting from its objectives, we discuss design choices, the model architecture and key features. microWELT provides a demographic projection tool reproducing Eurostat population projections but adding details such as education, intergenerational transmission of education, fertility by education, partnership patterns, and mortality differentials by education. The model integrates transfer flows as captured by the National Transfer Account (NTA) and National Time Transfer Account (NTTA) accounting framework and calculates a set of indicators based on NTA literature. Individual accounts allow the study of transfers over the whole life cycle by cohorts and between generations. 
Keywords:  Microsimulation, Welfare State, Demographic Change, National Transfer Accounts 
Date:  2020–09–21 
URL:  http://d.repec.org/n?u=RePEc:wfo:wpaper:y:2020:i:609&r=all 
By:  Diego Zabaljauregui 
Abstract:  The topics treated in this thesis are inherently twofold. The first part considers the problem of a market maker optimally setting bid/ask quotes over a finite time horizon, to maximize her expected utility. The intensities of the orders she receives depend not only on the spreads she quotes, but also on unobservable factors modelled by a hidden Markov chain. This stochastic control problem under partial information is solved by means of stochastic filtering, stochastic control and PDMP theory. The value function is characterized as the unique continuous viscosity solution of its dynamic programming equation and numerically compared with its full-information counterpart. The optimal full-information spreads are shown to be biased when the exact market regime is unknown, as the market maker needs to adjust for additional regime uncertainty in terms of PnL sensitivity and observable order flow volatility. The second part deals with numerically solving nonzero-sum stochastic impulse control games. These offer a realistic and far-reaching modelling framework, but the difficulty of solving such problems has hindered their proliferation. A policy-iteration-type solver is proposed to solve an underlying system of quasi-variational inequalities, and it is validated numerically with reassuring results. Eventually, the focus is put on games with a symmetric structure, and an improved algorithm is put forward. A rigorous convergence analysis is undertaken with natural assumptions on the players' strategies, which admit graph-theoretic interpretations in the context of weakly chained diagonally dominant matrices. The algorithm is used to compute with high precision equilibrium payoffs and Nash equilibria of otherwise too-challenging problems, and even some for which results go beyond the scope of the currently available theory. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.06521&r=all 
By:  Philippe van Kerm (University of Luxembourg; Luxembourg Institute of SocioEconomic Research) 
Abstract:  This short talk discusses and illustrates the implementation of force-directed diagrams in Stata. Force-directed layouts use simple stochastic simulation algorithms to position nodes and vertices in a two-way plot. They can be used in a range of data visualisation applications, such as network visualisation, or the representation of clustering and relationships among observations in the data. We will discuss implementation, examine some examples, and discuss the pros and cons of using Stata for producing such displays. 
Date:  2020–09–11 
URL:  http://d.repec.org/n?u=RePEc:boc:usug20:09&r=all 
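The underlying algorithm can be sketched in a few lines (shown here in Python rather than Stata/Mata; the force constants and cooling schedule are illustrative, in the spirit of Fruchterman-Reingold-style layouts):

```python
import math
import random

def force_layout(n_nodes, edges, iters=200, seed=42):
    """Iteratively position nodes: all pairs repel, connected nodes attract."""
    random.seed(seed)
    pos = [[random.random(), random.random()] for _ in range(n_nodes)]
    k = 1.0 / math.sqrt(n_nodes)        # ideal edge length
    for step in range(iters):
        disp = [[0.0, 0.0] for _ in range(n_nodes)]
        # repulsion between every pair of nodes
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[i][0] += f * dx / d; disp[i][1] += f * dy / d
                disp[j][0] -= f * dx / d; disp[j][1] -= f * dy / d
        # spring-like attraction along edges
        for i, j in edges:
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[i][0] -= f * dx / d; disp[i][1] -= f * dy / d
            disp[j][0] += f * dx / d; disp[j][1] += f * dy / d
        # move each node a capped step along its net force (cooling schedule)
        temp = 0.1 * (1 - step / iters)
        for i in range(n_nodes):
            d = math.hypot(*disp[i]) or 1e-9
            step_len = min(d, temp)
            pos[i][0] += disp[i][0] / d * step_len
            pos[i][1] += disp[i][1] / d * step_len
    return pos

pos = force_layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The stochastic element is the random initial placement; the iteration then settles toward a layout where edge lengths are roughly uniform and unconnected nodes are pushed apart.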
By:  Jules H. van Binsbergen; Xiao Han; Alejandro LopezLira 
Abstract:  We use machine learning to construct a statistically optimal and unbiased benchmark for firms' earnings expectations. We show that analyst expectations are on average biased upwards, and that this bias exhibits substantial time-series and cross-sectional variation. On average, the bias increases in the forecast horizon, and analysts revise their expectations downwards as earnings announcement dates approach. We find that analysts' biases are associated with negative cross-sectional return predictability, and the short legs of many anomalies consist of firms for which the analysts' forecasts are excessively optimistic relative to our benchmark. Managers of companies with the most upward-biased earnings forecasts are more likely to issue stocks. 
JEL:  D22 D83 D84 G11 G12 G14 G31 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:27843&r=all 
By:  Eric B. Budish (University of Chicago) 
Abstract:  This note suggests that we view R 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:bfi:wpaper:202031&r=all 