
on Computational Economics 
By:  Joel Dyer; Patrick Cannon; J. Doyne Farmer; Sebastian Schmon 
Abstract:  Simulation models, in particular agent-based models, are gaining popularity in economics. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviours of complex systems, gives them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. Several recent works have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions, and/or (b) typically require many hundreds of thousands of simulations. These qualities make them unsuitable for large-scale simulations in economics and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. We present benchmarking experiments in which we demonstrate that neural-network-based black-box methods provide state-of-the-art parameter inference for economic simulation models, and crucially are compatible with generic multivariate time-series data. In addition, we suggest appropriate assessment criteria for future benchmarking of approximate Bayesian inference procedures for economic simulation models.
Date:  2022–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2202.00625&r= 
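As a point of reference for the likelihood-free methods this abstract benchmarks against, the classic rejection-ABC baseline can be sketched in a few lines. This is a minimal illustration, not the paper's neural method: the AR(1) "simulator", the uniform prior range, the single summary statistic, and the acceptance tolerance are all illustrative choices standing in for a real agent-based model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy stand-in for an economic simulation model: an AR(1) process.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def summary(x):
    # One summary statistic: the lag-1 autocorrelation.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# "Observed" data generated at a true parameter we then try to recover.
theta_true = 0.6
s_obs = summary(simulate(theta_true))

# Rejection ABC: draw from the prior, simulate, and keep the draws whose
# simulated summary lands within a tolerance of the observed summary.
prior_draws = rng.uniform(-0.9, 0.9, size=2000)
accepted = [th for th in prior_draws
            if abs(summary(simulate(th)) - s_obs) < 0.1]
posterior_mean = float(np.mean(accepted))
```

The 2,000 simulations needed even for this one-parameter toy hint at why the abstract's complaint about simulation budgets bites, and why amortized neural estimators are attractive for expensive models.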
By:  Klockmann, Victor; von Schenk, Alicia; Villeval, Marie-Claire
Abstract:  With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual's decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through lower responsibility for one's and others' fate. 
Keywords:  Artificial Intelligence, Big Data, Pivotality, Ethics, Experiment
JEL:  C49 C91 D10 D63 D64 O33 
Date:  2022 
URL:  http://d.repec.org/n?u=RePEc:zbw:safewp:336&r= 
By:  Babii, Andrii; Ghysels, Eric (Université catholique de Louvain, LIDAM/CORE, Belgium); Striaukas, Jonas 
Abstract:  This paper introduces structured machine learning regressions for high-dimensional time series data potentially sampled at different frequencies. The sparse-group LASSO estimator can take advantage of such time series data structures and outperforms the unstructured LASSO. We establish oracle inequalities for the sparse-group LASSO estimator within a framework that allows for mixing processes and recognizes that financial and macroeconomic data may have heavier-than-exponential tails. An empirical application to nowcasting US GDP growth indicates that the estimator performs favorably compared to other alternatives and that text data can be a useful addition to more traditional numerical data. Our methodology is implemented in the R package midasml, available from CRAN.
Keywords:  high-dimensional time series, fat tails, tau-mixing, sparse-group LASSO, mixed frequency data, textual news data
Date:  2021–01–01 
URL:  http://d.repec.org/n?u=RePEc:ajf:louvlf:2021004&r= 
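The sparse-group LASSO penalty the abstract refers to combines an elementwise L1 term with a groupwise L2 term, and can be minimized by proximal gradient descent. The following is a generic numpy sketch of that proximal step on simulated data (group layout, penalty weights, and problem sizes are illustrative; the paper's actual implementation is the R package midasml):

```python
import numpy as np

def soft(z, t):
    # Elementwise soft-thresholding operator.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sglasso(beta, groups, lam1, lam2, step):
    # Prox of lam1*||b||_1 + lam2*sum_g ||b_g||_2:
    # soft-threshold elementwise, then shrink each group's norm.
    b = soft(beta, step * lam1)
    out = np.zeros_like(b)
    for g in groups:
        norm = np.linalg.norm(b[g])
        if norm > 0:
            out[g] = max(0.0, 1.0 - step * lam2 / norm) * b[g]
    return out

# Simulated regression where only the first group of coefficients is active.
rng = np.random.default_rng(1)
n, p = 100, 12
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:4] = [1.5, -2.0, 1.0, 0.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Proximal gradient descent on (1/n)*least squares + sparse-group penalty.
beta = np.zeros(p)
step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
for _ in range(500):
    grad = X.T @ (X @ beta - y) / n
    beta = prox_sglasso(beta - step * grad, groups, lam1=0.05, lam2=0.1, step=step)
```

The group structure is what lets the estimator exploit mixed-frequency layouts: in a MIDAS setting, each group would collect the high-frequency lags of one regressor.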
By:  Mohsen Asgari; Seyed Hossein Khasteh 
Abstract:  Deep Reinforcement Learning solutions have been applied to different control problems with promising results. In this research work we have applied Proximal Policy Optimization, Soft Actor-Critic and Generative Adversarial Imitation Learning to the strategy design problem of three cryptocurrency markets. Our input data include price data and technical indicators. We have implemented a Gym environment based on cryptocurrency markets to be used with the algorithms. Our test results on unseen data show great potential for this approach in providing investors with an expert system to exploit the market and gain profit. Our highest gain over an unseen 66-day span is 4,850 US dollars per 10,000 US dollars invested. We also discuss how a specific hyperparameter in the environment design can be used to adjust risk in the generated strategies.
Date:  2022–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2201.05906&r= 
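A Gym-style market environment of the kind the abstract describes boils down to a `reset`/`step` interface over a price series. The sketch below is a hypothetical minimal version (the state, action space, and reward are illustrative choices, not the paper's environment), written without the gym dependency so it is self-contained:

```python
import numpy as np

class CryptoTradingEnv:
    """Minimal Gym-style trading environment over a fixed price path."""

    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=float)
        self.reset()

    def reset(self):
        self.t = 0
        self.position = 0  # -1 short, 0 flat, +1 long
        return self._obs()

    def _obs(self):
        # Observation: latest log-return and the current position.
        r = 0.0 if self.t == 0 else np.log(self.prices[self.t] / self.prices[self.t - 1])
        return np.array([r, self.position])

    def step(self, action):
        # Discrete actions: 0 = go short, 1 = go flat, 2 = go long.
        self.position = action - 1
        self.t += 1
        # Reward: position times the realized log-return over the step.
        reward = self.position * np.log(self.prices[self.t] / self.prices[self.t - 1])
        done = self.t == len(self.prices) - 1
        return self._obs(), reward, done, {}
```

With this interface, an always-long policy's cumulative reward telescopes to the buy-and-hold log-return, a useful sanity check before handing the environment to PPO or SAC. A risk-adjustment hyperparameter like the one the abstract mentions would typically enter through the reward definition.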
By:  Jiaxiong Yao; Mr. Yunhui Zhao 
Abstract:  To reach the global net-zero goal, the level of carbon emissions has to fall substantially, at speeds rarely seen in history, highlighting the need to identify structural breaks in carbon emission patterns and understand the forces that could bring about such breaks. In this paper, we identify and analyze structural breaks using machine learning methodologies. We find that downward trend shifts in carbon emissions since 1965 are rare, and most trend shifts are associated with non-climate structural factors (such as a change in the economic structure) rather than with climate policies. While we do not explicitly analyze the optimal mix between climate and non-climate policies, our findings highlight the importance of non-climate policies in reducing carbon emissions. On the methodology front, our paper contributes to the climate toolbox by identifying country-specific structural breaks in emissions for the top 20 emitters based on a user-friendly machine-learning tool and by interpreting the results using a decomposition of carbon emissions (the Kaya Identity).
Keywords:  Climate Policies, Carbon Emissions, Machine Learning, Structural Break, Kaya Identity 
Date:  2022–01–21 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:2022/009&r= 
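The Kaya Identity the abstract uses factors emissions as CO2 = POP x (GDP/POP) x (E/GDP) x (CO2/E), so emission growth decomposes additively in logs across population, income per capita, energy intensity, and carbon intensity. A worked two-period decomposition (with made-up illustrative numbers, not data from the paper):

```python
import numpy as np

# Two periods of (illustrative) country data.
pop = np.array([60.0, 61.2])       # population
gdp = np.array([1000.0, 1040.0])   # GDP
energy = np.array([500.0, 505.0])  # primary energy use
co2 = np.array([400.0, 396.0])     # carbon emissions

# Kaya factors: population, income per capita, energy intensity, carbon intensity.
factors = np.array([pop, gdp / pop, energy / gdp, co2 / energy])

# Log-growth contribution of each factor; these sum exactly to emission growth.
contrib = np.log(factors[:, 1] / factors[:, 0])
total_growth = np.log(co2[1] / co2[0])
```

Because the identity holds exactly, the four contributions sum to total emission growth by construction, which is what makes it a clean lens for attributing a detected trend break to structural versus policy-driven factors.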
By:  Isaiah Andrews; Drew Fudenberg; Annie Liang; Chaofeng Wu 
Abstract:  Whether a model's performance on a given domain can be extrapolated to other settings depends on whether it has learned generalizable structure. We formulate this as the problem of theory transfer, and provide a tractable way to measure a theory's transferability. We derive confidence intervals for transferability that ensure coverage in finite samples, and apply our approach to evaluate the transferability of predictions of certainty equivalents across different subject pools. We find that models motivated by economic theory perform more reliably than black-box machine learning methods at this transfer prediction task.
Date:  2022–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2202.04796&r= 
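The basic exercise behind transfer prediction can be illustrated by fitting a model on one "subject pool" and interval-estimating its loss on another. The sketch below uses a simple normal-approximation confidence interval on the transfer loss, which is not the paper's finite-sample construction; the two synthetic pools and the linear model are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_pool(n, slope, noise):
    # A synthetic "subject pool": same structural slope, different noise level.
    x = rng.uniform(-1, 1, n)
    return x, slope * x + noise * rng.normal(size=n)

xa, ya = make_pool(300, slope=2.0, noise=0.1)  # pool A (training domain)
xb, yb = make_pool(300, slope=2.0, noise=0.3)  # pool B (transfer domain)

# Fit on pool A only, then measure squared loss on pool B.
coefs = np.polyfit(xa, ya, 1)
losses = (np.polyval(coefs, xb) - yb) ** 2

# Normal-approximation 95% CI for the expected transfer loss.
mean_loss = losses.mean()
half_width = 1.96 * losses.std(ddof=1) / np.sqrt(len(losses))
ci = (mean_loss - half_width, mean_loss + half_width)
```

When the structure genuinely transfers, as here, the interval concentrates near the transfer domain's irreducible noise; a model that had overfit pool-specific quirks would show a markedly wider gap between in-domain and transfer loss.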
By:  Yen Thuan Trinh; Bernard Hanzon 
Abstract:  The binomial tree method and the Monte Carlo (MC) method are popular methods for solving option pricing problems. However, in both methods there is a trade-off between accuracy and speed of computation, both of which are important in applications. We introduce a new method, the MC-Tree method, that combines the MC method with the binomial tree method. It employs a mixing distribution on the tree parameters, which are restricted to give a prescribed mean and variance. For the family of mixing densities proposed here, the corresponding compound densities of the tree outcomes at the final time are obtained. Ideally the compound density would be (after a logarithmic transformation of the asset prices) Gaussian. Using the fact that, in general, when mean and variance are prescribed, the maximum entropy distribution is Gaussian, we look for mixing densities for which the corresponding compound density has a high entropy level. The compound densities that we obtain are not exactly Gaussian, but have entropy values close to the maximum possible Gaussian entropy. Furthermore, we introduce techniques to correct for the deviation from the ideal Gaussian pricing measure. One of these (the distribution-correction technique) ensures that expectations calculated with the method are taken with respect to the desired Gaussian measure. The other (the bias-correction technique) ensures that the probability distributions used are risk-neutral in each of the trees. Apart from option pricing, we apply our techniques to develop an algorithm for the calculation of the Credit Valuation Adjustment (CVA) to the price of an American option. Numerical examples of the workings of the MC-Tree approach are provided, showing good performance in terms of accuracy and computational speed.
Date:  2022–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2202.00785&r= 
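The building block the MC-Tree method randomizes over is the classical binomial tree. For readers wanting the baseline, here is a standard Cox-Ross-Rubinstein pricer for a European call (the textbook scheme, not the paper's mixed-tree construction; the numbers in the sanity check are illustrative):

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial tree price of a European call.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))  # up factor
    d = 1.0 / u                          # down factor
    q = (math.exp(r * dt) - d) / (u - d) # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Payoffs at maturity for each number of up-moves j, then roll back.
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# At-the-money example: converges to the Black-Scholes price (about 10.45).
price = crr_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=400)
```

The accuracy/speed trade-off the abstract opens with is visible here directly: the tree's error shrinks only as O(1/n) while the cost grows as O(n^2), which is the gap the mixing-distribution idea aims to close.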
By:  Francesco Bova; Avi Goldfarb; Roger G. Melko 
Abstract:  A quantum computer exhibits a quantum advantage when it can perform a calculation that a classical computer is unable to complete. It follows that a company with a quantum computer would be a monopolist in the market for solving such a calculation if its only competitor was a company with a classical computer. Conversely, economic outcomes are unclear in settings where quantum computers do not exhibit a quantum advantage. We model a duopoly where a quantum computing company competes against a classical computing company. The model features an asymmetric variable cost structure between the two companies and the potential for an asymmetric fixed cost structure, where each firm can invest in scaling its hardware to expand its respective market. We find that even if: 1) the companies can complete identical calculations, and thus there is no quantum advantage, and 2) it is more expensive to scale the quantum computer, the quantum computing company can not only be more profitable but also invest more in market creation. The results suggest that quantum computers may not need to display a quantum advantage to be able to generate a quantum economic advantage for the companies that develop them. 
JEL:  L63 M15 O3 
Date:  2022–02 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:29724&r= 
By:  Kohei Hayashi; Kei Nakagawa 
Abstract:  In this paper, we focus on the generation of time-series data using neural networks. It is often the case that input time-series data, especially data taken from real financial markets, are irregularly sampled, and their noise structure is more complicated than the i.i.d. type. To generate time series with such properties, we propose fSDE-Net: a neural fractional Stochastic Differential Equation Network. It generalizes the neural SDE model by using fractional Brownian motion with Hurst index larger than one half, which exhibits the long-term memory property. We derive the solver of fSDE-Net and theoretically analyze the existence and uniqueness of the solution to fSDE-Net. Our experiments demonstrate that the fSDE-Net model can replicate distributional properties well.
Date:  2022–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2201.05974&r= 
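The driving noise in fSDE-Net is fractional Brownian motion, whose covariance kernel is cov(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2; for Hurst index H > 1/2 its increments are positively correlated, which is the long-memory property the abstract exploits. A small exact sampler via the Cholesky factor of that kernel (grid size and parameters illustrative; this is a standard simulation routine, not the paper's SDE solver):

```python
import numpy as np

def fbm(n, H, T=1.0, seed=0):
    # Exact simulation of fractional Brownian motion on an n-point grid
    # over [0, T], via the Cholesky factor of its covariance kernel.
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal(n)
    # Prepend the starting value B_H(0) = 0.
    return np.concatenate([[0.0], L @ z])
```

For H = 1/2 the kernel reduces to min(t, s) and the routine generates ordinary Brownian motion; the O(n^3) Cholesky cost is also why neural SDE solvers use recursive increment schemes rather than exact sampling on long paths.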
By:  Andreeva, Andriyana; Yolova, Galina 
Abstract:  The paper examines the question of building an ecosystem of trust in the use of artificial intelligence in the employment relationship. To this end, the relevant norms of labour legislation, both national and European, are analysed. Based on this analysis, summaries, conclusions and recommendations are made.
Keywords:  ecosystem of trust, artificial intelligence, employment relations, employer, employee
JEL:  K31 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:111726&r= 