nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒02‒08
seventeen papers chosen by



  1. The MEGA Regional General Equilibrium Model By Roberto Roson
  2. How to Identify Investor's types in real financial markets by means of agent based simulation By Filippo Neri
  3. Digital Innovation and its Potential Consequences: the Elasticity Augmenting Approach By Bertani, Filippo; Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
  4. Neural networks-based algorithms for stochastic control and PDEs in finance By Maximilien Germain; Huyên Pham; Xavier Warin
  5. On the typicality of the representative agent By Teglio, Andrea
  6. Deep Reinforcement Learning for Active High Frequency Trading By Antonio Briola; Jeremy Turiel; Riccardo Marcaccioli; Tomaso Aste
  7. Bias and Productivity in Humans and Machines By Bo Cowgill
  8. A Reinforcement Learning Based Encoder-Decoder Framework for Learning Stock Trading Rules By Mehran Taghian; Ahmad Asadi; Reza Safabakhsh
  9. Portfolio Optimization with 2D Relative-Attentional Gated Transformer By Tae Wan Kim; Matloob Khushi
  10. Forecasting Commodity Prices Using Long Short-Term Memory Neural Networks By Racine Ly; Fousseini Traore; Khadim Dia
  11. A comparative study of scoring systems by simulations By László Csató
  12. Prime locations By Gabriel M. Ahlfeldt; Thilo N. H. Albers; Kristian Behrens
  13. Visual Analytics approach for finding spatiotemporal patterns from COVID19 By Arunav Das
  14. Deep Reinforcement Learning with Function Properties in Mean Reversion Strategies By Sophia Gu
  15. Machine Learning and Causality: The Impact of Financial Crises on Growth By Andrew J Tiffin
  16. The impact of incorrect social information on collective wisdom in human groups By Bertrand Jayles; Ramon Escobedo; Stéphane Cezera; Adrien Blanchet; Tatsuya Kameda; Clément Sire; Guy Théraulaz
  17. Deep ReLU Network Expression Rates for Option Prices in high-dimensional, exponential Lévy models By Lukas Gonon; Christoph Schwab

  1. By: Roberto Roson (Department of Economics, Ca' Foscari University of Venice; Loyola University Andalusia)
    Abstract: This paper presents the structure, data sources, assumptions and simulation methods of the Modelo de Equilibrio General para Andalucía (MEGA), a regional CGE model designed for the analysis of the Andalusian economic structure, but which could also be applied to other regional economies. The document is intended to be a reference for simulation and assessment exercises based on this model. (A toy two-sector CGE sketch follows this entry.)
    Keywords: Computable General Equilibrium Models, Regional Economics, Numerical Simulations, Computational Economics
    JEL: C51 C68 D58 R13 R15
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2021:06&r=all
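    The paper itself documents a full regional CGE model; as a much smaller illustration of what "computable general equilibrium" means in code, the following is a minimal two-sector toy solved with SciPy. All functional forms and parameter values are invented here and are not those of MEGA.

```python
# A minimal two-sector Cobb-Douglas CGE toy: two firms, one household,
# two factors (labour, capital). All parameter values are invented.
import numpy as np
from scipy.optimize import fsolve

A = np.array([1.0, 1.2])      # sectoral total factor productivity
a = np.array([0.3, 0.6])      # capital cost shares
b = np.array([0.5, 0.5])      # household budget shares
Lbar, Kbar = 100.0, 50.0      # factor endowments

def unit_cost(w, r):
    # Cobb-Douglas unit cost functions, one per sector
    return (r / a) ** a * (w / (1 - a)) ** (1 - a) / A

def excess(z):
    p2, w, r = z
    p = np.array([1.0, p2])               # good 1 is the numeraire
    income = w * Lbar + r * Kbar
    x = b * income / p                    # consumer demands = sector outputs
    L_dem = (1 - a) * p * x / w           # labour demands (cost shares)
    zero_profit = p - unit_cost(w, r)     # perfect competition
    labour_mkt = L_dem.sum() - Lbar       # capital market clears by Walras' law
    return [zero_profit[0], zero_profit[1], labour_mkt]

p2, w, r = fsolve(excess, x0=[1.0, 1.0, 1.0])
print(f"relative price p2={p2:.3f}, wage w={w:.3f}, rental rate r={r:.3f}")
```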
  2. By: Filippo Neri
    Abstract: The paper proposes a computational adaptation of the principles underlying principal component analysis, combined with agent-based simulation, to produce a novel modelling methodology for financial time series and financial markets. The goal of the proposed methodology is to find a reduced set of investor models (agents) that is able to approximate or explain a target financial time series. As the computational testbed for the study, we choose the learning system L-FABS, which combines simulated annealing with agent-based simulation for approximating financial time series. We also comment on how L-FABS's architecture could exploit parallel computation to scale when dealing with massive agent simulations. Two experimental case studies showing the efficacy of the proposed methodology are reported. (A toy sketch of the annealing idea follows this entry.)
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.03127&r=all
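    As a hedged illustration of the mechanism the abstract describes (simulated annealing searching for agent weights that reproduce a target series), here is a self-contained toy. The three "investor type" signals and the cooling schedule are invented for the example and are not the actual L-FABS components.

```python
# Simulated annealing over agent weights so that a mix of stylized "investor
# type" signals approximates a target series. Signals and schedule are invented.
import numpy as np

rng = np.random.default_rng(1)
target = np.cumsum(rng.normal(0, 1, 500))         # stand-in for a price series

trend = np.gradient(target)                       # trend-follower signal
contrarian = -(target - np.convolve(target, np.ones(20) / 20, "same"))
level = target                                    # "fundamentalist" signal
X = np.vstack([trend, contrarian, level])

def loss(w):
    return np.mean((X.T @ w - target) ** 2)

w = rng.normal(size=3)
cur = loss(w)
best_w, best = w.copy(), cur
T = 1.0
for _ in range(5000):
    cand = w + rng.normal(0, 0.1, size=3)
    c = loss(cand)
    if c < cur or rng.random() < np.exp(-(c - cur) / T):   # Metropolis rule
        w, cur = cand, c
        if cur < best:
            best_w, best = w.copy(), cur
    T *= 0.999                                             # geometric cooling
print("approximation MSE:", round(best, 4))
```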
  3. By: Bertani, Filippo; Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
    Abstract: Digital technologies have undergone considerable development over the last thirty years, radically changing our economy and lives. In particular, the advent of new intangible technologies, represented by software, artificial intelligence and deep-learning algorithms, has deeply affected our production systems, from manufacturing to services, thanks also to further improvements in tangible computational assets. Investments in digital technologies have been increasing in most developed countries, raising the issue of forecasting the potential scenarios and consequences deriving from this new technological wave. The contribution of this paper is both theoretical and related to model design. First, we present a new production function based on the concept of organizational units. Then, we enrich the macroeconomic model Eurace by integrating this new function into the production processes, in order to investigate the potential effects of digital technology innovation at both the micro and macro level. (A CES-style sketch of the elasticity mechanism follows this entry.)
    Keywords: Elasticity of substitution, Elasticity augmenting approach, Digital transformation, Agent-based economics, Organizational unit.
    JEL: C63 O33
    Date: 2021–01–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105326&r=all
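    A standard CES production function is the simplest way to see what "elasticity augmenting" means in code; the sketch below is generic textbook CES, not the paper's organizational-unit function, and all numbers are made up.

```python
# CES production: sigma is the elasticity of substitution between inputs.
# Raising sigma (easier substitution) is a crude proxy for digitalization.
import numpy as np

def ces(K, L, alpha=0.4, A=1.0, sigma=0.8):
    """CES output with substitution elasticity sigma, rho = (sigma-1)/sigma."""
    if abs(sigma - 1.0) < 1e-9:
        return A * K ** alpha * L ** (1 - alpha)   # Cobb-Douglas limit
    rho = (sigma - 1.0) / sigma
    return A * (alpha * K ** rho + (1 - alpha) * L ** rho) ** (1.0 / rho)

K, L = 10.0, 50.0
for sigma in [0.5, 1.0, 2.0, 5.0]:
    print(f"sigma={sigma:4.1f}  Y={ces(K, L, sigma=sigma):8.3f}")
```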
  4. By: Maximilien Germain (LPSM (UMR 8001), Sorbonne Université, CNRS, Université de Paris; EDF R&D); Huyên Pham (LPSM (UMR 8001), Université Paris Diderot - Paris 7, Sorbonne Université, CNRS; FiME Lab, EDF R&D, CREST, Université Paris Dauphine-PSL); Xavier Warin (EDF R&D; FiME Lab, CREST, Université Paris Dauphine-PSL)
    Abstract: This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes, illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions. (A sketch of the PDE-residual idea follows this entry.)
    Date: 2021–01–19
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03115503&r=all
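    The survey covers several families of schemes (deep BSDE, policy iteration, and others); as a minimal, hedged stand-in for the shared idea of training a network against a PDE, here is a physics-informed-style fit of the 1D heat equation in PyTorch. This is a generic illustration, not one of the surveyed algorithms.

```python
# Train u(t, x) to satisfy u_t = u_xx with u(0, x) = sin(pi x) and zero
# boundary values, by minimizing squared PDE/initial/boundary residuals.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u(t, x):
    return net(torch.cat([t, x], dim=1))

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True)
    out = u(t, x)
    ones = torch.ones_like(out)
    u_t = torch.autograd.grad(out, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(out, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    pde = ((u_t - u_xx) ** 2).mean()                 # interior residual
    x0 = torch.rand(256, 1)
    init = ((u(torch.zeros_like(x0), x0) - torch.sin(math.pi * x0)) ** 2).mean()
    tb = torch.rand(128, 1)
    bdry = (u(tb, torch.zeros_like(tb)) ** 2 + u(tb, torch.ones_like(tb)) ** 2).mean()
    loss = pde + init + bdry
    opt.zero_grad(); loss.backward(); opt.step()
print("final residual loss:", float(loss))
```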
  5. By: Teglio, Andrea
    Abstract: The aim of this paper is to explore under which conditions a representative agent (RA) model is able to correctly approximate the output of a more realistic model based on the "true" assumption of many interacting agents. The starting point is the widespread Keynesian cross diagram, which is compared to an extended version that explicitly considers a multiplicity of interacting households and firms, and that collapses into the original model when the number of agents is one per type. Results show that the RA Keynesian cross-diagram model is not a good approximation of the extended model when (i) the network structure of the economy is not symmetric enough, e.g. firms have different sizes, or (ii) the rationality of agents is not high enough. When income inequality is introduced through capitalists, the representative-agent model is no longer a good approximation, even if the agents are rational. A fiscal policy that targets income redistribution improves the prediction of the RA model. In general, all features that increase overall rationality in the economy and decrease its heterogeneity tend to improve the performance of the RA approximation. (A toy many-household comparison follows this entry.)
    Keywords: macroeconomics; rationality; inequality; Keynesian cross-diagram; representative agent; agent-based models; networks; simulation; complex adaptive systems
    JEL: C63 E00 E12
    Date: 2020–10–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105407&r=all
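    A toy version of the paper's comparison can be run in a few lines: a representative-agent Keynesian cross against a many-household version with heterogeneous marginal propensities to consume (MPCs). The parameter choices are illustrative only, not the paper's calibration.

```python
# RA Keynesian cross vs. many households with heterogeneous MPCs.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
mpc = rng.uniform(0.4, 0.95, n)                 # heterogeneous MPCs
shares_equal = np.full(n, 1 / n)                # income spread evenly
shares_skewed = rng.pareto(1.5, n)              # heavy-tailed income shares
shares_skewed /= shares_skewed.sum()
investment, autonomous = 100.0, 50.0

def equilibrium_output(shares, rounds=2000):
    # fixed-point iteration on Y = autonomous + I + sum_i mpc_i * share_i * Y
    Y = 0.0
    for _ in range(rounds):
        Y = autonomous + investment + np.sum(mpc * shares * Y)
    return Y

Y_ra = (autonomous + investment) / (1 - mpc.mean())   # RA with average MPC
print("RA model:             ", round(Y_ra, 1))
print("hetero, equal shares: ", round(equilibrium_output(shares_equal), 1))
print("hetero, skewed shares:", round(equilibrium_output(shares_skewed), 1))
```

    With evenly spread income the heterogeneous economy coincides with the RA benchmark; under the skewed distribution a gap opens because a few large income recipients' MPCs dominate the effective multiplier, which is the flavour of the paper's result.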
  6. By: Antonio Briola; Jeremy Turiel; Riccardo Marcaccioli; Tomaso Aste
    Abstract: We introduce the first end-to-end Deep Reinforcement Learning based framework for active high frequency trading. We train DRL agents to trade one unit of Intel Corporation stock by employing the Proximal Policy Optimization algorithm. The training is performed on three contiguous months of high frequency Limit Order Book data. In order to maximise the signal-to-noise ratio in the training data, we compose the latter by selecting only the training samples with the largest price changes. The test is then carried out on the following month of data. Hyperparameters are tuned using the Sequential Model Based Optimization technique. We consider three different state characterizations, which differ in the LOB-based meta-features they include. The agents learn trading strategies that produce stable positive returns in spite of the highly stochastic and non-stationary environment, which is remarkable in itself. Analysing the agents' performance on the test data, we argue that the agents are able to build a dynamic representation of the underlying environment, highlighting the occasional regularities present in the data and exploiting them to create long-term profitable trading strategies. (A minimal PPO setup is sketched after this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.07107&r=all
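    A hedged sketch of the training setup: the paper's agents consume engineered LOB features, which are not public, so this toy replaces them with synthetic returns. It only shows how an off-the-shelf PPO implementation (stable-baselines3, assumed installed alongside gymnasium) is wired to a custom trading environment.

```python
# Minimal discrete-action trading environment plus off-the-shelf PPO.
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO

class ToyTradingEnv(gym.Env):
    """Hold -1/0/+1 units of one asset; reward is mark-to-market P&L."""
    def __init__(self, n_steps=500):
        super().__init__()
        self.n_steps = n_steps
        self.action_space = gym.spaces.Discrete(3)   # sell / flat / buy
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(5,))

    def _obs(self):
        return self.returns[self.t - 5:self.t].astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.returns = self.np_random.normal(0, 1e-3, self.n_steps + 5)
        self.t = 5
        return self._obs(), {}

    def step(self, action):
        position = action - 1                        # map {0,1,2} to -1/0/+1
        reward = position * self.returns[self.t]
        self.t += 1
        done = self.t >= self.n_steps + 5
        return self._obs(), reward, done, False, {}

model = PPO("MlpPolicy", ToyTradingEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```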
  7. By: Bo Cowgill (Columbia University)
    Abstract: Where should better learning technology (such as machine learning or AI) improve decisions? I develop a model of decision-making in which better learning technology is complementary with experimentation. Noisy, inconsistent decision-making introduces quasi-experimental variation into training datasets, which complements learning. The model makes heterogeneous predictions about when machine learning algorithms can improve on human biases. These algorithms can remove human biases exhibited in historical training data, but only if the human training decisions are sufficiently noisy; otherwise, the algorithms will codify or exacerbate existing biases. Algorithms need only a small amount of noise to correct biases that cause large productivity distortions. As the amount of noise increases, machine learning can correct both large and increasingly small productivity distortions. The theoretical conditions necessary to completely eliminate bias are extreme and unlikely to appear in real datasets. The model provides theoretical microfoundations for why learning from biased historical datasets may lead to a decrease (if not a full elimination) of bias, as has been documented in several empirical settings. The model also makes heterogeneous predictions about the use of human expertise in machine learning: expert-labeled training datasets may be suboptimal if experts are insufficiently noisy, as prior research suggests. I discuss implications for regulation, labor markets, and business strategy. (A toy Monte Carlo illustration follows this entry.)
    Keywords: machine learning, training data, decision algorithm, decision-making, human biases
    JEL: C44 C45 D80 O31 O33
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:upj:weupjo:19-309&r=all
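    The central mechanism, that noise in human decisions creates the quasi-experimental variation an algorithm needs, can be illustrated with a toy Monte Carlo. The stylization below is ours, not the paper's formal model: humans score candidates with a group bias plus noise, and the "informative" observations are hires that contradict the deterministic biased rule.

```python
# Count hires that would not have occurred under the noiseless biased rule:
# these carry the quasi-experimental variation that lets a learner see past bias.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
quality = rng.normal(size=n)             # true productivity
group = rng.integers(0, 2, size=n)       # protected attribute
bias = -0.5                              # humans penalize group 1

for noise_sd in (0.0, 0.25, 1.0):
    score = quality + bias * group + noise_sd * rng.normal(size=n)
    hired = score > 0
    deterministic = (quality + bias * group) > 0
    informative = hired & ~deterministic   # "mistakes" relative to the biased rule
    print(f"noise={noise_sd:4.2f}  informative share={informative.mean():.4f}")
```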
  8. By: Mehran Taghian; Ahmad Asadi; Reza Safabakhsh
    Abstract: A wide variety of deep reinforcement learning (DRL) models have recently been proposed to learn profitable investment strategies. The rules learned by these models outperform previous strategies, especially in high-frequency trading environments. However, it has been shown that the quality of the features extracted from a long sequence of raw instrument prices greatly affects the performance of the trading rules learned by these models. Employing a neural encoder-decoder structure to extract informative features from complex input time series has proved very effective in other popular tasks, such as neural machine translation and video captioning, in which models face a similar problem. The encoder-decoder framework extracts highly informative features from a long sequence of prices while learning how to generate outputs based on the extracted features. In this paper, a novel end-to-end model based on the neural encoder-decoder framework combined with DRL is proposed to learn single-instrument trading strategies from a long sequence of the instrument's raw prices. The proposed model consists of an encoder, a neural structure responsible for learning informative features from the input sequence, and a decoder, a DRL model responsible for learning profitable strategies based on the features extracted by the encoder. The parameters of the encoder and decoder are learned jointly, which enables the encoder to extract features fitted to the task of the decoder's DRL. In addition, the effects of different encoder structures and various forms of the input sequence on the performance of the learned strategies are investigated. Experimental results show that the proposed model outperforms other state-of-the-art models in highly dynamic environments. (The architectural idea is sketched after this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.03867&r=all
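    A hedged sketch of the architecture class the abstract describes: a recurrent encoder compressing a raw price window into a feature vector, jointly trained with a decoder head standing in for the DRL component. The GRU choice and layer sizes are assumptions, not the authors' specification.

```python
# Encoder (GRU over raw prices) + decoder head producing action values.
# Because both live in one module, their parameters are trained jointly.
import torch
import torch.nn as nn

class EncoderDecoderTrader(nn.Module):
    def __init__(self, hidden=64, n_actions=3):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Sequential(            # stands in for the DRL head
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),        # Q-values: sell / hold / buy
        )

    def forward(self, prices):                   # prices: (batch, window)
        x = prices.unsqueeze(-1)                 # -> (batch, window, 1)
        _, h = self.encoder(x)                   # h: (1, batch, hidden)
        return self.decoder(h.squeeze(0))        # -> (batch, n_actions)

q = EncoderDecoderTrader()
print(q(torch.randn(8, 128)).shape)              # torch.Size([8, 3])
```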
  9. By: Tae Wan Kim; Matloob Khushi
    Abstract: Portfolio optimization is one of the most actively researched problems in machine learning for finance. Many researchers have attempted to solve it with deep reinforcement learning, owing to its capacity to handle the properties of financial markets. However, most of these approaches can hardly be applied to real-world trading, since they ignore or drastically simplify the realistic constraints of transaction costs, which have a significantly negative impact on portfolio profitability. In our research, a conservative level of transaction fees and slippage is considered for a realistic experiment. To enhance performance under those constraints, we propose a novel Deterministic Policy Gradient with 2D Relative-attentional Gated Transformer (DPGRGT) model. By applying learnable relative positional embeddings along the time and asset axes, the model better captures the peculiar structure of financial data in the portfolio optimization domain. Gating layers and layer reordering are also employed for stable convergence of Transformers in reinforcement learning. In our experiment using 20 years of U.S. stock market data, our model outperforms baseline models, demonstrating its effectiveness. (The cost bookkeeping is sketched after this entry.)
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.03138&r=all
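    The transformer itself is involved, but the realistic-constraints point is easy to make concrete: the reward the agent sees should be portfolio return net of proportional fees and slippage on rebalancing turnover. The rates below are placeholders, not the paper's calibration.

```python
# One-period portfolio reward net of proportional transaction costs.
import numpy as np

def net_reward(w_old, w_new, asset_returns, fee=0.0025, slippage=0.001):
    """w_old, w_new: portfolio weights before/after rebalancing (sum to 1);
    asset_returns: simple returns of each asset over the period."""
    turnover = np.abs(w_new - w_old).sum()       # total traded fraction
    gross = float(w_new @ asset_returns)
    cost = (fee + slippage) * turnover
    return gross - cost

w_old = np.array([0.5, 0.5])
w_new = np.array([0.9, 0.1])                     # aggressive rebalance
print(net_reward(w_old, w_new, np.array([0.01, -0.002])))
```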
  10. By: Racine Ly; Fousseini Traore; Khadim Dia
    Abstract: This paper applies a recurrent neural network (RNN) method to forecast cotton and oil prices. We show how these new tools from machine learning, particularly Long Short-Term Memory (LSTM) models, complement traditional methods. Our results show that machine learning methods fit the data reasonably well but do not systematically outperform classical methods such as Autoregressive Integrated Moving Average (ARIMA) models in terms of out-of-sample forecasts. However, averaging the forecasts from the two types of models provides better results than either method alone. For cotton, the Root Mean Squared Error (RMSE) of the average forecast was 0.21 percent lower than the ARIMA's and 21.49 percent lower than the LSTM's. For oil, forecast averaging does not provide improvements in terms of RMSE. We suggest using a forecast averaging method and extending our analysis to a wide range of commodity prices. (See the averaging sketch after this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.03087&r=all
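    The forecast-averaging comparison is mechanically simple; the sketch below uses synthetic stand-ins for the ARIMA and LSTM forecast series (the paper's data are not reproduced here) to show the RMSE computation.

```python
# Average two forecast series and compare RMSEs; all series are synthetic.
import numpy as np

rng = np.random.default_rng(0)
actual = np.sin(np.linspace(0, 8, 200)) + rng.normal(0, 0.1, 200)
f_arima = actual + rng.normal(0, 0.20, 200)      # stand-in ARIMA forecast
f_lstm = actual + rng.normal(0, 0.22, 200)       # stand-in LSTM forecast
f_avg = 0.5 * (f_arima + f_lstm)                 # simple forecast averaging

def rmse(f):
    return np.sqrt(np.mean((f - actual) ** 2))

for name, f in [("ARIMA", f_arima), ("LSTM", f_lstm), ("average", f_avg)]:
    print(f"{name:8s} RMSE = {rmse(f):.4f}")
```

    When the two models' errors are weakly correlated, the averaged forecast's RMSE is lower than either input's, which is the paper's cotton finding in miniature.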
  11. By: László Csató
    Abstract: Scoring rules aggregate individual rankings by assigning points to each position in each ranking, such that the total sum of points provides the overall ranking of the alternatives. They are widely used in sports competitions consisting of multiple contests. We study the tradeoff between two risks in this setting: (1) the threat of an early clinch, when the title is secured before the last contest(s) of the competition take place; (2) the danger of winning the competition without finishing first in any contest. In particular, four historical points scoring systems of the Formula One World Championship are compared with the family of geometric scoring rules, which have favourable axiomatic properties. The former are found to be competitive or even better. The current scheme seems to be a reasonable compromise in optimising the above goals. Our results shed more light on the evolution of the Formula One points scoring systems and contribute to the issue of choosing the set of point values. (A compact simulation sketch follows this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.05744&r=all
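    The simulation design can be sketched compactly: draw noisy finishing orders from latent driver strengths, apply a points vector, and record how often the champion never wins a race. The strength model and the doubling geometric points vector below are illustrative assumptions, not the paper's exact setup.

```python
# Compare two points vectors on the "champion without a race win" risk.
import numpy as np

rng = np.random.default_rng(0)
n_drivers, n_races, n_sims = 10, 20, 2000
f1_points = np.array([25, 18, 15, 12, 10, 8, 6, 4, 2, 1])        # F1, 2010-2019
geometric = np.array([2.0 ** (n_drivers - 1 - i) for i in range(n_drivers)])

def champion_without_win(points):
    count = 0
    for _ in range(n_sims):
        strength = rng.normal(0, 1, n_drivers)   # correlates results across races
        totals = np.zeros(n_drivers)
        wins = np.zeros(n_drivers)
        for _ in range(n_races):
            order = np.argsort(-(strength + rng.normal(0, 2, n_drivers)))
            totals[order] += points              # points[0] to the race winner
            wins[order[0]] += 1
        count += wins[np.argmax(totals)] == 0
    return count / n_sims

for name, pts in [("F1 2010-2019", f1_points), ("geometric", geometric)]:
    print(name, champion_without_win(pts))
```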
  12. By: Gabriel M. Ahlfeldt; Thilo N. H. Albers; Kristian Behrens
    Abstract: We harness big data to detect prime locations - large clusters of knowledge-based tradable services - in 125 global cities and track changes in the within-city geography of prime service jobs over a century. Historically smaller cities that did not develop early public transit networks are less concentrated today and have prime locations farther from their historic cores. We rationalize these findings in an agent-based model that features extreme agglomeration, multiple equilibria, and path dependence. Both city size and public transit networks anchor city structure. Exploiting major disasters and using a novel instrument - subway potential - we provide causal evidence for these mechanisms and disentangle size- from transport network effects.
    Keywords: prime services, internal city structure, agent-based model, multiple equilibria and path dependence, transport networks, cities, economic geography
    JEL: R38 R52 R58
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:cep:cepdps:dp1725&r=all
  13. By: Arunav Das
    Abstract: The Bounce Back Loan scheme is amongst a number of UK business financial support schemes launched by the UK Government in 2020 amidst the pandemic lockdown. Through these schemes, struggling businesses were provided financial support to weather the economic slowdown caused by the lockdown; £43.5bn in loans had been issued as of 17 December 2020. However, with few checks performed before granting these loans, and the looming prospect of losses from write-offs of failed businesses and from fraud, this paper explores whether spatiotemporal modelling techniques, geospatial patterns and temporal analysis could aid the design of loan-granting criteria for such schemes. Applying a clustering and visual analytics framework to business demographics, survival rates and sector concentration reveals Inner and Outer London spatial patterns in historic business failures, and a reversal of those patterns under COVID-19, implying a sector influence on the spatial clusters. Combining an unsupervised clustering technique with multinomial logistic regression modelling on the research datasets, complemented by additional datasets on other support schemes, business structure and financial crime, is recommended for modelling business vulnerability to particular financial market or economic conditions. The limitations of clustering techniques for high-dimensional data are discussed, along with an applicable model for continuing the research in the next steps. (A scikit-learn sketch of the recommended pipeline follows this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.06476&r=all
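    A sketch of the recommended pipeline (unsupervised clustering feeding a multinomial logistic regression) with scikit-learn; all data here are synthetic placeholders, since the loan datasets are not public.

```python
# KMeans cluster labels used as extra features for a multinomial classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))     # stand-ins: survival rate, sector mix, ...
X = StandardScaler().fit_transform(X)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
dummies = np.eye(5)[clusters]      # one-hot encode cluster membership

y = rng.integers(0, 3, size=3000)  # synthetic status: repaid / arrears / default
features = np.column_stack([X, dummies])
clf = LogisticRegression(max_iter=500)   # lbfgs handles the multinomial case
clf.fit(features, y)
print("in-sample accuracy:", round(clf.score(features, y), 3))
```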
  14. By: Sophia Gu
    Abstract: With the recent advancement of Deep Reinforcement Learning in the gaming industry, we are curious whether the same technology would work as well for common quantitative financial problems. In this paper, we investigate whether an off-the-shelf library developed by OpenAI can be easily adapted to a mean reversion strategy. Moreover, we design and test whether we can obtain better performance by narrowing the function space that the agent needs to search. We achieve this by augmenting the reward function with a carefully chosen penalty term. (A sketch of such reward shaping follows this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.03418&r=all
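    The paper's device, augmenting the reward with a penalty to narrow the search space, is library-agnostic. The penalty form below is an invented example of such shaping, not the author's exact term.

```python
# Shaped reward: P&L minus a penalty for positions that fight the signal.
def shaped_reward(pnl, position, spread_z, lam=0.1):
    """pnl: raw one-step profit; position: signed position;
    spread_z: z-score of the traded spread (mean-reversion signal);
    lam: penalty weight, a hyperparameter to tune."""
    # When the spread is high (z > 0) a mean-reversion agent should be short,
    # and vice versa; penalize positions on the wrong side of the signal.
    penalty = lam * max(0.0, position * spread_z)
    return pnl - penalty

print(shaped_reward(pnl=0.5, position=+1, spread_z=2.0))   # penalized
print(shaped_reward(pnl=0.5, position=-1, spread_z=2.0))   # not penalized
```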
  15. By: Andrew J Tiffin
    Abstract: Machine learning tools are well known for their success in prediction. But prediction is not causation, and causal discovery is at the core of most questions concerning economic policy. Recently, however, the literature has focused more on issues of causality. This paper gently introduces some leading work in this area, using a concrete example: assessing the impact of a hypothetical banking crisis on a country's growth. By enabling consideration of a rich set of potential nonlinearities, and by allowing individually-tailored policy assessments, machine learning can provide an invaluable complement to the skill set of economists within the Fund and beyond. (A sketch of one such technique follows this entry.)
    Keywords: Machine learning; Financial crises; Exchange rate flexibility; machine-learning literature; instrumental-variables approach; treatment variable; confidence interval; ML technique
    Date: 2019–11–01
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2019/228&r=all
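    One leading technique in this literature is double/debiased machine learning: flexibly partial out the controls from both the outcome (growth) and the treatment (crisis), then regress residual on residual. The sketch below uses random forests and synthetic data (the true effect is set to -1.5), with out-of-fold predictions supplying the required cross-fitting; it illustrates the general method, not the paper's specific application.

```python
# Double/debiased ML via residual-on-residual regression with cross-fitting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                                 # country controls
crisis = (X[:, 0] + rng.normal(size=n) > 1).astype(float)    # treatment
growth = 2.0 - 1.5 * crisis + X[:, 0] + rng.normal(size=n)   # true effect: -1.5

# out-of-fold predictions give the cross-fitting that DML requires
g_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, growth, cv=5)
t_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, crisis, cv=5)

growth_res = growth - g_hat
crisis_res = crisis - t_hat
effect = (crisis_res @ growth_res) / (crisis_res @ crisis_res)
print(f"estimated crisis effect on growth: {effect:.2f}")    # close to -1.5
```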
  16. By: Bertrand Jayles; Ramon Escobedo; Stéphane Cezera (TSE - Toulouse School of Economics, Université Toulouse 1 Capitole, EHESS, CNRS, INRAE); Adrien Blanchet (IAST - Institute for Advanced Study in Toulouse); Tatsuya Kameda; Clément Sire; Guy Théraulaz (IAST - Institute for Advanced Study in Toulouse)
    Abstract: A major problem resulting from the massive use of social media is the potential spread of incorrect information. Yet very few studies have investigated the impact of incorrect information on individual and collective decisions. We performed experiments in which participants had to estimate a series of quantities, before and after receiving social information. Unbeknownst to them, we controlled the degree of inaccuracy of the social information through 'virtual influencers', who provided some incorrect information. We find that a large proportion of individuals only partially follow the social information, thus resisting incorrect information. Moreover, incorrect information can help improve group performance more than correct information, when it counteracts a human underestimation bias. We then design a computational model whose predictions are in good agreement with the empirical data and which sheds light on the mechanisms underlying our results. Beyond these main findings, we demonstrate that the dispersion of estimates varies widely between quantities, and must therefore be considered when normalizing and aggregating estimates of quantities that differ greatly in nature. Overall, our results suggest that incorrect information does not necessarily impair the collective wisdom of groups, and can even be used to dampen the negative effects of known cognitive biases. (A toy aggregation sketch follows this entry.)
    Keywords: human collective behaviour, incorrect information, social influence, computational modelling, wisdom of crowds
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03019820&r=all
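    The headline effect (incorrect social information countering an underestimation bias) can be mimicked in a toy model where participants combine private and social estimates on a log scale with weight alpha on the social input. All parameters are invented for illustration.

```python
# Toy wisdom-of-crowds: biased private estimates, partially revised toward
# social information that may itself be deliberately incorrect.
import numpy as np

rng = np.random.default_rng(0)
truth = 1000.0
n, alpha = 100, 0.5                      # group size; weight on social info

# humans underestimate large quantities: low by a factor ~0.5, log-normal
priv = truth * np.exp(rng.normal(np.log(0.5), 0.8, n))

def group_error(social_value):
    revised = np.exp((1 - alpha) * np.log(priv) + alpha * np.log(social_value))
    return abs(np.median(revised) - truth) / truth

print("no social info:      ", round(abs(np.median(priv) - truth) / truth, 3))
print("correct info (truth):", round(group_error(truth), 3))
print("incorrect info (2x): ", round(group_error(2 * truth), 3))
```

    In this toy, social information inflated to twice the truth offsets the group's factor-two underestimation almost exactly, while truthful information only partially reduces the error.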
  17. By: Lukas Gonon; Christoph Schwab
    Abstract: We study the expression rates of deep neural networks (DNNs for short) for option prices written on baskets of $d$ risky assets, whose log-returns are modelled by a multivariate Lévy process with general correlation structure of jumps. We establish sufficient conditions on the characteristic triplet of the Lévy process $X$ that ensure $\varepsilon$ error of DNN expressed option prices with DNNs of size that grows polynomially with respect to $\mathcal{O}(\varepsilon^{-1})$, and with constants implied in $\mathcal{O}(\cdot)$ which grow polynomially with respect to $d$, thereby overcoming the curse of dimensionality and justifying the use of DNNs in financial modelling of large baskets in markets with jumps. In addition, we exploit parabolic smoothing of Kolmogorov partial integro-differential equations for certain multivariate Lévy processes to present alternative architectures of ReLU DNNs that provide $\varepsilon$ expression error in DNN size $\mathcal{O}(|\log(\varepsilon)|^a)$ with exponent $a \sim d$, however with constants implied in $\mathcal{O}(\cdot)$ growing exponentially with respect to $d$. Under stronger, dimension-uniform non-degeneracy conditions on the Lévy symbol, we obtain algebraic expression rates of option prices in exponential Lévy models which are free from the curse of dimensionality. In this case the ReLU DNN expression rates of prices depend on certain sparsity conditions on the characteristic Lévy triplet. We indicate several consequences and possible extensions of the present results. (An empirical toy fit follows this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.11897&r=all
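    Expression-rate theorems concern approximation power rather than training, but a small empirical cousin is easy to set up: fit a ReLU network to Monte Carlo prices of a basket call. To keep the sketch short it uses independent Black-Scholes dynamics rather than a general exponential Lévy model; all sizes are arbitrary.

```python
# Fit a ReLU MLP to Monte Carlo basket-call prices as a function of spot prices.
import numpy as np
import torch

torch.manual_seed(0); rng = np.random.default_rng(0)
d, K, sigma, T = 5, 1.0, 0.2, 1.0          # basket size, strike, vol, maturity

def mc_price(s0, n_paths=4000):
    # basket call under independent Black-Scholes dynamics, zero rate
    z = rng.normal(size=(n_paths, d))
    sT = s0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
    return np.maximum(sT.mean(axis=1) - K, 0.0).mean()

s0s = rng.uniform(0.5, 1.5, size=(512, d))
prices = np.array([mc_price(s) for s in s0s])

net = torch.nn.Sequential(                  # ReLU network, as in the paper's class
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X = torch.tensor(s0s, dtype=torch.float32)
y = torch.tensor(prices, dtype=torch.float32).unsqueeze(1)
for step in range(2000):
    loss = torch.mean((net(X) - y) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
print("fit MSE:", float(loss))
```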

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.