nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒09‒12
ten papers chosen by
Stan Miles
Thompson Rivers University

  1. Transformer-Based Deep Learning Model for Stock Price Prediction: A Case Study on Bangladesh Stock Market By Tashreef Muhammad; Anika Bintee Aftab; Md. Mainul Ahsan; Maishameem Meherin Muhu; Muhammad Ibrahim; Shahidul Islam Khan; Mohammad Shafiul Alam
  2. Landscape-scale effects of farmers’ restoration decision making and investments in central Malawi: an agent-based modeling approach By Djenontin, Ida N.S.; Ligmann-Zielinska, Arika; Zulu, Leo C.
  3. Can a Machine Correct Option Pricing Models? By Caio Almeida; Jianqing Fan; Gustavo Freire; Francesca Tang
  4. GAM(L)A: An econometric model for interpretable machine learning By Sullivan Hué
  5. Deep Hedging: Continuous Reinforcement Learning for Hedging of General Portfolios across Multiple Risk Aversions By Phillip Murray; Ben Wood; Hans Buehler; Magnus Wiese; Mikko S. Pakkanen
  6. Quantitative Stock Investment by Routing Uncertainty-Aware Trading Experts: A Multi-Task Learning Approach By Shuo Sun; Rundong Wang; Bo An
  7. k-Means Clusterization and Machine Learning Prediction of European Most Cited Scientific Publications By Leogrande, Angelo; Costantiello, Alberto; Laureti, Lucio
  8. Classical and deep pricing for Path-dependent options in non-linear generalized affine models By Benedikt Geuchen; Katharina Oberpriller; Thorsten Schmidt
  9. Computing Bayes: From Then `Til Now By Gael M. Martin; David T. Frazier; Christian P. Robert
  10. Reducing Socio-Economic Inequality Policies: Exploring the Possibilities of Simulation Using CGE Modelling By Chtouki Zakaria; Deriouch Kaoutar

  1. By: Tashreef Muhammad; Anika Bintee Aftab; Md. Mainul Ahsan; Maishameem Meherin Muhu; Muhammad Ibrahim; Shahidul Islam Khan; Mohammad Shafiul Alam
    Abstract: In the modern capital market, the price of a stock is often considered highly volatile and unpredictable because of various social, financial, political and other dynamic factors. Calculated, thoughtful investment in the stock market can yield a handsome profit with minimal capital, while an incorrect prediction can easily bring catastrophic financial loss to investors. This paper applies a recently introduced machine learning model, the Transformer, to predict the future price of stocks on the Dhaka Stock Exchange (DSE), the leading stock exchange in Bangladesh. The Transformer has been widely leveraged for natural language processing and computer vision tasks but, to the best of our knowledge, has never been used for stock price prediction at the DSE. The recent introduction of the time2vec encoding for representing time-series features has made it possible to employ the Transformer for stock price prediction. This paper concentrates on applying a Transformer-based model to predict the price movement of eight specific stocks listed on the DSE based on their historical daily and weekly data. Our experiments demonstrate promising results and acceptable root mean squared error on most of the stocks.
    Date: 2022–08
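The time2vec encoding mentioned in the abstract (Kazemi et al., 2019) maps a scalar time index to one learnable linear component plus several periodic sine components, which is what lets a Transformer consume time-series positions. A minimal sketch, with illustrative hand-picked weights standing in for the weights that would actually be learned during training:

```python
import numpy as np

def time2vec(tau, weights, biases):
    """Time2Vec encoding: element 0 is a linear component,
    the remaining k elements are periodic sine components."""
    linear = weights[0] * tau + biases[0]
    periodic = np.sin(weights[1:] * tau + biases[1:])
    return np.concatenate(([linear], periodic))

# Embed time step 3 into a 4-dimensional vector (weights are arbitrary)
w = np.array([0.5, 1.0, 2.0, 3.0])
b = np.zeros(4)
v = time2vec(3.0, w, b)
```

In the paper's setting such vectors would be combined with price features before the attention layers; the frequencies here are stand-ins, not anything fitted to DSE data.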
  2. By: Djenontin, Ida N.S.; Ligmann-Zielinska, Arika; Zulu, Leo C.
    Abstract: Local farmers’ engagement and contributions are increasingly underscored in resource restoration policy. Yet, empirical context-situated understanding of the environmental impacts of farmer-led restoration remains scant. Using six Agent-based Modeling (ABM) simulations that integrate multi-type data, we explore the potential spatial-temporal aggregate patterns and outcomes of local restoration actions in Central Malawi. Findings uncover a positive 10-year trend in spatially explicit potential restoration extent and intensity, greenness, and land productivity, all varying by farmers’ participation level. Landscape regreening is modestly promising, with fluctuating greenness levels and low, slightly incremental, then steady land-productivity levels. Findings also show appropriate incentives, restoration knowledge, and inspiring local leadership as propitious management options for boosting local restoration. Bundling these enabling management and policy options would maximize local restoration. Findings suggest empowering bottom-up restoration efforts for enhanced environmental impacts. We also demonstrate the potential of using ABM to offer insights for spatially targeted, evidence-based restoration policy implementation and monitoring.
    Keywords: Forest Landscape Restoration (FLR); greenness; participation; productivity; Space-time patterns
    JEL: Q15
    Date: 2022–05–24
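The abstract does not specify the model's decision rules, but the basic ABM mechanic it builds on, agents adopting restoration when incentives and peer influence cross a threshold, can be illustrated in a few lines. Everything below (grid size, threshold, the 0.15 influence weight) is an invented stand-in, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Farmers on a 20x20 grid; each has a random incentive level and a few
# start as early adopters of restoration practices.
n, years = 20, 10
incentive = rng.uniform(0.0, 0.5, size=(n, n))
adopted = rng.random((n, n)) < 0.05
initial_rate = adopted.mean()

for _ in range(years):
    # Count adopting neighbors (4-neighborhood, wrapping at the edges)
    neighbors = sum(np.roll(adopted, s, axis=a)
                    for s in (-1, 1) for a in (0, 1))
    # Adopt (irreversibly) once incentive + peer influence crosses 0.6
    adopted |= incentive + 0.15 * neighbors > 0.6

adoption_rate = adopted.mean()
```

Aggregating `adopted` over time and space is what produces the landscape-scale trends the paper analyzes; a real model would of course carry far richer farmer attributes and land dynamics.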
  3. By: Caio Almeida (Princeton University); Jianqing Fan (Princeton University); Gustavo Freire (Erasmus School of Economics); Francesca Tang (Princeton University)
    Abstract: We introduce a novel two-step approach to predict implied volatility surfaces. Given any fitted parametric option pricing model, we train a feedforward neural network on the model-implied pricing errors to correct for mispricing and boost performance. Using a large dataset of S&P 500 options, we test our nonparametric correction on several parametric models ranging from ad-hoc Black-Scholes to structural stochastic volatility models and demonstrate the boosted performance for each model. Out-of-sample prediction exercises in the cross-section and in the option panel show that machine-corrected models always outperform their respective original ones, often by a large extent. Our method is relatively indiscriminate, bringing pricing errors down to a similar magnitude regardless of the misspecification of the original parametric model. Even so, correcting models that are less misspecified usually leads to additional improvements in performance and also outperforms a neural network fitted directly to the implied volatility surface.
    Keywords: Deep Learning, Boosting, Implied Volatility, Stochastic Volatility, Model Correction
    JEL: C45 C58 G13
    Date: 2022–07
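The two-step correction is conceptually simple: compute the parametric model's implied-volatility errors, then fit a flexible learner to those errors as a function of option characteristics, and add the fitted error back. A minimal sketch on synthetic data, with a least-squares polynomial standing in for the paper's feedforward network and a deliberately misspecified flat-vol "parametric model":

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" implied-vol surface over moneyness m and maturity t
m = rng.uniform(0.8, 1.2, 500)
t = rng.uniform(0.1, 2.0, 500)
true_iv = 0.2 + 0.1 * (m - 1.0) ** 2 + 0.05 * np.sqrt(t)

# Step 1: a misspecified parametric model (constant volatility)
model_iv = np.full_like(true_iv, 0.25)
residual = true_iv - model_iv            # model-implied errors

# Step 2: learn the error as a function of (m, t) and correct the model
X = np.column_stack([np.ones_like(m), m, t, m**2, np.sqrt(t)])
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
corrected_iv = model_iv + X @ coef

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
```

The point the paper makes empirically is visible even here: the corrected surface tracks the truth far better than the original model, regardless of how the parametric stage was misspecified.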
  4. By: Sullivan Hué (Aix-Marseille Université, AMSE)
    Abstract: Despite their high predictive performance, random forest and gradient boosting are often considered as black boxes or uninterpretable models, which has raised concerns from practitioners and regulators. As an alternative, I propose to use partial linear models that are inherently interpretable. Specifically, this presentation introduces GAM-lasso (GAMLA) and GAM-autometrics (GAMA), denoted as GAM(L)A in short. GAM(L)A combines parametric and non-parametric functions to accurately capture linearities and nonlinearities prevailing between dependent and explanatory variables and a variable-selection procedure to control for overfitting issues. Estimation relies on a two-step procedure building upon the double residual method. I illustrate the predictive performance and interpretability of GAM(L)A on a regression and a classification problem. The results show that GAM(L)A outperforms parametric models augmented by quadratic, cubic, and interaction effects. Moreover, the results also suggest that the performance of GAM(L)A is not significantly different from that of random forest and gradient boosting.
    Date: 2022–08–01
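The double residual (Robinson-type) idea behind the two-step estimation can be sketched on a toy partially linear model: partial the nonparametric variable out of both the outcome and the linear regressor, then run OLS on the residuals. The crude bin-average smoother below is an illustrative stand-in for the splines and Lasso/Autometrics selection of the actual GAM(L)A procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Partially linear model: y = 2*x + sin(z) + noise
n = 2000
x = rng.normal(size=n)
z = rng.uniform(-3, 3, n)
y = 2.0 * x + np.sin(z) + 0.1 * rng.normal(size=n)

def bin_smooth(z, v, bins=30):
    """Crude nonparametric estimate of E[v | z] via bin averages."""
    edges = np.linspace(z.min(), z.max(), bins + 1)
    idx = np.clip(np.digitize(z, edges) - 1, 0, bins - 1)
    means = np.array([v[idx == k].mean() if np.any(idx == k) else 0.0
                      for k in range(bins)])
    return means[idx]

# Double residual: partial z out of both y and x, then OLS on residuals
ry = y - bin_smooth(z, y)
rx = x - bin_smooth(z, x)
beta_hat = float(rx @ ry / (rx @ rx))
```

The residual regression recovers the linear coefficient (here 2) without ever specifying the functional form of sin(z), which is what makes the parametric part of such models directly interpretable.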
  5. By: Phillip Murray; Ben Wood; Hans Buehler; Magnus Wiese; Mikko S. Pakkanen
    Abstract: We present a method for finding optimal hedging policies for arbitrary initial portfolios and market states. We develop a novel actor-critic algorithm for solving general risk-averse stochastic control problems and use it to learn hedging strategies across multiple risk aversion levels simultaneously. We demonstrate the effectiveness of the approach with a numerical example in a stochastic volatility environment.
    Date: 2022–07
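Risk-averse hedging objectives of the kind optimized here are commonly expressed through the entropic (exponential) risk measure, indexed by a risk-aversion parameter. The sketch below evaluates that objective for a crude static hedge at two aversion levels; it illustrates only the objective family, not the paper's actor-critic algorithm, and all market parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def entropic_risk(pnl, lam):
    """Entropic risk: (1/lam) * log E[exp(-lam * PnL)].
    Larger lam means greater risk aversion; lower values are better."""
    return float(np.log(np.mean(np.exp(-lam * pnl))) / lam)

# Terminal P&L of a short at-the-money call: unhedged vs. a crude
# static hedge holding ~0.5 shares of the underlying
s = 100.0 * np.exp(0.2 * rng.standard_normal(100_000))  # terminal price
payoff = np.maximum(s - 100.0, 0.0)
premium = payoff.mean()
unhedged = premium - payoff
hedged = unhedged + 0.5 * (s - s.mean())

risks = {lam: (entropic_risk(unhedged, lam), entropic_risk(hedged, lam))
         for lam in (0.01, 0.05)}
```

A policy trained "across multiple risk aversions" effectively takes lam as an input and minimizes this objective for every level at once, rather than retraining per level.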
  6. By: Shuo Sun; Rundong Wang; Bo An
    Abstract: Quantitative investment is a fundamental financial task that relies heavily on accurate stock prediction and profitable investment decision making. Although recent advances in deep learning (DL) have shown stellar performance in capturing trading opportunities in the stochastic stock market, we observe that the performance of existing DL methods is sensitive to random seeds and network initialization. To design more profitable DL methods, we analyze this phenomenon and find two major limitations of existing works. First, there is a noticeable gap between accurate financial predictions and profitable investment strategies. Second, investment decisions are made based on a single predictor, without consideration of model uncertainty, which is inconsistent with the workflow of real-world trading firms. To tackle these two limitations, we first reformulate quantitative investment as a multi-task learning problem. We then propose AlphaMix, a novel two-stage mixture-of-experts (MoE) framework for quantitative investment that mimics the efficient bottom-up trading strategy design workflow of successful trading firms. In Stage one, multiple independent trading experts are jointly optimized with an individual uncertainty-aware loss function. In Stage two, we train neural routers (corresponding to the role of a portfolio manager) to dynamically deploy these experts on an as-needed basis. AlphaMix is also a universal framework applicable to various backbone network architectures with consistent performance gains. Through extensive experiments on real-world data spanning over five years in two of the most influential financial markets (US and China), we demonstrate that AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
    Date: 2022–06
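The routing idea can be illustrated independently of any neural architecture: several experts each emit a forecast together with an uncertainty, and a router turns those uncertainties into combination weights. Below, a fixed inverse-variance rule stands in for AlphaMix's learned neural router, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage 1 stand-in: three "experts", each producing a return forecast
# and an uncertainty (predictive std) for each of five stocks
mean = rng.normal(0.0, 0.01, size=(3, 5))   # expert forecasts
std = rng.uniform(0.01, 0.05, size=(3, 5))  # expert uncertainties

# Stage 2 stand-in: weight each expert by its confidence (inverse
# variance), normalized so weights sum to one per stock
precision = 1.0 / std**2
w = precision / precision.sum(axis=0, keepdims=True)
combined = (w * mean).sum(axis=0)           # routed ensemble forecast
```

A learned router generalizes this by conditioning the weights on market state as well, which is how the "portfolio manager" can deploy different experts in different regimes.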
  7. By: Leogrande, Angelo; Costantiello, Alberto; Laureti, Lucio
    Abstract: In this article we investigate the determinants of the European “Most Cited Publications”. We use data from the European Innovation Scoreboard (EIS) of the European Commission for the period 2010-2019. Data are analyzed with Panel Data with Fixed Effects, Panel Data with Random Effects, WLS, and Pooled OLS. Results show that the level of “Most Cited Publications” is positively associated, among others, with “Innovation Index” and “Enterprise Birth” and negatively associated, among others, with “Government Procurement of Advanced Technology Products” and “Human Resources”. Furthermore, we perform a cluster analysis with the k-Means algorithm using both the Silhouette Coefficient and the Elbow Method. We find that the Elbow Method shows better results than the Silhouette Coefficient, with a number of clusters equal to 3. In addition, we perform a network analysis with the Manhattan distance and find the presence of 4 complex and 2 simplified network structures. Finally, we present a comparison of 10 machine learning algorithms for predicting the level of “Most Cited Publications”, with both Original Data (OD) and Augmented Data (AD). Results show that the best machine learning algorithm for predicting the level of “Most Cited Publications” with Original Data (OD) is SGD, while Linear Regression is the best machine learning algorithm for the prediction of “Most Cited Publications” with Augmented Data (AD).
    Keywords: Innovation, and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation.
    JEL: O3 O30 O31 O32 O33
    Date: 2022–08–20
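The two cluster-number heuristics named in the abstract are standard: the Elbow Method inspects how within-cluster inertia falls as k grows, while the Silhouette Coefficient directly scores cohesion versus separation. A minimal sketch on synthetic data (using scikit-learn, an assumption; the paper does not state its software):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)

# Three well-separated synthetic groups of observations
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in (0.0, 4.0, 8.0)])

inertia, silhouette = {}, {}
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertia[k] = km.inertia_                       # Elbow Method input
    silhouette[k] = silhouette_score(X, km.labels_)

best_silhouette_k = max(silhouette, key=silhouette.get)
```

Inertia always decreases in k, so the Elbow Method looks for the kink in that curve; the silhouette instead peaks at the best k, which on this toy data is 3, matching the cluster count the paper settles on for its indicators.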
  8. By: Benedikt Geuchen; Katharina Oberpriller; Thorsten Schmidt
    Abstract: In this work we consider one-dimensional generalized affine processes under the paradigm of Knightian uncertainty (so-called non-linear generalized affine models). This extends and generalizes previous results in Fadina et al. (2019) and Lütkebohmert et al. (2022). In particular, we study the case when the payoff is allowed to depend on the path, as is the case for barrier options or Asian options. To this end, we develop the path-dependent setting for the value function, relying on functional Itô calculus. We establish a dynamic programming principle which then leads to a functional non-linear Kolmogorov equation describing the evolution of the value function. While for Asian options the valuation can be traced back to PDE methods, this is no longer possible for more complicated payoffs such as barrier options. To handle these in an efficient manner, we approximate the functional derivatives with deep neural networks and show that numerical valuation under parameter uncertainty is highly tractable.
    Date: 2022–07
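A path-dependent payoff of the Asian type means the option's value depends on the whole trajectory, not just the terminal price. Under a single fixed parameter set (no Knightian uncertainty, unlike the paper's setting) such a payoff is straightforward to price by Monte Carlo; a baseline sketch with invented Black-Scholes parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

# Arithmetic Asian call under Black-Scholes GBM (invented parameters)
s0, k, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0
steps, paths = 50, 100_000
dt = T / steps

# Simulate log-price paths and average the price along each path
z = rng.standard_normal((paths, steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z, axis=1)
avg = np.exp(log_s).mean(axis=1)       # payoff depends on the whole path
price = float(np.exp(-r * T) * np.maximum(avg - k, 0.0).mean())
```

The paper's problem is harder on both axes: the model parameters themselves are uncertain (an interval, not a point), and barrier-style payoffs require the functional derivatives that its deep-learning approximation targets.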
  9. By: Gael M. Martin; David T. Frazier; Christian P. Robert
    Abstract: This paper takes the reader on a journey through the history of Bayesian computation, from the 18th century to the present day. Beginning with the one-dimensional integral first confronted by Bayes in 1763, we highlight the key contributions of: Laplace, Metropolis (and, importantly, his coauthors!), Hammersley and Handscomb, and Hastings, all of which set the foundations for the computational revolution in the late 20th century -- led, primarily, by Markov chain Monte Carlo (MCMC) algorithms. A very short outline of 21st century computational methods -- including pseudo-marginal MCMC, Hamiltonian Monte Carlo, sequential Monte Carlo, and the various `approximate' methods -- completes the paper.
    Keywords: History of Bayesian computation, Laplace approximation, Metropolis-Hastings algorithm, importance sampling, Markov chain Monte Carlo, pseudo-marginal methods, Hamiltonian Monte Carlo, sequential Monte Carlo, approximate Bayesian methods
    Date: 2022
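The Metropolis-Hastings algorithm at the heart of the MCMC revolution the paper recounts fits in a dozen lines: propose a random-walk move and accept it with probability min(1, p(x')/p(x)). A minimal sketch targeting a standard normal:

```python
import numpy as np

rng = np.random.default_rng(5)

def metropolis(log_target, x0, n, scale=1.0):
    """Random-walk Metropolis (Metropolis et al. 1953; Hastings 1970):
    symmetric proposals, so the acceptance ratio is just p(x')/p(x)."""
    x, out = x0, np.empty(n)
    lp = log_target(x)
    for i in range(n):
        prop = x + scale * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Draw from a standard normal via its (unnormalized) log density
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20_000)
```

Discarding an initial burn-in, the empirical mean and standard deviation of the chain approach 0 and 1; the 21st-century methods the paper surveys are, in large part, ways of making this basic accept/reject recipe scale.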
  10. By: Chtouki Zakaria (University of Mohammed V); Deriouch Kaoutar (University of Mohammed V)
    Abstract: In a climate of risk where global crises succeed one another, the widening socio-economic inequalities observed, even in developed countries, are synonymous with social vulnerability to these crises. In order to centre the social bloc around a median income capable of providing citizens with the resources necessary to overcome the hazards of imported inflation, the public authorities must establish redistribution policies to reduce social disparities and support the most disadvantaged. The aim of the paper is to study the range of traditional policies and social-democratic initiatives proposed to combat absolute and relative poverty, and to provide a feasibility study of the possibilities of modelling the impact of these policies within the framework of a computable general equilibrium (CGE) approach.
    Keywords: Socio-economic inequalities, Redistribution policies, Simulations, CGE models
    Date: 2022–07

This nep-cmp issue is ©2022 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.