nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒03‒14
ten papers chosen by



  1. Can a Machine Correct Option Pricing Models? By Caio Almeida; Jianqing Fan; Francesca Tang
  2. Comparative Study of Machine Learning Models for Stock Price Prediction By Ogulcan E. Orsel; Sasha S. Yamada
  3. What Drives Financial Sector Development in Africa? Insights from Machine Learning By Isaac K. Ofori; Christopher Quaidoo; Pamela E. Ofori
  4. Dual-CLVSA: a Novel Deep Learning Approach to Predict Financial Markets with Sentiment Measurements By Jia Wang; Hongwei Zhu; Jiancheng Shen; Yu Cao; Benyuan Liu
  5. Dependence model assessment and selection with DecoupleNets By Marius Hofert; Avinash Prasad; Mu Zhu
  6. StonkBERT: Can Language Models Predict Medium-Run Stock Price Movements? By Stefan Pasch; Daniel Ehnes
  7. The Effect of borrower-specific Loan-to-Value policies on household debt, wealth inequality and consumption volatility By Ruben Tarne; Dirk Bezemer; Thomas Theobald
  8. The network origins of aggregate Fluctuations: A demand-side approach By Emanuele Citera; Shyam Gouri Suresh; Mark Setterfield
  9. FiNCAT: Financial Numeral Claim Analysis Tool By Sohom Ghosh; Sudip Kumar Naskar
  10. Application of K-means Clustering Algorithm in Evaluation and Statistical Analysis of Internet Financial Transaction Data By Shi Bo

  1. By: Caio Almeida (Princeton University); Jianqing Fan (Princeton University); Francesca Tang (Princeton University)
    Abstract: We introduce a novel approach to capture implied volatility smiles. Given any parametric option pricing model used to fit a smile, we train a deep feedforward neural network on the model’s orthogonal residuals to correct for potential mispricings and boost performance. Using a large number of recent S&P 500 options, we compare our hybrid machine-corrected model to several standalone parametric models, ranging from ad-hoc corrections of Black-Scholes to more structural no-arbitrage stochastic volatility models. Empirical results based on out-of-sample fitting errors, in both cross-sectional and time-series dimensions, consistently confirm that a machine can in fact correct existing models without overfitting. Moreover, we find that our two-step technique is relatively indiscriminate: regardless of the bias or structure of the original parametric model, our boosting approach is able to correct it to approximately the same degree. Hence, our methodology is adaptable and versatile in its application to a large range of parametric option pricing models. As an overarching theme, machine-corrected methods, guided by an implied volatility model as a template, outperform pure machine learning methods.
    Keywords: Deep Learning, Boosting, Implied Volatility, Stochastic Volatility, Model Correction
    JEL: E37
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:pri:econom:2021-44&r=
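    A minimal sketch of the residual-boosting idea described in the abstract (editor's illustration, not the authors' code): a stand-in parametric smile model is corrected by a feedforward network trained on its plain residuals (the paper uses orthogonal residuals). The toy features, the function parametric_iv and all numbers are assumptions.
      # Residual-boosting of a (deliberately misspecified) parametric implied-vol model.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      # toy data: [log-moneyness, maturity] as features, observed implied vols as target
      X = rng.uniform([-0.2, 0.05], [0.2, 1.0], size=(2000, 2))
      iv_obs = 0.2 + 0.3 * X[:, 0] ** 2 + 0.05 * X[:, 1] + rng.normal(0, 0.01, 2000)

      def parametric_iv(X):
          """Stand-in for any fitted parametric smile model (e.g. an ad-hoc BS fit)."""
          return 0.2 + 0.25 * X[:, 0] ** 2

      residuals = iv_obs - parametric_iv(X)                 # the model's fitting errors
      corrector = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      corrector.fit(X, residuals)                           # network learns the residual structure

      iv_hybrid = parametric_iv(X) + corrector.predict(X)   # machine-corrected smile
      print("RMSE parametric:", np.sqrt(np.mean((iv_obs - parametric_iv(X)) ** 2)))
      print("RMSE hybrid:    ", np.sqrt(np.mean((iv_obs - iv_hybrid) ** 2)))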
  2. By: Ogulcan E. Orsel; Sasha S. Yamada
    Abstract: In this work, we apply machine learning techniques to historical stock prices to forecast future prices. To achieve this, we use recursive approaches that are appropriate for handling time series data. In particular, we apply a linear Kalman filter and different varieties of long short-term memory (LSTM) architectures to historical stock prices over a 10-year range (1/1/2011 - 1/1/2021). We quantify the results of these models by computing the error of the predicted values versus the historical values of each stock. We find that, of the algorithms we investigated, a simple linear Kalman filter can predict the next-day value of low-volatility stocks (e.g., Microsoft) surprisingly well. However, for high-volatility stocks (e.g., Tesla) the more complex LSTM algorithms significantly outperform the Kalman filter. Our results show that we can classify different types of stocks and then train an LSTM for each stock type. This method could be used to automate portfolio generation for a target return rate.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.03156&r=
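    A minimal sketch of one of the techniques named in the abstract, a one-dimensional random-walk (local-level) Kalman filter producing next-day predictions; the noise variances q and r and the synthetic price series are illustrative assumptions, not the authors' settings.
      import numpy as np

      def kalman_next_day(prices, q=1e-2, r=1.0):
          """Filter a price series with a local-level model; return one-step-ahead predictions."""
          x, p = prices[0], 1.0              # state estimate and its variance
          preds = []
          for z in prices[1:]:
              x_pred, p_pred = x, p + q      # predict: random-walk state, variance grows by q
              preds.append(x_pred)
              k = p_pred / (p_pred + r)      # Kalman gain
              x = x_pred + k * (z - x_pred)  # update with the observed price z
              p = (1 - k) * p_pred
          return np.array(preds)

      prices = np.cumsum(np.random.default_rng(1).normal(0.1, 1.0, 250)) + 100.0  # synthetic series
      preds = kalman_next_day(prices)
      print("mean absolute one-step error:", np.abs(preds - prices[1:]).mean())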
  3. By: Isaac K. Ofori (University of Insubria, Varese, Italy); Christopher Quaidoo (Legon, Accra, Ghana); Pamela E. Ofori (University of Insubria, Varese, Italy)
    Abstract: This study uses machine learning techniques to identify the key drivers of financial development in Africa. To this end, four regularization techniques (the standard lasso, the adaptive lasso, the minimum Schwarz Bayesian information criterion lasso, and the Elasticnet) are trained on a dataset containing 86 covariates of financial development for the period 1990–2019. The results show that variables such as cell phones, economic globalisation, institutional effectiveness, and literacy are crucial for financial sector development in Africa. Evidence from the partialing-out lasso instrumental variable regression reveals that while inflation and agricultural sector employment suppress financial sector development, cell phones and institutional effectiveness are remarkable in spurring financial sector development in Africa. Policy recommendations are provided in line with the rise in globalisation and technological progress in Africa.
    Keywords: Africa, Elasticnet, Financial Development, Financial Inclusion, Lasso, Regularization, Variable Selection
    JEL: C01 C14 C52 C53 C55 E5 O55
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:abh:wpaper:21/074&r=
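    A hedged sketch of lasso/Elasticnet variable selection in the spirit of the study, using scikit-learn; the synthetic 86-column design matrix and the response are placeholders for the paper's covariates of financial development.
      import numpy as np
      from sklearn.linear_model import LassoCV, ElasticNetCV

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 86))                        # 86 candidate drivers
      beta = np.zeros(86)
      beta[[0, 5, 12, 40]] = [0.8, -0.5, 0.6, 0.4]          # only a few true drivers
      y = X @ beta + rng.normal(0, 0.5, 300)                # toy financial-development index

      lasso = LassoCV(cv=5).fit(X, y)
      enet = ElasticNetCV(cv=5, l1_ratio=0.5).fit(X, y)
      print("lasso keeps columns      :", np.flatnonzero(lasso.coef_))
      print("elastic net keeps columns:", np.flatnonzero(enet.coef_))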
  4. By: Jia Wang; Hongwei Zhu; Jiancheng Shen; Yu Cao; Benyuan Liu
    Abstract: Predicting financial markets is a challenging task, mainly because of the interaction between financial markets and market participants, who are not able to remain rational at all times and are often affected by emotions such as fear and euphoria. Building on CLVSA, a state-of-the-art hybrid convolutional-LSTM-based variational sequence-to-sequence model with attention designed for financial market prediction, we propose a novel deep learning approach, dual-CLVSA, to predict financial market movements using both trading data and the corresponding social sentiment measurements, each fed through a separate sequence-to-sequence channel. We evaluate the performance of our approach by backtesting on eight years of historical trading data of the SPDR S&P 500 Trust ETF. The experimental results show that dual-CLVSA can effectively fuse the two types of data and verify that sentiment measurements are not only informative for financial market prediction but also contain additional profitable features that boost the performance of our prediction system.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.03158&r=
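    A hedged sketch of the dual-channel idea only: two recurrent encoders, one for trading data and one for sentiment, whose hidden states are fused before classification. This is far simpler than dual-CLVSA (no convolutional, variational, or attention components) and all dimensions are illustrative assumptions.
      import torch
      import torch.nn as nn

      class DualChannelNet(nn.Module):
          def __init__(self, price_dim=5, sent_dim=3, hidden=32):
              super().__init__()
              self.price_rnn = nn.LSTM(price_dim, hidden, batch_first=True)  # trading-data channel
              self.sent_rnn = nn.LSTM(sent_dim, hidden, batch_first=True)    # sentiment channel
              self.head = nn.Linear(2 * hidden, 2)                           # up/down classifier

          def forward(self, price_seq, sent_seq):
              _, (h_p, _) = self.price_rnn(price_seq)     # last hidden state of each channel
              _, (h_s, _) = self.sent_rnn(sent_seq)
              fused = torch.cat([h_p[-1], h_s[-1]], dim=-1)  # fuse the two channels
              return self.head(fused)

      model = DualChannelNet()
      logits = model(torch.randn(8, 30, 5), torch.randn(8, 30, 3))  # batch of 30-step windows
      print(logits.shape)  # torch.Size([8, 2])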
  5. By: Marius Hofert; Avinash Prasad; Mu Zhu
    Abstract: Neural networks are suggested for learning a map from $d$-dimensional samples with any underlying dependence structure to multivariate uniformity in $d'$ dimensions. This map, termed DecoupleNet, is used for dependence model assessment and selection. If the data-generating dependence model were known, and if it were among the few analytically tractable ones, one such transformation for $d'=d$ would be Rosenblatt's transform. DecoupleNets only require an available sample and are applicable to $d' < d$.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.03406&r=
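    The abstract cites Rosenblatt's transform as the analytically tractable special case ($d'=d$) of what a DecoupleNet learns. A small sketch of that transform for a bivariate Gaussian copula with known correlation rho (an assumption for illustration): the transformed sample should be approximately independent uniform.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      rho = 0.6
      z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
      u = norm.cdf(z)                                   # sample from the Gaussian copula

      def rosenblatt_gauss(u, rho):
          """Rosenblatt transform of bivariate Gaussian-copula data with correlation rho."""
          z1, z2 = norm.ppf(u[:, 0]), norm.ppf(u[:, 1])
          u1 = u[:, 0]                                  # first margin is already uniform
          u2 = norm.cdf((z2 - rho * z1) / np.sqrt(1 - rho ** 2))  # conditional CDF of U2 | U1
          return np.column_stack([u1, u2])

      r = rosenblatt_gauss(u, rho)
      print("sample correlation after transform:", np.corrcoef(r.T)[0, 1])  # close to 0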
  6. By: Stefan Pasch; Daniel Ehnes
    Abstract: To answer this question, we fine-tune transformer-based language models, including BERT, on different sources of company-related text data for a classification task to predict one-year stock price performance. We use three different types of text data: news articles, blogs, and annual reports. This allows us to analyze the extent to which the performance of language models depends on the type of underlying document. StonkBERT, our transformer-based stock performance classifier, shows substantial improvement in predictive accuracy compared to traditional language models. The highest performance was achieved with news articles as the text source. Performance simulations indicate that these improvements in classification accuracy also translate into above-average stock market returns.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.02268&r=
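    A hedged sketch of fine-tuning a BERT-style sequence classifier on company-related text, as the abstract describes; the example sentences, the three-class labelling and the single optimisation step are illustrative assumptions, not the StonkBERT setup.
      import torch
      from transformers import AutoTokenizer, AutoModelForSequenceClassification

      tok = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
      optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

      texts = ["Revenue guidance raised for the fiscal year.",
               "The company announced a large impairment charge."]
      labels = torch.tensor([2, 0])          # e.g. 0 = underperform, 1 = neutral, 2 = outperform

      batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
      out = model(**batch, labels=labels)    # forward pass returns loss and logits
      out.loss.backward()                    # one gradient step of fine-tuning
      optim.step()
      print(out.logits.argmax(dim=-1))       # predicted performance class per text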
    By: Ruben Tarne (Macroeconomic Policy Institute IMK and Faculty of Economics and Business, University of Groningen); Dirk Bezemer; Thomas Theobald (Macroeconomic Policy Institute IMK)
    Abstract: This paper analyses the effects of borrower-specific credit constraints on macroeconomic outcomes in an agent-based housing market model, calibrated using U.K. household survey data. We apply different Loan-to-Value (LTV) caps to different types of agents: first-time buyers, second and subsequent buyers, and buy-to-let investors. We then analyse the outcomes for household debt, wealth inequality and consumption volatility. The households' consumption function in the model incorporates a wealth term and income-dependent marginal propensities to consume. These characteristics cause the consumption-to-income ratios to move procyclically with the housing cycle. In line with the empirical literature, LTV caps in the model are effective overall while generating (distributional) side effects. Depending on the specification, we find that borrower-specific LTV caps affect household debt, wealth inequality and consumption volatility differently, mediated by changes in the housing market transaction patterns of the model. Restricting investors' access to credit leads to substantial reductions in debt, wealth inequality and consumption volatility. Limiting first-time and subsequent buyers produces only weak effects on household debt and consumption volatility, while limiting first-time buyers even increases wealth inequality. Hence, our findings emphasise the importance of applying borrower-specific macroprudential policies and, specifically, support a policy approach that primarily restrains buy-to-let investors' access to credit.
    Keywords: Agent-based modeling, Macroprudential regulation, Household indebtedness, Housing market, Wealth inequality
    JEL: G51 E58 C63
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:imk:wpaper:212-2021&r=
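    A toy illustration of borrower-specific LTV caps (not the agent-based model itself): the cap that applies depends on the buyer type and bounds the mortgage a household can take; the cap values are assumptions for illustration only.
      LTV_CAPS = {"first_time": 0.95, "subsequent": 0.90, "buy_to_let": 0.70}  # illustrative caps

      def max_mortgage(house_price, buyer_type):
          """Largest loan a buyer of the given type can take under its borrower-specific LTV cap."""
          return LTV_CAPS[buyer_type] * house_price

      for buyer in ("first_time", "subsequent", "buy_to_let"):
          print(buyer, max_mortgage(250_000, buyer))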
  8. By: Emanuele Citera (New School for Social Research); Shyam Gouri Suresh (Davidson College); Mark Setterfield (New School for Social Research)
    Abstract: We construct a model of cyclical growth with agent-based features designed to study the network origins of aggregate fluctuations from a demand-side perspective. In our model, aggregate fluctuations result from variations in investment behavior at the firm level, motivated by endogenously generated changes in `animal spirits' or the state of long run expectations (SOLE). In addition to being influenced by their own economic conditions, firms pay attention to the performance of first-degree network neighbours, weighted (to differing degrees) by the centrality of these neighbours in the network, when revising their SOLE. This allows us to analyze the effects of the centrality of linked network neighbours on the amplitude of aggregate fluctuations. We show that the amplitude of fluctuations is significantly affected by the eigenvector centrality of linked network neighbours and by the weight attached to that centrality. The dispersion of this effect about its mean is shown to be similarly important, so that network properties can cause `great moderations' to give way to sudden increases in the volatility of aggregate economic performance.
    Keywords: Aggregate fluctuations, cyclical growth, animal spirits, state of long run expectations, agent-based model, random network, preferential attachment, small world.
    JEL: C63 E12 E32 E37 O41
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:imk:fmmpap:72-2021&r=
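    A small sketch of the neighbour-weighting mechanism described in the abstract: a firm revises its SOLE using neighbours' states weighted by their eigenvector centrality raised to an assumed weight w; the network, the states and all parameters are illustrative, not the paper's calibration.
      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(0)
      G = nx.barabasi_albert_graph(50, 2, seed=0)        # preferential-attachment network
      centrality = nx.eigenvector_centrality_numpy(G)
      spirits = {i: rng.normal() for i in G.nodes}       # each firm's state of long run expectations

      def revise(i, w=0.5):
          """Blend own state with a centrality-weighted average of neighbours' states."""
          nbrs = list(G.neighbors(i))
          weights = np.array([centrality[j] ** w for j in nbrs])
          nbr_avg = np.dot(weights, [spirits[j] for j in nbrs]) / weights.sum()
          return 0.5 * spirits[i] + 0.5 * nbr_avg

      print(revise(0))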
  9. By: Sohom Ghosh; Sudip Kumar Naskar
    Abstract: While making investment decisions by reading financial documents, investors need to differentiate between in-claim and out-of-claim numerals. In this paper, we present a tool that does this automatically. It extracts context embeddings of the numerals using BERT, a transformer-based pre-trained language model, and then uses a logistic regression based model to detect whether a numeral is in-claim or out-of-claim. We use the FinNum-3 (English) dataset to train our model. After conducting rigorous experiments, we achieve a macro F1 score of 0.8223 on the validation set. We have open-sourced this tool; it can be accessed at https://github.com/sohomghosh/FiNCAT_Financial_Numeral_Claim_Analysis_Tool
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.00631&r=
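    A hedged sketch of the pipeline the abstract describes, BERT embeddings followed by a logistic regression classifier; sentence-level [CLS] embeddings and the two made-up sentences stand in for the tool's token-level numeral embeddings and the FinNum-3 data.
      import torch
      from transformers import AutoTokenizer, AutoModel
      from sklearn.linear_model import LogisticRegression

      tok = AutoTokenizer.from_pretrained("bert-base-uncased")
      bert = AutoModel.from_pretrained("bert-base-uncased")

      sents = ["We expect revenue to grow 15 % next year.",           # in-claim numeral
               "The disclosure appears on page 15 of the report."]    # out-of-claim numeral
      labels = [1, 0]

      with torch.no_grad():
          enc = tok(sents, padding=True, truncation=True, return_tensors="pt")
          emb = bert(**enc).last_hidden_state[:, 0, :].numpy()        # [CLS] embedding per sentence

      clf = LogisticRegression().fit(emb, labels)
      print(clf.predict(emb))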
  10. By: Shi Bo
    Abstract: The purpose is to promote the orderly development of China's Internet financial transactions and to minimize default and delinquency in such transactions. Based on a typical big data algorithm, the K-means algorithm, this paper discusses the concepts of the K-means algorithm and Internet financial transactions, as well as the significance of big data algorithms for the evaluation and statistical analysis of Internet financial transaction data. Existing Internet financial transaction systems are reviewed and their deficiencies are summarized, on the basis of which relevant countermeasures and suggestions are put forward. The K-means clustering algorithm is then applied to evaluate financial transaction data; it is found to improve the accuracy of the data and to reduce the error by 40%. When the number of clusters is 7, the distribution interval of the algorithm's output is 4 days, and when the number of clusters is 10 it is 6 days, indicating that the convergence of the algorithm is relatively good. Additionally, many small and micro individuals still hold a negative attitude towards the innovation and adjustment of Internet financial transactions, indicating that the construction of China's Internet financial transaction system needs further optimization. The satisfaction of most small and micro individuals with the innovation and adjustment also shows that the proposed adjustment measures are feasible, can provide references for related Internet financial transactions, and contribute to the development of Internet financial transactions in China.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.03146&r=
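    A minimal K-means sketch on synthetic transaction features using scikit-learn; the three features and the choice of 7 clusters are illustrative (the paper reports results for 7 and 10 clusters), and none of the data comes from the study.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      transactions = np.column_stack([
          rng.lognormal(3, 1, 1000),        # transaction amount
          rng.poisson(5, 1000),             # monthly transaction frequency
          rng.beta(2, 8, 1000),             # share of delinquent payments
      ])

      km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(transactions)
      print("cluster sizes :", np.bincount(km.labels_))
      print("cluster centres:\n", km.cluster_centers_.round(2))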

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.