nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒01‒23
nineteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Quantum neural network for continuous variable prediction By Prateek Jain; Alberto Garcia Garcia
  2. Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization By Xiaodong Li; Pangjing Wu; Chenxin Zou; Qing Li
  3. Quantum-Inspired Tensor Neural Networks for Option Pricing By Raj G. Patel; Chia-Wei Hsing; Serkan Sahin; Samuel Palmer; Saeed S. Jahromi; Shivam Sharma; Tomas Dominguez; Kris Tziritas; Christophe Michel; Vincent Porte; Mustafa Abid; Stephane Aubert; Pierre Castellani; Samuel Mugel; Roman Orus
  4. Machine learning methods in finance: Recent applications and prospects By Hoang, Daniel; Wiegratz, Kevin
  5. A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management By MohammadAmin Fazli; Mahdi Lashkari; Hamed Taherkhani; Jafar Habibi
  6. Dominant Drivers of National Inflation By Jan Ditzen; Francesco Ravazzolo
  7. Using Machine Learning for Efficient Flexible Regression Adjustment in Economic Experiments By John A. List; Ian Muir; Gregory K. Sun
  8. A Comparative Study On Forecasting Consumer Price Index Of India Amongst XGBoost, Theta, ARIMA, Prophet And LSTM Algorithms. By Asati, Akshita
  9. Using Intermarket Data to Evaluate the Efficient Market Hypothesis with Machine Learning By N'yoma Diamond; Grant Perkins
  10. Quantifying fairness and discrimination in predictive models By Arthur Charpentier
  11. Multi-step-ahead Stock Price Prediction Using Recurrent Fuzzy Neural Network and Variational Mode Decomposition By Hamid Nasiri; Mohammad Mehdi Ebadzadeh
  12. Deep Runge-Kutta schemes for BSDEs By Jean-Fran\c{c}ois Chassagneux; Junchao Chen; Noufel Frikha
  13. Did the policy responses to COVID-19 protect Italian households’ incomes? Evidence from incomes nowcasting in microsimulation models By Maria Teresa Monteduro; Dalila De Rosa; Chiara Subrizi
  14. Prediction of Auto Insurance Risk Based on t-SNE Dimensionality Reduction By Joseph Levitas; Konstantin Yavilberg; Oleg Korol; Genadi Man
  15. Nothing Propinks Like Propinquity: Using Machine Learning to Estimate the Effects of Spatial Proximity in the Major League Baseball Draft By Majid Ahmadi; Nathan Durst; Jeff Lachman; John A. List; Mason List; Noah List; Atom T. Vayalinkal
  16. Does IFRS information on tax loss carryforwards and negative performance improve predictions of earnings and cash flows? By Dreher, Sandra; Eichfelder, Sebastian; Noth, Felix
  17. Why is economics the only discipline with so many curves going up and down? And are they of any use? By Giovanni Dosi
  18. Efficient L2 Batch Posting Strategy on L1 By Akaki Mamageishvili; Edward W. Felten
  19. The Fight Against Corruption at Global Level. A Metric Approach By Laureti, Lucio; Costantiello, Alberto; Leogrande, Angelo

  1. By: Prateek Jain; Alberto Garcia Garcia
    Abstract: Within this decade, quantum computers are predicted to outperform conventional computers in terms of processing power and to have a disruptive effect on a variety of business sectors. The financial sector is expected to be among the first to benefit from quantum computing, in both the short and long term. In this research work we use hybrid quantum neural networks to present a quantum machine learning approach for continuous variable prediction.
    Date: 2022–12
  2. By: Xiaodong Li; Pangjing Wu; Chenxin Zou; Qing Li
    Abstract: Designing an intelligent volume-weighted average price (VWAP) strategy is a critical concern for brokers, since traditional rule-based strategies are relatively static and cannot achieve a lower transaction cost in a dynamic market. Many studies have tried to minimize the cost via reinforcement learning, but there are bottlenecks in improvement, especially for long-duration strategies such as the VWAP strategy. To address this issue, we propose a joint deep learning and hierarchical reinforcement learning architecture termed Macro-Meta-Micro Trader (M3T) to capture market patterns and execute orders at different temporal scales. The Macro Trader first allocates a parent order into tranches based on volume profiles, as the traditional VWAP strategy does, but a long short-term memory neural network is used to improve the forecasting accuracy. The Meta Trader then selects a short-term subgoal appropriate to instant liquidity within each tranche to form a mini-tranche. The Micro Trader consequently extracts the instant market state and fulfils the subgoal with the lowest transaction cost. Our experiments on stocks listed on the Shanghai stock exchange demonstrate that our approach outperforms baselines in terms of VWAP slippage, with an average cost saving of 1.16 basis points compared to the optimal baseline.
    Date: 2022–12
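As background to the VWAP benchmark the abstract above targets, here is a minimal sketch (not the paper's code) of how VWAP and slippage in basis points are typically computed; the trade data is illustrative.

```python
# Minimal sketch: the VWAP benchmark and slippage in basis points.
# The prices and volumes below are made-up illustrative trades.

def vwap(prices, volumes):
    """Volume-weighted average price over a set of trades."""
    total_volume = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total_volume

def slippage_bps(execution_price, benchmark_vwap):
    """Execution cost relative to the VWAP benchmark, in basis points."""
    return (execution_price - benchmark_vwap) / benchmark_vwap * 1e4

market_prices = [10.00, 10.02, 10.01, 9.99]
market_volumes = [500, 300, 400, 800]
benchmark = vwap(market_prices, market_volumes)
print(round(benchmark, 4))                        # -> 10.001
print(round(slippage_bps(10.01, benchmark), 2))   # cost of buying at 10.01
```

A strategy like M3T aims to make the realized execution price land as close to (or below) this benchmark as possible.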
  3. By: Raj G. Patel; Chia-Wei Hsing; Serkan Sahin; Samuel Palmer; Saeed S. Jahromi; Shivam Sharma; Tomas Dominguez; Kris Tziritas; Christophe Michel; Vincent Porte; Mustafa Abid; Stephane Aubert; Pierre Castellani; Samuel Mugel; Roman Orus
    Abstract: Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches addresses the COD by solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, we demonstrate that Tensor Neural Networks (TNN) can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). In addition, we show how a TNN can be trained faster than a DNN for the same accuracy. Besides TNN, we also introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
    Date: 2022–12
  4. By: Hoang, Daniel; Wiegratz, Kevin
    Abstract: We study how researchers can apply machine learning (ML) methods in finance. We first establish that the two major categories of ML (supervised and unsupervised learning) address fundamentally different problems than traditional econometric approaches. Then, we review the current state of research on ML in finance and identify three archetypes of applications: i) the construction of superior and novel measures, ii) the reduction of prediction error, and iii) the extension of the standard econometric toolset. With this taxonomy, we give an outlook on potential future directions for both researchers and practitioners. Our results suggest large benefits of ML methods compared to traditional approaches and indicate that ML holds great potential for future research in finance.
    Keywords: Machine Learning, Artificial Intelligence, Big Data
    JEL: C45 G00
    Date: 2022
  5. By: MohammadAmin Fazli; Mahdi Lashkari; Hamed Taherkhani; Jafar Habibi
    Abstract: Solving portfolio management problems using deep reinforcement learning has attracted much attention in finance in recent years. We propose a new method that feeds expert signals and historical price data into our reinforcement learning framework. Although expert signals have been used in previous works in finance, to the best of our knowledge this is the first time this method, in tandem with deep RL, has been used to solve the financial portfolio management problem. Our proposed framework consists of a convolutional network for aggregating signals, another convolutional network for historical price data, and a vanilla network. We used the Proximal Policy Optimization algorithm as the agent to process the reward and take action in the environment. The results suggest that, on average, our framework could gain 90 percent of the profit earned by the best expert.
    Date: 2022–12
  6. By: Jan Ditzen; Francesco Ravazzolo
    Abstract: For Western economies, a long-forgotten phenomenon is on the horizon: rising inflation rates. We propose a novel approach, christened D2ML, to identify drivers of national inflation. D2ML combines machine learning for model selection with time-dependent data and graphical models to estimate the inverse of the covariance matrix, which is then used to identify dominant drivers. Using a dataset of 33 countries, we find that the US inflation rate and oil prices are dominant drivers of national inflation rates. For a more general framework, we carry out Monte Carlo simulations to show that our estimator correctly identifies dominant drivers.
    Date: 2022–12
  7. By: John A. List; Ian Muir; Gregory K. Sun
    Abstract: This study investigates how to use regression adjustment to reduce variance in experimental data. We show that the estimators recommended in the literature satisfy an orthogonality property with respect to the parameters of the adjustment. This observation greatly simplifies the derivation of the asymptotic variance of these estimators and allows us to solve for the efficient regression adjustment in a large class of adjustments. Our efficiency results generalize a number of previous results known in the literature. We then discuss how this efficient regression adjustment can be feasibly implemented. We show the practical relevance of our theory in two ways. First, we use our efficiency results to improve common practices currently employed in field experiments. Second, we show how our theory allows researchers to robustly incorporate machine learning techniques into their experimental estimators to minimize variance.
    JEL: C9 C90 C91 C93
    Date: 2022–12
  8. By: Asati, Akshita
    Abstract: The Consumer Price Index (CPI) is a crucial and thorough measure of price changes over a fixed time interval, representative of consumption expenditure in a country's economy. As an economic indicator, the CPI underlies the widely used metric of inflation. Accurate CPI forecasts therefore allow the economy to be managed in good time and support appropriate decision-making. For this reason, CPI forecasting, especially in a developing country like India, has long been a topic of interest and research for economists and government policymakers. Traditionally, forecasting the CPI required vast domain knowledge and experience from decision makers, along with a multitude of human interventions and deliberations. With recent advances in time series forecasting techniques, encompassing dependable modern machine learning, statistical, and deep learning models, there is now scope to leverage modern technology to forecast India's CPI and technically aid this important decision-making step in a diverse country like India. In this paper, a comparative study is carried out using MAD, RMSE, and MAPE as comparison criteria among machine learning (XGBoost), statistical learning (Theta, ARIMA, Prophet), and deep learning (LSTM) algorithms. This comparative univariate time series forecasting study demonstrates that technological solutions in the forecasting domain show promising results with reasonable forecast accuracy.
    Date: 2022–12–21
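For readers unfamiliar with the three comparison criteria named in the abstract above, here is a minimal sketch of MAD, RMSE, and MAPE; the actual/forecast series is illustrative, not CPI data.

```python
# Minimal sketch of the three forecast-accuracy criteria (MAD, RMSE, MAPE).
# The actual/forecast values below are made-up illustrative numbers.
import math

def mad(actual, forecast):
    """Mean absolute deviation of the forecast from the actual series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error; penalizes large misses more heavily."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent; scale-free."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 102.0, 104.0, 103.0]
forecast = [101.0, 101.5, 103.0, 104.0]
print(mad(actual, forecast))    # -> 0.875
print(rmse(actual, forecast))
print(mape(actual, forecast))   # in percent
```

Reporting all three is common practice because they respond differently to outliers and to the scale of the series.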
  9. By: N'yoma Diamond; Grant Perkins
    Abstract: In its semi-strong form, the Efficient Market Hypothesis (EMH) implies that technical analysis will not reveal any hidden statistical trends via intermarket data analysis. If technical analysis on intermarket data reveals trends which can be leveraged to significantly outperform the stock market, then the semi-strong EMH does not hold. In this work, we utilize a variety of machine learning techniques to empirically evaluate the EMH using stock market, foreign currency (Forex), international government bond, index future, and commodities future assets. We train five machine learning models on each dataset and analyze the average performance of these models for predicting the direction of future S&P 500 movement as approximated by the SPDR S&P 500 Trust ETF (SPY). From our analysis, the datasets containing bonds, index futures, and/or commodities futures data notably outperform baselines by substantial margins. Further, we find that the usage of intermarket data induces statistically significant positive impacts on the accuracy, macro F1 score, weighted F1 score, and area under the receiver operating characteristic curve for a variety of models at the 95% confidence level. This provides strong empirical evidence contradicting the semi-strong EMH.
    Date: 2022–12
  10. By: Arthur Charpentier
    Abstract: The analysis of discrimination has long interested economists and lawyers. In recent years, the literature in computer science and machine learning has become interested in the subject, offering an interesting re-reading of the topic. These questions arise from numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data, and the use of increasingly opaque algorithms, it is not surprising to have discriminatory algorithms, because it has become easy to obtain a proxy for a sensitive variable by enriching the data indefinitely. According to Kranzberg (1986), "technology is neither good nor bad, nor is it neutral", and therefore, "machine learning won't give you anything like gender neutrality `for free' that you didn't explicitly ask for", as claimed by Kearns et al. (2019). In this article, we will come back to the general context for predictive models in classification. We will present the main concepts of fairness, called group fairness, based on independence between the sensitive variable and the prediction, possibly conditioned on this or that information. We will then go further by presenting the concepts of individual fairness. Finally, we will see how to correct potential discrimination, in order to guarantee that a model is more ethical.
    Date: 2022–12
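To make the group-fairness notion in the abstract above concrete, here is a minimal sketch (not from the article) of demographic parity: the positive-prediction rate should be independent of the sensitive variable. The predictions and group labels are illustrative, and the function assumes exactly two groups.

```python
# Minimal sketch of demographic parity, a group-fairness criterion:
# the gap between positive-prediction rates across two sensitive groups.
# Predictions and group labels below are made-up illustrative data.

def demographic_parity_gap(predictions, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(sensitive):
        preds_g = [p for p, s in zip(predictions, sensitive) if s == g]
        rate[g] = sum(preds_g) / len(preds_g)
    groups = sorted(rate)
    return abs(rate[groups[0]] - rate[groups[1]])

y_hat = [1, 0, 1, 1, 0, 1, 0, 0]                   # binary predictions
group = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']   # sensitive attribute
print(demographic_parity_gap(y_hat, group))        # -> 0.5
```

A gap of zero would mean the classifier's positive rate is identical across groups; conditioning the rates on additional information yields the other group-fairness notions the article surveys.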
  11. By: Hamid Nasiri; Mohammad Mehdi Ebadzadeh
    Abstract: Financial time series prediction, a growing research topic, has attracted considerable interest from scholars, and several approaches have been developed. Among them, decomposition-based methods have achieved promising results. Most decomposition-based methods approximate a single function, which is insufficient for obtaining accurate results. Moreover, most existing research has concentrated on one-step-ahead forecasting, which prevents stock market investors from arriving at the best decisions for the future. This study proposes two novel methods for multi-step-ahead stock price prediction based on the issues outlined. DCT-MFRFNN, a method based on the discrete cosine transform (DCT) and a multi-functional recurrent fuzzy neural network (MFRFNN), uses the DCT to reduce fluctuations in the time series and simplify its structure, and the MFRFNN to predict the stock price. VMD-MFRFNN, an approach based on variational mode decomposition (VMD) and the MFRFNN, brings together their advantages. VMD-MFRFNN consists of two phases. In the decomposition phase, the input signal is decomposed into several intrinsic mode functions (IMFs) using VMD. In the prediction and reconstruction phase, each IMF is given to a separate MFRFNN for prediction, and the predicted signals are summed to reconstruct the output. Three financial time series, including the Hang Seng Index (HSI), the Shanghai Stock Exchange (SSE), and the Standard & Poor's 500 Index (SPX), are used to evaluate the proposed methods. Experimental results indicate that VMD-MFRFNN surpasses other state-of-the-art methods, showing average decreases in RMSE of 35.93%, 24.88%, and 34.59% from the second-best model for HSI, SSE, and SPX, respectively. DCT-MFRFNN also outperforms MFRFNN in all experiments.
    Date: 2022–12
  12. By: Jean-Fran\c{c}ois Chassagneux; Junchao Chen; Noufel Frikha
    Abstract: We propose a new probabilistic scheme which combines deep learning techniques with high-order schemes for backward stochastic differential equations belonging to the class of Runge-Kutta methods, in order to solve high-dimensional semi-linear parabolic partial differential equations. Our approach notably extends the one introduced in [Hure Pham Warin 2020] for the implicit Euler scheme to schemes which are more efficient in terms of discrete-time error. We establish convergence results for our implemented schemes under classical regularity assumptions. We also illustrate the efficiency of our method for schemes of orders one, two and three. Our numerical results indicate that the Crank-Nicolson scheme is a good compromise in terms of precision, computational cost and numerical implementation.
    Date: 2022–12
  13. By: Maria Teresa Monteduro (Ministry of Economy and Finance); Dalila De Rosa (Ministry of Economy and Finance); Chiara Subrizi (Ministry of Economy and Finance)
    Abstract: This paper addresses the economic impact of the COVID-19 pandemic by providing timely and accurate information on Italian households’ income distribution, inequality and poverty risk, assessing the effects of policy responses during 2020. By building a unique and wide database from the latest survey, tax and administrative data at the individual and firm level, and by using the micro-simulation model Taxben-DF from the Italian Department of Finance, the analysis nowcasts the income loss due to the economic shutdown since March 2020 and simulates most of the interventions adopted by the Government from March to December 2020. Results suggest that the policy measures in response to the first pandemic year were effective in keeping overall income inequality under control, although they could not avoid a concerning polarization of incomes and large heterogeneous effects in terms of both income losses and the measures’ compensation.
    Keywords: COVID-19, inequalities, administrative and survey data, microsimulation
    JEL: C63 C81 D31 D63 H31
    Date: 2023–01
  14. By: Joseph Levitas; Konstantin Yavilberg; Oleg Korol; Genadi Man
    Abstract: Correct scoring of a driver's risk is of great significance to auto insurance companies. While the current tools used in this field have been proven in practice to be quite efficient and beneficial, we argue that there is still a lot of room for development and improvement in the auto insurance risk estimation process. To this end, we develop a framework based on a combination of a neural network with the dimensionality reduction technique t-SNE (t-distributed stochastic neighbour embedding). This enables us to visually represent the complex structure of the risk as a two-dimensional surface, while still preserving the properties of the local region in the feature space. The obtained results, which are based on real insurance data, reveal a clear contrast between the high- and low-risk policy holders, and indeed improve upon the actual risk estimation performed by the insurer. Due to the visual accessibility of the portfolio in this approach, we argue that this framework could be advantageous to the auto insurer, both as a main risk prediction tool and as an additional validation stage in other approaches.
    Date: 2022–12
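As a hedged sketch of the visualization step the abstract above describes (not the authors' code or data), the following projects synthetic "policy feature" vectors to two dimensions with scikit-learn's t-SNE, so that risk groups could be inspected visually.

```python
# Hedged sketch: projecting high-dimensional policy features to a
# 2-D surface with t-SNE. The two Gaussian clusters are synthetic
# stand-ins for low- and high-risk policyholders, not real insurance data.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 1.0, size=(30, 8)),   # "low-risk"-like cluster
    rng.normal(3.0, 1.0, size=(30, 8)),   # "high-risk"-like cluster
])

# Each 8-dimensional policy vector becomes one point in the plane.
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(embedding.shape)  # -> (60, 2)
```

Plotting `embedding` colored by the known risk label is then the kind of visual contrast between high- and low-risk policyholders the paper reports.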
  15. By: Majid Ahmadi; Nathan Durst; Jeff Lachman; John A. List; Mason List; Noah List; Atom T. Vayalinkal
    Abstract: Recent models and empirical work on network formation emphasize the importance of propinquity in producing strong interpersonal connections. Yet, one might wonder how deep such insights run, as thus far empirical results rely on survey and lab-based evidence. In this study, we examine propinquity in a high-stakes setting of talent allocation: the Major League Baseball (MLB) Draft from 2000-2019 (30,000 players were drafted from a player pool of more than a million potential draftees). Our findings can be summarized in four parts. First, propinquity is alive and well in our setting, and spans even the latter years of our sample, when higher-level statistical exercises have become the norm rather than the exception. Second, the measured effect size is consequential, as MLB clubs pay a significant opportunity cost in terms of inferior talent acquired due to propinquity bias: for example, their draft picks are 38% less likely to ever play an MLB game relative to players drafted without propinquity bias. Third, those players who benefit from propinquity bias fare better both in terms of the timing of their draft picks and their initial financial contract, conditional on draft order. Finally, the effect is found to be most pronounced in later rounds of the draft, where the Scouting Director has the greatest latitude.
    JEL: C93 D4 J30 J7
    Date: 2022–12
  16. By: Dreher, Sandra; Eichfelder, Sebastian; Noth, Felix
    Abstract: We analyze the usefulness of accounting information on tax loss carryforwards and negative performance for predicting earnings and cash flows. We use hand-collected information on tax loss carryforwards and the corresponding deferred taxes from the International Financial Reporting Standards tax footnotes for listed firms from Germany. Our out-of-sample tests show that considering accounting information on tax loss carryforwards does not enhance the accuracy of performance predictions and can even worsen them. Besides, common forecasting approaches that deal with negative performance are prone to prediction errors. We provide a simple empirical specification to reduce forecast errors. We find evidence that more elaborate machine learning models (the least absolute shrinkage and selection operator method) typically perform no better, and sometimes worse, than our simple specification in out-of-sample tests.
    Keywords: performance forecast, out-of-sample tests, deferred tax assets, tax loss carryforwards
    JEL: M40 M41 C53
    Date: 2022
  17. By: Giovanni Dosi
    Abstract: Even the most rudimentary training from Economics 101 starts with demand curves going down and supply curves going up. They are so 'natural' that they sound even more obvious than the Euclidean postulates in mathematics. But are they? What do they actually mean? Start with ''demand curves''. Are they hypothetical 'psychological constructs' about individual preferences? Propositions on aggregation over them? Reduced forms of actual dynamic propositions on the time profiles of prices and demanded quantities? Similar considerations apply to ''supply curves''. The point here, drawing upon the chapter by Kirman and Dosi in Dosi (2023), is that the forest of demand and supply curves is basically there to populate the analysis with doubly axiomatic notions of equilibria, both 'in the head' of individual agents and in the environments in which they operate. The issue is even thornier when dealing with ''curves'' going up and down in macroeconomic contexts, where one is basically talking of a mystical construction of a meta-meta-meta locus of equilibrium: first in the head of each agent, next in each market (for goods, for savings, etc.), and finally in the overall economy. Supply and demand ''curves'', I am arguing, are one of the three major methodological stumbling blocks in the way of progress in economics, the others being 'utility functions' and 'production functions'. There is an alternative: represent markets and industries as they actually work, and model them both via fully fledged Agent-Based Models and via lower-dimensional dynamical systems.
    Keywords: Demand and supply curves; aggregation; costs and prices; dynamical systems.
    Date: 2023–01–07
  18. By: Akaki Mamageishvili; Edward W. Felten
    Abstract: We design efficient algorithms for the batch posting of Layer 2 chain calldata on the Layer 1 chain, using tools from operations research. We relate the costs of posting and delaying by converting them to the same units. An algorithm that keeps the average and maximum queued number of batches tolerably low improves on the posting costs of the trivial algorithm, which posts batches immediately when they are created, by 8%. An algorithm that cares only moderately about queue length can improve on the trivial algorithm's posting costs by 29%.
    Date: 2022–12
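To illustrate the trade-off the abstract above describes, here is a toy simulation (not the paper's model) in which a fixed per-post cost is weighed against a per-step cost for every batch left in the queue; all numbers are hypothetical, and the paper's point is precisely that both costs can be expressed in the same units.

```python
# Toy illustration of the posting/delay trade-off: post whenever the
# queue reaches `threshold` batches. POST_COST and DELAY_COST are
# hypothetical constants, and `arrivals` is made-up illustrative data.

POST_COST = 10.0   # hypothetical fixed cost per L1 posting
DELAY_COST = 1.0   # hypothetical cost per queued batch per step

def total_cost(arrivals, threshold):
    """Total cost of a threshold posting policy over an arrival sequence."""
    queue, cost = 0, 0.0
    for new_batches in arrivals:
        queue += new_batches
        cost += queue * DELAY_COST      # every queued batch waits one step
        if queue >= threshold:
            cost += POST_COST           # one posting clears the queue
            queue = 0
    return cost

arrivals = [1, 0, 2, 1, 1, 0, 1, 2, 0, 1]   # batches created per step
for t in (1, 3, 5):
    print(t, total_cost(arrivals, t))
```

Posting immediately (threshold 1) pays the fixed cost most often, while a larger threshold trades posting cost for queueing cost; the paper's algorithms optimize this trade-off rather than using a fixed threshold.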
  19. By: Laureti, Lucio; Costantiello, Alberto; Leogrande, Angelo
    Abstract: In this article we estimate the level of Control of Corruption for 193 countries in the period 2011-2020 using data from the ESG World Bank Database. Various econometric techniques are applied: Panel Data with Random Effects, Panel Data with Fixed Effects, Pooled OLS, and WLS. Results show that “Control of Corruption” is positively associated, among others, with “Government Effectiveness” and “Political Stability and Absence of Violence/Terrorism”, while it is negatively associated, among others, with “Agriculture, Forestry, and Fishing Value Added as Percentage of GDP” and “GHG Net Emissions/Removals by LUCF”. A cluster analysis implemented with the k-Means algorithm, optimized with the Elbow Method, shows four clusters. A comparison among eight machine learning algorithms is proposed for the prediction of Control of Corruption. Polynomial Regression is the best predictor on the training data. The level of Control of Corruption is expected to grow by 10.36% on average.
    Keywords: D7, D70, D72, D73, D78.
    JEL: D70 D72 D73 D78
    Date: 2022–12–30
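As a hedged sketch of the Elbow Method named in the abstract above (not the authors' code), the following runs k-means for a range of cluster counts on synthetic two-indicator data and records the inertia (within-cluster sum of squares); the elbow is the k after which the inertia curve flattens.

```python
# Hedged sketch of the Elbow Method for choosing k in k-means:
# run k-means for several k and look for the bend in the inertia curve.
# The four Gaussian blobs are synthetic stand-ins for country groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
data = np.vstack([c + rng.normal(0, 0.4, size=(25, 2)) for c in centers])

inertias = {}
for k in range(1, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    inertias[k] = model.inertia_   # within-cluster sum of squares

for k, v in inertias.items():
    print(k, round(v, 1))   # the drop flattens sharply after k = 4
```

On data with four well-separated blobs, inertia falls steeply up to k = 4 and only marginally afterwards, which is the visual criterion the Elbow Method formalizes.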

This nep-cmp issue is ©2023 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.