nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒03‒11
twenty papers chosen by

  1. Forecasting Economics and Financial Time Series: ARIMA vs. LSTM By Sima Siami-Namini; Akbar Siami Namin
  2. Gaussian Process Regression for Pricing Variable Annuities with Stochastic Volatility and Interest Rate By Ludovic Goudenège; Andrea Molent; Antonino Zanette
  3. Liquidity Management of Canadian Corporate Bond Mutual Funds: A Machine Learning Approach By Rohan Arora; Chen Fan; Guillaume Ouellet Leblanc
  4. Easily implementable time series forecasting techniques for resource provisioning in cloud computing By Michel Fliess; Cédric Join; Maria Bekcheva; Alireza Moradi; Hugues Mounier
  5. Optimal Investment-Consumption-Insurance with Durable and Perishable Consumption Goods in a Jump Diffusion Market By Jin Sun; Ryle S. Perera; Pavel V. Shevchenko
  6. Economic impacts of the vale-cultura (culture voucher): a computable general equilibrium model By Gustavo Fernandes Souza; Ana Flávia Machado; Edson Paulo Domingues
  7. Pricing foreign exchange options under stochastic volatility and interest rates using an RBF-FD method By Fazlollah Soleymani; Andrey Itkin
  8. Using Artificial Intelligence to Recapture Norms: Did #metoo change gender norms in Sweden? By Sara Moricz
  9. The economy-wide implications of a tax policy to reduce water pollution: a case of the Olifants river basin, South Africa By Kyei, C.; Hassan, R.
  10. A brief history of forecasting competitions By Rob J Hyndman
  11. A General Control Variate Method for Lévy Models in Finance By Kenichiro Shiraya; Hiroki Uenishi; Akira Yamazaki
  12. Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction By Ajay Agrawal; Joshua S. Gans; Avi Goldfarb
  13. Games and Network Structures on Corruption, Income Inequality, and Tax Control By Elena Gubar; Edgar Javier Sanchez Carrera; Suriya Kumacheva; Ekaterina Zhitkova; Galina Tomilina
  14. Model Selection in Utility-Maximizing Binary Prediction By Jiun-Hua Su
  15. The Sound Of Many Funds Rebalancing By Chinco, Alex; Fos, Vyacheslav
  16. Artificial Counselor System for Stock Investment By Hadi NekoeiQachkanloo; Benyamin Ghojogh; Ali Saheb Pasand; Mark Crowley
  17. Piketty's second fundamental law of capitalism as an emergent property in a kinetic wealth-exchange model of economic growth By D. S. Quevedo; C. J. Quimbay
  18. Metrics for Measuring the Performance of Machine Learning Prediction Models: An Application to the Housing Market By Miriam Steurer; Robert Hill
  19. Identifying Bid Leakage In Procurement Auctions: Machine Learning Approach By Dmitry I. Ivanov; Alexander S. Nesterov
  20. An innovative feature selection method for support vector machines and its test on the estimation of the credit risk of default By Sariev, Eduard; Germano, Guido

  1. By: Sima Siami-Namini; Akbar Siami Namin
    Abstract: Forecasting time series data is an important subject in economics, business, and finance. Traditionally, several techniques exist to forecast the next lag of a time series, such as the univariate Autoregressive (AR) model, the univariate Moving Average (MA) model, Simple Exponential Smoothing (SES), and, most notably, the Autoregressive Integrated Moving Average (ARIMA) model with its many variations. In particular, ARIMA has demonstrated strong precision and accuracy in predicting the next lags of a time series. With the recent advances in computational power and, more importantly, the development of more sophisticated machine learning approaches such as deep learning, new algorithms have been developed to forecast time series data. The research question investigated in this article is whether and how newly developed deep learning-based algorithms for forecasting time series data, such as Long Short-Term Memory (LSTM), are superior to traditional algorithms. The empirical studies reported in this article show that deep learning-based algorithms such as LSTM outperform traditional algorithms such as ARIMA. More specifically, the average reduction in error rates obtained by LSTM is between 84 and 87 percent compared to ARIMA, indicating the superiority of LSTM. Furthermore, the number of training passes, known as "epochs" in deep learning, was found to have no effect on the performance of the trained forecast model, exhibiting truly random behavior.
    Date: 2018–03
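The error-rate comparison described in the abstract can be illustrated with a minimal pure-Python sketch. The data are simulated, and the two models — a least-squares AR(1) fit and a naive persistence forecast — are simple stand-ins for the paper's ARIMA and LSTM, chosen only to show how out-of-sample RMSE comparisons work:

```python
import math
import random

random.seed(0)

# Simulate an AR(1) series x_t = 0.5 * x_{t-1} + noise (fabricated data,
# purely to illustrate how forecast error rates are compared).
n = 500
x = [0.0]
for _ in range(n - 1):
    x.append(0.5 * x[-1] + random.gauss(0, 1))

train, holdout = x[:400], x[400:]

# Least-squares estimate of the AR(1) coefficient on the training window.
num = sum(a * b for a, b in zip(train[:-1], train[1:]))
den = sum(a * a for a in train[:-1])
phi = num / den

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

# One-step-ahead forecasts over the holdout window.
ar_pred = [phi * prev for prev in x[399:-1]]   # fitted AR(1)
naive_pred = x[399:-1]                         # persistence baseline
print(rmse(ar_pred, holdout), rmse(naive_pred, holdout))
```

Comparing models on a holdout window, as above, is the same protocol under which the paper reports LSTM's error reductions over ARIMA.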
  2. By: Ludovic Goudenège; Andrea Molent; Antonino Zanette
    Abstract: In this paper we develop an efficient approach, based on a machine learning technique, which allows one to quickly evaluate insurance products under stochastic volatility and stochastic interest rates. Specifically, following De Spiegeleer et al., we apply Gaussian Process Regression to compute the price and the Greeks of a GMWB Variable Annuity. Starting from observed prices previously computed by means of a Hybrid Tree PDE approach for some known combinations of model parameters, it is possible to approximate the whole target function on a bounded domain. The regression algorithm consists of two main steps: training and evaluation. The training step is the most time-demanding, but it needs to be performed only once, while the prediction step is very fast and is performed each time the function is evaluated. The developed method, in addition to computing prices and Greeks, can also be employed to compute the no-arbitrage fee, which is common practice in the Variable Annuities sector. We consider three models of increasing complexity, namely the Black-Scholes, the Heston and the Heston Hull-White models, which progressively extend the sources of randomness to include stochastic volatility and stochastic interest rates together. Numerical experiments show that the accuracy of the estimated values is high, while the computational cost is much lower than that of a direct calculation with standard approaches. Finally, we stress that the analysis is carried out for a GMWB annuity but could be generalized to other insurance products. Machine learning appears to be a very promising tool for insurance risk management.
    Date: 2019–03
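The two-step workflow the abstract describes — expensive training performed once, cheap prediction repeated many times — can be sketched in a few lines of NumPy. The training "prices" below are a fabricated smooth function, not values from the authors' Hybrid Tree PDE method, and the squared-exponential kernel and length scale are arbitrary illustrative choices:

```python
import numpy as np

# Toy training set: "prices" observed at a few parameter values
# (fabricated; the paper computes these with a Hybrid Tree PDE method).
X_train = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # e.g. volatility levels
y_train = np.sin(2 * np.pi * X_train)           # stand-in price function

def rbf_kernel(a, b, length=0.15):
    # Squared-exponential covariance between input vectors a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

jitter = 1e-8                                   # small noise for stability
K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))

# Training step (the costly part, done once): solve K alpha = y.
alpha = np.linalg.solve(K, y_train)

# Prediction step (fast, repeated on demand): posterior mean at new inputs.
X_new = np.array([0.25])
y_pred = rbf_kernel(X_new, X_train) @ alpha     # posterior mean at X_new
print(y_pred)
```

In the paper this same split is what makes the method attractive: once the regression is trained on PDE-computed prices, evaluating new parameter combinations is nearly instantaneous.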
  3. By: Rohan Arora; Chen Fan; Guillaume Ouellet Leblanc
    Abstract: How do Canadian corporate bond mutual funds meet investor redemptions? We revisit this question using decision tree and random forest algorithms. We uncover new patterns in the decisions made by fund managers: the interaction between a larger, market-wide term spread and relatively less-liquid holdings increases the probability that a fund manager will sell less-liquid assets (corporate bonds) to meet redemptions. The evidence also shows that machine learning algorithms can extract new knowledge that is not apparent using a classical linear modelling approach.
    Keywords: Financial markets; Financial stability
    JEL: G1 G20 G23
    Date: 2019
  4. By: Michel Fliess (LIX - Laboratoire d'informatique de l'École polytechnique [Palaiseau] - CNRS - Centre National de la Recherche Scientifique - X - École polytechnique, AL.I.E.N. - ALgèbre pour Identification & Estimation Numériques); Cédric Join (CRAN - Centre de Recherche en Automatique de Nancy - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique, AL.I.E.N. - ALgèbre pour Identification & Estimation Numériques); Maria Bekcheva (Inagral, L2S - Laboratoire des signaux et systèmes - UP11 - Université Paris-Sud - Paris 11 - CentraleSupélec - CNRS - Centre National de la Recherche Scientifique); Alireza Moradi (Inagral); Hugues Mounier (L2S - Laboratoire des signaux et systèmes - UP11 - Université Paris-Sud - Paris 11 - CentraleSupélec - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Workload prediction in cloud computing is clearly an important topic. Most existing publications employ various time series techniques, which may be difficult to implement. We suggest here another route, which has already been used successfully in financial engineering and photovoltaic energy. No mathematical modeling or machine learning procedures are needed. Our computer simulations on realistic data, which are quite convincing, show that a setting combining algebraic estimation techniques with the daily seasonality performs much better. An application to computing resource allocation, via virtual machines, is sketched out.
    Date: 2019–04–23
  5. By: Jin Sun; Ryle S. Perera; Pavel V. Shevchenko
    Abstract: We investigate optimal investment-consumption and the optimal level of insurance on durable consumption goods with a positive loading in a continuous-time economy. We assume that the economic agent invests in the financial market and in durable as well as perishable consumption goods to derive utilities from consumption over time in a jump-diffusion market. Assuming that the financial assets and durable consumption goods can be traded without transaction costs, we provide a semi-explicit solution for the optimal insurance coverage of durable goods and financial assets. With transaction costs for trading the durable good proportional to its total value, we formulate the agent's optimization problem as a combined stochastic and impulse control problem with an implicit intervention value function. We solve this problem numerically using stopping time iteration, and analyze the numerical results through illustrative examples.
    Date: 2019–03
  6. By: Gustavo Fernandes Souza (Cedeplar-UFMG); Ana Flávia Machado (Cedeplar-UFMG); Edson Paulo Domingues (Cedeplar-UFMG)
    Abstract: Restricted access to consumption of cultural goods and services is one of the major problems faced by this sector in Brazil. To address this issue, the federal government created the Vale-Cultura, a voucher in which individuals receive an income transfer to be used exclusively for purchasing cultural goods and services. The aim of this study is to analyze the impacts of the Vale-Cultura in the cultural sector and in the economy in general. The methodology applied is the Brazilian Recursive Dynamic General Equilibrium model (BRIDGE). Our simulations found that GDP growth is driven mainly by the increase in household consumption. There is a positive variation in welfare, assessed by the equivalent and compensating variations in income for beneficiaries of the vouchers. Finally, a positive growth is projected at the level of activity of cultural sectors and negative growth in others, showing a reallocation of productive factors.
    Keywords: Culture, Vale-Cultura, CGE, Impacts, Consumption, Well-being, Sectoral Analysis.
    JEL: Z18 R13 I38
    Date: 2019–02
  7. By: Fazlollah Soleymani; Andrey Itkin
    Abstract: This paper proposes a numerical method for pricing foreign exchange (FX) options in a model with stochastic interest rates and stochastic volatility of the FX rate. The model considers four stochastic drivers, each represented by an Itô diffusion with time-dependent drift and a full matrix of correlations. It is known that prices of FX options in this model can be found by solving an associated backward partial differential equation (PDE). However, the PDE contains non-affine terms, which makes it difficult to solve analytically. Also, the standard approach of solving it numerically with traditional finite-difference (FD) or finite-element (FE) methods suffers from a high computational burden. Therefore, in this paper a flavor of the localized radial basis function (RBF) method, RBF-FD, is developed which allows for good accuracy at a relatively low computational cost. Results of numerical simulations are presented which demonstrate the efficiency of this approach, in terms of both performance and accuracy, for pricing FX options and computing the associated Greeks.
    Date: 2019–03
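The core building block of any RBF-FD scheme is a small local linear system that yields derivative weights on a stencil. The one-dimensional sketch below uses Gaussian RBFs and is a generic illustration only — the stencil, shape parameter, and test functions are arbitrary choices, not the paper's four-factor pricing setup:

```python
import numpy as np

# RBF-FD: find stencil weights w so that sum_j w_j * u(x_j) ≈ u'(xc),
# by requiring exactness for Gaussian RBFs centred at the stencil nodes.
nodes = np.array([-0.1, 0.0, 0.1])   # local stencil
xc = 0.0                             # evaluation point
eps = 2.0                            # RBF shape parameter

def phi(r):
    return np.exp(-(eps * r) ** 2)

def dphi(r):
    # d/dx of phi(x - xj), written as a function of r = x - xj.
    return -2.0 * eps ** 2 * r * np.exp(-(eps * r) ** 2)

A = phi(nodes[:, None] - nodes[None, :])   # RBF interpolation matrix
b = dphi(xc - nodes)                       # each RBF's derivative at xc
w = np.linalg.solve(A, b)                  # the RBF-FD weights

# Check the weights on simple functions:
print(w @ nodes ** 2)   # u = x^2, so u'(0) = 0: result is ≈ 0
print(w @ nodes)        # u = x, so u'(0) = 1: result is ≈ 1
```

In a full RBF-FD solver such local weight sets, computed once per node, assemble into a sparse differentiation matrix, which is what keeps the cost low relative to global RBF or dense FD/FE discretizations.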
  8. By: Sara Moricz
    Abstract: Norms are challenging to define and measure, but this paper takes advantage of text data and recent developments in machine learning to create an encompassing measure of norms. An LSTM neural network is trained to detect gendered language. The network serves as a tool to measure how gender norms changed in relation to the Metoo movement on Swedish Twitter. This paper shows that gender norms are, on average, less salient half a year after the first appearance of the hashtag #Metoo. Previous literature suggests that gender norms change over generations, but the current result suggests that norms can change in the short run.
    Date: 2019–03
  9. By: Kyei, C.; Hassan, R.
    Abstract: The Olifants river basin, one of the nine river basins in South Africa, ranks as the third most water-stressed basin as well as the most polluted, owing to pollution from mining activities, irrigation agriculture, and industrial waste disposal. As a result, the government has implemented a series of pollution control measures with the view to mitigating pollution and water shortage in the basin. In this paper, we analyse the regional economic and environmental impacts of a tax policy to reduce water pollution using a Computable General Equilibrium (CGE) model. First, an extended Social Accounting Matrix (SAM) which includes water pollution related activities was constructed for the basin using the framework of environmentally extended SAMs. Second, we simulate a reduction in the current pollution load by increasing the pollution tax rate under alternative revenue recycling schemes. The analyses reveal that internalising the cost of pollution control will effectively reduce pollution in the river basin with a marginal negative impact on Real Regional Gross Domestic Product (RRGDP). However, revenue recycling through uniform lump-sum transfers may positively impact RRGDP. In addition, the policy will lead to a change in the regional production structure from heavy-polluting sectors to less pollution-intensive sectors, with benefits to sustainable development and the aquatic ecosystem.
    Keywords: water quality, Olifants River, computable general equilibrium model, South Africa, market-based incentives; Public Economics
    JEL: C68 Q25 Q28
    Date: 2018–09–25
  10. By: Rob J Hyndman
    Abstract: Forecasting competitions are now so widespread that it is often forgotten how controversial they were when first held, and how influential they have been over the years. I briefly review the history of forecasting competitions, and discuss what we have learned about their design and implementation, and what they can tell us about forecasting. I also provide a few suggestions for potential future competitions, and for research about forecasting based on competitions.
    Keywords: evaluation, forecasting accuracy, Kaggle, M competitions, neural networks, prediction intervals, probability scoring, time series
    Date: 2019
  11. By: Kenichiro Shiraya (Graduate School of Economics, University of Tokyo); Hiroki Uenishi (Graduate School of Economics, University of Tokyo); Akira Yamazaki (Graduate School of Business Administration, Hosei University)
    Abstract: This paper proposes a new control variate method for Lévy models in finance. Our control variate method generates a control-variate process whose initial and terminal values coincide with those of the target Lévy model process, with both processes driven by the same Brownian motion in the simulation. These features efficiently reduce the variance of the Monte Carlo simulation. As a typical application of this method, we provide a calculation scheme for pricing path-dependent exotic options. In numerical experiments, we examine the validity of our method for both continuously and discretely monitored path-dependent options under variance gamma and normal inverse Gaussian models.
    Date: 2019–02
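The variance-reduction idea behind any control variate method fits in a few lines of standard-library Python. Here the target is E[exp(U)] for U ~ Uniform(0, 1), with U itself as the control variate — a textbook toy example, not the paper's Brownian-driven construction for Lévy models:

```python
import math
import random
from statistics import mean, pvariance

random.seed(42)

# Plain Monte Carlo vs control variate for E[exp(U)], U ~ Uniform(0, 1).
# True value: e - 1. Control variate: U itself, with known mean 1/2.
n = 10_000
u = [random.random() for _ in range(n)]
plain = [math.exp(x) for x in u]

# Near-optimal coefficient beta = Cov(f, U) / Var(U), from the sample.
m_f, m_u = mean(plain), mean(u)
cov = mean([(f - m_f) * (x - m_u) for f, x in zip(plain, u)])
beta = cov / pvariance(u)

# Adjusted estimator: same mean, much smaller variance.
cv = [f - beta * (x - 0.5) for f, x in zip(plain, u)]

print(mean(plain), mean(cv))            # both ≈ e - 1 ≈ 1.718
print(pvariance(plain), pvariance(cv))  # the cv variance is far smaller
```

The paper's construction serves the same purpose: because the control process shares the target's endpoints and driving Brownian motion, the two are strongly correlated, which is exactly what makes the adjustment above effective.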
  12. By: Ajay Agrawal; Joshua S. Gans; Avi Goldfarb
    Abstract: Recent advances in artificial intelligence are primarily driven by machine learning, a prediction technology. Prediction is useful because it is an input into decision-making. In order to appreciate the impact of artificial intelligence on jobs, it is important to understand the relative roles of prediction and decision tasks. We describe and provide examples of how artificial intelligence will affect labor, emphasizing differences between when automating prediction leads to automating decisions versus enhancing decision-making by humans.
    JEL: J20 O33
    Date: 2019–02
  13. By: Elena Gubar (Faculty of Applied Mathematics and Control Processes, St. Petersburg State University); Edgar Javier Sanchez Carrera (Department of Economics, Society & Politics, Università di Urbino Carlo Bo); Suriya Kumacheva (Faculty of Applied Mathematics and Control Processes, St. Petersburg State University); Ekaterina Zhitkova (Faculty of Applied Mathematics and Control Processes, St. Petersburg State University); Galina Tomilina (Faculty of Applied Mathematics and Control Processes, St. Petersburg State University)
    Abstract: We study taxpayers' decisions according to their personal income, their individual preferences with respect to audits, and the tax control information perceived in their social environment. We consider citizens classified into two social groups, the rich and the poor. When public authorities are corrupt, we show that the poor group is the most affected by corruption. However, when taxpayers are corrupt or evade taxes, we implement mechanisms to audit and control this corrupt behaviour. We show that this situation can be represented by several well-known theoretical games. We then analyze the evolutionary dynamics of the game on networks, considering that each taxpayer receives information from his neighbours about the probability of an audit. Our simulation analysis shows that the initial and final preferences of taxpayers depend on important parameters, i.e. taxes and fines, audit information and costs.
    Keywords: Behavioral economics; Corrupt behavior; Income distribution; Income taxation system; Network Games; Population games
    JEL: C72 C73 O11 O12 O55 K42
    Date: 2018
  14. By: Jiun-Hua Su
    Abstract: The semiparametric maximum utility estimation proposed by Elliott and Lieli (2013) can be viewed as cost-sensitive binary classification; thus, its in-sample overfitting issue is similar to that of perceptron learning in the machine learning literature. Based on structural risk minimization, a utility-maximizing prediction rule (UMPR) is constructed to alleviate the in-sample overfitting of maximum utility estimation. We establish non-asymptotic upper bounds on the difference between the maximal expected utility and the generalized expected utility of the UMPR. Simulation results show that the UMPR with an appropriate data-dependent penalty outperforms some common estimators in binary classification when the conditional probability of the binary outcome is misspecified or a decision maker's preference is ignored.
    Date: 2019–03
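The structural-risk-minimization idea — penalize richer rule classes to curb in-sample overfitting — can be sketched schematically for simple cutoff rules. Everything below (the data-generating process, the payoffs, and the square-root penalty) is fabricated for illustration and is not the authors' UMPR construction:

```python
import random

random.seed(1)

# Toy data: binary outcome driven by a noisy threshold at 0.6 (fabricated).
n = 200
x = [random.random() for _ in range(n)]
y = [1 if xi + random.gauss(0, 0.1) > 0.6 else 0 for xi in x]

# Cost-sensitive utility: a correct positive call pays 1, a false alarm
# costs 2, and abstaining pays 0 (hypothetical payoffs).
def empirical_utility(cutoff):
    total = 0.0
    for xi, yi in zip(x, y):
        if xi > cutoff:
            total += 1.0 if yi == 1 else -2.0
    return total / n

# Structural-risk-style selection: a finer cutoff grid is a richer rule
# class and receives a larger penalty, discouraging in-sample overfitting.
def select(grid_sizes):
    best = None
    for m in grid_sizes:
        penalty = (m / n) ** 0.5            # schematic complexity penalty
        for k in range(m):
            cutoff = k / m
            score = empirical_utility(cutoff) - penalty
            if best is None or score > best[0]:
                best = (score, cutoff, m)
    return best

score, cutoff, m = select([5, 10, 50])
print(cutoff, m)
```

The trade-off mirrors the abstract: without the penalty the finest grid would always win in sample, while the penalized criterion prefers a coarser rule class whose empirical utility generalizes better.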
  15. By: Chinco, Alex; Fos, Vyacheslav
    Abstract: This paper proposes that computational complexity generates noise. In modern financial markets, it is common to find the same asset held for completely different reasons by funds following a wide variety of threshold-based trading rules. Under these conditions, we show that it can be computationally infeasible to predict how these various trading rules will interact with one another. Formally, we prove that it is NP-hard to predict the sign of the net demand coming from a large interacting mass of funds at a rate better than chance. Thus, market participants will treat these demand shocks as random noise even if they are fully rational. This noise-generating mechanism can produce noise in a wide range of markets and also predicts how noise will vary across assets. We verify this prediction empirically using data on the exchange-traded fund (ETF) market.
    Keywords: Complexity; Indexing; noise; thresholds
    JEL: G14
    Date: 2019–03
  16. By: Hadi NekoeiQachkanloo; Benyamin Ghojogh; Ali Saheb Pasand; Mark Crowley
    Abstract: This paper proposes a novel trading system which plays the role of an artificial counselor for stock investment. Future stock prices (technical features) are predicted using Support Vector Regression. Thereafter, the predicted prices are used to recommend which portions of the budget an investor should allocate to different existing stocks to obtain an optimal expected profit given their level of risk tolerance. Two different methods are used for suggesting the best allocations: Markowitz portfolio theory and a fuzzy investment counselor. The first approach is an optimization-based method which considers merely technical features, while the second is based on fuzzy logic and takes into account both technical and fundamental features of the stock market. Experimental results on the New York Stock Exchange (NYSE) show the effectiveness of the proposed system.
    Date: 2019–03
  17. By: D. S. Quevedo; C. J. Quimbay
    Abstract: We propose in this work a kinetic wealth-exchange model of economic growth by introducing saving as a non-consumed fraction of production. In this new model, which also starts from microeconomic arguments, we find that economic transactions between pairs of agents lead the system to a macroscopic behavior where total wealth is not conserved, making economic growth, understood as the increase of total production over time, possible. This macroeconomic result, which we obtain both numerically through a Monte Carlo simulation method and analytically in the framework of a mean-field approximation, corresponds to the economic growth scenario described by the well-known Solow model of neoclassical economic theory. If, in addition to the production income due to returns on individual capital, individual labor income is also included in the model, then Piketty's second fundamental law of capitalism emerges as a property of the system. We consider that the results obtained in this paper show how econophysics can help to understand the connection between macroeconomics and microeconomics.
    Date: 2019–03
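For context, the conservative kinetic wealth-exchange baseline that this kind of model builds on can be simulated in a few lines. In the standard saving-propensity variant sketched below (in the spirit of Chakraborti and Chakrabarti; parameter values are arbitrary), total wealth is exactly conserved — the paper's contribution is precisely to relax this conservation so that total wealth can grow:

```python
import random

random.seed(3)

# Baseline kinetic wealth-exchange model with a uniform saving propensity
# lam: in each trade, a random pair of agents pools its non-saved wealth
# and splits the pool randomly. Total wealth is conserved in this variant.
N, lam, steps = 100, 0.5, 50_000
w = [1.0] * N

for _ in range(steps):
    i, j = random.sample(range(N), 2)
    pool = (1 - lam) * (w[i] + w[j])
    eps = random.random()
    w[i] = lam * w[i] + eps * pool
    w[j] = lam * w[j] + (1 - eps) * pool

print(sum(w))   # conserved: stays at N = 100
print(max(w))   # yet an unequal wealth distribution emerges
```

Even with conserved total wealth, repeated random exchanges generate a skewed stationary distribution; the paper's production and labor-income terms are what let the aggregate itself grow, connecting the microscopic dynamics to Solow-type growth.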
  18. By: Miriam Steurer (University of Graz, Austria); Robert Hill (University of Graz, Austria)
    Abstract: With the rapid growth of machine learning (ML) methods and datasets to which they can be applied, the question of how one can compare the predictive performance of competing models is becoming an issue of high importance. The existing literature is interdisciplinary, making it hard for users to locate and evaluate the set of available metrics. In this article we collect a number of such metrics from various sources. We classify them by type and then evaluate them with respect to two novel symmetry conditions. While none of these metrics satisfy both conditions, we propose a number of new metrics that do. In total we consider a portfolio of 56 performance metrics. To illustrate the problem of choosing between them, we provide an application in which five ML methods are used to predict apartment prices. We show that the most popular metrics for evaluating performance in the AVM literature generate misleading results. A different picture emerges when the full set of metrics is considered, and especially when we focus on the class of metrics with the best symmetry properties. We conclude by recommending four key metrics for evaluating model predictive performance.
    Keywords: Machine learning; Performance metric; Prediction error; Automated valuation model
    JEL: C45 C53
    Date: 2019–02
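The kind of symmetry condition at stake can be illustrated with two simple metrics: MAPE scores a doubled and a halved prediction differently, while the absolute log accuracy ratio treats them identically. This is an illustrative check only — the paper's own two symmetry conditions may be defined differently:

```python
import math

# MAPE penalizes over- and under-prediction by the same factor unequally;
# the absolute log accuracy ratio treats the two symmetrically.
def mape(pred, actual):
    return abs(pred - actual) / abs(actual)

def abs_log_ratio(pred, actual):
    return abs(math.log(pred / actual))

actual = 100.0
print(mape(200.0, actual), mape(50.0, actual))  # 1.0 vs 0.5: asymmetric
print(abs_log_ratio(200.0, actual),
      abs_log_ratio(50.0, actual))              # both log(2): symmetric
```

Choosing a metric with such symmetry properties matters in valuation settings, where over- and under-prediction by the same factor are often equally costly.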
  19. By: Dmitry I. Ivanov; Alexander S. Nesterov
    Abstract: We propose a novel machine-learning-based approach to detect bid leakage in first-price sealed-bid auctions. We extract and analyze data on more than 1.4 million Russian procurement auctions between 2014 and 2018. As bid leakage in each particular auction is tacit, direct classification is impossible. Instead, we reduce the problem of bid leakage detection to Positive-Unlabeled Classification. The key idea is to regard the losing participants as fair and the winners as possibly corrupted. This allows us to estimate the prior probability of bid leakage in the sample, as well as the posterior probability of bid leakage for each specific auction. We find that at least 16% of auctions are exposed to bid leakage. Bid leakage is more likely in auctions with a higher reserve price, fewer bidders and a smaller price fall, and where the winning bid is received in the last hour before the deadline.
    Date: 2019–03
  20. By: Sariev, Eduard; Germano, Guido
    Abstract: Support vector machines (SVM) have been extensively used for classification problems in many areas such as gene, text and image recognition. However, SVM have rarely been used to estimate the probability of default (PD) in credit risk. In this paper, we advocate the application of SVM, rather than the popular logistic regression (LR) method, for the estimation of both corporate and retail PD. Our results indicate that SVM outperforms LR in terms of classification accuracy for the corporate and retail segments most of the time. We propose a new wrapper feature selection method based on maximizing the distance of the support vectors from the separating hyperplane and apply it to identify the main PD drivers. We use three datasets to test the PD estimation, containing (1) retail obligors from Germany, (2) corporate obligors from Eastern Europe, and (3) corporate obligors from Poland. Total assets, total liabilities, and sales are identified as frequent default drivers for the corporate datasets, whereas current account status and duration of the current account are frequent default drivers for the retail dataset.
    Keywords: default risk; logistic regression; support vector machines; ES/ K002309/1
    JEL: C10 C13
    Date: 2018–11–28

General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.