on Computational Economics
Issue of 2019‒07‒15
sixteen papers chosen by
By: | Bradley J. Pillay; Absalom E. Ezugwu |
Abstract: | The prediction of stock prices is an important task in economics, investment and financial decision-making. For several decades, it has spurred the interest of many researchers in designing stock price predictive models. In this paper, the symbiotic organisms search algorithm, a new metaheuristic algorithm, is employed as an efficient method for training feedforward neural networks (FFNN). The training process is used to build a better stock price predictive model. The Straits Times Index, Nikkei 225, NASDAQ Composite, S&P 500, and Dow Jones Industrial Average indices were utilized as time series data sets for training and testing the proposed predictive model. Three evaluation metrics, namely Root Mean Squared Error, Mean Absolute Percentage Error, and Mean Absolute Deviation, are used to compare the results of the implemented model. The computational results obtained revealed that the hybrid Symbiotic Organisms Search Algorithm exhibited outstanding predictive performance when compared to the hybrid Particle Swarm Optimization, Genetic Algorithm, and ARIMA based models. The new model is a promising predictive technique for modelling high-dimensional nonlinear time series data that are difficult to capture with traditional models. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.10121&r=all |
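A minimal sketch of the three evaluation metrics named in the abstract above (Root Mean Squared Error, Mean Absolute Percentage Error, Mean Absolute Deviation), assuming NumPy arrays of actual and predicted prices; the SOS-trained network itself is not reproduced here, and the index levels are invented.

```python
import numpy as np

def rmse(actual, predicted):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mape(actual, predicted):
    """Mean Absolute Percentage Error (in percent); assumes no zero prices."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def mad(actual, predicted):
    """Mean Absolute Deviation of the forecast errors."""
    return np.mean(np.abs(actual - predicted))

# Toy usage with hypothetical index levels
actual = np.array([3200.5, 3210.1, 3195.8, 3220.3])
predicted = np.array([3198.0, 3215.4, 3200.2, 3212.9])
print(rmse(actual, predicted), mape(actual, predicted), mad(actual, predicted))
```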
By: | Wenhang Bao; Xiao-yang Liu |
Abstract: | Liquidation is the process of selling a large number of shares of one stock sequentially within a given time frame, taking into consideration the costs arising from market impact and a trader's risk aversion. The main challenge in optimizing liquidation is to find an appropriate modeling system that can incorporate the complexities of the stock market and generate practical trading strategies. In this paper, we propose to use a multi-agent deep reinforcement learning model, which captures high-level complexities better than various machine learning methods, so that agents can learn how to make the best selling decisions. First, we theoretically analyze the Almgren and Chriss model and extend its fundamental mechanism so it can be used as a multi-agent trading environment. Our work builds the foundation for future multi-agent environment trading analysis. Second, we analyze the cooperative and competitive behaviours between agents by adjusting the reward functions for each agent, which overcomes the limitation of single-agent reinforcement learning algorithms. Finally, we simulate trading and develop an optimal trading strategy with practical constraints by using a reinforcement learning method, which shows the capabilities of reinforcement learning methods in solving realistic liquidation problems. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.11046&r=all |
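A minimal single-agent sketch of Almgren–Chriss-style liquidation dynamics, which the abstract above extends into a multi-agent trading environment; the parameter values, the linear impact functions, and the naive uniform selling schedule are illustrative assumptions, not the authors' calibration or learned policy.

```python
import numpy as np

def simulate_liquidation(X0=1e6, S0=50.0, N=60, sigma=0.02,
                         gamma=2.5e-7, eta=2.5e-6, eps=0.0625, seed=0):
    """Sell X0 shares over N steps under Almgren-Chriss-style
    permanent (gamma) and temporary (eta, eps) market impact."""
    rng = np.random.default_rng(seed)
    shares_left, price, revenue = X0, S0, 0.0
    sell_per_step = X0 / N                     # naive uniform (TWAP) schedule
    for _ in range(N):
        n = min(sell_per_step, shares_left)
        exec_price = price - (eps + eta * n)   # temporary impact on this trade
        revenue += n * exec_price
        price += sigma * rng.standard_normal() - gamma * n  # noise + permanent impact
        shares_left -= n
    return revenue

print(f"Revenue from TWAP liquidation: {simulate_liquidation():,.0f}")
```

A reinforcement learning agent would replace the fixed TWAP schedule with a learned selling policy that trades off expected revenue against risk.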
By: | Sebastian Ankargren; Paulina Jonéus |
Abstract: | There is currently an increasing interest in large vector autoregressive (VAR) models. VARs are popular tools for macroeconomic forecasting, and the use of larger models has been demonstrated to often improve forecasting ability compared to more traditional small-scale models. Mixed-frequency VARs deal with data sampled at different frequencies while remaining within the realm of VARs. Estimation of mixed-frequency VARs makes use of simulation smoothing, but with the standard procedure these models quickly become prohibitive in nowcasting situations as the size of the model grows. We propose two algorithms that improve the computational efficiency of the simulation smoothing algorithm. Our preferred choice is an adaptive algorithm, which augments the state vector as necessary to also sample monthly variables that are missing at the end of the sample. For large VARs, we find considerable improvements in speed using our adaptive algorithm. The algorithm therefore provides a crucial building block for bringing mixed-frequency VARs to the high-dimensional regime. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.01075&r=all |
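A toy illustration of the state-space machinery that mixed-frequency VAR simulation smoothers build on: a Kalman filter for a bivariate VAR(1) state in which the second series is only observed every third period (a quarterly-in-monthly pattern). The adaptive state augmentation described in the abstract is not reproduced; the system matrices and simulated data are assumptions.

```python
import numpy as np

def kalman_filter_missing(y, A, Q, H, R, x0, P0):
    """Kalman filter in which rows of y may be NaN (series unobserved at that date)."""
    x, P = x0, P0
    for t in range(y.shape[0]):
        x, P = A @ x, A @ P @ A.T + Q            # prediction step
        obs = ~np.isnan(y[t])
        if obs.any():
            Ht, Rt = H[obs], R[np.ix_(obs, obs)]
            v = y[t, obs] - Ht @ x               # innovation for the observed rows only
            S = Ht @ P @ Ht.T + Rt
            K = P @ Ht.T @ np.linalg.inv(S)
            x, P = x + K @ v, P - K @ Ht @ P     # update step
    return x, P

# Bivariate VAR(1) state: series 1 observed every period, series 2 every third period
A = np.array([[0.5, 0.1], [0.0, 0.8]])
Q, H, R = 0.1 * np.eye(2), np.eye(2), 0.01 * np.eye(2)
rng = np.random.default_rng(1)
T, x = 24, np.zeros(2)
y = np.full((T, 2), np.nan)
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y[t, 0] = x[0] + 0.1 * rng.standard_normal()
    if (t + 1) % 3 == 0:                         # quarterly-style observation pattern
        y[t, 1] = x[1] + 0.1 * rng.standard_normal()

x_T, _ = kalman_filter_missing(y, A, Q, H, R, np.zeros(2), np.eye(2))
print("Filtered state at the end of the sample:", x_T)
```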
By: | Giuseppe De Marco (Università di Napoli Parthenope and CSEF); Chiara Donnini (Università di Napoli Parthenope); Federica Gioia (Università di Napoli Parthenope); Francesca Perla (Università di Napoli Parthenope) |
Abstract: | In the literature on financial contagion, having to deal with only imprecise information about overall interbank exposures, and the implications this has for the analysis of the stability of the financial system, is a relevant problem. In particular, previous literature has shown that fuzzy data arise naturally in this framework and turn out to be sufficiently easy to handle from a computational point of view. The present paper generalizes the well-known fictitious default algorithm to the fuzzy setting, providing an existence result for the corresponding fuzzy fixed points, the convergence of the algorithm to fixed points, an implementation of the algorithm in MATLAB, and numerical simulations. |
Keywords: | Financial networks, fuzzy financial data, fictitious default, fixed point. |
Date: | 2019–07–12 |
URL: | http://d.repec.org/n?u=RePEc:sef:csefwp:535&r=all |
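A sketch of the classic (crisp, non-fuzzy) clearing problem that the paper above generalizes, written here as the standard fixed-point iteration on the Eisenberg–Noe clearing map rather than the round-by-round fictitious default version, in Python rather than the authors' MATLAB; the toy three-bank network is an assumption.

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=1000):
    """Eisenberg-Noe clearing payments via fixed-point iteration.
    L[i, j]: nominal liability of bank i to bank j; e[i]: external assets."""
    p_bar = L.sum(axis=1)                                  # total obligations of each bank
    Pi = np.where(p_bar[:, None] > 0, L / np.where(p_bar[:, None] > 0, p_bar[:, None], 1.0), 0.0)
    p = p_bar.copy()                                       # start: everyone pays in full
    for _ in range(max_iter):
        assets = e + Pi.T @ p                              # cash available to each bank
        p_new = np.minimum(p_bar, assets)                  # pay in full, or default to assets
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Toy 3-bank network of bilateral liabilities and external assets
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([1.0, 0.5, 0.2])
print("Clearing payment vector:", clearing_vector(L, e))
```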
By: | Jean-Marc MONTAUD; Nicolas PECASTAING; Jorge DAVALOS |
Abstract: | This study assesses the potential economic impacts of investments dedicated to filling infrastructure gaps in Peru. Using a national firm-level database, we start by empirically estimating the positive externalities of Peruvian infrastructure on private output. In a second step, these estimates are introduced into a dynamic Computable General Equilibrium model used to conduct counterfactual simulations of various infrastructure investment plans over a 15-year period. These simulations show to what extent scaling up infrastructure could be a worthwhile strategy for achieving economic growth in Peru; however, they also show that these benefits depend on the choice of funding schemes for such public spending. |
Keywords: | Infrastructure, Productivity, CGE model, Peru |
JEL: | D58 H54 O47 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:tac:wpaper:2018-2019_9&r=all |
By: | Maria Glenski; Tim Weninger; Svitlana Volkova |
Abstract: | Social media signals have been successfully used to develop large-scale predictive and anticipatory analytics, for example forecasting stock market prices and influenza outbreaks. Recently, social data have been explored to forecast price fluctuations of cryptocurrencies, which are a novel disruptive technology with significant political and economic implications. In this paper we leverage and contrast the predictive power of social signals, specifically user behavior and communication patterns, from two social platforms, GitHub and Reddit, to forecast prices for three cryptocurrencies with high developer and community interest - Bitcoin, Ethereum, and Monero. We evaluate the performance of neural network models that rely on long short-term memory units (LSTMs) trained on historical price data and social data against price-only LSTMs and baseline autoregressive integrated moving average (ARIMA) models, commonly used to predict stock prices. Our results not only demonstrate that social signals reduce error when forecasting daily coin prices, but also show that the language used in comments within the official communities on Reddit (r/Bitcoin, r/Ethereum, and r/Monero) is the best predictor overall. We observe that models are more accurate in forecasting price one day ahead for Bitcoin (4% root mean squared percent error) compared to Ethereum (7%) and Monero (8%). |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.00558&r=all |
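A minimal sketch of the comparison described in the abstract above: an LSTM trained on price plus a social-signal feature versus an ARIMA baseline, scored with root mean squared percent error. It uses tf.keras and statsmodels; the synthetic data, window length, layer size, and ARIMA order are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras

rng = np.random.default_rng(0)
T, lookback = 300, 7
price = np.cumsum(rng.standard_normal(T)) + 100        # synthetic daily coin price
social = price + 2 * rng.standard_normal(T)            # synthetic social-activity signal
features = np.column_stack([price, social])

# Supervised windows: last `lookback` days of (price, social) -> next-day price
X = np.stack([features[t - lookback:t] for t in range(lookback, T)])
y = price[lookback:]

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(lookback, 2)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-30], y[:-30], epochs=10, verbose=0)
lstm_pred = model.predict(X[-30:], verbose=0).ravel()

arima_pred = ARIMA(price[:-30], order=(1, 1, 1)).fit().forecast(steps=30)

def rmspe(actual, pred):                                # root mean squared percent error
    return 100 * np.sqrt(np.mean(((actual - pred) / actual) ** 2))

print("LSTM RMSPE:", rmspe(y[-30:], lstm_pred), "ARIMA RMSPE:", rmspe(price[-30:], arima_pred))
```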
By: | Jonas Boone (Universiteit Antwerpen); Johannes Derboven (Universiteit Antwerpen); Sarah Kuypers (Universiteit Antwerpen); Francesco Figari (Università dell'Insubria); Gerlinde Verbist (Universiteit Antwerpen) |
Abstract: | Taxing wealth has received increased attention in both the academic and political debate as a way to reduce inequality of both income and wealth. However, analytical tools are still underdeveloped when it comes to empirical analyses of different types of wealth-related taxes and policies. New household surveys such as those developed as part of the Eurosystem Household Finance and Consumption Survey (HFCS) represent a milestone for this purpose. Yet, distributional analysis of income and wealth requires information on disposable income and wealth, which is not available, as the new Eurosystem data include only gross income values. Moreover, in order to simulate the effects of wealth taxes and (budget-neutral) reforms of the current direct taxes, a microsimulation model such as EUROMOD is needed. Integrating the HFCS data into EUROMOD makes it possible to assess the effect of different current and hypothetical wealth taxes and policies on the distribution of income and wealth. In this report we build further on a pilot study (see Kuypers et al., 2017), in which the HFCS data were converted into a EUROMOD database for six countries that were part of the first wave of the HFCS. More specifically, the HFCS-EUROMOD combination is applied to the second wave of the HFCS data, and the scope has been broadened to 17 EU countries (the 6 original countries from the pilot study and 11 new ones). We discuss how the HFCS data have been transformed to fit the EUROMOD context and how the simulation of wealth taxes and policies has been added to the EUROMOD country files, and we assess how the simulation results compare with other available sources. Finally, we also briefly discuss an example of a simulation that can be performed using the new tool. |
Keywords: | Wealth taxation, EUROMOD, HFCS |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:ipt:taxref:201907&r=all |
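A toy sketch of the kind of wealth-tax simulation the report describes, applied to hypothetical household records: a flat tax above a threshold, with its incidence tabulated by wealth quintile. The tax schedule, threshold, and simulated data are invented for illustration and do not correspond to EUROMOD policy rules or HFCS data.

```python
import numpy as np

def simulate_wealth_tax(net_wealth, gross_income, threshold=1_000_000.0, rate=0.01):
    """Apply a flat annual wealth tax above a threshold and report its
    incidence across wealth quintiles (toy illustration, not EUROMOD)."""
    tax = rate * np.maximum(net_wealth - threshold, 0.0)
    disposable = gross_income - tax            # ignoring income taxes/benefits for brevity
    quintile = np.digitize(net_wealth, np.quantile(net_wealth, [0.2, 0.4, 0.6, 0.8]))
    for q in range(5):
        share = tax[quintile == q].sum() / max(tax.sum(), 1e-9)
        print(f"Wealth quintile {q + 1}: pays {100 * share:5.1f}% of total wealth tax")
    return disposable

rng = np.random.default_rng(42)
net_wealth = np.exp(rng.normal(12, 1.2, size=5_000))     # right-skewed synthetic wealth
gross_income = np.exp(rng.normal(10.3, 0.5, size=5_000))
simulate_wealth_tax(net_wealth, gross_income)
```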
By: | Asiya Maskaeva; Joel Mmasa; Nicodemas Lema; Mgeni Msafiri |
Abstract: | The Tanzanian government has established a goal to transform the country into a middle-income and semi-industrialized state by 2025. To promote this transformation, the government exempted capital commodities from Value Added Tax in FY 2017-2018 as a way to promote the utilization of these commodities by manufacturing industries and generate growth, employment, and increased incomes. This study analyzes the impact of a reduction in Value Added Tax on capital commodities (electricity, vehicles, machinery, and equipment) under two different closure rules: (1) fixed government expenditures and flexible government savings, and (2) flexible government expenditures and fixed government savings. Under the first regime, government savings declined and industries that depend heavily on government investment suffered. Under the second, output increased for all industrial sectors, leading to a decrease in average unemployment. Real consumption increased for all but the richest household categories. |
Keywords: | Fiscal Policy, Government Budget, Household Income, CGE Modelling, Social Accounting Matrix |
JEL: | C68 E62 H50 E64 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:lvl:mpiacr:2019-07&r=all |
By: | D. Belomestny; M. Kaledin; J. Schoenmakers |
Abstract: | In this article we propose a Weighted Stochastic Mesh (WSM) algorithm for approximating the value of discrete- and continuous-time optimal stopping problems. We prove that in the discrete case the WSM algorithm leads to semi-tractability of the corresponding optimal stopping problems in the sense that its complexity is bounded in order by $\varepsilon^{-4}\log^{d+2}(1/\varepsilon)$, with $d$ being the dimension of the underlying Markov chain. Furthermore, we study the WSM approach in the context of continuous-time optimal stopping problems and derive the corresponding complexity bounds. Although we cannot prove semi-tractability in this case, our bounds turn out to be the tightest among the bounds known for the existing algorithms in the literature. We illustrate our theoretical findings with a numerical example. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.09431&r=all |
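A compact illustration of the stochastic mesh idea for a Bermudan put under geometric Brownian motion, using average-density mesh weights in backward induction. This is the generic Broadie–Glasserman construction, not the authors' Weighted Stochastic Mesh algorithm, and all parameters are assumptions.

```python
import numpy as np

def bermudan_put_mesh(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=10, n_paths=500, seed=0):
    """Stochastic-mesh backward induction (average-density weights)
    for a Bermudan put under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * np.sqrt(dt)
    # Independent paths forming the mesh nodes, shape (n_steps + 1, n_paths)
    Z = rng.standard_normal((n_steps, n_paths))
    S = S0 * np.exp(np.vstack([np.zeros(n_paths), np.cumsum(drift + vol * Z, axis=0)]))

    def density(x, y):
        """One-step GBM (lognormal) transition density f(x -> y)."""
        return np.exp(-(np.log(y / x) - drift) ** 2 / (2 * vol ** 2)) / (y * vol * np.sqrt(2 * np.pi))

    V = np.maximum(K - S[-1], 0.0)                     # terminal payoff at each node
    disc = np.exp(-r * dt)
    for i in range(n_steps - 1, 0, -1):
        f = density(S[i][:, None], S[i + 1][None, :])  # f[k, j]: node k at i -> node j at i+1
        weights = f / f.sum(axis=0, keepdims=True)     # normalise over source nodes k
        continuation = disc * weights @ V
        V = np.maximum(np.maximum(K - S[i], 0.0), continuation)
    return disc * V.mean()                             # roll back from step 1 to time 0

print("Mesh estimate of Bermudan put value:", bermudan_put_mesh())
```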
By: | Sam Ganzfried; Max Chiswick |
Abstract: | Poker is a large, complex game of imperfect information, which has been singled out as a major AI challenge problem. Recently there has been a series of breakthroughs culminating in agents that have successfully defeated the strongest human players in two-player no-limit Texas hold 'em. The strongest agents are based on algorithms for approximating Nash equilibrium strategies, which are stored in massive binary files and are unintelligible to humans. A recent line of research has explored approaches for extrapolating knowledge from strong game-theoretic strategies in forms that can be understood by humans. This would be useful when humans are the ultimate decision makers, allowing them to make better decisions based on massive algorithmically generated strategies. Using techniques from machine learning, we have uncovered a new simple, fundamental rule of poker strategy that leads to a significant improvement in performance over the best prior rule and can also easily be applied by human players. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.09895&r=all |
By: | Joshua Zoen Git Hiew; Xin Huang; Hao Mou; Duan Li; Qi Wu; Yabo Xu |
Abstract: | Traditional sentiment construction in finance relies heavily on the dictionary-based approach, with a few exceptions using simple machine learning techniques such as the Naive Bayes classifier. While the current literature has not yet invoked the rapid advances in natural language processing, in this research we construct a text-based sentiment index using BERT, a novel model recently developed by Google, for three actively traded individual stocks in the Hong Kong market that are heavily discussed on Weibo.com. On the one hand, we demonstrate a significant enhancement from applying BERT to sentiment analysis when compared with existing models. On the other hand, by combining it with two other existing methods commonly used to build sentiment indices in the financial literature, i.e., option-implied and market-implied approaches, we propose a more general and comprehensive framework for financial sentiment analysis, and further provide convincing evidence of the predictability of individual stock returns for the above three stocks using an LSTM (which features a nonlinear mapping), in contrast to the dominant econometric methods in sentiment influence analysis, which are all of a linear-regression nature. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.09024&r=all |
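A minimal sketch of scoring post-level sentiment with a pretrained BERT-family model via the Hugging Face transformers pipeline and averaging signed scores into a daily index. The paper fine-tunes BERT on Chinese Weibo posts; the default English checkpoint and the toy posts below are assumptions made only to keep the example self-contained.

```python
from collections import defaultdict
from transformers import pipeline

# Pretrained sentiment classifier (default English checkpoint; the paper
# instead fine-tunes BERT on Chinese Weibo posts)
classifier = pipeline("sentiment-analysis")

# Hypothetical (date, post) pairs scraped from a stock's discussion feed
posts = [
    ("2019-06-03", "Earnings beat expectations, very bullish on this stock."),
    ("2019-06-03", "Management guidance looks weak, I am selling."),
    ("2019-06-04", "Strong volume today, momentum is clearly positive."),
]

daily_scores = defaultdict(list)
for date, text in posts:
    result = classifier(text)[0]                 # {'label': 'POSITIVE'/'NEGATIVE', 'score': p}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    daily_scores[date].append(signed)

# Daily sentiment index = average signed score over that day's posts
index = {d: sum(s) / len(s) for d, s in daily_scores.items()}
print(index)
```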
By: | Bi, Huixin (Federal Reserve Bank of Kansas City); Traum, Nora |
Abstract: | This paper examines how newspaper reporting affected government bond prices during the U.S. state defaults of the 1840s. Using unsupervised machine learning algorithms, the paper first constructs novel "fiscal information indices" for state governments based on U.S. newspapers of the time. The impact of the indices on government bond prices varied over time. Before the crisis, the entry of new western states into the bond market spurred competition: more state-specific fiscal news imposed downward pressure on bond prices for established states in the market. During the crisis, more state-specific fiscal information increased (lowered) bond prices for states with sound (unsound) fiscal policy. |
Keywords: | Sovereign Default; Information; Fiscal Policy |
JEL: | E62 H30 N41 |
Date: | 2019–06–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp19-04&r=all |
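The abstract does not specify which unsupervised algorithms are used, so the sketch below shows one standard possibility: topic modeling of newspaper text with scikit-learn's LatentDirichletAllocation, from which a state-specific fiscal-news share could be derived. The toy corpus, topic count, and the labelling of a "fiscal" topic are invented assumptions, not the authors' index construction.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical 1840s newspaper snippets (real input would be digitised articles)
articles = [
    "Pennsylvania bonds suspended interest payment canal debt legislature",
    "Indiana internal improvements loan default creditors London market",
    "New western state lands sold revenue surplus treasury report",
    "Cotton prices harvest shipping New Orleans market steady",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # rows: articles, cols: topic shares

# A crude per-article "fiscal information" share would be the weight on whichever
# topic's top words look fiscal; a real index would be validated by hand.
print("Per-article topic shares:\n", doc_topics.round(2))
```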
By: | Küfeoğlu, S.; Liu, G.; Anaya, K.; Pollitt, M. |
Abstract: | This paper reviews digitalisation in the energy sector by looking at the business models of 40 interesting new start-up energy companies from around the world. These start-ups have been facilitated by the rise of distributed generation, much of it intermittent in nature. We review Artificial Intelligence (AI), Machine Learning, Deep Learning and Blockchain applications in the energy sector. We discuss the rise of prosumers and small-scale renewable generation, highlighting the role of Feed-in-Tariffs (FITs), the Distribution System Platform concept and the potential for Peer-to-Peer (P2P) trading. Our aim is to help energy regulators calibrate their support for new business models. |
Keywords: | Feed-in tariff, Distribution System Platform, Peer-to-Peer, Blockchain |
JEL: | L94 |
Date: | 2019–06–25 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1956&r=all |
By: | Lechner, Michael; Okasa, Gabriel |
Abstract: | In econometrics, so-called ordered choice models are popular when interest lies in estimating the probabilities of particular values of categorical outcome variables with an inherent ordering, conditional on covariates. In this paper we develop a new machine learning estimator based on the random forest algorithm for such models without imposing any distributional assumptions. The proposed Ordered Forest estimator provides a flexible estimation method for the conditional choice probabilities that can naturally deal with nonlinearities in the data, while taking the ordering information explicitly into account. In contrast to common machine learning estimators, it enables the estimation of marginal effects as well as inference on them, thus providing the same output as classical econometric estimators based on ordered logit or probit models. An extensive simulation study examines the finite sample properties of the Ordered Forest and reveals its good predictive performance, particularly in settings with multicollinearity among the predictors and nonlinear functional forms. An empirical application further illustrates the estimation of the marginal effects and their standard errors and demonstrates the advantages of the flexible estimation compared to a parametric benchmark model. |
Keywords: | Ordered choice models, random forests, probabilities, marginal effects, machine learning |
JEL: | C14 C25 C40 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2019:08&r=all |
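A minimal sketch of the cumulative-probability idea behind an ordered forest: estimate P(Y <= m | X) with separate regression forests on binary indicators and difference them to obtain ordered class probabilities. The paper's honest splitting, weight-based inference, and marginal effects are not reproduced; scikit-learn forests and the simulated ordered outcome are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, classes = 2000, [1, 2, 3, 4]                     # ordered outcome with 4 categories
X = rng.normal(size=(n, 3))
latent = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 0.5, 2.0]) + 1       # cut latent index into ordered classes

def ordered_forest_probs(X_train, y_train, X_new, classes):
    """P(Y = m | x) via differenced forest estimates of P(Y <= m | x)."""
    cum = [np.ones(len(X_new))]                      # P(Y <= max class) = 1
    for m in classes[:-1]:
        rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
        rf.fit(X_train, (y_train <= m).astype(float))
        cum.insert(-1, rf.predict(X_new))            # keep the trailing column of ones last
    cum = np.column_stack(cum)                       # columns: P(Y<=1), ..., P(Y<=M)=1
    probs = np.diff(np.column_stack([np.zeros(len(X_new)), cum]), axis=1)
    return np.clip(probs, 0.0, 1.0)                  # negative diffs can occur; clip them

probs = ordered_forest_probs(X[:1500], y[:1500], X[1500:], classes)
print("Estimated class probabilities (first 3 rows):\n", probs[:3].round(3))
```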
By: | Samuel Bazzi; Robert A. Blair; Christopher Blattman; Oeindrila Dube; Matthew Gudgeon; Richard Merton Peck |
Abstract: | Policymakers can take actions to prevent local conflict before it begins, if such violence can be accurately predicted. We examine the two countries with the richest available sub-national data: Colombia and Indonesia. We assemble two decades of fine-grained violence data by type, alongside hundreds of annual risk factors. We predict violence one year ahead with a range of machine learning techniques. Models reliably identify persistent, high-violence hot spots. Violence is not simply autoregressive, as detailed histories of disaggregated violence perform best. Rich socio-economic data also substitute well for these histories. Even with such unusually rich data, however, the models poorly predict new outbreaks or escalations of violence. "Best case" scenarios with panel data fall short of workable early-warning systems. |
JEL: | C52 C53 D74 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:25980&r=all |
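A schematic of the out-of-time prediction exercise the abstract above describes: lagged violence history and socio-economic covariates in year t used to predict a violence indicator in year t+1 with a random forest, evaluated on a held-out final year. The variable names and the simulated district panel are assumptions, not the Colombian or Indonesian data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
districts, years = 200, list(range(2000, 2015))
rows = []
for d in range(districts):
    persistent = rng.random() < 0.15                 # persistent hot spots
    for t in years:
        rows.append({"district": d, "year": t,
                     "violence": int(persistent or rng.random() < 0.05),
                     "poverty": rng.random(), "population": rng.lognormal(10, 1)})
panel = pd.DataFrame(rows)
panel["violence_lag"] = panel.groupby("district")["violence"].shift(1)
panel = panel.dropna()

features = ["violence_lag", "poverty", "population"]
train = panel[panel["year"] < 2014]                  # fit on history ...
test = panel[panel["year"] == 2014]                  # ... predict the final year

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(train[features], train["violence"])
pred = clf.predict_proba(test[features])[:, 1]
print("One-year-ahead AUC:", round(roc_auc_score(test["violence"], pred), 3))
```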
By: | Joseph Staudt; Yifang Wei; Lisa Singh; Shawn Klimek; J. Bradford Jensen; Andrew L. Baer |
Abstract: | Between the 2007 and 2012 Economic Censuses (EC), the count of franchise-affiliated establishments declined by 9.8%. One reason for this decline was a reduction in the resources that the Census Bureau was able to dedicate to the manual evaluation of survey responses in the franchise section of the EC. Extensive manual evaluation in 2007 resulted in many establishments, whose survey forms indicated they were not franchise-affiliated, being recoded as franchise-affiliated. No such evaluation could be undertaken in 2012. In this paper, we examine the potential of using external data harvested from the web in combination with machine learning methods to automate the process of evaluating responses to the franchise section of the 2017 EC. Our method allows us to quickly and accurately identify and recode establishments that have been mistakenly classified as not being franchise-affiliated, increasing the unweighted number of franchise-affiliated establishments in the 2017 EC by 22%-42%. |
JEL: | C81 L8 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:cen:wpaper:19-20&r=all |
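A toy sketch of the classification step the paper above describes: text harvested from establishments' web presence vectorised with TF-IDF and fed to a logistic regression that flags likely franchise-affiliated establishments for recoding. The snippets, labels, and threshold are invented; the Census Bureau's actual features and model are not detailed in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical web-harvested text for establishments with known labels
train_text = [
    "locally owned family diner serving breakfast since 1985",
    "proud franchisee of a national sandwich chain, franchise opportunities",
    "authorized franchise location of a global fast food brand",
    "independent hardware store owned and operated by the smith family",
]
train_label = [0, 1, 1, 0]                      # 1 = franchise-affiliated

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vectorizer.fit_transform(train_text), train_label)

# Establishments whose EC responses said "not franchise-affiliated"
candidates = [
    "franchise owner operating three locations of a national coffee chain",
    "neighborhood bookstore hosting weekly poetry readings",
]
scores = clf.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, p in zip(candidates, scores):
    flag = "flag for recoding review" if p > 0.5 else "keep as non-franchise"
    print(f"{p:.2f}  {flag}: {text[:50]}")
```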