on Computational Economics
Issue of 2019‒01‒07
eighteen papers chosen by
By: | Bulent Ozel (Department of Economics, Universitat Jaume I, Castellón, Spain); Mario Eboli (Department of Economics, Gabriele d’Annunzio University, Italy); Andrea Toto (Department of Economics, Gabriele d’Annunzio University, Italy); Andrea Teglio (LEE & Department of Economics, Universitat Jaume I, Castellón, Spain; Ca' Foscari University of Venice, Italy) |
Abstract: | This paper presents a layered simulation model and the results of its initial deployment. We focus on financial contagion due to debt exposure and structural concentration in interbank networks. Our results suggest that a medium density of connections in regular networks is already sufficient to induce a 'robust-yet-fragile' response to insolvency shocks, while the same occurs in star networks only when centralization is very high. The simulation model enables us to create stock-flow-consistent interbank networks with a desired level of network connectivity and centralization. A parsimonious set of network configuration parameters can be employed not only to create stylized network structures with exact connectivity and centralization features, but also random core-periphery representations of a two-tier banking system. Our generic setup decouples the steps of a study of financial contagion: the layers of the simulator cover the phases of such a study, from interbank network configuration to probing the details of a contagion. The presented version enables researchers (i) to create an interbank system with a desired network structure, (ii) to initialize bank balance sheets, optionally using the network from the previous step as an input, (iii) to configure a controlled or randomized sequence of exogenous shock vectors, (iv) to simulate and inspect a single contagion process in detail via tables, graphs and plots generated by the simulator, (v) to design and run automated Monte Carlo simulations, and (vi) to analyze the results of Monte Carlo simulations via tools from the simulation analysis library. |
Keywords: | Contagion, interbank networks, two-tier systems, core-periphery networks |
JEL: | C32 C63 D53 D85 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:jau:wpaper:2018/15&r=all |
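The insolvency-shock propagation that the abstract above describes can be illustrated with a minimal default cascade on a stylized interbank network (an illustrative sketch with our own names and parameters, not the authors' simulator):

```python
# Illustrative default cascade on a stylized interbank network.
# Each bank holds a capital buffer (equity) plus interbank claims on its
# debtors; when write-offs exceed equity, the bank defaults and its own
# creditors write off their exposures to it, possibly extending the cascade.

def contagion(exposures, equity, initial_defaults):
    """exposures[creditor][debtor] = amount lent; equity[bank] = capital buffer."""
    defaulted = set(initial_defaults)
    losses = {bank: 0.0 for bank in equity}
    frontier = set(initial_defaults)
    while frontier:
        next_frontier = set()
        for debtor in frontier:
            for creditor, claims in exposures.items():
                if creditor in defaulted:
                    continue
                losses[creditor] += claims.get(debtor, 0.0)
                if losses[creditor] >= equity[creditor]:
                    defaulted.add(creditor)
                    next_frontier.add(creditor)
        frontier = next_frontier
    return defaulted

# Ring network: each bank lends 5 to its successor; bank 0 fails first.
exposures = {i: {(i + 1) % 4: 5.0} for i in range(4)}
equity = {0: 3.0, 1: 3.0, 2: 8.0, 3: 3.0}
defaulted = contagion(exposures, equity, [0])
```

On this ring, bank 0's failure wipes out its creditor, bank 3 (loss 5 against equity 3), but the cascade stops at the better-capitalized bank 2.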
By: | Maximilian Beikirch; Simon Cramer; Martin Frank; Philipp Otte; Emma Pabich; Torsten Trimborn |
Abstract: | We study the qualitative and quantitative appearance of stylized facts in several agent-based computational economic market (ABCEM) models. We perform our simulations with the SABCEMM (Simulator for Agent-Based Computational Economic Market Models) tool recently introduced by the authors (Trimborn et al. 2018a). The SABCEMM simulator is implemented in C++ and is well suited for large-scale computations. Thanks to its object-oriented software design, the SABCEMM tool enables the creation of new models by plugging together novel and existing agent and market designs as easily as plugging together pieces of a puzzle. We present new ABCEM models created by recombining existing models and study them with respect to stylized facts as well. The code is available on GitHub (Trimborn et al. 2018b), such that all results can be reproduced by the reader. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.02726&r=all |
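The "plug together" idea behind the object-oriented design can be sketched as follows (a hypothetical Python analogue; SABCEMM itself is C++, and every class name below is our own illustration):

```python
# Sketch: agent and market components are interchangeable building blocks,
# so new models arise by recombining them rather than rewriting a simulator.
from abc import ABC, abstractmethod
import random

class Agent(ABC):
    @abstractmethod
    def order(self, price: float) -> float:
        """Return demanded quantity (positive = buy, negative = sell)."""

class NoiseTrader(Agent):
    def __init__(self, rng):
        self.rng = rng
    def order(self, price):
        return self.rng.uniform(-1.0, 1.0)     # random demand

class Fundamentalist(Agent):
    def __init__(self, fundamental_value):
        self.fv = fundamental_value
    def order(self, price):
        return 0.1 * (self.fv - price)         # buy below, sell above fair value

class Market:
    """Price-impact market: aggregate excess demand moves the price."""
    def __init__(self, price, impact=0.05):
        self.price, self.impact = price, impact
    def clear(self, agents):
        excess = sum(a.order(self.price) for a in agents)
        self.price *= 1 + self.impact * excess
        return self.price

rng = random.Random(0)
agents = [Fundamentalist(100.0)] + [NoiseTrader(rng) for _ in range(5)]
market = Market(price=90.0)
prices = [market.clear(agents) for _ in range(200)]
```

Swapping in a different `Agent` subclass or `Market` clearing rule yields a new model without touching the rest of the loop, which is the recombination idea the abstract describes.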
By: | Huicheng Liu |
Abstract: | Stock market prediction is one of the most attractive research topics, since successful prediction of the market's future movement leads to significant profit. Traditional short-term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. However, financial news also contains useful information on public companies and the market. Existing methods in the finance literature exploit sentiment signal features, which are limited in that they do not consider factors such as events and the news context. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a bidirectional LSTM is used to encode the news text and capture context information, and a self-attention mechanism is applied to distribute attention over the most relevant words, news items and days. In terms of predicting directional changes in both the Standard & Poor's 500 index and individual companies' stock prices, we show that this technique is competitive with other state-of-the-art approaches, demonstrating the effectiveness of recent NLP technology advances for computational finance. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.06173&r=all |
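The self-attention step described above can be sketched in NumPy: each encoder state attends to all others via scaled dot products, and each row of the attention map shows how weight is distributed over positions (a generic sketch of the mechanism, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    """Scaled dot-product self-attention over encoder states H (seq_len, d)."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)        # pairwise relevance of positions
    weights = softmax(scores, axis=-1)   # each row is a distribution over positions
    return weights @ H, weights          # context vectors, attention map

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))              # e.g. 6 Bi-LSTM output states of dim 8
context, w = self_attention(H)
```

In the paper's setting the same idea is applied hierarchically, distributing attention over words, news items and days; here a single level suffices to show the computation.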
By: | Mostafa Zandieh; Seyed Omid Mohaddesi |
Abstract: | In this paper, we solve the portfolio rebalancing problem with transaction costs when security returns are represented by uncertain variables. The performance of the proposed model is studied using constant-proportion portfolio insurance (CPPI) as the rebalancing strategy. Numerical results show that uncertain parameters and different belief degrees produce different efficient frontiers and affect the performance of the proposed model. Moreover, the CPPI strategy acts as an insurance mechanism, limiting downside risk in bear markets while allowing potential benefit in bull markets. Finally, solving the model with both a global optimization solver and a genetic algorithm (GA), we conclude that problem size is an important factor in portfolio rebalancing with uncertain parameters, and that to obtain better results a meta-heuristic algorithm is preferable to a global solver. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.07635&r=all |
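The CPPI rule used as the rebalancing strategy keeps the risky exposure at a multiple of the cushion above a floor, which is what produces the downside protection the abstract reports. A minimal discrete-time sketch (our parameter names, zero risk-free rate, not the paper's exact model):

```python
# Discrete-time CPPI: risky exposure = m * max(value - floor, 0), capped at
# the portfolio value (no leverage). In a bear market the cushion shrinks,
# exposure falls, and the floor is protected; in a bull market exposure grows.

def cppi(prices, v0=100.0, floor=90.0, m=3.0):
    value = v0
    path = [value]
    for t in range(1, len(prices)):
        cushion = max(value - floor, 0.0)
        exposure = min(m * cushion, value)     # allocation to the risky asset
        risky_ret = prices[t] / prices[t - 1] - 1
        value += exposure * risky_ret          # remainder earns 0 (risk-free)
        path.append(value)
    return path

bear = [100, 95, 88, 80, 75, 70]               # risky asset loses 30%
path = cppi(bear)
```

On this path the risky asset falls 30%, yet the portfolio ends above the floor of 90; the protection holds in discrete time as long as no single-period loss exceeds 1/m.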
By: | Rastin Matin; Casper Hansen; Christian Hansen; Pia Mølgaard |
Abstract: | Corporate distress models typically only employ the numerical financial variables in the firms' annual reports. We develop a model that employs the unstructured textual data in the reports as well, namely the auditors' reports and managements' statements. Our model consists of a convolutional recurrent neural network which, when concatenated with the numerical financial variables, learns a descriptive representation of the text that is suited for corporate distress prediction. We find that the unstructured data provides a statistically significant enhancement of the distress prediction performance, in particular for large firms where accurate predictions are of the utmost importance. Furthermore, we find that auditors' reports are more informative than managements' statements and that a joint model including both managements' statements and auditors' reports displays no enhancement relative to a model including only auditors' reports. Our model demonstrates a direct improvement over existing state-of-the-art models. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.05270&r=all |
By: | Zhuoran Xiong; Xiao-Yang Liu; Shan Zhong; Hongyang (Bruce) Yang; Anwar Walid |
Abstract: | Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain an optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks, and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent's performance is evaluated and compared with the Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform both baselines in terms of the Sharpe ratio and cumulative returns. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.07522&r=all |
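The two evaluation metrics named in the abstract, cumulative return and the Sharpe ratio, can be computed as follows (a generic sketch; the return series below are made-up illustrations, not the paper's data):

```python
import math

def cumulative_return(returns):
    """Total compounded return of a per-period return series."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a daily return series (risk-free rate 0)."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

# Hypothetical daily returns of a trading agent and a passive benchmark.
agent = [0.002, -0.001, 0.003, 0.001, -0.002, 0.004]
benchmark = [0.001, -0.002, 0.001, 0.000, -0.001, 0.002]
```

A strategy "outperforming in both metrics", as the abstract claims, means both comparisons come out in the agent's favor.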
By: | Javier Franco-Pedroso; Joaquin Gonzalez-Rodriguez; Maria Planas; Jorge Cubero; Rafael Cobo; Fernando Pablos |
Abstract: | This paper presents an evaluation framework that attempts to quantify the "degree of realism" of simulated financial time series, whatever the simulation method may be, with the aim of discovering unknown characteristics that are not being properly reproduced by such methods, in order to improve them. For that purpose, the evaluation framework is posed as a machine learning problem in which given time series examples must be classified as simulated or real financial time series. The "challenge" is proposed as an open competition, similar to those published on the Kaggle platform, in which participants must submit their classification results along with a description of the features and classifiers used. The results of these "challenges" have revealed some interesting properties of financial data and have led to substantial improvements in the simulation methods we are researching, some of which are described in this work. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.07792&r=all |
By: | Shenhao Wang; Jinhua Zhao |
Abstract: | How to combine revealed preference (RP) and stated preference (SP) data to analyze travel behavior is an enduring question. This study presents a new approach that uses a multitask learning deep neural network (MTLDNN) to combine RP and SP data, incorporating the traditional nested logit approach as a special case. Based on a combined RP and SP survey in Singapore examining the demand for autonomous vehicles (AV), we designed, estimated and compared one hundred MTLDNN architectures, with three major findings. First, the traditional nested logit approach to combining RP and SP can be regarded as a special case of MTLDNN, one of a large number of possible MTLDNN architectures; the nested logit approach imposes a proportional parameter constraint under the MTLDNN framework. Second, of the 100 MTLDNN models tested, the best has one shared layer and five domain-specific layers with weak regularization, but the nested logit approach with the proportional parameter constraint rivals the best model. Third, the proportional parameter constraint works well in the nested logit model but is too restrictive for deeper architectures. Overall, this study introduces the MTLDNN model for combining RP and SP data, relates the nested logit approach to the hyperparameter space of MTLDNN, and explores hyperparameter training and architecture design for joint demand analysis. |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1901.00227&r=all |
By: | Suren Harutyunyan; Adrià Masip Borràs |
Abstract: | In this paper we study recent developments in the approximation of spread option prices. As Kirk's approximation is severely flawed when the correlation is very high, we explore a recent development that approximates the option price simply and accurately. To assess the goodness of fit of the new method, we dramatically increase the number of simulations and scenarios used to test it and compare it with the original Kirk's formula. The simulations confirm that the modified Kirk's approximation is extremely accurate, improving on Kirk's approach for two-asset spread options. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.04272&r=all |
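Kirk's original approximation, the benchmark in the paper above, prices a European spread call in closed form by treating the ratio of the first asset to the strike-shifted second asset as lognormal. A sketch of the standard formula in our own parameter names (the paper's modified variant is not reproduced here):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk's approximation for a European call on the spread F1 - F2,
    struck at K, given forwards F1 and F2, vols sigma1 and sigma2,
    correlation rho, maturity T and discount rate r."""
    w = F2 / (F2 + K)                        # weight of the second asset
    sigma = math.sqrt(sigma1**2 - 2 * rho * sigma1 * sigma2 * w
                      + (sigma2 * w) ** 2)   # effective spread volatility
    ratio = F1 / (F2 + K)
    d1 = (math.log(ratio) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return math.exp(-r * T) * (F2 + K) * (ratio * norm_cdf(d1) - norm_cdf(d2))

price = kirk_spread_call(F1=110, F2=100, K=5, sigma1=0.3,
                         sigma2=0.25, rho=0.9, T=1.0, r=0.02)
```

The high-correlation case (rho near 1) is exactly where the effective volatility term degrades, which motivates the modified approximation the paper studies.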
By: | Robert Waschik; Jonathan Chew; John Madden; Joshua Sidgwick; Glyn Wittwer |
Abstract: | The study analyses the impacts of selected regional universities on regional economies within Australia using a multi-regional CGE model, VU-TERM. Universities enhance a community's knowledge base through teaching and research, raising productivity within the region. To depict the regional economic contribution of universities, we simulate a hypothetical removal of regional campuses. We estimate demand-side shocks using expenditure patterns of university enrolees. Supply-side impacts use inputs from econometric studies estimating rates-of-return to levels of educational attainment. Armidale's local economy is hit hardest by a hypothetical removal of its university. Other regions suffering substantial losses include Ballarat, Toowoomba and Rockhampton. |
Keywords: | CGE Modelling, Regional Universities, Economic Contribution |
JEL: | C68 O18 |
Date: | 2018–08 |
URL: | http://d.repec.org/n?u=RePEc:cop:wpaper:g-286&r=all |
By: | Gabriel A. Madeira; Mailliw Serafim; Sergio Mikio Koyama; Fernando Kuwer |
Abstract: | In Brazil, about 40% of credit to firms originates from earmarked credit policies. These loans are heavily subsidized, with interest rates substantially lower than others. Only about 18% of formal firms benefit from these loans; however, those firms receive about 80% of total corporate bank credit. It is reasonable to assume that the effects of these policies on the economy are substantial. To evaluate them, we develop a general equilibrium model with heterogeneous agents and credit restrictions that incorporates the credit earmarking policies practiced in Brazil. Using theoretical and numerical tools recently incorporated into the economic literature, we fit the model to Brazilian data in order to simulate the effects of removing credit earmarking policies. Our simulations indicate that ending earmarked credit programs would generate several positive effects, such as increased output and productivity, reduced inequality, and greater financial inclusion. Next, we simulate variations in earmarking policies, evaluating the impacts of focusing more on poorer or on more productive entrepreneurs. While these changes can lead to improvements, our simulations indicate smaller gains than the outright removal of earmarking programs. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:490&r=all |
By: | Viehmann, Johannes (Energiewirtschaftliches Institut an der Universitaet zu Koeln (EWI)); Lorenczik, Stefan (IEA); Malischek, Raimund (IEA) |
Abstract: | There is an ongoing debate on the appropriate auction design for competitive electricity balancing markets. Uniform price auctions (UPA) and discriminatory price auctions (DPA), the prevalent designs in use today, are assumed to have different properties with regard to prices and efficiencies. These properties cannot be thoroughly described using analytical methods, due to the complex strategy space in repeated multi-unit, multiple-bid auctions. Therefore, using an agent-based Q-learning model, we simulate strategic bidding behaviour in these auctions under a variety of market conditions. We find that UPAs lead to higher prices in all analysed market settings, mainly because players engage in bid shading more aggressively. Moreover, small players in UPAs learn to free ride on the price setting of large players and earn higher profits per unit of capacity owned, while they are disadvantaged in DPAs. UPAs also generally feature higher efficiencies, although there are exceptions to this observation. If demand varies and players are provided with additional information about scarcity in the market, market prices increase only when asymmetric players are present. |
Keywords: | Agent-based computational economics; Auction design; Electricity markets |
JEL: | C63 D43 D44 L94 |
Date: | 2018–12–18 |
URL: | http://d.repec.org/n?u=RePEc:ris:ewikln:2018_003&r=all |
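The agent-based Q-learning idea can be illustrated with a toy stateless bidder that learns its profit-maximizing markup by epsilon-greedy exploration (a deliberately simplified sketch with our own parameters, not the paper's multi-unit auction model):

```python
import random

# One seller with marginal cost 20 bids into a market that accepts any bid
# up to the demand price 60. The agent learns which markup maximizes profit
# via a stateless epsilon-greedy Q-update: Q(a) += alpha * (profit - Q(a)).

def run(episodes=5000, cost=20.0, demand_price=60.0,
        alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    actions = [0.0, 10.0, 20.0, 30.0, 50.0]    # candidate markups over cost
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(actions)            # explore
        else:
            a = max(q, key=q.get)              # exploit current estimate
        bid = cost + a
        profit = (bid - cost) if bid <= demand_price else 0.0
        q[a] += alpha * (profit - q[a])
    return q

q = run()
best = max(q, key=q.get)
```

The learned optimum is the largest markup that still clears (30, for a bid of 50 against a demand price of 60); overbidding (markup 50) is rejected and never reinforced, which is the bid-shading logic the paper's richer model explores.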
By: | Buncic, Daniel; Stern, Cord |
Abstract: | We use a dynamic model averaging (DMA) approach to construct forecasts of individual equity returns for a large cross-section of stocks contained in the SP500, FTSE100, DAX30, CAC40 and SPX30 headline indices, taking value, momentum, and quality factors as predictor variables. Fixing the set of 'forgetting factors' in the DMA prediction framework, we show that highly significant return forecasts relative to the historical average benchmark are obtained for 173 (281) individual equities at the 1% (5%) level, from a total of 895 stocks. These statistical forecast improvements also translate into considerable economic gains, producing out-of-sample R² values above 5% (10%) for 283 (166) of the 895 individual stocks. Equally weighted long-only portfolios constructed from a ranking of the best 25% of forecasts in each headline index can generate sizable returns in excess of a passive investment strategy in the index itself, even after accounting for transaction costs and risk taking. |
Keywords: | Active factor models, model averaging and selection, computational finance, quantitative equity investing, stock selection strategies, return-based factor models. |
JEL: | C11 C52 F37 G11 G15 G17 |
Date: | 2018–11–25 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:90382&r=all |
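The role of the 'forgetting factors' can be sketched with the standard DMA weight recursion: model probabilities are flattened toward uniform by the forgetting exponent before being re-weighted by each model's predictive likelihood (an illustrative sketch in our own notation, not the authors' exact specification):

```python
# Dynamic model averaging weight update with a forgetting factor.
# Forgetting < 1 prevents any model's weight from collapsing to 0 or 1,
# so the averaging can adapt when the best-fitting model changes over time.

def dma_update(pi, likelihoods, forget=0.95):
    """One DMA step: forget, then condition on predictive likelihoods."""
    powered = [p ** forget for p in pi]            # forgetting step
    s = sum(powered)
    predicted = [p / s for p in powered]           # predicted model probabilities
    posterior = [w * l for w, l in zip(predicted, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]              # updated model probabilities

pi = [1 / 3] * 3
for _ in range(20):                # model 0 fits best in every period
    pi = dma_update(pi, likelihoods=[0.6, 0.3, 0.1])
```

After repeated updates the best-fitting model dominates, but forgetting keeps the weights bounded away from degeneracy, which is what makes the combination "dynamic".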
By: | Claude Meidinger (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Whether or not there is a pre-existing common "language" that ties down the literal meanings of cheap talk messages is a distinction plainly important in practice. But it is assumed to be irrelevant in traditional game theory, because it affects neither the payoff structure nor the theoretical possibilities for signaling. And when the "common-language" assumption is implicitly implemented in experiments, such situations ignore the meta-coordination problem created by communication: players must coordinate their beliefs on what various messages mean before they can use messages to coordinate on what to do. Using simulations with populations of artificial agents, this paper investigates the way in which a common meaning can be constituted through a collective process of learning, and compares the results thus obtained with those available from some experiments. |
Keywords: | Experimental Economics, Computational Economics, Signaling games |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01960762&r=all |
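The emergence of a common meaning without a pre-existing language can be sketched with reinforcement learning in a minimal two-state Lewis sender-receiver game (an illustrative sketch; the paper's learning dynamics and game may differ):

```python
import random

# 2-state / 2-signal / 2-action Lewis signaling game with urn reinforcement:
# sender and receiver start with no shared convention (uniform urn weights);
# whenever the receiver's action matches the state, the signal the sender
# used and the action the receiver took are both reinforced.

def play(rounds=20000, seed=1):
    rng = random.Random(seed)
    sender = [[1.0, 1.0], [1.0, 1.0]]     # sender[state][signal] urn weights
    receiver = [[1.0, 1.0], [1.0, 1.0]]   # receiver[signal][action] urn weights
    def draw(weights):
        return rng.choices([0, 1], weights=weights)[0]
    successes = 0
    for t in range(rounds):
        state = rng.randrange(2)
        signal = draw(sender[state])
        action = draw(receiver[signal])
        if action == state:               # coordination succeeds
            sender[state][signal] += 1.0
            receiver[signal][action] += 1.0
            if t >= rounds - 1000:
                successes += 1
    return successes / 1000               # success rate over the last 1000 rounds

rate = play()
```

With no built-in meanings, the population bootstraps a convention: which signal comes to "mean" which state is symmetric and depends on early random draws, which is precisely the meta-coordination problem the abstract highlights.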
By: | Shenhao Wang; Jinhua Zhao |
Abstract: | Deep neural networks (DNN) have been increasingly applied to microscopic demand analysis. While DNN often outperform the traditional multinomial logit (MNL) model, it is unclear whether we can obtain interpretable economic information from a DNN-based choice model beyond prediction accuracy. This paper provides an empirical method for numerically extracting valuable economic information such as choice probabilities, probability derivatives (or elasticities), and marginal rates of substitution. Using a survey collected in Singapore, we find that when the economic information is aggregated over the population or over models, DNN models can reveal roughly S-shaped choice probability curves, inverse bell-shaped driving probability derivatives with regard to costs and time, and a reasonable median value of time (VOT). However, at the disaggregate level, the choice probability curves of DNN models can be non-monotonically decreasing with costs and highly sensitive to the particular estimation; derivatives of choice probabilities with regard to costs and time can be positive in some regions; and the VOT can be infinite, undefined, zero, or arbitrarily large. Some of these patterns can be seen as counter-intuitive, while others can potentially be regarded as advantages of DNN, reflecting its flexibility in capturing certain behavioral peculiarities. These patterns broadly relate to two theoretical challenges of DNN: the irregularity of its probability space and large estimation errors. Overall, this study provides practical guidance for using DNN in demand analysis, with two suggestions. First, researchers can use numerical methods to obtain behaviorally intuitive choice probabilities, probability derivatives, and a reasonable VOT. Second, given the large estimation errors and the irregularity of the probability space of DNN, researchers should always ensemble, either over the population or over individual models, to obtain stable economic information. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.04528&r=all |
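The numerical extraction of economic information can be illustrated on a toy binary logit "model", where finite-difference probability derivatives recover the value of time exactly (illustrative only, with made-up taste parameters; the paper applies the same idea to trained DNNs):

```python
import math

BETA_COST, BETA_TIME = -0.05, -0.10   # assumed taste parameters of the toy model

def p_drive(cost, time):
    """Choice probability of 'drive' vs. an outside option with utility 0."""
    v = BETA_COST * cost + BETA_TIME * time
    return 1 / (1 + math.exp(-v))

def numerical_vot(cost, time, h=1e-5):
    """VOT as the marginal rate of substitution between time and cost,
    computed from central finite differences of the choice probability."""
    dp_dcost = (p_drive(cost + h, time) - p_drive(cost - h, time)) / (2 * h)
    dp_dtime = (p_drive(cost, time + h) - p_drive(cost, time - h)) / (2 * h)
    return dp_dtime / dp_dcost

vot = numerical_vot(cost=10.0, time=30.0)   # analytic VOT = BETA_TIME / BETA_COST
```

For a logit model the analytic ratio is simply BETA_TIME / BETA_COST; for a DNN the same finite-difference recipe applies, but, as the abstract warns, the result can vary across the input space and across estimations, hence the recommendation to ensemble.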
By: | Dmitriy Volinskiy (ATB Financial); Lana Cuthbertson (ATB Financial); Omid Ardakanian (University of Alberta) |
Abstract: | Economies, and societal structures in general, are complex stochastic systems which may not lend themselves well to algebraic analysis. Adding subjective value criteria to the mechanics of interacting agents complicates analysis further. The purpose of this short study is to demonstrate the capability of agent-based computational economics to serve as a platform for fairness or equity analysis, in both a broad and a practical sense. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.02311&r=all |
By: | Babak Mahdavi-Damghani; Konul Mustafayeva; Stephen Roberts; Cristin Buescu |
Abstract: | We investigate the problem of dynamic portfolio optimization in a continuous-time, finite-horizon setting for a portfolio of two stocks and one risk-free asset. The stocks follow the Cointelation model. The proposed optimization methods are twofold. In what we call a stochastic differential equation (SDE) approach, we compute the optimal weights using a mean-variance criterion and power utility maximization. We show that dynamically switching between these two optimal strategies by introducing a triggering function can further improve the portfolio returns. We contrast this with a machine learning clustering methodology inspired by the band-wise Gaussian mixture model. The first benefit of the machine learning approach over the SDE approach is that it achieves the same results through a simpler channel. The second advantage is its flexibility under regime change. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.10183&r=all |
By: | Nadine M Walters; Conrad Beyers; Gusti van Zyl; Rolf van den Heever |
Abstract: | We present a network-based framework for simulating systemic risk that considers shock propagation in banking systems. In particular, the framework allows the modeller to reflect a top-down framework where a shock to one bank in the system affects the solvency and liquidity position of other banks, through systemic market risks and consequential liquidity strains. We illustrate the framework with an application using South African bank balance sheet data. Spikes in simulated assessments of systemic risk agree closely with spikes in documented subjective assessments of this risk. This indicates that network models can be useful for monitoring systemic risk levels. The model results are sensitive to liquidity risk and market sentiment and therefore the related parameters are important considerations when using a network approach to systemic risk modelling. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.04223&r=all |