NEP: New Economics Papers on Computational Economics
Issue of 2021‒11‒01
fourteen papers chosen by
By: | Shareefuddin Mohammed; Rusty Bealer; Jason Cohen
Abstract: | In the world of advice and financial planning, there is seldom one right answer. While traditional algorithms have been successful in solving linear problems, their success often depends on choosing the right features from a dataset, which can be a challenge for nuanced financial planning scenarios. Reinforcement learning is a machine learning approach that can be employed with complex data sets where picking the right features can be nearly impossible. In this paper, we explore the use of machine learning for financial forecasting, predicting economic indicators, and creating a savings strategy. Vanguard's ML algorithm for goals-based financial planning is based on deep reinforcement learning and identifies optimal savings rates across multiple goals and sources of income to help clients achieve financial success. Rather than relying on formulas and rules, Vanguard's learning algorithms are trained to identify market indicators and behaviors too complex to capture otherwise, modeling the financial success trajectory of investors and their investment outcomes as a Markov decision process. We believe that reinforcement learning can be used to create value for advisors and end-investors, creating efficiency, more personalized plans, and data to enable customized solutions. [A toy sketch of the MDP framing follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.12003&r= |
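Editor's note: the abstract above frames an investor's trajectory as a Markov decision process solved with reinforcement learning. The following is a minimal sketch of that framing only -- tabular Q-learning on a toy savings problem, not Vanguard's deep RL algorithm; the states, savings rates, rewards, and dynamics are all invented for illustration.

```python
import numpy as np

# Toy MDP: state = wealth bucket, action = savings rate chosen each period.
# Illustrative only; not the Vanguard algorithm described in the paper.
rng = np.random.default_rng(0)
n_states, horizon = 10, 40
actions = np.array([0.00, 0.05, 0.10, 0.15, 0.20])  # hypothetical savings rates

def step(state, rate):
    """Wealth bucket drifts up with saving and is buffeted by market noise."""
    drift = rate * 10 + rng.normal(0, 0.5)
    next_state = int(np.clip(state + round(drift), 0, n_states - 1))
    # Immediate reward is current consumption, plus a bonus for reaching the goal.
    reward = (1 - rate) + (0.5 if next_state == n_states - 1 else 0.0)
    return next_state, reward

Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    s = 0
    for t in range(horizon):
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, actions[a])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
        s = s2

print("Learned savings rate per wealth bucket:", actions[Q.argmax(axis=1)])
```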
By: | Börschlein, Benjamin; Bossler, Mario |
JEL: | J31 J38 C49 C21 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc21:242441&r= |
By: | Tugce Karatas; Ali Hirsa |
Abstract: | Risk arbitrage, or merger arbitrage, is a well-known investment strategy that speculates on the success of M&A deals. Predicting deal status in advance is of great importance to risk arbitrageurs. If a deal is mistakenly classified as a completed deal, enormous costs can be incurred by investing in the target company's shares; conversely, misclassifying a deal that will complete means risk arbitrageurs lose the opportunity to profit. In this paper, we present an ML- and DL-based methodology for the takeover success prediction problem. We first apply various ML techniques for data preprocessing, such as kNN for data imputation, PCA for a lower-dimensional representation of numerical variables, MCA for categorical variables, and an LSTM autoencoder for sentiment scores. We experiment with different cost functions, evaluation metrics, and oversampling techniques to address class imbalance in our dataset. We then implement feedforward neural networks to predict deal status. Our preliminary results indicate that our methodology outperforms benchmark models such as logit and weighted logit models. We also integrate sentiment scores into our methodology using different model architectures, but our preliminary results show that performance changes little compared to the plain FFNN framework. As future work, we will explore different architectures and perform thorough hyperparameter tuning for the sentiment models. [A sketch of a comparable preprocessing-and-classification pipeline follows this entry.]
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.09315&r= |
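Editor's note: a rough sketch of a comparable preprocessing-and-classification pipeline on synthetic data -- kNN imputation, PCA, naive random oversampling for class imbalance, then a feedforward network. It mirrors the paper's methodology only loosely: MCA and the LSTM sentiment autoencoder are omitted, and every dataset detail below is invented.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for a deals dataset: 1000 deals, 20 numeric features,
# ~10% minority-class deals and some missing values. Purely illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) < 0.1).astype(int)          # 1 = deal fails (minority)
X[y == 1] += 0.7                                   # give the classes some signal
X[rng.random(X.shape) < 0.05] = np.nan             # sprinkle in missingness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# kNN imputation, then PCA for a lower-dimensional numeric representation.
imputer = KNNImputer(n_neighbors=5)
pca = PCA(n_components=10)
X_tr2 = pca.fit_transform(imputer.fit_transform(X_tr))
X_te2 = pca.transform(imputer.transform(X_te))

# Naive random oversampling of the minority class to address imbalance.
idx_min = np.where(y_tr == 1)[0]
boost = rng.choice(idx_min, size=4 * len(idx_min), replace=True)
X_bal = np.vstack([X_tr2, X_tr2[boost]])
y_bal = np.concatenate([y_tr, y_tr[boost]])

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_bal, y_bal)
print("F1 on held-out deals:", round(f1_score(y_te, clf.predict(X_te2)), 3))
```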
By: | Jaydip Sen; Rajdeep Sen; Abhishek Dutta |
Abstract: | The paradigm of machine learning and artificial intelligence has pervaded everyday life to such an extent that it is no longer an area reserved for esoteric academics and scientists working on challenging research problems. The evolution has been natural rather than accidental. With the exponential growth in processing speed and the emergence of smarter algorithms for solving complex and challenging problems, organizations have found it possible to harness humongous volumes of data to realize solutions with far-reaching business value. This introductory chapter highlights some of the challenges and barriers that organizations in the financial services sector currently encounter in adopting machine learning and artificial intelligence-based models and applications in their day-to-day operations.
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.11999&r= |
By: | Nadia Lakhal (LAMIDED, ISG, Université de Sousse); Asma Guizani (LIRSA - Laboratoire interdisciplinaire de recherche en sciences de l'action - CNAM - Conservatoire National des Arts et Métiers [CNAM]); Asma Sghaier (Department of Finance, University of Sousse, Sousse, Tunisia); Mohammed El Amine Abdelli (University of Brest); Imen Ben Slimene (UGA [2016-2019] - Université Grenoble Alpes [2016-2019])
Keywords: | CSR; Investment Efficiency; Machine learning; Stakeholder Theory
Date: | 2021–09–28 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03375264&r= |
By: | Maria Begicheva; Oleg Travkin; Alexey Zaytsev |
Abstract: | Macroeconomic indexes are of high importance for banks: many risk-control decisions rely on them. The typical workflow for evaluating these indexes is costly and protracted, with a lag of a couple of months between the reference date and the index becoming available. To make decisions in a rapidly changing environment, banks currently predict such indexes with autoregressive models; however, autoregressive models fail in complex scenarios such as the onset of a crisis. We propose instead to estimate such indexes from clients' financial transaction data at a large Russian bank. Transaction histories are long and the number of clients is huge, so we develop an efficient approach that allows fast and accurate estimation of macroeconomic indexes from a stream of millions of transactions. The approach combines a neural network paradigm with a smart sampling scheme. The results show that our neural network approach outperforms a baseline built on hand-crafted transaction features. The calculated embeddings show the correlation between clients' transaction activity and macroeconomic indexes over time. [A sketch of the sampling-and-aggregation idea follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.12000&r= |
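Editor's note: the key engineering idea above is estimating a slowly published macroeconomic index from samples of an enormous transaction stream. The sketch below shows only that sampling-and-aggregation skeleton, with simple summary features and a ridge regression standing in for the paper's neural network; the data-generating process is invented.

```python
import numpy as np
from sklearn.linear_model import Ridge

# The index for a month is estimated from a random sample of clients'
# transactions rather than the full (multi-million) stream. Illustrative only.
rng = np.random.default_rng(2)
n_months, n_clients = 60, 5000
latent = np.cumsum(rng.normal(size=n_months))          # unobserved macro index
# Each client's monthly spend co-moves noisily with the latent index.
spend = 100 + 5 * latent[:, None] + rng.normal(0, 20, size=(n_months, n_clients))

def month_embedding(month, sample_size=500):
    """Aggregate a random client sample into a fixed-size representation."""
    cols = rng.choice(n_clients, size=sample_size, replace=False)
    x = spend[month, cols]
    return np.array([x.mean(), x.std(), np.median(x), np.quantile(x, 0.9)])

E = np.stack([month_embedding(m) for m in range(n_months)])
model = Ridge().fit(E[:48], latent[:48])               # train on first 48 months
print("Holdout corr:", np.corrcoef(model.predict(E[48:]), latent[48:])[0, 1])
```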
By: | Artem Kuriksha |
Abstract: | This paper proposes a new way to model behavioral agents in dynamic macro-financial environments. Agents are described as neural networks and learn policies from idiosyncratic past experiences. I investigate the feedback between irrationality and past outcomes in an economy with heterogeneous shocks similar to Aiyagari (1994). In the model, the rational expectations assumption is seriously violated because learning of a decision rule for savings is unstable. Agents who fall into learning traps either save excessively or save nothing, which provides a candidate explanation for several empirical puzzles about wealth distribution. Neural network agents have a higher average marginal propensity to consume (MPC) and exhibit excess sensitivity of consumption. Learning can negatively affect intergenerational mobility. [A toy sketch of experience-driven neural agents follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.11582&r= |
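Editor's note: a toy rendering of agents described as neural networks that learn a savings rule from their own experienced outcomes. The tiny network, log-utility environment, and hill-climbing update are invented stand-ins for the paper's learning procedure; the point is only the experience-driven policy search.

```python
import numpy as np

# Toy sketch: a neural-network agent learning a savings rule from its own
# experienced outcomes. Illustrative only; not the paper's model.
rng = np.random.default_rng(3)

def policy(w, params):
    """Tiny one-hidden-layer network mapping wealth to a savings rate in (0,1)."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 * w + b1)
    return 1 / (1 + np.exp(-(W2 @ h + b2)))

def lifetime_utility(params, periods=50):
    """Simulate one agent's life under idiosyncratic income shocks."""
    w, u = 1.0, 0.0
    for t in range(periods):
        income = max(rng.normal(1.0, 0.5), 0.0)        # idiosyncratic shock
        s = policy(w, params)
        c = (1 - s) * (w + income)
        u += 0.96**t * np.log(c + 1e-9)                # discounted log utility
        w = 1.02 * s * (w + income)
    return u

# Hill-climbing on experienced utility: crude "learning from past outcomes".
params = [rng.normal(size=4), rng.normal(size=4), rng.normal(size=4), 0.0]
best = np.mean([lifetime_utility(params) for _ in range(20)])
for step in range(200):
    trial = [p + 0.1 * rng.normal(size=np.shape(p)) for p in params]
    score = np.mean([lifetime_utility(trial) for _ in range(20)])
    if score > best:
        params, best = trial, score

print("Savings rate at wealth 1.0:", round(float(policy(1.0, params)), 3))
```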
By: | Douglas Castilho; Tharsis T. P. Souza; Soong Moon Kang; Jo\~ao Gama; Andr\'e C. P. L. F. de Carvalho |
Abstract: | We propose a model that forecasts market correlation structure from link- and node-based financial network features using machine learning. To this end, market structure is modeled as a dynamic asset network by quantifying the time-dependent co-movement of asset price returns across company constituents of major global market indices. We provide empirical evidence using three different network filtering methods to estimate market structure, namely the Dynamic Asset Graph (DAG), Dynamic Minimal Spanning Tree (DMST) and Dynamic Threshold Networks (DTN). Experimental results show that the proposed model can forecast market structure with high predictive performance, with up to a $40\%$ improvement over a time-invariant correlation-based benchmark. Non-pair-wise correlation features proved important compared to the traditionally used pair-wise correlation measures for all markets studied, particularly in the long-term forecasting of stock market structure. Evidence is provided for stock constituents of the DAX30, EUROSTOXX50, FTSE100, HANGSENG50, NASDAQ100 and NIFTY50 market indices. The findings can be used to improve portfolio selection and risk management methods, which commonly rely on a backward-looking covariance matrix to estimate portfolio risk. [A sketch of the MST filtering step follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.11751&r= |
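Editor's note: a minimal sketch of one of the filtering methods named above, the Dynamic Minimal Spanning Tree, computed on a single rolling window of synthetic returns using the standard correlation distance sqrt(2(1 - rho)). The returns and window length are invented.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Build a Minimal Spanning Tree from one rolling window of toy returns.
rng = np.random.default_rng(4)
n_days, n_assets = 250, 8
common = rng.normal(size=(n_days, 1))
returns = 0.5 * common + rng.normal(size=(n_days, n_assets))  # toy co-movement

window = returns[-120:]                       # one rolling estimation window
corr = np.corrcoef(window, rowvar=False)
dist = np.sqrt(2 * (1 - corr))                # standard correlation distance
mst = minimum_spanning_tree(dist)             # the DMST for this window

edges = np.transpose(mst.nonzero())
print("MST edges (asset i -- asset j):")
for i, j in edges:
    print(f"  {i} -- {j}  distance {mst[i, j]:.3f}")
```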
By: | Michael Macgregor Perry; Hadi El-Amine |
Abstract: | In this paper we address the computational feasibility of the class of decision-theoretic models referred to as adversarial risk analysis (ARA). These are models in which a decision must be made with consideration for how an intelligent adversary may behave, where the adversary's decision-making process is unknown and is elicited by analyzing the adversary's decision problem using priors on his utility function and beliefs. The motivation of this research was to develop a computational algorithm that can be applied across a broad range of ARA models; to the best of our knowledge, no such algorithm currently exists. Using a two-person sequential model, we incrementally increase the size of the model and develop a simulation-based approximation of the true optimum where an exact solution is computationally impractical. In particular, we begin with a relatively large decision space by considering a theoretically continuous space that must be discretized. We then incrementally increase the number of strategic objectives, which causes the decision space to grow exponentially. The problem is exacerbated by the presence of an intelligent adversary who must also solve an exponentially large decision problem according to some unknown decision-making process. Nevertheless, using a stylized example that can be solved analytically, we show that our algorithm not only solves large ARA models quickly but also accurately selects the true optimal solution. Furthermore, the algorithm is sufficiently general that it can be applied to any ARA model with a large, yet finite, decision space. [A minimal simulation sketch of the ARA setup follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.12572&r= |
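Editor's note: a simulation sketch of the ARA template the abstract describes: place a prior on the adversary's unknown utility, simulate the adversary's best response, and choose the defence maximizing expected utility. The 3x3 payoff structure, prior, and costs are invented and far smaller than the paper's exponentially large decision spaces.

```python
import numpy as np

# Tiny ARA sketch: the defender is unsure of the attacker's utility, places a
# prior on it, simulates the attacker's best response, and picks the defence
# with the highest expected utility. All numbers are invented.
rng = np.random.default_rng(5)
defences, attacks = [0, 1, 2], [0, 1, 2]
success_prob = np.array([[0.8, 0.5, 0.2],     # P(attack succeeds | defence, attack)
                         [0.4, 0.6, 0.3],
                         [0.2, 0.3, 0.7]])

def defender_utility(d, success):
    return -10.0 * success - 1.0 * d          # breach loss plus defence cost

def simulated_attacker_choice(d, gain):
    """Attacker best-responds given a sampled utility parameter `gain`."""
    exp_u = [gain * success_prob[d, a] - 0.5 * a for a in attacks]
    return int(np.argmax(exp_u))

n_sims, expected = 20000, []
for d in defences:
    total = 0.0
    for _ in range(n_sims):
        gain = rng.gamma(shape=2.0, scale=3.0)    # prior over attacker's stake
        a = simulated_attacker_choice(d, gain)
        success = rng.random() < success_prob[d, a]
        total += defender_utility(d, success)
    expected.append(total / n_sims)

print("Expected defender utility per defence:", np.round(expected, 2))
print("ARA-optimal defence:", int(np.argmax(expected)))
```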
By: | Blanka Horvath; Zacharia Issa; Aitor Muguruza |
Abstract: | The problem of rapid and automated detection of distinct market regimes is of great interest to financial mathematicians and practitioners alike. In this paper, we outline an unsupervised learning algorithm for clustering financial time series into a suitable number of temporal segments (market regimes). As a special case, we develop a robust algorithm that automates the process of classifying market regimes. The method is robust in the sense that it does not depend on modelling assumptions about the underlying time series, as our experiments with real datasets show. This method -- dubbed the Wasserstein $k$-means algorithm -- frames the problem as one on the space of probability measures with finite $p^\text{th}$ moment, in terms of the $p$-Wasserstein distance between (empirical) distributions. We compare our WK-means approach with more traditional clustering algorithms by studying the so-called maximum mean discrepancy scores between and within clusters. In both cases, the WK-means algorithm vastly outperforms all considered competitor approaches. We demonstrate the performance of all approaches both in a controlled environment on synthetic data and on real data. [A one-dimensional sketch of the Wasserstein $k$-means idea follows this entry.]
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.11848&r= |
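Editor's note: a one-dimensional sketch of the Wasserstein k-means idea on synthetic two-regime data. Segment assignment uses SciPy's 1-Wasserstein distance; the centroid update averages sorted samples, which is the Wasserstein-2 barycenter for equal-length empirical samples. This is a simplification for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Treat each time-series segment as an empirical return distribution, cluster
# segments by Wasserstein distance, update centroids by quantile averaging.
rng = np.random.default_rng(6)
calm = rng.normal(0.0, 0.5, size=(30, 100))     # 30 low-volatility segments
wild = rng.normal(0.0, 2.0, size=(30, 100))     # 30 high-volatility segments
segments = np.vstack([calm, wild])

k, n_iter = 2, 10
centroids = segments[rng.choice(len(segments), size=k, replace=False)]
for _ in range(n_iter):
    # Assignment step: nearest centroid in 1-Wasserstein distance.
    labels = np.array([
        np.argmin([wasserstein_distance(s, c) for c in centroids])
        for s in segments
    ])
    # Update step: element-wise mean of sorted member samples
    # (the W2 barycenter for equal-length empirical samples).
    for j in range(k):
        members = segments[labels == j]
        if len(members):
            centroids[j] = np.sort(members, axis=1).mean(axis=0)

print("Cluster sizes:", np.bincount(labels))
print("Centroid std devs (regime volatility):", np.round(centroids.std(axis=1), 2))
```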
By: | Shumilov, Andrei |
Abstract: | This paper presents a survey of studies analyzing various uncertainties in integrated assessment models (IAMs) of the economics of climate change. Applications of techniques for both deterministic models (Monte Carlo simulation, sensitivity analysis) and stochastic IAMs (stochastic dynamic programming) are reviewed. [A Monte Carlo sketch follows this entry.]
Keywords: | greenhouse gas emissions; global warming; integrated assessment models; uncertainty
JEL: | C6 D81 Q54 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:110171&r= |
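Editor's note: a Monte Carlo sketch of the uncertainty-propagation technique surveyed for deterministic IAMs: draw an uncertain climate-sensitivity parameter and push it through a toy damage function. The functional forms and numbers are invented; real IAMs such as DICE are far richer.

```python
import numpy as np

# Propagate uncertainty in a climate parameter through a toy damage function.
rng = np.random.default_rng(7)
n_draws = 100_000

# Uncertain equilibrium climate sensitivity (degrees C per CO2 doubling),
# given an invented lognormal prior centred near 3 C.
ecs = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n_draws)

forcing_doublings = 1.2                       # assumed emissions scenario
warming = ecs * forcing_doublings
damage_frac = 0.002 * warming**2              # toy quadratic damage function
gdp_loss = 100e12 * damage_frac               # loss on a $100tn world economy

print(f"Mean warming: {warming.mean():.2f} C")
print(f"Damage share of GDP: mean {damage_frac.mean():.2%}, "
      f"95th pct {np.quantile(damage_frac, 0.95):.2%}")
print(f"Mean GDP loss: ${gdp_loss.mean() / 1e12:.2f}tn")
```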
By: | Lorenzo Di Domenico (University of Warsaw (PL)) |
Abstract: | The paper discusses the implications of disaggregation for the theoretical debate on the long-run convergence of the degree of capacity utilization towards the normal one. To this end, we develop an Agent-Based Stock-Flow Consistent version of a demand-led growth model based on the capacity adjustment principle, a fixed normal rate of capacity utilization, and a non-capacity-creating autonomous component of demand. We show that, once the implicit assumption of centralized control over aggregate productive capacity that characterizes aggregate models is removed, the economy displays emergent properties: business-cycle fluctuations arise endogenously, and the long-run aggregate degree of capacity utilization fluctuates around a level lower than the normal one. These properties help explain the empirical evidence on the tendency toward under-utilization of productive capacity, and they refute both the traditional wisdom according to which only one degree of capacity utilization (the normal one) is compatible with stable accumulation and the neo-Kaleckian “closure”. To this extent, we point out that the long-run growth path determined within a Supermultiplier model can in some respects exhibit neo-Kaleckian features but, unlike the latter, this “undesired equilibrium” does not display Harrodian instability: in the quasi-steady state, firms keep trying to restore the exogenously given normal degree of capacity utilization without succeeding. The emergent phenomena derive precisely from considering a multiplicity of firms rather than an aggregate macro firm, and not from their heterogeneity. In particular, for any given distribution of demand across firms, decentralized control over aggregate productive capacity produces over-investment relative to the normal growth path. [A toy sketch of the decentralized adjustment mechanism follows this entry.]
Keywords: | Post-Keynesian economics; Economic growth; Agent Based – Stock Flow Consistent models |
JEL: | C63 E11 E12 O42 P16 |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:pke:wpaper:pkwp2116&r= |
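Editor's note: a deliberately minimal toy of the decentralized capacity-adjustment mechanism discussed above: many firms each invest to push their own utilization toward the normal rate while aggregate demand, growing autonomously, is split stochastically across firms. All parameters are invented and this is not the paper's AB-SFC model; it only makes the mechanism concrete.

```python
import numpy as np

# Many firms adjust capacity toward a normal utilization rate, with demand
# reallocated randomly across firms each period. Illustrative toy only.
rng = np.random.default_rng(8)
n_firms, n_periods = 500, 300
u_normal, g_demand = 0.8, 0.02

capacity = np.ones(n_firms)
demand_total = u_normal * capacity.sum()
for t in range(n_periods):
    demand_total *= 1 + g_demand                       # autonomous demand growth
    shares = rng.dirichlet(np.full(n_firms, 50.0))     # stochastic demand split
    demand = demand_total * shares
    utilization = np.minimum(demand / capacity, 1.0)   # firms cannot exceed capacity
    # Each firm adjusts its own capacity toward the normal utilization rate.
    capacity *= 1 + 0.5 * (utilization - u_normal)
    capacity = np.maximum(capacity, 1e-6)

print("Average firm utilization:", round(float(utilization.mean()), 3))
print("Normal rate:", u_normal)
```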
By: | Jonathan Taglialatela; Andrea Mina |
Abstract: | The paper focuses on the capital structure of firms in their early years of operation. Through the lens of pecking order theory, we study how the pursuit of innovation influences firms' reliance on different types of internal and external finance. Panel analyses of data on 7,394 German start-ups show that innovation activities are relevant predictors of start-ups' revealed preferences for finance, and that the nature of these effects on the type and order of financing sources depends on the degree of information asymmetries specific to research and development activities, human capital endowments, and the market introduction of new products and processes.
Keywords: | Innovation; information asymmetries; start-up; pecking order; entrepreneurial finance. |
Date: | 2021–10–23 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2021/36&r= |
By: | Nada Wasi; Chinnawat Devahastin Na Ayudhya; Pucktada Treeratpituk; Chommanart Nittayo |
Abstract: | While understanding labor market dynamics is crucial for designing a country's social protection programs, longitudinal surveys are prohibitively expensive and rarely available in less developed countries. We illustrate that employment histories from Social Security records can provide several important insights, using data from a middle-income country, Thailand. First, contrary to the traditional view, we find that the formal and informal sectors are quite connected. Our machine learning analysis of millions of individual histories shows that more than half of registered workers left the formal sector, either seasonally or permanently, long before their retirement age. This finding raises the question of whether social protection schemes designed separately for formal and informal workers are effective. Second, these semi-formal workers also had a much flatter wage-age profile than those who always stayed in the formal sector. This observation calls for effective redistributive tools to prevent earnings inequality from translating into disparities in old age and transmitting to the next generation. Lastly, regarding employer size, we find that almost half of formally registered firms had fewer than five employees, the benchmark often used to define informal firms. This result suggests that the distribution of firm sizes differs across countries and that employer size alone is unlikely to be sufficient to define informal workers. [A sketch of the history-clustering step follows this entry.]
Keywords: | Employment; Work History; Social Security; K-means Clustering; Thailand |
JEL: | J01 J08 J21 J60 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:pui:dpaper:147&r= |
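Editor's note: a sketch of the clustering step described above: encode each worker's monthly formal-sector status as a 0/1 sequence and group the histories with k-means. The three simulated history types (always formal, seasonal leavers, permanent leavers) are invented to echo the paper's findings, not drawn from its data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster simulated monthly formal-sector status sequences with k-means.
rng = np.random.default_rng(9)
n_workers, n_months = 2000, 120

histories = np.zeros((n_workers, n_months))
kind = rng.integers(3, size=n_workers)
histories[kind == 0] = 1                                   # always formal
seasonal = np.tile((np.arange(n_months) % 12) < 8, (np.sum(kind == 1), 1))
histories[kind == 1] = seasonal                            # seasonal leavers
exit_month = rng.integers(24, 96, size=np.sum(kind == 2))
for row, m in zip(np.where(kind == 2)[0], exit_month):
    histories[row, :m] = 1                                 # permanent leavers

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(histories)
for c in range(3):
    share = histories[km.labels_ == c].mean()
    print(f"Cluster {c}: {np.sum(km.labels_ == c)} workers, "
          f"avg months formal {share * n_months:.0f}")
```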