nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒11‒09
25 papers chosen by



  1. Deep Reinforcement Learning for Asset Allocation in US Equities By Miquel Noguer i Alonso; Sonam Srivastava
  2. Is Image Encoding Beneficial for Deep Learning in Finance? An Analysis of Image Encoding Methods for the Application of Convolutional Neural Networks in Finance By Dan Wang; Tianrui Wang; Ionu\c{t} Florescu
  3. Interpretable Neural Networks for Panel Data Analysis in Economics By Yucheng Yang; Zhong Zheng; Weinan E
  4. A random forest-based approach to identifying the most informative seasonality tests By Ollech, Daniel; Webel, Karsten
  5. How to Talk When a Machine is Listening: Corporate Disclosure in the Age of AI By Sean Cao; Wei Jiang; Baozhong Yang; Alan L. Zhang
  6. The Effects of Financial Decoupling of the U.S. and China: Simulations with a Global Financial CGE Model By P.B. Dixon; J.A. Giesecke; J. Nassios; M.T. Rimmer
  7. Analysis of the impact of maker-taker fees on the stock market using agent-based simulation By Isao Yagi; Mahiro Hoshino; Takanobu Mizuta
  8. Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications By David Byrd; Antigoni Polychroniadou
  9. Data science in economics: comprehensive review of advanced machine learning and deep learning methods By Nosratabadi, Saeed; Mosavi, Amir; Duan, Puhong; Ghamisi, Pedram; Filip, Ferdinand; Band, Shahab S.; Reuter, Uwe; Gama, Joao; Gandomi, Amir H.
  10. Bridging the gap between Markowitz planning and deep reinforcement learning By Eric Benhamou; David Saltiel; Sandrine Ungari; Abhishek Mukhopadhyay
  11. Theory-based residual neural networks: A synergy of discrete choice models and deep neural networks By Shenhao Wang; Baichuan Mo; Jinhua Zhao
  12. Oil-Price Uncertainty and the U.K. Unemployment Rate: A Forecasting Experiment with Random Forests Using 150 Years of Data By Rangan Gupta; Christian Pierdzioch; Afees A. Salisu
  13. When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders By Elior Nehemya; Yael Mathov; Asaf Shabtai; Yuval Elovici
  14. Transnational machine learning with screens for flagging bid-rigging cartels By Huber, Martin; Imhof, David
  15. Automated coding using machine-learning and remapping the U.S. nonprofit sector: A guide and benchmark By Ma, Ji
  16. Deciphering Federal Reserve Communication via Text Analysis of Alternative FOMC Statements By Taeyoung Doh; Dongho Song; Shu-Kuei X. Yang
  17. The Economy, the Pandemic, and Machine Learning By Patrick T. Harker
  18. Lessons from Pandemics: Computational agent-based model approach for estimation of downstream and upstream measures to achieve requisite societal behavioural changes By Pradipta Banerjee; Subhrabrata Choudhury
  19. Management and Planning of Airport Gate Capacity: A New Microcomputer Based Gate Assignment Simulation Model By Hamzawi, Salah G.
  20. Are Crises Predictable? A Review of the Early Warning Systems in Currency and Stock Markets By Peiwan Wang; Lu Zong
  21. Contracting, pricing, and data collection under the AI flywheel effect By Huseyin Gurkan; Francis de Véricourt
  22. Towards Self-Regulating AI: Challenges and Opportunities of AI Model Governance in Financial Services By Eren Kurshan; Hongda Shen; Jiahao Chen
  23. Information Coefficient as a Performance Measure of Stock Selection Models By Feng Zhang; Ruite Guo; Honggao Cao
  24. Using Fuzzy Approach to Model Skill Shortage in Vietnam’s Labor Market in the Context of Industry 4.0 By Tien Ha Duong, My; Van Nguyen, Diep; Thanh Nguyen, Phong
  25. Piecewise-Linear Approximations and Filtering for DSGE Models with Occasionally Binding Constraints By S. Borağan Aruoba; Pablo Cuba-Borda; Kenji Higa-Flores; Frank Schorfheide; Sergio Villalvazo

  1. By: Miquel Noguer i Alonso; Sonam Srivastava
    Abstract: Reinforcement learning is a machine learning approach concerned with solving dynamic optimization problems in an almost model-free way by maximizing a reward function over state and action spaces. This property makes it an exciting area of research for financial problems. Asset allocation, where the goal is to obtain the asset weights that maximize rewards in a given state of the market while accounting for risk and transaction costs, is a problem easily framed in a reinforcement learning setting. It is first a prediction problem for expected returns and the covariance matrix, and then an optimization problem over returns, risk, and market impact. Investors and financial researchers have long worked with approaches such as mean-variance optimization, minimum variance, risk parity, and equal weighting, along with several methods for making predictions of expected returns and covariance matrices more robust. This paper demonstrates the application of reinforcement learning to create a financially model-free solution to the asset allocation problem, learning to solve the problem directly from time series using deep neural networks. We demonstrate this on daily data for the top 24 stocks in the US equities universe with daily rebalancing. We apply a deep reinforcement learning model to US stocks using different architectures: Long Short-Term Memory networks, Convolutional Neural Networks, and Recurrent Neural Networks, and compare them with more traditional portfolio management. The deep reinforcement learning approach shows better results than traditional approaches using a simple reward function and being given only the time series of stock prices. In finance, generalization from training error to test error is never guaranteed. We can say that the modeling framework can deal with time series prediction and asset allocation, including transaction costs.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.04404&r=all
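    A minimal illustrative sketch (not from the paper) of the reward computation such a reinforcement-learning asset-allocation environment might use: the reward is the portfolio return under the chosen weights net of proportional transaction costs on turnover. The cost rate and toy prices are assumptions for illustration.
```python
import numpy as np

def step_reward(prices_t, prices_t1, w_prev, w_new, cost_rate=0.001):
    """One environment step: earn the period return on the new weights and
    pay proportional transaction costs on the amount traded (turnover)."""
    asset_returns = prices_t1 / prices_t - 1.0
    turnover = np.abs(w_new - w_prev).sum()
    return float(w_new @ asset_returns) - cost_rate * turnover

# toy example with three assets
p0 = np.array([100.0, 50.0, 20.0])
p1 = np.array([101.0, 49.5, 20.4])
w_old = np.array([1/3, 1/3, 1/3])
w_new = np.array([0.5, 0.3, 0.2])
print(step_reward(p0, p1, w_old, w_new))
```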
  2. By: Dan Wang; Tianrui Wang; Ionu\c{t} Florescu
    Abstract: In 2012, the SEC mandated that all corporate filings for any company doing business in the US be entered into the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system. In this work we investigate ways to analyze the data available through the EDGAR database. This may help portfolio managers (pension funds, mutual funds, insurers, hedge funds) obtain automated insights into the companies they invest in and better manage their portfolios. The analysis is based on artificial neural networks applied to the data. In particular, one of the most popular machine learning methods, the Convolutional Neural Network (CNN) architecture, originally developed to interpret and classify images, is now being used to interpret financial data. This work investigates the best way to input data collected from the SEC filings into a CNN architecture. We incorporate accounting principles and mathematical methods into the design of three image encoding methods. Specifically, two methods are derived from accounting principles (Sequential Arrangement, Category Chunk Arrangement) and one uses a purely mathematical technique (Hilbert Vector Arrangement). We analyze fundamental financial data as well as financial ratio data and study companies from the financial, healthcare, and IT sectors in the United States. We find that using imaging techniques to input data into a CNN works better for financial ratio data but is not significantly better than simply using the 1D input directly for fundamental data. We do not find the Hilbert Vector Arrangement technique to be significantly better than the other imaging techniques.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.08698&r=all
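    A minimal sketch of the kind of encoding the paper compares, showing only the simplest case (a sequential arrangement of a 1-D feature vector into a 2-D grid for a CNN). The grid size and random features are assumptions; the Category Chunk and Hilbert Vector arrangements would replace the index mapping.
```python
import numpy as np

def sequential_encode(features, side=8, fill=0.0):
    """Place a 1-D vector of fundamentals or ratios row by row into a
    side x side single-channel 'image', padding unused cells with `fill`."""
    img = np.full(side * side, fill, dtype=float)
    n = min(len(features), side * side)
    img[:n] = features[:n]
    return img.reshape(side, side)

ratios = np.random.default_rng(1).normal(size=40)  # e.g. 40 financial ratios
image = sequential_encode(ratios)                  # 8 x 8 input for a CNN
print(image.shape)
```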
  3. By: Yucheng Yang; Zhong Zheng; Weinan E
    Abstract: The lack of interpretability and transparency prevents economists from using advanced tools like neural networks in their empirical work. In this paper, we propose a new class of interpretable neural network models that can achieve both high prediction accuracy and interpretability in regression problems with time series cross-sectional data. Our model can essentially be written as a simple function of a limited number of interpretable features. In particular, we incorporate a class of interpretable functions named persistent change filters as part of the neural network. We apply this model to predicting individuals' monthly employment status using high-dimensional administrative data from China. We achieve an accuracy of 94.5% on the out-of-sample test set, which is comparable to the most accurate conventional machine learning methods. Furthermore, the interpretability of the model allows us to understand the mechanism that underlies the ability to predict employment status using administrative data: an individual's employment status is closely related to whether she pays different types of insurance. Our work is a useful step towards overcoming the "black box" problem of neural networks and provides a promising new tool for economists to study administrative and proprietary big data.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.05311&r=all
  4. By: Ollech, Daniel; Webel, Karsten
    Abstract: Virtually every seasonal adjustment software package includes an ensemble of seasonality tests for assessing whether a given time series is in fact a candidate for seasonal adjustment. However, such tests are certain to produce either the same result or conflicting results, raising the question of whether there is a method capable of identifying the most informative tests in order (1) to eliminate the seemingly non-informative ones in the former case and (2) to reach a final decision in the more severe latter case. We argue that identifying the seasonal status of a given time series is essentially a classification problem and, thus, can be solved with machine learning methods. Using simulated seasonal and non-seasonal ARIMA processes that are representative of the Bundesbank's time series database, we compare several popular methods with respect to accuracy, interpretability, and availability of unbiased variable importance measures, and find random forests of conditional inference trees to be the method that best balances these key requirements. Applying this method to the seasonality tests implemented in the seasonal adjustment software JDemetra+ finally reveals that the modified QS and Friedman tests yield by far the most informative results.
    Keywords: binary classification, conditional inference trees, correlated predictors, JDemetra+, simulation study, supervised machine learning
    JEL: C12 C14 C22 C45 C63
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:552020&r=all
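    An illustrative sketch of the classification idea (not the paper's setup, which uses conditional inference forests and the JDemetra+ test suite): simulate seasonal and non-seasonal monthly series, compute two toy seasonality screens per series, and read the variable importances from a standard random forest, assuming scikit-learn is available.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate(seasonal, n=120):
    """Toy monthly series: slowly drifting level plus noise, with an
    optional deterministic seasonal cycle."""
    t = np.arange(n)
    y = 0.1 * rng.normal(size=n).cumsum() + rng.normal(size=n)
    if seasonal:
        y += 2.0 * np.sin(2 * np.pi * t / 12)
    return y

def screens(y):
    """Two toy seasonality screens: lag-12 autocorrelation and the ratio of
    the variance of monthly means to the overall variance."""
    y = y - y.mean()
    acf12 = np.corrcoef(y[12:], y[:-12])[0, 1]
    monthly_means = np.array([y[m::12].mean() for m in range(12)])
    return [acf12, monthly_means.var() / y.var()]

labels = np.array([1, 0] * 200)
X = np.array([screens(simulate(bool(s))) for s in labels])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(dict(zip(["acf12", "monthly_var_ratio"], rf.feature_importances_.round(3))))
```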
  5. By: Sean Cao; Wei Jiang; Baozhong Yang; Alan L. Zhang
    Abstract: This paper analyzes how corporate disclosure has been reshaped by machine processors, employed by algorithmic traders, robot investment advisors, and quantitative analysts. Our findings indicate that increasing machine and AI readership, proxied by machine downloads, motivates firms to prepare filings that are more friendly to machine parsing and processing. Moreover, firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers, such as by differentially avoiding words that are perceived as negative by computational algorithms as compared to those by human readers, and by exhibiting speech emotion favored by machine learning software processors. The publication of Loughran and McDonald (2011) is instrumental in attributing the change in the measured sentiment to machine and AI readership. While existing research has explored how investors and researchers apply machine learning and computational tools to quantify qualitative information from disclosure and news, this study is the first to identify and analyze the feedback effect on corporate disclosure decisions, i.e., how companies adjust the way they talk knowing that machines are listening.
    JEL: G14 G30
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27950&r=all
  6. By: P.B. Dixon; J.A. Giesecke; J. Nassios; M.T. Rimmer
    Abstract: We add a financial module to the GTAP model, built around an 18-region asset-liability matrix. We simulate financial decoupling between the U.S. and China. We find that the U.S. would gain by limiting its capital flows to China, leading to a redirection of finance to the domestic economy. This would stimulate investment in the U.S., with favorable effects on employment, capital stocks, real GDP, wealth, and real wage rates. At the same time, investment in China would decline, with negative effects on the Chinese economy. Similarly, China would gain by limiting its capital flows to the U.S., and the U.S. would lose. In a tit-for-tat situation in which each country reduces its financial-asset holdings in the other country by x per cent, the winner would be China. We conduct additional simulations to compare the effects of trade decoupling with those of financial decoupling.
    Keywords: Financial decoupling, U.S.-China economic relations, Trade decoupling, Financial module in GTAP, CGE simulations
    JEL: C68 F17 F37 F51
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-309&r=all
  7. By: Isao Yagi; Mahiro Hoshino; Takanobu Mizuta
    Abstract: Most stock exchanges in the U.S. now employ maker-taker fees, in which an exchange pays rebates to traders placing orders in the order book and charges fees to traders taking orders from the order book. Maker-taker fees encourage traders to place many orders that provide market liquidity to the exchange. However, it is not clear how maker-taker fees affect the total cost of a taking order, including all the charged fees and the market impact. In this study, we investigated the effect of maker-taker fees on the total cost of a taking order with our artificial market model, which is an agent-based model of financial markets. We found that maker-taker fees encourage market efficiency but increase the total costs of taking orders.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.08992&r=all
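    A minimal sketch, with assumed figures rather than the paper's agent-based model, of the total cost of a taking order the abstract refers to: the explicit taker fee plus the implicit market impact measured against the pre-trade mid price.
```python
def taking_cost(mid, exec_price, qty, taker_fee_rate):
    """Total cost of a taking (marketable) order: the taker fee charged by
    the exchange plus market impact relative to the pre-trade mid price."""
    fee = taker_fee_rate * exec_price * qty
    impact = (exec_price - mid) * qty  # positive when buying above the mid
    return fee + impact

# e.g. buying 1,000 shares at 100.05 against a 100.00 mid with a 3 bps taker fee
print(taking_cost(mid=100.00, exec_price=100.05, qty=1_000, taker_fee_rate=0.0003))
```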
  8. By: David Byrd; Antigoni Polychroniadou
    Abstract: Federated Learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model while keeping each client's data within its own local systems. This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters. Most federated learning systems therefore use differential privacy to introduce noise to the parameters. This adds uncertainty to any attempt to reveal private client data, but also reduces the accuracy of the shared model, limiting the useful scale of privacy-preserving noise. A system can further reduce the coordinating server's ability to recover private client information, without additional accuracy loss, by also including secure multiparty computation. An approach combining both techniques is especially relevant to financial firms as it allows new possibilities for collaborative learning without exposing sensitive client data. This could produce more accurate models for important tasks like optimal trade execution, credit origination, or fraud detection. The key contributions of this paper are: We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform which we have adapted for the development of federated learning systems.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.05867&r=all
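    A toy numerical sketch of the two ingredients the paper combines, under assumed sizes and noise scale: each client perturbs its local update with Gaussian noise (differential privacy) and adds pairwise masks that cancel when the server sums the updates (secure aggregation), so the server learns only the noisy aggregate, never an individual update.
```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, sigma = 4, 3, 0.1

updates = [rng.normal(size=dim) for _ in range(n_clients)]  # local model updates

# 1) differential privacy: each client adds Gaussian noise locally
noisy = [u + rng.normal(scale=sigma, size=dim) for u in updates]

# 2) secure aggregation: pairwise additive masks that cancel in the sum
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = noisy[i].copy()
    for j in range(n_clients):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

# the server sees only masked updates, yet their sum equals the noisy aggregate
print(np.allclose(sum(masked), sum(noisy)))
```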
  9. By: Nosratabadi, Saeed; Mosavi, Amir; Duan, Puhong; Ghamisi, Pedram; Filip, Ferdinand; Band, Shahab S.; Reuter, Uwe; Gama, Joao; Gandomi, Amir H.
    Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.
    Date: 2020–10–15
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:yc6e2&r=all
  10. By: Eric Benhamou; David Saltiel; Sandrine Ungari; Abhishek Mukhopadhyay
    Abstract: Researchers in the asset management industry have mostly focused on techniques based on financial and risk planning, such as the Markowitz efficient frontier, minimum variance, maximum diversification, or equal risk parity. In parallel, the machine learning community has been working on reinforcement learning, and more particularly deep reinforcement learning, to solve other decision-making problems for challenging tasks such as autonomous driving, robot learning, and, on a more conceptual side, game playing such as Go. This paper aims to bridge the gap between these two approaches by showing that Deep Reinforcement Learning (DRL) techniques can shed new light on portfolio allocation thanks to a more general optimization setting that casts portfolio allocation as an optimal control problem: not a one-step optimization, but rather a continuous control optimization with a delayed reward. The advantages are numerous: (i) DRL maps market conditions directly to actions by design and hence should adapt to a changing environment, (ii) DRL does not rely on traditional financial risk assumptions, such as risk being represented by variance, (iii) DRL can incorporate additional data and act as a multi-input method, as opposed to more traditional optimization methods. We present encouraging experimental results using convolutional networks.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.09108&r=all
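    For contrast with the DRL approach, a minimal sketch of the Markowitz mean-variance baseline the abstract refers to, with assumed expected returns and covariances and without the long-only or leverage constraints used in practice.
```python
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=1.0):
    """Unconstrained Markowitz solution w = (1/gamma) * inv(cov) @ mu,
    rescaled to sum to one (practical constraints omitted for brevity)."""
    raw = np.linalg.solve(cov, mu) / risk_aversion
    return raw / raw.sum()

mu = np.array([0.05, 0.03, 0.04])                 # expected returns
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.02],
                [0.01, 0.02, 0.06]])              # return covariance matrix
print(mean_variance_weights(mu, cov).round(3))
```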
  11. By: Shenhao Wang; Baichuan Mo; Jinhua Zhao
    Abstract: Researchers often treat data-driven and theory-driven models as two disparate or even conflicting methods in travel behavior analysis. However, the two methods are highly complementary because data-driven methods are more predictive but less interpretable and robust, while theory-driven methods are more interpretable and robust but less predictive. Using their complementary nature, this study designs a theory-based residual neural network (TB-ResNet) framework, which synergizes discrete choice models (DCMs) and deep neural networks (DNNs) based on their shared utility interpretation. The TB-ResNet framework is simple, as it uses a ($\delta$, 1-$\delta$) weighting to take advantage of DCMs' simplicity and DNNs' richness, and to prevent underfitting from the DCMs and overfitting from the DNNs. This framework is also flexible: three instances of TB-ResNets are designed based on multinomial logit model (MNL-ResNets), prospect theory (PT-ResNets), and hyperbolic discounting (HD-ResNets), which are tested on three data sets. Compared to pure DCMs, the TB-ResNets provide greater prediction accuracy and reveal a richer set of behavioral mechanisms owing to the utility function augmented by the DNN component in the TB-ResNets. Compared to pure DNNs, the TB-ResNets can modestly improve prediction and significantly improve interpretation and robustness, because the DCM component in the TB-ResNets stabilizes the utility functions and input gradients. Overall, this study demonstrates that it is both feasible and desirable to synergize DCMs and DNNs by combining their utility specifications under a TB-ResNet framework. Although some limitations remain, this TB-ResNet framework is an important first step to create mutual benefits between DCMs and DNNs for travel behavior modeling, with joint improvement in prediction, interpretation, and robustness.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.11644&r=all
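    A minimal numerical sketch of the ($\delta$, 1-$\delta$) utility weighting the abstract describes: a linear-in-parameters DCM utility is combined with a small neural-network residual and converted to choice probabilities via softmax. All dimensions and parameters are assumptions, and training is omitted.
```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(u):
    e = np.exp(u - u.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tb_resnet_utilities(x, beta, W1, W2, delta=0.8):
    """Combined utility: delta * linear DCM utility + (1 - delta) * DNN residual."""
    u_dcm = x @ beta                  # linear-in-parameters utility per alternative
    u_dnn = np.tanh(x @ W1) @ W2      # one-hidden-layer residual network
    return delta * u_dcm + (1 - delta) * u_dnn

x = rng.normal(size=(3, 5))           # 3 alternatives, 5 attributes each
beta = rng.normal(size=5)
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=8)
print(softmax(tb_resnet_utilities(x, beta, W1, W2)).round(3))
```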
  12. By: Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany); Afees A. Salisu (Centre for Econometric & Allied Research, University of Ibadan, Ibadan, Nigeria)
    Abstract: We analyze the predictive role of oil-price uncertainty for changes in the UK unemployment rate using more than a century of monthly data covering the period from 1859 to 2020. To this end, we use a machine-learning technique known as random forests. Random forests render it possible to model the potentially nonlinear link between oil-price uncertainty and subsequent changes in the unemployment rate in an entirely data-driven way, where it is possible to control for the impact of several other macroeconomic variables and other macroeconomic and financial uncertainties. Upon estimating random forests on rolling-estimation windows, we find evidence that oil-price uncertainty predicts out-of-sample changes in the unemployment rate, where the relative importance of oil-price uncertainty has undergone substantial swings during the history of the modern petroleum industry that started with the drilling of the first oil well at Titusville (Pennsylvania, United States) in 1859.
    Keywords: Machine learning, Random forests, Oil uncertainty, Macroeconomic and financial uncertainties, Unemployment rate, United Kingdom
    JEL: C22 C53 E24 E43 F31 G10 Q02
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202095&r=all
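    An illustrative sketch of rolling-window random-forest forecasting on simulated data (not the authors' 1859-2020 dataset), assuming scikit-learn: the forest is re-estimated on a moving window and used to predict the next-period change out of sample.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
T, window = 220, 120
oil_unc = np.abs(rng.normal(size=T))     # stand-in for oil-price uncertainty
controls = rng.normal(size=(T, 3))       # other macro/financial predictors
dy = 0.3 * oil_unc + controls @ np.array([0.1, -0.2, 0.05]) \
     + rng.normal(scale=0.5, size=T)     # simulated change in unemployment

X = np.column_stack([oil_unc, controls])
errors = []
for t in range(window, T - 1):
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    rf.fit(X[t - window:t], dy[t - window + 1:t + 1])  # predictors lead the change by one period
    errors.append(dy[t + 1] - rf.predict(X[t:t + 1])[0])
print("out-of-sample RMSE:", round(float(np.sqrt(np.mean(np.square(errors)))), 3))
```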
  13. By: Elior Nehemya; Yael Mathov; Asaf Shabtai; Yuval Elovici
    Abstract: In recent years, machine learning has become prevalent in numerous tasks, including algorithmic trading. Stock market traders utilize learning models to predict the market's behavior and execute an investment strategy accordingly. However, learning models have been shown to be susceptible to input manipulations called adversarial examples. Yet, the trading domain remains largely unexplored in the context of adversarial learning. This is mainly because of the rapid changes in the market, which impair the attacker's ability to create a real-time attack. In this study, we present a realistic scenario in which an attacker gains control of algorithmic trading bots by manipulating the input data stream in real time. The attacker creates a universal perturbation that is agnostic to the target model and time of use, while also remaining imperceptible. We evaluate our attack on a real-world market data stream and target three different trading architectures. We show that our perturbation can fool the model on future, unseen data points, in both white-box and black-box settings. We believe these findings should serve as an alert to the finance community about the threats in this area and prompt further research on the risks associated with using automated learning models in the finance domain.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.09246&r=all
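    A toy sketch of the idea of a universal, input-agnostic perturbation (not the paper's attack): for a simple linear buy/sell model, a single small perturbation with signs opposite to the model weights pushes many different feature windows toward the attacker's preferred signal. The model, feature dimension, and epsilon are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)                    # weights of a linear "buy/sell" model
X = rng.normal(size=(500, 10))             # stream of feature windows

def signal(features):
    return (features @ w > 0).astype(int)  # 1 = buy, 0 = sell

# universal, input-agnostic perturbation pushing every window toward "sell"
eps = 0.3
delta = -eps * np.sign(w)

before, after = signal(X), signal(X + delta)
print("flipped from buy to sell:", float(((before == 1) & (after == 0)).mean()))
```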
  14. By: Huber, Martin; Imhof, David
    Abstract: We investigate the transnational transferability, to Japan, of statistical screening methods for detecting bid-rigging cartels that were originally developed using Swiss data. We find that combining screens for the distribution of bids in tenders with machine learning to classify collusive vs. competitive tenders yields a correct classification rate of 88% to 93% when training and testing the method on Japanese data from the so-called Okinawa bid-rigging cartel. As in Switzerland, bid rigging in Okinawa reduced the variance and increased the asymmetry in the distribution of bids. When pooling the data from both countries for training and testing the classification models, we still obtain correct classification rates of 82% to 88%. However, when training the models on data from one country to test their performance on data from the other country, rates go down substantially, because some screens for competitive Japanese tenders are similar to those for collusive Swiss tenders. Our results thus suggest that a country's institutional context matters for the distribution of bids, such that country-specific training of classification models is to be preferred over applying trained models across borders, even though some screens turn out to be more stable across countries than others.
    Keywords: Bid rigging; screening methods; machine learning; random forest; ensemble methods
    JEL: C21 C45 C52 D22 D40 K40
    Date: 2020–10–26
    URL: http://d.repec.org/n?u=RePEc:fri:fribow:fribow00519&r=all
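    A minimal sketch of the two screens the abstract highlights, computed on made-up bids for a single tender: collusion tends to compress the spread of bids (lower coefficient of variation) and make their distribution more asymmetric (higher skewness). Assumes SciPy; the paper feeds such screens into random forests and ensemble classifiers.
```python
import numpy as np
from scipy.stats import skew

def tender_screens(bids):
    """Screens on one tender's bid distribution: coefficient of variation
    (lower under collusion) and skewness (asymmetry, higher under collusion)."""
    bids = np.asarray(bids, dtype=float)
    cv = bids.std(ddof=1) / bids.mean()
    return {"cv": round(float(cv), 3), "skewness": round(float(skew(bids)), 3)}

competitive = [98, 103, 110, 95, 120]      # wider, more symmetric spread
collusive = [100, 101, 101.5, 102, 115]    # cover bids close together, one outlier
print(tender_screens(competitive), tender_screens(collusive))
```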
  15. By: Ma, Ji (The University of Texas at Austin)
    Abstract: This research developed a machine-learning classifier that reliably automates the coding process using the National Taxonomy of Exempt Entities as a schema and remapped the U.S. nonprofit sector. I achieved 90% overall accuracy for classifying the nonprofits into nine broad categories and 88% for classifying them into 25 major groups. The intercoder reliabilities between algorithms and human coders measured by kappa statistics are in the "almost perfect" range of 0.80--1.00. The results suggest that a state-of-the-art machine-learning algorithm can approximate human coders and substantially improve researchers' productivity. I also reassigned multiple category codes to over 439 thousand nonprofits and discovered a considerable amount of organizational activities that were previously ignored. The classifier is an essential methodological prerequisite for large-N and Big Data analyses, and the remapped U.S. nonprofit sector can serve as an important instrument for asking or reexamining fundamental questions of nonprofit studies.
    Date: 2020–10–10
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:pt3q9&r=all
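    A toy sketch of text-based classification of nonprofit descriptions into broad categories, assuming scikit-learn; the example texts, labels, and pipeline (TF-IDF plus logistic regression) are illustrative stand-ins for the paper's classifier and the NTEE schema.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented mission statements with made-up broad-category labels
texts = [
    "free clinic providing primary medical care to uninsured families",
    "after-school tutoring and scholarships for low-income students",
    "community food bank distributing groceries to households in need",
    "youth orchestra and free public concerts",
]
labels = ["Health", "Education", "Human Services", "Arts"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["volunteer clinic offering dental care to the uninsured"]))
```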
  16. By: Taeyoung Doh; Dongho Song; Shu-Kuei X. Yang
    Abstract: We apply a natural language processing algorithm to FOMC statements to construct a new measure of monetary policy stance, including the tone and novelty of a policy statement. We exploit cross-sectional variations across alternative FOMC statements to identify the tone (for example, dovish or hawkish), and contrast the current and previous FOMC statements released after Committee meetings to identify the novelty of the announcement. We then use high-frequency bond prices to compute the surprise component of the monetary policy stance. Our text-based estimates of monetary policy surprises are not sensitive to the choice of bond maturities used in estimation, are highly correlated with forward guidance shocks in the literature, and are associated with lower stock returns after unexpected policy tightening. The key advantage of our approach is that we are able to conduct a counterfactual policy evaluation by replacing the released statement with an alternative statement, allowing us to perform a more detailed investigation at the sentence and paragraph level.
    Keywords: FOMC; Alternative FOMC statements; Counterfactual policy evaluation; Monetary policy stance; Text analysis; Natural language processing
    JEL: E30 E40 E50 G12
    Date: 2020–10–06
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:88946&r=all
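    A highly simplified sketch of the two ingredients the abstract mentions, tone and novelty, using a made-up word list and two invented statement fragments: tone is scored from hawkish/dovish word counts, and novelty is one minus the TF-IDF cosine similarity between the current and previous statements. Assumes scikit-learn; the paper's NLP algorithm is considerably more sophisticated.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

hawkish = {"tighten", "inflation", "raise", "restrictive"}
dovish = {"accommodative", "lower", "slack", "downside"}

def tone(text):
    """Hawkish-minus-dovish word share; positive values read as hawkish."""
    words = text.lower().split()
    h = sum(w in hawkish for w in words)
    d = sum(w in dovish for w in words)
    return (h - d) / max(h + d, 1)

prev = "the committee will maintain an accommodative stance given downside risks"
curr = "the committee judges it appropriate to raise rates to restrain inflation"

vec = TfidfVectorizer().fit_transform([prev, curr])
novelty = 1 - cosine_similarity(vec[0], vec[1])[0, 0]
print({"tone": round(tone(curr), 2), "novelty": round(float(novelty), 2)})
```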
  17. By: Patrick T. Harker
    Abstract: The U.S. economy is recovering more strongly than originally anticipated, but significant risks related to COVID-19 and fiscal policy remain, said Patrick T. Harker, president and CEO of the Federal Reserve Bank of Philadelphia. Harker, delivering a keynote address virtually at the Official Monetary and Financial Institutions Forum, focused on artificial intelligence and machine learning.
    Keywords: COVID-19
    Date: 2020–09–29
    URL: http://d.repec.org/n?u=RePEc:fip:fedpsp:88805&r=all
  18. By: Pradipta Banerjee; Subhrabrata Choudhury
    Abstract: Pandemics such as COVID-19 have the potential to inflict long-lasting, cyclic devastation if the required preventive, curative, and reformative steps are not taken in time, posing enormous multi-dimensional challenges to society. Scientists and policymakers everywhere are striving to achieve R $\leq$ 1 alongside fewer COVID-19 patients. Lockdowns across the globe have been implemented to enforce physical distancing. However, even when the desired R value is achieved, the situation is far from safe: as normal social activity and inter-regional travel resume, the danger of contracting the virus from undetected asymptomatic carriers and of reactivation in previously affected patients looms. The virus poses a further threat through its chances of resurgence and its mutative, adaptive nature, which limits medical respite. These problems intensify with increasing population density and vary with several socio-economic, geo-cultural, and human-activity parameters. Such zoonotic pandemics expose the primary challenge all countries face in securing the general wellbeing of society. A mechanism for policy design that envisages crisis scenarios through continuous analysis of real-time, region-specific data on societal activities and disease/health indicators may be the only solution. We discuss an approach for addressing the tightly coupled UN Sustainable Development Goals (2, 3, 6, 12, and 13) by developing a general-scale computational agent-based model to estimate the downstream and upstream measures needed to achieve the requisite societal behavioural changes, with prognostic knowledge of the conditions and options for future scenarios of stable sustainability.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.04833&r=all
  19. By: Hamzawi, Salah G.
    Keywords: Public Economics
    Date: 2020–10–22
    URL: http://d.repec.org/n?u=RePEc:ags:ctrf21:305954&r=all
  20. By: Peiwan Wang; Lu Zong
    Abstract: This study seeks to explore and extend crisis predictability by systematically reviewing and comparing a full range of early warning models along two dimensions: crisis identification and predictive models. Based on empirical results for the Chinese currency and stock markets, three findings emerge: (i) the SWARCH model, conditional on an elastic thresholding methodology, can most accurately classify crisis observations and greatly contributes to boosting prediction precision, (ii) stylized machine learning models are preferred given their higher predictive precision and greater practical benefit, and (iii) leading factors signal crises in diverse ways for different types of markets and prediction periods.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.10132&r=all
  21. By: Huseyin Gurkan (ESMT European School of Management and Technology); Francis de Véricourt (ESMT European School of Management and Technology)
    Abstract: This paper explores how firms that lack expertise in machine learning (ML) can leverage the so-called AI Flywheel effect. This effect designates a virtuous cycle by which, as an ML product is adopted and new user data are fed back to the algorithm, the product improves, enabling further adoption. However, managing this feedback loop is difficult, especially when the algorithm is contracted out. Indeed, the additional data that the AI Flywheel effect generates may change the provider's incentives to improve the algorithm over time. We formalize this problem in a simple two-period moral hazard framework that captures the main dynamics between machine learning, data acquisition, pricing, and contracting. We find that the firm's decisions crucially depend on how the amount of data on which the machine is trained interacts with the provider's effort. If this effort has a more (resp. less) significant impact on accuracy for larger volumes of data, the firm underprices (resp. overprices) the product. Interestingly, these distortions sometimes improve social welfare, which accounts for the customer surplus and profits of both the firm and provider. Further, the interaction between incentive issues and the positive externalities of the AI Flywheel effect has important implications for the firm's data collection strategy. In particular, the firm can boost its profit by increasing the product's capacity to acquire usage data only up to a certain level. If the product collects too much data per user, the firm's profit may actually decrease. As a result, the firm should consider reducing its product's data acquisition capacity when its initial dataset to train the algorithm is large enough.
    Keywords: Data, machine learning, data product, pricing, incentives, contracting
    Date: 2020–03–03
    URL: http://d.repec.org/n?u=RePEc:esm:wpaper:esmt-20-01_r1&r=all
  22. By: Eren Kurshan; Hongda Shen; Jiahao Chen
    Abstract: AI systems have found a wide range of application areas in financial services. Their involvement in broader and increasingly critical decisions has escalated the need for compliance and effective model governance. Current governance practices have evolved from more traditional financial applications and modeling frameworks. They often struggle with the fundamental differences of AI characteristics, such as uncertainty in the assumptions and the lack of explicit programming. AI model governance frequently involves complex review flows and relies heavily on manual steps. As a result, it faces serious challenges in effectiveness, cost, complexity, and speed. Furthermore, the unprecedented rate of growth in AI model complexity raises questions about the sustainability of current practices. This paper focuses on the challenges of AI model governance in the financial services industry. As part of the outlook, we present a system-level framework towards increased self-regulation for robustness and compliance. This approach aims to enable potential solution opportunities through increased automation and the integration of monitoring, management, and mitigation capabilities. The proposed framework also provides model governance and risk management with improved capabilities to manage model risk during deployment.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.04827&r=all
  23. By: Feng Zhang; Ruite Guo; Honggao Cao
    Abstract: The information coefficient (IC) is a widely used metric for measuring investment managers' skill in selecting stocks. However, its adequacy and effectiveness for evaluating stock selection models has not been clearly understood, as the IC from a realistic stock selection model can hardly be materially different from zero and is often accompanied by high volatility. In this paper, we investigate the behavior of the IC as a performance measure of stock selection models. Through simulation and simple statistical modeling, we examine IC behavior both statically and dynamically. The examination helps us propose two practical procedures that one may use for IC-based ongoing performance monitoring of stock selection models.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.08601&r=all
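    A minimal simulation sketch of the IC behaviour the paper studies: the information coefficient is the per-period Spearman rank correlation between a model's stock scores and subsequent realized returns, and even a genuinely skilled model yields a small mean IC with substantial volatility. The sample sizes and skill level are assumptions; assumes SciPy.
```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stocks, n_periods, true_skill = 200, 24, 0.05

monthly_ic = []
for _ in range(n_periods):
    scores = rng.normal(size=n_stocks)                     # model's stock scores
    # realized returns share only a small common component with the scores
    returns = true_skill * scores + rng.normal(size=n_stocks)
    monthly_ic.append(spearmanr(scores, returns).correlation)

monthly_ic = np.array(monthly_ic)
print("mean IC:", round(float(monthly_ic.mean()), 3),
      " IC volatility:", round(float(monthly_ic.std(ddof=1)), 3))
```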
  24. By: Tien Ha Duong, My; Van Nguyen, Diep; Thanh Nguyen, Phong
    Abstract: Human resources development is one of the main issues in the socio-economic development strategy and the transformation of any region in the context of Industry 4.0. However, Vietnamese human resources have been evaluated poorly in terms of quality, dynamism, and creativity. Therefore, this paper presents a fuzzy logic approach to ranking seven skill shortages in Vietnam's labor market, namely lifelong learning, adaptive capacity, information technology capacity, creativity and innovation capacity, problem-solving capacity, foreign language competency, and organizing and managing competency. The results show that problem-solving skills have the largest gap between enterprises' requirements and the actual performance of employees.
    Keywords: fuzzy logic; industry 4.0; human resources; skill shortage; Vietnam
    JEL: B16 F6 J01 O14
    Date: 2019–11–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:103512&r=all
  25. By: S. Borağan Aruoba; Pablo Cuba-Borda; Kenji Higa-Flores; Frank Schorfheide; Sergio Villalvazo
    Abstract: We develop an algorithm to construct approximate decision rules that are piecewise-linear and continuous for DSGE models with an occasionally binding constraint. The functional form of the decision rules allows us to derive a conditionally optimal particle filter (COPF) for the evaluation of the likelihood function that exploits the structure of the solution. We document the accuracy of the likelihood approximation and embed it into a particle Markov chain Monte Carlo algorithm to conduct Bayesian estimation. Compared with a standard bootstrap particle filter, the COPF significantly reduces the persistence of the Markov chain, improves the accuracy of Monte Carlo approximations of posterior moments, and drastically speeds up computations. We use the techniques to estimate a small-scale DSGE model to assess the effects of the government spending portion of the American Recovery and Reinvestment Act in 2009 when interest rates reached the zero lower bound.
    JEL: C5 E4 E5
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27991&r=all
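    An illustrative bit of notation (not the paper's model) for the kind of occasionally binding constraint and piecewise-linear decision rule the abstract refers to: a zero lower bound on the nominal interest rate, with the decision rule switching between two linear branches depending on whether the bound binds.
```latex
% illustrative only: ZLB constraint and a piecewise-linear, continuous decision rule
\[
  R_t = \max\{\,1,\; R_t^{*}\,\}, \qquad R_t^{*} \text{ the unconstrained (shadow) policy rate,}
\]
\[
  y_t \approx f(s_t) =
  \begin{cases}
    A_0 + A_1 s_t, & \text{if the ZLB is slack},\\
    B_0 + B_1 s_t, & \text{if the ZLB binds},
  \end{cases}
  \quad \text{with } f \text{ continuous at the kink.}
\]
```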

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.