nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒05‒31
29 papers chosen by
Stan Miles
Thompson Rivers University

  1. Can we imitate stock price behavior to reinforcement learn option price? By Xin Jin
  2. Applying Artificial Intelligence on Satellite Imagery to Compile Granular Poverty Statistics By Hofer, Martin; Sako, Tomas; Martinez, Jr., Arturo; Addawe, Mildred; Durante, Ron Lester
  3. Preaching to Social Media: Turkey’s Friday Khutbas and Their Effects on Twitter By Ozan Aksoy
  4. Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units By Zihao Zhang; Stefan Zohren
  5. Integrating Traffic Network Analysis and Communication Network Analysis at a Regional Scale to Support More Efficient Evacuation in Response to a Wildfire Event By Soga, Kenichi; Comfort, Louise; Zhao, Bingyu; Lorusso, Paola; Soysal, Sena
  6. Predicting Poverty Using Geospatial Data in Thailand By Puttanapong, Nattapong; Martinez, Jr., Arturo; Addawe, Mildred; Bulan, Joseph; Durante, Ron Lester; Martillan, Marymell
  7. Financial Time Series Analysis and Forecasting with HHT Feature Generation and Machine Learning By Tim Leung; Theodore Zhao
  8. Enhancing Cross-Sectional Currency Strategies by Ranking Refinement with Transformer-based Architectures By Daniel Poh; Bryan Lim; Stefan Zohren; Stephen Roberts
  9. Using Machine Learning to Create an Early Warning System for Welfare Recipients By Sansone, Dario; Zhu, Anna
  10. Effects of COVID-19 and other shocks on Papua New Guinea’s food economy: A multi-market simulation analysis By Diao, Xinshen; Dorosh, Paul A.; Fang, Peixun; Schmidt, Emily
  11. Learning to make consumption-saving decisions in a changing environment: an AI approach By Rui (Aruhan) Shi
  12. Social and fiscal impacts of statutory minimum wages in EU countries: A microsimulation analysis with EUROMOD By Klaus Grünberger; Edlira Narazani; Stefano Filauro; Áron Kiss
  13. Arbitrage-free neural-SDE market models By Samuel N. Cohen; Christoph Reisinger; Sheng Wang
  14. Central Bank Communication: One Size Does Not Fit All By Joan Huang; John Simon
  15. Deep Kernel Gaussian Process Based Financial Market Predictions By Yong Shi; Wei Dai; Wen Long; Bo Li
  16. Contracting, pricing, and data collection under the AI flywheel effect By Huseyin Gurkan; Francis de Véricourt
  17. Predicting Nature of Default using Machine Learning Techniques By Longden, Elaine
  18. Quantum algorithm for credit valuation adjustments By Javier Alcazar; Andrea Cadarso; Amara Katabarwa; Marta Mauri; Borja Peropadre; Guoming Wang; Yudong Cao
  19. A Computational Model of the Institutional Analysis and Development Framework By Nieves Montes
  20. Option Valuation through Deep Learning of Transition Probability Density By Haozhe Su; M. V. Tretyakov; David P. Newton
  21. Assessing asset-liability risk with neural networks By Patrick Cheridito; John Ery; Mario V. Wüthrich
  22. The Fractured-Land Hypothesis By Fernández-Villaverde, Jesús; Koyama, Mark; Lin, Youhong; Sng, Tuan-Hwee
  23. An Introduction To Regret Minimization In Algorithmic Trading: A Survey of Universal Portfolio Techniques By Thomas Orton
  24. Estimating DSGE Models: Recent Advances and Future Challenges By Fernández-Villaverde, Jesús; Guerron-Quintana, Pablo A.
  25. Mission-Oriented Policies and the "Entrepreneurial State" at Work: An Agent-Based Exploration By Giovanni Dosi; Francesco Lamperti; Mariana Mazzucato; Mauro Napoletano; Andrea Roventini
  26. Towards Artificial Intelligence Enabled Financial Crime Detection By Zeinab Rouhollahi
  27. Home health care scheduling activities By Rym Ben Bachouch; Jihène Tounsi; Chouari Borhen
  28. Belief Distortions and Macroeconomic Fluctuations By Bianchi, Francesco; Ludvigson, Sydney C.; Ma, Sai
  29. Bitcoin: Like a Satellite or Always Hardcore? A Core-Satellite Identification in the Cryptocurrency Market By Christoph J. Börner; Ingo Hoffmann; Jonas Krettek; Lars M. Kürzinger; Tim Schmitz

  1. By: Xin Jin
    Abstract: This paper presents a framework that imitates the price behavior of the underlying stock in order to reinforcement-learn the option price. We use accessible features of the equities pricing data to construct a non-deterministic Markov decision process for modeling stock price behavior driven by the principal investor's decision making. However, the low signal-to-noise ratio and instability that appear inherent in equity markets pose challenges in determining the state transition (price change) after executing an action (the principal investor's decision), as well as in deciding an action based on the current state (spot price). To overcome these challenges, we resort to a Bayesian deep neural network for computing the predictive distribution of the state transition led by an action. Additionally, instead of exploring a state-action relationship to formulate a policy, we seek an episode-based visible-hidden state-action relationship to probabilistically imitate the principal investor's successive decision making. Our algorithm then maps the imitated principal investor's decisions to simulated stock price paths via a Bayesian deep neural network. Finally, the optimal option price is learned by reinforcement through maximizing the cumulative risk-adjusted return of a dynamically hedged portfolio over simulated price paths of the underlying.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.11376&r=
  2. By: Hofer, Martin (Vienna University of Economics and Business); Sako, Tomas (Freelance data scientist); Martinez, Jr., Arturo (Asian Development Bank); Addawe, Mildred (Asian Development Bank); Durante, Ron Lester (Asian Development Bank)
    Abstract: The spatial granularity of poverty statistics can have a significant impact on the efficiency of targeting resources meant to improve the living conditions of the poor. However, achieving granularity typically requires increasing the sample sizes of surveys on household income and expenditure or living standards, an option that is not always practical for the government agencies that conduct these surveys. Previous studies that examined the use of innovative (geospatial) data sources, such as high-resolution satellite imagery, suggest that such methods may offer an alternative approach to producing granular poverty maps. This study outlines a computational framework to enhance the spatial granularity of government-published poverty estimates, applying a deep-layer computer vision technique to publicly available medium-resolution satellite imagery, household surveys, and census data from the Philippines and Thailand. By doing so, the study explores a potentially more cost-effective alternative method for poverty estimation. The results suggest that, even using publicly accessible satellite imagery, whose resolution is not as fine as that of commercially sourced images, predictions generally aligned with the distributional structure of government-published poverty estimates after calibration. The study further contributes to the existing literature by examining the robustness of the resulting estimates to user-specified algorithmic parameters and model specifications.
    Keywords: big data; computer vision; data for development; machine learning algorithm; official statistics; poverty; SDG
    JEL: C19 D31 I32 O15
    Date: 2020–12–29
    URL: http://d.repec.org/n?u=RePEc:ris:adbewp:0629&r=
  3. By: Ozan Aksoy (Centre for Quantitative Social Sciences in the Social Research Institute, University College London)
    Abstract: In this study I use machine learning to analyse the content of all Friday khutbas (sermons) read to millions of citizens in thousands of mosques in Turkey since 2015. I focus on six non-religious and recurrent topics that feature in the sermons, namely business, family, nationalism, health, trust, and patience. I demonstrate that the content of the sermons responds strongly to events of national importance. I then link the Friday sermons with ~4.8 million tweets on these topics to study whether and how the content of sermons affects social media behaviour. I find generally large effects of the sermons on tweets, but there is also heterogeneity by topic. The effect is strongest for nationalism, patience, and health and weakest for business. Overall, these results show that religious institutions in Turkey are influential in shaping the public’s social media content and that this influence is mainly prevalent on salient issues. More generally, these results show that mass offline religious activity can have strong effects on social media behaviour.
    Keywords: text-as-data analysis, computational social science, social media, religion, Islam, Turkey
    JEL: C63 N35 Z12
    Date: 2021–05–01
    URL: http://d.repec.org/n?u=RePEc:qss:dqsswp:2117&r=
  4. By: Zihao Zhang; Stefan Zohren
    Abstract: We design multi-horizon forecasting models for limit order book (LOB) data using deep learning techniques. Unlike standard structures where a single prediction is made, we adopt encoder-decoder models with sequence-to-sequence and Attention mechanisms to generate a forecasting path. Our methods achieve performance comparable to state-of-the-art algorithms at short prediction horizons. Importantly, they outperform these algorithms when generating predictions over long horizons, by leveraging the multi-horizon setup. Given that encoder-decoder models rely on recurrent neural layers, they generally suffer from slow training. To remedy this, we experiment with novel hardware, so-called Intelligent Processing Units (IPUs) produced by Graphcore. IPUs are specifically designed for machine intelligence workloads with the aim of speeding up computation. We show that in our setup this leads to significantly faster training times compared to training the same models on GPUs.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10430&r=
  5. By: Soga, Kenichi; Comfort, Louise; Zhao, Bingyu; Lorusso, Paola; Soysal, Sena
    Abstract: As demonstrated by the Camp Fire evacuation, communications (city-to-city, city-to-residents) play important roles in coordinating traffic operations and safeguarding region-wide evacuation processes in wildfire events. This collaborative report across multiple domains (fire, communication, and traffic) documents a series of simulations and findings of the wildfire evacuation process for resource-strapped towns in Northern California. It consists of: (1) meteorological and vegetation-status dependent fire spread simulation (cellular automata model); (2) agency-level and agency-to-residents communication simulation (system dynamics model); and (3) dynamic traffic assignment (spatial-queue model). Two case studies are conducted: one for the town of Paradise (and the surrounding areas) and another for the community of Bolinas. The data and models are based on site visits and interviews with local agencies and residents. The integrated simulation framework is used to assess the interdependencies among the natural environment, the evacuation traffic and the communication networks from an interdisciplinary point of view, to determine the performance requirements to ensure viable evacuation strategies under urgent, dynamic wildfire conditions. The case study simulations identify both potential traffic and communication bottlenecks. This research supports integrating fire, communication and traffic simulation into evacuation performance assessments.
    Keywords: Engineering, Social and Behavioral Sciences, Wildfires, evacuation, communications, simulation, traffic simulation, mathematical models, hazards and emergency operations, case studies
    Date: 2021–05–01
    URL: http://d.repec.org/n?u=RePEc:cdl:itsrrp:qt1z913878&r=
  6. By: Puttanapong, Nattapong (Thammasat University); Martinez, Jr., Arturo (Asian Development Bank); Addawe, Mildred (Asian Development Bank); Bulan, Joseph (Asian Development Bank); Durante, Ron Lester (Asian Development Bank); Martillan, Marymell (Asian Development Bank)
    Abstract: Poverty statistics are conventionally compiled using data from household income and expenditure surveys or living standards surveys. This study examines an alternative approach to estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. In particular, the geospatial data examined in this study include night light intensity, land cover, vegetation index, land surface temperature, built-up areas, and points of interest. The study also compares the predictive performance of various econometric and machine learning methods, such as generalized least squares, neural networks, random forests, and support vector regression. Results suggest that the intensity of night lights and other variables that approximate population density are highly associated with the proportion of an area’s population living in poverty. The random forest technique yielded the highest prediction accuracy among the methods considered in this study, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets. Moving forward, additional studies are needed to investigate whether the relationships observed here remain stable over time and may therefore be used to approximate the prevalence of poverty in years when household surveys on income and expenditure are not conducted but data on geospatial correlates of poverty are available.
    Keywords: big data; computer vision; data for development; machine learning algorithm; multidimensional poverty; official statistics; poverty; SDG; Thailand
    JEL: C19 D31 I32 O15
    Date: 2020–12–29
    URL: http://d.repec.org/n?u=RePEc:ris:adbewp:0630&r=
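A minimal sketch of the kind of comparison described above — a random forest versus a linear baseline on geospatial-style features — assuming entirely synthetic data and an invented nonlinear poverty relationship (none of the variables or coefficients come from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic "geospatial" features: night-light intensity and built-up share
night_lights = rng.gamma(2.0, 2.0, n)
built_up = rng.uniform(0.0, 1.0, n)
# Illustrative assumption: poverty falls nonlinearly with night lights
poverty = 0.6 * np.exp(-0.4 * night_lights) + 0.2 * (1.0 - built_up) ** 2 \
          + rng.normal(0.0, 0.02, n)

X = np.column_stack([night_lights, built_up])
X_tr, X_te, y_tr, y_te = train_test_split(X, poverty, test_size=0.3,
                                          random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)

rf_r2 = r2_score(y_te, rf.predict(X_te))    # out-of-sample fit, random forest
ols_r2 = r2_score(y_te, ols.predict(X_te))  # out-of-sample fit, linear baseline
```

On data with this kind of curvature the forest's out-of-sample R² typically exceeds the linear fit's, mirroring the paper's finding that random forests handle complex association structures well even at moderate sample sizes.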
  7. By: Tim Leung; Theodore Zhao
    Abstract: We present the method of complementary ensemble empirical mode decomposition (CEEMD) and Hilbert-Huang transform (HHT) for analyzing nonstationary financial time series. This noise-assisted approach decomposes any time series into a number of intrinsic mode functions, along with the corresponding instantaneous amplitudes and instantaneous frequencies. Different combinations of modes allow us to reconstruct the time series using components of different timescales. We then apply Hilbert spectral analysis to define and compute the associated instantaneous energy-frequency spectrum to illustrate the properties of various timescales embedded in the original time series. Using HHT, we generate a collection of new features and integrate them into machine learning models, such as regression tree ensemble, support vector machine (SVM), and long short-term memory (LSTM) neural network. Using empirical financial data, we compare several HHT-enhanced machine learning models in terms of forecasting performance.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10871&r=
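The instantaneous amplitudes and frequencies referred to above are obtained from the analytic signal. A minimal sketch of that Hilbert step alone, applied to a toy sinusoid (the CEEMD decomposition into intrinsic mode functions, which would precede this in the paper's pipeline, is omitted):

```python
import numpy as np
from scipy.signal import hilbert

# A toy series: a pure 5 Hz oscillation sampled at 500 Hz for 2 seconds
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * 5.0 * t)

# Analytic signal via the Hilbert transform
analytic = hilbert(x)
inst_amplitude = np.abs(analytic)                       # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))              # unwrapped phase
inst_frequency = np.diff(inst_phase) * fs / (2.0 * np.pi)  # Hz
```

For a pure 5 Hz tone the instantaneous amplitude is ≈ 1 and the instantaneous frequency ≈ 5 Hz away from the boundaries, where the transform suffers edge effects; in practice each intrinsic mode function would be processed this way to build up the energy-frequency spectrum.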
  8. By: Daniel Poh; Bryan Lim; Stefan Zohren; Stephen Roberts
    Abstract: The performance of a cross-sectional currency strategy depends crucially on accurately ranking instruments prior to portfolio construction. While this ranking step is traditionally performed using heuristics, or by sorting outputs produced by pointwise regression or classification models, Learning to Rank algorithms have recently presented themselves as competitive and viable alternatives. Despite improving ranking accuracy on average, however, these techniques do not account for the possibility that assets positioned at the extreme ends of the ranked list -- which are ultimately used to construct the long/short portfolios -- can assume different distributions in the input space, and thus lead to sub-optimal strategy performance. Drawing from research in Information Retrieval that demonstrates the utility of contextual information embedded within top-ranked documents for learning the query's characteristics and improving ranking, we propose an analogous approach: exploiting the features of both out- and under-performing instruments to learn a model for refining the original ranked list. Under a re-ranking framework, we adapt the Transformer architecture to encode the features of extreme assets and refine our selection of long/short instruments obtained with an initial retrieval. Backtesting on a set of 31 currencies, our proposed methodology significantly boosts Sharpe ratios -- by approximately 20% over the original LTR algorithms, and to roughly double those of traditional baselines.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10019&r=
  9. By: Sansone, Dario (University of Exeter); Zhu, Anna (RMIT University)
    Abstract: Using high-quality nation-wide social security data combined with machine learning tools, we develop predictive models of income support receipt intensities for any payment enrolee in the Australian social security system between 2014 and 2018. We show that off-the-shelf machine learning algorithms can significantly improve predictive accuracy compared to simpler heuristic models or the early warning systems currently in use. Specifically, the former predict the proportion of time individuals are on income support in the subsequent four years with greater accuracy, by a margin of at least 22% (a 14 percentage point increase in the R2), compared to the latter. This gain can be achieved at no extra cost to practitioners, since the algorithms use administrative data currently available to caseworkers. Consequently, our machine learning algorithms can improve the detection of long-term income support recipients, which can potentially provide governments with large savings in accrued welfare costs.
    Keywords: income support, machine learning, Australia
    JEL: C53 H53 I38 J68
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14377&r=
  10. By: Diao, Xinshen; Dorosh, Paul A.; Fang, Peixun; Schmidt, Emily
    Abstract: Understanding how the Papua New Guinea (PNG) agricultural economy and associated household consumption are affected by climate, market and other shocks requires attention to linkages and substitution effects across various products and the markets in which they are traded. In this study, we use a multi-market simulation model of the PNG food economy that explicitly includes production, consumption, external trade and prices of key agricultural commodities to quantify the likely impacts of a set of potential shocks on household welfare and food security in PNG. We have built the model to be flexible in order to explore different potential scenarios and then identify where and how households are most affected by an unexpected shock. The model is designed using region and country-level data sources that inform the structure of the PNG food economy, allowing for a data-driven evaluation of potential impacts on agricultural production, food prices, and food consumption. Thus, as PNG confronts different unexpected challenges within its agricultural economy, the model presented in this paper can be adapted to evaluate the potential impact, and the necessary response by geographic region, of an unexpected economic shock on the country's food economy. We present ten simulations modeling the effects of various shocks on PNG’s economy. 
The first group of scenarios considers the effects of shocks to production of specific agricultural commodities, including: 1) a decrease in maize and sorghum output due to Fall Armyworm; 2) a reduction in pig production due to a potential outbreak of African Swine Fever; 3) a decline in sweet potato production similar to the 2015/16 El Niño Southern Oscillation (ENSO) climate shock; and 4) a decline in poultry production due to COVID-19 restrictions on domestic mobility and trade. A synopsis of this report, which focuses on the COVID-19 related shocks to the PNG economy, is also available online (Diao et al., 2020). The second group of simulations focuses on COVID-19-related changes in international prices, increased marketing costs in international and domestic trade, and reductions in urban incomes. We simulate: 1) a 30 percent increase in the price of imported rice; 2) a 30 percent decrease in world prices for major PNG agricultural exports; 3) higher trade transaction costs due to restrictions on the movement of people (traders) and goods under COVID-19 social distancing measures; and 4) a potential economic recession causing urban household income to fall by 10 percent. Finally, the last simulation considers the combined effect of all COVID-19 related shocks by combining the above scenarios into a single simulation. A key result of the analysis is that urban households, especially the urban poor, are particularly vulnerable to shocks related to the COVID-19 pandemic. Lower economic activity in urban areas (assumed to reduce urban non-agricultural incomes by 10 percent), increases in marketing costs due to domestic trade disruptions, and 30 percent higher imported rice prices combine to lower urban incomes by almost 15 percent for both poor and non-poor urban households. Urban poor households, however, suffer the largest drop in calorie consumption: 19.8 percent, compared with a 15.8 percent decline for urban non-poor households. 
Rural households are much less affected by the COVID-19 related shocks modeled in these simulations. Rural household incomes, affected mainly by reduced urban demand and market disruptions, fall by only about four percent. Nonetheless, calorie consumption for the rural poor and non-poor falls by 5.5 and 4.2 percent, respectively.
    Keywords: PAPUA NEW GUINEA; OCEANIA; Coronavirus; coronavirus disease; Coronavirinae; COVID-19; shock; market; prices; movement restrictions; agrifood sector; economic sectors
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:fpr:ifprid:2004&r=
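A toy version of the substitution mechanism at the heart of such a multi-market model can be sketched as follows, with entirely illustrative elasticities, prices, and quantities (not PNG estimates): a 30 percent rice price shock shifts consumption toward sweet potato via cross-price elasticities, and total calories fall.

```python
import numpy as np

# Stylized two-commodity demand system; every number here is illustrative
base_price = {"rice": 1.00, "sweet_potato": 0.60}   # price per kg
base_qty = {"rice": 100.0, "sweet_potato": 180.0}   # kg per household per year
own_elast = {"rice": -0.5, "sweet_potato": -0.4}    # own-price elasticities
cross_elast = {"rice": 0.2, "sweet_potato": 0.3}    # response to the other price
kcal_per_kg = {"rice": 3600.0, "sweet_potato": 860.0}
GOODS = ["rice", "sweet_potato"]

def simulate(price_shock):
    """Apply proportional price shocks and return new quantities and calories."""
    new_qty = {}
    for g in GOODS:
        other = GOODS[1 - GOODS.index(g)]
        # Log-linear demand response: own-price and cross-price effects
        dlog_q = own_elast[g] * np.log(1.0 + price_shock.get(g, 0.0)) \
               + cross_elast[g] * np.log(1.0 + price_shock.get(other, 0.0))
        new_qty[g] = base_qty[g] * np.exp(dlog_q)
    calories = sum(new_qty[g] * kcal_per_kg[g] for g in GOODS)
    return new_qty, calories

base_calories = sum(base_qty[g] * kcal_per_kg[g] for g in GOODS)
qty, cal = simulate({"rice": 0.30})  # 30 percent imported-rice price increase
```

The full model adds production, external trade, regional disaggregation and many more commodities, but a log-linear demand response of this kind is a common building block of multi-market simulation models.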
  11. By: Rui (Aruhan) Shi
    Abstract: This exercise offers an innovative learning mechanism for modeling an economic agent's decision-making process using a deep reinforcement learning algorithm. In particular, this AI agent has limited or no information on the underlying economic structure and its own preference. I model how the AI agent learns in terms of how it collects and processes information. It is able to learn in real time through constantly interacting with the environment and adjusting its actions accordingly. I illustrate that the economic agent under deep reinforcement learning adapts to changes in a given environment in real time. AI agents differ in their ways of collecting and processing information, and this leads to different learning behaviours and welfare differences. The chosen economic structure can be generalised to other decision-making processes and economic models.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10099&r=
  12. By: Klaus Grünberger (European Commission - JRC); Edlira Narazani (European Commission - JRC); Stefano Filauro (European Commission - Directorate-General for Employment, Social Affairs and Inclusion); Áron Kiss (European Commission - Directorate-General for Economic and Financial Affairs)
    Abstract: This paper analyses the first-round effects of hypothetical minimum wage increases on social outcomes in 21 EU countries with a statutory national minimum wage based on a microsimulation approach using EUROMOD. The methodological challenges related to the use of available EU household survey data are described, along with the choices made to address these challenges. The paper assesses hypothetical scenarios in which countries with a statutory national minimum wage increase their minimum wage to various reference values, set in relation to the gross national median and average wage. The model simulations suggest that minimum wage increases can significantly reduce in-work poverty, wage inequality and the gender pay gap, while generally improving the public budget balance. The implied wage increases for the beneficiaries are substantial, while the implied increases in the aggregate wage bill and, as a consequence, possible negative employment impacts, are generally modest.
    Keywords: minimum wage, microsimulation, European Union, wage inequality, in-work poverty, gender pay gap.
    JEL: H31 I32 J31
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202106&r=
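The first-round (mechanical) effect the paper simulates can be illustrated on synthetic data: wages below the new statutory minimum are lifted to it, and the share of beneficiaries and the aggregate wage-bill increase are computed. All numbers below are invented for illustration; EUROMOD additionally runs such gross-wage changes through each country's tax-benefit rules.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic hourly wages for 10,000 workers (log-normal, illustrative)
wages = rng.lognormal(mean=2.5, sigma=0.5, size=10_000)

def raise_to_minimum(w, minimum):
    """First-round effect: everyone below the statutory minimum is lifted to it."""
    return np.maximum(w, minimum)

median_wage = np.median(wages)
new_minimum = 0.6 * median_wage  # e.g. 60% of the median, a common reference value
new_wages = raise_to_minimum(wages, new_minimum)

beneficiaries = float((wages < new_minimum).mean())        # share of workers lifted
wage_bill_change = new_wages.sum() / wages.sum() - 1.0     # aggregate increase
```

As in the paper's findings, the implied increase for each beneficiary can be substantial while the aggregate wage bill rises only modestly, because the raises are concentrated at the bottom of the distribution.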
  13. By: Samuel N. Cohen; Christoph Reisinger; Sheng Wang
    Abstract: Modelling the joint dynamics of liquid vanilla options is crucial for arbitrage-free pricing of illiquid derivatives and for managing the risks of option trade books. This paper develops a nonparametric model for the European options book that respects underlying financial constraints while being practically implementable. We derive a state space for prices which are free from static (or model-independent) arbitrage and study the inference problem where a model is learnt from discrete time series data of stock and option prices. We use neural networks as function approximators for the drift and diffusion of the modelled SDE system, and impose constraints on the neural nets such that no-arbitrage conditions are preserved. In particular, we give methods to calibrate neural SDE models which are guaranteed to satisfy a set of linear inequalities. We validate our approach with numerical experiments using data generated from a Heston stochastic local volatility model.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.11053&r=
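Once drift and diffusion are specified — in the paper by constrained neural networks, here by plain geometric-Brownian-motion coefficients standing in for them — the SDE system can be simulated forward with a standard Euler-Maruyama scheme. A minimal sketch:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, n_paths, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW on a fixed time grid."""
    x = np.full(n_paths, x0, dtype=float)
    paths = [x.copy()]
    for dt in np.diff(t_grid):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
        paths.append(x.copy())
    return np.array(paths)

# Stand-ins for the learned networks: geometric-Brownian-motion coefficients
mu, sigma = 0.05, 0.2
drift = lambda x: mu * x
diffusion = lambda x: sigma * x

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 1.0, 253)  # roughly one year of daily steps
paths = euler_maruyama(drift, diffusion, 100.0, t_grid, 5000, rng)
```

With 5,000 paths the terminal mean comes out close to the analytic GBM value 100·e^0.05; replacing the two lambdas with trained networks (subject to the paper's no-arbitrage constraints) would give the forward simulation of a learned neural SDE model.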
  14. By: Joan Huang (Reserve Bank of Australia); John Simon (Reserve Bank of Australia)
    Abstract: High-quality central bank communication can improve the effectiveness of monetary policy and is an essential element in providing greater central bank transparency. There is, however, no agreement on what high-quality communication looks like. To shed light on this, we investigate three important aspects of central bank communication. We focus on how different audiences perceive the readability and degree of reasoning within various economic publications; providing the reasons for decisions is a critical element of transparency. We find that there is little correlation between perceived readability and reasoning in the economic communications we analyse, which highlights that commonly used measures of readability can miss important aspects of communication. We also find that perceptions of communication quality can vary significantly between audiences; one size does not fit all. To dig deeper, we use machine learning techniques and develop a model that predicts the way different audiences rate the readability of and reasoning within texts. The model highlights that simpler writing is not necessarily more readable nor more revealing of the author's reasoning. The results also show how readability and reasoning vary within and across documents; good communication requires a variety of styles within a document, each serving a different purpose, and different audiences need different styles. Greater central bank transparency and more effective communication require an emphasis not just on greater readability of a single document, but also on setting out the reasoning behind conclusions in a variety of documents that each meet the needs of different audiences.
    Keywords: central bank communications; machine learning; natural language processing; readability; central bank transparency
    JEL: C61 C83 D83 E58 Z13
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:rba:rbardp:rdp2021-05&r=
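The commonly used readability measures the paper contrasts with perceived reasoning are typically surface formulas. A sketch of the classic Flesch reading ease score with a crude vowel-group syllable heuristic (illustrative only; the paper does not necessarily use this exact formula):

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Classic Flesch formula with heuristic syllable counts."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

simple = "The cat sat on the mat."
dense = "Comprehensive macroeconomic considerations necessitate careful deliberation."
```

Dense, polysyllabic prose scores far lower than the simple sentence, yet the formula says nothing about whether the text explains its reasoning — the gap between surface readability and reasoning that the paper highlights.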
  15. By: Yong Shi; Wei Dai; Wen Long; Bo Li
    Abstract: The Gaussian Process with a deep kernel is an extension of the classic GP regression model; the extended model usually constructs a new kernel function by deploying deep learning techniques such as long short-term memory (LSTM) networks. A Gaussian Process with a kernel learned by an LSTM, abbreviated GP-LSTM, has the advantage of capturing the complex dependencies of financial sequential data while retaining the ability of probabilistic inference. However, to the best of our knowledge, the deep kernel Gaussian Process has not been applied to forecast conditional returns and volatility in financial markets. In this paper, a grid search algorithm for hyper-parameter optimization is integrated with GP-LSTM to predict both the conditional mean and volatility of stock returns, which are then combined to calculate the conditional Sharpe ratio for constructing a long-short portfolio. The experiments are performed on a dataset covering all constituents of the Shenzhen Stock Exchange Component Index. Based on the empirical results, we find that the GP-LSTM model provides more accurate forecasts of stock returns and volatility, as jointly evaluated by the performance of the constructed portfolios. Further sub-period analysis indicates that the superiority of the GP-LSTM model over the benchmark models stems from its better performance in highly volatile periods.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.12293&r=
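The portfolio-construction step described above — ranking assets by conditional Sharpe ratio (predicted mean over predicted volatility) and going long the top and short the bottom — can be sketched with illustrative predictions (the numbers below are invented, not GP-LSTM outputs):

```python
import numpy as np

def long_short_weights(pred_mean, pred_vol, k):
    """Rank assets by conditional Sharpe (mean/vol); long top-k, short bottom-k."""
    sharpe = pred_mean / pred_vol
    order = np.argsort(sharpe)          # ascending by conditional Sharpe
    weights = np.zeros_like(sharpe)
    weights[order[-k:]] = 1.0 / k       # long the k highest conditional Sharpe
    weights[order[:k]] = -1.0 / k       # short the k lowest
    return weights

# Hypothetical model outputs for six assets
pred_mean = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 0.01])
pred_vol = np.array([0.10, 0.08, 0.30, 0.05, 0.04, 0.02])
w = long_short_weights(pred_mean, pred_vol, k=2)
```

Note that asset 2 has the highest predicted mean but also by far the highest predicted volatility, so it is not selected; scaling by predicted volatility is precisely what distinguishes the conditional-Sharpe ranking from a plain return ranking.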
  16. By: Huseyin Gurkan (ESMT European School of Management and Technology); Francis de Véricourt (ESMT European School of Management and Technology)
    Abstract: This paper explores how firms that lack expertise in machine learning (ML) can leverage the so-called AI Flywheel effect. This effect designates a virtuous cycle by which, as an ML product is adopted and new user data are fed back to the algorithm, the product improves, enabling further adoptions. However, managing this feedback loop is difficult, especially when the algorithm is contracted out. Indeed, the additional data that the AI Flywheel effect generates may change the provider's incentives to improve the algorithm over time. We formalize this problem in a simple two-period moral hazard framework that captures the main dynamics among ML, data acquisition, pricing, and contracting. We find that the firm's decisions crucially depend on how the amount of data on which the machine is trained interacts with the provider's effort. If this effort has a more (less) significant impact on accuracy for larger volumes of data, the firm underprices (overprices) the product. Interestingly, these distortions sometimes improve social welfare, which accounts for the customer surplus and profits of both the firm and provider. Further, the interaction between incentive issues and the positive externalities of the AI Flywheel effect has important implications for the firm's data collection strategy. In particular, the firm can boost its profit by increasing the product's capacity to acquire usage data only up to a certain level. If the product collects too much data per user, the firm's profit may actually decrease, i.e., more data is not necessarily better. As a result, the firm should consider reducing its product's data acquisition capacity when its initial dataset to train the algorithm is large enough.
    Keywords: Data, machine learning, data product, pricing, incentives, contracting
    Date: 2020–03–03
    URL: http://d.repec.org/n?u=RePEc:esm:wpaper:esmt-20-01_r2&r=
  17. By: Longden, Elaine (Tilburg University, School of Economics and Management)
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:tiu:tiutis:e1d97882-8cf3-40a4-a82e-8ad900e59177&r=
  18. By: Javier Alcazar; Andrea Cadarso; Amara Katabarwa; Marta Mauri; Borja Peropadre; Guoming Wang; Yudong Cao
    Abstract: Quantum mechanics is well known to accelerate statistical sampling processes over classical techniques. In quantitative finance, statistical sampling arises broadly across many use cases. Here we focus on one particular use case, credit valuation adjustment (CVA), and identify opportunities and challenges towards quantum advantage for practical instances. To reduce the depths of the quantum circuits for solving this problem, we draw on various heuristics that indicate the potential for significant improvement over well-known techniques such as reversible logical circuit synthesis. In minimizing the resource requirements for amplitude amplification while maximizing the speedup gained from the quantum coherence of a noisy device, we adopt a recently developed Bayesian variant of quantum amplitude estimation using engineered likelihood functions (ELF). We perform numerical analyses to characterize the prospect of quantum speedup in concrete CVA instances over classical Monte Carlo simulations.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.12087&r=
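    The classical baseline the abstract above compares against, a Monte Carlo CVA estimate, can be sketched as follows. This is a minimal illustration, not the paper's setup: all parameters (GBM dynamics, flat hazard rate, recovery, exposure profile) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a classical Monte Carlo CVA estimate:
# CVA ≈ (1 - R) * sum_t DF(t) * EPE(t) * P(default in (t_{k-1}, t_k]).
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)

S0, mu, sigma = 100.0, 0.02, 0.2       # underlying under risk-neutral drift
r, recovery, hazard = 0.02, 0.4, 0.03  # discount rate, recovery, flat hazard
K = 100.0                              # strike of a forward-style exposure
times = np.linspace(0.25, 5.0, 20)     # exposure dates (years)
n_paths = 100_000

cva, prev_t = 0.0, 0.0
for t in times:
    # GBM has a closed-form law, so terminal values can be drawn directly
    z = rng.standard_normal(n_paths)
    s_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    epe = np.mean(np.maximum(s_t - K, 0.0))          # expected positive exposure
    pd_incr = np.exp(-hazard * prev_t) - np.exp(-hazard * t)  # default prob in interval
    cva += (1 - recovery) * np.exp(-r * t) * epe * pd_incr
    prev_t = t

print(f"Monte Carlo CVA estimate: {cva:.4f}")
```

The quantum approach in the paper targets the sampling step that dominates the cost of such estimates.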
  19. By: Nieves Montes
    Abstract: The Institutional Analysis and Development (IAD) framework is a conceptual toolbox put forward by Elinor Ostrom and colleagues in an effort to identify and delineate the universal common variables that structure the immense variety of human interactions. The framework identifies rules as one of the core concepts determining the structure of interactions, and acknowledges their potential to steer a community towards more beneficial and socially desirable outcomes. This work presents the first attempt to turn the IAD framework into a computational model that allows communities of agents to formally perform what-if analysis on a given rule configuration. To do so, we define the Action Situation Language (ASL), whose syntax is highly tailored to the components of the IAD framework and which we use to write descriptions of social interactions. ASL is complemented by a game engine that generates its semantics as an extensive-form game. These models can then be analyzed with the standard tools of game theory to predict which outcomes are most incentivized, and evaluated according to their socially relevant properties.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.13151&r=
  20. By: Haozhe Su; M. V. Tretyakov; David P. Newton
    Abstract: Transition probability densities are fundamental to option pricing. Advancing recent work in deep learning, we develop novel transition density function generators by solving backward Kolmogorov equations in parametric space for cumulative probability functions, using neural networks to obtain accurate approximations of transition probability densities. The resulting ultra-fast generators are trained offline and can be trained for any underlying. They are 'single solve', so they do not require recalculation when parameters are changed (e.g. on recalibration of volatility) and are portable to other option pricing setups as well as to less powerful computers, where they can be accessed as quickly as closed-form solutions. We demonstrate the range of application for one-dimensional cases, exemplified by the Black-Scholes-Merton model, for two-dimensional cases, exemplified by the Heston process, and finally for a modified Heston model with time-dependent parameters that has no closed-form solution.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10467&r=
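    For the one-dimensional Black-Scholes-Merton case mentioned above, the transition density is known in closed form (lognormal), which is the natural benchmark for any trained generator. A minimal sketch, with parameter names of our own choosing rather than the paper's:

```python
import math

# Closed-form lognormal transition density of GBM (Black-Scholes-Merton):
# log S_t | S_0 = s0 is normal with mean log(s0) + (r - sigma^2/2) t
# and variance sigma^2 t. Names are illustrative, not the paper's.
def bsm_transition_density(s_t, s0, r, sigma, t):
    """Density of S_t at s_t given S_0 = s0 under GBM with drift r."""
    if s_t <= 0.0 or t <= 0.0:
        return 0.0
    m = math.log(s0) + (r - 0.5 * sigma**2) * t  # mean of log S_t
    v = sigma * math.sqrt(t)                     # std dev of log S_t
    z = (math.log(s_t) - m) / v
    return math.exp(-0.5 * z * z) / (s_t * v * math.sqrt(2.0 * math.pi))

# sanity check: the density should integrate to ~1 over a fine grid
step = 0.01
grid = [step * i for i in range(1, 40000)]  # s_t in (0, 400)
total = sum(bsm_transition_density(x, 100.0, 0.02, 0.2, 1.0) * step for x in grid)
print(f"integral ≈ {total:.4f}")
```

A neural generator solving the backward Kolmogorov equation would be validated against densities like this one before moving to models without closed forms.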
  21. By: Patrick Cheridito; John Ery; Mario V. Wüthrich
    Abstract: We introduce a neural network approach for assessing the risk of a portfolio of assets and liabilities over a given time period. This requires a conditional valuation of the portfolio given the state of the world at a later time, a problem that is particularly challenging if the portfolio contains structured products or complex insurance contracts which do not admit closed form valuation formulas. We illustrate the method on different examples from banking and insurance. We focus on value-at-risk and expected shortfall, but the approach also works for other risk measures.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.12432&r=
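    The two risk measures the paper above focuses on, value-at-risk and expected shortfall, are straightforward to compute empirically once a loss sample is available. A toy sketch with a synthetic normal loss distribution, in place of the paper's nested portfolio valuation:

```python
import numpy as np

# Empirical VaR and expected shortfall from a simulated loss sample.
# The standard-normal losses here are a toy stand-in, not the paper's
# conditional portfolio valuation.
rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=1_000_000)  # simulated losses

alpha = 0.99
var = np.quantile(losses, alpha)       # value-at-risk at level alpha
es = losses[losses >= var].mean()      # expected shortfall: mean loss beyond VaR

print(f"VaR(99%) ≈ {var:.3f}, ES(99%) ≈ {es:.3f}")
```

For a standard normal, the theoretical values are VaR(99%) ≈ 2.326 and ES(99%) ≈ 2.665, which the sample estimates should approach. The hard part the paper addresses is producing the conditional loss sample itself when the portfolio has no closed-form valuation.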
  22. By: Fernández-Villaverde, Jesús; Koyama, Mark; Lin, Youhong; Sng, Tuan-Hwee
    Abstract: Patterns of political unification and fragmentation have crucial implications for comparative economic development. Diamond (1997) famously argued that "fractured land" was responsible for China's tendency toward political unification and Europe's protracted political fragmentation. We build a dynamic model with granular geographical information in terms of topographical features and the location of productive agricultural land to quantitatively gauge the effects of "fractured land" on state formation in Eurasia. We find that either topography or productive land alone is sufficient to account for China's recurring political unification and Europe's persistent political fragmentation. The existence of a core region of high land productivity in Northern China plays a central role in our simulations. We discuss how our results map into observed historical outcomes and assess how robust our findings are.
    Keywords: China; Europe; Great Divergence; Political Centralization; Political Fragmentation; state capacity
    JEL: H56 N40 P48
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15209&r=
  23. By: Thomas Orton
    Abstract: In financial investing, universal portfolios are a means of constructing portfolios which guarantee a certain level of performance relative to a baseline, while making no statistical assumptions about the future market data. They fall under the broad category of regret minimization algorithms. This document provides an introduction to and survey of universal portfolio techniques, covering some of the basic concepts and proofs in the area. Topics include: constant rebalanced portfolios, Cover's algorithm, incorporating transaction costs, efficient computation of portfolios, incorporating side information, and the follow-the-leader algorithm.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.13126&r=
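    Two of the survey's core objects, constant rebalanced portfolios (CRPs) and Cover's universal portfolio, can be sketched in a few lines for two assets. The price relatives below are the classic illustrative cash-vs-volatile-asset example, not data from the survey; the universal portfolio is approximated by averaging CRP wealths over a weight grid.

```python
import numpy as np

# Sketch: wealth of constant rebalanced portfolios and a grid
# approximation of Cover's universal portfolio for two assets.
# Asset 0 is cash; asset 1 alternately doubles and halves.
price_relatives = np.array([
    [1.0, 2.0],
    [1.0, 0.5],
    [1.0, 2.0],
    [1.0, 0.5],
])  # rows: per-period gross returns of (asset 0, asset 1)

def crp_wealth(b, x):
    """Final wealth of a CRP that rebalances to weights b every period."""
    return np.prod(x @ b)

# Cover's universal portfolio wealth ≈ average CRP wealth over the simplex,
# here discretized to a grid of weights on asset 1.
grid = np.linspace(0.0, 1.0, 101)
wealths = np.array([crp_wealth(np.array([1.0 - w, w]), price_relatives)
                    for w in grid])
universal_wealth = wealths.mean()
best_crp = wealths.max()

print(f"best CRP wealth: {best_crp:.4f}, universal wealth: {universal_wealth:.4f}")
```

Note that buy-and-hold of either single asset ends at wealth 1.0 here, while the best CRP (equal weights) grows, which is the volatility-harvesting effect that motivates the regret guarantees surveyed in the paper.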
  24. By: Fernández-Villaverde, Jesús; Guerron-Quintana, Pablo A.
    Abstract: We review the current state of the estimation of DSGE models. After introducing a general framework for dealing with DSGE models, the state-space representation, we discuss how to evaluate moments or the likelihood function implied by such a structure. We discuss, in varying degrees of detail, recent advances in the field, such as the tempered particle filter, approximate Bayesian computation, Hamiltonian Monte Carlo, variational inference, and machine learning: methods that show much promise but have not yet been fully explored by the DSGE community. We conclude by outlining three future challenges for this line of research.
    Keywords: Bayesian methods; DSGE models; estimation; MCMC; Variational Inference
    JEL: C11 C13 E30
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15164&r=
  25. By: Giovanni Dosi; Francesco Lamperti; Mariana Mazzucato; Mauro Napoletano; Andrea Roventini
    Abstract: We study the impact of alternative innovation policies on the short- and long-run performance of the economy, as well as on public finances, extending the Schumpeter meeting Keynes agent-based model (Dosi et al., 2010). In particular, we consider market-based innovation policies such as R&D subsidies to firms and tax discounts on investment, and direct policies akin to the "Entrepreneurial State" (Mazzucato, 2013), involving the creation of public research-oriented firms diffusing technologies along specific trajectories, and the funding of a Public Research Lab conducting basic research to achieve radical innovations that enlarge the technological opportunities of the economy. Simulation results show that all policies improve productivity and GDP growth, but the best outcomes are achieved by active discretionary State policies, which are also able to crowd in private investment and have positive hysteresis effects on growth dynamics. For the same size of public resources allocated to market-based interventions, "Mission" innovation policies deliver significantly better aggregate performance if the government is patient enough and willing to bear the intrinsic risks related to innovative activities.
    Keywords: Innovation policy; mission-oriented R&D; entrepreneurial state; agent-based modelling.
    Date: 2021–05–24
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2021/18&r=
  26. By: Zeinab Rouhollahi
    Abstract: Recently, financial institutions have been dealing with an increase in financial crime. In this context, financial services firms have started to improve their vigilance and to use new technologies and approaches to identify and predict financial fraud and crime. This task is challenging, as institutions need to upgrade their data and analytics capabilities to enable new technologies such as Artificial Intelligence (AI) to predict and detect financial crimes. In this paper, we take a step towards AI-enabled financial crime detection in general, and money laundering detection in particular, to address this challenge. We study and analyse recent work on financial crime detection and present a novel model to detect money laundering cases with minimal human intervention.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.10866&r=
  27. By: Rym Ben Bachouch (Univ. Orléans, PRISME, CE - PRISME - Laboratoire Pluridisciplinaire de Recherche en Ingénierie des Systèmes, Mécanique et Energétique - UO - Université d'Orléans - ENSI Bourges - Ecole Nationale Supérieure d'Ingénieurs de Bourges); Jihène Tounsi (Université de Sousse, SMART-LAB - Strategies for Modelling and ARtificial inTelligence Laboratory - Université de Tunis); Chouari Borhen (Université de Sousse, SMART-LAB - Strategies for Modelling and ARtificial inTelligence Laboratory - Université de Tunis)
    Abstract: In this paper, we are interested in the home health care (HHC) scheduling problem. The HHC office needs to minimize traveling costs and to optimize the assignment of caregivers to patients. Scheduling patient visits has to take unexpected situations into account based on the latest scheduling information. We first analyze the HHC scheduling problem as deterministic, then discuss the dynamic challenges and propose a rescheduling approach based on a genetic algorithm. A platform is designed to evaluate the proposed approach. The results show that the scheduling system is able to compute high-quality schedules and can deal with urgent, unpredictable situations.
    Keywords: home health care,scheduling,routing,genetic algorithm,rescheduling
    Date: 2020–10–26
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03229580&r=
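    A genetic algorithm for caregiver-to-patient assignment can be sketched in miniature. This is a generic GA with permutation encoding, tournament selection, and swap mutation on a random cost matrix, purely illustrative and not the paper's algorithm or data:

```python
import random

# Minimal GA sketch: assign n caregivers to n patients to reduce total
# travel cost. Permutation encoding; all settings are illustrative.
random.seed(0)
n = 8
dist = [[random.randint(1, 20) for _ in range(n)] for _ in range(n)]

def cost(perm):
    """Total travel cost when caregiver i serves patient perm[i]."""
    return sum(dist[i][p] for i, p in enumerate(perm))

def mutate(perm):
    """Swap two positions (keeps the assignment a valid permutation)."""
    a, b = random.sample(range(n), 2)
    child = perm[:]
    child[a], child[b] = child[b], child[a]
    return child

# steady-state evolution: tournament parent, swap mutation,
# replace the worst individual if the child improves on it
pop = [random.sample(range(n), n) for _ in range(30)]
for _ in range(200):
    parent = min(random.sample(pop, 3), key=cost)  # tournament selection
    child = mutate(parent)
    worst = max(range(len(pop)), key=lambda i: cost(pop[i]))
    if cost(child) < cost(pop[worst]):
        pop[worst] = child

best = min(pop, key=cost)
print("best assignment:", best, "cost:", cost(best))
```

Rescheduling in response to an urgent event would amount to re-running the loop from the current population after updating the cost matrix, which is one reason GAs suit the dynamic setting the paper describes.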
  28. By: Bianchi, Francesco; Ludvigson, Sydney C.; Ma, Sai
    Abstract: This paper combines a data-rich environment with a machine learning algorithm to provide estimates of time-varying systematic expectational errors ("belief distortions") about the macroeconomy embedded in survey responses. We find that such distortions are large on average even for professional forecasters, with all respondent types over-weighting their own forecast relative to other information. Forecasts of inflation and GDP growth oscillate between optimism and pessimism by quantitatively large amounts. To investigate the dynamic relation of belief distortions with the macroeconomy, we construct indexes of aggregate (across surveys and respondents) expectational biases in survey forecasts. Over-optimism is associated with an increase in aggregate economic activity. Our estimates provide a benchmark to evaluate theories for which information capacity constraints, extrapolation, sentiments, ambiguity aversion, and other departures from full information rational expectations play a role in business cycles.
    Keywords: beliefs; Biases; Expectations; Machine Learning
    JEL: E17 E27 E32 E7 G4
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15003&r=
  29. By: Christoph J. Börner; Ingo Hoffmann; Jonas Krettek; Lars M. Kürzinger; Tim Schmitz
    Abstract: Cryptocurrencies (CCs) are becoming more interesting for institutional investors' strategic asset allocation and will be a fixed component of professional portfolios in the future. This asset class differs from established assets especially in terms of the severe manifestation of statistical parameters. The question arises whether CCs with similar statistical key figures exist. On this basis, a core market incorporating CCs with comparable properties enables the implementation of a tracking error approach. A prerequisite for this is the segmentation of the CC market into a core and a satellite, the latter comprising the residual CCs remaining in the complement. Using a concrete example, we segment the CC market into these components based on modern methods from image/pattern recognition.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.12336&r=
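    The core/satellite segmentation idea, grouping assets by similarity of their statistical key figures, can be illustrated with plain clustering. This sketch uses synthetic key figures and a hand-rolled 2-means loop; the paper itself uses image/pattern-recognition methods, not this approach:

```python
import numpy as np

# Sketch: segment assets into "core" vs "satellite" by clustering on
# statistical key figures (mean return, volatility, kurtosis).
# Data are synthetic; the paper's method differs.
rng = np.random.default_rng(1)

# 20 core-like assets with tight key figures, 10 dispersed satellites
core = rng.normal([0.001, 0.02, 3.0], [0.0005, 0.005, 0.5], size=(20, 3))
satellite = rng.normal([0.005, 0.10, 9.0], [0.003, 0.04, 3.0], size=(10, 3))
features = np.vstack([core, satellite])

# standardize features, then run a tiny 2-means iteration
z = (features - features.mean(0)) / features.std(0)
centers = z[[0, -1]].copy()  # seed with one point from each group
for _ in range(20):
    labels = np.argmin(((z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([z[labels == k].mean(0) for k in range(2)])

print("cluster sizes:", np.bincount(labels, minlength=2))
```

The larger, tighter cluster would play the role of the core market on which a tracking error approach is implemented, with the remainder accumulated into the satellite.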

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.