nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒01‒06
29 papers chosen by



  1. Forecasting significant stock price changes using neural networks By Firuz Kamalov
  2. A Robust Predictive Model for Stock Price Prediction Using Deep Learning and Natural Language Processing By Sidra Mehtab; Jaydip Sen
  3. Alpha Discovery Neural Network based on Prior Knowledge By Jie Fang; Zhikang Xia; Xiang Liu; Shutao Xia; Yong Jiang; Jianwu Lin
  4. Role of Energy use in the Prediction of CO2 Emissions and Growth in India: An Application of Artificial Neural Networks (ANN) By K, Ashin Nishan M; ASHIQ, MUHAMMED V
  5. Stylized Facts and Agent-Based Modeling By Simon Cramer; Torsten Trimborn
  6. Deep Fictitious Play for Finding Markovian Nash Equilibrium in Multi-Agent Games By Jiequn Han; Ruimeng Hu
  7. A Systematic Analysis of the Bhaduri-Marglin Model with Flexible Prices: "It Depends on the Value of the Parameters" By Florian Botte; Thomas Dallery
  8. A fractional reaction–diffusion description of supply and demand By Michael Benzaquen; Jean-Philippe Bouchaud
  9. Artificial Consciousness as a Platform for Artificial General Intelligence By Kanai, Ryota; Fujisawa, Ippei; Tamai, Shinya; Magata, Atsushi; Yasumoto, Masahiro
  10. Intuitive Beliefs By Jawwad Noor
  11. "Don't know" Tells: Calculating Non-Response Bias in Firms' Inflation Expectations Using Machine Learning Techniques By Yosuke Uno; Ko Adachi
  12. The Impact of Local Taxes and Public Services on Property Values By Grodecka, Anna; Hull, Isaiah
  13. Wage Indexation and Jobs. A Machine Learning Approach By Gert Bijnens; Shyngys Karimov; Jozef Konings
  14. Generative Synthesis of Insurance Datasets By Kevin Kuo
  15. Experimented Kinetic Energy as Features for Natural Language Classification By Alexandru, Daia
  16. Introducing a New Technical Indicator Based on Octav Onicescu Informational Energy and Compare It with Bollinger Bands for S&P 500 Movement Predictions By Alexandru, Daia
  17. EU Economic Modelling System By Olga Ivanova; d'Artis Kancs; Mark Thissen
  18. Neighborhood Effects and Housing Vouchers By Morris A. Davis; Jesse Gregory; Daniel A. Hartley; Kegon T. K. Tan
  19. Evidence accumulation clustering using combinations of features By Wong, William; Tsuchiya, Naotsugu
  20. Modeling market power on a constrained electricity network By Dahlke, Steven
  21. Older Workers Need Not Apply? Ageist Language in Job Ads and Age Discrimination in Hiring By Ian Burn; Patrick Button; Luis Felipe Munguia Corella; David Neumark
  22. Using Machine Learning to Detect and Forecast Accounting Fraud By KONDO Satoshi; MIYAKAWA Daisuke; SHIRAKI Kengo; SUGA Miki; USUKI Teppei
  23. Housing Prices and Property Descriptions: Using Soft Information to Value Real Assets By Lily Shen; Stephen L. Ross
  24. A Variable Neighborhood Search Heuristic for Rolling Stock Rescheduling By Hoogervorst, R.; Dollevoet, T.A.B.; Maróti, G.; Huisman, D.
  25. A Quantum algorithm for linear PDEs arising in Finance By Filipe Fontanela; Antoine Jacquier; Mugad Oumgari
  26. The Rise of Computational Entrepreneurship By Vuong, Quan-Hoang; Ho, Toan Manh
  27. Shocks to Supply Chain Networks and Firm Dynamics: An Application of Double Machine Learning By MIYAKAWA Daisuke
  28. A Mean Field Games Model for Cryptocurrency Mining By Zongxi Li; A. Max Reppen; Ronnie Sircar
  29. Double debiased machine learning nonparametric inference with continuous treatments By Kyle Colangelo; Ying-Ying Lee

  1. By: Firuz Kamalov
    Abstract: Stock price prediction is a rich research topic that has attracted interest from various areas of science. The recent success of machine learning in speech and image recognition has prompted researchers to apply these methods to asset price prediction. The majority of the literature has been devoted to predicting either the actual asset price or the direction of price movement. In this paper, we study the hitherto little explored question of predicting significant changes in stock price based on previous changes using machine learning algorithms. We are particularly interested in the performance of neural network classifiers in this context. To this end, we construct and test three neural network models: a multi-layer perceptron, a convolutional network, and a long short-term memory network. As benchmark models we use random forest and relative strength index methods. The models are tested using 10 years of daily stock price data for four major US public companies. Test results show that significant changes in stock price can be predicted with a high degree of accuracy. In particular, we obtain substantially better results than similar studies that forecast the direction of price change.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.08791&r=all
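    Code sketch: a minimal Python illustration of the setup the abstract describes, labeling days whose absolute return exceeds a threshold as "significant" and comparing a multi-layer perceptron against a random forest benchmark. The threshold, lag window and synthetic price series are placeholder assumptions, not the paper's actual specification.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2500)))   # placeholder for the 10-year daily data
      returns = np.diff(np.log(prices))

      threshold = 1.5 * returns.std()    # assumed definition of a "significant" change
      window = 10                        # assumed number of lagged returns per sample

      X = np.array([returns[i - window:i] for i in range(window, len(returns))])
      y = (np.abs(returns[window:]) > threshold).astype(int)

      split = int(0.8 * len(X))          # simple chronological train/test split
      X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

      mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X_tr, y_tr)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("MLP accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
      print("RF  accuracy:", accuracy_score(y_te, rf.predict(X_te)))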
  2. By: Sidra Mehtab; Jaydip Sen
    Abstract: Prediction of the future movement of stock prices has been the subject of much research. There is a gamut of literature on technical analysis of stock prices, where the objective is to identify patterns in stock price movements and derive profit from them. Improving prediction accuracy remains the single biggest challenge in this area of research. We propose a hybrid approach for stock price movement prediction using machine learning, deep learning, and natural language processing. We select the NIFTY 50 index values of the National Stock Exchange of India and collect its daily price movements over a period of three years (2015 to 2017). Based on the data from 2015 to 2017, we build various predictive models using machine learning and then use those models to predict the closing value of NIFTY 50 for the period January 2018 to June 2019 with a prediction horizon of one week. For predicting the price movement patterns, we use a number of classification techniques, while for predicting the actual closing price of the stock, various regression models are used. We also build a Long Short-Term Memory (LSTM) based deep learning network for predicting the closing price of the stocks and compare the prediction accuracy of the machine learning models with the LSTM model. We further augment the predictive model by integrating a sentiment analysis module on Twitter data to correlate the public sentiment of stock prices with the market sentiment. This is done using Twitter sentiment and the previous week's closing values to predict the stock price movement for the next week. We test our proposed scheme using a cross-validation method based on Self Organizing Fuzzy Neural Networks and find extremely interesting results.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.07700&r=all
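    Code sketch: a minimal Python illustration of predicting the next week's close from the previous weeks' closes plus a weekly sentiment score, as the abstract outlines. The series and the sentiment column are synthetic placeholders (the latter stands in for a Twitter-based score), and plain linear regression stands in for the paper's model suite.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      weekly_close = 10000 + np.cumsum(rng.normal(0, 50, 200))   # placeholder NIFTY-like weekly closes
      sentiment = rng.uniform(-1, 1, 200)                        # placeholder weekly sentiment score

      lags = 4                                                   # assumed number of past weeks used
      X, y = [], []
      for t in range(lags - 1, len(weekly_close) - 1):
          X.append(list(weekly_close[t - lags + 1:t + 1]) + [sentiment[t]])
          y.append(weekly_close[t + 1])
      X, y = np.array(X), np.array(y)

      split = int(0.7 * len(X))                                  # chronological split
      model = LinearRegression().fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      print("mean absolute error:", np.abs(pred - y[split:]).mean())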
  3. By: Jie Fang; Zhikang Xia; Xiang Liu; Shutao Xia; Yong Jiang; Jianwu Lin
    Abstract: In the task of automatic feature construction in finance, genetic programming is the state-of-the-art technique. It uses reverse Polish notation to represent features and then simulates the evolution process with genetic programming. With the development of deep learning, more powerful feature extractors have become available, and we argue that understanding the relationship between different feature extractors and the data is key. In this work, we put prior knowledge into an alpha discovery neural network and combine it with different kinds of feature extractors for this task. We find that, within the same type of network, a simple network structure can produce more informative features than a sophisticated one, and it costs less training time. However, complex networks are good at providing more diversified features. In both experiments and a real business environment, fully connected and recurrent networks are good at extracting information from financial time series, but convolutional network structures cannot effectively extract this information.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.11761&r=all
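    Code sketch: the abstract notes that genetic programming represents candidate features as reverse Polish expressions over market data; below is a minimal Python evaluator for such expressions. The operator set, the example expression and the price/volume series are illustrative assumptions, not the paper's operator library.
      import numpy as np

      rng = np.random.default_rng(2)
      data = {"close": 100 + np.cumsum(rng.normal(0, 1, 500)),
              "volume": rng.lognormal(10, 0.3, 500)}

      def delay(x, d=1):                       # shift a series back by d steps
          out = np.roll(x, d)
          out[:d] = np.nan
          return out

      BINARY = {"+": np.add, "-": np.subtract, "*": np.multiply, "/": np.divide}
      UNARY = {"log": np.log, "delay1": lambda x: delay(x, 1)}

      def eval_rpn(tokens, data):
          stack = []
          for tok in tokens:
              if tok in data:
                  stack.append(data[tok])
              elif tok in UNARY:
                  stack.append(UNARY[tok](stack.pop()))
              elif tok in BINARY:
                  b, a = stack.pop(), stack.pop()
                  stack.append(BINARY[tok](a, b))
              else:
                  raise ValueError(f"unknown token: {tok}")
          return stack.pop()

      # (close - delay1(close)) / volume, written in postfix form
      alpha = eval_rpn(["close", "close", "delay1", "-", "volume", "/"], data)
      print(alpha[-5:])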
  4. By: K, Ashin Nishan M; ASHIQ, MUHAMMED V
    Abstract: The relationship among energy use, carbon dioxide emissions and growth is a matter of discussion among policymakers, economists and researchers, and the concept of sustainable development clearly motivates enquiry into this arena. The primary aim of this work is to develop and apply machine learning techniques to the prediction of carbon dioxide emissions and growth, taking energy use as the input variables. Our findings suggest that the prediction accuracy for CO2 emissions and growth can be improved by using machine learning techniques. In this case, prediction using Adam optimisation performs better than Stochastic Gradient Descent (SGD) for both carbon dioxide emissions and growth. Further, the results highlight that a shift from fossil fuel use to renewable energy use is a possible way to reduce carbon dioxide emissions without sacrificing economic growth.
    Date: 2019–12–08
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:gkpbu&r=all
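    Code sketch: a minimal Python comparison of the Adam and SGD optimisers for a small neural network mapping energy use to CO2 emissions, echoing the comparison in the abstract. The data are synthetic placeholders; the study itself uses Indian energy, emissions and growth series.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(3)
      energy_use = rng.uniform(200, 800, (500, 2))   # placeholder: fossil and renewable energy use
      co2 = 0.9 * energy_use[:, 0] - 0.2 * energy_use[:, 1] + rng.normal(0, 10, 500)

      X_tr, X_te, y_tr, y_te = train_test_split(energy_use, co2, random_state=0)
      for solver in ("adam", "sgd"):
          model = MLPRegressor(hidden_layer_sizes=(16, 16), solver=solver,
                               max_iter=5000, random_state=0).fit(X_tr, y_tr)
          print(solver, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))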
  5. By: Simon Cramer; Torsten Trimborn
    Abstract: The existence of stylized facts in financial data has been documented in many studies. In the past decade the modeling of financial markets by agent-based computational economic market models has become a frequently used modeling approach. The main purpose of these models is to replicate stylized facts and to identify sufficient conditions for their emergence. In this paper we introduce the most prominent examples of stylized facts, focusing especially on stylized facts of financial data. Furthermore, we give an introduction to agent-based modeling. Here, we not only provide an overview of this topic but also introduce the idea of universal building blocks for agent-based economic market models.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.02684&r=all
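    Code sketch: a minimal Python check of two of the stylized facts the survey discusses, heavy-tailed returns and volatility clustering. The GARCH-like toy process is an assumption so that the synthetic series actually displays the facts; with real return data the same two statistics apply.
      import numpy as np

      def autocorr(x, lag):
          x = x - x.mean()
          return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

      rng = np.random.default_rng(4)
      n, omega, alpha, beta = 5000, 1e-6, 0.1, 0.85
      r, sigma2 = np.zeros(n), np.full(n, 1e-4)
      for t in range(1, n):
          sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
          r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

      excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
      print("excess kurtosis (heavy tails if > 0):", round(excess_kurtosis, 2))
      print("autocorr of returns, lag 1:", round(autocorr(r, 1), 3))
      print("autocorr of |returns|, lag 1 (volatility clustering):", round(autocorr(np.abs(r), 1), 3))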
  6. By: Jiequn Han; Ruimeng Hu
    Abstract: We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large $N$-player stochastic differential games. Following the idea of fictitious play, we recast the $N$-player game into $N$ decoupled decision problems (one for each player) and solve them iteratively. The individual decision problem is characterized by a semilinear Hamilton-Jacobi-Bellman equation, which we solve with the recently developed deep BSDE method. The resulting algorithm can solve large $N$-player games for which conventional numerical methods would suffer from the curse of dimensionality. Multiple numerical examples involving identical or heterogeneous agents, with risk-neutral or risk-sensitive objectives, are tested to validate the accuracy of the proposed algorithm in large group games. Even for a fifty-player game in the presence of common noise, the proposed algorithm still finds the approximate Nash equilibrium accurately, which, to the best of our knowledge, is difficult to achieve with other numerical algorithms.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.01809&r=all
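    Code sketch: classic fictitious play on a 2x2 matrix game (matching pennies) in Python, shown only to illustrate the iterative best-response idea that the paper scales up to N-player stochastic differential games with deep BSDE solvers; it is not the authors' algorithm.
      import numpy as np

      A = np.array([[1, -1], [-1, 1]])            # payoff matrix of player 1; player 2 receives -A
      counts1, counts2 = np.ones(2), np.ones(2)   # empirical action counts, uniform start

      for _ in range(10000):
          belief2 = counts2 / counts2.sum()        # player 1's belief about player 2
          belief1 = counts1 / counts1.sum()        # player 2's belief about player 1
          counts1[np.argmax(A @ belief2)] += 1     # player 1 best-responds to the belief
          counts2[np.argmax(-A.T @ belief1)] += 1  # player 2 best-responds to the belief

      print("empirical mixed strategies:", counts1 / counts1.sum(), counts2 / counts2.sum())
      # Both converge towards (0.5, 0.5), the Nash equilibrium of matching pennies.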
  7. By: Florian Botte (CLERSE - Centre Lillois d’Études et de Recherches Sociologiques et Économiques - UMR 8019 - ULCO - Université du Littoral Côte d'Opale - Université de Lille - CNRS - Centre National de la Recherche Scientifique); Thomas Dallery (CLERSE - Centre Lillois d’Études et de Recherches Sociologiques et Économiques - UMR 8019 - ULCO - Université du Littoral Côte d'Opale - Université de Lille - CNRS - Centre National de la Recherche Scientifique)
    Abstract: The calibration and the study of the plausibility of models as stylised as the post-Kaleckian model are debated. In this article, we present a numerical simulation method for studying the stability properties of a model that depend on its calibration. Rather than calibrating a model by selecting the values of the non-observable parameters that produce "realistic" values of the equilibrium variables, we suggest looking at what happens in a model when the non-observable parameters are estimated and then combined: does the model then produce plausible values for the equilibrium variables? We apply this method to the Bhaduri-Marglin model with price adjustment. We show that the model struggles to produce a satisfactory number of plausible equilibrium values. We also study the stability conditions of the model (in size and in proportion), and we show that theoretically conceivable configurations have no numerically plausible counterpart for realistic values of the model's parameters. A careful study of a calibration conceived in this way puts into perspective the probability of encountering all the theoretically possible dynamics.
    Abstract: La calibration et l'étude de la plausibilité de modèles aussi stylisés que le modèle post-kaleckien fait débat. Dans cet article, nous présentons une méthode de simulation numérique permettant d'étudier les propriétés de stabilité d'un modèle qui dépendent de sa calibration. Plutôt que de calibrer un modèle en retenant les valeurs des paramètres non-observables qui produisent des valeurs « réalistes » des variables d'équilibre, nous proposons de regarder ce qui arrive dans un modèle quand on estime les paramètres non-observables et qu'on les combine: le modèle produit-il alors des valeurs plausibles pour les variables d'équilibre? Nous utilisons cette méthode au modèle de Bhaduri et Marglin avec ajustement des prix. Nous montrons que le modèle peine à produire un nombre satisfaisant de valeurs d'équilibres plausibles. Nous étudions également les conditions de stabilité du modèle (en dimension et en proportion), et nous démontrons que des configurations théoriquement envisageables n'ont pas de contrepartie numériquement plausibles pour des valeurs réalistes des paramètres composant le modèle. L'étude fine d'un calibrage ainsi pensé permet de relativiser la probabilité de rencontrer l'ensemble des dynamiques possibles sur le plan analytique.
    Keywords: plausibility, simulations, instability, Kaleckian model
    JEL: E11 E12 C62
    Date: 2019–12–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02335695&r=all
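    Code sketch: the simulation logic the abstract describes, in Python: draw many parameter combinations, compute the implied equilibrium and the Jacobian of the dynamics, and record how often the equilibrium is both stable and "plausible". The 2x2 linear system, the parameter ranges and the plausibility interval are generic stand-ins, not the authors' Bhaduri-Marglin model with flexible prices.
      import numpy as np

      rng = np.random.default_rng(5)
      n_draws, n_stable, n_plausible = 10000, 0, 0
      k = np.array([0.1, 0.1])                      # placeholder exogenous terms

      for _ in range(n_draws):
          a, b, c, d = rng.uniform(-1, 1, 4)        # placeholder behavioural parameters
          J = np.array([[a, b], [c, d]])            # Jacobian of the two-equation dynamics
          stable = bool(np.all(np.linalg.eigvals(J).real < 0))
          if abs(np.linalg.det(J)) < 1e-8:
              continue
          equilibrium = -np.linalg.solve(J, k)
          plausible = bool(np.all((equilibrium > 0) & (equilibrium < 1)))   # assumed plausibility range
          n_stable += stable
          n_plausible += stable and plausible

      print(f"stable: {n_stable / n_draws:.1%}, stable and plausible: {n_plausible / n_draws:.1%}")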
  8. By: Michael Benzaquen (LadHyX - Laboratoire d'hydrodynamique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Jean-Philippe Bouchaud (SPEC - UMR3680 - Service de physique de l'état condensé - CEA - Commissariat à l'énergie atomique et aux énergies alternatives - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We suggest that the broad distribution of time scales in financial markets could be a crucial ingredient to reproduce realistic price dynamics in stylised Agent-Based Models. We propose a fractional reaction-diffusion model for the dynamics of latent liquidity in financial markets, where agents are very heterogeneous in terms of their characteristic frequencies. Several features of our model are amenable to an exact analytical treatment. We find in particular that the impact is a concave function of the transacted volume (aka the "square-root impact law"), as in the normal diffusion limit. However, the impact kernel decays as t^(-β) with β = 1/2 in the diffusive case, which is inconsistent with market efficiency. In the sub-diffusive case the decay exponent β takes any value in [0, 1/2], and can be tuned to match the empirical value β ≈ 1/4. Numerical simulations confirm our theoretical results. Several extensions of the model are suggested.
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02323544&r=all
  9. By: Kanai, Ryota; Fujisawa, Ippei; Tamai, Shinya; Magata, Atsushi; Yasumoto, Masahiro
    Abstract: In this paper, we propose the hypothesis that consciousness evolved to serve as a platform for general intelligence. This idea stems from considerations of the potential biological functions of consciousness. Here we define general intelligence as the ability to apply knowledge and models acquired from past experiences to generate solutions to novel problems. Based on this definition, we propose three possible ways to establish general intelligence under existing methodologies for constructing AI systems, namely solution by simulation, solution by combination and solution by generation. Then, we relate those solutions to putative functions of consciousness put forward, respectively, by the information generation theory, the global workspace theory, and a form of higher order theory in which qualia are regarded as meta-representations. Based on these insights, we propose that consciousness integrates a group of specialized generative/forward models and forms a complex in which combinations of those models are flexibly formed, and that qualia are meta-representations of first-order mappings which endow an agent with the ability to choose which maps to use to solve novel problems. These functions can be implemented as an "artificial consciousness". Such systems can generate policies based on a small amount of trial and error when solving novel problems. Finally, we propose possible directions for future research into artificial consciousness and artificial general intelligence.
    Date: 2019–11–08
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:e4jh2&r=all
  10. By: Jawwad Noor (Department of Economics, Boston University)
    Abstract: Beliefs are intuitive if they rely on associative memory, which can be described as a network of associations between events. A belief-theoretic characterization of the model is provided, its uniqueness properties are established, and the intersection with the Bayesian model is characterized. The formation of intuitive beliefs is modelled after machine learning, whereby the network is shaped by past experience via minimization of the difference from an objective probability distribution. The model is shown to accommodate correlation misperception, the conjunction fallacy, base-rate neglect/conservatism, etc.
    Keywords: Beliefs, Intuition, Associative memory, Boltzmann machine, Energy-Based Neural Networks, Non-Bayesian updating
    JEL: C45 D01 D90
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2216&r=all
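    Code sketch: beliefs read off a small Boltzmann-style network of associations in Python, the kind of energy-based representation the paper's keywords point to. Events are binary, associations are symmetric weights, and the "intuitive" probability of a configuration is proportional to exp(-energy). The weights and baseline terms are made-up illustrations, not the paper's specification.
      import itertools
      import numpy as np

      W = np.array([[0.0, 1.5, 0.2],     # pairwise association strengths between three events
                    [1.5, 0.0, 0.2],
                    [0.2, 0.2, 0.0]])
      b = np.array([-0.5, -0.5, -0.5])   # baseline propensities

      def energy(x):
          return -0.5 * x @ W @ x - b @ x

      states = np.array(list(itertools.product([0, 1], repeat=3)))
      weights = np.exp([-energy(s) for s in states])
      probs = weights / weights.sum()

      p0 = probs[states[:, 0] == 1].sum()                         # marginal belief in event 0
      p0_given_1 = probs[(states[:, 0] == 1) & (states[:, 1] == 1)].sum() / probs[states[:, 1] == 1].sum()
      print(f"P(event 0) = {p0:.2f}, P(event 0 | event 1) = {p0_given_1:.2f}")
      # The strong 0-1 association inflates the conditional belief, the kind of
      # correlation (mis)perception the abstract mentions.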
  11. By: Yosuke Uno (Bank of Japan); Ko Adachi (Bank of Japan)
    Abstract: This paper examines the "don't know" responses to questions concerning inflation expectations in the Tankan survey. Specifically, using machine learning techniques, we attempt to extract "don't know" responses from respondent firms that are more likely to "know" in some sense. We then estimate the counterfactual inflation expectations of such respondents and examine the non-response bias based on the estimation results. Our findings can be summarized as follows. First, there is indeed a fraction of firms that respond "don't know" despite the fact that they seem to "know" something in some sense. Second, the number of such firms, however, is quite small. Third, the estimated counterfactual inflation expectations of such firms are not statistically significantly different from the corresponding official figures in the Tankan survey. Fourth and last, based on the above findings, the non-response bias in firms' inflation expectations is likely to be statistically negligible.
    Keywords: inflation expectations; PU classification; non-response bias
    JEL: C55 E31
    Date: 2019–12–25
    URL: http://d.repec.org/n?u=RePEc:boj:bojwps:wp19e17&r=all
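    Code sketch: positive-unlabeled (PU) classification with the Elkan-Noto correction in Python, in the spirit of the "PU classification" keyword: "positive" would correspond to firms known to hold an expectation, "unlabeled" to the "don't know" responses. The data and the labeling frequency are synthetic assumptions, not the Tankan survey.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 4000
      X = rng.normal(size=(n, 3))                                # placeholder firm characteristics
      y_true = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

      label_freq = 0.4                                           # only some true positives get labeled
      s = (y_true == 1) & (rng.uniform(size=n) < label_freq)     # s=1 labeled positive, s=0 unlabeled

      clf = LogisticRegression().fit(X, s.astype(int))           # model P(s=1 | x)
      c = clf.predict_proba(X[s])[:, 1].mean()                   # Elkan-Noto estimate of P(s=1 | y=1)
      p_pos = np.clip(clf.predict_proba(X)[:, 1] / c, 0, 1)      # corrected P(y=1 | x)

      print("accuracy against the (normally unobserved) truth:", ((p_pos > 0.5).astype(int) == y_true).mean())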
  12. By: Grodecka, Anna (Lund University and Knut Wicksell Centre for Financial Studies); Hull, Isaiah (Research Department, Central Bank of Sweden)
    Abstract: Attempts to measure the capitalization of local taxes into property prices, starting with Oates (1969), have suffered from a lack of local public service controls. We revisit this vast literature with a novel dataset of 947 time-varying local characteristic and public service controls for all municipalities in Sweden over the 2010-2016 period. To make use of the high dimensional vector of controls, as well as time and geographic fixed effects, we employ a novel empirical approach that modifies the recently-introduced debiased machine learning estimator by coupling it with a deep-wide neural network. We find that existing estimates of tax capitalization in the literature, including quasi-experimental work, may understate the impact of taxes on house prices by as much as 50%. We also exploit the unique features of our dataset to test core assumptions of the Tiebout hypothesis and to estimate the impact of public services, education, and crime on house prices.
    Keywords: Local Public Goods; Tax Capitalization; Tiebout Hypothesis; Machine Learning; Property Prices
    JEL: C45 C55 H31 H41 R30
    Date: 2019–04–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0374&r=all
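    Code sketch: double/debiased machine learning for a partially linear model, price = theta * tax + g(controls) + e, with cross-fitting, the generic version of the estimator the abstract refers to. The paper couples DML with a deep-wide neural network; random forests and synthetic data are used here only to keep the sketch short.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(7)
      n, p = 2000, 20
      controls = rng.normal(size=(n, p))                           # stand-ins for the 947 local controls
      tax = controls[:, 0] + rng.normal(0, 1, n)                   # local tax rate, driven by controls
      price = -0.5 * tax + 2 * controls[:, 0] + controls[:, 1] + rng.normal(0, 1, n)

      res_y, res_d = np.zeros(n), np.zeros(n)
      for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(controls):
          m_y = RandomForestRegressor(n_estimators=100, random_state=0).fit(controls[train], price[train])
          m_d = RandomForestRegressor(n_estimators=100, random_state=0).fit(controls[train], tax[train])
          res_y[test] = price[test] - m_y.predict(controls[test])  # partial controls out of the outcome
          res_d[test] = tax[test] - m_d.predict(controls[test])    # ...and out of the treatment

      theta = (res_d @ res_y) / (res_d @ res_d)                    # residual-on-residual regression
      print("estimated tax effect:", round(theta, 3), "(true value in this simulation: -0.5)")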
  13. By: Gert Bijnens; Shyngys Karimov; Jozef Konings
    Abstract: In 2015 Belgium suspended automatic wage indexation for a period of 12 months in order to boost competitiveness and increase employment. This paper uses a novel, machine learning based approach to construct a counterfactual experiment. This artificial counterfactual allows us to analyze the employment impact of suspending the indexation mechanism. We find a positive impact on employment of 0.5 percent, which corresponds to a labor demand elasticity of -0.25. This effect is more pronounced for manufacturing firms, where the impact on employment can reach 2 percent, which corresponds to a labor demand elasticity of -1.
    Keywords: labor demand, wage elasticity, counterfactual analysis, artificial control, machine learning
    Date: 2019–11–27
    URL: http://d.repec.org/n?u=RePEc:ete:ceswps:643831&r=all
  14. By: Kevin Kuo
    Abstract: One of the impediments in advancing actuarial research and developing open source assets for insurance analytics is the lack of realistic publicly available datasets. In this work, we develop a workflow for synthesizing insurance datasets leveraging state-of-the-art neural network techniques. We evaluate the predictive modeling efficacy of datasets synthesized from publicly available data in the domains of general insurance pricing and life insurance shock lapse modeling. The trained synthesizers are able to capture representative characteristics of the real datasets. This workflow is implemented via an R interface to promote adoption by researchers and data owners.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.02423&r=all
  15. By: Alexandru, Daia
    Abstract: This article describes various uses of kinetic energy in Natural Language Processing (NLP) and explains why Natural Language Processing could be used in trading, with the potential to be used also in other applications, including psychology and medicine. The kinetic (informational) energy introduced by the great Romanian mathematician Octav Onicescu (1892-1983) allows us to do feature engineering in various domains, including NLP, which we did in this experiment. In addition, we ran a machine learning model, XGBoost, to assess feature importance; the features it captured as most important were used to classify, for the reader's convenience, some authors by the content and style of their writing.
    Date: 2019–06–20
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:drwc6&r=all
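    Code sketch: Onicescu's informational energy, the sum of squared probabilities of a frequency distribution, computed on a text's character frequencies in Python. This is the kind of feature the abstract describes feeding into an XGBoost classifier; the short texts are placeholders and the classification step is omitted.
      from collections import Counter

      def informational_energy(text):
          letters = [ch for ch in text.lower() if ch.isalpha()]
          counts = Counter(letters)
          total = sum(counts.values())
          return sum((c / total) ** 2 for c in counts.values())

      texts = {
          "author A sample": "Stock prices often move on sentiment rather than fundamentals.",
          "author B sample": "Energy-based features summarise how concentrated a distribution is.",
      }
      for name, text in texts.items():
          print(name, "-> informational energy:", round(informational_energy(text), 4))
      # In the paper, such energies (over words, characters or other units) become
      # features for a gradient-boosted classifier of authorship.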
  16. By: Alexandru, Daia
    Abstract: This research paper demonstrates the invention of kinetic bands, based on Romanian mathematician and statistician Octav Onicescu's kinetic energy, also known as "informational energy", where we use historical data on foreign exchange currencies or indexes to predict the trend displayed by a stock or an index and whether it will go up or down in the future. Here, we explore the imperfections of the Bollinger Bands to determine a more sophisticated triplet of indicators that predict the future movement of prices in the stock market. Extreme Gradient Boosting modelling was conducted in Python using a historical data set from Kaggle spanning all current 500 listed companies. A variable importance plot was produced. The results showed that the kinetic bands, derived from kinetic energy (KE), are very influential as features or technical indicators of stock market trends. Furthermore, experiments done through this invention provide tangible evidence of its empirical aspects. The machine learning code has a low chance of error if all the proper procedures and coding are in place. The experiment samples are attached to this study for future reference or scrutiny.
    Date: 2019–06–20
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:m478b&r=all
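    Code sketch: standard Bollinger bands next to a rolling Onicescu informational energy computed on binned returns, a rough Python analogue of the "kinetic bands" idea. The exact construction of the kinetic bands is not given in the abstract, so the energy-based indicator below is an assumption for illustration only.
      import numpy as np

      rng = np.random.default_rng(8)
      close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 300)))   # placeholder price series
      window = 20

      def rolling(x, w):
          return np.array([x[i - w:i] for i in range(w, len(x) + 1)])

      win = rolling(close, window)
      mid, width = win.mean(axis=1), 2 * win.std(axis=1)
      upper, lower = mid + width, mid - width          # classic Bollinger bands

      def informational_energy(x, bins=10):            # Onicescu energy of binned values
          p, _ = np.histogram(x, bins=bins)
          p = p / p.sum()
          return float(np.sum(p ** 2))

      returns = np.diff(np.log(close))
      energy = np.array([informational_energy(w) for w in rolling(returns, window)])
      print("last Bollinger band:", round(lower[-1], 2), "to", round(upper[-1], 2))
      print("last rolling informational energy:", round(energy[-1], 3))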
  17. By: Olga Ivanova; d'Artis Kancs; Mark Thissen
    Abstract: This is the first study that attempts to assess the regional economic impacts of European Institute of Innovation and Technology (EIT) investments in a spatially explicit macroeconomic model. The model allows us to take into account all key direct, indirect and spatial spillover effects of EIT investments via inter-regional trade and investment linkages, and a spatial diffusion of technology via an endogenously determined global knowledge frontier, with endogenous growth engines driven by investments in knowledge and human capital. Our simulation results for highly detailed EIT expenditure data suggest that, besides sizable direct effects in those regions that receive EIT investment support, there are also significant spatial spillover effects to other (non-supported) EU regions. Taking into account all key indirect and spatial spillover effects is a particular strength of the adopted spatial general equilibrium methodology; our results suggest that they are indeed important and need to be taken into account when assessing the impacts of EIT investment policies on regional economies.
    Keywords: DSGE modelling, innovation, productivity, human capital, SCGE model, spatial spillovers.
    JEL: C68 D58 F12 R13 R30
    Date: 2019–10–10
    URL: http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2019_10&r=all
  18. By: Morris A. Davis; Jesse Gregory; Daniel A. Hartley; Kegon T. K. Tan
    Abstract: Researchers and policy-makers have explored the possibility of restricting the use of housing vouchers to neighborhoods that may positively affect the outcomes of children. Using the framework of a dynamic model of optimal location choice, we estimate preferences over neighborhoods of likely recipients of housing vouchers in Los Angeles. We combine simulations of the model with estimates of how locations affect adult earnings of children to understand how a voucher policy that restricts neighborhoods in which voucher-recipients may live affects both the location decisions of households and the adult earnings of children. We show the model can replicate the impact of the Moving to Opportunity experiment on the adult wages of children. Simulations suggest a policy that restricts housing vouchers to the top 20% of neighborhoods maximizes expected aggregate adult earnings of children of households offered these vouchers.
    Keywords: neighborhood choice, housing vouchers, Los Angeles, Moving to Opportunity, dynamic models
    JEL: I24 I31 I38 J13 R23
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2019-084&r=all
  19. By: Wong, William; Tsuchiya, Naotsugu
    Abstract: Evidence accumulation clustering (EAC) is an ensemble clustering algorithm that can cluster data with arbitrary shapes and numbers of clusters. Here, we present a variant of EAC aimed at better clustering data with a large number of features, many of which may be uninformative. Our new method builds on the existing EAC algorithm by populating the clustering ensemble with clusterings based on combinations of fewer features than the original dataset at a time. Our method also calls for prewhitening the recombined data and weighting the influence of each individual clustering by an estimate of its informativeness. We provide an example implementation of the algorithm in Matlab and demonstrate its effectiveness compared to ordinary evidence accumulation clustering on synthetic data.
    Date: 2019–05–30
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:epb6t&r=all
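    Code sketch: evidence accumulation clustering over random feature subsets in Python, the core idea of the variant described above. Each base clustering uses only a few features, co-assignments are accumulated into a co-association matrix, and a final hierarchical clustering cuts it. The informativeness weighting and prewhitening from the paper are omitted for brevity, and the data are synthetic (the paper's implementation is in Matlab).
      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import squareform
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      rng = np.random.default_rng(9)
      X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
      X[:, 5:] = rng.normal(size=(300, 5))                # half the features are uninformative

      n_samples, n_runs, subset_size = X.shape[0], 50, 3
      coassoc = np.zeros((n_samples, n_samples))
      for _ in range(n_runs):
          feats = rng.choice(X.shape[1], size=subset_size, replace=False)
          labels = KMeans(n_clusters=3, n_init=10).fit_predict(X[:, feats])
          coassoc += labels[:, None] == labels[None, :]   # accumulate co-assignments
      coassoc /= n_runs

      dist = squareform(1 - coassoc, checks=False)        # co-association turned into a distance
      final = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
      print("recovered cluster sizes:", np.bincount(final)[1:])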
  20. By: Dahlke, Steven
    Abstract: A closed electricity network with three markets is modeled to illustrate the impacts of transmission constraints and market power on prices and economic welfare. Four scenarios are presented: the first two assume perfect competition with and without transmission constraints, while the second two model market power with and without transmission constraints. The results show that transmission constraints reduce total surplus relative to the unconstrained case. When firms exercise market power their profits increase, while consumer surplus and total surplus decrease. Some results are counterintuitive, such as the price exceeding the marginal cost of the most inefficient generator in a market with perfect competition, caused by transmission constraints and Kirchhoff's voltage law governing power flows. The GAMS code used to solve the models is included in the appendix. Next steps for research involve building the model out to replicate a real-world market, to simulate the impacts of proposed market restructuring, or to identify areas of deregulated markets at high risk of market power abuse.
    Date: 2019–05–28
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:9vep7&r=all
  21. By: Ian Burn; Patrick Button; Luis Felipe Munguia Corella; David Neumark
    Abstract: We study the relationships between ageist stereotypes – as reflected in the language used in job ads – and age discrimination in hiring, exploiting the text of job ads and differences in callbacks to older and younger job applicants from a previous resume (correspondence study) field experiment (Neumark, Burn, and Button, 2019). Our analysis uses methods from computational linguistics and machine learning to directly identify, in a field-experiment setting, ageist stereotypes that underlie age discrimination in hiring. We find evidence that language related to stereotypes of older workers sometimes predicts discrimination against older workers. For men, our evidence points most strongly to age stereotypes about physical ability, communication skills, and technology predicting age discrimination, and for women, age stereotypes about communication skills and technology. The method we develop provides a framework for applied researchers analyzing textual data, highlighting the usefulness of various computer science techniques for empirical economics research.
    JEL: J14 J23 J7 J78
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26552&r=all
  22. By: KONDO Satoshi; MIYAKAWA Daisuke; SHIRAKI Kengo; SUGA Miki; USUKI Teppei
    Abstract: This study investigates the usefulness of machine learning methods for detecting and forecasting accounting fraud. First, we aim to "detect" accounting fraud and confirm an improvement in detection performance. We achieve this by using machine learning, which allows a high-dimensional feature space, compared with a classical parametric model based on a limited set of explanatory variables. Second, we aim to "forecast" accounting fraud using the same approach. This area has not been studied much in the past, yet we confirm a solid forecast performance. Third, we interpret the model by examining how the estimated score changes with respect to changes in each predictor. The validation is done on publicly listed companies in Japan, and we confirm that the machine learning method increases the model performance and that the richer interaction of predictors, which machine learning makes possible, contributes to a large improvement in prediction.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:19103&r=all
  23. By: Lily Shen (Clemson University); Stephen L. Ross (University of Connecticut)
    Abstract: Recent research in economics and finance has recognized the potential of utilizing textual “soft” data for valuing heterogeneous assets. This paper employs machine learning to quantify the value of “soft” information contained in real estate property descriptions. Textual descriptions contain information that traditional hedonic attributes cannot capture. A one standard deviation increase in unobserved quality based on our “soft” information leads to a 15% increase in property sale price. Further, annual hedonic house price indices ignoring our measure of unobserved quality overstate real estate prices by 11% to 16% and mistime the recovery of housing prices following the Great Recession.
    Keywords: Natural Language Processing, Unsupervised Machine Learning, Soft Information, Housing Prices, Price Indices, Property Descriptions
    JEL: R31 G12 G14 C45
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2019-20&r=all
  24. By: Hoogervorst, R.; Dollevoet, T.A.B.; Maróti, G.; Huisman, D.
    Abstract: We present a Variable Neighborhood Search heuristic for the rolling stock rescheduling problem. Rolling stock rescheduling is needed when a disruption leads to cancellations in the timetable. In rolling stock rescheduling, we then assign duties, i.e., sequences of trips, to the available train units in such a way that both passenger comfort and operational performance are taken into account. For our heuristic, we introduce three neighborhoods that can be used for rolling stock rescheduling, which respectively focus on swapping duties between train units, on improving the individual duties and on changing the shunting that occurs between trips. These neighborhoods are used for both a Variable Neighborhood Descent local search procedure and for perturbing the current solution in order to escape from local optima. We apply our heuristic to instances of Netherlands Railways (NS). The results show that the heuristic is able to find high-quality solutions in a reasonable amount of time. This allows rolling stock dispatchers to use our heuristic in real-time rescheduling.
    Keywords: Disruption Management, Rolling Stock Rescheduling, Variable Neighborhood Search
    Date: 2019–12–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:122716&r=all
  25. By: Filipe Fontanela; Antoine Jacquier; Mugad Oumgari
    Abstract: We propose a hybrid quantum-classical algorithm, originating from quantum chemistry, to price European and Asian options in the Black-Scholes model. Our approach is based on the equivalence between the pricing partial differential equation and the Schrödinger equation in imaginary time. We devise a strategy to build a shallow quantum circuit approximation to this equation, requiring only a few qubits. This constitutes a promising candidate for the application of Quantum Computing techniques (with a large number of qubits affected by noise) in Quantitative Finance.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.02753&r=all
  26. By: Vuong, Quan-Hoang; Ho, Toan Manh (Thanh Tay University Hanoi)
    Abstract: Conventional entrepreneurship considers three main characteristics: risk-taking, small scale, and self-employment. The concept of computational entrepreneurship shares the same characteristics, but computational power transforms the business. In computational entrepreneurship, risk-taking becomes calculated risk-taking that employs the availability of extensive computing power; the payoff for an individual transaction might be smaller, but the entrepreneur may benefit from a larger scale of business due to the connectivity of the Internet. Hierarchical models of entrepreneurial activities may also be possible, together with the serendipity-based modality of spotting emerging business opportunities.
    Date: 2018–12–09
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:6n9wb&r=all
  27. By: MIYAKAWA Daisuke
    Abstract: We examine the association between changes in supply chain networks and firm dynamics. To determine the causal relationship, first, using data on over a million Japanese firms, we construct machine learning-based prediction models for the three modes of firm exit (i.e., default, voluntary closure, and dissolution) and for firm sales growth. Given the high performance of those prediction models, second, we use the double machine learning method (Chernozhukov et al. 2018) to determine the causal relationships running from changes in supply chain networks to those indexes of firm dynamics. The estimates suggest, first, that an increase in global and local centrality indexes results in a lower probability of exit. Second, higher meso-scale centrality leads to a higher probability of exit. Third, we also confirm the positive association of global and local centrality indexes with sales growth as well as the negative association of a meso-scale centrality index with sales growth. Fourth, somewhat surprisingly, we find that an increase in one type of local centrality index shows a negative association with sales growth. These results reconfirm, in a causal setting, the previously reported correlation between the centrality of firms in supply chain networks and firm dynamics, and further show the unique role of centralities measured in local and medium-sized clusters.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:19100&r=all
  28. By: Zongxi Li; A. Max Reppen; Ronnie Sircar
    Abstract: We propose a mean field game model to study how centralization of reward and computational power occurs in Bitcoin-like cryptocurrencies. Miners compete against each other for mining rewards by increasing their computational power. This leads to a novel mean field game of jump intensity control, which we solve explicitly for miners maximizing exponential utility, and handle numerically in the case of miners with power utilities. We show that the heterogeneity of their initial wealth distribution leads to greater imbalance of the reward distribution, a "rich get richer" effect. This concentration phenomenon is aggravated by a higher bitcoin price and reduced by competition. Additionally, an advanced miner with cost advantages, such as access to cheaper electricity, contributes a significant amount of computational power in equilibrium. Hence, cost efficiency can also result in the type of centralization seen among miners of cryptocurrencies.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.01952&r=all
  29. By: Kyle Colangelo (Institute for Fiscal Studies); Ying-Ying Lee (Institute for Fiscal Studies)
    Abstract: We propose a nonparametric inference method for causal effects of continuous treatment variables, under unconfoundedness and in the presence of high-dimensional or nonparametric nuisance parameters. Our simple kernel-based double debiased machine learning (DML) estimators for the average dose-response function (or the average structural function) and the partial effects are asymptotically normal with a nonparametric convergence rate. The nuisance estimators for the conditional expectation function and the generalized propensity score can be nonparametric kernel or series estimators or ML methods. Using a doubly robust influence function and cross-fitting, we give tractable primitive conditions under which the nuisance estimators do not affect the first-order large sample distribution of the DML estimators.
    Date: 2019–10–21
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:54/19&r=all
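    Code sketch: a simplified kernel-based, doubly robust estimate of the average dose-response beta(t) = E[Y(t)] for a continuous treatment, in the spirit of the estimator above. The nuisance models (a random forest for the outcome and a Gaussian model for the treatment density), the bandwidth and the synthetic data are assumptions, and the paper's cross-fitting is omitted.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(10)
      n = 3000
      X = rng.normal(size=(n, 3))
      T = 0.5 * X[:, 0] + rng.normal(0, 1, n)                  # continuous treatment
      Y = np.sin(T) + X[:, 0] + rng.normal(0, 0.5, n)          # true dose-response is sin(t)

      mu = RandomForestRegressor(n_estimators=200, random_state=0).fit(np.column_stack([T, X]), Y)
      t_model = LinearRegression().fit(X, T)                   # treatment model for the conditional density
      sigma = np.std(T - t_model.predict(X))

      def beta_hat(t, h=0.3):
          kern = np.exp(-0.5 * ((T - t) / h) ** 2) / (h * np.sqrt(2 * np.pi))               # kernel around t
          dens = np.exp(-0.5 * ((t - t_model.predict(X)) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          reg_t = mu.predict(np.column_stack([np.full(n, t), X]))                           # mu_hat(t, X_i)
          resid = Y - mu.predict(np.column_stack([T, X]))                                   # Y_i - mu_hat(T_i, X_i)
          return np.mean(reg_t + kern * resid / dens)

      for t in (-1.0, 0.0, 1.0):
          print(f"beta_hat({t:+.1f}) = {beta_hat(t):+.3f}   (true value: {np.sin(t):+.3f})")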

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.