nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒08‒09
twenty-six papers chosen by



  1. Stock Movement Prediction with Financial News using Contextualized Embedding from BERT By Qinkai Chen
  2. The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning By Stephan Zheng; Alexander Trott; Sunil Srinivasa; David C. Parkes; Richard Socher
  3. Machine Learning and Factor-Based Portfolio Optimization By Thomas Conlon; John Cotter; Iason Kynigakis
  4. "Asymptotic Expansion and Deep Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Kolmogorov Partial Differential Equations with Nonlinear Coefficients" By Akihiko Takahashi; Toshihiro Yamada
  5. Regional Climate Model Emulator Based on Deep Learning: Concept and First Evaluation of a Novel Hybrid Downscaling Approach By Gadat, Sébastien; Corre, Lola; Doury, Antoine; Ribes, Aurélien; Somot, Samuel
  6. Implementing the BBE Agent-Based Model of a Sports-Betting Exchange By Dave Cliff; James Hawkins; James Keen; Roberto Lau-Soto
  7. Data Analytics and Machine Learning paradigm to gauge performances combining classification, ranking and sorting for system analysis By Andrea Pontiggia; Giovanni Fasano
  8. Mission-Oriented Policies and the “Entrepreneurial State” at Work: An Agent-Based Exploration By Giovanni Dosi; Francesco Lamperti; Mariana Mazzucato; Mauro Napoletano; Andrea Roventini
  9. The impact of machine learning and big data on credit markets By Eccles, Peter; Grout, Paul; Siciliani, Paolo; Zalewska, Anna
  10. Adaptive Multilevel Monte Carlo for Probabilities By Abdul-Lateef Haji-Ali; Jonathan Spence; Aretha Teckentrup
  11. COVID-19 Tourism Recovery in the ASEAN and East Asia Region: Asymmetric Patterns and Implications By Stathis Polyzos; Anestis Fotiadis; Aristeidis Samitas
  12. Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules By NARITA Yusuke; YATA Kohei
  13. Systematizing the Lexicon of Platforms in Information Systems: A Data-Driven Study By Christian Bartelheimer
  14. "Deep Asymptotic Expansion with Weak Approximation " By Yuga Iguchi; Riu Naito; Yusuke Okano; Akihiko Takahashi; Toshihiro Yamada
  15. Online Supplements: Combining Crowd and Machine Intelligence to Detect False News in Social Media By Wei, Xuan; Zhang, Zhu; Zhang, Mingyue; Chen, Weiyun; Zeng, Daniel Dajun
  16. Practical Optimal Income Taxation By Jonathan Heathcote; Hitoshi Tsujiyama
  17. Design of a Covid-19 model for environmental impact: From the partial equilibrium to the Computable General Equilibrium model By Tchoffo, Rodrigue
  18. A Data-driven Explainable Case-based Reasoning Approach for Financial Risk Detection By Wei Li; Florentina Paraschiv; Georgios Sermpinis
  19. A data-science-driven short-term analysis of Amazon, Apple, Google, and Microsoft stocks By Shubham Ekapure; Nuruddin Jiruwala; Sohan Patnaik; Indranil SenGupta
  20. Sustainability of Global Economy as a Quantum Circuit By Antonino Claudio Bonan
  21. Pricing Exchange Option Based on Copulas by MCMC Algorithm By Wen Su
  22. International High-Frequency Arbitrage for Cross-Listed Stocks By Poutré, Cédric; Dionne, Georges; Yergeau, Gabriel
  23. On the classification of financial data with domain agnostic features By João A. Bastos; Jorge Caiado
  24. Dynamics of Imitation versus Innovation in Technological Leadership Change: Latecomers’ Catch-up Strategies in Diverse Technological Regimes By Chang, Sungyong; Kim, Hyunseob; Song, Jaeyong; Lee, Keun
  25. Hamiltonian Monte Carlo for Regression with High-Dimensional Categorical Data By Szymon Sacher; Laura Battaglia; Stephen Hansen
  26. Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety By Denkenberger, David; Sandberg, Anders; Tieman, Ross; Pearce, Joshua M.

  1. By: Qinkai Chen
    Abstract: News events can greatly influence equity markets. In this paper, we are interested in predicting the short-term movement of stock prices after financial news events using only the headlines of the news. To achieve this goal, we introduce a new text mining method called Fine-Tuned Contextualized-Embedding Recurrent Neural Network (FT-CE-RNN). Compared with previous approaches which use static vector representations of the news (static embeddings), our model uses contextualized vector representations of the headlines (contextualized embeddings) generated from Bidirectional Encoder Representations from Transformers (BERT). Our model obtains state-of-the-art results on this stock movement prediction task, showing significant improvement over baseline models in both accuracy and trading simulations. Through various trading simulations based on millions of headlines from Bloomberg News, we demonstrate the model's effectiveness in realistic scenarios.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.08721&r=
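    A minimal sketch of the general pipeline the abstract describes (not the authors' FT-CE-RNN): contextualized token embeddings from a pretrained BERT are fed to a recurrent classifier that predicts the direction of the price move. The model name, layer sizes, and two-class labeling here are illustrative assumptions.

      # Sketch only: BERT contextualized embeddings -> GRU -> up/down logits.
      import torch
      import torch.nn as nn
      from transformers import AutoTokenizer, AutoModel

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      bert = AutoModel.from_pretrained("bert-base-uncased")

      class HeadlineClassifier(nn.Module):
          def __init__(self, hidden=128):
              super().__init__()
              self.rnn = nn.GRU(input_size=768, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 2)  # two classes: price up / price down

          def forward(self, input_ids, attention_mask):
              # Contextualized embeddings: one 768-dim vector per token that,
              # unlike a static embedding, depends on the whole headline.
              emb = bert(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
              _, h = self.rnn(emb)
              return self.head(h[-1])

      batch = tokenizer(["Company X beats quarterly earnings estimates"],
                        return_tensors="pt")
      logits = HeadlineClassifier()(batch["input_ids"], batch["attention_mask"])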
  2. By: Stephan Zheng; Alexander Trott; Sunil Srinivasa; David C. Parkes; Richard Socher
    Abstract: AI and reinforcement learning (RL) have improved many areas, but are not yet widely adopted in economic policy design, mechanism design, or economics at large. At the same time, current economic methodology is limited by a lack of counterfactual data, simplistic behavioral models, and limited opportunities to experiment with policies and evaluate behavioral responses. Here we show that machine-learning-based economic simulation is a powerful policy and mechanism design framework to overcome these limitations. The AI Economist is a two-level, deep RL framework that trains both agents and a social planner who co-adapt, providing a tractable solution to the highly unstable and novel two-level RL challenge. From a simple specification of an economy, we learn rational agent behaviors that adapt to learned planner policies and vice versa. We demonstrate the efficacy of the AI Economist on the problem of optimal taxation. In simple one-step economies, the AI Economist recovers the optimal tax policy of economic theory. In complex, dynamic economies, the AI Economist substantially improves both utilitarian social welfare and the trade-off between equality and productivity over baselines. It does so despite emergent tax-gaming strategies, while accounting for agent interactions and behavioral change more accurately than economic theory. These results demonstrate for the first time that two-level, deep RL can be used for understanding and as a complement to theory for economic design, unlocking a new computational learning-based approach to understanding economic policy.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02755&r=
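    The two-level structure is easier to see in a toy example. The sketch below is not the AI Economist code: a quadratic labor-supply economy and a planner grid search stand in for the paper's deep RL at both levels, but the shape is the same -- agents adapt to the current tax, and the planner optimizes against the adapted agents.

      # Sketch only: inner level, agents best-respond to the tax; outer
      # level, the planner searches for the welfare-maximizing flat tax.
      import numpy as np

      wages = np.array([0.5, 1.0, 2.0, 4.0])         # heterogeneous agent skills

      def agent_best_response(tau):
          # Agents maximize (1 - tau) * w * l - l**2 / 2  =>  l = (1 - tau) * w.
          return (1.0 - tau) * wages

      def social_welfare(tau):
          labor = agent_best_response(tau)           # agents co-adapt to the tax
          income = wages * labor
          transfer = tau * income.sum() / len(wages) # revenue rebated lump-sum
          consumption = (1.0 - tau) * income + transfer
          # Toy utilitarian objective: log consumption (inequality-averse);
          # labor disutility is omitted for brevity.
          return np.log(consumption).sum()

      taus = np.linspace(0.0, 0.9, 91)
      best_tau = taus[np.argmax([social_welfare(t) for t in taus])]
      print(f"planner's tax rate: {best_tau:.2f}")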
  3. By: Thomas Conlon (Smurfit Graduate Business School, University College Dublin); John Cotter (Smurfit Graduate Business School, University College Dublin); Iason Kynigakis (Smurfit Graduate Business School, University College Dublin)
    Abstract: We examine machine learning and factor-based portfolio optimization. We find that factors based on autoencoder neural networks exhibit a weaker relationship with commonly used characteristic-sorted portfolios than popular dimensionality reduction techniques. Machine learning methods also lead to covariance and portfolio weight structures that diverge from simpler estimators. Minimum-variance portfolios using latent factors derived from autoencoders and sparse methods outperform simpler benchmarks in terms of risk minimization. These effects are amplified for investors with an increased sensitivity to risk-adjusted returns, during high volatility periods or when accounting for tail risk. Covariance matrices with a time-varying error component improve portfolio performance at a cost of higher turnover.
    Keywords: Autoencoder, Covariance matrix, Dimensionality reduction, Factor models, Machine learning, Minimum-variance, Principal component analysis, Partial least squares, Portfolio optimization, Sparse principal component analysis, Sparse partial least squares
    JEL: C38 C4 C45 C5 C58 G1 G11
    Date: 2021–03–11
    URL: http://d.repec.org/n?u=RePEc:ucd:wpaper:202111&r=
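    One step of the pipeline the abstract describes, sketched with PCA standing in for the paper's autoencoder: extract latent factors from a panel of returns, assemble a factor-based covariance matrix, and form minimum-variance weights $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}'\Sigma^{-1}\mathbf{1})$. The dimensions and the random data are illustrative.

      # Sketch only: latent factors -> factor covariance -> min-variance weights.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      R = rng.normal(size=(500, 20))              # T x N panel of asset returns

      pca = PCA(n_components=3)
      F = pca.fit_transform(R)                    # latent factors (T x k)
      B = np.linalg.lstsq(F, R, rcond=None)[0].T  # factor loadings (N x k)
      resid = R - F @ B.T
      # Factor covariance plus a diagonal idiosyncratic component.
      Sigma = B @ np.cov(F.T) @ B.T + np.diag(resid.var(axis=0))

      ones = np.ones(R.shape[1])
      w = np.linalg.solve(Sigma, ones)
      w /= ones @ w                               # w = Sigma^-1 1 / (1' Sigma^-1 1)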
  4. By: Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Toshihiro Yamada (Graduate School of Economics, Hitotsubashi University and Japan Science and Technology Agency (JST))
    Abstract: This paper proposes a new spatial approximation method without the curse of dimensionality for solving high-dimensional partial differential equations (PDEs) by using an asymptotic expansion method with a deep learning-based algorithm. In particular, the mathematical justification of the spatial approximation is provided, and a numerical example for a 100-dimensional Kolmogorov PDE shows the effectiveness of our method.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2021cf1167&r=
  5. By: Gadat, Sébastien; Corre, Lola; Doury, Antoine; Ribes, Aurélien; Somot, Samuel
    Abstract: Providing reliable information on climate change at the local scale remains a challenge of first importance for impact studies and policymakers. Here, we propose a novel hybrid downscaling method combining the strengths of both empirical statistical downscaling methods and Regional Climate Models (RCMs). The aim of this tool is to enlarge the size of high-resolution RCM simulation ensembles at low cost. We build a statistical RCM-emulator by estimating the downscaling function included in the RCM. This framework allows us to learn the relationship between large-scale predictors and a local surface variable of interest over the RCM domain in present and future climate. Furthermore, the emulator relies on a neural network architecture, which grants computational efficiency. The RCM-emulator developed in this study is trained to produce daily maps of the near-surface temperature at the RCM resolution (12 km). The emulator demonstrates an excellent ability to reproduce the complex spatial structure and daily variability simulated by the RCM, and in particular the way the RCM locally refines the low-resolution climate patterns. Training in future climate appears to be a key feature of our emulator. Moreover, there is a huge computational benefit in running the emulator rather than the RCM: training the emulator takes about 2 hours on a GPU, and prediction is nearly instantaneous. However, further work is needed to improve the way the RCM-emulator reproduces some of the temperature extremes and the intensity of climate change, and to extend the proposed methodology to different regions, GCMs, RCMs, and variables of interest.
    Keywords: Emulator, Hybrid downscaling, Regional Climate Modeling, Statistical Downscaling, Deep Neural Network, Machine Learning
    Date: 2021–07–21
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:125808&r=
  6. By: Dave Cliff; James Hawkins; James Keen; Roberto Lau-Soto
    Abstract: We describe three independent implementations of a new agent-based model (ABM) that simulates a contemporary sports-betting exchange, such as those offered commercially by companies including Betfair, Smarkets, and Betdaq. The motivation for constructing this ABM, which is known as the Bristol Betting Exchange (BBE), is so that it can serve as a synthetic data generator, producing large volumes of data that can be used to develop and test new betting strategies via advanced data analytics and machine learning techniques. Betting exchanges act as online platforms on which bettors can find willing counterparties to a bet, and they do this in a way that is directly comparable to the manner in which electronic financial exchanges, such as major stock markets, act as platforms that allow traders to find willing counterparties to buy from or sell to: the platform aggregates and anonymises orders from multiple participants, showing a summary of the market that is updated in real-time. In the first instance, BBE is aimed primarily at producing synthetic data for in-play betting (also known as in-race or in-game betting) where bettors can place bets on the outcome of a track-race event, such as a horse race, after the race has started and for as long as the race is underway, with betting only ceasing when the race ends. The rationale for, and design of, BBE has been described in detail in a previous paper that we summarise here, before discussing our comparative results which contrast a single-threaded implementation in Python, a multi-threaded implementation in Python, and an implementation where Python header-code calls simulations of the track-racing events written in OpenCL that execute on a 640-core GPU -- this runs approximately 1000 times faster than the single-threaded Python. Our source-code for BBE is freely available on GitHub.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02419&r=
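    A toy illustration of the aggregation-and-anonymisation mechanism described above (not the BBE source code, which is on GitHub): individual back and lay orders are pooled into total stake per odds level, just as a financial exchange pools orders per price level.

      # Sketch only: an anonymised market summary from individual orders.
      from collections import defaultdict

      orders = [  # (side, decimal odds, stake) from individual bettors
          ("back", 3.0, 50), ("back", 3.0, 25), ("back", 2.8, 40),
          ("lay", 3.2, 60), ("lay", 3.4, 30),
      ]

      book = {"back": defaultdict(float), "lay": defaultdict(float)}
      for side, odds, stake in orders:
          book[side][odds] += stake              # aggregation hides identities

      best_back = max(book["back"])              # highest odds available to back
      best_lay = min(book["lay"])                # lowest odds available to lay
      print(best_back, best_lay)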
  7. By: Andrea Pontiggia (Dept. of Management, Università Ca' Foscari Venice); Giovanni Fasano (Dept. of Management, Università Ca' Foscari Venice)
    Abstract: We consider the problem of measuring the performances associated with members of a given group of homogeneous individuals. We provide both an analysis relying on Machine Learning paradigms and numerical experiments based on three conceptually different real applications. A key aspect of the proposed approach is its data-driven framework, in which guidelines for evaluating individuals' performance are derived from the data associated with the entire group. This makes our analysis and its outcomes quite versatile, so that a range of real problems can be studied from the proposed general perspective.
    Keywords: Performance Analysis, Data Analytics, Support Vector Machines, Human Resources
    JEL: M51 C38
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:vnm:wpdman:182&r=
  8. By: Giovanni Dosi (Laboratory of Economics and Management); Francesco Lamperti (Université Panthéon-Sorbonne - Paris 1 (UP1)); Mariana Mazzucato; Mauro Napoletano (Observatoire français des conjonctures économiques); Andrea Roventini
    Abstract: We study the impact of alternative innovation policies on the short- and long-run performance of the economy, as well as on public finances, extending the Schumpeter meeting Keynes agent-based model (Dosi et al., 2010). In particular, we consider market-based innovation policies such as R&D subsidies to firms, tax discount on investment, and direct policies akin to the “Entrepreneurial State” (Mazzucato, 2013), involving the creation of public research oriented firms diffusing technologies along specific trajectories, and funding a Public Research Lab conducting basic research to achieve radical innovations that enlarge the technological opportunities of the economy. Simulation results show that all policies improve productivity and GDP growth, but the best outcomes are achieved by active discretionary State policies, which are also able to crowd-in private investment and have positive hysteresis effects on growth dynamics. For the same size of public resources allocated to market-based interventions, “Mission” innovation policies deliver significantly better aggregate performance if the government is patient enough and willing to bear the intrinsic risks related to innovative activities.
    Keywords: Innovation policy, mission-oriented R&D, entrepreneurial state, agent-based modelling
    JEL: O33 O38 O31 O40 C63
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/401t6job098n79ch91o9giov9d&r=
  9. By: Eccles, Peter (Bank of England); Grout, Paul (Bank of England); Siciliani, Paolo (Bank of England); Zalewska, Anna (University of Bath)
    Abstract: There is evidence that machine learning (ML) can improve the screening of risky borrowers, but the empirical literature gives diverse answers as to the impact of ML on credit markets. We provide a model in which traditional banks compete with fintech (innovative) banks that screen borrowers using ML technology and show that the impact of the adoption of the ML technology on credit markets depends on the characteristics of the market (eg borrower mix, cost of innovation, the intensity of competition, precision of the innovative technology, etc.). We provide a series of scenarios. For example, we show that if implementing ML technology is relatively expensive and lower-risk borrowers are a significant proportion of all risky borrowers, then all risky borrowers will be worse off following the introduction of ML, even when the lower-risk borrowers can be separated perfectly from others. At the other extreme, we show that if costs of implementing ML are low and there are few lower-risk borrowers, then lower-risk borrowers gain from the introduction of ML, at the expense of higher-risk and safe borrowers. Implications for policy, including the potential for tension between micro and macroprudential policies, are explored.
    Keywords: Adverse selection; banking; big data; capital requirements; credit markets; fintech; machine learning; prudential regulation
    JEL: G21 G28 G32
    Date: 2021–07–09
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0930&r=
  10. By: Abdul-Lateef Haji-Ali; Jonathan Spence; Aretha Teckentrup
    Abstract: We consider the numerical approximation of $\mathbb{P}[G\in \Omega]$ where the $d$-dimensional random variable $G$ cannot be sampled directly, but there is a hierarchy of increasingly accurate approximations $\{G_\ell\}_{\ell\in\mathbb{N}}$ which can be sampled. The cost of standard Monte Carlo estimation scales poorly with accuracy in this setup since it compounds the approximation and sampling cost. A direct application of Multilevel Monte Carlo improves this cost scaling slightly, but returns sub-optimal computational complexities since estimation of the probability involves a discontinuous functional of $G_\ell$. We propose a general adaptive framework which is able to return the MLMC complexities seen for smooth or Lipschitz functionals of $G_\ell$. Our assumptions and numerical analysis are kept general allowing the methods to be used for a wide class of problems. We present numerical experiments on nested simulation for risk estimation, where $G = \mathbb{E}[X|Y]$ is approximated by an inner Monte Carlo estimate. Further experiments are given for digital option pricing, involving an approximation of a $d$-dimensional SDE.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.09148&r=
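    A minimal non-adaptive MLMC sketch for this kind of probability, under an assumed toy hierarchy where the level-$\ell$ approximation adds noise that halves per level; the paper's contribution is an adaptive refinement of estimators of this shape.

      # Sketch only: telescoping-sum MLMC estimate of P[G < c], where the
      # level-l approximation G_l = X + 2^-l * noise converges to G = X.
      import numpy as np

      rng = np.random.default_rng(0)
      N, L, c = 100_000, 6, 0.5

      def coupled_pair(level, n):
          # Fine and coarse approximations share the same X, so the
          # indicator differences have small variance at deep levels.
          X = rng.normal(size=n)
          fine = X + 2.0 ** (-level) * rng.normal(size=n)
          coarse = X + 2.0 ** (-(level - 1)) * rng.normal(size=n)
          return fine, coarse

      X0 = rng.normal(size=N)
      estimate = np.mean(X0 + rng.normal(size=N) < c)   # level 0
      for level in range(1, L + 1):
          fine, coarse = coupled_pair(level, N)
          estimate += np.mean((fine < c).astype(float) - (coarse < c).astype(float))
      print(estimate)   # approaches Phi(c / sqrt(1 + 4**-L)) ~ 0.69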
  11. By: Stathis Polyzos; Anestis Fotiadis; Aristeidis Samitas (College of Business, Zayed University, Abu Dhabi, UAE)
    Abstract: The aim of this paper is to produce forecasts for tourism flows and tourism revenue for ASEAN and East Asian countries after the end of the COVID-19 pandemic. By implementing two different machine-learning methodologies (the Long Short-Term Memory neural network and the Generalised Additive Model) and using different training data sets, we aim to forecast the recovery patterns for these data series for the first 12 months after the end of the crisis. We thus produce a baseline forecast, based on the averages of our different models, as well as worst- and best-case scenarios. We show that recovery is asymmetric across the group of countries in the ASEAN and East Asia region and that recovery in tourism revenue is generally slower than in tourist arrivals. We show significant losses of approximately 48%, persistent after 12 months, for some countries, while others display increases of approximately 40% when compared to pre-crisis levels. Our work aims to quantify the projected drop in tourist arrivals and tourism revenue for ASEAN and East Asian countries over the coming months. The results of the proposed research can be used by policymakers as they determine recovery plans, where tourism will undoubtedly play a very important role.
    Keywords: COVID-19, tourism, deep learning, ASEAN, East Asia
    JEL: H12 P46 Z32
    Date: 2021–06–08
    URL: http://d.repec.org/n?u=RePEc:era:wpaper:dp-2021-12&r=
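    A minimal sketch of the LSTM half of the setup; the GAM component, the scenario averaging, and the actual tourism series are not reproduced, and a synthetic monthly seasonal series stands in for arrivals data. Windows of the past 12 months predict the next month.

      # Sketch only: univariate LSTM one-step-ahead forecaster.
      import numpy as np
      from tensorflow import keras

      def make_windows(series, lag=12):
          X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
          return X[..., None], series[lag:]      # (samples, lag, 1) and targets

      series = np.sin(np.arange(120) * 2 * np.pi / 12) + 0.05 * np.random.randn(120)
      X, y = make_windows(series)

      model = keras.Sequential([
          keras.layers.LSTM(32, input_shape=(12, 1)),
          keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=20, verbose=0)
      next_month = model.predict(series[-12:][None, :, None], verbose=0)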
  12. By: NARITA Yusuke; YATA Kohei
    Abstract: Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest. The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors compared to alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, where more than $10 billion worth of relief funding is allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding has little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:21057&r=
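    A toy one-dimensional version of the paper's key special case, the regression discontinuity design: eligibility is a deterministic threshold rule on an observed score, and the causal effect is the jump in outcomes at the cutoff, estimated here by local linear fits on each side. All numbers are illustrative.

      # Sketch only: local linear RDD estimate around a deterministic cutoff.
      import numpy as np

      rng = np.random.default_rng(0)
      score = rng.uniform(-1, 1, 5000)           # input to the decision rule
      treated = score >= 0.0                     # deterministic eligibility rule
      outcome = 1.0 + 0.5 * score + 2.0 * treated + rng.normal(0, 1, 5000)

      h = 0.2                                    # bandwidth around the cutoff
      left = (score > -h) & (score < 0)
      right = treated & (score < h)
      fit = lambda mask: np.polyfit(score[mask], outcome[mask], 1)
      effect = np.polyval(fit(right), 0.0) - np.polyval(fit(left), 0.0)
      print(effect)                              # close to the true jump of 2.0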
  13. By: Christian Bartelheimer (Paderborn University; Paderborn University; Hamm-Lippstadt University of Applied Sciences; Paderborn University)
    Abstract: While the Information Systems (IS) discipline has researched digital platforms extensively, the body of knowledge appertaining to platforms still appears fragmented and lacks conceptual consistency. Based on automated text mining and unsupervised machine learning, we collect, analyze, and interpret the IS discipline's comprehensive research on platforms, comprising 11,049 papers spanning 44 years of research activity. From a cluster analysis of these concepts' semantically most similar words, we identify six research streams on platforms, each with its own platform terms. Based on interpreting the identified concepts vis-à-vis the extant research and considering a temporal perspective on the concepts' application, we present a taxonomy and a lexicon of platform concepts to guide further research on platforms in the IS discipline. Researchers and managers can build on our results to position their work appropriately, applying the platform concepts that best fit their results. On a community level, we contribute to establishing a more consistent view of digital platforms as a prevalent topic in IS research that keeps advancing to new frontiers.
    Keywords: platform; text mining; machine learning; data communications; interpretive research; systems design and implementation
    JEL: C71 D85 L22
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:pdn:dispap:79&r=
  14. By: Yuga Iguchi (MUFG Bank); Riu Naito (Japan Post Insurance and Hitotsubashi University); Yusuke Okano (SMBC Nikko Securities); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Toshihiro Yamada (Graduate School of Economics, Hitotsubashi University and Japan Science and Technology Agency (JST))
    Abstract: This paper proposes a new spatial approximation method without the curse of dimensionality for solving high-dimensional partial differential equations (PDEs) by using an asymptotic expansion method with a deep learning-based algorithm. In particular, the mathematical justification of the spatial approximation is provided, and a numerical example for a 100-dimensional Kolmogorov PDE shows the effectiveness of our method.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2021cf1168&r=
  15. By: Wei, Xuan; Zhang, Zhu; Zhang, Mingyue; Chen, Weiyun; Zeng, Daniel Dajun
    Abstract: These are the online supplements to the paper "Combining Crowd and Machine Intelligence to Detect False News in Social Media", which is forthcoming in Management Information Systems Quarterly.
    Date: 2021–07–04
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:6svp9&r=
  16. By: Jonathan Heathcote; Hitoshi Tsujiyama
    Abstract: We review methods used to numerically compute optimal Mirrleesian tax and transfer schedules in heterogeneous agent economies. We show that the coarseness of the productivity grid, while a technical detail in terms of theory, is critical for delivering quantitative policy prescriptions. Existing methods are reliable only when a very fine grid is used. The problem is acute for computational approaches that use a version of the Diamond-Saez implicit optimal tax formula. If using a very fine grid for productivity is impractical, then optimizing within a flexible parametric class is preferable to the non-parametric Mirrleesian approach.
    Keywords: Ramsey taxation; Optimal income taxation; Mirrlees taxation
    JEL: H24 H21
    Date: 2021–07–30
    URL: http://d.repec.org/n?u=RePEc:fip:fedmsr:92932&r=
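    For reference, one common statement of the Diamond-Saez-type implicit formula the abstract refers to -- the Saez version with no income effects and a constant taxable-income elasticity $e$ -- is

      \frac{T'(z)}{1 - T'(z)} = \frac{1}{e} \cdot \frac{1 - F(z)}{z f(z)} \cdot \int_z^{\infty} \left(1 - g(s)\right) \frac{dF(s)}{1 - F(z)},

    where $F$ is the income distribution and $g(s)$ is the marginal social welfare weight at income $s$. The middle term divides tail mass by a local density, which is plausibly where grid coarseness bites: $f$ must be read off the productivity grid, so the prescriptions the formula delivers are sensitive to the grid's resolution.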
  17. By: Tchoffo, Rodrigue
    Abstract: The Covid-19 pandemic led to a loss of employment in many sectors of the economy around the world. This negatively affected the industrial production capacity of many countries. Since CO2 emissions are linked to production capacity, total pollution is likely to decrease. We investigate this issue by designing a simple environmental model based on partial equilibrium (PE). We test it theoretically and empirically using recent data on total contamination for four regions and countries. We then link our model to the CGE model of Hosoe et al. (2010) to capture the impact on other sectors of the economy. The final PE-CGE model is thus designed through the household consumption demand channel. Broadly, our findings show that the environmental impact of the pandemic depends on the structure of the economy: while the USA, China, and Sub-Saharan Africa reduce their CO2 emissions, those of the EU increase.
    Keywords: Partial Equilibrium, Computable General Equilibrium, Covid-19, CO2 emissions, Employment, Production
    JEL: C68 F14 Q51
    Date: 2021–07–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108920&r=
  18. By: Wei Li; Florentina Paraschiv; Georgios Sermpinis
    Abstract: The rapid development of artificial intelligence methods has contributed to their wide application in forecasting various financial risks in recent years. This study introduces a novel explainable case-based reasoning (CBR) approach that does not require rich expertise in financial risk. Compared with other black-box algorithms, the explainable CBR system allows a natural economic interpretation of results. Indeed, the empirical results emphasize the interpretability of the CBR system in predicting financial risk, which is essential for both financial companies and their customers. In addition, our results show that the proposed automatically designed CBR system has good prediction performance compared to other artificial intelligence methods, overcoming the main drawback of a standard CBR system, namely its heavy dependence on prior domain knowledge about the corresponding field.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.08808&r=
  19. By: Shubham Ekapure; Nuruddin Jiruwala; Sohan Patnaik; Indranil SenGupta
    Abstract: In this paper, we implement a combination of technical analysis and machine/deep learning-based analysis to build a trend classification model. The goal of the paper is to apprehend short-term market movement and incorporate it to improve the underlying stochastic model. The analysis presented in this paper can also be implemented in a model-independent fashion. We execute a data-science-driven technique that makes short-term forecasts dependent on the price trends of current stock market data. Based on the analysis, three different labels are generated for a data set: $+1$ (buy signal), $0$ (hold signal), or $-1$ (sell signal). We propose a detailed analysis of four major stocks: Amazon, Apple, Google, and Microsoft. We implement various technical indicators to label the data set according to the trend and train various models for trend estimation. We present a statistical analysis of the outputs and the classification results.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.14695&r=
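    A hedged sketch of the labeling step the abstract describes, with a single moving-average crossover standing in for the paper's set of technical indicators; the windows and thresholds are illustrative.

      # Sketch only: label prices +1 (buy), 0 (hold), -1 (sell) from a trend rule.
      import numpy as np
      import pandas as pd

      prices = pd.Series(np.cumsum(np.random.randn(500)) + 100)
      fast = prices.rolling(10).mean()
      slow = prices.rolling(50).mean()

      labels = pd.Series(0, index=prices.index)  # 0 = hold by default
      labels[fast > slow * 1.01] = 1             # +1 = buy: clear uptrend
      labels[fast < slow * 0.99] = -1            # -1 = sell: clear downtrend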
  20. By: Antonino Claudio Bonan (ARPAV - Agenzia Regionale per la Prevenzione e la Protezione Ambientale del Veneto)
    Abstract: In an economy viewed as a quantum system working as a circuit, each process at the microscale is a quantum gate among agents. The global configuration of the economy is addressed by optimizing the sustainability of the whole circuit. This is done in terms of geodesics, starting from some approximations. A similar yet distinct approach is applied both to the whole as a closed system and to the economy as an open system. Computations may be partly explicit, especially when reality is represented in a simplified way. The circuit can also be optimized by minimizing its complexity, with a partly similar formalism, yet generally not along the same paths.
    Keywords: Geometric optimization,Econophysics,Quantum economics,Quantum computation
    Date: 2021–07–20
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03291997&r=
  21. By: Wen Su
    Abstract: This paper focuses on pricing exchange options based on copulas via an MCMC algorithm. We first introduce the relevant methodologies: risk-neutral pricing, copulas, and the MCMC algorithm. We then compare the option prices given by different models; the results show that, except for the Gumbel copula, the models provide similar estimates.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.10225&r=
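    A minimal pricing sketch in the spirit of the abstract, with plain Monte Carlo sampling standing in for the paper's MCMC algorithm: dependence between the two assets comes from a Gaussian copula, margins are risk-neutral lognormals, and the exchange-option payoff is max(S1 - S2, 0). All parameter values are illustrative.

      # Sketch only: exchange option priced under a Gaussian copula.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n, r, T = 100_000, 0.01, 1.0
      S0, sigma, rho = np.array([100.0, 95.0]), np.array([0.2, 0.25]), 0.5

      # Gaussian copula: correlated uniforms via correlated standard normals.
      cov = np.array([[1.0, rho], [rho, 1.0]])
      Z = rng.multivariate_normal(np.zeros(2), cov, size=n)
      U = norm.cdf(Z)

      # Lognormal risk-neutral margins for each asset at maturity T.
      ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * norm.ppf(U))
      price = np.exp(-r * T) * np.maximum(ST[:, 0] - ST[:, 1], 0).mean()
      print(price)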
  22. By: Poutré, Cédric (Université de Montréal); Dionne, Georges (HEC Montreal, Canada Research Chair in Risk Management); Yergeau, Gabriel (HEC Montreal, Canada Research Chair in Risk Management)
    Abstract: We explore latency arbitrage activities with a new arbitrage strategy that we test with high-frequency data during the first six months of 2019. We study the profitability of mean-reverting arbitrage activities of 74 cross-listed stocks involving three exchanges in Canada and the United States. Our arbitrage strategy is a hybrid between triangular arbitrage and pairs trading. We synchronize the high-frequency data feeds from the three exchange venues considering explicitly the latency that comes from the transportation of information between the exchanges and its treatment time. Other trading costs and arbitrage risks are also considered. The annual net profit of an HFT firm that uses limit orders is around CAD $8 million (USD $6 million), a result that we consider reasonable when compared with the previous literature. International latency arbitrage with market orders is never profitable.
    Keywords: Latency arbitrage; cross-listed stock; high-frequency trading; limit order; market order; synthetic hedging instrument; mean-reverting arbitrage; international arbitrage; supervised machine learning
    JEL: G02 G10 G11 G14 G15 G22
    Date: 2021–07–20
    URL: http://d.repec.org/n?u=RePEc:ris:crcrmw:2021_004&r=
  23. By: João A. Bastos; Jorge Caiado
    Abstract: We compare a data-driven, domain-agnostic set of canonical features with a smaller collection of features that capture well-known stylized facts about financial asset returns. We show that these stylized facts discriminate between asset types better than general-purpose features. Financial time series analysis is therefore a domain where well-informed expert knowledge should not be disregarded in favor of agnostic representations of the data.
    Keywords: Financial economics, Time series, Clustering, Classification, Machine learning
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:ise:remwps:wp01852021&r=
  24. By: Chang, Sungyong (London Business School); Kim, Hyunseob (Jackson State University); Song, Jaeyong; Lee, Keun
    Abstract: We examine the role of latecomers’ optimal resource allocation between innovation and imitation in latecomers’ catch-up under diverse technological regimes. Building on Nelson and Winter (1982), we develop computational models of technological leadership change. The results suggest that one-sided dependency upon either imitation or innovation deters technological leadership change. At an early stage with low-level technologies, latecomers should focus on imitation; then, as the technological gap decreases, they should allocate more R&D resources to innovation. We also examine the role of several variables, such as appropriability, cumulativeness, and cycle time of technologies (CTT), as related to technological regimes. The simulation results show that while low appropriability tends to increase the probability of technological leadership change, it makes imitation a more effective strategy compared to innovation; in addition, while a higher level of cumulativeness tends to reduce the probability of leadership change, it makes imitation a more valuable option because innovation becomes more difficult for latecomers. We also find an inverted U-shaped relationship between the CTT and the probability of technological leadership change. When the CTT is short, it makes sense for latecomers to allocate more resources to imitation, especially when their technology level is initially low.
    Date: 2021–07–29
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:b8fae&r=
  25. By: Szymon Sacher; Laura Battaglia; Stephen Hansen
    Abstract: Latent variable models are becoming increasingly popular in economics for high-dimensional categorical data such as text and surveys. Often the resulting low-dimensional representations are plugged into downstream econometric models that ignore the statistical structure of the upstream model, which presents serious challenges for valid inference. We show how Hamiltonian Monte Carlo (HMC) implemented with parallelized automatic differentiation provides a computationally efficient, easy-to-code, and statistically robust solution for this problem. Via a series of applications, we show that modeling integrated structure can non-trivially affect inference and that HMC appears to markedly outperform current approaches to inference in integrated models.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.08112&r=
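    A self-contained sketch of a single HMC transition (leapfrog integration plus a Metropolis accept/reject), the sampler the paper scales up with parallelized automatic differentiation; here the target is a toy standard normal with a hand-coded gradient rather than a latent variable model.

      # Sketch only: one HMC step with a leapfrog integrator.
      import numpy as np

      rng = np.random.default_rng(0)

      def logp(q):       return -0.5 * np.sum(q**2)   # toy target density
      def grad_logp(q):  return -q

      def hmc_step(q, step=0.1, n_leapfrog=20):
          p = rng.normal(size=q.shape)                # resample momentum
          q_new, p_new = q.copy(), p.copy()
          p_new += 0.5 * step * grad_logp(q_new)      # half step for momentum
          for _ in range(n_leapfrog - 1):
              q_new += step * p_new                   # full step for position
              p_new += step * grad_logp(q_new)        # full step for momentum
          q_new += step * p_new
          p_new += 0.5 * step * grad_logp(q_new)      # final half step
          # Metropolis correction on the joint (position, momentum) energy.
          log_accept = (logp(q_new) - 0.5 * p_new @ p_new) - (logp(q) - 0.5 * p @ p)
          return q_new if np.log(rng.uniform()) < log_accept else q

      q = np.zeros(50)                                # 50-dimensional toy chain
      samples = []
      for _ in range(1000):
          q = hmc_step(q)
          samples.append(q.copy())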
  26. By: Denkenberger, David; Sandberg, Anders; Tieman, Ross; Pearce, Joshua M. (Michigan Technological University)
    Abstract: Global agricultural catastrophes, which include nuclear winter and abrupt climate change, could have long-term consequences for humanity, such as the collapse and non-recovery of civilization. Using Monte Carlo (probabilistic) models, we analyze the long-term cost-effectiveness of resilient foods (alternative foods) - roughly those independent of sunlight, such as mushrooms. One version of the model, populated partly by a survey of global catastrophic risk researchers, finds the confidence that resilient foods are more cost-effective than artificial general intelligence safety is ~86% and ~99% for the 100 millionth dollar spent on resilient foods at the margin now, respectively. Another version of the model, based on one of the authors, produced ~95% and ~99% confidence, respectively. Considering the uncertainty represented within our models, our result is robust: reverting the conclusion required simultaneously changing the 3-5 most important parameters to their pessimistic ends. However, as predicting the long-run trajectory of human civilization is extremely difficult, and model and theory uncertainties are very large, this significantly reduces our overall confidence. Because agricultural catastrophes could happen immediately and because existing expertise relevant to resilient foods could be mobilized through charitable giving, it is likely optimal to spend most of the money for resilient foods in the next few years. Both cause areas generally save expected current lives inexpensively and should attract greater investment.
    Date: 2021–07–28
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:vrmpf&r=

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.