nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒01‒17
twenty-six papers chosen by
Stan Miles
Thompson Rivers University

  1. Efficient differentiable quadratic programming layers: an ADMM approach By Andrew Butler; Roy Kwon
  2. Estimation of the weather-yield nexus with Artificial Neural Networks By Schmidt, Lorenz; Odening, Martin; Ritter, Matthias
  3. Machine Learning for Predicting Stock Return Volatility By Damir Filipović; Amir Khalilzadeh
  4. Solving the Data Sparsity Problem in Predicting the Success of the Startups with Machine Learning Methods By Dafei Yin; Jing Li; Gaosheng Wu
  5. DeepHAM: A Global Solution Method for Heterogeneous Agent Models with Aggregate Shocks By Jiequn Han; Yucheng Yang; Weinan E
  6. "An application of deep learning for exchange rate forecasting" By Oscar Claveria; Enric Monte; Petar Soric; Salvador Torra
  7. The Virtue of Complexity in Machine Learning Portfolios By Bryan T. Kelly; Semyon Malamud; Kangying Zhou
  8. Reinforcement Learning with Dynamic Convex Risk Measures By Anthony Coache; Sebastian Jaimungal
  9. Efficient Calibration of Multi-Agent Market Simulators from Time Series with Bayesian Optimization By Yuanlu Bai; Henry Lam; Svitlana Vyetrenko; Tucker Balch
  10. NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-task Financial Forecasting By Linyi Yang; Jiazheng Li; Ruihai Dong; Yue Zhang; Barry Smyth
  11. FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance By Xiao-Yang Liu; Jingyang Rui; Jiechao Gao; Liuqing Yang; Hongyang Yang; Zhaoran Wang; Christina Dan Wang; Jian Guo
  12. A fast Monte Carlo scheme for additive processes and option pricing By Michele Azzone; Roberto Baviera
  13. Efficient Estimation of Average Derivatives in NPIV Models: Simulation Comparisons of Neural Network Estimators By Jiafeng Chen; Xiaohong Chen; Elie Tamer
  14. Combining Reinforcement Learning and Inverse Reinforcement Learning for Asset Allocation Recommendations By Igor Halperin; Jiayu Liu; Xiao Zhang
  15. Ensemble methods for credit scoring of Chinese peer-to-peer loans By Wei Cao; Yun He; Wenjun Wang; Weidong Zhu; Yves Demazeau
  16. A level-set approach to the control of state-constrained McKean-Vlasov equations: application to renewable energy storage and portfolio selection By Maximilien Germain; Huyên Pham; Xavier Warin
  17. How Polarized are Citizens? Measuring Ideology from the Ground-Up By Draca, Mirko; Schwarz, Carlo
  18. Who should get vaccinated? Individualized allocation of vaccines over SIR network By Toru Kitagawa; Guanyi Wang
  19. TransBoost: A Boosting-Tree Kernel Transfer Learning Algorithm for Improving Financial Inclusion By Yiheng Sun; Tian Lu; Cong Wang; Yuan Li; Huaiyu Fu; Jingran Dong; Yunjie Xu
  20. Labour market projections and time allocation in Myanmar: Application of a new computable general equilibrium (CGE) model By Henning Tarp Jensen; Marcus Keogh-Brown; Finn Tarp
  21. Trading with the Momentum Transformer: An Intelligent and Interpretable Architecture By Kieran Wood; Sven Giegerich; Stephen Roberts; Stefan Zohren
  22. Multivariate Realized Volatility Forecasting with Graph Neural Network By Qinkai Chen; Christian-Yann Robert
  23. Robustness, Heterogeneous Treatment Effects and Covariate Shifts By Pietro Emilio Spini
  24. Portfolio optimization under mean-CVaR simulation with copulas on the Vietnamese stock exchange By Le, Tuan Anh; Dao, Thi Thanh Binh
  25. Measuring counterparty risk in FMIs By Laine, Tatu; Korpinen, Kasperi
  26. Can Artificial Intelligence Reduce Regional Inequality? Evidence from China By Li, Shiyuan; Hao, Miao

  1. By: Andrew Butler; Roy Kwon
    Abstract: Recent advances in neural-network architecture allow for seamless integration of convex optimization problems as differentiable layers in an end-to-end trainable neural network. Integrating medium- and large-scale quadratic programs into a deep neural network architecture, however, is challenging, as solving quadratic programs exactly by interior-point methods has worst-case cubic complexity in the number of variables. In this paper, we present an alternative network layer architecture based on the alternating direction method of multipliers (ADMM) that is capable of scaling to problems with a moderately large number of variables. Backward differentiation is performed by implicit differentiation of the residual map of a modified fixed-point iteration. Simulated results demonstrate the computational advantage of the ADMM layer, which for medium-scale problems is approximately an order of magnitude faster than the OptNet quadratic programming layer. Furthermore, our novel backward-pass routine is efficient, from both a memory and computation standpoint, in comparison to the standard approach based on unrolled differentiation or implicit differentiation of the KKT optimality conditions. We conclude with examples from portfolio optimization in the integrated prediction and optimization paradigm.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.07464&r=
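The core object in the paper above is an ADMM fixed-point iteration for a quadratic program, whose residual map is then differentiated implicitly. As a rough illustration of why ADMM scales (a cheap linear solve and a projection per iteration, versus a cubic-cost interior-point solve), here is a minimal sketch for a box-constrained QP — the formulation, names, and parameters are assumptions for illustration, not the authors' layer:

```python
import numpy as np

def admm_box_qp(Q, q, lb, ub, rho=1.0, iters=500):
    """Minimize 0.5*x'Qx + q'x subject to lb <= x <= ub via ADMM.

    Illustrative sketch of the kind of splitting iteration the
    abstract refers to; the paper differentiates through the
    residual map of a modified version of such an iteration.
    """
    n = len(q)
    M = Q + rho * np.eye(n)  # could be factored once and reused
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, rho * (z - u) - q)  # x-update (linear solve)
        z = np.clip(x + u, lb, ub)                 # z-update (projection)
        u = u + x - z                              # scaled dual update
    return z

# Unconstrained optimum of 0.5*||x||^2 - [1,2]'x is [1, 2];
# the upper bound of 1.5 binds on the second coordinate.
sol = admm_box_qp(np.eye(2), np.array([-1.0, -2.0]),
                  lb=np.zeros(2), ub=np.full(2, 1.5))
```

Each iteration touches only a pre-factorable linear system and an elementwise clip, which is what makes the approach attractive inside a training loop.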
  2. By: Schmidt, Lorenz; Odening, Martin; Ritter, Matthias
    Abstract: Weather is a pivotal factor for crop production as it is highly volatile and can hardly be controlled by farm management practices. Since there is a tendency towards increased weather extremes in the future, understanding the weather-related yield factors becomes increasingly important not only for yield prediction, but also for the design of insurance products that mitigate financial losses for farmers, but suffer from considerable basis risk. In this study, an artificial neural network is set up and calibrated to a rich set of farm-level yield data in Germany covering the period from 2003 to 2018. A nonlinear regression model, which uses rainfall, temperature, and soil moisture as explanatory variables for yield deviations, serves as a benchmark. The empirical application reveals that the gain in forecasting precision by using machine learning techniques compared with traditional estimation approaches is substantial and that the use of regionalized models and disaggregated high-resolution weather data improve the performance of artificial neural networks.
    Keywords: Agricultural Finance, Crop Production/Industries, Food Security and Poverty, Research and Development/Tech Change/Emerging Technologies
    Date: 2021–09–21
    URL: http://d.repec.org/n?u=RePEc:ags:haaewp:316598&r=
  3. By: Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Amir Khalilzadeh (Ecole Polytechnique Fédérale de Lausanne)
    Abstract: We use machine learning methods to predict stock return volatility. Our out-of-sample prediction of realised volatility for a large cross-section of US stocks over the sample period from 1992 to 2016 is on average 44.1% against the actual realised volatility of 43.8%, with an R2 as high as double those reported in the literature. We further show that machine learning methods can capture the stylized facts about volatility without relying on any assumption about the distribution of stock returns. Finally, we show that our long short-term memory model outperforms other models by properly carrying information from the past predictor values.
    Keywords: Volatility Prediction, Volatility Clustering, LSTM, Neural Networks, Regression Trees.
    JEL: C51 C52 C53 C58 G17
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2195&r=
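The prediction target in studies like the one above is realized volatility, conventionally computed from squared returns. A minimal version follows; the window length, the use of log returns, and the annualization factor of 252 trading days are common conventions, not details taken from this paper:

```python
import numpy as np

def realized_vol(returns, periods_per_year=252):
    """Annualized realized volatility over a window of (log) returns.

    A standard definition of the target variable in volatility
    forecasting; exact conventions (window, demeaning) vary by study.
    """
    r = np.asarray(returns, dtype=float)
    return np.sqrt(periods_per_year * np.mean(r ** 2))

# A constant daily move of 1% annualizes to sqrt(252) * 1%, about 15.9%.
vol = realized_vol(np.full(21, 0.01))
```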
  4. By: Dafei Yin; Jing Li; Gaosheng Wu
    Abstract: Predicting the success of startup companies is of great importance for both startup companies and investors. It is difficult due to the lack of available data and appropriate general methods. With data platforms like Crunchbase aggregating the information of startup companies, it is possible to predict with machine learning algorithms. Existing research suffers from the data sparsity problem as most early-stage startup companies do not have much data available to the public. We try to leverage the recent algorithms to solve this problem. We investigate several machine learning algorithms with a large dataset from Crunchbase. The results suggest that LightGBM and XGBoost perform best and achieve 53.03% and 52.96% F1 scores. We interpret the predictions from the perspective of feature contribution. We construct portfolios based on the models and achieve high success rates. These findings have substantial implications on how machine learning methods can help startup companies and investors.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.07985&r=
  5. By: Jiequn Han; Yucheng Yang; Weinan E
    Abstract: We propose an efficient, reliable, and interpretable global solution method, $\textit{Deep learning-based algorithm for Heterogeneous Agent Models, DeepHAM}$, for solving high dimensional heterogeneous agent models with aggregate shocks. The state distribution is approximately represented by a set of optimal generalized moments. Deep neural networks are used to approximate the value and policy functions, and the objective is optimized over directly simulated paths. Besides being an accurate global solver, this method has three additional features. First, it is computationally efficient for solving complex heterogeneous agent models, and it does not suffer from the curse of dimensionality. Second, it provides a general and interpretable representation of the distribution over individual states; and this is important for addressing the classical question of whether and how heterogeneity matters in macroeconomics. Third, it solves the constrained efficiency problem as easily as the competitive equilibrium, and this opens up new possibilities for studying optimal monetary and fiscal policies in heterogeneous agent models with aggregate shocks.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.14377&r=
  6. By: Oscar Claveria (AQR IREA, University of Barcelona (UB). Department of Econometrics, Statistics and Applied Economics, University of Barcelona, Diagonal 690, 08034 Barcelona, Spain. Tel.: +34-934021825); Enric Monte (Department of Signal Theory and Communications, Polytechnic University of Catalunya (UPC)); Petar Soric (Faculty of Economics & Business University of Zagreb.); Salvador Torra (Riskcenter–IREA, University of Barcelona (UB).)
    Abstract: This paper examines the performance of several state-of-the-art deep learning techniques for exchange rate forecasting (deep feedforward network, convolutional network and a long short-term memory). On the one hand, the configuration of the different architectures is clearly detailed, as well as the tuning of the parameters and the regularisation techniques used to avoid overfitting. On the other hand, we design an out-of-sample forecasting experiment and evaluate the accuracy of three different deep neural networks to predict the US/UK foreign exchange rate in the days after the Brexit took effect. Of the three configurations, we obtain the best results with the deep feedforward architecture. When comparing the deep learning networks to time-series models used as a benchmark, the obtained results are highly dependent on the specific topology used in each case. Thus, although the three architectures generate more accurate predictions than the time-series models, the results vary considerably depending on the specific topology. These results hint at the potential of deep learning techniques, but they also highlight the importance of properly configuring, implementing and selecting the different topologies.
    Keywords: Forecasting, Exchange rates, Deep learning, Deep neural networks, Convolutional networks, Long short-term memory
    JEL: C45 C58 E47 F31 G17
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:ira:wpaper:202201&r=
  7. By: Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Kangying Zhou (Yale School of Management)
    Abstract: We theoretically characterize the behavior of machine learning portfolios in the high complexity regime, i.e. when the number of parameters exceeds the number of observations. We demonstrate a surprising "virtue of complexity": Sharpe ratios of machine learning portfolios generally increase with model parameterization, even with minimal regularization. Empirically, we document the virtue of complexity in US equity market timing strategies. High complexity models deliver economically large and statistically significant out-of-sample portfolio gains relative to simpler models, due in large part to their remarkable ability to predict recessions.
    Keywords: Portfolio choice, machine learning, random matrix theory, benign overfit, overparameterization
    JEL: C3 C58 C61 G11 G12 G14
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2190&r=
  8. By: Anthony Coache; Sebastian Jaimungal
    Abstract: We develop an approach for solving time-consistent risk-sensitive stochastic optimization problems using model-free reinforcement learning (RL). Specifically, we assume agents assess the risk of a sequence of random variables using dynamic convex risk measures. We employ a time-consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules. We further develop an actor-critic style algorithm using neural networks to optimize over policies. Finally, we demonstrate the performance and flexibility of our approach by applying it to optimization problems in statistical arbitrage trading and obstacle avoidance robot control.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.13414&r=
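For readers unfamiliar with convex risk measures, the canonical (static) example is Conditional Value-at-Risk: the expected loss in the worst alpha-fraction of outcomes. The paper works with dynamic, recursively-evaluated versions; the empirical sketch below is only the static building block, with the tail convention an assumption:

```python
import numpy as np

def cvar(losses, alpha=0.2):
    """Empirical Conditional Value-at-Risk: the mean of the worst
    alpha-fraction of losses. CVaR is a standard example of the
    convex risk measures the abstract refers to (a static one)."""
    sorted_losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_losses))))
    return sorted_losses[-k:].mean()  # average over the k largest losses

worst = cvar(np.arange(1.0, 11.0), alpha=0.2)  # mean of the worst 2 of 10
```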
  9. By: Yuanlu Bai; Henry Lam; Svitlana Vyetrenko; Tucker Balch
    Abstract: Multi-agent market simulation is commonly used to create an environment for downstream machine learning or reinforcement learning tasks, such as training or testing trading strategies before deploying them to real-time trading. In electronic trading markets, only the price or volume time series, which result from the interaction of multiple market participants, are typically directly observable. Therefore, multi-agent market environments need to be calibrated so that the time series that result from the interaction of simulated agents resemble the historical ones -- which amounts to solving a highly complex large-scale optimization problem. In this paper, we propose a simple and efficient framework for calibrating multi-agent market simulator parameters from historical time series observations. First, we consider a novel concept of eligibility set to bypass the potential non-identifiability issue. Second, we generalize the two-sample Kolmogorov-Smirnov (K-S) test with Bonferroni correction to test the similarity between two high-dimensional time series distributions, which gives a simple yet effective distance metric between the time series sample sets. Third, we suggest using Bayesian optimization (BO) and trust-region BO (TuRBO) to minimize the aforementioned distance metric. Finally, we demonstrate the efficiency of our framework using numerical experiments.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.03874&r=
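The distance metric in the abstract builds on the two-sample Kolmogorov-Smirnov statistic applied coordinate-by-coordinate. A minimal sketch of that idea follows — the aggregation by a max over coordinates is an assumption for illustration; the paper additionally applies a Bonferroni correction when turning per-coordinate statistics into a joint test:

```python
import numpy as np

def ks_2samp_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

def ts_distance(A, B):
    """Distance between two sample sets of multivariate series
    (rows = samples, columns = coordinates/time steps): the worst
    per-coordinate K-S statistic. An illustrative sketch only."""
    return max(ks_2samp_stat(A[:, j], B[:, j]) for j in range(A.shape[1]))

rng = np.random.default_rng(0)
same = ts_distance(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
far = ts_distance(rng.normal(size=(100, 3)), rng.normal(5.0, 1.0, size=(100, 3)))
```

Because the statistic is a bounded, simulation-free functional of the samples, it is cheap enough to serve as the objective inside a Bayesian optimization loop.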
  10. By: Linyi Yang; Jiazheng Li; Ruihai Dong; Yue Zhang; Barry Smyth
    Abstract: Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy or forecasting may entail. Traditionally, financial forecasting has heavily relied on quantitative indicators and metrics derived from structured financial statements. Earnings conference call data, including text and audio, is an important source of unstructured data that has been used for various prediction tasks using deep learning and related approaches. However, current deep learning-based methods are limited in the way that they deal with numeric data; numbers are typically treated as plain-text tokens without taking advantage of their underlying numeric structure. This paper describes a numeric-oriented hierarchical transformer model to predict stock returns and financial risk using multi-modal aligned earnings calls data by taking advantage of the different categories of numbers (monetary, temporal, percentages, etc.) and their magnitude. We present the results of a comprehensive evaluation of NumHTML against several state-of-the-art baselines using a real-world publicly available dataset. The results indicate that NumHTML significantly outperforms the current state-of-the-art across a variety of evaluation metrics and that it has the potential to offer significant financial gains in a practical trading context.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.01770&r=
  11. By: Xiao-Yang Liu; Jingyang Rui; Jiechao Gao; Liuqing Yang; Hongyang Yang; Zhaoran Wang; Christina Dan Wang; Jian Guo
    Abstract: Deep reinforcement learning (DRL) has shown huge potential in building financial market simulators recently. However, due to the highly complex and dynamic nature of real-world markets, raw historical financial data often involve large noise and may not reflect the future of markets, degrading the fidelity of DRL-based market simulators. Moreover, the accuracy of DRL-based market simulators heavily relies on numerous and diverse DRL agents, which increases demand for a universe of market environments and imposes a challenge on simulation speed. In this paper, we present a FinRL-Meta framework that builds a universe of market environments for data-driven financial reinforcement learning. First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy and provides open-source data engineering tools for financial big data. Second, FinRL-Meta provides hundreds of market environments for various trading tasks. Third, FinRL-Meta enables multiprocessing simulation and training by exploiting thousands of GPU cores. Our codes are available online at https://github.com/AI4Finance-Foundation/FinRL-Meta.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.06753&r=
  12. By: Michele Azzone; Roberto Baviera
    Abstract: In this paper, we present a fast Monte Carlo scheme for additive processes. We analyze in detail numerical error sources and propose a technique that reduces the two major sources of error. We also compare our results with a benchmark method: the jump simulation with Gaussian approximation. We show an application to additive normal tempered stable processes, a class of additive processes that calibrates "exactly" the implied volatility surface. Numerical results are relevant. The algorithm is an accurate tool for pricing path-dependent, discretely-monitored options with errors of one bp or below. The scheme is also fast: the computational time is of the same order of magnitude as that of standard algorithms for Brownian motions.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.08291&r=
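The Brownian baseline the abstract benchmarks against is standard Monte Carlo option pricing, where accuracy can be checked against a closed form. The sketch below prices a European call under geometric Brownian motion with antithetic variates — a generic baseline with parameters chosen for illustration, not the paper's additive-process scheme:

```python
import math
import numpy as np

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price (closed form), used as a check."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Plain Monte Carlo under geometric Brownian motion with
    antithetic variates -- a baseline, not the paper's scheme."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths // 2)
    z = np.concatenate([z, -z])  # antithetic pairs halve the variance cheaply
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * np.maximum(ST - K, 0).mean()

exact = bs_call(100, 100, 0.0, 0.2, 1.0)
approx = mc_call(100, 100, 0.0, 0.2, 1.0)
```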
  13. By: Jiafeng Chen (Department of Economics, Harvard University); Xiaohong Chen (Cowles Foundation, Yale University); Elie Tamer (Harvard University)
    Abstract: Artificial Neural Networks (ANNs) can be viewed as nonlinear sieves that can approximate complex functions of high dimensional variables more effectively than linear sieves. We investigate the computational performance of various ANNs in nonparametric instrumental variables (NPIV) models of moderately high dimensional covariates that are relevant to empirical economics. We present two efficient procedures for estimation and inference on a weighted average derivative (WAD): an orthogonalized plug-in with optimally-weighted sieve minimum distance (OP-OSMD) procedure and a sieve efficient score (ES) procedure. Both estimators for WAD use ANN sieves to approximate the unknown NPIV function and are root-n asymptotically normal and first-order equivalent. We provide a detailed practitioner's recipe for implementing both efficient procedures. This involves the choice of tuning parameters for the unknown NPIV, the conditional expectations and the optimal weighting function that are present in both procedures, but also the choice of tuning parameters for the unknown Riesz representer in the ES procedure. We compare their finite-sample performances in various simulation designs that involve smooth NPIV functions of up to 13 continuous covariates, different nonlinearities and covariate correlations. Some Monte Carlo findings include: 1) tuning and optimization are more delicate in ANN estimation; 2) given proper tuning, both ANN estimators with various architectures can perform well; 3) ANN OP-OSMD estimators are easier to tune than ANN ES estimators; 4) stable inferences are more difficult to achieve with ANN (than spline) estimators; 5) there are gaps between current implementations and approximation theories. Finally, we apply ANN NPIV to estimate average partial derivatives in two empirical demand examples with multivariate covariates.
    Keywords: Artificial neural networks, ReLU, Sigmoid, Nonparametric instrumental variables, Weighted average derivatives, Optimal sieve minimum distance, Efficient influence, Semiparametric efficiency, Endogenous demand
    JEL: C14 C22
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2319&r=
  14. By: Igor Halperin; Jiayu Liu; Xiao Zhang
    Abstract: We suggest a simple practical method to combine the human and artificial intelligence to both learn best investment practices of fund managers, and provide recommendations to improve them. Our approach is based on a combination of Inverse Reinforcement Learning (IRL) and RL. First, the IRL component learns the intent of fund managers as suggested by their trading history, and recovers their implied reward function. At the second step, this reward function is used by a direct RL algorithm to optimize asset allocation decisions. We show that our method is able to improve over the performance of individual fund managers.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.01874&r=
  15. By: Wei Cao (HFUT - Hefei University of Technology); Yun He (HFUT - Hefei University of Technology); Wenjun Wang (HFUT - Hefei University of Technology); Weidong Zhu (HFUT - Hefei University of Technology); Yves Demazeau (LIG - Laboratoire d'Informatique de Grenoble - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes)
    Abstract: Risk control is a central issue for Chinese peer-to-peer (P2P) lending services. Although credit scoring has drawn much research interest and the superiority of ensemble models over single machine learning models has been proven, the question of which ensemble model is the best discrimination method for Chinese P2P lending services has received little attention. This study aims to conduct credit scoring by focusing on a Chinese P2P lending platform and selecting the optimal subset of features in order to find the best overall ensemble model. We propose a hybrid system to achieve these goals. Three feature selection algorithms are employed and combined to obtain the top 10 features. Six ensemble models with five base classifiers are then used to conduct comparisons after synthetic minority oversampling technique (SMOTE) treatment of the imbalanced data set. A real-world data set of 33 966 loans from the largest lending platform in China (ie, the Renren lending platform) is used to evaluate performance. The results show that the top 10 selected features can greatly improve performance compared with all features, particularly in terms of discriminating "bad" loans from "good" loans. Moreover, comparing the standard
    Keywords: credit scoring,ensemble learning,feature selection,synthetic minority oversampling technique (SMOTE) treatment,Chinese peer-to-peer (P2P) lending
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03434348&r=
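The SMOTE treatment named in the abstract synthesizes new minority-class observations by interpolating between a minority point and one of its nearest minority neighbours. Here is a minimal generic sketch of that idea (the neighbour count, random seed, and data are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE: create synthetic minority-class samples by
    interpolating between a minority point and one of its k nearest
    minority neighbours. A generic sketch of the standard technique."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.uniform()                     # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(size=(10, 4))
oversampled = smote(minority, n_new=30)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled set stays inside the minority class's observed range, unlike naive duplication, which only reweights existing points.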
  16. By: Maximilien Germain (EDF R&D OSIRIS - Optimisation, Simulation, Risque et Statistiques pour les Marchés de l’Energie - EDF R&D - EDF R&D - EDF - EDF, EDF R&D - EDF R&D - EDF - EDF, EDF - EDF, LPSM (UMR_8001) - Laboratoire de Probabilités, Statistiques et Modélisations - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UP - Université de Paris); Huyên Pham (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistiques et Modélisations - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UP - Université de Paris, CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique, FiME Lab - Laboratoire de Finance des Marchés d'Energie - EDF R&D - EDF R&D - EDF - EDF - CREST - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres); Xavier Warin (EDF R&D OSIRIS - Optimisation, Simulation, Risque et Statistiques pour les Marchés de l’Energie - EDF R&D - EDF R&D - EDF - EDF, EDF R&D - EDF R&D - EDF - EDF, EDF - EDF, FiME Lab - Laboratoire de Finance des Marchés d'Energie - EDF R&D - EDF R&D - EDF - EDF - CREST - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres)
    Abstract: We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which provides a representation of the constrained problem in terms of an unconstrained one with exact penalization and running maximum or integral cost. The method is then extended to the common noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J. Control Optim. 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl. Math. Optim. 71 (2015), pp. 125–163) to a mean-field setting. The reformulation as an unconstrained problem is particularly suitable for the numerical resolution of the problem, which is achieved via an extension of a machine learning algorithm from (Carmona, Laurière, arXiv:1908.01613, to appear in Ann. Appl. Prob., 2019). A first application concerns the storage of renewable electricity in the presence of mean-field price impact and another one focuses on a mean-variance portfolio selection problem with probabilistic constraints on the wealth. We also illustrate our approach for a direct numerical resolution of the primal Markowitz continuous-time problem without relying on duality.
    Keywords: mean-field control,state constraints,neural networks
    Date: 2021–12–20
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03498263&r=
  17. By: Draca, Mirko (Department of Economics, University of Warwick and Centre for Economic Performance, LSE); Schwarz, Carlo (Department of Economics, University of Warwick and Centre for Competitive Advantage in the Global Economy (CAGE))
    Abstract: Strong evidence has been emerging that major democracies have become more politically polarized, at least according to measures based on the ideological positions of political elites. We ask: have the general public (`citizens') followed the same pattern? Our approach is based on unsupervised machine learning models as applied to issue position survey data. This approach firstly indicates that coherent, latent ideologies are strongly apparent in the data, with a number of major, stable types that we label as: Liberal Centrist, Conservative Centrist, Left Anarchist and Right Anarchist. Using this framework, and a resulting measure of `citizen slant', we are then able to decompose the shift in ideological positions across the population over time. Specifically, we find evidence of a `disappearing center' in a range of countries with citizens shifting away from centrist ideologies into anti-establishment `anarchist' ideologies over time. This trend is especially pronounced for the US.
    Keywords: Polarization; Ideology; Unsupervised Learning
    JEL: D72 C81
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:wrk:wqapec:07&r=
  18. By: Toru Kitagawa (Institute for Fiscal Studies and University College London); Guanyi Wang (Institute for Fiscal Studies)
    Abstract: How to allocate vaccines over heterogeneous individuals is one of the important policy decisions in pandemic times. This paper develops a procedure to estimate an individualized vaccine allocation policy under limited supply, exploiting social network data containing individual demographic characteristics and health status. We model the spillover effects of vaccination based on a Heterogeneous-Interacted-SIR network model and estimate an individualized vaccine allocation policy by maximizing an estimated social welfare (public health) criterion incorporating these spillovers. While this optimization problem is generally an NP-hard integer optimization problem, we show that the SIR structure leads to a submodular objective function, and provide a computationally attractive greedy algorithm for approximating a solution that has a theoretical performance guarantee. Moreover, we characterise a finite sample welfare regret bound and examine how its uniform convergence rate depends on the complexity and riskiness of the social network. In the simulation, we illustrate the importance of considering spillovers by comparing our method with targeting without network information.
    Date: 2021–07–20
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:28/21&r=
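The abstract's key computational observation is that the SIR structure makes the welfare objective submodular, so a greedy algorithm carries a (1 - 1/e) approximation guarantee. A generic sketch of greedy maximization of a monotone submodular set function under a cardinality budget, illustrated with a coverage function (a classic submodular example — this toy allocation is an assumption, not the paper's algorithm):

```python
def greedy_max(ground, budget, value):
    """Greedy maximization of a monotone submodular set function
    `value` under a cardinality budget. For such objectives the
    greedy solution is within a (1 - 1/e) factor of the optimum --
    the guarantee the abstract invokes. Illustrative sketch only."""
    chosen = set()
    for _ in range(budget):
        # Pick the element with the largest marginal gain.
        best = max((g for g in ground if g not in chosen),
                   key=lambda g: value(chosen | {g}) - value(chosen))
        chosen.add(best)
    return chosen

# Coverage: value(S) = number of individuals covered by the chosen groups.
covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
value = lambda S: len(set().union(*(covers[g] for g in S)) if S else set())
picked = greedy_max(covers, budget=2, value=value)
```

Greedy first takes the group covering the most individuals, then repeatedly the best marginal addition; for coverage this is exactly the textbook set-cover greedy.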
  19. By: Yiheng Sun; Tian Lu; Cong Wang; Yuan Li; Huaiyu Fu; Jingran Dong; Yunjie Xu
    Abstract: The prosperity of mobile and financial technologies has bred and expanded various kinds of financial products to a broader scope of people, which contributes to advocating financial inclusion. It has non-trivial social benefits of diminishing financial inequality. However, the technical challenges in individual financial risk evaluation caused by the distinct characteristic distribution and limited credit history of new users, as well as the inexperience of newly-entered companies in handling complex data and obtaining accurate labels, impede further promoting financial inclusion. To tackle these challenges, this paper develops a novel transfer learning algorithm (i.e., TransBoost) that combines the merits of tree-based models and kernel methods. The TransBoost is designed with a parallel tree structure and efficient weights updating mechanism with theoretical guarantee, which enables it to excel in tackling real-world data with high dimensional features and sparsity in $O(n)$ time complexity. We conduct extensive experiments on two public datasets and a unique large-scale dataset from Tencent Mobile Payment. The results show that the TransBoost outperforms other state-of-the-art benchmark transfer learning algorithms in terms of prediction accuracy with superior efficiency, shows stronger robustness to data sparsity, and provides meaningful model interpretation. Besides, given a financial risk level, the TransBoost enables financial service providers to serve the largest number of users including those who would otherwise be excluded by other algorithms. That is, the TransBoost improves financial inclusion.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.02365&r=
  20. By: Henning Tarp Jensen; Marcus Keogh-Brown; Finn Tarp
    Abstract: Myanmar has, in recent years, strengthened its focus on human capital as a development pillar, and introduced legislation and adopted conventions on child labour. But child exploitation continues, including use of forced labour by the military and children performing hazardous work. Moreover, Myanmar faces a rapidly closing window of opportunity within which to train its workforce to meet the future challenges of declining population growth and an ageing society.
    Keywords: Myanmar, Child labour, School reform, Decent Work, Household income, Income distribution, Education policy
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:unu:wpaper:wp-2021-180&r=
  21. By: Kieran Wood; Sven Giegerich; Stephen Roberts; Stefan Zohren
    Abstract: Deep learning architectures, specifically Deep Momentum Networks (DMNs) [1904.04912], have been found to be an effective approach to momentum and mean-reversion trading. However, key challenges in recent years involve learning long-term dependencies, degradation of performance when considering returns net of transaction costs, and adapting to new market regimes, notably during the SARS-CoV-2 crisis. Attention mechanisms, or Transformer-based architectures, address such challenges because they allow the network to focus on significant time steps in the past and on longer-term patterns. We introduce the Momentum Transformer, an attention-based architecture which outperforms the benchmarks and is inherently interpretable, providing us with greater insight into our deep learning trading strategy. Our model is an extension of the LSTM-based DMN, which directly outputs position sizing by optimising the network on a risk-adjusted performance metric, such as the Sharpe ratio. We find an attention-LSTM hybrid Decoder-Only Temporal Fusion Transformer (TFT) style architecture to be the best performing model. In terms of interpretability, we observe remarkable structure in the attention patterns, with significant peaks of importance at momentum turning points. The time series is thus segmented into regimes, and the model tends to focus on previous time steps in similar regimes. We find that changepoint detection (CPD) [2105.13727], another technique for responding to regime change, can complement multi-headed attention, especially when we run CPD at multiple timescales. Through the addition of an interpretable variable selection network, we observe how CPD helps our model move away from trading predominantly on daily returns data. We note that the model can intelligently switch between, and blend, classical strategies, basing its decisions on patterns in the data.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.08534&r=
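The abstract's idea of optimizing the network directly on a risk-adjusted metric can be illustrated with a stand-alone negative-Sharpe loss on position-sized returns, net of a simple linear transaction cost. This is a generic sketch, not the paper's implementation; the 252-trading-day annualization factor and the cost model are assumptions:

```python
import math

def negative_sharpe_loss(positions, returns, cost=0.0):
    """Negative annualized Sharpe ratio of a position-sized strategy,
    net of a simple linear transaction cost on position changes."""
    strat, prev = [], 0.0
    for pos, ret in zip(positions, returns):
        strat.append(pos * ret - cost * abs(pos - prev))
        prev = pos
    mean = sum(strat) / len(strat)
    var = sum((r - mean) ** 2 for r in strat) / len(strat)
    # 252 trading days per year; epsilon guards against zero volatility
    sharpe = mean / (math.sqrt(var) + 1e-12) * math.sqrt(252.0)
    return -sharpe  # minimizing this loss maximizes the Sharpe ratio
```

In a DMN-style setup this quantity would be computed on the network's output positions and minimized by gradient descent, so that position sizing is learned end-to-end rather than derived from a separate return forecast.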
  22. By: Qinkai Chen; Christian-Yann Robert
    Abstract: Existing publications demonstrate that limit order book data is useful for predicting short-term volatility in stock markets. Since stocks are not independent, changes in one stock can also impact other related stocks. In this paper, we are interested in forecasting short-term realized volatility in a multivariate approach based on limit order book data and relational data. To achieve this goal, we introduce a Graph Transformer Network for Volatility Forecasting. The model combines limit order book features with an unlimited number of temporal and cross-sectional relations from different sources. In experiments on about 500 stocks from the S&P 500 index, our model outperforms the other benchmarks.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.09015&r=
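The forecast target here, realized volatility, has a standard estimator: the square root of the sum of squared log returns over a window. A minimal sketch of that estimator (the Graph Transformer Network itself is not reproduced here):

```python
import math

def realized_volatility(prices):
    """Realized volatility over a window: square root of the sum of
    squared log returns between consecutive observations."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return math.sqrt(sum(r * r for r in rets))
```

Computed per stock over short intraday windows, this is the kind of series the model forecasts jointly across the roughly 500 stocks, with the graph structure carrying the cross-sectional relations.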
  23. By: Pietro Emilio Spini
    Abstract: This paper studies the robustness of estimated policy effects to changes in the distribution of covariates. Robustness to covariate shifts is important, for example, when evaluating the external validity of quasi-experimental results, which are often used as a benchmark for evidence-based policy-making. I propose a novel scalar robustness metric. This metric measures the magnitude of the smallest covariate shift needed to invalidate a claim on the policy effect (for example, $ATE \geq 0$) supported by the quasi-experimental evidence. My metric links the heterogeneity of policy effects and robustness in a flexible, nonparametric way and does not require functional form assumptions. I cast the estimation of the robustness metric as a de-biased GMM problem. This approach guarantees a parametric convergence rate for the robustness metric while allowing for machine learning-based estimators of policy effect heterogeneity (for example, lasso, random forest, boosting, neural nets). I apply my procedure to the Oregon Health Insurance experiment. I study the robustness of policy-effect estimates for health-care utilization and financial-strain outcomes to a shift in the distribution of context-specific covariates. Such covariates are likely to differ across US states, making quantification of robustness an important exercise for adoption of the insurance policy in states other than Oregon. I find that the effect on outpatient visits is the most robust among the measures of health-care utilization considered.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.09259&r=
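The notion of "the smallest covariate shift needed to invalidate a claim" can be made concrete with a toy exponential-tilting example: reweight the sample so that units with larger covariate values become relatively more common, then scan for the smallest tilt at which the reweighted ATE turns non-positive. This is a hypothetical simplification for intuition only, not the paper's de-biased GMM estimator:

```python
import math

def shifted_ate(effects, covariate, delta):
    """ATE under an exponentially tilted covariate distribution:
    units with larger covariate values get relatively more weight
    when delta > 0 (and less when delta < 0)."""
    w = [math.exp(delta * x) for x in covariate]
    return sum(wi * t for wi, t in zip(w, effects)) / sum(w)

def smallest_invalidating_shift(effects, covariate, deltas):
    """Smallest |delta| on the grid for which the tilted ATE turns
    non-positive, so the claim of a positive effect no longer holds."""
    bad = [d for d in deltas if shifted_ate(effects, covariate, d) <= 0.0]
    return min(bad, key=abs) if bad else None
```

When treatment effects vary with the covariate, a modest tilt can flip the sign of the ATE, so the magnitude of the smallest such tilt is a natural scalar robustness measure.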
  24. By: Le, Tuan Anh; Dao, Thi Thanh Binh
    Abstract: This paper studies how to construct and compare various optimal portfolio frameworks for investors in the context of the Vietnamese stock market. The aim of the study is to help investors find solutions for constructing an optimal portfolio strategy using modern investment frameworks in the Vietnamese stock market. The study covers the top 43 companies listed on the Ho Chi Minh stock exchange (HOSE) over the ten-year period from July 2010 to January 2021. Optimal portfolios are constructed using the Mean-Variance framework and the Mean-CVaR framework under different copula simulations. Two-thirds of the data, from 26/03/2014 to 27/1/2021, covers the COVID-19 recession, which caused a global downturn; however, the results obtained during this period are still consistent with the results for other periods. Furthermore, randomly selecting different stocks from the research sample yields the same outcome as the previous analyses. At about the same CVaR level of about 2.1%, for example, the Gaussian copula portfolio has a daily mean return of 0.121%, the t copula portfolio has a 0.12% mean return, while the Mean-CVaR portfolio with raw returns has a lower return of 0.103%, and the Mean-Variance portfolio with raw returns has a 0.102% mean return. Empirical results for all 10 portfolio levels show that the CVaR copula simulations significantly outperform the historical Mean-CVaR framework and the Mean-Variance framework in the context of the Vietnamese stock exchange.
    Keywords: Gaussian copula, t copula, simulation, Mean-CVaR, Mean-Variance, portfolio optimization, Vietnam
    JEL: C61 G11 G17
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:111105&r=
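The two building blocks the abstract relies on, copula simulation and historical CVaR, can be sketched as follows. Both functions are generic illustrations (a bivariate Gaussian copula sampler and an empirical CVaR estimator), not the paper's calibrated models:

```python
import math
import random

def _std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_2d(rho, n, seed=0):
    """Sample n pairs of correlated uniforms from a bivariate Gaussian
    copula, using the Cholesky factor of [[1, rho], [rho, 1]]."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2
        pairs.append((_std_normal_cdf(z1), _std_normal_cdf(x2)))
    return pairs

def cvar(returns, alpha=0.05):
    """Historical CVaR at level alpha: the mean loss over the worst
    alpha-fraction of outcomes (losses are negated returns)."""
    losses = sorted((-r for r in returns), reverse=True)
    k = max(1, int(len(losses) * alpha))
    return sum(losses[:k]) / k
```

In a Mean-CVaR optimization, the copula uniforms would be mapped through each stock's marginal distribution to simulate joint return scenarios, and portfolio weights would then be chosen to maximize mean return subject to a CVaR constraint such as the roughly 2.1% level quoted above.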
  25. By: Laine, Tatu; Korpinen, Kasperi
    Abstract: This paper extends traditional payment system simulation analysis to counterparty liquidity risk exposures. The stress-test scenario used corresponds to the counterparty stress scenario applied in the BCBS standard "Monitoring tools for intraday liquidity management" (BIS, 2013). This stress scenario is simulated for participants in the Finnish TARGET2 component with the new BoF-PSS3 simulator. Two liquidity deterioration indicators are introduced to quantify counterparty liquidity risk exposures. As comparing liquidity risk projections to the available liquidity of participants in the system yields only a restricted, system-specific view of the severity of the scenarios, we compare the liquidity risks to high-quality liquid assets (HQLA) available at the group level to assess the overall liquidity risk that participants face in TARGET2. Our results generally comport with the literature and results reported elsewhere. Banking groups are exposed to a liquidity deterioration equivalent to between 20 % and 60 % of their respective HQLA in just 0.35 % of the daily scenario observations. The exercise demonstrates that our proposed alternative form of payment system analysis can be helpful in banking supervision, micro- and macroprudential analysis, as well as resolution authorities' assessment of the effects of their actions on payment systems.
    Keywords: payment systems, stress testing, liquidity risks, counterparty risks, systemic risk, computer simulation
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:bofecr:92021&r=
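The paper's two liquidity deterioration indicators are not specified in the abstract, but one simple measure in the spirit of the BCBS intraday monitoring tools is the largest net cumulative debit position over the day. The sketch below is a generic illustration of that measure, not the paper's indicator:

```python
def largest_net_debit(flows):
    """Largest intraday net cumulative outflow, in the spirit of the
    BCBS intraday liquidity monitoring tools. flows: positive values
    are inflows, negative values are outflows, in time order."""
    cum, worst = 0.0, 0.0
    for f in flows:
        cum += f
        worst = min(worst, cum)
    return -worst
```

Comparing this quantity under a counterparty stress scenario (e.g. with a major counterparty's inflows removed) against a group's HQLA gives the kind of deterioration-to-HQLA ratio the paper reports.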
  26. By: Li, Shiyuan; Hao, Miao
    Abstract: Based on analysis of provincial-level data from 2001 to 2015, we find that regional inequality in China remains a serious concern. Whether artificial intelligence, as a major technological change, will improve or worsen regional inequality is worth researching. We divide regional inequality into two dimensions, production and consumption, measured by a total of three indicators. The empirical research is carried out for the eastern, central and western regions respectively. We find that industrial intelligence reduces inter-regional inequality in residents’ consumer welfare, while at the same time it may worsen regional inequality in innovation. We also clarify the heterogeneity of the mechanisms by which artificial intelligence promotes innovation in different regions.
    Keywords: Artificial Intelligence; Regional Inequality; Innovation; Purchasing Power
    JEL: L25 O32
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:110973&r=

This nep-cmp issue is ©2022 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.