nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒04‒12
forty-five papers chosen by



  1. Automated and Distributed Statistical Analysis of Economic Agent-Based Models By Andrea Vandin; Daniele Giachini; Francesco Lamperti; Francesca Chiaromonte
  2. Comparing hundreds of machine learning classifiers and discrete choice models in predicting travel behavior: an empirical benchmark By Shenhao Wang; Baichuan Mo; Stephane Hess; Jinhua Zhao
  3. Monte Carlo algorithm for the extrema of tempered stable processes By Jorge Ignacio González Cázares; Aleksandar Mijatović
  4. Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L) By Igor Halperin
  5. Assessing the Economic Impact of Lockdowns in Italy: A Computational Input-Output Approach By Severin Reissl; Alessandro Caiani; Francesco Lamperti; Mattia Guerini; Fabio Vanni; Giorgio Fagiolo; Tommaso Ferraresi; Leonardo Ghezzi; Mauro Napoletano; Andrea Roventini
  6. The Application of Machine Learning Algorithms for Spatial Analysis: Predicting of Real Estate Prices in Warsaw By Dawid Siwicki
  7. Stock price forecast with deep learning By Firuz Kamalov; Linda Smail; Ikhlaas Gurrib
  8. Forecasting with Deep Learning: S&P 500 index By Firuz Kamalov; Linda Smail; Ikhlaas Gurrib
  9. A Stochastic Time Series Model for Predicting Financial Trends using NLP By Pratyush Muthukumar; Jie Zhong
  10. Agent-Based Computational Economics: Overview and Brief History By Tesfatsion, Leigh
  11. Bounds and Heuristics for Multi-Product Personalized Pricing By Guillermo Gallego; Gerardo Berbeglia
  12. Artificial Neural Network and Analytical Hierarchy Process Integration: A Tool to Estimate Business Strategy of Bank By Mochammad Ridwan Ristyawan
  13. Monte Carlo Simulation of SDEs using GANs By Jorino van Rhijn; Cornelis W. Oosterlee; Lech A. Grzelak; Shuaiqiang Liu
  14. A Comparative Evaluation of Predominant Deep Learning Quantified Stock Trading Strategies By Haohan Zhang
  15. Asset Selection via Correlation Blockmodel Clustering By Wenpin Tang; Xiao Xu; Xun Yu Zhou
  16. Reinventing the Utility for DERs: A Proposal for a DSO-Centric Retail Electricity Market By Rabab Haider; David D'Achiardi; Venkatesh Venkataramanan; Anurag Srivastava; Anjan Bose; Anuradha M. Annaswamy
  17. Bridging the Income and Digital Divide with Shared Automated Electric Vehicles By Lazarus, Jessica; Bauer, Gordon PhD; Greenblatt, Jeffery PhD; Shaheen, Susan PhD
  18. Pyramid scheme in stock market: a kind of financial market simulation By Yong Shi; Bo Li; Guangle Du
  19. CRPS Learning By Jonathan Berrisch; Florian Ziel
  20. Basic Income Simulations for the Province of British Columbia By Green, David A.; Kesselman, Jonathan Rhys; Tedds, Lindsay M.; Crisan, I. Daria; Petit, Gillian
  21. A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management By Zhenhan Huang; Fumihide Tanaka
  22. Embeddings and Attention in Predictive Modeling By Kevin Kuo; Ronald Richman
  23. Behavioral Economics Approach to Interpretable Deep Image Classification. Rationally Inattentive Utility Maximization Explains Deep Image Classification By Kunal Pattanayak; Vikram Krishnamurthy
  24. Sustainability transition and digital transformation: an agent-based perspective By Nieddu, Marcello; Bertani, Filippo; Ponta, Linda
  25. Correlated Bandits for Dynamic Pricing via the ARC algorithm By Samuel Cohen; Tanut Treetanthiploet
  26. Assessing Sensitivity of Machine Learning Predictions. A Novel Toolbox with an Application to Financial Literacy By Falco J. Bargagli Stoffi; Kenneth De Beckker; Joana E. Maldonado; Kristof De Witte
  27. Predicting Inflation with Neural Networks By Livia Paranhos
  28. Evolutionary Strategies with Analogy Partitions in p-guessing Games By Aymeric Vie
  29. Predicting Authoritarian Crackdowns: A Machine Learning Approach By Zhong, Weifeng; Chan, Julian
  30. A Genetic Algorithm approach to Asymmetrical Blotto Games with Heterogeneous Valuations By Aymeric Vie
  31. The VIX index under scrutiny of machine learning techniques and neural networks By Ali Hirsa; Joerg Osterrieder; Branka Hadji Misheva; Wenxin Cao; Yiwen Fu; Hanze Sun; Kin Wai Wong
  32. The Effect of Sport in Online Dating: Evidence from Causal Machine Learning By Boller, Daniel; Lechner, Michael; Okasa, Gabriel
  33. Who Benefits When Firms Game Corrective Policies? By Mathias Reynaert; James M. Sallee
  34. Ambiguous Outcome Magnitude in Economic Decision Making with Low and High Monetary Stakes By Zbozinek, Tomislav Damir; Charpentier, Caroline Juliette; Qi, Song; Mobbs, Dean
  35. Quarantine, Contact Tracing, and Testing: Implications of an Augmented SEIR Model By Andreas Hornstein
  36. Detailed Trade Policy Simulations Using a Global General Equilibrium Model By Aguiar, Angel; Erwin Corong; Dominique van der Mensbrugghe
  37. Research on Portfolio Liquidation Strategy under Discrete Times By Qixuan Luo; Yu Shi; Handong Li
  38. Words Speak Louder Than Numbers: Estimating China's COVID Severity with Deep Learning By Zhong, Weifeng; Chan, Julian; Ho, Kwan-Yuet; Lee, Kit
  39. Accurate Stock Price Forecasting Using Robust and Optimized Deep Learning Models By Jaydip Sen; Sidra Mehtab
  40. A deep learning model for gas storage optimization By Nicolas Curin; Michael Kettler; Xi Kleisinger-Yu; Vlatka Komaric; Thomas Krabichler; Josef Teichmann; Hanna Wutte
  41. Evaluating the Existing Basic Income Simulation Literature By Tedds, Lindsay M.; Crisan, I. Daria
  42. Household Poverty in Egypt: Poverty Profile, Econometric Modeling and Policy Simulations By Nosier, Shereen; Beram, Reham; Mahrous, Mohamed
  43. Labor Informality and Credit Market Accessibility By Alina Malkova; Klara Sabirianova Peter; Jan Svejnar
  44. Distributional Impacts of Carbon Pricing Policies under Paris Agreement: Inter and Intra-Regional Perspectives By Chepeliev, Maksym; Israel Osorio Rodarte; Dominique van der Mensbrugghe
  45. Inference under Covariate-Adaptive Randomization with Imperfect Compliance By Federico A. Bugni; Mengsi Gao

  1. By: Andrea Vandin; Daniele Giachini; Francesco Lamperti; Francesca Chiaromonte
    Abstract: We propose a novel approach to the statistical analysis of simulation models and, especially, agent-based models (ABMs). Our main goal is to provide a fully automated and model-independent tool-kit to inspect simulations and perform counterfactual analysis. Our approach: (i) is easy for the modeller to use, (ii) improves reproducibility of results, (iii) optimizes running time given the modeller's machine, (iv) automatically chooses the number of required simulations and simulation steps to reach user-specified statistical confidence, and (v) automatically performs a variety of statistical tests. In particular, our framework is designed to distinguish the transient dynamics of the model from its steady-state behaviour (if any), estimate properties of the model in both "phases", and provide indications on the ergodic (or non-ergodic) nature of the simulated processes -- which, in turn, allows one to gauge the reliability of a steady-state analysis. Estimates are equipped with statistical guarantees, allowing for robust comparisons across computational experiments. To demonstrate the effectiveness of our approach, we apply it to two models from the literature: a large-scale macro-financial ABM and a small-scale prediction market model. Compared to prior analyses of these models, we obtain new insights and are able to identify and fix some erroneous conclusions.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.05405&r=all
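    To make item (iv) of the abstract concrete, here is a minimal Python sketch of one ingredient such a tool-kit automates: adding independent replications until the confidence interval for a mean output is narrower than a user-chosen tolerance. The toy model, tolerance, and confidence level are illustrative assumptions; the paper's framework is model-independent and covers far more (transient detection, ergodicity diagnostics, batteries of tests).

```python
# A sequential-sampling sketch: add replications until the 95% confidence
# interval for the mean output is narrower than a user-chosen tolerance.
# `toy_model` is a hypothetical stand-in for a full ABM run.
import random
import statistics

def toy_model(seed):
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(100))

def run_until_confident(model, tol=0.5, z=1.96, min_runs=10, max_runs=100_000):
    outputs = [model(seed) for seed in range(min_runs)]
    half_width = float("inf")
    while len(outputs) < max_runs:
        half_width = z * statistics.stdev(outputs) / len(outputs) ** 0.5
        if half_width < tol:
            break
        outputs.append(model(len(outputs)))
    return statistics.mean(outputs), half_width, len(outputs)

mean, hw, n = run_until_confident(toy_model)
print(f"mean = {mean:.3f} +/- {hw:.3f} after {n} replications")
```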
  2. By: Shenhao Wang; Baichuan Mo; Stephane Hess; Jinhua Zhao
    Abstract: Researchers have compared machine learning (ML) classifiers and discrete choice models (DCMs) in predicting travel behavior, but the generalizability of the findings is limited by the specifics of data, contexts, and authors' expertise. This study seeks to provide a generalizable empirical benchmark by comparing hundreds of ML and DCM classifiers in a highly structured manner. The experiments evaluate both prediction accuracy and computational cost by spanning four hyper-dimensions: 105 ML and DCM classifiers from 12 model families, 3 datasets, 3 sample sizes, and 3 outputs. This experimental design leads to a total of 6,970 experiments, which are corroborated with a meta dataset of 136 experiment points from 35 previous studies. This study is hitherto the most comprehensive and almost exhaustive comparison of classifiers for travel behavioral prediction. We find that ensemble methods and deep neural networks achieve the highest predictive performance, but at a relatively high computational cost. Random forests are the most computationally efficient, balancing prediction and computation. While discrete choice models offer accuracy only 3-4 percentage points lower than the top ML classifiers, they have much longer computational times and become computationally infeasible with large sample sizes, high input dimensions, or simulation-based estimation. The relative ranking of the ML and DCM classifiers is highly stable, while the absolute values of prediction accuracy and computational time vary widely. Overall, this paper suggests using deep neural networks, model ensembles, and random forests as baseline models for future travel behavior prediction. For choice modeling, the DCM community should shift more attention from fitting models to improving computational efficiency, so that DCMs can be widely adopted in the big data context.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01130&r=all
  3. By: Jorge Ignacio González Cázares; Aleksandar Mijatović
    Abstract: We develop a novel Monte Carlo algorithm for the vector consisting of the supremum, the time at which the supremum is attained and the position of an exponentially tempered Lévy process. The algorithm, based on the increments of the process without tempering, converges geometrically fast (as a function of the computational cost) for discontinuous and locally Lipschitz functions of the vector. We prove that the corresponding multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\epsilon^{-2}$ if the mean squared error is at most $\epsilon^{2}$) and provide its central limit theorem (CLT). Using the CLT we construct confidence intervals for barrier option prices and various risk measures based on drawdown under the tempered stable (CGMY) model calibrated/estimated on real-world data. We provide non-asymptotic and asymptotic comparisons of our algorithm with existing approximations, leading to rule-of-thumb guidelines for users on the best method for a given set of parameters, and illustrate its performance with numerical examples.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.15310&r=all
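    The optimal $\epsilon^{-2}$ complexity claim rests on the multilevel Monte Carlo telescoping identity $E[P_L] = E[P_0] + \sum_l E[P_l - P_{l-1}]$. A generic Python sketch of such an estimator for an Euler-discretised diffusion follows; the paper's actual algorithm for tempered stable processes is not reproduced here, and the dynamics, payoff and parameters below are illustrative assumptions.

```python
# Generic MLMC sketch for E[f(X_T)] under geometric Brownian motion, with
# fine and coarse paths coupled through shared Brownian increments so the
# level corrections E[P_l - P_{l-1}] have small variance.
import numpy as np

rng = np.random.default_rng(0)
T, X0, MU, SIG = 1.0, 1.0, 0.05, 0.2
payoff = lambda x: np.maximum(x - 1.0, 0.0)

def level_zero(n):
    dw = np.sqrt(T) * rng.standard_normal(n)
    x = X0 + MU * X0 * T + SIG * X0 * dw          # single Euler step
    return payoff(x).mean()

def level_diff(level, n):
    steps = 2 ** level
    dt = T / steps
    xf = np.full(n, X0)                            # fine path
    xc = np.full(n, X0)                            # coarse path (half the steps)
    for _ in range(steps // 2):
        dw1 = np.sqrt(dt) * rng.standard_normal(n)
        dw2 = np.sqrt(dt) * rng.standard_normal(n)
        xf += MU * xf * dt + SIG * xf * dw1
        xf += MU * xf * dt + SIG * xf * dw2
        xc += MU * xc * 2 * dt + SIG * xc * (dw1 + dw2)
    return (payoff(xf) - payoff(xc)).mean()

estimate = level_zero(100_000) + sum(level_diff(l, 100_000) for l in range(1, 7))
print(f"MLMC estimate of the call payoff: {estimate:.4f}")
```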
  4. By: Igor Halperin
    Abstract: This paper addresses distributional offline continuous-time reinforcement learning (DOCTR-L) with stochastic policies for high-dimensional optimal control. A soft distributional version of the classical Hamilton-Jacobi-Bellman (HJB) equation is given by a semilinear partial differential equation (PDE). This `soft HJB equation' can be learned from offline data without assuming that the latter correspond to a previous optimal or near-optimal policy. A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML). The suggested approach, dubbed `SciPhy RL', thus reduces DOCTR-L to solving neural PDEs from data. Our algorithm called Deep DOCTR-L converts offline high-dimensional data into an optimal policy in one step by reducing it to supervised learning, instead of relying on value iteration or policy iteration methods. The method enables a computable approach to the quality control of obtained policies in terms of both their expected returns and uncertainties about their values.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.01040&r=all
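    The core SciML device the abstract invokes -- training a network so that a PDE residual, computed by automatic differentiation, vanishes on sampled collocation points -- can be sketched in a few lines of PyTorch. The example below targets a toy heat equation u_t = u_xx rather than the paper's soft HJB equation, and it omits the boundary/terminal loss terms that, in the paper, would carry the offline data; everything here is an illustrative assumption.

```python
# PINN sketch: fit u(t, x) so the residual of u_t = u_xx is driven to zero
# on randomly sampled collocation points, using autograd for derivatives.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(t, x):
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True) * 2 - 1
    # Boundary/terminal data would enter as extra supervised loss terms;
    # in the paper those data come from offline trajectories.
    loss = pde_residual(t, x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```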
  5. By: Severin Reissl (IUSS Pavia); Alessandro Caiani (IUSS Pavia); Francesco Lamperti (Institute of Economics and EMbeDS, Scuola Superiore Sant'Anna; RFF-CMCC European Institute on Economics and the Environment); Mattia Guerini (Université Côte d'Azur, CNRS, GREDEG, France; Sant'Anna School of Advanced Studies; Sciences Po., OFCE); Fabio Vanni (Sciences Po, OFCE); Giorgio Fagiolo (Institute of Economics and EMbeDS, Scuola Superiore Sant'Anna); Tommaso Ferraresi (Istituto Regionale per la Programmazione Economica della Toscana); Leonardo Ghezzi (Istituto Regionale per la Programmazione Economica della Toscana); Mauro Napoletano (OFCE Sciences-Po; SKEMA Business School); Andrea Roventini (Institute of Economics and EMbeDS, Scuola Superiore Sant'Anna; Sciences Po, OFCE)
    Abstract: We build a novel computational input-output model to estimate the economic impact of lockdowns in Italy. The key advantage of our framework is to integrate the regional and sectoral dimensions of economic production in a very parsimonious numerical simulation framework. Lockdowns are treated as shocks to available labor supply and are calibrated on regional and sectoral employment data coupled with the prescriptions of government decrees. We show that when estimated on data from the first "hard" lockdown, our model closely reproduces the observed economic dynamics during spring 2020. In addition, we show that the model delivers a good out-of-sample forecasting performance. We also analyze the effects of the second "mild" lockdown in the fall of 2020, which delivered a much more moderate negative impact on production compared to both the spring 2020 lockdown and a hypothetical second "hard" lockdown.
    Keywords: Input-output, Covid-19, Lockdown, Italy
    JEL: C63 C67 D57 E17 I18 R15
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:gre:wpaper:2021-15&r=all
  6. By: Dawid Siwicki (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The principal aim of this paper is to investigate the potential of machine learning algorithms in the context of predicting housing prices. The most important issue in modelling spatial data is to account for spatial heterogeneity, which can bias the obtained results when it is not taken into consideration. The purpose of this research is to compare the predictive power of the following methods: linear regression, artificial neural network, random forest, extreme gradient boosting and the spatial error model. The evaluation was conducted using train, validation and test splits as well as k-fold cross-validation. We also examine the ability of the above models to identify spatial dependencies by calculating Moran’s I for residuals obtained on in-sample and out-of-sample data.
    Keywords: spatial analysis, machine learning, housing market, random forest, gradient boosting
    JEL: C31 C45 C52 C53 C55 R31
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2021-05&r=all
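    The final diagnostic the abstract mentions -- checking residuals for leftover spatial dependence via Moran's I -- can be computed directly. A small Python sketch follows, with synthetic coordinates and residuals and an assumed inverse-distance weight matrix standing in for whatever weighting the paper actually uses.

```python
# Moran's I for model residuals: values near 0 suggest no spatial
# autocorrelation; clearly positive values suggest the model misses
# spatial structure that a spatial econometric model could capture.
import numpy as np

def morans_i(values, coords, cutoff=1.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.zeros_like(d)
    mask = (d > 0) & (d <= cutoff)          # inverse-distance weights with cutoff
    w[mask] = 1.0 / d[mask]
    z = values - values.mean()
    n = len(values)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 5, size=(200, 2))   # property locations (toy data)
residuals = rng.normal(size=200)            # residuals from a price model (toy data)
print(f"Moran's I: {morans_i(residuals, coords):.4f}")
```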
  7. By: Firuz Kamalov; Linda Smail; Ikhlaas Gurrib
    Abstract: In this paper, we compare various approaches to stock price prediction using neural networks. We analyze the performance of fully connected, convolutional, and recurrent architectures in predicting the next-day value of the S&P 500 index based on its previous values. We further expand our analysis by including three different optimization techniques: Stochastic Gradient Descent, Root Mean Square Propagation, and Adaptive Moment Estimation. The numerical experiments reveal that a single-layer recurrent neural network with the RMSprop optimizer produces the best results, with validation and test Mean Absolute Errors of 0.0150 and 0.0148, respectively.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.14081&r=all
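    A minimal Keras sketch of the reported winning configuration -- a single recurrent layer trained with RMSprop on lagged index values and scored by MAE -- is below. The window length, layer width, training schedule, and synthetic series are illustrative assumptions, not the authors' exact setup.

```python
# Single-layer recurrent network predicting the next value of a scaled
# price series from a sliding window of past values, trained with RMSprop.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000)).astype("float32")       # toy price path
series = (series - series.min()) / (series.max() - series.min())  # scale to [0, 1]

window = 20
X = np.stack([series[i : i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="rmsprop", loss="mae")   # MAE, matching the paper's metric
hist = model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print("validation MAE:", hist.history["val_loss"][-1])
```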
  8. By: Firuz Kamalov; Linda Smail; Ikhlaas Gurrib
    Abstract: Stock price prediction has been the focus of a large amount of research, but an acceptable solution has so far escaped academics. Recent advances in deep learning have motivated researchers to apply neural networks to stock prediction. In this paper, we propose a convolution-based neural network model for predicting the future value of the S&P 500 index. The proposed model is capable of predicting the next-day direction of the index based on the previous values of the index. Experiments show that our model outperforms a number of benchmarks, achieving an accuracy rate of over 55%.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.14080&r=all
  9. By: Pratyush Muthukumar; Jie Zhong
    Abstract: Stock price forecasting is a highly complex and vitally important field of research. Recent advancements in deep neural network technology allow researchers to develop highly accurate models to predict financial trends. We propose a novel deep learning model called ST-GAN, or Stochastic Time-series Generative Adversarial Network, that analyzes both financial news texts and financial numerical data to predict stock trends. We utilize cutting-edge technology like the Generative Adversarial Network (GAN) to learn the correlations among textual and numerical data over time. We develop a new method of training a time-series GAN directly using the learned representations of Naive Bayes sentiment analysis on financial text data alongside technical indicators from numerical data. Our experimental results show significant improvement over various existing models and prior research on deep neural networks for stock price forecasting.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01290&r=all
  10. By: Tesfatsion, Leigh
    Abstract: Scientists seek to understand how real-world systems work. Models devised for scientific purposes must always simplify reality. However, scientists should be permitted to tailor these simplifications to purposes at hand; they should not be forced to distort reality in specific predetermined ways in order to apply a modeling approach. Adherence to this modeling precept was a key goal motivating my development of Agent-Based Computational Economics (ACE), a variant of agent-based modeling characterized by seven specific modeling principles. This perspective provides an overview of ACE and a brief history of its development.
    Date: 2021–03–29
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:202103290700001125&r=all
  11. By: Guillermo Gallego; Gerardo Berbeglia
    Abstract: We present tight bounds and heuristics for personalized, multi-product pricing problems. Under mild conditions we show that the best price in the direction of a positive vector results in profits that are guaranteed to be at least as large as a fraction of the profits from optimal personalized pricing. For unconstrained problems, the fraction depends on the factor and on optimal price vectors for the different customer types. For constrained problems, the fraction depends on the factor and a ratio of the constraints. Using a factor vector with equal components results in uniform pricing and has exceedingly mild sufficient conditions for the bound to hold. A robust factor is presented that achieves the best possible performance guarantee. As an application, our model yields a tight lower bound on the performance of linear pricing relative to optimal personalized non-linear pricing, and suggests effective non-linear price heuristics relative to personalized solutions. Additionally, in the context of multi-product bundle pricing, we use our model to provide profit guarantees for simple strategies such as bundle-size pricing and component pricing with respect to the optimal personalized mixed bundling solution. Heuristics to cluster customer types are also developed with the goal of improving performance by allowing each cluster to price along its own factor. Numerical results are presented for a variety of demand models that illustrate the tradeoffs between using the economic factor and the robust factor for each cluster, as well as the tradeoffs between using a clustering heuristic with a worst-case performance of two and a machine learning clustering algorithm. In our experiments, economically motivated factors coupled with machine learning clustering heuristics performed significantly better than other combinations.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.03038&r=all
  12. By: Mochammad Ridwan Ristyawan (Department of Management, Faculty of Economics and Business, Universitas Tanjungpura, 78124, Pontianak, Indonesia)
    Abstract: Objective - Disruption has been occurring in financial services, and banks therefore need to rethink their strategy to sustain innovation. Studies have noted that formulating strategy is a very costly, time-consuming, and comprehensive exercise. The purpose of this study is to present an integrated intelligence algorithm for estimating the strategy of a bank in Indonesia. Methodology – This study integrates two modules, an Artificial Neural Network (ANN) and the Analytical Hierarchy Process (AHP). AHP is capable of handling a multi-level decision-making structure, using five expert judgments in the pairwise comparison process. ANN, meanwhile, is utilized as an inductive algorithm to discover the bank's predicted strategy and to explain which strategic factors should be improved going forward. Findings and Novelty – The empirical results indicate that the ANN-AHP integration predicts the bank's business strategy across five scenarios. Strategy 5 was the best choice for the bank, and Innovate Like Fintechs (ILF) is the most important factor. The chosen strategy was appropriate to the condition of the bank's factors. This framework can be implemented to help bankers decide on bank operations. Type of Paper - Empirical
    Keywords: Bank's strategy, ANN, AHP, BSC, Indonesia.
    JEL: M15 O32
    Date: 2021–03–31
    URL: http://d.repec.org/n?u=RePEc:gtr:gatrjs:jfbr179&r=all
  13. By: Jorino van Rhijn; Cornelis W. Oosterlee; Lech A. Grzelak; Shuaiqiang Liu
    Abstract: Generative adversarial networks (GANs) have shown promising results when applied to partial differential equations and financial time series generation. We investigate whether GANs can also be used to approximate one-dimensional Ito stochastic differential equations (SDEs). We propose a scheme that approximates the path-wise conditional distribution of SDEs for large time steps. Standard GANs are only able to approximate processes in distribution, yielding a weak approximation to the SDE. A conditional GAN architecture is proposed that enables strong approximation. We inform the discriminator of this GAN with the map between the prior input to the generator and the corresponding output samples, i.e. we introduce a `supervised GAN'. We compare the input-output maps obtained with the standard GAN and the supervised GAN and show experimentally that the standard GAN may fail to provide a path-wise approximation. The GAN is trained on a dataset obtained with exact simulation. The architecture is tested on geometric Brownian motion (GBM) and the Cox-Ingersoll-Ross (CIR) process. The supervised GAN outperforms the Euler and Milstein schemes in strong error on a discretisation with large time steps. It also outperforms the standard conditional GAN when approximating the conditional distribution. We further demonstrate how standard GANs may give rise to non-parsimonious input-output maps that are sensitive to perturbations, which motivates the need for constraints and regularisation on GAN generators.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.01437&r=all
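    A minimal conditional GAN in the spirit of the paper's setup -- learning the one-large-time-step transition of GBM from exactly simulated (X_t, X_{t+dt}) pairs -- is sketched below in PyTorch. The paper's `supervised GAN', which additionally informs the discriminator of the noise-to-output map, and its CIR experiments are omitted; the architectures and parameters are illustrative assumptions.

```python
# Conditional GAN sketch: generator maps (condition x0, noise) to a sample of
# X_{t+dt}; discriminator scores (condition, sample) pairs against exact GBM draws.
import torch

torch.manual_seed(0)
mu, sigma, dt = 0.05, 0.2, 1.0                     # one large time step

def exact_gbm_step(x0):
    z = torch.randn_like(x0)
    return x0 * torch.exp((mu - 0.5 * sigma**2) * dt + sigma * (dt**0.5) * z)

G = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
D = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(5000):
    x0 = torch.rand(256, 1) + 0.5                  # condition: starting value
    real = exact_gbm_step(x0)                      # exact-simulation training data
    fake = G(torch.cat([x0, torch.randn(256, 1)], dim=1))
    loss_d = bce(D(torch.cat([x0, real], dim=1)), torch.ones(256, 1)) + \
             bce(D(torch.cat([x0, fake.detach()], dim=1)), torch.zeros(256, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    loss_g = bce(D(torch.cat([x0, fake], dim=1)), torch.ones(256, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```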
  14. By: Haohan Zhang
    Abstract: This study first reconstructs three deep learning powered stock trading models and their associated strategies that are representative of distinct approaches to the problem and built upon different aspects of the many theories evolved around deep learning. It then compares the performance of these strategies from different perspectives through trading simulations run on three scenarios in which the benchmarks are kept at historical low points for extended periods of time. The results show that in extremely adverse market climates, investment portfolios managed by deep learning powered algorithms are able to avert accumulated losses by generating return sequences that shift the constantly negative CSI 300 benchmark return upward. Among the three, the LSTM model's strategy yields the best performance when the benchmark sustains continued losses.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.15304&r=all
  15. By: Wenpin Tang; Xiao Xu; Xun Yu Zhou
    Abstract: We aim to cluster financial assets in order to identify a small set of stocks that approximates the level of diversification of the whole universe of stocks. We develop a data-driven approach to clustering based on a correlation blockmodel in which assets in the same cluster have the same correlations with all other assets. We devise an algorithm to detect the clusters, with a theoretical analysis and practical guidance. Finally, we conduct an empirical analysis to assess the performance of the algorithm.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.14506&r=all
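    The broad recipe -- cluster assets on a correlation-based distance, then keep one representative per cluster -- can be sketched with standard tools. The paper's blockmodel detection algorithm and theory are not reproduced here; plain hierarchical clustering is used as a stand-in, and the returns are synthetic.

```python
# Cluster assets on correlation distance and pick one representative per
# cluster (the asset most correlated with its peers).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 30))             # 500 days x 30 assets (toy)
returns[:, 10:20] += rng.normal(size=(500, 1))   # shared factor -> one block
corr = np.corrcoef(returns.T)

dist = np.sqrt(0.5 * (1.0 - corr))               # correlation distance
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=4, criterion="maxclust")

for k in sorted(set(labels)):
    members = np.where(labels == k)[0]
    rep = members[np.argmax(corr[np.ix_(members, members)].sum(axis=1))]
    print(f"cluster {k}: assets {members.tolist()}, representative {rep}")
```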
  16. By: Rabab Haider; David D'Achiardi; Venkatesh Venkataramanan; Anurag Srivastava; Anjan Bose; Anuradha M. Annaswamy
    Abstract: The increasing penetration of intermittent renewables, storage devices, and flexible loads is introducing operational challenges in distribution grids. The proper coordination and scheduling of these resources using a distributed approach is warranted, and can only be achieved through local retail markets employing transactive energy schemes. To this end, we propose a distribution-level retail market operated by a Distribution System Operator (DSO), which schedules DERs and determines the real-time distribution-level Locational Marginal Price (d-LMP). The retail market is built using a distributed Proximal Atomic Coordination (PAC) algorithm, which solves the optimal power flow model while accounting for network physics, rendering locationally and temporally varying d-LMPs. A numerical study of the market structure is carried out via simulations of the IEEE-123 node network using data from ISO-NE and Eversource in Massachusetts, US. The market performance is compared to existing retail practices, including demand response (DR) with no-export rules and net metering. The DSO-centric market increases DER utilization, permits continual market participation for DR, lowers electricity rates for customers, and eliminates the subsidies inherent to net metering programs. The resulting lower revenue stream for the DSO highlights the evolving business model of the modern utility, moving from commoditized markets towards performance-based ratemaking.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01269&r=all
  17. By: Lazarus, Jessica; Bauer, Gordon PhD; Greenblatt, Jeffery PhD; Shaheen, Susan PhD
    Abstract: This research investigates strategies to improve the mobility of low-income travelers by incentivizing the use of shared automated electric vehicles (SAEVs) and public transit. We employ two agent-based simulation engines, an activity-based travel demand model of the San Francisco Bay Area, and vehicle movement data from the San Francisco Bay Area and the Los Angeles Basin to model emergent travel behavior of commute trips in response to subsidies for TNCs and public transit. Sensitivity analysis was conducted to assess the impacts of different subsidy scenarios on mode choices, TNC pooling and match rates, vehicle occupancies, vehicle miles traveled (VMT), and TNC revenues. The scenarios varied in the determination of which travel modes and income levels were eligible to receive a subsidy of $1.25, $2.50, or $5.00 per ride. Four different mode-specific subsidies were investigated, including subsidies for 1) all TNC rides, 2) pooled TNC rides only, 3) all public transit rides, and 4) TNC rides to/from public transit only. Each of the four mode-specific subsidies was applied in scenarios which subsidized travelers of all income levels, as well as scenarios that only subsidized low-income travelers (earning less than $50,000 annual household income). Simulations estimating wait times for TNC trips in both the San Francisco Bay Area and Los Angeles regions also revealed that wait times are distributed approximately equally across low- and high-income trip requests.
    Keywords: Social and Behavioral Sciences
    Date: 2021–03–01
    URL: http://d.repec.org/n?u=RePEc:cdl:itsrrp:qt5f1359rd&r=all
  18. By: Yong Shi; Bo Li; Guangle Du
    Abstract: Agent-based artificial stock market simulation is an important means of studying financial markets. Based on the assumption that investors are composed of a main fund and small trend and contrarian investors characterized by four parameters, we simulate and study a kind of financial phenomenon with the characteristics of pyramid schemes. Our simulation results and theoretical analysis reveal the relationships between the rate of return of the main fund and the proportion of trend investors among all small investors, the small investors' take-profit and stop-loss parameters, the order size of the main fund, and the strategies adopted by the main fund. Our work helps explain financial phenomena with the characteristics of pyramid schemes in financial markets, design trading rules for regulators, and develop trading strategies for investors.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.02179&r=all
  19. By: Jonathan Berrisch; Florian Ziel
    Abstract: Combination and aggregation techniques can improve forecast accuracy substantially. This also holds for probabilistic forecasting methods where full predictive distributions are combined. There are several time-varying and adaptive weighting schemes, such as Bayesian model averaging (BMA). However, the performance of different forecasters may vary not only over time but also across parts of the distribution: one forecaster may be more accurate in the center of the distribution, while others perform better in predicting the tails. Consequently, we introduce a new weighting procedure that considers varying performance across both time and the distribution. We discuss pointwise online aggregation algorithms that optimize with respect to the continuous ranked probability score (CRPS). After analyzing the theoretical properties of a fully adaptive Bernstein online aggregation (BOA) method, we introduce smoothing procedures for pointwise CRPS learning. The properties are confirmed and discussed using simulation studies. Additionally, we illustrate the performance in a forecasting study for carbon markets, in which we predict the distribution of European emission allowance prices.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.00968&r=all
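    The `pointwise' idea -- letting combination weights differ across parts of the predictive distribution -- can be sketched with plain exponential weighting on a quantile grid, where the pinball loss averaged over quantiles approximates the CRPS. This is not the paper's Bernstein online aggregation or its smoothing procedures; the experts, learning rate, and data below are synthetic assumptions.

```python
# Two fixed probabilistic forecasters combined quantile-by-quantile with
# exponentially weighted averaging driven by the pinball (quantile) loss.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
taus = np.linspace(0.05, 0.95, 19)                          # quantile grid
experts = np.stack([norm.ppf(taus, loc=0.0, scale=1.5),     # expert 1: too wide
                    norm.ppf(taus, loc=0.3, scale=1.0)])    # expert 2: biased
weights = np.full((2, len(taus)), 0.5)                      # per-quantile weights
eta = 2.0                                                   # learning rate

def pinball(q, y, tau):
    return np.where(y >= q, tau * (y - q), (1 - tau) * (q - y))

for t in range(1000):
    y = rng.normal()                                        # truth is N(0, 1)
    weights *= np.exp(-eta * pinball(experts, y, taus))
    weights /= weights.sum(axis=0, keepdims=True)

# Weights now differ across quantiles: each expert dominates where it is better.
print("weight on expert 1 by quantile:", np.round(weights[0], 2))
print("combined quantile forecasts:", np.round((weights * experts).sum(axis=0), 2))
```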
  20. By: Green, David A.; Kesselman, Jonathan Rhys; Tedds, Lindsay M.; Crisan, I. Daria; Petit, Gillian
    Abstract: An important component of the work to be completed by British Columbia’s Expert Panel on Basic Income is to design simulations to look at how various basic income (BI) models could work in B.C. (B.C. Poverty Reduction, 2018). The intent of these simulations is to identify the potential impacts and financial implications for B.C. residents of different variants of a BI. Given the poverty reduction targets passed by the B.C. government, detailed in Petit and Tedds (2020d), the potential impacts include those on the incidence and depth of poverty in the province (B.C. Poverty Reduction, n.d.). The panel ran over 16,000 different BI scenarios for B.C., which were modelled using Statistics Canada’s Social Policy Simulation Database and Model (SPSD/M) program. We evaluate different BI scenarios in terms of their implications for a variety of measures, including cost, number of recipients, rates of poverty, depths of poverty, distributional effects, and inequality impacts. This paper provides details regarding these simulations. Our goal in this paper is simply to consider different versions of a basic income in terms of both their cost implications and their implications for poverty reduction. We believe that identifying the most effective variants of a basic income in terms of these two criteria will help sharpen the conversation about the applicability of a basic income as a policy option for B.C.
    Keywords: Basic income; Simulations; Statistics Canada’s Social Policy Simulation Database and Model; Poverty Reduction; Distributional effects; Inequality.
    JEL: I38 I39
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105918&r=all
  21. By: Zhenhan Huang; Fumihide Tanaka
    Abstract: Financial portfolio management is one of the most applicable problems in reinforcement learning (RL) owing to its sequential decision-making nature. Existing RL-based approaches, while inspiring, often lack the scalability, reusability, or depth of intake information needed to accommodate the ever-changing capital markets. In this paper, we design and develop MSPM, a novel multi-agent RL-based system with a modularized and scalable architecture for portfolio management. MSPM involves two asynchronously updated units: the Evolving Agent Module (EAM) and the Strategic Agent Module (SAM). A self-sustained EAM produces signal-comprised information for a specific asset using heterogeneous data inputs, and each EAM can be reused through connections to multiple SAMs. A SAM is responsible for the asset reallocation of a portfolio using the information from the EAMs it is connected to. With its elaborate architecture and multi-step condensation of volatile market information, MSPM aims to provide a customizable, stable, and dedicated solution to portfolio management that existing approaches do not. We also tackle the data-shortage issue of newly listed stocks by transfer learning, and validate the necessity of the EAM. Experiments on eight years of U.S. stock market data demonstrate the effectiveness of MSPM in accumulating profits through its outperformance of existing benchmarks.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.03502&r=all
  22. By: Kevin Kuo; Ronald Richman
    Abstract: We explore in depth how categorical data can be processed with embeddings in the context of claim severity modeling. We develop several models that range in complexity from simple neural networks to state-of-the-art attention based architectures that utilize embeddings. We illustrate the utility of learned embeddings from neural networks as pretrained features in generalized linear models, and discuss methods for visualizing and interpreting embeddings. Finally, we explore how attention based models can contextually augment embeddings, leading to enhanced predictive performance.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.03545&r=all
  23. By: Kunal Pattanayak; Vikram Krishnamurthy
    Abstract: Are deep convolutional neural networks (CNNs) for image classification consistent with utility maximization behavior under information acquisition costs? This paper demonstrates the remarkable result that a deep CNN behaves equivalently (in terms of necessary and sufficient conditions) to a rationally inattentive utility maximizer, a model extensively used in behavioral economics to explain human decision making. This implies that a deep CNN has a parsimonious representation in terms of simple, intuitive, human-like decision parameters, namely, a utility function and an information acquisition cost. Also, the reconstructed utility function that rationalizes the decisions of the deep CNN yields a useful preference order among the image classes (hypotheses).
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.04594&r=all
  24. By: Nieddu, Marcello; Bertani, Filippo; Ponta, Linda
    Abstract: Digital transformation and sustainability transition are complex phenomena characterized by fundamental uncertainty. The potential consequences deriving from these processes are the subject of open debates among economists and technologists. In this respect, adopting a modelling and simulation approach represents one of the best solutions for forecasting the potential effects linked to these complex phenomena, and agent-based modelling is an appropriate paradigm for addressing complexity. This research aims at showing the potential of the large-scale macroeconomic agent-based model Eurace for investigating challenges like sustainability transition and digital transformation. The paper discusses and compares results of previous works where the Eurace model was used to study the digital transformation, and it presents new results concerning the sustainability transition, for which a climate agent is introduced to account for the climate-economy interaction. As regards the digital transformation, the Eurace model is able to capture interesting business dynamics characterizing the so-called increasing-returns world and, in the case of high rates of digital technological progress, it shows significant technological unemployment. As regards the sustainability transition, it displays a rebound effect on energy savings that compromises efforts to reduce greenhouse gas emissions via electricity-efficiency improvements. Furthermore, it shows that a carbon tax may not be sufficient to decouple the economy from carbon consumption, and that a feed-in tariff policy fostering the growth of renewable energy production may be more effective.
    Keywords: Sustainability; Climate change mitigation policies; Digital Transformation; Technological unemployment; Agent-Based Modelling
    JEL: C63 O33 Q43 Q54
    Date: 2021–04–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:106943&r=all
  25. By: Samuel Cohen; Tanut Treetanthiploet
    Abstract: The Asymptotic Randomised Control (ARC) algorithm provides a rigorous approximation to the optimal strategy for a wide class of Bayesian bandits, while retaining reasonable computational complexity. In particular, it allows a decision maker to observe signals in addition to their rewards, to incorporate correlations between the outcomes of different choices, and to have nontrivial dynamics for their estimates. The algorithm is guaranteed to asymptotically optimise the expected discounted payoff, with error depending on the initial uncertainty of the bandit. In this paper, we consider a batched bandit problem where observations arrive from a generalised linear model; we extend the ARC algorithm to this setting. We apply this to a classic dynamic pricing problem based on a Bayesian hierarchical model and demonstrate that the ARC algorithm outperforms alternative approaches.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.04263&r=all
  26. By: Falco J. Bargagli Stoffi; Kenneth De Beckker; Joana E. Maldonado; Kristof De Witte
    Abstract: Despite their popularity, machine learning predictions are sensitive to potential unobserved predictors. This paper proposes a general algorithm that assesses how the omission of an unobserved variable with high explanatory power could affect the predictions of the model. Moreover, the algorithm extends the usage of machine learning from pointwise predictions to inference and sensitivity analysis. In the application, we show how the framework can be applied to data with inherent uncertainty, such as students' scores in a standardized assessment of financial literacy. First, using Bayesian Additive Regression Trees (BART), we predict students' financial literacy scores (FLS) for a subgroup of students with missing FLS. Then, we assess the sensitivity of predictions by comparing the predictions and performance of models with and without a highly explanatory synthetic predictor. We find no significant difference in the predictions and performance of the augmented (i.e., the model with the synthetic predictor) and original models. This evidence sheds light on the stability of the predictive model used in the application. The proposed methodology can be used, above and beyond our motivating empirical example, in a wide range of machine learning applications in the social and health sciences.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.04382&r=all
  27. By: Livia Paranhos
    Abstract: This paper applies neural network models to forecast inflation. A major contribution of the paper is the use of a particular recurrent neural network, the long short-term memory model, or LSTM, that summarizes macroeconomic information into common components. Results from an exercise with US data indicate that the estimated neural nets usually present better forecasting performance than standard benchmarks, especially at long horizons. The LSTM in particular is found to outperform the traditional feed-forward network at long horizons, suggesting an advantage of the recurrent model in capturing the long-term trend of inflation. This finding can be rationalized by the so-called long memory of the LSTM, which incorporates relatively old information in the forecast as long as accuracy is improved, while economizing on the number of estimated parameters. Interestingly, the neural nets containing macroeconomic information capture well the features of inflation during and after the Great Recession, possibly indicating a role for nonlinearities and macro information in this episode. The estimated common components used in the forecast seem able to capture business cycle dynamics, as well as information on prices.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.03757&r=all
  28. By: Aymeric Vie
    Abstract: In Keynesian beauty contests, notably modeled by p-guessing games, players try to guess the average of guesses multiplied by p. Convergence of play to the Nash equilibrium has often been justified by agents' learning. However, questions remain about the origin of reasoning types and equilibrium behavior when learning takes place in unstable environments. When successive values of p lie above and below 1, boundedly rational agents may learn about their environment through simplified representations of the game, reasoning with analogies and constructing expectations about the behavior of other players. We introduce an evolutionary process of learning to investigate the dynamics of learning and the resulting optimal strategies in unstable p-guessing game environments with analogy partitions. As a validation of the approach, we first show that our genetic algorithm behaves consistently with previous results in persistent environments, converging to the Nash equilibrium. We then characterize strategic behavior in mixed regimes with unstable values of p. Varying the number of iterations given to the genetic algorithm to learn about the game replicates the behavior of agents with different levels of reasoning from the level-k approach. This evolutionary process hence proposes a learning foundation for endogenizing the existence of, and transitions between, levels of reasoning in cognitive hierarchy models.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.14379&r=all
  29. By: Zhong, Weifeng; Chan, Julian (Mercury Publication)
    Abstract: Abstract not available.
    Date: 2020–02–10
    URL: http://d.repec.org/n?u=RePEc:ajw:wpaper:10464&r=all
  30. By: Aymeric Vie
    Abstract: Blotto games are a popular model of multi-dimensional strategic resource allocation, in which two players allocate resources across battlefields in an auction setting. While competition with equal budgets is well understood, little is known about strategic behavior under asymmetry of resources. We introduce a genetic algorithm, a search heuristic inspired by biological evolution and interpreted as social learning, to solve this problem: the most performant strategies are combined to create new, more performant strategies, while mutations allow the algorithm to efficiently scan the space of possible strategies and consider a wide diversity of deviations. We show that our genetic algorithm converges to the analytical Nash equilibrium of the symmetric Blotto game, and we present the solution concept it provides for asymmetric Blotto games. It notably sees the emergence of "guerilla warfare" strategies, consistent with empirical and experimental findings: the player with fewer resources learns to concentrate its resources to compensate for the asymmetry of competition. When players value battlefields heterogeneously, counter-strategies and bidding focus are obtained in equilibrium. These features are consistent with empirical and experimental findings, and provide a learning foundation for their existence.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.14372&r=all
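    A bare-bones version of such a genetic algorithm -- selection of the fittest allocations, uniform crossover, Gaussian mutation -- is sketched below for an asymmetric Blotto game with heterogeneous valuations. For brevity the stronger player is held fixed at a uniform allocation, whereas the paper evolves play on both sides; budgets, valuations, and GA parameters are illustrative assumptions.

```python
# GA for the weaker Blotto player (budget 60 vs 100 across 5 battlefields):
# fitness is the total valuation of battlefields won against a fixed opponent.
import numpy as np

rng = np.random.default_rng(0)
n_fields, budget = 5, 60.0
opponent = np.full(n_fields, 20.0)              # stronger player: uniform 100
values = np.array([1.0, 1.0, 1.0, 2.0, 3.0])    # heterogeneous valuations

def normalise(pop):
    return budget * pop / pop.sum(axis=1, keepdims=True)

def fitness(pop):
    return ((pop > opponent) * values).sum(axis=1)

pop = normalise(rng.uniform(size=(200, n_fields)))
for gen in range(300):
    parents = pop[np.argsort(fitness(pop))[-100:]]          # keep fitter half
    i, j = rng.integers(0, 100, 200), rng.integers(0, 100, 200)
    mask = rng.random((200, n_fields)) < 0.5                # uniform crossover
    children = np.where(mask, parents[i], parents[j])
    children += rng.normal(0, 1.0, children.shape).clip(-5, 5)  # mutation
    pop = normalise(np.maximum(children, 1e-6))

# Expect concentration on the high-value fields ("guerilla warfare").
print("best allocation:", np.round(pop[np.argmax(fitness(pop))], 1))
```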
  31. By: Ali Hirsa; Joerg Osterrieder; Branka Hadji Misheva; Wenxin Cao; Yiwen Fu; Hanze Sun; Kin Wai Wong
    Abstract: The CBOE Volatility Index, known by its ticker symbol VIX, is a popular measure of the market's expected volatility of the S&P 500 Index, calculated and published by the Chicago Board Options Exchange (CBOE). It is also often referred to as the fear index or the fear gauge. The current VIX index value quotes the expected annualized change in the S&P 500 index over the following 30 days, based on options-based theory and current options-market data. Despite its theoretical foundation in option price theory, CBOE's Volatility Index is prone to inadvertent and deliberate errors because it is a weighted average of out-of-the-money calls and puts which could be illiquid, and many claims of market manipulation have been brought up against the VIX in recent years. This paper discusses several approaches to replicating the VIX index as well as VIX futures, using a subset of relevant options as well as neural networks that are trained to automatically learn the underlying formula. Using subset selection approaches on top of the original CBOE methodology, as well as machine learning and neural network models including random forests, support vector machines, feed-forward neural networks, and long short-term memory (LSTM) models, we show that a small number of options is sufficient to replicate the VIX index. Once we are able to replicate the VIX using a small number of S&P options, we will be able to exploit potential arbitrage opportunities between the VIX index and its underlying derivatives. The results are intended to help investors better understand the options market and, more importantly, to give guidance to the US regulators and the CBOE, who have been investigating those manipulation claims for several years.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.02119&r=all
  32. By: Boller, Daniel; Lechner, Michael; Okasa, Gabriel
    Abstract: Online dating has emerged as a key platform for human mating. Previous research has focused on socio-demographic characteristics to explain human mating in online dating environments, neglecting the commonly recognized relevance of sport. This research investigates the effect of sport activity on human mating by exploiting a unique data set from an online dating platform. We leverage recent advances in the causal machine learning literature to estimate the causal effect of sport frequency on contact chances. We find that for male users, doing sport on a weekly basis increases the probability of receiving a first message from a woman by 50%, relative to not doing sport at all. For female users, we do not find evidence of such an effect. In addition, for male users the effect increases with higher income.
    Keywords: Online dating, sports economics, big data, causal machine learning, effect heterogeneity, Modified Causal Forest
    JEL: J12 Z29 C21 C45
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2021:04&r=all
  33. By: Mathias Reynaert (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); James M. Sallee (Unknown)
    Abstract: Firms sometimes comply with externality-correcting policies by gaming the measure that determines policy. This harms buyers by eroding information, but it benefits them when cost savings are passed through into prices. We develop a model that highlights this tension and use it to analyze gaming of automobile carbon emission ratings in the EU. We document startling increases in gaming using novel data. We then analyze the effects of gaming in calibrated simulations. Over a wide range of parameters, we find that pass-through substantially outweighs information distortions; on net, buyers benefit from gaming, even when they are fooled by it.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03167777&r=all
  34. By: Zbozinek, Tomislav Damir (California Institute of Technology); Charpentier, Caroline Juliette; Qi, Song; Mobbs, Dean
    Abstract: Most of life’s decisions involve risk and uncertainty regarding whether reward or loss will follow. A major approach to understanding decision-making under these circumstances comes from economics research. While many economic decision-making experiments have focused on gains/losses and risk (<100% probability of a given outcome), relatively few have studied ambiguity (i.e., uncertainty about the degree of risk or magnitude of gains/losses). Within ambiguity, most studies have focused on ambiguous risk (uncertainty regarding likelihood of outcomes), but few studies have investigated ambiguous outcome magnitude (i.e., uncertainty regarding how small/large the gain/loss will be). In the present report, we investigated the effects of ambiguous outcome magnitude, risk, and gains/losses in an economic decision-making task with low stakes (Study 1; $3.60-$5.70; N = 367) and high stakes (Study 2; $6-$48; N = 210) using the same participants in Study 2 as in Study 1. We conducted computational modeling to determine individuals’ preferences/aversions for ambiguous outcome magnitudes, risk, and gains/losses. Our results show that increasing stakes increases ambiguous gain aversion, unambiguous loss aversion, and unambiguous risk aversion, but increases ambiguous loss preference. These results suggest that as stakes increase, people tend to avoid uncertainty and loss in most domains but prefer ambiguous loss.
    Date: 2021–04–02
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:5q4g7&r=all
  35. By: Andreas Hornstein
    Abstract: I incorporate quarantine, contact tracing, and random testing into the basic SEIR model of infectious disease diffusion. A version of the model calibrated to known characteristics of the spread of COVID-19 is used to estimate the transmission rate of COVID-19 in the United States in 2020. The transmission rate is then decomposed into a part that reflects observable changes in employment and social contacts and a residual component that reflects disease properties and all other factors that affect the spread of the disease. I then construct counterfactuals for an alternative employment path that avoids the sharp employment decline in the second quarter of 2020 but also results in higher cumulative deaths due to a higher contact rate. In the simulations, a modest permanent increase in quarantine effectiveness counteracts the increase in deaths, and the introduction of contact tracing and random testing further reduces deaths, although at a diminishing rate. Using a conservative assumption on the statistical value of life, the value of improved health outcomes from the alternative policies far outweighs the economic gains in terms of increased output and the potential fiscal costs of these policies.
    Keywords: Quarantine; Testing
    Date: 2021–03–31
    URL: http://d.repec.org/n?u=RePEc:fip:fedrwp:90651&r=all
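    The augmentation is easy to see in code: below is a discrete-time SEIR loop with a single added quarantine flow that removes infectious individuals from transmission. The paper's full model also includes contact tracing and random testing and is calibrated to US COVID-19 data; the parameter values here are illustrative assumptions.

```python
# SEIR with a quarantine compartment Q: quarantined individuals do not
# transmit. Shares of the population; flows computed from the current state.
beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 10   # transmission, incubation, recovery
q = 0.05                                     # daily rate of quarantining the infectious
S, E, I, Q, R = 0.999, 0.0, 0.001, 0.0, 0.0

for day in range(365):
    inf_flow = beta * S * I                  # new exposures (only I transmits)
    inc_flow = sigma * E                     # E -> I
    quar_flow = q * I                        # I -> Q
    rec_I, rec_Q = gamma * I, gamma * Q      # recoveries from I and Q
    S, E = S - inf_flow, E + inf_flow - inc_flow
    I = I + inc_flow - quar_flow - rec_I
    Q = Q + quar_flow - rec_Q
    R = R + rec_I + rec_Q

print(f"share ever infected after one year: {1 - S:.3f}")
```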
  36. By: Aguiar, Angel; Erwin Corong; Dominique van der Mensbrugghe
    Abstract: For the majority of studies using global general equilibrium models, the sectoral detail provided by the GTAP Data Base would be sufficient. However, analyses aimed at supporting trade negotiations or detailed trade policy regulations often require modeling at the tariff line or Harmonized System (HS) level. We bridge this gap by providing an automated data and model workflow that allows GTAP-based analyses at the HS level. In this paper, we illustrate and explain the use of this workflow by first carrying out a data procedure where we re-define GTAP sectors related to the auto industry—though the workflow is flexible enough to re-define or disaggregate any GTAP sector into its HS associated components. Then, we use an extended standard GTAP version 7 (GTAP-HS) model to carry out explicit modeling of the additional HS-level detail. Both the data and model workflow that accompany this paper are available as supplementary materials. This new framework facilitates simulations of trade policy at the HS level, thereby allowing researchers to capture more nuances on the trade side. This is especially important when tariffs are highly differentiated across the HS components, which is the case for the automotive sector—tariffs may be relatively low on auto parts (intermediate goods), but higher on assembled vehicles (final goods).
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:gta:workpp:6199&r=all
  37. By: Qixuan Luo; Yu Shi; Handong Li
    Abstract: This paper presents an optimal strategy for portfolio liquidation under discrete time conditions. We assume that N risky assets held will be liquidated according to the same time interval and order quantity, and the basic price processes of assets are generated by an N-dimensional independent standard Brownian motion. The permanent impact generated by an asset in the portfolio during the liquidation will affect all assets, and the temporary impact generated by one asset will only affect itself. On this basis, we establish a liquidation cost model based on the VaR measurement and obtain an optimal liquidation time under discrete-time conditions. The optimal solution shows that the liquidation time is only related to the temporary impact rather than the permanent impact. In the simulation analysis, we give the relationship between volatility parameters, temporary price impact and the optimal liquidation strategy.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.15400&r=all
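    The trade-off driving the result can be sketched in a stylised single-asset version: the temporary impact cost falls with the number of liquidation slices, while a VaR-type charge on the price risk of the declining position grows with the horizon, producing an interior optimum. The functional forms and parameters below are illustrative assumptions, not the paper's model.

```python
# Cost of selling X shares in n equal slices at a fixed trading interval:
# total temporary impact ~ eta * X^2 / n, plus a VaR-style charge on the
# accumulated price risk of the remaining position.
import numpy as np

X = 1_000_000.0        # shares to liquidate
dt = 1.0 / 250         # fixed interval between trades (one day)
sigma = 0.30           # dollar volatility per share, per sqrt(year)
eta = 2.5e-7           # temporary impact coefficient (dollars per share^2)
z = 1.645              # 95% one-sided quantile for the VaR term

def cost(n):
    impact = eta * X**2 / n                        # sum of eta * (X/n)^2 over n slices
    remaining = X * (1 - np.arange(1, n + 1) / n)  # position held after each slice
    var = sigma**2 * dt * np.sum(remaining**2)     # accumulated price-risk variance
    return impact + z * np.sqrt(var)

ns = np.arange(1, 200)
best = ns[np.argmin([cost(n) for n in ns])]
print(f"cost-minimising number of liquidation periods: {best}")
```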
  38. By: Zhong, Weifeng; Chan, Julian; Ho, Kwan-Yuet; Lee, Kit (Mercury Publication)
    Abstract: Abstract not available.
    Date: 2020–12–10
    URL: http://d.repec.org/n?u=RePEc:ajw:wpaper:10955&r=all
  39. By: Jaydip Sen; Sidra Mehtab
    Abstract: Designing robust frameworks for precise prediction of future stock prices has always been considered a very challenging research problem. Advocates of the classical efficient market hypothesis affirm that it is impossible to accurately predict future prices in an efficiently operating market due to the stochastic nature of the stock price variables. However, numerous propositions exist in the literature, with varying degrees of sophistication and complexity, that illustrate how algorithms and models can be designed for making efficient, accurate, and robust predictions of stock prices. We present a gamut of ten deep learning regression models for precise and robust prediction of the future prices of the stock of a critical company in the auto sector of India. Using very granular stock prices collected at 5-minute intervals, we train the models on records from 31st Dec, 2012 to 27th Dec, 2013. Testing of the models is done using records from 30th Dec, 2013 to 9th Jan, 2015. We explain the design principles of the models and analyze the results of their performance in terms of forecasting accuracy and speed of execution.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.15096&r=all
  40. By: Nicolas Curin; Michael Kettler; Xi Kleisinger-Yu; Vlatka Komaric; Thomas Krabichler; Josef Teichmann; Hanna Wutte
    Abstract: To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon. In this article, we use techniques inspired by reinforcement learning to optimize the operation plans of underground natural gas storage facilities. We provide a theoretical framework and numerically assess the performance of the proposed method against a state-of-the-art least-squares Monte-Carlo approach. Due to the inherent intricacy of the high-dimensional forward market, as well as the numerous constraints and frictions, the optimization exercise can hardly be tackled by traditional techniques.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01980&r=all
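    A hedged sketch of the least-squares Monte-Carlo benchmark the abstract mentions, applied to a toy storage facility: simulate mean-reverting prices, then value inject/hold/withdraw decisions by backward induction, regressing continuation values on a polynomial in the price. All parameters are invented, and the real problem's high-dimensional forward curve, constraints and frictions are omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      n_paths, T = 5000, 30
      kappa, mu, sig = 0.1, 20.0, 1.5            # mean-reverting spot price
      P = np.empty((n_paths, T + 1))
      P[:, 0] = mu
      for t in range(T):
          P[:, t + 1] = P[:, t] + kappa * (mu - P[:, t]) + sig * rng.standard_normal(n_paths)

      n_lvl = 3                                  # inventory levels: 0, 1, 2 units
      V = np.zeros((n_paths, n_lvl))             # terminal value: leftover gas worthless
      for t in range(T - 1, -1, -1):
          basis = np.vander(P[:, t], 3)          # regression basis [p^2, p, 1]
          cont = np.stack([basis @ np.linalg.lstsq(basis, V[:, i], rcond=None)[0]
                           for i in range(n_lvl)], axis=1)   # fitted continuation values
          newV = np.empty_like(V)
          for i in range(n_lvl):
              fitted, realized = [cont[:, i]], [V[:, i]]               # hold
              if i + 1 < n_lvl:                                        # inject: buy 1 unit
                  fitted.append(-P[:, t] + cont[:, i + 1])
                  realized.append(-P[:, t] + V[:, i + 1])
              if i > 0:                                                # withdraw: sell 1 unit
                  fitted.append(P[:, t] + cont[:, i - 1])
                  realized.append(P[:, t] + V[:, i - 1])
              act = np.argmax(np.stack(fitted, axis=1), axis=1)        # decide on fitted values
              newV[:, i] = np.stack(realized, axis=1)[np.arange(n_paths), act]
          V = newV
      print("estimated value of an initially empty facility:", round(V[:, 0].mean(), 2))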
  41. By: Tedds, Lindsay M.; Crisan, I. Daria
    Abstract: This paper delves into the existing academic literature that models, in detail, specific basic income proposals for Canada, in order to understand the main proposals already in the literature. This information will help inform the work of B.C.’s Expert Panel on Basic Income in two ways. First, it will inform the panel as to what program designs and choice elements should be considered specifically for B.C. Second, it will highlight for the panel the basic income implementation challenges raised by choices among basic income design elements that are not addressed by the existing literature and would need to be solved in designing and implementing a basic income. In many cases, addressing these challenges may require a basic income policy proposal to be redesigned along the way. This paper does not provide a technical critique of this literature, which is taken up by other work.
    Keywords: Basic Income; Simulations; Design Elements; Implementation; Policy Design
    JEL: I38 I39
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105915&r=all
  42. By: Nosier, Shereen; Beram, Reham; Mahrous, Mohamed
    Abstract: Recognizing and understanding the roots of poverty in Egypt, its elements and determinants, is vital to formulating policy recommendations that help eradicate poverty and improve welfare. As a developing country, Egypt has suffered from rising poverty rates since the year 2000. In this study, the three most recent Egyptian Household Income, Expenditure and Consumption Surveys, conducted by the Central Agency for Public Mobilization and Statistics for the years 2011, 2013 and 2015, are utilized to analyze and model the determinants of poverty in Egypt. A comprehensive poverty profile is constructed for the three years, along with a comparison of the changes that occurred over time at the national, rural and urban levels. The most important factors affecting poverty are selected as determinants, such as demographic characteristics, employment status and educational attainment. Two econometric techniques, fixed effects regression and logistic regression, are then employed to model the factors affecting households’ consumption as well as the probability of falling into poverty in Egypt. The results of the poverty profile and of both models show that the main variables that help reduce poverty are a small family size, a high number of earners and better educational attainment. On the other hand, the factors that worsen the poverty status of households are working in the agriculture and construction sectors, depending on a pension as the main source of income, and having a high dependency ratio. Furthermore, poverty simulation analyses are conducted to assess the effect of changes in the levels of the determinants of poverty on the probability of being poor, in order to show the possible consequences of potential poverty-reduction policies and plans.
    Date: 2021–04–09
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:d8spt&r=all
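    A hedged sketch of the second of the two techniques named above: a logistic regression of poverty status on household characteristics, here on synthetic data whose variable names mirror the abstract's determinants rather than the actual survey.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 5000
      family_size = rng.integers(1, 10, n)
      n_earners = rng.integers(0, 4, n)
      educ_years = rng.integers(0, 17, n)
      # Assumed data-generating process: larger families are poorer; more
      # earners and more education reduce poverty risk.
      logit = -1.0 + 0.35 * family_size - 0.6 * n_earners - 0.12 * educ_years
      poor = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      X = sm.add_constant(np.column_stack([family_size, n_earners, educ_years]))
      res = sm.Logit(poor, X).fit(disp=False)
      # Marginal effects answer the policy question in the abstract: how a unit
      # change in a determinant shifts the probability of being poor.
      print(res.get_margeff().summary())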
  43. By: Alina Malkova; Klara Sabirianova Peter; Jan Svejnar
    Abstract: The paper investigates the effects of credit market development on labor mobility between the informal and formal labor sectors. In the case of Russia, due to the absence of a credit score system, a formal lender may set a credit limit based on the verified amount of income. To get a loan, an informal worker must first formalize his or her income (switch to a formal job) and then apply for a loan. To capture this mechanism, we use the RLMS data and estimate a dynamic multinomial logit model of employment. The empirical results show that a relaxation of credit constraints increases the probability of transition from an informal to a formal job: a one-standard-deviation improvement in credit market access (CMA) increases the chances of informal sector workers formalizing by 5.4 percentage points. These results are robust across different specifications of the model. Policy simulations show strong support for a reduction in informal employment in response to better CMA in credit-constrained communities.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.05803&r=all
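    A hedged sketch of the estimation idea, in a static form (the paper's model is dynamic and would condition on the lagged employment state): a multinomial logit of labor-market state on a credit market access index, on synthetic data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 4000
      cma = rng.standard_normal(n)   # standardized credit market access index
      # Latent utilities for states 0 = informal, 1 = formal, 2 = not employed,
      # with Gumbel shocks so that choices follow a multinomial logit.
      u = np.column_stack([rng.gumbel(size=n),
                           0.8 * cma + rng.gumbel(size=n),
                           -0.2 * cma + rng.gumbel(size=n)])
      state = u.argmax(axis=1)

      res = sm.MNLogit(state, sm.add_constant(cma)).fit(disp=False)
      print(res.summary())   # expect a positive CMA coefficient for "formal"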
  44. By: Chepeliev, Maksym; Israel Osorio Rodarte; Dominique van der Mensbrugghe
    Abstract: While it brings multiple benefits for the environment, achieving a stringent global greenhouse gas emissions reduction target, such as the one outlined in the Paris Climate Agreement, is associated with significant implementation costs and could affect different dimensions of human well-being, including welfare, poverty and distributional aspects. In this paper, we analyze the poverty and distributional impacts of different carbon pricing mechanisms consistent with reaching the Paris Agreement targets. We link the global recursive dynamic computable general equilibrium model ENVISAGE with the GIDD microsimulation model and explore three levels of mitigation effort and five carbon pricing options (trade coalitions). Results suggest that while poverty incidence is higher in all scenarios, mainly driven by lower economic growth, Nationally Determined Contribution (NDC) policies result in a progressive income distribution at the global level. This progressivity is driven not only by lower relative prices of food versus non-food commodities, but also by a general decline in skill wage premia.
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:gta:workpp:6194&r=all
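    A back-of-the-envelope Python illustration of the distributional channel the abstract describes: first-order welfare effects of carbon-pricing-induced price changes follow from budget shares, so who loses more depends on how those shares vary across deciles. Shares and price changes below are invented, not model outputs.

      import numpy as np

      deciles = np.arange(1, 11)
      share_food = np.linspace(0.45, 0.15, 10)     # poorer deciles spend more on food
      share_energy = np.linspace(0.06, 0.10, 10)   # richer deciles a bit more on energy
      dp_food, dp_energy = 0.01, 0.08              # assumed relative price changes

      # First-order welfare loss as a share of total expenditure, by decile.
      loss = share_food * dp_food + share_energy * dp_energy
      for d, l in zip(deciles, loss):
          print(f"decile {d:2d}: welfare loss {100 * l:.2f}% of expenditure")
      # If the loss share rises with the decile, the policy is progressive; the
      # abstract's result works through cheaper relative food prices and a
      # compression of skill wage premia.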
  45. By: Federico A. Bugni; Mengsi Gao
    Abstract: This paper studies inference in a randomized controlled trial (RCT) with covariate-adaptive randomization (CAR) and imperfect compliance of a binary treatment. In this context, we study inference on the local average treatment effect (LATE). As in Bugni et al. (2018, 2019), CAR refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. In contrast to these papers, however, we allow participants of the RCT to decide endogenously whether to comply with the assigned treatment status. We study the properties of an estimator of the LATE derived from a "fully saturated" IV linear regression, i.e., a linear regression of the outcome on all indicators for all strata and their interaction with the treatment decision, with the latter instrumented with the treatment assignment. We show that the proposed LATE estimator is asymptotically normal, and we characterize its asymptotic variance in terms of primitives of the problem. We provide consistent estimators of the standard errors and asymptotically exact hypothesis tests. In the special case when the target proportion of units assigned to each treatment does not vary across strata, we can also consider two other estimators of the LATE, including one based on the "strata fixed effects" IV linear regression, i.e., a linear regression of the outcome on indicators for all strata and the treatment decision, with the latter instrumented with the treatment assignment. Our characterization of the asymptotic variance of the LATE estimators allows us to understand the influence of the parameters of the RCT. We use this to propose strategies to minimize the asymptotic variance in a hypothetical RCT based on data from a pilot study. We illustrate the practical relevance of these results using a simulation study and an empirical application based on Dupas et al. (2018).
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.03937&r=all
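    A hedged numerical sketch of the "fully saturated" idea: within each stratum the LATE is a Wald ratio (the intention-to-treat effect on the outcome over the effect on take-up), and the aggregate estimand combines these stratum-level effects. The simulation below uses iid assignment rather than a CAR scheme, and one-sided noncompliance, purely for brevity; it is not the paper's exact estimator.

      import numpy as np

      rng = np.random.default_rng(3)
      n, late_true = 20000, 2.0
      stratum = rng.integers(0, 4, n)      # strata from baseline covariates
      Z = rng.integers(0, 2, n)            # treatment assignment
      complier = rng.random(n) < 0.6       # imperfect, one-sided compliance
      D = Z * complier                     # endogenous treatment decision (take-up)
      Y = late_true * D + stratum + rng.standard_normal(n)

      num = den = 0.0
      for s in np.unique(stratum):
          m = stratum == s
          itt_y = Y[m][Z[m] == 1].mean() - Y[m][Z[m] == 0].mean()
          itt_d = D[m][Z[m] == 1].mean() - D[m][Z[m] == 0].mean()
          num += m.mean() * itt_y          # strata weighted by their size
          den += m.mean() * itt_d
      print("LATE estimate:", num / den, "| true value:", late_true)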

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.