nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒04‒13
twenty-one papers chosen by



  1. Reinforcement Learning in Economics and Finance By Arthur Charpentier; Romuald Elie; Carl Remlinger
  2. Exact Algorithms for the Multi-Compartment Vehicle Routing Problem with Flexible Compartment Sizes By Katrin Heßler
  3. Maintaining plausible calorie intakes, crop yields and crop land expansion in long-run simulations with Computable General Equilibrium Models By Britz, Wolfgang
  4. Using a data mining CRISP-DM methodology for rate of penetration (ROP) prediction in oil well drilling By Djamil Rezki; Leila Mouss; Abdelkader Baaziz
  5. ESG investments: Filtering versus machine learning approaches By Carmine de Franco; Christophe Geissler; Vincent Margot; Bruno Monnier
  6. Fiscal Reform -- Aid or Hindrance: A Computable General Equilibrium (CGE) Analysis for Saudi Arabia By Elizabeth L. Roos; Philip D. Adams
  7. An Exact Method for Assortment Optimization under the Nested Logit Model By Laurent Alfandari; Alborz Hassanzadeh; Ivana Ljubic
  8. Zero-Intelligence vs. Human Agents: An Experimental Analysis of the Efficiency of Double Auctions and Over-the-Counter Markets of Varying Sizes. By Giuseppe Attanasi; Samuele Centorrino; Elena Manzoni
  9. Firm-bank credit networks, business cycle and macroprudential policy By Riccetti, Luca; Russo, Alberto; Gallegati, Mauro
  10. Estimating the Green Potential of Occupations: A New Approach Applied to the U.S. Labor Market By Rutzer, Christian; Niggli, Matthias; Weder, Rolf
  11. Double Debiased Machine Learning Nonparametric Inference with Continuous Treatments By Kyle Colangelo; Ying-Ying Lee
  12. Double Machine Learning Based Program Evaluation under Unconfoundedness By Knaus, Michael C.
  13. The Network Dynamics of Social and Technological Conventions By Joshua Becker
  14. Contracting, pricing, and data collection under the AI flywheel effect By Francis de Véricourt; Huseyin Gurkan
  15. QuantNet: Transferring Learning Across Systematic Trading Strategies By Adriano Koshiyama; Sebastian Flennerhag; Stefano B. Blumberg; Nick Firoozye; Philip Treleaven
  16. Towards Explainability of Machine Learning Models in Insurance Pricing By Kevin Kuo; Daniel Lupton
  17. Data Science in Economics By Saeed Nosratabadi; Amir Mosavi; Puhong Duan; Pedram Ghamisi
  18. Revenu de base – Simulations en vue d’une expérimentation By Mahdi Ben Jelloul; Antoine Bozio; Sophie Cottet; Brice Fabre; Claire Leroy
  19. By Force of Habit: Self-Trapping in a Dynamical Utility Landscape By Jos\'e Moran; Antoine Fosset; Davide Luzzati; Jean-Philippe Bouchaud; Michael Benzaquen
  20. Le modèle de microsimulation TAXIPP - Version 1.1 By Mahdi Ben Jelloul; Antoine Bozio; Thomas Douenne; Brice Fabre; Claire Leroy
  21. Power Assisted Trend Following By Andreas A. Aigner; Walter Schrabmair

  1. By: Arthur Charpentier; Romuald Elie; Carl Remlinger
    Abstract: Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process, through repeated experience. In a given environment, the agent's policy provides it with running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when an agent picks an action, it cannot infer ex post the rewards that other action choices would have induced. In reinforcement learning, however, actions have consequences: they influence not only rewards but also future states of the world. The goal of reinforcement learning is to find an optimal policy -- a mapping from the states of the world to the set of actions that maximizes cumulative reward -- which makes it a long-term strategy: exploring may be sub-optimal over a short horizon but can lead to optimal long-run behavior. Many problems of optimal control, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we survey the state of the art in reinforcement learning techniques and present applications in economics, game theory, operations research and finance.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.10014&r=all
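    A minimal sketch of tabular Q-learning, the canonical algorithm of the family this survey covers; the toy chain environment and all parameter values are illustrative assumptions, not taken from the paper:

      import numpy as np

      # Tabular Q-learning on a toy 5-state chain (all values are illustrative):
      # action 1 moves right toward a terminal reward, action 0 moves left at a cost.
      n_states, n_actions = 5, 2
      alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
      Q = np.zeros((n_states, n_actions))
      rng = np.random.default_rng(0)

      def step(s, a):
          if a == 1:
              s2 = s + 1
              return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1
          return max(s - 1, 0), -0.01, False

      for episode in range(500):
          s, done = 0, False
          while not done:
              # epsilon-greedy: mostly exploit, sometimes explore
              a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
              s2, r, done = step(s, a)
              # move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
              Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
              s = s2

      print(Q.argmax(axis=1)[:-1])   # learned policy in non-terminal states: move right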
  2. By: Katrin Heßler (Johannes Gutenberg-University Mainz, Germany)
    Abstract: The multi-compartment vehicle routing problem with flexible compartment sizes is a variant of the classical vehicle routing problem in which customers demand different product types and the vehicle capacity can be separated into different compartments, each dedicated to a specific product type. The size of each compartment is not fixed beforehand, but the number of compartments is limited. We consider two variants for dividing the vehicle capacity: on the one hand, the vehicle capacity can be divided into discrete compartments; on the other hand, compartment sizes can be chosen arbitrarily. The objective is to minimize the total distance of all vehicle routes such that all customer demands are met and vehicle capacities are respected. Modifying a branch-and-cut algorithm from the literature that is based on a three-index formulation for the discrete problem variant, we introduce an exact solution approach tailored to the continuous problem variant. Moreover, we propose two other exact solution approaches, namely a branch-and-cut algorithm based on a two-index formulation and a branch-price-and-cut algorithm based on a route-indexed formulation, that can tackle both packing restrictions with mild adaptations and can be combined into an effective two-stage approach. Extensive computational tests have been conducted to compare the different algorithms. For the continuous variant, we can solve instances with up to 50 customers to optimality, and for the discrete variant, several previously open instances can now be solved to proven optimality. Moreover, we analyse the cost savings of using continuously flexible compartment sizes instead of discretely flexible ones.
    Keywords: routing, branch-price-and-cut, multi-compartment
    Date: 2020–04–03
    URL: http://d.repec.org/n?u=RePEc:jgu:wpaper:2007&r=all
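    A toy illustration of the two capacity-splitting variants compared in the paper: with continuous compartments a route's demands fit (roughly) whenever they sum to at most the vehicle capacity, while discrete compartments must be sized in multiples of a base unit, which can waste capacity. All numbers are made up:

      def feasible_continuous(demands, capacity, max_compartments):
          # sizes can be chosen arbitrarily, so only the totals matter
          return len(demands) <= max_compartments and sum(demands) <= capacity

      def feasible_discrete(demands, capacity, max_compartments, unit):
          # each compartment size must be a multiple of a base unit
          if len(demands) > max_compartments:
              return False
          needed = sum(unit * -(-d // unit) for d in demands)  # ceil(d / unit) * unit
          return needed <= capacity

      demands = [7, 5, 3]                               # one demand per product type
      print(feasible_continuous(demands, 16, 3))        # True: 15 <= 16
      print(feasible_discrete(demands, 16, 3, unit=4))  # False: 8 + 8 + 4 = 20 > 16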
  3. By: Britz, Wolfgang
    Abstract: We demonstrate how a combination of different elements can jointly provide plausible long-term trends for calorie intakes, crop yields and land use in Computable General Equilibrium (CGE) analysis. Specifically, we depict household demand with a MAIDADS demand system estimated on cross-sectional data. In order to control for calorie intake, we first regress calorie intake on per capita income and construct a Leontief inverse to derive implicit calorie intakes from the final consumption of processed food. This allows us to jointly shift the preferences of the MAIDADS system, by updating commitment terms and marginal budget shares, to arrive at plausible per capita calorie intakes during baseline construction. We control yields based on exogenous projections, which we also use to parameterize our land supply functions. The contribution of the different elements is evaluated by comparing key developments in baselines up to 2050 constructed with different model variants.
    Keywords: Agricultural and Food Policy, Food Security and Poverty
    Date: 2020–04–08
    URL: http://d.repec.org/n?u=RePEc:ags:ubfred:302922&r=all
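    A small numeric sketch, with made-up numbers, of two building blocks the abstract mentions: regressing per capita calorie intake on income, and using a Leontief inverse to trace the crop output embodied in final demand for processed food:

      import numpy as np

      # (1) Engel-type projection: regress calorie intake on log per-capita income
      income = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # 000 USD, made up
      calories = np.array([2100.0, 2350.0, 2600.0, 2800.0, 2950.0, 3050.0])
      b, a = np.polyfit(np.log(income), calories, 1)
      print("projected intake at income 60:", round(a + b * np.log(60.0)))

      # (2) Leontief inverse: x = (I - A)^-1 f gives gross output, including
      # the crop output implicitly embodied in final demand for processed food
      A = np.array([[0.1, 0.4],    # crop input per unit of crops / processed food
                    [0.0, 0.1]])   # processed-food input per unit of each
      f = np.array([100.0, 300.0]) # final demand: crops, processed food
      x = np.linalg.solve(np.eye(2) - A, f)
      print("gross crop output implied:", round(x[0], 1))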
  4. By: Djamil Rezki (UB2 - University of Batna 2 Mostefa Ben Boulaïd); Leila Mouss (UB2 - University of Batna 2 Mostefa Ben Boulaïd); Abdelkader Baaziz (AMU - Aix Marseille Université, IMSIC - Institut mediterranéen des sciences de l'information et de la communication - AMU - Aix Marseille Université - UTLN - Université de Toulon)
    Abstract: This work describes the implementation of an oil-drilling data mining project based on the CRISP-DM methodology. Recent real-world data were collected from the historical records of an actual oil drilling process in the Hassi Terfa field, situated in the south of Algeria. The goal was to predict the rate of penetration (ROP) from input parameters commonly recorded during oil well drilling: weight on bit, rotations per minute, mud density, standpipe pressure (SPP) and rock strength (UCS). At the data preparation stage, the data were cleaned and variables were selected and transformed. Next, at the modeling stage, a regression approach was adopted in which three learning methods were compared: Artificial Neural Network, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which achieved a high correlation coefficient. The results of the experiment show that the proposed approach can effectively use the engineering data to predict ROP; the ROP prediction allows the drilling engineer to select the combination of input parameters that yields the best rate of advancement.
    Keywords: Data mining, CRISP-DM, oil well drilling, rate of penetration (ROP), prediction
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02482291&r=all
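    A hedged sketch of the modeling stage using scikit-learn's random forest; the data are synthetic stand-ins for the (non-public) Hassi Terfa drilling logs, and the generating process below is a pure assumption:

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 1000
      X = pd.DataFrame({
          "weight_on_bit": rng.uniform(5, 25, n),    # tonnes
          "rpm": rng.uniform(60, 180, n),            # rotations per minute
          "mud_density": rng.uniform(1.0, 1.6, n),   # g/cm^3
          "spp": rng.uniform(100, 250, n),           # standpipe pressure, bar
          "ucs": rng.uniform(50, 200, n),            # rock strength, MPa
      })
      # hypothetical generating process: ROP rises with WOB and RPM, falls with UCS
      y = 0.8 * X["weight_on_bit"] + 0.1 * X["rpm"] - 0.2 * X["ucs"] + rng.normal(0, 2, n)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
      print("held-out R^2:", round(r2_score(y_test, model.predict(X_test)), 3))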
  5. By: Carmine de Franco (OSSIAM); Christophe Geissler (Advestis); Vincent Margot (Advestis); Bruno Monnier (OSSIAM)
    Abstract: We designed a machine learning algorithm that identifies patterns between ESG profiles and financial performance for companies in a large investment universe. The algorithm consists of regularly updated sets of rules that map regions in the high-dimensional space of ESG features to excess return predictions. The final aggregated predictions are transformed into scores, which allow us to design simple strategies that screen the investment universe for stocks with positive scores. By linking ESG features to financial performance in a non-linear way, the strategy based on our machine learning algorithm turns out to be an efficient stock-picking tool that outperforms classic strategies screening stocks according to their ESG ratings, such as the popular best-in-class approach. Our paper brings new ideas to the growing strand of the financial literature that investigates the links between ESG behavior and the economy. We show that there is indeed some form of alpha in the ESG profile of a company, but that this alpha can be accessed only with powerful, non-linear techniques such as machine learning.
    Keywords: Sustainable Investments, Best-in-class approach, ESG, Machine Learning, Portfolio Construction
    Date: 2018–10–22
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02481891&r=all
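    A stylized sketch of the rule-based idea: learn simple rules that map rectangular regions of ESG-feature space to mean excess returns, aggregate rule predictions into a score, and keep only positive-score stocks. Data, rule shapes and thresholds are all invented for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      n_stocks, n_features = 500, 8
      esg = rng.uniform(0, 1, (n_stocks, n_features))       # ESG feature profiles
      # hypothetical link: one ESG strength rewarded, one weakness penalized
      excess = 0.05 * (esg[:, 0] > 0.7) - 0.03 * (esg[:, 3] < 0.2) \
          + rng.normal(0, 0.02, n_stocks)

      rules = []                                   # (feature, threshold, prediction)
      for j in range(n_features):
          for thr in (0.25, 0.5, 0.75):
              mask = esg[:, j] > thr               # one rectangular region
              if mask.sum() > 30:                  # require enough support
                  rules.append((j, thr, excess[mask].mean()))

      # score each stock by averaging the predictions of the rules covering it
      scores, counts = np.zeros(n_stocks), np.zeros(n_stocks)
      for j, thr, pred in rules:
          hit = esg[:, j] > thr
          scores[hit] += pred
          counts[hit] += 1
      scores /= np.maximum(counts, 1)
      portfolio = np.flatnonzero(scores > 0)       # screen on positive scores
      print("selected", portfolio.size, "of", n_stocks, "stocks")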
  6. By: Elizabeth L. Roos; Philip D. Adams
    Abstract: The oil price fell from around US$110 per barrel in 2014 to less than US$50 per barrel at the start of 2017. This put enormous pressure on government budgets within the Gulf Cooperation Council (GCC) region, especially those of oil-exporting countries, and the focus of GCC economic policies quickly shifted to fiscal reform. In this paper we use a dynamic CGE model to investigate the economic impact of introducing a 5 per cent Value Added Tax (VAT) and a tax on business profit, with specific reference to the Kingdom of Saudi Arabia (KSA). Our study shows that although the new taxes improve government tax revenue, they distort markets, lowering economic efficiency and production. In all simulations, real GDP, real investment and capital stock fall in the long run. This highlights the importance of (1) understanding the potential harm that taxes cause to economic efficiency and production, and (2) ensuring that fiscal reform includes both government expenditure reform and the identification of non-oil revenue sources. This allows for the design of an optimal tax system that meets all future requirements of each of the individual Gulf States.
    Keywords: Computable General Equilibrium (CGE) models, Saudi Arabia, fiscal reform
    JEL: C68 D58 E62 O53
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-301&r=all
  7. By: Laurent Alfandari (ESSEC Business School - Essec Business School); Alborz Hassanzadeh (ESSEC Business School - Essec Business School); Ivana Ljubic (ESSEC Business School - Essec Business School)
    Abstract: We study the problem of finding an optimal assortment of products maximizing the expected revenue, where customer preferences are modeled using a Nested Logit choice model. This problem is known to be polynomially solvable in a specific case and NP-hard otherwise, with only approximation algorithms existing in the literature. For the NP-hard cases, we provide a general exact method that embeds a tailored Branch-and-Bound algorithm into a fractional programming framework. Contrary to the existing literature, in which assumptions are imposed on either the structure of nests or the combination and characteristics of products, no assumptions on the input data are imposed, and hence our approach can solve the most general problem setting. We show that the parameterized subproblem of the fractional programming scheme, a highly non-linear binary optimization problem, is decomposable by nests, which is a main advantage of the approach. To solve the subproblem for each nest, we propose a two-stage approach. In the first stage, we identify the products that are undoubtedly beneficial to offer, or not, which can significantly reduce the problem size. In the second stage, we design a tailored Branch-and-Bound algorithm with problem-specific upper bounds. Numerical results show that the approach is able to solve assortment instances with up to 5,000 products per nest. The most challenging instances for our approach are those in which the dissimilarity parameters of nests can be either less than or greater than one.
    Keywords: nested logit,fractional programming,combinatorial optimization,revenue management,assortment optimization
    Date: 2020–01
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02463159&r=all
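    A minimal sketch of the expected-revenue objective under the nested logit model, following the standard formulation (nest attractiveness raised to the dissimilarity parameter, plus a no-purchase weight); the product data below are made up:

      import numpy as np

      # v[i]: preference weights of products in nest i; r[i]: their revenues;
      # gamma[i]: dissimilarity parameter of nest i; v0: no-purchase weight.
      def expected_revenue(assortment, v, r, gamma, v0=1.0):
          """assortment: one array of offered product indices per nest."""
          weights, revenues = [], []
          for i, S in enumerate(assortment):
              if len(S) == 0:
                  weights.append(0.0); revenues.append(0.0)
                  continue
              V = v[i][S].sum()                      # attractiveness of nest i
              R = (r[i][S] * v[i][S]).sum() / V      # expected revenue within nest i
              weights.append(V ** gamma[i])          # nest-level choice weight
              revenues.append(R)
          W = np.array(weights)
          return (W * np.array(revenues)).sum() / (v0 + W.sum())

      v = [np.array([2.0, 1.5, 1.0]), np.array([3.0, 0.5])]
      r = [np.array([10.0, 12.0, 15.0]), np.array([8.0, 20.0])]
      gamma = [0.7, 1.3]   # the hardest instances mix values below and above one
      print(expected_revenue([np.array([0, 1]), np.array([1])], v, r, gamma))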
  8. By: Giuseppe Attanasi; Samuele Centorrino; Elena Manzoni
    Abstract: We study two well-known electronic markets: an over-the-counter (OTC) market, in which each agent looks for the best counterpart through bilateral negotiations, and a double auction (DA) market, in which traders post their quotes publicly. We focus on the DA-OTC efficiency gap and show how it varies with different market sizes (10, 20, 40, and 80 traders). We compare experimental results from a sample of 6,400 undergraduate students in Economics and Management with zero-intelligence (ZI) agent-based simulations. Simulations with ZI traders show that the traded quantity (relative to the efficient one) increases with market size under both DA and OTC. Experimental results with human traders confirm the same tendency under DA, while the share of periods in which the traded quantity is higher (lower) than the efficient one decreases (increases) with market size under OTC, ultimately leading to a DA-OTC efficiency gap that increases with market size. We rationalize these results by putting forward a novel game-theoretical model of the OTC market as a repeated bargaining procedure under incomplete information on buyers' valuations and sellers' costs, showing how efficiency decreases slightly with size due to two counteracting effects: acceptance rates in earlier periods decrease with size, and earlier offers increase, but not always by enough to compensate for the decrease in acceptance rates.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:nys:sunysb:20-04&r=all
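    A rough sketch of a zero-intelligence (ZI-C) double-auction benchmark in the spirit of Gode and Sunder, with budget-constrained random quotes; value and cost ranges and the number of quote arrivals are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 40                                   # market size, one of those varied in the paper
      values = rng.uniform(50, 150, n)         # buyers' valuations
      costs = rng.uniform(50, 150, n)          # sellers' costs

      # maximum attainable surplus: match highest values with lowest costs
      max_surplus = np.maximum(np.sort(values)[::-1] - np.sort(costs), 0).sum()

      active_b, active_s = set(range(n)), set(range(n))
      traded_surplus = 0.0
      for _ in range(5000):                    # random arrivals of bilateral quotes
          if not active_b or not active_s:
              break
          b = rng.choice(list(active_b)); s = rng.choice(list(active_s))
          bid = rng.uniform(0, values[b])      # ZI-C: never bid above one's valuation
          ask = rng.uniform(costs[s], 200)     # ZI-C: never ask below one's cost
          if bid >= ask:                       # crossing quotes trade, both sides exit
              traded_surplus += values[b] - costs[s]
              active_b.remove(b); active_s.remove(s)

      print("allocative efficiency:", round(traded_surplus / max_surplus, 3))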
  9. By: Riccetti, Luca; Russo, Alberto; Gallegati, Mauro
    Abstract: We present an agent-based model to study firm-bank credit market interactions in different phases of the business cycle. The business cycle is exogenously set and can give rise to various scenarios. Compared to other models in this strand of the literature, we improve the mechanism by which dividends are distributed, including the possibility of stock repurchases by firms. In addition, we locate firms and banks in space, and firms may ask for credit from many banks, resulting in a complex spatial network. The model reproduces a long list of stylized facts and their dynamic evolution, as described by the cross-correlations among model variables. The model allows us to test the effectiveness of rules designed in current financial regulation, such as the Basel III countercyclical capital buffer. We find that the effectiveness of this rule changes in different business cycle environments, which should be considered by policy makers.
    Keywords: Agent-based modeling, credit network, business cycle, financial regulation, macroprudential policy
    JEL: C63 E32 E52 G1
    Date: 2020–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:98928&r=all
  10. By: Rutzer, Christian (University of Basel); Niggli, Matthias (University of Basel); Weder, Rolf (University of Basel)
    Abstract: This paper presents a new approach to estimate the green potential of occupations. Using data from O*NET on the skills that workers possess and the tasks they carry out, we train several machine learning algorithms to predict the green potential of U.S. occupations classified according to the 6-digit Standard Occupational Classification. Our methodology allows existing discrete classifications of occupations to be extended to a continuum of classes. This improves the analysis of heterogeneous occupations in terms of their green potential. Our approach makes two contributions to the literature. First, as it more accurately ranks occupations in terms of their green potential, it leads to a better understanding of the extent to which a given workforce is prepared to cope with a transition to a green economy. Second, it allows for a more accurate analysis of differences between workforces across regions. We use U.S. occupational employment data to highlight both aspects.
    Keywords: green skills, green tasks, green potential, supervised learning, labor market
    JEL: C53 J21 J24 Q52
    Date: 2020–03–01
    URL: http://d.repec.org/n?u=RePEc:bsl:wpaper:2020/03&r=all
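    A minimal sketch of the core idea: train a classifier on occupations with known discrete green/non-green labels, then read the predicted probability of the green class as a continuous green-potential score. The features stand in for O*NET skill and task intensities; all data here are synthetic:

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)
      n_labeled, n_features = 300, 20
      X = rng.normal(size=(n_labeled, n_features))   # stand-ins for O*NET skill/task scores
      # hypothetical labels: the first three features drive 'greenness'
      y = (X[:, :3].sum(axis=1) + rng.normal(size=n_labeled) > 0).astype(int)

      clf = GradientBoostingClassifier(random_state=0).fit(X, y)

      X_all = rng.normal(size=(800, n_features))         # the full occupation universe
      green_potential = clf.predict_proba(X_all)[:, 1]   # a continuum in [0, 1]
      print("mean green potential:", round(green_potential.mean(), 3))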
  11. By: Kyle Colangelo; Ying-Ying Lee
    Abstract: We propose a nonparametric inference method for causal effects of continuous treatment variables, under unconfoundedness and in the presence of high-dimensional or nonparametric nuisance parameters. Our double debiased machine learning (DML) estimators for the average dose-response function (or the average structural function) and the partial effects are asymptotically normal with nonparametric convergence rates. The nuisance estimators for the conditional expectation function and the conditional density can be nonparametric kernel or series estimators or ML methods. Using a kernel-based doubly robust influence function and cross-fitting, we give tractable primitive conditions under which the nuisance estimators do not affect the first-order large sample distribution of the DML estimators. We justify the use of a kernel to localize the continuous treatment at a given value via the Gateaux derivative. We implement various ML methods in Monte Carlo simulations and in an empirical application to a job training program evaluation.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.03036&r=all
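    A simplified sketch of a kernel-localized doubly robust estimator of the average dose-response at one point, with two-fold cross-fitting. The Gaussian-residual model for the conditional treatment density is a simplifying assumption made here for brevity; the paper allows general nonparametric or ML nuisance estimators:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n, h, t0 = 2000, 0.3, 1.0                     # sample size, bandwidth, dose of interest
      X = rng.normal(size=(n, 3))
      T = X[:, 0] + rng.normal(size=n)              # continuous treatment, confounded by X
      Y = 2 * T + X[:, 0] + rng.normal(size=n)      # simulated truth: E[Y(t)] = 2t

      kernel = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
      folds = np.array_split(rng.permutation(n), 2) # two-fold cross-fitting
      scores = np.empty(n)
      for k, test in enumerate(folds):
          train = folds[1 - k]
          mu = RandomForestRegressor(random_state=0).fit(
              np.column_stack([T[train], X[train]]), Y[train])              # E[Y | T, X]
          m = RandomForestRegressor(random_state=0).fit(X[train], T[train]) # E[T | X]
          sigma = np.std(T[train] - m.predict(X[train]))
          # conditional density f(t0 | X) under a Gaussian-residual assumption, floored
          f_hat = np.maximum(kernel((t0 - m.predict(X[test])) / sigma) / sigma, 0.05)
          mu_t0 = mu.predict(np.column_stack([np.full(len(test), t0), X[test]]))
          mu_T = mu.predict(np.column_stack([T[test], X[test]]))
          # doubly robust score, localized at t0 by the kernel
          scores[test] = mu_t0 + kernel((T[test] - t0) / h) / h / f_hat * (Y[test] - mu_T)

      print("estimated dose-response at t=1:", round(scores.mean(), 2))  # truth here: 2.0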
  12. By: Knaus, Michael C. (University of St. Gallen)
    Abstract: This paper consolidates recent methodological developments based on Double Machine Learning (DML) with a focus on program evaluation under unconfoundedness. DML based methods leverage flexible prediction methods to control for confounding in the estimation of (i) standard average effects, (ii) different forms of heterogeneous effects, and (iii) optimal treatment assignment rules. We emphasize that these estimators all build on the same doubly robust score, which allows us to exploit computational synergies. An evaluation of multiple programs of the Swiss Active Labor Market Policy shows how DML based methods enable a comprehensive policy analysis. However, we find evidence that estimates of individualized heterogeneous effects can become unstable.
    Keywords: causal machine learning, conditional average treatment effects, optimal policy learning, individualized treatment rules, multiple treatments
    JEL: C21
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13051&r=all
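    A minimal sketch of the doubly robust (AIPW) score the paper builds on: its mean estimates the ATE, regressing it on covariates recovers heterogeneous effects, and policy rules can be learned from it. Data are simulated and, for brevity, cross-fitting is omitted here although the DML literature requires it:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 4000
      X = rng.normal(size=(n, 3))
      e = 1 / (1 + np.exp(-X[:, 0]))                # true propensity to participate
      D = rng.binomial(1, e)                        # program participation
      Y = D * (1 + X[:, 1]) + X[:, 0] + rng.normal(size=n)   # simulated truth: ATE = 1

      # NB: in practice these nuisances are cross-fitted; skipped here for brevity
      e_hat = RandomForestClassifier(random_state=0).fit(X, D).predict_proba(X)[:, 1]
      e_hat = e_hat.clip(0.05, 0.95)                # trim extreme propensities
      mu1 = RandomForestRegressor(random_state=0).fit(X[D == 1], Y[D == 1]).predict(X)
      mu0 = RandomForestRegressor(random_state=0).fit(X[D == 0], Y[D == 0]).predict(X)

      # the doubly robust (AIPW) score: one building block, many estimators
      gamma = mu1 - mu0 + D * (Y - mu1) / e_hat - (1 - D) * (Y - mu0) / (1 - e_hat)
      print("ATE estimate:", round(gamma.mean(), 2))          # simulated truth: 1.0
      # heterogeneous effects: regress the same score on covariates
      cate_model = RandomForestRegressor(random_state=0).fit(X, gamma)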
  13. By: Joshua Becker
    Abstract: When innovations compete for adoption, chance historical events can allow an inferior strategy to spread at the expense of superior alternatives. However, advantage is not always due to chance, and networks have emerged as an important determinant of organizational behavior. To understand what factors can impact the likelihood that the best alternative will be adopted, this paper asks: how does network structure shape the emergence of social and technological conventions? Prior research has found that highly influential people, or "central" nodes, can be beneficial from the perspective of a single innovation because promotion by central nodes can increase the speed of adoption. In contrast, when considering the competition of multiple strategies, the presence of central nodes may pose a risk, and the resulting "centralized" networks are not guaranteed to favor the optimal strategy. This paper uses agent-based simulation to investigate the effect of network structure on a standard model of convention formation, finding that network centralization increases the speed of convention formation but also decreases the likelihood that the best strategy will become widely adopted. Surprisingly, this finding does not indicate a speed/optimality trade-off: dense networks are both fast and optimal.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.12112&r=all
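    An illustrative agent-based sketch of convention formation: agents on a network repeatedly best-respond in a coordination game in which one strategy is superior. The payoffs, network sizes and revision protocol are assumptions chosen for illustration, not the paper's exact setup:

      import numpy as np

      rng = np.random.default_rng(0)
      n, runs, steps = 50, 100, 1500
      payoff = {"A": 1.0, "B": 1.5}                 # strategy B is the superior convention

      def simulate(adj):
          state = rng.choice(["A", "B"], n)         # chance initial conditions
          for _ in range(steps):
              i = rng.integers(n)                   # a random agent revises its strategy
              nbrs = np.flatnonzero(adj[i])
              share_B = (state[nbrs] == "B").mean()
              # myopic best response against the current neighborhood
              state[i] = "B" if share_B * payoff["B"] >= (1 - share_B) * payoff["A"] else "A"
          return (state == "B").mean() == 1.0       # did the best strategy take over?

      dense = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)   # complete network
      star = np.zeros((n, n), dtype=int)                          # one central hub
      star[0, 1:] = star[1:, 0] = 1

      print("dense net reaches the optimum:", np.mean([simulate(dense) for _ in range(runs)]))
      print("star net reaches the optimum:", np.mean([simulate(star) for _ in range(runs)]))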
  14. By: Francis de Véricourt (ESMT European School of Management and Technology and E.CA Economics); Huseyin Gurkan (ESMT European School of Management and Technology)
    Abstract: This paper explores how firms that lack expertise in machine learning (ML) can leverage the so-called AI Flywheel effect. This effect designates a virtuous cycle by which, as an ML product is adopted and new user data are fed back to the algorithm, the product improves, enabling further adoptions. However, managing this feedback loop is difficult, especially when the algorithm is contracted out. Indeed, the additional data that the AI Flywheel effect generates may change the provider's incentives to improve the algorithm over time. We formalize this problem in a simple two-period moral hazard framework that captures the main dynamics between machine learning, data acquisition, pricing and contracting. We find that the firm's decisions crucially depend on how the amount of data on which the machine is trained interacts with the provider's effort. If this effort has a more (resp. less) significant impact on accuracy for larger volumes of data, the firm underprices (resp. overprices) the product. Further, the firm's starting dataset, as well as the data volume that its product collects per user, significantly affect its pricing and data collection strategies. The firm leverages the virtuous cycle less for larger starting datasets and sometimes more for larger data volumes per user. Interestingly, the presence of incentive issues can induce the firm to leverage the effect less when its product collects more data per user.
    Keywords: Data, machine learning, pricing, incentives and contracting
    Date: 2020–03–03
    URL: http://d.repec.org/n?u=RePEc:esm:wpaper:esmt-20-01&r=all
  15. By: Adriano Koshiyama; Sebastian Flennerhag; Stefano B. Blumberg; Nick Firoozye; Philip Treleaven
    Abstract: In this work we introduce QuantNet: an architecture that is capable of transferring knowledge over systematic trading strategies in several financial markets. By having a system that is able to leverage and share knowledge across markets, our aim is two-fold: to circumvent the so-called Backtest Overfitting problem, and to generate higher risk-adjusted returns and fewer drawdowns. To do that, QuantNet exploits a form of modelling called Transfer Learning, in which two layers are market-specific and another is market-agnostic. This ensures that the transfer occurs across trading strategies, with the market-agnostic layer acting as a vehicle to share knowledge and cross-influence each strategy's parameters, and ultimately the trading signals produced. To evaluate QuantNet, we compared its performance to the option of not performing transfer learning, that is, using market-specific old-fashioned machine learning. In summary, our findings suggest that QuantNet performs better than non-transfer-based trading strategies, improving the Sharpe ratio by 15% and the Calmar ratio by 41% across 3,103 assets in 58 equity markets across the world. Code coming soon.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.03445&r=all
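    A schematic sketch of the layer layout the abstract describes: market-specific input and output layers around one shared, market-agnostic layer. Layer types and sizes are our assumptions; the paper's code had not been released at the time of writing:

      import torch
      import torch.nn as nn

      class QuantNetSketch(nn.Module):
          def __init__(self, markets, n_features=10, hidden=32):
              super().__init__()
              self.encoders = nn.ModuleDict(
                  {m: nn.Linear(n_features, hidden) for m in markets})   # market-specific
              self.shared = nn.Linear(hidden, hidden)                    # market-agnostic
              self.decoders = nn.ModuleDict(
                  {m: nn.Linear(hidden, 1) for m in markets})            # market-specific

          def forward(self, x, market):
              h = torch.relu(self.encoders[market](x))
              h = torch.relu(self.shared(h))        # gradients from every market meet here
              return torch.tanh(self.decoders[market](h))   # trading signal in [-1, 1]

      net = QuantNetSketch(["US", "JP", "UK"])
      signal = net(torch.randn(5, 10), market="JP")  # a batch of 5 feature vectors
      print(signal.shape)                            # torch.Size([5, 1])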
  16. By: Kevin Kuo; Daniel Lupton
    Abstract: Machine learning methods have garnered increasing interest among actuaries in recent years. However, their adoption by practitioners has been limited, partly due to the lack of transparency of these methods, as compared to generalized linear models. In this paper, we discuss the need for model interpretability in property & casualty insurance ratemaking, propose a framework for explaining models, and present a case study to illustrate the framework.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.10674&r=all
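    One common model-agnostic tool of the kind such explainability frameworks draw on: permutation importance for a black-box ratemaking model. The rating factors and data are synthetic, illustrating the genre rather than the paper's specific framework:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(0)
      n = 2000
      X = np.column_stack([
          rng.integers(18, 80, n),       # driver age (hypothetical rating factor)
          rng.uniform(0, 30, n),         # vehicle age
          rng.uniform(0, 50_000, n),     # annual mileage
      ])
      claims = 0.02 * X[:, 2] / 1000 + 5 * (X[:, 0] < 25) + rng.gamma(2.0, 10.0, n)

      model = GradientBoostingRegressor(random_state=0).fit(X, claims)
      # permutation importance: how much does shuffling one factor hurt the model?
      imp = permutation_importance(model, X, claims, n_repeats=10, random_state=0)
      for name, score in zip(["driver_age", "vehicle_age", "mileage"], imp.importances_mean):
          print(f"{name:12s} {score:.3f}")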
  17. By: Saeed Nosratabadi; Amir Mosavi; Puhong Duan; Pedram Ghamisi
    Abstract: This paper provides the state of the art of data science in economics. Advances in data science are investigated through a novel taxonomy of applications and methods, in three individual classes: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal a trend toward hybrid models: more than 51% of the reviewed articles applied a hybrid model. Moreover, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than the other algorithms. Going forward, the trend is expected to move toward deep learning models.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.13422&r=all
  18. By: Mahdi Ben Jelloul (IPP - Institut des politiques publiques); Antoine Bozio (IPP - Institut des politiques publiques, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Sophie Cottet (IPP - Institut des politiques publiques, PSE - Paris School of Economics, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Brice Fabre (PSE - Paris School of Economics, IPP - Institut des politiques publiques, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Claire Leroy (IPP - Institut des politiques publiques)
    Abstract: The current social benefits system raises debate along many dimensions: non-take-up of minimum income benefits, the stacking of multiple schemes, restrictive eligibility conditions for young people, etc. Faced with these issues, 13 departmental councils (Ardèche, Ariège, Aude, Dordogne, Gers, Gironde, Haute-Garonne, Ille-et-Vilaine, Landes, Lot-et-Garonne, Meurthe-et-Moselle, Nièvre and Seine-Saint-Denis) launched a project to experiment with a basic income that would simplify the existing system and be open, under a means test, to any individual above a certain age. A prerequisite for implementing this project is the definition of the reform scenario(s) to be tested. This report serves that goal by evaluating ex ante the budgetary and redistributive effects of several reform scenarios defined by the participating departmental councils. Using the TAXIPP 1.0 microsimulation model, which draws on both administrative tax data and survey data, the report considers two ways of simplifying the existing system: replacing the Revenu de Solidarité Active (RSA) and the prime d'activité with a single simplified scheme, on the one hand, and additionally integrating housing benefits into the new unified scheme, on the other. In particular, the report evaluates the effects of opening these schemes to individuals aged 18 to 24, who are currently the most affected by poverty.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02514725&r=all
  19. By: Jos\'e Moran; Antoine Fosset; Davide Luzzati; Jean-Philippe Bouchaud; Michael Benzaquen
    Abstract: Historically, rational choice theory has focused on the utility maximization principle to describe how individuals make choices. In reality, there is a computational cost to exploring the universe of available choices, and it is often not clear whether we are truly maximizing an underlying utility function. In particular, memory effects and habit formation may dominate over utility maximization. We propose a stylized model with a history-dependent utility function in which the utility associated with each choice is increased when that choice has been made in the past, with a certain decaying memory kernel. We show that self-reinforcing effects can cause the agent to get stuck with a choice by sheer force of habit. We discuss the special nature of the transition between free exploration of the space of choices and self-trapping. We find in particular that the trapping time distribution is precisely a Zipf law at the transition, and that the self-trapped phase exhibits super-aging behaviour.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.13660&r=all
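    A minimal simulation in the spirit of the model: each choice's utility is boosted by an exponentially decaying memory of past choices, and the agent chooses with logit probabilities. All parameter values are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      n_choices, T = 10, 5000
      epsilon, beta = 0.99, 2.0            # memory decay per step, choice intensity
      u = rng.normal(size=n_choices)       # intrinsic utilities
      habit = np.zeros(n_choices)          # history-dependent utility boost
      visits = np.zeros(n_choices, dtype=int)

      for _ in range(T):
          total = u + habit
          p = np.exp(beta * (total - total.max()))
          p /= p.sum()
          c = rng.choice(n_choices, p=p)   # logit choice over habit-distorted utilities
          visits[c] += 1
          habit *= epsilon                 # the memory kernel: old habits fade...
          habit[c] += 1.0                  # ...and the chosen option is reinforced

      # a share near 1 means the agent is self-trapped by force of habit
      print("share of time on the most-visited choice:", visits.max() / T)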
  20. By: Mahdi Ben Jelloul (IPP - Institut des politiques publiques); Antoine Bozio (PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - ENPC - École des Ponts ParisTech - EHESS - École des hautes études en sciences sociales - INRA - Institut National de la Recherche Agronomique - ENS Paris - École normale supérieure - Paris, IPP - Institut des politiques publiques, PSE - Paris School of Economics); Thomas Douenne (IPP - Institut des politiques publiques, PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - ENPC - École des Ponts ParisTech - EHESS - École des hautes études en sciences sociales - INRA - Institut National de la Recherche Agronomique - ENS Paris - École normale supérieure - Paris, PSE - Paris School of Economics); Brice Fabre (PSE - Paris School of Economics, IPP - Institut des politiques publiques, PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - ENPC - École des Ponts ParisTech - EHESS - École des hautes études en sciences sociales - INRA - Institut National de la Recherche Agronomique - ENS Paris - École normale supérieure - Paris); Claire Leroy (IPP - Institut des politiques publiques)
    Abstract: TAXIPP is the tax-benefit microsimulation model developed at the Institut des politiques publiques (IPP). The tool simulates the taxes and monetary transfers of the French redistributive system. This methodological paper presents version 1.1 of the model. It describes in detail how the model works: the data used, the structure of the simulator, and all methodological choices made. Relative to version 1.0, version 1.1 simulates the wealth tax (impôt de solidarité sur la fortune, ISF) and the real-estate wealth tax (impôt sur la fortune immobilière, IFI) from the administrative data associated with these two taxes. Version 1.1 follows the recent move to version 1.0, which represented a major change in both the sources and the architecture of the simulator. The data used in the model come from a statistical matching of administrative and survey data, the idea being to rely on administrative sources as much as possible.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02514276&r=all
  21. By: Andreas A. Aigner; Walter Schrabmair
    Abstract: 'The trend is your friend' is a common saying; the difficulty lies in determining if and when you are in a trend. Is the trend strong enough to trade? When does the trend reverse, and how will you detect this? We try to answer at least some of these questions here. We derive a novel indicator that measures the power of a trend using digital signal processing techniques, separating the Signal from the Noise. We apply the indicator to examples as well as real data, and evaluate its accuracy and its relation to the PnL performance of the 'Volatility Index' trend-following algorithm devised by J. Welles Wilder Jr. in 1978.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.09298&r=all
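    An illustrative take on separating signal from noise with a simple low-pass (moving-average) filter; this is not the authors' indicator, only a sketch of the general signal-processing idea on a synthetic trending price:

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(500)
      price = 100 + 0.05 * t + np.cumsum(rng.normal(0, 0.5, t.size))  # drift plus noise

      window = 50
      kernel = np.ones(window) / window
      signal = np.convolve(price, kernel, mode="valid")   # low-frequency component
      noise = price[window - 1:] - signal                 # high-frequency residual

      # a crude trend-power measure: average smoothed slope in units of the noise
      slope = np.diff(signal)
      print("trend power:", round(slope.mean() / noise.std(), 4))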

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.