NEP: New Economics Papers
on Computational Economics
Issue of 2022‒05‒09
25 papers
By: | Sebastian Baran; Przemysław Rola
Abstract: | The insurance industry, with its large datasets, is a natural place to apply big data solutions. It must be stressed, however, that a significant number of machine learning applications in insurance, such as fraud detection or claim prediction, face the problem of learning on an imbalanced data set, because frauds and claims are rare events compared with the entire population of drivers. The problem of imbalanced learning is often hard to overcome. The main goal of this work is therefore to present and apply various methods of dealing with an imbalanced data set in the context of claim occurrence prediction in car insurance, and to use these techniques to compare the results of machine learning algorithms on that task. Our study covers the following techniques: logistic regression, decision tree, random forest, XGBoost, and a feed-forward neural network. The problem is framed as binary classification.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.06109&r= |
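A minimal sketch of one standard rebalancing technique from this paper's toolbox, class weighting, applied to rare-event claim prediction. The data, feature count, and claim rate below are synthetic stand-ins, not the paper's data.

```python
# Class-weighted logistic regression for a rare-event (claim) classifier.
# Everything here is illustrative; the paper compares several such methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Simulate an imbalanced portfolio: roughly 5% of policies file a claim.
X, y = make_classification(n_samples=20_000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 'balanced' reweights each class inversely to its frequency, so the
# rare claim class is not ignored by the fit.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba))
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```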
By: | Franco D. Albareti; Thomas Ankenbrand; Denis Bieri; Esther Hänggi; Damian Lötscher; Stefan Stettler; Marcel Schöngens
Abstract: | Quantum computers can solve specific problems that are not feasible on "classical" hardware. Harvesting the speed-up provided by quantum computers therefore has the potential to change any industry that uses computation, including finance. The first quantum applications for the financial industry, involving optimization, simulation, and machine learning problems, have already been proposed and applied to use cases such as portfolio management, risk management, and derivatives pricing. This survey reviews platforms, algorithms, methodologies, and use cases of quantum computing for various applications in finance in a structured way. It is aimed at people working in the financial industry and offers an overview of current developments and capabilities, and of the potential of quantum computing in the financial industry.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.10026&r= |
By: | Arkadiusz Jędrzejewski; Jesus Lago; Grzegorz Marcjasz; Rafał Weron
Abstract: | Electricity price forecasting (EPF) is a branch of forecasting on the interface of electrical engineering, statistics, computer science, and finance, which focuses on predicting prices in wholesale electricity markets for a whole spectrum of horizons. These range from a few minutes (real-time/intraday auctions and continuous trading), through days (day-ahead auctions), to weeks, months or even years (exchange and over-the-counter traded futures and forward contracts). Over the last 25 years, various methods and computational tools have been applied to intraday and day-ahead EPF. Until the early 2010s, the field was dominated by relatively small linear regression models and (artificial) neural networks, typically with no more than two dozen inputs. As time passed, more data and more computational power became available. The models grew larger to the extent where expert knowledge was no longer enough to manage the complex structures. This, in turn, led to the introduction of machine learning (ML) techniques in this rapidly developing and fascinating area. Here, we provide an overview of the main trends and EPF models as of 2022. |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.00883&r= |
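For flavor, a sketch of the kind of small linear day-ahead EPF benchmark the survey describes as dominant before the ML era: an autoregressive regression with a handful of inputs. The series and lag choices here are simulated and illustrative only.

```python
# Small autoregressive day-ahead price model, in the spirit of the
# "two dozen inputs" era the survey describes. Data is simulated;
# real studies would use hourly market prices and load forecasts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 730
load = 100 + 10 * np.sin(np.arange(n_days) * 2 * np.pi / 7) + rng.normal(0, 2, n_days)
price = 0.5 * load + rng.normal(0, 5, n_days)

# Regress the day-ahead price on prices lagged 1, 2 and 7 days
# (yesterday, the day before, and the same weekday last week) plus load.
lags = [1, 2, 7]
t0 = max(lags)
X = np.column_stack([price[t0 - l:n_days - l] for l in lags] + [load[t0:]])
y = price[t0:]

model = LinearRegression().fit(X[:-100], y[:-100])
print("out-of-sample MAE:",
      np.mean(np.abs(model.predict(X[-100:]) - y[-100:])))
```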
By: | Yiyang Zheng |
Abstract: | Predicting the short-term directional movement of a futures contract is challenging, as its pricing is often driven by multiple complex, dynamic conditions. This work presents a method for predicting the short-term directional movement of an underlying futures contract. We engineer a set of features from technical analysis, order flow, and order-book data. TabNet, a deep neural network for tabular data, is then trained on these features. We train our model on the Silver Futures Contract listed on the Shanghai Futures Exchange and achieve an accuracy of 0.601 in predicting the directional change over the selected period.
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.12457&r= |
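A hedged sketch of the training step, assuming the open-source pytorch-tabnet package. The features below are random stand-ins for the paper's engineered technical, order-flow, and order-book inputs; hyperparameters are illustrative, not the paper's.

```python
# TabNet classifier for up/down directional prediction on tabular features.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32)).astype(np.float32)   # engineered features
y = rng.integers(0, 2, size=5000)                    # next-period direction

X_tr, X_va = X[:4000], X[4000:]
y_tr, y_va = y[:4000], y[4000:]

# n_d / n_a are decision and attention widths, n_steps the number of
# sequential feature-selection steps in the TabNet architecture.
clf = TabNetClassifier(n_d=16, n_a=16, n_steps=4, seed=0)
clf.fit(X_tr, y_tr,
        eval_set=[(X_va, y_va)],
        eval_metric=["accuracy"],
        max_epochs=50, patience=10)
print("validation accuracy:", (clf.predict(X_va) == y_va).mean())
```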
By: | Zhuangwei Shi; Yang Hu; Guangliang Mo; Jian Wu |
Abstract: | The stock market plays an important role in economic development. Because of the complex volatility of the stock market, research on and prediction of stock price changes can help investors avoid risk. The traditional time series model ARIMA cannot describe nonlinearity and cannot achieve satisfactory results in stock prediction. Since neural networks have strong nonlinear generalization ability, this paper proposes an attention-based CNN-LSTM and XGBoost hybrid model to predict the stock price. The model integrates the time series model, Convolutional Neural Networks with an attention mechanism, the Long Short-Term Memory network, and an XGBoost regressor in a non-linear relationship, improving prediction accuracy. It can fully mine the historical information of the stock market over multiple periods. The stock data is first preprocessed with ARIMA. Then, a deep learning architecture in a pretraining-finetuning framework is adopted. The pre-training model is an attention-based CNN-LSTM built on a sequence-to-sequence framework: convolution first extracts deep features of the raw stock data, and Long Short-Term Memory networks then mine long-term time series features. Finally, the XGBoost model is adopted for fine-tuning. The results show that the hybrid model is effective and its prediction accuracy is relatively high, which can help investors or institutions make decisions, expanding returns and avoiding risk. Source code is available at https://github.com/zshicode/Attention-CLX-stock-prediction.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.02623&r= |
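A compressed sketch of the pipeline's skeleton: ARIMA preprocessing, a Conv1D + LSTM feature extractor, and an XGBoost head for fine-tuning. The synthetic series, window size, and hyperparameters are illustrative only, and the attention mechanism of the full model is omitted here.

```python
# Hybrid ARIMA -> CNN-LSTM -> XGBoost skeleton on a simulated price series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1200)) + 100.0

# 1) ARIMA captures the linear structure; the nets model what remains.
arima = ARIMA(prices, order=(1, 1, 1)).fit()
resid = arima.resid

# 2) Sliding windows of residuals feed a Conv1D + LSTM feature extractor.
win = 20
X = np.stack([resid[i:i + win] for i in range(len(resid) - win)])[..., None]
y = resid[win:]

inp = keras.Input(shape=(win, 1))
h = keras.layers.Conv1D(16, 3, activation="relu")(inp)
h = keras.layers.LSTM(32)(h)
out = keras.layers.Dense(1)(h)
net = keras.Model(inp, out)
net.compile(optimizer="adam", loss="mse")
net.fit(X, y, epochs=5, batch_size=64, verbose=0)

# 3) Fine-tune: XGBoost regresses the target on the LSTM's learned features.
feats = keras.Model(inp, h).predict(X, verbose=0)
booster = XGBRegressor(n_estimators=200, max_depth=3).fit(feats, y)
print("in-sample MSE:", np.mean((booster.predict(feats) - y) ** 2))
```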
By: | Bruno Spilak; Wolfgang Karl Härdle
Abstract: | A portfolio allocation method based on linear and non-linear latent constrained conditional factors is presented. The factor loadings are constrained to be positive in order to obtain long-only portfolios, which is not guaranteed by classical factor analysis or PCA. In addition, the factors are required to be uncorrelated across clusters in order to build long-only portfolios. Our approach is based on modern machine learning tools: convex Non-negative Matrix Factorization (NMF) and autoencoder neural networks, designed specifically to enforce the learning of useful hidden data structure, such as the correlation between assets' returns. Our technique finds weakly correlated linear and non-linear conditional latent factors, which are used to build outperforming global portfolios of cryptocurrencies and traditional assets, similar to hierarchical clustering methods. We study the dynamics of the derived non-linear factors in order to forecast tail losses of the portfolios and thus build more stable ones.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.02757&r= |
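A simplified sketch of the long-only factor idea: factorize a nonnegative transform of asset returns and read nonnegative loadings as long-only weights. The paper uses convex NMF and autoencoders; sklearn's standard NMF (which requires nonnegative input, hence the crude shift below) is only a stand-in, and the returns are simulated.

```python
# Long-only factor portfolios from NMF loadings (illustrative stand-in).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(500, 12))   # 500 days, 12 assets

# Standard NMF needs nonnegative input; shift returns as a crude transform.
V = returns - returns.min()
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)          # (days x factors) exposures
H = model.components_               # (factors x assets) nonnegative loadings

# Each factor's loadings, normalized to sum to one, are long-only weights.
weights = H / H.sum(axis=1, keepdims=True)
factor_portfolio_returns = returns @ weights.T
print("annualized factor vols:",
      factor_portfolio_returns.std(axis=0) * np.sqrt(252))
```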
By: | Xianfei Hui; Baiqing Sun; Yan Zhou; Indranil SenGupta |
Abstract: | This paper models the stochastic process of the price time series of the CSI 300 index in the Chinese financial market and analyzes the volatility characteristics of intraday high-frequency price data. In a new generalized Barndorff-Nielsen and Shephard model, the lag caused by the asynchrony of market information is taken into account, and the lack of long-term dependence is remedied. To speed up the valuation process, several machine learning and deep learning algorithms are used to estimate parameters and evaluate the forecasts. Tracking historical jumps of different magnitudes offers promising avenues for simulating dynamic price processes and predicting future jumps. Numerical results show that the deterministic component of the stochastic volatility processes is consistently captured over both short and longer-term windows. The findings should be useful to investors and regulators interested in predicting market dynamics based on realized volatility.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.02891&r= |
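For reference, a minimal form of the classical (non-generalized) Barndorff-Nielsen and Shephard dynamics, which the paper extends with an information lag and long-term dependence; the notation is the standard one from the BN-S literature, not necessarily the paper's.

```latex
% X_t: log-price; sigma_t^2: instantaneous variance; W_t: Brownian motion;
% Z: a subordinator (non-decreasing Levy process) driving upward variance
% jumps; rho <= 0: leverage parameter.
\begin{aligned}
  dX_t        &= (\mu + \beta\,\sigma_t^2)\,dt + \sigma_t\,dW_t + \rho\,dZ_{\lambda t},\\
  d\sigma_t^2 &= -\lambda\,\sigma_t^2\,dt + dZ_{\lambda t}, \qquad \lambda > 0.
\end{aligned}
```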
By: | Kun Zhang; Ben Mingbin Feng; Guangwu Liu; Shiyu Wang |
Abstract: | Nested simulation is a natural approach to tackle nested estimation problems in operations research and financial engineering. The outer-level simulation generates outer scenarios, and inner-level simulations are run in each outer scenario to estimate the corresponding conditional expectation. The resulting sample of conditional expectations is then used to estimate the risk measures of interest. Despite its flexibility, nested simulation is notorious for its heavy computational burden. We introduce a novel simulation procedure that reuses inner simulation outputs to improve efficiency and accuracy in solving nested estimation problems. We analyze the convergence rates of the bias, variance, and MSE of the resulting estimator. In addition, central limit theorems and variance estimators are presented, which lead to asymptotically valid confidence intervals for the nested risk measure of interest. We conduct numerical studies on two financial risk measurement problems. The numerical results are consistent with the asymptotic analysis and show that the proposed approach outperforms standard nested simulation and a state-of-the-art regression approach for nested estimation problems.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.15929&r= |
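A minimal sketch of standard (non-reusing) nested simulation for a tail risk measure, the baseline the paper improves on: outer scenarios, inner conditional-expectation estimates, then a risk measure over those estimates. The loss function and sample sizes are toy choices; the paper's inner-output reuse is not reproduced here.

```python
# Standard nested simulation: outer scenarios, inner conditional means,
# then a 95% VaR over the estimated conditional expected losses.
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 1000, 200

# Outer level: simulate risk-factor scenarios S at the risk horizon.
S = rng.normal(0.0, 1.0, n_outer)

# Inner level: per scenario, estimate E[L | S] by averaging fresh
# inner samples of a toy convex loss.
def inner_loss_samples(s, n):
    z = rng.normal(0.0, 1.0, n)
    return np.maximum(s + 0.5 * z, 0.0) ** 2

cond_means = np.array([inner_loss_samples(s, n_inner).mean() for s in S])

# Risk measure of interest over the conditional expectations.
var_95 = np.quantile(cond_means, 0.95)
print("estimated 95% VaR of conditional loss:", var_95)
```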
By: | Sourav Medya; Mohammad Rasoolinejad; Yang Yang; Brian Uzzi |
Abstract: | Financial market analysis has focused primarily on extracting signals from accounting, stock price, and other numerical hard data reported in P&L statements or earnings per share reports. Yet it is well known that decision-makers routinely use soft, text-based documents that interpret the hard data they narrate. Recent advances in computational methods for analyzing unstructured, soft text-based data at scale offer possibilities for understanding financial market behavior that could improve investments and market equity. A critical and ubiquitous form of soft data is the earnings call. Earnings calls are periodic (often quarterly) statements, usually by CEOs, that attempt to influence investors' expectations of a company's past and future performance. Here, we study the statistical relationship between earnings calls, company sales, stock performance, and analysts' recommendations. Our study covers a decade of observations, with approximately 100,000 transcripts of earnings calls from 6,300 public companies from January 2010 to December 2019. We report three novel findings. First, the buy, sell, and hold recommendations made by professional analysts prior to the earnings call have low correlation with stock price movements after the call. Second, using our graph neural network based method to process the semantic features of earnings calls, we reliably and accurately predict stock price movements in five major areas of the economy. Third, in most cases the semantic features of transcripts are more predictive of stock price movements than sales and earnings per share, i.e., traditional hard data.
Date: | 2022–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.12460&r= |
By: | Pedro Lopez Merino (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique, ECODEVELOPPEMENT - Unité de recherche d'Écodéveloppement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, ADEME - Agence de l'Environnement et de la Maîtrise de l'Energie); Juliette Rouchier (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique) |
Date: | 2021–05–26 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03618377&r= |
By: | Jozef Barunik; Lubos Hanus |
Abstract: | We propose a deep learning approach to probabilistic forecasting of macroeconomic and financial time series. Because it can learn complex patterns from a data-rich environment, our approach is useful for decision making that depends on the uncertainty of a large number of economic outcomes. Specifically, it is informative for agents facing asymmetric dependence of their loss on outcomes from possibly non-Gaussian and non-linear variables. We show the usefulness of the proposed approach on two distinct datasets where a machine learns patterns from data. First, we construct macroeconomic fan charts that reflect information from a high-dimensional data set. Second, we illustrate gains in predicting stock return distributions, which are heavy-tailed, asymmetric, and suffer from a low signal-to-noise ratio.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.06848&r= |
By: | Xu, Jack |
Abstract: | Fundamental credit analysis is widely performed by fixed income analysts and financial institutions to assess the credit risk of individual companies based on their financial data, notably the financial statements reported by the companies. Yet conventional analysis has not developed a computational method to forecast, directly from a company's financial statements, the default probability, the recovery rate, and ultimately the fundamental valuation of a company's credit risk in terms of credit spreads over the risk-free rate. This paper introduces a generalizable approach to achieving these goals by implementing fundamental credit analysis in dynamical models. Combined with Monte-Carlo simulation, the methodology naturally unites several novel features in the same forecast algorithm: (1) integrating default (defined as the state of negative cash) and the recovery rate (under a liquidation scenario) through the same defaulted balance sheet; (2) valuing the corporate real options manifested as planning in the amount of borrowing and expenditure; (3) embedding macro-economic and macro-financing conditions; and (4) forecasting the joint default risk of multiple companies. The method is applied to the Chinese real estate industry to forecast, for several listed developers, their forward default probabilities and associated recovery rates, as well as the fair-value par coupon curves of senior unsecured debt, using as inputs 6-8 years of their annual financial statements, with 2020 as the latest. The results show both agreements and disagreements with market-traded credit spreads in early April 2021, the time of these forecasts. The models forecasted much wider than market spreads for the big three developers, in particular pricing Evergrande at distressed levels. Once additional generic industry models are set up, the methodology can compute default risk and debt valuation for companies at large scale based on their historical financial statements.
Keywords: | fundamental credit analysis; financial statement analysis; default forecasting; bond valuation; debt valuation; dynamical models; joint default; corporate real options |
JEL: | C6 G17 |
Date: | 2022–04–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:112699&r= |
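A stylized sketch of the paper's core mechanism: simulate a company's cash balance forward from financial-statement-style inputs and define default as the first time cash goes negative. All figures, ratios, and the revenue process below are hypothetical illustrations, not the paper's calibration.

```python
# Monte-Carlo default probability from a simulated cash-balance process.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_years = 10_000, 5

cash0, revenue0 = 50.0, 200.0        # hypothetical balance-sheet inputs
opex_ratio, debt_service = 0.85, 25.0

# Revenue follows a simple lognormal growth process (macro conditions
# could enter through the drift); cash accumulates the residual margin
# after operating costs and debt service.
growth = rng.normal(0.02, 0.15, size=(n_paths, n_years))
revenue = revenue0 * np.exp(np.cumsum(growth, axis=1))
cash = cash0 + np.cumsum(revenue * (1 - opex_ratio) - debt_service, axis=1)

# Default = first time the simulated cash balance turns negative.
defaulted = (cash < 0).any(axis=1)
print("5-year default probability:", defaulted.mean())
# Recovery under liquidation would be read off the simulated balance
# sheet at the default time; omitted here for brevity.
```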
By: | Carl Remlinger (Université Gustave Eiffel, EDF R&D - EDF R&D - EDF - EDF, FiME Lab - Laboratoire de Finance des Marchés d'Energie - EDF R&D - EDF R&D - EDF - EDF - CREST - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres); Joseph Mikael (EDF R&D LME - Laboratoire des Matériels Électriques - EDF R&D - EDF R&D - EDF - EDF); Romuald Elie (Université Gustave Eiffel) |
Abstract: | A model that solves a family of partial differential equations (PDEs) with a single training is proposed. Re-calibrating a risk factor model, or re-training a solver every time market conditions change, is costly and unsatisfactory. We therefore want to solve PDEs when the environment is not stationary, or for several initial conditions at the same time. By learning operators in a single training, we ensure the robustness of optimal controls under variations of the models, options, or constraints. Ultimately, we want to generalize by solving the PDE under models or conditions that were not present during training. We confirm the effectiveness of the method on several risk management problems by comparing it with other machine learning approaches. We evaluate our DeepOHedger on option pricing tasks, including local volatility models and the option spreads involved in energy markets. Finally, we present a purely data-driven approach to risk hedging, from time series generation to learning optimal policies. Our model then solves a family of parametric PDEs from synthetic samples produced by a deep generator previously trained on spot price data from different countries.
Date: | 2022–03–07 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03599726&r= |
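One generic way to "learn an operator" mapping a PDE's input function to its solution is the DeepONet branch/trunk pattern, sketched below in Keras. The DeepOHedger architecture itself is not detailed in the abstract, so this is only the standard pattern under that assumption, trained here on placeholder data.

```python
# DeepONet-style operator learner: a branch net encodes the sampled input
# function, a trunk net encodes the query point, and their inner product
# approximates the solution value u(t, x).
import numpy as np
from tensorflow import keras

m, p = 50, 32          # input-function sensor count, latent width

branch_in = keras.Input(shape=(m,))      # sampled input function (e.g. IC)
trunk_in = keras.Input(shape=(2,))       # query point (t, x)
branch = keras.layers.Dense(64, activation="tanh")(branch_in)
branch = keras.layers.Dense(p)(branch)
trunk = keras.layers.Dense(64, activation="tanh")(trunk_in)
trunk = keras.layers.Dense(p, activation="tanh")(trunk)
out = keras.layers.Dot(axes=1)([branch, trunk])

onet = keras.Model([branch_in, trunk_in], out)
onet.compile(optimizer="adam", loss="mse")

# Toy supervision: random input functions, query points, fake solutions.
rng = np.random.default_rng(0)
F = rng.normal(size=(4096, m))
Q = rng.uniform(size=(4096, 2))
u = F.mean(axis=1) * Q[:, 0]             # placeholder solution values
onet.fit([F, Q], u, epochs=3, batch_size=128, verbose=0)
```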
By: | Charl Maree; Christian W. Omlin |
Abstract: | The common purpose of applying reinforcement learning (RL) to asset management is the maximization of profit. The extrinsic reward function used to learn an optimal strategy typically does not take into account any other preferences or constraints. We have developed a regularization method that ensures that strategies have global intrinsic affinities, i.e., different personalities may have preferences for certain assets which may change over time. We capitalize on these intrinsic policy affinities to make our RL model inherently interpretable. We demonstrate how RL agents can be trained to orchestrate such individual policies for particular personality profiles and still achieve high returns. |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.09218&r= |
By: | Magnus Wiese; Phillip Murray |
Abstract: | We develop a risk-neutral spot and equity option market simulator for a single underlying, under which the joint market process is a martingale. We leverage an efficient low-dimensional representation of the market which preserves the absence of static arbitrage, and employ neural spline flows to simulate samples which are free from conditional drifts and are highly realistic, in the sense that among all possible risk-neutral simulators, the one obtained is closest to the historical data with respect to the Kullback-Leibler divergence. Numerical experiments demonstrate the effectiveness of the calibrated simulator and highlight both its drift removal and its fidelity.
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.13996&r= |
By: | Souhir Ben Amor; Heni Boubaker; Lotfi Belkacem |
Abstract: | Accurate electricity price forecasting is a central goal for market participants, since it is the fundamental basis for maximizing their profits. However, electricity is a non-storable commodity, and electricity prices are affected by social and natural factors that make price forecasting a challenging task. This study investigates the predictive performance of a new hybrid model based on the generalized long memory autoregressive model (k-factor GARMA), the Gegenbauer Generalized Autoregressive Conditional Heteroskedasticity (G-GARCH) process, wavelet decomposition, and a Local Linear Wavelet Neural Network (LLWNN) optimized with two different learning algorithms: backpropagation (BP) and particle swarm optimization (PSO). The performance of the proposed model is evaluated using data from the Nord Pool electricity markets and compared with several other parametric and non-parametric models to establish its robustness. The empirical results show that the proposed method outperforms the competing techniques.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.09568&r= |
By: | Denteh, Augustine (Georgia State University); Liebert, Helge (University of Zurich) |
Abstract: | We provide new insights into the finding from the Oregon experiment that Medicaid increased emergency department (ED) use. Using causal machine learning methods, we find meaningful heterogeneity in the impact of Medicaid on ED use. The treatment effect distribution is widely dispersed, and the average effect is not representative of most individualized treatment effects. A small group (about 14% of participants) in the right tail of the distribution drives the overall effect. We identify priority groups with economically significant increases in ED usage based on demographics and prior utilization. Intensive margin effects are an important driver of increases in ED utilization.
Keywords: | Medicaid, ED use, effect heterogeneity, causal machine learning, optimal policy |
JEL: | H75 I13 I38 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp15192&r= |
By: | Josten, Cecily (London School of Economics); Lordan, Grace (London School of Economics) |
Abstract: | This study identifies the job attributes, in particular skills and abilities, that predict the likelihood that a job is recently automatable, drawing on the Josten and Lordan (2020) classification of automatability, EU Labour Force Survey data, and a machine learning regression approach. We find that the skills and abilities safest from automation are those related to non-linear abstract thinking. Jobs that require 'people' engagement interacted with 'brains' are also less likely to be automated; the skills required for these jobs include soft skills. Finally, jobs that require physically making objects, or physicality more generally, are most likely to be automated unless they also involve interaction with 'brains' and/or 'people'.
Keywords: | work, automatability, job skills, job abilities, EU Labour Force Survey |
JEL: | J21 J00 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp15180&r= |
By: | Alessio Brini; Gabriele Tedeschi; Daniele Tantari |
Abstract: | In this paper we analyze the effect of a policy recommendation on the performance of an artificial interbank market. Financial institutions enter lending agreements based on a public recommendation and their individual information. The former, modeled by a reinforcement learning optimal policy that tries to maximize the long-term fitness of the system, gathers information on the economic environment and directs economic actors to create credit relationships based on the optimal choice between a low interest rate and a high liquidity supply. The latter, based on the agents' balance sheets, determines the liquidity supply and interest rate that banks optimally offer on the market. Combining the public and the private signal, financial institutions create or cut their credit connections over time via a preferential-attachment evolving procedure that generates a dynamic network. Our results show that the emergence of a core-periphery interbank network, combined with a certain level of homogeneity in the size of lenders and borrowers, is essential to ensure the resilience of the system. Moreover, the reinforcement learning optimal policy recommendation plays a crucial role in mitigating systemic risk relative to alternative policy instruments.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.07134&r= |
By: | Peng Hu (emlyon business school); Yaobin Lu; Yeming Gong |
Abstract: | Conversational Artificial Intelligence (AI) refers to digital agents that interact with users in natural language. To advance the understanding of trust in conversational AI, this study focuses on two humanness factors manifested by conversational AI: speaking and listening. First, we explore users' heterogeneous perception patterns based on the two humanness factors. Next, we examine how this heterogeneity relates to trust in conversational AI. A two-stage survey was conducted to collect data. Latent profile analysis revealed three distinct patterns: para-human perception, para-machine perception, and asymmetric perception. Finite mixture modeling demonstrated that the benefit of humanizing an AI's voice for competence-related trust can evaporate once the AI's language understanding is perceived as poor. Interestingly, asymmetry between the humanness perceptions of speaking and listening can impede morality-related trust. By adopting a person-centered approach to the relationship between dual humanness and user trust, this study contributes to the literature on trust in conversational AI and to the practice of trust-inducing AI design.
Keywords: | Artificial intelligence,Humanness perception,Trust,Person-centered approach,Finite mixture modeling |
Date: | 2021–06–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03598766&r= |
By: | Guglielmo Briscese; Maddalena Grignani; Stephen Stapleton |
Abstract: | Crises can cause important societal changes by shifting citizens' preferences and beliefs, but how such change happens remains an open question. Following a representative sample of Americans in a longitudinal multi-wave survey throughout 2020, we find that citizens reduced trust in public institutions and became more supportive of government spending after being directly impacted by the crisis, such as when they lost a sizeable portion of their income or knew someone hospitalized with the virus. These shifts occurred very rapidly, sometimes in a matter of weeks, and persisted over time. We also record an increase in the partisan gap on the same outcomes, which can be largely explained by misperceptions about the crisis inflated by the consumption of partisan leaning news. In an experiment, we expose respondents to the same source of information and find that it successfully recalibrates perceptions, with persistent effects. We complement our analysis by employing machine learning to estimate heterogeneous treatment effects, and show that our findings are robust to several specifications and estimation strategies. In sum, both lived experiences and media inflated misperceptions can alter citizens' beliefs rapidly during a crisis. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.12339&r= |
By: | Konstantin Görgen; Jonas Meirer; Melanie Schienle
Abstract: | We study the estimation and prediction of the risk measure Value at Risk for cryptocurrencies. Using Generalized Random Forests (GRF) (Athey et al., 2019) that can be adapted to specifically fit the framework of quantile prediction, we show their superior performance over other established methods such as quantile regression and CAViaR, particularly in unstable times. We investigate the small-sample prediction properties in comparison to standard techniques in a Monte Carlo simulation study. In a comprehensive empirical assessment, we study the performance not only for the major cryptocurrencies but also in the stock market. Generally, we find that GRF outperforms established methods especially in crisis situations. We further identify important predictors during such times and show their influence on forecasting over time. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.08224&r= |
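A sketch of direct conditional quantile (VaR) prediction from features, in the spirit of the paper's approach. sklearn has no Generalized Random Forest implementation, so a gradient-boosting quantile regressor stands in for GRF here; the volatility feature and return series are simulated.

```python
# Conditional 5% quantile of returns (i.e. the negative of 95% VaR),
# fitted directly with a quantile-loss regressor as a GRF stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 3000
vol = np.abs(rng.normal(0.02, 0.01, n))          # e.g. realized volatility
X = vol.reshape(-1, 1)
returns = rng.normal(0.0, vol)                   # next-day returns

q05 = GradientBoostingRegressor(loss="quantile", alpha=0.05,
                                n_estimators=300, max_depth=2)
q05.fit(X[:-500], returns[:-500])

# Backtest check: the realized exceedance rate should be near 5%.
pred = q05.predict(X[-500:])
coverage = (returns[-500:] < pred).mean()
print("empirical exceedance rate (target 0.05):", coverage)
```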
By: | Uwe Sunde; Dainis Zegners; Anthony Strittmatter |
Abstract: | This paper presents an empirical investigation of the relation between decision speed and decision quality for a real-world setting of cognitively-demanding decisions in which the timing of decisions is endogenous: professional chess. Move-by-move data provide exceptionally detailed and precise information about decision times and decision quality, based on a comparison of actual decisions to a computational benchmark of best moves constructed using the artificial intelligence of a chess engine. The results reveal that faster decisions are associated with better performance. The findings are consistent with the predictions of procedural decision models like drift-diffusion-models in which decision makers sequentially acquire information about decision alternatives with uncertain valuations. |
Keywords: | response times, speed-performance profile, drift-diffusion model, uncertain evaluations |
JEL: | D01 D90 C70 C80 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_9546&r= |
By: | Paola Mallia (PSE - Paris School of Economics - ENPC - École des Ponts ParisTech - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique - EHESS - École des hautes études en sciences sociales - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement) |
Abstract: | Adoption of improved seed varieties has the potential to lead to substantial productivity increases in agriculture. However, only 36 percent of the farmers that grow an improved maize variety in Ethiopia report doing so. This paper provides the first causal evidence of the impact of misperception of improved maize varieties on farmers' production decisions, productivity, and profitability. We employ an Instrumental Variable approach that takes advantage of the roll-out of a governmental program that increases transparency in the seed sector. We find that farmers who correctly classify the improved maize variety they grow experience large increases in input usage (urea, NPS, labor) and yields, but no statistically significant changes in other agricultural practices or profits. Using machine learning techniques, we develop an interpolation model to predict objectively measured varietal identification from farmers' self-reported data, which provides proof-of-concept towards scalable approaches to obtaining reliable measures of crop varieties and allows us to extend the analysis to the nationally representative sample.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03597332&r= |
By: | Kerstin Hötte; Taheya Tarannum; Vilhelm Verendel; Lauren Bennett
Abstract: | Artificial Intelligence (AI) is often described as the next general purpose technology (GPT), with profound economic and societal consequences. We examine how strongly four AI patent classification methods reproduce the GPT-like features of (1) intrinsic growth, (2) generality, and (3) innovation complementarities. Studying US patents from 1990-2019, we find that the four methods (keywords, scientific citations, WIPO, and the USPTO approach) classify between 3% and 17% of all patents as AI. The keyword-based approach demonstrates the strongest intrinsic growth and generality despite identifying the smallest set of AI patents. The WIPO and science approaches generate each GPT characteristic less strikingly, whilst the USPTO set, with the largest number of patents, produces the weakest features. The lack of overlap and the heterogeneity between the four approaches emphasise that evaluations of AI innovation policies may be sensitive to the choice of classification method.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.10304&r= |