nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒03‒18
43 papers chosen by



  1. Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering By Marc Schmitt
  2. A Study on Stock Forecasting Using Deep Learning and Statistical Models By Himanshu Gupta; Aditya Jaiswal
  3. Hyperparameter Tuning for Causal Inference with Double Machine Learning: A Simulation Study By Philipp Bach; Oliver Schacht; Victor Chernozhukov; Sven Klaassen; Martin Spindler
  4. DiffsFormer: A Diffusion Transformer on Stock Factor Augmentation By Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding
  5. Machine Learning for Continuous-Time Finance By Victor Duarte; Diogo Duarte; Dejanir H. Silva
  6. Monthly GDP nowcasting with Machine Learning and Unstructured Data By Juan Tenorio; Wilder Perez
  7. Free Trade Agreements and the Movement of Business People By Mayer, Thierry; Rapoport, Hillel; Umana-Dajud, Camilo
  8. Securing Transactions: A Hybrid Dependable Ensemble Machine Learning Model using IHT-LR and Grid Search By Md. Alamin Talukder; Rakib Hossen; Md Ashraf Uddin; Mohammed Nasir Uddin; Uzzal Kumar Acharjee
  9. Attention-based Dynamic Multilayer Graph Neural Networks for Loan Default Prediction By Sahab Zandi; Kamesh Korangi; María Óskarsdóttir; Christophe Mues; Cristián Bravo
  10. Forecasting Imports in OECD Member Countries and Iran by Using Neural Network Algorithms of LSTM By Soheila Khajoui; Saeid Dehyadegari; Sayyed Abdolmajid Jalaee
  11. LLM-driven Imitation of Subrational Behavior : Illusion or Reality? By Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
  12. ABIDES-Economist: Agent-Based Simulation of Economic Systems with Learning Agents By Kshama Dwarakanath; Svitlana Vyetrenko; Peyman Tavallali; Tucker Balch
  13. A robust record linkage approach for anomaly detection in granular insurance asset reporting By Vittoria La Serra; Emiliano Svezia
  14. A step towards the integration of machine learning and small area estimation By Tomasz Żądło; Adam Chwila
  15. Borrower based measures analysis via a new agent based model of the Italian real estate sector By Gennaro Catapano
  16. A Hormetic Approach to the Value-Loading Problem: Preventing the Paperclip Apocalypse? By Nathan I. N. Henry; Mangor Pedersen; Matt Williams; Jamin L. B. Martin; Liesje Donkin
  17. End-to-End Policy Learning of a Statistical Arbitrage Autoencoder Architecture By Fabian Krause; Jan-Peter Calliess
  18. The Heterogeneous Aggregate Valence Analysis (HAVAN) Model: A Flexible Approach to Modeling Unobserved Heterogeneity in Discrete Choice Analysis By Connor R. Forsythe; Cristian Arteaga; John P. Helveston
  19. Building Metaknowledge in AI Literacy – The Effect of Gamified vs. Text-based Learning on AI Literacy Metaknowledge By Pinski, Marc; Haas, Miguel; Benlian, Alexander
  20. RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search By Tao Ren; Ruihan Zhou; Jinyang Jiang; Jiafeng Liang; Qinghao Wang; Yijie Peng
  21. Land for fish: Quantifying the connection between the aquaculture sector and agricultural markets By Heimann, Tobias; Delzeit, Ruth
  22. Attenuation and reinforcement mechanisms over the life course By Richiardi, Matteo; Bronka, Patryk; van de Ven, Justin
  23. Modeling the Presidential Approval Ratings of the United States using Machine-Learning: Does Climate Policy Uncertainty Matter? By Elie Bouri; Rangan Gupta; Christian Pierdzioch
  24. LLM Voting: Human Choices and AI Collective Decision Making By Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
  25. Artificial intelligence and the transformation of higher education institutions By Evangelos Katsamakas; Oleg V. Pavlov; Ryan Saklad
  26. Political Fragility: Coups d’État and Their Drivers By Aliona Cebotari; Enrique Chueca-Montuenga; Yoro Diallo; Yunsheng Ma; Ms. Rima A Turk; Weining Xin; Harold Zavarce
  27. Rationality Report Cards: Assessing the Economic Rationality of Large Language Models By Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz
  28. The revision of anti-poverty measures in Italy By Giulia Bovini; Emanuele Dicarlo; Antonella Tomasi
  29. Policy implications of shared e-scooter parking regulation: an agent-based approach By Paul Hurlet; Ouassim Manout; Azise Oumar Diallo
  30. GPT's Performance in Identifying Outcome Changes on ClinicalTrials.gov By Ying, Xiangji; Vorland, Colby J.; Qureshi, Riaz; Brown, Andrew William; Kilicoglu, Halil; Saldanha, Ian; DeVito, Nicholas J; Mayo-Wilson, Evan
  31. Connecting the dots: the network nature of shocks propagation in credit markets By Stefano Pietrosanti; Edoardo Rainone
  32. A monotone piecewise constant control integration approach for the two-factor uncertain volatility model By Duy-Minh Dang; Hao Zhou
  33. Simulating the Constant Cost Trade Model By Nazif Durmaz; Henry Thompson
  34. Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video By Rosa-García, Alfonso
  35. ChatGPT and Corporate Policies By Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
  36. Tax Minimization by French Cohabiting Couples By Olivier Bargain; Damien Echevin; Audrey Etienne; Nicolas Moreau; Adrien Pacifico
  37. County Wildfire Risk Ratings in Northern California: FAIR Plan Insurance Policies and Simulation Models vs. Red Flag Warnings and Diablo Winds By Schmidt, James
  38. Aggregate uncertainty, HANK, and the ZLB By Lin, Alessandro; Peruffo, Marcel
  39. Alterungsschub und Rentenreform: Simulationen für GRV und Beamtenversorgung [Ageing surge and pension reform: simulations for the statutory pension insurance (GRV) and civil-servant pensions] By Werding, Martin; Runschke, Benedikt; Schwarz, Milena
  40. Liberty Capital Accumulation and Economic Growth By Qixin Zhan; Heng-fu Zou
  41. Time-Delayed Game Strategy Analysis Among Japan, Other Nations, and the International Atomic Energy Agency in the Context of Fukushima Nuclear Wastewater Discharge Decision By Mingyang Li; Han Pengsihua; Fujiao Meng; Zejun Wang; Weian Liu
  42. Small Firm Growth and the VAT Threshold: Evidence for the UK By Ms. Li Liu; Mr. Ben Lockwood; Eddy H.F. Tam
  43. Fukushima Nuclear Wastewater Discharge: An Evolutionary Game Theory Approach to International and Domestic Interaction and Strategic Decision-Making By Mingyang Li; Han Pengsihua; Songqing Zhao; Zejun Wang; Limin Yang; Weian Liu

  1. By: Marc Schmitt
    Abstract: This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering, specifically focusing on its application in credit decision-making. The rapid evolution of Artificial Intelligence (AI) in finance has necessitated a balance between sophisticated algorithmic decision-making and the need for transparency in these systems. The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring, while Explainable AI (XAI) methods, particularly SHapley Additive exPlanations (SHAP), provide insights into the models' decision-making processes. This study demonstrates how the combination of AutoML and XAI not only enhances the efficiency and accuracy of credit decisions but also fosters trust and collaboration between humans and AI systems. The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions, aligning with regulatory requirements and ethical considerations.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.03806&r=cmp
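    Sketch: a minimal Python illustration of the SHAP step described above, assuming a gradient-boosted classifier on made-up credit features; the data, feature names, and model choice are hypothetical, and the AutoML search itself is omitted.
      import numpy as np
      import pandas as pd
      import shap
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = pd.DataFrame({
          "income": rng.lognormal(10, 0.5, 1000),
          "debt_ratio": rng.uniform(0, 1, 1000),
          "n_late_payments": rng.poisson(1, 1000),
      })
      # Hypothetical default indicator, loosely driven by the features
      y = (X["debt_ratio"] + 0.3 * X["n_late_payments"]
           + rng.normal(0, 0.3, 1000) > 1.0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = GradientBoostingClassifier().fit(X_tr, y_tr)

      # SHAP attributes each individual credit decision to the input features
      explainer = shap.TreeExplainer(model)
      shap_values = explainer.shap_values(X_te)
      print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))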
  2. By: Himanshu Gupta; Aditya Jaiswal
    Abstract: Building a fast and accurate model for stock price forecasting remains a challenging task, and which approach forecasts best is still an open research question. Machine learning, deep learning, and statistical techniques are applied here to obtain accurate results so that investors can anticipate future trends and maximize returns in stock trading. This paper reviews several deep learning algorithms for stock price forecasting. We use a record of S&P 500 index data for training and testing. The survey examines both statistical techniques, namely moving averages and ARIMA, and deep learning models, namely LSTM, RNN, CNN, and fully convolutional networks. It discusses these models, including the autoregressive integrated moving average (ARIMA) model, the recurrent neural network (RNN), the long short-term memory (LSTM) network (a type of RNN designed for long-range dependencies), the convolutional neural network (CNN), and the fully convolutional network, in terms of error metrics such as root mean square error (RMSE), mean absolute error (MAE), and mean squared error (MSE). Models are compared on MAE: the lower the MAE, the smaller the difference between predicted and actual values, and the more accurately a model predicts prices.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06689&r=cmp
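    Sketch: the comparison criterion used above, in minimal Python; the competing "forecasts" (a naive lag and a 5-day moving average) and the price series are synthetic stand-ins for the paper's models and S&P 500 data.
      import numpy as np

      rng = np.random.default_rng(1)
      actual = 100 + np.cumsum(rng.normal(0, 1, 250))       # stand-in for index closes
      naive = actual[:-1]                                    # forecast: yesterday's price
      ma5 = np.convolve(actual, np.ones(5) / 5, "valid")     # 5-day moving average

      def scores(y, yhat):
          err = y - yhat
          return {"MSE": np.mean(err ** 2),
                  "RMSE": np.sqrt(np.mean(err ** 2)),
                  "MAE": np.mean(np.abs(err))}

      print("naive:", scores(actual[1:], naive))   # lower MAE = closer to actual prices
      print("MA(5):", scores(actual[4:], ma5))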
  3. By: Philipp Bach; Oliver Schacht; Victor Chernozhukov; Sven Klaassen; Martin Spindler
    Abstract: Proper hyperparameter tuning is essential for achieving optimal performance of modern machine learning (ML) methods in predictive tasks. While there is an extensive literature on tuning ML learners for prediction, there is little guidance available on tuning ML learners for causal machine learning or on how to select among different ML learners. In this paper, we empirically assess the relationship between the predictive performance of ML methods and the resulting causal estimation based on the Double Machine Learning (DML) approach by Chernozhukov et al. (2018). DML relies on estimating so-called nuisance parameters by treating them as supervised learning problems and using them as plug-in estimates to solve for the (causal) parameter. We conduct an extensive simulation study using data from the 2019 Atlantic Causal Inference Conference Data Challenge. We provide empirical insights on the role of hyperparameter tuning and other practical decisions for causal estimation with DML. First, we assess the importance of data splitting schemes for tuning ML learners within Double Machine Learning. Second, we investigate how the choice of ML methods and hyperparameters, including recent AutoML frameworks, impacts the estimation performance for a causal parameter of interest. Third, we assess to what extent the choice of a particular causal model, as characterized by incorporated parametric assumptions, can be based on predictive performance metrics.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.04674&r=cmp
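    Sketch: the core DML mechanic referenced above (cross-fitted nuisance estimation, then residual-on-residual regression), in minimal Python with sklearn learners; the data-generating process, learner choice, and hyperparameters are illustrative, and tuning them is exactly the question the paper studies.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(2)
      n = 2000
      X = rng.normal(size=(n, 5))
      D = np.sin(X[:, 0]) + rng.normal(0, 1, n)          # treatment
      Y = 0.5 * D + X[:, 1] ** 2 + rng.normal(0, 1, n)   # true effect theta = 0.5

      res_y, res_d = np.zeros(n), np.zeros(n)
      for train, test in KFold(5, shuffle=True, random_state=0).split(X):
          ml_l = RandomForestRegressor(n_estimators=200).fit(X[train], Y[train])  # E[Y|X]
          ml_m = RandomForestRegressor(n_estimators=200).fit(X[train], D[train])  # E[D|X]
          res_y[test] = Y[test] - ml_l.predict(X[test])
          res_d[test] = D[test] - ml_m.predict(X[test])

      theta_hat = (res_d @ res_y) / (res_d @ res_d)      # partialling-out estimator
      print("theta_hat =", round(theta_hat, 3))          # should be near 0.5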
  4. By: Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding
    Abstract: Machine learning models have demonstrated remarkable efficacy and efficiency in a wide range of stock forecasting tasks. However, the inherent challenges of data scarcity, including low signal-to-noise ratio (SNR) and data homogeneity, pose significant obstacles to accurate forecasting. To address these issues, we propose a novel approach that utilizes artificial intelligence-generated samples (AIGS) to enhance the training procedures. In our work, we introduce the Diffusion Model to generate stock factors with Transformer architecture (DiffsFormer). DiffsFormer is initially trained on a large-scale source domain, incorporating conditional guidance so as to capture the global joint distribution. When presented with a specific downstream task, we employ DiffsFormer to augment the training procedure by editing existing samples. This editing step allows us to control the strength of the editing process, determining the extent to which the generated data deviates from the target domain. To evaluate the effectiveness of DiffsFormer augmented training, we conduct experiments on the CSI300 and CSI800 datasets, employing eight commonly used machine learning models. The proposed method achieves relative improvements of 7.2% and 27.8% in annualized return ratio for the respective datasets. Furthermore, we perform extensive experiments to gain insights into the functionality of DiffsFormer and its constituent components, elucidating how they address the challenges of data scarcity and enhance the overall model performance. Our research demonstrates the efficacy of leveraging AIGS and the DiffsFormer architecture to mitigate data scarcity in stock forecasting tasks.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06656&r=cmp
  5. By: Victor Duarte; Diogo Duarte; Dejanir H. Silva
    Abstract: We develop an algorithm for solving a large class of nonlinear high-dimensional continuous-time models in finance. We approximate value and policy functions using deep learning and show that a combination of automatic differentiation and Ito’s lemma allows for the computation of exact expectations, resulting in a negligible computational cost that is independent of the number of state variables. We illustrate the applicability of our method to problems in asset pricing, corporate finance, and portfolio choice and show that the ability to solve high-dimensional problems allows us to derive new economic insights.
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10909&r=cmp
  6. By: Juan Tenorio; Wilder Perez
    Abstract: In the dynamic landscape of continuous change, Machine Learning (ML) "nowcasting" models offer a distinct advantage for informed decision-making in both public and private sectors. This study introduces ML-based GDP growth projection models for monthly rates in Peru, integrating structured macroeconomic indicators with high-frequency unstructured sentiment variables. Analyzing data from January 2007 to May 2023, encompassing 91 leading economic indicators, the study evaluates six ML algorithms to identify optimal predictors. Findings highlight the superior predictive capability of ML models using unstructured data, particularly Gradient Boosting Machine, LASSO, and Elastic Net, which exhibit a 20% to 25% reduction in prediction errors compared to traditional AR and Dynamic Factor Models (DFM). This enhanced performance is attributed to ML models' better handling of data in high-uncertainty periods, such as economic crises.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.04165&r=cmp
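    Sketch: the model comparison described above, reduced to minimal Python: an Elastic Net nowcast from many contemporaneous indicators against an AR(1) benchmark, scored by out-of-sample MAE. All series are synthetic stand-ins for the Peruvian data.
      import numpy as np
      from sklearn.linear_model import ElasticNet, LinearRegression

      rng = np.random.default_rng(3)
      T, k, split = 201, 91, 160              # months, indicators (91, as in the paper)
      X = rng.normal(size=(T, k))
      gdp = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=T)  # growth driven by a few signals

      # Elastic Net nowcast from contemporaneous indicators
      enet = ElasticNet(alpha=0.1).fit(X[1:split], gdp[1:split])
      # AR(1) benchmark from the lagged growth rate
      ar1 = LinearRegression().fit(gdp[:split - 1].reshape(-1, 1), gdp[1:split])

      mae_enet = np.mean(np.abs(gdp[split:] - enet.predict(X[split:])))
      mae_ar1 = np.mean(np.abs(gdp[split:] - ar1.predict(gdp[split - 1:-1].reshape(-1, 1))))
      print(f"MAE Elastic Net: {mae_enet:.3f}   MAE AR(1): {mae_ar1:.3f}")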
  7. By: Mayer, Thierry (Sciences Po, Paris); Rapoport, Hillel (Paris School of Economics); Umana-Dajud, Camilo (CEPII, Paris)
    Abstract: Using provisions to ease the movement of business visitors in trade agreements, we show that removing barriers to the movement of business people promotes trade. We document the increasing complexity of Free Trade Agreements and develop an algorithm that combines machine learning and text analysis techniques to examine the content of FTAs. We use the algorithm to determine which FTAs include provisions to facilitate the movement of business people and whether these are included in dispute settlement mechanisms. We show that provisions facilitating business travel are effective in promoting it and ultimately increase bilateral trade flows.
    Keywords: text analysis, machine learning, free trade agreements, business travel, migration
    JEL: F13 F22 F23
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16789&r=cmp
  8. By: Md. Alamin Talukder; Rakib Hossen; Md Ashraf Uddin; Mohammed Nasir Uddin; Uzzal Kumar Acharjee
    Abstract: Financial institutions and businesses face an ongoing challenge from fraudulent transactions, prompting the need for effective detection methods. Detecting credit card fraud is crucial for identifying and preventing unauthorized transactions. Timely detection of fraud enables investigators to take swift actions to mitigate further losses. However, the investigation process is often time-consuming, limiting the number of alerts that can be thoroughly examined each day. Therefore, the primary objective of a fraud detection model is to provide accurate alerts while minimizing false alarms and missed fraud cases. In this paper, we introduce a hybrid, dependable ensemble (ENS) machine learning (ML) model that intelligently combines multiple algorithms with weighted optimization via grid search, including Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Multilayer Perceptron (MLP), to enhance fraud identification. To address the data imbalance issue, we employ the Instance Hardness Threshold (IHT) technique in conjunction with Logistic Regression (LR), surpassing conventional approaches. Our experiments are conducted on a publicly available credit card dataset comprising 284,807 transactions. The DT, RF, KNN, and MLP models achieve accuracy rates of 99.66%, 99.73%, 98.56%, and 99.79%, respectively, while the ENS model achieves a perfect 100%. The hybrid ensemble model outperforms existing works, establishing a new benchmark for detecting fraudulent transactions in high-frequency scenarios. The results highlight the effectiveness and reliability of our approach, demonstrating superior performance metrics and showcasing its exceptional potential for real-world fraud detection applications.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14389&r=cmp
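    Sketch: a minimal Python version of the resampling-plus-tuning pipeline above: Instance Hardness Threshold undersampling with a Logistic Regression filter, followed by a grid-searched classifier. The imbalanced data are simulated, and the single random forest stands in for the paper's weighted hybrid ensemble.
      import numpy as np
      from imblearn.under_sampling import InstanceHardnessThreshold
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import GridSearchCV, train_test_split

      X, y = make_classification(n_samples=20000, weights=[0.995], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # Rebalance the training set by dropping "hard" majority-class cases
      iht = InstanceHardnessThreshold(estimator=LogisticRegression(max_iter=1000),
                                      random_state=0)
      X_res, y_res = iht.fit_resample(X_tr, y_tr)

      grid = GridSearchCV(RandomForestClassifier(random_state=0),
                          {"n_estimators": [100, 300], "max_depth": [None, 10]},
                          scoring="f1")
      grid.fit(X_res, y_res)
      print("best params:", grid.best_params_, "  test F1:", round(grid.score(X_te, y_te), 3))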
  9. By: Sahab Zandi; Kamesh Korangi; María Óskarsdóttir; Christophe Mues; Cristián Bravo
    Abstract: Whereas traditional credit scoring tends to employ only individual borrower- or loan-level predictors, it has been acknowledged for some time that connections between borrowers may result in default risk propagating over a network. In this paper, we present a model for credit risk assessment leveraging a dynamic multilayer network built from a Graph Neural Network and a Recurrent Neural Network, each layer reflecting a different source of network connection. We test our methodology in a behavioural credit scoring context using a dataset provided by U.S. mortgage financier Freddie Mac, in which different types of connections arise from the geographical location of the borrower and their choice of mortgage provider. The proposed model considers both types of connections and the evolution of these connections over time. We enhance the model by using a custom attention mechanism that weights the different time snapshots according to their importance. After testing multiple configurations, a model with GAT, LSTM, and the attention mechanism provides the best results. Empirical results demonstrate that, when it comes to predicting borrowers' probability of default, our proposed model yields both better results and novel insights into the importance of connections and timestamps, compared to traditional methods.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.00299&r=cmp
  10. By: Soheila Khajoui; Saeid Dehyadegari; Sayyed Abdolmajid Jalaee
    Abstract: Artificial Neural Networks (ANNs), a branch of artificial intelligence, have shown their value in many applications and are a suitable forecasting method. This study therefore forecasts imports in selected OECD member countries and Iran for 20 quarters, from 2021 to 2025, by means of ANNs. Import data for these countries were collected over the 50 years from 1970 to 2019 from valid sources including the World Bank, the WTO, and the IMF; the data were converted to quarterly (seasonal) frequency using the Diz formula to increase the number of observations and thereby improve the network's performance and accuracy, yielding 200 import observations in total. The study uses an LSTM network, implemented in PyCharm, with 75% of the data used for training and 25% for testing; the forecasts achieved 99% accuracy, indicating the validity and reliability of the output. Since imports are a function of consumption, and consumption was disrupted during the Covid-19 pandemic and takes time to recover sufficiently to affect imports, imports in the years after the pandemic follow a fluctuating trend.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01648&r=cmp
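    Sketch: an LSTM forecaster in the spirit of the study above, in minimal Python with Keras: windowed supervised samples, a 75/25 train/test split, and MAE on the held-out quarters. The series, window length, and layer sizes are illustrative assumptions.
      import numpy as np
      import tensorflow as tf

      rng = np.random.default_rng(4)
      t = np.arange(200)                                     # 200 quarterly observations
      imports = 100 + t + 10 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 2, 200)

      def windows(series, w=8):
          X = np.stack([series[i:i + w] for i in range(len(series) - w)])
          return X[..., None], series[w:]    # (samples, timesteps, 1 feature), targets

      X, y = windows(imports)
      split = int(0.75 * len(X))             # 75% training, 25% testing, as above

      model = tf.keras.Sequential([
          tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X[:split], y[:split], epochs=20, verbose=0)
      mae = np.mean(np.abs(model.predict(X[split:], verbose=0).ravel() - y[split:]))
      print("test MAE:", round(float(mae), 2))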
  11. By: Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
    Abstract: Modeling subrational agents, such as humans or economic households, is inherently challenging due to the difficulty in calibrating reinforcement learning models or collecting data that involves human subjects. Existing work highlights the ability of Large Language Models (LLMs) to address complex reasoning tasks and mimic human communication, while simulation using LLMs as agents shows emergent social behaviors, potentially improving our comprehension of human conduct. In this paper, we propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies through Imitation Learning. We make an assumption that LLMs can be used as implicit computational models of humans, and propose a framework to use synthetic demonstrations derived from LLMs to model subrational behaviors that are characteristic of humans (e.g., myopic behavior or risk aversion). We experimentally evaluate the ability of our framework to model sub-rationality through four simple scenarios, including the well-researched ultimatum game and marshmallow experiment. To gain confidence in our framework, we are able to replicate well-established findings from prior human studies associated with the above scenarios. We conclude by discussing the potential benefits, challenges and limitations of our framework.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08755&r=cmp
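    Sketch: the imitation-learning step described above, in minimal Python: fit a policy by supervised learning on (state, action) demonstrations. Here the demonstrations are a synthetic stand-in for LLM-generated ultimatum-game responses, with a hypothetical subrational demonstrator that rejects "unfair" offers.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(5)
      offers = rng.uniform(0, 1, 500)          # state: share of the pie offered
      # Hypothetical subrational demonstrator: rejects offers below a noisy ~30% threshold
      accept = (offers > rng.normal(0.3, 0.05, 500)).astype(int)

      # Behavior cloning: a classifier mapping states to demonstrated actions
      policy = DecisionTreeClassifier(max_depth=3).fit(offers.reshape(-1, 1), accept)
      print("action at offer=0.2:", policy.predict([[0.2]])[0])   # likely reject (0)
      print("action at offer=0.5:", policy.predict([[0.5]])[0])   # likely accept (1)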
  12. By: Kshama Dwarakanath; Svitlana Vyetrenko; Peyman Tavallali; Tucker Balch
    Abstract: We introduce a multi-agent simulator for economic systems composed of heterogeneous Household, heterogeneous Firm, Central Bank and Government agents, which can be subjected to exogenous stochastic shocks. The interaction between agents defines the production and consumption of goods in the economy alongside the flow of money. Each agent can be designed to act according to fixed, rule-based strategies or learn their strategies using interactions with others in the simulator. We ground our simulator by choosing agent heterogeneity parameters based on economic literature, while designing their action spaces in accordance with real data in the United States. Our simulator facilitates the use of reinforcement learning strategies for the agents via an OpenAI Gym style environment definition for the economic system. We demonstrate the utility of our simulator by simulating and analyzing two hypothetical (yet interesting) economic scenarios. The first scenario investigates the impact of heterogeneous household skills on their learned preferences to work at different firms. The second scenario examines the impact of a positive production shock to one of two firms on its pricing strategy in comparison to the second firm. We hope that our platform sets the stage for subsequent research at the intersection of artificial intelligence and economics.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09563&r=cmp
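    Sketch: what an OpenAI Gym style environment definition for an economic agent can look like, in minimal Python. The household state, action, and payoff below are toy assumptions, not ABIDES-Economist's actual specification.
      import numpy as np

      class ToyHouseholdEnv:
          """A household picks hours worked; reward = log consumption - labor disutility."""
          def __init__(self, wage=1.0, price=1.0):
              self.wage, self.price = wage, price

          def reset(self):
              self.savings = 0.0
              return np.array([self.savings])          # initial observation

          def step(self, hours):                       # action: hours in [0, 1]
              income = self.wage * hours
              consumption = income / self.price        # hand-to-mouth placeholder
              reward = np.log(1e-6 + consumption) - 0.5 * hours ** 2
              return np.array([self.savings]), reward, False, {}

      env = ToyHouseholdEnv()
      obs = env.reset()
      obs, r, done, info = env.step(0.5)
      print("reward for working half time:", round(float(r), 3))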
  13. By: Vittoria La Serra (Bank of Italy); Emiliano Svezia (Bank of Italy)
    Abstract: Since 2016, insurance corporations have been reporting granular asset data in Solvency II templates on a quarterly basis. Assets are uniquely identified by codes that must be kept stable and consistent over time; nevertheless, due to reporting errors, unexpected changes in these codes may occur, leading to inconsistencies when compiling insurance statistics. The paper addresses this issue as a statistical matching problem and proposes a supervised classification approach to detect such anomalies. Test results show the potential benefits of machine learning techniques to data quality management processes, specifically of a selected random forest model for supervised binary classification, and the efficiency gains arising from automation.
    Keywords: insurance data, data quality management, record linkage, statistical matching, machine learning
    JEL: C18 C81 G22
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_821_23&r=cmp
  14. By: Tomasz Żądło; Adam Chwila
    Abstract: The use of machine-learning techniques has grown in numerous research areas. Currently, it is also widely used in statistics, including official statistics, both for data collection (e.g. satellite imagery, web scraping and text mining, data cleaning, integration and imputation) and for data analysis. However, the usage of these methods in survey sampling, including small area estimation, is still very limited. Therefore, we propose a predictor supported by these algorithms which can be used to predict any population or subpopulation characteristic based on cross-sectional and longitudinal data. Machine learning methods have already been shown to be very powerful in identifying and modelling complex and nonlinear relationships between variables, which means that they have very good properties in the case of strong departures from the classic assumptions. Therefore, we analyse the performance of our proposal under a different set-up that is, in our opinion, of greater importance in real-life surveys. We study only small departures from the assumed model, to show that our proposal is a good alternative in this case as well, even in comparison with methods that are optimal under the model. Moreover, we propose a method for estimating the accuracy of machine learning predictors, which makes it possible to compare their accuracy with that of classic methods, with accuracy measured as in survey sampling practice. The solution of this problem is indicated in the literature as one of the key issues in the integration of these approaches. The simulation studies are based on a real, longitudinal dataset, freely available from the Polish Local Data Bank, in which we consider the problem of predicting subpopulation characteristics in the most recent period, "borrowing strength" from other subpopulations and time periods.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07521&r=cmp
  15. By: Gennaro Catapano (Bank of Italy)
    Abstract: This paper presents a new agent-based model (ABM) of the real estate and credit sectors. The main purpose of the model is to study the effects of introducing a borrower-based macroprudential policy on the banking system, households, and the real estate market. The paper describes a comprehensive set of policy experiments simulating the effects of introducing different loan-to-value (LTV) caps on newly issued mortgages. The analysis sheds light on the relevance of the degree of heterogeneity in household indebtedness tolerance and its mean level. Moreover, it studies the impact of the phase-in period length and of the timing of the introduction of such a measure. While generally effective in reducing credit risk and curbing both house prices and household indebtedness growth, these measures may also have transitory negative side effects on banks’ balance sheets and real estate markets. The results suggest the scenarios, calibration, and timing under which the introduction of an LTV cap might have the most favorable outcomes.
    Keywords: agent based model, housing market, macroprudential policy
    JEL: D1 D31 E58 R2 R21 R31
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_822_23&r=cmp
  16. By: Nathan I. N. Henry; Mangor Pedersen; Matt Williams; Jamin L. B. Martin; Liesje Donkin
    Abstract: The value-loading problem is a significant challenge for researchers aiming to create artificial intelligence (AI) systems that align with human values and preferences. This problem requires a method to define and regulate safe and optimal limits of AI behaviors. In this work, we propose HALO (Hormetic ALignment via Opponent processes), a regulatory paradigm that uses hormetic analysis to regulate the behavioral patterns of AI. Behavioral hormesis is a phenomenon where low frequencies of a behavior have beneficial effects, while high frequencies are harmful. By modeling behaviors as allostatic opponent processes, we can use either Behavioral Frequency Response Analysis (BFRA) or Behavioral Count Response Analysis (BCRA) to quantify the hormetic limits of repeatable behaviors. We demonstrate how HALO can solve the 'paperclip maximizer' scenario, a thought experiment where an unregulated AI tasked with making paperclips could end up converting all matter in the universe into paperclips. Our approach may be used to help create an evolving database of 'values' based on the hedonic calculus of repeatable behaviors with decreasing marginal utility. This positions HALO as a promising solution for the value-loading problem, which involves embedding human-aligned values into an AI system, and the weak-to-strong generalization problem, which explores whether weak models can supervise stronger models as they become more intelligent. Hence, HALO opens several research avenues that may lead to the development of a computational value system that allows an AI algorithm to learn whether the decisions it makes are right or wrong.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07462&r=cmp
  17. By: Fabian Krause; Jan-Peter Calliess
    Abstract: In Statistical Arbitrage (StatArb), classical mean reversion trading strategies typically hinge on asset-pricing or PCA based models to identify the mean of a synthetic asset. Once such a (linear) model is identified, a separate mean reversion strategy is then devised to generate a trading signal. With a view to generalising such an approach and making it truly data-driven, we study the utility of Autoencoder architectures in StatArb. As a first approach, we employ a standard Autoencoder trained on US stock returns to derive trading strategies based on the Ornstein-Uhlenbeck (OU) process. To further enhance this model, we take a policy-learning approach and embed the Autoencoder network into a neural network representation of a space of portfolio trading policies. This integration outputs portfolio allocations directly and is end-to-end trainable by backpropagation of the risk-adjusted returns of the neural policy. Our findings demonstrate that this innovative end-to-end policy learning approach not only simplifies the strategy development process, but also yields superior gross returns over its competitors, illustrating the potential of end-to-end training over classical two-stage approaches.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08233&r=cmp
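    Sketch: the classical first-stage step mentioned above, in minimal Python: fit an Ornstein-Uhlenbeck process to a (here simulated) spread via its exact AR(1) representation, then compute a mean-reversion entry signal. Parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(6)
      theta0, mu0, sig0, dt = 5.0, 0.0, 0.3, 1 / 252
      x = np.zeros(2000)
      for t in range(1999):        # simulate OU: dx = theta*(mu - x)dt + sigma dW
          x[t + 1] = x[t] + theta0 * (mu0 - x[t]) * dt + sig0 * np.sqrt(dt) * rng.normal()

      # Exact discretization: x_{t+1} = a + b x_t + eps, with b = exp(-theta dt)
      b, a = np.polyfit(x[:-1], x[1:], 1)
      theta_hat = -np.log(b) / dt
      mu_hat = a / (1 - b)
      resid = x[1:] - (a + b * x[:-1])
      sigma_hat = resid.std() * np.sqrt(2 * theta_hat / (1 - b ** 2))

      # Trading signal: z-score against the estimated stationary distribution
      z = (x[-1] - mu_hat) / (sigma_hat / np.sqrt(2 * theta_hat))
      print(f"theta={theta_hat:.2f}  mu={mu_hat:.3f}  sigma={sigma_hat:.3f}  z={z:.2f}")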
  18. By: Connor R. Forsythe; Cristian Arteaga; John P. Helveston
    Abstract: This paper introduces the Heterogeneous Aggregate Valence Analysis (HAVAN) model, a novel class of discrete choice models. We adopt the term "valence" to encompass any latent quantity used to model consumer decision-making (e.g., utility, regret, etc.). Diverging from traditional models that parameterize heterogeneous preferences across various product attributes, HAVAN models (pronounced "haven") instead directly characterize alternative-specific heterogeneous preferences. This innovative perspective on consumer heterogeneity affords unprecedented flexibility and significantly reduces simulation burdens commonly associated with mixed logit models. In a simulation experiment, the HAVAN model demonstrates superior predictive performance compared to state-of-the-art artificial neural networks. This finding underscores the potential for HAVAN models to improve discrete choice modeling capabilities.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.00184&r=cmp
  19. By: Pinski, Marc; Haas, Miguel; Benlian, Alexander
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:142981&r=cmp
  20. By: Tao Ren; Ruihan Zhou; Jinyang Jiang; Jiafeng Liang; Qinghao Wang; Yijie Peng
    Abstract: Formulaic alphas are mathematical formulas that transform raw stock data into indicative trading signals. In industry, a collection of formulaic alphas is combined to enhance modeling accuracy. Existing alpha mining approaches employ only neural network agents, which cannot exploit the structural information of the solution space. Moreover, they do not consider the correlation between alphas in the collection, which limits synergistic performance. To address these problems, we propose a novel alpha mining framework, which formulates the alpha mining problem as a reward-dense Markov Decision Process (MDP) and solves the MDP with risk-seeking Monte Carlo Tree Search (MCTS). The MCTS-based agent fully exploits the structural information of the discrete solution space, and the risk-seeking policy explicitly optimizes best-case performance rather than average outcomes. Comprehensive experiments are conducted to demonstrate the efficiency of our framework. Our method outperforms all state-of-the-art benchmarks on two real-world stock sets under various metrics. Backtest experiments show that our alphas achieve the most profitable results under a realistic trading setting.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07080&r=cmp
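    Sketch: risk-seeking MCTS in miniature, in Python: sequences are built token-by-token, and the backup keeps the best reward seen (not the average), optimizing best-case performance. The token set and reward are toy stand-ins for the paper's alpha grammar and reward.
      import math, random
      random.seed(0)

      TOKENS, LENGTH = [-1.0, -0.5, 0.0, 0.5, 1.0], 4

      def reward(seq):             # toy black-box objective, maximized at the target
          return -sum((a - b) ** 2 for a, b in zip(seq, [0.5, -0.5, 1.0, 0.0]))

      class Node:
          def __init__(self):
              self.children, self.visits, self.best = {}, 0, -math.inf

      def search(root, iters=2000, c=0.5):
          for _ in range(iters):
              node, seq = root, []
              while len(seq) < LENGTH:                    # select / expand
                  new = [t for t in TOKENS if t not in node.children]
                  if new:
                      tok = random.choice(new)
                      node.children[tok] = Node()
                  else:                                   # UCT over best-case values
                      tok = max(node.children, key=lambda t:
                                node.children[t].best + c * math.sqrt(
                                    math.log(node.visits + 1) / node.children[t].visits))
                  node = node.children[tok]
                  seq.append(tok)
              r, node = reward(seq), root
              for tok in [None] + seq:                    # risk-seeking backup: keep the max
                  node = node if tok is None else node.children[tok]
                  node.visits += 1
                  node.best = max(node.best, r)
          return root

      root = search(Node())
      print("best reward found:", max(c.best for c in root.children.values()))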
  21. By: Heimann, Tobias; Delzeit, Ruth
    Abstract: This study employs a global Computable General Equilibrium (CGE) model to quantify the effects of aquaculture production on agricultural markets, food prices and land use. We conduct a scenario analysis simulating, first, the fish sector developments expected by FAO; second, a rebuilding of sustainable wild fish stocks to achieve SDG 14; and third, a stronger expansion in aquaculture production with varying fishmeal supply. The results show direct effects of aquaculture production and limited fishmeal supply on agricultural production, land use, and food prices. Substituting fishmeal with plant-based feed when rebuilding sustainable fish stocks has lower effects on agricultural markets than growth in aquaculture production comparable to the first decade of this century. In addition, expanding aquaculture production increases prices for capture fish via fishmeal demand, instead of reducing capture fish prices by substituting consumer demand. Finally, rebuilding sustainable fish stocks has significant adverse effects on food prices in marine fish dependent regions in the southern hemisphere, and these regions need support in the transition period until sustainable fish stocks are achieved. The results of this study illustrate the interconnectedness of SDG 14 (life below water), SDG 15 (life on land) and SDG 2 (zero hunger).
    Keywords: Computable general equilibrium (CGE), Aquaculture, Land use, Agricultural markets, Agricultural commodity trade, SDGs, Fishmeal, Soymeal
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwkie:281986&r=cmp
  22. By: Richiardi, Matteo; Bronka, Patryk; van de Ven, Justin
    Abstract: We analyse the complex dynamic feedback effects between different life domains over the life course, providing a quantification of the direct (not mediated) and indirect (mediated) effects. To extend the analysis in scope and time beyond the limitations of existing data, we use a rich dynamic microsimulation model of individual life course trajectories parameterised and validated to the UK context. We interpret findings in terms of the implied attenuation or reinforcement mechanisms at play, and discuss implications for health and economic inequalities.
    Date: 2024–02–27
    URL: http://d.repec.org/n?u=RePEc:ese:cempwp:cempa2-24&r=cmp
  23. By: Elie Bouri (School of Business, Lebanese American University, Lebanon); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany)
    Abstract: In the wake of a massive push to design policies to tackle climate change, we study the role of climate policy uncertainty in shaping the presidential approval ratings of the United States (US). We control for other policy-related uncertainties and geopolitical risks, over and above the macroeconomic and financial predictors used in the earlier literature on drivers of the approval ratings of the US president. Because we study as many as 19 determinants, and nonlinearity is a well-established observation in this area of research, we utilize random forests, a machine-learning approach, to derive our results over the monthly period 1987:04 to 2023:12. We find that, though the association of the presidential approval ratings with climate policy uncertainty is moderately negative and nonlinear, this type of uncertainty is in fact relatively more important than other measures of policy-related uncertainty, as well as many of the widely-used macroeconomic and financial indicators associated with presidential approval. In addition, and more importantly, we also detect that the importance of climate policy uncertainty has grown in recent years in terms of its impact on the approval ratings of the US president.
    Keywords: Presidential approval ratings, Climate policy uncertainty, Random forests
    JEL: C22 Q54
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202406&r=cmp
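    Sketch: the random-forest exercise above in minimal Python: fit a forest on many candidate predictors of a target and rank them by permutation importance. The 19 simulated predictors and the approval series are stand-ins for the paper's data.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(7)
      n = 440                                  # roughly the monthly sample 1987-2023
      names = [f"x{i}" for i in range(19)]     # 19 hypothetical determinants
      X = rng.normal(size=(n, 19))
      # Nonlinear dependence: x0 (think: climate policy uncertainty) and x1 matter
      approval = 50 - 4 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 1, n)

      rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, approval)
      imp = permutation_importance(rf, X, approval, n_repeats=10, random_state=0)
      for i in np.argsort(-imp.importances_mean)[:3]:      # top three drivers
          print(names[i], round(imp.importances_mean[i], 3))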
  24. By: Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
    Abstract: This paper investigates the voting behaviors of Large Language Models (LLMs), particularly OpenAI's GPT4 and LLaMA2, and their alignment with human voting patterns. Our approach included a human voting experiment to establish a baseline for human preferences and a parallel experiment with LLM agents. The study focused on both collective outcomes and individual preferences, revealing differences in decision-making and inherent biases between humans and LLMs. We observed a trade-off between preference diversity and alignment in LLMs, with a tendency towards more uniform choices as compared to the diverse preferences of human voters. This finding indicates that LLMs could lead to more homogenized collective outcomes when used in voting assistance, underscoring the need for cautious integration of LLMs into democratic processes.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01766&r=cmp
  25. By: Evangelos Katsamakas; Oleg V. Pavlov; Ryan Saklad
    Abstract: Artificial intelligence (AI) advances and the rapid adoption of generative AI tools like ChatGPT present new opportunities and challenges for higher education. While substantial literature discusses AI in higher education, there is a lack of a systemic approach that captures a holistic view of the AI transformation of higher education institutions (HEIs). To fill this gap, this article, taking a complex systems approach, develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical HEI. Our model accounts for the forces that drive the AI transformation and the consequences of the AI transformation on value creation in a typical HEI. The article identifies and analyzes several reinforcing and balancing feedback loops, showing how, motivated by AI technology advances, the HEI invests in AI to improve student learning, research, and administration. The HEI must take measures to deal with academic integrity problems and adapt to changes in available jobs due to AI, emphasizing AI-complementary skills for its students. However, HEIs face a competitive threat and several policy traps that may lead to decline. HEI leaders need to become systems thinkers to manage the complexity of the AI transformation and benefit from the AI feedback loops while avoiding the associated pitfalls. We also discuss long-term scenarios, the notion of HEIs influencing the direction of AI, and directions for future research on AI transformation.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08143&r=cmp
  26. By: Aliona Cebotari; Enrique Chueca-Montuenga; Yoro Diallo; Yunsheng Ma; Ms. Rima A Turk; Weining Xin; Harold Zavarce
    Abstract: The paper explores the drivers of political fragility by focusing on coups d’état as symptomatic of such fragility. It uses event studies to identify factors that exhibit significantly different dynamics in the runup to coups, and machine learning to identify these stressors and more structural determinants of fragility—as well as their nonlinear interactions—that create an environment propitious to coups. The paper finds that the destabilization of a country’s economic, political or security environment—such as low growth, high inflation, weak external positions, political instability and conflict—set the stage for a higher likelihood of coups, with overlapping stressors amplifying each other. These stressors are more likely to lead to breakdowns in political systems when demographic pressures and underlying structural weaknesses (especially poverty, exclusion, and weak governance) are present or when policies are weaker, through complex interactions. Conversely, strengthened fundamentals and macropolicies have higher returns in structurally fragile environments in terms of staving off political breakdowns, suggesting that continued engagement by multilateral institutions and donors in fragile situations is likely to yield particularly high dividends. The model performs well in predicting coups out of sample, having predicted a high probability of most 2020-23 coups, including in the Sahel region.
    Keywords: Fragility; Drivers of Fragility; Coup d’État; Machine Learning
    Date: 2024–02–16
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2024/034&r=cmp
  27. By: Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz
    Abstract: There is increasing interest in using LLMs as decision-making "agents." Doing so includes many degrees of freedom: which model should be used; how should it be prompted; should it be asked to introspect, conduct chain-of-thought reasoning, etc.? Settling these questions -- and more broadly, determining whether an LLM agent is reliable enough to be trusted -- requires a methodology for assessing such an agent's economic rationality. In this paper, we provide one. We begin by surveying the economic literature on rational decision making, taxonomizing a large set of fine-grained "elements" that an agent should exhibit, along with dependencies between them. We then propose a benchmark distribution that quantitatively scores an LLM's performance on these elements and, combined with a user-provided rubric, produces a "rationality report card." Finally, we describe the results of a large-scale empirical experiment with 14 different LLMs, characterizing both the current state of the art and the impact of different model sizes on models' ability to exhibit rational behavior.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09552&r=cmp
  28. By: Giulia Bovini (Bank of Italy); Emanuele Dicarlo (Bank of Italy); Antonella Tomasi (Bank of Italy)
    Abstract: The Italian Government has redesigned its anti-poverty measures, replacing the minimum income scheme (RdC) with a new inclusion allowance from 2024: the 'assegno di inclusione' (AdI). According to the Bank of Italy's microsimulation model (BIMic), fewer households can apply for the new allowance because of its more stringent eligibility criteria. We provide an evaluation of the 'morning-after' effects of the reform, which hinges on the assumption that the introduction of the new scheme does not affect individual choices (labour supply, in particular). The AdI reduces the incidence of absolute poverty and income inequality (as measured by the Gini index) compared with a scenario with no subsidy in place, but by less than the RdC did because of its lower coverage. The static and non-behavioural nature of the model does not allow us to estimate how the reform affects labour supply. Nevertheless, we show that the monetary disincentives to participate in the labour market fall by around one fourth for individuals belonging to the lowest quintile of the income distribution.
    Keywords: basic income, poverty, redistribution, microsimulation, occupation
    JEL: C15 C63 H23 H31 I32 J68
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_820_23&r=cmp
  29. By: Paul Hurlet (LAET - Laboratoire Aménagement Économie Transports - UL2 - Université Lumière - Lyon 2 - ENTPE - École Nationale des Travaux Publics de l'État - CNRS - Centre National de la Recherche Scientifique); Ouassim Manout (LAET - Laboratoire Aménagement Économie Transports - UL2 - Université Lumière - Lyon 2 - ENTPE - École Nationale des Travaux Publics de l'État - CNRS - Centre National de la Recherche Scientifique); Azise Oumar Diallo (IFPEN - IFP Energies nouvelles - IFPEN - IFP Energies nouvelles)
    Abstract: This work addresses the challenges of implementing shared e-scooter services (SSS) in urban areas. Despite their potential for sustainable mobility, issues like road safety and street cluttering persist. Policy regulation is crucial, and recent efforts have focused on free-floating e-scooter parking legislation. To assist decision-making, this paper proposes an agent-based framework to design SSS parking supply and evaluate its impact. The methodology is applied in Lyon, France, where shared e-scooter services continue to gain ground. The main outcomes show that parking regulation can introduce conflicting objectives, reducing SSS use as access and egress walking distances increase.
    Keywords: Shared e-Scooter Services (SSS), Micromobility, Regulation, Parking, Agent-Based Model (ABM), MATSim
    Date: 2024–04–23
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04422427&r=cmp
  30. By: Ying, Xiangji; Vorland, Colby J.; Qureshi, Riaz; Brown, Andrew William (Indiana University School of Public Health-Bloomington); Kilicoglu, Halil; Saldanha, Ian; DeVito, Nicholas J; Mayo-Wilson, Evan
    Abstract: Background: Selective non-reporting of studies and study results undermines trust in randomized controlled trials (RCTs). Changes to clinical trial outcomes are sometimes associated with bias. Manually comparing trial documents to identify changes in trial outcomes is time consuming. Objective: This study aims to assess the capacity of the Generative Pretrained Transformer 4 (GPT-4) large language model in detecting and describing changes in trial outcomes within ClinicalTrials.gov records. Methods: We will first prompt GPT-4 to define trial outcomes using five elements (i.e., domain, specific measurement, specific metric, method of aggregation, and time point). We will then prompt GPT-4 to identify outcome changes between the prospective versions of registrations and the most recent versions of registrations. We will use a random sample of 150 RCTs (~1,500 outcomes) registered on ClinicalTrials.gov. We will include “Completed” trials categorized as “Phase 3” or “Not Applicable” and with results posted on ClinicalTrials.gov. Two independent raters will rate GPT-4’s judgements, and we will assess GPT-4’s accuracy and reliability. We will also explore the heterogeneity in GPT-4’s performance by the year of trial registration and trial type (i.e., applicable clinical trials, NIH-funded trials, and other trials). Discussion: We aim to develop methods that could assist systematic reviewers, peer reviewers, journal editors, and readers in monitoring changes in clinical trial outcomes, streamlining the review process, and improving transparency and reliability of clinical trial reporting.
    Date: 2024–02–29
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:npvwr&r=cmp
  31. By: Stefano Pietrosanti (Bank of Italy); Edoardo Rainone (Bank of Italy)
    Abstract: We present a simple model of a credit market in which firms borrow from multiple banks and credit relationships are simultaneous and interdependent. In this environment, financial and real shocks induce credit reallocation across more and less affected lenders and borrowers. We show that the interdependence introduces a bias in the standard estimates of the effect of shocks on credit relationships. Moreover, we show that the use of firm fixed effects does not solve the issue, may magnify the problem and that the same bias contaminates fixed effects estimates. We propose a novel model that nests commonly used ones, uses the same information set, accounts for and quantifies spillover effects among credit relationships. We document its properties with Monte Carlo simulations and apply it to real credit register data. Evidence from the empirical application suggests that estimates not accounting for spillovers are indeed highly biased.
    Keywords: credit markets, shocks propagation, networks, identification
    JEL: C30 L14 G21
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1436_23&r=cmp
  32. By: Duy-Minh Dang; Hao Zhou
    Abstract: Prices of option contracts on two assets within uncertain volatility models for worst and best-case scenarios satisfy a two-dimensional Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE) with cross derivatives terms. Traditional methods mainly involve finite differences and policy iteration. This "discretize, then optimize" paradigm requires complex rotations of computational stencils for monotonicity. This paper presents a novel and more streamlined "decompose and integrate, then optimize" approach to tackle the aforementioned HJB PDE. Within each timestep, our strategy employs a piecewise constant control, breaking down the HJB PDE into independent linear two-dimensional PDEs. Using known closed-form expressions for the Fourier transforms of the Green's functions associated with these PDEs, we determine an explicit formula for these functions. Since the Green's functions are non-negative, the solutions to the PDEs, cast as two-dimensional convolution integrals, can be conveniently approximated using a monotone integration method. Such integration methods, including a composite quadrature rule, are generally available in popular programming languages. To further enhance efficiency, we propose an implementation of this monotone integration scheme via Fast Fourier Transforms, exploiting the Toeplitz matrix structure. Optimal control is subsequently obtained by efficiently synthesizing the solutions of the individual PDEs. The proposed monotone piecewise constant control method is demonstrated to be both $\ell_{\infty} $-stable and consistent in the viscosity sense, ensuring its convergence to the viscosity solution of the HJB equation. Numerical results show remarkable agreement with benchmark solutions obtained by unconditionally monotone finite differences, tree methods, and Monte Carlo simulation, underscoring the robustness and effectiveness of our method.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06840&r=cmp
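    Sketch: a one-factor toy version of the "decompose and integrate, then optimize" idea above, in Python. Each timestep solves the linear PDE for each constant volatility by convolving the value function with the corresponding Gaussian Green's function (non-negative weights, hence monotone), then takes the pointwise max over the two controls (the best-case scenario; min would give the worst case). Direct convolution is used for clarity where the paper exploits FFTs and Toeplitz structure, and the two-factor setting is reduced to one.
      import numpy as np

      n, K, T, steps = 2049, 100.0, 0.25, 25
      x = np.linspace(np.log(1.0), np.log(400.0), n)     # log-price grid
      dx, dt = x[1] - x[0], T / steps
      v = np.maximum(np.exp(x) - K, 0.0)                 # call payoff
      vols = [0.15, 0.35]                                # uncertain volatility bounds

      xs = (np.arange(n) - n // 2) * dx                  # kernel support on the same grid
      for _ in range(steps):
          candidates = []
          for sig in vols:
              drift = -0.5 * sig ** 2 * dt               # zero rates, for simplicity
              g = np.exp(-(xs + drift) ** 2 / (2 * sig ** 2 * dt))
              g /= g.sum()                               # normalized Green's function weights
              candidates.append(np.convolve(v, g, mode="same"))
          v = np.maximum.reduce(candidates)              # optimize over the two controls

      print("best-case price at S=100:", round(float(np.interp(np.log(100.0), x, v)), 2))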
  33. By: Nazif Durmaz; Henry Thompson
    Abstract: This paper simulates the constant cost trade model with labor inputs for three and five regions and products aggregated from the World Input-Output Database. The regions start with America, Asia, and Europe trading Resources, Manufactures, and Services. Each region maximizes Cobb-Douglas utility based on global consumption shares subject to balanced trade and global material balance. Simulated autarky and trade with the rest of the world lead to the full model with multiple potential equilibria. Diversified exports and import competition characterize the trade patterns, with gains from trade relative to autarky of up to 20% for the five regions.
    Keywords: comparative advantage; relative prices; simulation; constant cost trade
    JEL: F10 F14
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2024-03&r=cmp
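    Sketch: the constant cost model in miniature, in Python: two regions, two goods, Cobb-Douglas demand, balanced trade, and full specialization along comparative advantage. Labor requirements and the expenditure share are made-up numbers, not the WIOD aggregates used in the paper.
      alpha = 0.5                      # Cobb-Douglas expenditure share on good 1
      L = {"A": 100.0, "B": 100.0}     # labor endowments
      a = {"A": [1.0, 2.0],            # unit labor requirements [good 1, good 2]
           "B": [3.0, 1.0]}            # A holds comparative advantage in good 1

      def autarky_utility(r):          # each region spends shares alpha, 1 - alpha
          q1 = alpha * L[r] / a[r][0]
          q2 = (1 - alpha) * L[r] / a[r][1]
          return q1 ** alpha * q2 ** (1 - alpha)

      # Free trade with full specialization; good 2 is the numeraire
      Q1, Q2 = L["A"] / a["A"][0], L["B"] / a["B"][1]
      p = alpha / (1 - alpha) * Q2 / Q1      # market-clearing relative price of good 1
      income = {"A": p * Q1, "B": Q2}

      for r in ("A", "B"):
          c1, c2 = alpha * income[r] / p, (1 - alpha) * income[r]
          gain = c1 ** alpha * c2 ** (1 - alpha) / autarky_utility(r) - 1
          print(f"region {r}: gains from trade = {gain:.1%}")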
  34. By: Rosa-García, Alfonso
    Abstract: This study explores student responses to AI-generated educational content, specifically a teaching video delivered by an AI-replicant of their professor. Utilizing ChatGPT-4 for scripting and Heygen technology for avatar creation, the research investigates whether students' awareness of the AI's involvement influences their perception of the content's utility. With 97 participants from first-year economics and business programs, the findings reveal a significant difference in valuation between students informed of the AI origin and those who were not, with the former group valuing the content less. This indicates a bias against AI-generated materials based on their origin. The paper discusses the implications of these findings for the adoption of AI in educational settings, highlighting the necessity of addressing student biases and ethical considerations in the deployment of AI-generated educational materials. This research contributes to the ongoing debate on the integration of AI tools in education and their potential to enhance learning experiences.
    Keywords: AI-Generated Content; Virtual Avatars; Student Perceptions; Technology Adoption
    JEL: I23 O33
    Date: 2024–02–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:120135&r=cmp
  35. By: Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
    Abstract: We create a firm-level ChatGPT investment score, based on conference calls, that measures managers' anticipated changes in capital expenditures. We validate the score with interpretable textual content and its strong correlation with CFO survey responses. The investment score predicts future capital expenditure for up to nine quarters, controlling for Tobin's q and other determinants, implying the investment score provides incremental information about firms' future investment opportunities. The investment score also separately forecasts future total, intangible, and R&D investments. High-investment-score firms experience significant negative future abnormal returns. We demonstrate ChatGPT's applicability to measure other policies, such as dividends and employment.
    JEL: C81 E22 G14 G31 G32 O33
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32161&r=cmp
  36. By: Olivier Bargain; Damien Echevin; Audrey Etienne; Nicolas Moreau (CEMOI - Centre d'Économie et de Management de l'Océan Indien - UR - Université de La Réunion); Adrien Pacifico
    Abstract: The present paper investigates the tax returns of French cohabiting couples with children, defined here as neither married nor in a civil union. These couples represent an interesting case, because they form two separate tax units according to French tax laws and must optimally assign their children to one of the parents' tax units to optimize tax rebates. Using administrative tax data and a microsimulation model, we analyze whether cohabiting couples allocate their children to minimize the joint tax burden of the family. We find, however, that children are not optimally allocated in 25% of cases. We interpret the reasons why couples fail to financially optimize their situation by discussing the usual explanations (e.g., transaction costs, "simple rule", inertia) as well as a more specific reason: the potential non-cooperative behavior of cohabiting couples, possibly related to the lack of a binding agreement or potential asymmetries of information between partners. We also find suggestive evidence regarding heuristics (such as the equal split rule for an even number of children), a large degree of inertia (based on fiscal status changes over two years), and possible non-cooperation (suboptimal couples tend to separate more and marry less in the subsequent period).
    Date: 2022–06–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04440515&r=cmp
  37. By: Schmidt, James
    Abstract: Because of increasing wildfire risk and associated losses, fire insurance has become more difficult to obtain in Northern California. The only insurance alternative for homeowners who are unable to find conventional home insurance is limited and costly coverage available through the California FAIR Plan. Counties located in the Central Sierras have been particularly hard hit with insurance cancellations. FAIR Plan policies in several of those counties exceeded 20% of all policies in 2021. Results from three recent assessments, based on wildfire simulation models, agree that counties in the Central Sierras are among the most at-risk for wildfire-caused structure loss. Most housing losses in the 2013-2022 decade, however, were the result of wind-driven fires in the Northern Sierras and in the Northern San Francisco Bay Area. 85% of all losses occurred in fires where a Red Flag Warning (RFW) for high winds had been issued by the National Weather Service. The Northern Sierras and the North Bay Area averaged 60% more RFW days during the fall fire season compared to the Central Sierras. Strong downslope “Diablo” winds from the Great Basin deserts were involved in seven of the most destructive fires, accounting for 65% of the total housing losses. Based on records from 109 weather stations throughout the Sierras and the Bay Area, these wind events occur primarily in the Northern Sierras and the Bay Area. Climate models have predicted that Diablo-type winds should decrease as the interior deserts warm, but weather stations in both the Bay Area and the Sierras recorded a large increase in the number of strong Diablo wind days from 2017 through 2021. All seven of the Diablo wind fires occurred during that time span. Fires driven by strong Diablo winds fit into a category of disasters referred to as “black swan” events – rare occurrences that have very large effects. Because these fires occur so infrequently, they have minimal effect on risk estimates produced by averaging together the outcomes of thousands of simulations. Exceedance probability analysis (Ager et al., 2021) can help to identify the communities most at risk from such high-loss, low-probability events. Combining exceedance probability analysis with simulation models that capture the frequency and location of extreme wind events should cause county risk rankings to more closely match actual losses. As a result, the relative risk ratings (and FAIR Plan policies) assigned to the Central Sierras should be reduced.
    Keywords: Wildfire; Fire Insurance; FAIR Plan; Diablo Wind; Red Flag Warnings; Exceedance Probability; Black Swan; Simulation; FSIM; ELMFIRE; Exposure; Ignition Density; Risk; California; Downslope Winds; Climate models; RAWS; weather stations; Wildland Urban Interface; WUI; Camp Fire; Tubbs Fire; Central Sierras; San Francisco Bay Area; Northern Sierras
    JEL: G22 Q0 Q54 Y1 Y91
    Date: 2024–02–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:120195&r=cmp
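    Exceedance probability analysis summarizes simulated outcomes by the probability that losses meet or exceed a given level, rather than by their average, which is why it preserves the signal from rare extreme seasons. A schematic Python calculation follows; the loss distribution and thresholds are hypothetical, not the Ager et al. (2021) implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical structure losses per simulated fire season; heavy-tailed
      # to mimic rare, wind-driven "black swan" fires.
      losses = rng.pareto(1.5, size=10_000) * 10

      def exceedance_prob(losses, threshold):
          # Fraction of simulated seasons with losses at or above the threshold.
          return float(np.mean(losses >= threshold))

      for t in (100, 1_000, 10_000):
          print(f"P(loss >= {t:>6}) = {exceedance_prob(losses, t):.4f}")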
  38. By: Lin, Alessandro; Peruffo, Marcel
    Abstract: We propose a novel methodology for solving Heterogeneous Agent New Keynesian (HANK) models with aggregate uncertainty and the Zero Lower Bound (ZLB) on nominal interest rates. Our efficient solution strategy combines the sequence-space Jacobian methodology of Auclert et al. (2021) with a tractable structure for aggregate uncertainty by means of a two-regime shock structure (sketched after this entry). We apply the method to a simple HANK model to show that: 1) in the presence of aggregate non-linearities such as the ZLB, a dichotomy emerges between the aggregate impulse responses under aggregate uncertainty and those in the deterministic case; 2) aggregate uncertainty amplifies downturns at the ZLB, and household heterogeneity strengthens this amplification; 3) the effects of forward guidance are stronger under aggregate uncertainty.
    JEL: D14 E44 E52 E58
    Keywords: computational methods, liquidity traps, monetary policy, new-Keynesian models, zero lower bound
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20242911&r=cmp
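    The two-regime shock structure can be pictured as a two-state Markov chain over a "crisis" regime, where the ZLB may bind, and a "normal" regime. The Python sketch below simulates such a chain; the transition probabilities are hypothetical, and the sequence-space Jacobian solution step is omitted entirely.

      import numpy as np

      P = np.array([[0.80, 0.20],    # crisis -> (crisis, normal), toy persistence
                    [0.05, 0.95]])   # normal -> (crisis, normal)

      def regime_path(T, start=0, seed=0):
          # Draw one simulated regime path of length T (0 = crisis, 1 = normal).
          rng = np.random.default_rng(seed)
          path = [start]
          for _ in range(T - 1):
              path.append(int(rng.choice(2, p=P[path[-1]])))
          return path

      print(regime_path(20))
      # Expected duration of the crisis regime, 1 / (1 - self-transition prob):
      print("expected crisis duration:", 1 / (1 - P[0, 0]), "periods")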
  39. By: Werding, Martin; Runschke, Benedikt; Schwarz, Milena
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:svrwwp:283606&r=cmp
  40. By: Qixin Zhan (China Economics and Management Academy, Central University of Finance and Economics); Heng-fu Zou (The World Bank; Institute for Advanced Study, Wuhan University; Institute for Advanced Study, Shenzhen University)
    Abstract: This paper delves into the theoretical underpinnings of how freedom, grounded in the rule of law and property rights, shapes wealth accumulation and economic growth. Integrating liberty into the neoclassical growth model, we introduce the concepts of "liberty consumption" and "liberty capital" and define utility and production functions over them. Through theoretical analysis and simulations, we find that a robust preference for liberty nurtures sustained prosperity and heightened productivity. However, when the costs of liberty consumption are substantial and liberty capital depreciates rapidly, indicating an environment inhospitable or constraining to liberty, economic output and overall well-being suffer (a toy simulation of this mechanism follows this entry). These insights underscore the significance of liberty dynamics for economic growth and development. Without liberty, property rights, and the rule of law in the utility and production functions, society risks descending into either a Hobbesian "war of all against all" or a totalitarian state ruled by a singular authority. In either case, life becomes solitary, poor, nasty, brutish, and potentially short.
    Date: 2024–02–12
    URL: http://d.repec.org/n?u=RePEc:cuf:wpaper:619&r=cmp
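    The mechanism can be illustrated with a toy simulation. The abstract does not state the paper's functional forms, so the Cobb-Douglas production function, parameter values, and accumulation rules below are assumptions made purely for illustration: output uses physical capital k and "liberty capital" l, a share of income maintains l, and faster depreciation of l (a liberty-hostile environment) lowers long-run output.

      def long_run_output(delta_l, T=500, alpha=0.3, beta=0.2,
                          s=0.20, phi=0.10, delta_k=0.05):
          # Iterate assumed accumulation rules until output settles.
          k, l = 1.0, 1.0
          for _ in range(T):
              y = k**alpha * l**beta        # production over both capital stocks
              k += s * y - delta_k * k      # standard capital accumulation
              l += phi * y - delta_l * l    # liberty capital: built by spending phi*y
          return y

      for delta_l in (0.02, 0.10, 0.30):
          print(f"delta_l = {delta_l:.2f} -> long-run output {long_run_output(delta_l):.2f}")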
  41. By: Mingyang Li; Han Pengsihua; Fujiao Meng; Zejun Wang; Weian Liu
    Abstract: This paper examines the strategic interactions among Japan, other nations, and the International Atomic Energy Agency (IAEA) regarding Japan's decision to release treated nuclear wastewater from the Fukushima Daiichi Nuclear Power Plant into the sea. It introduces a payoff matrix and time-delay elements into the replicator dynamic equations to mirror real-world decision-making delays (a minimal numerical sketch follows this entry). The paper analyzes the stability of strategies and the conditions for different stable states using the characteristic roots of a linearized system and numerical simulations. It concludes that time delays significantly affect the stability and evolution trajectories of nuclear wastewater disposal strategies. The study highlights the importance of efficient wastewater treatment technology, the impact of export tax revenue losses on Japan's strategies, and the role of international cooperation. The novelty of the research lies in integrating time-delay elements from ocean dynamics and governmental decision-making into the game-theoretic model.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07227&r=cmp
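    A delayed replicator dynamic evaluates payoffs at the population state observed with a lag, which is what allows time delays to alter stability. The Python sketch below integrates one such equation with Euler steps; the payoffs and delay lengths are hypothetical, not the paper's calibration, and show how a long enough delay turns smooth convergence into oscillation.

      import numpy as np

      def delayed_replicator(tau_steps, T=20_000, dt=0.01, x0=0.6):
          # x[t]: share of the population playing "release". Hypothetical
          # payoffs: release pays 1 - x, cease pays x (each strategy is
          # better when rare); all payoffs are evaluated at the lagged state.
          x = np.empty(T)
          x[:tau_steps + 1] = x0                  # constant history before t = 0
          for t in range(tau_steps, T - 1):
              xd = x[t - tau_steps]               # state observed with a lag
              f_release, f_cease = 1 - xd, xd
              f_bar = xd * f_release + (1 - xd) * f_cease
              x[t + 1] = x[t] + dt * x[t] * (f_release - f_bar)
          return x

      for tau in (50, 350):                       # delay in Euler steps of size dt
          tail = delayed_replicator(tau)[-4_000:]
          print(f"tau = {tau}: x ranges over [{tail.min():.3f}, {tail.max():.3f}]")

    With a short delay the share settles at the interior equilibrium; with the longer delay the same equation cycles around it, which is the kind of stability reversal the paper attributes to decision-making lags.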
  42. By: Ms. Li Liu; Mr. Ben Lockwood; Eddy H.F. Tam
    Abstract: This paper studies the effect of the VAT threshold on firm growth in the UK, using exogenous variation in the threshold over time, combined with turnover-bin fixed effects, for identification. We find robust evidence that annual growth in turnover slows by about 1 percentage point as firm turnover approaches the threshold, with no evidence of higher growth once the threshold is passed (the binning exercise is sketched after this entry). Growth in firm costs shows a similar pattern, indicating that the response to the threshold is likely a real response rather than an evasion response. Firms that habitually register even when their turnover is below the VAT threshold (voluntarily registered firms) show growth unaffected by the threshold, whereas firms that select into the Flat-Rate Scheme exhibit a less pronounced slowdown than other firms. Similar patterns of turnover and cost growth around the threshold are also observed for non-incorporated businesses. Finally, simulation results clarify the relative contributions of "crossers" (firms that eventually register for VAT) and "non-crossers" (those that permanently stay below the threshold) in explaining our empirical findings.
    Keywords: VAT; size-based threshold; firm growth
    Date: 2024–02–16
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2024/033&r=cmp
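    The descriptive pattern behind the headline finding can be illustrated by binning firms by turnover relative to the threshold and comparing mean growth across bins. The Python sketch below does this on simulated data with a built-in 1-percentage-point slowdown just below the threshold; none of the numbers come from the UK records used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      threshold = 85_000                              # approximate UK VAT threshold
      turnover = rng.uniform(40_000, 130_000, 50_000)
      growth = rng.normal(0.05, 0.02, turnover.size)
      near_below = (turnover > threshold - 10_000) & (turnover <= threshold)
      growth[near_below] -= 0.01                      # simulated 1pp slowdown below the threshold

      # Mean growth by 10k turnover bin: the dip shows up just below 85k.
      for lo in range(40_000, 130_000, 10_000):
          m = (turnover >= lo) & (turnover < lo + 10_000)
          print(f"{lo:>7,}-{lo + 10_000:<7,}: mean growth {growth[m].mean():6.3%}")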
  43. By: Mingyang Li; Han Pengsihua; Songqing Zhao; Zejun Wang; Limin Yang; Weian Liu
    Abstract: On August 24, 2023, Japan controversially decided to discharge nuclear wastewater from the Fukushima Daiichi Nuclear Power Plant into the ocean, sparking intense domestic and global debate. This study uses evolutionary game theory to analyze the strategic dynamics among Japan, other countries, and the Japan Fisheries Association. Incorporating economic, legal, international-aid, and environmental factors, the research identifies three evolutionarily stable strategies and analyzes them via numerical simulations (a generic stability check of this kind is sketched after this entry). The focus is on Japan's shift from releasing wastewater to ceasing release, exploring the factors influencing this transition and their effects on stakeholders' decisions. Key insights highlight the need for international cooperation, rigorous scientific research, public education, and effective wastewater treatment methods. Offering both a fresh theoretical perspective and practical guidance, the study aims to foster global consensus on nuclear wastewater management, which is crucial for marine conservation and sustainable development.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07210&r=cmp
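    Evolutionary stability of a candidate equilibrium is typically checked by linearizing the replicator system and inspecting the eigenvalues of its Jacobian: all real parts negative means locally stable. The generic Python sketch below applies this check to a two-population toy system; the payoff terms are hypothetical and not taken from the paper.

      import numpy as np

      def rhs(z):
          # Two-population replicator system with toy payoff advantages:
          # x = share playing "release", y = share playing "countermeasures".
          x, y = z
          fx = x * (1 - x) * (0.5 - 2.0 * y)
          fy = y * (1 - y) * (0.5 - 1.5 * x)
          return np.array([fx, fy])

      def jacobian(f, z, h=1e-6):
          # Numerical Jacobian by central differences.
          J = np.empty((len(z), len(z)))
          for j in range(len(z)):
              e = np.zeros(len(z)); e[j] = h
              J[:, j] = (f(z + e) - f(z - e)) / (2 * h)
          return J

      for eq in ([0, 0], [0, 1], [1, 0], [1, 1]):
          eig = np.linalg.eigvals(jacobian(rhs, np.array(eq, dtype=float)))
          verdict = "stable" if np.all(eig.real < 0) else "unstable"
          print(eq, verdict, eig.real.round(2))

    With these toy payoffs, two of the four corner equilibria come out locally stable, mirroring the coexistence of multiple evolutionarily stable strategies that such analyses report.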

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.