nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–02–10
24 papers chosen by
Stan Miles, Thompson Rivers University


  1. Testing the cognitive limits of large language models By Fernando Perez-Cruz; Hyun Song Shin
  2. Forecasting Dutch inflation using machine learning methods By Robert-Paul Berben; Rajni Rasiawan; Jasper de Winter
  3. FinSphere: A Conversational Stock Analysis Agent Equipped with Quantitative Tools based on Real-Time Database By Shijie Han; Changhai Zhou; Yiqing Shen; Tianning Sun; Yuhua Zhou; Xiaoxia Wang; Zhixiao Yang; Jingshu Zhang; Hongguang Li
  4. Artificial intelligence in central banking By Douglas Kiarelly Godoy de Araujo; Sebastian Doerr; Leonardo Gambacorta; Bruno Tissot
  5. Assets Forecasting with Feature Engineering and Transformation Methods for LightGBM By Konstantinos-Leonidas Bisdoulis
  6. Inference for Regression with Variables Generated by AI or Machine Learning By Laura Battaglia; Timothy Christensen; Stephen Hansen; Szymon Sacher
  7. Isolating Location Value Using SHAP and Interaction Constraints By Nicola Stalder; Michael Mayer; Steven C. Bourassa; Martin Hoesli
  8. Artificial Intelligence Asset Pricing Models By Bryan T. Kelly; Boris Kuznetsov; Semyon Malamud; Teng Andrea Xu
  9. DeepHAM: A Global Solution Method for Heterogeneous Agent Models with Aggregate Shocks By Jiequn Han; Yucheng Yang; Weinan E
  10. Nowcasting Peru's GDP with Machine Learning Methods By Jairo Flores; Bruno Gonzaga; Walter Ruelas-Huanca; Juan Tang
  11. AI Agents in the Advertising Industry By Adesina, Toheeb
  12. The Moral Mind(s) of Large Language Models By Avner Seror
  13. An agent-based model of trickle-up growth and income inequality By Elisa Palagi; Mauro Napoletano; Andrea Roventini; Jean-Luc Gaffard
  14. Forecasting of Bitcoin Prices Using Hashrate Features: Wavelet and Deep Stacking Approach By Ramin Mousa; Meysam Afrookhteh; Hooman Khaloo; Amir Ali Bengari; Gholamreza Heidary
  15. Boosting the Accuracy of Stock Market Prediction via Multi-Layer Hybrid MTL Structure By Yuxi Hong
  16. Predicting Market Reactions to News: An LLM-Based Approach Using Spanish Business Articles By Jesús Villota
  17. Selective-Combined Inflation Forecasting System (SSCIF) By Adilkhanova Zarina; Yerzhan Islam
  18. Perceptions of Justice: Assessing the Perceived Effectiveness of Punishments by Artificial Intelligence versus Human Judges By Gilles Grolleau; Murat C Mungan; Naoufel Mzoughi
  19. Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring By Xia Li; Hanghang Zheng; Xiao Chen; Hong Liu; Mao Mao
  20. Modeling and Forecasting the Probability of Crypto-Exchange Closures: A Forecast Combination Approach By Magomedov, Said; Fantazzini, Dean
  21. Open Sourcing GPTs: Economics of Open Sourcing Advanced AI Models By Mahyar Habibi
  22. Adoption of circular economy innovations: The role of artificial intelligence By Dirk Czarnitzki; Robin Lepers; Maikel Pellens
  23. Deep Learning for Search and Matching Models By Jonathan Payne; Adam Rebei; Yucheng Yang
  24. Report of the International Capacity Building Training Program on Computable General Equilibrium (CGE) Modeling for Economic Policy Analysis By Nandi, Sukhendu; Barman, Subrata

  1. By: Fernando Perez-Cruz; Hyun Song Shin
    Abstract: When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure. The LLM performs flawlessly when presented with the original wording of the puzzle available on the internet but performs poorly when incidental details are changed, suggestive of a lack of true understanding of the underlying logic. Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis.
    Date: 2024–01–04
    URL: https://d.repec.org/n?u=RePEc:bis:bisblt:83
  2. By: Robert-Paul Berben; Rajni Rasiawan; Jasper de Winter
    Abstract: This paper examines the performance of machine learning models in forecasting Dutch inflation over the period 2010 to 2023, leveraging a large dataset and a range of machine learning techniques. The findings indicate that certain machine learning models outperform simple benchmarks, particularly in forecasting core inflation and services inflation. However, these models face challenges in consistently outperforming the primary inflation forecast of De Nederlandsche Bank for headline inflation, though they show promise in improving the forecast for non-energy industrial goods inflation. Models employing path averages rather than direct forecasting achieve greater accuracy, while the inclusion of non-linearities, factors, or targeted predictors provides minimal or no improvement in forecasting performance. Overall, Ridge regression has the best forecasting performance in our study.
    Keywords: Inflation forecasting; Big data; Machine learning; Random Forest; Ridge regression
    JEL: C22 C53 C55 E17 E31
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:dnb:dnbwpp:828
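The abstract above reports that Ridge regression with path averaging performed best, but does not spell out the specification. As an illustration only, here is a minimal NumPy sketch of closed-form Ridge estimation used for direct h-step-ahead forecasting; the variable names and toy setup are assumptions, not the paper's.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form Ridge: beta = (X'X + lam*I)^{-1} X'y, with the intercept
    handled by centering so it is not penalized."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    k = X.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(k), Xc.T @ yc)
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

def direct_forecast(X, y, h, lam=1.0):
    """Direct h-step forecast: regress y_{t+h} on predictors dated t,
    then apply the fit to the latest predictor row."""
    beta, b0 = ridge_fit(X[:-h], y[h:], lam)
    return X[-1] @ beta + b0
```

A path-average variant, as mentioned in the abstract, would instead forecast each intermediate month and average; the direct scheme above is the simpler baseline.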
  3. By: Shijie Han; Changhai Zhou; Yiqing Shen; Tianning Sun; Yuhua Zhou; Xiaoxia Wang; Zhixiao Yang; Jingshu Zhang; Hongguang Li
    Abstract: Current financial Large Language Models (LLMs) struggle with two critical limitations: a lack of depth in stock analysis, which impedes their ability to generate professional-grade insights, and the absence of objective evaluation metrics to assess the quality of stock analysis reports. To address these challenges, this paper introduces FinSphere, a conversational stock analysis agent, along with three major contributions: (1) Stocksis, a dataset curated by industry experts to enhance LLMs' stock analysis capabilities, (2) AnalyScore, a systematic evaluation framework for assessing stock analysis quality, and (3) FinSphere, an AI agent that can generate high-quality stock analysis reports in response to user queries. Experiments demonstrate that FinSphere achieves superior performance compared to both general and domain-specific LLMs, as well as existing agent-based systems, even when they are enhanced with real-time data access and few-shot guidance. The integrated framework, which combines real-time data feeds, quantitative tools, and an instruction-tuned LLM, yields substantial improvements in both analytical quality and practical applicability for real-world stock analysis.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.12399
  4. By: Douglas Kiarelly Godoy de Araujo; Sebastian Doerr; Leonardo Gambacorta; Bruno Tissot
    Abstract: Central banks have been early adopters of machine learning techniques for statistics, macro analysis, payment systems oversight and supervision, with considerable success. Artificial intelligence brings many opportunities in support of central bank mandates, but also challenges – some general and others specific to central banks. Central bank collaboration, for instance through knowledge-sharing and pooling of expertise, holds great promise in keeping central banks at the vanguard of developments in artificial intelligence.
    Date: 2024–01–23
    URL: https://d.repec.org/n?u=RePEc:bis:bisblt:84
  5. By: Konstantinos-Leonidas Bisdoulis
    Abstract: Fluctuations in the stock market rapidly shape the economic world and consumer markets, impacting millions of individuals. Hence, accurately forecasting the market is essential for mitigating risks, including those associated with inactivity. Although research shows that hybrid models of Deep Learning (DL) and Machine Learning (ML) yield promising results, their computational requirements often exceed the capabilities of average personal computers, rendering them inaccessible to many. To address this challenge, in this paper we optimize LightGBM (an efficient implementation of gradient-boosted decision trees (GBDT)) for maximum performance while maintaining low computational requirements. We introduce novel feature engineering techniques, including indicator-price slope ratios and differences of close and open prices divided by the corresponding 14-period Exponential Moving Average (EMA), designed to capture market dynamics and enhance predictive accuracy. Additionally, we test seven different feature and target variable transformation methods, including returns, logarithmic returns, EMA ratios and their standardized counterparts, as well as EMA difference ratios, so as to identify the most effective ones, weighing both efficiency and accuracy. The results demonstrate that Log Returns, Returns and EMA Difference Ratio constitute the best target variable transformation methods, with EMA ratios having a lower percentage of correct directional forecasts, and standardized versions of target variable transformations requiring significantly more training time. Moreover, the introduced features demonstrate high feature importance in predictive performance across all target variable transformation methods. This study highlights an accessible, computationally efficient approach to stock market forecasting using LightGBM, making advanced forecasting techniques more widely attainable.
    Date: 2024–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.07580
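The transformations named in the abstract (returns, log returns, and a close-open difference scaled by a 14-period EMA) can be sketched as follows. The exact definitions are assumptions inferred from the abstract, not the paper's code.

```python
import numpy as np

def ema(x, span=14):
    """Exponential moving average with smoothing alpha = 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

def transforms(close, open_, span=14):
    """Candidate target/feature transformations for a price series."""
    e = ema(close, span)
    return {
        "returns": close[1:] / close[:-1] - 1.0,
        "log_returns": np.diff(np.log(close)),
        # close-open difference scaled by the 14-period EMA; the precise
        # normalization used in the paper is assumed here
        "ema_diff_ratio": (close - open_) / e,
    }
```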
  6. By: Laura Battaglia (Oxford University); Timothy Christensen (Yale University); Stephen Hansen (UCL, IFS, and CEPR); Szymon Sacher (Meta)
    Abstract: It has become common practice for researchers to use AI-powered information retrieval algorithms or other machine learning methods to estimate variables of economic interest, then use these estimates as covariates in a regression model. We show both theoretically and empirically that naively treating AI- and ML-generated variables as "data" leads to biased estimates and invalid inference. We propose two methods to correct bias and perform valid inference: (i) an explicit bias correction with bias-corrected confidence intervals, and (ii) joint maximum likelihood estimation of the regression model and the variables of interest. Through several applications, we demonstrate that the common approach generates substantial bias, while both corrections perform well.
    Date: 2025–01–02
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2421
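The bias the authors describe is easy to reproduce in a toy simulation: an ML-generated covariate behaves like the true variable plus measurement error, which attenuates the naive OLS slope. The sketch below is not the paper's estimator; it only illustrates the problem, and the correction shown assumes the error variance is known, which it generally is not.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 1.0
x_true = rng.standard_normal(n)                 # latent economic variable
x_hat = x_true + 0.7 * rng.standard_normal(n)   # ML estimate = truth + noise
y = beta * x_true + 0.1 * rng.standard_normal(n)

# naive OLS treating the generated variable as data
naive = (x_hat @ y) / (x_hat @ x_hat)
# attenuation: plim(naive) = beta * var(x) / (var(x) + var(e)) = 1 / 1.49
reliability = 1.0 / (1.0 + 0.49)
corrected = naive / reliability                 # valid only with known var(e)
```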
  7. By: Nicola Stalder (University of Bern); Michael Mayer (Schweizerische Mobiliar Versicherungsgesellschaft); Steven C. Bourassa (University of Washington); Martin Hoesli (University of Geneva - Geneva School of Economics and Management (GSEM); Swiss Finance Institute; University of Aberdeen - Business School)
    Abstract: This paper describes how machine learning techniques and explainable artificial intelligence can be leveraged to estimate combined location value. We analyze listed apartment rents using gradient boosted trees, which allow for flexible modelling of non-linear effects and high order interactions among covariates. We then separate location value from structure value by imposing interaction constraints. Finally, we use the additivity property of SHapley Additive exPlanations (SHAP) to extract the combined effects of location-related covariates. These effects are then compared across different geographical levels (regional and national). The empirical analysis uses a rich dataset consisting of listed rents and property characteristics for approximately 300,000 apartments in Switzerland. We start with an unconstrained model that allows for flexible interactions between location variables and structural characteristics. We then impose interaction constraints such that structural characteristics no longer interact with location variables or each other. This step is required to extract the pure value of location independent of any interactions with structural characteristics. The constrained model improves interpretability while retaining a high degree of accuracy. What would otherwise be a cumbersome calibration of locational values is replaced by a simple extraction of the corresponding feature effects using SHAP. The results should prove useful in improving hedonic models used by property tax assessors, mortgage underwriters, valuation firms, and regulatory authorities.
    Keywords: Hedonic models, SHAP values, location values, explainable artificial intelligence, machine learning, gradient boosting
    JEL: R31 G12
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:chf:rpseri:rp2502
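The SHAP additivity property the authors rely on has a simple form once interaction constraints make the model additive: each feature's SHAP value is its centered per-feature effect, and the combined location value is the sum of the location features' SHAP values. The toy rent model below (all feature names and coefficients invented for illustration) demonstrates that property without any boosting or SHAP library.

```python
import numpy as np

def additive_shap(effects):
    """For an additive model f(x) = sum_j f_j(x_j), the SHAP value of
    feature j at observation i is f_j(x_ij) - mean_i f_j(x_ij)."""
    F = np.column_stack(effects)       # n x p matrix of per-feature effects
    return F - F.mean(axis=0)

# toy rent model: two location effects plus one structural effect
rng = np.random.default_rng(1)
n = 500
dist_cbd = rng.uniform(0, 10, n)       # hypothetical: distance to CBD, km
lake_view = rng.integers(0, 2, n).astype(float)
area = rng.uniform(40, 120, n)         # structural characteristic, m^2
effects = [-30.0 * dist_cbd, 150.0 * lake_view, 12.0 * area]
phi = additive_shap(effects)
# SHAP additivity: combined location value = sum of location SHAP columns
location_value = phi[:, 0] + phi[:, 1]
```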
  8. By: Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Boris Kuznetsov (Swiss Finance Institute); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Teng Andrea Xu (École Polytechnique Fédérale de Lausanne (EPFL))
    Abstract: The core statistical technology in artificial intelligence is the large-scale transformer network. We propose a new asset pricing model that implants a transformer in the stochastic discount factor. This structure leverages conditional pricing information via cross-asset information sharing and nonlinearity. We also develop a linear transformer that serves as a simplified surrogate from which we derive an intuitive decomposition of the transformer's asset pricing mechanisms. We find large reductions in pricing errors from our artificial intelligence pricing model (AIPM) relative to previous machine learning models and dissect the sources of these gains.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:chf:rpseri:rp2508
  9. By: Jiequn Han (Flatiron Institute); Yucheng Yang (University of Zurich; Swiss Finance Institute); Weinan E (Princeton University)
    Abstract: We propose an efficient, reliable, and interpretable global solution method, the Deep learning-based algorithm for Heterogeneous Agent Models (DeepHAM), for solving high dimensional heterogeneous agent models with aggregate shocks. The state distribution is approximately represented by a set of optimal generalized moments. Deep neural networks are used to approximate the value and policy functions, and the objective is optimized over directly simulated paths. In addition to being an accurate global solver, this method has three additional features. First, it is computationally efficient in solving complex heterogeneous agent models, and it does not suffer from the curse of dimensionality. Second, it provides a general and interpretable representation of the distribution over individual states, which is crucial in addressing the classical question of whether and how heterogeneity matters in macroeconomics. Third, it solves the constrained efficiency problem as easily as it solves the competitive equilibrium, which opens up new possibilities for normative studies. As a new application, we study constrained efficiency in heterogeneous agent models with aggregate shocks. We find that in the presence of aggregate risk, a utilitarian planner would raise aggregate capital for redistribution less than in its absence, because poor households engage in more precautionary saving and thus rely less on labor income.
    Keywords: Heterogeneous agent models, aggregate shocks, global solution, deep learning, generalized moments, constrained efficiency
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:chf:rpseri:rp2506
  10. By: Jairo Flores (Central Reserve Bank of Peru); Bruno Gonzaga (Central Reserve Bank of Peru); Walter Ruelas-Huanca (Central Reserve Bank of Peru); Juan Tang (Central Reserve Bank of Peru)
    Abstract: This paper explores the application of machine learning (ML) techniques to nowcast the monthly year-over-year growth rate of both total and non-primary GDP in Peru. Using a comprehensive dataset that includes over 170 domestic and international predictors, we assess the predictive performance of 12 ML models. The study compares these ML approaches against the traditional Dynamic Factor Model (DFM), which serves as the benchmark for nowcasting in economic research. We treat specific configurations, such as the feature matrix rotations and the dimensionality reduction technique, as hyperparameters that are optimized iteratively by the Tree-Structured Parzen Estimator. Our results show that ML models outperform the DFM in nowcasting total GDP and achieve similar performance to this benchmark in nowcasting non-primary GDP. Furthermore, the bottom-up approach appears to be the most effective practice for nowcasting economic activity, as aggregating sectoral predictions improves the precision of ML methods. The findings indicate that ML models offer a viable and competitive alternative to traditional nowcasting methods.
    Keywords: GDP; Machine Learning; nowcasting
    JEL: C14 C32 E32 E52
    Date: 2025–02–03
    URL: https://d.repec.org/n?u=RePEc:gii:giihei:heidwp01-2025
  11. By: Adesina, Toheeb
    Abstract: This research investigates how artificial intelligence (AI) agents function in the advertising sector. It focuses on the transformation, applications, benefits, and concerns of Artificial Intelligence (AI) in the new era of marketing. The research used secondary data from industry reports, academic studies, and case studies on how AI agents enhance ad targeting, campaign optimization, personalization, and predictive analysis. The main conclusions show that AI agents significantly increase productivity and customer engagement, but there are still issues with algorithmic biases and data privacy. The study highlights the need for a well-rounded strategy for implementing AI, supporting both innovation and moral considerations. To improve the advertising ecosystem, these insights are meant to help marketers and legislators use AI responsibly.
    Keywords: Advertising, Marketing, Artificial intelligence, Machine learning, AI-powered advertising, Programmatic advertising, Personalization, Predictive analytics, Consumer engagement, Chatbots, Innovation
    JEL: M3 M31 M37
    Date: 2025–01–02
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:123413
  12. By: Avner Seror (Aix Marseille Univ, CNRS, AMSE, Marseille, France)
    Abstract: As large language models (LLMs) become integrated into decision-making across various sectors, a key question arises: do they exhibit an emergent "moral mind" - a consistent set of moral principles guiding their ethical judgments - and is this reasoning uniform or diverse across models? To investigate this, we presented about forty different models from the main providers with a large array of structured ethical scenarios, creating one of the largest datasets of its kind. Our rationality tests revealed that at least one model from each provider demonstrated behavior consistent with stable moral principles, effectively acting as an approximate optimizer of a utility function encoding ethical reasoning. We identified these utility functions and observed a notable clustering of models around neutral ethical stances. To investigate variability, we introduced a novel non-parametric permutation approach, revealing that the most rational models shared 59% to 76% of their ethical reasoning patterns. Despite this shared foundation, differences emerged: roughly half displayed greater moral adaptability, bridging diverse perspectives, while the remainder adhered to more rigid ethical structures.
    Keywords: Decision Theory, revealed preference, Rationality, artificial intelligence, LLM, PSM.
    JEL: D9 C9 C44
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:aim:wpaimx:2433
  13. By: Elisa Palagi (SSSUP - Scuola Universitaria Superiore Sant'Anna = Sant'Anna School of Advanced Studies [Pisa]); Mauro Napoletano (OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po); Andrea Roventini (SKEMA Business School, Université Côte d'Azur (GREDEG)); Jean-Luc Gaffard (OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po)
    Abstract: We build an agent-based model to study how coordination failures, credit constraints, and unequal access to investment opportunities affect inequality and aggregate income dynamics. We show that macroeconomic conditions are affected by income distribution and that the model features trickle-up growth dynamics. Redistribution toward poorer households raises demand and benefits all agents' income growth. Simulations show that our model reproduces several stylized facts concerning income inequality and social mobility. Finally, fiscal policies facilitating access to investment opportunities by poor households have the largest impact, raising income and decreasing inequality, with policy timing being crucial.
    Date: 2023–12
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04531031
  14. By: Ramin Mousa; Meysam Afrookhteh; Hooman Khaloo; Amir Ali Bengari; Gholamreza Heidary
    Abstract: Digital currencies have become popular in the last decade due to their non-dependency and decentralized nature. The prices of these currencies have seen many fluctuations at times, which has increased the need for prediction. As the most popular of these, Bitcoin (BTC) has become a research hotspot. The main challenge and trend of digital currencies, especially BTC, is price fluctuation, which requires studying the underlying price prediction model. This research presents a classification and regression model based on stacked deep learning that uses a wavelet to remove noise in order to predict movements and prices of BTC at different time intervals. The proposed model, based on the stacking technique, uses deep learning models, especially neural networks and transformers, for one-, seven-, thirty- and ninety-day forecasting. Three feature selection models, Chi2, RFE and Embedded, were also applied to the data in the pre-processing stage. The classification model achieved 63% accuracy for predicting the next day and 64%, 67% and 82% for the seven-, thirty- and ninety-day horizons, respectively. For daily price forecasting, the percentage error was reduced to 0.58, while the error ranged from 2.72% to 2.85% for seven- to ninety-day horizons. These results show that the proposed model performed better than other models in the literature.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.13136
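The abstract does not state which wavelet family or decomposition depth is used; as a minimal stand-in, here is a single-level Haar transform with soft thresholding of the detail coefficients, the standard denoising step such pipelines apply before model training.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail coefficients, invert."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 == 0, "even length required for one Haar level"
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (noise-carrying) coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With the threshold at zero the transform is perfectly invertible; a large threshold collapses each adjacent pair toward its mean, removing high-frequency noise.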
  15. By: Yuxi Hong
    Abstract: Accurate stock market prediction provides great opportunities for informed decision-making, yet existing methods struggle with financial data's non-linear, high-dimensional, and volatile characteristics. Advanced predictive models are needed to effectively address these complexities. This paper proposes a novel multi-layer hybrid multi-task learning (MTL) framework aimed at achieving more efficient stock market predictions. It involves a Transformer encoder to extract complex correspondences between various input features, a Bidirectional Gated Recurrent Unit (BiGRU) to capture long-term temporal relationships, and a Kolmogorov-Arnold Network (KAN) to enhance the learning process. Experimental evaluations indicate that the proposed learning structure achieves great performance, with an MAE as low as 1.078, a MAPE as low as 0.012, and an R^2 as high as 0.98, when compared with other competitive networks.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.09760
  16. By: Jesús Villota (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: Markets do not always efficiently incorporate news, particularly when information is complex or ambiguous. Traditional text analysis methods fail to capture the economic structure of information and its firm-specific implications. We propose a novel methodology that guides LLMs to systematically identify and classify firm-specific economic shocks in news articles according to their type, magnitude, and direction. This economically-informed classification allows for a more nuanced understanding of how markets process complex information. Using a simple trading strategy, we demonstrate that our LLM-based classification significantly outperforms a benchmark based on clustering vector embeddings, generating consistent profits out-of-sample while maintaining transparent and durable trading signals. The results suggest that LLMs, when properly guided by economic frameworks, can effectively identify persistent patterns in how markets react to different types of firm-specific news. Our findings contribute to understanding market efficiency and information processing, while offering a promising new tool for analyzing financial narratives.
    Keywords: Large language models, business news, stock market reaction, market efficiency.
    JEL: G12 G14 C45 C58 C63 D83
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:cmf:wpaper:wp2025_2501
  17. By: Adilkhanova Zarina (National Bank of Kazakhstan); Yerzhan Islam (National Bank of Kazakhstan)
    Abstract: In an environment of macroeconomic instability, improving the accuracy of inflation forecasting is a priority for central banks, especially those operating under inflation targeting regimes. Traditional econometric models face limitations in accounting for volatility, external shocks, and nonlinear relationships. This study aims to enhance inflation forecasting by integrating machine learning methods into the existing Selective-Combined Inflation Forecasting System (SSCIF). The inclusion of algorithms such as Ridge Regression, Lasso Regression, and Elastic Net enables the identification of complex patterns in macroeconomic data, thereby improving forecast accuracy. A comparative analysis of forecasts generated using traditional econometric models (OLS, LTAR, BVAR, RW) and machine learning algorithms demonstrates that the hybrid approach significantly reduces forecasting errors and enhances the reliability of short-term forecasts. The results contribute to the advancement of macroeconomic forecasting tools and the development of more effective monetary policy, supporting better decision-making by central banks.
    Keywords: inflation, forecasting, consumer price index, model, machine learning, econometric models, forecast accuracy
    JEL: E31 E37 C52 C61
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:aob:wpaper:62
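The abstract does not disclose the SSCIF selection and combination rules; as an illustration of one standard ingredient of such systems, here is an inverse-MSE forecast combination, where each model's weight falls with its recent squared forecast errors. Names and the weighting rule are assumptions, not the system's specification.

```python
import numpy as np

def inverse_mse_weights(errors):
    """errors: n_obs x n_models matrix of past forecast errors.
    Returns weights proportional to each model's inverse MSE."""
    mse = np.mean(errors ** 2, axis=0)
    w = 1.0 / mse
    return w / w.sum()

def combine(forecasts, weights):
    """Combined forecast: weighted average of the model forecasts."""
    return forecasts @ weights
```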
  18. By: Gilles Grolleau (ESSCA School of Management Lyon); Murat C Mungan (Texas A&M University – School of Law); Naoufel Mzoughi (ECODEVELOPPEMENT - Ecodéveloppement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: Using an original experimental survey, we analyze how people perceive punishments generated by artificial intelligence (AI) compared to the same punishments generated by a human judge. We use two vignettes pertaining to two different albeit relatively common illegal behaviors, namely not picking up one's dog waste on public roads and setting fires in dry areas. In general, participants perceived AI judgments as having a larger deterrence effect compared to those rendered by a judge. However, when we analyzed each scenario separately, we found that the differential effect of AI is only significant in the first scenario. We discuss the implications of these findings.
    Keywords: Artificial intelligence, AI, Judges, Punishments, Unethical acts, Wrongdoings
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04854067
  19. By: Xia Li; Hanghang Zheng; Xiao Chen; Hong Liu; Mao Mao
    Abstract: The advent of artificial intelligence has significantly enhanced credit scoring technologies. Despite the remarkable efficacy of advanced deep learning models, mainstream adoption continues to favor tree-structured models due to their robust predictive performance on tabular data. Although pretrained models have seen considerable development, their application within the financial realm predominantly revolves around question-answering tasks, and the use of such models for tabular-structured credit scoring datasets remains largely unexplored. Tabular-oriented large models such as TabPFN have made the application of large models in credit scoring feasible, although they can only process limited sample sizes. This paper provides a novel framework that combines a tabular-tailored dataset distillation technique with the pretrained model, improving the scalability of TabPFN. Furthermore, although class imbalance is common in financial datasets, its influence during dataset distillation has not been explored. We thus integrate imbalance-aware techniques into dataset distillation, resulting in improved performance on financial datasets (e.g., a 2.5% improvement in AUC). This study presents a novel framework for scaling up the application of large pretrained models on financial tabular datasets and offers a comparative analysis of the influence of class imbalance on the dataset distillation process. We believe this approach can broaden the applications and downstream tasks of large models in the financial domain.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.10677
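The paper's distillation method is model-specific and not reproduced here; the toy sketch below only conveys the class-imbalance-aware idea, compressing each class to the same number of prototype points (partition means) so the distilled set is balanced regardless of the original skew. All names are hypothetical.

```python
import numpy as np

def balanced_prototypes(X, y, per_class=10, rng=None):
    """Toy 'distillation': represent each class by per_class prototype points
    (means of random partitions), yielding a class-balanced distilled set."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xd, yd = [], []
    for c in np.unique(y):
        Xc = rng.permutation(X[y == c])           # shuffle the class's rows
        parts = np.array_split(Xc, per_class)     # per_class roughly equal chunks
        Xd += [p.mean(axis=0) for p in parts]     # one prototype per chunk
        yd += [c] * per_class
    return np.array(Xd), np.array(yd)
```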
  20. By: Magomedov, Said; Fantazzini, Dean
    Abstract: The popularity of cryptocurrency exchanges has surged in recent years, accompanied by the proliferation of new digital platforms and tokens. However, the issue of credit risk and the reliability of crypto exchanges remain critical, highlighting the need for indicators to assess the safety of investing through these platforms. This study examines a unique, hand-collected dataset of 228 cryptocurrency exchanges operating between April 2011 and May 2024. Using various machine learning algorithms, we identify the key factors contributing to exchange shutdowns, with trading volume, exchange lifespan, and cybersecurity scores emerging as the most significant predictors. Since individual machine learning models often capture distinct data characteristics and exhibit varying error patterns, we employ a forecast combination approach by aggregating multiple predictive distributions. Specifically, we evaluate several specifications of the generalized linear pool (GLP), beta-transformed linear pool (BLP), and beta-mixture combination (BMC). Our findings reveal that the beta-transformed linear pool and the beta-mixture combination achieve the best performances, improving forecast accuracy by approximately 4.1% based on a robust H-measure, which effectively addresses the challenges of misclassification in imbalanced datasets.
    Keywords: forecast combination; exchange; bitcoin; crypto assets; cryptocurrencies; credit risk; bankruptcy; default probability
    JEL: C35 C51 C53 C58 G12 G17 G32 G33
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:123416
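    [Editor's note] The beta-transformed linear pool evaluated in this abstract can be illustrated with a toy sketch: first take a convex combination of the individual models' default probabilities (the linear pool), then recalibrate it through a Beta CDF. The weights, Beta parameters, and probabilities below are invented for illustration; in the paper they would be estimated from data:

```python
import numpy as np
from scipy.stats import beta

def linear_pool(probs, weights):
    """Generalized linear pool: convex combination of probability forecasts.
    probs has shape (n_obs, n_models)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(probs) @ w

def beta_linear_pool(probs, weights, a=2.0, b=2.0):
    """Beta-transformed linear pool: pass the pooled forecast through a
    Beta(a, b) CDF to recalibrate it (a, b would be fit by maximum likelihood)."""
    return beta.cdf(linear_pool(probs, weights), a, b)

# Three models' default probabilities for four exchanges (illustrative numbers)
P = np.array([[0.10, 0.20, 0.15],
              [0.70, 0.60, 0.80],
              [0.40, 0.50, 0.45],
              [0.05, 0.10, 0.02]])
w = [0.5, 0.3, 0.2]
print(np.round(beta_linear_pool(P, w), 3))
```

With a = b = 2 the transform pushes pooled probabilities away from one half, sharpening forecasts that the plain linear pool leaves underconfident.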
  21. By: Mahyar Habibi
    Abstract: This paper explores the economic underpinnings of open-sourcing advanced large language models (LLMs) by for-profit companies. Empirical analysis reveals that: (1) LLMs are compatible with R&D portfolios of numerous technologically differentiated firms; (2) open-sourcing likelihood decreases with an LLM's performance edge over rivals, but increases for models from large tech companies; and (3) open-sourcing an advanced LLM led to an increase in research-related activities. Motivated by these findings, a theoretical framework is developed to examine factors influencing a profit-maximizing firm's open-sourcing decision. The analysis frames this decision as a trade-off between accelerating technology growth and securing immediate financial returns. A key prediction from the theoretical analysis is an inverted-U-shaped relationship between the owner's size, measured by its share of LLM-compatible applications, and its propensity to open source the LLM. This finding suggests that moderate market concentration may be beneficial to the open source ecosystems of multi-purpose software technologies.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.11581
  22. By: Dirk Czarnitzki; Robin Lepers; Maikel Pellens
    Abstract: The circular economy represents a systematic shift in production and consumption, aimed at extending the life cycle of products and materials while minimizing resource use and waste. Achieving the goals of the circular economy, however, presents firms with the challenge of developing new products, technologies, and business models. This paper explores the role of artificial intelligence as an enabler of circular economy innovations. Through an empirical analysis of the German Community Innovation Survey, we show that firms investing in artificial intelligence are more likely to introduce circular economy innovations than those that do not. Additionally, the results indicate that the use of artificial intelligence enhances firms’ abilities to lower production externalities (for instance, reducing pollution) through these innovations. The findings of this paper underscore artificial intelligence’s potential to accelerate the transition to the circular economy.
    Keywords: Circular economy, Innovation, Artificial intelligence
    Date: 2025–01–23
    URL: https://d.repec.org/n?u=RePEc:ete:msiper:758339
  23. By: Jonathan Payne (Princeton University); Adam Rebei (Stanford University); Yucheng Yang (University of Zurich; Swiss Finance Institute)
    Abstract: We develop a new method to globally solve and estimate search and matching models with aggregate shocks and heterogeneous agents. We characterize general equilibrium as a high-dimensional partial differential equation with the distribution as a state variable. We then use deep learning to solve the model and estimate economic parameters using the simulated method of moments. This allows us to study a wide class of search markets where the distribution affects agent decisions and compute variables (e.g. wages and prices) that were previously unattainable. In applications to labor search models, we show that distribution feedback plays an important role in amplification and that positive assortative matching weakens in prolonged expansions, disproportionately benefiting low-wage workers.
    Keywords: Search and Matching, Distribution Feedback, Two-sided Heterogeneity, Business Cycles, Sorting, Over-the-Counter Financial Markets, Deep learning
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:chf:rpseri:rp2505
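    [Editor's note] The simulated-method-of-moments step in this abstract can be illustrated with a deliberately tiny example: a Gaussian toy model, not the authors' search-and-matching economy, with arbitrary moments and seeds. The idea is to pick parameters that minimize the distance between simulated and empirical moments:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, size=5_000)           # "observed" data
target = np.array([data.mean(), data.std()])      # empirical moments

def smm_loss(theta, n_sim=5_000, seed=2):
    """Simulated method of moments: squared distance between simulated and
    empirical moments. A fixed seed (common random numbers) keeps the
    objective smooth in theta so a standard optimizer can minimize it."""
    mu, sigma = theta
    sim = np.random.default_rng(seed).normal(mu, abs(sigma), n_sim)
    m = np.array([sim.mean(), sim.std()])
    return np.sum((m - target) ** 2)

res = minimize(smm_loss, x0=[0.0, 1.0], method="Nelder-Mead")
print(np.round(res.x, 2))  # estimates close to the true (2.0, 0.5)
```

In the paper the "simulator" is the deep-learning solution of the heterogeneous-agent model, so each moment evaluation involves solving and simulating the full economy rather than drawing from a known distribution.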
  24. By: Nandi, Sukhendu; Barman, Subrata
    Abstract: Many policy questions need to be addressed within an economy-wide framework that captures impacts on the overall economy, and at sector and household levels. Over the past 25 years, computable general equilibrium (CGE) models have become a standard tool for empirical economic analysis. CGE models are especially designed to evaluate the direct and indirect impacts of policy shocks at both macroeconomic and microeconomic scales. In recent years, improvements in model specification, data availability, and computer technology have improved the payoffs and reduced the costs of policy analysis based on CGE models, paving the way for their widespread use by policy analysts throughout the world. Given the demand for economy-wide analysis, the International Food Policy Research Institute (IFPRI) and the South Asian Network on Economic Modeling (SANEM), together with the Indian Council of Agricultural Research-Indian Agricultural Research Institute (ICAR-IARI) and Indian Council of Agricultural Research-National Institute of Agricultural Economics and Policy Research (ICAR-NIAP), organized an introductory training program on CGE modeling in New Delhi from April 29 to May 4, 2024. The course was aimed at researchers and policy analysts from South Asia who had some economics background but were interested in learning more about economy-wide models and their applications.
    Keywords: capacity development; training programmes; computable general equilibrium models; economic policies
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:fpr:cgiarp:163463

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.