nep-cmp New Economics Papers
on Computational Economics
Issue of 2026–02–09
fourteen papers chosen by
Stan Miles, Thompson Rivers University


  1. A BIBLIOMETRIC INSIGHT TO MACHINE LEARNING APPLICATIONS FOR DECISION-MAKING By Ivona Serafimovska; Bojan Kitanovikj; Filip Peovski
  2. Behavioral Economics of AI: LLM Biases and Corrections By Pietro Bini; Lin William Cong; Xing Huang; Lawrence J. Jin
  3. A machine learning approach to volatility forecasting By Kim Christensen; Mathias Siggaard; Bezirgen Veliyev
  4. LLM on a Budget: Active Knowledge Distillation for Efficient Classification of Large Text Corpora By Leland D. Crane; Xiaoyu Ge; Flora Haberkorn; Rithika Iyengar; Seung Jung Lee; Viviana Luccioli; Ryan Panley; Nitish R. Sinha
  5. How Well Do LLMs Predict Human Behavior? A Measure of their Pretrained Knowledge By Wayne Gao; Sukjin Han; Annie Liang
  6. Artificial intelligence for intelligence analysis By Goodall, Leonardo Sebastian; Törnberg, Petter; Ebner, Julia; Mosleh, Mohsen; Whitehouse, Harvey
  7. Brownian ReLU (Br-ReLU): A New Activation Function for a Long Short-Term Memory (LSTM) Network By George Awiakye-Marfo; Elijah Agbosu; Victoria Mawuena Barns; Samuel Asante Gyamerah
  8. Predictive modeling the past By Paker, Meredith; Stephenson, Judy; Wallis, Patrick
  9. Optimal Use of Preferences in Artificial Intelligence Algorithms By Joshua S. Gans
  10. Regret-Driven Portfolios: LLM-Guided Smart Clustering for Optimal Allocation By Muhammad Abro; Hassan Jaleel
  11. MAPPING RESEARCH ON AI AND CONSUMER PURCHASE INTENTION: BIBLIOMETRIC INSIGHTS (2009–2025) By Snezana Ristevska-Jovanovska; Ivona Serafimovska; Irena Bogoevska-Gavrilova
  12. Quantitative Methods in Finance By Eric Vansteenberghe
  13. The spatial (interprovincial) computable general equilibrium model for Morocco: theoretical specification and current developments By Mahmoud Arbouch; Eduardo Amaral Haddad
  14. Mechanism Design for Harm Reduction: Game Theory and Social Choice for Carceral MOUD and Recovery Housing By Brown, Tarnell

  1. By: Ivona Serafimovska (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia); Bojan Kitanovikj (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia); Filip Peovski (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia)
    Abstract: Adhering to PRISMA guidelines, this study uses a multi-method bibliometric analysis of documents published in Web of Science and Scopus over the last 34 years to investigate how machine learning improves decision-making. The study's main goal is to make visible the methodological patterns, thematic directions, and intellectual structure of research at the nexus of machine learning and decision-making. The results show that the U.S., China, India, Germany, and the U.K. are leading a rapidly expanding, cooperative research landscape with a strong emphasis on management, marketing, and finance. Tree-based models, support vector machines, deep learning, reinforcement learning, and explainable artificial intelligence are examples of frequently used algorithms. The field is moving toward applications in big data environments, ethical considerations, and increased interpretability. Digital transformation, competitive intelligence, and strategic planning are highlighted in influential works. This synthesis offers direction for developing more transparent machine learning models and practical frameworks for their use in decision-making, serving both academics and practitioners.
    Keywords: Bibliometric analysis, Decision-making, Machine learning
    JEL: B41 C55
    Date: 2025–12–15
    URL: https://d.repec.org/n?u=RePEc:aoh:conpro:2025:i:6:p:204-217
  2. By: Pietro Bini; Lin William Cong; Xing Huang; Lawrence J. Jin
    Abstract: Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date—originally designed to document human biases—on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases.
    JEL: D03 G02 G11 G4 G40 G41
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34745
  3. By: Kim Christensen; Mathias Siggaard; Bezirgen Veliyev
    Abstract: We inspect how accurate machine learning (ML) is at forecasting realized variance of the Dow Jones Industrial Average index constituents. We compare several ML algorithms, including regularization, regression trees, and neural networks, to multiple Heterogeneous AutoRegressive (HAR) models. ML is implemented with minimal hyperparameter tuning. In spite of this, ML is competitive and beats the HAR lineage, even when the only predictors are the daily, weekly, and monthly lags of realized variance. The forecast gains are more pronounced at longer horizons. We attribute this to higher persistence in the ML models, which helps to approximate the long-memory of realized variance. ML also excels at locating incremental information about future volatility from additional predictors. Lastly, we propose a ML measure of variable importance based on accumulated local effects. This shows that while there is agreement about the most important predictors, there is disagreement on their ranking, helping to reconcile our results.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.13014
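The HAR benchmark the paper compares against regresses realized variance on its own daily, weekly, and monthly lags. A minimal sketch of that baseline, a generic HAR(1, 5, 22) OLS fit rather than the authors' implementation:

```python
import numpy as np

def har_forecast(rv, horizon=1):
    """Fit a HAR model: RV_{t+h} on daily, weekly (5-day), and monthly
    (22-day) averages of realized variance, then forecast one step ahead.
    Illustrative sketch, not the paper's code."""
    rv = np.asarray(rv, dtype=float)
    d = rv[21:-horizon]                                           # daily lag
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - horizon)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - horizon)])
    y = rv[21 + horizon:]                                         # target at t+h
    X = np.column_stack([np.ones_like(d), d, w, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                  # OLS
    x_last = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return beta, float(x_last @ beta)
```

The abstract's point is that even with only these three lags as predictors, lightly tuned ML methods beat this linear specification, especially at longer horizons.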
  4. By: Leland D. Crane; Xiaoyu Ge; Flora Haberkorn; Rithika Iyengar; Seung Jung Lee; Viviana Luccioli; Ryan Panley; Nitish R. Sinha
    Abstract: Large Language Models (LLMs) are highly accurate in classification tasks; however, substantial computational and financial costs hinder their large-scale deployment in dynamic environments. Knowledge Distillation (KD), where an LLM "teacher" trains a smaller and more efficient "student" model, offers a promising solution to this problem. However, the distillation process itself often remains costly for large datasets, since it requires the teacher to label a vast number of samples while incurring significant token consumption. To alleviate this challenge, we explore active learning (AL) as a way to create efficient student models at a fraction of the cost while preserving the LLM's performance. In particular, we introduce M-RARU (Multi-class Randomized Accept/Reject Uncertainty Sampling), a novel AL algorithm that significantly reduces training costs. M-RARU combines uncertainty with a randomized accept/reject mechanism to select only the most informative data points for the LLM teacher. This focused approach minimizes required API calls and data processing time. We evaluate M-RARU against random sampling across five diverse student models (SVM, LDA, RF, GBDT, and DistilBERT) on multiple benchmark datasets. Experiments demonstrate that the proposed method achieves up to an 80% reduction in sample requirements compared to random sampling, substantially improving classification accuracy while reducing financial costs and overall training time.
    Keywords: Machine learning; Sampling; Computational techniques
    JEL: C38 C45 C55
    Date: 2025–12–15
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:102367
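The accept/reject idea can be sketched in a few lines: score each unlabeled point by student uncertainty and keep it for teacher labeling with a probability that rises with that uncertainty. This is an illustrative reading of the mechanism; the exact M-RARU rule is not specified in the abstract:

```python
import numpy as np

def randomized_accept_reject(proba, rng, scale=1.0):
    """Select samples for LLM-teacher labeling. `proba` holds the student's
    predicted class probabilities (rows = samples). Each point is accepted
    with probability increasing in its uncertainty (1 - max class prob).
    Sketch of an accept/reject uncertainty sampler, not the paper's algorithm."""
    uncertainty = 1.0 - proba.max(axis=1)            # 0 = confident, ~1 = uncertain
    accept_prob = np.clip(scale * uncertainty / (uncertainty.max() + 1e-12), 0, 1)
    return np.where(rng.random(len(proba)) < accept_prob)[0]
```

Confident points are rarely sent to the teacher, which is what cuts API calls relative to random sampling.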
  5. By: Wayne Gao; Sukjin Han; Annie Liang
    Abstract: Large language models (LLMs) are increasingly used to predict human behavior. We propose a measure for evaluating how much knowledge a pretrained LLM brings to such a prediction: its equivalent sample size, defined as the amount of task-specific data needed to match the predictive accuracy of the LLM. We estimate this measure by comparing the prediction error of a fixed LLM in a given domain to that of flexible machine learning models trained on increasing samples of domain-specific data. We further provide a statistical inference procedure by developing a new asymptotic theory for cross-validated prediction error. Finally, we apply this method to the Panel Study of Income Dynamics. We find that LLMs encode considerable predictive information for some economic variables but much less for others, suggesting that their value as substitutes for domain-specific data differs markedly across settings.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.12343
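The equivalent-sample-size idea amounts to locating where an ML learning curve crosses the LLM's fixed prediction error. A sketch assuming a monotonically decreasing learning curve (the estimation and inference machinery is the paper's own contribution and is not reproduced here):

```python
import numpy as np

def equivalent_sample_size(llm_error, ns, ml_errors):
    """Given a fixed LLM's prediction error and ML prediction errors measured
    at increasing sample sizes `ns`, return the sample size at which the ML
    model matches the LLM, by linear interpolation. Illustrative sketch."""
    ml_errors = np.asarray(ml_errors, dtype=float)
    if llm_error >= ml_errors[0]:
        return float(ns[0])        # LLM no better than the smallest-sample model
    if llm_error <= ml_errors[-1]:
        return float('inf')        # LLM beats all observed sample sizes
    # learning curve is decreasing, so interpolate on the reversed arrays
    return float(np.interp(llm_error, ml_errors[::-1],
                           np.asarray(ns, dtype=float)[::-1]))
```

A large equivalent sample size means the pretrained LLM substitutes for a lot of domain-specific data; the paper finds this varies sharply across economic variables.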
  6. By: Goodall, Leonardo Sebastian; Törnberg, Petter; Ebner, Julia; Mosleh, Mohsen; Whitehouse, Harvey
    Abstract: Preventing and countering violent extremism (P/CVE) research has long relied on profiling approaches that draw on demographic variables, ideological labels, and observable late-stage behaviours. These strategies have consistently performed poorly, and their limitations are further exposed in contemporary digital environments, where ideological identities are fluid, most individuals who engage with extremist content never mobilise, and a highly fragmented ideological landscape frustrates stable categorisation. In parallel, psychological research has identified mechanisms that help explain why only a minority escalate to violence, yet these mechanisms remain difficult to operationalise and test at scale. This Perspective argues that recent advances in artificial intelligence—broadly defined to include statistical learning, generative modelling, and decision-oriented optimisation—provide tools to close this operational gap when explicitly aligned with psychological theory. At the individual level, machine learning, natural language processing, and large language models enable measurement and forecasting from heterogeneous digital traces. At the interpersonal level, graph-based approaches may capture influence dynamics, exposure pathways, and the evolution of extremist social milieus. At the collective level, agent-based simulations and field experiments support explanatory and counterfactual analysis of mobilisation processes. We advance a hybrid research agenda that prioritises theory testing, mechanism evaluation, and carefully bounded intervention analysis over automated individualised profiling, advancing a more mechanistic, empirically grounded, and scalable science of P/CVE.
    Date: 2026–02–04
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:3ytzr_v1
  7. By: George Awiakye-Marfo; Elijah Agbosu; Victoria Mawuena Barns; Samuel Asante Gyamerah
    Abstract: Deep learning models are effective for sequential data modeling, yet commonly used activation functions such as ReLU, LeakyReLU, and PReLU often exhibit gradient instability when applied to noisy, non-stationary financial time series. This study introduces Brownian ReLU (Br-ReLU), a stochastic activation function induced by Brownian motion that enhances gradient propagation and learning stability in Long Short-Term Memory (LSTM) networks. Using Monte Carlo simulation, Br-ReLU provides a smooth, adaptive response for negative inputs, mitigating the dying-ReLU problem. The proposed activation is evaluated on financial time series from Apple, GCB, and the S&P 500, as well as LendingClub loan data for classification. Results show consistently lower Mean Squared Error and higher $R^2$ values, indicating improved predictive accuracy and generalization. Although the ROC-AUC metric is limited in the classification tasks, activation choice significantly affects the trade-off between accuracy and sensitivity, with Br-ReLU and the other selected activation functions yielding practically meaningful performance.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.16446
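The abstract gives the ingredients (ReLU-like pass-through, a Monte Carlo average over Brownian noise, a smooth nonzero response for negative inputs) but not the formula. One plausible reading, labeled as our guess rather than the paper's definition:

```python
import numpy as np

def brownian_relu(x, n_mc=100, sigma=0.05, rng=None):
    """Illustrative stochastic ReLU variant: positive inputs pass through
    unchanged; negative inputs are scaled by a small random leak slope,
    the Monte Carlo average of |sigma * W_1| over Brownian draws, so their
    gradient never collapses to exactly zero ("dying ReLU").
    The paper's exact Br-ReLU formula is not given in the abstract."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    leak = np.abs(sigma * rng.standard_normal((n_mc,) + x.shape)).mean(axis=0)
    return np.where(x > 0, x, leak * x)
```

The design intent, per the abstract, is LeakyReLU-like behavior whose negative-side slope is stochastic and adaptive rather than a fixed constant.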
  8. By: Paker, Meredith; Stephenson, Judy; Wallis, Patrick
    Abstract: Understanding long-run economic growth requires reliable historical data, yet the vast majority of long-run economic time series are drawn from incomplete records with significant temporal and geographic gaps. Conventional solutions to these gaps rely on linear regressions that risk bias or overfitting when data are scarce. We introduce “past predictive modeling”, a framework that leverages machine learning and out-of-sample predictive modeling techniques to reconstruct representative historical time series from scarce data. Validating our approach using nominal wage data from England, 1300-1900, we show that this new method leads to more accurate and generalizable estimates, with bootstrapped standard errors 72% lower than benchmark linear regressions. Beyond improving accuracy, these improved wage estimates for England yield new insights into the impact of the Black Death on inequality, the economic geography of pre-industrial growth, and productivity over the long run.
    Keywords: machine learning; predictive modeling; wages; black death; industrial revolution
    JEL: J31 C53 N33 N13 N63
    Date: 2025–06–13
    URL: https://d.repec.org/n?u=RePEc:ehl:wpaper:128852
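The core mechanics (a flexible predictor fit to sparse observations, with bootstrapped standard errors on the reconstructed series) can be sketched generically. A minimal illustration with a polynomial predictor, not the authors' pipeline:

```python
import numpy as np

def bootstrap_prediction_se(x, y, x_grid, n_boot=500, degree=3, rng=None):
    """Reconstruct a sparse series (x = dates, y = observed values) with a
    flexible polynomial predictor evaluated on `x_grid`, and bootstrap the
    standard errors of the reconstruction. A minimal sketch of the
    'past predictive modeling' idea; the paper uses ML models, not polyfit."""
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, float), np.asarray(y, float)
    preds = np.empty((n_boot, len(x_grid)))
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))     # resample observed records
        coefs = np.polyfit(x[idx], y[idx], degree)
        preds[b] = np.polyval(coefs, x_grid)
    return preds.mean(axis=0), preds.std(axis=0)  # point estimate and SE
```

The paper's 72% reduction in bootstrapped standard errors is exactly the comparison this kind of resampling makes possible between a flexible predictor and a linear benchmark.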
  9. By: Joshua S. Gans
    Abstract: Machine learning systems embed preferences either in training losses or through post-processing of calibrated predictions. Applying information design methods from Strack and Yang (2024), this paper provides decision-problem-agnostic conditions under which separation (training preference-free and applying preferences ex post) is optimal. Unlike prior work that requires specifying downstream objectives, the welfare results here apply uniformly across decision problems. The key primitive is a diminishing-value-of-information condition: relative to a fixed (normalised) preference-free loss, preference embedding makes informativeness less valuable at the margin, inducing a mean-preserving contraction of learned posteriors. Because the value of information is convex in beliefs, preference-free training weakly dominates for any expected-utility decision problem. This provides theoretical foundations for modular AI pipelines that learn calibrated probabilities and implement asymmetric costs through downstream decision rules. However, separation requires users to implement optimal decision rules. When cognitive constraints bind, as documented in human-AI decision-making, preference embedding can dominate by automating threshold computation. These results provide design guidance: preserve optionality through post-processing when objectives may shift; embed preferences when decision-stage frictions dominate.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.18732
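The "separation" pipeline the abstract describes has a textbook concrete form: the model outputs a calibrated probability, and asymmetric costs enter only through a downstream threshold. A sketch of that standard decision rule (the general theory is the paper's; this is the binary-classification special case):

```python
def optimal_threshold(cost_fp, cost_fn):
    """Threshold for acting on a calibrated probability p of the positive
    class. Predicting positive risks cost_fp with probability (1 - p);
    predicting negative risks cost_fn with probability p. Acting is optimal
    iff (1 - p) * cost_fp < p * cost_fn, i.e. p > cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

def decide(p_calibrated, cost_fp, cost_fn):
    """Apply preferences ex post to a preference-free calibrated prediction."""
    return p_calibrated >= optimal_threshold(cost_fp, cost_fn)
```

This threshold computation is exactly the step the paper notes users may fail to perform when cognitive constraints bind, which is when embedding preferences in training can dominate.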
  10. By: Muhammad Abro; Hassan Jaleel
    Abstract: We attempt to mitigate the persistent tradeoff between risk and return in medium- to long-term portfolio management. This paper proposes a novel LLM-guided no-regret portfolio allocation framework that integrates online learning dynamics, market sentiment indicators, and large language model (LLM)-based hedging to construct high-Sharpe ratio portfolios tailored for risk-averse investors and institutional fund managers. Our approach builds on a follow-the-leader approach, enriched with sentiment-based trade filtering and LLM-driven downside protection. Empirical results demonstrate that our method outperforms a SPY buy-and-hold baseline by 69% in annualized returns and 119% in Sharpe ratio.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.17021
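The online-learning core of the framework is follow-the-leader. For linear per-period rewards over the simplex, the FTL solution concentrates weight on the asset with the best cumulative return so far; a minimal sketch without the paper's sentiment filtering or LLM hedging layers:

```python
import numpy as np

def follow_the_leader(returns):
    """Follow-the-leader allocation over T periods and n assets.
    `returns` is a (T, n) array of per-period asset returns. Each period,
    full weight goes to the asset with the best cumulative return so far
    (uniform before any history). Sketch of the FTL base strategy only."""
    T, n = returns.shape
    weights = np.full((T, n), 1.0 / n)
    cum = np.zeros(n)
    for t in range(1, T):
        cum += returns[t - 1]        # history available at time t
        w = np.zeros(n)
        w[np.argmax(cum)] = 1.0      # back the leader
        weights[t] = w
    return weights
```

The paper's contribution layers sentiment-based trade filtering and LLM-driven downside protection on top of this kind of no-regret base learner.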
  11. By: Snezana Ristevska-Jovanovska (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia); Ivona Serafimovska (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia); Irena Bogoevska-Gavrilova (Faculty of Economics-Skopje, Ss. Cyril and Methodius University in Skopje, North Macedonia)
    Abstract: Purpose
Artificial Intelligence (AI) has rapidly become a core constituent of digital marketing, revolutionizing how companies interact with consumers. AI-driven tools such as chatbots, algorithmic product recommendations, personalized ads, and predictive analytics give companies unprecedented capabilities to understand and influence consumer behavior in personalized, efficient, and scalable ways (Shoib and Hermawan, 2025). Reflecting this potential, industry analyses report that the use of AI in marketing has escalated (e.g., increasing by an estimated 84% in 2020 alone) (Gera and Kumar, 2023). Research increasingly shows that AI technologies shape purchase intentions by leveraging adaptive machine learning, real-time data processing, and multimodal data integration (Zhang et al., 2025). These capabilities allow firms to predict consumer behavior, personalize services, and enhance experiences, often fostering satisfaction and willingness to buy (Erliana, 2025; Lopes et al., 2024; Guo et al., 2024), positioning AI at the center of strategies designed to influence consumer behavior (Gansser and Reich, 2021; Kumar et al., 2024). However, the influence of AI is not uniformly positive. Studies also highlight challenges such as privacy concerns, security risks, and algorithmic biases that can undermine trust, with consumer attitudes, notably trust and perceived risk, acting as critical mediators of AI acceptance (Riandhi et al., 2025). Although the corpus on AI in marketing has expanded markedly, extant reviews remain fragmented or overly broad, leaving the purchase-intention focus under-synthesized (Chen and Prentice, 2024; Lee et al., 2023). With research output accelerating after 2020 and intensifying post-ChatGPT (2022), this study conducts a bibliometric and content analysis of the literature from 2009–2025, aiming to map publication trends, thematic concentrations, and emerging insights in this research stream. Specifically, our study addresses the following key research questions. RQ1: What are the publication trends and patterns in research on AI and consumer purchase intention from 2009 to 2025? RQ2: What are the main themes and topics explored in the literature linking AI to consumer purchase intention? RQ3: How have these research themes evolved, particularly in the last three years (2022–2025) following recent AI advancements (e.g., ChatGPT)?
Design/methodology/approach
To ensure transparency in conducting the systematic literature review (Lim and Rasul, 2022), we followed the four stages outlined in the PRISMA protocol: identification, screening, eligibility, and inclusion (Moher et al., 2009). The initial search was performed on August 25, 2025, using the Web of Science database. We targeted article titles (TITLE), abstracts (ABS), and keywords (KEY) with the terms: "artificial intelligence" OR "machine learning" OR "deep learning" OR "natural language processing" AND "purchase intention" OR "buy* intention" OR "willingness to buy" OR "intention to purchase" OR "intention to buy". Only journal articles published between 2009 and 2025 were considered, yielding 261 documents. After excluding one non-English article, 43 ineligible items (e.g., reviews, books, editorials), and 32 papers during abstract screening, the final dataset comprised 183 journal articles. This refined sample was subsequently analyzed using text mining in the latest version (1.6.20) of VOSviewer, producing visualizations of keyword co-occurrence across the whole period and the last three years, country co-authorship networks, publication trends, leading journals, and the most cited works. The three-year focus reflects the impact of ChatGPT's 2022 introduction, and although 2025 is ongoing, the results indicate a continuing upward trajectory.
Results and analysis
Descriptive analytics. Following the framework of Donthu et al. (2021), we focus on two main techniques: performance analysis and science mapping. Table 1 provides the performance metrics, covering publication metrics, citation metrics, and combinations of both. The corpus comprises 183 publications across 16 active years (NAY), resulting in an average productivity per active year (PAY) of 11.44. Authorship patterns indicate a highly collaborative domain: 11 single-authored papers (6.0%) versus 172 co-authored (94.0%), with 602 contributing authors (NCA) and a collaboration index (CI) of 3.63. Impact indicators reinforce the field's visibility. The set has accumulated 15,195 total citations (TC), averaging 83.03 citations per document (AC); 83.61% of items are cited at least once (PCP), and citations per cited publication (CCP) equal 29.62. Collaboration intensity is further reflected in the collaboration coefficient (CC = 0.9399), confirming the dominance of multi-authored work and the need for complementary expertise. Altogether, these metrics portray a productive, influential, and highly collaborative research stream.
Country co-authorship analysis. A country co-authorship map identified 52 countries, with the largest connected network comprising 43 nations across seven clusters (Figure 1). China leads with 73 publications (Cluster 3), followed by the United States (32, Cluster 6) and India (22, Cluster 1). This dominance of China aligns with broader AI research trends showing China as a leading producer of AI scholarship (Li and Rohayati, 2025), while the strong output of the US and India is consistent with their established roles in technology and marketing research (Hue and Hung, 2025). Other notable contributors include South Korea (12), Taiwan (11), France (7), Germany, England, Japan, and Malaysia (6 each), and Australia (3). The clusters highlight strong European collaboration (Austria, France, Germany, Spain) alongside cross-regional groupings such as England–China–Japan, Canada–Bangladesh–Poland, Gulf/Asian partnerships, and links like USA–South Africa–Ghana and Egypt–Jordan–Saudi Arabia. Such patterns mirror observations in related bibliometric studies, where international collaboration is seen to bridge diverse research communities in AI applications (Li and Rohayati, 2025). Altogether, the country network suggests that consumer-AI research is highly international, with powerhouse countries driving output and fostering cross-border partnerships that bring together complementary knowledge and market contexts.
Figure 1: Country co-authorship density visualization map (Source: Authors' depiction)
Keywords co-occurrence analysis based on text mining in the abstracts. We provide two network visualization maps depicting keyword co-occurrence patterns: one for the entire examined period (2009–2025) and one for the years since the introduction of ChatGPT (2022–2025).
For the whole analyzed period: through text mining of 183 abstracts (excluding structured abstract labels and copyright notices), a visualization map was constructed. The analysis produced 273 keywords meeting the minimum threshold, of which 31 were excluded, resulting in 242 keywords grouped into four clusters (Figure 2). In the network visualization, item size reflects frequency, lines represent co-occurrence strength, and selected labels are omitted to avoid overlap, revealing key thematic insights. The red cluster is dominated by "artificial intelligence" (49 occurrences) and closely related terms such as "acceptance", "user acceptance", "adoption", "attitude", and "perceived value". These co-occurrences suggest that scholarship primarily investigates how consumers perceive and adopt AI-driven technologies. For example, Sohn and Kwon (2020) emphasize that traditional models must be adapted for novel AI products, and Gansser and Reich (2021) extend UTAUT specifically for AI contexts. Recent research similarly underscores the mediating role of trust and perceived quality in these models (Pathak and Bansal, 2024; Riandhi et al., 2025), which is consistent with the cluster's strong ties between "AI" and attitude-oriented keywords. In the green cluster, "purchase intention" (71 occurrences) serves as the most central keyword, indicating its pivotal role within this body of research. Its strong co-occurrence with terms such as "artificial intelligence" and "consumer behaviour" highlights a growing interest in how AI-driven technologies shape consumer decision-making processes (Lopes et al., 2024). Connections with "engagement" and "social media" suggest that studies frequently examine interactive and digital environments as key contexts influencing purchase-related outcomes (Chen and Prentice, 2025). Meanwhile, links to "experience" and "trust" emphasize that both experiential factors and perceptions of credibility remain critical antecedents of purchase intention (Verhagen et al., 2006; Riandhi et al., 2025). The blue cluster centers on "behavior", "machine learning", "brand", "information", and "consumer satisfaction". It reflects how machine learning methods are applied to analyze consumer actions, predict decision-making, and generate insights for brand strategy. The strong links to "information" and "consumer satisfaction" emphasize the importance of information quality and post-purchase evaluations, positioning this cluster at the intersection of behavioral theory and data-driven marketing research. The yellow cluster is anchored by "e-commerce" (15 occurrences) and extends to "continuance intention", "voice assistants", "conversational agent", "anthropomorphism", and "credibility", reflecting research on how consumers engage with AI-driven commerce through human-like service interactions. In practice, this suggests that human-like AI agents and interfaces are a growing focus, as they influence consumer trust and satisfaction in digital commerce (Balakrishnan and Dwivedi, 2024; de Visser et al., 2016).
Figure 2: Keywords co-occurrence for the whole analyzed period (Source: Authors' depiction)
For the last three years: the text mining approach and criteria applied to the keyword co-occurrence analysis for the most recent three-year period are consistent with those used for the entire study period. The final dataset comprises 216 keywords, categorized into three clusters, which form the co-occurrence visualization map depicted in Figure 3. The red cluster is structured around "purchase intention" (64 occurrences), which emerges as its central node. Its close links with "trust" and "acceptance" indicate that consumer confidence and openness toward technology are key antecedents of purchasing behavior. Connections with "machine learning" and "technology" highlight the role of advanced analytical tools and technological contexts in shaping these intentions (Pathak and Bansal, 2024; Riandhi et al., 2025). Meanwhile, associations with "information", "intention", and "satisfaction" suggest that decision-making is strongly influenced by the quality of information provided and subsequent evaluations of consumer experience. The green cluster is defined by "artificial intelligence" (47 occurrences), accompanied by closely related terms such as "adoption", "behavior", "e-commerce", "behavioral intention", and "online". This configuration reflects a strong research orientation toward understanding how AI technologies are adopted and integrated into consumer contexts, particularly within digital commerce. The presence of methodological terms like "structural equation modeling" and theoretical constructs such as "planned behavior" suggests that much of this work is grounded in established behavioral frameworks and supported by advanced quantitative modeling. Together, these links emphasize the dual focus on conceptual explanation and methodological rigor in studies examining AI-driven consumer adoption. The blue cluster centers on "artificial intelligence" (19 occurrences) and extends to "experience", "social media", "engagement", and "consumer behavior". This cluster reflects how AI applications are increasingly examined in relation to consumer interactions within digital environments. The strong ties with "experience" and "engagement" suggest a focus on how AI enhances or transforms user experiences and fosters deeper consumer involvement. The inclusion of "social media" indicates that platforms serve as a critical context for studying these dynamics, particularly in shaping consumer perceptions and behaviors. Overall, this cluster underscores the intersection of technological innovation and experiential marketing, highlighting how AI-enabled tools influence consumer behavior in socially interactive settings (Chen and Prentice, 2025; Mustak et al., 2021). Although 2025 is still ongoing, the observed trends strongly suggest that this focus on purchase-related outcomes and experiential dimensions will continue to intensify.
Figure 3: Keywords co-occurrence for the period 2022-2025 (Source: Authors' depiction)
Originality/value
This review offers a focused bibliometric synthesis of AI's impact on consumer purchase intention, going beyond broad "AI-in-marketing" overviews to map themes, influential works, and networks specific to the intention outcome. By segmenting the corpus pre- and post-ChatGPT (2022–2025), it empirically documents the generative-AI inflection in topics (e.g., conversational agents, disclosure/transparency, trust, LLM-enabled assistance) and traces their evolution in co-occurrence networks. The review consolidates publication trends, most-cited works, country co-authorship networks, and keyword co-occurrence for the full period and for 2022–2025. This dual-window approach documents the post-ChatGPT shift in thematic emphasis and clarifies how companies can convert the field's dispersed insights into disciplined and powerful strategies that lift purchase outcomes. The study answers the research questions by mapping global publication patterns, synthesizing the main themes linking AI to purchase intention, and tracing their evolution over time, with clear evidence of thematic shifts after the introduction of ChatGPT. Future research could build on this foundation by refining the methodological scope of bibliometric analysis; in particular, the framework could be expanded to include bibliographic coupling, revealing emerging intellectual connections and thematic convergence across recent publications.
    Keywords: Artificial intelligence, Consumer behavior, Purchase intention, Bibliometric analysis
    JEL: O33
    Date: 2025–12–15
    URL: https://d.repec.org/n?u=RePEc:aoh:conpro:2025:i:6:p:325-331
  12. By: Eric Vansteenberghe
    Abstract: These lecture notes provide a comprehensive introduction to Quantitative Methods in Finance (QMF), designed for graduate students in finance and economics with heterogeneous programming backgrounds. The material develops a unified toolkit combining probability theory, statistics, numerical methods, and empirical modeling, with a strong emphasis on implementation in Python. Core topics include random variables and distributions, moments and dependence, simulation and Monte Carlo methods, numerical optimization, root-finding, and time-series models commonly used in finance and macro-finance. Particular attention is paid to translating theoretical concepts into reproducible code, emphasizing vectorization, numerical stability, and interpretation of outputs. The notes progressively bridge theory and practice through worked examples and exercises covering asset pricing intuition, risk measurement, forecasting, and empirical analysis. By focusing on clarity, minimal prerequisites, and hands-on computation, these lecture notes aim to serve both as a pedagogical entry point for non-programmers and as a practical reference for applied researchers seeking transparent and replicable quantitative methods in finance.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.12896
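    The Monte Carlo and vectorization toolkit described in the abstract above can be illustrated with a minimal sketch (illustrative only, not taken from the lecture notes; all names and parameters are the editor's assumptions): a vectorized Monte Carlo estimate of a European call price under risk-neutral geometric Brownian motion.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """Vectorized Monte Carlo price of a European call under GBM.

    Draws all terminal prices in one array operation rather than
    looping over paths, as emphasized in vectorized NumPy style.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under the risk-neutral measure
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    # Discounted average payoff is the Monte Carlo price estimate
    return np.exp(-r * t) * payoff.mean()

price = mc_call_price(s0=100, k=100, r=0.05, sigma=0.2, t=1.0)
```

    With these parameters the estimate should lie close to the Black–Scholes value of roughly 10.45; fixing the generator seed makes the result reproducible, in the spirit of the notes' emphasis on replicable computation.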
  13. By: Mahmoud Arbouch; Eduardo Amaral Haddad
    Abstract: This paper presents the theoretical specification and current developments of a Spatial (Interprovincial) Computable General Equilibrium (SCGE) model for Morocco. The model is formulated as a Johansen-type CGE system, solved in linearized form, and is designed to analyze the regional and national impacts of policy shocks within an integrated interregional economic framework. The Moroccan economy is disaggregated into 72 provinces, 20 production sectors, multiple institutional agents, and an external sector, allowing for detailed representation of interprovincial trade, production linkages, and income generation. Production technologies combine nested CES and Leontief structures, capturing substitution possibilities among regional and foreign sources of intermediate inputs and primary factors, while household behavior follows a Stone-Geary (Linear Expenditure System) specification. The model incorporates explicit treatments of investment allocation, capital accumulation, labor markets, migration, government behavior, and price formation under constant returns to scale, with extensions to allow for agglomeration economies. Calibration is based on a top-down disaggregation of the national input-output system for 2019, complemented by demographic and fiscal data, and parameterized using a combination of econometric estimates and standard values from the literature. In addition, the paper introduces a CO₂-emissions module that enables the simulation of carbon taxation policies and interregional revenue recycling schemes. The SCGE model provides a flexible and internally consistent tool for evaluating the regional distributional, environmental, and macroeconomic effects of structural reforms and climate-related policies in Morocco.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:ocp:rpaeco:rp18_25
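    The nested CES structure mentioned in the abstract above can be sketched with a minimal aggregator (a generic textbook CES form, not the paper's calibrated specification; inputs, shares, and elasticities below are invented for illustration):

```python
import numpy as np

def ces(x, share, sigma, scale=1.0):
    """CES aggregate of inputs x with distribution parameters `share`
    and elasticity of substitution `sigma` (sigma != 1)."""
    rho = (sigma - 1.0) / sigma
    return scale * np.sum(share * np.asarray(x, float) ** rho) ** (1.0 / rho)

# Nesting: first aggregate domestic and foreign intermediates (easier to
# substitute, sigma > 1), then combine that composite with labor in an
# upper nest with lower substitutability (sigma < 1).
domestic_foreign = ces([3.0, 2.0], share=np.array([0.7, 0.3]), sigma=2.0)
composite = ces([domestic_foreign, 4.0], share=np.array([0.5, 0.5]), sigma=0.8)
```

    Each nest collapses a vector of inputs into one composite, so deep nesting (regional sources, foreign sources, primary factors) is just repeated application of the same function with different elasticities.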
  14. By: Brown, Tarnell
    Abstract: Individuals released from jails and prisons face extremely high risks of fatal overdose and reincarceration, yet many jurisdictions continue to underprovide medications for opioid use disorder (MOUD), recovery housing, and supervised consumption services. At the same time, recovery residences and diversion courts are expanding without a clear framework for institutional design. This paper develops a mechanism-design model of harm-reduction policy at the interface of criminal justice and community treatment. A public funder chooses a funding regime and certification rules, diversion judges set the stringency of supervision and treatment conditions, recovery residence providers decide whether to operate abstinence-only or MOUD-inclusive housing, and high-risk individuals choose whether to comply or relapse. The model yields a punitive equilibrium, supported by abstinence-only funding and strict conditions, and a harm-reduction equilibrium under MOUD-inclusive funding and flexible conditions. Using effect sizes from Rhode Island’s statewide corrections MOUD program, Massachusetts’ jail-based MOUD pilots, and recent recovery housing evaluations, we show that the harm-reduction equilibrium is Pareto-superior for funders, judges, providers, and high-severity residents, yet the punitive equilibrium can remain risk-dominant because of political and informational frictions. We then embed the game in a computational social choice framework: stakeholders hold multi-dimensional preferences over policy bundles—combinations of funding rules, certification standards, diversion guidelines, and overdose prevention interventions such as supervised consumption sites—and social choice is constrained by justice-based requirements that rule out policies generating avoidable lethal risk or systematic exclusion of MOUD patients from housing and treatment. The analysis characterizes which harm-reduction mechanisms are implementable as equilibrium outcomes of the institutional game while respecting these constrained social preferences, and it identifies simple instruments—MOUD-inclusive funding commitments, performance-based transparency, and structured diversion defaults—that can move jurisdictions from punitive to harm-reduction equilibria within existing legal constraints.
    Date: 2026–02–03
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:wrkj3_v1
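    The tension the abstract above highlights — a Pareto-superior equilibrium that loses to a risk-dominant one — is the standard stag-hunt pattern and can be checked mechanically. The payoffs below are hypothetical, chosen only to reproduce that pattern, and are not taken from the paper:

```python
# Illustrative symmetric 2x2 coordination game (payoffs are hypothetical):
# H = harm-reduction, P = punitive; payoff[(a, b)] is player 1's payoff
# when player 1 plays a and player 2 plays b.
payoff = {("H", "H"): 4.0, ("H", "P"): 0.0,
          ("P", "H"): 3.0, ("P", "P"): 3.0}

def pareto_superior(s, t):
    """True if the symmetric profile (s, s) Pareto-dominates (t, t)."""
    return payoff[(s, s)] > payoff[(t, t)]

def risk_dominant(s, t):
    """Harsanyi-Selten criterion for symmetric 2x2 games: (s, s)
    risk-dominates (t, t) if its product of deviation losses is larger."""
    loss_s = payoff[(s, s)] - payoff[(t, s)]  # loss from deviating to t
    loss_t = payoff[(t, t)] - payoff[(s, t)]  # loss from deviating to s
    return loss_s * loss_s > loss_t * loss_t

# Harm-reduction is Pareto-superior, yet punitive is risk-dominant:
assert pareto_superior("H", "P") and risk_dominant("P", "H")
```

    Intuitively, deviating from the harm-reduction profile costs little (4 → 3) while deviating from the punitive profile costs a lot (3 → 0), so cautious players coordinate on the punitive equilibrium even though both prefer the other one — the frictions the paper's instruments are designed to overcome.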

This nep-cmp issue is ©2026 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.