nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒01‒22
twenty-six papers chosen by



  1. Historical Calibration of SVJD Models with Deep Learning By Milan Ficura; Jiri Witzany
  2. Finding the best trade-off between performance and interpretability in predicting hospital length of stay using structured and unstructured data By Franck Jaotombo; Luca Adorni; Badih Ghattas; Laurent Boyer
  3. Scalable Agent-Based Modeling for Complex Financial Market Simulations By Aaron Wheeler; Jeffrey D. Varner
  4. How Generative-AI can be Effectively used in Government Chatbots By Zeteng Lin
  5. Machine-learning prediction for hospital length of stay using a French medico-administrative database By Franck Jaotombo; Vanessa Pauly; Guillaume Fond; Veronica Orleans; Pascal Auquier; Badih Ghattas; Laurent Boyer
  6. Energy poverty prediction and effective targeting for just transitions with machine learning By Spandagos, Constantine; Tovar Reaños, Miguel; Lynch, Muireann Á.
  7. The Machine Learning Control Method for Counterfactual Forecasting By Augusto Cerqua; Marco Letta; Fiammetta Menchetti
  8. Detecting Toxic Flow By Álvaro Cartea; Gerardo Duran-Martin; Leandro Sánchez-Betancourt
  9. Double Machine Learning for Static Panel Models with Fixed Effects By Paul Clarke; Annalivia Polselli
  10. Corporate Bankruptcy Prediction with Domain-Adapted BERT By Alex Kim; Sangwon Yoon
  11. Artificial Intelligence in the Knowledge Economy By Enrique Ide; Eduard Talamas
  12. Learning Merton's Strategies in an Incomplete Market: Recursive Entropy Regularization and Biased Gaussian Exploration By Min Dai; Yuchao Dong; Yanwei Jia; Xun Yu Zhou
  13. Shai: A large language model for asset management By Zhongyang Guo; Guanran Jiang; Zhongdan Zhang; Peng Li; Zhefeng Wang; Yinchun Wang
  14. Spontaneous Coupling of Q-Learning Algorithms in Equilibrium By Ivan Conjeaud
  15. CVA Hedging by Risk-Averse Stochastic-Horizon Reinforcement Learning By Roberto Daluiso; Marco Pinciroli; Michele Trapletti; Edoardo Vittori
  16. RetailSynth: Synthetic Data Generation for Retail AI Systems Evaluation By Yu Xia; Ali Arian; Sriram Narayanamoorthy; Joshua Mabry
  17. Predicting Financial Literacy via Semi-supervised Learning By David Hason Rudd; Huan Huo; Guandong Xu
  18. Mapping the Dynamics of Management Styles – Evidence from German Survey Data By Florian Englmaier; Michael Hofmann; Stefanie Wolter
  19. A survey on algorithms for Nash equilibria in finite normal-form games By Hanyu Li; Wenhan Huang; Zhijian Duan; David Henry Mguni; Kun Shao; Jun Wang; Xiaotie Deng
  20. Artificial Intelligence, Tasks, Skills and Wages: Worker-Level Evidence from Germany By Engberg, Erik; Koch, Michael; Lodefalk, Magnus; Schroeder, Sarah
  21. Bedrohungen und Chancen frühzeitig erkennen: Entwicklung eines Früherkennungskonzepts By Akalan, Rodi; Brink, Siegrun; Icks, Annette; Wolter, Hans-Jürgen
  22. SimPaths: an open-source microsimulation model for life course analysis By Richiardi, Matteo; Bronka, Patryk; van de Ven, Justin; Kopasker, Daniel; Vittal Katikireddi, Srinivasa
  23. Artificial Intelligence, Tasks, Skills and Wages: Worker-Level Evidence from Germany By Engberg, Erik; Koch, Michael; Lodefalk, Magnus; Schroeder, Sarah
  24. Simulation of a Lévy process, its extremum, and hitting time of the extremum via characteristic functions By Svetlana Boyarchenko; Sergei Levendorskii
  25. A DSGE Model Including Trend Information and Regime Switching at the ZLB By Paolo Gelain; Pierlauro Lopez
  26. The Transformative Effects of AI on International Economics By Rafael Andersson Lipcsey

  1. By: Milan Ficura (Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic); Jiri Witzany (Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic)
    Abstract: We show how deep neural networks can be used to calibrate the parameters of Stochastic-Volatility Jump-Diffusion (SVJD) models to historical asset return time series. One-dimensional Convolutional Neural Networks (1D-CNN) are used for this purpose. The accuracy of the deep learning approach is compared with machine learning methods based on shallow neural networks and hand-crafted features, and with commonly used statistical approaches such as MCMC and approximate MLE. The deep learning approach is found to be accurate and robust, outperforming the other approaches in simulation tests. Its main advantage is that it is fully generic and can be applied to any SVJD model from which simulations can be drawn. A further advantage is its speed in situations where the parameter estimation needs to be repeated on new data: in such cases, the trained neural network can be used to estimate the SVJD model parameters almost instantaneously.
    Keywords: Stochastic volatility, price jumps, SVJD, neural networks, deep learning, CNN
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2023_36&r=cmp
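    As an editorial illustration of the simulation-based calibration idea above, the following is a minimal sketch, not the authors' code: a toy SVJD-style simulator generates return paths for known parameter vectors and a small 1D-CNN learns the inverse map. The simulator, parameter names and ranges, and network architecture are assumptions chosen only to make the sketch runnable.
```python
# Minimal sketch of simulation-based SVJD calibration with a 1D-CNN (illustrative only; the
# simulator, parameter ranges, and network architecture below are assumptions, not the paper's).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
T = 250                                  # length of each simulated daily return series
PARAM_NAMES = ["kappa", "theta", "sigma_v", "lambda_jump"]

def simulate_svjd(params, T):
    """Toy stand-in for an SVJD simulator: returns a return path of length T."""
    kappa, theta, sigma_v, lam = params
    v, out = theta, np.empty(T)
    for t in range(T):
        v = max(v + kappa * (theta - v) / 252
                + sigma_v * np.sqrt(v / 252) * rng.standard_normal(), 1e-6)
        jump = 0.03 * rng.standard_normal() if rng.random() < lam / 252 else 0.0
        out[t] = np.sqrt(v / 252) * rng.standard_normal() + jump
    return out

# Training set: draw parameter vectors, simulate a path for each, learn the inverse map.
# (Target scaling/normalisation is omitted to keep the sketch short.)
n_train = 2000
y = np.column_stack([rng.uniform(0.5, 5.0, n_train),     # kappa
                     rng.uniform(0.01, 0.09, n_train),   # theta
                     rng.uniform(0.1, 0.6, n_train),     # sigma_v
                     rng.uniform(5.0, 50.0, n_train)])   # jump intensity per year
X = np.stack([simulate_svjd(p, T) for p in y])[..., np.newaxis]   # shape (n_train, T, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, 1)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(PARAM_NAMES)),             # one output per SVJD parameter
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Once trained, calibrating a new series is a single, near-instant forward pass.
new_series = simulate_svjd(y[0], T)[np.newaxis, :, np.newaxis]
print(dict(zip(PARAM_NAMES, model.predict(new_series, verbose=0)[0])))
```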
  2. By: Franck Jaotombo (EM - emlyon business school); Luca Adorni; Badih Ghattas; Laurent Boyer
    Abstract: Objective: This study aims to develop high-performing Machine Learning and Deep Learning models in predicting hospital length of stay (LOS) while enhancing interpretability. We compare performance and interpretability of models trained only on structured tabular data with models trained only on unstructured clinical text data, and on mixed data. Methods: The structured data was used to train fourteen classical Machine Learning models including advanced ensemble trees, neural networks and k-nearest neighbors. The unstructured data was used to fine-tune a pre-trained Bio Clinical BERT Transformer Deep Learning model. The structured and unstructured data were then merged into a tabular dataset after vectorization of the clinical text and a dimensional reduction through Latent Dirichlet Allocation. The study used the free and publicly available Medical Information Mart for Intensive Care (MIMIC) III database and the open AutoML library AutoGluon. Performance is evaluated with respect to two types of random classifiers, used as baselines. Results: The best model from structured data demonstrates high performance (ROC AUC = 0.944, PRC AUC = 0.655) with limited interpretability, where the most important predictors of prolonged LOS are the level of blood urea nitrogen and of platelets. The Transformer model displays a good but lower performance (ROC AUC = 0.842, PRC AUC = 0.375) with a richer array of interpretability by providing more specific in-hospital factors including procedures, conditions, and medical history. The best model trained on mixed data satisfies both a high level of performance (ROC AUC = 0.963, PRC AUC = 0.746) and a much larger scope in interpretability including pathologies of the intestine, the colon, and the blood; infectious diseases, respiratory problems, procedures involving sedation and intubation, and vascular surgery. Conclusions: Our results outperform most of the state-of-the-art models in LOS prediction both in terms of performance and of interpretability. Data fusion between structured and unstructured text data may significantly improve performance and interpretability.
    Keywords: hospital length of stay, explainable AI, data fusion, structured and unstructured data, clinical transformers
    Date: 2023–11–30
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04339462&r=cmp
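    A minimal sketch of the data-fusion step described above, under illustrative assumptions: clinical notes are vectorized with a bag-of-words counter, reduced to topic proportions with Latent Dirichlet Allocation, and concatenated with structured features before fitting a classifier. The placeholder notes, features, and outcome below are synthetic; AutoGluon and the Bio Clinical BERT fine-tuning are not reproduced.
```python
# Sketch: fuse unstructured clinical text (via LDA topics) with structured features.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder data standing in for clinical notes and tabular variables.
notes = ["patient admitted with sepsis and intubated",
         "elective knee surgery, short recovery",
         "colon resection, prolonged antibiotics",
         "routine observation, discharged next day"]
structured = np.array([[72, 1.8], [55, 0.9], [64, 1.2], [40, 0.7]])   # e.g. age, creatinine
prolonged_los = np.array([1, 0, 1, 0])

# Vectorize the text and reduce it to topic proportions with LDA.
counts = CountVectorizer().fit_transform(notes)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Concatenate topic proportions with the structured features ("mixed data").
X = np.hstack([structured, topics])
clf = GradientBoostingClassifier(random_state=0).fit(X, prolonged_los)
print(clf.predict_proba(X)[:, 1])   # in-sample probabilities, illustration only
```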
  3. By: Aaron Wheeler; Jeffrey D. Varner
    Abstract: In this study, we developed a computational framework for simulating large-scale agent-based financial markets. Our platform supports trading multiple simultaneous assets and leverages distributed computing to scale the number and complexity of simulated agents. Heterogeneous agents make decisions in parallel, and their orders are processed through a realistic, continuous double auction matching engine. We present a baseline model implementation and show that it captures several known statistical properties of real financial markets (i.e., stylized facts). Further, we demonstrate these results without fitting models to historical financial data. Thus, this framework could be used for direct applications such as human-in-the-loop machine learning or to explore theoretically exciting questions about market microstructure's role in forming the statistical regularities of real markets. To the best of our knowledge, this study is the first to implement multiple assets, parallel agent decision-making, a continuous double auction mechanism, and intelligent agent types in a scalable real-time environment.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14903&r=cmp
  4. By: Zeteng Lin
    Abstract: With the rapid development of artificial intelligence and breakthroughs in machine learning and natural language processing, intelligent question-answering robots have become widely used in government affairs. This paper conducts a horizontal comparison between Guangdong Province's government chatbots, ChatGPT, and Wenxin Ernie, two large language models, to analyze the strengths and weaknesses of existing government chatbots and AIGC technology. The study finds significant differences between government chatbots and large language models. China's government chatbots are still in an exploratory stage and have a gap to close to achieve "intelligence." To explore the future direction of government chatbots more deeply, this research proposes targeted optimization paths to help generative AI be effectively applied in government chatbot conversations.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.02181&r=cmp
  5. By: Franck Jaotombo (EM - emlyon business school); Vanessa Pauly; Guillaume Fond; Veronica Orleans; Pascal Auquier; Badih Ghattas; Laurent Boyer
    Abstract: "Introduction: Prolonged Hospital Length of Stay (PLOS) is an indicator of deteriorated efficiency in Quality of Care. One goal of public health management is to reduce PLOS by identifying its most relevant predictors. The objective of this study is to explore Machine Learning (ML) models that best predict PLOS.Methods: Our dataset was collected from the French Medico-Administrative database (PMSI) as a retrospective cohort study of all discharges in the year 2015 from a large university hospital in France (APHM). The study outcomes were LOS transformed into a binary variable (long vs. short LOS) according to the 90th percentile (14 days). Logistic regression (LR), classification and regression trees (CART), random forest (RF), gradient boosting (GB) and neural networks (NN) were applied to the collected data. The predictive performance of the models was evaluated using the area under the ROC curve (AUC).Results: Our analysis included 73, 182 hospitalizations, of which 7, 341 (10.0%) led to PLOS. The GB classifier was the most performant model with the highest AUC (0.810), superior to all the other models (all p-values
    Keywords: Machine learning, neural network, prediction, health services research, public health
    Date: 2023–01–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04325691&r=cmp
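    The outcome construction and evaluation described above can be sketched as follows on synthetic data: length of stay is binarized at its 90th percentile and a gradient-boosting classifier is scored by ROC AUC. The feature set and data-generating process are illustrative assumptions, not the PMSI data.
```python
# Sketch: binarize LOS at the 90th percentile and evaluate a gradient-boosting classifier by AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))                      # placeholder admission-level features
los_days = rng.gamma(shape=2.0, scale=3.0, size=n) + X[:, 0].clip(min=0) * 4  # synthetic LOS

threshold = np.percentile(los_days, 90)           # 90th percentile (about 14 days in the paper)
y = (los_days > threshold).astype(int)            # prolonged-LOS indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```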
    By: Spandagos, Constantine; Tovar Reaños, Miguel; Lynch, Muireann Á.
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:esr:wpaper:wp762&r=cmp
  7. By: Augusto Cerqua; Marco Letta; Fiammetta Menchetti
    Abstract: Without a credible control group, the most widespread methodologies for estimating causal effects cannot be applied. To fill this gap, we propose the Machine Learning Control Method (MLCM), a new approach for causal panel analysis based on counterfactual forecasting with machine learning. The MLCM estimates policy-relevant causal parameters in short- and long-panel settings without relying on untreated units. We formalize identification in the potential outcomes framework and then provide estimation based on supervised machine learning algorithms. To illustrate the advantages of our estimator, we present simulation evidence and an empirical application on the impact of the COVID-19 crisis on educational inequality in Italy. We implement the proposed method in the companion R package MachineControl.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.05858&r=cmp
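    The core idea above, forecasting the no-treatment counterfactual from pre-treatment panel history and differencing it from observed outcomes, can be sketched as follows. This is a simplified illustration on simulated data, not the authors' MachineControl package; the AR(1) data-generating process and the random-forest learner are assumptions.
```python
# Sketch of ML-based counterfactual forecasting for panel data without untreated units.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_units, T0, T1 = 200, 8, 2            # units, pre-treatment periods, post-treatment periods
beta = np.array([1.0, -0.5, 0.3])
covariates = rng.normal(size=(n_units, 3))

# Synthetic untreated potential outcomes with an AR(1) component; observed outcomes are
# shifted by a true effect of -2 in the post-treatment window.
Y0 = np.zeros((n_units, T0 + T1))
for t in range(1, T0 + T1):
    Y0[:, t] = 0.6 * Y0[:, t - 1] + covariates @ beta + rng.normal(scale=0.5, size=n_units)
Y = Y0.copy()
Y[:, T0:] -= 2.0

# Train on pre-treatment periods only: predict Y_t from covariates and the lagged outcome.
X_pre = np.column_stack([np.repeat(covariates, T0 - 1, axis=0), Y[:, :T0 - 1].reshape(-1, 1)])
y_pre = Y[:, 1:T0].reshape(-1)
model = RandomForestRegressor(random_state=0).fit(X_pre, y_pre)

# Forecast the no-treatment counterfactual recursively and difference it from observed outcomes.
effects, y_lag = [], Y[:, T0 - 1]
for t in range(T0, T0 + T1):
    y_hat = model.predict(np.column_stack([covariates, y_lag]))   # counterfactual forecast
    effects.append(round((Y[:, t] - y_hat).mean(), 2))            # average effect estimate at t
    y_lag = y_hat                                                 # roll forecasts forward
print("estimated per-period effects:", effects, "(true effect: -2.0)")
```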
    By: Álvaro Cartea; Gerardo Duran-Martin; Leandro Sánchez-Betancourt
    Abstract: This paper develops a framework to predict toxic trades that a broker receives from her clients. Toxic trades are predicted with a novel online Bayesian method which we call the projection-based unification of last-layer and subspace estimation (PULSE). PULSE is a fast and statistically-efficient online procedure to train a Bayesian neural network sequentially. We employ a proprietary dataset of foreign exchange transactions to test our methodology. PULSE outperforms standard machine learning and statistical methods when predicting if a trade will be toxic; the benchmark methods are logistic regression, random forests, and a recursively-updated maximum-likelihood estimator. We devise a strategy for the broker who uses toxicity predictions to internalise or to externalise each trade received from her clients. Our methodology can be implemented in real-time because it takes less than one millisecond to update parameters and make a prediction. Compared with the benchmarks, PULSE attains the highest PnL and the largest avoided loss for the horizons we consider.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.05827&r=cmp
  9. By: Paul Clarke; Annalivia Polselli
    Abstract: Machine Learning (ML) algorithms are powerful data-driven tools for approximating high-dimensional or non-linear nuisance functions which are useful in practice because the true functional form of the predictors is ex-ante unknown. In this paper, we develop estimators of policy interventions from panel data which allow for non-linear effects of the confounding regressors, and investigate the performance of these estimators using three well-known ML algorithms, specifically, LASSO, classification and regression trees, and random forests. We use Double Machine Learning (DML) (Chernozhukov et al., 2018) for the estimation of causal effects of homogeneous treatments with unobserved individual heterogeneity (fixed effects) and no unobserved confounding by extending Robinson (1988)'s partially linear regression model. We develop three alternative approaches for handling unobserved individual heterogeneity based on extending the within-group estimator, first-difference estimator, and correlated random effect estimator (Mundlak, 1978) for non-linear models. Using Monte Carlo simulations, we find that conventional least squares estimators can perform well even if the data generating process is non-linear, but there are substantial performance gains in terms of bias reduction under a process where the true effect of the regressors is non-linear and discontinuous. However, for the same scenarios, we also find -- despite extensive hyperparameter tuning -- inference to be problematic for both tree-based learners because these lead to highly non-normal estimator distributions and the estimator variance being severely under-estimated. This contradicts the performance of trees in other circumstances and requires further investigation. Finally, we provide an illustrative example of DML for observational panel data showing the impact of the introduction of the national minimum wage in the UK.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.08174&r=cmp
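    A stripped-down sketch of the first-difference variant of the approach described above: differencing removes the unit fixed effects, random forests residualize the differenced outcome and treatment on both periods' covariates with simple two-fold cross-fitting, and the effect is recovered by Robinson-style partialling out. The data-generating process is simulated, and the paper's full estimators and inference are not reproduced.
```python
# Sketch: double machine learning for a panel with unit fixed effects, removed by first
# differencing; a simplified illustration of the general approach, not the paper's estimators.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n, T, theta = 500, 6, 1.5                          # units, periods, true treatment effect

alpha = rng.normal(size=(n, 1))                    # unit fixed effects (confound D and Y)
X = rng.normal(size=(n, T, 2))                     # observed covariates
m = np.sin(X[..., 0])                              # non-linear treatment assignment
g = np.sin(X[..., 0]) + 0.5 * X[..., 1] ** 2       # non-linear outcome confounding
D = m + alpha + rng.normal(scale=0.5, size=(n, T))
Y = theta * D + g + alpha + rng.normal(scale=0.5, size=(n, T))

# First differences remove the fixed effects; both periods' covariates act as controls.
dY = np.diff(Y, axis=1).reshape(-1)
dD = np.diff(D, axis=1).reshape(-1)
W = np.concatenate([X[:, 1:, :], X[:, :-1, :]], axis=2).reshape(n * (T - 1), 4)
unit = np.repeat(np.arange(n), T - 1)

# Cross-fitting by unit: nuisance functions are predicted on units left out of training.
res_Y, res_D = np.empty_like(dY), np.empty_like(dD)
for fold in (0, 1):
    tr, te = unit % 2 != fold, unit % 2 == fold
    res_Y[te] = dY[te] - RandomForestRegressor(random_state=0).fit(W[tr], dY[tr]).predict(W[te])
    res_D[te] = dD[te] - RandomForestRegressor(random_state=0).fit(W[tr], dD[tr]).predict(W[te])

theta_hat = (res_D @ res_Y) / (res_D @ res_D)      # Robinson-style partialling-out estimate
print(f"theta_hat = {theta_hat:.2f} (true value {theta})")
```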
  10. By: Alex Kim; Sangwon Yoon
    Abstract: This study applies BERT, a representative contextualized language model, to corporate disclosure data to predict impending bankruptcies. Prior literature on bankruptcy prediction mainly focuses on developing more sophisticated prediction methodologies with financial variables. In our study, by contrast, we focus on improving the quality of the input dataset. Specifically, we employ the BERT model to perform sentiment analysis on MD&A disclosures. We show that BERT outperforms dictionary-based predictions and Word2Vec-based predictions in terms of adjusted R-square in logistic regression, k-nearest neighbor (kNN-5), and linear kernel support vector machine (SVM). Further, instead of pre-training the BERT model from scratch, we apply self-learning with confidence-based filtering to corporate disclosure data (10-K). We achieve an accuracy rate of 91.56% and demonstrate that the domain adaptation procedure brings a significant improvement in prediction accuracy.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.03194&r=cmp
  11. By: Enrique Ide; Eduard Talamas
    Abstract: How does Artificial Intelligence (AI) affect the organization of work and the structure of wages? We study this question in a model where heterogeneous agents in terms of knowledge--humans and machines--endogenously sort into hierarchical teams: Less knowledgeable agents become "workers" (i.e., execute routine tasks), while more knowledgeable agents become "managers" (i.e., specialize in problem solving). When AI's knowledge is equivalent to that of a pre-AI worker, AI displaces humans from routine work into managerial work compared to the pre-AI outcome. In contrast, when AI's knowledge is that of a pre-AI manager, it shifts humans from managerial work to routine work. AI increases total human labor income, but it necessarily creates winners and losers: When AI's knowledge is low, only the most knowledgeable humans experience income gains. In contrast, when AI's knowledge is high, both extremes of the knowledge distribution benefit. In any case, the introduction of AI harms the middle class.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.05481&r=cmp
  12. By: Min Dai; Yuchao Dong; Yanwei Jia; Xun Yu Zhou
    Abstract: We study Merton's expected utility maximization problem in an incomplete market, characterized by a factor process in addition to the stock price process, where all the model primitives are unknown. We take the reinforcement learning (RL) approach to learn optimal portfolio policies directly by exploring the unknown market, without attempting to estimate the model parameters. Based on the entropy-regularization framework for general continuous-time RL formulated in Wang et al. (2020), we propose a recursive weighting scheme on exploration that endogenously discounts the current exploration reward by the past accumulative amount of exploration. Such a recursive regularization restores the optimality of Gaussian exploration. However, contrary to the existing results, the optimal Gaussian policy turns out to be biased in general, due to the intertwined needs for hedging and for exploration. We present an asymptotic analysis of the resulting errors to show how the level of exploration affects the learned policies. Furthermore, we establish a policy improvement theorem and design several RL algorithms to learn Merton's optimal strategies. Finally, we carry out both simulation and empirical studies with a stochastic volatility environment to demonstrate the efficiency and robustness of the RL algorithms in comparison to the conventional plug-in method.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.11797&r=cmp
  13. By: Zhongyang Guo; Guanran Jiang; Zhongdan Zhang; Peng Li; Zhefeng Wang; Yinchun Wang
    Abstract: This paper introduces "Shai", a 10B-level large language model specifically designed for the asset management industry, built upon an open-source foundational model. With continuous pre-training and fine-tuning using a targeted corpus, Shai demonstrates enhanced performance in tasks relevant to its domain, outperforming baseline models. Our research includes the development of an innovative evaluation framework, which integrates professional qualification exams, tailored tasks, open-ended question answering, and safety assessments, to comprehensively assess Shai's capabilities. Furthermore, we discuss the challenges and implications of utilizing large language models like GPT-4 for performance assessment in asset management, suggesting a combination of automated evaluation and human judgment. By showcasing the potential and versatility of 10B-level large language models in the financial sector, with strong performance and modest computational requirements, Shai's development aims to provide practical insights and methodologies to assist industry peers in similar endeavors.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14203&r=cmp
  14. By: Ivan Conjeaud
    Abstract: Most contributions in the algorithmic collusion literature only consider symmetric algorithms interacting with each other. We study a simple model of algorithmic collusion in which Q-learning algorithms repeatedly play a prisoner's dilemma, and we allow players to choose different exploration policies. We characterize the behavior of such algorithms with asymmetric policies for extreme values and prove that any Nash equilibrium features some cooperative behavior. We further investigate the dynamics for general profiles of exploration policies by running extensive numerical simulations, which indicate that equilibria are symmetric and give insight into their distribution.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.02644&r=cmp
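    The setting described above can be illustrated with a minimal sketch: two Q-learning agents with different exploration rates repeatedly play a prisoner's dilemma, with the previous joint action as the state. The payoffs, learning rate, discount factor, and exploration rates are illustrative assumptions, not the paper's parameterization.
```python
# Sketch: two Q-learning agents with different (asymmetric) exploration rates repeatedly
# playing a prisoner's dilemma; illustrative of the setting studied, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 4),      # standard prisoner's dilemma payoffs
           ("D", "C"): (4, 0), ("D", "D"): (1, 1)}
ACTIONS = ("C", "D")
ALPHA, GAMMA = 0.1, 0.95                                 # learning rate, discount factor
EPSILONS = (0.10, 0.01)                                  # asymmetric exploration policies

# State = previous joint action; Q[i][state][action] holds player i's action values.
states = list(PAYOFFS.keys())
Q = [{s: {a: 0.0 for a in ACTIONS} for s in states} for _ in range(2)]

state = ("C", "C")
for t in range(100_000):
    actions = []
    for i in range(2):
        if rng.random() < EPSILONS[i]:                   # explore
            actions.append(ACTIONS[rng.integers(2)])
        else:                                            # exploit the current greedy action
            actions.append(max(ACTIONS, key=Q[i][state].get))
    next_state = tuple(actions)
    rewards = PAYOFFS[next_state]
    for i in range(2):
        td_target = rewards[i] + GAMMA * max(Q[i][next_state].values())
        Q[i][state][actions[i]] += ALPHA * (td_target - Q[i][state][actions[i]])
    state = next_state

# Inspect the learned greedy play from the mutual-cooperation state.
print([max(ACTIONS, key=Q[i][("C", "C")].get) for i in range(2)])
```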
  15. By: Roberto Daluiso; Marco Pinciroli; Michele Trapletti; Edoardo Vittori
    Abstract: This work studies the dynamic risk management of the risk-neutral value of the potential credit losses on a portfolio of derivatives. Sensitivities-based hedging of such a liability is sub-optimal because of bid-ask costs, pricing models that cannot be completely realistic, and a discontinuity at default time. We leverage recent advances in risk-averse Reinforcement Learning developed specifically for option hedging, with an ad hoc practice-aligned objective function aware of pathwise volatility, and generalize them to stochastic horizons. We formalize accurately the evolution of the hedger's portfolio, stressing these aspects. We showcase the efficacy of our approach through a numerical study for a portfolio composed of a single FX forward contract.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14044&r=cmp
  16. By: Yu Xia; Ali Arian; Sriram Narayanamoorthy; Joshua Mabry
    Abstract: Significant research effort has been devoted in recent years to developing personalized pricing, promotions, and product recommendation algorithms that can leverage rich customer data to learn and earn. Systematic benchmarking and evaluation of these causal learning systems remains a critical challenge, due to the lack of suitable datasets and simulation environments. In this work, we propose a multi-stage model for simulating customer shopping behavior that captures important sources of heterogeneity, including price sensitivity and past experiences. We embedded this model into a working simulation environment -- RetailSynth. RetailSynth was carefully calibrated on publicly available grocery data to create realistic synthetic shopping transactions. Multiple pricing policies were implemented within the simulator and analyzed for impact on revenue, category penetration, and customer retention. Applied researchers can use RetailSynth to validate causal demand models for multi-category retail and to incorporate realistic price sensitivity into emerging benchmarking suites for personalized pricing, promotions, and product recommendations.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14095&r=cmp
  17. By: David Hason Rudd; Huan Huo; Guandong Xu
    Abstract: Financial literacy (FL) represents a person's ability to turn assets into income, and understanding digital currencies has been added to the modern definition. FL can be predicted by exploiting unlabelled recorded data in financial networks via semi-supervised learning (SSL). Measuring and predicting FL has not been widely studied, resulting in limited understanding of the consequences of customer financial engagement. Previous studies have shown that low FL increases the risk of social harm. Therefore, it is important to accurately estimate FL to allocate specific intervention programs to less financially literate groups. This will not only increase company profitability, but will also reduce government spending. Some studies considered predicting FL in classification tasks, whereas others developed FL definitions and impacts. The current paper investigates mechanisms to learn customer FL level from their financial data using the synthetic minority over-sampling technique for regression with Gaussian noise (SMOGN). We propose the SMOGN-COREG model for semi-supervised regression, applying SMOGN to deal with unbalanced datasets and a nonparametric multi-learner co-regression (COREG) algorithm for labeling. We compared the SMOGN-COREG model with six well-known regressors on five datasets to evaluate the proposed model's effectiveness on unbalanced and unlabelled financial data. Experimental results confirmed that the proposed method outperformed the comparator models for unbalanced and unlabelled financial data. Therefore, SMOGN-COREG is a step towards using unlabelled data to estimate FL level.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.10984&r=cmp
  18. By: Florian Englmaier (LMU Munich); Michael Hofmann (LMU München); Stefanie Wolter (IAB Nürnberg)
    Abstract: We study how firms adjust the bundles of management practices they adopt over time, using repeated survey data collected in Germany from 2012 to 2018. By employing unsupervised machine learning, we leverage high-dimensional data on human resource policies to describe clusters of management practices (management styles). Our results suggest that two management styles exist, one of which employs many and highly structured practices, while the other lacks these practices but retains training measures. We document sizeable differences in styles across German firms, which can (only) partially be explained by firm characteristics. Further, we show that management is highly persistent over time, in part because newly adopted practices are discontinued after a short time. We suggest miscalculations of cost-benefit trade-offs and a non-fitting corporate culture as potential hindrances to adopting structured management. In light of previous findings that structured management increases firm performance, our findings have important policy implications since they show that firms which are managed in an unstructured way fail to catch up and will continue to underperform.
    Keywords: management practices; personnel management; panel data analysis; machine learning;
    JEL: M12 D22 C38
    Date: 2023–12–14
    URL: http://d.repec.org/n?u=RePEc:rco:dpaper:481&r=cmp
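    A minimal sketch of the clustering step described above, on synthetic survey-like data: binary practice indicators are standardized and grouped into two clusters ("styles") with k-means, and each cluster is characterized by its average adoption rates. The practice list and adoption rates are assumptions; the paper's survey data and exact method are not reproduced.
```python
# Sketch: recovering management "styles" as clusters of practice bundles with unsupervised
# learning; synthetic survey-like data, illustrative of the approach rather than the paper's.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_firms, n_practices = 1000, 12       # e.g. indicators for appraisal, bonus pay, training, ...

# Two latent styles: "structured" firms adopt most practices, the other group mainly training.
structured = rng.random(n_firms) < 0.4
base_rates = np.where(structured[:, None], 0.8, 0.2) * np.ones((n_firms, n_practices))
base_rates[~structured, 0] = 0.7      # practice 0 = training, retained by both styles
practices = (rng.random((n_firms, n_practices)) < base_rates).astype(float)

X = StandardScaler().fit_transform(practices)
styles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Compare average adoption rates by cluster to characterize the two styles.
for k in (0, 1):
    print(f"style {k}: mean adoption = {practices[styles == k].mean():.2f}, "
          f"training rate = {practices[styles == k, 0].mean():.2f}")
```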
  19. By: Hanyu Li; Wenhan Huang; Zhijian Duan; David Henry Mguni; Kun Shao; Jun Wang; Xiaotie Deng
    Abstract: Nash equilibrium is one of the most influential solution concepts in game theory. With the development of computer science and artificial intelligence, there is increasing demand for Nash equilibrium computation, especially for Internet economics and multi-agent learning. This paper reviews various algorithms for computing the Nash equilibrium and its approximate solutions in finite normal-form games from both theoretical and empirical perspectives. For the theoretical part, we classify the algorithms in the literature and present basic ideas on algorithm design and analysis. For the empirical part, we present a comprehensive comparison of the algorithms in the literature over different kinds of games. Based on these results, we provide practical suggestions on the implementation and use of these algorithms. Finally, we present a series of open problems from both theoretical and practical considerations.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.11063&r=cmp
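    As a small illustration of the simplest algorithm family covered by the survey above, the sketch below enumerates pure-strategy Nash equilibria of a bimatrix game by checking mutual best responses; the example game is a prisoner's dilemma chosen only for illustration.
```python
# Sketch: brute-force enumeration of pure-strategy Nash equilibria in a bimatrix game,
# the simplest of the algorithm families the survey covers; illustrative only.
import numpy as np

# Row player's and column player's payoff matrices (here: a prisoner's dilemma).
A = np.array([[3, 0],
              [4, 1]])
B = np.array([[3, 4],
              [0, 1]])

def pure_nash(A, B):
    """Return all (row, col) pure-strategy profiles where both actions are best responses."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()     # row cannot gain by deviating
            col_best = B[i, j] >= B[i, :].max()     # column cannot gain by deviating
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))   # [(1, 1)]: mutual defection is the unique pure equilibrium
```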
  20. By: Engberg, Erik (The Ratio Institute); Koch, Michael (-); Lodefalk, Magnus (The Ratio Institute); Schroeder, Sarah (Aarhus University)
    Abstract: This paper documents novel facts on within-occupation task and skill changes over the past two decades in Germany. In a second step, it reveals a distinct relationship between occupational work content and exposure to artificial intelligence (AI) and automation (robots). Workers in occupations with high AI exposure perform different activities and face different skill requirements, compared to workers in occupations exposed to robots. In a third step, the study uses individual labour market biographies to investigate the impact on wages between 2010 and 2017. Results indicate a wage growth premium in occupations more exposed to AI, contrasting with a wage growth discount in occupations exposed to robots. Finally, the study further explores the dynamic influence of AI exposure on individual wages over time, uncovering positive associations with wages, with nuanced variations across occupational groups.
    Keywords: Artificial intelligence technologies; Task content; Skills; Wages
    JEL: J23 J24 J44 N34 O33
    Date: 2023–12–27
    URL: http://d.repec.org/n?u=RePEc:hhs:ratioi:0371&r=cmp
  21. By: Akalan, Rodi; Brink, Siegrun; Icks, Annette; Wolter, Hans-Jürgen
    Abstract: German SMEs (the Mittelstand) currently face a wide range of crises. Detecting relevant challenges and opportunities early enables SMEs and economic policymakers to prepare for them and to set appropriate framework conditions. At present, early detection mostly relies on business-cycle indicators, which typically draw conclusions about future economic development from concrete numerical values; economically relevant text data are not evaluated systematically. The innovative early-detection concept developed in this study addresses this gap: supported by AI, it efficiently analyses text data from the media and the economy and extracts topics. Using practical tests, we show that the concept works reliably and can identify relevant topics at an early stage.
    Abstract: Currently, SMEs are confronted with several crises. The early detection of relevant threats and opportunities enables SMEs and economic policymakers to be prepared and set the appropriate framework conditions. Early detection is often based on economic indicators that develop forecasts with the help of structured data (usually numbers). Moreover, information from text data is not analysed systematically in current economic indicators. Hence, the present study develops an innovative early detection concept that efficiently analyses text data from the media and economy. The AI-based concept can analyse large amounts of text data and extract relevant topics. With the help of multiple tests with real data, we show that the concept delivers reliable results and can recognise relevant topics at an early stage.
    Keywords: Früherkennung, Themen, Topic Modeling, Maschinelles Lernen, early detection, topics, topic modeling, machine learning
    JEL: M20 O10
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:ifmmat:280979&r=cmp
  22. By: Richiardi, Matteo; Bronka, Patryk; van de Ven, Justin; Kopasker, Daniel; Vittal Katikireddi, Srinivasa
    Abstract: SimPaths is a family of models for individual and household life course events, all sharing common components. The framework is designed to project life histories through time, building up a detailed picture of career paths, family (inter)relations, health, and financial circumstances. It builds upon standardised assumptions and data sources, which facilitates adaptation to alternative countries – versions currently exist for the UK and Italy, and are under development for Hungary, Poland and Greece. Careful attention is paid to model validation, and sensitivity of projections to key assumptions. The modular nature of the SimPaths framework is designed to facilitate analysis of alternative assumptions concerning the tax and benefit system, sensitivity to parameter estimates and alternative approaches for projecting labour/leisure and consumption/savings decisions. Projections for a workhorse model parameterised to the UK context are reported, which closely reflect observed data throughout a validation window between the Financial crisis (2011) and the Covid-19 pandemic (2019).
    Date: 2023–04–18
    URL: http://d.repec.org/n?u=RePEc:ese:cempwp:cempa6-23&r=cmp
  23. By: Engberg, Erik (Örebro University School of Business); Koch, Michael (Aarhus University); Lodefalk, Magnus (Örebro University School of Business); Schroeder, Sarah (Aarhus University)
    Abstract: This paper documents novel facts on within-occupation task and skill changes over the past two decades in Germany. In a second step, it reveals a distinct relationship between occupational work content and exposure to artificial intelligence (AI) and automation (robots). Workers in occupations with high AI exposure perform different activities and face different skill requirements, compared to workers in occupations exposed to robots. In a third step, the study uses individual labour market biographies to investigate the impact on wages between 2010 and 2017. Results indicate a wage growth premium in occupations more exposed to AI, contrasting with a wage growth discount in occupations exposed to robots. Finally, the study further explores the dynamic influence of AI exposure on individual wages over time, uncovering positive associations with wages, with nuanced variations across occupational groups.
    Keywords: Artificial intelligence technologies; Task content; Skills; Wages
    JEL: J23 J24 J44 N34 O33
    Date: 2023–12–27
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2023_012&r=cmp
  24. By: Svetlana Boyarchenko; Sergei Levendorskii
    Abstract: We suggest a general framework for simulation of the triplet $(X_T, \bar X_T, \tau_T)$ (Lévy process, its extremum, and hitting time of the extremum) and, separately, of $X_T$, $\bar X_T$ and the pairs $(X_T, \bar X_T)$, $(\bar X_T, \tau_T)$, $(\bar X_T - X_T, \tau_T)$, via characteristic functions and conditional characteristic functions. The conformal deformations technique allows one to evaluate probability distributions, joint probability distributions and conditional probability distributions accurately and fast. For simulations in the far tails of the distribution, we precalculate and store the values of the (conditional) characteristic functions on multi-grids on appropriate surfaces in $\mathbb{C}^n$, and use these values to calculate the quantiles in the tails. For simulation in the central part of a distribution, we precalculate the values of the cumulative distribution at points of a non-uniform (multi-)grid, and use interpolation to calculate quantiles.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.03929&r=cmp
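    A heavily simplified, one-dimensional illustration of the general recipe above: tabulate the distribution of X_T from its characteristic function by Gil-Pelaez inversion on a grid, then sample by mapping uniforms through the interpolated quantile function. The toy jump-diffusion characteristic exponent, grids, and truncation are assumptions; the paper's conformal deformations and joint/conditional simulation are not reproduced.
```python
# Sketch: simulate X_T from its characteristic function via numerical CDF inversion
# (Gil-Pelaez) and quantile interpolation; a one-dimensional simplification only.
import numpy as np

T = 1.0
sigma, lam, mu_j, sig_j = 0.2, 1.0, -0.1, 0.15       # toy jump-diffusion parameters

def char_fn(u):
    """Characteristic function of X_T for a driftless jump-diffusion (illustrative)."""
    diffusion = -0.5 * sigma**2 * u**2
    jumps = lam * (np.exp(1j * u * mu_j - 0.5 * sig_j**2 * u**2) - 1.0)
    return np.exp(T * (diffusion + jumps))

# Gil-Pelaez: F(x) = 1/2 - (1/pi) * int_0^inf Im(exp(-i*u*x) * phi(u)) / u du,
# truncated and discretized on a uniform grid.
u = np.linspace(1e-6, 60.0, 3000)
du = u[1] - u[0]
x_grid = np.linspace(-2.0, 2.0, 401)
integrand = np.imag(np.exp(-1j * np.outer(x_grid, u)) * char_fn(u)) / u
cdf = np.clip(0.5 - integrand.sum(axis=1) * du / np.pi, 0.0, 1.0)

# Inverse-transform sampling: map uniform draws through the interpolated quantile function.
uniforms = np.random.default_rng(5).random(100_000)
samples = np.interp(uniforms, cdf, x_grid)
print("sample mean and std:", samples.mean().round(4), samples.std().round(4))
```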
  25. By: Paolo Gelain; Pierlauro Lopez
    Abstract: This paper outlines the dynamic stochastic general equilibrium (DSGE) model developed at the Federal Reserve Bank of Cleveland as part of the suite of models used for forecasting and policy analysis by Cleveland Fed researchers, which we have nicknamed CLEMENTINE (CLeveland Equilibrium ModEl iNcluding Trend INformation and the Effective lower bound). This document adopts a practitioner's guide approach, detailing the construction of the model and offering practical guidance on its use as a policy tool designed to support decision-making through forecasting exercises and policy counterfactuals.
    Keywords: DSGE model; labor market frictions; zero lower bound; trends; expectations
    JEL: E32 E23 E31 E52 D58
    Date: 2023–12–27
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:97525&r=cmp
  26. By: Rafael Andersson Lipcsey
    Abstract: As AI adoption accelerates, research on its economic impacts becomes a salient input for stakeholders of AI policy. Such research is, however, still in its infancy and in need of review. This paper aims to accomplish just that and is structured around two main themes: firstly, the path towards transformative AI, and secondly, the wealth created by it. It is found that the sectors most embedded in global value chains will drive economic impacts, hence special attention is paid to the international trade perspective. When it comes to the path towards transformative AI, research is heterogeneous in its predictions, with some predicting rapid, unhindered adoption, and others taking a more conservative view based on potential bottlenecks and comparisons to past disruptive technologies. As for wealth creation, while some agreement exists around AI's growth-boosting abilities, predictions on timelines are lacking. Consensus exists, however, around the dispersion of AI-induced wealth, which is heavily biased towards developed countries due to phenomena such as anchoring and the reduced bargaining power of developing countries. Finally, a shortcoming of economic growth models is identified: they fail to consider AI risk. Based on the review, a calculated and slower adoption of AI technologies is recommended.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.06679&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.