nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒02‒27
twenty-six papers chosen by

  1. ‘Seeing’ the Future: Improving Macroeconomic Forecasts with Spatial Data Using Recurrent Convolutional Neural Networks By Jonathan Leslie
  2. Machine Learning with High-Cardinality Categorical Features in Actuarial Applications By Benjamin Avanzi; Greg Taylor; Melantha Wang; Bernard Wong
  3. Bayesian Artificial Neural Networks for Frontier Efficiency Analysis By Mike Tsionas; Christopher F. Parmeter; Valentin Zelenyuk
  4. Efficient Pricing and Hedging of High Dimensional American Options Using Recurrent Networks By Andrew Na; Justin Wan
  5. Forecasting Value-at-Risk using deep neural network quantile regression By Chronopoulos, Ilias; Raftapostolos, Aristeidis; Kapetanios, George
  6. Validation of machine learning based scenario generators By Solveig Flaig; Gero Junike
  7. A Deep Neural Network Algorithm for Linear-Quadratic Portfolio Optimization with MGARCH and Small Transaction Costs By Andrew Papanicolaou; Hao Fu; Prashanth Krishnamurthy; Farshad Khorrami
  8. Long-Term Modeling of Financial Machine Learning for Active Portfolio Management By Kazuki Amagai; Tomoya Suzuki
  9. Agent-based Integrated Assessment Models: Alternative Foundations to the Environment-Energy-Economics Nexus By Karl Naumann-Woleske
  10. Inovasi Desain Produk Tas Berbasis AI [AI-Based Bag Product Design Innovation] By wardani, dita
  11. The SearchEngine: A holistic approach to matching By Doherr, Thorsten
  12. Quantum Boltzmann Machines: Applications in Quantitative Finance By Cameron Perot
  13. The impact of surplus sharing on the outcomes of specific investments under negotiated transfer pricing: An agent-based simulation with fuzzy Q-learning agents By Christian Mitsch
  14. Select and Trade: Towards Unified Pair Trading with Hierarchical Reinforcement Learning By Weiguang Han; Boyi Zhang; Qianqian Xie; Min Peng; Yanzhao Lai; Jimin Huang
  15. Structural Reforms and Economic Growth: A Machine Learning Approach By Mr. Anil Ari; Gabor Pula; Liyang Sun
  16. STEEL: Singularity-aware Reinforcement Learning By Xiaohong Chen; Zhengling Qi; Runzhe Wan
  17. Assessing the impacts of the COVID-19 wage supplement scheme: A microsimulation study By Glenn Abela
  18. Prediction of Customer Churn in Banking Industry By Sina Esmaeilpour Charandabi
  19. A Corporate Income Tax Microsimulation Model for Italy By Chiara Bellucci; Silvia Carta; Sara De Tollis; Federica Di Giacomo; Marco Manzo; Daniela Bucci; Donato Curto; Fabrizio De Grandis; Francesca Sica
  20. Directions and Implications of the United States' Artificial Intelligence Strategy By Kyung, Heekwon; Lee, Jun
  21. Governing knowledge and technology: European approach to artificial intelligence By Moreira, Hugo
  22. Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus? By John J. Horton
  23. Exploring the cost-effectiveness of energy efficiency implementation measures in the residential sector By Fateh Belaïd; Zeinab Ranjbar; Camille Massié
  24. On the marginal utility of fiat money: insurmountable circularity or not? By Reiss, Michael
  25. An Experimental Test of Algorithmic Dismissals By Brice Corgnet
  26. The Redistributive Impact of Consumption Taxation in the EU: Lessons from the post-financial crisis decade By MAIER ESSINGER Sofia; RICCI Mattia

  1. By: Jonathan Leslie (Indiana University, Department of Economics)
    Abstract: I evaluate whether incorporating sub-national trends improves macroeconomic forecasting accuracy in a deep machine learning framework. Specifically, I adopt a computer vision setting by transforming U.S. economic data into a ‘video’ series of geographic ‘images’ and utilizing a recurrent convolutional neural network to extract spatio-temporal features. This spatial forecasting model outperforms equivalent methods based on country-level data and achieves a 0.14 percentage point average error when forecasting out-of-sample monthly percentage changes in real GDP over a twelve-month horizon. The estimated model focuses on Middle America in particular when making its predictions, providing insight into the benefit of employing spatial data.
    Keywords: Macroeconomic Forecasting, Machine Learning, Deep Learning, Computer Vision, Economic Geography
    Date: 2023–02
  2. By: Benjamin Avanzi; Greg Taylor; Melantha Wang; Bernard Wong
    Abstract: High-cardinality categorical features are pervasive in actuarial data (e.g. occupation in commercial property insurance). Standard categorical encoding methods like one-hot encoding are inadequate in these settings. In this work, we present a novel _Generalised Linear Mixed Model Neural Network_ ("GLMMNet") approach to the modelling of high-cardinality categorical features. The GLMMNet integrates a generalised linear mixed model in a deep learning framework, offering the predictive power of neural networks and the transparency of random effects estimates, the latter of which cannot be obtained from the entity embedding models. Further, its flexibility to deal with any distribution in the exponential dispersion (ED) family makes it widely applicable to many actuarial contexts and beyond. We illustrate and compare the GLMMNet against existing approaches in a range of simulation experiments as well as in a real-life insurance case study. Notably, we find that the GLMMNet often outperforms or at least performs comparably with an entity embedded neural network, while providing the additional benefit of transparency, which is particularly valuable in practical applications. Importantly, while our model was motivated by actuarial applications, it can have wider applicability. The GLMMNet would suit any applications that involve high-cardinality categorical variables and where the response cannot be sufficiently modelled by a Gaussian distribution.
    Date: 2023–01
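A lightweight baseline that motivates the shrinkage idea behind random effects is target (mean) encoding, which maps each category level to a weighted average of its level mean and the global mean. The sketch below is illustrative only, with made-up occupation data; it is not the paper's GLMMNet, whose random-effects estimates come from an integrated GLMM rather than this heuristic.

```python
from collections import defaultdict

def target_encode(categories, targets, smoothing=10.0):
    """Shrinkage (target) encoding: each category level is mapped to a
    weighted average of its level mean and the global mean, with the
    weight on the level mean growing in the level's sample size --
    analogous in spirit to random-effects shrinkage in a GLMM."""
    global_mean = sum(targets) / len(targets)
    sums, counts = defaultdict(float), defaultdict(int)
    for c, y in zip(categories, targets):
        sums[c] += y
        counts[c] += 1
    encoding = {}
    for c in counts:
        n = counts[c]
        level_mean = sums[c] / n
        encoding[c] = (n * level_mean + smoothing * global_mean) / (n + smoothing)
    return encoding, global_mean  # global_mean is the fallback for unseen levels

# Hypothetical occupation codes and claim indicators
occ = ["plumber", "plumber", "actuary", "actuary", "actuary", "baker"]
claim = [1.0, 0.0, 0.0, 0.0, 1.0, 1.0]
enc, fallback = target_encode(occ, claim, smoothing=2.0)
```

Rare levels ("baker", one observation) are pulled strongly toward the global mean, while better-populated levels keep more of their own mean.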
  3. By: Mike Tsionas (Montpellier Business School Université de Montpellier, Montpellier Research in Management and Lancaster University Management School); Christopher F. Parmeter (Miami Herbert Business School, University of Miami, Miami FL); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia)
    Abstract: Artificial neural networks have offered their share of econometric insights, given their power to model complex relationships. One area where they have not been readily deployed is the estimation of frontiers. The literature on frontier estimation has seen its share of research comparing and contrasting data envelopment analysis (DEA) and stochastic frontier analysis (SFA), the two workhorse estimators. These studies rely on both Monte Carlo experiments and actual data sets to examine a range of performance issues which can be used to elucidate insights on the benefits or weaknesses of one method over the other. As can be imagined, neither method is universally better than the other. The present paper proposes an alternative approach that is quite flexible in terms of functional form and distributional assumptions and that amalgamates the benefits of both DEA and SFA. Specifically, we bridge these two popular approaches via Bayesian artificial neural networks while accounting for possible endogeneity of inputs. We examine the performance of this new machine learning approach using Monte Carlo experiments and find it to be very good: comparable to, and often better than, the current standards in the literature. To illustrate the new techniques, we provide an application of this approach to a data set of large US banks.
    Keywords: Machine Learning; Simulation; Flexible Functional Forms; Bayesian Artificial Neural Networks; Banking; Efficiency Analysis.
    Date: 2023–01
  4. By: Andrew Na; Justin Wan
    Abstract: We propose a deep recurrent neural network (RNN) framework for computing prices and deltas of American options in high dimensions. Our proposed framework uses two deep RNNs, where one network learns the price and the other learns the delta of the option for each timestep. Our proposed framework yields prices and deltas for the entire spacetime, not only at a given point (e.g. t = 0). The computational cost of the proposed approach is linear in time, which improves on the quadratic time seen for feedforward networks that price American options. The memory cost of our method is constant, an improvement over the linear memory costs seen in feedforward networks. Our numerical simulations demonstrate these contributions, and show that the proposed deep RNN framework is computationally more efficient than traditional feedforward neural network frameworks in time and memory.
    Date: 2023–01
  5. By: Chronopoulos, Ilias; Raftapostolos, Aristeidis; Kapetanios, George
    Abstract: In this paper we use a deep quantile estimator, based on neural networks and their universal approximation property, to examine a non-linear association between the conditional quantiles of a dependent variable and predictors. This methodology is versatile and allows both the use of different penalty functions and high-dimensional covariates. We present a Monte Carlo exercise where we examine the finite sample properties of the deep quantile estimator and show that it delivers good finite sample performance. We use the deep quantile estimator to forecast Value-at-Risk and find significant gains over linear quantile regression alternatives and other models, which are supported by various testing schemes. Further, we also consider an alternative architecture that allows the use of mixed frequency data in neural networks. This paper also contributes to the interpretability of neural network output by comparing the commonly used SHAP values with an alternative method based on partial derivatives.
    Keywords: Quantile regression, machine learning, neural networks, value-at-risk, forecasting
    Date: 2023–02–07
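The loss minimized in (deep) quantile regression is the pinball loss, whose asymmetric weighting is what turns a fitted function into a conditional quantile, and hence a VaR, estimate. A minimal pure-Python version with illustrative return data; the paper's estimator replaces the constant prediction used here with a neural network.

```python
def pinball_loss(y_true, y_pred, tau):
    """Average quantile (pinball) loss at level tau: under-predictions
    (y above the predicted quantile) are weighted by tau, over-predictions
    by (1 - tau). Minimizing this over a flexible function class yields a
    conditional tau-quantile estimate."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        total += tau * diff if diff >= 0 else (tau - 1.0) * diff
    return total / len(y_true)

# Illustrative daily returns and a constant 5% quantile (VaR) prediction
returns = [-0.02, 0.01, -0.05]
var_pred = [-0.04, -0.04, -0.04]
loss = pinball_loss(returns, var_pred, tau=0.05)
```

At tau = 0.05 an exceedance below the predicted quantile is penalized 19 times more heavily than the same gap above it, which is why the minimizer sits deep in the left tail.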
  6. By: Solveig Flaig; Gero Junike
    Abstract: Machine learning methods are becoming increasingly important in the development of internal models using scenario generation. As internal models under Solvency 2 have to be validated, an important question is in which aspects the validation of these data-driven models differs from that of a classical theory-based model. Using the specific example of market risk, we discuss the necessity of two additional validation tasks: one to check the dependencies between the risk factors used, and one to detect the unwanted memorizing effect. The first is necessary because in this new method the dependencies are not derived from a financial-mathematical theory. The latter arises when the machine learning model merely repeats empirical data instead of generating new scenarios. These measures are then applied to a machine learning-based economic scenario generator. We show that they lead to reasonable results in this context and can be used both for validation and for model optimization.
    Date: 2023–01
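One way to operationalize a memorizing-effect check is a nearest-neighbour distance test: if many generated scenarios (nearly) coincide with training scenarios, the generator is replaying data rather than generating. This is a toy sketch of that general idea, not the authors' specific measure.

```python
import math

def nn_distances(generated, training):
    """For each generated scenario (a tuple of risk-factor values), the
    Euclidean distance to its nearest training scenario. A cluster of
    (near-)zero distances signals memorization of the training data."""
    return [min(math.dist(g, t) for t in training) for g in generated]

def memorization_rate(generated, training, tol=1e-6):
    """Share of generated scenarios that coincide (up to tol) with some
    training scenario."""
    dists = nn_distances(generated, training)
    return sum(1 for d in dists if d < tol) / len(dists)

# Illustrative 2-dimensional scenarios: one exact copy, one new point
training = [(0.0, 0.0), (1.0, 1.0)]
generated = [(0.0, 0.0), (2.0, 2.0)]
rate = memorization_rate(generated, training)
```

In practice the distribution of nearest-neighbour distances for generated scenarios would be compared against the same distribution computed within the training set itself.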
  7. By: Andrew Papanicolaou; Hao Fu; Prashanth Krishnamurthy; Farshad Khorrami
    Abstract: We analyze a fixed-point algorithm for reinforcement learning (RL) of optimal portfolio mean-variance preferences in the setting of multivariate generalized autoregressive conditional-heteroskedasticity (MGARCH) with a small penalty on trading. A numerical solution is obtained using a neural network (NN) architecture within a recursive RL loop. A fixed-point theorem proves that NN approximation error has a big-oh bound that we can reduce by increasing the number of NN parameters. The functional form of the trading penalty has a parameter $\epsilon>0$ that controls the magnitude of transaction costs. When $\epsilon$ is small, we can implement an NN algorithm based on the expansion of the solution in powers of $\epsilon$. This expansion has a base term equal to a myopic solution with an explicit form, and a first-order correction term that we compute in the RL loop. Our expansion-based algorithm is stable, allows for fast computation, and outputs a solution that shows positive testing performance.
    Date: 2023–01
  8. By: Kazuki Amagai; Tomoya Suzuki
    Abstract: In the practical business of asset management by investment trusts and the like, the general practice is to manage over the medium to long term, owing to the burden of operations and the increase in transaction costs with a higher turnover ratio. However, when machine learning is used to construct a management model, the amount of learning data decreases as the time scale lengthens, which lowers learning precision. Accordingly, in this study, data augmentation was applied by combining not only the time scales of the target tasks but also learning data from shorter time scales, demonstrating that degradation of generalization performance can be inhibited even when the target tasks of machine learning have long-term time scales. Moreover, to illustrate how this data augmentation can be applied, we conducted portfolio management in which a multifactor model was learned by an autoencoder and mispricing relative to the estimated theoretical values was exploited. The effectiveness could be confirmed not only in the stock market but also in the FX market, and a general-purpose management model could be constructed across various financial markets.
    Date: 2023–01
  9. By: Karl Naumann-Woleske
    Abstract: Climate change is a major global challenge today. To assess how policies may lead to mitigation, economists have developed Integrated Assessment Models; however, most of these equilibrium-based models have faced heavy critique. Agent-based models have recently come to the fore as an alternative macroeconomic modeling framework. In this paper, four Agent-based Integrated Assessment Models linking environment, energy and economy are reviewed. These models have several advantages over existing models in terms of their heterogeneous agents, the allocation of damages amongst individual agents, the representation of the financial system, and policy mixes. While Agent-based Integrated Assessment Models have made strong advances, several avenues remain along which research should continue, including the incorporation of natural resources and spatial dynamics, closer analysis of distributional effects and feedbacks, and multi-sectoral firm network structures.
    Date: 2023–01
  10. By: wardani, dita
    Abstract: Artificial Intelligence (AI) is a type of process in which computers and smart devices can perform intelligent tasks without human intervention by using dedicated technology. In other words, Artificial Intelligence (AI) is a branch of science and information technology in which machines exhibit intelligence similar to that of humans. The main goal of Artificial Intelligence (AI) is to solve problems as effectively as a human would. Certified Artificial Intelligence professionals find opportunities in areas such as developing expert systems, speech recognition engines, robotics, computer vision, game playing, language detection engines, and more. Today's AI systems excel at overcoming the limits of human information processing in generating ideas and opportunities. Current AI systems rely heavily on neural networks capable of processing large amounts of data. AI makes people's lives more efficient, supporting many programs and services that help them carry out daily activities, and it can dramatically improve workplace efficiency. Artificial Intelligence can also address human problems that require creativity: it supports the search for ideas, or brainstorming, in product design problems, and can provide depictions of the ideas supplied by the user.
    Date: 2023–01–24
  11. By: Doherr, Thorsten
    Abstract: The SearchEngine is an open source project providing an integrated framework for diverse matching activities, especially the linkage of large-scale firm data by fuzzy criteria like company names and addresses. At its core, it utilizes an efficient candidate retrieval mechanism implementing a word- or token-driven heuristic. Every record in one table becomes a search term used to retrieve similar candidate records in the base table according to a search strategy, replacing the blocking strategies of conventional matching efforts. Because similarity is inherently established by the candidate selection, it only remains to filter false positives, using the metadata export file derived from the matching heuristic to implement a machine learning approach. This paper discusses the general foundation of the heuristic and the algorithm, while two detailed walkthroughs of company linkages provide practical examples.
    Keywords: data linkage, firm matching, entity resolution, machine learning
    JEL: C81 C88
    Date: 2023
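The candidate retrieval idea can be illustrated with a token-level inverted index: each search term retrieves only the base records it shares tokens with, instead of being compared against every record. This is a minimal sketch of the general token-driven heuristic with invented firm names, not the SearchEngine's actual implementation.

```python
from collections import defaultdict

def build_index(records):
    """Inverted index: lowercased token -> set of record ids in the base table."""
    index = defaultdict(set)
    for rid, name in records.items():
        for tok in name.lower().split():
            index[tok].add(rid)
    return index

def candidates(query, index, min_shared=1):
    """Retrieve base records sharing at least min_shared tokens with the
    search term. This replaces exhaustive pairwise comparison; remaining
    false positives would be filtered in a later (learned) step."""
    counts = defaultdict(int)
    for tok in set(query.lower().split()):
        for rid in index.get(tok, ()):
            counts[rid] += 1
    return {rid for rid, c in counts.items() if c >= min_shared}

# Illustrative base table of firm names
base = {1: "Acme Steel GmbH", 2: "Acme Software Ltd", 3: "Beta Foods"}
idx = build_index(base)
```

Raising `min_shared` tightens the candidate set, trading recall for fewer false positives, which is the role a search strategy plays in place of conventional blocking.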
  12. By: Cameron Perot
    Abstract: In this thesis we explore using the D-Wave Advantage 4.1 quantum annealer to sample from quantum Boltzmann distributions and train quantum Boltzmann machines (QBMs). We focus on the real-world problem of using QBMs as generative models to produce synthetic foreign exchange market data and analyze how the results stack up against classical models based on restricted Boltzmann machines (RBMs). Additionally, we study a small 12-qubit problem which we use to compare samples obtained from the Advantage 4.1 with theory, and in the process gain vital insights into how well the Advantage 4.1 can sample quantum Boltzmann random variables and be used to train QBMs. Through this, we are able to show that the Advantage 4.1 can sample classical Boltzmann random variables to some extent, but is limited in its ability to sample from quantum Boltzmann distributions. Our findings indicate that QBMs trained using the Advantage 4.1 are much noisier than those trained using simulations and struggle to perform at the same level as classical RBMs. However, there is the potential for QBMs to outperform classical RBMs if future generation annealers can generate samples closer to the desired theoretical distributions.
    Date: 2023–01
  13. By: Christian Mitsch
    Abstract: This paper focuses on specific investments under negotiated transfer pricing. Transfer pricing studies primarily seek conditions that maximize the firm's overall profit, especially in cases of bilateral trading problems with specific investments. However, the transfer pricing problem has been developed in a context where managers are fully individually rational utility maximizers. The underlying assumptions are rather heroic and, particularly with regard to how managers process information under uncertainty, do not match human decision-making behavior well. This paper therefore relaxes key assumptions and studies whether cognitively bounded agents achieve the same results as fully rational utility maximizers and, in particular, whether the recommendations on managerial-compensation arrangements and bargaining infrastructures still maximize headquarters' profit in such a setting. Based on an agent-based simulation with fuzzy Q-learning agents, it is shown that in the case of symmetric marginal cost parameters, myopic fuzzy Q-learning agents invest only as much as in the classic hold-up problem, while non-myopic fuzzy Q-learning agents invest optimally. However, in scenarios with non-symmetric marginal cost parameters, a deviation from the previously recommended surplus sharing rules can lead to higher investment decisions and, thus, to an increase in the firm's overall profit.
    Date: 2023–01
  14. By: Weiguang Han; Boyi Zhang; Qianqian Xie; Min Peng; Yanzhao Lai; Jimin Huang
    Abstract: Pair trading is one of the most effective statistical arbitrage strategies, seeking a market-neutral profit by hedging a pair of selected assets. Existing methods generally decompose the task into two separate steps: pair selection and trading. However, decoupling two closely related subtasks can block information propagation and limit overall performance. In pair selection, ignoring trading performance can lead to assets with irrelevant price movements being selected, while an agent trained only for trading can overfit to the selected assets without any historical information on other assets. To address this, we propose a paradigm for automatic pair trading as a unified task rather than a two-step pipeline. We design a hierarchical reinforcement learning framework to jointly learn and optimize the two subtasks: a high-level policy selects two assets from all possible combinations and a low-level policy then performs a series of trading actions. Experimental results on real-world stock data demonstrate the effectiveness of our method compared with both existing pair selection and trading methods.
    Date: 2023–01
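For context, the classical low-level trading rule that a learned policy would replace is a rolling z-score threshold on the spread of the selected pair. A toy stand-in with illustrative window and entry parameters, not the paper's learned policy:

```python
def zscore_signal(spread, window=5, entry=1.0):
    """Rolling z-score rule on a price spread: emit -1 (short the spread)
    when it is rich, +1 (long the spread) when it is cheap, else 0.
    The first window-1 periods emit 0 while the window fills."""
    signals = []
    for i in range(len(spread)):
        if i + 1 < window:
            signals.append(0)
            continue
        hist = spread[i + 1 - window : i + 1]
        mean = sum(hist) / window
        std = (sum((x - mean) ** 2 for x in hist) / window) ** 0.5
        if std == 0.0:
            signals.append(0)
        elif (spread[i] - mean) / std > entry:
            signals.append(-1)  # spread rich: short it
        elif (spread[i] - mean) / std < -entry:
            signals.append(1)   # spread cheap: long it
        else:
            signals.append(0)
    return signals
```

In the paper's framework both the choice of the pair and these per-period actions are learned jointly rather than fixed by a threshold rule.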
  15. By: Mr. Anil Ari; Gabor Pula; Liyang Sun
    Abstract: The qualitative and granular nature of most structural indicators and the variety of data sources pose difficulties for consistent cross-country assessments and empirical analysis. We overcome these issues by using a machine learning approach (the partial least squares method) to combine a broad set of cross-country structural indicators into a small number of synthetic scores which correspond to key structural areas and are suitable for consistent quantitative comparisons across countries and time. With this newly constructed dataset of synthetic structural scores in 126 countries between 2000 and 2019, we establish stylized facts about structural gaps and reforms, and analyze the impact of reforms targeting different structural areas on economic growth. Our findings suggest that structural reforms in the areas of product, labor and financial markets as well as the legal system have a significant impact on economic growth over a 5-year horizon, with a one standard deviation improvement in one of these reform areas raising cumulative 5-year growth by 2 to 6 percent. We also find synergies between different structural areas, in particular between product and labor market reforms.
    Keywords: Structural reforms; institutions; economic growth; PLS estimation; machine learning; labor market composite; business environment; labor markets; labor market reforms
    Date: 2022–09–16
  16. By: Xiaohong Chen; Zhengling Qi; Runzhe Wan
    Abstract: Batch reinforcement learning (RL) aims at finding an optimal policy in a dynamic environment in order to maximize the expected total rewards by leveraging pre-collected data. A fundamental challenge behind this task is the distributional mismatch between the batch data generating process and the distribution induced by target policies. Nearly all existing algorithms rely on the assumption that the distribution induced by target policies is absolutely continuous with respect to the data distribution, so that the batch data can be used to calibrate target policies via the change of measure. However, the absolute continuity assumption could be violated in practice, especially when the state-action space is large or continuous. In this paper, we propose a new batch RL algorithm without requiring absolute continuity, in the setting of an infinite-horizon Markov decision process with continuous states and actions. We call our algorithm STEEL: SingulariTy-awarE rEinforcement Learning. Our algorithm is motivated by a new error analysis of off-policy evaluation, in which we use maximum mean discrepancy, together with distributionally robust optimization, to characterize the error of off-policy evaluation caused by possible singularity and to enable the power of model extrapolation. By leveraging the idea of pessimism, and under some mild conditions, we derive a finite-sample regret guarantee for our proposed algorithm without imposing absolute continuity. Compared with existing algorithms, STEEL requires only a minimal data-coverage assumption and thus greatly enhances the applicability and robustness of batch RL. Extensive simulation studies and one real experiment on personalized pricing demonstrate the superior performance of our method when facing possible singularity in batch RL.
    Date: 2023–01
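The maximum mean discrepancy used in the error analysis compares two samples through a kernel. A minimal biased estimator with a Gaussian kernel (illustrative bandwidth, scalar data) shows the quantity being computed; the paper uses it inside a much richer off-policy-evaluation bound.

```python
import math

def gaussian_kernel(x, y, bandwidth=1.0):
    return math.exp(-((x - y) ** 2) / (2.0 * bandwidth ** 2))

def mmd_squared(xs, ys, bandwidth=1.0):
    """Biased estimate of squared maximum mean discrepancy between samples
    xs and ys: E k(x,x') + E k(y,y') - 2 E k(x,y). For a characteristic
    kernel (such as the Gaussian), the population MMD is zero iff the two
    distributions coincide, which is what makes it a usable mismatch
    measure without any absolute-continuity requirement."""
    def avg(a, b):
        return sum(gaussian_kernel(x, y, bandwidth)
                   for x in a for y in b) / (len(a) * len(b))
    return avg(xs, xs) + avg(ys, ys) - 2.0 * avg(xs, ys)
```

Unlike a density-ratio (change-of-measure) correction, this two-sample quantity remains well-defined even when one distribution puts mass where the other has none.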
  17. By: Glenn Abela (Central Bank of Malta)
    Abstract: Microsimulation models have been particularly useful when dealing with the economic welfare impact of COVID-19, especially since such models offer a way to obtain timely and policy-relevant information in the absence of detailed household-level survey data. This study uses EUROMOD, a static tax-benefit microsimulation model calibrated for Malta, to evaluate the microeconomic impact of the wage supplement scheme introduced in Malta in response to the COVID-19 pandemic for the year 2020. Results suggest that the wage supplement scheme had a number of positive effects. First, it dampened average income losses, both across the income distribution and within economic sectors, irrespective of the extent to which these were impacted. In particular, the results show that the scheme’s impact across the income spectrum was progressive in the sense that it shielded the lowest earners relatively more. Poverty rates are invariably lower under the wage supplement scenario than under a scenario where the scheme is not enacted, whilst its impact on income inequality is ambiguous. Importantly, the size of the shock suffered by the worst-hit households declines markedly in the presence of the scheme. Future work can benefit from the availability of household survey data to conduct a more thorough assessment of the impacts this study attempts to measure, and in so doing, serve as a validation tool against which simulation exercises such as this can be compared.
    Date: 2022
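The mechanics of a static microsimulation of this kind can be sketched in a few lines: apply a counterfactual shock to each household, add the scheme's transfer where eligible, and compare average losses under the two scenarios. All figures and the sector-coverage rule below are invented for illustration; EUROMOD encodes Malta's actual tax-benefit rules.

```python
def simulate_incomes(households, supplement=800.0,
                     covered_sectors=frozenset({"tourism", "retail"})):
    """Toy static microsimulation: apply a pandemic income shock to each
    household, then add a flat monthly wage supplement for workers in
    covered sectors. Parameters are illustrative, not the actual scheme."""
    results = []
    for h in households:
        shocked = h["income"] * (1.0 - h["shock"])
        with_scheme = shocked + (supplement if h["sector"] in covered_sectors else 0.0)
        results.append({"baseline": h["income"],
                        "no_scheme": shocked,
                        "scheme": with_scheme})
    return results

def mean_loss(results, scenario):
    """Average income loss relative to the pre-pandemic baseline."""
    return sum(r["baseline"] - r[scenario] for r in results) / len(results)

# Two hypothetical households: one hit hard in a covered sector, one unaffected
households = [{"income": 2000.0, "shock": 0.5, "sector": "tourism"},
              {"income": 3000.0, "shock": 0.0, "sector": "finance"}]
res = simulate_incomes(households)
```

Comparing `mean_loss(res, "scheme")` with `mean_loss(res, "no_scheme")` is the loss-dampening comparison the study performs, there with full survey-based household data and the actual policy rules.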
  18. By: Sina Esmaeilpour Charandabi
    Abstract: With growing competition in the banking industry, banks must follow customer retention strategies while trying to increase their market share by acquiring new customers. This study compares the performance of six supervised classification techniques to suggest an efficient model for predicting customer churn in the banking industry, given 10 demographic and personal attributes from 10,000 customers of European banks. The effects of feature selection, class imbalance, and outliers are discussed for ANN and random forest as the two competing models. Unlike random forest, the ANN does not reveal any serious concern regarding overfitting and is also robust to noise. Therefore, an ANN structure with five nodes in a single hidden layer is recognized as the best performing classifier.
    Date: 2023–01
  19. By: Chiara Bellucci (Ministry of Economy and Finance); Silvia Carta (Ministry of Economy and Finance); Sara De Tollis (Ministry of Economy and Finance); Federica Di Giacomo (Ministry of Economy and Finance); Marco Manzo (Ministry of Economy and Finance); Daniela Bucci (Soluzioni per il Sistema Economico S.p.A. - Sose); Donato Curto (Soluzioni per il Sistema Economico S.p.A. - Sose); Fabrizio De Grandis (Soluzioni per il Sistema Economico S.p.A. - Sose); Francesca Sica (Soluzioni per il Sistema Economico S.p.A. - Sose)
    Abstract: CITSIM-DF is the corporate tax microsimulation model developed by the Department of Finance in order to estimate the heterogeneous impact of changes in fiscal regulation on average effective tax rates, in terms of both financial and distributional effects. One of the main innovations of the model is the inclusion of forecasts of future economic trends in the simulations, by projecting forward the main fiscal and financial variables. Currently, projections are based, at the macro level, on national accounts and the official projections reflected in the documents of economy and finance. In the near future, the model will be further developed in a now-casting perspective, incorporating into the projections, at the micro level, the most recent administrative data available. The model also proposes a new methodology for disentangling investments and historical cost broken down by type of asset (buildings, machinery and equipment). CITSIM-DF is based on a unique dataset that integrates administrative data derived from tax returns and financial statements for corporations.
    Keywords: Tax treatment of losses; Allowance for corporate equity; Corporate taxation; Microsimulation
    JEL: C63 H25
    Date: 2023–02
  20. By: Kyung, Heekwon (Korea Institute for Industrial Economics and Trade); Lee, Jun (Korea Institute for Industrial Economics and Trade)
    Abstract: On March 2, the National Security Commission on Artificial Intelligence (NSCAI) released its final report, offering a glimpse of the U.S. view of advanced industries such as artificial intelligence (AI) and semiconductors, as well as the direction of its related strategies. The commission urged a full-scale mobilization of government capacity to beat China for global supremacy in AI and other related advanced industries. American strategies for AI and other advanced industries are key constants to be considered in devising Korea’s industrial policy, and a national blueprint is needed to respond to them. This analytical brief analyzes the main features of the NSCAI report and its implications for Korean industrial strategy and policy.
    Keywords: artificial intelligence; AI; technology; US; China; Korea; semiconductors; supply chain; advanced technology; manufacturing; competition; competitiveness; national security; conflict; hegemony; economic strategy; innovation; R&D
    JEL: F02 F13 F23 F50 F52 H12 H56 J21 J24 J38 L16 L53 L63 O32 O38
    Date: 2021–05–17
  21. By: Moreira, Hugo
    Abstract: This study conducts a threefold analysis of the EU proposal for an Artificial Intelligence Act (AIA). The first objective is a regulatory analysis of the proposal, focusing on the proposed structures for implementation, concepts, and key requirements for Artificial Intelligence (AI) producers. The second objective is a comparison with the General Data Protection Regulation (GDPR) and its complementarity in providing a robust response to the needs of operators and users of data-driven algorithmic technologies. The third objective is to examine the potential for harmonization of the EU internal market and competitiveness with non-EU markets. The analysis includes a regulatory comparison with the GDPR, which highlights the EU's digital economy policy based on national authorities, risk-based approaches and European bodies for harmonization. The analytical framework of the Brussels Effect is also applied to the AIA proposal, surfacing the intentions behind the regulation arising from both internal and external pressures. The study concludes that the AIA proposed by the European Commission has the potential to significantly affect the development and use of AI systems, particularly in the EU and for companies operating internationally. However, it also poses challenges in terms of implementation and enforcement, which could hamper growth.
    Date: 2023–01–25
  22. By: John J. Horton
    Abstract: Newly-developed large language models (LLM) -- because of how they are trained and designed -- are implicit computational models of humans -- a homo silicus. These models can be used the same way economists use homo economicus: they can be given endowments, information, preferences, and so on and then their behavior can be explored in scenarios via simulation. I demonstrate this approach using OpenAI's GPT3 with experiments derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986) and Samuelson and Zeckhauser (1988). The findings are qualitatively similar to the original results, but it is also trivially easy to try variations that offer fresh insights. Departing from the traditional laboratory paradigm, I also create a hiring scenario where an employer faces applicants that differ in experience and wage ask and then analyze how a minimum wage affects realized wages and the extent of labor-labor substitution.
    Date: 2023–01
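    The simulation loop Horton describes can be sketched in a few lines: give the simulated agent a persona and an endowment in the prompt, then parse a structured choice from the model's free-text reply. The sketch below is illustrative only; `build_prompt` and `parse_choice` are hypothetical helpers, not the paper's code, and a real run would send the prompt to an LLM API.

```python
# Illustrative sketch of the "homo silicus" setup: frame a
# Charness-Rabin-style binary allocation choice for an LLM agent and
# parse its answer. Helper names are hypothetical, not from the paper;
# in practice the prompt would be sent to an LLM API.

def build_prompt(persona: str, left: tuple, right: tuple) -> str:
    """Describe a binary dictator choice between two (self, other) payoffs."""
    return (
        f"{persona}\n"
        "Choose one allocation:\n"
        f"Left: You get {left[0]}, the other person gets {left[1]}.\n"
        f"Right: You get {right[0]}, the other person gets {right[1]}.\n"
        "Answer with 'Left' or 'Right' only."
    )

def parse_choice(response: str) -> str:
    """Extract the first 'Left'/'Right' token from a free-text reply."""
    for token in response.split():
        cleaned = token.strip(".,;:!?'\"").capitalize()
        if cleaned in ("Left", "Right"):
            return cleaned
    raise ValueError(f"no choice found in {response!r}")

prompt = build_prompt(
    "You care only about your own payoff.", left=(400, 400), right=(750, 375)
)
```

    Varying the persona line (say, toward inequity aversion) is how the treatment variations the abstract mentions would map onto such a harness.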
  23. By: Fateh Belaïd (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique); Zeinab Ranjbar; Camille Massié (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This research investigates the cost-effectiveness of energy performance measures in French residential buildings. We develop an empirical approach combining multivariate statistical analysis and cost-benefit analysis. The strength of this research lies in the design of a large cross-sectional database, collected in 2013, that includes rich technical information on about 1,400 dwellings representative of the French residential sector, as well as individual recommendations for the energy renovations to be implemented, their investment costs, and their energy-savings potential. We provide valuable information on the cost-effectiveness of energy renovation measures for the entire housing stock. Results show that low-temperature and condensing boilers, as well as floor insulation, are the most cost-effective energy efficiency measures, which may be inconsistent with current subsidy policies. We demonstrate that the cost-effectiveness of energy renovation measures depends strongly on a dwelling's initial characteristics and on the values of the inputs used in the economic indicators, such as the amount of energy saved, the energy price, and the discount rate. Moreover, we provide a classification of French dwellings that may help policymakers better identify their targets. Finally, we show that renovating the entire French residential dwelling stock can deliver substantial energy and CO2 reductions but requires significant financial capacity.
    Keywords: Energy efficiency, Cost-benefit analysis, Energy demand, Multiple correspondence analysis, Monte Carlo simulation, Energy policy
    Date: 2021–03
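    The cost-effectiveness criterion the abstract describes, and its sensitivity to the energy price and discount rate, reduces to a standard net-present-value comparison. A minimal sketch, with purely illustrative figures:

```python
# Minimal NPV sketch of a renovation measure's cost-effectiveness:
# discounted energy-cost savings over the measure's lifetime minus the
# upfront investment. All figures are illustrative, not from the paper.

def npv(investment: float, annual_saving_kwh: float,
        energy_price: float, discount_rate: float, lifetime_years: int) -> float:
    """NPV > 0 means the measure pays for itself over its lifetime."""
    discounted_savings = sum(
        annual_saving_kwh * energy_price / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return discounted_savings - investment

# A condensing-boiler example: 4000 euro upfront, 3000 kWh/year saved,
# 0.15 euro/kWh, 4% discount rate, 20-year lifetime.
boiler = npv(4000, 3000, 0.15, 0.04, 20)
```

    Raising the discount rate or lowering the energy price shrinks the NPV, which is exactly the input sensitivity the abstract emphasizes.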
  24. By: Reiss, Michael
    Abstract: The question of how a pure fiat currency is enforced and comes to have a non-zero value has been much debated (Selgin, 1994). Less often addressed is the case where enforcement is taken for granted and we ask what value (in terms of goods and services) the currency will end up taking. Establishing a decentralised mechanism for price formation has proven a challenge for economists: "Since no decentralized out-of-equilibrium adjustment mechanism has been discovered, we currently have no acceptable dynamical model of the Walrasian system." (Gintis, 2006) In his paper, Gintis put forward a model for price discovery based on the evolution of the model's agents, i.e. "poorly performing agents dying and being replaced by copies of the well performing agents." It seems improbable that this mechanism is the driving force behind price discovery in the real world. This paper proposes a more realistic mechanism and presents results from a corresponding agent-based model.
    Keywords: Price discovery, Walrasian system, Agent-based model
    JEL: E37 E47
    Date: 2023
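    The abstract does not spell out the proposed mechanism, but the general shape of a decentralised out-of-equilibrium adjustment can be illustrated with a toy rule (not the paper's): sellers revise private asks upward after a successful trade and downward after a failed one, so prices drift toward the buyers' valuation without any central auctioneer.

```python
import random

# Toy decentralised price-adjustment sketch (a generic illustration,
# NOT the mechanism proposed in the paper): each seller holds a private
# ask and nudges it up after a sale, down after a failed sale.

def simulate(n_sellers=50, rounds=5000, step=0.05, valuation=1.0, seed=0):
    rng = random.Random(seed)
    asks = [rng.uniform(0.0, 2.0) for _ in range(n_sellers)]
    for _ in range(rounds):
        i = rng.randrange(n_sellers)
        if asks[i] <= valuation:          # buyer accepts: try asking more
            asks[i] += step * rng.random()
        else:                             # no sale: concede a little
            asks[i] -= step * rng.random()
    return asks

final_asks = simulate()  # asks cluster around the buyers' valuation
```

    Whether such a rule clears markets in general is exactly the open question the Gintis quote points to; the toy only shows the decentralised feedback structure.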
  25. By: Brice Corgnet (Emlyon Business School, GATE UMR 5824, F-69130 Ecully, France)
    Abstract: We design a laboratory experiment in which a human or an algorithm decides which of two workers to dismiss. The algorithm automatically dismisses the least productive worker, whereas human bosses have full discretion over their decisions. Using performance metrics and questionnaires, we find that fired workers react more negatively to human than to algorithmic decisions in a broad range of tasks. We show that spitefulness exacerbates this negative reaction. Our findings suggest that algorithms could help tame negative reactions to dismissals.
    Keywords: Algorithmic dismissals, laboratory experiments, distributive justice, work satisfaction, social preferences
    JEL: C92 D23 D91 M50 O33
    Date: 2023
  26. By: MAIER ESSINGER Sofia (European Commission - JRC); RICCI Mattia (European Commission - JRC)
    Abstract: During the 2010-2019 decade, consumption taxes rose in the vast majority of EU Member States as a result of austerity measures, tax shifts, and the taxation of transport and housing-related energy consumption. The redistributive impact of these policy changes remains mostly unexplored. In this paper, we provide new empirical evidence on the redistributive effect of changes in VAT and excises over this period, along with other developments in the broader tax-benefit system, including tax shift reforms. Our results indicate that consumption tax systems in most EU countries have become more unequalizing as a result of an increase in both the tax burden and its regressivity. While the taxation of transport increased the most, the largest inequality impact was driven by the taxation of housing-related energy consumption. Only in a few countries were these policy changes accompanied by an increase in social transfers sufficient to compensate the poorest households.
    Keywords: Consumption taxation, Tax shift, Austerity, Inequality, Microsimulation
    Date: 2022–12

General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.