nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒07‒10
twenty-six papers chosen by



  1. Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network Model By Aadhitya A; Rajapriya R; Vineetha R S; Anurag M Bagde
  2. Forecasting the Performance of US Stock Market Indices During COVID-19: RF vs LSTM By Reza Nematirad; Amin Ahmadisharaf; Ali Lashgari
  3. Financial sentiment analysis using FinBERT with application in predicting stock movement By Tingsong Jiang; Andy Zeng
  4. Market Making with Deep Reinforcement Learning from Limit Order Books By Hong Guo; Jianwu Lin; Fanlin Huang
  5. A Comparative Analysis of Portfolio Optimization Using Mean-Variance, Hierarchical Risk Parity, and Reinforcement Learning Approaches on the Indian Stock Market By Jaydip Sen; Aditya Jaiswal; Anshuman Pathak; Atish Kumar Majee; Kushagra Kumar; Manas Kumar Sarkar; Soubhik Maji
  6. Deep Unsupervised Learning for Simultaneous Visual Odometry and Depth Estimation: A Novel Approach By Said, Aymen
  7. Swing Contract Pricing: A Parametric Approach with Adjoint Automatic Differentiation and Neural Networks By Vincent Lemaire; Gilles Pagès; Christian Yeo
  8. Explaining AI in Finance: Past, Present, Prospects By Barry Quinn
  9. Human oversight done right: The AI Act should use humans to monitor AI only when effective By Walter, Johannes
  10. Support for Stock Trend Prediction Using Transformers and Sentiment Analysis By Harsimrat Kaeley; Ye Qiao; Nader Bagherzadeh
  11. AWS Corporate AI Use Cases By Brian Kan; Douglas Klein
  12. Stock and market index prediction using Informer network By Yuze Lu; Hailong Zhang; Qiwen Guo
  13. Healthcare Procurement and Firm Innovation: Evidence from AI-powered Equipment By Sofia Patsali; Michele Pezzoni; Jackie Krafft
  14. Extension of Endogenous Growth Theory: Artificial Intelligence as a Self-Learning Entity By Julia M. Puaschunder
  15. Game-Theoretical Analysis of Reviewer Rewards in Peer-Review Journal Systems: Analysis and Experimental Evaluation using Deep Reinforcement Learning By Minhyeok Lee
  16. The Influence of ChatGPT on Artificial Intelligence Related Crypto Assets: Evidence from a Synthetic Control Analysis By Aman Saggu; Lennart Ante
  17. Valuing the U.S. Data Economy Using Machine Learning and Online Job Postings By J Bayoán Santiago Calderón; Dylan Rassier
  18. Monetary policy and financial markets: evidence from Twitter traffic By Donato Masciandaro; Davide Romelli; Gaia Rubera
  19. InProC: Industry and Product/Service Code Classification By Simerjot Kaur; Andrea Stefanucci; Sameena Shah
  20. Global universal approximation of functional input maps on weighted spaces By Christa Cuchiero; Philipp Schmocker; Josef Teichmann
  21. Robust inference for the treatment effect variance in experiments using machine learning By Alejandro Sanchez-Becerra
  22. Strategies in the repeated prisoner’s dilemma: A cluster analysis By Heller, Yuval; Tubul, Itay
  23. New Methods for Old Questions: Predicting Historical Urban Renewal Areas in the United States By Xu, Wenfei
  24. Artificial Energy General Intelligence AEGI By Alfarisi, Omar
  25. Quantum computing: a bubble ready to burst or a looming breakthrough? By Giuseppe Bruno
  26. Analysis of the preliminary AI standardisation work plan in support of the AI Act By SOLER GARRIDO Josep; FANO YELA Delia; PANIGUTTI Cecilia; JUNKLEWITZ Henrik; HAMON Ronan; EVAS Tatjana; ANDRÉ Antoine-Alexandre; SCALZO Salvatore

  1. By: Aadhitya A; Rajapriya R; Vineetha R S; Anurag M Bagde
    Abstract: The stock market matters because shares represent ownership claims on businesses, and a company's financial standing is closely tied to its stock. Predicting a company's stock market performance is difficult because prices change continuously and are never constant, which makes the data complex to model. However, if a company's past market performance is known, its data can be tracked and predictions provided to stockholders so they can make informed decisions about their holdings. Many machine learning models have been proposed for this task, but they have fallen short for reasons such as the absence of advanced libraries and poor accuracy when trained on real-time data. To capture both the features and the temporal patterns of stock data, a CNN-LSTM neural network can be built. CNNs have recently been applied to Natural Language Processing (NLP) tasks; similarly, by extracting features from stock data and converting them into tensors, we can feed them to an LSTM network that learns temporal patterns and predicts the market over a given period. The accuracy of the CNN-LSTM model remains high even when trained on real-time stock market data. This paper describes the features of the custom CNN-LSTM model, the experiments we conducted with it (training on stock market datasets and performance comparisons with other models), and the final results we obtained.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.14378&r=cmp
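The preprocessing step the abstract describes, converting a price series into input tensors for a CNN-LSTM, can be sketched as a sliding-window transform. This is a minimal illustration, not the paper's code; the window length and prices are invented for the example:

```python
# Sketch: turn a 1-D price series into overlapping windows ("tensors")
# that a CNN-LSTM model could consume, plus next-step prediction targets.

def make_windows(prices, window=3):
    """Return (X, y): each X[i] is a window of `window` consecutive
    prices, and y[i] is the price immediately following that window."""
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])
        y.append(prices[i + window])
    return X, y

prices = [10.0, 10.5, 10.2, 10.8, 11.0, 10.9]  # illustrative series
X, y = make_windows(prices, window=3)
```

Each window would then be reshaped to the (samples, timesteps, features) layout that convolutional and LSTM layers expect.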
  2. By: Reza Nematirad; Amin Ahmadisharaf; Ali Lashgari
    Abstract: The US stock market experienced instability following the 2007-2009 recession, and COVID-19 has posed a further significant challenge to US stock traders and investors. To mitigate risks and improve profits, traders and investors need forecasting models that account for the effects of the pandemic. With the COVID-19 pandemic after the recession in view, two machine learning models, Random Forest and LSTM, are used to forecast two major US stock market indices. Historical price data after the Great Recession are used to develop the models and forecast index returns. Cross-validation is used to evaluate model performance during training, and hyperparameter optimization, regularization (such as dropout and weight decay), and preprocessing further improve performance. Using high-accuracy machine learning techniques, traders and investors can forecast stock market behavior, stay ahead of the competition, and improve profitability. Keywords: COVID-19, LSTM, S&P500, Random Forest, Russell 2000, Forecasting, Machine Learning, Time Series JEL Code: C6, C8, G4.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.03620&r=cmp
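Cross-validation on time series, as used here for training evaluation, is normally done with expanding (walk-forward) windows so the model never trains on future data. A minimal sketch of such a splitter, with fold counts chosen purely for illustration:

```python
def walk_forward_splits(n, n_folds=3, min_train=2):
    """Expanding-window cross-validation: each fold trains on all
    observations up to a cutoff and tests on the next block, so
    training data always precedes test data in time."""
    test_size = (n - min_train) // n_folds
    splits = []
    for k in range(n_folds):
        cut = min_train + k * test_size
        train = list(range(cut))               # everything before the cutoff
        test = list(range(cut, cut + test_size))  # the next block
        splits.append((train, test))
    return splits

splits = walk_forward_splits(8, n_folds=3, min_train=2)
```

Library implementations (e.g. scikit-learn's `TimeSeriesSplit`) follow the same pattern.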
  3. By: Tingsong Jiang; Andy Zeng
    Abstract: We apply sentiment analysis in a financial context using FinBERT and build an LSTM-based deep neural network to predict the movement of financial markets. We apply this model to a stock news dataset and compare its effectiveness with BERT, a plain LSTM, and the classical ARIMA model. We find that sentiment is an effective factor in predicting market movement. We also propose several methods to improve the model.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.02136&r=cmp
  4. By: Hong Guo; Jianwu Lin; Fanlin Huang
    Abstract: Market making (MM) is an important research topic in quantitative finance: the agent must continuously optimize ask and bid quotes to provide liquidity and make profits. The limit order book (LOB) contains information on all active limit orders and is an essential basis for decision-making. Modeling the evolving, high-dimensional, low signal-to-noise-ratio LOB data is a critical challenge. Traditional MM strategies rely on strong assumptions about, for example, the price process and the order arrival process, while previous reinforcement learning (RL) work relied on handcrafted market features, which are insufficient to represent the market. This paper proposes an RL agent for market making with LOB data. We leverage a neural network with convolutional filters and an attention mechanism (Attn-LOB) for feature extraction from the LOB, and we design a new continuous action space and a hybrid reward function for the MM task. Finally, we conduct comprehensive experiments on latency and interpretability, showing that our agent has good applicability.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.15821&r=cmp
  5. By: Jaydip Sen; Aditya Jaiswal; Anshuman Pathak; Atish Kumar Majee; Kushagra Kumar; Manas Kumar Sarkar; Soubhik Maji
    Abstract: This paper presents a comparative analysis of the performance of three portfolio optimization approaches: the mean-variance portfolio (MVP), the hierarchical risk parity (HRP) portfolio, and a reinforcement learning-based portfolio. The portfolios are trained and tested on historical data for several stocks, and their performances are compared in terms of annual return, annual risk, and Sharpe ratio. The reinforcement learning-based portfolio design uses the deep Q-learning technique; because of the large number of possible states, the Q-table is constructed using a deep neural network. The historical prices of the 50 premier stocks of the Indian stock market, known as the NIFTY50 stocks, and of several stocks from 10 important sectors of the Indian stock market are used to create the environment for training the agent.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.17523&r=cmp
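The comparison criteria named in the abstract (annualized return, annualized risk, Sharpe ratio) have standard definitions from daily returns. A minimal sketch, assuming 252 trading days per year and illustrative return figures:

```python
import math

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio: (annualized mean excess return) divided
    by (annualized standard deviation of returns)."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)  # sample variance
    ann_return = mean * periods
    ann_vol = math.sqrt(var) * math.sqrt(periods)
    return (ann_return - risk_free) / ann_vol

rets = [0.01, -0.005, 0.007, 0.002, -0.001]  # illustrative daily returns
s = sharpe_ratio(rets)
```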
  6. By: Said, Aymen
    Abstract: This article presents a novel approach for simultaneous visual odometry and depth estimation using deep unsupervised learning techniques. The proposed method leverages the power of deep neural networks to learn representations of visual data and estimate both camera motion and scene depth without the need for ground truth annotations. By formulating the problem as a self-supervised learning task, the network learns to extract meaningful features and infer depth information from monocular images. Experimental results on various datasets demonstrate the effectiveness and accuracy of the proposed approach in real-world scenarios.
    Date: 2023–05–20
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:56ngs&r=cmp
  7. By: Vincent Lemaire; Gilles Pagès; Christian Yeo
    Abstract: We propose two parametric approaches to price swing contracts with firm constraints. Our objective is to create approximations for the optimal control, which represents the amounts of energy purchased throughout the contract. The first approach involves explicitly defining a parametric function to model the optimal control and optimizing its parameters using stochastic gradient descent-based algorithms. The second approach builds on the first, replacing the parametric function with neural networks. Our numerical experiments demonstrate that, by using Langevin-based algorithms, both parameterizations provide, in a short computation time, better prices than state-of-the-art methods (such as the one given by Longstaff and Schwartz).
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.03822&r=cmp
  8. By: Barry Quinn
    Abstract: This paper explores the journey of AI in finance, with a particular focus on the crucial role and potential of Explainable AI (XAI). We trace AI's evolution from early statistical methods to sophisticated machine learning, highlighting XAI's role in popular financial applications. The paper underscores the superior interpretability of methods like Shapley values compared to traditional linear regression in complex financial scenarios. It emphasizes the necessity of further XAI research, given forthcoming EU regulations. The paper demonstrates, through simulations, that XAI enhances trust in AI systems, fostering more responsible decision-making within finance.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.02773&r=cmp
  9. By: Walter, Johannes
    Abstract: The EU's proposed Artificial Intelligence Act (AI Act) is meant to ensure safe AI systems in high-risk applications. The Act relies on human supervision of machine-learning algorithms, yet mounting evidence indicates that such oversight is not always reliable. In many cases, humans cannot accurately assess the quality of algorithmic recommendations, and thus fail to prevent harmful behaviour. This policy brief proposes three ways to solve the problem: First, Article 14 of the AI Act should be revised to acknowledge that humans often have difficulty assessing recommendations made by algorithms. Second, the suitability of human oversight for preventing harmful outcomes should be empirically tested for every high-risk application under consideration. Third, following Biermann et al. (2022), human decision-makers should receive feedback on past decisions to enable learning and improve future decisions.
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:zewpbs:022023&r=cmp
  10. By: Harsimrat Kaeley; Ye Qiao; Nader Bagherzadeh
    Abstract: Stock trend analysis has been an influential time-series prediction topic due to its lucrative and inherently chaotic nature. Many models that aim to accurately predict stock trends have been based on Recurrent Neural Networks (RNNs). However, because of the limitations of RNNs, such as vanishing gradients and the loss of long-term dependencies as sequence length increases, in this paper we develop a Transformer-based model that uses technical stock data and sentiment analysis to conduct accurate stock trend prediction over long time windows. The paper also introduces a novel dataset containing daily technical stock data and top news headline data spanning almost three years. Stock prediction based solely on technical data can suffer from lag caused by the inability of stock indicators to effectively factor in breaking market news; sentiment analysis of top headlines can help account for unforeseen shifts in market conditions caused by news coverage. We measure the performance of our model against RNNs over sequence lengths spanning 5 to 30 business days to mimic trading strategies of different lengths. This reveals an improvement in directional accuracy over RNNs as sequence length increases, with the largest improvement, close to 18.63%, at 30 business days.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.14368&r=cmp
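Directional accuracy, the metric reported in this abstract, scores a forecast by whether it gets the up/down direction of each move right rather than the exact price. A minimal sketch with invented series:

```python
def directional_accuracy(actual, predicted):
    """Fraction of time steps where the predicted price move (up vs.
    not up) matches the actual move; the first observation has no
    preceding value and is skipped."""
    hits = 0
    for t in range(1, len(actual)):
        actual_up = actual[t] > actual[t - 1]
        pred_up = predicted[t] > predicted[t - 1]
        hits += actual_up == pred_up
    return hits / (len(actual) - 1)

acc = directional_accuracy([1, 2, 1, 3], [1, 3, 2, 2])
```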
  11. By: Brian Kan (Crean Lutheran High School, Irvine, United States); Douglas Klein (New Jersey City University, United States)
    Abstract: Amazon, with $469 billion in sales in 2021, has established itself as a world-class user of AI, utilizing Machine Learning in its search engine to deliver desired results quickly - so millions of shoppers find the products they want to buy. Amazon’s affiliate, Amazon Web Services, had annual sales of $62 billion in 2021, making it the 53rd largest company on the Fortune 500 as measured by revenues. AWS provides enterprises with a fully managed AI service with the tools needed to execute every step of the ML development lifecycle in one integrated environment. By 2021, more than one hundred thousand companies utilized AWS Machine Learning - more than any other cloud platform. Outside of traditional search engine applications, what are some compelling and important business use cases where ML and AI have the greatest impact? Some use cases in this paper: AWS AI and Machine Learning are used by commercial landlords and industrial real estate owners to save energy and reduce carbon emissions. The World Wildlife Fund uses AWS AI tools in Indonesia to better understand the size and health of orangutan populations in their native habitat. And The Walt Disney Company uses ML and AI to organize metadata into one archival system, storing information about the stories, scenes, and characters in every second of Disney’s huge catalog of shows and movies.
    Keywords: AWS, Amazon Web Services AI, AWS Machine Learning, AWS Business Use Cases
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:smo:raiswp:0205&r=cmp
  12. By: Yuze Lu; Hailong Zhang; Qiwen Guo
    Abstract: Applications of deep learning to financial market prediction have attracted considerable attention from investors and researchers. For intra-day prediction at the minute scale in particular, the dramatic fluctuations of volume and stock prices within short time periods pose a great challenge to network convergence. Informer is a novel network that improves on the Transformer with smaller computational complexity, longer prediction length, and global time stamp features. We design three experiments to compare Informer with the commonly used networks LSTM, Transformer, and BERT at 1-minute and 5-minute frequencies for four different stocks/market indices, measuring the prediction results by three evaluation criteria: MAE, RMSE, and MAPE. Informer obtains the best performance among all the networks on every dataset. A network without the global time stamp mechanism has a significantly lower prediction effect than the complete Informer; evidently, this mechanism supplies the time series characteristics that substantially improve the networks' prediction accuracy. Finally, a transfer learning capability experiment is conducted, in which Informer also achieves good performance. Informer is robust, improves performance in market prediction, and can be readily adapted to real trading.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.14382&r=cmp
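The three evaluation criteria the abstract names (MAE, RMSE, MAPE) have standard definitions. A minimal sketch with invented values, not data from the paper:

```python
import math

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error (penalizes large errors more than MAE)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error, in percent (undefined if any y is 0)."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

y, yhat = [100.0, 200.0], [110.0, 190.0]  # illustrative actual vs. predicted
```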
  13. By: Sofia Patsali (Université Côte d'Azur, France; CNRS, GREDEG); Michele Pezzoni (Université Côte d'Azur, France; CNRS, GREDEG; Observatoire des Sciences et Techniques, HCERES, France; ICRIOS, Bocconi University, Italy); Jackie Krafft (Université Côte d'Azur, France; CNRS, GREDEG)
    Abstract: In line with the innovation procurement literature, this work investigates the impact of becoming a supplier to a national network of excellence bringing together French hospitals on the supplier's innovative performance. It investigates whether a higher information flow from hospitals to suppliers, proxied by the supply of AI-powered medical equipment, is associated with higher innovative performance. Our empirical analysis relies on a dataset combining unprecedented granular data on procurement bids and equipment with patent data to measure the firm's innovative performance. To identify the firm's innovative activities relevant to the bid, we use an advanced neural network algorithm for text analysis linking firms' equipment descriptions with relevant patent documents. Our results show that firms becoming hospital suppliers have a significantly higher propensity to innovate. Regarding the mechanism, we show that supplying AI-powered equipment further boosts the suppliers' innovative performance, which raises potentially important policy implications.
    Keywords: Innovation performance, public procurement, medical equipment, hospitals, artificial intelligence
    JEL: H57 D22 O31 C81
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:gre:wpaper:2023-05&r=cmp
  14. By: Julia M. Puaschunder (The New School, New York, USA)
    Abstract: The Artificial Intelligence (AI) evolution is a broad set of methods, algorithms, and technologies that make software intelligent in a human-like way and that is encroaching on our contemporary workplace. Thinking like humans but acting rationally is the primary goal of AI innovations. The current market disruption with AI lies at the core of IT-enhanced economic growth driven by algorithms – enabled, for instance, via sharing economies and big-data information gains, self-checkouts, online purchases and bookings, medical services, social care, law, retail, logistics, and finance, to name a few domains in which AI leads to productivity enhancement. While we have ample accounts of AI entering our everyday lives, we have hardly any information about economic growth driven by AI. Preliminary studies found a negative relation between digitalization and economic growth, indicating that we lack a proper growth theory capturing the economic value imbued in AI. There are also indications that AI-led growth based on ICT technologies may widen the wage gap between skilled and unskilled labor, raising inequality. This paper makes the theoretical case for AI as a self-learning entity to be integrated into endogenous growth theory, which credits learning and knowledge transformation as vital ingredients of economic productivity. Future research may empirically validate the claim that AI as a self-learning entity is a driver of endogenous growth. All these endeavors may prepare for research on how to enhance human welfare with AI-induced growth based on inclusive AI-human compatibility and mutual exchange between machines and human beings.
    Keywords: Algorithms, Artificial Intelligence (AI), Digitalization, Digitalization disruption, Digital inequality, Economic growth, Endogenous growth
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:smo:raiswp:0224&r=cmp
  15. By: Minhyeok Lee
    Abstract: In this paper, we navigate the intricate domain of reviewer rewards in open-access academic publishing, leveraging the precision of mathematics and the strategic acumen of game theory. We conceptualize the prevailing voucher-based reviewer reward system as a two-player game, subsequently identifying potential shortcomings that may incline reviewers towards binary decisions. To address this issue, we propose and mathematically formalize an alternative reward system with the objective of mitigating this bias and promoting more comprehensive reviews. We engage in a detailed investigation of the properties and outcomes of both systems, employing rigorous game-theoretical analysis and deep reinforcement learning simulations. Our results underscore a noteworthy divergence between the two systems, with our proposed system demonstrating a more balanced decision distribution and enhanced stability. This research not only augments the mathematical understanding of reviewer reward systems, but it also provides valuable insights for the formulation of policies within journal review system. Our contribution to the mathematical community lies in providing a game-theoretical perspective to a real-world problem and in the application of deep reinforcement learning to simulate and understand this complex system.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.12088&r=cmp
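The abstract models the voucher-based reward scheme as a two-player game and analyzes its equilibria. A minimal sketch of a pure-strategy Nash equilibrium finder; the payoff matrix below is entirely hypothetical, for illustration only, and is not the paper's calibration:

```python
def pure_nash(payoffs):
    """Find pure-strategy Nash equilibria of a two-player game.
    payoffs[(i, j)] = (player1_payoff, player2_payoff) for action
    profile (i, j); a profile is an equilibrium when neither player
    can gain by unilaterally deviating."""
    rows = sorted({k[0] for k in payoffs})
    cols = sorted({k[1] for k in payoffs})
    eq = []
    for i in rows:
        for j in cols:
            best_i = all(payoffs[(i, j)][0] >= payoffs[(i2, j)][0] for i2 in rows)
            best_j = all(payoffs[(i, j)][1] >= payoffs[(i, j2)][1] for j2 in cols)
            if best_i and best_j:
                eq.append((i, j))
    return eq

# Hypothetical reviewer-vs-journal payoffs (invented numbers):
payoffs = {
    ("thorough", "reward"):    (3, 3),
    ("thorough", "no_reward"): (0, 2),
    ("binary",   "reward"):    (2, 1),
    ("binary",   "no_reward"): (1, 0),
}
eq = pure_nash(payoffs)
```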
  16. By: Aman Saggu; Lennart Ante
    Abstract: The introduction of OpenAI's large language model, ChatGPT, catalyzed investor attention towards artificial intelligence (AI) technologies, including AI-related crypto assets not directly related to ChatGPT. Utilizing the synthetic difference-in-difference methodology, we identify significant 'ChatGPT effects', with AI-related crypto assets experiencing average returns ranging between 10.7% and 15.6% (35.5% to 41.3%) in the one-month (two-month) period after the ChatGPT launch. Furthermore, Google search volumes, a proxy for attention to AI, emerged as critical pricing indicators for AI-related crypto assets post-launch. We conclude that investors perceived AI assets as possessing heightened potential or value after the launch, resulting in higher market valuations.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.12739&r=cmp
  17. By: J Bayoán Santiago Calderón; Dylan Rassier (Bureau of Economic Analysis)
    Abstract: With the recent proliferation of data collection and uses in the digital economy, the understanding and statistical treatment of data stocks and flows is of interest among compilers and users of national economic accounts. In this paper, we measure the value of own-account data stocks and flows for the U.S. business sector by summing the production costs of data-related activities implicit in occupations. Our method augments the traditional sum-of-costs methodology for measuring other own-account intellectual property products in national economic accounts by proxying occupation-level time-use factors using a machine learning model and the text of online job advertisements (Blackburn 2021). In our experimental estimates, we find that annual current-dollar investment in own-account data assets for the U.S. business sector grew from $84 billion in 2002 to $186 billion in 2021, with an average annual growth rate of 4.2 percent. Cumulative current-dollar investment for the period 2002–2021 was $2.6 trillion. In addition to the annual current-dollar investment, we present historical-cost net stocks, real growth rates, and effects on value-added by the industrial sector.
    JEL: E22 O3 O51
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:bea:wpaper:0204&r=cmp
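As a back-of-the-envelope check on the abstract's figures, the compound average annual growth rate implied by the reported $84 billion (2002) and $186 billion (2021) investment levels can be computed directly; it lands near the stated 4.2 percent (the paper's figure may be computed slightly differently, e.g. as a mean of year-over-year growth rates):

```python
def cagr(start, end, years):
    """Compound average annual growth rate between two levels."""
    return (end / start) ** (1 / years) - 1

# Reported own-account data investment, in billions of current dollars
g = cagr(84, 186, 2021 - 2002)
```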
  18. By: Donato Masciandaro (Department of Economics, Bocconi University); Davide Romelli (Department of Economics, Trinity College Dublin); Gaia Rubera (Department of Marketing, Bocconi University)
    Abstract: Monetary policy announcements of major central banks trigger substantial discussions about the policy on social media. In this paper, we use machine learning tools to identify Twitter messages related to monetary policy in a short-time window around the release of policy decisions of three major central banks, namely the ECB, the US Fed and the Bank of England. We then build an hourly measure of similarity between the tweets about monetary policy and the text of policy announcements that can be used to evaluate both the ex-ante predictability and the ex-post credibility of the announcement. We show that large differences in similarity are associated with a higher stock market and sovereign yield volatility, particularly around ECB press conferences. Our results also show a strong link between changes in similarity and asset price returns for the ECB, but less so for the Fed or the Bank of England.
    Keywords: monetary policy, central bank communication, financial markets, social media, Twitter, US Federal Reserve, European Central Bank, Bank of England.
    JEL: E44 E52 E58 G14 G15 G41
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:tcd:tcduee:tep1023&r=cmp
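A common way to build a similarity measure between tweets and an announcement text, as the abstract describes, is cosine similarity between bag-of-words vectors. A minimal sketch (raw term counts, no stemming or TF-IDF weighting, which a production pipeline would likely add):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-count vectors of
    two texts: dot product divided by the product of vector norms."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

sim = cosine_similarity("rates will rise", "rates will fall")
```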
  19. By: Simerjot Kaur; Andrea Stefanucci; Sameena Shah
    Abstract: Determining industry and product/service codes for a company is an important real-world task and is typically very expensive, as it involves manual curation of data about the companies. Building an AI agent that can predict these codes automatically can significantly reduce costs and eliminate human biases and errors. However, the unavailability of labeled datasets, as well as the need for high-precision results within the financial domain, makes this a challenging problem. In this work, we propose a hierarchical multi-class industry code classifier with a targeted multi-label product/service code classifier, leveraging advances in unsupervised representation learning techniques. We demonstrate how a high-quality industry and product/service code classification system can be built using an extremely limited labeled dataset. We evaluate our approach on a dataset of more than 20,000 companies and achieve a classification accuracy of more than 92%. Additionally, we compare our approach against a dataset of 350 product/service codes manually labeled by Subject Matter Experts (SMEs) and obtain an accuracy of more than 96%, resulting in real-life adoption within the financial domain.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.13532&r=cmp
  20. By: Christa Cuchiero; Philipp Schmocker; Josef Teichmann
    Abstract: We introduce so-called functional input neural networks defined on a possibly infinite dimensional weighted space with values also in a possibly infinite dimensional output space. To this end, we use an additive family as hidden layer maps and a non-linear activation function applied to each hidden layer. Relying on Stone-Weierstrass theorems on weighted spaces, we can prove a global universal approximation result for generalizations of continuous functions going beyond the usual approximation on compact sets. This then applies in particular to the approximation of (non-anticipative) path space functionals via functional input neural networks. As a further application of the weighted Stone-Weierstrass theorem, we prove a global universal approximation result for linear functions of the signature. We also introduce the viewpoint of Gaussian process regression in this setting and show that the reproducing kernel Hilbert spaces of the signature kernels are Cameron-Martin spaces of certain Gaussian processes. This paves the way towards uncertainty quantification for signature kernel regression.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.03303&r=cmp
  21. By: Alejandro Sanchez-Becerra
    Abstract: Experimenters often collect baseline data to study heterogeneity. I propose the first valid confidence intervals for the VCATE, the treatment effect variance explained by observables. Conventional approaches yield incorrect coverage when the VCATE is zero. As a result, practitioners could be prone to detect heterogeneity even when none exists. The reason why coverage worsens at the boundary is that all efficient estimators have a locally-degenerate influence function and may not be asymptotically normal. I solve the problem for a broad class of multistep estimators with a predictive first stage. My confidence intervals account for higher-order terms in the limiting distribution and are fast to compute. I also find new connections between the VCATE and the problem of deciding whom to treat. The gains of targeting treatment are (sharply) bounded by half the square root of the VCATE. Finally, I document excellent performance in simulation and reanalyze an experiment from Malawi.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.03363&r=cmp
  22. By: Heller, Yuval; Tubul, Itay
    Abstract: This study uses k-means clustering to analyze the strategic choices made by participants playing the infinitely repeated prisoner’s dilemma in laboratory experiments. We identify five distinct strategies that closely resemble well-known pure strategies: always defect, suspicious tit-for-tat, grim, tit-for-tat, and always cooperate. Our analysis reveals moderate systematic deviations of the clustered strategies from their pure counterparts, and these deviations are important for capturing the experimental behavior. Additionally, we demonstrate that our approach significantly enhances the predictive power of previous analyses. Finally, we examine how the frequencies and payoffs of these clustered strategies vary with the underlying game parameters.
    Keywords: k-means clustering, machine-learning, memory, laboratory experiment, repeated games.
    JEL: C7 C91
    Date: 2023–05–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:117444&r=cmp
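The clustering technique the abstract relies on, k-means (Lloyd's algorithm), can be sketched in one dimension, e.g. clustering participants by a scalar feature such as their cooperation rate. The data points and starting centers below are invented for illustration:

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm on scalar data: repeatedly assign each point
    to its nearest center, then move each center to the mean of its
    assigned cluster."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Empty clusters keep their previous center.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Illustrative cooperation rates and two starting centers
centers, clusters = kmeans_1d([0.05, 0.1, 0.9, 0.95, 0.5], [0.0, 1.0])
```

Real analyses cluster full strategy vectors rather than scalars, but the assign-then-update loop is the same.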
  23. By: Xu, Wenfei
    Abstract: Mid-20th-century urban renewal in the United States was transformational for the physical urban fabric and for the socioeconomic trajectories of the affected neighborhoods and their displaced residents. However, there is little research that systematically investigates its impacts, owing to incomplete national data. This article uses a multiple machine learning method to discover 204 new Census tracts that were likely sites of federal urban renewal, highway-construction-related demolition, and other urban renewal projects between 1949 and 1970. It also aims to understand the factors motivating the decision to “renew” certain neighborhoods. I find that race, housing age, and homeownership are all determinants of renewal. Moreover, by stratifying the analysis along neighborhoods perceived to be more or less risky, I also find that race and housing age are two distinct channels that influence renewal.
    Date: 2023–05–13
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:bsvr8&r=cmp
  24. By: Alfarisi, Omar
    Abstract: Artificial Energy General Intelligence (AEGI) is a natural progression of Artificial General Intelligence (AGI) that caters to the energy industry. It is crucial to optimize the entire value chain involved in generating, transporting, and storing energy for the betterment of humanity, the environment, industry, and the scientific community. Most research efforts focus on a specific area of the value chain, leading to a disconnect between multiple disciplines and hindering effective problem-solving. AEGI proposes integrating the learning from each discipline in the energy sector to create an optimal solution that simultaneously addresses multiple objectives. This integration is more complex than solving each discipline's challenges separately, but achieving a sustainable and efficient energy system is necessary.
    Date: 2023–06–01
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:ye254&r=cmp
  25. By: Giuseppe Bruno (Bank of Italy)
    Abstract: The advent of quantum computation and quantum information theory, and the ever-increasing empirical possibilities of translating these theories into real physical systems, have raised expectations in the private and public sectors. Quantum computers process information using the laws of quantum mechanics. By exploiting superposition (an object can be in different states at the same time) and entanglement (different objects can be deeply connected without any direct physical interaction), quantum computers are heralded as the next technological breakthrough. Compared to traditional digital computing, quantum computing offers the potential to dramatically reduce both execution time and energy consumption. However, quantum algorithms cannot be fully realized on machines with fewer than 1, 000 qubits. The greatest hurdle in harnessing quantum computing is the instability of its quantum mechanical features. Meanwhile, research has shifted towards making 'noisy' quantum computers useful. In this work we show three noteworthy applications for central banking activities: gauging financial risk, credit scoring, and transaction settlement. These are still proof-of-concept applications, but they demonstrate the new software paradigms along with looming potential breakthroughs. We provide a few hints on the trade-off between deploying the innovative technology before it is mainstream and the risk of holding off on adopting it and being surpassed by nimbler competition.
    Keywords: quantum computing, quantum information, superposition, entanglement
    JEL: C65 C87
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_716_22&r=cmp
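The superposition and entanglement concepts defined in the abstract above can be illustrated with a minimal classical simulation (not drawn from the paper; an illustrative sketch using plain numpy state vectors, with the standard Hadamard and CNOT gates):

```python
import numpy as np

# Superposition: applying a Hadamard gate to |0> yields equal
# amplitudes on |0> and |1> -- the qubit is in both states at once.
ket0 = np.array([1.0, 0.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
superposed = H @ ket0  # amplitudes [1/sqrt(2), 1/sqrt(2)]

# Entanglement: a CNOT gate after the Hadamard produces the Bell state
# (|00> + |11>)/sqrt(2), where neither qubit has a state of its own.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
two_qubit = np.kron(superposed, ket0)  # joint state |+>|0>
bell = CNOT @ two_qubit                # amplitudes only on |00> and |11>
print(np.round(bell, 4))
```

Measuring either qubit of the Bell state instantly fixes the other's outcome, which is the "deep connection without direct physical interaction" the abstract refers to.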
  26. By: SOLER GARRIDO Josep (European Commission - JRC); FANO YELA Delia (European Commission - JRC); PANIGUTTI Cecilia (European Commission - JRC); JUNKLEWITZ Henrik (European Commission - JRC); HAMON Ronan (European Commission - JRC); EVAS Tatjana; ANDRÉ Antoine-Alexandre; SCALZO Salvatore
    Abstract: This report provides a systematic analysis of the current standardisation roadmap in support of the AI Act (AIA). The analysis covers standards currently considered by CEN-CENELEC Joint Technical Committee (JTC) 21 on artificial intelligence (AI), evaluating their coverage of the requirements laid out in the legal text. We found that the international standards currently considered already partially cover the AIA requirements for trustworthy AI defined in the regulation. Furthermore, many of the identified remaining gaps are already planned to be addressed by dedicated European standardisation. In order to support the work of standardisers in addressing these gaps, this document presents an independent expert-based analysis and recommendations, highlighting areas deserving standardisers' further attention and, where possible, pointing to additional relevant existing standards or directly proposing additions to the scope of future European standards in support of the AI Act.
    Keywords: Artificial Intelligence, Standards, Transparency, Conformity Assessment, Risk Management, Data Quality, Human Oversight, Record Keeping, Quality Management, Robustness, Accuracy, Cybersecurity
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc132833&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.