nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒03‒06
nineteen papers chosen by



  1. BEAST-Net: Learning novel behavioral insights using a neural network adaptation of a behavioral model By Shoshan, Vered; Hazan, Tamir; Plonsky, Ori
  2. Data-driven Approach for Static Hedging of Exchange Traded Options By Vikranth Lokeshwar Dhandapani; Shashi Jain
  3. Explainable AI in a Real Estate Context – Exploring the Determinants of Residential Real Estate Values By Bastian Krämer; Moritz Stang; Cathrine Nagl; Wolfgang Schäfers
  4. Human emotion recognition in the significance assessment of property attributes By Malgorzata Renigier-Bilozor; Artur Janowski; Marek Walacik; Aneta Chmielewska
  5. Desain Produk Inovatif dengan AI [Innovative Product Design with AI] By Sutanto, Hendrik
  6. Automatic Locally Robust Estimation with Generated Regressors By Juan Carlos Escanciano; Telmo P\'erez-Izquierdo
  7. Fixed-point iterative algorithm for SVI model By Shuzhen Yang; Wenqing Zhang
  8. Machine Learning in Building Documentation (ML-BAU-DOK) - Foundations for Information Extraction for Energy Efficiency and Life Cycle Analysis By Jonathan Rothenbusch; Konstantin Schütz; Feibai Huang; Björn-Martin Kurzrock
  9. Human-AI Interaction – Investigating the Impact on Individuals and Organizations By Peters, Felix
  10. Risk Budgeting Portfolios from Simulations By Bernardo Freitas Paulo da Costa; Silvana M. Pesenti; Rodrigo S. Targino
  11. A Bottom Up Industrial Taxonomy for the UK. Refinements and an Application By Juan Mateos-Garcia; George Richardson
  12. Estimating Very Large Demand Systems By Lanier, Joshua; Large, Jeremy; Quah, John
  13. A multiparametric procedure to understand how the Covid-19 influenced the real estate market By Laura Gabrielli; Aurora Ruggeri; Massimiliano Scarpa
  14. Understanding rental profit and mechanisms that yields rental and real estate prices using machine learning approach By Martin Regnaud; Julie Le Gallo; Marie Breuille
  15. Expanding Corporate Use of Artificial Intelligence By Choi, Mincheol; Song, Danbee; Cho, Jaehan
  16. 現代の金融研究における事前学習済み言語モデルの適用 [Application of Pretrained Language Models in Modern Financial Research] By Lee, Heungmin
  17. International trade cooperation's impact on the world economy By Métivier, Jeanne; Bacchetta, Marc; Bekkers, Eddy; Koopman, Robert Bernard
  18. How to check a simulation study By Morris, Tim P; White, Ian R; Pham, Tra My; Quartagno, Matteo
  19. Application of Pretrained Language Models in Modern Financial Research By Lee, Heungmin

  1. By: Shoshan, Vered; Hazan, Tamir; Plonsky, Ori (Technion - Israel Institute of Technology)
    Abstract: In this paper, we propose a behavioral model called BEAST-Net, which combines the basic logic of BEAST, a psychological theory-based behavioral model, with machine learning (ML) techniques. Our approach is to formalize BEAST mathematically as a differentiable function and parameterize it with a neural network, enabling us to learn the model parameters from data and optimize it using backpropagation. The resulting model, BEAST-Net, is able to scale to larger datasets and adapt to new data with greater ease, while retaining the psychological insights and interpretability of the original model. We evaluate BEAST-Net on the largest public benchmark dataset of human choice tasks and show that it outperforms several baselines, including the original BEAST model. Furthermore, we demonstrate that our model can be used to provide interpretable explanations for choice behavior, allowing us to derive new psychological insights from the data. Our work makes a significant contribution to the field of human decision making by showing that ML techniques can be used to improve the scalability and adaptability of psychological theory-based models while preserving their interpretability and ability to provide insights.
    Date: 2023–01–30
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:kaeny&r=cmp
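    Code note: The core idea — formalizing a theory-based choice model as a differentiable function and fitting its parameters by backpropagation — can be sketched as a toy. This is not the authors' BEAST-Net; the choice rule, network and data below are illustrative assumptions:
      # Toy sketch: wrap a theory-based choice model in a differentiable module
      # so its parameters can be learned from data by backpropagation.
      import torch
      import torch.nn as nn

      class DifferentiableChoiceModel(nn.Module):
          def __init__(self, n_features):
              super().__init__()
              # A small net maps problem features to a behavioral parameter
              # (a choice sensitivity, kept positive via softplus).
              self.net = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                       nn.Linear(16, 1), nn.Softplus())

          def forward(self, features, ev_a, ev_b):
              beta = self.net(features).squeeze(-1)       # learned sensitivity
              # Theory-based rule: logistic choice between expected values.
              return torch.sigmoid(beta * (ev_a - ev_b))  # P(choose A)

      # Synthetic data: 256 choice problems with 5 descriptive features each.
      torch.manual_seed(0)
      X = torch.randn(256, 5)
      ev_a, ev_b = torch.randn(256), torch.randn(256)
      choice_rate = torch.rand(256)                       # observed P(choose A)

      model = DifferentiableChoiceModel(5)
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      for _ in range(200):                                # backprop training loop
          opt.zero_grad()
          loss = nn.functional.binary_cross_entropy(model(X, ev_a, ev_b), choice_rate)
          loss.backward()
          opt.step()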
  2. By: Vikranth Lokeshwar Dhandapani; Shashi Jain
    Abstract: In this paper, we present a data-driven explainable machine learning algorithm for semi-static hedging of exchange-traded options, taking into account transaction costs, with efficient run-time. Further, we provide empirical evidence on the performance of hedging longer-term National Stock Exchange (NSE) Index options using a self-replicating portfolio of shorter-term options and a cash position, achieved by the automated algorithm, under different modeling assumptions and market conditions, including the Covid-stressed period. We also systematically assess the performance of the model using the Superior Predictive Ability (SPA) test, benchmarking against the static hedge proposed by Peter Carr and Liuren Wu and against industry-standard dynamic hedging. Finally, we perform a Profit and Loss (PnL) attribution analysis for the option to be hedged, the delta hedge, and the static hedge portfolio to identify the factors that explain the performance of static hedging.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.00728&r=cmp
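    Code note: A minimal sketch of one way to build a static hedge from simulations: regress the target option's value at the short maturity on the payoffs of shorter-dated calls plus cash. The Black-Scholes pricing and scenario generation below are illustrative stand-ins for the paper's data-driven algorithm:
      # Static hedge sketch: choose fixed weights in short-dated calls (plus
      # cash) whose payoff at the short maturity best replicates the value of
      # a longer-dated target option across simulated scenarios.
      import numpy as np
      from scipy.stats import norm

      def bs_call(S, K, T, r, sigma):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

      r, sigma, S0 = 0.03, 0.2, 100.0
      T_long, T_short = 1.0, 0.25
      strikes = np.arange(70, 135, 5.0)              # short-dated hedge strikes

      # Simulate the index level at the short maturity.
      rng = np.random.default_rng(1)
      S_T = S0 * np.exp((r - 0.5 * sigma**2) * T_short
                        + sigma * np.sqrt(T_short) * rng.standard_normal(20_000))

      # Target: long-dated call value at T_short; instruments: call payoffs.
      target = bs_call(S_T, 100.0, T_long - T_short, r, sigma)
      payoffs = np.maximum(S_T[:, None] - strikes[None, :], 0.0)
      X = np.column_stack([np.ones_like(S_T), payoffs])   # cash + options

      weights, *_ = np.linalg.lstsq(X, target, rcond=None)
      print("cash position:", weights[0])
      print("option weights:", dict(zip(strikes, np.round(weights[1:], 4))))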
  3. By: Bastian Krämer; Moritz Stang; Cathrine Nagl; Wolfgang Schäfers
    Abstract: Real estate is a heterogeneous commodity: no two properties are alike. Making assumptions about value determinants and the way they influence the value of a property is therefore difficult. Traditionally, parametric and thus assumption-based regression techniques are used to identify those dependencies. However, recent studies show that such approaches can map these relationships only to a limited extent. In contrast, modern Machine Learning (ML) approaches are less restrictive and able to identify complex patterns hidden in the data. Nevertheless, these algorithms are less transparent to human beings: an ML approach may be the best solution for predicting the value of a property, but it fails at determining the factors driving that value. To overcome this limitation, explainable artificial intelligence (XAI) has emerged as an important new direction of research. So far, there has been almost no research applying XAI in the field of real estate. We therefore introduce two state-of-the-art XAI approaches, namely Permutation Feature Importance (PFI) and Accumulated Local Effects (ALE) plots, in the context of real estate valuation. Focusing on the residential market, we use a dataset of around 1.2 million observations in Germany. Our findings show that XAI methods enable us to open the “black box” of ML models. In addition, we find several unexpected non-linear dependencies between real estate values and their hedonic characteristics, delivering important insights for better understanding the fundamental functioning of residential real estate markets.
    Keywords: ALE Plots; Explainable AI; housing market; Machine Learning
    JEL: R3
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:2022_50&r=cmp
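    Code note: Permutation Feature Importance, one of the two XAI tools the paper applies, is easy to sketch: shuffle one feature at a time and record the drop in out-of-sample score. The model and data below are synthetic stand-ins:
      # PFI sketch: importance of a feature = baseline score minus the score
      # after that feature's column is randomly permuted.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 4))                 # e.g. size, age, rooms, lat
      y = 3 * X[:, 0] - 2 * X[:, 1] + np.sin(X[:, 2]) + rng.normal(scale=0.1, size=2000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

      baseline = model.score(X_te, y_te)             # R^2 on held-out data
      for j, name in enumerate(["size", "age", "rooms", "lat"]):
          X_perm = X_te.copy()
          X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature-target link
          print(f"{name}: importance = {baseline - model.score(X_perm, y_te):.3f}")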
  4. By: Malgorzata Renigier-Bilozor; Artur Janowski; Marek Walacik; Aneta Chmielewska
    Abstract: One of the largest problems in real estate market analysis, including valuation, is determining the significance of individual property attributes that may affect value or perceived attractiveness. The study attempts to assess the significance of selected real estate attributes based on the detection and analysis of the emotions of potential investors. Human facial expression is a carrier of information that can be recorded and interpreted effectively using artificial intelligence, machine learning and computer vision methods. Developing a reliable algorithm requires, in this case, identifying and investigating the factors that may affect the final solution, from behavioural aspects to technological possibilities. In the presented experiment, an approach that correlates the emotional states of buyers with visualizations of selected property attributes is utilized. The objective of this study is to develop an original method for assessing the significance of property attributes based on emotion recognition technology, as an alternative to the methods commonly used in real estate analysis and valuation, which are usually based on surveys. The empirical analysis enabled determination of the significance of mainstream property attributes from the intensity of emotions evoked within a group of property clients. The significance ranking determined on the basis of unconsciously expressed facial emotions was verified and compared with the answers given in the form of a questionnaire. The results show that the consciously declared attribute ranking differs from the emotion detection conclusions in several cases.
    Keywords: Artificial Intelligence; attribute significance; emotion recognition technology; human emotion detection
    JEL: R3
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:2022_18&r=cmp
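    Code note: The paper's final comparison step — checking whether emotion-derived attribute rankings agree with declared survey rankings — can be sketched with a rank correlation. The emotion-intensity matrix is assumed to come from an upstream facial-expression model, which is not shown:
      # Rank attributes by mean evoked-emotion intensity, then compare with a
      # survey-based ranking via Spearman correlation. All values are toy data.
      import numpy as np
      from scipy.stats import spearmanr

      attributes = ["location", "garden", "garage", "floor", "view"]
      rng = np.random.default_rng(0)
      emotion_intensity = rng.random((40, len(attributes)))  # participants x attributes

      mean_intensity = emotion_intensity.mean(axis=0)
      emotion_ranks = (-mean_intensity).argsort().argsort()  # 0 = most evocative
      survey_ranks = np.array([0, 2, 1, 4, 3])               # toy declared ranking

      rho, _ = spearmanr(emotion_ranks, survey_ranks)
      print(f"rank agreement (Spearman rho): {rho:.2f}")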
  5. By: Sutanto, Hendrik
    Abstract: Design creation using artificial intelligence.
    Date: 2023–01–23
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:qkver&r=cmp
  6. By: Juan Carlos Escanciano; Telmo P\'erez-Izquierdo
    Abstract: Many economic and causal parameters of interest depend on generated regressors, including structural parameters in models with endogenous variables estimated by control functions and in models with sample selection. Inference with generated regressors is complicated by the very complex expressions for influence functions and asymptotic variances. To address this problem, we propose automatic Locally Robust/debiased GMM estimators in a general setting with generated regressors. Importantly, we allow the generated regressors to be produced by machine learners, such as Random Forests, Neural Nets, Boosting, and many others. We use our results to construct novel Doubly Robust estimators for the Counterfactual Average Structural Function and Average Partial Effects in models with endogeneity and sample selection, respectively.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.10643&r=cmp
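    Math note: The construction follows the debiased/locally robust GMM literature; schematically (notation illustrative, not the paper's exact formulation), the original moment is augmented with a first-step correction so that the combined moment is insensitive to the generated regressor:
      % Schematic locally robust moment with first-step nuisance gamma_0:
      \[
        \mathbb{E}\!\left[\, g(W;\beta_0,\gamma_0) + \phi(W;\beta_0,\gamma_0,\alpha_0) \,\right] = 0,
      \]
      % where $g$ is the original identifying moment depending on the generated
      % regressor $\gamma_0$, and $\phi$ is an influence-function correction with
      % $\mathbb{E}[\phi]=0$. Local robustness means the moment is first-order
      % insensitive to perturbations of the first step:
      \[
        \partial_\tau \, \mathbb{E}\!\left[\, g(W;\beta_0,\gamma_\tau)
          + \phi(W;\beta_0,\gamma_\tau,\alpha_0) \,\right]\Big|_{\tau=0} = 0 .
      \]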
  7. By: Shuzhen Yang; Wenqing Zhang
    Abstract: The stochastic volatility inspired (SVI) model is widely used to fit the implied variance smile. At present, most optimization algorithms for the SVI model depend strongly on the input starting point. In this study, we develop an efficient iterative algorithm for the SVI model based on a fixed-point and least-squares optimizer. Furthermore, we present convergence results for this novel iterative algorithm in certain situations. Compared with the quasi-explicit SVI method, we demonstrate the advantages of the fixed-point iterative algorithm using simulation and market data.
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2301.07830&r=cmp
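    Code note: The raw SVI parameterization is w(k) = a + b(rho(k - m) + sqrt((k - m)^2 + sigma^2)); for fixed (m, sigma) it is linear in (a, b*rho, b), which calibration schemes exploit. The sketch below shows that generic inner least-squares structure with an outer search over (m, sigma) — not the paper's specific fixed-point update:
      # SVI calibration structure: inner linear least squares for (a, b*rho, b)
      # given (m, sigma), outer optimization over (m, sigma).
      import numpy as np
      from scipy.optimize import minimize

      k = np.linspace(-0.4, 0.4, 41)                 # log-moneyness grid
      w_mkt = 0.04 + 0.1 * (-0.3 * k + np.sqrt(k**2 + 0.01))  # toy "market" smile

      def inner_lstsq(params):
          m, s = params
          z = k - m
          X = np.column_stack([np.ones_like(k), z, np.sqrt(z**2 + s**2)])
          coef, *_ = np.linalg.lstsq(X, w_mkt, rcond=None)    # (a, b*rho, b)
          return coef, X @ coef - w_mkt

      outer = minimize(lambda p: np.sum(inner_lstsq(p)[1]**2),
                       x0=[0.05, 0.2], method="Nelder-Mead")
      (a, brho, b), _ = inner_lstsq(outer.x)
      print("a=%.4f b=%.4f rho=%.3f m=%.4f sigma=%.4f"
            % (a, b, brho / b, outer.x[0], outer.x[1]))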
  8. By: Jonathan Rothenbusch; Konstantin Schütz; Feibai Huang; Björn-Martin Kurzrock
    Abstract: The construction and real estate industry features long product life cycles, a wide range of stakeholders and a high information density. Information is not only available in large quantities, but also in a very heterogeneous and user-specific manner. Digital building documentation is partially available in some companies and non-existent in others. The result is often analogue, unstructured building documentation, which makes the processing of data and information considerably more difficult and, in the worst case, leads to media disruptions between those involved. However, the benefits of lean, complete and targeted digital building documentation can be manifold. In particular, automated information extraction and further information retrieval are seen as having great potential. Information extraction as an ultimate aim requires a defined handling of analogue documents, transparent criteria regarding data quality and machine readability, as well as a clear classification system. The research project ML-BAU-DOK (funded by the Federal Office for Building and Regional Planning BBSR, SWD-10.08.18.7-20.26) presents the necessary preparatory processes for advanced digital use of building documentation. First, a set of rules is created to digitize paper-based documentation in a targeted manner. The automated separation of mass documentation into individual documents, as well as the classification of documents into selected document classes, is handled using machine learning. The document classes are consolidated from current worldwide class standards and prioritized according to their information content. The project includes the evaluation of 600,000 document pages, which are analysed class-specifically with regard to two use cases: energy efficiency and life cycle analysis. The methodology ensures transferability of the results to other use cases. The key result of ML-BAU-DOK is an algorithm that automatically separates individual documents from a mass scan and assigns them to defined document classes, reducing the amount of scanning and filing required. This leads to a classification system that enables information extraction as a subsequent goal and brings the construction and real estate industry closer to a Common Data Environment.
    Keywords: Document Classification; Document Separation; Heterogeneous Building Documentation; Machine Learning
    JEL: R3
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:2022_234&r=cmp
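    Code note: A minimal sketch of the document-classification step: map page text to one of several document classes with a supervised model. TF-IDF plus a linear classifier stand in for the project's actual pipeline; the separation of mass scans is not shown:
      # Classify document pages into classes from their text content.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      pages = ["energy certificate heating demand kwh",
               "floor plan ground floor scale 1:100",
               "maintenance contract elevator service",
               "energy audit insulation u-value report"]
      labels = ["energy", "plan", "contract", "energy"]

      clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      clf.fit(pages, labels)
      # With this toy training set, a heating-related page should map to "energy".
      print(clf.predict(["annual heating energy consumption report"]))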
  9. By: Peters, Felix
    Abstract: Artificial intelligence (AI) has become increasingly prevalent in consumer and business applications, equally affecting individuals and organizations. The emergence of AI-enabled systems, i.e., systems harnessing AI capabilities that are powered by machine learning (ML), is primarily driven by three technological trends and innovations: increased use of cloud computing allowing large-scale data collection, the development of specialized hardware, and the availability of software tools for developing AI-enabled systems. However, recent research has mainly focused on technological innovations, largely neglecting the interaction between humans and AI-enabled systems. Compared to previous technologies, AI-enabled systems possess some unique characteristics that make the design of human-AI interaction (HAI) particularly challenging. Examples of such challenges include the probabilistic nature of AI-enabled systems due to their dependence on statistical patterns identified in data and their ability to take over predictive tasks previously reserved for humans. Thus, it is widely agreed that existing guidelines for human-computer interaction (HCI) need to be extended to maximize the potential of this groundbreaking technology. This thesis attempts to tackle this research gap by examining both individual-level and organizational-level impacts of increasing HAI. Regarding the impact of HAI on individuals, two widely discussed issues are how the opacity of complex AI-enabled systems affects user interaction and how the increasing deployment of AI-enabled systems affects performance on specific tasks. Consequently, papers A and B of this cumulative thesis address these issues. Paper A addresses the lack of user-centric research in the field of explainable AI (XAI), which is concerned with making AI-enabled systems more transparent for end-users. It investigates how individuals perceive explainability features of AI-enabled systems, i.e., features which aim to enhance transparency. To answer this research question, an online lab experiment with a subsequent survey is conducted in the context of credit scoring. The contributions of this study are two-fold. First, based on the experiment, it can be observed that individuals perceive explainability features positively and have a significant willingness to pay for them. Second, the theoretical model for explaining the purchase decision shows that increased perceived transparency leads to increased user trust and a more positive evaluation of the AI-enabled system. Paper B aims to identify task and technology characteristics that determine the fit between an individual's tasks and an AI-enabled system, as this is commonly believed to be the main driver of system utilization and individual performance. Based on a qualitative research approach in the form of expert interviews, AI-specific factors for task and technology characteristics, as well as the task-technology fit, are developed. The resulting theoretical model enables empirical research to investigate the relationship between task-technology fit and individual performance and can also be applied by practitioners to evaluate use cases of AI-enabled system deployment. While the first part of this thesis discusses individual-level impacts of increasing HAI, the second part is concerned with organizational-level impacts. 
Papers C and D address how the increasing use of AI-enabled systems within organizations affects organizational justice, i.e., the fairness of decision-making processes, and organizational learning, i.e., the accumulation and dissemination of knowledge. Paper C addresses the issue of organizational justice, as AI-enabled systems are increasingly supporting decision-making tasks that humans previously conducted on their own. Specifically, the study examines the effects of deploying an AI-enabled system in the candidate selection phase of the recruiting process. Through an online lab experiment with recruiters from multinational companies, it is shown that the introduction of so-called CV recommender systems, i.e., systems that identify suitable candidates for a given job, positively influences the procedural justice of the recruiting process. More specifically, the objectivity and consistency of the candidate selection process are strengthened, which constitute two essential components of procedural justice. Paper D examines how the increasing use of AI-enabled systems influences organizational learning processes. The study derives propositions from conducting a series of agent-based simulations. It is found that AI-enabled systems can take over explorative tasks, which enables organizations to counter the longstanding issue of learning myopia, i.e., the human tendency to favor exploitation over exploration. Moreover, it is shown that the ongoing reconfiguration of deployed AI-enabled systems represents an essential activity for organizations aiming to leverage their full potential. Finally, the results suggest that knowledge created by AI-enabled systems can be particularly beneficial for organizations in turbulent environments.
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:136450&r=cmp
  10. By: Bernardo Freitas Paulo da Costa; Silvana M. Pesenti; Rodrigo S. Targino
    Abstract: Risk budgeting is a portfolio strategy where each asset contributes a prespecified amount to the aggregate risk of the portfolio. In this work, we propose an efficient numerical framework that uses only simulations of returns for estimating risk budgeting portfolios. Besides a general cutting-planes algorithm for determining the weights of risk budgeting portfolios for arbitrary coherent distortion risk measures, we provide a specialised version for the Expected Shortfall, as well as a tailored Stochastic Gradient Descent (SGD) algorithm, also for the Expected Shortfall. We compare our algorithms to standard convex optimisation solvers, and illustrate different risk budgeting portfolios, constructed using a specially designed Julia package, on real financial data, comparing them to classical portfolio strategies.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.01196&r=cmp
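    Code note: Risk budgeting portfolios from simulated returns can be sketched with the standard log-barrier reformulation, with Expected Shortfall computed via its Rockafellar-Uryasev representation. This mirrors the problem class the paper studies, not its cutting-plane or SGD algorithms; all inputs are toy assumptions:
      # Minimize  ES_alpha(-R w) - sum_i b_i log(w_i)  over w > 0 (and RU level t),
      # then rescale w to sum to one: the resulting risk contributions are
      # proportional to the budgets b for positively homogeneous risk measures.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      R = rng.multivariate_normal([0.0005] * 3,
                                  [[4e-4, 1e-4, 0], [1e-4, 9e-4, 2e-4], [0, 2e-4, 16e-4]],
                                  size=50_000)          # simulated asset returns
      b = np.array([0.5, 0.3, 0.2])                     # target risk budgets
      alpha = 0.95

      def objective(x):
          w, t = x[:3], x[3]
          losses = -(R @ w)
          es = t + np.mean(np.maximum(losses - t, 0)) / (1 - alpha)
          return es - b @ np.log(w)

      res = minimize(objective, x0=np.array([1.0, 1.0, 1.0, 0.0]),
                     bounds=[(1e-6, None)] * 3 + [(None, None)], method="L-BFGS-B")
      w = res.x[:3] / res.x[:3].sum()                   # risk budgeting weights
      print("weights:", np.round(w, 3))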
  11. By: Juan Mateos-Garcia; George Richardson
    Abstract: In previous research, we used web data and machine learning methods to assess the limitations of the Standard Industrial Classification (SIC) that measures the industrial structure of the UK, and developed a prototype taxonomy, based on a bottom-up analysis of business website descriptions, that could complement the SIC taxonomy and address some of its limitations. Here, we refine and improve that prototype taxonomy by doubling the number of SIC4 codes it covers, implementing a consequential evaluation strategy to select its clustering parameters, and generating measures of confidence about a company's assignment to a text sector based on the distribution of its neighbours and its distance in semantic (text) space. We deploy the resulting taxonomy to segment UK local economies based on their sectoral similarities and differences, and analyse the geography, sectoral composition and comparative performance of the resulting segments on a variety of secondary indicators recently compiled to inform the UK Government's Levelling Up agenda. This analysis reveals significant links between the industrial composition of a local economy based on our taxonomy and a variety of social and economic outcomes, suggesting that policymakers should pay close attention to the industrial make-up of economies across the UK as they design and implement levelling-up strategies to reduce disparities between them.
    Keywords: Industrial taxonomy, web data, machine learning
    JEL: C80 L60 O25 O3
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:nsr:escoed:escoe-dp-2022-29&r=cmp
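    Code note: A toy sketch of the bottom-up taxonomy idea: embed business descriptions, cluster them into "text sectors", and score each company's assignment confidence from distances in the embedding space. TF-IDF and k-means stand in for the project's actual embedding and clustering choices:
      # Cluster company descriptions and derive a crude assignment confidence
      # from how much closer the assigned centroid is than the runner-up.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      descriptions = ["software consultancy cloud data platforms",
                      "organic bakery sourdough bread cakes",
                      "machine learning analytics for retailers",
                      "artisan bakery pastries coffee"]
      X = TfidfVectorizer().fit_transform(descriptions)

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      dists = km.transform(X)                        # distance to each centroid
      closest = dists.min(axis=1)
      confidence = 1 - closest / np.sort(dists, axis=1)[:, 1]
      for d, c, conf in zip(descriptions, km.labels_, confidence):
          print(f"sector {c}  confidence {conf:.2f}  {d[:40]}")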
  12. By: Lanier, Joshua; Large, Jeremy; Quah, John
    Abstract: We present a discrete choice, random utility model and a new estimation technique for analyzing consumer demand for large numbers of products. We allow the consumer to purchase multiple units of any product and to purchase multiple products at once (think of a consumer selecting a bundle of goods in a supermarket). In our model each product has an associated unobservable vector of attributes from which the consumer derives utility. Our model allows for heterogeneous utility functions across consumers, complex patterns of substitution and complementarity across products, and nonlinear price effects. The dimension of the attribute space is, by assumption, much smaller than the number of products, which effectively reduces the size of the consumption space and simplifies estimation. Nonetheless, because the number of bundles available is massive, a new estimation technique, which is based on the practice of negative sampling in machine learning, is needed to sidestep an intractable likelihood function. We prove consistency of our estimator, validate the consistency result through simulation exercises, and estimate our model using supermarket scanner data.
    Keywords: discrete choice, demand estimation, negative sampling, machine learning, scanner data
    JEL: C13 C34 D12 L20 L66
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:amz:wpaper:2023-01&r=cmp
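    Code note: The negative-sampling idea can be sketched as follows: rather than normalizing over every possible bundle, contrast each observed bundle against a few randomly drawn unchosen bundles and fit by logistic loss. The linear-in-attributes utility and data are toy assumptions (a real application uses many observed consumer baskets):
      # Negative-sampling estimation sketch for bundle choice.
      import numpy as np

      rng = np.random.default_rng(0)
      n_products, dim = 1000, 8
      A = rng.normal(size=(n_products, dim))     # latent product attributes (toy)
      theta_true = rng.normal(size=dim)

      def bundle_x(b):                           # attribute sum of a bundle
          return A[b].sum(axis=0)

      # "Observed" bundles: high-utility draws under the true preferences.
      cands = [rng.choice(n_products, 5, replace=False) for _ in range(200)]
      observed = sorted(cands, key=lambda b: bundle_x(b) @ theta_true)[-20:]

      sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
      theta = np.zeros(dim)
      for _ in range(200):                       # gradient steps on the loss
          for b in observed:
              xb = bundle_x(b)
              grad = (1 - sigmoid(xb @ theta)) * xb        # pull up observed bundle
              for _k in range(5):                          # push down 5 negatives
                  nb = rng.choice(n_products, 5, replace=False)
                  xn = bundle_x(nb)
                  grad -= sigmoid(xn @ theta) * xn
              theta += 0.005 * grad

      print(f"correlation with true preferences: {np.corrcoef(theta, theta_true)[0, 1]:.2f}")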
  13. By: Laura Gabrielli; Aurora Ruggeri; Massimiliano Scarpa
    Abstract: This paper, part of a more comprehensive research line, discusses how the Covid-19 pandemic has affected demand in the real estate market in Padua, a medium-sized city that represents the typical Italian town. The authors have been investigating the real estate market in Padua for several years, collecting information on buildings for sale from selling websites; this data collection has been accomplished with the help of an automated web crawler developed in Python. As a result, the authors are now able to compare the real estate market in Padua at different times. In particular, two databases are put into detailed comparison here: database A dates back to 2019 (second semester), capturing a pre-Covid-19 scenario, while database B is dated 2021 (second semester), representing the current situation. First, two forecasting algorithms to predict the market value of properties as a function of their characteristics are developed using an Artificial Neural Networks (ANNs) approach. ANNs are a multi-parametric statistical technique employed to forecast a property's market value: the input neurons of the network, i.e. the independent variables, are the buildings' descriptive features and characteristics, while the output neuron is the market value, the dependent variable. ANN(A) is developed on database A, and ANN(B) on database B. The comparison of the two forecasting functions represents the differences in demand two years after the first Covid-19 alerts. Since ANNs are a multi-parametric procedure, this methodology isolates each attribute's singular influence on the forecasted price. It is therefore possible to understand how the preferences of the demand changed during the pandemic. Some characteristics are now more appreciated than before, such as external spaces like a terrace or a private garden. Systems and technologies also seem more appealing than before the pandemic, for example the presence of optical fibre or mechanical ventilation. Moreover, larger building typologies are more appreciated now, like villas, detached and semi-detached houses, or farmhouses. Conversely, other characteristics are less appreciated: the location, for instance, is less influential than before in price formation. These changes in preferences can be attributed to the new lifestyle produced by the lockdown experience and the smart-working schedules the pandemic has led to.
    Keywords: Artificial Neural Network; COVID-19; Real Estate Valuation; Structural characteristics
    JEL: R3
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:2022_178&r=cmp
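    Code note: A minimal sketch of the hedonic ANN setup described in the abstract: property characteristics in, market value out, with one attribute's influence isolated by varying it while the others are held fixed. The data, architecture and units are illustrative assumptions, not the Padua dataset:
      # Feed-forward network mapping property characteristics to market value.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # features: floor area (m2), terrace (0/1), garden (0/1), distance to centre (km)
      X = np.column_stack([rng.uniform(40, 200, 3000),
                           rng.integers(0, 2, 3000),
                           rng.integers(0, 2, 3000),
                           rng.uniform(0, 10, 3000)])
      price_k = 2.0 * X[:, 0] + 15 * X[:, 1] + 20 * X[:, 2] - 4 * X[:, 3] \
                + rng.normal(0, 10, 3000)              # value in EUR thousands

      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                       random_state=0)).fit(X, price_k)
      # Isolate one attribute's influence: same property with/without a terrace.
      probe = np.array([[100, 0, 0, 3.0], [100, 1, 0, 3.0]])
      print("implied terrace premium (EUR k):",
            ann.predict(probe)[1] - ann.predict(probe)[0])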
  14. By: Martin Regnaud; Julie Le Gallo; Marie Breuille
    Abstract: In 2020, MeilleursAgents estimated that a French household needed, on average, 2 years and 10 months to amortize the cost of buying versus renting in France; in Paris, the same household would have to wait 6 years and 11 months. These figures are of utmost importance for households deciding whether to buy or rent their main residence. From an operational perspective, estimating this time requires precise knowledge of rent-to-price ratios. The main objective of this contribution is to estimate those ratios across the whole French territory using observations of rent and transaction prices for the same housing between 2010 and 2021; once those ratios are estimated, we highlight the factors that determine them using machine learning methods. Using coarsened exact matching, we estimate rent-to-price ratios on the whole territory. We then compare two approaches to identify the determinants of these ratios. The first explains the ratios with a linear regression model that predicts them from housing characteristics and geographical amenities. The second uses a gradient boosting decision tree model to predict the ratios; we can then explain the role of each feature thanks to the explainability methods associated with tree models: feature importance and Shapley values. For this study, we use rental listings from the MeilleursAgents platform that are geolocated at the address level. This use of listings is inspired by Chapelle & Eyméoud (2018) [1], who show that web-scraped listings are unbiased compared with survey data such as those from the “Observatoires Locaux des Loyers” (OLL) in France; moreover, those surveys are limited to certain dense areas, whereas our study aims to compare mechanisms across the whole French territory. The listings are matched with the national DV3F French database, which provides geolocation at the parcel level. Matching these two sources between 2010 and 2021 yields 85,000 matched ratio observations. The (rent, price) couples are used to estimate rent-to-price ratios and to highlight differences in the influence of each factor depending on the territory and also, thanks to our precise geolocation, within urban areas. Our study makes a double contribution. First, from a methodological point of view, using a gradient boosting model to estimate and explain rent-to-price ratios has not been done before; the main advantage of this method over classic ones is better handling of interactions and effect heterogeneity. The second contribution rests on the precise geolocation of our observations. These ratios are most often studied as ratios of average rents to average prices because precisely geolocated data are scarce, yet Hill & Syed (2016) [2] showed that such an approximation can lead to an error of up to 20% when estimating the ratio; they therefore advise housing-level matching to control for feature heterogeneity between rented and sold housing. Our study is thus the first in France to allow exact matching on this topic outside the dense areas covered by the “Observatoires Locaux des Loyers”. We highlight the strong heterogeneity of rent-to-price ratios within dense urban areas but also at a larger scale; to our knowledge, this study is the first to bring this phenomenon out at the national scale.
    [1] Chapelle G., Eyméoud J.-B., “Can Big Data Increase Our Knowledge of Local Rental Markets? Estimating the Cost of Density with Rents”, SciencesPo Mimeo, 2018.
    [2] Hill R.J., Syed I.A., “Hedonic price–rent ratios, user cost, and departures from equilibrium in the housing market”, Regional Science and Urban Economics, pp. 60–72, 2016.
    Keywords: Machine Learning; Rent profitability; Rent to price ratios; Web platform data
    JEL: R3
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:2022_107&r=cmp
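    Code note: The gradient-boosting approach can be sketched with a generic regressor and its feature importances (Shapley-value attribution, which the study also uses, is omitted here). Data are synthetic stand-ins for the matched rent-to-price observations:
      # Fit a gradient-boosting model to rent-to-price ratios and read off
      # which features drive the predictions.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      n = 5000
      X = np.column_stack([rng.uniform(20, 150, n),   # surface (m2)
                           rng.uniform(0, 1, n),      # amenity score
                           rng.uniform(0, 30, n)])    # distance to centre (km)
      ratio = 0.06 - 0.0002 * X[:, 0] + 0.01 * X[:, 1] \
              + 0.0005 * X[:, 2] + rng.normal(0, 0.003, n)

      gbt = GradientBoostingRegressor(random_state=0).fit(X, ratio)
      for name, imp in zip(["surface", "amenities", "distance"],
                           gbt.feature_importances_):
          print(f"{name}: {imp:.2f}")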
  15. By: Choi, Mincheol (Korea Institute for Industrial Economics and Trade); Song, Danbee (Korea Institute for Industrial Economics and Trade); Cho, Jaehan (Korea Institute for Industrial Economics and Trade)
    Abstract: Despite interest in artificial intelligence (AI) as a transformative driver of economic growth, few Korean companies use AI. Companies currently using AI are increasing their AI investments and expenditures. Companies employ AI in a wide range of functions and fields, including automated operations, predictive analytics, product and service development, and sales and inventory management. Challenges to the application and use of AI exist in multiple, closely connected domains, including human capital scarcity, inadequate funding, the difficulty of acquiring technology, and both the internal and external business environments confronting companies. This work analyzes the barriers to increased corporate adoption of AI technologies and proposes a set of policy suggestions to improve AI uptake at Korean companies.
    Keywords: artificial intelligence; AI; AI adoption; productivity; firm productivity; corporate productivity; STEM; skilled labor; information technology; IT; information and communications technology; ICT; R&D; research and development; innovation; innovation policy; AI policy; regulation; personal data
    JEL: E24 H52 I28 J24 O32 O38
    Date: 2021–04–04
    URL: http://d.repec.org/n?u=RePEc:ris:kietia:2021_004&r=cmp
  16. By: Lee, Heungmin
    Abstract: In recent years, pretrained language models (PLMs) have emerged as a powerful tool for natural language processing (NLP) tasks. This paper examines the potential of these models in the financial sector and the challenges they face in this domain. We also discuss the interpretability of these models and the ethical considerations associated with their deployment in finance. Our analysis shows that pretrained language models have the potential to revolutionize the way financial data is analyzed and processed. However, it is important to address the challenges and ethical considerations associated with their deployment to ensure that they are used in a responsible and accountable manner. Future research will focus on developing models that can handle the volatility of financial data, mitigate bias in the training data, and provide interpretable predictions. Overall, we believe that the future of AI in finance will be shaped by the continued development and deployment of pretrained language models.
    Date: 2023–02–01
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:jn7qw&r=cmp
  17. By: Métivier, Jeanne; Bacchetta, Marc; Bekkers, Eddy; Koopman, Robert Bernard
    Abstract: In this study, we investigate three trade policy scenarios: i) the revival of multilateralism, ii) plurilateral cooperation, and iii) geopolitical rivalry. In the first scenario, both tariffs and NTMs are reduced on a multilateral basis. In the second scenario, varying groups of countries cooperate on specific topics, such as e-commerce and services. In the last scenario, two main blocks emerge, a Western block and an Eastern block; international cooperation breaks down between the blocks, leading to an increase in tariffs and NTMs, with each block setting up its own set of rules. Our findings are based on simulations with the WTO Global Trade Model, which has a specific novel feature: the diffusion of ideas between countries as a by-product of trade. The simulations indicate that: (i) there is a lot at stake for global trade cooperation, with global real GDP increasing by 3.2% relative to the baseline under multilateral cooperation and decreasing by 5.4% under geopolitical rivalry; (ii) LDCs would gain the most from multilateralism (real GDP increases by 4.8%) due to technology spillover effects; (iii) under both "open" and "exclusive" plurilateral agreements on services, most regions are projected to gain, with larger gains to participants if the initiative is "open" than if it is "exclusive"; (iv) intermediate linkages in services sectors will be reinforced in all scenarios except geopolitical rivalry; (v) geopolitical rivalry leads to a 21-percentage-point decrease in the share of exports between the Western and Eastern blocks, from 46% to 25%; and (vi) the WTO has an important role to play in preserving a free trade environment for developing countries and LDCs. The simulations show that in a decoupling world, it would be essential for LDCs to continue trading with both the Eastern and Western blocks.
    Keywords: Trade policy simulations, CGE modeling
    JEL: F13 F17
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:wtowps:ersd202302&r=cmp
  18. By: Morris, Tim P (MRC Clinical Trials Unit at UCL); White, Ian R; Pham, Tra My; Quartagno, Matteo
    Abstract: Simulation studies are a powerful tool in epidemiology and biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and on how to design and conduct the study so that its results are easier to check. Simulation studies should be designed to include some settings where the answers are already known. They should be coded sequentially, with data-generating mechanisms checked before simulated data are analysed. Results should be explored carefully, with scatterplots of standard error estimates against point estimates being a powerful tool. Failed estimation and outlying estimates should be identified, and avoided by changing the data-generating mechanisms or by coding realistic hybrid analysis procedures. Finally, surprising results should be investigated, including by considering whether sources of variation are correctly included. Following our advice may help to prevent errors and to improve the quality of published simulation studies.
    Date: 2023–02–03
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:cbr72&r=cmp
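    Code note: The paper's recommended diagnostic — scatterplotting standard error estimates against point estimates across simulation repetitions — takes only a few lines; below, a toy simulation of a sample mean:
      # Run many simulated datasets, then plot SE estimates against point
      # estimates; outliers and failed fits show up immediately.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      estimates, ses = [], []
      for _ in range(1000):                        # 1000 simulated datasets
          y = rng.normal(loc=0.5, scale=1.0, size=50)
          estimates.append(y.mean())               # point estimate of the mean
          ses.append(y.std(ddof=1) / np.sqrt(50))  # its standard error

      plt.scatter(estimates, ses, s=5)
      plt.xlabel("point estimate")
      plt.ylabel("standard error estimate")
      plt.title("Simulation check: SE vs point estimate")
      plt.show()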
  19. By: Lee, Heungmin
    Abstract: In recent years, pretrained language models (PLMs) have emerged as a powerful tool for natural language processing (NLP) tasks. In this paper, we examine the potential of these models in the finance sector and the challenges they face in this domain. We also discuss the interpretability of these models and the ethical considerations associated with their deployment in finance. Our analysis shows that pretrained language models have the potential to revolutionize the way financial data is analyzed and processed. However, it is important to address the challenges and ethical considerations associated with their deployment to ensure that they are used in a responsible and accountable manner. Future research will focus on developing models that can handle the volatility of financial data, mitigate bias in the training data, and provide interpretable predictions. Overall, we believe that the future of AI in finance will be shaped by the continued development and deployment of pretrained language models.
    Date: 2023–02–01
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:5s3nw&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.