on Computational Economics
Issue of 2023‒02‒20
29 papers chosen by
By: | Al-Haschimi, Alexander; Apostolou, Apostolos; Azqueta-Gavaldon, Andres; Ricci, Martino |
Abstract: | We develop a measure of overall financial risk in China by applying machine learning techniques to textual data. A pre-defined set of relevant newspaper articles is first selected using a specific constellation of risk-related keywords. Then, we employ topic modelling based on an unsupervised machine learning algorithm to decompose financial risk into its thematic drivers. The resulting aggregated indicator can identify major episodes of overall heightened financial risks in China, which cannot be consistently captured using financial data. Finally, a structural VAR framework is employed to show that shocks to the financial risk measure have a significant impact on macroeconomic and financial variables in China and abroad. JEL Classification: C32, C65, E32, F44, G15 |
Keywords: | China, financial risk, LDA, machine learning, textual analysis, topic modelling |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20232767&r=cmp |
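The pipeline the abstract describes (keyword-filtered articles, LDA decomposition into thematic drivers, aggregation into an indicator) can be sketched with scikit-learn; the toy corpus and the two-topic setting below are invented for illustration, not the authors' actual setup.

```python
# Sketch of an LDA-based risk decomposition pipeline (illustrative only;
# the corpus, keyword filter, and topic count are invented, not the
# authors' configuration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for risk-related newspaper articles selected by keywords.
articles = [
    "bank default risk credit crisis liquidity",
    "stock market crash volatility panic selling",
    "property bubble housing debt default",
    "currency devaluation capital outflow reserves",
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(articles)

# Decompose the filtered corpus into thematic risk drivers.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # each row is a topic distribution

# A crude aggregate indicator: average topic weight per theme.
aggregate = doc_topics.mean(axis=0)
print(aggregate.shape)
```

In the paper the per-topic weights are tracked over time to form the aggregate risk series; here they are simply averaged over the toy corpus.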
By: | Dangxing Chen; Luyao Zhang |
Abstract: | Algorithm fairness in the application of artificial intelligence (AI) is essential for a better society. As the foundational axiom of social mechanisms, fairness consists of multiple facets. Although the machine learning (ML) community has focused on intersectionality as a matter of statistical parity, especially in discrimination issues, an emerging body of literature addresses another facet: monotonicity. Based on domain expertise, monotonicity plays a vital role in numerous fairness-related areas, where violations could misguide human decisions and lead to disastrous consequences. In this paper, we first systematically evaluate the significance of applying monotonic neural additive models (MNAMs), a fairness-aware ML technique that enforces both individual and pairwise monotonicity principles, for the fairness of AI ethics and society. Through a hybrid method of theoretical reasoning, simulation, and extensive empirical analysis, we find that considering monotonicity axioms is essential in all areas of fairness, including criminology, education, health care, and finance. Our research contributes to interdisciplinary work at the interface of AI ethics, explainable AI (XAI), and human-computer interaction (HCI). By demonstrating the catastrophic consequences that can arise when monotonicity is not met, we underscore the significance of monotonicity requirements in AI applications. Furthermore, we show that MNAMs are an effective fairness-aware ML approach, imposing monotonicity constraints that integrate human intelligence.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.07060&r=cmp |
By: | Achim Ahrens; Christian B. Hansen; Mark E. Schaffer; Thomas Wiemann |
Abstract: | We introduce the package ddml for Double/Debiased Machine Learning (DDML) in Stata. Estimators of causal parameters for five different econometric models are supported, allowing for flexible estimation of causal effects of endogenous variables in settings with unknown functional forms and/or many exogenous variables. ddml is compatible with many existing supervised machine learning programs in Stata. We recommend using DDML in combination with stacking estimation which combines multiple machine learners into a final predictor. We provide Monte Carlo evidence to support our recommendation. |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.09397&r=cmp |
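ddml itself is a Stata package, but the cross-fitting logic at the heart of DDML can be illustrated in Python; the partially linear data-generating process below (true effect 0.5, quadratic confounding) is made up for the sketch, and random forests stand in for the stacked learners the authors recommend.

```python
# Conceptual sketch of DDML cross-fitting for the partially linear model
# Y = theta*D + g(X) + e (not the Stata package; DGP is invented).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))
D = X[:, 0] + rng.normal(size=n)                  # treatment depends on X
Y = 0.5 * D + X[:, 0] ** 2 + rng.normal(size=n)   # true causal effect: 0.5

res_Y = np.empty(n)
res_D = np.empty(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Learn the nuisance functions on one fold, residualize on the other.
    mY = RandomForestRegressor(random_state=0).fit(X[train], Y[train])
    mD = RandomForestRegressor(random_state=0).fit(X[train], D[train])
    res_Y[test] = Y[test] - mY.predict(X[test])
    res_D[test] = D[test] - mD.predict(X[test])

# Final stage: OLS of outcome residuals on treatment residuals.
theta = (res_D @ res_Y) / (res_D @ res_D)
print(round(theta, 2))
```

Swapping out-of-fold predictions for in-sample fits is what removes the own-observation overfitting bias; the estimate should land near the true 0.5.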
By: | Yongming Li; Ariel Neufeld |
Abstract: | In this paper we provide a quantum Monte Carlo algorithm to solve high-dimensional Black-Scholes PDEs with correlation for high-dimensional option pricing. The payoff function of the option is of general form and is only required to be continuous and piece-wise affine (CPWA), which covers most of the relevant payoff functions used in finance. We provide a rigorous error analysis and complexity analysis of our algorithm. In particular, we prove that the computational complexity of our algorithm is bounded polynomially in the space dimension $d$ of the PDE and the reciprocal of the prescribed accuracy $\varepsilon$ and so demonstrate that our quantum Monte Carlo algorithm does not suffer from the curse of dimensionality. |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.09241&r=cmp |
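For intuition about the pricing problem the quantum algorithm targets, here is a classical Monte Carlo baseline for a correlated d-dimensional Black-Scholes model with a CPWA-style basket-call payoff; this is emphatically not the paper's quantum algorithm, and all parameters are invented.

```python
# Classical Monte Carlo baseline for a d-dimensional Black-Scholes basket
# call (the paper's contribution is a *quantum* MC algorithm, which this
# sketch does not implement; parameters are illustrative).
import numpy as np

rng = np.random.default_rng(42)
d, n_paths = 10, 100_000
S0, K, r, sigma, T = 100.0, 100.0, 0.01, 0.2, 1.0

# Correlated terminal log-returns via an equicorrelation structure.
rho = 0.3
cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)
L = np.linalg.cholesky(cov)
Z = rng.standard_normal((n_paths, d)) @ L.T

ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
# CPWA payoff example: call on the average of the d terminal prices.
payoff = np.maximum(ST.mean(axis=1) - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
print(round(price, 2))
```

The classical estimator's error decays like the inverse square root of `n_paths`; the quantum algorithm's advertised advantage is a quadratically better dependence on the accuracy, while remaining polynomial in d.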
By: | Elisa Luciano; Matteo Cattaneo; Ron Kenett |
Abstract: | The rapid and dynamic pace of Artificial Intelligence (AI) and Machine Learning (ML) is revolutionizing the insurance sector. AI offers significant, very welcome advantages to insurance companies and is fundamental to their customer-centricity strategy. It also poses challenges in the design and implementation phases. Among these, we study Adversarial Attacks, which consist of the creation of modified input data to deceive an AI system and produce false outputs. We provide examples of attacks on insurance AI applications, categorize them, and discuss defence methods and precautionary systems, considering that they can involve few-shot and zero-shot multilabelling. A related topic of growing interest is the validation and verification of systems incorporating AI and ML components. These topics are discussed in the various sections of this paper.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.07520&r=cmp |
By: | María Victoria Landaberry (Banco Central del Uruguay); Kenji Nakasone (UTEC - Universidad Tecnológica); Johann Pérez (UTEC - Universidad Tecnológica); María del Pilar Posada (Banco Central del Uruguay) |
Abstract: | Credit rating agencies such as Moody's, Standard and Poor's and Fitch rate sovereign assets based on a quantitative analysis of economic, social and political factors combined with a qualitative expert-judgment analysis. Depending on the rating obtained, countries can be classified as either investment grade or speculative grade. Holding investment grade matters insofar as it lowers the cost of financing and expands the pool of potential investors in an economy. In this paper we set out to predict whether a country's sovereign debt will be rated investment grade, using a set of macroeconomic variables together with variables extracted from Fitch reports between 2000 and 2018 through natural language processing techniques. We use a logistic regression and a set of alternative machine learning algorithms. According to our results, the uncertainty index built from the Fitch reports is statistically significant for predicting investment grade. Comparing the different machine learning algorithms, random forest has the best out-of-sample predictive power when the dependent variable refers to the same year as the explanatory variables, while k-nearest neighbors performs best, in terms of f1-score and recall, when the independent variables refer to the previous year.
Keywords: | Sovereign risk, rating agencies, macroeconomic variables, text analysis, natural language processing, machine learning
JEL: | E22 E66 G24 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:bku:doctra:2022005&r=cmp |
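The paper's model-comparison step (logistic regression against alternative learners such as random forest and k-nearest neighbors, scored by f1) can be sketched on synthetic data; the actual study uses macroeconomic variables and text features from Fitch reports, which are not reproduced here.

```python
# Sketch of comparing a logit baseline with ML classifiers on held-out
# data (synthetic features stand in for the macro and text variables).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}
# Out-of-sample f1 for each candidate, as in the paper's horse race.
scores = {name: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)
```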
By: | Luca Eduardo Fierro; Federico Giri; Alberto Russo |
Abstract: | We study how income inequality affects monetary policy through the inequality-household debt channel. We design a minimal macro Agent-Based model that replicates several stylized facts, including two novel ones: a falling aggregate saving rate and decreasing bankruptcies during the household debt boom phase. When inequality meets financial liberalization, a leaning-against-the-wind strategy can preserve financial stability at the cost of high unemployment, whereas an accommodative strategy, i.e. lowering the policy rate, can dampen the fall in aggregate demand at the cost of larger leverage. We conclude that inequality may constrain the central bank, even when it is not explicitly targeted.
Keywords: | Inequality; Financial Fragility; Monetary Policy; Agent-Based Model. |
Date: | 2023–01–25 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/05&r=cmp |
By: | Bilgin, Rumeysa (Istanbul Sabahattin Zaim University) |
Abstract: | The previous literature on capital structure has produced plenty of potential determinants of leverage over the last decades. However, these studies' models usually cover only a restricted number of explanatory variables, and many suffer from omitted variable bias. This study contributes to the literature by advocating a sound approach to selecting the control variables for empirical capital structure studies. We applied two linear LASSO inference approaches and the double machine learning (DML) framework with LASSO, random forest, decision tree, and gradient boosting learners to evaluate the marginal contributions of three proposed determinants: cash holdings, non-debt tax shield, and current ratio. While some studies did not use these variables in their models, others obtained contradictory results. Our findings reveal that cash holdings, current ratio, and non-debt tax shield are crucial factors that substantially affect the leverage decisions of firms and should be controlled for in empirical capital structure studies.
Date: | 2023–01–23 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:e26qf&r=cmp |
By: | Ola, Aranuwa Felix |
Abstract: | Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals and humans.
Date: | 2023–01–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:8f59d&r=cmp |
By: | Pinski, Marc; Benlian, Alexander |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:dar:wpaper:135990&r=cmp |
By: | Tom, Daniel M. Ph.D. |
Abstract: | A recent online search for model performance benchmarks reveals evidence of disparate treatment on a prohibited basis in the ML models appearing in the search results. Using our logistic regression with AI approach, we are able to build a superior credit model without any prohibited or other demographic characteristics (gender, age, marital status, level of education) from the default of credit card clients dataset in the UCI Machine Learning Repository. We compare our AI flashlight beam search result to an exhaustive-search approach over the space of all possible models, and the AI search finds the highest-separation/highest-likelihood models efficiently after evaluating a small number of model candidates.
Date: | 2023–01–17 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:cfyzv&r=cmp |
By: | Ola, Aranuwa Felix |
Abstract: | Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals and humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs. The Oxford English Dictionary of Oxford University Press defines artificial intelligence as: |
Date: | 2023–01–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:37m9k&r=cmp |
By: | Anastasios Petropoulos; Vassilis Siakoulis; Konstantinos P. Panousis; Loukas Papadoulas; Sotirios Chatzis |
Abstract: | In this study, we propose a novel approach to nowcasting and forecasting the macroeconomic status of a country using deep learning techniques. We focus particularly on the US economy, but the methodology can also be applied to other economies. The US economy suffered a severe recession from 2008 to 2010 that conventional econometric modelling attempts largely fail to capture. Deep learning has the advantage that it models all macro variables simultaneously, taking into account all interdependencies among them and detecting non-linear patterns which cannot be easily addressed under a univariate modelling framework. Our empirical results indicate that the deep learning methods have a superior out-of-sample performance when compared to traditional econometric techniques such as Bayesian Model Averaging (BMA). Therefore, our results provide a concise view of a more robust method for assessing sovereign risk, which is a crucial component in investment and monetary decisions.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.09856&r=cmp |
By: | Christopher Wimmer; Navid Rekabsaz |
Abstract: | Predicting the future direction of stock markets using historical data has been a fundamental component of financial forecasting. This historical data contains the information of a stock in each specific time span, such as the opening, closing, lowest, and highest price. Leveraging this data, the future direction of the market is commonly predicted using time-series models such as Long Short-Term Memory networks. This work proposes a fundamentally new approach to modeling and predicting market movements: utilizing image and byte-based number representations of the stock data processed with the recently introduced Vision-Language models. We conduct a large set of experiments on the hourly stock data of the German share index, evaluating various architectures on stock price prediction and assessing the results with multiple metrics to accurately depict the actual performance of each approach. Our evaluation shows that our novel approach based on representing stock data as text (bytes) and images significantly outperforms strong deep learning-based baselines.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.10166&r=cmp |
By: | Jongsub Lee; Hayong Yun |
Abstract: | Using deep learning techniques, we introduce a novel measure for production process heterogeneity across industries. For each pair of industries during 1990-2021, we estimate the functional distance between two industries' production processes via deep neural network. Our estimates uncover the underlying factors and weights reflected in the multi-stage production decision tree in each industry. We find that the greater the functional distance between two industries' production processes, the lower are the number of M&As, deal completion rates, announcement returns, and post-M&A survival likelihood. Our results highlight the importance of structural heterogeneity in production technology to firms' business integration decisions. |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.08847&r=cmp |
By: | Nabeel, Rao |
Abstract: | Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals and humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs. The Oxford English Dictionary of Oxford University Press defines artificial intelligence as: |
Date: | 2023–01–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:sz7mj&r=cmp |
By: | Nabeel, Rao |
Abstract: | Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals and humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs. The Oxford English Dictionary of Oxford University Press defines artificial intelligence as: |
Date: | 2023–01–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:qhvak&r=cmp |
By: | Bruno Bouchard (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); Adil Reghai (Natixis Asset Management); Benjamin Virrion (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique, Natixis Asset Management) |
Abstract: | We consider a multi-step algorithm for the computation of the historical expected shortfall as defined by the Basel Minimum Capital Requirements for Market Risk. At each step of the algorithm, we use Monte Carlo simulations to reduce the number of historical scenarios that potentially belong to the set of worst scenarios. The number of simulations increases as the number of candidate scenarios is reduced and the distance between them diminishes. For the most naive scheme, we show that the L^p-error of the estimator of the Expected Shortfall is bounded by a linear combination of the probabilities of inversion of favorable and unfavorable scenarios at each step, and of the last-step Monte Carlo error associated to each scenario. By using concentration inequalities, we then show that, for sub-gamma pricing errors, the probabilities of inversion converge at an exponential rate in the number of simulated paths. We then propose an adaptive version in which the algorithm improves step by step its knowledge of the unknown parameters of interest: the mean and variance of the Monte Carlo estimators of the different scenarios. Both schemes can be optimized by using dynamic programming algorithms that can be solved off-line. To our knowledge, these are the first non-asymptotic bounds for such estimators. Our hypotheses are weak enough to allow for the use of estimators for the different scenarios and steps based on the same random variables, which, in practice, considerably reduces the computational effort. First numerical tests are performed.
Keywords: | Expected Shortfall, ranking and selection, sequential design, Bayesian filter |
Date: | 2021–03–26 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02619589&r=cmp |
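The quantity being estimated is the historical expected shortfall itself; a minimal reference computation over a fixed set of scenarios looks as follows (this is the ES definition, not the paper's multi-step Monte Carlo scenario-selection algorithm, and the P&L numbers are synthetic).

```python
# Minimal historical expected shortfall over 253 scenarios (standard
# definition; scenario P&Ls are synthetic, not Basel-calibrated data).
import numpy as np

rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 1.0, 253)      # one P&L per historical scenario

alpha = 0.025                         # Basel FRTB tail level (97.5% ES)
k = int(np.ceil(alpha * len(pnl)))    # number of worst scenarios in the tail
worst = np.sort(pnl)[:k]
es = -worst.mean()                    # loss convention: ES is positive
print(round(es, 2))
```

The paper's contribution is to identify the `k` worst scenarios cheaply when each scenario's P&L must itself be estimated by Monte Carlo repricing rather than read off directly.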
By: | Gebreel, Alia Youssef |
Abstract: | Artificial Intelligence in Knowledge Management for Construction-Specific Documents |
Date: | 2023–01–10 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:zc8mp&r=cmp |
By: | Ramis Khabibullin (Independent Researcher); Sergei Seleznev (Bank of Russia, Russian Federation) |
Abstract: | This paper presents a fast algorithm for estimating the hidden states of Bayesian state space models. The algorithm is a variation of amortized simulation-based inference algorithms, where numerous artificial datasets are generated at the first stage, and then a flexible model is trained to predict the variables of interest. In contrast to those proposed earlier, the procedure described in this paper makes it possible to train estimators for hidden states by concentrating only on certain characteristics of the marginal posterior distributions and introducing inductive bias. Illustrations using the examples of a stochastic volatility model, a nonlinear dynamic stochastic general equilibrium model and a seasonal adjustment procedure with breaks in seasonality show that the algorithm has sufficient accuracy for practical use. Moreover, after pretraining, which takes several hours, finding the posterior distribution for any dataset takes from hundredths to tenths of a second.
Keywords: | amortized simulation-based inference, Bayesian state space models, neural networks, seasonal adjustment, stochastic volatility, SV-DSGE. |
JEL: | C11 C15 C32 C45 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:bkr:wpaper:wps104&r=cmp |
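The amortization idea (simulate many parameter-data pairs once, train a single flexible predictor, then reuse it instantly on any new dataset) can be shown in miniature; a gradient-boosting regressor stands in for the paper's neural networks, and the one-parameter model below is a toy, not a state space model from the paper.

```python
# Toy amortized simulation-based inference: pretrain once, then
# estimation for any new dataset is a single predict call (a sklearn
# stand-in for the paper's neural networks; the model here is trivial).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_sims, T = 2000, 50
theta = rng.uniform(0.0, 2.0, n_sims)                 # hidden parameter
data = theta[:, None] + rng.normal(size=(n_sims, T))  # simulated datasets

# Summary statistics of each dataset serve as features.
feats = np.column_stack([data.mean(axis=1), data.std(axis=1)])
model = GradientBoostingRegressor(random_state=0).fit(feats, theta)

# After pretraining, inference on new data is near-instant.
new_data = 1.3 + rng.normal(size=T)
estimate = model.predict([[new_data.mean(), new_data.std()]])[0]
print(round(estimate, 1))
```

The "hundredths to tenths of a second" claim in the abstract corresponds to the final `predict` call: all simulation and training cost is paid up front.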
By: | A. Papanicolaou; H. Fu; P. Krishnamurthy; B. Healy; F. Khorrami |
Abstract: | In this paper, we simulate the execution of a large stock order with real data and a general power law in the Almgren and Chriss model. The example that we consider is the liquidation of a large position executed over the course of a single trading day in a limit order book. Transaction costs are incurred because large orders walk the order book, that is, they consume order-book liquidity beyond the best bid/ask. We model these transaction costs with a power law that is inversely proportional to trading volume. We obtain a policy approximation by training a long short-term memory (LSTM) neural network to minimize transaction costs accumulated when execution is carried out as a sequence of smaller sub-orders. Using historical S&P100 price and volume data, we evaluate our LSTM strategy relative to strategies based on time-weighted average price (TWAP) and volume-weighted average price (VWAP). For execution of a single stock, the input to the LSTM includes the entire cross section of data on all 100 stocks, including prices, volume, TWAPs and VWAPs. By using the entire data cross section, the LSTM should be able to exploit any inter-stock co-dependence in volume and price movements, thereby reducing overall transaction costs. Our tests on the S&P100 data demonstrate that in fact this is so, as our LSTM strategy consistently outperforms TWAP and VWAP-based strategies.
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.09705&r=cmp |
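The TWAP and VWAP benchmarks the LSTM strategy is judged against have standard definitions, sketched below; the intraday prices and volumes are made up.

```python
# TWAP and VWAP benchmark prices (standard definitions; the intraday
# price and volume buckets are invented for illustration).
import numpy as np

price = np.array([10.0, 10.2, 10.1, 9.9, 10.3])
volume = np.array([100, 300, 200, 150, 250])

twap = price.mean()                           # equal weight per time bucket
vwap = (price * volume).sum() / volume.sum()  # weight by traded volume
print(round(twap, 3), round(vwap, 3))
```

An execution schedule that matches these averages (uniform slices for TWAP, volume-proportional slices for VWAP) is the baseline the learned policy must beat.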
By: | Hua, Tan Kian |
Abstract: | A Case Report and Literature Review on Gallbladder Sarcomatoid Carcinoma |
Date: | 2023–01–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:8de57&r=cmp |
By: | Jonathan Proctor; Tamma Carleton; Sandy Sum |
Abstract: | Remotely sensed measurements and other machine learning predictions are increasingly used in place of direct observations in empirical analyses. Errors in such measures may bias parameter estimation, but it remains unclear how large such biases are or how to correct for them. We leverage a new benchmark dataset providing co-located ground truth observations and remotely sensed measurements for multiple variables across the contiguous U.S. to show that the common practice of using remotely sensed measurements without correction leads to biased parameter point estimates and standard errors across a diversity of empirical settings. More than three-quarters of the 95% confidence intervals we estimate using remotely sensed measurements do not contain the true coefficient of interest. These biases result from both classical measurement error and more structured measurement error, which we find is common in machine learning based remotely sensed measurements. We show that multiple imputation, a standard statistical imputation technique so far untested in this setting, effectively reduces bias and improves statistical coverage with only minor reductions in power in both simple linear regression and panel fixed effects frameworks. Our results demonstrate that multiple imputation is a generalizable and easily implementable method for correcting parameter estimates relying on remotely sensed variables. |
JEL: | C18 C45 C80 Q0 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:30861&r=cmp |
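The simplest version of the bias the paper documents is classical attenuation: regressing on a noisily measured proxy shrinks the OLS slope toward zero. The simulation below shows only that mechanism, with an invented data-generating process; the structured errors and the multiple-imputation correction studied in the paper are not reproduced here.

```python
# Attenuation bias from classical measurement error in the regressor:
# the slope on the noisy proxy shrinks by var(x)/(var(x)+var(noise)).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)               # ground-truth variable
y = 2.0 * x + rng.normal(size=n)     # true slope: 2.0
x_noisy = x + rng.normal(size=n)     # "remotely sensed" proxy, unit noise

slope_true = np.polyfit(x, y, 1)[0]
slope_noisy = np.polyfit(x_noisy, y, 1)[0]
# Here the attenuation factor is 1/(1+1) = 0.5, so the slope halves.
print(round(slope_true, 2), round(slope_noisy, 2))
```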
By: | Huifang Huang; Ting Gao; Pengbo Li; Jin Guo; Peng Zhang |
Abstract: | With the fast development of quantitative portfolio optimization in financial engineering, many promising algorithmic trading strategies have shown competitive advantages in recent years. However, the environment of real financial markets is complex and hard to simulate fully, given the non-stationarity of stock data, unpredictable hidden causal factors, and so on. Fortunately, the difference of stock prices is often a stationary series, and if the internal relationship between stock differences can be linked to the decision-making process, the portfolio should be able to achieve better performance. In this paper, we first show that normalizing flows can be adopted to simulate the high-dimensional joint probability of the complex trading environment, and we develop a novel model-based reinforcement learning framework to better understand the intrinsic mechanisms of quantitative online trading. Second, we run experiments on various stocks from three different financial markets (Dow, NASDAQ and S&P 500) and show that, among these, Dow achieves the best performance on various evaluation metrics under our back-testing system. In particular, our proposed method even resists large drops (smaller maximum drawdown) during the COVID-19 pandemic period, when financial markets faced an unpredictable crisis. All these results are comparatively better than modeling the state transition dynamics with independent Gaussian Processes. Third, we utilize a causal analysis method to study the causal relationships among different stocks in the environment. Further, by visualizing high-dimensional state transition data comparisons from the real and virtual buffers with t-SNE, we uncover some effective patterns of bet
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2301.09297&r=cmp |
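Maximum drawdown, one of the back-test metrics the abstract refers to, has a standard definition worth making concrete; the equity curve below is invented.

```python
# Maximum drawdown of an equity curve: the worst peak-to-trough decline
# (standard definition; the equity values are illustrative).
import numpy as np

equity = np.array([100.0, 110.0, 105.0, 120.0, 90.0, 95.0, 130.0])
running_peak = np.maximum.accumulate(equity)
drawdown = (equity - running_peak) / running_peak
max_drawdown = drawdown.min()        # most negative dip from a prior peak
print(round(max_drawdown, 3))
```

Here the worst decline is the fall from 120 to 90, a drawdown of 25%.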
By: | Michael Mueller-Smith; Benjamin Pyle; Caroline Walker |
Abstract: | This paper studies the impact of adult prosecution on recidivism and employment trajectories for adolescent, first-time felony defendants. We use extensive linked Criminal Justice Administrative Record System and socio-economic data from Wayne County, Michigan (Detroit). Using the discrete age of majority rule and a regression discontinuity design, we find that adult prosecution reduces future criminal charges over 5 years by 0.48 felony cases (≈ 20%) while also worsening labor market outcomes: 0.76 fewer employers (≈ 19%) and $674 fewer earnings (≈ 21%) per year. We develop a novel econometric framework that combines standard regression discontinuity methods with predictive machine learning models to identify mechanism-specific treatment effects that underpin the overall impact of adult prosecution. We leverage these estimates to consider four policy counterfactuals: (1) raising the age of majority, (2) increasing adult dismissals to match the juvenile disposition rates, (3) eliminating adult incarceration, and (4) expanding juvenile record sealing opportunities to teenage adult defendants. All four scenarios generate positive returns for government budgets. When accounting for impacts to defendants as well as victim costs borne by society stemming from increases in recidivism, we find positive social returns for juvenile record sealing expansions and dismissing marginal adult charges; raising the age of majority breaks even. Eliminating prison for first-time adult felony defendants, however, increases net social costs. Policymakers may still find this attractive if they are willing to value beneficiaries (taxpayers and defendants) slightly higher (124%) than potential victims.
Keywords: | juvenile and criminal justice, regression discontinuity, machine learning, recidivism, employment |
JEL: | C36 C45 K14 K42 J24 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:cen:wpaper:23-01&r=cmp |
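The regression discontinuity step can be sketched in stylized form: fit separate linear trends on each side of the age-of-majority cutoff and take the difference of the fitted values at the cutoff. The data below are synthetic, with a built-in jump of -0.48 mimicking the headline estimate; they are not the Wayne County records.

```python
# Stylized RD at an age-18 cutoff: local linear fits on either side,
# treatment effect = difference in intercepts at the cutoff
# (synthetic data with a known -0.48 jump, for illustration only).
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(16.0, 20.0, 5000)        # running variable
adult = (age >= 18.0).astype(float)        # prosecuted as adult
y = 1.0 + 0.1 * age - 0.48 * adult + rng.normal(0, 0.5, 5000)

h = 1.0                                     # bandwidth around the cutoff
left = (age >= 18 - h) & (age < 18)
right = (age >= 18) & (age <= 18 + h)
b_left = np.polyfit(age[left] - 18, y[left], 1)    # [slope, intercept]
b_right = np.polyfit(age[right] - 18, y[right], 1)
effect = b_right[1] - b_left[1]             # jump at the cutoff
print(round(effect, 2))
```

Real RD practice adds kernel weights, data-driven bandwidths, and robust bias-corrected inference, which this sketch omits.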
By: | Resce, Giuliano; Vaquero-Piñeiro, Cristina |
Abstract: | We investigate the role of local favoritism in the Geographical Indications (GIs) quality scheme, one of the main pillars of agri-food policy in the EU. Taking advantage of a rich and unique geo-referenced database of municipalities over the 2000-2020 period, we evaluate whether the birthplaces of Regional council members are favored in the acknowledgment of GIs in Italy. To address potential confounding effects and selection biases, we combine a Difference-in-Differences strategy with machine learning methods for counterfactual analysis. Results reveal that councilors' birth municipalities are more likely to obtain their products certified as GIs. The birth-town bias is more substantial in areas where institutional quality is lower, corruption is higher, and government efficiency is lower, suggesting that the mediation of politicians is decisive where formal standardized procedures are muddled.
Keywords: | Political Economy; Geographical Indications; Political representation; Electoral success; Local Development. |
JEL: | D72 L66 Q18 R11 |
Date: | 2023–02–07 |
URL: | http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp23089&r=cmp |
By: | Nikhil Sahni; George Stein; Rodney Zemmel; David M. Cutler |
Abstract: | The potential of artificial intelligence (AI) to simplify existing healthcare processes and create new, more efficient ones is a major topic of discussion in the industry. Yet healthcare lags other industries in AI adoption. In this paper, we estimate that wider adoption of AI could lead to savings of 5 to 10 percent in US healthcare spending—roughly $200 billion to $360 billion annually in 2019 dollars. These estimates are based on specific AI-enabled use cases that employ today’s technologies, are attainable within the next five years, and would not sacrifice quality or access. These opportunities could also lead to non-financial benefits such as improved healthcare quality, increased access, better patient experience, and greater clinician satisfaction. We further present case studies and discuss how to overcome the challenges to AI deployments. We conclude with a review of recent market trends that may shift the AI adoption trajectory toward a more rapid pace. |
JEL: | I10 L2 M15 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:30857&r=cmp |
By: | Florian Oswald (ECON - Département d'économie (Sciences Po) - Sciences Po - Sciences Po - CNRS - Centre National de la Recherche Scientifique); Patrick Kofod Mogensen (UCPH - University of Copenhagen = Københavns Universitet) |
Abstract: | Website for numerical methods course. This is a course for PhD students in Computational Economics. Course Overview In this course you will learn about some commonly used methods in Computational Economics. These methods are being used in all fields of Economics. The course has a clear focus on applying what you learn. We will cover the theoretical concepts that underlie each topic, but you should expect a fair amount of hands on action required on your behalf. In the words of the great Che-Lin Su: "Doing Computation is the only way to learn Computation. Doing Computation is the only way to learn Computation. Doing Computation is the only way to learn Computation." |
Date: | 2022–07–14 |
URL: | http://d.repec.org/n?u=RePEc:hal:spmain:hal-03945730&r=cmp |
By: | Bergeaud, Antonin; Verluise, Cyril |
Abstract: | We use patent data to study the contribution of the US, Europe, China and Japan to frontier technology using automated patent landscaping. We find that China's contribution to frontier technology became quantitatively similar to that of the US in the late 2010s, overtaking the European and Japanese contributions. Although China still exhibits the stigmas of a catching-up economy, these stigmas are receding. The quality of frontier technology patents published at the Chinese Patent Office has caught up with the quality of patents published at the European and Japanese patent offices. At the same time, frontier technology patenting at the Chinese Patent Office seems to have been increasingly supported by domestic patentees, suggesting the build-up of domestic capabilities.
Keywords: | frontier technologies; China; patent landscaping; machine learning; patents |
JEL: | O30 O31 O32 |
Date: | 2022–10–14 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:117998&r=cmp |