nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒04‒16
eleven papers chosen by



  1. Evaluating Hospital Case Cost Prediction Models Using Azure Machine Learning Studio By Alexei Botchkarev
  2. The Construction of a Global General Equilibrium Model for the Russian Economy Based on International Experience By Nesterova, Kristina
  3. Reducing Estimation Risk in Mean-Variance Portfolios with Machine Learning By Daniel Kinn
  4. Exact simulation for a class of tempered stable and related distributions By Dassios, Angelos; Qu, Yan; Zhao, Hongbiao
  5. A path integral based model for stocks and order dynamics By Giovanni Paolinelli; Gianni Arioli
  6. Market Opening, Growth and Employment By Frank van Tongeren; Dorothee Flaig; Jared Greenville
  7. Classifying Occupations According to Their Skill Requirements in Job Advertisements By Jyldyz Djumalieva; Antonio Lima; Cath Sleeman
  8. Inventor Name Disambiguation with Gradient Boosting Decision Tree and Inventor Mobility in China (1985-2016) By YIN Deyun; MOTOHASHI Kazuyuki
  9. Monetary Policy Communication of the Bank of Japan: Computational Text Analysis By Yusuke Oshima; Yoichi Matsubayashi
  10. Boosting Fiscal Space: The Roles of GDP-Linked Debt and Longer Maturities By Jonathan David Ostry; Jun I. Kim
  11. The Roles of Alternative Data and Machine Learning in Fintech Lending: Evidence from the LendingClub Consumer Platform By Jagtiani, Julapa; Lemieux, Catharine

  1. By: Alexei Botchkarev
    Abstract: The ability to model and predict hospital case costs accurately is critical for efficient health care financial management and budgetary planning. A variety of regression machine learning algorithms are known to be effective for health care cost prediction. The purpose of this experiment was to build an Azure Machine Learning Studio tool for rapid assessment of multiple types of regression models. The tool offers an environment for comparing 14 types of regression models in a unified experiment: linear regression, Bayesian linear regression, decision forest regression, boosted decision tree regression, neural network regression, Poisson regression, Gaussian processes for regression, gradient boosted machine, nonlinear least squares regression, projection pursuit regression, random forest regression, robust regression, robust regression with MM-type estimators, and support vector regression. The tool presents assessment results, arranged by model accuracy, in a single table using five performance metrics. Evaluation of regression machine learning models for hospital case cost prediction demonstrated the advantage of the robust regression, boosted decision tree regression and decision forest regression models. The operational tool has been published to the web and is openly available for experiments and extensions.
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1804.01825&r=cmp
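    A minimal open-source sketch of the kind of multi-model comparison the abstract describes, using scikit-learn rather than Azure Machine Learning Studio; the file name, the 'cost' target column, and the subset of models and metrics are illustrative assumptions, not the author's setup.

      # Hypothetical sketch: compare several regression models on a case-cost dataset
      # and report multiple accuracy metrics in a single table.
      import numpy as np
      import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LinearRegression, BayesianRidge, PoissonRegressor, HuberRegressor
      from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
      from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

      df = pd.read_csv("case_costs.csv")            # assumed file with a 'cost' target column
      X, y = df.drop(columns="cost"), df["cost"]
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "linear": LinearRegression(),
          "bayesian": BayesianRidge(),
          "poisson": PoissonRegressor(),
          "robust (Huber)": HuberRegressor(),
          "random forest": RandomForestRegressor(random_state=0),
          "boosted trees": GradientBoostingRegressor(random_state=0),
      }

      rows = []
      for name, model in models.items():
          pred = model.fit(X_tr, y_tr).predict(X_te)
          rows.append({
              "model": name,
              "MAE": mean_absolute_error(y_te, pred),
              "RMSE": np.sqrt(mean_squared_error(y_te, pred)),
              "R2": r2_score(y_te, pred),
          })
      print(pd.DataFrame(rows).sort_values("RMSE"))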
  2. By: Nesterova, Kristina (Russian Presidential Academy of National Economy and Public Administration (RANEPA))
    Abstract: Multiregional computable general equilibrium (CGE) models are employed extensively in international practice to examine the consequences of various economic policy measures in a global context. Depending on the goals of the policy under consideration, the optimal model structure may vary. This study compares a range of global CGE models and, on that basis, constructs a global CGE model focused on current issues of Russian economic policy, such as the tax maneuver and the pension reform.
    Keywords: equilibrium model, international trade, tax reform
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:rnp:wpaper:021807&r=cmp
  3. By: Daniel Kinn
    Abstract: In portfolio analysis, the traditional approach of replacing population moments with sample counterparts may lead to suboptimal portfolio choices. In this paper I show that selecting asset positions to maximize expected quadratic utility is equivalent to a machine learning (ML) problem, where the asset weights are chosen to minimize out-of-sample mean squared error. It follows that ML specifically targets estimation risk when choosing the asset weights, and that "off-the-shelf" ML algorithms obtain optimal portfolios that take parameter uncertainty into account. Linear regression is a special case of the proposed ML framework, equivalent to the traditional approach. Standard results from the machine learning literature may be used to derive conditions under which ML algorithms improve upon linear regression. Based on simulation studies and several datasets, I find that ML significantly reduces estimation risk compared to the traditional approach and several shrinkage approaches proposed in the literature.
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1804.01764&r=cmp
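    A rough illustration in the spirit of the framework described above, not the paper's exact estimator: the Britten-Jones (1999) device writes mean-variance weights as a no-intercept regression of a constant on excess returns, and swapping OLS for a cross-validated ridge regression shrinks the weights to reduce estimation risk. The simulated returns and the ridge penalty grid are assumptions for illustration only.

      # Plug-in (OLS) versus shrunk (ridge) mean-variance weights on simulated returns.
      import numpy as np
      from sklearn.linear_model import LinearRegression, RidgeCV

      rng = np.random.default_rng(0)
      T, N = 240, 10                                        # 20 years of monthly data, 10 assets
      mu = rng.uniform(0.002, 0.01, N)
      R = rng.multivariate_normal(mu, 0.002 * np.eye(N), size=T)   # simulated excess returns

      ones = np.ones(T)
      ols = LinearRegression(fit_intercept=False).fit(R, ones)
      ridge = RidgeCV(alphas=np.logspace(-4, 2, 50), fit_intercept=False).fit(R, ones)

      def normalise(b):
          return b / b.sum()                                # rescale to weights summing to one

      w_plugin, w_ml = normalise(ols.coef_), normalise(ridge.coef_)
      print("plug-in weights:", np.round(w_plugin, 3))
      print("shrunk weights: ", np.round(w_ml, 3))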
  4. By: Dassios, Angelos; Qu, Yan; Zhao, Hongbiao
    Abstract: In this paper, we develop a new scheme of exact simulation for a class of tempered stable (TS) and other related distributions with similar Laplace transforms. We discover some interesting integral representations for the underlying density functions that imply a unique simulation framework based on a backward recursive procedure. The foundation of this simulation design is therefore very different from existing schemes in the literature. It works very efficiently for some subclasses of TS distributions, where even the conventional acceptance-rejection mechanism can be avoided. It can also generate some other distributions beyond the TS family. For applications, this scheme can easily be adopted to generate a variety of TS-constructed random variables and TS-driven stochastic processes for modelling observational series in practice. Numerical experiments and tests are performed to demonstrate the accuracy and effectiveness of our scheme.
    Keywords: Monte Carlo simulation; Exact simulation; Backward recursive scheme; Stable distribution; Tempered stable distribution; Exponentially tilted stable distribution; Lévy process; Lévy subordinator; Leptokurtosis
    JEL: C1
    Date: 2018–01–17
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:86981&r=cmp
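    For orientation, a minimal sketch of the conventional acceptance-rejection approach to simulating a one-sided tempered stable random variable, i.e. the mechanism the abstract says the authors' backward recursive scheme can avoid; this is not the paper's method, and the parameter values are arbitrary.

      # Conventional tempered stable simulation: draw a positive alpha-stable variate
      # (Kanter's representation) and accept it with probability exp(-lam * S).
      import numpy as np

      rng = np.random.default_rng(0)

      def positive_stable(alpha, size, rng):
          """Positive stable draws with Laplace transform exp(-s**alpha), 0 < alpha < 1."""
          u = rng.uniform(0.0, np.pi, size)
          e = rng.exponential(1.0, size)
          return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
                  * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

      def tempered_stable(alpha, lam, size, rng):
          """Exponentially tilted (tempered) stable via rejection sampling."""
          out = np.empty(size)
          filled = 0
          while filled < size:
              s = positive_stable(alpha, size, rng)
              keep = s[rng.uniform(size=size) < np.exp(-lam * s)]
              take = min(size - filled, keep.size)
              out[filled:filled + take] = keep[:take]
              filled += take
          return out

      sample = tempered_stable(alpha=0.7, lam=1.5, size=10_000, rng=rng)
      print(sample.mean())   # should be close to alpha * lam**(alpha - 1) = 0.7 * 1.5**(-0.3)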
  5. By: Giovanni Paolinelli; Gianni Arioli
    Abstract: We introduce a model for the short-term dynamics of financial assets based on an application of quantum gauge theory to finance, developing the ideas of Ilinski. We present a numerical algorithm for computing the probability distribution of prices and compare the results with Apple stock prices and the S&P 500 index.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.07904&r=cmp
  6. By: Frank van Tongeren (OECD); Dorothee Flaig (OECD); Jared Greenville (OECD)
    Abstract: What can further market integration contribute to growth and employment? A series of hypothetical trade reform scenarios explores what countries at different levels of development can expect to gain from reforming tariffs, non-tariff barriers, trade facilitation and domestic support to agriculture. Simulations of multilateral and regional trade agreements with the OECD METRO model show that positive effects are higher when more countries participate in trade integration, because broader participation widens market opportunities, expands the range of products available at lower prices, and reduces trade diversion. Smaller economies benefit especially. Firms in these economies can specialise more readily in international production networks, as they have access to larger and more differentiated markets and also benefit from enhanced market access for the products they already produce. While trade integration boosts demand and lifts wages and factor returns, the required production adjustments also lead to a reallocation of workers between sectors. The analysis highlights some of the distributional implications and emphasises the need for labour force adjustment policies to accompany trade integration.
    Keywords: agriculture support, Asia, CGE model, income distribution, International trade, market access, regional trade agreements
    JEL: C54 C68 F13 F15 F16 Q17
    Date: 2018–04–11
    URL: http://d.repec.org/n?u=RePEc:oec:traaab:214-en&r=cmp
  7. By: Jyldyz Djumalieva; Antonio Lima; Cath Sleeman
    Abstract: In this work, we propose a methodology for classifying occupations based on the skill requirements listed in online job adverts. To develop the classification methodology, we apply semi-supervised machine learning techniques to a dataset of 37 million UK online job adverts collected by Burning Glass Technologies. The resulting occupational classification comprises four hierarchical layers: the first three layers relate to skill specialisation and group jobs that require similar types of skills, while the fourth layer is based on the offered salary and indicates skill level. The proposed classification has the potential to enable measurement of an individual's career progression within the same skill domain, to support recommending jobs to individuals based on their skills, and to mitigate occupational misclassification issues. While we provide initial results and descriptions of occupational groups in the Burning Glass data, we believe the main contribution of this work is the methodology for grouping jobs into occupations based on skills.
    Keywords: labour demand, occupational classification, online job adverts, big data, machine learning, word embeddings
    JEL: C18 J23 J24
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:nsr:escoed:escoe-dp-2018-04&r=cmp
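    A schematic sketch of the general idea of grouping job adverts by skill similarity using word embeddings; the toy skill lists and clustering choices below are assumptions and do not reproduce the ESCoE methodology.

      # Represent each advert by the average embedding of its listed skills, then
      # cluster adverts so that jobs requiring similar skills are grouped together.
      import numpy as np
      from gensim.models import Word2Vec
      from scipy.cluster.hierarchy import linkage, fcluster

      adverts = [                                   # hypothetical skill lists per advert
          ["python", "sql", "statistics"],
          ["python", "machine_learning", "statistics"],
          ["nursing", "patient_care", "medication"],
          ["patient_care", "first_aid", "nursing"],
      ]

      emb = Word2Vec(sentences=adverts, vector_size=50, min_count=1, seed=0)
      vectors = np.array([np.mean([emb.wv[s] for s in ad], axis=0) for ad in adverts])

      labels = fcluster(linkage(vectors, method="ward"), t=2, criterion="maxclust")
      print(labels)                                 # cluster labels for the toy adverts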
  8. By: YIN Deyun; MOTOHASHI Kazuyuki
    Abstract: This paper presents the first systematic disambiguation of all Chinese patent inventors in the State Intellectual Property Office of China (SIPO) patent database from 1985 to 2016. We provide a method of constructing high-quality training data from lists of rare names, together with evidence for the reliability of these generated labels, which matters when large-scale and representative hand-labeled data are crucial but expensive, prone to error, or even impossible to obtain. We then compare the performance of seven supervised models, i.e., naive Bayes, logistic regression, linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), as well as tree-based methods (random forest, AdaBoost, and gradient boosting decision trees), and find that the gradient boosting classifier outperforms all other classifiers, with the highest F1-score and stable performance in solving the homonym problem prevalent in Chinese names. In the last step, instead of adopting the more popular hierarchical clustering method, we cluster records with density-based spatial clustering of applications with noise (DBSCAN), based on the distance matrix predicted by the GBDT classifier. Varying across different testing data and DBSCAN parameters, our algorithm yields an F1-score ranging from 93.5% to 99.3%, with a splitting error within the range 0.5%-3% and a lumping error between 0.056% and 0.37%. Based on the disambiguated result, we provide an overview of Chinese inventors' regional mobility.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:18018&r=cmp
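    A schematic sketch of the two-stage design the abstract describes: a gradient boosting classifier scores whether two patent records belong to the same inventor, and DBSCAN clusters records using the implied distances. All features, labels and parameter values below are placeholders rather than the authors' data.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(0)

      # pair_features: similarity features for record pairs (name, co-inventors,
      # assignee, IPC class, address, ...); y: 1 if the pair is the same inventor.
      pair_features = rng.random((1000, 6))                 # placeholder training pairs
      y = (pair_features.mean(axis=1) > 0.5).astype(int)    # placeholder labels
      clf = GradientBoostingClassifier(random_state=0).fit(pair_features, y)

      # For one ambiguous name block, score all record pairs and cluster.
      n = 20                                                # records sharing a common name
      block_pairs = rng.random((n * n, 6))                  # placeholder pairwise features
      p_match = clf.predict_proba(block_pairs)[:, 1].reshape(n, n)
      dist = 1.0 - (p_match + p_match.T) / 2                # symmetrised distance matrix
      np.fill_diagonal(dist, 0.0)

      clusters = DBSCAN(eps=0.3, min_samples=1, metric="precomputed").fit_predict(dist)
      print(clusters)                                       # same label = same inferred inventor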
  9. By: Yusuke Oshima (Graduate School of Economics, Kobe University); Yoichi Matsubayashi (Graduate School of Economics, Kobe University)
    Abstract: In this study, we empirically examine the effects of the Bank of Japan (BOJ)'s communications through its meeting minutes on the financial markets, especially during Governor Kuroda's administration from April 2013 to September 2017. Using computational linguistic models and Latent Dirichlet Allocation, we quantify the contents of the BOJ minutes and extract topics from them, including the bank's historical monetary policy and policymakers' views on current economic conditions. The empirical results suggest that a relationship exists between the estimated topics and the market reactions on the days on which the minutes are released. Although the market paid attention to the monetary policy descriptions in the minutes in the early period of quantitative and qualitative monetary easing (QQE), the significance of this monetary policy information for financial markets faded after the October 2014 expansion of QQE. In contrast, information on fund-provisioning measures to support Japanese companies' activities, including the negative interest rate policy, induced a decline in the stock market. We also find that the market pays attention to meeting members' opinions on current economic conditions.
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:koe:wpaper:1816&r=cmp
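    A minimal sketch of LDA topic extraction from central-bank minutes using scikit-learn and toy English snippets; the actual study works with the BOJ's Japanese-language minutes and a more elaborate text-processing pipeline.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      minutes = [                                   # hypothetical minute excerpts
          "members discussed the pace of asset purchases and the inflation target",
          "the committee reviewed exports, production and private consumption",
          "a negative interest rate was applied to current account balances",
      ]

      vec = CountVectorizer(stop_words="english")
      counts = vec.fit_transform(minutes)
      vocab = vec.get_feature_names_out()
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

      for k, topic in enumerate(lda.components_):
          top_words = [vocab[i] for i in topic.argsort()[-5:][::-1]]
          print(f"topic {k}: {top_words}")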
  10. By: Jonathan David Ostry; Jun I. Kim
    Abstract: Can debt management policy provide a way to increase fiscal space for a given path of primary fiscal balances? This note explores the role of two such policies: issuance of state-contingent debt and issuance of longer-maturity debt. It develops analytical models to determine the debt limit and default risk under uncertainty, and undertakes numerical simulations to gauge the practical significance of the effect of debt management policies on fiscal space. The results suggest that, by managing debt along these two dimensions, economically salient gains in fiscal space are plausible for advanced and emerging markets.
    Keywords: Fiscal policy; Debt management policies; Default; Fiscal space
    Date: 2018–03–14
    URL: http://d.repec.org/n?u=RePEc:imf:imfdep:18/04&r=cmp
  11. By: Jagtiani, Julapa (Federal Reserve Bank of Philadelphia); Lemieux, Catharine (Federal Reserve Bank of Chicago)
    Abstract: Supersedes Working Paper 17-17. Fintech has been playing an increasing role in shaping the financial and banking landscape. There have been concerns about the use of alternative data sources by fintech lenders and the impact on financial inclusion. We compare loans made by a large fintech lender with similar loans originated through traditional banking channels. Specifically, we use account-level data from LendingClub and Y-14M data reported by bank holding companies with total assets of $50 billion or more. We find a high correlation among interest rate spreads, LendingClub rating grades, and loan performance. Interestingly, the correlation between the rating grades and FICO scores has declined from about 80 percent (for loans originated in 2007) to only about 35 percent for recent vintages (originated in 2014–2015), indicating that nontraditional alternative data have been increasingly used by fintech lenders. Furthermore, we find that the rating grades (assigned on the basis of alternative data) perform well in predicting loan performance over the two years after origination. The use of alternative data has allowed some borrowers who would have been classified as subprime by traditional criteria to be slotted into “better” loan grades, which allowed them to obtain lower-priced credit. In addition, for the same risk of default, consumers pay smaller spreads on loans from LendingClub than on credit card borrowing.
    Keywords: Fintech; LendingClub; Marketplace Lending; Alternative Data; Shadow Banking; P2P Lending; Peer-to-peer Lending
    JEL: G18 G21 G28 L21
    Date: 2018–04–05
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:18-15&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.