NEP: New Economics Papers on Computational Economics
Issue of 2020‒11‒16
seventeen papers chosen by
By: | Stefan Kremsner; Alexander Steinicke; Michaela Szölgyenyi
Abstract: | In insurance mathematics, optimal control problems over an infinite time horizon arise when computing risk measures. Their solutions correspond to solutions of deterministic semilinear (degenerate) elliptic partial differential equations. In this paper we propose a deep neural network algorithm for solving such partial differential equations in high dimensions. The algorithm is based on the correspondence between elliptic partial differential equations and backward stochastic differential equations with random terminal time.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.15757&r=all |
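For orientation, the PDE-to-BSDE correspondence invoked above is the nonlinear Feynman‒Kac representation; a sketch of one standard form (the paper's exact operator, coefficients, and the law of the random terminal time are not specified in the abstract):

$$
\tfrac{1}{2}\operatorname{Tr}\big(\sigma\sigma^\top(x)\,\nabla^2 u(x)\big) + \mu(x)^\top \nabla u(x) + f\big(x, u(x), \sigma^\top(x)\nabla u(x)\big) = 0 \ \text{ in } D, \qquad u = g \ \text{ on } \partial D,
$$

corresponds, with $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$ and exit time $\tau = \inf\{t \ge 0 : X_t \notin D\}$, to the BSDE

$$
Y_t = g(X_\tau) + \int_t^{\tau} f(X_s, Y_s, Z_s)\,ds - \int_t^{\tau} Z_s^\top\,dW_s, \qquad u(x) = Y_0 \ \text{ for } X_0 = x,
$$

so a neural network approximating $u$ (and $\nabla u$) can be trained by penalizing violations of this equation along simulated paths.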
By: | Max H. Farrell; Tengyuan Liang; Sanjog Misra |
Abstract: | We propose a methodology for effectively modeling individual heterogeneity using deep learning while still retaining the interpretability and economic discipline of classical models. We pair a transparent, interpretable modeling structure with rich data environments and machine learning methods to estimate heterogeneous parameters based on potentially high dimensional or complex observable characteristics. Our framework is widely applicable, covering numerous settings of economic interest. We recover, as special cases, well-known examples such as average treatment effects and parametric components of partially linear models. However, we also seamlessly deliver new results for diverse examples such as price elasticities, willingness-to-pay, and surplus measures in choice models, average marginal and partial effects of continuous treatment variables, fractional outcome models, count data, heterogeneous production function components, and more. Deep neural networks are well-suited to structured modeling of heterogeneity: we show how the network architecture can be designed to match the global structure of the economic model, giving novel methodology for deep learning as well as, more formally, improved rates of convergence. Our results on deep learning have consequences for other structured modeling environments and applications, such as for additive models. Our inference results are based on an influence function we derive, which we show to be flexible enough to encompass all settings with a single, unified calculation, removing any requirement for case-by-case derivations. The usefulness of the methodology in economics is shown in two empirical applications: the response of 401(k) participation rates to firm matching and the impact of prices on subscription choices for an online service. Extensions to instrumental variables and multinomial choices are shown.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.14694&r=all |
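To illustrate the architecture idea (a network that outputs individual-level parameters, which then enter a fixed economic structure), here is a minimal PyTorch sketch for a partially linear specification y = α(x) + β(x)·t + ε. The names, dimensions, and toy data are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class HeterogeneousParams(nn.Module):
    """Map observable characteristics x to individual parameters (alpha, beta)."""
    def __init__(self, dim_x, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # outputs [alpha(x), beta(x)]
        )

    def forward(self, x, t):
        params = self.net(x)
        alpha, beta = params[:, 0], params[:, 1]
        # The economic model lives in the final layer:
        # outcome = alpha(x) + beta(x) * treatment.
        return alpha + beta * t

# Toy training loop on synthetic data (illustrative only).
n, dim_x = 512, 10
x, t = torch.randn(n, dim_x), torch.rand(n)
y = 1.0 + 2.0 * t + 0.1 * torch.randn(n)

model = HeterogeneousParams(dim_x)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x, t) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

Averaging the fitted β(x) over the sample then recovers an average-effect-type parameter, the kind of target the paper's inference results cover.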
By: | Takanori Sakai; Yusuke Hara; Ravi Seshadri; André Alho; Md Sami Hasnine; Peiyu Jing; ZhiYuan Chua; Moshe Ben-Akiva
Abstract: | The e-commerce delivery demand has grown rapidly in the past two decades, and this trend has accelerated tremendously during the ongoing coronavirus pandemic. Given this situation, the need for predicting e-commerce delivery demand and evaluating relevant logistics solutions is increasing. However, existing simulation models for e-commerce delivery demand are still limited and do not consider the delivery options and their attributes that shoppers face when placing e-commerce orders. We propose a novel modeling framework which jointly predicts the average total value of e-commerce purchases, the purchase amount per transaction, and delivery option choices. The proposed framework can simulate the changes in e-commerce delivery demand attributable to changes in delivery options. We assume the model parameters based on various sources of relevant information and conduct a demonstrative sensitivity analysis. Furthermore, we apply the model to a simulation of the Auto-Innovative Prototype city. While calibration of the model with real-world survey data remains necessary, the results of the analysis highlight the applicability of the proposed framework.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.14375&r=all |
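The delivery-option component is naturally a discrete choice over option attributes; a minimal multinomial-logit sketch follows. The options, attributes, and coefficients are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Illustrative delivery options: [fee in $, delivery time in days].
options = {
    "same_day": np.array([8.0, 0.5]),
    "next_day": np.array([4.0, 1.0]),
    "standard": np.array([0.0, 4.0]),
}
beta = np.array([-0.4, -0.6])  # assumed disutility of fee and of delay

def choice_probabilities(options, beta):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    v = np.array([attrs @ beta for attrs in options.values()])
    expv = np.exp(v - v.max())  # subtract max for numerical stability
    return dict(zip(options, expv / expv.sum()))

print(choice_probabilities(options, beta))
```

Changing an option's fee or speed shifts the simulated choice shares, which is exactly the kind of sensitivity the framework is built to trace.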
By: | Sidra Mehtab; Jaydip Sen |
Abstract: | Designing robust and accurate predictive models for stock price prediction has been an active area of research for a long time. While supporters of the efficient market hypothesis claim that it is impossible to forecast stock prices accurately, many researchers believe otherwise. Propositions in the literature have demonstrated that, if properly designed and optimized, predictive models can very accurately and reliably predict future values of stock prices. This paper presents a suite of deep learning-based models for stock price prediction. We use the historical records of the NIFTY 50 index listed on the National Stock Exchange of India during the period from December 29, 2008 to July 31, 2020 for training and testing the models. Our proposition includes two regression models built on convolutional neural networks (CNNs) and three long short-term memory (LSTM) network-based predictive models. To forecast the open values of the NIFTY 50 index records, we adopt a multi-step prediction technique with walk-forward validation. In this approach, the open values of the NIFTY 50 index are predicted on a time horizon of one week; once the week is over, the actual index values are included in the training set, the model is trained again, and the forecasts for the next week are made. We present detailed results on the forecasting accuracy of all the proposed models. The results show that while all the models are very accurate in forecasting the NIFTY 50 open values, the univariate encoder-decoder convolutional LSTM with the previous two weeks' data as input is the most accurate model. On the other hand, a univariate CNN model with the previous one week's data as input is the fastest model in terms of execution speed.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13891&r=all |
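The walk-forward procedure described (forecast a week ahead, then fold the realized week back into the training set and refit) can be sketched generically. The model below is a naive placeholder, not the paper's CNN or LSTM.

```python
import numpy as np

def walk_forward_forecast(series, fit_and_predict, horizon=5, min_train=250):
    """Multi-step walk-forward validation: predict `horizon` points ahead,
    then append the realized values to the training window and refit."""
    preds, actuals = [], []
    origin = min_train
    while origin + horizon <= len(series):
        forecast = fit_and_predict(series[:origin], horizon)
        preds.append(forecast)
        actuals.append(series[origin:origin + horizon])
        origin += horizon  # roll the forecast origin forward one week
    return np.array(preds), np.array(actuals)

# Naive "last value" model, just to make the loop runnable end to end.
naive = lambda train, h: np.repeat(train[-1], h)
series = 100.0 + np.cumsum(np.random.randn(1000))
preds, actuals = walk_forward_forecast(series, naive)
print(np.sqrt(np.mean((preds - actuals) ** 2)))  # walk-forward RMSE
```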
By: | Katsafados, Apostolos G.; Androutsopoulos, Ion; Chalkidis, Ilias; Fergadiotis, Manos; Leledakis, George N.; Pyrgiotakis, Emmanouil G. |
Abstract: | This study examines the predictive power of textual information from S-1 filings in explaining IPO underpricing. Our empirical approach differs from previous research, as we utilize several machine learning algorithms to predict whether an IPO will be underpriced or not. We analyze a large sample of 2,481 U.S. IPOs from 1997 to 2016 and find that textual information can effectively complement traditional financial variables in terms of prediction accuracy. In fact, models that use both textual data and financial variables as inputs have superior performance compared to models using a single type of input. We attribute our findings to the fact that textual information can reduce the ex-ante valuation uncertainty of IPO firms, thus leading to more accurate estimates.
Keywords: | Initial public offerings; First-day returns; Machine learning; Natural language processing |
JEL: | G02 G14 G30 G32 |
Date: | 2020–10–27 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:103813&r=all |
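A minimal sketch of the text-plus-financials design: one pipeline vectorizing the S-1 text and passing the financial covariates through to a single classifier. The column names, toy data, and choice of logistic regression are illustrative; the paper evaluates several algorithms.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-ins for S-1 filing text and financial covariates.
df = pd.DataFrame({
    "s1_text": ["risk factors include ...", "proceeds will fund growth ..."] * 50,
    "log_assets": np.random.randn(100),
    "underwriter_rank": np.random.rand(100),
    "underpriced": np.random.randint(0, 2, 100),
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "s1_text"),
    ("financial", "passthrough", ["log_assets", "underwriter_rank"]),
])
clf = Pipeline([("features", features),
                ("model", LogisticRegression(max_iter=1000))])
clf.fit(df.drop(columns="underpriced"), df["underpriced"])
```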
By: | Isao Yagi; Shunya Maruyama; Takanobu Mizuta |
Abstract: | A leveraged ETF is a fund aimed at achieving a rate of return several times greater than that of an underlying asset, such as Nikkei 225 futures. Recently, it has been suggested that the rebalancing trades of leveraged ETFs may destabilize financial markets. An empirical study using an agent-based simulation indicated that a rebalancing trade strategy can affect price formation in the underlying asset market. However, no leveraged ETF trading method that suppresses the resulting increase in volatility as much as possible has yet been proposed. In this paper, we compare different trading strategies for a proposed trading model and report how best to suppress the increase in market volatility. We find that as the minimum number of orders in a rebalancing trade increases, the impact on market price formation decreases.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13036&r=all |
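The mechanical source of the rebalancing flow is worth making explicit: a fund with leverage L must trade L(L−1) × AUM × r of the underlying at the close after the underlying returns r, so it buys after up days and sells after down days. A small illustrative calculation (numbers assumed):

```python
def rebalance_trade(aum, leverage, daily_return):
    """Exposure to add (+) or shed (-) to restore exposure = L x NAV.
    Before: exposure = L*aum. After a return r: exposure = L*aum*(1+r),
    NAV = aum*(1+L*r), target = L*aum*(1+L*r); the gap is L*(L-1)*aum*r."""
    return leverage * (leverage - 1) * aum * daily_return

# A 2x fund with 100bn AUM after a -3% day must sell 6bn of futures,
# adding sell pressure to an already falling market.
print(rebalance_trade(100e9, 2, -0.03))
```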
By: | Andrés Alonso (Banco de España); José Manuel Carbó (Banco de España) |
Abstract: | New reports show that the financial sector is increasingly adopting machine learning (ML) tools to manage credit risk. In this environment, supervisors face the challenge of allowing credit institutions to benefit from technological progress and financial innovation, while at the same time ensuring compatibility with regulatory requirements and that technological neutrality is observed. We propose a new framework for supervisors to measure the costs and benefits of evaluating ML models, aiming to shed more light on this technology’s alignment with the regulation. We follow three steps. First, we identify the benefits by reviewing the literature. We observe that ML delivers predictive gains of up to 20% in default classification compared with traditional statistical models. Second, we use the process for validating internal ratings-based (IRB) systems for regulatory capital to detect ML’s limitations in credit risk management. We identify up to 13 factors that might constitute a supervisory cost. Finally, we propose a methodology for evaluating these costs. For illustrative purposes, we compute the benefits by estimating the predictive gains of six ML models using a public database on credit default. We then calculate a supervisory cost function through a scorecard in which we assign weights to each factor for each ML model, based on how the model is used by the financial institution and the supervisor’s risk tolerance. From a supervisory standpoint, having a structured methodology for assessing ML models could increase transparency and remove an obstacle to innovation in the financial industry.
Keywords: | artificial intelligence, machine learning, credit risk, interpretability, bias, IRB models |
JEL: | C53 D81 G17 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:2032&r=all |
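The proposed supervisory cost function, with weights assigned to each identified factor and a score per model, might look schematically like this. The factor names, weights, and scores below are placeholders, not the paper's scorecard.

```python
# Hypothetical scorecard for one ML model under evaluation.
weights = {              # reflect the use case and the supervisor's risk tolerance
    "interpretability": 0.30,
    "data_quality":     0.20,
    "stability":        0.25,
    "governance":       0.25,
}
scores = {               # 0 = no supervisory concern .. 1 = severe concern
    "interpretability": 0.8,   # e.g., a deep network rates poorly here
    "data_quality":     0.3,
    "stability":        0.4,
    "governance":       0.2,
}

supervisory_cost = sum(weights[f] * scores[f] for f in weights)
print(round(supervisory_cost, 3))
# Weigh this cost against the model's predictive benefit (e.g., its gain in
# default-classification accuracy) to reach a structured supervisory view.
```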
By: | Comincioli, Nicola; Panteghini, Paolo M.; Vergalli, Sergio |
Abstract: | In this article we introduce a model to describe the behavior of a multinational company (MNC) that engages in transfer pricing and debt shifting with the purpose of increasing its value, defined as the sum of equity and debt. We compute, in a stochastic environment and under default risk, the optimal shares of profit and debt to be shifted and show how they are affected by exogenous features of the market. In addition, by means of a numerical analysis, we simulate and quantify the benefit arising from the exploitation of tax avoidance practices and study the corresponding impact on the MNC's fundamental indicators. A wide sensitivity analysis on the model's parameters is also provided.
Keywords: | Financial Economics |
Date: | 2020–11–05 |
URL: | http://d.repec.org/n?u=RePEc:ags:feemgc:307307&r=all |
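The abstract does not state the dynamics; in this trade-off literature the firm's profit flow is commonly modeled as a geometric Brownian motion with default at a lower threshold, so a purely illustrative simulation of that setup is sketched below. The drift, volatility, and barrier are assumptions.

```python
import numpy as np

# Illustrative profit paths: dP = mu*P*dt + sigma*P*dW, default if P hits a barrier.
rng = np.random.default_rng(0)
mu, sigma, dt, T, n_paths = 0.02, 0.25, 1 / 252, 10.0, 10_000
steps = int(T / dt)
z = rng.standard_normal((n_paths, steps))
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
profit = np.exp(np.cumsum(log_increments, axis=1))  # paths started at 1.0

default_barrier = 0.4  # hypothetical default threshold on the profit flow
defaulted = profit.min(axis=1) <= default_barrier
print(f"simulated 10-year default probability: {defaulted.mean():.3f}")
```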
By: | Antti J. Tanskanen |
Abstract: | Discrete-choice life cycle models can be used, for example, to estimate how social security reforms change the employment rate. Optimal employment choices over an individual's life course can be solved within the framework of such models, which enables estimating how a social security reform influences the employment rate. Life cycle models have mostly been solved with dynamic programming, which is not feasible when the state space is large, as is often the case in a realistic life cycle model. Solving such models requires approximate methods, such as reinforcement learning algorithms. We compare how well the deep reinforcement learning algorithm ACKTR and dynamic programming solve a relatively simple life cycle model. We find that the average utility is almost the same under both algorithms; however, the details of the best policies found differ to a degree. In the baseline model representing the current Finnish social security scheme, we find that reinforcement learning yields essentially as good results as dynamic programming. We then analyze a straightforward social security reform and find that the employment changes due to the reform are almost the same under both methods. Our results suggest that reinforcement learning algorithms are of significant value in analyzing complex life cycle models.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13471&r=all |
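The dynamic-programming benchmark on a small state space is plain backward induction over ages; a toy sketch follows. The utility numbers and two-action structure are illustrative assumptions, not the Finnish model in the paper.

```python
import numpy as np

# Toy life cycle solved by backward induction: each period choose employment
# e in {0, 1}; utility is log consumption plus a leisure bonus when not working.
beta, wage, benefit, leisure = 0.96, 1.0, 0.5, 0.35
ages = list(range(20, 65))

V_next, policy = 0.0, {}
for age in reversed(ages):
    u_work = np.log(wage) + beta * V_next
    u_home = np.log(benefit) + leisure + beta * V_next
    policy[age] = int(u_work >= u_home)
    V_next = max(u_work, u_home)

print(sum(policy.values()) / len(policy))  # employment rate over the life course
# A reinforcement learning agent (e.g., ACKTR) learns an approximation to this
# policy from simulated episodes when the state space is too large to sweep.
```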
By: | Dongming Wei; Yogi Ahmad Erlangga; Gulzat Zhumakhanova |
Abstract: | In this paper, the finite element method is applied to Leland's model for the numerical simulation of option pricing with transaction costs. Spatial finite element models based on P1 and/or P2 elements are formulated in combination with a Crank-Nicolson-type temporal scheme, implemented using the Rannacher approach. Examples with several sets of parameter values are presented and compared with finite difference results in the literature. Spatial-temporal mesh-size ratios are observed to control the stability of the method. Our results compare favorably with the published finite difference results for this model.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13541&r=all |
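For reference, Leland's model replaces the Black‒Scholes volatility with a transaction-cost-adjusted one; in its common nonlinear form (notation may differ from the paper's),

$$
\frac{\partial u}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 u}{\partial S^2}\left(1 + \mathrm{Le}\,\operatorname{sign}\frac{\partial^2 u}{\partial S^2}\right) + rS\frac{\partial u}{\partial S} - ru = 0, \qquad \mathrm{Le} = \sqrt{\frac{2}{\pi}}\,\frac{\kappa}{\sigma\sqrt{\delta t}},
$$

where κ is the proportional transaction cost and δt the rebalancing interval. The finite elements discretize in S, Crank‒Nicolson in t, with a few Rannacher (implicit Euler) start-up steps to damp the oscillations caused by the non-smooth payoff.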
By: | Aur\'elien Alfonsi; Adel Cherchali; Jose Arturo Infante Acevedo |
Abstract: | This paper studies the multilevel Monte-Carlo (MLMC) estimator for the expectation of a maximum of conditional expectations. This problem arises naturally when considering many stress tests and appears in the calculation of the interest rate module of the standard formula for the SCR. We obtain theoretical convergence results that complement the recent work of Giles and Goda and give some additional tractability through a parameter that describes regularity properties around the maximum. We then apply the MLMC estimator to the calculation of the SCR at future dates with the standard formula for an ALM savings business in life insurance. We compare it with estimators obtained with Least Squares Monte-Carlo or neural networks. We find that the MLMC estimator is computationally more efficient and has the main advantage of avoiding regression issues, which is particularly significant in the context of the projection of a balance sheet by an insurer due to path dependency. Last, we discuss the potential of this numerical method and analyze in particular the effect of the portfolio allocation on the SCR at future dates.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.12651&r=all |
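The MLMC construction rests on the standard telescoping decomposition over levels of inner-simulation accuracy (generic notation; here the target is an expectation of a maximum of conditional expectations):

$$
\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}],
$$

where $P_\ell$ approximates the conditional expectations with $M_\ell = M_0 2^{\ell}$ inner samples. Each correction $P_\ell - P_{\ell-1}$ is computed with coupled inner samples and needs progressively fewer outer samples, which is why the overall cost to reach a root-mean-square error ε can approach the unnested $O(\varepsilon^{-2})$ rate rather than the roughly $O(\varepsilon^{-3})$ cost of plain nested Monte-Carlo.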
By: | Michael Allan Ribers; Hannes Ullrich |
Abstract: | Human decision-making differs due to variation in both incentives and available information. This generates substantial challenges for the evaluation of whether and how machine learning predictions can improve decision outcomes. We propose a framework that incorporates machine learning on large-scale administrative data into a choice model featuring heterogeneity in decision maker payoff functions and predictive skill. We apply our framework to the major health policy problem of improving the efficiency in antibiotic prescribing in primary care, one of the leading causes of antibiotic resistance. Our analysis reveals large variation in physicians’ skill to diagnose bacterial infections and in how physicians trade off the externality inherent in antibiotic use against its curative benefit. Counterfactual policy simulations show the combination of machine learning predictions with physician diagnostic skill achieves a 25.4 percent reduction in prescribing. |
Keywords: | Prediction policy, expert decision-making, machine learning, antibiotic prescribing |
JEL: | C10 C55 I11 I18 Q28 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1911&r=all |
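The decision margin in a framework like this is a threshold rule on the predicted infection probability, with the cutoff pinned down by how the payoff function trades the curative benefit against the resistance externality. A schematic sketch with assumed numbers:

```python
def prescribe(p_bacterial, benefit=1.0, externality=0.35):
    """Prescribe iff the expected curative benefit outweighs the externality:
    p * benefit >= externality, i.e. threshold = externality / benefit."""
    return p_bacterial >= externality / benefit

# Better diagnostic skill (or an ML prediction) pushes p toward 0 or 1,
# so the same rule produces fewer marginal prescriptions.
print(prescribe(0.30), prescribe(0.60))
```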
By: | Andrea Borsato |
Abstract: | The paper fills a gap in the Secular Stagnation literature and develops an agent-based SFC model to analyse the deep relationship between income distribution and productivity through the channel of innovation. With a steady gaze on US macroeconomic data since 1950, we put forth the idea that the continuous shift of income from wages to profits may have resulted in a smaller incentive to invest in R&D activity, with the decline in productivity performance that characterizes Secular Stagnation in the USA. The paper is the first step toward the growth model that will be developed in Part II.
Keywords: | Secular Stagnation, Innovation dynamics, Income distribution, Agent-based SFC models.
JEL: | E10 O31 O38 O43 P16 |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:usi:wpaper:840&r=all |
By: | Perone, G. |
Abstract: | Coronavirus disease (COVID-19) is a severe ongoing pandemic caused by a novel coronavirus that emerged in Wuhan, China, in December 2019. As of October 13, 2020, the outbreak had spread rapidly across the world, affecting over 38 million people and causing over 1 million deaths. In this article, I analysed several time series forecasting methods to predict the spread of the COVID-19 second wave in Italy over the period after October 13, 2020. I used an autoregressive model (ARIMA), an exponential smoothing state space model (ETS), a neural network autoregression model (NNAR), and the following hybrid combinations of them: ARIMA-ETS, ARIMA-NNAR, ETS-NNAR, and ARIMA-ETS-NNAR. Specifically, I forecast the number of patients hospitalized with mild symptoms and the number in intensive care units (ICU). The data refer to the period February 21, 2020 to October 13, 2020 and are extracted from the website of the Italian Ministry of Health (www.salute.gov.it). The results show that i) the hybrid models, except for ARIMA-ETS, are better at capturing the linear and non-linear epidemic patterns, outperforming the respective single models; and ii) the number of COVID-19 patients hospitalized with mild symptoms and in ICU will rapidly increase in the coming weeks, reaching a peak in about 50‒60 days, i.e. around mid-December 2020 at the earliest. To tackle the upcoming second wave, it is necessary to enhance social distancing, hire more healthcare workers, and provide sufficient hospital facilities, protective equipment, and ordinary and intensive care beds.
Keywords: | COVID-19; outbreak; second wave; Italy; hybrid forecasting models; ARIMA; ETS; NNAR. |
JEL: | C22 C53 I18 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:yor:hectdg:20/18&r=all |
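One of the hybrid combinations can be sketched in a few lines: here ARIMA‒ETS via statsmodels, combining the component forecasts by simple averaging, one common construction. The series, ARIMA order, and horizon are placeholders, and the NNAR component is omitted.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Placeholder series standing in for daily hospitalization counts.
y = 100.0 + np.cumsum(np.abs(np.random.randn(200)))
h = 14  # forecast horizon in days

arima_fc = ARIMA(y, order=(2, 1, 1)).fit().forecast(steps=h)
ets_fc = ExponentialSmoothing(y, trend="add").fit().forecast(h)

hybrid_fc = (arima_fc + ets_fc) / 2.0  # ARIMA-ETS hybrid as a forecast average
print(hybrid_fc[:3])
```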
By: | Isao Yagi; Yuji Masuda; Takanobu Mizuta |
Abstract: | Many empirical studies have discussed market liquidity, which is regarded as a measure of a booming financial market. Various indicators for objectively evaluating market liquidity have been proposed and their merits discussed. In recent years, the impact of high-frequency traders (HFTs) on financial markets has been a focal concern, but no studies have systematically discussed their relationship with major market liquidity indicators, including volume, tightness, resiliency, and depth. In this study, we used agent-based simulations to compare the major liquidity indicators in an artificial market in which an HFT participated with those in one in which no HFT participated. The results showed that all liquidity indicators improved more in the market with an HFT than in the market without one. Furthermore, by investigating the correlations between the major liquidity indicators in our simulations and in the extant empirical literature, we found that market liquidity can be measured not only by the major liquidity indicators but also by the execution rate. We therefore suggest that the execution rate could be employed as a novel liquidity indicator in future studies.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13038&r=all |
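The indicators compared can be computed from order book and trade records roughly as follows (schematic definitions; the paper's exact operationalizations, and its resiliency measure in particular, may differ):

```python
import numpy as np

def tightness(best_bid, best_ask):
    """Quoted bid-ask spread relative to the mid price (smaller = more liquid)."""
    mid = (best_bid + best_ask) / 2.0
    return (best_ask - best_bid) / mid

def depth(bid_volumes, ask_volumes):
    """Quantity available near the best quotes."""
    return float(np.sum(bid_volumes) + np.sum(ask_volumes))

def execution_rate(n_executed_orders, n_submitted_orders):
    """Share of submitted orders that execute (the proposed novel indicator)."""
    return n_executed_orders / n_submitted_orders

print(tightness(99.5, 100.5), depth([10, 5], [8, 7]), execution_rate(120, 400))
```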
By: | Kristoffer Andersson; Cornelis W. Oosterlee |
Abstract: | In this paper, we propose a neural network-based method for CVA computations of a portfolio of derivatives. In particular, we focus on portfolios consisting of a combination of derivatives with and without true optionality, e.g., a mix of European- and Bermudan-type derivatives. CVA is computed, with and without netting, for different levels of WWR and for different levels of credit quality of the counterparty. We show that the CVA is overestimated by up to 25% when the standard procedure of not adjusting the exercise strategy for the default risk of the counterparty is used. For the Expected Shortfall of the CVA dynamics, the overestimation was found to be more than 100% in some non-extreme cases.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13843&r=all |
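For context, the quantity being approximated is the standard unilateral CVA, discretized over exposure dates (generic form; the paper's netting and WWR treatments refine how the exposure $V_t$ is computed):

$$
\mathrm{CVA} = (1 - R)\,\mathbb{E}\!\left[\int_0^T D(t)\,\max(V_t, 0)\,d\mathrm{PD}(t)\right] \approx (1 - R)\sum_{i=1}^{n} D(t_i)\,\mathrm{EE}(t_i)\,\big(\mathrm{PD}(t_i) - \mathrm{PD}(t_{i-1})\big),
$$

where R is the recovery rate, D the discount factor, EE the expected positive exposure, and PD the counterparty's cumulative default probability. The abstract's point is that for Bermudan-type positions the exposure depends on an exercise strategy, which should itself be adjusted for the counterparty's default risk.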
By: | A. Georgantas |
Abstract: | Portfolio selection is an active research topic that combines elements and methodologies from various fields, such as optimization, decision analysis, risk management, data science, and forecasting. The modeling and treatment of deep uncertainty in future asset returns is a major issue for the success of analytical portfolio selection models. Recently, robust optimization (RO) models have attracted considerable interest in this area. RO provides a computationally tractable framework for portfolio optimization based on relatively general assumptions on the probability distributions of the uncertain risk parameters. Thus, RO extends the framework of traditional linear and non-linear models (e.g., the well-known mean-variance model), incorporating uncertainty through a formal and analytical approach into the modeling process. Robust counterparts of existing models can be considered worst-case reformulations with respect to deviations of the uncertain parameters from their nominal values. Although several RO models have been proposed in the literature, focusing on various risk measures and different types of uncertainty sets for asset returns, comprehensive empirical assessments of their performance are lacking. The objective of this study is to fill this gap in the literature. More specifically, we consider different types of RO models based on popular risk measures and conduct an extensive comparative analysis of their performance using data from the US market during the period 2005‒2016.
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13397&r=all |
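To fix ideas, the simplest RO construction in this family is the box-uncertainty robust mean-variance problem (one textbook instance, not necessarily among the exact models tested):

$$
\max_{w}\ \min_{\mu:\,|\mu_i - \hat{\mu}_i| \le \delta_i}\ \mu^\top w - \lambda\, w^\top \Sigma w \;=\; \max_{w}\ \hat{\mu}^\top w - \sum_i \delta_i |w_i| - \lambda\, w^\top \Sigma w,
$$

so the worst case over the box penalizes each position by its estimation-uncertainty budget $\delta_i$; ellipsoidal uncertainty sets lead instead to a norm penalty of the form $\|\Delta^{1/2} w\|_2$.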