on Computational Economics
Issue of 2019‒01‒14
nineteen papers chosen by
By: Reaz Chowdhury; M. R. C. Mahdy; Tanisha Nourin Alam; Golam Dastegir Al Quaderi
Abstract: The Black-Scholes option pricing model (BSOPM) has long been used to value equity options. In this work, using the BSOPM, we develop a comparative analytical approach and numerical technique to find the prices of call and put options, and treat these two prices as the buying and selling prices of stocks in frontier markets, so that we can predict the stock (close) price. The model is modified to find the parameters strike price and time of expiration needed to calculate the stock price of frontier markets. To verify the results obtained with the modified BSOPM, we use a machine learning approach in the software RapidMiner, adopting algorithms such as decision trees, ensemble learning, and neural networks. We observe that the close-price predictions from machine learning are very similar to those obtained with the BSOPM. The machine learning approach nevertheless stands out as the better predictor, because the Black-Scholes-Merton equation includes risk and dividend parameters that change continuously. We have also calculated volatility numerically. As stock prices rise due to overpricing, volatility increases at a tremendous rate, and when volatility becomes very high the market tends to fall, which can be observed and determined using our modified BSOPM. The proposed modified BSOPM is also explained through the analogy with the Schrödinger equation (and heat equation) of quantum physics.
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.10619&r=all
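For reference, the standard Black-Scholes call and put prices that the abstract starts from fit in a few lines. This is a sketch of the textbook formulas only, not the authors' modified model; all parameter values are illustrative.

```python
# Textbook Black-Scholes prices; a generic sketch, not the paper's modified model.
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes(S, K, T, r, sigma):
    """European call/put prices for spot S, strike K, maturity T (years),
    risk-free rate r, and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)
    put = K * exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)
    return call, put

call, put = black_scholes(S=100.0, K=105.0, T=0.5, r=0.02, sigma=0.25)
print(f"call = {call:.4f}, put = {put:.4f}")
```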
By: Achref Bachouch (UiO - University of Oslo); Côme Huré (LPSM UMR 8001 - Laboratoire de Probabilités, Statistique et Modélisation - UPMC - Université Pierre et Marie Curie - Paris 6 - UPD7 - Université Paris Diderot - Paris 7 - CNRS - Centre National de la Recherche Scientifique, UPD7 - Université Paris Diderot - Paris 7); Nicolas Langrené (CSIRO - Data61 [Canberra] - ANU - Australian National University - CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Huyen Pham (LPSM UMR 8001 - Laboratoire de Probabilités, Statistique et Modélisation - UPD7 - Université Paris Diderot - Paris 7 - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique, UPD7 - Université Paris Diderot - Paris 7)
Abstract: This paper presents several numerical applications of the deep learning-based algorithms analyzed in [11]. Numerical and comparative tests using TensorFlow illustrate the performance of our different algorithms, namely control learning by performance iteration (algorithms NNcontPI and ClassifPI) and control learning by hybrid iteration (algorithms Hybrid-Now and Hybrid-LaterQ), on the 100-dimensional nonlinear PDE examples from [6] and on quadratic backward stochastic differential equations as in [5]. We also provide numerical results for an option hedging problem in finance, and for energy storage problems arising in the valuation of gas storage and in microgrid management.
Date: 2018–12–12
URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01949221&r=all
By: al Irsyad, M. Indra (University of Queensland, School of Earth and Environmental Sciences); Halog, Anthony (University of Queensland, School of Earth and Environmental Sciences); Nepal, Rabindra (Tasmanian School of Business & Economics, University of Tasmania)
Abstract: This study estimates the impacts of four solar energy policy interventions on the photovoltaic (PV) market potential, government expenditure, economic growth, and the environment. An agent-based model is developed to capture the specific economic and institutional features of developing economies, taking Indonesia as a case study. We undertake a novel approach to energy modelling by combining energy system analysis, input-output analysis, life-cycle analysis, and socio-economic analysis to obtain a comprehensive and integrated impact assessment. Our results, after sensitivity analysis, call for abolishing the existing PV grant policy in the Indonesian rural electrification programs. The government should instead encourage the PV industry to improve production efficiency and to provide after-sales service. A 100-watt-peak (Wp) PV system under this policy is affordable for 33.2 percent of rural households without electricity access in 2010. The rural PV market potentially grows to 82.4 percent of these households if rural financing institutions lend 70 percent of the capital cost for five years at a 12 percent annual interest rate. An additional 30 percent capital subsidy and a 5 percent interest subsidy only slightly increase the rural PV market potential, to 89.6 percent. Subsidies are, however, crucial for creating PV demand among urban households, although the most effective policy for promoting PV to urban households is the net metering scheme. Several policy proposals are discussed in response to these findings.
Keywords: hybrid energy model, developing country, renewables policy, impact assessments, agent-based modelling, photovoltaic system
JEL: C60 Q21 Q43 Q48
Date: 2018
URL: http://d.repec.org/n?u=RePEc:tas:wpaper:28893&r=all
By: Li-Xin Wang
Abstract: A deep convolutional fuzzy system (DCFS) on a high-dimensional input space is a multi-layer connection of many low-dimensional fuzzy systems, where the input variables to the low-dimensional fuzzy systems are selected through a moving window (a convolution operator) across the input spaces of the layers. To design the DCFS from input-output data pairs, we propose a bottom-up, layer-by-layer scheme. Specifically, viewing each first-layer fuzzy system as a weak estimator of the output based on only a very small portion of the input variables, we can design these fuzzy systems using the WM (Wang-Mendel) Method. After the first-layer fuzzy systems are designed, we pass the data through the first layer and replace the inputs in the original data set by the corresponding outputs of the first layer to form a new data set; we then design the second-layer fuzzy systems on this new data set in the same way as the first-layer ones. Repeating this process, we design the whole DCFS. Since the WM Method requires only one pass over the data, this training algorithm for the DCFS is very fast. We apply the DCFS model with this training algorithm to predict a synthetic chaotic-plus-random time series and the real Hang Seng Index of the Hong Kong stock market.
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.11226&r=all
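The layer-by-layer training loop described above can be sketched as follows. Since the WM rule-generation step is too long for a short sketch, ridge regressions stand in here for the low-dimensional fuzzy systems (an assumption, plainly not the paper's estimator); the moving-window wiring and the replace-inputs-with-outputs loop follow the abstract, and all sizes are illustrative.

```python
# Sketch of the bottom-up, layer-by-layer DCFS training scheme.
# Assumption: ridge regressions replace the WM-designed fuzzy systems.
import numpy as np
from sklearn.linear_model import Ridge

def train_layer(X, y, window=3, stride=1):
    """Fit one weak estimator per moving window of input variables."""
    models, slices = [], []
    for start in range(0, X.shape[1] - window + 1, stride):
        sl = slice(start, start + window)
        models.append(Ridge(alpha=1.0).fit(X[:, sl], y))
        slices.append(sl)
    return models, slices

def apply_layer(X, models, slices):
    """Replace the inputs by the outputs of this layer's estimators."""
    return np.column_stack([m.predict(X[:, sl]) for m, sl in zip(models, slices)])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] * X[:, 7] + 0.1 * rng.normal(size=500)

layers, H = [], X
while H.shape[1] > 3:                    # stack layers until few outputs remain
    models, slices = train_layer(H, y)
    layers.append((models, slices))
    H = apply_layer(H, models, slices)   # the "new data set" for the next layer

top = Ridge(alpha=1.0).fit(H, y)         # final aggregation layer
print("training R^2:", top.score(H, y))
```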
By: Christian Bayer; Chiheb Ben Hammouda; Raul Tempone
Abstract: The rough Bergomi (rBergomi) model, introduced recently in [4], is a promising rough volatility model in quantitative finance. The model's results are consistent with the empirical observation that implied volatility surfaces are essentially time-invariant, and it can capture the term structure of skew observed in equity markets. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model remains a time-consuming task. To overcome this issue, we design a novel, alternative, hierarchical approach based on adaptive sparse-grid quadrature, specifically using the same construction as multi-index stochastic collocation (MISC) [21], coupled with Brownian bridge construction and Richardson extrapolation. By uncovering the available regularity, our hierarchical method demonstrates substantial computational gains with respect to the standard MC method when reaching a sufficiently small error tolerance in the price estimates, across different parameter constellations and even for very small values of the Hurst parameter. Our work opens a new research direction in this field, namely investigating the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.08533&r=all
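Of the building blocks named above, Richardson extrapolation is the easiest to illustrate in isolation: two estimates at step sizes h and h/2 are combined to cancel the leading error term. The sketch below assumes a known first-order error on a toy integral; it says nothing about the rBergomi discretization itself.

```python
# Richardson extrapolation: combine estimates at step sizes h and h/2 to
# cancel the leading O(h^p) error term. Assumes the order p is known.
def richardson(estimate, h, p=1):
    """estimate(h) -> approximation with error ~ C*h^p."""
    coarse, fine = estimate(h), estimate(h / 2)
    return (2**p * fine - coarse) / (2**p - 1)

# Toy check: left Riemann sums of exp on [0, 1] have first-order error.
import math

def riemann_exp(h):
    n = round(1 / h)
    return sum(math.exp(i * h) for i in range(n)) * h

exact = math.e - 1
print("plain error:       ", abs(riemann_exp(0.01) - exact))
print("extrapolated error:", abs(richardson(riemann_exp, 0.01) - exact))
```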
By: Howell, Bronwyn E.; Potgieter, Petrus H.
Abstract: We discuss the effect of pricing strategies by two firms on total firm revenue and on consumer and total welfare, using simulation and numerical analysis. We consider mixed-bundling pricing decisions, where each firm offers two closely related products as well as a bundle. Bundling is a key feature of information goods (Bakos & Brynjolfsson, 1999; Shapiro & Varian, 1999), and we might assume that the market has two differentiated content products (each of which is a bundle of channels, for example, or a bundle of content titles to which access is sold). In many markets, this would be a basic entertainment product plus a sports product or a premium bundle with recent films, etc. We can also treat these as an access product and a content product, so as to consider the issues around mergers of content and access firms. In the model for this paper, we introduce a principle of bounded rationality by limiting the ability of the firms to determine revenue-maximising pricing strategies. This means the firms reduce their effort to find a revenue optimum and will in general find only a relatively good solution, not necessarily an optimal one. Considering the effects of this approach might be useful for both regulators and firms. We also assume that the firms collude to maximise their joint revenue, which we regard as a realistic supposition in a duopoly market. The model can be extended to cover the case where one firm offers/bundles more than two products, but this is a topic for future research.
Date: 2018
URL: http://d.repec.org/n?u=RePEc:zbw:itsb18:190345&r=all
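A minimal sketch of this kind of simulation: consumers with random valuations for two products, a firm pricing both products and a discounted bundle, and bounded rationality modelled as a capped random search over price candidates instead of exhaustive optimization. All numbers and the valuation distribution are illustrative assumptions, not the paper's calibration.

```python
# Mixed-bundling revenue search with bounded rationality: the firm evaluates
# only a limited number of random price triples. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
v = rng.uniform(0, 100, size=(10_000, 2))    # valuations for products A and B

def revenue(pA, pB, pAB):
    """Each consumer picks the option (A, B, bundle, nothing) with highest surplus."""
    surplus = np.column_stack([
        v[:, 0] - pA,                # buy A only
        v[:, 1] - pB,                # buy B only
        v.sum(axis=1) - pAB,         # buy the bundle
        np.zeros(len(v)),            # buy nothing
    ])
    choice = surplus.argmax(axis=1)
    prices = np.array([pA, pB, pAB, 0.0])
    return prices[choice].sum()

# Bounded rationality: only `budget` random candidates are ever evaluated.
budget, best = 200, (None, -np.inf)
for _ in range(budget):
    pA, pB = rng.uniform(20, 90, size=2)
    pAB = rng.uniform(max(pA, pB), pA + pB)  # bundle priced below the sum
    r = revenue(pA, pB, pAB)
    if r > best[1]:
        best = ((pA, pB, pAB), r)

print("best prices found:", np.round(best[0], 2), "revenue:", round(best[1]))
```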
By: Beutel, Johannes; List, Sophia; von Schweinitz, Gregor
Abstract: This paper compares the out-of-sample predictive performance of different early warning models for systemic banking crises using a sample of advanced economies covering the past 45 years. We compare a benchmark logit approach to several machine learning approaches recently proposed in the literature. We find that while machine learning methods often attain a very high in-sample fit, they are outperformed by the logit approach in recursive out-of-sample evaluations. This result is robust to the choice of performance measure, crisis definition, preference parameter, and sample length, as well as to using different sets of variables and data transformations. Thus, our paper suggests that further enhancements to machine learning early warning models are needed before they are able to offer a substantial value-added for predicting systemic banking crises. Conventional logit models appear to use the available information already fairly efficiently, and would for instance have been able to predict the 2007/2008 financial crisis out-of-sample for many countries. In line with economic intuition, these models identify credit expansions, asset price booms and external imbalances as key predictors of systemic banking crises.
Keywords: early warning system, logit, machine learning, systemic banking crises
JEL: C35 C53 G01
Date: 2018
URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:482018&r=all
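The recursive out-of-sample design described above can be sketched generically: re-fit each model on an expanding window and score it only on the next, unseen period. The data here are synthetic and the random forest merely stands in for the paper's set of machine learning methods; both are assumptions for illustration.

```python
# Expanding-window (recursive) out-of-sample comparison of a logit benchmark
# with a random forest, scored by AUC. Data are synthetic for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
T, k = 300, 5
X = rng.normal(size=(T, k))
# Crisis probability driven by two "credit/asset-price" factors.
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.0)))
y = rng.binomial(1, p)

preds, truth = {"logit": [], "forest": []}, []
for t in range(200, T):                      # expanding estimation window
    models = {
        "logit": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    for name, m in models.items():
        m.fit(X[:t], y[:t])                  # fit on data up to t only
        preds[name].append(m.predict_proba(X[t:t + 1])[0, 1])
    truth.append(y[t])

for name in preds:
    print(name, "out-of-sample AUC:", round(roc_auc_score(truth, preds[name]), 3))
```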
By: Tsutsumi, Masahiko
Abstract: This paper evaluates the economic consequences of the 2018 US-China trade conflict. The potential impact of the proposed tariff increases is calculated using a global CGE model. Capital deepening and technological spillover induced by trade are also taken into account to explore the long-run influence. We derive the following implications. First, the additionally imposed tariffs on goods alone reduce GDP in the US and China by 0.1% and 0.2%, respectively. The equivalent variation in the US and China falls by 9.8 billion and 35.2 billion USD, respectively. Although other countries gain from trade diversion, losses exceed gains globally. Second, accounting for capital deepening and technological spillover induced by trade makes the situation worse: GDP in the US and China declines by 1.6% and 2.5%, respectively, and the equivalent variation is reduced by 199.5 billion USD and 187.1 billion USD, respectively. Again, trade diversion is not large enough to recover the losses in these countries. Third, the imposed tariffs distort relative prices, changing the global production structure. Both the US and China lose their comparative advantage in transport, electronic, and machinery equipment production, while other countries expand their production in these sectors. Finally, China’s retaliatory tariff increases worsen the US economy to some extent, but at a cost to the Chinese economy. In the long run, retaliation is not an appropriate policy response.
Keywords: the US, China, tariff, trade policy, retaliation, CGE model
JEL: F13 F17 F51
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:hit:cisdps:676&r=all
By: Alessandro Pluchino; Alessio E. Biondo; Andrea Rapisarda
Abstract: We review recent numerical results on the role of talent and luck in achieving success, by means of a schematic agent-based model. In general, the role of luck turns out to be very relevant for success, while talent is necessary but not sufficient. Funding strategies to improve the success of the most talented people are also discussed.
Date: 2018–11
URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.05206&r=all
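The schematic model reviewed above is compact enough to sketch directly: talented agents are exposed to random lucky and unlucky events, capital doubles on a lucky event exploited with probability equal to talent and halves on an unlucky one. This follows the well-known Pluchino-Biondo-Rapisarda setup, but the exact parameter values below are illustrative assumptions.

```python
# Schematic talent-vs-luck model: lucky events double capital with
# probability equal to talent; unlucky events halve it. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, periods, p_event = 1000, 80, 0.25              # agents, steps, event rate

talent = np.clip(rng.normal(0.6, 0.1, N), 0, 1)   # bounded talent distribution
capital = np.full(N, 10.0)

for _ in range(periods):
    hit = rng.random(N) < p_event                 # who experiences an event
    lucky = rng.random(N) < 0.5                   # event type
    exploited = rng.random(N) < talent            # talent turns luck into gain
    capital = np.where(hit & lucky & exploited, capital * 2, capital)
    capital = np.where(hit & ~lucky, capital / 2, capital)

best = capital.argmax()
print("max capital:", round(capital.max(), 1),
      "| talent of richest agent:", round(talent[best], 2))
print("mean talent of top 10:", round(talent[np.argsort(capital)[-10:]].mean(), 2))
```

Typically the richest agent is only moderately talented, which is the "luck matters" finding the abstract summarizes.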
By: Mbanda, Vandudzai; Bonga-Bonga, Lumengo
Abstract: This paper assesses the general equilibrium impacts of public infrastructure investment in the South African economy by making use of complementary general equilibrium models: the social accounting matrix (SAM) multiplier, Structural Path Analysis (SPA), and a Computable General Equilibrium (CGE) model. Both the SAM and CGE analyses indicate that increasing public economic infrastructure can be an effective way of stimulating the economy that has a positive impact on labour. The SPA shows that the main and most important path of influence is the direct influence of the public economic sector on each of the formal labour categories. However, because the public economic sector does not employ informal labour, this labour account is only connected indirectly, via intermediate consumption of the construction sector's output. This is an important outcome for South Africa, as the results suggest that an increase in public economic infrastructure could help address the problem of unemployment as well as that of the low income levels that exacerbate poverty.
Keywords: public infrastructure, structural path analysis, social accounting matrix, computable general equilibrium
JEL: C67 C68 H54
Date: 2018–12–11
URL: http://d.repec.org/n?u=RePEc:pra:mprapa:90613&r=all
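The SAM-multiplier part of this toolkit reduces to a Leontief-type inverse: with A the matrix of endogenous expenditure shares, the total impact of an exogenous injection x is (I - A)^(-1) x. A minimal numerical sketch with made-up coefficients, not South African data:

```python
# SAM multiplier analysis in miniature: impact = (I - A)^{-1} @ injection.
# The 3-account share matrix below is a made-up illustration.
import numpy as np

A = np.array([          # endogenous expenditure shares (columns sum < 1)
    [0.20, 0.30, 0.10],
    [0.25, 0.10, 0.30],
    [0.15, 0.20, 0.05],
])
injection = np.array([100.0, 0.0, 0.0])   # e.g., public infrastructure spending

multiplier = np.linalg.inv(np.eye(3) - A)
impact = multiplier @ injection
print("account multipliers:\n", multiplier.round(3))
print("total impact of the injection:", impact.round(1))
```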
By: Adriano Koshiyama; Nick Firoozye; Philip Treleaven
Abstract: Systematic trading strategies are algorithmic procedures that allocate assets so as to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune the strategy, or discover how to combine weak signals in novel, alpha-creating ways. Both aspects, fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on both. Our work therefore proposes the use of Conditional Generative Adversarial Networks (cGANs) for calibrating and aggregating trading strategies. To this end, we provide a full methodology for: (i) training and selecting a cGAN for time series data; (ii) using each sample for strategy calibration; and (iii) using all generated samples for ensemble modelling. To provide evidence that our approach is well grounded, we designed an experiment with multiple trading strategies encompassing 579 assets. We compared the cGAN with an ensemble scheme and with model validation methods, both suited for time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when traditional techniques fail to generate any alpha.
Date: 2019–01
URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.01751&r=all
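A skeletal sketch of the cGAN ingredient, under heavy assumptions: fully connected generator and discriminator, both conditioned on a market-state vector, with placeholder data, architectures, and sizes that are not the paper's design. Steps (ii) and (iii) of the methodology would then calibrate a strategy on each generated sample and aggregate the calibrated parameters.

```python
# Minimal conditional GAN skeleton for return windows (PyTorch).
# Shapes, layer sizes, and the training data are illustrative assumptions.
import torch
import torch.nn as nn

win, cond_dim, noise_dim = 20, 5, 16   # return window, condition, noise sizes

G = nn.Sequential(nn.Linear(noise_dim + cond_dim, 64), nn.ReLU(),
                  nn.Linear(64, win))
D = nn.Sequential(nn.Linear(win + cond_dim, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_returns = 0.01 * torch.randn(2048, win)     # stand-in for market data
conditions = torch.randn(2048, cond_dim)         # stand-in market states

for step in range(500):
    idx = torch.randint(0, 2048, (128,))
    x, c = real_returns[idx], conditions[idx]
    z = torch.randn(128, noise_dim)
    fake = G(torch.cat([z, c], dim=1))

    # Discriminator update: real vs generated windows, given the condition.
    d_loss = (bce(D(torch.cat([x, c], dim=1)), torch.ones(128, 1)) +
              bce(D(torch.cat([fake.detach(), c], dim=1)), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator under the same condition.
    g_loss = bce(D(torch.cat([fake, c], dim=1)), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Ensemble use: draw many samples for one condition, one per calibration run.
z = torch.randn(100, noise_dim)
samples = G(torch.cat([z, conditions[:1].repeat(100, 1)], dim=1))
print("generated sample windows:", samples.shape)
```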
By: Claude Meidinger (Centre d'Economie de la Sorbonne - Université Paris 1 Panthéon-Sorbonne)
Abstract: Whether or not there is a pre-existing common “language” that ties down the literal meanings of cheap talk messages is a distinction plainly important in practice. But it is assumed irrelevant in traditional game theory, because it affects neither the payoff structure nor the theoretical possibilities for signaling. And when the “common-language” assumption is implicitly implemented in experiments, such situations ignore the meta-coordination problem created by communication: players must coordinate their beliefs on what various messages mean before they can use messages to coordinate on what to do. Using simulations with populations of artificial agents, the paper investigates the way in which a common meaning can be constituted through a collective process of learning, and compares the results thus obtained with those available from some experiments.
Keywords: Experimental Economics; Computational Economics; Signaling games
JEL: C73 C91 D03
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:18036&r=all
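A minimal version of the collective meaning-formation process: senders and receivers in a Lewis signaling game reinforce whichever message and action earned a payoff, with no meanings assigned in advance. The Roth-Erev-style urn rule is an illustrative assumption about the learning dynamic, not a reproduction of the paper's design.

```python
# Lewis sender-receiver game with reinforcement ("urn") learning: meanings
# are not pre-assigned; coordination emerges from payoffs. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n_states = n_msgs = n_acts = 3
sender = np.ones((n_states, n_msgs))      # urn weights: state -> message
receiver = np.ones((n_msgs, n_acts))      # urn weights: message -> action

def draw(weights):
    p = weights / weights.sum()
    return rng.choice(len(p), p=p)

hits = []
for t in range(20_000):
    state = rng.integers(n_states)
    msg = draw(sender[state])
    act = draw(receiver[msg])
    payoff = 1.0 if act == state else 0.0  # success iff action matches state
    sender[state, msg] += payoff           # reinforce what worked
    receiver[msg, act] += payoff
    hits.append(payoff)

print("success rate, last 1000 rounds:", np.mean(hits[-1000:]))
print("emergent code (state -> message):", sender.argmax(axis=1))
```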
By: Herbertsson, Alexander (Department of Economics, School of Business, Economics and Law, Göteborg University)
Abstract: We study CDS index options in a credit risk model where the default times have intensities driven by a finite-state Markov chain representing the underlying economy. In this setting we derive compact, computationally tractable formulas for the CDS index spread and the price of a CDS index option. In particular, the evaluation of the CDS index option is handled by translating the Cox framework into a bivariate Markov chain. Due to the potentially very large, but extremely sparse, matrices obtained in this reformulation, special treatment is needed to efficiently compute the matrix exponential arising from the Kolmogorov equation. We provide details of these computational methods as well as numerical results. The finite-state Markov chain model is calibrated to data with perfect fits, and several numerical studies are performed. In particular, we show that under the same exogenous circumstances, CDS index option prices in the Markov chain framework can be close to, or sometimes larger than, prices in models which assume that the CDS index spread follows a log-normal process. We also study the different default risk components in the option prices generated by the Markov model, an investigation which is difficult to carry out in models where the CDS index spread follows a log-normal process.
Keywords: Credit risk; CDS index; CDS index options; intensity-based models; dependence modelling; Markov chains; matrix-analytical methods; numerical methods
JEL: C02 C63 G13 G32 G33
Date: 2019–01–07
URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0748&r=all
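The computational crux mentioned above, matrix exponentials of very large but very sparse generators, is typically handled by computing the action of the exponential on a vector rather than the full matrix. A generic SciPy sketch, with a tiny made-up generator standing in for the bivariate-chain generator (the paper's own method may differ):

```python
# Action of a sparse generator's matrix exponential on a distribution:
# p(t) = p(0) expm(Q t), computed without forming expm(Q t) densely.
# The 3-state generator below is a made-up stand-in for the bivariate chain.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import expm_multiply

Q = csc_matrix(np.array([      # generator matrix: rows sum to zero
    [-0.3,  0.2,  0.1],
    [ 0.1, -0.4,  0.3],
    [ 0.0,  0.2, -0.2],
]))
p0 = np.array([1.0, 0.0, 0.0])  # start in state 1

t = 5.0
pt = expm_multiply(Q.T * t, p0)  # transpose: we propagate a row distribution
print("state distribution at t=5:", pt.round(4), "| sum:", pt.sum().round(6))
```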
By: Konrad Steiner (A.T. Kearney GmbH, Johannes Gutenberg University)
Abstract: This work addresses line planning for inter-city bus networks, which requires a high level of integration with other planning steps. One key reason is that passengers choose a specific timetabled service rather than just a line, as is typically the case in urban transportation. Schedule-based modeling approaches are required to incorporate this aspect, i.e., demand is assigned to a specific timetabled service. Furthermore, in liberalized markets, there is usually fierce competition within and across modes. This encourages considering dynamic demand, i.e., not relying on static demand values, but adjusting them based on the trip characteristics. We provide a schedule-based mixed-integer model formulation allowing a bus operator to optimize multiple timetabled services in a travel corridor, with simultaneous decisions on both departure time and which stations to serve. The demand behaves dynamically with respect to departure time, trip duration, trip frequency, and cannibalization. To solve this new problem formulation, we introduce a large multiple neighborhood search (LMNS) as an overall metaheuristic approach, together with multiple variations, including matheuristics. Applying the LMNS algorithm, we solve instances based on real-world data from the German market. Computation times are attractive, and the high quality of the solutions is confirmed by analyzing examples with known optimal solutions. Moreover, we show that the explicit consideration of the dependencies between the different timetabled services often produces insightful new results that differ from approaches which only focus on a single service.
Keywords: integration, schedule-based modeling, inter-city bus transportation, dynamic demand, large multiple neighborhood search (LMNS)
Date: 2018–12–20
URL: http://d.repec.org/n?u=RePEc:jgu:wpaper:1825&r=all
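The destroy-and-repair core of a large neighborhood search is easy to show in skeleton form. The toy objective below (choosing which stations to serve, trading demand against stop time and cannibalization between nearby stops) is an illustrative assumption; the paper's actual model is a schedule-based mixed-integer program.

```python
# Destroy-and-repair skeleton of a large neighborhood search on a toy
# "which stations to serve" problem. Objective and data are illustrative.
import random

random.seed(3)
n = 15
demand = [random.uniform(0, 10) for _ in range(n)]   # revenue if station served
pair_loss = [[0.0] * n for _ in range(n)]            # cannibalization, close stops
for i in range(n):
    for j in range(i + 1, n):
        pair_loss[i][j] = random.uniform(0, 1.5) if abs(i - j) <= 2 else 0.0

def value(served):
    s = sorted(served)
    v = sum(demand[i] for i in s)
    v -= sum(pair_loss[i][j] for i in s for j in s if i < j)
    return v - 2.0 * len(s)                          # stop-time penalty

current = set(range(0, n, 2))                        # arbitrary initial plan
best, best_val = set(current), value(current)

for _ in range(3000):
    # Destroy: drop a few random stations from the incumbent plan.
    partial = set(random.sample(sorted(current), max(0, len(current) - 3)))
    # Repair: re-insert candidates in random order while they improve things.
    candidates = [i for i in range(n) if i not in partial]
    random.shuffle(candidates)
    for i in candidates:
        if value(partial | {i}) > value(partial):
            partial.add(i)
    if value(partial) >= value(current):             # accept non-worsening moves
        current = partial
    if value(current) > best_val:
        best, best_val = set(current), value(current)

print("best stations:", sorted(best), "| value:", round(best_val, 2))
```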
By: Fix, Blair
Abstract: Where should we look to understand the origin of inequality? Most research focuses on three windows of evidence: (1) the archaeological record; (2) existing traditional societies; and (3) the historical record. I propose a fourth window of evidence - modern society itself. I hypothesize that we can infer the origin of inequality from the modern relation between energy use, hierarchy, and inequality. To do this, I create a large-scale numerical model that is informed by modern evidence. I then use this model to project modern trends into the past. The results are promising. The model predicts an explosion of inequality with the transition to agrarian levels of energy use. Subsequent increases in energy use are predicted to have little effect on inequality. The results are broadly consistent with the available evidence. This suggests that the hierarchical structure of modern societies may provide a window into the origin of inequality.
Keywords: origin of inequality, hierarchy, energy, institution size, numerical model, function, coercion
Date: 2018
URL: http://d.repec.org/n?u=RePEc:zbw:capwps:201809&r=all
By: Lechner, Michael
Abstract: Uncovering the heterogeneity of the causal effects of policies and business decisions at various levels of granularity provides substantial value to decision makers. This paper develops new estimation and inference procedures for multiple-treatment models in a selection-on-observables framework by modifying the Causal Forest approach suggested by Wager and Athey (2018). The new estimators have desirable theoretical and computational properties for various aggregation levels of the causal effects. An Empirical Monte Carlo study shows that they may outperform previously suggested estimators. Inference tends to be accurate for effects relating to larger groups and conservative for effects relating to fine levels of granularity. An application to the evaluation of an active labour market programme shows the value of the new methods for applied research.
Keywords: Causal machine learning, statistical learning, average treatment effects, conditional average treatment effects, multiple treatments, selection-on-observables, causal forests
JEL: C21 J68
Date: 2019–01
URL: http://d.repec.org/n?u=RePEc:usg:econwp:2019:01&r=all
By: Elias Cavalcante-Filho; Flavio Abdenur; Rodrigo De Losso
Abstract: Constructing optimal Markowitz mean-variance portfolios of publicly traded stock is a straightforward and well-known task. Doing the same for portfolios of privately owned firms, given the lack of historical price data, is a challenge. We apply machine learning models to historical accounting-variable data to estimate risk-return metrics – specifically, expected excess returns, price volatility, and (pairwise) price correlation – of private companies, which should allow the construction of mean-variance-optimized portfolios consisting of private companies. We attain out-of-sample R²s around 45%, while linear regressions yield R²s of only about 10%. This short paper is the result of a real-world consulting project on behalf of Votorantim S.A. (“VSA”), a multinational holding company. To the authors’ best knowledge, this is a novel application of machine learning in the finance literature.
Keywords: asset pricing; Machine Learning; Portfolio Theory
JEL: G12 G17
Date: 2018–12–20
URL: http://d.repec.org/n?u=RePEc:spa:wpaper:2018wpecon23&r=all
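The modelling exercise described (predicting risk-return metrics from accounting variables, comparing machine learning against linear regression out-of-sample) looks roughly like the following. The data are synthetic and the gradient-boosting choice is an assumption, since the abstract does not name the models used.

```python
# Out-of-sample R^2 of a nonlinear learner vs. a linear regression when the
# target depends nonlinearly on accounting variables. Synthetic illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 6))                  # stand-in accounting ratios
# Nonlinear "excess return" target: interactions and a threshold effect.
y = (0.3 * X[:, 0] * X[:, 1] + np.where(X[:, 2] > 0.5, 0.8, -0.2)
     + 0.5 * np.tanh(X[:, 3]) + 0.5 * rng.normal(size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("boosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "out-of-sample R^2:", round(model.score(X_te, y_te), 3))
```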
By: Miyazaki, Kumiko; Sato, Ryusuke
Abstract: AI has been through several booms, and we have currently reached the third. Although AI has been evolving over six decades, the current boom seems different from the previous ones. In this paper, we attempt to elucidate the issues surrounding the widespread adoption of AI in firms. One author's work experience related to AI suggests that although companies are willing to consider adopting AI for various applications, only a few are willing to make a commitment to full-scale adoption. The main goal of this paper is to identify the characteristics of the current, third AI boom and to analyze the issues for adoption by firms. For this purpose we put forward three research questions. 1) How has the technological performance in AI changed at the national level during the 2nd and the 3rd boom? 2) How have the key technologies and the applications of AI changed over time? 3) What is companies' perspective on AI, and what are the necessary conditions for firms to adopt it? Through bibliometric analysis, we were able to extract the important keywords of the 3rd AI boom: machine learning and deep learning. The main focus of AI research has been shifting towards AI applications. Interviews with firms that were considering adopting AI suggested the existence of a gap between the needs of the company and what AI can deliver at present. AI could, for instance, be used for finding suitable treatments for genetic illnesses if certain issues are solved.
Date: 2018
URL: http://d.repec.org/n?u=RePEc:zbw:itsb18:190377&r=all
By: Dominik Gutt (Paderborn University)
Abstract: The growing body of literature on online ratings has reached a consensus on the positive impact of the average rating and the number of ratings on economic outcomes. Yet little is known about the economic implications of the online rating variance, and existing studies have presented contradictory results. This study therefore examines the impact of the online rating variance on the prices and sales of digital cameras on Amazon.com. The key feature of our study is that we employ and validate a machine learning approach to decompose the online rating variance into a product-failure-related share and a taste-related share. In line with our theoretical foundation, our empirical results highlight that the failure-related variance share has a negative impact on price and sales, while the impact of the taste-related share is positive. Our results offer a new perspective on the online rating variance that has been largely neglected by prior studies. Sellers can benefit from our results by adjusting their pricing strategy and improving their sales forecasts. Review platforms can facilitate the identification of product-failure-related ratings to support customers' purchasing decisions.
Keywords: Online Rating Variance, Text Mining, Econometrics, User-Generated Social Media
JEL: M15 M31 O32 D12
Date: 2018–12
URL: http://d.repec.org/n?u=RePEc:pdn:dispap:40&r=all
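The decomposition step can be sketched as follows: a classifier tags each review text as failure-related or taste-related, and the rating variance is then split across the two groups. The keyword rule below is only a stand-in for the paper's trained machine learning classifier, and the reviews are made up.

```python
# Splitting rating variance into failure-related and taste-related shares.
# The keyword classifier is a stand-in for a trained text model; data made up.
import numpy as np

reviews = [
    (1, "camera stopped working after two weeks"),
    (2, "lens error, clearly defective unit"),
    (5, "love the retro design"),
    (2, "colors too saturated for my taste"),
    (5, "great grip and menus"),
    (4, "solid camera, a bit heavy for me"),
]
FAILURE_WORDS = {"broken", "defective", "error", "stopped", "dead"}

ratings = np.array([r for r, _ in reviews], dtype=float)
is_failure = np.array([any(w in text for w in FAILURE_WORDS)
                       for _, text in reviews])

mean = ratings.mean()
total_ss = ((ratings - mean) ** 2).sum()           # total variation to split
failure_share = ((ratings[is_failure] - mean) ** 2).sum() / total_ss
print("failure-related variance share:", round(failure_share, 3))
print("taste-related variance share:  ", round(1 - failure_share, 3))
```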