on Computational Economics
Issue of 2020‒10‒26
33 papers chosen by
By: | Kyle Steinhauer; Takahisa Fukadai; Sho Yoshida |
Abstract: | We use an optimization procedure based on simulated bifurcation (SB) to solve the integer portfolio and trading trajectory problem with unprecedented computational speed. The underlying algorithm is based on a classical description of quantum adiabatic evolutions of a network of non-linearly interacting oscillators. This formulation has already proven to beat state-of-the-art computation times for other NP-hard problems and is expected to show similar performance for certain portfolio optimization problems. Motivated by this, we apply the SB approach to the integer portfolio optimization problem with quantity constraints and trading activities. We show first numerical results for portfolios of up to 1000 assets, which already confirm the power of the SB algorithm for its novel use-case as a portfolio and trading trajectory optimizer. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.08412&r=all |
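The ballistic variant of simulated bifurcation lends itself to a compact sketch. The snippet below evolves oscillator positions and momenta for a generic Ising objective; in the paper's setting the coupling matrix and field would encode the covariances, expected returns and constraint penalties of the integer portfolio problem, whereas here they are random placeholders, and the step size, ramp schedule and coupling scale are illustrative guesses rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Ising encoding: in the paper's setting J and h would be built
# from the covariance matrix, expected returns and penalty terms for the
# quantity constraints; here they are random placeholders.
n = 50
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)

def energy(s):
    return -0.5 * s @ J @ s - h @ s

# Ballistic simulated-bifurcation dynamics (discretized oscillator network).
steps, dt, a0 = 2000, 0.05, 1.0
c0 = 0.5 / (np.sqrt(n) * J.std())          # coupling scale heuristic
x = rng.uniform(-0.1, 0.1, n)              # oscillator positions
y = np.zeros(n)                            # oscillator momenta

for k in range(steps):
    a_t = a0 * k / steps                   # slowly ramped bifurcation parameter
    y += dt * (-(a0 - a_t) * x + c0 * (J @ x + h))
    x += dt * a0 * y
    # Inelastic walls keep positions in [-1, 1]
    clipped = np.abs(x) > 1.0
    x[clipped] = np.sign(x[clipped])
    y[clipped] = 0.0

spins = np.sign(x)                          # candidate binary decision per asset
print("Ising energy of SB solution:", energy(spins))
```

At the end of the ramp, the sign of each oscillator position is read off as the binary decision for the corresponding asset.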
By: | Hamed Vaheb |
Abstract: | This thesis serves three primary purposes, the first of which is to forecast two stocks, namely Goldman Sachs (GS) and General Electric (GE). In order to forecast stock prices, we used a long short-term memory (LSTM) model whose inputs were the prices of two other stocks that are closely correlated with GS. Other models, such as ARIMA, were used as benchmarks. Empirical results highlight the practical challenges of using LSTM for forecasting stocks. One of the main obstacles was a recurring lag, which we call the "forecasting lag". The second purpose is to develop a more general and objective perspective on the task of time series forecasting so that it can be applied to assist in an arbitrary forecasting task carried out with ANNs. To this end, we categorise previous works according to certain criteria so as to summarise those containing useful information. The summarised information is then unified and expressed through a common terminology that can be applied to the different steps of a time series forecasting task. The final purpose of this thesis is to elaborate on a mathematical framework on which ANNs are based. We use the framework introduced in the book "Neural Networks in Mathematical Framework" by Anthony L. Caterini, in which the structure of a generic neural network is formalised and the gradient descent algorithm (which incorporates backpropagation) is expressed in terms of that framework. Finally, we apply this framework to a specific architecture, recurrent neural networks, on which we concentrate and on which our implementations are based. The book proves its theorems mostly for the classification case; we instead prove them for the regression case, which is the setting of our problem. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.06417&r=all |
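As a rough illustration of the forecasting setup described above, the sketch below trains a small Keras LSTM on windows of two correlated price series to predict the target stock's next price. The synthetic random-walk data, look-back length and network size are stand-ins; the thesis's actual feature set, ARIMA benchmark and tuning are not reproduced here.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: aligned daily prices of the target stock (column 0) and
# of two correlated peers (columns 1-2), since the thesis uses correlated
# series as predictors. A random walk stands in for real data; in practice
# the series would be scaled before training.
rng = np.random.default_rng(0)
T = 1000
prices = np.cumsum(rng.normal(size=(T, 3)), axis=0) + 100.0

def make_windows(data, lookback=20):
    X, y = [], []
    for t in range(lookback, len(data)):
        X.append(data[t - lookback:t, 1:])   # past prices of the two peers
        y.append(data[t, 0])                 # target stock's next price
    return np.array(X), np.array(y)

X, y = make_windows(prices)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, batch_size=32, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
# The "forecasting lag" discussed in the abstract becomes visible when pred[t]
# tracks the previously observed value y[t-1] more closely than y[t].
print("test RMSE:", float(np.sqrt(np.mean((pred - y[split:]) ** 2))))
```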
By: | Saeed Nosratabadi; Amir Mosavi; Ramin Keivani; Sina Ardabili; Farshid Aram |
Abstract: | Deep learning (DL) and machine learning (ML) methods have recently contributed to the advancement of models for prediction, planning, and uncertainty analysis of smart cities and urban development. This paper presents the state of the art of DL and ML methods used in this realm. Through a novel taxonomy, the advances in model development and new application domains in urban sustainability and smart cities are presented. Findings reveal that five DL and ML methods have been most applied to address the different aspects of smart cities. These are artificial neural networks; support vector machines; decision trees; ensembles, Bayesians, hybrids, and neuro-fuzzy; and deep learning. It is also shown that energy, health, and urban transport are the main smart-city domains in which DL and ML methods have been applied to address problems. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.02670&r=all |
By: | Cosimo Magazzino (Roma Tre University); Marco Mele (UNITE - Universita degli studi di Teramo - University of Teramo [Italie]); Nicolas Schneider; Guillaume Vallet (CREG - Centre de recherche en économie de Grenoble - UGA [2020-....] - Université Grenoble Alpes [2020-....]) |
Abstract: | This study aims to investigate the relationship between nuclear energy consumption and economic growth in Switzerland over the period 1970–2018. We use data on capital, labour, and exports within a multivariate framework. Starting from the consideration that Switzerland has decided to phase out nuclear energy by 2034, we examine the effect of this structural economic-energy change in the country. To do so, two distinct estimation tools are employed. The first model, using a time-series approach, analyses bivariate and multivariate causality relationships. The second, using a machine learning methodology, tests the results of the econometric modelling through an artificial neural network process. This latter empirical procedure represents our original contribution with respect to previous energy-GDP papers. The results, in the logarithmic propagation of neural networks, suggest a careful analysis of the process that will lead to the abandonment of nuclear energy in Switzerland, so as to avoid adverse effects on economic growth. |
Keywords: | nuclear energy consumption, GDP, employment, capital stock, time-series, artificial neural networks |
Date: | 2020–09–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:halshs-02951860&r=all |
By: | Iro René Kouarfate; Michael A. Kouritzin; Anne MacKay |
Abstract: | An explicit weak solution for the 3/2 stochastic volatility model is obtained and used to develop a simulation algorithm for option pricing purposes. The 3/2 model is a non-affine stochastic volatility model whose variance process is the inverse of a CIR process. This property is exploited here to obtain an explicit weak solution, similarly to Kouritzin (2018). A simulation algorithm based on this solution is proposed and tested via numerical examples. The performance of the resulting pricing algorithm is comparable to that of other popular simulation algorithms. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.09058&r=all |
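The inverse-CIR structure of the 3/2 variance makes even a plain Euler Monte Carlo easy to write down. The sketch below prices a European call this way, as a baseline a reader could compare the paper's explicit weak-solution scheme against; it is not that scheme, and all parameter values are illustrative.

```python
import numpy as np

# Plain Euler Monte Carlo for a European call under the 3/2 model, using the
# inverse-CIR property of the variance mentioned in the abstract. This is a
# baseline for comparison, not the paper's explicit weak-solution algorithm.
rng = np.random.default_rng(1)

S0, v0, r = 100.0, 0.04, 0.01
kappa, theta, eps, rho = 2.0, 0.04, 0.7, -0.6   # 3/2 dynamics: dv = kappa*v*(theta - v)dt + eps*v**1.5 dW
K, T_mat = 100.0, 0.5
n_paths, n_steps = 100_000, 250
dt = T_mat / n_steps

# u = 1/v follows a CIR process: du = kappa*theta*(u_bar - u)dt - eps*sqrt(u) dW
u_bar = (kappa + eps ** 2) / (kappa * theta)

u = np.full(n_paths, 1.0 / v0)
logS = np.full(n_paths, np.log(S0))
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
    v = 1.0 / np.maximum(u, 1e-8)
    logS += (r - 0.5 * v) * dt + np.sqrt(v * dt) * z1
    # the variance Brownian enters u with a minus sign (Ito's lemma on 1/v)
    u += kappa * theta * (u_bar - u) * dt - eps * np.sqrt(np.maximum(u, 0.0) * dt) * z2
    u = np.maximum(u, 1e-8)                      # full-truncation fix for Euler

price = np.exp(-r * T_mat) * np.mean(np.maximum(np.exp(logS) - K, 0.0))
print("Euler MC call price:", round(price, 4))
```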
By: | Sidra Mehtab; Jaydip Sen; Abhishek Dutta |
Abstract: | Prediction of stock prices has been an important area of research for a long time. While supporters of the efficient market hypothesis believe that it is impossible to predict stock prices accurately, there are formal propositions demonstrating that accurate modeling and the design of appropriate variables may lead to models with which stock prices and stock price movement patterns can be predicted very accurately. In this work, we propose a hybrid modeling approach for stock price prediction that builds different machine learning and deep learning based models. For the purpose of our study, we used NIFTY 50 index values of the National Stock Exchange (NSE) of India from December 29, 2014 to July 31, 2020. We built eight regression models using training data consisting of NIFTY 50 index records from December 29, 2014 to December 28, 2018. Using these regression models, we predicted the open values of the NIFTY 50 for the period December 31, 2018 to July 31, 2020. We then augment the predictive power of our forecasting framework by building four deep learning-based regression models using long short-term memory (LSTM) networks with a novel approach of walk-forward validation. We exploit the power of LSTM regression models in forecasting future NIFTY 50 open values using four different models that differ in their architecture and in the structure of their input data. Extensive results on various metrics are presented for all the regression models. The results clearly indicate that the LSTM-based univariate model that uses one week of prior data as input for predicting the next week's open value of the NIFTY 50 time series is the most accurate model. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.10819&r=all |
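Walk-forward validation itself is easy to illustrate: refit on everything observed so far, forecast one step ahead, then roll forward. The sketch below does this with a linear model on a synthetic series purely to keep the code short; the paper applies the same loop with LSTM networks on weekly NIFTY 50 data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Walk-forward validation in miniature: refit on all data observed so far,
# predict the next step, then roll the window forward. A linear model on a
# synthetic series stands in for the paper's LSTM networks.
rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=300)) + 100.0   # stand-in for weekly open values

lookback = 4                                       # "one week of prior data" analogue
X = np.array([series[t - lookback:t] for t in range(lookback, len(series))])
y = series[lookback:]

train_size, errors = 200, []
for t in range(train_size, len(y)):
    model = LinearRegression().fit(X[:t], y[:t])   # expanding training window
    pred = model.predict(X[t:t + 1])[0]            # one-step-ahead forecast
    errors.append(pred - y[t])

print("walk-forward RMSE:", float(np.sqrt(np.mean(np.square(errors)))))
```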
By: | Arno Botha; Conrad Beyers; Pieter de Villiers |
Abstract: | A novel procedure is presented for the objective comparison and evaluation of a bank's decision rules in optimising the timing of loan recovery. This procedure is based on finding a delinquency threshold at which the financial loss of a loan portfolio (or segment therein) is minimised. Our procedure is an expert system that incorporates the time value of money, costs, and the fundamental trade-off between accumulating arrears versus forsaking future interest revenue. Moreover, the procedure can be used with different delinquency measures (other than payments in arrears), thereby allowing an indirect comparison of these measures. We demonstrate the system across a range of credit risk scenarios and portfolio compositions. The computational results show that threshold optima can exist across all reasonable values of both the payment probability (default risk) and the loss rate (loan collateral). In addition, the procedure reacts positively to portfolios afflicted by either systematic defaults (such as during an economic downturn) or episodic delinquency (i.e., cycles of curing and re-defaulting). In optimising a portfolio's recovery decision, our procedure can better inform the quantitative aspects of a bank's collection policy than relying on arbitrary discretion alone. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.11064&r=all |
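The threshold search at the heart of the procedure can be caricatured in a few lines: simulate repayment paths, write a loan off once consecutive missed payments reach a candidate threshold, and pick the threshold minimising discounted portfolio loss. Everything below (payment probability, loss severity, discounting, the arrears measure) is a toy stand-in for the paper's richer loss model.

```python
import numpy as np

# Toy threshold search: simulate repayment paths, write a loan off once the
# count of consecutive missed payments reaches a candidate threshold d, and
# pick the d that minimises the discounted portfolio loss.
rng = np.random.default_rng(3)

n_loans, n_periods = 2000, 60
instalment, rate = 1.0, 0.005            # per-period instalment and discount rate
p_pay, loss_rate = 0.9, 0.4              # probability of paying, loss severity at write-off
paid = rng.random((n_loans, n_periods)) < p_pay
discount = (1 + rate) ** -np.arange(1, n_periods + 1)

def portfolio_loss(d):
    total = 0.0
    for path in paid:
        arrears, t_default = 0, None
        for t, ok in enumerate(path):
            arrears = 0 if ok else arrears + 1
            if arrears >= d:               # recovery is triggered at threshold d
                t_default = t
                break
        if t_default is None:              # loan runs to term; lose only missed instalments
            total += float((~path) * instalment @ discount)
        else:                              # foreclose: lose a fraction of the remaining balance
            remaining = instalment * (n_periods - t_default)
            total += loss_rate * remaining * discount[t_default]
    return total / n_loans

thresholds = list(range(1, 13))
losses = [portfolio_loss(d) for d in thresholds]
print("loss-minimising delinquency threshold:", thresholds[int(np.argmin(losses))])
```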
By: | Sina Ardabili; Amir Mosavi; Asghar Mahmoudi; Tarahom Mesri Gundoshmian; Saeed Nosratabadi; Annamaria R. Varkonyi-Koczy |
Abstract: | Recent developments in computer and electronic systems have enabled the use of intelligent systems for the automation of agricultural industries. In this study, the temperature variation of a mushroom growing room was modeled by multi-layer perceptron (MLP) and radial basis function (RBF) networks based on independent parameters including ambient temperature, water temperature, fresh air and circulation air dampers, and water tap. According to the obtained results, the best MLP network was found in the second repetition with 12 neurons in the hidden layer, and the best radial basis function network with 20 neurons in the hidden layer. Comparing the two networks showed the highest correlation coefficient (0.966), the lowest root mean square error (RMSE) (0.787) and the lowest mean absolute error (MAE) (0.02746) for the radial basis function network. Therefore, the radial basis function network was selected as the predictor of system behavior for the temperature control system of mushroom growing halls. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.02673&r=all |
By: | Artur Sokolovsky; Luca Arnaboldi |
Abstract: | The study introduces an automated trading system for S&P 500 E-mini futures (ES) based on state-of-the-art machine learning. Concretely, we extract a set of scenarios from the tick market data to train the model and further use the predictions to model trading. We define the scenarios from the local extrema of the price action. Price extrema are a commonly traded pattern; however, to the best of our knowledge, no existing study presents a pipeline for automated classification and profitability evaluation. Our study fills this gap by presenting a broad evaluation of the approach, which yields an average Sharpe ratio of 6.32. However, we do not take into account order execution queues, which of course affect the result in a live-trading setting. The obtained performance results give us confidence that this approach is worthwhile. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.09993&r=all |
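A minimal version of the scenario-extraction step might look as follows: local price extrema are detected and the preceding window of ticks becomes the feature vector for a classifier, with the annualised Sharpe ratio used later to score the simulated trading. The window length, extrema order and synthetic price series are placeholders, not the paper's specification.

```python
import numpy as np
from scipy.signal import argrelextrema

# Sketch of the scenario-extraction idea: label local price extrema and
# attach the preceding window of ticks as features for a classifier.
rng = np.random.default_rng(4)
price = np.cumsum(rng.normal(size=5000)) + 1000.0   # stand-in for ES tick prices

order, window = 20, 50
minima = argrelextrema(price, np.less_equal, order=order)[0]
maxima = argrelextrema(price, np.greater_equal, order=order)[0]

X, y = [], []
for idx, label in [(minima, 1), (maxima, 0)]:       # 1 = local minimum, 0 = local maximum
    for t in idx:
        if t >= window:
            X.append(price[t - window:t] - price[t - window])  # window relative to its start
            y.append(label)
X, y = np.array(X), np.array(y)
print("extracted scenarios:", X.shape)

# Annualised Sharpe ratio of a daily return series, as used to report
# strategy performance (252 trading days assumed).
def sharpe(daily_returns):
    return np.sqrt(252) * np.mean(daily_returns) / np.std(daily_returns)
```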
By: | Guido de Blasio (Structural Economic Analysis Directorate, Bank of Italy); Alessio D'Ignazio (Structural Economic Analysis Directorate, Bank of Italy); Marco Letta |
Abstract: | Using police archives, we apply machine learning algorithms to predict corruption crimes in Italian municipalities during the period 2012-2014. We correctly identify over 70% (slightly less than 80%) of the municipalities that will experience corruption episodes (an increase in corruption crimes). We show that algorithmic predictions could strengthen the ability of Italy's 2012 anti-corruption law to fight white-collar crime. |
Keywords: | crime prediction, white-collar crimes, machine learning, classification trees, policy targeting |
JEL: | C52 D73 H70 K10 |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:saq:wpaper:16/20&r=all |
By: | Bernhardt, Lea (Helmut Schmidt University, Hamburg) |
Abstract: | In this paper, we analyse the final decisions for merger cases prepared by the European Commission (EC) since 1990 and build a unique subsample of all non-cleared cases. These incorporate all merger notifications which were either withdrawn by the notifying parties or prohibited by the European Commission. We find a sudden decline in prohibitions and withdrawals of cases since 2002 and explore three judicial defeats of the European Commission as determining factors behind these developments. We also find a higher likelihood of withdrawal or prohibition if cases are registered in sectors comprising firms in the information and communication or transportation and storage businesses. When classifying the documents with a supervised machine learning algorithm, we are able to automatically identify the cleared versus the non-cleared cases with over 90% accuracy. Finally, we find that network effects, high market shares and the risk of collusion are the main competitive concerns contributing to prohibition decisions in the information and communications sector. |
Keywords: | mergers; competition policy; EU Commission; classification; network effects |
JEL: | G34 K21 L40 |
Date: | 2020–10–08 |
URL: | http://d.repec.org/n?u=RePEc:ris:vhsuwp:2020_184&r=all |
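A bare-bones version of the supervised classification step could be a TF-IDF representation of each decision text fed to a linear classifier, as sketched below. The four toy documents and the choice of logistic regression are illustrative; the paper's corpus, labels and tuned model are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Minimal supervised text-classification pipeline for separating cleared from
# non-cleared merger decisions; the toy documents stand in for the
# Commission's decision texts.
docs = [
    "the notified concentration does not raise serious doubts and is cleared",
    "commitments remedy the concerns and the merger is declared compatible",
    "the concentration would create a dominant position and is prohibited",
    "the notification was withdrawn after serious doubts were raised",
]
labels = [1, 1, 0, 0]                    # 1 = cleared, 0 = non-cleared

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, docs, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```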
By: | Sanguinet, Eduardo; Alvim, Augusto |
Abstract: | This article aims to assess the impact of the EU-Mercosur Agreement on the Brazilian economy using the Global Trade Analysis Project (GTAP) computable general equilibrium (CGE) model. The study proposes two sets of simulations – one with the United Kingdom as a member of the EU and the other without it, reflecting the Brexit context. There is evidence of positive effects on foreign trade and on the welfare level in Brazil, with emphasis on manufactured goods and grain crops. The EU consolidates its presence in global trade. The results show that Mercosur benefits Brazilian foreign trade, making it a strategic partner at the regional level. It is concluded that Brexit can reduce Brazilian gains from the EU-Mercosur agreement, making it important to discuss the creation of an additional agreement involving the United Kingdom. |
Keywords: | Policy analysis, GTAP, Brexit, Brazil, Mercosur. |
JEL: | D58 F13 R13 |
Date: | 2020–03–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:103010&r=all |
By: | Kluge, Jan (Institute for Advanced Studies, Vienna, Austria); Lappoehn, Sarah (Institute for Advanced Studies, Vienna, Austria); Plank, Kerstin (Institute for Advanced Studies, Vienna, Austria) |
Abstract: | This paper aims at identifying relevant indicators of TFP growth in EU countries during the recovery phase following the 2008/09 economic crisis. We proceed in three steps: First, we estimate TFP growth by means of Stochastic Frontier Analysis (SFA). Second, we perform a TFP growth decomposition in order to obtain measures of changes in technical progress (CTP), technical efficiency (CTE), scale efficiency (CSC) and allocative efficiency (CAE). Third, we use BART – a non-parametric Bayesian technique from the realm of statistical learning – to identify relevant predictors of TFP and its components from the Global Competitiveness Reports. We find that only a few indicators prove to be stable predictors. In particular, indicators that characterize technological readiness, such as broadband internet access, are outstandingly important for pushing technical progress, while indicators describing innovation seem to speed up CTP only in higher-income economies. The results presented in this paper can serve as guidelines for policymakers, as they identify areas in which further action could be taken in order to increase economic growth. Concerning the bigger picture, it becomes obvious that advanced machine learning techniques may not be able to replace sound economic theory, but they help separate the wheat from the chaff when it comes to selecting the most relevant indicators of economic competitiveness. |
Keywords: | Competitiveness, TFP growth, Stochastic Frontier Analysis, BART |
JEL: | C23 E24 O47 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:ihs:ihswps:24&r=all |
By: | Ya Chen (Hefei University of Technology, China); Mike Tsionas (Lancaster University, United Kingdom); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia) |
Abstract: | In data envelopment analysis (DEA), the curse of dimensionality may jeopardize the accuracy or even the relevance of results when the dimension of inputs and outputs is relatively large, even for relatively large samples. Recently, a machine learning approach based on the least absolute shrinkage and selection operator (LASSO) for variable selection was combined with SCNLS (a special case of DEA) and dubbed LASSO-SCNLS, as a way to circumvent the curse of dimensionality. In this paper, we revisit this interesting approach by considering various data generating processes. We also explore a more advanced version of LASSO, the so-called elastic net (EN) approach, adapt it to DEA and propose the EN-DEA. Our Monte Carlo simulations provide additional and, to some extent, new evidence and conclusions. In particular, we find that none of the considered approaches clearly dominates the others. To circumvent the curse of dimensionality of DEA in the context of big wide data, we also propose a simplified two-step approach which we call LASSO+DEA. We find that the proposed simplified approach could be more useful than the existing, more sophisticated approaches for reducing very large dimensions into sparser, more parsimonious DEA models that attain greater discriminatory power and suffer less from the curse of dimensionality. |
Keywords: | Data envelopment analysis; Data enabled analytics; Sign-constrained convex nonparametric least squares (SCNLS); Machine learning; LASSO; Elastic net; Big wide data |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:152&r=all |
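The simplified two-step idea is easy to sketch: let a LASSO regression pick which candidate inputs matter for the output, then run a standard input-oriented, constant-returns DEA on the retained inputs only. The synthetic Cobb-Douglas data, the log-log LASSO and the specific DEA formulation below are assumptions made for illustration rather than the paper's Monte Carlo design.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LassoCV

# Step 0: synthetic production data with 2 relevant and 8 irrelevant inputs.
rng = np.random.default_rng(5)
n = 100
X_rel = rng.uniform(1, 10, size=(n, 2))
X_noise = rng.uniform(1, 10, size=(n, 8))
inputs_all = np.hstack([X_rel, X_noise])
output = (X_rel ** 0.4).prod(axis=1) * rng.uniform(0.6, 1.0, n)  # frontier with inefficiency

# Step 1: LASSO-based variable selection over all candidate inputs (log-log).
lasso = LassoCV(cv=5).fit(np.log(inputs_all), np.log(output))
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
X = inputs_all[:, keep]
print("inputs retained by LASSO:", keep)

# Step 2: input-oriented, constant-returns DEA on the reduced input set.
def dea_efficiency(X, y, o):
    m, n_dmu = X.shape[1], X.shape[0]
    c = np.r_[1.0, np.zeros(n_dmu)]                 # minimise theta
    A_in = np.c_[-X[o], X.T]                        # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((1, 1)), -y[None, :]]    # sum_j lam_j y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -y[o]]
    bounds = [(None, None)] + [(0, None)] * n_dmu
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x[0]

scores = [dea_efficiency(X, output, o) for o in range(n)]
print("mean DEA efficiency:", float(np.mean(scores)))
```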
By: | Steven J. Davis; Stephen Hansen; Cristhian Seminario-Amez |
Abstract: | Firm-level stock returns differ enormously in reaction to COVID-19 news. We characterize these reactions using the Risk Factors discussions in pre-pandemic 10-K filings and two text-analytic approaches: expert-curated dictionaries and supervised machine learning (ML). Bad COVID-19 news lowers returns for firms with high exposures to travel, traditional retail, aircraft production and energy supply — directly and via downstream demand linkages — and raises them for firms with high exposures to healthcare policy, e-commerce, web services, drug trials and materials that feed into supply chains for semiconductors, cloud computing and telecommunications. Monetary and fiscal policy responses to the pandemic strongly impact firm-level returns as well, but differently than pandemic news. Despite methodological differences, dictionary and ML approaches yield remarkably congruent return predictions. Importantly though, ML operates on a vastly larger feature space, yielding richer characterizations of risk exposures and outperforming the dictionary approach in goodness-of-fit. By integrating elements of both approaches, we uncover new risk factors and sharpen our explanations for firm-level returns. To illustrate the broader utility of our methods, we also apply them to explain firm-level returns in reaction to the March 2020 Super Tuesday election results. |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8594&r=all |
By: | Garance Génicot; Laurent Bouton; Micael Castanheira De Moura |
Abstract: | This paper studies the political determinants of inequalities in government interventions under majoritarian (MAJ) and proportional representation (PR) systems. We propose a model of electoral competition with highly targetable government interventions and heterogeneous localities. We uncover a novel relative electoral sensitivity effect that affects government interventions only under MAJ systems. This effect tends to reduce inequality in government interventions under MAJ systems when districts are composed of sufficiently homogeneous localities, and it goes against the conventional wisdom that MAJ systems are necessarily more conducive to inequality than PR systems. We illustrate the empirical relevance of our results with numerical simulations of possible reforms of the U.S. Electoral College. |
Keywords: | Distributive Politics; Electoral Systems; Electoral College; Public Good; Inequality |
JEL: | D72 H00 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/313225&r=all |
By: | Céline Bonnet (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Jan Philip Schain (DUCE - Dusseldorf Institute for Competition Economics - Heinrich-Heine-Universität Düsseldorf [Düsseldorf]) |
Abstract: | In this article, we extend the literature on merger simulation models by incorporating potential synergy gains into structural econometric analysis. We present a three-step integrated approach. First, we estimate a structural demand and supply model, as in Bonnet and Dubois (2010), which allows us to recover the marginal cost of each differentiated product. Then we estimate potential efficiency gains using the Data Envelopment Analysis approach of Bogetoft and Wang (2005) and some assumptions about exogenous cost shifters. In the last step, we simulate the new post-merger price equilibrium taking synergy gains into account, and derive price and welfare effects. We use a homescan dataset of dairy dessert purchases in France and show that for two of the three mergers considered, synergy gains could offset the post-merger upward pressure on prices. Some mergers could then be considered as not harmful to consumers. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02952921&r=all |
By: | Alves, Vasco |
Abstract: | This paper describes a duopoly market for healthcare where one of the two providers is publicly owned and charges a price of zero, while the other sets a price so as to maximize its profit. Both providers are subject to congestion in the form of an M/M/1 queue, and they serve patient-consumers who have randomly distributed unit costs of time. Consumer demand (as market share) for both providers is obtained and described. The private provider’s pricing decision is explored, equilibrium existence is proven, and conditions for uniqueness presented. Comparative statics for demand are presented. Social welfare functions are described and the welfare maximizing condition obtained. More detailed results are then obtained for cases when costs follow uniform and Kumaraswamy distributions. Numerical simulations are then performed for these distributions, employing several parameter values, demonstrating the private provider’s pricing decision and its relationship with social welfare. |
Keywords: | Waiting times; queueing; private health care; competition |
JEL: | D43 I11 L13 |
Date: | 2019–04–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:100996&r=all |
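The pricing problem can be explored numerically with a small fixed-point routine: given the private price, demand splits at the time cost of the indifferent patient, waiting times follow the M/M/1 formula W = 1/(mu - lambda), and the private provider grid-searches its profit-maximising price. The uniform cost distribution matches one of the cases studied in the paper; all numerical values and the damped iteration are illustrative choices, not the paper's solution method.

```python
import numpy as np

# Numerical sketch of the duopoly: both providers are M/M/1 queues, the
# public one charges zero, and patients with uniformly distributed time
# costs pick the cheaper option in (price + expected waiting cost).
mu, Lambda, c_max = 1.0, 0.9, 10.0          # service rate, total arrival rate, max unit time cost

def demand_split(p, iters=500):
    lam_priv = Lambda / 2                   # initial guess, then damped fixed-point iteration
    for _ in range(iters):
        W_pub = 1.0 / (mu - (Lambda - lam_priv))
        W_priv = 1.0 / (mu - lam_priv)
        if W_pub <= W_priv:                 # nobody pays for a slower queue
            target = 0.0
        else:
            c_star = p / (W_pub - W_priv)   # indifferent patient's time cost
            target = Lambda * max(0.0, 1.0 - c_star / c_max)
        lam_priv += 0.1 * (target - lam_priv)
    return lam_priv

prices = np.linspace(0.01, 8.0, 200)
profits = [p * demand_split(p) for p in prices]
print("profit-maximising private price (toy calibration):",
      round(prices[int(np.argmax(profits))], 3))
```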
By: | Andrew Clark (Department of Economics, University of Reading) |
Abstract: | A longitudinal (1844-1965) study of the Pound-Krona exchange rate is conducted using London Times article news sentiment, the gold price, GDP, and other relevant metrics to create a dynamic-systems, state-based model that predicts the yearly Pound-Krona exchange rate. The resulting model slightly outperforms a naive random-walk forecasting model. |
Keywords: | Econometrics, Machine Learning, Dynamic Systems, Complex Systems |
JEL: | C32 C53 C63 E17 F31 |
Date: | 2020–10–09 |
URL: | http://d.repec.org/n?u=RePEc:rdg:emxxdp:em-dp2020-22&r=all |
By: | Anastasios Petropoulos; Vassilis Siakoulis; Konstantinos P. Panousis; Theodoros Christophides; Sotirios Chatzis |
Abstract: | In the aftermath of the financial crisis, supervisory authorities have considerably improved their approaches to financial stress testing. However, they have received significant criticism from market participants due to the methodological assumptions and simplifications employed, which are seen as not accurately reflecting real conditions. First and foremost, current stress testing methodologies attempt to simulate the risks underlying a financial institution's balance sheet by using several satellite models, making their integration a very challenging task with significant estimation errors. Secondly, they still suffer from not employing advanced statistical techniques, like machine learning, which better capture the nonlinear nature of adverse shocks. Finally, the static balance sheet assumption that is often employed implies that the management of a bank passively monitors the realization of the adverse scenario but does nothing to mitigate its impact. To address the above-mentioned criticism, we introduce in this study a novel approach utilizing deep learning for dynamic balance sheet stress testing. Experimental results give strong evidence that deep learning applied to big financial/supervisory datasets creates a state-of-the-art paradigm capable of simulating real-world scenarios more efficiently. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.11075&r=all |
By: | Bruno Spilak; Wolfgang Karl Härdle |
Abstract: | Tail risk protection is a central concern of the financial industry and requires solid mathematical and statistical tools, especially when a trading strategy is derived. The recent hype around machine learning (ML) has made it necessary to display and understand the functionality of ML tools. In this paper, we present a dynamic tail risk protection strategy that targets a maximum predefined level of risk, measured by Value-at-Risk, while controlling for participation in bull market regimes. We propose different weak classifiers, parametric and non-parametric, that estimate the exceedance probability of the risk level, from which we derive trading signals in order to hedge tail events. We then compare the different approaches in terms of both statistical and trading-strategy performance. Finally, we propose an ensemble classifier that produces a meta tail risk protection strategy, improving both generalization and trading performance. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.03315&r=all |
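The core signal logic can be sketched with a single weak classifier: estimate the probability that tomorrow's return breaches the VaR target and step out of the market when that probability exceeds the target level. The features, the logistic-regression choice and the synthetic return series below are placeholders for the parametric and non-parametric classifiers the paper actually compares.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy version of the strategy logic: a weak classifier estimates the
# probability that tomorrow's return breaches the VaR target, and the
# position is switched to cash whenever that probability exceeds the target.
rng = np.random.default_rng(6)
T = 2000
vol = np.abs(np.cumsum(rng.normal(0, 0.02, T))) + 0.01     # persistent volatility proxy
ret = rng.normal(0.0003, vol)                              # asset returns

alpha = 0.01                                               # target VaR level
var_level = np.quantile(ret[:500], alpha)                  # in-sample VaR estimate

# features known at t, label = breach at t+1
feats = np.column_stack([vol[:-1], ret[:-1]])
label = (ret[1:] < var_level).astype(int)

split = 1000
clf = LogisticRegression(max_iter=1000).fit(feats[:split], label[:split])
p_breach = clf.predict_proba(feats[split:])[:, 1]

invested = (p_breach <= alpha).astype(float)               # in the market only if breach risk is low
strat_ret = invested * ret[split + 1:]
print("worst day, buy-and-hold vs protected:",
      round(ret[split + 1:].min(), 4), round(strat_ret.min(), 4))
```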
By: | Martin Jaraiz |
Abstract: | Most of the grand challenges facing humanity today involve complex agent-based systems, in fields such as epidemiology, economics and ecology. However, identifying the general principles underlying the self-organizing capabilities of these complex systems remains a pending challenge. This article presents a novel modeling approach capable of self-deploying both the system structure and the activities of goal-driven agents that can take appropriate actions to achieve their goals. Humans, robots, and animals are all endowed with this type of behavior. Self-organization is shown to emerge from the decisions of a common rational activity algorithm based on the information in a system-specific goal-dependency network. The unique self-deployment feature of this systematic approach can considerably boost the range and depth of application of agent-based modeling. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.10823&r=all |
By: | Stefan Tübbicke (University of Potsdam) |
Abstract: | Interest in evaluating the effects of continuous treatments has been on the rise recently. To facilitate the estimation of causal effects in this setting, the present paper introduces entropy balancing for continuous treatments (EBCT) by extending the original entropy balancing methodology of Hainmüller (2012). In order to estimate balancing weights, the proposed approach solves a globally convex constrained optimization problem, allowing for much more computationally efficient implementation compared to other available methods. EBCT weights reliably eradicate Pearson correlations between covariates and the continuous treatment variable. This is the case even when other methods based on the generalized propensity score tend to yield insufficient balance due to strong selection into different treatment intensities. Moreover, the optimization procedure is more successful in avoiding extreme weights attached to a single unit. Extensive Monte-Carlo simulations show that treatment effect estimates using EBCT display similar or lower bias and uniformly lower root mean squared error. These properties make EBCT an attractive method for the evaluation of continuous treatments. Software implementation is available for Stata and R. |
Keywords: | Balancing weights, Continuous Treatment, Monte-Carlo simulation, Observational studies |
JEL: | C14 C21 C87 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:pot:cepadp:21&r=all |
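The balancing idea can be illustrated by solving the standard entropy-balancing dual for zero-valued target moments: the weights are a softmax of C·lambda, with lambda chosen so that the weighted covariate means, the weighted treatment mean and the weighted covariate-treatment cross moments all hit their targets. This is a simplified stand-in for the EBCT routine and its Stata/R implementations, not that code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Entropy-balancing dual for zero-valued target moments: minimising the
# log-sum-exp of C @ lambda drives every weighted moment in C to zero,
# removing covariate-treatment correlations. Simplified stand-in for EBCT.
rng = np.random.default_rng(7)
n, p = 1000, 3
X = rng.normal(size=(n, p))
T_treat = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)   # selection into treatment intensity

Xc = X - X.mean(axis=0)
Tc = T_treat - T_treat.mean()
# constraint functions: covariate means, treatment mean, covariate-treatment cross moments
C = np.column_stack([Xc, Tc[:, None], Xc * Tc[:, None]])

res = minimize(lambda lam: logsumexp(C @ lam), np.zeros(C.shape[1]), method="BFGS")
w = np.exp(C @ res.x - logsumexp(C @ res.x))                   # balancing weights (sum to 1)

before = [np.corrcoef(X[:, j], T_treat)[0, 1] for j in range(p)]
after = [np.cov(X[:, j], T_treat, aweights=w)[0, 1] for j in range(p)]
print("correlation before:", np.round(before, 3))
print("weighted covariance after:", np.round(after, 6))
```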
By: | Andrey Itkin; Dmitry Muravey |
Abstract: | We continue a series of papers devoted to the construction of semi-analytic solutions for barrier options. These options are written on an underlying following a simple one-factor diffusion model, but all the parameters of the model, as well as the barriers, are time-dependent. We have shown that these solutions are systematically more efficient for pricing and calibration than, e.g., the corresponding finite-difference solvers. In this paper we extend this technique to pricing double barrier options and present two approaches to solving the problem: the general integral transform method and the heat potential method. Our results confirm that for double barrier options these semi-analytic techniques are also more efficient than the traditional numerical methods used to solve this type of problem. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.09342&r=all |
By: | Thomas Dierckx; Jesse Davis; Wim Schoutens |
Abstract: | Using machine learning and alternative data for the prediction of financial markets has been a popular topic in recent years. Many financial variables, such as stock price, historical volatility and trade volume, have already been investigated extensively. Remarkably, we found no existing research on the prediction of an asset's market implied volatility within this context. This forward-looking measure gauges the sentiment on the future volatility of an asset and is deemed one of the most important parameters in the world of derivatives. The ability to predict this statistic may therefore provide a competitive edge to practitioners of market making and asset management alike. Consequently, in this paper we investigate Google News statistics and Wikipedia site traffic as alternative data sources to quantitative market data and consider Logistic Regression, Support Vector Machines and AdaBoost as machine learning models. We show that movements in market implied volatility can indeed be predicted with the help of machine learning techniques. Although the employed alternative data do not appear to enhance predictive accuracy, we reveal preliminary evidence of non-linear relationships between features obtained from Wikipedia page traffic and movements in market implied volatility. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.07947&r=all |
By: | Peter K. Friz; Paul Gassiat; Paolo Pigato |
Abstract: | In [Precise Asymptotics for Robust Stochastic Volatility Models; Ann. Appl. Probab. 2020] we introduce a new methodology to analyze large classes of (classical and rough) stochastic volatility models, with special regard to short-time and small noise formulae for option prices, using the framework [Bayer et al; A regularity structure for rough volatility; Math. Fin. 2020]. We investigate here the fine structure of this expansion in large deviations and moderate deviations regimes, together with consequences for implied volatility. We discuss computational aspects relevant for the practical application of these formulas. We specialize such expansions to prototypical rough volatility examples and discuss numerical evidence. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.08814&r=all |
By: | Baranowski, Rafal; Chen, Yining; Fryzlewicz, Piotr |
Abstract: | We propose a ranking-based variable selection (RBVS) technique that identifies important variables influencing the response in high-dimensional data. RBVS uses subsampling to identify the covariates that appear nonspuriously at the top of a chosen variable ranking. We study the conditions under which such a set is unique, and show that it can be recovered successfully from the data by our procedure. Unlike many existing high-dimensional variable selection techniques, among all relevant variables, RBVS distinguishes between important and unimportant variables, and aims to recover only the important ones. Moreover, RBVS does not require model restrictions on the relationship between the response and the covariates, and, thus, is widely applicable in both parametric and nonparametric contexts. Lastly, we illustrate the good practical performance of the proposed technique by means of a comparative simulation study. The RBVS algorithm is implemented in rbvs, a publicly available R package. |
Keywords: | variable screening; subset selection; bootstrap; stability selection. |
JEL: | C1 |
Date: | 2020–07–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:90233&r=all |
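In miniature, RBVS can be mimicked as follows: over repeated subsamples, rank covariates by a simple association measure and record how often each one appears at the top of the ranking; variables whose top-ranking frequency clearly exceeds the spurious baseline are kept. The marginal-covariance ranking and the fixed frequency cut-off below are simplifications of the procedure implemented in the rbvs package, which supports other rankings and a data-driven choice of the retained set.

```python
import numpy as np

# Stripped-down illustration of the ranking-based variable selection idea.
rng = np.random.default_rng(8)
n, p = 200, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:4] = [3, -2, 2, 1.5]           # only 4 important covariates
y = X @ beta + rng.normal(size=n)

B, m, top_k = 200, n // 2, 10
top_counts = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=m, replace=False)           # subsample of observations
    Xs, ys = X[idx], y[idx] - y[idx].mean()
    # absolute sample cross moment, a proxy for marginal correlation here
    # since all columns share a common scale
    score = np.abs((Xs - Xs.mean(0)).T @ ys)
    top_counts[np.argsort(score)[-top_k:]] += 1

freq = top_counts / B
selected = np.flatnonzero(freq > 0.5)                    # crude frequency cut-off
print("selected covariates:", selected)
```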
By: | Xiao Yang; Weiqing Liu; Dong Zhou; Jiang Bian; Tie-Yan Liu |
Abstract: | Quantitative investment aims to maximize the return and minimize the risk in a sequential trading period over a set of financial instruments. Recently, inspired by the rapid development and great potential of AI technologies in generating remarkable innovation in quantitative investment, there has been increasing adoption of AI-driven workflows for quantitative research and practical investment. While enriching the quantitative investment methodology, AI technologies have raised new challenges for the quantitative investment system. In particular, the new learning paradigms for quantitative investment call for an infrastructure upgrade to accommodate the renovated workflow; moreover, the data-driven nature of AI technologies requires infrastructure with more powerful performance; additionally, there exist some unique challenges in applying AI technologies to different tasks in financial scenarios. To address these challenges and bridge the gap between AI technologies and quantitative investment, we design and develop Qlib, which aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2009.11189&r=all |
By: | Qi Zhao |
Abstract: | This paper presents a deep learning framework based on a Long Short-Term Memory (LSTM) network that predicts the price movement of cryptocurrencies from trade-by-trade data. The main focus of this study is on predicting short-term price changes over a fixed time horizon from a look-back period. By carefully designing features and searching in detail for the best hyper-parameters, the model is trained to achieve high performance on nearly a year of trade-by-trade data. The optimal model delivers stable, high performance (over 60% accuracy) on out-of-sample test periods. In a realistic trading simulation setting, the predictions made by the model could be easily monetized. Moreover, this study shows that the LSTM model can extract universal features from trade-by-trade data, as the learned parameters maintain their high performance on other cryptocurrency instruments that were not included in the training data. This study exceeds existing research in terms of the scale and precision of the data used, as well as the prediction accuracy achieved. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.07404&r=all |
By: | Hannes Mueller; Andre Groger; Jonathan Hersh; Andrea Matranga; Joan Serrat |
Abstract: | Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes it generally scarce, incomplete and potentially biased. This lack of reliable data imposes severe limitations for media reporting, humanitarian relief efforts, human rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep learning techniques combined with data augmentation to expand training samples. We apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. The approach allows generating destruction data with unprecedented scope, resolution, and frequency - only limited by the available satellite imagery - which can alleviate data limitations decisively. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.05970&r=all |
By: | Samuel Mugel; Enrique Lizaso; Roman Orus |
Abstract: | In this paper we briefly review two recent use-cases of quantum optimization algorithms applied to hard problems in finance and economics. Specifically, we discuss the prediction of financial crashes as well as dynamic portfolio optimization. We comment on the different types of quantum strategies to carry out these optimizations, such as those based on quantum annealers, universal gate-based quantum processors, and quantum-inspired Tensor Networks. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.01312&r=all |
By: | Junfeng Hu; Xiaosa Li; Yuru Xu; Shaowu Wu; Bin Zheng |
Abstract: | In this paper, company investment value evaluation models are established based on comprehensive company information. After data mining and extraction of a set of 436 feature parameters, an optimal subset of features is obtained by dimension reduction through tree-based feature selection, followed by 5-fold cross-validation using XGBoost and LightGBM models. The results show that the root mean square error (RMSE) reaches 3.098 and 3.059, respectively. In order to further improve stability and generalization capability, Bayesian Ridge Regression is used to train a stacking model based on the XGBoost and LightGBM models. The corresponding RMSE reaches 3.047. Finally, the importance of different features to the LightGBM model is analysed. |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.01996&r=all |
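The pipeline described above maps naturally onto scikit-learn components, assuming the xgboost and lightgbm packages are available: tree-based feature selection to shrink the 436 raw features, cross-validated XGBoost and LightGBM base learners, and a Bayesian Ridge meta-model stacked on top. The synthetic regression data and all hyper-parameters below are placeholders for the paper's proprietary company features and tuning.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBRegressor

# Synthetic stand-in for the 436-feature company dataset.
X, y = make_regression(n_samples=500, n_features=436, n_informative=30,
                       noise=5.0, random_state=0)

# Stacked ensemble: XGBoost and LightGBM base learners, Bayesian Ridge meta-model.
stack = StackingRegressor(
    estimators=[("xgb", XGBRegressor(n_estimators=200, max_depth=4)),
                ("lgbm", LGBMRegressor(n_estimators=200))],
    final_estimator=BayesianRidge(),
    cv=5,
)
# Tree-based feature selection in front of the stack, as in the abstract.
model = make_pipeline(SelectFromModel(LGBMRegressor(n_estimators=200)), stack)

rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("5-fold RMSE:", rmse.mean())
```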
By: | Poblete-Cazenave, Miguel; Pachauri, Shonali |
Abstract: | Understanding how electricity demand is likely to rise once households gain access to it is important to policy makers and planners alike. Current approaches to estimating the latent demand of unelectrified populations usually assume a constant elasticity of demand. Here we use a simulation-based structural estimation approach, employing micro-data from household surveys for four developing nations, to estimate the responsiveness of electricity demand and appliance ownership to income, considering changes on both the intensive and extensive margins. We find significant heterogeneity in household responses to income changes, which suggests that assuming a non-varying elasticity can result in biased estimates of demand. Our results confirm that neglecting heterogeneity in individual behavior and responses can result in biased demand estimates. |
Keywords: | Energy Access, Household Energy Demand, Appliances Uptake, Simulation-based Econometrics, Scenario Analysis |
JEL: | C53 D12 O13 Q4 |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:103403&r=all |