NEP: New Economics Papers
on Computational Economics
Issue of 2019‒12‒16
33 papers chosen by
By: | Vikranth Lokeshwar; Vikram Bhardawaj; Shashi Jain |
Abstract: | We present here a regress-later-based Monte Carlo approach that uses neural networks for pricing high-dimensional contingent claims. The choice of the specific architecture of the neural networks used in the proposed algorithm provides for interpretability of the model, a feature that is often desirable in the financial context. Specifically, the interpretation leads us to demonstrate that any contingent claim -- possibly high-dimensional and path-dependent -- under the Markovian and the no-arbitrage assumptions, can be semi-statically hedged using a portfolio of short-maturity options. We show how the method can be used to obtain an upper and a lower bound to the true price, where the lower bound is obtained by following a sub-optimal policy and the upper bound by exploiting the dual formulation. Unlike other duality-based upper bounds, where one typically has to resort to nested simulation for constructing super-martingales, the martingales in the current approach come at no extra cost, without the need for any sub-simulations. We demonstrate through numerical examples the simplicity and efficiency of the method for both pricing and semi-static hedging of path-dependent options. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.11362&r=all |
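As a rough illustration of the regression-based Monte Carlo idea in the abstract above, the sketch below prices a Bermudan put via a regress-now (Longstaff-Schwartz-style) continuation-value fit with a small scikit-learn neural network. It is not the authors' interpretable regress-later architecture; the market parameters, payoff, and network size are illustrative assumptions.

```python
# Regression-Monte-Carlo lower bound for a Bermudan put (illustrative parameters).
# This is a regress-now / Longstaff-Schwartz-style sketch with a small neural network
# as the regressor, not the regress-later architecture of the paper above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 20_000
dt = T / n_steps

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)           # Bermudan put payoff
cash_flow = payoff(S[:, -1])                        # value at maturity

# Backward induction: regress the discounted continuation value on the current state.
for t in range(n_steps - 1, 0, -1):
    cash_flow *= np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0                       # regress on in-the-money paths only
    reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    reg.fit(S[itm, t].reshape(-1, 1), cash_flow[itm])
    cont = reg.predict(S[itm, t].reshape(-1, 1))
    exercise = payoff(S[itm, t]) > cont             # sub-optimal policy -> lower bound
    idx = np.where(itm)[0][exercise]
    cash_flow[idx] = payoff(S[idx, t])

price_lower_bound = np.exp(-r * dt) * cash_flow.mean()
print(f"Lower-bound Bermudan put price: {price_lower_bound:.3f}")
```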
By: | Ali Al-Aradi; Adolfo Correia; Danilo de Frietas Naiff; Gabriel Jardim; Yuri Saporito |
Abstract: | We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs whose solution is constrained to be positive and to integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network appropriately normalized to ensure both requirements are satisfied. This then gives rise to a partial integro-differential equation (PIDE) where the integral appearing in the equation is handled using importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique similar in spirit to policy improvement algorithms. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.01455&r=all |
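A minimal sketch of the Deep-Galerkin-style training loop described above, assuming a recent PyTorch: a small network is fitted to the 1D heat equation by minimizing a sampled PDE-residual loss plus initial- and boundary-condition penalties. The toy PDE, network size, and sampling scheme are illustrative stand-ins for the Fokker-Planck and HJB problems treated in the paper.

```python
# Deep-Galerkin-style sketch: fit a neural network to the 1D heat equation u_t = u_xx
# on (0,1)x(0,1) with u(0,x)=sin(pi x), u(t,0)=u(t,1)=0, by minimizing the sampled
# PDE residual plus initial/boundary penalties. Sizes and step counts are illustrative.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def grads(u, x):
    # Gradient of u with respect to the inputs, kept in the graph for further derivatives.
    return torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]

for step in range(2000):
    # Interior collocation points (t, x).
    tx = torch.rand(256, 2, requires_grad=True)
    u = net(tx)
    du = grads(u, tx)                       # columns: [u_t, u_x]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = grads(u_x, tx)[:, 1:]
    residual = u_t - u_xx

    # Initial- and boundary-condition samples.
    x0 = torch.rand(128, 1)
    ic = net(torch.cat([torch.zeros_like(x0), x0], dim=1)) - torch.sin(torch.pi * x0)
    tb = torch.rand(128, 1)
    bc = torch.cat([net(torch.cat([tb, torch.zeros_like(tb)], 1)),
                    net(torch.cat([tb, torch.ones_like(tb)], 1))])

    loss = (residual**2).mean() + (ic**2).mean() + (bc**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", float(loss))
```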
By: | Svitlana Vyetrenko; David Byrd; Nick Petosa; Mahmoud Mahfouz; Danial Dervovic; Manuela Veloso; Tucker Hybinette Balch |
Abstract: | Machine learning (especially reinforcement learning) methods for trading are increasingly reliant on simulation for agent training and testing. Furthermore, simulation is important for validation of hand-coded trading strategies and for testing hypotheses about market structure. A challenge, however, concerns the robustness of policies validated in simulation because the simulations lack fidelity. In fact, researchers have shown that many market simulation approaches fail to reproduce statistics and stylized facts seen in real markets. As a step towards addressing this we surveyed the literature to collect a set of reference metrics and applied them to real market data and simulation output. Our paper provides a comprehensive catalog of these metrics including mathematical formulations where appropriate. Our results show that there are still significant discrepancies between simulated markets and real ones. However, this work serves as a benchmark against which we can measure future improvement. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.04941&r=all |
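A few stylized-fact metrics of the kind catalogued above can be computed directly from a price series; the numpy/pandas sketch below covers an illustrative subset (heavy tails, absence of return autocorrelation, volatility clustering, aggregational Gaussianity), not the paper's full metric set.

```python
# A handful of standard stylized-fact metrics, computed on a daily price series.
# Lags, windows and the synthetic example series are illustrative.
import numpy as np
import pandas as pd

def stylized_facts(prices: pd.Series) -> dict:
    r = np.log(prices).diff().dropna()                   # log returns
    acf = lambda x, k: x.autocorr(lag=k)
    return {
        "excess_kurtosis": float(r.kurtosis()),          # heavy tails (> 0 for fat tails)
        "acf_returns_lag1": float(acf(r, 1)),            # ~0: no linear autocorrelation
        "acf_abs_returns_lag1": float(acf(r.abs(), 1)),  # > 0: volatility clustering
        "acf_sq_returns_lag5": float(acf(r**2, 5)),      # slow decay of squared-return ACF
        "kurtosis_20d_returns": float(
            r.rolling(20).sum().dropna().kurtosis()      # shrinks at longer horizons
        ),
    }

# Example with synthetic prices; replace with real market data or simulator output.
prices = pd.Series(100 * np.exp(np.cumsum(np.random.default_rng(1).normal(0, 0.01, 1000))))
print(stylized_facts(prices))
```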
By: | Tao Chen; Michael Ludkovski |
Abstract: | We investigate the adaptive robust control framework for portfolio optimization and loss-based hedging under drift and volatility uncertainty. Adaptive robust problems offer many advantages but require handling a double optimization problem (infimum over market measures, supremum over the control) at each instance. Moreover, the underlying Bellman equations are intrinsically multi-dimensional. We propose a novel machine learning approach that solves for the local saddle-point at a chosen set of inputs and then uses a nonparametric (Gaussian process) regression to obtain a functional representation of the value function. Our algorithm resembles control randomization and regression Monte Carlo techniques but also brings multiple innovations, including adaptive experimental design, separate surrogates for optimal control and the local worst-case measure, and computational speed-ups for the sup-inf optimization. Thanks to the new scheme we are able to consider settings that have been previously computationally intractable and provide several new financial insights about learning and optimal trading under unknown market parameters. In particular, we demonstrate the financial advantages of adaptive robust framework compared to adaptive and static robust alternatives. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.00244&r=all |
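The surrogate step described above can be sketched with scikit-learn's Gaussian-process regression: solve the local sup-inf problem at a set of design sites (replaced here by a toy function), then fit a GP to obtain a functional representation of the value function with uncertainty estimates. The state variables, kernel, and design are illustrative assumptions.

```python
# Gaussian-process surrogate for a value function, fitted to (state, value) pairs.
# The toy "value" stands in for the expensive local saddle-point solve at each site.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Design sites: e.g. (wealth, current drift estimate), scaled to [0, 1]^2.
X = rng.uniform(size=(60, 2))
# Stand-in for the pointwise sup-inf optimization (the expensive step).
y = np.log1p(X[:, 0]) - 0.5 * (X[:, 1] - 0.5) ** 2 + 0.01 * rng.standard_normal(60)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-4)
gp.fit(X, y)

# Functional representation: predict (with uncertainty) at off-design states.
X_new = rng.uniform(size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```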
By: | Omer Berat Sezer; Mehmet Ugur Gudelek; Ahmet Murat Ozbayoglu |
Abstract: | Financial time series forecasting is, without a doubt, the top choice of computational intelligence for finance researchers in both academia and the financial industry, owing to its broad implementation areas and substantial impact. Machine Learning (ML) researchers have come up with various models, and a vast number of studies have been published accordingly. As such, a significant number of surveys exists covering ML studies of financial time series forecasting. Lately, Deep Learning (DL) models have started appearing within the field, with results that significantly outperform their traditional ML counterparts. Even though there is growing interest in developing models for financial time series forecasting research, there is a lack of review papers focused solely on DL for finance. Hence, our motivation in this paper is to provide a comprehensive literature review of DL studies on financial time series forecasting implementations. We not only categorized the studies according to their intended forecasting implementation areas, such as index, forex, and commodity forecasting, but also grouped them based on their DL model choices, such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), and Long Short-Term Memory (LSTM) networks. We also tried to envision the future of the field by highlighting possible setbacks and opportunities, so that interested researchers can benefit. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.13288&r=all |
By: | Michael Drexl (Deggendorf Institute of Technology, Johannes Gutenberg-University) |
Abstract: | This paper studies an extension of the well-known one-to-one pickup-and-delivery problem with time windows. In the latter problem, requests to transport goods from pickup to delivery locations must be fulfilled by a set of vehicles with limited capacity subject to time window constraints at locations. The goods are not interchangeable; what is picked up at one particular location must be delivered to one particular other location. The extension discussed here consists in the consideration of a heterogeneous vehicle fleet comprising lorries with detachable trailers. Trailers are advantageous as they increase the overall vehicle capacity. However, some locations may be accessible only by a lorry without a trailer. Therefore, special locations are available where trailers can be parked while lorries visit accessibility-constrained locations. This induces a nontrivial tradeoff between an enlarged vehicle capacity and the necessity of scheduling detours for parking and reattaching a trailer. The contribution of the present paper is threefold: (i) It studies a practically relevant generalization of the one-to-one pickup-and-delivery problem with time windows. (ii) It develops an exact amortized constant-time procedure for testing the feasibility of an insertion of a transport task into a given route with regard to time windows and lorry and trailer capacities and embeds this test in an adaptive large neighbourhood search algorithm for the heuristic solution of the problem. (iii) It provides a comprehensive set of benchmark instances on which the running time of the constant-time test is compared with a naïve one that requires linear time. The results of computational experiments show significant speedups of one order of magnitude on average. |
Keywords: | Vehicle routing; Pickup-and-delivery; Trailers; Adaptive large neighbourhood search; Insertion heuristic; Constant-time feasibility test |
Date: | 2018–10–04 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:1816&r=all |
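For contrast with the paper's amortized constant-time test, the sketch below shows the naive linear-time alternative: after tentatively inserting a pickup and its delivery into a route, the whole route is re-scanned against time windows and vehicle capacity. The dict-based data layout and the single capacity (no trailer logic) are illustrative simplifications.

```python
# Naive O(n) insertion feasibility check (the baseline the paper improves upon).
def route_feasible(route, travel, windows, service, load_change, capacity):
    """Linear-time scan; `route` is a list of location ids starting at the depot."""
    time, load = 0.0, 0.0
    for prev, loc in zip(route, route[1:]):
        time = max(time + travel[prev, loc], windows[loc][0])   # wait if arriving early
        if time > windows[loc][1]:
            return False                                        # time window violated
        time += service[loc]
        load += load_change[loc]                                # +q at pickup, -q at delivery
        if load < 0 or load > capacity:
            return False                                        # capacity violated
    return True

def insertion_feasible(route, pickup, delivery, i, j, **data):
    """Tentatively insert `pickup` at position i and `delivery` at position j >= i."""
    trial = route[:i] + [pickup] + route[i:j] + [delivery] + route[j:]
    return route_feasible(trial, **data)

# Example: depot 0, existing route 0 -> 1 -> 2, try inserting pickup 3 / delivery 4.
data = dict(
    travel={(a, b): 1.0 for a in range(5) for b in range(5)},
    windows={k: (0.0, 100.0) for k in range(5)},
    service={k: 0.5 for k in range(5)},
    load_change={0: 0, 1: 2, 2: -2, 3: 1, 4: -1},
    capacity=3,
)
print(insertion_feasible([0, 1, 2], pickup=3, delivery=4, i=1, j=2, **data))
```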
By: | Koichi Miyamoto; Kenji Shiohara |
Abstract: | It is known that quantum computers can speed up Monte Carlo simulation compared to their classical counterparts. There are already some proposals for applying the quantum algorithm to practical problems, including quantitative finance. In many problems in finance to which Monte Carlo simulation is applied, many random numbers are required to obtain one sample value of the integrand, since those problems are extremely high-dimensional integrations, for example, the risk measurement of a credit portfolio. This leads to the situation that the required number of qubits is too large in the naive implementation, where a quantum register is allocated per random number. In this paper, we point out that we can reduce the number of qubits while keeping the quantum speed-up if we perform the calculation in a way similar to the classical one, that is, estimate the average of integrand values sampled by a pseudo-random number generator (PRNG) implemented on a quantum circuit. We present not only an overview of the idea but also a concrete implementation of the PRNG and an application to credit risk measurement. The reduction of qubits comes as a trade-off against an increase in circuit depth. Full reduction might therefore be impractical, but such a trade-off between speed and memory will be important when adjusting the calculation settings to machine specifications, if large-scale Monte Carlo simulation by quantum computers is in operation in the future. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.12469&r=all |
By: | Fall, Cheickh Sadibou; Fofana, Ismaël; Traoré, Fousseini |
Abstract: | In this study, we develop an economy-wide model for Burkina Faso to assess the most promising opportunities for technological innovations to enhance maize production and productivity, and their economy-wide effects. We simulate the implementation of two agricultural technological innovations using a customized Computable General Equilibrium (CGE) model. One innovation is an improvement of farmers’ efficiency, i.e. operating on the production frontier (typology scenario). The other shifts the frontier itself and involves the introduction of a new cultivar (crop scenario). The model has been made agriculture-focused through the following features: separate agriculture and non-agriculture labor markets, separate urban and rural representative household groups (including welfare analysis), and the imperfect integration of land markets, i.e. the land market is split into agroecological zones (AEZs). The CGE model is a single-country, multi-sector, multi-market model and is solved for multiple periods in a recursive manner, ten years in the case of Burkina Faso. The CGE model is calibrated using a 2013 Social Accounting Matrix (SAM). The SAM has several interesting features with regard to agricultural modelling and highlights the focus crops for Burkina Faso, particularly maize, which is the focus crop of this study. The results showed prospects of gains for the economy with the introduction of technological innovations in the maize value chain in Burkina Faso. The welfare analyses performed showed welfare gains for all household profiles studied. In other words, the introduction of innovations in the maize value chain seems to be pro-poor. Finally, the study found that a total increase of about 2% of public expenditure in this sector over 10 years is required to achieve the simulated results. |
Keywords: | Agricultural and Food Policy, Research and Development/Tech Change/Emerging Technologies, Research Methods/ Statistical Methods |
Date: | 2019–12–09 |
URL: | http://d.repec.org/n?u=RePEc:ags:ubzefd:298421&r=all |
By: | Dramsch, Jesper Sören; Corte, Gustavo; Amini, Hamed; Lüthje, Mikael; MacBeth, Colin |
Abstract: | In this work we present a deep neural network inversion of map-based 4D seismic data for pressure and saturation. We present a novel neural network architecture that trains on synthetic data and provides insights into observed field seismic data. The network explicitly includes the AVO gradient calculation within the network as physical knowledge to stabilize the separation of pressure and saturation changes. We apply the method to Schiehallion field data and go on to compare the results to Bayesian inversion results. Despite not using convolutional neural networks for spatial information, we produce maps with a good signal-to-noise ratio and coherency. |
Date: | 2019–02–21 |
URL: | http://d.repec.org/n?u=RePEc:osf:eartha:zytp2&r=all |
By: | Junhao Wang; Yinheng Li; Yijie Cao |
Abstract: | Dynamic Portfolio Management is a domain that concerns the continuous redistribution of assets within a portfolio to maximize the total return over a given period of time. With the recent advances in machine learning and artificial intelligence, much effort has been put into designing and discovering efficient algorithmic ways to manage the portfolio. This paper presents two different reinforcement learning agents, policy gradient actor-critic and evolution strategy. The performance of the two agents is compared during backtesting. We also discuss the problem set-up, from state-space design to the state-value-function approximator and policy-control design. We include short positions to give the agent more flexibility during asset redistribution, and we include a constant trading cost of 0.25%. The agent is able to achieve a 5% return over 10 days of daily trading despite the 0.25% trading cost. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.11880&r=all |
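A minimal backtest loop matching the set-up described above (daily reallocation with shorts allowed and a 0.25% proportional trading cost) might look as follows; the random agent and synthetic returns are placeholders for the paper's policy-gradient and evolution-strategy agents.

```python
# Daily-rebalancing backtest with short positions and a 0.25% proportional trading cost.
# The random "agent" and synthetic returns are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets, cost_rate = 10, 5, 0.0025

returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))   # placeholder asset returns

def random_agent(_observation):
    w = rng.normal(size=n_assets)          # unconstrained scores, shorts allowed
    return w / np.abs(w).sum()             # normalize gross exposure to 1

value, weights = 1.0, np.zeros(n_assets)
for t in range(n_days):
    target = random_agent(returns[:t])
    turnover = np.abs(target - weights).sum()
    value *= 1.0 - cost_rate * turnover    # pay proportional transaction costs
    value *= 1.0 + float(target @ returns[t])
    weights = target

print(f"cumulative return over {n_days} days: {value - 1.0:+.2%}")
```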
By: | Timo Gschwind (Johannes Gutenberg-University); Nicola Bianchessi (Johannes Gutenberg-University); Stefan Irnich (Johannes Gutenberg-University) |
Abstract: | In the commodity-constrained split delivery vehicle routing problem (C-SDVRP), customer demands are composed of sets of different commodities. The C-SDVRP asks for a minimum-distance set of vehicle routes such that all customer demands are met and vehicle capacities are respected. Moreover, whenever a commodity is delivered by a vehicle to a customer, the entire amount requested by this customer must be provided. Different commodities demanded by one customer, however, can be delivered by different vehicles. Thus, the C-SDVRP is a relaxation of the capacitated vehicle routing problem and a restriction of the split delivery vehicle routing problem. For its exact solution, we propose a branch-price-and-cut algorithm that employs and tailors stabilization techniques that have been successfully applied to several cutting and packing problems. More precisely, we make use of (deep) dual-optimal inequalities which are particularly suited to reduce the negative effects caused by the inherent symmetry of C-SDVRP instances. One main issue here is the interaction of branching and cutting decisions and the different classes of dual inequalities. Extensive computational tests on existing and extended benchmark instances show that all stabilized variants of our branch-price-and-cut algorithm are clearly superior to the non-stabilized version. On the existing benchmark, we are significantly faster than the state-of-the-art algorithm and provide several new optima for instances with up to 60 customers and 180 tasks. Lower bounds are reported for all tested instances with up to 80 customers and 480 tasks, improving the bounds for all unsolved instances and providing first lower bounds for several instances. |
Keywords: | routing, vehicle routing, dual-optimal inequalities, column generation, discrete split delivery |
Date: | 2018–10–05 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:1817&r=all |
By: | Hongshan Li; Zhongyi Huang |
Abstract: | This paper considers the valuation of a European call option under the Heston stochastic volatility model. We present the asymptotic solution to the option pricing problem in powers of the volatility of variance. Then we introduce the artificial boundary method for solving the problem on a truncated domain, and derive several artificial boundary conditions (ABCs) on the artificial boundary of the bounded computational domain. A typical finite difference scheme and quadrature rule are used for the numerical solution of the reduced problem. Numerical experiments show that the proposed ABCs are able to improve the accuracy of the results and have a significant advantage over the widely-used boundary conditions by Heston in the original paper (Heston, 1993). |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.00691&r=all |
By: | Christoph Böhringer; Knut Einar Rosendahl; Halvor Briseid Storrøsten (Statistics Norway) |
Abstract: | Policy makers in the EU and elsewhere are concerned that unilateral carbon pricing induces carbon leakage through relocation of emission-intensive and trade-exposed industries to other regions. A common measure to mitigate such leakage is to combine an emission trading system (ETS) with output-based allocation (OBA) of allowances to exposed industries. We first show analytically that in a situation with an ETS combined with OBA, it is optimal to impose a consumption tax on the goods that are entitled to OBA, where the tax is equivalent in value to the OBA-rate. Then, using a multiregion, multi-sector computable general equilibrium (CGE) model calibrated to empirical data, we quantify the welfare gains for the EU to impose such a consumption tax on top of its existing ETS with OBA. We run Monte Carlo simulations to account for uncertain leakage exposure of goods entitled to OBA. The consumption tax increases welfare whether the goods are highly exposed to leakage or not, and can hence be regarded as smart hedging against carbon leakage. |
Keywords: | Carbon leakage; output-based allocation; consumption tax |
JEL: | D61 F18 H23 Q54 |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:920&r=all |
By: | Shaogao Lv; Yongchao Hou; Hongwei Zhou |
Abstract: | Forecasting stock market direction is always a fascinating but challenging problem in finance. Although many popular shallow computational methods (such as backpropagation networks and support vector machines) have been extensively proposed, most algorithms have not yet attained a desirable level of applicability. In this paper, we present a deep learning model with a strong ability to generate high-level feature representations for accurate financial prediction. Specifically, a stacked denoising autoencoder (SDAE) from deep learning is applied to predict the daily CSI 300 index from the Shanghai and Shenzhen stock exchanges in China. We use six evaluation criteria to assess its performance compared with the backpropagation network and the support vector machine. The experiment shows that the financial model built on deep learning technology has a significant advantage for the prediction of the CSI 300 index. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.00712&r=all |
By: | Ludovic Mathys |
Abstract: | The present article provides an efficient and accurate hybrid method to price American standard options in certain jump-diffusion models as well as American barrier-type options under the Black & Scholes framework. Our method generalizes the quadratic approximation scheme of Barone-Adesi & Whaley (1987) and several of its extensions. Using perturbative arguments, we decompose the early exercise pricing problem into sub-problems of different orders and solve these sub-problems successively. The obtained solutions are combined to recover approximations to the original pricing problem of multiple orders, with the 0-th order version matching the general Barone-Adesi & Whaley ansatz. We test the accuracy and efficiency of the approximations via numerical simulations. The results show a clear dominance of higher order approximations over their respective 0-th order version and reveal that significantly more pricing accuracy can be obtained by relying on approximations of the first few orders. Additionally, they suggest that increasing the order of any approximation by one generally refines the pricing precision, although this happens at the expense of greater computational costs. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.00454&r=all |
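When checking approximation orders like those discussed above, a standard Cox-Ross-Rubinstein binomial tree is a convenient reference price for American options. The sketch below is that generic benchmark, not the authors' quadratic-approximation scheme, and its parameters are illustrative.

```python
# Cox-Ross-Rubinstein binomial tree for an American put, as a reference price when
# testing early-exercise approximations. Generic benchmark; parameters are illustrative.
import numpy as np

def american_put_crr(S0, K, r, sigma, T, n=2000):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)            # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs.
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)

    # Backward induction with early exercise at every node.
    for step in range(n, 0, -1):
        S = S[:step] * d                          # asset prices one step earlier
        V = disc * (p * V[:step] + (1 - p) * V[1:step + 1])
        V = np.maximum(V, K - S)                  # exercise if immediate payoff is larger
    return float(V[0])

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))
```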
By: | Schnaubelt, Matthias |
Abstract: | Machine learning is increasingly applied to time series data, as it constitutes an attractive alternative to forecasts based on traditional time series models. For independent and identically distributed observations, cross-validation is the prevalent scheme for estimating out-of-sample performance in both model selection and assessment. For time series data, however, it is unclear whether forward-validation schemes, i.e., schemes that keep the temporal order of observations, should be preferred. In this paper, we perform a comprehensive empirical study of eight common validation schemes. We introduce a study design that perturbs global stationarity by introducing a slow evolution of the underlying data-generating process. Our results demonstrate that, even for relatively small perturbations, commonly used cross-validation schemes often yield estimates with the largest bias and variance, and forward-validation schemes yield better estimates of the out-of-sample error. We provide an interpretation of these results in terms of an additional evolution-induced bias and the sample-size-dependent estimation error. Using a large-scale financial data set, we demonstrate the practical significance in a replication study of a statistical arbitrage problem. We conclude with some general guidelines on the selection of suitable validation schemes for time series data. |
Keywords: | machine learning, model selection, model validation, time series, cross-validation |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:112019&r=all |
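In scikit-learn terms, the two families of schemes compared above can be sketched as ordinary K-fold cross-validation (which ignores temporal order) versus a walk-forward TimeSeriesSplit that only ever trains on the past; the model and synthetic data below are placeholders.

```python
# Cross-validation versus forward (walk-forward) validation for time series data.
# Ridge regression and the synthetic target are placeholders for a real forecasting task.
import numpy as np
from sklearn.model_selection import KFold, TimeSeriesSplit, cross_val_score
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=500)   # placeholder target series

model = Ridge()
cv_scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=False))
fw_scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5))

print("cross-validation R^2 per fold:", np.round(cv_scores, 3))
print("forward-validation R^2 per fold:", np.round(fw_scores, 3))
```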
By: | Bernhard Hientzsch |
Abstract: | In this introductory paper, we discuss how quantitative finance problems under some common risk factor dynamics for some common instruments and approaches can be formulated as time-continuous or time-discrete forward-backward stochastic differential equation (FBSDE) final-value or control problems, how these final-value problems can be turned into control problems, how time-continuous problems can be turned into time-discrete problems, and how the forward and backward stochastic differential equations (SDE) can be time-stepped. We obtain both forward and backward time-stepped time-discrete stochastic control problems (where forward and backward indicate in which direction the Y SDE is time-stepped) that we solve with optimization approaches using deep neural networks for the controls and stochastic gradient and other deep learning methods for the actual optimization/learning. We close with examples of the forward and backward methods for a European option pricing problem. Several methods and approaches are new. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.12231&r=all |
By: | Andrew Schaug; Harish Chandra |
Abstract: | Stochastic bridges are commonly used to impute missing data with a lower sampling rate to generate data with a higher sampling rate, while preserving key properties of the dynamics involved in an unbiased way. While the generation of Brownian bridges and Ornstein-Uhlenbeck bridges is well understood, unbiased generation of such stochastic bridges subject to a given extremum has been less explored in the literature. After a review of known results, we compare two algorithms for generating Brownian bridges constrained to a given extremum, one of which generalises to other diffusions. We further apply this to generate unbiased Ornstein-Uhlenbeck bridges and unconstrained processes, both constrained to a given extremum, along with more tractable numerical approximations of these algorithms. Finally, we consider the case of drift, and applications to geometric Brownian motions. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.10972&r=all |
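The building block that the paper extends is the standard (unconstrained) Brownian bridge; a minimal sampler is sketched below, with the conditioning on a given extremum, the Ornstein-Uhlenbeck case, and drift left to the paper's algorithms.

```python
# Standard Brownian-bridge sampler between fixed endpoints (no extremum constraint).
import numpy as np

def brownian_bridge(x0, xT, T, n_steps, sigma=1.0, rng=None):
    """Sample a Brownian bridge from (0, x0) to (T, xT) on an equidistant grid."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, sigma * np.sqrt(np.diff(t)))
    W = np.concatenate([[0.0], np.cumsum(dW)])           # Brownian motion started at 0
    # Pin the path: B_t = x0 + W_t - (t/T) * (W_T - (xT - x0)).
    return t, x0 + W - (t / T) * (W[-1] - (xT - x0))

t, path = brownian_bridge(x0=0.0, xT=1.0, T=1.0, n_steps=250, rng=np.random.default_rng(2))
print(path[0], path[-1])   # endpoints match x0 and xT exactly
```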
By: | Noacco, Valentina; Sarrazin, Fanny; Pianosi, Francesca; Wagener, Thorsten (University of Bristol) |
Abstract: | Global Sensitivity Analysis (GSA) is a set of statistical techniques to investigate the effects of the uncertainty in the input factors of a mathematical model on the model’s outputs. The value of GSA for the construction, evaluation, and improvement of earth system models is reviewed in a companion paper by Wagener and Pianosi [n.d.]. The present paper focuses on the implementation of GSA and provides a set of workflow scripts to assess the critical choices that GSA users need to make before and while executing GSA. The workflows proposed here can be adopted by GSA users and easily adjusted to a range of GSA methods. We demonstrate how to interpret the outcomes resulting from these different choices and how to revise the choices to improve GSA quality, using a simple rainfall-runoff model as an example. We implement the workflows in the SAFE toolbox, a widely used open-source software package for GSA available in MATLAB and R. The workflows aim to contribute to the dissemination of good practice in GSA applications; they are well documented and reusable, as a way to ensure robust and reproducible computational science. |
Date: | 2019–04–05 |
URL: | http://d.repec.org/n?u=RePEc:osf:eartha:pu83z&r=all |
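The SAFE toolbox itself is in MATLAB and R; as a language-neutral illustration of the kind of GSA workflow discussed above, the sketch below runs a stripped-down one-at-a-time screening (in the spirit of elementary effects) on a toy model standing in for a rainfall-runoff model. Ranges, step size, and sample size are illustrative choices.

```python
# One-at-a-time sensitivity screening on a toy model (elementary-effects flavour).
import numpy as np

def toy_model(x):
    # Stand-in for a model output, e.g. a runoff error metric of three parameters.
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * np.sin(x[2])

bounds = np.array([[0, 1], [0, 1], [0, 1]], dtype=float)
rng = np.random.default_rng(0)
n_base, delta = 200, 0.05

effects = np.zeros((n_base, len(bounds)))
for i in range(n_base):
    x = rng.uniform(bounds[:, 0], bounds[:, 1])
    y0 = toy_model(x)
    for j in range(len(bounds)):
        xp = x.copy()
        xp[j] = min(xp[j] + delta * (bounds[j, 1] - bounds[j, 0]), bounds[j, 1])
        effects[i, j] = abs(toy_model(xp) - y0) / delta

# Mean absolute elementary effect per input factor: a cheap importance ranking.
print("sensitivity per factor:", np.round(effects.mean(axis=0), 3))
```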
By: | Sithanonxay SUVANNAPHAKDY (Laos-Australia Development Learning Facility); Toshihisa TOYODA (Center for Social Systems Innovation and GSICS, Kobe University) |
Abstract: | Trade liberalization entails the transition from trade taxes to domestic taxes. Certain structural characteristics, such as a narrow tax base and a significant proportion of subsistence sectors, however, constrain this transition and hence reduce public revenues in developing countries. This paper contributes to this debate by assessing the impact of trade liberalisation on domestic tax revenue in Laos. We find that Laos has been able to recover the revenue loss from tariff reduction through the introduction of a value-added tax (VAT). VAT generated LAK 5,510 billion, or 30% of tax revenue, in 2017, about twice the ratio of tariff revenue to tax revenue in 2000. Our simulation results for tariff liberalization using a computable general equilibrium (CGE) model also reveal that a further reduction in the tariff rate will be associated with a lower indirect tax rate. In particular, a 20% tariff reduction will increase private consumption by 1.14%, but will decrease the effective indirect tax rate from 6.2% to 5.2% and reduce tax revenue by 11%. The worsening tax revenue loss reflects the non-optimal indirect tax rate, which needs to be reduced by 11%. The key policy implication is that any policy designed to raise tax revenue should aim at improving the tax collection system and broadening the tax base rather than raising the indirect tax rate. |
Keywords: | Trade liberalization, Fiscal impacts, Domestic tax revenue, Laos, CGE model |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:kcs:wpaper:34&r=all |
By: | Gribling, Sander (Tilburg University, School of Economics and Management) |
Abstract: | Optimization is a fundamental area in mathematics and computer science, with many real-world applications. In this thesis we study the efficiency with which we can solve certain optimization problems from two different perspectives. Firstly, we study it from the perspective of matrix factorization ranks, which comes with connections to quantum information theory. Secondly, we study it from the perspective of quantum computing. This thesis is accordingly divided into two parts, where we take these perspectives. In the first part of this thesis we study several matrix factorization ranks and their connections to quantum information theory. In Chapter 5, we first give a unified approach to lower bounding matrix factorization ranks, using polynomial optimization techniques. In Chapter 6, we exploit the connection between one particular factorization rank, the completely positive semidefinite rank, and quantum correlations to provide an explicit family of matrices with a large completely positive semidefinite factorization rank. In Chapter 8, we use the same connection to study quantum versions of classical graph parameters from the perspective of polynomial optimization (albeit in noncommutative variables). In Chapter 7 we propose and study a new measure for the amount of entanglement needed to realize quantum correlations. We can approximate this new measure using our approach to matrix factorization ranks, i.e., through polynomial optimization in noncommutative variables. In the second part of this thesis we turn our attention to the question: Can we solve optimization problems faster on a quantum computer? First, in Chapter 10, we look at the problem of evaluating a Boolean function. We give a new semidefinite programming characterization of the minimum number of quantum queries to the input that are needed to determine the corresponding function value. In Chapter 11 we revisit the semidefinite programming problem that plays a crucial role in the rest of the thesis; we provide a quantum algorithm for solving semidefinite programs. Finally, in Chapter 12, we study the general problem of solving convex optimization problems in the oracle model. We provide both upper and lower bounds on the efficiency of various reductions in the quantum setting. In particular, we show that quantum computers are more efficient than classical computers for the task of answering separation queries when access to the convex problem is given through membership queries. |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiutis:5c681ab9-2344-4a07-b818-f242b33859ea&r=all |
By: | Dramsch, Jesper Sören; Christensen, Anders Nymark; MacBeth, Colin; Lüthje, Mikael |
Abstract: | We present a novel 3D warping technique for the estimation of 4D seismic time-shift. This unsupervised method provides a diffeomorphic 3D time shift field that includes uncertainties, therefore it does not need prior time-shift data to be trained. This results in a widely applicable method in time-lapse seismic data analysis. We explore the generalization of the method to unseen data both in the same geological setting and in a different field, where the generalization error stays constant and within an acceptable range across test cases. We further explore upsampling of the warp field from a smaller network to decrease computational cost and see some deterioration of the warp field quality as a result. |
Date: | 2019–10–31 |
URL: | http://d.repec.org/n?u=RePEc:osf:eartha:82bnj&r=all |
By: | Ehsan Hoseinzade; Saman Haratizadeh; Arash Khoeini |
Abstract: | The performance of financial market prediction systems depends heavily on the quality of the features they use. While researchers have used various techniques for enhancing stock-specific features, less attention has been paid to extracting features that represent the general mechanisms of financial markets. In this paper, we investigate the importance of extracting such general features in the stock market prediction domain and show how they can improve the performance of financial market prediction. We present a framework called U-CNNpred that uses a CNN-based structure. A base model is trained in a specially designed layer-wise training procedure over a pool of historical data from many financial markets, in order to extract the common patterns from different markets. Our experiments, in which we have used hundreds of stocks in the S&P 500 as well as 14 well-known indices around the world, show that this model can outperform baseline algorithms when predicting the directional movement of the markets for which it has been trained. We also show that the base model can be fine-tuned for predicting new markets and achieve better performance compared to state-of-the-art baseline algorithms that construct market-specific models from scratch. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.12540&r=all |
By: | Pierre Boulanger (European Commission – JRC); Emanuele Ferrari (European Commission – JRC); Alfredo Mainar Causape (European Commission – JRC); Martina Sartori (European Commission – JRC); Mohammed Beshir; Kidanemariam Hailu; Solomon Tsehay |
Abstract: | In 2017, the Ministry of Agriculture and Natural Resources of Ethiopia adopted the Rural Job Opportunity Creation Strategy (RJOCS) to address a lack of job opportunities in rural areas, and related effects such as migration to urban areas and poverty. This report assesses the likely effects of six policy options in terms of job opportunity creation and key macroeconomic indicators. It employs a dynamic Computable General Equilibrium (CGE) model developed by the Joint Research Centre (JRC) and tailored to the Ethiopian context. The analysis of the Ethiopian economy, through multipliers based on a specifically developed database, shows that livestock has the greatest employment generation capacity, followed by cash crops, food crops and the agri-food industry. This means that policies focusing on rural and agri-food sectors should have great potential to create job opportunities. All scenarios show the capacity of Ethiopian agriculture and the food industry to generate job opportunities and improve conditions for workers and their families, with particularly positive effects under the scenarios supporting agroparks and developing workers’ skills through education. |
Keywords: | CGE, Ethiopia, jobs, Agricultural policy |
JEL: | C68 |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc117916&r=all |
By: | Ukpe, U.H.; Djomo, C.R.F.; Olayiwola, S.A.; Gama, E.N. |
Keywords: | Agribusiness, Public Economics |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaae19:295741&r=all |
By: | Niccolò Zaccaria |
Abstract: | We based our work mainly on ‘Does Money Illusion Matter?’ by E. Fehr and J. R. Tyran (AER 2001), in which the authors show experimental evidence of the presence of money illusion within subject groups. We build a model which provides a formal and mathematical framework for the experiment design and which is, in principle, able to explain subjects’ behaviour within the experiment. Once we had analysed the dynamic properties of our model, we ran numerical simulations in order to see whether we were able to reproduce the same pattern found by the authors. It turns out that our model is not only able to give a theoretical justification of the results found in the lab, but is also able to replicate the experimental series found by both pairs of authors, up to a certain degree of fitness. We then used a replication of the original experiment by L. Petersen and A. Winn, ‘Does Money Illusion Matter?: Comment’ (AER 2014), to test the robustness of our model. |
Keywords: | Money Illusion, Computational Model, Experiments |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:frz:wpaper:wp2019_25.rdf&r=all |
By: | Mariano Zeron-Medina Laris; Ignacio Ruiz |
Abstract: | In this paper we introduce a new technique based on high-dimensional Chebyshev Tensors that we call the Orthogonal Chebyshev Sliding Technique. We implemented this technique inside the systems of a tier-one bank and used it to approximate Front Office pricing functions in order to reduce the substantial computational burden associated with the capital calculation as specified by FRTB IMA. In all cases, the reductions in computational burden obtained exceeded 90%, while keeping high degrees of accuracy, the latter obtained as a result of the mathematical properties enjoyed by Chebyshev Tensors. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.10948&r=all |
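A one-dimensional illustration of the idea above: build a Chebyshev approximation of an "expensive" pricing function once, then evaluate the cheap proxy inside the risk run. The sketch uses numpy's Chebyshev class and a Black-Scholes call as a stand-in for a Front Office pricer; it does not reproduce the paper's high-dimensional sliding construction.

```python
# Chebyshev proxy for a pricing function in one dimension (illustrative only).
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.stats import norm

def bs_call(S, K=100.0, r=0.01, sigma=0.2, T=1.0):
    # Stand-in for an expensive Front Office pricer.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

lo, hi, degree = 50.0, 150.0, 12
# Chebyshev points mapped to [lo, hi]; call the "expensive" pricer only at these nodes.
nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos((2 * np.arange(degree + 1) + 1)
                                                   * np.pi / (2 * (degree + 1)))
proxy = C.Chebyshev.fit(nodes, bs_call(nodes), deg=degree, domain=[lo, hi])

S_test = np.linspace(lo, hi, 1000)
max_err = np.max(np.abs(proxy(S_test) - bs_call(S_test)))
print(f"max absolute error of degree-{degree} Chebyshev proxy: {max_err:.2e}")
```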
By: | Indranil SenGupta; William Nganje; Erik Hanson |
Abstract: | A commonly used stochastic model for derivative and commodity market analysis is the Barndorff-Nielsen and Shephard (BN-S) model. Though this model is very efficient and analytically tractable, it suffers from the absence of long-range dependence and many other issues. For this paper, the analysis is restricted to crude oil price dynamics. A simple way of improving the BN-S model with the implementation of various machine learning algorithms is proposed. This refined BN-S model is more efficient and has fewer parameters than other models which are used in practice as improvements of the BN-S model. The procedure and the model show the application of data science for extracting a "deterministic component" out of processes that are usually considered to be completely stochastic. Empirical applications validate the efficacy of the proposed model for long-range dependence. |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.13300&r=all |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Zhentao Shi (The Chinese University of Hong Kong) |
Abstract: | The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research (Phillips, 2015) has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L2-boosting in machine learning. The paper develops limit theory to show that the boosted HP (bHP) filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks, thereby covering the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks even without knowledge of the number of such breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the bHP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility. |
Keywords: | Boosting, Cycles, Empirical macroeconomics, Hodrick-Prescott filter, Machine learning, Nonstationary time series, Trends, Unit root processes |
JEL: | C22 C55 E20 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2212&r=all |
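The boosted HP filter lends itself to a compact sketch: the HP trend solves a penalized least-squares problem, and each boosting pass re-applies the same smoother to the current residual. The snippet below implements this with a dense linear solve; the smoothing parameter and fixed iteration count are illustrative, whereas the paper uses a data-driven stopping rule.

```python
# HP filter as tau = (I + lam*D'D)^{-1} y and its boosted (iterated) variant.
import numpy as np

def hp_smoother_matrix(n, lam=1600.0):
    """Return S = (I + lam * D'D)^{-1}, with D the second-difference operator."""
    D = np.diff(np.eye(n), n=2, axis=0)             # (n-2) x n second differences
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def boosted_hp(y, lam=1600.0, iterations=5):
    S = hp_smoother_matrix(len(y), lam)
    trend = S @ y                                   # iteration 1 = ordinary HP trend
    for _ in range(iterations - 1):
        trend = trend + S @ (y - trend)             # L2-boosting on the HP residual
    return trend

rng = np.random.default_rng(0)
y = np.cumsum(0.1 + rng.standard_normal(200))        # unit-root series with drift
cycle_hp = y - boosted_hp(y, iterations=1)
cycle_bhp = y - boosted_hp(y, iterations=10)
print("residual variance, HP vs boosted HP:",
      round(float(np.var(cycle_hp)), 3), round(float(np.var(cycle_bhp)), 3))
```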
By: | Christophe Hurlin (LEO - Laboratoire d'Économie d'Orleans - CNRS - Centre National de la Recherche Scientifique - Université de Tours - UO - Université d'Orléans); Christophe Pérignon (GREGH - Groupement de Recherche et d'Etudes en Gestion à HEC - HEC Paris - Ecole des Hautes Etudes Commerciales - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | In this article, we discuss the contribution of Machine Learning techniques and new data sources (New Data) to credit-risk modelling. Credit scoring was historically one of the first fields of application of Machine Learning techniques. Today, these techniques make it possible to exploit new sources of data made available by the digitalization of customer relationships and social networks. The combination of the emergence of new methodologies and new data has structurally changed the credit industry and favored the emergence of new players. First, we analyse the incremental contribution of Machine Learning techniques per se. We show that they lead to significant productivity gains but that the forecasting improvement remains modest. Second, we quantify the contribution of this "data diversity", whether or not these new data are exploited through Machine Learning. It appears that some of these data contain weak signals that significantly improve the quality of the assessment of borrowers' creditworthiness. At the microeconomic level, these new approaches promote financial inclusion and access to credit for the most vulnerable borrowers. However, Machine Learning applied to these data can also lead to severe biases and discrimination. |
Keywords: | Machine Learning (ML), Credit scoring, New data |
Date: | 2019–11–21 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02377886&r=all |
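The kind of comparison discussed above can be sketched in miniature with scikit-learn: a logistic-regression scorecard versus a gradient-boosting model on the same borrower features, scored by AUC. The synthetic data below stands in for credit files enriched with new, digital data sources.

```python
# Logistic-regression scorecard versus gradient boosting on synthetic credit data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)   # ~10% default rate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

scorecard = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
boosting = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("logistic scorecard", scorecard), ("gradient boosting", boosting)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```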
By: | Ludovic Mathys |
Abstract: | The present article provides a novel theoretical way to evaluate tradeability in markets of ordinary exponential Lévy type. We consider non-tradeability as a particular type of market illiquidity and investigate its impact on the price of the assets. Starting from an adaptation of the continuous-time optional asset replacement problem initiated by McDonald and Siegel (1986), we derive tradeability premiums and subsequently characterize them in terms of free-boundary problems. This provides a simple way to compute non-tradeability values, e.g. by means of standard numerical techniques, and, in particular, to express the price of a non-tradeable asset as a percentage of the price of a tradeable equivalent. Our approach is illustrated via numerical examples where we discuss various properties of the tradeability premiums. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.00469&r=all |
By: | Bart Thijs |
Abstract: | Science mapping using document networks is based on the assumption that scientific papers are indivisible units with unique links to neighbouring documents. Research on proximity in co-citation analysis and the study of lexical properties of sections and citation contexts indicate that this assumption is questionable. Moreover, the meaning of words and co-words depends on the context in which they appear. This study proposes the use of a neural network architecture for word and paragraph embeddings (Doc2Vec) for the measurement of similarity among those smaller units of analysis. It is shown that paragraphs in the ‘Introduction’ and the ‘Discussion’ sections are more similar to the abstract, and that the similarity among paragraphs is related, though not linearly, to the distance between the paragraphs. The ‘Methodology’ section is least similar to the other sections. Abstracts of citing-cited documents are more similar than random pairs, and the context in which a reference appears is most similar to the abstract of the cited document. This novel approach with higher granularity can be used for bibliometrics-aided retrieval and to assist in measuring interdisciplinarity through the application of network-based centrality measures. |
Date: | 2019–02–11 |
URL: | http://d.repec.org/n?u=RePEc:ete:ecoomp:633963&r=all |
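A minimal Doc2Vec sketch of the embedding step described above, assuming gensim 4's API: train paragraph vectors on tokenized paragraphs, then compare an abstract with other sections by cosine similarity. The tiny corpus and all hyperparameters are illustrative.

```python
# Paragraph embeddings with gensim Doc2Vec and tag-to-tag cosine similarity.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

paragraphs = {
    "paper1_abstract": "We study citation contexts with paragraph embeddings ...",
    "paper1_intro_p1": "Science mapping usually treats documents as indivisible units ...",
    "paper1_methods_p1": "We train a distributed-memory model on tokenized paragraphs ...",
}
corpus = [TaggedDocument(words=simple_preprocess(text), tags=[tag])
          for tag, text in paragraphs.items()]

model = Doc2Vec(vector_size=100, min_count=1, epochs=50, dm=1, window=5, seed=1)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Similarity between the abstract and other paragraphs of the same paper.
for tag in ["paper1_intro_p1", "paper1_methods_p1"]:
    print(tag, round(model.dv.similarity("paper1_abstract", tag), 3))
```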
By: | Guzman, Jorge; Li, Aishen |
Abstract: | We propose an approach to measure strategy using text-based machine learning. The key insight is that distance in the statements made by companies can be partially indicative of their strategic positioning with respect to each other. We formalize this insight by proposing a new measure of strategic positioning, the strategy score, and defining the assumptions and conditions under which we can estimate it empirically. We then implement this approach to score the strategic positioning of a large sample of startups in Crunchbase in relation to contemporaneous public companies. Startups with a higher founding strategy score have higher equity outcomes, reside in locations with more venture capital, and receive a higher amount of financing in seed financing events. One implication of this result is that founding strategic positioning is important for startup performance. |
Date: | 2019–11–22 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:7cvge&r=all |
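A simplified stand-in for the text-distance idea above: embed company statements with TF-IDF and use cosine similarity to public-company descriptions as a crude positioning measure. The firms, statements, and weighting scheme below are all illustrative, not the authors' estimator.

```python
# TF-IDF embeddings and cosine similarity between startup and public-company statements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

public_statements = [
    "enterprise cloud infrastructure and data analytics for large customers",
    "consumer social network driven by advertising revenue",
]
startup_descriptions = [
    "open-source data pipeline tooling for analytics teams",
    "photo sharing app for college students",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(public_statements + startup_descriptions)
sim = cosine_similarity(X[len(public_statements):], X[:len(public_statements)])

# Row i: similarity of startup i to each public company; larger distance to all
# incumbents would indicate a more differentiated strategic position.
print(sim.round(2))
```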